$$P(X_i = m \mid Y_i) \propto P(Y_i \mid X_i = m)\, P(X_i = m) \tag{3.23}$$

As shown in equation 3.15, we have modeled the distribution of $Y_i$, and we can estimate the prior $P(X_i = m)$ by the mixture proportion $P_m$. Substituting into equation 3.22,

$$C_i = \operatorname{argmax}_m \, P_m \, \Phi(Y_i \mid \theta_m) \tag{3.24}$$

In other words, pixel $i$ is assigned to the component for which it has the highest likelihood, where the likelihood includes the mixture proportion $P_m$. This is intuitively reasonable, since we are using the mixture density given by equation 3.15.

End of Proof.

Componentwise Classification

Although maximizing the number of correctly classified pixels is both reasonable and consistent with our presumed mixture model, a componentwise approach to this problem may be more useful in some circumstances. Consider a case in which we have a large background and only a few small features of interest. The mixture proportions will then be dominated by the component describing the background, giving inordinate weight to the classification of pixels as background. As the proportion of pixels in the background increases, classification by the mixture likelihood may lead to classifying all pixels as background, even when $\Phi(Y_i \mid \theta_j)$ for the feature component is several orders of magnitude larger than for the background component. In such cases, we would want to use componentwise classification, so that the feature components receive a more reasonable share of influence on the classification.
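To make the contrast concrete, the sketch below compares the two rules on a hypothetical one-dimensional, two-component Gaussian mixture with a dominant background. All parameter values are illustrative and the helper names are our own, not from the text: `classify_map` implements equation 3.24, while `classify_componentwise` drops the proportions $P_m$, which is the componentwise rule motivated above.

```python
# A minimal sketch of the two classification rules, assuming a one-dimensional
# Gaussian mixture; the parameters and pixel values below are illustrative only.
import numpy as np
from scipy.stats import norm

# Hypothetical two-component mixture: a dominant background (component 0)
# and a small feature of interest (component 1).
P = np.array([0.99, 0.01])     # mixture proportions P_m
mu = np.array([0.0, 3.0])      # component means
sigma = np.array([1.0, 0.5])   # component standard deviations

def classify_map(y):
    """Assign each pixel to argmax_m P_m * Phi(y | theta_m) (equation 3.24)."""
    lik = P[None, :] * norm.pdf(y[:, None], loc=mu, scale=sigma)
    return np.argmax(lik, axis=1)

def classify_componentwise(y):
    """Drop the proportions P_m so small components keep their influence."""
    lik = norm.pdf(y[:, None], loc=mu, scale=sigma)
    return np.argmax(lik, axis=1)

y = np.array([0.1, 2.0, 3.2])
print(classify_map(y))            # [0 0 1]: proportions pull y = 2.0 to background
print(classify_componentwise(y))  # [0 1 1]: componentwise rule recovers the feature
```

With these illustrative numbers, the pixel at $y = 2.0$ has roughly twice the density under the feature component as under the background, yet the 99:1 proportions pull the weighted rule of equation 3.24 to the background label; the componentwise rule assigns it to the feature.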
