View - Statistics - University of Washington

where K is the number of components (this is the same as equation 3.15 of section 3.2).

    f(Y_i \mid K, \theta) = \sum_{j=1}^{K} P_j \Phi(Y_i \mid \theta_j)    (5.70)

Under the assumption that all pixels are independent, the likelihood for the whole image is given by equation 5.71, where N is the number of data points.

    f(Y \mid K, \theta) = \prod_{i=1}^{N} \left( \sum_{j=1}^{K} P_j \Phi(Y_i \mid \theta_j) \right)    (5.71)

M-Step

The M-step consists of finding the maximum likelihood estimate of \theta conditional on Q. Think of Q as a matrix with one row for each pixel and one column for each component. The mixture proportions are easily obtainable from Q as shown in equation 5.72.

    \hat{P}_j = \frac{1}{N} \sum_{i=1}^{N} \hat{Q}_{ij}    (5.72)

Since Q_{ij} is the probability of pixel i being in component j, each row of Q must sum to 1, and the sum of all the elements of Q is equal to N. This agrees with the fact that \sum_j P_j = 1.

We now find estimates for the density parameters. For a Gaussian mixture, we need estimates of the mean \mu_j and variance \sigma_j^2 for each component j; for a Poisson mixture, only the mean is needed. Formulas for the mean and variance estimates are given in equations 5.73 and 5.74. In essence, these are simply weighted versions of the usual maximum likelihood estimators, with the weights given by Q.

    \hat{\mu}_j = \frac{\sum_{i=1}^{N} \hat{Q}_{ij} Y_i}{\sum_i \hat{Q}_{ij}}    (5.73)
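The M-step updates above can be sketched in code. This is a minimal illustration, not the thesis's implementation: the function name `m_step` and the array names `Y` (one intensity per pixel) and `Q` (the N-by-K responsibility matrix from the E-step) are assumptions introduced here. It computes the weighted estimates of equations 5.72 and 5.73, plus the analogous weighted variance, for a Gaussian mixture.

```python
import numpy as np

def m_step(Y, Q):
    """Weighted ML estimates for a Gaussian mixture.

    Y: shape (N,), one observation per pixel (illustrative names).
    Q: shape (N, K), Q[i, j] = probability that pixel i belongs
       to component j; each row sums to 1.
    """
    N = Q.shape[0]
    col_sums = Q.sum(axis=0)                  # sum_i Q_ij, one per component
    P = col_sums / N                          # eq. 5.72: mixture proportions
    mu = (Q * Y[:, None]).sum(axis=0) / col_sums        # eq. 5.73: weighted means
    var = (Q * (Y[:, None] - mu) ** 2).sum(axis=0) / col_sums  # weighted variances
    return P, mu, var
```

Because each row of Q sums to 1, the column sums of Q add up to N, so the estimated proportions automatically satisfy \sum_j \hat{P}_j = 1.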
