However, experience with the EM algorithm suggests that it makes large steps at first, and then takes longer to converge once it is in the vicinity of a solution. For classification purposes, we do not really need extremely accurate estimates of the parameters of each component (especially in light of the inherent uncertainty in the data due to discretization, as discussed above). An adaptive threshold can be found by considering the contribution of each pixel to the loglikelihood. For instance, if the change in loglikelihood (from iteration i to iteration i + 1) for each pixel was less than 0.00001, then the overall change in loglikelihood would be less than 0.00001N, where N is the number of pixels. Of course, some pixels might have a larger or smaller change than 0.00001. My XV implementation uses this approach, so the convergence criterion for EM is a change in the loglikelihood of less than 0.00001N. If 0.00001N is larger than 1, then 1 is used as the criterion instead. My Splus implementation runs much more slowly than XV, so I simply allow a user-definable limit on the number of iterations for each execution of the EM algorithm. Inspection of output reveals that 20 iterations is usually sufficient for the parameter estimates to stabilize, so this is the value that I typically use when running the algorithm in Splus.

Final Marginal Segmentation

Once we have used EM to estimate the mixture model with K components, all that remains is to classify each pixel into one of the K segments. In section 3.4, I discussed two different methods for performing this classification: mixture classification and componentwise classification. In either method, we consider each pixel (or each unique data value) in turn, and classify it into the segment for which it has the highest likelihood (using the parameter estimates from the last iteration of EM). The difference between the two methods is that in mixture classification the mixture proportions are included in the likelihood, while in componentwise classification they are not.
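To make the adaptive stopping rule and the two classification rules concrete, the following is a minimal sketch in Python/numpy, not the XV or Splus implementation described above. It assumes univariate Gaussian components for the pixel values (an assumption of the sketch, not restated in this excerpt), and the function names (em_mixture, mixture_classify, componentwise_classify) are hypothetical. The EM loop stops once the change in loglikelihood falls below min(1, 0.00001N).

    import numpy as np

    def normal_pdf(x, mu, sigma):
        # Univariate normal density, evaluated elementwise.
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    def em_mixture(x, K, max_iter=200, tol_per_pixel=1e-5):
        # Fit a K-component univariate Gaussian mixture to pixel values x by EM.
        # Stops when the change in loglikelihood is below min(1, tol_per_pixel * N),
        # mirroring the adaptive threshold described in the text.
        N = x.size
        # Crude initialization: split the sorted data into K equal groups.
        groups = np.array_split(np.sort(x), K)
        pi = np.full(K, 1.0 / K)
        mu = np.array([g.mean() for g in groups])
        sigma = np.array([max(g.std(), 1e-3) for g in groups])

        threshold = min(1.0, tol_per_pixel * N)
        old_ll = -np.inf
        for _ in range(max_iter):
            # E-step: posterior probability of each component for each pixel.
            dens = np.stack([pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(K)])
            total = dens.sum(axis=0)
            ll = np.log(total).sum()
            resp = dens / total

            # M-step: update proportions, means, and standard deviations.
            nk = resp.sum(axis=1)
            pi = nk / N
            mu = (resp * x).sum(axis=1) / nk
            sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
            sigma = np.maximum(sigma, 1e-3)   # guard against degenerate components

            if ll - old_ll < threshold:       # adaptive convergence criterion
                break
            old_ll = ll
        return pi, mu, sigma

    def mixture_classify(x, pi, mu, sigma):
        # Assign each pixel to the component maximizing pi_k * f_k(x).
        dens = np.stack([pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(len(pi))])
        return dens.argmax(axis=0)

    def componentwise_classify(x, mu, sigma):
        # Assign each pixel to the component maximizing f_k(x), ignoring pi_k.
        dens = np.stack([normal_pdf(x, mu[k], sigma[k]) for k in range(len(mu))])
        return dens.argmax(axis=0)

    # Example usage: labels = mixture_classify(x, *em_mixture(x, K=3))

In this sketch the only difference between the two classifiers is whether the estimated proportions multiply the component densities before the argmax, which is exactly the distinction drawn between mixture and componentwise classification above.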
