View - Statistics - University of Washington

not be generally applicable, though often one can set the tuning parameter to a value which is reasonable for a large class of images. Also, it is usually easier to choose a tuning parameter than to choose K directly; the tuning parameter may let K vary rather than forcing a single value of K. Below are a few recent papers which address the automatic choice of K.

Dingle and Morrison (1996) use local empirical density functions to characterize each segment. They begin with K = 1, and then create "outlier" regions by choosing an arbitrary threshold on total variation to determine when two density functions are different. Outlier regions become segments (thereby increasing K) when their size is larger than another arbitrarily chosen threshold.

Chen and Kundu (1993) use a hidden Markov model (HMM) approach to texture segmentation. They define a distance between HMMs called the discrimination information (DI). A split-and-merge procedure is used in which an HMM is fit to each region, and regions are merged if their corresponding HMMs have a DI below a certain threshold. This threshold is chosen by a convoluted ad hoc procedure which depends on three arbitrarily chosen parameters.

Johnson (1994) defines a Gibbs distribution on region identifiers in order to allow inference. An arbitrary parameter in the potential function for the Gibbs distribution is used to penalize results with many segments.

Given some choice for K, there are several estimation methods which can be used to fit a mixture model to the data. The EM algorithm (Dempster et al., 1977) can be used to estimate parameters in a Gaussian or Poisson model (Hathaway, 1986). Many variations of this approach have been developed: CEM (Celeux and Govaert, 1992), SEM (Masson and Pieczynski, 1993), and NEM (Ambroise et al., 1996). CEM is an adaptation of EM for hard classification; the other two methods take some account of spatial information. A similar but nonparametric segmentation method was developed by Letts (1978) and extended to the case of
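To make the EM approach concrete, the following is a minimal sketch of EM for a one-dimensional Gaussian mixture with K fixed in advance. The function name, the synthetic data, and all parameter choices are illustrative assumptions, not taken from any of the papers cited above; the E-step computes posterior responsibilities and the M-step performs the weighted maximum-likelihood updates.

```python
import numpy as np

def em_gaussian_mixture(x, K, n_iter=200, seed=0):
    """Fit a K-component 1-D Gaussian mixture to data x via EM.

    Returns mixing weights pi, means mu, and standard deviations sigma.
    (Illustrative sketch; no convergence check or underflow guard.)
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize: equal weights, means at random data points, pooled std.
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)
    sigma = np.full(K, x.std())
    for _ in range(n_iter):
        # E-step: posterior responsibility of component k for point i.
        dens = (pi / (np.sqrt(2 * np.pi) * sigma)
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Example: two well-separated clusters with true means 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
pi, mu, sigma = em_gaussian_mixture(x, K=2)
```

CEM, by contrast, would replace the soft responsibilities in the E-step with a hard assignment of each point to its most probable component before updating the parameters.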
