Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981
You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

304                                              22 — Maximum Likelihood and Clustering

Algorithm 22.2. The soft K-means algorithm, version 2.

Assignment step. The responsibilities are

    r_k^{(n)} = \frac{\pi_k \frac{1}{(\sqrt{2\pi}\,\sigma_k)^I} \exp\!\left( -\frac{1}{\sigma_k^2}\, d(\mathbf{m}^{(k)}, \mathbf{x}^{(n)}) \right)}
                     {\sum_{k'} \pi_{k'} \frac{1}{(\sqrt{2\pi}\,\sigma_{k'})^I} \exp\!\left( -\frac{1}{\sigma_{k'}^2}\, d(\mathbf{m}^{(k')}, \mathbf{x}^{(n)}) \right)}     (22.22)

where I is the dimensionality of x.

Update step. Each cluster's parameters, m^{(k)}, \pi_k, and \sigma_k^2, are adjusted to match the data points that it is responsible for.

    \mathbf{m}^{(k)} = \frac{\sum_n r_k^{(n)} \mathbf{x}^{(n)}}{R^{(k)}}     (22.23)

    \sigma_k^2 = \frac{\sum_n r_k^{(n)} \left( \mathbf{x}^{(n)} - \mathbf{m}^{(k)} \right)^2}{I \, R^{(k)}}     (22.24)

    \pi_k = \frac{R^{(k)}}{\sum_k R^{(k)}}     (22.25)

where R^{(k)} is the total responsibility of mean k,

    R^{(k)} = \sum_n r_k^{(n)}.     (22.26)

[Figure 22.3. Soft K-means algorithm, with K = 2, applied (a) to the 40-point data set of figure 20.3 (panels t = 0, 1, 2, 3, 9); (b) to the little 'n' large data set of figure 20.5 (panels t = 0, 1, 10, 20, 30, 35). Figure panels not reproduced.]

Algorithm 22.4. The soft K-means algorithm, version 3, which corresponds to a model of axis-aligned Gaussians.

    r_k^{(n)} = \frac{\pi_k \prod_{i=1}^{I} \frac{1}{\sqrt{2\pi}\,\sigma_i^{(k)}} \exp\!\left( -\sum_{i=1}^{I} \frac{\left( m_i^{(k)} - x_i^{(n)} \right)^2}{2 \left( \sigma_i^{(k)} \right)^2} \right)}
                     {\sum_{k'} \left( \text{numerator, with } k' \text{ in place of } k \right)}     (22.27)

    \left( \sigma_i^{(k)} \right)^2 = \frac{\sum_n r_k^{(n)} \left( x_i^{(n)} - m_i^{(k)} \right)^2}{R^{(k)}}     (22.28)
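The two algorithms above differ only in the variance model: version 2 keeps one variance σ_k² per cluster, while version 3 keeps a separate variance σ_i^(k)² per cluster per axis. Both can be sketched in a single NumPy function. This is a minimal illustration, not the book's code: the function name, arguments, and initialisation scheme are this sketch's own, and it assumes d(m, x) denotes half the squared Euclidean distance, so that the exponent in (22.22) matches the Gaussian density written out explicitly in (22.27).

```python
import numpy as np

def soft_kmeans(X, K, n_iters=50, axis_aligned=False, init_means=None, seed=0):
    """Sketch of soft K-means, version 2 (and, with axis_aligned=True, version 3).

    X           : (N, I) data array.
    axis_aligned: False -> one sigma_k^2 per cluster (Algorithm 22.2);
                  True  -> one (sigma_i^(k))^2 per cluster per axis (Algorithm 22.4).
    Returns (means, variances, weights, responsibilities).
    """
    rng = np.random.default_rng(seed)
    N, I = X.shape
    means = (X[rng.choice(N, K, replace=False)] if init_means is None
             else np.asarray(init_means, dtype=float)).copy()
    var = np.ones((K, I)) if axis_aligned else np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iters):
        # Assignment step (22.22 / 22.27), done in log space for stability.
        diff2 = (X[:, None, :] - means[None, :, :]) ** 2          # (N, K, I)
        if axis_aligned:
            # log of pi_k * prod_i (2*pi*(sigma_i^(k))^2)^(-1/2)
            #              * exp(-sum_i diff_i^2 / (2*(sigma_i^(k))^2))
            log_r = (np.log(pi)
                     - 0.5 * np.log(2 * np.pi * var).sum(axis=1)
                     - 0.5 * (diff2 / var[None, :, :]).sum(axis=2))
        else:
            # log of pi_k * (sqrt(2*pi)*sigma_k)^(-I) * exp(-d/sigma_k^2),
            # with d(m, x) taken as half the squared Euclidean distance.
            log_r = (np.log(pi)
                     - 0.5 * I * np.log(2 * np.pi * var)
                     - 0.5 * diff2.sum(axis=2) / var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)                          # r_k^(n)
        # Update step.
        R = r.sum(axis=0)                                          # R^(k), (22.26)
        means = (r.T @ X) / R[:, None]                             # (22.23)
        diff2 = (X[:, None, :] - means[None, :, :]) ** 2
        if axis_aligned:
            var = (r[:, :, None] * diff2).sum(axis=0) / R[:, None]      # (22.28)
        else:
            var = (r * diff2.sum(axis=2)).sum(axis=0) / (I * R)         # (22.24)
        pi = R / R.sum()                                           # (22.25)
    return means, var, pi, r
```

On two well-separated synthetic clusters, both variants recover the cluster centres; version 3's per-axis variances additionally adapt to elongated clusters like those in the little 'n' large data set, which is the point of the axis-aligned model.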
