

22 — Maximum Likelihood and Clustering

Algorithm 22.2. The soft K-means algorithm, version 2.

Assignment step. The responsibilities are

\[
r_k^{(n)} = \frac{\pi_k \, \dfrac{1}{\left(\sqrt{2\pi}\,\sigma_k\right)^{I}} \exp\!\left(-\dfrac{1}{\sigma_k^{2}}\, d\!\left(\mathbf{m}^{(k)}, \mathbf{x}^{(n)}\right)\right)}
{\sum_{k'} \pi_{k'} \, \dfrac{1}{\left(\sqrt{2\pi}\,\sigma_{k'}\right)^{I}} \exp\!\left(-\dfrac{1}{\sigma_{k'}^{2}}\, d\!\left(\mathbf{m}^{(k')}, \mathbf{x}^{(n)}\right)\right)}
\tag{22.22}
\]

where \(I\) is the dimensionality of \(\mathbf{x}\).
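As a concrete illustration, here is a minimal NumPy sketch of the assignment step of equation (22.22). It assumes \(d(\mathbf{m}, \mathbf{x})\) is half the squared Euclidean distance, so that \(\exp(-d/\sigma_k^2)\) is the usual spherical Gaussian; the function name and array layout are illustrative rather than anything from the book, and the responsibilities are computed in the log domain to avoid underflow.

```python
import numpy as np

def assignment_step(X, means, sigmas, pis):
    """Responsibilities r_k^(n) of equation (22.22) (illustrative helper).

    X      : (N, I) data points x^(n)
    means  : (K, I) cluster means m^(k)
    sigmas : (K,)   cluster widths sigma_k
    pis    : (K,)   mixing weights pi_k
    Returns an (N, K) array of responsibilities.
    """
    N, I = X.shape
    # d(m^(k), x^(n)) taken as half the squared Euclidean distance (assumption)
    d = 0.5 * ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)   # (N, K)
    # log of the unnormalised responsibility in equation (22.22)
    log_r = np.log(pis) - I * np.log(np.sqrt(2 * np.pi) * sigmas) - d / sigmas ** 2
    log_r -= log_r.max(axis=1, keepdims=True)      # stabilise before exponentiating
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)        # normalise over k
```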

Update step. Each cluster’s parameters, \(\mathbf{m}^{(k)}\), \(\pi_k\), and \(\sigma_k^2\), are adjusted to match the data points that it is responsible for:

\[
\mathbf{m}^{(k)} = \frac{\sum_n r_k^{(n)} \mathbf{x}^{(n)}}{R^{(k)}}
\tag{22.23}
\]

\[
\sigma_k^{2} = \frac{\sum_n r_k^{(n)} \left(\mathbf{x}^{(n)} - \mathbf{m}^{(k)}\right)^{2}}{I \, R^{(k)}}
\tag{22.24}
\]

\[
\pi_k = \frac{R^{(k)}}{\sum_k R^{(k)}}
\tag{22.25}
\]

where \(R^{(k)}\) is the total responsibility of mean \(k\),

\[
R^{(k)} = \sum_n r_k^{(n)} .
\tag{22.26}
\]
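A matching sketch of the update step, using the same illustrative array layout; each line follows one of equations (22.23)–(22.26) directly. Alternating assignment_step and update_step from a random initialisation produces trajectories of the kind shown in figure 22.3.

```python
import numpy as np

def update_step(X, R):
    """Update m^(k), sigma_k and pi_k from the responsibilities (eqs. 22.23-22.26)."""
    N, I = X.shape                               # R has shape (N, K)
    Rk = R.sum(axis=0)                           # total responsibility R^(k), eq. (22.26)
    means = (R.T @ X) / Rk[:, None]              # eq. (22.23)
    sq_dist = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)   # (N, K)
    sigmas = np.sqrt((R * sq_dist).sum(axis=0) / (I * Rk))             # eq. (22.24)
    pis = Rk / Rk.sum()                          # eq. (22.25)
    return means, sigmas, pis
```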

Figure 22.3. Soft K-means algorithm, with K = 2, applied (a) to the 40-point data set of figure 20.3 (snapshots at t = 0, 1, 2, 3, 9); (b) to the little ’n’ large data set of figure 20.5 (snapshots at t = 0, 1, 10, 20, 30, 35).

Algorithm 22.4. The soft K-means algorithm, version 3, which corresponds to a model of axis-aligned Gaussians.

\[
r_k^{(n)} = \frac{\pi_k \prod_{i=1}^{I} \dfrac{1}{\sqrt{2\pi}\,\sigma_i^{(k)}} \exp\!\left(-\dfrac{\left(m_i^{(k)} - x_i^{(n)}\right)^{2}}{2\left(\sigma_i^{(k)}\right)^{2}}\right)}
{\sum_{k'} \left(\text{numerator, with } k' \text{ in place of } k\right)}
\tag{22.27}
\]

\[
\sigma_i^{2\,(k)} = \frac{\sum_n r_k^{(n)} \left(x_i^{(n)} - m_i^{(k)}\right)^{2}}{R^{(k)}}
\tag{22.28}
\]
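A corresponding sketch of the version 3 rules, again with illustrative names: relative to version 2, the only changes are a separate width \(\sigma_i^{(k)}\) for each dimension in the assignment step of equation (22.27) and the per-dimension variance update of equation (22.28), so sigmas now has shape (K, I).

```python
import numpy as np

def assignment_step_v3(X, means, sigmas, pis):
    """Responsibilities of equation (22.27) for axis-aligned Gaussians."""
    # log of the unnormalised responsibility: sum of per-dimension Gaussian log-densities
    log_r = (np.log(pis)
             - np.log(np.sqrt(2 * np.pi) * sigmas).sum(axis=1)
             - 0.5 * (((X[:, None, :] - means[None, :, :]) / sigmas[None, :, :]) ** 2).sum(axis=2))
    log_r -= log_r.max(axis=1, keepdims=True)    # stabilise before exponentiating
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)      # normalise over k

def variance_update_v3(X, R, means):
    """Per-dimension variances (sigma_i^(k))^2 of equation (22.28); returns (K, I)."""
    Rk = R.sum(axis=0)                                    # R^(k)
    sq_dev = (X[:, None, :] - means[None, :, :]) ** 2     # (N, K, I)
    return (R[:, :, None] * sq_dev).sum(axis=0) / Rk[:, None]
```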

