Foundations of Data Science

Figure 6.9: Autoencoder technique used to train one level at a time. In Figure 6.9 (a), train $W_1$ and $W_2$. Then in Figure 6.9 (b), freeze $W_1$ and train $W_2$ and $W_3$. In this way one trains one set of weights at a time.
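To make the staging concrete, here is a minimal NumPy sketch of greedy layer-by-layer autoencoder training. It is not the book's code: for brevity it uses linear units and squared reconstruction error, and the data, dimensions, learning rate, and helper names are all assumptions. The structure is the point: first train $W_1$ and $W_2$ as an autoencoder, then freeze $W_1$ and train the next pair of weights on the frozen layer's output.

```python
import numpy as np

def train_autoencoder(X, hidden_dim, lr=0.01, epochs=500, seed=0):
    """Train a one-hidden-layer (linear) autoencoder to reconstruct X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, hidden_dim))   # encoder weights
    W_dec = rng.normal(scale=0.1, size=(hidden_dim, d))   # decoder weights
    for _ in range(epochs):
        H = X @ W_enc                        # hidden representation
        err = H @ W_dec - X                  # reconstruction error
        grad_dec = H.T @ err / n             # gradient of mean squared error
        grad_enc = X.T @ (err @ W_dec.T) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, W_dec

X = np.random.default_rng(1).normal(size=(500, 64))   # placeholder data

# Stage (a): train W1 and W2 so that the autoencoder reconstructs the input.
W1, W2 = train_autoencoder(X, hidden_dim=32)

# Stage (b): freeze W1; its hidden representation becomes the input on which
# the next pair of weights (W2 and W3 in Figure 6.9 (b)) is trained.
H1 = X @ W1
W2, W3 = train_autoencoder(H1, hidden_dim=16)
```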

The output layer of a deep network typically uses a softmax procedure. Softmax is a generalization of logistic regression where, given a set of vectors $\{x_1, x_2, \ldots, x_n\}$ with labels $l_1, l_2, \ldots, l_n$, $l_i \in \{0, 1\}$, and a weight vector $w$, we define the probability that the label $l$ given $x$ equals 0 or 1 by
$$\mathrm{Prob}(l = 1 | x) = \frac{1}{1 + e^{-w^T x}} = \sigma(w^T x)$$
and
$$\mathrm{Prob}(l = 0 | x) = 1 - \mathrm{Prob}(l = 1 | x),$$
where $\sigma$ is the sigmoid function.
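Translated directly into code, these two probabilities look like the short sketch below (the helper names and example numbers are mine, not the book's).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prob_label(w, x):
    """Return (Prob(l = 1 | x), Prob(l = 0 | x)) under the logistic model."""
    p1 = sigmoid(w @ x)          # sigma(w^T x)
    return p1, 1.0 - p1

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.0, 0.5])
p1, p0 = prob_label(w, x)        # w^T x = 1.5, so p1 = sigma(1.5) ≈ 0.82
```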

Define a cost function
$$J(w) = -\sum_i \Bigl( l_i \log\bigl(\mathrm{Prob}(l = 1 | x_i)\bigr) + (1 - l_i) \log\bigl(1 - \mathrm{Prob}(l = 1 | x_i)\bigr) \Bigr)$$
and compute $w$ to minimize $J(w)$. Then
$$J(w) = -\sum_i \Bigl( l_i \log\bigl(\sigma(w^T x_i)\bigr) + (1 - l_i) \log\bigl(1 - \sigma(w^T x_i)\bigr) \Bigr).$$
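For concreteness, this cross-entropy cost can be evaluated directly; the sketch below assumes the examples are stacked as rows of a matrix X and the labels form a 0/1 vector (the names and data layout are my own, not the book's).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, X, labels):
    """J(w) = -sum_i [ l_i log sigma(w^T x_i) + (1 - l_i) log(1 - sigma(w^T x_i)) ]."""
    p = sigmoid(X @ w)                           # sigma(w^T x_i), one entry per example
    return -np.sum(labels * np.log(p) + (1 - labels) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                    # 100 examples, 5 features
labels = (rng.random(100) < 0.5).astype(float)   # random 0/1 labels
print(cost(np.zeros(5), X, labels))              # = 100 * log 2 ≈ 69.3 for w = 0
```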

Since
$$\frac{\partial\,\sigma(w^T x)}{\partial w_j} = \sigma(w^T x)\bigl(1 - \sigma(w^T x)\bigr) x_j,$$
it follows that
$$\frac{\partial \log\bigl(\sigma(w^T x)\bigr)}{\partial w_j} = \frac{\sigma(w^T x)\bigl(1 - \sigma(w^T x)\bigr) x_j}{\sigma(w^T x)},$$

