
Chapter 2 Introduction to Neural network


Assume that we have a sequence of input vectors and a corresponding sequence of target (desired) scalars

$$(\bar{x}_1, t_1), (\bar{x}_2, t_2), \cdots, (\bar{x}_N, t_N)$$

We wish to find the weights of a neuron with a non-linear function $f(\cdot)$ so that we can minimize the squared difference between the output $y_n$ and the target $t_n$, i.e.

$$\min E_n = \min \frac{1}{2}(t_n - y_n)^2 = \min \frac{1}{2} e_n^2, \qquad n = 1, 2, \cdots, N$$

We will use the steepest descent approach

$$\bar{w}^{(n+1)} = \bar{w}^{(n)} - \alpha \nabla_w E$$

We need to find $\nabla_w E$!

$$\nabla_w E_n = \nabla_w \underbrace{\frac{1}{2}\Big(\underbrace{t_n - \underbrace{f(\underbrace{\bar{w}^T \bar{x}_n}_{u})}_{f(u)}}_{h(f)}\Big)^2}_{g(h)}$$

The chain rule gives $\nabla_w g(h(f(u))) = \nabla_h g \, \nabla_f h \, \nabla_u f \, \nabla_w u$, where

$$\nabla_h g = h = e_n$$
$$\nabla_f h = \nabla_f (t_n - f) = -1$$
$$\nabla_u f = \text{depends on the nonlinearity } f(\cdot) \text{ we choose}$$
$$\nabla_w u = \bar{x}_n$$
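Multiplying the four factors together gives $\nabla_w E_n = -e_n \, \nabla_u f \, \bar{x}_n$, and this can be checked against a finite-difference approximation. A minimal sketch, assuming $f = \tanh$ as an illustrative nonlinearity and arbitrary example values for $\bar{w}$, $\bar{x}_n$, $t_n$:

```python
import numpy as np

# Numerical check of the chain-rule gradient of E_n = 1/2 (t_n - f(w^T x_n))^2.
# f = tanh is an illustrative choice; w, x_n, t_n are arbitrary example values.
f = np.tanh
df = lambda u: 1.0 - np.tanh(u) ** 2     # derivative of tanh

w = np.array([0.3, -0.7])
x_n = np.array([1.0, 2.0])
t_n = 0.5

def E(w):
    return 0.5 * (t_n - f(w @ x_n)) ** 2

u = w @ x_n
e_n = t_n - f(u)
analytic = -e_n * df(u) * x_n            # product of the four chain-rule factors

# Central finite differences along each coordinate direction
eps = 1e-6
numeric = np.array([(E(w + eps * d) - E(w - eps * d)) / (2 * eps)
                    for d in np.eye(2)])
```

The two gradients agree to numerical precision, confirming the chain-rule derivation.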

$$\Rightarrow \bar{w}^{(n+1)} = \bar{w}^{(n)} + \alpha e_n \nabla_u f \, \bar{x}_n$$

Since $f(\cdot)$ is a function $f: \mathbb{R} \to \mathbb{R}$, i.e. one-dimensional, we can write $\nabla_u f = \frac{df}{du}$.

$\Rightarrow$ The neuron learning rule for a general function $f(\cdot)$ is

$$\bar{w}^{(n+1)} = \bar{w}^{(n)} + \alpha e_n \frac{df}{du} \bar{x}_n$$

where $u = \bar{w}^T \bar{x}$ and $\alpha$ is the stepsize.
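As an illustration, the learning rule can be run in a small training loop. This sketch assumes $f(u) = \tanh(u)$, so that $\frac{df}{du} = 1 - f(u)^2$; the data set, step size, and epoch count are made-up choices for the example:

```python
import numpy as np

def train_neuron(X, t, alpha, epochs):
    """Train a single nonlinear neuron with the rule
    w <- w + alpha * e_n * (df/du) * x_n, using f(u) = tanh(u)
    (an illustrative choice of nonlinearity)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_n, t_n in zip(X, t):
            u = w @ x_n                # u = w^T x_n
            y = np.tanh(u)             # y_n = f(u)
            e = t_n - y                # e_n = t_n - y_n
            df_du = 1.0 - y ** 2       # df/du for tanh
            w = w + alpha * e * df_du * x_n
    return w

# Toy data: targets generated by a known weight vector, so the
# trained weights should recover it.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w_true = np.array([0.5, -0.3])
t = np.tanh(X @ w_true)

w = train_neuron(X, t, alpha=0.5, epochs=500)
```

Because the targets here are realizable by the neuron, the learned weights converge close to `w_true`.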

Note (OBS!): If $f(u) = u$, i.e. a linear function with slope 1 so that $\frac{df}{du} = 1$, the above algorithm becomes the LMS algorithm for a linear neuron.
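With $f(u) = u$ the factor $\frac{df}{du}$ drops out, and one update step is just the LMS rule. A sketch (the data and step size are made up for illustration):

```python
import numpy as np

def lms_step(w, x_n, t_n, alpha):
    """One LMS update: with f(u) = u we have df/du = 1, so the general
    neuron rule reduces to w <- w + alpha * (t_n - w^T x_n) * x_n."""
    e = t_n - w @ x_n
    return w + alpha * e * x_n

# Toy linear data: targets are 2*x1 - 1*x2, so LMS should find [2, -1].
samples = [(np.array([1.0, 0.0]), 2.0),
           (np.array([0.0, 1.0]), -1.0)]

w = np.zeros(2)
for _ in range(200):
    for x_n, t_n in samples:
        w = lms_step(w, x_n, t_n, alpha=0.2)
```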

