AI - a Guide to Intelligent Systems.pdf - Member of EEPIS


MULTILAYER NEURAL NETWORKS

$$\delta_k(p) = y_k(p)\,[1 - y_k(p)]\, e_k(p), \qquad (6.14)$$

where

$$y_k(p) = \frac{1}{1 + \exp[-X_k(p)]}.$$
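As a minimal sketch of Eq. (6.14), the snippet below computes the error gradient for a single output neuron. The net input `X_k` and desired output `y_d` are hypothetical toy values, not taken from the text:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical values for one output neuron k at iteration p
X_k = 0.8                         # net weighted input X_k(p)
y_d = 1.0                         # desired output y_{d,k}(p)

y_k = sigmoid(X_k)                # actual output y_k(p)
e_k = y_d - y_k                   # output error e_k(p)
delta_k = y_k * (1.0 - y_k) * e_k # error gradient, Eq. (6.14)
```

Note that the factor `y_k * (1 - y_k)` is the derivative of the sigmoid, which is why this gradient form is specific to sigmoid output neurons.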

How can we determine the weight correction for a neuron in the hidden layer?

To calculate the weight correction for the hidden layer, we can apply the same equation as for the output layer:

$$\Delta w_{ij}(p) = \alpha \cdot x_i(p) \cdot \delta_j(p), \qquad (6.15)$$

where $\delta_j(p)$ represents the error gradient at neuron $j$ in the hidden layer:

$$\delta_j(p) = y_j(p)\,[1 - y_j(p)] \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p),$$

where $l$ is the number of neurons in the output layer;

$$y_j(p) = \frac{1}{1 + e^{-X_j(p)}}, \qquad X_j(p) = \sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j,$$

and $n$ is the number of neurons in the input layer.
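The hidden-layer computation above can be sketched numerically. All dimensions, weights, and output-layer gradients below are assumed toy values (n = 3 inputs, one hidden neuron $j$, l = 2 output neurons, learning rate `alpha`), chosen only to illustrate the shape of the calculation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions (assumptions): n = 3 inputs, one hidden neuron j, l = 2 output neurons
rng = np.random.default_rng(0)
x       = rng.uniform(-1, 1, 3)    # inputs x_i(p)
w_ij    = rng.uniform(-1, 1, 3)    # input-to-hidden weights w_ij(p)
theta_j = 0.1                      # threshold theta_j
w_jk    = rng.uniform(-1, 1, 2)    # hidden-to-output weights w_jk(p)
delta_k = np.array([0.05, -0.02])  # output-layer gradients delta_k(p), assumed given

X_j = x @ w_ij - theta_j           # net input X_j(p)
y_j = sigmoid(X_j)                 # hidden output y_j(p)

# Error gradient at hidden neuron j: output gradients propagated back
# through the hidden-to-output weights, scaled by the sigmoid derivative
delta_j = y_j * (1.0 - y_j) * np.sum(delta_k * w_jk)

alpha = 0.1                        # learning rate (assumed)
dw_ij = alpha * x * delta_j        # weight corrections, Eq. (6.15)
```

Since the sigmoid derivative $y(1-y)$ never exceeds 0.25, the hidden-layer gradient is always a damped version of the back-propagated sum.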

Now we can derive the back-propagation training algorithm.<br />

Step 1: Initialisation

Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range (Haykin, 1999):

$$\left( -\frac{2.4}{F_i},\; +\frac{2.4}{F_i} \right),$$

where $F_i$ is the total number of inputs of neuron $i$ in the network. The weight initialisation is done on a neuron-by-neuron basis.
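A brief sketch of this initialisation rule, assuming a small toy network (4 inputs, 3 hidden neurons, 2 output neurons; the layer sizes are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(42)

def init_weights(fan_in, fan_out):
    """Draw weights uniformly from (-2.4/F_i, +2.4/F_i), where F_i = fan_in."""
    r = 2.4 / fan_in
    return rng.uniform(-r, r, size=(fan_in, fan_out))

# Example layer sizes (assumed): 4 inputs, 3 hidden neurons, 2 output neurons
W_hidden = init_weights(4, 3)   # each hidden neuron has F_i = 4 inputs
W_output = init_weights(3, 2)   # each output neuron has F_i = 3 inputs
```

Tying the range to $F_i$ keeps each neuron's initial net input small, so the sigmoid starts near its steep middle region rather than in a saturated flat region.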

Step 2: Activation

Activate the back-propagation neural network by applying inputs $x_1(p), x_2(p), \ldots, x_n(p)$ and desired outputs $y_{d,1}(p), y_{d,2}(p), \ldots, y_{d,n}(p)$.

(a) Calculate the actual outputs of the neurons in the hidden layer:

$$y_j(p) = \mathrm{sigmoid}\!\left[ \sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j \right],$$

where $n$ is the number of inputs of neuron $j$ in the hidden layer, and sigmoid is the sigmoid activation function.
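Step 2(a) can be computed for a whole hidden layer at once. The inputs, weight matrix, and thresholds below are assumed toy values (n = 3 inputs, 2 hidden neurons), shown only to illustrate the vectorised form:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed toy network: n = 3 inputs, 2 hidden neurons
x     = np.array([0.5, -0.2, 0.8])       # inputs x_i(p)
W     = np.array([[ 0.1, -0.3],
                  [ 0.4,  0.2],
                  [-0.5,  0.6]])         # weights w_ij(p), shape (n, hidden)
theta = np.array([0.05, -0.1])           # thresholds theta_j

# Step 2(a): actual hidden-layer outputs y_j(p)
y_hidden = sigmoid(x @ W - theta)
```

Each column of `W` holds the incoming weights of one hidden neuron, so the matrix product evaluates every neuron's weighted sum in a single step.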
