AI - a Guide to Intelligent Systems.pdf - Member of EEPIS

ARTIFICIAL NEURAL NETWORKS

(b) Calculate the actual outputs of the neurons in the output layer:

$$y_k(p) = \mathrm{sigmoid}\!\left[\,\sum_{j=1}^{m} x_{jk}(p)\,w_{jk}(p) - \theta_k\right]$$

where $m$ is the number of inputs of neuron $k$ in the output layer.
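As a minimal sketch of this computation (the function and variable names here are mine, not the book's), assuming NumPy:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation: 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, theta):
    """y_k(p) = sigmoid( sum_{j=1}^{m} x_jk(p) * w_jk(p) - theta_k )."""
    return sigmoid(np.dot(x, w) - theta)

# Two inputs with weights 0.5 and -0.3 and threshold 0.2:
# the net input is 1.0*0.5 + 1.0*(-0.3) - 0.2 = 0, so y_k = 0.5
y_k = neuron_output(np.array([1.0, 1.0]), np.array([0.5, -0.3]), 0.2)
```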

Step 3: Weight training

Update the weights in the back-propagation network, propagating backward the errors associated with the output neurons.

(a) Calculate the error gradient for the neurons in the output layer:

$$\delta_k(p) = y_k(p)\,[1 - y_k(p)]\,e_k(p)$$

where

$$e_k(p) = y_{d,k}(p) - y_k(p)$$

Calculate the weight corrections:

$$\Delta w_{jk}(p) = \alpha\,y_j(p)\,\delta_k(p)$$

Update the weights at the output neurons:

$$w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$$
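The three formulas in (a) chain together as below; this is an illustrative sketch (the names `output_gradient` and `weight_correction` and the sample numbers are mine), with `alpha` standing for the learning rate $\alpha$:

```python
def output_gradient(y_k, y_dk):
    """delta_k(p) = y_k(p) * [1 - y_k(p)] * e_k(p), with e_k(p) = y_d,k(p) - y_k(p)."""
    e_k = y_dk - y_k
    return y_k * (1.0 - y_k) * e_k

def weight_correction(alpha, y_j, delta_k):
    """Delta w_jk(p) = alpha * y_j(p) * delta_k(p)."""
    return alpha * y_j * delta_k

# One update w_jk(p+1) = w_jk(p) + Delta w_jk(p) for sample values:
delta_k = output_gradient(y_k=0.5, y_dk=1.0)   # 0.5 * 0.5 * 0.5 = 0.125
w_jk = -1.2 + weight_correction(alpha=0.1, y_j=0.8, delta_k=delta_k)
```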

(b) Calculate the error gradient for the neurons in the hidden layer:

$$\delta_j(p) = y_j(p)\,[1 - y_j(p)]\,\sum_{k=1}^{l} \delta_k(p)\,w_{jk}(p)$$

Calculate the weight corrections:

$$\Delta w_{ij}(p) = \alpha\,x_i(p)\,\delta_j(p)$$

Update the weights at the hidden neurons:

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$
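A sketch of the hidden-layer gradient, again with illustrative names: `deltas` and `w_j` hold $\delta_k(p)$ and $w_{jk}(p)$ for the $l$ output neurons fed by hidden neuron $j$:

```python
import numpy as np

def hidden_gradient(y_j, deltas, w_j):
    """delta_j(p) = y_j(p) * [1 - y_j(p)] * sum_{k=1}^{l} delta_k(p) * w_jk(p)."""
    return y_j * (1.0 - y_j) * float(np.dot(deltas, w_j))

# Hidden neuron with y_j = 0.5 feeding two output neurons:
# 0.5 * 0.5 * (0.125*0.4 + (-0.2)*0.1) = 0.25 * 0.03 = 0.0075
delta_j = hidden_gradient(0.5, np.array([0.125, -0.2]), np.array([0.4, 0.1]))
```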

Step 4: Iteration

Increase iteration $p$ by one, go back to Step 2 and repeat the process until the selected error criterion is satisfied.
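Steps 2 and 3 combine into one per-pattern pass, which the iteration of Step 4 repeats over the training set. The sketch below is my own rendering under two assumptions: a fully connected sigmoid network, and each threshold treated as a weight on a fixed input of $-1$ (so $\Delta\theta = -\alpha\,\delta$); all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_pattern(x, y_d, w_h, th_h, w_o, th_o, alpha=0.5):
    """One pass of Steps 2-3 for a single pattern; updates the arrays in place.
    w_h/th_h: input->hidden weights and thresholds; w_o/th_o: hidden->output."""
    # Step 2: forward pass (activation)
    y_h = sigmoid(x @ w_h - th_h)                  # hidden outputs y_j
    y_o = sigmoid(y_h @ w_o - th_o)                # actual outputs y_k
    # Step 3(a): output-layer gradient
    e = y_d - y_o                                  # e_k = y_d,k - y_k
    delta_o = y_o * (1.0 - y_o) * e                # delta_k
    # Step 3(b): hidden-layer gradient, computed before w_o is modified
    delta_h = y_h * (1.0 - y_h) * (w_o @ delta_o)  # delta_j
    # Weight corrections and updates (thresholds see a fixed input of -1)
    w_o += alpha * np.outer(y_h, delta_o); th_o -= alpha * delta_o
    w_h += alpha * np.outer(x, delta_h);   th_h -= alpha * delta_h
    return e
```

Calling this once for every training pattern constitutes one epoch of the Step 4 loop.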

As an example, we may consider the three-layer back-propagation network shown in Figure 6.10. Suppose that the network is required to perform the logical operation Exclusive-OR. Recall that a single-layer perceptron could not do this operation. Now we will apply the three-layer net.

Neurons 1 and 2 in the input layer accept inputs $x_1$ and $x_2$, respectively, and redistribute these inputs to the neurons in the hidden layer without any processing:

$$x_{13} = x_{14} = x_1 \quad \text{and} \quad x_{23} = x_{24} = x_2.$$
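Under the same assumptions as before, the whole algorithm can be exercised end to end on the Exclusive-OR data with a 2-2-1 network of the Figure 6.10 shape. The initial weight range, the learning rate of 0.5 and the epoch count below are my own illustrative choices, not values from the book:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set: columns are x1, x2; desired outputs y_d
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(1)
w_h = rng.uniform(-0.5, 0.5, (2, 2))   # inputs 1, 2 -> hidden neurons 3, 4
th_h = rng.uniform(-0.5, 0.5, 2)
w_o = rng.uniform(-0.5, 0.5, (2, 1))   # hidden neurons 3, 4 -> output neuron 5
th_o = rng.uniform(-0.5, 0.5, 1)
alpha = 0.5

def forward(x):
    y_h = sigmoid(x @ w_h - th_h)      # x_13 = x_14 = x_1, x_23 = x_24 = x_2
    return y_h, sigmoid(y_h @ w_o - th_o)

def sum_squared_error():
    return float(sum(np.sum((y_d - forward(x)[1]) ** 2) for x, y_d in zip(X, Y)))

sse_before = sum_squared_error()
for epoch in range(2000):
    for x, y_d in zip(X, Y):
        y_h, y_o = forward(x)
        delta_o = y_o * (1.0 - y_o) * (y_d - y_o)
        delta_h = y_h * (1.0 - y_h) * (w_o @ delta_o)
        w_o += alpha * np.outer(y_h, delta_o); th_o -= alpha * delta_o
        w_h += alpha * np.outer(x, delta_h);   th_h -= alpha * delta_h
sse_after = sum_squared_error()
```

With most random initialisations the sum-squared error falls well below its starting value; if training stalls near a local minimum, re-initialising the weights and retraining is the usual remedy.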
