Artificial Intelligence and Soft Computing: Behavioral ... - Arteimi.info

Appendix B

Derivation of the Back-propagation Algorithm

This appendix presents the derivation of the back-propagation algorithm, covered in chapter 14. We derive it for two cases: with and without nonlinearity in the output-layer nodes.

B.1 Derivation of the Back-propagation Algorithm

The back-propagation algorithm is based on the principle of gradient descent learning. In gradient descent learning, the weight w_p,q,k connecting neuron p in layer (k − 1) to neuron q in the k-th (output) layer is hereafter denoted by W_pq for simplicity. Let E be one half of the squared Euclidean norm of the error vector produced at the output layer for a given training pattern. Formally,

E = (1/2) ∑_{∀r} (t_r − Out_r)²
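As a concrete sketch of gradient descent on E, consider the linear-output case (no nonlinearity), where Out_q = ∑_p W_pq · Out_p. Then ∂E/∂W_pq = −(t_q − Out_q) · Out_p, and the gradient descent rule updates each weight by ΔW_pq = η (t_q − Out_q) Out_p. The snippet below illustrates this with arbitrary small sizes (3 neurons in layer k − 1, 2 output neurons); the variable names and the learning rate η = 0.1 are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative sizes (assumed): 3 neurons in layer (k-1), 2 output neurons.
rng = np.random.default_rng(0)
out_prev = rng.random(3)           # Out_p: outputs of layer (k-1)
W = rng.random((3, 2))             # W_pq: weight from neuron p to output q
t = np.array([1.0, 0.0])           # target pattern t_r

# Linear (no nonlinearity) output layer: Out_q = sum_p W_pq * Out_p
out = out_prev @ W

# Error measure from the appendix: E = (1/2) * sum_r (t_r - Out_r)^2
E = 0.5 * np.sum((t - out) ** 2)

# Gradient of E: dE/dW_pq = -(t_q - Out_q) * Out_p
grad = -np.outer(out_prev, t - out)

# Gradient descent step with an assumed learning rate eta = 0.1
eta = 0.1
W_new = W - eta * grad

# For a small enough eta, the step reduces E on this training pattern.
E_new = 0.5 * np.sum((t - out_prev @ W_new) ** 2)
print(E_new < E)
```

A quick sanity check of the analytic gradient is to compare it against a finite-difference estimate of ∂E/∂W_pq; the two agree to numerical precision, which is exactly what the derivation in this section establishes in closed form.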
