
Drawbacks of the back-propagation algorithm

The back-propagation algorithm suffers from two major drawbacks, namely network paralysis and trapping at local minima. These issues are briefly outlined below.

Network paralysis: As the network receives training, the weights may be adjusted to large values. This can force all or most of the neurons to operate at large Outs, i.e., in a region where F'(Net) → 0. Since the error sent back for training is proportional to F'(Net), the training process comes to a virtual standstill. One way to ease the problem is to reduce the learning rate η, which, however, increases the training time.
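The saturation effect behind network paralysis can be seen numerically. The sketch below (an illustration, not from the text) uses the logistic activation, for which F'(Net) = F(Net)(1 − F(Net)), and shows how the derivative, and hence the back-propagated error, collapses toward zero as |Net| grows:

```python
import math

def sigmoid(net):
    """Logistic activation F(Net) = 1 / (1 + e^(-Net))."""
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_derivative(net):
    """F'(Net) = F(Net) * (1 - F(Net)); the error sent back during
    training is proportional to this factor."""
    out = sigmoid(net)
    return out * (1.0 - out)

# As large weights drive |Net| up, F'(Net) -> 0 and the weight
# updates, proportional to F'(Net), virtually stop.
for net in [0.0, 2.0, 5.0, 10.0]:
    print(f"Net = {net:5.1f}   F'(Net) = {sigmoid_derivative(net):.6f}")
```

At Net = 0 the derivative is at its maximum of 0.25; by Net = 10 it has fallen below 10⁻⁴, so the corresponding weight corrections are negligible and training stalls.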

[Fig. 14.7: The computation of δp at layer j. Neuron p in layer j feeds neurons q1, q2, q3 in layer k through weights w_{p,q,k}; the deltas δ_{q,k} of layer k are propagated back and combined as δ_{p,j} = Out_{p,j} (1 − Out_{p,j}) Σ_q δ_{q,k} w_{p,q,k}.]
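The delta computation depicted in Fig. 14.7 can be sketched as follows. Names and sample values are illustrative, not from the text:

```python
def hidden_delta(out_p_j, deltas_k, weights_p_to_k):
    """delta_{p,j} = Out_{p,j} * (1 - Out_{p,j}) * sum_q delta_{q,k} * w_{p,q,k}

    out_p_j:        output of neuron p in layer j
    deltas_k:       deltas of the layer-k neurons that p feeds
    weights_p_to_k: weights w_{p,q,k} on the connections from p to those neurons
    """
    # Weighted sum of the deltas propagated back from layer k.
    back_propagated = sum(d * w for d, w in zip(deltas_k, weights_p_to_k))
    # Scale by the derivative of the logistic activation, Out * (1 - Out).
    return out_p_j * (1.0 - out_p_j) * back_propagated

# Example: neuron p (Out = 0.8) feeds three layer-k neurons q1, q2, q3.
delta_p_j = hidden_delta(0.8,
                         deltas_k=[0.1, -0.05, 0.2],
                         weights_p_to_k=[0.5, 0.3, -0.4])
```

Note that the factor Out_{p,j}(1 − Out_{p,j}) is exactly the F'(Net) term whose vanishing causes network paralysis: when the neuron saturates, δ_{p,j} shrinks regardless of the error arriving from layer k.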
