
between two units in proportion to the product of their simultaneous activation. More formally, the Hebbian learning rule can be stated as follows (see left side of Figure C.5):

Δw_ij = η o_i o_j    (5)

where the output of neuron j is multiplied by the output of neuron i and by a constant of proportionality η (the learning rate) to produce the amount by which to change the weight between them. Presynaptic and postsynaptic signals are used to modify the weights, without benefit of a teacher (i.e., this is unsupervised learning). The larger η is, the larger the changes in the weights. The advantage of this rule is that the information needed to change a weight is locally available. A disadvantage of such a simple learning rule can be seen in a network that is trying to associate one pattern with another. Two patterns are presented and the weights are modified so that, if only one pattern is presented, the network can produce the second one correctly. Using Hebbian learning, this can happen only when all patterns are completely uncorrelated.
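
As a concrete illustration, here is a minimal sketch of Equation (5) in Python, assuming NumPy; the names eta, a, b, and W are illustrative choices, not from the text:

```python
import numpy as np

eta = 0.1                       # learning rate (the eta in Equation 5)

# Two patterns to associate: presenting `a` should reproduce `b`.
a = np.array([1.0, 0.0, 1.0])   # presynaptic outputs o_j
b = np.array([0.0, 1.0])        # postsynaptic outputs o_i

W = np.zeros((b.size, a.size))  # weights w_ij from unit j to unit i

# Hebbian rule: delta w_ij = eta * o_i * o_j, i.e. an outer product.
W += eta * np.outer(b, a)

# Recall: present `a` alone; the output is proportional to `b`
# (cleanly so only when the stored patterns are uncorrelated).
print(W @ a)
```

Note that the update uses only quantities local to the two units, which is the advantage mentioned above; nothing outside the synapse's own pre- and postsynaptic activity enters the computation.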


Figure C.5 Two different learning rules for updating the synaptic weights. On the left, Hebbian learning is shown, where weights are adjusted in proportion to the product of two neurons' simultaneous activation. On the right, the delta rule is shown, where a teacher is needed to compute the error between the desired output and actual output.
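
For contrast with the right side of Figure C.5, a hedged sketch of the delta rule under the same illustrative assumptions: a teacher supplies the desired output, and each weight changes in proportion to the error times the input.

```python
import numpy as np

eta = 0.1
x = np.array([1.0, 0.0, 1.0])      # input pattern (presynaptic outputs)
t = np.array([0.0, 1.0])           # desired output supplied by the teacher

W = np.zeros((t.size, x.size))

for _ in range(50):
    y = W @ x                      # actual output
    error = t - y                  # teacher-computed error
    W += eta * np.outer(error, x)  # delta rule: dw_ij = eta * (t_i - y_i) * x_j

print(W @ x)                       # converges toward t
```

Unlike the Hebbian update, this is supervised: the error term requires knowing the desired output, so the rule can associate correlated patterns that pure Hebbian learning cannot.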
