
Appendix C - Computational Neural Modelling

In order to use gradient descent, a continuous, nonlinear activation function f is needed, since its derivative is used and because hidden units are part of the network topology. The disadvantage of this technique is that a local minimum may be reached which is not a true minimum. There is also no indication of how many repetitions of presenting the input/output pairs are needed in order to reduce the error to some small value close to zero.
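To make the procedure concrete, a minimal sketch in Python/NumPy follows (the code is not from the original text; the XOR task, layer sizes, learning rate, and iteration count are illustrative choices). Note how the derivative of the activation function f enters every weight update, that the loop simply repeats presentations of the input/output pairs until the error is small, and that a poor starting point can leave the network in a local minimum.

```python
import numpy as np

# Sigmoid: continuous and nonlinear, so its derivative exists everywhere,
# which is what the gradient-descent weight updates require.
def f(x):
    return 1.0 / (1.0 + np.exp(-x))

def f_prime(x):
    s = f(x)
    return s * (1.0 - s)            # derivative of the sigmoid

rng = np.random.default_rng(0)

# Toy task (illustrative): XOR, which cannot be solved without hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)   # output layer

lr = 0.5
for epoch in range(20000):          # repeated presentation of the pairs
    h_in = X @ W1 + b1
    h = f(h_in)                     # hidden activations
    o_in = h @ W2 + b2
    o = f(o_in)                     # network outputs

    # Backpropagate the error; the chain rule uses f's derivative.
    delta_o = (o - T) * f_prime(o_in)
    delta_h = (delta_o @ W2.T) * f_prime(h_in)

    W2 -= lr * h.T @ delta_o;  b2 -= lr * delta_o.sum(axis=0)
    W1 -= lr * X.T @ delta_h;  b1 -= lr * delta_h.sum(axis=0)

print(np.round(o, 2))   # near [0, 1, 1, 0]; a poor seed can stall in a local minimum
```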

C.2.3 Processing capabilities in neural networks

Some of the types of information processing performed by artificial neural networks include pattern association, pattern completion, pattern classification, and regularity discovery. In pattern association, a pattern is not stored; instead, the strength of the connection between two neurons adjusts, so that the next time an input pattern is seen at neuron j that causes it to output Oj, the weight Wij has been set to the correct value to produce the associated pattern Oi as output from neuron i. Pattern completion involves filling in a missing portion of a pattern. For example, a pattern may be associated with itself, in what is called auto-association, so that when a degraded version of the pattern is presented to the network, the network can reconstruct the complete pattern. Degraded patterns occur when incomplete information is available, such as with partially hidden objects or noise on a signal.
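As a concrete illustration of auto-association (a sketch, not from the original text), the Hopfield-style network below stores bipolar patterns with a Hebbian outer-product rule and then reconstructs a complete pattern from a degraded probe; the patterns, network size, and number of corrupted elements are arbitrary choices.

```python
import numpy as np

# Store bipolar (+1/-1) patterns with the Hebbian outer-product rule.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])
n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)              # no self-connections

# Degrade a stored pattern: flip two elements (occlusion or noise).
probe = patterns[0].copy()
probe[[1, 4]] *= -1

# Iterate: each unit takes the sign of its weighted input until stable.
state = probe.astype(float)
for _ in range(10):
    state = np.sign(W @ state)

print("degraded :", probe)
print("recovered:", state.astype(int))   # matches patterns[0]
```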

Particularly useful properties of artificial neural networks include default assignments, generalization, and graceful degradation. A computational architecture that can generalize about how to grasp objects never seen before is better than a system where every possible associated pattern must be accounted for. In these artificial neural networks, where similar input patterns lead to similar outputs, only a few sample training points are needed. Similar patterns reinforce the strengths of the weights between neurons. By presenting pairs with some noise added, the network learns the central tendency. Default assignments come from this same property. Also, one does not need a perfect copy of the input to produce the output.
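The following sketch (again illustrative, not from the original text) shows these properties: a single-layer network is trained with the delta rule on noisy copies of two prototype patterns, so the weights come to encode the central tendency, and a degraded probe the network has never seen still produces roughly the correct output.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two prototype input patterns and their paired target outputs.
prototypes = np.array([[1., 1., 1., 0., 0., 0.],
                       [0., 0., 0., 1., 1., 1.]])
targets = np.array([[1., 0.],
                    [0., 1.]])

# Train a single-layer linear network on noisy copies of the prototypes.
W = np.zeros((6, 2))
lr = 0.1
for _ in range(2000):
    k = rng.integers(2)
    x = prototypes[k] + rng.normal(scale=0.2, size=6)   # noisy exemplar
    y = x @ W
    W += lr * np.outer(x, targets[k] - y)               # delta rule

# A novel, degraded probe (one element missing) still maps to its class:
# the weights encode the central tendency of the noisy training exemplars.
probe = np.array([1., 0., 1., 0., 0., 0.])
print(np.round(probe @ W, 2))   # first output unit is clearly larger
```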

C.3 Network Example: Heteroassociative Memories

A single-layer, feedforward neural network may be used as a heteroassociative memory. That is, it can be trained to produce an arbitrary output pattern given a paired input pattern. We consider the case where the network has an n-vector output of linear summation units.
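A minimal sketch of such a memory (an illustration, not the text's own derivation, which continues beyond this excerpt) uses the Hebbian outer-product rule; for orthonormal input patterns, the linear summation units recall each paired output exactly.

```python
import numpy as np

# Orthonormal input patterns (m-vectors) paired with arbitrary outputs (n-vectors).
A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])    # input patterns, one per row
B = np.array([[1., -1.],
              [-1., 1.],
              [1., 1.]])            # paired output patterns

# Hebbian outer-product rule: W accumulates the pairwise correlations.
W = sum(np.outer(b, a) for a, b in zip(A, B))   # shape (n, m)

# Recall: a single feedforward layer of linear summation units.
for a, b in zip(A, B):
    print(W @ a, "->", b)           # exact recall for orthonormal inputs
```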
