Appendix C - Computational Neural Modelling

[Figure C.3: Simple pattern of connectivity in a multi-layered network, containing one input layer, one hidden layer, and one output layer.]

The function f can be the identity function but, more usually, it is a threshold function that ensures a neuron outputs a signal only when its activation exceeds a certain value, as is typical of real neurons.
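As a minimal sketch of this idea (an illustration added here; the function names and the threshold value are assumptions, not values from the text), such a unit can be written in a few lines of Python:

def threshold(activation, theta=0.5):
    # Step activation: emit a signal only when activation exceeds theta.
    # The value of theta is an illustrative assumption.
    return 1.0 if activation > theta else 0.0

def unit_output(weights, inputs, f=threshold):
    # A unit's output: the weighted sum of its inputs passed through f,
    # where f may be the identity or a threshold function.
    activation = sum(w * i for w, i in zip(weights, inputs))
    return f(activation)

With f as the identity, the unit simply relays its summed input; with the threshold, it stays silent until the sum exceeds theta.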

In parallel distributed processing models, neurons are connected in some pattern of connectivity, or network topology. A feedforward network is a set of neurons connected in such a way that there are no cyclic paths. This type of network has no dynamics; that is, after an input pattern is applied to the network, the network settles into a static state of neural activation within a finite number of ‘steps’ (synaptic delays) equal to the length of the longest path from input to output. Such a network is shown in Figure C.3. This feedforward network has three layers. Certain neurons are defined to be ‘input units’ and certain ones are ‘output units’; intermediate units, which are neither input nor output, are called ‘hidden units’. In 1969, Marvin Minsky and Seymour Papert demonstrated the limitations of networks without hidden units (the exclusive-or function is the classic mapping such networks cannot compute); adding a layer of neurons hidden between the input layer and the output layer allows the inputs to be recoded into an internal representation from which the correct output can be computed, as sketched below.
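To make this concrete, here is a minimal Python sketch of a three-layer feedforward pass (added for illustration; the weights and thresholds are hand-chosen assumptions, not values from the text). The hidden units recode the two inputs as OR and AND, from which the output unit computes exclusive-or; the longest input-to-output path is two synaptic delays, so a single forward sweep settles the network:

def step(a, theta):
    # Threshold activation, as described above.
    return 1.0 if a > theta else 0.0

def feedforward(x):
    # One sweep through input -> hidden -> output; no cyclic paths.
    # Hand-chosen weights (illustrative assumption): h1 acts as OR,
    # h2 acts as AND, and the output fires for 'OR but not AND'.
    h1 = step(x[0] + x[1], 0.5)   # hidden unit 1: OR of the inputs
    h2 = step(x[0] + x[1], 1.5)   # hidden unit 2: AND of the inputs
    return step(h1 - h2, 0.5)     # output unit: exclusive-or

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, feedforward(x))      # prints the XOR truth table

Without the hidden layer, no choice of weights on the two inputs alone could produce this output pattern; the recoding into OR and AND is what makes the mapping computable.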
