
processing. This in effect creates a higher-dimensional input space, allowing more ‘elbow room’ for the computation. These hidden units often come to attend to features of the input patterns that are important to the network’s computation, so experimenters analyze their responses to determine how the network operates.
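To make this concrete, the following is a minimal sketch (not from the text; the network size, learning rate, and training setup are illustrative assumptions) of a two-layer network in Python/NumPy. The hidden layer re-represents the 2-D XOR inputs in a higher-dimensional space where they become separable, and its responses can be inspected afterwards, as described above.

```python
import numpy as np

# XOR: not linearly separable in the raw 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
H = 4                              # hidden units: the extra 'elbow room'
W1 = rng.normal(0, 1, (2, H))      # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1))      # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the hidden layer re-represents the input in H dimensions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error gradients, plain full-batch gradient descent.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

# Inspect the hidden units, as the text says experimenters do:
print("hidden responses:\n", np.round(sigmoid(X @ W1 + b1), 2))
print("outputs:", np.round(out.ravel(), 2))
```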

A representation can be either encoded or decoded. With a decoded input representation, there is a separate unit for each possible input pattern. Since only one of these units is active at a time, the network must see every possible input pattern during training. The disadvantage is that a combinatorially explosive number of units is required. However, the weight modification rule is then simple, since each input unit can drive some output units with excitatory synapses and inhibit others. Other advantages of highly decoded representations include fault tolerance, sufficiency of components with relatively poor dynamic range, and ease of local feature extraction. The ease of learning a function depends on the degree to which the output representation is related to the input representation on which it depends. With an encoded input representation, the activation of output units depends on complex nonlinear interactions between the inputs, and multiple layers are therefore needed to compute more complex functions.
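As an illustrative sketch of this contrast (my example, not the book's), consider the 2-bit parity function. Under a decoded, one-hot representation a single weight layer implements the function as a lookup table, whereas under the compact encoded representation parity depends on a nonlinear interaction of the inputs and no single linear layer can compute it:

```python
import numpy as np

# Target function: 2-bit parity (XOR), defined over 4 possible inputs.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
parity = np.array([0, 1, 1, 0], dtype=float)

# Decoded representation: one unit per possible input pattern (one-hot).
# Only one unit is active at a time, so a single weight layer acts as a
# lookup table: each weight simply stores the desired output.
X_decoded = np.eye(4)
w_decoded = parity.copy()        # weight from pattern i's unit = answer for i
print("decoded outputs:", X_decoded @ w_decoded)   # exact: [0. 1. 1. 0.]

# Encoded representation: the same 4 patterns as compact 2-bit vectors.
# Parity depends on a nonlinear interaction between the two inputs, so no
# single linear layer (weights plus bias) can realize it; least squares
# shows the best linear fit predicts 0.5 for every case.
X_encoded = np.array(patterns, dtype=float)
A = np.hstack([X_encoded, np.ones((4, 1))])        # append bias column
coef, *_ = np.linalg.lstsq(A, parity, rcond=None)
print("best linear fit:", np.round(A @ coef, 2))   # ~[0.5 0.5 0.5 0.5]
```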

Figure C.4 Recurrent networks that are completely connected. In both networks, every unit receives input directly from every other unit.

The alternative to feedforward networks is the recurrent or feedback network, as seen in Figure C.4. These examples are both completely connected recurrent networks; i.e., every unit receives input directly from every other unit.
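A minimal sketch of such a completely connected recurrent network, assuming a Hopfield-style threshold update (an assumption for illustration, not the text's specification): activity feeds back on itself over repeated sweeps rather than flowing once from inputs to outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

# Symmetric weights, no self-connections: unit i receives input
# from every other unit j via W[i, j].
W = rng.normal(0, 1, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

state = rng.choice([-1.0, 1.0], size=N)    # start from a random state

# Iterate: unlike a feedforward pass, each unit's activity depends on
# the current activity of all the others, so the state evolves in time.
for sweep in range(10):
    for i in range(N):
        net = W[i] @ state                 # summed input from all other units
        state[i] = 1.0 if net >= 0 else -1.0
    print(f"sweep {sweep}: {state}")
```

With symmetric weights and these one-at-a-time updates, the state settles into a stable pattern after a few sweeps, which is the behavior such feedback networks are typically analyzed for.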
