AI - a Guide to Intelligent Systems


neuron. Although competitive learning was proposed in the early 1970s, it was largely ignored until the late 1980s, when Teuvo Kohonen introduced a special class of artificial neural networks called self-organising feature maps. He also formulated the principle of topographic map formation, which states that the spatial location of an output neuron in the topographic map corresponds to a particular feature of the input pattern.

The Kohonen network consists of a single layer of computation neurons, but it has two different types of connections. There are forward connections from the neurons in the input layer to the neurons in the output layer, and lateral connections between neurons in the output layer. The lateral connections are used to create a competition between neurons. In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn. If a neuron does not respond to a given input pattern, then learning does not occur in that neuron.
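The competitive learning rule summarised above can be sketched in a few lines. This is a minimal illustration rather than the book's algorithm: the map size, learning rate, neighbourhood radius and random training data are all arbitrary assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D Kohonen map: 2 input neurons, 10 output neurons.
n_inputs, n_outputs = 2, 10
W = rng.random((n_outputs, n_inputs))      # forward connection weights

def train_step(x, lr=0.1, radius=1):
    """One step of competitive learning with a neighbourhood."""
    # Competition: the winner is the output neuron whose weight
    # vector lies closest to the input pattern.
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Only the winner and its topological neighbours learn: their
    # weights shift towards the currently active inputs.
    for j in range(n_outputs):
        if abs(j - winner) <= radius:
            W[j] += lr * (x - W[j])
    return winner

# Train on random 2-D patterns; neighbouring output neurons end up
# responding to similar regions of the input space.
for _ in range(1000):
    train_step(rng.random(n_inputs))
```

Neurons outside the winner's neighbourhood are left untouched, which is exactly the "learning does not occur in that neuron" behaviour described in the summary.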

Questions for review

1 How does an artificial neural network model the brain? Describe two major classes of learning paradigms: supervised learning and unsupervised (self-organised) learning. What are the features that distinguish these two paradigms from each other?

2 What are the problems with using a perceptron as a biological model? How does the perceptron learn? Demonstrate perceptron learning of the binary logic function OR. Why can the perceptron learn only linearly separable functions?
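One possible demonstration of perceptron learning of OR, as a hedged sketch: the learning rate, initial weights and threshold below are arbitrary choices for the example, not values taken from the text.

```python
import numpy as np

# Truth table for the binary logic function OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)      # weights (arbitrary initial values)
theta = 0.0          # threshold
lr = 0.1             # learning rate

def step(s):
    """Hard-limit activation function."""
    return 1 if s >= 0 else 0

# Perceptron learning rule: w <- w + lr * error * x, with the
# threshold adjusted in the opposite direction, like a bias.
for epoch in range(20):
    errors = 0
    for x, target in zip(X, y):
        out = step(np.dot(w, x) - theta)
        err = target - out
        w = w + lr * err * x
        theta -= lr * err
        errors += abs(err)
    if errors == 0:      # a full pass with no mistakes: converged
        break

print([step(np.dot(w, x) - theta) for x in X])   # [0, 1, 1, 1]
```

Because OR is linearly separable, the perceptron convergence theorem guarantees this loop terminates; the same loop run on Exclusive-OR would never settle, which is the point of the question's final part.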

3 What is a fully connected multilayer perceptron? Construct a multilayer perceptron with an input layer of six neurons, a hidden layer of four neurons and an output layer of two neurons. What is a hidden layer for, and what does it hide?

4 How does a multilayer neural network learn? Derive the back-propagation training algorithm. Demonstrate multilayer network learning of the binary logic function Exclusive-OR.
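A back-propagation demonstration on Exclusive-OR might look like the sketch below. The 2-4-1 architecture, sigmoid activation, learning rate and epoch count are assumptions for the example; the book's own derivation may use different conventions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Exclusive-OR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Hypothetical 2-4-1 network with random initial weights.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error gradients for the sigmoid units.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

mse = np.mean((out - y) ** 2)
print(np.round(out.ravel()), mse)
```

For most random initialisations the rounded outputs approximate [0, 1, 1, 0]; a single-layer perceptron cannot reach this, which is why the hidden layer is essential here.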

5 What are the main problems with the back-propagation learning algorithm? How can learning be accelerated in multilayer neural networks? Define the generalised delta rule.

6 What is a recurrent neural network? How does it learn? Construct a single six-neuron Hopfield network and explain its operation. What is a fundamental memory?

7 Derive the Hopfield network training algorithm. Demonstrate how to store three fundamental memories in the six-neuron Hopfield network.
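Storing fundamental memories in a six-neuron Hopfield network can be sketched as follows. The three bipolar memory vectors are made up for the example, and the recall loop uses synchronous updates for brevity, where the text's derivation may prefer asynchronous updates.

```python
import numpy as np

# Three hypothetical fundamental memories for a six-neuron network,
# encoded as bipolar (+1/-1) vectors.
M = np.array([
    [ 1,  1,  1,  1,  1,  1],
    [-1, -1, -1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1],
])

# Hebbian storage rule: W is the sum of the outer products of the
# memories with themselves, with the self-connections zeroed.
W = M.T @ M
np.fill_diagonal(W, 0)

def recall(x, steps=10):
    """Iterate the network (synchronously) until the state is stable."""
    x = x.copy()
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1      # break ties towards +1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Each stored memory should be a stable state (an attractor) of the
# network, i.e. a fundamental memory.
for m in M:
    assert np.array_equal(recall(m), m)
```

Note that storing a memory automatically stores its negation as well, so the first two vectors above really occupy a single attractor pair.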

8 The delta rule and Hebb’s rule represent two different methods of learning in neural networks. Explain the differences between these two rules.

9 What is the difference between autoassociative and heteroassociative types of memory? What is the bidirectional associative memory (BAM)? How does the BAM work?
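As a hedged illustration of how a BAM stores and recalls heteroassociative pairs: the two pattern pairs and the layer sizes below are made up for the example (and chosen to be orthogonal so that recall is exact).

```python
import numpy as np

# Hypothetical associated pairs: a six-neuron input layer (x) and a
# four-neuron output layer (y), both in bipolar (+1/-1) encoding.
X = np.array([[ 1,  1,  1,  1, -1, -1],
              [ 1,  1, -1, -1,  1, -1]])
Y = np.array([[ 1,  1, -1, -1],
              [ 1, -1,  1, -1]])

# BAM storage rule: correlation matrix W = sum of x_m y_m^T.
W = X.T @ Y

def sgn(v):
    return np.where(v >= 0, 1, -1)

def recall_forward(x):
    """Recall the associated y from an input pattern x."""
    return sgn(x @ W)

def recall_backward(y):
    """Recall the associated x from an output pattern y."""
    return sgn(W @ y)

# The memory is bidirectional: either member of a pair retrieves
# the other, which is what makes the BAM heteroassociative.
for x, y in zip(X, Y):
    assert np.array_equal(recall_forward(x), y)
    assert np.array_equal(recall_backward(y), x)
```

Contrast this with the Hopfield network of questions 6 and 7, which is autoassociative: it maps a corrupted pattern back onto itself rather than onto a different associated pattern.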
