...with another stimulus, $n$ (for example, the sight of a yellow object). If these two stimuli – a yellow sight and a banana smell – co-occur in the environment, then the Hebbian learning rule (42.1) will increase the weights $w_{nm}$ and $w_{mn}$. This means that when, on a later occasion, stimulus $n$ occurs in isolation, making the activity $x_n$ large, the positive weight from $n$ to $m$ will cause neuron $m$ also to be activated. Thus the response to the sight of a yellow object is an automatic association with the smell of a banana. We could call this 'pattern completion'. No teacher is required for this associative memory to work. No signal is needed to indicate that a correlation has been detected or that an association should be made. The unsupervised, local learning algorithm and the unsupervised, local activity rule spontaneously produce associative memory. This idea seems so simple and so effective that it must be relevant to how memories work in the brain.

42.2 Definition of the binary Hopfield network

Convention for weights. Our convention in general will be that $w_{ij}$ denotes the connection from neuron $j$ to neuron $i$.

Architecture. A Hopfield network consists of $I$ neurons. They are fully connected through symmetric, bidirectional connections with weights $w_{ij} = w_{ji}$. There are no self-connections, so $w_{ii} = 0$ for all $i$. Biases $w_{i0}$ may be included (these may be viewed as weights from a neuron '0' whose activity is permanently $x_0 = 1$). We will denote the activity of neuron $i$ (its output) by $x_i$.

Activity rule. Roughly, a Hopfield network's activity rule is for each neuron to update its state as if it were a single neuron with the threshold activation function

    $x(a) = \Theta(a) \equiv \begin{cases} 1 & a \ge 0 \\ -1 & a < 0 \end{cases}$    (42.2)

Since there is feedback in a Hopfield network (every neuron's output is an input to all the other neurons) we will have to specify an order for the updates to occur. The updates may be synchronous or asynchronous.

Synchronous updates – all neurons compute their activations

    $a_i = \sum_j w_{ij} x_j$    (42.3)

then update their states simultaneously to

    $x_i = \Theta(a_i)$.    (42.4)

Asynchronous updates – one neuron at a time computes its activation and updates its state. The sequence of selected neurons may be a fixed sequence or a random sequence.

The properties of a Hopfield network may be sensitive to the above choices.

Learning rule. The learning rule is intended to make a set of desired memories $\{x^{(n)}\}$ be stable states of the Hopfield network's activity rule. Each memory is a binary pattern, with $x_i \in \{-1, 1\}$.
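As a concrete illustration of these definitions, here is a minimal Python sketch of a binary Hopfield network. The class and function names are our own for illustration. The activity rule follows (42.2)–(42.4) directly; the storage rule shown, $w_{ij} = \eta \sum_n x_i^{(n)} x_j^{(n)}$ with $\eta = 1/I$, is an assumption consistent with the Hebbian idea of (42.1), since this page states only the goal of the learning rule, not its exact form.

```python
import numpy as np

def theta(a):
    """Threshold activation Theta(a): +1 if a >= 0, else -1 (eqn 42.2)."""
    return np.where(a >= 0, 1, -1)

class HopfieldNetwork:
    """Binary Hopfield network: symmetric weights, no self-connections."""

    def __init__(self, I):
        self.I = I                     # number of neurons
        self.w = np.zeros((I, I))      # w[i, j]: connection from neuron j to neuron i

    def store(self, memories, eta=None):
        """Hebbian storage of binary (+/-1) patterns.

        Sets w_ij = eta * sum_n x_i^(n) x_j^(n); the formula and the
        default eta = 1/I are assumptions, not taken from this page.
        """
        X = np.asarray(memories)       # shape (N, I), entries in {-1, +1}
        if eta is None:
            eta = 1.0 / self.I
        self.w = eta * X.T @ X
        np.fill_diagonal(self.w, 0.0)  # enforce w_ii = 0

    def update_sync(self, x):
        """Synchronous update: a_i = sum_j w_ij x_j (42.3), x_i = Theta(a_i) (42.4)."""
        return theta(self.w @ x)

    def update_async(self, x, rng):
        """Asynchronous updates: one neuron at a time, in random order (one sweep)."""
        x = x.copy()
        for i in rng.permutation(self.I):
            x[i] = theta(self.w[i] @ x)
        return x

# Pattern completion: corrupt a stored memory, then let the dynamics restore it.
rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(2, 100))  # two random 100-bit patterns
net = HopfieldNetwork(I=100)
net.store(memories)

probe = memories[0].copy()
probe[:15] *= -1                               # flip 15 of the 100 bits
for _ in range(5):                             # a few asynchronous sweeps
    probe = net.update_async(probe, rng)
print("bits matching memory 0:", int(np.sum(probe == memories[0])))
```

The choice between the two update orders matters: with symmetric weights and zero self-connections, asynchronous updates never increase the network's energy, so the dynamics settle into a fixed point, whereas synchronous updates can fall into a two-cycle. This is one way in which the properties of the network are sensitive to the choices above.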
