... when we discussed variational methods (section 33.2): when we approximated the spin system whose energy function was

E(x; J) = -\frac{1}{2} \sum_{m,n} J_{mn} x_m x_n - \sum_n h_n x_n        (42.9)

with a separable distribution

Q(x; a) = \frac{1}{Z_Q} \exp\!\left( \sum_n a_n x_n \right)        (42.10)

and optimized the latter so as to minimize the variational free energy

\beta \tilde{F}(a) = \beta \sum_x Q(x; a) E(x; J) - \sum_x Q(x; a) \ln \frac{1}{Q(x; a)},        (42.11)

we found that the pair of iterative equations

a_m = \beta \left( \sum_n J_{mn} \bar{x}_n + h_m \right)        (42.12)

and

\bar{x}_n = \tanh(a_n)        (42.13)

were guaranteed to decrease the variational free energy

\beta \tilde{F}(a) = \beta \left( -\frac{1}{2} \sum_{m,n} J_{mn} \bar{x}_m \bar{x}_n - \sum_n h_n \bar{x}_n \right) - \sum_n H_2^{(e)}(q_n).        (42.14)

If we simply replace J by w, \bar{x} by x, and h_n by w_{i0}, we see that the equations of the Hopfield network are identical to a set of mean-field equations that minimize

\beta \tilde{F}(x) = -\beta \frac{1}{2} x^T W x - \sum_i H_2^{(e)}[(1 + x_i)/2].        (42.15)

There is a general name for a function that decreases under the dynamical evolution of a system and that is bounded below: such a function is a Lyapunov function for the system. It is useful to be able to prove the existence of Lyapunov functions: if a system has a Lyapunov function then its dynamics are bound to settle down to a fixed point, which is a local minimum of the Lyapunov function, or a limit cycle, along which the Lyapunov function is a constant. Chaotic behaviour is not possible for a system with a Lyapunov function. If a system has a Lyapunov function then its state space can be divided into basins of attraction, one basin associated with each attractor.

So, the continuous Hopfield network's activity rules (if implemented asynchronously) have a Lyapunov function. This Lyapunov function is a convex function of each parameter a_i, so a Hopfield network's dynamics will always converge to a stable fixed point.

This convergence proof depends crucially on the fact that the Hopfield network's connections are symmetric. It also depends on the updates being made asynchronously.

Exercise 42.3. [2, p.520] Show by constructing an example that if a feedback network does not have symmetric connections then its dynamics may fail to converge to a fixed point.

Exercise 42.4. [2, p.521] Show by constructing an example that if a Hopfield network is updated synchronously then, from some initial conditions, it may fail to converge to a fixed point.
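To make the correspondence concrete, here is a minimal numerical sketch (not from the book) of the asynchronous mean-field/Hopfield updates of equations (42.12)-(42.13) with \beta = 1, tracking the free energy of equation (42.15) with bias terms included. The weight values, biases, and helper names (free_energy, binary_entropy_nats) are invented for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    K = 5
    A = rng.normal(size=(K, K))
    W = (A + A.T) / 2            # symmetric connections
    np.fill_diagonal(W, 0.0)     # no self-connections
    b = 0.1 * rng.normal(size=K) # bias terms, playing the role of w_{i0}

    def binary_entropy_nats(q):
        # H_2^{(e)}(q) = -q ln q - (1 - q) ln(1 - q), measured in nats
        return -q * np.log(q) - (1 - q) * np.log(1 - q)

    def free_energy(x):
        # beta * F~(x) of equation (42.15), with beta = 1 and bias terms added
        q = (1 + x) / 2
        return -0.5 * x @ W @ x - b @ x - binary_entropy_nats(q).sum()

    x = np.tanh(rng.normal(size=K))   # activities start strictly inside (-1, 1)
    history = [free_energy(x)]
    for _ in range(300):
        i = rng.integers(K)           # asynchronous: update one unit at a time
        x[i] = np.tanh(W[i] @ x + b[i])   # equations (42.12)-(42.13), beta = 1
        history.append(free_energy(x))

    # the free energy never increases: a Lyapunov function for these dynamics
    assert all(f2 <= f1 + 1e-9 for f1, f2 in zip(history, history[1:]))
    print("final free energy:", history[-1])

Replacing the asynchronous loop with a single synchronous sweep, x = np.tanh(W @ x + b), removes the monotonicity guarantee, which is the behaviour Exercise 42.4 asks you to exhibit.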