Information Theory, Inference, and Learning ... - Inference Group

Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981
You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

41.4: Monte Carlo implementation of a single neuron (p. 499)

Figure 41.6. Samples obtained by the Langevin Monte Carlo method. The learning rate was set to η = 0.01 and the weight decay rate to α = 0.01. The step size is given by ε = √(2η). The function performed by the neuron is shown by three of its contours every 1000 iterations from iteration 10 000 to 40 000. The contours shown are those corresponding to a = 0, ±1, namely y = 0.5, 0.27 and 0.73. Also shown is a vector proportional to (w1, w2).

[Figure 41.7: two panels, (a) and (b), each a contour plot on a 0-10 by 0-10 input plane.]

Figure 41.7. Bayesian predictions found by the Langevin Monte Carlo method compared with the predictions using the optimized parameters. (a) The predictive function obtained by averaging the predictions for 30 samples uniformly spaced between iterations 10 000 and 40 000, shown in figure 41.6. The contours shown are those corresponding to a = 0, ±1, ±2, namely y = 0.5, 0.27, 0.73, 0.12 and 0.88. (b) For contrast, the predictions given by the 'most probable' setting of the neuron's parameters, as given by optimization of M(w).
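The procedure the captions describe can be sketched in code: run Langevin Monte Carlo on a single neuron's weights (a gradient step on the objective M(w), the data error plus a weight-decay term α·E_W(w), with added Gaussian noise of standard deviation ε = √(2η)), keep samples spaced 1000 iterations apart between iterations 10 000 and 40 000, and form the Bayesian prediction by averaging the neuron's output over those samples. This is a minimal illustrative sketch, not the book's code; the toy dataset, the `grad_M` and `predict` helpers, and the omission of the Metropolis accept/reject step are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical toy dataset (an assumption, not from the book):
# two input features, binary targets linearly separable by w = (1, 1).
X = rng.normal(size=(20, 2))
t = (X[:, 0] + X[:, 1] > 0).astype(float)

eta, alpha = 0.01, 0.01     # learning rate and weight decay, as in figure 41.6
eps = np.sqrt(2 * eta)      # step size epsilon = sqrt(2 eta)

def grad_M(w):
    # Gradient of M(w) = G(w) + alpha * E_W(w): cross-entropy data error
    # plus quadratic weight decay.
    y = sigmoid(X @ w)
    return X.T @ (y - t) + alpha * w

w = np.zeros(2)
samples = []
for i in range(40_000):
    # Langevin update: a small gradient-descent step plus Gaussian noise.
    w = w - 0.5 * eps**2 * grad_M(w) + eps * rng.normal(size=w.shape)
    if i >= 10_000 and i % 1000 == 0:
        samples.append(w.copy())   # 30 spaced samples, as in figure 41.7(a)

def predict(x):
    # Bayesian prediction: average the neuron's output over the samples,
    # rather than using a single optimized weight vector.
    return float(np.mean([sigmoid(x @ w_s) for w_s in samples]))
```

Averaging over samples, as in panel (a) of figure 41.7, yields softer contours away from the data than the single 'most probable' fit of panel (b), since weight vectors of differing magnitude and direction all contribute to the mean.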
