6 — Stream Codes

Figure 6.12. A texture consisting of horizontal and vertical pins dropped at random on the plane.

(c) Sources with intricate redundancy, such as files generated by computers. For example, a LaTeX file followed by its encoding into a PostScript file. The information content of this pair of files is roughly equal to the information content of the LaTeX file alone.

(d) A picture of the Mandelbrot set. The picture has an information content equal to the number of bits required to specify the range of the complex plane studied, the pixel sizes, and the colouring rule used. (A generator sketch is given at the end of this section.)

(e) A picture of a ground state of a frustrated antiferromagnetic Ising model (figure 6.13), which we will discuss in Chapter 31. Like figure 6.12, this binary image has interesting correlations in two directions.

Figure 6.13. Frustrated triangular Ising model in one of its ground states.

(f) Cellular automata – figure 6.14 shows the state history of 100 steps of a cellular automaton with 400 cells. The update rule, in which each cell’s new state depends on the state of five preceding cells, was selected at random. The information content is equal to the information in the boundary (400 bits), plus the information in the propagation rule, which here can be described in 32 bits. An optimal compressor will thus give a compressed file length which is essentially constant, independent of the vertical height of the image. Lempel–Ziv would only give this zero-cost compression once the cellular automaton has entered a periodic limit cycle, which could easily take about $2^{100}$ iterations. (A simulation sketch also follows below.)

In contrast, the JBIG compression method, which models the probability of a pixel given its local context and uses arithmetic coding, would do a good job on these images. (A toy version of this context-modelling idea closes the section.)

Solution to exercise 6.14 (p.124). For a one-dimensional Gaussian, the variance of $x$, $E[x^2]$, is $\sigma^2$. So, since the components of $x$ are independent random variables, the mean value of $r^2$ in $N$ dimensions is

$$E[r^2] = N\sigma^2. \tag{6.25}$$
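As a quick numerical check of equation (6.25), here is a short Python sketch (an addition, not part of the original text) that draws $N$-dimensional Gaussian vectors and averages $r^2 = \sum_i x_i^2$; the average should approach $N\sigma^2$.

```python
import random

N = 100          # dimensionality
SIGMA = 2.0      # standard deviation of each component
SAMPLES = 20000  # number of Monte Carlo draws

# Average r^2 = sum of squared components over many random vectors.
mean_r2 = sum(
    sum(random.gauss(0.0, SIGMA) ** 2 for _ in range(N))
    for _ in range(SAMPLES)
) / SAMPLES

print(mean_r2)         # close to N * sigma^2 = 400.0
print(N * SIGMA ** 2)  # the prediction of equation (6.25)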
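Returning to the examples above: item (d) can be made concrete with a minimal escape-time generator (a sketch; the window, grid size and iteration cap below are arbitrary example values, not taken from the book). The point is that the entire picture is a deterministic function of these few parameters plus the colouring rule, which bounds its information content.

```python
def mandelbrot(xmin, xmax, ymin, ymax, width, height, max_iter=50):
    """Escape-time picture of the Mandelbrot set.  The whole output is
    determined by the arguments plus this colouring rule, so the
    picture's information content is just these few numbers."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map pixel (i, j) to a point c in the complex plane.
            c = complex(xmin + (xmax - xmin) * i / width,
                        ymin + (ymax - ymin) * j / height)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)  # colouring rule: iterations until escape
        image.append(row)
    return image

# Example window (any window works the same way):
for row in mandelbrot(-2.0, 1.0, -1.5, 1.5, width=60, height=24):
    print(''.join('#' if n == 50 else '.' for n in row))
```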
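Item (f) can be simulated in a few lines. The sketch below uses a width-400 automaton whose update rule is a random lookup table over 5-cell neighbourhoods of the previous row; since there are $2^5 = 32$ neighbourhood patterns, the rule occupies exactly 32 bits, so the whole image is fixed by those 32 bits plus the 400-bit initial row. The wrap-around edges and the centred neighbourhood are assumptions; the book does not specify either.

```python
import random

WIDTH, STEPS = 400, 100

# Update rule: one output bit per 5-cell neighbourhood pattern.
# 2**5 = 32 entries, i.e. the rule is describable in 32 bits.
rule = [random.randint(0, 1) for _ in range(32)]

# Random initial row: the 400-bit boundary mentioned in the text.
row = [random.randint(0, 1) for _ in range(WIDTH)]

history = [row]
for _ in range(STEPS - 1):
    new_row = []
    for i in range(WIDTH):
        # Read the 5 cells of the previous row centred on i
        # (edges wrap around; this boundary condition is an assumption).
        pattern = 0
        for offset in (-2, -1, 0, 1, 2):
            pattern = (pattern << 1) | row[(i + offset) % WIDTH]
        new_row.append(rule[pattern])
    row = new_row
    history.append(row)

# The whole 400 x 100 image is determined by only 400 + 32 bits.
for r in history[:3]:
    print(''.join('#' if c else '.' for c in r))
```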
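Finally, the JBIG remark can be illustrated by the modelling half of such a scheme. The toy sketch below (a 3-pixel causal context with Laplace counts; these choices are assumptions, as real JBIG uses a larger context template and an adaptive arithmetic coder) computes the ideal code length $\sum -\log_2 p(\text{pixel} \mid \text{context})$, which is the file length an arithmetic coder driven by this model would approach.

```python
import math

def ideal_code_length(image):
    """Bits an arithmetic coder would need under an adaptive model
    p(pixel | 3 causal neighbours) with add-one (Laplace) counts.
    A toy version of the JBIG idea, not the real JBIG algorithm."""
    counts = {}  # context -> [occurrences of 0, occurrences of 1]
    bits = 0.0
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            # Causal context: west, north-west and north neighbours,
            # treated as 0 outside the image (an assumption).
            ctx = (row[x - 1] if x > 0 else 0,
                   image[y - 1][x - 1] if x > 0 and y > 0 else 0,
                   image[y - 1][x] if y > 0 else 0)
            c = counts.setdefault(ctx, [1, 1])  # Laplace prior
            bits += -math.log2(c[pixel] / (c[0] + c[1]))
            c[pixel] += 1
    return bits

# A strongly correlated 20x20 image (vertical stripes) costs far fewer
# bits than its raw size of 400:
stripes = [[(x // 4) % 2 for x in range(20)] for _ in range(20)]
print(round(ideal_code_length(stripes)))  # well under 400
```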
