Information Theory, Inference, and Learning Algorithms

Information Theory, Inference, and Learning ... - Inference Group

Information Theory, Inference, and Learning ... - Inference Group

SHOW MORE
SHOW LESS
  • No tags were found...

You also want an ePaper? Increase the reach of your titles

YUMPU automatically turns print PDFs into web optimized ePapers that Google loves.

4.8: Solutions

In the 1st, 2nd and 3rd weighings, put the following balls in pan A and in pan B:

    Weighing    Pan A               Pan B
    1st         AAB ABA ABB ABC     BBC BCA BCB BCC
    2nd         AAB CAA CAB CAC     ABA ABB ABC BBC
    3rd         ABA BCA CAA CCA     AAB ABB BCB CAB

Now in a given weighing, a pan will either end up in the

  • Canonical position (C) that it assumes when the pans are balanced, or
  • Above that position (A), or
  • Below it (B),

so the three weighings determine for each pan a sequence of three of these letters.

If both sequences are CCC, then there's no odd ball. Otherwise, for just one of the two pans, the sequence is among the 12 above, and names the odd ball, whose weight is Above or Below the proper one according as the pan is A or B.

(b) In W weighings the odd ball can be identified from among

    N = (3^W − 3)/2                                            (4.49)

balls in the same way, by labelling them with all the non-constant sequences of W letters from A, B, C whose first change is A-to-B or B-to-C or C-to-A, and at the wth weighing putting those whose wth letter is A in pan A and those whose wth letter is B in pan B.

Solution to exercise 4.15 (p.85). The curves (1/N) H_δ(X^N) as a function of δ for N = 1, 2 and 1000 are shown in figure 4.14. Note that H_2(0.2) = 0.72 bits.

    N = 1:
    δ            (1/N) H_δ(X)    2^{H_δ(X)}
    0–0.2        1               2
    0.2–1        0               1

    N = 2:
    δ            (1/N) H_δ(X)    2^{H_δ(X)}
    0–0.04       1               4
    0.04–0.2     0.79            3
    0.2–0.36     0.5             2
    0.36–1       0               1

[Figure 4.14: (1/N) H_δ(X) (vertical axis) against δ (horizontal axis), for N = 1, 2 and 1000 binary variables with p_1 = 0.4.]

Solution to exercise 4.17 (p.85). The Gibbs entropy is k_B Σ_i p_i ln(1/p_i), where i runs over all states of the system. This entropy is equivalent (apart from the factor of k_B) to the Shannon entropy of the ensemble.

Whereas the Gibbs entropy can be defined for any ensemble, the Boltzmann entropy is only defined for microcanonical ensembles, which have a probability distribution that is uniform over a set of accessible states. The Boltzmann entropy is defined to be S_B = k_B ln Ω, where Ω is the number of accessible states of the microcanonical ensemble. This is equivalent (apart from the factor of k_B) to the perfect information content H_0 of that constrained ensemble. The Gibbs entropy of a microcanonical ensemble is trivially equal to the Boltzmann entropy.
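
As a check of the weighing scheme above, the short Python sketch below (an illustration, not code from the book) simulates all 24 cases, one for each choice of odd ball and of heavier-or-lighter, and confirms the decoding rule stated in the solution: exactly one pan's position sequence is among the 12 labels, it names the odd ball, and the matching pan (A or B) tells whether that ball is light or heavy.

    # Brute-force check of the three-weighing scheme (a sketch, not from the book).
    # For every choice of odd ball and of heavier/lighter, simulate the weighings,
    # record each pan's position sequence (A = above, B = below, C = canonical),
    # and verify the decoding rule.

    LABELS = ["AAB", "ABA", "ABB", "ABC", "BBC", "BCA", "BCB", "BCC",
              "CAA", "CAB", "CAC", "CCA"]

    def pan_sequences(odd, heavy):
        """Position sequences of pan A and pan B when ball `odd` is the odd one."""
        seq_a, seq_b = "", ""
        for w in range(3):
            pan_a = [lab for lab in LABELS if lab[w] == "A"]   # wth letter A -> pan A
            pan_b = [lab for lab in LABELS if lab[w] == "B"]   # wth letter B -> pan B
            tilt = 0                                           # > 0 means pan A is heavier
            if odd in pan_a:
                tilt = 1 if heavy else -1
            elif odd in pan_b:
                tilt = -1 if heavy else 1
            if tilt == 0:
                seq_a += "C"; seq_b += "C"                     # balanced: canonical position
            elif tilt > 0:
                seq_a += "B"; seq_b += "A"                     # pan A below, pan B above
            else:
                seq_a += "A"; seq_b += "B"
        return seq_a, seq_b

    for odd in LABELS:
        for heavy in (True, False):
            a, b = pan_sequences(odd, heavy)
            valid = [(pan, s) for pan, s in (("A", a), ("B", b)) if s in LABELS]
            assert len(valid) == 1            # exactly one pan shows a valid label...
            pan, s = valid[0]
            assert s == odd                   # ...and it names the odd ball,
            assert (pan == "B") == heavy      # pan B (Below) <=> the odd ball is heavy
    print("all 24 cases decode correctly")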
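
The labelling rule in part (b) is easy to enumerate. The sketch below (again an illustration, not from the book) builds the label set for general W and checks that its size is (3^W − 3)/2, in agreement with equation (4.49).

    from itertools import product

    CYCLIC = {("A", "B"), ("B", "C"), ("C", "A")}          # allowed first changes

    def labels(W):
        """Non-constant length-W sequences over A, B, C whose first change is cyclic."""
        out = []
        for seq in product("ABC", repeat=W):
            changes = [(seq[i], seq[i + 1]) for i in range(W - 1) if seq[i] != seq[i + 1]]
            if changes and changes[0] in CYCLIC:
                out.append("".join(seq))
        return out

    for W in range(2, 7):
        assert len(labels(W)) == (3 ** W - 3) // 2
        print(f"W = {W}: {len(labels(W))} balls")           # W = 3 gives the 12 labels above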
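
For exercise 4.15, the tabulated values can be reproduced directly from the definition of the essential bit content H_δ: the logarithm of the size of the smallest subset of outcomes with total probability at least 1 − δ. The sketch below is an illustration, not code from the book; it assumes a binary variable with probabilities {0.2, 0.8}, which is the distribution the tabulated rows correspond to.

    from itertools import product
    from math import log2, prod

    def h_delta(probs, delta):
        """log2 of the size of the smallest subset with total probability >= 1 - delta."""
        count, total = 0, 0.0
        for p in sorted(probs, reverse=True):            # keep most probable outcomes first
            if total >= 1 - delta:
                break
            total += p
            count += 1
        return log2(count)

    p1 = 0.2                                             # assumed probability of the rarer symbol
    for N in (1, 2):
        probs = [prod(p1 if b else 1 - p1 for b in bits)
                 for bits in product([0, 1], repeat=N)]  # joint probabilities of the 2^N outcomes
        for delta in (0.01, 0.1, 0.3, 0.5):
            print(f"N={N}  delta={delta:<4}  (1/N) H_delta = {h_delta(probs, delta) / N:.2f}")
    # N = 2 prints 1.00, 0.79, 0.50, 0.00, matching the table above.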
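
The closing remark of the solution to exercise 4.17, that the Gibbs entropy of a microcanonical ensemble equals the Boltzmann entropy, follows in one line: with p_i = 1/Ω for each of the Ω accessible states, k_B Σ_i p_i ln(1/p_i) = k_B ln Ω. A trivial numerical check (an illustration, with k_B set to 1 and an arbitrary Ω):

    from math import log, isclose

    omega = 6                                  # number of accessible states (arbitrary choice)
    p = [1 / omega] * omega                    # uniform microcanonical distribution
    gibbs = sum(pi * log(1 / pi) for pi in p)  # Gibbs entropy in nats, with k_B = 1
    boltzmann = log(omega)                     # Boltzmann entropy ln(Omega), with k_B = 1
    assert isclose(gibbs, boltzmann)
    print(gibbs, boltzmann)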
