17 — Communication over Constrained Noiseless Channels

Solution to exercise 17.10 (p.255). Let the invariant distribution be

$$P(s) = \alpha\, e^{(L)}_s e^{(R)}_s, \tag{17.22}$$

where $\alpha$ is a normalization constant. The entropy of $S_t$ given $S_{t-1}$, assuming $S_{t-1}$ comes from the invariant distribution, is

$$H(S_t \mid S_{t-1}) = -\sum_{s,s'} P(s)\, P(s'\mid s) \log P(s'\mid s) \tag{17.23}$$

(Here, as in Chapter 4, $S_t$ denotes the ensemble whose random variable is the state $s_t$.)

$$= -\sum_{s,s'} \alpha\, e^{(L)}_s e^{(R)}_s \,\frac{e^{(L)}_{s'} A_{s's}}{\lambda\, e^{(L)}_s} \log \frac{e^{(L)}_{s'} A_{s's}}{\lambda\, e^{(L)}_s} \tag{17.24}$$

$$= -\sum_{s,s'} \frac{\alpha}{\lambda}\, e^{(R)}_s e^{(L)}_{s'} A_{s's} \left[ \log e^{(L)}_{s'} + \log A_{s's} - \log \lambda - \log e^{(L)}_s \right]. \tag{17.25}$$

Now, $A_{s's}$ is either 0 or 1, so the contributions from the terms proportional to $A_{s's} \log A_{s's}$ are all zero. So

$$H(S_t \mid S_{t-1}) = \log \lambda + \left( -\frac{\alpha}{\lambda} \sum_{s'} \Bigl[\sum_s A_{s's}\, e^{(R)}_s\Bigr] e^{(L)}_{s'} \log e^{(L)}_{s'} \right) + \left( \frac{\alpha}{\lambda} \sum_s \Bigl[\sum_{s'} e^{(L)}_{s'} A_{s's}\Bigr] e^{(R)}_s \log e^{(L)}_s \right) \tag{17.26}$$

$$= \log \lambda - \frac{\alpha}{\lambda} \sum_{s'} \lambda\, e^{(R)}_{s'} e^{(L)}_{s'} \log e^{(L)}_{s'} + \frac{\alpha}{\lambda} \sum_s \lambda\, e^{(L)}_s e^{(R)}_s \log e^{(L)}_s \tag{17.27}$$

$$= \log \lambda, \tag{17.28}$$

where the step from (17.26) to (17.27) uses the right- and left-eigenvector equations, $\sum_s A_{s's}\, e^{(R)}_s = \lambda e^{(R)}_{s'}$ and $\sum_{s'} e^{(L)}_{s'} A_{s's} = \lambda e^{(L)}_s$, after which the two remaining sums are identical and cancel.

Solution to exercise 17.11 (p.255). The principal eigenvalues of the connection matrices of the two channels are 1.839 and 1.928. The capacities ($\log \lambda$) are 0.879 and 0.947 bits.

Solution to exercise 17.12 (p.256). The channel is similar to the unconstrained binary channel: runs of length greater than $L$ are rare if $L$ is large, so we expect only weak differences from that channel, and these differences will show up in contexts where the run length is close to $L$. The capacity of the channel is very close to one bit.

A lower bound on the capacity is obtained by considering the simple variable-length code for this channel which replaces occurrences of the maximum-runlength string $111\ldots1$ by $111\ldots10$, and otherwise leaves the source file unchanged. The average rate of this code is $1/(1 + 2^{-L})$, because the invariant distribution hits the 'add an extra zero' state a fraction $2^{-L}$ of the time.

We can reuse the solution for the variable-length channel in exercise 6.18 (p.125). The capacity is the value of $\beta$ such that the equation

$$Z(\beta) = \sum_{l=1}^{L+1} 2^{-\beta l} = 1 \tag{17.29}$$

is satisfied. The $L+1$ terms in the sum correspond to the $L+1$ possible strings that can be emitted: $0,\ 10,\ 110,\ \ldots,\ 11\ldots10$. The sum is given exactly by

$$Z(\beta) = 2^{-\beta}\, \frac{\bigl(2^{-\beta}\bigr)^{L+1} - 1}{2^{-\beta} - 1}. \tag{17.30}$$
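As a numerical check on the result of exercise 17.10, the following Python sketch (my construction; the example channel and all names are illustrative, not from the exercise) builds the invariant distribution (17.22) and the transition probabilities $P(s'\mid s) = e^{(L)}_{s'} A_{s's} / (\lambda e^{(L)}_s)$ that appear in (17.24) for the familiar 'no two consecutive 1s' channel, then evaluates the conditional entropy (17.23) directly:

```python
import numpy as np

# Example connection matrix (illustrative, not from the exercise): the
# 'no two consecutive 1s' channel.  A[s_new, s_old] = 1 iff the
# transition s_old -> s_new is allowed.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Principal eigenvalue lambda, right eigenvector e^(R), left eigenvector e^(L).
w, V = np.linalg.eig(A)
k = np.argmax(w.real)
lam = w.real[k]
eR = np.abs(V[:, k].real)
wL, U = np.linalg.eig(A.T)                 # left eigenvectors of A
eL = np.abs(U[:, np.argmax(wL.real)].real)

# Invariant distribution P(s) = alpha e^(L)_s e^(R)_s   (17.22).
P = eL * eR
P /= P.sum()

# Transition probabilities, as in (17.24): Q[sp, s] = P(sp | s);
# each column of Q sums to 1.
Q = (eL[:, None] * A) / (lam * eL[None, :])

# Conditional entropy (17.23), skipping the 0 log 0 terms.
H = -sum(P[s] * Q[sp, s] * np.log2(Q[sp, s])
         for s in range(2) for sp in range(2) if Q[sp, s] > 0)

print(H, np.log2(lam))   # both ~0.6942, i.e. log2 of the golden ratio
```

Both printed values agree, as (17.28) predicts.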
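The numbers in exercise 17.11 are easy to reproduce once the connection matrices are in hand. The exercise's matrices are not reproduced on this page, so the matrices below are an assumption: they encode maximum-runlength constraints of 2 and 3 on runs of 1s, chosen because their principal eigenvalues match the quoted values.

```python
import numpy as np

def capacity_bits(A):
    """Capacity of a constrained noiseless channel:
    log2 of the principal eigenvalue of its connection matrix A."""
    lam = max(abs(np.linalg.eigvals(A)))
    return float(np.log2(lam)), float(lam)

# Assumed connection matrices: runs of 1s limited to at most 2 and at
# most 3.  State s = number of trailing 1s; A[s_new, s_old] = 1 iff
# the transition is allowed.
A_run2 = np.array([[1, 1, 1],
                   [1, 0, 0],
                   [0, 1, 0]])
A_run3 = np.array([[1, 1, 1, 1],
                   [1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 1, 0]])

for A in (A_run2, A_run3):
    C, lam = capacity_bits(A)
    print(f"lambda = {lam:.3f}  ->  capacity = {C:.3f} bits")
# -> lambda = 1.839 -> 0.879 bits;  lambda = 1.928 -> 0.947 bits
```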
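Since $Z(\beta)$ is strictly decreasing in $\beta$, equation (17.29) can be solved numerically by bisection. A minimal sketch (function names are mine):

```python
def Z(beta, L):
    """Z(beta) = sum_{l=1}^{L+1} 2^(-beta*l), equation (17.29)."""
    return sum(2.0 ** (-beta * l) for l in range(1, L + 2))

def rll_capacity(L, tol=1e-12):
    """Solve Z(beta) = 1 for beta by bisection (Z decreases in beta)."""
    lo, hi = 0.1, 1.0               # Z(0.1) > 1 > Z(1.0) for L >= 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Z(mid, L) > 1.0:
            lo = mid                # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for L in (2, 3, 9):
    print(f"L={L}: capacity = {rll_capacity(L):.4f} bits,"
          f" lower bound = {1.0 / (1.0 + 2.0 ** -L):.4f}")
# -> L=2: 0.8791 vs 0.8000;  L=3: 0.9468 vs 0.8889;  L=9: 0.9993 vs 0.9981
```

For $L = 2$ and $L = 3$ this reproduces the 0.879 and 0.947 bits quoted in the solution to exercise 17.11 (the same maximum-runlength constraint, since $Z(\beta) = 1$ with $\lambda = 2^{\beta}$ is exactly the eigenvalue equation $\lambda^{L+1} = \sum_{l=0}^{L} \lambda^{l}$), and as $L$ grows the capacity approaches one bit while staying above the lower bound $1/(1 + 2^{-L})$.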
