9 — Communication over a Noisy Channel

[Figure 9.7. Extended channels obtained from a Z channel with transition probability 0.15. Each column corresponds to an input, and each row is a different output. Panels: N = 1, N = 2, N = 4.]

[Figure 9.8. (a) Some typical outputs in A^N_Y corresponding to typical inputs x. (b) A subset of the typical sets shown in (a) that do not overlap each other. This picture can be compared with the solution to the noisy typewriter in figure 9.5.]

Exercise 9.14. [2, p.159] Find the transition probability matrices Q for the extended channel, with N = 2, derived from the binary erasure channel having erasure probability 0.15.

By selecting two columns of this transition probability matrix, we can define a rate-1/2 code for this channel with blocklength N = 2. What is the best choice of two columns? What is the decoding algorithm?

To prove the noisy-channel coding theorem, we make use of large blocklengths N. The intuitive idea is that, if N is large, an extended channel looks a lot like the noisy typewriter. Any particular input x is very likely to produce an output in a small subspace of the output alphabet – the typical output set, given that input. So we can find a non-confusable subset of the inputs that produce essentially disjoint output sequences. For a given N, let us consider a way of generating such a non-confusable subset of the inputs, and count up how many distinct inputs it contains.

Imagine making an input sequence x for the extended channel by drawing it from an ensemble X^N, where X is an arbitrary ensemble over the input alphabet. Recall the source coding theorem of Chapter 4, and consider the number of probable output sequences y. The total number of typical output sequences y is 2^{NH(Y)}, all having similar probability. For any particular typical input sequence x, there are about 2^{NH(Y|X)} probable sequences. Some of these subsets of A^N_Y are depicted by circles in figure 9.8a.

We now imagine restricting ourselves to a subset of the typical inputs x such that the corresponding typical output sets do not overlap, as shown in figure 9.8b. We can then bound the number of non-confusable inputs by dividing the size of the typical y set, 2^{NH(Y)}, by the size of each typical-y-given-typical-x set, 2^{NH(Y|X)}.
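Spelled out, the counting argument above gives the following bound (a restatement using the mutual information I(X;Y) = H(Y) − H(Y|X) introduced in Chapter 8):

\[
\text{number of non-confusable inputs} \;\le\; \frac{2^{NH(Y)}}{2^{NH(Y \mid X)}} \;=\; 2^{N\,(H(Y)-H(Y\mid X))} \;=\; 2^{N I(X;Y)}.
\]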
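As a numerical illustration of these counts, the sketch below evaluates H(Y), H(Y|X) and their difference for the single-use Z channel of figure 9.7 (transition probability f = 0.15), assuming a uniform input distribution; the uniform choice and all variable names are illustrative, not taken from the book.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero terms ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Single-use Z channel with transition probability f = 0.15
# (columns = inputs 0, 1; rows = outputs 0, 1), as in figure 9.7.
f = 0.15
Q = np.array([[1.0, f],
              [0.0, 1.0 - f]])

# Illustrative input distribution (uniform); a capacity calculation
# would optimize over this choice.
p_x = np.array([0.5, 0.5])

p_y = Q @ p_x                                                    # output distribution
H_Y = entropy(p_y)                                               # H(Y)
H_Y_given_X = sum(p_x[j] * entropy(Q[:, j]) for j in range(2))   # H(Y|X)
I_XY = H_Y - H_Y_given_X                                         # I(X;Y)

N = 100  # blocklength of the extended channel
print(f"H(Y) = {H_Y:.3f}  H(Y|X) = {H_Y_given_X:.3f}  I(X;Y) = {I_XY:.3f} bits")
print(f"Typical outputs ~ 2^{N * H_Y:.1f}; per typical input ~ 2^{N * H_Y_given_X:.1f}")
print(f"Non-confusable inputs: at most about 2^{N * I_XY:.1f}")
```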
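Returning to Exercise 9.14, one way to tabulate the N = 2 extended channel numerically is to take the Kronecker product of the single-use transition matrix with itself, which is valid because the channel is memoryless. This is a minimal sketch, not the book's worked solution, and the variable names are mine.

```python
import numpy as np

# Single-use binary erasure channel with erasure probability f = 0.15.
# Columns correspond to inputs (0, 1); rows to outputs (0, ?, 1),
# following the column-per-input convention of figure 9.7.
f = 0.15
Q = np.array([[1.0 - f, 0.0    ],
              [f,       f      ],
              [0.0,     1.0 - f]])

# Because the channel is memoryless, the N = 2 extended channel's
# transition matrix is the Kronecker product Q (x) Q: its columns
# correspond to input pairs 00, 01, 10, 11, and its rows to output
# pairs 00, 0?, 01, ?0, ??, ?1, 10, 1?, 11.
Q2 = np.kron(Q, Q)

print(Q2.shape)        # (9, 4)
print(Q2.sum(axis=0))  # every column sums to 1, as it must
```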
