Symmetry is a useful property because, as we will see in a later chapter, communication at capacity can be achieved over symmetric channels by linear codes.

Exercise 10.10.[2] Prove that for a symmetric channel with any number of inputs, the uniform distribution over the inputs is an optimal input distribution.

⊲ Exercise 10.11.[2, p.174] Are there channels that are not symmetric whose optimal input distributions are uniform? Find one, or prove there are none.

10.7 Other coding theorems

The noisy-channel coding theorem that we proved in this chapter is quite general, applying to any discrete memoryless channel; but it is not very specific. The theorem only says that reliable communication with error probability ε and rate R can be achieved by using codes with sufficiently large blocklength N. The theorem does not say how large N needs to be to achieve given values of R and ε. Presumably, the smaller ε is and the closer R is to C, the larger N has to be.

[Figure 10.8. A typical random-coding exponent E_r(R), falling to zero as R approaches C.]

Noisy-channel coding theorem – version with explicit N-dependence

   For a discrete memoryless channel, a blocklength N and a rate R, there exist block codes of length N whose average probability of error satisfies:

       p_B ≤ exp[−N E_r(R)]                    (10.25)

   where E_r(R) is the random-coding exponent of the channel, a convex ⌣, decreasing, positive function of R for 0 ≤ R < C. The random-coding exponent is also known as the reliability function.

   [By an expurgation argument it can also be shown that there exist block codes for which the maximal probability of error p_BM is also exponentially small in N.]

The definition of E_r(R) is given in Gallager (1968), p. 139. E_r(R) approaches zero as R → C; the typical behaviour of this function is illustrated in figure 10.8. The computation of the random-coding exponent for interesting channels is a challenging task on which much effort has been expended. Even for simple channels like the binary symmetric channel, there is no simple expression for E_r(R).

Lower bounds on the error probability as a function of blocklength

The theorem stated above asserts that there are codes with p_B smaller than exp[−N E_r(R)]. But how small can the error probability be? Could it be much smaller?

   For any code with blocklength N on a discrete memoryless channel, the probability of error, assuming all source messages are used with equal probability, satisfies

       p_B ≳ exp[−N E_sp(R)]                   (10.26)

   where E_sp(R) is the sphere-packing exponent of the channel.
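The following is a minimal numerical sketch of how these exponents behave, using Gallager's parametric forms E_r(R) = max_{0≤ρ≤1} [E_0(ρ) − ρR] and E_sp(R) = sup_{ρ≥0} [E_0(ρ) − ρR] for the binary symmetric channel with the uniform input distribution (optimal here by the symmetry property above). The crossover probability p = 0.1, the target error probability 10⁻⁶, and the grid resolutions are illustrative choices, not values from the text; all logarithms are natural, so the exponents are in nats and can be substituted directly into exp[−N E(R)].

```python
import numpy as np

def E0(rho, p):
    # Gallager's E_0(rho) for a binary symmetric channel with crossover
    # probability p and uniform inputs (natural logs: everything in nats).
    a = 1.0 / (1.0 + rho)
    return rho * np.log(2.0) - (1.0 + rho) * np.log((1.0 - p) ** a + p ** a)

def Er(R, p):
    # Random-coding exponent: max over 0 <= rho <= 1 of E_0(rho) - rho*R.
    rho = np.linspace(0.0, 1.0, 2001)
    return np.max(E0(rho, p) - rho * R)

def Esp(R, p):
    # Sphere-packing exponent: sup over rho >= 0 of E_0(rho) - rho*R;
    # the grid is truncated at rho = 20, which suffices for the rates below.
    rho = np.linspace(0.0, 20.0, 20001)
    return np.max(E0(rho, p) - rho * R)

p = 0.1                                                    # illustrative choice
C = np.log(2.0) + p * np.log(p) + (1 - p) * np.log(1 - p)  # capacity in nats
for frac in (0.25, 0.5, 0.75, 0.9):
    R = frac * C
    er, esp = Er(R, p), Esp(R, p)
    # Blocklength at which the bound (10.25) guarantees p_B <= 1e-6:
    N = int(np.ceil(np.log(1e6) / er))
    print(f"R = {frac:.2f}C: E_r = {er:.4f}, E_sp = {esp:.4f}, "
          f"N >= {N} for p_B <= 1e-6")
```

As R approaches C, the printed E_r(R) falls towards zero and the blocklength guaranteed by (10.25) grows without bound, matching the shape sketched in figure 10.8; and for rates above the critical rate the two exponents coincide, so in that regime the random-coding upper bound and the sphere-packing lower bound agree to first order in the exponent.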
