Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981 — You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

1 — Introduction to Information Theory

[Figure 1.12. Error probability p_b versus rate for repetition codes (R_1, R_3, R_5, R_61) over a binary symmetric channel with f = 0.1: the left panel on a linear scale, the right panel on a logarithmic scale down to 10^-15, with more useful codes toward large rate and small p_b. We would like the rate to be large and p_b to be small.]

The repetition code R_3 has therefore reduced the probability of error, as desired. Yet we have lost something: our rate of information transfer has fallen by a factor of three. So if we use a repetition code to communicate data over a telephone line, it will reduce the error frequency, but it will also reduce our communication rate. We will have to pay three times as much for each phone call. Similarly, we would need three of the original noisy gigabyte disk drives in order to create a one-gigabyte disk drive with p_b = 0.03.

Can we push the error probability lower, to the values required for a sellable disk drive – 10^-15? We could achieve lower error probabilities by using repetition codes with more repetitions.

Exercise 1.3. [3, p.16]

(a) Show that the probability of error of R_N, the repetition code with N repetitions, is

\[
p_b = \sum_{n=(N+1)/2}^{N} \binom{N}{n} f^n (1-f)^{N-n}, \tag{1.24}
\]

for odd N.

(b) Assuming f = 0.1, which of the terms in this sum is the biggest? How much bigger is it than the second-biggest term?

(c) Use Stirling's approximation (p.2) to approximate the \binom{N}{n} in the largest term, and find, approximately, the probability of error of the repetition code with N repetitions.

(d) Assuming f = 0.1, find how many repetitions are required to get the probability of error down to 10^-15. [Answer: about 60.]

So to build a single gigabyte disk drive with the required reliability from noisy gigabyte drives with f = 0.1, we would need sixty of the noisy disk drives. The tradeoff between error probability and rate for repetition codes is shown in figure 1.12. (Worked sketches of this calculation appear at the end of this section.)

Block codes – the (7, 4) Hamming code

We would like to communicate with tiny probability of error and at a substantial rate. Can we improve on repetition codes? What if we add redundancy to blocks of data instead of encoding one bit at a time? We now study a simple block code; a preview sketch is given below.
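For part (c) of Exercise 1.3, one route (a sketch, not the book's solution, which is on p.16) runs as follows. The largest term in the sum (1.24) is the n = (N+1)/2 term, and at f = 0.1 each subsequent term is smaller by roughly a factor of (1-f)/f = 9, so keeping only the largest term is a good approximation. Applying Stirling's approximation \binom{N}{N/2} ≈ 2^N \sqrt{2/(\pi N)} to it,

\[
p_b \approx \binom{N}{(N+1)/2} f^{(N+1)/2} (1-f)^{(N-1)/2}
    \approx \sqrt{\frac{2}{\pi N}}\,\sqrt{\frac{f}{1-f}}\,\bigl(4f(1-f)\bigr)^{N/2}.
\]

For f = 0.1 this decays like (0.6)^N, and setting it equal to 10^-15 gives N ≈ 61, consistent with part (d)'s "about 60".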
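As a numerical check on Exercise 1.3 and figure 1.12, here is a minimal sketch (not from the book; the function names are my own) that evaluates the sum (1.24) exactly and scans for the number of repetitions needed:

```python
# Sketch: exact bit-error probability of the repetition code R_N over a
# binary symmetric channel with flip probability f (equation 1.24).
from math import comb

def p_error(N, f):
    """R_N errs when a majority of the N copies are flipped (odd N)."""
    return sum(comb(N, n) * f**n * (1 - f)**(N - n)
               for n in range((N + 1) // 2, N + 1))

f = 0.1
print(p_error(3, f))   # 0.028: the p_b = 0.03 quoted above for R_3

# Part (d): scan odd N until the error probability reaches the target.
N = 1
while p_error(N, f) > 1e-15:
    N += 2             # odd N only, so the majority vote is unambiguous
print(N)               # smallest odd N meeting 1e-15 -- about 60, as the book states
```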
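Finally, as a preview of the block code just introduced, here is a minimal sketch of a (7, 4) Hamming encoder and single-error corrector. This is not the book's development, which follows in the text; the parity assignment and function names are assumptions, one standard choice made for illustration:

```python
# Sketch: a (7,4) Hamming code sends 4 source bits plus 3 parity bits,
# so the rate is 4/7 -- far better than the 1/3 of R_3.

def hamming74_encode(s):
    """Append three parity bits to the four source bits s1..s4."""
    s1, s2, s3, s4 = s
    return [s1, s2, s3, s4,
            (s1 + s2 + s3) % 2,   # t5: parity of bits 1,2,3
            (s2 + s3 + s4) % 2,   # t6: parity of bits 2,3,4
            (s1 + s3 + s4) % 2]   # t7: parity of bits 1,3,4

def hamming74_correct(r):
    """Re-evaluate the three checks on the received bits; each single-bit
    error violates a unique subset of them, pointing at the bit to flip."""
    z = ((r[0] + r[1] + r[2] + r[4]) % 2,
         (r[1] + r[2] + r[3] + r[5]) % 2,
         (r[0] + r[2] + r[3] + r[6]) % 2)
    where = {(1,0,1): 0, (1,1,0): 1, (1,1,1): 2, (0,1,1): 3,
             (1,0,0): 4, (0,1,0): 5, (0,0,1): 6}
    r = r.copy()
    if z in where:
        r[where[z]] ^= 1
    return r

t = hamming74_encode([1, 0, 1, 1])   # -> [1, 0, 1, 1, 0, 0, 1]
r = t.copy(); r[2] ^= 1              # the channel flips one bit
print(hamming74_correct(r) == t)     # True: any single error is corrected
```

Two or more flips within a block defeat this corrector, so the error probability is reduced, not eliminated.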
