14  Very Good Linear Codes Exist

In this chapter we'll use a single calculation to prove simultaneously the source coding theorem and the noisy-channel coding theorem for the binary symmetric channel.

Incidentally, this proof works for much more general channel models, not only the binary symmetric channel. For example, the proof can be reworked for channels with non-binary outputs, for time-varying channels, and for channels with memory, as long as they have binary inputs satisfying a symmetry property; cf. section 10.6.

14.1  A simultaneous proof of the source coding and noisy-channel coding theorems

We consider a linear error-correcting code with binary parity-check matrix H. The matrix has M rows and N columns. Later in the proof we will increase N and M, keeping M ∝ N. The rate of the code satisfies

    R ≥ 1 − M/N.                                    (14.1)

If all the rows of H are independent then this is an equality, R = 1 − M/N. In what follows, we'll assume the equality holds. Eager readers may work out the expected rank of a random binary matrix H (it's very close to M) and pursue the effect that the difference (M − rank) has on the rest of this proof (it's negligible).

A codeword t is selected, satisfying

    Ht = 0 mod 2,                                   (14.2)

and a binary symmetric channel adds noise x, giving the received signal

    r = t + x mod 2.                                (14.3)

(In this chapter x denotes the noise added by the channel, not the input to the channel.)

The receiver aims to infer both t and x from r using a syndrome-decoding approach. Syndrome decoding was first introduced in section 1.2 (p. 10 and 11). The receiver computes the syndrome

    z = Hr mod 2 = Ht + Hx mod 2 = Hx mod 2.        (14.4)

The syndrome depends only on the noise x, and the decoding problem is to find the most probable x that satisfies

    Hx = z mod 2.                                   (14.5)
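To make equations (14.2)–(14.5) concrete, here is a minimal Python sketch, assuming a small, arbitrarily chosen parity-check matrix H and a single planted bit flip; the exhaustive minimum-weight search merely stands in for the optimal decoder and is feasible only at this toy size.

```python
import itertools
import numpy as np

# Hypothetical (M = 3, N = 6) binary parity-check matrix, chosen
# arbitrarily for this example; nothing in the chapter fixes H.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
M, N = H.shape

# A codeword t satisfies Ht = 0 mod 2 (14.2); the all-zero word
# is always a codeword of a linear code.
t = np.zeros(N, dtype=int)
assert not np.any(H @ t % 2)

# The binary symmetric channel adds noise x, giving r = t + x mod 2
# (14.3); a single planted flip keeps the example deterministic.
x = np.array([0, 0, 1, 0, 0, 0])
r = (t + x) % 2

# The receiver's syndrome depends only on the noise:
# z = Hr = Ht + Hx = Hx mod 2 (14.4).
z = H @ r % 2
assert np.array_equal(z, H @ x % 2)

# Decoding: the most probable x consistent with Hx = z mod 2 (14.5)
# is the minimum-weight solution; exhaustive search over all 2^N
# candidate noise vectors is viable only for toy N like this.
x_hat = min((v for v in itertools.product([0, 1], repeat=N)
             if np.array_equal(H @ np.array(v) % 2, z)),
            key=sum)
print(x_hat)  # (0, 0, 1, 0, 0, 0) -- the planted noise is recovered
```

For a binary symmetric channel with flip probability f < 1/2, lower-weight noise vectors are more probable, which is why the minimum-weight solution of Hx = z is the most probable one.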

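The earlier aside about the rank of a random binary H can also be checked numerically. The sketch below, assuming the illustrative sizes M = 100 and N = 200 (so M ∝ N), row-reduces random binary matrices over GF(2) and reports the average rank deficiency M − rank(H).

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    A = A.copy() % 2
    r = 0
    for c in range(A.shape[1]):
        pivot = np.nonzero(A[r:, c])[0]
        if pivot.size == 0:
            continue                      # no pivot in this column
        p = r + pivot[0]
        A[[r, p]] = A[[p, r]]             # swap pivot row into place
        rows = np.nonzero(A[r + 1:, c])[0] + r + 1
        A[rows] ^= A[r]                   # clear the column below (mod 2)
        r += 1
        if r == A.shape[0]:
            break
    return r

rng = np.random.default_rng(0)
M, N = 100, 200                           # assumed sizes, M = N/2
deficiencies = [M - gf2_rank(rng.integers(0, 2, size=(M, N)))
                for _ in range(200)]
print(np.mean(deficiencies))              # prints a value near 0
```

The average deficiency comes out essentially zero, supporting the claim that (M − rank) is negligible and that treating R = 1 − M/N as an equality is harmless.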