Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981 You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

13.2: Obsession with distance

   w    A(w)
   0       1
   5      12
   8      30
   9      20
  10      72
  11     120
  12     100
  13     180
  14     240
  15     272
  16     345
  17     300
  18     200
  19     120
  20      36
  Total 2048

Figure 13.2. The graph defining the (30, 11) dodecahedron code (the circles are the 30 transmitted bits and the triangles are the 20 parity checks, one of which is redundant) and the weight enumerator function (solid lines). The dotted lines show the average weight enumerator function of all random linear codes with the same size of generator matrix, which will be computed shortly. The lower figure shows the same functions on a log scale.

A bounded-distance decoder is a decoder that returns the closest codeword to a received binary vector r if the distance from r to that codeword is less than or equal to t; otherwise it returns a failure message.

The rationale for not trying to decode when more than t errors have occurred might be 'we can't guarantee that we can correct more than t errors, so we won't bother trying -- who would be interested in a decoder that corrects some error patterns of weight greater than t, but not others?' This defeatist attitude is an example of worst-case-ism, a widespread mental ailment which this book is intended to cure.

The fact is that bounded-distance decoders cannot reach the Shannon limit of the binary symmetric channel; only a decoder that often corrects more than t errors can do this.
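The bounded-distance decoder defined above can be sketched in a few lines. This is a brute-force illustration over an explicit codeword list (real decoders exploit the code's structure rather than enumerating); the toy (3, 1) repetition code used below is my choice of example, not one from the text.

```python
def hamming(u, v):
    """Hamming distance between two equal-length bit tuples."""
    return sum(a != b for a, b in zip(u, v))

def bounded_distance_decode(r, codewords, t):
    """Return the codeword closest to r if it lies within distance t;
    otherwise return None (the 'failure message')."""
    best = min(codewords, key=lambda c: hamming(r, c))
    return best if hamming(r, best) <= t else None

# Toy example: the (3, 1) repetition code, minimum distance d = 3, t = 1.
codewords = [(0, 0, 0), (1, 1, 1)]
print(bounded_distance_decode((0, 0, 1), codewords, t=1))  # (0, 0, 0)
print(bounded_distance_decode((0, 0, 1), codewords, t=0))  # None: 1 error > t
```

With t set below ⌊(d−1)/2⌋ the decoder gives up even on correctable patterns, which is exactly the worst-case-ist behaviour the text criticises.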
The state-of-the-art error-correcting codes have decoders that work way beyond the minimum distance of the code.

Definitions of good and bad distance properties

Given a family of codes of increasing blocklength N, and with rates approaching a limit R > 0, we may be able to put that family in one of the following categories, which have some similarities to the categories of 'good' and 'bad' codes defined earlier (p. 183):

  A sequence of codes has 'good' distance if d/N tends to a constant greater than zero.

  A sequence of codes has 'bad' distance if d/N tends to zero.

  A sequence of codes has 'very bad' distance if d tends to a constant.

Figure 13.3. The graph of a rate-1/2 low-density generator-matrix code. The rightmost M of the transmitted bits are each connected to a single distinct parity constraint. The leftmost K transmitted bits are each connected to a small number of parity constraints.

Example 13.3. A low-density generator-matrix code is a linear code whose K x N generator matrix G has a small number d_0 of 1s per row, regardless of how big N is. The minimum distance of such a code is at most d_0, so low-density generator-matrix codes have 'very bad' distance.

While having large distance is no bad thing, we'll see, later on, why an emphasis on distance can be unhealthy.
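Example 13.3 can be checked numerically: each row of G is itself a codeword (encode a unit-weight message), so the minimum distance cannot exceed the weight of the lightest row, which is at most d_0. A brute-force sketch over a small hypothetical generator matrix with d_0 = 2 ones per row (my illustration, not a code from the text):

```python
import itertools

def codeword_weights(G):
    """Enumerate all 2^K codewords of the binary linear code with
    K x N generator matrix G (arithmetic over GF(2)) and return
    their Hamming weights."""
    K, N = len(G), len(G[0])
    weights = []
    for msg in itertools.product([0, 1], repeat=K):
        cw = [sum(msg[k] * G[k][n] for k in range(K)) % 2 for n in range(N)]
        weights.append(sum(cw))
    return weights

# Hypothetical 3 x 6 low-density generator matrix: d_0 = 2 ones per
# row, so the minimum distance is at most 2 however large N is made.
G = [
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
]
w = codeword_weights(G)
d_min = min(x for x in w if x > 0)
print(d_min)  # 2, confirming d <= d_0
```

Growing N by appending further weight-2 rows leaves d_min pinned at 2, so d/N tends to zero and, in fact, d tends to a constant: 'very bad' distance in the sense defined above.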
