11.4: What are the capabilities of practical error-correcting codes?

[Figure 11.6. A product code. (a) A string 1011 encoded using a concatenated code consisting of two Hamming codes, H(3, 1) and H(7, 4). (b) A noise pattern that flips 5 bits. (c) The received vector. (d) After decoding using the horizontal (3, 1) decoder, and (e) after subsequently using the vertical (7, 4) decoder; the decoded vector matches the original. (d′, e′) After decoding in the other order, three errors still remain.]

Figure 11.6 shows a product code in which the source bits are encoded first with H(3, 1) horizontally, then with H(7, 4) vertically. The blocklength of the concatenated code is 21. The number of source bits per codeword is four, shown by the small rectangle in the figure.

We can decode conveniently (though not optimally) by using the individual decoders for each of the subcodes in some sequence. It makes most sense to decode first the code that has the lowest rate and hence the greatest error-correcting ability.

Figure 11.6(c–e) shows what happens if we receive the codeword of figure 11.6(a) with some errors (five bits flipped, as shown) and apply the decoder for H(3, 1) first, and then the decoder for H(7, 4). The first decoder corrects three of the errors, but erroneously modifies the third bit in the second row, where there are two bit errors. The (7, 4) decoder can then correct all three of these remaining errors.

Figure 11.6(d′–e′) shows what happens if we decode the two codes in the other order. In columns one and two there are two errors each, so the (7, 4) decoder introduces two extra errors. It corrects the one error in column 3. The (3, 1) decoder then cleans up four of the errors, but erroneously infers the second bit, so three errors remain.
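The effect of the two decoding orders can be reproduced with a short simulation. The sketch below is a minimal illustration, not the book's exact construction: it uses one common systematic convention for the Hamming (7, 4) code, and an arbitrary five-bit noise pattern chosen to have the same structure as the figure's (two errors in each of the first two columns, one in the third, with one row hit twice). Decoding (3, 1) first recovers the codeword; decoding (7, 4) first leaves three errors.

```python
import numpy as np

# A systematic Hamming (7, 4) code in one common convention; the bit order in
# MacKay's figure may differ, but the error-correcting behaviour is the same.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hamming_correct(col):
    """Syndrome-decode a length-7 word, flipping at most one bit."""
    s = H @ col % 2
    out = col.copy()
    for i in range(7):
        if np.array_equal(H[:, i], s):
            out[i] ^= 1
            break
    return out

def repetition_correct(row):
    """(3, 1) decoding: majority vote over the three copies, then re-encode."""
    return np.full(3, int(row.sum() >= 2))

source = np.array([1, 0, 1, 1])
column = G.T @ source % 2                    # one (7, 4) codeword, length 7
codeword = np.tile(column[:, None], (1, 3))  # repeat horizontally: 7x3 = 21 bits

# Illustrative noise, five flipped bits: columns 0 and 1 get two errors each,
# column 2 gets one, and row 1 is hit twice (the same structure as the figure).
received = codeword.copy()
for r, c in [(1, 0), (1, 1), (4, 0), (5, 1), (2, 2)]:
    received[r, c] ^= 1

def decode_31_then_74(r):
    r = np.array([repetition_correct(row) for row in r])           # rows first
    return np.array([hamming_correct(r[:, c]) for c in range(3)]).T

def decode_74_then_31(r):
    r = np.array([hamming_correct(r[:, c]) for c in range(3)]).T   # columns first
    return np.array([repetition_correct(row) for row in r])

for name, dec in [("(3,1) then (7,4)", decode_31_then_74),
                  ("(7,4) then (3,1)", decode_74_then_31)]:
    remaining = int((dec(received) != codeword).sum())
    print(f"{name}: {remaining} bit errors remain")   # 0 and 3 respectively
```

The row with two errors defeats the majority vote, but the re-encoded wrong row spreads its three errors one per column, exactly where the vertical (7, 4) decoder can fix them; decoding in the other order concentrates miscorrections instead.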

Interleaving

The motivation for interleaving is that by spreading out bits that are nearby in one code, we make it possible to ignore the complex correlations among the errors that are produced by the inner code. Maybe the inner code will mess up an entire codeword; but that codeword is spread out one bit at a time over several codewords of the outer code. So we can treat the errors introduced by the inner code as if they are independent.
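A toy block interleaver makes the idea concrete. In this sketch the depth and codeword length are hypothetical parameters, not taken from any particular system: four outer codewords are written as the rows of an array and transmitted column by column, so a burst that corrupts several consecutive channel bits touches at most one bit of each outer codeword.

```python
# Toy block interleaver (hypothetical parameters, not tied to any real code):
# outer codewords are written as rows and transmitted column by column.
depth, length = 4, 7          # four outer codewords of seven bits each
data = [f"w{r}b{c}" for r in range(depth) for c in range(length)]

def interleave(seq):
    # read the depth-by-length array column by column
    return [seq[r * length + c] for c in range(length) for r in range(depth)]

def deinterleave(seq):
    out = [None] * (depth * length)
    for i, x in enumerate(seq):
        c, r = divmod(i, depth)
        out[r * length + c] = x
    return out

sent = interleave(data)
burst = set(range(8, 12))     # a burst wiping out four consecutive positions
received = ["?" if i in burst else x for i, x in enumerate(sent)]

rows = deinterleave(received)
for r in range(depth):
    print(rows[r * length:(r + 1) * length])
# Each outer codeword contains at most one '?', so a single-error-correcting
# outer code recovers everything despite the burst.
```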

Other channel models

In addition to the binary symmetric channel and the Gaussian channel, coding theorists keep more complex channels in mind also.

Burst-error channels are important models in practice. Reed–Solomon codes use Galois fields (see Appendix C.1) with large numbers of elements (e.g. 2^16) as their input alphabets, and thereby automatically achieve a degree of burst-error tolerance, in that even if 17 successive bits are corrupted, only 2 successive symbols in the Galois-field representation are corrupted. Concatenation and interleaving can give further protection against burst errors. The concatenated Reed–Solomon codes used on digital compact discs are able to correct bursts of errors of length 4000 bits.
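The 17-bit figure is simple alignment arithmetic: wherever a burst of 17 consecutive bits starts, it can straddle at most one boundary between 16-bit symbols, so at most two symbols are touched. A quick worst-case check over all alignments (the symbol width of 16 bits is the one assumed in the text):

```python
def symbols_corrupted(burst_bits, sym_bits=16):
    """Worst case over all alignments: how many consecutive GF(2^sym_bits)
    symbols can a burst of burst_bits consecutive channel bits touch?"""
    return max((offset + burst_bits - 1) // sym_bits + 1
               for offset in range(sym_bits))

print(symbols_corrupted(17))   # -> 2, as stated in the text
print(symbols_corrupted(16))   # -> 2: even 16 bits can straddle a boundary
print(symbols_corrupted(33))   # -> 3
```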
