Information Theory, Inference, and Learning ... - Inference Group
Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981 You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

47.4: Pictorial demonstration of Gallager codes — 565

Figure 47.7. Demonstration of a Gallager code for a Gaussian channel. (a1) The received vector after transmission over a Gaussian channel with x/σ = 1.185 (E_b/N_0 = 1.47 dB). The greyscale represents the value of the normalized likelihood. This transmission can be perfectly decoded by the sum–product decoder. The empirical probability of decoding failure is about 10^-5. (a2) The probability distribution of the output y of the channel with x/σ = 1.185 for each of the two possible inputs. (b1) The received transmission over a Gaussian channel with x/σ = 1.0, which corresponds to the Shannon limit. (b2) The probability distribution of the output y of the channel with x/σ = 1.0 for each of the two possible inputs.

In figure 47.7 the left picture shows the received vector after transmission over a Gaussian channel with x/σ = 1.185. The greyscale represents the value of the normalized likelihood, P(y | t=1) / [P(y | t=1) + P(y | t=0)]. This signal-to-noise ratio x/σ = 1.185 is a noise level at which this rate-1/2 Gallager code communicates reliably (the probability of error is ≃ 10^-5). To show how close we are to the Shannon limit, the right panel shows the received vector when the signal-to-noise ratio is reduced to x/σ = 1.0, which corresponds to the Shannon limit for codes of rate 1/2.

Figure 47.8. Performance of rate-1/2 Gallager codes on the Gaussian channel. Vertical axis: block error probability. Horizontal axis: signal-to-noise ratio E_b/N_0. (a) Dependence on blocklength N for (j, k) = (3, 6) codes. From left to right: N = 816, N = 408, N = 204, N = 96. The dashed lines show the frequency of undetected errors, which is measurable only when the blocklength is as small as N = 96 or N = 204. (b) Dependence on column weight j for codes of blocklength N = 816.

Variation of performance with code parameters

Figure 47.8 shows how the parameters N and j affect the performance of low-density parity-check codes. As Shannon would predict, increasing the blocklength leads to improved performance. The dependence on j follows a different pattern. Given an optimal decoder, the best performance would be obtained for the codes closest to random codes, that is, the codes with largest j. However, the sum–product decoder makes poor progress in dense graphs, so the best performance is obtained for a small value of j. Among the values
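The normalized likelihood plotted as the greyscale in figure 47.7 can be sketched as follows. This is a minimal sketch assuming binary antipodal (BPSK) signalling, where t = 1 is transmitted as +x and t = 0 as -x over additive Gaussian noise of standard deviation σ; the function name is illustrative, not from the book.

```python
import math

def normalized_likelihood(y, x=1.185, sigma=1.0):
    """P(y|t=1) / (P(y|t=1) + P(y|t=0)) for a Gaussian channel.

    Assumes (illustratively) that t=1 is sent as +x and t=0 as -x,
    with Gaussian noise of standard deviation sigma, so x/sigma is
    the signal-to-noise ratio quoted in the text (1.185 here).
    """
    def gauss(value, mean):
        # Gaussian density N(value; mean, sigma^2)
        return math.exp(-(value - mean) ** 2 / (2 * sigma ** 2)) \
               / (sigma * math.sqrt(2 * math.pi))

    p1 = gauss(y, +x)   # P(y | t = 1)
    p0 = gauss(y, -x)   # P(y | t = 0)
    return p1 / (p1 + p0)
```

Note that the ratio of the two Gaussian densities simplifies to a logistic function of the channel output, 1 / (1 + exp(-2xy/σ²)), so the greyscale is a sigmoid applied to each received value.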
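The (j, k) = (3, 6) codes of figure 47.8(a) have parity-check matrices with j ones per column and k ones per row. One simple way to sample such a matrix can be sketched as follows; the socket-permutation scheme and the function name are illustrative assumptions in the spirit of Gallager's ensemble, not the book's exact construction.

```python
import random

def regular_ldpc_parity_check(n, j, k, seed=0):
    """Sketch of a regular (j, k) low-density parity-check matrix:
    n columns of weight at most j, m = n*j/k rows of weight at most k.

    Illustrative only: repeated placements of the same column in a row
    simply collapse, so a production construction would re-draw the
    permutation to keep the weights exactly j and k.
    """
    assert (n * j) % k == 0, "need n*j divisible by k"
    m = n * j // k
    rng = random.Random(seed)
    H = [[0] * n for _ in range(m)]
    # each column owns j edge "sockets"; a random permutation deals
    # them out k at a time, one group per check (row)
    sockets = [col for col in range(n) for _ in range(j)]
    rng.shuffle(sockets)
    for i, col in enumerate(sockets):
        H[i // k][col] = 1
    return H
```

For N = 96 this yields a 48 × 96 matrix, matching the smallest blocklength shown in figure 47.8(a); the density of ones falls as 1/N, which is what makes the graph sparse enough for the sum–product decoder to work well.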
