Figure 49.1. Factor graphs for a repeat–accumulate code with rate 1/3. (a) Using elementary nodes. Each white circle represents a transmitted bit. Each parity constraint forces the sum of the three bits to which it is connected to be even. Each black circle represents an intermediate binary variable. Each equality constraint forces the three variables to which it is connected to be equal. (b) Factor graph normally used for decoding. The top rectangle represents the trellis of the accumulator, shown in the inset.

Figure 49.2. Performance of six rate-1/3 repeat–accumulate codes on the Gaussian channel. The blocklengths range from N = 204 to N = 30 000. Vertical axis: block error probability; horizontal axis: $E_b/N_0$. The dotted lines show the frequency of undetected errors.

This graph is a factor graph for the prior probability over codewords, with the circles being binary variable nodes, and the squares representing two types of factor nodes. As usual, each parity-check node contributes a factor of the form $[\sum_n x_n = 0 \bmod 2]$; each equality node contributes a factor of the form $[x_1 = x_2 = x_3]$.

49.3 Decoding

The repeat–accumulate code is normally decoded using the sum–product algorithm on the factor graph depicted in figure 49.1b. The top box represents the trellis of the accumulator, including the channel likelihoods. In the first half of each iteration, the trellis receives likelihoods for every transition and runs the forward–backward algorithm to produce likelihoods for each variable node. In the second half of the iteration, these likelihoods are multiplied together at the equality nodes to produce new likelihood messages to send back to the trellis.

As with Gallager codes and turbo codes, the stop-when-it's-done decoding method can be applied, so it is possible to distinguish between undetected errors (which are caused by low-weight codewords in the code) and detected errors (where the decoder gets stuck and knows that it has failed to find a valid answer).

Figure 49.2 shows the performance of six randomly constructed repeat–accumulate codes on the Gaussian channel. If one does not mind the error floor, which kicks in at about a block error probability of $10^{-4}$, the performance is staggeringly good for such a simple code (cf. figure 47.17).
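The two-half schedule just described is concrete enough to sketch in code. The following is a minimal sketch, not MacKay's implementation: a rate-1/3 repeat–accumulate encoder, the forward–backward (BCJR) pass over the two-state accumulator trellis as the first half of an iteration, and the products at the equality nodes as the second half. The blocklength, noise level, and all names are illustrative assumptions; probabilities are carried as P(bit = 1) rather than log-likelihood ratios for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 100                      # number of source bits (an illustrative choice)
N = 3 * K                    # transmitted blocklength for rate 1/3
perm = rng.permutation(N)    # the interleaver

def encode(u):
    """Repeat each source bit three times, permute, then accumulate mod 2."""
    rep = np.repeat(u, 3)[perm]
    return np.cumsum(rep) % 2          # accumulator: t_k = t_{k-1} xor rep_k

def accumulator_bcjr(p_ch1, p_in1):
    """First half of an iteration: forward-backward over the two-state
    accumulator trellis. p_ch1[k] = P(t_k = 1 | channel output); p_in1[k] is
    the incoming message P(u_k = 1) from the equality nodes. Returns the
    extrinsic P(u_k = 1) for every trellis input bit."""
    n = len(p_ch1)
    # Channel factor for the bit t = s ^ u emitted by each transition (s, u).
    g_ch = np.empty((n, 2, 2))
    for s in (0, 1):
        for u in (0, 1):
            g_ch[:, s, u] = p_ch1 if (s ^ u) else 1.0 - p_ch1
    # Full transition weight also includes the incoming prior on u.
    p_u = np.stack([1.0 - p_in1, p_in1], axis=1)
    g = g_ch * p_u[:, None, :]
    alpha = np.zeros((n + 1, 2)); alpha[0, 0] = 1.0  # accumulator starts in state 0
    beta = np.full((n + 1, 2), 0.5)                  # final state unconstrained
    for k in range(n):                               # forward pass
        for s2 in (0, 1):
            alpha[k + 1, s2] = alpha[k, 0] * g[k, 0, s2] + alpha[k, 1] * g[k, 1, 1 ^ s2]
        alpha[k + 1] /= alpha[k + 1].sum()           # normalize for stability
    for k in range(n - 1, -1, -1):                   # backward pass
        for s in (0, 1):
            beta[k, s] = g[k, s, 0] * beta[k + 1, s] + g[k, s, 1] * beta[k + 1, s ^ 1]
        beta[k] /= beta[k].sum()
    # Extrinsic output: leave out the incoming prior (g_ch, not g).
    e = np.zeros((n, 2))
    for u in (0, 1):
        for s in (0, 1):
            e[:, u] += alpha[:-1, s] * g_ch[:, s, u] * beta[1:, s ^ u]
    return e[:, 1] / e.sum(axis=1)

def equality_half(ext):
    """Second half of an iteration: multiply the messages together at the
    equality nodes and send fresh likelihoods back to the trellis."""
    e = np.empty(N); e[perm] = ext                     # undo the interleaver
    trip = np.clip(e.reshape(K, 3), 1e-12, 1 - 1e-12)  # three replicas per source bit
    # Message to each replica = normalized product of the other two messages.
    q1 = np.prod(trip, axis=1, keepdims=True) / trip
    q0 = np.prod(1 - trip, axis=1, keepdims=True) / (1 - trip)
    to_trellis = (q1 / (q1 + q0)).reshape(N)[perm]     # re-interleave
    u_hat = (np.prod(trip, 1) > np.prod(1 - trip, 1)).astype(int)
    return to_trellis, u_hat

# Demo on the Gaussian channel (noise level chosen, as an assumption,
# comfortably inside the waterfall region for this short blocklength).
sigma = 0.7
u = rng.integers(0, 2, K)
y = (1.0 - 2.0 * encode(u)) + sigma * rng.standard_normal(N)  # BPSK: 0 -> +1
p_ch1 = 1.0 / (1.0 + np.exp(2.0 * y / sigma**2))              # P(t_k = 1 | y_k)
msg = np.full(N, 0.5)                                          # flat initial messages
for _ in range(30):
    msg, u_hat = equality_half(accumulator_bcjr(p_ch1, msg))
print("source-bit errors:", int(np.sum(u_hat != u)))
```

Keeping the messages as probabilities makes the two halves easy to read; a production decoder would normally work with log-likelihood ratios instead, for speed and numerical range.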

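To illustrate the detected/undetected distinction of figure 49.2, here is a hedged sketch of a stop-when-it's-done loop, reusing the definitions above. The book's rule halts as soon as message passing settles on a valid answer; as an assumption, settling is approximated here by the tentative decisions being unchanged between consecutive iterations. A block on which the decoder settles on the wrong message is tallied as an undetected error (the signature of a low-weight codeword); a block on which it never settles is a detected error.

```python
def decode_block(p_ch1, max_iters=50):
    """Iterate until the tentative decisions settle, or give up."""
    msg, prev = np.full(N, 0.5), None
    for _ in range(max_iters):
        msg, u_hat = equality_half(accumulator_bcjr(p_ch1, msg))
        if prev is not None and np.array_equal(u_hat, prev):
            return u_hat, True        # settled: the decoder believes its answer
        prev = u_hat
    return u_hat, False               # never settled: a detected failure

# Tally block outcomes as in figure 49.2: settled-but-wrong blocks count as
# undetected errors; blocks that never settle count as detected errors.
tally = {"ok": 0, "undetected": 0, "detected": 0}
for _ in range(20):
    u = rng.integers(0, 2, K)
    y = (1.0 - 2.0 * encode(u)) + sigma * rng.standard_normal(N)
    p_ch1 = 1.0 / (1.0 + np.exp(2.0 * y / sigma**2))
    u_hat, settled = decode_block(p_ch1)
    if np.array_equal(u_hat, u):
        tally["ok"] += 1
    elif settled:
        tally["undetected"] += 1
    else:
        tally["detected"] += 1
print(tally)
```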