

[Figures 49.1(a, b) and 49.2 appear here; see the captions below. The figure 49.2 plot shows block error probability (vertical axis, 1 down to 10⁻⁵) against Eb/N0 (horizontal axis, 1 to 5), with curves labelled from N = 204 up to N = 30000 for both total and undetected errors.]

This graph is a factor graph for the prior probability over codewords, with the circles being binary variable nodes, and the squares representing two types of factor nodes. As usual, each ⊕ node contributes a factor of the form 𝟙[x1 + x2 + x3 = 0 mod 2]; each = node contributes a factor of the form 𝟙[x1 = x2 = x3].
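As a concrete illustration (a minimal Python sketch of my own, not code from the book), the two factor types are just indicator functions, and the product of all factors is 1 exactly when the configuration is a codeword:

```python
def parity_factor(x1, x2, x3):
    """⊕ node: 1 if x1 + x2 + x3 is even, 0 otherwise."""
    return 1 if (x1 + x2 + x3) % 2 == 0 else 0

def equality_factor(x1, x2, x3):
    """= node: 1 if all three variables agree, 0 otherwise."""
    return 1 if x1 == x2 == x3 else 0

def prior_weight(x, parity_triples, equality_triples):
    """Unnormalized prior of a configuration x (a list of bits):
    the product of all factors, i.e. 1 for codewords, 0 otherwise."""
    w = 1
    for i, j, k in parity_triples:
        w *= parity_factor(x[i], x[j], x[k])
    for i, j, k in equality_triples:
        w *= equality_factor(x[i], x[j], x[k])
    return w
```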

49.3 Decoding

The repeat–accumulate code is normally decoded using the sum–product algorithm on the factor graph depicted in figure 49.1b. The top box represents the trellis of the accumulator, including the channel likelihoods. In the first half of each iteration, the top trellis receives likelihoods for every transition in the trellis, and runs the forward–backward algorithm so as to produce likelihoods for each variable node. In the second half of the iteration, these likelihoods are multiplied together at the equality nodes to produce new likelihood messages to send back to the trellis.
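The following Python sketch shows what the two half-iterations might look like. It is an illustration, not the book's code: the function names, the LLR convention (llr = log p(0)/p(1)), and the assumption that the accumulator starts in state 0 are all mine.

```python
import numpy as np

def llr_to_probs(llr):
    """Convert LLRs (convention: llr = log p(0)/p(1)) to [p0, p1] pairs."""
    p1 = 1.0 / (1.0 + np.exp(np.asarray(llr, dtype=float)))
    return np.stack([1.0 - p1, p1], axis=-1)

def accumulator_bcjr(prior_llr, chan_llr):
    """First half-iteration: forward-backward over the 2-state accumulator
    trellis (state s_t = s_{t-1} XOR a_t; the transmitted bit is s_t).
    prior_llr holds the incoming messages for the input bits a_t; chan_llr
    the channel likelihoods for the transmitted bits.  Returns extrinsic
    LLRs for each a_t.  Probability domain for clarity; a real decoder
    would work in the log domain for numerical stability."""
    pa = llr_to_probs(prior_llr)        # messages about the inputs a_t
    ps = llr_to_probs(chan_llr)         # channel evidence about the outputs s_t
    T = len(ps)

    # gamma[t, s, a]: weight of leaving state s with input a at time t
    gamma = np.zeros((T, 2, 2))
    for t in range(T):
        for s in range(2):
            for a in range(2):
                gamma[t, s, a] = pa[t, a] * ps[t, s ^ a]

    alpha = np.zeros((T + 1, 2)); alpha[0, 0] = 1.0   # start in state 0
    for t in range(T):
        for s in range(2):
            for a in range(2):
                alpha[t + 1, s ^ a] += alpha[t, s] * gamma[t, s, a]
        alpha[t + 1] /= alpha[t + 1].sum()            # normalize against underflow

    beta = np.ones((T + 1, 2))
    for t in range(T - 1, -1, -1):
        for s in range(2):
            beta[t, s] = sum(gamma[t, s, a] * beta[t + 1, s ^ a] for a in range(2))
        beta[t] /= beta[t].sum()

    # Extrinsic likelihood for a_t: leave out the incoming prior pa[t, :]
    ext = np.zeros(T)
    for t in range(T):
        post = [sum(alpha[t, s] * ps[t, s ^ a] * beta[t + 1, s ^ a]
                    for s in range(2)) for a in range(2)]
        ext[t] = np.log(post[0] / post[1])   # no guard against zeros: a sketch
    return ext

def repetition_update(ext_llr, assignment, K):
    """Second half-iteration: at each equality node, the message sent back
    down edge t is the sum (product of likelihoods) of the extrinsic LLRs
    of the *other* copies of the same source bit.
    assignment[t] = index of the source bit of which a_t is a copy."""
    totals = np.zeros(K)
    for t, k in enumerate(assignment):
        totals[k] += ext_llr[t]
    return np.array([totals[k] - ext_llr[t] for t, k in enumerate(assignment)])
```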

As with Gallager codes and turbo codes, the stop-when-it's-done decoding method can be applied, so it is possible to distinguish between undetected errors (which are caused by low-weight codewords in the code) and detected errors (where the decoder gets stuck and knows that it has failed to find a valid answer).
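As a sketch of how such a stopping rule might look in code (again hypothetical, building on the functions above; the book's exact validity criterion may differ), the decoder halts as soon as the hard decisions on the trellis edges satisfy every equality constraint; exhausting the iteration limit is a detected error, while a returned answer can still be an undetected error if the decoder converged to a wrong codeword.

```python
def decode_ra(chan_llr, assignment, K, max_iters=50):
    """Decoding loop with a stop-when-it's-done rule (sketch).
    Returns (source_bits, iterations) when the per-edge hard decisions
    satisfy every equality constraint, or (None, max_iters) for a
    detected error."""
    T = len(chan_llr)
    msg = np.zeros(T)                                # flat initial messages
    for it in range(max_iters):
        ext = accumulator_bcjr(msg, chan_llr)        # first half-iteration
        msg = repetition_update(ext, assignment, K)  # second half-iteration
        hard = [(1 if ext[t] + msg[t] < 0 else 0) for t in range(T)]
        copies = [set() for _ in range(K)]           # decisions per source bit
        for t, k in enumerate(assignment):
            copies[k].add(hard[t])
        if all(len(c) == 1 for c in copies):         # all equality checks agree
            return [c.pop() for c in copies], it + 1
    return None, max_iters                           # decoder knows it failed
```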

Figure 49.2 shows the performance of six randomly-constructed repeat–accumulate codes on the Gaussian channel. If one does not mind the error floor which kicks in at about a block error probability of 10⁻⁴, the performance is staggeringly good for such a simple code (cf. figure 47.17).

[Figure 49.1 appears here; the accumulator trellis inset has two states, labelled 0 and 1.]

Figure 49.1. Factor graphs for a repeat–accumulate code with rate 1/3. (a) Using elementary nodes. Each white circle represents a transmitted bit. Each ⊕ constraint forces the sum of the 3 bits to which it is connected to be even. Each black circle represents an intermediate binary variable. Each = constraint forces the three variables to which it is connected to be equal. (b) Factor graph normally used for decoding. The top rectangle represents the trellis of the accumulator, shown in the inset.

Figure 49.2. Performance of six rate-1/3 repeat–accumulate codes on the Gaussian channel. The blocklengths range from N = 204 to N = 30 000. Vertical axis: block error probability; horizontal axis: Eb/N0. The dotted lines show the frequency of undetected errors.
