[Figure 49.3: (a) block error probability (total, detected, undetected) versus E_b/N_0 for the RA code; (ii.b), (ii.c) histograms of decoding times at two signal-to-noise ratios; (iii.b), (iii.c) power-law fits to those histograms. Caption below.]

49.4 Empirical distribution of decoding times

It is interesting to study the number of iterations τ of the sum–product algorithm required to decode a sparse-graph code. Given one code and a set of channel conditions, the decoding time varies randomly from trial to trial. We find that the histogram of decoding times follows a power law, P(τ) ∝ τ^{-p}, for large τ. The power p depends on the signal-to-noise ratio and becomes smaller (so that the distribution is more heavy-tailed) as the signal-to-noise ratio decreases. We have observed power laws in repeat–accumulate codes and in irregular and regular Gallager codes. Figures 49.3(ii) and (iii) show the distribution of decoding times of a repeat–accumulate code at two different signal-to-noise ratios. The power laws extend over several orders of magnitude.

Figure 49.3. Histograms of the number of iterations needed to find a valid decoding for a repeat–accumulate code with source block length K = 10 000 and transmitted block length N = 30 000. (a) Block error probability versus signal-to-noise ratio for the RA code. (ii.b) Histogram for x/σ = 0.89, E_b/N_0 = 0.749 dB. (ii.c) x/σ = 0.90, E_b/N_0 = 0.846 dB. (iii.b, iii.c) Fits of power laws to (ii.b) (1/τ^6) and (ii.c) (1/τ^9).

Exercise 49.1.[5] Investigate these power laws. Does density evolution predict them? Can the design of a code be used to manipulate the power law in a useful way?

49.5 Generalized parity-check matrices

I find that it is helpful, when relating sparse-graph codes to each other, to use a common representation for them all. Forney (2001) introduced the idea of a normal graph, in which the only nodes are parity-check (+) nodes and equality (=) nodes, and all variable nodes have degree one or two; a variable node of degree two can be represented by an edge that connects a + node to an = node. The generalized parity-check matrix is a graphical way of representing normal graphs. In a parity-check matrix, the columns are transmitted bits and the rows are linear constraints. In a generalized parity-check matrix, additional columns may be included, which represent state variables that are not transmitted. One way of thinking of these state variables is that they are punctured from the code before transmission. State variables are indicated by a horizontal line above the corresponding columns. The other pieces of diagrammatic notation for generalized parity-check matrices …
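To make the state-variable idea concrete, here is a minimal numerical sketch in Python. The matrix, the split into transmitted and state columns, and the example codeword are all invented for illustration and are not a code from this chapter; the point is only that the parity constraints (the rows) act on transmitted bits and state variables alike, while the state columns are dropped, i.e. punctured, before transmission.

import numpy as np

# A toy generalized parity-check matrix (hypothetical, for illustration only):
# columns 0-3 are transmitted bits, columns 4-5 are state variables
# (the columns the book marks with a horizontal bar).  Each row is a
# parity constraint over GF(2) on the full set of variables.
H = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=np.uint8)
STATE_COLS = {4, 5}

def is_valid(x):
    """A full assignment (transmitted bits AND state variables) is valid
    iff it satisfies every parity constraint, H x = 0 (mod 2)."""
    return not ((H @ x) % 2).any()

def puncture(x):
    """Drop the state-variable columns: only the remaining bits are sent."""
    keep = [j for j in range(len(x)) if j not in STATE_COLS]
    return x[keep]

x = np.array([1, 1, 1, 1, 0, 0], dtype=np.uint8)   # one valid full codeword
assert is_valid(x)
print(puncture(x))   # the four bits actually transmitted: [1 1 1 1]

At the receiving end, one way to view decoding is that the punctured state columns are treated as erasures whose values the sum–product algorithm infers alongside the noisy transmitted bits.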
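Returning to the decoding-time power laws of section 49.4 and exercise 49.1: the sketch below shows one way the exponent p in P(τ) ∝ τ^{-p} could be estimated from a sample of empirical decoding times, using the standard maximum-likelihood fit for a power-law tail. The decoder and channel routines named in the usage comment are hypothetical placeholders, not functions defined in this book, and the tail cut-off τ_min is an assumption.

import numpy as np

def fit_power_law_exponent(iteration_counts, tau_min=20):
    """Maximum-likelihood estimate of p in P(tau) ~ tau^(-p), fitted to the
    tail tau >= tau_min of a sample of decoding times (iteration counts)."""
    tau = np.asarray(iteration_counts, dtype=float)
    tail = tau[tau >= tau_min]
    # Continuous-power-law MLE; adequate for checking an exponent that
    # holds over a few decades, as in figure 49.3(iii).
    p_hat = 1.0 + len(tail) / np.sum(np.log(tail / tau_min))
    return p_hat, len(tail)

# Hypothetical usage -- decode() and add_noise() stand in for a sum-product
# decoder (returning its iteration count) and a channel simulation:
# counts = [decode(add_noise(codeword, sigma)) for _ in range(10_000)]
# p_hat, n_tail = fit_power_law_exponent(counts)
# print(f"estimated exponent p = {p_hat:.2f} from {n_tail} tail samples")

For the two conditions shown in figure 49.3, a fit of this kind would be expected to return exponents near 6 and 9, matching the 1/τ^6 and 1/τ^9 lines quoted in the caption.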
