Figure 47.2. Factor graphs associated with a low-density parity-check code.
(a) The prior distribution over codewords, $P(t) \propto [Ht = 0]$. The variable nodes are the transmitted bits $\{t_n\}$. Each factor node represents the factor $[\sum_{n \in \mathcal{N}(m)} t_n = 0 \bmod 2]$.
(b) The posterior distribution over codewords, $P(t \mid r) \propto P(t)\, P(r \mid t)$. Each upper function node represents a likelihood factor $P(r_n \mid t_n)$.
(c) The joint probability of the noise $n$ and syndrome $z$, $P(n, z) = P(n)\,[z = Hn]$. The top variable nodes are now the noise bits $\{n_n\}$. The added variable nodes at the base are the syndrome values $\{z_m\}$. Each definition $z_m = \sum_n H_{mn} n_n \bmod 2$ is enforced by a factor.

The codeword decoding viewpoint

First, we note that the prior distribution over codewords,

  $P(t) \propto [Ht = 0 \bmod 2]$,   (47.2)

can be represented by a factor graph (figure 47.2a), with the factorization being

  $P(t) \propto \prod_m \bigl[\sum_{n \in \mathcal{N}(m)} t_n = 0 \bmod 2\bigr]$.   (47.3)

(We'll omit the 'mod 2's from now on.) The posterior distribution over codewords is given by multiplying this prior by the likelihood, which introduces another $N$ factors, one for each received bit:

  $P(t \mid r) \propto P(t)\, P(r \mid t) \propto \prod_m \bigl[\sum_{n \in \mathcal{N}(m)} t_n = 0\bigr] \prod_n P(r_n \mid t_n)$.   (47.4)

The factor graph corresponding to this function is shown in figure 47.2b. It is the same as the graph for the prior, except for the addition of likelihood 'dongles' to the transmitted bits.

In this viewpoint, the received signal $r_n$ can live in any alphabet; all that matters are the values of $P(r_n \mid t_n)$.

The syndrome decoding viewpoint

Alternatively, we can view the channel output in terms of a binary received vector $r$ and a noise vector $n$, with a probability distribution $P(n)$ that can be derived from the channel properties and whatever additional information is available at the channel outputs.

For example, with a binary symmetric channel, we define the noise by $r = t + n$, the syndrome $z = Hr$, and the noise model $P(n_n = 1) = f$ (see the numerical sketch at the end of this section). For other channels such as the Gaussian channel with output $y$, we may define a received
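
To make the two viewpoints concrete, here is a minimal numerical sketch of the binary symmetric channel example above. It is not from the book: the small parity-check matrix H, the example codeword t, and the flip probability f are made-up toy values, and NumPy is assumed. The sketch checks the prior factor $[Ht = 0 \bmod 2]$ and confirms that the syndrome $z = Hr$ depends only on the noise, since $Hr = H(t + n) = Hn \bmod 2$.

import numpy as np

rng = np.random.default_rng(0)

# Toy parity-check matrix (3 checks on 7 bits), standing in for a
# low-density parity-check matrix H.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

# A hypothetical codeword: the prior factor [Ht = 0 mod 2] is satisfied.
t = np.array([1, 0, 1, 0, 0, 1, 0])
assert np.all(H @ t % 2 == 0)

# Binary symmetric channel: flip each bit independently with probability f.
f = 0.1
n = (rng.random(t.size) < f).astype(int)   # noise bits, P(n_n = 1) = f
r = (t + n) % 2                            # received vector r = t + n mod 2

# Syndrome decoding viewpoint: z = Hr mod 2 equals Hn mod 2, because Ht = 0.
z = H @ r % 2
assert np.all(z == H @ n % 2)
print("syndrome z =", z)

The same $z$ is what the factor graph of figure 47.2c conditions on: decoding can then be phrased as inferring a probable noise vector $n$ consistent with the observed syndrome.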
