which $\mathbf{x}$ and $\mathbf{z}$ live is then the original graph (figure 47.2a) whose edges are defined by the 1s in $\mathbf{H}$. The graph contains nodes of two types, which we'll call checks and bits. The graph connecting the checks and bits is a bipartite graph: bits connect only to checks, and vice versa. On each iteration, a probability ratio is propagated along each edge in the graph, and each bit node $x_n$ updates its probability that it should be in state 1.

We denote the set of bits $n$ that participate in check $m$ by $\mathcal{N}(m) \equiv \{n : H_{mn} = 1\}$. Similarly we define the set of checks in which bit $n$ participates, $\mathcal{M}(n) \equiv \{m : H_{mn} = 1\}$. We denote a set $\mathcal{N}(m)$ with bit $n$ excluded by $\mathcal{N}(m)\backslash n$. The algorithm has two alternating parts, in which quantities $q_{mn}$ and $r_{mn}$ associated with each edge in the graph are iteratively updated. The quantity $q^x_{mn}$ is meant to be the probability that bit $n$ of $\mathbf{x}$ has the value $x$, given the information obtained via checks other than check $m$. The quantity $r^x_{mn}$ is meant to be the probability of check $m$ being satisfied if bit $n$ of $\mathbf{x}$ is considered fixed at $x$ and the other bits have a separable distribution given by the probabilities $\{q_{mn'} : n' \in \mathcal{N}(m)\backslash n\}$. The algorithm would produce the exact posterior probabilities of all the bits after a fixed number of iterations if the bipartite graph defined by the matrix $\mathbf{H}$ contained no cycles.

Initialization. Let $p^0_n = P(x_n = 0)$ (the prior probability that bit $x_n$ is 0), and let $p^1_n = P(x_n = 1) = 1 - p^0_n$. If we are taking the syndrome decoding viewpoint and the channel is a binary symmetric channel then $p^1_n$ will equal $f$. If the noise level varies in a known way (for example if the channel is a binary-input Gaussian channel with a real output) then $p^1_n$ is initialized to the appropriate normalized likelihood. For every $(n, m)$ such that $H_{mn} = 1$ the variables $q^0_{mn}$ and $q^1_{mn}$ are initialized to the values $p^0_n$ and $p^1_n$ respectively.

Horizontal step. In the horizontal step of the algorithm (horizontal from the point of view of the matrix $\mathbf{H}$), we run through the checks $m$ and compute for each $n \in \mathcal{N}(m)$ two probabilities: first, $r^0_{mn}$, the probability of the observed value of $z_m$ arising when $x_n = 0$, given that the other bits $\{x_{n'} : n' \neq n\}$ have a separable distribution given by the probabilities $\{q^0_{mn'}, q^1_{mn'}\}$, defined by:

\[
r^0_{mn} = \sum_{\{x_{n'} :\, n' \in \mathcal{N}(m)\backslash n\}} P\!\left( z_m \,\middle|\, x_n = 0, \{ x_{n'} : n' \in \mathcal{N}(m)\backslash n \} \right) \prod_{n' \in \mathcal{N}(m)\backslash n} q^{x_{n'}}_{mn'} \tag{47.7}
\]

and second, $r^1_{mn}$, the probability of the observed value of $z_m$ arising when $x_n = 1$, defined by:

\[
r^1_{mn} = \sum_{\{x_{n'} :\, n' \in \mathcal{N}(m)\backslash n\}} P\!\left( z_m \,\middle|\, x_n = 1, \{ x_{n'} : n' \in \mathcal{N}(m)\backslash n \} \right) \prod_{n' \in \mathcal{N}(m)\backslash n} q^{x_{n'}}_{mn'}. \tag{47.8}
\]

The conditional probabilities in these summations are either zero or one, depending on whether the observed $z_m$ matches the hypothesized values for $x_n$ and the $\{x_{n'}\}$.

These probabilities can be computed in various obvious ways based on equations (47.7) and (47.8).
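For concreteness, here is a minimal Python sketch of the most direct of those ways: evaluating (47.7) and (47.8) by explicit enumeration of the $2^{|\mathcal{N}(m)|-1}$ settings of the other bits in each check. The dense-array representation and the function name are illustrative assumptions, not taken from the text.

```python
import itertools

import numpy as np

def horizontal_step_bruteforce(H, q0, q1, z):
    """Brute-force horizontal step: equations (47.7) and (47.8).

    H      -- (M, N) parity-check matrix of 0s and 1s
    q0, q1 -- (M, N) arrays holding q^0_{mn}, q^1_{mn} on the edges where H[m, n] = 1
    z      -- length-M observed syndrome
    """
    M, N = H.shape
    r0, r1 = np.zeros((M, N)), np.zeros((M, N))
    for m in range(M):
        nbrs = list(np.flatnonzero(H[m]))            # N(m)
        for n in nbrs:
            others = [n2 for n2 in nbrs if n2 != n]  # N(m)\n
            for xn in (0, 1):
                total = 0.0
                # Sum over all settings of the other bits in the check.
                for xs in itertools.product((0, 1), repeat=len(others)):
                    # P(z_m | x_n, others) is 1 if the parity matches z_m, else 0.
                    if (xn + sum(xs)) % 2 == z[m]:
                        p = 1.0
                        for n2, x2 in zip(others, xs):
                            p *= q0[m, n2] if x2 == 0 else q1[m, n2]
                        total += p
                (r0 if xn == 0 else r1)[m, n] = total
    return r0, r1
```

The cost of this enumeration grows exponentially with the check weight $|\mathcal{N}(m)|$, which motivates the following observation.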
The computations may be done most efficiently (if $|\mathcal{N}(m)|$ is large) by regarding $z_m + x_n$ as the final state of a Markov chain with states 0 and 1, this chain being started in state 0, and undergoing transitions corresponding to additions of the various $x_{n'}$, with transition probabilities given by the corresponding $q^0_{mn'}$ and $q^1_{mn'}$. The probabilities for $z_m$ having its observed value given either $x_n = 0$ or $x_n = 1$ can then be found efficiently by use of the forward–backward algorithm (section 25.3).
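Here is a sketch of that linear-time alternative, under the same illustrative representation as above: the running parity within each check serves as the two-state chain, and a forward pass over prefixes is combined with a backward pass over suffixes to obtain, for each bit $n$, the parity distribution of the other bits. The packaging into forward and backward arrays is a restatement of the trick, not code from the text.

```python
import numpy as np

def horizontal_step_forward_backward(H, q0, q1, z):
    """Horizontal step in O(|N(m)|) per check via a forward-backward pass
    over the two-state parity chain described in the text."""
    M, N = H.shape
    r0, r1 = np.zeros((M, N)), np.zeros((M, N))
    for m in range(M):
        nbrs = list(np.flatnonzero(H[m]))
        k = len(nbrs)
        # fwd[i] = distribution of the parity of bits nbrs[0..i-1];
        # the chain starts in state 0, so fwd[0] = (1, 0).
        fwd = np.zeros((k + 1, 2))
        fwd[0] = (1.0, 0.0)
        for i, n in enumerate(nbrs):
            fwd[i + 1, 0] = fwd[i, 0] * q0[m, n] + fwd[i, 1] * q1[m, n]
            fwd[i + 1, 1] = fwd[i, 0] * q1[m, n] + fwd[i, 1] * q0[m, n]
        # bwd[i] = distribution of the parity of bits nbrs[i..k-1].
        bwd = np.zeros((k + 1, 2))
        bwd[k] = (1.0, 0.0)
        for i in range(k - 1, -1, -1):
            n = nbrs[i]
            bwd[i, 0] = q0[m, n] * bwd[i + 1, 0] + q1[m, n] * bwd[i + 1, 1]
            bwd[i, 1] = q1[m, n] * bwd[i + 1, 0] + q0[m, n] * bwd[i + 1, 1]
        for i, n in enumerate(nbrs):
            # Parity distribution of the bits other than n: combine prefix and suffix.
            p_even = fwd[i, 0] * bwd[i + 1, 0] + fwd[i, 1] * bwd[i + 1, 1]
            p_odd = fwd[i, 0] * bwd[i + 1, 1] + fwd[i, 1] * bwd[i + 1, 0]
            # Check m is satisfied when x_n plus that parity equals z_m (mod 2).
            r0[m, n] = p_even if z[m] == 0 else p_odd
            r1[m, n] = p_odd if z[m] == 0 else p_even
    return r0, r1
```

On any inputs the two sketches should agree; the second replaces the $O(2^{|\mathcal{N}(m)|})$ enumeration with two linear sweeps per check.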
