25 — Exact Marginalization in Trellises

   α_i = ∑_{j ∈ P(i)} w_{ij} α_j.                                    (25.13)

These messages can be computed sequentially from left to right.

⊲ Exercise 25.5.[2] Show that for a node i whose time-coordinate is n, α_i is proportional to the joint probability that the codeword's path passed through node i and that the first n received symbols were y_1, . . . , y_n.

The message α_I computed at the end node of the trellis is proportional to the marginal probability of the data.

⊲ Exercise 25.6.[2] What is the constant of proportionality? [Answer: 2^K]

We define a second set of backward-pass messages β_i in a similar manner. Let node I be the end node.

   β_I = 1
   β_j = ∑_{i: j ∈ P(i)} w_{ij} β_i.                                 (25.14)

These messages can be computed sequentially in a backward pass from right to left.

⊲ Exercise 25.7.[2] Show that for a node i whose time-coordinate is n, β_i is proportional to the conditional probability, given that the codeword's path passed through node i, that the subsequent received symbols were y_{n+1} . . . y_N.

Finally, to find the probability that the nth bit was a 1 or 0, we do two summations of products of the forward and backward messages. Let i run over nodes at time n and j run over nodes at time n − 1, and let t_{ij} be the value of t_n associated with the trellis edge from node j to node i. For each value of t = 0/1, we compute

   r_n^(t) = ∑_{i,j: j ∈ P(i), t_{ij} = t} α_j w_{ij} β_i.           (25.15)

Then the posterior probability that t_n was t = 0/1 is

   P(t_n = t | y) = (1/Z) r_n^(t),                                   (25.16)

where the normalizing constant Z = r_n^(0) + r_n^(1) should be identical to the final forward message α_I that was computed earlier.

Exercise 25.8.[2] Confirm that the above sum–product algorithm does compute P(t_n = t | y).

Other names for the sum–product algorithm presented here are ‘the forward–backward algorithm’, ‘the BCJR algorithm’, and ‘belief propagation’.

⊲ Exercise 25.9.[2, p.333] A codeword of the simple parity code P_3 is transmitted, and the received signal y has associated likelihoods shown in table 25.4. Use the min–sum algorithm and the sum–product algorithm in the trellis (figure 25.1) to solve the MAP codeword decoding problem and the bitwise decoding problem. Confirm your answers by enumeration of all codewords (000, 011, 110, 101). [Hint: use logs to base 2 and do the min–sum computations by hand. When working the sum–product algorithm by hand, you may find it helpful to use three colours of pen, one for the αs, one for the ws, and one for the βs.]

Table 25.4. Bitwise likelihoods for a codeword of P_3.

   n   P(y_n | t_n = 0)   P(y_n | t_n = 1)
   1         1/4                1/2
   2         1/2                1/4
   3         1/8                1/2
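The following is a minimal computational sketch, not taken from the book, of the sum–product recursions (25.13)–(25.16) running on the trellis of P_3 with the likelihoods of table 25.4. The state labelling (the running parity of the bits emitted so far), the variable names and the edges helper are my own; the trellis is assumed to match figure 25.1 in that only even-parity paths reach the end node. The bitwise posteriors are checked against brute-force enumeration of the four codewords, as exercise 25.9 suggests.

```python
from itertools import product

# Bitwise likelihoods P(y_n | t_n) from table 25.4, indexed by [n][t_n].
likelihood = [
    {0: 1/4, 1: 1/2},   # n = 1
    {0: 1/2, 1: 1/4},   # n = 2
    {0: 1/8, 1: 1/2},   # n = 3
]
N = 3

# Assumed trellis for P3: a state is the running parity of the bits so far.
# Only even-parity paths reach the end node, so the surviving paths are
# exactly the codewords 000, 011, 110, 101.
states = [[0], [0, 1], [0, 1], [0]]

def edges(n):
    """Edges from time n to time n+1: (prev_state, next_state, bit, weight w_ij)."""
    out = []
    for s in states[n]:
        for bit in (0, 1):
            s_next = s ^ bit
            if s_next in states[n + 1]:
                out.append((s, s_next, bit, likelihood[n][bit]))
    return out

# Forward pass, eq. (25.13): alpha_i = sum_{j in P(i)} w_ij alpha_j.
alpha = [{s: 0.0 for s in states[n]} for n in range(N + 1)]
alpha[0][0] = 1.0
for n in range(N):
    for s, s_next, bit, w in edges(n):
        alpha[n + 1][s_next] += w * alpha[n][s]

# Backward pass, eq. (25.14): beta_I = 1, beta_j = sum_{i: j in P(i)} w_ij beta_i.
beta = [{s: 0.0 for s in states[n]} for n in range(N + 1)]
beta[N][0] = 1.0
for n in reversed(range(N)):
    for s, s_next, bit, w in edges(n):
        beta[n][s] += w * beta[n + 1][s_next]

Z = alpha[N][0]   # final forward message alpha_I, proportional to P(y)

# Bitwise posteriors, eqs. (25.15)-(25.16):
# r_n^(t) = sum over edges carrying t of alpha_j w_ij beta_i, P(t_n = t | y) = r_n^(t) / Z.
for n in range(N):
    r = {0: 0.0, 1: 0.0}
    for s, s_next, bit, w in edges(n):
        r[bit] += alpha[n][s] * w * beta[n + 1][s_next]
    assert abs(r[0] + r[1] - Z) < 1e-12      # Z = r_n^(0) + r_n^(1) at every n
    print(f"P(t_{n+1} = 1 | y) = {r[1] / Z:.4f}")

# Check by enumerating the codewords of P3, as exercise 25.9 suggests.
post = {}
for cw in product((0, 1), repeat=N):
    if sum(cw) % 2:                          # keep even-parity words only
        continue
    p = 1.0
    for n, bit in enumerate(cw):
        p *= likelihood[n][bit]
    post[cw] = p
print("Z by enumeration:", sum(post.values()), "  alpha_I:", Z)
print("MAP codeword:", max(post, key=post.get))
```

With these likelihoods the final forward message comes out as Z = α_I = 3/16, the MAP codeword is 101, and the bitwise posteriors P(t_n = 1 | y) are 3/4, 1/4 and 5/6, in agreement with the enumeration. The min–sum pass asked for in exercise 25.9 would run on the same trellis with edge costs −log_2 P(y_n | t_n), replacing the sums by minimizations and the products by additions.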
