25.3 Solving the decoding problems on a trellis

        Likelihood                               Posterior marginals
   n    P(y_n | t_n = 1)   P(y_n | t_n = 0)      P(t_n = 1 | y)   P(t_n = 0 | y)
   1          0.1                 0.9                 0.061             0.939
   2          0.4                 0.6                 0.674             0.326
   3          0.9                 0.1                 0.746             0.254
   4          0.1                 0.9                 0.061             0.939
   5          0.1                 0.9                 0.061             0.939
   6          0.1                 0.9                 0.061             0.939
   7          0.3                 0.7                 0.659             0.341

Figure 25.3. Marginal posterior probabilities for the 7 bits under the posterior distribution of figure 25.2.

Exercise 25.4.[2, p.333] Find the most probable codeword in the case where the normalized likelihood is (0.2, 0.2, 0.9, 0.2, 0.2, 0.2, 0.2). Also find or estimate the marginal posterior probability for each of the seven bits, and give the bit-by-bit decoding. [Hint: concentrate on the few codewords that have the largest probability.]

We now discuss how to use message passing on a code's trellis to solve the decoding problems.

The min–sum algorithm

The MAP codeword decoding problem can be solved using the min–sum algorithm that was introduced in section 16.3. Each codeword of the code corresponds to a path across the trellis. Just as the cost of a journey is the sum of the costs of its constituent steps, the log likelihood of a codeword is the sum of the bitwise log likelihoods. By convention, we flip the sign of the log likelihood (which we would like to maximize) and talk in terms of a cost, which we would like to minimize.

We associate with each edge a cost -log P(y_n | t_n), where t_n is the transmitted bit associated with that edge and y_n is the received symbol. The min–sum algorithm presented in section 16.3 can then identify the most probable codeword in a number of computer operations equal to the number of edges in the trellis. This algorithm is also known as the Viterbi algorithm (Viterbi, 1967).

The sum–product algorithm

To solve the bitwise decoding problem, we can make a small modification to the min–sum algorithm, so that the messages passed through the trellis define 'the probability of the data up to the current point' instead of 'the cost of the best route to this point'. We replace the costs on the edges, -log P(y_n | t_n), by the likelihoods themselves, P(y_n | t_n), and we replace the min and sum operations of the min–sum algorithm by a sum and a product respectively.

Let i run over nodes/states, let i = 0 be the label for the start state, let P(i) denote the set of states that are parents of state i, and let w_ij be the likelihood associated with the edge from node j to node i. We define the forward-pass messages α_i by

   α_0 = 1,    α_i = Σ_{j ∈ P(i)} w_ij α_j.
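To make the two passes concrete, here is a minimal Python sketch (not from the book). It assumes a trellis given as a list of sections, one per transmitted bit, with each section a list of edges (from_state, to_state, bit) and state 0 as both the start and end state; the function names, the edge-list representation, and the toy two-bit repetition-code trellis at the bottom are illustrative assumptions, not MacKay's notation. min_sum accumulates edge costs -log P(y_n | t_n) and keeps one back-pointer table per section so the MAP codeword can be read off by a single backward trace; forward_messages accumulates the α messages of the sum–product forward pass.

```python
# A minimal sketch (not from the book) of the two trellis-based decoders
# described above.  Assumptions: the trellis is a list of sections, one per
# transmitted bit; each section is a list of edges (from_state, to_state, bit);
# state 0 is both the start and the end state; likelihood[n][b] = P(y_n | t_n = b).
# All names and the toy example are illustrative, not MacKay's.

import math
from collections import defaultdict


def min_sum(trellis, likelihood):
    """Min-sum (Viterbi): return (minimum cost, most probable bit sequence)."""
    cost = {0: 0.0}                 # best cost of reaching each state so far
    back = []                       # one back-pointer table per trellis section
    for n, section in enumerate(trellis):
        new_cost, pointers = {}, {}
        for j, i, bit in section:   # edge j -> i labelled with transmitted bit
            if j not in cost:
                continue
            c = cost[j] - math.log(likelihood[n][bit])   # add edge cost -log P(y_n|t_n)
            if i not in new_cost or c < new_cost[i]:
                new_cost[i] = c
                pointers[i] = (j, bit)
        back.append(pointers)
        cost = new_cost
    # Trace back from the end state (assumed to be state 0) to recover the bits.
    bits, state = [], 0
    for pointers in reversed(back):
        state, bit = pointers[state]
        bits.append(bit)
    return cost[0], bits[::-1]


def forward_messages(trellis, likelihood):
    """Sum-product forward pass: alpha_0 = 1, alpha_i = sum_j w_ij * alpha_j."""
    alpha = {0: 1.0}
    alphas = [alpha]
    for n, section in enumerate(trellis):
        new_alpha = defaultdict(float)
        for j, i, bit in section:
            if j in alpha:
                new_alpha[i] += likelihood[n][bit] * alpha[j]   # w_ij * alpha_j
        alpha = dict(new_alpha)
        alphas.append(alpha)
    return alphas


if __name__ == "__main__":
    # Toy trellis for a two-bit repetition code {00, 11}: the state remembers
    # the first bit, and the second bit must agree with it.
    trellis = [
        [(0, 0, 0), (0, 1, 1)],     # bit 1: state 0 if a 0 was sent, state 1 if a 1
        [(0, 0, 0), (1, 0, 1)],     # bit 2: repeat the first bit, return to state 0
    ]
    likelihood = [{0: 0.9, 1: 0.1}, {0: 0.4, 1: 0.6}]   # P(y_n | t_n = b)
    print(min_sum(trellis, likelihood))           # -> (1.02..., [0, 0])
    print(forward_messages(trellis, likelihood))  # final alpha at state 0 is P(y) = 0.42
```

Both functions touch each trellis edge exactly once, matching the operation count quoted above for the min–sum algorithm.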
