9 — Communication over a Noisy Channel

Some useful model channels are:

Binary symmetric channel. A_X = {0, 1}. A_Y = {0, 1}.
    P(y = 0 | x = 0) = 1 − f;    P(y = 0 | x = 1) = f;
    P(y = 1 | x = 0) = f;        P(y = 1 | x = 1) = 1 − f.

Binary erasure channel. A_X = {0, 1}. A_Y = {0, ?, 1}.
    P(y = 0 | x = 0) = 1 − f;    P(y = 0 | x = 1) = 0;
    P(y = ? | x = 0) = f;        P(y = ? | x = 1) = f;
    P(y = 1 | x = 0) = 0;        P(y = 1 | x = 1) = 1 − f.

Noisy typewriter. A_X = A_Y = the 27 letters {A, B, ..., Z, -}. The letters are arranged in a circle, and when the typist attempts to type B, what comes out is either A, B or C, with probability 1/3 each; when the input is C, the output is B, C or D; and so forth, with the final letter '-' adjacent to the first letter A.
    P(y = F | x = G) = 1/3;    P(y = G | x = G) = 1/3;    P(y = H | x = G) = 1/3;    ...

Z channel. A_X = {0, 1}. A_Y = {0, 1}.
    P(y = 0 | x = 0) = 1;    P(y = 0 | x = 1) = f;
    P(y = 1 | x = 0) = 0;    P(y = 1 | x = 1) = 1 − f.

9.4 Inferring the input given the output

If we assume that the input x to a channel comes from an ensemble X, then we obtain a joint ensemble XY in which the random variables x and y have the joint distribution:

    P(x, y) = P(y | x) P(x).    (9.3)

Now if we receive a particular symbol y, what was the input symbol x? We typically won't know for certain. We can write down the posterior distribution of the input using Bayes' theorem:

    P(x | y) = P(y | x) P(x) / P(y) = P(y | x) P(x) / Σ_{x'} P(y | x') P(x').    (9.4)

Example 9.1. Consider a binary symmetric channel with probability of error f = 0.15. Let the input ensemble be P_X: {p_0 = 0.9, p_1 = 0.1}. Assume we observe y = 1. Then

    P(x = 1 | y = 1) = P(y = 1 | x = 1) P(x = 1) / Σ_{x'} P(y = 1 | x') P(x')
                     = (0.85 × 0.1) / (0.85 × 0.1 + 0.15 × 0.9)
                     = 0.085 / 0.22
                     = 0.39.    (9.5)
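To make the posterior computation of equation (9.4) concrete, here is a minimal Python sketch (not from the book; the dictionary representation and the function name `posterior` are illustrative choices) that encodes a channel as its conditional distribution P(y | x) and applies Bayes' theorem to recover P(x | y), reproducing the numbers of Example 9.1.

```python
# Sketch: posterior over the channel input given the observed output,
# using Bayes' theorem  P(x | y) = P(y | x) P(x) / sum_x' P(y | x') P(x').
# A channel is a dict mapping each input symbol x to {output symbol y: P(y | x)}.

f = 0.15  # error probability of the binary symmetric channel

binary_symmetric = {
    '0': {'0': 1 - f, '1': f},
    '1': {'0': f,     '1': 1 - f},
}

def posterior(channel, prior, y):
    """Return P(x | y) for every input symbol x in the prior."""
    # Unnormalised posterior: P(y | x) P(x) for each x.
    joint = {x: channel[x].get(y, 0.0) * prior[x] for x in prior}
    # Normalising constant P(y) = sum over x' of P(y | x') P(x').
    p_y = sum(joint.values())
    return {x: joint[x] / p_y for x in joint}

# Example 9.1: prior P_X = {p_0 = 0.9, p_1 = 0.1}, observe y = 1.
prior = {'0': 0.9, '1': 0.1}
print(posterior(binary_symmetric, prior, '1'))
# -> {'0': 0.6136..., '1': 0.3863...}, i.e. P(x = 1 | y = 1) ≈ 0.39, matching (9.5).
```

The same function works for the other model channels; for instance, the binary erasure channel would be written with the extra output symbol '?', and the Z channel with P(y = 1 | x = 0) = 0.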
