

given-typical-$x$ set, $2^{NH(Y|X)}$. So the number of non-confusable inputs, if they are selected from the set of typical inputs $x \sim X^N$, is $\le 2^{NH(Y)-NH(Y|X)} = 2^{NI(X;Y)}$.

The maximum value of this bound is achieved if $X$ is the ensemble that maximizes $I(X;Y)$, in which case the number of non-confusable inputs is $\le 2^{NC}$. Thus asymptotically up to $C$ bits per cycle, and no more, can be communicated with vanishing error probability. ✷

This sketch has not rigorously proved that reliable communication really is possible – that's our task for the next chapter.
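As a rough worked example of this counting argument, take a binary symmetric channel with $f = 0.1$ driven by a uniform input over blocks of length $N = 1000$ (illustrative values, not from the text); the following minimal Python sketch evaluates the bound $2^{NI(X;Y)}$:

```python
import numpy as np

def H2(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Illustrative values: BSC flip probability and block length.
f, N = 0.1, 1000
I = 1 - H2(f)   # I(X;Y) for a binary symmetric channel with uniform input
print(f"I(X;Y) = {I:.3f} bits per cycle")
print(f"non-confusable inputs <= 2^(N I) = 2^{N * I:.0f}")
```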

9.8 Further exercises

Exercise 9.15.[3, p.159] Refer back to the computation of the capacity of the Z channel with f = 0.15.

(a) Why is $p_1^*$ less than 0.5? One could argue that it is good to favour the 0 input, since it is transmitted without error – and also argue that it is good to favour the 1 input, since it often gives rise to the highly prized 1 output, which allows certain identification of the input! Try to make a convincing argument.

(b) In the case of general f, show that the optimal input distribution is
$$p_1^* = \frac{1/(1-f)}{1 + 2^{(H_2(f)/(1-f))}}. \qquad (9.19)$$

(c) What happens to $p_1^*$ if the noise level f is very close to 1?
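A numerical cross-check of parts (a)–(c) is straightforward: maximize the Z-channel mutual information $I(X;Y) = H_2(p_1(1-f)) - p_1 H_2(f)$ over $p_1$ by a grid search and compare with the closed form (9.19). A minimal Python sketch (helper names and the grid are illustrative choices, not part of the exercise):

```python
import numpy as np

def H2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def I_Z(p1, f):
    """I(X;Y) for the Z channel (a transmitted 1 is flipped to 0 with probability f)."""
    return H2(p1 * (1 - f)) - p1 * H2(f)

def p1_star(f):
    """Closed-form optimal input weight of equation (9.19)."""
    return (1 / (1 - f)) / (1 + 2 ** (H2(f) / (1 - f)))

f = 0.15
grid = np.linspace(1e-6, 1 - 1e-6, 200001)
print(grid[np.argmax(I_Z(grid, f))], p1_star(f))  # both ~ 0.445, i.e. below 0.5
print(p1_star(0.9999))   # part (c): evaluate near f = 1 to see the limiting behaviour
```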

Exercise 9.16.[2, p.159] Sketch graphs of the capacity of the Z channel, the binary symmetric channel and the binary erasure channel as a function of f.
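The curves for this exercise follow from results in this chapter: $C_{\mathrm{BSC}} = 1 - H_2(f)$, $C_{\mathrm{BEC}} = 1 - f$, and the Z-channel capacity obtained by substituting (9.19) into $I(X;Y)$. A minimal plotting sketch in Python (grid and plotting details are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def H2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

f = np.linspace(1e-3, 1 - 1e-3, 500)
C_bsc = 1 - H2(f)                                   # binary symmetric channel
C_bec = 1 - f                                       # binary erasure channel
p1 = (1 / (1 - f)) / (1 + 2 ** (H2(f) / (1 - f)))   # optimal input weight, eq. (9.19)
C_z = H2(p1 * (1 - f)) - p1 * H2(f)                 # Z channel: I(X;Y) at that optimum

for C, label in [(C_bsc, "BSC"), (C_bec, "BEC"), (C_z, "Z channel")]:
    plt.plot(f, C, label=label)
plt.xlabel("f"); plt.ylabel("capacity (bits)"); plt.legend(); plt.show()
```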

⊲ Exercise 9.17.[2] What is the capacity of the five-input, ten-output channel whose transition probability matrix is
$$\begin{bmatrix}
0.25 & 0    & 0    & 0    & 0.25 \\
0.25 & 0    & 0    & 0    & 0.25 \\
0.25 & 0.25 & 0    & 0    & 0    \\
0.25 & 0.25 & 0    & 0    & 0    \\
0    & 0.25 & 0.25 & 0    & 0    \\
0    & 0.25 & 0.25 & 0    & 0    \\
0    & 0    & 0.25 & 0.25 & 0    \\
0    & 0    & 0.25 & 0.25 & 0    \\
0    & 0    & 0    & 0.25 & 0.25 \\
0    & 0    & 0    & 0.25 & 0.25
\end{bmatrix} \qquad (9.20)$$
where the columns correspond to the inputs 0, 1, 2, 3, 4 and the rows to the outputs 0, 1, \ldots, 9?
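For a channel like this one the capacity can also be evaluated numerically by alternately optimizing the input distribution, the Blahut–Arimoto algorithm (not covered in this chapter). A minimal Python sketch for a general transition matrix with rows indexed by outputs and columns by inputs:

```python
import numpy as np

def mutual_information(p, Q):
    """I(X;Y) in bits for input distribution p, with Q[j, i] = P(y=j | x=i)."""
    r = Q @ p                                       # output distribution P(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(Q > 0, Q * np.log2(Q / r[:, None]), 0.0)
    return float(p @ terms.sum(axis=0))

def blahut_arimoto(Q, n_iter=500):
    """Maximize I(X;Y) over input distributions by the Blahut-Arimoto iteration."""
    n_out, n_in = Q.shape
    p = np.full(n_in, 1.0 / n_in)                   # start from the uniform input
    for _ in range(n_iter):
        r = Q @ p
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(Q > 0, Q * np.log(Q / r[:, None]), 0.0).sum(axis=0)
        p = p * np.exp(d)                           # multiplicative update
        p /= p.sum()
    return p, mutual_information(p, Q)

# Transition matrix of equation (9.20): rows = outputs 0-9, columns = inputs 0-4.
Q = np.array([
    [0.25, 0,    0,    0,    0.25],
    [0.25, 0,    0,    0,    0.25],
    [0.25, 0.25, 0,    0,    0   ],
    [0.25, 0.25, 0,    0,    0   ],
    [0,    0.25, 0.25, 0,    0   ],
    [0,    0.25, 0.25, 0,    0   ],
    [0,    0,    0.25, 0.25, 0   ],
    [0,    0,    0.25, 0.25, 0   ],
    [0,    0,    0,    0.25, 0.25],
    [0,    0,    0,    0.25, 0.25],
])
p_opt, C = blahut_arimoto(Q)
print(C, p_opt)
```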

Exercise 9.18.[2, p.159] Consider a Gaussian channel with binary input $x \in \{-1, +1\}$ and real output alphabet $A_Y$, with transition probability density
$$Q(y \mid x, \alpha, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y - x\alpha)^2}{2\sigma^2}}, \qquad (9.21)$$
where $\alpha$ is the signal amplitude.

(a) Compute the posterior probability of $x$ given $y$, assuming that the two inputs are equiprobable. Put your answer in the form
$$P(x = 1 \mid y, \alpha, \sigma) = \frac{1}{1 + e^{-a(y)}}. \qquad (9.22)$$
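Part (a) is an application of Bayes' theorem to the two likelihoods $Q(y \mid x = \pm 1, \alpha, \sigma)$ with equal priors. A minimal numerical sketch in Python (the values of $\alpha$, $\sigma$ and $y$ are arbitrary illustrations):

```python
import numpy as np

def Q(y, x, alpha, sigma):
    """Transition density of equation (9.21)."""
    return np.exp(-(y - x * alpha) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def posterior_x1(y, alpha, sigma):
    """P(x = +1 | y) for equiprobable inputs, by Bayes' theorem."""
    q_plus, q_minus = Q(y, +1, alpha, sigma), Q(y, -1, alpha, sigma)
    return q_plus / (q_plus + q_minus)

alpha, sigma = 1.0, 0.5          # arbitrary illustrative values
for y in (-1.0, 0.0, 0.3, 1.0):
    print(y, posterior_x1(y, alpha, sigma))
```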
