Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/0521642981 You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

1.6: Solutions

A slightly more careful answer (short of explicit computation) goes as follows. Taking the approximation for (N choose K) to the next order, we find:

    (N choose N/2) ≃ 2^N · 1/√(2πN/4).                              (1.40)

This approximation can be proved from an accurate version of Stirling's approximation (1.12), or by considering the binomial distribution with p = 1/2 and noting

    1 = Σ_K (N choose K) 2^{-N}
      ≃ 2^{-N} (N choose N/2) Σ_{r=-N/2}^{N/2} e^{-r²/2σ²}
      ≃ 2^{-N} (N choose N/2) √(2π) σ,                              (1.41)

where σ = √(N/4), from which equation (1.40) follows. The distinction between ⌈N/2⌉ and N/2 is not important in this term since (N choose K) has a maximum at K = N/2.

Then the probability of error (for odd N) is to leading order

    p_b ≃ (N choose (N+1)/2) f^{(N+1)/2} (1 − f)^{(N−1)/2}          (1.42)
        ≃ 2^N · 1/√(πN/2) · f [f(1 − f)]^{(N−1)/2}
        ≃ 1/√(πN/8) · f [4f(1 − f)]^{(N−1)/2}.                      (1.43)

The equation p_b = 10^{-15} can be written

    (N − 1)/2 ≃ [log 10^{-15} + log(√(πN/8)/f)] / log 4f(1 − f),    (1.44)

which may be solved for N iteratively, the first iteration starting from N̂_1 = 68:

    (N̂_2 − 1)/2 ≃ (−15 + 1.7)/(−0.44) = 29.9  ⇒  N̂_2 ≃ 60.9.      (1.45)

[In equation (1.44), the logarithms can be taken to any base, as long as it's the same base throughout. In equation (1.45), I use base 10.]

This answer is found to be stable, so N ≃ 61 is the blocklength at which p_b ≃ 10^{-15}.

Solution to exercise 1.6 (p.13).

(a) The probability of block error of the Hamming code is a sum of six terms: the probabilities that 2, 3, 4, 5, 6, or 7 errors occur in one block.

    p_B = Σ_{r=2}^{7} (7 choose r) f^r (1 − f)^{7−r}.               (1.46)

To leading order, this goes as

    p_B ≃ (7 choose 2) f² = 21 f².                                  (1.47)

(b) The probability of bit error of the Hamming code is smaller than the probability of block error because a block error rarely corrupts all bits in the decoded block.
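The fixed-point iteration of equations (1.44)–(1.45) above can be reproduced in a few lines. The following is a minimal sketch (not from the text); it assumes f = 0.1, which is consistent with the quoted values log₁₀ 4f(1 − f) ≃ −0.44 and log₁₀(√(πN/8)/f) ≃ 1.7:

```python
# Fixed-point iteration of (1.44)-(1.45) for the repetition-code blocklength.
# A sketch: f = 0.1 is assumed (it reproduces the logarithm values quoted in
# (1.45)); the function name next_N is mine, not the book's.
from math import log10, pi, sqrt

def next_N(N, f=0.1, p_b=1e-15):
    """One iteration of (1.44): new estimate of N from the previous one."""
    half = (log10(p_b) + log10(sqrt(pi * N / 8) / f)) / log10(4 * f * (1 - f))
    return 2 * half + 1

N = 68.0                     # starting point, N_hat_1 = 68
for _ in range(10):
    N = next_N(N)
# N settles near 61, the blocklength at which p_b ~ 1e-15
```

The first iteration already lands near 60.9, matching (1.45), and further iterations confirm the stability of the answer N ≃ 61.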
The leading-order behaviour is found by considering the outcome in the most probable case, where the noise vector has weight two. The decoder will erroneously flip a third bit, so that the modified received vector (of length 7) differs in three bits from the transmitted vector. That means, if we average over all seven bits, the probability that a randomly chosen bit is flipped is 3/7 times the block error probability, to leading order. Now, what we really care about is the probability that
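The 3/7 argument can be checked by simulation. Below is a sketch (not from the text): it syndrome-decodes the (7,4) Hamming code using a standard parity-check matrix whose columns are the seven nonzero 3-bit syndromes, sends the all-zero codeword over a binary symmetric channel, and compares the measured bit error rate against the leading-order prediction (3/7) · 21f² = 9f². The value f = 0.02 and the trial count are arbitrary illustrative choices.

```python
# Monte Carlo check that p_b ~ (3/7) p_B ~ 9 f^2 for the (7,4) Hamming code.
# A sketch, not the book's code; H_COLS is one standard parity-check matrix.
import random
from math import comb

# Column i of H is the syndrome produced by a single error in bit t_{i+1}.
H_COLS = [(1, 0, 1), (1, 1, 0), (1, 1, 1), (0, 1, 1),
          (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def decode(r):
    """Syndrome-decode a received 7-bit list, flipping at most one bit."""
    s = tuple(sum(H_COLS[i][j] * r[i] for i in range(7)) % 2 for j in range(3))
    if s != (0, 0, 0):
        i = H_COLS.index(s)          # unique column matching the syndrome
        r = r[:i] + [1 - r[i]] + r[i + 1:]
    return r

def p_block(f):
    """Exact block-error probability, equation (1.46)."""
    return sum(comb(7, r) * f**r * (1 - f)**(7 - r) for r in range(2, 8))

def bit_error_rate(f, trials, seed=0):
    """Monte Carlo bit error rate, averaged over all seven bits."""
    rng = random.Random(seed)
    bit_errors = 0
    for _ in range(trials):
        # Transmit the all-zero codeword through a BSC with flip prob f.
        r = [1 if rng.random() < f else 0 for _ in range(7)]
        bit_errors += sum(decode(r))  # decoded bits differing from zero
    return bit_errors / (7 * trials)

f = 0.02
p_b = bit_error_rate(f, 200_000)
# Leading order: p_b ~ (3/7) * 21 f^2 = 9 f^2; the simulation comes out
# slightly below this because of the (1 - f)^5 factor in (1.46).
```

A weight-one noise vector is always corrected; a weight-two noise vector makes the decoder flip a third bit, producing exactly the three-bit discrepancy described above, which is why the simulated rate tracks 9f².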
