

We have assumed that the mean square value of the signal x(t) is constrained to have a value S, and the mean square value of the noise is N. We shall also assume that the signal x(t) and the noise n(t) are independent. In such a case, the mean square value of y will be the sum of the mean square values of x and n. Hence,

$$\overline{y^2} = S + N$$
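
As a quick numerical sanity check (a sketch with assumed powers, not taken from the text), the following Python snippet confirms that for independent zero-mean x and n the mean square value of y = x + n is close to S + N:

```python
# Sanity check of y^2 = S + N for independent zero-mean signals.
# S and N below are illustrative values, not from the text.
import numpy as np

rng = np.random.default_rng(0)
S, N = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(S), 1_000_000)   # signal samples, mean square ~ S
n = rng.normal(0.0, np.sqrt(N), 1_000_000)   # independent noise samples, mean square ~ N
y = x + n

print(np.mean(y**2))                         # ~5.0, i.e., S + N
```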

For a given noise [given H(n)], I(x; y) is maximum when H(y) is maximum. We have seen that for a given mean square value of y ($\overline{y^2} = S + N$), H(y) will be maximum if y is Gaussian, and the maximum entropy $H_{\max}(y)$ is then given by

$$H_{\max}(y) = \frac{1}{2}\log\left[2\pi e(S + N)\right] \tag{13.61}$$
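
To make Eq. (13.61) concrete, here is a small Monte Carlo sketch (the values of S and N are assumed, and a base-2 logarithm is used so the entropy comes out in bits; none of this is part of the text) comparing an estimate of H(y) for Gaussian y against the closed form:

```python
# Monte Carlo check of Eq. (13.61): differential entropy of a Gaussian y with
# mean square value S + N. Values of S and N are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, N = 4.0, 1.0
var = S + N

y = rng.normal(0.0, np.sqrt(var), 1_000_000)
log_pdf = -0.5 * np.log(2 * np.pi * var) - y**2 / (2 * var)  # ln p(y) for the Gaussian pdf
H_mc = -np.mean(log_pdf) / np.log(2)                         # entropy estimate in bits
H_formula = 0.5 * np.log2(2 * np.pi * np.e * var)            # (1/2) log2[2*pi*e*(S+N)]

print(H_mc, H_formula)                                       # both ~3.21 bits per sample
```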

Because

$$y = x + n$$

and n is Gaussian, y will be Gaussian only if x is Gaussian. As the mean square value of x is S, this implies that

$$p(x) = \frac{1}{\sqrt{2\pi S}}\, e^{-x^2/2S}$$

and

$$I_{\max}(x; y) = H_{\max}(y) - H(n) = \frac{1}{2}\log\left[2\pi e(S + N)\right] - H(n)$$

For a white Gaussian noise with mean square value N,

$$H(n) = \frac{1}{2}\log 2\pi e N, \qquad N = \mathcal{N}B$$

and

$$C_s = I_{\max}(x; y) = \frac{1}{2}\log\left(1 + \frac{S}{N}\right) \tag{13.62a}$$

$$C_s = \frac{1}{2}\log\left(1 + \frac{S}{\mathcal{N}B}\right) \tag{13.62b}$$
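
A minimal numeric sketch of Eqs. (13.62) follows, with assumed values for the signal power, the noise power spectral density $\mathcal{N}$ (written N_psd below), and the bandwidth B; none of these numbers come from the text:

```python
# Per-sample capacity of Eqs. (13.62a)/(13.62b). All numbers are illustrative
# assumptions; N_psd stands for the noise power spectral density (script N in the text).
import numpy as np

S = 1e-3          # assumed signal power, W
N_psd = 1e-8      # assumed noise power spectral density, W/Hz
B = 3000.0        # assumed channel bandwidth, Hz

N = N_psd * B                      # in-band noise power, N = (script N) * B
Cs = 0.5 * np.log2(1 + S / N)      # bits per sample, Eq. (13.62a)

print(N, Cs)                       # N = 3e-5 W, Cs ~ 2.55 bits per sample
```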

The channel capacity per second will be the maximum information that can be transmitted per second. Equations (13.62) represent the maximum information transmitted per sample. Because a signal of bandwidth B is specified by 2B samples per second, if all the samples are statistically independent, the total information transmitted per second will be 2B times $C_s$. If the samples are not independent, then the total information will be less than $2BC_s$. Because the channel capacity C represents the maximum possible information transmitted per second,

$$C = 2B\left[\frac{1}{2}\log\left(1 + \frac{S}{N}\right)\right] = B\log\left(1 + \frac{S}{N}\right)\ \text{bit/s} \tag{13.63}$$
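
As a worked illustration of Eq. (13.63) (the bandwidth and signal-to-noise ratio below are assumed, not taken from the text), a 3 kHz channel at a 30 dB signal-to-noise ratio has a capacity of roughly 30 kbit/s, and the result equals 2B samples per second times $C_s$ bits per sample:

```python
# Shannon capacity of Eq. (13.63): C = B log2(1 + S/N) bit/s.
# Bandwidth and SNR below are assumed, illustrative numbers.
import numpy as np

B = 3000.0                      # assumed bandwidth, Hz
snr_db = 30.0                   # assumed signal-to-noise ratio, dB
snr = 10 ** (snr_db / 10)       # S/N as a power ratio (1000)

Cs = 0.5 * np.log2(1 + snr)     # bits per sample, Eq. (13.62a)
C = B * np.log2(1 + snr)        # bit/s, Eq. (13.63)

print(C, 2 * B * Cs)            # both ~29,900 bit/s
```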
