
B. P. Lathi, Zhi Ding - Modern Digital and Analog Communication Systems-Oxford University Press (2009)


INTRODUCTION TO INFORMATION THEORY

(in the limit as $\Delta x, \Delta y \to 0$),

$$R_1 = \lim_{\Delta x \to 0} \left( -\log \Delta x \right)$$

$$R_2 = \lim_{\Delta y \to 0} \left( -\log \Delta y \right)$$

and

$$R_1 - R_2 = \log 2 = 1 \ \text{bit}$$

Thus $R_1$, the reference entropy of $x$, is higher than the reference entropy $R_2$ for $y$. Hence, if $x$ and $y$ have equal absolute entropies, their differential (relative) entropies must differ by 1 bit.
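
As a quick numerical illustration of this 1-bit difference (a sketch only: it assumes the two variables are related by a factor-of-two scaling, y = 2x, which is what the $\log 2$ result above implies, and it takes x to be Gaussian purely for concreteness), a histogram estimate of the two differential entropies differs by about 1 bit:

import numpy as np

rng = np.random.default_rng(0)

def diff_entropy_bits(samples, bins=400):
    # Histogram estimate of differential entropy in bits: -sum f * log2(f) * dx
    f, edges = np.histogram(samples, bins=bins, density=True)
    dx = edges[1] - edges[0]
    f = f[f > 0]
    return -np.sum(f * np.log2(f) * dx)

x = rng.standard_normal(1_000_000)   # x taken Gaussian only for concreteness
y = 2 * x                            # assumed scaling, consistent with the 1-bit result above

print(diff_entropy_bits(y) - diff_entropy_bits(x))   # approximately 1 bit
# Exact check for Gaussians: h = 0.5*log2(2*pi*e*sigma^2), so h(y) - h(x) = log2(2) = 1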

Maximum Entropy for a Given Mean Square Value of x

For discrete random variables, we observed that entropy was maximum when all the outcomes (messages) were equally likely (uniform probability distribution). For continuous random variables, there also exists a PDF $p(x)$ that maximizes $H(x)$ in Eq. (13.32). In the case of a continuous distribution, however, we may have additional constraints on $x$. Either the maximum value of $x$ or the mean square value of $x$ may be given. We shall find here the PDF $p(x)$ that will yield maximum entropy when $\overline{x^2}$ is given to be a constant $\sigma^2$. The problem, then, is to maximize $H(x)$:

$$H(x) = \int_{-\infty}^{\infty} p(x) \log \frac{1}{p(x)} \, dx \qquad (13.33)$$

with the constraints

$$\int_{-\infty}^{\infty} p(x) \, dx = 1 \qquad (13.34a)$$

$$\int_{-\infty}^{\infty} x^2 p(x) \, dx = \sigma^2 \qquad (13.34b)$$
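
Before the analytic maximization, a quick numerical sanity check of the problem as just posed can be made by discretizing $p(x)$ on a grid and maximizing the entropy subject to the two constraints (a sketch only, with an arbitrary grid and $\sigma = 1$; this is not the book's method, which follows below). The numerical maximizer comes out close to a Gaussian PDF:

import numpy as np
from scipy.optimize import minimize

# Discretized version of Eqs. (13.33)-(13.34): maximize entropy over p_i >= 0
# subject to unit area and a fixed mean square value (sigma = 1, grid chosen arbitrarily).
sigma = 1.0
x = np.linspace(-6.0, 6.0, 121)
dx = x[1] - x[0]

def neg_entropy(p):
    return np.sum(p * np.log(p)) * dx          # -H(x); natural log gives the same maximizer

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) * dx - 1.0},               # Eq. (13.34a)
    {"type": "eq", "fun": lambda p: np.sum(x**2 * p) * dx - sigma**2},   # Eq. (13.34b)
)

p0 = np.full_like(x, 1.0 / (x[-1] - x[0]))      # start from a uniform density
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(1e-12, None)] * x.size,
               constraints=constraints, options={"maxiter": 500})

gaussian = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(np.max(np.abs(res.x - gaussian)))         # small: the maximizer is essentially Gaussian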

To solve this problem, we use a theorem from the calculus of variations. Given the integral $I$,

$$I = \int_a^b F(x, p) \, dx \qquad (13.35)$$
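
For reference, here is a sketch of where this leads (the book's own derivation continues beyond this excerpt; the multipliers $\lambda_1, \lambda_2$ and the use of the natural logarithm are conveniences introduced here). Adjoining the constraints (13.34a) and (13.34b) to $H(x)$ with Lagrange multipliers and setting the derivative of the integrand with respect to $p$ to zero gives

$$\frac{\partial}{\partial p}\left[ p \log \frac{1}{p} + \lambda_1 p + \lambda_2 x^2 p \right] = 0
\quad \Longrightarrow \quad
-(1 + \log p) + \lambda_1 + \lambda_2 x^2 = 0$$

so that $p(x) = e^{\lambda_1 - 1} e^{\lambda_2 x^2}$. Choosing $\lambda_1$ and $\lambda_2$ (with $\lambda_2 < 0$) to satisfy Eqs. (13.34a) and (13.34b) yields the Gaussian PDF

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/2\sigma^2},
\qquad
H(x) = \frac{1}{2} \log \left( 2\pi e \sigma^2 \right)$$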
