
B. P. Lathi and Zhi Ding, Modern Digital and Analog Communication Systems, Oxford University Press (2009)


13.4 Channel Capacity of a Discrete Memoryless Channel

Figure 13.4 Binary symmetric channel capacity Cs as a function of error probability Pe. [Plot: Cs on the vertical axis versus error probability Pe on the horizontal axis, both running from 0 to 1.0.]

From Fig. 13.4, which shows Cs vs. Pe, it follows that the maximum value of Cs is unity. This means we can transmit at most 1 bit of information per binary digit. This is the expected result, because one binary digit can convey one of the two equiprobable messages, and the information content of one of two equiprobable messages is log2 2 = 1 bit. Second, we observe that Cs is maximum when the error probability Pe = 0 or Pe = 1. When the error probability Pe = 0, the channel is noiseless, and we expect Cs to be maximum. But surprisingly, Cs is also maximum when Pe = 1. This is easy to explain, because a channel that consistently and with certainty makes errors is as good as a noiseless channel. All we have to do to have error-free reception is reverse the decision that is made; that is, if 0 is received, we decide that 1 was actually sent, and vice versa. The channel capacity Cs is zero (minimum) when Pe = 1/2. If the error probability is 1/2, then the transmitted symbols and the received symbols are statistically independent. If we received 0, for example, either 1 or 0 is equally likely to have been transmitted, and the information received is zero.
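The curve in Fig. 13.4 is the binary symmetric channel capacity Cs = 1 - H(Pe), where H is the binary entropy function. As a quick numerical check of the observations above (a sketch, not from the text; the function name bsc_capacity is chosen here for illustration), this can be evaluated directly:

```python
import numpy as np

def bsc_capacity(pe):
    """Cs = 1 - H(pe) for a binary symmetric channel with error probability pe."""
    pe = np.asarray(pe, dtype=float)
    # Clamp the log arguments so the endpoints follow the convention 0 * log2(0) = 0.
    h = (-pe * np.log2(np.where(pe > 0, pe, 1.0))
         - (1 - pe) * np.log2(np.where(pe < 1, 1 - pe, 1.0)))
    return 1.0 - h

# Maximum (1 bit) at Pe = 0 and Pe = 1, minimum (0 bit) at Pe = 1/2:
print(bsc_capacity([0.0, 0.1, 0.5, 0.9, 1.0]))   # ~[1.0, 0.531, 0.0, 0.531, 1.0]
```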

Channel Capacity per Second

The channel capacity Cs in Eq. (13.22) gives the maximum possible information transmitted when one symbol (digit) is transmitted. If K symbols are being transmitted per second, then the maximum rate of transmission of information per second is K Cs. This is the channel capacity in information units per second and will be denoted by C (in bits per second):

C = K Cs
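For example (numbers chosen here only for illustration), a binary symmetric channel used at K = 1000 binary digits per second with Pe = 0.01 has Cs = 1 - H(0.01) ≈ 0.919 bit per digit, so C = K Cs ≈ 919 bit/s.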

A Comment on Channel Capacity: Channel capacity is a property of the particular physical channel over which the information is transmitted. This is true provided the term channel is correctly interpreted. A channel means not only the transmission medium; it also includes the specification of the kind of signals (binary, r-ary, etc., or orthogonal, simplex, etc.) and the kind of receiver used (the receiver determines the error probability). All these specifications are included in the channel matrix, which completely specifies a channel. If we decide to use, for example, 4-ary digits instead of binary digits over the same physical channel, the channel matrix changes (it becomes a 4 x 4 matrix), as does the channel capacity. Similarly, a change in the receiver or in the signal power or noise power will change the channel matrix and, hence, the channel capacity.
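Because the channel matrix completely specifies the channel, the capacity can in principle be computed from the matrix alone. The following sketch does this with the Blahut-Arimoto iteration, a standard numerical method that is not developed in this text; the function name dmc_capacity and the example matrices are assumptions made here for illustration.

```python
import numpy as np

def dmc_capacity(P, tol=1e-9, max_iter=10_000):
    """Capacity in bits per symbol of a discrete memoryless channel.

    P[i, j] = Prob(symbol j received | symbol i sent); each row sums to 1.
    The Blahut-Arimoto iteration maximizes I(X; Y) over the input distribution.
    """
    p = np.full(P.shape[0], 1.0 / P.shape[0])   # start from a uniform input distribution
    for _ in range(max_iter):
        q = p @ P                               # output distribution q(y)
        with np.errstate(divide="ignore", invalid="ignore"):
            # D[i] = sum_y P(y|x_i) log2( P(y|x_i) / q(y) )
            D = np.sum(np.where(P > 0, P * np.log2(P / q), 0.0), axis=1)
        p_new = p * np.exp2(D)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # Mutual information I(X; Y) at the (near-)optimal input distribution.
    q = p @ P
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.sum(np.where(P > 0, p[:, None] * P * np.log2(P / q), 0.0))

# Binary symmetric channel with Pe = 0.1: capacity = 1 - H(0.1) ~ 0.531 bit/symbol.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(dmc_capacity(bsc))

# Using 4-ary digits over a (hypothetical) symmetric channel gives a 4 x 4 matrix,
# and the capacity changes with it.
pe = 0.1
qsc = (1 - pe) * np.eye(4) + (pe / 3) * (np.ones((4, 4)) - np.eye(4))
print(dmc_capacity(qsc))
```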
