
B. P. Lathi, Zhi Ding - Modern Digital and Analog Communication Systems-Oxford University Press (2009)


14 ERROR CORRECTING CODES

14.1 OVERVIEW

As seen from the discussion in Chapter 13, the key to achieving error-free digital communication in the presence of distortion, noise, and interference is the addition of appropriate redundancy to the original data bits. The addition of a single parity check digit to detect an odd number of errors is a good example. Since Shannon's pioneering paper (Ref. 1), a great deal of work has been carried out in the area of forward error correcting (FEC) codes. In this chapter, we provide an introduction; readers can find much more in-depth coverage of this topic in the classic textbook by Lin and Costello (Ref. 2).
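The single parity check mentioned above is easy to make concrete. The sketch below (illustrative only, not from the text) appends one check digit so that the total number of 1s in the codeword is even; any odd number of bit errors then breaks the parity and is detected, while an even number of errors slips through:

```python
def parity_encode(bits):
    """Append a single even-parity check digit: total count of 1s becomes even."""
    return bits + [sum(bits) % 2]

def parity_check(word):
    """True if the word passes the parity check
    (no error, or an undetectable even number of errors)."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1]
codeword = parity_encode(data)   # [1, 0, 1, 1, 1] -- four 1s, even parity
assert parity_check(codeword)

corrupted = codeword[:]
corrupted[2] ^= 1                # one bit error (odd count): detected
assert not parity_check(corrupted)

corrupted[0] ^= 1                # a second error (even count): undetected
assert parity_check(corrupted)
```

This illustrates why a single parity digit detects only an odd number of errors: flipping an even number of bits leaves the parity unchanged.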

Generally, there are two important classes of FEC codes: block codes and convolutional codes. In block codes, every block of k data digits is encoded into a longer codeword of n digits (n > k). Every unique sequence of k data digits fully determines a unique codeword of n digits. In convolutional codes, the coded sequence of n digits depends not only on the k data digits but also on the previous N − 1 data digits (N > 1). Hence, the coded sequence for a certain k data digits is not unique but depends also on the N − 1 earlier data digits. In short, the encoder has memory. In block codes, k data digits are accumulated and then encoded into an n-digit codeword. In convolutional codes, the encoding is done on a continuous, running basis rather than on blocks of k data digits.
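The distinction can be sketched with two toy encoders (illustrative only, not from the text): a (3, 1) repetition block code, where each data digit alone determines its codeword, and a rate-1/2 convolutional encoder with assumed generators (1, 1 + D), whose second output digit depends on the previous data digit — so with N = 2, the output depends on N − 1 = 1 earlier digit:

```python
def block_encode(bits, n=3):
    """(n, 1) repetition block code: each data digit alone determines
    its n-digit codeword -- the encoder is memoryless."""
    out = []
    for b in bits:
        out.extend([b] * n)
    return out

def conv_encode(bits):
    """Toy rate-1/2 convolutional encoder with generators (1, 1 + D):
    for each input digit u[i] it emits (u[i], u[i] XOR u[i-1]).
    The second stream depends on the previous digit -- the encoder has memory."""
    prev = 0
    out = []
    for b in bits:
        out.extend([b, b ^ prev])
        prev = b
    return out

print(block_encode([1, 1, 0]))  # [1, 1, 1, 1, 1, 1, 0, 0, 0]
print(conv_encode([1, 1, 0]))   # [1, 1, 1, 0, 0, 1]
```

Note how the two 1s in the convolutional input produce different output pairs, (1, 1) and (1, 0), because the encoding of each digit also depends on the digit before it — whereas the block encoder maps every 1 to the same codeword regardless of history.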

Shannon's pioneering work (Ref. 1) on the capacity of noisy channels yielded a famous result known as the noisy channel coding theorem. This result states that for a noisy channel with capacity C, there exist codes of rate R < C such that maximum likelihood decoding can lead to error probability

    P_e ≤ 2^(−n·E_b(R))    (14.1)

where E_b(R) is the energy per information bit defined as a function of the code rate R. This remarkable result shows that an arbitrarily small error probability can be achieved by increasing the block code length n while keeping the code rate constant. A similar result for convolutional codes was also shown in Ref. 1. Note that this result establishes the existence of good codes. It does not, however, tell us how to find such codes. In fact, it is not simply a question of designing good codes. This result also requires large n to reduce the error probability, and requires decoders to use large storage and high complexity for large codewords of size n. Thus, the key problem in code design is the dual task of searching for good error correction codes with large
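The exponential character of the bound in Eq. (14.1) can be illustrated numerically. The exponent value used below is a hypothetical stand-in chosen purely for illustration; the actual function of R depends on the channel:

```python
# Illustrative only: assume a hypothetical exponent value E_b(R) = 0.1
# for some fixed rate R < C. The bound P_e <= 2^(-n * E_b(R)) then
# halves with every 10 additional codeword digits.
Eb_R = 0.1  # hypothetical value, not from the text

for n in (100, 500, 1000):
    bound = 2.0 ** (-n * Eb_R)
    print(f"n = {n:5d}: P_e <= {bound:.3e}")
```

For these values the bound drops from about 1e-3 at n = 100 to below 1e-30 at n = 1000, showing how increasing n at a fixed rate drives the error probability toward zero — at the cost of the decoder complexity discussed above.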

