
B. P. Lathi, Zhi Ding - Modern Digital and Analog Communication Systems-Oxford University Press (2009)


14.12 Low-Density Parity Check (LDPC) Codes

Because LDPC codes are typically of length greater than 1000, their Tanner graphs are

normally too large to illustrate in practice. However, the basic Tanner graph concept is very

helpful for understanding LDPC codes and their iterative decoding.
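To make the Tanner graph concept concrete, the following sketch builds the graph of a small parity-check matrix. The matrix H and variable names here are illustrative, not taken from the text: check nodes correspond to rows of H, variable nodes to columns, and an edge joins check node j to variable node i wherever H[j][i] = 1.

```python
# Illustrative parity-check matrix (not from the text); rows are check nodes,
# columns are variable nodes.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
]

# Edges of the Tanner graph: (check node j, variable node i) where H[j][i] = 1.
edges = [(j, i) for j, row in enumerate(H) for i, v in enumerate(row) if v == 1]

# Node degrees: a variable node's degree is its column weight,
# a check node's degree is its row weight.
var_degree = [sum(row[i] for row in H) for i in range(len(H[0]))]
check_degree = [sum(row) for row in H]

print(edges)
print(var_degree, check_degree)
```

For a real LDPC code both degree lists would stay small relative to the code length, which is what "low density" refers to.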

A cycle in the Tanner graph is marked by a closed loop of connected edges. The loop

originates from and ends at the same variable (or check) node. The length of a cycle is defined

by the number of its edges. In Example 14.9, there exist several cycles of length 4 and length 6.

Cycles of lengths 4 and 6 are considered to be short cycles. Short cycles are known to be

undesirable in some iterative decoding algorithms for LDPC codes. When a Tanner graph is

free of short cycles, iterative decoding of LDPC codes based on the so-called sum-product

algorithm can converge and generate results close to the full-scale MAP decoder that is too

complex to implement in practice.

To prevent a cycle of length 4, LDPC code design usually imposes an additional constraint

on the parity check matrix H: no two rows (and no two columns) may have 1s in more than
one common position. This property, known as the "row-column (RC) constraint," is necessary
and sufficient to avoid cycles of length 4. The presence of cycles is often unavoidable in most

LDPC code designs based on computer searches. A significant number of researchers have

been studying the challenging problem of reducing the number of, or eliminating, short

cycles of length 4, 6, and possibly 8. Interested readers should consult the book by Lin and

Costello. 2
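Because two columns of H sharing 1s in two rows is exactly a length-4 cycle in the Tanner graph, the RC constraint can be verified directly from H. The helper name and test matrices below are hypothetical, for illustration only:

```python
from itertools import combinations

def has_length4_cycle(H):
    """Return True if any two columns of H contain 1s in more than one
    common row, i.e., if the row-column (RC) constraint is violated."""
    n = len(H[0])
    # Represent each column by the set of rows in which it has a 1.
    cols = [{j for j, row in enumerate(H) if row[i] == 1} for i in range(n)]
    return any(len(a & b) > 1 for a, b in combinations(cols, 2))

# Columns 0 and 2 share 1s in rows 0 and 1, closing a cycle of length 4:
H_bad = [
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
]
print(has_length4_cycle(H_bad))   # True

# Here every pair of columns overlaps in at most one row:
H_ok = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]
print(has_length4_cycle(H_ok))    # False
```

By symmetry the same test applied to rows instead of columns covers the other half of the RC constraint.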

We now describe two decoding methods for LDPC codes.

Bit-Flipping LDPC Decoding

The large code length of LDPC codes makes decoding a highly challenging problem. Two of

the most common decoding algorithms are the hard decision bit-flipping (BF) algorithm and

the soft-decision sum-product algorithm (SPA).

The bit-flipping (BF) algorithm operates on a sequence of hard-decision bits r =
011010 · · · 010. Parity checks on r generate the syndrome vector

s = rH^T (mod 2)

Syndrome bits of value 1 indicate parity failure. The BF algorithm tries to change a bit

(by flipping) in r based on how the flip would affect the syndrome bits.

When a code bit participates in γ parity checks of which only a single one fails, flipping this bit
at best will correct 1 failed parity check but will cause γ − 1 new parity failures. For this reason,

BF only flips bits that affect a large number of failed parity checks. A simple BF algorithm

consists of the following steps: 2

Step 1: Calculate the parity checks s = rH^T. If all syndromes are zero, stop decoding.

Step 2: Determine the number of failed parity checks f_i for every bit:

f_i,    i = 1, 2, ..., n

Step 3: Identify the set F_max of bits with the largest f_i and flip the bits in F_max to generate a
new vector r'.

Step 4: Let r = r' and repeat Steps 1 to 3 until the maximum number of iterations has been

reached.
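The four steps above can be sketched as follows. The function name is hypothetical, and the small parity-check matrix in the usage example is a toy one chosen for illustration (it is not low density), assuming H is given as a list of rows over GF(2):

```python
def bit_flip_decode(H, r, max_iter=50):
    """Hard-decision bit-flipping (BF) decoding sketch.

    H: parity-check matrix as a list of rows of 0/1 ints.
    r: received hard-decision bits as a list of 0/1 ints.
    """
    m, n = len(H), len(H[0])
    r = list(r)
    for _ in range(max_iter):
        # Step 1: syndromes s = r H^T (mod 2); stop if all checks pass.
        s = [sum(H[j][i] * r[i] for i in range(n)) % 2 for j in range(m)]
        if not any(s):
            break
        # Step 2: f_i = number of failed parity checks involving bit i.
        f = [sum(s[j] for j in range(m) if H[j][i] == 1) for i in range(n)]
        # Step 3: flip every bit in F_max, the set of bits with the largest f_i.
        f_max = max(f)
        r = [(bit + 1) % 2 if f[i] == f_max else bit
             for i, bit in enumerate(r)]
        # Step 4: loop again with the updated vector r.
    return r

# Toy usage: a single error in the all-zero codeword is corrected.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
print(bit_flip_decode(H, [0, 0, 0, 0, 0, 0, 1]))
```

Note that flipping all bits in F_max at once is the simplest policy; variants that break ties differently, or flip one bit at a time, are also common.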
