Hybrid LDPC codes and iterative decoding methods
Chapter 1: Introduction to binary and non-binary LDPC codes

Consider a transmission over a noisy channel. Let X be the input random vector and let Y be the output random vector. We assume that Y depends on X through a conditional probability density function $P_{Y|X}(y|x)$. Given a received vector $y = (y_0, \dots, y_{N-1})$, the most likely transmitted codeword is the one that maximizes the a posteriori probability $P_{X|Y}(x|y)$ [36]. If the channel is memoryless and all codewords are equally likely, this reduces to finding the codeword $x = (x_0, \dots, x_{N-1})$ that maximizes $P_{Y|X}(y|x)$. This is known as the maximum likelihood (ML) estimate of the transmitted codeword and is written as follows [36]:

$$\hat{x} = \arg\max_{x \in \mathcal{C}} P_{Y|X}(y|x)$$

where the maximization is performed over the set of codewords $\mathcal{C}$, i.e., over the possible input vectors of the channel.
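To make the ML rule concrete, here is a minimal brute-force sketch, assuming a memoryless binary symmetric channel (BSC) with crossover probability $p < 1/2$; the (3,1) repetition code used as the codebook is purely illustrative, not a code from this document. Over the BSC, $P_{Y|X}(y|x) = p^{d}(1-p)^{N-d}$, where $d$ is the Hamming distance between $y$ and $x$, so maximizing the likelihood amounts to minimizing $d$.

```python
import numpy as np

# Hypothetical toy codebook: the (3, 1) repetition code. Any small
# code would do; this choice is purely illustrative.
CODEBOOK = np.array([[0, 0, 0],
                     [1, 1, 1]])

def ml_decode_bsc(y, codebook, p):
    """Brute-force ML decoding over a memoryless BSC: maximize
    P(y|x) = p^d * (1-p)^(N-d), where d is the Hamming distance
    between y and codeword x."""
    y = np.asarray(y)
    n = codebook.shape[1]
    dists = (codebook != y).sum(axis=1)      # Hamming distances to y
    likelihoods = (p ** dists) * ((1 - p) ** (n - dists))
    return codebook[np.argmax(likelihoods)]  # most likely codeword

print(ml_decode_bsc([1, 0, 1], CODEBOOK, p=0.1))  # -> [1 1 1]
```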

Now we discuss the correction capability of a linear block code. The correction ability of a code is determined by its minimum distance $d_{\min}$, which is the smallest Hamming distance between two distinct codewords [37]. From an algebraic perspective, the received vector is the sent codeword with some components corrupted. The error correction, i.e., the decoding process, consists in finding the nearest codeword to the received vector. All the vectors in $\{0,1\}^N$ whose nearest codeword is $x$ are such that, for all $i \in \{1, \dots, N\}$, if the $i$-th bit of the vector differs from the $i$-th bit of the codeword $x$, then the Hamming distance between $x$ and the vector must be lower than $\frac{d_{\min}^{loc}(i)}{2}$, with $d_{\min}^{loc}(i)$ being the local minimum distance of bit $i$ in the code, as defined in [38]. The local minimum distance of the $i$-th digit corresponds to the minimum Hamming distance between two codewords whose $i$-th digits are different [38]. Hence, the maximum number of errors that a code can detect is $d_{\min} - 1$, whatever the location of the errors in the codeword. Similarly, if the error correction is achieved according to the ML principle, the code can correct any error pattern of weight strictly lower than $\frac{d_{\min}}{2}$. The maximum number of correctable errors is hence $\lfloor \frac{d_{\min}-1}{2} \rfloor$, whatever the location of the errors in the codeword.
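As a small worked example, the following sketch enumerates the codewords of the systematic (7,4) Hamming code (an illustrative choice, not a code studied in this document) to compute $d_{\min}$, the resulting detection and correction capabilities, and the local minimum distance of each bit. For a linear code, $d_{\min}$ is the minimum nonzero codeword weight, and $d_{\min}^{loc}(i)$ is the minimum weight of a codeword whose $i$-th bit is 1, since the difference of two codewords differing in bit $i$ is itself such a codeword.

```python
import itertools
import numpy as np

# Illustrative generator matrix of the systematic (7, 4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

K, N = G.shape
# Enumerate all 2^K codewords (feasible only for short codes).
codewords = np.array([np.dot(m, G) % 2
                      for m in itertools.product([0, 1], repeat=K)])

# For a linear code, d_min is the minimum nonzero codeword weight.
weights = codewords.sum(axis=1)
d_min = weights[weights > 0].min()

# Local minimum distance of bit i: minimum weight of a codeword
# whose i-th bit is 1.
d_loc = [codewords[codewords[:, i] == 1].sum(axis=1).min() for i in range(N)]

print("d_min =", d_min)                         # 3
print("detectable errors:", d_min - 1)          # 2
print("correctable errors:", (d_min - 1) // 2)  # 1
print("local minimum distances:", d_loc)
```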

ML decoding corresponds to solving the nearest-neighbor problem. Looking for the nearest neighbor in a high-dimensional space is an algorithmic problem that has no better solution than an exhaustive search when the space elements are not sorted. The decoding process can therefore be very complex ($O(2^K)$ for a code with $K$ information bits) [37]. This brute-force approach is reasonable only for short codes. Faster sub-optimal solutions have been developed. The first one applies to block codes such as BCH [39] and Reed-Solomon codes [40]. In these approaches, the code is built with a priori knowledge of the minimum distance, and constructed so that the nearest-neighbor search can be performed in reduced subspaces.

The second coding scheme which achieves a good minimum distance with acceptable decoding speed is based on convolutional codes. Encoding is performed by linear feedback shift registers fed with the information bits. This technique generates a set $\mathcal{C}$ of codewords sorted according to the correlation between the bits of the codeword. The Viterbi algorithm [41] takes advantage of this construction by modeling the encoder as a finite state machine whose transitions between possible states are considered as a Markov chain and form a convolutional trellis, or state graph. Each path in this state graph corresponds to a codeword, and looking for the most likely codeword amounts to finding the path which minimizes the distance to the received vector. The complexity is linear in the information sequence length.
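Below is a minimal hard-decision sketch of this trellis search, assuming the standard rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5), a textbook example rather than a code studied in this document.

```python
# Hard-decision Viterbi decoding of the rate-1/2, constraint-length-3
# convolutional code with octal generators (7, 5) -- a textbook code
# used here purely as an illustration.
G = [0b111, 0b101]          # generator polynomials
K = 3                       # constraint length
N_STATES = 1 << (K - 1)     # 4 trellis states (the last two input bits)

def branch(state, bit):
    """Next state and output bit pair when `bit` enters the register."""
    reg = (bit << (K - 1)) | state
    out = [bin(reg & g).count("1") % 2 for g in G]
    return reg >> 1, out

def viterbi(received):
    """Find the trellis path whose output sequence is closest in
    Hamming distance to `received` (a list of hard-bit pairs)."""
    INF = float("inf")
    metric = [0] + [INF] * (N_STATES - 1)   # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for r in received:
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = branch(s, bit)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[ns]:      # keep only the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

# Encode [1, 0, 1, 1], flip one transmitted bit, then decode.
bits, state, coded = [1, 0, 1, 1], 0, []
for b in bits:
    state, out = branch(state, b)
    coded.append(out)
coded[1][0] ^= 1                            # single channel error
print(viterbi(coded))                       # -> [1, 0, 1, 1]
```

At each trellis step only the best ("survivor") path into each state is kept, which is what makes the search linear in the sequence length rather than exponential in it.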
