Chapter 1: Introduction to binary and non-binary LDPC codes

EXIT charts for GF(q) LDPC codes

Let us consider the binary-input AWGN channel. This paragraph presents a tool for optimizing the irregularity of a GF(q) LDPC code ensemble by means of EXIT charts.

First, let us discuss the accuracy of the Gaussian approximation of the channel output in symbolwise LLR form for GF(q) LDPC code ensembles. The channel outputs are noisy observations of bits, from which we obtain bitwise LLRs, all identically distributed as N(2/σ², 4/σ²) [50]. Let s be the vector gathering the LLRs b_1, ..., b_{p_k} of the bits of which a symbol in GF(q_k) is made: s = (b_1, ..., b_{p_k})^T. Each component of an input LLR random vector l of size (q_k − 1) is then a linear combination of these bitwise LLRs:

l = B_{q_k} · s    (1.17)

where B_{q_k} is the matrix of size q_k × log_2(q_k) whose i-th row is the binary map of the i-th element of GF(q_k). The distribution of the initial messages is hence a mixture of one-dimensional Gaussian densities, but l is not a Gaussian distributed vector. Indeed, it is easy to see that the covariance matrix of the vector l is not invertible.
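As a small numerical sketch of equation (1.17) (not from the source: GF(8) and the natural binary representation as the binary map are illustrative assumptions, and the row of the zero element is omitted since its LDR component is identically zero), one can build the matrix B_{q_k}, draw the bitwise LLRs, and verify that the covariance of l is indeed singular:

```python
import numpy as np

rng = np.random.default_rng(0)

q = 8                    # field size q_k (assumption: GF(8) for illustration)
p = int(np.log2(q))      # bits per symbol, p_k = log2(q_k) = 3

# Rows are the binary maps of the nonzero elements 1..q-1 (assumption:
# natural binary representation; the zero element's row is omitted since
# its LDR component is identically zero).
B = np.array([[(i >> j) & 1 for j in range(p)] for i in range(1, q)])

sigma2 = 0.5             # AWGN noise variance (illustrative value)
# Bitwise LLRs, i.i.d. N(2/sigma^2, 4/sigma^2) under the all-zero codeword
s = rng.normal(2 / sigma2, np.sqrt(4 / sigma2), size=p)

l = B @ s                # symbolwise LLR vector of size q-1, eq. (1.17)

# Cov(l) = (4/sigma^2) B B^T has rank at most p = 3 < q-1 = 7,
# so the covariance matrix of l is not invertible.
print(l.shape, np.linalg.matrix_rank(B @ B.T))   # -> (7,) 3
```

Since Cov(l) = (4/σ²) B B^T has rank at most log_2(q_k), it can never be full rank for q_k > 2, which is the non-invertibility claimed above.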

Formally, EXIT charts track the mutual information I(C; W) between the transmitted code symbol C at a variable node and the message W transmitted across an edge emanating from it.

Definition 5 [48] The mutual information between a symmetric LDR-vector message W of size q − 1 and the codeword sent, under the all-zero codeword assumption, is defined by:

I(C; W) = 1 − E[ log_q( 1 + ∑_{i=1}^{q−1} e^{−W_i} ) | C = 0 ]

The equivalent definition for the probability vector X = LDR^{−1}(W) of size q is

I(C; X) = 1 − E[ log_q( ∑_{i=0}^{q−1} X_i / X_0 ) | C = 0 ].    (1.18)
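The expectation in Definition 5 lends itself to Monte Carlo estimation. The sketch below is an illustration, not part of the source: GF(8), the natural binary map, and the sample size are assumptions. It estimates I(C; W) for channel-output messages W = B_{q_k} · s and shows the information growing toward 1 as the noise variance decreases:

```python
import numpy as np

rng = np.random.default_rng(1)

q, p = 8, 3              # assumption: GF(8) with the natural binary map
B = np.array([[(i >> j) & 1 for j in range(p)] for i in range(1, q)])

def mutual_info(sigma2, n=200_000):
    """Monte Carlo estimate of I(C;W) = 1 - E[log_q(1 + sum_i e^{-W_i}) | C=0]
    for channel-output messages W = B s, where the bitwise LLRs are
    i.i.d. N(2/sigma^2, 4/sigma^2) under the all-zero codeword."""
    s = rng.normal(2 / sigma2, np.sqrt(4 / sigma2), size=(n, p))
    W = s @ B.T                          # n samples of LDR vectors, size q-1
    return 1.0 - np.mean(np.log1p(np.exp(-W).sum(axis=1)) / np.log(q))

for s2 in (2.0, 0.5, 0.1):
    print(f"sigma^2 = {s2}: I(C;W) = {mutual_info(s2):.3f}")
```

By the symmetry property E[e^{−W_i} | C = 0] = 1, Jensen's inequality gives E[log_q(1 + ∑ e^{−W_i})] ≤ log_q(q) = 1, so the estimate stays in [0, 1] as required.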

In the following, the shortcut "mutual information of an LDR vector" is used instead of "mutual information between an LDR vector and the codeword sent". If this information is zero, then the message is independent of the transmitted code symbol and thus the probability of error is (q − 1)/q. As the information approaches 1, the probability of error approaches zero. Note that we assume that the base of the log function in the mutual information is q, so that 0 ≤ I(C; W) ≤ 1. I(C; W) is taken to represent the distribution of the message W. That is, unlike density evolution, where the entire distribution of the message W at each iteration is recorded, with EXIT charts, I(C; W) is assumed to be a faithful surrogate. In other words, since the densities are assumed to depend on only one scalar parameter, instead of tracking the mean of one component, one tracks the information content of the message. It is shown in [48] that, under the cycle-free graph assumption:

I(C; W) = 1 − E_W[ log_q( 1 + ∑_{i=1}^{q−1} e^{−w_i} ) | C = 0 ]
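To connect the information measure with the error-probability claims above, the following sketch (again an illustration with assumed parameters: GF(8), the natural binary map, and bitwise LLRs taken Gaussian with mean m and variance 2m, the usual symmetry condition) estimates both I(C; W) and the symbol error probability of the MAP decision. For m near 0 the error probability approaches (q − 1)/q = 7/8, while for large m it vanishes as I(C; W) approaches 1:

```python
import numpy as np

rng = np.random.default_rng(2)

q, p = 8, 3              # assumption: GF(8) with the natural binary map
B = np.array([[(i >> j) & 1 for j in range(p)] for i in range(1, q)])

def info_and_error(m, n=100_000):
    """For bitwise Gaussian LLRs of mean m and variance 2m (symmetry
    condition), estimate I(C;W) and the MAP symbol error probability
    under the all-zero codeword assumption."""
    s = rng.normal(m, np.sqrt(2 * m), size=(n, p))
    w = s @ B.T                                  # LDR vectors of size q-1
    info = 1.0 - np.mean(np.log1p(np.exp(-w).sum(axis=1)) / np.log(q))
    # With w_i = log P(C=0|y)/P(C=i|y), symbol 0 is the MAP decision
    # exactly when every component of w is positive.
    pe = np.mean((w <= 0).any(axis=1))
    return info, pe

for m in (0.001, 2.0, 30.0):
    info, pe = info_and_error(m)
    print(f"m = {m}: I = {info:.3f}, Pe = {pe:.3f}")
```

This also illustrates the one-parameter surrogate idea: the single scalar m determines the whole message density, and hence its information content, so tracking I(C; W) across iterations replaces tracking the full density as in density evolution.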