Design of Generalized LDPC Codes and their Decoders



Shadi Abu-Surra

Electrical and Computer

Engineering Department

University of Arizona


Abstract— We first consider the design of generalized LDPC

(G-LDPC) codes with recursive systematic convolutional (RSC)

constraint nodes in place of the standard single parity check

constraint nodes. Because rate-1/2 RSC nodes lead to low-rate

G-LDPC codes, we consider high-rate tail-biting RSC nodes for

which Riedel’s APP-decoder based on the reciprocal-dual code

trellis becomes necessary. We present the log-domain version of

this decoder as well as a suboptimal approximation. Another

approach to increasing the rate of G-LDPC codes is via the class

of doubly generalized LDPC (DG-LDPC) codes. We show how

the graph of a DG-LDPC code (called a DG-graph) may be

transformed into a G-graph. This alternative representation of

DG-LDPC codes leads to a modified-schedule G-graph decoder

which is equivalent to the flooding-schedule DG-graph decoder.

Lastly, we demonstrate the unequal error protection capability

of selected G-LDPC codes. Our codes are based on protographs

and most of them have adjacency matrices in block-circulant

form and, hence, are quasi-cyclic.


I. INTRODUCTION

A generalization of LDPC codes was suggested by Tanner

[1] for which subsets of the set of code bits obey a more

complex constraint than a single parity check (SPC) constraint,

such as a Hamming code constraint. The generalized constraint

nodes are called super constraint nodes (super-CNs). The

Tanner graph of a generalized LDPC (G-LDPC) code with

length n and mc constraints is depicted in Fig. 1. There are

several advantages to employing super-CNs. First, super-CNs

tend to lead to larger minimum distances. Second, because a

complex constraint node can encapsulate multiple SPC constraints,

the resulting Tanner graph will contain fewer edges so

that deleterious graphical properties are more easily avoided.

Third, the belief propagation decoder tends to converge more

quickly because the CN processors now correspond to stronger

codes. Lastly, unequal error protection (UEP) is facilitated by

these generalized constraints. The first two advantages lead to

a lower error-rate floor. The third advantage leads to lower

decoder complexity and/or higher decoding speed. The last

advantage is useful in applications such as image and video

transmission in which some bits have more importance than


This work was funded in part by the University of Bologna, Progetto Pluriennale,

and by NASA-JPL grant 1264726.

Gianluigi Liva

Institute of Communications

and Navigation

DLR (German Aerospace Center)

Wessling, Germany 82234


William E. Ryan

Electrical and Computer

Engineering Department

University of Arizona


Tanner’s G-LDPC codes were investigated by several researchers

in recent years. In [2], [3] Hamming component

codes were used in regular Tanner graphs. In [4] codes were

designed by using BCH or Reed-Solomon code constraints and

in [5] constraints based on recursive systematic convolutional

(RSC) codes were used. Hybrid and irregular G-LDPC codes

were investigated in [6]–[8]. In each of these works Hamming

constraints were used, and in [7], [8] G-LDPC codes were

designed using protographs [9]. Irregular G-LDPC codes based

on protographs with rate-1/2 RSC constraints were introduced

in [10], [11].

Because G-LDPC codes typically replace high-rate SPC

nodes with lower-rate nodes (e.g., rate-4/7 Hamming nodes),

the code rate of G-LDPC codes tends to be small. In this

paper we consider two approaches to increasing the rate of

G-LDPC codes. First, we replace SPC nodes by RSC nodes

of the same rate. Second, we explore a further generalization

of the G-LDPC principle, called doubly generalized LDPC

(DG-LDPC) codes, studied in [12], [13]. In DG-LDPC codes,

the variable nodes, which represent (low-rate) repetition codes

in the graphs of LDPC and G-LDPC codes, are replaced by

nodes which represent more general codes (of higher rate).

These generalized variable nodes are called super variable

nodes (super-VNs). The Tanner graph for a DG-LDPC code

is presented in Fig. 2. We will call this graph a DG-graph

to distinguish it from the graph of a G-LDPC code (Fig. 1),

which we will call a G-graph.

In [10], [11] we presented designs for rate-1/6 and rate-1/4 G-LDPC codes with rate-1/2 RSC component codes. In

this paper, in order to achieve a higher overall code rate, we

employ high-rate RSC component codes in our design of G-

LDPC codes. We employ tail-biting RSC component codes

with rate-(κ − 1)/κ in our designs. Because the standard

BCJR decoder for a rate-(κ − 1)/κ RSC code has complexity

proportional to 2^(κ−1), in this paper we employ Riedel’s a

posteriori probability (APP) decoder [14] which is based

on the trellis of the rate-1/κ reciprocal-dual code. Also, we

describe an additive log-domain version of Riedel’s decoder,

following the work of [15]. Moreover, for our tail-biting

decoder, we use a suboptimal soft-in/soft-out (SISO) decoder,

which is a variation of the additive Riedel APP decoder [16]–[18].

Fig. 1. Tanner graph (G-graph) of a G-LDPC code (constraint nodes C0, C1, . . . , Cmc−1 over variable nodes V0, V1, . . . , Vn−1).

Fig. 2. DG-graph of a DG-LDPC code.


In this paper, we explore DG-LDPC codes by transforming

them into G-LDPC codes. That is, we show how any DG-

LDPC code can be represented by a G-graph. This leads to a

decoder based on the G-graph, an alternative to the DG-graph-based decoder presented in [13].

Finally, in this paper we show by simulation that G-LDPC

codes can achieve both good decoding thresholds and low error-rate floors, making them suitable for a range of applications.

Also, we demonstrate the UEP capability for some G-LDPC

codes which makes them amenable to joint source-channel

coding applications.

The paper is organized as follows. In the following section

we give an overview of the design of G-LDPC codes. Then, in

Section III we present examples of G-LDPC codes with high-rate RSC components and their corresponding decoders. In

Section IV, we introduce the alternative decoder for DG-LDPC

codes. Section V presents simulation results of the codes we

have discussed.


II. DESIGN OF G-LDPC CODES

This section focuses on the design of G-LDPC codes (Fig.

1), although the comments made either hold directly for, or

are straightforwardly extended to, DG-LDPC codes (Fig. 2).

The graph of a G-LDPC code in Fig. 1 has n variable nodes and mc constraint nodes. The connections between the variable nodes and the constraint nodes are given by an mc × n adjacency

matrix Γ. For an LDPC code, the adjacency matrix Γ and the

parity-check matrix H are identical.

The parameters in standard LDPC code design which most

affect code performance are the degree distributions of the

node types, the topology of the graph (e.g., girth), and the

minimum distance, dmin. For the design of G-LDPC codes,

decisions must also be made on the types and multiplicities of

component codes to be used. The choice of component code

types and their multiplicities is dictated by the code rate and

complexity requirements.

As for LDPC codes, the topology of the graph for a G-

LDPC code should be free of short cycles. Obtaining optimal

or near-optimal degree distributions for the graphs of G-

LDPC codes can proceed as for LDPC codes, using EXIT

charts [23], for example. In this paper, we instead follow

the pragmatic design approach introduced in [7], [8]. It starts

with a protograph (defined below) that is known to have a

good decoding threshold and replaces selected SPC nodes with

RSC nodes. Although we provide no proof, the substitution of

these more complex nodes tends to increase minimum distance

as supported by simulations. Further, it leads to a smaller

adjacency matrix since multiple SPC nodes are replaced by

a single component code node. The implication of a smaller

adjacency matrix is that short cycles and other deleterious

graphical properties are more easily avoided.

A protograph [9], [24] is a relatively small bipartite graph

from which a larger graph can be obtained by a copy-and-permute procedure: the protograph is copied q times, and then

the edges of the individual replicas are permuted among the

replicas (under restrictions described below) to obtain a single,

large graph. Of course, the edge connections are specified

by the adjacency matrix Γ. Note that the edge permutations

cannot be arbitrary. In particular, the nodes of the protograph

are labeled so that if variable node A is connected to constraint

node B in the protograph, then variable node A in a replica

can only connect to one of the q replicated B constraint nodes.

Doing so preserves the decoding threshold properties of the

protograph. A protograph can possess parallel edges, i.e., two

nodes can be connected by more than one edge. The copy-and-permute procedure must eliminate such parallel connections in

order to obtain a derived graph appropriate for a parity-check matrix.
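To make the copy-and-permute procedure concrete, the sketch below (a toy example of our own, not one of the paper's protographs) expands a base protomatrix, whose entry (i, j) counts the parallel edges between CN i and VN j, into a (q·Mc) × (q·Nc) binary adjacency matrix built from q × q circulant permutation matrices; assigning distinct circulant shifts to parallel edges eliminates them in the derived graph.

```python
def expand_protograph(base, shifts, q):
    """Expand a protograph into a (q*Mc) x (q*Nc) binary adjacency matrix.
    base[i][j]   : number of parallel edges between CN i and VN j
    shifts[i][j] : base[i][j] distinct circulant shifts for those edges;
                   distinct shifts remove parallel edges in the derived graph.
    """
    Mc, Nc = len(base), len(base[0])
    G = [[0] * (q * Nc) for _ in range(q * Mc)]
    for i in range(Mc):
        for j in range(Nc):
            assert len(set(s % q for s in shifts[i][j])) == base[i][j]
            for s in shifts[i][j]:
                for r in range(q):   # one q x q circulant permutation per edge
                    G[i * q + r][j * q + (r + s) % q] = 1
    return G
```

Note that copy r of variable node j connects only to copy (r + s) mod q of constraint node i, so each replica keeps the protograph's labeled connectivity, as required.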


Consider a protograph with Mc CNs and Nc VNs. Now

make q copies and make edge connections among the copies

in accordance with an adjacency matrix Γ. It is convenient to

choose an adjacency matrix Γ that is an Mc × Nc array of

q × q circulant permutation matrices along with the q × q zero

matrix. We call each row of permutation matrices a block row

which we observe has q rows and n = qNc columns. We note

that there is one block row for each constraint node of the

protograph and one binary row for each of the mc = qMc

constraints in the G-graph. We note also that the number of

nonzero permutation matrices in a block row is simultaneously

equal to the degree of its corresponding CNs and the common

length of the nodes’ component codes. The H matrix of the

G-LDPC code can be derived from Γ and the parity-check

matrices {Hi}, i = 1, . . . , Mc, for each constraint node (there is one matrix

Hi for each block row of Γ) by replacing each 1 in a binary

row by its corresponding column in Hi. When Γ is block

circulant, the resulting matrix H can also be put in block-circulant form so that the G-LDPC code will be quasi-cyclic [7], [8].

Fig. 3. Protograph of rate-1/2 G-LDPC with two rate-2/3 RSC constraints.
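The replace-each-1-by-a-column-of-Hi step can be sketched as follows (the helper name and toy matrices are ours; `comp_H[j]` is the component parity-check matrix shared by the binary rows of block row j):

```python
def expand_H(gamma, comp_H, block_row_of):
    """Derive H from the adjacency matrix `gamma` (list of 0/1 rows) and the
    component parity-check matrices: binary row r of gamma uses the matrix
    comp_H[block_row_of[r]] of its block row."""
    H = []
    for r, row in enumerate(gamma):
        Hi = comp_H[block_row_of[r]]
        ones = [c for c, v in enumerate(row) if v]      # the CN's code bits
        assert len(ones) == len(Hi[0])                  # row weight = code length
        for hi_row in Hi:                               # one H row per check of Hi
            new = [0] * len(row)
            for m, c in enumerate(ones):
                new[c] = hi_row[m]                      # replace 1 by Hi's column
            H.append(new)
    return H
```

With an SPC component (Hi a single all-ones row), each binary row of Γ is reproduced unchanged, recovering the LDPC special case in which Γ and H are identical.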



III. G-LDPC CODES WITH HIGH-RATE RSC COMPONENTS

A. Example G-LDPC codes with high-rate RSC components

In the previous section we presented the general process

for the design of G-LDPC codes. In this section we follow

this process in the design of two rate-1/2 G-LDPC codes

with RSC components. The protograph of the first code is

shown in Fig. 3. It consists of two rate-2/3 RSC component

codes. In Fig. 3 we divided the variable nodes into T identical

groups (enclosed in the dashed boxes). Each group contains

two information bits i1, i2, and two parity bits, p1 (for RSC on

left) and p2 (for RSC on right). Note that the RSC component

codes each have blocklength 3T . This code is a turbo-like

code, and in fact a rate-1/2 turbo code can be obtained from

this protograph using a large T and only one copy of the

protograph. However, in our codes we used a comparatively

small value for T and a large number of protograph copies

q. This allows the use of a modular decoder with reasonable

complexity and hardware requirements.

A rate-1/2 (8160, 4080) G-LDPC code can be constructed

from the protograph in Fig. 3 as follows. First, we use tail-biting RSC component codes with memory υ = 4, blocklength

3T = 60, and generator polynomials (32, 36, 31)8. It follows

that T = 20 and the protograph has 40 information bits

and 40 parity bits. Next we make q = 102 replicas of the

protograph. By choosing the adjacency matrix Γ of the code

to be in block-circulant form, the code will be quasi-cyclic

because the component codes are tail-biting codes.

The second G-LDPC code has the protograph shown in Fig.

4. We started with the rate-1/2 AR4JA protograph in [25] and

we replaced each rate-5/6 SPC node in the AR4JA protograph

with a rate-5/6 RSC component code of blocklength 6T .

Note that the protograph in Fig. 4 contains T equivalent

AR4JA sub-protographs, where each sub-protograph contains

two information bits, i1 and i2, and three parity bits, p1, p2,

and p3. The overall protograph corresponds to a rate-2/5 G-

LDPC code. In order to achieve rate-1/2, we puncture i1. The

RSC component code has the generator polynomials (25, 37,

27, 31, 23, 35)8 and memory υ = 4.

We designed a rate-1/2 (576, 288) G-LDPC code based

on the protograph in Fig. 4 with T = 8. The number of

information bits in the protograph is 16. To obtain k = 288,

we made q = 18 copies of the protograph.
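As a quick sanity check, the blocklength and dimension arithmetic for the two codes above can be verified directly (the puncturing bookkeeping for the second code is our own reading of the construction):

```python
# First code: T = 20 groups of 4 bits (i1, i2, p1, p2), q = 102 copies
T1, q1 = 20, 102
n1, k1 = q1 * 4 * T1, q1 * 2 * T1
assert (n1, k1) == (8160, 4080)

# Second code: T = 8 AR4JA sub-protographs of 5 bits (i1, i2, p1, p2, p3),
# q = 18 copies; the q*T copies of i1 are punctured to reach rate 1/2
T2, q2 = 8, 18
k2 = q2 * 2 * T2                    # 16 info bits per protograph, q copies
n2 = q2 * 5 * T2 - q2 * T2          # transmitted bits after puncturing i1
assert (n2, k2) == (576, 288)
```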

B. Iterative decoder

For the G-LDPC codes in this paper, we used the standard

belief propagation algorithm. As indicated in the Introduction,

the complexity of the BCJR decoder [19] for high-rate RSC

nodes is prohibitive. Consequently, in this paper we adopt a

variation of Riedel’s decoder [14] (described in equation (20)

in [14]) which uses the trellis of the reciprocal-dual code. As

an example of a reciprocal-dual code, consider the rate-4/5

RSC code with (octal) generator polynomials (7, 2, 6, 5, 7)8

and memory υ = 2. Its parity-check matrix is given by H(D) = [1 + D + D^2, D, 1 + D, 1 + D^2, 1 + D + D^2]. The reciprocal-dual code is generated by Grd(D) = D^2 H(D^−1) = [1 + D + D^2, D, D + D^2, 1 + D^2, 1 + D + D^2]. In other

words, the trellis of the reciprocal-dual code in this example

is generated by the feedforward convolutional code generators

(7, 2, 3, 5, 7)8.
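Computing the reciprocal-dual generators amounts to bit-reversing each generator polynomial over υ + 1 taps; a minimal sketch (function name ours, assuming the most significant bit of each octal generator is the D^0 coefficient):

```python
def reciprocal_dual_generators(gens_octal, memory):
    """Bit-reverse each generator polynomial over memory+1 taps,
    i.e. g(D) -> D^memory * g(1/D), assuming the most significant
    bit of the octal generator is the D^0 coefficient."""
    width = memory + 1
    out = []
    for g in gens_octal:
        val = int(str(g), 8)                       # read digits as octal
        rev = int(format(val, '0%db' % width)[::-1], 2)
        out.append(int(oct(rev)[2:]))              # write back as octal digits
    return out

# the example from the text: (7, 2, 6, 5, 7)_8 -> (7, 2, 3, 5, 7)_8
assert reciprocal_dual_generators([7, 2, 6, 5, 7], 2) == [7, 2, 3, 5, 7]
```

Bit reversal is an involution, so applying the function twice recovers the original generators.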

The decoder in this paper is a log-domain (additive) version

of Riedel’s decoder. Before describing it, we introduce our

adopted notation. Let us consider a rate-(κ −1)/κ RSC code,

C, with block length N, and memory υ. Its reciprocal-dual

code has the same blocklength and memory, but its rate is

1/κ. The trellises of both the original and the reciprocal-dual code have Λ = N/κ trellis sections, each with 2^υ left-states and 2^υ right-states. The original code has 2^(κ−1)

branches leaving/entering each state and the reciprocal-dual

code has 2 branches leaving/entering each state. We use sl

and sr to refer to a left-state and a right-state, respectively.

The reciprocal-dual code encoder output associated with the

transition from sl to sr is denoted by the κ-tuple ¯b(sl, sr) = (b0(sl, sr), . . . , bκ−1(sl, sr)). Also, define the two sets SL(sr)

and SR(sl) as follows: SL(sr) = {sl : sl → sr exists}

and similarly, SR(sl) = {sr : sl → sr exists}. Denote the

transmitted codeword ¯c ∈ C by ¯c = (c0, c1, . . . , cN−1) and the

corresponding channel output by ¯y = (y0,y1,...,yN−1).

Riedel’s decoder can be summarized by the log-likelihood ratio L(ci) of the i-th bit in ¯c, given by

L(ci) = L(yi | ci) + ln [ Σ_sl Σ_sr At(sl) Θt(i, sl, sr) Bt+1(sr) ]
        − ln [ Σ_sl Σ_sr At(sl) Θt(i, sl, sr) Bt+1(sr) (−1)^(b_(i−tκ)(sl, sr)) ],   (1)

Fig. 4. Protograph of rate-1/2 G-LDPC with rate-5/6 RSC constraints.

where the sl summations are over all sl ∈ {0, . . . , 2^υ − 1} and the sr summations are over all sr ∈ SR(sl). The soft output from the channel is L(yi | ci) = 2yi/σ^2, where σ^2 is the AWGN variance. The forward, backward, and branch metrics At(sl), Bt+1(sr), and Θt(i, sl, sr), respectively, are defined in [14], where t designates a trellis section and is related to i by t = ⌊i/κ⌋.

We use the convention in [15] to derive the additive version

of the above decoder. Let X = χs e^χm and Z = ζs e^ζm be real numbers, where χs ∈ {±1} and ζs ∈ {±1} represent signs and χm ∈ R and ζm ∈ R represent the logarithms of the magnitudes.

We define the one-to-one mapping

X ⇒ χ = (χm, χs) ≜ (ln |X|, sign(X)),

and similarly for Z. From this definition, the product XZ maps to

χ + ζ ≜ (ln |XZ|, sign(XZ)) = (χm + ζm, χs ζs),

and the sum X + Z maps to

max∗(χ, ζ) ≜ (ln |X + Z|, sign(X + Z)) = ( max(χm, ζm) + ln(1 + χs ζs e^(−|χm − ζm|)), max_m(χs, ζs) ),

where max_m(χs, ζs) equals χs if χm > ζm, and equals ζs,

otherwise. Now we can rewrite (1) as follows

L(ci) = L(yi | ci) + γ(1)_m(yi, ci) − γ(2)_m(yi, ci),   (2)

where γ(1)(yi, ci) and γ(2)(yi, ci) are given in the following two equations:

γ(1)(yi, ci) = max∗_sl max∗_sr { αt(sl) + ϑt(i, sl, sr) + βt+1(sr) },   (3)

γ(2)(yi, ci) = max∗_sl max∗_sr { αt(sl) + ϑt(i, sl, sr) + βt+1(sr) + (0, (−1)^(b_(i−tκ)(sl, sr))) },   (4)

where sl is taken over all sl ∈ {0, . . . , 2^υ − 1} and sr is taken

over all sr ∈ SR(sl). Also, αt(sl), βt(sr), and ϑt(i,sl,sr)

are given by

αt(sr) = max∗_(sl∈SL(sr)) { αt−1(sl) + ϑt−1(sl, sr) },   (5)

βt(sl) = max∗_(sr∈SR(sl)) { βt+1(sr) + ϑt(sl, sr) },   (6)


ϑt(i, sl, sr) = Σ_(j=0, j≠i−tκ)^(κ−1) bj(sl, sr) ( ln |tanh(L(ytκ+j | ctκ+j)/2)|, sign(L(ytκ+j | ctκ+j)) ).   (7)


Finally, ϑt(sl, sr) in (5) and (6) is given by

ϑt(sl, sr) = Σ_(j=0)^(κ−1) bj(sl, sr) ( ln |tanh(L(ytκ+j | ctκ+j)/2)|, sign(L(ytκ+j | ctκ+j)) ).   (8)

The decoder requires initialization of the forward and backward

recursions. The initializations depend on the type of the

RSC codes employed, as follows. For truncated RSC codes,

set α0(s) to (0, 1) for all states s, and set βΛ(s) to (0, 1) for

the zero state, and (−∞, 1) for the other states. On the other

hand, for terminated RSC codes set both α0(s) and βΛ(s) to

(0, 1) for all states. The idea behind this initialization is that

the code and its dual are related by a Fourier transform relation.
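The (magnitude, sign) log-domain algebra used by the decoder above can be sketched in a few lines (helper names are ours; χ + ζ realizes the product and max∗(χ, ζ) the sum):

```python
import math

def to_ls(x):
    # map a nonzero real X to chi = (ln|X|, sign(X))
    return (math.log(abs(x)), 1.0 if x > 0 else -1.0)

def ls_mul(chi, zeta):
    # product XZ: chi + zeta = (chi_m + zeta_m, chi_s * zeta_s)
    return (chi[0] + zeta[0], chi[1] * zeta[1])

def ls_maxstar(chi, zeta):
    # sum X + Z: the max* rule, with a correction term that is added
    # (same signs) or subtracted (opposite signs) via chi_s * zeta_s
    m = max(chi[0], zeta[0]) \
        + math.log1p(chi[1] * zeta[1] * math.exp(-abs(chi[0] - zeta[0])))
    s = chi[1] if chi[0] > zeta[0] else zeta[1]
    return (m, s)
```

For example, with X = 0.5 and Z = −0.3, ls_maxstar recovers ln |X + Z| = ln 0.2 with a positive sign; exact cancellation (X + Z = 0) would need special handling in a real implementation.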


Similar to the above procedure, one can easily derive

the optimal additive version of Riedel’s tail-biting decoder.

However, the complexity of this optimal decoder is 2^υ times

the complexity of the decoder in (2). Instead, we used a

suboptimal decoder analogous to that in [16]. The idea is

to find “correct” initializations, α0(s) and βΛ(s), then run

the decoder in (2). Making use of the circular form of the

tail-biting trellis, Anderson [16] showed that the forward and

backward recursions define an eigenvector problem whose

solution is the desired initialization. Moreover, he showed

that starting with any random forward initialization, then

iterating the forward recursion in (5) enough times with a

proper normalization, the forward initialization converges to

the “correct” initialization. In a similar way, the backward

initialization can be found. The approximate additive tail-biting decoder is summarized as follows:

• For s = 0, 1, . . . , 2^υ − 1, set α′0(s) = (0, 1) if s = 0, and (−∞, 1) if s ≠ 0, where α′t(s) refers to a normalized forward metric.
• Find a set of normalized vectors α′1, α′2, . . . , α′Λ, where α′t = (α′t(0), α′t(1), . . . , α′t(2^υ − 1)), as follows: first use (5) to find αt, but with αt−1(sl) on the right side of (5) replaced by α′t−1(sl); then set α′t = αt − αt(0). Continue this recursion to find α′t, t = Λ + 1, Λ + 2, . . ., with ϑt(sl, sr) = ϑτ(sl, sr) and τ = (t mod Λ). Stop when ‖α′t − α′t−Λ‖ is sufficiently small, or after a preset number of rounds (we used three rounds). The forward recursion initialization is then given by α0 = α′t.

• Use a similar procedure to find the backward recursion

initialization βΛ.

• Run the decoder in (2).
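The eigenvector view of the tail-biting initialization can be illustrated in the plain probability domain (a generic power-iteration sketch with a hypothetical 2-state one-round matrix, not the (m, s)-domain decoder itself): repeatedly applying the normalized forward recursion around the circular trellis converges to the dominant left eigenvector of the one-round transition matrix.

```python
def tailbiting_init(M, rounds=50):
    """Power iteration: repeatedly apply a -> a*M with renormalization;
    a converges to the dominant left eigenvector of M, the one-round
    (Lambda-section) transition matrix of the circular trellis."""
    n = len(M)
    a = [1.0] + [0.0] * (n - 1)      # arbitrary starting initialization
    for _ in range(rounds):
        a = [sum(a[i] * M[i][j] for i in range(n)) for j in range(n)]
        total = sum(a)
        a = [v / total for v in a]   # normalize each round
    return a

# hypothetical 2-state one-round matrix (rows sum to 1 for simplicity)
M = [[0.9, 0.1], [0.3, 0.7]]
a = tailbiting_init(M)
aM = [sum(a[i] * M[i][j] for i in range(2)) for j in range(2)]
total = sum(aM)
assert all(abs(aM[j] / total - a[j]) < 1e-9 for j in range(2))  # fixed point
```

Subtracting αt(0) in the log domain plays the same role as the division by the vector sum here: both are normalizations that leave the eigenvector direction unchanged.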


IV. AN ALTERNATIVE DECODER FOR DG-LDPC CODES

As mentioned in the Introduction, one way to design a

generalized LDPC code so that the rate is not too low is to use

rate-(κ − 1)/κ RSC codes in place of simple SPC constraint

nodes and another is to use rate-k/n block codes in place of

variable nodes (which represent repetition codes). However,

the presence of such super-VNs necessitates modification of

the belief propagation decoder at the variable nodes. In this

section, we will show how to represent a DG-LDPC code by

the G-graph of Fig. 1, that is, transform a DG-graph into a G-graph.

Thus, we obtain a graph with standard (repetition code)

variable nodes. We will then show how a modified-schedule

decoder based on the G-graph, but equivalent in performance

to the DG-graph decoder, may be obtained.

We first assume that the encoders corresponding to the

super-VN’s in the DG-LDPC graph are systematic. Now

consider the DG-subgraph in Fig. 5(a) which contains a super-

VN corresponding to a systematic (4,2) code which we denote

by C. By “subgraph” we mean that only those nodes and

edges pertinent to the discussion are drawn. Because the (4,2)

code is systematic, due to the introduction of dummy nodes

along the outgoing edges of the super-VN, we are able to

manipulate the DG-subgraph to the alternative representation

shown in Fig. 5(b). Lastly, in Fig. 5(c), we see how the

representation of Fig. 5(b) can also be put in a standard G-

LDPC Tanner graph representation. We remark that the notion

of introducing an alternative graphical representation of a

code in which additional CNs and VNs are introduced was

previously considered in [20] and [21]. See also [22] for so-called code-to-code and code-to-bit representations.

The two equivalent graphical representations of a DG-LDPC

code in Fig. 5(a) and Fig. 5(c) imply equivalence in the

sense that they correspond to the same set of codewords.

However, they do not imply equivalent iterative decoders. In

fact, if the flooding schedule was employed in each of the

iterative decoders corresponding to these graphs, the various

decoder metrics would differ somewhat after each iteration,

although the decoder output decisions would almost always

agree after a sufficient number of iterations. This is because

the G-graph decoder converges more slowly as explained in

the next paragraph. (These points will also be demonstrated

in the next section.) However, as we will show, there exists

a modified-schedule iterative decoder operating on the G-graph that is equivalent (identical metrics and bit decisions) to a flooding-schedule iterative decoder operating on the DG-graph.

The only difference between the DG-graph representation in

Fig. 5(a) and the G-graph representation in Fig. 5(c) is that

in the first case there is a single edge between super-VN C and each of the CNs W, X, Y, and Z, whereas in the latter case there are two edges between the super-CN C and the CNs W, X, Y, and Z.

Fig. 5. Transformation from DG-graph to G-graph.

Thus, in the first case, it requires one

half-iteration to send a message between node C and any of

its neighbor-CNs, whereas in the latter case two half-iterations

are required. The implication of this is that flooding-schedule

decoders based on the two graphs will yield slightly different results.


However, it is possible to adjust the decoding schedule of the G-graph decoder so that it is equivalent to the flooding-schedule DG-graph decoder (under the assumption that the

corresponding nodal decoders for the two graphs are identical).

The modified schedule G-graph decoder effectively accelerates

the message-passing between node C and CNs W, X, Y, and

Z so that these messages are passed in one half-iteration as

in the DG-graph case. That is, the two edges between CN C

and CN W are treated as one edge, and similarly for the two

edges between CN C and each of the CNs X, Y, and Z. For

the case of nodes W and X, channel LLRs are added to the

messages coming from node C (see Fig. 5(c)), whereas the

messages are unaltered for the cases of nodes Y and Z.

As an example, we designed a (1800,960) DG-LDPC code.

The DG-graph has 450 super-VNs based on the (7,4) Hamming

code and 210 super-CNs based on the (15,11) Hamming

code. The DG-LDPC code was obtained from 30 copies of a

protograph consisting of 15 Hamming (7,4) super-VNs and

7 Hamming (15,11) super-CNs. The protograph expansion

was performed by progressive edge-growth (PEG) [27] using

cyclic edge permutations. The (1800,960) DG-LDPC code is

therefore quasi-cyclic. The code’s performance using both the

flooding-schedule decoder and modified-schedule decoder on

the G-graph is presented in the next section.
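One way to check the stated (1800, 960) parameters is a degrees-of-freedom count (our own bookkeeping, assuming each (7,4) super-VN contributes its 4 systematic bits to the codeword and 7 edge bits, each (15,11) super-CN imposes 15 − 11 = 4 checks, and all constraints are linearly independent):

```python
copies = 30
vns, cns = 15 * copies, 7 * copies   # 450 (7,4) super-VNs, 210 (15,11) super-CNs
assert (vns, cns) == (450, 210)

edge_bits = vns * 7                  # each (7,4) super-VN has 7 edges
assert edge_bits == cns * 15         # each (15,11) super-CN has degree 15

constraints = vns * (7 - 4) + cns * (15 - 11)
k = edge_bits - constraints          # free dimensions left by the checks
n = vns * 4                          # systematic bits form the codeword
assert (n, k) == (1800, 960)
```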


V. SIMULATION RESULTS

In this section we present the performance of the codes

described earlier in the paper. The performance of the

(8160,4080) G-LDPC code based on the protograph in Fig.

3 is shown in Fig. 6. The maximum number of decoding

iterations used was Imax = 50. The performance is compared

to the random coding bound (RCB) [28] for (8160,4080)

codes and it is seen that it is about 0.3 dB from the RCB

at a frame error rate (FER) of 7 × 10^−4. The FER curve has

an error floor near 10^−4. However, its bit error rate (BER)

curve shows that it is very good in both the waterfall and floor

regions. Compared to the BER of the (8192,4096) AR4JA

code [25], the BER of this code has a gain of about 0.3 dB

down to BER = 10^−7.

In Fig. 7 we present the performance (with Imax = 10)

of the (576,288) G-LDPC code based on the protograph in

Fig. 4. The FER curve is about 1.8 dB from the RCB and

has no floor down to 10^−4, which is excellent considering its

short blocklength. Also, this code provides unequal error

protection for the two types of information bits, labeled i1

and i2 in Fig. 4. As seen in Fig. 7, there is a factor of five

difference in BER for the two bit types.


Fig. 6. Performance of the (8160, 4080) G-LDPC code (BER and FER), compared with that of the (8192, 4096) AR4JA code (BER) and the RCB, versus Eb/No [dB].



Fig. 7. Performance of the (576, 288) G-LDPC code (BER of i1 and i2), versus Eb/No [dB].


Fig. 8. Performance of the (1800, 960) DG-LDPC code using the flooding-schedule and modified-schedule decoders, both based on the code’s G-graph (FER with 20 and 200 iterations), versus Eb/No [dB].


The (1800,960) DG-LDPC code performance has been simulated

using the modified-schedule G-graph iterative decoder

(equivalent to flooding-schedule DG-graph decoder) and the

flooding-schedule G-graph iterative decoder. In both cases,

we allowed 20 and 200 iterations. The FER performance is

depicted in Fig. 8 along with the RCB for these code parameters.

We observe that, for 200 iterations, the two decoders

have identical performance. However, for 20 iterations, the

modified-schedule G-graph decoder provides superior performance,

confirming our point on the faster convergence of

the DG-graph decoder (equivalently, the modified-schedule G-graph decoder). We also observe excellent performance at low error rates for such a simple, regular DG-LDPC code: the FER curves show no floor down to FER ≃ 4 × 10^−7.


ACKNOWLEDGMENT

The authors would like to thank Marc Fossorier and Yige

Wang of the University of Hawaii for a preprint of [13] and

for interesting discussions.


REFERENCES

[1] R. M. Tanner, “A recursive approach to low complexity codes,” IEEE

Trans. on Inform. Theory, vol. 27, pp. 533–547, September 1981.

[2] J. Boutros, O. Pothier, and G. Zemor, “Generalized low density (Tanner)

codes,” in IEEE Int. Conf. on Commun., ICC ’99, pp. 441–445, June 1999.


[3] M. Lentmaier and K. S. Zigangirov, “Iterative decoding of generalized

low-density parity-check codes,” in IEEE Int. Symp. on Inform. Theory,

p. 149, August 1998.

[4] N. Miladinovic and M. Fossorier, “Generalized LDPC codes with Reed-

Solomon and BCH codes as component codes for binary channels,” in

IEEE Global Telecommunications Conf., GLOBECOM ’05, November 2005.


[5] S. Vialle and J. Boutros, “A Gallager-Tanner construction based on

convolutional codes,” in Proceedings of Int. Workshop on Coding and

Cryptography, WCC’99, pp. 393–404, January 1999.

[6] R. M. Tanner, “A hybrid coding scheme for the Gilbert-Elliot channel,”

in Proc. of the 42nd Annual Allerton Conf. on Commun., Control, and

Computing, Illinois, September 2004.

[7] G. Liva and W. E. Ryan, “Short low-error-floor Tanner codes with

Hamming nodes,” in IEEE Military Commun. Conf., MILCOM ’05.


[8] G. Liva, W. E. Ryan, and M. Chiani, “Design of quasi-cyclic Tanner

codes with low error floors,” in 4th Int. Symp. on Turbo Codes, ISTC-

2006, April 2006.

[9] J. Thorpe, “Low-density parity-check (LDPC) codes constructed from

protographs,” Tech. Rep. 42-154, IPN Progress Report, August 2003.

[10] S. Abu-Surra, G. Liva, and W. E. Ryan, “Design and performance

of selected classes of Tanner codes,” UCSD Workshop

on Information Theory and Its Applications, February 2006.

[11] S. Abu-Surra, G. Liva, and W. E. Ryan, “Low-floor Tanner codes via

Hamming-node or RSCC-node doping,” Lecture Notes in Computer

Science, (Proc. of the 16th AAECC), vol. 3857, pp. 245–254, February 2006.


[12] A. Ashikhmin, G. Kramer, and S. ten Brink, “Extrinsic information

transfer functions: Model and erasure channel properties,” IEEE Trans. on Inform. Theory, pp. 2657–2673, November 2004.

[13] Y. Wang and M. Fossorier, “Doubly generalized LDPC codes,” IEEE

Int. Symp. on Inform. Theory, Seattle, WA, July 2006.

[14] S. Riedel, “Symbol-by-symbol MAP decoding algorithm for high-rate

convolutional codes that use reciprocal dual codes,” IEEE J. on Select.

Areas in Commun., vol. 16, pp. 175–185, February 1998.

[15] A. Graell i Amat, G. Montorsi, and S. Benedetto, “Design and decoding

of optimal high-rate convolutional codes,” IEEE Trans. on Inform.

Theory, vol. 50, pp. 867–881, May 2004.

[16] J. B. Anderson and S. M. Hladik, “Tailbiting MAP decoder,” IEEE J.

on Select. Areas in Commun., vol. 16, pp. 297–302, February 1998.

[17] C. Weiß and J. Berkmann, “Suboptimum MAP-decoding of tail-biting

codes using the dual trellis,” in Proc. 3rd ITG conf. source and channel

coding, Munich, Germany, pp. 199–204, January 2000.

[18] C. Weiß and C. Bettstetter, “Code construction and decoding of parallel

concatenated tail-biting codes,” IEEE Trans. on Inform. Theory, vol. 47,

pp. 366–386, January 2001.

[19] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of

linear codes for minimizing symbol error rate,” IEEE Trans. on Inform.

Theory, vol. 20, pp. 284–287, March 1974.

[20] J. Yedidia, J. Chen and M. Fossorier, “Generating Code Representations

Suitable for Belief Propagation Decoding,” in Proc. of the 40th Annual Allerton Conf. on Commun., Control, and Computing, Monticello, IL, October 2002.

[21] S. Sankaranarayanan and B. Vasic, “Iterative Decoding of Linear Block

Codes: A Parity-Check Orthogonalization Approach,” IEEE Trans. on Inform. Theory, pp. 3347–3353, September 2005.

[22] J. Chen and R. M. Tanner, “A hybrid coding scheme for the Gilbert-

Elliot channel,” 42nd Allerton Conference on Communication, Control

and Computing, Monticello, IL, Sept. 2004.

[23] S. ten Brink, G. Kramer, and A. Ashikhmin, “Design of low-density

parity-check codes for modulation and detection,” IEEE Trans. on

Commun., vol. 52, pp. 670–678, April 2004.

[24] S. Lin, J. Xu, I. Djurdjevic, and H. Tang, “Hybrid construction of LDPC

codes,” in Proc. of the 40th Annual Allerton Conf. on Commun., Control,

and Computing, Illinois, October 2002.

[25] D. Divsalar, C. Jones, S. Dolinar, and J. Thorpe, “Protograph based LDPC

codes with minimum distance linearly growing with block size,” in

IEEE Global Telecommunications Conf., GLOBECOM ’05, pp. 1152–

1156, November 2005.

[26] J. Berkmann and C. Weiß, “On dualizing trellis-based APP decoding algorithms,”

IEEE Trans. on Commun., vol. 50, pp. 1743–1757, November 2002.


[27] X. Y. Hu, E. Eleftheriou, and D. M. Arnold, “Progressive edge-growth

Tanner graphs,” in IEEE Global Telecommunications Conf., GLOBE-

COM ’01, pp. 995–1001, November 2001.

[28] R. G. Gallager, Information Theory and Reliable Communication. New

York: Wiley, 1968.
