NOVEL DVB-RCS STANDARD TURBO CODE:

DETAILS AND PERFORMANCES OF A DECODING ALGORITHM

D. Giancristofaro, A. Bartolazzi
Alenia Aerospazio,
Via Saccomuro 24, 00131 Roma, ITALY
Tel. 06-4151 2669, Fax. 06-4151 2507
Email: d.giancristofaro@roma.alespazio.it; a.bartolazzi@roma.alespazio.it

INTRODUCTION

The aim of the present paper is to describe a turbo decoding algorithm for the turbo codes [1] standardised for DVB-RCS [2], and to present the associated design architecture, implemented in the Very high speed integrated circuit Hardware Description Language (VHDL) and synthesised on a specific Field Programmable Gate Array (FPGA). Among other topics, the paper addresses the particularisation of the turbo decoding algorithm for non-binary trellises, the exploitation of the cyclic-state feature (tail-biting) [2], and the simulation performance of both the decoding algorithm and the VHDL design, which has been developed by the authors at Alenia Spazio.

Previously, a decoder based on the DVB-RCS Turbo Code was manufactured by Alenia Spazio with the support of ENST and TURBOCONCEPT; in particular, the latter provided the VHDL design of the Turbo Decoder for synthesis in an Application Specific Integrated Circuit (ASIC), manufactured by ATMEL and integrated in the Skyplex [3] units at Alenia Spazio premises. This development was realised in the framework of the ARTES III "Enhanced Skyplex" programme, co-funded by ESA (European Space Agency) and Alenia Spazio. Besides turbo decoding, the 'Enhanced Skyplex' unit includes other technical innovations. This unit will be flown on the Eutelsat Hot Bird 6 satellite. The other Skyplex units aboard Eutelsat Hot Bird 6 will also utilise the Turbo Decoders, thus improving the up-link performance. The coding scheme implemented in the Skyplex modules allows a substantial reduction of the up-link EIRP of the service-provider stations. The coding rates implemented in the Skyplex programme are R = 4/5 and R = 6/7, for the SCPC and TDMA modes of operation respectively. The operating point on the channel has moved from Eb/N0 = 10.6 dB for the 'classical' Skyplex down to 5.5 dB for SCPC and 6.8 dB for TDMA, hence requiring a reassessment of all of the Skyplex synchronisation sections. The algorithm presented in this paper, which has been developed independently at Alenia, yields performance results that essentially match those of the TURBOCONCEPT implementation used in Skyplex. The analysis of the differences between the two algorithms may be the subject of a future activity.

CODE FEATURES

The turbo code subject of the present analysis is described in the DVB-RCS standard document [2]; it will be addressed only briefly here, and the interested reader should refer to [2]. The encoder is composed of two identical non-binary Recursive Systematic Convolutional (RSC) encoders, shown in Fig. 1, named component encoders; they accept a two-bit input (A_k, B_k) at each clock cycle, with A_k, B_k ∈ {0, 1}. As shown in Fig. 1, in the turbo encoder operation data are encoded a first time, then permuted (i.e. interleaved) and encoded again. The codeword contains the sequence of input couples (systematic part) plus both sequences of redundancy couples Y_k and W_k, with Y_k, W_k ∈ {0, 1} and 1 ≤ k ≤ N, produced by the first and the second encoding processes. Since each systematic couple (A_k, B_k) is complemented by four redundant bits, namely two (Y_k, W_k) produced with the data sequence in natural order and two produced by encoding the interleaved data sequence, a minimum coding rate of 1/3 is obtained by the turbo encoder (the systematic couple produced in the second encoding process is not transmitted).
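As an illustration of the encoding flow just described (not the standardised circuit: the actual feedback and parity polynomials and the permutation law are specified in [2], and the circular-state initialisation is omitted here), a minimal sketch with hypothetical taps for the three-bit register is:

```python
# Double-binary RSC component encoder sketch. The taps below are
# hypothetical placeholders; the real DVB-RCS polynomials are in [2].
def rsc_step(state, a, b):
    s1, s2, s3 = state
    fb = a ^ b ^ s1 ^ s3      # feedback bit (assumed taps)
    y = fb ^ s2               # parity bit Y (assumed tap)
    w = fb ^ s1 ^ s3          # parity bit W (assumed taps)
    return (fb, s1, s2), (y, w)

def turbo_encode(couples, perm):
    """Encode the couples twice: in natural order and in permuted order.

    Returns the systematic couples plus both parity streams, i.e. 6 bits
    per input couple before puncturing -> minimum rate 1/3.
    """
    parity = []
    for order in (range(len(couples)), perm):
        state = (0, 0, 0)     # tail-biting initialisation omitted here
        stream = []
        for i in order:
            a, b = couples[i]
            state, yw = rsc_step(state, a, b)
            stream.append(yw)
        parity.append(stream)
    return couples, parity[0], parity[1]
```

Puncturing then removes part of the parity couples to reach the higher rates (4/5, 6/7) used in Skyplex.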

Fig. 1. Component encoder with binary couple input A_k, B_k (left figure) and turbo encoder scheme (right figure): state registers S1 S2 S3, parity outputs Y_k and W_k, permutation of the input couples and puncturing of the redundancy bits to form the codeword.
The puncturing mechanism is illustrated in [2]. As proposed in [2], circular coding (or tail-biting) has been adopted: with circular convolutional codes, the encoder retrieves its initial state at the end of the encoding operation.

TURBO DECODING SCHEME

Turbo decoding is an iterative process using twin Soft-Input Soft-Output (SISO) modules [4], capable of passing soft information for each single group of systematic bits (A_k, B_k), which is named a couple. The SISO modules, particularised for the non-binary code trellis used here, are structured as shown in Fig. 2, where the 3 inputs are:

1) the A Priori Multidimensional Probability (APrMP) for the couples (A_k, B_k), which will be introduced in a later paragraph;

2) the uncoded couple of bits, affected by noise;

3) the parity bits, affected by noise.

The above a priori probability is set to an initialisation value at the first decoder iteration, but is then passed in by the other component decoder at subsequent iterations, constituting the main means for building the iterative decoding algorithm, as shown in Fig. 3. The SISO decoders DEC 1 and DEC 2, both shown in Fig. 3, produce the A Posteriori Multidimensional Probabilities (APoMP; in the literature a similar quantity is known as extrinsic information). The inputs for DEC 2 are: the APrMP, as estimated using the APoMP output by DEC 1; the redundant bits of the second encoding process; and the interleaved systematic bits. The DEC 2 output is then sent back to DEC 1 as feedback, so the above process can be iterated. Each SISO decoder could, in theory, internally run the BCJR (Bahl, Cocke, Jelinek and Raviv) algorithm [5], but simpler implementations are possible, e.g. the Max Log MAP (or Dual Viterbi) algorithm [6].

STARTING FROM BCJR ALGORITHM

During the first encoding process, with data in natural order, the encoder shown in Fig. 1 receives as input a sequence of bit couples u_k, with k from 1 to N, composed as follows:

u_k = (A_k, B_k)    (1)

For each individual input couple, e.g. for couple k, the encoder produces as output the following set of bits, divided into a systematic (superscript s) and a parity (superscript p) part:

Y_k = (A_k^s, B_k^s, Y_k^p, W_k^p)    (2)

The same applies to the second encoding process, which is executed after data permutation. To decide which bit couple was transmitted, the decoders shown in the previous paragraph must produce the Multidimensional Log-Likelihood Ratio (MLLR) from the observation of the received version of the turbo encoder output sets Y_k (with k from 1 to N) expressed by (2), whose bits have been transmitted through a memoryless channel affected by AWGN and are affected by mutually independent noise samples. The encoder output has also undergone modulator mapping according to the following relation:

z = 1 − 2q

(3)

where q is the modulator input bit, which assumes values in the set {0,1} while z assumes values in {1,-1}. Hence the

observed samples, at the receiver side, can be expressed in the following form:

a_k = 1 − 2 A_k^s + n_k    (4)

Fig. 2. Component decoder (DEC 1 or DEC 2): a Soft-In/Soft-Out decoder with inputs APrMP(u_k), the noisy systematic couple (a_k, b_k) and the noisy parity couple (y_k, w_k), and output APoMP(u_k).

Fig. 3. Iterative decoder architecture: DEC 1 receives (a_k, b_k) and (y1_k, w1_k); its extrinsic output APoMP_1(u_k) is interleaved and fed to DEC 2 together with the interleaved systematic bits and the redundancy (y2_k, w2_k) from the encoding process with permuted data; the extrinsic output APoMP_2(u_k) of DEC 2 is deinterleaved and fed back to DEC 1, and a decision algorithm produces the estimates â_k, b̂_k.
where n_k is the AWGN noise contribution. A relation similar to (4) also applies to B_k^s, Y_k^p and W_k^p, which underwent the same mapping. Hence, the observed noisy samples relevant to the k-th input couple, as presented to the decoder module dedicated to data in natural order, are indicated as:

y_k = (a_k^s, b_k^s, y_k^p, w_k^p)    (5)

and their sequence will be indicated by y_1^N as follows:

y_1^N = (y_1, ..., y_k, ..., y_N)    (6)
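The mapping (3) and the observation model (4)-(6) can be sketched as follows; the noise standard deviation is an arbitrary illustrative value, not a parameter from the standard:

```python
import random

def observe(encoded_bits, sigma=0.5):
    """Map bits {0,1} to symbols {+1,-1} as in (3) and add AWGN as in (4)."""
    return [1 - 2 * q + random.gauss(0.0, sigma) for q in encoded_bits]

# y_k = (a_k, b_k, y_k^p, w_k^p): noisy observation of one encoded couple
y_k = observe([0, 1, 1, 0])
```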

The turbo decoding algorithm for codes based on a binary trellis requires a single Log-Likelihood Ratio (LLR) [1], and the decision is executed simply using the sign of the LLR. When non-binary codes are used, a Multidimensional LLR (MLLR), indicated as L(u_k), must instead be defined: it can be formed by taking all the arrangements, without repetition, of four conditional probabilities over two places (the numerator and the denominator of an LLR). The conditional probabilities, whose arrangements will be formed, are the following:

P_0,k = P(u_k = (0,0) | y_1^N) ;  P_1,k = P(u_k = (0,1) | y_1^N) ;  P_2,k = P(u_k = (1,0) | y_1^N) ;  P_3,k = P(u_k = (1,1) | y_1^N)    (7)

For the present case, the total number of LLRs constituting the components of the MLLR is 12. However, all of the useful information is contained in a smaller number of LLRs, equal to 3. Hence, the MLLR can be defined using only the three following scalar LLRs:

L_A(u_k) ≜ ln [ P(u_k = (0,0) | y_1^N) / P(u_k = (0,1) | y_1^N) ] ;
L_B(u_k) ≜ ln [ P(u_k = (0,0) | y_1^N) / P(u_k = (1,0) | y_1^N) ] ;
L_C(u_k) ≜ ln [ P(u_k = (0,0) | y_1^N) / P(u_k = (1,1) | y_1^N) ]    (8)

Hence, the MLLR is defined as:

L(u_k) ≜ [ L_A(u_k), L_B(u_k), L_C(u_k) ]    (9)
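After the last iteration, the decision selects the couple with the largest a posteriori probability. Since each component of (9) is referred to the couple (0,0), the relative log-probabilities of the four couples are 0, −L_A, −L_B and −L_C; a minimal sketch of the decision rule is:

```python
def decide(L_A, L_B, L_C):
    """Return the couple (A_k, B_k) maximising the a posteriori probability,
    given the three MLLR components of (9)."""
    logs = {(0, 0): 0.0, (0, 1): -L_A, (1, 0): -L_B, (1, 1): -L_C}
    return max(logs, key=logs.get)
```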

Once the above MLLR has been calculated, the decision on the bit couple is executed after the last iteration. Exploiting knowledge of the code trellis, each individual LLR can be written in the following form:

L_A(u_k) ≜ ln [ ( Σ_{S00} p(s_{k−1} = s', s_k = s, y_1^N) / p(y_1^N) ) / ( Σ_{S01} p(s_{k−1} = s', s_k = s, y_1^N) / p(y_1^N) ) ]    (10)

where s_k ∈ S, with S the set of all the 2^3 constituent-encoder states, is the state of the encoder at time k, and S_01 is the set of pairs (s', s) corresponding to all state transitions (s_{k−1} = s') → (s_k = s) caused by an input couple u_k = (0,1). The set S_00 is similarly defined for u_k = (0,0). The derivation of the other two terms of the MLLR of (9) is perfectly similar and will hence be omitted. In (10), the term p(y_1^N) can be cancelled (actually, in [6] it is preferred to isolate and eliminate the same term at both the numerator and the denominator). Hence, the remaining term can be calculated using the modified BCJR or Max Log MAP algorithm. The BCJR algorithm, which also assumes a memoryless transmission channel, states:

p(s_{k−1} = s', s_k = s, y_1^N) = α_{k−1}(s') γ_k(s', s) β_k(s)    (11)

in which:

α_k(s) ≜ p(s_k = s, y_1^k) ;  β_k(s) ≜ p(y_{k+1}^N | s_k = s) ;  γ_k(s', s) ≜ p(s_k = s, y_k | s_{k−1} = s')    (12)

α_k(s) is a measure (also named metric) of s based on the history previous to time k, and β_k(s) is another measure of s based on the future history; in other words, both α_k(s) and β_k(s) are quantities proportional to the probability of the encoder assuming state s at time k. The measure γ_k(s', s) is based on the present history. Two iterative relations can be obtained for α_k(s) and β_k(s) [1],[7]:

α_k(s) = Σ_{s'∈S} α_{k−1}(s') γ_k(s', s) ;  β_{k−1}(s') = Σ_{s∈S} β_k(s) γ_k(s', s)    (13)

Hence, cancelling common terms at the numerator and denominator, the LLR in (10) can be written in the following manner:

L_A(u_k) = ln [ Σ_{S00} α_{k−1}(s') γ_k(s', s) β_k(s) / Σ_{S01} α_{k−1}(s') γ_k(s', s) β_k(s) ]    (14)

in which the sums are executed over the state sets S_00 (numerator) and S_01 (denominator) already defined above. Similar relations hold for the other components of the MLLR. After the initialisation, the extrinsic information, whose expression will be defined shortly, is produced at the output of a decoder stage in its logarithmic version. Instead of the likelihood ratios (MLLR), some simplifications are obtained using the strictly related APoMP, which will be provided as input to the following decoding stage, being required in the calculation of γ_k(s', s):

γ_k(s', s) = p(s_k = s, y_k | s_{k−1} = s') = P(s | s') p(y_k | s', s) = P_k(u) p(y_k | s', s)    (15)

in which P_k(u) corresponds to the probability of u_k, the input potentially causing the state transition s' → s at time k. In order to calculate γ_k(s', s), the code trellis must be used. In the Max Log MAP algorithm, not the above APoMP but rather their logarithms are calculated, via approximate relations using only additions and maximum searches.

MAX LOG MAP ALGORITHM

The simplified algorithm proposed here is based on the Max Log MAP algorithm [7]. Both the Max Log MAP algorithm and some simplifications that will be derived in this paper exploit the following approximation:

ln Σ_j e^{a_j} ≈ max_j (a_j)    (16)

hence, the calculations of α_k(s), β_k(s) and γ_k(s', s) are executed in a simplified manner, according to the Max Log MAP algorithm. All the quantities are substituted by their logarithms; using (13) and (15), from (16) we obtain:

c_k(s', s) ≜ ln γ_k(s', s)    (17)

a_k(s) ≜ ln [α_k(s)] = ln Σ_{s'∈S} exp[ a_{k−1}(s') + c_k(s', s) ] ≅ max_{s'∈S} [ a_{k−1}(s') + c_k(s', s) ]    (18)

b_{k−1}(s') ≜ ln [β_{k−1}(s')] = ln Σ_{s∈S} exp[ b_k(s) + c_k(s', s) ] ≅ max_{s∈S} [ b_k(s) + c_k(s', s) ]    (19)

L_A(u_k) ≈ max_{s',s: u_k=(0,0)} [ a_{k−1}(s') + c_k(s', s) + b_k(s) ] − max_{s',s: u_k=(0,1)} [ a_{k−1}(s') + c_k(s', s) + b_k(s) ]    (20)
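A minimal sketch of the forward update (18) on a generic trellis follows; the transition table and branch metrics are illustrative inputs, not the DVB-RCS trellis:

```python
def forward_step(a_prev, transitions, c_k):
    """One step of (18): a_k(s) = max, over incoming branches (s' -> s),
    of a_{k-1}(s') + c_k(s', s), using only additions and maximum searches.

    a_prev: dict state -> metric; transitions: list of (s_prev, s_next);
    c_k: dict (s_prev, s_next) -> branch metric ln gamma_k.
    """
    a_next = {}
    for s_prev, s_next in transitions:
        m = a_prev[s_prev] + c_k[(s_prev, s_next)]
        if s_next not in a_next or m > a_next[s_next]:
            a_next[s_next] = m
    return a_next
```

The backward update (19) is symmetrical, running over the branches leaving each state s'.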

ITERATIVE DECODING ALGORITHM

Using the previous definitions and Bayes' theorem, it is easy to obtain the following relation:

L_A(u_k) = ln [ p(y_1^N | u_k = (0,0)) / p(y_1^N | u_k = (0,1)) ] + ln [ P(u_k = (0,0)) / P(u_k = (0,1)) ]    (21)

and consequently to define the ALLR (A Priori Log-Likelihood Ratio):

L_A^p(u_k) ≜ ln [ P(u_k = (0,0)) / P(u_k = (0,1)) ]    (22)

The superscript p in the ALLR indicates that the information is a priori. Two additional analogous ALLRs can be defined following the same procedure, for the subscripts B and C. Together, they constitute the Multidimensional ALLR (MALLR). The components of the MALLR can be listed in the following form:

L_A^p(u_k) = ln(P_0,k) − ln(P_1,k) ;  L_B^p(u_k) = ln(P_0,k) − ln(P_2,k) ;  L_C^p(u_k) = ln(P_0,k) − ln(P_3,k)    (23)

in which the following notation has been used:

P_0,k = P(A_k = 0, B_k = 0) ;  P_1,k = P(A_k = 0, B_k = 1) ;  P_2,k = P(A_k = 1, B_k = 0) ;  P_3,k = P(A_k = 1, B_k = 1)    (24)

Furthermore, (21) can be written in the following manner, which is relevant to the MAP algorithm but can be written analogously for the Max Log MAP algorithm [7]:

L_A(u_k) = −L_c [ a_k (A_Num − A_Den) + b_k (B_Num − B_Den) ] + ln( P_0,k / P_1,k )
           + ln [ Σ_{S00} α_{k−1}(s') exp{ (L_c/2) [ y_00 y_k^p + w_00 w_k^p ] } β_k(s) / Σ_{S01} α_{k−1}(s') exp{ (L_c/2) [ y_01 y_k^p + w_01 w_k^p ] } β_k(s) ]    (25)
where y_00, y_01, w_00 and w_01 are obtained by exploiting the knowledge of the code trellis; they are equal to ±1. The coefficients A_Num, A_Den, B_Num and B_Den depend on the specific likelihood component considered: for L_A(u_k), L_B(u_k) and L_C(u_k), following the above order, they are respectively equal to [0,0,0,1], [0,1,0,0] and [0,1,0,1]. In (25), the first term of the right-hand side is the channel value, the second term is the a priori information (the input of the considered decoder) and the third is the extrinsic information calculated by the considered decoder. The last two terms improve the poor (at low Eb/N0) estimate obtainable from the channel-value symbols alone. In particular, the third term is the component decoder output and will be fed as input to the next decoder; it is indicated by L_A^e(u_k). However, instead of the extrinsic log-likelihood ratios, the use of the APoMP has been preferred: they are an estimate of the a priori probabilities (24), as obtained using the extrinsic information. As a matter of fact, if we define:

L_A^e = ln(P_0) − ln(P_1) ;  L_B^e = ln(P_0) − ln(P_2) ;  L_C^e = ln(P_0) − ln(P_3)    (26)

it is easy to get to the following approximations for the probabilities defined in (26):

ln(P_0) = max[ L_A^e, L_B^e, L_C^e, 0 ]
ln(P_1) = max[ −L_A^e, −L_A^e + L_B^e, −L_A^e + L_C^e, 0 ]
ln(P_2) = max[ −L_B^e, −L_B^e + L_A^e, −L_B^e + L_C^e, 0 ]
ln(P_3) = max[ −L_C^e, −L_C^e + L_A^e, −L_C^e + L_B^e, 0 ]    (27)
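Relations (27) map directly onto code; as a check, the pairwise differences ln(P_0) − ln(P_i) of the computed values reproduce the extrinsic LLRs of (26):

```python
def apomp(L_A, L_B, L_C):
    """Log-domain APoMP vector [ln P0, ln P1, ln P2, ln P3] computed from
    the extrinsic LLRs via the maximum searches of (27)."""
    return [
        max(L_A, L_B, L_C, 0.0),
        max(-L_A, -L_A + L_B, -L_A + L_C, 0.0),
        max(-L_B, -L_B + L_A, -L_B + L_C, 0.0),
        max(-L_C, -L_C + L_A, -L_C + L_B, 0.0),
    ]
```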

These are the logarithmic versions of the probabilities of interest, that is, the APoMP. Each decoder outputs them in a single vector of four components:

APoMP = [ ln(P_0), ln(P_1), ln(P_2), ln(P_3) ]    (28)

In [9] it is demonstrated that performance can be improved if a scaling factor for the extrinsic information is adopted. This factor (ranging from 0.7 to 0.9, and either constant or modified throughout the iterations) provides a 0.1~0.3 dB performance improvement compared with standard Max Log MAP decoding. Furthermore, it is possible to demonstrate that the Max Log MAP decoding algorithm is independent of the SNR.
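The scaling of [9] amounts to one multiplication per extrinsic component before it is passed to the companion decoder; the factor 0.75 below is one arbitrary choice inside the 0.7-0.9 range quoted above:

```python
def scale_extrinsic(apomp_vec, factor=0.75):
    """Scale the log-domain extrinsic values, as suggested in [9], before
    feeding them to the other component decoder as a priori information."""
    return [factor * v for v in apomp_vec]
```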

CIRCULAR TRELLIS DECODING AND MEMORY SIZE REDUCTION

The circular closure of the trellis, allowed by the tail-biting technique [1], permits decoding with neither degradation of the initial and final sections of the data nor introduction of additional bits. Additionally, the decoding algorithm has been developed using a substantially reduced amount of memory when compared with the original
Fig. 4. Circular trellis closure, prologues for metric convergence and memory-reduction strategy: the forward processor's prologue leads to a valid 0th state metric and the backward processor's prologue leads to a valid 752nd state metric, in both cases after 56 transitions; the backward processor performs a preliminary computation and sub-block re-computation over segments of the 752-couple frame, with stored block initial metrics and S_752 = S_0.

Max Log MAP algorithm. This decoding procedure, developed and verified, has been presented in a previous paper [10], and its execution flow is depicted in Fig. 4. Two processors are used in decoding, a backward and a forward processor; an analogous operating strategy was described in [6], but without the use of cyclic-state encoding. Decoding is based on the use of a prologue, aimed at estimating the initial state from the last part of the frame by exploiting the trellis closure.
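The prologue idea of Fig. 4 can be sketched as follows; `metric_step` stands in for the generic forward metric update, and the frame and prologue lengths (752 and 56) are those of the figure:

```python
def prologue_metrics(metric_step, branch_metrics, n_frame=752, n_prologue=56):
    """Estimate the circular initial state metrics by running the forward
    recursion over the LAST n_prologue sections of the frame: thanks to
    tail-biting, the state after section n_frame equals the state before
    section 1, so the converged metrics initialise the regular decoding."""
    metrics = {s: 0.0 for s in range(8)}   # 2^3 states, uniform start
    for k in range(n_frame - n_prologue, n_frame):
        metrics = metric_step(metrics, branch_metrics[k])
    return metrics
```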

VHDL DESIGN DETAILS

The present section describes the VHDL design implementation. Since the size of the acquisition RAMs is a function of the code rate R, they have to be sized for the worst case, that is R = 1/3; the configuration of R (puncturing selection) can then be defined by the microcontroller section (see Fig. 5). The designed decoder is composed of a microcontroller, RAMs and computation blocks (see Fig. 7). The microcontroller controls the acquisition and decoding of each single frame. It is composed of two finite state machines (FSMs), the first controlling the storage of the frame alternately in memory block 1 or 2, and the other controlling all of the actions necessary for the decision on every bit couple. These FSMs must be synchronised, because the memory RAMs are used in a "ping-pong" fashion.

The microcontroller also includes counters, buffers, the interleaver and ROMs for the configuration of the acquisition operations (i.e., the puncturing algorithms). The RAMs are divided into two kinds: one is used for storing the frames coming from the demodulator with a "ping-pong" management (acquisition/elaboration), while the other is used for the decoding operations and for storing the APP variables. The size of the latter kind of RAM is rather small. To achieve this, the single data frame has been partitioned into segments during processing, as described in [10]. The initial beta RAMs are used for storing only 27 groups of the beta variables during the backward process of decoders A and B, whereas the elaboration alpha and beta RAMs are used during the elaboration of the APP for every frame segment.

There is no RAM dedicated to the storage of the gamma variables, as this would require storing 32 values for every state transition. Rather, the gamma variables are calculated simultaneously with the elaboration of the alpha or beta variables. The computation blocks compute alpha, beta, gamma and the APP. The structure of these blocks is simple and mostly based on combinational logic. The alpha and beta registers are used for the iterative computation of the alpha and beta variables, whereas the prologue alpha registers 1 and 2 are used for the alphas' initial values. The prologue length, code rate and number of decoding iterations can be set by the microcontroller.

The number of bits used for the quantisation of the algorithm's variables has been optimised. Considering a normalised voltage input, as would result from an automatic gain control, the value of the LSB of the internal binary representation has been optimised via simulation, identifying the best value range as 0.12 to 0.14 (see Fig. 6). The input quantity is represented with only 4 bits. Complexity results are shown in Tab. 1. A Xilinx Virtex1000BG560 is the FPGA used: it is endowed with 6144 configurable logic blocks (CLBs), whose structure is shown in Fig. 8, and 32 RAM blocks (known as EABs, each of 4 Kbytes), and it can operate with a clock above 200 MHz.
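As a sketch of the input quantisation (4 bits; the LSB value 0.13 is an assumed working point inside the optimum 0.12-0.14 range found by simulation):

```python
def quantise(x, lsb=0.13, bits=4):
    """Uniform quantiser: round the normalised input to the nearest LSB
    multiple and clip it to the 4-bit two's-complement level range."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    level = max(lo, min(hi, round(x / lsb)))
    return level * lsb
```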


CK_ Dem

CK_Decoder

Address

ROM

Acquisition

FSM

CE

ROM

Elaboration

FSM

Interleaver

ROM

Acquisition

counter

Elaboration

counters

Fig. 5. The microcontroller’s schematic

Input

Frame

M

I

c

r

o

c

o

n

t

r

o

l

l

o

r

Write

Buffer

MICROCONTROLLOR

IMPLEMENTATION

BLOCK RESULTS

Block CLBs

Alpha 261

Beta 266

Gamma 60

APP 281

Decision 55

Microcontroller 462

Control

Bus

SISO

Address

Bus

RAM

Block 1

Memory Frame

Alfa

Vector

Read

Buffer

Ram

Ib1

Alfa

Ram

Ear

RAM

Block 2

Beta

Vector

Gamma

Ram

Ebr

BER

1.E-03

1.E-04

Beta APP

Ram

Ib2

Best

range

1.E-05

0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.20 0.21 0.22 0.23 0.24 0.25 0.26

LSB

Non-quantised

Quantised

Fig. 6. Identification of Best LSB Range

Decision Block

Mux

APP

Metrics memory

Fig. 7. The Turbo decoder’s schematic

OVERALL DECODER

COMPLEXITY

CLB => 48%

EAB => 68%

APP1

Decision

Block

APP1

Memory

APP2

Memory

APP Memory

APP2

Estimated

Sequence

Tab. 1. Implementation and complexity block results Fig. 8. CLB’s structure

SIMULATION RESULTS

The simulations are executed using the Monte Carlo technique. Random packets of 188 bytes are generated and coded. An AWGN satellite channel is considered. The noisy codeword received by the decoder (Fig. 3) is decoded using 6 iterations. The results, relevant to a 6/7 coding rate, address both floating-point and bit-true simulation and are shown in Fig. 9. These high-level-language simulations (FORTRAN) have also been used to provide input signals to the VHDL simulations, which allowed cross-checks at various test points of the decoder architecture, confirming proper operation of the VHDL design. Apart from showing the minimal degradation caused by the quantisation of the internal decoder registers, Fig. 9 also shows the BER improvement introduced by the scale factor applied between iterations [9].
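The Monte Carlo set-up can be skeletonised as follows; `decode_chain` is a stand-in for the full encode/channel/6-iteration-decode chain, and the 188-byte packet size is the one used above:

```python
import random

def ber_montecarlo(n_packets, ebno_db, decode_chain, packet_bytes=188):
    """Estimate the BER by generating random packets, passing them through
    the encode/channel/decode chain and counting residual bit errors."""
    errors = total = 0
    for _ in range(n_packets):
        bits = [random.randint(0, 1) for _ in range(8 * packet_bytes)]
        decoded = decode_chain(bits, ebno_db)
        errors += sum(b != d for b, d in zip(bits, decoded))
        total += len(bits)
    return errors / total
```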


Fig. 9. BER performance versus Eb/No (3.0 to 5.0 dB): bit-true degradation (left side, about 0.125 dB between the 6th-iteration bit-true and floating-point curves, with the channel BER as reference) and scale-factor improvement (right side, about 0.25 dB between the bit-true curves with coeff = 0.8 and coeff = 1).

CONCLUSIONS

The present paper has presented the details of, and shown simulation results for, the decoding algorithm of the DVB-RCS turbo code. The algorithm has been implemented in a high-level language, and its VHDL design has been carried out. The full rationale of the choices made within the design has been given here. Some preliminary comparisons with actual decoder ASIC performance measurements were already presented in [10].

ACKNOWLEDGEMENTS

The authors wish to gratefully thank G. D'Amora and S. Galeota for their contributions during the preparation of their theses.

REFERENCES

[1] C. Berrou, A. Glavieux, P. Thitimajshima: 'Near Shannon Limit error-correcting coding and decoding: Turbo Codes', ICC 1993, May 1993, Geneva, Switzerland, pp. 1064-1070.

[2] Digital Video Broadcasting (DVB); Interaction Channel for Satellite Distribution Systems; TM2267r3 DVB-

RCS001 rev12 (11 Feb. 2000)

[3] C. Elia, E. Colzi: “Skyplex: Distributed Up-Link for Digital Television via Satellite”, IEEE Transactions on

Broadcasting, Vol. 42, No.4 December 1996.

[4] S. Benedetto, G. Montorsi, D. Divsalar, F. Pollara: 'Soft-Input Soft-Output Modules for the Construction of Distributed Iterative Decoding Networks', European Transactions on Telecommunications, Vol. 9, No. 2, March-April 1998.

[5] L.R. Bahl, J. Cocke, F. Jelinek and J. Raviv: "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. Inform. Theory, Vol. IT-20, pp. 284-287, Mar. 1974.

[6] A. J. Viterbi: ‘An intuitive justification and a simplified implementation of the MAP decoder for convolutional

codes’, IEEE J. On Selected Areas in Communications, Vol. 16, pp.260-264, Feb. 1998.

[7] W. E. Ryan: ”A Turbo Code Tutorial”, Unpublished Paper, available at the web site:

http://csc.postech.ac.kr/group/tc/paper_list.html

[8] P. Robertson, E. Villebrun and P. Hoeher: 'A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms

Operating in the Log Domain'; Internat. Conf. on Communications (ICC '95), Seattle, June 1995.

[9] J. Vogt and A. Finger: "Improving the max-log-MAP turbo decoder", Electronics Letters, Vol. 36, No. 23, 9 November 2000.

[10] D.Giancristofaro, R.Giubilei, R.Novello, V.Piloni, J.Toush: “Performances of Novel DVB-RCS Standard Turbo

Code and its Use in On-Board Processing Satellites”. Proceedings of the EMPS workshop, in IEEE EMPS/PIMRC,

London, 17-21 September 2000.
