Lecture notes on turbo codes.


Concatenated Coding

• Parallel Concatenated Convolutional Codes (PCCC), or "Turbo" codes.

First paper: "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes (1)" by Claude Berrou, Alain Glavieux and Punya Thitimajshima, Ecole Nationale Superieure des Telecommunications (ENST) de Bretagne, France. International Communications Conference (ICC), Geneva, Switzerland, May 1993, pp. 1064-1070.

• Turbo codes use systematic constituent codes – nonsystematic codes have equivalent systematic forms.
• Systematic codes with feedforward encoders produce less powerful codes. Convolutional codes in their systematic feedback form are equivalent to the nonsystematic form in distance and nearest-neighbor path properties; however, at low signal-to-noise ratio (SNR) the bit error rate (BER) of the feedback form is slightly better.

[Figure: encoder diagrams of the feedback (FB-CC) and feedforward (FF-CC) convolutional codes for R = 1/2, v = 2, showing input u_k and outputs x_k,1, x_k,2, with x_k,1 = u_k in the systematic case.]

[Figure: turbo encoder – data d_k passes through a delay line (L_1) and an interleaver; C1 and C2 are (2, 1, 2) recursive systematic convolutional codes producing parity outputs Y_1k and Y_2k, alongside the systematic output X_k.]

We can now decode the information sequence, or move to the second decoder, which will decode the same information sequence but in interleaved form. Now we make use of the L_e(i) generated by the first decoder: it can be given as a priori information (L_e(i) becomes L_I2(i) after interleaving) to the second decoder.

So effectively we interleave L_sys(i) and L_e(i) and feed them to the second decoder. p_ap(i_t = -1) and p_ap(i_t = +1) can be used, associated with γ, in the calculation of α and β for the second decoder.

Thus the output of the second decoder will be of the form

L_app(i) = L_sys2(i) + L_I2(i) + L_e2(i)
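A (2, 1, 2) recursive systematic constituent encoder like C1 and C2 above can be sketched in a few lines. The generator pair (1, 5/7) in octal (feedback 1 + D + D², feedforward 1 + D²) is an assumption for illustration, not necessarily the generators of the figure:

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder.

    Assumed generators (1, 5/7) octal: feedback 1 + D + D^2,
    feedforward 1 + D^2.  Returns (systematic, parity) bit lists.
    """
    s1 = s2 = 0                      # two-cell shift-register state
    sys_out, par_out = [], []
    for u in bits:
        fb = u ^ s1 ^ s2             # feedback sum u + s1 + s2 (mod 2)
        p = fb ^ s2                  # parity: (1 + D^2) acting on fb
        sys_out.append(u)            # systematic output equals the input
        par_out.append(p)
        s2, s1 = s1, fb              # shift the register
    return sys_out, par_out

sys_bits, parity = rsc_encode([1, 0, 1, 1])
```

In the turbo encoder, the same routine would be applied a second time to the interleaved input to obtain Y_2k.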

This finishes the first or initial full decoding step – step 0.

L_e2(i) reflects the extrinsic information generated by the second decoder. Now, in a similar manner as before, we can feed back L_e2(i) to the first decoder (via γ, to calculate α and β), after deinterleaving, as new a priori information L_I1(i).

Decoding step 1

First decoder:
L_app(i) = L_sys(i) + L_I1(i) + L_e1(i)

Second decoder (after interleaving L_e1(i)):
L_app(i) = L_sys2(i) + L_I2(i) + L_e2(i)

Then from decoder two, feed back L_e2(i) as before to the first decoder after deinterleaving – and the process repeats. This is known as iterative decoding or turbo decoding.

• Remember: it is always the extrinsic information that we pass to the next decoder as new a priori information. We do not feed back the a priori or intrinsic information. Why? Because that was already generated by the earlier decoder, so there is no point in feeding back the same information.
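The iteration schedule above can be sketched as follows. Here `siso_decode` is a placeholder standing in for a real constituent MAP decoder (it returns zero extrinsic LLRs so only the information flow is shown); the interleaver is a simple index permutation:

```python
import numpy as np

def interleave(x, perm):
    return x[perm]

def deinterleave(x, perm):
    out = np.empty_like(x)
    out[perm] = x                   # inverse of the permutation
    return out

def siso_decode(l_sys, l_apriori):
    # Placeholder for the constituent MAP/BCJR decoder: a real
    # implementation returns extrinsic LLRs L_e(i).  Zeros here
    # let the control flow be exercised without a full decoder.
    return np.zeros_like(l_sys)

def turbo_decode(l_sys, perm, n_iter=8):
    """Exchange extrinsic information between the two decoders."""
    l_apriori1 = np.zeros(len(l_sys))   # no a priori info at step 0
    l_e1 = np.zeros(len(l_sys))
    for _ in range(n_iter):
        # Decoder 1: a priori = deinterleaved extrinsic of decoder 2
        l_e1 = siso_decode(l_sys, l_apriori1)
        # Decoder 2 sees the interleaved systematic and interleaved L_e1
        l_e2 = siso_decode(interleave(l_sys, perm), interleave(l_e1, perm))
        l_apriori1 = deinterleave(l_e2, perm)
    # Final a posteriori LLR: L_app(i) = L_sys(i) + L_I1(i) + L_e1(i)
    return l_sys + l_apriori1 + l_e1
```

Note that only extrinsic values cross the interleaver in either direction, matching the rule stated above.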

[Figure: turbo decoder – received x_k and y_k feed a 16-state decoder DEC1; its output is interleaved and passed to a 16-state decoder DEC2, followed by deinterleaving and a DEMUX/INSERTION block producing the decoded output d̂_k; DEC2's extrinsic output is deinterleaved and fed back to DEC1.]

The p_I(i_t = ±1) needed to calculate γ can also be obtained from L_I(i), which in turn is obtained from L_e(i) of the earlier decoding stage.

Reference: "Iterative Decoding of Binary Block and Convolutional Codes" by Joachim Hagenauer, Elke Offer, and Lutz Papke, IEEE Trans. Inform. Theory, Vol. 42, No. 2, March 1996, pp. 429-445.

Textbook chapters:
Bossert: Chapter 8 on Convolutional Codes and Appendix A on SCC, PCC
Fundamentals of Convolutional Coding: Chapter 7 – Iterative Decoding

PCCC

Reference: "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes", S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, TDA Progress Report 42-127, November 1996 (from the JPL web site).

[Figure: PCCC encoder – the input feeds ENCODER 1 (rate 1/2) directly and ENCODER 2 (rate 1/2) through an interleaver π; one systematic stream is not transmitted, the rest goes to the channel.]

[Figure: PCCC decoder – two SISO modules; each receives π(c;I) from the demodulator and its π(c;O) is not used; SISO 1's π(u;O) passes through π to SISO 2's π(u;I), and SISO 2's π(u;O) passes through π⁻¹ back to SISO 1 and also drives the decision device.]

π(.;.) represents log probabilities, i.e., the module works in the log domain.

SISO – Soft-Input, Soft-Output module: implements the MAP algorithm.

The general trellis encoder:

[Figure: input u → TRELLIS ENCODER → output c.]

SCCC – Serial Concatenated Convolutional Codes

[Figure: SCCC encoder – OUTER ENCODER (rate 1/2), interleaver, INNER ENCODER (rate 2/3), then to the channel.]

[Figure: SCCC decoder – the INNER SISO receives π(c;I) from the demodulator (its π(c;O) is not used); its π(u;O) passes through π⁻¹ to the OUTER SISO's π(c;I); the OUTER SISO's π(c;O) passes through π back to the inner π(u;I); the outer π(u;I) is set to 0, and the outer π(u;O) drives the decision device.]

Differences between PCCC and SCCC:

PCCC: updated (extrinsic) probabilities of the code symbols are never used by the decoding algorithm.
SCCC: updated (extrinsic) probabilities of both the input and the code symbols are used by the decoding algorithm.

The decoding algorithm is the additive SISO algorithm (A-SISO), working in the log domain.

[Figure: SISO module with inputs π(c;I), π(u;I) and outputs π(c;O), π(u;O); an edge e of the trellis section runs from starting state s^S(e) to ending state s^E(e) and carries the labels u(e), c(e).]

The following functions are associated with each edge e:
• the starting state s^S(e)
• the ending state s^E(e)
• the input symbol u(e)
• the output symbol c(e)

The relationship between these functions depends on the particular encoder. As an example, in the case of systematic encoders, the pair (s^E(e), c(e)) also identifies the edge, since u(e) is uniquely determined by c(e).

Here it is only assumed that the pair (s^S(e), u(e)) uniquely identifies the ending state s^E(e). This assumption is always verified, as it is equivalent to saying that, given the initial trellis state, there is a one-to-one correspondence between input sequences and state sequences.

The Additive SISO Algorithm (A-SISO)

The forward and backward recursions are

α_k(s) = log Σ_{e: s^E(e)=s} exp{ α_{k-1}[s^S(e)] + π_k[u(e);I] + π_k[c(e);I] },   k = 1, 2, ..., n

β_k(s) = log Σ_{e: s^S(e)=s} exp{ β_{k+1}[s^E(e)] + π_{k+1}[u(e);I] + π_{k+1}[c(e);I] },   k = n-1, ..., 0

At time k, the output (extrinsic) probability distributions are computed (approximately) as

π_k(c;O) = log Σ_{e: c(e)=c} exp{ α_{k-1}[s^S(e)] + π_k[u(e);I] + β_k[s^E(e)] }

π_k(u;O) = log Σ_{e: u(e)=u} exp{ α_{k-1}[s^S(e)] + π_k[c(e);I] + β_k[s^E(e)] }

with initial values α_0(s) = 0 for s = S_0 and α_0(s) = -∞ otherwise,
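The log Σ exp operation appearing in these recursions (often called max*) can be evaluated in a numerically stable way by factoring out the largest term, which also makes the max-log approximation of the next page transparent: the result is the plain maximum plus a small correction.

```python
import math

def max_star(values):
    """Compute log(sum(exp(v) for v in values)) stably.

    Factoring out the maximum avoids overflow; the result equals
    max(values) plus a non-negative correction term.
    """
    m = max(values)
    if m == float("-inf"):          # all terms are exp(-inf) = 0
        return float("-inf")
    return m + math.log(sum(math.exp(v - m) for v in values))

# max* of two equal metrics exceeds the plain max by log 2
print(max_star([0.0, 0.0]))   # ≈ 0.6931 (= log 2)
```

Dropping the correction term yields exactly the max-only equations given below.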

and similarly β_n(s) = 0 for s = S_n and β_n(s) = -∞ otherwise.

To simplify, we replace the log-sum-exp with the maximum value. Thus we have the following set of equations:

α_k(s) = max_{e: s^E(e)=s} { α_{k-1}[s^S(e)] + π_k[u(e);I] + π_k[c(e);I] },   k = 1, 2, ..., n

β_k(s) = max_{e: s^S(e)=s} { β_{k+1}[s^E(e)] + π_{k+1}[u(e);I] + π_{k+1}[c(e);I] },   k = n-1, ..., 0

π_k(c;O) = max_{e: c(e)=c} { α_{k-1}[s^S(e)] + π_k[u(e);I] + β_k[s^E(e)] }

π_k(u;O) = max_{e: u(e)=u} { α_{k-1}[s^S(e)] + π_k[c(e);I] + β_k[s^E(e)] }

Generally, for serial concatenated codes:
OUTER CODE – nonrecursive (non-feedback), nonsystematic
INNER CODE – recursive systematic (feedback, systematic)

It is seen that the performance of SCCC is usually better than that of PCCC.

[Figure: BER versus E_b/N_0 – the SCCC curve keeps falling while the PCCC curve flattens out, showing the error-floor effect in PCCC.]
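The max-only forward recursion can be run on a toy two-state trellis to see the bookkeeping. The trellis sections and branch metrics below (each metric standing for π_k[u(e);I] + π_k[c(e);I]) are made-up numbers for illustration only:

```python
# Max-log forward recursion on a toy 2-state trellis.
NEG_INF = float("-inf")

# Each edge: (start state, end state, branch metric pi_u + pi_c)
trellis_sections = [
    [(0, 0, 0.5), (0, 1, -0.2)],                            # k = 1
    [(0, 0, 0.1), (0, 1, 0.3), (1, 0, -0.4), (1, 1, 0.9)],  # k = 2
]

alpha = [0.0, NEG_INF]          # alpha_0: encoder starts in state S_0 = 0
for edges in trellis_sections:
    new_alpha = [NEG_INF, NEG_INF]
    for s_start, s_end, metric in edges:
        # alpha_k(s) = max over edges ending in s of alpha_{k-1} + metric
        new_alpha[s_end] = max(new_alpha[s_end], alpha[s_start] + metric)
    alpha = new_alpha

print(alpha)    # roughly [0.6, 0.8]
```

The backward recursion for β is the mirror image, initialized at the final state and indexing edges by their starting state.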

The summations involved in the algorithm are calculated over trellis edges rather than over pairs of states. This makes the algorithm general and capable of dealing with parallel edges – suitable for TCM.

The A-SISO algorithm at bit level

Consider a rate-1/2 convolutional encoder with input bit U_k and output bits C_1,k and C_2,k at time k, taking values {0,1}. Therefore on the trellis edges at time k we have u_k(e), c_1,k(e), c_2,k(e). (Drop the k subscript on the edge labels for simplicity.)

Define the reliability (LLR) of a bit Z taking values {0,1} at time k as

λ_k[Z;.] ≡ log( P_k[Z=1;.] / P_k[Z=0;.] ) = π_k[Z=1;.] - π_k[Z=0;.]

α_k(s) = max_{e: s^E(e)=s} { α_{k-1}[s^S(e)] + u(e)λ_k[U;I] + c_1(e)λ_k[C_1;I] + c_2(e)λ_k[C_2;I] } + h_αk

β_k(s) = max_{e: s^S(e)=s} { β_{k+1}[s^E(e)] + u(e)λ_{k+1}[U;I] + c_1(e)λ_{k+1}[C_1;I] + c_2(e)λ_{k+1}[C_2;I] } + h_βk

with initial values α_0(s) = 0 if s = S_0 and α_0(s) = -∞ otherwise, and β_n(s) = 0 if s = S_n and β_n(s) = -∞ otherwise. h_αk and h_βk are normalization constants.

For the inner decoder, which is connected to the AWGN channel, we have

λ_k[C_1;I] = (2A/σ²) r_1,k        λ_k[C_2;I] = (2A/σ²) r_2,k

where

r_i,k = A(2c_i,k - 1) + n_i,k,   i = 1, 2

are the received samples at the output of the matched filter, (2c_i,k - 1) ∈ {-1, +1}, and the n_i,k are zero-mean independent identically distributed (i.i.d.) Gaussian noise samples with variance σ².

The extrinsic bit information for U, C_1 and C_2 can be obtained as

λ_k(U;O) = max_{e: u(e)=1} { α_{k-1}[s^S(e)] + c_1(e)λ_k[C_1;I] + c_2(e)λ_k[C_2;I] + β_k[s^E(e)] }
         - max_{e: u(e)=0} { α_{k-1}[s^S(e)] + c_1(e)λ_k[C_1;I] + c_2(e)λ_k[C_2;I] + β_k[s^E(e)] }

λ_k(C_1;O) = max_{e: c_1(e)=1} { α_{k-1}[s^S(e)] + u(e)λ_k[U;I] + c_2(e)λ_k[C_2;I] + β_k[s^E(e)] }
           - max_{e: c_1(e)=0} { α_{k-1}[s^S(e)] + u(e)λ_k[U;I] + c_2(e)λ_k[C_2;I] + β_k[s^E(e)] }

λ_k(C_2;O) = max_{e: c_2(e)=1} { α_{k-1}[s^S(e)] + u(e)λ_k[U;I] + c_1(e)λ_k[C_1;I] + β_k[s^E(e)] }
           - max_{e: c_2(e)=0} { α_{k-1}[s^S(e)] + u(e)λ_k[U;I] + c_1(e)λ_k[C_1;I] + β_k[s^E(e)] }

Parameters for PCCC

In general, for a convolutional code (CC), w_min is defined as the minimum information weight in the error events of the CC:

w_min = 1 for nonrecursive codes
w_min = 2 for recursive codes

The error coefficient (in p_b(error)), i.e. the interleaving gain, for a PCCC with large interleaving length goes as N^(1-w_min), where N is the interleaver length.
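The extrinsic computation λ_k(U;O) defined above is a difference of two maxima over edge metrics. A toy single-section sketch (the trellis, state metrics α, β, and input LLRs below are all made-up numbers for illustration):

```python
# One toy trellis section: edges are (s_start, s_end, u, c1, c2).
edges = [(0, 0, 0, 0, 0), (0, 1, 1, 1, 1), (1, 0, 1, 0, 1), (1, 1, 0, 1, 0)]

alpha_prev = [0.2, -0.5]     # alpha_{k-1}(s), assumed values
beta_k = [0.4, 0.3]          # beta_k(s), assumed values
lam_c1, lam_c2 = 0.5, -0.3   # input (channel) LLRs for C1, C2

def edge_metric(e):
    """alpha_{k-1}[s^S] + c1*lambda[C1;I] + c2*lambda[C2;I] + beta_k[s^E].

    The U term is omitted: for lambda(U;O) the extrinsic output excludes
    the intrinsic information about U itself.
    """
    s0, s1, u, c1, c2 = e
    return alpha_prev[s0] + c1 * lam_c1 + c2 * lam_c2 + beta_k[s1]

# lambda_k(U;O) = max over u(e)=1 edges minus max over u(e)=0 edges
m1 = max(edge_metric(e) for e in edges if e[2] == 1)
m0 = max(edge_metric(e) for e in edges if e[2] == 0)
lam_u_out = m1 - m0
```

λ_k(C_1;O) and λ_k(C_2;O) follow the same pattern, each time leaving out the LLR term of the bit being estimated.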

Thus for recursive CCs the interleaving gain (or BER reduction) goes as 1/N. On the other hand, all nonrecursive CCs and block codes have w_min = 1, so such codes are not useful in parallel concatenated codes.

The next most important constituent-code parameter is z_min, the minimum parity-check weight in the code sequences with w = 2. For a large range of SNR, the behaviour of a PCCC is determined by the "effective free distance"

d_free,eff = 2 + 2 z_min

It is possible to achieve z_min = (n-1)(2^(v-1) + 2) with a rate-1/n recursive CC with memory v.

PCCCs exhibit an error-floor effect (defined as the change of slope of the BER curve for turbo codes) at lower BER. Initially the coding gain increases with the number of iterations; however, after a number of iterations (10-12 in AWGN) the performance improvement is marginal.

Extensions to consider:
1. Turbo codes with multilevel modulation
2. Turbo TCM
3. Iterative equalization and decoding – turbo equalization
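Plugging numbers into the two design formulas above makes them concrete:

```python
def d_free_eff(n, v):
    """Effective free distance of a rate-1/n recursive CC with memory v,
    using the achievable z_min = (n-1)(2^(v-1) + 2) quoted in the notes."""
    z_min = (n - 1) * (2 ** (v - 1) + 2)
    return 2 + 2 * z_min

# Rate 1/2 (n = 2), memory v = 2: z_min = 4, so d_free,eff = 10
print(d_free_eff(2, 2))   # -> 10

# Interleaving gain for a recursive constituent code (w_min = 2):
N = 1024
gain = N ** (1 - 2)       # error coefficient scales as N^(1-w_min) = 1/N
```

Doubling the interleaver length halves the error coefficient, which is why long interleavers are central to turbo-code performance.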
