Turbo Decoding Algorithms: Algorithms for Iterative (Turbo) Data Processing

S-72.630, Kalle Ruttik, 2005 (comlab.hut.fi)



!!"")))) )))) ! ! "" ! ))" ))!"!!!!!!!))))""!!"****##******$$********Kalle Ruttik 2005%%Kalle Ruttik 2005S-72.630 Algorithms for Turbo Decoding (5) 13S-72.630 Algorithms for Turbo Decoding (5) 14) ) !! ) * * "" ## $$ %!!!!!! !!!!!""!""! "!"""!#!#! #!$!$%!The a-posteriori probability can be expressed as:p A k (u) = p ( )u k = u|Y1N∑1= σp(Y1 N )k (e)e:u(e)=u∑ (1= Ap(Y1 N )k−1 sSk(e) ) (· M k (e) · B k sEk(e) )e:u(e)=u) ) ) ) * "!"!" !!" """"" !"" "#"#" !$"%"Figure 4: Caculation of the joint probability.Kalle Ruttik 2005S-72.630 Algorithms for Turbo Decoding (5) 15Kalle Ruttik 2005S-72.630 Algorithms for Turbo Decoding (5) 16) ) !" !!!) ) ! " !) !"!" ") * * ""!"""" !"" """ "! "!"""!##!# #! #!#" !) * Figure 5: Caculation of the a-posteriori probability (APP).#"$$!$"$ $%%!%"Applying MAP algorithm to turbo codesEncoder generates multiple encoded bit streams. These streamscan be generated on different interleaved bit sequences.+Interleaver+Figure 6: Example of a parallel concated convolutional codes++


Iterative turbo decoding principle (PCCC)

The transition probability in the trellis can be expressed through the bit probability obtained from the other decoder and through the probability observed at the channel:

$$M_k(e) = C \cdot e^{u_k L(u_k)/2} \cdot \exp\!\left(\frac{L_c}{2} \sum_{l=1}^{n} y_{kl} x_{kl}\right)$$

One of the received bits corresponds to the systematic bit, which is common to both coders:

$$M_k(e) = C \cdot e^{u_k L(u_k)/2} \cdot \exp\!\left(\frac{L_c}{2}\, y_{ks} u_k\right) \cdot \exp\!\left(\frac{L_c}{2} \sum_{l=1,\, l \neq s}^{n} y_{kl} x_{kl}\right) = C \cdot e^{u_k L(u_k)/2} \cdot \exp\!\left(\frac{L_c}{2}\, y_{ks} u_k\right) \cdot L_{e1}$$

where s is the index of the systematic bit in codeword section k. We assume that each section contains only one systematic bit.

The log-likelihood ratio of the estimated bit is the logarithm of the ratio of the probability that u_k = 1 to the probability that u_k = 0 (0 mapped to −1):

$$L\big(u_k \mid Y_1^N\big) = \ln\frac{p(u_k = 1 \mid Y_1^N)}{p(u_k = 0 \mid Y_1^N)} = \ln\frac{\sum_{e:\,u(e)=1} \sigma_k(e)}{\sum_{e:\,u(e)=0} \sigma_k(e)} = \ln\frac{\sum_{e:\,u(e)=1} A_{k-1}\big(s_k^S(e)\big) \cdot M_k(e) \cdot B_k\big(s_k^E(e)\big)}{\sum_{e:\,u(e)=0} A_{k-1}\big(s_k^S(e)\big) \cdot M_k(e) \cdot B_k\big(s_k^E(e)\big)}$$

Substituting the factored form of M_k(e) and pulling the edge-independent terms out of the sums gives

$$L\big(u_k \mid Y_1^N\big) = \ln\frac{\sum_{e:\,u(e)=1} A_{k-1}\big(s_k^S(e)\big) \cdot e^{L(u_k)/2} \cdot e^{L_c y_{ks}/2} \cdot L_{e1} \cdot B_k\big(s_k^E(e)\big)}{\sum_{e:\,u(e)=0} A_{k-1}\big(s_k^S(e)\big) \cdot e^{-L(u_k)/2} \cdot e^{-L_c y_{ks}/2} \cdot L_{e1} \cdot B_k\big(s_k^E(e)\big)} = L(u_k) + L_c y_{ks} + L_{e1}(u_k)$$

Parallel concatenated convolutional codes (PCCC)

- L(u_k) is the a-priori information.
- L_c y_ks is the log-likelihood of the systematic bit.
- L_e1(u_k) is the extrinsic information calculated by the given decoder; it describes the information derived by imposing the constraints of the given code.
- L(u_k | Y_1^N) describes the a-posteriori information at the output of the given decoder.

Figure 7: Encoder.

Only the extrinsic information (from decoder 1) is given to the other decoder (for example, to decoder 2). It describes information derived from the other component code (code 1) that is not otherwise available to the next decoder (decoder 2).

Figure 8: Decoder.
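The decomposition L(u_k | Y_1^N) = L(u_k) + L_c y_ks + L_e1(u_k) is what drives the iterative exchange: after a decoding pass, the extrinsic part is recovered by subtracting the a-priori and systematic terms from the total LLR. A minimal sketch, assuming the decoder returns the total LLRs as a numpy array; the function name is illustrative:

```python
import numpy as np

def extrinsic(L_total, L_apriori, Lc, y_sys):
    """L(u_k|Y) = L(u_k) + Lc*y_ks + Le(u_k)  =>  Le = L_total - L_apriori - Lc*y_ks."""
    return L_total - L_apriori - Lc * np.asarray(y_sys)

# Example: total LLR 2.4, a-priori 0.5, Lc*y_ks = 1.1 -> extrinsic part 0.8
print(extrinsic(np.array([2.4]), np.array([0.5]), 1.0, [1.1])[0])  # 0.8
```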


Serial concatenated convolutional codes (SCCC)

Figure 9: Encoder (outer encoder, interleaver, inner encoder, channel).

Figure 10: Decoder (inner SISO decoder, deinterleaver, outer SISO decoder, interleaver in the feedback path).

Parallel concatenated convolutional codes (PCCC)

- The encoder consists of two or more systematic convolutional encoders.
- The constituent encoders encode the same data stream, which is interleaved differently for each encoder.
- The systematic bits are transmitted only once.
- In the receiver, the extrinsic information is calculated for the information bits and fed to the decoder of the other code (a sketch of the resulting iteration loop is given after this overview).

Serial concatenated convolutional codes (SCCC)

- The code is formed by concatenating two encoders.
- The coded bit stream at the output of the outer encoder is interleaved and fed to the inner encoder.
- The decoder:
  - calculates the log-likelihoods of the information bits at the output of the inner decoder and deinterleaves them;
  - decodes the outer code and calculates the log-likelihoods of its coded bits;
  - interleaves the coded-bit log-likelihoods and feeds them back to the inner decoder;
  - after the decoding iterations, makes the decisions on the log-likelihoods of the information bits at the output of the outer decoder.

Algorithms for Iterative (Turbo) Data Processing

Trellis-based detection algorithms fall into two families:

- Symbol-by-symbol detection: the MAP algorithm and its log-domain variants log-MAP and max-log-MAP.
- Sequence detection: the Viterbi algorithm and its soft-output variants SOVA and modified SOVA.

Requirements for a soft-input soft-output (SISO) component decoder:

- Accept soft inputs in the form of a priori probabilities or log-likelihood ratios.
- Produce the APP for the output data.

- MAP: Maximum A Posteriori (symbol-by-symbol)
- SOVA: Soft-Output Viterbi Algorithm
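Both architectures share the same skeleton: two SISO component decoders exchanging extrinsic information through an interleaver/deinterleaver pair. Below is a minimal sketch of the PCCC iteration loop referenced above; the `siso_decode` component decoder and all variable names are assumptions for illustration, not an API from the lecture:

```python
import numpy as np

def turbo_decode(y_sys, y_par1, y_par2, perm, siso_decode, Lc=2.0, iters=8):
    """Iterative PCCC decoding skeleton. All y_* inputs are numpy arrays;
    siso_decode(sys_llr, par_llr, apriori_llr) returns the total LLR."""
    Ls = Lc * np.asarray(y_sys)          # channel LLRs of the systematic bits
    Le21 = np.zeros_like(Ls)             # extrinsic info from decoder 2, deinterleaved
    inv = np.argsort(perm)               # inverse permutation = deinterleaver
    for _ in range(iters):
        L1 = siso_decode(Ls, Lc * y_par1, Le21)              # decoder 1, natural order
        Le12 = L1 - Ls - Le21                                # keep only code-1 extrinsic
        L2 = siso_decode(Ls[perm], Lc * y_par2, Le12[perm])  # decoder 2, interleaved order
        Le21 = (L2 - Ls[perm] - Le12[perm])[inv]             # code-2 extrinsic, back in order
    L_total = Ls + Le12 + Le21                               # a-posteriori LLR after last pass
    return (L_total > 0).astype(int)                         # hard decisions
```

Note that only the extrinsic terms cross the interleaver: passing the full LLR would feed each decoder's own information back to itself, which is exactly what the extrinsic subtraction prevents.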


Modification of the MAP algorithm

• The MAP algorithm operates in the probability domain:

$$p_k^A(u) = \frac{1}{p(Y_1^N)} \sum_{e:\,u(e)=u} A_{k-1}\big(s_k^S(e)\big) \cdot M_k(e) \cdot B_k\big(s_k^E(e)\big)$$

• The probabilities involved span a very large numerical range, which leads to overflows and underflows in computers.

Simplification: Log-MAP algorithm description

• The Log-MAP algorithm is a transformation of MAP into the logarithmic domain.
• In the logarithmic domain the MAP computations are replaced as follows:
- multiplication is converted to addition;
- addition is converted to the max*(·) operation:

$$\max{}^*(x, y) = \log\big(e^x + e^y\big) = \max(x, y) + \log\big(1 + e^{-|x - y|}\big)$$

• The terms for calculating the probabilities in the trellis are converted to

$$\alpha_k(s) = \log A_k(s), \qquad \beta_k(s) = \log B_k(s), \qquad \gamma_k(e) = \log M_k(e)$$

• The complete Log-MAP algorithm is

$$\lambda_k(u) = \log \sum_{e:\,u(e)=1} A_{k-1}\big(s_k^S(e)\big) \cdot M_k(e) \cdot B_k\big(s_k^E(e)\big) - \log \sum_{e:\,u(e)=0} A_{k-1}\big(s_k^S(e)\big) \cdot M_k(e) \cdot B_k\big(s_k^E(e)\big)$$

$$= \max_{e:\,u(e)=1}{}^{*}\big(\alpha_{k-1}(s_k^S(e)) + \gamma_k(e) + \beta_k(s_k^E(e))\big) - \max_{e:\,u(e)=0}{}^{*}\big(\alpha_{k-1}(s_k^S(e)) + \gamma_k(e) + \beta_k(s_k^E(e))\big)$$

with the recursions

$$\alpha_k(s) = \log \sum_{e:\,s_k^E(e)=s} A_{k-1}\big(s_k^S(e)\big) \cdot M_k(e), \qquad \beta_k(s) = \log \sum_{e:\,s_{k+1}^S(e)=s} B_{k+1}\big(s_{k+1}^E(e)\big) \cdot M_{k+1}(e)$$

Max-Log-MAP decoding algorithm

• When summing probabilities, the Log-MAP algorithm uses the max*(·) operation.
• Evaluating the correction term of max*(·) exactly requires moving into the exponential domain, adding 1, and moving back into the log domain.
• Simplifications:
- the correction term log(1 + e^{−|x−y|}) can be replaced by a lookup table;
- the correction term can be skipped entirely ⇒ Max-Log-MAP.
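The max*(·) identity is easy to check numerically. A minimal sketch of the three variants discussed above; the lookup-table resolution is an arbitrary choice made for illustration:

```python
import math

def max_star(x, y):
    """Exact Jacobian logarithm: log(e^x + e^y)."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

# Correction term via an 8-entry lookup table over |x - y| in [0, 4)
TABLE = [math.log1p(math.exp(-0.5 * i)) for i in range(8)]

def max_star_lut(x, y):
    """Log-MAP in practice: correction term read from a table."""
    d = abs(x - y)
    corr = TABLE[int(d / 0.5)] if d < 4.0 else 0.0
    return max(x, y) + corr

def max_log(x, y):
    """Max-Log-MAP: correction term dropped entirely."""
    return max(x, y)

print(max_star(-8.8, -6.5))      # -6.40... (exact)
print(max_star_lut(-8.8, -6.5))  # close to the exact value
print(max_log(-8.8, -6.5))       # -6.5     (approximation)
```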


Log-MAP Algorithm (Forward Recursion)

$$\alpha_n(s') = \log \sum_{s_k} \exp\!\big[\alpha_{n-1}(s_k) + \gamma_n(s_k, s')\big] = \max_{s_k}{}^{*}\big(\alpha_{n-1}(s_k) + \gamma_n(s_k, s')\big) \approx \max_{s_k}\big(\alpha_{n-1}(s_k) + \gamma_n(s_k, s')\big) \;\Rightarrow\; \text{Max-Log-MAP}$$

The backward recursion has the same form:

$$\beta_{n-1}(s) = \log \sum_{s_k} \exp\!\big[\beta_n(s_k) + \gamma_n(s, s_k)\big] = \max_{s_k}{}^{*}\big(\beta_n(s_k) + \gamma_n(s, s_k)\big) \approx \max_{s_k}\big(\beta_n(s_k) + \gamma_n(s, s_k)\big) \;\Rightarrow\; \text{Max-Log-MAP}$$

Both use the Jacobian logarithm:

$$\log(e^x + e^y) = \max(x, y) + \underbrace{\log\big(1 + e^{-|x - y|}\big)}_{\text{table}} = \max{}^*(x, y)$$

(Trellis diagram: a four-state trellis with branch metrics γ_n and accumulated state metrics α_n, e.g. α_0(0) = 0.0, γ_1(00) = −0.5 ⇒ α_1(0) = −0.5; γ_1(02) = −0.3 ⇒ α_1(2) = −0.3; γ_2(00) = −2.3 ⇒ α_2(0) = −2.8.)

At the first step only one path enters the state, so the sum has a single term:

$$\alpha_1(s_1) = \log\big(\exp[\alpha_0(s_0) + \gamma_1(s_0, s_1)]\big) = \log\big(\exp[0.0 + (-0.3)]\big) = -0.3$$

Max-Log-MAP approximation

Where two paths merge, the two algorithms differ. With the branch sums α_2(s_0) + γ_3(s_0, s_2) = −8.8 and α_2(s_1) + γ_3(s_1, s_2) = −6.5, Log-MAP gives

$$\alpha_3(s_2) = \log\big(e^{\alpha_2(s_0) + \gamma_3(s_0, s_2)} + e^{\alpha_2(s_1) + \gamma_3(s_1, s_2)}\big) = \ln\big(e^{-8.8} + e^{-6.5}\big) = \max{}^*(-8.8, -6.5) = -6.4$$

while Max-Log-MAP keeps only the larger term:

$$\alpha_3(s_2) \approx \max(-8.8, -6.5) = -6.5$$
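The worked merge can be reproduced directly. A short check in Python, using the two branch sums from the example above:

```python
import numpy as np

# Branch sums alpha + gamma for the two paths merging in the example above
a = -8.8   # alpha_2(s0) + gamma_3(s0, s2)
b = -6.5   # alpha_2(s1) + gamma_3(s1, s2)

log_map = np.logaddexp(a, b)   # exact max*: log(e^a + e^b)
max_log = max(a, b)            # Max-Log-MAP approximation

print(round(float(log_map), 1))  # -6.4
print(max_log)                   # -6.5
```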


Soft-output Viterbi algorithm

• Two modifications compared to the classical Viterbi algorithm:
- The path metric is modified to account for the extrinsic information. This is similar to the metric calculation in the Max-Log-MAP algorithm.
- The algorithm is modified to calculate the soft bit values.

(Figures 5.16 - 5.17 from the book.)

SOVA

• For each state in the trellis, the metric M(s_k) is calculated for both merging paths.
• The path with the higher metric is selected as the survivor.
• For the state (at this stage) a pointer to the previous state along the surviving path is stored.
• The information needed to give L(u_k | y) is stored:
- the difference Δ_k^s between the discarded and the surviving path;
- a binary vector of δ + 1 bits, indicating the last δ + 1 bits that generated the discarded path.
• After the ML path is found, the update sequences and metric differences are used to calculate L(u_k | y).

Calculation of L(u_k | y)

- For each bit u_k^ML on the ML path, we look for the path that merges with the ML path, carries a bit value u_k different from u_k^ML at stage k, and has the minimal metric distance to the ML path.
- We go through the δ + 1 merging paths that follow stage k, i.e. the differences Δ_i^{s_i}, i = k, ..., (k + δ).
- For each merging path in that set, we trace back to find out which value of the bit u_k generated this path.
- If the bit u_k on this path is not u_k^ML and Δ_i^{s_i} is less than the current Δ_k^min, we set Δ_k^min = Δ_i^{s_i}.

$$L(u_k \mid y) \approx u_k^{ML} \min_{\substack{i = k, \ldots, k+\delta \\ u_k^i \neq u_k^{ML}}} \Delta_i^{s_i}$$

Comparison of the computing principles

MAP

• The MAP algorithm is the optimal component decoder algorithm.
• It finds the probability of each bit u_k being either +1 or −1 by summing the probabilities of all the codewords where the given bit is +1 and where it is −1.
• It is extremely complex.
• Because of the exponentials in the probability calculations, in practice the MAP algorithm often suffers from numerical problems (illustrated by the sketch below).
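The numerical problem is easy to demonstrate: a probability-domain product over a long trellis underflows double precision, while the equivalent log-domain sum stays representable. A toy illustration (the probability values are arbitrary):

```python
import math

# 100 trellis sections, each contributing a branch probability of 1e-5
probs = [1e-5] * 100

p = 1.0
for pk in probs:
    p *= pk                   # MAP-style product in the probability domain
print(p)                      # 0.0 -- underflows in double precision

log_p = sum(math.log(pk) for pk in probs)   # Log-MAP-style sum of logs
print(log_p)                  # -1151.29... -- representable without problems
```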


Log-MAP

• Log-MAP is theoretically identical to MAP; only the calculations are made in the logarithmic domain.
• Multiplications are replaced with additions and summations with the max*(·) operation.
• The numerical problems that occur in MAP are circumvented.

Max-Log-MAP

• Similar to Log-MAP, but the max*(·) operation is replaced by taking the plain maximum.
• Because at each state in the forward and backward recursions only the path with the maximum value is considered, the probabilities are no longer computed over all codewords:
- in the recursive calculation of α and β, only the best transition is considered;
- the algorithm gives the logarithm of the probability that the most likely path reaches the state.
• In Max-Log-MAP, L(u_k | y) compares the probability of the most likely path giving u_k = −1 with that of the most likely path giving u_k = +1.
- In the calculation of the log-likelihood ratio, only two codewords (two transitions) are considered: the best transition that would give +1 and the best transition that would give −1.
• Max-Log-MAP performs worse than MAP or Log-MAP.

SOVA

• In SOVA, the ML path is found.
- The recursion used is identical to the one used for calculating α in the Max-Log-MAP algorithm.
• Along the ML path, a hard decision on the bit u_k is made.
• L(u_k | y) is the minimum metric difference between the ML path and the path that merges with the ML path and is generated with a different bit value u_k (see the sketch below).
- In the L(u_k | y) calculation of Max-Log-MAP, one path is the ML path and the other is the most likely path that gives the different u_k.
- In SOVA, the difference is calculated between the ML path and the most likely path that merges with the ML path and gives a different u_k. This path, however, may not be the most likely one giving a different u_k.
• The output of SOVA is therefore just noisier than the Max-Log-MAP output (SOVA does not have a bias).
• SOVA and Max-Log-MAP make the same hard decisions:
- the magnitudes of the soft decisions of SOVA are either identical to or higher than those of Max-Log-MAP;
- if the most likely path that gives the different hard decision for u_k has survived and merges with the ML path, the two algorithms are identical;
- if that path does not survive, the path on which the different u_k decision is based is less likely than the path that should have been used.
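The SOVA bookkeeping described above reduces to one update rule: at every merge, each bit on which the discarded path disagrees with the survivor can keep at most the metric difference Δ of that merge as its reliability. A minimal sketch of just this rule, assuming the survivor and discarded bit histories and Δ are already available from the Viterbi recursion; all names are illustrative:

```python
def sova_update(reliab, surv_bits, disc_bits, delta, k, window):
    """SOVA reliability update at a merge in stage k: every bit inside the
    trace-back window where the discarded path differs from the survivor
    cannot be more reliable than the metric difference delta of the merge."""
    for i in range(max(0, k - window), k + 1):
        if surv_bits[i] != disc_bits[i]:
            reliab[i] = min(reliab[i], delta)
    return reliab

# Example: the paths differ in bits 2 and 4, so those reliabilities drop to delta
r = sova_update([9.9] * 6, [1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1],
                delta=1.3, k=5, window=4)
print(r)  # [9.9, 9.9, 1.3, 9.9, 1.3, 9.9]
# Final soft output: L(u_k|y) ~ u_k_ML * reliab[k]
```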


Table 1: Comparison of the complexity of different decoding algorithms

    Operations      | Max-Log-MAP   | Log-MAP       | SOVA
    ----------------+---------------+---------------+----------------
    max-ops         | 5 × 2^M − 2   | 5 × 2^M − 2   | 3(M + 1) + 2^M
    additions       | 10 × 2^M + 11 | 10 × 2^M + 11 | 2 × 2^M + 8
    mult. by ±1     | 8             | 8             | 8
    bit comparisons | —             | —             | 6(M + 1)
    look-ups        | —             | 5 × 2^M − 2   | —

M is the length of the code memory. Table according to reference [1].

If we assume that each operation costs the same, we can calculate the total number of operations per bit that each algorithm requires for decoding one code in one iteration (the totals can be reproduced with the short script at the end of this section).

Table 2: Number of required operations per bit for different decoding algorithms

    memory (M) | Max-Log-MAP | Log-MAP | SOVA
    -----------+-------------+---------+------
    2          | 77          | 95      | 55
    3          | 137         | 175     | 76
    4          | 257         | 335     | 109
    5          | 497         | 655     | 166

Performance of the turbo decoding algorithms

The performance depends on:
• the component decoding algorithms;
• the number of iterations used;
• the frame length;
• the interleavers;
• the channel reliability values.

References

[1] P. Robertson, E. Villebrun, P. Hoeher, "A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log Domain", Proc. IEEE ICC '95, 1995, pp. 1009-1013.

Figures from the book, pages 150 - 171.
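Table 2 follows directly from the formulas in Table 1. A short script, assuming each operation counts as one unit of work as stated above:

```python
# Reproducing Table 2 from the per-operation formulas in Table 1
def ops_per_bit(M):
    max_log_map = (5 * 2**M - 2) + (10 * 2**M + 11) + 8
    log_map     = max_log_map + (5 * 2**M - 2)   # same plus the look-ups
    sova        = (3 * (M + 1) + 2**M) + (2 * 2**M + 8) + 8 + 6 * (M + 1)
    return max_log_map, log_map, sova

for M in range(2, 6):
    print(M, *ops_per_bit(M))
# 2 77 95 55
# 3 137 175 76
# 4 257 335 109
# 5 497 655 166
```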
