Implementation of Encoder and Decoder for Turbo Codes

1 Rutuja D. Deshmukh and 2 Prof. D. M. Meshram
Department of Electronics, Priyadarshini College of Engineering, Nagpur, India.
1 rutuja1502@yahoo.com, 2 divyameshram@gmail.com

©gopalax - International Journal of Technology And Engineering System (IJTES): Jan-March 2011, Vol. 2, No. 2.

Abstract— Error correcting coding (ECC) is a critical part of modern communication systems, where it is used to detect and correct errors introduced during transmission over a channel. It relies on transmitting the data in encoded form, such that the redundancy introduced by the coding allows a decoding device at the receiver to detect and correct errors. In information theory, turbo codes are a class of high-performance forward error correction (FEC) codes developed in 1993; they were the first practical codes to closely approach the channel capacity, the theoretical maximum channel noise at which reliable communication is still possible. Turbo codes are used in deep-space and satellite communications and in other applications where designers seek reliable information transfer over bandwidth- or latency-constrained communication links. Turbo codes nowadays compete with LDPC codes, which provide similar performance. It is theoretically possible to approach the Shannon limit by using a block code with a large block length or a convolutional code with a large constraint length, but the processing power required to decode such long codes makes this approach impractical. Turbo codes overcome this limitation by using recursive encoders and iterative soft decoders: the recursive encoder makes convolutional codes with short constraint length appear to be block codes with a large block length, and the iterative soft decoder progressively improves its estimate of the received message. The turbo encoder and decoder are based on convolutional constituent codes, which outperform other forward error correction techniques. The turbo decoder can be modified easily to fit any block size in an advanced communication system-on-chip product. It has therefore been adopted by important broadband communication applications and standards such as DVB-RCS (Digital Video Broadcasting - Return Channel via Satellite) and 3G wireless communication.

Keywords— Turbo Codes, Error Correcting Code, Encoder, Decoder

I. INTRODUCTION

Coding theorists have traditionally attacked the problem of designing good codes by developing codes with a lot of structure, which lends itself to feasible decoders, even though coding theory suggests that codes chosen at random should perform well if their block sizes are large enough. The challenge of finding practical decoders for almost-random, large codes was not seriously considered until recently. Perhaps the most exciting and potentially important development in coding theory in recent years has been the dramatic announcement of turbo codes by Berrou et al. in 1993. The announced performance of these codes was so good that the initial reaction of the coding establishment was deep skepticism, but researchers around the world have since been able to reproduce those results. The introduction of turbo codes has opened a whole new way of looking at the problem of constructing good codes and decoding them with low complexity. Turbo codes achieve near-Shannon-limit error correction performance with relatively simple component codes and large interleavers: a required Eb/N0 of 0.7 dB was reported for a bit-error rate (BER) of 10^-5 for a rate 1/2 turbo code.

This article considers multiple turbo codes (the parallel concatenation of q > 2 convolutional codes) and a suitable decoder structure derived from an approximation to the maximum a posteriori (MAP) decision rule.

II. PARALLEL CONCATENATION OF CONVOLUTIONAL CODES

The codes considered in this article consist of the parallel concatenation of multiple (q ≥ 2) convolutional codes with random interleavers (permutations) at the input of each encoder. This extends the original results on turbo codes, which considered codes formed from just two constituent codes and an overall rate of 1/2. Figure 1 provides an example of the parallel concatenation of three convolutional codes. The encoder contains three recursive binary convolutional encoders with m1, m2, and m3 memory cells, respectively. In general, the three component encoders may be different and may even have different rates. The first component encoder operates directly on the information bit sequence u of length N, producing the two output sequences x0 and x1. The second component encoder operates on a reordered sequence of information bits, u2, produced by a permuter (interleaver) π2 of length N, and outputs the sequence x2. Similarly, subsequent component encoders operate on reordered sequences of information bits. The interleaver is a pseudorandom block scrambler defined by a permutation of N elements without repetitions: a complete block is read into the interleaver and read out in a specified (fixed) random order, and the same interleaver is used repeatedly for all subsequent blocks.

Figure 1 shows an example where a rate r = 1/n = 1/4 code is generated by three component codes with memory m1 = m2 = m3 = m = 2, producing the outputs x0, x1, x2, and x3 from the generator polynomials g0 and g1 (given in octal representation). Note that various code rates can be obtained by proper puncturing of x1, x2, x3, and even x0.

Figure 1: Example of encoder with three codes.

We used the encoder in Fig. 1 to generate an (n(N + m), N) block code, where the m tail bits of code 2 and code 3 are not transmitted. Since the component encoders are recursive, it is not sufficient to set the last m information bits to zero in order to drive the encoder to the all-zero state, i.e., to terminate the trellis. The termination (tail) sequence depends on the state of each component encoder after N bits, which makes it impossible to terminate all component encoders with m predetermined tail bits. This issue, which had not been resolved in the original turbo code implementation, can be dealt with by a simple method that is valid for any number of component codes. A minimal encoder sketch is given below.
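
To make the encoder structure concrete, the following minimal sketch (Python) implements a rate-1/4 parallel concatenation of three recursive systematic component encoders with memory m = 2 and fixed pseudorandom interleavers, in the spirit of Figure 1. Since the paper leaves the octal generator values unspecified, the pair g0 = 7, g1 = 5 (octal) is an illustrative assumption, and trellis termination is omitted for brevity.

# Sketch of a rate-1/4 parallel turbo encoder (three RSC parity streams).
# Memory m = 2 with feedback g0 = 7 (octal) and feedforward g1 = 5 (octal);
# these polynomial values are illustrative assumptions, not from the paper.
import random

def rsc_parity(bits, g0=0b111, g1=0b101, m=2):
    """Return the parity sequence of one recursive systematic encoder."""
    state = 0                       # shift register, bit i = a_{k-1-i}
    parity = []
    for u in bits:
        fb = u                      # feedback bit a_k = u_k + feedback taps
        for i in range(m):
            if (g0 >> (i + 1)) & 1:
                fb ^= (state >> i) & 1
        reg = ((state << 1) | fb) & ((1 << (m + 1)) - 1)   # [a_k .. a_{k-m}]
        p = 0
        for i in range(m + 1):      # parity = g1 taps over [a_k .. a_{k-m}]
            if (g1 >> i) & 1:
                p ^= (reg >> i) & 1
        parity.append(p)
        state = reg & ((1 << m) - 1)
    return parity

def turbo_encode(u, perms):
    """Parallel concatenation: one systematic stream plus one parity
    stream per (interleaved) copy of the data, as in Figure 1."""
    streams = [list(u)]                      # x0: systematic bits
    streams.append(rsc_parity(u))            # x1: encoder 1, no interleaving
    for perm in perms:                       # x2, x3: encoders 2 and 3
        streams.append(rsc_parity([u[i] for i in perm]))
    return streams

N = 8
u = [random.randint(0, 1) for _ in range(N)]
pi2 = random.sample(range(N), N)             # fixed pseudorandom permutations
pi3 = random.sample(range(N), N)
x0, x1, x2, x3 = turbo_encode(u, [pi2, pi3])

Each component encoder contributes only its parity stream, and the systematic bits are transmitted once as x0; puncturing x1, x2, x3 (and even x0) to raise the code rate would operate directly on these streams.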


III. DESIGN OF CONSTITUENT ENCODERS

Maximizing the weight of the output codewords corresponding to weight-2 data sequences gives the best BER performance at moderate bit signal-to-noise ratio (SNR) as the random interleaver size N gets large. In this region, the dominant term in the expression for the bit error probability of a turbo code with q constituent encoders is governed by d^p_{j,2}, the minimum parity weight (weight due to parity checks only) of the codewords at the output of the j-th constituent code for weight-2 data sequences, and by a constant β independent of N. Define d_{j,2} = d^p_{j,2} + 2 as the minimum output weight, including both parity and information bits, if the j-th constituent code transmits the information (systematic) bits. Usually one constituent code transmits the information bits (j = 1), and the information bits of the others are punctured. Define

    d_ef = Σ_{j=1}^{q} d^p_{j,2} + 2

as the effective free distance of the turbo code, and 1/N^(q-1) as the interleaver's gain. (For illustration: with q = 2 identical constituent codes having d^p_{j,2} = 4, the effective free distance is d_ef = 10 and the interleaver gain is 1/N.) A bound on d^p_2 can be given for any constituent code. A design for constituent convolutional codes, which are not necessarily optimum convolutional codes, was originally reported for rate 1/n codes.

IV. INTERLEAVER DESIGN

The random interleaver uses a fixed random permutation and maps the input sequence according to the permutation order. The length of the input sequence is assumed to be N. Figure 2 shows a random interleaver with N = 8:

    Write in:                  0 1 1 0 1 0 0 1
    Fixed random permutation:  1 3 6 8 2 7 4 5
    Read out:                  0 1 0 1 1 0 0 1

Figure 2: A pseudorandom interleaver with N = 8.

From Figure 2, the interleaver writes in [0 1 1 0 1 0 0 1] and reads out [0 1 0 1 1 0 0 1]. A short sketch reproducing this write-in/read-out operation is given below.
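
A minimal sketch of the write-in/read-out operation of Figure 2; the permutation indices are converted from the paper's one-based form to zero-based Python indexing:

# Reproduce the Figure 2 example: a fixed pseudorandom interleaver of
# length N = 8 given by a one-based permutation of positions.
def interleave(bits, perm_one_based):
    """Write the block in, read it out in the fixed permuted order."""
    return [bits[p - 1] for p in perm_one_based]

write_in = [0, 1, 1, 0, 1, 0, 0, 1]
perm     = [1, 3, 6, 8, 2, 7, 4, 5]
read_out = interleave(write_in, perm)
print(read_out)   # -> [0, 1, 0, 1, 1, 0, 0, 1], matching Figure 2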


V. VITERBI DECODER

The Viterbi decoding algorithm is described here for a simple binary convolutional code with rate 1/2 and constraint length K = 3. For optimal decoding of a modulation scheme with memory (as is the case here), if there are N coded bits, we would need to search 2^N possible combinations, which becomes prohibitively complex as N grows. The Viterbi algorithm avoids this exhaustive search by relying on the following observations:

(1) Although each state can be reached from two possible states, only one of the transitions is valid.
(2) We find the transition that is more likely (based on the received coded bits) and ignore the other transition.
(3) The errors in the received coded sequence are randomly distributed and the probability of error is small.

VI. OBSERVATIONS

Based on the above assumptions, the decoding scheme proceeds as follows. Assume that there are N coded bits. Take two coded bits at a time for processing, and compute the Hamming distance, branch metric, path metric, and survivor path index, (N/2) + K - 1 times. Let i be the index varying from 1 to (N/2) + K - 1.

Figure 3: Branch metric and path metric computation for the Viterbi decoder.

Figure 4: BER plot for BPSK modulation in an AWGN channel with a binary convolutional code and hard-decision Viterbi decoding.

A. Traceback Unit

Once the survivor paths have been computed (N/2) + K - 1 times, the decoding algorithm can start estimating the input sequence. Thanks to the presence of the tail bits (K - 1 additional zeros), it is known that the final state after convolutional encoding is state 00. Table 1 gives the input bit implied by each valid pair of current state and previous state, with x marking invalid transitions; a minimal decoder sketch built on this trellis appears at the end of this section.

    Current state | Input, given previous state:  00  01  10  11
    00            |                                0   0   x   x
    01            |                                x   x   0   0
    10            |                                1   1   x   x
    11            |                                x   x   1   1

Table 1: Input given current state and previous state.

(1) The results in Figure 4 were obtained using the example simulation code, with the trellis depth set to K x 5 and an adaptive quantizer with three-bit channel symbol quantization. For each data point, the simulation was run until 100 (or more) errors occurred. With this number of errors, there is 95% confidence that the true number of errors for the number of data bits passed through the simulation lies between 80 and 120.

(2) Notice how the simulation results for BER on an uncoded channel closely track the theoretical BER for an uncoded channel, which is given by P(e) = 0.5 erfc(sqrt(Eb/N0)) = Q(sqrt(2 Eb/N0)). This validates the uncoded BER algorithm and the Gaussian noise generator. The coded BER results agree well with those obtained by others.

(3) Error events occur as a Poisson process, a random sequence of events in time. The Poisson process has a mean rate λ equal to n/t, where n is the number of events (the number of errors, in this case) and t is the time interval of the measurement. For the purposes of the simulation, let t be the total number of bits in the simulation. Say we measure 100 errors in 100,000 bits; the rate is thus 100/100,000, or 1 x 10^-3. If we set up the simulation to run for 100,000 bits, then the mean μ of the Poisson distribution is λt, or 100 errors. The probability of observing r errors, given a mean of μ errors, is

    P(r) = (μ^r e^(-μ)) / r!

So the Poisson distribution can be plotted for 50 to 150 errors, given a mean of 100 errors; the cumulative probability of this distribution over the range of 80 to 120 errors is approximately 95.99%, which is the basis for the 95% confidence statement in observation (1). This is easy to verify numerically, as the sketch below shows.
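
As a quick numerical check of the 95.99% figure, a short sketch using SciPy's Poisson CDF (the use of scipy here is an assumption of convenience; any Poisson CDF implementation would do):

# Verify the confidence interval quoted above: for a Poisson-distributed
# error count with mean 100, the probability of observing between 80 and
# 120 errors inclusive should be roughly 95.99%.
from scipy.stats import poisson

mu = 100.0                                  # mean number of errors
p = poisson.cdf(120, mu) - poisson.cdf(79, mu)
print(f"P(80 <= errors <= 120) = {p:.4f}")  # approximately 0.9599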
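
Finally, the decoder sketch referenced in the Traceback Unit discussion: a minimal hard-decision Viterbi decoder for a K = 3, rate-1/2 binary convolutional code with zero-tail termination. The generator pair (7, 5) in octal is an illustrative assumption, since the paper does not state the generators; the survivor table plays the role of the trellis summarized in Table 1.

# Minimal hard-decision Viterbi decoder for a K = 3, rate-1/2 binary
# convolutional code. Generators G = (7, 5) octal are an assumption;
# the trellis is terminated with K - 1 = 2 tail zeros.
G = (0b111, 0b101)           # generator polynomials, LSB = oldest bit
K = 3                        # constraint length
NSTATES = 1 << (K - 1)       # 4 states

def conv_encode(bits):
    """Feedforward encoder; input is padded with K - 1 tail zeros."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        reg = (b << (K - 1)) | state                  # [input, state]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(coded):
    """Hard-decision Viterbi: branch/path metrics plus traceback."""
    INF = float("inf")
    pm = [0] + [INF] * (NSTATES - 1)       # start in state 00
    history = []                           # survivor table, one row per step
    for t in range(0, len(coded), 2):
        rx = coded[t:t + 2]
        new_pm = [INF] * NSTATES
        surv = [0] * NSTATES
        for s in range(NSTATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1                                  # next state
                expected = [bin(reg & g).count("1") & 1 for g in G]
                bm = sum(e != r for e, r in zip(expected, rx)) # Hamming dist.
                if pm[s] + bm < new_pm[ns]:                    # keep survivor
                    new_pm[ns] = pm[s] + bm
                    surv[ns] = s
        history.append(surv)
        pm = new_pm
    # Traceback from state 00, guaranteed final state thanks to the tail bits.
    state, decoded = 0, []
    for surv in reversed(history):
        decoded.append((state >> (K - 2)) & 1)  # input bit that led here
        state = surv[state]
    return decoded[::-1][: len(history) - (K - 1)]  # drop the tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(conv_encode(msg)) == msg      # noiseless round trip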


