S.72-3320 Advanced Digital Communication (4 cr)
Convolutional Codes

Timo O. Korhonen, HUT Communication Laboratory
comlab.hut.fi

Targets today
• Why to apply convolutional coding?
• Defining convolutional codes
• Practical encoding circuits
• Defining quality of convolutional codes
• Decoding principles
• Viterbi decoding


Convolutional encoding

[Figure: k message bits enter the (n,k,L) encoder, which outputs n encoded bits; each message bit influences n(L+1) output bits]

• Convolutional codes are applied in applications that require good performance with low implementation complexity
• They operate on code streams (not on blocks)
• Convolutional codes have memory that utilizes previous bits to encode or decode following bits (block codes are memoryless)
• Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of register stages)
• The constraint length C = n(L+1) is defined as the number of encoded bits a message bit can influence
• Convolutional codes achieve good performance by expanding their memory depth

Example: Convolutional encoder, k = 1, n = 2

(n,k,L) = (2,1,2) encoder with memory depth L = 2 (number of register stages):
  x'_j  = m_j ⊕ m_{j-1} ⊕ m_{j-2}
  x''_j = m_j ⊕ m_{j-2}
  x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

• A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner
• Thus the generated code is a function of the input and of the state of the FSM
• In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits (= constraint length C)
• Thus, for generation of the n-bit output, we require in this example n shift registers in the k = 1 convolutional encoder
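To make the encoder concrete, here is a minimal Python sketch of the (2,1,2) encoder as a finite state machine, following the output equations above. The function name and the list-based bit representation are illustrative assumptions, not part of the original slides.

```python
def conv_encode_212(message_bits):
    """Rate-1/2, memory-2 encoder: x'_j = m_j + m_{j-1} + m_{j-2},
    x''_j = m_j + m_{j-2} (mod 2); outputs interleaved as x' x''."""
    m1 = m2 = 0                      # shift register, initially all zero
    out = []
    for m in message_bits:
        out.append(m ^ m1 ^ m2)      # x'_j
        out.append(m ^ m2)           # x''_j
        m2, m1 = m1, m               # shift the register
    return out

print(conv_encode_212([1, 0, 1]))    # -> [1, 1, 1, 0, 0, 0]
```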


Example: (n,k,L) = (3,2,1) convolutional encoder

  x'_j   = m_j ⊕ m_{j-3} ⊕ m_{j-2}
  x''_j  = m_j ⊕ m_{j-3} ⊕ m_{j-1}
  x'''_j = m_j ⊕ m_{j-2}

• After each new block of k input bits follows a transition into a new state
• Hence, from each input state transition, 2^k different output states may follow
• Each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits

Generator sequences

[Figure: k message bits enter the (n,k,L) encoder; n encoded bits exit]

• An (n,k,L) convolutional code can be described by the generator sequences g^(1), g^(2), ..., g^(n) that are the impulse responses of the coder's n output branches:
  g^(i) = [g^(i)_0 g^(i)_1 ... g^(i)_m]
• For the (2,1,3) encoder shown in the figure: g^(1) = [1 0 1 1], g^(2) = [1 1 1 1]
• Note that the generator sequence length always exceeds the register depth by 1
• Generator sequences specify the convolutional code completely by the associated generator matrix
• The encoded convolutional code is produced by matrix multiplication of the input and the generator matrix
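The generator-sequence description can be checked with a small sketch: each output branch is the mod-2 convolution of the message with that branch's impulse response, and the branches are then interleaved into the code stream. Variable names are illustrative; the generators and the input u = (1 1 1 0 1) are taken from these slides, and the printed stream reproduces the encoded word v quoted on the state-diagram slide further below.

```python
def conv_branch(message, g):
    """Mod-2 convolution of the message with generator sequence g."""
    out = [0] * (len(message) + len(g) - 1)
    for i, m in enumerate(message):
        for j, gj in enumerate(g):
            out[i + j] ^= m & gj
    return out

g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]   # g(1), g(2) from the slide
u = [1, 1, 1, 0, 1]
v1, v2 = conv_branch(u, g1), conv_branch(u, g2)
v = [bit for pair in zip(v1, v2) for bit in pair]   # interleave the branches
print(v)   # -> the pairs 11 10 01 01 11 10 11 11
```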


Representing convolutional codes: Code tree

(Figure source: S. Lin, D.J. Costello: Error Control Coding, 2nd ed., p. 456)

• The number of branches deviating from each node equals 2^k

(n,k,L) = (2,1,2) encoder:
  x'_j  = m_j ⊕ m_{j-1} ⊕ m_{j-2}
  x''_j = m_j ⊕ m_{j-2}
  x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

[Figure: code tree for the (2,1,2) encoder, with nodes labeled by the register contents m_{j-1}, m_{j-2} and branches labeled by the output pairs x'_j x''_j. The tree tells how one input bit is transformed into two output bits (initially the register is all zero)]
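Since the code tree is just the encoder unrolled in time, its first levels can be printed with a short recursive sketch. The function name and text layout are my own; the tap equations and the all-zero initial register are the slide's.

```python
def code_tree(depth, m1=0, m2=0, indent=''):
    """Print the (2,1,2) code tree: per node, the output pair for each input bit."""
    if depth == 0:
        return
    for bit in (0, 1):
        x1, x2 = bit ^ m1 ^ m2, bit ^ m2      # x'_j, x''_j from the slide
        print(f"{indent}input {bit} -> output {x1}{x2}")
        code_tree(depth - 1, bit, m1, indent + '    ')

code_tree(3)   # the first three levels of the tree
```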


Representing convolutional codes compactly: code trellis and state diagram

[Figure: code trellis and state diagram showing the shift register states; input '1' is indicated by a dashed line]

Inspecting the state diagram: structural properties of convolutional codes

• Each new block of k input bits causes a transition into a new state
• Hence there are 2^k branches leaving each state
• Assuming the encoder starts in the all-zero state, the encoded word for any input of k-bit blocks can thus be obtained. For instance, for u = (1 1 1 0 1) below, the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced
• Verify that you obtain the same result!

[Figure: encoder state diagram for the (n,k,L) = (2,1,3) code; note that the number of states is 8 = 2^L, so L = 3 (three state bits)]
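The whole state diagram can be tabulated mechanically from the generator sequences. The sketch below does this for the eight-state encoder above (g(1) = 1011, g(2) = 1111); the S-numbering and the print format are my own assumptions and need not match the figure's labels, while input '1' corresponds to the dashed transitions.

```python
G = (0b1011, 0b1111)                 # generator sequences g(1), g(2)

for state in range(8):               # state = (m_{j-1}, m_{j-2}, m_{j-3})
    for bit in (0, 1):
        reg = (bit << 3) | state     # prepend the new input bit
        out = ''.join(str(bin(reg & g).count('1') % 2) for g in G)
        print(f"S{state} --{bit}/{out}--> S{reg >> 1}")
```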


Code weight, path gain, and generating function

• The state diagram can be modified to yield information on code distance properties (i.e. it tells how good the code is at detecting or correcting errors)
• Rules (example on the next slide):
  – (1) Split S_0 into an initial and a final state, and remove the self-loop
  – (2) Label each branch by the branch gain X^i, where i is the weight* of the n encoded bits on that branch
  – (3) Each path connecting the initial state and the final state represents a nonzero code word that diverges from and re-emerges with S_0 only once
• The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain
• The code weight distribution is obtained by using a weighted gain formula to compute the generating function (input-output equation)
  T(X) = Σ_i A_i X^i
  where A_i is the number of encoded words of weight i

*In linear codes, the weight is the number of '1's in the encoder output.

• Example: The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has the path gain X^2·X^1·X^1·X^1·X^2·X^1·X^2·X^2 = X^12, and the corresponding code word has the weight 12
• Where do these terms come from?
  T(X) = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...
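The A_i coefficients can be cross-checked by enumerating, directly on the state diagram, all paths that diverge from and remerge with S_0 exactly once, just as the modified-diagram rules describe. This is a brute-force sketch (function names are mine, and it assumes the code has no zero-weight loops, i.e. is non-catastrophic), not the transfer-function derivation behind the slide's T(X).

```python
from collections import Counter

G = (0b1011, 0b1111)                  # generators of the eight-state code

def branch(state, bit):
    """Next state and branch weight for a 3-bit state and an input bit."""
    reg = (bit << 3) | state
    weight = sum(bin(reg & g).count('1') % 2 for g in G)
    return reg >> 1, weight

def weight_spectrum(max_weight):
    """Count codewords (paths S0 -> S0 that leave S0 exactly once) by weight."""
    counts = Counter()
    frontier = [branch(0, 1)]         # the diverging '1' branch out of S0
    while frontier:
        state, w = frontier.pop()
        if w > max_weight:
            continue                  # weights only grow, so prune here
        for bit in (0, 1):
            ns, bw = branch(state, bit)
            if ns == 0:
                counts[w + bw] += 1   # remerged with S0
            else:
                frontier.append((ns, w + bw))
    return {i: counts[i] for i in sorted(counts) if i <= max_weight}

print(weight_spectrum(10))   # the slide's terms: {6: 1, 7: 3, 8: 5, 9: 11, 10: 25}
```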


Distance properties of convolutional codes

• Code strength is measured by the minimum free distance
  d_free = min { d(v', v'') : u' ≠ u'' }
  where v' and v'' are the encoded words corresponding to the information sequences u' and u''. The code can correct up to t < d_free/2 errors.
• The minimum free distance d_free denotes
  – the minimum weight of all the paths in the state diagram that diverge from and remerge with the all-zero state S_0
  – the lowest power of the code-generating function T(X):
    T(X) = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...  =>  d_free = 6
• Coding gain*: G_c = k·d_free/(2n) = R_c·d_free/2 ≥ 1

* For the derivation, see Carlson, p. 583.

Coding gain for some selected convolutional codes

Here is a table of some selected convolutional codes and their coding gains R_c·d_free/2, expressed for hard decoding also as 10·log10(R_c·d_free/2) dB.

[Table of selected convolutional codes and their coding gains not reproduced here]
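As a quick worked check of the gain formula, using the slide's own numbers for the eight-state code (R_c = 1/2, d_free = 6); the snippet itself is mine:

```python
import math

Rc, dfree = 1 / 2, 6                 # rate and free distance from the slides
Gc = Rc * dfree / 2                  # hard-decision coding gain
print(Gc, round(10 * math.log10(Gc), 2))   # -> 1.5, i.e. about 1.76 dB
```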


Decoding of convolutional codes

• Maximum likelihood decoding of convolutional codes means finding the code branch in the code trellis that was most likely transmitted
• Therefore maximum likelihood decoding is based on calculating the code Hamming distances for each branch potentially forming the encoded word
• Assume that the information symbols applied to an AWGN channel are equally likely and independent
• Let's denote by x the encoded symbols (error-free) and by y the received (potentially erroneous) symbols:
  x = x_0 x_1 x_2 ... x_j ...,  y = y_0 y_1 ... y_j ...
• The probability of receiving y for the transmitted code x is then
  p(y, x) = Π_j p(y_j | x_j)
• The most likely path through the trellis will maximize this metric
• Often the logarithm is taken of both sides, because the probabilities are often small numbers, yielding
  ln p(y, x) = Σ_j ln p(y_j | x_j)
  (note that this corresponds equivalently also to the smallest Hamming distance)

[Figure: the non-erroneous code x passes through the channel; the received code y enters the decoder (= distance calculation), which produces the bit decisions]

Example of exhaustive maximum likelihood detection

• Assume a three-bit message is transmitted and encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message. Thus 5 bits are encoded, resulting in 10 bits of code. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is produced (including some errors). What comes out of the decoder, i.e. what was most likely the transmitted code and what were the respective message bits?

[Figure: trellis with states a, b, c, d and the decoder outputs if a given path is selected]
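Because the message is only three bits, the exhaustive search can be written out directly: encode all 2^3 candidate messages (with the two flushing zeros) and pick the codeword closest in Hamming distance to the received sequence. The sketch assumes the slide-4 encoder with x' taps (111) and x'' taps (101), x' transmitted first; under that assumption it returns the message (0, 1, 0) at distance 2, which agrees with the path metric (8 correct bits, 2 erroneous) worked out on the next slide.

```python
from itertools import product

def encode(msg):
    """(2,1,2) encoder of slide 4: x' = m + m1 + m2, x'' = m + m2 (mod 2)."""
    m1 = m2 = 0
    out = []
    for m in msg:
        out += [m ^ m1 ^ m2, m ^ m2]
        m2, m1 = m1, m
    return out

received = [1,0, 0,1, 1,0, 1,1, 0,0]

def distance(msg):
    """Hamming distance of the flushed codeword of msg to the received bits."""
    return sum(a != b for a, b in zip(encode(list(msg) + [0, 0]), received))

best = min(product((0, 1), repeat=3), key=distance)
print(best, distance(best))   # -> (0, 1, 0) 2
```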


p(y, x) = Π_j p(y_j | x_j)  =>  ln p(y, x) = Σ_j ln p(y_j | x_j)

[Figure: trellis paths with each branch weighted by the probability of receiving its bits correctly (ln 0.9 per correct bit) or in error (ln 0.1 per erroneous bit)]

correctly received bits: 1+1+2+2+2 = 8;  8 · ln 0.9 ≈ 8 · (−0.11) = −0.88
bits in error:           1+1+0+0+0 = 2;  2 · ln 0.1 ≈ 2 · (−2.30) = −4.6
total path metric: −5.48

This is the largest metric; verify that you get the same result! Note also the Hamming distances!
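A two-line check of the arithmetic (the slide rounds ln 0.9 to −0.11, which gives −5.48; at full precision the metric is about −5.45):

```python
import math

metric = 8 * math.log(0.9) + 2 * math.log(0.1)   # correct and erroneous bits
print(round(metric, 2))                          # -> -5.45
```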


The Viterbi algorithm

• The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S_0 to S_0). The minimum distance is one of the sums of all path metrics from S_0 to S_0
• The exhaustive maximum likelihood method must search all the paths in the trellis (2^k paths emerge from / enter each of the 2^{L+1} states of an (n,k,L) code)
• The Viterbi algorithm improves computational efficiency by concentrating on the survivor paths of the trellis

The survivor path

• Assume for simplicity a convolutional code with k = 1; thus up to 2^k = 2 branches can enter each state in the trellis diagram
• Assume the optimal path passes through S. Metric comparison is done by adding the metrics of S_1 and S_2 to S. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path)
• For this reason the non-surviving path can be discarded -> all path alternatives need not be considered further
• Note that in principle the whole transmitted sequence must be received before the decision. However, in practice storing the states for an input length of 5L is quite adequate

[Figure: 2^L nodes, determined by the memory depth; 2^k branches enter each node; the branch with the larger metric is discarded]
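The survivor-path rule translates almost line by line into code. Below is a minimal hard-decision Viterbi sketch for the rate-1/2, memory-2 code of these slides; the function name, the tap-ordering argument, and the traceback-from-the-zero-state convention (valid when the tail is flushed with zeros) are my assumptions. Run on the received sequence of the exhaustive example above, it reproduces the same decision.

```python
def viterbi_212(received_pairs, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoding; returns message bits incl. flush bits."""
    metric = {0: 0}                   # path metric per state (m_{j-1}, m_{j-2})
    history = []                      # per depth: state -> (previous state, bit)
    for r in received_pairs:
        new_metric, step = {}, {}
        for s, pm in metric.items():
            for bit in (0, 1):
                reg = (bit << 2) | s                 # (m_j, m_{j-1}, m_{j-2})
                out = [bin(reg & gi).count('1') % 2 for gi in g]
                ns = reg >> 1
                bm = pm + sum(a != b for a, b in zip(out, r))
                if bm < new_metric.get(ns, float('inf')):   # keep the survivor
                    new_metric[ns], step[ns] = bm, (s, bit)
        metric = new_metric
        history.append(step)
    state, bits = 0, []               # flushed tail ends in the all-zero state
    for step in reversed(history):
        state, bit = step[state]
        bits.append(bit)
    return bits[::-1]

y = [(1, 0), (0, 1), (1, 0), (1, 1), (0, 0)]   # the exhaustive-search example
print(viterbi_212(y))                          # -> [0, 1, 0, 0, 0]
```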


Example of using the Viterbi algorithm

• Assume the received sequence is
  y = 01 10 11 11 01 00 01
  and the (n,k,L) = (2,1,2) encoder shown below. Determine the Viterbi decoded output sequence!
• (Note that for this encoder the code rate is 1/2 and the memory depth equals L = 2)

The maximum likelihood path

• After the register length L+1 = 3, the branch pattern begins to repeat
• At the first depth with two entries to a node, the branch with the smaller accumulated metric is selected (branch Hamming distances are given in parentheses in the figure)
• The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum distance path.
• (In the figure, black circles denote the deleted branches; dashed lines indicate that '1' was applied)
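A quick self-contained check of the stated Hamming distance between the received sequence and the decoded ML code sequence:

```python
received = '01 10 11 11 01 00 01'.split()
decoded  = '11 10 10 11 00 00 00'.split()
print(sum(a != b for r, d in zip(received, decoded)
                 for a, b in zip(r, d)))       # -> 4
```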


How to end up decoding?

• In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum distance path
• In practice, with long code words, zeroing requires feeding a long sequence of zeros to the end of the message bits: this wastes channel capacity and introduces delay
• To avoid this, path memory truncation is applied:
  – Trace all the surviving paths to the depth where they merge
  – The figure on the right shows a common point at a memory depth J
  – J is a random variable whose applicable magnitude, shown in the figure (5L), has been experimentally tested for a negligible error rate increase
  – Note that this also introduces a delay of 5L!

[Figure: surviving paths merging at depth J ≈ 5L stages of the trellis]

Lessons learned

• You understand the differences between cyclic codes and convolutional codes
• You can create a state diagram for a convolutional encoder
• You know how to construct convolutional encoder circuits based on knowing the generator sequences
• You can analyze code strength based on known code generation circuits / state diagrams or generator sequences
• You understand how to realize maximum likelihood convolutional decoding by using exhaustive search
• You understand the principle of Viterbi decoding
