
B. P. Lathi, Zhi Ding - Modern Digital and Analog Communication Systems-Oxford University Press (2009)



rig_1=(1+sign(xig_n))/2;                % Hard decisions
r=reshape(rig_1,7,L1)';                 % S/P to form 7 bit codewords
x=mod(r*H',2);                          % generate error syndromes
for k1=1:L1,
    for k2=1:K2,
        if Syndrome(k2,:)==x(k1,:),     % find the Syndrome index
            idxe=k2;
        end
    end
    error=E(idxe,:);                    % look up the error pattern
    cword=xor(r(k1,:),error);           % error correction
    sigcw(:,k1)=cword(1:4);             % keep the message bits
end
cw=reshape(sigcw,1,K);
BER_coded(ii)=sum(abs(cw-sig_b))/K;     % Coded BER on info bits

% Uncoded Simulation Without Hamming code
xig_3=2*sig_b-1;                        % Polar signaling
xig_m=sqrt(SNR)*xig_3+AWnoise2;         % Add AWGN and adjust SNR
rig_1=(1+sign(xig_m))/2;                % Hard decision
BER_uncode(ii)=sum(abs(rig_1-sig_b))/K; % Compute BER
end
EboverN=[1:14]-3;                       % Need to note that SNR = 2 Eb/N
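The syndrome-decoding flow in the listing above can be sketched in Python with NumPy. This is a minimal illustration, not the book's code: the parity submatrix P below is an assumed choice (any parity-check matrix whose seven columns are distinct and nonzero yields a valid Hamming (7,4) code), and the names G, H, E, decode mirror the MATLAB listing.

```python
import numpy as np

# Systematic Hamming (7,4): codeword = [4 message bits | 3 parity bits].
# P is an assumed parity submatrix; its rows must be distinct, nonzero,
# and different from the columns of I3 so that H has 7 distinct columns.
P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # 4x7 generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])    # 3x7 parity-check matrix

# Syndrome table: each single-bit error pattern maps to a unique syndrome.
E = np.vstack([np.zeros(7, dtype=int), np.eye(7, dtype=int)])
syndromes = E @ H.T % 2
table = {tuple(s): e for s, e in zip(syndromes, E)}

def decode(r):
    """Correct at most one bit error, then return the 4 message bits."""
    s = tuple(r @ H.T % 2)          # compute the syndrome
    cword = (r + table[s]) % 2      # flip the bit the syndrome flags
    return cword[:4]                # keep the message bits

msg = np.array([1, 0, 1, 1])
cw = (msg @ G) % 2
cw[2] ^= 1                          # inject a single bit error
recovered = decode(cw)              # -> array([1, 0, 1, 1])
```

As in the MATLAB version, the decoder always applies the single-error pattern its syndrome points to; two or more errors in a word will defeat it.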

Naturally, when Eb/N is low, there tends to be more than one error bit per codeword. Even so, the decoder treats every corrupted codeword as if it contained only a single bit error, and its attempt to correct one bit may in fact add an error bit. When Eb/N is high, a codeword is more likely to contain at most one bit error. This explains why the coded BER is worse than the uncoded BER at low Eb/N and better at high Eb/N. On the other hand, Fig. 14.3 gives an optimistic approximation by assuming a cognitive decoder that takes no action when the number of bit errors in a codeword exceeds one; its performance is marginally better at low Eb/N.
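The argument above can be checked with a short calculation: assuming independent channel errors with raw bit-error probability p, a 7-bit codeword defeats the single-error decoder whenever it contains two or more errors. A minimal Python sketch (the function name is illustrative):

```python
from math import comb

def p_block_fail(p, n=7):
    """Probability that an n-bit codeword holds 2 or more channel errors,
    i.e. that a single-error-correcting decoder is defeated
    (assumes independent bit errors)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2, n + 1))

# The failure probability falls off roughly as p^2, so correction
# helps little at high raw BER but greatly at low raw BER:
high = p_block_fail(0.1)    # ~1.5e-1
low  = p_block_fail(0.01)   # ~2.0e-3
```

This quadratic falloff is why the coded curve crosses the uncoded one: at low Eb/N (large p) multi-error words are common and miscorrection dominates, while at high Eb/N they become rare.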

