
Given a number r of redundant bits, we say that a [2^r − 1, 2^r − r − 1, 3] Hamming code is a code having an r × (2^r − 1) parity check matrix H such that its columns are all the different nonzero vectors of length r.

A Hamming code has minimum distance 3. This follows from its definition and Corollary 1. Notice that any two columns in H, being different, are linearly independent. Also, if we take any two different columns and their sum, these three columns are linearly dependent, proving our assertion.

A natural way of writing the columns of H in a Hamming code is to consider them as binary numbers in increasing order: the first column is 1 in base 2, the second column is 2, and so on; the last column is 2^r − 1 in base 2, i.e., (1, 1, …, 1)^T. This parity check matrix, although nonsystematic, makes the decoding very simple.
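To illustrate this ordering, here is a short Python sketch (ours, not part of the text) that builds such a parity check matrix for a given r; for r = 3 it reproduces the matrix of Example 4 below.

```python
def hamming_parity_check(r):
    """Parity check matrix of the [2^r - 1, 2^r - r - 1, 3] Hamming code.

    Column j (for j = 1, ..., 2^r - 1) is the r-bit binary representation
    of j, with the most significant bit in the first row.
    """
    n = 2 ** r - 1
    return [[(j >> (r - 1 - i)) & 1 for j in range(1, n + 1)] for i in range(r)]


# For r = 3 this gives the matrix H of Eq. (34.56):
for row in hamming_parity_check(3):
    print(row)
```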

In effect, let r be a received vector such that r = v ⊕ e, where v was the transmitted codeword and e is an error vector of weight 1. Then the syndrome s = Hr^T = He^T (since Hv^T = 0) gives the column of H corresponding to the location in error. This column, read as a number in base 2, tells us exactly where the error has occurred, so the received vector can be corrected.
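A minimal sketch of this syndrome decoder, assuming vectors are represented as lists of 0/1 integers and H is given row by row (helper names are ours):

```python
def syndrome(H, received):
    """Syndrome s = Hr^T over GF(2), as a list of r bits."""
    return [sum(h * x for h, x in zip(row, received)) % 2 for row in H]


def correct_single_error(H, received):
    """Single-error correction: the syndrome, read in base 2, is the
    (1-indexed) position of the error; a zero syndrome means no error."""
    s = syndrome(H, received)
    position = int("".join(str(bit) for bit in s), 2)
    corrected = list(received)
    if position != 0:
        corrected[position - 1] ^= 1  # flip the bit indicated by the syndrome
    return corrected
```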

Example 4 Consider the [7, 4, 3] Hamming code C with parity check matrix


        ⎛ 0 0 0 1 1 1 1 ⎞
    H = ⎜ 0 1 1 0 0 1 1 ⎟                                        (34.56)
        ⎝ 1 0 1 0 1 0 1 ⎠

Assume that vector r = 1100101 is received. The syndrome is s = Hr^T = 001, which is the binary representation of the number 1. Hence, the first location is in error, so the decoder estimates that the transmitted vector was v = 0100101. □
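Using the two functions sketched above, the computation in Example 4 can be reproduced as follows:

```python
H = hamming_parity_check(3)           # the matrix of Eq. (34.56)
r_received = [1, 1, 0, 0, 1, 0, 1]    # r = 1100101

print(syndrome(H, r_received))              # [0, 0, 1] -> binary for 1
print(correct_single_error(H, r_received))  # [0, 1, 0, 0, 1, 0, 1] = 0100101
```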

We can obtain 1-error correcting codes of any length simply by shortening a Hamming code. This procedure works as follows: assume that we want to encode k information bits into a 1-error correcting code. Let r be the smallest number such that k ≤ 2^r − r − 1. Let H be the parity check matrix of a [2^r − 1, 2^r − r − 1, 3] Hamming code. Then construct a matrix H′ by eliminating some 2^r − r − 1 − k columns from H. The code whose parity check matrix is H′ is a [k + r, k, d] code with d ≥ 3, hence it can correct one error. We call it a shortened Hamming code. For instance, the [5, 2, 3] code whose parity check matrix is given by (34.51) is a shortened Hamming code.
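A possible implementation of this shortening procedure is sketched below; which columns to eliminate is a free choice, so the matrix obtained here (keeping the first k + r columns) need not coincide with the one in (34.51).

```python
def shortened_parity_check(k):
    """Parity check matrix of a [k + r, k, d >= 3] shortened Hamming code,
    where r is the smallest integer with k <= 2^r - r - 1."""
    r = 1
    while k > 2 ** r - r - 1:
        r += 1
    # Columns of the full Hamming matrix are the numbers 1, ..., 2^r - 1 in
    # base 2; keeping the first k + r of them is one arbitrary choice.
    return [[(j >> (r - 1 - i)) & 1 for j in range(1, k + r + 1)] for i in range(r)]


# k = 2 requires r = 3 and yields a [5, 2, 3] shortened Hamming code.
for row in shortened_parity_check(2):
    print(row)
```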

In general, if H is the parity check matrix of a code C, H′ is a matrix obtained by eliminating a certain number of columns from H, and C′ is the code with parity check matrix H′, we say that C′ is obtained by shortening C.

A [2^r − 1, 2^r − r − 1, 3] Hamming code can be extended to a [2^r, 2^r − r − 1, 4] code by adding to each codeword a parity bit, that is, the exclusive-OR of the first 2^r − 1 bits. The new code is called an extended Hamming code.
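Adding the parity bit is straightforward; a sketch (function name ours):

```python
def extend(codeword):
    """Append the exclusive-OR of all 2^r - 1 bits as an overall parity bit,
    turning a distance-3 Hamming codeword into a distance-4 extended one."""
    parity = 0
    for bit in codeword:
        parity ^= bit
    return list(codeword) + [parity]


print(extend([0, 1, 0, 0, 1, 0, 1]))  # the codeword v of Example 4 -> [0, 1, 0, 0, 1, 0, 1, 1]
```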

So far, we have not talked about probabilities of errors. Assume that we have a binary symmetric channel (BSC), i.e., the probability of a 1 becoming a 0 or of a 0 becoming a 1 is p < 0.5. Let Perr be the probability of error after decoding using a code, i.e., the probability that the output of the decoder does not correspond to the originally transmitted information vector. A fundamental question is the following: given a BSC with bit error probability p, does there exist a code of high rate that can make Perr arbitrarily small? The answer, due to Shannon, is yes, provided that the code has rate below a parameter called the capacity of the channel, defined next.

Definition 2 Given a BSC with probability of bit error p, we say that the capacity of the channel is

    C(p) = 1 + p log2 p + (1 − p) log2(1 − p)                    (34.57)
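For reference, (34.57) can be evaluated directly, e.g. in Python (with the usual convention 0 · log2 0 = 0):

```python
from math import log2


def capacity(p):
    """Capacity C(p) = 1 + p*log2(p) + (1 - p)*log2(1 - p) of a BSC, Eq. (34.57)."""
    if p in (0.0, 1.0):     # convention: 0 * log2(0) = 0
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)


print(capacity(0.5))    # 0.0 -- at p = 0.5 the channel carries no information
print(capacity(0.01))   # approximately 0.919
```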
