
The error event likelihoods are calculated as the difference between the squared Euclidean distance from the signal to the convolution of the maximum likelihood sequence estimate with the channel PR, and the squared Euclidean distance from the signal to the convolution of an alternative data pattern with the channel PR. During each clock cycle, the best M of them are chosen, and the syndromes for these error events are calculated. Throughout the processing of each block, a list is maintained of the N most likely error events, along with their associated error types, positions, and syndromes. At the end of the block, when the list of candidate error events is finalized, the likelihoods and syndromes are calculated for each of the $\binom{N}{L}$ possible L-sets of candidate error events. After disqualifying those L-sets of candidates that overlap in the time domain, and those candidates and L-sets of candidates that produce a syndrome not matching the actual syndrome, the candidate or L-set of candidates that remains and has the highest likelihood is chosen for correction. Finding the error event position and type completes decoding.
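The final selection step can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation: the candidate representation (a dict with likelihood, time span, and precomputed syndrome), the convention that a higher likelihood value means a more likely event, and the use of the summed likelihood as the metric for an L-set are all assumptions made for the sketch.

```python
from itertools import combinations

def select_correction(candidates, actual_syndrome, L):
    """Pick the most likely non-overlapping set of at most L candidate
    error events whose combined syndrome matches the actual syndrome.

    Each candidate is a dict with keys 'likelihood' (higher = more
    likely, an assumption of this sketch), 'start', 'end' (time span),
    and 'syndrome' (a tuple over GF(2)).  Returns the winning set, or
    None to signal a decoding failure.
    """
    best, best_metric = None, float('-inf')
    for size in range(1, L + 1):
        for subset in combinations(candidates, size):
            # Disqualify sets whose events overlap in the time domain.
            spans = sorted((c['start'], c['end']) for c in subset)
            if any(prev[1] >= nxt[0] for prev, nxt in zip(spans, spans[1:])):
                continue
            # The syndrome of a combination of events is the GF(2) sum
            # (XOR) of the individual syndromes; it must match the
            # syndrome actually observed for the block.
            combined = tuple(
                sum(bits) % 2
                for bits in zip(*(c['syndrome'] for c in subset))
            )
            if combined != actual_syndrome:
                continue
            # Among the surviving sets, keep the most likely one.
            metric = sum(c['likelihood'] for c in subset)
            if metric > best_metric:
                best, best_metric = subset, metric
    return best
```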

The decoder can make two types of errors: it fails to correct if the syndrome is zero, or it makes a wrong correction if the syndrome is nonzero but the most likely error event or combination of error events does not produce the right syndrome. A code must be able to detect a single error from the list of dominant error events and should minimize the probability of producing a zero syndrome when more than one error event occurs in a codeword. Consider a linear code given by an (n − k) × n parity check matrix H. We are interested in codes capable of correcting or detecting dominant errors. If all errors from the list were contiguous and shorter than m, a cyclic code with n − k = m parity bits could be used to correct a single error event [16]; in reality, however, the error sequences are more complex, and the occurrence probabilities of error events of lengths 6, 7, 8, or more are not negligible. Furthermore, practical considerations (such as decoding delay, thermal asperities, etc.) dictate using short codes, and consequently, in order to keep the code rate high, only a relatively small number of parity bits is allowed, making the design of error event detection codes nontrivial. The code redundancy must be used carefully so that the code is optimal for a given list E of dominant error events.

The parity check matrix of a code can be created by a recursive algorithm that adds one column of H at a time, using the criterion that after adding each new column, the error-event-detection capabilities of the code are still satisfied. The algorithm can be described as a process of building a directed graph whose vertices are labeled by portions of the parity check matrix long enough to capture the longest error event, and whose edges are labeled by column vectors that can be appended to the parity check matrix without violating the error event detection capability [4]. To formalize the code construction requirements, for each error event from E, denote by $s_{i,l}$ the syndrome of the error vector $\sigma_l(e_i)$ ($s_{i,l} = \sigma_l(e_i) \cdot H^T$), where $\sigma_l(e_i)$ is an l-times shifted version of error event $e_i$. The code should be designed in such a way that any shift of any dominant error sequence produces a nonzero syndrome, i.e., that $s_{i,l} \neq 0$ for any $1 \le i \le I$ and $1 \le l \le n$. In this way, a single error event can be detected (relying on the error event likelihoods to localize the error event). The correctable shifts must include negative shifts as well as shifts larger than n in order to cover those error events that straddle adjacent codewords, because the failure to correct straddling events significantly affects the performance. A stronger code could have a parity check matrix that guarantees that the syndromes of any two (error event, error position) pairs $((i_1, l_1), (i_2, l_2))$ are different, i.e., $s_{i_1,l_1} \neq s_{i_2,l_2}$. This condition would result in a single error event correction capability. Codes capable of correcting multiple error events can be defined analogously. We can strengthen this property even further and require that for any two shifts and any two dominant error events, the Hamming distance between any pair of syndromes is larger than δ; however, strengthening any of these requirements decreases the code rate.
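A simple way to experiment with this construction is a greedy column-by-column search. The sketch below is not the graph-based algorithm of [4]: it merely tries candidate columns in a fixed order and keeps one if every dominant error event ending at the new column still yields a nonzero syndrome. Error events are represented here by the binary supports of their error vectors, syndromes are computed over GF(2), and n, r = n − k, and the event list are illustrative parameters.

```python
from itertools import product

def appendable(H_cols, col, error_events, r):
    """Criterion from the text: after appending `col`, every dominant
    error event that ends at the new column must still produce a
    nonzero syndrome.  Columns and event supports are binary tuples."""
    cols = H_cols + [col]
    j = len(cols) - 1                      # index of the new column
    for e in error_events:
        Li = len(e)
        if Li > len(cols):
            continue                       # event does not fit yet
        s = [0] * r
        for m, bit in enumerate(e):
            if bit:                        # XOR in the selected column
                for t in range(r):
                    s[t] ^= cols[j - Li + 1 + m][t]
        if not any(s):
            return False                   # zero syndrome: reject col
    return True

def build_parity_check(n, r, error_events):
    """Greedily grow H one column at a time (no backtracking)."""
    H_cols = []
    while len(H_cols) < n:
        for col in product((0, 1), repeat=r):
            if any(col) and appendable(H_cols, col, error_events, r):
                H_cols.append(col)
                break
        else:
            return None                    # dead end for this greedy path
    return [[col[t] for col in H_cols] for t in range(r)]  # r x n rows
```

A plain greedy pass like this can reach a dead end; the directed-graph formulation of [4] avoids that by only following edges (columns) leading to vertices from which the construction can continue.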

If $L_i$ is the length of the ith error event, and if L is the length of the longest error event from E ($L = \max_{1 \le i \le I}\{L_i\}$), then it is easy to see that for a code capable of detecting an error event from E that ends at position j, the linear combination of the error event and the columns of H from j − L + 1 to j has to be nonzero. More precisely, for any i and any j (ignoring the codeword boundary effects),

$$\sum_{1 \le m \le L_i} e_{i,m} \cdot h_{j - L_i + m}^{T} \neq 0$$

where $e_{i,m}$ is the mth element of the error event $e_i$, and $h_j$ is the jth column of H.
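As a toy numeric check of this condition, assume a hypothetical dominant error event e = (1, 1) (two adjacent bit flips) and a small 2 × 4 parity check matrix chosen for illustration; the sum above then reduces to the XOR of two adjacent columns of H, which must be nonzero for every end position j:

```python
# Columns h_1 .. h_4 of a toy 2 x 4 parity check matrix H.
H_cols = [(1, 0), (0, 1), (1, 1), (1, 0)]
e = (1, 1)                                   # hypothetical dominant event
for j in range(len(e) - 1, len(H_cols)):     # end position of the event
    # Syndrome of the event ending at column j: GF(2) sum of the
    # columns selected by the nonzero positions of e.
    s = tuple(
        sum(e[m] & H_cols[j - len(e) + 1 + m][t] for m in range(len(e))) % 2
        for t in range(2)
    )
    assert s != (0, 0)                       # detected at every shift
```

Here the adjacent-column XORs are (1, 1), (1, 0), and (0, 1), all nonzero, so this H detects the event e at every position within the codeword.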
