U. Glaeser

Now the task of detecting i_[0,∞) is to find the x_[0,∞) that is closest to y_[0,∞) in the Euclidean sense. Recall that we stated as an assumption that the channel noise is AWGN, while in magnetic recording systems the noise after equalization is colored, so the minimum-distance detector is not optimal and additional post-processing is necessary, which will be addressed later in this chapter.

The Viterbi algorithm is a classical application of dynamic programming. Structurally, the algorithm contains q^M lists, one for each state, in which the paths whose states correspond to the label indices are stored, compared, and the best one of them retained. The algorithm can be described recursively as follows:

1. Initial condition: Initialize the starting list with the root node (the known initial state) and set its metric to zero, l = 0.

2. Path extension: Extend all the paths (nodes) by one branch to yield new candidates, l = l + 1, and find the sum of the metric of the predecessor node and the branch metric of the connecting branch (ADD). Classify these candidates into the corresponding q^M lists (or fewer for l < M). Each list (except at the head of the trellis) contains q paths.

3. Path selection: For each end-node of the extended paths, determine the maximum/minimum* of these sums (COMPARE) and assign it to the node. Label the node with the best path metric to it, selecting (SELECT) that path for the next step of the algorithm (discard the others). If two or more paths have the same metric, i.e., if they are equally likely, choose one of them at random. Find the best of all the survivor paths, x′_[0,l), and its corresponding information sequence i′_[0,l), and release the bit i′_{l−δ}. Go to step 2.
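One recursion of steps 2 and 3 can be sketched in Python. This is a minimal illustration, not the chapter's notation: the branch-list trellis representation and the dicode example trellis below are assumptions, and the accumulated quantity is the squared Euclidean distance, so the best path metric is the minimum.

```python
# One Viterbi recursion (ADD, COMPARE, SELECT), accumulating squared
# Euclidean distance. A branch is a tuple
# (from_state, to_state, input_bit, noiseless_output) -- an assumed
# representation chosen for clarity.

def viterbi_step(survivors, branches, y):
    """Extend every survivor path by one branch for received sample y.

    survivors: dict mapping state -> (path_metric, input_bits_so_far)
    Returns the survivors at the next trellis depth.
    """
    extended = {}
    for frm, to, bit, out in branches:
        if frm not in survivors:
            continue  # state not yet reachable (head of the trellis)
        metric, bits = survivors[frm]
        cand = metric + (y - out) ** 2                      # ADD
        if to not in extended or cand < extended[to][0]:    # COMPARE
            extended[to] = (cand, bits + [bit])             # SELECT
    return extended

# Usage on a hypothetical dicode (1 - D) trellis whose state is the
# previous input bit; noiseless outputs are x_k - x_{k-1}.
branches = [(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 0, -1), (1, 1, 1, 0)]
survivors = {0: (0.0, [])}  # known initial state, metric zero
for y in (0.9, -0.8):
    survivors = viterbi_step(survivors, branches, y)
print(survivors[0])  # best path into state 0: (0.05, [1, 0])
```

Note how the two samples (0.9, −0.8) fit the branch labels 1, −1 far better than any other path, so the survivor into state 0 carries the input bits [1, 0] with a small accumulated distance.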

In the description of the algorithm we emphasized three characteristic Viterbi operations, add, compare, and select (ACS), that are performed in every recursion of the algorithm; today's specialized signal processors therefore have this operation embedded, optimizing its execution time. Consider now the amount of "processing" done at each depth l, where all q^M states of the trellis code are present. For each state it is necessary to compare the q paths that merge in that state, discard all but the best path, and then compute and send the metrics of its q successors to depth l + 1.

Consequently, the computational complexity of the VA increases exponentially with M. These operations can be easily parallelized, but then the number of parallel processors rises as the number of node computations decreases. The total time-space complexity of the algorithm is fixed and increases exponentially with the memory length.
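The growth can be made concrete by counting operations per trellis depth. The sketch below follows directly from the description above; the convention of q additions and q − 1 pairwise comparisons per state is an assumption about how one chooses to count.

```python
def acs_ops_per_depth(q, M):
    """Rough per-depth operation counts for the Viterbi recursion:
    q**M states, q merging paths per state, hence q additions (ADD)
    and q - 1 pairwise comparisons (COMPARE) per state."""
    states = q ** M
    return states, states * q, states * (q - 1)

# Binary input, memory M = 2 (a 4-state, PR4-style trellis):
print(acs_ops_per_depth(2, 2))  # (4, 8, 4)
# Doubling the memory squares the state count:
print(acs_ops_per_depth(2, 4))  # (16, 32, 16)
```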

The sliding window VA decodes infinite sequences with a delay of δ branches from the last received one. In order to minimize its memory requirements (δ + 1 trellis levels) while achieving a bit error rate only insignificantly higher than with the finite-sequence VA, δ is chosen as δ ≈ 4M. In this way, the Viterbi detector introduces a fixed decision delay.

Example<br />

Assume that a recorded channel input sequence x, consisting of L equally likely binary symbols from the alphabet {0, 1}, is "transmitted" over a PR4 channel. The channel is characterized by the trellis of Fig. 34.57, i.e., all admissible symbol sequences correspond to the paths traversing the trellis from l = 0 to l = L, with one symbol labeling each branch, Fig. 34.58. Suppose that the noisy sequence of samples at the channel output is y = 0.9, 0.2, −0.6, −0.3, 0.6, 0.9, 1.2, 0.3, … If we apply a simple symbol-by-symbol detector to this sequence, the fifth symbol will be erroneous due to the hard quantization rule for the noiseless channel output estimate

    ŷ_k = −1  if y_k < −0.5
    ŷ_k =  1  if y_k >  0.5
    ŷ_k =  0  otherwise

*It depends on whether the metric or the distance is accumulated.

© 2002 by CRC Press LLC
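Applying this hard quantization rule to the example sequence can be sketched directly. Whether each individual decision is correct depends on the noiseless sequence, which the example leaves unstated; as noted above, it is the fifth symbol that comes out wrong.

```python
def quantize(y_k):
    """Hard symbol-by-symbol estimate of the noiseless PR4 channel
    output, whose alphabet is {-1, 0, 1}."""
    if y_k < -0.5:
        return -1
    if y_k > 0.5:
        return 1
    return 0

y = [0.9, 0.2, -0.6, -0.3, 0.6, 0.9, 1.2, 0.3]
print([quantize(v) for v in y])  # [1, 0, -1, 0, 1, 1, 1, 0]
```

The fifth sample, 0.6, exceeds the 0.5 threshold and is quantized to 1; the memoryless detector has no way to exploit the trellis constraints that a Viterbi detector would use to reject this decision.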
