
LMS Algorithm for Coefficient Adjustment

We call $\{r_{dx}(k)\}$ the crosscorrelation between the desired output sequence $\{d(n)\}$ and the input sequence $\{x(n)\}$, and $\{r_{xx}(k)\}$ is the autocorrelation sequence of $\{x(n)\}$.

The sum of squared errors $E$ is a quadratic function of the FIR filter coefficients. Consequently, the minimization of $E$ with respect to the filter coefficients $\{h(k)\}$ results in a set of linear equations. By differentiating $E$ with respect to each of the filter coefficients, we obtain

$$\frac{\partial E}{\partial h(m)} = 0, \qquad 0 \le m \le N - 1 \tag{11.6}$$

and hence

$$\sum_{k=0}^{N-1} h(k)\, r_{xx}(k - m) = r_{dx}(m), \qquad 0 \le m \le N - 1 \tag{11.7}$$

This is the set of linear equations that yield the optimum filter coefficients. To solve the set of linear equations directly, we must first compute the autocorrelation sequence $\{r_{xx}(k)\}$ of the input signal and the crosscorrelation sequence $\{r_{dx}(k)\}$ between the desired sequence $\{d(n)\}$ and the input sequence $\{x(n)\}$.
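As a rough illustration of this direct approach, the following sketch estimates the two correlation sequences from the data, builds the matrix $r_{xx}(k-m)$, and solves the resulting linear system. The signal names x and d, the use of NumPy, and the time-average correlation estimates are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

def direct_fir(x, d, N):
    """Hypothetical sketch: solve (11.7) for the optimum N-tap FIR filter."""
    M = len(x)
    # Time-average estimates of r_xx(k) and r_dx(m) (an assumption; the
    # text leaves the estimation method unspecified).
    r_xx = np.array([np.dot(x[:M - k], x[k:]) for k in range(N)]) / M
    r_dx = np.array([np.dot(d[m:], x[:M - m]) for m in range(N)]) / M
    # r_xx(k - m) forms a symmetric Toeplitz matrix, since r_xx(-k) = r_xx(k).
    R = np.array([[r_xx[abs(k - m)] for k in range(N)] for m in range(N)])
    return np.linalg.solve(R, r_dx)
```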

The LMS algorithm provides an alternative computational method for determining the optimum filter coefficients $\{h(k)\}$ without explicitly computing the correlation sequences $\{r_{xx}(k)\}$ and $\{r_{dx}(k)\}$. The algorithm is basically a recursive gradient (steepest-descent) method that finds the minimum of $E$ and thus yields the set of optimum filter coefficients.

We begin with any arbitrary choice for the initial values of $\{h(k)\}$, say $\{h_0(k)\}$. For example, we may begin with $h_0(k) = 0$, $0 \le k \le N - 1$. Then after each new input sample $\{x(n)\}$ enters the adaptive FIR filter, we compute the corresponding output, say $\{y(n)\}$, form the error signal $e(n) = d(n) - y(n)$, and update the filter coefficients according to the equation

$$h_n(k) = h_{n-1}(k) + \Delta \cdot e(n) \cdot x(n - k), \qquad 0 \le k \le N - 1, \; n = 0, 1, \ldots \tag{11.8}$$

where $\Delta$ is called the step size parameter, $x(n - k)$ is the sample of the input signal located at the $k$th tap of the filter at time $n$, and $e(n)x(n - k)$ is an approximation (estimate) of the negative of the gradient for the $k$th filter coefficient. This is the LMS recursive algorithm for adjusting the filter coefficients adaptively so as to minimize the sum of squared errors $E$.
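A minimal sketch of this recursion, under the same illustrative assumptions as before (NumPy, signal names x and d, with step standing in for $\Delta$):

```python
import numpy as np

def lms(x, d, N, step):
    """Hypothetical sketch of the LMS recursion (11.8)."""
    h = np.zeros(N)                        # h_0(k) = 0, 0 <= k <= N-1
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        # taps[k] = x(n - k): the input samples held in the filter at time n.
        taps = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(N)])
        y[n] = np.dot(h, taps)             # filter output y(n)
        e[n] = d[n] - y[n]                 # error signal e(n) = d(n) - y(n)
        h = h + step * e[n] * taps         # coefficient update, eq. (11.8)
    return h, y, e
```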

The step size parameter $\Delta$ controls the rate of convergence of the algorithm to the optimum solution. A large value of $\Delta$ leads to large step size adjustments and thus to rapid convergence, while a small value of $\Delta$ results in slower convergence. However, if $\Delta$ is made too large the algorithm becomes unstable. To ensure stability, $\Delta$ must be chosen [22]
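As a hypothetical illustration of this trade-off, one can run the lms sketch above on a toy system-identification problem with two different step sizes and compare the residual error power; every name and value here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h_true = np.array([0.5, -0.3, 0.1, 0.05])      # invented "unknown" system
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))

for step in (0.05, 0.005):                     # a larger and a smaller Δ
    h, y, e = lms(x, d, N=4, step=step)
    # Mean squared error over the final samples as a rough convergence check.
    print(step, np.mean(e[-500:] ** 2))
```

The larger step size should settle within fewer samples, while the smaller one approaches a similar error floor more slowly, consistent with the behavior described above.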

