

$$\hat{\theta}_t = \hat{\theta}_{t-1} + K_t \varepsilon_t \qquad (2.276)$$

$$K_t = \frac{P_{t-1}\varphi_t}{\varphi_t^T P_{t-1}\varphi_t + C_{v_t}} \qquad (2.277)$$

$$P_t = \left(I - \frac{P_{t-1}\varphi_t\varphi_t^T}{\varphi_t^T P_{t-1}\varphi_t + C_{v_t}}\right) P_{t-1} + C_{w_t} \qquad (2.278)$$

$P_t$ is then a recursive estimate of the covariance $C_{\tilde{\theta}_t} + C_{w_t}$ and $K_t$ is called the Kalman gain. If we now choose (make the assumptions)

$$C_{v_t} = \lambda_t \qquad (2.279)$$

$$C_{w_t} = \left(\lambda_t^{-1} - 1\right)\left(I - \frac{P_{t-1}\varphi_t\varphi_t^T}{\varphi_t^T P_{t-1}\varphi_t + C_{v_t}}\right) P_{t-1} \qquad (2.280)$$

then the Kalman filter reduces to the so-called recursive least squares (RLS) algorithm. This leads to the conclusion that the RLS is optimal in the mean square sense if the assumptions (2.279) and (2.280) are valid. Otherwise the RLS only approximates the optimal recursive estimate. If $\lambda_t \equiv \lambda$ the method is called the forgetting factor RLS. The forgetting factor RLS is popular e.g. in time series modeling, since its performance can be tuned with the single parameter $\lambda$. Since $\lambda$ is usually tuned in RLS near to unity, the implicit assumption is that $C_{w_t}$ in the Kalman filter is "small", corresponding to slow variation of the model parameters.
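The reduction can be verified directly. Substituting (2.279) and (2.280) into the covariance recursion (2.278), the process noise term only rescales the first term, so that

$$P_t = \left(1 + \lambda_t^{-1} - 1\right)\left(I - \frac{P_{t-1}\varphi_t\varphi_t^T}{\varphi_t^T P_{t-1}\varphi_t + \lambda_t}\right) P_{t-1} = \lambda_t^{-1}\left(I - K_t\varphi_t^T\right) P_{t-1},$$

which is the familiar covariance update of the forgetting factor RLS algorithm, with the gain (2.277) evaluated at $C_{v_t} = \lambda_t$.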

With different choices of $C_{w_t}$, $C_{v_t}$ and $P_0$ several algorithms of the form

$$\hat{\theta}_t = \hat{\theta}_{t-1} + K_t\left(z_t - \varphi_t^T\hat{\theta}_{t-1}\right) \qquad (2.281)$$

$$\varepsilon_t = z_t - \varphi_t^T\hat{\theta}_{t-1} \qquad (2.282)$$

can be obtained. The most popular forms are the normalized least mean square (NLMS) algorithm

$$\hat{\theta}_t = \hat{\theta}_{t-1} + \frac{\mu\varphi_t}{\mu\varphi_t^T\varphi_t + 1}\,\varepsilon_t \qquad (2.283)$$

and the least mean square (LMS) algorithm

$$\hat{\theta}_t = \hat{\theta}_{t-1} + \mu\varphi_t\varepsilon_t \qquad (2.284)$$

where $\mu$ is the step size parameter that controls the convergence of the algorithm. The connection of the LMS with the steepest descent method is clearly visible. The connections of the RLS, NLMS and LMS algorithms to Kalman filtering are discussed in detail in [99]. The corresponding Kalman gain $K_t$ and momentary covariance estimate $P_t$ are summarized in Table 2.1. These connections are fundamental when the implicit assumptions of the recursive algorithms are compared.
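A minimal sketch of the common recursion (2.281) and (2.282) is given below, with the gain $K_t$ chosen according to the RLS, NLMS or LMS rule. The function name, parameter names and default values (`recursive_estimate`, `lam`, `mu`, `P0_scale`) are illustrative only and are not taken from the text above.

```python
import numpy as np

def recursive_estimate(z, Phi, method="rls", lam=0.98, mu=0.05, P0_scale=100.0):
    """Recursive estimation of the form (2.281)-(2.282):
    theta_t = theta_{t-1} + K_t * eps_t,  eps_t = z_t - phi_t^T theta_{t-1}.
    The gain K_t is chosen according to RLS, NLMS or LMS.
    (Illustrative sketch; names and defaults are not from the source text.)
    """
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = P0_scale * np.eye(n)           # initial covariance estimate P_0
    estimates = []
    for t in range(len(z)):
        phi = Phi[t]
        eps = z[t] - phi @ theta       # prediction error (2.282)
        if method == "rls":
            # forgetting-factor RLS: gain (2.277) with C_vt = lambda (2.279)
            K = P @ phi / (phi @ P @ phi + lam)
            # covariance update, equivalent to (2.278) with the choice (2.280)
            P = (P - np.outer(K, phi) @ P) / lam
        elif method == "nlms":
            # normalized LMS gain (2.283)
            K = mu * phi / (mu * phi @ phi + 1.0)
        else:
            # plain LMS gain (2.284)
            K = mu * phi
        theta = theta + K * eps        # update (2.281)/(2.276)
        estimates.append(theta.copy())
    return np.array(estimates)
```

For example, with regressors $\varphi_t = (z_{t-1}, z_{t-2})^T$ the same call tracks the coefficients of a second-order AR model; switching `method` changes only the gain $K_t$, in line with the summary in Table 2.1.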

The time-varying algorithms presented here are all in their generic forms. In many cases the effectiveness of the algorithm can be tuned with different matrix decompositions and scalings during the iteration. A review of adaptive algorithms can be found e.g. in [81]. Another common reference to recursive systems is [116]. The connection of the different computational forms of the adaptive algorithms
