
Adaptative high-gain extended Kalman filter and applications


tel-00559107, version 1 - 24 Jan 2011

2. the use of a variant of extended observers.

1.2 Motivations

A story is best told when started from the beginning. Thus we begin by introducing the Kalman filter in the linear case. A linear system is given by:

    dx(t)/dt = A x(t) + B u(t),
    y(t) = C x(t),

where A, B and C are matrices (having the appropriate dimensions) that may or may not depend on time. The two archetypal observers for such a system were proposed in the 1960s by D. G. Luenberger [86], and by R. E. Kalman and B. S. Bucy [75, 76]. They are known as the Luenberger observer and the Kalman-Bucy filter respectively.

The leading mechanism in both algorithms is the prediction-correction scheme. The new estimated state is obtained by means of:

− a prediction based on the model and the previous estimated state, and

− a correction of the predicted values by the measurement error, weighted by a gain matrix.

We denote the estimated state by z(t). The corresponding equation for the estimated state is:

    dz(t)/dt = A z(t) + B u(t) − K (C z(t) − y(t)).

In the Luenberger observer, the matrix K is computed once and for all. The real parts of all the eigenvalues of (A − KC) have to be strictly negative.
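The prediction-correction mechanism can be sketched numerically. A minimal example, in which the system matrices (a harmonic oscillator with measured position) and the gain K are illustrative choices of my own, not taken from the text; any K placing the eigenvalues of (A − KC) in the open left half-plane would do:

```python
import numpy as np

# Hypothetical 2-state example: a harmonic oscillator whose position is measured.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# A hand-picked gain; the only requirement is that every eigenvalue of
# (A - K C) has a strictly negative real part.
K = np.array([[3.0],
              [2.0]])
assert np.all(np.linalg.eigvals(A - K @ C).real < 0)

def simulate(x0, z0, u=0.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the plant dx/dt = Ax + Bu and the
    observer dz/dt = Az + Bu - K (Cz - y)."""
    x, z = x0.copy(), z0.copy()
    for _ in range(steps):
        y = C @ x                                        # measurement
        z = z + dt * (A @ z + B * u - K @ (C @ z - y))   # prediction + correction
        x = x + dt * (A @ x + B * u)                     # true plant
    return x, z

# Start the observer from the wrong initial state; the error decays to ~0.
x_true, z_est = simulate(np.array([[1.0], [0.0]]), np.zeros((2, 1)))
print(np.linalg.norm(x_true - z_est))
```

With K = 0 the observer degenerates into an open-loop copy of the model; it is the correction term that makes the initial-condition mismatch decay.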

In the Kalman filter, the gain matrix is defined as K(t) = S^{-1}(t) C' R^{-1}, where S is the solution of the differential equation:

    dS(t)/dt = −A' S(t) − S(t) A − S(t) Q S(t) + C' R^{-1} C.

This equation is a matrix Riccati equation and is referred to as the Riccati equation. The matrices A, B and C are expected to be time dependent (i.e. A(t), B(t) and C(t)); otherwise, the Kalman filter is equivalent to the Luenberger observer.
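The Riccati equation above can be integrated numerically to obtain the gain. A minimal sketch, reusing the same illustrative 2-state system as before (the matrices, the forward-Euler scheme, and the initial condition S(0) = I are assumptions of mine, not from the text):

```python
import numpy as np

# Hypothetical 2-state example (harmonic oscillator, position measured).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)            # weighting matrices, symmetric positive definite
R = np.array([[1.0]])
R_inv = np.linalg.inv(R)

def riccati_rhs(S):
    """Right-hand side of dS/dt = -A'S - SA - SQS + C'R^{-1}C."""
    return -A.T @ S - S @ A - S @ Q @ S + C.T @ R_inv @ C

# Forward-Euler integration of the Riccati equation from S(0) = I.
S = np.eye(2)
dt, steps = 1e-2, 20000
for _ in range(steps):
    S = S + dt * riccati_rhs(S)

# Gain K(t) = S^{-1}(t) C' R^{-1}; since the matrices here are constant,
# S settles at the steady state where the right-hand side vanishes.
K = np.linalg.inv(S) @ C.T @ R_inv
print(K.ravel())
```

With constant matrices the gain converges to a fixed value, which is exactly the sense in which the Kalman filter then reduces to a Luenberger observer.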

The Kalman filter is the solution to an optimization problem in which the matrices Q and R play the role of weighting coefficients (details can be found in [75]). These matrices must be symmetric positive definite. In this sense, the Kalman filter is an optimal solution to the observation problem. We will see below that the Kalman filter has a stochastic interpretation that gives meaning to the Q and R matrices.

These two observers are well-known algorithms whose convergence can be proven. However, when the matrices A, B and C are time dependent, the Kalman filter has to be used, since its gain matrix is constantly updated.

Observability in the linear case is characterized by a simple criterion. The loop-closure problem has an elegant solution called the separation principle: the controller and the observer can be designed independently, and the overall loop remains stable. Interesting and detailed expositions of the Kalman filter can be found in [43, 45, 58].

