Adaptative high-gain extended Kalman filter and applications

tel-00559107, version 1 - 24 Jan 2011

2.6 On Adaptive High-gain Observers

− whenever |y1 − z1| > δ, then θ = θ1 and the delay timer is reset,

− when |y1 − z1| < δ, we do not know whether it is an overshoot or not: θ = θ1 and the delay timer is started,

− when |y1 − z1| < δ and the delay timer is equal to Td, the estimation is satisfactory and θ = θ2.
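This delay-timer switching rule can be sketched in code as follows. This is a minimal illustration, not the thesis's implementation: the function name and the parameter values in the usage example are hypothetical, and δ, θ1, θ2 and Td are tuning parameters to be chosen for the system at hand.

```python
def make_gain_scheduler(delta, theta1, theta2, Td):
    """Return a stateful update rule: (innovation, dt) -> theta.

    Implements the three cases above: large output error resets the
    delay timer and keeps the high gain theta1; a small error starts
    (or continues) the timer; once the timer reaches Td the estimation
    is deemed satisfactory and the gain drops to theta2.
    """
    state = {"timer": None}  # None means the delay timer is not running

    def update(innovation, dt):
        if abs(innovation) > delta:
            # |y1 - z1| > delta: high gain, reset the delay timer.
            state["timer"] = None
            return theta1
        # |y1 - z1| < delta: could be convergence or a mere overshoot.
        if state["timer"] is None:
            state["timer"] = 0.0       # start the delay timer
        else:
            state["timer"] += dt       # keep counting
        if state["timer"] >= Td:
            return theta2              # error stayed small for Td: low gain
        return theta1                  # still waiting out a possible overshoot

    return update
```

Usage (hypothetical values): `sched = make_gain_scheduler(delta=0.1, theta1=10.0, theta2=1.0, Td=0.5)`, then call `theta = sched(y1 - z1, dt)` at every sampling instant. Because an overshoot re-crosses the threshold before Td elapses, it resets the timer and never triggers the gain change.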

The authors consider a control strategy that stabilizes the system provided that the full state is known. They include a fixed high-gain observer and consider the closed-loop system. They propose a set of assumptions under which the system-observer-controller ensemble is uniformly asymptotically stable 20 (see [11], Theorem 1). The last step is the demonstration that stability is preserved when the high gain of the observer is switched between two well-defined values (see [11], Theorem 2 and the example of Section 4).

The Luenberger observer does not have the same local properties as the extended Kalman filter, namely good filtering properties and analytically guaranteed convergence for small initial errors. We therefore expect a high-gain extended Kalman filter with a varying θ parameter to be more efficient with respect to the noise filtering issue.

Figure 2.2: Switching strategy to deal with peaking. [The figure plots y and z against time; overshoots cause no high-gain change, and the gain value changes only once the delay Td has elapsed.]

2.6.4 Adaptive Kalman Filters

We now consider the problem of the adaptation of the high-gain parameter of Kalman filters. Recall that the high-gain parameter is used to give the Q and R matrices a specific structure. Therefore, the adaptation of the high-gain parameter may be seen as the modification of those two matrices 21 . There is nonetheless a big difference when the high-gain structure is not considered: there is no proof of convergence of the extended Kalman filter when the estimated state does not lie in a neighborhood of the real state. The situation can be even worse: examples of systems for which the filter does not converge can be found in [100, 101].
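To make the "specific structure" concrete, the sketch below shows one common way θ enters the Q and R matrices of a high-gain extended Kalman filter, via the scaling matrix Δ = diag(1, θ⁻¹, …, θ⁻⁽ⁿ⁻¹⁾). This is the usual high-gain convention and an assumption here; the exact normalization used in the thesis may differ.

```python
import numpy as np

def high_gain_matrices(Q, R, theta):
    """Scale nominal Q and R by the high-gain parameter theta.

    Assumed convention: Delta = diag(1, 1/theta, ..., 1/theta^(n-1)),
    Q_theta = theta * Delta^-1 @ Q @ Delta^-1,  R_theta = R / theta.
    """
    n = Q.shape[0]
    Delta_inv = np.diag([theta ** i for i in range(n)])  # inverse of Delta
    Q_theta = theta * Delta_inv @ Q @ Delta_inv
    R_theta = R / theta
    return Q_theta, R_theta
```

Note that for θ = 1 the matrices reduce to the nominal Q and R, i.e. the standard extended Kalman filter, while large θ inflates the state covariance and shrinks the output covariance, pushing the filter toward high-gain behavior. Adapting θ therefore interpolates between the two regimes by modifying Q and R, as described above.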

The adaptation problem, when viewed as the adaptation of the Q and R matrices, has been the object of quite a few publications, both in linear and nonlinear cases. In the linear

20 The theorem demonstrates that all the trajectories are bounded (ultimately with time) and that the trajectory under output feedback (i.e. using the observer) is close to that of the state feedback (i.e. the controller with full state knowledge).

21 Recall that when the system is modeled using stochastic differential equations, Q and R represent the covariances of the state and output noise, respectively.
