Adaptative high-gain extended Kalman filter and applications


Definition 30

The adaptive high-gain extended Kalman filter is the system:

$$
\left\{
\begin{aligned}
\frac{dz}{dt} &= A(u)\,z + b(z, u) - S^{-1} C' R_{\theta}^{-1}\left(Cz - y(t)\right) \\
\frac{dS}{dt} &= -\left(A(u) + b^{*}(z, u)\right)' S - S\left(A(u) + b^{*}(z, u)\right) + C' R_{\theta}^{-1} C - S Q_{\theta} S \\
\frac{d\theta}{dt} &= F\!\left(\theta, I_d(t)\right)
\end{aligned}
\right.
\tag{3.3}
$$

The functions F and I_d will be defined later, in Section 3.3 and Lemma 42. The function I_d is called the innovation. The initial conditions are z(0) ∈ χ, S(0) symmetric positive definite, and θ(0) = 1.

The function F only has to satisfy certain requirements, stated precisely in Lemma 42. Therefore, several different choices for an adaptation function are possible.
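To make the structure of (3.3) concrete, here is a minimal numerical sketch of one integration step, assuming a simple forward-Euler discretization. Everything system-specific (the maps A and b, the term b*(z, u), the matrices C, Qθ and Rθ, the adaptation function F, and the current innovation value) is passed in as a placeholder; none of these names or discretization choices are prescribed by Definition 30.

```python
import numpy as np

def aekf_step(z, S, theta, u, y, dt, A, b, b_star, C, Q_theta, R_theta, F, I_d):
    """One forward-Euler step of system (3.3); a sketch, not a tuned solver."""
    Au = A(u)
    Astar = Au + b_star(z, u)               # A(u) + b*(z, u)
    R_inv = np.linalg.inv(R_theta(theta))   # R_theta^{-1}
    K = np.linalg.solve(S, C.T @ R_inv)     # correction gain S^{-1} C' R_theta^{-1}
    dz = Au @ z + b(z, u) - K @ (C @ z - y)
    dS = (-Astar.T @ S - S @ Astar
          + C.T @ R_inv @ C - S @ Q_theta(theta) @ S)
    dtheta = F(theta, I_d)
    # Numerical safeguard: the theory requires theta(t) >= 1 at all times.
    return z + dt * dz, S + dt * dS, max(theta + dt * dtheta, 1.0)
```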

Roughly speaking, F(θ, I_d(t)) should be such that if the estimate z(t) is far from x(t), then θ(t) increases (high-gain mode). Conversely, if z(t) is close to x(t), θ goes to 1 (Kalman filtering mode). As is clear from the proof of Theorem 36, this observer makes sense only when θ(t) ≥ 1 for all t ≥ 0; this is therefore another requirement that F(θ, I_d) has to meet.

Achieving this behavior requires that we evaluate the quality of the estimation. This is the object of the next section.
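The precise requirements on F appear in Lemma 42. As a purely illustrative sketch of a function exhibiting the behavior just described, one could use a threshold rule; the constants theta_max, gamma, lam_up and lam_down below are hypothetical tuning parameters, not values taken from the text.

```python
def F_example(theta, innovation, theta_max=50.0, gamma=1e-3,
              lam_up=10.0, lam_down=2.0):
    """Illustrative adaptation law: switch between high-gain and
    Kalman-filtering modes on an innovation threshold gamma."""
    if innovation > gamma:                   # estimate judged far from x(t)
        return lam_up * (theta_max - theta)  # drive theta up (high-gain mode)
    return lam_down * (1.0 - theta)          # let theta decay to 1 (Kalman mode)
```

With this shape, θ remains in [1, theta_max] along the flow, so the requirement θ(t) ≥ 1 is preserved.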

Remark 31

1. Readers familiar with high-gain observers may notice that the matrices Rθ and Qθ are not exactly the same as in earlier articles such as [38, 47, 57]. The definitions developed here can also be substituted into those previous works without consequence.

2. The hypothesis θ(0) = 1 may appear somewhat atypical compared to the results in [38, 57], for instance.

− For technical reasons, the result of Lemma 39 depends on the initial value of θ, and θ(0) = 1 has no impact on α and β.

− Secondly, note that in the case of large perturbations, θ will increase. In these instances, the initial value of θ is of little importance, since we show in Lemma 42 that the adaptation function can be chosen in such a way that θ reaches any large value in an arbitrarily small time.

− Finally, in the ideal case of no initial error, θ does not increase, which saves us from useless noise sensitivity due to a large initial value of the high gain.

3.3 Innovation

The innovation I_d is a measure of the quality of the estimation. It is different⁴ from the standard concept of innovation, which is based on a linearization around the estimated trajectory.

⁴ The same definition of innovation is used for moving-horizon observers, where the estimated state is the solution of a minimization problem. The cost function used there represents the proximity to the real state; it is the innovation we use ([12, 95]). In related publications, observability is defined by the inequality of Lemma 33; in our case, the inequality is a consequence of observability theory.
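Although the formal definition of I_d comes later, the footnote above suggests its flavor: a cost measuring the proximity between measured and predicted outputs over a window. As a hypothetical sketch, such a quantity could be approximated from sampled outputs as follows (the window handling and all names are illustrative, not the definition used in the text).

```python
import numpy as np

def windowed_innovation(y_meas, y_pred, dt):
    """Approximate an integral of |y(s) - y_hat(s)|^2 over a sampling window.

    y_meas, y_pred: arrays of measured and predicted outputs on the window,
    sampled every dt time units.
    """
    diff = np.asarray(y_meas) - np.asarray(y_pred)
    return float(np.sum(diff * diff) * dt)
```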

