Adaptative high-gain extended Kalman filter and applications

tel-00559107, version 1 - 24 Jan 2011

2.6 On Adaptive High-gain Observers

Those conditions, imposed on the vector fields $f_i$, appear to be very technical, but are in fact necessary in order to prove the convergence of the observer. Moreover, the importance of the function $\Gamma(u, y)$ must not be underestimated: it is central to the definition of the update law of the high-gain parameter.

Definition 20<br />

The high-gain observer with updated gain is defined by the set of equations:

$$
\left\{
\begin{array}{l}
\dot z = A(y)\,z + b(y, z_2, \ldots, z_n, u) + L\,\mathcal{L}\,A(y)\,K\,\dfrac{z_1 - y}{L^{b}} \\[8pt]
\dot L = L\left(\varphi_1(\varphi_2 - L) + \varphi_3\,\Gamma(u, y)\left(1 + \displaystyle\sum_{j=2}^{n} |z_j|^{v_j}\right)\right)
\end{array}
\right.
\qquad (2.15)
$$

where $\mathcal{L} = \operatorname{diag}\left(L^{b}, L^{b+1}, \ldots, L^{b+n-1}\right)$.
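The gain update law in (2.15) is an ordinary differential equation that, absent excitation from $\Gamma$, relaxes the scalar gain toward $\varphi_2$. A minimal numerical sketch of this behaviour (the $\varphi_i$ values and the forward-Euler scheme are illustrative assumptions, not taken from the thesis):

```python
# Hypothetical parameter values, chosen for illustration only.
phi1, phi2, phi3 = 1.0, 2.0, 0.5

def gain_rhs(L, Gamma):
    """Right-hand side of the gain update law in (2.15); the factor
    (1 + sum_j |z_j|^{v_j}) is folded into Gamma for this sketch."""
    return L * (phi1 * (phi2 - L) + phi3 * Gamma)

# Forward-Euler integration of the scalar gain L alone.
dt, T = 1e-3, 10.0
L = 5.0          # start with L(0) >= phi2, as Theorem 21 requires
for _ in range(int(T / dt)):
    Gamma = 0.0  # no excitation: L relaxes toward phi2
    L += dt * gain_rhs(L, Gamma)

print(round(L, 3))  # settles at the equilibrium L = phi2 = 2.0
```

With $\Gamma \neq 0$ the equilibrium shifts upward to $\varphi_2 + \varphi_3\Gamma/\varphi_1$, which is how the observer raises its gain when the nonlinearity is active.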

Theorem 21

Consider the system (2.13) and the associated observer (2.15). It is then possible to choose the parameters $\varphi_1$, $\varphi_2$, $\varphi_3$ (with $\varphi_2$, $\varphi_3$ high enough) such that for any $L(0) \geq \varphi_2$ the estimation error $e(t) = z(t) - x(t)$ is bounded as follows:

$$
\left|\mathcal{L}(t)^{-1} e(t)\right| \;\le\; \beta_1\!\left(\left|\mathcal{L}(0)^{-1} e(0)\right|,\, t\right) \;+\; \sup_{s \in [0,t[}\, \gamma_1\!\left(\left|\begin{pmatrix} \delta(s)/\varphi_2 \\[2pt] \alpha(y(s))\,\delta_y(s) \end{pmatrix}\right|\right)
$$

for all $t \in [0, T_u[$. Moreover $L$ satisfies the relation:

$$
L(t) \;\le\; 4\varphi_2 \;+\; \beta_2\!\left(\left|\begin{pmatrix} e(0) \\ L(0) \end{pmatrix}\right|,\, t\right) \;+\; \sup_{s \in [0,t[}\, \gamma_2\!\left(\left|\begin{pmatrix} \delta(s)/\varphi_2 \\[2pt] \alpha(y(s))\,\delta_y(s) \\[2pt] \Gamma(u(s), y(s)) \\[2pt] x(s) \end{pmatrix}\right|\right)
$$

where $\beta_1$ and $\beta_2$ are $\mathcal{KL}$ functions, and $\gamma_1$, $\gamma_2$ are functions of class $\mathcal{K}$.

This work uses a special form of the phase variable representation (2.2) of J-P. Gauthier et al. and therefore applies to systems that are observable in the sense of Section 2.2 above. The observer (2.15) has roughly the same structure as a Luenberger high-gain observer, except that the correction gain is given as a function of the output error. The update is driven by a function $\Gamma(u, y)$ that bounds the incremental rate of the vector field $b(y, x, u)$ (the term $\varphi_3\,\Gamma(u, y)\big(1 + \sum_{j=2}^{n} |z_j|^{v_j}\big)$ in (2.15)).

The strategy adopted here is not based on a globally Lipschitz vector field, which would imply an off-line search for, and tuning of, the upper bound (or the value) of the high-gain parameter. Instead, the observer tunes itself through the adaptation function. However, the function that drives the adaptation may not be easy to find.

The idea is quite similar to that of E. Bullinger and F. Allgöwer [35], the difference being that in their case the adaptation is driven by the output error, while in the case of L. Praly et al. the adaptation is model dependent.
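To make the structure concrete, here is a toy sketch of a high-gain observer whose scalar gain follows the update law of (2.15), with the adaptation driven by $\Gamma(u, y)$ rather than by the output error. Everything in it is an illustrative assumption, not taken from the thesis or from [16]: the pendulum-like example system, the gains $k_1$, $k_2$, the $\varphi_i$, and the constant bound $\Gamma = 1$ (valid since $b = -\sin(\cdot)$ has incremental rate at most 1).

```python
import numpy as np

# Toy system in phase-variable form: x1' = x2, x2' = -sin(x1), y = x1.
phi1, phi2, phi3 = 1.0, 3.0, 0.1
k1, k2 = 2.0, 1.0          # K chosen so the error polynomial is (s + L)^2

dt, T = 1e-3, 10.0
x = np.array([1.0, 0.0])   # true state
z = np.array([0.0, 0.0])   # observer state
L = 5.0                    # L(0) >= phi2, as Theorem 21 requires

for _ in range(int(T / dt)):
    y = x[0]
    Gamma = 1.0            # bound on the incremental rate of b = -sin(.)
    # True system (explicit Euler)
    x = x + dt * np.array([x[1], -np.sin(x[0])])
    # Observer: copy of the dynamics plus gain-scaled output injection
    innov = y - z[0]
    z = z + dt * np.array([z[1] + L * k1 * innov,
                           -np.sin(y) + L**2 * k2 * innov])
    # Gain adaptation, driven by Gamma(u, y), not by the output error
    L = L + dt * L * (phi1 * (phi2 - L) + phi3 * Gamma)

# Error converges; L settles near phi2 + phi3 * Gamma / phi1 = 3.1
print(np.abs(z - x).max() < 1e-3, round(L, 2))
```

Since $\Gamma$ is constant here, the gain simply settles at $\varphi_2 + \varphi_3\Gamma/\varphi_1$; the adaptive machinery pays off when $\Gamma(u, y)$ varies and the gain must rise only while the nonlinearity is active.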

Theorem 21 states^17 that the observer (2.15), together with its adaptation function, gives, at least for bounded solutions, an estimation error converging to a ball centered at the origin whose radius depends on the asymptotic $L^\infty$-norm of the disturbances $\delta$ and $\delta_y$. This

^17 The precise and complete theorem appears in the article [16], Theorem 1.

