Adaptative high-gain extended Kalman filter and applications

tel-00559107, version 1 - 24 Jan 2011

3.3 Innovation

Since $\|B(t)\| \le L_b$, each $b_{i,j}(t)$ can be interpreted as a bounded element of $L^\infty_{[0,d]}(\mathbb{R})$. We identify $\left(L^\infty_{[0,d]}(\mathbb{R})\right)^{\frac{n(n+1)}{2}}$ with $L^\infty_{[0,d]}\left(\mathbb{R}^{\frac{n(n+1)}{2}}\right)$ and consider the function:
\[
\begin{array}{rcl}
\Lambda :\ L^\infty_{[0,d]}\left(\mathbb{R}^{\frac{n(n+1)}{2}}\right) \times L^\infty_{[0,d]}\left(\mathbb{R}^{n_u}\right) & \longrightarrow & \mathbb{R}^+ \\[4pt]
\left(\left(b_{i,j}\right)_{j \le i \in \{1,\dots,n\}},\, u\right) & \longmapsto & \lambda_{\min}(G_d)
\end{array}
\]
where $\lambda_{\min}(G_d)$ is the smallest eigenvalue of $G_d$. Let us endow $L^\infty_{[0,d]}\left(\mathbb{R}^{\frac{n(n+1)}{2}}\right) \times L^\infty_{[0,d]}\left(\mathbb{R}^{n_u}\right)$ with the weak-$*$ topology$^6$, and $\mathbb{R}$ with the topology induced by uniform convergence. The weak-$*$ topology on a bounded set implies uniform continuity of the resolvent; hence $\Lambda$ is continuous$^7$.
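As a numerical illustration (the two-state chain system below is a hypothetical toy example, not a system from this thesis), the smallest Gramian eigenvalue $\lambda_{\min}(G_d)$ can be sketched as follows:

```python
import numpy as np

# Toy sketch: smallest eigenvalue of the observability Gramian
#   G_d = int_0^d Psi(tau)' C' C Psi(tau) dtau
# for a two-state chain x' = A x, y = C x.  Here A is nilpotent
# (A @ A = 0), so the transition matrix is exactly Psi(t) = I + A t.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
Psi = lambda t: np.eye(2) + A * t

d, N = 1.0, 2001
taus = np.linspace(0.0, d, N)
dt = d / (N - 1)

# Riemann-sum quadrature of the Gramian integral
G_d = sum(Psi(t).T @ C.T @ C @ Psi(t) for t in taus) * dt

lam_min = np.linalg.eigvalsh(G_d).min()
print(lam_min > 0.0)  # (A, C) is observable, so G_d is positive definite
```

A strictly positive $\lambda_{\min}(G_d)$ is exactly what the compactness argument below guarantees uniformly over all admissible $B(\cdot)$ and $u(\cdot)$.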

Since the control variables are supposed to be bounded,
\[
\Omega_1 = \left\{ B \in L^\infty_{[0,d]}\left(\mathbb{R}^{\frac{n(n+1)}{2}}\right) ;\ \|B\| \le L_b \right\}
\]
and
\[
\Omega_2 = \left\{ u \in L^\infty_{[0,d]}\left(\mathbb{R}^{n_u}\right) ;\ \|u\| \le M_u \right\}
\]
are compact subsets. Therefore $\Lambda(\Omega_1 \times \Omega_2)$ is a compact subset of $\mathbb{R}$ which does not contain $0$, since the system is observable for any input. Thus $G_d$ is never singular. Moreover, for $M_u$ sufficiently large, $\left\{ u \in L^\infty_{[0,d]}\left(\mathbb{R}^{n_u}\right) ;\ \|u\| \le M_u \right\}$ includes $L^\infty_{[0,d]}(U_{adm})$. Hence, there exists $\lambda^0_d$ such that $G_d \ge \lambda^0_d\,\mathrm{Id}$ for any $u$ and any matrix $B(t)$ as above.

Since
\[
y\left(0, x^0_1, \tau\right) - y\left(0, x^0_2, \tau\right) = C\Psi(\tau)\, x^0_1 - C\Psi(\tau)\, x^0_2,
\]
then
\[
\left\| y\left(0, x^0_1, \tau\right) - y\left(0, x^0_2, \tau\right) \right\|^2 = \left\| C\Psi(\tau)\, x^0_1 - C\Psi(\tau)\, x^0_2 \right\|^2_2,
\]
and finally
\begin{equation}
\int_0^d \left\| y\left(0, x^0_1, \tau\right) - y\left(0, x^0_2, \tau\right) \right\|^2 d\tau
= \left(x^0_1 - x^0_2\right)' G_d \left(x^0_1 - x^0_2\right)
\ge \lambda^0_d \left\| x^0_1 - x^0_2 \right\|^2_2.
\tag{3.6}
\end{equation}
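The equality and lower bound in (3.6) can be checked numerically on a toy system (a hypothetical two-state chain with nilpotent dynamics; all names and values below are illustrative only):

```python
import numpy as np

# Toy check of (3.6): for x' = A x, y = C x with nilpotent A,
# the transition matrix is exactly Psi(t) = I + A t.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
Psi = lambda t: np.eye(2) + A * t

d, N = 1.0, 2001
taus = np.linspace(0.0, d, N)
dt = d / (N - 1)

# Gramian G_d = int_0^d Psi' C'C Psi dtau (Riemann sum)
G_d = sum(Psi(t).T @ C.T @ C @ Psi(t) for t in taus) * dt
lam0 = np.linalg.eigvalsh(G_d).min()

x1 = np.array([1.0, -0.5])   # two arbitrary initial conditions
x2 = np.array([0.2, 0.3])

# Left-hand side of (3.6): integrated squared output difference
lhs = sum(np.linalg.norm(C @ Psi(t) @ (x1 - x2)) ** 2 for t in taus) * dt
# Right-hand side: quadratic form in the Gramian, and its eigen-bound
rhs = (x1 - x2) @ G_d @ (x1 - x2)

print(np.isclose(lhs, rhs))                            # equality in (3.6)
print(rhs >= lam0 * np.linalg.norm(x1 - x2) ** 2)      # lower bound in (3.6)
```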

Remark 35
As is clear from the proof, we could have used both linearizations, along the trajectories $x_1$ and $x_2$, in order to define $I$. However, that definition would lead to the same inequality. In addition, our definition is more practical to implement.

The solution of the Riccati equation also cannot be used to obtain information equivalent to the innovation. Here, to compute our innovation, we use an exact prediction (i.e., without the correction term, which could disturb the estimation).
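A minimal sketch of this open-loop construction follows: from the current estimate, propagate the model without any correction term over the window and integrate the squared output-prediction error. The toy linear model and all function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def innovation(y_meas, z0, step_model, h, dt):
    """Sketch of an innovation over a window: y_meas are the measured
    outputs (shape (N, ny)), z0 the estimate at the window start,
    step_model one integration step of the UNCORRECTED model dynamics,
    h the output map, dt the sample period."""
    z, acc = np.array(z0, dtype=float), 0.0
    for y in y_meas:
        err = y - h(z)
        acc += float(err @ err) * dt   # Riemann sum of ||y - y_hat||^2
        z = step_model(z, dt)          # pure prediction, no correction
    return acc

# Toy linear model x' = A x, y = C x (hypothetical example system)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
step = lambda z, dt: z + dt * (A @ z)  # explicit Euler step
h = lambda z: C @ z

# Generate "measurements" from a known true initial state
dt, N = 1e-3, 1000
x_true = np.array([1.0, 0.0])
ys, x = [], x_true.copy()
for _ in range(N):
    ys.append(h(x))
    x = step(x, dt)
ys = np.array(ys)

# A perfect estimate yields zero innovation; a wrong one, a positive value
print(innovation(ys, x_true, step, h, dt) < 1e-9)
print(innovation(ys, np.array([0.0, 0.0]), step, h, dt) > 0.0)
```

In the adaptive scheme, such a quantity serves as a measure of estimation quality: by (3.6) it is bounded below by $\lambda^0_d$ times the squared estimation error, so a large innovation signals a poor estimate.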

$^6$ The definition of the weak-$*$ topology is given in Appendix A.
$^7$ This property is explained in Appendix A.

