

with $\Delta = \mathrm{diag}\left\{1, \frac{1}{\theta}, \ldots, \frac{1}{\theta^{n-1}}\right\}$,


− the innovation $I_{k,d}$ ($d \in \mathbb{N}^*$) is computed over the time window $[(k-d)\delta t;\, k\delta t]$ (see the sketch below), with:

– $y_{k-i}$ the measurement at epoch $k-i$, and

– $\hat{y}_{k-i}$ the output of system (5.21) at epoch $k-i$, with $z((k-d)\delta t)$ as initial value at epoch $k-d$.
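
The precise expression of $I_{k,d}$ is not reproduced in this excerpt; the sketch below assumes the common windowed form, a sum of squared distances between measured and re-predicted outputs over the last $d+1$ epochs. The function name and the squared-norm choice are illustrative assumptions, not the thesis's definition.

```python
import numpy as np

def innovation(y_window, y_hat_window):
    """Assumed windowed innovation I_{k,d}: sum of squared output
    prediction errors over the epochs k-d, ..., k.

    y_window     : measurements y_{k-d}, ..., y_k
    y_hat_window : outputs of system (5.21) re-integrated from the
                   estimate z((k-d)*dt), i.e. y_hat_{k-d}, ..., y_hat_k
    """
    diffs = np.asarray(y_window, dtype=float) - np.asarray(y_hat_window, dtype=float)
    return float(np.sum(diffs ** 2))
```

Computing $\hat{y}_{k-i}$ requires re-integrating system (5.21) from the estimate $z((k-d)\delta t)$, so the observer must keep a buffer of the last $d+1$ measurements together with the corresponding past estimate.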

Remark 61<br />

In [24] we presented a different version of this observer. In that paper, the adaptation of the high-gain parameter was driven by a differential equation during the prediction steps. We changed this strategy to account for the particular way in which the continuous-discrete version of the observer operates.

Recall that the estimation is a sequence of continuous prediction periods followed by discrete correction steps performed whenever a new measurement is available. Consequently, since the innovation is based on the available measurements, it is computed at the correction steps.

Suppose now that $\theta$ is adapted via a differential equation. It starts to change during the prediction step that follows the computation of a large innovation value, and reaches $\theta_1$ only after some time. If instead $\theta$ is adapted at the end of the correction step, directly after the innovation is computed, then it reaches $\theta_1$ much faster.

We therefore opt for the strategy in which $\theta$ is adapted at the end of the correction step.
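
To make the timing argument concrete, here is a minimal sketch of one prediction/correction cycle in which $\theta$ is updated at the end of the correction step. The routines `predict`, `correct`, `reintegrate_outputs` and the adaptation function `F` are placeholders standing in for the observer's actual equations; their names and signatures are assumptions made for illustration.

```python
import numpy as np

def observer_step(z, S, theta, z_past, y_buffer, y_new, dt, d, F,
                  predict, correct, reintegrate_outputs):
    """One continuous-discrete cycle (sketch with placeholder routines).

    predict             : integrates the observer ODEs over one sampling period
    correct             : discrete correction using the new measurement y_new
    reintegrate_outputs : recomputes y_hat_{k-d}, ..., y_hat_k from the stored
                          estimate z_past = z((k-d)*dt)
    F                   : adaptation function, theta <- F(theta, I_{k,d})
    """
    # Continuous prediction between two measurement epochs.
    z, S = predict(z, S, theta, dt)

    # Discrete correction when the new measurement becomes available.
    z, S = correct(z, S, theta, y_new)

    # Innovation over the sliding window of the last d+1 measurements
    # (assumed sum-of-squares form, as in the previous sketch).
    y_buffer = (y_buffer + [y_new])[-(d + 1):]
    y_hat = reintegrate_outputs(z_past, len(y_buffer), dt)
    I_kd = float(np.sum((np.asarray(y_buffer) - np.asarray(y_hat)) ** 2))

    # theta is adapted at the end of the correction step, directly after
    # the innovation has been computed.
    theta = F(theta, I_kd)
    return z, S, theta, y_buffer
```

With this ordering, a large innovation raises $\theta$ immediately, before the next prediction period starts, which is exactly the advantage discussed above over adapting $\theta$ through a differential equation during prediction.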

An advantage of this approach is that we may now remove one of the assumptions on the adaptation function, namely: "there exists $M$ such that $\left|\frac{F}{\theta^2}\right| < M$".

For any … > 0 and any $\varepsilon^* > 0$, there exist

− two real constants $\mu$ and $\theta_1$,

− $d \ge n-1$, $d \in \mathbb{N}^*$, and

− an adaptation function $F(\theta, I)$,

such that for all $\delta t$ sufficiently small (i.e. $2\theta_1\delta t$ …)
