
Adaptative high-gain extended Kalman filter and applications


tel-00559107, version 1 - 24 Jan 2011

Chapter 6

Conclusion and Perspectives

The work described in this thesis deals with the design of an observer of the Kalman type for nonlinear systems.

More precisely, we considered the high-gain formalism and proposed an improvement of the high-gain extended Kalman filter in the form of an adaptive scheme for the parameter at the heart of the method. Indeed, although the high-gain approach allows us to analytically prove the convergence of the algorithm in the deterministic setting, it comes with an increased sensitivity to measurement noise. We propose to let the observer evolve between two endpoint configurations: one that rejects noise, and one that makes the estimate converge toward the real trajectory. The strategy we developed here allowed us to prove this convergence analytically.
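The interpolation between the two endpoint configurations can be pictured through the diagonal scaling matrix of the standard high-gain formalism. The sketch below is our own Python illustration under that standard construction, not the thesis's exact equations; the function name and signature are assumptions:

```python
import numpy as np

def high_gain_scaling(theta, n):
    """Diagonal scaling matrix of the high-gain formalism:
    diag(1, 1/theta, ..., 1/theta**(n - 1)) for a state of dimension n.

    theta = 1 recovers the plain EKF configuration (noise rejection);
    large theta yields the high-gain EKF (fast convergence).
    """
    return np.diag([theta ** (-i) for i in range(n)])

# At theta = 1 the scaling is the identity, so the two endpoint
# configurations correspond to the EKF and the high-gain EKF.
```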

Observability theory constitutes the framework of the present study. Thus, we began this thesis by providing a review of, and some insight into, the main results of the theory of [57]. We also provided a review of similar adaptive strategies. In this introduction and background review, we stated that the main concern of the thesis would be theoretically proving that the observer is convergent.

The observer has been described in Chapters 3 and 5. It was initially described in the continuous setting, and afterwards extended to the continuous-discrete setting. The adaptive strategy was also explained in those chapters. This strategy is composed of two elements:

1. a measurement of the quality of the estimation, and

2. an adaptation equation.

The quality measurement is called the innovation, or innovation over a horizon of length d. It differs slightly from the usual concept of innovation. The major improvement provided by our definition is a proof that the innovation places an upper bound on the past estimation error (the delay being equal to the parameter d mentioned above). This fact is a cornerstone of the overall convergence proof.
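As a rough discrete-time illustration (a sketch under our own assumptions, not the thesis's continuous-time definition), the innovation over a horizon of length d can be computed as a quadrature of the squared output residual over the last d seconds, where the re-predicted output is obtained by integrating the model from the estimate taken d seconds in the past (that re-prediction helper is hypothetical and not shown):

```python
import numpy as np

def innovation(y_meas, y_repred, d, dt):
    """Innovation over a horizon of length d (discrete sketch).

    y_meas   : measured outputs, one sample per time step dt
    y_repred : outputs re-predicted by integrating the model from the
               estimate taken d seconds in the past (helper not shown)
    """
    n = int(round(d / dt))             # samples covered by the horizon
    r = np.asarray(y_meas[-n:]) - np.asarray(y_repred[-n:])
    return dt * float(np.sum(r ** 2))  # rectangle-rule quadrature
```

A large innovation signals that the estimate of d seconds ago was poor, which is the upper-bound property used in the convergence proof.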

The second element of the strategy is the adaptation equation that drives the high-gain parameter. A differential equation was used in the continuous setting, and a function in the continuous-discrete setting (i.e., the adaptation is performed at the end of the update procedure). The sets of requirements for those two settings were proposed in such a way that several adaptation functions can be conceived. The set of admissible adaptation functions is not empty, as we demonstrated by exhibiting an eligible function.
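As one hypothetical example of such a function in the continuous-discrete setting (our own illustration of the qualitative requirements, not the eligible function exhibited in the thesis; all parameter names and values are assumptions), the high-gain parameter can be pushed toward its maximum when the innovation crosses a threshold, and relaxed toward 1 otherwise:

```python
def adapt_gain(theta, innov, gamma=0.5, theta_max=10.0, threshold=1e-3):
    """One adaptation step, applied at the end of the update procedure.

    gamma     : relaxation rate in (0, 1]                (assumed value)
    theta_max : upper bound on the high-gain parameter   (assumed value)
    threshold : innovation level that triggers high-gain mode
    """
    if innov > threshold:
        # large innovation: past estimate was poor, so move toward the
        # high-gain configuration to force convergence
        return theta + gamma * (theta_max - theta)
    # small innovation: relax toward theta = 1 to reject noise
    return theta + gamma * (1.0 - theta)
```

Driving theta with the innovation in this way realizes the evolution between the two endpoint configurations described above: the filter behaves like a high-gain EKF when the estimate is poor and like a plain EKF when it is good.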

