
The full proof of convergence has been developed in the single-output continuous setting, and afterwards extended to the multiple-output and continuous-discrete settings.

A second major concern of this work was the applicability of the observer. We therefore extensively described its implementation on a single-output system: a series-connected DC machine. The time constraints were investigated via experiments performed using a real motor in a hard real-time environment. The testbed was described in Chapter 4 and the compatibility with real-time constraints assessed.

We conclude this work with some ideas for future investigations.

The Luenberger Case

It is much more direct to prove the convergence of a high-gain Luenberger observer, because of the absence of the Riccati equation. However, this absence also prevents us from providing a local convergence result when θ = 1. The adaptation strategy therefore has to differ from the one used here (cf. [11]), or be designed for a specific class of nonlinear systems (see for example [19]).
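As a point of comparison only, the sketch below shows the textbook form of a high-gain Luenberger observer for a two-state system in observability canonical form; the dynamics, gains and numerical values are hypothetical and this is not the construction used in this thesis. The correction gain is a constant vector scaled by powers of θ, so no Riccati equation has to be integrated along with the estimate.

```python
# Illustrative sketch (standard textbook form, hypothetical values):
# high-gain Luenberger observer for x1' = x2, x2' = phi(x, u), y = x1.
# The gain (theta*k1, theta^2*k2) is fixed; no Riccati equation is solved.
import numpy as np

def phi(x, u):
    return -np.sin(x[0]) + u          # hypothetical nonlinearity

def run(theta=5.0, dt=1e-3, T=5.0, u=0.0, k=(2.0, 1.0)):
    x = np.array([1.0, 0.0])          # true state
    xh = np.zeros(2)                  # observer state
    for _ in range(int(T / dt)):
        e = x[0] - xh[0]              # output injection term, y = x1
        x = x + dt * np.array([x[1], phi(x, u)])
        xh = xh + dt * np.array([xh[1] + theta * k[0] * e,
                                 phi(xh, u) + theta**2 * k[1] * e])
    return np.abs(x - xh)

print(run())                          # estimation error after T seconds
```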

Automatic code generation

The implementation procedure for this algorithm is now well known. It can be roughly classified into two parts: 1) coding specific to the model, and 2) coding related to the observer mechanisms. It would be interesting to create a utility that automatically generates the code of the observer once the model has been provided. This would save development time and avoid implementation errors caused by typos.
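As a rough illustration of what such a utility could look like (an assumption on our part, not part of the present implementation), a symbolic tool such as SymPy can emit the model-specific expressions, namely the vector field, the output map and their Jacobians, from a user-supplied model; the generic observer mechanics would then be written once and for all. All function names below are hypothetical.

```python
# Hypothetical sketch: emit the model-specific expressions (vector field,
# output map and their Jacobians) needed by an extended Kalman observer
# from a symbolic model description.
import sympy as sp

def generate_model_code(states, f_exprs, h_exprs):
    """Return C-like assignments for f, h, A = df/dx and C = dh/dx."""
    x = sp.Matrix(states)
    f, h = sp.Matrix(f_exprs), sp.Matrix(h_exprs)
    A, C = f.jacobian(x), h.jacobian(x)   # Jacobians used by the Riccati equation
    lines = []
    for name, vec in (("f", f), ("h", h)):
        lines += [f"{name}[{i}] = {sp.ccode(vec[i])};" for i in range(vec.rows)]
    for name, mat in (("A", A), ("C", C)):
        lines += [f"{name}[{i}][{j}] = {sp.ccode(mat[i, j])};"
                  for i in range(mat.rows) for j in range(mat.cols)]
    return "\n".join(lines)

# Toy model (hypothetical): x1' = x2, x2' = -sin(x1) + u, y = x1
x1, x2, u = sp.symbols("x1 x2 u")
print(generate_model_code([x1, x2], [x2, -sp.sin(x1) + u], [x1]))
```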

Dynamic output stabilization

Dynamic output stabilization is considered in the second part of [57]. An extension of this work to a closed loop containing an adaptive observer is a natural development. This may be accomplished because the observer presented here is an exponential observer. Since θ is allowed to increase when convergence is not achieved, we can expect to deliver a good estimate to the control algorithm. The ability to switch quickly between modes will be important.

Cascaded systems

Let us consider an observable nonlinear cascaded system of the form:

    ẋ = f(x, u),
    ξ̇ = g(x, ξ),
    y = h(x, ξ).

One could imagine a situation where the state variable x is well known or estimated, but not the variable ξ. Does the θ parameter of the observer really need to be high for the part of the estimation that corresponds to x? We could consider a high-gain observer with two varying high-gain parameters.
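As a purely illustrative sketch of this idea (all dynamics and numerical values are hypothetical, and the constant output-injection gains merely stand in for those an actual Kalman or Luenberger design would provide), one can simulate such a cascade with a block-wise scaled correction: a small θ₁ on the x-block and a larger θ₂ on the ξ-block.

```python
# Purely illustrative sketch (not from the thesis): a toy cascade
#   x' = f(x, u),  xi' = g(x, xi),  y = h(x, xi)
# observed by a copy of the dynamics whose output correction is scaled
# block-wise by two separate gains, theta1 (x-block) and theta2 (xi-block).

def f(x, u):   return -x + u          # hypothetical x-dynamics
def g(x, xi):  return x - 2.0 * xi    # hypothetical coupled xi-dynamics
def h(x, xi):  return x + xi          # scalar output

def simulate(theta1, theta2, dt=1e-3, T=5.0, u=1.0):
    x, xi = 1.0, -1.0                 # true state
    xh, xih = 0.0, 0.0                # observer state
    for _ in range(int(T / dt)):
        e = h(x, xi) - h(xh, xih)     # output error used for correction
        x_n   = x   + dt * f(x, u)
        xi_n  = xi  + dt * g(x, xi)
        xh_n  = xh  + dt * (f(xh, u)    + theta1 * e)
        xih_n = xih + dt * (g(xh, xih)  + theta2 * e)
        x, xi, xh, xih = x_n, xi_n, xh_n, xih_n
    return abs(x - xh), abs(xi - xih)

# If x is already well estimated, a small theta1 with a larger theta2 may suffice.
print(simulate(theta1=1.0, theta2=10.0))
```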

Unscented Kalman filter

The unscented Kalman filter is a derivative-free nonlinear observer that has received a lot of attention recently [71]. This observer is based on the unscented transformation.
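For reference, here is a standard textbook sketch of the unscented transformation (not taken from [71] or from this work, and with commonly used but assumed parameter values): a mean/covariance pair is propagated through a nonlinearity via a deterministically chosen set of sigma points rather than via a Jacobian.

```python
# Standard unscented transformation (illustrative sketch, assumed defaults):
# propagate a mean and covariance through a nonlinearity using 2n+1 sigma points.
import numpy as np

def unscented_transform(m, P, func, alpha=1.0, beta=2.0, kappa=0.0):
    n = m.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root
    # Sigma points: the mean plus symmetric spreads along each column of S
    sigmas = np.vstack([m, m + S.T, m - S.T])    # shape (2n+1, n)
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([func(s) for s in sigmas])      # propagate each sigma point
    mean = Wm @ Y
    diff = Y - mean
    cov = (Wc[:, None] * diff).T @ diff          # weighted sample covariance
    return mean, cov

# Toy usage: push a Gaussian through a mild nonlinearity
m = np.array([1.0, 0.5])
P = np.diag([0.1, 0.2])
print(unscented_transform(m, P, lambda x: np.array([np.sin(x[0]), x[0] * x[1]])))
```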

