

unknowns (i.e. Laf1 = Laf2), the model becomes¹⁹:

\[
\begin{pmatrix} V \\ 0 \end{pmatrix}
=
\begin{pmatrix}
I & I\,\omega_r & 0 & 0 \\
0 & I^2 & \omega_r & \omega_r^2
\end{pmatrix}
\begin{pmatrix} R \\ L_{af} \\ B \\ p \end{pmatrix}.
\]



From a series of experiments we collected enough data to constitute three sets of values, (V1, ..., VN), (I1, ..., IN) and ((ωr)1, ..., (ωr)N), and found a mean least-squares solution to the equation above (a numerical sketch of this step is given after this list).

2. At this stage a nonlinear optimization routine was used²⁰ (also sketched after this list). The solutions found above, together with L = J = 1 and Laf = Laf1 = Laf2, were taken as the initial values of the optimization routine. The cost function to minimize is the distance between the measured data and the predicted trajectory:

\[
K(L, R, L_{af_1}, J, L_{af_2}, B, p) \mapsto
\alpha_1 \int_0^{T^*} \bigl\| I(v) - \tilde{I}(v) \bigr\|^2 \, dv
+ \alpha_2 \int_0^{T^*} \bigl\| \omega_r(v) - \tilde{\omega}_r(v) \bigr\|^2 \, dv,
\]

where

− I and ωr are measured data that excite the dynamical modes of the process (in a pseudo-random binary sequence manner [85]),
− Ĩ and ω̃r are the predicted trajectories obtained from the model above using the measured input variables, with Ĩ(0) = I(0) and ω̃r(0) = ωr(0),
− α1 and α2 are weighting factors that may be used to compensate for the difference of scale between the current and voltage amplitudes.

3. The situation may arise where the solution found by the optimization routine is only a local minimum. In this case the resulting set of parameters does not match the experimental data, even though the search algorithm has stopped. A remedy is to slightly modify the output of the algorithm and relaunch the search.

Because of the large number of parameters and the difficulty of analyzing the cost function, this re-initialization is rather venturesome: it is difficult to determine which parameters should be modified, and how. We propose to compare the measured data (I and ωr) to the predictions (Ĩ and ω̃r) obtained with the initial set of parameters, and to find p1 (resp. p2) such that p1 I ≈ Ĩ (resp. p2 ωr ≈ ω̃r). We then reuse the initial model to determine how to modify the parameters, e.g.

\[
\begin{aligned}
L \,\dot{\tilde{I}} &= V - R\,\tilde{I} - L_{af_1}\,\tilde{I}\,\tilde{\omega}_r \\
L \,(p_1 \dot{I}) &= V - R\,(p_1 I) - L_{af_1}\,(p_1 I)(p_2 \omega_r) \\
(p_1 L)\,\dot{I} &= V - (R\,p_1)\,I - (L_{af_1}\,p_1 p_2)\,I\,\omega_r.
\end{aligned}
\]

Although nonstandard, this method gives us new initial values for a new optimization search (see the last sketch after this list). In practice it produced excellent results.
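The three numbered steps above can be illustrated with short numerical sketches; the variable and function names used below (V, I, wr, steady_state_least_squares, and so on) are hypothetical and not part of the original procedure. A minimal sketch of step 1, assuming the N measured samples are available as NumPy arrays, stacks the two steady-state relations for every sample and solves the resulting overdetermined system in the least-squares sense:

```python
import numpy as np

def steady_state_least_squares(V, I, wr):
    """Step 1: mean least-squares fit of (R, Laf, B, p) from the steady-state
    relations  V = R*I + Laf*I*wr  and  0 = Laf*I**2 + B*wr + p*wr**2,
    with V, I, wr given as 1-D arrays of N measured samples (names hypothetical)."""
    V, I, wr = map(np.asarray, (V, I, wr))
    zeros = np.zeros_like(V)
    # One pair of rows per sample: the electrical relation, then the mechanical one.
    A = np.vstack([
        np.column_stack([I, I * wr, zeros, zeros]),     # V = R*I + Laf*I*wr
        np.column_stack([zeros, I ** 2, wr, wr ** 2]),  # 0 = Laf*I**2 + B*wr + p*wr**2
    ])
    b = np.concatenate([V, zeros])
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    R, Laf, B, p = theta
    return R, Laf, B, p
```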
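A sketch of step 2 along the same lines, using SciPy's Nelder–Mead implementation as the simplex search of footnote 20, and solve_ivp to generate the predicted trajectory from the measured initial conditions. The dynamic form of the mechanical equation is an assumption here (only its steady-state form appears above), and the array names t, V_meas, I_meas, wr_meas are placeholders for the recorded signals:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def trajectory_cost(theta, t, V_meas, I_meas, wr_meas, alpha1=1.0, alpha2=1.0):
    """Cost K(L, R, Laf1, J, Laf2, B, p): weighted squared distance between the
    measured signals and the trajectory predicted by the model, started from
    the measured initial conditions (step 2)."""
    L, R, Laf1, J, Laf2, B, p = theta

    def V_of(s):
        # Measured input voltage, interpolated at the solver's time points.
        return np.interp(s, t, V_meas)

    def model(s, x):
        I, wr = x
        dI = (V_of(s) - R * I - Laf1 * I * wr) / L
        # Mechanical equation: dynamic form assumed, chosen consistent with
        # the steady-state system used in step 1.
        dwr = (Laf2 * I ** 2 + B * wr + p * wr ** 2) / J
        return [dI, dwr]

    sol = solve_ivp(model, (t[0], t[-1]), [I_meas[0], wr_meas[0]], t_eval=t)
    I_tilde, wr_tilde = sol.y
    return (alpha1 * np.trapz((I_meas - I_tilde) ** 2, t)
            + alpha2 * np.trapz((wr_meas - wr_tilde) ** 2, t))

# Usage, given the step-1 estimates R0, Laf0, B0, p0 and the measured records:
#   theta0 = [1.0, R0, Laf0, 1.0, Laf0, B0, p0]   # L = J = 1, Laf1 = Laf2 = Laf0
#   res = minimize(trajectory_cost, theta0,
#                  args=(t, V_meas, I_meas, wr_meas), method="Nelder-Mead")
```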
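Finally, a sketch of the step 3 re-initialization heuristic: p1 and p2 are obtained as scalar least-squares fits of p1 I ≈ Ĩ and p2 ωr ≈ ω̃r, and the electrical equation then suggests rescaling L, R and Laf1 exactly as derived above. How the remaining parameters should be rescaled from the mechanical equation is not spelled out in the text, so they are left unchanged in this sketch:

```python
import numpy as np

def rescaled_initial_guess(theta, I_meas, wr_meas, I_pred, wr_pred):
    """Step 3: build a new initial guess when the simplex search stalls in a
    local minimum.  p1 and p2 are the scalar least-squares fits of
    p1*I ~ I_pred and p2*wr ~ wr_pred; the electrical equation then gives
        L <- p1*L,  R <- p1*R,  Laf1 <- p1*p2*Laf1,
    as derived in the text (the other parameters are kept as they are)."""
    L, R, Laf1, J, Laf2, B, p = theta
    p1 = np.dot(I_meas, I_pred) / np.dot(I_meas, I_meas)      # argmin ||p1*I - I_pred||
    p2 = np.dot(wr_meas, wr_pred) / np.dot(wr_meas, wr_meas)  # argmin ||p2*wr - wr_pred||
    return [p1 * L, p1 * R, p1 * p2 * Laf1, J, Laf2, B, p]
```

The rescaled vector can then be used as the starting point of a new simplex search.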


¹⁹ If we keep Laf1 ≠ Laf2, the least-squares solution of the second line is trivially (0, 0, 0).

²⁰ We kept it simple: the simplex search method (see, for example, [88, 104]).


