
Notes on robust control

• Zeros (continuous-time systems)
• Parametrization (continuous-time systems)
• H2 control (continuous-time systems)
• H∞ control (continuous-time systems)
• H∞ filtering (discrete-time systems)


ZEROS AND POLES OF MULTIVARIABLE SYSTEMS

SUMMARY

• Transmission zeros: rank property and time-domain characterization
• Invariant zeros: rank property and time-domain characterization
• Decoupling zeros
• Infinite zero structure


(Transmission) Zeros and poles of a rational matrix

Basic assumption:

G(s) = C(sI - A)^{-1} B + D,  p outputs, m inputs,

r = normal rank[G(s)] = min(p, m)

Smith-McMillan form of G(s):

M(s) = \begin{bmatrix} f_1(s) & 0 & \cdots & 0 & 0 \\ 0 & f_2(s) & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & f_r(s) & 0 \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix},
\qquad f_i(s) = \frac{\varepsilon_i(s)}{\psi_i(s)}, \quad i = 1, 2, \ldots, r,

where ε_i divides ε_{i+1} and ψ_{i+1} divides ψ_i.

• The transmission zeros are the roots of ε_i(s), i = 1, 2, …, r
• The (transmission) poles are the roots of ψ_i(s), i = 1, 2, …, r


Example

Consider a system (A, B, C, D) with transfer function

G(s) = \frac{1}{(s-2)(s-5)} \begin{bmatrix} (s-3)(s-1) & (s-3)(s+2) \end{bmatrix}.

Hence

M(s) = \begin{bmatrix} \dfrac{s-3}{(s-2)(s-5)} & 0 \end{bmatrix}, \qquad r = 1.

There is one transmission zero at s = 3 and two poles at s = 2 and s = 5.
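The claim can be checked symbolically. The sketch below (with G(s) as reconstructed in the example above) verifies that the gcd of the numerators is s - 3 and that the Smith-McMillan entry has poles at 2 and 5:

```python
import sympy as sp

# Symbolic sanity check of the example above: every entry of G vanishes at
# s = 3, and eps_1 = gcd of the numerators over the common denominator.
s = sp.symbols('s')
G = sp.Matrix([[(s - 3)*(s - 1), (s - 3)*(s + 2)]]) / ((s - 2)*(s - 5))

# normal rank is 1, so M(s) = [ eps_1/psi_1  0 ]
eps1 = sp.gcd((s - 3)*(s - 1), (s - 3)*(s + 2))
M1 = sp.cancel(eps1 / ((s - 2)*(s - 5)))
```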


Theorem 1 (Rank property of transmission zeros)

The complex number λ is a transmission zero of G(s) iff there exists a polynomial vector z(s) such that z(λ) ≠ 0 and

\lim_{s \to \lambda} G(s) z(s) = 0 \quad \text{if } p \ge m, \qquad
\lim_{s \to \lambda} G(s)' z(s) = 0 \quad \text{if } p \le m.

Remark

In the square case (p = m) it follows that

\det[G(s)] = \frac{\prod_{i=1}^{r} \varepsilon_i(s)}{\prod_{i=1}^{r} \psi_i(s)},

so that it is possible to catch all zeros and poles from the determinant if and only if there are no cancellations.

Proof of the theorem

Assume that p ≥ m and let M(s) be the Smith-McMillan form of G(s), i.e. G(s) = L(s)M(s)R(s) for suitable polynomial and unimodular matrices L(s) (of dimension p) and R(s) (of dimension m). It is possible to write

M(s) = \begin{bmatrix} E(s)\Psi^{-1}(s) \\ 0 \end{bmatrix},

where E(s) = diag{ε_i(s)} and Ψ(s) = diag{ψ_i(s)}. Now assume that λ is a zero of the polynomial ε_k(s). Recall that R(s) is unimodular and take the polynomial vector z(s) = [R^{-1}(s)]_k, i.e. the k-th column of the inverse of R(s). Since ψ_k(λ) ≠ 0, it follows that G(s)z(s) = L(s)E(s)Ψ^{-1}(s)R(s)z(s) = L(s)E(s)[Ψ^{-1}(s)]_k = L(s)[E(s)]_k / ψ_k(s) = [L(s)]_k ε_k(s) / ψ_k(s). This quantity goes to zero as s tends to λ. Vice versa, if z(s) is such that G(s)z(s) goes to zero as s tends to λ, then the limit E(λ)Ψ^{-1}(λ)R(λ)z(λ) being zero means that at least one polynomial ε_k(s) must vanish at s = λ.

The proof in the opposite case (p ≤ m) follows the same lines and is therefore omitted.
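The remark can be illustrated numerically. The following sketch uses a hypothetical 2x2 plant (not from the notes) with no cancellations, so det G(s) exposes the transmission zero and the poles:

```python
import sympy as sp

# Hypothetical square example with no cancellations between numerator
# and denominator of det G(s): zeros and poles are read off the determinant.
s = sp.symbols('s')
G = sp.Matrix([[1/(s + 1), 0],
               [1, (s - 2)/(s + 3)]])
d = sp.cancel(sp.det(G))          # -> (s - 2)/((s + 1)*(s + 3))
zeros = sp.solve(sp.numer(d), s)  # transmission zeros
poles = sp.solve(sp.denom(d), s)  # transmission poles
```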


Example

G(s) = \frac{1}{(s-1)(s+2)} \begin{bmatrix} 1 & (s-1)(s+2) \\ 0 & (s-1)^2 \end{bmatrix}

Hence,

G(s) = \begin{bmatrix} \dfrac{1}{(s-1)(s+2)} & 0 \\ 0 & \dfrac{s-1}{s+2} \end{bmatrix}
\begin{bmatrix} 1 & (s-1)(s+2) \\ 0 & 1 \end{bmatrix} = M(s)R(s),

so that, with

z(s) = \begin{bmatrix} -(s-1)(s+2) \\ 1 \end{bmatrix},

it follows

\lim_{s \to 1} G(s) z(s) = 0.

Notice that it is not true that G(s)z(1) tends to zero as s tends to one.
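The distinction between G(s)z(s) and G(s)z(1) can be checked symbolically; the sketch below restates G(s) and z(s) as in the example above:

```python
import sympy as sp

s = sp.symbols('s')
G = sp.Matrix([[1/((s - 1)*(s + 2)), 1],
               [0, (s - 1)/(s + 2)]])
z = sp.Matrix([-(s - 1)*(s + 2), 1])

v = sp.simplify(G*z)             # polynomial vector z(s): limit is zero
w = sp.simplify(G*z.subs(s, 1))  # frozen vector z(1) = [0, 1]': limit is not zero
```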


Time-domain characterization of transmission zeros and poles

Let Σ = (A, B, C, D) be the state-space system with transfer function G(s) = C(sI - A)^{-1}B + D. Let Σ' = (A', C', B', D') be the "transpose" system, whose transfer function is G(s)' = B'(sI - A')^{-1}C' + D'. Now, let

\hat{\Sigma} = \begin{cases} \Sigma & p \ge m \\ \Sigma' & p \le m \end{cases}, \qquad
\hat{G}(s) = \begin{cases} G(s) & p \ge m \\ G(s)' & p \le m \end{cases}.

Finally, let δ^{(i)}(t) denote the i-th derivative of the impulsive "function" δ(t).

Theorem 2

(a) The complex number λ is a pole of Σ iff there exists an impulsive input

u(t) = \sum_{i=0}^{\nu} \alpha_i \delta^{(i)}(t)

such that the forced output of \hat{\Sigma} is y_f(t) = y_0 e^{λt}, ∀ t ≥ 0.

(b) The complex number λ is a zero of Σ iff there exists an exponential/impulsive input

u(t) = u_0 e^{\lambda t} + \sum_{i=0}^{\nu} \alpha_i \delta^{(i)}(t), \qquad u_0 \ne 0,

such that the forced output of \hat{\Sigma} is y_f(t) = 0, ∀ t ≥ 0.

Zero-blocking property. The proof of the theorem is not reported here for space limitations.


Invariant zeros

Given a system (A, B, C, D) with transfer function G(s) = C(sI - A)^{-1}B + D, the polynomial matrix

P(s) = \begin{bmatrix} sI - A & -B \\ C & D \end{bmatrix}

is called the system matrix. Consider the Smith form S(s) of P(s):

S(s) = \begin{bmatrix} \alpha_1(s) & 0 & \cdots & 0 & 0 \\ 0 & \alpha_2(s) & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \alpha_\nu(s) & 0 \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix},

where α_i divides α_{i+1}.

The invariant zeros are the roots of α_i(s), i = 1, 2, …, ν.

-----------------

Notice that

P(s) = \begin{bmatrix} sI - A & 0 \\ C & I \end{bmatrix}
\begin{bmatrix} (sI - A)^{-1} & 0 \\ 0 & G(s) \end{bmatrix}
\begin{bmatrix} sI - A & -B \\ 0 & I \end{bmatrix},

so that the normal rank of P(s) is ϖ = n + rank[G(s)].


Theorem 3 (Rank property of invariant zeros)

The complex number λ is an invariant zero of (A, B, C, D) iff P(s) loses rank at s = λ, i.e. iff there exists a nonzero vector z such that

P(\lambda) z = 0 \quad \text{if } p \ge m, \qquad
P(\lambda)' z = 0 \quad \text{if } p \le m.

Notice that if T is the matrix of a change of basis in the state space, then

\tilde{P}(s) = \begin{bmatrix} sI - TAT^{-1} & -TB \\ CT^{-1} & D \end{bmatrix}
= \begin{bmatrix} T & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} sI - A & -B \\ C & D \end{bmatrix}
\begin{bmatrix} T^{-1} & 0 \\ 0 & I \end{bmatrix}.

Therefore, the invariant zeros are invariant with respect to a change of basis.

Proof of Theorem 3

Consider the case p ≥ m and let P(s) = L(s)S(s)R(s), where L(s) and R(s) are suitable polynomial unimodular matrices. Hence, if P(λ)z = 0 for some z ≠ 0, then S(λ)R(λ)z = 0, so that S(λ)y = 0 with y = R(λ)z ≠ 0. Hence, λ must be a root of at least one polynomial α_i(s). Vice versa, if α_k(λ) = 0 for some k, then z = [R^{-1}(λ)]_k is such that P(λ)z = L(λ)S(λ)R(λ)[R^{-1}(λ)]_k = L(λ)[S(λ)]_k = 0.
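The rank property also suggests a standard numerical route (not spelled out in these notes): the invariant zeros are the finite generalized eigenvalues of a matrix pencil built from the system matrix. A sketch on a hypothetical example:

```python
import numpy as np
from scipy.linalg import eig

# Invariant zeros as the finite generalized eigenvalues of the pencil
# s*E - W, with W = [[A, B], [-C, -D]] and E = [[I, 0], [0, 0]],
# since s*E - W is exactly the system matrix P(s) = [[sI-A, -B], [C, D]].
# Hypothetical example: G(s) = (s+3)/((s+1)(s+2)), one invariant zero at s = -3.
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[3., 1.]])
D = np.array([[0.]])

n = A.shape[0]
W = np.block([[A, B], [-C, -D]])
E = np.zeros_like(W)
E[:n, :n] = np.eye(n)

w = eig(W, E, right=False)       # generalized eigenvalues; inf for the singular part
invariant_zeros = w[np.isfinite(w)]
```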


Time-domain characterization of invariant zeros

Let Σ = (A, B, C, D) be the state-space system with transfer function G(s) = C(sI - A)^{-1}B + D. Let Σ' = (A', C', B', D') be the "transpose" system, whose transfer function is G(s)' = B'(sI - A')^{-1}C' + D'. Now, let

\hat{\Sigma} = \begin{cases} \Sigma & p \ge m \\ \Sigma' & p \le m \end{cases}, \qquad
\hat{G}(s) = \begin{cases} G(s) & p \ge m \\ G(s)' & p \le m \end{cases}.

Theorem 4

The complex number λ is an invariant zero of Σ iff at least one of the two following conditions holds:

(i) λ is an eigenvalue of the unobservable part of \hat{\Sigma};

(ii) there exist two vectors x_0 and u_0 ≠ 0 such that the output of \hat{\Sigma} corresponding to the input u(t) = u_0 e^{λt}, t ≥ 0, and the initial state x(0) = x_0 is identically zero for t ≥ 0.

Zero-blocking property. The proof of the theorem is not reported here for space limitations.
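Condition (ii) can be made concrete on a small hypothetical example: for G(s) = (s+3)/((s+1)(s+2)) the invariant zero is s = -3, and solving P(-3)[v; w] = 0 gives v = [1, -3]', w = 2. With x(0) = v and u(t) = 2e^{-3t}, the state stays x(t) = v e^{-3t} and the output is identically zero:

```python
import numpy as np

# Algebraic check of the blocking input: x(t) = v*exp(lam*t) solves
# xdot = A x + B u with u(t) = w*exp(lam*t) iff A v + B w = lam*v,
# and the output is zero iff C v + D w = 0.
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[3., 1.]])
D = np.array([[0.]])

lam = -3.0
v = np.array([1., -3.])   # initial state x(0)
w = 2.0                   # input direction u_0

state_residual = A @ v + (B * w).ravel() - lam * v
output = C @ v + (D * w).ravel()
```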


Transmission vs invariant zeros

Theorem 5

(i) A transmission zero is also an invariant zero.

(ii) Invariant and transmission zeros of a system in minimal form do coincide.

Proof of point (i). Let p ≥ m and let λ be a transmission zero. Then there exists an exponential/impulsive input

u(t) = u_0 e^{\lambda t} + \sum_{i=0}^{\nu} \alpha_i \delta^{(i)}(t)

such that the forced output is zero, i.e.

y_f(t) = C \int_0^t e^{A(t-\tau)} B \Big[ u_0 e^{\lambda\tau} + \sum_{i=0}^{\nu} \alpha_i \delta^{(i)}(\tau) \Big] d\tau + D u_0 e^{\lambda t} = 0, \quad t > 0.

Letting

x_0 = \begin{bmatrix} B & AB & \cdots & A^\nu B \end{bmatrix}
\begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_\nu \end{bmatrix},

it follows

C e^{At} x_0 + C \int_0^t e^{A(t-\tau)} B u_0 e^{\lambda\tau} d\tau + D u_0 e^{\lambda t} = 0, \quad t > 0,

so that λ is an invariant zero (condition (ii) of Theorem 4 with initial state x_0). The proof in the case p ≤ m is the same, considering the transpose system.

Proof of point (ii). Let p ≥ m. We have already proven that a transmission zero is an invariant zero. Then, consider a minimal system and an invariant zero λ, so that Av + Bw = λv, Cv + Dw = 0 for some nonzero pair (v, w). Notice that w ≠ 0, since otherwise v would be an unobservable eigenvector, against observability. Moreover, thanks to the reachability condition, there exist coefficients α_i such that

v = \begin{bmatrix} B & AB & \cdots & A^\nu B \end{bmatrix}
\begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_\nu \end{bmatrix}.

Since λ is an invariant zero, the initial state x(0) = v and the input u(t) = w e^{λt} are such that

C e^{At} v + C \int_0^t e^{A(t-\tau)} B w e^{\lambda\tau} d\tau + D w e^{\lambda t} = 0, \quad t > 0.

By noticing that

C e^{At} v = \sum_{i=0}^{\nu} C e^{At} A^i B \alpha_i = C \int_0^t e^{A(t-\tau)} B \sum_{i=0}^{\nu} \alpha_i \delta^{(i)}(\tau) d\tau,

it follows

C \int_0^t e^{A(t-\tau)} B \Big[ w e^{\lambda\tau} + \sum_{i=0}^{\nu} \alpha_i \delta^{(i)}(\tau) \Big] d\tau + D w e^{\lambda t} = 0, \quad t > 0,

which means that λ is a transmission zero.


Decoupling zeros

Associated Smith forms. Let

P_C(s) = \begin{bmatrix} sI - A \\ C \end{bmatrix}, \qquad
P_B(s) = \begin{bmatrix} sI - A & -B \end{bmatrix},

and consider their Smith forms

S_C(s) = \begin{bmatrix} \mathrm{diag}\{\alpha_i^C(s)\} \\ 0 \end{bmatrix}, \qquad
S_B(s) = \begin{bmatrix} \mathrm{diag}\{\alpha_i^B(s)\} & 0 \end{bmatrix}.

The output decoupling zeros are the roots of α_1^C(s) α_2^C(s) ⋯ α_n^C(s), the input decoupling zeros are the roots of α_1^B(s) α_2^B(s) ⋯ α_n^B(s), and the input-output decoupling zeros are the common roots of both.

Lemma

(i) A complex number λ is an output decoupling zero iff there exists z ≠ 0 such that P_C(λ)z = 0.

(ii) A complex number λ is an input decoupling zero iff there exists w ≠ 0 such that P_B(λ)'w = 0.

(iii) A complex number λ is an input-output decoupling zero iff there exist z ≠ 0 and w ≠ 0 such that P_C(λ)z = 0, P_B(λ)'w = 0.

(iv) For square systems the set of decoupling zeros is a subset of the set of invariant zeros.

(v) For square invertible systems the transmission zeros of the inverse system coincide with the poles of the system, and the invariant zeros of the inverse system coincide with the eigenvalues of the system.
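Point (i) of the Lemma is easy to test numerically: P_C(s) loses column rank exactly at the unobservable eigenvalues. A sketch on a hypothetical example where the mode at -2 is unobservable:

```python
import numpy as np

# Output decoupling zeros = values where P_C(s) = [sI-A; C] loses column rank.
# Hypothetical example: A = diag(1, -2) with C = [1, 0], so the eigenvalue -2
# is unobservable and is an output decoupling zero; the eigenvalue 1 is not.
A = np.array([[1., 0.], [0., -2.]])
C = np.array([[1., 0.]])

def pc_rank(lam):
    return np.linalg.matrix_rank(np.vstack([lam*np.eye(2) - A, C]))
```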


Infinite zero structure

McMillan form over the ring of causal rational functions. Given the transfer function G(s), there exist two matrices V(s) and W(s), unimodular in that ring (i.e. bi-proper rational), such that

G(s) = V(s) \begin{bmatrix} M(s) & 0 \\ 0 & 0 \end{bmatrix} W(s), \qquad
M(s) = \mathrm{diag}\{ s^{-\delta_1}, s^{-\delta_2}, \ldots, s^{-\delta_r} \},

where δ_1 ≤ δ_2 ≤ … ≤ δ_r are integers. These integers describe the structure of the zeros at infinity, which is relevant, e.g., in:

• Inversion
• Model matching
• ……

Example: compute the finite and infinite zeros of

G(s) = \begin{bmatrix}
\dfrac{s+1}{s-1} & 0 & \dfrac{s+1}{s-1} \\
1 & \dfrac{1}{s} & \dfrac{s+1}{s} \\
\dfrac{1}{s} & 0 & \dfrac{s^3 - 3s^2 + s + 5}{s^3(s-3)}
\end{bmatrix}


PARAMETRIZATION OF STABILIZING CONTROLLERS

Summary

• BIBO stability and internal stability of feedback systems
• Stabilization of SISO systems. Parameterization and characterization of stabilizing controllers
• Coprime factorization
• Stabilization of MIMO systems. Parameterization and characterization of stabilizing controllers
• Strong stabilization


Stability of feedback systems

[Block diagram: r → (+, -y) → e → C(s) → u → (+d) → P(s) → y, unit negative feedback.]

The problem consists in deducing the asymptotic stability (internal stability) of the closed-loop system from the stability of some characteristic transfer functions. This result is useful also for design.

THEOREM 1

The closed-loop system is asymptotically stable if and only if the transfer matrix G(s) from the input [r' d']' to the output [e' u']',

\begin{bmatrix} e \\ u \end{bmatrix} = G(s) \begin{bmatrix} r \\ d \end{bmatrix}, \qquad
G(s) = \begin{bmatrix}
(I + P(s)C(s))^{-1} & -(I + P(s)C(s))^{-1} P(s) \\
(I + C(s)P(s))^{-1} C(s) & (I + C(s)P(s))^{-1}
\end{bmatrix},

is stable.


Stability of interconnected SISO systems

The closed-loop system is asymptotically stable if and only if the three transfer functions

S(s) = \frac{1}{1 + C(s)P(s)}, \qquad
R(s) = \frac{C(s)}{1 + C(s)P(s)}, \qquad
H(s) = \frac{P(s)}{1 + C(s)P(s)}

are stable.

• Notice that if S(s), R(s), H(s) are stable, then the complementary sensitivity

T(s) = \frac{C(s)P(s)}{1 + C(s)P(s)} = 1 - S(s)

is stable as well.

• Stability of all three transfer functions is required in order to rule out prohibited (unstable) cancellations between C(s) and P(s).

Example

Let C(s) = (s-1)/(s+1), P(s) = 1/(s-1). Then S(s) = (s+1)/(s+2), R(s) = (s-1)/(s+2) and H(s) = (s+1)/(s^2+s-2): S and R are stable, but H is not.
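The example can be checked symbolically; the unstable cancellation at s = 1 shows up only in H(s):

```python
import sympy as sp

# The example above, computed symbolically: S and R are stable,
# but H has a pole at s = 1 due to the prohibited cancellation.
s = sp.symbols('s')
C = (s - 1)/(s + 1)
P = 1/(s - 1)
S = sp.cancel(1/(1 + C*P))
R = sp.cancel(C/(1 + C*P))
H = sp.cancel(P/(1 + C*P))
H_poles = sp.solve(sp.denom(H), s)
```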


Comments

The proof of Theorem 1 can be carried out pursuing different paths. A simple one is as follows. Assume that C(s) and P(s) are described by means of minimal realizations C(s) = (A, B, C, D), P(s) = (F, G, H, E). It is possible to write a realization of the closed-loop system as Σcl = (Acl, Bcl, Ccl, Dcl). Obviously, if Acl is asymptotically stable, then G(s) is stable. Vice versa, if G(s) is stable, the asymptotic stability of Acl follows from Σcl = (Acl, Bcl, Ccl, Dcl) being a minimal realization. This can be seen through the well-known PBH test.

In our example it follows (ordering the states as plant state, controller state)

A_{cl} = \begin{bmatrix} 0 & -2 \\ -1 & -1 \end{bmatrix}, \qquad
B_{cl} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}, \qquad
C_{cl} = \begin{bmatrix} -1 & 0 \\ -1 & -2 \end{bmatrix}, \qquad
D_{cl} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix},

G(s) = \begin{bmatrix}
\dfrac{s+1}{s+2} & -\dfrac{s+1}{(s-1)(s+2)} \\
\dfrac{s-1}{s+2} & \dfrac{s+1}{s+2}
\end{bmatrix}.

Hence, Acl is unstable and the closed-loop system is reachable and observable. This means that G(s) is unstable as well.


Parametrization of stabilizing controllers
1st case: SISO systems and P(s) stable

[Block diagram: r → e → C(s) → u → (+d) → P(s), unit negative feedback.]

THEOREM 2

The family of all controllers C(s) such that the closed-loop system is asymptotically stable is

C(s) = \frac{Q(s)}{1 - Q(s)P(s)},

where Q(s) is proper and stable and Q(∞)P(∞) ≠ 1.


Proof of Theorem 2

Let C(s) be a stabilizing controller and define Q(s) = C(s)/(1 + C(s)P(s)). Notice that Q(s) is stable, since it is the transfer function from r to u. Hence C(s) can be written as C(s) = Q(s)/(1 - P(s)Q(s)), with Q(s) stable and, obviously, Q(∞)P(∞) = C(∞)P(∞)/(1 + C(∞)P(∞)) ≠ 1.

Vice versa, assume that Q(s) is stable and Q(∞)P(∞) ≠ 1. Define C(s) = Q(s)/(1 - P(s)Q(s)). It results that

S(s) = 1/(1 + C(s)P(s)) = 1 - P(s)Q(s),
R(s) = C(s)/(1 + C(s)P(s)) = Q(s),
H(s) = P(s)/(1 + C(s)P(s)) = P(s)(1 - P(s)Q(s))

are stable. This means that the closed-loop system is asymptotically stable.
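The identities used in the proof can be verified symbolically for a hypothetical stable plant and an arbitrary admissible parameter:

```python
import sympy as sp

# A sketch of Theorem 2 on a hypothetical stable plant.
s = sp.symbols('s')
P = 1/(s + 1)        # stable plant
Q = 1/(s + 2)        # any proper stable parameter with Q(oo)P(oo) != 1
C = sp.cancel(Q/(1 - P*Q))
S = sp.cancel(1/(1 + C*P))
R = sp.cancel(C/(1 + C*P))
```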


Parametrization of stabilizing controllers
2nd case: SISO systems and generic P(s)

It is always possible to write (coprime factorization)

P(s) = \frac{N(s)}{M(s)},

where N(s) and M(s) are stable, rational and coprime, i.e. such that there exist two stable rational functions X(s) and Y(s) satisfying the Diophantine (Aryabhatta-Bézout) equation

N(s)X(s) + M(s)Y(s) = 1.

THEOREM 3

The family of all controllers C(s) such that the closed-loop system is well-posed and asymptotically stable is

C(s) = \frac{X(s) + M(s)Q(s)}{Y(s) - N(s)Q(s)},

where Q(s) is proper, stable and such that Q(∞)N(∞) ≠ Y(∞).


Comments

The proof of the previous theorem can be carried out in different ways. However, it requires a preliminary discussion on the concept of coprime factorization and on the stability of factorized interconnected systems.

[Block diagram: r → e → Mc(s)^{-1} → z1 → Nc(s) → u → (+d) → M(s)^{-1} → z2 → N(s) → y, unit negative feedback.]

LEMMA 1

Let P(s) = N(s)/M(s) and C(s) = Nc(s)/Mc(s) be stable coprime factorizations. Then the closed-loop system is asymptotically stable if and only if the transfer matrix K(s) from [r' d']' to [z1' z2']',

\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = K(s) \begin{bmatrix} r \\ d \end{bmatrix}, \qquad
K(s) = \begin{bmatrix} M_c(s) & N(s) \\ -N_c(s) & M(s) \end{bmatrix}^{-1},

is stable.


Proof of Lemma 1

Define four stable functions X(s), Y(s), Xc(s), Yc(s) such that

X(s)N(s) + Y(s)M(s) = 1, \qquad X_c(s)N_c(s) + Y_c(s)M_c(s) = 1.

To prove the Lemma it is enough to resort to Theorem 1, by noticing that the transfer matrices K(s) and G(s) (from [r' d']' to [e' u']') are related as follows:

K(s) = \begin{bmatrix} Y_c(s) & X_c(s) \\ -X(s) & Y(s) \end{bmatrix} G(s)
- \begin{bmatrix} 0 & X_c(s) \\ -X(s) & 0 \end{bmatrix},

G(s) = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}
+ \begin{bmatrix} 0 & -N(s) \\ N_c(s) & 0 \end{bmatrix} K(s).

Since all the matrices relating K(s) and G(s) are stable, K(s) is stable if and only if G(s) is stable.


Proof of Theorem 3

Assume that Q(s) is stable and Q(∞)N(∞) ≠ Y(∞). Moreover, define C(s) = (X(s) + M(s)Q(s))/(Y(s) - N(s)Q(s)). It follows that

1 = N(s)X(s) + M(s)Y(s) = N(s)(X(s) + M(s)Q(s)) + M(s)(Y(s) - N(s)Q(s)),

so that the functions X(s) + M(s)Q(s) and Y(s) - N(s)Q(s) defining C(s) are coprime (besides being both stable). Hence, the three characteristic transfer functions are

S(s) = 1/(1 + C(s)P(s)) = M(s)(Y(s) - N(s)Q(s)),
R(s) = C(s)/(1 + C(s)P(s)) = M(s)(X(s) + M(s)Q(s)),
H(s) = P(s)/(1 + C(s)P(s)) = N(s)(Y(s) - N(s)Q(s)).

Since they are all stable, the closed-loop system is asymptotically stable as well (Theorem 1).

Vice versa, assume that C(s) = Nc(s)/Mc(s) (stable coprime factorization) is such that the closed-loop system is well-posed and asymptotically stable. Define Q(s) = (Y(s)Nc(s) - X(s)Mc(s))/(M(s)Mc(s) + N(s)Nc(s)). Since the closed-loop system is asymptotically stable and (Nc, Mc) are coprime, in view of Lemma 1 it follows that

Q(s) = (Y(s)Nc(s) - X(s)Mc(s))/(M(s)Mc(s) + N(s)Nc(s)) = [0 \;\; 1] K(s) \begin{bmatrix} Y(s) \\ -X(s) \end{bmatrix}

is stable. This leads to C(s) = (X(s) + M(s)Q(s))/(Y(s) - N(s)Q(s)). Finally, Q(∞)N(∞) ≠ Y(∞), as can easily be verified.
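A small symbolic sketch of the parametrization, on a hypothetical unstable plant P(s) = 1/(s-1) with the factorization N = 1/(s+1), M = (s-1)/(s+1) and Bézout pair X = 2, Y = 1 (these specific factors are an assumption, not from the notes):

```python
import sympy as sp

# Theorem 3 sketch: N*X + M*Y = 1, and any stable proper Q gives a
# stabilizing controller C = (X + M*Q)/(Y - N*Q) with S = M*(Y - N*Q).
s = sp.symbols('s')
N, M = 1/(s + 1), (s - 1)/(s + 1)
X, Y = sp.Integer(2), sp.Integer(1)
P = sp.cancel(N/M)                  # = 1/(s - 1)
Q = 1/(s + 2)                       # one admissible choice
C = sp.cancel((X + M*Q)/(Y - N*Q))
S = sp.cancel(1/(1 + C*P))
S_poles = sp.solve(sp.denom(S), s)
```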


Coprime Factorization
1st case: SISO systems

Lemma 1 makes reference to a factorized description of a transfer function. Indeed, it is easy to see that it is always possible to write a transfer function as the ratio of two stable transfer functions without common divisors (coprime). For example,

G(s) = \frac{s+1}{(s-1)(s+10)(s-2)} = \frac{N(s)}{M(s)}

with

N(s) = \frac{1}{(s+10)(s+1)}, \qquad M(s) = \frac{(s-1)(s-2)}{(s+1)^2}.

It results (Euclid's algorithm)

N(s)X(s) + M(s)Y(s) = 1

with

X(s) = \frac{4(16s - 5)}{s+1}, \qquad Y(s) = \frac{s+15}{s+10}.
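The Bézout identity of this example (with N, M, X, Y as reconstructed above) can be verified symbolically:

```python
import sympy as sp

# Check the coprime factorization and the Bezout identity of the example.
s = sp.symbols('s')
N = 1/((s + 10)*(s + 1))
M = (s - 1)*(s - 2)/(s + 1)**2
X = 4*(16*s - 5)/(s + 1)
Y = (s + 15)/(s + 10)
G = sp.cancel(N/M)
```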


Coprime Factorization
2nd case: MIMO systems

In the MIMO case we need to distinguish between right and left factorizations.

Right and left factorization. Given P(s), find four proper and stable transfer matrices such that

P(s) = N_d(s) M_d(s)^{-1} = M_s(s)^{-1} N_s(s),

with N_d(s), M_d(s) right coprime:

X_d(s) N_d(s) + Y_d(s) M_d(s) = I,

and N_s(s), M_s(s) left coprime:

N_s(s) X_s(s) + M_s(s) Y_s(s) = I.

Double coprime factorization (Hermite's algorithm; observer and control law). Given P(s), find eight proper and stable transfer matrices such that:

(i) P(s) = N_d(s) M_d(s)^{-1} = M_s(s)^{-1} N_s(s)

(ii) \begin{bmatrix} Y_d(s) & X_d(s) \\ -N_s(s) & M_s(s) \end{bmatrix}
\begin{bmatrix} M_d(s) & -X_s(s) \\ N_d(s) & Y_s(s) \end{bmatrix} = I.


Construction of a stable double coprime factorization

Let (A, B, C, D) be a stabilizable and detectable realization of P(s), and conventionally write P = (A, B, C, D).

THEOREM 4

Let F and L be two matrices such that A+BF and A+LC are stable. Then there exists a stable double coprime factorization, given by:

Md = (A+BF, B, F, I), Nd = (A+BF, B, C+DF, D)
Ys = (A+BF, L, -C-DF, I), Xs = (A+BF, L, F, 0)
Ms = (A+LC, L, C, I), Ns = (A+LC, B+LD, C, D)
Yd = (A+LC, B+LD, -F, I), Xd = (A+LC, L, F, 0)
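Theorem 4 can be tried out on a minimal hypothetical example, P(s) = 1/(s-1), i.e. (A, B, C, D) = (1, 1, 1, 0), with F = L = -2 so that A+BF = A+LC = -1 (the numbers are an illustrative assumption):

```python
import sympy as sp

# Scalar sketch of Theorem 4: build the transfer functions of some factors
# from their realizations (a, b, c, d) -> c*(s - a)^{-1}*b + d, and check
# P*Md = Nd and the right Bezout identity Xd*Nd + Yd*Md = 1.
s = sp.symbols('s')
A, B, C, D = 1, 1, 1, 0
F, L = -2, -2

def tf(a, b, c, d):
    return sp.cancel(c*b/(s - a) + d)

Md = tf(A + B*F, B, F, 1)           # -> (s-1)/(s+1)
Nd = tf(A + B*F, B, C + D*F, D)     # -> 1/(s+1)
Yd = tf(A + L*C, B + L*D, -F, 1)    # -> (s+3)/(s+1)
Xd = tf(A + L*C, L, F, 0)           # -> 4/(s+1)
P  = sp.cancel(Nd/Md)
```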


Proof of Theorem 4

The existence of stabilizing F and L is ensured by the assumptions of stabilizability and detectability of the system. To check that the 8 transfer functions form a stable double coprime factorization, notice first that they are all stable and that Md and Ms are biproper. Finally, one can use matrix calculus to verify the theorem. Alternatively, suitable state variables can be introduced. For example, from

ẋ = Ax + Bu
y = Cx + Du
u = Fx + v   (control law)

one obtains

ẋ = (A + BF)x + Bv
y = (C + DF)x + Dv

so that y = P(s)u = Nd(s)v and u = Md(s)v, hence P(s)Md(s) = Nd(s). Similarly, from

ẋ = Ax + Bu
y = Cx + Du

and the observer

θ̇ = Aθ + Bu + L(Cθ + Du - y)
η = Cθ + Du - y

it follows that η = Ns(s)u - Ms(s)y = Ns(s)u - Ms(s)P(s)u = 0 (stable autonomous dynamics), so that Ns(s) = Ms(s)P(s). Analogously one can proceed for the rest of the proof.


Parametrization of stabilizing controllers
3rd case: MIMO systems and generic P(s)

(i) P(s) = N_d(s) M_d(s)^{-1} = M_s(s)^{-1} N_s(s)

(ii) \begin{bmatrix} Y_d(s) & X_d(s) \\ -N_s(s) & M_s(s) \end{bmatrix}
\begin{bmatrix} M_d(s) & -X_s(s) \\ N_d(s) & Y_s(s) \end{bmatrix} = I

THEOREM 5

The family of all proper transfer matrices C(s) such that the closed-loop system is well-posed and asymptotically stable is

C(s) = (X_s(s) + M_d(s)Q_d(s)) (Y_s(s) - N_d(s)Q_d(s))^{-1}
     = (Y_d(s) - Q_s(s)N_s(s))^{-1} (X_d(s) + Q_s(s)M_s(s)),

where Q_d(s) [Q_s(s)] is proper, stable and such that Y_s(∞) - N_d(∞)Q_d(∞) [Y_d(∞) - Q_s(∞)N_s(∞)] is nonsingular.


Comments

The proof of Theorem 5 follows the same lines as that of Theorem 3. However, the generalization of Lemma 1 to the MIMO case is required.

[Block diagram: r → e → Mc(s)^{-1} → z1 → Nc(s) → u → (+d) → Md(s)^{-1} → z2 → Nd(s) → y, unit negative feedback.]

LEMMA 2

Let P(s) = Nd(s)Md(s)^{-1} and C(s) = Nc(s)Mc(s)^{-1} be stable right coprime factorizations. Then the closed-loop system is asymptotically stable if and only if the transfer matrix K(s) from [r' d']' to [z1' z2']',

K(s) = \begin{bmatrix} M_c(s) & N_d(s) \\ -N_c(s) & M_d(s) \end{bmatrix}^{-1},

is stable.


Proof of Lemma 2<br />

It is completely similar to the proof of Lemma 1.<br />

Proof of Theorem 5<br />

Assume that Q(s) is stable and define<br />

C(s)=(Xs(s)+Md(s)Q(s))(Ys(s)-Nd(s)Q(s)) -1<br />

It results that<br />

I=Ns(s)Xs(s)+Ms(s)Ys(s)<br />

=Ns(s)(Xs(s)+Md(s)Q(s))+Ms(s)(Y(s)s-Nd(s)Q(s))<br />

so that the functions Xs(s)+Md(s)Q(s) e Ys(s)-Nd(s)Q(s) defining<br />

C(s) are right coprime (besides being both stable). The four<br />

transfer matrices characterizing the closed-loop are:<br />

(1+PC) -1 = (Ys-NdQ)Ms<br />

-(1+PC) -1 P= (Ys-NdQ)Ns<br />

(1+CP) -1 C=C(I+PC) -1 = (Xs+MdQ)Ms<br />

(1+CP) -1 =I-C(I+PC) -1 C= I-(Xs+MdQ)Ns<br />

They are all stable so that the closed-loop system is asymptotically<br />

stable.<br />

Vice-versa, assume that C(s)=Nc(s)Mc(s) -1<br />

(right coprime<br />

factorization) is stabilizing. Notice that the matrices<br />

⎡ Yd<br />

( s)<br />

X d(<br />

s)<br />

⎤ ⎡M<br />

d ( s)<br />

Nc<br />

( s)<br />

⎤<br />

⎢<br />

and<br />

≈<br />

N s s M s s<br />

⎥ ⎢<br />

Nd<br />

s M c s<br />

⎥<br />

⎣−<br />

( ) ( ) ⎦ ⎣ ( ) ( ) ⎦<br />

have stable inverse. Hence,<br />

⎡ Yd<br />

( s)<br />

X d ( s)<br />

⎤ ⎡M<br />

d ( s)<br />

Nc<br />

( s)<br />

⎤ ⎡I<br />

Yd<br />

( s)<br />

N c(<br />

s)<br />

X d ( s)<br />

Nc<br />

( s)<br />

⎤<br />

⎢<br />

⎥ ⎢<br />

⎥ = ⎢<br />

⎥<br />

⎣−<br />

N s ( s)<br />

M s ( s)<br />

⎦ ⎣ N d ( s)<br />

M c ( s)<br />

⎦ ⎣0<br />

N s ( s)<br />

N c(<br />

s)<br />

+ M s ( s)<br />

M c ( s)<br />

⎦<br />

(*)<br />

has stable inverse as well. Then,<br />

Q(s) = (Yd(s)Nc(s)−Xd(s)Mc(s))(Ms(s)Mc(s)+Ns(s)Nc(s))^-1<br />
is well-posed and stable. Premultiplying equation (*) by<br />
[ Md(s)  −Xs(s) ; Nd(s)  Ys(s) ]<br />
it follows<br />
Nc(s)Mc(s)^-1 = (Xs(s)+Md(s)Q(s))(Ys(s)−Nd(s)Q(s))^-1.


Observation<br />

Taking Q(s)=0 we have the so-called central controller<br />

C0(s) =Xs(s)Ys(s) -1<br />

which coincides with the controller designed with the pole<br />

assignment technique.<br />

Taking r=d=0 one has:<br />

ξ̇ = Aξ + Bu + L(Cξ + Du − y)<br />
u = Fξ<br />

LEMMA 3<br />
C(s) = C0(s) + Yd(s)^-1 Q(s) [ I − Ys(s)^-1 Nd(s) Q(s) ]^-1 Ys(s)^-1<br />
(Block diagram: the parametrized controller is realized as C0 in parallel with Q embedded in a feedback loop through Yd^-1, Nd and Ys^-1, in closed loop with the plant P from u to y.)<br />


Strong Stabilization<br />

The problem is that of finding, if possible, a stable controller<br />

which stabilizes the closed-loop system.<br />

Example<br />
P(s) = (s − 1) / ((s − 2)(s + 2))<br />
is not stabilizable with a stable controller. Why?<br />
P(s) = (s − 2) / ((s − 1)(s + 2))<br />
is stabilizable with a stable controller. Why?<br />
Stabilization of many plants<br />
Two-step stabilization<br />
(Block diagram: controller C(s) in closed loop with plant P(s).)
C(s) P(s)


Interpolation<br />

Let us consider a SISO control system with P(s) = N(s)/M(s) and<br />
C(s) = Nc(s)/Mc(s) (stable coprime factorizations). Then the closed-loop<br />
system is asymptotically stable if and only if<br />
U(s) = N(s)Nc(s) + M(s)Mc(s) is a unit (stable with stable inverse).<br />
Since we want C(s) to be stable, we can choose Nc(s) = C(s) and<br />
Mc(s) = 1. Then,<br />
C(s) = ( U(s) − M(s) ) / N(s)<br />
Of course, we must require that C(s) be stable. This fact depends<br />
on the role of the right half-plane zeros of P(s). Indeed, if N(b) = 0, with<br />
Re(b) ≥ 0 (b can be infinity), then the interpolation condition<br />
U(b) = M(b)<br />
must hold true.<br />
Consider the first example and take M(s) = (s−2)(s+2)/(s+1)^2,<br />
N(s) = (s−1)/(s+1)^2. Then, it must be: U(1) = −0.75, U(∞) = 1.<br />
Obviously this is impossible (a real unit cannot change sign along the positive real axis).<br />
Consider the second example and take M(s) = (s−1)/(s+2) and<br />
N(s) = (s−2)/(s+2)^2. It must be U(2) = 0.25, U(∞) = 1, which is indeed possible.
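These interpolation constraints are easy to check numerically; a minimal sketch (the helper names M1, M2 are ours, not from the notes) evaluates the factor M at the right half-plane zeros of N for the two examples:

```python
# Evaluating M at the right half-plane zeros of N(s) for the two
# strong-stabilization examples; function names M1, M2 are ours.

def M1(s):  # first example: M(s) = (s-2)(s+2)/(s+1)^2
    return (s - 2) * (s + 2) / (s + 1) ** 2

def M2(s):  # second example: M(s) = (s-1)/(s+2)
    return (s - 1) / (s + 2)

# A unit U must satisfy U(b) = M(b) at every RHP zero b of N
# (including b = infinity, where both M's tend to 1).
c1 = M1(1.0)   # first example, zero of N at s = 1  -> -0.75
c2 = M2(2.0)   # second example, zero of N at s = 2 ->  0.25
print(c1, c2)
# Matching -0.75 at s = 1 and 1 at infinity forces a real unit to
# change sign on [1, infinity), which is impossible; matching 0.25
# and 1 poses no obstruction.
```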


THEOREM 6<br />

Parity interlacing property (PIP)<br />
• P(s) is strongly stabilizable if and only if the number of poles of<br />
P(s) between any pair of real right half-plane zeros of P(s) (including<br />
infinity) is an even number.<br />
• We have seen that the PIP is equivalent to the existence of a<br />
unit which interpolates the right half-plane zeros of P(s).<br />
Hence, if the PIP holds, the problem boils down to the<br />
computation of this interpolant.<br />
• In the MIMO case, Theorem 6 holds unchanged. However it is<br />
worth pointing out that the poles must be counted according<br />
to their McMillan degree, and the zeros to be considered are<br />
the so-called blocking zeros.
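For SISO plants the PIP test amounts to counting real unstable poles between consecutive real unstable zeros; a sketch for the two examples above (pole and zero lists entered by hand, with `float('inf')` standing for the zero at infinity):

```python
# Parity interlacing property test for SISO plants (a sketch).

def pip_holds(rhp_poles, rhp_zeros):
    """True iff every interval between consecutive real RHP zeros
    contains an even number of real RHP poles."""
    zs = sorted(rhp_zeros)
    for a, b in zip(zs, zs[1:]):
        n_between = sum(1 for p in rhp_poles if a < p < b)
        if n_between % 2 == 1:
            return False
    return True

# P(s) = (s-1)/((s-2)(s+2)): real RHP zeros {1, inf}, real RHP pole {2}
ex1 = pip_holds([2.0], [1.0, float('inf')])   # one pole between -> False
# P(s) = (s-2)/((s-1)(s+2)): real RHP zeros {2, inf}, real RHP pole {1}
ex2 = pip_holds([1.0], [2.0, float('inf')])   # no poles between -> True
print(ex1, ex2)
```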


REFERENCES<br />
J.G. Truxal, Control Systems Synthesis, McGraw-Hill, NY, 1955.<br />
J.R. Ragazzini, G.F. Franklin, Sampled-Data Control Systems, McGraw-Hill, NY, 1958.<br />
M. Vidyasagar, Control System Synthesis: A Factorization Approach, MIT Press, Cambridge, MA, 1985.<br />
J.C. Doyle, B.A. Francis, A.R. Tannenbaum, Feedback Control Theory, Macmillan, NY, 1992.<br />
D.C. Youla, J.J. Bongiorno, C.N. Lu, “Single-loop feedback stabilization of linear multivariable dynamical plants”, Automatica, 10, pp. 159–175, 1974.<br />
G. Zames, “Feedback and optimal sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses”, IEEE Trans. on Automatic Control, AC-26, pp. 301–320, 1981.<br />
V. Kucera, Discrete Linear Control: The Polynomial Equation Approach, Wiley, NY, 1979.


H2 CONTROL<br />

SUMMARY<br />

• Definition and characterization of the H2 norm<br />

• State-feedback control (LQ)<br />

• Parametrization, mixed problem, duality<br />

• Disturbance rejection, Estimation (Kalman filter), Partial<br />

Information<br />

• Conclusions and generalizations


The L2 space<br />

L2 space: the set of (rational) functions G(s) such that<br />
(1/2π) ∫_{−∞}^{+∞} trace[ G(−jω)′G(jω) ] dω < ∞<br />
(strictly proper – no poles on the imaginary axis!)<br />
With the inner product of two functions G(s) and F(s):<br />
⟨G(s), F(s)⟩ = (1/2π) ∫_{−∞}^{+∞} trace[ G(−jω)′F(jω) ] dω<br />
The space L2 is a pre-Hilbert space. Since it is also complete, it is indeed a<br />
Hilbert space. The norm, induced by the inner product, of a function G(s) is<br />
||G(s)||2 = [ (1/2π) ∫_{−∞}^{+∞} trace[ G(−jω)′G(jω) ] dω ]^{1/2}


The subspaces H2 and H2⊥<br />
The subspace H2 is constituted by the functions of L2 which are analytic in<br />
the right half plane (strictly proper – stable!).<br />
The subspace H2⊥ is constituted by the functions of L2 which are analytic<br />
in the left half plane (strictly proper – antistable!).<br />
Note: A rational function in L2 is a strictly proper function without poles<br />
on the imaginary axis. A rational function in H2 is a strictly proper<br />
function without poles in the closed right half plane. A rational function in<br />
H2⊥ is a strictly proper function without poles in the closed left half plane.<br />
The functions in H2 are related to the square integrable functions of the<br />
real variable t in [0,∞). The functions in H2⊥ are related to the square<br />
integrable functions of the real variable t in (−∞,0].<br />
A function in L2 can be written in a unique way as the sum of a function<br />
in H2 and a function in H2⊥: G(s) = G1(s) + G2(s). Of course, G1(s) and G2(s)<br />
are orthogonal, i.e. ⟨G1(s), G2(s)⟩ = 0, so that<br />
||G(s)||2^2 = ||G1(s)||2^2 + ||G2(s)||2^2
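The norm additivity of the decomposition can be checked numerically; a sketch for the illustrative function G(s) = 1/(s+1) + 1/(s−2) (our choice: stable plus antistable part), integrating along the imaginary axis with the substitution ω = tan(t):

```python
import math

def G1(s): return 1 / (s + 1)   # stable part, in H2
def G2(s): return 1 / (s - 2)   # antistable part, in H2-perp

def norm_sq(F, n=20000):
    """(1/2pi) * integral over R of |F(jw)|^2 dw, midpoint rule in t = atan(w)."""
    h = math.pi / n
    acc = 0.0
    for k in range(n):
        t = -math.pi / 2 + (k + 0.5) * h
        w = math.tan(t)
        acc += abs(F(1j * w)) ** 2 * (1 + w * w) * h   # dw = (1 + w^2) dt
    return acc / (2 * math.pi)

n1 = norm_sq(G1)                          # exact value: 1/2
n2 = norm_sq(G2)                          # exact value: 1/4
nG = norm_sq(lambda s: G1(s) + G2(s))     # exact value: 3/4 = n1 + n2
print(n1, n2, nG)
```

The cross terms integrate to zero, which is the orthogonality of H2 and H2⊥.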


System theoretic interpretation of the H2 norm<br />
Consider the system<br />
ẋ(t) = Ax(t) + Bw(t)<br />
z(t) = Cx(t),  x(0) = 0<br />
and let G(s) be its transfer function. Also consider the quantity to be<br />
evaluated:<br />
J1 = Σ_{i=1,…,m} ∫_0^∞ z^(i)′(t) z^(i)(t) dt<br />
where z^(i) represents the output of the system forced by an impulse input at<br />
the i-th component of the input.<br />

J1 = Σ_{i=1,…,m} ∫_0^∞ z^(i)′(t) z^(i)(t) dt = (1/2π) ∫_{−∞}^{+∞} Σ_{i=1,…,m} Z^(i)(−jω)′ Z^(i)(jω) dω<br />
= (1/2π) ∫_{−∞}^{+∞} Σ_{i=1,…,m} ei′ G(−jω)′G(jω) ei dω<br />
= (1/2π) ∫_{−∞}^{+∞} trace[ G(−jω)′G(jω) ] dω = ||G(s)||2^2


Computation of the norm<br />

Lyapunov equations<br />

“Control-type Lyapunov equation”:  A′Po + Po A + C′C = 0<br />
“Filter-type Lyapunov equation”:  A Pr + Pr A′ + BB′ = 0<br />
||G(s)||2^2 = trace[ B′Po B ] = trace[ C Pr C′ ]<br />

Indeed,<br />
J1 = Σ_{i=1,…,m} ∫_0^∞ z^(i)′(t) z^(i)(t) dt = ∫_0^∞ Σ_{i=1,…,m} [ ei′B′ e^{A′t} C′C e^{At} B ei ] dt<br />
= ∫_0^∞ trace[ e^{A′t} C′C e^{At} B ( Σ_{i=1,…,m} ei ei′ ) B′ ] dt<br />
= trace[ B′ ( ∫_0^∞ e^{A′t} C′C e^{At} dt ) B ] = trace[ B′Po B ]<br />
= trace[ C ( ∫_0^∞ e^{At} BB′ e^{A′t} dt ) C′ ] = trace[ C Pr C′ ]
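Both Lyapunov computations can be carried out by hand on a small example; the sketch below (system G(s) = 1/((s+1)(s+2)) in controllable canonical form, our choice) solves the two symmetric 2×2 Lyapunov equations with a naive linear solver and checks that the two traces coincide:

```python
def solve3(M, b):
    """Naive Gaussian elimination for a 3x3 linear system."""
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    n = 3
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lyap(A, Q):
    """Solve A'P + PA + Q = 0 for symmetric 2x2 P, unknowns (p11, p12, p22)."""
    (a11, a12), (a21, a22) = A
    M = [[2 * a11, 2 * a21, 0.0],        # (1,1) entry of A'P + PA
         [a12, a11 + a22, a21],          # (1,2) entry
         [0.0, 2 * a12, 2 * a22]]        # (2,2) entry
    return solve3(M, [-Q[0][0], -Q[0][1], -Q[1][1]])

A = [[0.0, 1.0], [-2.0, -3.0]]           # B = [0,1]', C = [1,0]
p11, p12, p22 = lyap(A, [[1.0, 0.0], [0.0, 0.0]])      # A'Po + PoA + C'C = 0
q11, q12, q22 = lyap([[0.0, -2.0], [1.0, -3.0]],       # pass A' to reuse lyap:
                     [[0.0, 0.0], [0.0, 1.0]])         # A Pr + Pr A' + BB' = 0
norm_sq_ctrl = p22       # trace(B'Po B) with B = [0,1]'
norm_sq_filt = q11       # trace(C Pr C') with C = [1,0]
print(norm_sq_ctrl, norm_sq_filt)   # both equal 1/12
```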


Other interpretations<br />
Assume now that w is a white noise with identity intensity and consider<br />
the quantity:<br />
J2 = lim_{t→∞} E( z′(t) z(t) )<br />
It follows:<br />
J2 = lim_{t→∞} trace E[ ∫_0^t Ce^{A(t−τ)}B w(τ) dτ ∫_0^t w(σ)′B′ e^{A′(t−σ)} C′ dσ ]<br />
= lim_{t→∞} trace[ ∫_0^t ∫_0^t Ce^{A(t−τ)}B E[ w(τ)w(σ)′ ] B′e^{A′(t−σ)}C′ dτ dσ ]<br />
= lim_{t→∞} trace[ ∫_0^t Ce^{A(t−τ)} BB′ e^{A′(t−τ)} C′ dτ ]<br />
= lim_{t→∞} trace[ C P(t) C′ ],  P(t) = ∫_0^t e^{Aτ} BB′ e^{A′τ} dτ<br />
Since Ṗ(t) = A P(t) + P(t) A′ + BB′, P(t) tends to the solution Y of<br />
A Y + Y A′ + BB′ = 0<br />
so that J2 = trace( C Y C′ ) = ||G(s)||2^2.<br />
Finally, consider the quantity<br />
J3 = lim_{T→∞} (1/T) ∫_0^T z′(t) z(t) dt<br />
and let again w(.) be a white noise with identity intensity. It follows<br />
E( z′(t) z(t) ) = trace[ C P(t) C′ ],  Y = lim_{T→∞} (1/T) ∫_0^T P(τ) dτ<br />
so that J3 = ||G(s)||2^2 as well.

Optimal Linear Quadratic Control (LQ) versus Full-Information H2 control<br />
ẋ = Ax + Bu<br />
J = ∫_0^∞ ( x′Qx + 2u′Sx + u′Ru ) dt<br />
Find (if any) u(⋅) minimizing J.<br />
Assume that<br />
[ Q  S′ ; S  R ] ≥ 0,  R > 0<br />
and notice that these conditions are equivalent to<br />
W = Q − S′R^-1 S ≥ 0,  R > 0<br />
The above matrix result is the well known Schur Lemma. The proof is as follows. If W ≥ 0 and R > 0, then for every pair (x, u)<br />
[x′ u′] [ Q  S′ ; S  R ] [x′ u′]′ = x′Wx + ( u + R^-1 Sx )′ R ( u + R^-1 Sx ) ≥ 0.<br />
Conversely, assume R > 0 and suppose by contradiction that W is not positive<br />
semidefinite: then Wx = νx for some ν < 0 and x ≠ 0, and the choice u = −R^-1 Sx yields<br />
[x′ u′] [ Q  S′ ; S  R ] [x′ u′]′ = x′Wx = ν x′x < 0, a contradiction.
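The completion-of-squares identity behind the Schur Lemma can be checked on scalar data (the numbers below are ours):

```python
Q, S, R = 3.0, 2.0, 2.0
W = Q - S * S / R                     # Schur complement, here W = 1 >= 0
dev = 0.0
for x, u in [(1.0, 0.5), (-2.0, 3.0), (0.7, -1.1)]:
    lhs = Q * x * x + 2 * u * S * x + R * u * u          # [x u] H [x u]'
    rhs = W * x * x + R * (u + S * x / R) ** 2           # completed square
    dev = max(dev, abs(lhs - rhs))
print(W, dev)    # dev = 0 up to rounding
```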


Optimal Linear Quadratic Control (LQ) versus Full-Information H2 control<br />
Let C11 be a factorization of W = C11′C11 and define<br />
C1 = [ C11 ; R^{-1/2}S ],  D12 = [ 0 ; R^{1/2} ]<br />
so that C1′C1 = Q, D12′C1 = S and D12′D12 = R. Then, it is easy to verify that<br />
J = ∫_0^∞ ( x′Qx + 2u′Sx + u′Ru ) dt = ∫_0^∞ z′z dt = ||z||2^2,  z = C1x + D12u<br />
Moreover, the free motion of the state can be considered as a state motion<br />
caused by an impulsive input: with w(t) = imp(t), x(0^-) = 0 and B1 = x0, consider<br />
ẋ = Ax + B2u + B1w<br />
z = C1x + D12u<br />
• LQS problem<br />
The (LQ) problem with stability is that of finding a controller<br />
ξ̇ = Fξ + G1x + G2w<br />
u = Hξ + E1x + E2w<br />
fed by x and w and yielding u that minimizes the H2 norm of the transfer<br />
function from w to z.<br />

Full information<br />
ẋ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
y = [ x′ w′ ]′<br />
(Standard scheme: plant P with inputs w, u and outputs z, y, in feedback with the controller K.)<br />
Problem: Find the minimum value of ||Tzw||2 attainable by an<br />
admissible controller. Find an admissible controller minimizing<br />
||Tzw||2. Find the set of all controllers generating all ||Tzw||2 < γ.<br />
Assumptions:<br />
(A, B2) stabilizable,  D12′D12 > 0<br />
(Ac, C1c) detectable, where<br />
Ac = A − B2( D12′D12 )^-1 D12′C1,  C1c = ( I − D12( D12′D12 )^-1 D12′ )C1<br />
(equivalently, the invariant zeros of (A, B2, C1, D12) are stable)<br />


Solution of the Full Information Problem<br />
Theorem 1<br />
There exists an admissible FI controller minimizing ||Tzw||2. It is<br />
given by<br />
u = F2 x,  F2 = −( D12′D12 )^-1 ( B2′P + D12′C1 )<br />
where P is the positive semidefinite and stabilizing solution of the<br />
Riccati equation<br />
A′P + PA − ( PB2 + C1′D12 )( D12′D12 )^-1 ( B2′P + D12′C1 ) + C1′C1 = 0<br />
i.e. such that<br />
Acc = A − B2( D12′D12 )^-1 ( B2′P + D12′C1 ) = A + B2F2  is stable.<br />
The minimum norm is:<br />
||Tzw||2^2 = α^2 = || Pc(s)B1 ||2^2 = trace( B1′PB1 ),  Pc(s) = ( C1 + D12F2 )( sI − A − B2F2 )^-1<br />
The set of all controllers generating all ||Tzw||2 < γ is given by<br />
u = F2 x + Q(s) w<br />
where Q(s) is a stable strictly proper system satisfying<br />
|| Q(s) ||2^2 < γ^2 − α^2
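A scalar instance of Theorem 1 makes the formulas concrete (all numbers below are our toy choices, with D12′D12 = 1 and C1′D12 = 0, so the Riccati equation reduces to 2P − P² + 1 = 0):

```python
import math

A, B1, B2 = 1.0, 1.0, 1.0
P = 1.0 + math.sqrt(2.0)                   # stabilizing root of P^2 - 2P - 1 = 0
assert abs(2 * P - P * P + 1.0) < 1e-12    # Riccati residual
F2 = -P                                    # F2 = -(D12'D12)^{-1}(B2'P + D12'C1)
Acl = A + B2 * F2                          # A + B2 F2 = -sqrt(2) < 0: stabilizing
alpha_sq = B1 * P * B1                     # minimum ||Tzw||_2^2 = trace(B1'P B1)
print(P, Acl, alpha_sq)
```

The other root of the quadratic, 1 − √2, would give the unstable closed loop 1 − (1 − √2) = √2 > 0, which is why only the stabilizing solution is retained.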


Proof of Theorem 1<br />

The assumptions guarantee the existence of the stabilizing solution to the<br />

Riccati equation P. Let v=u-F2x so that u=F2x+v, where v is a new input.<br />

Hence, z(s)=Pc(s)B1w(s)+U(s)v, where U(s)=Pc(s)B2+D12. It follows that<br />

Tzw(s) = Pc(s)B1 + U(s)Tvw(s). The problem is recast as that of finding a controller<br />
minimizing the norm from w to v of the following system Pv, given by:<br />

ẋ = Ax + B1w + B2u<br />
v = −F2x + u<br />
y = [ x′ w′ ]′<br />
(Scheme: Pv, with inputs w, u and outputs v, y, in feedback with the controller K.)<br />
Notice that Tzw(s) is strictly proper iff Tvw(s) is such. Exploiting the<br />
Riccati equality it is simple to verify that U(s)~U(s) = D12′D12 and that<br />
U(s)~Pc(s) is antistable. Hence ||Tzw(s)||2^2 = ||Pc(s)B1||2^2 + ||Tvw(s)||2^2, so that<br />
the optimal control is v = 0, i.e. u = F2x.<br />
Finally, take a controller K(s) such that ||Tzw(s)||2^2 < γ^2. From this controller<br />
and the system it is possible to form the transfer function Q(s) = Tvw(s). Of<br />
course, it is ||Q(s)||2^2 < γ^2 − α^2. It is enough to show that the controller<br />
yielding u(s) = F2x(s) + v(s) = F2x(s) + Q(s)w(s) generates the same transfer<br />
function Tzw(s). This computation is left to the reader.


Disturbance Feedforward<br />
ẋ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
y = C2x + w<br />
Same assumptions as FI + stability of A − B1C2.<br />
C2 = 0 → direct compensation<br />
C2 ≠ 0 → indirect compensation<br />
R(s) = [ A − B1C2  B1  B2 ; 0  0  I ; I  0  0 ; −C2  I  0 ]<br />
(Scheme: KDF is obtained by connecting the FI controller KFI(s) with the system R(s), which reconstructs x and w from y and u.)<br />
The proof of the DF theorem consists in verifying that the transfer function from w to z achieved by<br />
the FI regulator for the system A, B1, B2, C1, D12 is the same as the transfer function from w to z<br />
obtained by using the regulator KDF shown in the figure.


Output Estimation<br />
ẋ = Ax + B1w + B2u<br />
z = C1x + u<br />
y = C2x + D21w<br />
u = −ẑ (the controller output is the estimate of C1x with opposite sign, so that z is the estimation error)<br />
(Scheme: plant P, with blocks P11(s) and P21(s), in feedback with the filter K(s) producing u from y.)<br />
Assumptions:<br />
(A, C2) detectable,  D21D21′ > 0<br />
(Af, B1f) stabilizable, where<br />
Af = A − B1D21′( D21D21′ )^-1 C2,  B1f = B1( I − D21′( D21D21′ )^-1 D21 )<br />
A − B2C1 stable


Solution of the Output Estimation problem<br />
Theorem 2<br />
There exists an admissible controller (filter) minimizing ||Tzw||2. It<br />
is given by<br />
ξ̇ = Aξ + B2u + L2( C2ξ − y )<br />
u = −C1ξ<br />
where<br />
L2 = −( PC2′ + B1D21′ )( D21D21′ )^-1<br />
and P is the positive semidefinite and stabilizing solution of the<br />
Riccati equation<br />
AP + PA′ − ( PC2′ + B1D21′ )( D21D21′ )^-1 ( C2P + D21B1′ ) + B1B1′ = 0<br />
i.e. such that<br />
Aff = A − ( PC2′ + B1D21′ )( D21D21′ )^-1 C2 = A + L2C2  is stable.

Solution of the Output Estimation problem (cont.)<br />
The optimal norm is<br />
||Tzw||2^2 = || C1Pf(s) ||2^2 = trace( C1PC1′ ),  Pf(s) = ( sI − A − L2C2 )^-1 ( B1 + L2D21 )<br />
The set of all controllers generating all ||Tzw||2 < γ is given by<br />
M(s) = [ Aff − B2C1  L2  −B2 ; C1  0  I ; C2  I  0 ]<br />
(Scheme: M(s) in feedback with Q(s), input y, output u.)<br />
where Q(s) is proper, stable and such that || Q(s) ||2^2 < γ^2 − || C1Pf(s) ||2^2
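Theorem 2 is the exact dual of the state-feedback result; a scalar sketch (toy numbers ours: A = 1, C2 = 1, B1D21′ = 0, D21D21′ = 1, so the filter Riccati equation is again 2P − P² + 1 = 0) also shows the estimation error settling at the rate A + L2C2:

```python
import math

A, C2 = 1.0, 1.0
P = 1.0 + math.sqrt(2.0)          # stabilizing root of the filter Riccati equation
L2 = -P * C2                      # L2 = -(P C2' + B1 D21')(D21 D21')^{-1}
Af = A + L2 * C2                  # -sqrt(2): stable filter dynamics
e, dt = 1.0, 1e-3                 # error e = x - xi obeys e_dot = (A + L2 C2) e
for _ in range(int(5.0 / dt)):    # Euler run over 5 time units
    e += dt * Af * e
print(Af, e)                      # e has decayed to about exp(-5*sqrt(2))
```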


Proof of Theorem 2<br />

It is enough to observe that the “transpose system”<br />

λ̇ = A′λ + C1′μ + C2′ν<br />
ϑ = B1′λ + D21′ν<br />
ς = B2′λ + μ<br />
has the same structure as the system defining the DF problem.<br />

Hence the solution is the “transpose” solution of the DF problem.<br />

It is worth noticing that the H2 solution does not depend on the<br />

particular linear combination of the state that one wants to<br />

estimate.


Filtering<br />
Given the system<br />
ẋ = Ax + ς1 + B2u<br />
y = Cx + ς2<br />
where [ς1′ ς2′]′ are zero mean white Gaussian noises with intensity<br />
W = [ W11  W12 ; W12′  W22 ],  W22 > 0<br />
find an estimate u of the linear combination Sx of the state such as<br />
to minimize<br />
J = lim_{t→∞} E[ ( Sx(t) − u(t) )′( Sx(t) − u(t) ) ]<br />
Letting<br />
C1 = −S,  z = C1x + u<br />
ỹ = W22^{-1/2} y = C2x + D21w,  C2 = W22^{-1/2} C,  D21 = [ 0  I ]<br />
B11B11′ = W11 − W12W22^-1 W12′<br />
B1 = [ B11  W12W22^{-1/2} ]<br />
[ ς1′ ς2′ ]′ = [ B11  W12W22^{-1/2} ; 0  W22^{1/2} ] w<br />
the problem is recast to find a controller that minimizes the H2<br />
norm from w to z.


The partial information problem (LQG)<br />
ẋ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
y = C2x + D21w<br />
Assumptions: FI + DF.<br />
(Standard scheme: plant P from (w, u) to (z, y), in feedback with the controller K(s).)
K(s)


Theorem 3<br />
There exists an admissible controller minimizing ||Tzw||2. It is<br />
given by<br />
ξ̇ = Aξ + B2u + L2( C2ξ − y )<br />
u = F2ξ<br />
The optimal norm is<br />
||Tzw||2^2 = || Pc(s)L2 ||2^2 + || C1Pf(s) ||2^2 = || Pc(s)B1 ||2^2 + || F2Pf(s) ||2^2 = γ0^2<br />
The set of all controllers generating all ||Tzw||2 < γ is given by<br />
S(s) = [ A + B2F2 + L2C2  L2  −B2 ; −F2  0  I ; C2  I  0 ]<br />
(Scheme: S(s) in feedback with Q(s), input y, output u.)<br />
where Q(s) is proper, stable and such that || Q(s) ||2^2 < γ^2 − γ0^2<br />
Proof<br />
Separation principle: the problem is reformulated as an Output Estimation problem for the estimate<br />
of −F2x.


Important topics to discuss<br />

• Robustness of the LQ regulator (FI)<br />

• Robustness of the Kalman filter (OE)<br />

• Loss of robustness of the LQG regulator (Partial Information)<br />

• LTR technique


.<br />

THE H∞ CONTROL PROBLEM<br />

SUMMARY<br />

• Definition and characterization of the H∞ norm<br />

• Description of the uncertainty: standard problem<br />

• State-feedback control<br />

• Parametrization, mixed problem, duality<br />

• Disturbance rejection, Estimation, Partial Information<br />

• Conclusions and generalizations


.<br />

What is the H∞ norm?<br />
For G(s) rational, stable and proper:<br />
||G(s)||∞ = sup_{Re(s) ≥ 0} || G(s) || = max_ω σ̄( G(jω) )<br />
where, for a constant matrix Ω, || Ω || = σ̄(Ω) = λmax( Ω*Ω )^{1/2} = λmax( ΩΩ* )^{1/2}.<br />
(Bode diagram: the H∞ norm is the peak of the magnitude plot of G(jω).)<br />
For the frequency-domain definition we can consider a transfer function without poles on the<br />
imaginary axis. It is easy to understand that the norm of a function in L∞ can be recast to the norm<br />
of its stable part given through the so-called inner-outer factorization. The time-domain definition,<br />
given in the next page, requires, for obvious reasons, the stability.

Time-domain characterization<br />
G(s) = D + C( sI − A )^-1 B<br />
ẋ = Ax + Bw<br />
z = Cx + Dw<br />
A asymptotically stable<br />
||G(s)||∞ = sup_{w ∈ L2, w ≠ 0} ||z||2 / ||w||2<br />
There does not exist a direct procedure to compute the infinity<br />
norm. However, it is easy to establish whether the norm is<br />
bounded from above by a given positive number γ.<br />
The symbol L2 denotes indifferently the space of square integrable signals or the space of strictly proper rational<br />
functions without poles on the imaginary axis. If w(.) is a white noise, the infinity norm represents<br />
the square root of the maximal intensity of the output spectrum.


.<br />

BOUNDED REAL LEMMA<br />

Let γ be a positive number and let A be asymptotically stable.<br />

THEOREM 1<br />
The three following conditions are equivalent to each other:<br />
(i) ||G(s)||∞ < γ<br />
(ii) ||D|| < γ and the associated Hamiltonian matrix H (see the proof) has no eigenvalues on the imaginary axis<br />
(iii) ||D|| < γ and the associated Riccati equation admits a positive semidefinite solution P (the strict inequality a positive definite one)


Comments<br />

Notice that as γ tends to infinity, the Riccati equation becomes a<br />

Lyapunov equation that admits a (unique) positive semidefinite<br />

solution, thanks to the system stability. This is obvious if one<br />

notices that the infinity norm of a stable system is finite.<br />

Proof of Theorem 1<br />

For simplicity, let us consider the case where the system is strictly proper, i.e. D=0. The equation and<br />

the inequality are<br />

Also denote: G(s) ~ =G(-s)′.<br />

.<br />

A′P + PA + γ^-2 PBB′P + C′C = (<) 0<br />

Points (ii)→(i) and (iii) →(i).<br />

Assume that there exists a positive semidefinite (definite) solution P of the equation (inequality).<br />

We can write:<br />

( sI + A′ )P − P( sI − A ) + γ^-2 PBB′P + C′C = (<) 0<br />
Premultiplying to the left by B′( sI + A′ )^-1 and to the right by ( sI − A )^-1 B, it follows<br />
G~(s)G(s) = γ^2 I − T~(s)T(s),  T(s) = γI − γ^-1 B′P( sI − A )^-1 B<br />
so that ||G(s)||∞ < γ.<br />

Points (i)→(ii).<br />
Assume that ||G(s)||∞ < γ. We now prove that the Hamiltonian matrix<br />
H = [ A  γ^-2 BB′ ; −C′C  −A′ ]<br />
does not have eigenvalues on the imaginary axis. Indeed, if, by contradiction, jω is one such<br />
eigenvalue, then


( jωI − A )x − γ^-2 BB′y = 0<br />
( jωI + A′ )y + C′Cx = 0<br />
Hence Cx = −γ^-2 C(jωI−A)^-1 BB′(jωI+A′)^-1 C′Cx = γ^-2 G(jω)G(−jω)′Cx; since ||G(s)||∞ < γ, the<br />
matrix I − γ^-2 G(jω)G(−jω)′ is invertible, so that Cx = 0. Consequently,<br />
y = 0 and x = 0, that is a contradiction. Then, since the Hamiltonian matrix does not have imaginary<br />
eigenvalues, it must have 2n eigenvalues, n of them having negative real parts. The remaining n<br />
eigenvalues are the opposite of the previous ones. This fact follows from the matrix being<br />
Hamiltonian, i.e. satisfying JH+H′J=0, where J=[0 I; −I 0]. Take the n-dimensional subspace<br />
generated by the (generalized) eigenvectors associated with the stable eigenvalues and choose a<br />
matrix S=[X* Y*]* whose range coincides with such a subspace. For a certain asymptotically stable<br />
matrix T (restriction of H) it follows HS=ST. We now prove that X*Y=Y*X. Indeed, define<br />
V=X*Y−Y*X=S*JS and notice that VT=S*JST=S*JHS=−S*H′JS=−T*S*JS=−T*V, so that<br />
VT+T*V=0. The stability of T yields (by the well known Lyapunov Lemma) that the unique<br />
solution is V=0, i.e. X*Y=Y*X. We now prove that X is invertible. Indeed, from HS=ST it follows<br />
that AX+γ^-2 BB′Y=XT and −A′Y−C′CX=YT. Premultiplying the first equation by Y* yields<br />
Y*AX+γ^-2 Y*BB′Y=Y*XT=X*YT. Hence if, by contradiction, v∈Ker(X), then v*Y*BB′Yv=0, so<br />
that B′Yv=0. From the first equation we have XTv=0 and −A′Yv=YTv. In conclusion, if v∈Ker(X)<br />
then Tv∈Ker(X). By induction it follows T^k v∈Ker(X), and again<br />
−A′YT^k v = YT^{k+1} v, for every nonnegative k. Take the monic polynomial h(T) of minimum degree such that<br />
h(T)v=0 (notice that it always exists) and write h(T)=(λI−T)m(T). Since T is asymptotically stable, it<br />
results Re(λ) < 0. Letting v̄ = m(T)v ≠ 0, it follows Tv̄ = λv̄ and v̄ ∈ Ker(X), so that<br />
A′(Yv̄) = −λ(Yv̄), with Re(−λ) > 0: if Yv̄ ≠ 0 this contradicts the stability of A, while Yv̄ = 0,<br />
together with Xv̄ = 0, would contradict the full column rank of S. Hence X is invertible and<br />
P = YX^-1 yields the desired solution of the Riccati equation.
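The Hamiltonian test above also gives a practical bisection scheme for the H∞ norm; a one-state sketch (our example G(s) = 1/(s+1), for which H is 2×2 with zero trace and eigenvalues satisfying s² = A² − B²C²/γ²):

```python
A, B, C = -1.0, 1.0, 1.0      # G(s) = 1/(s+1), so ||G||_inf = |G(0)| = 1

def imag_eigs(gamma):
    # H = [[A, BB'/g^2], [-C'C, -A']] has zero trace here; its eigenvalues
    # satisfy s^2 = A^2 - (B*C)^2/g^2, purely imaginary exactly when s^2 < 0
    return A * A - (B * C / gamma) ** 2 < 0

lo, hi = 1e-6, 100.0          # bracket: imaginary eigenvalues at lo, none at hi
for _ in range(60):           # bisection on gamma
    mid = 0.5 * (lo + hi)
    if imag_eigs(mid):
        lo = mid              # gamma still below the norm
    else:
        hi = mid
hinf = hi
print(hinf)                   # ~ 1.0
```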


.<br />

Worst Case<br />
The “worst case” interpretation of the H∞ norm is given by the<br />
following result:<br />
THEOREM 2<br />
Let A be stable, ||G(s)||∞ < γ and let x0 be the initial state of the<br />
system. Then,<br />
sup_{w ∈ L2} ( ||z||2^2 − γ^2 ||w||2^2 ) = x0′Px0<br />
where P is the solution of the BRL Riccati equation.<br />
Proof of Theorem 2<br />
Consider the function V(x)=x′Px and its derivative along the trajectories of the system. Letting<br />
Δ = ( γ^2 I − D′D )^-1 we have:<br />

V̇ = x′( A′P + PA )x + w′B′Px + x′PBw<br />
= −x′C′Cx − x′( PB + C′D )Δ( B′P + D′C )x + w′B′Px + x′PBw<br />
= −z′z + γ^2 w′w − ( w − wws )′Δ^-1 ( w − wws )<br />
where wws = Δ(B′P+D′C)x is the worst disturbance and the Riccati equation has been used in the<br />
second equality. Recalling that the system is asymptotically<br />
stable and taking the integral of both sides, the conclusion follows.
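Theorem 2 can be verified by simulation on a scalar example (our data: A = −1, B = C = 1, D = 0, γ = 2, for which the BRL Riccati equation −2P + P²/4 + 1 = 0 has stabilizing root P = 4 − 2√3): driving the system with the worst disturbance wws = (B′P/γ²)x makes ||z||² − γ²||w||² attain x0′Px0.

```python
import math

g = 2.0
P = 4.0 - 2.0 * math.sqrt(3.0)           # stabilizing root: A + BB'P/g^2 < 0
assert abs(-2.0 * P + P * P / (g * g) + 1.0) < 1e-12   # Riccati residual
x0 = 1.0
x, dt, val = x0, 1e-4, 0.0
a_cl = -1.0 + P / (g * g)                # closed loop under w = w_ws
for _ in range(int(40.0 / dt)):          # accumulate ||z||^2 - g^2 ||w||^2
    w = (P / (g * g)) * x                # worst disturbance w_ws = (B'P/g^2) x
    val += dt * (x * x - g * g * w * w)  # z = Cx = x
    x += dt * a_cl * x                   # Euler step
print(val, P * x0 * x0)                  # both ~ 0.536
```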


.<br />

Observation: LMI<br />
Schur Lemma<br />
By the Schur Lemma, the pair of conditions<br />
P > 0,  A′P + PA + ( PB + C′D )( γ^2 I − D′D )^-1 ( B′P + D′C ) + C′C < 0<br />
is equivalent to the linear matrix inequality (in P)<br />
[ −( A′P + PA + C′C )  −( PB + C′D ) ; −( B′P + D′C )  γ^2 I − D′D ] > 0,  P > 0<br />
.<br />

Complex Stability Radius and quadratic stability<br />
ẋ = ( A + LΔN )x,  ||Δ|| ≤ α<br />
or, equivalently,<br />
ẋ = Ax + Lw,  z = Nx,  w = Δz<br />
The system is said to be quadratically stable if there exists a single P > 0 (Lyapunov function) satisfying<br />
( A + LΔN )′P + P( A + LΔN ) < 0<br />
for every Δ in the set ||Δ|| ≤ α.<br />
THEOREM 3<br />
The system is quadratically stable if and only if ||N( sI − A )^-1 L||∞ < 1/α.


Proof of Theorem 3<br />
First observe that the following inequality holds:<br />
( A + LΔN )′P + P( A + LΔN )<br />
= A′P + PA + N′Δ′ΔN + PLL′P − ( N′Δ′ − PL )( ΔN − L′P )<br />
≤ A′P + PA + PLL′P + α^2 N′N<br />
Hence, if there exists P > 0 satisfying<br />
A′P + PA + PLL′P + α^2 N′N < 0     (*)<br />
then A + LΔN is asymptotically stable for every Δ, ||Δ|| ≤ α, with the<br />
same Lyapunov function (quadratic stability). By the Bounded Real Lemma, such a P exists if<br />
||αN( sI − A )^-1 L||∞ < 1, i.e. ||N( sI − A )^-1 L||∞ < 1/α. Conversely, if this norm bound fails,<br />
for any candidate P > 0 there exist s = jω and an admissible Δ violating (*), a contradiction.
s=jω that violates (*), a contradiction.


Real stability radius<br />
A = stable (eigenvalues with strictly negative real part)<br />
rr( A, B, C ) = inf { ||Δ|| : Δ ∈ R^{m×p} and A + BΔC is unstable }<br />
= inf_{s = jω} inf { ||Δ|| : Δ ∈ R^{m×p} and det( sI − A − BΔC ) = 0 }<br />
= inf_{s = jω} inf { ||Δ|| : Δ ∈ R^{m×p} and det( I − ΔC( sI − A )^-1 B ) = 0 }<br />
Linear algebra problem: compute<br />
μr( M ) = [ inf { ||Δ|| : Δ ∈ R^{m×p} and det( I − ΔM ) = 0 } ]^-1<br />
Proposition<br />
μr( M ) = inf_{γ ∈ (0,1]} σ2( [ Re M  −γ Im M ; γ^-1 Im M  Re M ] )<br />
where σ2(⋅) denotes the second largest singular value. Taking M = G(jω) = C( jωI − A )^-1 B it follows<br />
rr( A, B, C ) = [ sup_ω inf_{γ ∈ (0,1]} σ2( [ Re G(jω)  −γ Im G(jω) ; γ^-1 Im G(jω)  Re G(jω) ] ) ]^-1
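The Proposition can be applied directly to a SISO example (our choice G(s) = 1/((s+1)(s+2)), for which A + BΔC with Δ real scalar first loses stability at Δ = 2): the 2×2 matrix in the formula has singular values computable from its trace-of-squares and determinant.

```python
import math

def G(s):
    return 1.0 / ((s + 1.0) * (s + 2.0))   # our SISO example

def sigma2(M, g):
    # smaller singular value of Z = [[Re M, -g Im M], [Im M / g, Re M]]
    a, b = M.real, M.imag
    t = 2 * a * a + (b / g) ** 2 + (g * b) ** 2   # trace of Z'Z = s1^2 + s2^2
    d = a * a + b * b                             # det Z = s1 * s2
    disc = max(t * t - 4.0 * d * d, 0.0)
    return math.sqrt(max((t - math.sqrt(disc)) / 2.0, 0.0))

def mu_r(M, n_g=200):
    # crude grid approximation of the infimum over gamma in (0, 1]
    return min(sigma2(M, (k + 1.0) / n_g) for k in range(n_g))

peak = max(mu_r(G(1j * w)) for w in [k * 0.01 for k in range(1001)])
rr = 1.0 / peak
print(rr)   # 2.0: the peak of mu_r occurs at w = 0, where G(0) = 1/2 is real
```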


Example<br />
(Numerical example: a fourth-order system with given matrices A ∈ R^{4×4}, B, C; the two plots report, as functions of ω, the quantities involved in the computation of rr(A, B, C), together with the resulting worst-case real perturbation Δworst ∈ R^{2×2}.)
worst


.<br />
Entropy<br />
Consider a stable system G(s) with state space realization<br />
(A,B,C,0), and assume that ||G(s)||∞ < γ. The γ-entropy of G(s) is<br />
Iγ(G) = −(γ²/2π) ∫_{−∞}^{+∞} ln det( I − γ^-2 G~(jω)G(jω) ) dω<br />


Comments<br />
It is easy to check that the γ-entropy measure is not a norm, but it<br />
can be considered as a generalization of the square of the H2 norm.<br />
Indeed the γ-entropy can be written as<br />
Iγ(G) = (1/2π) ∫_{−∞}^{+∞} Σ_{i=1,…,m} f( σi²[G(jω)] ) dω,   f(x²) = −γ² ln( 1 − x²/γ² )<br />
[figure: graph of f(x²) parametrized in γ>1, for x ∈ [0.6, 1]. For large γ the function f(x²) gets closer to x² (red line).]<br />
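The frequency-domain formula above can be checked numerically. The sketch below uses an assumed SISO example G(s) = 1/(s² + s + 1), whose H2 norm squared is 0.5, approximates the integral on a truncated uniform grid, and illustrates that Iγ(G) ≥ ||G||₂² with equality approached as γ → ∞:

```python
import numpy as np

def gamma_entropy(mag2, omegas, gamma):
    """Approximate Ig(G) = (1/2pi) * integral of -g^2 ln(1 - |G(jw)|^2/g^2) dw
    on a uniform frequency grid (requires ||G||inf < gamma)."""
    f = -gamma**2 * np.log(1.0 - mag2/gamma**2)
    dw = omegas[1] - omegas[0]
    return dw * np.sum(f) / (2.0*np.pi)

w = np.linspace(-200.0, 200.0, 400001)
mag2 = 1.0 / ((1.0 - w**2)**2 + w**2)   # |G(jw)|^2 for G(s) = 1/(s^2 + s + 1)
I2 = gamma_entropy(mag2, w, 2.0)        # entropy for gamma = 2
I100 = gamma_entropy(mag2, w, 100.0)    # for large gamma, approaches ||G||_2^2 = 0.5
```

The truncation at |ω| = 200 is an assumption justified here by the |G(jω)|² ~ ω⁻⁴ tail decay of this particular example.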


Comments<br />
An interpretation of the γ-entropy for SISO systems is as follows.<br />
Consider the feedback configuration:<br />
[figure: feedback loop, w → G(s) → z, with Δ(s) in the feedback path]<br />
where w(.) is a white noise and Δ(s) is a random transfer function<br />
with Δ(jω1) independent of Δ(jω2) and uniformly distributed on<br />
the disk of radius γ^-1 in the complex plane.<br />
Then the expectation over the random feedback transfer<br />
function is<br />
EΔ [ || G(s) / (1 − G(s)Δ(s)) ||2² ] = Iγ(G)<br />


Proof of the Proposition<br />
First notice that f(x²) ≥ x², so that the conclusion Iγ(G) ≥ ||G(s)||2² follows immediately.<br />
Now, let<br />
Of course, ri ≥ β > 1. Then<br />
so that<br />
which is the conclusion. In order to prove that the entropy can be computed from the stabilizing<br />
solution of the Riccati equation, recall (Theorem 1) that, since ||G(s)||∞ < γ, such a solution exists.<br />


.<br />
ACTUATOR DISTURBANCE<br />
x˙ = Ax + B(w + u)<br />
z = Cx + Du<br />
The optimal H2 state-feedback controller is<br />
u = F2 x,   F2 = −(D'D)^-1 (B'P + D'C)<br />
where P is the stabilizing solution of<br />
0 = A'P + PA + C'C − F2'(D'D)F2<br />
Proposition<br />
The optimal H2 control law ensures an H∞ norm of the closed-loop<br />
system (from w to z) lower than 2||D||.<br />
Proof. Consider the inequality of the Bounded Real Lemma for the closed-loop system, i.e.<br />
(A + BF2)'Q + Q(A + BF2) + γ^-2 QBB'Q + (C + DF2)'(C + DF2) < 0<br />
Now take Q = P/α and use the Riccati equation in P. It follows:<br />
−(1 − α) C'( I − D(D'D)^-1 D' )C + PBRB'P < 0,   R = α^-1 γ^-2 I − (1 − α)(D'D)^-1<br />
This condition is verified by choosing α ≤ 1 and the minimum γ such that R ≤ 0, i.e. the minimum γ<br />
such that<br />
(α − α²)I − γ^-2 D'D ≥ ( α − α² − γ^-2 σ̄²(D) ) I = 0<br />
The term α − α² is maximized by α = 0.5, for which α − α² = 0.25, so that the minimum γ is<br />
γ = 2 σ̄(D)<br />
2


The Small Gain Theorem<br />

THEOREM 4<br />
Assume that G1(s) is stable. Then:<br />
(i) If ||G1(s)||∞ < α^-1, the interconnected system is stable for each stable G2(s) with ||G2(s)||∞ ≤ α.<br />
(ii) If ||G1(s)||∞ > α^-1, then there exists a stable G2(s) with ||G2(s)||∞ ≤ α such that the interconnected system is unstable.<br />


Proof of Theorem 4<br />
Point (i). If ||G2(s)||∞ ≤ α, then ||G1(s)G2(s)||∞ < 1 and stability of the loop follows.<br />
Point (ii). Pick ω such that σ̄(G1(jω)) > α^-1 and ρ ≤ α with ρ σ̄(G1(jω)) ≥ 1 > 0, and write the<br />
singular value decomposition of G1(jω), i.e.<br />
G1(jω) = U(jω)Σ(jω)V~(jω), where Σ(jω) = [S(jω)′ 0]′ and S(jω) is<br />
square with dimension m. Moreover, take a stable G2(s) such that<br />
G2(jω) = ρV(jω)[I 0]U~(jω). Notice that G2~(jω)G2(jω) = ρ²I on the relevant subspace, so that ||G2(s)||∞ = ρ ≤ α.<br />
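Applying the theorem requires the H∞ norm of G1(s). A common numerical route, sketched below for an assumed stable example G(s) = C(sI−A)^-1 B with D = 0, is bisection on γ using the Hamiltonian test from the Bounded Real Lemma: ||G||∞ < γ iff the Hamiltonian matrix has no imaginary-axis eigenvalues.

```python
import numpy as np

def hinf_norm(A, B, C, lo=1e-6, hi=1e6, iters=80):
    """H-infinity norm of C(sI-A)^-1 B (A stable, D = 0) by bisection:
    ||G||inf < g  iff  [[A, g^-2 BB'],[-C'C, -A']] has no imaginary eigenvalues.
    Assumes the norm lies in the initial bracket (lo, hi)."""
    for _ in range(iters):
        g = np.sqrt(lo*hi)   # geometric mean handles the wide bracket
        H = np.block([[A, (B @ B.T)/g**2], [-C.T @ C, -A.T]])
        w = np.linalg.eigvals(H)
        if np.any(np.abs(np.real(w)) < 1e-8*max(1.0, np.max(np.abs(w)))):
            lo = g           # eigenvalue on the axis: g <= ||G||inf
        else:
            hi = g
    return hi

# assumed example: G(s) = 1/(s^2 + s + 1), peak 2/sqrt(3) at w = 1/sqrt(2)
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]]); C = np.array([[1., 0.]])
ninf = hinf_norm(A, B, C)
```

The imaginary-axis tolerance is a heuristic; production implementations (e.g. the Boyd–Balakrishnan–Bruinsma–Steinbuch algorithm) refine the frequency estimate at each step instead of plain bisection.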


The H∞ design problem<br />
[block diagram: c0 → K(s) → G(s) → c, with disturbances du, dc, dr entering the loop]<br />
Find K(s) in such a way to guarantee:<br />
• Stability<br />
• Satisfactory performance<br />
Design in nominal conditions<br />
Uncertainty description<br />
Design under uncertain conditions<br />


Nominal Design<br />
G(s) = Gn(s)<br />
The performance requirements are expressed in terms of conditions on some<br />
transfer functions.<br />
[block diagram: c0 → K(s) → Gn(s) → c, with disturbances du, dc, dr]<br />

Characteristic functions:<br />

• Sensitivity Sn(s)=(I+Gn(s)K(s)) -1<br />

dc → c, c 0 →y, -dr→y<br />

• Complementary sensitivity<br />

Tn(s)=Gn(s)K(s)(I+Gn(s)K(s)) -1<br />

c 0 →c, -dr→c<br />

• Control sensitivity<br />

Vn(s)=K(s)(I+Gn(s)K(s)) -1<br />

c 0 →up, -dr→up, -dc→up<br />

dr


Shaping Functions<br />

In the SISO case, the requirement to have a “small” transfer<br />
function φ(s) can be well expressed by saying that the absolute<br />
value |φ(jω)|, for each frequency ω, must be smaller than a<br />
given function θ(ω) (generally depending on the frequency):<br />
|φ(jω)| < θ(ω),   ∀ω<br />
Analogously, in the MIMO case, one can write<br />
σ̄[φ(jω)] < θ(ω),   ∀ω<br />
This can be done by choosing a matrix transfer function W(s),<br />
stable with stable inverse (“shaping” function), such that<br />
σ̄[W^-1(jω)] = θ(ω)   ←   ||W(s)φ(s)||∞ < 1<br />
A general requirement is that the sensitivity function is small at<br />
low frequency (tracking) whereas the complementary sensitivity<br />
function is small at high frequency (feedback disturbance<br />
attenuation). Mixed sensitivity:<br />
|| [ W1(s)Sn(s) ; W2(s)Tn(s) ] ||∞ < 1<br />
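A mixed-sensitivity requirement like the one above can be checked on a frequency grid. The sketch below is an assumed SISO illustration (plant Gn(s) = 1/(s+1), static controller K = 10, weights W1(s) = 5/(s+1) and W2(s) = s/10 are all my own choices, not from the slides); for a SISO column [W1S; W2T] the largest singular value is just the Euclidean norm of the two entries.

```python
import numpy as np

# assumed loop: Gn(s) = 1/(s+1), K(s) = 10, weights W1 = 5/(s+1), W2 = s/10
w = np.linspace(0.0, 100.0, 100001)
s = 1j*w
Gn, K = 1.0/(s + 1.0), 10.0
L = Gn*K
S, T = 1.0/(1.0 + L), L/(1.0 + L)       # sensitivity Sn and complementary Tn
W1, W2 = 5.0/(s + 1.0), s/10.0
stack = np.sqrt(np.abs(W1*S)**2 + np.abs(W2*T)**2)   # sigma_bar of [W1 S; W2 T]
mixed_norm = stack.max()                 # grid estimate of the mixed-sensitivity norm
```

With these weights the stack stays below 1 at every gridded frequency, so this (assumed) controller meets the shaped tracking and roll-off requirements; note also the structural identity S + T = 1 that limits how small both can be made simultaneously.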


Description of the uncertainties<br />
The nominal transfer function Gn(s) of the process G(s) belongs<br />
to a set G which can be parametrized through a transfer<br />
function Δ(s) included in the set<br />
Dα = { Δ(s) : Δ(s) ∈ H∞, ||Δ(s)||∞ < α }<br />
For instance,<br />

• G = {G(s) | G(s) = Gn(s)+Δ(s)}<br />

• G = {G(s) | G(s) = Gn(s)(I+Δ(s))}<br />

• G = {G(s) | G(s) = (I+Δ(s))Gn(s)}<br />

• G = {G(s) | G(s) = (I-Δ(s)) -1 Gn(s)}<br />

• G = {G(s) | G(s) = (I-Gn(s)Δ(s)) -1 Gn(s)}


Examples<br />

• G = {G(s) | G(s) = Gn(s)+Δ(s)}<br />
Uncertain unstable zeros:<br />
G(s) = (s − 2)/((s + 2)(s + 1)) + ε/((s + 2)(s + 1))<br />
• G = {G(s) | G(s) = Gn(s)(I+Δ(s))}<br />
Unmodelled high frequency poles or unstable zeros:<br />
G(s) = 1/(s + 1) ⋅ (1 − εs/(1 + εs)),   G(s) = 1/(s + 1) ⋅ (1 − 2/(1 + s))<br />
• G = {G(s) | G(s) = (I-Δ(s)) -1 Gn(s)}<br />
Unmodelled unstable poles:<br />
G(s) = (1 − 10/(1 + s))^-1 ⋅ 1/(s + 10)<br />
• G = {G(s) | G(s) = (I-Gn(s)Δ(s)) -1 Gn(s)}<br />
Uncertain unstable poles:<br />
G(s) = (1 − ε/(s − 1))^-1 ⋅ 1/(s − 1)<br />


Design in nominal conditions<br />
Example of mixed performances<br />
[block diagram: w = c0 enters the loop c0 → K(s) → Gn(s) → c, with du, dc, dr; z1 = W1(c0 − c), z2 = W2 c]<br />
z = [ z1 ; z2 ],   w = c0<br />
P = [ W1   −W1Gn ; 0   W2Gn ; I   −Gn ]<br />
[standard form: P with inputs (w, u) and outputs (z, y), closed on K]<br />


Design under uncertain conditions<br />
Robust stability and nominal performances<br />
[block diagram: loop c0 → K(s) → Gn(s) → c, with the uncertainty Δ connected through W4 and W5, and dc = w]<br />
z = [ z1 ; z2 ],   w = dc<br />
P = [ −W1   −W1Gn ; 0   W4 ; −I   −Gn ]<br />


Design under uncertain conditions<br />
Robust stability and robust performances<br />
[block diagram: loop c0 → K(s) → Gn(s) → c, with Δ connected through W4 and W5, dc = w]<br />
1) Robust sensitivity performance:<br />
W1(s)S(s) = W1(s)Sn(s) [ I − W5(s)Δ(s)W4(s)Vn(s) ]^-1,   ∀ ||Δ(s)||∞ < 1<br />
2) Robust stability:<br />
||W4(s)Vn(s)W5(s)||∞ < 1<br />
Result (take W5 = 1):<br />
1) and 2) are both achieved if the controller is such that<br />
|| [ W1(s)Sn(s) ; W4(s)Vn(s) ] ||∞ < 1<br />

The proof of the result follows from 1) and 2) and relies on the following<br />
Lemma<br />
Let X(s) and Y(s) be in L∞ and Ψ(s) a generic L∞ function with ||Ψ(s)||∞ ≤ 1. If || [ X(s) ; Y(s) ] ||∞ < 1, then ||X(s)( I − Ψ(s)Y(s) )^-1||∞ < 1.<br />


The Standard Problem<br />
[block diagram: generalized plant P with inputs (w, u) and outputs (z, y), closed on K]<br />
All the situations studied so far can be recast in the so-called<br />
standard problem: Find K(s) in such a way that:<br />
• The closed-loop system is asymptotically stable<br />
• ||T(z,w,s)||∞ < γ<br />
Existence of a feasible K(s) and parametrization of all controllers<br />
such that the closed-loop norm between w and z is less than γ.<br />


Full Information<br />
x˙ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
y = [ x ; w ]<br />
Ac = A − B2(D12′D12)^-1 D12′C1<br />
C1c = (I − D12(D12′D12)^-1 D12′)C1<br />
Assumptions<br />
(A,B2) stabilizable<br />
D12′D12 > 0<br />
(Ac,C1c) detectable<br />


Solution of the Full Information Problem<br />
THEOREM 5<br />
There exists a feasible FI controller such that ||T(z,w,s)||∞ < γ<br />
if and only if there exists a positive semidefinite and stabilizing<br />
solution of the Riccati equation<br />
A′P + PA − (PB2 + C1′D12)(D12′D12)^-1(B2′P + D12′C1) + γ^-2 PB1B1′P + C1′C1 = 0<br />
i.e. a solution P ≥ 0 such that<br />
A − B2(D12′D12)^-1(B2′P + D12′C1) + γ^-2 B1B1′P   is stable<br />
In this case, with F∞ = −(D12′D12)^-1(B2′P + D12′C1), the family of feasible controllers is<br />
u = F∞x + Q(s)( w − γ^-2 B1′Px )<br />
where Q(s) is a stable, proper system satisfying ||Q(s)||∞ < γ.<br />
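The Riccati equation of Theorem 5 can be solved numerically through the stable invariant subspace of the associated Hamiltonian matrix. The sketch below assumes the simplified case C1′D12 = 0 and D12′D12 = I (so the equation reduces to A′P + PA − P(B2B2′ − γ⁻²B1B1′)P + C1′C1 = 0) and uses the second-order example that appears later in these notes, for which γinf = 0.75 and the central gain at γ = 0.8 is [−0.6019 −0.7676]:

```python
import numpy as np

def hinf_riccati(A, B1, B2, C1, gamma):
    """Stabilizing solution of A'P + PA - P(B2 B2' - g^-2 B1 B1')P + C1'C1 = 0
    (assumes C1'D12 = 0, D12'D12 = I) via the Hamiltonian stable subspace."""
    n = A.shape[0]
    M = B2 @ B2.T - B1 @ B1.T / gamma**2
    H = np.block([[A, -M], [-C1.T @ C1, -A.T]])
    w, V = np.linalg.eig(H)
    stable = np.real(w) < 0   # eigenvalues come in (lam, -lam) pairs
    assert stable.sum() == n, "gamma too small: eigenvalues on the imaginary axis"
    X, Y = V[:n, stable], V[n:, stable]
    P = np.real(Y @ np.linalg.inv(X))
    return (P + P.T) / 2

A = np.array([[0., 1.], [-1., -1.]])
B1 = B2 = np.array([[0.], [1.]])
C1 = np.array([[1., 0.], [0., 0.]])
gamma = 0.8
P = hinf_riccati(A, B1, B2, C1, gamma)
F = -B2.T @ P    # central static gain F_inf = -(D12'D12)^-1 (B2'P + D12'C1)
residual = A.T @ P + P @ A - P @ (B2 @ B2.T - B1 @ B1.T/gamma**2) @ P + C1.T @ C1
```

For γ ≤ γinf the Hamiltonian acquires imaginary-axis eigenvalues and the assertion fails, which is exactly the feasibility test of the theorem.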


Comments<br />
• As γ → ∞, the central controller (Q(s)=0) coincides with the H2<br />
optimal controller.<br />
• The parametrized family directly includes the static<br />
controller u = F∞x.<br />
• The proof of the previous result, in its general fashion, would<br />
require too much time. Indeed, the necessity part requires a<br />
digression on the Hankel-Toeplitz operator. If we drop the<br />
result on the parametrization and limit ourselves to the static<br />
state-feedback problem, i.e. u = Kx, the following simple result<br />
holds:<br />
THEOREM 6<br />
There exists a stabilizing control law u = Kx such that the norm of<br />
the system (A+B2K, B1, C1+D12K) is less than γ if and only if there<br />
exists a positive definite solution of the inequality:<br />
A′P + PA − (PB2 + C1′D12)(D12′D12)^-1(B2′P + D12′C1) + γ^-2 PB1B1′P + C1′C1 < 0<br />


Observation: LMI<br />
P > 0<br />
A′P + PA − (PB2 + C1′D12)(D12′D12)^-1(B2′P + D12′C1) + γ^-2 PB1B1′P + C1′C1 < 0<br />
Ac = A − B2(D12′D12)^-1 D12′C1<br />
C1c = (I − D12(D12′D12)^-1 D12′)C1<br />
By the Schur Lemma, with X = γ²P^-1:<br />
X > 0<br />
[ −XAc′ − AcX − B1B1′ + γ² B2(D12′D12)^-1 B2′    XC1c′ ; C1cX    γ²I ] > 0<br />


Proof of Theorem 6<br />
Assume that there exists K such that A+B2K is stable and the<br />
norm of the system (A+B2K, B1, C1+D12K) is less than γ. Then,<br />
from the Bounded Real Lemma we know that there exists a positive definite solution of<br />
the inequality<br />
(A + B2K)′P + P(A + B2K) + γ^-2 PB1B1′P + (C1 + D12K)′(C1 + D12K) < 0   (*)<br />
Now, defining<br />
F = −(D12′D12)^-1 (B2′P + D12′C1)<br />
the inequality can be equivalently rewritten as<br />
A′P + PA − (PB2 + C1′D12)(D12′D12)^-1(B2′P + D12′C1) + γ^-2 PB1B1′P + C1′C1 + (K − F)′(D12′D12)(K − F) < 0<br />
so concluding the proof. Vice versa, assume that there exists a<br />
positive definite solution of the Riccati inequality. Then, inequality (*)<br />
is satisfied with K = F, so that, with such a K, the norm of the<br />
closed-loop transfer function is less than γ (BRL).<br />


Parametrization of all algebraic<br />
“state-feedback” controllers<br />
Define:<br />
AR = [ A   B2 ],   CR = [ C1   D12 ],   W = [ W1 ; W2′ ],   W1 = W1′ > 0<br />
THEOREM 7<br />
The set of all controllers u = Kx such that the H∞ norm of the<br />
closed-loop system is less than γ is given by K = W2′W1^-1, where W1, W2 satisfy<br />
[ −ARW − (ARW)′ − B1B1′    (CRW)′ ; CRW    γ²I ] > 0<br />


Proof of Theorem 7<br />
We prove the theorem in the simple case in which C1′D12 = 0 and<br />
D12′D12 = I. The LMI is equivalent to<br />
W1 > 0<br />
AW1 + W1A′ + B1B1′ + W2B2′ + B2W2′ + γ^-2 W1C1′C1W1 + γ^-2 W2W2′ < 0<br />
If there exists W satisfying such an inequality, it follows that the<br />
inequality<br />
P(A + B2K) + (A + B2K)′P + γ^-2 PB1B1′P + C1′C1 + K′K < 0<br />
is satisfied with K = W2′W1^-1 and P = γ²W1^-1. The result follows from<br />
the BRL. Vice versa, assume that there exists P > 0 satisfying this<br />
last inequality. The conclusion directly follows by letting<br />
W1 = γ²P^-1 and W2 = W1K′.<br />


Example<br />
x˙ = [ 0  1 ; −1  −1 ] x + [ 0 ; 1 ] (w + u)<br />
z = [ 1  0 ; 0  0 ] x + [ 0 ; 1 ] u<br />
γinf = 0.75,   γ = 0.76<br />
[figure: robustness comparison, ||T(z1,w)||∞ as a function of the perturbation Ω ∈ [−1, 1] for the H2 and H∞ designs]<br />
ΔA = [ 0  0 ; Ω  0 ]<br />

Mixed Problem H2/H∞<br />
x˙ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
ξ = Lx + Mu<br />
The problem consists in finding u = Kx in such a way that<br />
min_K ||T(ξ,w,s,K)||2   subject to   ||T(z,w,s,K)||∞ ≤ γ<br />
• For instance, take the case z = ξ. The problem makes sense since<br />
a blind minimization of the H∞ norm may bring a<br />
serious deterioration of the H2 (mean squares) performances.<br />
• Obviously, the problem is non trivial only if the value of γ is<br />
included in the interval (γinf, γ2), where γinf is the infimum<br />
achievable H∞ norm for T(z,w,s,K) as a function of K, whereas<br />
γ2 is the H∞ norm of T(z,w,s,KOTT), where KOTT is the optimal<br />
unconstrained H2 controller.<br />
• This problem has been tackled in different ways, but an analytic<br />
solution is not available yet. In the literature, many sub-optimal<br />
solutions can be found (Nash-game approach, convex<br />
optimization, inverse problem, …)<br />


Post-Optimization procedure<br />
x˙ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
ξ = Lx + Mu<br />
Simplifying assumptions: L′M = 0, M′M = I.<br />
Let Ksub be a controller such that ||T(z,w,s,Ksub)||∞ < γ.<br />


THEOREM 8 (Post-Opt Algorithm)<br />
Each controller Kα of the family (α-con) is a stabilizing controller<br />
and is such that<br />
||T(ξ,w,s,Kα)||2 = ||T(ξ,w,s,K2−α)||2,   (1 − α) d/dα ||T(ξ,w,s,Kα)||2² ≤ 0<br />
Interpretation: For α = 0, the controller is K0 = Ksub. Hence,<br />
||T(z,w,s,K0)||∞ < γ. Moreover, by the derivative condition, the H2 norm is non-increasing as α grows from 0 to 1.<br />


Proof of Theorem 8<br />
First notice that the equation (α-RIC) can be rewritten as:<br />
(A + B2αKα)′Pα + Pα(A + B2αKα) + Kα′Kα + L′L = 0   (α-RIC)<br />
so that<br />
||T(ξ,w,s,Kα)||2 = ( Trace(B1′PαB1) )^1/2<br />
The fact that the norm ||T(ξ,w,s,Kα)||2 is symmetric with respect to<br />
α = 1 follows directly from (α-RIC) by inspection. Taking the derivative<br />
with respect to α one obtains<br />
Fα′Xα + XαFα − 2(1 − α) Λα′Λα = 0<br />
where Xα is the derivative of Pα with respect to α and<br />
Fα = Aα − B2αB2α′Pα,   Λα = Ksub + B2α′Pα<br />
Matrix Fα is stable since Pα is the stabilizing solution of (α-RIC).<br />
Hence, by the Lyapunov Lemma, the conclusion follows.<br />
α


Example<br />
x˙ = [ 0  1 ; −1  −1 ] x + [ 0 ; 1 ] (w + u)<br />
z = [ 1  0 ; 0  0 ] x + [ 0 ; 1 ] u<br />
γinf = 0.75,   γ2 = 0.8415,   γ = 0.8<br />
Ksub = Kcen = [−0.6019  −0.7676]<br />
α* = 0.56,   Kα* = [−0.4981  −0.5424]<br />
[figures: H2 performance of Kα as a function of α ∈ [0, 2]; robustness of K2, Kcen and Kα with respect to the perturbation ΔA = [ 0  0 ; Ω  0 ], Ω ∈ [−1, 1]]<br />


Disturbance Feedforward<br />
x˙ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
y = C2x + w<br />
Same assumptions as FI + stability of A − B1C2<br />
C2 = 0 → direct compensation<br />
C2 ≠ 0 → indirect compensation<br />
KDF is obtained by closing R(s) on KFI(s), with<br />
R(s) = [ A − B1C2   B1   B2 ; 0   0   I ; I   0   0 ; −C2   I   0 ]<br />
The proof of the DF theorem consists in verifying that the transfer function from w to z achieved by<br />
the FI regulator for the system (A, B1, B2, C1, D12) is the same as the transfer function from w to z<br />
obtained by using the regulator KDF shown in the figure.<br />


Output Estimation<br />
x˙ = Ax + B1w + B2u<br />
z = C1x + u<br />
y = C2x + D21w<br />
The estimation error is z, with u = −ξ̂, where ξ is the variable to be estimated.<br />
Assumptions<br />
(A,C2) detectable<br />
D21D21′ > 0<br />
Af = A − B1D21′(D21D21′)^-1 C2<br />
B1f = B1( I − D21′(D21D21′)^-1 D21 )<br />
(Af,B1f) stabilizable<br />
(A − B2C1) stable<br />
z


Solution of the Output Estimation problem<br />
THEOREM 9<br />
There exists a feasible controller (filter) such that ||T(z,w,s)||∞ < γ<br />
if and only if there exists the stabilizing positive semidefinite<br />
solution of the Riccati equation<br />
AΠ + ΠA′ − (ΠC2′ + B1D21′)(D21D21′)^-1(C2Π + D21B1′) + γ^-2 ΠC1′C1Π + B1B1′ = 0<br />
i.e. a solution Π ≥ 0 such that<br />
A − (ΠC2′ + B1D21′)(D21D21′)^-1 C2 + γ^-2 ΠC1′C1   is stable<br />
In this case, with L∞ = −(ΠC2′ + B1D21′)(D21D21′)^-1, the set of all feasible filters is given by a<br />
central filter M(s), whose realization is built from A, L∞, B2, C1, C2 and γ^-2 ΠC1′,<br />
closed in feedback on a parameter Q(s), a proper stable system satisfying ||Q(s)||∞ < γ.<br />
Q


Proof of Theorem 9<br />
It is enough to observe that the “transposed system”<br />
λ˙ = A′λ + C1′μ + C2′ν<br />
ϑ = B1′λ + D21′ν<br />
φ = B2′λ + μ<br />
has the same structure as the system defining the DF problem.<br />
Hence the solution is the “transposed” solution of the DF problem.<br />
It is worth noticing that the solution depends on the particular<br />
linear combination of the state that one wants to estimate.<br />
linear combination of the state that one wants to estimate.


Example<br />
x˙ = [ 0  1 ; −1  −1+Ω ] x + [ 0 ; 1 ] w1<br />
y = [ 1  0 ] x + w2<br />
damping ratio ξ = (1 − Ω)/2<br />
Let us consider six filters:<br />
Filter K1: Kalman filter. The noises are assumed to be white<br />

uncorrelated gaussian noises with identity intensities, namely<br />

W1=1, W2=1.<br />

Filter K2: Kalman filter. The noises are assumed to be white<br />

uncorrelated gaussian noises with W1=1, W2=0.5.<br />

Filter K3: Kalman filter. The noises are assumed to be white<br />

uncorrelated gaussian noises with W1=0.5, W2=1.<br />

Filters H1,H2,H3: H∞ filters with γ=1.1, γ=1.01, γ=1.005 (notice<br />

that with γ=1 the stabilizing solution of the Riccati equation does<br />

not exist).<br />

Other techniques of Robust filtering are possible<br />

Norm H2<br />

2.2<br />

2<br />

1.8<br />

1.6<br />

1.4<br />

1.2<br />

1<br />

0.8<br />

H3<br />

0.6<br />

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9<br />

H2<br />

H1<br />

Norm H∞<br />

7<br />

6<br />

5<br />

4<br />

3<br />

2<br />

1<br />

0<br />

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9


Partial Information<br />
x˙ = Ax + B1w + B2u<br />
z = C1x + D12u<br />
y = C2x + D21w<br />
Assumptions: FI + DF.<br />
Theorem 10<br />
There exists a feasible controller such that ||T(z,w,s)||∞ < γ if and only if<br />
• There exists the stabilizing positive semidefinite solution P of<br />
A′P + PA − (PB2 + C1′D12)(D12′D12)^-1(B2′P + D12′C1) + γ^-2 PB1B1′P + C1′C1 = 0<br />
• There exists the stabilizing positive semidefinite solution Π of<br />
AΠ + ΠA′ − (ΠC2′ + B1D21′)(D21D21′)^-1(C2Π + D21B1′) + γ^-2 ΠC1′C1Π + B1B1′ = 0<br />
• max_i λi(PΠ) < γ²<br />


Structure of the regulator<br />
[diagram: the set of all controllers is S∞(s), with inputs (y, ·) and output u, closed on Q(s)]<br />
S∞(s) = [ Afin   B1fin   B2fin ; F∞   0   (D21D21′)^-1/2 ; C2fin   (D21D21′)^-1/2   0 ]<br />
Afin = A−B2(D12′D12) -1 D12C1−B2(D12′D12) -1 B2′P+γ -2 B1B1′P+<br />

+ (I−γ -2 ΠP) -1 L∞(C2+γ -2 D21B1′P)<br />

B1fin = −(I−γ -2 ΠP) -1 L∞<br />

B2fin = (I−γ -2 ΠP) -1 (B2+γ -2 ΠC1′D12)(D12′D12) -1/2<br />

C2fin = −(D21D21′) -1 (C2+γ -2 D21B1′P)


Comments<br />
It is easy to check that the central controller (Q(s)=0) is described<br />
by:<br />
ξ˙ = Aξ + B2u + Z∞L∞( C2ξ − y + D21w* ) + B1w*,   Z∞ = (I − γ^-2 ΠP)^-1<br />
u = F∞ξ<br />
This closely resembles the structure of the optimal H2 controller.<br />
Notice, however, that the well-known separation principle does<br />
not hold, due to the presence of the worst disturbance w* = γ^-2 B1′Pξ.<br />
Proof of Theorem 10 (sketch)<br />
Define the variables r and q as:<br />
w = r + γ^-2 B1′Px<br />
q = u − F∞x<br />
In this way, the system becomes<br />
x˙ = (A + γ^-2 B1B1′P)x + B1r + B2u<br />
q = −F∞x + u<br />
y = (C2 + γ^-2 D21B1′P)x + D21r<br />
This system has the same structure as the one of the OE problem.<br />
Hence, the solution can be obtained from the solution of the OE<br />
problem, by recognizing that the solution Πt of the relevant<br />
Riccati equation is related to the solutions P and Π above in the<br />
following way:<br />
Πt = Π( I − γ^-2 PΠ )^-1<br />


Operatorial Approach<br />
[standard problem: plant P with inputs (w, u) and outputs (z, y), closed on K]<br />
P = [ P11   P12 ; P21   P22 ]<br />
T(z,w,s) = P11(s) + P12(s) K(s) ( I − P22(s)K(s) )^-1 P21(s)<br />
T(z,w,s) = T1(s) − T2(s) Q(s) T3(s)<br />
The functions T1, T2, T3 depend on a double coprime factorization<br />
of P22. The function Q(s) is the Youla-Kucera parameter (a stable<br />
and proper function).<br />
• Inner-outer factorization and reduction of the problem to a<br />
Nehari distance problem<br />


Laurent and Hankel operators<br />

• Let F(s) ∈ L∞ . The map ΛF: L2 → L2 defined as<br />

ΛF : G(s) → ΛFG(s) = F(s)G(s)<br />

is called the Laurent operator (with symbol F).<br />

• Consider G(s)∈L2 and let G(s) =Gs(s)+Ga(s), with Gs(s) ∈H2 and Ga(s)<br />

∈ H2 ⊥ . The map Π s: L2 → H2 defined as<br />

Πs : G(s) → Π sG(s) = Gs(s)<br />

is called the stable projection.<br />

• Consider G(s)∈L2 and let G(s) =Gs(s)+Ga(s), with Gs(s) ∈H2 and Ga(s)<br />

∈ H2 ⊥ . The map Π a: L2 → H2 ⊥ defined as<br />

Π a : G(s) → Π aG(s) = Ga(s)<br />

is called the antistable projection.<br />

• Let F(s)∈L∞. The map ΓF: H2 ⊥ → H2 defined as<br />

ΓF : G(s) → ΓFG(s) = Π sΛFG(s)<br />

is called the Hankel operator (with symbol F).<br />

Notice that ΛF is linear and ||ΛF|| = ||F(s)||∞, so that it is a bounded operator. It is immediate to check that<br />
the projections Πs and Πa are indeed linear operators. The Hankel operator is the result of the<br />
composition of the Laurent operator and the stable projection, i.e. ΓF = ΠsΛF. Notice also that only<br />
the stable part of F(s) contributes to ΓFG(s). Indeed, since G(s) ∈ H2⊥, it follows<br />

ΓFG(s)=ΠsΛFG(s)= ΠsF(s)G(s)= Πs[(Fs(s)+Fa(s))G(s)]= ΠsFs(s)G(s)=ΓFsG(s)


The adjoint operators<br />
• The adjoint Laurent operator (with symbol F) is the Laurent operator<br />
with symbol F~, i.e.<br />
Λ*F = ΛF~<br />
• The adjoint Hankel operator (with symbol F) is<br />
Γ*F = ΠaΛF~<br />
Laurent: G1(s) ∈ L2 and G2(s) ∈ L2. Then<br />
⟨G1, Λ*FG2⟩ = ⟨ΛFG1, G2⟩ = (1/2π) ∫_{−∞}^{+∞} trace[ G1(−jω)′ F(−jω)′ G2(jω) ] dω = ⟨G1, ΛF~G2⟩<br />
Hankel: H(s) ∈ H2 and G(s) ∈ H2⊥. Recall that ⟨a,b⟩ = 0 if a ∈ H2 and b ∈ H2⊥. Then<br />
⟨ΓFG, H⟩ = ⟨ΠsΛFG, H⟩ = ⟨ΛFG, H⟩ = ⟨G, ΛF~H⟩ = ⟨G, ΠaΛF~H⟩ = ⟨G, Γ*FH⟩<br />


Time-domain interpretation of the Hankel operator<br />
Let F(s) = C(sI−A)^-1 B, with A stable, (A,C) observable and (A,B) reachable.<br />
Hence, the inverse Laplace transform of F(s) is<br />
f(t) = 0 for t ≤ 0,   f(t) = Ce^{At}B for t > 0<br />
Now, consider the map Γf : L2(−∞,0) → L2(0,+∞) defined by<br />
Γf : u(t) → Γf u(t) = y(t),   with y(t) = 0 for t ≤ 0 and y(t) = Ce^{At} ∫_{−∞}^{0} e^{−Aτ} Bu(τ) dτ for t > 0<br />
This operator maps past inputs to future outputs. Indeed, the past inputs (with x(−∞)=0) contribute<br />
to form the state x(0), from which the free output can be calculated. Γf is the time-domain<br />
counterpart of ΓF. Γf can be seen as the composition of two operators, Ψo (the observability<br />
operator) and Ψr (the reachability operator):<br />
Γf = ΨoΨr<br />
Ψr : L2(−∞,0) → R^n,   Ψr : u(t) → Ψr u(t) := x = ∫_{−∞}^{0} e^{−Aτ} Bu(τ) dτ<br />
Ψo : R^n → L2(0,∞),   Ψo : x → Ψo x := y(t),   with y(t) = 0 for t < 0 and y(t) = Ce^{At}x for t ≥ 0<br />
The operator Ψo is injective thanks to the observability of (A,C). Indeed, from Ψo x = 0 it follows<br />
x = 0.<br />
Conversely, the operator Ψr is surjective thanks to the reachability of (A,B). Indeed, given any point<br />
x of R^n it follows that Ψr u(t) = x, where u(t) = 0 for t > 0 and u(t) = B′e^{−A′t} Pr^-1 x for t ≤ 0, where Pr is the<br />
unique solution of the Lyapunov equation APr + PrA′ + BB′ = 0.<br />


Time-domain interpretation of the adjoint Hankel operator

The adjoint reachability and observability operators are as follows:

Ψr* : R^n → L2(−∞,0),  Ψr* : x → Ψr* x = u(t),  u(t) = B′e^{−A′t} x for t ≤ 0,  u(t) = 0 for t > 0

Ψo* : L2(0,+∞) → R^n,  Ψo* : y(t) → Ψo* y(t) = x = ∫_0^{+∞} e^{A′τ} C′ y(τ) dτ

Therefore the adjoint Hankel operator in time-domain is

Γf* : L2[0,∞) → L2(−∞,0],  Γf* : y(t) → Γf* y(t) = u(t),
u(t) = B′e^{−A′t} ∫_0^{+∞} e^{A′τ} C′ y(τ) dτ for t ≤ 0,  u(t) = 0 for t > 0

The rank of Ψr and Ψo is n = McMillan degree of C(sI − A)^-1 B. Moreover, since Ψo is injective and Ψr surjective, it turns out that Γf (and ΓF) has rank n. Therefore the self-adjoint operator ΓF*ΓF has rank n, so that it admits n nonzero eigenvalues.


Reachability and observability gramians

Ψr Ψr* = ∫_0^{+∞} e^{Aτ} BB′ e^{A′τ} dτ = Pr,  Ψo* Ψo = ∫_0^{+∞} e^{A′τ} C′C e^{Aτ} dτ = Po

Notice that Pr and Po are the unique (positive definite) solutions of the Lyapunov equations:

APr + PrA′ + BB′ = 0
PoA + A′Po + C′C = 0

The operator ΓF*ΓF and the matrix PrPo share the same nonzero eigenvalues. To see this, notice that Γf*Γf = Ψr*Ψo*ΨoΨr = Ψr*PoΨr, whose nonzero eigenvalues coincide with those of ΨrΨr*Po = PrPo. The Hankel norm of the system is

||ΓF|| = √λmax(PrPo)

Example:

F(s) = 5(s − 1)(s + 5)(s − 3)/[(s + 1)²(s − 7)] = 5 + 22.5/(s − 7) + 2.5(3s − 5)/(s + 1)²

Consider the stable part and take a minimal realization:

A = [ 0 1 ; −1 −2 ],  B = [ 0 ; 1 ],  C = [ −12.5  7.5 ]

It results

Pr = [ 0.25 0 ; 0 0.25 ],  Po = [ 303.125 78.125 ; 78.125 53.125 ]

λmax(PrPo) = 81.3827,  ||ΓF|| = 9.0212
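The numbers of the example can be checked directly. The following is a minimal sketch using numpy: the two Lyapunov equations are solved by Kronecker vectorization (a dedicated solver is not assumed), and the Hankel norm is √λmax(PrPo).

```python
import numpy as np

def lyap(A, Q):
    # Solve A P + P A' + Q = 0 via vec(A P + P A') = (I⊗A + A⊗I) vec(P)
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')

# Minimal realization of the stable part of F(s), as in the example
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[-12.5, 7.5]])

Pr = lyap(A, B @ B.T)       # reachability gramian: A Pr + Pr A' + B B' = 0
Po = lyap(A.T, C.T @ C)     # observability gramian: A' Po + Po A + C'C = 0
lam_max = max(np.linalg.eigvals(Pr @ Po).real)
hankel_norm = np.sqrt(lam_max)
print(Pr)                    # ≈ diag(0.25, 0.25)
print(lam_max, hankel_norm)  # ≈ 81.3827, 9.0212
```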


Formulae for the minimization problem via the operatorial approach

All stabilizing controllers are parametrized by a stable function Q(s). With a doubly coprime factorization

P22(s) = N(s)M(s)^-1 = M̂(s)^-1 N̂(s),  [ X̂(s) −Ŷ(s) ; −N̂(s) M̂(s) ] [ M(s) Y(s) ; N(s) X(s) ] = I

the controllers are

K(s) = (Y(s) − M(s)Q(s))(X(s) − N(s)Q(s))^-1

and the closed-loop transfer function takes the model-matching form

T(s) = T1(s) − T2(s)Q(s)T3(s)
T1(s) = P11(s) + P12(s)M(s)Y(s)P21(s),  T2(s) = P12(s)M(s),  T3(s) = M̂(s)P21(s)


Scalar case

It is possible to perform an inner-outer factorization of T4 = T2T3 = T4i T4o so that, letting F = T4i^-1 T1 and X = T4o Q, we have:

||T1(s) − T2(s)Q(s)T3(s)||∞ = ||T4i(s)^-1 T1(s) − T4o(s)Q(s)||∞ = ||F(s) − X(s)||∞

The problem is then reduced to a Nehari problem. Now, let

F(s) = Fa(s) + Fs(s),  Fa(s) = C(sI − A)^-1 B + D

where Fa is the antistable part of F, and define

A′P + PA = BB′,  AQ + QA′ = C′C,  PQβ = λmax(PQ)β
f(s) = B′(sI − A′)^-1 β,  g(s) = C(sI − A)^-1 Pβ,  χ = [λmax(PQ)]^{1/2}

Theorem 11
The function

X o(s) = F(s) − g(s)/f(s)

is such that

||F(s) − X o(s)||∞ = inf_{X(s) ∈ H∞} ||F(s) − X(s)||∞ = χ

2


Proof of Theorem 11
From F(s) construct f(s), g(s) and λmax(PQ). Let X o(s) be the optimal solution and define h(s) = (F(s) − X o(s))f(s). Notice that h(s) ∈ L2, since F(s) − X o(s) ∈ L∞ and f(s) ∈ H2. It follows:

||h(s) − Γ*_F~ f(s)||₂² = ⟨h(s), h(s)⟩ + ⟨Γ*_F~ f(s), Γ*_F~ f(s)⟩ − 2⟨h(s), Γ*_F~ f(s)⟩

Now, taking into account that Γ*_F~ f(s) ∈ H2⊥ and X o(s)f(s) ∈ H2, it follows

⟨h(s), Γ*_F~ f(s)⟩ = ⟨[F(s) − X o(s)]f(s), Γ*_F~ f(s)⟩ = ⟨F(s)f(s), Γ*_F~ f(s)⟩ = ⟨Γ*_F~ f(s), Γ*_F~ f(s)⟩

So that

||h(s) − Γ*_F~ f(s)||₂² = ⟨h(s), h(s)⟩ − ⟨Γ*_F~ f(s), Γ*_F~ f(s)⟩
= ||(F(s) − X o(s))f(s)||₂² − ||Γ*_F~ f(s)||₂²
≤ ||F(s) − X o(s)||∞² ||f(s)||₂² − λmax(PQ) ||f(s)||₂²

The Nehari theorem says that the minimum of ||F(s) − X(s)||∞ is equal to ||Γ_F~|| = [λmax(PQ)]^{1/2} [the proof is omitted]. Hence, it follows ||F(s) − X o(s)||∞ = [λmax(PQ)]^{1/2}, so that the right-hand side above is zero and h(s) = Γ*_F~ f(s). On the other hand, as it is easy to check, Γ*_F~ f(s) = g(s). Therefore

X o(s) = F(s) − Γ*_F~ f(s)/f(s) = F(s) − g(s)/f(s)


Chain Scattering Representation

[ a1 ; a2 ] = Σ [ b1 ; b2 ]

If Σ21 is invertible, then b1 = Σ21^-1 (a2 − Σ22 b2) and

[ a1 ; b1 ] = CHAIN(Σ) [ b2 ; a2 ]

• Algebra of chain scattering representation

CHAIN(Σ) = [ Σ12 − Σ11 Σ21^-1 Σ22   Σ11 Σ21^-1 ; −Σ21^-1 Σ22   Σ21^-1 ]

(Block diagram: the scattering description Σ, with ports (a1, b1) on the left and (b2, a2) on the right, is redrawn as the cascade operator chain(Σ).)
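The CHAIN formula can be verified numerically with scalar port variables. A small sketch (the Σij values below are arbitrary assumptions, chosen only so that Σ21 is invertible):

```python
# Scattering description: [a1; a2] = Σ [b1; b2], scalar blocks.
s11, s12, s21, s22 = 0.8, -0.3, 2.0, 0.5

# CHAIN(Σ), mapping [b2; a2] to [a1; b1]
chain = [[s12 - s11 * s22 / s21, s11 / s21],
         [-s22 / s21,            1.0 / s21]]

b1, b2 = 1.3, -0.7          # arbitrary right-hand port signals
a1 = s11 * b1 + s12 * b2    # scattering equations
a2 = s21 * b1 + s22 * b2

a1_chain = chain[0][0] * b2 + chain[0][1] * a2
b1_chain = chain[1][0] * b2 + chain[1][1] * a2
print(a1_chain - a1, b1_chain - b1)  # both differences ≈ 0
```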


Homographic Transformation

Θ = chain(Σ)

Φ = LF(Σ ; S) := Σ11 + Σ12 S (I − Σ22 S)^-1 Σ21
Φ = HM(Θ ; S) := (Θ11 S + Θ12)(Θ21 S + Θ22)^-1

(Block diagram: S closes the loop from a2 to b2; the resulting map from a1 to b1 is Φ, computed either from Σ or from Θ = chain(Σ).)

Ψ is J-unitary if

Ψ~(s) J Ψ(s) = J,  Ψ(s) J Ψ~(s) = J,  J = [ I 0 ; 0 −I ]


The Standard Problem via chain-scattering representation

(Standard plant P with exogenous input w, controlled output z, control u and measurement y; the controller K closes the loop from y to u.)

We say that G(s) admits a J-unitary factorization if

G(s) = Θ(s)Λ(s)

where Θ(s) is J-unitary and Λ(s) is unimodular (stable with stable inverse).

Theorem 12
Assume that P has a chain scattering representation G = chain(P) such that G is left invertible and does not have poles or zeros on the imaginary axis. Then, the H∞ problem is solvable if and only if G has a J-unitary factorization G = ΘΛ. The controllers K can be written as K = HM(Λ^-1 ; S) for each proper and stable S.

• J-spectral factorization in control and filtering


A robust stability problem

(Uncertain feedback loop: the uncertainty δ maps z to w around the nominal loop of g and k.)

z = [k/(1 − gk)] w,  q = k/(1 − gk)

We want to maximize ||δ||∞ (i.e. minimize ||q||∞). For the stability of the closed loop it must be

(1) q stable
(2) gq stable
(3) (1 + gq)g stable

Hence, if pi are the unstable poles of g (assumed to be simple and with Re(pi) > 0), q must be such that

(0) q of minimum norm
(1) q stable
(2) q(pi) = 0
(3) (1 + gq)(pi) = 0

Letting

q = q_w a^-1,  a = ∏i (s + pi)/(pi − s)

it follows that q_w must be such that

(0) q_w of minimum norm
(1) q_w stable
(2) q_w(pi) = −(a g^-1)(pi)


A simple example

(The loop of the previous slide, with the uncertainty δ between z and w.)

g = (s + 2)/[(s + 1)(s − 1)]

Find k in such a way that ||δ||∞ is maximized. Take

a = (s + 1)/(1 − s)

and find q_w stable of minimum norm such that q_w(1) = −(a g^-1)(1) = 4/3. It follows q_w = 4/3, so that ||δ||∞max = 3/4 and

q = q_w a^-1 = −4(s − 1)/[3(s + 1)],  k = q/(1 + gq) = −4(s + 1)/(3s + 5)
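The claimed optimum can be checked numerically: with the controller above, q = k/(1 − gk) is all-pass with magnitude 4/3 on the imaginary axis, so ||q||∞ = 4/3 and ||δ||∞max = 3/4. A minimal sketch:

```python
def g(s):
    return (s + 2) / ((s + 1) * (s - 1))

def k(s):
    return -4 * (s + 1) / (3 * s + 5)

def q(s):
    # closed-loop transfer function q = k / (1 - g k)
    return k(s) / (1 - g(s) * k(s))

# |q(jω)| should equal 4/3 for every ω, since q = -4(s-1)/(3(s+1)) is all-pass
mags = [abs(q(1j * w)) for w in (0.0, 0.3, 1.7, 10.0, 250.0)]
print(mags)  # all ≈ 1.3333
```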


In state-space

A = [0 1; 1 0], B1 = [0; 0], B2 = [0; 1], C1 = [0 0], D12 = 1, C2 = [2 1], D21 = 1.

Stabilizing solution of:
A′P + PA − PB2B2′P + PB1B1′P/γ² + C1′C1 = 0

Stabilizing solution of:
AQ + QA′ − QC2′C2Q + QC1′C1Q/γ² + B1B1′ = 0

Coupling condition: ρ(PQ) < γ²


Operatorial approach

P(s) = [ 0 1 ; 1 g ],  g = (s + 2)/[(s + 1)(s − 1)] = n/m,  n = (s + 2)/(s + 1)²,  m = (s − 1)/(s + 1)

Bezout: −ny + mx = 1,  x = (s + 5/3)/(s + 1),  y = −4/3

Model matching:

t1 = p11 + p12 m y p21 = −4(s − 1)/[3(s + 1)],  t4 = t2 t3 = (s − 1)²/(s + 1)²

Inner-outer factorization of t4 = t2t3:

t4 = t4i t4o,  t4i = t4 = (s − 1)²/(s + 1)²,  t4o = 1

Unstable part fa of f = t4i^-1 t1:

f = −4(s + 1)/[3(s − 1)] = −4/3 − (8/3)/(s − 1),  fa = −(8/3)/(s − 1)

Hence X 0 = 0, Q = 0 and

K(s) = y/x = −(4/3)(s + 1)/(s + 5/3) = −4(s + 1)/(3s + 5)
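The value ||δ||∞max = 3/4 is consistent with Theorem 11: the antistable part fa(s) = −(8/3)/(s − 1) has a scalar antistable realization, and its gramian-type equations give the Nehari distance χ = 4/3, whose reciprocal is the robustness margin. A scalar sketch:

```python
# Antistable part of f: fa(s) = C (sI - A)^-1 B, with A = 1 (antistable),
# B = 1, C = -8/3 (scalar realization).
A, B, C = 1.0, 1.0, -8.0 / 3.0

# Scalar versions of A'P + PA = BB' and AQ + QA' = C'C
P = B * B / (2.0 * A)
Q = C * C / (2.0 * A)

chi = (P * Q) ** 0.5       # Nehari distance inf over stable X of ||f - X||∞
print(chi, 1.0 / chi)      # ≈ 1.3333 (= 4/3) and 0.75 (= ||δ||∞max)
```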


REFERENCES

Adamjan, V.M., Arov, D.Z., and Krein, M.G., "Infinite block Hankel matrices and related extension problems", American Mathematical Society Translations, 111: 133-156, 1978.

Bala Subrahmanyam, M., Finite Horizon H∞ and Related Control Problems, Birkhäuser, Boston, 1995.

Basar, T., and Bernhard, P., H∞ Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, Birkhäuser, Boston, 1995.

Chen, B.M., Robust and Optimal Control, Springer-Verlag, 2000.

Colaneri, P., Geromel, J.C., and Locatelli, A., Control Theory and Design: An RH2 and RH∞ Viewpoint, Academic Press, San Diego, 1997.

Dahleh, M.A., and Diaz-Bobillo, I., Control of Uncertain Systems: A Linear Programming Approach, Prentice Hall, Englewood Cliffs, 1995.

Davis, C., Kahan, W.M., and Weinberger, H.F., "Norm-preserving dilations and their applications to optimal error bounds", SIAM Journal of Control and Optimization, vol. 20, pp. 445-469, 1982.

Doyle, J.C., "Analysis of feedback systems with structured uncertainties", IEE Proceedings, Part D, vol. 129, pp. 242-250, 1982.

Hazony, D., "Zero cancellation synthesis using impedance operators", IRE Trans. on Circuit Theory, vol. 8, pp. 114-120, 1961.

Doyle, J.C., Lecture Notes in Advanced Multivariable Control, ONR/Honeywell Workshop, Minneapolis, 1984.

Doyle, J.C., Francis, B.A., and Tannenbaum, A.R., Feedback Control Theory, Macmillan Publishing Co., New York, 1992.

Doyle, J.C., Glover, K., Khargonekar, P.P., and Francis, B., "State-space solutions to standard H2 and H∞ control problems", IEEE Trans. Autom. Control, AC-34, pp. 831-847, 1989.

Fragopoulos, D., "H∞ synthesis theory using polynomial system representations", PhD Thesis, University of Strathclyde, 1994.

Francis, B.A., and Zames, G., "On H∞-optimal sensitivity theory for SISO feedback systems", IEEE Trans. Automat. Contr., vol. AC-29, pp. 9-16, 1984.

Francis, B.A., A Course in H∞ Control Theory, Lecture Notes in Control and Information Sciences, Vol. 88, Springer-Verlag, Berlin, 1987.

Glover, K., "All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds", International Journal of Control, vol. 39, pp. 1115-1193, 1984.

Green, M., Glover, K., Limebeer, D.J.N., and Doyle, J.C., "A J-spectral factorization approach to H∞ control", SIAM J. Contr. and Optimization, 28, pp. 1350-1371, 1990.

Grimble, M.J., and Kucera, V., Polynomial Methods for Control Systems Design, Springer-Verlag, 1996.

Helton, J.W., and Merino, O., Classical Control Using H∞ Methods: Theory, Optimization and Design, SIAM, Philadelphia, 1988.

Hyde, R.A., H∞ Aerospace Control Design: A VSTOL Flight Application, Springer-Verlag, New York, 1995.

Kimura, H., and Okunishi, F., "Chain scattering approach to control system design", in Trends in Control: A European Perspective, A. Isidori Ed., pp. 151-171, Springer-Verlag, Berlin, 1995.

Kimura, H., "Conjugation, interpolation and model-matching in H∞", International Journal of Control, vol. 49, pp. 269-307, 1989.

Kwakernaak, H., and Sebek, M., "Polynomial J-spectral factorization", IEEE Trans. Autom. Control, AC-39, pp. 315-328, 1994.

McFarlane, D., and Glover, K., "A loop shaping design procedure using H∞ synthesis", IEEE Trans. Automat. Control, vol. 37, pp. 759-769, 1992.

Meinsma, G., "Frequency-domain methods in H∞ control", PhD Thesis, University of Twente, 1993.

Mustafa, D., and Glover, K., Minimum Entropy H∞ Control, Lecture Notes in Control and Information Sciences, Springer-Verlag, 1990.

Nevanlinna, R., "Über beschränkte Funktionen, die in gegebenen Punkten vorgeschriebene Werte annehmen", Ann. Acad. Sci. Fenn., Vol. 13, pp. 27-43, 1919.

Pick, G., "Über die Beschränkungen analytischer Funktionen, welche durch vorgegebene Funktionswerte bewirkt werden", Mathematische Annalen, Vol. 77, pp. 7-23, 1916.

Stoorvogel, A., The H∞ Control Problem: A State Space Approach, Prentice Hall, Englewood Cliffs, New Jersey, 1992.

Van Keulen, B., H∞ Control for Distributed Parameter Systems: A State Space Approach, Birkhäuser, Boston, 1993.

Vidyasagar, M., Control System Synthesis: A Factorization Approach, MIT Press, Cambridge, 1985.

Yaesh, I., "Linear Optimal Control and Estimation in the Minimum H∞ Norm Sense", PhD Thesis, Tel Aviv University, 1992.

Zames, G., "Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses", IEEE Trans. Automat. Contr., vol. AC-26, pp. 301-320, 1981.


H∞ FILTERING IN DISCRETE-TIME

P. Colaneri
Dipartimento di Elettronica e Informazione del Politecnico di Milano


SUMMARY

• H∞ analysis
H∞ norm. Small gain theorem. Algebraic and difference Riccati equations for analysis

• H∞ filtering (infinite horizon)
Stochastic filtering. Basic prediction problem. J-spectral factorization. Duality. Krein spaces. Risk sensitivity.

• Mixed H2/H∞ filtering
Convex optimization. α procedure

• H∞ filtering (finite horizon)
Feasibility. Convergence. Gamma switching

• Conclusions


H∞ NORM

x(t+1) = Ax(t) + Bw(t)
z(t) = Cx(t) + Dw(t)

G(z) = C(zI − A)^-1 B + D

||G(z)||∞ = sup_{ϑ∈[0,π]} [λmax(G(e^{jϑ})G(e^{−jϑ})′)]^{1/2} = sup_{w∈l2} ||yF||₂ / ||w||₂ = sup_{w(z)∈l2} ||G(z)w(z)||₂ / ||w(z)||₂

Example: G(z) = (z − 0.5)/(z² + 0.5z + 0.7)

If w(t) is a white noise, the norm represents the square root of the peak value of the spectrum of the output.
______________________________________________________________
For the frequency-domain definition it is enough to require that G(z) does not have poles on the unit circle (L∞). It is easy to recognize that the H∞ norm of a function in L∞ coincides with the norm of its stable part Go(z) in the inner-outer factorization G(z) = Go(z)Gi(z). The time-domain definition requires the external stability of the system. The symbol l2 indicates the set of square-summable discrete-time functions on Z+. Notice that if w belongs to l2 and the system is asymptotically stable, the forced output yF is in l2 as well.
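For a stable SISO transfer function, the frequency-domain definition suggests a direct numerical estimate: sweep the unit circle and take the peak magnitude. A minimal sketch for the example G(z) above (the grid density is an assumption; a finer grid gives a sharper estimate):

```python
import cmath

def G(z):
    # Example transfer function from the slide (stable: poles have modulus √0.7)
    return (z - 0.5) / (z * z + 0.5 * z + 0.7)

# H-infinity norm estimate: sup over θ in [0, π] of |G(e^{jθ})|
N = 20000
hinf_est = max(abs(G(cmath.exp(1j * cmath.pi * k / N))) for k in range(N + 1))
print(hinf_est)  # peak occurs near the pole angle, θ ≈ 1.9 rad
```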


SMALL GAIN THEOREM: robust stability<br />

Let G(z) and Δ(z) stable. Then the closed-loop system (Δ,G) is stable for<br />

each ||Δ(z)|∞≤ α iff ||G(z)||∞


Descriptor simplectic system:<br />

Theorem 2<br />

RICCATI EQUATION<br />

FOR THE H¥ ANALISYS<br />

A is stable and G( z)<br />

< γ iff there exists Q ≥ 0 satisfying<br />

∞<br />

• (i)<br />

2 −1<br />

Q = AQA '+ BB ' + ( AQC ' + BD ')( γ I − DD '− CQC ') ( AQC '+ BD ')'<br />

• (ii) γ 2<br />

I − DD' − CQC'<br />

> 0<br />

feasibility<br />

2 −1<br />

• (iii) A + ( AQC '+ BD ')( γ I − DD' −CQC')<br />

C<br />

stability


Notes:<br />

The Riccati equation (or its dual) is associated with the simplectic system written above and described by equations<br />

below.<br />

⎡I<br />

⎢<br />

⎣0<br />

2<br />

−1<br />

2<br />

−1<br />

− C'(<br />

γ I − DD'<br />

) C ⎤ ⎡A'+<br />

C'<br />

( γ I − DD')<br />

DB'<br />

η(<br />

t + 1)<br />

=<br />

2<br />

−1<br />

⎥ ⎢<br />

−2<br />

A + BD'<br />

( γ I − DD'<br />

) C⎦<br />

⎣ − B(<br />

I − γ D'D)<br />

B'<br />

0⎤<br />

⎥η(<br />

t)<br />

I ⎦<br />

For the existence of the stabilizing solution of the Riccati equation it is necessary that such a system does not have<br />

characteristic values with unitary modulus.<br />

From the equation it readily follows that:<br />

γ<br />

2<br />

I − G(<br />

z)<br />

G(<br />

z<br />

−1<br />

)'=<br />

Ψ(<br />

z)<br />

Ψ(<br />

z)<br />

= I − C(<br />

zI − A)<br />

−1<br />

2 [ γ I − DD'−CQC']<br />

( BD'+<br />

AQC'<br />

)( γ<br />

2<br />

Ψ(<br />

z<br />

−1<br />

)',<br />

I − DD'−CQC'<br />

)<br />

This explains the fact that γ2I − DD ' −CQC<br />

' must be positive. As will be clear from the proof, the theorem can be<br />

formulated for the inequality<br />

− Q + AQA' + ( AQC ' + BD' )( γ I − DD ' − CQC ' ) ( AQC ' + BD'<br />

)' ≤<br />

2 1 0<br />

without the stability condition (iii).<br />

−<br />

Notice that as γ→∞ (the loop in the figure opens) the Riccati equation boils down to the Lyapunov equation<br />

associated with the H 2 norm.<br />

Proof of Theorem 2
Assume that there exists a solution Q of the Riccati equation. Such an equation can be transformed in the following way (the computations are left to the reader):

γ²I − G(z)G(z^-1)′ = Ψ(z)[γ²I − DD′ − CQC′]Ψ(z^-1)′

where Ψ(z) = I − C(zI − A)^-1 (BD′ + AQC′)(γ²I − DD′ − CQC′)^-1. Hence, on the basis of the property γ²I − DD′ − CQC′ > 0 and the invertibility of Ψ(e^{jϑ}), we have that

γ²I − G(e^{jϑ})G(e^{−jϑ})′ = Ψ(e^{jϑ})[γ²I − DD′ − CQC′]Ψ(e^{−jϑ})′ > 0,  ∀ϑ,

i.e. the thesis.

Vice-versa, assume that ||G(z)||∞ < γ. We prove that the symplectic system of the previous notes does not have characteristic values with unitary modulus. Indeed, assume by contradiction that λ is such, i.e.

λx − λC′(γ²I − DD′)^-1 Cy = (A′ + C′(γ²I − DD′)^-1 DB′)x
λ(A + BD′(γ²I − DD′)^-1 C)y = y − B(I − γ^-2 D′D)^-1 B′x

for a certain [x′ y′]′ ≠ 0, and define p = (γ²I − DD′)^-1 (DB′x + λCy). It results G(λ^-1)G(λ)′ = γ²I, which is a contradiction unless p = 0. On the other hand, p = 0 implies A′x = λx and, since A is stable, x = 0. Finally, p = 0 and x = 0 imply (λA − I)y = 0, so that y = 0 (A is stable). Hence x = 0 and y = 0, which is a contradiction.

Since the symplectic system does not have characteristic values on the unit circle, there are n characteristic values inside (possibly in zero) and n outside (the reciprocals) the unit circle (possibly at infinity). Grouping the 2n-dimensional vectors associated with the n characteristic values inside the unit circle, one can construct a matrix [X′ Y′]′, whose range is an n-dimensional subspace. It follows

[ I   −C′(γ²I − DD′)^-1 C ; 0   A + BD′(γ²I − DD′)^-1 C ] [ X ; Y ] T = [ A′ + C′(γ²I − DD′)^-1 DB′   0 ; −B(I − γ^-2 D′D)^-1 B′   I ] [ X ; Y ]

where T is a matrix with eigenvalues inside the unit circle (a stable matrix). Consider now the solutions of the system r(t+1) = Tr(t), i.e. r(t) = T^t r(0), and right-multiply the two matrix equations above by r(t). Next, define p(t) = B′Xr(t) and q(t) = CYr(t). It follows

p(t+1) = A′p(t) + C′v1(t),  v2(t) = B′p(t) + D′v1(t)
Aq(t+1) = q(t) − Bv2(t),  v1(t) = (1/γ²)(Cq(t+1) + Dv2(t))

from which, passing to the Z-transform,

G(z^-1)′G(z)v1(z) = γ²v1(z),  G(z)G(z^-1)′v2(z) = γ²v2(z)

From the assumption it is v1(t) = 0, v2(t) = 0, so that B′X = 0 and CY = 0. Hence, from the symplectic system it follows A′X = XT, AYT = Y. Assume that there exists a vector d such that Xd = 0 and take the minimal degree polynomial ϖ(T) of T such that ϖ(T)d = 0. Such a polynomial can be written as ϖ(T) = (λI − T)μ(T). For the minimality of ϖ(T) it results μ(T)d ≠ 0. Then AYTd = λAYd = Yd. Since λ has modulus less than 1, λ^-1 has modulus greater than one, so that this equation contradicts the stability of A, unless Yd = 0. On the other hand, this last conclusion, together with Xd = 0, contradicts the n-dimensionality of the range of [X′ Y′]′.
contradicts the n-dimensionality of the range of [X 'Y']'.


THE DIFFERENCE EQUATION<br />

x( t + 1)<br />

= Ax( t ) + Bw( t )<br />

y( t ) = Cx( t ) + Dw( t )<br />

Q( t + 1)<br />

= AQ( t ) A' + ( AQ( t ) C' + BD' )( γ I − DD' − CQ( t ) C' ) ( AQ( t ) C' + BD' )' + BB'<br />

Q( 0 ) = Q > 0<br />

0<br />

Theorem 3<br />

N<br />

N<br />

2 −1<br />

J = ∑ y( t )' y( t ) − γ ∑w( t )' w( t ) − γ x( 0 )' Q x( 0 ) < 0, ∀( w(.), x(<br />

0 )) ≠ 0<br />

t=<br />

0<br />

t=<br />

0<br />

2<br />

2<br />

−<br />

0 1<br />

iff there exists Q(t)≥0 in [0,N] such that γ 2<br />

I − DD' − CQ( t ) C'<br />

><br />

0


Proof of Theorem 3<br />

Consider the cost.<br />

N<br />

N<br />

~<br />

2<br />

J ( N)<br />

= z(<br />

t)'<br />

z(<br />

t)<br />

−γ<br />

w(<br />

t)'<br />

w(<br />

t)<br />

+ x(<br />

N + 1)'<br />

PN<br />

+ 1x(<br />

N + 1)<br />

< 0,<br />

x(<br />

0)<br />

= 0,<br />

∀w(.)<br />

≠ 0<br />

∑<br />

t=<br />

0<br />

~<br />

∑<br />

t=<br />

0<br />

We proof the dual result: J ( N)<br />

< 0,<br />

x(<br />

0)<br />

= 0,<br />

∀w(.)<br />

≠ 0 iff there exists the solution of the equation<br />

P( t ) = A' P( t + 1)<br />

A +<br />

2 −1<br />

+ ( A' P( t + 1) B + C' D )( γ I − D' D − B' P ( t + 1 ) B ) ( A' P( t + 1 ) B + C' D )' + C' C ,<br />

P( N + 1) = PN + 1 > 0<br />

with γ 2<br />

I − D'D − B' P( t + 1) B><br />

0 , in [0,N+1]. Sufficient condition: if there exists the solution with the indicated<br />

properties, then pre-premultiplying the equation by x(t)' and post-multiplying by x(t), and summing all terms, it results<br />

N<br />

∑<br />

t=<br />

0<br />

(*)<br />

where<br />

z(<br />

t)'<br />

z(<br />

t)<br />

−γ<br />

2<br />

N<br />

∑<br />

t=<br />

0<br />

w(<br />

t)'w(<br />

t)<br />

+ x(<br />

N + 1)'<br />

P<br />

~ 1<br />

N + 1<br />

x(<br />

N + 1)<br />

= x(<br />

0)'P<br />

( 0)<br />

x(<br />

0)<br />

−<br />

2<br />

−<br />

w ( t)<br />

= ( I − D'<br />

D − B'<br />

P(<br />

t + 1)<br />

B)<br />

( A'<br />

P(<br />

t + 1)<br />

B + C'<br />

D)'<br />

x(<br />

t)<br />

γ .<br />

Hence , being x(0)=0 we have J ( N)<br />

< 0,<br />

∀w(.)<br />

≠ 0 .<br />

~<br />

~<br />

N<br />

∑<br />

t=<br />

0<br />

( w(<br />

t)<br />

−w~<br />

( t))'(<br />

w(<br />

t)<br />

− w~<br />

( t))<br />

Vice-versa, assume that J ( N)<br />

< 0,<br />

∀w(.)<br />

≠ 0 and take w(.)=0 in [0,N-1]. Being x(0)=0, with such a choice we have<br />

0 > w( N)' − γ 2I+<br />

D'D + BPN + 1B'<br />

w( N), ∀w( N)<br />

≠0<br />

so that<br />

− γ + + + ><br />

2<br />

I D' D BPN 1B'<br />

0.<br />

Furthermore, from<br />

~ ~<br />

J ( N)<br />

= J ( N −1)<br />

− ( w(<br />

N)<br />

− w~<br />

( N ))'(<br />

w(<br />

N)<br />

− w~<br />

( N))<br />

< 0,<br />

∀w(.)<br />

∈ ,<br />

it turns out taht<br />

~<br />

( N −1) > 0,<br />

∀w(.)<br />

∈ N<br />

[ 0,<br />

−1]<br />

J .<br />

By iterating this reasoning we have<br />

γ 2<br />

I − D'D − B' P( t + 1) B><br />

0<br />

in [0,N]. Theorem 3 is proven from (*) by noticing that<br />

−1 2<br />

P( t ) = Q( t ) γ .<br />

[ 0 N ]


STOCHASTIC FILTERING<br />

⎧ x(<br />

t + 1)<br />

= Ax(<br />

t)<br />

+ v1(<br />

t)<br />

⎪<br />

⎨ z(<br />

t)<br />

= Lx(<br />

t)<br />

⎪<br />

⎩ y(<br />

t)<br />

= Cx(<br />

t)<br />

+ v2(<br />

t)<br />

where v1( t), v2 ( t)<br />

are zero-mean white gaussian noises . Moreover,<br />

⎡v1<br />

( t)<br />

⎤ ⎡ V1<br />

E ⎢ =<br />

2(<br />

)<br />

⎥ ⎢<br />

⎣v<br />

t ⎦ ⎣V12<br />

'<br />

V<br />

V<br />

12<br />

2<br />

⎤<br />

⎥,<br />

⎦<br />

V<br />

2<br />

> 0<br />

The problem is to find an estimate z ˆ( t)<br />

in H∞ of a linear combination<br />

z ( t)<br />

= Lx(<br />

t)<br />

of the state, i.e. one wants to find an estimator based on the<br />

observations of { y( i),<br />

i ≤ t − N}<br />

) in such a way that the maximum of the<br />

output spectrum of the error ε = z − zˆ<br />

due to [ v 1'<br />

v2']'<br />

is minimized (or<br />

bounded from above by a certain positive scalar γ).<br />

Letting<br />

u( t)<br />

= −zˆ<br />

( t),<br />

ε ( t)<br />

= Lx(<br />

t)<br />

+ u(<br />

t),<br />

D =<br />

B =<br />

1/<br />

2 [ 0 V2<br />

] , C = C<br />

−1<br />

1/<br />

2<br />

−1/<br />

2<br />

[ ( V −V<br />

V V ')<br />

V V ]<br />

1<br />

12<br />

2<br />

12<br />

12<br />

2<br />

y(<br />

t)<br />

= V<br />

−1/<br />

2<br />

2<br />

y(<br />

t)<br />

and defining with w t<br />

( ) a zero mean white gaussian noise with identity<br />

covariance, it follows:
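The reduction can be checked on scalar data. A sketch with assumed numbers V1 = 2, V12 = 0.5, V2 = 1: the rows B and D reproduce the joint covariance of (v1, v2) from a unit-variance white noise w.

```python
import math

V1, V12, V2 = 2.0, 0.5, 1.0   # assumed covariance data, V2 > 0

# B and D as in the reduction (scalar blocks, so the square roots are scalar)
B = [math.sqrt(V1 - V12 * V12 / V2), V12 / math.sqrt(V2)]
D = [0.0, math.sqrt(V2)]

BB = sum(b * b for b in B)               # BB', should equal V1
BD = sum(b * d for b, d in zip(B, D))    # BD', should equal V12
DD = sum(d * d for d in D)               # DD', should equal V2
print(BB, BD, DD)
```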


BASIC FILTERING PROBLEM

S: x(t+1) = A x(t) + B w(t)
   ε(t)   = L x(t) + u(t)
   y(t)   = C x(t) + D w(t)

F: x̂(t+1) = A_F x̂(t) + B_F y(t − N)
   u(t)    = C_F x̂(t) + D_F y(t − N)

Basic problem: Find A_F, B_F, C_F, D_F such that
• (S,F) is asymptotically stable
• ||T_εw||_∞ < γ
________________________________________________________________
If z(t) = Lx(t) is the variable to be estimated and ẑ(t) the estimated variable, it follows u(t) = −ẑ(t). Hence ε(t) represents the filtering error: z(t) − ẑ(t) = ε(t). If w(t) is a white noise, the problem is that of minimizing the peak value of the error spectrum. If w(t) is a generic signal in l2, the problem is that of minimizing the l2 "gain" between the error and the disturbance.
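As a quick numerical check of the noise normalization introduced above, the sketch below uses arbitrarily chosen scalar values V1, V12, V2 (illustrative assumptions, not from the notes) and verifies that the normalized matrices B = [(V1 − V12 V2^{-1} V12')^{1/2}  V12 V2^{-1/2}] and D = [0  V2^{1/2}] reproduce the original noise covariances: BB' = V1, BD' = V12, DD' = V2.

```python
import math

# Hypothetical scalar noise covariances (illustrative values, not from the notes):
# V2 > 0 and V1 - V12^2/V2 >= 0 so that the square root below exists.
V1, V12, V2 = 2.0, 0.5, 1.0

# Normalized input matrices, driven by a unit-covariance white noise w = [w1, w2]
B = [math.sqrt(V1 - V12 * V12 / V2), V12 / math.sqrt(V2)]
D = [0.0, math.sqrt(V2)]

# The normalization reproduces the original covariances
BB = B[0] * B[0] + B[1] * B[1]   # should equal V1
BD = B[0] * D[0] + B[1] * D[1]   # should equal V12
DD = D[0] * D[0] + D[1] * D[1]   # should equal V2
print(BB, BD, DD)
```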


SOLUTION OF THE FILTERING PROBLEM

Assumptions: A stable, DD' > 0,
(A, B, C, D) does not have unstable invariant zeros

Theorem 4
There exists a filter F solving the basic problem iff ∃ Q ≥ 0 solution of

i)  Q = AQA' + BB' − [CQA' + DB'; LQA']' [DD' + CQC'  CQL'; LQC'  −γ^2 I + LQL']^{-1} [CQA' + DB'; LQA']

ii) V = γ^2 I − LQL' + LQC'(DD' + CQC')^{-1} CQL' > 0   (filter feasibility)

iii) Â = A − [CQA' + DB'; LQA']' [DD' + CQC'  CQL'; LQC'  −γ^2 I + LQL']^{-1} [C; L]  stable

CENTRAL FILTER

ξ(t+1) = A ξ(t) + F1 (y(t) − C ξ(t))
ẑ(t)   = L ξ(t) + F2 (y(t) − C ξ(t))

F1 = (AQC' + BD')(CQC' + DD')^{-1}
F2 = LQC'(CQC' + DD')^{-1}

_________________________________________________________________________
The assumption on the zeros is equivalent to the stabilizability of the pair (Ã, B̃), Ã = A − BD'(DD')^{-1}C, B̃ = B(I − D'(DD')^{-1}D). The assumption of stability of A can be weakened to the assumption of detectability of (A, C) if one is only interested in the stability of the filter.
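A minimal numerical sketch of Theorem 4 and the central filter, assuming the scalar data used in the EXAMPLE at the end of these notes (A = 0.5, B = [1 0], C = 3, D = [0 1], L = 2, so DB' = 0 and DD' = 1) and γ = 0.66: the block Riccati equation i) is iterated to a fixed point, feasibility ii) is checked, and the central gains F1, F2 are formed.

```python
import numpy as np

# Scalar data from the EXAMPLE at the end of the notes (DB' = 0, DD' = 1)
A = np.array([[0.5]]); B = np.array([[1.0, 0.0]])
C = np.array([[3.0]]); D = np.array([[0.0, 1.0]])
L = np.array([[2.0]]); g2 = 0.66 ** 2

Q = np.array([[0.1]])                       # small feasible initialization
for _ in range(300):                        # fixed-point iteration of equation i)
    N = np.block([[D @ D.T + C @ Q @ C.T, C @ Q @ L.T],
                  [L @ Q @ C.T, -g2 * np.eye(1) + L @ Q @ L.T]])
    M = np.vstack([C @ Q @ A.T + D @ B.T, L @ Q @ A.T])
    Q = A @ Q @ A.T + B @ B.T - M.T @ np.linalg.solve(N, M)

S = D @ D.T + C @ Q @ C.T
V = g2 * np.eye(1) - L @ Q @ L.T + L @ Q @ C.T @ np.linalg.solve(S, C @ Q @ L.T)
F1 = (A @ Q @ C.T + B @ D.T) @ np.linalg.inv(S)   # central filter gains
F2 = L @ Q @ C.T @ np.linalg.inv(S)
print(Q[0, 0], V[0, 0], F1[0, 0], F2[0, 0])
```

The iteration converges to the stabilizing solution Q ≈ 1.5318 quoted in the EXAMPLE, with V > 0 and a stable closed loop A − F1 C.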


NOTES

• If G ≠ 0, one obtains the central filter
  ξ(t+1) = A ξ(t) + F1 (y(t) − C ξ(t)) + G u(t)
  ẑ(t)   = L ξ(t) + F2 (y(t) − C ξ(t))

• As γ → ∞ the H2 filter is obtained, where F1 and F2 depend on the stabilizing solution of the standard Riccati equation:
  Q = AQA' + BB' − (CQA' + DB')'(DD' + CQC')^{-1}(CQA' + DB')

• Notice that in the H∞ filter the solution depends on L, i.e. on the particular linear combination of the state to be estimated. This is a difference between H2 and H∞ design.

• The "gains" F1 and F2 are related by:
  F2 = L K
  F1 = [A − BD'(DD')^{-1}C] K + BD'(DD')^{-1}
  with K = QC'(CQC' + DD')^{-1}

In the uncorrelated case (DB' = 0) these relations become F1 = AK, F2 = LK. Hence
  ξ(t+1) = A ξ(t) + AK [y(t) − C ξ(t)]
  ẑ(t)   = L ξ(t) + LK [y(t) − C ξ(t)]
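The relation between F1 and K above, F1 = [A − BD'(DD')^{-1}C] K + BD'(DD')^{-1} with K = QC'(CQC' + DD')^{-1}, is an algebraic identity holding for any symmetric Q. A quick randomized check (arbitrary dimensions and seed, chosen here for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 4, 2, 3                                  # arbitrary dimensions
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n)); D = rng.standard_normal((p, m))
Qh = rng.standard_normal((n, n)); Q = Qh @ Qh.T    # any Q = Q' >= 0

S = C @ Q @ C.T + D @ D.T                          # CQC' + DD'
R = D @ D.T                                        # DD' > 0 (D full row rank)
K = Q @ C.T @ np.linalg.inv(S)

F1_direct = (A @ Q @ C.T + B @ D.T) @ np.linalg.inv(S)
F1_via_K = (A - B @ D.T @ np.linalg.inv(R) @ C) @ K + B @ D.T @ np.linalg.inv(R)
print(np.max(np.abs(F1_direct - F1_via_K)))
```

The identity follows from BD'(DD')^{-1}(I − CK) = BD'(CQC' + DD')^{-1}, and it reduces to F1 = AK when DB' = 0.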


BASIC ONE-STEP PREDICTION PROBLEM

Assume that we want to estimate z(t) = Lx(t) on the basis of the data y(i), i ≤ t.

S: x(t+1) = A x(t) + B w(t) + G u(t)
   ε(t)   = L x(t) + u(t)
   y(t)   = C x(t) + D w(t)

P: x̃(t+1) = A_P x̃(t) + B_P y(t)
   u(t)    = C_P x̃(t)

Basic problem: Find A_P, B_P, C_P such that:
• (S,P) is asymptotically stable
• ||T_εw||_∞ < γ


SOLUTION OF THE ONE-STEP PREDICTION PROBLEM

Assumption: A stable, DD' > 0, (A, B, C, D) without unstable invariant zeros

Theorem 5
There exists a one-step predictor P solving the problem iff ∃ Q ≥ 0 satisfying

i)  Q = AQA' + BB' − [CQA' + DB'; LQA']' [DD' + CQC'  CQL'; LQC'  −γ^2 I + LQL']^{-1} [CQA' + DB'; LQA']

ii) γ^2 I − LQL' > 0   (predictor feasibility)

iii) Â = A − [CQA' + DB'; LQA']' [DD' + CQC'  CQL'; LQC'  −γ^2 I + LQL']^{-1} [C; −L]  stable

CENTRAL PREDICTOR

η(t+1) = A η(t) + F3 (y(t) − C η(t))
z̃(t)   = L η(t)

F3 = (AQC' + BD')(CQC' + DD')^{-1} + A_X QL' V^{-1} LQC'(CQC' + DD')^{-1}
A_X = A − (AQC' + BD')(CQC' + DD')^{-1} C
V = γ^2 I − LQL' + LQC'(DD' + CQC')^{-1} CQL'
_________________________________________________________________________
The assumption on the zeros is equivalent to the stabilizability of the pair (Ã, B̃), Ã = A − BD'(DD')^{-1}C, B̃ = B(I − D'(DD')^{-1}D). The assumption of stability of A can be weakened to that of detectability of (A, C), if one is only interested in the stability of the predictor.
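A small numeric sanity check of the central predictor, assuming the scalar data of the EXAMPLE at the end of the notes but with γ = 3 (at small γ the predictor feasibility γ²I − LQL' > 0 fails for these data). The H∞ norm of the prediction-error transfer function T_εw(z) = L(zI − A + F3 C)^{-1}(F3 D − B) is estimated on a frequency grid and checked against the bound:

```python
import numpy as np

A, C, L, gamma = 0.5, 3.0, 2.0, 3.0
B = np.array([1.0, 0.0]); D = np.array([0.0, 1.0])    # DB' = 0, DD' = 1

# Stabilizing solution of Q = A (Q^-1 + C'C - L'L/gamma^2)^-1 A' + BB'
Q = 1.0
for _ in range(200):
    Q = A * A / (1.0 / Q + C * C - L * L / gamma ** 2) + 1.0

F3 = A * C / (1.0 / Q + C * C - L * L / gamma ** 2)   # uncorrelated-case gain

# H-infinity norm of T(z) = L (z - (A - F3 C))^-1 (F3 D - B) on a frequency grid
z = np.exp(1j * np.linspace(0.0, np.pi, 2000))
T = L / (z[:, None] - (A - F3 * C)) * (F3 * D - B)    # 1 x 2 row transfer function
hinf = np.max(np.sqrt(np.sum(np.abs(T) ** 2, axis=1)))
print(Q, gamma ** 2 - L * Q * L, F3, hinf)
```

With these numbers the predictor feasibility condition holds and the estimated norm stays below γ = 3.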


NOTES

• If G ≠ 0, the so-called central predictor is obtained
  η(t+1) = A η(t) + F3 (y(t) − C η(t)) + G u(t)
  z̃(t)   = L η(t)

• When γ → ∞ the H2 predictor is recovered. Moreover, F3 → F1, i.e. the gain of the predictor and the gain of the filter coincide and depend on the stabilizing solution of the standard Riccati equation:
  Q = AQA' + BB' − (CQA' + DB')'(DD' + CQC')^{-1}(CQA' + DB')

• Notice that the H∞ predictor depends on L, i.e. on the particular linear combination of the state to be estimated. This is a difference between H2 and H∞ design.

• The "gain" F3 of the predictor and the "gains" F1 and F2 of the filter are related by:
  F3 = F1 + (A − F1 C) QL' V^{-1} F2
________________________________________________________________
In the uncorrelated case (DB' = 0) and under the assumption of reachability of (A, B), the Riccati equation of Theorems 4/5 can be rewritten in the following way:
  Q = A [Q^{-1} + C'(DD')^{-1}C − L'L/γ^2]^{-1} A' + BB'
Feasibility condition for the predictor:  γ^2 I − LQL' > 0
Feasibility condition for the filter:  Q^{-1} + C'C − L'L γ^{-2} > 0
Filter/Predictor gains:
  F1 = A(Q^{-1} + C'C)^{-1} C',   F2 = L(Q^{-1} + C'C)^{-1} C',   F3 = A(Q^{-1} + C'C − L'L/γ^2)^{-1} C'


PREDICTOR VS FILTER

The Riccati equation is the same, but the feasibility conditions are different.

System for the prediction error:
  μ(t+1) = (A − F3 C) μ(t) + (F3 D − B) w(t)
  e(t) = L μ(t)
  feasibility (analysis):  γ^2 I − LQL' > 0

System for the filtering error:
  η(t+1) = (A − F1 C) η(t) + (F1 D − B) w(t)
  e(t) = (L − F2 C) η(t) + F2 D w(t)
  feasibility (analysis):
  γ^2 I − LQL' + LQC'(DD' + CQC')^{-1} CQL' − (F2 − F̄2)(DD' + CQC')(F2 − F̄2)' > 0,
  with F̄2 = LQC'(CQC' + DD')^{-1}
________________________________________________________________
1) At fixed γ, the feasibility condition of the predictor is more stringent than that of the filter. They coincide as γ tends to infinity, witnessing the fact that the Kalman predictor and filter share the same existence conditions.
2) Assuming the existence of the stabilizing solution of the Riccati equation satisfying the feasibility condition, the sufficient part of Theorems 4/5 is proved from the analysis result. Indeed, the analysis Riccati equation coincides with the synthesis Riccati equation once the gains F1, F2 (for the filter) or F3 (for the predictor) are substituted there.
there.


Proof of Theorem 4/5
For simplicity we study the case where DB' = 0, DD' = I, (A, B) reachable. Moreover, we limit the proof to filters (predictors) in observer form. This is a restriction, but the analysis of all filters (predictors) ensuring the norm bound is far from trivial and is not within our scope. Notice that if F1 = AK and F2 = LK are the characteristic gains of a generic filter, the transfer function from the disturbance to the error is
  T_εw = (L − LKC)(zI − A + AKC)^{-1}(AKD − B) + LKD
so that the analysis Riccati equation is
  Q = A(X^{-1} − L'L γ^{-2})^{-1} A' + BB',   X = (I − KC)Q(I − KC)' + KK'   (1)
with the feasibility condition
  γ^2 I − LXL' > 0   (2)
and asymptotic stability of
  A(I − KC) + AXL'(γ^2 I − LXL')^{-1} L(I − KC)   (3)
On the other hand, if F3 is the gain of a generic one-step predictor, the transfer function is
  T_εw = L(zI − A + F3 C)^{-1}(F3 D − B),
and then the analysis Riccati equation is:
  Q = A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB' + (F3 − AYC'(I + CYC')^{-1})(I + CYC')(F3 − AYC'(I + CYC')^{-1})',   Y = (Q^{-1} − L'L γ^{-2})^{-1}   (4)
with the feasibility condition
  γ^2 I − LQL' > 0   (5)
and asymptotic stability of
  A − F3 C + (A − F3 C) QL'(γ^2 I − LQL')^{-1} L   (6)
Now, assume that there exists Q ≥ 0 satisfying Q = A[Q^{-1} + C'C − L'L γ^{-2}]^{-1} A' + BB', and such that Q^{-1} + C'C − L'L γ^{-2} > 0 and A(Q^{-1} + C'C − L'L γ^{-2})^{-1} Q^{-1} stable. Take the filter characterized by F1 = AK, F2 = LK, with K = (Q^{-1} + C'C)^{-1} C'. From (1) it results X = (Q^{-1} + C'C)^{-1} and hence
  Q = AXA' + BB' + AXL'(γ^2 I − LXL')^{-1} LXA' = A(X^{-1} − L'L γ^{-2})^{-1} A' + BB' = A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB'
Therefore, the analysis Riccati equation admits a positive semidefinite solution with
  γ^2 I − LXL' = γ^2 I − LQL' + LQC'(I + CQC')^{-1} CQL' > 0,
so that the feasibility condition is satisfied. Also the stability condition (3) is satisfied. Hence the norm is less than γ.
Assume now that there exists Q ≥ 0 satisfying Q = A[Q^{-1} + C'C − L'L γ^{-2}]^{-1} A' + BB', and such that Q^{-1} − L'L γ^{-2} > 0 and A(Q^{-1} + C'C − L'L γ^{-2})^{-1} Q^{-1} stable. Take the predictor defined by F3 = A(Q^{-1} + C'C − L'L γ^{-2})^{-1} C'. From (4) it results that F3 = AYC'(I + CYC')^{-1} and hence Q = AYA' + BB' − AYC'(I + CYC')^{-1} CYA' = A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB'. The analysis Riccati equation admits a positive semidefinite solution, and Q^{-1} − L'L γ^{-2} > 0 coincides with the feasibility condition (5) for the predictor. Also the stability condition (6) is satisfied. Hence the norm is less than γ.
Vice-versa, assume that there exists a filter such that the norm is less than γ. Then, there exists Q solving (1)-(3). On the other hand, X = (Q^{-1} + C'C)^{-1} + (K − (Q^{-1} + C'C)^{-1} C')(I + CQC')(K − (Q^{-1} + C'C)^{-1} C')' ≥ (Q^{-1} + C'C)^{-1}, and hence X^{-1} ≤ Q^{-1} + C'C implies that Q solves the inequality Q ≥ A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB', whereas γ^2 I − LXL' > 0 with X > 0 implies that Q^{-1} + C'C − L'L γ^{-2} > 0. The existence of Q solving Q = A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB' with Q^{-1} + C'C − L'L γ^{-2} > 0 and satisfying the stability condition follows from standard monotonicity results on the solutions of the Riccati equation.
Finally, assume that there exists a predictor such that the norm is less than γ. Then, there exists Q solving (4)-(6). Hence Q ≥ A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB' with γ^2 I − LQL' > 0. The existence of Q solving Q = A(Q^{-1} + C'C − L'L γ^{-2})^{-1} A' + BB' with γ^2 I − LQL' > 0 and satisfying the stability condition follows from standard monotonicity results on the solutions of the Riccati equation.


J-SPECTRAL FACTORIZATION

The Riccati equation underlying the problem solution can be compactly written as:
  Q = AQA' + BB' − [AQC̃' + BD̃'][R + C̃QC̃']^{-1}[C̃QA' + D̃B']
  C̃ = [C; L],   D̃ = [D; 0],   R = [DD'  0; 0  −γ^2 I]

Let us define:
  H1(z) = D + C(zI − A)^{-1} B
  H2(z) = L(zI − A)^{-1} B
  H(z) = [H1(z)  0; H2(z)  I]
  J = [I  0; 0  −γ^2 I]

Theorem 6
Assume that A is stable. There exists a stable filter F(z) such that H2(z) − F(z)H1(z) is stable and
  ||H2(z) − F(z)H1(z)||_∞ < γ
if and only if there exists a square Ω(z), stable with stable inverse, with stable [Ω(z)^{-1}]_11, solving the factorization problem
  H(z) J H(z^{-1})' = Ω(z) J Ω(z^{-1})'
All feasible filters can be parametrized in the following way:
  F(z) = (Ω21(z) − Ω22(z)U(z))(Ω11(z) − Ω12(z)U(z))^{-1},   U(z) ∈ H∞,   ||U(z)||_∞ < γ
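The constant right factor used in the J-spectral factor below is a block "square root" of R + C̃QC̃' with respect to J: a lower block-triangular W with W diag(I, −I) W' = R + C̃QC̃'. A numeric check with the scalar data of the EXAMPLE at the end of the notes (γ = 0.66, Q = 1.5318, DD' = 1), assuming this block structure:

```python
import numpy as np

g2 = 0.66 ** 2
Q = 1.5318                          # stabilizing solution from the EXAMPLE data
C, L = 3.0, 2.0                     # DD' = 1

S = 1.0 + C * Q * C                              # DD' + CQC'
V = g2 - L * Q * L + (L * Q * C) ** 2 / S        # feasibility block V > 0
M = np.array([[S, C * Q * L],                    # R + C~ Q C~'
              [L * Q * C, L * Q * L - g2]])
W = np.array([[np.sqrt(S), 0.0],
              [L * Q * C / np.sqrt(S), np.sqrt(V)]])
J = np.diag([1.0, -1.0])
print(np.max(np.abs(W @ J @ W.T - M)))
```

The (2,2) block works out because LQC' S^{-1} CQL' − V = LQL' − γ², which is exactly why the existence of V^{1/2} (feasibility) is what makes the factorization possible.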


Notes:
From z = H2(z)w, y = H1(z)w, ẑ = F(z)y, it results z − ẑ = (H2(z) − F(z)H1(z))w, and then the filtering problem can be equivalently rewritten as a model matching problem.

Proof of Theorem 6
Assume that there exists a feasible filter. We limit ourselves to the case where the filter is in observer form. From Theorem 4 we know that there exists a positive semidefinite solution Q of the Riccati equation. Moreover, such a solution is stabilizing and feasible. It is a matter of cumbersome computations to show that
  Ω(z) = [I + C̃(zI − A)^{-1}(BD̃' + AQC̃')(R + C̃QC̃')^{-1}] [ (DD' + CQC')^{1/2}  0; LQC'(DD' + CQC')^{-1/2}  V^{1/2} ]
  V = γ^2 I − LQL' + LQC'(DD' + CQC')^{-1} CQL'
is a J-spectral factor. The stability of Ω(z) follows from stability of A. Stability of Ω(z)^{-1} and [Ω(z)^{-1}]_11 follows from Q being a stabilizing solution. Existence of V^{1/2} follows from the feasibility condition.
Vice-versa, assume that Ω(z) with the given properties exists and take the formula defining all filters F(z). It follows
  [F(z)  −I] Ω(z) = −(Ω22(z) − Ω21(z)Ω11(z)^{-1}Ω12(z))(I − U(z)Ω11(z)^{-1}Ω12(z))^{-1} [U(z)  I]   (*)
Notice that, thanks to the small gain theorem, (I − U(z)Ω11(z)^{-1}Ω12(z))^{-1} is stable. Indeed, U(z) is stable and ||U(z)||_∞ < γ.


DUALITY

The prediction problem is dual with respect to the full-state control problem (x is measurable).
The filtering problem is dual with respect to the full-information control problem (x and w are measurable).

S: x(t+1) = A x(t) + B w(t)
   ε(t)   = L x(t) + u(t)
   y(t)   = C x(t) + D w(t)

S'_R: x̆(t+1) = A' x̆(t) + L' ε̆(t) + C' y̆(t)
      w̆(t)   = B' x̆(t) + D' y̆(t)

Theorem 7
y̆(t) = −K1 x̆(t) − K2 ε̆(t) stabilizes S'_R and ||T_w̆ε̆||_∞ < γ   (FI-SF)
iff
ξ(t+1) = A ξ(t) + K1'(y(t) − C ξ(t))
−u(t)  = L ξ(t) + K2'(y(t) − C ξ(t))
stabilizes S and ||T_εw||_∞ < γ.   (OE)
_________________________________________________________________
Notice that if K2 is zero (state feedback), the filter is indeed a predictor. The proof of Theorem 7 is immediate: it is enough to verify that for any K1 and K2 it results T_εw = T_w̆ε̆'. Actually, an extended version of this result allows one to link any full-information controller K(z) for system S'_R to a filter F(z) for system S.
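The identity T_εw(z) = T_w̆ε̆(z)' behind Theorem 7 can be verified numerically. For the observer-form filter, the closed loop gives T_εw(z) = (L − K2'C)(zI − A + K1'C)^{-1}(B − K1'D) − K2'D, while the dual closed loop gives T_w̆ε̆(z) = (B' − D'K1)(zI − A' + C'K1)^{-1}(L' − C'K2) − D'K2. A sketch with arbitrary seeded random matrices (dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m, q = 4, 2, 3, 2                     # arbitrary dimensions (q = dim of z)
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n)); D = rng.standard_normal((p, m))
L = rng.standard_normal((q, n))
K1 = rng.standard_normal((p, n)); K2 = rng.standard_normal((p, q))

z = np.exp(1j * 0.7)                        # any point on the unit circle
I = np.eye(n)
# closed loop of S with the observer-form filter of Theorem 7
Tf = (L - K2.T @ C) @ np.linalg.inv(z * I - A + K1.T @ C) @ (B - K1.T @ D) - K2.T @ D
# closed loop of the dual system S'_R with the full-information control law
Td = (B.T - D.T @ K1) @ np.linalg.inv(z * I - A.T + C.T @ K1) @ (L.T - C.T @ K2) - D.T @ K2
print(np.max(np.abs(Tf - Td.T)))
```

Note that plain transposition (no conjugation) is used, since the identity holds pointwise in z.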


KREIN SPACES

The filtering (prediction) problem can be interpreted as a Kalman filtering (prediction) problem in a Krein space (instead of a Hilbert space):

x(t+1) = A x(t) + v1(t)
z(t)   = L x(t)
y(t)   = C x(t) + v2(t)

with
  C̃ = [C; L],   E{ [v1; v2][v1'  v2'] } = [ BB'  BD'  0; DB'  DD'  0; 0  0  −γ^2 I ]

The last (indefinite) block −γ^2 I is the Krein-space covariance attached to the fictitious measurement z(t).


RISK SENSITIVITY

x(t+1) = A x(t) + v1(t)
y(t)   = C x(t) + v2(t)

where the noises are white Gaussian noises with zero mean.
The problem of the risk-sensitive filter is to minimize

  min_ẑ μ(θ) = min_ẑ [ −(2/θ) log E( e^{−(θ/2) ||z − ẑ(y)||^2} ) ]

The parameter θ is the risk parameter:
θ = 0  risk-neutral (Kalman)
θ > 0  risk-seeking (modified Kalman)
θ < 0  risk-averse (connected with the H∞ filter)
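For a scalar Gaussian error e ~ N(0, 1) (an illustrative assumption, not from the notes), E[e^{−(θ/2)e²}] = (1 + θ)^{−1/2}, so the risk-sensitive index reduces to μ(θ) = ln(1 + θ)/θ, which tends to the mean-square error E[e²] = 1 as θ → 0, i.e. the risk-neutral (Kalman) case. A small sketch checking this against numerical integration:

```python
import numpy as np

theta = 0.01                                    # small risk parameter
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # N(0,1) density of the error e

# mu(theta) = -(2/theta) log E[ exp(-(theta/2) e^2) ]
Ee = np.sum(np.exp(-theta * x ** 2 / 2) * phi) * dx
mu = -(2.0 / theta) * np.log(Ee)
print(mu, np.log(1.0 + theta) / theta)
```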


MIXED H2/H∞ FILTERING

It makes sense to distinguish between two classes of disturbances: those whose statistics are known and those which are only required to be bounded. For instance, in general, the spectral characteristics of the transducer errors are known, whereas those related to modelling errors are unknown. Consequently, we can write a mixed filtering problem associated with a variable z(t):

x(t+1) = A x(t) + B w(t) = A x(t) + B1 w1(t) + B2 w2(t)
z(t)   = L x(t)
y(t)   = C x(t) + D w(t) = C x(t) + D1 w1(t) + D2 w2(t)
ε(t)   = z(t) − ẑ(t)

Mixed Problem
Find a filter solving:
  inf { ||T_ε,w1(F)||_2   s.t.   ||T_ε,w2(F)||_∞ ≤ γ }

The solution to this problem is nontrivial if γ < ||T_ε,w2(Kalman)||_∞. Up to now no closed-form solution exists. It can be proven that the optimal mixed solution F° is on the "boundary", i.e. such that ||T_ε,w2(F°)||_∞ = γ.
___________________________________________________________________
Obviously, the mixed problem makes sense since there exists an infinite number of filters F (a parametrized family) such that ||T_ε,w2(F)||_∞ ≤ γ.


OBSERVATIONS

The gain F3 of the predictor ensuring the norm less than γ can be parametrized in the following way: F3 = W1^{-1} W2, where (W1, W2) satisfies an elliptic equation.

Convex Optimization
  min tr(XW) ≥ min ||T_ε,w1(W1^{-1} W2)||_2^2,   W = [W1  W2; W2'  W3]
where X depends on the data. If w1 = w2, then one obtains the central predictor. In this way one simply minimizes an upper bound of the true cost. The solution may be too conservative.

Search for boundary values
The boundary values (norm equal to γ) are on the tangency sets between the ellipsoid and the planes passing through the origin.
[Figure: ellipsoid in the (W1, W2) plane, with ±γ^2 marked on the axes, and tangent planes through the origin.]


α-PROCEDURE

We can parametrize with a scalar variable α a family of predictors (defined by the only gain K) which ensure the norm less than γ with minimal 2-norm.

Idea
[Figure: trade-off between ||T_εw(K)||_2 and ||T_εw(K)||_∞ ≤ γ as α varies.]

  K(α) = (1 − α) Kc + α (ĀQ(α)C' + B̄D')(CQ(α)C' + DD')^{-1}
  Q(α) = ĀQ(α)Ā' + B̄B̄' − (2α − α^2)(ĀQ(α)C' + B̄D')(CQ(α)C' + DD')^{-1}(ĀQ(α)C' + B̄D')'
  Ā = A − Kc C,   B̄ = B − Kc D
where Kc is such that ||T_εw(Kc)||_∞ < γ.


S: x(t+1) = A x(t) + B w(t)
   z(t)   = L x(t)
   y(t)   = C x(t) + D w(t)

FINITE HORIZON

Finite horizon problem in [1, T]:
Find a linear causal filter based on y(1), y(2), …, y(t), t ≤ T, providing an estimate ẑ(t) guaranteeing a level of attenuation γ > 0, i.e. such that J_F < 0, where Ψ > 0 is a weighting matrix (it plays the role played by the initial covariance matrix in the Kalman filter design). Putting the two equations together we get the difference Riccati equation:
  Q(t+1) = A [Q(t)^{-1} + C'C − L'L/γ^2]^{-1} A' + BB',   with Q(1) = AΨA' + BB'.


SOLUTION OF THE FINITE HORIZON PROBLEM

η(t+1) = A η(t) + K(t+1) [y(t+1) − C A η(t)]
ẑ(t)   = L η(t)

The gain is given by:
  K(t) = Q(t) C' [I + C Q(t) C']^{-1}
where Q(t) is the solution of the difference Riccati equation.

QUESTIONS
Under which conditions is it possible to guarantee the existence of the time-varying filter for an arbitrarily long interval?
Under which conditions is it possible to guarantee the convergence, for t → ∞, to the H∞ filter?
____________________________________________________________
The time-varying filter can be written as
  ξ(t+1) = A ξ(t) + A K(t) [y(t) − C ξ(t)]
  ẑ(t)   = L ξ(t) + L K(t) [y(t) − C ξ(t)]
The gains F1(t) = AK(t), F2(t) = LK(t) have the same structure as in the stationary case (uncorrelated noises).
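A sketch of the finite-horizon recursion, assuming the scalar data of the EXAMPLE below with Ψ = 1 (so Q(1) = AΨA' + BB' = 1.25): the difference Riccati equation stays feasible at every step and converges to the stationary solution Qs ≈ 1.5318, and the gains K(t), F1(t), F2(t) follow.

```python
# Scalar data: A = 0.5, BB' = 1, C = 3, L = 2, DD' = 1, gamma = 0.66, Psi = 1
A, C, L, gamma, Psi = 0.5, 3.0, 2.0, 0.66, 1.0

Q = A * Psi * A + 1.0                        # Q(1) = A Psi A' + BB'
feasible = True
for _ in range(200):
    feasible = feasible and (1.0 / Q + C * C - L * L / gamma ** 2 > 0)
    Q = A * A / (1.0 / Q + C * C - L * L / gamma ** 2) + 1.0

K = Q * C / (1.0 + C * Q * C)                # K(t) = Q(t) C' (I + C Q(t) C')^-1
F1, F2 = A * K, L * K                        # time-varying gains
print(Q, feasible, F1, F2)
```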


FEASIBILITY AND CONVERGENCE

Assumptions:
1) A invertible
2) There exists the stabilizing solution Qs of
   Q = A [Q^{-1} + C̃' R̃^{-1} C̃]^{-1} A' + BB'
3) The pair (A, B) is reachable

Thanks to these assumptions, there exists the unique (positive semidefinite) solution Y of the Lyapunov equation
   Y = Qs^{-1} [Qs^{-1} + C̃' R̃^{-1} C̃]^{-1} A' Y A [Qs^{-1} + C̃' R̃^{-1} C̃]^{-1} Qs^{-1} − C̃' (R̃ + C̃ Qs C̃')^{-1} C̃

Theorem 10
(i) The solution Q(t) of the difference Riccati equation is feasible, ∀ t > 0, if, for some ε > 0:
   0 < Q(1) < Qs + [ Y − C̃' (R̃ + C̃ Qs C̃')^{-1} C̃ + εI ]^{-1}
(ii) The solution Q(t) of the difference Riccati equation is feasible and converges to the stabilizing solution Qs if
   0 < Q(1) < Qs + [ Y + εI ]^{-1}
____________________________________________________________
It is possible to see that the conditions introduced in the theorem are also necessary in two significant cases: L = 0 (γ → ∞) and C = 0. In the first case Y = 0, and therefore one recovers the condition of convergence for the Kalman filter (Q(1) > 0), whereas in the second case the condition of convergence is equivalent to
   S(1) = Q(1)^{-1} + C'C − L'L γ^{-2} > S_a,
where S_a is the antistabilizing solution of
   S = (A S^{-1} A' + BB')^{-1} + C'C − L'L γ^{-2}
S=(AS -1 A'+BB') -1 +C'C-L'Lγ -2


γ-SWITCHING

If Q(1) is "too large", nothing can be said on the convergence of Q(t, γs) towards the stabilizing solution Qs(γs) associated with a given value of γ = γs.
The convergence condition is: 0 < Q(1) < Qs(γ) + [Y(γ) + εI]^{-1}.

Theorem 11
(i) The solution Y(γ) of the Lyapunov equation goes to zero as γ → ∞
(ii) The stabilizing solution Qs(γ) of the Riccati equation decreases as γ increases

Thanks to point (i) of the theorem, if Q(1) does not satisfy the convergence condition, one can find a value γ1 > γs such that the convergence condition is satisfied for γ = γ1. The solution of the Riccati equation with γ = γ1 converges to Qs(γ1). On the other hand, thanks to point (ii) of the theorem, one can say that
   Qs(γ1) ≤ Qs(γs) < Qs(γs) + (Y(γs) + εI)^{-1}
so that ε > 0 and tε > 0 are such that
   Q(tε, γ1) < Qs(γs) + (Y(γs) + εI)^{-1}.
Finally, imposing
   γ(t) = γ1 for 1 ≤ t < tε,   γ(t) = γs for t ≥ tε,
the solution Q(t, γ(t)) tends to Qs(γs) as t → ∞.
_______________________________________________________
The γ-switching strategy modifies the cost:
   J̃_F = Σ_{t=1}^{T} ||z(t) − ẑ(t)||^2 / γ(t)^2 − [ Σ_{t=0}^{T−1} ||w(t)||^2 + (x(0) − x̂0)' Ψ^{-1} (x(0) − x̂0) ]


EXAMPLE

A = 0.5, B = [1  0], C = 3, D = [0  1], L = 2, Ψ = 12 (so that Q(1) = AΨA' + BB' = 4)

Difference Riccati equation:
   Q(t+1) = 0.25 [Q(t)^{-1} + 9 − 4γ^{-2}]^{-1} + 1,   Q(1) = 4

Feasibility condition:
   Q(t) < γ^2 / (4 − 9γ^2)

Algebraic Riccati equation:
   Q = 0.25 [Q^{-1} + 9 − 4γ^{-2}]^{-1} + 1

Choice: γs = 0.66
Hence, Qs(γs) = 1.5318 is the stabilizing solution
Q(1) < 5.4724: feasibility condition
Q(1) < 0.6576: convergence condition
Integrating the equation with γ = γs it results Q(t) = 4, 4.7167, 9.5394, −2.2089, 0.6066, …, and then feasibility is lost after three steps.
Taking γ1 in such a way that Q(1) = 4 satisfies the convergence condition for γ = γ1, and then switching back to γs, the solution converges to Qs(γs).
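The divergent sequence above, and the γ-switching remedy, can be reproduced with a few lines (the switching instant tε = 30 and the value γ1 = 2 below are arbitrary illustrative choices, not from the notes):

```python
A2, BB, CC, LL = 0.25, 1.0, 9.0, 4.0        # A^2, BB', C'C, L'L

def step(Q, gamma):
    # Q(t+1) = 0.25 [Q(t)^-1 + 9 - 4/gamma^2]^-1 + 1
    return A2 / (1.0 / Q + CC - LL / gamma ** 2) + BB

# gamma = gamma_s = 0.66 from Q(1) = 4: the feasibility bound Q < 5.4724 is exceeded
Q, seq = 4.0, []
for _ in range(4):
    Q = step(Q, 0.66)
    seq.append(Q)
print(seq)                                   # matches the slide sequence up to rounding

# gamma-switching: run with gamma_1 = 2 first, then switch back to gamma_s
Q = 4.0
for t in range(200):
    Q = step(Q, 2.0 if t < 30 else 0.66)
print(Q)                                     # tends to Qs(gamma_s) = 1.5318
```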


REFERENCES

T. Basar, P. Bernhard, "H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach", Birkhauser, 1991.
P. Bolzern, P. Colaneri, G. De Nicolao, "Transient and asymptotic analysis of discrete-time H∞ filters", European Journal of Control, to appear.
J.C. Geromel, P.L.D. Peres, S.R. Souza, "A convex approach to the mixed H2/H∞ control problem for discrete-time uncertain systems", SIAM J. Control and Optimization, 33, 6, pp. 1816-1833, 1995.
M. Green, K. Glover, D. Limebeer, J. Doyle, "A J-spectral factorization approach to H∞ control", SIAM J. Control and Optimization, 28, 6, pp. 1350-1371, 1990.
M.J. Grimble, A. El Sayed, "Solution of the H∞ optimal linear filtering problem for discrete-time systems", IEEE Trans. ASSP, 38, 7, pp. 1092-1104, 1990.
W. Haddad, D. Bernstein, D. Mustafa, "Mixed H2/H∞ regulation and estimation", Systems and Control Letters, 16, 1991.
B. Hassibi, T. Kailath, "H∞ bounds for recursive least-squares algorithms", Proc. 33rd Conference on Decision and Control, Lake Buena Vista, pp. 3927-3938, 1994.
B. Hassibi, T. Kailath, "H∞ optimality of the LMS algorithm", IEEE Trans. Signal Processing, 44, 2, pp. 267-280, 1996.
B. Hassibi, A.H. Sayed, T. Kailath, "Linear estimation in Krein spaces - Part I: Theory", IEEE Trans. AC, 41, pp. 18-33, 1996.
B. Hassibi, A.H. Sayed, T. Kailath, "Linear estimation in Krein spaces - Part II: Applications", IEEE Trans. AC, 41, pp. 34-49, 1996.
D.H. Jacobson, "Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic games", IEEE Trans. Automatic Control, AC-18, 2, 1973.
P. Lancaster, L. Rodman, "Algebraic Riccati Equations", Oxford University Press, 1995.
D.J. Limebeer, M. Green, D. Walker, "Discrete-time H∞ control", Proc. 28th Conference on Decision and Control, Tampa (USA), pp. 392-396, 1989.
H. Rotstein, M. Sznaier, M. Idan, "H2/H∞ filtering theory and an aerospace application", Int. J. Robust and Nonlinear Control, 6, pp. 347-366, 1996.
J. Speyer, J. Deyst, D.H. Jacobson, "Optimization of stochastic linear systems with additive measurement and process noise using exponential performance criteria", IEEE Trans. Automatic Control, AC-19, pp. 358-366, 1974.
J.L. Speyer, C. Fan, R.N. Banavar, "Optimal stochastic estimation with exponential cost criteria", Proc. Conference on Decision and Control, pp. 2293-2298, 1992.
A.A. Stoorvogel, A. Saberi, B.M. Chen, "The discrete-time H∞ control problem with measurement feedback", Int. J. Robust and Nonlinear Control, 4, pp. 457-479, 1994.
Y. Theodor, U. Shaked, N. Berman, "Time-domain H∞ identification", IEEE Trans. on Automatic Control, 41, pp. 1019-1022, 1996.
M. Vidyasagar, "Control System Synthesis: A Factorization Approach", MIT Press, 1985.
P. Whittle, "Risk Sensitive Optimal Control", New York: Wiley, 1990.
I. Yaesh, U. Shaked, "Game theory approach to optimal linear estimation in the minimum H∞-norm sense", Proc. 28th Conference on Decision and Control, Tampa, pp. 421-425, 1989.
I. Yaesh, U. Shaked, "A transfer function approach to the problem of discrete-time systems: H∞-optimal linear control and filtering", IEEE Trans. on Automatic Control, 36, 11, pp. 1264-1271, 1991.
I. Yaesh, U. Shaked, "Game theory approach to state estimation of linear discrete-time processes and its relation to H∞-optimal estimation", Int. J. Control, 55, 6, pp. 1443-1452, 1992.
