
Robust Control

Ad Damen and Siep Weiland

(lecture notes for the course Robuuste Regelingen, 5P430, autumn trimester)

Measurement and Control Group
Department of Electrical Engineering
Eindhoven University of Technology
P.O. Box 513
5600 MB Eindhoven

Draft version of August 23, 2001


Preface

Outline

This course has the character of a working group. This means that you will not be offered a ready-made portion of `science' to study; instead, active participation will be expected from you, in the form of contributions to discussions and presentations. In this course we want to offer an overview of modern, partly still developing, techniques for the design of robust controllers for dynamical systems.

In the first half of the trimester the theory of robust controller design will be treated in regular lectures. Basic classical control engineering is required as prior knowledge, and familiarity with LQG control and with matrix calculus/functional analysis is recommended. The lectures will emphasise making robust controller design accessible to control engineers, not an exhaustive analysis of the underlying mathematics. During this period, sets of exercises will be handed out on six occasions; they are meant to give you experience with both the theoretical and the practical aspects of this subject. These instruction exercises will be corrected and will count towards the final grade through a bonus system (see `Assessment' below).

The usefulness and the limitations of the theory will then be tested against various applications, which will be presented and discussed by you and your fellow students in the second half of the course. The assignments are partly set up for individual solution and partly for elaboration in pairs. You can choose from:

1. the critical evaluation of an article from the applied scientific literature;

2. a controller design for a computer simulation;

3. a controller design for a laboratory process.

More information will be given in the first lecture, where sign-up lists will be available.

Each presentation takes 45 minutes, including discussion time. The hours and the scheduling of the presentations will be announced later. Materials needed for the presentations (transparencies, pens, etc.) will be provided and can be obtained at the secretariat of the Measurement and Control Group (E-hoog 4.32, open in the mornings). You are expected to attend at least 13 presentations and to take an active part in the discussions. An attendance list will be kept for this purpose.

On your findings concerning the topic you have chosen you will write a short report (at most four A4 pages), to be handed in before the start of the examination period. During the examination period the course will be concluded with a discussion of the contents of the course and of your report. If an oral review is not possible because of the size of the group, the course may instead be concluded with a `take home' exam during the examination period.

Assessment

The final grade is E = P + B, where P ∈ [1, 10] is a weighted average of the assessment of your presentation, your contributions to the discussions of other presentations, your report and the final interview. The bonus B consists of 0.1 point per chapter exercise handed in and judged satisfactory.

Computers

To gain practical experience with the design of robust control systems, the Robust Control Toolbox in MATLAB will be used for several exercises in the first half of the trimester, as well as for the computer simulations to be presented. This software has been purchased by the Measurement and Control Group on a commercial basis for research purposes. It is emphatically pointed out that copying the software is not permitted. A concise and elementary MATLAB manual is available on loan at the secretariat of the Measurement and Control Group (EH-4.34). Computer equipment with this application is available at the locations below:

1. E-hoog, room 6.05 (6 PCs, available in the mornings only).

2. W-hoog, rooms 2.141 (486) and 3A.014 (AT), outside the hours reserved for other lectures and practicals.

3. Open Shop RC.

For some computer simulation exercises a menu-driven package for H∞ controller design will be used. It will be explained during one of the lectures (chapter 13), and a manual for this application is likewise available on loan.

Course material

In addition to these lecture notes, the following is a brief overview of recommended literature:

[1]
Very useful reference work, in stock at the TUE bookshop.

[2]
Certainly gives a good insight into the problems, with methods to create solutions yourself for SISO systems. Lacks, however, the state-space approach for MIMO systems.

[3]
Very practice-oriented, aimed at the process industry. Lacks a proper overview.

[4]
Thanks to the stormy developments in the research area of H∞ control theory, this book was already outdated at the moment of publication. Nevertheless a well-written introduction to H∞ control problems.

[5]
Very readable standard work for follow-up study.

[6]
From the inventors themselves... Recommended reference for μ-analysis.

[7]
A book full of formulas, for lovers of `hard' proofs.

[8]
A short introduction, which clarifies the main lines, perhaps without too many details.

[9]
Robust control from a somewhat different point of view.

[12]
This book covers a large part of the material of this course. Well written and mathematically oriented, but with somewhat too little attention to the practical aspects of controller design.

[13]
An extensive treatise from a somewhat different angle: the parametric approach.

[14]
A thorough book written by those who stood at the mathematical cradle of robust control. Mathematically oriented.

[15]
This book is written in the style of these lecture notes. Unfortunately it appeared too late. Excellent examples, which are also used in our course.


Contents

1 Introduction 11
1.1 What's robust control? . . . . . . 11
1.2 H∞ in a nutshell . . . . . . 13
1.3 Exercise . . . . . . 16

2 What about LQG? 17
2.1 Exercise . . . . . . 21

3 Control goals 23
3.1 Stability . . . . . . 24
3.2 Disturbance reduction . . . . . . 24
3.3 Tracking . . . . . . 24
3.4 Sensor noise avoidance . . . . . . 25
3.5 Actuator saturation avoidance . . . . . . 25
3.6 Robust stability . . . . . . 26
3.7 Performance robustness . . . . . . 29
3.8 Exercises . . . . . . 31

4 Internal model control 33
4.1 Maximum Modulus Principle . . . . . . 37
4.2 Summary . . . . . . 37
4.3 Exercises . . . . . . 38

5 Signal spaces and norms 39
5.1 Introduction . . . . . . 39
5.2 Signals and signal norms . . . . . . 39
5.2.1 Periodic and aperiodic signals . . . . . . 40
5.2.2 Continuous time signals . . . . . . 40
5.2.3 Discrete time signals . . . . . . 44
5.2.4 Stochastic signals . . . . . . 45
5.3 Systems and system norms . . . . . . 45
5.3.1 The H∞ norm of a system . . . . . . 48
5.3.2 The H2 norm of a system . . . . . . 53
5.4 Multivariable generalizations . . . . . . 55
5.4.1 The singular value decomposition . . . . . . 56
5.4.2 The H∞ norm for multivariable systems . . . . . . 59
5.4.3 The H2 norm for multivariable systems . . . . . . 59
5.5 Exercises . . . . . . 62

6 Weighting filters 63
6.1 The use of weighting filters . . . . . . 63
6.1.1 Introduction . . . . . . 63
6.1.2 Singular value loop shaping . . . . . . 63
6.1.3 Implications for control design . . . . . . 69
6.2 Robust stabilization of uncertain systems . . . . . . 70
6.2.1 Introduction . . . . . . 70
6.2.2 Modeling model errors . . . . . . 71
6.2.3 The robust stabilization problem . . . . . . 75
6.2.4 Robust stabilization: main results . . . . . . 78
6.2.5 Robust stabilization in practice . . . . . . 80
6.2.6 Exercises . . . . . . 81

7 General problem 83
7.1 Augmented plant . . . . . . 83
7.2 Combining control aims . . . . . . 85
7.3 Mixed sensitivity problem . . . . . . 87
7.4 A simple example . . . . . . 88
7.5 The typical compromise . . . . . . 90
7.6 An aggregated example . . . . . . 91
7.7 Exercise . . . . . . 95

8 Performance robustness and μ-analysis/synthesis 97
8.1 Robust performance . . . . . . 97
8.2 μ-analysis . . . . . . 99
8.3 Computation of the μ-norm . . . . . . 105
8.3.1 Maximizing the lower bound . . . . . . 105
8.3.2 Minimising the upper bound . . . . . . 106
8.4 μ-analysis/synthesis . . . . . . 108
8.5 A simple example . . . . . . 109
8.6 Exercises . . . . . . 113

9 Filter Selection and Limitations 115
9.1 A zero frequency set-up . . . . . . 115
9.1.1 Scaling . . . . . . 115
9.1.2 Actuator saturation, parsimony and model error . . . . . . 116
9.1.3 Bounds for tracking and disturbance reduction . . . . . . 118
9.2 Frequency dependent weights . . . . . . 120
9.2.1 Weight selection by scaling per frequency . . . . . . 120
9.2.2 Actuator saturation: Wu . . . . . . 122
9.2.3 Model errors and parsimony . . . . . . 123
9.2.4 We bounded by fundamental constraint: S + T = I . . . . . . 126
9.3 Limitations due to plant characteristics . . . . . . 128
9.3.1 Plant gain . . . . . . 129
9.3.2 RHP-zeros . . . . . . 130
9.3.3 Bode integral . . . . . . 132
9.3.4 RHP-poles . . . . . . 133
9.3.5 RHP-poles and RHP-zeros . . . . . . 138
9.3.6 MIMO . . . . . . 140
9.4 Summary . . . . . . 143

10 Design example 147
10.1 Plant definition . . . . . . 147
10.2 Classic control . . . . . . 149
10.3 Augmented plant and weight filter selection . . . . . . 154
10.4 Robust control toolbox . . . . . . 158
10.5 H∞ design in mutools . . . . . . 161
10.6 LMI toolbox . . . . . . 162
10.7 μ-design in mutools . . . . . . 163

11 Basic solution of the general problem 171
11.1 Exercises . . . . . . 176

12 Solution to the general H∞ control problem 177
12.1 Introduction . . . . . . 177
12.2 The computation of system norms . . . . . . 178
12.2.1 The computation of the H2 norm . . . . . . 178
12.2.2 The computation of the H∞ norm . . . . . . 180
12.3 The computation of H2 optimal controllers . . . . . . 182
12.4 The computation of H∞ optimal controllers . . . . . . 186
12.5 The state feedback H∞ control problem . . . . . . 190
12.6 The H∞ filtering problem . . . . . . 191
12.7 Computational aspects . . . . . . 193
12.8 Exercises . . . . . . 193

13 Solution to the general H∞ control problem 197
13.1 Dissipative dynamical systems . . . . . . 197
13.1.1 Introduction . . . . . . 197
13.1.2 Dissipativity . . . . . . 198
13.1.3 A first characterization of dissipativity . . . . . . 200
13.2 Dissipative systems with quadratic supply functions . . . . . . 202
13.2.1 Quadratic supply functions . . . . . . 202
13.2.2 Complete characterizations of dissipativity . . . . . . 203
13.2.3 The positive real lemma . . . . . . 205
13.2.4 The bounded real lemma . . . . . . 205
13.3 Dissipativity and H∞ performance . . . . . . 206
13.4 Synthesis of H∞ controllers . . . . . . 207
13.5 H∞ controller synthesis in Matlab . . . . . . 210


Chapter 1

Introduction

1.1 What's robust control?

In previous courses the processes to be controlled were represented by rather simple transfer functions or state space representations. These dynamics were analysed and controllers were designed such that the closed-loop system was at least stable and showed some desired performance. In particular, the Nyquist criterion used to be very popular in testing closed-loop stability, and some margins were generally taken into account to stay `far enough' from instability. It was readily observed that as soon as the Nyquist curve passes the point −1 too closely, the closed-loop system becomes `nervous'. It is then in a kind of transition phase towards actual instability. And, if the dynamics of the controlled process deviate somewhat from the nominal model, the shift may cause the encirclement of the point −1, resulting in an unstable system. So, with these margins, stability was effectively made robust against small perturbations in the process dynamics. The proposed margins were really rules of thumb: the allowed perturbations in dynamics were not quantified, and only stability of the closed loop is guarded, not the performance. Moreover, the method does not work for multivariable systems. In this course we will try to overcome these four deficiencies, i.e. provide very strict and well-defined criteria, define clear descriptions and bounds for the allowed perturbations, and guarantee robustness not only of stability but also of the total performance of the closed-loop system, even in the case of multivariable systems. Consequently a definition of robust control could be stated as:

    Design a controller such that some level of performance of the controlled
    system is guaranteed irrespective of changes in the plant dynamics within
    a predefined class.

To facilitate the discussion, consider a simple representation of a controlled system in Fig. 1.1.

Figure 1.1: Simple block scheme of a controlled system

The control block C is to be designed such that the following goals and constraints can be realised in some optimal form:

stability: The closed-loop system should be stable.

tracking: The real output y should follow the reference signal ref.

disturbance rejection: The output y should be free of the influences of the disturbing noise.

sensor noise rejection: The noise introduced by the sensor should not affect the output y.

avoidance of actuator saturation: The actuator, not explicitly drawn here but taken as the first part of process P, should not become saturated but has to operate as a linear transfer.

robustness: If the real dynamics of the process change by an amount Δ_P, the performance of the system, i.e. all previous desiderata, should not deteriorate to an unacceptable level. (In specific cases it may be that only stability is considered.)

It will be clear that all the above desiderata can only be fulfilled to some extent. It will be explained how some constraints put similar demands on the controller C, while others require contradictory actions, so that the final controller can only be a kind of compromise. To that purpose it is important that we can quantify the various aims and consequently weight each claim against the others. As an example, emphasis on the robustness requirement weakens the other achievable constraints, because a performance should not only hold for a very specific process P, for which the control action can be tuned very specifically, but also for deviating dynamics. The true process dynamics are then given by:

P_true = P + Δ_P    (1.1)

where now P takes the role of the nominal model while Δ_P represents the additive model perturbation. There is no way to avoid Δ_P, considering the causes behind it:

unmodelled dynamics: The nominal model P will generally be taken linear, time-invariant and of low order. As a consequence the real behaviour is necessarily approximated, since real processes cannot be caught in such simple representations.

time variance: Inevitably the real dynamics of physical processes change in time. They are susceptible to wear during aging (e.g. steel rollers), will be affected by pollution (e.g. catalysts) or undergo the influence of temperature (or pressure, humidity, ...) changes (e.g. day and night fluctuations in glass furnaces).

varying loads: Dynamics can change substantially if the load is altered: the mass and the inertial moment of a robot arm are determined considerably by the load, unless you are willing to pay for a very heavy robot that is very costly in operation.

manufacturing variance: A prototype process may be characterised very accurately. This is of no help if the variance over the production series is high. A low-variance production can turn out to be immensely costly: think e.g. of a CD player. Basically, one could produce a drive with tolerances in the micrometer domain but, thanks to control, we can be satisfied with less.


limited identification: Even if the real process were linear and time-invariant, we would still have to measure or identify its characteristics, and this cannot be done without error. Measuring equipment and identification methods, using finite data sets of limited sample rate, will inevitably suffer from inaccuracies.

actuators & sensors: What has been said about the process applies to actuators and sensors as well, as these are part of the controlled system. One might require a minimum level of performance (e.g. stability) of the controlled system in case of e.g. sensor failure or actuator degradation.

In Fig. 1.2 the effect of the robustness requirement is illustrated.

Figure 1.2: Robust performance

In accordance with the natural inclination to consider something as being "better" if it is "higher", optimal performance is a maximum here. This is contrary to the criteria, to be introduced later on, where the best performance occurs at a minimum. So here the vertical axis represents a degree of performance, where higher values indicate better performance. Positive values represent improvements by the control action compared to the uncontrolled situation, and negative values correspond to deteriorations caused by the very use of the controller. The extreme value −∞ means that the system is unstable, and +∞ is the extreme optimist's performance. In this supersimplified picture we let the horizontal axis represent all possible plant behaviours, centered around the nominal plant P, with a deviation Δ_P living in the shaded slice. So this slice represents the class of possible plants. If the controller is designed to perform well for just the nominal process, it can really be fine-tuned to it, but for a small model error Δ_P the performance will soon deteriorate dramatically. We can counter this effect by robustifying the control and indeed improve the performance for greater Δ_P, but unfortunately and inevitably at the cost of the performance for the nominal model P. One will readily recognise this effect in many technical designs (cars, bikes, tools, ...), but also e.g. in natural evolution (animals, organs, ...).

1.2 H∞ in a nutshell

The techniques to be presented in this course are named H∞ control and μ-analysis/synthesis. They have been developed since the beginning of the eighties and are, as a matter of fact, a well-quantified application of the classical control design methods, fully applied in the frequency domain. It thus took about forty years to evolve a mathematical context strong enough to tackle this problem. However, the intermediate popularity and evolution of the LQG design in the time domain was not in vain, as we will elucidate in the next chapter 2 and in the discussion of the final solution in chapters 11 and 13. It will then follow that LQG is just one alternative in a very broad set of possible robust controllers, each characterised by their own signal and system spaces. This may appear very abstract at the moment, but these normed spaces are necessary to quantify signals and transfer functions in order to be able to compare and weight the various control goals. The definitions of the various normed spaces are given in chapter 5, while the translation of the various control goals is described in detail in chapter 3. Here we will shortly outline the whole procedure, starting with a rearrangement, in Fig. 1.3, of the structure of the problem in Fig. 1.1.

Figure 1.3: Structure dictated by exogenous inputs and outputs to be minimised

On the left we have gathered all inputs of the final closed-loop system that we do not know beforehand but that will live in certain bounded sets. These so-called exogenous inputs consist in this case of the reference signal r, the disturbance d and the measurement noise. These signals will be characterised as bounded by a (mathematical) ball of radius 1 in a normed space, together with filters that represent their frequency contents, as discussed in chapter 5. Next, on the right side we have put together those output signals that have to be minimised according to the control goals, in a characterisation similar to that of the input signals. We are not interested in minimising the actual output y (so this is not part of the output) but only in the way that y follows the reference signal r. Consequently the error z = r − y is taken as an output to be minimised. Note also that we have taken the difference with the actual y and not the measured error e. As an extra output to be minimised, the input u of the real process is included, in order to avoid actuator saturation. How strong this constraint is in comparison to the tracking aim depends on the quality, and thus the price, of the actuator, and it will be translated into the forthcoming weightings and filters. Another goal, i.e. the attenuation of the effects of both the disturbance d and the measurement noise, is automatically represented by the minimisation of output z. In a more complicated way, the effect of the perturbation Δ_P on the robustness of stability and performance should also be minimised. As is clearly observed from Fig. 1.3, Δ_P is an extra transfer between output u and input d. If we can keep the transfer from d to u small by a proper controller, the loop closed by Δ_P won't have much effect. Consequently, robustness is increased implicitly by keeping u small, as we will analyse in chapter 3. Therefore we have to quantify the bounds of Δ_P, again by a proper ball or norm and filters.
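Bounding the additive perturbation Δ_P by a norm and keeping the d → u transfer small can be previewed numerically. The sketch below is a hedged illustration in Python (the course tooling is MATLAB): the first-order plant, the gain and the frequency grid are hypothetical choices, and it uses the standard small-gain condition for additive uncertainty, developed properly in chapter 6.

```python
import numpy as np

# Additive uncertainty P_true = P + Delta_P (eq. 1.1).  For stable Delta_P with
# |Delta_P(jw)| <= delta, the loop remains stable provided
#   delta * sup_w |C(jw) / (1 + P(jw) C(jw))| < 1   (small-gain argument),
# i.e. the d -> u transfer decides how much additive perturbation is tolerated.
# Hypothetical example: P(s) = 1/(s + 1), proportional controller C = 2.
w = np.logspace(-2, 4, 2000)        # frequency grid [rad/s]
s = 1j * w
P = 1.0 / (s + 1.0)
C = 2.0
T_du = C / (1.0 + P * C)            # transfer from d to u (up to sign)
gain = np.max(np.abs(T_du))         # grid estimate of its H-infinity norm
delta_max = 1.0 / gain              # largest admissible additive perturbation size
print(gain, delta_max)
```

With these numbers, any stable additive perturbation of magnitude below 0.5 at all frequencies cannot destabilise the loop; chapter 6 replaces the constant bound δ by a frequency-dependent weighting filter.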


16 CHAPTER 1. INTRODUCTION<br />

1.2. H1 IN A NUTSHELL 15<br />

1.3 Exercise<br />

d<br />

?<br />

+ y<br />

- Ci<br />

- P - -<br />

+<br />

?<br />

l l<br />

6;<br />

Let the true process be a delay ofunknown value :<br />

Pt = e ;s (1.2)<br />

0 :01 (1.3)<br />

have to quantify the bounds of P again by a proper ball or norm and lters.<br />

At last we have toprovide a linear, time-invariant, nominal model P of the dynamics<br />

of the process that may be a multivariable (MIMO Multi Input Multi Input) transfer.<br />

In the multivariable case all single lines then represent vectors of signals. Provisionally<br />

we will discuss the matter in s-domain so that P is representing a transfer function in<br />

s-domain. In the multivariable case, P is a transfer matrix where each entry is a transfer<br />

function of the corresponding input to the corresponding output. The same holds for the controller C, and consequently the signals (lines) represent vectors in the s-domain, so that we can write e.g. u(s) = C(s)e(s). Having characterised the control goals in terms of outputs to be minimised, provided that the inputs remain confined as defined, the principal idea behind the control design of block C now consists of three phases, as presented in chapter 11:<br />
1. Compute a controller C0 that stabilises P.<br />
2. Establish around this central controller C0 the set of all controllers that stabilise P according to the Youla parametrisation.<br />
3. Search in this last set for that (robust) controller that minimises the outputs in the proper sense.<br />
This design procedure is quite unusual at first instance, so that we start to analyse it for stable transfers P, where we can apply the internal model approach in chapter 4. Afterwards the original concept of a general solution is given in chapter 11. This historically first method is treated as it shows a clear analysis of the problem. In later times improved solution algorithms have been developed by means of Riccati equations or by means of Linear Matrix Inequalities (LMI), as explained in chapter 13. In the next chapter 8 the robustness concept will be revisited and improved, which will yield the μ-analysis/synthesis. After the theory, which is treated till here, chapter 9 is devoted to the selection of appropriate design filters in practice, while in the last chapter 10 an example illustrates the methods, algorithms and programs. In this chapter you will also get instructions how to use dedicated toolboxes in MATLAB.<br />
Exercise:<br />
Let the nominal model be given by unity transfer:<br />
P = 1 (1.4)<br />
Let there be some unknown disturbance d, additive to the output, consisting of a single sine wave (ω = 25):<br />
d = sin(25t) (1.5)<br />
By an appropriate controller Ci the disturbance will be reduced and the output will be:<br />
y(t) = ŷ sin(25t + φ) (1.6)<br />
Define the performance of the controlled system in steady state by:<br />
−ln |ŷ| (1.7)<br />
a) Design a proportional controller C1 = K for the nominal model P, so completely ignoring the model uncertainty, such that |ŷ| is minimal and thus the performance is maximal. Possible actuator saturation can be ignored in this academic example. Plot the actual performance as a function of θ.<br />
Hint: analyse the Nyquist plot.<br />
b) Design a proportional controller C2 = K by incorporating the knowledge about the model uncertainty ΔP = e^(−θs) − 1, where θ is unknown apart from its range. Robust stability is required. Plot again the actual performance as a function of θ.<br />
c) The same conditions as indicated sub b), but now for an integrating controller C3 = K/s. If you have expressed the performance as a function of θ in the form:<br />
−ln |ŷ| = −ln |X(θ) + jY(θ)| (1.8)<br />
the following MATLAB program can help you to compute the actual function and to plot it:<br />
>> for k=1:100<br />
     theta(k) = k/10000;<br />
     perf(k) = -log(sqrt(X(theta(k))^2 + Y(theta(k))^2));<br />
   end<br />
>> plot(theta,perf)
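For the nominal case (ΔP = 0) the steady-state amplitude ŷ follows directly from the sensitivity: y = S·d with S = 1/(1 + PC), so for P = 1 and C1 = K the amplitude is ŷ = 1/|1 + K| at every frequency. A minimal Python sketch of this computation (the course itself uses MATLAB; the value K = 9 is an arbitrary illustration, not part of the exercise):<br />

```python
import numpy as np

# Steady-state response of the nominal loop (P = 1, C1 = K) to d = sin(25 t):
# y = S d with S = 1/(1 + PC), so y_hat = |S|.  K = 9 is an arbitrary
# illustration value, not part of the exercise.
K = 9.0
S = 1.0 / (1.0 + 1.0 * K)       # sensitivity; frequency-independent here
y_hat = abs(S)                  # amplitude y_hat in (1.6)
performance = -np.log(y_hat)    # performance definition (1.7)
```

With K = 9 this gives ŷ = 0.1 and a performance of ln 10 ≈ 2.30; the exercise asks how this picture changes once the delay uncertainty θ is taken into account.<br />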


Chapter 2<br />
What about LQG?<br />
Before submerging in all details of robust control, it is worthwhile to show why LQG-control, as presented in the course "modern control theory", is leading to a dead end when robustness enters the control goals. Later in this course we will see how the accomplishments of LQG-control can be used and what LQG means in terms of robust control. At the moment we can only show how the classical interpretation of LQG gives no clues to treat robustness. This short treatment is a summarised display of the article [10], written just before the emergence of H∞-control.<br />
Given a linear, time invariant model of a plant in state space form:<br />
ẋ = Ax + Bu + v<br />
y = Cx + w<br />
where u is the control input, y is the measured output, x is the state vector, v is the state disturbance and w is the measurement noise. This multivariable process is assumed to be completely detectable and reachable. Fig. 2.1 intends to recapitulate the set-up of LQG-control, where the state feedback matrix L and the Kalman gain K are obtained from the well known criteria to be minimised:<br />
L = arg min E{xᵀQx + uᵀRu} (2.1)<br />
K = arg min E{(x − x̂)ᵀ(x − x̂)} (2.2)<br />
for nonnegative Q and positive definite R.<br />
Figure 2.1: Block scheme LQG-control.<br />
Certainly, the closed loop LQG-scheme is nominally stable, but the crucial question is whether stability is possibly lost if the real system, represented by the state space matrices {At, Bt, Ct}, does no longer correspond to the model of the form {A, B, C}. The robust stability, which is then under study, can best be illustrated by a numerical example.<br />
Consider a very ordinary, stable and minimum phase transfer function:<br />
P(s) = (s + 2) / ((s + 1)(s + 3)) (2.3)<br />
which admits the following state space representation:<br />
ẋ = [0 1; −3 −4] x + [0; 1] u + v<br />
y = [2 1] x + w (2.4)<br />
where v and w are independent white noise sources of variances:<br />
E{w²} = 1,  E{vvᵀ} = [1225 −2135; −2135 3721] (2.5)<br />
and the control criterion given by:<br />
E{xᵀ [2800 80√35; 80√35 80] x + u²} (2.6)<br />
From this last criterion we can easily obtain the state feedback matrix L by solving the corresponding Riccati equation. If we were able to feed back the real states x, the stability properties could easily be studied by analysing the loop transfer L(sI − A)⁻¹B as indicated in Fig. 2.2. The feedback loop is then interrupted at the cross at input u to obtain the loop transfer (LT). Note that we analyse with the model parameters {A, B}, while the real process is supposed to have true parameters {At, Bt}. This subtlety is caused by the fact that we only have the model parameters {A, B, C} available and may assume that the real parameters {At, Bt, Ct} are very close (in some norm).<br />
Figure 2.2: Real state feedback.<br />
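The two Riccati solutions for this example can be computed numerically. Below is a hedged Python/SciPy sketch (an illustration alongside the MATLAB toolboxes used in the course, not part of the original text) that computes L and K for the data (2.3)–(2.6) and then compares how close the state-feedback loop and the LQG loop of equation (2.7) come to the critical point −1:<br />

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0., 1.], [-3., -4.]])
B = np.array([[0.], [1.]])
C = np.array([[2., 1.]])
Q = np.array([[2800., 80 * np.sqrt(35)], [80 * np.sqrt(35), 80.]])  # state weight (2.6)
R = np.array([[1.]])                                                # control weight
V = np.array([[1225., -2135.], [-2135., 3721.]])                    # E{vv^T} (2.5)
W = np.array([[1.]])                                                # E{w^2}

X = solve_continuous_are(A, B, Q, R)        # control Riccati equation
L = np.linalg.solve(R, B.T @ X)             # state feedback gain
Y = solve_continuous_are(A.T, C.T, V, W)    # filter Riccati equation (duality)
K = Y @ C.T @ np.linalg.inv(W)              # Kalman gain

I = np.eye(2)
def lt_state_feedback(s):                   # L(sI - A)^{-1} B, Fig. 2.2
    return (L @ np.linalg.inv(s * I - A) @ B)[0, 0]

def lt_lqg(s):                              # loop transfer (2.7), model parameters
    ctrl = (L @ np.linalg.inv(s * I - A + K @ C + B @ L) @ K)[0, 0]
    return ctrl * (C @ np.linalg.inv(s * I - A) @ B)[0, 0]

ws = np.logspace(-2, 3, 2000)
dist_sf = min(abs(1 + lt_state_feedback(1j * w)) for w in ws)
dist_lqg = min(abs(1 + lt_lqg(1j * w)) for w in ws)
# Kalman's return-difference inequality guarantees dist_sf >= 1, while the
# LQG loop comes closer to -1, as the Nyquist plots of Fig. 2.3 show.
```

The distance to −1 is a crude margin measure, but it already exposes the contrast the text describes.<br />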


The Nyquist plot is drawn in Fig. 2.3. You will immediately notice that this curve is far from the endangering point −1, so that stability robustness is guaranteed. This is all very well, but in practice we cannot measure all states directly. We have to be satisfied with estimated states x̂, so that the actual feedback is brought about according to Fig. 2.4. Check for yourself that cutting a loop at cross (1) would lead to the same loop transfer as before under the assumption that the model and process parameters are exactly the same (then e = 0!). Unfortunately, the full feedback controller is as indicated by the dashed box, so that we have to interrupt the true loop at e.g. cross (2), yielding the loop transfer:<br />
Ct(sI − At)⁻¹Bt · L(sI − A + KC + BL)⁻¹K (2.7)<br />
where the first factor is the process transfer and the second factor contains the model parameters {A, B, C}.<br />
Figure 2.4: Feedback with observer.<br />
All we can do is substitute the model parameters for the unknown process parameters and study the Nyquist plot in Fig. 2.3. Amazingly, the robustness is now completely lost and we even have to face conditional stability: if, e.g. by aging, the process gain decreases, the Nyquist curve shrinks to the origin and soon the point −1 is trespassed, causing instability.<br />
Figure 2.3: Various Nyquist curves.<br />
The problem now is how to effect robustness. An obvious idea is to modify the Kalman gain K in some way, such that the loop transfer resembles the previous loop transfer when feeding back the real states. This can indeed be accomplished in case of stable and minimum phase processes. Without entering into many details, the procedure is in main lines:<br />
Put K equal to qBW, where W is a nonsingular matrix and q a positive constant. If we let q increase in the (thus obtained) loop transfer:<br />
L(sI − A + qBWC + BL)⁻¹ qBWC (sI − A)⁻¹B (2.8)<br />
the term qBWC in the first inverted matrix will dominate and thus almost completely annihilate the same expression qBWC appearing as the second factor, and we are indeed left with the simple loop transfer L(sI − A)⁻¹B. In doing so, it appears that some observer poles (the real cause of the problem) shift to the zeros of P and cancel out, while the others are moved to −∞. In Fig. 2.3 some loop transfers for increasing q have been drawn, and indeed the transfer converges to the original robust loop transfer. However, all that matters here is that, by doing so, we have implemented a completely nonoptimal Kalman gain as far as disturbance reduction is concerned. We are dealing now with very extreme entries in K, which will cause a very high impact of the measurement noise w. So we have sacrificed our optimal observer for obtaining sufficient robustness.<br />
Alternatively, we could have taken the feedback matrix L as a means to effect robustness. Along similar lines we would then find extreme entries in L, so that certainly the actuator would saturate. Then this saturation would be the price for robustness. Next, we could of course try to distribute the pain over both K and L, but we have no clear means to balance the increase of the robustness against the decrease of the remaining performance. And then we do not even talk about robustness of the complete performance. On top of that, we have confined ourselves implicitly by departing from LQG, and thus to the limited structure of the total controller as given in Fig. 2.1, where the only tunable parameters are K and L. Conclusively, we thus have to admit that we first ought to define and quantify the control aims very clearly (see next chapter) in order to be able to weight them relatively, and then come up with some machinery that is able to design controllers in the face of all these weighted aims. And surely, the straightforward approach of LQG is not the proper way.<br />
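The recovery procedure can be checked numerically for the example of this chapter. A hedged Python/SciPy sketch (not part of the original text; W = 1 and the grid of q values are arbitrary choices): with K = qB, the loop transfer (2.8) approaches the full-state-feedback loop L(sI − A)⁻¹B as q grows, since P is stable and minimum phase.<br />

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0., 1.], [-3., -4.]])
B = np.array([[0.], [1.]])
C = np.array([[2., 1.]])
Q = np.array([[2800., 80 * np.sqrt(35)], [80 * np.sqrt(35), 80.]])
R = np.array([[1.]])

X = solve_continuous_are(A, B, Q, R)
L = np.linalg.solve(R, B.T @ X)          # optimal state feedback gain
I = np.eye(2)

def lt_target(s):                        # robust loop transfer L(sI - A)^{-1} B
    return (L @ np.linalg.inv(s * I - A) @ B)[0, 0]

def lt_recovered(s, q):                  # eq. (2.8) with K = qBW, W = 1
    K = q * B
    ctrl = (L @ np.linalg.inv(s * I - A + K @ C + B @ L) @ K)[0, 0]
    return ctrl * (C @ np.linalg.inv(s * I - A) @ B)[0, 0]

ws = np.logspace(-1, 2, 300)
def mismatch(q):                         # worst-case gap over the frequency grid
    return max(abs(lt_recovered(1j * w, q) - lt_target(1j * w)) for w in ws)

gaps = [mismatch(q) for q in (1e0, 1e2, 1e4)]
# The gap shrinks as q increases: loop transfer recovery at the price of an
# extreme, noise-sensitive Kalman gain.
```
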


2.1 Exercise<br />
(Block scheme: a first order process P = 1/(s+1), driven by u and the white state noise v; the measured output y includes the independent white measurement noise w; y is fed back through the LQG controller block with transfer −KL/(s+1+K+L).)<br />
The above block scheme represents a process P of first order, disturbed by white state noise v and independent white measurement noise w. L is the state feedback gain. K is the Kalman observer gain based upon the known variances of v and w.<br />
a) If we do not penalise the control signal u, what would be the optimal L? Could this be allowed here?<br />
b) Suppose that for this L the actuator is not saturated. Is the resultant controller C robust (in stability)? Is it satisfying the 45° phase margin?<br />
c) Consider the same questions when P = 1/(s(s+1)) and in particular analyse what you have to compute and how. (Do not try to actually do the computations.) What can you do if the resultant solution is not robust?<br />


Chapter 3<br />
Control goals<br />
In this chapter we will list and analyse the various goals of control in more detail. The relevant transfer functions will be defined and named, and it will be shown how some groups of control aims are in conflict with each other. To start with, we reconsider the block scheme of a simple configuration in Fig. 3.1, which is only slightly different from Fig. 1.1 in chapter 1.<br />
Figure 3.1: Simple control structure.<br />
Notice that we have made the sensor noise explicit in η. Basically, the sensor itself has a transfer function unequal to 1, so that this should be inserted as an extra block in the feedback scheme just before the sensor noise addition. However, a good quality sensor has a flat frequency response for a much broader band than the process transfer. In that case the sensor transfer may be neglected. Only in case the sensor transfer is not sufficiently broadbanded (easier to manufacture and thus cheaper), a proper block has to be inserted. In general one will avoid this, because the ultimate control performance highly depends on the quality of measurement: the resolution of the sensor puts an upper limit on the accuracy of the output control, as will be shown.<br />
The process or plant (the word "system" is usually reserved for the total, controlled structure) incorporates the actuator. The same remarks as made for the sensor hold for the actuator. In general the actuator will be made sufficiently broadbanded by proper control loops, and all possibly remaining defects are supposed to be represented in the transfer P. Actuator disturbances are combined with the output disturbance d by computing, or rather estimating, their effect at the output of the plant. Therefore one should know the real plant transfer Pt, consisting of the nominal model transfer P plus the possible additive model error ΔP. As only the nominal model P and some upper bound for the model error ΔP are known, it is clear that only upper bounds for the equivalent of actuator disturbances in the output disturbance d can be established. The effect of model errors (or system perturbations) is not yet made explicit in Fig. 3.1, but will be discussed later in the analysis of robustness.<br />
Next we will elaborate on various common control constraints and aims. The constraints can be listed as stability, robust stability and (avoidance of) actuator saturation. Within the freedom left by these constraints, one wants to optimise, in a weighted balance, aims like disturbance reduction and good tracking, without introducing too much effect of the sensor noise, and keeping this total performance at a sufficient level in the face of the system perturbations, i.e. performance robustness against model errors. In detail:<br />
3.1 Stability.<br />
Unless one is designing oscillators or systems in transition, the closed loop system is required to be stable. This can be obtained by claiming that nowhere in the closed loop system some finite disturbance can cause other signals in the loop to grow to infinity: the so-called BIBO-stability, from Bounded Input to Bounded Output. Ergo, all corresponding transfers have to be checked for possible unstable poles. So certainly the straight transfer between the reference input r and the output y, given by:<br />
y = PC(I + PC)⁻¹ r (3.1)<br />
But this alone is not sufficient as, in the computation of this transfer, possibly unstable poles may vanish in a pole-zero cancellation. Another possible input position of stray signals can be found at the actual input of the plant, additive to what is indicated as x (think e.g. of drift of integrators). Let us define it by dx. Then also the transfer from dx to, say, y has to be checked for stability, which transfer is given by:<br />
y = (I + PC)⁻¹ P dx = P(I + CP)⁻¹ dx (3.2)<br />
Consequently, for this simple scheme we distinguish four different transfers, from r and dx to y and x, because a closer look soon reveals that the inputs d and η are equivalent to r, and the outputs z and u are equivalent to y.<br />
3.2 Disturbance reduction.<br />
Without feedback the disturbance d is fully present in the real output y. By means of the feedback the effect of the disturbance can be influenced and at least be reduced in some frequency band. The closed loop effect can easily be computed as read from:<br />
y = PC(I + PC)⁻¹ (r − η) + (I + PC)⁻¹ d (3.3)<br />
The last factor (I + PC)⁻¹ represents the sensitivity S of the output to the disturbance, thus defined by:<br />
S = (I + PC)⁻¹ (3.4)<br />
If we want to decrease the effect of the disturbance d on the output y, we thus have to choose the controller C such that the sensitivity S is small in the frequency band where d has most of its power or where the disturbance is most "disturbing".<br />
3.3 Tracking.<br />
Especially for servo controllers, but in fact for all systems where a reference signal is involved, there is the aim of letting the output track the reference signal with a small<br />


error, at least in some tracking band. Let us define the tracking error e in our simple system by:<br />
e := r − y = (I + PC)⁻¹ (r − d) + PC(I + PC)⁻¹ η (3.5)<br />
Note that e is the real tracking error and not the measured tracking error observed as signal u in Fig. 3.1, because the latter incorporates the effect of the measurement noise substantially differently. In equation 3.5 we recognise the sensitivity S = (I + PC)⁻¹ as relating the tracking error to both the disturbance d and the reference signal r. It is therefore also called, awkwardly, the "inverse return difference operator". Whatever the name, it is clear that we have to keep S small in both the disturbance and the tracking band.<br />
3.4 Sensor noise avoidance.<br />
Without any feedback it is clear that the sensor noise will not have any influence on the real output y. On the other hand, the greater the feedback, the greater its effect in disrupting the output. So we have to watch that, in our enthusiasm to decrease the sensitivity, we are not introducing too much sensor noise effects. This is actually reminiscent of the optimal Kalman gain. As the reference r is a completely independent signal, just compared with y in e, we may as well study the effect of η on the tracking error e in equation 3.5. The coefficient (relevant transfer) of η is then given by:<br />
T = PC(I + PC)⁻¹ (3.6)<br />
and denoted as the complementary sensitivity T. This name is induced by the following simple relation that can easily be verified:<br />
S + T = I (3.7)<br />
and for SISO (Single Input Single Output) systems this turns into:<br />
S + T = 1 (3.8)<br />
This relation has a crucial and detrimental influence on the ultimate performance of the total control system! If we want to choose S very close to zero for reasons of disturbance and tracking, we are necessarily left with a T close to 1, which introduces the full sensor noise in the output, and vice versa. Ergo, optimality will be some compromise, the more because, as we will see, some aims relate to S and others to T.<br />
3.5 Actuator saturation avoidance.<br />
The input signal of the actuator is indicated by x in Fig. 3.1, because the actuator was thought to be incorporated into the plant transfer P. This signal x should be restricted to the input range of the actuator to avoid saturation. Its relation to all exogenous inputs is simply derived as:<br />
x = (I + CP)⁻¹ C (r − η − d) = C(I + PC)⁻¹ (r − η − d) (3.9)<br />
The relevant transfer C(I + PC)⁻¹ is named the control sensitivity for obvious reasons and symbolised by R, thus:<br />
R = C(I + PC)⁻¹ (3.10)<br />
In order to keep x small enough, we have to make sure that the control sensitivity R is small in the bands of r, η and d. Of course with proper relative weightings, and with "small" still to be defined. Notice also that R is very similar to T, apart from the extra multiplication by P in T. We will interpret later that this P then functions as a weighting that cannot be influenced by C, as P is fixed. So R can be seen as a weighted T, and as such the actuator saturation claim opposes the other aims related to S. Also in LQG-design we have met this contradiction, in a more two-faced disguise:<br />
- Actuator saturation was prevented by a proper choice of the weights R and Q in the design of the state feedback for disturbance reduction.<br />
- The effect of the measurement noise was properly outweighed in the observer design.<br />
Also the stability was stated in LQG, but its robustness and the robustness of the total performance were lacking and hard to introduce. In the H∞ context this comes quite naturally, as follows:<br />
3.6 Robust stability.<br />
Robustness of the stability in the face of model errors will be treated here rather shortly, as more details will follow in chapter 5. The whole concept is based on the so-called small gain theorem, which trivially applies to the situation sketched in Fig. 3.2. The stable transfer H represents the total loop transfer in a closed loop. If we require that the modulus (amplitude) of H is less than 1 for all frequencies, it is clear from Fig. 3.3 that the polar curve cannot encompass the point −1, and thus we know from the Nyquist criterion that the loop will always constitute a stable system. So stability is guaranteed as long as:<br />
‖H‖∞ := sup_ω |H(jω)| < 1 (3.11)<br />
"Sup" stands for supremum, which effectively indicates the maximum. (Only in case the supremum is approached within any small distance but never really reached, it is not allowed to speak of a maximum.) Notice that we have used no information concerning the phase angle, which is typically H∞. In the above formula we get the first taste of H∞ by the simultaneous definition of the infinity norm, indicated by ‖·‖∞. More about this in chapter 5, where we also learn that for MIMO systems the small gain condition is given by:<br />
‖H‖∞ := sup_ω σ̄(H(jω)) < 1 (3.12)<br />
Figure 3.2: Closed loop with loop transfer H.<br />
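The transfers S, T and R defined above are easy to evaluate on a frequency grid. A minimal Python sketch (the plant is the one from chapter 2 and C = 10 is an arbitrary proportional controller; both are illustrations, not part of the original text), verifying S + T = 1 and the observation that T is just R weighted by the fixed plant P:<br />

```python
import numpy as np

def P(s):                        # plant from chapter 2 (illustration)
    return (s + 2) / ((s + 1) * (s + 3))

def C(s):                        # arbitrary proportional controller (illustration)
    return 10.0

ws = np.logspace(-2, 3, 500)
s = 1j * ws
S = 1 / (1 + P(s) * C(s))        # sensitivity (3.4)
T = P(s) * C(s) * S              # complementary sensitivity (3.6)
R = C(s) * S                     # control sensitivity (3.10)

assert np.allclose(S + T, 1)     # relation (3.8)
assert np.allclose(T, P(s) * R)  # T is R weighted by the fixed plant P
```

At high frequencies PC rolls off, so S tends to 1 there: the disturbance effect is inevitably 100%, as argued in section 3.7.<br />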


The σ̄ denotes the maximum singular value (always real) of the transfer H (for the ω under consideration).<br />
Figure 3.3: Small gain stability in Nyquist space.<br />
All together, these conditions may seem somewhat exaggerated, because transfers less than one are not so common. The actual application is therefore somewhat "nested" and very depictively indicated in the literature as "the baby small gain theorem", illustrated in Fig. 3.4. In the upper block scheme all relevant elements of Fig. 3.1 have been displayed in case we have to deal with an additive model error ΔP. We now consider the "baby" loop as indicated, containing ΔP explicitly. The transfer between the output and the input of ΔP, as once again illustrated in Fig. 3.5, can be evaluated and happens to be equal to the control sensitivity R, as shown in the lower block scheme. (Actually we get a minus sign that can be joined to ΔP. Because we only consider absolute values in the small gain theorem, this minus sign is irrelevant: it just causes a phase shift of 180°, which leaves the conditions unaltered.) Now it is easy to apply the small gain theorem to the total loop transfer H = RΔP. The infinity norm will appear to be an induced operator norm in the mapping between identical signal spaces L2 in chapter 5, and as such it follows the Schwartz inequality, so that we may write:<br />
‖RΔP‖∞ ≤ ‖R‖∞ ‖ΔP‖∞ (3.13)<br />
Ergo, if we can guarantee that:<br />
‖ΔP‖∞ ≤ 1/γ (3.14)<br />
a sufficient condition for stability is:<br />
‖R‖∞ < γ (3.15)<br />
Figure 3.4: Baby small gain theorem for additive model error.<br />
Figure 3.5: Control sensitivity guards stability robustness for additive model error.<br />
If all we require from ΔP is stated in equation 3.14, then it is easy to prove that the condition on R is also a necessary condition. Still, this is a rather crude condition, but it can be refined by weighting over the frequency axis, as will be shown in chapter 5. Once again, from Fig. 3.5 we recognise that the robust stability constraint effectively limits the feedback between the point where both the disturbance and the output of the model error block ΔP enter and the input of the plant, such that this loop transfer is less than one. The smaller the error bound 1/γ, the greater the feedback can be, and vice versa!<br />
We have so analysed the effect of the additive model error ΔP. Similarly we can study the effect of a multiplicative error Δ, which is very easy if we take:<br />
Ptrue = P + ΔP = (I + Δ)P (3.16)<br />
where obviously Δ is the bounded multiplicative model error. (Together with P it evidently constitutes the additive model error ΔP.) In similar block schemes we now get Figs. 3.6 and 3.7. The "baby" loop now contains Δ explicitly, and we notice that the transfer P is somewhat "displaced" out of the additive perturbation block. The result is that Δ sees itself fed back by (minus) the complementary sensitivity T. (The P has, so to speak, been taken out of ΔP and adjoined to R, yielding T.) If we require that:<br />
‖Δ‖∞ ≤ 1/γ (3.17)<br />
the robust stability follows from:<br />
‖TΔ‖∞ ≤ ‖T‖∞ ‖Δ‖∞ < 1 (3.18)<br />
yielding as final condition:<br />
‖T‖∞ < γ (3.19)<br />
Again, proper weighting may refine the condition.<br />
Figure 3.6: Baby small gain theorem for multiplicative model error.<br />
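The sufficient conditions (3.15) and (3.19) are straightforward to check on a frequency grid. A hedged Python sketch (plant, controller and the bound γ are arbitrary illustrations): given a multiplicative uncertainty bound ‖Δ‖∞ ≤ 1/γ, robust stability is guaranteed if the grid estimate of ‖T‖∞ stays below γ:<br />

```python
import numpy as np

def P(s):                              # plant from chapter 2 (illustration)
    return (s + 2) / ((s + 1) * (s + 3))

def C(s):                              # arbitrary proportional controller
    return 10.0

ws = np.logspace(-3, 4, 2000)
s = 1j * ws
T = P(s) * C(s) / (1 + P(s) * C(s))    # complementary sensitivity

gamma = 2.0                            # assumed bound: ||Delta||_inf <= 1/gamma
T_inf = np.max(np.abs(T))              # grid estimate of ||T||_inf
robustly_stable = bool(T_inf < gamma)  # sufficient condition (3.19)
```

For this example the peak of |T| occurs at low frequency (about 20/23 ≈ 0.87), well below γ = 2, so multiplicative errors up to 50% cannot destabilise the loop.<br />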


Figure 3.7: Complementary sensitivity guards stability robustness for multiplicative model error.<br />
3.7 Performance robustness.<br />
Till now, all aims could be grouped around either the sensitivity S or the complementary sensitivity T. Once we have optimised some balanced criterion in both S and T, and thus obtained a nominal performance, we wish that this performance is kept more or less, irrespective of the inevitable model errors. Consequently, performance robustness requires that S and T change only slightly if P is close to the true transfer Pt. We can analyse the relative errors in these quantities for SISO plants:<br />
(St − S)/St = ((1 + PtC)⁻¹ − (1 + PC)⁻¹) / (1 + PtC)⁻¹ = (1 + PC − 1 − PtC) / (1 + PC) (3.20)<br />
= −(ΔP/P) · PC/(1 + PC) = −Δ T (3.21)<br />
and:<br />
(Tt − T)/Tt = (PtC(1 + PtC)⁻¹ − PC(1 + PC)⁻¹) / (PtC(1 + PtC)⁻¹) = (PtC − PC) / (PtC(1 + PC)) (3.22)<br />
= (ΔP/P) · (P/Pt) · 1/(1 + PC) = Δ (P/Pt) S (3.23)<br />
As a result we note that, in order to keep the relative change in S small, we have to keep the product of Δ and T small. The smaller the error bound on Δ is, the greater a T we can afford, and vice versa. But what is astonishing is that the smaller S is, and consequently the greater the complement T is (see equation 3.7), the less robust is this performance measured in S. The same story holds for the performance measured in T, where the robustness depends on the complement S. This explains the remark in chapter 1 that an increase of performance for a particular nominal model P decreases its robustness against model errors. So also in this respect the controller will have to be a compromise!<br />
Summary<br />
We can distinguish two competitive groups, because S + T = I. One group, centered around the sensitivity, requires the controller C to be such that S is "small" and can be listed as:<br />
- disturbance rejection<br />
- tracking<br />
- robustness of T<br />
The second group centers around the complementary sensitivity and requires the controller C to minimise T:<br />
- avoidance of sensor noise<br />
- avoidance of actuator saturation<br />
- stability robustness<br />
- robustness of S<br />
If we were dealing with real numbers only, the choice would be very easy and limited. Remembering that<br />
S = (I + PC)⁻¹ (3.24)<br />
T = PC(I + PC)⁻¹ (3.25)<br />
a large C would imply a small S but T ≈ I, while a small C would yield a small T and S ≈ I. Besides, for no feedback, i.e. C = 0, necessarily T → 0 and S → I. This is also true for very large ω, when all physical processes necessarily have a zero transfer (PC → 0). So ultimately, for very high frequencies, the tracking error and the disturbance effect are inevitably 100%.<br />
This may give some rough idea of the effect of C, but the real impact is more difficult, as:<br />
- We deal with complex numbers.<br />
- The transfer may be multivariable, and thus we encounter matrices.<br />
- The crucial quantities S and T involve matrix inversions (I + PC)⁻¹.<br />
- The controller C may only be chosen from the set of stabilising controllers.<br />
It happens that we can circumvent the last two problems, in particular when we are dealing with a stable transfer P. This can be done by means of the internal model control concept, as shown in the next chapter. We will later generalise this for unstable nominal processes as well.<br />
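The two relative-error identities can be verified numerically at a single frequency. A Python sketch (plant, controller, frequency and the 10% perturbation are arbitrary illustrations) checking (St − S)/St = −ΔT and (Tt − T)/Tt = Δ(P/Pt)S:<br />

```python
import numpy as np

s = 1j * 2.0                      # evaluate at omega = 2 rad/s (arbitrary)
P  = (s + 2) / ((s + 1) * (s + 3))
Pt = 1.1 * P                      # true plant: 10% multiplicative error
C  = 10.0
Delta = (Pt - P) / P              # relative model error, here 0.1

S,  T  = 1 / (1 + P * C),  P * C / (1 + P * C)
St, Tt = 1 / (1 + Pt * C), Pt * C / (1 + Pt * C)

lhs_S = (St - S) / St
rhs_S = -Delta * T                # right-hand side of (3.21)
lhs_T = (Tt - T) / Tt
rhs_T = Delta * (P / Pt) * S      # right-hand side of (3.23)
```

Both pairs agree exactly, since (3.20)–(3.23) are algebraic identities rather than approximations.<br />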


3.8 Exercises<br />
3.1:<br />
(Block scheme: a unity feedback loop in which the reference r and the negative feedback of y enter a summing junction feeding the controller C; an input u0 is added to the controller output, producing the plant input u; the plant P produces y after addition of the output disturbance d.)<br />
a) Derive by reasoning that, in the above scheme, internal stability is guaranteed if all transfers from u0 and d to u and y are stable.<br />
b) Analyse the stability for:<br />
P = 1/(1 − s) (3.26)<br />
C = (1 − s)/(1 + s) (3.27)<br />
3.2:<br />
(Block scheme: the reference r passes through C1 into a summing junction, then through C2 and a second junction where the disturbance d is added, into the plant P producing the output y; y is fed back with a minus sign, and a block C3 appears in an additional feedback path.)<br />
Which transfers in the given scheme are relevant for:<br />
a) disturbance reduction<br />
b) tracking<br />


34 CHAPTER 4. INTERNAL MODEL CONTROL<br />

Chapter 4<br />

Figure 4.3: Equivalence of the `internal model' and the `conventional' structure.<br />

Internal model control<br />

and from this we get:<br />

C ; CPQ = Q (4.2)<br />

so that reversely:<br />

Q =(I + CP) ;1 C = C(I + PC) ;1 = R (4.3)<br />

In the internal model control scheme, the controller explicitly contains the nominal model<br />

of the process and it appears that, in this structure, it is easy to denote the set of all<br />

stabilising controllers. Furthermore, the sensitivity and the complementary sensitivity<br />

take very simple forms, expressed in process and controller transfer, without inversions. A severe condition for application is that the process itself is a stable one.<br />

In Fig. 4.1 we repeat the familiar conventional structure, while in Fig. 4.2 the internal model structure is shown.<br />

Figure 4.1: Conventional control structure.<br />

Figure 4.2: Internal model controller concept.<br />

The difference actually is the nominal model, which is fed by the same input as the true process, while only the difference of the measured and simulated output is fed back. Of course, it is allowed to subtract the simulated output from the feedback loop after the entrance of the reference, yielding the structure of Fig. 4.3. The similarity with the conventional structure is then obvious, where we identify the dashed block as the conventional controller C. So it is easy to relate C and the internal model control block Q as:<br />

C = Q(I − PQ)⁻¹   (4.1)<br />

Remarkably, the Q equals the previously encountered control sensitivity R! The reason behind this becomes clear if we consider the situation where the nominal model P exactly equals the true process Pt. As outlined before, we have no other choice than taking P = Pt for the synthesis and analysis of the controller. Refinement can only occur by using the information about the model error ΔP; that will be done later. If then P = Pt, it is obvious from Fig. 4.2 that only the disturbance d and the measurement noise are fed back, because the outputs of P and Pt are equal. Also the condition of stability of P is then trivial, because there is no way to correct for ever increasing but equal outputs of P and Pt (due to instability) by feedback. Since only d and the measurement noise are fed back, we may draw the equivalent as in Fig. 4.4.<br />

Figure 4.4: Internal model structure equivalent for P = Pt.<br />

So, effectively, there seems to be no feedback in this structure, and the complete system is stable iff (i.e. if and only if) the transfer Q = R is stable, because P was already stable by condition. This is very revealing, as we now simply have the complete set of all controllers that stabilise P! We only need to search for proper stabilising controllers C by studying the stable transfers Q. Furthermore, as there is no actual feedback in Fig. 4.4, the sensitivity and the complementary sensitivity contain no inversions, but take so-called affine expressions in the transfer Q, which are easily derived as:<br />

T = PR = PQ<br />
S = I − T = I − PQ   (4.4)<br />

Extreme designs are now immediately clear.<br />
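The affine relation (4.4) between S and Q is easy to verify numerically. Below is a minimal sketch in Python (the course material itself uses MATLAB); the plant P and the parameter Q are arbitrary stable examples, not taken from the text.<br />

```python
# Minimal sketch (assumed example transfers, not from the text): for a
# stable scalar plant P and ANY stable Q, the IMC controller of Eq. (4.1),
# C = Q/(1 - P*Q), yields the affine sensitivity S = 1 - P*Q of Eq. (4.4).

P = lambda s: 1.0 / (s + 1.0)               # stable example plant
Q = lambda s: 2.0 * (s + 1.0) / (s + 5.0)   # arbitrary stable, proper Q

def C(s):
    # Eq. (4.1) in the scalar case: C = Q (I - P Q)^{-1}
    return Q(s) / (1.0 - P(s) * Q(s))

for w in (0.0, 0.5, 2.0, 10.0):
    s = 1j * w
    S_loop = 1.0 / (1.0 + P(s) * C(s))   # sensitivity of the conventional loop
    S_imc = 1.0 - P(s) * Q(s)            # IMC expression, affine in Q
    assert abs(S_loop - S_imc) < 1e-12
print("conventional-loop sensitivity matches 1 - PQ on the imaginary axis")
```

Because S is affine in Q, searching over stable transfers Q (instead of over C directly) turns the design into a search without inversions, which is exactly the point made above.<br />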



36 CHAPTER 4. INTERNAL MODEL CONTROL<br />


For the minimal complementary sensitivity T:<br />

T = 0 → S = I → Q = 0 → C = 0   (4.5)<br />

there is obviously neither feedback nor control, causing:<br />

- no measurement influence (T = 0)<br />
- no actuator saturation (R = Q = 0)<br />
- 100% disturbance in output (S = I)<br />
- 100% tracking error (S = I)<br />
- stability (Pt was stable)<br />
- robust stability (R = Q = 0 and T = 0)<br />
- robust S (T = 0), but this "performance" can hardly be worse.<br />

For the minimal sensitivity S:<br />

S = 0 → T = I → Q = P⁻¹ → C = ∞   (4.6)<br />

if at least P⁻¹ exists and is stable, we get infinite feedback, causing:<br />

- all disturbance is eliminated from the output (S = 0)<br />
- y tracks r exactly (S = 0)<br />
- y is fully contaminated by measurement noise (T = I)<br />
- stability only in case Q = P⁻¹ is stable<br />
- very likely actuator saturation (Q = R will tend to infinity; see later)<br />
- questionable robust stability (Q = R will tend to infinity; see later)<br />
- robust T (S = 0), but this "performance" can hardly be worse too.<br />

Once again it is clear that a good control should be a well designed compromise between the indicated extremes. What is left is to analyse the possibility of the last sketched extreme above, where we needed that PQ = I and Q is stable.<br />

It is obvious that the solution could be Q = P⁻¹ if P is square and invertible and the inverse itself is stable. If P is wide (more inputs than outputs) the pseudo-inverse would suffice under the condition of stability. If P is tall (fewer inputs than outputs) there is no solution, though. Nevertheless, the problem is more severe, because we can show that, even for SISO systems, the proposed solution yielding infinite feedback is not feasible for realistic, physical processes. For a SISO process, where P becomes a scalar transfer, inversion of P turns poles into zeros and vice versa. Let us take a simple example:<br />

P = (s − b)/(s + a),  a > 0, b > 0  →  P⁻¹ = (s + a)/(s − b)   (4.7)<br />

where the corresponding pole/zero-plots are shown in Fig. 4.5.<br />

Figure 4.5: Pole zero inversion of a nonminimum phase, stable process.<br />

It is clear that the original zeros of P have to live in the open (stable) left half plane, because they turn into the poles of P⁻¹ that should be stable. Ergo, the given example, where this is not true, is not allowed. Processes which have zeros in the closed right half plane, named nonminimum phase, thus cause problems in obtaining a good performance in the sense of a small S.<br />

In fact poles and zeros in the open left half plane can easily be compensated for by Q. Also the poles in the closed right half plane cause no real problems, as the root loci from them in a feedback can be "drawn" over to the left plane by putting zeros there in the controller. The real problems are due to the nonminimum phase zeros, i.e. the zeros in the closed right half plane, as we will analyse further. But before doing so, we have to state that in fact all physical plants suffer more or less from this negative property.<br />

We need some extra notion about the numbers of poles and zeros, their definition and considerations for realistic, physical processes. Let np denote the number of poles and similarly nz the number of zeros in a conventional, SISO transfer function where denominator and numerator are factorised. We can then distinguish the following categories by the attributes:<br />

- proper if np ≥ nz<br />
- biproper if np = nz<br />
- strictly proper if np > nz<br />
- nonproper if np < nz<br />

Any physical process should be proper, because nonproperness would involve:<br />

lim_{ω→∞} P(jω) = ∞   (4.8)<br />

so that the process would effectively have poles at infinity, would have an infinitely large transfer at infinity and would certainly start oscillating at frequency ω = ∞. On the other hand a real process can neither be biproper, as it should then still have a finite transfer for ω = ∞, while at that frequency the transfer is necessarily zero. Consequently any physical process is by nature strictly proper. But this implies that:<br />

lim_{ω→∞} P(jω) = 0   (4.9)<br />

and thus P effectively has (at least) one zero at infinity, which is in the closed right half plane! Take for example:<br />

P = K/(s + a),  a > 0  →  P⁻¹ = (s + a)/K   (4.10)<br />

and consequently Q = P⁻¹ cannot be realised as it is nonproper.<br />
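The pole/zero swap of (4.7) can be illustrated with numpy's root finder; a = 2 and b = 1 are arbitrary positive values chosen for this sketch.<br />

```python
# Sketch of the pole/zero swap in Eq. (4.7); a = 2, b = 1 are arbitrary
# positive example values, so P(s) = (s - 1)/(s + 2) is stable but
# nonminimum phase.
import numpy as np

a, b = 2.0, 1.0
num, den = [1.0, -b], [1.0, a]      # numerator and denominator of P

zeros_P = np.roots(num)             # zero of P at s = +b (right half plane)
poles_P = np.roots(den)             # pole of P at s = -a (stable)

# Inversion swaps numerator and denominator, hence zeros and poles:
poles_Pinv = np.roots(num)
assert np.isclose(zeros_P[0], b) and np.isclose(poles_P[0], -a)
assert poles_Pinv[0].real > 0       # Q = P^{-1} would be unstable
print("P^-1 has a right-half-plane pole at s =", poles_Pinv[0])
```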



4.1 Maximum Modulus Principle<br />

The disturbing fact about nonminimum phase zeros can now be illustrated with the use of the so-called Maximum Modulus Principle, which claims:<br />

for all H ∈ H∞:  ‖H‖∞ ≥ |H(s)| for all s ∈ C⁺   (4.11)<br />

It says that for all stable transfers H (i.e. no poles in the right half plane, denoted by C⁺) the maximum modulus on the imaginary axis is always greater than or equal to the maximum modulus in the right half plane. We will not prove this, but facilitate its acceptance by the following concept. Imagine that the modulus of a stable transfer function of s is represented by a rubber sheet above the s-plane. Zeros will then pinpoint the sheet to the zero, bottom level, while poles will act as infinitely high spikes lifting the sheet. Because of the strict properness of the transfer, there is a zero at infinity, so that, in whatever direction we travel, ultimately the sheet will come to the bottom. Because of stability there are no poles, and thus no spikes, in the right half plane. It is obvious that such a rubber landscape, with mountains exclusively in the left half plane, will get its heights in the right half plane only because of the mountains in the left half plane. If we cut it precisely at the imaginary axis we will notice only valleys at the right hand side. It is always going down at the right side, and this is exactly what the principle tells.<br />

We are now in the position to apply the maximum modulus principle to the sensitivity function S of a nonminimum phase SISO process P:<br />

‖S‖∞ = sup_ω |S(jω)| ≥ |S(s)|_{s∈C⁺} = |1 − PQ|_{s=zn} = 1   (4.12)<br />

where zn (∈ C⁺) is any nonminimum phase zero of P. As a consequence we have to accept that for some ω the sensitivity has to be greater than or equal to 1. For that frequency the disturbance and the tracking errors will thus be minimally 100%! So for some band we will get disturbance amplification if we want to decrease it by feedback in some other (mostly lower) band. That seems to be the price. And remembering the rubber landscape, it is clear that this band, where S > 1, is the more low frequent the closer the troubling zero is to the origin of the s-plane!<br />

By proper weighting over the frequency axis we can still optimise a solution. For an appropriate explanation of this weighting procedure we first present the intermezzo of the next chapter about the necessary norms.<br />

It has been shown that internal model control can greatly facilitate the design procedure of controllers. It only holds, though, for stable processes, and the generalisation to unstable systems has to wait until chapter 11. Limitations of control are recognised in the effects of nonminimum phase zeros of the plant, and in fact all physical plants suffer from these at least at infinity.<br />

4.3 Exercises<br />

4.1: [Block scheme: the true process Pt and the internal model P driven by the same input u, external inputs r, u1 and u2, and signals yt, y and u; controller block Q — diagram not reproduced.]<br />

a) Derive by reasoning that for IMC internal model stability is guaranteed if all transfers from r, u1 and u2 to yt, y and u are stable. Take all signal lines to be single.<br />

b) To which simple condition does this boil down if P = Pt?<br />

c) What if P ≠ Pt?<br />

4.2: For the general scheme let P = Pt = 1. Suppose that d is white noise with power density Φdd = 1, and similarly that the measurement noise is white with power density 0.01.<br />

a) Design for an IMC set-up a Q such that the power density Φyy is minimal. (As you are dealing with white noises, all variables are constants independent of the frequency ω.) Compute Φyy, S, T, Q and C. What is the bound on ‖ΔP‖∞ for guaranteed stability?<br />

b) In order not to saturate the actuator we now add the extra constraint Φuu ≤ 1.<br />

b) We want to obtain good tracking for a low pass band as broad as possible. At least the 'final error' for a step input should be zero. What can we reach by variation of K and …? (MATLAB can be useful.)<br />

c) The same question as a), but now the zero of P is at −1.<br />
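Returning to the Maximum Modulus Principle of Section 4.1: the interpolation constraint behind (4.12) — S(zn) = 1 at any right-half-plane zero zn, for every stable Q — can be checked numerically. A hedged Python sketch with an example plant P(s) = (s − 1)/(s + 2) and a few arbitrary stable choices of Q:<br />

```python
# Hedged numerical check of the idea behind Eq. (4.12): the example plant
# P(s) = (s - 1)/(s + 2) has a nonminimum phase zero at z_n = 1, so
# S(z_n) = 1 - P(z_n)Q(z_n) = 1 for EVERY stable Q, and by the maximum
# modulus principle ||S||_inf >= 1. The Q's are arbitrary stable examples.

P = lambda s: (s - 1.0) / (s + 2.0)
z_n = 1.0                                 # the right-half-plane zero of P

candidate_Qs = [
    lambda s: 1.0,                        # static gain
    lambda s: 3.0 / (s + 1.0),            # low-pass
    lambda s: (s + 2.0) / (s + 4.0),      # cancels the stable pole of P
]

for Q in candidate_Qs:
    S_at_zero = 1.0 - P(z_n) * Q(z_n)
    assert abs(S_at_zero - 1.0) < 1e-12   # interpolation constraint S(z_n) = 1
print("no stable Q can make the sensitivity small everywhere")
```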


Chapter 5<br />

Signal spaces and norms<br />

5.1 Introduction<br />

In the previous chapters we defined the concepts of sensitivity and complementary sensitivity, and we expressed the desire to keep both of these transfer functions 'small' in a frequency band of interest. In this chapter we will quantify in a more precise way what 'small' means: we will quantify the size of a signal and the size of a system. We will be rather formal, to combine precise definitions with good intuition. A first section is dedicated to signals and signal norms. We then consider input-output systems and define the induced norm of an input-output mapping. The H∞ norm and the H2 norm of a system are defined and interpreted both for single input single output systems as well as for multivariable systems.<br />

5.2 Signals and signal norms<br />

We will start this chapter with some system theoretic basics which will be needed in the sequel. In order to formalize concepts on the level of systems, we need to first recall some basics on signal spaces. Many physical quantities (such as voltages, currents, temperatures, pressures) depend on time and can be interpreted as functions of time. Such functions quantify how information evolves over time and are called signals. It is therefore logical to specify a time set T, indicating the time instances of interest. We will think of time as a one dimensional entity and we therefore assume that T ⊆ R. We distinguish between continuous time signals (T a possibly infinite interval of R) and discrete time signals (T a countable set). Typical examples of frequently encountered time sets are finite horizon discrete time sets T = {0, 1, 2, ..., N}, infinite horizon discrete time sets T = Z+ or T = Z, or, for sampled signals, T = {kτs | k ∈ Z} where τs > 0 is the sampling time. Examples of continuous time sets include T = R, T = R+ or intervals T = [a, b].<br />

The values which a physically relevant signal assumes are usually real numbers. However, complex valued signals, binary signals, nonnegative signals, angles and quantized signals are very common in applications, and assume values in different sets. We therefore introduce a signal space W, which is the set in which a signal takes its values.<br />

Definition 5.1 A signal is a function s : T → W where T ⊆ R is the time set and W is a set, called the signal space.<br />

More often than not, it is necessary that at each time instant t ∈ T a number of physical quantities are represented. If we wish a signal s to express at instant t ∈ T a total of q > 0 real valued quantities, then the signal space W consists of q copies of the set of real numbers, i.e.,<br />

W = R × ... × R   (q copies)<br />

which is denoted as W = R^q. A signal s : T → R^q thus represents at each time instant t ∈ T a vector<br />

s(t) = (s1(t), s2(t), ..., sq(t))ᵀ<br />

where si(t), the i-th component, is a real number for each time instant t.<br />

The 'size' of a signal is measured by norms. Suppose that the signal space is a complex valued q-dimensional space, i.e. W = C^q for some q > 0. We will attach to each vector w = (w1, w2, ..., wq)' ∈ W its usual 'length'<br />

|w| := √( w̄1 w1 + w̄2 w2 + ... + w̄q wq )<br />

which is the Euclidean norm of w. (Here, w̄ denotes the complex conjugate of the complex number w. That is, if w = x + jy with x the real part and y the imaginary part of w, then w̄ = x − jy.) If q = 1 this expresses the absolute value of w, which is the reason for using this notation. This norm will be attached to the signal space W, and makes it a normed space.<br />

Signals can be classified in many ways. We distinguish between continuous and discrete time signals, deterministic and stochastic signals, periodic and a-periodic signals.<br />

5.2.1 Periodic and a-periodic signals<br />

Definition 5.2 Suppose that the time set T is closed under addition, that is, for any two points t1, t2 ∈ T also t1 + t2 ∈ T. A signal s : T → W is said to be periodic with period P (or P-periodic) if<br />

s(t) = s(t + P),  t ∈ T.<br />

A signal that is not P-periodic for any P is a-periodic.<br />

Common time sets such as T = Z or T = R are closed under addition; finite time sets such as intervals T = [a, b] are not. Well known examples of continuous time periodic signals are sinusoidal signals s(t) = A sin(ωt + φ) or harmonic signals s(t) = A e^{jωt}. Here, A, ω and φ are constants referred to as the amplitude, frequency (in rad/sec) and phase, respectively. These signals have frequency ω/2π (in Hertz) and period P = 2π/ω. We emphasize that the sum of two periodic signals does not need to be periodic. For example, s(t) = sin(t) + sin(πt) is a-periodic. The class of all periodic signals with time set T will be denoted by P(T).<br />

5.2.2 Continuous time signals<br />

It is convenient to introduce various signal classifications. First, we consider signals which have finite energy and finite power. To introduce these signal classes, suppose that I(t)<br />


denotes the current through a resistance R producing a voltage V(t). The instantaneous power per Ohm is p(t) = V(t)I(t)/R = I²(t). Integrating this quantity over time leads to the total energy (in Joules). The per-Ohm energy of the resistance is therefore ∫_{−∞}^{∞} |I(t)|² dt Joules.<br />

Definition 5.3 Let s be a signal defined on the time set T = R. The energy content Es of s is defined as<br />

Es := ∫_{−∞}^{∞} |s(t)|² dt<br />

If Es < ∞ then s is said to be a (finite) energy signal.<br />

Clearly, not all signals have finite energy. Indeed, for harmonic signals s(t) = c e^{jωt} we have that |s(t)|² = |c|², so that Es = ∞ whenever c ≠ 0. In general, the energy content of periodic signals is infinite. We therefore associate with periodic signals their power:<br />

Definition 5.4 Let s be a continuous time periodic signal with period P. The power of s is defined as<br />

Ps := (1/P) ∫_{t0}^{t0+P} |s(t)|² dt   (5.1)<br />

where t0 ∈ R. If Ps < ∞ then s is said to be a (finite) power signal.<br />

In case of the resistance, the power of a (periodic) current I is measured per period and will be in Watt. It is easily seen that the power is independent of the initial time instant t0 in (5.1). A signal which is periodic with period P is also periodic with period nP, where n is an integer. However, it is a simple exercise to verify that the right hand side of (5.1) does not change if P is replaced by nP. It is in this sense that the power is independent of the period of the signal. We emphasize that all nonzero finite power signals have infinite energy.<br />

Example 5.5 The sinusoidal signal s(t) = A sin(ωt + φ) is periodic with period P = 2π/ω, has infinite energy and has power<br />

Ps = (ω/2π) ∫_{−π/ω}^{π/ω} A² sin²(ωt + φ) dt = (A²/2π) ∫_{−π}^{π} sin²(θ + φ) dθ = A²/2.<br />

Let s : T → R^q be a continuous time signal. The most important norms associated with s are the infinity-norm, the two-norm and the one-norm, defined either over a finite or an infinite interval T. They are defined as follows:<br />

‖s‖∞ = max_i sup_{t∈T} |si(t)|   (5.2)<br />
‖s‖2 = { ∫_{t∈T} |s(t)|² dt }^{1/2}   (5.3)<br />
‖s‖1 = ∫_{t∈T} |s(t)| dt   (5.4)<br />

More generally, the p-norm, with 1 ≤ p < ∞, is defined as ‖s‖p = { ∫_{t∈T} |s(t)|^p dt }^{1/p}.<br />

Note that these quantities are defined for finite or infinite time sets T. In particular, if T = R, ‖s‖2² = Es, i.e. the energy content of a signal is the same as the square of its 2-norm.<br />

Remark 5.6 To be precise, one needs to check whether these quantities indeed define norms. Recall from your very first course of linear algebra that a norm is defined as a real-valued function which assigns to each element s of a vector space a real number ‖s‖, called the norm of s, with the properties that<br />

1. ‖s‖ ≥ 0, and ‖s‖ = 0 if and only if s = 0.<br />
2. ‖s1 + s2‖ ≤ ‖s1‖ + ‖s2‖ for all s1 and s2.<br />
3. ‖αs‖ = |α| ‖s‖ for all α ∈ C.<br />

The quantities defined by ‖s‖∞, ‖s‖2 and ‖s‖1 indeed define (signal) norms and have the properties 1, 2 and 3 of a norm.<br />

Example 5.7 The sinusoidal signal s(t) := A sin(ωt + φ) for t ≥ 0 has finite amplitude ‖s‖∞ = A, but its two-norm and one-norm are infinite.<br />

Example 5.8 As another example, consider the signal s(t) which is described by the differential equations<br />

dx/dt = Ax(t),  s(t) = Cx(t)   (5.5)<br />

where A and C are real matrices of dimension n × n and 1 × n, respectively. It is clear that s is uniquely defined by these equations once an initial condition x(0) = x0 has been specified. Then s is equal to s(t) = C e^{At} x0, where we take t ≥ 0. If the eigenvalues of A are in the left-half complex plane then<br />

‖s‖2² = ∫_0^∞ x0ᵀ e^{Aᵀt} Cᵀ C e^{At} x0 dt = x0ᵀ M x0<br />

with the obvious definition for M. The matrix M has the same dimensions as A, is symmetric and is called the observability gramian of the pair (A, C). The observability gramian M is a solution of the equation<br />

Aᵀ M + M A + Cᵀ C = 0<br />

which is the Lyapunov equation associated with the pair (A, C).<br />

The sets of signals for which the above quantities are finite will be of special interest. Define<br />

L∞(T) = {s : T → W | ‖s‖∞ < ∞}<br />
L2(T) = {s : T → W | ‖s‖2 < ∞}<br />
L1(T) = {s : T → W | ‖s‖1 < ∞}<br />
P(T) = {s : T → W | √Ps < ∞}<br />
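As a quick numerical companion to Example 5.5 and the norms (5.2)–(5.4), the sketch below (Python, with grid sizes that are assumptions of the sketch) approximates the power of a sinusoid over one period and its ∞-norm:<br />

```python
# Numerical companion to Example 5.5 (grid sizes are assumptions of this
# sketch): the power of s(t) = A sin(wt + phi) over one period is A^2/2,
# and its infinity-norm (5.2) is A.
import numpy as np

A, w, phi = 3.0, 2.0, 0.7
period = 2 * np.pi / w
t = np.linspace(0.0, period, 100001)
s = A * np.sin(w * t + phi)
dt = t[1] - t[0]

# Trapezoid approximation of Eq. (5.1):
power = np.sum(0.5 * (s[:-1] ** 2 + s[1:] ** 2)) * dt / period
sup_norm = np.max(np.abs(s))

assert abs(power - A ** 2 / 2) < 1e-4
assert abs(sup_norm - A) < 1e-4
print("power ~", power, " (A^2/2 =", A ** 2 / 2, "),  ||s||_inf ~", sup_norm)
```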


We often drop the T in the above signal spaces whenever the time set is clear from the context. As an example, the sinusoidal signal of Example 5.7 belongs to L∞[0, ∞) and P[0, ∞), but not to L2[0, ∞) and neither to L1[0, ∞).<br />

For either finite or infinite time sets T, the space L2(T) is a Hilbert space with inner product defined by<br />

⟨s1, s2⟩ = ∫_{t∈T} s2ᵀ(t) s1(t) dt.<br />

Two signals s1 and s2 are orthogonal if ⟨s1, s2⟩ = 0. This is a natural extension of orthogonality in R^n.<br />

The Fourier transforms<br />

5.2.3 Discrete time signals<br />

For discrete time signals s : T → R^q a similar classification can be set up. The most important norms are defined as follows:<br />

‖s‖∞ = max_i sup_{t∈T} |si(t)|   (5.9)<br />
‖s‖2 = { Σ_{t∈T} |s(t)|² }^{1/2}   (5.10)<br />
‖s‖1 = Σ_{t∈T} |s(t)|   (5.11)<br />

More generally, the p-norm, with 1 ≤ p < ∞, is defined as ‖s‖p = { Σ_{t∈T} |s(t)|^p }^{1/p}.<br />
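A small Python sketch of the discrete-time 2-norm (5.10) and the inner product discussed above; the signals are one period of a discrete cosine and sine, chosen as an arbitrary orthogonal example:<br />

```python
# Sketch for the discrete time set T = {0, ..., N-1} (an assumed example):
# the 2-norm (5.10) comes from the inner product <s1, s2> = sum s2(t) s1(t),
# and orthogonal signals obey Pythagoras, just as in R^n.
import numpy as np

N = 8
t = np.arange(N)
s1 = np.cos(2 * np.pi * t / N)     # one period of a discrete cosine
s2 = np.sin(2 * np.pi * t / N)     # one period of a discrete sine

inner = float(np.dot(s2, s1))      # discrete analogue of the inner product
assert abs(inner) < 1e-12          # s1 and s2 are orthogonal

norm2 = lambda s: np.sqrt(np.sum(np.abs(s) ** 2))   # Eq. (5.10)
lhs = norm2(s1 + s2) ** 2
rhs = norm2(s1) ** 2 + norm2(s2) ** 2
assert abs(lhs - rhs) < 1e-10      # ||s1 + s2||^2 = ||s1||^2 + ||s2||^2
print("<s1, s2> =", inner, " and Pythagoras holds")
```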


5.2.4 Stochastic signals<br />

Occasionally we consider stochastic signals in this course. We will not give a complete treatise of stochastic system theory at this place, but instead recall a few concepts. A stationary stochastic process is a sequence of real random variables u(t), where t runs over some time set T. By definition of stationarity, its mean μ(t) := E[u(t)] is independent of the time instant t, and the second order moment E[u(t1)u(t2)] depends only on the difference t1 − t2. The covariance of such a process is defined by<br />

Ru(τ) := E[(u(t + τ) − μ)(u(t) − μ)]<br />

where μ = μ(t) = E[u(t)] is the mean. A stochastic (stationary) process u(t) is called a white noise process if its mean μ = E[u(t)] = 0 and if u(t1) and u(t2) are uncorrelated for all t1 ≠ t2. Stated otherwise, the covariance of a (continuous time) white noise process is Ru(τ) = σ²δ(τ). The number σ² is called the variance. The Fourier transform of the covariance function Ru(τ) is<br />

Φu(ω) := ∫_{−∞}^{∞} Ru(τ) e^{−jωτ} dτ<br />

and is usually referred to as the power spectrum, energy spectrum or just the spectrum of the stochastic process u.<br />

5.3 Systems and system norms<br />

A system is any set S of signals. In engineering we usually study systems which have quite some structure. It is common engineering practice to consider systems whose signals are naturally decomposed in two independent sets: a set of input signals and a set of output signals. A system then specifies the relations among the input and output signals. These relations may be specified by transfer functions, state space representations, differential equations or whatever mathematical expression you can think of. We find this theme in almost all applications where filter and control design are used for the processing of signals. Input signals are typically assumed to be unrestricted. Filters are designed so as to change the frequency characteristics of the input signals. Output signals are the responses of the system (or filter) after excitation with an input signal. For the purpose of this course, we exclusively consider systems in which an input-output partitioning of the signals has already been made. In engineering applications, it is good tradition to depict input-output systems as 'blocks' as in Figure 5.1, and you probably have a great deal of experience in constructing complex systems by interconnecting various systems using block diagrams. The arrows in Figure 5.1 indicate the causality direction.<br />

Remark 5.12 Also a word of warning concerning the use of blocks is in its place. For example, many electrical networks do not have a 'natural' input-output partition of system variables, neither need such a partitioning of variables be unique. Ohm's law V = RI imposes a simple relation among the signals 'voltage' V and 'current' I, but it is not evident which signal is to be treated as input and which as output.<br />

The mathematical analog of such a 'block' is a function or an operator H mapping inputs u taken from an input space U to output signals y belonging to an output space Y. We write<br />

H : U → Y.<br />

Figure 5.1: Input-output systems: the engineering view.<br />

Remark 5.13 Again a philosophical warning is in its place. If an input-output system is mathematically represented as a function H, then to each input u ∈ U, H attaches a unique output y = H(u). However, more often than not, the memory structure of many physical systems allows various outputs to correspond to one input signal. A capacitor C imposes the relation C dV/dt = I on voltage-current pairs (V, I). Taking I = 0 as input allows the output V to be any constant signal V(t) = V0. Hence, there is no obvious mapping I ↦ V modeling this simple relationship!<br />

Of course, there are many ways to represent input-output mappings. We will be particularly interested in (input-output) mappings defined by convolutions and those defined by transfer functions. Undoubtedly, you have seen various of the following definitions before, but for the purpose of this course it is of importance to understand (and fully appreciate) the system theoretic nature of the concepts below. In order not to complicate things from the outset, we first consider single input single output continuous time systems with time set T = R, and turn to the multivariable case in the next section. This means that we will focus on analog systems. We will not treat discrete time (or digital) systems explicitly, for their definitions will be similar and apparent from the treatment below.<br />

In a (continuous time) convolution system, an input signal u ∈ U is transformed to an output signal y = H(u) according to the convolution<br />

y(t) = (Hu)(t) = (h ∗ u)(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ   (5.12)<br />

where h : R → R is a function called the convolution kernel. In system theoretic language, h is usually referred to as the impulse response of the system, as the output y is equal to h whenever the input u is taken to be a Dirac impulse u(t) = δ(t). Obviously, H defines a linear map (as H(u1 + u2) = H(u1) + H(u2) and H(αu) = αH(u)), and for this reason the corresponding input-output system is also called linear. Moreover, it defines a time-invariant system in the sense that H maps the time shifted input signal u(t − t0) to the time shifted output y(t − t0).<br />

No mapping is well defined if we are left to guess what the domain U of H should be. There are various options:<br />

- One can take bounded signals, i.e., U = L∞.<br />
- One can take harmonic signals, i.e., U = {c e^{jωt} | c ∈ C, ω ∈ R}.<br />
- One can take energy signals, i.e., U = L2.<br />
- One can take periodic signals with finite power, i.e., U = P.<br />
- The input class can also consist of one signal only. If we are interested in the impulse response only, we take U = {δ}.<br />


- One can take white noise stochastic processes as inputs. In that case U consists of all stationary zero mean signals u with finite covariance Ru(τ) = σ²δ(τ).<br />

Induced norms<br />

Assume that both U and Y are normed linear spaces. Then we call H bounded if there is a constant M ≥ 0 such that<br />

‖H(u)‖ ≤ M ‖u‖.<br />

Note that the norm on the left hand side is the norm defined on signals in the output space Y, and the norm on the right hand side corresponds to the norm of the input signals in U. In system theoretic terms, boundedness of H can be interpreted in the sense that H is stable with respect to the chosen input class and the corresponding norms. If a linear map H : U → Y is bounded, then its norm ‖H‖ can be defined in several alternative (and equivalent) ways:<br />

‖H‖ = inf{ M | ‖Hu‖ ≤ M ‖u‖ for all u ∈ U }<br />
     = sup_{u∈U, u≠0} ‖Hu‖ / ‖u‖<br />
     = sup_{u∈U, ‖u‖≤1} ‖Hu‖<br />
     = sup_{u∈U, ‖u‖=1} ‖Hu‖   (5.13)<br />

For linear operators all these expressions are equal, and either one of them serves as definition for the norm of an input-output system. The norm ‖H‖ is often called the induced norm or the operator norm of H, and it has the interpretation of the maximal 'gain' of the mapping H : U → Y. A most important observation is that<br />

the norm of the input-output system defined by H depends on the class of inputs U and on the signal norms for elements u ∈ U and y ∈ Y. A different class of inputs or different norms on the input and output signals results in different operator norms of H.<br />

5.3.1 The H∞ norm of a system<br />

Let T be a continuous time set. If we assume that the impulse response h : R → R satisfies ‖h‖1 = ∫_{−∞}^{∞} |h(t)| dt < ∞ (in other words, if we assume that h ∈ L1), then H is a stable system in the sense that bounded inputs produce bounded outputs. Thus, under this condition,<br />

H : L∞(T) → L∞(T)<br />

and we can define the L∞-induced norm of H as<br />

‖H‖(∞,∞) := sup_{u∈L∞} ‖H(u)‖∞ / ‖u‖∞<br />

Example 5.14 For example, the response to a harmonic input signal u(t) = e^{jωt} is given by<br />

y(t) = ∫_{−∞}^{∞} h(τ) e^{jω(t−τ)} dτ = ĥ(ω) e^{jωt}<br />

where ĥ is the Fourier transform of h as defined in (5.7).<br />

Example 5.15 A P-periodic signal with line spectrum {uk}, k ∈ Z, can be represented as u(t) = Σ_{k=−∞}^{∞} uk e^{jkωt}, where ω = 2π/P, and its corresponding output is given by<br />

y(t) = Σ_{k=−∞}^{∞} ĥ(kω) uk e^{jkωt}.<br />

Consequently, y is also periodic with period P, and the line spectrum of the output is given by yk = ĥ(kω) uk, k ∈ Z.<br />

Interestingly, under the same condition, H also defines a mapping from energy signals to energy signals, i.e.<br />

H : L2(T) → L2(T)<br />

with the corresponding L2-induced norm<br />

‖H‖(2,2) := sup_{u∈L2} ‖H(u)‖2 / ‖u‖2<br />

In view of our definition of 'energy' signals, this norm is also referred to as the induced energy norm. The power does not define a norm for the class P of periodic signals. Nevertheless, Example 5.15 shows that<br />

H : P(T) → P(T)<br />

and we define the power-induced norm<br />

‖H‖pow := sup_{Pu≠0} √Py / √Pu.<br />

The following result characterizes these system norms.<br />

Theorem 5.16 Let T = R or R+ be the time set and let H be defined by (5.12). Suppose that h ∈ L1. Then<br />

1. the L∞-induced norm of H is given by ‖H‖(∞,∞) = ‖h‖1<br />
2. the L2-induced norm of H is given by ‖H‖(2,2) = max_{ω∈R} |ĥ(ω)|   (5.14)<br />
3. the power-induced norm of H is given by ‖H‖pow = max_{ω∈R} |ĥ(ω)|   (5.15)<br />
operator norms of H.


5.3. SYSTEMS AND SYSTEM NORMS 49<br />

We will extensively use the above characterizations of the L2-induced and power-induced norm. The first characterization, of the L∞-induced norm, is interesting but will not be used further in this course. The Fourier transform \hat{h} of the impulse response h is generally referred to as the frequency response of the system (5.12). It has the property that whenever h \in L_1 and u \in L_2,

\[ y(t) = (h * u)(t) \iff \hat{y}(\omega) = \hat{h}(\omega)\hat{u}(\omega) \tag{5.16} \]

Loosely speaking, this result states that convolution in the time domain is equivalent to multiplication in the frequency domain.

Remark 5.17 The quantity \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)| satisfies the axioms of a norm and is precisely equal to the L∞-norm of the frequency response, i.e., \| \hat{h} \|_\infty = \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|.

Remark 5.18 The frequency response can be written as

\[ \hat{h}(\omega) = |\hat{h}(\omega)|\, e^{j\phi(\omega)}. \]

Various graphical representations of frequency responses are illustrative for investigating system properties like bandwidth, system gains, etc. A plot of |\hat{h}(\omega)| and \phi(\omega) as functions of \omega \in \mathbb{R} is called a Bode diagram. See Figure 5.2. In view of the equivalence (5.16), a Bode diagram provides information on the extent to which the system amplifies purely harmonic input signals of frequency \omega \in \mathbb{R}. To interpret these diagrams one usually takes a logarithmic scale on the \omega axis and plots 20 \log_{10} |\hat{h}(\omega)| to get units in dB. Theorem 5.16 states that the L2-induced norm of the system defined by (5.12) equals the highest gain value occurring in the Bode plot of the frequency response of the system. In view of Example 5.14, any frequency \omega_0 for which this maximum is attained has the interpretation that a harmonic input signal u(t) = e^{j\omega_0 t} results in a (harmonic) output signal y(t) with frequency \omega_0 and maximal amplitude |\hat{h}(\omega_0)|. (Unfortunately, \sin(\omega_0 t) \notin L_2, so we cannot use this insight directly in a proof of Theorem 5.16.)
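The conventions of Remark 5.18 are easy to reproduce numerically. A sketch (our own), sampling the Bode data of the assumed example \hat h(\omega) = 1/(1 + j\omega):

```python
import numpy as np

# Bode samples for the assumed example h_hat(w) = 1/(1 + j*w).
w = np.logspace(-2, 2, 401)               # log-spaced frequency axis (rad/s)
h_hat = 1.0 / (1.0 + 1j * w)
gain_db = 20.0 * np.log10(np.abs(h_hat))  # 20*log10|h_hat|, in dB
phase_deg = np.angle(h_hat, deg=True)     # phase phi(w), in degrees

# The highest value in the gain plot is (close to) 0 dB, attained as w -> 0;
# by Theorem 5.16 this peak is the L2-induced norm (here: 1).
peak = 10.0 ** (gain_db.max() / 20.0)
print(peak)
```

At \omega = 1 the gain is about -3 dB and the phase about -45 degrees, the familiar corner-frequency values of a first-order system.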

To prove Theorem 5.16, we derive from Parseval's identity that

\[
\| H \|_{(2,2)}^2 = \sup_{u \in L_2} \frac{\| h * u \|_2^2}{\| u \|_2^2}
= \sup_{\hat{u} \in L_2} \frac{ \tfrac{1}{2\pi} \| \widehat{h * u} \|_2^2 }{ \tfrac{1}{2\pi} \| \hat{u} \|_2^2 }
= \sup_{\hat{u} \in L_2} \frac{ \int_{-\infty}^{\infty} |\hat{h}(\omega)|^2 |\hat{u}(\omega)|^2 \, d\omega }{ \| \hat{u} \|_2^2 }
\le \sup_{\hat{u} \in L_2} \frac{ \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|^2 \, \| \hat{u} \|_2^2 }{ \| \hat{u} \|_2^2 }
= \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|^2
\]

which shows that \| H \|_{(2,2)} \le \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|. Similarly, using Parseval's identity for

CHAPTER 5. SIGNAL SPACES AND NORMS

Figure 5.2: A Bode diagram (gain in dB and phase in degrees against frequency in rad/sec on a logarithmic scale)

periodic signals,

\[
\| H \|_{\mathrm{pow}}^2 = \sup_{P} \sup_{u \text{ is } P\text{-periodic}} \frac{P_y}{P_u}
= \sup_{P} \sup_{u \text{ is } P\text{-periodic}} \frac{ \sum_{k=-\infty}^{\infty} |\hat{h}(2\pi k/P)\, u_k|^2 }{ \sum_{k=-\infty}^{\infty} |u_k|^2 }
\le \sup_{P} \max_{k \in \mathbb{Z}} |\hat{h}(2\pi k/P)|^2
\le \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|^2
\]

showing that \| H \|_{\mathrm{pow}} \le \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|. Theorem 5.16 provides equality for the latter inequalities. For periodic signals (statement 3) this can be seen as follows. Suppose that \omega_0 is such that

\[ |\hat{h}(\omega_0)| = \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|. \]

Take a harmonic input u(t) = e^{j\omega_0 t} and note that this signal has power P_u = 1 and line spectrum u_1 = 1, u_k = 0 for k \neq 1. From Example 5.14 it follows that the output y has line spectrum y_1 = \hat{h}(\omega_0) and y_k = 0 for k \neq 1, and, using Parseval's identity, the output has power P_y = |\hat{h}(\omega_0)|^2. We therefore obtain that

\[ \| H \|_{\mathrm{pow}} = |\hat{h}(\omega_0)| = \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)| \]
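The harmonic-input argument used for statement 3 can be checked numerically. In this sketch (ours; h(t) = e^{-t} is an assumed example) the power gain of a sinusoid at frequency \omega_0 = 2 approaches |\hat h(\omega_0)|^2 = 1/5:

```python
import numpy as np

# Power gain of a periodic input through h(t) = exp(-t) (assumed example):
# statement 3 of Theorem 5.16 predicts P_y / P_u -> |h_hat(w0)|^2 = 1/(1 + w0^2).
dt = 0.005
t = np.arange(0.0, 60.0, dt)
h = np.exp(-t)
w0 = 2.0
u = np.cos(w0 * t)                       # periodic input with period 2*pi/w0

y = np.convolve(h, u)[: len(u)] * dt     # response on the grid
sel = t > 20.0                           # late window: transient has died out
Pu = np.mean(u[sel] ** 2)                # average power of u, close to 1/2
Py = np.mean(y[sel] ** 2)                # average power of y
print(Py / Pu, 1.0 / (1.0 + w0**2))      # both close to 0.2
```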


as claimed. The proof of statement 2 is more involved and will be skipped here.

The transfer function associated with (5.12) is the Laplace transform of the impulse response h. This object will be denoted by H(s) (which the careful reader perceives as poor and ambiguous notation at this stage^2). Formally,

\[ H(s) := \int_{-\infty}^{\infty} h(t)\, e^{-st} \, dt \]

where the complex variable s is assumed to belong to an area of the complex plane where the above integral is finite and well defined. The Laplace transforms of signals are defined in a similar way, and we have that

\[ y = h * u \iff \hat{y}(\omega) = \hat{h}(\omega)\hat{u}(\omega) \iff Y(s) = H(s)U(s). \]

If the Laplace transform exists in an area of the complex plane which includes the imaginary axis, then the Fourier transform is simply \hat{h}(\omega) = H(j\omega).

Remark 5.19 It is common engineering practice (the adjective `good' or `bad' is left to your discretion) to denote the Laplace transform of a signal u ambiguously by u. Thus u(t) means something really different than u(s)! Whereas y(t) = H(u)(t) refers to the convolution (5.12), the notation y(s) = Hu(s) is to be interpreted as the product of H(s) and the Laplace transform u(s) of u(t). The notation y = Hu can therefore be interpreted in two (equivalent) ways!

We return to our discussion of induced norms. The right-hand side of (5.14) and (5.15) is defined as the H∞ norm of the system (5.12).

Definition 5.20 Let H(s) be the transfer function of a stable single input single output system with frequency response \hat{h}(\omega). The H∞ norm of H, denoted \| H \|_\infty, is the number

\[ \| H \|_\infty := \max_{\omega \in \mathbb{R}} |\hat{h}(\omega)|. \tag{5.17} \]

The H∞ norm of a SISO transfer function therefore has the interpretation of the maximal peak in the Bode diagram of the frequency response \hat{h} of the system and can be `read' directly from such a diagram. Theorem 5.16 therefore states that

\[ \| H(s) \|_\infty = \| H \|_{(2,2)} = \| H \|_{\mathrm{pow}}. \]

In words, this states that

    the energy induced norm and the power induced norm of H are equal to the H∞
    norm of the transfer function H(s).

A stochastic interpretation of the H∞ norm

We conclude this subsection with a discussion of a stochastic interpretation of the H∞ norm of a transfer function. Consider the set \mathcal{T} of all stochastic (continuous time) processes s(t) on the finite time interval [0,T] for which the expectation

\[ E\| s \|_{2,T}^2 := E \int_0^T s^T(t)\, s(t) \, dt \tag{5.18} \]

is well defined and bounded. Consider the convolution system (5.12) and assume that h \in L_1 (i.e. the system is stable) and the input u \in \mathcal{T}. Then the output y is a stochastic process and we can introduce the "induced norm"

\[ \| H \|_{\mathrm{stoch},T}^2 := \sup_{u \in \mathcal{T}} \frac{E\| y \|_{2,T}^2}{E\| u \|_{2,T}^2} \]

which depends on the length of the time horizon T. This is closely related to an induced operator norm for the convolution system (5.12). We would like to extend this definition to the infinite horizon case. For this purpose it seems reasonable to define

\[ E\| s \|_2^2 := \lim_{T \to \infty} E\, \frac{1}{T} \| s \|_{2,T}^2 \tag{5.19} \]

assuming that the limit exists. This expectation can be interpreted as the average power of a stochastic signal. However, as motivated in this section, we would also like to work with input and output spaces U and Y that are linear vector spaces. Unfortunately, the class of stochastic processes for which the limit in (5.19) exists is not a linear space. For this reason, the class of stochastic input signals U is set to

\[ U := \{\, s \mid \| s \| < \infty \,\} \]

where

\[ \| s \|^2 := \limsup_{T \to \infty} E\, \frac{1}{T} \| s \|_{2,T}^2 \]

In this case, U is a linear space of stochastic signals, but \| \cdot \| does not define a norm on U. This is easily seen as \| s \| = 0 for any s \in L_2. However, it is a semi-norm, as it satisfies conditions 2 and 3 in Remark 5.6. With this class of input signals, we can extend the "induced norm" \| H \|_{\mathrm{stoch},T} to the infinite horizon case:

\[ \| H \|_{\mathrm{stoch}} := \sup_{u \in U} \frac{\| y \|}{\| u \|} \]

which is bounded for stable systems H. The following result is the crux of this discussion and states that \| H \|_{\mathrm{stoch}} is, in fact, equal to the H∞ norm of the transfer function H.

Theorem 5.21 Let h \in L_1 and let H(s) be the transfer function of the system (5.12). Then

\[ \| H \|_{\mathrm{stoch}} = \| H \|_\infty. \]

A proof of this result is beyond the scope of these lecture notes. The result can be found in [18].

^2 For we defined H already as the mapping that associates with u \in U the element H(u). However, from the context it will always be clear what we mean.
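Definition 5.20 translates into a one-line numerical recipe: sample |\hat h(\omega)| on a fine grid and take the peak of the Bode magnitude plot. A sketch with an assumed lightly damped example H(s) = 1/(s^2 + 0.2 s + 1), where the peak sits at the resonance rather than at \omega = 0:

```python
import numpy as np

# H-infinity norm of the assumed example H(s) = 1/(s^2 + 0.2 s + 1), read off
# as the peak of |H(jw)| (Definition 5.20).
w = np.linspace(0.0, 10.0, 100001)
H = 1.0 / ((1j * w) ** 2 + 0.2 * (1j * w) + 1.0)
hinf = float(np.abs(H).max())
w_peak = float(w[np.abs(H).argmax()])

# For a 2nd-order system with damping ratio zeta = 0.1 the closed forms are
# peak = 1/(2*zeta*sqrt(1 - zeta^2)) and w_peak = sqrt(1 - 2*zeta^2).
print(hinf, w_peak)
```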


5.3.2 The H2 norm of a system

The notation H2 is commonly used for the class of functions of a complex variable that do not have poles in the open right-half complex plane (they are analytic in the open right-half complex plane) and for which the norm

\[ \| s \|_{H_2} := \sup_{\sigma > 0} \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} s^*(\sigma + j\omega)\, s(\sigma + j\omega) \, d\omega \right\}^{1/2} \]

is finite. The `H' stands for Hardy space. Thus,

\[ H_2 = \{\, s : \mathbb{C} \to \mathbb{C} \mid s \text{ analytic in } \operatorname{Re}(s) > 0 \text{ and } \| s \|_{H_2} < \infty \,\} \]

This "cold-hearted" definition has, in fact, a very elegant system theoretic interpretation. Before giving this, we first remark that the H2 norm can be evaluated on the imaginary axis. That is, for any s \in H_2 one can construct a boundary function \bar{s}(\omega) = \lim_{\sigma \downarrow 0} s(\sigma + j\omega), which exists for almost all \omega. Moreover, this boundary function is square integrable, i.e., \bar{s} \in L_2 and \| s \|_{H_2} = \| \bar{s} \|_2. Stated otherwise,

\[ \| s \|_{H_2} = \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \bar{s}^*(\omega)\, \bar{s}(\omega) \, d\omega \right\}^{1/2} \]

Thus, the supremum in the definition of the H2 norm always occurs on the boundary \sigma = 0. It is for this reason that s is usually identified with the boundary function, and the bar in \bar{s} is usually omitted.

Deterministic interpretation

To interpret the H2 norm, consider again the convolution system (5.12) and suppose that we are interested only in the impulse response of this system. This means that we take the impulse \delta(t) as the only candidate input for H. The resulting output y(t) = (Hu)(t) = h(t) is an energy function, so that E_h < \infty. Using Parseval's identity we obtain

\[ E_h = \| h \|_2^2 = \frac{1}{2\pi} \| \hat{h} \|_2^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{h}^*(\omega)\, \hat{h}(\omega) \, d\omega = \| H(s) \|_{H_2}^2 \]

where H(s) is the transfer function associated with the input-output system. The square of the H2 norm is therefore equal to the energy of the impulse response. To summarize:

Definition 5.22 Let H(s) be the transfer function of a stable single input single output system with frequency response \hat{h}(\omega). The H2 norm of H, denoted \| H \|_{H_2}, is the number

\[ \| H \|_{H_2} := \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} H(j\omega)\, H(-j\omega) \, d\omega \right\}^{1/2}. \tag{5.20} \]

Stochastic interpretation

The H2 norm of a transfer function has an elegant equivalent interpretation in terms of stationary stochastic signals^3. The H2 norm is equal to the expected root-mean-square (RMS) value of the output of the system when the input is a realization of a unit variance white noise process. That is, let

\[ u(t) = \begin{cases} \text{a unit variance white noise process} & t \in [0,T] \\ 0 & \text{otherwise} \end{cases} \]

and let y = h * u be the corresponding output. Using the definition of a finite horizon 2-norm from (5.18), we set

\[ \| H \|_{\mathrm{RMS},T}^2 := E \int_{-\infty}^{\infty} y^T(t)\, y(t) \, dt = E\| y \|_{2,T}^2 \]

where E denotes expectation. Substituting (5.12) in the latter expression and using that E(u(t_1) u(t_2)) = \delta(t_1 - t_2), we obtain that

\[ \| H \|_{\mathrm{RMS},T}^2 = \int_0^T dt \int_{t-T}^{t} h(\tau)\, h(\tau) \, d\tau = T \int_{-T}^{T} h(\tau)\, h(\tau) \, d\tau - \int_0^T \tau \big( h(\tau)h(\tau) + h(-\tau)h(-\tau) \big) \, d\tau \]

If the transfer function is such that the limit

\[ \| H \|_{\mathrm{RMS}}^2 = \lim_{T \to \infty} \frac{1}{T} \| H \|_{\mathrm{RMS},T}^2 \]

remains bounded, we obtain the infinite horizon RMS-value of the transfer function H. In fact, it then follows that

\[ \| H \|_{\mathrm{RMS}}^2 = \int_{-\infty}^{\infty} h(\tau)\, h(\tau) \, d\tau = \frac{1}{2\pi} \int_{-\infty}^{\infty} H(j\omega)\, H^*(j\omega) \, d\omega = \| H \|_{H_2}^2 \]

Thus, the H2 norm of the transfer function is equal to the infinite horizon RMS value of the transfer function.

Another stochastic interpretation of the H2 norm can be given as follows. Let u(t) be a stochastic process with mean 0 and covariance R_u(\tau). Taking such a process as input to (5.12) results in the output y(t), which is a random variable for each time instant t \in T. It is easy to see that the output y also has zero mean. The condition that h \in L_2 guarantees that the output y has finite covariances R_y(\tau) = E[y(t)\, y(t - \tau)], and easy calculations^4 show that the covariances R_y(\tau) are given by

\[ R_y(\tau) = E[y(t + \tau)\, y(t)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(s')\, R_u(\tau + s'' - s')\, h(s'') \, ds' \, ds'' \]

The latter expression is a double convolution which, by taking Fourier transforms, results in the equivalent expression

\[ \Phi_y(\omega) = \hat{h}(\omega)\, \Phi_u(\omega)\, \hat{h}(-\omega) \tag{5.21} \]

^3 The derivations in this subsection are not relevant for the course!
^4 Details are not important here.
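Definition 5.22 and the impulse-response characterization can be cross-checked numerically. A sketch (ours) for the assumed example H(s) = 1/(s + 1), h(t) = e^{-t}, whose H2 norm is \sqrt{\int_0^\infty e^{-2t} dt} = 1/\sqrt{2}:

```python
import numpy as np

# H2 norm of the assumed example H(s) = 1/(s + 1), h(t) = exp(-t); the exact
# value is 1/sqrt(2) ~ 0.7071.
dt = 1e-4
t = np.arange(0.0, 40.0, dt)
h = np.exp(-t)
h2_time = np.sqrt((h**2).sum() * dt)          # energy of the impulse response

w = np.linspace(-500.0, 500.0, 1_000_001)     # truncated frequency axis
H = 1.0 / (1j * w + 1.0)
dw = w[1] - w[0]
h2_freq = np.sqrt((np.abs(H) ** 2).sum() * dw / (2.0 * np.pi))  # as in (5.20)

print(h2_time, h2_freq)                       # both close to 0.7071
```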


in the frequency domain. We now assume u to be a white noise process with \Phi_u(\omega) = 1 for all \omega \in \mathbb{R}. (This implies that R_u(\tau) = \delta(\tau). Indeed, the variance of this signal theoretically equals R_u(0) = \delta(0) = \infty. This is caused by the fact that all frequencies have equal power (density) 1, which in turn is necessary to allow for infinitely fast changes of the signal, so as to make future values independent of momentary values irrespective of the small time difference. Of course, in practice it is sufficient if the "whiteness" just holds for broadbanded noise with respect to the frequency band of the plant under study.) Using (5.21), the spectrum of the output is then given by

\[ \Phi_y(\omega) = |\hat{h}(\omega)|^2 \tag{5.22} \]

which relates the spectrum of the input to the spectrum of the output of the system defined by the convolution (5.12). Integrating the latter expression over \omega \in \mathbb{R} and using the definition of the H2 norm yields that

\[ \frac{1}{2\pi} \int_{-\infty}^{\infty} \Phi_y(\omega) \, d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{h}(\omega)\, \hat{h}(-\omega) \, d\omega = \frac{1}{2\pi} \| \hat{h} \|_2^2 = \| H(s) \|_{H_2}^2 = \| R_y(\tau) \|_2^2 \tag{5.23} \]

Thus the H2 norm of the transfer function H(s) has the interpretation of the L2 norm of the covariance function R_y(\tau) of the output y of the system when the input u is taken to be a white noise signal with variance equal to 1. From this it should now be evident that, when we define in this stochastic context the norm of a stochastic (stationary) signal s with mean 0 and covariance R_s(\tau) to be

\[ \| s \| := \| R_s(\tau) \|_2 = \left\{ \int_{-\infty}^{\infty} E[s(t + \tau)\, s(t)] \, d\tau \right\}^{1/2} \]

then the H2 norm of the transfer function H(s) is equal to the norm \| y \| of the output y when taking white noise as input to the system. Note that the above norm is rather a power norm than an energy norm, and that for a white noise input u we get

\[ \| u \| = \| R_u(\tau) \|_2 = \left\{ \int_{-\infty}^{\infty} \delta(\tau) \, d\tau \right\}^{1/2} = 1. \]

5.4 Multivariable generalizations

In the previous section we introduced various norms to measure the relative size of a single input single output system. In this section we generalize these measures to multivariable systems. The mathematical background and the main ideas behind the definitions and characterizations of norms for multivariable systems are to a large extent identical to the concepts derived in the previous section. Throughout this section we will consider an input-output system with m inputs and p outputs, as in Figure 5.3.

Figure 5.3: A multivariable system (a block H with inputs u_1, ..., u_5 and outputs y_1, y_2, y_3)

Again, starting with a convolution representation of such a system, the output y is determined from the input u by

\[ y(t) = (Hu)(t) = (h * u)(t) = \int_{-\infty}^{\infty} h(t - \tau)\, u(\tau) \, d\tau \]

where the convolution kernel h(t) is now, for every t \in \mathbb{R}, a real matrix of dimension p \times m. The transfer function associated with this system is the Laplace transform of h and is the function

\[ H(s) = \int_{-\infty}^{\infty} h(t)\, e^{-st} \, dt. \]

Thus H(s) has dimension p \times m for every s \in \mathbb{C}. We will again assume that the system is stable in the sense that all entries [H(s)]_{ij} of H(s) (i = 1, ..., p and j = 1, ..., m) have their poles in the left half plane or, equivalently, that the ij-th element [h(t)]_{ij} of h, viewed as a function of t, belongs to L_1. As in the previous section, under this assumption H defines an operator mapping bounded inputs to bounded outputs (but now for multivariable signals!) and bounded energy inputs to bounded energy outputs. That is,

\[ H : L_\infty^m \to L_\infty^p, \qquad H : L_2^m \to L_2^p \]

where the superscripts p and m denote the dimensions of the signals. We will be mainly interested in the L2-induced and power-induced norm of such a system. These norms are defined as in the previous section:

\[ \| H \|_{(2,2)} := \sup_{u \in L_2^m} \frac{\| y \|_2}{\| u \|_2}, \qquad \| H \|_{\mathrm{pow}} := \sup_{P_u \neq 0} \frac{P_y^{1/2}}{P_u^{1/2}} \]

where y = H(u) is the output signal.

Like in section 5.3, we wish to express the L2-induced and power-induced norm of the operator H as an H∞ norm of the (multivariable) transfer function H(s), and to obtain (if possible) a multivariable analog of the maximum peak in the Bode diagram of a transfer function. This requires some background on what is undoubtedly one of the most frequently encountered decompositions of matrices: the singular value decomposition. It occurs in numerous applications in control theory, system identification, modelling, numerical linear algebra, and time series analysis, to mention only a few areas. We will devote a subsection to the singular value decomposition (SVD) as a refreshment.

5.4.1 The singular value decomposition

In this section we will forget about dynamics and just consider real constant matrices of dimension p \times m. Let H \in \mathbb{R}^{p \times m} be a given matrix. Then H maps any vector u \in \mathbb{R}^m to a vector y = Hu in \mathbb{R}^p according to the usual matrix multiplication.
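The white-noise interpretation derived above can also be probed by simulation. In this sketch (ours; h(t) = e^{-t} is an assumed example with \|H\|_{H_2}^2 = 1/2) the band-limited discrete noise only approximates true white noise, exactly as the notes caution:

```python
import numpy as np

# Monte-Carlo sketch of the white-noise interpretation for the assumed
# example h(t) = exp(-t): driven by (approximately) unit-spectral-density
# white noise, the stationary output variance approaches ||H||_H2^2 = 1/2.
rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0.0, 2000.0, dt)
th = np.arange(0.0, 10.0, dt)            # truncated support of h
h = np.exp(-th)

# Discrete noise with E[u_k u_l] = delta_{kl}/dt mimics continuous white
# noise with spectral density Phi_u(w) = 1 inside the simulated band.
u = rng.normal(scale=1.0 / np.sqrt(dt), size=len(t))
y = np.convolve(h, u)[: len(t)] * dt

var_y = float(y[t > 10.0].var())         # stationary variance R_y(0)
print(var_y)                             # close to 0.5
```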


Definition 5.23 A singular value decomposition (SVD) of a matrix H \in \mathbb{R}^{p \times m} is a decomposition H = Y \Sigma U^T, where

- Y \in \mathbb{R}^{p \times p} is orthogonal, i.e. Y^T Y = Y Y^T = I_p,
- U \in \mathbb{R}^{m \times m} is orthogonal, i.e. U^T U = U U^T = I_m,
- \Sigma \in \mathbb{R}^{p \times m} is diagonal, i.e.

\[ \Sigma = \begin{pmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{where} \quad \Sigma_0 = \operatorname{diag}(\sigma_1, \ldots, \sigma_r) = \begin{pmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_r \end{pmatrix} \]

and \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0.

Every matrix H has such a decomposition. The ordered positive numbers \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r are uniquely defined and are called the singular values of H. The singular values of H \in \mathbb{R}^{p \times m} can be computed via the familiar eigenvalue decomposition because

\[ H^T H = U \Sigma^T Y^T Y \Sigma U^T = U \Sigma^T \Sigma\, U^T \]

and

\[ H H^T = Y \Sigma U^T U \Sigma^T Y^T = Y \Sigma \Sigma^T Y^T \]

Consequently, if you want to compute the singular values with pencil and paper, you can use the following algorithm. (For numerically well conditioned methods, however, you should avoid the eigenvalue decomposition.)

Algorithm 5.24 (Singular value decomposition) Given a p \times m matrix H:

- Construct the symmetric matrix H^T H (or H H^T if m is much larger than p).
- Compute the non-zero eigenvalues \lambda_1, \ldots, \lambda_r of H^T H. Since for such a symmetric matrix the non-zero eigenvalues are positive numbers, we can assume the eigenvalues to be ordered: \lambda_1 \ge \cdots \ge \lambda_r > 0.
- The k-th singular value of H is given by \sigma_k = \sqrt{\lambda_k}, k = 1, 2, \ldots, r.

The number r is equal to the rank of H, and we remark that the matrices U and Y need not be unique. (The sign is not defined, and non-uniqueness can occur in case of multiple singular values.)

The singular value decomposition and the singular values of a matrix have a simple and straightforward interpretation in terms of the `gains' and the so-called `principal directions' of H.^5 For this, it is most convenient to view the matrix as a linear operator acting on vectors u \in \mathbb{R}^m and producing vectors y = Hu \in \mathbb{R}^p according to the usual matrix multiplication.

Let H = Y \Sigma U^T be a singular value decomposition of H and write the m \times m matrix U = (u_1\; u_2\; \ldots\; u_m) and the p \times p matrix Y = (y_1\; y_2\; \ldots\; y_p), where u_i and y_j are the columns of U and Y respectively, i.e.,

\[ u_i \in \mathbb{R}^m, \qquad y_j \in \mathbb{R}^p \]

with i = 1, 2, \ldots, m and j = 1, \ldots, p. Since U is an orthogonal matrix, the vectors \{u_i\}_{i=1,\ldots,m} constitute an orthonormal basis for \mathbb{R}^m. Similarly, the vectors \{y_j\}_{j=1,\ldots,p} constitute an orthonormal basis for \mathbb{R}^p. Moreover, since u_j^T u_i is zero except when i = j (in which case u_i^T u_i = 1), there holds

\[ H u_i = Y \Sigma U^T u_i = Y \Sigma e_i = \sigma_i y_i. \]

In other words, the i-th basis vector u_i is mapped in the direction of the i-th basis vector y_i and `amplified' by an amount \sigma_i. It thus follows that

\[ \| H u_i \| = \sigma_i \| y_i \| = \sigma_i \]

where we used that \| y_i \| = 1. So, effectively, a general input vector u will first be decomposed (by U^T) along the various orthogonal directions u_i. Next, these decomposed components are multiplied by the corresponding singular values (\Sigma) and then (by Y) mapped onto the corresponding directions y_i. If the "energy" in u is restricted to 1, i.e. \| u \| = 1, the "energetically" largest output y is certainly obtained if u is directed along u_1, so that u = u_1. As a consequence, it is easy to grasp that the induced norm of H is related to the singular value decomposition as follows:

\[ \| H \| := \sup_{u \in \mathbb{R}^m} \frac{\| H u \|}{\| u \|} = \frac{\| H u_1 \|}{\| u_1 \|} = \sigma_1 \]

In other words, the largest singular value \sigma_1 of H equals the induced norm of H (viewed as a mapping from \mathbb{R}^m to \mathbb{R}^p), whereas the input u_1 \in \mathbb{R}^m defines an `optimal direction' in the sense that the norm of H u_1 is equal to the induced norm of H. The maximal singular value \sigma_1, often denoted by \bar{\sigma}, can thus be viewed as the maximal `gain' of the matrix H, whereas the smallest singular value \sigma_r, sometimes denoted by \underline{\sigma}, can be viewed as the minimal `gain' of the matrix under normalized `inputs', provided that the matrix has full rank. (If the matrix H does not have full rank, it has a non-trivial kernel, so that Hu = 0 for some input u \neq 0.)

Remark 5.25 To verify the latter expression, note that for any u \in \mathbb{R}^m,

\[ \| H u \|^2 = u^T H^T H u = u^T U \Sigma^T \Sigma\, U^T u = x^T \Sigma^T \Sigma\, x \]

where x = U^T u. It follows that

\[ \max_{\| u \| = 1} \| H u \|^2 = \max_{\| x \| = 1} \| \Sigma x \|^2 = \max_{\| x \| = 1} \sum_{i=1}^{m} \sigma_i^2 |x_i|^2 \]

which is easily seen to be maximal if x_1 = 1 and x_i = 0 for all i \neq 1.

^5 In fact, one may wonder why eigenvalues of a matrix have played such a dominant role in your linear algebra course. In the context of linear mappings, singular values have a much more direct and logical operator theoretic interpretation.

operator theoretic interpretation.



5.4.2 The H∞ norm for multivariable systems

Consider the p \times m stable transfer function H(s) and let

\[ H(j\omega) = Y(j\omega)\, \Sigma(j\omega)\, U^*(j\omega) \]

be a singular value decomposition of H(j\omega) for a fixed value of \omega \in \mathbb{R}. Since H(j\omega) is in general complex valued, we have that H(j\omega) \in \mathbb{C}^{p \times m}, and the singular vectors stored in Y(j\omega) and U(j\omega) are complex valued. For each such \omega, the singular values, still being real valued (i.e. \sigma_i \in \mathbb{R}), are ordered according to

\[ \sigma_1(\omega) \ge \sigma_2(\omega) \ge \cdots \ge \sigma_r(\omega) > 0 \]

where r is equal to the rank of H(s) and in general equal to the minimum of p and m. Thus the singular values become frequency dependent! From the previous section we infer that for each \omega \in \mathbb{R}

\[ 0 \le \frac{\| H(j\omega)\, \hat{u}(\omega) \|}{\| \hat{u}(\omega) \|} \le \sigma_1(\omega) \]

or, stated otherwise,

\[ \| H(j\omega)\, \hat{u}(\omega) \| \le \sigma_1(\omega) \| \hat{u}(\omega) \| \]

so that \bar{\sigma}(\omega) := \sigma_1(\omega), viewed as a function of \omega, has the interpretation of a maximal gain of the system at frequency \omega. It is for this reason that a plot of \bar{\sigma}(\omega) with \omega \in \mathbb{R} can be viewed as a multivariable generalization of the Bode diagram!

Definition 5.26 Let H(s) be a stable multivariable transfer function. The H∞ norm of H(s) is defined as

\[ \| H(s) \|_\infty := \sup_{\omega \in \mathbb{R}} \bar{\sigma}(H(j\omega)). \]

With this definition we obtain the natural generalization of the results of section 5.3 to multivariable systems. Indeed, we have the following multivariable analog of Theorem 5.16:

Theorem 5.27 Let T = \mathbb{R}_+ or T = \mathbb{R} be the time set. For a stable multivariable transfer function H(s), the L2-induced norm and the power-induced norm are equal to the H∞ norm of H(s). That is,

\[ \| H \|_{(2,2)} = \| H \|_{\mathrm{pow}} = \| H(s) \|_\infty \]

The derivation of this result is to a large extent similar to the one given in (5.23). An example of a "multivariable Bode diagram" is depicted in Figure 5.4.

The bottom line of this subsection is therefore that the L2-induced operator norm and the power-induced norm of a system are equal to the H∞ norm of its transfer function.

5.4.3 The H2 norm for multivariable systems

The H2 norm of a p \times m transfer function H(s) is defined as follows.

Definition 5.28 Let H(s) be a stable multivariable transfer function of dimension p \times m. The H2 norm of H(s) is defined as

\[ \| H(s) \|_{H_2} := \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \operatorname{trace}\big[ H^T(-j\omega)\, H(j\omega) \big] \, d\omega \right\}^{1/2}. \]
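Definition 5.26 amounts to a frequency sweep over \bar\sigma(H(j\omega)); a sketch with an assumed 2x2 example transfer matrix (for which the maximal gain happens to occur at \omega = 0):

```python
import numpy as np

# sigma_bar(H(jw)) swept over frequency for the assumed 2x2 example
#   H(s) = [ 1/(s+1)  2/(s+2) ]
#          [   0      1/(s+3) ]
def H_of(s):
    return np.array([[1.0 / (s + 1.0), 2.0 / (s + 2.0)],
                     [0.0,             1.0 / (s + 3.0)]])

w = np.linspace(0.0, 20.0, 2001)
sv_max = np.array([np.linalg.svd(H_of(1j * wk), compute_uv=False)[0]
                   for wk in w])                # sigma_bar at each frequency

hinf = float(sv_max.max())                      # Definition 5.26 on a grid
print(hinf, float(w[sv_max.argmax()]))          # peak value and its frequency
```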


Figure 5.4: The singular values of a transfer function (magnitudes in dB against frequency in rad/sec on a logarithmic scale)

Here, the `trace' of a square matrix is the sum of the entries on its diagonal. The rationale behind this definition is a very simple one and very similar, in spirit, to the idea behind the H2 norm of a scalar valued transfer function. For single-input single-output systems, the square of the H2 norm of a transfer function H(s) is equal to the energy in the impulse response of the system. For a system with m inputs we can consider m impulse responses, obtained by putting an impulsive input at the i-th input channel (i = 1, ..., m) and `watching' the corresponding output, say y^{(i)}, which is a p-dimensional energy signal for each such input. We will define the squared H2 norm of a multivariable system as the sum of the energies of the outputs y^{(i)}, as a reflection of the total "energy". Precisely, let us define m impulsive inputs, the i-th being

\[ u^{(i)}(t) = \begin{pmatrix} 0 \\ \vdots \\ \delta(t) \\ \vdots \\ 0 \end{pmatrix} \]

where the impulse \delta(t) appears at the i-th spot. The corresponding output is a p-dimensional signal, which we will denote by y^{(i)}(t) and which has bounded energy if the system


is assumed to be stable. The square of its two-norm is given by

\[ \| y^{(i)} \|_2^2 := \int_{-\infty}^{\infty} \sum_{j=1}^{p} | y_j^{(i)}(t) |^2 \, dt \]

where y_j^{(i)} denotes the j-th component of the output due to an impulsive input at the i-th input channel. The H2 norm of the transfer function H(s) is nothing else than the square root of the sum of the squared two-norms of these outputs. That is,

\[ \| H(s) \|_{H_2}^2 = \sum_{i=1}^{m} \| y^{(i)} \|_2^2. \]

In a stochastic setting, an m-dimensional (stationary) stochastic process admits an m-dimensional mean E[u(t)], which is independent of t, whereas its second order moments E[u(t_1)\, u(t_2)^T] now define m \times m matrices which only depend on the time difference t_1 - t_2. As in the previous section, we derive that the infinite horizon RMS value equals the H2 norm of the system, i.e.,

\[ \| H \|_{\mathrm{RMS}}^2 = \int_{-\infty}^{\infty} \operatorname{trace}\big[ h(t)\, h^T(t) \big] \, dt = \| H \|_{H_2}^2. \]

5.5 Exercises

1. Consider the following continuous time signals and determine their amplitude (\| \cdot \|_\infty), their energy (\| \cdot \|_2) and their L1 norm (\| \cdot \|_1).

   (c) x(t) = \exp(\alpha |t|) for fixed \alpha. Distinguish the cases where \alpha > 0, \alpha < 0 and \alpha = 0.

2. This exercise is mainly meant to familiarize you with various routines and procedures in MATLAB. This exercise involves a single input single output control scheme and should be viewed as a `prelude' to the multivariable control scheme of an exercise below.

   Consider a single input single output system described by the transfer function

   \[ P(s) = \frac{-s}{(s+1)(s+2)}. \]

   The system P is controlled by the constant controller C(s) = 1. We consider the usual feedback interconnection of P and C as described earlier.

   (a) Determine the H∞ norm of the system P.

   Hint: You can represent the plant P in MATLAB by introducing the numerator (`teller') and the denominator (`noemer') polynomial coefficients separately. Since (s+1)(s+2) = s^2 + 3s + 2, the denominator polynomial is represented by a variable den=[1 3 2] (coefficients always in descending order). Similarly, the numerator polynomial of P is represented by num=[0 -1 0]. The H∞ norm of P can now be read from the Bode plot of P by invoking the procedure bode(num,den).

   (b) Determine the H∞ norm of the sensitivity S, the complementary sensitivity T and the control sensitivity R of the closed loop system.

   Hint: The feedback interconnection of P and C can be obtained by the MATLAB procedures feedbk or feedback. After reading the help information about this procedure (help feedbk) we learn that the procedure requires state space representations of P and C and produces a state space representation of the closed loop system. Make sure that you use the right `type' option to obtain S, T and R, respectively. A state space representation of P can be obtained, e.g., by invoking the routine tf2ss (`transfer-to-state-space'). Thus, [a,b,c,d] = tf2ss(num,den) gives a state space description of P. If you prefer a transfer function description of the closed loop to determine the H∞ norms, then try the conversion routine ss2tf. See the corresponding help information.
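For readers without MATLAB, Exercise 2(a) can be cross-checked in Python (our sketch): the peak of |P(j\omega)| for P(s) = -s/((s+1)(s+2)) occurs at \omega = \sqrt{2} and equals 1/3:

```python
import numpy as np

# Exercise 2(a) cross-check without the MATLAB toolbox:
# P(s) = -s / ((s+1)(s+2)) = -s / (s^2 + 3 s + 2).
w = np.linspace(0.0, 50.0, 500001)
jw = 1j * w
P = -jw / (jw**2 + 3.0 * jw + 2.0)

hinf = float(np.abs(P).max())            # H-infinity norm, read from the peak
w_peak = float(w[np.abs(P).argmax()])

# Analytically |P(jw)|^2 = w^2 / ((1 + w^2)(4 + w^2)) is maximal at
# w = sqrt(2), where |P| = 1/3.
print(hinf, w_peak)
```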


Chapter 6

Weighting filters

6.1 The use of weighting filters

6.1.1 Introduction

The H∞ norm of an input-output system has been shown to be equal to

    ||H(s)||∞ = sup_{ω∈R} σ̄(H(jω)) = sup_{u∈L2} ||H(u)||2 / ||u||2.

The H∞ norm therefore indicates the maximal gain of the system if the inputs are allowed to vary over the class of signals with bounded two-norm. The frequency dependent maximal singular value σ̄(H(jω)), viewed as a function of ω, obviously provides more detailed information about the gain characteristics of the system than the H∞ norm alone. For example, if a system is known to be all pass, meaning that the two-norm of the output is equal to the two-norm of the input for all possible inputs u, then at every frequency ω the maximal gain σ̄(H(jω)) of the system is constant and equal to the H∞ norm ||H(s)||∞. The system is then said to have a flat spectrum. This in contrast to low-pass or high-pass systems, in which the function σ̄(ω) vanishes (or is attenuated) at high frequencies and low frequencies, respectively.

It is this function, σ̄(H(jω)), that is extensively manipulated in H∞ control system design to meet desired performance objectives. These manipulations are carried out by choosing appropriate weights on the signals entering and leaving a control configuration like, for example, the one of Figure 6.1. The specification of these weights is of crucial importance for the overall control design and is one of the few aspects in H∞ robust control design that can not be automated. The choice of appropriate weighting filters is a typical `engineering skill' which is based on a few simple mathematical observations and a good insight in the performance specifications one wishes to achieve.

All our H∞ control designs will be formulated in such a way that

    an optimal controller will be designed so as to minimize the H∞ norm of a multivariable closed-loop transfer function.

Once a control problem has been specified as an optimization problem in which the H∞ norm of a (multivariable) transfer function needs to be minimized, the actual computation of an H∞ optimal controller which achieves this minimum is surprisingly easy, fast and reliable. The algorithms for this computation of H∞ optimal controllers will be the subject of Chapter 8. The most time consuming part of obtaining a well performing control system using H∞ optimal control methods is the concise formulation of an H∞ optimization problem. This formulation is required to include all our a-priori knowledge concerning signals of interest, all the (sometimes conflicting) performance specifications, stability requirements and, definitely not least, robustness considerations with respect to parameter variations and model uncertainty.

6.1.2 Singular value loop shaping

Consider the multivariable feedback control system of Figure 6.1. As mentioned before, the multivariable stability margins and performance specifications can be quantified by considering the frequency dependent singular values of the various closed-loop systems which we can distinguish in Figure 6.1.

[Figure 6.1: Multivariable feedback configuration; the reference r and the fed-back measurement produce the error e, the controller C(s) generates the control input u, the plant P(s) produces the output y which is disturbed by d, and the measurement noise η enters the feedback path.]

In this configuration we distinguish various `closed-loop' transfer functions:

- The sensitivity

      S = (I + PC)^{-1}

  which maps the reference signal r to the (real) tracking error r − y (which differs from e in Fig. 6.1 because of the measurement noise η!) and the disturbance d to y.

- The complementary sensitivity

      T = PC(I + PC)^{-1} = I − S

  which maps the reference signal r to the output y and the sensor noise η to y.

- The control sensitivity

      R = C(I + PC)^{-1}

  which maps the reference signal r, the disturbance d and the measurement noise η to the control input u.

The maximal singular values of each of these transfer functions S, T and R play an important role in robust control design for multivariable systems. As is seen from the definitions of these transfers, the singular values of the sensitivity S (viewed as a function of frequency ω ∈ R) determine both the tracking performance as well as the disturbance attenuation quality of the closed-loop system. Similarly, the singular values of the complementary sensitivity model the amplification (or attenuation) of the sensor noise η to the closed-loop output y for each frequency, whereas the singular values of the control sensitivity give insight for which frequencies the reference signal has maximal (or minimal) effect on the control input u.

Let us consider a simplified version of an H∞ design problem. Suppose that a plant P is given and suppose that we are interested in minimizing the H∞ norm of the sensitivity
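The three closed-loop transfers S, T and R can be evaluated numerically on a frequency grid. A minimal SISO sketch (the plant P(s) = 1/(s(s+1)) and the proportional controller C = 5 are my own illustrative choices, not from the text); for scalar systems the maximal singular value reduces to the modulus:

```python
# SISO sketch of the sensitivity S, complementary sensitivity T and
# control sensitivity R; plant and controller are arbitrary examples.
import numpy as np

P = lambda s: 1.0 / (s * (s + 1.0))   # example plant
C = lambda s: 5.0 + 0.0 * s           # example proportional controller

w = np.logspace(-2, 2, 400)           # frequency grid [rad/s]
s = 1j * w
L = P(s) * C(s)                       # loop gain PC
S = 1.0 / (1.0 + L)                   # sensitivity (I + PC)^-1
T = L / (1.0 + L)                     # complementary sensitivity
R = C(s) / (1.0 + L)                  # control sensitivity

# For SISO systems sigma_max(.) is just |.|, so the H-infinity norm is
# approximated by the maximum over the grid.
print(np.max(np.abs(S)), np.max(np.abs(T)), np.max(np.abs(R)))
assert np.allclose(S + T, 1.0)        # the identity T = I - S
```

The low damping of this loop makes both |S| and |T| peak above one, which is exactly the kind of behavior the weighting filters of this chapter are meant to shape.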


S = (I + PC)^{-1} over all controllers C that stabilize the plant P. The H∞ optimal control problem then amounts to determining a stabilizing controller C_opt such that

    min_{C stab.} ||S(s)||∞ = ||(I + P C_opt)^{-1}||∞.

Such a controller then deserves to be called H∞ optimal. However, it is by no means clear that there exists a controller which achieves this minimum. The `minimum' is therefore usually replaced by an `infimum' and we need in general to be satisfied with a stabilizing controller C_opt such that

    γ_opt := inf_{C stab.} ||S(s)||∞               (6.1)
    γ_opt ≤ ||(I + P C_opt)^{-1}||∞                (6.2)
          ≤ γ                                      (6.3)

where γ is a prespecified number which we like to (and are able to) choose as close as possible to the optimal value γ_opt. For obvious reasons, C_opt is called a suboptimal H∞ controller, and this controller may clearly depend on the specified value of γ.

Suppose that a controller achieves that ||S(s)||∞ ≤ γ. It then follows that for all frequencies ω ∈ R

    σ̄(S(jω)) ≤ ||S(s)||∞ ≤ γ.                     (6.4)

Thus γ is an upperbound of the maximum singular value of the sensitivity at each frequency ω ∈ R. Conclude from (6.4) and the general properties of singular values that the tracking error r − y (interpreted as a frequency signal) then satisfies

    ||r̂(ω) − ŷ(ω)|| ≤ σ̄(S(jω)) ||r̂(ω)|| ≤ γ ||r̂(ω)||.     (6.5)

In this design, no frequency dependent a-priori information concerning the reference signal r, nor frequency dependent performance specifications concerning the tracking error r − y, has been incorporated: the inequalities (6.5) hold for all frequencies.

The effect of input weightings

Suppose that the reference signal r is known to have a bandwidth [0, ωr]. Then inequality (6.5) is only interesting for frequencies ω ∈ [0, ωr], as frequencies larger than ωr are not likely to occur. However, the controller was designed to achieve (6.4) for all ω ∈ R and did not take bandwidth specifications of the reference signal into account. If we define a stable transfer function V(s) with ideal frequency response

    V(jω) = 1 if ω ∈ [−ωr, ωr],  0 otherwise

then the outputs of such a filter are band limited signals with bandwidth [0, ωr], i.e. for any r0 ∈ L2 the signal

    r(s) = V(s) r0(s)

has bandwidth [0, ωr] (see Figure 6.2).

[Figure 6.2: Ideal low pass filter; |V(jω)| equals 1 on [−ωr, ωr] and 0 elsewhere.]

Instead of minimizing the H∞ norm of the sensitivity S(s) we now consider minimizing the H∞ norm of the weighted sensitivity S(s)V(s). In Figure 6.3 we see that this amounts to including the transfer function V(s) in the diagram of Figure 6.1 and considering the `new' reference signal r0 as input instead of r.

[Figure 6.3: Application of an input weighting filter; the filter V(s) generates the reference r from the new external input r0 in the feedback configuration of Figure 6.1.]

Thus, instead of the criterion (6.4), we now look for a controller which achieves that

    ||S(s)V(s)||∞ ≤ γ

where γ ≥ 0. Observe that for the ideal low-pass filter V this implies that

    ||S(s)V(s)||∞ = max_ω σ̄(S(jω)V(jω)) = max_{|ω| ≤ ωr} σ̄(S(jω)) ≤ γ.     (6.6)

Thus, γ is now an upperbound of the maximum singular value of the sensitivity for frequencies ω belonging to the restricted interval [−ωr, ωr]! Conclude that with this ideal filter V

    the minimization of the H∞ norm of the weighted sensitivity corresponds to minimization of the maximal singular value σ̄(ω) of the sensitivity function for frequencies ω ∈ [−ωr, ωr].

The tracking error r − y now satisfies for all ω ∈ R the inequalities

    ||r(ω) − y(ω)|| = ||S(jω)V(jω) r0(ω)||
                    ≤ σ̄(S(jω)) ||V(jω) r0(ω)||
                    = σ̄(S(jω)) ||r(ω)||
                    ≤ γ |V^{-1}(jω)| ||r(ω)||        (6.7)

where r = V r0 is now a bandlimited reference signal, and

    |V^{-1}(jω)| = 1 / |V(jω)|
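The band-restriction effect of an ideal low-pass input weight can be illustrated numerically. In the sketch below (the SISO plant P(s) = 1/(s+1), gain C = 10 and band edge ωr = 1 are my own assumptions, not from the text) the ideal weight is simply an indicator function on the frequency grid, so the weighted supremum only `sees' the sensitivity inside the band:

```python
# Ideal low-pass input weight on a grid: the weighted sensitivity norm
# equals the maximum of |S| over [0, wr] only. Example data is arbitrary.
import numpy as np

P = lambda s: 1.0 / (s + 1.0)
C = lambda s: 10.0 + 0.0 * s
w = np.logspace(-2, 3, 2000)
S = 1.0 / (1.0 + P(1j * w) * C(1j * w))   # sensitivity on the grid

wr = 1.0
V = (w <= wr).astype(float)               # ideal low-pass weight, eq. (6.6)

gamma = np.max(np.abs(S * V))             # = max of |S| over the band only
print(gamma, np.max(np.abs(S)))           # the band maximum is far smaller
```

Since |S| of this loop rises towards 1 at high frequencies, the unweighted supremum is near 1 while the weighted one stays small: the weight has redirected the minimization effort to the band where the reference actually lives.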


which is to be interpreted as ∞ whenever V(jω) = 0. For those frequencies (|ω| > ωr in this example) the designed controller does not put a limit on the tracking error, for these frequencies did not appear in the reference signal r.

The last inequality in (6.7) is the most useful one and it follows from the more general observation that, whenever ||S(s)V(s)||∞ ≤ γ with V a square stable transfer function whose inverse V^{-1} is again stable, then for all ω ∈ R there holds

    σ̄[S(jω)] = σ̄[S(jω)V(jω)V^{-1}(jω)]
             ≤ σ̄[S(jω)V(jω)] σ̄[V^{-1}(jω)]
             ≤ γ σ̄[V^{-1}(jω)].

We thus come to the important conclusion that

    a controller C which achieves that the weighted sensitivity satisfies ||S(s)V(s)||∞ ≤ γ results in a closed loop system in which

        σ̄(S(jω)) ≤ γ σ̄[V^{-1}(jω)].               (6.8)

Remark 6.1 This conclusion holds for any stable weighting filter V(s) whose inverse V^{-1}(s) is again a stable transfer function. This is questionable for the ideal filter V we used here to illustrate the effect, because for ω > ωr the inverse filter V(jω)^{-1} can be qualified as unstable. In practice we will therefore choose filters which have a rational transfer function that is stable, minimum phase and biproper. An alternative first order filter for this example could thus have been e.g. V(s) = (s + 100 ωr) / (100 (s + ωr)).

Remark 6.2 It is a standard property of the singular value decomposition that, whenever V^{-1}(jω) exists,

    σ̄[V^{-1}(jω)] = 1 / σ_min[V(jω)]

where σ_min denotes the smallest singular value.

The effect of output weightings

In the previous subsection we considered the effect of applying a weighting filter to an input signal. Likewise, we can also define weighting filters on the output signals which occur in a closed-loop configuration as in Figure 6.1.

We consider again (as an example) the sensitivity S(s) viewed as a mapping from the reference input r to the tracking error r − y = e, when we fully disregard for the moment the measurement noise η. A straightforward H∞ design would minimize the H∞ norm of the sensitivity S(s) and result in the upperbound (6.5) for the tracking error. We could, however, be interested in minimizing the spectrum of the tracking error at specific frequencies only. Let us suppose that we are interested in the tracking error e at frequencies ω_l ≤ ω ≤ ω_u only, where ω_l > 0 and ω_u > 0 define a lower and upperbound. As in the previous subsection, we introduce a new signal

    e0(s) = W(s) e(s)

where W is a (stable) transfer function whose frequency response is ideally defined by the band pass filter

    W(jω) = 1 if ω_l ≤ |ω| ≤ ω_u,  0 otherwise

and depicted in Figure 6.4.

[Figure 6.4: Ideal band pass filter; |W(jω)| equals 1 on the band ω_l ≤ |ω| ≤ ω_u and 0 elsewhere.]

Instead of minimizing the H∞ norm of the sensitivity S(s) we consider minimizing the H∞ norm of the weighted sensitivity W(s)S(s). In Figure 6.5 it is shown that this amounts to including the transfer function W(s) in the diagram of Figure 6.1 (where we put η = 0) and considering the `new' output signal e0.

[Figure 6.5: Application of an output weighting filter; the filter W(s) maps the tracking error e to the weighted error e0 in the feedback configuration of Figure 6.1.]

A controller which achieves an upperbound γ on the weighted sensitivity,

    ||W(s)S(s)||∞ ≤ γ,

accomplishes, as in (6.6), that

    ||W(s)S(s)||∞ = max_ω σ̄(W(jω)S(jω)) = max_{ω_l ≤ |ω| ≤ ω_u} σ̄(S(jω)) ≤ γ     (6.9)

which provides an upperbound of the maximum singular value of the sensitivity for frequencies ω belonging to the restricted interval ω_l ≤ ω ≤ ω_u. The tracking error e satisfies again the inequalities (6.7), with V replaced by W, and it should not be surprising that the same conclusions concerning the upperbound of the spectrum of the sensitivity S hold. In particular, we find, similar to (6.8), that for all ω ∈ R there holds

    σ̄(S(jω)) ≤ γ σ̄[W^{-1}(jω)]                    (6.10)

provided the stable weighting filter W(s) has an inverse W^{-1}(s) which is again stable.
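The biproper first order filter of Remark 6.1 can be checked numerically; the band edge ωr = 2 below is an arbitrary choice of mine. The filter has unit gain at low frequencies, rolls off to 1/100 far above ωr, and its inverse is again stable and minimum phase (zero at −100 ωr, pole at −ωr for the filter itself):

```python
# The first order weight V(s) = (s + 100*wr) / (100*(s + wr)) from
# Remark 6.1, evaluated at the frequency extremes; wr is an example value.
import numpy as np

wr = 2.0
V = lambda s: (s + 100.0 * wr) / (100.0 * (s + wr))

w = np.array([0.0, 1e6])        # dc and a very high frequency
mag = np.abs(V(1j * w))
print(mag)                      # approximately [1.0, 0.01]
```

Because |V| never reaches zero, the upperbound γ σ̄[V^{-1}(jω)] in (6.8) stays finite at all frequencies, unlike with the ideal brick-wall filter.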


6.1.3 Implications for control design

In this section we will comment on how the foregoing can be used for design purposes. To this end, there are a few important observations to make.

For one thing, we showed in subsection 6.1.2 that by choosing the frequency response of an input weighting filter V(s) so as to `model' the frequency characteristic of the input signal r, the a-priori information on this reference signal has been incorporated in the controller design. By doing so, the minimization of the maximum singular value of the sensitivity S(s) has been refined (like in (6.6)) to the frequency interval of interest. Clearly, we can do this for any input signal.

Secondly, we obtained in (6.8) and in (6.10) frequency dependent upperbounds for the maximum gain of the sensitivity. Choosing V(jω) (or W(jω)) appropriately enables one to specify the frequency attenuation of the closed-loop transfer function (the sensitivity in this case). Indeed, choosing, for example, V(jω) a low pass transfer function implies that V^{-1}(jω) is a high pass upper-bound on the frequency spectrum of the closed-loop transfer function. Using (6.8) this implies that low frequencies of the sensitivity are attenuated and that the frequency characteristic of V has `shaped' the frequency characteristic of S. The same kind of `loop-shaping' can be achieved by choosing either input or output weightings.

Thirdly, by applying weighting factors to both input signals and output signals we can minimize (for example) the H∞ norm of the two-sided weighted sensitivity W(s)S(s)V(s), i.e., a controller could be designed so as to achieve that

    ||W(s)S(s)V(s)||∞ ≤ γ

for some γ > 0. Provided the transfer functions V(s) and W(s) have stable inverses, this leads to a frequency dependent upperbound for the original sensitivity. Precisely, in this case

    σ̄(S(jω)) ≤ γ σ̄[V^{-1}(jω)] σ̄[W^{-1}(jω)]     (6.11)

from which we see that the frequency characteristic of the sensitivity is shaped by both V as well as W. It is precisely this formula that provides you with a wealth of design possibilities! Once a performance requirement for a closed-loop transfer function (let's say the sensitivity S(s)) is specified in terms of its frequency characteristic, this characteristic needs to be `modeled' by the frequency response of the product V^{-1}(jω)W^{-1}(jω) by choosing the input and output filters V and W appropriately. A controller C(s) that bounds the H∞ norm of the weighted sensitivity W(s)S(s)V(s) then achieves the desired characteristic by equation (6.11).

The weighting filters V and W on input and output signals of a closed-loop transfer function therefore give the possibility to shape the spectrum of that specific closed-loop transfer. Once these filters are specified, a controller is computed to minimize the H∞ norm of the weighted transfer and results in a closed-loop transfer whose spectrum has been shaped according to (6.11).

In the example of a weighted sensitivity, the controller C is thus computed to establish that

    γ_filt := inf_{C stab.} ||W(s)S(s)V(s)||∞      (6.12)
    γ_filt ≤ ||W(s)(I + PC)^{-1}V(s)||∞            (6.13)
           ≤ γ                                     (6.14)

for some γ > 0 which is as close as possible to γ_filt (which depends on the plant P and the choice of the weighting filters V and W). How to find such a γ larger than or equal to the unknown and optimal γ_filt is the subject of Chapter 8, but what is important here is that the resulting sensitivity satisfies (6.11).

By incorporating weighting filters for each input and output signal which is of interest in the closed-loop control configuration, we arrive at extended configuration diagrams such as the one shown in Figure 6.6.

[Figure 6.6: Extended configuration diagram; input weighting filters Vr, Vd and Vη shape the external inputs r0, d0 and η0, while output weighting filters We, Wu and Wy produce the weighted outputs e0, u0 and y0 of the feedback configuration of Figure 6.1.]

General guidelines on how to determine input and output weightings can not be given, for each application requires its own performance specifications and a-priori information on signals. Although the choice of weighting filters influences the overall controller design, the choice of an appropriate filter is to a large extent subjective. As a general warning, however, one should try to keep the filters of as low a degree as possible. This, because the order of a controller C that achieves inequality (6.12) is, in general, equal to the sum of the order of the plant P and the orders of all input weightings V and output weightings W. The complexity of the resulting controller is therefore directly related to the complexity of the plant and the complexity of the chosen filters. High order filters lead to high order controllers, which may be undesirable.

More about appropriate weighting filters and their interactive effects on the final solution in a complicated scheme such as Fig. 6.6 follows in the next chapters.

6.2 Robust stabilization of uncertain systems

6.2.1 Introduction

The theory of H∞ control design is model based. By this we mean that the design of a controller for a system is based on a model of that system. In this course we will not address the question how such a model can be obtained, but any modeling procedure will, in practice, be inaccurate. Depending on our modeling efforts, we can in general expect a large or small discrepancy between the behavior of the (physical) system which we wish to control and the mathematical model we obtained. This discrepancy between the behavior of the physical plant and the mathematical model is responsible for the fact that a controller, designed optimally on the basis of the mathematical model, need not fulfill our optimality expectations once the controller is connected to the physical system. It is easy to give examples of systems in which arbitrarily small variations of
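Inequality (6.11) can be sanity checked on a frequency grid. The sketch below is my own SISO example (plant, controller and the two first order biproper weights are arbitrary choices, not from the text): taking γ as the grid maximum of |W S V|, the bound γ |V^{-1}| |W^{-1}| must dominate |S| at every grid point:

```python
# Pointwise check of (6.11): |S| <= gamma * |1/V| * |1/W| when
# gamma = max |W S V|. All transfer functions are arbitrary examples.
import numpy as np

w = np.logspace(-3, 3, 1000)
s = 1j * w

P = 1.0 / (s + 1.0)
C = 20.0 * (s + 1.0) / (s + 10.0)          # example controller
S = 1.0 / (1.0 + P * C)                    # sensitivity on the grid

V = (s + 100.0) / (100.0 * (s + 1.0))      # stable, biproper input weight
W = (s + 50.0) / (10.0 * (s + 5.0))        # stable, biproper output weight

gamma = np.max(np.abs(W * S * V))          # grid estimate of ||W S V||_inf
bound = gamma / (np.abs(V) * np.abs(W))    # gamma * |V^-1| * |W^-1|

print(np.all(np.abs(S) <= bound + 1e-12))  # True by construction
```

For scalar systems the inequality holds with equality at the frequency where |W S V| peaks, which shows how tightly the weights transfer the single number γ into a frequency dependent template for |S|.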


plant parameters, in a stable closed loop system configuration, fully destroy the stability properties of the system.

Robust stability refers to the ability of a closed loop stable system to remain stable in the presence of modeling errors. For this, one needs to have some insight in the accuracy of a mathematical model which represents the physical system we wish to control. There are many ways to do this:

- One can take a stochastic approach and attach a certain likelihood or probability to the elements of a class of models which are assumed to represent the unknown (often called `true') system.
- One can define a class of models, each of which is equally acceptable to model the unknown physical system.
- One can select one nominal model together with a description of its uncertainty in terms of its parameters, in terms of its frequency response, in terms of its impulse response, etc. In this case, the uncertain part of a process is modeled separately from the known (nominal) part.

For each of these possibilities a quantification of model uncertainty is necessary and essential for the design of controllers which are robust against those uncertainties.

In practice, the design of controllers is often based on various iterations of the loop

    data collection → modeling → controller design → validation

in which improvement of the performance of the previous iteration is the main aim.

In this chapter we analyze robust stability of a control system. We introduce various ways to represent model uncertainty and we will study to what extent these uncertainty descriptions can be taken into account to design robustly stabilizing controllers.

6.2.2 Modeling model errors

It may sound somewhat paradoxical to model dynamics of a system which one deliberately decided not to take into account in the modeling phase. Our purpose here will only be to provide upperbounds on modeling errors. Various approaches are possible:

- Model errors can be quantified in the time domain. Typical examples include descriptions of variations of the physical parameters in a state space model.
- Alternatively, model errors can be quantified in the frequency domain by analyzing perturbations of transfer functions or frequency responses.

We will basically concentrate on the latter in this chapter. For frequency domain model uncertainty descriptions one usually distinguishes two approaches, which lead to different research directions:

- Unstructured uncertainty: model uncertainty is expressed only in terms of upperbounds on errors of frequency responses. No further information on the origin of the modeling errors is used.
- Structured uncertainty: apart from an upperbound on the modeling errors, also the specific structure in the uncertainty of parameters is taken into account.

For the analysis of unstructured model uncertainty in the frequency domain there are four main uncertainty models, which we briefly review.

Additive uncertainty

The simplest way to represent the discrepancy between the model and the true system is by taking the difference of their respective transfer functions. That is,

    Pt = P + ΔP                                    (6.15)

where P is the nominal model, Pt is the true or perturbed model and ΔP is the additive perturbation. In order to comply with notations in earlier chapters, and to stress the relation with the input multiplicative perturbation description below, we use the notation ΔP as one mathematical object to display the perturbation of the nominal plant P. Additive perturbations are pictorially represented as in Figure 6.7.

[Figure 6.7: Additive perturbations; the perturbation ΔP acts in parallel with the nominal plant P.]

Multiplicative uncertainty

Model errors may also be represented in the relative or multiplicative form. We distinguish the two cases

    Pt = (I + Δ)P = P + ΔP                         (6.16)
    Pt = P(I + Δ) = P + PΔ                         (6.17)

where P is the nominal model, Pt is the true or perturbed model and Δ is the relative perturbation. Equation (6.16) is used to represent output multiplicative uncertainty; equation (6.17) represents input multiplicative uncertainty. Input multiplicative uncertainty is well suited to represent inaccuracies of the actuator being incorporated in the transfer P. Analogously, output multiplicative uncertainty is a proper means to represent noise effects of the sensor. (However, the real output y should still be distinguishable from the measured output y + η.) The situations are depicted in Figure 6.8 and Figure 6.9, respectively. Note that for single input single output systems these two multiplicative uncertainty descriptions coincide. Note also that the products ΔP and PΔ in (6.16) and (6.17) can be interpreted as additive perturbations of P.

Remark 6.3 We also emphasize that, at least for single input single output systems, the multiplicative uncertainty description leaves the zeros of the perturbed system invariant. The popularity of the multiplicative model uncertainty description is for this reason difficult to understand, for it is well known that an accurate identification of the zeros of a dynamical system is a non-trivial and very hard problem in system identification.
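For scalar systems the additive and relative perturbations of (6.15) and (6.16) are easy to extract numerically from a nominal model and a `true' model. In the sketch below both models are arbitrary examples of mine (a gain and pole perturbation), chosen only to illustrate the two descriptions:

```python
# Additive versus relative (multiplicative) perturbation for a SISO pair
# of models; the nominal and 'true' plants are arbitrary examples.
import numpy as np

w = np.logspace(-2, 2, 500)
s = 1j * w

P  = 1.0 / (s + 1.0)           # nominal model
Pt = 1.2 / (s + 0.9)           # perturbed ('true') model

delta_add = Pt - P             # additive perturbation, as in (6.15)
delta_rel = (Pt - P) / P       # relative perturbation, as in (6.16)

# For scalar systems Pt = (1 + Delta) P, so the two descriptions agree:
print(np.allclose((1.0 + delta_rel) * P, Pt))   # True
print(np.max(np.abs(delta_add)), np.max(np.abs(delta_rel)))
```

The grid maxima of |Δ_add| and |Δ_rel| are crude estimates of the uncertainty bounds that the robust stabilization results of the next subsections will consume.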


[Figure 6.8: Output multiplicative uncertainty; the perturbation Δ acts at the output of the nominal plant P.]

[Figure 6.9: Input multiplicative uncertainty; the perturbation Δ acts at the input of the nominal plant P.]

Feedback multiplicative uncertainty

In a few applications one encounters feedback versions of the multiplicative model uncertainties. They are defined by

    Pt = (I + Δ)^{-1} P                            (6.18)
    Pt = P (I + Δ)^{-1}                            (6.19)

and referred to as the output feedback multiplicative model error and the input feedback multiplicative model error, respectively. We will hardly use these uncertainty representations in this course, and mention them only for completeness. The situation of an output feedback multiplicative model error is depicted in Figure 6.10. Note that the sign of the feedback addition from Δ is irrelevant, because the phase of Δ will not be taken into account when considering norms of Δ.

[Figure 6.10: Output feedback multiplicative uncertainty; the perturbation Δ in feedback around the output of the nominal plant P.]

Coprime factor uncertainty

Coprime factor perturbations have been introduced to cope with perturbations of unstable plants. Any (multivariable rational) transfer function P can be factorized as P = N D^{-1} in such a way that

- both N and D are stable transfer functions;
- D is square and N has the same dimensions as P;
- there exist stable transfer functions X and Y such that

      X N + Y D = I,

  which is known as the Bezout equation, Diophantine equation or even Aryabhatta's identity.

Such a factorization is called a (right) coprime factorization of P.

Remark 6.4 The terminology comes from number theory, where two integers n and d are called coprime if 1 is their greatest common divisor. It follows that n and d are coprime if and only if there exist integers x and y such that xn + yd = 1.

A right coprime factorization has the following interpretation. Suppose that a nominal plant P is factorized as P = N D^{-1}. Then the input output relation defined by P satisfies

    y = P u = N D^{-1} u = N v

where we defined v = D^{-1} u, or, equivalently, u = D v. Now note that, since N and D are stable, the stacked transfer function

    [N; D] : v ↦ [y; u]                            (6.20)

is stable as well. We have seen that such a transfer matrix maps L2 signals to L2 signals. We can thus interpret (6.20) as a way to generate all bounded energy input-output signals u and y which are compatible with the plant P. Indeed, any element v in L2 generates via (6.20) an input output pair (u, y) for which y = P u, and, conversely, any pair (u, y) ∈ L2 satisfying y = P u is generated by plugging v = D^{-1} u into (6.20).

Example 6.5 The scalar transfer function P(s) = (s − 1)(s + 2) / ((s − 3)(s + 4)) has a coprime factorization P(s) = N(s) D^{-1}(s) with

    N(s) = (s − 1)/(s + 4),    D(s) = (s − 3)/(s + 2).

Let P = N D^{-1} be a right coprime factorization of a nominal plant P. Coprime factor uncertainty refers to perturbations in the coprime factors N and D of P. We define a perturbed model

    Pt = (N + ΔN)(D + ΔD)^{-1}                     (6.21)

where the perturbation Δ, built from the pair (ΔN, ΔD), reflects the perturbation of the coprime factors N and D of P. The next Fig. 6.11 illustrates this right coprime uncertainty in a block scheme.
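Example 6.5 can be verified numerically by evaluating both sides at a few sample points in the complex plane (the sample points below are arbitrary choices of mine); note that N and D only have poles at s = −4 and s = −2, so both factors are indeed stable:

```python
# Numeric check of Example 6.5: N(s) D(s)^-1 reproduces
# P(s) = (s-1)(s+2) / ((s-3)(s+4)) at arbitrary sample points.
import numpy as np

P = lambda s: (s - 1.0) * (s + 2.0) / ((s - 3.0) * (s + 4.0))
N = lambda s: (s - 1.0) / (s + 4.0)
D = lambda s: (s - 3.0) / (s + 2.0)

pts = np.array([1j, 2.0 + 3.0j, -0.5 + 1.0j, 10.0])   # test points
print(np.allclose(N(pts) / D(pts), P(pts)))           # True
```

The factorization moved the unstable pole of P at s = 3 into a right half plane zero of the stable factor D, which is exactly the mechanism that lets coprime factor descriptions cope with unstable plants.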


76 CHAPTER 6. WEIGHTING FILTERS<br />

6.2. ROBUST STABILIZATION OF UNCERTAIN SYSTEMS 75<br />

- -<br />

D N<br />

6<br />

- y<br />

Pt(s)<br />

- C(s) - u<br />

r - h<br />

6<br />

D ;1 ?<br />

?<br />

y<br />

- - -<br />

N<br />

- -<br />

v<br />

u<br />

Figure 6.12: Feedback loop with uncertain system<br />

Figure 6.11: Right coprime uncertainty.<br />

Find a controller C for the feedback con guration of Figure 6.12 such that C<br />

1<br />

stabilizes the perturbed plant Pt for all k k1 with > 0 as small as<br />

possible (i.e., C makes the stability margin as large as possible).<br />

Remark 6.6 It should be emphasized that the coprime factors N and D of P are by<br />

no means unique! A plant P admits many coprime factorizations P = ND ;1 and it is<br />

therefore useful to introduce some kind of normalization of the coprime factors N and D.<br />

It is often required that the coprime factors should satisfy the normalization<br />

Such a controller is called robustly stabilizing or optimally robustly stabilizing for the<br />

perturbed systems Pt. Since this problem can be formulated for each of the uncertainty<br />

descriptions introduced in the previous section, we can de ne four types of stability margins<br />

D D + N N = I<br />

The additive stability margin is the H1 norm of the smallest stable P for which<br />

the con guration of Figure 6.12 with Pt de ned by (6.15) becomes unstable.<br />

This de nes the normalized right coprime factorization of P and it has the interpretation<br />

that the transfer de ned in (6.20) is all pass.<br />

The output multiplicative stability margin is the H1 norm of the smallest stable<br />

which destabilizes the system in Figure 6.12 with Pt de ned by (6.16).<br />

6.2.3 The robust stabilization problem<br />

For each of the above types of model uncertainty, the perturbation is a transfer function<br />

which we assume to belong to a class of transfer functions with an upperbound on their<br />

The input multiplicative stability margin is similarly de ned with respect to equation<br />

(6.17) and<br />

H1 norm. Thus, we assume that<br />

1<br />

The coprime factor stability margin is analogously de ned with respect to (6.21) and<br />

the particular coprime factorization of the plant P .<br />

k k1<br />

For a given nominal plant P this class of perturbations defines a class of perturbed plants

    { Pt | ‖Δ‖∞ < δ^{-1} }

where δ ≥ 0.¹ Large values of δ therefore allow for small upper bounds on the norm of Δ, whereas small values of δ allow for large deviations of Pt from P. Note that if δ → ∞ then the H∞ norm of Δ is required to be zero, in which case the perturbed models Pt coincide with the nominal model P.

Consider the feedback configuration of Figure 6.12 where the plant P has been replaced by the uncertain plant Pt. We will assume that the controller C stabilizes this system if Δ = 0, that is, we assume that the closed-loop system is asymptotically stable for the nominal plant P. An obvious question is how large ‖Δ‖∞ can become before the closed-loop system becomes unstable. The H∞ norm of the smallest (stable) perturbation Δ which destabilizes the closed-loop system of Figure 6.12 is called the stability margin of the system.

We can also turn this question into a control problem. The robust stabilization problem amounts to finding a controller C so that the stability margin of the closed-loop system is maximized. This problem is formalized and solved below for the various uncertainty structures.

The main results with respect to robust stabilization of dynamical systems follow in a straightforward way from the celebrated small gain theorem. If we consider in the configuration of Figure 6.12 output multiplicative perturbed plants Pt = (I + Δ)P, then we can replace the block indicated by Pt by the configuration of Figure 6.8 to obtain the system depicted in Figure 6.13.

Figure 6.13: Robust stabilization for multiplicative perturbations. [block diagram: the reference r enters a summing junction together with the fed-back output −y; the controller C(s) produces u; the perturbation Δ maps v to w, which is added at the output of the plant P(s)]

To study the stability properties of this system we can equivalently consider the system of Figure 6.14, in which M is the system obtained from Figure 6.13 by setting r = 0 and 'pulling out' the uncertainty block Δ; the transfer M depends on the particular model uncertainty structure.

¹ The reason for taking the inverse δ^{-1} as an upper bound rather than δ will turn out to be useful later.
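The stability margin of a given loop can already be estimated numerically at this point by gridding the frequency axis. The sketch below is a hedged illustration in Python rather than the MATLAB used elsewhere in these notes; the plant P(s) = 1/(s+1) and the constant controller C = 2 are arbitrary choices, not examples from the text. It approximates the peak gain of the transfer seen by an additive perturbation and reports its inverse as the margin (the closed-form characterization follows in the next subsection).

```python
def P(s):
    """Nominal plant 1/(s+1) -- an illustrative choice, not from the notes."""
    return 1.0 / (s + 1.0)

def C(s):
    """Constant controller gain, also an illustrative choice."""
    return 2.0

def R(w):
    """Control sensitivity C(I+PC)^{-1}: the transfer an additive
    perturbation 'sees' in the loop, evaluated at s = jw."""
    s = 1j * w
    return C(s) / (1.0 + P(s) * C(s))

# grid the frequency axis and take the peak gain as an estimate of the
# H-infinity norm; the additive stability margin is its inverse
grid = [10 ** (k / 100.0) for k in range(-300, 301)]  # 1e-3 .. 1e3 rad/s
peak = max(abs(R(w)) for w in grid)
margin = 1.0 / peak
print(f"||R||_inf ~ {peak:.4f}, additive margin ~ {margin:.4f}")
```

For this loop R(jω) = 2(jω+1)/(jω+3), whose gain increases monotonically towards 2, so the estimated margin approaches 1/2.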


78 CHAPTER 6. WEIGHTING FILTERS<br />

6.2. ROBUST STABILIZATION OF UNCERTAIN SYSTEMS 77<br />

Figure 6.14: Small gain configuration. [block diagram: the blocks M and Δ connected in a feedback loop, w being the output of Δ and v = Mw its input]

For the case of output multiplicative perturbations M maps the signal w to v, and the corresponding transfer function is easily seen to be

    M = T = PC(I + PC)^{-1},

i.e., M is precisely the complementary sensitivity transfer function.² Since we assumed that the controller C stabilizes the nominal plant P, it follows that M is a stable transfer function, independent of the perturbation Δ but dependent on the choice of the controller C.

The stability properties of the configuration of Figure 6.13 are determined by the small gain theorem (Zames, 1966):

Theorem 6.7 (Small gain theorem) Suppose that the systems M and Δ are both stable. Then the autonomous system determined by the feedback interconnection of Figure 6.14 is asymptotically stable if

    ‖MΔ‖∞ < 1.

For a given controller C the small gain theorem therefore guarantees the stability of the closed-loop system of Figure 6.14 (and thus also of the system of Figure 6.13) provided Δ is stable and satisfies, for all ω ∈ R,

    σ̄(M(jω)Δ(jω)) < 1.

For SISO systems this translates into a condition on the absolute values of the frequency responses of M and Δ. Precisely, for all ω ∈ R we should have that

    σ̄(MΔ) = |MΔ| = |M||Δ| = σ̄(M)σ̄(Δ) < 1

(where we omitted the argument jω in each transfer) to guarantee the stability of the closed-loop system. For MIMO systems we obtain, by using the singular value decompositions M = Y_M Σ_M U_M^* and Δ = Y_Δ Σ_Δ U_Δ^*, that for all ω ∈ R

    σ̄(MΔ) = σ̄(Y_M Σ_M U_M^* Y_Δ Σ_Δ U_Δ^*) ≤ σ̄(M) σ̄(Δ)

(where again every transfer is supposed to be evaluated at jω), and the maximum is reached for Y_Δ = U_M, which can always be accomplished without affecting the constraint ‖Δ‖∞ < δ^{-1}. Hence, to obtain robust stability we have to guarantee, for both SISO and MIMO systems, that

    σ̄[M(jω)] σ̄[Δ(jω)] < 1

for all ω ∈ R. Stated otherwise,

    σ̄[Δ(jω)] < 1 / σ̄[M(jω)]                                        (6.22)

for all ω ∈ R.

6.2.4 Robust stabilization: main results

Robust stabilization under additive perturbations

Carrying out the above analysis for the case of additive perturbations leads to the following main result on robust stabilization in the presence of additive uncertainty.

Theorem 6.8 (Robust stabilization with additive uncertainty) A controller C stabilizes Pt = P + ΔP for all ‖ΔP‖∞ < δ^{-1} if and only if

- C stabilizes the nominal plant P;
- ‖C(I + PC)^{-1}‖∞ ≤ δ.

Remark 6.9 Note that the transfer function R = C(I + PC)^{-1} is the control sensitivity of the closed-loop system. The control sensitivity of a closed-loop system therefore reflects the robustness properties of that system under additive perturbations of the plant!

The interpretation of this result is as follows:

- The smaller the norm of the control sensitivity, the greater will be the norm of the smallest destabilizing additive perturbation. The additive stability margin of the closed-loop system is therefore precisely the inverse of the H∞ norm of the control sensitivity:

    1 / ‖C(I + PC)^{-1}‖∞.

- If we like to maximize the additive stability margin for the closed-loop system, then we need to minimize the H∞ norm of the control sensitivity R(s)!

Theorem 6.8 can be refined by considering for each frequency ω ∈ R the maximal allowable perturbation ΔP which makes the system of Figure 6.12 unstable. If we assume that C stabilizes the nominal plant P, then the small gain theorem and (6.22) yield that for all additive stable perturbations ΔP for which

    σ̄[ΔP(jω)] < 1 / σ̄[R(jω)]

the closed-loop system is stable. Furthermore, there exists a perturbation ΔP right on the boundary (and certainly beyond), with

    σ̄[ΔP(jω)] = 1 / σ̄[R(jω)],

which destabilizes the system of Figure 6.12.

² As in chapter 3, actually M = −T, but the sign is irrelevant as it can be incorporated in Δ.
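The SISO/MIMO comparison above can be checked directly with NumPy (an illustrative sketch; the notes themselves use MATLAB, and the matrices below are arbitrary). It verifies the submultiplicative bound σ̄(MΔ) ≤ σ̄(M)σ̄(Δ) for a random Δ, and shows that aligning Δ's output direction with the input direction of M's largest singular value makes the bound tight, as in the argument leading to (6.22).

```python
import numpy as np

rng = np.random.default_rng(0)

def smax(X):
    """Largest singular value: the gain used in the small gain condition."""
    return np.linalg.svd(X, compute_uv=False)[0]

# a fixed complex "loop" matrix M(jw) at one frequency, and a random perturbation
M = np.array([[1.0 + 0.5j, 0.2], [0.1j, 0.8]])
Delta = 0.4 * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))

lhs = smax(M @ Delta)        # gain of the cascade
rhs = smax(M) * smax(Delta)  # product of the individual gains

# worst-case alignment: Delta_a = 0.3 * v1 e1^T, with v1 the input direction
# of M's largest singular value; then smax(M Delta_a) = 0.3 * smax(M) exactly
U, S, Vh = np.linalg.svd(M)
Delta_a = 0.3 * np.outer(Vh[0].conj(), np.array([1.0, 0.0]))
tight = smax(M @ Delta_a)
print(lhs <= rhs + 1e-12, abs(tight - 0.3 * S[0]) < 1e-9)
```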



Robust stabilization under multiplicative perturbations

For multiplicative perturbations, the main result on robust stabilization also follows as a direct consequence of the small gain theorem, and reads as follows for the class of output multiplicative perturbations.

Theorem 6.10 (Robust stabilization with multiplicative uncertainty) A controller C stabilizes Pt = (I + Δ)P for all ‖Δ‖∞ < δ^{-1} if and only if

- C stabilizes the nominal plant P;
- ‖PC(I + PC)^{-1}‖∞ ≤ δ.

Remark 6.11 We recognize the transfer function T = PC(I + PC)^{-1} = I − S to be the complementary sensitivity of the closed-loop system. The complementary sensitivity of a closed-loop system therefore reflects the robustness properties of that system under multiplicative perturbations of the plant!

The interpretation of this result is similar to that of the foregoing robustness theorem:

- The smaller the norm of the complementary sensitivity T(s), the greater will be the norm of the smallest destabilizing output multiplicative perturbation. The output multiplicative stability margin of the closed-loop system is therefore the inverse of the H∞ norm of the complementary sensitivity:

    1 / ‖PC(I + PC)^{-1}‖∞.

- By minimizing the H∞ norm of the complementary sensitivity T(s) we achieve a closed-loop system which is maximally robust against output multiplicative perturbations.

Theorem 6.10 can also be refined by considering for each frequency ω ∈ R the maximal allowable perturbation Δ which makes the system of Figure 6.12 unstable. If we assume that C stabilizes the nominal plant P, then all stable output multiplicative perturbations Δ for which

    σ̄[Δ(jω)] < 1 / σ̄[T(jω)]

leave the closed-loop system stable. Moreover, there exists a perturbation Δ right on the boundary, with

    σ̄[Δ(jω)] = 1 / σ̄[T(jω)],

which destabilizes the system of Figure 6.12.

Robust stabilization under feedback multiplicative perturbations

For feedback multiplicative perturbations, the main results are as follows.

Theorem 6.12 (Robust stabilization with feedback multiplicative uncertainty) A controller C stabilizes Pt = (I + Δ)^{-1}P for all ‖Δ‖∞ < δ^{-1} if and only if

- C stabilizes the nominal plant P;
- ‖(I + PC)^{-1}‖∞ ≤ δ.

Remark 6.13 We recognize the transfer function S = (I + PC)^{-1} = I − T to be the sensitivity of the closed-loop system.

The interpretation of this result is similar to that of the foregoing robustness theorems and not included here.

6.2.5 Robust stabilization in practice

The robust stabilization theorems of the previous section can be used in various ways.

- If there is no a-priori information on model uncertainty, then the frequency responses of the control sensitivity (σ̄[R(jω)]), the complementary sensitivity (σ̄[T(jω)]) and the sensitivity (σ̄[S(jω)]) provide precise information about the maximal allowable perturbations σ̄[Δ(jω)] for which the controlled system remains asymptotically stable under (respectively) additive, multiplicative and feedback multiplicative perturbations of the plant P. Graphically, we can get insight into the magnitude of these admissible perturbations by plotting the curves

    ρ_add(ω) = 1 / σ̄[R(jω)],   ρ_mult(ω) = 1 / σ̄[T(jω)],   ρ_feed(ω) = 1 / σ̄[S(jω)]

  for all frequencies ω ∈ R (which corresponds to 'mirroring' the frequency responses of σ̄[R(jω)], σ̄[T(jω)] and σ̄[S(jω)] around the 0 dB axis). The curves ρ_add(ω), ρ_mult(ω) and ρ_feed(ω) then provide an upper bound on the allowable additive, multiplicative and feedback multiplicative perturbations per frequency ω ∈ R.

- If, on the other hand, the information about the maximal allowable uncertainty of the plant P has been specified in terms of one or more of the curves ρ_add(ω), ρ_mult(ω) or ρ_feed(ω), then we can use these specifications to shape the frequency response of either R(jω), T(jω) or S(jω) using the filtering techniques described in the previous chapter. Specifically, let us suppose that a nominal plant P is available together with information on the maximal multiplicative model error ρ_mult(ω) for ω ∈ R. We can then interpret ρ_mult as the frequency response of a weighting filter with transfer function V(s), i.e. |V(jω)| = ρ_mult(ω). The set of all allowable multiplicative perturbations of the nominal plant P is then given by ΔV, where V is the chosen weighting filter with frequency response ρ_mult and where Δ is any stable transfer function with ‖Δ‖∞ < 1. Pulling out the transfer matrix Δ from the closed-loop configuration (as in the previous section) now yields a slight modification of the formulas in Theorem 6.10: a controller C achieves robust stability against this class of perturbations if and only if it stabilizes P (of course) and

    ‖PC(I + PC)^{-1} V‖∞ = ‖TV‖∞ ≤ 1.
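For a concrete loop the three bounds of this section can be tabulated per frequency. The sketch below uses Python instead of the MATLAB `sigma` routine referred to in the notes, and the plant and controller are arbitrary illustrative choices; for a SISO loop the largest singular value of each transfer is simply its modulus.

```python
import math

def bounds(P, C, w):
    """Per-frequency perturbation bounds of section 6.2.5 (SISO case)."""
    s = 1j * w
    L = P(s) * C(s)
    R = C(s) / (1 + L)      # control sensitivity
    T = L / (1 + L)         # complementary sensitivity
    S = 1 / (1 + L)         # sensitivity
    return 1 / abs(R), 1 / abs(T), 1 / abs(S)  # rho_add, rho_mult, rho_feed

# illustrative loop (not from the notes): double integrator-like plant, PD controller
P = lambda s: 1.0 / (s * (s + 1.0))
C = lambda s: 2.0 + 1.0 * s

for w in (0.1, 1.0, 10.0):
    ra, rm, rf = bounds(P, C, w)
    print(f"w={w}: rho_add={ra:.3f} rho_mult={rm:.3f} rho_feed={rf:.3f}")
```

The smaller a curve becomes at some frequency, the smaller the perturbation of the corresponding type that may destabilize the loop there.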



The expression ‖TV‖∞ ≤ 1 is a constraint on the H∞ norm of the weighted complementary sensitivity! We therefore need to consider the H∞ optimal control design problem so as to bound the H∞ norm of the weighted complementary sensitivity TV by one. The next goal is thus to synthesize controllers which accomplish this upper bound; this problem will be discussed in forthcoming chapters. In a more general and maybe more familiar setting, we can quantify our knowledge concerning the additive model error by means of pre- and post-filters W and V, as schematised in Fig. 6.15. Clearly, in this case the additive model error is ΔP = V Δ W. If Δ satisfies the norm constraint

    ‖Δ‖∞ < 1,

then for every frequency ω ∈ R we have that

    σ̄(ΔP(jω)) ≤ σ̄(W(jω)) σ̄(V(jω)).

Consequently, 'pulling out' the transfer Δ from the closed loop yields that M = WRV. To fulfil the small gain constraint, the control sensitivity R then needs to satisfy

    ‖WRV‖∞ ≤ 1.

Figure 6.15: Filtered additive perturbation. [block diagram: the additive perturbation ΔP realised as the cascade W → Δ → V, placed in parallel with the plant P]

6.2.6 Exercises

1. Derive a robust stabilization theorem in the spirit of Theorem 6.10 for

   (a) the class of input multiplicative perturbations;
   (b) the class of input feedback multiplicative perturbations.

2. Consider a 2×2 system described by the transfer matrix

       P(s) = 1/((s+1)(s+2)) · [ −47s+2   56s ;  −42s   50s+2 ]          (6.23)

   The controller for this system is a diagonal constant gain matrix given by

       C(s) = [ 1  0 ;  0  1 ]                                           (6.24)

   We consider the usual feedback configuration of plant and controller.

   (a) Determine the H∞ norm of P. At which frequency ω is the norm ‖P‖∞ attained?

       Hint: First compute a state space representation of P by means of the conversion algorithm tfm2ss ('transfer-matrix-to-state-space'). Read the help information carefully! (The denominator polynomial is the same as in Exercise 2; the numerator polynomials are represented in one matrix: the first row being [0 -47 2], the second [0 -42 0], etc.) Once you have a state space representation of P you can read its H∞ norm from a plot of the singular values of P. Use the routine sigma.

   (b) Use (6.24) as a controller for the plant P and plot the singular values of the closed-loop control sensitivity C(I + PC)^{-1} to investigate robust stability of this system. Determine the robust stability margin of the closed-loop system under additive perturbations of the plant.

       Hint: Use the MATLAB routine feedbk with the right 'type' option as in exercise 6.1 to construct a state space representation of the control sensitivity, and use sigma to read H∞ norms of multivariable systems.

   (c) Consider the perturbed controller

       C(s) = [ 1.13  0 ;  0  0.88 ]

       and compute the closed-loop poles of this system. Conclusion?

       Hint: Use again the procedure feedbk to obtain a state space representation of the closed-loop system. Recall that the closed-loop poles are the eigenvalues of the 'A' matrix in any minimal representation of the closed-loop system. See also the routine minreal.

3. Consider the linearized system of an unstable batch reactor described by the state space model

       ẋ = [  1.38    −0.2077   6.715   −5.676
             −0.5814  −4.29     0        0.675
              1.067    4.273   −6.654    5.893
              0.048    4.273    1.343   −2.104 ] x  +  [ 0      0
                                                         5.679  0
                                                         1.136 −3.146
                                                         1.136  0 ] u

       y = [ 1  0  1  −1 ;  0  1  0  0 ] x

   (a) Verify (using MATLAB!) that the input-output system defined by this model is unstable.

   (b) Consider the controller with transfer function

       C(s) = [ 0  −2 ;  8  0 ] + (1/s) · [ 0  −2 ;  5  0 ]

       Using MATLAB, interconnect the controller with the given plant and show that the corresponding closed-loop system is stable.

   (c) Make a plot of the singular values (as a function of frequency) of the complementary sensitivity PC(I + PC)^{-1} of the closed-loop system.

   (d) What are your conclusions concerning robust stability of the closed-loop system?
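The first part of exercise 2 can also be sketched without the Robust Control Toolbox: instead of tfm2ss and sigma, one can evaluate the frequency response of the plant reconstructed in (6.23) directly and grid its largest singular value. A hedged NumPy illustration, not the intended MATLAB solution path:

```python
import numpy as np

def P(w):
    """Frequency response of the 2x2 plant of equation (6.23) at s = jw."""
    s = 1j * w
    den = (s + 1.0) * (s + 2.0)
    return np.array([[-47 * s + 2, 56 * s],
                     [-42 * s,     50 * s + 2]]) / den

# grid the frequency axis and record the largest singular value
grid = np.logspace(-2, 2, 2000)
sv = [np.linalg.svd(P(w), compute_uv=False)[0] for w in grid]
k = int(np.argmax(sv))
hinf_est, w_peak = sv[k], grid[k]
print(f"||P||_inf ~ {hinf_est:.2f}, attained near w = {w_peak:.2f} rad/s")
```

At ω = 0 the plant reduces to the identity matrix, while off dc the large antidiagonal numerators push the largest singular value far above 1 — exactly the kind of multivariable gain that the exercise asks you to locate.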



Chapter 7

General problem.

Now that we have prepared all necessary ingredients in past chapters, we are ready to compose the general problem in such a structure, the so-called augmented plant, that the problem is well defined and therefore the solution straightforward to obtain. We will start with a formal exposition and definitions, and next illustrate these by examples.

7.1 Augmented plant.

The augmented plant contains, beyond the process model, all the filters for characterising the inputs and weighting the penalised outputs, as well as the model error lines. In Fig. 7.1 the augmented plant is schematised.

Figure 7.1: Augmented plant. [block diagram: the exogenous input w and the control input u enter G(s); the output to be controlled z and the measured output y leave G(s); the controller K(s) maps y to u]

In order not to confuse the inputs and outputs of the augmented plant with those of the internal blocks, we will indicate the former ones in bold face. All exogenous inputs are collected in w and are the L2-bounded signals entering the shaping filters that yield the actual input signals such as reference, disturbance, system perturbation signals, sensor noise and the kind. The output signals that have to be minimised in L2-norm and that result from the weighting filters are collected in z and refer to (weighted) tracking errors, actuator inputs, model error block inputs etc. The output y contains the actually measured signals that can be used as inputs for the controller block K. Its output u functions as the control input, applied to the augmented system with transfer function G(s). Consequently, in the s-domain we may write the augmented plant in the following, properly partitioned form:

    ( z )   ( G11  G12 ) ( w )
    ( y ) = ( G21  G22 ) ( u )                                           (7.1)

while:

    u = K y                                                              (7.2)

denotes the controller. Eliminating u and y yields:

    z = [G11 + G12 K (I − G22 K)^{-1} G21] w  =:  M(K) w                 (7.3)

An expression like (7.3) in K will be met very often and has got the name linear fractional transformation, abbreviated as LFT. Our combined control aim requires:

    min_{K stabilising}  sup_{w ∈ L2}  ‖z‖2 / ‖w‖2  =  min_{K stabilising} ‖M(K)‖∞      (7.4)

as the H∞-norm is the induced operator norm for functions mapping L2-signals to L2-signals, as explained in chapter 5. Of course we have to check whether stabilising controllers indeed exist. This can best be analysed when we consider a state space description of G in the following stylised form:

    G :  ( A    B1   B2
           C1   D11  D12
           C2   D21  D22 )  ∈  R^{(n+[z]+[y]) × (n+[w]+[u])}             (7.5)

where n is the dimension of the state space of G, while [·] indicates the dimension of the enclosed vector. It is evident that the unstable modes (i.e. canonical states) have to be reachable from u so as to guarantee the existence of stabilising controllers. This means that the pair {A, B2} needs to be stabilisable. The controller is only able to stabilise if it can conceive all information concerning the unstable modes, so that it is necessary to require that {A, C2} must be detectable. So, summarising:

There exist stabilising controllers K(s) iff the unstable modes of G are both controllable by u and observable from y, which is equivalent to requiring that {A, B2} is stabilisable and {A, C2} is detectable.

An illustrative example: sensitivity. Consider the structure of Fig. 7.2.

Figure 7.2: Mixed sensitivity structure. [block diagram: the reference r = 0 enters a summing junction; the tracking error e passes through the controller C and the plant P; the output disturbance n = Vn·ñ, shaped from the exogenous signal ñ, is added at the plant output y; Wy weights y into ỹ and Wx weights the plant input x into x̃]
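The lower LFT of equation (7.3) is easy to implement and to cross-check against direct elimination of u and y. The following NumPy sketch uses random constant matrices (an illustrative assumption; for transfer matrices the same computation is done frequency by frequency):

```python
import numpy as np

def lower_lft(G11, G12, G21, G22, K):
    """z = [G11 + G12 K (I - G22 K)^{-1} G21] w, cf. equation (7.3)."""
    I = np.eye(G22.shape[0])
    return G11 + G12 @ K @ np.linalg.solve(I - G22 @ K, G21)

rng = np.random.default_rng(1)
G11, G12 = rng.standard_normal((2, 3)), rng.standard_normal((2, 1))
G21, G22 = rng.standard_normal((2, 3)), 0.1 * rng.standard_normal((2, 1))
K = rng.standard_normal((1, 2))

M = lower_lft(G11, G12, G21, G22, K)

# cross-check by eliminating u and y directly: y = G21 w + G22 u, u = K y
w = rng.standard_normal(3)
y = np.linalg.solve(np.eye(2) - G22 @ K, G21 @ w)
z = G11 @ w + G12 @ (K @ y)
print(np.allclose(M @ w, z))
```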



The output disturbance n is characterised by the filter Vn from the exogenous signal ñ belonging to L2. For the moment we take the reference signal r equal to zero and forget about the model error, the filter Wx, the measurement noise etc., because we want to focus first on exclusively one performance measure. We would like to minimise ỹ, i.e. the disturbance in output y weighted with the filter Wy, so that equation (7.4) turns into:

    min_{C stabilising}  sup_{ñ ∈ L2}  ‖ỹ‖2 / ‖ñ‖2  =  min_{C stabilising} ‖Wy (I + PC)^{-1} Vn‖∞ =     (7.6)

                                                     =  min_{C stabilising} ‖Wy S Vn‖∞                   (7.7)

In the general setting of the augmented plant, the structure would be as displayed in Fig. 7.3.

Figure 7.3: Augmented plant for sensitivity alone. [block diagram: the augmented plant contains Vn, P and Wy; the controller block −C maps the measured output −e to the control input x]

The corresponding signals and transfers can be represented as:

    ( ỹ  )   ( Wy Vn   Wy P ) ( ñ )
    ( −e ) = ( Vn      P    ) ( x )                                      (7.8)

where the matrix is the augmented transfer G and where −C = K. It is a trivial exercise to substitute the entries Gij in equation (7.3), yielding the same M as in equation (7.6).

7.2 Combining control aims.

Along similar lines we could go through all kinds of separate and isolated criteria (as is done in the exercises!). However, we are not so much interested in single criteria but much more in conflicting and combined criteria. This is usually realised as follows. If we have several transfers, properly weighted, they can be taken as entries mij in a composed matrix M like:

    M = ( m11  m12  …
          m21   ⋱
           ⋮         ⋱ )                                                 (7.9)

It can be proved that:

    ‖mij‖∞ ≤ ‖M‖∞                                                        (7.10)

Consequently the condition:

    ‖M‖∞ < 1                                                             (7.11)

is sufficient to guarantee that:

    ∀ i, j : ‖mij‖∞ < 1                                                  (7.12)

So the ‖·‖∞ of the full matrix M bounds the ‖·‖∞ of the various entries. Certainly, it is not a necessary condition, as can be seen from the example:

    M = ( m1  m2 ) :  if ‖mi‖∞ ≤ 1 for i = 1, 2 then ‖M‖∞ ≤ √2           (7.13)

so that it is advisable to keep the composed matrix M as small as possible. The most trivial example is the so-called mixed sensitivity problem as represented in Fig. 7.2. The reference r is kept zero again, so that we have only one exogenous input, viz. ñ, and two outputs ỹ and x̃, which yield a two-block augmented system transfer to minimise:

    ‖M‖∞ = ‖ ( Wy S Vn
               Wx R Vn ) ‖∞                                              (7.14)

The corresponding augmented problem setting is given in Fig. 7.4 and described by the generalised transfer function G as follows:

    ( ỹ  )   ( Wy Vn   Wy P )
    ( x̃  ) = ( 0       Wx   ) ( ñ )  =  G ( ñ )
    ( −e )   ( Vn      P    ) ( x )       ( x )                          (7.15)

By proper choice of Vn the disturbance can be characterised, and the filter Wx should guard the saturation range of the actuator in P. Consequently the lower term in M, viz. ‖Wx R Vn‖∞ ≤ 1, represents a constraint. From Fig. 7.2 we also learn that we can think of the additive weighted model error Δ₀ between x̃ and ñ. Consequently, if we end up with:

    ‖Wx R Vn‖∞ ≤ ‖M‖∞ < γ                                                (7.16)

then for γ ≤ 1 this constraint is certainly satisfied.
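Inequalities (7.10) and (7.13) can be verified numerically for constant matrices, which is exactly what M(jω) is at each fixed frequency. A small NumPy check with arbitrary illustrative numbers:

```python
import numpy as np

def sv(X):
    """Largest singular value, i.e. the matrix gain at one frequency."""
    return np.linalg.svd(X, compute_uv=False)[0]

# a composed criterion matrix M with entries m_ij, cf. (7.9)
M = np.array([[0.7, -0.3],
              [0.4,  0.6]])

# (7.10): each entry is bounded in magnitude by the norm of the full matrix
entry_max = np.abs(M).max()

# (7.13): stacking two entries of norm 1 can push the norm up to sqrt(2)
col = np.array([[1.0],
                [1.0]])
print(entry_max <= sv(M), sv(col))
```

The stacked column attains the √2 worst case, confirming that a small ‖M‖∞ is sufficient but not necessary for small individual entries.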


7.3 Mixed sensitivity problem.

3. Compute a stabilising controller C (see chapter 13) such that:

       ‖ ( W1 S V1
           W2 N V2 ) ‖∞ < γ                                              (7.21)

   where γ is as small as possible.

4. If γ > 1, decrease W1 and/or V1 in gain and/or frequency band in order to relax the performance aim, thereby giving more room to satisfy the robustness constraint. Go back to step 3.

5. If γ < 1, increase W1 and/or V1 in gain and/or frequency band in order to sharpen the performance aim, and go back to step 3.

[Figure 7.4: augmented plant for the mixed sensitivity problem — the block G(s) contains Vn, P, Wx and Wy, with exogenous input ñ, control input x and outputs ỹ, x̃ and −e]


7.4 A simple example.

Consider the tracking problem with criterion transfer M = SV = V/(1 + PC); since M is stable, the maximum modulus principle gives

    sup_ω |M(jω)| = sup_{s ∈ C+} |M(s)| = sup_{s ∈ C+} | V / (1 + PC) |          (7.24)

The peaks in C+ will occur for the extrema in S = (1 + PC)^{-1}, i.e. at the points b_i where P(b_i) is zero. These zeros put the bounds, and it can be proved that a controller can be found such that:

    ‖M‖∞ = max_i |V(b_i)|                                                (7.25)

If there exists only one right half plane zero b, we can optimise M by a stabilising controller C∞ in the ∞-norm, leading to the optimal transfer M∞. For comparison we can also optimise the 2-norm by a controller C2, analogously yielding M2. Do not try to solve this yourself; the solutions can be found in [11]. The ideal controllers are computed, which will turn out to be nonproper. In practice we can therefore only apply these controllers in a sufficiently broad band: for higher frequencies we have to attenuate the controller transfer by adding a sufficient number of poles to accomplish the so-called roll-off. For the ideal controllers the corresponding optimal closed-loop transfers are given by:

    M∞ = |V(b)|                                                          (7.26)

    M2 = V(b) · 2b / (s + b)                                             (7.27)

as displayed in the approximate Bode diagram of Fig. 7.6.

Figure 7.6: Bode plot of tracking solution M(K).

Notice that M∞ is an all-pass function. (From this alone we may conclude that the ideal controller must be nonproper.) It turns out that, if somewhere on the frequency axis there were a little hill for M whose top determines the ∞-norm, optimisation could still be continued to lower this peak, but at the cost of an increase of the bottom line, until the total transfer were flat again. This effect is known as the waterbed effect. We also note that this could never be the solution of the 2-norm problem, as the integration of the constant level |M∞| from ω = 0 till ω = ∞ would result in an infinitely large value. Therefore H2 accepts the extra costs at the low pass band in exchange for a large advantage after the corner frequency ω = b.

Nevertheless, the H2 solution has another advantage here, if we study the real goal: the sensitivity. To this end we have to define the shaping filter V that characterises the type of reference signals that we may expect for this particular tracking system. Suppose e.g. that the reference signals live in a low pass band till ω = a, so that we could choose the filter V as:

    V(s) = a / (s + a),   a > 0                                          (7.28)

Since S = M V^{-1}, the corresponding sensitivities can be displayed in a Bode diagram as in Fig. 7.7.

Figure 7.7: Bode plot of tracking solution S.

Unfortunately, S∞ approaches infinity for increasing ω, contrary to S2. Remember that we still study the solution for the ideal, nonproper controllers. Is this increasing sensitivity disastrous? Not in the ideal situation, where we did not expect any reference signal components at these high frequencies. However, in the face of stability robustness and actuator saturation this is bad behaviour, as we necessarily require that T is small and, because S + T = 1, inevitably:

    lim_{ω→∞} |S∞| = ∞   ⇒   lim_{ω→∞} |T∞| = lim_{ω→∞} |1 − S∞| = ∞     (7.29)

Consequently robustness and saturation requirements will certainly be violated. But it is no use complaining, as these requirements were not included in the criterion after all. Inclusion can indeed improve the solution in these respects but, like in the H2 solution, we then have to pay by a worse sensitivity in the low pass band. This is another waterbed effect.

7.5 The typical compromise

A typical weighting situation for the mixed sensitivity problem is displayed in Fig. 7.8. Suppose the constraint is on N = T. Usually W1V1 is low pass and W2V2 is high pass. Suppose also that, by readjusting the weights W1V1, we have indeed obtained:

    inf_{K stabilising} ‖M(K)‖∞ = 1                                      (7.30)

Then certainly:

    ‖W1 S V1‖∞ < 1  ⇒  ∀ω : |S(jω)| < |W1(jω)^{-1} V1(jω)^{-1}|         (7.31)

    ‖W2 T V2‖∞ < 1  ⇒  ∀ω : |T(jω)| < |W2(jω)^{-1} V2(jω)^{-1}|         (7.32)
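Returning to the simple tracking example of section 7.4, the two optimal transfers (7.26) and (7.27) can be compared numerically. The sketch below uses Python with illustrative values a = 1 and b = 2 for the filter corner and the right half plane zero (neither value is from the notes): |M2(jω)| starts at twice the all-pass level |M∞| but drops below it beyond ω = √3·b, the waterbed trade-off in numbers.

```python
import math

a, b = 1.0, 2.0                      # filter corner and RHP zero (examples)

def V(s):
    """Reference shaping filter of equation (7.28)."""
    return a / (s + a)

Minf = abs(V(b))                     # constant all-pass level, eq. (7.26)

def M2(s):
    """H2-optimal closed-loop transfer, eq. (7.27)."""
    return V(b) * 2.0 * b / (s + b)

for w in (0.0, b, math.sqrt(3.0) * b, 10.0 * b):
    print(f"w = {w:6.3f}:  |Minf| = {Minf:.4f}   |M2| = {abs(M2(1j * w)):.4f}")
```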



These bounds are exemplified in Fig. 7.8. Now it is crucial that the point of intersection of the curves ω → |W1(jω)V1(jω)| and ω → |W2(jω)V2(jω)| lies below the 0 dB level. Otherwise there would be a conflict with S + T = 1 and there would be no solution! Consequently, heavily weighted bands (> 0 dB) for S and T should always exclude each other. This is the basic effect that dictates how model uncertainty and actuator saturation, which put a constraint on T, ultimately bound the obtainable tracking and disturbance reduction band represented in the performance measure S.

Figure 7.8: Typical mixed sensitivity weights.

7.6 An aggregated example

Till so far only very simple situations have been analysed. If we deal with more complicated schemes, where also more control blocks can be distinguished, the main lines remain valid, but a higher appeal is made to one's creativity in combining control aims and constraints. Also the familiar transfers take more complicated forms. As a straightforward example, we just take the standard control scheme with only an extra feedforward block, as sketched in Fig. 7.9.

Figure 7.9: A two degree of freedom controller. [block diagram: the exogenous signal nr is shaped by Vr into the reference r; the feedforward block Cff and the feedback block Cfb together produce the control input u, weighted by Wu into ũ; the plant P0 is disturbed at its output by v = Vv·nv; the tracking error e is weighted by We into ẽ]

This so-called two degree of freedom controller offers more possibilities: tracking and disturbance reduction are now represented by different transfers, while before these were combined in the sensitivity. Note also that the additive uncertainty ΔP is combined with the disturbance characterisation filter Vv and the actuator weighting filter Wu, such that ΔP = Vv Δo Wu under the assumption:

    ∀ω ∈ R :  |Δo| ≤ 1  ⇒  |ΔP| ≤ |Vv Wu|                                (7.33)

In Fig. 7.10 the augmented plant/controller configuration is shown for the two degree of freedom controlled system. An augmented plant is generally governed by the following equations:

    ( z )   ( G11  G12 ) ( w )
    ( y ) = ( G21  G22 ) ( u )                                           (7.34)

    u = K y                                                              (7.36)

that take for this particular system the form:

    ( ẽ )   ( −We Vv   We Vr   −We Po ) ( nv )
    ( ũ ) = (  0        0       Wu    ) ( nr )
    ( y )   (  Vv       0       Po    ) ( u  )
    ( r )   (  0        Vr      0     )                                  (7.37)

    u = −( Cfb   Cff ) ( y )
                       ( r )                                             (7.38)

By properly choosing Vv and Wu we can obtain robustness against the model uncertainty and at the same time prevent actuator saturation and minimise the disturbance. Certainly, we then have to design the two filters Vv and Wu for the worst case bounds of the three control aims, and thus we likely have to exaggerate somewhere for each separate aim. Nevertheless, this is preferable to not combining them and instead adding more exogenous inputs and outputs. These extra inputs and outputs would increase the dimensions of the closed-loop transfer M and, the more entries M has, the more conservative the bounding of the subcriteria defined by these entries will be, because we only have:

    if ‖M‖∞ < γ then ∀ i, j : ‖mij‖∞ < γ                                 (7.39)

However, the bound for a particular subcriterion will mainly be effective if all other entries are zero. Inversely, if we would know beforehand that, say, ‖mij‖∞ < 1 for i ∈ {1, 2, …, ni}, j ∈ {1, 2, …, nj}, then the norm of the complete matrix ‖M‖∞ could still become √(ni nj). Ergo, it is advantageous to combine most control aims.

The closed-loop system is then optimised by minimising:



The respective, above transfer functions at the left and the right side of the inequality<br />

signs can then be plotted in Bode diagrams for comparison so that we can observe which<br />

constraints are the bottlenecks at which frequencies.<br />

- We<br />

-<br />

~e<br />

n<br />

6<br />

+<br />

-<br />

;<br />

AugmentedP lant<br />

6<br />

z<br />

- v<br />

-<br />

~u<br />

- -<br />

Vv<br />

Wu<br />

nv<br />

6<br />

w<br />

- r<br />

Vr - - Po<br />

- -<br />

6<br />

+<br />

y<br />

-<br />

r<br />

? -<br />

?<br />

+<br />

n<br />

nr<br />

y<br />

6<br />

u<br />

u<br />

<strong>Control</strong>ler<br />

?<br />

Cff<br />

? n<br />

+<br />

?<br />

6<br />

Cfb<br />

+<br />

Figure 7.10: Augmented plant/controller for two degree of freedom controller.<br />

(7.40)<br />

k1<br />

k M k1=k G11 + G12K(I ; G22K) ;1 G21 k1=k M11 M12<br />

M21 M22<br />

and in particular:<br />

1<br />

A (7.41)<br />

0<br />

@ ;We(I ; PoCfb) ;1Vv WefI ; (I ; PoCfb) ;1PoCffgVr M =<br />

Wu(I ; PoCfb) ;1 CffVr<br />

WuCfb(I ; PoCfb) ;1 Vv<br />

which canbeschematised as:<br />

performance<br />

1<br />

C<br />

A<br />

tracking : ~e<br />

nr<br />

sensitivity : ~e<br />

nv<br />

(7.42)<br />

0<br />

B<br />

@<br />

constraints<br />

stability robustness : ~u input saturation : nv<br />

~u nr<br />

Suppose that we can manage to obtain:<br />

k M k1< 1 (7.43)<br />

then it can be guaranteed that 8! 2R:<br />

jI ; (I ; PoCfb) ;1 PoCffj < jWeVrj<br />

j(I ; PoCfb) ;1 j < jWeVvj<br />

(7.44)<br />

1<br />

C<br />

A<br />

0<br />

B<br />

@<br />

j(I ; PoCfb) ;1 Cffj < jWuVrj<br />

jCfb(I ; PoCfb) ;1 j < jWuVvj
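The implication from (7.43) to the entrywise bounds (7.44) can be checked numerically on a frequency grid. A minimal SISO sketch; the plant, controller and constant filters below are assumed illustration numbers, not values from these notes:

```python
import numpy as np

# Hypothetical SISO illustration of (7.41)/(7.44); all numbers are assumed.
Po  = lambda s: 1.0 / (s + 1.0)      # nominal plant
Cfb, Cff = -2.0, 1.5                 # static feedback / feedforward
We, Wu, Vv, Vr = 0.4, 0.2, 0.5, 0.5  # constant weighting and shaping filters

ok = True
for w in np.logspace(-2, 2, 200):
    s = 1j * w
    S = 1.0 / (1.0 - Po(s) * Cfb)    # (I - Po*Cfb)^-1 in the SISO case
    M = np.array([[-We * S * Vv,      We * (1.0 - S * Po(s) * Cff) * Vr],
                  [Wu * Cfb * S * Vv, Wu * S * Cff * Vr]])
    if np.linalg.svd(M, compute_uv=False)[0] < 1.0:
        # the largest singular value bounds every entry, which is (7.44)
        assert np.all(np.abs(M) < 1.0)
    else:
        ok = False
print(ok)
```

For these numbers the norm stays below one everywhere, so all four bounds of (7.44) hold on the whole grid.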


7.7 Exercise

[Block scheme: reference r shaped by Vr enters the loop r → C → P → y with negative feedback; an exogenous input ξ, shaped by V, is added as disturbance n at the plant output; weighted outputs ~e = We·e, ~x = Wx·x, ~z = Wz·z, ~y = Wy·y and ~r.]

For the given block scheme we first consider SISO transfers from a certain input to a certain output. You are asked to compute the linear fractional transfer, to explain the use of the particular transfer, to name it (if possible) and finally to give the augmented plant in a block scheme and to express the matrix transfer G. Train yourself on the following transfers:

a) from ξ to ~y (see the example 'sensitivity' in the lecture notes)

b) from ~r to ~e

c) from ξ to ~z (two goals!)

d) from ξ to ~x (two goals!)

The same for the following MIMO transfers:

e) from ξ to ~y and ~z (three goals!)

We now split the previously combined inputs in ξ into two inputs ξ1 and ξ2 with respective shaping filters V1 and V2:

f) from ξ1 and ξ2 to ~y and ~z.

Also for the next scheme:

[Block scheme: reference r shaped by Vr; series controller C1 in the forward path and feedback controller C2 around plant P; weighted outputs ~x = Wx·x and ~e = We·e.]

g) from ~r to ~x and ~e.
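Answers to these training transfers can be checked numerically at a single frequency by closing the lower loop of the augmented plant, i.e. by evaluating G11 + G12 K (I − G22 K)⁻¹ G21. A minimal sketch; the plant and controller numbers are assumed, and K = −C encodes the negative feedback:

```python
import numpy as np

def lower_lft(G, K, n_u, n_y):
    # Partition G conformably (last n_u inputs are u, last n_y outputs are y)
    # and close the controller loop u = K y.
    G11, G12 = G[:-n_y, :-n_u], G[:-n_y, -n_u:]
    G21, G22 = G[-n_y:, :-n_u], G[-n_y:, -n_u:]
    return G11 + G12 @ K @ np.linalg.inv(np.eye(n_y) - G22 @ K) @ G21

# Sensitivity example at one frequency: exogenous input (filter V) added at
# the plant output, weighted by Wy; assumed static numbers.
P, C, V, Wy = 2.0, 3.0, 1.0, 1.0
G = np.array([[Wy * V, Wy * P],
              [V,      P     ]])
K = np.array([[-C]])             # u = -C y : negative feedback
M = lower_lft(G, K, n_u=1, n_y=1)
print(M[0, 0])                   # Wy*V/(1 + P*C) = 1/7
```

The closed loop reproduces Wy·S·V with S = 1/(1 + PC), which is the 'sensitivity' answer of part a).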


Chapter 8

Performance Robustness and μ-Analysis/Synthesis

8.1 Robust performance

It has been shown how to solve a multiple criteria problem where also stability robustness is involved. But it is not since chapter 3 that we have discussed performance robustness, and then only in rather abstract terms where a small S had to watch robustness for T and vice versa. It is time now to reconsider this issue, to quantify its importance and to combine it with the other goals. It will turn out that we have practically inadvertently incorporated this aspect already, as can be illustrated very easily with Fig. 8.1.

Figure 8.1: Performance robustness translated into stability robustness

The left block scheme shows the augmented plant where the lines linking the model error block Δ have been made explicit. When we incorporate the controller K, as shown in the right block scheme, the closed-loop system M(K) also contains these lines, named g and h. With the proper partitioning the total transfer can be written as:

    ( g )   ( M11 M12 ) ( h )
    ( z ) = ( M21 M22 ) ( w )                                               (8.1)

    h = Δ g                                                                 (8.2)

We suppose that a proper scaling of the various signals has taken place, such that each of the output signals has 2-norm less than or equal to one provided that each of the input components has 2-norm less than one. We can then make three remarks about the closed-loop matrix M(K):

Stability robustness. Because proper scaling was taken, stability robustness can be guaranteed according to:

    {‖Δ‖∞ ≤ 1} ∩ {‖M11(K)‖∞ < 1}                                            (8.3)

So the ∞-norm of M11 determines robust stability.

Nominal performance. Without model errors taken into account (i.e. Δ = 0 and thus h = 0), ‖z‖2 can be kept less than 1 provided that:

    ‖M22(K)‖∞ < 1                                                           (8.4)

So the ∞-norm of M22 determines nominal performance. This condition can be unambiguously translated into a stability condition, like for stability robustness, by introducing a fancy feedback over a fancy block Δp as:

    w = Δp z :  {‖Δp‖∞ ≤ 1} ∩ {‖M22(K)‖∞ < 1}                               (8.5)

There is now a complete symmetry and similarity in the two separate loops over Δ and Δp.

Robust performance. For robust performance we have to guarantee that z stays below 1 irrespective of the model errors. That is, in the face of a signal h unequal to zero with ‖h‖2 ≤ 1, we require ‖z‖2 < 1. If we now require that:

    ‖M(K)‖∞ < 1                                                             (8.6)

we have a sufficient condition to guarantee that the performance is robust.

proof: From equation 8.6 we have:

    ‖ (g ; z) ‖2 < ‖ (h ; w) ‖2                                             (8.7)

From ‖Δ‖∞ ≤ 1 we may state:

    ‖h‖2 ≤ ‖g‖2                                                             (8.8)

Combination with the first inequality yields:

    ‖ (g ; z) ‖2 < ‖ (g ; w) ‖2                                             (8.9)

so that indeed:

    ‖z‖2 < ‖w‖2 ≤ 1
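The three conditions (8.3), (8.4) and (8.6) can be checked numerically from frequency samples of a partitioned M(jω). A sketch on an assumed 2×2 example, not taken from the notes:

```python
import numpy as np

# Numeric sketch of (8.3), (8.4), (8.6) on an assumed 2x2 partitioned M(jw).
def hinf(blocks):
    # sup over the frequency samples of the largest singular value
    return max(np.linalg.svd(B, compute_uv=False)[0] for B in blocks)

Ms = [np.array([[0.3 / (1 + 1j * w), 0.2],
                [0.1,                0.5 / (1 + 1j * w)]])
      for w in np.logspace(-2, 2, 400)]

rob_stab = hinf([M[:1, :1] for M in Ms]) < 1.0  # ||M11|| -> robust stability
nom_perf = hinf([M[1:, 1:] for M in Ms]) < 1.0  # ||M22|| -> nominal performance
rob_perf = hinf(Ms) < 1.0                       # ||M||  -> robust performance
print(rob_stab, nom_perf, rob_perf)
```

Note that the full-matrix condition implies the two block conditions, in line with the sufficiency argument of the proof.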


Consequently, an equivalent condition for stability is:

    sup_ω σ̄(M Δ) < 1                                                        (8.17)

As we will show, this condition takes the already encountered form:

    for Δ unstructured :  {‖Δ‖∞ ≤ 1} ∩ {‖M‖∞ < 1}                           (8.18)

for the case that the block Δ has no special structure. Note that this is a condition solely on the matrix M.

proof: Condition (8.18) for the unstructured Δ can be explained as follows. The σ̄(M) indicates the "maximum amplification" by the mapping M. If M = W Σ V* represents the singular value decomposition of M, we can always choose Δ = V W*, because:

    σ̄(Δ) = σ̄(V W*) = √(λmax(V W* W V*)) = √(λmax(I)) = 1                    (8.19)

which is allowed. Consequently:

    M Δ = W Σ W* = W Σ W⁻¹  ⇒  σ̄(M) = σ̄(M Δ) = sup_Δ σ̄(M Δ)                 (8.20)

because the singular value decomposition happens here to be the eigenvalue decomposition as well. So from equations 8.17 and 8.20 robust stability is a fact if we have for each frequency:

    {∀Δ : σ̄(Δ) ≤ 1} ∩ {σ̄(M) < 1}                                            (8.21)

If we apply this for each ω, we end up in condition (8.18). end proof.

However, if Δ ∈ 𝚫 has the special diagonal structure, then we cannot (generally) choose Δ = V W*. In other words, in such a case the system would not be robustly stable for unstructured Δ but could still be robustly stable for structured Δ. So it no longer holds that sup_Δ σ̄(M Δ) = σ̄(M). But in analogy we define:

    μ(M) := sup_{Δ ∈ 𝚫} ρ(M Δ)                                              (8.22)

with ρ(·) the spectral radius, and the equivalent stability condition for each frequency is:

    {∀ Δ ∈ 𝚫} ∩ {μ(M) < 1}                                                  (8.23)

In analogy we then have a similar condition on M for robust stability in the case of the structured Δ:

    for Δ structured :  {Δ ∈ 𝚫} ∩ {‖M‖μ < 1}                                (8.24)

where:

    ‖M‖μ := sup_ω μ(M(jω))                                                  (8.25)

represents a yet unknown measure. For obvious reasons, μ is also called the structured singular value. Because in general we can no longer choose Δ = V W*, it will also be clear that:

    μ(M) ≤ σ̄(M)                                                             (8.26)

This μ-value is certainly less than or equal to the maximum singular value of M, because it incorporates the knowledge about the diagonal structure and should thus display less conservatism. The father of μ is John Doyle, and the symbol μ has been generally accepted in the control community for this measure. Equation 8.24 suggests that we can find a norm "‖·‖μ" exclusively on the matrix M that can function in a condition for stability. First of all, the condition, and thus this μ-norm, cannot be independent of 𝚫, because the special structural parameters (i.e. ni and mi) should be used. Consequently this so-called μ-norm is implicitly taken for the special structure of 𝚫. Secondly, we can indeed connect a certain number to ‖M‖μ, but it is not a norm "pur sang". It has all the properties of a "distance" in the mathematical sense, but it lacks one property necessary to be a norm, namely: ‖M‖μ can be zero without M being zero itself (see the example later on). Consequently, "‖·‖μ" is called a seminorm.

Because all the above conditions and definitions may be somewhat confusing by now, some simple examples will be treated to illustrate the effects. We first consider some matrices M and Δ for a specific frequency ω, which is not explicitly defined. We depart from one Δ-matrix given by:

    Δ = ( δ1 0 ; 0 δ2 ),   σ̄(δ1) ≤ 1 (⇔ |δ1| ≤ 1),   σ̄(δ2) ≤ 1 (⇔ |δ2| ≤ 1)      (8.27)

Next we study three matrices M in relation to this Δ:

    M = ( 1/2 0 ; 0 1/2 )                                                   (8.28)

see Fig. 8.3. The loop transfer consists of two independent loops, as Fig. 8.3 reveals and as follows from:

    M Δ = ( δ1/2 0 ; 0 δ2/2 )                                               (8.29)

Obviously μ(M) = max |λ(M Δ)| = 1/2, which is less than one, so that robust stability is guaranteed. But in this case also σ̄(M) = 1/2, so that there is no difference between the structured and the unstructured case. Because all matrices are diagonal, we are just dealing with two independent loops.
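The worst case over the structured set can be verified by brute force for this diagonal example; a real-valued grid for δ1, δ2 is sufficient here:

```python
import numpy as np

# Worst case for the diagonal example (8.28)-(8.29): sweep real d1, d2 in
# [-1, 1] and record the largest spectral radius of M*Delta.
M = np.array([[0.5, 0.0], [0.0, 0.5]])
grid = np.linspace(-1.0, 1.0, 41)
worst = max(max(abs(np.linalg.eigvals(M @ np.diag([d1, d2]))))
            for d1 in grid for d2 in grid)
print(worst)   # 0.5 = mu(M): each independent loop has worst-case gain 1/2
```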


Figure 8.3: Two separate robustly stable loops

The equivalence still holds if we change M into:

    M = ( 2 0 ; 0 1 )                                                       (8.30)

Then one learns:

    M Δ = ( 2δ1 0 ; 0 δ2 )                                                  (8.31)

so that μ(M) = max |λ(M Δ)| = 2 > 1 and stability is not robust. But also σ̄(M) = 2 would have told us this; see Fig. 8.4.

Figure 8.4: Two not robustly stable loops

Things become completely different if we leave the diagonal matrices and study:

    M = ( 0 10 ; 0 0 )  ⇒  M Δ = ( 0 10δ2 ; 0 0 )                           (8.32)

Now we deal with an open connection, as Fig. 8.5 shows.

Figure 8.5: Robustly stable open loop.

It is clear that μ(M) = max |λ(M Δ)| = 0, although M ≠ 0! Indeed μ is not a norm. Nevertheless, μ = 0 indicates maximal robustness: whatever σ̄(Δ) < 1/μ(M) = ∞, the closed loop is stable, because M is certainly stable and the stable transfers δi are not in a closed loop at all. On the other hand, the "conservative" ∞-norm warns for non-robustness, as σ̄(M) = 10 > 1. From its perspective, supposing a full matrix Δ, this is correct, since:

    M Δ = ( 0 10 ; 0 0 ) ( δ1 δ12 ; δ21 δ2 ) = ( 10δ21 10δ2 ; 0 0 )         (8.33)

so that Fig. 8.6 represents the details in the closed loop.

Figure 8.6: Detailed closed loop M with unstructured Δ.

Clearly there is a closed loop now with loop transfer 10δ21, where in the worst case we can have |δ21| = 1, so that the system is not robustly stable. Correctly, σ̄(M) = 10 tells us that for robust stability we require σ̄(Δ) < 1/σ̄(M) = 1/10 and thus |δ21| < 1/10.

Summarising, we obtained merely as a definition that robust stability is realised if:
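All three examples can be checked against the singularity characterisation μ = [inf{σ̄(Δ) | det(I − MΔ) = 0}]⁻¹ that appears later as (8.49). A brute-force sketch for 2×2 matrices with Δ = diag(δ1, δ2); a real-valued grid for the δi suffices for these real examples:

```python
import numpy as np

def mu_diag(M, radii=np.arange(0.01, 3.01, 0.01), n=61):
    """Brute-force mu estimate for 2x2 M with Delta = diag(d1, d2): the
    smallest radius r at which det(I - M*Delta) can vanish gives mu = 1/r."""
    d = np.linspace(-1.0, 1.0, n)
    D1, D2 = np.meshgrid(d, d)
    for r in radii:
        det = (1 - M[0, 0] * r * D1) * (1 - M[1, 1] * r * D2) \
            - M[0, 1] * M[1, 0] * (r * D1) * (r * D2)
        if np.min(np.abs(det)) < 1e-3:   # singularity reached at radius r
            return 1.0 / r
    return 0.0                           # never singular: mu = 0

print(mu_diag(np.array([[0.5, 0.0], [0.0, 0.5]])))   # ~0.5
print(mu_diag(np.array([[2.0, 0.0], [0.0, 1.0]])))   # ~2
print(mu_diag(np.array([[0.0, 10.0], [0.0, 0.0]])))  # 0: mu is only a seminorm
```

The last case returns exactly zero: no diagonal Δ, however large, makes I − MΔ singular, which is the open-connection example above.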


    {Δ ∈ 𝚫} ∩ {‖M‖μ = sup_ω μ(M) < 1}                                       (8.34)

So a Bode plot of μ could look like the one displayed in Fig. 8.7.

Figure 8.7: Bode plot of structured singular value.

The actual computation of the μ-norm is quite another thing and appears to be complicated, indirect and at the least cumbersome.

8.3 Computation of the μ-norm.

The crucial observation at the basis of the computation, which will become an approximation, is:

    ρ(M) ≤ μ(M) ≤ σ̄(M)                                                      (8.35)

Without proving these two-sided bounds explicitly, we will exploit them in deriving tighter bounds in the next two subsections.

8.3.1 Maximizing the lower bound.

Without affecting the loop properties we can insert an identity into the loop, effected by U U* = U* U = I, where U is a unitary matrix. A matrix U is unitary if its conjugate transpose U* is orthonormal to U, so U* U = I; it is just a generalisation of orthonormal matrices to complex matrices. Let the matrix U consist of diagonal blocks Ui corresponding to the blocks Δi:

    U ∈ 𝒰 = { diag(U1, U2, …, Up) | dim(Ui) = dim(Δi Δi^T), Ui Ui* = I }    (8.36)

as exemplified in Fig. 8.8.

Figure 8.8: Detailed structure of U related to Δ.

Then neither the stability nor the loop transfer is changed if we insert I = U U* into the loop. The lower bound can be increased by inserting such compensating blocks U and U* into the loop such that the Δ-block is unchanged while the M-part is maximised in ρ. The Δ is invariant under premultiplication by a unitary matrix U of corresponding structure, as shown in Fig. 8.8: as U is unitary, we can redefine the dashed block ΔU as the new model error, which also lives in the set 𝚫:

    Δ' := Δ U ∈ 𝚫                                                           (8.37)

Because μ(M) will stay larger than ρ(M U) even if we change U, we can push this lower bound upwards until it even equals μ(M):

    sup_U ρ(M U) = μ(M)                                                     (8.38)

So in principle this could be used to compute μ, but unfortunately the iteration process to arrive at the supremum is a hard one, because the function ρ(M U) is not convex in the entries uij. So our hope is fixed on lowering the upper bound.

8.3.2 Minimising the upper bound.

Again we apply the trick of inserting identities, consisting of matrices, into the loop; this time both at the left and the right side of the block Δ, which we want to keep unchanged, as exemplified in Fig. 8.9. Careful inspection of Fig. 8.9 teaches that if Δ is postmultiplied by DR and premultiplied by DL⁻¹, it remains completely unchanged because of the "corresponding identities structure" of DR and DL. This can be formalised as:

    DL ∈ 𝒟L = { diag(d1 I1, d2 I2, …, dp Ip) | dim(Ii) = dim(Δi Δi^T), di ∈ ℝ }      (8.39)

    DR ∈ 𝒟R = { diag(d1 I1, d2 I2, …, dp Ip) | dim(Ii) = dim(Δi^T Δi), di ∈ ℝ }      (8.40)


Figure 8.9: Detailed structure of D related to Δ.

If all Δi are square, the left matrix DL and the right matrix DR coincide. All coefficients di can be multiplied by a free constant without affecting anything in the complete loop; therefore the coefficient d1 is generally chosen to be one as a "reference". Again the loop transfer and the stability condition are not influenced by DL and DR, and we can redefine the model error:

    Δ' := DR Δ DL⁻¹ = Δ ∈ 𝚫                                                 (8.41)

Again μ is not influenced, so that we can vary all di and thereby push the upper bound downwards:

    μ(M) ≤ inf_{di, i=2,3,…,p} σ̄(DL M DR⁻¹) =: μA(M)                        (8.42)

It turns out that this upper bound μA(M) is in practice very close to μ(M), and it even equals μ(M) if the dimension of Δ is less than or equal to 3. And fortunately the optimisation with respect to the di is a well-conditioned one, because the function ‖DL M DR⁻¹‖∞ appears to be convex in the di. So μA is generally used as the practical estimate of μ. However, it should be done for all frequencies ω, which boils down to a finite, representative number of frequencies, and we finally have:

    ‖M‖μ ≤ inf_{di(ω), i=2,3,…,p} ‖DL M DR⁻¹‖∞ = sup_ω μA(M(ω))             (8.43)

In practice one minimises, for a sufficient number of frequencies ωj, the maximum singular value σ̄(DL M DR⁻¹) over all di(ωj). Next, biproper, stable and minimum phase filters d̂i(jω) are fitted to the sequences di(ωj), and the augmented plant in a closed loop with the controller K is properly pre- and postmultiplied by the obtained filter structure. In that way we are left with generalised rational transfers again. This operation leads to the following formal, shorthand notation:

    inf_{di(ω), i=2,3,…,p} ‖DL M DR⁻¹‖∞ ≈ ‖D̂L M(K) D̂R⁻¹‖∞ → inf_D ‖D M(K) D⁻¹‖∞    (8.44)

where the distinction between DL and DR is left out of the notation, as they are linked in the di anyhow; their rational filter structure is not explicitly indicated either. As a consequence we can write:

    ‖M‖μ ≤ inf_D ‖D M D⁻¹‖∞ = sup_ω μA(M(ω))                                (8.45)

Consequently, if μA remains below 1 for all frequencies, robust stability is guaranteed, and the smaller it is, the more robustly stable the closed-loop system is. This finishes the μ-analysis part: given a particular controller K, the μ-analysis tells you about robustness in stability and performance.

8.4 μ-analysis/synthesis

By equation (8.43) we have a tool to verify robustness of the total augmented plant in a closed loop with controller K. The augmented plant includes both the model error block Δ and the artificial, fancy performance block Δp. Consequently, robust stability should be understood here as concerning the generalised stability, which implies that also the performance is robust against the plant perturbations. But this is only the analysis, given a particular controlled block M, which is still a function (LFT) of the controller K. For the synthesis of the controller we were used to minimising the H∞-norm:

    inf_{K stabilising} ‖M(K)‖∞                                             (8.46)

but we have just found that this is conservative and that we should minimise:

    inf_{K stabilising} ‖D M(K) D⁻¹‖∞                                       (8.47)

However, for each new K the subsequently altered M(K) involves a new minimisation for D, so that we have to solve:

    inf_{K stabilising} inf_D ‖D M(K) D⁻¹‖∞                                 (8.48)

In practice one tries to solve this by the following iteration procedure under the name of the D-K-iteration process:

1. Put D = I.


2. K-iteration. Compute the optimal K for the last D.

3. D-iteration. Compute the optimal D for the last K.

4. Has the criterion ‖D M(K) D⁻¹‖∞ changed significantly during the last two steps? If yes: go to the K-iteration; if no: stop.

In practice this iteration process usually appears to converge in not too many steps. But there can be exceptions, and in principle there is a possibility that it does not converge at all.

This formally completes the very brief introduction into μ-analysis/synthesis. A few extra remarks will be added before a simple example illustrates the theory.

As a formal definition of the structured singular value one often "stumbles" across the following "mind boggling" expression in the literature:

    μ(M) = [ inf { σ̄(Δ) | det(I − M Δ) = 0 } ]⁻¹                            (8.49)

where one has to keep in mind that the infimum is over Δ, which has indeed the same structure as defined in the set 𝚫 but is not restricted to σ̄(Δi) < 1. Nevertheless, this definition is equivalent to the one discussed in this section. In the exercises one can verify that the three methods (if dim(Δ) ≤ 3) yield the same results.

It is tacitly supposed that all Δi live in the unit balls in C^{ni×mi}, while we often know that only real numbers are possible. This happens e.g. when it concerns inaccuracies in "physical" real parameters (see the next section). Consequently, not taking this confinement to real numbers (ℝ) into account will again give rise to conservatism. Implicit incorporation of this knowledge asks for more complicated numerical tools though.

Figure 8.10: First order plant with parameter uncertainties.
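The bound chain ρ(M) ≤ sup_U ρ(MU) = μ(M) ≤ inf_D σ̄(D M D⁻¹) ≤ σ̄(M) of (8.35), (8.38) and (8.42) can be illustrated numerically. A sketch for an assumed 2×2 M with two scalar blocks, so U = diag(e^{ja}, e^{jb}) and D = diag(1, d); coarse grid searches stand in for the actual optimisers:

```python
import numpy as np

# Assumed example matrix with two scalar uncertainty blocks.
M = np.array([[1.0, 2.0], [0.5, 1.0]])

rho = max(abs(np.linalg.eigvals(M)))
sig = np.linalg.svd(M, compute_uv=False)[0]

angles = np.linspace(0.0, 2.0 * np.pi, 181)
low = max(max(abs(np.linalg.eigvals(
               M @ np.diag([np.exp(1j * a), np.exp(1j * b)]))))
          for a in angles for b in angles)           # lower bound (8.38)

up = min(np.linalg.svd(np.diag([1.0, d]) @ M @ np.diag([1.0, 1.0 / d]),
                       compute_uv=False)[0]
         for d in np.logspace(-2, 2, 401))           # upper bound (8.42)

print(rho, low, up, sig)   # rho = 2, sigma_max = 2.5; low and up close on mu = 2
```

Because this M has rank one, the lower and upper bounds meet at μ(M) = 2, strictly smaller than σ̄(M) = 2.5; the upper-bound search over d is the scalar analogue of the D-scaling minimisation used in the D-K iteration.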

8.5 A simple example

Consider the following first order process:

    P = K0 / (s + α)                                                        (8.50)

where we have some doubts about the correct values of the two parameters K0 and α. So let δ1 be the uncertainty in the gain K0 and δ2 the model error in the pole value α. Furthermore, we assume a disturbance w at the input of the process. We want to minimise its effect at the output by feedback across the controller C. For simplicity there are no shaping nor weighting filters, and measurement noise and actuator saturation are neglected. The whole set-up can then easily be presented by Fig. 8.10 and the corresponding augmented plant by Fig. 8.11.

Figure 8.11: Augmented plant for parameter uncertainties.

The complete input-output transfer of the augmented plant Ge can be represented as:

    ( a1 )            ( b1 )
    ( a2 ) = Ge(s) ·  ( b2 )                                                (8.51)
    ( z  )            ( w  )
    ( y  )            ( u  )

with all entries of Ge rational with the common denominator s + α. The inner loops over the model error block are defined by:

    ( b1 )   ( δ1  0  ) ( a1 )
    ( b2 ) = ( 0   δ2 ) ( a2 )                                              (8.52)

while the outer loop is closed by:

    u = K y = C y                                                           (8.53)

Incorporating a stabilising controller K, which is taken as a static feedback here, we obtain the transfer M(K) from (b1, b2, w) to (a1, a2, z), whose entries all share the denominator s + α + K0·K. The analysis for robustness of the complete matrix M(K) is rather complicated for analytical expressions, so we prefer to confine ourselves to robust stability in the strict sense for changes in δ1 and δ2, that is:

    M11 = 1/(s + α + K0 K) · ( −K −1 ; −K −1 )                              (8.55)

Since we did not scale, we may define the μ-analysis as computing:

    ‖M11‖μ = sup_ω μ(M11(jω))                                               (8.56)

for the structure:

    Δ ∈ 𝚫 = { diag(δ1, δ2) | σ̄(δi) < 1 }                                    (8.57)

For μ(ω) we get (the computation is an exercise):

    μ(ω) = (|K| + 1) / √(ω² + (α + K0 K)²)                                   (8.58)

The supremum over the frequency axis is then obtained for ω = 0, so that:

    ‖M11‖μ = (|K| + 1) / (α + K0 K)                                         (8.59)

because K stabilises the nominal plant, so that:

    α + K0 K > 0                                                            (8.60)

Ergo, μ-analysis guarantees robust stability as long as:

    for i = 1, 2 :  |δi| < (α + K0 K) / (|K| + 1) = 1/‖M11‖μ                 (8.61)

It is easy to verify (also an exercise) that the unstructured H∞ condition rests on:

    σ̄(M11(K, ω)) = √( 2(K² + 1) / (ω² + (α + K0 K)²) )                       (8.62)

    ‖M11‖∞ = √(2(K² + 1)) / (α + K0 K)                                       (8.63)

so that the H∞-analysis guarantees robust stability only for:

    |δi| < (α + K0 K) / √(2(K² + 1)) = 1/‖M11‖∞                              (8.64)

Indeed, the μ-analysis is less conservative than the H∞-analysis, as it is easy to verify that:

    |K| + 1 ≤ √(2(K² + 1))                                                   (8.65)

Finally we would like to compare these results with an even less conservative approach where we make use of the phase information as well. As mentioned before, all phase information is lost in the H∞-approach, and this carries over to the μ-approach. Explicit implementation of the phase information can only be done in such a simple example, but it will appear to be the great winner. Because we know that δ1 and δ2 are real, the pole of the system with proportional feedback K is given by:

    −(α + K0 K + δ2 + K δ1)                                                  (8.66)

Because K is such that nominal (for δi = 0) stability is true, total stability is guaranteed for:

    K δ1 + δ2 > −(α + K0 K)                                                  (8.67)

This half space in the δ1,δ2-space is drawn in Fig. 8.12 for the numerical values α = 1, K0 = 1, K = 2.

Figure 8.12: Various bounds in parameter space.

The two square bounds are the μ-bound and the H∞-bound. The improvement of μ on H∞ is rather poor in this example, but it can become substantial for other, realistic plants. There is also a circular bound drawn in Fig. 8.12. This one is obtained by recognising that the signals a1 and a2 are the same in Fig. 8.11, which is the reason that M11 so evidently had rank 1. By proper combination the robust stability can thus be established by a reduced M11 that consists of only one row, and then μ is no longer different from the H∞-norm, both yielding the circular bound with fewer computations. (This is an exercise.)

Another appealing result is obtained by letting K approach ∞; then:

    μ − bound :    |δi| < (α + K0 K)/(|K| + 1)       → K0                     (8.68)
    H∞ − bound :   |δi| < (α + K0 K)/√(2(K² + 1))    → K0/√2                  (8.69)
    true − bound : δ1 > −K0                                                   (8.70)
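The numbers behind Fig. 8.12 can be reproduced directly for α = 1, K0 = 1, K = 2 by evaluating (8.58) and (8.62) on a frequency grid; the supremum of both sits at ω = 0:

```python
import numpy as np

# Numerical check of (8.58)-(8.63) for alpha = 1, K0 = 1, K = 2.
alpha, K0, K = 1.0, 1.0, 2.0
den0 = alpha + K0 * K                       # = 3: nominal pole at -3

omega = np.linspace(0.0, 50.0, 2001)
mu_w  = (abs(K) + 1.0) / np.sqrt(omega**2 + den0**2)               # (8.58)
sig_w = np.sqrt(2.0 * (K**2 + 1.0)) / np.sqrt(omega**2 + den0**2)  # (8.62)
mu_norm, inf_norm = mu_w.max(), sig_w.max()                        # at w = 0
print(mu_norm, inf_norm)    # 1.0 and sqrt(10)/3: mu gives the larger delta-box

# Cross-check (8.62) against an SVD of the rank-one M11 at w = 5:
A  = np.array([[-K, -1.0], [-K, -1.0]])
s5 = np.linalg.svd(A / (1j * 5.0 + den0), compute_uv=False)[0]
print(abs(s5 - np.sqrt(10.0) / np.sqrt(25.0 + den0**2)) < 1e-12)
```

With ‖M11‖μ = 1 the μ-bound on the real parameter box is |δi| < 1, while the H∞ bound is the slightly smaller |δi| < 3/√10, matching the two square bounds of Fig. 8.12.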


8.6 Exercises

9.1: Show that, in case M12 = 0 or M21 = 0, the robust performance condition is fulfilled if both the robust stability and the performance for the nominal model are guaranteed. Does this case, off-diagonal terms of M zero, make sense?

9.2: Given the three examples in this chapter:

    M = ( 1/2 0 ; 0 1/2 ),   M = ( 2 0 ; 0 1 ),   M = ( 0 10 ; 0 0 )        (8.71)

Compute the μ-norm if Δ = ( δ1 0 ; 0 δ2 ) according to the second definition:

    μ = [ inf { σ̄(Δ) | det(I − M Δ) = 0 } ]⁻¹                               (8.72)

9.3: Given:

    M = ( −1/2 1/2 ; −1/2 1/2 ),   Δ = ( δ1 0 ; 0 δ2 )                      (8.73)

a) Compute ρ and σ̄ of M. Are these good bounds for μ?

b) Compute μ in three ways.

9.4: Compute explicitly ‖M11‖∞ and ‖M11‖μ for the example in this chapter, where:

    M11 = 1/(s + α + K0 K) · ( −K −1 ; −K −1 )                              (8.74)

What happens if we use the fact that the error block output signals a1 and a2 are the same, so that Δ can be defined as Δ = ( δ1 δ2 )^T? Show that the circular bound of the last Fig. 8.12 results.


Chapter 9

Filter Selection and Limitations.

In this chapter we will discuss several aspects of filter selection in practice. First we will show how signal characteristics and model errors can be measured, and how these measurements together with the performance aims can lead to effective filters; effective in the sense that solutions with ‖M‖∞ < 1 are feasible without contradicting e.g. "S + T = I" and other fundamental bounds.

Apart from the chosen filters there are also characteristics of the process itself which ultimately bound the performance, for instance RHP (= Right Half Plane) zeros and/or poles, actuator and output ranges, fewer inputs than outputs, etc. We will briefly indicate their effects, such that one is able to detect the reason why ‖M‖∞ < 1 could not be obtained and what the best remedy or compromise can be.

9.1 A zero frequency set-up.

9.1.1 Scaling

The numerical values of the various signals in a controlled system are usually expressed in their physical dimensions like m, N, V, A, °, … . Next, depending on the size of the signals, we also have a rough scaling possibility in the choice of the units. For instance, a distance will basically be expressed in meters, but in order to avoid very large or very small numbers we can choose among km, mm, µm, Å or lightyears. Still this is too rough a scaling to compare signals of different physical dimensions. As a matter of fact, the complete concept of mapping normed input signals onto normed output signals, as discussed in chapter 5, incorporates the basic idea of the appropriate comparison of physically different signals by means of the input characterising filters V and the output weighting filters W. The filter choice is actually a scaling problem for each frequency. So let us start in a simplified context and analyse the scaling first for one particular frequency, say ω = 0. Scaling on physical, numerically comparable units as indicated above is not accurate enough, and a trivial solution is simply the familiar technique of eliminating physical dimensions by dividing by the maximum amplitude. So each signal s can then be expressed in dimensionless units as s̃ according to:

    s̃ = s / smax                                                            (9.1)

Figure 9.1: Range scaled controlled system. [Block diagram: loop r → C → P → y with disturbance d; the ranges rmax, dmax, umax, zmax and the model error channel are used as the scalings that produce the dimensionless signals ~r, ~d, ~u, ~z, ~y.]

An H∞-analogon for such a zero frequency set-up would be as follows. In H∞ we measure the inputs and outputs as ‖w‖2 and ‖z‖2, so that the induced norm is ‖M‖∞. In the zero frequency set-up it would be the Euclidean norm for inputs and outputs, i.e. ‖w‖E = ‖w‖2 = √(Σi wi²) and likewise for z. The induced norm is then trivially the usual matrix norm, so ‖M‖∞ = maxi σi(M) = σ̄(M). Note that because of the scaling we immediately have for all signals, inputs or outputs:

    ‖s‖2 = |s̃| ≤ 1                                                          (9.2)

For instance, a straightforward augmented plant could lead to:

        ( ũ )   ( Wu R Vr   Wu R Vd   Wu R V ) ( r̃ )
    z = ( ẽ ) = ( We S Vr   We S Vd   We T V ) ( d̃ ) = M w                   (9.3)
                                               ( ξ̃ )

where as usual S = 1/(1 + PC), T = PC/(1 + PC), R = C/(1 + PC) and e = r − y. In the one-frequency set-up the majority of the filters can be directly obtained from the scaling:

        ( ũ )   ( R rmax/umax   R dmax/umax   R ξmax/umax ) ( r̃ )
    z = ( ẽ ) = ( We S rmax     We S dmax     We T ξmax   ) ( d̃ ) = M w      (9.4)
                                                            ( ξ̃ )

9.1.2 Actuator saturation, parsimony and model error.

Suppose that the problem is well defined and that we would be able to find a controller C such that:

    ‖M‖∞ = σ̄(M) < 1                                                         (9.5)

then this tells us e.g. that ‖z‖2 < 1, so certainly |ũ| < 1.
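A zero-frequency sketch of the scaled matrix of (9.4); the static plant, controller and range numbers below are assumed, with We chosen as the inverse of the error range:

```python
import numpy as np

# Assumed static numbers for the w = 0 scaling example of (9.4).
P, C = 2.0, 5.0
rmax, dmax, ximax, umax, emax = 1.0, 0.5, 0.05, 2.0, 0.2

S = 1.0 / (1.0 + P * C)       # sensitivity
T = P * C / (1.0 + P * C)     # complementary sensitivity
R = C / (1.0 + P * C)         # control sensitivity
We = 1.0 / emax               # error weight taken as the inverse range

M = np.array([[R * rmax / umax, R * dmax / umax, R * ximax / umax],
              [We * S * rmax,   We * S * dmax,   We * T * ximax  ]])
ok = np.linalg.svd(M, compute_uv=False)[0] < 1.0
print(ok)   # sigma_max(M) < 1: the scaled specs are met at w = 0
```

When `ok` holds, every scaled output stays inside its range for all scaled inputs of Euclidean norm at most one, which is exactly condition (9.5) at this single frequency.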


118 CHAPTER 9. FILTER SELECTION AND LIMITATIONS.<br />

9.1. AZERO FREQUENCY SET-UP. 117<br />

or, since weights are naturally chosen as positive numbers, we take:<br />

(9.12)<br />

pmax = umax<br />

Consequently, an extra addition to the output of the plant representing the model<br />

perturbation is realised by jpj pmax. In combining the output additions we get n =<br />

;p ; d + r and:<br />

$$n_{max} = p_{max} + d_{max} + r_{max} \tag{9.13}$$

By the applied scaling we can only guarantee that $\|w\|_2 < \sqrt{3}$, so that, disappointingly, $u < \sqrt{3}\,u_{max}$ follows, which is not sufficient to avoid actuator saturation. This effect can be weakened by choosing $W_u = \sqrt{3}/u_{max}$, or we can try to eliminate it by diminishing the number of inputs. This can be accomplished because both tracking and disturbance reduction require a small sensitivity $S$. Fig. 9.2 shows how, by rearrangement, reference signals, disturbances and model perturbations can be combined into one augmented plant input signal.


Note that the sign of $p$ and $d$, actually being a phase angle, does not influence the weighting. Also convince yourself of the substantial difference between diminishing the number of inputs and increasing $W_u$ by a factor $\sqrt{3}$. We have $|\tilde{n}| \le 1$, contrary to the original three inputs $|\tilde{p}| \le 1$, $|\tilde{d}| \le 1$ and $|\tilde{r}| \le 1$, implying a reduction by a factor 3 instead of $\sqrt{3}$. The 2-norm applied to $w$ in the two block schemes of Fig. 9.2 would indeed yield the factor $\sqrt{3}$, as $\sqrt{\|\tilde{p}\|_2^2 + \|\tilde{d}\|_2^2 + \|\tilde{r}\|_2^2} \le \sqrt{3}$, contrary to $\|\tilde{n}\|_2 \le 1$.
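The factor just mentioned can be checked numerically; this is an illustrative sketch with three randomly generated, worst-case (unit 2-norm) scaled inputs, where the signal lengths and the random seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three scaled inputs, each normalised to unit 2-norm (the worst case).
p, d, r = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 100)))

# Left scheme of Fig. 9.2: the three inputs stacked into one vector w.
w = np.concatenate([p, d, r])
print(np.linalg.norm(w))          # sqrt(3): the factor the text mentions

# Right scheme: one combined input, rescaled so its 2-norm is at most 1.
n = p + d + r
n_tilde = n / np.linalg.norm(n)
print(np.linalg.norm(n_tilde))    # 1.0
```

The stacked vector always carries $\sqrt{3}$ times the unit norm, while the combined, rescaled input stays at 1 — the reduction the text attributes to diminishing the number of inputs.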

By reducing the number of inputs we have done so taking care that the maximum value was retained. If several 2-normed signals are stacked in a vector, the total 2-norm takes the average of the energy or power. Consequently, we are confronted again with the fact that not $H_\infty$-control but $\ell_1$-control is suited for protection against actuator saturation. Note that, for the proper quantisation of the actuator input signal, we had to actually add the reference signal, the disturbance and the model error output. For robust stability alone it is now sufficient that:

$$n_{max} = \delta\, u_{max} \tag{9.14}$$

Figure 9.2: Combining sensitivity inputs.
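The zero-frequency bookkeeping of ranges in equations (9.12) and (9.13) amounts to a few multiplications and additions; the numbers below are assumed example values, not taken from the text:

```python
# Zero-frequency bookkeeping of the input ranges (assumed example numbers).
delta, u_max = 0.2, 5.0            # model-error bound and actuator range
p_max = delta * u_max              # eq. (9.12): model-perturbation output range
d_max, r_max = 0.5, 2.0            # disturbance and reference ranges
n_max = p_max + d_max + r_max      # eq. (9.13): combined input range
print(p_max, n_max)                # 1.0 3.5
```

The combined range $n_{max}$ is what the single augmented input of Fig. 9.2 has to carry.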

The measuring of the model perturbation will be discussed later. Here we assume that the general, frequency dependent, additive model error can be expressed as:

$$\|\Delta_P\|_\infty < \delta \tag{9.7}$$

The transfer from $\tilde{p}$ to $\tilde{u}$ in Fig. 9.2 is given by $\frac{1}{u_{max}}\,R\,p_{max}$, so that, by the small gain theorem,

$$\left\|\frac{1}{u_{max}}\,R\,p_{max}\right\|_\infty < 1 \tag{9.8}$$

guarantees that stability is robust for all perturbations scaled such that:

$$\|\tilde{\Delta}\|_\infty \le 1 \tag{9.9}$$

Combination yields that:

$$\|\Delta_P\|_\infty < \frac{p_{max}}{u_{max}} \tag{9.10}$$

The condition (9.14) is sufficient whatever the derivation of $n_{max}$ might be. In the next section we will see that in the frequency dependent case a real prevention of actuator saturation can never be guaranteed in $H_\infty$-control. Actual practice will then be to combine $V_d$ and $V_r$ into $V_n$, heuristically define a $W_u$, and verify whether the condition for robust stability:

$$\forall\omega:\ \delta(\omega) \le |V_n W_u|(\omega) \tag{9.15}$$

is fulfilled. If not, either $V_n$ or $W_u$ should be corrected.

9.1.3 Bounds for tracking and disturbance reduction.

Till so far we have discussed all weights except for the error weight $W_e$. Certainly, we would like to choose $W_e$ as big and broad as possible in order to keep the error $e$ as small as possible. If we forget about the measurement noise for the moment and apply the simplified right scheme of Fig. 9.2, we obtain a simple mixed sensitivity problem:

$$z = \begin{pmatrix}\tilde{u}\\ \tilde{e}\end{pmatrix} = \begin{pmatrix}W_u R V_n\\ W_e S V_n\end{pmatrix}\tilde{n} = M w \tag{9.16}$$
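The per-frequency robustness check of equation (9.15) can be sketched on a frequency grid; the rational weights and the error bound below are purely illustrative assumptions (chosen so that the condition happens to hold), not filters from the text:

```python
import numpy as np

w = np.logspace(-2, 2, 200)     # frequency grid [rad/s]
s = 1j * w

# Hypothetical weights (assumed illustrative choices):
Vn = np.abs(2.0 / (s + 1.0))                       # input characterisation, low pass
Wu = np.abs(0.5 * (s + 1.0) / (0.1 * s + 1.0))     # biproper, high-pass character
delta = np.abs(0.05 * s / (s + 1.0))               # additive model-error bound

# Eq. (9.15): delta(w) must stay below |Vn * Wu| at every grid frequency.
ok = np.all(delta < Vn * Wu)
print(ok)
```

If the check fails at some frequencies, either $V_n$ or $W_u$ is raised there and the design is repeated, exactly as the text prescribes.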



Consequently, if we obviously have $r_{max} > z_{max}$ for a tracking system, the tracking error $e = r - y$ can never become small. For SISO plants this effect is quite obvious, but for MIMO systems the same internal scaling of the plant $P$ can be very revealing in detecting these kinds of internal insufficiencies, as we will show later.
as we will show later.<br />

On the other hand, if the gain of the scaled plant $\tilde{P}$ is larger than 1, one should not think that the way is free to zero sensitivity $S$. For real systems, where the full frequency dependence plays a role, we will see plenty of limiting effects. Only for $\omega = 0$ are we used to claiming zero sensitivity in case of integrator(s) in the loop. In that case we indeed have infinite gain ($1/(j\omega)$), similar to the previous example by taking $C = \infty$. Nevertheless, in practice we always have to deal with the sensor and the inevitable sensor noise $\eta$. If we indeed have $S = 0$, inevitably $T = 1$ and $e = T\eta = \eta$. So the measurement noise is present in the error to its full extent, which simply reflects the trivial fact that you can never track better than the accuracy of the sensor. So sensor noise bounds both tracking error and disturbance rejection and should be brought in properly, by the weight $\eta_{max}$ in our example, in order to minimise its effect in balance with the other bounds and claims.

$$C = W_e^2\, P\, u_{max}^2 \tag{9.17}$$

In order to keep $\bar{\sigma}(M) \le 1$ to prevent actuator saturation, we can put $\bar{\sigma}(M) = 1$ for the computed controller $C$, yielding:

$$W_e = \frac{1}{\sqrt{n_{max}^2 - P^2 u_{max}^2}} \tag{9.18}$$
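Equations (9.17) and (9.18) can be verified numerically for the static ($\omega = 0$) scalar problem; the plant gain and ranges below are assumed example numbers:

```python
import numpy as np

# Static (w = 0) mixed sensitivity with scalar data (assumed example numbers).
P, u_max, n_max = 2.0, 1.0, 4.0       # |P u_max| < |n_max|: full compensation impossible
W_e = 1.0 / np.sqrt(n_max**2 - P**2 * u_max**2)   # eq. (9.18)
W_u = 1.0 / u_max

def sigma_M(C):
    """Largest singular value of the 2x1 closed-loop map M for controller gain C."""
    R = C / (1 + P * C)               # control sensitivity
    S = 1 / (1 + P * C)               # sensitivity
    return np.hypot(W_u * R * n_max, W_e * S * n_max)

C_opt = W_e**2 * P * u_max**2         # eq. (9.17): the minimiser of sigma_M
grid = np.linspace(0.01, 10, 2000)
print(sigma_M(C_opt))                 # 1.0: the best achievable level for this W_e
print(np.all(sigma_M(grid) >= sigma_M(C_opt) - 1e-9))
```

The grid sweep confirms that the controller of (9.17) indeed minimises $\bar{\sigma}(M)$, and that the weight of (9.18) is the largest $W_e$ for which $\bar{\sigma}(M) = 1$ is reached.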

A special case occurs for $|P u_{max}| = |n_{max}|$, which simply states that the range of the actuator is exactly sufficient to cause the output $z$ of plant $P$ to compensate for the "disturbance" $n$. So if actuator range and plant gain are sufficiently large, we can choose $W_e = \infty$ and thus $C = \infty$, so that $M$ becomes:

$$M = \begin{pmatrix}\dfrac{n_{max}}{P\, u_{max}}\\[1mm] 0\end{pmatrix} \tag{9.19}$$

and no error results, while $\bar{\sigma}(M) = |m_{11}| = 1$.

If $|P u_{max}| > |n_{max}|$, there is plenty of choice for the controller, and the $H_\infty$ criterion is minimised by decreasing $|m_{11}|$ further at the cost of a small increase of $|m_{21}|$. Note that this control design is different from minimising $|m_{21}|$ under the constraint $|m_{11}| \le 1$. For this simple example the latter problem can be solved; the reader is invited to do so, and thereby to obtain an impression of the tremendous task for a realistically sized problem.

If $|P u_{max}| < |n_{max}|$, it is principally impossible to compensate all "possible disturbance" $n$. This is reflected in the maximal weight $W_e$ we can choose that still allows for $\bar{\sigma}(M) \le 1$. Some algebra shows that:

$$\frac{1}{W_e\, n_{max}} = \sqrt{1 - \frac{P^2 u_{max}^2}{n_{max}^2}} \tag{9.20}$$

If e.g. only half of $n$ can be compensated, i.e. $|P u_{max}| = \frac{1}{2}|n_{max}|$, we have $|S| \le \frac{1}{W_e n_{max}} = \sqrt{\frac{3}{4}}$, which is very poor. This represents the impossibility to track better than 50%, or to reduce the disturbance by more than 50%. If one increases the weight $W_e$, one is confronted with a similar increase of $\gamma$, and no solution $\|M\|_\infty \le 1$ can be obtained. One can test this beforehand by analysing the scaled plant as indicated in Fig. 9.1. The plant $P$ has been normalised internally according to:

$$P = z_{max}\, \tilde{P}\, \frac{1}{u_{max}} \tag{9.21}$$

so that $\tilde{P}$ is the transfer from $\tilde{u}$, the maximally excited actuator normalised to 1, to the maximal, undisturbed, scaled output $\tilde{z}$. Suppose now that $|\tilde{P}| < 1$. It tells you that not all outputs in the intended output range can be obtained with the actual actuator. The maximal input $u_{max}$ can only yield:

$$|P\, u_{max}| = |z_{max}\, \tilde{P}| < |z_{max}| \tag{9.22}$$

9.2 Frequency dependent weights.

9.2.1 Weight selection by scaling per frequency.

In the previous section the single frequency case served as a very simple concept to illustrate some fundamental limitations that certainly exist in the full, frequency dependent situation. All effects carry over, but we have to consider a similar kind of scaling for each frequency. Usually the $H_\infty$-norm is presented as the induced norm of the mapping from the $L_2$ space to the $L_2$ space. In engineering terms we then talk about the mapping of the (square root of the) energy of the inputs, $\|w\|_2$, to the (square root of the) energy of the outputs, $\|z\|_2$. Mathematically this is fine, but in practice we seldom deal with finite energy signals. Fortunately, the $H_\infty$-norm is also the induced norm for mapping powers onto powers, or even expected powers onto expected powers, as explained in chapter 5. If one considers a signal to be deterministic, where certain characteristics may vary, the power can simply be obtained by describing that signal by a Fourier series, where the Fourier coefficients directly represent the maximal amplitude per frequency. This maximum can thus be used as a scaling for each frequency, analogous to the one frequency example of the previous section. On the other hand, if one considers the signal to be stochastic (stationary, one sample from an ergodic ensemble), one can determine the power density $\Phi_s$ and use its square root as the scaling. One can even combine the two approaches, for instance stochastic disturbances and deterministic reference signals. In that case one should bear in mind that the dimensions are fundamentally different and a proper constant should be brought in for appropriate weighting. Only if one sticks to one kind of approach is any scaling constant $c$ irrelevant, as it disappears by the fundamental division in the definition:

$$\|M\|_\infty = \sup_w \frac{\|Mw\|_{power}}{\|w\|_{power}} = \sup_w \frac{\|c\,Mw\|_{power}}{\|c\,w\|_{power}} \tag{9.23}$$

Furthermore, as we have learned from the $\omega = 0$ scaling, the maxima (= ranges) of the inputs scale, and thus directly define the input characterising filters, while the output filters are determined by the inverse, so that we obtain e.g. for input $v$ to output $x$:



$$W_x\, M_{xv}\, V_v = \frac{1}{x_{max}}\, M_{xv}\, v_{max} = \frac{1}{c\, x_{max}}\, M_{xv}\, c\, v_{max} \tag{9.24}$$

So again the constant is irrelevant, unless input and output filters are defined with different constants. In chapter 5 it has been illustrated how the constant relating the deterministic power contents to a power density value can be obtained: by explicitly computing the norms in both concepts for an example signal set that can serve for both interpretations. From here on we suppose that one has chosen the one or the other convention, so that we can continue with a scaling per frequency similar to the scaling in the previous section. So $s_{max}(\omega)$ represents the square root of any power definition for signal $s(j\omega)$, e.g. $s_{max}(\omega) = \sqrt{\Phi_{ss}(j\omega)}$. Remember that the phase of filters, and thus of $s_{max}(\omega)$, is irrelevant. Straightforward implementation of scaling would then lead to:

$$\tilde{s}(\omega) = \frac{1}{s_{max}(\omega)}\, s(\omega)\ \rightarrow\ W_s(j\omega)\, s(\omega), \qquad s(\omega) = s_{max}(\omega)\, \tilde{s}(\omega)\ \rightarrow\ V_s(j\omega)\, \tilde{s}(\omega) \tag{9.25}$$

Arrows have been used in the above equations because an immediate choice of e.g. $V_s(j\omega) = s_{max}(\omega) = \sqrt{\Phi_{ss}(j\omega)}$ would unfortunately rarely yield a rational transfer function $V_s(j\omega)$, while all available techniques and algorithms in $H_\infty$ design are only applicable to rational weights. Therefore one has to come up with not too complicated rational weights $V_s$ or $W_s$ satisfying:

$$|V_s(j\omega)| \ge |s_{max}(\omega)| \stackrel{e.g.}{=} |\sqrt{\Phi_{ss}(j\omega)}|, \qquad |W_s(j\omega)| \ge \left|\frac{1}{s_{max}(\omega)}\right| \stackrel{e.g.}{=} \left|\frac{1}{\sqrt{\Phi_{ss}(j\omega)}}\right| \tag{9.26}$$

The routine "magshape" in the Matlab LMI toolbox can help you with this task. There you can define a number of points in the Bode amplitude plot, and the routine provides you with a low order rational weight function passing through these points. So when you have a series of measured or computed weights in the frequency domain, you can easily come up with a rational weight sufficiently close to them (from above).

Whether you use these routines or do it by hand, you have to watch the following side conditions:

1. The weighting filter should be stable and minimum phase. Be sure that there are no RHP (= Right Half Plane) poles or zeros. Unstable poles would disrupt the condition of stability for the total design, also for the augmented plant. Nonminimum phase zeros would prohibit the implicit inversion of the filters in the controller design.

2. Poles or zeros on the imaginary axis cause numerical problems for virtually the same reason and should thus be avoided. If one wants an integral weighting, i.e. a pole in the origin, in order to obtain an infinite weight at frequency zero and to force the design to place an integrator in the controller, one should approximate this in the filter. In practice it means that one positions a pole in the weight very close to the origin in the LHP (Left Half Plane). The distance to the origin should be very small compared to the distances of the other poles and zeros in plant and filters. Alternatively, one could properly include an integrator in the plant and separate it out to the controller later on, when the design is finished. In that case, be thoughtful about how the integrator is included in the plant (not just concatenation!).

3. The filters should preferably be biproper. Any pole-zero excess would in fact cause zeros at infinity, which make the filter uninvertible, while inversion happens implicitly in the controller design.

4. The dynamics of the generalised plant should not exceed about 5 decades on the frequency scale, for numerical reasons dependent on the length of the mantissa in your computer. Double precision can thus increase the number of decades. In single precision it means that the smallest radius (= distance to the origin) divided by the largest radius of all poles and zeros of plant and filters should not be less than $10^{-5}$.

5. The filters are preferably of low order. Not only will the controller be simpler, as it will have the total order of the augmented plant; filters that are very steep at the border of the aimed tracking band will also cause problems for robustness, as small deviations will easily let the fast loops in the Nyquist plot trespass the hazardous point $-1$.

9.2.2 Actuator saturation: Wu

The characterisation or weighting filters of most signals can be obtained sufficiently well as described in the previous subsection. A characterisation per frequency is well in line with practice. The famous exception is the filter $W_u$, where we would like to bound the actuator signal (and sometimes its derivative) in time. However, time domain bounds, in fact $L_\infty$-norms, are incompatible with frequency domain norms. This is in contrast with the energy and power norms ($\|\cdot\|_2$), which relate exactly according to Parseval's theorem. Let us illustrate this, starting with the zero frequency set-up of the first section. As we were only dealing with frequency zero, a bounded power would uniquely limit the maximum value in time, as the signal is simply a constant value:

$$\|s\|_{L_\infty} = |s| = \sqrt{s^2} = \|s\|_{power} \tag{9.27}$$

If the power can be distributed over more frequencies, a maximum peak in time can be created by proper phase alignment of the various components, as represented in Fig. 9.3.

Figure 9.3: Maximum sum of 3 properly phase aligned sine waves.

Suppose we have $n$ sine waves:

$$s(t) = a_1\sin(\omega_1 t + \varphi_1) + a_2\sin(\omega_2 t + \varphi_2) + a_3\sin(\omega_3 t + \varphi_3) + \dots + a_n\sin(\omega_n t + \varphi_n) \tag{9.28}$$



with total power equal to one. If we distribute the power equally over all sine waves we get:

$$\sum_{i=1}^{n} a_i^2 = 1, \qquad \forall i:\ a_i = a\ \Rightarrow\ a_i = a = \sqrt{\frac{1}{n}} \tag{9.29}$$

and consequently, with proper choice of the phases $\varphi_i$, the peak in the time domain equals:

$$\sum_{i=1}^{n} a_i = n\,\sqrt{\frac{1}{n}} = \sqrt{n} \tag{9.30}$$

Certainly, for the continuous case we have infinitely many frequencies, so that $n \to \infty$ and:

$$\lim_{n\to\infty} n\,\sqrt{\frac{1}{n}} = \lim_{n\to\infty} \sqrt{n} = \infty \tag{9.31}$$

So the bare fact that we have infinitely many frequencies available (a continuous spectrum) creates the possibility of infinitely large peaks in the time domain. Fortunately, this very worst case will usually not happen in practice, and we can put bounds in the frequency domain that will generally be sufficient for the practical kind of signals, virtually excluding the very exceptional occurrence of the above phase aligned sine waves. Nevertheless, fundamentally we have no mathematical basis to choose the proper weight $W_u$, and we have to rely on heuristics. Usually an actuator will be able to follow sine waves over a certain band. Beyond this band the steep increases and decreases of the signals cannot be tracked any more, and in particular the higher frequencies cause the high peaks. Therefore, in most cases $W_u$ has to have the character of a high pass filter, with a level equal to several times the maximum amplitude of a sine wave the actuator can track. The design based upon such a filter then has to be tested in a simulation with realistic reference signals and disturbances. If the actuator happens to saturate, it will be clear that $W_u$ should be increased in amplitude and/or bandwidth. If the actuator is excited far from saturation, the weight $W_u$ can be softened. This $W_u$ certainly forms the weakest aspect of the filter design.

9.2.3 Model errors and parsimony.

Like actuator saturation, model errors also put strict bounds, but they can fortunately be defined and measured directly in the frequency domain. As an example we treat the additive model error according to Fig. 9.4.

Figure 9.4: Additive model error $p$ from the difference between $P_t$ and $P$ acting on $u$.

We can measure $p = z_t - z = (P_t - P)u$. For each frequency we would like to obtain the difference $|P_t(j\omega) - P(j\omega)|$. In particular we are interested in the maximum deviation $\delta(\omega) \in \mathbb{R}$ such that:

$$\forall\omega:\ |P_t(j\omega) - P(j\omega)| = |\Delta_P(j\omega)| < \delta(\omega) \tag{9.32}$$

Since $P$ is a rational transfer, we would like to have the transfer $P_t$ in terms of gain and phase as a function of the frequency $\omega$. This can be measured by offering sine waves of increasing frequency to the real plant and measuring the amplitude and phase of the output over long periods, to monitor all changes that will usually occur. Given the known inputs, the deviating transfers $P_t$ for the respective frequencies can be computed. Alternatively, one could use broadbanded input noise and compute the various transfer samples by cross-correlation techniques.

Quite often these cumbersome measurements, which are contaminated by inevitable disturbances and measurement noise and are very hard to obtain in case of unstable plants, can be circumvented by proper computations. If the structure of the plant transfer is very well known but various parameter values are unclear, one can simply evaluate the transfers for sets of expected parameters and treat these as possible model-deviating transfers.

Next, the various deviating transfers for a typical set of frequencies, obtained either by measurements or by computations, should be evaluated in a polar (Nyquist) plot, contrary to what is often shown by means of a Bode plot. This is illustrated in Fig. 9.5. The model $P$ is given by:

$$P = \frac{1}{s+1} \tag{9.33}$$

while the deviating transfers $P_t$ are taken as:

$$P_t = \frac{0.8}{s+0.8}\ \ \text{or}\ \ \frac{0.8}{s+1.2}\ \ \text{or}\ \ \frac{1.2}{s+0.8}\ \ \text{or}\ \ \frac{1.2}{s+1.2} \tag{9.34}$$

Figure 9.5: Additive model errors in Bode and Nyquist plots.

Given the Bode plot, one is tempted to take the width of the band in the gain plot as a measure for the additive model error for each frequency. This would lead to:


$$\max_{P_t}\ \big|\,|P_t| - |P|\,\big| \tag{9.35}$$

which is certainly wrong. In the Nyquist plot we have indicated, for $\omega = 1$, the model transfer by 'M' and the several deviating transfers by 'X'. The maximum model error is clearly given by the radius of the smallest circle around 'M' that encompasses all plants 'X'. Then we really obtain the vectorial differences for each $\omega$:

$$\delta(\omega) = \max_{P_t}\ |P_t(j\omega) - P(j\omega)| \tag{9.36}$$

The reader is invited to analyse how the wrong measure of equation 9.35 can be distinguished in the Nyquist plot.

Finally, we have the following bounds for each frequency:

$$|\Delta_P(j\omega)| < \delta(\omega) \tag{9.37}$$

The signal $p$ in Fig. 9.4 is that component in the disturbance free output of the true process $P_t$, due to input $u$, that is not accounted for by the model output $Pu$. This component can be represented by an extra disturbance at the output in the generalised plant, like in Fig. 9.2, but now with a weighting filter $p = V_p(j\omega)\tilde{p}$. If the goal is $\|M\|_\infty < 1$ we have:

$$\|W_u R V_p\|_\infty < 1\ \Leftrightarrow\ \forall\omega:\ |W_u R V_p| < 1 \tag{9.38}$$

For robust stability, based on the small gain theorem, we have as condition:

$$\|R\,\Delta_P\|_\infty < 1\ \Leftrightarrow\ \forall\omega:\ |R\,\Delta_P| < 1\ \Leftrightarrow \tag{9.39}$$

$$\forall\omega:\ \frac{|\Delta_P|}{|W_u V_p|}\,|W_u R V_p| < 1 \tag{9.40}$$

Given the bounded transfer of equation 9.38, a sufficient condition is:

$$\forall\omega:\ |\Delta_P| < |W_u V_p| \tag{9.41}$$

and this can be guaranteed if the weights are sufficiently large, such that the bounded model perturbations of equation 9.37 can be brought in as:

$$\forall\omega:\ |\Delta_P| < \delta(\omega) < |W_u V_p| \tag{9.42}$$

Of course, for stability also the other input weight filters $V_d$, $V_r$ or even $V_\eta$, instead of $V_p$, could have been used, because combined with $W_u$ they all limit the control sensitivity $R$. Consequently, for robust stability it is sufficient to have:

$$\forall\omega:\ \delta(\omega) < \sup\{|W_u V_d|,\ |W_u V_r|,\ |W_u V_\eta|\} \tag{9.43}$$

If this condition is fulfilled, we don't have to introduce an extra filter $V_p$ for stability. The extra exogenous input $p$ can be preferred for proper quantisation of the control signal $u$, but this can also be done by increasing $W_u$ properly.

The best is to combine the exogenous inputs $d$, $r$ and $p$ into one signal $n$, like we did in section 9.1.2, but now with an appropriate combination for each frequency. This boils down to finding a rational filter transfer $V_n(j\omega)$ such that:

$$\forall\omega:\ |V_n(j\omega)| \ge |V_d(j\omega)| + |V_r(j\omega)| + |V_p(j\omega)| \tag{9.44}$$

Again, the routine "magshape" in the LMI toolbox can help here. Pragmatically, one usually combines only $V_d$ and $V_r$ into $V_n$ and checks whether:

$$\forall\omega:\ \delta(\omega) < |W_u(j\omega) V_n(j\omega)| \tag{9.45}$$

is satisfied. If not, the weighting filter $W_u$ is adapted until the condition is satisfied.

9.2.4 We bounded by fundamental constraint: S + T = I

For a typical low-sized $H_\infty$ problem like:

$$z = \begin{pmatrix}\tilde{u}\\ \tilde{e}\end{pmatrix} = \begin{pmatrix}W_u R V_n & W_u R V_\eta\\ W_e S V_n & W_e T V_\eta\end{pmatrix}\begin{pmatrix}\tilde{n}\\ \tilde{\eta}\end{pmatrix} = M w \tag{9.46}$$

all weights have been discussed except for the performance weight $W_e$. The characterising filters of the exogenous inputs $\tilde{n}$ and $\tilde{\eta}$ left little choice, as these were determined by the actual signals to be expected for the closed loop system. The control weighting filter $W_u$ was defined by rigorous bounds derived from actuator limitations and model perturbations. Now it is to be seen how good a final performance can be obtained by an optimal choice of the error filter $W_e$. We would like the final closed loop system to show good tracking behaviour and disturbance rejection over a broad frequency band. Unfortunately, $W_e$ will appear to be restricted by many bounds, induced by limitations in actuators, sensors, model accuracy and the dynamic properties of the plant to be controlled. The influence of the plant dynamics will be discussed in the next section. Here we show how the influences of actuator, sensor and model accuracy put restrictions on the performance via respectively $W_u$, $V_\eta$ and the combination of $W_u$, $V_n$ and $V_\eta$. The mentioned filters all bound the complementary sensitivity $T$ as a constraint:

$$\{\|W_u R V_n\|_\infty < 1\}\ \Leftrightarrow\ \{\forall\omega:\ |W_u R V_n| = |W_u P^{-1} T V_n| < 1\} \tag{9.47}$$

$$\{\|W_u R V_\eta\|_\infty < 1\}\ \Leftrightarrow\ \{\forall\omega:\ |W_u R V_\eta| = |W_u P^{-1} T V_\eta| < 1\} \tag{9.48}$$

$$\{\|W_e T V_\eta\|_\infty < 1\}\ \Leftrightarrow\ \{\forall\omega:\ |W_e T V_\eta| < 1\} \tag{9.49}$$

In the above inequalities the plant transfer $P$, which contrary to the controller $C$ is not optional, functions as part of the weights on $T$. Because $T$ is bounded accordingly, the freedom in the performance, represented by the sensitivity $S$, is bounded on the basis of the fundamental constraint $S + T = I$.

The constraints on $T$ can be represented as $\|W_2 T V_2\|_\infty < 1$, where $W_2$ and $V_2$ represent the various weight combinations of inequalities 9.47–9.49. Renaming the performance aim as:


$$\|W_e S V_n\|_\infty \stackrel{def}{=} \|W_1 S V_1\|_\infty < 1 \tag{9.50}$$

we can now repeat the comments made in section 7.5. The $H_\infty$ design problem requires:

$$\|W_1 S V_1\|_\infty < 1\ \Leftrightarrow\ \forall\omega:\ |S(j\omega)| < |W_1(j\omega)^{-1} V_1(j\omega)^{-1}| \tag{9.51}$$

$$\|W_2 T V_2\|_\infty < 1\ \Leftrightarrow\ \forall\omega:\ |T(j\omega)| < |W_2(j\omega)^{-1} V_2(j\omega)^{-1}| \tag{9.52}$$

A typical weighting situation for the mixed sensitivity problem is displayed in Fig. 9.6.

Figure 9.6: Typical mixed sensitivity weights.

It is clear that not both $|S| < 1/2$ and $|T| < 1/2$ can be obtained, because $S + T = 1$ in the SISO case. Consequently, the intersection point of the inverse weights should be greater than 1/2:

$$\exists\omega:\ \frac{1}{|W_1 V_1|} = \frac{1}{|W_2 V_2|} > 1/2 \tag{9.53}$$

This is still too restrictive, because it is not to be expected that equal phase 0 can be accomplished by any controller at the intersection point. To allow for sufficient freedom in phase, it is usually required to take at least:

$$\exists\omega:\ \frac{1}{|W_1 V_1|} = \frac{1}{|W_2 V_2|} > 1\ \Leftrightarrow \tag{9.54}$$

$$\exists\omega:\ |W_1 V_1| = |W_2 V_2| < 1 \tag{9.55}$$

It can easily be understood that the $S$ and $T$ vectors for frequencies in the neighbourhood of the intersection point can then only be taken in the intersection area of the two circles in Fig. 9.7.

Figure 9.7: Possibilities for $|S| < 1$, $|T| < 1$ and $S + T = 1$.

Consequently, it is crucial that the point of intersection of the curves $|W_1(j\omega)V_1(j\omega)|$ and $|W_2(j\omega)V_2(j\omega)|$ lies below the 0 dB level, otherwise there would be a conflict with $S + T = 1$ and there would be no solution with $\|M\|_\infty \le 1$! Consequently, heavily weighted bands ($> 0$ dB) for $S$ and $T$ should always exclude each other.

Further away from the intersection point the condition $S + T = 1$ requires that for small $S$ the $T$ should effectively be close to 1, and vice versa. If we want:

$$\left\{|S| < \frac{1}{|W_1 V_1|}\right\} \cap \left\{|T| < \frac{1}{|W_2 V_2|}\right\} \tag{9.56}$$

then necessarily:

$$1 - S = T\ \Rightarrow\ 1 - \frac{1}{|W_1 V_1|} < |T| < \frac{1}{|W_2 V_2|} \tag{9.57}$$

which essentially tells us that for an aimed small $S$, enforced by $|W_1 V_1|$, the weight $|W_2 V_2|$ should be chosen less than 1, and vice versa.

Generally this can be accomplished, but an extra complication occurs when $W_1 = W_2$, while $V_1$ and $V_2$ have fixed values as they characterise real signals. This happens in the example under study, where we have $W_e S V_n$ and $W_e T V_\eta$. This leads to an upper bound for the filter $W_e$ according to:

$$1 - \frac{1}{|W_e V_n|} < \frac{1}{|W_e V_\eta|}\ \Rightarrow\ |W_e| < \frac{1}{|V_\eta|} + \frac{1}{|V_n|} \tag{9.58}$$

The better the sensor, the smaller the measurement filter $|V_\eta|$ can be, the larger the filter $|W_e|$ can be chosen, and the better the ultimate performance will be. Again this reflects the fact that we can never control better than the accuracy of the sensor allows us. We encountered this very same effect before in the one frequency example. Indeed, this effect particularly poses a significant limit on the aim of accomplishing zero tracking error at $t = \infty$: a final zero error in the step response for a control loop including an integrator should therefore be understood within this measurement noise effect.

9.3 Limitations due to plant characteristics.

In the previous subsections the weights $V$ have been based on the exogenous input characteristics. The weight $W_u$ was determined by the actuator limits and the model perturbations. Finally, limits on the weight $W_e$ were derived based on the relation $S + T = I$. Never were the characteristics of the plant itself considered. It appears that these very


dynamical properties put bounds on the final performance. This is clear if one accepts that some effort is to be made to stabilise the plant, which will inevitably be at the cost of the performance. We will see that not so much instability, but in particular nonminimum phase zeros and limited gain, can have detrimental effects on the final performance.

9.3.1 Plant gain.

Let us forget about the low measurement noise for the moment and concentrate on the remaining mixed sensitivity problem:

$$z = \begin{pmatrix}\tilde{u}\\ \tilde{e}\end{pmatrix} = \begin{pmatrix}W_u R V_n\\ W_e S V_n\end{pmatrix}\tilde{n} = M w \tag{9.59}$$

From chapter 4 we know that for stable plants $P$ we may use the internal model implementation of the controller, where $Q = R$ and $S = 1 - PQ$. Very high weights $|W_e|$ for good tracking necessarily require:

$$\forall\omega:\ \{|W_e S V_n| = |W_e(1 - PQ)V_n| < 1\}\ \Leftrightarrow \tag{9.60}$$

$$\left\{|(1 - PQ)V_n| < \frac{1}{|W_e|} \to 0\right\}\ \Rightarrow \tag{9.61}$$

$$\{Q = P^{-1}\} \tag{9.62}$$

Even in the case that $P$ is invertible, it needs to have sufficient gain, since the first term in the mixed sensitivity problem yields:

$$\forall\omega:\ \{|W_u R V_n| = |W_u P^{-1} V_n| < 1\}\ \Leftrightarrow \tag{9.63}$$

$$\{|P(j\omega)| > |W_u(j\omega) V_n(j\omega)|\}\ \Leftrightarrow \tag{9.64}$$

$$\left\{|P(j\omega)|\,\frac{1}{|W_u(j\omega)|} > |V_n(j\omega)|\right\} \tag{9.65}$$

The last constraint simply states that, given the bound on the actuator input by $|1/W_u|$, the maximum effect of an input $u$ at the output, viz. $|P/W_u|$, should potentially compensate the maximum disturbance $|V_n|$. That is, the gain of the plant $P$, for each frequency in the tracking band, should be large enough to compensate for the disturbance $n$ as a reaction to the input $u$. In the frequency domain this is the same constraint as we found in subsection 9.1.3.

Typically, if we compare the lower bound on the plant with the robustness constraint on the additive model perturbation, we get:

$$\forall\omega:\ |P| > |W_u V_n| > |\Delta_P| \tag{9.66}$$

which says that a modelling error larger than 100% will certainly prevent tracking and disturbance reduction. In terms of $Q = R$, the first term of the mixed sensitivity problem requires:

$$\forall\omega:\ \{|W_u Q V_n| < 1\}\ \Leftrightarrow\ \left\{|Q| < \frac{1}{|W_u V_n|}\right\} \tag{9.67}$$

The above bound on $|Q|$ prohibits taking $Q = P^{-1}$ when $|P|$ is too small for certain frequencies $\omega$, so that we will always have:

$$\forall\omega:\ \{|PQ| < 1\}\ \Rightarrow\ \left\{|1 - PQ| > 1 - |PQ| > 1 - \frac{|P|}{|W_u V_n|} > 0\right\} \tag{9.68}$$

Consequently, we learn from the condition $|W_e(1 - PQ)V_n| < 1$:

$$\forall\omega:\ |V_n(1 - PQ)| < \frac{1}{|W_e|}\ \Rightarrow\ |W_e| < \frac{1}{|V_n|\left(1 - \frac{|P|}{|W_u V_n|}\right)} = \frac{1}{|V_n| - \frac{|P|}{|W_u|}} \tag{9.69}$$

and the best sensitivity we can expect for such a weight $W_e$ is necessarily close to its upper bound, given by:

$$\forall\omega:\ |S| < \frac{1}{|W_e V_n|} = \frac{|V_n| - \frac{|P|}{|W_u|}}{|V_n|} = 1 - \frac{|P|}{|W_u V_n|} \tag{9.70}$$

9.3.2 RHP-zeros.

For perfect tracking and disturbance rejection one should be able to choose $Q = P^{-1}$. In the previous section this was thwarted by the range of the actuator or by the model uncertainty, mainly via $W_u$. Another condition on $Q$ is stability, and here the nonminimum phase or RHP (Right Half Plane) zeros are the spoil-sport. The crux is that no controller $C$ may compensate these zeros by RHP poles, as the closed loop system would then become internally unstable. So, necessarily, from the maximum modulus principle introduced in chapter 4, we get:

$$\sup_\omega |W_e(j\omega) S(j\omega) V_n(j\omega)|\ \ge\ |W_e(z)(1 - P(z)Q(z))V_n(z)| = |W_e(z) V_n(z)| \tag{9.71}$$

where $z$ is any RHP zero, at which necessarily $P(z) = 0$ and $|Q(z)| < \infty$. Unfortunately, this puts a lower bound on the weighted sensitivity. Because we want the weighted sensitivity to be less than one, we should at least require that the weights satisfy:

$$|W_e(z) V_n(z)| < 1 \tag{9.72}$$

This puts a strong constraint on the choice of the weight $W_e$, because heavy weights in the band on the imaginary axis where we would like to have a small $S$ will have to be arranged by poles and zeros of $W_e$ and $V_n$ in the LHP, and the "mountain peaks" caused by the poles will certainly have their "mountain ridges" passed on to the RHP, where at the position of the zero $z$ their height is limited according to the above formula. This is quite an abstract explanation. Let us therefore turn to the background of the RHP zeros and a simple example.

disturbance rejection, as can easily be grasped.<br />

All is well, but what should be done if the gain of P is insu cient, at least at certain<br />

frequencies? Simply adapt your performance aim by decreasing the weight We as follows.<br />

Starting from the constraint wehave:
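The adapted weight bound of equation 9.69 is easy to evaluate pointwise per frequency. The following Python sketch is an added illustration (the magnitudes for |P|, |Wu| and |Vn| are made up, not taken from the notes):

```python
# Upper bound (9.69) on the performance weight when the plant gain is small:
#   |We| < 1 / (|Vn| - |P|/|Wu|)   wherever |P|/|Wu| < |Vn|.
def we_bound(P, Wu, Vn):
    margin = abs(Vn) - abs(P) / abs(Wu)
    if margin <= 0:
        return float('inf')  # plant gain sufficient (9.65): no back-off needed
    return 1.0 / margin

print(we_bound(P=0.2, Wu=1.0, Vn=1.0))  # -> 1.25: We must stay modest here
print(we_bound(P=2.0, Wu=1.0, Vn=1.0))  # -> inf
```

When the margin is non-positive, condition (9.65) already holds and the weight need not be decreased at that frequency.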


132 CHAPTER 9. FILTER SELECTION AND LIMITATIONS.<br />

9.3. LIMITATIONS DUE TO PLANT CHARACTERISTICS. 131<br />

Nonminimum phase zeros, as the engineering name indicates, originate from some strange internal phase characteristics, usually by contradictory signs of behaviour in certain frequency bands. As an example may serve:

P(s) = P1(s) + P2(s) = 1/(s + 1) − 2/(s + 10) = −(s − 8)/((s + 1)(s + 10))    (9.73)

[Figure: step responses (From U(1), To Y(1)) and Bode magnitudes of PC/(PC+1), P1C/(P1C+1), P2C/(P2C+1) and the corresponding transfers PC/s(PC+1), P1C/s(P1C+1), P2C/s(P2C+1).]

Figure 9.9: Closed loop of nonminimum phase plant and its components.
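As a quick numerical check (a sketch in Python rather than the MATLAB used elsewhere in these notes), one can verify that the two first order components indeed cancel at s = 8, producing the RHP-zero, and that other gains would move the zero:

```python
# Verify that P(s) = 1/(s+1) - 2/(s+10) has a zero at s = 8,
# and that different component gains move the zero elsewhere.
def P(s, g1=1.0, g2=2.0):
    """Sum of two first order transfers with opposing signs."""
    return g1 / (s + 1) - g2 / (s + 10)

def zero(g1, g2):
    # solve g1(s+10) = g2(s+1) for the zero of g1/(s+1) - g2/(s+10)
    return (g2 - 10 * g1) / (g1 - g2)

print(P(8.0))          # -> 0.0: the RHP-zero at z = 8
print(zero(1.0, 2.0))  # -> 8.0
print(zero(1.0, 0.5))  # -> -19.0: with these gains the zero lands in the LHP
```

The last line illustrates the remark in the text that different gains of the two first order transfers can put the zero in the LHP instead.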

The two transfer components show competing effects because of their signs. The sign of the transfer with the slowest pole at −1 is positive; the sign of the other transfer, with the faster pole at −10, is negative. Brought into one rational transfer function, this effect causes the RHP-zero at z = 8. Note that the zero lies right between the two poles in absolute value. The zero could also have occurred in the LHP, e.g. for different gains of the two first order transfers (try for yourself). In that case a controller could easily cope with the phase characteristic by putting a pole on this LHP-zero. In the RHP this is not allowed because of the internal stability requirement. So, let us take a straightforward PI-controller that compensates the slowest pole:

P(s)C(s) = −(K(s + 1)/s) · (s − 8)/((s + 1)(s + 10))    (9.74)

and take the controller gain K such that we obtain equal real and imaginary parts for the closed loop poles, as shown in Fig. 9.8, which leads to K = 3.

[Figure: root locus in the complex plane, real and imaginary axes ranging from −20 to 20.]

Figure 9.8: Root locus for the PI-controlled nonminimum phase plant.

In Fig. 9.9 the step response and the Bode plot of the closed loop system are shown. Also shown are the results for the same controller applied to the one component P1(s) = 1/(s + 1) or the other component P2(s) = −2/(s + 10). The Bode plot shows a total gain enclosed by the two separate components, and the component −2/(s + 10) is even more broadbanded. Alas, if we had only this component, the chosen controller would make the plant unstable, as seen in the step response. For the higher frequencies the phase of the controller is incorrect. For the lower frequencies ω ∈ (0, 3.5) the phase of the controller is appropriate and the plant is well controlled. The effect of the higher frequencies is still seen at the initial time of the response, where the direction (sign) is wrong.

As a consequence, for the choice of We for such a system we cannot aim at a broader frequency band than, as a rule of thumb, ω ∈ (0, |z|/2), and also the gain of We is limited. This limit is reflected in the limitation found above:

|We(z)Vn(z)| < 1    (9.75)

If, on the other hand, we would like to obtain good tracking for a band ω ∈ (2|z|, 100|z|), the controller can indeed well be chosen to control the component −2/(s + 10), while now the other component 1/(s + 1) is the nasty one. In a band ω ∈ (|z|/2, 2|z|) we can never track well, because the opposite effects of both components of the plant are apparent in their full extent.

If we have more RHP-zeros zi, we have as many forbidden tracking bands ω ∈ (|zi|/2, 2|zi|). Even zeros at infinity play a role, as explained in the next subsection.

9.3.3 Bode integral.

For strictly proper plants combined with strictly proper controllers we will have zeros at infinity. It is irrelevant whether infinity is in the RHP. Zeros at infinity should be treated like all RHP-zeros, simply because they cannot be compensated by poles. Because in practice each system is strictly proper, the combination of plant and controller L(s) = P(s)C(s) has at least a pole zero excess (#poles − #zeros) of two. Consequently it is required that:

|We(∞)Vn(∞)| < 1    (9.76)

and we necessarily have:

lim_{s→∞} |S| = lim_{s→∞} |1/(1 + L(s))| = 1    (9.77)

Any tracking band will necessarily be bounded. However, how can we see the influence of zeros at infinity on a finite band? Here the Bode Sensitivity Integral gives us an impression (the proof can be found in e.g. Doyle [2]). If the pole zero excess is at least 2 and we have no RHP-poles, the following holds:
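The Bode sensitivity integral stated below lends itself to a quick numerical check. The following Python sketch (an illustration added here, not part of the original notes) integrates ln |S(jω)| for the loop L(s) = K/(s(s + 100)) with K = 2100 that serves as the example in this subsection:

```python
# Numerical check of the Bode sensitivity integral (9.78):
#   integral_0^inf ln|S(jw)| dw = 0
# for L(s) = K/(s(s+100)) with K = 2100 (pole-zero excess 2, no RHP poles).
import math

K = 2100.0

def integrand(x):
    # substitute w = e^x so the log singularity at w = 0 is handled smoothly
    w = math.exp(x)
    s = complex(0.0, w)
    S = 1.0 / (1.0 + K / (s * (s + 100.0)))
    return math.log(abs(S)) * w   # ln|S(jw)| times dw/dx

def simpson(f, a, b, n):
    # composite Simpson rule, n must be even
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(a + i * h)
    return acc * h / 3.0

area = simpson(integrand, math.log(1e-8), math.log(1e6), 100_000)
print(abs(area) < 0.01)  # -> True: positive and negative areas of ln|S| cancel
```

The small residual stems from truncating the integration interval; the analytic tail beyond ω = 10⁶ is of order K/10⁶.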



∫₀^∞ ln |S(jω)| dω = 0    (9.78)

The explanation can best be given with an example:

L(s) = P(s)C(s) = K/(s(s + 100))    (9.79)

so that the sensitivity in closed loop will be:

S = s(s + 100)/(s² + 100s + K)    (9.80)

For increasing controller gain K = {2100, 21000, 210000} the tracking band becomes broader, but we have to pay with higher overshoot in both frequency and time domain, as Fig. 9.10 shows.

[Figure: |S(jω)| on log-log axes for K = 2100, 21000 and 210000.]

Figure 9.10: Sensitivity for a loop transfer with pole zero excess 2 and no RHP-poles.

The Bode rule states that the area of |S(jω)| under 0 dB equals the area above it. Note that we have, as usual, a horizontal logarithmic scale for ω in Fig. 9.10, which visually disrupts the concept of equal areas. Nevertheless the message is clear: the smaller the tracking error and disturbance we want to obtain over a broader band, the more we have to pay for this by a more than 100% tracking error and disturbance multiplication outside this band.

9.3.4 RHP-poles.

The RHP-zeros play a fundamental role in the performance limitation because they cannot be compensated by poles in the controller and will thus persist in the closed loop as well. The RHP-poles can likewise not be compensated by RHP-zeros in the controller, again because of internal stability, but in closed loop they have been displaced into the LHP by means of the feedback. So in closed loop they no longer exist, and consequently their effect is not as severe as that of the RHP-zeros. Nevertheless, their shift towards the LHP has to be paid for, as we will see.

The effect of RHP-poles cannot be analysed by means of the internal model, because this concept can only be applied to stable plants P. The straightforward generalisation of the internal model for unstable plants has been explained in chapter 11. Essentially, the plant is first fed back for stabilisation and next an extra external loop with a stable controller Q is applied for optimisation. So the idea is first stabilisation and on top of that optimisation of the stable closed loop. It will be clear that the extra effort of stabilisation has to be paid for. The currency is the use of the actuator range. Part of the actuator range will be occupied by the stabilisation task, so that less is left for the optimisation compared with a stable plant, where we can use the whole range of the actuator for optimisation. This can be illustrated by a simple example represented in Fig. 9.11.

[Figure: unity feedback loop with reference r, proportional controller K, plant 1/(s ∓ a), control signal u and output disturbance n.]

Figure 9.11: Example for stabilisation effort.

The plant has either a pole in the RHP at a > 0 or a pole in the LHP at −a < 0. The proportional controller K is bounded by the range |u| < umax, while the closed loop should be able to track a unit step. The control sensitivity is given by:

u = Rr = (K/(1 + K/(s ∓ a))) r = (K(s ∓ a)/(s ∓ a + K)) r    (9.81)

For stability we certainly need K > a. The maximum |u| for a unit step occurs at t = 0, so:

max_t(u) = u(0) = lim_{s→∞} R(s) = K = umax    (9.82)

So it is immediately clear that, limited by the actuator saturation, the closed loop pole can maximally be shifted umax to the left. Consequently, for the unstable plant, a part a is used for stabilisation of the plant and only the remainder K − a can be used for a bandwidth K − a, as illustrated in Fig. 9.12. Note that the actuator range should be large enough, i.e. umax > a; otherwise stabilisation is not possible. It defines a lower bound on K > a. With the same effort we obtain a tracking band of K + a for the stable plant. Also the final error for the step response is smaller:

e = Sr  ⇒  e(∞) = lim_{s→0} (s ∓ a)/(s ∓ a + K) = ∓a/(K ∓ a)    (9.83)

and certainly



a/(K + a) < a/(K − a)    (9.84)

being the respective absolute final errors. Let us show these effects by assuming some numerical values: K = umax = 5, a = 1, which leads to poles of respectively −4 and −6 and corresponding bandwidths. The final errors are respectively 1/6 and 1/4. Fig. 9.13 shows the two step responses. Also the two sensitivities are shown, where the differences in bandwidth and in the final error (at ω = 0) are evident.

[Figure: root loci on the real axis, poles at ±a shifted to −K ± a for K = umax.]

Figure 9.12: Root loci for both plants 1/(s ∓ a).

[Figure 9.13: step responses and sensitivities for P = 1/(s − 1) and P = 1/(s + 1), both with K = 5.]

three times a weighted complementary sensitivity, of which two are explicitly weighted control sensitivities:

WeTV   ⇒  WT VT = WeV    (9.87)
WuRVn  ⇒  WT VT = WuVn/P    (9.88)
WuRV   ⇒  WT VT = WuV/P    (9.89)

Only the first entry yields bounds on the weights according to:

|We(p)V(p)| < 1    (9.90)

because for the other two (with R = T/P) holds:

|Wu(p)Vn(p)/P(p)| = 0 < 1    (9.91)

as |P(p)| = ∞.

The condition of inequality 9.90 is only a poor condition, because measurement noise is usually very small. This is not the effect we are looking for, but alas I have not been able to find it explicitly. You are invited to express the stabilisation effort explicitly in the weights.

In the Bode integral the effect of RHP-poles is evident, because if we have Np unstable poles pi the Bode integral changes into:

∫₀^∞ ln |S(jω)| dω = π Σ_{i=1..Np} Re(pi)
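The numerical example above can be reproduced with a few lines of Python (an added illustration, not from the original text), evaluating the closed loop pole and the final step error S(0) for both plants 1/(s ∓ a) with K = umax = 5 and a = 1:

```python
# Closed loop of P = 1/(s - a) (unstable) or 1/(s + a) (stable) with
# proportional gain K: S = (s -/+ a)/(s -/+ a + K), so the closed loop pole
# sits at the plant pole shifted K to the left, and e(inf) = S(0).
def closed_loop(K, a, unstable):
    plant_pole = a if unstable else -a
    cl_pole = plant_pole - K                 # feedback shifts the pole K left
    e_inf = -plant_pole / (K - plant_pole)   # final step tracking error S(0)
    return cl_pole, e_inf

print(closed_loop(5.0, 1.0, unstable=True))   # -> (-4.0, -0.25)
print(closed_loop(5.0, 1.0, unstable=False))  # -> (-6.0, 0.16666666666666666)
```

The unstable plant keeps only a bandwidth of K − a = 4 and a final error of 1/4, where the stable one gets K + a = 6 and 1/6, in line with equations 9.83 and 9.84.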



where we used that h(0) = 0 when the closed loop system is strictly proper and that h(∞) is finite. Because it is straightforward that

∫₀^∞ e^{−pt} dt = [−e^{−pt}/p]₀^∞ = 1/p    (9.97)

this can be rewritten as the vanishing of the weighted step response error:

∫₀^∞ (1 − h(t)) e^{−pt} dt = 0    (9.98)

Equation 9.98 restricts the attainable step responses: the integral of the step response error, weighted by e^{−pt}, must vanish. As h(t) is below 1 for small values of t, this area must be compensated by values above 1 for larger t, and this compensation is discounted for t → ∞ by the weight e^{−pt}, and even more so if the steady state error happens to be zero by integral control action. So the step response cannot show an infinitesimally small error for a long time and still satisfy 9.98. The larger p is, the shorter the available compensation time will be, during which the response is larger than 1. If an unstable pole and actuator limitations are both present, the initial error integral of the step response is bounded from below, and hence there must be a positive control error area which is at least as large as the initial error integral due to the weight e^{−pt}. Consequently either large overshoot and rapid convergence to the steady state value, or small overshoot and slow convergence must occur. For our example P = 1/(s − 1) we can choose C = 5, as we did before, or C = 5(s + 1)/s to accomplish zero steady state error while still avoiding actuator saturation. The respective step responses are displayed in Fig. 9.14 together with the weight e^{−t}.

[Figure 9.14: step responses for C = 5 and C = 5(s + 1)/s, together with the weight e^{−t}.]

actuator range could not be made explicit in a bound on the allowable weighting filters for the left-over performance. If you have a good idea yourself, you will certainly get a good mark for this course.

9.3.5 RHP-poles and RHP-zeros

It goes without saying that, when a plant has both RHP-zeros and RHP-poles, the limitations of both effects will at least add up. It will be more, because the stabilisation effort will be larger. RHP-zeros attract root loci to the RHP, while we want to pull the root loci over the imaginary axis into the LHP. The stabilisation is in particular a heavy task when we have to deal with alternating poles and zeros on the positive real axis. These plants are infamous, because they can only be stabilised by unstable and nonminimum phase controllers, which add to the limitations again. Such plants are called "not strongly stabilisable". Take for instance a plant with a zero z > 0, a pole p > 0 and an integrator pole at 0. If z < p, the root loci stay on the positive real axis, as sketched in Fig. 9.15.

[Figure: root loci on the positive real axis between the integrator pole at 0, the zero O at z and the pole X at p, with branches marked K > 0 and K < 0.]

Figure 9.15: Root loci for a plant which is not strongly stabilisable.

Only if we add RHP-zeros and RHP-poles in the controller, such that we alternately have pairs of zeros and poles on the positive real axis, can we accomplish that the root loci leave the positive real axis and can be drawn to the LHP, as illustrated in Fig. 9.16.

[Figure: root loci for K > 0 with the alternating controller zero/pole pairs (Fig. 9.16).]
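The weighted error integral constraint of equation 9.98 can be verified numerically for the earlier example P = 1/(s − 1) with C = 5; the Python sketch below is illustrative only (the closed loop T(s) = 5/(s + 4) follows from PC/(1 + PC)):

```python
# For P = 1/(s-1) with C = 5 the closed loop is T(s) = 5/(s+4), whose unit
# step response is h(t) = 1.25*(1 - exp(-4t)). Check equation 9.98 at the
# unstable pole p = 1:  integral_0^inf (1 - h(t)) * exp(-p*t) dt = 0.
import math

def weighted_error(t, p=1.0):
    h = 1.25 * (1.0 - math.exp(-4.0 * t))
    return (1.0 - h) * math.exp(-p * t)

N, T_end = 200_000, 40.0   # the integrand is negligible beyond t = 40
dt = T_end / N
integral = 0.5 * (weighted_error(0.0) + weighted_error(T_end)) * dt
integral += sum(weighted_error(i * dt) for i in range(1, N)) * dt
print(abs(integral) < 1e-6)  # -> True: early deficit cancels later overshoot
```

Analytically, 1 − h(t) = −1/4 + (5/4)e^{−4t}, and the two exponential integrals contribute −1/4 and +1/4, which cancel exactly.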



passing through the LHP first. In Skogestad & Postlethwaite [15] this is formalised in the following bounding theorem:

Theorem: Combined RHP-poles and RHP-zeros. Suppose that P(s) has Nz RHP-zeros zj and Np RHP-poles pi. Then for closed-loop stability the weighted sensitivity function must satisfy for each RHP-zero zj:

‖WS S VS‖∞ ≥ c1j |WS(zj)VS(zj)|,   c1j = ∏_{i=1..Np} |zj + p̄i| / |zj − pi|    (9.99)

and the weighted complementary sensitivity function must satisfy for each RHP-pole pi:

‖WT T VT‖∞ ≥ c2i |WT(pi)VT(pi)|,   c2i = ∏_{j=1..Nz} |z̄j + pi| / |zj − pi|    (9.100)

where WS and VS are sensitivity weighting filters like the pair {We, Vn}. Similarly, WT and VT are complementary sensitivity weighting filters like the pair {We, V}. If we want the infinity norms to be less than 1, the above inequalities put upper bounds on the weighting filters. On the other hand, if we apply the theorem without weights, we get:

‖S‖∞ ≥ max_j c1j,   ‖T‖∞ ≥ max_i c2i    (9.101)

This shows that large peaks for S and T are unavoidable if we have a RHP-pole and a RHP-zero located close to each other.

9.3.6 MIMO.

The previous subsections were based on the silent assumption of a SISO plant P. For MIMO plants fundamentally the same restrictions hold, but the interpretation is more complicated. For example, the plant gain is multivariable and the consequent limitations need further study. For an m input, m output plant the situation is sketched in Fig. 9.17, where e.g. m = 3.

[Figure: block scheme with scaled references r̃i entering through filters Vri, the 3x3 plant P, scaled control signals ũi entering through Wui⁻¹, and scaled errors ẽi leaving through filters Wei.]

Figure 9.17: Scaling of a 3x3-plant.

The scaled tracking error ẽ as a function of the scaled reference r̃ and the scaled control signal ũ is given by:

ẽ = (ẽ1 ; ẽ2 ; ẽ3) = We Vr r̃ − We P Wu⁻¹ ũ = We (Vr r̃ − P Wu⁻¹ ũ)    (9.102)

with the diagonal weighting matrices We = diag(We1, We2, We3), Vr = diag(Vr1, Vr2, Vr3), Wu = diag(Wu1, Wu2, Wu3) and the full transfer matrix P = (Pij), i, j = 1, 2, 3.

Note that we have, as usual, diagonal weights, where |Wui(jω)| stands for the maximum range of the corresponding actuator at the particular frequency ω. Also the aimed range of the reference ri is characterised by |Vri(jω)| and should at least correspond to the permitted range of the particular output zi at frequency ω. For heavy weights We, in order to make ẽ ≈ 0, we need:

Vr r̃ = P Wu⁻¹ ũ,  i.e.  r = Pu    (9.103)
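For a single real RHP-zero z and RHP-pole p, the peak bounds of the theorem above reduce to c = |z + p̄|/|z − p|; a small Python sketch (an added illustration):

```python
# SISO instance of the bounds (9.99)-(9.101): with one RHP-zero z and one
# RHP-pole p, ||S||inf and ||T||inf are both at least |z + conj(p)|/|z - p|.
def peak_bound(z, p):
    return abs(z + p.conjugate()) / abs(z - p)

print(peak_bound(4.0 + 0j, 1.0 + 0j))  # -> 1.6666666666666667: mild limitation
print(peak_bound(4.0 + 0j, 3.0 + 0j))  # -> 7.0: close pole/zero force large peaks
```

As z approaches p the bound blows up, which is exactly the "large peaks are unavoidable" message of the text.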



The ranges of the actuators should be sufficiently large in order to excite each output up to the wanted amplitude, expressed by:

ũ = Wu P⁻¹ Vr r̃    (9.104)

so that, for all scaled references with ‖r̃‖₂ ≤ 1,

‖ũ‖₂ ≤ ‖Wu P⁻¹ Vr‖∞ ‖r̃‖₂ ≤ 1  ⇔  ‖Wu P⁻¹ Vr‖∞ ≤ 1    (9.105)

Because σmax(A⁻¹) = 1/σmin(A), we may write:

∀ω : σmax(Wu P⁻¹ Vr) ≤ 1  ⇔  ∀ω : σmin(Vr⁻¹ P Wu⁻¹) ≥ 1    (9.106)

which simply states that the gains of the scaled plant, in the form of the singular values, should all be larger than 1. The plant is scaled with respect to each allowed input ui and each aimed output zj for each frequency ω. A singular value less than one implies that a certain aimed combination of outputs, indicated by the corresponding left singular vector, cannot be achieved by any allowed input vector with ‖ũ‖ ≤ 1.

We presented the analysis for the tracking problem. Exactly the same holds of course for the disturbance rejection, for which Vd should be substituted for Vr. Note that the difference in sign for r and d does not matter. Also the additive model perturbation, i.e. Vp and Wu, can be treated in the same way, and certainly the combination of tracking, disturbance reduction and model error robustness by means of Vn.

In the above derivation we assumed that all matrices were square and invertible. If we have m inputs against p outputs where p > m (tall transfer matrix), we are in trouble. We actually have p − m singular values equal to 0, which is certainly less than 1. It says that certain output combinations cannot be controlled independently from other output combinations, as we have insufficient inputs. Let us show this with a well known example: the pendulum on a carriage of Fig. 9.18.

[Figure: carriage of mass M on a horizontal track at position x, driven by force F, carrying a pendulum of length 2l at angle φ.]

Figure 9.18: The inverted pendulum on a carriage.

Let the input u = F be the horizontal force exerted on the carriage, and let the outputs be the angle φ of the pendulum and x the position of the carriage. So we have 1 input and 2 outputs, and we would like to track a certain reference for the carriage and at the same time keep the influence of disturbance on the pendulum angle small, according to Fig. 9.19.

[Figure: feedback scheme with plant components P1 and P2, controllers C1 and C2, reference r, disturbance d, error e and angle φ.]

Figure 9.19: Keeping e and φ small in the face of r and d.

That is, we would like to make the total sensitivity small:

φ = P1 u + d,  x = P2 u,  e = r − x,  u = C2 e − C1 φ

⇒  (e ; φ) = (I + PC)⁻¹ (r ; d)    (9.107)

where P = (P2 ; P1) collects the two outputs of the single input u and C = (C2 C1) collects the two controller channels (with the signs as in Fig. 9.19). If we want both the tracking of x and the disturbance reduction of φ to be better than without control, we need:

σmax(S) = σmax((I + PC)⁻¹) = 1/σmin(I + PC) < 1    (9.108)

The following can be proved:

σmin(I + PC) ≤ 1 + σmin(PC)    (9.109)

Since the rank of PC is 1 (1 input u), we have σmin(PC) = 0, so that:

σmin(I + PC) ≤ 1  ⇒  σmax(S) ≥ 1    (9.110)

This result implies that we can never control both outputs e and φ appropriately in the same frequency band! It does not matter what the real transfer functions P1 and P2 look like. Also instability is not relevant here. The same result holds for a rocket, when the pendulum is upright, or for a gantry crane, when the pendulum is hanging. The crucial limitation is the fact that we have only one input u. The remedy is therefore either to add more independent inputs (e.g. a torque on the pendulum) or to require less, by weighting the tracking performance heavily and leaving φ only determined by stabilisation conditions.
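The singular value argument can be illustrated numerically. The Python sketch below is an added illustration with made-up frequency response numbers: it builds a rank-1 product PC = (P2 ; P1)(C2 C1) at a single frequency and checks that σmax(S) cannot drop below 1:

```python
# At a fixed frequency the pendulum's loop matrix PC is a rank-1 outer
# product (one input u, two outputs), so sigma_min(I + PC) <= 1 and hence
# sigma_max(S) >= 1: one output direction never gets better than open loop.
import math

def sv_2x2(M):
    """Singular values of a 2x2 complex matrix via eigenvalues of M^H M."""
    (a, b), (c, d) = M
    g11 = abs(a)**2 + abs(c)**2
    g22 = abs(b)**2 + abs(d)**2
    g12 = a.conjugate() * b + c.conjugate() * d
    tr, det = g11 + g22, g11 * g22 - abs(g12)**2
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return math.sqrt((tr + disc) / 2.0), math.sqrt(max((tr - disc) / 2.0, 0.0))

# hypothetical frequency response values of (P2; P1) and (C2, C1) at some omega
P_col = [2.0 - 1.0j, 0.5 + 0.3j]
C_row = [1.5 + 0.2j, -0.8j]
PC = [[P_col[0] * C_row[0], P_col[0] * C_row[1]],
      [P_col[1] * C_row[0], P_col[1] * C_row[1]]]        # rank-1 outer product
M = [[1 + PC[0][0], PC[0][1]], [PC[1][0], 1 + PC[1][1]]]  # I + PC

detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
S = [[M[1][1] / detM, -M[0][1] / detM],
     [-M[1][0] / detM, M[0][0] / detM]]                   # S = (I + PC)^-1

print(sv_2x2(PC)[1] < 1e-6)  # -> True: PC has rank 1
print(sv_2x2(S)[0] >= 1.0)   # -> True: sigma_max(S) cannot drop below 1
```

Whatever values P1, P2, C1, C2 take, the null direction of the rank-1 PC is mapped onto itself by I + PC, which forces σmin(I + PC) ≤ 1.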



9.4. SUMMARY 143<br />

1. (A, B2) is not stabilisable. Unstable modes cannot be controlled. Usually a not well defined plant. Be sure that all your weights are stable and minimum phase.

2. (A, C2) is not detectable. Unstable modes cannot be observed. Usually a not well defined plant. Again, all your weights should be stable and minimum phase.

3. D12 does not have full rank equal to the number of inputs ui. This means that not all inputs ui are penalised in the outputs z by means of the weights Wui. They should be penalised for all frequencies, so that biproper weights Wui are required. If not all ui are weighted for all frequencies, the effect is the same as when in LQG-control we have a weight matrix R which is singular but needs to be inverted in the solution algorithm. In chapter 13 we saw that for the LQG-problem D12 = R^(1/2).

In the above example of the pendulum we treated the extreme case that σmin(P) = 0, but certainly similar effects occur approximately if σmin(P) ≈ 0.

y = C2x + D21w + D22u



6. In case of both RHP-poles and RHP-zeros, test on the basis of the theorem equations 9.99 and 9.100.

Still no solution? Find an expert.


148 CHAPTER 10. DESIGN EXAMPLE<br />

Chapter 10

Design example

The aim of this chapter is to synthesize a controller for a rocket model with perturbations. First, a classic control design will be made so as to compare the results with H∞-control and μ-control. The use of various control toolboxes will be illustrated. The program files which will be used can be obtained from the ftp-site nt01.er.ele.tue.nl or via the internet home page of this course. We refer to the "readme" file for details.

10.1 Plant definition

The model has been inspired by a paper on rocket control by Enns [17]. Booster rockets fly through the atmosphere on their way to orbit. Along the way, they encounter aerodynamic forces which tend to make the rocket tumble. This unstable phenomenon can be controlled with a feedback of the pitch rate to thrust control. The elasticity of the rocket complicates the feedback control. Instability can result if the control law confuses elastic motion with rigid body motion. The input is a thrust vector control and the measured output is the pitch rate. The rocket engines are mounted in gimbals attached to the bottom of the vehicle to accomplish the thrust vector control. The pitch rate is measured with a gyroscope located just below the center of the rocket. Thus the sensor and actuator are not co-located. In this example we have an extra so-called "flight path zero" in the transfer function, on top of the well known so-called "short period pole pair", which are mirrored with respect to the imaginary axis. The rigid body motion model is described by the transfer function

M(s) = −8 (s + .125)/((s + 1)(s − 1))    (10.1)

Note that M(0) = 1. We will use the model M as the basic model P in the control design.

The elastic modes are described by complex, lightly damped poles associated with zeros. In this simplified model we only take the lowest frequency mode, yielding:

Ps(s) = Ks (s + .125)(s + .05 + 5j)(s + .05 − 5j)/((s + 1)(s − 1)(s + .06 + 6j)(s + .06 − 6j))    (10.2)

The gain Ks is determined so that Ps(0) = 1. Fuel consumption will decrease the distributed mass and the stiffness of the fuel tanks. Also changes in temperature play a role. As a consequence, the elastic modes will change. We have taken the worst scenario, in which poles and zeros change place. This yields:

Pa(s) = Ka (s + .125)(s + .06 + 6j)(s + .06 − 6j)/((s + 1)(s − 1)(s + .05 + 5j)(s + .05 − 5j))    (10.3)

Finally, we have M(s) = P(s) as basic model and Ps(s) − M(s) and Pa(s) − M(s) as possible additive model perturbations. The Bode plots are shown in Fig. 10.1. As the errors exceed the nominal plant at ω ≈ 5.5, the control band will certainly be less wide.

[Figure: Bode magnitudes |M|, |M − Ps| and |M − Pa| versus frequency.]

Figure 10.1: Nominal plant and additive perturbations.

In Matlab, the plant definition can be implemented as follows:

% This is the script file PLANTDEF.M
%
% It first defines the model M(s)=-8(s+.125)/(s+1)(s-1)
% from its zero and pole locations. Subsequently, it introduces
% the perturbed models Pa(s)=M(s)*D(s) and Ps(s) = M(s)/D(s) where
% D(s) has poles and zeros nearby the imaginary axis
z0=-.125; p0=[-1 1];
zs=[-.125 -.05+j*5 -.05-j*5]; ps=[-1 1 -.06+j*6 -.06-j*6];
za=[-.125 -.06+j*6 -.06-j*6]; pa=[-1 1 -.05+j*5 -.05-j*5];
[numm,denm]=zp2tf(z0,p0,1);
[nums,dens]=zp2tf(zs,ps,1);
[numa,dena]=zp2tf(za,pa,1);
% adjust the gains:
km=polyval(denm,0)/polyval(numm,0);
ks=polyval(dens,0)/polyval(nums,0);
ka=polyval(dena,0)/polyval(numa,0);
numm=numm*km; nums=nums*ks; numa=numa*ka;
% Define error models M-Pa and M-Ps
[dnuma,ddena]=parallel(numa,dena,-numm,denm);
[dnums,ddens]=parallel(nums,dens,-numm,denm);
% Plot the bode diagram of model and its (additive) errors
w=logspace(-3,2,3000);
magm=bode(numm,denm,w);
dmaga=bode(dnuma,ddena,w);
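The gain normalisation done in PLANTDEF.M is easy to mirror in Python; the sketch below (an added illustration) rebuilds M and the perturbed model Pa from their zeros and poles, normalises the DC gains to 1, and confirms that the additive error |Pa − M| exceeds |M| around ω ≈ 5.5:

```python
# Python sketch of PLANTDEF.M: build M and Pa from zeros/poles, normalise
# the DC gains to 1, and probe the additive error near the elastic mode.
def tf(zeros, poles):
    def f(s):
        num = 1.0
        for z in zeros:
            num *= (s - z)
        den = 1.0
        for p in poles:
            den *= (s - p)
        return num / den
    return f

z0, p0 = [-0.125], [-1, 1]
za, pa = [-0.125, -0.06 + 6j, -0.06 - 6j], [-1, 1, -0.05 + 5j, -0.05 - 5j]

M_raw, Pa_raw = tf(z0, p0), tf(za, pa)
km, ka = 1.0 / M_raw(0), 1.0 / Pa_raw(0)   # adjust gains so the DC gain is 1
M = lambda s: km * M_raw(s)                # km comes out as -8, matching (10.1)
Pa = lambda s: ka * Pa_raw(s)

print(abs(M(0) - 1) < 1e-12, abs(Pa(0) - 1) < 1e-12)   # -> True True
w = 5.5
print(abs(Pa(1j * w) - M(1j * w)) > abs(M(1j * w)))    # -> True: error exceeds |M|
```

The phase flip of Pa near its lightly damped resonance makes the additive error larger than the nominal plant itself, which is what bounds the control band.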


150 CHAPTER 10. DESIGN EXAMPLE<br />

10.2. CLASSIC CONTROL 149<br />

Nyquist MC<br />

5<br />

4<br />

3<br />

2<br />

1<br />

0<br />

−1<br />

−2<br />

−3<br />

−4<br />

−5<br />

−5 −4 −3 −2 −1 0 1 2 3 4 5<br />

Real Axis<br />

rootlocus MC<br />

4<br />

3<br />

2<br />

dmags=bode(dnums,ddens,w)<br />

loglog(w,magm,w,dmags,w,dmaga)<br />

title('|M|,|M-Ps|,|M-Pa|')<br />

xlabel('The plant and its perturbations')<br />

1<br />

Imag Axis<br />

0<br />

Imag Axis<br />

−1<br />

10.2 Classic control<br />

−2<br />

−3<br />

−4<br />

−4 −3 −2 −1 0 1 2 3 4<br />

Real Axis<br />

Figure 10.3: Root locus and Nyquist plot for low order controller.<br />

rootloci PtsC and PtaC<br />

The plant is a simple SISO-system, so we should be able to design a controller with classic<br />

tools. In general, this is a good start as it gives insight into the problem and is therefore<br />

of considerable help in choosing the weighting lters for an H1-design.<br />

For the controlled system we wish to obtain a zero steady state, i.e., integral action,<br />

while the bandwidth is bounded by the elastic mode at approximately 5.5 rad/s, as we<br />

require robust stability and robust performance for the elastic mode models. Some trial<br />

and error with a simple low order controller, leads soon to a controller of the form<br />

10<br />

(10.4)<br />

(s +1)<br />

s(s +2)<br />

C(s) = 1<br />

2<br />

5<br />

In the bode plot of this controller in Fig. 10.2, we observe that the control band is<br />

bounded by ! 0:25rad/s.<br />

Figure 10.2: Classic low order controller.<br />
Figure 10.4: Root loci for the elastic mode models.<br />
Figure 10.5: Nyquist plots for elastic mode models.<br />
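Why one of the elastic mode models is the critical one can be quantified directly from the pole and zero locations. A small NumPy sketch, using the elastic-mode parametrization of eq. (10.11) from Section 10.7 and assuming (an assumption, not stated explicitly in the text) that its extremes δ = ±1 correspond to the two models Ps and Pa:

```python
import numpy as np

# Pole/zero parametrization of the elastic mode, eq. (10.11):
#   poles: -.055 - .005*delta +/- j(5.5 + .5*delta)
#   zeros: -.055 + .005*delta +/- j(5.5 - .5*delta)
def elastic_poles_zeros(delta):
    pole = -0.055 - 0.005*delta + 1j*(5.5 + 0.5*delta)
    zero = -0.055 + 0.005*delta + 1j*(5.5 - 0.5*delta)
    return pole, zero

p_plus, z_plus = elastic_poles_zeros(+1.0)    # pole near -0.06 + 6j
p_minus, z_minus = elastic_poles_zeros(-1.0)  # pole near -0.05 + 5j

# The delta = -1 model has the pole pair closest to the origin and a
# very small damping ratio, which is what limits the achievable gain.
for p in (p_plus, p_minus):
    print(f"|pole| = {abs(p):.4f}, damping ratio = {-p.real/abs(p):.4f}")
```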

The root locus and the Nyquist plot look familiar for the nominal plant in Fig. 10.3, but we could have done much better by shifting the pole at −2 to the left and increasing the gain.<br />
If we study the root loci for the two elastic mode models of Fig. 10.4 and the Nyquist plots in Fig. 10.5, it is clear why such a restricted low pass controller is obtained. Increase of the controller gain or bandwidth would soon cause the root loci to pass the imaginary axis into the RHP for the elastic mode model Pa. This model shows the nastiest dynamics: it has the pole pair closest to the origin. The root loci which emerge from those poles loop into the RHP. Also in the corresponding right Nyquist plot we see that an increase of the gain would soon lead to an extra, forbidden encirclement of the point −1 by the loops originating from the elastic mode.<br />
By keeping the control action strictly low pass, the elastic mode dynamics will hardly be influenced, as we may observe from the closed loop step responses of the nominal model and the elastic mode models in Fig. 10.6. Still, we notice some high-frequency oscillations for the model Pa, as the poles have been shifted closer to the imaginary axis by the feedback, and consequently the elastic modes are less damped.<br />
We can do better by taking care that the feedback loop shows no or very little action just in the neighborhood of the elastic modes. Therefore we include a notch filter, which<br />


Figure 10.8: Root locus and Nyquist plot controller with notch filter.<br />
the effect, because apparently the gain should be very large in order to trace the exact track of the root locus.<br />
Figure 10.6: Step responses for low order controller.<br />

has a narrow dip in the transfer just at the proper place:<br />
C(s) = \frac{1}{2}\,\frac{s+1}{s(s+2)}\cdot\frac{150\,(s+.055+5.5j)(s+.055-5.5j)}{(s+50+50j)(s+50-50j)} \qquad (10.5)<br />

We have positioned zeros just in the middle of the elastic mode pole-zero couples. Roll-off poles have been placed far away, where they cannot influence control, because at ω = 50 the plant transfer itself is very small. We clearly discern this dip in the bode plot of this controller in Fig. 10.7.<br />
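The depth of the notch can be verified from the coefficients used in raketcla.m (a Python sketch of the magnitude evaluation; the quadratics below are taken verbatim from that script):

```python
import numpy as np

# Notch controller of eq. (10.5), built from the raketcla.m coefficient
# vectors: 0.5(s+1)/(s(s+2)) times 150*[1 .1 30.2525]/[1 100 5000].
numc = np.polymul(0.5 * np.array([1.0, 1.0]),
                  150.0 * np.array([1.0, 0.1, 30.2525]))
denc = np.polymul(np.array([1.0, 2.0, 0.0]),
                  np.array([1.0, 100.0, 5000.0]))

def mag(num, den, w):
    s = 1j * w
    return abs(np.polyval(num, s) / np.polyval(den, s))

dip = mag(numc, denc, 5.5)   # inside the notch, at the elastic mode
ref = mag(numc, denc, 2.0)   # just below the notch
print(f"|C(j5.5)| = {dip:.5f}  versus  |C(j2)| = {ref:.4f}")
```

The controller gain collapses by roughly two orders of magnitude right at the elastic mode, which is exactly the "narrow dip" intended.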

Figure 10.7: Classic controller with notch filter.<br />
Figure 10.9: Root loci for the elastic mode models with notch filter.<br />
This is also reflected in the Nyquist plots in Fig. 10.10. Because of the notch filters, the loops due to the elastic modes have been substantially decreased in the loop transfer, and consequently there is little chance left that the point −1 is encircled.<br />
Figure 10.10: Nyquist plots for elastic mode models with notch filter.<br />

The root locus and the Nyquist plot for the nominal plant in Fig. 10.8 are hardly changed close to the origin. Further away, where the roll-off poles lie, the root locus is not interesting and has not been shown. The poles remain sufficiently far from the imaginary axis, as expected, given the small plant transfer at those high frequencies.<br />
Studying the root loci for the two elastic mode models of Fig. 10.9, it can be seen that there is hardly any shift of the elastic mode poles. Even Matlab had problems in showing<br />
Finally, as a consequence, the step responses of the two elastic models no longer show elastic mode oscillations, and they hardly differ from the rigid mass model, as shown in<br />


[numcls,dencls]=feedback(1,1,numls,denls,-1)<br />
[numcla,dencla]=feedback(1,1,numla,denla,-1)<br />
step(numcl,dencl)<br />
hold on<br />
step(numcls,dencls)<br />
step(numcla,dencla)<br />
title('step disturbance for M, Pts or Pta in loop')<br />
pause<br />
hold off<br />

Fig. 10.11.<br />

% Improved classic controller C(s)=[.5(s+1)/s(s+2)]*<br />
%   150(s+.055+j*5.5)(s+.055-j*5.5)/((s+50+j*50)(s+50-j*50))<br />
numc=conv(-[.5 .5],[1 .1 30.2525]*150)<br />
denc=conv([1 2 0],[1 100 5000])<br />
bode(numc,denc,w)<br />
title('bodeplots controller')<br />
pause<br />
numl=conv(numc,numm)<br />
denl=conv(denc,denm)<br />
rlocus(numl,denl)<br />
axis([-10,10,-10,10])<br />
title('rootlocus MC')<br />
pause<br />
nyquist(numl,denl,w)<br />
set(figure(1),'currentaxes',get(gcr,'plotaxes'))<br />
axis([-5,5,-5,5])<br />
title('Nyquist MC')<br />
pause<br />
[numls,denls]=series(numc,denc,nums,dens)<br />
[numla,denla]=series(numc,denc,numa,dena)<br />
rlocus(numls,denls)<br />
hold on<br />
rlocus(numla,denla)<br />
axis([-10,10,-10,10])<br />
title('rootloci PtsC and PtaC')<br />
pause<br />
hold off<br />
nyquist(numls,denls,w)<br />
set(figure(1),'currentaxes',get(gcr,'plotaxes'))<br />
axis([-5,5,-5,5])<br />
title('Nyquist PtsC')<br />
pause<br />
nyquist(numla,denla,w)<br />
set(figure(1),'currentaxes',get(gcr,'plotaxes'))<br />
axis([-5,5,-5,5])<br />
title('Nyquist PtaC')<br />
pause<br />
[numcl,dencl]=feedback(1,1,numl,denl,-1)<br />
[numcls,dencls]=feedback(1,1,numls,denls,-1)<br />
[numcla,dencla]=feedback(1,1,numla,denla,-1)<br />
step(numcl,dencl)<br />
hold on<br />
step(numcls,dencls)<br />
step(numcla,dencla)<br />
title('step disturbance for M, Pts or Pta in loop')<br />
pause<br />
hold off<br />
Figure 10.11: Step responses for controller with notch filter.<br />

You can replay all computations, possibly with modifications, by running raketcla.m as listed below:<br />

% This is the script file RAKETCLA.M<br />

%<br />

% In this script file we synthesize controllers<br />

% for the plant (defined in plantdef) using classical<br />

% design techniques. It is assumed that you ran<br />

% *plantdef* before invoking this script.<br />

%<br />

% First try the classic control law: C(s)=.5(s+1)/s(s+2)<br />

10.3 Augmented plant and weight filter selection<br />
Being an example, we want to keep the control design simple, so we propose a simple mixed sensitivity set-up as depicted in Fig. 10.12.<br />
The exogenous input w = d̃ stands in principle for the aerodynamic forces acting on the rocket in flight at a nominal speed. Together with the weight on the actuator input u, it will also represent the model perturbations. The disturbed output of the rocket, the pitch rate, should be kept as close to zero as possible. Because we can see it as an error, we incorporate it, in a weighted form ẽ, as a component of the output z = (ũ, ẽ)^T. At the same time, the error e is used as the measurement y = e for the controller. Note<br />

numc=-[.5 .5]<br />
denc=[1 2 0]<br />
bode(numc,denc,w)<br />
title('bodeplots controller')<br />
pause<br />
numl=conv(numc,numm)<br />
denl=conv(denc,denm)<br />
rlocus(numl,denl)<br />
title('rootlocus MC')<br />
pause<br />
nyquist(numl,denl,w)<br />
set(figure(1),'currentaxes',get(gcr,'plotaxes'))<br />
axis([-5,5,-5,5])<br />
title('Nyquist MC')<br />
pause<br />
[numls,denls]=series(numc,denc,nums,dens)<br />
[numla,denla]=series(numc,denc,numa,dena)<br />
rlocus(numls,denls)<br />
hold on<br />
rlocus(numla,denla)<br />
title('rootloci PtsC and PtaC')<br />
pause<br />
hold off<br />
nyquist(numls,denls,w)<br />
set(figure(1),'currentaxes',get(gcr,'plotaxes'))<br />
axis([-5,5,-5,5])<br />
title('Nyquist PtsC')<br />
pause<br />
nyquist(numla,denla,w)<br />
set(figure(1),'currentaxes',get(gcr,'plotaxes'))<br />
axis([-5,5,-5,5])<br />
title('Nyquist PtaC')<br />
pause<br />
[numcl,dencl]=feedback(1,1,numl,denl,-1)<br />


solution, because Vd is equally involved in both terms of the mixed sensitivity problem. The bode plot of the filter is displayed in Fig. 10.13.<br />

Figure 10.12: Augmented plant for rocket.<br />
Figure 10.13: Weighting filters for rocket.<br />

that we did not pay attention to measurement errors. The mixed sensitivity is thus defined by:<br />
z = \begin{pmatrix} \tilde u \\ \tilde e \end{pmatrix} = \begin{pmatrix} W_u R V_d \\ W_e S V_d \end{pmatrix} \tilde d = \begin{pmatrix} \frac{W_u K V_d}{1-PK} \\ \frac{W_e V_d}{1-PK} \end{pmatrix} \tilde d \qquad (10.6)<br />
Based on the exercise of classic control design, we cannot expect disturbance rejection over a band broader than 2 rad/s. Choosing again a biproper filter for We, we cannot go much further with the zero than the zero at 100 of Vd. Keeping We on the 0 dB line for low frequencies, we thus obtain:<br />
W_e = \frac{.02s+2}{s+2} = .02\,\frac{s+100}{s+2} \qquad (10.8)<br />
as displayed in Fig. 10.13.<br />
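A minimal numerical illustration of the sensitivity appearing in (10.6), using the rigid nominal model M(s) = −8(s+.125)/((s−1)(s+1)) implied by eq. (10.14) and the sign bookkeeping of raketcla.m (the latter is an assumption about the conventions):

```python
import numpy as np

# Loop transfer of the classic design: numc = -[.5 .5] and the plant's
# negative gain make the loop enter as 1/(1+L) with unity feedback.
numm = -8.0 * np.array([1.0, 0.125])     # -8(s+.125)
denm = np.array([1.0, 0.0, -1.0])        # (s-1)(s+1)
numc = -np.array([0.5, 0.5])             # -.5(s+1)
denc = np.array([1.0, 2.0, 0.0])         # s(s+2)
numl = np.polymul(numc, numm)
denl = np.polymul(denc, denm)

def S(w):
    s = 1j * w
    L = np.polyval(numl, s) / np.polyval(denl, s)
    return 1.0 / (1.0 + L)

# Integral action: S vanishes at DC and tends to 1 at high frequency.
print(abs(S(0.001)), abs(S(100.0)))
```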

Concerning Wu, we again know very little about the actuator, consisting of a servosystem driving the angle of the gimbals to direct the thrust vector. Certainly, the allowed band will be low pass. So all we can do is choose a high pass penalty Wu such that the expected model perturbations are covered, and hope that this is sufficient to prevent actuator saturation. The additive model perturbations |Ps(jω)−M(jω)| and |Pa(jω)−M(jω)| are shown in Fig. 10.14 and should be less than |WR(jω)| = |Wu(jω)Vd(jω)|, which is displayed as well.<br />
We have chosen two poles in between the poles and zeros of the flexible mode of the rocket, just at the place where we have chosen zeros in the classic controller. We will see that, by doing so, the mixed sensitivity controller will indeed also contain zeros at these positions, showing the same notch filter. In order to make Wu biproper again, we now have to choose zeros at the lower end of the frequency range, i.e., at .001 rad/s. The gain of the filter has been chosen such that the additive model errors are just covered by |WR(jω)| = |Wu(jω)Vd(jω)|. Finally we have for Wu:<br />
W_u = \frac{1}{3}\,\frac{100s^2 + .2s + .0001}{s^2 + .1s + 30.2525} = \frac{100}{3}\,\frac{(s+.001)^2}{(s+.05+5.5j)(s+.05-5.5j)} \qquad (10.9)<br />
The disturbance filter Vd represents the aerodynamic forces. Since these are forces which act on the rocket, like the actuator does by directing the gimbals, it would be more straightforward to model d as an input disturbance. To keep track with the presentation of disturbances at the output throughout the lecture notes, and to cope more easily with the additive perturbations by means of VdWu, we have chosen to leave it an output disturbance. As we know very little about the aerodynamic forces, a flat spectrum seems appropriate, as we see no reason that some frequencies should be favoured. Passing through the process, of which the predominant behaviour is dictated by two poles and one zero, there will be a decay for frequencies higher than 1 rad/s with −20 dB/decade. We could then choose a first-order filter Vd with a pole at −1. We like to shift the pole to the origin. In that way we will penalise the tracking error via WeSVd infinitely heavily at ω = 0, so that the controller will necessarily contain integral action. For numerical reasons we have to take the integration pole somewhat into the LHP, at a distance which is small compared to the poles and zeros that determine the transfer P. Furthermore, if we choose Vd to be biproper, we avoid problems with inversions; we will see that in the controller a lot of pole-zero cancellations with the augmented plant will occur, in particular for the mixed sensitivity problems. So, Vd has been chosen as:<br />
V_d = .01\,\frac{s+100}{s+.0001} = \frac{.01s+1}{s+.0001} \qquad (10.7)<br />
Note that the pole and zero lie 6 decades apart, which is on the edge of numerical power. The gain has been chosen as .01, which appeared to give the least numerical problems. As there are no other exogenous inputs, there is no problem of scaling. If we increase the gain of Vd we will just have a larger infinity norm bound γ, but no different optimal<br />
Having defined all filters, we can now test whether the conditions with respect to S + T = 1 are satisfied. Therefore we display WS = WeVd as the weighting filter for the sensitivity S in Fig. 10.15.<br />



magWu=bode(numWu,denWu,w)<br />

loglog(w,magVd,w,magWe,w,magWu)<br />

xlabel('|Vd|, |We|, |Wu|')<br />

title('Weighting parameters in control configuration')<br />

pause<br />

magWS=magVd.*magWe<br />

magWR=magVd.*magWu<br />

magWT=magWR./magm<br />

loglog(w,magWS,w,magWR,w,magWT)<br />

xlabel('|WS|, |WR| and |WT|')<br />

title('Sensitivity, control and complementary sensitivity weightings')<br />

pause<br />

loglog(w,magWR,w,dmags,w,dmaga)<br />

title('Compare additive modelerror weight and "real" additive errors')<br />

pause<br />

echo off<br />
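The coefficient vectors entered in weights.m can be cross-checked against the factored forms of eqs. (10.7)–(10.9); a short NumPy sketch:

```python
import numpy as np

# Wu of eq. (10.9): 100(s+.001)^2 should expand to [100 .2 .0001] and the
# complex pole pair -.05 +/- 5.5j to the real quadratic [1 .1 30.2525].
numWu = 100.0 * np.polymul([1.0, 0.001], [1.0, 0.001])
denWu = np.real(np.polymul([1.0, 0.05 - 5.5j], [1.0, 0.05 + 5.5j]))
print(numWu, denWu)

# DC and high-frequency gains of the biproper filters Vd and We,
# eqs. (10.7)-(10.8): the 6-decade spread of Vd and the 0 dB level of We.
numVd, denVd = [0.01, 1.0], [1.0, 0.0001]
numWe, denWe = [0.02, 2.0], [1.0, 2.0]
dc = lambda n, d: n[-1] / d[-1]   # value at s = 0
hf = lambda n, d: n[0] / d[0]     # value as s -> infinity (biproper)
print(round(dc(numVd, denVd)), hf(numVd, denVd))   # about 1e4 and .01
print(dc(numWe, denWe), hf(numWe, denWe))          # 1 and .02
```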

Figure 10.14: WR encompasses additive model error.<br />

10.4 <strong>Robust</strong> control toolbox<br />


The mixed sensitivity problem is now well defined. With the Matlab <strong>Robust</strong> <strong>Control</strong> toolbox we can compute a controller together with the associated γ. This toolbox can only be used for a simple mixed sensitivity problem. The configuration structure is fixed; only the weighting filters corresponding to S, T and/or R have to be specified. The example which we study in this chapter fits in such a framework, but we emphasize that the toolbox lacks the flexibility for larger, or different, structures. For the example it finds γ = 1.338, which is somewhat too large, so we should adapt the weights once again. The frequency response of the controller is displayed in Fig. 10.16 and looks similar to the controller found by classical means.<br />
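What the synthesis minimizes is the peak over frequency of the stacked column in (10.6). A crude grid estimate of that peak for the classic notch controller gives a baseline the H∞ design has to beat (a Python sketch; the rigid model from eq. (10.14) stands in for the plant, and the sign conventions follow the scripts):

```python
import numpy as np

# Frequency grid and all transfer evaluations as plain polynomial ratios.
numm, denm = -8.0*np.array([1., .125]), np.array([1., 0., -1.])
numc = np.polymul(-np.array([.5, .5]), 150.0*np.array([1., .1, 30.2525]))
denc = np.polymul([1., 2., 0.], [1., 100., 5000.])
w = np.logspace(-4, 3, 4000); s = 1j*w

P = np.polyval(numm, s) / np.polyval(denm, s)
K = np.polyval(numc, s) / np.polyval(denc, s)
Ssens = 1.0/(1.0 + P*K)          # sensitivity
R = K * Ssens                    # control sensitivity
Vd = np.polyval([.01, 1.], s) / np.polyval([1., .0001], s)
We = np.polyval([.02, 2.], s) / np.polyval([1., 2.], s)
Wu = np.polyval(np.array([100., .2, .0001])/3.0, s) / np.polyval([1., .1, 30.2525], s)

# Largest singular value of the 2x1 column [We S Vd ; Wu R Vd] at each w.
cost = np.sqrt(np.abs(We*Vd*Ssens)**2 + np.abs(Wu*Vd*R)**2)
print(f"grid estimate of the mixed-sensitivity norm: {cost.max():.2f}")
```

The estimate exceeds 1, consistent with the optimal γ = 1.338 reported for the toolbox design being larger than 1 as well.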

Figure 10.15: Weighting filters for sensitivities S, R and T.<br />
Figure 10.16: H∞ controller found by <strong>Robust</strong> <strong>Control</strong> Toolbox.<br />

Similarly, the weight for the control sensitivity R is WR = WuVd, and from that we derive that for the complementary sensitivity T the weight equals WT = WuVd/P, represented in Fig. 10.15. We observe that WS is low pass and WT is high pass and, more importantly, that they intersect below the 0 dB-line.<br />
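The crossing of |WS| and |WT| below 0 dB can be located on a grid (Python sketch; the rigid nominal model of eq. (10.14) is used for P):

```python
import numpy as np

w = np.logspace(-3, 2, 5000); s = 1j*w
P  = np.polyval(-8.0*np.array([1., .125]), s) / np.polyval([1., 0., -1.], s)
Vd = np.polyval([.01, 1.], s) / np.polyval([1., .0001], s)
We = np.polyval([.02, 2.], s) / np.polyval([1., 2.], s)
Wu = np.polyval(np.array([100., .2, .0001])/3.0, s) / np.polyval([1., .1, 30.2525], s)

WS = np.abs(We * Vd)        # weight on S
WT = np.abs(Wu * Vd / P)    # induced weight on T
i = np.argmin(np.abs(np.log(WS) - np.log(WT)))   # where the curves meet
print(f"|WS| = |WT| near {w[i]:.2f} rad/s at level {WS[i]:.2f}")
```

The crossing lands between 1 and 2 rad/s at a level of roughly 0.5, i.e. below 0 dB, so the S + T = 1 compatibility condition is met.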

In this example, the above reasoning seems to suggest that one can simply derive and synthesize weighting filters. In reality this is an iterative process, where one starts with certain filters and adapts them in subsequent iterations such that they lead to a controller which gives acceptable behaviour of the closed loop system. In particular, the gains of the various filters need several iterations to arrive at proper values.<br />
The proposed filter selection is implemented in the following Matlab script.<br />
Nevertheless, the impulse responses displayed in Fig. 10.17 still show the oscillatory effects of the elastic modes. More trial and error to improve the weights is therefore necessary. In particular, γ has to be decreased.<br />

Finally, in Fig. 10.18, the sensitivity and the control sensitivity are shown together with their bounds, which satisfy:<br />

% This is the script WEIGHTS.M<br />
numVd=[.01 1]<br />
denVd=[1 .0001]<br />
numWe=[.02 2]<br />
denWe=[1 2]<br />
numWu=[100 .2 .0001]/3<br />
denWu=[1 .1 30.2525]<br />
magVd=bode(numVd,denVd,w)<br />
magWe=bode(numWe,denWe,w)<br />



% Define the weighting parameters<br />
weights<br />
% Next we need to construct the augmented plant. To do so,<br />
% the robust control toolbox allows one to define *three weights* only.<br />
% (This may be viewed as a severe handicap!) These weights will be<br />
% called W1, W2, and W3 and represent the transfer function weightings<br />
% on the controlled system sensitivity (S), control sensitivity (R)<br />
% and complementary sensitivity (T), respectively. From our configuration<br />
% we find that W1 = Vd*We, W2 = Vd*Wu and W3 is not in use. We specify<br />
% this in state space form as follows.<br />
[aw1,bw1,cw1,dw1]=tf2ss(conv(numVd,numWe),conv(denVd,denWe))<br />
ssw1=mksys(aw1,bw1,cw1,dw1)<br />
[aw2,bw2,cw2,dw2]=tf2ss(conv(numVd,numWu),conv(denVd,denWu))<br />
ssw2=mksys(aw2,bw2,cw2,dw2)<br />
ssw3=mksys([],[],[],[])<br />
Figure 10.17: Step responses for closed loop system with P = M, Ps or Pa and H∞ controller.<br />

% The augmented system is now generated with the command *augss*<br />
% (sorry, it is the only command for this purpose in this toolbox...)<br />
[tss]=augss(syg,ssw1,ssw2,ssw3)<br />
% <strong>Control</strong>ler synthesis in this toolbox is done with the routine<br />
% *hinfopt*. Check out the help information on this routine and<br />
% find out that we actually compute 1/gamma, where gamma is<br />
% the `usual' gamma that we use throughout the lecture notes.<br />
[gamma,ssf,sscl]=hinfopt(tss,[1:2],[.001,1,0])<br />
gamma=1/gamma<br />
disp('Optimal H-infinity norm is approximately ')<br />
disp(num2str(gamma))<br />
\forall\omega:\ |S(j\omega)| < \gamma\,|W_S^{-1}(j\omega)| = \gamma/|W_e(j\omega)V_d(j\omega)|, \qquad \forall\omega:\ |R(j\omega)| < \gamma\,|W_R^{-1}(j\omega)| = \gamma/|W_u(j\omega)V_d(j\omega)| \qquad (10.10)<br />


% Next we evaluate the robust performance of this controller<br />
[af,bf,cf,df]=branch(ssf) % returns the controller in state space form<br />
bode(af,bf,cf,df)<br />
pause<br />
[as,bs,cs,ds]=tf2ss(nums,dens) % returns Ps in state space form<br />
[aa,ba,ca,da]=tf2ss(numa,dena) % returns Pa in state space form<br />
[alm,blm,clm,dlm]=series(af,bf,cf,df,ag,bg,cg,dg)<br />
[als,bls,cls,dls]=series(af,bf,cf,df,as,bs,cs,ds)<br />
[ala,bla,cla,dla]=series(af,bf,cf,df,aa,ba,ca,da)<br />
[acle,bcle,ccle,dcle]=feedback([],[],[],1,alm,blm,clm,dlm,-1)<br />
[aclu,bclu,cclu,dclu]=feedback(af,bf,cf,df,ag,bg,cg,dg,-1)<br />
[acls,bcls,ccls,dcls]=feedback([],[],[],1,als,bls,cls,dls,-1)<br />
[acla,bcla,ccla,dcla]=feedback([],[],[],1,ala,bla,cla,dla,-1)<br />
step(acle,bcle,ccle,dcle)<br />
hold on<br />
step(acls,bcls,ccls,dcls)<br />
step(acla,bcla,ccla,dcla)<br />
pause<br />
hold off<br />
boundR=gamma./magWR<br />
boundS=gamma./magWS<br />
magcle=bode(acle,bcle,ccle,dcle,1,w)<br />
magclu=bode(aclu,bclu,cclu,dclu,1,w)<br />
loglog(w,magcle,w,magclu,w,boundR,w,boundS)<br />
title('|S|, |R| and their bounds')<br />


Figure 10.18: |S| and |R| and their bounds γ/|WS| resp. γ/|WR|.<br />

Note that for low frequencies the sensitivity S is the limiting factor, while for high frequencies the control sensitivity R puts the constraints. At about 1<br />



Planta=nd2sys(numa,dena)<br />
systemnames='Contr Planta'<br />
inputvar='[d]'<br />
outputvar='[-Planta-d;Contr]'<br />
input_to_Planta='[Contr]'<br />
input_to_Contr='[-Planta-d]'<br />
sysoutname='realclpa'<br />
cleanupsysic='yes'<br />
sysic<br />

10.5 H∞ design in mutools<br />
In the "μ-analysis and synthesis toolbox", simply indicated by "Mutools", we have plenty of freedom to define the structure of the augmented plant ourselves. The listing for the example under study, raketmut.m, is given as:<br />

%<br />

% SCRIPT FILE FOR THE CALCULATION AND EVALUATION<br />

% OF CONTROLLERS USING THE MU-TOOLBOX<br />

%<br />

% This script assumes that you ran the files plantdef and weights<br />

%<br />

% CONTROLLER AND CLOSED LOOP EVALUATION<br />

[ac,bc,cc,dc]=unpck(Contr)<br />

bode(ac,bc,cc,dc)<br />

pause<br />

[acl,bcl,ccl,dcl]=unpck(realclp)<br />

[acls,bcls,ccls,dcls]=unpck(realclps)<br />

[acla,bcla,ccla,dcla]=unpck(realclpa)<br />

step(acl,bcl,ccl,dcl)<br />

hold<br />

pause<br />

step(acls,bcls,ccls,dcls)<br />

pause<br />

step(acla,bcla,ccla,dcla)<br />

pause<br />

hold off<br />

boundR=gamma./magWR<br />

boundS=gamma./magWS<br />

[magcl,phasecl,w]=bode(acl,bcl,ccl,dcl,1,w)<br />

loglog(w,magcl,w,boundR,w,boundS)<br />

title('|S| , |R| and their bounds')<br />

% REPRESENT SYSTEM BLOCKS IN INTERNAL FORMAT<br />
Plant=nd2sys(numm,denm)<br />
Vd=nd2sys(numVd,denVd)<br />
We=nd2sys(numWe,denWe)<br />
Wu=nd2sys(numWu,denWu)<br />
% MAKE GENERALIZED PLANT USING *sysic*<br />
systemnames='Plant Vd We Wu'<br />
inputvar='[dw;u]'<br />
outputvar='[We;Wu;-Plant-Vd]'<br />
input_to_Plant='[u]'<br />
input_to_Vd='[dw]'<br />
input_to_We='[-Plant-Vd]'<br />
input_to_Wu='[u]'<br />
sysoutname='G'<br />
cleanupsysic='yes'<br />
sysic<br />

Running this script in Matlab yields γ = 1.337 and a controller that deviates somewhat from the robust control toolbox controller for high frequencies ω > 10³ rad/s. The step responses and sensitivities are virtually the same. This shows that the controller is not unique, as it is just one controller in the set of controllers that obey ‖G‖∞ < γ with G stable. As long as γ is not exactly minimal, the set of controllers contains more than one controller. For MIMO plants, even for minimal γ, the solution for the controller is not unique. Furthermore there are aberrations due to numerical anomalies.<br />

% CALCULATE CONTROLLER<br />

[Contr,fclp,gamma]=hinfsyn(G,1,1,0,10,1e-4)<br />
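hinfsyn (like hinfopt) searches for the smallest feasible infinity-norm bound by a γ-iteration. The bisection mechanics can be illustrated on a scalar transfer, here the filter Vd of eq. (10.7) (a Python sketch, not the toolbox algorithm itself):

```python
import numpy as np

# |Vd(jw)| on a grid (w = 0 included, where the peak 1/.0001 is attained).
w = np.concatenate(([0.0], np.logspace(-6, 4, 4000)))
mag = np.abs(np.polyval([0.01, 1.0], 1j*w) / np.polyval([1.0, 0.0001], 1j*w))

lo, hi = 0.0, 1e6            # bracket for the norm
for _ in range(60):
    gamma = 0.5 * (lo + hi)
    if np.all(mag < gamma):
        hi = gamma           # bound feasible everywhere: tighten from above
    else:
        lo = gamma           # bound violated somewhere: raise it
print(round(hi))             # close to 1e4 = |Vd(0)|
```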

10.6 LMI toolbox.<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR MODEL<br />
systemnames='Contr Plant'<br />
inputvar='[d]'<br />
outputvar='[-Plant-d;Contr]'<br />
input_to_Plant='[Contr]'<br />
input_to_Contr='[-Plant-d]'<br />
sysoutname='realclp'<br />
cleanupsysic='yes'<br />
sysic<br />

The "LMI toolbox" provides a very flexible way of synthesizing H∞ controllers. The toolbox has its own format for the internal representation of dynamical systems which, in general, is not compatible with the formats of other toolboxes (as usual). The toolbox can handle parameter varying systems and has a user friendly graphical interface for the design of weighting filters. As for the latter, we refer to the routine<br />
magshape<br />
The calculation of H∞ optimal controllers proceeds as follows.<br />
% Script file for the calculation of H-infinity controllers<br />
% in the LMI toolbox. This script assumes that you ran the files<br />
% *plantdef* and *weights* before.<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR Ps<br />
Plants=nd2sys(nums,dens)<br />
systemnames='Contr Plants'<br />
inputvar='[d]'<br />
outputvar='[-Plants-d;Contr]'<br />
input_to_Plants='[Contr]'<br />
input_to_Contr='[-Plants-d]'<br />
sysoutname='realclps'<br />
cleanupsysic='yes'<br />
sysic<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR Pa


% FIRST REPRESENT SYSTEM BLOCKS IN INTERNAL FORMAT<br />
Ptsys=ltisys('tf',numm,denm)<br />
Vdsys=ltisys('tf',numVd,denVd)<br />
Wesys=ltisys('tf',numWe,denWe)<br />
Wusys=ltisys('tf',numWu,denWu)<br />
Figure 10.19: Variability of elastic mode.<br />

% MAKE GENERALIZED PLANT<br />
inputs = 'dw;u'<br />
outputs = 'We;Wu;-Pt-Vd'<br />
Ptin='Pt : u'<br />
Vdin='Vd : dw'<br />
Wein='We : -Pt-Vd'<br />
Wuin='Wu : u'<br />
G=sconnect(inputs,outputs,[],Ptin,Ptsys,Vdin,Vdsys,...<br />
Wein,Wesys,Wuin,Wusys)<br />
% CALCULATE H-INFTY CONTROLLER USING LMI SOLUTION<br />
[gamma,Ksys]=hinflmi(G,[1 1],0,1e-4)<br />

P_t(s) = K_0\,\frac{-8\,(s+.125)}{(s-1)(s+1)}\cdot\frac{\bigl(s+.055-.005\delta-j(5.5-.5\delta)\bigr)\bigl(s+.055-.005\delta+j(5.5-.5\delta)\bigr)}{\bigl(s+.055+.005\delta-j(5.5+.5\delta)\bigr)\bigl(s+.055+.005\delta+j(5.5+.5\delta)\bigr)} \qquad (10.12),\ (10.13)<br />
% MAKE CLOSED-LOOP INTERCONNECTION FOR MODEL<br />
Ssys = sinv(sadd(1,smult(Ptsys,Ksys)))<br />
Rsys = smult(Ksys,Ssys)<br />
where the extra constant K0 is determined by Pt(0) = 1: the DC-gain is kept at 1. If we define the nominal positions of the poles and zeros by a0 = .055 and b0 = 5.5, rearrangement yields:<br />

P_t(s) = -8\,\frac{s+.125}{s^2-1}\,\{1+\Delta_{mult}\} \qquad (10.14)<br />
\Delta_{mult} = k_0\,\frac{-\delta(.02s+.02a_0+2b_0)}{s^2+(2a_0+.01\delta)s+a_0^2+b_0^2+\delta(.01a_0+b_0)+.250025\delta^2} \qquad (10.15)<br />
k_0 = \frac{a_0^2+b_0^2+.250025\delta^2+\delta(.01a_0+b_0)}{a_0^2+b_0^2+.250025\delta^2-\delta(.01a_0+b_0)} \qquad (10.16)<br />
The factor F = 1+\Delta_{mult} can easily be brought into a state space description with {A, B, C, D}:<br />

% EVALUATE CONTROLLED SYSTEM<br />
splot(Ksys,'bo',w)<br />
title('Bodeplot of controller')<br />
pause<br />
splot(Ssys,'sv')<br />
title('Maximal singular value of Sensitivity')<br />
pause<br />
splot(Ssys,'ny')<br />
title('Nyquist plot of Sensitivity')<br />
pause<br />
splot(Ssys,'st')<br />
title('Step response of Sensitivity')<br />
pause<br />
splot(Rsys,'sv')<br />
title('Maximal sv of Control Sensitivity')<br />
pause<br />
splot(Rsys,'ny')<br />
title('Nyquist plot of Control Sensitivity')<br />
pause<br />
splot(Rsys,'st')<br />
title('Step response of Control Sensitivity')<br />
pause<br />

10.7 μ design in mutools<br />

A = A_1 + dA = \begin{pmatrix} 0 & 1 \\ -a_0^2-b_0^2 & -2a_0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ -\beta\delta-\gamma\delta^2 & -.01\delta \end{pmatrix}, \qquad B = B_1 + dB = \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \end{pmatrix}<br />
C = C_1 + dC = \begin{pmatrix} 0 & 0 \end{pmatrix} - \delta\,\frac{a_0^2+b_0^2+\beta\delta+\gamma\delta^2}{a_0^2+b_0^2-\beta\delta+\gamma\delta^2}\begin{pmatrix} \alpha & .02 \end{pmatrix}, \qquad D = D_1 + dD = 1 + 0 \qquad (10.17)<br />
with \alpha = 11.0011,\ \beta = 5.50055,\ \gamma = .250025.<br />
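That Pt(0) = 1 indeed holds with k0 from (10.16) can be confirmed numerically (a Python sketch; the rigid part −8(s+.125)/((s−1)(s+1)) already has DC gain 1 on its own):

```python
import numpy as np

# DC gain of the perturbed plant (10.12)-(10.14) for several delta values.
a0, b0 = 0.055, 5.5
for delta in (-1.0, -0.3, 0.5, 1.0):
    zero = -(a0 - 0.005*delta) + 1j*(b0 - 0.5*delta)   # one of the two zeros
    pole = -(a0 + 0.005*delta) + 1j*(b0 + 0.5*delta)   # one of the two poles
    k0 = (a0**2 + b0**2 + 0.250025*delta**2 + delta*(0.01*a0 + b0)) \
       / (a0**2 + b0**2 + 0.250025*delta**2 - delta*(0.01*a0 + b0))
    dc = k0 * abs(zero)**2 / abs(pole)**2   # elastic factor at s = 0
    print(delta, round(dc, 9))              # 1.0 for every delta
```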

In μ-design we pretend to model the variability of the flexible mode very tightly by means of specific parameters, instead of the rough modelling by an additive perturbation bounded by WuVd. In that way we hope to obtain a less conservative controller. We suppose that the poles and zeros of the flexible mode shift along a straight line in the complex plane between the extreme positions of Ps and Pa, as illustrated in Fig. 10.19. Algebraically this variation can then be represented by one parameter δ according to:<br />
Note that for δ = 0 we simply have F = 1+Δmult = 1. If we let δ = δ(s) with |δ(jω)| ≤ 1, we have given the parameter δ much more freedom, but the whole description then fits with the μ-analysis. We have for the dynamic transfer F(s):<br />

\forall\,\delta\in\mathbb{R},\ -1\le\delta\le 1:\qquad \text{poles: } -.055-.005\delta\pm j(5.5+.5\delta), \qquad \text{zeros: } -.055+.005\delta\pm j(5.5-.5\delta) \qquad (10.11)<br />

sx = A_1x + B_1u_1 + dA(s)x + dB(s)u_1, \quad dB(s)=0; \qquad y_1 = C_1x + D_1u_1 + dC(s)x + dD(s)u_1, \quad dD(s)=0 \qquad (10.18)<br />

So that the total transfer of the plant including the perturbation is given by:


With ζ = a_0^2+b_0^2 = 30.2502 we rewrite:<br />
dA = \begin{pmatrix} 0 & 0 \\ -\beta\delta-\gamma\delta^2 & -.01\delta \end{pmatrix}, \qquad dC = -\delta\,\frac{\zeta+\beta\delta+\gamma\delta^2}{\zeta-\beta\delta+\gamma\delta^2}\begin{pmatrix} \alpha & .02 \end{pmatrix} \qquad (10.19)<br />
and with some patience one can derive that:<br />
dA = B_2\Delta(I-D_{22}\Delta)^{-1}C_2 \qquad (10.22)<br />
dB = B_2\Delta(I-D_{22}\Delta)^{-1}D_{21} \qquad (10.23)<br />
dC = D_{12}\Delta(I-D_{22}\Delta)^{-1}C_2 \qquad (10.24)<br />
dD = D_{12}\Delta(I-D_{22}\Delta)^{-1}D_{21} \qquad (10.25)<br />
Next we can define 5 extra input lines in a vector u_2 and correspondingly 5 extra output lines in a vector y_2 that are linked in a closed loop via u_2 = \Delta y_2, with \Delta = \delta I_5:<br />

\begin{pmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{pmatrix} \qquad (10.20)<br />
\begin{pmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\zeta & -.11 & 1 & -\beta & 0 & 0 & -\gamma & -.01 \\ 0 & 0 & 1 & -2\beta & 0 & 0 & 0 & -.02 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & -\gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \qquad (10.26)<br />
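The LFT correspondence (10.22)–(10.25) is easy to verify numerically on random data (a generic Python check, independent of the particular matrices of this example):

```python
import numpy as np

# Close the loop u2 = Delta*y2 around a random realization and compare the
# resulting state matrix with A + B2*Delta*inv(I - D22*Delta)*C2, eq. (10.22).
rng = np.random.default_rng(0)
n, m2, p2 = 2, 5, 5
A   = rng.standard_normal((n, n));  B2  = rng.standard_normal((n, m2))
C2  = rng.standard_normal((p2, n)); D22 = 0.1 * rng.standard_normal((p2, m2))
Delta = 0.7 * np.eye(5)                    # Delta = delta * I5

# Eliminate u2 from: xdot = A x + B2 u2,  y2 = C2 x + D22 u2,  u2 = Delta y2
u2_gain = np.linalg.solve(np.eye(m2) - Delta @ D22, Delta @ C2)
A_closed = A + B2 @ u2_gain

dA = B2 @ Delta @ np.linalg.solve(np.eye(p2) - D22 @ Delta, C2)
print(np.allclose(A_closed, A + dA))   # True
```

The two forms agree because of the push-through identity Δ(I−D22Δ)⁻¹ = (I−ΔD22)⁻¹Δ; the same elimination yields (10.23)–(10.25).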

and let F be represented by:<br />
\begin{pmatrix} \dot x \\ y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{pmatrix} \begin{pmatrix} x \\ u_1 \\ u_2 \end{pmatrix} \qquad (10.21)<br />
so that we have obtained the structure according to Fig. 10.20.<br />
This multiplicative error structure can be embedded in the augmented plant as sketched in Fig. 10.21.<br />

Figure 10.21: Augmented plant for μ-set-up.

Note that we have skipped the weighted controller output ũ. We had no real bounds on the actuator ranges, and we actually determined Wu in the previous H∞-designs such that the additive model perturbations are covered. In the μ-design under study the model perturbations are represented by the Δ-block, so that in principle we can skip Wu. If we do so, the direct feedthrough D12 of the augmented plant has insufficient rank. We have to penalise the input u, and this is accomplished by the extra gain block with value .0001. This weights u very lightly via the output error ẽ. It is just sufficient to avoid numerical anomalies without substantially influencing the intended weights.
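As a small illustration with hypothetical numbers (the actual augmented plant is much larger), the effect of the extra gain block on the column rank of D12 can be seen as follows:

```python
import numpy as np

# Illustration with hypothetical numbers: an H-infinity synthesis requires the
# direct feedthrough D12, from the control input to the weighted outputs, to
# have full column rank. Without any weight on u that column is zero; a tiny
# gain like 1e-4 restores full rank without noticeably changing the weights.
D12_no_penalty = np.array([[0.0], [0.0]])   # control input not weighted: rank 0
D12_penalised = np.array([[0.0], [1e-4]])   # tiny extra gain: rank 1
print(np.linalg.matrix_rank(D12_no_penalty),
      np.linalg.matrix_rank(D12_penalised))   # 0 1
```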

Figure 10.20: Dynamic structure of multiplicative error.

The two representations correspond according to a linear fractional transformation (LFT), cf. equations (10.22)–(10.25).
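The LFT correspondence can be checked numerically; the following is an illustrative sketch with random matrices (not part of the original μ-toolbox script):

```python
import numpy as np

# Illustrative numerical check: close the loop u2 = Delta*y2 around the
# (C2, D22) outputs of a state-space block and verify the LFT formula
# dA = B2*Delta*(I - D22*Delta)^(-1)*C2.
rng = np.random.default_rng(0)
n, m2 = 2, 5
A = rng.standard_normal((n, n))
B2 = rng.standard_normal((n, m2))
C2 = rng.standard_normal((m2, n))
D22 = 0.1 * rng.standard_normal((m2, m2))   # small, so I - D22*Delta stays invertible
delta = 0.3
Delta = delta * np.eye(m2)                  # repeated-scalar uncertainty delta*I5

# Eliminating u2 from u2 = Delta*(C2 x + D22 u2) gives
# u2 = (I - Delta*D22)^(-1) Delta C2 x, hence the perturbed state matrix:
A_closed = A + B2 @ np.linalg.solve(np.eye(m2) - Delta @ D22, Delta @ C2)
dA = B2 @ Delta @ np.linalg.solve(np.eye(m2) - D22 @ Delta, C2)
print(np.allclose(A_closed, A + dA))   # True, by the push-through identity
```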



blkp=[1 1;1 1;1 1;1 1;1 1;1 1]

[bnds1,dvec1,sens1,pvec1]=mu(clp1_g,blkp)<br />

vplot('liv,m',vnorm(clp1_g),bnds1)<br />

pause<br />

Unfortunately, the μ-toolbox was not yet ready to process uncertainty blocks in the form of δI, so that we have to proceed with 5 independent uncertainty parameters δi and thus:

Δ = diag( δ1, δ2, δ3, δ4, δ5 )   (10.27)

% FIRST mu-CONTROLLER

[dsysL1,dsysR1]=musynfit('first',dvec1,sens1,blkp,1,1)<br />

mu_inc1=mmult(dsysL1,GMU,minv(dsysR1))<br />

[k2,clp2]=hinfsyn(mu_inc1,1,1,0,100,1e-4)<br />

clp2_g=frsp(clp2,omega)<br />

[bnds2,dvec2,sens2,pvec2]=mu(clp2_g,blkp)<br />

vplot('liv,m',vnorm(clp2_g),bnds2)<br />

pause<br />

[ac,bc,cc,dc]=unpck(k2)<br />

bode(ac,bc,cc,dc)<br />

pause<br />


As a consequence the design will be more conservative, but the controller will become more robust. The commands for solving this design in the μ-toolbox are given in the next script:

% MAKE CLOSED LOOP INTERCONNECTION FOR MODEL<br />

systemnames='k2 Plant'<br />

inputvar='[d]'<br />

outputvar='[-Plant-d;k2]'

input_to_Plant='[k2]'<br />

input_to_k2='[-Plant-d]'<br />

sysoutname='realclp'<br />

cleanupsysic='yes'<br />

sysic<br />

% Let's make the system DMULT first<br />

alpha=11.0011; beta=30.25302;
gamma=5.50055; epsilon=.250025;

ADMULT=[0,1;-beta,-.11]
BDMULT=[0,0,0,0,0,0;1,-gamma,0,0,-epsilon,-.01]
CDMULT=[0,0;1,0;0,0;0,0;0,0;0,1]
DDMULT=[1,-alpha,0,-2*alpha*gamma/beta,0,-.02; ...
0,0,0,0,0,0; ...
0,0,0,1,0,0; ...
0,1,-epsilon/beta,gamma/beta,0,0; ...
0,1,0,0,0,0; ...
0,0,0,0,0,0]
mat=[ADMULT BDMULT;CDMULT DDMULT]

DMULT=pss2sys(mat,2)<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR Ps<br />

Plants=nd2sys(nums,dens)<br />

systemnames='k2 Plants'<br />

inputvar='[d]'<br />

outputvar='[-Plants-d;k2]'

input_to_Plants='[k2]'<br />

input_to_k2='[-Plants-d]'<br />

sysoutname='realclps'<br />

cleanupsysic='yes'<br />

sysic<br />

% MAKE GENERALIZED MUPLANT<br />

systemnames='Plant Vd We DMULT'<br />

inputvar='[u2(5);dw;x]'

outputvar='[DMULT(2:6);We+.0001*x;-DMULT(1)-Vd]'

input_to_Plant='[x]'<br />

input_to_Vd='[dw]'<br />

input_to_We='[-DMULT(1)-Vd]'<br />

input_to_DMULT='[Plant;u2(1:5)]'

sysoutname='GMU'<br />

cleanupsysic='yes'<br />

sysic<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR Pa<br />

Planta=nd2sys(numa,dena)<br />

systemnames='k2 Planta'<br />

inputvar='[d]'<br />

outputvar='[-Planta-d;k2]'

input_to_Planta='[k2]'<br />

input_to_k2='[-Planta-d]'<br />

sysoutname='realclpa'<br />

cleanupsysic='yes'<br />

sysic<br />

% CALCULATE HINF CONTROLLER<br />

[k1,clp1]=hinfsyn(GMU,1,1,0,100,1e-4)<br />

% PROPERTIES OF CONTROLLER<br />

% <strong>Control</strong>ler and closed loop evaluation<br />

[acl,bcl,ccl,dcl]=unpck(realclp)<br />

[acls,bcls,ccls,dcls]=unpck(realclps)<br />

[acla,bcla,ccla,dcla]=unpck(realclpa)<br />

step(acl,bcl,ccl,dcl)<br />

omega=logspace(-2,3,100)<br />

spoles(k1)<br />

k1_g=frsp(k1,omega)<br />

vplot('bode',k1_g)<br />

pause<br />

clp1_g=frsp(clp1,omega)<br />

blk=[1 1;1 1;1 1;1 1;1 1;1 1]



% <strong>Control</strong>ler and closed loop evaluation<br />

[acl,bcl,ccl,dcl]=unpck(realclp)<br />

[acls,bcls,ccls,dcls]=unpck(realclps)<br />

[acla,bcla,ccla,dcla]=unpck(realclpa)<br />

step(acl,bcl,ccl,dcl)<br />

hold<br />

pause<br />

step(acls,bcls,ccls,dcls)<br />

pause<br />

step(acla,bcla,ccla,dcla)<br />

pause<br />

hold off<br />


% SECOND mu-CONTROLLER<br />

spoles(k2)<br />

k2_g=frsp(k2,omega)<br />

vplot('bode',k2_g)<br />

pause<br />

First the H∞-controller for the augmented plant is computed. The γ = 33.0406, much too high. Next one is invited to choose the respective orders of the filters that approximate the D-scalings for a number of frequencies. If one chooses a zero order, the first approximation yields γ = 19.9840 and an unstable closed loop for Pa. A second iteration with second order approximating filters even increases γ to 29.4670 and Pa remains unstable.

A second try with second order filters in the first iteration brings γ down to 5.4538 but still leads to an unstable Pa. In the second iteration with second order filters the program fails altogether.

Stimulated nevertheless by the last attempt we increase the first iteration order to 3, which produces γ = 4.9184 and a Pa that just oscillates in feedback. A second iteration with first order filters increases γ to 21.2902, but the resulting closed loops are all stable.

Going still higher we take both iterations with 4th order filters and γ takes the respective values 4.4876 and 10.8217. In the first iteration Pa still shows an ill-damped oscillation, but the second iteration results in very stable closed loops for all of P, Ps and Pa. The cost is a very complicated controller of order 4 + 10·4 = 44!

[dsysL2,dsysR2]=musynfit(dsysL1,dvec2,sens2,blkp,1,1)<br />

mu_inc2=mmult(dsysL2,mu_inc1,minv(dsysR2))<br />

[k3,clp3]=hinfsyn(mu_inc2,1,1,0,100,1e-4)<br />

clp3_g=frsp(clp3,omega)<br />

[bnds3,dvec3,sens3,pvec3]=mu(clp3_g,blkp)<br />

vplot('liv,m',vnorm(clp3_g),bnds3)<br />

pause<br />

[ac,bc,cc,dc]=unpck(k3)<br />

bode(ac,bc,cc,dc)<br />

pause<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR MODEL<br />

systemnames='k3 Plant'<br />

inputvar='[d]'<br />

outputvar='[-Plant-d;k3]'

input_to_Plant='[k3]'<br />

input_to_k3='[-Plant-d]'<br />

sysoutname='realclp'<br />

cleanupsysic='yes'<br />

sysic<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR Ps<br />

Plants=nd2sys(nums,dens)<br />

systemnames='k3 Plants'<br />

inputvar='[d]'<br />

outputvar='[-Plants-d;k3]'

input_to_Plants='[k3]'<br />

input_to_k3='[-Plants-d]'<br />

sysoutname='realclps'<br />

cleanupsysic='yes'<br />

sysic<br />

% MAKE CLOSED LOOP INTERCONNECTION FOR Pa<br />

Planta=nd2sys(numa,dena)<br />

systemnames='k3 Planta'<br />

inputvar='[d]'<br />

outputvar='[-Planta-d;k3]'

input_to_Planta='[k3]'<br />

input_to_k3='[-Planta-d]'<br />

sysoutname='realclpa'<br />

cleanupsysic='yes'<br />

sysic


172 CHAPTER 11. BASIC SOLUTION OF THE GENERAL PROBLEM<br />

|sI − A − B2F| = 0,   |sI − A − HC2| = 0   (11.1)

The really new component is the block transfer Q(s), an extra feedback operating on the output error e. If Q = 0, we just have the stabilising LQG-controller that we will call here the nominal controller Knom. For analysing the effect of the extra feedback by Q, we can combine the augmented plant and the nominal controller in a block T as illustrated in Fig. 11.2.


Chapter 11<br />

Basic solution of the general<br />

problem<br />

Figure 11.2: Combining Knom and G into T .<br />

Originally, we had as optimisation criterion:<br />

In this chapter we will present the principle of the solution of the general problem. It offers all the insight into the problem that we need. The computational solution follows a somewhat different direction (nowadays) and will be presented in the next chapter. The fundamental solution discussed here is a generalisation of the previously discussed "internal model control" for stable systems.

The set of all stabilising controllers, also for unstable systems, can be derived from the blockscheme in Fig. 11.1.

min_{K stabilising} ‖ G11 + G12 K (I − G22K)⁻¹ G21 ‖∞   (11.2)

"Around" the stabilising controller Knom, incorporated in block T, we get a similar criterion in terms of Tij that highly simplifies into the next affine expression:

min_{Q stabilising} ‖ T11 + T12 Q T21 ‖∞   (11.3)

because T22 appears to be zero! As illustrated in Fig. 11.3, T22 is actually the transfer between output error e and input v of Fig. 11.1. To understand that this transfer is zero, we have to realise that the augmented plant is completely and exactly known. It incorporates the nominal plant model P and known filters. Although the real process may deviate and cause a model error, all these effects should have been taken care of by appropriately chosen filters that guard the robustness. This leaves the augmented plant fully and exactly known, which means that the model used thereafter in the nominal controller fits exactly. Consequently, if w = 0, the output error e, only excited by v, must be zero! And the corresponding transfer is precisely T22. From the viewpoint of Q: it sees no transfer between v and e.

If T22 = 0, the consequent affine expression in the controller Q can be interpreted very easily as a simple forward tracking problem, as illustrated in Fig. 11.3.
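Because T22 = 0, the map from Q to the closed-loop transfer is affine rather than fractional; a quick numerical sketch with static (frequency-independent) gains makes this explicit:

```python
import numpy as np

# Sketch with static gains: the map Q -> T11 + T12 Q T21 is affine, so
# T(Q1 + Q2) = T(Q1) + T(Q2) - T(0), unlike the fractional dependence on K
# in G11 + G12 K (I - G22 K)^(-1) G21.
rng = np.random.default_rng(1)
T11 = rng.standard_normal((2, 2))
T12 = rng.standard_normal((2, 3))
T21 = rng.standard_normal((3, 2))

def closed_loop(Q):
    return T11 + T12 @ Q @ T21

Q1 = rng.standard_normal((3, 3))
Q2 = rng.standard_normal((3, 3))
lhs = closed_loop(Q1 + Q2)
rhs = closed_loop(Q1) + closed_loop(Q2) - closed_loop(0 * Q1)
print(np.allclose(lhs, rhs))   # True
```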

Because Knom stabilised the augmented plant for Q = 0, we can be sure that all transfers Tij will be stable. But then the simple forward tracking problem of Fig. 11.3 can only remain stable if Q itself is a stable transfer. As a consequence we now have the set of all stabilising controllers by just choosing Q stable. This set is then clustered on the nominal controller Knom, defined by F and H, and certainly the ultimate controller

Figure 11.1: Solution principle<br />

The upper major block represents the augmented plant. For reasons of clarity, we have skipped the direct feedthrough block D. The lower major block can easily be recognised as a familiar LQG-controller, where F is the state feedback block and H functions as a Kalman gain block. Neither F nor H has to be optimal yet, as long as they cause stable poles from:




(The remainder of this chapter might pose you some problems if you are not well introduced to "functional analysis". Then just try to make the best out of it, as it is only one page.)

It appears that we can now use the freedom left in the choices for F and H, and it can be proved that F and H can be chosen (for square transfers) such that:

T12* T12 = I   (11.5)
T21 T21* = I   (11.6)

In mathematical terminology these matrices are therefore called inner, while engineers prefer to denote them as all-pass transfers. These transfers all possess poles in the left half plane and corresponding zeros in the right half plane, exactly symmetric with respect to the imaginary axis. If the norm is restricted to the imaginary axis, which is the case for the ∞-norm and the 2-norm, we may thus freely multiply by the conjugated transpose of these inners and obtain:

Figure 11.3: Resulting forward tracking problem.<br />

min_{Q stable} ‖ T11 + T12 Q T21 ‖ = min_{Q stable} ‖ T12* T11 T21* + T12* T12 Q T21 T21* ‖   (11.7)
 = min_{Q stable} ‖ L + Q ‖,   with L := T12* T11 T21*   (11.8)
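As a small numerical illustration (not in the original text), the all-pass property can be checked for the scalar inner transfer T(s) = (s − 1)/(s + 1), whose pole at −1 and zero at +1 are mirrored through the imaginary axis:

```python
import numpy as np

# T(s) = (s - 1)/(s + 1): pole at -1 and zero at +1, symmetric with respect
# to the imaginary axis, so |T(jw)| = 1 for every frequency w (all-pass).
w = np.logspace(-2, 2, 200)
s = 1j * w
T = (s - 1) / (s + 1)
print(np.allclose(np.abs(T), 1.0))   # True
```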

K can be expressed in the "parameter" Q. This expression, which we will not give explicitly here for reasons of compactness, is called the Youla parametrisation after its inventor.

This is the moment to step back for a moment and memorise the internal model control, where we were also dealing with a comparable transfer Q. Once more, Fig. 11.4 pictures that structure with comparable signals v and e.

By the conjugation of the inners into Tij* we have effectively turned zeros into poles and vice versa, thereby causing all poles of L to lie in the right half plane. For the norm along the imaginary axis there is no objection, but more correctly we now have to say that we deal with the L∞ and L2 spaces and norms. As outlined in chapter 5, the (Lebesgue) space L∞ combines the familiar (Hardy) space H∞ of stable transfers and the complementary H∞⁻ space, containing the antistable or anticausal transfers that have all


their poles in the right half plane. Transfer L is such a transfer. Similarly the space L2 consists of both H2 and the complementary space H2⊥ of anticausal transfers. The question then arises how to approximate an anticausal transfer L by a stable, causal Q in the complementary space, where the approximation is measured on the imaginary axis by the proper norm. The easiest solution is offered in the L2 space, because this is a Hilbert space and thus an inner product space, which implies that H2 and H2⊥ are perpendicular (which induced the symbol ⊥). Consequently Q is perpendicular to L and can thus never "represent" a component of L in the used norm; it will only contribute to an increase of the norm unless it is taken zero. So in the 2-norm the solution is obviously: Q = 0.
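Written out, the orthogonality argument reads: since L ∈ H2⊥ and Q ∈ H2 are perpendicular in the inner product of L2,

```latex
\| L + Q \|_2^2 \;=\; \| L \|_2^2 + 2\,\langle L, Q \rangle + \| Q \|_2^2
\;=\; \| L \|_2^2 + \| Q \|_2^2 \;\ge\; \| L \|_2^2 ,
```

with equality if and only if Q = 0.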

Unfortunately, for the space L∞, in which we are actually interested, the solution is not so trivial, because L∞ is a Banach space and not an inner product space. This famous problem:

Figure 11.4: Internal model control structure.<br />

Indeed, for P = Pt and the other external input d (to be compared with w) being zero, the transfer seen by Q between v and e is zero. Furthermore, as a result of this T22 = 0, we also obtained affine expressions for the other transfers Tij, being the bare sensitivity and complementary sensitivity by then. So internal model control can be seen as a particular application of the much more general scheme that we study now. In fact, Fig. 11.1 turns into the internal model control of Fig. 11.4 by choosing F = 0 and H = 0, which is allowed because P, and thus G, is stable.

The remaining problem is:<br />

L ∈ H∞⁻ :   min_{Q ∈ H∞} ‖ L + Q ‖∞   (11.9)

has been given the name Nehari problem after the first scientist who studied this problem. It took considerable time and energy to find solutions, one of which, an elegant one, is offered to you in chapter 8. But maybe you already got some taste here of the reasons why it took so long to formalise classical control along these lines. As final remarks we can add:

min_{Q stable} ‖ T11 + T12 Q T21 ‖∞   (11.4)

Generically, min_{Q∈H∞}(L + Q) is all-pass, i.e. constant for all ω ∈ R. T12 and T21 were already taken all-pass, but also the total transfer from w to z, viz. T11 + T12QT21, will be all-pass in the SISO case, due to the waterbed effect.

Note that the phrase "Q stabilising" is now equivalent to "Q stable"! Furthermore, we may as well take other norms, provided that the respective transfers live in the particular normed space. We could e.g. translate the LQG-problem into an augmented plant and then require minimisation of the 2-norm instead of the ∞-norm. As Tij and Q are necessarily stable, they live in H2 as well, so that we can also minimise the 2-norm for reasons of comparison. (If there is a direct feedthrough block D involved, the 2-norm is not applicable, because a constant transfer is not allowed in L2.)



11.1 Exercises<br />

7.1: Consider the following feedback system:

For MIMO systems the solution is not unique, as we just consider the maximum<br />

singular value. The freedom in the remaining singular values can be used to optimise<br />

extra desiderata.<br />

Plant: y = P(u + d)
<strong>Control</strong>ler: u = K(r − y)
Errors: e1 = W1 u and e2 = W2 (r − y)
It is known that ‖r‖2 ≤ 1 and ‖d‖2 ≤ 1, and it is desired to design K so as to minimise:

‖ [e1; e2] ‖2

a) Show that this can be formulated as a standard H∞ problem and compute G.
b) If P is stable, redefine the problem affine in Q.

7.2: Take the first blockscheme of the exercise of chapter 6. To facilitate the computations we just consider a SISO-plant and DC-signals (i.e. only ω = 0!) so that we avoid complex computations due to frequency dependence. If it is given that ‖Δ‖2 < 1 and P = 1/2, then it is asked to minimise ‖ỹ‖2 under the condition ‖x‖2 < 1 while Δ is the only input.

a) Solve this problem by means of a mixed sensitivity problem, iteratively adapting Wy. Hint: First define V and Wx. Sketch the solution in terms of controller C and compute the solution directly as a function of the weight.

b) Solve the problem exactly: minimise ‖ỹ‖2 while ‖x‖2 < 1. Why is there a difference with the solution of a)? Hint: For this question it is easier to define the problem affine in Q.


178 CHAPTER 12. SOLUTION TO THE GENERAL H1 CONTROL PROBLEM<br />

control algorithms, which we briefly describe in a separate section and which you are probably familiar with.

12.2 The computation of system norms<br />

We start this chapter by considering the problem of characterizing the H2 and H∞ norms of a given (multivariable) transfer function H(s) in terms of a state space description of the system. We will consider the continuous time case only, for the discrete time versions of the results below are less insightful and more involved.

Let H(s) be a stable transfer function of dimension p × m and suppose that

Chapter 12<br />

H(s) = C(sI − A)⁻¹B + D

Solution to the general H∞ control problem

where A, B, C and D are real matrices defining the state space equations

ẋ(t) = Ax(t) + Bw(t)
z(t) = Cx(t) + Dw(t)   (12.1)

12.1 Introduction<br />

Since H(s) is stable, all eigenvalues of A are assumed to be in the left half complex plane. We suppose that the state space has dimension n and, to avoid redundancy, we moreover assume that (12.1) defines a minimal state space representation of H(s) in the sense that n is as small as possible among all state space representations of H(s).

Let us recall the definitions of the H2 and H∞ norms of H(s):

‖H(s)‖2 := ( (1/2π) ∫_{−∞}^{∞} trace( H(jω) H*(jω) ) dω )^{1/2}

‖H(s)‖∞ := sup_{ω∈R} σ̄( H(jω) )

where σ̄ denotes the maximal singular value.
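Both definitions can be approximated directly on a frequency grid. The following sketch (the example system H(s) = 1/(s + 1) is chosen here for illustration, not taken from the text) should give ‖H‖2 = 1/√2 and ‖H‖∞ = 1:

```python
import numpy as np

# Sketch: approximate the H2 and H-infinity norms of H(s) = 1/(s + 1)
# directly from their frequency-domain definitions.
w = np.linspace(-500.0, 500.0, 1_000_001)
H = 1.0 / (1j * w + 1.0)

dw = w[1] - w[0]
h2 = np.sqrt(np.sum(np.abs(H) ** 2) * dw / (2 * np.pi))   # ~0.7071 = 1/sqrt(2)
hinf = np.max(np.abs(H))                                  # 1.0, the peak at w = 0
print(h2, hinf)
```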

12.2.1 The computation of the H2 norm<br />

We have seen in Chapter 5 that the (squared) H2 norm of a system has the simple interpretation as the sum of the (squared) L2 norms of the impulse responses, which we can extract from (12.1). If we assume that D = 0 in (12.1) (otherwise the H2 norm is infinite, so H ∉ H2) and if bi denotes the i-th column of B, then the i-th impulse response is given by

In previous chapters we have been mainly concerned with properties of control configurations in which a controller is designed so as to minimize the H∞ norm of a closed loop transfer function. So far, we did not address the question how such a controller is actually computed. This has been a problem of main concern in the early 80s. Various mathematical techniques have been developed to compute `H∞-optimal controllers', i.e., feedback controllers which stabilize a closed loop system and at the same time minimize the H∞ norm of a closed loop transfer function. In this chapter we treat a solution to a most general version of the H∞ optimal control problem which is now generally accepted to be the fastest, simplest, and computationally most reliable and efficient way to synthesize H∞ optimal controllers.

The solution which we present here is the result of almost a decade of impressive research effort in the area of H∞ optimal control and has received widespread attention in the control community. An amazing number of scientific papers have appeared (and still appear!) in this area of research. In this chapter we will treat a solution of the general H∞ control problem which is popularly referred to as the `DGKF-solution', the acronym standing for Doyle, Glover, Khargonekar and Francis, the four authors of a famous and prize-winning paper in the IEEE Transactions on Automatic <strong>Control</strong>¹. From a mathematical and system theoretic point of view, this so-called `state space solution' to the H∞ control problem is extremely elegant and worth a thorough treatment. However, for practical applications it is sufficient to know the precise conditions under which the state space solution `works', so as to have a computationally reliable way to obtain and design H∞ optimal controllers. The solution presented in this chapter admits a relatively straightforward implementation in a software environment like Matlab. The <strong>Robust</strong> <strong>Control</strong> Toolbox has various routines for the synthesis of H∞ optimal controllers and we will devote a section in this chapter on how to use these routines.

This chapter is organized as follows. In the next section we first treat the problem of how to compute the H2 norm and the H∞ norm of a transfer function. These results will be used in subsequent sections, where we present the main results concerning H∞ controller synthesis in Theorem 12.7. We will make a comparison to the H2 optimal

zi(t) = C e^{At} bi.

Since H(s) has m inputs, we have m of such responses, and for i = 1, …, m their L2 norms satisfy

‖zi‖2² = ∫₀^∞ biᵀ e^{Aᵀt} Cᵀ C e^{At} bi dt
       = biᵀ ( ∫₀^∞ e^{Aᵀt} Cᵀ C e^{At} dt ) bi
       = biᵀ M bi.

Here, we defined

M := ∫₀^∞ e^{Aᵀt} Cᵀ C e^{At} dt

¹ "State Space Solutions to the Standard H2 and H∞ <strong>Control</strong> Problems", by J. Doyle, K. Glover, P. Khargonekar and B. Francis, IEEE Transactions on Automatic <strong>Control</strong>, August 1989.




12.2. THE COMPUTATION OF SYSTEM NORMS 179<br />

12.2.2 The computation of the H∞ norm

which is a square symmetric matrix of dimension n × n, called the observability gramian of the system (12.1). Since xᵀMx ≥ 0 for all x ∈ Rⁿ, we have that M is non-negative definite². In fact, the observability gramian M satisfies the equation

The computation of the H∞ norm of a transfer function H(s) is slightly more involved. We will again present an algebraic algorithm, but instead of finding an exact expression for ‖H(s)‖∞, we will find an algebraic condition for whether or not

MA + AᵀM + CᵀC = 0   (12.2)

‖H(s)‖∞ < γ   (12.4)

for some real number γ ≥ 0. Thus, we will set up a test to determine whether (12.4) holds for a certain value of γ ≥ 0. By performing this test for various values of γ we may get arbitrarily close to the norm ‖H(s)‖∞.

We will briefly outline the main ideas behind this test. Recall from Chapter 5 that the H∞ norm is equal to the L2-induced norm of the transfer function, i.e.,

‖H(s)‖∞ = sup_{w ∈ L2} ‖Hw‖2 / ‖w‖2.

This means that ‖H(s)‖∞ ≤ γ if and only if

‖Hw‖2² − γ²‖w‖2² = ‖z‖2² − γ²‖w‖2² ≤ 0.   (12.5)

which is called a Lyapunov equation in the unknown M. Since we assumed that the state space parameters (A, B, C, D) define a minimal representation of the transfer function H(s), the pair (A, C) is observable³, and the matrix M is the only symmetric non-negative definite solution of (12.2). Thus, M can be computed from an algebraic equation, the Lyapunov equation (12.2), which is a much simpler task than solving the infinite integral expression for M.

The observability gramian M completely determines the H2 norm of the system H(s)<br />

as is seen from the following characterization.<br />

Theorem 12.1 Let H(s) be a stable transfer function of the system described by the state space equations (12.1). Suppose that (A, B, C, D) is a minimal representation of H(s). Then

for all w ∈ L2. (Indeed, dividing (12.5) by ‖w‖2² gives you the equivalence.) Here, z = Hw is the output of the system (12.1) when the input w is applied and when the initial state x(0) is set to 0.

Now, suppose that γ ≥ 0 and the system (12.1) are given. Motivated by the middle expression of (12.5) we introduce, for arbitrary initial conditions x(0) = x0 and any w ∈ L2, the criterion

1. ‖H(s)‖2 < ∞ if and only if D = 0.

2. If M is the observability gramian of (12.1), then

‖H(s)‖2² = trace(BᵀMB) = Σ_{i=1}^{m} biᵀ M bi.

J(x0, w) := ‖z‖2² − γ²‖w‖2² = ∫₀^∞ ( |z(t)|² − γ² |w(t)|² ) dt   (12.6)

where z is the output of the system (12.1) when the input w is applied and the initial state x(0) is taken to be x0.

For fixed initial condition x0 we will be interested in maximizing this criterion over all possible inputs w. Precisely, for fixed x0, we look for an optimal input w* ∈ L2 such that

Thus the H2 norm of H(s) is given by a trace formula involving the state space matrices A, B, C, from which the observability gramian M is computed. The main issue here is that Theorem 12.1 provides an algebraic characterization of the H2 norm which proves extremely useful for computational purposes.
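A minimal numerical sketch of this characterization follows (example system chosen for illustration; the Lyapunov equation is solved here by a Kronecker-product trick for transparency, whereas a dedicated Lyapunov solver would normally be used):

```python
import numpy as np

# Sketch: compute ||H||_2^2 = trace(B' M B) for H(s) = C (sI - A)^(-1) B,
# with M the observability gramian solving M A + A' M + C'C = 0 (eq. 12.2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # H(s) = 1/((s + 1)(s + 2))

n = A.shape[0]
# Column-major vectorization: vec(MA + A'M) = (A' kron I + I kron A') vec(M)
L = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
M = np.linalg.solve(L, -(C.T @ C).flatten(order='F')).reshape((n, n), order='F')

h2_squared = np.trace(B.T @ M @ B)
print(h2_squared)   # ~0.08333 = 1/12, matching the L2 norm of e^-t - e^-2t
```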

There is a `dual' version of Theorem 12.1 which is obtained from the fact that ‖H(s)‖2 = ‖H*(s)‖2. We state it for completeness.

J(x0, w) ≤ J(x0, w*)   (12.7)

for all w ∈ L2. We will moreover require that the state trajectory x(t) generated by this so-called worst case input is stable, in the sense that the solution x(t) of the state equation ẋ = Ax + Bw* with x(0) = x0 satisfies lim_{t→∞} x(t) = 0.

The solution to this problem is simpler than it looks. Let us take γ > 0 such that γ²I − DᵀD is positive definite (and thus invertible) and introduce the following Riccati

Theorem 12.2 Under the same conditions as in Theorem 12.1,

‖H(s)‖2² = trace( C W Cᵀ )

equation<br />

where W is the unique symmetric non-negative definite solution of the Lyapunov equation

AᵀK + KA + (BᵀK − DᵀC)ᵀ [γ²I − DᵀD]⁻¹ (BᵀK − DᵀC) + CᵀC = 0.   (12.8)

AW + WAᵀ + BBᵀ = 0.   (12.3)

It is then a straightforward exercise in linear algebra⁴ to verify that for any real symmetric solution K of (12.8) there holds

J(x0, w) = x0ᵀ K x0 − ‖ w + [γ²I − DᵀD]⁻¹ (BᵀK − DᵀC) x ‖²_{(γ²I − DᵀD)}   (12.9)

The square symmetric matrix W is called the controllability gramian of the system (12.1). Theorem 12.2 therefore states that the H2 norm of H(s) can also be obtained by computing the controllability gramian associated with the system (12.1).

⁴ A `completion of the squares' argument. If you are interested, work out the derivative (d/dt) xᵀ(t)Kx(t) using (12.1), substitute (12.8) and integrate over [0, ∞) to obtain the desired expression (12.9).

² which is not the same as saying that all elements of M are non-negative!

³ that is, Ce^{At} x0 = 0 for all t ≥ 0 only if the initial condition x0 = 0.



OUTPUT: the value γ approximating the H∞ norm of H(s) within the specified accuracy.

The second step of this algorithm involves the investigation of the existence of stabilizing solutions of (12.8), which is a standard routine in Matlab. We will not go into the details of an efficient algebraic implementation of the latter problem. What is of crucial importance here, though, is the fact that the computation of the H∞ norm of a transfer function (just like the computation of the H2 norm) has been transformed into an algebraic problem. This implies a fast and extremely reliable way to compute these system norms.

for all w ∈ L2 which drive the state trajectory to zero for t → ∞. Here, ‖f‖Q² with Q = Qᵀ > 0 denotes the `weighted' L2 norm

‖f‖Q² := ∫₀^∞ fᵀ(t) Q f(t) dt.   (12.10)

Now, have a look at the expression (12.9). It shows that for all w ∈ L2 (for which lim_{t→∞} x(t) = 0) the criterion J(x0, w) is at most equal to x0ᵀKx0, and equality is obtained by substituting for w the state feedback

12.3 The computation of H2 optimal controllers<br />

w*(t) = −[γ²I − DᵀD]⁻¹ (BᵀK − DᵀC) x(t)   (12.11)

The computation of H2 optimal controllers is not a subject of this course. In fact, H2 optimal controllers coincide with the well-known LQG controllers which some of you may be familiar with from earlier courses. However, for the sake of completeness we treat the controller structure of H2 optimal controllers once more in this section.

We consider the general control configuration as depicted in Figure 12.1. Here,

which then maximizes J(x0, w) over all w ∈ L2. This worst case input achieves the inequality (12.7) (again, provided the feedback (12.11) stabilizes the system (12.1)). The only extra requirement for the solution K to (12.8) is therefore that the eigenvalues

λ{ A − B[γ²I − DᵀD]⁻¹(BᵀK − DᵀC) } ⊂ C⁻

-<br />

z<br />

-<br />

w<br />

G<br />

-<br />

all lie in the left half complex plane. The latter is precisely the case when the solution K to<br />

(12.8) is non-negative de nite and for obvious reasons we call such a solution a stabilizing<br />

solution of (12.8). One can show that whenever a stabilizing solution K of (12.8) exists,<br />

it is unique. So there exists at most one stabilizing solution to (12.8).<br />

For a stabilizing solution K, wethus obtain that<br />

u y<br />

J(x0w) J(x0w ) = x T<br />

0 Kx0<br />

K<br />

for all w 2L2. Now, taking x0 = 0 yields that<br />

0<br />

J(0w)=k z k 2 2 ; 2 k w k 2 2<br />

Figure 12.1: General control con guration<br />

for all w 2 L2. This is precisely (12.5) and it follows that k H(s) k1 . These<br />

observations provide the main idea behind the proof of the following result.<br />

w are the exogenous inputs (disturbances, noise signals, reference inputs), u denote the<br />

control inputs, z is the to be controlled output signal and y denote the measurements.<br />

The generalized plant G is supposed to be given, whereas the controller K needs to be<br />

designed. Admissable controllers are all linear time-invariant systems K that internally<br />

stabilize the con guration of Figure 13.1. Every such admissible controller K gives rise<br />

to a closed loop system which maps disturbance inputs w to the to-be-controlled output<br />

variables z. Precisely, if M denotes the closed-loop transfer function M : w 7! z, then<br />

with the obvious partitioning of G,<br />

Theorem 12.3 Let H(s) be represented by the (minimal) state space model (12.1). Then

 1. ‖H(s)‖∞ < ∞ if and only if the eigenvalues λ(A) ⊂ ℂ⁻;

 2. ‖H(s)‖∞ < γ if and only if there exists a stabilizing solution K of the Riccati equation (12.8).

How does this result convert into an algorithm to compute the H∞ norm of a transfer function? The following bisection type of algorithm is in general extremely fast:

Algorithm 12.4 INPUT: stopping criterion ε > 0 and two numbers γ_l, γ_h satisfying γ_l < ‖H(s)‖∞ < γ_h.

 Step 1. Set γ = (γ_l + γ_h)/2.
 Step 2. Verify whether (12.8) admits a stabilizing solution.
 Step 3. If so, set γ_h = γ. If not, set γ_l = γ.
 Step 4. Put ε̂ = γ_h − γ_l.
 Step 5. If ε̂ ≤ ε then STOP, else go to Step 1.

OUTPUT: the value γ, approximating the H∞ norm of H(s) within ε.

The second step of this algorithm involves the investigation of the existence of stabilizing solutions of (12.8), which is a standard routine in Matlab. We will not go into the details of an efficient algebraic implementation of the latter problem. What is of crucial importance here, though, is the fact that the computation of the H∞ norm of a transfer function (just like the computation of the H₂ norm of a transfer function) has been transformed into an algebraic problem. This allows a fast and extremely reliable way to compute these system norms.
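As an illustration of Algorithm 12.4, the bisection loop can be sketched in a few lines of Python (our own sketch, not the course software). For the test in Step 2 it uses an equivalent algebraic condition: for D = 0, ‖H(s)‖∞ < γ precisely when the Hamiltonian matrix [A, γ⁻²BBᵀ; −CᵀC, −Aᵀ] has no purely imaginary eigenvalues.

```python
import numpy as np

def hinf_norm_bisect(A, B, C, gl, gh, eps=1e-4):
    """Algorithm 12.4 for H(s) = C (sI - A)^(-1) B with D = 0.

    gl and gh must satisfy gl < ||H||_inf < gh."""
    A, B, C = (np.atleast_2d(np.asarray(M, float)) for M in (A, B, C))
    while gh - gl > eps:                        # Steps 4-5
        g = 0.5 * (gl + gh)                     # Step 1
        # Step 2: Hamiltonian test, equivalent to the existence of a
        # stabilizing solution of the Riccati equation (12.8).
        Ham = np.block([[A, g ** -2 * B @ B.T],
                        [-C.T @ C, -A.T]])
        on_axis = np.min(np.abs(np.linalg.eigvals(Ham).real)) < 1e-8
        if on_axis:
            gl = g                              # Step 3: ||H||_inf >= g
        else:
            gh = g                              # Step 3: ||H||_inf < g
    return 0.5 * (gl + gh)

# H(s) = 1/(s + 1) has H-infinity norm 1, attained at frequency 0.
print(hinf_norm_bisect([[-1.0]], [[1.0]], [[1.0]], gl=0.5, gh=2.0))
```

The number of Riccati (here: eigenvalue) tests grows only logarithmically in the requested accuracy, which is why the bisection scheme is so fast in practice.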


12.3 The computation of H₂ optimal controllers

The computation of H₂ optimal controllers is not a subject of this course. In fact, H₂ optimal controllers coincide with the well-known LQG controllers, which some of you may be familiar with from earlier courses. However, for the sake of completeness we treat the controller structure of H₂ optimal controllers once more in this section.

We consider the general control configuration as depicted in Figure 12.1.

[Figure 12.1: General control configuration. The generalized plant G maps the inputs (w, u) to the outputs (z, y); the controller K maps y to u.]

Here, w are the exogenous inputs (disturbances, noise signals, reference inputs), u denotes the control inputs, z is the to-be-controlled output signal and y denotes the measurements. The generalized plant G is supposed to be given, whereas the controller K needs to be designed. Admissible controllers are all linear time-invariant systems K that internally stabilize the configuration of Figure 12.1. Every such admissible controller K gives rise to a closed-loop system which maps the disturbance inputs w to the to-be-controlled output variables z. Precisely, if M denotes the closed-loop transfer function M : w ↦ z, then, with the obvious partitioning of G,

    M = G₁₁ + G₁₂K(I − G₂₂K)⁻¹G₂₁.

The H₂ optimal control problem is formalized as follows:

    Synthesize a stabilizing controller K for the generalized plant G such that ‖M‖₂ is minimal.

The solution of this important problem is split into two independent sub-problems and makes use of a separation structure:

 • First, obtain an "optimal estimate" x̂ of the state variable x, based on the measurements y.
 • Second, use this estimate x̂ as if the controller had perfect knowledge of the full state x of the system.

As is well known, the Kalman filter is the optimal solution to the first problem and the state feedback linear quadratic regulator is the solution to the second problem. We devote a short discussion to these two sub-problems.

Let the transfer function G be described in state space form by the equations

    ẋ = Ax + B₁w₁ + B₂u
    z = C₁x + Du            (12.12)
    y = C₂x + w₂

where the disturbance input w = (w₁; w₂) is assumed to be partitioned into a component w₁ acting on the state (the process noise) and an independent component w₂ representing measurement noise. In (12.12) we assume that the system G has no direct feedthrough in the transfers w → z (otherwise M ∉ H₂) and u → y (mainly to simplify the formulas below). We further assume that the pair (A, C₂) is detectable and that the pair (A, B₂) is stabilizable. The latter two conditions are necessary to guarantee the existence of stabilizing controllers. All these conditions are easy to grasp if we compare the set of equations (12.12) with the LQG problem definition as proposed e.g. in the course "Modern Control Theory". Consider Fig. 12.2.

[Figure 12.2: The LQG problem. The plant B[sI − A]⁻¹ with output map C; the noise v = R_v^{1/2}w₁ enters at the state and the noise w = R_w^{1/2}w₂ at the measurement, while the weighted signals x̃ = Q^{1/2}x and ũ = R^{1/2}u form the to-be-controlled output.]

Here v and w are independent, white, Gaussian noises of variance respectively R_v and R_w. They represent the direct state disturbance and the measurement noise. In order to cope with the requirement of equal variances of the inputs, they are inversely scaled by the blocks R_v^{−1/2} and R_w^{−1/2} to obtain inputs w₁ and w₂ that have unit variances. The output of this augmented plant is defined by

    z = [ Q^{1/2} x
          R^{1/2} u ]

in order to accomplish that

    ‖z‖₂² = ∫₀^∞ { xᵀQx + uᵀRu } dt

(compare the forthcoming equation (12.16)). The other inputs and outputs are given by

    u = u,   y = y,   w = (w₁; w₂).

The resulting state space description then is

    G = [ A        R_v^{1/2}  0          B
          Q^{1/2}  0          0          0
          0        0          0          R^{1/2}
          C        0          R_w^{1/2}  0       ]

which is to be compared with the packed form of (12.12),

    G = [ A   B₁  0  B₂
          C₁  0   0  D
          C₂  0   I  0  ].

The celebrated Kalman filter is a causal, linear mapping taking the control input u and the measurements y as its inputs, and producing an estimate x̂ of the state x in such a way that the H₂ norm of the transfer function from the noise w to the estimation error e = x − x̂ is minimal. Thus, using our deterministic interpretation of the H₂ norm of a transfer function, the Kalman filter is the optimal filter in the configuration of Figure 12.3 for which the L₂ norm of the impulse response of the estimator M_e : w ↦ e is minimized.

[Figure 12.3: The Kalman filter configuration. The plant, driven by w and u, produces the measurements y and the state x; the filter, driven by u and y, produces the estimate x̂; the estimation error is e = x − x̂ (the plant output z is not used).]

It is implemented as follows.

Theorem 12.5 (The Kalman filter.) Let the system (12.12) be given and assume that the pair (A, C₂) is detectable. Then

 1. the optimal filter which minimizes the H₂ norm of the mapping M_e : w → e in the configuration of Figure 12.3 is given by the state space equations

        dx̂/dt(t) = A x̂(t) + B₂u(t) + H(y(t) − C₂x̂(t))    (12.13)
                 = (A − HC₂) x̂(t) + B₂u(t) + H y(t)       (12.14)

    where H = YC₂ᵀ and Y is the unique square symmetric solution of

        0 = AY + YAᵀ − YC₂ᵀC₂Y + B₁B₁ᵀ                    (12.15)

    which has the property that λ(A − HC₂) ⊂ ℂ⁻.

 2. The minimal H₂ norm of the transfer M_e : w ↦ e is given by ‖M_e‖₂² = trace Y.

The solution Y of the Riccati equation (12.15), or the gain matrix H = YC₂ᵀ, is sometimes referred to as the Kalman gain of the filter (12.13). Note that Theorem 12.5 is put completely in a deterministic setting: no stochastics are necessary here.
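Numerically, the stabilizing solution Y of (12.15) can be extracted from the stable invariant subspace of an associated Hamiltonian matrix. A Python sketch (our own helper names; the Matlab routines of Section 12.7 do this more robustly via a Schur decomposition):

```python
import numpy as np

def kalman_gain(A, B1, C2):
    """Stabilizing solution Y of 0 = AY + YA^T - YC2^T C2 Y + B1 B1^T
    (equation (12.15)) and the Kalman gain H = Y C2^T."""
    n = A.shape[0]
    # Hamiltonian matrix of the filter Riccati equation (12.15).
    Ham = np.block([[A.T, -C2.T @ C2],
                    [-B1 @ B1.T, -A]])
    w, V = np.linalg.eig(Ham)
    U = V[:, w.real < 0]                 # basis of the stable subspace
    Y = np.real(U[n:, :] @ np.linalg.inv(U[:n, :]))
    Y = 0.5 * (Y + Y.T)                  # symmetrize against round-off
    return Y, Y @ C2.T

# Double integrator driven by unit process noise on both states.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.eye(2)
C2 = np.array([[1.0, 0.0]])
Y, H = kalman_gain(A, B1, C2)
# The estimation error dynamics A - H C2 must be stable, as in Theorem 12.5.
print(np.linalg.eigvals(A - H @ C2))
```

For this example the exact solution is Y = [[√3, 1], [1, √3]], so H = (√3, 1)ᵀ, which the sketch reproduces to machine precision.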

For our second sub-problem we assume perfect knowledge of the state variable. That is, we assume that the controller has access to the state x of (12.12), and our aim is to find a state feedback control law of the form

    u(t) = F x(t)


such that the H₂ norm of the state-controlled closed-loop transfer function M_x : w ↦ z is minimized. In this sub-problem the measurements y and the measurement noise w₂ evidently do not play a role. Since ‖M_x‖₂ is equal to the L₂ norm of the corresponding impulse response, our aim is therefore to find a control input u which minimizes the criterion

    ‖z‖₂² = ∫₀^∞ [ xᵀ(t)C₁ᵀC₁x(t) + 2uᵀ(t)DᵀC₁x(t) + uᵀ(t)DᵀDu(t) ] dt    (12.16)

subject to the system equations (12.12). Minimization of equation (12.16) yields the so-called quadratic regulator and supposes only an initial value x(0) and no inputs w. The solution is independent of the initial value x(0), and thus such an initial value can also be accomplished by Dirac pulses on w₁. This is similar to the equivalence of the quadratic regulator problem and the stochastic regulator problem as discussed in the course "Modern Control Theory". The final solution is as follows:

Theorem 12.6 (The state feedback regulator.) Let the system (12.12) be given and assume that (A, B₂) is stabilizable. Then

 1. the optimal state feedback regulator which minimizes the H₂ norm of the transfer M_x : w → z is given by

        u(t) = F x(t) = −[DᵀD]⁻¹(B₂ᵀX + DᵀC₁) x(t)    (12.17)

    where X is the unique square symmetric solution of

        0 = AᵀX + XA − (B₂ᵀX + DᵀC₁)ᵀ[DᵀD]⁻¹(B₂ᵀX + DᵀC₁) + C₁ᵀC₁    (12.18)

    which has the property that λ(A + B₂F) ⊂ ℂ⁻.

 2. The minimal H₂ norm of the transfer M_x : w ↦ z is given by ‖M_x‖₂² = trace B₁ᵀXB₁.

The result of Theorem 12.6 is easily derived by a completion-of-the-squares argument applied to the criterion (12.16). If X satisfies the Riccati equation (12.18), then a straightforward exercise in first-year linear algebra gives you that

    ‖z‖₂² = x₀ᵀXx₀ + ‖ u + [DᵀD]⁻¹(B₂ᵀX + DᵀC₁)x ‖²_{DᵀD}

where X is the unique solution of the Riccati equation (12.18) and where we used the notation of (12.10). From the latter expression it is immediate that ‖z‖₂ is minimized if u is chosen as in (12.17).
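The gain (12.17) can be computed along the same lines as the Kalman gain, via the stable invariant subspace of a Hamiltonian matrix. A Python sketch (our own; the cross term DᵀC₁ of (12.18) is absorbed by the standard substitution Ā = A − B₂R⁻¹DᵀC₁, Q̄ = C₁ᵀC₁ − C₁ᵀDR⁻¹DᵀC₁ with R = DᵀD):

```python
import numpy as np

def h2_state_feedback(A, B2, C1, D):
    """Gain F of (12.17) from the Riccati equation (12.18)."""
    n = A.shape[0]
    R = D.T @ D
    Ri = np.linalg.inv(R)
    Abar = A - B2 @ Ri @ D.T @ C1              # absorb the cross term
    Qbar = C1.T @ C1 - C1.T @ D @ Ri @ D.T @ C1
    S = B2 @ Ri @ B2.T
    w, V = np.linalg.eig(np.block([[Abar, -S], [-Qbar, -Abar.T]]))
    U = V[:, w.real < 0]                       # stable invariant subspace
    X = np.real(U[n:, :] @ np.linalg.inv(U[:n, :]))
    X = 0.5 * (X + X.T)
    return -Ri @ (B2.T @ X + D.T @ C1), X

# Double integrator, z = (x1; u): C1 picks x1, D weights u with unit weight.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0], [0.0, 0.0]])
D = np.array([[0.0], [1.0]])
F, X = h2_state_feedback(A, B2, C1, D)
print(F)   # classic double-integrator regulator, approximately [-1, -sqrt(2)]
```

For this example the exact Riccati solution is X = [[√2, 1], [1, √2]], so F = −(1, √2), and the closed loop A + B₂F is stable as required by Theorem 12.6.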

The optimal solution of the H₂ optimal control problem is now obtained by combining the Kalman filter with the optimal state feedback regulator. The so-called certainty equivalence principle or separation principle⁵ implies that an optimal controller K which minimizes ‖M(s)‖₂ is obtained by

    replacing the state x in the state feedback regulator (12.17) by the Kalman filter estimate x̂ generated in (12.13).

⁵ The word 'principle' is an incredible misnomer at this place for a result which requires rigorous mathematical deduction.

The separation structure of the optimal H₂ controller is depicted in Figure 12.4.

[Figure 12.4: Separation structure for H₂ optimal controllers. Inside K(s), a filter maps (u, y) to the estimate x̂ and the regulator gain maps x̂ to the control input u.]

In equations, the optimal H₂ controller K is represented in state space form by

    dx̂/dt(t) = (A + B₂F − HC₂) x̂(t) + H y(t)
    u(t) = F x̂(t)                                 (12.19)

where the gains H and F are given as in Theorem 12.5 and Theorem 12.6.
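The wiring in (12.19) itself involves no further computation: given F and H, the controller is just an observer with state feedback. A minimal Python sketch, where the plant and the gains are our own illustrative double-integrator numbers (F = [−1, −√2] and H = (√3, 1)ᵀ), not output of the course software:

```python
import numpy as np

# Plant (12.12) data: double integrator, position measurement.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])

# Gains as in Theorems 12.5 and 12.6 (illustrative values assumed here).
F = np.array([[-1.0, -(2 ** 0.5)]])
H = np.array([[3 ** 0.5], [1.0]])

# Controller (12.19): xhat' = (A + B2 F - H C2) xhat + H y,  u = F xhat.
Ac = A + B2 @ F - H @ C2
Bc, Cc = H, F

# Closed loop of plant and controller; by the separation principle its
# eigenvalues are those of A + B2 F together with those of A - H C2.
Acl = np.block([[A, B2 @ Cc], [Bc @ C2, Ac]])
print(np.linalg.eigvals(Acl))
```

The printed closed-loop eigenvalues split exactly into the regulator eigenvalues and the filter eigenvalues, which is the separation structure of Figure 12.4 in numerical form.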

12.4 The computation of H∞ optimal controllers

In this section we first present the main algorithm behind the computation of H∞ optimal controllers. From Section 12.2 we learned that the H∞ norm of a transfer function is characterized in terms of the existence of a particular solution to an algebraic Riccati equation. It should therefore not be a surprise⁶ that the computation of H∞ optimal controllers also hinges on the computation of specific solutions of Riccati equations. In this section we present the main algorithm and we resist the temptation to go into the details of its derivation. The background and the main ideas behind the algorithms are very similar to the ideas behind the derivation of Theorem 12.3 and the cost criterion (12.6). We defer this background material to the next section.

⁶ Although it took about ten years of research!

We consider again the general control configuration as depicted in Figure 12.1, with the same interpretation of the signals as given in the previous section. All variables may be multivariable. The block G denotes the "generalized system" and typically includes a model of the plant P together with all weighting functions which are specified by the 'user'. The block K denotes the "generalized controller" and typically includes a feedback controller and/or a feedforward controller. The block G contains all the 'known' features (plant model, input weightings, output weightings and interconnection structures); the block K needs to be designed. Admissible controllers are all linear, time-invariant systems K that internally stabilize the configuration of Figure 12.1. Every such admissible controller K gives rise to a closed-loop system which maps the disturbance inputs w to the to-be-controlled output variables z. Precisely, if M denotes the closed-loop transfer function M : w ↦ z, then, with the obvious partitioning of G,

    M = G₁₁ + G₁₂K(I − G₂₂K)⁻¹G₂₁

and the H∞ control problem is formalized as follows:

    Synthesize a stabilizing controller K such that ‖M‖∞ < γ

for some value of γ > 0.⁷

⁷ Strictly speaking, this is a suboptimal H∞ control problem. The optimal H∞ control problem amounts to minimizing ‖M(s)‖∞ over all stabilizing controllers K. Precisely, if γ₀ := inf_{K stabilizing} ‖M(s)‖∞, then the optimal control problem is to determine γ₀ and an optimal K that achieves this minimal norm. However, this problem is very hard to solve in this general setting.

Note that already at this stage of formalizing the H∞ control problem we can see that the solution of the problem is necessarily going to be of a 'testing type'. The synthesis algorithm will require us to


 • choose a value of γ > 0;
 • see whether there exists a controller K such that ‖M(s)‖∞ < γ;
 • if yes, then decrease γ; if no, then increase γ.

To solve this problem, consider the generalized system G and let

    ẋ = Ax + B₁w + B₂u
    z = C₁x + D₁₁w + D₁₂u    (12.20)
    y = C₂x + D₂₁w + D₂₂u

be a state space description of G. Thus,

    G₁₁(s) = C₁(Is − A)⁻¹B₁ + D₁₁    G₁₂(s) = C₁(Is − A)⁻¹B₂ + D₁₂
    G₂₁(s) = C₂(Is − A)⁻¹B₁ + D₂₁    G₂₂(s) = C₂(Is − A)⁻¹B₂ + D₂₂.

With some sacrifice of generality we make the following assumptions.

 A-1 D₁₁ = 0 and D₂₂ = 0.
 A-2 The triple (A, B₂, C₂) is stabilizable and detectable.
 A-3 The triple (A, B₁, C₁) is stabilizable and detectable.
 A-4 D₁₂ᵀ(C₁ D₁₂) = (0 I).
 A-5 D₂₁(B₁ᵀ D₂₁ᵀ) = (0 I).

Assumption A-1 states that there is no direct feedthrough in the transfers w ↦ z and u ↦ y. The second assumption, A-2, implies that there are no unobservable or uncontrollable unstable modes in G₂₂. This assumption is precisely equivalent to saying that internally stabilizing controllers exist. Assumption A-3 is a technical assumption made on the transfer function G₁₁. Assumptions A-4 and A-5 are just scaling assumptions; they can easily be removed, but then all formulas and equations in the remainder of this chapter become considerably more complicated. Assumption A-4 simply requires that

    ‖z‖₂² = ∫₀^∞ |C₁x + D₁₂u|² dt = ∫₀^∞ ( xᵀC₁ᵀC₁x + uᵀu ) dt.

In the to-be-controlled output z we thus have a unit weight on the control input signal u, a weight C₁ᵀC₁ on the state x, and a zero weight on the cross terms involving u and x. Similarly, assumption A-5 claims that the state noise (or process noise) is independent of the measurement noise. With assumption A-5 we can partition the exogenous noise input w as w = (w₁; w₂), where w₁ only affects the state x and w₂ only affects the measurements y. The foregoing assumptions therefore require our state space model to take the form

    ẋ = Ax + B₁w₁ + B₂u
    z = C₁x + D₁₂u            (12.21)
    y = C₂x + w₂

where w = (w₁; w₂).

The synthesis of H∞ suboptimal controllers is based on the following two Riccati equations:

    0 = AᵀX + XA − X[B₂B₂ᵀ − γ⁻²B₁B₁ᵀ]X + C₁ᵀC₁    (12.22)
    0 = AY + YAᵀ − Y[C₂ᵀC₂ − γ⁻²C₁ᵀC₁]Y + B₁B₁ᵀ.   (12.23)

Observe that these define quadratic equations in the unknowns X and Y. The unknown matrices X and Y are symmetric and both have dimension n × n, where n is the dimension of the state space of (12.21). The quadratic terms are indefinite in both equations (both quadratic terms consist of the difference of two non-negative definite matrices), and we moreover observe that both equations (and hence their solutions) depend on the value of γ. We will be particularly interested in the so-called stabilizing solutions of these equations. We call a symmetric matrix X a stabilizing solution of (12.22) if the eigenvalues

    λ(A − B₂B₂ᵀX + γ⁻²B₁B₁ᵀX) ⊂ ℂ⁻.

Similarly, a symmetric matrix Y is called a stabilizing solution of (12.23) if

    λ(A − YC₂ᵀC₂ + γ⁻²YC₁ᵀC₁) ⊂ ℂ⁻.

It can be shown that whenever stabilizing solutions X or Y of (12.22) or (12.23) exist, they are unique. In other words, there exists at most one stabilizing solution X of (12.22) and at most one stabilizing solution Y of (12.23). However, because these Riccati equations are indefinite in their quadratic terms, it is not at all clear that stabilizing solutions exist at all. The following result is the main result of this section, and it has been considered one of the main contributions to optimal control theory of the last ten years.⁸

⁸ We hope you like it…

Theorem 12.7 Under the conditions A-1 till A-5 there exists an internally stabilizing controller K that achieves

    ‖M(s)‖∞ < γ

if and only if

 1. equation (12.22) has a stabilizing solution X = Xᵀ ≥ 0;
 2. equation (12.23) has a stabilizing solution Y = Yᵀ ≥ 0;
 3. ρ(XY) < γ².

Moreover, in that case one such controller is given by

    ξ̇ = (A + γ⁻²B₁B₁ᵀX)ξ + B₂u + ZH(y − C₂ξ)
    u = Fξ                                         (12.24)

where ξ denotes the controller state and

    F := −B₂ᵀX,   H := YC₂ᵀ,   Z := (I − γ⁻²YX)⁻¹.
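The three conditions of Theorem 12.7 are easy to test numerically once a stabilizing-solution solver is available. The following Python sketch (our own; a plain eigenvector method is used where the Matlab routine hinf of Section 12.7 uses a more robust Schur form) checks the conditions for a scalar example and forms the gains of (12.24):

```python
import numpy as np

def stab_solution(Abar, S, Q):
    """Stabilizing solution of 0 = Abar^T X + X Abar - X S X + Q, if it
    exists, from the stable invariant subspace of the Hamiltonian matrix."""
    n = Abar.shape[0]
    w, V = np.linalg.eig(np.block([[Abar, -S], [-Q, -Abar.T]]))
    U = V[:, w.real < 0]
    X = np.real(U[n:, :] @ np.linalg.inv(U[:n, :]))
    return 0.5 * (X + X.T)

def hinf_feasible(A, B1, B2, C1, C2, g):
    """Conditions 1-3 of Theorem 12.7 and the gains F, H, Z of (12.24)."""
    X = stab_solution(A, B2 @ B2.T - g ** -2 * B1 @ B1.T, C1.T @ C1)    # (12.22)
    Y = stab_solution(A.T, C2.T @ C2 - g ** -2 * C1.T @ C1, B1 @ B1.T)  # (12.23)
    ok = (np.all(np.linalg.eigvalsh(X) >= -1e-10) and          # condition 1
          np.all(np.linalg.eigvalsh(Y) >= -1e-10) and          # condition 2
          np.max(np.abs(np.linalg.eigvals(X @ Y))) < g ** 2)   # condition 3
    F, H = -B2.T @ X, Y @ C2.T
    Z = np.linalg.inv(np.eye(A.shape[0]) - g ** -2 * Y @ X)
    return ok, X, Y, F, H, Z

one = np.array([[1.0]])
ok, X, Y, F, H, Z = hinf_feasible(-one, one, one, one, one, g=10.0)
print(ok, X)   # feasible; X is about 0.4148
```

For this scalar example (12.22) reduces to 0.99·X² + 2X − 1 = 0, whose positive root is roughly 0.4148, and by symmetry Y = X, so ρ(XY) ≈ 0.17 is far below γ² = 100.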


A few crucial observations need to be made.

 • Theorem 12.7 claims that three algebraic conditions need to be checked before we can conclude that there exists a stabilizing controller K which achieves that the closed-loop transfer function M has H∞ norm less than γ. Once these conditions are satisfied, one possible controller is given explicitly by the equations (12.24), which we put in observer form.

 • Note that the dynamic order of this controller is equal to the dimension n of the state space of the generalized system G. Incorporating high-order weighting filters in the internal structure of G therefore results in high-order controllers, which may be undesirable. The controller (12.24) has the block structure depicted in Figure 12.5. This diagram shows that the controller consists of a dynamic observer, which computes a state vector ξ on the basis of the measurements y and the control input u, and a memoryless feedback F, which maps ξ to the control input u.

 • It is interesting to compare the Riccati equations of Theorem 12.7 with those which determine the H₂ optimal controller. In particular, we emphasize that the presence of the indefinite quadratic terms in (12.22) and (12.23) is a major complication in guaranteeing the existence of solutions to these equations. If we let γ → ∞, the indefinite quadratic terms in (12.22) and (12.23) become definite in the limit, and in the limit the equations (12.22) and (12.23) coincide with the Riccati equations of the previous section.

[Figure 12.5: Separation structure for H∞ controllers. Inside K(s), an H∞ filter maps (u, y) to the observer state ξ and the memoryless gain F maps ξ to the control input u.]

A transfer function K(s) of the controller is easily derived from (12.24) and takes the explicit state space form

    ξ̇ = (A + γ⁻²B₁B₁ᵀX + B₂F − ZHC₂)ξ + ZHy
    u = Fξ                                         (12.25)

which defines the desired map K : y ↦ u.

Summarizing, the H∞ control synthesis algorithm looks as follows:

Algorithm 12.8 INPUT: generalized plant G in state space form (13.16) or (12.21); tolerance level ε > 0. ASSUMPTIONS: A-1 till A-5.

 Step 1. Find γ_l, γ_h such that M : w ↦ z satisfies γ_l < ‖M(s)‖∞ < γ_h.
 Step 2. Let γ := (γ_l + γ_h)/2 and verify whether there exist matrices X = Xᵀ and Y = Yᵀ satisfying the conditions 1-3 of Theorem 12.7.
 Step 3. If so, then set γ_h = γ. If not, then set γ_l = γ.
 Step 4. Put ε̂ = γ_h − γ_l.
 Step 5. If ε̂ > ε then go to Step 2.
 Step 6. Put γ = γ_h and let

    ξ̇ = (A + γ⁻²B₁B₁ᵀX + B₂F − ZHC₂)ξ + ZHy
    u = Fξ

 define the state space equations of the controller K(s).

OUTPUT: K(s) defines a stabilizing controller which achieves ‖M(s)‖∞ < γ.

12.5 The state feedback H∞ control problem

The results of the previous section cannot be fully appreciated without further system-theoretic insight into the main results. In this section we therefore treat the state feedback H∞ optimal control problem, which is a special case of Theorem 12.7 and which provides quite some insight into the structure of optimal H∞ control laws.

In this section we assume that the controller K(s) has access to the full state x, i.e., we assume that the measurements y = x, and we wish to design a controller K(s) for which the closed-loop transfer function, here alternatively indicated by M_x : w ↦ z, satisfies ‖M_x‖∞ < γ. The procedure to obtain such a controller is basically an interesting extension of the arguments we put forward in Section 12.2.

The criterion (12.6) defined in Section 12.2 only depends on the initial condition x₀ of the state and the input w of the system (12.1). Since we are now dealing with the system (12.21) with state measurements (y = x) and two inputs u and w, we should treat the criterion

    J(x₀, u, w) := ‖z‖₂² − γ²‖w‖₂² = ∫₀^∞ ( |z(t)|² − γ²|w(t)|² ) dt    (12.26)

as a function of the initial state x₀ and of both the control input u and the disturbance input w. Here z is of course the output of the system (12.21) when the inputs u and w are applied and the initial state x(0) is taken to be x₀.

We will view the criterion (12.26) as a game between two players. One player, u, aims to minimize the criterion J, while the other player, w, aims to maximize it.⁹ We call a pair of strategies (u*, w*) optimal with respect to the criterion J(x₀, u, w) if for all u ∈ L₂ and w ∈ L₂ the inequalities

    J(x₀, u*, w) ≤ J(x₀, u*, w*) ≤ J(x₀, u, w*)    (12.27)

⁹ Just like a soccer match where, instead of administrating the number of goals of each team, the difference between the numbers of goals is taken as the relevant performance criterion. After all, this is the only criterion which counts at the end of a soccer game…


are satisfied. Such a pair (u*, w*) defines a saddle point for the criterion J. We may think of u* as a best control strategy, while w* is the worst exogenous input. The existence of such a saddle point is guaranteed by the solutions X of the Riccati equation (12.22). Specifically, under the assumptions made in the previous section, for any solution X of (12.22) a completion-of-the-squares argument will give you that, for all pairs (u, w) of (square integrable) inputs of the system (12.21) for which lim_{t→∞} x(t) = 0, there holds

    J(x₀, u, w) = x₀ᵀXx₀ + ‖ u + B₂ᵀXx ‖₂² − γ² ‖ w − γ⁻²B₁ᵀXx ‖₂².    (12.28)

Thus, if both 'players' u and w have access to the state x of (12.21), then (12.28) immediately gives us a saddle point

    u*(t) := −B₂ᵀX x(t)
    w*(t) := γ⁻²B₁ᵀX x(t)

which satisfies the inequalities (12.27). We see that in that case the saddle point value is

    J(x₀, u*, w*) = x₀ᵀXx₀,

which gives a nice interpretation of the solution X of the Riccati equation (12.22). Now, taking the initial state x₀ = 0 gives that the saddle point value J(0, u*, w*) = 0, which, by (12.27), gives that for all w ∈ L₂

    J(0, u*, w) ≤ J(0, u*, w*) = 0.

As in Section 12.2 it thus follows that the closed-loop system M_x : w ↦ z obtained by applying the static state feedback controller

    u*(t) = −B₂ᵀX x(t)

results in ‖M_x(s)‖∞ ≤ γ. We moreover see from this analysis that the worst-case disturbance is generated by w*.
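For a scalar example the bound ‖M_x‖∞ ≤ γ produced by the feedback u* = −B₂ᵀXx can be verified directly on a frequency grid. A Python sketch with our own illustrative numbers (A = −1, B₁ = B₂ = C₁ = 1 and γ = 2, with z = (C₁x; u) as in assumption A-4):

```python
import numpy as np

A, B1, B2, C1, g = -1.0, 1.0, 1.0, 1.0, 2.0

# Stabilizing solution of the scalar version of (12.22):
# 0 = 2 A X - (B2^2 - B1^2 / g^2) X^2 + C1^2.
s = B2 ** 2 - B1 ** 2 / g ** 2
X = (A + np.sqrt(A ** 2 + s * C1 ** 2)) / s     # positive root
Acl = A - B2 ** 2 * X                           # closed loop with u* = -B2 X x

# M_x(jw) maps w to z = (C1 x ; u*) with u* = -B2 X x.
w = np.linspace(0.0, 50.0, 2001)
gain = np.sqrt(C1 ** 2 + (B2 * X) ** 2) * np.abs(B1) / np.abs(1j * w - Acl)
print(X, gain.max())   # the peak stays below g = 2
```

Here X ≈ 0.4305, and the peak gain (attained at ω = 0) is roughly 0.76, comfortably below γ = 2, in agreement with the saddle-point analysis above.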

12.6 The H∞ filtering problem

Just like we split the optimal H₂ control problem into a state feedback problem and a filtering problem, the H∞ control problem admits a similar separation. The H∞ filtering problem is the subject of this section and can be formalized as follows. We reconsider the state space equations (12.21):

    ẋ = Ax + B₁w₁ + B₂u
    z = C₁x + D₁₂u            (12.29)
    y = C₂x + w₂

under the same conditions as in the previous section.

Just like the Kalman filter, the H∞ filter is a causal, linear mapping taking the control input u and the measurements y as its inputs, and producing an estimate ẑ of the signal z in such a way that the H∞ norm of the transfer function from the noise w to the estimation error e = z − ẑ is minimal. Thus, in the configuration of Figure 12.6, we wish to design a filter mapping (u, y) ↦ ẑ such that for the overall configuration with transfer function M_e : w ↦ e the H∞ norm

    ‖M_e(s)‖∞² = sup_{w₁,w₂ ∈ L₂} ‖e‖₂² / ( ‖w₁‖₂² + ‖w₂‖₂² )    (12.30)

is less than or equal to some pre-specified value γ².

[Figure 12.6: The H∞ filter configuration. The plant maps (w, u) to (z, y); the filter maps (u, y) to the estimate ẑ; the estimation error is e = z − ẑ.]

The solution to this problem is entirely dual to the solution of the state feedback H∞ problem and is given in the following theorem.

Theorem 12.9 (The H∞ filter.) Let the system (12.29) be given and assume that the assumptions A-1 till A-5 hold. Then

 1. there exists a filter which achieves that the mapping M_e : w → e in the configuration of Figure 12.6 satisfies

        ‖M_e‖∞ < γ

    if and only if the Riccati equation (12.23) has a stabilizing solution Y = Yᵀ ≥ 0.

 2. In that case one such filter is given by the equations

        ξ̇ = Aξ + B₂u + H(y − C₂ξ)
        ẑ = C₁ξ + D₁₂u                 (12.31)

    where H = YC₂ᵀ.

Let us make a few important observations.

 • We emphasize again that this filter design is carried out completely in a deterministic setting. The matrix H is generally referred to as the H∞ filter gain and clearly depends on the value of γ (since Y depends on γ).

 • It is important to observe that, in contrast to the Kalman filter, the H∞ filter depends on the to-be-estimated signal. This is because the matrix C₁, which explicitly defines the to-be-estimated signal z, appears in the Riccati equation (12.23). The resulting filter therefore depends on the to-be-estimated signal.


12.7 Computational aspects

The Robust Control Toolbox in Matlab includes various routines for the computation of H₂ optimal and H∞ optimal controllers. These routines implement the algorithms described in this chapter.

The relevant routine in this toolbox for H₂ optimal control synthesis is h2lqg. This routine takes the parameters of the state space model (12.12), or of the more general state space model (13.16) (which it converts to (12.12)), as its input arguments and produces the state space matrices (A_c, B_c, C_c, D_c) of the optimal H₂ controller, as defined in (12.19), as its outputs. If desired, this routine also produces the state space description of the corresponding closed-loop transfer function M as its output. (See the corresponding help file.)

For H∞ optimal control synthesis, the Robust Control Toolbox includes an efficient implementation of the result mentioned in Theorem 12.7. The Matlab routine hinf takes the state space parameters of the model (13.16) as its input arguments and produces the state space parameters of the so-called central controller, as specified by the formulae (12.24) in Theorem 12.7. The routine makes use of the two-Riccati solution presented above. The state space parameters of the corresponding closed-loop system can also be obtained as an optional output argument. The Robust Control Toolbox provides features to quickly generate augmented plants which incorporate suitable weighting filters. An efficient use of these routines, however, requires quite some programming effort in Matlab. Although we consider this an excellent exercise, it is not really the purpose of this course. The package MHC (Multivariable H∞ Control Design) has been written as part of a PhD study by one of the students of the Measurement and Control Group at TUE and has been customized for easy experimentation with filter design. During this course we will give a software demonstration of this package.

given by<br />
A = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 −ω² −2ζω],  B = [0; 1.7319·10⁻⁵; 0; 3.7859·10⁻⁴],  C = [1 0 1 0],  D = 0,<br />

12.8 Exercises<br />

where ω = 1.539 rad/sec is the frequency of the flexible mode and ζ = 0.003 is the flexural damping ratio. The nominal open loop poles are at<br />

Exercise 0. Take the first block scheme of the exercise of chapter 6. Define a mixed sensitivity problem where the performance is represented by good tracking. Filter We is low pass and has to be chosen. The robustness term is defined by a bounded additive model error: ‖ Wx⁻¹ ΔP ‖∞ < 1.<br />
Furthermore, ‖ r ‖2 < 1, P = (s − 1)/(s + 1) and Wx = s/(s + 3). What bandwidth can you obtain for the sensitivity being less than .01? Use the tool MHC!<br />

−0.0046 + 1.5390j, −0.0046 − 1.5390j, and a repeated pole at 0,<br />

and the finite zeros at<br />

Exercise 1. Write a routine h2comp in MATLAB which computes the H2 norm of a transfer function H(s). Let the state space parameters (A, B, C, D) be the input to this routine, and the H2 norm<br />

−0.0002 + 0.3219j, −0.0002 − 0.3219j.<br />
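As a quick cross-check of the spacecraft model data quoted in this exercise, the open loop poles can be recomputed from the A matrix. A minimal sketch in Python/NumPy rather than MATLAB (the numerical values ω = 1.539 and ζ = 0.003 are from the text):

```python
import numpy as np

# Flexible-mode data from the text
omega, zeta = 1.539, 0.003

# State space A matrix of the spacecraft model
A = np.array([[0.0, 1.0, 0.0,       0.0],
              [0.0, 0.0, 0.0,       0.0],
              [0.0, 0.0, 0.0,       1.0],
              [0.0, 0.0, -omega**2, -2.0 * zeta * omega]])

poles = np.linalg.eigvals(A)
poles = poles[np.argsort(poles.imag)]  # reproducible ordering
print(poles)
```

The computed eigenvalues reproduce the repeated pole at 0 (the rigid-body double integrator) and the lightly damped pair −ζω ± jω√(1 − ζ²) ≈ −0.0046 ± 1.5390j.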

Because of the highly flexible nature of this system, the use of control torque for attitude control can lead to excitation of the lightly damped flexural modes and hence loss of control. It is therefore desired to design a feedback controller which increases the system damping and maintains a specified pointing accuracy. That is, variations in the roll angle are to be limited in the face of torque disturbances. In addition the stiffness of the structure is uncertain, and the natural frequency, ω, can only be approximately estimated. Hence, it is desirable that the closed loop be robustly stable to variations in this parameter.<br />

‖ C(Is − A)⁻¹B + D ‖2<br />

its output. Build in sufficient checks on the matrices (A, B, C, D) to guarantee a `fool-proof' behavior of the routine.<br />
Hint: Use Theorem 12.1. See the help files of the routine lyap in the control system toolbox to solve the Lyapunov equation (12.2). The procedures abcdchk, minreal, eig, or obsv may prove helpful.<br />
The design objectives are as follows.<br />
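Exercise 1 asks for a MATLAB routine, but the same computation can be sketched in plain Python/NumPy. Below is a hypothetical h2comp along the lines of the hint, solving the Lyapunov equation AP + PAᵀ + BBᵀ = 0 via a Kronecker-product linear solve instead of MATLAB's lyap (a sketch, not the routine asked for in the exercise):

```python
import numpy as np

def h2comp(A, B, C, D):
    """H2 norm of C (sI - A)^-1 B + D via the controllability Gramian.

    Solves A P + P A^T + B B^T = 0 and returns sqrt(trace(C P C^T)).
    """
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    # basic 'fool-proof' checks, as asked for in the exercise
    if A.shape != (n, n) or B.shape[0] != n or C.shape[1] != n:
        raise ValueError("inconsistent state space dimensions")
    if np.any(D != 0):
        raise ValueError("H2 norm is infinite unless D = 0")
    if np.any(np.linalg.eigvals(A).real >= 0):
        raise ValueError("A must be stable, otherwise the H2 norm is infinite")
    # Lyapunov equation as a linear system:
    # (A (x) I + I (x) A) vec(P) = -vec(B B^T)
    I = np.eye(n)
    P = np.linalg.solve(np.kron(I, A) + np.kron(A, I),
                        -(B @ B.T).flatten()).reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

For the first order example G(s) = 1/(s + 1) the Gramian is P = 1/2, so h2comp returns √(1/2) ≈ 0.7071.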

Exercise 2. Write a block diagram for the optimal H2 controller and for the optimal H∞ controller.


196 CHAPTER 12. SOLUTION TO THE GENERAL H1 CONTROL PROBLEM<br />

12.8. EXERCISES 195<br />

1. Performance: the required pointing accuracy due to a 0.3 Nm step torque disturbance should be |y(t)| < 0.0007 rad for all t > 0. Additionally, the response time is required to be less than 1 minute (60 sec).<br />
2. <strong>Robust</strong> stability: stable response for about 10% variations in the natural frequency ω.<br />
3. <strong>Control</strong> level: control effort due to 0.3 Nm step torque disturbances should satisfy |u(t)| < 0.5 Nm.<br />

We will start the design by making a few simple observations.<br />
Verify that with a feedback control law u = −Cy the resulting closed-loop transfer U := (I + PC)⁻¹P maps the torque disturbance w to the roll angle y.<br />
Note that, to achieve a pointing accuracy of 0.0007 rad in the face of 0.3 Nm torque input disturbances, we require that U satisfies the condition<br />
σ̄(U) = σ̄((I + PC)⁻¹P) < 0.0007/0.3 = 0.0021 rad/Nm  (12.32)<br />
at least at low frequencies.<br />

Recall that, for a suitable weighting function W, we can achieve that |U(jω)| ≤ γ/|W(jω)| for all ω, where γ is the usual parameter in the `γ-iteration' of the H∞ optimization procedure.<br />

Consider the weighting filter<br />
Wk(s) = k (s + 0.4)/(s + 0.001)  (12.33)<br />
where k is a positive constant.<br />

1. Determine a value of k so as to achieve the required level of pointing accuracy in the H∞ design. Try to obtain a value of γ which is more or less equal to 1.<br />

Hint: Set up a scheme for H∞ controller design in which the output y + 10⁻⁵w is used as a measurement variable and in which the to-be-controlled variables are<br />
z = [ Wk y ; 10⁻⁵ u ]<br />
(the extra output is necessary to regularize the design). Use the MHC package to compute a suboptimal H∞ controller C which minimizes the H∞ norm of the closed loop transfer w ↦ z for various values of k > 0. Construct a 0.3 Nm step torque input disturbance w to verify whether your closed-loop system meets the pointing specification. See the MHC help facility to get more details.<br />

2. Let Wk be given by the filter (12.33) with k as determined in 1. Let V(s) be a second weighting filter and consider the weighted control sensitivity M := Wk U V = Wk (I + PC)⁻¹ P V. Choose V in such a way that an H∞ suboptimal controller C which minimizes ‖ M ‖∞ meets the design specifications.<br />
Hint: Use the same configuration as in part 1 and compute controllers C by using the package MHC and by varying the weighting filter V.<br />

3. After you complete the design phase, make Bode plots of the closed-loop response of the system and verify whether the specifications are met by perturbing the parameter ω and by plotting the closed-loop system responses of the signals u and y under step torque disturbances of 0.3 Nm.



example, by observing that the system is an interconnection of dissipative components, or<br />

by considering systems in which a loss of energy is inherent to the behavior of the system<br />

(due to friction, optical dispersion, evaporation losses, etc.).<br />

In this chapter we will formalize the notion of a dissipative dynamical system for the class of linear time-invariant systems. It will be shown that linear matrix inequalities (LMI's) occur in a very natural way in the study of linear dissipative systems. Solutions of these inequalities have a natural interpretation as storage functions associated with a dissipative dynamical system. This interpretation will play a key role in understanding the relation between LMI's and questions related to stability, robustness, and H∞ controller design. In recent years, linear matrix inequalities have emerged as a powerful tool to approach control problems that appear hard if not impossible to solve in an analytic fashion. Although the history of LMI's goes back to the forties, with a major emphasis on their role in control in the sixties (Kalman, Yakubovich, Popov, Willems), only recently have powerful numerical interior point techniques been developed to solve LMI's in a practically efficient manner (Nesterov, Nemirovskii 1994). Several Matlab software packages are available that allow a simple coding of general LMI problems and of those that arise in typical control problems (LMI <strong>Control</strong> Toolbox, LMI-tool).<br />

Chapter 13<br />
Solution to the general H∞ control problem<br />

13.1.2 Dissipativity<br />

Consider a continuous time, time-invariant dynamical system Σ described by the equations¹<br />
Σ : { ẋ = Ax + Bu,  y = Cx + Du }  (13.1)<br />

As usual, x is the state which takes its values in a state space X = Rⁿ, u is the input taking its values in an input space U = Rᵐ and y denotes the output of the system which assumes its values in the output space Y = Rᵖ. Let<br />
s : U × Y → R<br />
be a mapping and assume that for all time instances t0 ≤ t1 ∈ R and for all input-output pairs (u, y) satisfying (13.1) the function<br />

In previous chapters we have been mainly concerned with properties of control configurations in which a controller is designed so as to minimize the H∞ norm of a closed loop transfer function. So far, we did not address the question how such a controller is actually computed. This has been a problem of main concern in the early 80-s. Various mathematical techniques have been developed to compute H∞-optimal controllers, i.e., feedback controllers which stabilize a closed loop system and at the same time minimize the H∞ norm of a closed loop transfer function. In this chapter we treat a solution to a most general version of the H∞ optimal control problem. We will make use of a technique which is based on Linear Matrix Inequalities (LMI's). This technique is fast, simple, and at the same time a most reliable and efficient way to synthesize H∞ optimal controllers.<br />
This chapter is organized as follows. In the next section we first treat the concept of a dissipative dynamical system. We will see that linear dissipative systems are closely related to Linear Matrix Inequalities (LMI's) and we will subsequently show how the H∞ norm of a transfer function can be computed by means of LMI's. Finally, we consider the synthesis question of how to obtain a controller which stabilizes a given dynamical system so as to minimize the H∞ norm of the closed loop system. Proofs of theorems are included for completeness only. They are not part of the material of the course and can be skipped upon first reading of the chapter.<br />

s(t) := s(u(t), y(t))<br />
is locally integrable, i.e. ∫_{t0}^{t1} |s(t)| dt < ∞. The mapping s will be referred to as the supply function.<br />
13.1 Dissipative dynamical systems<br />
13.1.1 Introduction<br />

Definition 13.1 (Dissipativity) The system Σ with supply rate s is said to be dissipative if there exists a non-negative function V : X → R such that<br />
V(x(t0)) + ∫_{t0}^{t1} s(t) dt ≥ V(x(t1))  (13.2)<br />
for all t0 ≤ t1 and all trajectories (u, x, y) which satisfy (13.1).<br />
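The dissipation inequality (13.2) can be checked numerically on a trivial example. Below is a sketch (the system and storage function are illustrative choices, not from the text) for the scalar system ẋ = −x + u, y = x with supply rate s(u, y) = u·y and candidate storage V(x) = x²/2; since V̇ = −x² + uy ≤ uy, a forward Euler simulation should leave the inequality satisfied up to discretization error:

```python
import numpy as np

# Scalar example system: xdot = -x + u, y = x (a special case of (13.1)).
# Supply rate s(u, y) = u*y, candidate storage V(x) = x**2 / 2.
h, T = 1e-3, 5.0
t = np.arange(0.0, T, h)
u = np.sin(t)

x = 1.0                      # initial state x(t0)
V0 = 0.5 * x**2
supply = 0.0                 # running integral of s(u(t), y(t))
for uk in u:
    y = x
    supply += h * uk * y
    x = x + h * (-x + uk)    # forward Euler step

V1 = 0.5 * x**2
# dissipation inequality (13.2): V(x(t0)) + int s dt >= V(x(t1))
margin = V0 + supply - V1
print(margin)
```

The margin equals the dissipated energy ∫ x² dt along the trajectory, which is strictly positive here.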

¹ Much of what is said in this chapter can be applied to (much) more general systems of the form ẋ = f(x, u), y = g(x, u).<br />

The notion of dissipativity (or passivity) is motivated by the idea of energy dissipation in<br />

many physical dynamical systems. It is a most important concept in system theory and<br />

dissipativity plays a crucial role in many modeling questions. Especially in the physical<br />

sciences, dissipativity is closely related to the notion of energy. Roughly speaking, a<br />

dissipative system is characterized by the property that at any time the amount of energy<br />

which the system can conceivably supply to its environment can not exceed the amount<br />

of energy that has been supplied to it. Stated otherwise, when time evolves a dissipative<br />

system absorbs a fraction of its supplied energy and transforms it for example into heat,<br />

an increase of entropy, mass, electromagnetic radiation, or other kinds of energy `losses'.<br />

In many applications, the question whether a system is dissipative or not can be answered<br />

from physical considerations on the way the system interacts with its environment. For<br />

197



the two basic laws of thermodynamics state that for all system trajectories (T, Q, W) and all time instants t0 ≤ t1<br />
E(x(t0)) + ∫_{t0}^{t1} (Q(t) + W(t)) dt = E(x(t1))<br />
(which is conservation of thermodynamical energy) and the second law of thermodynamics states that the system trajectories satisfy<br />
S(x(t0)) + ∫_{t0}^{t1} (−Q(t)/T(t)) dt ≥ S(x(t1))<br />

Interpretation 13.2 The supply function (or supply rate) s should be interpreted as the supply delivered to the system. This means that in a time interval [0, t] work has been done on the system whenever ∫_0^t s(τ) dτ is positive, while work is done by the system if this integral is negative. The non-negative function V is called a storage function and generalizes the notion of an energy function for a dissipative system. With this interpretation, inequality (13.2) formalizes the intuitive idea that a dissipative system is characterized by the property that the change of internal storage (V(x(t1)) − V(x(t0))) in any time interval [t0, t1] will never exceed the amount of supply that flows into the system (or the `work done on the system'). This means that part of what is supplied to the system is stored, while the remaining part is dissipated. Inequality (13.2) is known as the dissipation inequality.<br />

for a storage function S. Here, E is called the internal energy and S the entropy. The first law expresses that the change of internal energy is equal to the heat absorbed by the system plus the mechanical work which is done on the system. The second law states that the entropy decreases at a rate of at least the quotient of absorbed heat and temperature. Note that thermodynamical systems are dissipative with respect to more than one supply function!<br />

Remark 13.3 If the function V(x(·)), with V a storage function and x : R → X a state trajectory of (13.1), is differentiable as a function of time, then (13.2) can be equivalently written as<br />
V̇(t) ≤ s(u(t), y(t)).  (13.3)<br />

Example 13.7 As another example, the product of forces and velocities is a candidate supply function in mechanical systems. For those familiar with the theory of bond-graphs we remark that every bond-graph can be viewed as a representation of a dissipative dynamical system where input and output variables are taken to be effort and flow variables and the supply function s is invariably taken to be the product of these two variables. A bond-graph is therefore a special case of a dissipative system (and not the other way around!).<br />

Remark 13.4 (this remark may be skipped) There is a refinement of Definition 13.1 which is worth mentioning. The system is said to be conservative (or lossless) if there exists a non-negative function V : X → R such that equality holds in (13.2) for all t0 ≤ t1 and all (u, x, y) which satisfy (13.1).<br />

Example 13.8 Typical examples of supply functions s : U × Y → R are<br />
s(u, y) = uᵀy  (13.4)<br />
s(u, y) = ‖y‖² − ‖u‖²  (13.5)<br />
s(u, y) = ‖y‖² + ‖u‖²  (13.6)<br />
s(u, y) = ‖y‖²  (13.7)<br />

Example 13.5 Consider an electrical network with n external ports. Denote the external voltages and currents of the i-th port by (V_i, I_i) and let V and I denote the vectors of length n whose i-th component is V_i and I_i, respectively. Assume that the network contains (a finite number of) resistors, capacitors, inductors and lossless elements such as transformers and gyrators. Let n_C and n_L denote the number of capacitors and inductors in the network and denote by V_C and I_L the vectors of voltage drops across the capacitors and currents through the inductors of the network. An impedance description of the system then takes the form (13.1), where u = I, y = V and x = (V_Cᵀ I_Lᵀ)ᵀ. For such a circuit, a natural supply function is<br />

which arise in network theory, bondgraph theory, scattering theory, H∞ theory, game theory, LQ-optimal control and H2-optimal control theory.<br />

s(V(t), I(t)) = Vᵀ(t) I(t).<br />

This system is dissipative and<br />
V(x) := Σ_{i=1}^{n_C} C_i V_{C_i}² + Σ_{i=1}^{n_L} L_i I_{L_i}²<br />
is a storage function of the system that represents the total electrical energy in the capacitors and inductors.<br />
If Σ is dissipative with storage function V, then we will assume that there exists a reference point x* ∈ X of minimal storage, i.e. there exists x* ∈ X such that V(x*) = min_{x∈X} V(x). You can think of x* as the state in which the system is `at rest', an `equilibrium state' for which no energy is stored in the system. Given a storage function V, its normalization (with respect to x*) is defined as Ṽ(x) := V(x) − V(x*). Obviously, Ṽ(x*) = 0 and Ṽ is a storage function of Σ whenever V is. For linear systems of the form (13.1) we usually take x* = 0.<br />

13.1.3 A first characterization of dissipativity<br />
Instead of considering the set of all possible storage functions associated with a dynamical system Σ, we will restrict attention to the set of normalized storage functions. Formally, the set of normalized storage functions (associated with (Σ, s)) is defined by<br />
V(x*) := { V : X → R+ | V(x*) = 0 and (13.2) holds }.<br />

Example 13.6 Consider a thermodynamic system at uniform temperature T on which mechanical work is being done at rate W and which is being heated at rate Q. Let (T, Q, W) be the external variables of such a system and assume that, either by physical or chemical principles or through experimentation, the mathematical model of the thermodynamic system has been decided upon and is given by the time invariant system (13.1). The first and second law of thermodynamics may then be formulated in the sense of Definition 13.1 by saying that the system is conservative with respect to the supply function s1 := (W + Q) and dissipative with respect to the supply function s2 := −Q/T. Indeed,



Taking the supremum over all t1 ≥ 0 and all such trajectories (u, x, y) (with x(0) = x0) yields that Vav(x0) ≤ V(x0) < ∞. To prove the converse implication it suffices to show that Vav is a storage function. To see this, first note that Vav(x) ≥ 0 for all x ∈ X (take t1 = 0 in (13.8a)). To prove that Vav satisfies (13.2), let t0 ≤ t1 ≤ t2 and (u, x, y) satisfy (13.1). Then<br />

Vav(x(t0)) ≥ −∫_{t0}^{t1} s(u(t), y(t)) dt − ∫_{t1}^{t2} s(u(t), y(t)) dt.<br />
Since the second term in the right hand side of this inequality holds for arbitrary t2 ≥ t1 and arbitrary (u, x, y)|[t1, t2] (with x(t1) fixed), we can take the supremum over all such trajectories to conclude that<br />
Vav(x(t0)) ≥ −∫_{t0}^{t1} s(u(t), y(t)) dt + Vav(x(t1)),<br />
which shows that Vav satisfies (13.2).<br />
2a. Suppose that Σ is dissipative and let V be a storage function. Then Ṽ(x) := V(x) − V(x*) ∈ V(x*) so that V(x*) ≠ ∅. Observe that Vav(x*) ≥ 0 and Vreq(x*) ≤ 0 (take t1 = t−1 = 0 in (13.8)). Suppose that the latter inequalities are strict. Then, using controllability of the system, there exists t−1 ≤ 0 ≤ t1 and a state trajectory x with x(t−1) = x(0) = x(t1) = x* such that −∫_0^{t1} s(t) dt > 0 and ∫_{t−1}^0 s(t) dt < 0. But this yields a contradiction with (13.2) as both ∫_0^{t1} s(t) dt ≥ 0 and ∫_{t−1}^0 s(t) dt ≥ 0. Thus, Vav(x*) = Vreq(x*) = 0. We already proved that Vav is a storage function so that Vav ∈ V(x*). Along the same lines one shows that also Vreq ∈ V(x*).<br />
2b. If V ∈ V(x*) then<br />
−∫_0^{t1} s(u(t), y(t)) dt ≤ V(x0) ≤ ∫_{t−1}^0 s(u(t), y(t)) dt<br />
for all t−1 ≤ 0 ≤ t1 and (u, x, y) satisfying (13.1) with x(t−1) = x* and x(0) = x0. Now take the supremum and infimum over all such trajectories to obtain that Vav ≤ V ≤ Vreq.<br />
The existence of a reference point x* of minimal storage implies that for a dissipative system<br />
∫_0^{t1} s(u(t), y(t)) dt ≥ 0<br />
for any t1 ≥ 0 and any (u, x, y) satisfying (13.1) with x(0) = x*. Stated otherwise, any trajectory of the system which emanates from x* has the property that the net flow of supply is into the system. In many treatments of dissipativity this property is often taken as definition of passivity.<br />
We introduce two mappings Vav : X → R+ ∪ {∞} and Vreq : X → R ∪ {−∞} which will play a crucial role in the sequel. They are defined by<br />
Vav(x0) := sup { −∫_0^{t1} s(t) dt | t1 ≥ 0, (u, x, y) satisfy (13.1) with x(0) = x0 }  (13.8a)<br />
Vreq(x0) := inf { ∫_{t−1}^0 s(t) dt | t−1 ≤ 0, (u, x, y) satisfy (13.1) with x(t−1) = x* and x(0) = x0 }  (13.8b)<br />
Interpretation 13.9 Vav(x) denotes the maximal amount of internal storage that may be recovered from the system over all state trajectories starting from x. Similarly, Vreq(x) reflects the minimal supply the environment has to deliver to the system in order to excite the state x via any trajectory in the state space originating in x*.<br />

We refer to Vav and Vreq as the available storage and the required supply, respectively. Note that in (13.8b) it is assumed that the point x0 ∈ X is reachable from the reference point x*, i.e. it is assumed that there exists a control input u which brings the state trajectory x from x* at time t = t−1 to x0 at time t = 0. This is possible when the system is controllable.<br />
13.2 Dissipative systems with quadratic supply functions<br />
13.2.1 Quadratic supply functions<br />

In this section we will apply the above theory by considering systems of the form (13.1) with quadratic supply functions s : U × Y → R, defined by<br />

s(u, y) = [y; u]ᵀ [Qyy, Qyu; Quy, Quu] [y; u]  (13.9)<br />
Here,<br />
Q := [Qyy, Qyu; Quy, Quu]<br />
is a real symmetric matrix (i.e. Q = Qᵀ) which is partitioned conformally with u and y. Note that the supply functions given in Example 13.8 can all be written in the form (13.9).<br />
Theorem 13.10 Let the system Σ be described by (13.1) and let s be a supply function. Then<br />
1. Σ is dissipative if and only if Vav(x) is finite for all x ∈ X.<br />
2. If Σ is dissipative and controllable then<br />
(a) Vav, Vreq ∈ V(x*).<br />
(b) If V ∈ V(x*), then for all x ∈ X there holds 0 ≤ Vav(x) ≤ V(x) ≤ Vreq(x).<br />

Interpretation 13.11 Theorem 13.10 gives a necessary and sufficient condition for a system to be dissipative. It shows that both the available storage and the required supply are possible storage functions. Moreover, statement (b) shows that the available storage and the required supply are the extremal storage functions in V(x*). In particular, for any state of a dissipative system, the available storage is at most equal to the required supply.<br />

Remark 13.12 Substituting the output equation y = Cx + Du in the supply function (13.9) shows that (13.9) can equivalently be viewed as a quadratic function in the variables u and x. Indeed,<br />
s(u, y) = s(u, Cx + Du) = [x; u]ᵀ [Qxx, Qxu; Qux, Quu] [x; u]<br />
Proof. 1. Let Σ be dissipative, V a storage function and x0 ∈ X. From (13.2) it then follows that for all t1 ≥ 0 and all (u, x, y) satisfying (13.1) with x(0) = x0,<br />
−∫_0^{t1} s(u(t), y(t)) dt ≤ V(x0) < ∞.<br />



Since (13.12) holds for all t0 ≤ t1 and all inputs u, this reduces to the requirement that K ≥ 0 satisfies the LMI F(K) ≥ 0.<br />
(3⇒2). Conversely, if there exists K ≥ 0 such that F(K) ≥ 0 then (13.12) holds and it follows that V(x) = xᵀKx is a storage function which satisfies the dissipation inequality.<br />
(1⇔5). If (Σ, s) is dissipative then by Theorem (13.10), Vreq is a storage function. Since Vreq is defined as an optimal cost corresponding to a linear quadratic optimization problem, Vreq is quadratic. Hence, if the reference point x* = 0, Vreq(x) is of the form xᵀK+x for some K+ ≥ 0. Conversely, if Vreq = xᵀK+x, K+ ≥ 0, then it is easily seen that Vreq satisfies the dissipation inequality (13.2), which implies that (Σ, s) is dissipative.<br />
(1⇔6). Let ω ∈ R be such that det(jωI − A) ≠ 0 and consider the harmonic input u(t) = exp(jωt)u0 with u0 ∈ Rᵐ. Define x(t) := exp(jωt)(jωI − A)⁻¹Bu0 and y(t) := Cx(t) + Du(t). Then y(t) = exp(jωt)G(jω)u0 and the triple (u, x, y) satisfies (13.1). Moreover,<br />

where<br />
[Qxx, Qxu; Qux, Quu] = [C, D; 0, I]ᵀ [Qyy, Qyu; Quy, Quu] [C, D; 0, I].<br />

13.2.2 Complete characterizations of dissipativity<br />
The following theorem is the main result of this section. It provides necessary and sufficient conditions for dissipativeness.<br />
Theorem 13.13 Suppose that the system Σ described by (13.1) is controllable and let G(s) = C(Is − A)⁻¹B + D be the corresponding transfer function. Let the supply function s be defined by (13.9). Then the following statements are equivalent.<br />

1. (Σ, s) is dissipative.<br />
2. (Σ, s) admits a quadratic storage function V(x) := xᵀKx with K = Kᵀ ≥ 0.<br />
3. There exists K = Kᵀ ≥ 0 such that<br />
F(K) := −[AᵀK + KA, KB; BᵀK, 0] + [C, D; 0, I]ᵀ [Qyy, Qyu; Quy, Quu] [C, D; 0, I] ≥ 0.  (13.10)<br />
4. There exists K− = K−ᵀ ≥ 0 such that Vav(x) = xᵀK−x.<br />
5. There exists K+ = K+ᵀ ≥ 0 such that Vreq(x) = xᵀK+x.<br />
6. For all ω ∈ R with det(jωI − A) ≠ 0, there holds<br />
[G(jω); I]* [Qyy, Qyu; Quy, Quu] [G(jω); I] ≥ 0.  (13.11)<br />
Moreover, if one of the above equivalent statements holds, then V(x) := xᵀKx is a quadratic storage function in V(0) if and only if K ≥ 0 and F(K) ≥ 0.<br />
s(u(t), y(t)) = u0* [G(jω); I]* [Qyy, Qyu; Quy, Quu] [G(jω); I] u0,<br />
which is a constant for all time t ∈ R. Now suppose that (Σ, s) is dissipative. For non-zero frequencies ω the triple (u, x, y) is periodic with period P = 2π/ω, so that x(t0) = x(t0 + kP) for all t0 and k ∈ Z. Since then V(x(t0)) = V(x(t0 + kP)), the dissipation inequality (13.2) reads<br />
∫_{t0}^{t1} s(u(t), y(t)) dt = (t1 − t0) u0* [G(jω); I]* [Qyy, Qyu; Quy, Quu] [G(jω); I] u0 ≥ 0<br />
for all t1 > t0. Since u0 and t1 > t0 are arbitrary this yields that statement 6 holds. The implication 6 ⇒ 1 is much more involved and will be omitted here.<br />
Interpretation 13.14 The matrix F(K) is usually called the dissipation matrix. The inequality F(K) ≥ 0 is an example of a Linear Matrix Inequality (LMI) in the (unknown) matrix K. The crux of the above theorem is that the set of quadratic storage functions in V(0) is completely characterized by the inequalities K ≥ 0 and F(K) ≥ 0. In other words, the set of normalized quadratic storage functions associated with (Σ, s) coincides with those matrices K for which K = Kᵀ ≥ 0 and F(K) ≥ 0. In particular, the available storage and the required supply are quadratic storage functions and hence K− and K+ also satisfy F(K−) ≥ 0 and F(K+) ≥ 0. Using Theorem 13.10, it moreover follows that any solution K = Kᵀ ≥ 0 of F(K) ≥ 0 has the property that<br />

0 ≤ K− ≤ K ≤ K+.<br />
In other words, among the set of positive semi-definite solutions K of the LMI F(K) ≥ 0 there exists a smallest and a largest element. Statement 6 provides a frequency domain characterization of dissipativity. For physical systems, this means that whenever the system is dissipative with respect to a quadratic supply function (and quite some physical systems are), then there is at least one energy function which is a quadratic function of the state variable; this function is in general non-unique and squeezed in between the available storage and the required supply. Any physically relevant energy function which happens to be of the form V(x) = xᵀKx will satisfy the linear matrix inequalities K > 0 and F(K) ≥ 0.<br />
Proof. (1⇒2,4). If (Σ, s) is dissipative then we infer from Theorem 13.10 that the available storage Vav(x) is finite for any x ∈ Rⁿ. We claim that Vav(x) is a quadratic function of x. This is a standard result from LQ optimization. Indeed, s is quadratic and<br />
Vav(x) = sup { −∫_0^{t1} s(t) dt } = − inf { ∫_0^{t1} s(t) dt }<br />
denotes the optimal cost of a linear quadratic optimization problem. It is well known that this infimum is a quadratic form in x.<br />
(4⇒1). Obvious from Theorem (13.10).<br />
(2⇒3). If V(x) = xᵀKx with K ≥ 0 is a storage function then the dissipation inequality can be rewritten as<br />
∫_{t0}^{t1} ( −(d/dt) x(t)ᵀKx(t) + s(u(t), y(t)) ) dt ≥ 0.<br />
Substituting the system equations (13.1), this is equivalent to<br />
∫_{t0}^{t1} [x(t); u(t)]ᵀ { −[AᵀK + KA, KB; BᵀK, 0] + [C, D; 0, I]ᵀ [Qyy, Qyu; Quy, Quu] [C, D; 0, I] } [x(t); u(t)] dt ≥ 0,  (13.12)<br />
where the matrix between braces is F(K).<br />
For conservative systems with quadratic supply functions a similar characterization can be given. The precise formulation is evident from Theorem 13.13 and is left to the reader.<br />
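As a toy illustration of the dissipation matrix (an example of ours, not from the text): for the scalar passive system ẋ = −x + u, y = x with supply s(u, y) = uᵀy (so Qyy = 0, Quy = Qyu = 1/2, Quu = 0), F(K) from (13.10) can be formed and checked directly. K = 1/2, i.e. the storage V(x) = x²/2, turns out to solve the LMI, while K = 0.3 does not:

```python
import numpy as np

A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
Q = np.array([[0.0, 0.5],   # [[Qyy, Qyu],
              [0.5, 0.0]])  #  [Quy, Quu]]  for s(u, y) = u*y

def F(K):
    # dissipation matrix F(K) from (13.10)
    n, m = A.shape[0], B.shape[1]
    top = np.block([[A.T @ K + K @ A, K @ B],
                    [B.T @ K, np.zeros((m, m))]])
    M = np.block([[C, D], [np.zeros((m, n)), np.eye(m)]])
    return -top + M.T @ Q @ M

def is_psd(X, tol=1e-9):
    return bool(np.min(np.linalg.eigvalsh(X)) >= -tol)

K_good = np.array([[0.5]])   # storage V(x) = x^2 / 2
K_bad = np.array([[0.3]])
print(is_psd(F(K_good)), is_psd(F(K_bad)))
```

Here F(K) = [2K, 1/2 − K; 1/2 − K, 0], which is positive semi-definite only for K = 1/2, matching the fact that the storage function of this circuit-like example is unique.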



13.3 Dissipativity and H1 performance<br />

13.2.3 The positive real lemma<br />

Let us analyze the importance of the last result, Corollary 13.17, for H1 optimal control. If<br />

is dissipative with respect to the supply function (13.13), then we infer from Remark 13.3<br />

that for any quadratic storage function V(x) = x^T K x,

    \dot{V} \le \gamma^2 u^T u - y^T y.   (13.14)

We apply the above results to two quadratic supply functions which play an important role in a wide variety of applications. First, consider the system (13.1) together with the quadratic supply function s(u, y) = y^T u. This function satisfies (13.9) with Q_{uu} = 0, Q_{yy} = 0 and Q_{uy} = Q_{yu}^T = \frac{1}{2} I. With these parameters, the following is an immediate consequence of Theorem 13.13.

Corollary 13.15 Suppose that the system described by (13.1) is controllable and has transfer function G. Let s(u, y) = y^T u be a supply function. Then equivalent statements are

1. (\Sigma, s) is dissipative.

2. The LMI's

       K = K^T \ge 0, \quad
       \begin{bmatrix} -A^T K - K A & -K B + C^T \\ -B^T K + C & D + D^T \end{bmatrix} \ge 0

   have a solution.

3. For all \omega \in \mathbb{R} with \det(j\omega I - A) \ne 0: G(j\omega) + G(j\omega)^* \ge 0.

Moreover, V(x) = x^T K x defines a quadratic storage function if and only if K satisfies the above LMI's.

Remark 13.16 Corollary 13.15 is known as the Kalman-Yakubovich-Popov lemma or the positive real lemma and has played a crucial role in questions related to the stability of control systems and the synthesis of passive electrical networks. Transfer functions which satisfy the third statement are generally called positive real.

13.2.4 The bounded real lemma

Second, consider the quadratic supply function

    s(u, y) = \gamma^2 u^T u - y^T y   (13.13)

where \gamma \ge 0. In a similar fashion we obtain the following result as an immediate consequence of Theorem 13.13.

Corollary 13.17 Suppose that the system described by (13.1) is controllable and has transfer function G. Let s(u, y) = \gamma^2 u^T u - y^T y be a supply function. Then equivalent statements are

1. (\Sigma, s) is dissipative.

2. The LMI's

       K = K^T \ge 0, \quad
       \begin{bmatrix} A^T K + K A + C^T C & K B + C^T D \\ B^T K + D^T C & D^T D - \gamma^2 I \end{bmatrix} \le 0

   have a solution.

3. For all \omega \in \mathbb{R} with \det(j\omega I - A) \ne 0: G(j\omega)^* G(j\omega) \le \gamma^2 I.

Moreover, V(x) = x^T K x defines a quadratic storage function if and only if K satisfies the above LMI's.

Suppose that x(0) = 0, that A has all its eigenvalues in the open left-half complex plane (i.e. the system is stable) and that the input u is taken from the set L_2 of square integrable functions. Then both the state x and the output y of (13.1) are square integrable functions and \lim_{t \to \infty} x(t) = 0. We can therefore integrate (13.14) from t = 0 till \infty to obtain that for all u \in L_2

    \gamma^2 \|u\|_2^2 - \|y\|_2^2 \ge 0,

where the norms are the usual L_2 norms. Equivalently,

    \sup_{u \in L_2} \frac{\|y\|_2}{\|u\|_2} \le \gamma.   (13.15)

Now recall from Chapter 5 that the left-hand side of (13.15) is the L_2-induced norm or L_2-gain of the system (13.1). In particular, from Chapter 5 we infer that the H_\infty norm of the transfer function G is equal to the L_2-induced norm. We thus derived the following result.

Theorem 13.18 Suppose that the system described by (13.1) is controllable, stable and has transfer function G. Let s(u, y) = \gamma^2 u^T u - y^T y be a supply function. Then equivalent statements are

1. (\Sigma, s) is dissipative.

2. \|G\|_{H_\infty} \le \gamma.

3. The LMI's

       K = K^T \ge 0, \quad
       \begin{bmatrix} A^T K + K A + C^T C & K B + C^T D \\ B^T K + D^T C & D^T D - \gamma^2 I \end{bmatrix} \le 0

   have a solution.

Moreover, V(x) = x^T K x defines a quadratic storage function if and only if K satisfies the above LMI's.

Interpretation 13.19 Statement 3 of Theorem 13.18 therefore provides a test whether or not the H_\infty norm of the transfer function G is smaller than a predefined number \gamma > 0. We can compute the L_2-induced gain of the system (which is the H_\infty norm of the transfer function) by minimizing \gamma > 0 over all variables \gamma and K > 0 that satisfy the LMI's of statement 3. The issue here is that such a test and minimization can be efficiently performed in the LMI toolbox as implemented in MATLAB.
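Statement 3 of Corollary 13.15 can be probed numerically for a simple SISO example. The sketch below is illustrative only and not part of the course software: it takes the hypothetical transfer function G(s) = (s + 2)/(s + 1) and sweeps the imaginary axis, where for SISO systems the condition G(jω) + G(jω)* ≥ 0 reduces to Re G(jω) ≥ 0.

```python
# Numerical spot check (a frequency sweep, not a proof) of positive
# realness for the example transfer function G(s) = (s + 2)/(s + 1).
# Analytically Re G(jw) = (2 + w^2)/(1 + w^2) >= 1 > 0 for all real w.

def G(s: complex) -> complex:
    """Example transfer function G(s) = (s + 2)/(s + 1)."""
    return (s + 2.0) / (s + 1.0)

def min_real_part_on_axis(tf, w_max=1e4, n=4001) -> float:
    """Minimum of Re tf(jw) over a uniform grid on [-w_max, w_max]."""
    ws = [-w_max + 2.0 * w_max * k / (n - 1) for k in range(n)]
    return min(tf(complex(0.0, w)).real for w in ws)

print(min_real_part_on_axis(G) > 0.0)  # True: the sweep finds no negative real part
```

A sweep of this kind can only falsify positive realness on the chosen grid; the LMI of statement 2 remains the conclusive test.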

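In the same spirit, the H∞ norm in statement 2 of Theorem 13.18 can be estimated by the brute-force alternative to the LMI minimization of Interpretation 13.19: a grid search for the peak of |G(jω)|. The function below is a hypothetical illustration in plain Python (not the MATLAB LMI toolbox); for the first-order lag 1/(s + 1) the peak gain is 1, attained at ω = 0.

```python
# Brute-force estimate of the H-infinity norm of a stable SISO system:
# sweep |G(jw)| over a frequency grid and take the peak. This mimics
# what statement 2 of Theorem 13.18 measures; the LMI of statement 3
# is the test one would actually run in an LMI solver.

def first_order(s: complex) -> complex:
    """Stable example system G(s) = 1/(s + 1) with H-infinity norm 1."""
    return 1.0 / (s + 1.0)

def hinf_norm_estimate(tf, w_min=1e-4, w_max=1e4, n=2000) -> float:
    """Peak of |tf(jw)| over w = 0 plus a log-spaced grid [w_min, w_max]."""
    grid = [0.0] + [w_min * (w_max / w_min) ** (k / (n - 1)) for k in range(n)]
    return max(abs(tf(complex(0.0, w))) for w in grid)

print(hinf_norm_estimate(first_order))  # 1.0 (peak gain at w = 0)
```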

CHAPTER 13. SOLUTION TO THE GENERAL H_\infty CONTROL PROBLEM

13.4 Synthesis of H_\infty controllers

In this section we present the main algorithm for the synthesis of H_\infty optimal controllers. Consider the general control configuration as depicted in Figure 13.1. Here, w are the exogenous inputs (disturbances, noise signals, reference inputs), u denotes the control inputs, z is the to-be-controlled output signal and y denotes the measurements. All variables may be multivariable. The block G denotes the "generalized system" and typically includes a model of the plant together with all weighting functions which are specified by the user. The block K denotes the "generalized controller" and typically includes a feedback controller and/or a feedforward controller. The block G contains all the known features (plant model, input weightings, output weightings and interconnection structures); the block K needs to be designed. Admissible controllers are all linear time-invariant systems K that internally stabilize the configuration of Figure 13.1.

[Figure 13.1: General control configuration. The generalized system G maps the inputs (w, u) to the outputs (z, y); the controller K maps the measurements y to the control inputs u.]

Every such admissible controller K gives rise to a closed-loop system which maps the disturbance inputs w to the to-be-controlled output variables z. Precisely, if M denotes the closed-loop transfer function M : w \mapsto z, then with the obvious partitioning of G,

    M = G_{11} + G_{12} K (I - G_{22} K)^{-1} G_{21}.

The H_\infty control problem is formalized as follows:

    Synthesize a stabilizing controller K such that \|M\|_{H_\infty} < \gamma

for some value of \gamma > 0. Since our ultimate aim is to minimize the H_\infty norm of the closed-loop transfer function M, we wish to synthesize an admissible K for \gamma as small as possible.

To solve this problem, consider the generalized system G and let

    \dot{x} = A x + B_1 w + B u
    z = C_1 x + D w + E u                   (13.16)
    y = C x + F w

be a state space description of G. An admissible controller is a finite dimensional linear time invariant system described as

    \dot{x}_c = A_c x_c + B_c y
    u = C_c x_c + D_c y                     (13.17)

Controllers are therefore simply parameterized by the matrices A_c, B_c, C_c, D_c. The controlled or closed-loop system then admits the description

    \dot{\xi} = \mathcal{A} \xi + \mathcal{B} w
    z = \mathcal{C} \xi + \mathcal{D} w     (13.18)

where

    \begin{bmatrix} \mathcal{A} & \mathcal{B} \\ \mathcal{C} & \mathcal{D} \end{bmatrix} =
    \begin{bmatrix} A + B D_c C & B C_c & B_1 + B D_c F \\ B_c C & A_c & B_c F \\ C_1 + E D_c C & E C_c & D + E D_c F \end{bmatrix}.   (13.19)

The closed-loop transfer matrix M can therefore be represented as M(s) = \mathcal{C}(Is - \mathcal{A})^{-1}\mathcal{B} + \mathcal{D}. The optimal value of the H_\infty controller synthesis problem is defined as

    \gamma^* = \inf \{ \|M\|_\infty : (A_c, B_c, C_c, D_c) \text{ such that } \lambda(\mathcal{A}) \subset \mathbb{C}^- \}.

Clearly, the number \gamma is larger than \gamma^* if and only if there exists a controller such that

    \lambda(\mathcal{A}) \subset \mathbb{C}^- \quad \text{and} \quad \|M\|_\infty < \gamma.

The optimal H_\infty value is then given by the minimal \gamma for which a controller can still be found.

By Theorem 13.18 (with a slight variation), the controller (A_c, B_c, C_c, D_c) achieves that \lambda(\mathcal{A}) \subset \mathbb{C}^- and that the H_\infty norm satisfies \|M\|_{H_\infty} < \gamma if and only if there exists a symmetric matrix \mathcal{X} satisfying

    \mathcal{X} = \mathcal{X}^T > 0, \quad
    \begin{bmatrix} \mathcal{A}^T\mathcal{X} + \mathcal{X}\mathcal{A} + \mathcal{C}^T\mathcal{C} & \mathcal{X}\mathcal{B} + \mathcal{C}^T\mathcal{D} \\ \mathcal{B}^T\mathcal{X} + \mathcal{D}^T\mathcal{C} & \mathcal{D}^T\mathcal{D} - \gamma^2 I \end{bmatrix} < 0.   (13.20)

The corresponding synthesis problem therefore reads as follows: search controller parameters (A_c, B_c, C_c, D_c) and an \mathcal{X} > 0 such that (13.20) holds.

Recall that \mathcal{A} depends on the controller parameters; since \mathcal{X} is also a variable, we observe that \mathcal{X}\mathcal{A} depends non-linearly on the variables to be found. There exists a clever transformation by which the blocks in (13.20) that depend non-linearly on the decision variables \mathcal{X} and (A_c, B_c, C_c, D_c) are transformed to an affine dependence on a new set of decision variables

    v := (X, Y, K, L, M, N).

For this purpose, define

    \mathcal{X}(v) := \begin{bmatrix} Y & I \\ I & X \end{bmatrix}

and

    \begin{bmatrix} \mathcal{A}(v) & \mathcal{B}(v) \\ \mathcal{C}(v) & \mathcal{D}(v) \end{bmatrix} :=
    \begin{bmatrix} AY + BM & A + BNC & B_1 + BNF \\ K & AX + LC & XB_1 + LF \\ C_1 Y + EM & C_1 + ENC & D + ENF \end{bmatrix}.
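For a scalar plant and a scalar controller, the closed-loop matrices of (13.19) can be written out by hand, which gives a quick internal-stability check. All numbers below are made up purely for illustration (an unstable plant with a hypothetical first-order controller); stability requires the eigenvalues of the closed-loop state matrix to lie in the open left-half plane.

```python
# Scalar instance of the closed-loop state matrix from (13.19):
#   A_cl = [ A + B*Dc*C   B*Cc ]
#          [ Bc*C         Ac   ]
# All numerical values are made up for illustration only.
import cmath

A, B, C = 1.0, 1.0, 1.0                  # plant: x' = x + u, y = x (unstable)
Ac, Bc, Cc, Dc = -1.0, 1.0, 1.0, -3.0    # hypothetical stabilizing controller

a11, a12 = A + B * Dc * C, B * Cc        # top block row of A_cl
a21, a22 = Bc * C, Ac                    # bottom block row of A_cl

# Eigenvalues of the 2x2 matrix via lam^2 - trace*lam + det = 0.
trace, det = a11 + a22, a11 * a22 - a12 * a21
disc = cmath.sqrt(trace * trace - 4.0 * det)
eigs = [(trace + disc) / 2.0, (trace - disc) / 2.0]

print(all(lam.real < 0.0 for lam in eigs))  # True: the closed loop is internally stable
```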


With these definitions, the inequalities (13.20) can be replaced by the inequalities

    \mathcal{X}(v) > 0, \quad
    \begin{bmatrix} \mathcal{A}(v)^T + \mathcal{A}(v) & \mathcal{B}(v) & \mathcal{C}(v)^T \\ \mathcal{B}(v)^T & -\gamma I & \mathcal{D}(v)^T \\ \mathcal{C}(v) & \mathcal{D}(v) & -\gamma I \end{bmatrix} < 0.   (13.21)

The one-one relation between the decision variables in (13.20), the decision variables in (13.21) and the solutions of the H_\infty control problem is now given in the following main result.

Theorem 13.20 (H_\infty Synthesis Theorem) The following statements are equivalent.

1. There exists a controller (A_c, B_c, C_c, D_c) and an \mathcal{X} satisfying (13.20).

2. There exists v := (X, Y, K, L, M, N) such that the inequalities (13.21) hold.

Moreover, for any such v, the matrix I - XY is invertible and there exist nonsingular U, V such that I - XY = UV^T. The unique solutions \mathcal{X} and (A_c, B_c, C_c, D_c) are then given by

    \mathcal{X} = \begin{bmatrix} Y & V \\ I & 0 \end{bmatrix}^{-1} \begin{bmatrix} I & 0 \\ X & U \end{bmatrix}

and

    \begin{bmatrix} A_c & B_c \\ C_c & D_c \end{bmatrix} =
    \begin{bmatrix} U & XB \\ 0 & I \end{bmatrix}^{-1}
    \begin{bmatrix} K - XAY & L \\ M & N \end{bmatrix}
    \begin{bmatrix} V^T & 0 \\ CY & I \end{bmatrix}^{-1}.

We have obtained a general procedure for deriving from analysis inequalities the corresponding synthesis inequalities, and for the construction of the corresponding controllers. The power of Theorem 13.20 lies in its simplicity and its generality. Virtually all analysis results that are based on a dissipativity constraint with respect to a quadratic supply function can be converted with ease into the corresponding synthesis result.

Remark on the controller order. In Theorem 13.20 we have not restricted the order of the controller. In proving necessity of the solvability of the synthesis inequalities, the size of A_c was arbitrary. The specific construction of a controller in proving sufficiency leads to an A_c that has the same size as A. Hence Theorem 13.20 also includes the side result that controllers of order larger than that of the plant offer no advantage over controllers that have the same order as the plant. The story is very different in reduced order control: there the intention is to include a constraint dim(A_c) \le k for some k that is smaller than the dimension of A. It is not very difficult to derive the corresponding synthesis inequalities; however, they include rank constraints that are hard if not impossible to treat by current optimization techniques.

Remark on strictly proper controllers. Note that the direct feed-through D_c of the controller is actually not transformed; we simply have D_c = N. If we intend to design a strictly proper controller (i.e. D_c = 0), we can just set N = 0 to arrive at the corresponding synthesis inequalities. The construction of the other controller parameters remains the same. Clearly, the same holds if one wishes to impose a more refined structural constraint on the direct feed-through term, as long as it can be expressed in terms of LMI's.

Remarks on numerical aspects. After having verified the solvability of the synthesis inequalities, we recommend to take some precautions to improve the conditioning of the calculations that reconstruct the controller out of the decision variable v. In particular, one should avoid that the parameters in v get too large and that I - XY gets close to singular, which might render the controller computation ill-conditioned.

13.5 H_\infty controller synthesis in Matlab

The result of Theorem 13.20 has been implemented in the LMI Control Toolbox of Matlab. The LMI Control Toolbox supports continuous- and discrete-time H_\infty synthesis using either Riccati- or LMI-based approaches. (The Riccati-based approach has not been discussed in this chapter.) While the LMI approach is computationally more involved for large problems, it has the decisive merit of eliminating the so-called regularity conditions attached to the Riccati-based solutions. Both approaches are based on state space calculations. The following are the main synthesis routines in the LMI toolbox.

                               Riccati-based    LMI-based
    continuous time systems    hinfric          hinflmi
    discrete time systems      dhinfric         dhinflmi

Riccati-based synthesis routines require that

1. the matrices E and F have full rank,

2. the transfer functions G_{12}(s) := C_1(Is - A)^{-1}B + E and G_{21}(s) := C(Is - A)^{-1}B_1 + F have no zeros on the j\omega axis.

LMI synthesis routines make no assumptions on the matrices which define the system (13.16). Examples of the usage of these routines will be given in Chapter 10. We refer to the corresponding help-files for more information.

In the LMI toolbox the command

    G = ltisys(A, [B1 B], [C1; C], [D E; F zeros(dy,du)])

defines the state space model (13.16) in the internal LMI format. Here dy and du are the dimensions of the measurement vector y and the control input u, respectively. Information about G is obtained by typing sinfo(G); plots of responses of G are obtained through splot(G, 'bo') for a Bode diagram, splot(G, 'sv') for a singular value plot, splot(G, 'st') for a step response, etc. The command

    [gopt, K] = hinflmi(G, r)

then returns the optimal H_\infty performance in gopt and the optimal controller K in K. The state space matrices (A_c, B_c, C_c, D_c) which define the controller K are returned by the command

    [ac, bc, cc, dc] = ltiss(K).
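The structure of the synthesis LMI's gives a simple feel for the numerical warning about I - XY. In the scalar case the condition \mathcal{X}(v) = [[y, 1], [1, x]] > 0 reduces, via its leading minors, to y > 0 and xy - 1 > 0, so feasibility forces xy > 1, while xy close to 1 (i.e. I - XY nearly singular) makes the controller reconstruction ill-conditioned. The sketch below is a scalar toy check with made-up numbers, not a routine from the LMI toolbox.

```python
# Scalar sketch of the coupling condition X(v) = [[y, 1], [1, x]] > 0:
# by the leading-minor test this holds iff y > 0 and x*y - 1 > 0
# (which then implies x > 0). Near x*y = 1 the factor 1 - x*y, the
# scalar analogue of I - XY in the controller formulas, approaches 0.

def coupling_ok(x: float, y: float) -> bool:
    """Positive definiteness of [[y, 1], [1, x]] for scalar x, y."""
    return y > 0.0 and x * y - 1.0 > 0.0  # leading minors of the 2x2 matrix

print(coupling_ok(2.0, 1.0))   # True:  x*y = 2 > 1
print(coupling_ok(0.5, 1.0))   # False: x*y = 0.5 <= 1
```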


