EL2620 Nonlinear Control
Lecture notes

Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen
This revision December 2011
Automatic Control
KTH, Stockholm, Sweden
Preface

Many people have contributed to these lecture notes in nonlinear control. They were originally developed by Bo Bernhardsson and Karl Henrik Johansson, and later revised by Bo Wahlberg and myself. Contributions and comments by Mikael Johansson, Ola Markusson, Ragnar Wallin, Henning Schmidt, Krister Jacobsson, Björn Johansson and Torbjörn Nordling are gratefully acknowledged.

Elling W. Jacobsen
Stockholm, December 2011
EL2620 2011

EL2620 Nonlinear Control
Automatic Control Lab, KTH

• Disposition
7.5 credits, lp 2
28h lectures, 28h exercises, 3 home-works

• Instructors
Elling W. Jacobsen, lectures and course responsible
jacobsen@kth.se
Per Hägg, Farhad Farokhi, teaching assistants
pehagg@kth.se, farakhi@kth.se
Hanna Holmqvist, course administration
hanna.holmqvist@ee.kth.se
STEX (entrance floor, Osquldasv. 10), course material
stex@s3.kth.se
<strong>Lecture</strong> 1 1<br />
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 1<br />
• Practical information<br />
• Course outline<br />
• Linear vs <strong>Nonlinear</strong> Systems<br />
• <strong>Nonlinear</strong> differential equations<br />
<strong>Lecture</strong> 1 3<br />
<strong>EL2620</strong> 2011<br />
Course Goal<br />
To provide participants with a solid theoretical foundation of nonlinear<br />
control systems combined with a good engineering understanding<br />
You should after the course be able to<br />
• understand common nonlinear control phenomena<br />
• apply the most powerful nonlinear analysis methods<br />
• use some practical nonlinear control design methods<br />
<strong>Lecture</strong> 1 2<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Describe distinctive phenomena in nonlinear dynamic systems<br />
• Mathematically describe common nonlinearities in control<br />
systems<br />
• Transform differential equations to first-order form<br />
• Derive equilibrium points<br />
<strong>Lecture</strong> 1 4
<strong>EL2620</strong> 2011<br />
Course Information<br />
• All info and handouts are available at the course homepage at<br />
KTH Social<br />
• Compulsory course items<br />
– 3 homeworks, have to be handed in on time (we are strict on<br />
this!)<br />
– 5h written exam on December 11 2012<br />
<strong>Lecture</strong> 1 5<br />
<strong>EL2620</strong> 2011<br />
Course Outline<br />
• Introduction: nonlinear models and phenomena, computer<br />
simulation (L1-L2)<br />
• Feedback analysis: linearization, stability theory, describing<br />
functions (L3-L6)<br />
• <strong>Control</strong> design: compensation, high-gain design, Lyapunov<br />
methods (L7-L10)<br />
• Alternatives: gain scheduling, optimal control, neural networks,<br />
fuzzy control (L11-L13)<br />
• Summary (L14)<br />
<strong>Lecture</strong> 1 7<br />
<strong>EL2620</strong> 2011<br />
Material<br />
• Textbook: Khalil, <strong>Nonlinear</strong> Systems, Prentice Hall, 3rd ed.,<br />
2002. Optional but highly recommended.<br />
• <strong>Lecture</strong> <strong>notes</strong>: Copies of transparencies (from previous year)<br />
• Exercises: Class room and home exercises<br />
• Homeworks: 3 computer exercises to hand in (and review)<br />
• Software: Matlab<br />
Alternative textbooks (decreasing mathematical rigour):<br />
Sastry, Nonlinear Systems: Analysis, Stability and Control; Vidyasagar, Nonlinear Systems Analysis; Slotine & Li, Applied Nonlinear Control; Glad & Ljung, Reglerteori: flervariabla och olinjära metoder.
Only references to Khalil will be given.<br />
Two course compendia sold by STEX.<br />
<strong>Lecture</strong> 1 6<br />
<strong>EL2620</strong> 2011<br />
Linear Systems<br />
Definition: Let M be a signal space. The system S : M → M is<br />
linear if for all u, v ∈ M and α ∈ R<br />
S(αu) = αS(u)  (scaling)
S(u + v) = S(u) + S(v)  (superposition)

Example: Linear time-invariant systems

ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = 0

y(t) = g(t) ⋆ u(t) = ∫₀ᵗ g(τ)u(t − τ) dτ

Y(s) = G(s)U(s)
Notice the importance of having zero initial conditions
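Both properties can be checked numerically for the convolution above. This is an illustrative sketch in Python rather than the course's Matlab; the choice g(t) = e⁻ᵗ (impulse response of G(s) = 1/(s+1)) and the test inputs are mine, not from the slides:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
g = np.exp(-t)  # impulse response of G(s) = 1/(s+1)

def S(u):
    """Discretized convolution y(t) = int_0^t g(tau) u(t - tau) dtau."""
    return dt * np.convolve(g, u)[: len(t)]

u = np.sin(t)
v = np.cos(2 * t)
alpha = 3.7

scaling_ok = np.allclose(S(alpha * u), alpha * S(u))
superposition_ok = np.allclose(S(u + v), S(u) + S(v))
```

Both checks pass because discrete convolution, like the continuous one, is a linear operation in u.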
<strong>Lecture</strong> 1 8
<strong>EL2620</strong> 2011<br />
Linear Systems Have Nice Properties<br />
Local stability = global stability: stable if all eigenvalues of A (or poles of G(s)) are in the left half-plane

Superposition: enough to know a step (or impulse) response

Frequency analysis possible: sinusoidal inputs give sinusoidal outputs, Y(iω) = G(iω)U(iω)
<strong>Lecture</strong> 1 9<br />
<strong>EL2620</strong> 2011<br />
Can the ball move 0.1 meter in 0.1 seconds from steady state?<br />
Linear model (step response with φ = φ₀) gives, at t = 0.1,

x(t) ≈ 10·(t²/2)·φ₀ ≈ 0.05φ₀

so that

φ₀ ≈ 0.1/0.05 = 2 rad ≈ 114°
Unrealistic answer. Clearly outside linear region!<br />
Linear model valid only if sin φ ≈ φ<br />
Must consider nonlinear model. Possibly also include other<br />
nonlinearities such as centripetal force, saturation, friction etc.<br />
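The back-of-the-envelope numbers above are easy to reproduce. A quick Python sketch, using g ≈ 10 m/s² as the slide does:

```python
import math

g = 10.0          # m/s^2, the rounded value used on the slide
t = 0.1           # s
x_target = 0.1    # m

# step response of the linear model x'' = g*phi with constant phi = phi0:
# x(t) = g*t^2/2 * phi0
gain = g * t**2 / 2       # = 0.05
phi0 = x_target / gain    # required beam angle in radians
phi0_deg = math.degrees(phi0)
```

phi0 comes out as 2 rad, roughly 114 degrees, which is clearly outside the small-angle region where sin φ ≈ φ holds.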
<strong>Lecture</strong> 1 11<br />
<strong>EL2620</strong> 2011<br />
Linear Models may be too Crude<br />
Approximations<br />
Example: Positioning of a ball on a beam

[Figure: ball at position x on a beam inclined at angle φ]

Nonlinear model: mẍ(t) = mg sin φ(t).  Linear model: ẍ(t) = gφ(t)
<strong>Lecture</strong> 1 10<br />
<strong>EL2620</strong> 2011<br />
Linear Models are not Rich Enough<br />
Linear models can not describe many phenomena seen in<br />
nonlinear systems<br />
<strong>Lecture</strong> 1 12
<strong>EL2620</strong> 2011<br />
Stability Can Depend on Reference Signal<br />
Example: Control system with valve characteristic f(u) = u²

[Block diagram: r → Σ → motor 1/s → valve u² → process 1/((s+1)(s+1)) → y, with unity negative feedback. The Simulink diagram has the same structure]
<strong>Lecture</strong> 1 13<br />
<strong>EL2620</strong> 2011<br />
Step responses

[Figure: output y vs time t (0–30 s) for reference steps r = 0.2 and r = 1.68]
<strong>EL2620</strong> 2011<br />
Multiple Equilibria

Example: chemical reactor

ẋ₁ = −x₁ exp(−1/x₂) + f(1 − x₁)
ẋ₂ = x₁ exp(−1/x₂) − ɛf(x₂ − x_c)

f = 0.7, ɛ = 0.4
[Figure: output y vs time t (0–30 s) for reference step r = 1.72]
Stability depends on amplitude of the reference signal!<br />
(The linearized gain of the valve increases with increasing amplitude)<br />
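The amplitude dependence for small references can be reproduced with a crude simulation. This is a Python/Euler sketch under my own discretization and my reading of the block diagram (integrator → valve u² → two first-order lags), not part of the notes:

```python
def simulate(r, T=100.0, dt=1e-3):
    """Euler simulation of the loop: integrator 1/s -> valve u^2 -> 1/(s+1)^2."""
    v = z1 = z2 = 0.0
    for _ in range(int(T / dt)):
        y = z2                   # process output
        v += dt * (r - y)        # motor/integrator driven by the error
        w = v ** 2               # valve nonlinearity
        z1 += dt * (-z1 + w)     # first lag (s+1)^-1
        z2 += dt * (-z2 + z1)    # second lag (s+1)^-1
    return z2

y_small = simulate(0.2)   # small reference: the loop settles at y = r
```

For r = 0.2 the output settles at the reference; in this sketch the linearized valve gain 2√r grows with the operating point, which is the mechanism behind the loss of stability for large steps.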
<strong>Lecture</strong> 1 14<br />
[Figure: reactor input (coolant temperature x_c) and output (temperature x₂) vs time, 0–200 s]
The existence of multiple stable equilibria for the same input gives a hysteresis effect
<strong>Lecture</strong> 1 15<br />
<strong>EL2620</strong> 2011<br />
Stable Periodic Solutions<br />
Example: Position control of motor with back-lash<br />
[Simulink diagram: constant reference 0 → Sum → P-controller (gain 5) → motor 1/(5s² + s) → backlash → output y, with unity negative feedback]

Motor: G(s) = 1/(s(1 + 5s))
Controller: K = 5
<strong>Lecture</strong> 1 16
<strong>EL2620</strong> 2011<br />
Back-lash induces an oscillation<br />
Period and amplitude independent of initial conditions:<br />
[Figure: output y vs time t (0–50 s) for three different initial conditions; all trajectories converge to the same oscillation]
How can we predict and avoid such oscillations?
<strong>Lecture</strong> 1 17<br />
<strong>EL2620</strong> 2011<br />
Harmonic Distortion<br />
Example: Sinusoidal response of a saturation

Input a sin t gives output y(t) = Σ_{k=1}^∞ A_k sin(kt)

[Figures: output y vs time t and amplitude spectra for a = 1 and a = 2]
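The harmonic content is easy to compute numerically. A Python sketch (the unit saturation level and a = 2 are my choices in the spirit of the slide):

```python
import numpy as np

a = 2.0
dt = 2 * np.pi / 20000
t = np.arange(0.0, 2 * np.pi, dt)
y = np.clip(a * np.sin(t), -1.0, 1.0)   # unit saturation of a*sin(t)

def A(k):
    """k-th Fourier sine coefficient A_k = (1/pi) * int_0^{2pi} y sin(kt) dt."""
    return float(np.sum(y * np.sin(k * t)) * dt / np.pi)

A1, A2, A3 = A(1), A(2), A(3)
```

Only odd harmonics appear (A₂ ≈ 0), as expected for an odd static nonlinearity, while A₃ is clearly nonzero: the clipped sinusoid is distorted.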
<strong>EL2620</strong> 2011<br />
Automatic Tuning of PID <strong>Control</strong>lers<br />
Relay induces a desired oscillation whose frequency and amplitude<br />
are used to choose PID parameters<br />
[Block diagram: relay (output ±1) in feedback with the process replaces the PID controller during tuning; the induced oscillation in u and y has amplitude A and period T]
<strong>Lecture</strong> 1 18<br />
<strong>Lecture</strong> 1 19<br />
<strong>EL2620</strong> 2011<br />
Example: Electrical power distribution<br />
<strong>Nonlinear</strong>ities such as rectifiers, switched electronics, and<br />
transformers give rise to harmonic distortion<br />
Total Harmonic Distortion = (Σ_{k=2}^∞ energy in tone k) / (energy in tone 1)
Example: Electrical amplifiers
Efficient amplifiers work in the nonlinear region.
This introduces spectral leakage, which is a problem in cellular systems.
Trade-off between efficiency and linearity.
<strong>Lecture</strong> 1 20
<strong>EL2620</strong> 2011<br />
Subharmonics<br />
Example: Duffing’s equation ÿ + ẏ + y − y³ = a sin(ωt)

[Figure: forcing a sin ωt and response y vs time t (0–30 s); the response oscillates at a fraction of the forcing frequency]
<strong>Lecture</strong> 1 21<br />
<strong>EL2620</strong> 2011<br />
Existence Problems<br />
Example: The differential equation ẋ = x², x(0) = x₀,

has the solution x(t) = x₀/(1 − x₀t), 0 ≤ t < 1/x₀

The solution is not defined for t ≥ t_f = 1/x₀

The solution interval depends on the initial condition!

Recall the trick: ẋ = x² ⇒ dx/x² = dt

Integrate ⇒ −1/x(t) + 1/x(0) = t ⇒ x(t) = x₀/(1 − x₀t)
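A quick numerical check of the closed-form solution (Python sketch; x₀ = 1 is an arbitrary choice):

```python
x0 = 1.0

def x_sol(t):
    """Closed-form solution of xdot = x^2, valid for t < 1/x0."""
    return x0 / (1.0 - x0 * t)

# verify xdot = x^2 by central differences at an interior point
t, h = 0.5, 1e-6
xdot = (x_sol(t + h) - x_sol(t - h)) / (2 * h)
residual = abs(xdot - x_sol(t) ** 2)

# the solution blows up as t approaches 1/x0 = 1
x_near_escape = x_sol(0.999)
```

The residual is tiny, and the state grows without bound as t approaches the escape time 1/x₀.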
<strong>Lecture</strong> 1 23<br />
<strong>EL2620</strong> 2011<br />
<strong>Nonlinear</strong> Differential Equations<br />
Definition: A solution to

ẋ(t) = f(x(t)), x(0) = x₀  (1)

over an interval [0, T] is a C¹ function x : [0, T] → Rⁿ such that (1) is fulfilled.

• When does there exist a solution?
• When is the solution unique?

Example: ẋ = Ax, x(0) = x₀, gives x(t) = exp(At)x₀
<strong>Lecture</strong> 1 22<br />
<strong>EL2620</strong> 2011<br />
Finite Escape Time<br />
Simulation for various initial conditions x0<br />
[Figure: finite escape time of ẋ = x²; solutions x(t) for several initial conditions, each diverging as t approaches 1/x₀]
<strong>Lecture</strong> 1 24
<strong>EL2620</strong> 2011<br />
Uniqueness Problems<br />
Example: ẋ = √x, x(0) = 0, has many solutions: for any C ≥ 0,

x(t) = (t − C)²/4,  t > C
x(t) = 0,           t ≤ C

[Figure: several such solutions x(t), each leaving zero at a different time C]
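Non-uniqueness is easy to verify numerically: two different functions satisfy the same initial value problem (Python sketch; C = 1 is an arbitrary choice):

```python
import math

def sol(t, C):
    """One of infinitely many solutions of xdot = sqrt(x), x(0) = 0."""
    return (t - C) ** 2 / 4 if t > C else 0.0

# the C = 1 branch satisfies xdot = sqrt(x) at t = 3 (central differences)
t, h = 3.0, 1e-6
deriv = (sol(t + h, 1.0) - sol(t - h, 1.0)) / (2 * h)
ok_branch = abs(deriv - math.sqrt(sol(t, 1.0))) < 1e-6

# x(t) = 0 for all t trivially solves it as well, with the same x(0) = 0
ok_zero = abs(0.0 - math.sqrt(0.0)) < 1e-12
```

Both candidates pass the check, yet they disagree for t > 1: the solution is not unique.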
<strong>Lecture</strong> 1 25<br />
<strong>EL2620</strong> 2011<br />
Lipschitz Continuity<br />
Definition: f : Rⁿ → Rⁿ is Lipschitz continuous at x₀ if there exist L, r > 0 such that for all x, y ∈ B_r(x₀) = {z ∈ Rⁿ : ‖z − x₀‖ ≤ r}

‖f(x) − f(y)‖ ≤ L‖x − y‖

Theorem: If f is Lipschitz continuous, then there exists δ > 0 such that

ẋ(t) = f(x(t)), x(0) = x₀

has a unique solution in B_r(x₀) over [0, δ]. Proof: See Khalil, Appendix C.1. Based on the contraction mapping theorem.

Remarks
• δ = δ(r, L)
• f being C⁰ is not sufficient (cf. tank example)
• f being C¹ implies Lipschitz continuity (L = max_{x∈B_r(x₀)} ‖f′(x)‖)
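The √x example from the uniqueness slide shows why C⁰ is not enough: the difference quotient at 0 is unbounded, so no Lipschitz constant L exists there (Python sketch):

```python
import math

f = math.sqrt

# Lipschitz at 0 would require |f(x) - f(0)| <= L*|x - 0| for some fixed L,
# i.e. sqrt(x)/x = 1/sqrt(x) bounded near 0 -- but it blows up:
ratios = [f(x) / x for x in (1e-2, 1e-4, 1e-8)]
```

The ratios grow without bound as x approaches 0, so √x is continuous but not Lipschitz at the origin, which is exactly where uniqueness fails.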
<strong>Lecture</strong> 1 28
<strong>EL2620</strong> 2011<br />
State-Space Models<br />
State x, input u, output y<br />
General: f(x, u, y, ẋ, u̇, ẏ, …) = 0
Explicit: ẋ = f(x, u), y = h(x)<br />
Affine in u: ẋ = f(x)+g(x)u, y = h(x)<br />
Linear: ẋ = Ax + Bu, y = Cx<br />
<strong>Lecture</strong> 1 29<br />
<strong>EL2620</strong> 2011<br />
Transformation to First-Order System<br />
Given a differential equation in y with highest derivative dⁿy/dtⁿ, express the equation in

x = (y, dy/dt, …, d^{n−1}y/dt^{n−1})ᵀ

Example: Pendulum

MR²θ̈ + kθ̇ + MgR sin θ = 0

x = (θ, θ̇)ᵀ gives

ẋ₁ = x₂
ẋ₂ = −(k/(MR²))x₂ − (g/R) sin x₁
<strong>Lecture</strong> 1 31<br />
<strong>EL2620</strong> 2011<br />
Transformation to Autonomous System<br />
A nonautonomous system

ẋ = f(x, t)

can always be transformed to an autonomous system by introducing x_{n+1} = t:

ẋ = f(x, x_{n+1})
ẋ_{n+1} = 1
<strong>Lecture</strong> 1 30<br />
<strong>EL2620</strong> 2011<br />
Equilibria<br />
Definition: A point (x ∗ ,u ∗ ,y ∗ ) is an equilibrium, if a solution starting<br />
in (x ∗ ,u ∗ ,y ∗ ) stays there forever.<br />
Corresponds to putting all derivatives to zero:<br />
General: f(x ∗ ,u ∗ ,y ∗ , 0, 0,...)=0<br />
Explicit: 0=f(x ∗ ,u ∗ ), y ∗ = h(x ∗ )<br />
Affine in u: 0=f(x ∗ )+g(x ∗ )u ∗ , y ∗ = h(x ∗ )<br />
Linear: 0=Ax ∗ + Bu ∗ , y ∗ = Cx ∗<br />
Often the equilibrium is defined only through the state x ∗<br />
<strong>Lecture</strong> 1 32
<strong>EL2620</strong> 2011<br />
Multiple Equilibria<br />
Example: Pendulum<br />
MR 2¨θ + k ˙θ + MgRsin θ =0<br />
¨θ = ˙θ =0gives sin θ =0and thus θ<br />
∗ = kπ<br />
Alternatively in first-order form:<br />
ẋ1 = x2<br />
ẋ2 = − k<br />
MR x 2 2 − g R sin x 1<br />
ẋ1 =ẋ2 =0gives x ∗ 2 =0and sin(x ∗ 1)=0<br />
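The family of equilibria can be checked numerically by evaluating the first-order dynamics at (nπ, 0). A Python sketch with illustrative parameter values of my own choosing:

```python
import math

k, M, R, g = 0.1, 1.0, 0.5, 9.81   # illustrative parameters, not from the notes

def f(x1, x2):
    """First-order pendulum dynamics."""
    return (x2, -k / (M * R**2) * x2 - g / R * math.sin(x1))

# every point (n*pi, 0) should make both derivatives vanish
residuals = [max(abs(d) for d in f(n * math.pi, 0.0)) for n in range(-2, 3)]
```

All residuals are zero up to floating-point error, confirming the infinite family of equilibria (hanging and inverted positions alternating).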
<strong>Lecture</strong> 1 33<br />
<strong>EL2620</strong> 2011<br />
When do we need <strong>Nonlinear</strong> Analysis &<br />
Design?<br />
• When the system is strongly nonlinear<br />
• When the range of operation is large<br />
• When distinctive nonlinear phenomena are relevant<br />
• When we want to push performance to the limit<br />
<strong>Lecture</strong> 1 35<br />
<strong>EL2620</strong> 2011<br />
Some Common <strong>Nonlinear</strong>ities in <strong>Control</strong><br />
Systems<br />
[Simulink nonlinearity blocks: Abs |u|, Math Function eᵘ, Saturation, Sign, Dead Zone, Look-Up Table, Relay, Backlash, Coulomb & Viscous Friction]
<strong>Lecture</strong> 1 34<br />
<strong>EL2620</strong> 2011<br />
Next <strong>Lecture</strong><br />
• Simulation in Matlab<br />
• Linearization<br />
• Phase plane analysis<br />
<strong>Lecture</strong> 1 36
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 2<br />
• Wrap-up of <strong>Lecture</strong> 1: <strong>Nonlinear</strong> systems and phenomena<br />
• Modeling and simulation in Simulink<br />
• Phase-plane analysis<br />
<strong>Lecture</strong> 2 1<br />
<strong>EL2620</strong> 2011<br />
Analysis Through Simulation<br />
Simulation tools:<br />
ODEs: ẋ = f(t, x, u)
• ACSL, Simnon, Simulink

DAEs: F(t, ẋ, x, u) = 0
• Omsim, Dymola, Modelica
http://www.modelica.org<br />
Special purpose simulation tools<br />
• Spice, EMTP, ADAMS, gPROMS<br />
<strong>Lecture</strong> 2 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Model and simulate in Simulink<br />
• Linearize using Simulink<br />
• Do phase-plane analysis using pplane (or other tool)<br />
<strong>Lecture</strong> 2 2<br />
<strong>EL2620</strong> 2011<br />
Simulink<br />
> matlab<br />
>> simulink<br />
<strong>Lecture</strong> 2 4
<strong>EL2620</strong> 2011<br />
An Example in Simulink<br />
File -> New -> Model<br />
Double click on Continuous
Drag in Transfer Fcn, Step (in Sources), and Scope (in Sinks)
Connect (mouse-left)
Simulation -> Parameters

[Block diagram: Step → Transfer Fcn 1/(s+1) → Scope]
<strong>Lecture</strong> 2 5<br />
<strong>EL2620</strong> 2011<br />
Save Results to Workspace<br />
stepmodel.mdl

[Block diagram: Step → Transfer Fcn 1/(s+1) → To Workspace blocks saving t (via a Clock), y, and u]
Check “Save format” of output blocks (“Array” instead of “Structure”)<br />
>> plot(t,y)<br />
<strong>Lecture</strong> 2 7<br />
<strong>EL2620</strong> 2011<br />
Choose Simulation Parameters<br />
Don’t forget “Apply”<br />
<strong>Lecture</strong> 2 6<br />
<strong>EL2620</strong> 2011<br />
How To Get Better Accuracy<br />
Modify Refine, Absolute and Relative Tolerances, Integration method<br />
Refine adds interpolation points:<br />
[Figure: simulated output over 0–10 s with Refine = 1 (coarse, polygonal curve) and Refine = 10 (smooth curve)]
<strong>Lecture</strong> 2 8<br />
<strong>EL2620</strong> 2011<br />
Use Scripts to Document Simulations<br />
If the block-diagram is saved to stepmodel.mdl,<br />
the following script file simstepmodel.m simulates the system:

open_system('stepmodel')
set_param('stepmodel','RelTol','1e-3')
set_param('stepmodel','AbsTol','1e-6')
set_param('stepmodel','Refine','1')
tic
sim('stepmodel',6)
toc
subplot(2,1,1),plot(t,y),title('y')
subplot(2,1,2),plot(t,u),title('u')
<strong>Lecture</strong> 2 9<br />
<strong>EL2620</strong> 2011<br />
Example: Two-Tank System<br />
The system consists of two identical tank models:<br />
ḣ = (u − q)/A
q = a√(2g)·√h

[Simulink diagram: each tank subsystem computes q from h with an Fcn block, forms (u − q)/A with Sum and Gain blocks, and integrates 1/s to get the level h; two identical tank subsystems in series form the two-tank system]
<strong>Lecture</strong> 2 11<br />
<strong>EL2620</strong> 2011<br />
<strong>Nonlinear</strong> <strong>Control</strong> System<br />
Example: Control system with valve characteristic f(u) = u²

[Block diagram: r → Σ → motor 1/s → valve u² → process 1/((s+1)(s+1)) → y, with unity negative feedback; the same loop as in Lecture 1, here built in Simulink]
<strong>Lecture</strong> 2 10<br />
<strong>EL2620</strong> 2011<br />
Linearization in Simulink<br />
Linearize about equilibrium (x0,u0,y0):<br />
>> A=2.7e-3;a=7e-6;g=9.8;
>> [x0,u0,y0]=trim(’twotank’,[0.1 0.1]’,[],0.1)<br />
x0 =<br />
0.1000<br />
0.1000<br />
u0 =<br />
9.7995e-006<br />
y0 =<br />
0.1000<br />
>> [aa,bb,cc,dd]=linmod(’twotank’,x0,u0);<br />
>> sys=ss(aa,bb,cc,dd);<br />
>> bode(sys)<br />
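The trim result can be sanity-checked by hand: at equilibrium the inflow equals the outflow of each tank, so u₀ = a√(2g·h₀). A quick check outside Matlab (Python, same parameter values):

```python
import math

A, a, g = 2.7e-3, 7e-6, 9.8   # tank area, outlet area, gravity (as in the session)
h0 = 0.1                      # equilibrium level found by trim

u0 = a * math.sqrt(2 * g * h0)   # equilibrium inflow balancing the outflow
```

This reproduces the value 9.8e-6 m³/s reported by trim (printed there as 9.7995e-006 after numerical iteration).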
<strong>Lecture</strong> 2 12
<strong>EL2620</strong> 2011<br />
Differential Equation Editor<br />
dee is a Simulink-based differential equation editor<br />
>> dee<br />
[DEE window: Differential Equation Editor with demos deedemo1–deedemo4]

Run the demonstrations
<strong>Lecture</strong> 2 13<br />
<strong>EL2620</strong> 2011<br />
Homework 1<br />
• Use your favorite phase-plane analysis tool<br />
• Follow instructions in Exercise Compendium on how to write the<br />
report.<br />
• See the course homepage for a report example<br />
• The report should be short and include only necessary plots.<br />
Write in English.<br />
<strong>Lecture</strong> 2 15<br />
<strong>EL2620</strong> 2011<br />
Phase-Plane Analysis<br />
• Download ICTools from<br />
http://www.control.lth.se/˜ictools<br />
• Download DFIELD and PPLANE from
http://math.rice.edu/˜dfield<br />
This was the preferred tool last year!<br />
<strong>Lecture</strong> 2 14
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 3<br />
• Stability definitions<br />
• Linearization<br />
• Phase-plane analysis<br />
• Periodic solutions<br />
<strong>Lecture</strong> 3 1<br />
<strong>EL2620</strong> 2011<br />
Local Stability<br />
Consider ẋ = f(x) with f(0) = 0<br />
Definition: The equilibrium x* = 0 is stable if for all ɛ > 0 there exists δ = δ(ɛ) > 0 such that

‖x(0)‖ < δ ⇒ ‖x(t)‖ < ɛ, ∀t ≥ 0

[Figure: trajectory x(t) starting inside the δ-ball stays inside the ɛ-ball]

If x* = 0 is not stable it is called unstable.
<strong>Lecture</strong> 3 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Explain local and global stability<br />
• Linearize around equilibria and trajectories<br />
• Sketch phase portraits for two-dimensional systems<br />
• Classify equilibria into nodes, focuses, saddle points, and<br />
center points<br />
• Analyze stability of periodic solutions through Poincaré maps<br />
<strong>Lecture</strong> 3 2<br />
<strong>EL2620</strong> 2011<br />
Asymptotic Stability<br />
Definition: The equilibrium x = 0 is asymptotically stable if it is stable and δ can be chosen such that

‖x(0)‖ < δ ⇒ lim_{t→∞} x(t) = 0

The equilibrium is globally asymptotically stable if it is stable and lim_{t→∞} x(t) = 0 for all x(0).
<strong>Lecture</strong> 3 4
<strong>EL2620</strong> 2011<br />
Linearization Around a Trajectory<br />
Let (x₀(t), u₀(t)) denote a solution to ẋ = f(x, u) and consider another solution (x(t), u(t)) = (x₀(t) + x̃(t), u₀(t) + ũ(t)):

ẋ(t) = f(x₀(t) + x̃(t), u₀(t) + ũ(t))
     = f(x₀(t), u₀(t)) + (∂f/∂x)(x₀(t), u₀(t)) x̃(t) + (∂f/∂u)(x₀(t), u₀(t)) ũ(t) + O(‖(x̃, ũ)‖²)

[Figure: nominal trajectory (x₀(t), u₀(t)) and perturbed trajectory (x₀(t) + x̃(t), u₀(t) + ũ(t))]
<strong>Lecture</strong> 3 5<br />
<strong>EL2620</strong> 2011<br />
Example

[Figure: rocket with height h(t) and mass m(t)]

ḣ(t) = v(t)
v̇(t) = −g + v_e u(t)/m(t)
ṁ(t) = −u(t)

Let x₀(t) = (h₀(t), v₀(t), m₀(t))ᵀ, u₀(t) ≡ u₀ > 0, be a solution. Then,

x̃̇(t) = [ 0  1  0 ; 0  0  −v_e u₀/(m₀ − u₀t)² ; 0  0  0 ] x̃(t) + [ 0 ; v_e/(m₀ − u₀t) ; −1 ] ũ(t)
<strong>Lecture</strong> 3 7<br />
<strong>EL2620</strong> 2011<br />
Hence, for small (x̃, ũ), approximately

x̃̇(t) = A(x₀(t), u₀(t)) x̃(t) + B(x₀(t), u₀(t)) ũ(t)

where

A(x₀(t), u₀(t)) = (∂f/∂x)(x₀(t), u₀(t))
B(x₀(t), u₀(t)) = (∂f/∂u)(x₀(t), u₀(t))
Note that A and B are time dependent. However, if<br />
(x0(t),u0(t)) ≡ (x0,u0) then A and B are constant.<br />
<strong>Lecture</strong> 3 6<br />
<strong>EL2620</strong> 2011<br />
Pointwise Left Half-Plane Eigenvalues of A(t) Do Not Imply Stability

A(t) = [ −1 + α cos²t      1 − α sin t cos t ;
         −1 − α sin t cos t   −1 + α sin²t ],  α > 0

Pointwise eigenvalues are given by

λ(t) = λ = (α − 2 ± √(α² − 4))/2

which are stable for 0 < α < 2. However,

x(t) = [ e^{(α−1)t} cos t    e^{−t} sin t ;
         −e^{(α−1)t} sin t   e^{−t} cos t ] x(0)

is an unbounded solution for α > 1.
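This classic counterexample can be verified numerically (Python sketch; α = 1.5 is my choice inside the interval 1 < α < 2):

```python
import math

alpha = 1.5

def A(t):
    c, s = math.cos(t), math.sin(t)
    return [[-1 + alpha * c * c, 1 - alpha * s * c],
            [-1 - alpha * s * c, -1 + alpha * s * s]]

def x(t):
    """First column of the state transition matrix, i.e. x(0) = (1, 0)."""
    return [math.exp((alpha - 1) * t) * math.cos(t),
            -math.exp((alpha - 1) * t) * math.sin(t)]

# verify xdot = A(t) x at t = 1 by central differences
t, h = 1.0, 1e-6
xdot = [(a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h))]
Ax = [sum(A(t)[i][j] * x(t)[j] for j in range(2)) for i in range(2)]
residual = max(abs(d - e) for d, e in zip(xdot, Ax))

# pointwise eigenvalues have Re(lambda) = (alpha - 2)/2 < 0 ...
re_lambda = (alpha - 2) / 2
# ... yet the solution grows without bound:
growth = math.hypot(*x(20.0)) / math.hypot(*x(0.0))
```

The claimed solution satisfies the time-varying equation, the frozen-time eigenvalues stay strictly in the left half-plane, and still the state grows like e^{(α−1)t}.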
<strong>Lecture</strong> 3 8
<strong>EL2620</strong> 2011<br />
Lyapunov’s Linearization Method<br />
Theorem: Let x₀ be an equilibrium of ẋ = f(x) with f ∈ C¹. Denote A = (∂f/∂x)(x₀) and α(A) = max Re(λ(A)).

• If α(A) < 0, then x₀ is asymptotically stable
• If α(A) > 0, then x₀ is unstable

The fundamental result for linear systems theory!
The case α(A) = 0 needs further investigation.
The theorem is also called Lyapunov’s Indirect Method.<br />
A proof is given next lecture.<br />
<strong>Lecture</strong> 3 9<br />
<strong>EL2620</strong> 2011<br />
Linear Systems Revival<br />
d/dt [x₁ ; x₂] = A [x₁ ; x₂]

Analytic solution: x(t) = e^{At} x(0), t ≥ 0.

If A is diagonalizable, then

e^{At} = V e^{Λt} V⁻¹ = [v₁ v₂] [ e^{λ₁t}  0 ; 0  e^{λ₂t} ] [v₁ v₂]⁻¹

where v₁, v₂ are the eigenvectors of A (Av₁ = λ₁v₁ etc.).

This implies that

x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂,

where the constants c₁ and c₂ are given by the initial conditions
<strong>Lecture</strong> 3 11<br />
<strong>EL2620</strong> 2011<br />
Example<br />
The linearization of

ẋ₁ = −x₁² + x₁ + sin x₂
ẋ₂ = cos x₂ − x₁³ − 5x₂

at the equilibrium x₀ = (1, 0)ᵀ is given by

A = [ −1  1 ; −3  −5 ],  λ(A) = {−2, −4}
x0 is thus an asymptotically stable equilibrium for the nonlinear<br />
system.<br />
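The Jacobian and its eigenvalues can be double-checked with finite differences (a Python sketch, not part of the notes):

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([-x1**2 + x1 + np.sin(x2),
                     np.cos(x2) - x1**3 - 5 * x2])

x0 = np.array([1.0, 0.0])

# central-difference Jacobian at the equilibrium
h = 1e-6
J = np.column_stack([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h)
                     for e in np.eye(2)])

eigs = sorted(np.linalg.eigvals(J).real)
```

The point (1, 0) is indeed an equilibrium (f vanishes there), and the numerical eigenvalues come out as −4 and −2, matching the slide.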
<strong>Lecture</strong> 3 10<br />
<strong>EL2620</strong> 2011<br />
Example: Two real negative eigenvalues<br />
Given the eigenvalues λ₁ < λ₂ < 0, with corresponding eigenvectors v₁ and v₂, respectively.

Solution: x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂

Slow eigenvalue/vector: x(t) ≈ c₂e^{λ₂t}v₂ for large t.
Moves along the slow eigenvector towards x = 0 for large t.

Fast eigenvalue/vector: x(t) ≈ c₁e^{λ₁t}v₁ + c₂v₂ for small t.
Moves along the fast eigenvector for small t.
<strong>Lecture</strong> 3 12
<strong>EL2620</strong> 2011<br />
Phase-Plane Analysis for Linear Systems<br />
The location of the eigenvalues λ(A) determines the characteristics<br />
of the trajectories.<br />
Six cases:

Im λᵢ = 0:  λ₁, λ₂ < 0 → stable node;  λ₁, λ₂ > 0 → unstable node;  λ₁ < 0 < λ₂ → saddle point
Im λᵢ ≠ 0:  Re λᵢ < 0 → stable focus;  Re λᵢ > 0 → unstable focus;  Re λᵢ = 0 → center point
<strong>Lecture</strong> 3 13<br />
<strong>EL2620</strong> 2011<br />
Example—Unstable Focus

ẋ = [ σ  −ω ; ω  σ ] x,  σ, ω > 0,  λ₁,₂ = σ ± iω

x(t) = e^{At} x(0) = [ 1  1 ; −i  i ] [ e^{σt}e^{iωt}  0 ; 0  e^{σt}e^{−iωt} ] [ 1  1 ; −i  i ]⁻¹ x(0)

In polar coordinates r = √(x₁² + x₂²), θ = arctan(x₂/x₁) (x₁ = r cos θ, x₂ = r sin θ):

ṙ = σr
θ̇ = ω
<strong>Lecture</strong> 3 15<br />
<strong>EL2620</strong> 2011<br />
Equilibrium Points for Linear Systems<br />
[Figure: phase portraits in the (x₁, x₂)-plane for each eigenvalue configuration in the complex plane (Re λ, Im λ)]
<strong>Lecture</strong> 3 14<br />
<strong>EL2620</strong> 2011<br />
[Phase portraits for λ₁,₂ = 1 ± i and λ₁,₂ = 0.3 ± i]
<strong>Lecture</strong> 3 16
<strong>EL2620</strong> 2011<br />
Example—Stable Node<br />
ẋ = [ −1  1 ; 0  −2 ] x

(λ₁, λ₂) = (−1, −2) and

[v₁ v₂] = [ 1  −1 ; 0  1 ]

v₁ is the slow direction and v₂ is the fast.
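The slow/fast behavior is easy to confirm numerically: after a few time constants the state direction lines up with the slow eigenvector (Python sketch with arbitrary modal coefficients):

```python
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
v1 = np.array([1.0, 0.0])    # slow eigenvector (lambda1 = -1)
v2 = np.array([-1.0, 1.0])   # fast eigenvector (lambda2 = -2)

# check the eigenpairs
eig_ok = np.allclose(A @ v1, -1.0 * v1) and np.allclose(A @ v2, -2.0 * v2)

def x(t, c1=2.0, c2=3.0):
    """Closed-form solution x(t) = c1*exp(-t)*v1 + c2*exp(-2t)*v2."""
    return c1 * np.exp(-t) * v1 + c2 * np.exp(-2.0 * t) * v2

# cosine of the angle between x(t) and v1 for a large-ish t
xt = x(5.0)
alignment = abs(xt @ v1) / (np.linalg.norm(xt) * np.linalg.norm(v1))
```

At t = 5 the fast mode e^{−2t} has decayed much further than e^{−t}, so the trajectory approaches the origin essentially along v₁.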
<strong>Lecture</strong> 3 17<br />
<strong>EL2620</strong> 2011<br />
5 minute exercise: What is the phase portrait if λ1 = λ2?<br />
Hint: Two cases; only one linearly independent eigenvector, or all vectors are eigenvectors
<strong>Lecture</strong> 3 19<br />
<strong>EL2620</strong> 2011<br />
Fast: x2 = −x1 + c3<br />
Slow: x2 =0<br />
<strong>Lecture</strong> 3 18<br />
<strong>EL2620</strong> 2011<br />
Phase-Plane Analysis for <strong>Nonlinear</strong> Systems<br />
Close to equilibrium points “nonlinear system” ≈ “linear system”<br />
Theorem: Assume<br />
ẋ = f(x) =Ax + g(x),<br />
with lim‖x‖→0 ‖g(x)‖/‖x‖ =0. If ż = Az has a focus, node, or<br />
saddle point, then ẋ = f(x) has the same type of equilibrium at the<br />
origin.<br />
Remark: If the linearized system has a center, then the nonlinear<br />
system has either a center or a focus.<br />
<strong>Lecture</strong> 3 20
<strong>EL2620</strong> 2011<br />
How to Draw Phase Portraits<br />
By hand:<br />
1. Find equilibria<br />
2. Sketch local behavior around equilibria<br />
3. Sketch (ẋ₁, ẋ₂) for some other points. Notice that

dx₂/dx₁ = ẋ₂/ẋ₁
4. Try to find possible periodic orbits<br />
5. Guess solutions<br />
By computer:<br />
1. Matlab: dee or pplane<br />
<strong>Lecture</strong> 3 21<br />
<strong>EL2620</strong> 2011<br />
Phase-Plane Analysis of PLL<br />
Let (x₁, x₂) = (θ_out, θ̇_out), K, T > 0, and θ_in(t) ≡ θ_in.

ẋ₁(t) = x₂(t)
ẋ₂(t) = −T⁻¹x₂(t) + KT⁻¹ sin(θ_in − x₁(t))

Equilibria are (θ_in + nπ, 0) since

ẋ₁ = 0 ⇒ x₂ = 0
ẋ₂ = 0 ⇒ sin(θ_in − x₁) = 0 ⇒ x₁ = θ_in + nπ
<strong>Lecture</strong> 3 23<br />
<strong>EL2620</strong> 2011<br />
Phase-Locked Loop<br />
A PLL tracks the phase θ_in(t) of a signal s_in(t) = A sin[ωt + θ_in(t)].

[Block diagram: phase detector sin(·) acting on the error θ_in − θ_out, followed by the filter K/(1 + sT) producing θ̇_out, and the VCO integrator 1/s producing θ_out]
<strong>Lecture</strong> 3 22<br />
<strong>EL2620</strong> 2011<br />
Classification of Equilibria<br />
Linearization gives the following characteristic equations:

n even: λ² + T⁻¹λ + KT⁻¹ = 0
K > (4T)⁻¹ gives a stable focus
0 < K < (4T)⁻¹ gives a stable node

n odd: λ² + T⁻¹λ − KT⁻¹ = 0
One eigenvalue is positive and one is negative: a saddle point
<strong>EL2620</strong> 2011<br />
Phase-Plane for PLL<br />
(K, T) = (1/2, 1): focuses at (2kπ, 0), saddle points at ((2k + 1)π, 0)
<strong>Lecture</strong> 3 25<br />
<strong>EL2620</strong> 2011<br />
Periodic solution: polar coordinates.

x₁ = r cos θ ⇒ ẋ₁ = ṙ cos θ − r θ̇ sin θ
x₂ = r sin θ ⇒ ẋ₂ = ṙ sin θ + r θ̇ cos θ

implies

(ṙ ; θ̇) = (1/r) [ r cos θ  r sin θ ; −sin θ  cos θ ] (ẋ₁ ; ẋ₂)

Now, from (1)

ẋ₁ = r(1 − r²) cos θ − r sin θ
ẋ₂ = r(1 − r²) sin θ + r cos θ

gives

ṙ = r(1 − r²)
θ̇ = 1

Only r = 1 is a stable equilibrium of the r-dynamics!
<strong>Lecture</strong> 3 27<br />
<strong>EL2620</strong> 2011<br />
Periodic Solutions<br />
Example of an asymptotically stable periodic solution:

ẋ₁ = x₁ − x₂ − x₁(x₁² + x₂²)
ẋ₂ = x₁ + x₂ − x₂(x₁² + x₂²)    (1)
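A crude simulation shows trajectories from both sides converging to the unit circle (a Python/Euler sketch; the step size and horizon are my choices):

```python
import math

def f(x1, x2):
    r2 = x1 * x1 + x2 * x2
    return (x1 - x2 - x1 * r2, x1 + x2 - x2 * r2)

def radius_after(x1, x2, T=20.0, dt=1e-3):
    """Euler integration of (1); returns the final distance to the origin."""
    for _ in range(int(T / dt)):
        d1, d2 = f(x1, x2)
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return math.hypot(x1, x2)

r_inside = radius_after(0.1, 0.0)   # starts inside the unit circle
r_outside = radius_after(2.0, 0.0)  # starts outside
```

Both runs end up at radius 1, consistent with ṙ = r(1 − r²) having r = 1 as its only stable equilibrium.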
<strong>Lecture</strong> 3 26<br />
<strong>EL2620</strong> 2011<br />
A system has a periodic solution if for some T > 0

x(t + T) = x(t), ∀t ≥ 0

A periodic orbit is the image of x in the phase portrait.

• When does there exist a periodic solution?
• When is it stable?

Note that x(t) ≡ const is by convention not regarded as periodic
<strong>Lecture</strong> 3 28
<strong>EL2620</strong> 2011<br />
Flow<br />
The solution of ẋ = f(x) is sometimes denoted<br />
φt(x0)<br />
to emphasize the dependence on the initial point x0 ∈ R n<br />
φt(·) is called the flow.<br />
<strong>Lecture</strong> 3 29<br />
<strong>EL2620</strong> 2011<br />
Existence of Periodic Orbits<br />
A point x* such that P(x*) = x* corresponds to a periodic orbit.

x* is called a fixed point of P.
<strong>Lecture</strong> 3 31<br />
<strong>EL2620</strong> 2011<br />
Poincaré Map<br />
Assume φ_t(x₀) is a periodic solution with period T.

Let Σ ⊂ Rⁿ be an (n − 1)-dimensional hyperplane transverse to f at x₀.

Definition: The Poincaré map P : Σ → Σ is

P(x) = φ_{τ(x)}(x)

where τ(x) is the time of first return.

[Figure: trajectory φ_t(x) leaving x ∈ Σ and first returning to Σ at P(x)]
<strong>Lecture</strong> 3 30<br />
<strong>EL2620</strong> 2011<br />
1 minute exercise: What does a fixed point of P^k correspond to?
<strong>Lecture</strong> 3 32
<strong>EL2620</strong> 2011<br />
Stable Periodic Orbit<br />
The linearization of P around x* gives a matrix W such that

P(x) ≈ x* + W(x − x*)

if x is close to x*.

• λⱼ(W) = 1 for some j
• If |λᵢ(W)| < 1 for all i ≠ j, then the corresponding periodic orbit is asymptotically stable
• If |λᵢ(W)| > 1 for some i, then the periodic orbit is unstable.
<strong>Lecture</strong> 3 33<br />
<strong>EL2620</strong> 2011<br />
The Poincaré map is<br />
P(r0, θ0) = ( [1 + (r0⁻² − 1) e^(−2·2π)]^(−1/2), θ0 + 2π )<br />
(r0, θ0) = (1, 2πk) is a fixed point.<br />
The periodic solution that corresponds to (r(t), θ(t)) = (1, t) is<br />
asymptotically stable because<br />
W = (dP/d(r0, θ0)) evaluated at (1, 2πk) = [ e^(−4π) 0 ; 0 1 ]<br />
⇒ Stable periodic orbit (as we already knew for this example) !<br />
<strong>Lecture</strong> 3 35<br />
<strong>EL2620</strong> 2011<br />
Example—Stable Unit Circle<br />
Rewrite (1) in polar coordinates:<br />
ṙ = r(1 − r²)<br />
θ̇ = 1<br />
Choose Σ = {(r, θ) : r > 0, θ = 2πk}.<br />
The solution is<br />
φt(r0, θ0) = ( [1 + (r0⁻² − 1) e^(−2t)]^(−1/2), t + θ0 )<br />
First return time from any point (r0, θ0) ∈ Σ is<br />
τ(r0, θ0) = 2π.<br />
<strong>Lecture</strong> 3 34
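The closed-form Poincaré map above can be sanity-checked against direct numerical integration of ṙ = r(1 − r²) over one return time 2π. This is an illustrative sketch (not part of the original notes); plain forward Euler is used for simplicity.

```python
import math

# Closed-form Poincaré map of r' = r(1 - r^2) with return time 2*pi,
# versus direct forward-Euler integration of the radial ODE.
def P_formula(r0):
    return (1.0 + (r0 ** -2 - 1.0) * math.exp(-2.0 * 2.0 * math.pi)) ** -0.5

def P_numeric(r0, steps=200000):
    dt = 2.0 * math.pi / steps
    r = r0
    for _ in range(steps):
        r += dt * r * (1.0 - r * r)
    return r

print(P_formula(0.5), P_numeric(0.5))  # both very close to 1

# Iterating the map converges to the fixed point r* = 1: the stable orbit.
r = 0.5
for _ in range(3):
    r = P_formula(r)
print(r)
```

Iterating P from any r0 > 0 converges extremely fast here because the contraction factor of the map is e^(−4π), matching the eigenvalue of W on the slide.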
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 5<br />
• Input–output stability<br />
u y<br />
S<br />
<strong>Lecture</strong> 5 1<br />
<strong>EL2620</strong> 2011<br />
r y<br />
G(s)<br />
−<br />
History<br />
f(y)<br />
y<br />
f(·)<br />
For what G(s) and f(·) is the closed-loop system stable?<br />
• Luré and Postnikov’s problem (1944)<br />
• Aizerman’s conjecture (1949) (False!)<br />
• Kalman’s conjecture (1957) (False!)<br />
• Solution by Popov (1960) (Led to the Circle Criterion)<br />
<strong>Lecture</strong> 5 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• derive the gain of a system<br />
• analyze stability using<br />
– Small Gain Theorem<br />
– Circle Criterion<br />
– Passivity<br />
<strong>Lecture</strong> 5 2<br />
<strong>EL2620</strong> 2011<br />
Gain<br />
Idea: Generalize the concept of gain to nonlinear dynamical systems<br />
u y<br />
S<br />
The gain γ of S is the largest amplification from u to y<br />
Here S can be a constant, a matrix, a linear time-invariant system, etc<br />
Question: How should we measure the size of u and y?<br />
<strong>Lecture</strong> 5 4
<strong>EL2620</strong> 2011<br />
Norms<br />
A norm ‖ · ‖ measures size<br />
Definition:<br />
A norm is a function ‖ · ‖ : Ω → R+, such that for all x, y ∈ Ω<br />
• ‖x‖ ≥ 0 and ‖x‖ = 0 ⇔ x = 0<br />
• ‖x + y‖ ≤ ‖x‖ + ‖y‖<br />
• ‖αx‖ = |α| · ‖x‖, for all α ∈ R<br />
Examples:<br />
Euclidean norm: ‖x‖ = √(x1² + ··· + xn²)<br />
Max norm: ‖x‖ = max{|x1|, ..., |xn|}<br />
<strong>Lecture</strong> 5 5<br />
<strong>EL2620</strong> 2011<br />
Eigenvalues are not gains<br />
The spectral radius of a matrix M<br />
ρ(M) = max_i |λi(M)|<br />
is not a gain (nor a norm).<br />
Why? What amplification is described by the eigenvalues?<br />
<strong>Lecture</strong> 5 7<br />
<strong>EL2620</strong> 2011<br />
Gain of a Matrix<br />
Every matrix M ∈ C n×n has a singular value decomposition<br />
M = UΣV ∗<br />
where<br />
Σ = diag{σ1, ..., σn} ; U∗U = I ; V∗V = I<br />
σi - the singular values<br />
The “gain” of M is the largest singular value of M:<br />
σmax(M) = σ1 = sup_{x∈R n} ‖Mx‖/‖x‖<br />
where ‖ · ‖ is the Euclidean norm.<br />
<strong>Lecture</strong> 5 6<br />
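The contrast between eigenvalues and singular values on these slides can be illustrated numerically. A sketch using NumPy; the matrix is an arbitrary illustrative choice, not from the notes:

```python
import numpy as np

# Eigenvalues vs singular values: this matrix has spectral radius 0 but
# gain (largest singular value) 10.
M = np.array([[0.0, 10.0],
              [0.0, 0.0]])

sigma_max = np.linalg.svd(M, compute_uv=False)[0]   # gain of M
rho = max(abs(np.linalg.eigvals(M)))                # spectral radius

x = np.array([0.0, 1.0])                            # worst-case direction
amplification = np.linalg.norm(M @ x) / np.linalg.norm(x)
print(sigma_max, rho, amplification)  # 10.0 0.0 10.0
```

Both eigenvalues are zero, yet the input direction x = (0, 1) is amplified by a factor 10, which is exactly why the spectral radius is not a gain.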
<strong>EL2620</strong> 2011<br />
Signal Norms<br />
A signal x is a function x : R + → R.<br />
A signal norm ‖ · ‖k is a norm on the space of signals x.<br />
Examples:<br />
2-norm (energy norm): ‖x‖2 = ( ∫0^∞ |x(t)|² dt )^(1/2)<br />
sup-norm: ‖x‖∞ = sup_{t∈R+} |x(t)|<br />
<strong>Lecture</strong> 5 8
<strong>EL2620</strong> 2011<br />
Parseval’s Theorem<br />
L2 denotes the space of signals with bounded energy: ‖x‖2 < ∞<br />
Theorem: If x, y ∈ L2 have the Fourier transforms<br />
X(iω) = ∫0^∞ e^(−iωt) x(t) dt,  Y(iω) = ∫0^∞ e^(−iωt) y(t) dt,<br />
then<br />
∫0^∞ y(t)x(t) dt = (1/2π) ∫−∞^∞ Y∗(iω)X(iω) dω.<br />
In particular,<br />
‖x‖2² = ∫0^∞ |x(t)|² dt = (1/2π) ∫−∞^∞ |X(iω)|² dω.<br />
The power calculated in the time domain equals the power calculated<br />
in the frequency domain<br />
<strong>Lecture</strong> 5 9<br />
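Parseval's relation can be checked numerically for a concrete signal. The sketch below (not from the notes) uses x(t) = e^(−t), whose transform X(iω) = 1/(1 + iω) is known in closed form, so both sides should equal 1/2:

```python
import numpy as np

def trapz(f, x):
    # simple trapezoidal rule (avoids depending on np.trapz/np.trapezoid)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(x)))

# Time-domain energy of x(t) = exp(-t), t >= 0: integral of exp(-2t) = 1/2
t = np.linspace(0.0, 40.0, 400001)
energy_time = trapz(np.exp(-t) ** 2, t)

# Frequency-domain energy: (1/2pi) * integral of |1/(1+iw)|^2 over all w
w = np.linspace(-500.0, 500.0, 1000001)
energy_freq = trapz(np.abs(1.0 / (1.0 + 1j * w)) ** 2, w) / (2.0 * np.pi)

print(energy_time, energy_freq)  # both close to 0.5
```

The small discrepancy in the frequency-domain value comes from truncating the integration interval at |ω| = 500.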
<strong>EL2620</strong> 2011<br />
2 minute exercise: Show that γ(S1S2) ≤ γ(S1)γ(S2).<br />
u y<br />
S2 S1<br />
<strong>Lecture</strong> 5 11<br />
<strong>EL2620</strong> 2011<br />
System Gain<br />
A system S is a map from L2 to L2: y = S(u).<br />
u y<br />
S<br />
The gain of S is defined as<br />
γ(S) = sup_{u∈L2} ‖y‖2/‖u‖2 = sup_{u∈L2} ‖S(u)‖2/‖u‖2<br />
Example: The gain of a scalar static system y(t) = αu(t) is<br />
γ(α) = sup_{u∈L2} ‖αu‖2/‖u‖2 = sup_{u∈L2} |α|‖u‖2/‖u‖2 = |α|<br />
<strong>Lecture</strong> 5 10<br />
<strong>EL2620</strong> 2011<br />
Gain of a Static <strong>Nonlinear</strong>ity<br />
Lemma: A static nonlinearity f such that |f(x)| ≤ K|x| and<br />
f(x∗) = Kx∗ has gain γ(f) = K.<br />
[Figure: f(x) below the line Kx, touching it at x∗; block diagram u(t) → f(·) → y(t)]<br />
Proof: ‖y‖2² = ∫0^∞ f²(u(t)) dt ≤ ∫0^∞ K²u²(t) dt = K²‖u‖2²,<br />
where u(t) = x∗, t ∈ (0, 1), gives equality, so<br />
γ(f) = sup_{u∈L2} ‖y‖2/‖u‖2 = K<br />
<strong>Lecture</strong> 5 12
<strong>EL2620</strong> 2011<br />
Gain of a Stable Linear System<br />
Lemma:<br />
γ(G) = sup_{u∈L2} ‖Gu‖2/‖u‖2 = sup_{ω∈(0,∞)} |G(iω)|<br />
[Bode magnitude plot: |G(iω)| with its peak marking γ(G)]<br />
Proof: Assume |G(iω)| ≤ K for ω ∈ (0, ∞) and |G(iω∗)| = K<br />
for some ω∗. Parseval’s theorem gives<br />
‖y‖2² = (1/2π) ∫−∞^∞ |Y(iω)|² dω = (1/2π) ∫−∞^∞ |G(iω)|² |U(iω)|² dω ≤ K²‖u‖2²<br />
Arbitrarily close to equality by choosing u(t) close to sin ω∗t.<br />
<strong>Lecture</strong> 5 13<br />
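The lemma can be illustrated by a frequency sweep. The sketch below (not from the notes) uses G(s) = 2/(s + 1)², the transfer function of the feedback example on a later slide; the supremum is attained at ω = 0:

```python
import numpy as np

# For a stable G(s), the L2 gain equals sup over w of |G(iw)|.
# Here G(s) = 2/(s+1)^2, so the sup is |G(0)| = 2.
w = np.linspace(0.0, 100.0, 100001)
G = 2.0 / (1.0 + 1j * w) ** 2
gain = np.max(np.abs(G))
print(gain)  # 2.0, attained at w = 0
```

This value γ(G) = 2 is exactly the one used with the Small Gain Theorem on the later slide.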
<strong>EL2620</strong> 2011<br />
The Small Gain Theorem<br />
r1<br />
e1<br />
S1<br />
S2<br />
e2<br />
r2<br />
Theorem: Assume S1 and S2 are BIBO stable. If γ(S1)γ(S2) < 1,<br />
then the closed-loop system is BIBO stable from (r1,r2) to (e1,e2)<br />
<strong>Lecture</strong> 5 15<br />
<strong>EL2620</strong> 2011<br />
BIBO Stability<br />
Definition:<br />
S is bounded-input bounded-output (BIBO) stable if γ(S) < ∞.<br />
u y<br />
S<br />
γ(S) = sup_{u∈L2} ‖S(u)‖2/‖u‖2<br />
Example: If ẋ = Ax is asymptotically stable then<br />
G(s) =C(sI − A) −1 B + D is BIBO stable.<br />
<strong>Lecture</strong> 5 14<br />
<strong>EL2620</strong> 2011<br />
Example—Static <strong>Nonlinear</strong> Feedback<br />
[Figure: feedback loop of G(s) and sector nonlinearity f(·), 0 ≤ f(y) ≤ Ky]<br />
G(s) = 2/(s + 1)², 0 ≤ f(y)/y ≤ K, ∀y ≠ 0, f(0) = 0<br />
γ(G) = 2 and γ(f) ≤ K.<br />
Small Gain Theorem gives BIBO stability for K ∈ (0, 1/2).<br />
<strong>Lecture</strong> 5 16
<strong>EL2620</strong> 2011<br />
“Proof” of the Small Gain Theorem<br />
‖e1‖2 ≤ ‖r1‖2 + γ(S2)[‖r2‖2 + γ(S1)‖e1‖2]<br />
gives<br />
‖e1‖2 ≤ (‖r1‖2 + γ(S2)‖r2‖2) / (1 − γ(S2)γ(S1))<br />
γ(S2)γ(S1) < 1, ‖r1‖2 < ∞, ‖r2‖2 < ∞ give ‖e1‖2 < ∞.<br />
Similarly we get<br />
‖e2‖2 ≤ (‖r2‖2 + γ(S1)‖r1‖2) / (1 − γ(S1)γ(S2))<br />
so also e2 is bounded.<br />
Note: Formal proof requires ‖ · ‖2e, see Khalil<br />
<strong>Lecture</strong> 5 17<br />
<strong>EL2620</strong> 2011<br />
The Nyquist Theorem<br />
[Figure: unity feedback loop with G(s) and the Nyquist curve of G]<br />
<strong>EL2620</strong> 2011<br />
Small Gain Theorem can be Conservative<br />
Let f(y) =Ky in the previous example. Then the Nyquist Theorem<br />
proves stability for all K ∈ [0, ∞), while the Small Gain Theorem<br />
only proves stability for K ∈ (0, 1/2).<br />
[Figure: feedback loop of G(s) with f(y) = Ky and the Nyquist curve G(iω)]<br />
<strong>Lecture</strong> 5 19<br />
Theorem: If G has no poles in the right half plane and the Nyquist<br />
curve G(iω), ω ∈ [0, ∞), does not encircle −1, then the<br />
closed-loop system is stable.<br />
<strong>Lecture</strong> 5 18<br />
<strong>EL2620</strong> 2011<br />
The Circle Criterion<br />
[Figure: feedback loop of G(s) and f(·); sector k1y ≤ f(y) ≤ k2y; Nyquist curve G(iω) and the circle through −1/k1 and −1/k2]<br />
Theorem: Assume that G(s) has no poles in the right half plane, and<br />
0 ≤ k1 ≤ f(y)/y ≤ k2, ∀y ≠ 0, f(0) = 0<br />
If the Nyquist curve of G(s) does not encircle or intersect the circle<br />
defined by the points −1/k1 and −1/k2, then the closed-loop<br />
system is BIBO stable.<br />
<strong>Lecture</strong> 5 20
<strong>EL2620</strong> 2011<br />
Example—Static <strong>Nonlinear</strong> Feedback (cont’d)<br />
[Figure: sector 0 ≤ f(y) ≤ Ky and Nyquist curve G(iω) staying to the right of −1/K]<br />
The “circle” is defined by −1/k1 = −∞ and −1/k2 = −1/K.<br />
Since<br />
min_ω Re G(iω) = −1/4<br />
the Circle Criterion gives that the system is BIBO stable if K ∈ (0, 4).<br />
<strong>Lecture</strong> 5 21<br />
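The value min Re G(iω) = −1/4 used in the example is easy to confirm with a frequency sweep (an illustrative sketch, not part of the notes):

```python
import numpy as np

# Circle criterion for G(s) = 2/(s+1)^2 with 0 <= f(y)/y <= K:
# the "circle" degenerates to the half plane Re z <= -1/K, so the
# condition is min over w of Re G(iw) > -1/K, i.e. K < -1/min Re G(iw).
w = np.linspace(0.0, 1000.0, 1000001)
G = 2.0 / (1.0 + 1j * w) ** 2
min_re = np.min(G.real)
K_max = -1.0 / min_re
print(min_re, K_max)  # approximately -0.25 and 4.0
```

The minimum occurs at ω = √3, and the resulting bound K < 4 is much less conservative than the small-gain bound K < 1/2.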
<strong>EL2620</strong> 2011<br />
[Block diagram: transformed loop with G̃(s) and −f̃(·); circle of radius R centered at −k, and the curve 1/G(iω)]<br />
Small Gain Theorem gives stability if |G̃(iω)| R < 1 for all ω<br />
<strong>Lecture</strong> 5 23<br />
<strong>EL2620</strong> 2011<br />
Proof of the Circle Criterion<br />
Let k = (k1 + k2)/2, f̃(y) = f(y) − ky, and r̃1 = r1 − kr2:<br />
|f̃(y)/y| ≤ (k2 − k1)/2 =: R, ∀y ≠ 0, f̃(0) = 0<br />
[Block diagrams: the original loop (G, −f) and the transformed loop (G̃, −f̃) with G̃ = G/(1 + kG)]<br />
<strong>Lecture</strong> 5 22<br />
<strong>EL2620</strong> 2011<br />
The curve G̃⁻¹(iω) and the circle {z ∈ C : |z + k| > R} mapped<br />
through z ↦ 1/z give the result:<br />
[Figure: the circle through −1/k1 and −1/k2 and the Nyquist curve G(iω)]<br />
Note that<br />
G̃ = G/(1 + kG)<br />
is stable since −1/k is inside the circle.<br />
Note that G(s) may have poles on the imaginary axis, e.g.,<br />
integrators are allowed<br />
<strong>Lecture</strong> 5 24
<strong>EL2620</strong> 2011<br />
Passivity and BIBO Stability<br />
The main result: Feedback interconnections of passive systems are<br />
passive, and BIBO stable (under some additional mild criteria)<br />
<strong>Lecture</strong> 5 25<br />
<strong>EL2620</strong> 2011<br />
Passive System<br />
u y<br />
S<br />
Definition: Consider signals u, y :[0,T] → R m . The system S is<br />
passive if<br />
〈y, u〉T ≥ 0, for all T > 0 and all u<br />
and strictly passive if there exists ε > 0 such that<br />
〈y, u〉T ≥ ε(|y|T² + |u|T²), for all T > 0 and all u<br />
Warning: There exist many other definitions for strictly passive<br />
<strong>Lecture</strong> 5 27<br />
<strong>EL2620</strong> 2011<br />
Scalar Product<br />
Scalar product for signals y and u:<br />
〈y, u〉T = ∫0^T yᵀ(t) u(t) dt<br />
If u and y are interpreted as vectors then 〈y, u〉T = |y|T |u|T cos φ<br />
|y|T = √〈y, y〉T - length of y, φ - angle between u and y<br />
Cauchy–Schwarz Inequality: 〈y, u〉T ≤ |y|T |u|T<br />
Example: u = sin t and y = cos t are orthogonal if T = kπ,<br />
because<br />
cos φ = 〈y, u〉T / (|y|T |u|T) = 0<br />
<strong>Lecture</strong> 5 26<br />
<strong>EL2620</strong> 2011<br />
2 minute exercise: Is the pure delay system y(t) =u(t − θ)<br />
passive? Consider for instance the input u(t) =sin ((π/θ)t).<br />
<strong>Lecture</strong> 5 28
<strong>EL2620</strong> 2011<br />
Example—Passive Electrical Components<br />
u(t) = Ri(t) : 〈u, i〉T = ∫0^T R i²(t) dt = R〈i, i〉T ≥ 0<br />
i = C du/dt : 〈u, i〉T = ∫0^T u(t) C (du/dt) dt = C u²(T)/2 ≥ 0<br />
u = L di/dt : 〈u, i〉T = ∫0^T L (di/dt) i(t) dt = L i²(T)/2 ≥ 0<br />
(assuming zero initial conditions u(0) = 0, i(0) = 0)<br />
<strong>Lecture</strong> 5 29<br />
<strong>EL2620</strong> 2011<br />
Passivity of Linear Systems<br />
Theorem: An asymptotically stable linear system G(s) is passive if<br />
and only if<br />
Re G(iω) ≥ 0, ∀ω > 0<br />
It is strictly passive if and only if there exists ɛ > 0 such that<br />
Re G(iω − ɛ) ≥ 0, ∀ω > 0<br />
Example:<br />
G(s) = 1/(s + 1) is strictly passive;<br />
G(s) = 1/s is passive but not strictly passive.<br />
[Figure: Nyquist curves G(iω) lying in the right half plane]<br />
<strong>Lecture</strong> 5 31<br />
<strong>EL2620</strong> 2011<br />
Feedback of Passive Systems is Passive<br />
r1 e1 y1<br />
S1<br />
−<br />
y2 e2 r2<br />
S2<br />
Lemma: If S1 and S2 are passive then the closed-loop system from<br />
(r1,r2) to (y1,y2) is also passive.<br />
Proof:<br />
〈y, e〉T = 〈y1,e1〉T + 〈y2,e2〉T = 〈y1,r1 − y2〉T + 〈y2,r2 + y1〉T<br />
= 〈y1,r1〉T + 〈y2,r2〉T = 〈y, r〉T<br />
Hence, 〈y, r〉T ≥ 0 if 〈y1,e1〉T ≥ 0 and 〈y2,e2〉T ≥ 0.<br />
<strong>Lecture</strong> 5 30<br />
<strong>EL2620</strong> 2011<br />
A Strictly Passive System Has Finite Gain<br />
u y<br />
S<br />
Lemma: If S is strictly passive then γ(S) = sup_{u∈L2} ‖y‖2/‖u‖2 < ∞.<br />
Proof:<br />
ε(|y|∞² + |u|∞²) ≤ 〈y, u〉∞ ≤ |y|∞ · |u|∞ = ‖y‖2 · ‖u‖2<br />
Hence, ε‖y‖2² ≤ ‖y‖2 · ‖u‖2, so<br />
‖y‖2 ≤ (1/ε) ‖u‖2<br />
<strong>Lecture</strong> 5 32<br />
<strong>EL2620</strong> 2011<br />
The Passivity Theorem<br />
r1 e1 y1<br />
S1<br />
−<br />
y2 e2 r2<br />
S2<br />
Theorem: If S1 is strictly passive and S2 is passive, then the<br />
closed-loop system is BIBO stable from r to y.<br />
<strong>Lecture</strong> 5 33<br />
<strong>EL2620</strong> 2011<br />
The Passivity Theorem is a<br />
“Small Phase Theorem”<br />
r1 e1<br />
−<br />
y1<br />
S1<br />
y2 e2 r2<br />
S2<br />
φ1<br />
φ2<br />
S2 passive ⇒ cos φ2 ≥ 0 ⇒ |φ2| ≤ π/2<br />
S1 strictly passive ⇒ cos φ1 > 0 ⇒ |φ1| < π/2<br />
<strong>Lecture</strong> 5 35<br />
<strong>EL2620</strong> 2011<br />
Proof<br />
S1 strictly passive and S2 passive give<br />
ε(|y1|T² + |e1|T²) ≤ 〈y1, e1〉T + 〈y2, e2〉T = 〈y, r〉T<br />
Therefore<br />
|y1|T² + 〈r1 − y2, r1 − y2〉T ≤ (1/ε) 〈y, r〉T<br />
or<br />
|y1|T² + |y2|T² − 2〈y2, r1〉T + |r1|T² ≤ (1/ε) 〈y, r〉T<br />
Hence<br />
|y|T² ≤ 2〈y2, r1〉T + (1/ε) 〈y, r〉T ≤ (2 + 1/ε) |y|T |r|T<br />
Let T →∞and the result follows.<br />
<strong>Lecture</strong> 5 34<br />
<strong>EL2620</strong> 2011<br />
−<br />
G(s)<br />
2 minute exercise: Apply the Passivity Theorem and compare it with<br />
the Nyquist Theorem. What about conservativeness? [Compare the<br />
discussion on the Small Gain Theorem.]<br />
<strong>Lecture</strong> 5 36
<strong>EL2620</strong> 2011<br />
Example—Gain Adaptation<br />
Applications in telecommunication channel estimation and in noise<br />
cancellation etc.<br />
u<br />
θ ∗<br />
Process<br />
G(s)<br />
y<br />
θ(t)<br />
G(s)<br />
ym<br />
Model<br />
Adaptation law:<br />
dθ/dt = −c u(t)[ym(t) − y(t)], c > 0.<br />
<strong>Lecture</strong> 5 37<br />
<strong>EL2620</strong> 2011<br />
Gain Adaptation is BIBO Stable<br />
(θ − θ ∗ )u ym − y<br />
G(s)<br />
−<br />
θ ∗ θ<br />
− c s<br />
u S<br />
S is passive (see exercises).<br />
If G(s) is strictly passive, the closed-loop system is BIBO stable<br />
<strong>Lecture</strong> 5 39<br />
<strong>EL2620</strong> 2011<br />
Gain Adaptation—Closed-Loop System<br />
u<br />
θ ∗<br />
G(s)<br />
θ(t) G(s)<br />
y<br />
−<br />
− c θ<br />
s<br />
ym<br />
<strong>Lecture</strong> 5 38<br />
<strong>EL2620</strong> 2011<br />
Simulation of Gain Adaptation<br />
Let G(s) = 1/(s + 1), c = 1, u = sin t, and θ(0) = 0.<br />
[Plots: y and ym; the estimate θ(t) converging]<br />
<strong>Lecture</strong> 5 40
<strong>EL2620</strong> 2011<br />
Storage Function<br />
Consider the nonlinear control system<br />
ẋ = f(x, u), y = h(x)<br />
A storage function is a C 1 function V : R n → R such that<br />
• V(0) = 0 and V(x) ≥ 0, ∀x ≠ 0<br />
• V̇(x) ≤ uᵀy, ∀x, u<br />
Remark:<br />
• V(x(T)) represents the stored energy in the system at time T<br />
• V(x(T)) ≤ ∫0^T y(t)u(t) dt + V(x(0)), ∀T > 0<br />
(stored energy at t = T ≤ absorbed energy + stored energy at t = 0)<br />
<strong>Lecture</strong> 5 41<br />
<strong>EL2620</strong> 2011<br />
Lyapunov vs. Passivity<br />
Storage function is a generalization of Lyapunov function<br />
Lyapunov idea: “Energy is decreasing”<br />
˙V ≤ 0<br />
Passivity idea: “Increase in stored energy ≤ Added energy”<br />
V̇ ≤ uᵀy<br />
<strong>Lecture</strong> 5 43<br />
<strong>EL2620</strong> 2011<br />
Storage Function and Passivity<br />
Lemma: If there exists a storage function V for a system<br />
ẋ = f(x, u), y = h(x)<br />
with x(0) = 0, then the system is passive.<br />
Proof: For all T > 0,<br />
〈y, u〉T = ∫0^T y(t)u(t) dt ≥ V(x(T)) − V(x(0)) = V(x(T)) ≥ 0<br />
<strong>Lecture</strong> 5 42<br />
<strong>EL2620</strong> 2011<br />
Example—KYP Lemma<br />
Consider an asymptotically stable linear system<br />
ẋ = Ax + Bu, y = Cx<br />
Assume there exist positive definite matrices P, Q such that<br />
AᵀP + PA = −Q, BᵀP = C<br />
Consider V = 0.5 xᵀPx. Then<br />
V̇ = 0.5(ẋᵀPx + xᵀPẋ) = 0.5 xᵀ(AᵀP + PA)x + u BᵀPx<br />
= −0.5 xᵀQx + uy < uy, x ≠ 0<br />
and hence the system is strictly passive. This fact is part of the<br />
Kalman-Yakubovich-Popov lemma.<br />
<strong>Lecture</strong> 5 44
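The computation on this slide can be checked numerically. The matrices below are illustrative choices (not from the lecture), constructed so that AᵀP + PA = −Q with Q positive definite and BᵀP = C:

```python
import numpy as np

# Numerical check of Vdot = -0.5 x'Qx + u*y < u*y for x != 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
P = np.array([[3.0, 1.0], [1.0, 1.0]])   # positive definite
Q = -(A.T @ P + P @ A)                   # = [[4, 2], [2, 4]], positive definite
C = B.T @ P                              # = [[1, 1]]

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    x = rng.standard_normal(2)
    u = float(rng.standard_normal())
    xdot = A @ x + B[:, 0] * u
    y = float(C @ x)
    Vdot = 0.5 * (xdot @ P @ x + x @ P @ xdot)
    ok = ok and abs(Vdot - (-0.5 * x @ Q @ x + u * y)) < 1e-9 and Vdot < u * y
print(ok)  # True
```

The check confirms both the algebraic identity for V̇ and the strict inequality V̇ < uy away from the origin.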
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• derive the gain of a system<br />
• analyze stability using<br />
– Small Gain Theorem<br />
– Circle Criterion<br />
– Passivity<br />
<strong>Lecture</strong> 5 45
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 6<br />
• Describing function analysis<br />
<strong>Lecture</strong> 6 1<br />
<strong>EL2620</strong> 2011<br />
Motivating Example<br />
r e u y<br />
G(s)<br />
−<br />
[Plot: sustained oscillation in y and u]<br />
G(s) = 4/(s(s + 1)²) and u = sat e give a stable oscillation.<br />
• How can the oscillation be predicted?<br />
<strong>Lecture</strong> 6 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Derive describing functions for static nonlinearities<br />
• Analyze existence and stability of periodic solutions by describing<br />
function analysis<br />
<strong>Lecture</strong> 6 2<br />
<strong>EL2620</strong> 2011<br />
A Frequency Response Approach<br />
Nyquist / Bode:<br />
A (linear) feedback system will have sustained oscillations<br />
(center) if the loop-gain is 1 at the frequency where the phase lag<br />
is −180 o<br />
But, can we talk about the frequency response, in terms of gain and<br />
phase lag, of a static nonlinearity?<br />
<strong>Lecture</strong> 6 4
<strong>EL2620</strong> 2011<br />
Fourier Series<br />
A periodic function u(t) = u(t + T) has a Fourier series expansion<br />
u(t) = a0/2 + Σ_{n=1}^∞ (an cos nωt + bn sin nωt)<br />
= a0/2 + Σ_{n=1}^∞ √(an² + bn²) sin[nωt + arctan(an/bn)]<br />
where ω = 2π/T and<br />
an(ω) = (2/T) ∫0^T u(t) cos nωt dt, bn(ω) = (2/T) ∫0^T u(t) sin nωt dt<br />
Note: Sometimes we make the change of variable t → φ/ω<br />
<strong>Lecture</strong> 6 5<br />
<strong>EL2620</strong> 2011<br />
Key Idea<br />
r e u y<br />
−<br />
N.L. G(s)<br />
e(t) = A sin ωt gives<br />
u(t) = Σ_{n=1}^∞ √(an² + bn²) sin[nωt + arctan(an/bn)]<br />
If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n = 1 suffices, so that<br />
y(t) ≈ |G(iω)| √(a1² + b1²) sin[ωt + arctan(a1/b1) + arg G(iω)]<br />
That is, we assume all higher harmonics are filtered out by G<br />
<strong>Lecture</strong> 6 7<br />
<strong>EL2620</strong> 2011<br />
The Fourier Coefficients are Optimal<br />
The finite expansion<br />
ûk(t) = a0/2 + Σ_{n=1}^k (an cos nωt + bn sin nωt)<br />
solves<br />
min_û (2/T) ∫0^T [u(t) − ûk(t)]² dt<br />
<strong>Lecture</strong> 6 6<br />
<strong>EL2620</strong> 2011<br />
Definition of Describing Function<br />
The describing function is<br />
N(A, ω) = (b1(ω) + i a1(ω)) / A<br />
[Block diagrams: e(t) → N.L. → u(t), approximated by e(t) → N(A, ω) → û1(t)]<br />
If G is low pass and a0 =0, then<br />
û1(t) =|N(A, ω)|A sin[ωt +argN(A, ω)]<br />
can be used instead of u(t) to analyze the system.<br />
Amplitude dependent gain and phase shift!<br />
<strong>Lecture</strong> 6 8
<strong>EL2620</strong> 2011<br />
Describing Function for a Relay<br />
[Figure: relay with output levels ±H; input e and square-wave output u over one period, φ = 2πt/T]<br />
a1 = (1/π) ∫0^{2π} u(φ) cos φ dφ = 0<br />
b1 = (1/π) ∫0^{2π} u(φ) sin φ dφ = (2/π) ∫0^π H sin φ dφ = 4H/π<br />
The describing function for a relay is thus<br />
N(A) = (b1(ω) + i a1(ω))/A = 4H/(πA)<br />
<strong>Lecture</strong> 6 9<br />
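The relay describing function can be verified by computing the first Fourier coefficients numerically (an illustrative sketch; H and A are arbitrary test values):

```python
import numpy as np

# Numerical describing function of a relay with output +-H for input
# e(phi) = A*sin(phi); compare with N(A) = 4H/(pi*A).
H, A = 2.0, 0.5
phi = np.linspace(0.0, 2.0 * np.pi, 200001)
u = H * np.sign(A * np.sin(phi))
d = phi[1] - phi[0]
a1 = np.sum(u * np.cos(phi)) * d / np.pi
b1 = np.sum(u * np.sin(phi)) * d / np.pi
N = (b1 + 1j * a1) / A
print(N, 4.0 * H / (np.pi * A))  # real parts agree, imaginary part ~0
```

Note that b1 = 4H/π does not depend on A: the amplitude dependence of N(A) comes entirely from the division by A.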
<strong>EL2620</strong> 2011<br />
Existence of Periodic Solutions<br />
[Block diagram: loop of f(·) and G(s); Nyquist plot showing G(iω) and the curve −1/N(A)]<br />
Proposal: sustained oscillations if loop gain 1 and phase lag −180°:<br />
G(iω)N(A) = −1 ⇔ G(iω) = −1/N(A)<br />
The intersections of the curves G(iω) and −1/N (A)<br />
give ω and A for a possible periodic solution.<br />
<strong>Lecture</strong> 6 11<br />
<strong>EL2620</strong> 2011<br />
Odd Static <strong>Nonlinear</strong>ities<br />
Assume f(·) and g(·) are odd (i.e. f(−e) =−f(e)) static<br />
nonlinearities with describing functions Nf and Ng. Then,<br />
• Im Nf(A, ω) =0<br />
• Nf(A, ω) =Nf(A)<br />
• Nαf(A) =αNf(A)<br />
• Nf+g(A) =Nf(A)+Ng(A)<br />
<strong>Lecture</strong> 6 10<br />
<strong>EL2620</strong> 2011<br />
Periodic Solutions in Relay System<br />
r e u y<br />
G(s)<br />
−<br />
[Nyquist plot: G(iω) and the curve −1/N(A) along the negative real axis]<br />
G(s) = 3/(s + 1)³ with feedback u = −sgn y<br />
No phase lag in f(·), arg G(iω) = −π for ω = √3 ≈ 1.7<br />
G(i√3) = −3/8 = −1/N(A) = −πA/4 ⇒ A = 12/(8π) ≈ 0.48<br />
<strong>Lecture</strong> 6 12<br />
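The prediction A ≈ 0.48, ω = √3 can be compared with a brute-force simulation of the relay loop. The sketch below (not from the notes) integrates a controllable-canonical realization of G(s) = 3/(s + 1)³ with forward Euler; step size and horizon are ad hoc choices:

```python
import numpy as np

# Relay feedback of G(s) = 3/(s+1)^3, u = -sgn(y):
# x1' = x2, x2' = x3, x3' = -x1 - 3*x2 - 3*x3 + u, y = 3*x1
dt, T_end = 1e-4, 60.0
n = int(T_end / dt)
x1, x2, x3 = 0.1, 0.0, 0.0   # nonzero start to kick off the oscillation
ys = np.empty(n)
for k in range(n):
    y = 3.0 * x1
    u = -1.0 if y > 0.0 else (1.0 if y < 0.0 else 0.0)
    x1, x2, x3 = (x1 + dt * x2,
                  x2 + dt * x3,
                  x3 + dt * (-x1 - 3.0 * x2 - 3.0 * x3 + u))
    ys[k] = y

tail = ys[n // 2:]                       # steady-state part of the response
amplitude = 0.5 * (tail.max() - tail.min())
ups = np.flatnonzero((tail[:-1] < 0.0) & (tail[1:] >= 0.0))
period = (ups[-1] - ups[0]) * dt / (len(ups) - 1)
print(amplitude, period)  # roughly 0.48 and 2*pi/sqrt(3), i.e. about 3.6
```

As the next slide notes, the agreement is good here because G filters out the higher harmonics of the square wave.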
<strong>EL2620</strong> 2011<br />
The prediction via the describing function agrees very well with the<br />
true oscillations:<br />
[Plot: relay input u and output y; y is close to sinusoidal]<br />
Note that G filters out almost all higher-order harmonics.<br />
<strong>Lecture</strong> 6 13<br />
<strong>EL2620</strong> 2011<br />
a1 = (1/π) ∫0^{2π} u(φ) cos φ dφ = 0<br />
b1 = (1/π) ∫0^{2π} u(φ) sin φ dφ = (4/π) ∫0^{π/2} u(φ) sin φ dφ<br />
= (4A/π) ∫0^{φ0} sin² φ dφ + (4D/π) ∫_{φ0}^{π/2} sin φ dφ<br />
= (A/π) (2φ0 + sin 2φ0)<br />
<strong>Lecture</strong> 6 15<br />
<strong>EL2620</strong> 2011<br />
Describing Function for a Saturation<br />
[Figure: saturation with breakpoints ±D and output levels ±H; input e and clipped output u over one period]<br />
Let e(t) = A sin ωt = A sin φ. First set H = D. Then for φ ∈ (0, π)<br />
u(φ) = A sin φ, φ ∈ (0, φ0) ∪ (π − φ0, π); u(φ) = D, φ ∈ (φ0, π − φ0)<br />
where φ0 = arcsin D/A.<br />
<strong>Lecture</strong> 6 14<br />
<strong>EL2620</strong> 2011<br />
Hence, if H = D, then N(A) = (1/π)(2φ0 + sin 2φ0)<br />
If H ≠ D, then the rule Nαf(A) = αNf(A) gives<br />
N(A) = (H/(Dπ))(2φ0 + sin 2φ0).<br />
[Plot: N(A) for H = D = 1, decreasing from 1 towards 0 as A grows]<br />
<strong>Lecture</strong> 6 16
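The saturation describing function derived above can be cross-checked against a direct numerical Fourier computation (illustrative sketch; H, D, A are arbitrary test values):

```python
import numpy as np

# Numerical check of N(A) = (H/(D*pi))*(2*phi0 + sin(2*phi0)),
# phi0 = arcsin(D/A), for a saturation with slope H/D and limits +-H.
H, D, A = 1.0, 1.0, 3.0
phi = np.linspace(0.0, 2.0 * np.pi, 400001)
u = (H / D) * np.clip(A * np.sin(phi), -D, D)   # saturated sinusoid
d = phi[1] - phi[0]
b1 = np.sum(u * np.sin(phi)) * d / np.pi
phi0 = np.arcsin(D / A)
N_formula = (H / (D * np.pi)) * (2.0 * phi0 + np.sin(2.0 * phi0))
print(b1 / A, N_formula)  # both ~0.416
```

The cosine coefficient a1 vanishes by symmetry, so b1/A is the whole describing function here.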
<strong>EL2620</strong> 2011<br />
5 minute exercise: What oscillation amplitude and frequency do the<br />
describing function analysis predict for the “Motivating Example”?<br />
<strong>Lecture</strong> 6 17<br />
<strong>EL2620</strong> 2011<br />
Stability of Periodic Solutions<br />
Ω<br />
G(Ω)<br />
−1/N (A)<br />
Assume that G(s) is stable.<br />
• If G(Ω) encircles the point −1/N (A), then the oscillation<br />
amplitude is increasing.<br />
• If G(Ω) does not encircle the point −1/N (A), then the<br />
oscillation amplitude is decreasing.<br />
<strong>Lecture</strong> 6 19<br />
<strong>EL2620</strong> 2011<br />
The Nyquist Theorem<br />
KG(s)<br />
−<br />
G(iω)<br />
−1/K<br />
Assume that G is stable, and K is a positive gain.<br />
• If G(iω) goes through the point −1/K the closed-loop system<br />
displays sustained oscillations<br />
• If G(iω) encircles the point −1/K, then the closed-loop system<br />
is unstable (growing amplitude oscillations).<br />
• If G(iω) does not encircle the point −1/K, then the closed-loop<br />
system is stable (damped oscillations)<br />
<strong>Lecture</strong> 6 18<br />
<strong>EL2620</strong> 2011<br />
An Unstable Periodic Solution<br />
G(Ω)<br />
−1/N (A)<br />
An intersection with amplitude A0 is unstable if A < A0 leads to decreasing and A > A0 leads to increasing amplitude.<br />
<strong>Lecture</strong> 6 20
<strong>EL2620</strong> 2011<br />
Stable Periodic Solution in Relay System<br />
r e u y<br />
G(s)<br />
−<br />
[Nyquist plot: G(iω) intersecting −1/N(A) twice]<br />
G(s) = (s + 10)²/(s + 1)³ with feedback u = −sgn y<br />
gives one stable and one unstable limit cycle. The leftmost<br />
intersection corresponds to the stable one.<br />
<strong>Lecture</strong> 6 21<br />
<strong>EL2620</strong> 2011<br />
Describing Function for a Quantizer<br />
[Figure: three-level quantizer with dead zone ±D; input e and output u over one period]<br />
Let e(t) = A sin ωt = A sin φ. Then for φ ∈ (0, π)<br />
u(φ) = 0, φ ∈ (0, φ0); u(φ) = 1, φ ∈ (φ0, π − φ0)<br />
where φ0 = arcsin D/A.<br />
<strong>Lecture</strong> 6 23<br />
<strong>EL2620</strong> 2011<br />
Automatic Tuning of PID <strong>Control</strong>ler<br />
Period and amplitude of relay feedback limit cycle can be used for<br />
autotuning:<br />
[Block diagram: relay feedback around the process with a switch to the PID controller; relay limit cycle in u and y with amplitude A and period T]<br />
<strong>Lecture</strong> 6 22<br />
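The relay experiment is often combined with a Ziegler–Nichols style rule: the describing function gives the ultimate gain Ku ≈ 4H/(πA), and the limit-cycle period serves as the ultimate period Tu. A minimal sketch, with illustrative measured values; the tuning table is the classic ZN ultimate-sensitivity rule, not from these notes:

```python
import math

# Relay autotuning sketch: estimate the ultimate gain from the relay
# describing function, then apply a Ziegler-Nichols style PID rule.
H = 1.0           # relay amplitude
A = 0.48          # measured limit-cycle amplitude (illustrative)
Tu = 3.6          # measured limit-cycle period (illustrative)

Ku = 4.0 * H / (math.pi * A)               # ultimate gain estimate
K, Ti, Td = 0.6 * Ku, 0.5 * Tu, 0.125 * Tu  # classic ZN PID table
print(Ku, K, Ti, Td)
```

The point of the relay is that it drives the loop to the critical point automatically, without the user having to increase a proportional gain to the stability limit.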
<strong>EL2620</strong> 2011<br />
a1 = (1/π) ∫0^{2π} u(φ) cos φ dφ = 0<br />
b1 = (1/π) ∫0^{2π} u(φ) sin φ dφ = (4/π) ∫_{φ0}^{π/2} sin φ dφ<br />
= (4/π) cos φ0 = (4/π) √(1 − D²/A²)<br />
N(A) = 0, A < D; N(A) = (4/(πA)) √(1 − D²/A²), A ≥ D<br />
<strong>Lecture</strong> 6 24
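The quantizer describing function can be checked numerically in the same way as for the relay (illustrative sketch; D and A are arbitrary test values):

```python
import numpy as np

# Numerical check of N(A) = (4/(pi*A))*sqrt(1 - D^2/A^2) for A >= D,
# for a three-level quantizer with dead zone +-D and output levels -1/0/+1.
D, A = 1.0, 5.0
phi = np.linspace(0.0, 2.0 * np.pi, 400001)
e = A * np.sin(phi)
u = np.sign(e) * (np.abs(e) >= D)       # quantizer output
d = phi[1] - phi[0]
b1 = np.sum(u * np.sin(phi)) * d / np.pi
N_formula = (4.0 / (np.pi * A)) * np.sqrt(1.0 - (D / A) ** 2)
print(b1 / A, N_formula)  # both ~0.25
```

For large A the square-root factor approaches 1, giving the 4/(πA) ≈ 1.3/A tail noted on the next slide.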
<strong>EL2620</strong> 2011<br />
Plot of Describing Function Quantizer<br />
[Plot: N(A) for D = 1, zero for A < 1 and decaying for large A]<br />
Notice that N(A) ≈ 1.3/A for large amplitudes<br />
<strong>Lecture</strong> 6 25<br />
<strong>EL2620</strong> 2011<br />
Accuracy of Describing Function Analysis<br />
<strong>Control</strong> loop with friction F = sgn y:<br />
y ref<br />
F<br />
Friction<br />
_<br />
_<br />
C<br />
G<br />
y<br />
Corresponds to<br />
G/(1 + GC) = s(s − z)/(s³ + 2s² + 2s + 1)<br />
with feedback u = −sgn y<br />
The oscillation depends on the zero at s = z.<br />
<strong>Lecture</strong> 6 27<br />
<strong>EL2620</strong> 2011<br />
Describing Function Pitfalls<br />
Describing function analysis can give erroneous results.<br />
• A DF may predict a limit cycle even if one does not exist.<br />
• A limit cycle may exist even if the DF does not predict it.<br />
• The predicted amplitude and frequency are only approximations<br />
and can be far from the true values.<br />
<strong>Lecture</strong> 6 26<br />
<strong>EL2620</strong> 2011<br />
[Plots: simulated y(t) and phase-plane limit cycles for z = 1/3 and z = 4/3]<br />
DF gives period times and amplitudes (T,A)=(11.4, 1.00) and<br />
(17.3, 0.23), respectively.<br />
Accurate results only if y is close to sinusoidal!<br />
<strong>Lecture</strong> 6 28
<strong>EL2620</strong> 2011<br />
2 minute exercise: What is N(A) for f(x) = x²?<br />
<strong>Lecture</strong> 6 29<br />
<strong>EL2620</strong> 2011<br />
Analysis of Oscillations—A Summary<br />
Time-domain:<br />
• Poincaré maps and Lyapunov functions<br />
• Rigorous results but only for simple examples<br />
• Hard to use for large problems<br />
Frequency-domain:<br />
• Describing function analysis<br />
• Approximate results<br />
• Powerful graphical methods<br />
<strong>Lecture</strong> 6 31<br />
<strong>EL2620</strong> 2011<br />
Harmonic Balance<br />
e(t) =A sin ωt u(t)<br />
f(·)<br />
A few more Fourier coefficients in the truncation<br />
ûk(t) = a0/2 + Σ_{n=1}^k (an cos nωt + bn sin nωt)<br />
may give a much better result. The describing function corresponds to<br />
k = 1 and a0 = 0.<br />
Example: f(x) = x² with e(t) = sin ωt gives u(t) = (1 − cos 2ωt)/2. Hence by<br />
considering a0 = 1 and a2 = −1/2 we get the exact result.<br />
<strong>Lecture</strong> 6 30<br />
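The Fourier coefficients quoted in the example (for e(t) = sin ωt, i.e. A = 1) can be confirmed numerically (illustrative sketch):

```python
import numpy as np

# Fourier coefficients of u = e^2 for e(phi) = sin(phi):
# u = (1 - cos(2*phi))/2, so a0 = 1, a2 = -1/2, and a1 = b1 = 0
# (the k = 1, a0 = 0 describing function sees nothing at all here).
phi = np.linspace(0.0, 2.0 * np.pi, 400001)
u = np.sin(phi) ** 2
d = phi[1] - phi[0]
a0 = np.sum(u) * d / np.pi
a1 = np.sum(u * np.cos(phi)) * d / np.pi
b1 = np.sum(u * np.sin(phi)) * d / np.pi
a2 = np.sum(u * np.cos(2.0 * phi)) * d / np.pi
print(a0, a1, b1, a2)  # ~1, 0, 0, -0.5
```

This illustrates why even nonlinearities need the extra terms a0 and a2 of the harmonic-balance truncation: their first harmonic vanishes.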
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Derive describing functions for static nonlinearities<br />
• Analyze existence and stability of periodic solutions by describing<br />
function analysis<br />
<strong>Lecture</strong> 6 32
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 7<br />
• Compensation for saturation (anti-windup)<br />
• Friction models<br />
• Compensation for friction<br />
<strong>Lecture</strong> 7 1<br />
<strong>EL2620</strong> 2011<br />
The Problem with Saturating Actuator<br />
r e v<br />
PID<br />
u<br />
G<br />
y<br />
• The feedback path is broken when u saturates ⇒ open-loop<br />
behavior!<br />
• Leads to problems when the system and/or the controller is unstable<br />
– Example: I-part in PID<br />
Recall: C_PID(s) = K (1 + 1/(Ti s) + Td s)<br />
<strong>Lecture</strong> 7 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to analyze and design<br />
• Anti-windup for PID and state-space controllers<br />
• Compensation for friction<br />
<strong>Lecture</strong> 7 2<br />
<strong>EL2620</strong> 2011<br />
Example—Windup in PID <strong>Control</strong>ler<br />
2<br />
1<br />
0<br />
0.1<br />
0 20 40 60 80<br />
0<br />
u<br />
y<br />
−0.1<br />
0 20 40 60 80<br />
Time<br />
PID controller without (dashed) and with (solid) anti-windup<br />
<strong>Lecture</strong> 7 4
<strong>EL2620</strong> 2011<br />
Anti-Windup for PID <strong>Control</strong>ler<br />
Anti-windup (a) with actuator output available and (b) without<br />
[Block diagrams: PID with back-calculation anti-windup, (a) with the actuator output available, (b) with an actuator model; the tracking error u − v is fed back to the integrator through the gain 1/Tt]<br />
<strong>Lecture</strong> 7 5<br />
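The back-calculation idea can be sketched in a few lines of simulation code. The example below is illustrative, not from the notes: a PI controller on an integrating process ẏ = sat(u) with a tight saturation limit, comparing the setpoint-step overshoot with and without the tracking term (u − v)/Tt:

```python
# PI control of an integrating process dy/dt = sat(u), with and without
# back-calculation anti-windup. All numbers are illustrative choices.
def simulate(anti_windup, K=1.0, Ti=1.0, Tt=1.0, umax=0.1,
             r=1.0, dt=1e-3, T=60.0):
    y, I, peak = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - y
        v = K * e + I                     # controller output before actuator
        u = max(-umax, min(umax, v))      # saturating actuator
        Idot = K * e / Ti
        if anti_windup:
            Idot += (u - v) / Tt          # tracking (back-calculation) term
        I += dt * Idot
        y += dt * u                       # process: dy/dt = sat(u)
        peak = max(peak, y)
    return peak

peak_windup = simulate(False)
peak_aw = simulate(True)
print(peak_windup, peak_aw)  # the anti-windup run overshoots far less
```

Without the tracking term the integral state keeps growing while u is pinned at the limit, and the output overshoots heavily before the integrator unwinds.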
<strong>EL2620</strong> 2011<br />
Anti-Windup for Observer-Based<br />
State Feedback <strong>Control</strong>ler<br />
[Block diagram: observer and state feedback L with actuator saturation; the observer is driven by sat(v)]<br />
ẋ̂ = Ax̂ + B sat(v) + K(y − Cx̂)<br />
v = L(xm − x̂)<br />
x̂ is the estimate of the process state, xm the desired (model) state.<br />
An actuator model is needed if sat(v) is not measurable.<br />
<strong>Lecture</strong> 7 7<br />
<strong>EL2620</strong> 2011<br />
Anti-Windup is Based on Tracking<br />
When the control signal saturates, the integration state in the<br />
controller tracks the proper state<br />
The tracking time Tt is the design parameter of the anti-windup<br />
Common choices of Tt:<br />
• Tt = Ti<br />
• Tt = √ TiTd<br />
Remark: If 0
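The tracking idea is easy to check in simulation. A minimal sketch (all numbers are illustrative, not from the lecture): PI control of a hypothetical first-order plant ẋ = −0.1x + u with input saturation, with and without back-calculation through the tracking time Tt:

```python
def step_response_peak(Tt=None, K=1.0, Ti=2.0, umax=0.2, dt=0.01, T=40.0):
    """PI control of dx/dt = -0.1 x + u with input saturation.
    Tt (tracking time) enables back-calculation anti-windup."""
    x, I = 0.0, 0.0
    peak = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - x                        # unit step reference
        v = K * e + I                      # unsaturated PI output
        u = min(max(v, -umax), umax)       # actuator saturation
        dI = K / Ti * e
        if Tt is not None:
            dI += (u - v) / Tt             # tracking: drive I toward consistency with u
        I += dt * dI
        x += dt * (-0.1 * x + u)
        peak = max(peak, x)
    return peak

peak_windup = step_response_peak(Tt=None)  # large overshoot from integrator windup
peak_aw = step_response_peak(Tt=2.0)       # tracking anti-windup with Tt = Ti
```

During saturation the correction term (u − v)/Tt keeps the integral state close to a value consistent with the saturated output, which removes most of the overshoot.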
<strong>EL2620</strong> 2011<br />
Mimic the observer-based controller:<br />
ẋc = Fxc + Gy + K(u − Cxc − Dy) = (F − KC)xc + (G − KD)y + Ku<br />
Choose K such that F0 = F − KC has desired (stable)<br />
eigenvalues. Then use the controller<br />
ẋc = F0 xc + G0 y + Ku<br />
u = sat(Cxc + Dy)<br />
where G0 = G − KD.<br />
<strong>Lecture</strong> 7 9<br />
<strong>EL2620</strong> 2011<br />
<strong>Control</strong>lers with ”Stable” Zeros<br />
Most controllers are minimum phase, i.e. have zeros strictly in LHP<br />
ẋc = Fxc + Gy<br />
u = Cxc + Dy<br />
Setting u = 0 gives the zero dynamics<br />
ẋc = (F − GC/D) xc<br />
y = −(C/D) xc<br />
Thus, choose the ”observer” gain<br />
K = G/D ⇒ F − KC = F − GC/D<br />
and the eigenvalues of the ”observer”-based controller become equal<br />
to the zeros of F(s) when u saturates.<br />
Note that this implies G − KD = 0 in the figure on the previous<br />
slide; we thus obtain P-feedback with gain D under saturation.<br />
<strong>Lecture</strong> 7 11<br />
<strong>EL2620</strong> 2011<br />
State-space controller without and with anti-windup:<br />
[Block diagrams: the state-space controller ẋc = Fxc + Gy, u = Cxc + Dy, and its anti-windup version ẋc = (F − KC)xc + (G − KD)y + Ku with u = sat(v), v = Cxc + Dy]<br />
<strong>Lecture</strong> 7 10<br />
<strong>EL2620</strong> 2011<br />
<strong>Control</strong>ler F (s) with ”Stable” Zeros<br />
Let D =lims→∞ F (s) and consider the feedback implementation<br />
[Block diagram: u = sat(D·e) with e = y − (1/F(s) − 1/D)·u fed back; without saturation the map from y to u is exactly F(s)]<br />
It is easy to show that transfer function from y to u with no saturation<br />
equals F (s)!<br />
If the transfer function (1/F (s) − 1/D) in the feedback loop is<br />
stable (stable zeros) ⇒ No stability problems in case of saturation<br />
<strong>Lecture</strong> 7 12
<strong>EL2620</strong> 2011<br />
Internal Model <strong>Control</strong> (IMC)<br />
IMC: apply feedback only when system G and model Ĝ differ!<br />
[Block diagram: IMC — the controller Q(s) drives both the plant G(s) and the model Ĝ(s); the model error y − ŷ is fed back to the reference]<br />
Assume G stable. Note: feedback from the model error y − ŷ.<br />
Design: assume Ĝ ≈ G and choose Q stable with Q ≈ G⁻¹.<br />
<strong>Lecture</strong> 7 13<br />
<strong>EL2620</strong> 2011<br />
IMC with Static <strong>Nonlinear</strong>ity<br />
Include nonlinearity in model<br />
[Block diagram: IMC with the static nonlinearity (saturation) included in the model Ĝ]<br />
Choose Q ≈ G⁻¹.<br />
<strong>Lecture</strong> 7 15<br />
<strong>EL2620</strong> 2011<br />
Example<br />
Ĝ(s) = 1/(T1 s + 1)<br />
Choose<br />
Q(s) = (T1 s + 1)/(τ s + 1), τ < T1<br />
<strong>EL2620</strong> 2011<br />
Other Anti-Windup Solutions<br />
Solutions above are all based on tracking.<br />
Other solutions include:<br />
• Tune controller to avoid saturation<br />
• Don’t update controller states at saturation<br />
• Conditionally reset integration state to zero<br />
• Apply optimal control theory (<strong>Lecture</strong> 12)<br />
<strong>Lecture</strong> 7 17<br />
<strong>EL2620</strong> 2011<br />
Friction<br />
Friction is present almost everywhere<br />
• Often bad:<br />
– Friction in valves and other actuators<br />
• Sometimes good:<br />
– Friction in brakes<br />
• Sometimes too small:<br />
– Earthquakes<br />
Problems:<br />
• How to model friction?<br />
• How to compensate for friction?<br />
• How to detect friction in control loops?<br />
<strong>Lecture</strong> 7 19<br />
<strong>EL2620</strong> 2011<br />
Bumpless Transfer<br />
Another application of the tracking idea is in the switching between<br />
automatic and manual control modes.<br />
PID with anti-windup and bumpless transfer:<br />
[Block diagram: PID with anti-windup and bumpless transfer. In manual mode (M) the integrator tracks the manually commanded increments uc/Tm; in automatic mode (A) the PD part and the integrator, with tracking gain 1/Tr, generate u.]<br />
Note the incremental form of the manual control mode (u̇ ≈ uc/Tm)<br />
<strong>Lecture</strong> 7 18<br />
<strong>EL2620</strong> 2011<br />
Stick-Slip Motion<br />
[Figure: a mass at position y is pulled via a spring from position x over a surface with friction force F_f; plots of x and y, velocity ẏ, and the forces F_p, F_f vs. time 0–20 show stick-slip motion]<br />
<strong>Lecture</strong> 7 20<br />
<strong>EL2620</strong> 2011<br />
Position <strong>Control</strong> of Servo with Friction<br />
[Block diagram: PID position control of a servo mass m (1/(ms) → v → 1/s → x) with friction force F acting on the mass; three unlabeled plots vs. time 0–100]<br />
<strong>Lecture</strong> 7 21<br />
<strong>EL2620</strong> 2011<br />
Friction Modeling<br />
<strong>Lecture</strong> 7 23<br />
<strong>EL2620</strong> 2011<br />
5 minute exercise: Which are the signals in the previous plots?<br />
<strong>Lecture</strong> 7 22<br />
<strong>EL2620</strong> 2011<br />
Stribeck Effect<br />
Friction increases with decreasing velocity (for low velocities)<br />
Stribeck (1902)<br />
[Figure: measured steady-state friction vs. velocity for Joint 1; friction −200 to 200 Nm over velocity −0.04 to 0.04 rad/sec, showing the Stribeck effect]<br />
<strong>Lecture</strong> 7 24
<strong>EL2620</strong> 2011<br />
Classical Friction Models<br />
Advanced models capture various friction phenomena better<br />
<strong>Lecture</strong> 7 25<br />
<strong>EL2620</strong> 2011<br />
Integral Action<br />
• Integral action compensates for any constant external disturbance<br />
• Works if the friction force changes slowly (v(t) ≈ const)<br />
• If the friction force changes quickly, then large integral action<br />
(small Ti) is necessary. May lead to stability problems<br />
[Block diagram: PID control of the servo 1/(ms) → 1/s with friction force F]<br />
<strong>Lecture</strong> 7 27<br />
<strong>EL2620</strong> 2011<br />
Friction Compensation<br />
• Lubrication<br />
• Integral action<br />
• Dither signal<br />
• Model-based friction compensation<br />
• Adaptive friction compensation<br />
• The Knocker<br />
<strong>Lecture</strong> 7 26<br />
<strong>EL2620</strong> 2011<br />
Modified Integral Action<br />
Modify the integral part to I = (K/Ti) ∫ ê(τ) dτ, where<br />
[Figure: dead-zone characteristic — ê(t) equals e(t) outside a band of width η and is zero inside it]<br />
Advantage: avoids that a small static error introduces an oscillation<br />
Disadvantage: the error won’t go to zero<br />
<strong>Lecture</strong> 7 28
<strong>EL2620</strong> 2011<br />
Dither Signal<br />
Avoid sticking at v = 0 (where the friction is high)<br />
by adding a high-frequency mechanical vibration (dither)<br />
[Block diagram: PID control of 1/(ms) → 1/s with friction F; a dither signal is added to u]<br />
Cf., mechanical maze puzzle<br />
(labyrintspel)<br />
<strong>Lecture</strong> 7 29<br />
<strong>EL2620</strong> 2011<br />
Adaptive Friction Compensation<br />
[Block diagram: the estimate F̂ from a friction estimator is added to u_PID before the process 1/(ms) → 1/s with friction force F]<br />
Coulomb friction model: F = a sgn v<br />
Friction estimator:<br />
ż = k u_PID sgn v<br />
â = z − km|v|<br />
F̂ = â sgn v<br />
<strong>Lecture</strong> 7 31<br />
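A quick sanity check of the estimator (illustrative parameters, not from the lecture): with a simple P-law chosen so that v stays positive, the estimate â should converge to the true Coulomb level a, as the convergence result de/dt = −ke predicts.

```python
import numpy as np

# Illustrative parameters: unit mass, Coulomb level a, adaptation gain k,
# and a hypothetical P-law u_PID = 2 (v_ref - v) with v_ref = 1.
m, a, k = 1.0, 0.5, 2.0
dt = 1e-3
v, z = 0.1, 0.0                      # start with v > 0 so sgn(v) = 1 throughout

for _ in range(5000):                # 5 s of simulated time
    u_pid = 2.0 * (1.0 - v)
    a_hat = z - k * m * abs(v)       # estimate of the Coulomb level
    u = u_pid + a_hat * np.sign(v)   # friction compensation u = u_PID + F_hat
    z += dt * k * u_pid * np.sign(v)
    v += dt * (u - a * np.sign(v)) / m

a_hat = z - k * m * abs(v)
```

Since the compensated dynamics give v̇ = u_PID − (a − â) sgn v, the velocity also settles at the reference once â has converged.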
<strong>EL2620</strong> 2011<br />
Model-Based Friction Compensation<br />
For a process with friction F:<br />
mẍ = u − F<br />
use control signal<br />
u = u PID + ˆF<br />
where u PID is the regular control signal and ˆF an estimate of F .<br />
Possible if:<br />
• An estimate F̂ ≈ F is available<br />
• u and F apply at the same point<br />
<strong>Lecture</strong> 7 30<br />
<strong>EL2620</strong> 2011<br />
Adaptation converges: e = a − â → 0 as t →∞<br />
Proof:<br />
de/dt = −dâ/dt = −dz/dt + km (d/dt)|v|<br />
= −k u_PID sgn v + km v̇ sgn v<br />
= −k sgn v (u_PID − m v̇)<br />
= −k sgn v (F − F̂)<br />
= −k(a − â) = −ke<br />
Remark: Careful with (d/dt)|v| at v = 0.<br />
<strong>Lecture</strong> 7 32
<strong>EL2620</strong> 2011<br />
[Plots vs. time 0–8: velocity v with reference v_ref and control signal u, for a P-controller, a PI-controller, and a P-controller with adaptive friction compensation]<br />
<strong>Lecture</strong> 7 33<br />
<strong>EL2620</strong> 2011<br />
Detection of Friction in <strong>Control</strong> Loops<br />
• Friction is due to wear and increases with time<br />
• Q: When should valves be maintained?<br />
• Idea: Monitor loops automatically and estimate friction<br />
[Plots: process output and control signal from an industrial loop, showing oscillations typical of valve friction]<br />
<strong>EL2620</strong> 2011<br />
The Knocker<br />
Coulomb friction compensation and square wave dither<br />
[Plot: typical control signal u with added square-wave pulses (the knocker), time 0–10]<br />
Hägglund: Patent and Innovation Cup winner<br />
<strong>Lecture</strong> 7 34<br />
[Plots: control signal u and output y vs. time for several oscillating loops, used to detect friction-induced oscillations]<br />
Horch: PhD thesis (2000) and patent<br />
<strong>Lecture</strong> 7 35<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to analyze and design<br />
• Anti-windup for PID, state-space, and polynomial controllers<br />
• Compensation for friction<br />
<strong>Lecture</strong> 7 36
<strong>EL2620</strong> 2011<br />
Next <strong>Lecture</strong><br />
• Backlash<br />
• Quantization<br />
<strong>Lecture</strong> 7 37
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 8<br />
• Backlash<br />
• Quantization<br />
<strong>Lecture</strong> 8 1<br />
<strong>EL2620</strong> 2011<br />
Linear and Angular Backlash<br />
[Figures: linear backlash — force F_in at x_in drives F_out at x_out through a gap of total width 2D; angular backlash — torque T_in at angle θ_in drives T_out at θ_out]<br />
<strong>Lecture</strong> 8 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to analyze and design for<br />
• Backlash<br />
• Quantization<br />
<strong>Lecture</strong> 8 2<br />
<strong>EL2620</strong> 2011<br />
Backlash<br />
Backlash (glapp) is<br />
• present in most mechanical and hydraulic systems<br />
• increasing with wear<br />
• necessary for a gearbox to work at high temperatures<br />
• bad for control performance<br />
• sometimes inducing oscillations<br />
<strong>Lecture</strong> 8 4
<strong>EL2620</strong> 2011<br />
Backlash Model<br />
ẋout = { ẋin, in contact;  0, otherwise }<br />
where “in contact” means |xout − xin| = D and ẋin (xin − xout) > 0.<br />
[Figure: input–output map of backlash — a band of width 2D around the line xout = xin (θout = θin)]<br />
• Multivalued output; the current output depends on the history. Thus,<br />
backlash is a dynamic phenomenon.<br />
<strong>Lecture</strong> 8 5<br />
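The model above is a few lines of code. A sketch (our own helper function, not from the lecture) driving the play operator with a ramp up and down reproduces the hysteresis band of width 2D:

```python
import numpy as np

def backlash(x_out, x_in, D):
    """Play operator: the output moves only while a face of the gap is in contact."""
    if x_in - x_out > D:       # pushing on the upper face
        return x_in - D
    if x_in - x_out < -D:      # pushing on the lower face
        return x_in + D
    return x_out               # inside the gap: output frozen

D, x_out = 0.2, 0.0
for x_in in np.linspace(0.0, 1.0, 101):    # ramp input up
    x_out = backlash(x_out, x_in, D)
x_top = x_out                               # lags the input by the half-gap D
for x_in in np.linspace(1.0, 0.0, 101):    # ramp input down
    x_out = backlash(x_out, x_in, D)
x_bottom = x_out                            # lags by D on the other side
```

The output ends at 0.8 after the up-ramp and at 0.2 after the down-ramp, i.e. it depends on the history of the input, which is exactly the multivalued behavior noted above.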
<strong>EL2620</strong> 2011<br />
Effects of Backlash<br />
P-control of motor angle with gearbox having backlash with D =0.2<br />
[Block diagram: θ_ref → K → motor dynamics 1/(1+sT) → θ̇in → 1/s → θin → backlash → θout, with feedback from θout]<br />
<strong>Lecture</strong> 8 7<br />
<strong>EL2620</strong> 2011<br />
Alternative Model<br />
[Figure: force (torque) vs. relative displacement xin − xout (θin − θout), zero inside the gap of width D — a dead-zone spring model]<br />
Not equivalent to “Backlash Model”<br />
<strong>Lecture</strong> 8 6<br />
<strong>EL2620</strong> 2011<br />
[Step responses: without backlash (left) and with backlash (right), for K = 0.25, 1, 4; time 0–40]<br />
• Oscillations for K = 4 but not for K = 0.25 or K = 1. Why?<br />
• Note that the amplitude decreases with decreasing D,<br />
but never vanishes<br />
<strong>Lecture</strong> 8 8
<strong>EL2620</strong> 2011<br />
Describing Function for Backlash<br />
If A < D, the input never traverses the gap, so the output stays constant and N(A) = 0.<br />
<strong>EL2620</strong> 2011<br />
Rewrite the system:<br />
[Block diagram: the loop redrawn with a block “BL” mapping θ̇in to θ̇out, in series with G(s)]<br />
The block “BL” satisfies<br />
θ̇out = { θ̇in, in contact;  0, otherwise }<br />
<strong>Lecture</strong> 8 13<br />
<strong>EL2620</strong> 2011<br />
Backlash Compensation<br />
• Mechanical solutions<br />
• Deadzone<br />
• Linear controller design<br />
• Backlash inverse<br />
<strong>Lecture</strong> 8 15<br />
<strong>EL2620</strong> 2011<br />
Homework 2<br />
Analyze this backlash system with input–output stability results:<br />
[Block diagram: feedback interconnection of the block BL (θ̇in ↦ θ̇out) and G(s)]<br />
Passivity Theorem: BL is passive<br />
Small Gain Theorem: BL has gain γ(BL) = 1<br />
Circle Criterion: BL is contained in the sector [0, 1]<br />
<strong>Lecture</strong> 8 14<br />
<strong>EL2620</strong> 2011<br />
Linear <strong>Control</strong>ler Design<br />
Introduce phase-lead compensation:<br />
[Block diagram: θ_ref → K(1+sT2)/(1+sT1) → motor 1/(1+sT) → θ̇in → 1/s → θin → backlash → θout, with feedback from θout]<br />
<strong>Lecture</strong> 8 16
<strong>EL2620</strong> 2011<br />
F(s) = K (1 + sT2)/(1 + sT1) with T1 = 0.5, T2 = 2.0:<br />
[Nyquist diagrams with and without the lead filter, and time responses of y and u with and without the filter]<br />
Oscillation removed!<br />
<strong>Lecture</strong> 8 17<br />
<strong>EL2620</strong> 2011<br />
Backlash Inverse<br />
xin(t) = { u(t) + D̂, if u(t) > u(t−);  u(t) − D̂, if u(t) < u(t−);  unchanged otherwise }<br />
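A sketch of backlash-inverse compensation (assuming perfect knowledge D̂ = D and an idealized, instantaneous actuator; the helper function is our own). Pre-loading the commanded position by ±D̂ according to the direction of motion makes the compensated chain reproduce u exactly:

```python
import numpy as np

D = 0.2
D_hat = 0.2        # assume the gap is known exactly

def backlash(x_out, x_in):
    """Play operator with half-gap D."""
    if x_in - x_out > D:
        return x_in - D
    if x_in - x_out < -D:
        return x_in + D
    return x_out

u_prev, x_in, x_out = 0.0, 0.0, 0.0
max_err = 0.0
for t in np.linspace(0.01, 10.0, 1000):
    u = np.sin(t)                      # desired output trajectory
    if u > u_prev:
        x_in = u + D_hat               # pre-load the upper face of the gap
    elif u < u_prev:
        x_in = u - D_hat               # pre-load the lower face
    x_out = backlash(x_out, x_in)
    max_err = max(max_err, abs(x_out - u))
    u_prev = u
```

With D̂ ≠ D the cancellation is only partial, which is what the under- and over-compensation examples on the following slides illustrate.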
<strong>EL2620</strong> 2011<br />
Example—Under-Compensation<br />
[Plots: output and control signal with backlash-inverse compensation using D̂ < D; time 0–80]<br />
<strong>Lecture</strong> 8 21<br />
<strong>EL2620</strong> 2011<br />
Quantization<br />
• What precision is needed in A/D and D/A converters? (8–14 bits?)<br />
• What precision is needed in computations? (8–64 bits?)<br />
[Block diagram: u → D/A → G → A/D → y; quantizer staircase with step ∆ and half-step ∆/2]<br />
• Quantization in A/D and D/A converters<br />
• Quantization of parameters<br />
• Roundoff, overflow, underflow in computations<br />
<strong>Lecture</strong> 8 23<br />
<strong>EL2620</strong> 2011<br />
Example—Over-Compensation<br />
[Plots: output and control signal with backlash-inverse compensation using D̂ > D; time 0–5]<br />
<strong>Lecture</strong> 8 22<br />
<strong>EL2620</strong> 2011<br />
Linear Model of Quantization<br />
Model the quantization error as a uniformly distributed stochastic signal e,<br />
independent of u, with<br />
Var(e) = ∫_{−∞}^{∞} e² f_e(e) de = ∫_{−∆/2}^{∆/2} (e²/∆) de = ∆²/12<br />
[Block diagram: the quantizer y = Q(u) replaced by additive noise, y = u + e]<br />
• May be reasonable if ∆ is small compared to the variations in u<br />
• But, added noise can never affect stability while quantization can!<br />
<strong>Lecture</strong> 8 24
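The value ∆²/12 is easy to verify numerically. A sketch (the test signal and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1
u = rng.uniform(-5.0, 5.0, 200_000)   # input spanning an integer number of levels
y = delta * np.round(u / delta)       # uniform (mid-tread) quantizer
e = y - u                             # quantization error, uniform on [-delta/2, delta/2]
var_e = float(e.var())
var_theory = delta**2 / 12
```

The empirical variance matches ∆²/12 to within the Monte Carlo error, which supports the linear noise model when ∆ is small relative to the signal variations.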
<strong>EL2620</strong> 2011<br />
Describing Function for Deadzone Relay<br />
[Figure: deadzone relay — output u = 0 for |e| < D, u = ±1 otherwise; input e and output u over one period]<br />
<strong>Lecture</strong> 6 ⇒<br />
N(A) = { 0, A < D;  (4/(πA)) √(1 − D²/A²), A > D }<br />
<strong>Lecture</strong> 8 25<br />
<strong>EL2620</strong> 2011<br />
Describing Function for Quantizer<br />
[Figure: quantizer staircase y = Q(u) with step ∆, first step at u = ∆/2, and a plot of N(A) vs. A/∆]<br />
N(A) = { 0, A < ∆/2;<br />
(4∆/(πA)) Σ_{i=1}^{n} √(1 − ((2i−1)∆/(2A))²), (2n−1)∆/2 < A < (2n+1)∆/2 }<br />
• max_A N(A) = 1/0.79 ≈ 1.27, so a gain margin of 1.27 avoids oscillation<br />
• Reducing ∆ reduces only the oscillation amplitude<br />
<strong>Lecture</strong> 8 27<br />
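A sketch checking the quantizer describing function against a direct computation of the first harmonic (mid-tread quantizer assumed; the function names are our own):

```python
import numpy as np

def N_formula(A, delta):
    """Describing function of the uniform quantizer (formula from the slide)."""
    n = int(np.round(A / delta))            # (2n-1)d/2 < A < (2n+1)d/2
    if n == 0:
        return 0.0                          # A < delta/2: output identically zero
    i = np.arange(1, n + 1)
    return 4 * delta / (np.pi * A) * np.sum(
        np.sqrt(1 - ((2 * i - 1) * delta / (2 * A)) ** 2))

def N_numeric(A, delta, m=200_000):
    """First-harmonic gain computed directly from the quantizer output."""
    th = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    y = delta * np.round(A * np.sin(th) / delta)
    b1 = 2.0 * np.mean(y * np.sin(th))      # first Fourier sine coefficient
    return b1 / A

delta = 0.25
peak = N_formula(delta / np.sqrt(2), delta)  # maximum of N(A), equals 4/pi
```

The maximum occurs for n = 1 at A = ∆/√2 and equals 4/π ≈ 1.27, which is where the 1.27 gain-margin rule comes from.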
<strong>EL2620</strong> 2011<br />
Example—Motor with P-controller.<br />
Quantization of the process output with ∆ = 0.2.<br />
The Nyquist curve of the linear part (K & ZOH & G(s)) intersects the negative real axis at −0.5K:<br />
stability for K < 2 without Q; describing function analysis predicts oscillation for K > 2/1.27 = 1.57 with Q.<br />
[Block diagram: r → K → ZOH (1 − e⁻ˢ)/s → G(s) → Q → y, with feedback]<br />
<strong>Lecture</strong> 8 28
<strong>EL2620</strong> 2011<br />
[Plots: output for (a) K = 0.8, (b) K = 1.2, (c) K = 1.6; time 0–50]<br />
<strong>Lecture</strong> 8 29<br />
<strong>EL2620</strong> 2011<br />
Example—1/s² & 2nd-Order <strong>Control</strong>ler<br />
[Nyquist diagram of the loop; block diagram: A/D → F → D/A → G]<br />
<strong>Lecture</strong> 8 30<br />
<strong>EL2620</strong> 2011<br />
Quantization ∆ = 0.02 in the A/D converter:<br />
[Plots: output (around 1 ± 0.02) and input (±0.05) vs. time 0–150, showing a small limit cycle]<br />
Describing function: Ay = 0.01 and T = 39<br />
Simulation: Ay = 0.01 and T = 28<br />
<strong>Lecture</strong> 8 31<br />
<strong>EL2620</strong> 2011<br />
Quantization ∆ = 0.01 in the D/A converter:<br />
[Plots: output, unquantized control signal, and quantized input vs. time 0–150]<br />
Describing function: Au = 0.005 and T = 39<br />
Simulation: Au = 0.005 and T = 39<br />
<strong>Lecture</strong> 8 32
<strong>EL2620</strong> 2011<br />
Quantization Compensation<br />
• Improve accuracy (larger word length)<br />
• Avoid unstable controllers and gain margins below 1.3<br />
• Use the tracking idea from anti-windup to improve the D/A converter<br />
[Block diagram: digital controller and D/A converter with tracking feedback]<br />
• Use analog dither, oversampling, and a digital lowpass filter to improve the accuracy of the A/D converter<br />
[Block diagram: dither added before the A/D converter, followed by filtering and decimation]<br />
<strong>Lecture</strong> 8 33<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should now be able to analyze and design for<br />
• Backlash<br />
• Quantization<br />
<strong>Lecture</strong> 8 34
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 9<br />
• <strong>Nonlinear</strong> control design based on high-gain control<br />
<strong>Lecture</strong> 9 1<br />
<strong>EL2620</strong> 2011<br />
History of the Feedback Amplifier<br />
New York–San Francisco communication link 1914.<br />
High signal amplification with low distortion was needed.<br />
[Block diagram: a cascade of amplifier stages f(·), each with a feedback gain k]<br />
Feedback amplifiers were the solution!<br />
Black, Bode, and Nyquist at Bell Labs 1920–1950.<br />
<strong>Lecture</strong> 9 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to analyze and design<br />
• High-gain control systems<br />
• Sliding mode controllers<br />
<strong>Lecture</strong> 9 2<br />
<strong>EL2620</strong> 2011<br />
Linearization Through High Gain Feedback<br />
[Block diagram: loop with the nonlinearity f(·) in the forward path and gain K in the feedback path; sector bounds α1·e ≤ f(e) ≤ α2·e]<br />
α1 ≤ f(e)/e ≤ α2  ⇒  (α1/(1 + α1 K)) r ≤ y ≤ (α2/(1 + α2 K)) r<br />
Choosing K ≫ 1/α1 yields<br />
y ≈ (1/K) r<br />
<strong>Lecture</strong> 9 4
<strong>EL2620</strong> 2011<br />
A Word of Caution<br />
Nyquist: high loop-gain may induce oscillations (due to dynamics)!<br />
<strong>Lecture</strong> 9 5<br />
<strong>EL2620</strong> 2011<br />
Remark: How to Obtain f⁻¹ from f Using Feedback<br />
[Block diagram: integrator k/s in a loop, computing u from the error v − f(u)]<br />
u̇ = k ( v − f(u) )<br />
If k > 0 is large and df/du > 0, then u̇ → 0 and<br />
0 = k ( v − f(u) ) ⇔ f(u) = v ⇔ u = f⁻¹(v)<br />
<strong>Lecture</strong> 9 7<br />
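A sketch of this inversion trick (Euler integration; the gains and the example f(u) = u³ are our own illustrative choices):

```python
def invert(v, f, k=50.0, dt=1e-3, steps=2000, u0=0.1):
    """Integrate du/dt = k (v - f(u)); for large k and df/du > 0, u -> f^{-1}(v)."""
    u = u0
    for _ in range(steps):
        u += dt * k * (v - f(u))
    return u

u = invert(2.0, lambda u: u ** 3)   # fixed point satisfies u**3 = 2
```

The fixed point of the integration is exactly f(u) = v, so after convergence u is the inverse function value; note that k·dt must be small enough for the discrete iteration to remain stable.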
<strong>EL2620</strong> 2011<br />
Inverting <strong>Nonlinear</strong>ities<br />
Compensation of static nonlinearity through inversion:<br />
[Block diagram: controller F(s) followed by the inverse f̂⁻¹(·), then the plant nonlinearity f(·) and G(s), inside a feedback loop]<br />
Should be combined with feedback as in the figure!<br />
<strong>Lecture</strong> 9 6<br />
<strong>EL2620</strong> 2011<br />
Example—Linearization of Static <strong>Nonlinear</strong>ity<br />
[Block diagram: r → Σ → K → f(·) → y with unity feedback; plot of y(r) versus r and of f(u) = u²]<br />
Linearization of f(u) = u² through feedback.<br />
The case K = 100 is shown in the plot: y(r) ≈ r.<br />
<strong>Lecture</strong> 9 8
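The example can be reproduced without simulation by solving the static loop equation y = f(K(r − y)) numerically. A sketch (bisection on e = r − y; our own choice of method):

```python
def closed_loop(r, K=100.0):
    """Solve y = f(K e), e = r - y, with f(u) = u**2, by bisection on e."""
    lo, hi = 0.0, r                   # for r in (0, 1) the root e* lies in (0, r)
    for _ in range(100):
        e = 0.5 * (lo + hi)
        if (K * e) ** 2 > r - e:      # y too large => e is past the root
            hi = e
        else:
            lo = e
    e = 0.5 * (lo + hi)
    return (K * e) ** 2

y = closed_loop(0.5)                  # close to r = 0.5
y2 = closed_loop(0.5, K=1000.0)       # even closer with higher gain
```

Raising K from 100 to 1000 shrinks the residual error by roughly a factor of ten, consistent with the high-gain linearization argument.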
<strong>EL2620</strong> 2011<br />
The Sensitivity Function S = (1 + GF)⁻¹<br />
The closed-loop system is<br />
G_cl = G/(1 + GF)<br />
[Block diagram: G in the forward path, F in the feedback path]<br />
A small perturbation dG in G gives<br />
dG_cl/dG = 1/(1 + GF)²  ⇒  dG_cl/G_cl = (1/(1 + GF)) · dG/G = S · dG/G<br />
S is the closed-loop sensitivity to open-loop perturbations.<br />
<strong>Lecture</strong> 9 9<br />
<strong>EL2620</strong> 2011<br />
Example—Distortion Reduction<br />
Let G = 1000 with distortion dG/G = 0.1.<br />
[Block diagram: amplifier G with feedback gain K]<br />
Choose K = 0.1 ⇒ S = (1 + GK)⁻¹ ≈ 0.01. Then<br />
dG_cl/G_cl = S · dG/G ≈ 0.001<br />
100 feedback amplifiers in series give total amplification<br />
G_tot = (G_cl)¹⁰⁰ ≈ 10¹⁰⁰<br />
and total distortion<br />
dG_tot/G_tot = (1 + 10⁻³)¹⁰⁰ − 1 ≈ 0.1<br />
<strong>Lecture</strong> 9 11<br />
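The arithmetic on this slide can be checked directly (a sketch):

```python
G, K = 1000.0, 0.1
S = 1.0 / (1.0 + G * K)                  # sensitivity, 1/101
Gcl = G / (1.0 + G * K)                  # closed-loop gain of one stage, ~9.9
dist_cl = S * 0.1                        # per-stage relative distortion, ~1e-3
Gtot = Gcl ** 100                        # overall gain of 100 stages
dist_tot = (1.0 + dist_cl) ** 100 - 1.0  # overall relative distortion, ~0.1
```

Each stage trades gain (1000 down to about 10) for a hundredfold reduction in distortion; cascading 100 such stages recovers enormous total gain while the total distortion stays near the original 10%.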
<strong>EL2620</strong> 2011<br />
Distortion Reduction via Feedback<br />
The feedback reduces distortion in each link.<br />
Several links give distortion-free high gain.<br />
[Block diagram: two cascaded feedback amplifiers, each stage f(·) with feedback gain K]<br />
<strong>Lecture</strong> 9 10<br />
<strong>EL2620</strong> 2011<br />
Transcontinental Communication Revolution<br />
The feedback amplifier was patented by Black 1937.<br />
Year   Channels   Loss (dB)   No. of amplifiers<br />
1914   1          60          3–6<br />
1923   1–4        150–400     6–20<br />
1938   16         1000        40<br />
1941   480        30000       600<br />
<strong>Lecture</strong> 9 12
<strong>EL2620</strong> 2011<br />
Sensitivity and the Circle Criterion<br />
[Figure: Nyquist curve GF(iω) and the circle C of radius r centered at −1]<br />
Consider a circle C := {z ∈ C : |z + 1| = r}, r ∈ (0, 1).<br />
GF(iω) stays outside C if<br />
|1 + GF(iω)| > r ⇔ |S(iω)| < r⁻¹<br />
Then, the Circle Criterion gives stability if<br />
1/(1 + r) ≤ f(y)/y ≤ 1/(1 − r)<br />
<strong>Lecture</strong> 9 13<br />
<strong>EL2620</strong> 2011<br />
On–Off <strong>Control</strong><br />
On–off control is the simplest control strategy.<br />
Common in temperature control, level control etc.<br />
[Block diagram: relay controller in a feedback loop with G(s)]<br />
The relay corresponds to infinitely high gain at the switching point.<br />
<strong>Lecture</strong> 9 15<br />
<strong>EL2620</strong> 2011<br />
Small Sensitivity Allows Large Uncertainty<br />
If |S(iω)| is small, we can choose r large (close to one).<br />
This corresponds to a large sector for f(·).<br />
Hence, |S(iω)| small implies low sensitivity to nonlinearities.<br />
[Figure: sector bounds k1·y ≤ f(y) ≤ k2·y]<br />
k1 = 1/(1 + r),  k2 = 1/(1 − r)<br />
<strong>Lecture</strong> 9 14<br />
<strong>EL2620</strong> 2011<br />
A <strong>Control</strong> Design Idea<br />
Assume V(x) = xᵀPx, P = Pᵀ > 0, represents the energy of<br />
ẋ = Ax + Bu, u ∈ [−1, 1]<br />
Choose u such that V decays as fast as possible:<br />
V̇ = xᵀ(AᵀP + PA)x + 2BᵀPx·u<br />
is minimized by u = −sgn(BᵀPx). (Notice that V̇ = a + bu is affine in u,<br />
so the minimum over −1 ≤ u ≤ 1 is attained at an endpoint.)<br />
<strong>EL2620</strong> 2011<br />
Sliding Modes<br />
ẋ = { f⁺(x), σ(x) > 0;  f⁻(x), σ(x) < 0 }<br />
[Figure: the vector fields f⁺ and f⁻ on either side of the surface σ(x) = 0, both pointing toward it; the sliding vector αf⁺ + (1 − α)f⁻ is tangent to the surface]<br />
The sliding mode is ẋ = αf⁺ + (1 − α)f⁻, where α satisfies<br />
αf⁺ₙ + (1 − α)f⁻ₙ = 0 for the normal projections of f⁺, f⁻.<br />
The sliding surface is S = {x : σ(x) = 0}.<br />
<strong>Lecture</strong> 9 17<br />
<strong>EL2620</strong> 2011<br />
For small x2 we have<br />
ẋ2(t) ≈ x1 − 1,  dx2/dx1 ≈ 1 − x1,  for x2 > 0<br />
ẋ2(t) ≈ x1 + 1,  dx2/dx1 ≈ 1 + x1,  for x2 < 0<br />
This implies the following behavior:<br />
[Phase-plane sketch: for −1 < x1 < 1, trajectories approach the line x2 = 0 from both sides]<br />
<strong>Lecture</strong> 9 19<br />
<strong>EL2620</strong> 2011<br />
Example<br />
ẋ = [ 0 −1 ; 1 −1 ] x + [ 1 ; 1 ] u = Ax + Bu<br />
u = −sgn σ(x) = −sgn x2 = −sgn(Cx)<br />
is equivalent to<br />
ẋ = { Ax − B, x2 > 0;  Ax + B, x2 < 0 }<br />
<strong>Lecture</strong> 9 18<br />
<strong>EL2620</strong> 2011<br />
Sliding Mode Dynamics<br />
The dynamics along the sliding surface S is obtained by setting<br />
u = u eq ∈ [−1, 1] such that x(t) stays on S.<br />
u eq is called the equivalent control.<br />
<strong>Lecture</strong> 9 20
<strong>EL2620</strong> 2011<br />
Example (cont’d)<br />
Finding u = u_eq such that σ̇(x) = ẋ2 = 0 on σ(x) = x2 = 0 gives<br />
0 = ẋ2 = x1 − x2 + u_eq = x1 + u_eq  ⇒  u_eq = −x1<br />
(using x2 = 0). Inserting this in the equation for ẋ1:<br />
ẋ1 = −x2 + u_eq = −x1<br />
gives the dynamics on the sliding surface S = {x : x2 = 0}.<br />
<strong>Lecture</strong> 9 21<br />
<strong>EL2620</strong> 2011<br />
Equivalent <strong>Control</strong> for a Linear System<br />
ẋ = Ax + Bu<br />
u = −sgn σ(x) = −sgn(Cx)<br />
Assume CB > 0. On the sliding surface S = {x : Cx = 0},<br />
0 = σ̇(x) = C ( Ax + Bu_eq )<br />
gives u_eq = −CAx/CB.<br />
Example (cont’d): u_eq = −CAx/CB = −( 1 −1 ) x = −x1,<br />
because σ(x) = x2 = 0. (Same result as before.)<br />
<strong>Lecture</strong> 9 23<br />
<strong>EL2620</strong> 2011<br />
Deriving the Equivalent <strong>Control</strong><br />
Assume<br />
ẋ = f(x) + g(x)u<br />
u = −sgn σ(x)<br />
has a stable sliding surface S = {x : σ(x) = 0}. Then, for x ∈ S,<br />
0 = σ̇(x) = (dσ/dx) · (dx/dt) = (dσ/dx) ( f(x) + g(x)u )<br />
The equivalent control is thus given by<br />
u_eq = −( (dσ/dx) g(x) )⁻¹ (dσ/dx) f(x)<br />
if the inverse exists.<br />
<strong>Lecture</strong> 9 22<br />
<strong>EL2620</strong> 2011<br />
Sliding Dynamics<br />
The dynamics on S = {x : Cx = 0} are given by<br />
ẋ = Ax + Bu_eq = ( I − BC/(CB) ) Ax,<br />
under the constraint Cx = 0, where the eigenvalues of<br />
(I − BC/CB)A are equal to the zeros of sG(s) = sC(sI − A)⁻¹B.<br />
Remark: The constraint Cx = 0 corresponds to the zero at<br />
s = 0, so this mode disappears on S = {x : Cx = 0}.<br />
<strong>Lecture</strong> 9 24
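The eigenvalue claim can be verified numerically for the earlier example A = [0 −1; 1 −1], B = (1, 1)ᵀ, C = (0, 1). A sketch:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, -1.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[0.0, 1.0]])
CB = (C @ B).item()                    # = 1 > 0

M = (np.eye(2) - (B @ C) / CB) @ A     # sliding dynamics matrix (I - BC/CB)A
eigs = np.sort(np.linalg.eigvals(M).real)
# here G(s) = C(sI - A)^{-1}B = (s + 1)/(s^2 + s + 1), so sG(s) has zeros {0, -1}
```

The eigenvalues come out as {−1, 0}: the zero at s = 0 is the constraint mode Cx = 0, and the remaining eigenvalue −1 is the sliding dynamics found analytically on the previous slides.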
<strong>EL2620</strong> 2011<br />
Proof<br />
ẋ = Ax + Bu, y = Cx  ⇒  ẏ = CAx + CBu  ⇒  u = −(1/CB) CAx + (1/CB) ẏ  ⇒<br />
ẋ = ( I − BC/(CB) ) Ax + (1/CB) B ẏ<br />
Hence, the transfer function from ẏ to u equals<br />
1/CB − (1/CB) CA ( sI − (I − BC/(CB))A )⁻¹ (1/CB) B<br />
But this transfer function also equals 1/(sG(s)). Hence, the eigenvalues of<br />
(I − BC/CB)A are equal to the zeros of sG(s).<br />
<strong>Lecture</strong> 9 25<br />
<strong>EL2620</strong> 2011<br />
Closed-Loop Stability<br />
Consider V(x) = σ²(x)/2 with σ(x) = pᵀx. Then,<br />
V̇ = σ(x)σ̇(x) = σ(x) ( pᵀf(x) + pᵀg(x)u )<br />
With the chosen control law, we get<br />
V̇ = −µσ(x) sgn σ(x) < 0 for σ(x) ≠ 0,<br />
so x tends to the surface σ(x) = 0, where<br />
0 = σ(x) = p1 x1 + ··· + p_{n−1} x_{n−1} + pn xn<br />
= p1 xn^(n−1) + ··· + p_{n−1} xn^(1) + pn xn^(0)<br />
where x^(k) denotes the k-th time derivative. Since p corresponds to a stable<br />
differential equation, xn → 0 exponentially as t → ∞. The<br />
state relations x_{k−1} = ẋ_k then give x → 0 exponentially as t → ∞.<br />
<strong>Lecture</strong> 9 27<br />
<strong>EL2620</strong> 2011<br />
Design of Sliding Mode <strong>Control</strong>ler<br />
Idea: Design a control law that forces the state to σ(x) = 0, and choose<br />
σ(x) such that the sliding mode tends to the origin. Assume<br />
d/dt ( x1, x2, ..., xn )ᵀ = ( f1(x) + g1(x)u, x1, ..., x_{n−1} )ᵀ = f(x) + g(x)u<br />
Choose the control law<br />
u = −(pᵀf(x))/(pᵀg(x)) − (µ/(pᵀg(x))) sgn σ(x),<br />
where µ > 0 is a design parameter, σ(x) = pᵀx, and<br />
pᵀ = ( p1 ... pn ) are the coefficients of a stable polynomial.<br />
<strong>Lecture</strong> 9 26<br />
<strong>EL2620</strong> 2011<br />
Time to Switch<br />
Consider an initial point x0 such that σ0 = σ(x0) > 0. Since<br />
σ(x)σ̇(x) = −µσ(x) sgn σ(x)<br />
it follows that as long as σ(x) > 0:<br />
σ̇(x) = −µ<br />
Hence, the time to the first switch (σ(x) = 0) is<br />
t_s = σ0/µ < ∞<br />
Note that t_s → 0 as µ → ∞.<br />
<strong>Lecture</strong> 9 28
<strong>EL2620</strong> 2011<br />
Example—Sliding Mode <strong>Control</strong>ler<br />
Design a state-feedback controller for<br />
ẋ = [ 1 0 ; 1 0 ] x + [ 1 ; 0 ] u,  y = ( 0 1 ) x<br />
Choose p1 s + p2 = s + 1 so that σ(x) = x1 + x2. The controller is<br />
given by<br />
u = −(pᵀAx)/(pᵀB) − (µ/(pᵀB)) sgn σ(x) = −2x1 − µ sgn(x1 + x2)<br />
<strong>Lecture</strong> 9 29<br />
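A sketch simulating this design with µ = 0.5 and x(0) = (1.5, 0)ᵀ (Euler integration; note that σ̇ = 2x1 + u = −µ sgn σ, so the surface should be reached at t_s = σ0/µ = 3):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 0.0]])
B = np.array([1.0, 0.0])
mu, dt = 0.5, 1e-3
x = np.array([1.5, 0.0])
t, t_switch = 0.0, None
for _ in range(int(6.0 / dt)):
    sigma = x[0] + x[1]
    if t_switch is None and sigma <= 0.0:
        t_switch = t                          # first hit of sigma = 0
    u = -2.0 * x[0] - mu * np.sign(sigma)     # sliding mode controller
    x = x + dt * (A @ x + B * u)
    t += dt
```

After the switch the state chatters within a band of width ~µ·dt around σ = 0 and slides toward the origin with the dynamics ẏ = −y.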
<strong>EL2620</strong> 2011<br />
Time Plots<br />
[Plots: x1, x2, and u vs. time for initial condition x(0) = ( 1.5 0 )ᵀ]<br />
Simulation agrees well with the time to switch<br />
t_s = σ0/µ = 3<br />
and the sliding dynamics ẏ = −y.<br />
<strong>Lecture</strong> 9 31<br />
<strong>EL2620</strong> 2011<br />
Phase Portrait<br />
Simulation with µ = 0.5. Note the sliding surface σ(x) = x1 + x2 = 0.<br />
[Phase portrait in the (x1, x2)-plane: the trajectory reaches the line x1 + x2 = 0 and slides along it to the origin]<br />
<strong>Lecture</strong> 9 30<br />
<strong>EL2620</strong> 2011<br />
The Sliding Mode <strong>Control</strong>ler is Robust<br />
Assume that only a model ẋ = f̂(x) + ĝ(x)u of the true system<br />
ẋ = f(x) + g(x)u is known. Still, however,<br />
V̇ = σ(x) [ pᵀ( f ĝᵀ − f̂ gᵀ )p / (pᵀĝ) − µ (pᵀg)/(pᵀĝ) sgn σ(x) ] < 0<br />
if sgn(pᵀg) = sgn(pᵀĝ) and µ > 0 is sufficiently large.<br />
The closed-loop system is thus robust against model errors!<br />
(High-gain control with stable open-loop zeros)<br />
<strong>Lecture</strong> 9 32
<strong>EL2620</strong> 2011<br />
Comments on Sliding Mode <strong>Control</strong><br />
• Efficient handling of model uncertainties<br />
• Often impossible to implement infinitely fast switching<br />
• Smooth version through a low-pass filter or a boundary layer<br />
• Applications in robotics and vehicle control<br />
• Compare with pulse-width modulated control signals<br />
<strong>Lecture</strong> 9 33<br />
<strong>EL2620</strong> 2011<br />
Next <strong>Lecture</strong><br />
• Lyapunov design methods<br />
• Exact feedback linearization<br />
<strong>Lecture</strong> 9 35<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to analyze and design<br />
• High-gain control systems<br />
• Sliding mode controllers<br />
<strong>Lecture</strong> 9 34
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 10<br />
• Exact feedback linearization<br />
• Input-output linearization<br />
• Lyapunov-based control design methods<br />
<strong>Lecture</strong> 10 1<br />
<strong>EL2620</strong> 2011<br />
<strong>Nonlinear</strong> <strong>Control</strong>lers<br />
• <strong>Nonlinear</strong> dynamical controller: ż = a(z,y), u = c(z)<br />
• Linear dynamics, static nonlinearity: ż = Az + By, u = c(z)<br />
• Linear controller: ż = Az + By, u = Cz<br />
<strong>Lecture</strong> 10 3<br />
<strong>EL2620</strong> 2011<br />
Output Feedback and State Feedback<br />
ẋ = f(x, u)<br />
y = h(x)<br />
• Output feedback: Find u = k(y) such that the closed-loop<br />
system<br />
ẋ = f ( x, k ( h(x) ))<br />
has nice properties.<br />
• State feedback: Find u = l(x) such that<br />
ẋ = f(x, l(x))<br />
has nice properties.<br />
k and l may include dynamics.<br />
<strong>Lecture</strong> 10 2<br />
<strong>EL2620</strong> 2011<br />
<strong>Nonlinear</strong> Observers<br />
What if x is not measurable?<br />
ẋ = f(x, u), y = h(x)<br />
Simplest observer<br />
˙̂x = f(̂x, u)<br />
Feedback correction, as in linear case,<br />
˙̂x = f(̂x, u)+K(y − h(̂x))<br />
Choices of K<br />
• Linearize f at x0, find K for the linearization<br />
• Linearize f at ̂x(t), find K = K(̂x) for the linearization<br />
Second case is called Extended Kalman Filter<br />
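A minimal numerical sketch of the observer with feedback correction (first choice above: a constant K from a linearization, not a full EKF). The plant, a damped pendulum with y = x1, is an assumed example:<br />

```python
import math

def f(x1, x2, u):
    # assumed plant: damped pendulum; x1 angle, x2 angular velocity
    return x2, -math.sin(x1) - 0.2 * x2 + u

def observe(T=20.0, dt=1e-3, K=(2.0, 1.0)):
    x1, x2 = 1.0, 0.0     # true state
    z1, z2 = 0.0, 0.0     # observer state, deliberately wrong initial guess
    for _ in range(int(T / dt)):
        u = 0.0
        innov = x1 - z1                   # y - h(xhat), with y = x1
        dx1, dx2 = f(x1, x2, u)
        dz1, dz2 = f(z1, z2, u)
        # xhat' = f(xhat, u) + K (y - h(xhat))
        z1, z2 = z1 + dt * (dz1 + K[0] * innov), z2 + dt * (dz2 + K[1] * innov)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1 - z1, x2 - z2

e1, e2 = observe()
```

The estimation error (e1, e2) converges to zero even though the observer starts from the wrong state.<br />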
<strong>Lecture</strong> 10 4
<strong>EL2620</strong> 2011<br />
Some state feedback control approaches<br />
• Exact feedback linearization<br />
• Input-output linearization<br />
• Lyapunov-based design - backstepping control<br />
<strong>Lecture</strong> 10 5<br />
<strong>EL2620</strong> 2011<br />
Another Example<br />
ẋ1 = a sin x2<br />
ẋ2 = −x1² + u<br />
How do we cancel the term sin x2?<br />
Perform transformation of states into linearizable form:<br />
z1 = x1, z2 = ẋ1 = a sin x2<br />
yields<br />
ż1 = z2, ż2 = a cos x2 · (−x1² + u)<br />
and the linearizing control becomes<br />
u(x) = x1² + v/(a cos x2), x2 ∈ (−π/2, π/2)<br />
<strong>Lecture</strong> 10 7<br />
<strong>EL2620</strong> 2011<br />
Exact Feedback Linearization<br />
Consider the nonlinear control-affine system<br />
ẋ = f(x)+g(x)u<br />
idea: use a state-feedback controller u(x) to make the system linear<br />
Example 1: ẋ = cos x − x³ + u<br />
The state-feedback controller<br />
u(x) = − cos x + x³ − kx + v<br />
yields the linear system<br />
ẋ = −kx + v<br />
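A quick simulation check of Example 1 (a sketch with Euler integration, v = 0 and k = 1 assumed): the closed loop should behave exactly like ẋ = −kx.<br />

```python
import math

k, dt, T = 1.0, 1e-3, 5.0
x = 2.0
for _ in range(int(T / dt)):
    u = -math.cos(x) + x**3 - k * x      # linearizing feedback, v = 0
    x += dt * (math.cos(x) - x**3 + u)   # plant: xdot = cos x - x^3 + u
# closed loop is xdot = -k x, so x(T) should be close to x(0) e^{-kT}
```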
<strong>Lecture</strong> 10 6<br />
<strong>EL2620</strong> 2011<br />
Diffeomorphisms<br />
A nonlinear state transformation z = T (x) with<br />
• T invertible for x in the domain of interest<br />
• T and T −1 continuously differentiable<br />
is called a diffeomorphism<br />
Definition: A nonlinear system<br />
ẋ = f(x)+g(x)u<br />
is feedback linearizable if there exists a diffeomorphism T whose<br />
domain contains the origin and transforms the system into the form<br />
ẋ = Ax + Bγ(x)(u − α(x))<br />
with (A, B) controllable and γ(x) nonsingular for all x in the domain<br />
of interest.<br />
<strong>Lecture</strong> 10 8
<strong>EL2620</strong> 2011<br />
Exact vs. Input-Output Linearization<br />
The example again, but now with an output<br />
ẋ1 = a sin x2, ẋ2 = −x1² + u, y = x2<br />
• The control law u = x1² + v/(a cos x2) yields<br />
ż1 = z2, ż2 = v, y = sin⁻¹(z2/a)<br />
which is nonlinear in the output.<br />
• If we want a linear input-output relationship we could instead use<br />
u = x1² + v<br />
to obtain<br />
ẋ1 = a sin x2, ẋ2 = v, y = x2<br />
which is linear from v to y<br />
Caution: the control has rendered the state x1 unobservable<br />
<strong>Lecture</strong> 10 9<br />
<strong>EL2620</strong> 2011<br />
Example: controlled van der Pol equation<br />
ẋ1 = x2<br />
ẋ2 = −x1 + ɛ(1 − x1²)x2 + u<br />
y = x1<br />
Differentiate the output<br />
ẏ = ẋ1 = x2<br />
ÿ = ẋ2 = −x1 + ɛ(1 − x1²)x2 + u<br />
The state feedback controller<br />
u = x1 − ɛ(1 − x1²)x2 + v ⇒ ÿ = v<br />
<strong>Lecture</strong> 10 11<br />
<strong>EL2620</strong> 2011<br />
Input-Output Linearization<br />
Use state feedback u(x) to make the control-affine system<br />
ẋ = f(x)+g(x)u<br />
y = h(x)<br />
linear from the input v to the output y<br />
The general idea: differentiate the output, y = h(x), p times until the<br />
control u appears explicitly in y (p) , and then determine u so that<br />
y (p) = v<br />
i.e., G(s) =1/s p<br />
<strong>Lecture</strong> 10 10<br />
<strong>EL2620</strong> 2011<br />
Lie Derivatives<br />
Consider the nonlinear SISO system<br />
ẋ = f(x)+g(x)u ; x ∈ R n ,u∈ R 1<br />
y = h(x), y ∈ R<br />
The derivative of the output<br />
ẏ = (dh/dx) ẋ = (dh/dx)(f(x) + g(x)u) = L f h(x) + L g h(x)u<br />
where L f h(x) and L g h(x) are Lie derivatives (L f h is the derivative<br />
of h along the vector field of ẋ = f(x))<br />
Repeated derivatives:<br />
L f^k h(x) = (d(L f^(k−1) h)/dx) f(x), L g L f h(x) = (d(L f h)/dx) g(x)<br />
<strong>Lecture</strong> 10 12
<strong>EL2620</strong> 2011<br />
Lie derivatives and relative degree<br />
• The relative degree p of a system is defined as the number of<br />
integrators between the input and the output (the number of times<br />
y must be differentiated for the input u to appear)<br />
• A linear system<br />
Y (s)/U(s) = (b0 s^m + ... + bm)/(s^n + a1 s^(n−1) + ... + an)<br />
has relative degree p = n − m<br />
• A nonlinear system has relative degree p if<br />
L g L f^(i−1) h(x) = 0, i = 1, ..., p−1; L g L f^(p−1) h(x) ≠ 0 ∀x ∈ D<br />
<strong>Lecture</strong> 10 13<br />
<strong>EL2620</strong> 2011<br />
The input-output linearizing control<br />
Consider an nth order SISO system with relative degree p<br />
ẋ = f(x) + g(x)u<br />
y = h(x)<br />
Differentiating the output repeatedly<br />
ẏ = (dh/dx) ẋ = L f h(x) + L g h(x)u (the last term is 0 when p > 1)<br />
...<br />
y (p) = L f^p h(x) + L g L f^(p−1) h(x) u<br />
<strong>Lecture</strong> 10 15<br />
<strong>EL2620</strong> 2011<br />
Example<br />
The controlled van der Pol equation<br />
ẋ1 = x2<br />
ẋ2 = −x1 + ɛ(1 − x1²)x2 + u<br />
y = x1<br />
Differentiating the output<br />
ẏ = ẋ1 = x2<br />
ÿ = ẋ2 = −x1 + ɛ(1 − x1²)x2 + u<br />
Thus, the system has relative degree p =2<br />
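The Lie derivative computations behind the relative degree can be checked symbolically; a sketch for the van der Pol example, assuming sympy is available:<br />

```python
import sympy as sp

x1, x2, eps = sp.symbols('x1 x2 epsilon')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + eps * (1 - x1**2) * x2])   # van der Pol drift
g = sp.Matrix([0, 1])
h = sp.Matrix([x1])                                  # output y = x1

def lie(scalar, vf):
    # Lie derivative of a scalar (1x1 Matrix) along the vector field vf
    return scalar.jacobian(x) * vf

Lgh = lie(h, g)[0]          # u does not appear in ydot if this is 0
Lfh = lie(h, f)             # first derivative of y along f
LgLfh = lie(Lfh, g)[0]      # nonzero -> relative degree p = 2
```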
<strong>Lecture</strong> 10 14<br />
<strong>EL2620</strong> 2011<br />
and hence the state-feedback controller<br />
u = ( −L f^p h(x) + v ) / ( L g L f^(p−1) h(x) )<br />
results in the linear input-output system<br />
y (p) = v<br />
<strong>Lecture</strong> 10 16
<strong>EL2620</strong> 2011<br />
Zero Dynamics<br />
• Note that the order of the linearized system is p, corresponding to<br />
the relative degree of the system<br />
• Thus, if p < n, a part of the dynamics of order n − p is rendered<br />
unobservable from the output: the zero dynamics<br />
<strong>EL2620</strong> 2011<br />
The same example<br />
ẋ =cosx − x 3 + u<br />
Now try the control law<br />
u = − cos x − kx<br />
Choose the same Lyapunov candidate V (x) = x²/2<br />
V (x) > 0, ˙V = −x⁴ − kx² < 0<br />
Thus, also globally asymptotically stable (and with a more negative ˙V )<br />
<strong>Lecture</strong> 10 21<br />
<strong>EL2620</strong> 2011<br />
Simulating the two controllers<br />
Simulation with x(0) = 10<br />
[Figure: state trajectory and control input for the linearizing and<br />
non-linearizing controllers; the linearizing controller uses inputs up to<br />
|u| ≈ 1000]<br />
<strong>EL2620</strong> 2011<br />
Back-Stepping <strong>Control</strong> Design<br />
We want to design a state feedback u = u(x) that stabilizes<br />
ẋ1 = f(x1) + g(x1)x2<br />
ẋ2 = u (1)<br />
at x = 0 with f(0) = 0.<br />
Idea: See the system as a cascade connection. Design controller first<br />
for the inner loop and then for the outer.<br />
[Block diagram: cascade connection; u drives the x2-integrator, and x2<br />
drives the x1-subsystem ẋ1 = f(x1) + g(x1)x2]<br />
<strong>Lecture</strong> 10 23<br />
The linearizing control is slower and uses excessive input. Thus,<br />
linearization can have a significant cost!<br />
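The comparison can be reproduced with a few lines; a sketch (Euler integration, k = 1 assumed) of the two controllers for ẋ = cos x − x³ + u from x(0) = 10:<br />

```python
import math

def simulate(linearizing, k=1.0, dt=1e-4, T=5.0):
    # plant: xdot = cos x - x^3 + u, x(0) = 10
    x, umax = 10.0, 0.0
    for _ in range(int(T / dt)):
        if linearizing:
            u = -math.cos(x) + x**3 - k * x   # cancels -x^3, then adds -kx
        else:
            u = -math.cos(x) - k * x          # keeps the stabilizing -x^3
        umax = max(umax, abs(u))
        x += dt * (math.cos(x) - x**3 + u)
    return x, umax

x_lin, u_lin = simulate(True)
x_non, u_non = simulate(False)
```

Both controllers drive the state to zero, but the linearizing one needs a peak input roughly two orders of magnitude larger, since it first cancels the helpful −x³ term.<br />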
<strong>Lecture</strong> 10 22<br />
<strong>EL2620</strong> 2011<br />
Suppose the partial system<br />
ẋ1 = f(x1)+g(x1)¯v<br />
can be stabilized by ¯v = φ(x1) and there exists Lyapunov fcn<br />
V1 = V1(x1) such that<br />
˙V1(x1) = (dV1/dx1) ( f(x1) + g(x1)φ(x1) ) ≤ −W (x1)<br />
for some positive definite function W .<br />
This is a critical assumption in backstepping control!<br />
<strong>Lecture</strong> 10 24
<strong>EL2620</strong> 2011<br />
The Trick<br />
Equation (1) can be rewritten as<br />
ẋ1 = f(x1)+g(x1)φ(x1)+g(x1)[x2 − φ(x1)]<br />
ẋ2 = u<br />
[Block diagram: the same cascade with the feedback −φ(x1) pulled out,<br />
so that the x1-subsystem becomes ẋ1 = f + gφ]<br />
<strong>Lecture</strong> 10 25<br />
<strong>EL2620</strong> 2011<br />
Consider V2(x1,x2) = V1(x1) + ζ²/2. Then,<br />
˙V2(x1,x2) = (dV1/dx1) ( f(x1) + g(x1)φ(x1) ) + (dV1/dx1) g(x1)ζ + ζv<br />
≤ −W (x1) + (dV1/dx1) g(x1)ζ + ζv<br />
Choosing<br />
v = −(dV1/dx1) g(x1) − kζ, k > 0<br />
gives<br />
˙V2(x1,x2) ≤ −W (x1) − kζ²<br />
Hence, x =0is asymptotically stable for (1) with control law<br />
u(x) = ˙φ(x)+v(x).<br />
If V1 radially unbounded, then global stability.<br />
<strong>Lecture</strong> 10 27<br />
<strong>EL2620</strong> 2011<br />
Introduce new state ζ = x2 − φ(x1) and control v = u − ˙φ:<br />
ẋ1 = f(x1)+g(x1)φ(x1)+g(x1)ζ<br />
˙ζ = v<br />
where<br />
˙φ(x1) = (dφ/dx1) ẋ1 = (dφ/dx1) ( f(x1) + g(x1)x2 )<br />
[Block diagram: the transformed cascade in the states (x1, ζ) with input v]<br />
<strong>Lecture</strong> 10 26<br />
<strong>EL2620</strong> 2011<br />
Back-Stepping Lemma<br />
Lemma: Let z =(x1,...,xk−1) T and<br />
ż = f(z)+g(z)xk<br />
ẋk = u<br />
Assume φ(0) = 0, f(0) = 0,<br />
ż = f(z)+g(z)φ(z)<br />
stable, and V (z) a Lyapunov fcn (with ˙V ≤−W ). Then,<br />
u = (dφ/dz) ( f(z) + g(z)xk ) − (dV/dz) g(z) − (xk − φ(z))<br />
stabilizes x =0with V (z)+(xk − φ(z)) 2 /2 being a Lyapunov fcn.<br />
<strong>Lecture</strong> 10 28
<strong>EL2620</strong> 2011<br />
Strict Feedback Systems<br />
Back-stepping Lemma can be applied to stabilize systems on strict<br />
feedback form:<br />
ẋ1 = f1(x1)+g1(x1)x2<br />
ẋ2 = f2(x1,x2)+g2(x1,x2)x3<br />
ẋ3 = f3(x1,x2,x3)+g3(x1,x2,x3)x4<br />
. ..<br />
ẋn = fn(x1,...,xn)+gn(x1,...,xn)u<br />
where gk ≠0<br />
Note: ẋ1,...,ẋk do not depend on xk+2,...,xn.<br />
<strong>Lecture</strong> 10 29<br />
<strong>EL2620</strong> 2011<br />
Back-Stepping<br />
Back-Stepping Lemma can be applied recursively to a system<br />
ẋ = f(x)+g(x)u<br />
on strict feedback form.<br />
Back-stepping generates stabilizing feedbacks φk(x1,...,xk)<br />
(equal to u in Back-Stepping Lemma) and Lyapunov functions<br />
Vk(x1,...,xk) =Vk−1(x1,...,xk−1)+[xk − φk−1] 2 /2<br />
by “stepping back” from x1 to u (see Khalil pp. 593–594 for details).<br />
Back-stepping results in the final state feedback<br />
u = φn(x1,...,xn)<br />
<strong>Lecture</strong> 10 31<br />
<strong>EL2620</strong> 2011<br />
2 minute exercise: Give an example of a linear system<br />
ẋ = Ax + Bu on strict feedback form.<br />
<strong>Lecture</strong> 10 30<br />
<strong>EL2620</strong> 2011<br />
Example<br />
Design back-stepping controller for<br />
ẋ1 = x 2 1 + x2, ẋ2 = x3, ẋ3 = u<br />
Step 0 Verify strict feedback form<br />
Step 1 Consider first subsystem<br />
ẋ1 = x1² + x2, ẋ2 = u1<br />
where φ1(x1) = −x1² − x1 stabilizes the first equation. With<br />
V1(x1) = x1²/2, Back-Stepping Lemma gives<br />
u1 = (−2x1 − 1)(x1² + x2) − x1 − (x2 + x1² + x1) = φ2(x1,x2)<br />
V2 = x1²/2 + (x2 + x1² + x1)²/2<br />
<strong>Lecture</strong> 10 32
<strong>EL2620</strong> 2011<br />
Step 2 Applying Back-Stepping Lemma on<br />
ẋ1 = x 2 1 + x2<br />
ẋ2 = x3<br />
ẋ3 = u<br />
gives<br />
u = u2 = (dφ2/dz) ( f(z) + g(z)x3 ) − (dV2/dz) g(z) − (x3 − φ2(z))<br />
= (∂φ2/∂x1)(x1² + x2) + (∂φ2/∂x2) x3 − ∂V2/∂x2 − (x3 − φ2(x1,x2))<br />
which globally stabilizes the system.<br />
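The resulting controller can be simulated directly; a sketch with the partial derivatives of φ2 taken by finite differences (initial state and step sizes are assumptions):<br />

```python
def phi2(x1, x2):
    # virtual control from Step 1
    return (-2*x1 - 1)*(x1**2 + x2) - x1 - (x2 + x1**2 + x1)

def control(x1, x2, x3, d=1e-6):
    # central-difference partials of phi2
    dp_dx1 = (phi2(x1 + d, x2) - phi2(x1 - d, x2)) / (2*d)
    dp_dx2 = (phi2(x1, x2 + d) - phi2(x1, x2 - d)) / (2*d)
    dV2_dx2 = x2 + x1**2 + x1   # dV2/dx2 for V2 = x1^2/2 + (x2+x1^2+x1)^2/2
    return dp_dx1*(x1**2 + x2) + dp_dx2*x3 - dV2_dx2 - (x3 - phi2(x1, x2))

x1, x2, x3 = 0.5, 0.5, 0.5
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    u = control(x1, x2, x3)
    x1, x2, x3 = x1 + dt*(x1**2 + x2), x2 + dt*x3, x3 + dt*u
```

All three states converge to the origin, as the Lyapunov argument predicts.<br />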
<strong>Lecture</strong> 10 33
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 11<br />
• <strong>Nonlinear</strong> controllability<br />
• Gain scheduling<br />
<strong>Lecture</strong> 11 1<br />
<strong>EL2620</strong> 2011<br />
<strong>Control</strong>lability<br />
Definition:<br />
ẋ = f(x, u)<br />
is controllable if for any x 0 , x 1 there exists T>0 and<br />
u :[0,T] → R such that x(0) = x 0 and x(T )=x 1 .<br />
<strong>Lecture</strong> 11 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Determine if a nonlinear system is controllable<br />
• Apply gain scheduling to simple examples<br />
<strong>Lecture</strong> 11 2<br />
<strong>EL2620</strong> 2011<br />
Linear Systems<br />
Lemma:<br />
ẋ = Ax + Bu<br />
is controllable if and only if<br />
Wn = ( B AB ... A n−1 B )<br />
has full rank.<br />
Is there a corresponding result for nonlinear systems?<br />
<strong>Lecture</strong> 11 4
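The rank condition is easy to evaluate numerically; a sketch for a double integrator (example system assumed), using numpy:<br />

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])

n = A.shape[0]
# Wn = [ B  AB  ...  A^{n-1}B ]
Wn = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(Wn)
```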
<strong>EL2620</strong> 2011<br />
<strong>Control</strong>lable Linearization<br />
Lemma: Let<br />
ż = Az + Bu<br />
be the linearization of<br />
ẋ = f(x)+g(x)u<br />
at x =0with f(0) = 0. If the linear system is controllable then the<br />
nonlinear system is controllable in a neighborhood of the origin.<br />
Remark:<br />
• Hence, if rank Wn = n then there is an ɛ > 0 such that for every<br />
x1 ∈ Bɛ(0) there exists u :[0,T] → R so that x(T )=x1<br />
• A nonlinear system can be controllable, even if the linearized<br />
system is not controllable<br />
<strong>Lecture</strong> 11 5<br />
<strong>EL2620</strong> 2011<br />
Linearization for u1 = u2 = 0 gives<br />
ż = Az + B1u1 + B2u2<br />
with A = 0 and<br />
B1 = ( 0 0 0 1 ) T , B2 = ( cos(ϕ0 + θ0) sin(ϕ0 + θ0) sin(θ0) 0 ) T<br />
rank Wn = rank ( B AB ... A n−1 B ) = 2 < 4, so the<br />
linearization is not controllable. Still the car is controllable!<br />
Linearization does not capture the controllability well enough<br />
<strong>Lecture</strong> 11 7<br />
<strong>EL2620</strong> 2011<br />
Car Example<br />
[Figure: car with position (x, y) and angles ϕ and θ]<br />
d/dt ( x y ϕ θ ) T = ( 0 0 0 1 ) T u1 + ( cos(ϕ + θ) sin(ϕ + θ) sin(θ) 0 ) T u2 = g1(z)u1 + g2(z)u2<br />
Input: u1 steering wheel velocity, u2 forward velocity<br />
<strong>Lecture</strong> 11 6<br />
<strong>EL2620</strong> 2011<br />
Lie Brackets<br />
Lie bracket between vector fields f,g : R n → R n is a vector field<br />
defined by<br />
[f,g] = ∂g<br />
∂x f − ∂f<br />
∂x g<br />
Example:<br />
f = ( cos x2, x1 ) T , g = ( x1, 1 ) T<br />
∂g/∂x = [ 1 0 ; 0 0 ], ∂f/∂x = [ 0 −sin x2 ; 1 0 ]<br />
[f,g] = (∂g/∂x) f − (∂f/∂x) g = ( cos x2 + sin x2, −x1 ) T<br />
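The bracket computation can be verified symbolically; a sketch assuming sympy is available:<br />

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([sp.cos(x2), x1])
g = sp.Matrix([x1, 1])

def bracket(f, g):
    # [f,g] = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

fg = sp.simplify(bracket(f, g))
```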
<strong>Lecture</strong> 11 8
<strong>EL2620</strong> 2011<br />
Lie Bracket Direction<br />
For the system<br />
ẋ = g1(x)u1 + g2(x)u2<br />
the control<br />
(u1,u2) = (1, 0) for t ∈ [0, ɛ), (0, 1) for t ∈ [ɛ, 2ɛ),<br />
(−1, 0) for t ∈ [2ɛ, 3ɛ), (0, −1) for t ∈ [3ɛ, 4ɛ)<br />
gives the motion<br />
x(4ɛ) = x(0) + ɛ²[g1,g2] + O(ɛ³)<br />
The system can move in the [g1,g2] direction!<br />
<strong>Lecture</strong> 11 9<br />
<strong>EL2620</strong> 2011<br />
Proof, continued<br />
3. Similarly, for t ∈ [2ɛ, 3ɛ]<br />
x(3ɛ) = x0 + ɛ g2 + ɛ² ( (dg2/dx)g1 − (dg1/dx)g2 + (1/2)(dg2/dx)g2 )<br />
4. Finally, for t ∈ [3ɛ, 4ɛ]<br />
x(4ɛ) = x0 + ɛ² ( (dg2/dx)g1 − (dg1/dx)g2 )<br />
<strong>Lecture</strong> 11 11<br />
<strong>EL2620</strong> 2011<br />
Proof<br />
1. For t ∈ [0, ɛ], assuming ɛ small and x(0) = x0, Taylor series yields<br />
x(ɛ) = x0 + g1(x0)ɛ + (1/2)(dg1/dx) g1(x0) ɛ² + O(ɛ³) (1)<br />
2. Similarly, for t ∈ [ɛ, 2ɛ]<br />
x(2ɛ) = x(ɛ) + g2(x(ɛ))ɛ + (1/2)(dg2/dx) g2(x(ɛ)) ɛ²<br />
and with x(ɛ) from (1), and g2(x(ɛ)) = g2(x0) + (dg2/dx) ɛ g1(x0),<br />
x(2ɛ) = x0 + ɛ(g1(x0) + g2(x0)) +<br />
ɛ² ( (1/2)(dg1/dx)(x0)g1(x0) + (dg2/dx)(x0)g1(x0) + (1/2)(dg2/dx)(x0)g2(x0) )<br />
<strong>Lecture</strong> 11 10<br />
<strong>EL2620</strong> 2011<br />
Car Example (Cont’d)<br />
g3 := [g1,g2] = (∂g2/∂x) g1 − (∂g1/∂x) g2 = ( −sin(ϕ + θ) cos(ϕ + θ) cos(θ) 0 ) T<br />
<strong>Lecture</strong> 11 12<br />
<strong>EL2620</strong> 2011<br />
We can hence move the car in the g3 direction (“wriggle”) by applying<br />
the control sequence<br />
(u1,u2) ={(1, 0), (0, 1), (−1, 0), (0, −1)}<br />
[Figure: the car wriggling out of a parking spot]<br />
<strong>Lecture</strong> 11 13<br />
<strong>EL2620</strong> 2011<br />
Parking Theorem<br />
You can get out of any parking lot that is ɛ > 0 bigger than your car<br />
by applying control corresponding to g4, that is, by applying the<br />
control sequence<br />
Wriggle, Drive, −Wriggle, −Drive<br />
<strong>Lecture</strong> 11 15<br />
<strong>EL2620</strong> 2011<br />
The car can also move in the direction<br />
g4 := [g3,g2] = (∂g2/∂x) g3 − (∂g3/∂x) g2 = ... = ( −sin(ϕ + 2θ) cos(ϕ + 2θ) 0 0 ) T<br />
For θ = 0 this is ( −sin(ϕ), cos(ϕ), 0, 0 ) T : the g4 direction<br />
corresponds to sideways movement<br />
<strong>Lecture</strong> 11 14<br />
<strong>EL2620</strong> 2011<br />
2 minute exercise: What does the direction [g1,g2] correspond to for<br />
a linear system ẋ = g1(x)u1 + g2(x)u2 = B1u1 + B2u2?<br />
<strong>Lecture</strong> 11 16
<strong>EL2620</strong> 2011<br />
The Lie Bracket Tree<br />
[g1,g2]<br />
[g1, [g1,g2]]<br />
[g2, [g1,g2]]<br />
[g1, [g1, [g1,g2]]] [g2, [g1, [g1,g2]]] [g1, [g2, [g1,g2]]] [g2, [g2, [g1,g2]]]<br />
<strong>Lecture</strong> 11 17<br />
<strong>EL2620</strong> 2011<br />
Example—Unicycle<br />
d/dt ( x1 x2 θ ) T = ( cos θ sin θ 0 ) T u1 + ( 0 0 1 ) T u2<br />
[Figure: unicycle at position (x1, x2) with heading θ]<br />
g1 = ( cos θ sin θ 0 ) T , g2 = ( 0 0 1 ) T , [g1,g2] = ( sin θ −cos θ 0 ) T<br />
<strong>Control</strong>lable because {g1,g2, [g1,g2]} spans R 3<br />
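The span condition can be confirmed symbolically; a sketch assuming sympy is available:<br />

```python
import sympy as sp

x1, x2, th = sp.symbols('x1 x2 theta')
state = sp.Matrix([x1, x2, th])
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])
g2 = sp.Matrix([0, 0, 1])
# [g1,g2] = (dg2/dx) g1 - (dg1/dx) g2
g12 = g2.jacobian(state) * g1 - g1.jacobian(state) * g2
# the three fields span R^3 iff the determinant is nonzero for all theta
det = sp.simplify(sp.Matrix.hstack(g1, g2, g12).det())
```

The determinant turns out to be constant, so the span condition holds everywhere.<br />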
<strong>Lecture</strong> 11 19<br />
<strong>EL2620</strong> 2011<br />
<strong>Control</strong>lability Theorem<br />
Theorem: The system<br />
ẋ = g1(x)u1 + g2(x)u2<br />
is controllable if the Lie bracket tree (together with g1 and g2) spans<br />
R n for all x<br />
Remark:<br />
• The system can be steered in any direction of the Lie bracket tree<br />
<strong>Lecture</strong> 11 18<br />
<strong>EL2620</strong> 2011<br />
2 minute exercise:<br />
• Show that {g1,g2, [g1,g2]} spans R 3 for the unicycle<br />
• Is the linearization of the unicycle controllable?<br />
<strong>Lecture</strong> 11 20
<strong>EL2620</strong> 2011<br />
Example—Rolling Penny<br />
[Figure: rolling penny at (x1, x2) with heading θ and roll angle φ]<br />
d/dt ( x1 x2 θ φ ) T = ( cos θ sin θ 0 1 ) T u1 + ( 0 0 1 0 ) T u2<br />
<strong>Control</strong>lable because {g1,g2, [g1,g2], [g2, [g1,g2]]} spans R 4<br />
<strong>Lecture</strong> 11 21<br />
<strong>EL2620</strong> 2011<br />
Gain Scheduling<br />
<strong>Control</strong> parameters depend on operating conditions:<br />
[Block diagram: a gain schedule adjusts the controller parameters based<br />
on the measured operating condition]<br />
Example: PID controller with K = K(α), where α is the scheduling<br />
variable.<br />
Examples of scheduling variables: production rate, machine speed,<br />
Mach number, flow rate<br />
<strong>Lecture</strong> 11 23<br />
<strong>EL2620</strong> 2011<br />
When is Feedback Linearization Possible?<br />
Q: When can we transform ẋ = f(x)+g(x)u into ż = Az + bv by<br />
means of feedback u = α(x)+β(x)v and change of variables<br />
z = T (x) (see previous lecture)?<br />
A: The answer requires Lie brackets and further concepts from<br />
differential geometry (see Khalil and PhD course)<br />
[Block diagram: feedback u = α(x) + β(x)v around ẋ = f(x) + g(x)u,<br />
followed by the state transformation z = T (x)]<br />
<strong>Lecture</strong> 11 22<br />
<strong>EL2620</strong> 2011<br />
Valve Characteristics<br />
[Figure: valve characteristics, flow vs. position: quick-opening, linear,<br />
and equal-percentage]<br />
<strong>Lecture</strong> 11 24
<strong>EL2620</strong> 2011<br />
<strong>Nonlinear</strong> Valve<br />
[Block diagram: PI controller, inverse valve model f̂ −1 , valve f, and<br />
process G0(s) in a feedback loop]<br />
[Figure: true valve characteristic f(u) and its model f̂(u)]<br />
<strong>Lecture</strong> 11 25<br />
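The idea of the inverse-model compensation can be sketched in a couple of lines. The concrete characteristic f(u) = u⁴ is an assumption for illustration only; the slides do not give a formula:<br />

```python
def f(u):
    # hypothetical valve characteristic (assumed for illustration)
    return u ** 4

def fhat_inv(c):
    # inverse of the valve model fhat(u) = u^4
    return c ** 0.25

# the compensated series connection c -> f(fhat_inv(c)) is linear in c,
# so the PI controller sees (approximately) a constant gain
compensated = [f(fhat_inv(c)) for c in (0.1, 0.5, 1.0, 2.0, 5.0)]
```

With a perfect model the composite map is exactly the identity; with model error f̂ ≠ f it is only approximately linear, which is usually still enough to keep the loop gain roughly constant.<br />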
<strong>EL2620</strong> 2011<br />
With gain scheduling:<br />
[Figure: responses y to setpoint steps uc at three operating levels<br />
(around 0.2–0.3, 1.0–1.1 and 5.0–5.1); the responses are similar at all levels]<br />
<strong>Lecture</strong> 11 27<br />
<strong>EL2620</strong> 2011<br />
Without gain scheduling:<br />
[Figure: responses y to setpoint steps uc at the same three operating<br />
levels; the response varies strongly with the operating level]<br />
Time<br />
<strong>Lecture</strong> 11 26<br />
<strong>EL2620</strong> 2011<br />
Flight <strong>Control</strong><br />
Pitch dynamics<br />
[Figure: aircraft with pitch rate q = θ̇, velocity V, angle of attack α,<br />
normal acceleration Nz and elevator deflection δe]<br />
<strong>Lecture</strong> 11 28<br />
<strong>EL2620</strong> 2011<br />
Flight <strong>Control</strong><br />
Operating conditions:<br />
[Figure: flight envelope, altitude (×1000 ft) vs. Mach number, with<br />
operating points 1–4]<br />
<strong>Lecture</strong> 11 29<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Determine if a nonlinear system is controllable<br />
• Apply gain scheduling to simple examples<br />
<strong>Lecture</strong> 11 31<br />
<strong>EL2620</strong> 2011<br />
The Pitch <strong>Control</strong> Channel<br />
[Block diagram: the pitch control channel; pitch stick, acceleration and<br />
pitch rate signals are filtered and A/D converted, and the gains (K SG,<br />
K DSE, K Q1, K QD) are scheduled on Mach number M, altitude H and<br />
indicated airspeed V IAS]<br />
<strong>Lecture</strong> 11 30
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 12<br />
• Optimal control<br />
<strong>Lecture</strong> 12 1<br />
<strong>EL2620</strong> 2011<br />
Optimal <strong>Control</strong> Problems<br />
Idea: formulate the control design problem as an optimization problem<br />
min over u(t) of J(x, u, t), subject to ẋ = f(t, x, u)<br />
+ provides a systematic design framework<br />
+ applicable to nonlinear problems<br />
+ can deal with constraints<br />
- difficult to formulate control objectives as a single objective<br />
function<br />
- determining the optimal controller can be hard<br />
<strong>Lecture</strong> 12 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Design controllers based on optimal control theory<br />
<strong>Lecture</strong> 12 2<br />
<strong>EL2620</strong> 2011<br />
Example—Boat in Stream<br />
Sail as far as possible in the x1 direction<br />
Speed of water: v(x2), with dv/dx2 = 1<br />
[Figure: boat in a stream; the water speed v(x2) increases with x2]<br />
Rudder angle control: u(t) ∈ U = {(u1,u2) : u1² + u2² = 1}<br />
max over u : [0,tf] → U of x1(tf)<br />
subject to ẋ1(t) = v(x2) + u1(t), ẋ2(t) = u2(t), x1(0) = x2(0) = 0<br />
<strong>Lecture</strong> 12 4
<strong>EL2620</strong> 2011<br />
Example—Resource Allocation<br />
Maximization of stored profit<br />
x(t) ∈ [0, ∞) production rate<br />
u(t) ∈ [0, 1] portion of x reinvested<br />
1 − u(t) portion of x stored<br />
γu(t)x(t) change of production rate (γ > 0)<br />
[1 − u(t)]x(t) amount of stored profit<br />
max over u : [0,tf] → [0,1] of ∫ from 0 to tf of [1 − u(t)]x(t) dt<br />
subject to ẋ(t) = γu(t)x(t), x(0) = x0 > 0<br />
<strong>Lecture</strong> 12 5<br />
<strong>EL2620</strong> 2011<br />
Optimal <strong>Control</strong> Problem<br />
Standard form:<br />
min over u : [0,tf] → U of ∫ from 0 to tf of L(x(t),u(t)) dt + φ(x(tf))<br />
subject to ẋ(t) = f(x(t),u(t)), x(0) = x0<br />
Remarks:<br />
• U ⊂ R m set of admissible control<br />
• Infinite dimensional optimization problem:<br />
Optimization over functions u :[0,tf] → U<br />
• Constraints on x from the dynamics<br />
• Final time tf fixed (free later)<br />
<strong>Lecture</strong> 12 7<br />
<strong>EL2620</strong> 2011<br />
Example—Minimal Curve Length<br />
Find the curve with minimal length between a given point and a line<br />
Curve: (t, x(t)) with x(0) = a. Line: vertical through (tf, 0)<br />
[Figure: curve x(t) from (0, a) to the vertical line at t = tf]<br />
min over u : [0,tf] → R of ∫ from 0 to tf of √(1 + u²(t)) dt<br />
subject to ẋ(t) = u(t), x(0) = a<br />
<strong>Lecture</strong> 12 6<br />
<strong>EL2620</strong> 2011<br />
Pontryagin’s Maximum Principle<br />
Theorem: Introduce the Hamiltonian function<br />
H(x, u, λ) =L(x, u)+λ T f(x, u)<br />
Suppose the optimal control problem above has the solution<br />
u ∗ :[0,tf] → U and x ∗ :[0,tf] → R n . Then,<br />
min over u ∈ U of H(x∗(t), u, λ(t)) = H(x∗(t), u∗(t), λ(t)), ∀t ∈ [0,tf]<br />
where λ(t) solves the adjoint equation<br />
˙λ(t) = −(∂H/∂x) T (x∗(t), u∗(t), λ(t)), λ(tf) = (∂φ/∂x) T (x∗(tf))<br />
Moreover, the optimal control is given by<br />
u∗(t) = arg min over u ∈ U of H(x∗(t), u, λ(t))<br />
<strong>Lecture</strong> 12 8
<strong>EL2620</strong> 2011<br />
Remarks<br />
• See textbook, e.g., Glad and Ljung, for proof. The outline is simply<br />
to note that every change of u(t) from the optimal u∗(t) must<br />
increase the criterion. Then perform a clever Taylor expansion.<br />
• Pontryagin’s Maximum Principle provides a necessary condition:<br />
there may exist many solutions, or none<br />
(cf., min over u : [0,1] → R of x(1), ẋ = u, x(0) = 0)<br />
• The Maximum Principle provides all possible candidates.<br />
• Solution involves 2n ODE’s with boundary conditions x(0) = x0<br />
and λ(tf) =∂φ T /∂x(x ∗ (tf)). Often hard to solve explicitly.<br />
• “maximum” is due to Pontryagin’s original formulation<br />
<strong>Lecture</strong> 12 9<br />
<strong>EL2620</strong> 2011<br />
Optimal control<br />
u∗(t) = arg min over u1² + u2² = 1 of λ1(t)(v(x∗2(t)) + u1) + λ2(t)u2<br />
= arg min over u1² + u2² = 1 of λ1(t)u1 + λ2(t)u2<br />
Hence,<br />
u1(t) = −λ1(t)/√(λ1²(t) + λ2²(t)), u2(t) = −λ2(t)/√(λ1²(t) + λ2²(t))<br />
or<br />
u1(t) = 1/√(1 + (t − tf)²), u2(t) = (tf − t)/√(1 + (t − tf)²)<br />
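The benefit of the optimal rudder policy can be checked by simulation; a sketch assuming v(x2) = x2 (consistent with dv/dx2 = 1 and v(0) = 0) and tf = 1:<br />

```python
import math

def sail(policy, tf=1.0, dt=1e-4):
    x1 = x2 = 0.0
    for k in range(int(tf / dt)):
        t = k * dt
        u1, u2 = policy(t, tf)
        x1 += dt * (x2 + u1)    # v(x2) = x2 assumed
        x2 += dt * u2
    return x1

def optimal(t, tf):
    s = math.sqrt(1.0 + (t - tf) ** 2)
    return 1.0 / s, (tf - t) / s

def straight(t, tf):
    # naive policy: steer straight downstream the whole time
    return 1.0, 0.0

x1_opt = sail(optimal)
x1_straight = sail(straight)
```

The optimal policy first steers up into the faster stream and then turns downstream, and beats sailing straight ahead.<br />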
<strong>Lecture</strong> 12 11<br />
<strong>EL2620</strong> 2011<br />
Example—Boat in Stream (cont’d)<br />
Hamiltonian satisfies<br />
H = λ T f = λ1(v(x2) + u1) + λ2u2, φ(x) = −x1<br />
∂H/∂x = ( 0 λ1 )<br />
Adjoint equations<br />
˙λ1(t) = 0, λ1(tf) = −1<br />
˙λ2(t) = −λ1(t), λ2(tf) = 0<br />
have solution<br />
λ1(t) = −1, λ2(t) = t − tf<br />
<strong>Lecture</strong> 12 10<br />
<strong>EL2620</strong> 2011<br />
Example—Resource Allocation (cont’d)<br />
min over u : [0,tf] → [0,1] of ∫ from 0 to tf of [u(t) − 1]x(t) dt<br />
subject to ẋ(t) = γu(t)x(t), x(0) = x0<br />
Hamiltonian satisfies<br />
H = L + λ T f = (u − 1)x + λγux<br />
Adjoint equation<br />
˙λ(t) = 1 − u∗(t) − λ(t)γu∗(t), λ(tf) = 0<br />
<strong>Lecture</strong> 12 12
<strong>EL2620</strong> 2011<br />
Optimal control<br />
u∗(t) = arg min over u ∈ [0,1] of (u − 1)x∗(t) + λ(t)γux∗(t)<br />
= arg min over u ∈ [0,1] of u(1 + λ(t)γ), (since x∗(t) > 0)<br />
= 0 if λ(t) ≥ −1/γ, and 1 if λ(t) < −1/γ<br />
For t ≈ tf , we have u∗(t) = 0 (why?) and thus ˙λ(t) = 1.<br />
Hence λ(t) = t − tf near tf , so u∗(t) = 1 for t < tf − 1/γ:<br />
reinvest everything early, then store the profit over the final 1/γ time units<br />
<strong>EL2620</strong> 2011<br />
History—Calculus of Variations<br />
• Brachistochrone (shortest time) problem (1696): Find the<br />
(frictionless) curve that takes a particle from A to B in shortest<br />
time<br />
dt = ds/v = √(dx² + dy²)/v = (√(1 + y′(x)²)/√(2gy(x))) dx<br />
Minimize<br />
J(y) = ∫ from A to B of √(1 + y′(x)²)/√(2gy(x)) dx<br />
Solved by John and James Bernoulli, Newton, l’Hospital<br />
• Find the curve enclosing largest area (Euler)<br />
<strong>Lecture</strong> 12 17<br />
<strong>EL2620</strong> 2011<br />
Goddard’s Rocket Problem (1910)<br />
How to send a rocket as high up in the air as possible?<br />
d/dt ( v h m ) T = ( (u − D)/m − g, v, −γu ) T<br />
(v(0), h(0), m(0)) = (0, 0, m0), g, γ > 0<br />
u motor force, D = D(v, h) air resistance<br />
Constraints: 0 ≤ u ≤ umax and m(tf) = m1 (empty)<br />
Optimization criterion: max over u of h(tf)<br />
<strong>Lecture</strong> 12 19<br />
<strong>EL2620</strong> 2011<br />
History—Optimal <strong>Control</strong><br />
• The space race (Sputnik, 1957)<br />
• Pontryagin’s Maximum Principle (1956)<br />
• Bellman’s Dynamic Programming (1957)<br />
• Huge influence on engineering and other sciences:<br />
– Robotics—trajectory generation<br />
– Aeronautics—satellite orbits<br />
– Physics—Snell’s law, conservation laws<br />
– Finance—portfolio theory<br />
<strong>Lecture</strong> 12 18<br />
<strong>EL2620</strong> 2011<br />
Generalized form:<br />
min over u : [0,tf] → U of ∫ from 0 to tf of L(x(t),u(t)) dt + φ(x(tf))<br />
subject to ẋ(t) = f(x(t),u(t)), x(0) = x0, ψ(x(tf)) = 0<br />
Note the differences compared to the standard form:<br />
• End time tf is free<br />
• Final state is constrained: ψ(x(tf)) = x3(tf) − m1 = 0<br />
<strong>Lecture</strong> 12 20
<strong>EL2620</strong> 2011<br />
Solution to Goddard’s Problem<br />
Goddard’s problem is on generalized form with<br />
x =(v, h, m) T , L ≡ 0, φ(x) =−x2, ψ(x) =x3 − m1<br />
D(v, h) ≡ 0:<br />
• Easy: let u(t) =umax until m(t) =m1<br />
• Burn fuel as fast as possible, because it costs energy to lift it<br />
D(v, h) ≢ 0:<br />
• Hard: e.g., it can be optimal to have low speed when air<br />
resistance is high, in order to burn fuel at higher level<br />
• Took 50 years before a complete solution was presented<br />
<strong>Lecture</strong> 12 21<br />
<strong>EL2620</strong> 2011<br />
˙λ(t) = −(∂H/∂x) T (x∗(t), u∗(t), λ(t), n0)<br />
λ T (tf) = n0 (∂φ/∂x)(tf , x∗(tf)) + µ T (∂ψ/∂x)(tf , x∗(tf))<br />
H(x∗(tf), u∗(tf), λ(tf), n0) = −n0 (∂φ/∂t)(tf , x∗(tf)) − µ T (∂ψ/∂t)(tf , x∗(tf))<br />
Remarks:<br />
• tf may be a free variable<br />
• With free tf and time-invariant φ, ψ: H(x∗(tf), u∗(tf), λ(tf), n0) = 0<br />
• ψ defines end point constraints<br />
<strong>Lecture</strong> 12 23<br />
<strong>EL2620</strong> 2011<br />
General Pontryagin’s Maximum Principle<br />
Theorem: Suppose u ∗ :[0,tf] → U and x ∗ :[0,tf] → R n are<br />
solutions to<br />
min_{u:[0,tf]→U} ∫₀^tf L(x(t), u(t)) dt + φ(tf, x(tf))<br />
ẋ(t) =f(x(t),u(t)), x(0) = x0<br />
ψ(tf,x(tf)) = 0<br />
Then, there exists n0 ≥ 0, µ ∈ Rⁿ such that (n0, µᵀ) ≠ 0 and<br />
min_{u∈U} H(x∗(t), u, λ(t), n0) = H(x∗(t), u∗(t), λ(t), n0), t ∈ [0, tf]<br />
where<br />
H(x, u, λ, n0) = n0 L(x, u) + λᵀ f(x, u)<br />
<strong>Lecture</strong> 12 22<br />
<strong>EL2620</strong> 2011<br />
Example—Minimum Time <strong>Control</strong><br />
Bring the states of the double integrator to the origin as fast as possible<br />
min_{u:[0,tf]→[−1,1]} ∫₀^tf 1 dt = min_{u:[0,tf]→[−1,1]} tf<br />
ẋ1(t) = x2(t), ẋ2(t) = u(t)<br />
ψ(x(tf)) = (x1(tf), x2(tf))ᵀ = (0, 0)ᵀ<br />
Optimal control is the bang-bang control<br />
u∗(t) = arg min_{u∈[−1,1]} [1 + λ1(t)x2∗(t) + λ2(t)u]<br />
= 1 if λ2(t) < 0, and −1 if λ2(t) ≥ 0<br />
<strong>Lecture</strong> 12 24
<strong>EL2620</strong> 2011<br />
Adjoint equations λ̇1(t) = 0, λ̇2(t) = −λ1(t) give<br />
λ1(t) = c1, λ2(t) = c2 − c1t<br />
With u(t) = ζ = ±1, we have<br />
x1(t) = x1(0) + x2(0)t + ζt²/2<br />
x2(t) = x2(0) + ζt<br />
Eliminating t gives the curves<br />
x1(t) ± x2(t)²/2 = const<br />
These define the switch curve, where the optimal control switches<br />
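The switch-curve solution can be checked numerically. The following forward-Euler sketch (not from the course material) simulates the time-optimal feedback u = −sgn(σ) with σ = x1 + x2|x2|/2, i.e. switching on the curves x1 = ∓x2²/2, from x(0) = (1, 0):

```python
# Time-optimal feedback for the double integrator (illustrative sketch):
# u = -sgn(sigma), sigma = x1 + x2*|x2|/2 encodes the switch curve.
def sgn(s):
    return (s > 0) - (s < 0)

def simulate(x1=1.0, x2=0.0, T=2.2, dt=1e-4):
    t = 0.0
    while t < T:
        u = -float(sgn(x1 + x2 * abs(x2) / 2.0))
        x1, x2 = x1 + dt * x2, x2 + dt * u   # x1' = x2, x2' = u
        t += dt
    return x1, x2

x1f, x2f = simulate()   # from (1, 0) the minimum time is tf = 2
```

The trajectory first uses u = −1, hits the switch curve at x = (0.5, −1), then rides it to the origin with u = +1.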
<strong>Lecture</strong> 12 25<br />
<strong>EL2620</strong> 2011<br />
Linear Quadratic <strong>Control</strong><br />
min_{u:[0,∞)→Rᵐ} ∫₀^∞ (xᵀQx + uᵀRu) dt<br />
with<br />
ẋ = Ax + Bu<br />
has optimal solution<br />
u = −Lx<br />
where L = R⁻¹BᵀS and S > 0 is the solution to the algebraic Riccati equation<br />
SA + AᵀS + Q − SBR⁻¹BᵀS = 0<br />
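As a minimal numerical sketch (assuming `numpy` and `scipy` are available), the Riccati equation can be solved with `scipy.linalg.solve_continuous_are`:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """LQ state feedback: S solves SA + A^T S + Q - S B R^{-1} B^T S = 0,
    and the optimal gain is L = R^{-1} B^T S."""
    S = solve_continuous_are(A, B, Q, R)
    L = np.linalg.solve(R, B.T @ S)
    return L, S

# Double integrator x1' = x2, x2' = u with Q = I, R = 1 (example choice);
# for this case the closed-form answer is L = [1, sqrt(3)].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
L, S = lqr(A, B, np.eye(2), np.array([[1.0]]))
```

Since the LQ controller is stabilizing, the eigenvalues of A − BL have negative real parts.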
<strong>Lecture</strong> 12 27<br />
<strong>EL2620</strong> 2011<br />
Reference Generation using Optimal <strong>Control</strong><br />
• Optimal control problem makes no distinction between open-loop<br />
control u ∗ (t) and closed-loop control u ∗ (t, x).<br />
• We may use the optimal open-loop solution u ∗ (t) as the<br />
reference value to a linear regulator, which keeps the system<br />
close to the wanted trajectory<br />
• Efficient design method for nonlinear problems<br />
<strong>Lecture</strong> 12 26<br />
<strong>EL2620</strong> 2011<br />
Properties of LQ <strong>Control</strong><br />
• Stabilizing<br />
• Closed-loop system stable with u = −α(t)Lx for<br />
α(t) ∈ [1/2, ∞) (infinite gain margin)<br />
• Phase margin 60 degrees<br />
If x is not measurable, one may use a Kalman filter to estimate it; this<br />
leads to linear quadratic Gaussian (LQG) control.<br />
• But then the closed-loop system may have arbitrarily poor robustness!<br />
(Doyle, 1978)<br />
<strong>Lecture</strong> 12 28
<strong>EL2620</strong> 2011<br />
Tetra Pak Milk Race<br />
Move milk in minimum time without spilling<br />
[Grundelius & Bernhardsson, 1999]<br />
<strong>Lecture</strong> 12 29<br />
<strong>EL2620</strong> 2011<br />
Pros & Cons for Optimal <strong>Control</strong><br />
+ Systematic design procedure<br />
+ Applicable to nonlinear control problems<br />
+ Captures limitations (as optimization constraints)<br />
− Hard to find suitable criteria<br />
− Hard to solve the equations that give optimal controller<br />
<strong>Lecture</strong> 12 31<br />
<strong>EL2620</strong> 2011<br />
Given the dynamics of the system and the maximum slosh φ = 0.63, solve<br />
min_{u:[0,tf]→[−10,10]} ∫₀^tf 1 dt, where u is the acceleration.<br />
[Figure: optimal acceleration profile and resulting slosh over 0–0.4 s]<br />
Optimal time = 375 ms, TetraPak = 540 ms<br />
<strong>Lecture</strong> 12 30<br />
<strong>EL2620</strong> 2011<br />
SF2852 Optimal <strong>Control</strong> Theory<br />
• Period 3, 7.5 credits<br />
• Optimization and Systems Theory<br />
http://www.math.kth.se/optsyst/<br />
Dynamic Programming: Discrete & continuous; Principle of<br />
optimality; Hamilton-Jacobi-Bellman equation<br />
Pontryagin’s Maximum principle: Main results; Special cases such<br />
as time optimal control and LQ control<br />
Numerical Methods: Numerical solution of optimal control problems<br />
Applications: Aeronautics, Robotics, Process <strong>Control</strong>,<br />
Bioengineering, Economics, Logistics<br />
<strong>Lecture</strong> 12 32<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should be able to<br />
• Design controllers based on optimal control theory for<br />
– Standard form<br />
– Generalized form<br />
• Understand possibilities and limitations of optimal control<br />
<strong>Lecture</strong> 12 33
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 13<br />
• Fuzzy logic and fuzzy control<br />
• Artificial neural networks<br />
Some slides copied from K.-E. Årzén and M. Johansson<br />
<strong>Lecture</strong> 13 1<br />
<strong>EL2620</strong> 2011<br />
Fuzzy <strong>Control</strong><br />
• Many plants are manually controlled by experienced operators<br />
• Transferring process knowledge to a control algorithm is difficult<br />
Idea:<br />
• Model operator’s control actions (instead of the plant)<br />
• Implement as rules (instead of as differential equations)<br />
Example of a rule:<br />
IF Speed is High AND Traffic is Heavy<br />
THEN Reduce Gas A Bit<br />
<strong>Lecture</strong> 13 3<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should<br />
• understand the basics of fuzzy logic and fuzzy controllers<br />
• understand simple neural networks<br />
<strong>Lecture</strong> 13 2<br />
<strong>EL2620</strong> 2011<br />
Model <strong>Control</strong>ler Instead of Plant<br />
Conventional control design<br />
Model plant P → Analyze feedback → Synthesize controller C →<br />
Implement control algorithm<br />
Fuzzy control design<br />
Model manual control → Implement control rules<br />
[Block diagram: signals r, e, u, y in a standard feedback loop with controller C and plant P]<br />
<strong>Lecture</strong> 13 4
<strong>EL2620</strong> 2011<br />
Fuzzy Set Theory<br />
Specify how well an object satisfies a (vague) description<br />
Conventional set theory: x ∈ A or x ∉ A<br />
Fuzzy set theory: x ∈ A to a certain degree µA(x)<br />
Membership function:<br />
µA : Ω → [0, 1] expresses the degree x belongs to A<br />
[Figure: membership functions µC (Cold) and µW (Warm) versus x, ramping between 10 and 25]<br />
A fuzzy set is defined as (A, µA)<br />
<strong>Lecture</strong> 13 5<br />
<strong>EL2620</strong> 2011<br />
Fuzzy Logic<br />
How to calculate with fuzzy sets (A, µA)?<br />
Conventional logic:<br />
AND: A ∩ B<br />
OR: A ∪ B<br />
NOT: A′<br />
Fuzzy logic:<br />
AND: µA∩B(x) = min(µA(x), µB(x))<br />
OR: µA∪B(x) = max(µA(x), µB(x))<br />
NOT: µA′(x) = 1 − µA(x)<br />
This defines logic expressions such as X AND Y OR Z<br />
Fuzzy logic mimics human linguistic (approximate) reasoning [Zadeh, 1965]<br />
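The min/max/complement operations are easy to state in code. A sketch with ramp membership functions matching the Cold/Warm example (breakpoints 10 and 25 read off the figure):

```python
# Fuzzy set operations (min/max/complement) with illustrative ramp
# membership functions for Cold and Warm.
def mu_cold(x):
    return min(1.0, max(0.0, (25.0 - x) / 15.0))

def mu_warm(x):
    return min(1.0, max(0.0, (x - 10.0) / 15.0))

def f_and(a, b):          # fuzzy AND: min
    return min(a, b)

def f_or(a, b):           # fuzzy OR: max
    return max(a, b)

def f_not(a):             # fuzzy NOT: complement
    return 1.0 - a

cold, warm = mu_cold(15.0), mu_warm(15.0)   # 2/3 and 1/3, as in the example
```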
<strong>Lecture</strong> 13 7<br />
<strong>EL2620</strong> 2011<br />
Example<br />
Q1: Is the temperature x = 15 cold?<br />
A1: It is quite cold, since µC(15) = 2/3.<br />
Q2: Is x = 15 warm?<br />
A2: It is not really warm, since µW(15) = 1/3.<br />
[Figure: µC and µW evaluated at x = 15]<br />
<strong>Lecture</strong> 13 6<br />
<strong>EL2620</strong> 2011<br />
Example<br />
Q1: Is it cold AND warm? Q2: Is it cold OR warm?<br />
[Figure: membership functions µC∩W (left) and µC∪W (right)]<br />
<strong>Lecture</strong> 13 8
<strong>EL2620</strong> 2011<br />
Fuzzy <strong>Control</strong> System<br />
[Block diagram: r → Fuzzy Controller → u → Plant → y]<br />
r, y, u : [0, ∞) → R are conventional (crisp) signals<br />
Fuzzy controller is a nonlinear mapping from y (and r) to u<br />
<strong>Lecture</strong> 13 9<br />
<strong>EL2620</strong> 2011<br />
Fuzzifier<br />
Fuzzy set evaluation of input y<br />
Example<br />
y = 15: µC(15) = 2/3 and µW(15) = 1/3<br />
[Figure: µC and µW evaluated at y = 15]<br />
<strong>Lecture</strong> 13 11<br />
<strong>EL2620</strong> 2011<br />
Fuzzy <strong>Control</strong>ler<br />
[Block diagram: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u]<br />
Fuzzifier: Fuzzy set evaluation of y (and r)<br />
Fuzzy Inference: Fuzzy set calculations<br />
Defuzzifier: Map fuzzy set to u<br />
Fuzzifier and defuzzifier act as interfaces to the crisp signals<br />
<strong>Lecture</strong> 13 10<br />
<strong>EL2620</strong> 2011<br />
Fuzzy Inference<br />
Fuzzy Inference:<br />
1. Calculate degree of fulfillment for each rule<br />
2. Calculate fuzzy output of each rule<br />
3. Aggregate rule outputs<br />
Examples of fuzzy rules:<br />
Rule 1: IF y is Cold (step 1) THEN u is High (step 2)<br />
Rule 2: IF y is Warm (step 1) THEN u is Low (step 2)<br />
<strong>Lecture</strong> 13 12
<strong>EL2620</strong> 2011<br />
1. Calculate degree of fulfillment for rules<br />
<strong>Lecture</strong> 13 13<br />
<strong>EL2620</strong> 2011<br />
3. Aggregate rule outputs<br />
<strong>Lecture</strong> 13 15<br />
<strong>EL2620</strong> 2011<br />
2. Calculate fuzzy output of each rule<br />
Note that ”µ” is standard fuzzy-logic nomenclature for ”truth value”:<br />
<strong>Lecture</strong> 13 14<br />
<strong>EL2620</strong> 2011<br />
Defuzzifier<br />
<strong>Lecture</strong> 13 16
<strong>EL2620</strong> 2011<br />
Fuzzy <strong>Control</strong>ler—Summary<br />
[Block diagram: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u]<br />
Fuzzifier: Fuzzy set evaluation of y (and r)<br />
Fuzzy Inference: Fuzzy set calculations<br />
1. Calculate degree of fulfillment for each rule<br />
2. Calculate fuzzy output of each rule<br />
3. Aggregate rule outputs<br />
Defuzzifier: Map fuzzy set to u<br />
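The whole fuzzifier/inference/defuzzifier pipeline can be sketched end-to-end. The rule base and the output sets Low/High on u ∈ [0, 1] below are illustrative assumptions, and centroid defuzzification is one common choice:

```python
# End-to-end fuzzy controller sketch.
#   Rule 1: IF y is Cold THEN u is High
#   Rule 2: IF y is Warm THEN u is Low
def mu_cold(y): return min(1.0, max(0.0, (25.0 - y) / 15.0))
def mu_warm(y): return min(1.0, max(0.0, (y - 10.0) / 15.0))
def mu_low(u):  return max(0.0, 1.0 - u)
def mu_high(u): return max(0.0, u)

def fuzzy_controller(y, n=1000):
    w1, w2 = mu_cold(y), mu_warm(y)          # 1. degree of fulfillment
    num = den = 0.0
    for j in range(n + 1):                   # centroid defuzzification
        u = j / n
        # 2. clip each rule's output set at its firing degree,
        # 3. aggregate the rule outputs with max
        mu = max(min(w1, mu_high(u)), min(w2, mu_low(u)))
        num += mu * u
        den += mu
    return num / den

u = fuzzy_controller(15.0)   # mostly cold, so u ends up closer to High
```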
<strong>Lecture</strong> 13 17<br />
<strong>EL2620</strong> 2011<br />
<strong>Lecture</strong> 13 19<br />
<strong>EL2620</strong> 2011<br />
Example—Fuzzy <strong>Control</strong> of Steam Engine<br />
http://isc.faqs.org/docs/air/ttfuzzy.html<br />
<strong>Lecture</strong> 13 18<br />
<strong>EL2620</strong> 2011<br />
<strong>Lecture</strong> 13 20
<strong>EL2620</strong> 2011<br />
<strong>Lecture</strong> 13 21<br />
<strong>EL2620</strong> 2011<br />
Rule-Based View of Fuzzy <strong>Control</strong><br />
<strong>Lecture</strong> 13 23<br />
<strong>EL2620</strong> 2011<br />
<strong>Lecture</strong> 13 22<br />
<strong>EL2620</strong> 2011<br />
<strong>Nonlinear</strong> View of Fuzzy <strong>Control</strong><br />
<strong>Lecture</strong> 13 24
<strong>EL2620</strong> 2011<br />
Pros and Cons of Fuzzy <strong>Control</strong><br />
Advantages<br />
• User-friendly way to design nonlinear controllers<br />
• Explicit representation of operator (process) knowledge<br />
• Intuitive for non-experts in conventional control<br />
Disadvantages<br />
• Limited analysis and synthesis<br />
• Sometimes hard to combine with classical control<br />
• Not obvious how to include dynamics in controller<br />
Fuzzy control is a way to obtain a class of nonlinear controllers<br />
<strong>Lecture</strong> 13 25<br />
<strong>EL2620</strong> 2011<br />
Neurons<br />
[Figure: a biological brain neuron, and an artificial neuron with inputs x1, …, xn, weights w1, …, wn, bias b, summation Σ and nonlinearity φ(·) producing output y]<br />
<strong>Lecture</strong> 13 27<br />
<strong>EL2620</strong> 2011<br />
Neural Networks<br />
• How does the brain work?<br />
• A network of computing components (neurons)<br />
<strong>Lecture</strong> 13 26<br />
<strong>EL2620</strong> 2011<br />
Model of a Neuron<br />
Inputs: x1, x2, …, xn<br />
Weights: w1, w2, …, wn<br />
Bias: b<br />
Nonlinearity: φ(·)<br />
Output: y = φ(b + ∑ᵢ₌₁ⁿ wᵢxᵢ)<br />
[Figure: the neuron as a weighted sum with bias b followed by φ(·)]<br />
<strong>Lecture</strong> 13 28
<strong>EL2620</strong> 2011<br />
A Simple Neural Network<br />
Neural network consisting of six neurons:<br />
[Figure: input layer (u1, u2, u3) → hidden layer → output layer (y)]<br />
Represents a nonlinear mapping from inputs to outputs<br />
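A neuron and a one-hidden-layer network follow directly from the formula y = φ(b + Σᵢ wᵢxᵢ); here φ = tanh is an assumed (common) choice and all weights are arbitrary illustrative values:

```python
import math

def neuron(x, w, b):
    """y = phi(b + sum_i w_i x_i) with phi = tanh."""
    return math.tanh(b + sum(wi * xi for wi, xi in zip(w, x)))

def network(u, hidden, output):
    """One hidden layer: `hidden` is a list of (weights, bias) pairs,
    `output` a single (weights, bias) pair."""
    h = [neuron(u, w, b) for (w, b) in hidden]
    w_out, b_out = output
    return neuron(h, w_out, b_out)

# Tiny network: 3 inputs, 2 hidden neurons, 1 output
hidden = [([0.5, -1.0, 0.3], 0.1), ([1.0, 0.2, -0.7], -0.2)]
output = ([1.5, -0.8], 0.05)
y = network([0.2, 0.4, -0.1], hidden, output)   # a value in (-1, 1)
```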
<strong>Lecture</strong> 13 29<br />
<strong>EL2620</strong> 2011<br />
Success Stories<br />
Fuzzy control:<br />
• Zadeh (1965)<br />
• Complex problems but with possible linguistic controls<br />
• Applications took off in mid 70’s<br />
– Cement kilns, washing machines, vacuum cleaners<br />
Artificial neural networks:<br />
• McCulloch & Pitts (1943), Minsky (1951)<br />
• Complex problems with unknown and highly nonlinear structure<br />
• Applications took off in mid 80’s<br />
– Pattern recognition (e.g., speech, vision), data classification<br />
<strong>Lecture</strong> 13 31<br />
<strong>EL2620</strong> 2011<br />
Neural Network Design<br />
1. How many hidden layers?<br />
2. How many neurons in each layer?<br />
3. How to choose the weights?<br />
The choice of weights is often done adaptively through learning<br />
<strong>Lecture</strong> 13 30<br />
<strong>EL2620</strong> 2011<br />
Today’s Goal<br />
You should<br />
• understand the basics of fuzzy logic and fuzzy controllers<br />
• understand simple neural networks<br />
<strong>Lecture</strong> 13 32
<strong>EL2620</strong> 2011<br />
Next <strong>Lecture</strong><br />
• <strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong> revisited<br />
• Spring courses in control<br />
• Master thesis projects<br />
• PhD thesis projects<br />
<strong>Lecture</strong> 13 33
<strong>EL2620</strong> 2011<br />
<strong>EL2620</strong> <strong>Nonlinear</strong> <strong>Control</strong><br />
<strong>Lecture</strong> 14<br />
• Summary and repetition<br />
• Spring courses in control<br />
• Master thesis projects<br />
<strong>Lecture</strong> 14 1<br />
<strong>EL2620</strong> 2011<br />
Question 1<br />
What’s on the exam?<br />
• <strong>Nonlinear</strong> models: equilibria, phase portraits, linearization and stability<br />
• Lyapunov stability (local and global), LaSalle<br />
• Circle Criterion, Small Gain Theorem, Passivity Theorem<br />
• Compensating static nonlinearities<br />
• Describing functions<br />
• Sliding modes, equivalent controls<br />
• Lyapunov based design: back-stepping<br />
• Exact feedback linearization, input–output linearization, zero dynamics<br />
• <strong>Nonlinear</strong> controllability<br />
• Optimal control<br />
<strong>Lecture</strong> 14 3<br />
<strong>EL2620</strong> 2011<br />
Exam<br />
• Regular written exam (in English) with five problems<br />
• Sign up on course homepage<br />
• You may bring lecture <strong>notes</strong>, Glad & Ljung “Reglerteknik”, and<br />
TEFYMA or BETA<br />
(No other material: textbooks, exercises, calculators, etc. Any<br />
other basic control book must be approved by me before the<br />
exam.)<br />
• See homepage for old exams<br />
<strong>Lecture</strong> 14 2<br />
<strong>EL2620</strong> 2011<br />
Question 2<br />
What design method should I use in practice?<br />
The answer is highly problem dependent. Possible (learning)<br />
approach:<br />
• Start with the simplest:<br />
– linear methods (loop shaping, state feedback, . . . )<br />
• Evaluate:<br />
– strong nonlinearities (under feedback!)?<br />
– varying operating conditions?<br />
– analyze and simulate with nonlinear model<br />
• Some nonlinearities to compensate for?<br />
– saturations, valves etc<br />
• Is the system generically nonlinear? E.g., ẋ = xu<br />
<strong>Lecture</strong> 14 4
<strong>EL2620</strong> 2011<br />
Question 3<br />
Can a system be proven stable with the Small Gain Theorem and<br />
unstable with the Circle Criterion?<br />
• No, the Small Gain Theorem, Passivity Theorem and Circle<br />
Criterion all provide only sufficient conditions for stability<br />
• But, if one method does not prove stability, another one may.<br />
• Since they do not provide necessary conditions for stability, none<br />
of them can be used to prove instability.<br />
<strong>Lecture</strong> 14 5<br />
<strong>EL2620</strong> 2011<br />
The Circle Criterion<br />
[Figure: sector condition k1y ≤ f(y) ≤ k2y, and the Nyquist curve of G(iω) relative to the circle through −1/k1 and −1/k2]<br />
Theorem: Consider a feedback loop with y = Gu and<br />
u = −f(y). Assume G(s) is stable and that<br />
k1 ≤ f(y)/y ≤ k2.<br />
If the Nyquist curve of G(s) stays on the correct side of the circle<br />
defined by the points −1/k1 and −1/k2, then the closed-loop<br />
system is BIBO stable.<br />
<strong>Lecture</strong> 14 7<br />
<strong>EL2620</strong> 2011<br />
Question 4<br />
Can you review the circle criterion? What about k1 < 0?<br />
<strong>EL2620</strong> 2011<br />
Question 5<br />
Please repeat antiwindup<br />
<strong>Lecture</strong> 14 9<br />
<strong>EL2620</strong> 2011<br />
Antiwindup—General State-Space Model<br />
[Block diagrams: the controller ẋc = F xc + Gy, u = Cxc + Dy (top), and its antiwindup version (bottom), where the integrator is driven by (F − KC)xc + (G − KD)y + Ku, with v = Cxc + Dy and u = sat(v)]<br />
Choose K such that F − KC has stable eigenvalues.<br />
<strong>Lecture</strong> 14 11<br />
<strong>EL2620</strong> 2011<br />
Tracking PID<br />
[Block diagrams: (a) PID controller (gain K, integral K/(sTi), derivative KTd s) followed by an actuator saturation v → u; (b) tracking antiwindup, where the signal es = u − v (from an actuator model) is fed back to the integrator through the gain 1/Tt]<br />
<strong>Lecture</strong> 14 10<br />
<strong>EL2620</strong> 2011<br />
Question 6<br />
Please repeat Lyapunov theory<br />
<strong>Lecture</strong> 14 12
<strong>EL2620</strong> 2011<br />
Stability Definitions<br />
An equilibrium point x = 0 of ẋ = f(x) is<br />
locally stable, if for every R > 0 there exists r > 0, such that<br />
‖x(0)‖ < r ⇒ ‖x(t)‖ < R, t ≥ 0<br />
locally asymptotically stable, if locally stable and, in addition, x(t) → 0<br />
Lyapunov’s theorem for global asymptotic stability: if there exists a C¹<br />
function V : Rⁿ → R such that<br />
(1) V(0) = 0<br />
(2) V(x) > 0 for all x ≠ 0<br />
(3) V̇(x) ≤ 0 for all x<br />
(4) V(x) → ∞ as ‖x‖ → ∞<br />
(5) the only solution of ẋ = f(x) such that V̇(x) = 0 is x(t) = 0<br />
for all t<br />
then x = 0 is globally asymptotically stable.<br />
<strong>Lecture</strong> 14 16
<strong>EL2620</strong> 2011<br />
LaSalle’s Invariant Set Theorem<br />
Theorem: Let Ω ⊂ Rⁿ be a bounded and closed set that is invariant<br />
with respect to<br />
ẋ = f(x).<br />
Let V : Rⁿ → R be a C¹ function such that V̇(x) ≤ 0 for x ∈ Ω.<br />
Let E be the set of points in Ω where V̇(x) = 0. If M is the largest<br />
invariant set in E, then every solution with x(0) ∈ Ω approaches M<br />
as t → ∞<br />
Remark: a compact set (bounded and closed) is obtained if we, e.g.,<br />
consider<br />
Ω = {x ∈ Rⁿ | V(x) ≤ c}<br />
and V is a positive definite function<br />
<strong>Lecture</strong> 14 17<br />
<strong>EL2620</strong> 2011<br />
Example: Pendulum with friction<br />
ẋ1 = x2, ẋ2 = −(g/l) sin x1 − (k/m) x2<br />
V(x) = (g/l)(1 − cos x1) + x2²/2 ⇒ V̇ = −(k/m) x2²<br />
• We cannot prove global asymptotic stability; why?<br />
• The set E = {(x1, x2) | V̇ = 0} is E = {(x1, x2) | x2 = 0}<br />
• The invariant points in E are given by ẋ1 = x2 = 0 and ẋ2 = 0.<br />
Thus, the largest invariant set in E is<br />
M = {(x1, x2) | x1 = kπ, x2 = 0}<br />
• The domain is compact if we consider<br />
Ω = {(x1, x2) ∈ R² | V(x) ≤ c}<br />
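The pendulum example can also be checked numerically. A forward-Euler sketch with assumed example parameters (g = 9.81, l = 1, k = 0.5, m = 1): V is nonincreasing along the trajectory and the state approaches the origin.

```python
import math

# Pendulum with friction, example parameter values (assumed for illustration)
g, l, k, m = 9.81, 1.0, 0.5, 1.0

def V(x1, x2):
    # Lyapunov function from the slide: V = (g/l)(1 - cos x1) + x2^2/2
    return g / l * (1.0 - math.cos(x1)) + 0.5 * x2 ** 2

x1, x2, dt = 1.0, 0.0, 1e-4
vals = [V(x1, x2)]
for _ in range(200000):   # simulate 20 s with forward Euler
    x1, x2 = x1 + dt * x2, x2 + dt * (-g / l * math.sin(x1) - k / m * x2)
    vals.append(V(x1, x2))
```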
<strong>Lecture</strong> 14 19<br />
<strong>EL2620</strong> 2011<br />
Relation to the Poincaré–Bendixson Theorem<br />
Poincaré–Bendixson: Any orbit of a continuous 2nd order system that<br />
stays in a compact region of the phase plane approaches its ω-limit<br />
set, which is either a fixed point, a periodic orbit, or several fixed<br />
points connected through homoclinic or heteroclinic orbits<br />
In particular, if the compact region does not contain any fixed point<br />
then the ω-limit set is a limit cycle<br />
<strong>Lecture</strong> 14 18<br />
<strong>EL2620</strong> 2011<br />
• If we, e.g., consider Ω : x1² + x2² ≤ 1, then<br />
M = {(x1, x2) | x1 = 0, x2 = 0} and we have proven<br />
asymptotic stability of the origin.<br />
<strong>Lecture</strong> 14 20
<strong>EL2620</strong> 2011<br />
Question 7<br />
Please repeat the most important facts about sliding modes.<br />
There are 3 essential parts you need to understand:<br />
1. The sliding manifold<br />
2. The sliding control<br />
3. The equivalent control<br />
<strong>Lecture</strong> 14 21<br />
<strong>EL2620</strong> 2011<br />
Example<br />
ẋ1 = x2(t)<br />
ẋ2 = x1(t)x2(t)+u(t)<br />
Choose S for desired behavior, e.g.,<br />
σ(x) = ax1 + x2 = 0 ⇒ ẋ1 = −a x1(t)<br />
Choose large a: fast convergence along sliding manifold<br />
<strong>Lecture</strong> 14 23<br />
<strong>EL2620</strong> 2011<br />
Step 1. The Sliding Manifold S<br />
Aim: we want to stabilize the equilibrium of the dynamic system<br />
ẋ = f(x) + g(x)u, x ∈ Rⁿ, u ∈ R<br />
Idea: use u to force the system onto a sliding manifold S of<br />
dimension n − 1 in finite time<br />
S = {x ∈ Rⁿ | σ(x) = 0}, σ(x) ∈ R<br />
and make S invariant<br />
If x ∈ R², then S is one-dimensional, i.e., a curve in the state plane (phase plane).<br />
<strong>Lecture</strong> 14 22<br />
<strong>EL2620</strong> 2011<br />
Step 2. The Sliding <strong>Control</strong>ler<br />
Use Lyapunov ideas to design u(x) such that S is an attracting<br />
invariant set<br />
Lyapunov function V(x) = σ²/2 yields V̇ = σσ̇<br />
For the 2nd order system ẋ1 = x2, ẋ2 = f(x) + g(x)u and<br />
σ = x1 + x2 we get<br />
V̇ = σ(x2 + f(x) + g(x)u) < 0 ⇐ u = −(f(x) + x2 + sgn(σ))/g(x)<br />
Example: f(x) = x1x2, g(x) = 1, σ = x1 + x2, yields<br />
u = −x1x2 − x2 − sgn(x1 + x2)<br />
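The example controller can be simulated to see both phases, reaching (σ driven to 0 in finite time) and sliding (x1 then decays along σ = 0). A forward-Euler sketch:

```python
# Sliding-mode example: x1' = x2, x2' = x1*x2 + u,
# with u = -x1*x2 - x2 - sgn(x1 + x2); here sigma' = -sgn(sigma).
def sgn(s):
    return (s > 0) - (s < 0)

x1, x2, dt = 1.0, 1.0, 1e-3
for _ in range(8000):                 # 8 s; sigma = x1 + x2 hits 0 at t = 2
    u = -x1 * x2 - x2 - sgn(x1 + x2)
    x1, x2 = x1 + dt * x2, x2 + dt * (x1 * x2 + u)
```

After the reaching phase the state chatters in a band of width ~dt around the manifold while x1 decays exponentially.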
<strong>Lecture</strong> 14 24
<strong>EL2620</strong> 2011<br />
Step 3. The Equivalent <strong>Control</strong><br />
When trajectory reaches sliding mode, i.e., x ∈ S, then u will chatter<br />
(high frequency switching).<br />
However, an equivalent control ueq(t) that keeps x(t) on S can be<br />
computed from σ̇ = 0 when σ = 0<br />
Example:<br />
σ̇ = ẋ1 + ẋ2 = x2 + x1x2 + ueq = 0 ⇒ ueq = −x2 − x1x2<br />
Thus, the sliding controller will take the system to the sliding manifold<br />
S in finite time, and the equivalent control will keep it on S.<br />
<strong>Lecture</strong> 14 25<br />
<strong>EL2620</strong> 2011<br />
Question 8<br />
Can you repeat backstepping?<br />
<strong>Lecture</strong> 14 27<br />
<strong>EL2620</strong> 2011<br />
Note!<br />
In previous years it has often been assumed that the sliding mode<br />
control is always of the form<br />
u = −sgn(σ)<br />
This is OK, but is not completely general (see example)<br />
<strong>Lecture</strong> 14 26<br />
<strong>EL2620</strong> 2011<br />
Backstepping Design<br />
We are concerned with finding a stabilizing control u(x) for the<br />
system<br />
ẋ = f(x, u)<br />
General Lyapunov control design: determine a <strong>Control</strong> Lyapunov<br />
function V(x) and a control u(x) so that<br />
V(x) > 0, V̇(x) < 0 for all x ≠ 0<br />
In this course we only consider f(x, u) with a special structure,<br />
namely strict feedback structure<br />
<strong>Lecture</strong> 14 28
<strong>EL2620</strong> 2011<br />
Strict Feedback Systems<br />
ẋ1 = f1(x1)+g1(x1)x2<br />
ẋ2 = f2(x1,x2)+g2(x1,x2)x3<br />
ẋ3 = f3(x1,x2,x3)+g3(x1,x2,x3)x4<br />
⋮<br />
ẋn = fn(x1,...,xn)+gn(x1,...,xn)u<br />
where gk ≠0<br />
Note: x1,...,xk do not depend on xk+2,...,xn.<br />
<strong>Lecture</strong> 14 29<br />
<strong>EL2620</strong> 2011<br />
The Backstepping Result<br />
Let V1(x1) be a <strong>Control</strong> Lyapunov Function for the system<br />
ẋ1 = f1(x1)+g1(x1)u<br />
with corresponding controller u = φ(x1).<br />
Then V2(x1, x2) = V1(x1) + (x2 − φ(x1))²/2 is a <strong>Control</strong><br />
Lyapunov Function for the system<br />
ẋ1 = f1(x1)+g1(x1)x2<br />
ẋ2 = f2(x1,x2)+u<br />
with corresponding controller<br />
u(x) = (dφ/dx1)(f1(x1) + g1(x1)x2) − (dV1/dx1) g1(x1) − (x2 − φ(x1)) − f2(x1, x2)<br />
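The backstepping result can be exercised on a small example. The system, V1 and φ below are illustrative choices (not from the course): ẋ1 = x1² + x2, ẋ2 = u, with V1 = x1²/2 and φ(x1) = −x1² − x1, so that ẋ1 = −x1 on x2 = φ(x1).

```python
# Backstepping controller for the illustrative strict-feedback system
# x1' = x1**2 + x2, x2' = u (f1 = x1**2, g1 = 1, f2 = 0).
def backstepping_u(x1, x2):
    phi = -x1 ** 2 - x1
    dphi = -2.0 * x1 - 1.0
    # u = (dphi/dx1)(f1 + g1*x2) - (dV1/dx1)*g1 - (x2 - phi) - f2
    return dphi * (x1 ** 2 + x2) - x1 - (x2 - phi)

x1, x2, dt = 1.0, 0.0, 1e-3
for _ in range(10000):                # 10 s of forward Euler
    u = backstepping_u(x1, x2)
    x1, x2 = x1 + dt * (x1 ** 2 + x2), x2 + dt * u
```

Along closed-loop trajectories V̇2 = −x1² − (x2 − φ)², so the state converges to the origin.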
<strong>Lecture</strong> 14 31<br />
<strong>EL2620</strong> 2011<br />
The Backstepping Idea<br />
Given a <strong>Control</strong> Lyapunov Function V1(x1), with corresponding<br />
control u = φ1(x1), for the system<br />
ẋ1 = f1(x1)+g1(x1)u<br />
find a <strong>Control</strong> Lyapunov function V2(x1,x2), with corresponding<br />
control u = φ2(x1,x2), for the system<br />
ẋ1 = f1(x1) + g1(x1)x2<br />
ẋ2 = f2(x1, x2) + u<br />
<strong>Lecture</strong> 14 30<br />
<strong>EL2620</strong> 2011<br />
Question 9<br />
Repeat backlash compensation<br />
<strong>Lecture</strong> 14 32
<strong>EL2620</strong> 2011<br />
Backlash Compensation<br />
• Deadzone<br />
• Linear controller design<br />
• Backlash inverse<br />
Linear controller design: Phase lead compensation<br />
[Block diagram: θref → (−, error e) → lead compensator K(1 + sT2)/(1 + sT1) → 1/(1 + sT) → θ̇in → 1/s → θin → backlash → θout]<br />
<strong>Lecture</strong> 14 33<br />
<strong>EL2620</strong> 2011<br />
• Choose compensation F(s) such that the intersection with the<br />
describing function is removed<br />
F(s) = K (1 + sT2)/(1 + sT1), with T1 = 0.5, T2 = 2.0:<br />
[Figure: Nyquist diagrams of the loop with and without the filter, and time responses of y and u with and without the filter]<br />
Oscillation removed!<br />
<strong>Lecture</strong> 14 34<br />
<strong>EL2620</strong> 2011<br />
Question 10<br />
Can you repeat linearization through high gain feedback?<br />
<strong>Lecture</strong> 14 35<br />
<strong>EL2620</strong> 2011<br />
Inverting <strong>Nonlinear</strong>ities<br />
Compensation of static nonlinearity through inversion:<br />
[Block diagram: feedback loop with controller F(s), inverse f̂⁻¹(·), nonlinearity f(·) and plant G(s) in series]<br />
Should be combined with feedback as in the figure!<br />
<strong>Lecture</strong> 14 36
<strong>EL2620</strong> 2011<br />
Remark: How to Obtain f −1 from f using<br />
Feedback<br />
[Block diagram: ê = v − f(u) drives an integrator u = (k/s)ê, with f(u) fed back]<br />
ê = v − f(u)<br />
If k > 0 is large and df/du > 0, then ê → 0 and<br />
0 = v − f(u) ⇔ f(u) = v ⇔ u = f⁻¹(v)<br />
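A sketch of this construction, with the illustrative monotone nonlinearity f(u) = u + u³ (my choice) and a forward-Euler integrator:

```python
# Realize u = f^{-1}(v) by integrating u' = k*(v - f(u)) with large k.
def f(u):
    return u + u ** 3       # monotone: df/du = 1 + 3u^2 > 0

v, k = 2.0, 50.0            # requested value and (large) integrator gain
u, dt = 0.0, 1e-4
for _ in range(20000):      # 2 s: the error v - f(u) is driven to zero
    u += dt * k * (v - f(u))
```

Here f(u) = 2 has the unique solution u = 1, which the loop converges to.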
<strong>Lecture</strong> 14 37<br />
<strong>EL2620</strong> 2011<br />
Question 12<br />
What about describing functions?<br />
<strong>Lecture</strong> 14 39<br />
<strong>EL2620</strong> 2011<br />
Question 11<br />
What should we know about input–output stability?<br />
You should understand and be able to derive/apply<br />
• System gain γ(S) = sup_{u∈L2} ‖y‖₂/‖u‖₂<br />
• BIBO stability<br />
• Small Gain Theorem<br />
• Circle Criterion<br />
• Passivity Theorem<br />
<strong>Lecture</strong> 14 38<br />
<strong>EL2620</strong> 2011<br />
Idea Behind Describing Function Method<br />
[Block diagram: feedback loop with nonlinearity N.L. followed by G(s)]<br />
e(t) =A sin ωt gives<br />
u(t) = ∑_{n=1}^∞ √(an² + bn²) sin[nωt + arctan(an/bn)]<br />
If |G(inω)| ≪ |G(iω)| for n ≥ 2, then n =1suffices, so that<br />
y(t) ≈ |G(iω)| √(a1² + b1²) sin[ωt + arctan(a1/b1) + arg G(iω)]<br />
<strong>Lecture</strong> 14 40
<strong>EL2620</strong> 2011<br />
Definition of Describing Function<br />
The describing function is<br />
N(A, ω) = (b1(ω) + i a1(ω))/A<br />
[Figure: the nonlinearity N.L. maps e(t) to u(t); the describing function N(A, ω) maps e(t) to the first harmonic û1(t)]<br />
If G is low pass and a0 = 0, then<br />
û1(t) = |N(A, ω)| A sin[ωt + arg N(A, ω)] ≈ u(t)<br />
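The Fourier coefficients a1, b1 can be evaluated numerically for any static nonlinearity. As a sketch, the ideal relay reproduces the classical result N(A) = 4/(πA):

```python
import math

def describing_function(nl, A, n=10000):
    """Numerically evaluate N(A) = (b1 + i*a1)/A for a static nonlinearity nl,
    using midpoint-rule Fourier integrals of u = nl(A*sin(theta))."""
    a1 = b1 = 0.0
    for j in range(n):
        th = 2.0 * math.pi * (j + 0.5) / n
        u = nl(A * math.sin(th))
        a1 += u * math.cos(th)
        b1 += u * math.sin(th)
    scale = 2.0 / n                      # = (1/pi) * (2*pi/n)
    return complex(b1 * scale, a1 * scale) / A

relay = lambda e: (e > 0) - (e < 0)      # ideal relay
N = describing_function(relay, A=2.0)    # analytic value: 4/(pi*A)
```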
<strong>Lecture</strong> 14 41<br />
<strong>EL2620</strong> 2011<br />
More Courses in <strong>Control</strong><br />
• EL2450 Hybrid and Embedded <strong>Control</strong> Systems, per 3<br />
• EL2745 Principles of Wireless Sensor Networks, per 3<br />
• EL2520 <strong>Control</strong> Theory and Practice, Advanced Course, per 4<br />
• EL1820 Modelling of Dynamic Systems, per 1<br />
• EL2421 Project Course in Automatic <strong>Control</strong>, per 2<br />
<strong>Lecture</strong> 14 43<br />
<strong>EL2620</strong> 2011<br />
Existence of Periodic Solutions<br />
[Figure: feedback loop with f(·) and G(s); Nyquist plot of G(iω) intersecting −1/N(A)]<br />
y = G(iω)u = −G(iω)N(A)y ⇒ G(iω) = −1/N(A)<br />
The intersections of the curves G(iω) and −1/N (A)<br />
give ω and A for a possible periodic solution.<br />
<strong>Lecture</strong> 14 42<br />
<strong>EL2620</strong> 2011<br />
EL2520 <strong>Control</strong> Theory and Practice,<br />
Advanced Course<br />
Aim: provide an introduction to principles and methods in advanced<br />
control, especially multivariable feedback systems.<br />
• Period 4, 7.5 p<br />
• Multivariable control:<br />
– Linear multivariable systems<br />
– Robustness and performance<br />
– Design of multivariable controllers: LQG, H∞-optimization<br />
– Real time optimization: Model Predictive <strong>Control</strong> (MPC)<br />
• <strong>Lecture</strong>s, exercises, labs, computer exercises<br />
Contact: Mikael Johansson mikaelj@kth.se<br />
<strong>Lecture</strong> 14 44
<strong>EL2620</strong> 2011<br />
EL2450 Hybrid and Embedded <strong>Control</strong><br />
Systems<br />
Aim: course on analysis, design and implementation of control<br />
algorithms in networked and embedded systems.<br />
• Period 3, 7.5 p<br />
• How is control implemented in reality:<br />
– Computer-implementation of control algorithms<br />
– Scheduling of real-time software<br />
– <strong>Control</strong> over communication networks<br />
• <strong>Lecture</strong>s, exercises, homework, computer exercises<br />
Contact: Dimos Dimarogonas dimos@kth.se<br />
<strong>Lecture</strong> 14 45<br />
<strong>EL2620</strong> 2011<br />
EL1820 Modelling of Dynamic Systems<br />
Aim: teach how to systematically build mathematical models of<br />
technical systems from physical laws and from measured signals.<br />
• Period 1, 6 p<br />
• Model dynamical systems from<br />
– physics: lagrangian mechanics, electrical circuits etc<br />
– experiments: parametric identification, frequency response<br />
• Computer tools for modeling, identification, and simulation<br />
• <strong>Lecture</strong>s, exercises, labs, computer exercises<br />
Contact: Håkan Hjalmarsson, hjalmars@kth.se<br />
<strong>Lecture</strong> 14 47<br />
<strong>EL2620</strong> 2011<br />
EL2745 Principles of Wireless Sensor<br />
Networks<br />
Aim: provide the participants with a basic knowledge of wireless<br />
sensor networks (WSN)<br />
• Period 3, 7.5 cr<br />
• THE INTERNET OF THINGS<br />
– Essential tools within communication, control, optimization and<br />
signal processing needed to cope with WSN<br />
– Design of practical WSNs<br />
– Research topics in WSNs<br />
Contact: Carlo Fischione carlofi@kth.se<br />
<strong>Lecture</strong> 14 46<br />
<strong>EL2620</strong> 2011<br />
EL2421 Project Course in <strong>Control</strong><br />
Aim: provide practical knowledge about modeling, analysis, design,<br />
and implementation of control systems. Give some experience in<br />
project management and presentation.<br />
• Period 4, 12 p<br />
• “From start to goal...”: apply the theory from other courses<br />
• Team work<br />
• Preparation for Master thesis project<br />
• Project management (lecturers from industry)<br />
• No regular lectures or labs<br />
Contact: Jonas Mårtensson jonas1@kth.se<br />
<strong>Lecture</strong> 14 48
<strong>EL2620</strong> 2011<br />
Doing Master Thesis Project at Automatic<br />
<strong>Control</strong> Lab<br />
◦ Theory and practice<br />
◦ Cross-disciplinary<br />
◦ The research edge<br />
◦ Collaboration with leading industry and universities<br />
◦ Get insight in research and development<br />
Hints:<br />
• The topic and the results of your thesis are up to you<br />
• Discuss with professors, lecturers, PhD and MS students<br />
• Check old projects<br />
<strong>Lecture</strong> 14 49<br />
<strong>EL2620</strong> 2011<br />
Doing PhD Thesis Project at Automatic<br />
<strong>Control</strong><br />
• Intellectual stimuli<br />
• Get paid for studying<br />
• International collaborations and travel<br />
• Competitive<br />
• World-wide job market<br />
• Research (50%), courses (30%),<br />
teaching (20%), fun (100%)<br />
• 4–5 years to PhD (licentiate after 2–3 years)<br />
<strong>Lecture</strong> 14 50