Digital Control Systems [MEE 4003]
Kyoungchul Kong
Assistant Professor
Department of Mechanical Engineering
Sogang University
Draft date September 6, 2012

Preface

For the inspiration and motivation of my students.
(Parts of this class note are copied and edited from articles available on the web.)
Chapter 1

Introduction

1.1 Problem Definition

1.1.1 Control systems

Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamic systems. The desired output of a system is called the reference. When one or more output variables of a system need to follow a certain reference over time, a controller manipulates the inputs to the system to obtain the desired effect on its output.
Figure 1.1: The concept of the feedback loop to control the dynamic behavior of the system: this is negative feedback, because the sensed value is subtracted from the desired value to create the error signal, which is amplified by the controller.
1.1.2 Regulation and tracking control

When the reference is constant, the control process is called regulation. On the other hand, when the reference is a time-varying quantity, the control process is called tracking control. Note that the control process is identical for both regulation and tracking control.
1.1. PROBLEM DEFINITION 3<br />
Figure 1.2: Examples of controlled systems. (a): a robot arm, (b): a helicopter, (c): an active suspension, (d): an air conditioning system.
[Example 1-1] The systems in Fig. 1.2 are examples of controlled systems. The robot arm in (a) is a tracking control system, because the reference of the robot joint is time-varying in general. On the other hand, the remaining systems are all regulation systems. For example, once the desired height of a helicopter is set, the rotor speed is automatically controlled to maintain the height. For (c) and (d), find the reason why they are regulation systems.
Disturbance

A disturbance is an undesired input that affects the performance of the overall control system. Disturbances include an environmental change, an external force, a change in system parameters, etc.

1.1.3 System

A system is a set of interacting components forming an integrated whole. Every system has input(s) and output(s). Most systems share common characteristics, including:

• Systems have structure, defined by components and their composition,
• Systems have behavior, which involves inputs, processing, and outputs of material, energy, information, or data,
• Systems have interconnectivity: the various parts of a system have functional as well as structural relationships to each other,
• Systems may have some functions or groups of functions.

Plant

A plant in control theory is the combination of the process and the actuator. In particular, a plant is the system to be controlled.
1.1.4 Signal

A signal is any time-varying or space-varying quantity. In the physical world, any quantity measurable through time or over space can be considered a signal. More generally, any set of human information or machine data can also be considered a signal. Such information or machine data must all be part of systems existing in the physical world.
[Example 1-2] Examples of signals follow.

• Motion: The motion of a particle through some space can be considered as a signal, or can be represented by a signal. In general the position signal is a 3-vector signal. If orientation is considered, it is a 6-vector signal.
• Sound: Since a sound is a vibration of a medium (such as air), a sound signal is related to the pressure value of air. A microphone converts sound pressure at some place to a function of time, generating a voltage signal that is proportional to the sound signal. Sound signals can be sampled at a discrete set of time points. For example, compact discs (CDs) contain discrete signals representing sound, recorded at 44,100 samples per second.
• Images: A picture or image consists of a brightness or color signal, a function of a two-dimensional location. A 2D image can have a continuous spatial domain, as in a traditional photograph or painting; or the image can be discretized in space, as in a raster-scanned digital image. Color images are typically represented as a combination of images in three primary colors, so that the signal is vector-valued with dimension three.
• Videos: A video signal is a sequence of images. A point in a video is identified by its two-dimensional position and by the time at which it occurs, so a video signal has a three-dimensional domain. Analog video has one continuous domain
dimension (across a scan line) and two discrete dimensions (frame and line).
1.1.5 Continuous-time and discrete-time signals

If the quantities are defined only on a discrete set of times, we call the signal a discrete-time signal. A discrete-time signal can be indexed by an integer that represents the sequence of each data point. On the other hand, a continuous-time real signal is any real-valued function that is defined for all time t in an interval.

[Example 1-3] An example of the continuous-time and discrete-time signals is shown in Fig. 1.3. In general, a discrete-time signal is a sampled version of its corresponding continuous-time signal.
Figure 1.3: Continuous-time and discrete-time domain signals. (a): a continuous-time signal, (b): a discrete-time signal.
1.1.6 Digital control

Digital control is a branch of control theory that uses digital computers to act as system controllers. Depending on the requirements, a digital control system can take the form of a microcontroller (costs less than 10,000 KRW) to a digital signal processor (DSP, costs more than 10,000,000 KRW). Since a digital computer has finite precision (i.e.,
quantization), extra care is needed to ensure that the errors in coefficients, A/D conversion, D/A conversion, etc. do not produce undesired or unplanned effects.

Figure 1.4: 4-channel analog-to-digital converter WM8775SEDS made by Wolfson Microelectronics, placed on an X-Fi Pro sound card.
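As a small sketch of the finite-precision effect mentioned above (my addition, not from the notes; the 3-bit resolution and ±1 V range are arbitrary illustration values), quantizing a signal to n bits bounds the error by one quantization step:

```python
import numpy as np

def quantize(signal, n_bits, v_min=-1.0, v_max=1.0):
    """Quantize an analog signal to an n-bit digital representation."""
    levels = 2 ** n_bits
    step = (v_max - v_min) / levels              # quantization step size
    # Map each sample to the nearest code, then back to a voltage
    codes = np.clip(np.round((signal - v_min) / step), 0, levels - 1)
    return v_min + codes * step

t = np.linspace(0.0, 1.0, 100)
x = np.sin(2 * np.pi * t)                        # "analog" signal
xq = quantize(x, n_bits=3)

# The quantization error is bounded by one step size (2 V / 2^3 = 0.25 V)
assert np.max(np.abs(x - xq)) <= 0.25
```

More bits shrink the step size, which is why coefficient and conversion errors become less troublesome on higher-resolution hardware.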
The benefits of a digital control system include:

• Inexpensive: under $5 for many microcontrollers.
• Flexible: easy to configure and reconfigure through software.
• Scalable: programs can scale to the limits of the memory or storage space without extra cost.
• Adaptable: parameters of the program can change with time.
• Static operation: digital computers are less easily affected by environmental conditions than analog components such as capacitors, inductors, etc.
Analog-to-digital converter

An analog-to-digital converter (ADC, A/D, or A-to-D) is a device that converts a continuous quantity to a discrete-time digital representation. Typically, an ADC is an electronic device that converts an input analog voltage to a digital number proportional to the magnitude of the voltage.

Digital-to-analog converter

A digital-to-analog converter (DAC, D/A, or D-to-A) is a device that converts a digital code to an analog signal (voltage, current, or electric charge).
Figure 1.5: 8-channel digital-to-analog converter Cirrus Logic CS4382 as used in a soundcard.

Figure 1.6: The definition of the sampling period T.
1.1.7 Sampling Rate

The sampling rate, sample rate, or sampling frequency defines the number of samples per unit of time (usually seconds) taken from a continuous signal to make a discrete signal. For time-domain signals, the unit for sampling rate is hertz [Hz] (inverse seconds, 1/s, s^-1), sometimes noted as Sa/s (samples per second). The inverse of the sampling frequency is the sampling period or sampling interval, which is the time between samples. T shown in Fig. 1.6 is the sampling period, and 1/T is the sampling frequency.
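As a brief illustration of the relation above (my addition; the 100 Hz rate and 2 Hz sine are arbitrary choices), the discrete signal consists of the samples at t_k = kT:

```python
import numpy as np

fs = 100.0                   # sampling frequency [Hz], chosen for illustration
T = 1.0 / fs                 # sampling period [s]: the time between samples

k = np.arange(100)           # sample index over one second
t_k = k * T                  # discrete time instants t_k = k T
x_k = np.sin(2.0 * np.pi * 2.0 * t_k)   # sampled 2 Hz sine

assert np.isclose(T, 0.01)   # 100 Hz rate gives a 10 ms period
assert len(x_k) == 100
```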
1.2 Examples

1.2.1 Vehicle Cruise Control

Suppose you have a vehicle with a cruise control function. The cruise control unit (CCU) calculates the amount of acceleration from the measured speed error to maintain a set speed. The vehicle speed is measured by a tachometer.

• What are the signals?
• What is the plant?
• What is the controller?
• What is the sensor?
• How is the block diagram represented?
• What are the expected disturbances?
1.2.2 Air Conditioner

An air conditioner is installed in a room. The desired temperature is 24 °C, and the current temperature is being measured by a thermocouple. The fan of the air conditioner is controlled such that its desired speed is proportional to the temperature error. The speed of the fan is controlled by an electric current flowing through the fan, where the electric current is controlled by a motor driver.

• What are the signals?
• What are the systems?
• What is the plant?
• What is the controller?
• What is the sensor?
• How is the block diagram represented?
• What are the expected disturbances?
1.2.3 Robot Arm Control

A motor installed at a joint of an industrial robot is to be controlled. The desired angular position of the motor is a sine wave with an amplitude of 1 radian and a frequency of 1 Hz. It is known that the motor follows the equation of motion M Θ̈ + C Θ̇ + K Θ = τ, where Θ is the angular position of the motor and τ is the torque generated by the motor. The motor torque is proportional to the electric current flowing through the motor, where the electric current is regulated by a motor driver. The motor driver is connected to the computer via a D/A converter, and the torque command is transferred by an analog voltage signal. A digital controller generates the torque command to be proportional to the angular position error. The angular position is measured by a potentiometer.

• What are the signals?
• What are the systems?
• What is the plant?
• What is the controller?
• What is the sensor?
• How is the block diagram represented?
• What are the expected disturbances?
• Is this system a continuous or digital control system?
1.3 References

1. Wikipedia, available on-line: www.wikipedia.org
Chapter 2

Review of Continuous-Time Domain Control Theory

2.1 State Space Realization of Dynamic Systems

2.1.1 Differential equations: state space realization

The dynamics of systems can be described by equations of motion, which are in general described by differential equations.
Modeling of complicated mass-spring-damper systems

A mechanical system with multiple masses, springs, and dampers can be described by a mathematical model:

    M ẍ + C ẋ + K x = u

where
M ∈ R^(n×n) is the mass matrix,
C ∈ R^(n×n) is the damping coefficient matrix,
K ∈ R^(n×n) is the spring constant matrix,
x = [x_1, x_2, ..., x_n]^T ∈ R^n is the position vector (x_1, ..., x_n are the positions of each mass), and
u = [u_1, u_2, ..., u_n]^T ∈ R^n is the input vector (u_1, ..., u_n are the forces applied to each mass).
Figure 2.1: Dynamic systems. (a) A mass-spring system, (b) Double mass-spring system, (c) Complicated mass-spring-damper system, (d) Creeping phenomenon of a steel material, (e) Buckling phenomenon of a rod.
The matrices M, C, and K are defined as follows.

• M_ii is the mass value of the i-th mass.
• M_ij (i ≠ j) = 0.
• C_ii is the sum of the damping coefficients of all dampers connected to the i-th mass.
• C_ij (i ≠ j) is the negative value of the sum of the damping coefficients of all dampers connected between the i-th mass and the j-th mass.
• K_ii is the sum of the spring constants of all springs connected to the i-th mass.
• K_ij (i ≠ j) is the negative value of the sum of the spring constants of all springs connected between the i-th mass and the j-th mass.
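The assembly rules above can be sketched in code. This is an illustration I added (not from the notes): each spring or damper is given as a connection between two mass indices, with index 0 standing for ground, and the rules for the diagonal and off-diagonal entries are applied directly.

```python
import numpy as np

def assemble(masses, springs, dampers):
    """Build M, C, K from connection lists.

    masses  : list of mass values m_1 .. m_n
    springs : list of (i, j, k) with 1-based mass indices; 0 means ground
    dampers : list of (i, j, c) in the same format
    """
    n = len(masses)
    M = np.diag(masses).astype(float)         # M_ii = m_i, off-diagonals zero
    C = np.zeros((n, n))
    K = np.zeros((n, n))
    for (i, j, k) in springs:
        for a in (i, j):
            if a > 0:
                K[a - 1, a - 1] += k          # K_ii: sum of springs touching mass i
        if i > 0 and j > 0:
            K[i - 1, j - 1] -= k              # K_ij: minus the springs between i and j
            K[j - 1, i - 1] -= k
    for (i, j, c) in dampers:
        for a in (i, j):
            if a > 0:
                C[a - 1, a - 1] += c
        if i > 0 and j > 0:
            C[i - 1, j - 1] -= c
            C[j - 1, i - 1] -= c
    return M, C, K

# Double mass-spring system of Fig. 2.1(b): one spring k between m1 and m2
M, C, K = assemble([1.0, 2.0], springs=[(1, 2, 5.0)], dampers=[])
assert np.allclose(K, [[5.0, -5.0], [-5.0, 5.0]])
assert np.allclose(C, 0.0)
```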
[Example 2-1] Differential equations of the systems in Fig. 2.1 are as follows.

(a) m ẍ + k x = u

(b) The output vector is x = [x_1, x_2]^T ∈ R^2, and the input vector is u = [u_1, u_2]^T ∈ R^2. The entries of the matrices are defined as follows.

• M_11 = m_1 and M_22 = m_2.
• M_12 = M_21 = 0.
• Since there is no damper in the system, C = 0.
• The spring k is the only spring connected to m_1, i.e., K_11 = k. For the same reason, K_22 = k.
• The spring k is the only spring connected between m_1 and m_2. Since K_ij (i ≠ j) is the negative value of the sum of spring constants, K_12 = K_21 = −k.

Therefore, the differential equation of the two mass-spring system is

    [ m_1   0  ]       [ 0  0 ]       [  k  −k ]
    [  0   m_2 ] ẍ  +  [ 0  0 ] ẋ  +  [ −k   k ] x = u

(c) The output vector is x = [x_1, x_2, x_3]^T ∈ R^3, and the input vector is u = [0, 0, u]^T ∈ R^3. Note that no force is applied to m_1 and m_2. The entries of the matrices are defined as follows.

• M_11 = m_1, M_22 = m_2, and M_33 = m_3.
• M_12 = M_13 = M_21 = M_23 = M_31 = M_32 = 0.
• The dampers connected to m_1 are c_1 and c_2. Thus, C_11 = c_1 + c_2. On the other hand, c_2 is the only damper connected to m_2, i.e., C_22 = c_2. Since no damper is connected to m_3, C_33 = 0.
• The damper connected between m_1 and m_2 is c_2. Thus, C_12 = C_21 = −c_2. There is no damper between m_1 and m_3, i.e., C_13 = C_31 = 0. Similarly, C_23 = C_32 = 0.
• The springs k_1, k_2, and k_4 are connected to m_1, i.e., K_11 = k_1 + k_2 + k_4. Similarly, K_22 = k_2 + k_3 and K_33 = k_3 + k_4.
Figure 2.2: An equivalent model of the creeping phenomenon.
• The spring k_2 is placed between m_1 and m_2. Thus, K_12 = K_21 = −k_2. Between m_2 and m_3, the spring k_3 is placed, i.e., K_23 = K_32 = −k_3. Similarly, K_13 = K_31 = −k_4.
Finally, the mathematical model of the three mass-spring-damper system is

    [ m_1   0    0  ]       [ c_1+c_2  −c_2  0 ]       [ k_1+k_2+k_4     −k_2        −k_4    ]
    [  0   m_2   0  ] ẍ  +  [  −c_2     c_2  0 ] ẋ  +  [    −k_2      k_2+k_3        −k_3    ] x = u
    [  0    0   m_3 ]       [   0        0   0 ]       [    −k_4        −k_3      k_3+k_4    ]
(d) The modeling of the creep phenomenon of materials is introduced in this example. In materials science, creep is the tendency of a solid material to slowly move or deform permanently under the influence of stresses. In order to find the mathematical model of the creep phenomenon, an equivalent model is introduced with fictitious masses (m = 0) as in Fig. 2.2.

The mathematical model of the equivalent model is

    [ 0  0 ]       [ 0  0 ]       [  k  −k ]
    [ 0  0 ] ẍ  +  [ 0  c ] ẋ  +  [ −k   k ] x = u

where x = [x_1, x_2]^T ∈ R^2 and u = [u_1, 0]^T ∈ R^2.
(e) In the figure, x is the position from the top of the bar, and v is the distance from the vertical line to the center line of the deflected bar. The internal moment in the bar, M, is related to its deflected shape, v, by

    EI d²v/dx² = M

The internal moment, M, is determined by the applied force, P, and the distance from the vertical line, v (i.e., M = −Pv). Therefore, the bar under a compressive load is modeled by

    EI d²v/dx² + Pv = 0
State space realization of linear systems

Consider the equation of motion of a system:

    a_n dⁿx/dtⁿ + a_(n−1) d^(n−1)x/dt^(n−1) + ··· + a_1 dx/dt + a_0 x = b u    (2.1)

where a_0, a_1, ..., a_n, and b are constants, all in R. The input is u ∈ R, and the output is x ∈ R. Note that the order of the equation of motion is n.
The state space realization of the given equation of motion is obtained as follows.

[Step 1] Find the variable that has the lowest order in the equation of motion and the output. In the example of (2.1), x is the variable with the lowest order. Let this variable be x_1.

[Step 2] Define the derivative of x_1 as a new variable, x_2, i.e.,

    d/dt x_1 = x_2

[Step 3] Repeat defining new variables up to x_n. For example,

    d/dt x_2 = x_3
    d/dt x_3 = x_4
    ...
    d/dt x_(n−1) = x_n

Note that x_1 = x, x_2 = dx/dt, x_3 = d²x/dt², ..., x_n = d^(n−1)x/dt^(n−1).

[Step 4] Rearrange the equation of motion such that only the highest-order term remains on the left-hand side, i.e.,

    dⁿx/dtⁿ = −(1/a_n) [ a_(n−1) d^(n−1)x/dt^(n−1) + ··· + a_1 dx/dt + a_0 x ] + (b/a_n) u

The equation above can be rewritten using the new variables, i.e.,

    d/dt x_n = −(1/a_n) [ a_(n−1) x_n + ··· + a_1 x_2 + a_0 x_1 ] + (b/a_n) u
The n new first-order differential equations can be arranged into the following matrix form:

         [ x_1 ]   [    0         1         0     ...        0       ] [ x_1 ]   [   0   ]
         [ x_2 ]   [    0         0         1     ...        0       ] [ x_2 ]   [   0   ]
    d/dt [  ⋮  ] = [    ⋮                         ⋱          ⋮       ] [  ⋮  ] + [   ⋮   ] u
         [x_n−1]   [    0         0         0     ...        1       ] [x_n−1]   [   0   ]
         [ x_n ]   [ −a_0/a_n  −a_1/a_n  −a_2/a_n ... −a_(n−1)/a_n   ] [ x_n ]   [ b/a_n ]

where the output is

    y = [ 1 0 0 ... 0 ] [ x_1, x_2, x_3, ..., x_n ]^T

A state space realization has been obtained. If the original equation of motion consists of two or more differential equations, you may repeat the same process until the state space representation of the whole system is obtained.
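The four steps can be collected into a small routine. This is a sketch I added under the notation of (2.1): the coefficients a_0, ..., a_n and b define the companion-form matrices directly.

```python
import numpy as np

def companion_realization(a, b):
    """State space realization of a_n x^(n) + ... + a_1 x' + a_0 x = b u.

    a : [a_0, a_1, ..., a_n] with a_n != 0
    b : scalar input coefficient
    Returns F, G, H with state x_1 = x (the output).
    """
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    F = np.zeros((n, n))
    F[:-1, 1:] = np.eye(n - 1)           # d/dt x_i = x_{i+1}
    F[-1, :] = -a[:-1] / a[-1]           # last row: -a_i / a_n
    G = np.zeros((n, 1))
    G[-1, 0] = b / a[-1]                 # input enters through b / a_n
    H = np.zeros((1, n))
    H[0, 0] = 1.0                        # output y = x_1
    return F, G, H

# Example 2-2 style model m x'' = u with m = 2: a = [0, 0, 2], b = 1
F, G, H = companion_realization([0.0, 0.0, 2.0], 1.0)
assert np.allclose(F, [[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(G, [[0.0], [0.5]])
```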
[Example 2-2] An equation of motion, m ẍ = u, where the input is u ∈ R and the output is x ∈ R, is to be represented in a state space.

The variable that has the lowest order in the equation of motion and the output is x, which is the output itself. Let x be x_1. Following the steps listed above, a new variable is defined, i.e.,

    d/dt x_1 = x_2

Note that x_2 = ẋ. The derivative of x_2 is

    d/dt x_2 = ẍ = (1/m) u

Arranging the two new first-order differential equations,

    d/dt [ x_1 ]   [ 0  1 ] [ x_1 ]   [  0  ]
         [ x_2 ] = [ 0  0 ] [ x_2 ] + [ 1/m ] u

where the output is

    y = [ 1 0 ] [ x_1, x_2 ]^T
[Example 2-3] The governing equation of the two-mass-spring system

    [ m_1   0  ] [ ẍ_1 ]   [ 0  0 ] [ ẋ_1 ]   [  k  −k ] [ x_1 ]   [ u_1 ]
    [  0   m_2 ] [ ẍ_2 ] + [ 0  0 ] [ ẋ_2 ] + [ −k   k ] [ x_2 ] = [ u_2 ]

where the outputs are x_1 and x_2, is to be represented in a state space.

Note that the original equation of motion consists of two differential equations, i.e.,

    ẍ_1 = (1/m_1) [−k x_1 + k x_2] + (1/m_1) u_1    (2.2)
    ẍ_2 = (1/m_2) [k x_1 − k x_2] + (1/m_2) u_2    (2.3)

The variables with the lowest order in the equation of motion and the output are x_1 and x_2. Let them be y_1 and z_1. New variables, y_2 and z_2, are defined as the derivatives of y_1 and z_1, respectively, i.e.,

    d/dt y_1 = y_2
    d/dt z_1 = z_2

Differentiating once more, the original differential equations in (2.2)–(2.3) appear, i.e.,

    d/dt y_2 = (1/m_1) [−k y_1 + k z_1] + (1/m_1) u_1
    d/dt z_2 = (1/m_2) [k y_1 − k z_1] + (1/m_2) u_2

Arranging the new first-order differential equations,

         [ y_1 ]   [    0     1    0      0 ] [ y_1 ]   [   0      0   ]
    d/dt [ y_2 ] = [ −k/m_1   0   k/m_1   0 ] [ y_2 ] + [ 1/m_1    0   ] [ u_1 ]
         [ z_1 ]   [    0     0    0      1 ] [ z_1 ]   [   0      0   ] [ u_2 ]
         [ z_2 ]   [  k/m_2   0  −k/m_2   0 ] [ z_2 ]   [   0    1/m_2 ]

Since the outputs of the system are y_1 = x_1 and z_1 = x_2,

    y = [ 1 0 0 0 ] [ y_1, y_2, z_1, z_2 ]^T ∈ R^2
        [ 0 0 1 0 ]
[Example 2-4] Find the state space realizations of the following dynamic models:

(a) a_1 ẋ + a_2 x = u, where the input is u ∈ R and the output is x ∈ R.

(b) ẍ + a_1 ẋ + a_2 x = b_1 u, where the input is u ∈ R and the output is ẋ ∈ R.

(c) a_1 ẍ + a_2 ẋ + a_3 x = b_1 u + b_2 u̇, where the input is u ∈ R and the output is x ∈ R.

(d) [ m_1   0  ] [ ẍ_1 ]   [ 2k  −k ] [ x_1 ]   [ u_1 ]
    [  0   m_2 ] [ ẍ_2 ] + [ −k  3k ] [ x_2 ] = [ u_2 ]

    where the input is [u_1, u_2]^T ∈ R^2 and the output is [x_1 + ẋ_1, x_2 + ẋ_2]^T ∈ R^2.

(e) [ m_1   0  ] [ ẍ_1 ]   [ c  −c ] [ ẋ_1 ]   [  k  −k ] [ x_1 ]   [ u_1 + u̇_1 ]
    [  0   m_2 ] [ ẍ_2 ] + [ −c   c ] [ ẋ_2 ] + [ −k   k ] [ x_2 ] = [ u_2 + u̇_2 ]

    where the input is [u_1, u_2]^T ∈ R^2 and the output is [x_1, x_2]^T ∈ R^2.
2.1.2 Solution to state space equations

Consider a state space model,

    ẋ = F x + G u
    y = H x

where F ∈ R^(n×n), G ∈ R^(n×m), and H ∈ R^(p×n). The initial condition of x(t) is given as x_0.

In order to find the solution x(t) for an arbitrary input u(t), e^(−Ft) is multiplied to both sides of the state space equation:

    e^(−Ft) ẋ = e^(−Ft) F x + e^(−Ft) G u

Rearranging the equation above and integrating, we get

    ∫₀ᵗ ( e^(−Fτ) ẋ − e^(−Fτ) F x ) dτ = ∫₀ᵗ e^(−Fτ) G u(τ) dτ    for t ≥ 0

Note that the integration variable has been replaced by τ to avoid confusion between the integration variable and the time index. The left-hand side is a definite integral, and thus

    e^(−Ft) x(t) − x(0) = ∫₀ᵗ e^(−Fτ) G u(τ) dτ    for t ≥ 0
Figure 2.4: Matlab code for calculating the output signal from an arbitrary input. The lsim.m function calculates the output from an arbitrarily defined input signal. You may also use step.m and impulse.m for obtaining step and impulse responses.
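The notes use Matlab's lsim.m; a roughly equivalent sketch in Python is given below (my addition, using scipy.signal.lsim and scipy.signal.step; the plant ÿ + 3ẏ + 2y = u and the sine input are illustration choices, not prescribed by the figure):

```python
import numpy as np
from scipy.signal import lsim, step

# An assumed example plant: y'' + 3y' + 2y = u, in state space form (F, G, H, D)
sys = ([[0.0, 1.0], [-2.0, -3.0]],         # F
       [[0.0], [1.0]],                     # G
       [[1.0, 0.0]],                       # H
       [[0.0]])                            # D

t = np.linspace(0.0, 10.0, 1001)           # time span
u = np.sin(2.0 * np.pi * t)                # an arbitrary input defined over t
_, y, _ = lsim(sys, U=u, T=t)              # calculating the output signal

t_s, y_s = step(sys, T=t)                  # step response, like Matlab's step.m
assert y.shape == t.shape
assert np.isclose(y_s[-1], 0.5, atol=1e-3) # DC gain of this plant is 1/2
```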
By multiplying e^(Ft) to both sides of the equation above, and by using the initial condition x(0) = x_0, the solution of the state space equation under an arbitrary input is obtained:

    x(t) = e^(Ft) x_0 + e^(Ft) ∫₀ᵗ e^(−Fτ) G u(τ) dτ    for t ≥ 0

Since e^(Ft) is not a function of τ, the solution can be reduced to

    x(t) = e^(Ft) x_0 + ∫₀ᵗ e^(F(t−τ)) G u(τ) dτ    for t ≥ 0    (2.4)

This process is called convolution.

In order to find a complete solution of (2.4), it is necessary to solve e^(Ft). A simple method for computing e^(Ft) is the Taylor expansion. Namely,

    e^(Ft) = I + Ft + (1/2) F² t² + (1/6) F³ t³ + ... = Σ_(i=0)^∞ (1/i!) Fⁱ tⁱ
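The Taylor series above can be checked numerically. This is my addition: scipy.linalg.expm computes the matrix exponential directly, and a truncated sum of the series should agree with it (the matrix F and time t are arbitrary illustration values).

```python
import math

import numpy as np
from scipy.linalg import expm

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example state matrix (assumed)
t = 0.3

# Truncated Taylor series: e^{Ft} ~= sum_{i=0}^{N} (1/i!) F^i t^i
N = 20
term = np.eye(2)        # holds (F t)^i
taylor = np.eye(2)      # the i = 0 term
for i in range(1, N + 1):
    term = term @ (F * t)
    taylor = taylor + term / math.factorial(i)

# scipy's expm computes e^{Ft} directly; the truncated series matches it
assert np.allclose(taylor, expm(F * t))
```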
[Example 2-5] If F = [ 0 1 ; 0 0 ], then e^(Ft) is

    e^(Ft) = I + Ft + (1/2) F² t² + (1/6) F³ t³ + ...
           = [ 1 0 ; 0 1 ] + [ 0 t ; 0 0 ]        (all higher-order terms vanish, since F² = 0)
           = [ 1 t ; 0 1 ]

[Example 2-6] Suppose a state space model

    d/dt x = [ 0 1 ; 0 0 ] x + [ 0 ; 1 ] u
    y = [ 1 0 ] x
    x(0) = 0 ∈ R^2

is under a unit step input, which is defined as

    u(t) = 1 for t ≥ 0,  0 for t < 0

By using the result of the previous example, the state is calculated as

    x(t) = e^(Ft) x(0) + ∫₀ᵗ e^(F(t−τ)) G u(τ) dτ
         = ∫₀ᵗ [ 1 t−τ ; 0 1 ] [ 0 ; 1 ] dτ
         = [ t²/2 ; t ]    for t > 0

Then, the output is

    y(t) = [ 1 0 ] x(t) = t²/2    for t > 0
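The closed-form answer y(t) = t²/2 of Example 2-6 can be cross-checked numerically. This is my addition, using scipy.signal.lsim on the double-integrator model of the example:

```python
import numpy as np
from scipy.signal import lsim

# Double integrator of Example 2-6: x' = Fx + Gu, y = Hx
F = [[0.0, 1.0], [0.0, 0.0]]
G = [[0.0], [1.0]]
H = [[1.0, 0.0]]
D = [[0.0]]

t = np.linspace(0.0, 2.0, 201)
u = np.ones_like(t)                        # unit step input
_, y, _ = lsim((F, G, H, D), U=u, T=t, X0=[0.0, 0.0])

# The simulated output matches the closed-form solution t^2 / 2
assert np.allclose(y, 0.5 * t**2, atol=1e-8)
```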
Solution of diagonalizable state space models

For a diagonalizable matrix F = V Λ V^(−1) ∈ R^(n×n), where Λ is a diagonal matrix with the eigenvalues of F, e^(Ft) = e^((VΛV^(−1))t) is calculated by the Taylor expansion as follows.

    e^((VΛV^(−1))t) = I + VΛV^(−1) t + (1/2) VΛV^(−1) VΛV^(−1) t² + (1/6) VΛV^(−1) VΛV^(−1) VΛV^(−1) t³ + ...
                    = V [ I + Λt + (1/2) Λ² t² + (1/6) Λ³ t³ + ... ] V^(−1)
                    = V [ Σ_(i=0)^∞ (1/i!) Λⁱ tⁱ ] V^(−1)
                    = V e^(Λt) V^(−1)

Note that the inner products V^(−1) V cancel in the equation above. Note also that e^(Λt) is

    e^(Λt) = diag( e^(λ_1 t), e^(λ_2 t), ..., e^(λ_n t) )

Therefore, the exponential of a diagonalizable matrix can be solved as

    e^(Ft) = V diag( e^(λ_1 t), e^(λ_2 t), ..., e^(λ_n t) ) V^(−1)    (2.5)

From the result in (2.5), the solution of diagonalizable state space models can be obtained as follows. Suppose a state space model is given:

    ẋ = F x + G u
    y = H x
    x(0) = x_0

where F is a diagonalizable matrix in R^(n×n) such that F can be eigendecomposed to F = V Λ V^(−1), where V is a matrix consisting of the eigenvectors of F. Then, a new state is defined as

    x̄ = V^(−1) x ∈ R^n

or x = V x̄. Substituting V x̄ for x, the state space model becomes

    V dx̄/dt = F V x̄ + G u
    y = H V x̄
    x̄(0) = V^(−1) x_0
Figure 2.5: A mass-spring-damper system.
Multiplying V^(−1) to both sides of the equation, a complete state space model is obtained as

    dx̄/dt = V^(−1) F V x̄ + V^(−1) G u
    y = H V x̄
    x̄(0) = V^(−1) x_0

Note that the state space model has been transformed into a diagonal state space model, the solution of which is

    x̄(t) = e^(Λt) x̄_0 + ∫₀ᵗ e^(Λ(t−τ)) Ḡ u(τ) dτ    for t ≥ 0    (2.6)
    y(t) = H̄ x̄(t)    (2.7)

where Λ = V^(−1) F V, Ḡ = V^(−1) G, and H̄ = H V.
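The identity e^(Ft) = V e^(Λt) V^(−1) from (2.5) can be verified numerically. This is my addition: numpy.linalg.eig returns the eigenvector matrix V, and the reconstructed exponential matches scipy's expm (the matrix F and time t are arbitrary illustration values).

```python
import numpy as np
from scipy.linalg import expm

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a diagonalizable state matrix (assumed)
lam, V = np.linalg.eig(F)                  # F = V Lambda V^{-1}

t = 0.7
e_Lt = np.diag(np.exp(lam * t))            # e^{Lambda t}: exponentials of the eigenvalues
e_Ft = V @ e_Lt @ np.linalg.inv(V)         # equation (2.5)

assert np.allclose(e_Ft, expm(F * t))
```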
[Example 2-7] Suppose that the mass-spring-damper system shown in Fig. 2.5 is under a unit step input. In the figure, M = 1 is the mass, c = 3 is the damping coefficient, and k = 2 is the spring constant. The dynamic model of the system is

ÿ + 3ẏ + 2y = u

and a state space model can be set as

(d/dt) x = [0 1; −2 −3] x + [0; 1] u
y = [1 0] x
x(0) = 0 ∈ R^2

where x(t) = [y; ẏ] ∈ R^2 is the state. The initial condition was assumed to be zero for simplicity. The unit step input is defined as

u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0
To solve e^{Ft}, the state matrix F = [0 1; −2 −3] is to be eigendecomposed. The characteristic equation of F is

det [−λ 1; −2 −3−λ] = λ² + 3λ + 2 = (λ+1)(λ+2) = 0
Thus the eigenvalues of F are λ1 = −1 and λ2 = −2. The eigenvalue equation is

[−λ 1; −2 −3−λ] v = 0

and the associated eigenvectors (among many) are

v1 = [1; −1]   and   v2 = [1; −2]

From the eigenvectors obtained, the transformation matrix V is set to

V = [1 1; −1 −2]
Finally, a new (diagonalized) state space model is set as

(d/dt)x̄ = Λ x̄ + Ḡ u
y = H̄ x̄
x̄(0) = 0 ∈ R^2

where

Λ = V^{-1} F V = [−1 0; 0 −2]
Ḡ = V^{-1} G = [2 1; −1 −1][0; 1] = [1; −1]
H̄ = H V = [1 0][1 1; −1 −2] = [1 1]
The state of the diagonalized state space model is

x̄(t) = ∫_0^t [e^{−(t−τ)} 0; 0 e^{−2(t−τ)}] [1; −1] dτ = [1 − e^{−t}; 0.5e^{−2t} − 0.5]   for t ≥ 0

and the output is y(t) = H̄ x̄(t) = 0.5 + 0.5e^{−2t} − e^{−t} for t ≥ 0.
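The result above can be cross-checked numerically. The following sketch (an assumption, not part of the textbook, which uses Matlab) evaluates the step response directly through the matrix exponential and compares it with the closed-form answer obtained by diagonalization:

```python
import numpy as np
from scipy.linalg import expm

F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])

def y_numeric(t):
    # For a unit step input, the convolution integral has the closed form
    # x(t) = F^{-1} (e^{Ft} - I) G, since F is invertible here.
    x = np.linalg.solve(F, expm(F * t) - np.eye(2)) @ G
    return (H @ x).item()

def y_analytic(t):
    # Result derived in the example via diagonalization.
    return 0.5 + 0.5 * np.exp(-2.0 * t) - np.exp(-t)

for t in (0.5, 1.0, 3.0):
    assert abs(y_numeric(t) - y_analytic(t)) < 1e-9
```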
2.1.3 Solution of state space models in Jordan canonical form*<br />
This section is supplementary for students highly interested in <strong>Control</strong>s.<br />
Consider a state space model with a defective state matrix, i.e., some of the eigenvalues<br />
are repeated and there are fewer linearly independent eigenvectors than the number<br />
of the repeated eigenvalues. In order to form a transformation matrix, V , therefore, generalized<br />
eigenvectors should be obtained.<br />
For simplicity, suppose that a matrix F ∈ R^{n×n} has only one distinct eigenvalue λ and one linearly independent eigenvector. The matrix F can be Jordan-decomposed to F = VJV^{-1}, where 1

J = [λ 1 0 ... 0; 0 λ 1 ... 0; ... ; 0 0 ... λ 1; 0 0 ... 0 λ] ∈ R^{n×n}

i.e., J has λ on the diagonal and 1 on the superdiagonal. By applying the Taylor expansion, e^{Ft} = e^{VJV^{-1}t} is

e^{Ft} = e^{VJV^{-1}t} = V e^{Jt} V^{-1}

where

e^{Jt} = e^{λt} [1 t t²/2! ... t^{n−1}/(n−1)!; 0 1 t ... t^{n−2}/(n−2)!; ... ; 0 0 0 ... 1]
The process to find a solution to the state space models with a defective state matrix is the<br />
same as in (2.7). Namely, for a given state space model with a defective state matrix<br />
ẋ = Fx+Gu<br />
y = Hx<br />
x(0) = x 0 ∈ R n<br />
the solution is

x̄(t) = e^{Jt} x̄_0 + ∫_0^t e^{J(t−τ)} Ḡ u(τ) dτ   for t ≥ 0    (2.8)
y(t) = H̄ x̄(t)    (2.9)

where J = V^{-1}FV, Ḡ = V^{-1}G, and H̄ = HV.
1 Note that the same notation J is used here for a different matrix (not the feedforward term of a state space model).
[Example 2-8] Consider the same mass-spring-damper system as in the previous example, but with different properties: M = 1, c = 2, and k = 1. The dynamic model of the system is

ÿ + 2ẏ + y = u
and a state space model can be set as

(d/dt) x = [0 1; −1 −2] x + [0; 1] u
y = [1 0] x
x(0) = 0 ∈ R^2

where x(t) = [y; ẏ] ∈ R^2 is the state. The initial condition was assumed to be zero for simplicity. The input is defined as

u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0
To solve e^{Ft}, the state matrix F = [0 1; −1 −2] is to be decomposed. The characteristic equation of F is

det [−λ 1; −1 −2−λ] = λ² + 2λ + 1 = (λ+1)(λ+1) = 0

Notice that F has a repeated eigenvalue, λ = −1. The eigenvalue equation is

[−λ 1; −1 −2−λ] v = [1 1; −1 −1] v = 0
Since the nullity of [1 1; −1 −1] is 1, there exists only one linearly independent eigenvector, which is v1 = [1; −1] (among many). A generalized eigenvector is obtained from

[−λ 1; −1 −2−λ] v2 = [1 1; −1 −1] v2 = v1

and the generalized eigenvector is v2 = [1; 0] (among many).

From the eigenvector and the generalized eigenvector obtained above, the transformation matrix V is set to

V = [1 1; −1 0]
Finally, a new state space model in Jordan canonical form is obtained as

(d/dt)x̄ = J x̄ + Ḡ u
y = H̄ x̄
x̄(0) = 0 ∈ R^2

where

J = V^{-1} F V = [−1 1; 0 −1]
Ḡ = V^{-1} G = [0 −1; 1 1][0; 1] = [−1; 1]
H̄ = H V = [1 0][1 1; −1 0] = [1 1]
The exponential of Jt is

e^{Jt} = e^{−t} [1 t; 0 1]

and the state is

x̄(t) = ∫_0^t [e^{−(t−τ)} (t−τ)e^{−(t−τ)}; 0 e^{−(t−τ)}] [−1; 1] dτ = [−t e^{−t}; 1 − e^{−t}]   for t ≥ 0

In the calculation of the equation above, you may use the property ∫ τe^{aτ} dτ = (e^{aτ}/a²)(aτ − 1). The output is

y(t) = H̄ x̄(t) = 1 − e^{−t} − t e^{−t}   for t ≥ 0
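A numerical sketch of this example (an assumption, not part of the textbook) is useful here: since the system ÿ + 2ẏ + y = u has DC gain 1/k = 1, its unit-step response must converge to 1, and it matches 1 − (1 + t)e^{−t} at every time instant:

```python
import numpy as np
from scipy.linalg import expm

F = np.array([[0.0, 1.0], [-1.0, -2.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])

def y_step(t):
    # Closed form of the step-response convolution: x(t) = F^{-1}(e^{Ft} - I)G.
    x = np.linalg.solve(F, expm(F * t) - np.eye(2)) @ G
    return (H @ x).item()

def y_analytic(t):
    # Step response of the critically damped system 1 / (s + 1)^2.
    return 1.0 - (1.0 + t) * np.exp(-t)

for t in (0.5, 1.0, 4.0):
    assert abs(y_step(t) - y_analytic(t)) < 1e-9
```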
2.1.4 Solution of state space models with complex eigenvalues*<br />
This section is supplementary for students highly interested in <strong>Control</strong>s.<br />
Suppose that a matrix F ∈ R^{2×2} has complex eigenvalues σ ± jω such that F can be decomposed to

F = V Σ V^{-1}   where   Σ = [σ ω; −ω σ]
Then the Taylor expansion of e^{Ft} gives

e^{Ft} = V e^{σt} [cos(ωt) sin(ωt); −sin(ωt) cos(ωt)] V^{-1}
The remaining procedure to find a solution to the state space models is the same as<br />
above.<br />
[Example 2-9] Consider the same mass-spring-damper system as in the previous example, but with different properties: M = 1, c = 2, and k = 2. The system is under an impulse input. The state space model can be set as

(d/dt) x = [0 1; −2 −2] x + [0; 1] u
y = [1 0] x
x(0) = 0 ∈ R^2

where x(t) = [y; ẏ] ∈ R^2 is the state.

To solve e^{Ft}, the state matrix F = [0 1; −2 −2] is to be decomposed. The characteristic equation of F is

det [−λ 1; −2 −2−λ] = λ² + 2λ + 2 = (λ+1−j)(λ+1+j) = 0

The eigenvalue equation for λ1 = −1 + j is

[−λ 1; −2 −2−λ] v = [1−j 1; −2 −1−j] v = 0
An eigenvector (among many) is v1 = [−1; 1−j]. Note that the remaining eigenvector is the complex conjugate of v1; therefore, v2 = [−1; 1+j]. Since v_R and v_I are v_R = [−1; 1] and v_I = [0; −1], respectively, the state matrix F can be decomposed to

F = [−1 0; 1 −1] [−1 1; −1 −1] [−1 0; 1 −1]^{-1}

Finally, a new state space model in oscillatory canonical form is obtained as

(d/dt)x̄ = Σ x̄ + Ḡ u
y = H̄ x̄
x̄(0) = 0 ∈ R^2
where

Σ = V^{-1} F V = [−1 1; −1 −1]
Ḡ = V^{-1} G = [−1 0; −1 −1][0; 1] = [0; −1]
H̄ = H V = [1 0][−1 0; 1 −1] = [−1 0]
The exponential of Σt is

e^{Σt} = e^{−t} [cos(t) sin(t); −sin(t) cos(t)]

and the state is

x̄(t) = ∫_0^t e^{−(t−τ)} [cos(t−τ) sin(t−τ); −sin(t−τ) cos(t−τ)] [0; −1] δ(τ) dτ = e^{−t} [−sin(t); −cos(t)]   for t ≥ 0
The output is<br />
y(t) = e −t sin(t) fort ≥ 0<br />
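As a quick sanity check (a sketch assumed here, not part of the textbook), the impulse response of a state space model is H e^{Ft} G, which for this system should equal e^{−t} sin(t) at every time instant:

```python
import numpy as np
from scipy.linalg import expm

F = np.array([[0.0, 1.0], [-2.0, -2.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])

def y_impulse(t):
    # The impulse response of an LTI state space model is H e^{Ft} G.
    return (H @ expm(F * t) @ G).item()

for t in np.linspace(0.0, 5.0, 11):
    assert abs(y_impulse(t) - np.exp(-t) * np.sin(t)) < 1e-9
```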
2.2 Laplace Transforms and Transfer Functions<br />
In mathematics, the Laplace transform is a widely used integral transform. Denoted L{f(t)}, it is a linear operator on a function f(t) with a real argument t (t ≥ 0) that transforms it into a function F(s) with a complex argument s. The respective pairs of f(t) and F(s) are matched in tables. The Laplace transform has the useful property that many relationships and operations over the originals f(t) correspond to simpler relationships and operations over the images F(s).
The Laplace transform is related to the Fourier transform, but whereas the Fourier<br />
transform resolves a function or signal into its modes of vibration, the Laplace transform<br />
resolves a function into its moments (i.e., poles and zeros). Like the Fourier transform,<br />
the Laplace transform is used for solving differential and integral equations.<br />
2.2.1 Formal definition<br />
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

F(s) = L{f(t)} = ∫_0^∞ e^{−st} f(t) dt

where the parameter s is a complex number.
Á  ÃÄ´ÅÆÇÈÉÊÅ˱ǴÌÍ ´Ì<br />
œ žŸ ¡¢£Ÿ¤¥ž ¥¦ ¦½ λϰŸ¾ ¥Ð ¢ Ÿ<br />
¬¢ ŸŸ¸»Ñ¾ œ ·¤°¬£¸ Ñ £° ¬¢ ŸŸ¸ £° ¬¥°°¤¹<br />
œ ª¬»° Ÿ¾ »° Ó ¼ ¾<br />
Figure 2.6: Matlab code for integrating a function defined with symbolic variables.<br />
You may utilize syms.m, int.m, and diff.m functions in various ways, e.g.,<br />
Laplace transform, Fourier transform, Lagrangian mechanics, heat transfer, dynamics,<br />
etc.<br />
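The same symbolic integration can be sketched in Python with sympy (an assumption — the figure itself uses Matlab's Symbolic Toolbox): the definition of the Laplace transform is integrated directly for f(t) = e^{−at}, and the result matches the table entry 1/(s + a).

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Laplace transform of f(t) = e^{-a t} by direct integration of the definition.
f = sp.exp(-a * t)
F = sp.integrate(f * sp.exp(-s * t), (t, 0, sp.oo))

assert sp.simplify(F - 1 / (s + a)) == 0
```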
Note. Properties of Laplace transform<br />
• Linearity: L{af(t)+bg(t)} = aF(s)+bG(s), where a andbare any scalars.<br />
• Differentiation: L{f ′ (t)} = sF(s)−f(0)<br />
• Second differentiation: L{f ′′ (t)} = s 2 F(s)−sf(0)−f ′ (0)<br />
• Integration: L{∫_0^t f(τ) dτ} = F(s)/s
• Convolution: L{(f ⋆ g)(t)} = L{∫_0^t f(τ) g(t−τ) dτ} = F(s)G(s)
• Initial value theorem: f(0 + ) = lim s→∞ sF(s)<br />
• Final value theorem: f(∞) = lim s→0 sF(s), if the final value exists.<br />
<strong>Digital</strong> <strong>Control</strong> <strong>Systems</strong>, Sogang University<br />
Kyoungchul Kong
2.2. LAPLACE TRANSFORMS AND TRANSFER FUNCTIONS 30<br />
2.2.2 Output time histories<br />
Note. Laplace transform of selected signals<br />
All signals,f(t), are defined over t ≥ 0.<br />
• Unit impulse: L{δ(t)} = 1<br />
• Delayed impulse: L{δ(t−τ)} = e −τs<br />
• Unit step (integrate unit impulse): L{u(t)} = 1/s
• Delayed unit step: L{u(t−τ)} = e^{−τs}/s
• Ramp (integrate unit step): L{t} = 1/s²
• n-th power for an integer n: L{t^n/n!} = 1/s^{n+1}
• Exponential decay: L{e^{−αt}} = 1/(s+α)
• Exponential approach: L{1 − e^{−αt}} = α/(s(s+α))
• Sine: L{sin(ωt)} = ω/(s² + ω²)
• Cosine: L{cos(ωt)} = s/(s² + ω²)
• Hyperbolic sine: L{sinh(αt)} = α/(s² − α²)
• Hyperbolic cosine: L{cosh(αt)} = s/(s² − α²)
2.2.3 Transfer function<br />
Transfer functions are commonly used in the analysis of systems such as single-input<br />
single-output systems.<br />
In its simplest form, for a continuous-time input signal u(t) and output signal y(t), the transfer function is the linear mapping of the Laplace transform of the input, U(s), to that of the output, Y(s), i.e.

Y(s) = G(s)U(s)

or

G(s) = Y(s)/U(s) = L{y(t)}/L{u(t)}

where G(s) is the transfer function of the system.
[Example 2-10] Suppose that you adjusted the desired temperature of a room by −4 °C at t = 0. By controlling an air-conditioner, the room temperature was changed by y(t) = −4 + 4e^{−3t}. What is the transfer function of the room equipped with the air-conditioner?

The transfer function is defined by the relationship between the input signal and the output signal, i.e.

G(s) = Y(s)/U(s) = L{−4 + 4e^{−3t}} / L{−4} = (−4/s + 4/(s+3)) / (−4/s) = 3/(s+3)

where G(s) is the transfer function of the room equipped with the air-conditioner.
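The algebra in this example can be verified symbolically; the sketch below (sympy assumed — the course uses Matlab) divides the two transforms and simplifies the ratio:

```python
import sympy as sp

s = sp.symbols('s')

Y = -4 / s + 4 / (s + 3)   # L{-4 + 4 e^{-3t}}
U = -4 / s                 # L{-4}
G = sp.simplify(Y / U)

assert sp.simplify(G - 3 / (s + 3)) == 0
```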
[Example 2-11] Suppose that a linear continuous system follows the equation of motion

ÿ + 2ζω₀ẏ + ω₀²y = K₀u(t)

where the initial conditions are all zeros.

The transfer function is obtained by taking the Laplace transform of both sides of the equation, i.e.

L{ÿ + 2ζω₀ẏ + ω₀²y} = L{K₀u(t)}
s²Y(s) + 2ζω₀sY(s) + ω₀²Y(s) = K₀U(s)

Thus, rearranging the equation above, the transfer function is obtained:

G(s) = Y(s)/U(s) = K₀/(s² + 2ζω₀s + ω₀²)
2.2.4 Relationship between state space models and transfer functions<br />
Conversion from state space to transfer function<br />
Consider a state space model:<br />
ẋ = Fx+Gu<br />
y = Hx+Ju<br />
Figure 2.7: Matlab code for defining a transfer function by coefficients.<br />
Figure 2.9: Various methods for analysis of an input-output relationship: (a) the actual system Mÿ + ky = u under an arbitrary input u(t); (b) time domain analysis, y(t) = y₀(t) + ∫_0^t g(t−τ)u(τ)dτ, where g(t) is the impulse response of the system; (c) state space analysis, ẋ = Fx + Gu, y = Hx, with x(t) = e^{Ft}x(0) + ∫_0^t e^{F(t−τ)}Gu(τ)dτ; (d) Laplace domain analysis, Y(s) = U(s)/(Ms² + k); (e) frequency domain analysis.
where x ∈ R n , y ∈ R, u ∈ R, F ∈ R n×n , G ∈ R n×1 , H ∈ R 1×n , and J ∈ R. Note<br />
that the feedforward matrix,J, is included for the sake of generality. Since the state space<br />
model above consists only of linear functions 2 , the Laplace transform can be applied 3 .<br />
Taking the Laplace transform, we obtain

L{ẋ} = L{Fx + Gu}
L{y} = L{Hx + Ju}

Assuming that x(0) = 0,

sX(s) = FX(s) + GU(s)
Y(s) = HX(s) + JU(s)
Rearranging the equation above, the transfer function fromU(s) toY(s) is obtained:<br />
Y(s)/U(s) = H(sI − F)^{-1}G + J    (2.10)
[Example 2-12] A state space model with state matrices F = [0 1; −1 −2], G = [0; 1], H = [1 0], J = 0, is converted into a transfer function as:

G(s) = [1 0] (sI − [0 1; −1 −2])^{-1} [0; 1] + 0 = 1/(s² + 2s + 1)
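This conversion can be reproduced numerically; scipy.signal.ss2tf is assumed here as the Python counterpart of the corresponding Matlab function:

```python
import numpy as np
from scipy.signal import ss2tf

F = [[0.0, 1.0], [-1.0, -2.0]]
G = [[0.0], [1.0]]
H = [[1.0, 0.0]]
J = [[0.0]]

num, den = ss2tf(F, G, H, J)

# Expect G(s) = 1 / (s^2 + 2 s + 1).
assert np.allclose(den, [1.0, 2.0, 1.0])
assert np.allclose(num, [[0.0, 0.0, 1.0]])
```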
Recall that the dimensions of the state matrices are F ∈ R^{n×n}, G ∈ R^{n×1}, H ∈ R^{1×n}, and J ∈ R, and a transfer function G(s) is obtained uniquely via

G(s) = H (sI − F)^{-1} G + J = b(s)/a(s)

where H is 1×n, (sI − F)^{-1} is n×n, G is n×1, J is 1×1, b(s) is the numerator polynomial, and a(s) is the denominator polynomial. Since (sI − F)^{-1} is reduced to

(sI − F)^{-1} = Adj(sI − F) / det(sI − F)
2 In other words, the state space equation is a linear system.
3 This is not always true if the state matrix includes nonlinear or time-varying functions. For example, if ẋ = [0 1; −sin(t) −cos(t)] x + [0; 1] u, the state space model cannot be transformed by the Laplace transform. Namely, the state space form is capable of dealing with nonlinear or time-varying systems, while the Laplace transform can only be applied to linear time-invariant systems.
where Adj(A) is the transpose of the cofactor matrix of A. 4 Therefore, the polynomials a(s) and b(s) are

a(s) = det(sI − F) = s^n + a_{n−1}s^{n−1} + a_{n−2}s^{n−2} + ... + a_0
b(s) = H Adj(sI − F) G + J det(sI − F) = b_m s^m + b_{m−1}s^{m−1} + b_{m−2}s^{m−2} + ... + b_0

where n and m are the orders of a(s) and b(s), respectively. Note that m ≤ n (m = n only when J ≠ 0). Transfer functions with m ≤ n are called realizable systems. Transfer functions with m > n, i.e. unrealizable systems, cannot be realized in practice. Most mechanical systems have transfer functions with m < n.
Conversion from transfer function to state space<br />
Consider a differential equation of a mechanical system:<br />
y⃛ + a₂ÿ + a₁ẏ + a₀y = b₂ü + b₁u̇ + b₀u    (2.11)

where the initial conditions are all 0. The transfer function of the differential equation above can be obtained by taking the Laplace transform, i.e.

G(s) = (b₂s² + b₁s + b₀) / (s³ + a₂s² + a₁s + a₀)    (2.12)
Method 1: controllable canonical form (CCF)<br />
The differential equation in (2.11) can be reformulated into the following equations.<br />
ẋ₁ = x₂
ẋ₂ = x₃
ẋ₃ = −a₀x₁ − a₁x₂ − a₂x₃ + u
y = b₀x₁ + b₁x₂ + b₂x₃
4 For example, if A = [a b; c d], then Adj(A) = [d −b; −c a]. For higher dimensional matrices, refer to Advanced Engineering Mathematics.
In matrix form:

(d/dt)[x₁; x₂; x₃] = [0 1 0; 0 0 1; −a₀ −a₁ −a₂] [x₁; x₂; x₃] + [0; 0; 1] u
y = [b₀ b₁ b₂] [x₁; x₂; x₃]

where the state matrices are denoted F_c, G_c, and H_c, respectively. Note that H_c(sI − F_c)^{-1}G_c = G(s) in (2.12).
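The identity H_c(sI − F_c)^{-1}G_c = G(s) can be checked numerically at any test point s; a sketch (with hypothetical coefficients, not taken from the text):

```python
import numpy as np

# Hypothetical coefficients, i.e.
# G(s) = (3 s^2 + 2 s + 1) / (s^3 + 6 s^2 + 11 s + 6).
a0, a1, a2 = 6.0, 11.0, 6.0
b0, b1, b2 = 1.0, 2.0, 3.0

Fc = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [-a0, -a1, -a2]])
Gc = np.array([[0.0], [0.0], [1.0]])
Hc = np.array([[b0, b1, b2]])

def G_of_s(s):
    # Evaluate H_c (sI - F_c)^{-1} G_c at a complex test point s.
    return (Hc @ np.linalg.inv(s * np.eye(3) - Fc) @ Gc).item()

s = 1.0 + 2.0j
expected = (b2 * s**2 + b1 * s + b0) / (s**3 + a2 * s**2 + a1 * s + a0)
assert abs(G_of_s(s) - expected) < 1e-9
```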
Method 2: observable canonical form (OCF)<br />
The differential equation in (2.11) can be reformulated into the following equations.

(d/dt){ÿ + a₂ẏ − b₂u̇ + a₁y − b₁u} = −a₀y + b₀u,   where the braced quantity is defined as x₃
(d/dt){ẏ + a₂y − b₂u} = x₃ − a₁y + b₁u,   where the braced quantity is defined as x₂
(d/dt) y = x₂ − a₂y + b₂u,   where y is defined as x₁

The equations above can be arranged in matrix form:

(d/dt)[x₁; x₂; x₃] = [−a₂ 1 0; −a₁ 0 1; −a₀ 0 0] [x₁; x₂; x₃] + [b₂; b₁; b₀] u
y = [1 0 0] [x₁; x₂; x₃]

where the state matrices are denoted F_o, G_o, and H_o, respectively. Note that H_o(sI − F_o)^{-1}G_o = G(s) in (2.12).
Method 3-1: diagonal canonical form (DCF)<br />
When the transfer function in (2.12) has no repeated poles 5, its partial fraction expansion is

G(s) = k₁/(s − p₁) + k₂/(s − p₂) + k₃/(s − p₃)

where the pᵢ's are the poles of G(s). Note that the new transfer function can be represented with the following differential equations.

ẋ₁ = p₁x₁ + u
ẋ₂ = p₂x₂ + u
ẋ₃ = p₃x₃ + u
y = k₁x₁ + k₂x₂ + k₃x₃

In matrix form:

(d/dt)[x₁; x₂; x₃] = [p₁ 0 0; 0 p₂ 0; 0 0 p₃] [x₁; x₂; x₃] + [1; 1; 1] u
y = [k₁ k₂ k₃] [x₁; x₂; x₃]

where the state matrices are denoted F_d, G_d, and H_d, respectively. Note that H_d(sI − F_d)^{-1}G_d = k₁/(s − p₁) + k₂/(s − p₂) + k₃/(s − p₃).

5 Poles of a transfer function are equivalent to the eigenvalues of the corresponding state matrix.
Method 3-2: Jordan canonical form (JCF)<br />
If the transfer function in (2.12) has a repeated pole (p₂ = p₃ = p_m), it can be reduced to

G(s) = k₁/(s − p₁) + k₂/(s − p_m)² + k₃/(s − p_m)

The new transfer function can be represented with the following differential equations.

ẋ₁ = p₁x₁ + u
ẋ₂ = p_m x₂ + x₃
ẋ₃ = p_m x₃ + u
y = k₁x₁ + k₂x₂ + k₃x₃

In matrix form:

(d/dt)[x₁; x₂; x₃] = [p₁ 0 0; 0 p_m 1; 0 0 p_m] [x₁; x₂; x₃] + [1; 0; 1] u
y = [k₁ k₂ k₃] [x₁; x₂; x₃]

where the state matrices are denoted F_j, G_j, and H_j, respectively. Note that H_j(sI − F_j)^{-1}G_j = k₁/(s − p₁) + k₂/(s − p_m)² + k₃/(s − p_m).
[Summary] A transfer function

G(s) = (b₂s² + b₁s + b₀)/(s³ + a₂s² + a₁s + a₀) = k₁/(s − p₁) + k₂/(s − p₂) + k₃/(s − p₃)

can be converted into a state space model

ẋ = Fx + Gu
y = Hx

where the state matrices are:

Controllable Canonical Form:  F = [0 1 0; 0 0 1; −a₀ −a₁ −a₂],  G = [0; 0; 1],  H = [b₀ b₁ b₂]
Observable Canonical Form:    F = [−a₂ 1 0; −a₁ 0 1; −a₀ 0 0],  G = [b₂; b₁; b₀],  H = [1 0 0]
Diagonal Canonical Form:      F = [p₁ 0 0; 0 p₂ 0; 0 0 p₃],     G = [1; 1; 1],     H = [k₁ k₂ k₃]

When the transfer function has a repeated pole such that

G(s) = (b₂s² + b₁s + b₀)/(s³ + a₂s² + a₁s + a₀) = k₁/(s − p₁) + k₂/(s − p_m)² + k₃/(s − p_m)

the corresponding state space model is

ẋ = Fx + Gu
y = Hx

where the state matrices are:

Controllable Canonical Form:  F = [0 1 0; 0 0 1; −a₀ −a₁ −a₂],  G = [0; 0; 1],  H = [b₀ b₁ b₂]
Observable Canonical Form:    F = [−a₂ 1 0; −a₁ 0 1; −a₀ 0 0],  G = [b₂; b₁; b₀],  H = [1 0 0]
Jordan Canonical Form:        F = [p₁ 0 0; 0 p_m 1; 0 0 p_m],   G = [1; 0; 1],     H = [k₁ k₂ k₃]
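All the realizations in the summary describe the same transfer function. The sketch below (an assumption, with arbitrarily chosen coefficients, not from the text) builds the CCF and DCF realizations of one G(s) and checks that they evaluate identically at a test point:

```python
import numpy as np
from scipy.signal import residue

b = [3.0, 2.0, 1.0]        # hypothetical numerator  b2 s^2 + b1 s + b0
a = [1.0, 6.0, 11.0, 6.0]  # denominator with distinct poles -1, -2, -3

r, p, _ = residue(b, a)    # partial fraction expansion: residues and poles

# CCF realization read off the first row of the table.
Fc = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-6.0, -11.0, -6.0]])
Gc = np.array([[0.0], [0.0], [1.0]])
Hc = np.array([[1.0, 2.0, 3.0]])   # [b0 b1 b2]

# DCF realization read off the third row of the table.
Fd = np.diag(p).astype(complex)
Gd = np.ones((3, 1))
Hd = np.reshape(r, (1, 3))

def tf(F, G, H, s):
    return (H @ np.linalg.inv(s * np.eye(3) - F) @ G).item()

s = 0.5 + 1.0j
assert abs(tf(Fc, Gc, Hc, s) - tf(Fd, Gd, Hd, s)) < 1e-9
```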
2.2.5 Response versus pole locations<br />
Modes in an impulse response<br />
Given the transfer function of a linear system,

G(s) = Y(s)/U(s) = b(s)/a(s)

the roots of a(s) = 0, called poles, make G(s) infinite, and those of b(s) = 0, called zeros, make G(s) zero.

Each pole location in the s-plane can be identified with a particular type of response. When the poles of a transfer function are not repeated (i.e., all the roots of a(s) = 0 are distinct), it can be expanded by a partial fraction expansion as
G(s) = Σ_{i=1}^{n} kᵢ/(s − pᵢ)

where the kᵢ's are scalars, the pᵢ's are the poles of G(s), and n is the order of a(s). Suppose that the system is under an impulse input, i.e., U(s) = 1. Then, the output is

y(t) = L^{-1}{G(s)U(s)} = L^{-1}{G(s)} = L^{-1}{ Σ_{i=1}^{n} kᵢ/(s − pᵢ) } = Σ_{i=1}^{n} kᵢ e^{pᵢt}

Note that the impulse response is a linear combination of the e^{pᵢt}, each of which is called a mode. When all the poles have strictly negative real parts 6, every mode converges to zero as t → ∞, and thus y(t) → 0.
[Example 2-13] Consider a transfer function:

G(s) = (8s + 4)/(s³ + 4s² + s − 6)

Using the partial fraction expansion, G(s) can be expanded to

G(s) = −5/(s + 3) + 4/(s + 2) + 1/(s − 1)

6 That is, Re{pᵢ} < 0 for all i.
Figure 2.10: Partial fraction expansion and plotting an impulse response<br />
The impulse response (i.e., U(s) = 1) of G(s) is

y(t) = −5e^{−3t} + 4e^{−2t} + e^{t}

Since e^{t} diverges as t → ∞, the output does not converge to 0.
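A hypothetical Python counterpart of the residue.m computation in Figure 2.10 (scipy assumed) recovers the same poles and residues:

```python
import numpy as np
from scipy.signal import residue

r, p, _ = residue([8.0, 4.0], [1.0, 4.0, 1.0, -6.0])

# Expect poles {-3, -2, 1} with residues {-5, 4, 1}; pairing order may vary.
pairs = sorted(zip(p.real, r.real))
assert np.allclose(pairs, [(-3.0, -5.0), (-2.0, 4.0), (1.0, 1.0)])
```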
Complex poles<br />
Complex poles can be described in terms of their real and imaginary parts as<br />
p = −σ ±jω d<br />
Since complex poles always come in complex conjugate pairs for real polynomials, the<br />
denominator corresponding to a complex pair is<br />
a(s) = (s+σ −jω d )(s+σ +jω d ) = (s+σ) 2 +ω 2 d<br />
When finding the transfer function from differential equations, we typically 7 write the<br />
result in the polynomial form<br />
G(s) = b₀/(s² + 2ζωₙs + ωₙ²)    (2.13)

where b₀ is any scalar, σ = ζωₙ, and ω_d = ωₙ√(1 − ζ²). The parameter ζ is called the damping ratio, and ωₙ is called the natural frequency.
[Example 2-14] Consider a transfer function:

G(s) = 3/(s² + s + 1)

7 That is, it is a common option, but not the only option.
Figure 2.11: Natural frequency and damping ratio of a transfer function<br />
G(s) can be expressed in the following form:

G(s) = 3/(s² + 2ζωₙs + ωₙ²)

where ωₙ = 1 and ζ = 0.5. One can say that the transfer function has a natural frequency of 1 and a damping ratio of 0.5.
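The same quantities can be extracted directly from the pole locations, since |p| = ωₙ and −Re(p)/|p| = ζ for a complex pole pair; a sketch mirroring damp.m in Figure 2.11 (Python assumed, not from the text):

```python
import numpy as np

poles = np.roots([1.0, 1.0, 1.0])   # roots of s^2 + s + 1

wn = abs(poles[0])                  # natural frequency: |p| = omega_n
zeta = -poles[0].real / wn          # damping ratio: -Re(p) / |p|

assert abs(wn - 1.0) < 1e-9
assert abs(zeta - 0.5) < 1e-9
```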
When a transfer function has more than two poles, it is not clear how the damping<br />
ratio and the natural frequency are defined. Originally the transfer function in (2.13) is<br />
obtained from a mass-spring-damper system, which follows a second-order linear differential<br />
equation 8 . Therefore, in a strict sense the damping ratio and the natural frequency<br />
can be defined only when the transfer function has two poles. In a general sense, ζ and<br />
ω n are defined for each pair of complex poles.<br />
Repeated poles<br />
Suppose that a transfer function G(s) has m distinct poles and n − m repeated poles. By the partial fraction expansion, it can be expanded as

G(s) = Σ_{i=1}^{m} kᵢ/(s − pᵢ) + Σ_{k=1}^{n−m} κ_k/(s − p_m)^{k+1}

where the kᵢ's and κ_k's are scalars, the pᵢ's are the distinct poles, and p_m is the repeated pole. If the system is under an impulse input, i.e., U(s) = 1, the output is

y(t) = L^{-1}{ Σ_{i=1}^{m} kᵢ/(s − pᵢ) + Σ_{k=1}^{n−m} κ_k/(s − p_m)^{k+1} } = Σ_{i=1}^{m} kᵢe^{pᵢt} + Σ_{k=1}^{n−m} α_k t^k e^{p_m t}

8 mÿ + cẏ + ky = u
Figure 2.12: Stability and the location of poles in the s-plane: poles with negative real parts (left half plane) are asymptotically stable; poles with positive real parts (right half plane) are unstable; poles on the imaginary axis are stable in the sense of Lyapunov if not repeated.
where α k ’s are scalars. Note that the last term includes t k e pmt . It satisfies the following<br />
conditions:<br />
f(t) = t^k e^{σ_m t} > 0   for all k and t > 0
f(0) = 0
ḟ(t) = (k + σ_m t) t^{k−1} e^{σ_m t},   which is > 0 for all k and 0 < t < −kσ_m^{−1}, and < 0 for −kσ_m^{−1} < t
where σ m is the real part of p m . Therefore, f(t) converges to 0 as t → ∞, if and only if<br />
Re(p m ) < 0. f(t) does not converge to zero if Re(p m ) = 0, unless k = 0, which means<br />
that no pole is repeated.<br />
In summary, a transfer function is asymptotically stable (or, simply stable) if all<br />
the poles have strictly negative real parts. It is unstable if any of the poles has positive<br />
real part. It is marginally stable (or, stable in the sense of Lyapunov) if all the poles<br />
have non-positive real parts and no pole is repeated on the imaginary axis. This theorem<br />
is depicted in Fig. 2.12.<br />
2.2.6 Poles and eigenvalues<br />
Recall that<br />
G(s) = b(s)/a(s) = H(sI − F)^{-1}G + J
where F , G, H, and J are the state matrices of a state space equation, and G(s) is the<br />
corresponding transfer function. The poles are the roots of<br />
a(s) = 0<br />
which is called the characteristic equation. Also recall that a(s) = det{sI − F} and
the eigenvalues of the matrixF are calculated from<br />
det{λI −F} = 0<br />
Therefore, the poles ofG(s) are equivalent to the eigenvalues of the state matrix F .<br />
A system is asymptotically stable if<br />
• all the poles have strictly negative real parts (in the Laplace domain), or<br />
• all the eigenvalues have strictly negative real parts (in the state space).<br />
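This equivalence is easy to check numerically. A minimal sketch in Python (a stand-in for the course's Matlab, with an illustrative state matrix F):

```python
# Check numerically that the poles of G(s), i.e. the roots of det(sI - F) = 0,
# coincide with the eigenvalues of F (illustrative 2x2 state matrix).
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical state matrix
eigenvalues = np.linalg.eigvals(F)         # eigenvalues of F
char_poly = np.poly(F)                     # coefficients of det(sI - F)
poles = np.roots(char_poly)                # poles of G(s)
# Both sets are {-1, -2}: all real parts are negative, so this system
# is asymptotically stable by either test.
print(np.sort(eigenvalues.real), np.sort(poles.real))
```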
2.3 Frequency Response Analysis<br />
2.3.1 Fourier transform versus Laplace transform<br />
There are several common conventions for defining the Fourier transform F(jω) of an<br />
integrable functionf(t). A common expression is<br />
F(jω) = ∫₀^∞ f(t) e^{−jωt} dt
where ω ∈ R⁺ is a positive scalar. Notice that the Fourier transform is related to the Laplace transform
F(s) = ∫₀^∞ f(t) e^{−st} dt
by s = jω. Therefore, Fourier-transformed functions can be obtained by substituting jω for s.
[Example 2-15] The following functions (signals) are equivalent:<br />
Description               Time-domain (t ∈ R⁺)   Laplace-domain (s ∈ C)   Frequency-domain (ω ∈ R⁺)
Unit impulse              δ(t)                   1                        1
Delayed impulse           δ(t − τ)               e^{−τs}                  e^{−jωτ}
Unit step                 u(t)                   1/s                      1/(jω)
Delayed unit step         u(t − τ)               e^{−τs}/s                e^{−jωτ}/(jω)
Ramp                      t                      1/s²                     −1/ω²
n-th power (integer n)    t^n                    n!/s^{n+1}               n!/(jω)^{n+1}
Exponential decay         e^{−αt}                1/(s + α)                1/(jω + α)
Sine                      sin(ω₀t)               ω₀/(s² + ω₀²)            ω₀/(ω₀² − ω²)
Notice that the unit impulse signal includes all frequency components. In the Laplace domain, all the coefficients are real numbers but the Laplace operator, s, is a complex number. In the frequency domain, however, the coefficients are complex numbers while the frequency term, ω, is a positive real number.
As the input and output signals can be transformed into the frequency domain by taking the Fourier transform, their relationship can also be represented in the frequency domain. The frequency-domain input-output relationship, which is usually represented by a Bode plot, is obtained by substituting jω for s in the transfer function.
2.3.2 Drawing Bode plots by hand and by Matlab
Consider a transfer function:
G(s) = k ∏(s − z_i) / ∏(s − p_j)
where k is a gain, z_i's are the zeros of G(s), and p_j's are the poles of G(s). In the frequency domain, G(jω) is
G(jω) = k ∏(jω − z_i) / ∏(jω − p_j)
The Bode plot can be drawn by the following steps:
[Step 1] Sort z i ’s and p j ’s by their absolute values.<br />
[Step 2] From ω = 0 to ω → ∞, make sections by |Re{z i }|’s and |Re{p j }|’s.<br />
[Step 3] Calculate |G(0)|. If |G(0)| does not exist, calculate |G(jω)| for any selected<br />
ω ∈ R + . This value will be used as the starting point of the magnitude plot.<br />
[Step 4] For every section, find an asymptote and draw it on a magnitude graph (a log(ω)–dB graph⁹) and a phase graph. The slope of the magnitude plot is 20(m_ω − n_ω) dB/decade, where m_ω and n_ω are the numbers of (jω) factors in the numerator and denominator of each asymptote, respectively. Similarly, the phase is 90(m_ω + 2m_− − n_ω − 2n_−) in degrees, where m_− and n_− are the numbers of minus signs in the numerator and denominator of each asymptote, respectively. Tip: for easier calculation, do not replace (jω)² with −ω².
[Step 5] Smoothly connect the asymptotes. When any pole or zero is complex, you may<br />
calculate the exact value at the section boundary.<br />
[Example 2-16] The Bode plot of a transfer function
G(s) = (2s + 10)/(s² + 9s − 10) = 2(s + 5)/((s − 1)(s + 10))
G(jω) = 2(jω + 5)/((jω − 1)(jω + 10))
is obtained by:<br />
[Step 1] Since z₁ = −5, p₁ = 1, and p₂ = −10, they are sorted by their absolute values as 1, 5, 10.
⁹ dB is 20 log(x)
Figure 2.13: Asymptotes of a transfer function in the frequency domain. (Magnitude in dB and phase in degrees versus frequency, 10⁻¹ to 10² rad/sec.)
[Step 2-3-4] There are four sections: 0 < ω < 1, 1 < ω < 5, 5 < ω < 10, and 10 < ω. For each section, the asymptote is
• at ω = 0: |G(j0)| = |2(5)/((−1)(10))| = 1 (i.e., 0 dB)
• 0 < ω < 1: G(jω) ≈ 2(5)/((−1)(10)) (slope: 0, phase: −180)
• 1 < ω < 5: G(jω) ≈ 2(5)/((jω)(10)) (slope: −20, phase: −90)
• 5 < ω < 10: G(jω) ≈ 2(jω)/((jω)(10)) (slope: 0, phase: 0)
• 10 < ω: G(jω) ≈ 2(jω)/((jω)(jω)) (slope: −20, phase: −90)
They are plotted on a magnitude graph and a phase graph as in Fig. 2.13.<br />
[Step 5] Finally, the Bode plot is obtained by smoothly connecting the asymptotes as<br />
in Fig. 2.14.<br />
Figure 2.14: Smoothly connected asymptotes. (Magnitude in dB and phase in degrees versus frequency, 10⁻¹ to 10² rad/sec.)
Figure 2.15: Matlab code for drawing a Bode plot<br />
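The Matlab listing in Fig. 2.15 did not survive extraction. As a stand-in, the same workflow in Python with `scipy.signal`, applied to the G(s) of [Example 2-16]:

```python
# Bode data for G(s) = (2s + 10)/(s^2 + 9s - 10) of [Example 2-16]:
# magnitude in dB and phase in degrees over 10^-1 ... 10^2 rad/sec.
import numpy as np
from scipy import signal

G = signal.TransferFunction([2, 10], [1, 9, -10])
w, mag_db, phase_deg = signal.bode(G, w=np.logspace(-1, 2, 400))
# Low-frequency asymptote: |G(j0)| = 1 (about 0 dB), phase near -180 degrees,
# matching the hand-drawn asymptotes above.
print(mag_db[0], phase_deg[0])
```

Plotting `mag_db` and `phase_deg` against `w` on a log-frequency axis reproduces Fig. 2.14.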
[Example 2-17] The Bode plot of a transfer function
G(s) = 4/(s² + s + 4)
G(jω) = 4/((jω)² + jω + 4)
is obtained by:<br />
[Step 1] Since p₁,₂ = −0.5 ± j√15/2, they are sorted by their absolute values (i.e., 2). There is only one section boundary, at 2.
[Step 2-4] There are two sections: 0 < ω < 2 and 2 < ω. For each section, the asymptote is
• at ω = 0: |G(j0)| = |4/4| = 1 (i.e., 0 dB)
• 0 < ω < 2: G(jω) ≈ 4/4 (slope: 0, phase: 0)
• 2 < ω: G(jω) ≈ 4/(jω)² (slope: −40, phase: −180)
The exact value of |G(jω)| at the section boundary is
|G(2j)| = |4/((2j)² + 2j + 4)| = |4/(2j)| = 2
[Step 5] Finally, the Bode plot is obtained by smoothly connecting the asymptotes as<br />
in Fig. 2.16, where the line must pass through the exact value at the section boundary.<br />
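The boundary value can be verified with a one-line numerical check (Python, a stand-in for the course's Matlab):

```python
# |G(2j)| for G(s) = 4/(s^2 + s + 4): at w = 2 the (jw)^2 term cancels the +4,
# leaving 4/(2j), whose magnitude is 2 (about +6 dB above the 0 dB asymptote).
w = 2.0
G_at_boundary = 4 / ((1j * w) ** 2 + 1j * w + 4)
print(abs(G_at_boundary))  # 2.0
```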
Graphical representation of G(s) in the Laplace-domain and G(jω) in the frequency-domain
Recall that the Laplace domain and the frequency domain are related by s = jω. Since s is a complex number, which can be defined as
s = σ + jω
where σ = Re{s} ∈ R and ω = Im{s} ∈ R, the Bode plot is the line of intersection between G(s) and the plane σ = 0. Namely, the Bode plot is a part of G(s) plotted on the s-plane, and thus information on the locations of poles and zeros of G(s) is resolved in the Bode plot.
Figure 2.16: Smoothly connected asymptotes, passing through the exact value at 2 (rad/sec).
Magnitude plot
Figure 2.17: Magnitude plot of G(s) = (s + 2)/(s² + 6s + 5) on the s-plane and the Bode plot of G(jω)
Phase plot
Figure 2.18: Phase plot of G(s) = (s + 2)/(s² + 6s + 5) on the s-plane and that of G(jω)
2.3.3 Nyquist plot<br />
A Nyquist plot is a parametric plot of a transfer function, G(jω). The most common use of Nyquist plots is for assessing the stability of a system with feedback. In Cartesian coordinates, the real part of G(jω) is plotted on the x-axis, and the imaginary part is plotted on the y-axis.
Figure 2.19: Matlab code for drawing a Nyquist plot<br />
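The Matlab listing in Fig. 2.19 did not survive extraction. A Python sketch of the same idea, tracing the locus of the G(jω) of [Example 2-17] (chosen so that G(jω) stays finite and no arc at infinity is needed):

```python
# Trace the Nyquist locus of G(s) = 4/(s^2 + s + 4) by evaluating G(jw)
# for w from -inf to +inf (approximated by a wide logarithmic sweep).
import numpy as np

w = np.concatenate([-np.logspace(3, -3, 500), np.logspace(-3, 3, 500)])
G = 4 / ((1j * w) ** 2 + 1j * w + 4)
re, im = G.real, G.imag  # plotting (re, im) gives the Nyquist plot
# The locus starts and ends near the origin and passes near 1 + 0j at w = 0.
print(re[500], im[500])
```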
Figure 2.20: A typical feedback control system. (The reference r enters a summing junction (+, −); the error drives the controller C, whose output u drives the plant G to produce the output y, which is fed back.)
Nyquist stability criterion
Suppose a plant G(s) is under feedback control by C(s) as shown in Fig. 2.20.
To assess the stability of the closed loop system using the Nyquist theorem, the Nyquist contour should be constructed first. The Nyquist contour consists of:
• G(jω)C(jω) where ω ∈ (−∞, ∞), and
• a semicircular arc that travels clockwise, with a radius of ∞. This semicircular line is necessary only when G(jω)C(jω) → ∞ for some ω ∈ R.
Given a Nyquist contour, let
• P be the number of unstable poles of GC, and
• Z be the number of unstable poles of the closed loop system GC/(1 + GC). In most cases, Z is desired to be zero.
The resultant contour should encircle the point −1 + 0j counterclockwise N times, where N = P − Z.
Notice that if GC is stable¹⁰, P = 0 and thus the Nyquist contour must not encircle the critical point −1 + 0j.
¹⁰ That is, all the poles of GC have strictly negative real parts.
[Summary] Given:
• GC(s) is an open loop transfer function with P unstable poles,
• T(s) = GC(s)/(1 + GC(s)) is a closed loop transfer function with Z unstable poles,
• N is the number of counterclockwise encirclements of the (−1 + 0j) point by the locus of GC(jω) for −∞ < ω < ∞, and
• n is the degree of the characteristic polynomial of GC(s).
The Nyquist stability criterion states that:
N = P − Z
and Z = 0 implies stability of the closed loop system.
This implies that for a system to be stable under feedback control, the number of counterclockwise encirclements of the (−1 + 0j) point by the locus of GC(jω) for −∞ < ω < ∞ must equal the number of unstable open loop poles.
Proof of the Nyquist stability criterion
Let GC(s) = N(s)/D(s) and F(s) be
F(s) = 1 + GC(s) = 1 + N(s)/D(s) = (D(s) + N(s))/D(s) := D_C(s)/D(s)
Note that the roots of D_C(s) = 0 are the closed loop poles¹¹, and those of D(s) = 0 are the open loop poles. F(s) can be organized as
F(s) = ∏_{i=1}^{n}(s − p_{c,i}) / ∏_{k=1}^{n}(s − p_{o,k})
where the p_{c,i}'s are the closed loop poles (i.e., the roots of D_C(s) = 0), and the p_{o,k}'s are the open loop poles. In the frequency domain,
F(jω) = ∏_{i=1}^{n}(jω − p_{c,i}) / ∏_{k=1}^{n}(jω − p_{o,k})
From the equation above, the phase of F(jω) is calculated as
φ_F(ω) = ∑_{i=1}^{n} φ_{c,i}(ω) − ∑_{k=1}^{n} φ_{o,k}(ω)
¹¹ GC/(1 + GC) = (N/D)/(1 + N/D) = N/(D + N) = N/D_C
then,
Δφ_F := φ_F(∞) − φ_F(−∞) = ∑_{i=1}^{n}(φ_{c,i}(∞) − φ_{c,i}(−∞)) − ∑_{k=1}^{n}(φ_{o,k}(∞) − φ_{o,k}(−∞))
where φ(∞) − φ(−∞) for each pole is either +π or −π as shown in Fig. 2.21. Namely, φ_i(∞) − φ_i(−∞) is +π if p_i is located in the left half plane (LHP), and is −π if p_i is located in the right half plane (RHP).
Figure 2.21: φ(∞) − φ(−∞) for a stable and an unstable pole. (As ω sweeps from −∞ to ∞ along the imaginary axis, the angle of (jω − p_{c,i}) changes by +π for a pole in the LHP, panel (a), and by −π for a pole in the RHP, panel (b).)
Recall that
• the open loop transfer function, GC(s), has P unstable poles, and
• the closed loop transfer function, T(s) = GC(s)/(1 + GC(s)), has Z unstable poles.
Therefore,
Δφ_F = ∑_{i=1}^{n}(φ_{c,i}(∞) − φ_{c,i}(−∞)) − ∑_{k=1}^{n}(φ_{o,k}(∞) − φ_{o,k}(−∞))
     = [(π)(n − Z) + (−π)(Z)] − [(π)(n − P) + (−π)(P)]
     = 2π(P − Z) = 2πN
where
N = counterclockwise encirclements of the origin in the (1 + GC(jω)) plane
  = counterclockwise encirclements of (−1 + 0j) in the GC(jω) plane
Figure 2.22: Graphical representation of phase margin and gain margin. (On the Nyquist plot of GC, the locus crosses the negative real axis at −GM⁻¹, and PM is the angle between the negative real axis and the point where the locus crosses the unit circle.)
Gain margin and phase margin
Recall that if the open loop system, GC, is stable, the corresponding Nyquist contour must not encircle the critical point, −1 + 0j. If GC crosses the critical point¹² such that G(jω₀)C(jω₀) = −1 + 0j for some ω₀ ∈ R, the closed loop system (GC/(1 + GC)) has an infinitely large gain at the frequency ω₀, which means the system amplifies the frequency component of ω₀ infinitely¹³. In general, GC must encircle (−1 + 0j) the right number of times according to the number of unstable open-loop poles.
There are two measures of the stability margin, the gain margin and the phase margin, that represent how far the Nyquist plot of GC is from the critical point.
Suppose that an open loop system, GC, has a gain margin of k and a phase margin of θ. Then, you can expect that
• the closed loop system becomes unstable if the open loop system is amplified by k, or
• the closed loop system becomes unstable if the open loop system has an additional phase delay of θ.
For example, if an open loop system has an infinitely large gain margin, the closed loop<br />
system will not become unstable by increasing its gain. If the open loop system has an<br />
infinitely large phase margin, the closed loop system will always be stable even in the<br />
presence of unknown phase delay.<br />
¹² −1 + 0j; it is also called the "dangerous point."
¹³ This is the response of an unstable system.
[Example 2-18] Suppose that a plant G(s) is under feedback control, where
C(s) = k_P (1 + T_d s + T_i/s)
and it has been found that GC has a gain margin of 5. Then, the maximum gain of k_P that stabilizes G(s) is 5k_P.
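The gain margin can also be found numerically at the phase-crossover frequency. A sketch in Python for an assumed open loop GC(s) = 1/(s(s + 1)(s + 2)) (an illustrative system, not from the text, whose gain margin works out to 6):

```python
# Gain margin of an assumed open loop GC(s) = 1/(s(s+1)(s+2)): find the
# phase-crossover frequency (phase = -180 deg) and invert |GC| there.
import numpy as np

w = np.logspace(-2, 2, 200000)
s = 1j * w
GC = 1 / (s * (s + 1) * (s + 2))
phase = np.angle(GC)                  # radians, wrapped to (-pi, pi]
i = np.argmin(np.abs(phase + np.pi))  # grid point closest to -180 degrees
gain_margin = 1 / np.abs(GC[i])
# Amplifying this GC by the gain margin makes the closed loop marginally stable.
print(gain_margin)
```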
2.4 Controller Design
2.4.1 Objectives of feedback control
Suppose that a plant, G, is under feedback control with a controller, C. There are three main objectives of feedback control: stability, performance, and robustness.
Stability<br />
The feedback controller must stabilize the plant such that
• In the time-domain, the output, y(t), is bounded for a bounded reference input, r(t). When an impulse signal is exerted as a reference input, y(t) converges to zero as t → ∞.
• In the Laplace-domain, the following four transfer functions have poles with strictly negative real parts:
GC/(1 + GC),  C/(1 + GC),  G/(1 + GC),  1/(1 + GC)
• In the state space, the state matrix has eigenvalues with strictly negative real parts.<br />
• In the frequency-domain, the Nyquist contour encircles (−1 + 0j) counterclockwise N times such that N = P, where P is the number of unstable poles of the open loop transfer function, GC(s).
The four statements above are all equivalent. You may select the most appropriate domain<br />
according to the characteristics of your plant.<br />
Performance
Once the plant is stabilized by feedback control, the next objective is to achieve the desired performance. The feedback-controlled system should show the following characteristics:
• In the time-domain, the error, e(t) = r(t) − y(t), should converge to zero as t → ∞ for any constant reference input, r(t) = r₀ < ∞. Moreover, it is desired for e(t) to converge to zero as quickly as possible. In addition, the output should not be affected by a constant disturbance, d(t) = d₀ < ∞. In other words, the error, e(t) = r(t) − y(t), should converge to zero as t → ∞ for any constant disturbance.
• In the Laplace-domain, all the closed-loop poles should be placed as far as possible<br />
from the imaginary axis.<br />
• In the state space, all the eigenvalues should be placed as far as possible from the<br />
imaginary axis.<br />
• In the frequency-domain, (GC/(1 + GC))(jω) ≈ 1 and |(G/(1 + GC))(jω)| ≈ 0 for a sufficiently large frequency range¹⁴. In other words, the magnitudes of GC(jω) and C(jω) should be large enough for a sufficiently large frequency range. Moreover, |GC(0j)| and |C(0j)| should be infinitely large.
Robustness<br />
Most of the controller design techniques are based on a mathematical model of a plant.<br />
In practice, however, the exact model parameters are difficult to identify. Moreover, in<br />
many cases the governing equation is not clear, so that an approximated model is used.<br />
Therefore, a controller designed for a particular set of parameters is required to be robust<br />
such that it works well under a different set of assumptions.
This subject will be discussed more thoroughly in Robust Control Systems, a graduate-level course. You may notice that we have already learned two criteria for stability robustness in the frequency-domain:
• The open loop transfer function, GC(jω), should have a large phase margin to<br />
account for the phase uncertainty.<br />
• The open loop transfer function, GC(jω), should have a large gain margin to account<br />
for the gain uncertainty.<br />
2.4.2 Controller design in time-domain
PID control law
Currently, more than half of the controllers used in industry are proportional-integral-derivative (PID) controllers. When a mathematical model of a system is available, the parameters of the controller can be explicitly determined. However, when a mathematical model is unavailable, the parameters must be determined experimentally. Controller tuning is the process of determining the controller parameters which produce the desired output.
The equation below is the PID control law represented in the time-domain:
u(t) = K_p e(t) + K_i ∫₀ᵗ e(τ) dτ + K_d de(t)/dt
14 Also, it can be explained as “the closed-loop system should have a large frequency bandwidth.”<br />
where,
u(t) is the control signal,
e(t) is the difference between the reference input, r(t), and the current measurement, y(t),
K_p is the gain for a proportional controller,
K_i is the gain for an integral controller, and
K_d is the gain for a derivative controller.
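The control law above can be sketched in discrete time as follows (illustrative Python; the gains and sample period are placeholders, not values from the text):

```python
# A discrete-time sketch of the PID control law
# u = Kp*e + Ki*integral(e) + Kd*de/dt, with sample period dt.
class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, r, y):
        e = r - y                                      # e(t) = r(t) - y(t)
        self.integral += e * self.dt                   # rectangular integration
        derivative = (e - self.prev_error) / self.dt   # backward difference
        self.prev_error = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * derivative
```

Calling `update(r, y)` once per sample period returns the control signal u for that step.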
Trial and error method<br />
The trial and error tuning method is based on guess-and-check. In this method, the proportional<br />
action is the main control, while the integral and derivative actions refine it. The<br />
procedure is as follows.<br />
1. Setting K_i and K_d to zero, increase K_p from zero until the output oscillates.
2. Holding K_p, increase K_d until the oscillation disappears.
3. Holding K_p and K_d, increase K_i such that the error converges to zero for a constant reference input.
4. Repeat 1.–3. until the desired performance is obtained. In each step, you should check the magnitude and oscillation of the control input, u(t). If u(t) reaches the saturation range too often or includes high-frequency noise, K_p, K_d, and K_i should not be increased further.
Ziegler-Nichols method<br />
The Ziegler-Nichols tuning method is a heuristic method of tuning a PID controller. It<br />
was developed by John G. Ziegler and Nathaniel B. Nichols. The procedure is as follows.<br />
1. Setting K_i and K_d to zero, increase K_p from zero until the closed-loop system is marginally stable. The marginally stable gain K_p is called the ultimate gain, K_u.
2. K_u and the oscillation period T_u are used to set the K_p, K_i, and K_d gains depending on the type of controller used:

Control Type           K_p        K_i            K_d
P                      K_u/2      -              -
PI                     K_u/2.2    1.2 K_p/T_u    -
classic PID            0.6 K_u    2 K_p/T_u      K_p T_u/8
Pessen Integral Rule   0.7 K_u    2.5 K_p/T_u    0.15 K_p T_u
some overshoot         0.33 K_u   2 K_p/T_u      K_p T_u/3
no overshoot           0.2 K_u    2 K_p/T_u      K_p T_u/3
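Reading off the classic PID row, for example (Python; K_u and T_u are illustrative measured values, not from the text):

```python
# Classic PID gains from the Ziegler-Nichols table, given an ultimate gain Ku
# and oscillation period Tu measured at marginal stability (illustrative values).
Ku, Tu = 8.0, 0.5
Kp = 0.6 * Ku      # classic PID row: Kp = 0.6 Ku
Ki = 2 * Kp / Tu   # Ki = 2 Kp / Tu
Kd = Kp * Tu / 8   # Kd = Kp Tu / 8
print(Kp, Ki, Kd)  # 4.8 19.2 0.3
```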
2.4.3 Controller design in Laplace-domain
Pole placement
Pole placement is a method employed to place the closed-loop poles of a plant at predetermined locations in the s-plane. Placing poles is desirable because the location of the poles corresponds directly to the characteristics of the response of the system.
Consider a plant:
G(s) = b(s)/a(s)
where a(s) and b(s) are the denominator and numerator polynomials, respectively. Without loss of generality, it is assumed that a(s) and b(s) are coprime, such that there is no pole-zero cancellation. The orders of a(s) and b(s) are assumed to be n and m ≤ n, respectively.
Suppose that G(s) is under feedback control with a controller:
C(s) = (β₁s^{n−1} + β₂s^{n−2} + ... + β_{n−1}s + β_n)/(s^m + α₁s^{m−1} + ... + α_{m−1}s + α_m) = β(s)/α(s)
Note that the orders of α(s) and β(s) are m and n − 1, respectively. Then, the closed-loop characteristic polynomial¹⁵ is
a(s)α(s) + b(s)β(s) = d₀(s^r + d₁s^{r−1} + ... + d_{r−1}s + d_r) := D_c(s)   (2.14)
where r = n + m. Note that D_c(s) has r free variables (i.e., the controller gains, α_{1,2,...,m} and β_{1,2,...,n}), which are to be determined.
Suppose that you have r desired poles, such that
D_c^d(s) = ∏_{i=1}^{r}(s − p_i) = s^r + d₁^d s^{r−1} + ... + d_{r−1}^d s + d_r^d   (2.15)
where the d_i^d's are the coefficients of the desired closed-loop characteristic equation. The superscript d denotes "desired." Then, the controller gains can be determined by comparing (2.14) and (2.15).
¹⁵ That is, the polynomial obtained from GC/(1 + GC).
[Example 2-19] Consider a plant:
G(s) = (s + 1)/(s² + s + 1)
where the desired closed-loop poles are −10, −15, and −20. Since n = 2 and m = 1, a possible feedback controller is:
C(s) = (β₁s + β₂)/(s + α₁)
The closed-loop characteristic polynomial is
D_c(s) = s³ + (1 + α₁ + β₁)s² + (1 + α₁ + β₁ + β₂)s + (α₁ + β₂)
and the desired closed-loop characteristic polynomial is
D_c^d(s) = (s + 10)(s + 15)(s + 20) = s³ + 45s² + 650s + 3000
Thus, the controller gains should be selected as
α₁ = 2395
β₁ = −2351
β₂ = 605
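The gains of [Example 2-19] can be verified numerically: the polynomial a(s)α(s) + b(s)β(s) should have roots at the desired poles (Python, a stand-in for the course's Matlab):

```python
# Verify [Example 2-19]: a(s)alpha(s) + b(s)beta(s) with the gains found above
# should equal s^3 + 45s^2 + 650s + 3000, with roots -10, -15, -20.
import numpy as np

a = [1, 1, 1]        # a(s) = s^2 + s + 1
b = [1, 1]           # b(s) = s + 1
alpha = [1, 2395]    # alpha(s) = s + 2395
beta = [-2351, 605]  # beta(s) = -2351 s + 605
Dc = np.polyadd(np.polymul(a, alpha), np.polymul(b, beta))
print(Dc, np.sort(np.roots(Dc).real))
```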
Pole placement for plants without RHP zeros
Suppose that a plant
G(s) = b(s)/a(s)
does not have zeros in the closed right-half plane¹⁶, i.e., all the zeros have negative real parts. The orders of a(s) and b(s) are n and m ≤ n, respectively. Then a controller can be designed as
C(s) = (β₁s^{n−1} + β₂s^{n−2} + ... + β_{n−1}s + β_n)/b(s) = β(s)/b(s)
With this feedback controller, the closed-loop transfer function from the reference<br />
16 The closed RHP includes the imaginary axis.<br />
input, r(t), to the output, y(t), is
GC(s)/(1 + GC(s)) = [(b(s)/a(s))(β(s)/b(s))]/[1 + (b(s)/a(s))(β(s)/b(s))] = β(s)/(a(s) + β(s))
Notice that the closed-loop characteristic polynomial has been reduced to a(s) + β(s), the order of which is n. Since β(s) includes n design parameters (i.e., β₁, β₂, ..., β_n), the n closed-loop poles can be arbitrarily assigned.
[Example 2-20] Consider the plant in [Example 2-19]. Since the zero of G(s) is located in the open left half plane, the two closed-loop poles can be arbitrarily assigned by a feedback controller
C(s) = (β₁s + β₂)/(s + 1)
The closed-loop characteristic polynomial is
D_c(s) = s² + (1 + β₁)s + (1 + β₂)
Suppose that the desired closed-loop poles are −p₁ and −p₂, i.e.,
D_c^d(s) = (s + p₁)(s + p₂) = s² + (p₁ + p₂)s + p₁p₂
Thus, the controller gains should be selected as
β₁ = p₁ + p₂ − 1
β₂ = p₁p₂ − 1
2.4.4 Controller design in state space
Full state feedback
In the state space, pole placement can be achieved in a more convenient way. Recall that the poles in the Laplace-domain are equivalent to the eigenvalues of the state matrix in the state space. Therefore, in the state space a feedback controller is designed such that the eigenvalues of the state matrix are assigned to pre-determined locations. If a system is controllable, there always exists a state feedback gain such that the closed-loop eigenvalues can be arbitrarily placed.
Figure 2.23: Matlab code for the design of the state feedback gain,K<br />
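The Matlab listing in Fig. 2.23 did not survive extraction. A Python stand-in using `scipy.signal.place_poles`, applied to the system of [Example 2-21] below:

```python
# Compute a state feedback gain K placing the closed-loop eigenvalues of
# F - GK at -1 and -5, for the system of [Example 2-21].
import numpy as np
from scipy.signal import place_poles

F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
K = place_poles(F, G, [-1.0, -5.0]).gain_matrix
print(K)  # approximately [[3. 3.]], matching the hand calculation
```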
When an open loop system is represented in the state space as
ẋ = Fx + Gu
y = Hx
where x ∈ Rⁿ is the state, the eigenvalues of the state matrix are the roots of the characteristic equation given by
det[sI − F] = 0
Full state feedback is realized by manipulating the input, u(t). Consider a feedback controller:
u = −Kx + v
where K ∈ R^{1×n} is the state feedback gain and v is an auxiliary input. Substituting u = −Kx + v into the state space equations above,
ẋ = [F − GK]x + Gv
y = Hx
The eigenvalues of the full state feedback (FSF) system are given by the characteristic equation, det[sI − (F − GK)] = 0. Comparing the terms of this equation with those of the desired characteristic equation yields the values of the feedback gain which force the closed-loop eigenvalues to the locations specified by the desired characteristic equation.
[Example 2-21] Consider a control system given by the following state space equation
ẋ = [0 1; −2 −3] x + [0; 1] u
The open loop system has eigenvalues at s = −1 and s = −2. Suppose, for considerations of the response, we wish the closed-loop system eigenvalues to be located at s = −1 and s = −5; i.e., the desired characteristic equation is then s² + 6s + 5 = 0.
Following the procedure given above, K = [k₁ k₂] ∈ R^{1×2}, and the FSF-controlled system characteristic polynomial is
det[sI − (F − GK)] = det[s, −1; 2 + k₁, s + 3 + k₂] = s² + (3 + k₂)s + (2 + k₁)
Upon setting this characteristic equation equal to the desired characteristic equation,<br />
we find<br />
K = [ 3 3 ]<br />
Therefore, setting u = −Kx forces the closed-loop eigenvalues to the desired locations,<br />
affecting the response as desired.<br />
Controllability*
Controllability is an important property of a dynamic system, and the controllability property plays a crucial role in many control problems. Roughly, the concept of controllability denotes the ability to move the state of a system to an arbitrary desired state in a finite time using certain admissible inputs.
Consider a state space model
ẋ = Fx + Gu
y = Hx
where F ∈ R^{n×n}, G ∈ R^{n}, and H ∈ R^{1×n}.
Recall that the solution of the state space model is
x(t) = e^{Ft} x(0) + ∫₀ᵗ e^{F(t−τ)} G u(τ) dτ
Let us assume zero initial conditions (i.e., x(0) = 0 ∈ Rⁿ) for simplicity. Now, since controllability means that the system can reach any x(t), for a controllable system the integral converges to x(t) with some u(t). Using the Taylor expansion of the exponential,
x(t) = ∫₀ᵗ [ I + F(t − τ) + (F²/2)(t − τ)² + (F³/6)(t − τ)³ + ... ] G u(τ) dτ
The constant terms can be taken out such that
x(t) = [G  FG  F²G  ...] [ ∫₀ᵗ u(τ)dτ ;  ∫₀ᵗ (t − τ)u(τ)dτ ;  ∫₀ᵗ (1/2)(t − τ)²u(τ)dτ ;  ⋮ ] ≡ C_∞ U
where the matrix part is denoted as C_∞. For a controllable system, it should be possible to obtain any x(t) ∈ Rⁿ, and therefore C_∞ must be full rank (i.e., rank n). From the Cayley-Hamilton theorem, we know that Fⁿ is linearly dependent on F^{n−1}, F^{n−2}, ..., F, I.¹⁷ This means that no more information about the rank of C_∞ is added after F^{n−1}G, because the remaining terms are linear combinations of the first n terms. Therefore the system is controllable if
C = [G  FG  F²G  ⋯  F^{n−1}G]
has full rank (in which case rank(C_∞) = rank(C) = n).
If a system is controllable, there always exists a state feedback gain, K, such that the closed-loop eigenvalues of F − GK can be arbitrarily assigned. Therefore, the following statements are all equivalent:
• A system is controllable.
• The state of a system, x(t), can reach any point in Rⁿ with an appropriate input, u(t).
• [G  FG  F²G  ⋯  F^{n−1}G] has full rank.
• There exists K such that the eigenvalues of F − GK can be arbitrarily determined.
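The rank test can be sketched numerically for the system of [Example 2-21] (Python, a stand-in for the course's Matlab):

```python
# Controllability check for the system of [Example 2-21]: build the
# controllability matrix C = [G, FG] (here n = 2) and test its rank.
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
C = np.hstack([G, F @ G])
rank = np.linalg.matrix_rank(C)
print(rank)  # 2 -> full rank -> controllable
```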
State observer

A state observer is a computer algorithm that simulates a real system in order to provide an estimate of its internal state, given measurements of the input and output of the real system. Although the state is necessary to implement state feedback control, the physical state of a system usually cannot be determined by direct observation. Instead, the state is indirectly observed (i.e., estimated) from the system output. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state of the vehicles inside the tunnel can only be "estimated." If a system is observable, it is possible to fully estimate the system state from its output measurements using a state observer.

^17 For more information on the Cayley-Hamilton theorem, refer to Wikipedia or the class note of Mechanical Systems Analysis.

For a system represented in the state space,

ẋ = Fx + Gu   (2.16)
y = Hx
Figure 2.24: A car model (with an accelerometer and an encoder).
the state x is to be estimated. To simulate the model, a new state space equation with the same model parameters is constructed:

x̂̇ = Fx̂ + Gu
ŷ = Hx̂

where x̂(t) is the simulated (i.e., estimated) state. x̂(t) may differ from x(t) because of the initial condition, disturbances, sensor noise, model mismatch, etc. Therefore, feedback is necessary to force x̂(t) to converge to x(t) as t → ∞. Since the output is measurable, it is utilized for correcting the state, i.e.,

x̂̇ = Fx̂ + Gu + L(y − ŷ)
ŷ = Hx̂

where L ∈ Rⁿ is the observer gain that corrects the estimation error. Note that y(t) − ŷ(t) = 0 when x̂(t) = x(t). Since y = Hx and ŷ = Hx̂, the state observer equation is

x̂̇ = [F − LH]x̂ + Gu + Ly   (2.17)

Recall that it is desired for the estimation error, x̃(t) := x̂(t) − x(t), to converge to zero as t → ∞. The dynamics of the estimation error can be obtained by subtracting (2.16) from (2.17), i.e.,

x̃̇ = x̂̇ − ẋ
  = ([F − LH]x̂ + Gu + Ly) − (Fx + Gu)
  = [F − LH](x̂(t) − x(t))
  = [F − LH]x̃

Therefore, the observer gain, L, must be determined such that [F − LH] has eigenvalues with strictly negative real parts.
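The error dynamics can be checked with a short simulation. The sketch below (Python/NumPy with forward-Euler integration; the plant matrices and observer gain are illustrative choices, not from the text) starts the observer from a wrong initial estimate and lets it converge to the true state:

```python
import numpy as np

# Illustrative stable second-order plant and an observer gain
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
L = np.array([[4.0], [3.0]])       # chosen so F - L H is Hurwitz

dt, steps = 1e-3, 20000
x = np.array([[1.0], [0.0]])       # true state (unknown to the observer)
xh = np.zeros((2, 1))              # estimate, deliberately wrong at t = 0

for k in range(steps):
    u = np.array([[np.sin(1e-3 * k)]])    # arbitrary known input
    y = H @ x                             # measured output
    x = x + dt * (F @ x + G @ u)          # plant update
    xh = xh + dt * (F @ xh + G @ u + L @ (y - H @ xh))  # observer update

print(np.linalg.norm(x - xh))   # estimation error has decayed to ~0
```

Since the error obeys x̃̇ = (F − LH)x̃ regardless of u, any stabilizing L drives the estimate to the true state.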
[Example 2-22] The example in Fig. 2.24 is related to sensor fusion.

Suppose that the velocity of a car is required in a control algorithm. Since there is no direct method to measure the velocity, an accelerometer and an encoder^a are utilized to estimate the velocity. Notice that there exists a kinematic relationship between the acceleration and the position, i.e.,

d²y(t)/dt² = u(t)

where u(t) is the acceleration measurement, and y(t) is the position measurement. The above equation, however, does not hold in reality because of sensor noise, parameter uncertainty, etc. Therefore, the velocity can be directly calculated from neither u(t) nor y(t). In order to obtain more precise information by fusing the two different measurements, a state observer can be utilized. In the state space, the differential equation can be formulated into

ẋ = [0 1; 0 0] x + [0; 1] u
y = [1 0] x

where x = [y; ẏ] is the state to be estimated. Note that the velocity is included in the state.

To estimate the state, a state observer is constructed:

x̂̇ = [0 1; 0 0] x̂ + [0; 1] u(t) + [l₁; l₂] (y(t) − [1 0] x̂)

The gain L = [l₁; l₂] should be selected such that the eigenvalues of [F − LH] have strictly negative real parts. The corresponding characteristic equation is

det[ sI − ([0 1; 0 0] − [l₁ 0; l₂ 0]) ] = s² + l₁s + l₂ = 0

A possible solution for l₁ and l₂ (among many) is l₁ = 2 and l₂ = 1, which yields the repeated eigenvalue −1. Note that the velocity can then be estimated by

v̂ = [0 1] x̂

^a An encoder is often used to measure the travel distance of a vehicle.
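The chosen gains can be verified numerically; a quick check (Python/NumPy assumed, as a sketch) confirms that l₁ = 2, l₂ = 1 place both eigenvalues of F − LH at −1:

```python
import numpy as np

F = np.array([[0.0, 1.0], [0.0, 0.0]])
H = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])   # l1 = 2, l2 = 1 from the example

# Eigenvalues of the observer error dynamics F - L H
eigs = np.linalg.eigvals(F - L @ H)
print(np.sort(eigs.real))      # both eigenvalues at -1
```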
Observability

A state space model without an input,

ẋ = Fx
y = Hx
x(0) = x₀

is said to be observable if the output y(t) is not identically zero for any nonzero initial condition x₀. Note that the output y(t) is

y(t) = H e^{Ft} x₀

Taking the Taylor expansion of e^{Ft},

y(t) = H [ I + Ft + (1/2)F²t² + ⋯ ] x₀
     = [1  t  (1/2)t²  ⋯] [H; HF; HF²; ⋮] x₀
     = T O_∞ x₀

where T and O_∞ are ∞-dimensional. Notice that the nullity of O_∞ must be zero for y(t) to be nonzero for all nonzero x₀ (or, equivalently, O_∞ must have full rank). From the Cayley-Hamilton theorem, we know that Fⁿ is linearly dependent on F^{n−1}, F^{n−2}, ..., F, I. Therefore,

rank(O_∞) = rank(O) = rank([H; HF; HF²; ⋮; HF^{n−1}])

Therefore, the system is observable if the observability matrix O has full rank.

If a system is observable, there always exists an observer gain, L, such that the closed-loop eigenvalues of F − LH can be arbitrarily assigned. Therefore, the following statements are all equivalent:

• A system is observable.
• The output y(t) is not identically zero for any nonzero initial condition x₀.
Figure 2.25: Block diagram of observer-based state feedback-controlled system:<br />
the thick lines represent vector quantities.<br />
• The initial condition, x₀, can be calculated from the output, y(t), in a finite time.
• [H; HF; HF²; ⋮; HF^{n−1}] has full rank.
• There exists L such that the eigenvalues of F − LH can be arbitrarily determined.
• There exists L such that the state estimate, x̂(t), converges to the actual state, x(t), as t → ∞.
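As with controllability, the rank condition is straightforward to check numerically. The sketch below (Python/NumPy; Matlab's obsv is the analogous built-in) also illustrates why measuring only the velocity of the double integrator would not work:

```python
import numpy as np

def obsv(F, H):
    """Observability matrix [H; HF; ...; H F^(n-1)]."""
    n = F.shape[0]
    blocks = [H]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ F)
    return np.vstack(blocks)

F = np.array([[0.0, 1.0], [0.0, 0.0]])

# Measuring position: rank 2, observable (velocity appears through HF)
rank_pos = np.linalg.matrix_rank(obsv(F, np.array([[1.0, 0.0]])))

# Measuring velocity only: rank 1, NOT observable (position never appears)
rank_vel = np.linalg.matrix_rank(obsv(F, np.array([[0.0, 1.0]])))

print(rank_pos, rank_vel)   # 2 1
```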
Observer-based state feedback

In the full state feedback control system, information on the state, which is not measurable in general, is required. Therefore, in implementation the full state feedback is always accompanied by a state observer, which estimates the state. Recall that the full state feedback control law is

u = −Kx

Since x is not measurable, the estimate of x is used instead in the observer-based state feedback control:

u = −Kx̂

Recall that the state feedback gain, K, was designed such that the eigenvalues of [F − GK] have strictly negative real parts. In the state observer, the observer gain, L, was designed such that the eigenvalues of [F − LH] have strictly negative real parts. When both the state feedback and the state observer are used at the same time, however,
it is not clear whether the overall system (i.e., the closed-loop system with observer-based state feedback) is stable. To assess the stability of the observer-based state feedback-controlled system, the state space equations are collected as

ẋ = Fx + Gu
y = Hx
x̂̇ = [F − LH]x̂ + Gu + Ly
u = −Kx̂

Rearranging the equations above,

ẋ = Fx − GKx̂
x̂̇ = [F − LH − GK]x̂ + LHx

The two equations above can be represented in a single state space equation, i.e.,

d/dt [x; x̂] = [F  −GK;  LH  F − LH − GK] [x; x̂]   (2.18)

where [x; x̂] ∈ R^{2n}. Define another state, x̄, by

x̄ = [I 0; I −I] [x; x̂] = [x; x − x̂]

Then (2.18) is converted into

x̄̇ = [F − GK  GK;  0  F − LH] x̄

Recall that the eigenvalues of [F − GK  GK; 0  F − LH] are equal to those of F − GK and F − LH, due to the property of the determinant of a block upper-triangular matrix. Therefore, the closed-loop eigenvalues of the observer-based feedback-controlled system are the eigenvalues of F − GK and F − LH, which implies that K and L can be designed separately such that F − GK and F − LH respectively have eigenvalues with strictly negative real parts.
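This separation property can be confirmed numerically. In the sketch below (Python/NumPy; the gains K and L are illustrative choices), the eigenvalues of the combined matrix in (2.18) coincide with the union of eig(F − GK) and eig(F − LH):

```python
import numpy as np

F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])    # places eig(F - G K) at -1 and -2
L = np.array([[5.0], [6.0]])  # places eig(F - L H) at -2 and -3

# Combined closed-loop matrix of the (x, xhat) system, as in (2.18)
A = np.block([[F, -G @ K],
              [L @ H, F - L @ H - G @ K]])

cl = np.sort(np.linalg.eigvals(A).real)
sep = np.sort(np.concatenate([np.linalg.eigvals(F - G @ K),
                              np.linalg.eigvals(F - L @ H)]).real)
print(cl, sep)   # both lists are [-3, -2, -2, -1]
```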
Linear quadratic (LQ) optimal control

We have found that state feedback allows us to assign the closed-loop system eigenvalues arbitrarily if the system is controllable. The desired eigenvalues must be selected by a designer. Another way to determine feedback control gains is to solve the linear quadratic (LQ) optimal control problem. The LQ problem is one of the most frequently appearing optimal control problems.
Suppose a system under full state feedback control is described by

ẋ = Fx + Gu
y = Hx
x(0) = x₀

The optimal control is sought to minimize the quadratic performance index

J = ∫₀^∞ [yᵀy + uᵀRu] dt

or, in more general form,

J = ∫₀^∞ [xᵀQx + uᵀRu] dt   (2.19)

where Q = HᵀH is a positive semidefinite matrix, and R is positive definite. Notice that the first term penalizes the deviation of x from the origin, while the second penalizes the control energy.

The LQ problem as formulated above is concerned with the regulation of the system around the origin of the state space. The resulting controller is called the Linear Quadratic Regulator (LQR).
Let P ∈ R^{n×n} be a positive definite matrix. Then

xᵀ(∞)Px(∞) − x₀ᵀPx₀ = ∫₀^∞ (d/dt)[xᵀPx] dt
  = ∫₀^∞ [ẋᵀPx + xᵀPẋ] dt
  = ∫₀^∞ [(Fx + Gu)ᵀPx + xᵀP(Fx + Gu)] dt
  = ∫₀^∞ [xᵀ(FᵀP + PF)x + uᵀGᵀPx + xᵀPGu] dt   (2.20)

Since (2.20) is true for any P, select P such that

FᵀP + PF = PGR⁻¹GᵀP − Q   (2.21)

From (2.20) and (2.21),

0 = x₀ᵀPx₀ + ∫₀^∞ [xᵀ(PGR⁻¹GᵀP − Q)x + uᵀGᵀPx + xᵀPGu] dt   (2.22)
where xᵀ(∞)Px(∞) in (2.20) was neglected because x(t) → 0 as t → ∞ if the closed-loop system is stable. Adding (2.22) to the cost function in (2.19),

J = x₀ᵀPx₀ + ∫₀^∞ [xᵀPGR⁻¹GᵀPx + uᵀGᵀPx + xᵀPGu + uᵀRu] dt
  = x₀ᵀPx₀ + ∫₀^∞ [(R⁻¹GᵀPx + u)ᵀ R (R⁻¹GᵀPx + u)] dt

Since R is positive definite, in order for J to be minimized,

u = −R⁻¹GᵀPx

and the minimum value of J is

Jᵒ = x₀ᵀPx₀

[Summary] For a controllable system

ẋ = Fx + Gu
y = Hx
x(0) = x₀

the state feedback controller that minimizes

J = ∫₀^∞ [xᵀQx + uᵀRu] dt,   Q ≥ 0, R > 0

is

u = −R⁻¹GᵀPx

where P is the positive definite solution of

FᵀP + PF − PGR⁻¹GᵀP + Q = 0

which is called the Riccati equation. The Riccati equation has a proper solution as long as the state space equation is controllable.

The linear quadratic regulator^18 is very useful in practice, because (1) the LQ-controlled systems are always stable, and (2) the number of parameters to be tuned is reduced to one.^19

^18 It is often called a "regulator" because the LQ gains are optimal for regulation problems (i.e., r(t) = 0). Nevertheless, the LQ gains can also be used for tracking control problems.
^19 R is the only parameter to be tuned.
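If a dedicated solver such as Matlab's lqr is not at hand, the Riccati equation can be solved with plain linear algebra through the eigenvectors of the associated Hamiltonian matrix. The following is a sketch of that method in Python/NumPy (the helper name lqr_gain is mine, not a library function), checked against Example 2-23 further below with r = 1:

```python
import numpy as np

def lqr_gain(F, G, Q, R):
    """Sketch: solve F'P + PF - P G R^-1 G' P + Q = 0 via the stable
    invariant subspace of the Hamiltonian matrix; return K and P."""
    Rinv = np.linalg.inv(R)
    Ham = np.block([[F, -G @ Rinv @ G.T],
                    [-Q, -F.T]])
    w, V = np.linalg.eig(Ham)
    stable = V[:, w.real < 0]          # eigenvectors of stable eigenvalues
    n = F.shape[0]
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))  # Riccati solution
    K = Rinv @ G.T @ P                   # optimal state feedback gain
    return K, P

# Double integrator with Q = H'H, H = [1 0], and R = r = 1
F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
Q = np.array([[1.0, 0.0], [0.0, 0.0]])
R = np.array([[1.0]])

K, P = lqr_gain(F, G, Q, R)
print(K)   # approximately [[1.0, 1.4142]]
```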
Moreover, the performance can easily be predicted from the choice of R: the smaller R, the shorter the settling time, and vice versa. Fig. 2.26 shows the experimental results of the LQR for a servo motor. Note in the experimental results that the settling time was reduced as R was decreased. In practice, however, R cannot be made arbitrarily small, because the magnitude of the control signal is limited by the hardware. In this experiment, the control signal was limited to the range [−10, 10] V, and thus the performance was not improved for R < 0.01.
[Example 2-23] For the state space equation

ẋ = [0 1; 0 0] x + [0; 1] u
y = [1 0] x
x(0) = [1; 0]

the full state feedback gain that minimizes

J = ∫₀^∞ [y²(t) + ru²(t)] dt

where r > 0, is calculated as follows.

The positive definite solution of the Riccati equation

[0 0; 1 0] P + P [0 1; 0 0] − P [0; 1] r⁻¹ [0 1] P + [1; 0][1 0] = 0

is

P = [√2 r^{0.25}   r^{0.5};   r^{0.5}   √2 r^{0.75}]

Thus, the optimal state feedback law is

u = −[r^{−0.5}   √2 r^{−0.25}] x

and the minimal cost is Jᵒ = x₀ᵀPx₀ = √2 r^{0.25}. Notice that Jᵒ decreases as r decreases.
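The closed-form solution above can be verified by substituting it back into the Riccati equation; a quick numerical check (a Python/NumPy sketch) over several values of r:

```python
import numpy as np

F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
Q = np.array([[1.0, 0.0], [0.0, 0.0]])   # Q = H'H with H = [1 0]

residuals = []
for r in (1.0, 0.1, 0.01):
    P = np.array([[np.sqrt(2.0) * r**0.25, r**0.5],
                  [r**0.5, np.sqrt(2.0) * r**0.75]])
    # Residual of F'P + PF - P G R^-1 G' P + Q with R = r
    res = F.T @ P + P @ F - (P @ G @ G.T @ P) / r + Q
    residuals.append(np.abs(res).max())

print(residuals)   # all ~0: P satisfies the Riccati equation
```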
[Figure 2.26 shows: (a) the Parker BE series motor, whose transfer function is G(s) = 1143/(s² + 1.714s); position and controller-output responses for (b) LQ regulation with R = 1, (c) R = 0.1, (d) R = 0.05 (five experiments each), and (e) R = 0.01 (three experiments).]

Figure 2.26: Experimental results of the linear quadratic regulator for a Parker BE series servo motor. Notice that the only parameter to be tuned is R, and the LQ algorithm finds the optimal controller gains for the selected R.
F = [0 1; 0 0];
G = [0; 1];
H = [1 0];
Q = H'*H;
R = 1;
[K, P] = lqr(F, G, Q, R)   % K = Optimal state feedback gain,
                           % P = Solution to the Riccati eq.

Figure 2.27: Matlab code for calculating the linear quadratic optimal state feedback gain
2.4.5 Controller design in frequency-domain

Requirements of controller design

Recall that the information about the performance of the closed-loop system obtained from the open-loop frequency response is:

• The low frequency region indicates the steady-state behavior.
• The medium frequency region (around −1 + 0j in the Nyquist plot, or around the gain and phase crossover frequencies^20 in the Bode plot) is related to stability.
• The high frequency region indicates the transient behavior.

The requirements on the open-loop frequency response are:

• The gain at low frequencies should be large enough to give high values for the error constants.^21
• At medium frequencies, the phase and gain margins should be large enough.
• At high frequencies, the gain should roll off as rapidly as possible to minimize noise effects.

^20 The gain crossover frequency is ω_Cg, where |GC(jω_Cg)| = 1. The phase crossover frequency is ω_Cp, where ∠GC(jω_Cp) = −180°.
^21 The error constant is K_e, where the steady-state error (%) = 100/(1 + K_e).

Lead compensator

A lead compensator improves the transient response by increasing the gains at high frequencies. Since it increases the high frequency gain, the system bandwidth is increased
as well. In addition, the added phase lead near the gain crossover frequency improves the phase margin. On the other hand, the disadvantage is that it amplifies high-frequency noise, which is not desired in practice.

A lead compensator is

C(s) = K_c a (Ts + 1)/(aTs + 1) = K_c (s + 1/T)/(s + 1/(aT))

where T > 0 and 0 < a < 1. Note that the lead compensator has a pole at −1/(aT) and a zero at −1/T. The maximum phase-lead angle φ_m occurs at ω_m, where ω_m is the middle point between 1/T and 1/(aT) on the log scale, i.e.,

log ω_m = (1/2)[log(1/T) + log(1/(aT))]

and thus

ω_m = 1/(√a T)

The maximum phase lead is therefore

φ_m = ∠C(jω_m) = ∠[K_c a (Tjω_m + 1)/(aTjω_m + 1)] = ∠[((1/√a)j + 1)/(√a j + 1)] = sin⁻¹[(1 − a)/(1 + a)]

Moreover, note that

|C(jω_m)| = K_c √a

Using these properties, a lead compensator is designed as follows.

1. Determine the compensator gain K_c a as the performance enhancement ratio.^22
2. Find the gain margin and phase margin of the gain-adjusted open-loop system, i.e., K_c a G(s).
3. Determine the additional phase lead φ_m required to reach the desired phase margin, plus an extra 10% ∼ 15%.

^22 If the desired steady-state error E(%) is given instead of the performance enhancement ratio, K_c a should be selected such that E = 100/(1 + K_c a G(0)).
Figure 2.28: Bode plots of a lead compensator and a lag compensator
4. Obtain a from sin φ_m = (1 − a)/(1 + a).
5. Determine ω_m such that ω_m is the gain crossover frequency, i.e.,

   |GC(jω_m)| = 1 ⇔ |G(jω_m)| = 1/(K_c √a)

6. Find T from ω_m, and the transfer function of C(s):

   T = 1/(√a ω_m),   C(s) = K_c a (Ts + 1)/(aTs + 1)
[Example 2-24] Consider a plant:

G(s) = 4/(s(s + 2))   (2.23)

with the following performance requirements:

• performance enhancement ratio = 10
• phase margin > 50°

Following the procedure, a lead compensator is designed:
[Figure 2.29 annotations: G(s): Gm = ∞ dB, Pm = 51.8° (at 1.57 rad/sec); GC(s): Gm = ∞ dB, Pm = 50.5° (at 8.9 rad/sec).]

Figure 2.29: Bode plots of G(s) in (2.23) and GC(s)

1. K_c a = 10.
2. The phase margin of 10G(s) is about 17°.
3. The additional phase lead is φ_m = 1.15 × (50° − 17°) ≈ 38°.
4. a = (1 − sin φ_m)/(1 + sin φ_m) ≈ 0.24.
5. ω_m is determined such that

   |4/(jω_m(jω_m + 2))| = 1/(K_c √a)

   By simple calculation, ω_m ≈ 9 (rad/sec).
6. T = 1/(√a ω_m) = 0.227. Finally,

   C(s) = 10 (0.227s + 1)/(0.0545s + 1) = 41.65 (s + 4.41)/(s + 18.4)
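The resulting design can be checked by locating the gain crossover of G(s)C(s) on a frequency grid (a Python/NumPy sketch; Matlab's margin reports the same information directly):

```python
import numpy as np

# Plant (2.23) and the lead compensator designed above
G = lambda s: 4.0 / (s * (s + 2.0))
C = lambda s: 10.0 * (0.227 * s + 1.0) / (0.0545 * s + 1.0)

w = np.logspace(-1, 3, 200000)           # frequency grid (rad/sec)
mag = np.abs(G(1j * w) * C(1j * w))
wc = w[np.argmin(np.abs(mag - 1.0))]     # gain crossover frequency
pm = 180.0 + np.degrees(np.angle(G(1j * wc) * C(1j * wc)))
print(wc, pm)   # close to the 8.9 rad/sec and 50.5 deg read from Fig. 2.29
```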
Figure 2.30: Feedback step responses of G(s) in (2.23) and GC(s)
Lag compensator

In contrast to the lead compensator, a lag compensator decreases the gain at high frequencies and moves the gain crossover frequency lower to obtain the desired phase margin.

A lag compensator is

C(s) = K_c b (Ts + 1)/(bTs + 1) = K_c (s + 1/T)/(s + 1/(bT))

where T > 0 and b > 1. The design procedure is as follows:

1. Let the desired phase margin be φ_d.
2. Determine the compensator gain K_c b to be the performance enhancement ratio.
3. Find the gain margin and phase margin of K_c b G(s).
4. Find the frequency point where the phase of K_c b G(s) is equal to (−180° + φ_d + 5° ∼ 12°). This will be the new gain crossover frequency, ω_C.
5. Determine T such that T ∈ (5/ω_C, 10/ω_C).
6. Determine b such that

   |GC(jω_C)| = 1

   Approximately,

   20 log(K_c b |G(jω_C)|) = 20 log b
[Example 2-25] Consider a plant:

G(s) = 1/(s(s + 1)(0.5s + 1))   (2.24)

with the following performance requirements:

• performance enhancement ratio = 5
• phase margin > 40°

Following the procedure, a lag compensator is designed:

1. φ_d = 40°.
2. K_c b = 5.
3. The phase margin of 5G(s) is about −13°; thus the closed-loop system is unstable for K_c b = 5. From the Bode plot of 5G(jω), note that at about 0.5 rad/sec the phase is −130° = −180° + φ_d + 10°. Therefore, the new gain crossover frequency will be ω_C = 0.5 rad/sec.
4. T = 5/ω_C = 10.
5. b = 10 is roughly selected from 20 log(K_c b |G(jω_C)|) = 20 log b.
6. Finally,

   C(s) = 5 (10s + 1)/(100s + 1) = 0.5 (s + 0.1)/(s + 0.01)
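As for the lead design, the achieved margin can be checked numerically (a Python/NumPy sketch); the design target was a phase margin above 40°:

```python
import numpy as np

# Plant (2.24) and the lag compensator designed above
G = lambda s: 1.0 / (s * (s + 1.0) * (0.5 * s + 1.0))
C = lambda s: 5.0 * (10.0 * s + 1.0) / (100.0 * s + 1.0)

w = np.logspace(-3, 2, 200000)
mag = np.abs(G(1j * w) * C(1j * w))
wc = w[np.argmin(np.abs(mag - 1.0))]     # new gain crossover frequency
pm = 180.0 + np.degrees(np.angle(G(1j * wc) * C(1j * wc)))
print(wc, pm)   # crossover near 0.45 rad/sec, phase margin above 40 deg
```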
[Figure 2.31 annotations: G(s): Gm = 9.54 dB (at 1.41 rad/sec), Pm = 32.6° (at 0.749 rad/sec); GC(s): Gm = 14.3 dB (at 1.32 rad/sec), Pm = 41.6° (at 0.454 rad/sec).]

Figure 2.31: Bode plots of G(s) in (2.24) and GC(s)
Figure 2.32: Feedback step responses of G(s) in (2.24) and GC(s)
Chapter 3

Basics of LabVIEW for Control Systems Implementation
LabVIEW is a graphical programming language that uses icons instead of lines of text to<br />
create applications. In contrast to text-based programming languages, where instructions<br />
determine program execution, LabVIEW uses dataflow programming, where the flow of<br />
data determines execution.<br />
Project<br />
In order to control a system, multiple functions and programs are required. The Project<br />
in LabVIEW defines a group of custom-made programs that will interact with each other.<br />
An empty project can be made by clicking New → Empty Project on the main window.<br />
Once a project is made, the main window disappears.<br />
Virtual instruments<br />
LabVIEW programs are called virtual instruments, or VIs, because their appearance and<br />
operation imitate physical instruments, such as oscilloscopes and multimeters. Every VI<br />
uses functions that manipulate input from the user interface or other sources and display that information or move it to other files or other computers. A VI can be created by right-clicking My Computer in the Project Explorer and selecting New → VI, as shown in Fig. 3.1.
A VI contains the following components:<br />
• Front panel serves as the user interface.<br />
• Block diagram contains the graphical source code that defines the functionality of<br />
the VI.<br />
Figure 3.1: Creating a Virtual Instrument on an empty project<br />
Figure 3.2: The front panel (left) and block diagram (right) of a VI<br />
You may press Ctrl+E to switch between the front panel and the block diagram, or Ctrl+T to tile the two windows. In order to run a VI, press Ctrl+R or click the Run (⇒) button on the top left.
Controls palette

The Controls palette contains the controls and indicators you use to create the front panel. The Controls palette is available only on the front panel. The controls and indicators are located on subpalettes based on the types of controls and indicators. In order to place a control or an indicator on the front panel, select Window → Show Controls Palette or right-click the front panel workspace to display the Controls palette. You can place the Controls palette anywhere on the screen. LabVIEW retains the Controls palette position and size so when you restart LabVIEW, the palette appears in the same position and has the same
Figure 3.3: Controls palette
size.<br />
Functions palette<br />
The Functions palette is available only on the block diagram. The Functions palette contains<br />
the VIs and functions you use to build the block diagram. The VIs and functions<br />
are located on subpalettes based on the types of VIs and functions. To select a function icon, select Window → Show Functions Palette or right-click the block diagram workspace to display the Functions palette.
Tools palette<br />
A tool is a special operating mode of the mouse cursor. The cursor corresponds to the<br />
icon of the tool selected in the palette. Use the tools to operate and modify front panel<br />
and block diagram objects.<br />
Data types<br />
Fig. 3.6 shows the symbols for the different types of control and indicator terminals.<br />
The color and symbol of each terminal indicate the data type of the control or indicator.<br />
Control terminals have a thicker border than indicator terminals. Also, arrows appear on
front panel terminals to indicate whether the terminal is a control or an indicator. An<br />
arrow appears on the right if the terminal is a control, and an arrow appears on the left if<br />
the terminal is an indicator.<br />
Figure 3.4: Functions palette<br />
Figure 3.5: Tools palette<br />
Control (Input) / Indicator (Output)
Double-precision floating-point numeric<br />
16-bit signed integer numeric<br />
32-bit signed integer numeric<br />
16-bit unsigned integer numeric<br />
Boolean<br />
Figure 3.6: Control and indicator data types
For loop, While loop, Timed While loop

Figure 3.7: The three fundamental loops
Loops and structures<br />
Structures are graphical representations of the loops and case statements of text-based<br />
programming languages. Use structures on the block diagram to repeat blocks of code<br />
and to execute code conditionally or in a specific order.<br />
Like other nodes, structures have terminals that connect them to other block diagram<br />
nodes, execute automatically when input data are available, and supply data to<br />
output wires when execution completes. Each structure has a distinctive, resizable border<br />
to enclose the section of the block diagram that executes according to the rules of the<br />
structure. The section of the block diagram inside the structure border is called a subdiagram.<br />
The terminals that feed data into and out of structures are called tunnels. A tunnel<br />
is a connection point on a structure border.<br />
A For loop executes a subdiagram a set number of times. The count terminal,<br />
shown as [N], indicates how many times to repeat the subdiagram. Set the count explicitly<br />
by wiring a value from outside the loop to the left or top side of the count terminal.<br />
The iteration terminal, shown as [i], contains the number of completed iterations. The<br />
iteration count always starts at zero.<br />
A While loop executes a subdiagram until a condition is met. The While loop<br />
executes the subdiagram until the conditional terminal, an input terminal, receives a specific<br />
Boolean value.<br />
A Timed While loop is similar to the While loop, but executes a subdiagram<br />
at a constant calculation period.<br />
Shift registers<br />
Use shift registers when you want to pass values from previous iterations through the<br />
loop. A shift register appears as a pair of terminals directly opposite each other on the<br />
vertical sides of the loop border. The right terminal contains an up arrow and stores data<br />
on the completion of an iteration. LabVIEW transfers the data connected to the right side<br />
of the register to the next iteration. Create a shift register by right-clicking the left or right<br />
border of a loop and selecting Add Shift Register from the shortcut menu.<br />
<strong>Digital</strong> <strong>Control</strong> <strong>Systems</strong>, Sogang University<br />
Kyoungchul Kong
Figure 3.8: Creating a shift register<br />
Formula node<br />
The Formula Node is a convenient text-based node you can use to perform mathematical<br />
operations on the block diagram. You do not have to access any external code or<br />
applications, and you do not have to wire low-level arithmetic functions to create equations.<br />
Formula Nodes are useful for equations that have many variables or are otherwise<br />
complicated and for using existing text-based code. You can copy and paste the existing<br />
text-based code into a Formula Node rather than recreating it graphically.<br />
When you work with variables, remember the following points:<br />
• There is no limit to the number of variables or equations in a Formula Node.<br />
• No two inputs and no two outputs can have the same name, but an output can have<br />
the same name as an input.<br />
• Declare an input variable by right-clicking the Formula Node border and selecting<br />
Add Input from the shortcut menu. You cannot declare input variables inside the<br />
Formula Node.<br />
• Declare an output variable by right-clicking the Formula Node border and selecting<br />
Add Output from the shortcut menu. The output variable name must match either<br />
an input variable name or the name of a variable you declare inside the Formula<br />
Node.<br />
• You can change whether a variable is an input or an output by right-clicking it<br />
and selecting Change to Input or Change to Output from the shortcut<br />
menu.<br />
• You can declare and use a variable inside the Formula Node without relating it to<br />
an input or output wire.<br />
• You must wire all input terminals.<br />
(Annotations in Fig. 3.9: the Formula Node computes y(k) = y(k−1) + k; the shift register is initialized such that y(0) = 0; the subdiagram repeats 10 times; the front-panel indicator is connected to the output icon.)
Figure 3.9: An example using a Formula Node and a For loop<br />
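The program in Fig. 3.9 combines a For loop, a shift register, and a Formula Node. The same computation can be sketched in text form (Python stands in here for the graphical code; `run_loop` is a hypothetical name, not a LabVIEW API):

```python
# Text-based sketch of the For-loop / shift-register program of Fig. 3.9:
# y(0) = 0, and each iteration evaluates the Formula Node y(k) = y(k-1) + k.
def run_loop(n):
    y = 0                      # shift register initialized so that y(0) = 0
    for k in range(1, n + 1):  # the subdiagram repeats n times
        y = y + k              # Formula Node: y(k) = y(k-1) + k
    return y                   # value shown on the front-panel indicator

print(run_loop(10))  # → 55
```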
File input and output<br />
Use the high-level File I/O VIs to perform common I/O operations, such as writing<br />
to or reading from the following types of data:<br />
• Characters to or from text files.<br />
• Lines from text files.<br />
• 1D or 2D arrays of single-precision numerics to or from spreadsheet text files.<br />
• 1D or 2D arrays of single-precision numerics or 16-bit signed integers to or from<br />
binary files.<br />
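As an illustration of the spreadsheet-text format mentioned above, here is a minimal Python sketch (`write_spreadsheet` is a hypothetical helper, not a LabVIEW VI) that renders a 2D array as tab-delimited text, the layout Excel opens directly:

```python
# Write a 2D array as tab-delimited "spreadsheet text", one row per line.
import csv, io

def write_spreadsheet(rows):
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    writer.writerows(rows)   # each inner list becomes one spreadsheet row
    return buf.getvalue()

data = [[0, 1, 2], [0, 1, 3]]
print(write_spreadsheet(data))
```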
[Example 3-1] A bank account simulator is to be designed with the following functions:
• The calculation is updated every 1 second, which corresponds to one month in reality.
• The monthly interest rate is 0.5%.
• The user can specify the amount of money to be saved every month.
Using a Timed While loop, the algorithm can be realized as shown in Fig. 3.11.<br />
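The account update itself can be sketched in Python as follows (the helper name and the choice of applying interest before the monthly deposit are assumptions; the LabVIEW version runs the same update once per iteration of the Timed While loop):

```python
# Sketch of the bank-account loop of [Example 3-1]: every iteration is one
# "month"; 0.5% interest is applied, then the monthly deposit is added.
def simulate(months, deposit, balance=0.0, rate=0.005):
    history = []
    for _ in range(months):
        balance = balance * (1.0 + rate) + deposit
        history.append(balance)
    return history

print(simulate(3, 100.0))
```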
(Fig. 3.10 annotations: the number of significant digits is set for the saved values; the data are stacked into a 1D array by indexing at the loop border, and two 1D arrays are stacked into a 2D array; the saved file, opened with Excel, contains the rows 0 1 2 3 4 5 6 7 8 9 and 0 1 3 6 10 15 21 28 36 45; a short Matlab script plots the data obtained by LabVIEW.)
Figure 3.10: Saving data generated by a For loop and plotting with Matlab
(Fig. 3.11 annotations: the program runs every 1 sec until the stop button is clicked, and the chart is updated at every iteration.)
Figure 3.11: A bank account simulator in [Example 3-1]<br />
[Example 3-2] Suppose a plant described in state space:
\[ \dot{x} = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x, \qquad x(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \]
Since the differential equation cannot be realized by a computer algorithm, the equations above are approximated by
\[ \dot{x} \approx \frac{x(kT+T) - x(kT)}{T} \]
where T is the calculation period of the computer algorithm, and k is the time index.<br />
Note that t is replaced by kT in the discretization process. Substituting the equations<br />
above, we get<br />
\[ \frac{x(kT+T) - x(kT)}{T} \approx \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} x(kT) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(kT) \]
Omitting T in the index for simplicity,
\[ x(k+1) \approx \begin{bmatrix} 1 & T \\ -T & 1-2T \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ T \end{bmatrix} u(k) \]
Assuming T = 1 ms and u = 0, the approximated plant model is realized as shown in Fig. 3.12. In the program, x(k) = [x1, x2]ᵀ and x(k+1) = [x1f, x2f]ᵀ.
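The discretized update can be checked with a short Python sketch (variable names x1, x2, x1f, x2f follow the program in Fig. 3.12; T = 1 ms and u = 0 as assumed above):

```python
# One iteration of the approximated plant in [Example 3-2]:
# x(k+1) = [[1, T], [-T, 1-2T]] x(k) + [0, T]^T u(k), with T = 1 ms.
T = 0.001

def step(x1, x2, u):
    x1f = x1 + T * x2
    x2f = -T * x1 + (1.0 - 2.0 * T) * x2 + T * u
    return x1f, x2f

x1, x2 = 1.0, 0.0           # x(0) = [1, 0]^T
for _ in range(3):           # iterate the loop with u = 0
    x1, x2 = step(x1, x2, 0.0)
    print(x1, x2)
```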
[Example 3-3] For the system in [Example 3-2], an appropriate full state feedback<br />
controller is<br />
u = − [ 1 1 ] x<br />
The full state feedback controller is implemented as shown in Fig. 3.13.<br />
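A minimal Python sketch of the closed loop, substituting u = −[1 1]x into the discretized plant of [Example 3-2] (the 5000-step horizon is an arbitrary choice for illustration):

```python
# Closed-loop sketch of [Example 3-3]: full state feedback u = -(x1 + x2)
# applied to the discretized plant with T = 1 ms.
T = 0.001

def closed_loop_step(x1, x2):
    u = -(1.0 * x1 + 1.0 * x2)           # full state feedback gain [1 1]
    x1f = x1 + T * x2
    x2f = -T * x1 + (1.0 - 2.0 * T) * x2 + T * u
    return x1f, x2f

x1, x2 = 1.0, 0.0
for k in range(5000):                     # about 5 s of simulated time
    x1, x2 = closed_loop_step(x1, x2)
print(x1, x2)                             # state decays toward zero
```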
Figure 3.12: An approximated state space equation in [Example 3-2]<br />
Figure 3.13: An approximated state space equation with full state feedback<br />
Chapter 4<br />
Discrete-Time Domain Analysis<br />
4.1 Discrete-Time Domain Signals<br />
In computer-based control systems, the output signal is measured once in a loop, and the<br />
measured signal is regarded as the “current signal” until the next loop is executed. This<br />
process is called “sampling,” and the measured signal is in the “discrete-time domain.”<br />
Figure 4.1: Sampling of a signal.<br />
[Example 4-1] Suppose that a signal
\[ y(t) = \sin(t) \]
is measured by a computer at a sampling frequency of 10 Hz, as shown in Fig. 4.1. Since the sampling period is
\[ T = \frac{1}{10} = 0.1 \,\mathrm{s} \]
the measured signal is
\[ y(k) = \sin(0.1k) \]
where k = 0, 1, 2, ... is the time index.
4.2. Z-TRANSFORM AND TRANSFER FUNCTIONS 92<br />
(Figure 4.2 summarizes the relationships among the domains: a continuous-time signal y(t) is mapped to Y(s) by the Laplace transform and back by the inverse Laplace transform, and to the frequency domain by the Fourier transform; sampling converts y(t) into the discrete-time signal y(k), which is mapped to Y(z) by the z-transform and back by the inverse z-transform, and to the frequency domain by the discrete-time Fourier transform.)
Figure 4.2: Various domains for analysis of signals
4.2 z-Transform and Transfer Functions<br />
The z-transform converts a discrete-time domain signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation. The name "z-transform" may have been derived from the idea of the letter "z" being a sampled/digitized version of the letter "s" used in Laplace transforms. This seemed appropriate since the z-transform can be viewed as a sampled version of the Laplace transform.
Definition
For a signal x(k), where k ∈ Z⁺ [1], the z-transform is defined as
\[ X(z) = \mathcal{Z}\{x(k)\} = \sum_{k=0}^{\infty} x(k)\, z^{-k} \]
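The definition can be checked numerically by truncating the sum; for example, for x(k) = aᵏ the partial sums should approach the known closed form 1/(1 − az⁻¹) (Python sketch; the values a = 0.5 and z = 2 are arbitrary test points with |az⁻¹| < 1):

```python
# Numeric check of X(z) = sum_k x(k) z^{-k} against a known pair.
def ztrans(x, z, terms=200):
    """Truncated z-transform of the sequence x(0), x(1), ..."""
    return sum(x(k) * z ** (-k) for k in range(terms))

a, z = 0.5, 2.0
numeric = ztrans(lambda k: a ** k, z)   # x(k) = a^k
closed = 1.0 / (1.0 - a / z)            # 1 / (1 - a z^{-1})
print(numeric, closed)                  # the two values agree
```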
Properties<br />
The z-transform has the following properties:
[1] Z denotes the integers; Z⁺ here denotes the nonnegative integers k = 0, 1, 2, ....
• Linearity [2]: \( \mathcal{Z}\{a_1x_1(k) + a_2x_2(k)\} = a_1X_1(z) + a_2X_2(z) \), where a₁ and a₂ are scalars.
• Time shifting [3]: \( \mathcal{Z}\{x(k-n)\} = z^{-n}X(z) \), where n ∈ Z.
• Scaling by aᵏ [4]: \( \mathcal{Z}\{a^k x(k)\} = X(a^{-1}z) \), where a ∈ C.
• Time reversal: \( \mathcal{Z}\{x(-k)\} = X(z^{-1}) \).
• First difference (differentiation): \( \mathcal{Z}\{x(k)-x(k-1)\} = (1-z^{-1})X(z) \).
• Accumulation (integration): \( \mathcal{Z}\{\sum_{i=0}^{k}x(i)\} = \frac{1}{1-z^{-1}}X(z) \).
• Convolution [5]: \( \mathcal{Z}\{\sum_{l=0}^{\infty}x_1(l)x_2(k-l)\} = X_1(z)X_2(z) \).
[2] Proof:
\[ X(z) = \sum_{k=0}^{\infty}\big(a_1x_1(k)+a_2x_2(k)\big)z^{-k} = a_1\sum_{k=0}^{\infty}x_1(k)z^{-k} + a_2\sum_{k=0}^{\infty}x_2(k)z^{-k} = a_1X_1(z)+a_2X_2(z) \]
[3] Proof:
\[ \mathcal{Z}\{x(k-n)\} = \sum_{k=0}^{\infty}x(k-n)z^{-k} = \sum_{i=-n}^{\infty}x(i)z^{-(i+n)} = z^{-n}\sum_{i=0}^{\infty}x(i)z^{-i} = z^{-n}X(z), \]
since x(i) = 0 for i < 0.
[4] Proof:
\[ \mathcal{Z}\{a^k x(k)\} = \sum_{k=0}^{\infty}a^k x(k)z^{-k} = \sum_{k=0}^{\infty}x(k)(a^{-1}z)^{-k} = X(a^{-1}z) \]
[5] Proof:
\[ \mathcal{Z}\Big\{\sum_{l=0}^{\infty}x_1(l)x_2(k-l)\Big\} = \sum_{k=0}^{\infty}\Big(\sum_{l=0}^{\infty}x_1(l)x_2(k-l)\Big)z^{-k} = \sum_{l=0}^{\infty}x_1(l)\Big(\sum_{k=0}^{\infty}x_2(k-l)z^{-k}\Big) = \sum_{l=0}^{\infty}x_1(l)z^{-l}X_2(z) = X_1(z)X_2(z) \]
Initial value theorem
Recall that
\[ X(z) = \sum_{k=0}^{\infty}x(k)z^{-k} = x(0) + x(1)z^{-1} + x(2)z^{-2} + x(3)z^{-3} + \cdots \]
As z → ∞, the terms in z⁻¹, z⁻², z⁻³, ... all become zero. Therefore,
\[ x(0) = \lim_{z\to\infty}X(z) \]
which is called the initial value theorem. The initial value theorem holds only if x(0) exists.
Final value theorem
For a signal x(k), where k ≥ 0, define the difference Δx(k) such that
\[ \Delta x(k) := x(k) - x(k-1) \]
where Δx(0) = x(0). The summation of Δx(k) telescopes:
\[ \sum_{k=0}^{\infty}\Delta x(k) = \big(x(0)\big) + \big(x(1)-x(0)\big) + \big(x(2)-x(1)\big) + \cdots = x(\infty) \]
Since lim_{z→1} z⁻ᵏ = 1 for all k, it can be inserted into the sum, i.e.
\[ x(\infty) = \lim_{z\to 1}\sum_{k=0}^{\infty}\Delta x(k)z^{-k} = \lim_{z\to 1}\mathcal{Z}\{\Delta x(k)\} \]
Note that \( \mathcal{Z}\{\Delta x(k)\} = \mathcal{Z}\{x(k)-x(k-1)\} = X(z) - z^{-1}X(z) \). Therefore, we get
\[ x(\infty) = \lim_{z\to 1}(1-z^{-1})X(z) \]
which is called the final value theorem. The final value theorem holds only for signals that converge to a certain value.
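A quick numeric sanity check of the final value theorem for the hypothetical signal x(k) = 1 − 0.5ᵏ, which converges to x(∞) = 1 (Python sketch; the small offset 1e−8 approximates the limit z → 1 from outside the unit circle):

```python
# x(k) = 1 - 0.5^k has X(z) = 1/(1 - z^{-1}) - 1/(1 - 0.5 z^{-1}).
def X(z):
    return 1.0 / (1.0 - 1.0 / z) - 1.0 / (1.0 - 0.5 / z)

z = 1.0 + 1e-8                  # approach z -> 1
fvt = (1.0 - 1.0 / z) * X(z)    # (1 - z^{-1}) X(z)
print(fvt)                       # ≈ x(∞) = 1
```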
Table of common z-transform pairs
Signals in the discrete-time domain, defined for k ≥ 0 [6], are converted into the z-domain as follows.
[6] It is assumed that x(k) = 0 for k < 0.
• Impulse signal: \( \mathcal{Z}\{\delta(k)\} = 1 \)
• Delayed impulse signal [7]: \( \mathcal{Z}\{\delta(k-k_0)\} = z^{-k_0} \)
• Step signal [8]: \( \mathcal{Z}\{u(k)\} = \frac{1}{1-z^{-1}} \)
• Delayed step signal: \( \mathcal{Z}\{u(k-k_0)\} = \frac{z^{-k_0}}{1-z^{-1}} \)
• Exponential decay: \( \mathcal{Z}\{e^{-ak}\} = \frac{1}{1-e^{-a}z^{-1}} \)
• Ramp signal: \( \mathcal{Z}\{k\} = \frac{z^{-1}}{(1-z^{-1})^2} \)
• 2nd-order polynomial signal: \( \mathcal{Z}\{k^2\} = \frac{z^{-1}(1+z^{-1})}{(1-z^{-1})^3} \)
• k-th power: \( \mathcal{Z}\{a^k\} = \frac{1}{1-az^{-1}} \)
• k-th power times ramp: \( \mathcal{Z}\{ka^k\} = \frac{az^{-1}}{(1-az^{-1})^2} \)
• Cosine signal: \( \mathcal{Z}\{\cos(\omega_0 k)\} = \frac{1-z^{-1}\cos\omega_0}{1-2z^{-1}\cos\omega_0+z^{-2}} \)
• Sine signal: \( \mathcal{Z}\{\sin(\omega_0 k)\} = \frac{z^{-1}\sin\omega_0}{1-2z^{-1}\cos\omega_0+z^{-2}} \)
• Decaying cosine signal: \( \mathcal{Z}\{a^k\cos(\omega_0 k)\} = \frac{1-az^{-1}\cos\omega_0}{1-2az^{-1}\cos\omega_0+a^2z^{-2}} \)
• Decaying sine signal: \( \mathcal{Z}\{a^k\sin(\omega_0 k)\} = \frac{az^{-1}\sin\omega_0}{1-2az^{-1}\cos\omega_0+a^2z^{-2}} \)
Transfer functions in z-domain
The convolution property makes the z-transform useful for analyzing input-output relationships, i.e., transfer functions.
[Example 4-2] Consider a difference equation:<br />
y(k+2)+3y(k+1)−y(k)+2y(k−1) = 2u(k)+u(k−1)<br />
[7] Proof:
\[ \mathcal{Z}\{\delta(k-k_0)\} = \sum_{k=0}^{\infty}\delta(k-k_0)z^{-k} = 0+0+\cdots+0+z^{-k_0}+0+\cdots = z^{-k_0} \]
[8] Note that a step signal is the integrated impulse signal.
T = 0.001; % Sampling period
G = tf([1 1], [1 1 1], T); % Defining a transfer function
impulse(G); % Drawing an impulse response of G(z)
step(G); % Drawing a step response of G(z)
bode(G); % Drawing the Bode plot of G(z)
nyquist(G); % Drawing the Nyquist plot of G(z)
pole(G), zero(G)
Figure 4.3: Matlab code for defining a transfer function in the discrete-time domain<br />
Taking the z-transform, the equation is converted into
\[ \mathcal{Z}\{y(k+2)+3y(k+1)-y(k)+2y(k-1)\} = \mathcal{Z}\{2u(k)+u(k-1)\} \]
\[ (z^2+3z-1+2z^{-1})Y(z) = (2+z^{-1})U(z) \]
thus the transfer function from u(k) to y(k) is
\[ \frac{Y(z)}{U(z)} = \frac{2+z^{-1}}{z^2+3z-1+2z^{-1}} \]
It is common to express a transfer function in powers of z⁻¹ with the leading denominator coefficient equal to 1, i.e.
\[ \frac{Y(z)}{U(z)} = \frac{2z^{-2}+z^{-3}}{1+3z^{-1}-z^{-2}+2z^{-3}} \]
The physical meaning of a transfer function in the discrete-time domain is the same as that of one in the continuous-time domain. For a discrete-time input signal u(k) and output signal y(k), the transfer function is the linear mapping from the z-transform of the input, U(z), to that of the output, Y(z), i.e.
\[ Y(z) = G(z)U(z) \qquad\text{or}\qquad G(z) = \frac{Y(z)}{U(z)} = \frac{\mathcal{Z}\{y(k)\}}{\mathcal{Z}\{u(k)\}} \]
where G(z) is the transfer function of the system.
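The normalized transfer function of [Example 4-2] corresponds to a causal recurrence that a computer can execute, obtained by cross-multiplying the z⁻¹ form: y(k) = −3y(k−1) + y(k−2) − 2y(k−3) + 2u(k−2) + u(k−3). A Python sketch:

```python
# Simulate y(k) = -3y(k-1) + y(k-2) - 2y(k-3) + 2u(k-2) + u(k-3),
# the recurrence corresponding to the normalized G(z) of [Example 4-2].
def simulate(u):
    y = []
    for k in range(len(u)):
        yk = (-3.0 * (y[k-1] if k >= 1 else 0.0)
              + (y[k-2] if k >= 2 else 0.0)
              - 2.0 * (y[k-3] if k >= 3 else 0.0)
              + 2.0 * (u[k-2] if k >= 2 else 0.0)
              + (u[k-3] if k >= 3 else 0.0))
        y.append(yk)
    return y

print(simulate([1, 0, 0, 0, 0]))  # → [0.0, 0.0, 2.0, -5.0, 17.0]
```

The first nonzero output appears at k = 2, reflecting the two-step delay (z⁻²) in the numerator.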
Response versus pole locations
Given the transfer function of a linear system in the discrete-time domain,
\[ G(z) = \frac{Y(z)}{U(z)} = \frac{b(z)}{a(z)} \]
the roots of a(z) = 0, called poles, make G(z) infinite, and those of b(z) = 0, called zeros, make G(z) zero.
Each pole location in the z-plane can be identified with a particular type of response. When the poles of a transfer function are not repeated (i.e., all the roots of a(z) = 0 are distinct), it can be expanded by a partial fraction expansion as
\[ G(z) = \sum_{i=1}^{n}\frac{k_i}{z-p_i} \]
where the kᵢ's are scalars, the pᵢ's are the poles of G(z), and n is the order of a(z). Suppose that the system is under an impulse input, i.e., U(z) = 1. Then, the output is
\[ y(k) = \mathcal{Z}^{-1}\{G(z)U(z)\} = \mathcal{Z}^{-1}\{G(z)\} = \sum_{i=1}^{n}\mathcal{Z}^{-1}\Big\{\frac{k_i z^{-1}}{1-p_i z^{-1}}\Big\} = \sum_{i=1}^{n}k_i p_i^{k-1}, \quad k \ge 1 \]
Note that the impulse response is a linear combination of the pᵢᵏ, each of which is called a mode. When all the poles are located inside the unit circle [9], every mode converges to zero as k → ∞, and thus y(∞) → 0. If any single pole is located on the unit circle (i.e., |pᵢ| = 1), the associated mode maintains its magnitude for all k. Therefore, for a system in the discrete-time domain to be asymptotically stable, all the poles must be located inside the unit circle. If any pole on the unit circle is repeated, or any pole is located outside the unit circle, the system is unstable.
[9] That is, |pᵢ| < 1 for all i.
[Example 4-3] Consider a transfer function:
\[ G(z) = \frac{-1.55z+0.24}{(z+1.2)(z-0.2)(z-0.3)} \]
Using the partial fraction expansion, G(z) can be expanded to
\[ G(z) = \frac{1}{z+1.2} + \frac{0.5}{z-0.2} - \frac{1.5}{z-0.3} = \frac{z^{-1}}{1+1.2z^{-1}} + \frac{0.5z^{-1}}{1-0.2z^{-1}} - \frac{1.5z^{-1}}{1-0.3z^{-1}} \]
The impulse response (i.e., U(z) = 1) of G(z) is
\[ y(k) = (-1.2)^{k-1} + 0.5\,(0.2)^{k-1} - 1.5\,(0.3)^{k-1}, \quad k \ge 1 \]
Since (−1.2)^{k−1} diverges as k → ∞, the output does not converge to 0.
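The modal decomposition can be verified numerically: simulate G(z) as a recurrence, using the expanded denominator z³ + 0.7z² − 0.54z + 0.072, and compare with the closed-form modes (Python sketch):

```python
# Impulse response of G(z) = (-1.55z + 0.24) / (z^3 + 0.7z^2 - 0.54z + 0.072)
# by direct recurrence, checked against the modal (partial-fraction) form.
def impulse_response(n):
    y, u = [], [1.0] + [0.0] * (n - 1)
    for k in range(n):
        yk = (-0.7 * (y[k-1] if k >= 1 else 0.0)
              + 0.54 * (y[k-2] if k >= 2 else 0.0)
              - 0.072 * (y[k-3] if k >= 3 else 0.0)
              - 1.55 * (u[k-2] if k >= 2 else 0.0)
              + 0.24 * (u[k-3] if k >= 3 else 0.0))
        y.append(yk)
    return y

def modal(k):  # y(k) = (-1.2)^(k-1) + 0.5(0.2)^(k-1) - 1.5(0.3)^(k-1)
    return (-1.2) ** (k - 1) + 0.5 * 0.2 ** (k - 1) - 1.5 * 0.3 ** (k - 1)

y = impulse_response(10)
print(all(abs(y[k] - modal(k)) < 1e-6 for k in range(1, 10)))  # → True
```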
[Example 4-4] Consider a transfer function:
\[ G(z) = \frac{z^3-z^2+0.5z}{(z-0.5)(z^2-2z+1)} \]
Notice that G(z) has a repeated pole on the unit circle (p₁ = 0.5, p₂ = p₃ = 1). Using the partial fraction expansion, G(z) can be expanded to
\[ G(z) = \frac{z}{z^2-2z+1} + \frac{z}{z-0.5} = \frac{z^{-1}}{(1-z^{-1})^2} + \frac{1}{1-0.5z^{-1}} \]
The impulse response of G(z) is
\[ y(k) = k + (0.5)^k \]
Note that y(k) diverges as k → ∞.
(The figure shows the z-plane with the unit circle and sample impulse responses for various pole locations: poles inside the unit circle give decaying responses and are asymptotically stable; poles on the unit circle are marginally stable, if not repeated.)
Figure 4.4: Time sequences associated with pole locations in thez-plane<br />
4.2.1 Frequency responses in discrete-time domain
The z-transform is a generalization of the discrete-time Fourier transform (DTFT). The DTFT is found by substituting e^{jωT} for z in X(z). In order for the DTFT of a signal to exist, the signal must not diverge.
Nyquist frequency
Recall that the frequency response of a transfer function in the discrete-time domain is a function of e^{jωT} = cos ωT + j sin ωT. Therefore, the computer system cannot measure or process signals in the frequency range higher than ω = 2π/T. Moreover, e^{jωT} for ω ∈ [π/T, 2π/T] is the complex conjugate of e^{jωT} for ω ∈ [0, π/T], and thus the effective frequency range is only ω ∈ [0, π/T].
The maximum frequency that a computer system can measure or process is called the Nyquist frequency and is half the sampling frequency, i.e., π/T.
[Example 4-5] Consider a transfer function:
\[ G(z) = \frac{0.9}{z-0.1} \]
where the sampling period is T = 1 ms. The Nyquist frequency is
\[ \omega_N = \frac{\pi}{T} = 1000\pi \;\mathrm{rad/s} \]
Thus, G(z) can accept an input signal or generate an output signal in the frequency range of 0 ∼ 500 Hz.
The frequency response of G(z) is obtained by replacing z with e^{jωT}, i.e.
\[ G(e^{j\omega T}) = \frac{0.9}{e^{j\omega T}-0.1} = \frac{0.9}{(\cos(\omega T)-0.1)+j\sin(\omega T)} \]
4.3. STATE SPACE IN DISCRETE-TIME DOMAIN 101<br />
The magnitude and phase of G(e^{jωT}) are
\[ |G(e^{j\omega T})| = \frac{0.9}{\sqrt{1.01-0.2\cos(\omega T)}}, \qquad \angle G(e^{j\omega T}) = -\tan^{-1}\frac{\sin(\omega T)}{\cos(\omega T)-0.1} \]
Notice that the magnitude changes from 1 to 0.818 as ω changes from 0 to π/T, and the phase changes from 0° to −180°.
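The numbers above can be reproduced directly (Python sketch using cmath; G is evaluated at ω = 0 and at the Nyquist frequency π/T):

```python
# Evaluate G(e^{jwT}) = 0.9 / (e^{jwT} - 0.1) at w = 0 and w = pi/T.
import cmath, math

T = 0.001
G = lambda w: 0.9 / (cmath.exp(1j * w * T) - 0.1)

print(abs(G(0.0)))           # → 1.0
print(abs(G(math.pi / T)))   # ≈ 0.9/1.1 = 0.818
```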
4.3 State Space in Discrete-Time Domain
4.3.1 Definition of state space in discrete-time domain
As in the continuous-time domain, realizable difference equations can be represented in the state space. The general form of a single-input single-output [10] state space equation in the discrete-time domain is
\[ x(k+1) = \Phi x(k) + \Gamma u(k), \qquad y(k) = Hx(k) + Ju(k) \tag{4.1} \]
where Φ ∈ R^{n×n}, Γ ∈ Rⁿ, H ∈ R^{1×n}, and J ∈ R. Similar to the continuous-time case, J is 0 for most mechanical systems.
Solution of x(k) in discrete-time domain
Assume that the initial condition of (4.1) is x(0) = x₀ ∈ Rⁿ. Then, following (4.1), x(1) is
\[ x(1) = \Phi x(0) + \Gamma u(0) \]
Likewise, x(2) is
\[ x(2) = \Phi x(1) + \Gamma u(1) = \Phi\big(\Phi x(0)+\Gamma u(0)\big) + \Gamma u(1) = \Phi^2 x(0) + \sum_{i=0}^{1}\Phi^{1-i}\Gamma u(i) \]
[10] Single-input single-output (SISO) means u ∈ R¹ and y ∈ R¹. When u ∈ R^m and y ∈ R^q, where m, q > 1, the system is called multi-input multi-output (MIMO). In this class, we deal with SISO systems only.
Figure 4.5: Implementation of a state space equation, where Φ = [0 1;−0.5 −<br />
0.2], Γ = [0 1] T , and H = [0.1 0.9].<br />
Repeating this calculation, the solution of a state space equation in the discrete-time domain is obtained:
\[ x(k) = \Phi^k x_0 + \sum_{i=0}^{k-1}\Phi^{k-1-i}\Gamma u(i) \]
It is important to compute Φᵏ. Suppose that V ∈ R^{n×n} is the matrix whose columns are the eigenvectors of Φ, i.e.
\[ \Phi V = V\Lambda \]
where Λ is a diagonal matrix of the eigenvalues of Φ. Since V is always invertible [11], Φ satisfies
\[ \Phi = V\Lambda V^{-1} \]
Thus Φᵏ is
\[ \Phi^k = (V\Lambda V^{-1})(V\Lambda V^{-1})\cdots(V\Lambda V^{-1}) = V\Lambda^k V^{-1} \]
[11] When Φ does not have n independent eigenvectors (i.e., Φ is defective), generalized eigenvectors can be utilized to make V invertible, which results in the Jordan form.
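The closed-form solution can be cross-checked against direct iteration for a small example (Python sketch; the 2×2 matrices and the input sequence are arbitrary illustrations):

```python
# Check x(k) = Phi^k x0 + sum_i Phi^{k-1-i} Gamma u(i) against iteration
# of x(k+1) = Phi x(k) + Gamma u(k), for a 2x2 example.
def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def matpow_vec(A, v, k):        # A^k v by repeated multiplication
    for _ in range(k):
        v = matvec(A, v)
    return v

Phi = [[0.0, 1.0], [-0.5, -0.2]]
Gamma = [0.0, 1.0]
x0 = [1.0, 0.0]
u = [1.0, -1.0, 0.5]             # arbitrary input sequence, k = 3 steps

# direct iteration
x = x0
for uk in u:
    px = matvec(Phi, x)
    x = [px[0] + Gamma[0]*uk, px[1] + Gamma[1]*uk]

# closed form
k = len(u)
xc = matpow_vec(Phi, x0, k)
for i, ui in enumerate(u):
    t = matpow_vec(Phi, [Gamma[0]*ui, Gamma[1]*ui], k - 1 - i)
    xc = [xc[0] + t[0], xc[1] + t[1]]

print(max(abs(x[0] - xc[0]), abs(x[1] - xc[1])) < 1e-12)  # → True
```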
Notice that
\[ \Lambda^k = \begin{bmatrix}\lambda_1^k & 0 & \cdots & 0\\ 0 & \lambda_2^k & & \vdots\\ \vdots & & \ddots & \\ 0 & \cdots & & \lambda_n^k\end{bmatrix} \quad\text{if } \Lambda = \begin{bmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & & \vdots\\ \vdots & & \ddots & \\ 0 & \cdots & & \lambda_n\end{bmatrix}, \]
and, for a Jordan block,
\[ J^k = \begin{bmatrix}\lambda^k & \rho_{12}\lambda^{k-1} & \cdots & \rho_{1n}\lambda^{k-n+1}\\ 0 & \lambda^k & \cdots & \rho_{2n}\lambda^{k-n+2}\\ \vdots & & \ddots & \vdots\\ 0 & \cdots & 0 & \lambda^k\end{bmatrix} \quad\text{if } J = \begin{bmatrix}\lambda & 1 & & \\ 0 & \lambda & \ddots & \\ \vdots & & \ddots & 1\\ 0 & \cdots & 0 & \lambda\end{bmatrix}, \]
where
\[ \rho_{ij} = \frac{1}{(j-i)!}\prod_{p=0}^{j-i-1}(k-p) \]
4.3.2 Relationship between state space equations and transfer functions in discrete-time domain
Conversion from state space to transfer function
Taking the z-transform (assuming zero initial conditions),
\[ \mathcal{Z}\{x(k+1)\} = \mathcal{Z}\{\Phi x(k)+\Gamma u(k)\} \quad\Longrightarrow\quad zX(z) = \Phi X(z) + \Gamma U(z) \]
Arranging the equation above,
\[ X(z) = (zI-\Phi)^{-1}\Gamma U(z) \]
Since Y(z) = HX(z),
\[ G(z) = \frac{Y(z)}{U(z)} = H(zI-\Phi)^{-1}\Gamma \]
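The formula G(z) = H(zI − Φ)⁻¹Γ can be checked numerically against the z-transform of the impulse response (Python sketch; the 2×2 system and the evaluation point z = 2 are arbitrary illustrations):

```python
# Compare H (zI - Phi)^{-1} Gamma with the truncated series
# sum_{k>=1} H Phi^{k-1} Gamma z^{-k} (z-transform of the impulse response).
Phi = [[0.0, 1.0], [-0.5, -0.2]]
Gamma = [0.0, 1.0]
H = [1.0, 0.0]
z = 2.0

# (zI - Phi)^{-1} via the 2x2 inverse formula
a, b = z - Phi[0][0], -Phi[0][1]
c, d = -Phi[1][0], z - Phi[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
G_exact = sum(H[i] * sum(inv[i][j] * Gamma[j] for j in range(2))
              for i in range(2))

# truncated impulse-response series; g holds Phi^{k-1} Gamma
g, total = list(Gamma), 0.0
for k in range(1, 60):
    total += (H[0] * g[0] + H[1] * g[1]) * z ** (-k)
    g = [Phi[0][0]*g[0] + Phi[0][1]*g[1], Phi[1][0]*g[0] + Phi[1][1]*g[1]]

print(abs(G_exact - total) < 1e-9)  # → True
```

For this Φ, Γ, H the transfer function works out to 1/(z² + 0.2z + 0.5), so at z = 2 both computations give 1/4.9.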
Conversion from transfer function to state space
A transfer function
\[ G(z) = \frac{b_2z^2+b_1z+b_0}{z^3+a_2z^2+a_1z+a_0} = \frac{k_1}{z-p_1}+\frac{k_2}{z-p_2}+\frac{k_3}{z-p_3} \]
can be converted into a state space model
\[ x(k+1) = \Phi x(k)+\Gamma u(k), \qquad y(k) = Hx(k) \]
where the state matrices are as follows.
Controllable Canonical Form:
\[ \Phi = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -a_0 & -a_1 & -a_2\end{bmatrix}, \quad \Gamma = \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}, \quad H = \begin{bmatrix}b_0 & b_1 & b_2\end{bmatrix} \]
Observable Canonical Form:
\[ \Phi = \begin{bmatrix}-a_2 & 1 & 0\\ -a_1 & 0 & 1\\ -a_0 & 0 & 0\end{bmatrix}, \quad \Gamma = \begin{bmatrix}b_2\\ b_1\\ b_0\end{bmatrix}, \quad H = \begin{bmatrix}1 & 0 & 0\end{bmatrix} \]
Diagonal Canonical Form:
\[ \Phi = \begin{bmatrix}p_1 & 0 & 0\\ 0 & p_2 & 0\\ 0 & 0 & p_3\end{bmatrix}, \quad \Gamma = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}, \quad H = \begin{bmatrix}k_1 & k_2 & k_3\end{bmatrix} \]
When the transfer function has a repeated pole such that
\[ G(z) = \frac{b_2z^2+b_1z+b_0}{z^3+a_2z^2+a_1z+a_0} = \frac{k_1}{z-p_1}+\frac{k_2}{(z-p_m)^2}+\frac{k_3}{z-p_m} \]
the corresponding state space model is
\[ x(k+1) = \Phi x(k)+\Gamma u(k), \qquad y(k) = Hx(k) \]
where the state matrices are as follows.
Controllable Canonical Form:
\[ \Phi = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -a_0 & -a_1 & -a_2\end{bmatrix}, \quad \Gamma = \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}, \quad H = \begin{bmatrix}b_0 & b_1 & b_2\end{bmatrix} \]
Observable Canonical Form:
\[ \Phi = \begin{bmatrix}-a_2 & 1 & 0\\ -a_1 & 0 & 1\\ -a_0 & 0 & 0\end{bmatrix}, \quad \Gamma = \begin{bmatrix}b_2\\ b_1\\ b_0\end{bmatrix}, \quad H = \begin{bmatrix}1 & 0 & 0\end{bmatrix} \]
Jordan Canonical Form:
\[ \Phi = \begin{bmatrix}p_1 & 0 & 0\\ 0 & p_m & 1\\ 0 & 0 & p_m\end{bmatrix}, \quad \Gamma = \begin{bmatrix}1\\ 0\\ 1\end{bmatrix}, \quad H = \begin{bmatrix}k_1 & k_2 & k_3\end{bmatrix} \]
4.3.3 Controllability and observability
Controllability
Recall that controllability is the ability to move the state of a system to an arbitrary desired state in a finite time using a certain input sequence.
Consider a state space model in the discrete-time domain:<br />
x(k +1) = Φx(k)+Γu(k)<br />
y(k) = Hx(k)<br />
x(0) = 0<br />
where Φ ∈ R^{n×n}, Γ ∈ Rⁿ, and H ∈ R^{1×n}. The initial condition is assumed to be zero for simplicity. The solution of the state space model is
\[ x(k) = \sum_{i=0}^{k-1}\Phi^{k-1-i}\Gamma u(i) = \begin{bmatrix}\Gamma & \Phi\Gamma & \Phi^2\Gamma & \cdots & \Phi^{k-1}\Gamma\end{bmatrix}\begin{bmatrix}u(k-1)\\ u(k-2)\\ u(k-3)\\ \vdots\\ u(0)\end{bmatrix} := \mathcal{C}U \]
Since Φ ∈ R^{n×n}, C ∈ R^{n×k} becomes an n × n matrix if k = n. Therefore, x(n) can reach any point if C_n has full rank, where
\[ \mathcal{C}_n = \begin{bmatrix}\Gamma & \Phi\Gamma & \Phi^2\Gamma & \cdots & \Phi^{n-1}\Gamma\end{bmatrix} \in R^{n\times n} \]
Moreover, the corresponding input sequence is
\[ U = \mathcal{C}_n^{-1}x(n) \]
If a system is controllable, there always exists a state feedback gain, K, such that the closed-loop eigenvalues of Φ−ΓK can be arbitrarily assigned. Therefore, the following statements are all equivalent:
• A system is controllable.
• The state of a system, x(k), can reach any point in Rⁿ at k = n with an appropriate input sequence U = [u(n−1) u(n−2) ··· u(0)]ᵀ.
• [Γ ΦΓ Φ²Γ ··· Φ^{n−1}Γ] has full rank.
• There exists K such that the eigenvalues of Φ−ΓK can be arbitrarily determined.
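The construction U = C_n⁻¹ x(n) can be demonstrated on a 2×2 example (Python sketch; the system matrices and target state are arbitrary illustrations):

```python
# Drive the state from 0 to a target in n = 2 steps using U = C_n^{-1} x(n),
# for Phi = [0 1; -0.5 -0.2], Gamma = [0 1]^T.
Phi = [[0.0, 1.0], [-0.5, -0.2]]
Gamma = [0.0, 1.0]
target = [1.0, 1.0]                       # desired x(2)

PhiGamma = [Phi[0][0]*Gamma[0] + Phi[0][1]*Gamma[1],
            Phi[1][0]*Gamma[0] + Phi[1][1]*Gamma[1]]
C = [[Gamma[0], PhiGamma[0]], [Gamma[1], PhiGamma[1]]]  # [Gamma, Phi*Gamma]
det = C[0][0]*C[1][1] - C[0][1]*C[1][0]   # nonzero => controllable

# U = [u(1), u(0)]^T = C^{-1} * target (2x2 inverse formula)
u1 = ( C[1][1]*target[0] - C[0][1]*target[1]) / det
u0 = (-C[1][0]*target[0] + C[0][0]*target[1]) / det

# verify by simulating x(k+1) = Phi x(k) + Gamma u(k) from x(0) = 0
x = [0.0, 0.0]
for uk in (u0, u1):
    x = [Phi[0][0]*x[0] + Phi[0][1]*x[1] + Gamma[0]*uk,
         Phi[1][0]*x[0] + Phi[1][1]*x[1] + Gamma[1]*uk]
print(x)  # → [1.0, 1.0] up to rounding
```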
Observability<br />
A state space model without an input<br />
x(k +1) = Φx(k)<br />
y(k) = Hx(k)<br />
x(0) = x 0<br />
is said to be observable if x₀ can be determined from the output sequence y(0), y(1), ..., y(k−1). Note that
\[ \begin{bmatrix}y(0)\\ y(1)\\ \vdots\\ y(k-1)\end{bmatrix} = \begin{bmatrix}H\\ H\Phi\\ \vdots\\ H\Phi^{k-1}\end{bmatrix}x_0 := \mathcal{O}x_0 \in R^k \]
The observability matrix O ∈ R^{k×n} becomes an n × n matrix if k = n. Therefore, the system is observable if O_n has full rank, where
\[ \mathcal{O}_n = \begin{bmatrix}H\\ H\Phi\\ \vdots\\ H\Phi^{n-1}\end{bmatrix} \in R^{n\times n} \]
Moreover, x₀ can be recovered by
\[ x_0 = \mathcal{O}_n^{-1}\begin{bmatrix}y(0)\\ y(1)\\ \vdots\\ y(n-1)\end{bmatrix} \]
If a system is observable, there always exists an observer gain, L, such that the closed-loop eigenvalues of Φ−LH can be arbitrarily assigned. Therefore, the following statements are all equivalent:
• A system is observable.
• The output sequence is not identically zero (i.e., y(k) ≠ 0 for some k ≥ 0) for every nonzero initial condition x₀.
• The initial condition, x₀, can be calculated from the output sequence y(0), y(1), ..., y(n−1).
• \( \begin{bmatrix}H^{\mathsf T} & (H\Phi)^{\mathsf T} & (H\Phi^2)^{\mathsf T} & \cdots & (H\Phi^{n-1})^{\mathsf T}\end{bmatrix}^{\mathsf T} \) has full rank.
• There exists L such that the eigenvalues of Φ−LH can be arbitrarily determined.
• There exists L such that the state estimate, x̂(k), converges to the actual state, x(k), as k → ∞.
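A minimal Python sketch of state reconstruction from the output sequence (the 2×2 system is an arbitrary illustration; here O₂ happens to be the identity, so the recovery is immediate):

```python
# Recover x0 from y(0), y(1) for Phi = [0 1; -0.5 -0.2], H = [1 0].
# Here O_2 = [H; H*Phi] = [[1, 0], [0, 1]], so x0 = [y(0), y(1)].
Phi = [[0.0, 1.0], [-0.5, -0.2]]
H = [1.0, 0.0]
x0 = [0.7, -0.3]                 # "unknown" initial condition

# generate the outputs of the autonomous system x(k+1) = Phi x(k)
x = list(x0)
ys = []
for _ in range(2):
    ys.append(H[0]*x[0] + H[1]*x[1])
    x = [Phi[0][0]*x[0] + Phi[0][1]*x[1],
         Phi[1][0]*x[0] + Phi[1][1]*x[1]]

# apply O_2^{-1} (the identity in this example) to [y(0); y(1)]
x0_hat = [ys[0], ys[1]]
print(x0_hat)  # → [0.7, -0.3]
```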
4.4. DISCRETIZATION 107<br />
4.4 Discretization
4.4.1 Discretization of transfer functions by approximation
Exact mapping of z-plane to s-plane
Recall that z⁻¹ represents a one-step delay in the discrete-time domain, and e^{−sT} is a time delay of T in the continuous-time domain. Since they represent the same physical phenomenon,
\[ z^{-1} = e^{-sT} \]
where T is the sampling period. Therefore, the exact mapping of the z-plane to the s-plane is z = e^{sT}. This exact mapping, however, cannot be utilized to convert from one domain to another, because \( z = e^{sT} = 1+sT+\frac{1}{2}s^2T^2+\frac{1}{6}s^3T^3+\cdots \) results in an infinite order of s.
Forward approximation
The forward approximation is a first-order approximation of z = e^{sT}, i.e.
\[ z = e^{sT} \approx 1+sT \tag{4.2} \]
where T is the sampling period. The inverse of (4.2) is
\[ s \approx \frac{z-1}{T} \tag{4.3} \]
Using this relationship, a transfer function in the discrete-time domain can be obtained from one in the continuous-time domain.
Backward approximation
Notice that the difference equation transformed by the forward approximation often requires future information of the input, i.e., it often results in an unrealizable transfer function. Therefore, the backward approximation, which is a first-order approximation of z = (e^{−sT})⁻¹, is used instead:
\[ z = e^{sT} = \frac{1}{e^{-sT}} \approx \frac{1}{1-sT} \tag{4.4} \]
The inverse of (4.4) is
\[ s \approx \frac{1-z^{-1}}{T} \tag{4.5} \]
The backward approximation is also called the backward Euler method.
Bilinear approximation
The bilinear approximation, which is a first-order approximation of z = e^{sT} = e^{sT/2}(e^{−sT/2})⁻¹, enables more precise conversion between the discrete-time and continuous-time domains:
\[ z = e^{sT} = \frac{e^{sT/2}}{e^{-sT/2}} \approx \frac{1+sT/2}{1-sT/2} \tag{4.6} \]
The inverse of this mapping is
\[ s \approx \frac{2}{T}\,\frac{z-1}{z+1} \]
This method is also called Tustin's method or the trapezoid rule.
[Example 4-6] Suppose you have designed a PD controller in the s-domain:
\[ C(s) = \frac{U(s)}{E(s)} = k_P + k_D s \tag{4.7} \]
For implementation, it should be converted into the z-domain as follows.
1. Forward approximation: s = (z−1)/T
\[ C(z) = k_P + k_D\frac{z-1}{T} = \frac{k_D}{T}z + \Big(k_P - \frac{k_D}{T}\Big) \]
The corresponding difference equation is
\[ u(k) = \frac{k_D}{T}e(k+1) + \Big(k_P - \frac{k_D}{T}\Big)e(k) \]
Notice that this difference equation is not realizable, i.e., it requires future information to generate the current signal. Therefore, this controller cannot be implemented in this form.
2. Backward approximation: s = (1−z⁻¹)/T
\[ C(z) = k_P + k_D\frac{1-z^{-1}}{T} = \Big(k_P + \frac{k_D}{T}\Big) - \frac{k_D}{T}z^{-1} \]
The implementable difference equation is
\[ u(k) = \Big(k_P + \frac{k_D}{T}\Big)e(k) - \frac{k_D}{T}e(k-1) \]
3. Bilinear approximation: s = (2/T)(z−1)/(z+1)
\[ C(z) = k_P + k_D\frac{2}{T}\,\frac{z-1}{z+1} = \frac{\big(k_P+\frac{2k_D}{T}\big)z + \big(k_P-\frac{2k_D}{T}\big)}{z+1} = \frac{\big(k_P+\frac{2k_D}{T}\big) + \big(k_P-\frac{2k_D}{T}\big)z^{-1}}{1+z^{-1}} \]
The implementable difference equation is
\[ u(k) = -u(k-1) + \Big(k_P+\frac{2k_D}{T}\Big)e(k) + \Big(k_P-\frac{2k_D}{T}\Big)e(k-1) \]
Figure 4.6 shows the frequency responses of a PD controller and of its discretized versions obtained by the three approximation methods. Notice that their magnitudes and phases are similar but not identical. In particular, the phases of the C(z) obtained by the forward and backward approximation methods deviate considerably from that of C(s), which may cause instability.
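The three substitutions can be checked numerically. The following plain-Python sketch (not from the original notes; the gains k_P = 1, k_D = 1 and period T = 1 follow Figure 4.6) evaluates C(s) = k_P + k_D s at s = jω and each discretized C(z) at z = e^{jωT}, so the deviations plotted in Figure 4.6 can be reproduced pointwise.

```python
import cmath

kP, kD, T = 1.0, 1.0, 1.0  # gains and sampling period used in Fig. 4.6

def C_s(w):
    # continuous-time PD controller C(s) = kP + kD*s at s = j*w
    return kP + kD * 1j * w

def C_forward(z):
    # forward approximation: s = (z - 1)/T
    return kP + kD * (z - 1) / T

def C_backward(z):
    # backward approximation: s = (1 - z^{-1})/T
    return kP + kD * (1 - 1 / z) / T

def C_bilinear(z):
    # bilinear (Tustin) approximation: s = (2/T)*(z - 1)/(z + 1)
    return kP + kD * (2 / T) * (z - 1) / (z + 1)

for w in (0.01, 0.1, 0.5):
    z = cmath.exp(1j * w * T)
    cs = C_s(w)
    # at frequencies well below 1/T all three approximations agree with C(s)
    print(w, abs(C_forward(z) - cs), abs(C_backward(z) - cs), abs(C_bilinear(z) - cs))
```

At ω ≪ 1/T all three agree closely with C(s); as ω approaches the Nyquist frequency π/T, the phases of the forward and backward versions depart from that of C(s), which is what Figure 4.6 shows.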
Pole-zero matching method

The three discretization methods above are based on the approximation of z = e^{sT}. Another idea is to convert the transfer function in the continuous-time domain such that the poles and zeros are matched in both domains. Consider a transfer function in the continuous-time domain,

C(s) = k_c ∏_{j=1}^{m} (s − z_j) / ∏_{i=1}^{n} (s − p_i)

where the z_j's and p_i's are the zeros and poles of C(s), respectively. Since the exact mapping between the continuous-time domain and the discrete-time domain is z = e^{sT}, the equivalent zeros and poles in the discrete-time domain are e^{z_j T} and e^{p_i T}, respectively. Therefore, the corresponding discrete-time domain transfer function is

C(z) = k_d ∏_{j=1}^{m} (z − e^{z_j T}) / ∏_{i=1}^{n} (z − e^{p_i T})
Figure 4.6: Bode plots of C(s) and the three approximated C(z)'s (forward, backward, and bilinear approximation) for k_P = 1, k_D = 1, and T = 1.
where k_d is a constant chosen to match the magnitudes of C(s = jω) and C(z = e^{jωT}) at a certain frequency^12. This conversion is called the pole-zero matching method. Notice that, in spite of its theoretical background, the pole-zero matching method is still not an exact conversion between the two domains^13. Recall that an exact conversion between continuous-time and discrete-time transfer functions is impossible, because z = e^{sT} = 1 + sT + (1/2)s²T² + ... is of infinitely large order.
Fig. 4.7 shows the step responses of a transfer function in the continuous-time domain and of its discrete-time models obtained by the forward approximation, the backward approximation, the bilinear approximation, and the pole-zero matching method. Although all of the discretized models show similar responses, notice that none of them follows the response of the continuous-time model exactly.
[Example 4-7] A controller designed in the continuous-time domain,

C(s) = (0.5s + 1) / ((s + 1)(s + 3))

^12 In general, k_d is designed to match the magnitudes at zero frequency.
^13 The exact mapping of the locations of poles and zeros does not necessarily mean that the responses of the two systems are the same.
Figure 4.7: Step responses of C(s) = 1/(s² + 3s + 2) and its discretized models C_Forward(z), C_Backward(z), C_Bilinear(z), and C_Matched(z).
is to be discretized by the pole-zero matching method with a sampling period of T = 1 ms. The zero and poles of C(s) are

z_1 = −2,  p_1 = −1,  p_2 = −3

The corresponding zero and poles in the discrete-time domain are

z_1d = e^{−0.002},  p_1d = e^{−0.001},  p_2d = e^{−0.003}

Therefore, C(z) obtained by the pole-zero matching method is

C(z) = k_d (z − e^{−0.002}) / ((z − e^{−0.001})(z − e^{−0.003}))

where k_d is determined such that |C(s = jω)| = |C(z = e^{jωT})| at a certain frequency. By setting ω = 0,

C(s = j0) = 1/3,  C(z = e^{j0}) = k_d (1 − e^{−0.002}) / ((1 − e^{−0.001})(1 − e^{−0.003}))

Therefore, k_d = (1 − e^{−0.001})(1 − e^{−0.003}) / (3(1 − e^{−0.002})).
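As a numerical cross-check of this example (a plain-Python sketch, not part of the original notes), the snippet below maps the roots with z = e^{sT} and computes k_d from the DC-gain matching condition:

```python
import math

T = 0.001  # sampling period (1 ms), as in Example 4-7
z1, p1, p2 = -2.0, -1.0, -3.0  # zero and poles of C(s) = (0.5s+1)/((s+1)(s+3))

# map each root with z = exp(s*T)
z1d, p1d, p2d = (math.exp(r * T) for r in (z1, p1, p2))

# choose kd so the DC gains match: C(s=0) = 1/3 must equal C(z=1)
dc_s = 1.0 / 3.0
kd = dc_s * (1 - p1d) * (1 - p2d) / (1 - z1d)

def C_z(z):
    # pole-zero matched discrete-time transfer function
    return kd * (z - z1d) / ((z - p1d) * (z - p2d))

print(kd, C_z(1.0))  # C_z(1) equals 1/3 by construction
```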
Figure 4.8: Matlab code for discretization by approximation.
4.4.2 Stability mapping of approximation-based discretization methods

Since the discretization methods are based on approximation, the stability of the discretized transfer functions is not guaranteed. Recall the discretization methods:

[Forward approximation]  z = 1 + sT
[Backward approximation]  z = 1/(1 − sT)
[Bilinear approximation]  z = (1 + sT/2)/(1 − sT/2)
If we let s = jω in these equations, we obtain the boundaries of the regions in the z-plane which originate from the stable portion of the s-plane. The shaded areas sketched in the z-plane in Fig. 4.9 are these regions^14 for each case. To show that the backward approximation results in a circle, 1/2 is added to and subtracted from the right-hand side, i.e.

z = 1/2 + (1/(1 − sT) − 1/2) = 1/2 + (1/2)(1 + sT)/(1 − sT)

Now it is easy to see that with s = jω the magnitude of z − 1/2 is constant, i.e.

|z − 1/2| = (1/2) |(1 + jωT)/(1 − jωT)| = 1/2

and the curve is thus a circle, as drawn in Fig. 4.9(b). Because the unit circle is the stability boundary in the z-plane, it is apparent from Fig. 4.9 that the forward approximation method can cause a stable continuous transfer function to be mapped into an unstable digital filter. It is especially interesting to notice that the bilinear approximation method maps the stable region of the s-plane exactly onto the stable region of the z-plane.

^14 That is, the regions that correspond to the stable region in the s-plane.
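The mapping of a single stable pole already illustrates the difference. A minimal sketch (the values s = −3 and T = 1 are illustrative choices, not from the notes):

```python
def z_forward(s, T):
    # forward approximation: z = 1 + s*T
    return 1 + s * T

def z_backward(s, T):
    # backward approximation: z = 1/(1 - s*T)
    return 1 / (1 - s * T)

def z_bilinear(s, T):
    # bilinear approximation: z = (1 + s*T/2)/(1 - s*T/2)
    return (1 + s * T / 2) / (1 - s * T / 2)

# a stable continuous pole (Re{s} < 0) that is fast relative to T
s, T = -3.0 + 0j, 1.0

zf, zb, zbl = z_forward(s, T), z_backward(s, T), z_bilinear(s, T)
print(abs(zf), abs(zb), abs(zbl))
# forward: |1 - 3| = 2 > 1 -> mapped outside the unit circle (unstable filter)
# backward and bilinear keep |z| < 1 for any pole with Re{s} < 0
```

The bilinear rule also maps the jω-axis exactly onto the unit circle, consistent with Fig. 4.9(c).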
Figure 4.9: Maps of the left half of the s-plane (Re{s} < 0) onto the z-plane by the approximation methods: (a) forward approximation, s = (z − 1)/T; (b) backward approximation, s = (1 − z^{−1})/T; (c) bilinear approximation, s = (2/T)(z − 1)/(z + 1); (d) pole-zero matching. Stable s-plane poles map onto the shaded regions of the z-plane. The unit circle is shown for reference.
Figure 4.10: Zero-order-hold effect of common digital-to-analog converters: they hold the output (analog signal) until the input (digital signal) is updated.
4.4.3 Discretization of plant models with D/A

The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (D/A). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval, as shown in Fig. 4.10. In fact, the control signal generated by a computer through a D/A is a combination of step functions delayed by kT.

Recall that the transfer function of a plant is described in the discrete-time domain as

G(z) = Y(z)/U(z)

where U(z) is the input to the plant and Y(z) is the output from the plant. Since an impulse signal is 1 in the z-domain, the transfer function is equivalent to the z-transformed impulse response, i.e.

G(z) = Z{y(k)}  if u(k) = δ(k).
As shown in Fig. 4.11, suppose a computer is generating an impulse signal, i.e.

u(k) = 1 for k = 0,  u(k) = 0 for k > 0    (4.8)

The continuous-time domain signal generated through a D/A is

u(t) = 1 for 0 ≤ t < T,  u(t) = 0 for t ≥ T

The signal above can be expressed as a combination of two step functions, i.e.

u(t) = u_1(t) − u_2(t)

where u_1(t) is a unit step function, and u_2(t) is another unit step function delayed by T. Taking the Laplace transform of u(t), we get

U(s) = U_1(s) − U_2(s) = 1/s − e^{−sT}/s = (1 − e^{−sT}) (1/s)
Figure 4.11: The output of a D/A when U(z) = 1: the discrete-time impulse becomes a pulse of unit height held from t = 0 to t = T.
Notice that the output of the plant^15 is

Y(s) = G(s)U(s) = (1 − e^{−sT}) G(s)/s

or in the time domain

y(t) = L^{−1}{G(s)U(s)} = L^{−1}{(1 − e^{−sT}) G(s)/s}

Sampling the signal above and taking the z-transform,

Y(z) = Z{L^{−1}{(1 − e^{−sT}) G(s)/s}}    (4.9)

Notice that Z{L^{−1}{1 − e^{−sT}}} is (1 − z^{−1}), because e^{−sT} is a one-step delay, i.e., z^{−1}. Since the input signal was an impulse function (i.e., δ(k)) as in (4.8), the output in (4.9) is equivalent to the transfer function of the plant. Therefore, the plant model in the discrete-time domain is obtained by

G(z) = (1 − z^{−1}) Z{L^{−1}{G(s)/s}}

which is called the zero-order-hold (ZOH) equivalent. Note that this discretization method is not an approximation method but an exact mapping method. However, the ZOH equivalent can be applied only when the signal passes through a D/A. For example, the ZOH equivalent cannot be utilized to obtain a digital controller from one designed in the continuous-time domain.
[Example 4-8] A plant

G(s) = a/(s + a)

is connected to a computer via a D/A and an A/D. The calculation period is T. Then the plant model including the D/A and A/D in the discrete-time domain is

G(z) = (1 − z^{−1}) Z{L^{−1}{a/(s(s + a))}}

^15 Physical plants are always in the continuous-time domain.
Figure 4.12: Block diagram of a typical digital control system with a D/A and an A/D: (a) structure of a typical digital feedback control system; (b) actual plant dynamics with a D/A and an A/D; (c) discretized plant dynamics G(z). G(z) must be obtained by the zero-order-hold equivalent, since the plant dynamics include a D/A.
Figure 4.13: Matlab code to obtain the zero-order-hold equivalent of a plant model.
The partial fraction expansion of a/(s(s + a)) is

a/(s(s + a)) = 1/s − 1/(s + a)

Taking the inverse Laplace transform,

L^{−1}{1/s − 1/(s + a)} = 1 − e^{−at}

Sampling this signal and taking the z-transform, G(z) is obtained, i.e.

G(z) = (1 − z^{−1}) Z{1 − e^{−akT}}
     = (1 − z^{−1}) [1/(1 − z^{−1}) − 1/(1 − e^{−aT} z^{−1})]
     = (1 − z^{−1}) (z^{−1} − e^{−aT} z^{−1}) / ((1 − z^{−1})(1 − e^{−aT} z^{−1}))
     = (z^{−1} − e^{−aT} z^{−1}) / (1 − e^{−aT} z^{−1})
     = (1 − e^{−aT}) / (z − e^{−aT})
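The result can be checked numerically. The sketch below (plain Python; a = 2 and T = 0.1 are illustrative choices) drives the difference equation implied by G(z) = (1 − e^{−aT})/(z − e^{−aT}) and a finely integrated continuous plant with the same held input, and compares the sampled outputs.

```python
import math, random

random.seed(0)
a, T, N = 2.0, 0.1, 50  # plant pole, sampling period, and horizon (illustrative)
u = [random.uniform(-1, 1) for _ in range(N)]  # input sequence, held between samples

# difference equation from G(z) = (1 - e^{-aT})/(z - e^{-aT}):
# y(k+1) = e^{-aT} y(k) + (1 - e^{-aT}) u(k)
phi, gam = math.exp(-a * T), 1 - math.exp(-a * T)
yd = [0.0]
for k in range(N - 1):
    yd.append(phi * yd[k] + gam * u[k])

# continuous plant ydot = -a*y + a*u, integrated with small Euler sub-steps
M = 2000                 # sub-steps per sampling interval
dt = T / M
yc, y = [0.0], 0.0
for k in range(N - 1):
    for _ in range(M):
        y += dt * (-a * y + a * u[k])
    yc.append(y)

err = max(abs(p - q) for p, q in zip(yd, yc))
print(err)  # small: the ZOH equivalent reproduces the sampled plant output
```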
4.4.4 Discretization of state space models by zero-order-hold equivalent

Recall that a state space equation of a plant in the continuous-time domain is

ẋ = Fx + Gu    (4.10)
y = Hx

and its solution is

x(t) = e^{F(t−t_0)} x(t_0) + ∫_{t_0}^{t} e^{F(t−τ)} G u(τ) dτ    (4.11)

where t > t_0 and t_0 is the initial time^16.

When the plant is accompanied by a D/A and an A/D, the input signal u(t) is constant during one step, i.e.

u(t) = u(k)  for kT ≤ t < (k+1)T

which represents the zero-order-hold effect. Setting t = (k+1)T and t_0 = kT in (4.11),

x((k+1)T) = e^{FT} x(kT) + ∫_{kT}^{(k+1)T} e^{F((k+1)T−τ)} G u(τ) dτ

^16 t_0 has been regarded as 0 in the previous chapters.
Since u(τ) = u(k) is constant over one calculation period,

x((k+1)T) = e^{FT} x(kT) + [∫_{kT}^{(k+1)T} e^{F((k+1)T−τ)} G dτ] u(k)

Setting t = τ − kT in the integration term and omitting T in the index of x(kT),

x(k+1) = e^{FT} x(k) + [∫_{0}^{T} e^{F(T−t)} G dt] u(k)

Notice that the new state space equation is in the discrete-time domain. Therefore, the zero-order-hold equivalent of the state space equation in (4.10) is

x(k+1) = Φ x(k) + Γ u(k)
y(k) = H x(k)

where

Φ = e^{FT},  Γ = ∫_{0}^{T} e^{F(T−t)} G dt    (4.12)
[Example 4-9] Consider the equation of motion of a mass-damper-spring system:

m ÿ + c ẏ + k y = u

where the output is the position of the mass, i.e., y. It is often desirable to construct a discrete-time state space equation whose state variables are the sampled continuous-time position and velocity, i.e.

x(k) = [y(k); ẏ(k)]    (4.13)

Recall that if the state space equation is obtained directly in the discrete-time domain as in Section 4.3.2, it is possible to have the state in the form x(k) = [y(k) y(k+1) ··· y(k+n−1)]^T, but it is not possible to set the state as in (4.13). For this purpose, the method in (4.12) is applicable. First, the continuous-time domain state space equation is obtained, for example:

ẋ = [0 1; −k/m −c/m] x + [0; 1/m] u
y = [1 0] x
where the state is

x(t) = [y(t); ẏ(t)]

This state can simply be sampled as in (4.13) if the state space equation is discretized as

x(k+1) = exp([0 1; −k/m −c/m] T) x(k) + [∫_{0}^{T} exp([0 1; −k/m −c/m](T − t)) dt] [0; 1/m] u(k)
y(k) = [1 0] x(k)
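Numerically, Φ and Γ can be obtained together from one matrix exponential of the augmented matrix [[F, G], [0, 0]]·T, whose exponential carries Φ and Γ in its top blocks; this is a standard identity, not stated in the notes. The sketch below (illustrative values m = 1, c = 0.5, k = 2, T = 0.01) implements the exponential with a scaled Taylor series so that only NumPy is needed.

```python
import numpy as np

def expm(A, terms=30):
    # matrix exponential by scaling-and-squaring of a Taylor series
    n = A.shape[0]
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(A, 1))))) + 1)
    B = A / (2 ** s)
    E, term = np.eye(n), np.eye(n)
    for i in range(1, terms):
        term = term @ B / i
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

m, c, k, T = 1.0, 0.5, 2.0, 0.01  # illustrative plant parameters and period
F = np.array([[0.0, 1.0], [-k / m, -c / m]])
G = np.array([[0.0], [1.0 / m]])

# augmented-matrix identity: exp([[F, G],[0, 0]] T) = [[Phi, Gamma],[0, 1]]
M = np.zeros((3, 3))
M[:2, :2] = F * T
M[:2, 2:] = G * T
E = expm(M)
Phi, Gamma = E[:2, :2], E[:2, 2:]
print(Phi)
print(Gamma)
```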
Chapter 5

Controller Design in the Discrete-Time Domain
5.1 Controller Design Process

All plants operate in the continuous-time domain, while controllers usually run in the discrete-time domain. The controller in the discrete-time domain, C(z), can be designed directly from the discretized plant model G(z), or obtained by converting C(s) with an appropriate approximation method.

In general, designing a controller in the continuous-time domain is preferred over the direct design of C(z) because of the many useful controller design methods available there, such as PID control, lead-lag compensation, Nyquist plot analysis, and so on. However, there are also many useful controller design methods available only in the discrete-time domain, such as zero phase error tracking control, repetitive control, learning control, adaptive control, parameter adaptation, and so on. Therefore, it is important to know how to use controllers in both the continuous-time domain and the discrete-time domain. The general controller design processes are depicted in Fig. 5.1.
5.2 System Identification by Least Squares

System identification^1 is the process of identifying the model parameters of a plant based on the input and output signals obtained from experiments. The most fundamental method is to minimize the error between the actual (i.e., measured) output and a simulated output. The Least Squares method is utilized for this purpose.

^1 System identification is called "SysID" for short.
Figure 5.1: Controller design processes: identify G(s) or G(z) (by Fourier transform of input/output signals, time-domain analysis, or parameter identification); design C(s) (PID control, lead-lag compensator, pole assignment) or a state feedback u = −K x̂ (eigenvalue assignment, LQ) with an observer; discretize by the forward, backward, or bilinear approximation, or use the zero-order-hold equivalent (Φ, Γ, H) in a canonical form (C.C.F., O.C.F., D.C.F.) to design directly in discrete time; and finally implement C(z) or u(k) = −K x̂(k).
Consider a single-input (u(k)), single-output (y(k)) system described by

G(z) = Y(z)/U(z) = B(z)/A(z)

where the order of A(z) is n and that of B(z) is m. Assuming that m < n, G(z) can be written as

G(z) = z^{−1} B(z^{−1}) / A(z^{−1})

where

A(z^{−1}) = 1 + a_1 z^{−1} + a_2 z^{−2} + ... + a_n z^{−n}
B(z^{−1}) = b_0 + b_1 z^{−1} + b_2 z^{−2} + ... + b_m z^{−m}

Taking the inverse z-transform, the output y(k) is obtained:

Z^{−1}{A(z^{−1}) Y(z)} = Z^{−1}{z^{−1} B(z^{−1}) U(z)}
y(k) + a_1 y(k−1) + ... + a_n y(k−n) = b_0 u(k−1) + ... + b_m u(k−m−1)

Moving the y(k−1), ..., y(k−n) terms to the right-hand side,

y(k) = −a_1 y(k−1) − ... − a_n y(k−n) + b_0 u(k−1) + ... + b_m u(k−m−1)

The equation above can be expressed in vector form, i.e.

y(k) = θ^T φ(k)

where

θ = [a_1 ... a_n b_0 ... b_m]^T ∈ R^{n+m+1}
φ(k) = [−y(k−1) ... −y(k−n) u(k−1) ... u(k−m−1)]^T ∈ R^{n+m+1}

In general, θ is called the parameter vector, and φ(k) is called the regressor.
Suppose that we have data sets [u(0), u(1), ..., u(N)] and [y(0), y(1), ..., y(N)] but do not know the model parameters a_1, ..., a_n, b_0, ..., b_m. One way to identify the model parameters is to minimize

J = Σ_{k=0}^{N} [y(k) − θ^T φ(k)]²

which requires solving a Least Squares problem. The solution to the Least Squares problem can be found by setting the partial derivative of J with respect to θ to zero, i.e.

∂J/∂θ = −2 Σ_{k=0}^{N} [y(k) − θ^T φ(k)] φ(k)
Since θ^T φ(k) ∈ R, (θ^T φ(k)) φ(k) = (φ^T(k) θ) φ(k) = φ(k) (φ^T(k) θ). Therefore

∂J/∂θ = −2 Σ_{k=0}^{N} [φ(k) y(k) − φ(k) φ^T(k) θ]

In order to make ∂J/∂θ = 0, θ should be

θ = [Σ_{k=0}^{N} φ(k) φ^T(k)]^{−1} [Σ_{k=0}^{N} φ(k) y(k)]    (5.1)

Using (5.1), the model parameters can be identified. Notice that the calculated θ may differ from the actual values if sensor noise or disturbances exist. Moreover, when the assumed system orders (i.e., m and n) do not match those of the actual plant, the calculated θ may not be the actual value.
[Example 5-1] Consider a plant in the discrete-time domain:

G(z) = (z + 0.5) / (z² + 0.2z − 0.8)

Suppose that the model parameters are unknown. To identify the parameters, an open-loop control experiment has been carried out with the input signal

u(0) = 1.0000, u(1) = 0.5000, u(2) = −1.0000, u(3) = −0.8000, u(4) = 0.6000, u(5) = 0.0000

The measured output signal is

y(0) = 0.0000, y(1) = 1.0000, y(2) = 0.8000, y(3) = −0.1100, y(4) = −0.6380, y(5) = 0.2396
From the data above, φ(k) = [−y(k−1) −y(k−2) u(k−1) u(k−2)]^T can be constructed as

φ(0) = [0.0000 0.0000 0.0000 0.0000]^T
φ(1) = [0.0000 0.0000 1.0000 0.0000]^T
φ(2) = [−1.0000 0.0000 0.5000 1.0000]^T
φ(3) = [−0.8000 −1.0000 −1.0000 0.5000]^T
φ(4) = [0.1100 −0.8000 −0.8000 −1.0000]^T
φ(5) = [0.6380 0.1100 0.6000 −0.8000]^T

The model parameters θ = [a_1 a_2 b_0 b_1]^T can be identified by Least Squares, i.e.

θ = [Σ_{k=0}^{5} φ(k) φ^T(k)]^{−1} [Σ_{k=0}^{5} φ(k) y(k)]    (5.2)

The calculated result is θ = [0.2000 −0.8000 1.0000 0.5000]^T, which is the same as the actual parameters.
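This computation is easy to reproduce. The sketch below (NumPy; not part of the original notes) builds the regressors from the data of Example 5-1 and solves the normal equations of (5.1):

```python
import numpy as np

# input/output data from Example 5-1
u = [1.0, 0.5, -1.0, -0.8, 0.6, 0.0]
y = [0.0, 1.0, 0.8, -0.11, -0.638, 0.2396]

def sig(seq, k):
    # signals before k = 0 are taken as zero
    return seq[k] if k >= 0 else 0.0

# regressor phi(k) = [-y(k-1), -y(k-2), u(k-1), u(k-2)]^T, stacked row-wise
Phi = np.array([[-sig(y, k - 1), -sig(y, k - 2), sig(u, k - 1), sig(u, k - 2)]
                for k in range(6)])
Y = np.array(y)

# normal equations of (5.1): theta = (sum phi phi^T)^{-1} (sum phi y)
theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta)  # recovers [a1, a2, b0, b1] = [0.2, -0.8, 1.0, 0.5]
```

Since the data were generated by the model without noise, the recovery is exact; with measurement noise the estimate would only approximate the true parameters, as noted above.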
5.3 Feedforward Controller Design

5.3.1 Perfect tracking control

Suppose that a plant G(z) is under feedback control with C(z). The closed-loop transfer function is

G_C(z) = G(z)C(z) / (1 + G(z)C(z)) = B_C(z)/A_C(z) = (b_0 z^q + b_1 z^{q−1} + b_2 z^{q−2} + ... + b_q) / (z^p + a_1 z^{p−1} + a_2 z^{p−2} + ... + a_p)

where the orders of A_C(z) and B_C(z) are p and q ≤ p, respectively. Multiplying both the numerator and the denominator by b_0^{−1} z^{−p}, G_C(z) becomes

G_C(z) = z^{−d} β_C(z^{−1}) / α_C(z^{−1})    (5.3)

where d is the relative order, i.e., d = p − q. α_C(z^{−1}) and β_C(z^{−1}) are

α_C(z^{−1}) = α_0 + α_1 z^{−1} + α_2 z^{−2} + ... + α_p z^{−p}
β_C(z^{−1}) = 1 + β_1 z^{−1} + β_2 z^{−2} + ... + β_q z^{−q}

Notice that the first term of β_C(z^{−1}) is set to 1.^2

^2 Note that β_1 = b_1/b_0, ..., β_q = b_q/b_0, α_0 = 1/b_0, ..., α_p = a_p/b_0.
Figure 5.2: Block diagram of a feedforward and feedback control system: the desired output y_d is filtered by G_C^{−1}(z) to produce the reference, which drives the closed loop G_C(z) = G(z)C(z)/(1 + G(z)C(z)).
In addition to the feedback controller, suppose that the reference input r(k) is obtained by filtering the desired output y_d. The most intuitive filter for this purpose is the inverse of the closed-loop transfer function, G_C^{−1}(z), as shown in Fig. 5.2.

Suppose that G_C(z) has no zero outside of the unit circle. Then G_C^{−1}(z) is stable and

G_C^{−1}(z) = z^d α_C(z^{−1}) / β_C(z^{−1})

Since the input to G_C^{−1}(z) is y_d(k), the reference input r(k) is obtained by

r(k) = Z^{−1}{G_C^{−1}(z) Y_d(z)}

Notice that the overall transfer function from y_d(k) to y(k) is

Y(z)/Y_d(z) = (Y(z)/R(z)) (R(z)/Y_d(z)) = G_C(z) G_C^{−1}(z) = 1    (5.4)

which results in y(k) = y_d(k), i.e., perfect tracking control. For this scheme to work, at least the following three conditions must be satisfied:

1. the desired output must be known at least d steps ahead^3,
2. the mathematical inverse of the closed-loop system must be asymptotically stable, which implies that the closed-loop system does not possess any unstable zeros, and
3. G_C(z) must be an accurate representation of the closed-loop system.

In most mechanical control problems such as robot control and machining, the desired output^4 is usually predetermined or known in advance, and thus the first condition is satisfied. The second condition requires that the open-loop plant model does not possess any unstable zeros and that the feedback controller does not introduce unstable zeros. The third condition requires accurate identification of the system parameters.

^3 When the future information is not available, the z^d term in G_C^{−1}(z) can be ignored. In this case, the output will be delayed by d steps such that y(k) = y_d(k − d), which is still good performance.
^4 That is, the desired trajectory.
In addition to the necessary conditions stated above, there is a recommended condition on the design of the desired output y_d(k). Since the magnitude of G_C(e^{jωT}) is small at high frequencies^5, G_C^{−1}(z) amplifies the high-frequency components of the desired output. Thus, the desired output should be designed such that y_d(k) is continuous and smooth for all k. Notice that a step function is not a good choice for the desired output.
[Example 5-2] Suppose a plant is under feedback control, where

G(z) = 1 / (z² + 0.2z + 0.5),  C(z) = (z + 0.5)/z

The closed-loop transfer function is

G_C(z) = (z + 0.5) / (z³ + 0.2z² + 1.5z + 0.5) = z^{−2}(1 + 0.5z^{−1}) / (1 + 0.2z^{−1} + 1.5z^{−2} + 0.5z^{−3})

Since G_C(z) does not have any zero outside of the unit circle, perfect tracking control is applicable. The inverse of the closed-loop transfer function is

G_C^{−1}(z) = z²(1 + 0.2z^{−1} + 1.5z^{−2} + 0.5z^{−3}) / (1 + 0.5z^{−1})

Therefore, the reference input is obtained from the desired output by

R(z)/Y_d(z) = z²(1 + 0.2z^{−1} + 1.5z^{−2} + 0.5z^{−3}) / (1 + 0.5z^{−1})
(1 + 0.5z^{−1}) R(z) = z²(1 + 0.2z^{−1} + 1.5z^{−2} + 0.5z^{−3}) Y_d(z)

Taking the inverse z-transform, we get

r(k) = −0.5 r(k−1) + y_d(k+2) + 0.2 y_d(k+1) + 1.5 y_d(k) + 0.5 y_d(k−1)
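The recursion can be exercised in simulation. The sketch below (plain Python, not from the notes; the smooth trajectory y_d is an arbitrary illustrative choice, kept at zero for its first two samples so that the zero initial conditions are consistent) generates r(k) as above, runs the closed loop G_C(z), and recovers y(k) = y_d(k) to machine precision.

```python
import math

N = 25
# smooth desired trajectory; a step would be a poor choice (see above).
# y_d(0) = y_d(1) = 0 keeps it consistent with zero initial conditions.
yd = [0.5 * (1 - math.cos(0.2 * max(0, k - 1))) for k in range(N + 2)]

def at(seq, k):
    # signals outside the recorded range are taken as zero
    return seq[k] if 0 <= k < len(seq) else 0.0

# feedforward: r(k) = -0.5 r(k-1) + yd(k+2) + 0.2 yd(k+1) + 1.5 yd(k) + 0.5 yd(k-1)
r = []
for k in range(N):
    r.append(-0.5 * at(r, k - 1) + yd[k + 2] + 0.2 * yd[k + 1]
             + 1.5 * yd[k] + 0.5 * at(yd, k - 1))

# closed loop G_C(z): y(k) = -0.2 y(k-1) - 1.5 y(k-2) - 0.5 y(k-3) + r(k-2) + 0.5 r(k-3)
y = []
for k in range(N):
    y.append(-0.2 * at(y, k - 1) - 1.5 * at(y, k - 2) - 0.5 * at(y, k - 3)
             + at(r, k - 2) + 0.5 * at(r, k - 3))

err = max(abs(y[k] - yd[k]) for k in range(N))
print(err)  # ~0: perfect tracking, y(k) = y_d(k)
```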
5.3.2 Zero phase error tracking control

Recall that the perfect tracking control in (5.4) is available only when the closed-loop system does not possess any zero outside of the unit circle. Such undesired zeros, however, often appear through the discretization process or are inherited from the physical model, as the following example shows.

^5 Recall, in the continuous-time domain, that |G(jω)| → 0 as ω → ∞ if G(s) has n poles and m < n zeros. Most mechanical systems satisfy this condition.
Figure 5.3: Matlab code to obtain the feedforward controller transfer function.
[Example 5-3] A pure-mass system

G(s) = 1/(ms²)

is controlled by a PD controller

C(s) = k_D s + k_P

Then the closed-loop transfer function is

G_C(s) = (k_D s + k_P) / (ms² + k_D s + k_P)

which does not possess any zero in the right half plane if k_P > 0 and k_D > 0.

Suppose that the control system above is implemented in a computer with a calculation period of T. The plant is accompanied by a D/A and an A/D, so the plant model is discretized by the zero-order-hold equivalent, i.e.

G(z) = T² z^{−1}(1 + z^{−1}) / (2m(1 − z^{−1})²)
and the controller is discretized by the backward approximation, i.e.

C(z) = (k_P + k_D/T) + (−k_D/T) z^{−1} := k_1 + k_2 z^{−1}

The resulting closed-loop transfer function in the discrete-time domain is

G_C(z) = T² z^{−1}(1 + z^{−1})(k_1 + k_2 z^{−1}) / (2m + (k_1 T² − 4m) z^{−1} + (k_1 T² + k_2 T² + 2m) z^{−2} + k_2 T² z^{−3})

Notice that G_C(z) possesses a zero at −1, which is on the stability boundary. Therefore, the inverse of G_C(z) would possess a marginally stable pole. Moreover, the mode at −1 causes a large oscillation^a, and thus perfect tracking control is not recommended for this system.

^a Recall that y(k) = (−1)^k if Y(z) = 1/(1 + z^{−1}).
Suppose that a closed-loop transfer function is

G_C(z) = z^{−d} β_C(z^{−1}) / α_C(z^{−1})

In order to develop a feedforward controller for systems with uncancellable^6 zeros, β_C(z^{−1}) is factorized into two parts, i.e.

β_C(z^{−1}) = β_C^s(z^{−1}) β_C^u(z^{−1})

where β_C^s(z^{−1}) contains the stable zeros and β_C^u(z^{−1}) contains the uncancellable zeros. Since the inverse of β_C^u(z^{−1}) cannot be realized, an alternative feedforward controller is introduced:

G_ZPET(z) = z^d α_C(z^{−1}) β_C^u(z) / (β_C^s(z^{−1}) β_C^u(1)²)    (5.5)

where β_C^u(z) is obtained by replacing z^{−1} in β_C^u(z^{−1}) with z. The factor β_C^u(1)² is introduced to make the magnitude of G_ZPET(z) G_C(z) one at zero frequency.

Using the feedforward controller in (5.5), the reference input r(k) is obtained from the desired output y_d(k), i.e.

R(z) = G_ZPET(z) Y_d(z)

^6 That is, unstable zeros and marginally stable zeros.
The overall transfer function from y_d(k) to y(k) is

Y(z)/Y_d(z) = G_ZPET(z) G_C(z)
            = [z^d α_C(z^{−1}) β_C^u(z) / (β_C^s(z^{−1}) β_C^u(1)²)] · [z^{−d} β_C^s(z^{−1}) β_C^u(z^{−1}) / α_C(z^{−1})]
            = β_C^u(z) β_C^u(z^{−1}) / β_C^u(1)²

Notice that in the frequency domain, β_C^u(e^{jωT}) is the complex conjugate of β_C^u(e^{−jωT}). Therefore, the phase of β_C^u(e^{jωT}) β_C^u(e^{−jωT}) / β_C^u(1)² is zero over the entire frequency range. Moreover, at zero frequency (i.e., at z = 1), β_C^u(1) β_C^u(1^{−1}) / β_C^u(1)² = 1. Since this feedforward control method does not introduce any phase error, it is called zero phase error tracking (ZPET) control.
[Example 5-4] Consider the closed-loop transfer function in [Example 5-3]. For simplicity, assume that T = 0.1, m = 1, k_P = 1, and k_D = 0.1. Then

G_C(z) = 0.02 z^{−1}(1 + z^{−1})(1 − 0.5z^{−1}) / (2 − 3.98z^{−1} + 2.01z^{−2} − 0.01z^{−3}) := z^{−d} β_C^s(z^{−1}) β_C^u(z^{−1}) / α_C(z^{−1})

G_C(z) can be factorized into

α_C(z^{−1}) = 0.02^{−1}(2 − 3.98z^{−1} + 2.01z^{−2} − 0.01z^{−3})
β_C^s(z^{−1}) = 1 − 0.5z^{−1}
β_C^u(z^{−1}) = 1 + z^{−1}
d = 1

To obtain the feedforward filter by zero phase error tracking control, β_C^u(z) and β_C^u(1) are needed, i.e.

β_C^u(z) = 1 + z
β_C^u(1) = 2
5.4. CONTROLLER DESIGN IN DISCRETE-TIME STATE SPACE 130<br />
Thus G_ZPET(z) is

G_ZPET(z) = z^d α_C(z^{-1}) β_C^u(z) / [β_C^s(z^{-1}) β_C^u(1)^2]
          = z^1 · 0.02^{-1} (2 − 3.98z^{-1} + 2.01z^{-2} − 0.01z^{-3})(1 + z) / [(1 − 0.5z^{-1}) · 2^2]
          = (25z^2 − 24.75z − 24.625 + 25z^{-1} − 0.125z^{-2}) / (1 − 0.5z^{-1})

The implementable control law is

r(k) = 0.5r(k−1) + 25y_d(k+2) − 24.75y_d(k+1) − 24.625y_d(k) + 25y_d(k−1) − 0.125y_d(k−2)
Although the relative order d was 1, two-step advanced information is required in the ZPET control. Using this feedforward control method, the output is

y(k) = Z^{-1}{G_C(z) G_ZPET(z) Y_d(z)}
     = Z^{-1}{ β_C^u(z^{-1}) β_C^u(z) / β_C^u(1)^2 · Y_d(z) }
     = Z^{-1}{ (1 + z^{-1})(1 + z)/4 · Y_d(z) }
     = 0.25y_d(k+1) + 0.5y_d(k) + 0.25y_d(k−1)

Notice that if y_d is constant, y(k) = y_d.
5.4 Controller Design in Discrete-Time State Space
5.4.1 Discrete-time full state observer<br />
For an n-dimensional discrete-time system described by<br />
x(k +1) = Φx(k)+Γu(k)<br />
y(k) = Hx(k)<br />
x(0) = x 0<br />
a full state observer is<br />
ˆx(k +1) = Φˆx(k)+Γu(k)+L[y(k)−Hˆx(k)], ˆx(0) = 0 (5.6)<br />
The estimation error equation is<br />
e(k +1) = [Φ−LH]e(k)<br />
Figure 5.4: Matlab code to obtain the feedforward controller transfer function by zero phase error tracking control.
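The computation in Figure 5.4 amounts to polynomial bookkeeping. A pure-Python sketch of the same construction, with the coefficients of [Example 5-4] and a hand-rolled polynomial product `conv` (names here are illustrative, not from the notes):

```python
def conv(a, b):
    """Product of two polynomials in z^-1 given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# factors of G_C(z) from [Example 5-4]
alpha_C = [50 * c for c in (2, -3.98, 2.01, -0.01)]  # 0.02^-1 (2 - 3.98 z^-1 + ...)
beta_s = [1.0, -0.5]   # cancellable part, 1 - 0.5 z^-1
beta_u = [1.0, 1.0]    # uncancellable part, 1 + z^-1
d = 1

m = len(beta_u) - 1                # degree of the uncancellable factor
beta_u_z = list(reversed(beta_u))  # beta_u(z) = z^m * (reversed coefficients in z^-1)
beta_u_1 = sum(beta_u)             # beta_u evaluated at z = 1

# G_ZPET numerator: z^(d+m) * alpha_C(z^-1) * reversed(beta_u) / beta_u(1)^2
num = [c / beta_u_1 ** 2 for c in conv(alpha_C, beta_u_z)]
advance = d + m                    # leading power of z in front of the numerator
den = beta_s                       # denominator beta_s(z^-1) = 1 - 0.5 z^-1
```

The resulting `num`, `advance`, and `den` reproduce G_ZPET(z) = (25z^2 − 24.75z − 24.625 + 25z^{-1} − 0.125z^{-2})/(1 − 0.5z^{-1}).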
where

e(k) = x(k) − x̂(k),  e(0) = x_0

As in the continuous-time case, the eigenvalues of [Φ − LH] can be arbitrarily assigned if the system is observable. Therefore, the observer gain matrix, L, should be designed such that the eigenvalues of [Φ − LH] are all inside the unit circle.
[Example 5-5] Consider a state space equation

x(k+1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)
y(k) = [1 0] x(k)
A full state observer is constructed as

x̂(k+1) = [1 1; 0 1] x̂(k) + [0.5; 1] u(k) + L[y(k) − [1 0] x̂(k)]

The observer gain matrix, L ∈ R^2, is designed such that the eigenvalues of [Φ−LH]
are all placed in the unit circle. The eigenvalue equation is

det( zI − [1 1; 0 1] + [l_1; l_2][1 0] ) = z^2 + (l_1 − 2)z + (1 − l_1 + l_2) = 0

Suppose that the desired eigenvalues are all zeros; then

l_1 = 2,  l_2 = 1
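The deadbeat gain can be verified with a few lines of Python (2-by-2 arithmetic written out by hand; this check is an addition, not from the notes):

```python
# Phi - L H for [Example 5-5], with L = [2; 1] and H = [1 0]
l1, l2 = 2.0, 1.0
A = [[1 - l1, 1.0],
     [-l2, 1.0]]

# for a 2x2 matrix the characteristic polynomial is z^2 - tr(A) z + det(A);
# here -tr(A) = l1 - 2 and det(A) = 1 - l1 + l2, matching the text
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# both eigenvalues sit at zero (a deadbeat observer) iff tr = det = 0
```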
5.4.2 Discrete-time full state observer with predictor

In the discrete-time case, the observer in (5.6) does not make use of y(k) for the estimation of the state at time k, even though the current measurement y(k) is strongly correlated with the current state x(k). Therefore, we consider a new observer that makes use of y(k) for the estimation of x(k). The new observer involves two stages: prediction and correction.
The new observer is constructed as

[Corrector]  x̂(k) = x̂_p(k) + L[y(k) − H x̂_p(k)],  x̂_p(0) = 0
[Predictor]  x̂_p(k+1) = Φ x̂(k) + Γ u(k)

If the predicted state, x̂_p(k), is eliminated from the equations:

x̂(k+1) = [I − LH] Φ x̂(k) + [I − LH] Γ u(k) + L y(k+1)
Then, the estimation error equation becomes

e_p(k+1) = [I − LH] Φ e_p(k)

where

e_p(k) = x(k) − x̂(k),  e_p(0) = x_0

The estimation error is governed by [I − LH]Φ, instead of [Φ − LH]. Therefore, in the full state observer with a predictor the observer gain, L, should be designed such that the eigenvalues of [I − LH]Φ are all inside the unit circle.
[Example 5-6] Consider the same state space equation as in [Example 5-5]. A full state observer with predictor is constructed as

[Corrector]  x̂(k) = x̂_p(k) + L[y(k) − [1 0] x̂_p(k)],  x̂_p(0) = 0
[Predictor]  x̂_p(k+1) = [1 1; 0 1] x̂(k) + [0.5; 1] u(k)
The observer gain matrix, L ∈ R^2, is designed such that the eigenvalues of [I − LH]Φ are all placed in the unit circle. The eigenvalue equation is

det( zI − (I − [l_1; l_2][1 0]) [1 1; 0 1] ) = z^2 + (l_1 + l_2 − 2)z + (1 − l_1) = 0

Suppose that the desired eigenvalues are all zeros; then l_1 = 1, l_2 = 1.
[Example 5-7] Consider the full state observers in [Example 5-5] and [Example 5-6]. Suppose the initial state is

x(0) = [a_0; a_1] ∈ R^2

where a_0 and a_1 are any scalars.

Recall that the estimation error equations are

e(k+1) = [Φ − LH] e(k)
e_p(k+1) = [I − LH] Φ e_p(k)

where e_p(k) and e(k) are the estimation errors (i.e., x(k) − x̂(k)) with and without a predictor, respectively.

Using the observer gain matrices obtained in [Example 5-5] and [Example 5-6], which assign the eigenvalues at zero, the estimation error equations become
e(k+1) = [−1 1; −1 1] e(k)
e_p(k+1) = [0 0; −1 0] e_p(k)
At k = 0, the estimation errors are

e(0) = x(0) = [a_0; a_1]
e_p(0) = x(0) = [a_0; a_1]

At k = 1, they become

e(1) = [−1 1; −1 1][a_0; a_1] = [a_1 − a_0; a_1 − a_0]
e_p(1) = [0 0; −1 0][a_0; a_1] = [0; −a_0]
At k = 2,

e(2) = [−1 1; −1 1][a_1 − a_0; a_1 − a_0] = [0; 0]
e_p(2) = [0 0; −1 0][0; −a_0] = [0; 0]

Notice that the estimation errors become zero within two steps. a

a The error does not only “converge” to zero but also “becomes” exactly zero. This is another advantage of using the discrete-time domain.
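The two error recursions can be simulated side by side. A pure-Python sketch (an addition for illustration; the initial state a_0 = 3, a_1 = −2 is an arbitrary choice):

```python
def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

a0, a1 = 3.0, -2.0                  # arbitrary initial state
E = [[-1.0, 1.0], [-1.0, 1.0]]      # Phi - L H       (observer of [Example 5-5])
Ep = [[0.0, 0.0], [-1.0, 0.0]]      # [I - L H] Phi   (observer of [Example 5-6])

e, ep = [a0, a1], [a0, a1]          # e(0) = e_p(0) = x(0)
history = [(list(e), list(ep))]
for _ in range(2):
    e, ep = matvec(E, e), matvec(Ep, ep)
    history.append((list(e), list(ep)))
# e(1) = [a1 - a0, a1 - a0] and e_p(1) = [0, -a0]; both errors are exactly
# [0, 0] at k = 2, since both closed-loop matrices are nilpotent
```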
5.4.3 Discrete-time linear quadratic optimal control

Consider a plant under the full state feedback control:

x(k+1) = Φ x(k) + Γ u(k)    (5.7)
y(k) = H x(k)
u(k) = −K x(k)    (5.8)
x(0) = x_0

The optimal control is sought to minimize a quadratic performance index,

J = Σ_{k=0}^{∞} [ y^T(k) y(k) + u^T(k) R u(k) ]

or in a more general form,

J = Σ_{k=0}^{∞} [ x^T(k) Q x(k) + u^T(k) R u(k) ]    (5.9)
where Q = H^T H is a positive semidefinite matrix, and R is positive definite. Notice that the first term penalizes the deviation of x(k) from the origin, while the second penalizes the control energy.
In order to find a control law that minimizes (5.9), let P ∈ R^{n×n} be a positive definite matrix. Note that

Σ_{k=0}^{∞} [ x^T(k+1) P x(k+1) − x^T(k) P x(k) ] = x^T(∞) P x(∞) − x^T(0) P x(0)

Assuming that the system is asymptotically stable (i.e., x(k) → 0 as k → ∞), x^T(∞) P x(∞) = 0 and thus

x^T(0) P x(0) + Σ_{k=0}^{∞} [ x^T(k+1) P x(k+1) − x^T(k) P x(k) ] = 0
Plugging (5.7) into the equation above, we get

x^T(0) P x(0) + Σ_{k=0}^{∞} [ [Φx(k)+Γu(k)]^T P [Φx(k)+Γu(k)] − x^T(k) P x(k) ]
= x^T(0) P x(0) + Σ_{k=0}^{∞} [ x^T(k)[Φ^T P Φ − P] x(k) + u^T(k) Γ^T P Φ x(k)
                                + x^T(k) Φ^T P Γ u(k) + u^T(k) Γ^T P Γ u(k) ]
= 0    (5.10)

Since (5.10) is zero, it can be added to the cost function in (5.9), i.e.

J = x^T(0) P x(0) + Σ_{k=0}^{∞} [ x^T(k)[Φ^T P Φ − P + Q] x(k) + u^T(k) Γ^T P Φ x(k)
                                  + x^T(k) Φ^T P Γ u(k) + u^T(k)[R + Γ^T P Γ] u(k) ]

The equation above can be arranged into

J = x^T(0) P x(0) + Σ_{k=0}^{∞} [u(k) + K_LQ x(k)]^T [R + Γ^T P Γ] [u(k) + K_LQ x(k)]    (5.11)

where

K_LQ = [R + Γ^T P Γ]^{-1} Γ^T P Φ    (5.12)
Φ^T P Φ − P + Q = Φ^T P Γ [R + Γ^T P Γ]^{-1} Γ^T P Φ    (5.13)
where (5.12) is the full state feedback gain obtained by the LQ method, and (5.13) is<br />
called the discrete-time algebraic Riccati equation (DARE). The cost function in (5.11) is<br />
minimized if<br />
u(k) = −K LQ x(k)<br />
and the minimal cost is<br />
J o = x(0) T Px(0)<br />
[Summary] For a controllable system

x(k+1) = Φ x(k) + Γ u(k)
y(k) = H x(k)
x(0) = x_0

the state feedback controller that minimizes

J = Σ_{k=0}^{∞} [ x^T(k) Q x(k) + u^T(k) R u(k) ],  Q ≥ 0, R > 0
Figure 5.5: Matlab code for discrete-time LQ optimal control.
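The LQ gain can also be computed by iterating the Riccati recursion until it reaches the fixed point of the DARE. A pure-Python sketch for the system of [Example 5-5]; the weights Q = I and R = 1 are illustrative choices, not from the notes:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_T(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# [Example 5-5] plant; Q = I, R = 1 chosen for illustration
Phi = [[1.0, 1.0], [0.0, 1.0]]
Gam = [0.5, 1.0]
Q = [[1.0, 0.0], [0.0, 1.0]]
R = 1.0

def riccati_terms(P):
    """Return (Phi' P Phi, Gam' P Phi, R + Gam' P Gam) for the current P."""
    PPhi = mat_mul(P, Phi)
    GPPhi = [Gam[0] * PPhi[0][j] + Gam[1] * PPhi[1][j] for j in range(2)]
    s = R + Gam[0] * (P[0][0] * Gam[0] + P[0][1] * Gam[1]) \
          + Gam[1] * (P[1][0] * Gam[0] + P[1][1] * Gam[1])
    return mat_mul(mat_T(Phi), PPhi), GPPhi, s

# fixed-point iteration of the DARE:
# P <- Q + Phi'P Phi - Phi'P Gam (R + Gam'P Gam)^-1 Gam'P Phi
# (P stays symmetric, so Phi'P Gam is just GPPhi transposed)
P = [row[:] for row in Q]
for _ in range(500):
    T1, GPPhi, s = riccati_terms(P)
    P = [[Q[i][j] + T1[i][j] - GPPhi[i] * GPPhi[j] / s for j in range(2)]
         for i in range(2)]

T1, GPPhi, s = riccati_terms(P)
K = [GPPhi[j] / s for j in range(2)]   # K_LQ = (R + Gam'P Gam)^-1 Gam'P Phi
res = max(abs(Q[i][j] + T1[i][j] - GPPhi[i] * GPPhi[j] / s - P[i][j])
          for i in range(2) for j in range(2))
```

The control law is then u(k) = −K x(k), and `res` measures how well P satisfies the DARE.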
is

u(k) = −K_LQ x(k)

where

K_LQ = [R + Γ^T P Γ]^{-1} Γ^T P Φ

and P is the positive definite solution of

Φ^T P Φ − P + Q − Φ^T P Γ [R + Γ^T P Γ]^{-1} Γ^T P Φ = 0
which is called the discrete-time algebraic Riccati equation. The Riccati equation has such a solution as long as the state space equation is controllable.
5.5 Disturbance Observer<br />
Various useful control methods have been introduced in this lecture. The performance of these control methods, however, is highly dependent on the accuracy of the plant model, which is nearly impossible to obtain in practice. Most mechanical systems exhibit
5.5. DISTURBANCE OBSERVER 137<br />
nonlinear behavior due to Coulomb friction, input/output limitations 7 , actuator nonlinearity, and so on. Moreover, in many cases the high-frequency plant dynamics is neglected for the sake of simplicity in the design of control algorithms. Another practical challenge in control systems is disturbance. Mechanical systems interact with their environments, and thus they cannot be free from environmental disturbances.
Figure 5.6: An open-loop system with model discrepancy and disturbance.
Figure 5.6 shows a plant with a model discrepancy and a disturbance. The actual plant dynamics, G̃(z), is not necessarily the same as G(z), which is a mathematical (so-called nominal) model identified by a system identification process. Moreover, a disturbance, d, acts on the plant, so the output is affected by the disturbance. Note that the output is

y = G̃(z)[u + d]    (5.14)
Figure 5.7: A disturbance observer system.
In order to reject the disturbance and the model discrepancy, they should be estimated. Suppose that the nominal model is simulated simultaneously, as in Fig. 5.7. Then the simulated and actual outputs should match if d = 0 and G̃(z) = G(z). Subtracting the simulated output, ŷ = G(z)u, from the actual output, y, the effect of the disturbance and the model discrepancy can be estimated, i.e.

d̂ = Q(z) G^{-1}(z) [y − G(z)u]    (5.15)

where Q(z) is a filter introduced to make (5.15) realizable 8 . If G̃(z) = G(z), (5.15) reduces to d̂ = Q(z)d. Therefore, Q(z) should be designed such that Q(z) ≈ 1 over a large frequency range. Since (5.15) estimates the disturbance, it is often called a disturbance observer (DOB).
The estimated disturbance is fed back into the system in order to reject the actual disturbance, as shown in Fig. 5.8. Such a control method is called a DOB-based control
7 It is often called “saturation.”<br />
8 Recall that G^{-1}(z) is not realizable.
Figure 5.8: A disturbance-observer-based control system.
system. The closed-loop transfer functions from u_C and d to y are

y = G_u(z) u_C + G_d(z) d    (5.16)

where

G_u(z) = G̃(z)G(z) / ( G(z) + [G̃(z) − G(z)] Q(z) )    (5.17)
G_d(z) = G̃(z)G(z)[1 − Q(z)] / ( G(z) + [G̃(z) − G(z)] Q(z) )    (5.18)
Note that G_u(z) = G(z) and G_d(z) = 0 if Q(z) = 1. In other words, if Q(z) = 1 the disturbance is perfectly rejected and the system follows the nominal plant model. This makes the DOB-based control system attractive in practice, because it solves two major problems, the disturbance and the model discrepancy, at once.

The Q(z) filter, however, cannot be 1, because Q(z)G^{-1}(z) must be realizable. Alternatively, suppose that Q(z) is a low pass filter with a DC gain of 1; then G_u(z) ≈ G(z) and G_d(z) ≈ 0 at frequencies where Q(e^{jωT}) ≈ 1 + 0j. Among many choices, a typical Q(z) is obtained by converting

Q(s) = ( ω_b / (s + ω_b) )^p

by the pole-zero matching method. ω_b determines the bandwidth of the filter, and p can be selected such that Q(z)G^{-1}(z) is realizable.
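The pole-zero matching step can be sketched as follows: the continuous pole at s = −ω_b maps to z = e^{−ω_b T}, and the gain is fixed so that the DC gain is exactly one. A pure-Python sketch (the values ω_b = 20 rad/sec, T = 1 ms, and p below are illustrative choices):

```python
import math

def q_filter(omega_b, T, p):
    """Pole-zero matched discretization of Q(s) = (omega_b / (s + omega_b))^p.

    The continuous pole s = -omega_b maps to z = exp(-omega_b * T); the
    numerator is a constant chosen so that Q(z) = 1 at z = 1 (DC gain one).
    Returns (num, den) as polynomial coefficient lists in z.
    """
    a = math.exp(-omega_b * T)
    # Q(z) = ((1 - a) / (z - a))^p : expand (z - a)^p by the binomial theorem
    den = [math.comb(p, k) * (-a) ** k for k in range(p + 1)]
    num = [(1 - a) ** p]   # constant numerator => relative degree p
    return num, den

num, den = q_filter(omega_b=20.0, T=0.001, p=1)
dc = num[0] / sum(den)     # Q(z) evaluated at z = 1
```

A constant numerator gives Q(z) a relative degree of p, so choosing p at least as large as the relative degree of G(z) makes Q(z)G^{-1}(z) realizable.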
Moreover, note from (5.15) and (5.17) that the DOB-based control system is stable if

1. G(z) and G^{-1}(z) are both stable.
2. G(z) + [G̃(z) − G(z)]Q(z) = 0 does not possess any roots outside of the unit circle.
For the first condition, the nominal plant must be selected such that it possesses neither an unstable pole nor an unstable zero.
In order to satisfy the second condition, the Nyquist criterion can be applied. Namely, the Q(z) filter should be designed such that

[G̃(e^{jωT}) − G(e^{jωT})] / G(e^{jωT}) · Q(e^{jωT})

does not encircle −1 + 0j. In practice, however, checking the Nyquist plot is not convenient, and the bandwidth of Q(z), ω_b, is often adjusted by trial and error such that ω_b is as large as possible and the overall DOB-controlled system is stable.
[Example 5-8] Consider a plant

G̃(z) = 0.4 / (z − 0.7)

with a sampling period of 1 ms. A nominal model has been identified through the system identification process, i.e.

G(z) = 0.3 / (z − 0.6)

In order to compensate for the model discrepancy, a disturbance observer is designed with a Q(z) filter with a bandwidth of 20 rad/sec, i.e.

Q(z) = 0.06 / (z − 0.94)

Then the disturbance observer is
d̂ = Q(z) G^{-1}(z) [y − G(z)u]
  = 0.06/(z − 0.94) · (z − 0.6)/0.3 · [ y − 0.3/(z − 0.6) u ]
  = 1/(1 − 0.94z^{-1}) [ (0.2 − 0.12z^{-1}) y − 0.06z^{-1} u ]

The implementable form of the disturbance observer is

d̂(k) = 0.94 d̂(k−1) + 0.2 y(k) − 0.12 y(k−1) − 0.06 u(k−1)
The estimated disturbance is subtracted from the control signal, u_C(k), calculated by a feedback controller, C(z), i.e.

u(k) = u_C(k) − d̂(k)
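The loop of [Example 5-8] can be simulated directly with these difference equations. A pure-Python sketch (the constant command u_C = 1 and disturbance d = 0.5 are arbitrary illustrative values, and no outer feedback controller C(z) is included):

```python
# true plant G~(z) = 0.4/(z - 0.7):  y(k) = 0.7 y(k-1) + 0.4 [u(k-1) + d(k-1)]
# DOB:     d_hat(k) = 0.94 d_hat(k-1) + 0.2 y(k) - 0.12 y(k-1) - 0.06 u(k-1)
# control:     u(k) = u_C - d_hat(k)
u_C, d = 1.0, 0.5                 # arbitrary constant command and disturbance
y_prev = u_prev = d_hat = 0.0
for _ in range(1000):
    y = 0.7 * y_prev + 0.4 * (u_prev + d)                          # plant output
    d_hat = 0.94 * d_hat + 0.2 * y - 0.12 * y_prev - 0.06 * u_prev # DOB estimate
    u = u_C - d_hat                                                # next input
    y_prev, u_prev = y, u

# since Q(1) = 1, the steady-state output matches the *nominal* response
# G(1) u_C = 0.75 u_C, even though the true plant gain and d are different
```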
Figure 5.9: G_u(z) with and without disturbance observer. (Bode magnitude and phase of the actual plant dynamics, the nominal plant, and the DOB-controlled system over 10^0 to 10^3 rad/sec.)
Figure 5.10: G_d(z) with and without disturbance observer. (Bode magnitude and phase of the actual plant dynamics and the DOB-controlled system over 10^0 to 10^3 rad/sec.)