Draft Student Manual - Update 1 - DCE FEL ČVUT v Praze


PolyX, Ltd

E-mail: info@polyx.com

Support: support@polyx.com

Sales: sales@polyx.com

Web www.polyx.com

Tel. +420-603-844-561, +420-233-323-801

Fax +420-233-323-802

Jarni 4

Prague 6, 16000

Czech Republic

Polynomial Toolbox 3.0 Manual

© COPYRIGHT 2009 by PolyX, Ltd.

The software described in this document is furnished under a license agreement.

The software may be used or copied only under the terms of the license agreement.

No part of this manual may be photocopied or reproduced in any form without prior

written consent from PolyX, Ltd.

Printing history: September 2009. First printing.


Contents

1 Quick Start ...................................................................................................... 11

Initialization .................................................................................................................. 11

Help ............................................................................................................................... 11

How to define a polynomial matrix ............................................................................ 12

Simple operations with polynomial matrices ............................................................. 13

Addition, subtraction and multiplication ........................................................ 13

Entrywise and matrix division ........................................................................ 14

Operations with fractions ................................................................................ 14

Concatenation and working with submatrices ................................................ 15

Coefficients and coefficient matrices .............................................................. 16

Conjugation and transposition ........................................................................ 16

Advanced operations and functions ........................................................................... 17

2 Polynomial matrices ....................................................................................... 19

Introduction .................................................................................................................. 19

Polynomials and polynomial matrices ........................................................................ 19

Entering polynomial matrices ..................................................................................... 19

The pol command ......................................................................................... 20

The Polynomial Matrix Editor ........................................................................ 20

The default indeterminate variable ................................................................. 20

Changing the default indeterminate variable .................................................. 21

Basic manipulations with polynomial matrices ......................................................... 22

Concatenation and working with submatrices ................................................ 22

Coefficients ..................................................................................................... 22

Degrees and leading coefficients .................................................................... 23

Constant, zero and empty matrices ................................................................. 24

Values ............................................................................................................. 24

Derivative and integral .................................................................................... 24

Arithmetic operations on polynomial matrices ......................................................... 25

Addition, subtraction and multiplication ........................................................ 25

Determinants, unimodularity and adjoints ...................................................... 26


Rank ................................................................................................................ 28

Bases and null spaces ...................................................................................... 29

Roots and stability ........................................................................................... 30

Special constant matrices related to polynomials ........................................... 32

Divisors and multiples.................................................................................................. 33

Scalar divisor and multiple ............................................................................. 33

Division ........................................................................................................... 34

Division with remainder ................................................................................. 34

Greatest common divisor ................................................................................ 35

Coprimeness .................................................................................................... 35

Least common multiple ................................................................................... 36

Matrix divisors and multiples ......................................................................... 36

Matrix division ................................................................................................ 36

Matrix division with remainder ...................................................................... 37

Greatest common left divisor .......................................................................... 38

Least common right multiple .......................................................................... 39

Dual concepts .................................................................................................. 40

Transposition and conjugation ................................................................................... 40

Transposition ................................................................................................... 40

Complex coefficients ...................................................................................... 41

Conjugated transposition ................................................................................ 41

Reduced and canonical forms ..................................................................................... 42

Row degrees .................................................................................................... 43

Row and column reduced matrices ................................................................. 43

Row reduced form ........................................................................................... 44

Triangular and staircase form ......................................................................... 44

Another triangular form .................................................................................. 45

Hermite form ................................................................................................... 45

Echelon form ................................................................................................... 46

Smith form ...................................................................................................... 47

Invariant polynomials ..................................................................................... 47

Polynomial matrix equations ...................................................................................... 48

Diophantine equations .................................................................................... 48

Bézout equations ............................................................................................. 50


Matrix polynomial equations .......................................................................... 50

One-sided equations ........................................................................................ 51

Two-sided equations ....................................................................................... 51

Factorizations ............................................................................................................... 53

Symmetric polynomial matrices ..................................................................... 53

Zeros of symmetric matrices ........................................................................... 54

Spectral factorization ...................................................................................... 57

Non-symmetric factorization .......................................................................... 59

Matrix pencil routines.................................................................................................. 61

Transformation to Kronecker canonical form ................................................. 61

Transformation to Clements form ................................................................... 63

Pencil Lyapunov equations ............................................................................. 63

3 Discrete-time and two-sided polynomial matrices ........................................ 65

Introduction .................................................................................................................. 65

Basic operations ............................................................................................................ 65

Two-sided polynomials ................................................................................... 65

Leading and trailing degrees and coefficients ................................................ 66

Derivatives and integrals ................................................................................. 66

Roots and stability ........................................................................................... 67

Norms .............................................................................................................. 68

Conjugations ................................................................................................................. 68

Conjugate transpose ........................................................................................ 68

Symmetric two-sided polynomial matrices................................................... 69

Zeros of symmetric two-sided polynomial matrices ....................................... 71

Matrix equations with discrete-time and two-sided polynomials ............................ 74

Symmetric bilateral matrix equations ............................................................. 75

Non-symmetric equation ................................................................................. 76

Discrete-time spectral factorization ................................................................ 77

Resampling ................................................................................................................... 78

Sampling period .............................................................................................. 78

Resampling of polynomials in z^-1 .................................................. 80

Resampling and phase ..................................................................................... 80

Resampling of two-sided polynomials ............................................................ 81


Resampling of polynomials in z ...................................................................... 81

Dilating ........................................................................................................... 81

4 The Polynomial Matrix Editor ...................................................................... 83

Introduction .................................................................................................................. 83

Quick start .................................................................................................................... 83

Main window ................................................................................................................ 84

Main window buttons ..................................................................................... 84

Main window menus ....................................................................................... 84

Matrix Pad window ...................................................................................................... 85

Matrix Pad buttons .......................................................................................... 86

5 Polynomial matrix fractions .......................................................................... 88

Introduction .................................................................................................................. 88

Scalar-denominator-fractions ..................................................................................... 88

The sdf command ......................................................................................... 90

Comparison of fractions .................................................................................. 91

Coprime ........................................................................................................... 91

Reduce ............................................................................................................. 92

General properties cop, red ...................................................................... 93

Reverse ............................................................................................................ 93

Matrix-denominator-fractions .................................................................................... 95

Coprime, reduce .............................................................................................. 96

The mdf command ......................................................................................... 98

Left and right polynomial matrix fractions ............................................................... 98

Right-denominator-fraction ............................................................................ 99

The rdf command ......................................................................................... 99

Left-denominator-fraction ............................................................................. 100

The ldf command ....................................................................................... 100

Coprime, reduce ............................................................................................ 100

Mutual conversions of polynomial matrix fraction objects .................................... 101

Operations with polynomial matrix fractions ......................................................... 103

Addition, subtraction and multiplication ...................................................... 103


Entrywise and matrix division ...................................................................... 106

Concatenation and working with submatrices .............................................. 107

Coefficients and coefficient matrices ............................................................ 108

Conjugation and transposition ...................................................................... 108

Values ........................................................................................................... 109

Derivative ...................................................................................................... 110

Composition .................................................................................................. 111

Fractions with complex polynomials ............................................................ 112

Signals and transforms .............................................................................................. 112

Fractions as signals ....................................................................................... 112

Properness ..................................................................................................... 113

Laurent series ................................................................................................ 113

Coefficients ................................................................................................... 115

Laplace transform ........................................................................... 115

Norms ............................................................................................................ 116

Sampling, unsampling and resampling .................................................................... 117

Sampling ....................................................................................................... 117

Result of sampling ........................................................................................ 120

Unsampling ................................................................................................... 121

Case of complex result .................................................................................. 122

Case of zero sampling period ........................................................................ 122

Continuous-time and discrete-time indeterminate variable .......................... 123

Change sampling period and phase .................................................. 123

Resampling ................................................................................................... 124

6 LTI Systems .................................................................................................. 126

Introduction ................................................................................................................ 126

Continuous-time state space equations .................................................................... 126

State space equations .................................................................................... 126

Generalized state space equations ................................................................. 127

Descriptor systems ........................................................................................ 127

Input-output equations, transfer function fractions ............................................... 128

Matrix denominator fraction ......................................................................... 128

Scalar denominator fraction .......................................................................... 129

Left denominator fraction ............................................................................. 129


Right denominator fraction ........................................................................... 129

State space and fraction conversions ........................................................................ 130

State space to fractions .................................................................................. 130

Fraction to State space .................................................................................. 131

Example of uncontrollable system ................................................................ 132

Example of unobservable system .................................................................. 133

Nonproper fractions ...................................................................................... 134

Generalized state space systems and descriptor systems .............................. 135

Discrete-time state space equations .......................................................................... 136

Discrete-time state space equations .............................................................. 136

Generalized state space equations ................................................................. 137

Descriptor systems ........................................................................................ 137

Discrete-time input-output equations, discrete-time transfer function fractions . 137

Fractions and Control System Toolbox LTI objects ............................................... 139

Example ........................................................................................................ 139

Conversion to ss ........................................................................................... 140

Conversion to dss ......................................................................................... 141

Conversion to tf ............................................................................................ 142

Conversion to zpk ........................................................................................ 142

Converting nonproper fractions .................................................................... 142

Conversion from LTI objects to fractions .................................................... 144

Logical relations with different objects ........................................................ 145

Arithmetical operations with different objects............................................ 146

Fractions and Symbolic Math Toolbox objects ....................................................... 148

sym ................................................................................................................ 148

Conversion from symbolic objects to fractions ............................................ 148

Logical and arithmetical operations .............................................................. 149

Sampling with holding ............................................................................................... 149

Zero order holding ........................................................................................ 150

First order holding ......................................................................................... 151

Second order holding .................................................................................... 153

Unsamph ....................................................................................................... 154

Resampling with holding ........................................................................................... 155


Zero order holding ........................................................................................ 155

First order holding ......................................................................................... 156

Second order holding .................................................................................... 157

7 Control Systems Design ............................................................................... 159

Introduction ................................................................................................................ 159

Basic control routines ................................................................................................ 159

Introduction ................................................................................................... 159

Stabilization .................................................................................................. 160

Youla-Kucera parametrization ...................................................................... 160

Example ........................................................................................................ 161

Example ........................................................................................................ 164

Pole placement .............................................................................................. 166

Example ........................................................................................................ 167

Deadbeat controller ....................................................................................... 169

Example ........................................................................................................ 169

Example ........................................................................................................ 172

H-2 optimization ......................................................................................................... 173

Introduction ................................................................................................... 173

Scalar case ..................................................................................................... 174

Example ........................................................................................................ 174

MIMO case ................................................................................................... 175

Examples ....................................................................................................... 176

State space design ........................................................................................................ 178

Introduction ................................................................................................... 178

Eigenstructure assignment ............................................................................ 179

Example ........................................................................................................ 179

Linear quadratic regulator ............................................................................. 180

Example ........................................................................................................ 181

Linear Gaussian filter .................................................................................... 182

Descriptor system design ............................................................................................. 184

Introduction ................................................................................................... 184

Regularization of a descriptor system ........................................................... 184

Examples ....................................................................................................... 185


8 Robust Control with parametric uncertainties ............................................ 189

Introduction ................................................................................................................ 189

Single parameter uncertainty .................................................................................... 189

Example 1 ..................................................................................................... 189

Example 2 ..................................................................................................... 191

Interval polynomials .................................................................................................. 195

Example 3 ..................................................................................................... 195

Example 4 ..................................................................................................... 196

Example 5 ..................................................................................................... 197

Polytopes of polynomials ........................................................................................... 199

Example 6 ..................................................................................................... 201

Example 7 ..................................................................................................... 204

General uncertainty structure .................................................................................. 207

Example 8 ..................................................................................................... 207

Example 9 ..................................................................................................... 209

Incorrect calls ................................................................................................ 210

Example 10 ................................................................................................... 210

Example 11 ................................................................................................... 211

Spherical uncertainty ................................................................................................. 212

Example 12 ................................................................................................... 212

Example 13 ................................................................................................... 213

9 Conclusions .................................................................................................. 215


1 Quick Start

Initialization

Every Polynomial Toolbox session starts with the initialization command pinit:

pinit

Polynomial Toolbox 3.0 initialized. To get started, type one of

these: helpwin or poldesk. For product information, visit

www.polyx.com or www.polyx.cz.

This function creates the global polynomial properties and assigns them their default values. If you place this command in your startup.m file, the Polynomial Toolbox is initialized automatically. If you include lines such as

path(path,'c:\Matlab\toolbox\polynomial')

pinit

in the startup.m file in the folder in which you start MATLAB then the toolbox is

automatically included in the search path and initialized each time you start

MATLAB.
Help

To see a list of all the commands and functions that are available type

help polynomial

To get help on any of the toolbox commands, such as axxab, type

help axxab

Some commands are overloaded, that is, several commands of the same name exist for different object classes. To get help on an overloaded command, you must also specify the object class of interest. So

help pol/rank

returns help on rank for pol objects (polynomial matrices) while

help frac/rank

returns help on rank for frac objects (polynomial fractions) etc.
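Independently of the toolbox help, standard MATLAB can list every overloaded implementation of a function visible on the search path; which with the -all option is a stock MATLAB command, not a toolbox feature:

```matlab
% List all implementations of rank on the path, including
% class-specific overloads such as pol/rank and frac/rank:
which rank -all
```

Each line of the output names one overload together with the class folder it lives in.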


How to define a polynomial matrix

Basic objects of the Polynomial Toolbox are polynomial matrices. They may be looked

at in two different ways.

You may directly enter a polynomial matrix by typing its entries. Upon initialization

the Polynomial Toolbox defines several indeterminate variables for polynomials and

polynomial matrices. One of them is s. Thus, you can simply type

P = [ 1+s s^2

2*s^3 1+2*s+s^2 ]

to define the polynomial matrix

P(s) = [ 1+s       s^2
         2s^3      1+2s+s^2 ]

MATLAB returns

P =

1 + s s^2

2s^3 1 + 2s + s^2

The Polynomial Toolbox displays polynomial matrices by default in this style.

We may also write the polynomial matrix P in terms of its coefficient matrices as

P(s) = [ 1  0 ]   [ 1  0 ]     [ 0  1 ]       [ 0  0 ]
       [ 0  1 ] + [ 0  2 ] s + [ 0  1 ] s^2 + [ 2  0 ] s^3

     = P0 + P1 s + P2 s^2 + P3 s^3


Polynomial matrices may be defined this way by typing

P0 = [ 1 0; 0 1 ];

P1 = [ 1 0; 0 2 ];

P2 = [ 0 1; 0 1 ];

P3 = [ 0 0; 2 0 ];

P = pol([P0 P1 P2 P3],3)

MATLAB again returns

P =

1 + s s^2

2s^3 1 + 2s + s^2

The display format may be changed to "coefficient matrix style" by the command

pformat coef


Typing the name of the matrix

P

now results in

Polynomial matrix in s: 2-by-2, degree: 3

P =

Matrix coefficient at s^0 :

1 0

0 1

Matrix coefficient at s^1 :

1 0

0 2

Matrix coefficient at s^2 :

0 1

0 1

Matrix coefficient at s^3 :

0 0

2 0
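The display can be switched back afterwards by calling pformat again with the default style. The argument name symb is an assumption here, based on the toolbox's default symbolic display; check help pformat in your installation:

```matlab
% Assumption: 'symb' selects the default symbolic display style
pformat symb
P
```

P then prints again in the symbolic style shown at the beginning of this section.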

Simple operations with polynomial matrices

Addition, subtraction and multiplication

In the Polynomial Toolbox polynomial matrices are objects for which all standard

operations are defined.

Define the polynomial matrices

P = [ 1+s 2; 3 4], Q = [ s^2 1-s; s^3 1]

P =
1 + s     2
3         4
Q =
s^2    1 - s
s^3    1

The sum and product of P and Q follow easily:

S = P+Q
S =
1 + s + s^2    3 - s
3 + s^3        5
R = P*Q
R =
s^2 + 3s^3     3 - s^2
3s^2 + 4s^3    7 - 3s

Entrywise division of polynomial matrices P and Q yields a fraction

T = P./Q

results in

T =

1 + s 2

----- -----

s^2 1 - s

3 4

--- -

s^3 1

Right matrix division yields the fraction U(s) = P(s) Q^-1(s):

U = P/Q

U =

1 + s 2 / s^2 1 - s

3 4 / s^3 1

Left matrix division yields the fraction V(s) = Q^-1(s) P(s):

V = Q\P

V =

s^2 1 - s \ 1 + s 2

s^3 1 \ 3 4

Operations with fractions

Polynomial fractions may also enter arithmetic operations, possibly mixed with numbers or polynomials, such as

2*U

ans =

2 + 2s   4   /   s^2   1 - s
6        8   /   s^3   1

or

P+T

ans =

1 + s + s^2 + s^3   -4 + 2s
-----------------   -------
s^2                 -1 + s

3 + 3s^3            8
--------            -
s^3                 1

In the realm of fractions, inverses are possible:

T.^-1

ans =

s^2 0.5 - 0.5s

----- ----------

1 + s 1

0.33s^3 0.25

------- ----

1 1

U^-1

ans =

s^2 1 - s / 1 + s 2

s^3 1 / 3 4

Concatenation and working with submatrices

All standard MATLAB operations for concatenating matrices and selecting submatrices also apply to polynomial matrices. Typing

W = [P Q]

results in

W =

1 + s 2 s^2 1 - s

3 4 s^3 1

The last row of W may be selected by typing

w = W(2,:)

w =

3 4 s^3 1

Submatrices may be assigned values by commands such as

W(:,1:2) = eye(2)

W =

1 0   s^2   1 - s
0 1   s^3   1

For fractions, similarly:

T(1,:)

ans =

1 + s   -2
-----   ------
s^2     -1 + s

Coefficients and coefficient matrices

It is easy to extract the coefficient matrices of a polynomial matrix. Given W, the

coefficient matrix of s 2 may be retrieved as

W{2}

ans =

0 0 1 0

0 0 0 0

The coefficients of the (1,3) entry of W follow as

W{:}(1,3)

ans =

0 0 1 0

Conjugation and transposition

Given

W

W =

1 0 s^2 1 - s

0 1 s^3 1

the standard MATLAB conjugation operation results in

W'

ans =

1 0

0 1

s^2 -s^3

1 + s 1

Transposition follows by typing

W.'


ans =

1 0

0 1

s^2 s^3

1 - s 1

The command transpose(W) is synonymous with W.'

Advanced operations and functions

The Polynomial Toolbox knows many advanced operations and functions. After

defining

P = [ 1+s 2; 3 4+s^2 ]

try a few by typing for instance det(P), roots(P), or smith(P).

The commands may be grouped in the categories listed in Table 1. A full list of all

commands is available in the companion volume Commands. The same list appears

after typing

help polynomial

in MATLAB.

Table 1. Command categories

Global structure Equation solvers

Polynomial, Two-sided polynomial and Fraction objects   Linear matrix inequality functions

Special matrices Matrix pencil routines

Convertors Numerical routines

Overloaded operations Control routines

Overloaded functions Robustness functions

Basic functions (other than overloaded) 2-D functions

Random functions Simulink


Advanced functions Visualization

Canonical and reduced forms Graphic user interface

Sampling period functions Demonstrations and help


2 Polynomial matrices

Introduction

In this chapter we review in a tutorial style many of the functions and operations defined for polynomials and polynomial matrices. This exposition continues in Chapter 3, Discrete-time and two-sided polynomial matrices. Chapter 4 is devoted to the next class of objects, polynomial matrix fractions. Functions and operations for linear time-invariant systems defined by polynomial matrix fractions are discussed in Chapter 6, LTI systems. Chapter 7, Control system design, covers the applications of polynomial matrices in control system design.

More detailed information is available in Chapter 12, Objects, properties, formats and

in the manual pages included in the companion volume Commands.

Polynomials and polynomial matrices

Polynomials are mathematical expressions of the form

P(s) = P0 + P1 s + P2 s^2 + ... + Pn s^n

where P0, P1, P2, ..., Pn are real or complex numbers, the coefficients, and s is the indeterminate variable.

Matrix polynomials are of the same form but with P0, P1, P2, ..., Pn numerical matrices, the coefficient matrices. Alternatively and usually, these objects can be viewed as polynomial matrices, i.e. matrices whose individual entries are polynomials. These two concepts will be used interchangeably.

For system and control engineers, s may be treated as the derivative operator d/dt acting on continuous-time signals x(t). It can also be considered as the complex variable of the Laplace transform.

Entering polynomial matrices

Polynomials and polynomial matrices are most easily entered using one of the

indeterminate variables s, p that are recognized by the Polynomial Toolbox, combined

with the usual MATLAB conventions for entering matrices. Thus, typing

P = [ 1+s 2*s^2

2+s^3 4 ]


defines the matrix

P(s) = [ 1+s     2s^2
         2+s^3   4 ]

and returns

P =

1 + s 2s^2

2 + s^3 4

For other available indeterminate variables such as z, q, z^-1 or d see Chapter 3,

Discrete-time and two-sided polynomial matrices.
The pol command

Polynomials and polynomial matrices may also be entered in terms of their

coefficients or coefficient matrices. For this purpose the pol command is available.

Typing

P0 = [1 2;3 4];

P1 = [3 4;5 1];

P2 = [1 0;0 1];

P = pol([P0 P1 P2],2,'s')

for instance, defines the polynomial matrix

P(s) = P0 + P1 s + P2 s^2

and MATLAB returns

P =

1 + 3s + s^2   2 + 4s
3 + 5s         4 + s + s^2
The Polynomial Matrix Editor

More complicated polynomial matrices may be entered and edited with the help of the

Polynomial Matrix Editor (see Chapter 4).

Note that if any of the default indeterminates is redefined as a variable then it is no

longer available as an indeterminate. Typing

s = 1;

P = 1+s^2

results in


P =

2

To free the variable simply type

clear s;

P = 1+s^2

P =

1 + s^2
The default indeterminate variable

After the Polynomial Toolbox has been started up the default indeterminate variable

is s. This implies, among other things, that the command

P = pol([P0 P1 P2],2)

returns a polynomial matrix

P =

1 + 3s + s^2 2 + 4s

3 + 5s 4 + s + s^2

in the indeterminate s.

Changing the default indeterminate variable

The indeterminate variable may be changed with the help of the gprops command: typing

gprops p; pol([P0 P1 P2],2)

for instance, results in

ans =

1 + 3p + p^2 2 + 4p

3 + 5p 4 + p + p^2

To enter data independently of the current default indeterminate variable, one can

use a shortcut v: The indeterminate variable v is automatically replaced by the

current default indeterminate variable. Thus, after the default indeterminate

variable has been set to p by the command

gprops p

then

V = 1+v^2+3*v^3

returns

V =

1 + p^2 + 3p^3


Basic manipulations with polynomial matrices

Concatenation and working with submatrices

Standard MATLAB conventions may be used to concatenate polynomial and standard

matrices:

P = 1+s^2;

Q = [P 1+s 3]

results in

Q =

1 + s^2 1 + s 3

Submatrices may be selected such as in

Q(1,2:3)

ans =

1 + s 3

Submatrices may also be changed; the command

Q(1,2:3) = [1-s 4]

yields

Q =

1 + s^2 1 - s 4

All usual MATLAB subscripting and colon notations are available. Commands like sum, prod, repmat, reshape, tril, triu, diag and trace also work as can be expected.

Coefficients

It is easy to extract the coefficients of a polynomial. Given

R=2+3*s+4*s^2+5*s^3;

the coefficient of s^2 may be retrieved as

R{2}

ans =

4

The coefficients can also be changed. The command

R{2}=6

yields



R =

2 + 3s + 6s^2 + 5s^3

In this way, new coefficients may be inserted. Thus

R{5}=7

yields

R =

2 + 3s + 6s^2 + 5s^3 + 7s^5

For polynomial matrices, it works similarly. Given

T = [ 1+s 2 s^2 s

3 4 s^3 0 ];

the coefficient matrix of s^2 may be retrieved as

T{2}

ans =

0 0 1 0

0 0 0 0

Taking coefficients and taking submatrices may be combined. So the coefficients of

the (1,3) entry of T follow as

T{:}(1,3)

ans =

0 0 1 0
Degrees and leading coefficients

The degree of a polynomial matrix is available by

deg(T)

ans =

3

The leading coefficient matrix, in our example that of s^3, may be obtained as

lcoef(T)

ans =

0 0 0 0
0 0 1 0


Constant, zero and empty matrices

A special case of polynomial matrix is that of degree 0; in effect, it is just a standard

MATLAB matrix. The Polynomial Toolbox treats both these forms of data

interchangeably: when a polynomial is expected, the standard matrix is also

accepted. For explicit conversions, pol and double commands are available:

P = pol([1 2;3 4]);

D = double(P);

Another special case is the zero polynomial; its degree is -Inf. Still another case is the empty polynomial matrix, similar to the empty standard matrix. Its degree is empty.
Values

A polynomial may also be considered as a function of a real or complex variable. For

evaluating the polynomial function, the value command is available:

F=2+3*s+4*s^2; X=2;

Y=value(F,X)

Y =

24

When F or X is a matrix then the values are evaluated entrywise:

X=[2 3];

Y=value(F,X)

Y =

24 47

A slightly different command is mvalue. For a scalar polynomial F and a square matrix X, it

computes the matrix value Y according to the matrix algebra rules:

X=[1 2;0 1];

Y=mvalue(F,X)

Y =

9 22

0 9
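The rule mvalue applies is simply substitution of the matrix X into the polynomial, with powers taken in the matrix sense. A small NumPy illustration of the same computation (an analogy, not the Toolbox code):

```python
import numpy as np

# Matrix value of F(s) = 2 + 3s + 4s^2 at X by the matrix-algebra rules:
#   F(X) = 2*I + 3*X + 4*X^2   (cf. mvalue above)
X = np.array([[1, 2], [0, 1]])
Y = 2 * np.eye(2) + 3 * X + 4 * (X @ X)
```

The result reproduces the [9 22; 0 9] printed by mvalue above.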

Derivative and integral

For a polynomial matrix function F(s), the derivative and the integral are computed by

deriv(F)

ans =


3 + 8s

integral(F)

ans =

2s + 1.5s^2 + 1.3s^3

Arithmetic operations on polynomial matrices

Addition, subtraction and multiplication

In operations with two or more polynomial matrices, the indeterminates should be

the same. If they are not (usually by mistake), a warning is issued and the

indeterminate is changed to the default.

Define the polynomial matrices

P = [1+s 2; 3 4], Q = [s^2 s; s^3 0]

P =

1 + s   2
3       4

Q =

s^2   s
s^3   0

The sum and product of P and Q follow easily:

S = P+Q

S =

1 + s + s^2   2 + s
3 + s^3       4

R = P*Q

R =

s^2 + 3s^3    s + s^2
3s^2 + 4s^3   3s

The command

R+3

ans =

3 + s^2 + 3s^3    3 + s + s^2
3 + 3s^2 + 4s^3   3 + 3s

is obviously interpreted as the instruction to add the constant 3 to each entry of R. The command

3*R

ans =

3s^2 + 9s^3    3s + 3s^2
9s^2 + 12s^3   9s

yields the expected result, as do entrywise multiplication R.*S and powers R^N or R.^N with nonnegative integer N. For negative N, see Chapter 4, Polynomial matrix fractions.
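The product of two polynomial matrices amounts to convolving their coefficient-matrix sequences. A Python/NumPy sketch of this rule, reproducing R = P*Q from the example above (an illustration only; the Toolbox performs this internally, and the helper polm_mul is hypothetical):

```python
import numpy as np

def polm_mul(P, Q):
    """Multiply polynomial matrices stored as 3-D arrays.

    P has shape (dP+1, m, k): P[i] is the matrix coefficient of s^i,
    mirroring the Toolbox's pol([P0 P1 ...], dP) representation.
    """
    dP, dQ = P.shape[0] - 1, Q.shape[0] - 1
    m, n = P.shape[1], Q.shape[2]
    R = np.zeros((dP + dQ + 1, m, n))
    for i in range(dP + 1):          # convolve the coefficient sequences:
        for j in range(dQ + 1):      # coefficient of s^(i+j) accumulates Pi @ Qj
            R[i + j] += P[i] @ Q[j]
    return R

# P = [1+s 2; 3 4],  Q = [s^2 s; s^3 0]
P = np.array([[[1, 2], [3, 4]],      # coefficient of s^0
              [[1, 0], [0, 0]]])     # coefficient of s^1
Q = np.zeros((4, 2, 2))
Q[1, 0, 1] = 1                       # s   in entry (1,2)
Q[2, 0, 0] = 1                       # s^2 in entry (1,1)
Q[3, 1, 0] = 1                       # s^3 in entry (2,1)

R = polm_mul(P, Q)
```

The coefficients of R match the displayed product [s^2+3s^3, s+s^2; 3s^2+4s^3, 3s].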

Determinants, unimodularity and adjoints

The determinant of a square polynomial matrix is defined exactly as its constant matrix counterpart. In fact, its computation is not much more difficult:

P = [1 s s^2; 1+s s 1-s; 0 -1 -s]

P =

1       s    s^2
1 + s   s    1 - s
0       -1   -s

det(P)

ans =

1 - s - s^2

If its determinant happens to be constant then the polynomial matrix is called

unimodular:

U=[2-s-2*s^2, 2-2*s^2, 1+s; 1-s-s^2, 1-s^2, s;-1-s, -s, 1]

U =

2 - s - 2s^2   2 - 2s^2   1 + s
1 - s - s^2    1 - s^2    s
-1 - s         -s         1

det(U)

Constant polynomial matrix: 1-by-1

ans =

1

If a matrix is suspected of unimodularity then one can make sure with a special tester:


isunimod(U)

ans =

1

Also the adjoint matrix is defined as for constant matrices. The adjoint is a

polynomial matrix and may be computed by typing

adj(P)

ans =

1 - s - s^2 0 s - s^2 - s^3

s + s^2 -s -1 + s + s^2 + s^3

-1 - s 1 -s^2

Quite to the contrary, the inverse of a square polynomial matrix is usually not a

polynomial but a polynomial fraction

inv(P)

ans =

-1 + s + s^2 0 -s + s^2 + s^3

-s - s^2 s 1 - s - s^2 - s^3

1 + s -1 s^2

----------------------------------------

-1 + s + s^2

For more details, see Chapter 4, Polynomial matrix fractions.

The only matrices that have a polynomial inverse are the unimodular matrices. In our example

inv(U)

ans =

1    -2 - s + s^2   -1 + s + s^2 - s^3
-1   3 + s - s^2    1 - 2s - s^2 + s^3
1    -2 + s^2       s - s^3

Indeed,

U*inv(U)

ans =

1.0000   0        0
0        1.0000   0
0        0        1.0000



If the matrix is nonsquare but has full rank then a usual partial replacement for the

inverse is provided by the generalized Moore-Penrose pseudoinverse, which is

computed by the pinv function.

Q = P(:,1:2)

Q =

1 s

1 + s s

0 -1

Qpinv = pinv(Q)

Qpinv =

1 - s^3 1 + s + s^3 2s + s^2

s^2 + s^3 -s^2 -2 - 2s - s^2

------------------------------------------

2 + 2s + s^2 + s^4

In general, the generalized Moore-Penrose pseudoinverse is a polynomial matrix

fraction. Once again,

Qpinv*Q

ans =

1.0000 0

0 1.0000
Rank

A polynomial matrix P(s) has full column rank (or full normal column rank) if it has full column rank everywhere in the complex plane except at a finite number of points. Similar definitions hold for full row rank and full rank.

Recall that

P = [1 s s^2; 1+s s 1-s; 0 -1 -s]

P =

1       s    s^2
1 + s   s    1 - s
0       -1   -s

The rank test

isfullrank(P)


ans =

1

confirms that P has full rank.

The normal rank of a polynomial matrix P(s) equals

    max   rank P(s)
   s in C

Similar definitions apply to the notions of normal column rank and normal row rank.

The rank is calculated by

rank(P)

ans =

3

As for constant matrices, rank evaluation may be quite sensitive and an ad hoc

change of tolerance (which may be included as an optional input parameter) may be

helpful for difficult examples.

A polynomial matrix is nonsingular if it has full normal rank.

issingular(P)

ans =

0
Bases and null spaces

There are two important subspaces (more precisely, submodules) associated with a polynomial matrix A(s): its null space and its range (or span). The (right) null space is defined as the set of all polynomial vectors x(s) such that A(s) x(s) = 0. For the matrix
A = P(1:2,:)

A =

1 s s^2

1 + s s 1 - s

the (right) nullspace is computed by

N = null(A)

N =

0.35s - 0.35s^2 - 0.35s^3

-0.35 + 0.35s + 0.35s^2 + 0.35s^3

-0.35s^2

Here the null space dimension is 1 and its basis has degree 3.


The range of A(s) is the set of all polynomial vectors y(s) such that y(s) = A(s) x(s) for some polynomial vector x(s). In the Polynomial Toolbox, the minimal basis of the range is returned by the command

minbasis(A)
ans =

0 1.0000

1.0000 0
Roots and stability

The roots or zeros of a polynomial matrix P(s) are those points s_i in the complex plane where P(s) loses rank.

roots(P)

ans =

-1.6180

0.6180

The roots can be both finite and infinite. The infinite roots are normally suppressed.

To reveal them, type

roots(P,'all')

ans =

-1.6180

0.6180

Inf

Unimodular matrices have no finite roots:

roots(U)

ans =

Empty matrix: 0-by-1

but typically have infinite roots:

roots(U,'all')

ans =

Inf

Inf

Inf


If P(s) is square then its roots are the roots of its determinant det P(s), including multiplicity:

roots(det(P))

ans =

-1.6180

0.6180
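For this example the equivalence is easy to check numerically: det P(s) = 1 - s - s^2, and its two roots are the golden-ratio pair printed above. A NumPy check (an illustration only, not the Toolbox's roots routine):

```python
import numpy as np

# det P(s) = 1 - s - s^2, written with descending coefficients for np.roots
detP = [-1, -1, 1]
r = np.sort(np.roots(detP))
# r is approximately [-1.6180, 0.6180], matching roots(P) above
```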

The finite roots may be visualized as in Fig. 1 by typing

zpplot(P)

Fig. 1. Locations of the roots of a polynomial matrix

A polynomial matrix is stable if all its roots fall within a relevant stability region. For polynomial matrices in s and p the stability region is the open left half-plane; this corresponds to the Hurwitz stability used for continuous-time systems.

The macro isstable checks stability:

isstable(s-2)

ans =

0

isstable(s+2)

ans =

1

Special constant matrices related to polynomials

For other stability regions considered in the Polynomial Toolbox, see Chapter 3,

Discrete-time and two-sided polynomial matrices.

There are several interesting constant matrices that are composed from the

coefficients of polynomials (or the matrix coefficients of polynomial matrices) and are

frequently encountered in mathematical and engineering textbooks. Given a

polynomial

p(v) = p0 + p1 v + p2 v^2 + ... + pn v^n

of degree n we may for instance define the corresponding n-by-n Hurwitz matrix

        [ p(n-1)  p(n-3)  p(n-5)  ...    0  ]
        [ p(n)    p(n-2)  p(n-4)  ...    0  ]
H(p) =  [ 0       p(n-1)  p(n-3)  ...    0  ]
        [ 0       p(n)    p(n-2)  ...    0  ]
        [ ...                                ]
        [ 0       0       0       ...  p(0) ]

a k-by-(n+k) Sylvester matrix (for some k >= 1)

        [ p(0)  p(1)  ...   p(n)  0     ...   0   ]
S(p) =  [ 0     p(0)  p(1)  ...   p(n)  ...   0   ]
        [ ...                                     ]
        [ 0     ...   0     p(0)  p(1)  ...  p(n) ]

or an n-by-n companion matrix

        [ 0           1           0           ...  0             ]
        [ 0           0           1           ...  0             ]
C(p) =  [ ...                                                    ]
        [ 0           0           0           ...  1             ]
        [ -p(0)/p(n)  -p(1)/p(n)  -p(2)/p(n)  ...  -p(n-1)/p(n)  ]
Using the Polynomial Toolbox, we take

p = pol([-1 1 2 3 4 5],5)

p =

-1 + s + 2s^2 + 3s^3 + 4s^4 + 5s^5



and simply type

hurwitz(p)

ans =

4  2  -1   0   0
5  3   1   0   0
0  4   2  -1   0
0  5   3   1   0
0  0   4   2  -1

or

sylv(p,3)

ans =

-1  1  2  3  4  5  0  0  0
 0 -1  1  2  3  4  5  0  0
 0  0 -1  1  2  3  4  5  0
 0  0  0 -1  1  2  3  4  5

or

compan(p)

ans =

0        1.0000   0        0        0
0        0        1.0000   0        0
0        0        0        1.0000   0
0        0        0        0        1.0000
0.2000  -0.2000  -0.4000  -0.6000  -0.8000
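The companion matrix is the bridge between polynomials and eigenvalue computations: its eigenvalues are the roots of p. A NumPy sketch reproducing the compan(p) output above (an illustration only, not the Toolbox routine):

```python
import numpy as np

# p(s) = -1 + s + 2s^2 + 3s^3 + 4s^4 + 5s^5 (ascending coefficients)
p = np.array([-1.0, 1.0, 2.0, 3.0, 4.0, 5.0])
n = len(p) - 1

C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)       # shifted identity block
C[-1, :] = -p[:-1] / p[-1]       # last row: -p_i / p_n

# The eigenvalues of C are the roots of p:
r = np.linalg.eigvals(C)
vals = np.polyval(p[::-1], r)    # p evaluated at the eigenvalues
```

The last row reproduces the 0.2000 -0.2000 -0.4000 -0.6000 -0.8000 displayed above.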

For polynomial matrices the block matrix versions are defined and computed in a fairly obvious manner.

Divisors and multiples

Scalar divisor and multiple

To understand when division of polynomial matrices is possible, begin with scalar polynomials. Consider three polynomials a(s), b(s) and c(s) such that a(s) = b(s) c(s).


Division

We say that b(s) is a divisor (or factor) of a(s), or that a(s) is a multiple of b(s). This is sometimes also stated as: b(s) divides a(s).

For example, take

b = 1-s; c = 1+s; a = b*c

a =

1 - s^2

As b(s) is a divisor of a(s), the division

a/b

can be done and results in

ans =

1 + s

If b(s) is not a divisor of a(s) then the result of division is not a polynomial but a polynomial fraction¹; see Chapter 4, Polynomial matrix fractions.

Division with remainder

On the other hand, any n(s) can be divided by any nonzero d(s) using another operation, called division with remainder. This division results in a quotient q(s) and a remainder r(s):

n(s) = q(s) d(s) + r(s)

Typically, it is required that deg r(s) < deg d(s), which makes the result unique. Thus,

dividing

n = (1-s)^2;

by

d = 1+s;

with the command

[q,r] = ldiv(n,d)

yields

q =

-3 + s

Constant polynomial matrix: 1-by-1

r =

4

Division with remainder is sometimes called Euclidean division.

¹ Users familiar with older versions of the Polynomial Toolbox (2.5 and below) should notice that the behavior of the slash and backslash operators has changed: the operators are now primarily used to create fractions and not only to extract factors as before. For deeper understanding, experiment or see Chapter 4, Polynomial matrix fractions, and Chapter 13, Compatibility with Version 2. To extract a factor directly, it is better to use equation solvers such as axb.

Greatest common divisor
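Division with remainder works just as it does for integers. As an illustration in Python/NumPy (descending-coefficient convention; this is an analogy, not the Toolbox's ldiv itself), the example above reads:

```python
import numpy as np

# n(s) = (1-s)^2 = s^2 - 2s + 1 divided by d(s) = s + 1,
# coefficients in descending powers of s:
n = [1, -2, 1]
d = [1, 1]
q, r = np.polydiv(n, d)   # quotient s - 3, remainder 4
```

The quotient s - 3 and remainder 4 match the ldiv output above.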

If a polynomial g(s) divides both a(s) and b(s) then g(s) is called a common divisor of a(s) and b(s). If, furthermore, g(s) is a multiple of every common divisor of a(s) and b(s) then g(s) is a greatest common divisor of a(s) and b(s).

Coprimeness

If the only common divisors of a(s) and b(s) are constants then the polynomials a(s) and b(s) are coprime (or relatively prime). To compute a greatest common divisor of a and b, type

a = s+s^2;

b = s-s^2;

gld(a,b)

ans =

s
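A greatest common divisor of two scalar polynomials can be obtained with the classical Euclidean algorithm. A minimal Python/NumPy sketch (the Toolbox's gld uses more robust numerical methods; this only illustrates the principle, and poly_gcd is a hypothetical helper):

```python
import numpy as np

def poly_gcd(a, b, tol=1e-9):
    """Monic gcd of two polynomials (descending coefficients) by Euclid."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    while not np.allclose(b, 0, atol=tol):
        _, r = np.polydiv(a, b)   # remainder of a divided by b
        a, b = b, r
    return a / a[0]               # normalize to a monic polynomial

# a = s + s^2 and b = s - s^2 have greatest common divisor s:
g = poly_gcd([1, 1, 0], [-1, 1, 0])
```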

Similarly, the polynomials

c = 1+s; d = 1-s;

are coprime, as

gld(c,d)

ans =

1

is a constant.

Coprimeness may also be tested directly

isprime([c,d])

ans =

1

As the two polynomials c and d are coprime, there exist two other polynomials e and f that satisfy the linear polynomial equation

c e + d f = 1

The polynomials e and f may be computed according to

[e,f] = axbyc(c,d,1)

e =


0.5000

f =

0.5000

We check the result by typing

c*e+d*f

ans =

1
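The pair e, f can also be produced by the extended Euclidean algorithm, which tracks the Bezout coefficients alongside the gcd. A Python/NumPy sketch under the descending-coefficient convention (an illustration only; axbyc solves the general matrix case by other means, and both helpers below are hypothetical):

```python
import numpy as np

def padd(p, q):
    """Add two polynomials in descending-coefficient form."""
    n = max(p.size, q.size)
    return np.pad(p, (n - p.size, 0)) + np.pad(q, (n - q.size, 0))

def poly_ext_gcd(a, b, tol=1e-9):
    """Return monic g and x, y with a*x + b*y = g (extended Euclid)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    x0, x1 = np.array([1.0]), np.array([0.0])
    y0, y1 = np.array([0.0]), np.array([1.0])
    while not np.allclose(b, 0, atol=tol):
        q, r = np.polydiv(a, b)
        a, b = b, r
        x0, x1 = x1, padd(x0, -np.polymul(q, x1))
        y0, y1 = y1, padd(y0, -np.polymul(q, y1))
    return a / a[0], x0 / a[0], y0 / a[0]

# c = 1 + s and d = 1 - s are coprime, so g = 1 with e = f = 0.5
g, e, f = poly_ext_gcd([1, 1], [-1, 1])
```

The computed pair e = f = 0.5 matches the axbyc output above.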

Least common multiple

If a polynomial m(s) is a multiple of both a(s) and b(s) then m(s) is called a common multiple of a(s) and b(s). If, furthermore, m(s) is a divisor of every common multiple of a(s) and b(s) then it is a least common multiple of a(s) and b(s):

m = llm(a,b)

m =

s - s^3

The concepts just mentioned are combined in the well-known fact that the product of

a greatest common divisor and a least common multiple equals the product of the two

original polynomials:

isequal(a*b,gld(a,b)*llm(a,b))

ans =

1

Matrix divisors and multiples

Next consider polynomial matrices A(s), B(s) and C(s) of compatible sizes such that A(s) = B(s) C(s). We say that B(s) is a left divisor of A(s), or A(s) is a right multiple of B(s). Take for instance

B = [1 s; 1+s 1-s], C = [2*s 1; 0 1], A = B*C

B =

1       s
1 + s   1 - s

C =

2s   1
0    1

A =

2s          1 + s
2s + 2s^2   2

Matrix division

As B(s) is a left divisor of A(s), the matrix left division

B\A

can be done and results in a polynomial matrix

ans =

2s 1

0 1

If B(s) is not a left divisor then the result of left division is not a polynomial matrix but a polynomial matrix fraction; see Chapter 4, Polynomial matrix fractions. This is also the case when B(s) is a divisor but from the other ("wrong") side:

A/B

ans =

1 + 3s 2s / 1 + s 1

2 + 2s + 2s^2 2s + 2s^2 / 2 1 + s

Matrix division with remainder

On the other hand, a polynomial matrix N(s) can be divided by any compatible nonsingular square matrix D(s) with a remainder, resulting in a matrix quotient Q(s) and a matrix remainder R(s) such that

N(s) = D(s) Q(s) + R(s)

If it is required that the rational matrix D^-1(s) R(s) is strictly proper then the division is unique. Thus, dividing a random matrix

N = prand(4,2,3,'ent','int')

N =

0 -3 + 11s -1 + s

5 - 4s^3 1 - 7s + 4s^2 + 8s^3 - 3s^4 4 + 6s

from the left by a random square matrix

D = prand(3,2,'ent','int')

D =

-7              3 - 2s + 3s^2
4 + 4s + 6s^2   0

results in

[Q,R] = ldiv(N,D)

Q =

0.44 - 0.67s   -0.11 + 1.7s - 0.5s^2   0
0              -1.2                    0

R =

3.1 - 4.7s    -0.28 + 20s   -1 + s
3.2 + 0.89s   1.4 - 13s     4 + 6s

Indeed,

deg(det(D))

ans =

4

while

deg(adj(D)*R,'ent')

ans =

3 3 3
3 3 3
so that each entry of the rational matrix D^-1(s) R(s) = (adj D) R / det D is strictly proper.

Greatest common left divisor

If a polynomial matrix G(s) is a left divisor of both A(s) and B(s) then G(s) is called a common left divisor of A(s) and B(s). If, furthermore, G(s) is a right multiple of every common left divisor of A(s) and B(s) then it is a greatest common left divisor of A(s) and B(s). If the only common left divisors of A(s) and B(s) are unimodular matrices then the polynomial matrices A(s) and B(s) are left coprime. To compute a greatest common left divisor of A and B, type

A = [1+s-s^2, 2*s + s^2; 1-s^2, 1+2*s+s^2]

A =

1 + s - s^2 2s + s^2

1 - s^2 1 + 2s + s^2

B = [2*s, 1+s^2; 1+s, s+s^2]

B =

2s      1 + s^2
1 + s   s + s^2

gld(A,B)

ans =

0       1
1 + s   0



Similarly, the polynomial matrices

C = [1+s, 1; 1-s 0], D = [1-s, 1; 1 1]

C =

1 + s   1
1 - s   0

D =

1 - s   1
1       1

are left coprime as

gld(D,C)

ans =

1 0

0 1

is obviously unimodular. You may also directly check

isprime([C,D])

ans =

1

As the two polynomial matrices C and D are left coprime, there exist two other polynomial matrices E and F that satisfy

C E + D F = I

They may be computed according to

[E,F] = axbyc(C,D,eye(2))

E =

0        0
1.0000   -1.0000

F =

0   0
0   1.0000

Least common right multiple

If a polynomial matrix M(s) is a right multiple of both A(s) and B(s) then M(s) is called a common right multiple of A(s) and B(s). If, furthermore, M(s) is a left divisor of every common right multiple of A(s) and B(s) then M(s) is a least common right multiple of A(s) and B(s).

M = lrm(A,B)

M =

0.32 + 1.3s + 0.95s^2 -0.3 - 1.4s - 0.3s^2 + 0.15s^3

0.63 + 1.3s + 0.63s^2 -0.75 - 1.1s - 0.15s^2 + 0.15s^3

which is verified by

A\M, B\M

ans =

ans =

0.32 + 0.32s -0.3 - 0.15s

0.32 + 0.32s -0.45

0.63 + 0.32s -0.75

0.32 -0.3 + 0.15s-3

Dual concepts

The dual concepts of right divisors, left multiples, common right divisors, greatest common right divisors, common left multiples, and least common left multiples are similarly defined and computed by the dual functions grd and llm.

Transposition and conjugation

Transposition

Given

T = [1 0 s^2 s; 0 1 s^3 0]

T =

1 0 s^2 s

0 1 s^3 0

the transposition follows by typing

T.'

ans =

1 0

0 1

s^2 s^3


s     0

The command transpose(T) is synonymous with T.'

Complex coefficients

The Polynomial Toolbox supports polynomial matrices with complex coefficients such

as

C = [1+s 1; 2 s]+i*[s 1; 1 s]

C =

1+0i + (1+1i)s 1+1i

2+1i (1+1i)s

Many Toolbox operations and functions handle them accordingly. For example, the transposition

C.'

ans =

1+0i + (1+1i)s 2+1i

1+1i (1+1i)s

In addition, there are several special functions for complex coefficient polynomial

matrices such as

real(C)

ans =

1 + s 1

2 s

imag(C)

ans =

s   1
1   s

conj(C)

ans =

1+0i + (1-1i)s   1-1i
2-1i             (1-1i)s

Conjugated transposition

The operation of conjugated transposition is a bit more complicated. It is connected with the scalar products of continuous-time signals x(t) and y(t). Here the theory requires the substitution of -s for s. For matrix polynomials, the operation consists of the conjugation, the transposition and the change of the argument s to -s. So,


C'

ans =

1+0i + (-1+1i)s 2-1i

1-1i (-1+1i)s

The command ctransp(C) is synonymous with C' .

For a real polynomial matrix T(s), the conjugated transposition amounts to transposition combined with the change of sign of s:

T = [1 0 s^2 s; 0 1 s^3 0]

T =

1 0 s^2 s
0 1 s^3 0

T'

ans =

1     0
0     1
s^2   -s^3
-s    0

Reduced and canonical forms

Suppose that we have a polynomial matrix

P = [1+s^2, -2; s-1 1]

P =

1 + s^2   -2
-1 + s    1

of degree

deg(P)

ans =

2

with leading coefficient matrix

Pl = P{2}



Pl =

1 0
0 0

Row degrees

Besides the (overall) degree of P we may also consider its row degrees

deg(P,'row')

ans =

2

1

and column degrees

deg(P,'col')

ans =

2 0

Associated with the row degrees is the leading row coefficient matrix

lcoef(P,'row')

ans =

1 0

1 0

and associated with the column degrees is the leading column coefficient matrix

lcoef(P,'col')

ans =

1 -2

0 1
Row and column reduced matrices

A polynomial matrix is row reduced if its leading row coefficient matrix has full row

rank. Similarly, it is column reduced if its leading column coefficient matrix has full

column rank. The matrix P is definitely not row reduced

isfullrank(lcoef(P,'row'))

ans =

0

but it is column reduced

isfullrank(lcoef(P,'col'))

ans =

1
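Column degrees and the leading column coefficient matrix are easy to extract from the coefficient-matrix representation. A Python/NumPy sketch for the matrix P above (an illustration of deg(P,'col') and lcoef(P,'col'), not the Toolbox code):

```python
import numpy as np

# P(s) = [1+s^2  -2; -1+s  1] stored as coefficient matrices P[k] of s^k
P = np.zeros((3, 2, 2))
P[0] = [[1, -2], [-1, 1]]
P[1] = [[0, 0], [1, 0]]
P[2] = [[1, 0], [0, 0]]

# column degrees: highest k with a nonzero entry in the column
col_deg = [max(k for k in range(3) if np.any(P[k][:, j])) for j in range(2)]

# leading column coefficient matrix: each column taken at its own degree
Lc = np.column_stack([P[col_deg[j]][:, j] for j in range(2)])

# P is column reduced: Lc has full column rank
full_col_rank = np.linalg.matrix_rank(Lc) == 2
```

This reproduces the column degrees [2 0] and the matrix [1 -2; 0 1] shown above.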


Row reduced form

Any polynomial matrix with full row rank may be transformed into row reduced form

by pre-multiplying it by a suitable unimodular matrix. To compute a row reduced

form of P, call

P_row_reduced = rowred(P)

P_row_reduced =

1 + s -2 - s

-1 + s 1

Indeed, the row rank of

lcoef(P_row_reduced,'row')

ans =

1 -1
1 0

is full.

Triangular and staircase form

There are several special forms of a polynomial matrix that can be achieved by pre-

and/or post-multiplying it by a suitable unimodular matrix. These operations preserve

many important properties and indeed serve to make these visible.

Thus, a lower-left triangular form T(s) of A(s), resulting from the column operations

T(s) = A(s) U(s)

can be computed by the macro tri:

A = [s^2 0 1; 0 s^2 1+s]

A =

s^2   0     1
0     s^2   1 + s

T = tri(A)

T =

-1       0         0
-1 - s   -1.2s^2   0

The corresponding unimodular matrix is returned by

[T,U] = tri(A); U

U =

0 0.29 0.5

0 -0.87 + 0.29s 0.5 + 0.5s

-1 -0.29s^2 -0.5s^2


If A(s) does not have full row rank then T(s) is in staircase form. Similarly, an upper-right triangular (row staircase) form is achieved by row (unimodular) operations. It results from the call tri(A,'row').

Another triangular form

If B(s) is a square polynomial matrix with nonsingular constant term then another upper-triangular form may be obtained by the overloaded macro lu:

B = [ 1 1 s; s+1 0 s; 1-s s 2+s]

B =

1 1 s

1 + s 0 s

1 - s s 2 + s

[V,T] = lu(B)

V =

1       0.33s                 0.11s
1 + s   1 + 1.3s + 0.33s^2    0.44s + 0.11s^2
1 - s   1 - 1.7s - 0.33s^2    1 - 0.56s - 0.11s^2

T =

1   1 + 0.33s   0.78s + 0.11s^3
0   -1          -0.67s - s^2 + 0.33s^3
0   0           2 + 2s + 2s^2 - s^3

Hermite form

The triangular forms described above are by no means unique. A canonical triangular form is called the Hermite form. An n-by-m polynomial matrix A(s) of rank r is in column Hermite form if it has the following properties:

- it is lower triangular
- the diagonal entries are all monic
- each diagonal entry has higher degree than any entry on its left; in particular, if the diagonal entry is constant then all off-diagonal entries in the same row are zero
- if r < m then the last m - r columns are zero

The nomenclature in the literature is not consistent. Some authors (in particular

Kailath, 1980) refer to this as the row Hermite form. The polynomial matrix A is in

row Hermite form if it is the transpose of a matrix in column Hermite form. The

command

H = hermite(A)


returns the column Hermite form

H =

1       0     0
1 + s   s^2   0

while the call

[H,U] = hermite(A); U

U =

0 -0.25 0.5

0 0.75 - 0.25s 0.5 + 0.5s

1 0.25s^2 -0.5s^2

provides a unimodular reduction matrix U such that H(s) = A(s) U(s).

Echelon form

Yet another canonical form is called the (Popov) echelon form. A polynomial matrix E is in column echelon form (or Popov form) if it has the following properties:

- it is column reduced, with its column degrees arranged in ascending order
- for each column there is a so-called pivot index i such that the degree of the i-th entry in this column equals the column degree, and the i-th entry is the lowest entry in this column with this degree
- the pivot indexes are arranged to be increasing
- each pivot entry is monic and has the highest degree in its row

A square matrix in column echelon form is both column and row reduced.

Given a square and column-reduced polynomial matrix D(s), the command [E,U] = echelon(D) computes the column echelon form E(s) of D(s). The unimodular matrix U(s) satisfies E(s) = D(s) U(s).

By way of example, consider the polynomial matrix

D = [ -3*s s+2; 1-s 1];

To find its column echelon form and the associated unimodular matrix, type

[E,U] = echelon(D)

MATLAB returns

E =

2 + s   -6
1       -4 + s

U =

0        -1.0000
1.0000   -3.0000

Smith form

The ultimate, most structured canonical form for a polynomial matrix is its Smith form. A polynomial matrix A(s) of rank r may be reduced to its Smith form

S(s) = U(s) A(s) V(s)

by pre- and post-multiplication by unimodular matrices U(s) and V(s), respectively. The Smith form looks like this:

S(s) = [ Sr(s)   0
         0       0 ]

with the diagonal submatrix

Sr(s) = diag( a1(s), a2(s), ..., ar(s) )

The entries a1(s), a2(s), ..., ar(s) are monic polynomials such that ai(s) divides ai+1(s) for i = 1, 2, ..., r-1. The Smith form is particularly useful for theoretical considerations as it reveals many important properties of the matrix. Its practical use, however, is limited because it is quite sensitive to small parameter perturbations.

The computation of the Smith form becomes numerically troublesome as soon as the

matrix size and degree become larger. The Polynomial Toolbox offers a choice of three

different algorithms to achieve the Smith form, all programmed in macro smith.

For larger examples, a manual change of tolerance may be necessary. To compute the

Smith form of a simple matrix

A=[1+s, 0, s+s^2; 0, s+2, 2*s+s^2]

A =

1 + s   0       s + s^2
0       2 + s   2s + s^2

simply call

smith(A)

ans =

1   0              0
0   2 + 3s + s^2   0

Invariant polynomials

The polynomials a1(s), a2(s), ..., ar(s) that appear in the Smith form are uniquely determined and are called the invariant polynomials of A(s). They may be retrieved by typing

diag(smith(A))


ans =

1

2 + 3s + s^2

Polynomial matrix equations

Diophantine equations

The simplest type of linear scalar polynomial equation, called the Diophantine equation after the Alexandrian mathematician Diophantos (c. A.D. 275), is

a(s) x(s) + b(s) y(s) = c(s)

The polynomials a(s), b(s) and c(s) are given while the polynomials x(s) and y(s) are unknown. The equation is solvable if and only if the greatest common divisor of a(s) and b(s) divides c(s). This implies that with a(s) and b(s) coprime the equation is solvable for any right-hand side polynomial, including c(s) = 1.

The Diophantine equation possesses infinitely many solutions whenever it is solvable. If x(s), y(s) is any (particular) solution then the general solution of the Diophantine equation is

x(s) + b'(s) t(s)
y(s) - a'(s) t(s)

Here t(s) is an arbitrary polynomial (the parameter) and a'(s), b'(s) are coprime polynomials such that

b'(s) / a'(s) = b(s) / a(s)

If the polynomials a(s) and b(s) themselves are coprime then one can naturally take a'(s) = a(s) and b'(s) = b(s).

Among all the solutions of the Diophantine equation there exists a unique solution pair x(s), y(s) characterized by

deg x(s) < deg b̄(s).

There is another (generally different) solution pair characterized by

deg y(s) < deg ā(s).

The two special solution pairs coincide only if

deg a(s) + deg b(s) > deg c(s).


The Polynomial Toolbox offers two basic solvers that may be used for scalar and

matrix Diophantine equations. They are suitably named axbyc and xaybc. For

example, consider the simple polynomials

a = 1+s+s^2; b = 1-s; c = 3+3*s;

When typing

[x,y] = axbyc(a,b,c)
x =
     2
y =
     1 + 2s

MATLAB returns the solution pair (x(s), y(s)) = (2, 1 + 2s) with minimal overall degree. The alternative call

[x,y,f,g] = axbyc(a,b,c)
x =
     2
y =
     1 + 2s
f =
     -0.45 + 0.45s
g =
     0.45 + 0.45s + 0.45s^2

retrieves the complete general solution in the form x(s) + f(s) t(s), y(s) + g(s) t(s) with an arbitrary polynomial parameter t(s).
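The minimal solution that axbyc returns for this example can be reproduced from scratch with the extended Euclidean algorithm over exact rational arithmetic. The sketch below is plain Python on ascending coefficient lists, not Toolbox code; `axbyc_scalar` is a hypothetical helper name, and it reduces x modulo b/gcd(a,b) to obtain the minimal-degree-in-x solution (which here coincides with the minimal overall-degree pair).

```python
from fractions import Fraction

def trim(p):
    # drop trailing (highest-degree) zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += Fraction(pi) * Fraction(qj)
    return trim(r)

def divmod_poly(num, den):
    r = [Fraction(x) for x in num]
    d, lead = len(den) - 1, Fraction(den[-1])
    quo = [Fraction(0)] * max(1, len(r) - d)
    while len(r) - 1 >= d and any(r):
        shift = len(r) - 1 - d
        f = r[-1] / lead
        quo[shift] = f
        r = [r[i] - f * (Fraction(den[i - shift]) if 0 <= i - shift <= d else 0)
             for i in range(len(r))]
        r.pop()                    # the leading term cancels exactly
        r = trim(r) if r else [Fraction(0)]
    return trim(quo), r

def axbyc_scalar(a, b, c):
    # extended Euclid: u*a + v*b = g = gcd(a, b)
    r0, r1 = trim([Fraction(x) for x in a]), trim([Fraction(x) for x in b])
    u0, u1, v0, v1 = [Fraction(1)], [Fraction(0)], [Fraction(0)], [Fraction(1)]
    while any(r1):
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, add(u0, mul([-1], mul(q, u1)))
        v0, v1 = v1, add(v0, mul([-1], mul(q, v1)))
    cq, crem = divmod_poly(c, r0)
    if any(crem):
        return None                # unsolvable: gcd(a, b) does not divide c
    x, y = mul(u0, cq), mul(v0, cq)
    # reduce x modulo b/g and compensate in y
    q, x = divmod_poly(x, divmod_poly(b, r0)[0])
    y = add(y, mul(q, divmod_poly(a, r0)[0]))
    return x, y

# a = 1+s+s^2, b = 1-s, c = 3+3s (ascending coefficients)
x, y = axbyc_scalar([1, 1, 1], [1, -1], [3, 3])
print(x, y)   # [Fraction(2, 1)] and [Fraction(1, 1), Fraction(2, 1)], i.e. x = 2, y = 1 + 2s
```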

To investigate the case of different minimal degree solutions, consider a right hand

side of higher degree

c = 15+15*s^4;

As before, the call

[x1,y1] = axbyc(a,b,c)
x1 =
     8 - 13s + 15s^2
y1 =
     7 + 12s + 2s^2

results in the solution of minimal overall degree (in this case deg x1 = deg y1 = 2). A slightly different command

[x2,y2] = axbyc(a,b,c,'minx')
x2 =
     10.0000
y2 =
     5 - 5s - 15s^2 - 15s^3

returns another solution with the minimal degree of the first unknown. Finally, typing

[x2,y2] = axbyc(a,b,c,'miny')
x2 =
     10 - 15s + 15s^2
y2 =
     5 + 10s

produces the solution of minimal degree in the second unknown.

Should the equation be unsolvable, the function returns NaNs:

[x,y] = axbyc(s,s,1)
x =
     NaN
y =
     NaN

Bézout
equations

A Diophantine equation with 1 on its right-hand side is called a Bézout equation. It may look like

a(s) x(s) + b(s) y(s) = 1

with a(s), b(s) given and x(s), y(s) unknown.

Matrix
polynomial
equations
In the matrix case, the polynomial equation becomes a polynomial matrix equation.

The basic matrix polynomial (or polynomial matrix) equations are

A(s) X(s) = B(s)

and

X(s) A(s) = B(s)

or even

A(s) X(s) B(s) = C(s)


A(s), B(s) and, if applicable, C(s) are given while X(s) is unknown. The Polynomial Toolbox functions to solve these equations are conveniently named axb, xab, and axbc. Hence, given the polynomial matrices

A= [1 s 1+s; s-1 1 0]; B = [s 0; 0 1];

the call

X0 = axb(A,B)
X0 =
     1          -1
     1 - s       s
     -1 + s      1 - s

solves the first equation and returns its solution of minimal overall degree. Typing

[X0,K] = axb(A,B); K
K =
     1.7 + 1.7s
     1.7 - 1.7s^2
     -1.7 - 1.7s + 1.7s^2

also computes the right null-space of A so that all the solutions to A(s) X(s) = B(s) may easily be parameterized as

X(s) = X0(s) + K(s) T(s)

where T(s) is an arbitrary polynomial matrix of compatible size. The other equations are handled similarly.

One-sided
equations

In systems and control several special forms of polynomial matrix equations are frequently encountered, in particular the one-sided equations

A(s) X(s) + B(s) Y(s) = C(s)

and

X(s) A(s) + Y(s) B(s) = C(s)

Two-sided
equations

Also the two-sided equations

A(s) X(s) + Y(s) B(s) = C(s)

and

X(s) A(s) + B(s) Y(s) = C(s)

are common. A(s), B(s) and C(s) are always given while X(s) and Y(s) are to be computed. A(s) is typically square invertible.
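The axb example above can be double-checked symbolically. The following SymPy sketch (not Toolbox code) verifies that X0 solves A·X = B and that the printed null-space vector K — rescaled here by 1/1.7 so that the arithmetic is exact — is indeed annihilated by A.

```python
from sympy import Matrix, symbols, simplify

s = symbols('s')
A  = Matrix([[1, s, 1 + s], [s - 1, 1, 0]])
B  = Matrix([[s, 0], [0, 1]])
X0 = Matrix([[1, -1], [1 - s, s], [-1 + s, 1 - s]])
K  = Matrix([1 + s, 1 - s**2, -1 - s + s**2])   # printed null-space vector divided by 1.7

print(simplify(A * X0 - B))   # zero 2-by-2 matrix: X0 solves A*X = B
print(simplify(A * K))        # zero 2-by-1 vector: K spans the right null-space of A
```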


The solutions of the one- and two-sided equations may be found with the help of the

Polynomial Toolbox macros axbyc, xaybc, and axybc. Thus, for the matrices

A= [1 s; 1+s 0]; B = [s 1; 1 s]; C = [1 0; 0 1];

the call

[X,Y] = axbyc(A,B,C)

returns

X =
     0.25 - 0.5s     0.5
     0.25 + 0.5s    -0.5
Y =
     -0.25 - 0.5s    0.5
     0.75 + 0.5s    -0.5

Various other scalar and matrix polynomial equations may be solved by directly applying appropriate solvers programmed in the Polynomial Toolbox, such as the equation

A(s) X'(s) + X(s) A'(s) = B(s)
Table 2 lists all available polynomial matrix equation solvers.

Table 2. Equation solvers

Equation               Name of the routine
AX = B                 axb
AXB = C                axbc
AX + BY = C            axbyc
A'X + X'A = B          axxab
A'X + Y'A = B          axyab
AX + YB = C            axybc
XA' + AX' = B          xaaxb
XA = B                 xab
XA + YB = C            xaybc


To indicate that a polynomial matrix equation is unsolvable, the solver returns matrices of the correct sizes but full of NaNs:

X0 = axb(A,B)
X0 =
     NaN NaN
     NaN NaN

Factorizations

Symmetric
polynomial
matrices

Besides linear equations, special quadratic equations in scalar and matrix polynomials are encountered in various engineering fields. The equations typically contain a symmetric polynomial matrix on the right-hand side.

A square polynomial matrix M(s) which is equal to its conjugate transpose M'(s) is called (para-Hermitian) symmetric. In the scalar case with real coefficients, as in

M(s) = 2 + 3s^2 + 4s^4 = M'(s),

only even coefficients are nonzero. In the scalar case with complex coefficients, as in

M(s) = 2 + 3is + 3is^3 + 4s^4 = M'(s),

even coefficients are real while odd ones are imaginary. In the matrix case,

the even coefficient matrices are (Hermitian) symmetric while the odd coefficient matrices are (Hermitian) antisymmetric.

Treating M(s) as a function of the complex variable s, we can set s to various points s_i of the complex plane. We thus obtain numerical matrices M(s_i) that are (Hermitian) symmetric and hence have only real eigenvalues. If all eigenvalues of M(s_i) are positive for all s_i on the imaginary axis then M(s) is called positive definite. Such a polynomial matrix, of course, has no zeros on the imaginary axis. If all the eigenvalues are nonnegative then M(s) is called nonnegative definite. Yet another important class consists of matrices with a constant number of positive and negative eigenvalues of M(s_i) over the whole imaginary axis. Such an M(s) is called indefinite (in the matrix sense). All the above cases can be factorized. Should the number of positive and negative eigenvalues change as s_i ranges over the imaginary axis, however, the matrix M(s) is indefinite in the scalar sense and cannot be factorized in any symmetric sense.

Zeros of
symmetric
matrices

A nonnegative definite symmetric polynomial matrix M(s) with real coefficients has its zeros distributed symmetrically with respect to the imaginary axis. Thus if it has a zero s_i, it generally has a quadruple of zeros s_i, -s_i, s̄_i, -s̄_i, all of the same multiplicity. If s_i is real, then it is only a couple s_i, -s_i. If s_i is imaginary, then it is also a couple s_i, s̄_i but it must be of even multiplicity. Finally, if s_i = 0 then it is a singlet of even multiplicity.

zpplot(4+s^4), grid

zpplot(1-s^2), grid


zpplot(1+2*s^2+s^4), grid

zpplot(-s^2), grid
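The quadruple pattern that zpplot displays for the first example can also be confirmed numerically. This is a NumPy sketch, not Toolbox code: the real symmetric polynomial 4 + s^4 has exactly the quadruple 1+j, 1-j, -1+j, -1-j.

```python
import numpy as np

r = np.roots([1, 0, 0, 0, 4])                      # zeros of 4 + s^4 (descending coeffs)
quadruple = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
assert all(min(abs(z - q) for q in quadruple) < 1e-9 for z in r)
print(np.round(r, 6))
```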


For polynomial matrices with complex coefficients, the picture is simpler: generally a couple s_i, -s̄_i, and in the imaginary case (including the zero case) a singlet with even multiplicity.

zpplot(2+2*i*s-s^2),grid

zpplot(1+2*i*s-s^2),grid


Spectral

factorization

One of the quadratic equations in scalar and matrix polynomials frequently encountered is the polynomial spectral factorization

A(s) = X'(s) J X(s)

and the spectral co-factorization

A(s) = X(s) J X'(s).

In either case, the given polynomial matrix A(s) satisfies A(s) = A'(s) (we say it is para-Hermitian symmetric) and the unknown X(s) is to be stable. The case of A(s) positive definite on the stability boundary results in J = I.

Spectral factorization with J = I is the main tool to design LQ and LQG controllers as

well as Kalman filters. On the other hand, if A(s) is indefinite in the matrix sense then

J = diag( +1, ..., +1, -1, ..., -1 ).

This is the famous J-spectral factorization problem, which is an important tool for robust control and filter design based on H∞ norms. The Polynomial Toolbox provides two macros called spf and spcof to

handle spectral factorization and co-factorization, respectively.

By way of illustration consider the para-Hermitian matrix

A =

34 - 56s^2 -13 - 22s + 60s^2 36 + 67s

-13 + 22s + 60s^2 46 - 1e+002s^2 -42 - 26s + 38s^2

36 - 67s -42 + 26s + 38s^2 59 - 42s^2

Its spectral factorization follows by typing


[X,J] = spf(A)
X =
     2.1 + 0.42s     5.2 + 0.39s     -2 + 0.35s
     -5.5 + 4s       4.3 + 0.64s     -7.4 - 5.5s
     0.16 + 6.3s     -0.31 - 10s     -0.86 + 3.5s
J =
     1     0     0
     0     1     0
     0     0     1

while the spectral co-factorization of A is computed via

[Xcof,J] = spcof(A)
Xcof =
     2.7 + 0.42s     4.8 + 4s         2 + 6.3s
     -1.6 + 0.39s    0.93 + 0.64s     -6.5 - 10s
     4.3 + 0.35s     2.7 - 5.5s       5.8 + 3.5s
J =
     1     0     0
     0     1     0
     0     0     1
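The claim that J = I corresponds to positive definiteness on the imaginary axis can be probed numerically. This is a NumPy sketch, not Toolbox code, and it only samples a few frequency points (an assumption, not a proof): A(jω) must be Hermitian there, with real eigenvalues, and for this A they come out positive.

```python
import numpy as np

def A_of(s):
    # the para-Hermitian matrix A from the text (1e+002 written as 100)
    return np.array([
        [34 - 56*s**2,          -13 - 22*s + 60*s**2,  36 + 67*s],
        [-13 + 22*s + 60*s**2,   46 - 100*s**2,        -42 - 26*s + 38*s**2],
        [36 - 67*s,             -42 + 26*s + 38*s**2,   59 - 42*s**2]], dtype=complex)

for w in (0.0, 0.7, 2.5):
    Ahw = A_of(1j * w)
    assert np.allclose(Ahw, Ahw.conj().T)     # Hermitian on the imaginary axis
    print(w, np.linalg.eigvalsh(Ahw).min())   # real eigenvalues
```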

The resulting J reveals that the given matrix A is positive-definite on the imaginary

axis. On the other hand, the following matrix is indefinite

B =

5 -6 - 18s -8

-6 + 18s -41 + 81s^2 -22 - 18s

-8 -22 + 18s -13

Its spectral factorization follows as

[Xf,J] = spf(B)
Xf =
     3.3     2.9 - 0.92s     -1.6
     1.8     6.6 + 0.44s      3.4
     1.6     2.5 + 9s        -2
J =
     1     0     0
     0    -1     0
     0     0    -1

or

[Xcof,J] = spcof(B)
Xcof =
     3.3             -1.8            1.6
     -1.9 + 0.92s    -4.4 + 0.44s    -5 - 9s
     -1.6            -3.4            -2
J =
     1     0     0
     0    -1     0
     0     0    -1

Non-symmetric
factorization
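The signature J = diag(1, -1, -1) can be seen directly in the eigenvalues of B on the imaginary axis. The following NumPy sketch (not Toolbox code) checks this at the single sample point s = 0, an assumption chosen for simplicity: B(0) is a real symmetric matrix with one positive and two negative eigenvalues.

```python
import numpy as np

# B(0): the constant coefficient of the indefinite para-Hermitian matrix B
B0 = np.array([[5, -6, -8],
               [-6, -41, -22],
               [-8, -22, -13]], dtype=float)
eigs = np.linalg.eigvalsh(B0)
print(eigs)
print((eigs > 0).sum(), (eigs < 0).sum())   # 1 positive, 2 negative, matching J
```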

General non-symmetric factorization can be computed by the function fact, which returns a factor with zeros arbitrarily selected from the zeros of the original matrix. For example, it is sometimes useful to express a given non-symmetric polynomial matrix A(s) as the product of its anti-stable and stable factors, A = A_antistab A_stab. So to factorize the polynomial matrix

A=prand(3,3,'int')

A =

-1 + s + 2s^2 + 7s^3 -2 + 3s + 4s^2 + 5s^3 -5 + s + s^2 - 5s^3

-4 + 5s - s^2 + 2s^3 -3s - 3s^2 + 2s^3 -5 + 4s + 3s^2 - 4s^3

-1 - 6s - 11s^2 + 5s^3 -3 + 2s + s^2 -5 - 5s - 2s^2 - 6s^3

having stable as well as unstable zeros

r=roots(A)

r =

4.2755

-1.6848

-0.8988 + 0.5838i

-0.8988 - 0.5838i

0.5927 + 0.7446i

0.5927 - 0.7446i


0.3336

-0.1425 + 0.2465i

-0.1425 - 0.2465i

the stable factor should have all the stable zeros

r_stab=r(find(real(r)<0))
r_stab =
     -1.6848
     -0.8988 + 0.5838i
     -0.8988 - 0.5838i
     -0.1425 + 0.2465i
     -0.1425 - 0.2465i

while the remaining, anti-stable zeros

     4.2755
     0.5927 + 0.7446i
     0.5927 - 0.7446i
     0.3336

belong to the anti-stable factor.

Matrix pencil routines

Transformation
to Kronecker
canonical form

Matrix pencils are polynomial matrices of degree 1. They arise in the study of

continuous- and discrete-time linear time-invariant state space systems given by

ẋ(t) = A x(t) + B u(t)              x(t+1) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)              y(t) = C x(t) + D u(t)

and continuous- and discrete-time descriptor systems given by

E ẋ(t) = A x(t) + B u(t)            E x(t+1) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)              y(t) = C x(t) + D u(t)

The transfer matrix of the descriptor system is

H(s) = C (sE - A)^-1 B + D

in the continuous-time case, and

H(z) = C (zE - A)^-1 B + D

in the discrete-time case. The polynomial matrices

sE - A,    zE - A

that occur in these expressions are matrix pencils. In the state space case they reduce to the simpler forms sI - A and zI - A.

A nonsingular square real matrix pencil P(s) may be transformed to its Kronecker canonical form

C(s) = Q P(s) Z = [ a + sI    0
                    0         I - se ]

Q and Z are constant orthogonal matrices, a is a constant matrix whose eigenvalues are the negatives of the roots of the pencil, and e is a nilpotent constant matrix. (That is, there exists a nonnegative integer k such that e^i = 0 for i ≥ k. The integer k is called the nilpotency of e.)

This transformation is very useful for the analysis of descriptor systems because it

separates the finite and infinite roots of the pencil and, hence, the corresponding

modes of the system.

By way of example we consider the descriptor system

E ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

The system is defined by the matrices

A = [ -1 0 0; 0 1 0; 0 0 1 ];

B = [ 1; 0; 1 ];

C = [ 1 -1 0 ];

D = 2;

E = [ 1 0 0; 0 0 1; 0 0 0 ];

We compute the canonical form of the pencil sE - A by typing

c = pencan(s*E-A)

c =

1 + s 0 0

0 1 -s

0 0 1

Inspection shows that a = 1 has dimensions 1-by-1 and that e is the 2-by-2 nilpotent matrix

e = [ 0  1
      0  0 ]

The descriptor system has a finite pole at -1 and two infinite poles.
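The finite pole can be confirmed symbolically: the finite roots of the pencil sE - A are the roots of det(sE - A), while the rank deficiency of E accounts for the infinite poles. A SymPy sketch, not Toolbox code:

```python
from sympy import Matrix, symbols, factor, solve

s = symbols('s')
E = Matrix([[1, 0, 0], [0, 0, 1], [0, 0, 0]])
A = Matrix([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])
p = factor((s*E - A).det())
print(p, solve(p, s))   # s + 1, so the single finite pole is at s = -1
```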


Transformation

to Clements

form

Pencil

Lyapunov

equations

A para-Hermitian real pencil P(s) = sE + A, with E skew-symmetric and A symmetric, may — under the assumption that it has no roots on the imaginary axis — be transformed into its "Clements" form (Clements, 1993) according to

C(s) = U P(s) U^T = [ 0                 0                 sE1 + A1
                      0                 A2                sE3 + A3
                      -sE1^T + A1^T     -sE3^T + A3^T     sE4 + A4 ]

The constant matrix U is orthogonal and the finite roots of the pencil sE1 + A1 all have negative real parts. This transformation is needed for the solution of various spectral factorization problems that arise in the solution of H2 and H∞ optimization problems for descriptor systems.

Let the para-Hermitian pencil P be defined as

         [  100    -0.01     s    0
P(s) =     -0.01   -0.01     0    1
           -s       0       -1    0
            0       1        0    0 ]

Its Clements form follows by typing

P = [100 -0.01 s 0; -0.01 -0.01 0 1; -s 0 -1 0; 0 1 0 0];
C = pzer(clements(P))
C =
     0        0                       0                      10 - s
     0       -1                       0                      -3.5e-005 - 0.0007s
     0        0                       1                      -0.014 + 0.00071s
     10 + s  -3.5e-005 + 0.0007s     -0.014 - 0.00071s        99

The pzer function is included in the command to clear small coefficients in the

result.

When working with matrix pencils the two-sided matrix pencil equation

A(s) X + Y B(s) = C(s)

is sometimes encountered. A and B are square matrix pencils and C is a rectangular

pencil with as many rows as A and as many columns as B. If A and B have no

common roots (including roots at infinity) then the equation has a unique solution

pair X, Y with both X and Y a constant matrix.



By way of example, let

A = s+1;
B = [ s 0
      1 2];
C = [ 3 4 ];

Then

[X,Y] = plyap(A,B,C)

gives the solution

X =
     1     0
Y =
    -1     2
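The plyap solution can be verified by direct substitution into A(s)X + YB(s) = C(s). A SymPy sketch, not Toolbox code:

```python
from sympy import Matrix, symbols, expand

s = symbols('s')
A = Matrix([s + 1])              # 1-by-1 pencil
B = Matrix([[s, 0], [1, 2]])
C = Matrix([[3, 4]])
X = Matrix([[1, 0]])
Y = Matrix([[-1, 2]])
residual = (A * X + Y * B - C).applyfunc(expand)
print(residual)                  # zero 1-by-2 matrix
```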


3 Discrete-time and two-sided

polynomial matrices

Introduction

Basic operations

Two-sided

polynomials

Polynomials in s and p, introduced in Chapter 2, Polynomial matrices, are usually associated with continuous-time differential operators acting on signals x(t). Here we introduce polynomials in z and z^-1, typically associated with discrete-time shift operators acting on signals x_t with integer t.

The advance shift z acts as z x_t = x_{t+1}, the delay shift z^-1 acts as z^-1 x_t = x_{t-1}. They are inverses of each other: z z^-1 = 1.

Many operations and functions work for discrete-time polynomials exactly as

described before in Chapter 2, Polynomial matrices. Here we present what is

different.

Both z and z^-1 can be used as an indeterminate variable for polynomials. Here z^-1 can be shortened to zi. In arithmetic formulae, when only one of them is used, the usual polynomial algebra is followed. However, when both of them are used, the relation z z^-1 = 1 is respected.

(3*z^3+4*z^4+5*z^5)*zi^2

ans =

3z + 4z^2 + 5z^3

It may happen that both powers of z and those of z^-1 remain in the result

(1+z)*(1+zi)
ans =
     z^-1 + 2 + z

In such a way, new objects, two-sided polynomials (tsp) are created. Their algebra is

naturally similar to that of polynomials.


Leading and

trailing degrees

and coefficients

Derivatives and

integrals

Besides degree deg and leading coefficient lcoef, two-sided polynomials have

trailing degree tdeg and trailing coefficient tcoef:

T=[2*zi+1+3*z^3, 2*zi; zi^2, 1+z*2]

T =

2z^-1 + 1 + 3z^3 2z^-1

z^-2 1 + 2z

deg(T),lcoef(T)
ans =
     3
ans =
     3     0
     0     0

tdeg(T),tcoef(T)
ans =
    -2
ans =
     0     0
     1     0

In the deriv command, the derivative is taken with respect to the indeterminate variable:

F=1+z+z^2;

deriv(F)

ans =

1 + 2z

G=1+z^-1+z^-2;

deriv(G)

ans =

1 + 2z^-1

For two-sided polynomials, unless stated otherwise, the derivative is taken with

respect to z:


Roots and

stability

T=z^-2+z^-1+1+z+z^2;

deriv(T)

ans =

-2z^-3 - z^-2 + 1 + 2z

We can also explicitly state the independent variable for the derivative

deriv(T,z)

ans =

-2z^-3 - z^-2 + 1 + 2z

deriv(T,zi)

ans =

2z^-1 + 1 - z^2 - 2z^3

The same holds for integrals. As the integral of z^-1 is log z, we obtain the logarithmic term separately:

[intT,logterm]=integral(T)

intT =

-z^-1 + z + 0.5z^2 + 0.33z^3

logterm =

z

The roots of polynomials in z and z^-1 are, unless stated otherwise, understood in the complex variable equal to the indeterminate:

roots(z-0.5)

ans =

0.5000

roots(zi-0.5)

ans =

0.5000

However, we can state the variable for roots explicitly:

>> roots(zi-0.5,z)

ans =

2.0000


Norms

Conjugations

Conjugate

transpose

The stability region for z is the interior of the unit disc

isstable(z-0.5)
ans =
     1

while for z^-1 it is the exterior of the unit disc

>> isstable(zi-0.5)

ans =

0

>> isstable(zi+2)

ans =

1

Polynomials in z and z^-1 may be considered, by means of the z-transform, as discrete-time signals of finite duration. The H2-norm of such a signal may be found
h2norm(1+z^-1+z^-2)

ans =

1.7321

while its H∞-norm is returned by typing

hinfnorm(1+z^-1+z^-2)

ans =

3
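Both norms are easy to reproduce directly. The following plain-Python sketch (not the Toolbox routines) uses the facts that the H2-norm equals the l2-norm of the coefficient sequence and the H∞-norm is the maximum modulus on the unit circle, approximated here by sampling 1024 points (an assumption; the maximum happens to fall exactly on a sample at z = 1).

```python
import cmath
import math

coeffs = [1, 1, 1]                       # 1 + z^-1 + z^-2

h2 = math.sqrt(sum(abs(c)**2 for c in coeffs))

def on_circle(theta):
    # |f(e^{j*theta})| for f given in powers of z^-1
    zv = cmath.exp(1j * theta)
    return abs(sum(c * zv**(-k) for k, c in enumerate(coeffs)))

hinf = max(on_circle(2 * math.pi * k / 1024) for k in range(1024))
print(round(h2, 4), round(hinf, 4))      # 1.7321 3.0
```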

The operation of conjugate transposition is connected with the scalar product of discrete-time signals. Here the theory requires z' = z^-1 and (z^-1)' = z and, indeed, the Polynomial Toolbox returns

z'
ans =
     z^-1

or

zi'
ans =
     z


Symmetric

two-sided

polynomial

matrices

For more complex polynomial matrices, the operation consists of complex conjugation, transposition and a change of argument. So

C=[1+z 1;2 z]+i*[z 1;1 z]
C =
     1+0i + (1+1i)z       1+1i
     2+1i                 (1+1i)z
D=C'
D =
     1+0i + (1-1i)z^-1    2-1i
     1-1i                 (1-1i)z^-1

Hence, the conjugate of a polynomial in z is a polynomial in z^-1 and vice versa.²

D'
ans =
     1+0i + (1+1i)z       1+1i
     2+1i                 (1+1i)z

For two-sided polynomials, the operation works alike

T=[z 2-z; 1+zi 1+z-zi]
T =
     z               2 - z
     z^-1 + 1        -z^-1 + 1 + z
T'
ans =
     z^-1            1 + z
     -z^-1 + 2       z^-1 + 1 - z

In particular, a two-sided polynomial matrix that equals its conjugate is called (para-Hermitian) symmetric. So is, for example,

M=[zi+2+z 3*zi+4+5*z;3*z+4+5*zi zi+3+z]
M =
     z^-1 + 2 + z       3z^-1 + 4 + 5z
     5z^-1 + 4 + 3z     z^-1 + 3 + z

M'
ans =
     z^-1 + 2 + z       3z^-1 + 4 + 5z
     5z^-1 + 4 + 3z     z^-1 + 3 + z

² Clearly, this operation with discrete-time polynomials is implemented more naturally in Version 3 of the Polynomial Toolbox than in previous versions. On the other hand, users familiar with older versions should beware of this incompatibility.

Expressed by matrix coefficients as

M(z) = M_-1 z^-1 + M_0 + M_1 z,

the matrix M_0 must be symmetric and the matrices M_-1, M_1 must be transposes of each other. Indeed,

M{0}
ans =
     2     4
     4     3

M{-1}
ans =
     1     3
     5     1

M{1}
ans =
     1     5
     3     1

or, together,
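The coefficient-level test for para-Hermitian symmetry — M_0 symmetric, M_k and M_-k mutual transposes — can be mimicked with a tiny data structure. This plain-Python sketch (not Toolbox code, with hypothetical helper names) stores a real two-sided polynomial matrix as a dict mapping powers to coefficient matrices; conjugation negates every power and transposes every coefficient.

```python
def transpose(C):
    return [list(row) for row in zip(*C)]

def tsp_conj(M):
    # real coefficients: the conjugate sends the coefficient at z^k
    # to the transposed coefficient at z^-k
    return {-k: transpose(C) for k, C in M.items()}

M = {-1: [[1, 3], [5, 1]],
      0: [[2, 4], [4, 3]],
      1: [[1, 5], [3, 1]]}
print(tsp_conj(M) == M)   # True: M equals its conjugate, so it is para-Hermitian
```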

pformat coef
M
Two-sided polynomial matrix in z: 2-by-2, degree: 1, tdegree: -1
M =
Matrix coefficient at z^-1 :
     1     3
     5     1
Matrix coefficient at z^0 :
     2     4
     4     3
Matrix coefficient at z^1 :
     1     5
     3     1

Zeros of
symmetric
two-sided
polynomial
matrices

In the case of complex coefficients, complex conjugacy also comes into play: in the above, "symmetric" is replaced by "conjugate symmetric" (Hermitian symmetric) and "transposed" by "conjugate transposed" (Hermitian transpose).

pformat

M=[(1-i)*zi+2+(1+i)*z (4+5*i)+(6+7*i)*z;(4-5*i)+(6-7*i)*zi ...

(1-i)*zi+2+(1+i)*z]

M =

(1-1i)z^-1 + 2+0i + (1+1i)z 4+5i + (6+7i)z

(6-7i)z^-1 + 4-5i (1-1i)z^-1 + 2+0i + (1+1i)z

pformat coef

M

Two-sided polynomial matrix in z: 2-by-2, degree: 1, tdegree: -1

M =

Matrix coefficient at z^-1 :

1.0000 - 1.0000i 0

6.0000 - 7.0000i 1.0000 - 1.0000i

Matrix coefficient at z^0 :

2.0000 4.0000 + 5.0000i

4.0000 - 5.0000i 2.0000

Matrix coefficient at z^1 :

1.0000 + 1.0000i 6.0000 + 7.0000i

0 1.0000 + 1.0000i

A nonnegative definite symmetric two-sided polynomial matrix M(z) with real coefficients has its zeros distributed symmetrically with respect to the unit circle. Thus if it has a zero z_i, it generally has a quadruple of zeros z_i, 1/z_i, z̄_i, 1/z̄_i, all of the same multiplicity. If z_i is real, then it is only a couple z_i, 1/z_i. If z_i is on the unit circle, then it is also a couple z_i, z̄_i but it must be of even multiplicity. Finally, if z_i = ±1 then it is a singlet of even multiplicity.


zpplot(sdf(2*z^-2+6*z^-1+9+6*z+2*z^2)), grid

zpplot(sdf(z^-2+2*sqrt(2)*z^-1+4+2*sqrt(2)*z+z^2)), grid

zpplot(sdf(z^-2+4*z^-1+6+4*z+z^2)), grid

For polynomial matrices with complex coefficients, the picture is simpler: generally a couple z_i, 1/z̄_i

zpplot(sdf((1-i)*z^-1+3+(1+i)*z)), grid

and especially in the unit-circle case a singlet with even multiplicity

a=sqrt(2)/2; zpplot(sdf(a*(1-i)*z^-1+2+a*(1+i)*z)), grid
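The reciprocal/conjugate symmetry of the zeros in the first real example can be confirmed numerically. A NumPy sketch, not Toolbox code: multiply the two-sided polynomial by z^2 to get an ordinary polynomial and check that every root has both a reciprocal and a conjugate partner among the roots.

```python
import numpy as np

# z^2 * (2z^-2 + 6z^-1 + 9 + 6z + 2z^2), descending coefficients
r = np.roots([2, 6, 9, 6, 2])
for z in r:
    assert min(abs(r - 1 / z)) < 1e-8         # reciprocal partner present
    assert min(abs(r - np.conj(z))) < 1e-8    # conjugate partner present
print(np.round(r, 4))
```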

Matrix equations with discrete-time and two-sided polynomials

All the various matrix equations introduced in Chapter 2, Polynomial matrices, are of the same use for discrete-time and two-sided polynomials as well. Moreover, most of them look identical for standard and discrete-time polynomials. Only equations that include conjugation differ. Due to the different meaning of discrete-time conjugation, discrete-time polynomials in different operators as well as two-sided polynomials may appear in such an equation.

Symmetric
bilateral matrix
equations

A frequently encountered polynomial matrix equation with conjugation is

A'X + X'A = B

where A, X are polynomial matrices in z while A', X' are polynomial matrices in z^-1, and B is a symmetric two-sided polynomial matrix in z and z^-1. As usual, A (and hence A') is given, B is also given, while X is to be computed along with its conjugate X'.

The Polynomial Toolbox function to solve this equation is conveniently named axxab.

Hence, given the polynomial matrix

A=[1+5*z z;0 1-2*z]

A =

1 + 5z z

0 1 - 2z

and the symmetric two-sided polynomial matrix

B=[7*z^-1+24+7*z,0;0 4*z^-1-10+4*z]
B =
     7z^-1 + 24 + 7z     0
     0                   4z^-1 - 10 + 4z

the call

X=axxab(A,B)
X =
     0.96 + 2.2z     -0.49z
     0.23            -1.2 + 1.7z

solves the equation

[ 1 + 5z^-1   0         ]          [ 1 + 5z   z      ]   [ 7z^-1 + 24 + 7z   0                ]
[ z^-1        1 - 2z^-1 ] X + X'   [ 0        1 - 2z ] = [ 0                 4z^-1 - 10 + 4z  ]
It is easy to check that

A'*X+X'*A-B
ans =
     0     0
     0     0

Non-symmetric
equation

Another symmetric equation with conjugation is

XA' + AX' = B

where again A, X are polynomial matrices in z, A', X' are polynomial matrices in z^-1, and B is a symmetric two-sided polynomial matrix. As before, A (and hence A') is given and B is also given. The unknown X along with its conjugate X' can be computed using the Polynomial Toolbox function xaaxb

X=xaaxb(A,B)
X =
     1 + 2.2z        -0.2
     0.2 - 0.4z      -1 + 2z

Equations with conjugations may also be non-symmetric, such as

a'x + y'a = b

where a is a given polynomial in z, a' is a polynomial in z^-1 (also given, as it is the conjugate of a) and b is a given two-sided polynomial, not necessarily symmetric. The equation has two unknowns, x and y, both polynomials in z. Notice that the second unknown appears in the equation through its conjugate y', which is actually a polynomial in z^-1. This equation is of special use only and hence only a scalar version of its solver is programmed in the Polynomial Toolbox, as the function axyab. Given a polynomial

a=(1+2*z)*(1-3*z)

a =

1 - z - 6z^2

and a non-symmetric two-sided polynomial

b=z^-1 +2*z

b =

z^-1 + 2z

the solution is computed by calling

[x,y] = axyab(a,b)
x =
     0.006 - 0.24z + 0.071z^2
y =
     0.012 - 0.39z + 0.036z^2

Discrete-time
spectral
factorization

Also special quadratic equations called spectral factorizations contain conjugations, as explained in Chapter 2, Polynomial matrices. In discrete-time spectral factorization, therefore, polynomials in z meet polynomials in z^-1 as well as two-sided polynomials.

This happens in the spectral factorization

A(z, z^-1) = X'(z) X(z)

as well as in the spectral co-factorization

A(z, z^-1) = X(z) X'(z).

By way of illustration consider the symmetric polynomial

b =
     -6z^-2 + 5z^-1 + 38 + 5z - 6z^2

roots(b)
ans =
     3.0000
    -2.0000
    -0.5000
     0.3333

Its spectral factorization follows by typing

x=spf(b)
x =
     -1 + z + 6z^2

Indeed,

roots(x)
ans =
    -0.5000
     0.3333

x*x'
ans =
     -6z^-2 + 5z^-1 + 38 + 5z - 6z^2

When working with polynomials in z^-1, you may convert the result into those simply by typing

xi=pol(x*zi^deg(x))
xi =
     6 + z^-1 - z^-2

Then again

xi*xi'
ans =
     -6z^-2 + 5z^-1 + 38 + 5z - 6z^2

but, naturally,

roots(xi)
ans =
     3.0000
    -2.0000

Resampling

Sampling
period
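The idea behind scalar discrete-time spectral factorization can be sketched numerically: keep the zeros inside the unit disc and fix the gain. This is a NumPy sketch of the principle, not the Toolbox's spf, and it assumes b is positive on the unit circle so that b(1) = x(1)^2 with x(1) > 0.

```python
import numpy as np

b = [-6, 5, 38, 5, -6]          # descending coefficients of z^2 * b(z, z^-1)
stable = [z for z in np.roots(b) if abs(z) < 1]            # keeps -0.5 and 1/3
m = np.real_if_close(np.poly(stable))                      # monic stable polynomial
scale = np.sqrt(np.polyval(b, 1.0)) / np.polyval(m, 1.0)   # gain from b(1) = x(1)^2
x = np.real_if_close(scale * m)
print(np.round(x, 4))           # [ 6.  1. -1.], i.e. x = -1 + z + 6z^2
```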

Discrete-time and two-sided polynomials often describe discrete-time signals of finite duration. In such a way, a discrete-time signal (sequence) f_t, t = -m, ..., n, can be expressed as

f = f_-m z^m + f_-m+1 z^(m-1) + ... + f_-1 z + f_0 + f_1 z^-1 + f_2 z^-2 + ... + f_n z^-n

where the powers of z actually serve as position markers. In particular, polynomials in z^-1 describe causal signals that are zero for t < 0 while polynomials in z represent anti-causal signals that are zero for t > 0.

Every polynomial f in z or z^-1 bears a sampling period h. Unless specified otherwise, the default sampling period is used, which equals 1. The sampling period can be accessed and changed by the function props

f=1+0.5*zi
f =
     1 + 0.5z^-1

props(f)
POLYNOMIAL MATRIX
size 1-by-1
degree 1
PROPERTY NAME:     CURRENT VALUE:   AVAILABLE VALUES:
variable symbol    z^-1             's','p','z^-1','d','z','q'
sampling period    1                0 for c-time, nonneg or [] for d-time
user data          []               arbitrary

props(f,2)
POLYNOMIAL MATRIX
size 1-by-1
degree 1
PROPERTY NAME:     CURRENT VALUE:   AVAILABLE VALUES:
variable symbol    z^-1             's','p','z^-1','d','z','q'
sampling period    2                0 for c-time, nonneg or [] for d-time
user data          []               arbitrary

or, better, by a structure-like notation using simply f.h

f=1+0.5*zi
f =
     1 + 0.5z^-1
f.h
ans =
     1
f.h=2
f =
     1 + 0.5z^-1
f.h
ans =
     2

Resampling of
polynomials in
z^-1

Resampling and
phase

In arithmetic operations and polynomial equation solvers, all input polynomials must be of the same sampling period; otherwise a warning message is issued.

Polynomials in z^-1 represent, by means of the z-transform, causal discrete-time signals of finite duration. They may, or may not, be considered as a result of sampling continuous-time signals. Despite that, they can be "resampled". For example, resampling by 3 means taking only every third sample

f=1+0.9*zi+0.8*zi^2+0.7*zi^3+0.6*zi^4+0.5*zi^5+0.4*zi^6+0.3*zi^7+0.2*zi^8
f =
     1 + 0.9z^-1 + 0.8z^-2 + 0.7z^-3 + 0.6z^-4 + 0.5z^-5 + 0.4z^-6 + 0.3z^-7 + 0.2z^-8

g=resamp(f,3)
g =
     1 + 0.7z^-1 + 0.4z^-2

g.h
ans =
     3
1

In the resulting g, the indeterminate z still means the delay operator but the

delay, when measured in continuous-time units, is three times longer than that for f.

It is captured by the fact that g.h is three times f.h .

Resampling with ratio 3 can be performed with "phase" 0,1 or 2, the default being 0:

g0=resamp(f,3,0)

g0 =

1 + 0.7z^-1 + 0.4z^-2

g1=resamp(f,3,1)

g1 =

0.9 + 0.6z^-1 + 0.3z^-2

g2=resamp(f,3,2)

g2 =

0.8 + 0.5z^-1 + 0.2z^-2
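Resampling with a phase is simple enough to mimic on raw coefficient lists. A plain-Python sketch of what resamp does for polynomials in z^-1 (not Toolbox code; `resample` is a hypothetical helper name):

```python
def resample(coeffs, k, phase=0):
    # keep every k-th coefficient, starting at the given phase
    return coeffs[phase::k]

f = [1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(resample(f, 3))      # [1, 0.7, 0.4]
print(resample(f, 3, 1))   # [0.9, 0.6, 0.3]
print(resample(f, 3, 2))   # [0.8, 0.5, 0.2]
```

This reproduces g0, g1 and g2 from the transcript above.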


Resampling of

two-sided

polynomials

Resampling of

polynomials in z

Dilating

Two-sided polynomials are considered as signals in discrete time t which may be nonzero both for t < 0 and t > 0. According to the usual habit of the z-transform, the ordering is according to powers of z^-1. This is followed in resampling of two-sided polynomials with a ratio and a phase
T=1.3*z^3+1.2*z^2+1.1*z+1+0.9*zi+0.8*zi^2+0.7*zi^3

T =

0.7z^-3 + 0.8z^-2 + 0.9z^-1 + 1 + 1.1z + 1.2z^2 +

1.3z^3

resamp(T,3)

ans =

0.7z^-1 + 1 + 1.3z

resamp(T,3,1)

ans =

0.9 + 1.2z

resamp(T,3,2)

ans =

0.8 + 1.1z

Polynomials in z are considered as discrete-time signals for t ≤ 0 (anti-causal). To be compatible with the cases described above, their resampling follows
W=1.5*z^5+1.4*z^4+1.3*z^3+1.2*z^2+1.1*z+1

W =

1 + 1.1z + 1.2z^2 + 1.3z^3 + 1.4z^4 + 1.5z^5

resamp(W,3)

ans =

1 + 1.3z

resamp(W,3,1)

ans =

1.2z + 1.5z^2

resamp(W,3,2)

ans =

1.1z + 1.4z^2

Dilating is a process in some sense inverse to resampling. With

g0,g1,g2
g0 =
     1 + 0.7z^-1 + 0.4z^-2
g1 =
     0.9 + 0.6z^-1 + 0.3z^-2
g2 =
     0.8 + 0.5z^-1 + 0.2z^-2

computed before, we have

f0=dilate(g0,3,0)
f0 =
     1 + 0.7z^-3 + 0.4z^-6
f1=dilate(g1,3,1)
f1 =
     0.9z^-1 + 0.6z^-4 + 0.3z^-7
f2=dilate(g2,3,2)
f2 =
     0.8z^-2 + 0.5z^-5 + 0.2z^-8

and

f=f0+f1+f2
f =
     1 + 0.9z^-1 + 0.8z^-2 + 0.7z^-3 + 0.6z^-4 + 0.5z^-5 + 0.4z^-6 + 0.3z^-7 + 0.2z^-8

restores the original f. Note that

g0.h,f.h
ans =
     3
ans =
     1
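Dilation, too, can be mimicked on coefficient lists. A plain-Python sketch (not Toolbox code; `dilate` and `padded_sum` are hypothetical helper names): spread the samples k positions apart, shift by the phase, and observe that summing the three dilated phase components restores the original sequence.

```python
def dilate(coeffs, k, phase=0):
    # place sample i at position phase + k*i, zeros elsewhere
    out = [0.0] * (phase + k * (len(coeffs) - 1) + 1)
    for i, c in enumerate(coeffs):
        out[phase + k * i] = c
    return out

def padded_sum(polys):
    n = max(len(p) for p in polys)
    return [sum(p[i] if i < len(p) else 0.0 for p in polys) for i in range(n)]

g0, g1, g2 = [1, 0.7, 0.4], [0.9, 0.6, 0.3], [0.8, 0.5, 0.2]
f = padded_sum([dilate(g0, 3, 0), dilate(g1, 3, 1), dilate(g2, 3, 2)])
print(f)   # [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
```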


4 The Polynomial Matrix Editor

Introduction

Quick start

The Polynomial Matrix Editor (PME) is recommended for creating and editing

polynomial and standard MATLAB matrices of medium to large size, say from about

4 4 to30 35.

Matrices of smaller size can easily be handled in the MATLAB

command window with the help of monomial functions, overloaded concatenation,

and various applications of subscripting and subassigning. On the other hand,

opening a matrix larger than 30-by-35 in the PME results in a window that is difficult

to read.

Type pme to open the main window called Polynomial Matrix Editor. This window

displays all polynomial matrices (POL objects) and all standard MATLAB matrices (2-dimensional DOUBLE arrays) that exist in the main MATLAB workspace. It also

allows you to create a new polynomial or standard MATLAB matrix. In the Polynomial

Matrix Editor window you can

- create a new polynomial matrix, by typing its name and size in the first (editable) line and then clicking the Open button

- modify an existing polynomial matrix while retaining its size and other properties: to do this, just find the matrix name in the list and then double-click the particular row

- modify an existing polynomial matrix to a larger extent (for instance by changing also its name, size, variable symbol, etc.): to do this, first find the matrix name in the list and then click on the corresponding row to move it up to the editable row. Next type in the new required properties and finally click Open

Each of these actions opens another window called Matrix Pad that serves for editing

the particular matrix.

In the Matrix Pad window the matrix entries are displayed as boxes. If an entry is too

long so that it cannot be completely displayed then the corresponding box takes a

slightly different color (usually more pinkish).


Main window

Main window

buttons

Main window

menus

To edit an entry just click on its box. The box becomes editable and large enough to

display its entire content. You can type into the box anything that complies with the

MATLAB syntax and that results in a scalar polynomial or constant. Of course you can

use existing MATLAB variables, functions, etc. The program is even more intelligent

and handles some notation going beyond the MATLAB syntax. For example, you can

drop the * (times) operator between a coefficient and the related polynomial symbol

(provided that the coefficient comes first). Thus, you can type 2s as well as 2*s.

To complete editing the entry push the Enter key or close the box by using the mouse.

If you have entered an expression that cannot be processed then an error message is

reported and the original box content is recovered.

To bring the newly created or modified matrix into the MATLAB workspace finally

click Save or Save As.

The main Polynomial Matrix Editor window is shown in Fig. 2.

The main window contains the following buttons:

Open Clicking the Open button opens a new Matrix Pad for the matrix specified

by the first (editable) line. At most four matrix pads can be open at the same

time.

Refresh Clicking the Refresh button updates the list of matrices to reflect

changes in the workspace since opening the PME or since it was last refreshed.

Close Clicking the Close button terminates the current PME session.

The main PME window offers the following menus:

Menu List Using the menu List you can set the type of matrices listed in the main

window. They may be polynomial matrices (POL objects) – default – or standard

MATLAB matrices or both at the same time.

Menu Symbol Using the menu Symbol you can choose the symbol (such as s,z,...)

that is by default offered for the 'New' matrix item of the list.


Matrix Pad window

List menu to

control what

matrices are

displayed

Editable row to

create or modify

properties

List of existing

matrices

Button to open

new matrix pad

Symbol menu to

change default

symbol

Properties of

edited matrix

Button to close

PME

Fig. 2. Main window of the Polynomial Matrix Editor

To edit a matrix use one of the ways described above to open a Matrix Pad window for

it. This window is shown in Fig. 3.

The Matrix Pad window consists of boxes for the entries of the polynomial matrix. If

some entry is too long and cannot be completely displayed in the particular box then

its box takes a slightly different color.

To edit an entry you can click on its box. The box pops up, becomes editable and large

enough to display its whole content. You can type into the box anything that complies

with the MATLAB syntax and results in a scalar polynomial or constant. Of course,

you can use all existing MATLAB variables, functions, etc. The editor is even a little

more intelligent: it handles some notation going beyond the MATLAB syntax. For

example, you can drop the * (times) operator between a coefficient and the related

polynomial symbol (provided that the coefficient comes first). Thus, you can type 2s

as well as 2*s. Experiment a little to learn more.


Matrix Pad

buttons

When editing is completed push the Enter key on your keyboard or close the box with the mouse. If you have entered an expression that cannot be processed then an error message appears and the original content of the box is restored.

When the matrix is ready bring it into the MATLAB workspace by clicking either the

Save or the Save As button.

The following buttons control the Matrix Pad window:

Save button: Clicking Save copies the Matrix Pad contents into the MATLAB

workspace.

Save As button: Clicking Save As copies the matrix from Matrix Pad into the

MATLAB workspace under another name.

Browse button: Clicking Browse moves the cursor to the main window (the same

as directly clicking there).

Close Button: Clicking Close closes the Matrix Pad window.



Box with

completely

displayed

contents

Box with

incompletely

displayed

contents

Buttons to save

the matrix

Name of edited

matrix

Button to

activate the

main window

Properties of

edited matrix

Fig. 3. Matrix Pad with an opened editable box

Opened editable

box

Button to close

Matrix Pad


5 Polynomial matrix fractions

Introduction

From one point of view, polynomials behave like integers: they can be added and multiplied but generally not divided. Instead, the concepts of divisibility, greatest common divisors and least common multiples are used.
To be able to divide integers, we create new objects: fractions like 1/2, 2/3, etc. Similarly, polynomial fractions and polynomial matrix fractions are created.

In this chapter, polynomial matrix fractions are introduced and operations on them are described. To satisfy as many needs of control engineers and theorists as possible, four kinds of fractions are implemented in the Polynomial Toolbox: scalar-denominator fractions (sdf), matrix-denominator fractions (mdf), left-denominator fractions (ldf) and right-denominator fractions (rdf). All of them are nothing more than various forms of the same mathematical entities (fractions); each can be converted to any other.
In systems, signals and control, polynomial matrix fractions are typically used to describe rational transfer matrices. For more details, see Chapter 6, LTI Systems.

The following general rule is adopted for all kinds of fractions. Standard MATLAB matrices, polynomials and two-sided polynomials may contain Inf's or NaN's in their entries. Fractions, however, may not. Moreover, the denominators must not be zero or singular, as will be explained later for the individual kinds of polynomial matrix fractions.

Scalar-denominator-fractions

These objects have a matrix numerator and a scalar denominator. They usually arise

as inverses of (square nonsingular) polynomial matrices:

P=[1 s s^2; 1+s s 1-s;0 -1 -s]

P =

1 s s^2

1 + s s 1 - s

0 -1 -s


F=inv(P)

F =

 -1 + s + s^2 0 -s + s^2 + s^3
 -s - s^2 s 1 - s - s^2 - s^3
 1 + s -1 s^2

----------------------------------------

-1 + s + s^2

The numerator and the denominator can be accessed or changed by F.num and F.den
F.num
ans =
 -1 + s + s^2 0 -s + s^2 + s^3
 -s - s^2 s 1 - s - s^2 - s^3
 1 + s -1 s^2
F.den
ans =
 -1 + s + s^2

They are also returned by special functions NUM and DEN:

num(F),den(F)
ans =
 -1 + s + s^2 0 -s + s^2 + s^3
 -s - s^2 s 1 - s - s^2 - s^3
 1 + s -1 s^2
ans =
 -1 + s + s^2

Recall that F.n and F.d are, up to a constant multiple, equal to adj(P) and det(P)
adj(P)
ans =
 1 - s - s^2 0 s - s^2 - s^3
 s + s^2 -s -1 + s + s^2 + s^3
 -1 - s 1 -s^2
det(P)
ans =
 1 - s - s^2

The sdf command

Sdf objects also arise as negative powers of polynomial matrices
Q=[1 1;1-s s]
Q =
 1 1
 1 - s s
Q^-1
ans =
 0.5s -0.5
 -0.5 + 0.5s 0.5
 -------------------
 -0.5 + s
Q^-2
ans =
 0.25 - 0.25s + 0.25s^2 -0.25 - 0.25s
 -0.25 + 0.25s^2 0.5 - 0.25s
 ----------------------------------------
 0.25 - s + s^2
and the like. It can be verified that
Q*Q^-1
ans =
 1 0
 0 1
Q^2*Q^-2
ans =
 1 0
 0 1
and so on.

Scalar-denominator-fractions may also be entered in terms of their numerator

matrices and denominator polynomials. For this purpose the sdf command is

available. Typing


Comparison of

fractions

Coprime

N = [-1+s+s^2 0 -s+s^2+s^3;-s-s^2 s 1-s-s^2-s^3;1+s -1 s^2];

d = -1+s+s^2;

sdf(N,d)

for instance, defines the scalar-denominator-fraction

ans =

-1 + s + s^2 0 -s + s^2 + s^3

-s - s^2 s 1 - s - s^2 - s^3

1 + s -1 s^2

----------------------------------------

-1 + s + s^2

Note that for two fractions to be equal it is not necessary that they have the same numerator and the same denominator. Common polynomial factors may be present:

G=(s-1)*inv(s^2-1)
G =
 -1 + s
 --------
 -1 + s^2
H=inv(1+s)
H =
 1
 -----
 1 + s
G==H
ans =
 1

Every fraction can be converted to the coprime case

K=coprime(G)

K =

1

-----

1 + s


Reduce

In the coprime case, the only common factors are the invertible ones, i.e. the nonzero numbers. So even the coprime form is not unique:

L=K;L.den=2*L.den; L.num=2*L.num
L =
 2
 ------
 2 + 2s
K==L
ans =
 1
To reach uniqueness, we introduce the reduced case, in which the leading coefficient of the denominator is 1:
M=reduce(L)
M =
 1
 -----
 1 + s

For fractions in z^-1, instead of the leading coefficient, the trailing coefficient is used. This is in accord with the needs of systems and control theory.

N=inv(2-3*zi);N.den=N.den*2;N.num=N.num*2

N =

1

---------

2 - 3z^-1

Q=reduce(N)

Q =

0.5

-----------

1 - 1.5z^-1
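Outside MATLAB, the normalization performed by reduce on a scalar fraction can be sketched in a few lines. The following Python fragment is only an illustration of the scaling step (not the toolbox code): it divides numerator and denominator by the denominator's normalizing coefficient so that it becomes 1.

```python
def reduce_fraction(num, den):
    """Normalize a scalar fraction: divide numerator and denominator
    by the denominator's leading coefficient. Coefficient lists run
    from the constant term upward."""
    lead = den[-1]
    return [c / lead for c in num], [c / lead for c in den]

# L = 2/(2 + 2s)  ->  M = 1/(1 + s), as in the example above
num, den = reduce_fraction([2.0], [2.0, 2.0])
print(num, den)  # [1.0] [1.0, 1.0]
```

For fractions in z^-1 the same scaling would be applied with the trailing coefficient instead.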


General

properties

cop, red

Reverse

Now, for all scalar-denominator fractions K that equal each other, the result of reduce(coprime(K)) is unique.

Results of operations with fractions are generally not guaranteed to be coprime and reduced, even if the operands are; it depends on the algorithm used. Sometimes it is convenient to require such cancelling and reducing after every operation. This can be arranged by switching the Polynomial Toolbox into a special mode of operation, achieved by setting the general properties

gprops('cop','red')

The default state is restored by

gprops('ncop','nred')

To display the status (together with all other general properties), type

gprops

Global polynomial properties:

PROPERTY NAME: CURRENT VALUE: AVAILABLE VALUES:

variable symbol s 's','p','z^-1','d','z','q'

zeroing tolerance 1e-008 any real number from 0 to 1

verbose level no 'no', 'yes'

display format nice 'symb', 'symbs', 'nice', 'symbr',

'block'

'coef', 'rcoef', 'rootr', 'rootc'

display order normal 'normal', 'reverse'

coprime flag ncop 'cop', 'ncop'

reduce flag nred 'red', 'nred'

defract flag ndefr 'defr', 'ndefr'

discrete variable disz 'disz', 'disz^-1'

continuous variable conts 'conts', 'contp'

This option should be used carefully as it increases the computation time. The test for coprimeness is influenced by the zeroing tolerance, either globally
tol=1.e-5;
gprops(tol)
or locally
coprime(G,tol);

The same fraction may be expressed both in z and in z^-1. To convert each form to the other, the reverse command is used:

Q=sdf(2-zi,1-.9*zi)
Q =
 2 - z^-1
 -----------
 1 - 0.9z^-1
R=reverse(Q)
R =
 -1 + 2z
 --------
 -0.9 + z
Q==R
ans =
 1

Be careful when using reverse with fractions in variables other than z or z^-1. For a fraction in s, as no polynomials in s^-1 exist, the result is again in s, and Q==R no longer holds:

Q=(2+s)*inv(3+s)
Q =
 2 + s
 -----
 3 + s
R=reverse(Q)
R =
 1 + 2s
 ------
 1 + 3s
Q==R
ans =
 0
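For a scalar fraction, the effect of reverse can be mimicked by reversing coefficient lists: substituting 1/x for the variable and clearing negative powers reverses both numerator and denominator once they are padded to a common degree. The Python sketch below is an illustration of that idea, not the toolbox algorithm.

```python
def reverse_fraction(num, den):
    """Substitute x -> 1/x in num/den: pad both coefficient lists
    (constant term first) to equal length, then reverse them."""
    n = max(len(num), len(den))
    num = num + [0.0] * (n - len(num))
    den = den + [0.0] * (n - len(den))
    return num[::-1], den[::-1]

# Q = (2 + s)/(3 + s)  ->  R = (1 + 2s)/(1 + 3s), as in the example above
print(reverse_fraction([2.0, 1.0], [3.0, 1.0]))
```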


Matrix-denominator-fractions

The next kind of fraction, the matrix-denominator fraction, is a matrix whose individual entries are scalar fractions. These objects have both a matrix numerator and a matrix denominator. They are especially convenient for rational matrices, as one can see the numerators of all rational entries in the numerator matrix and, similarly, all the individual denominators in the denominator matrix.
They usually arise as results of entry-wise (array) division of polynomial matrices:

F=[1+s 2+s; -2+s s-1],G=[1-s 3+s; -3+s 1+s]
F =
 1 + s 2 + s
 -2 + s -1 + s
G =
 1 - s 3 + s
 -3 + s 1 + s
H=F./G
H =
 -1 - s 2 + s
 ------ -----
 -1 + s 3 + s
 -2 + s -1 + s
 ------ ------
 -3 + s 1 + s

They can also arise as entry-wise negative powers of polynomial matrices:
F.^2
ans =
 1 + 2s + s^2 4 + 4s + s^2
 4 - 4s + s^2 1 - 2s + s^2
F.^-1
ans =
 1 1
 ------ ------
 1 + s 2 + s
 1 1
 ------ ------
 -2 + s -1 + s
F.^-2
ans =
 1 1
 ------------ ------------
 1 + 2s + s^2 4 + 4s + s^2
 1 1
 ------------ ------------
 4 - 4s + s^2 1 - 2s + s^2

Coprime, reduce

It can be verified that

F.^2.*F.^-2

ans =

1.0000 1.0000

1.0000 1.0000

The numerator and the denominator can be accessed or changed by F.num and F.den

F.num, F.den

ans =

ans =

1 1

0.5s -1 + s

s -1 + s

1 1 + s

Matrix-denominator fractions may also contain common factors. The factors apply to individual entries

H.num(1,2)=H.num(1,2)*(1+s);

H.den(1,2)=H.den(1,2)*(1+s)


H =

-1 - s 2 + 5s + 4s^2 + s^3

------ -------------------

-1 + s 3 + 7s + 5s^2 + s^3

-2 + s -1 + s

------ ------

-3 + s 1 + s

Coprime mdf has all entries coprime:

K=coprime(H)

K =

-0.71 - 0.71s 0.45 + 0.22s

------------- ------------

-0.71 + 0.71s 0.67 + 0.22s

2.4 - 1.2s -0.71 + 0.71s

---------- -------------

3.6 - 1.2s 0.71 + 0.71s

Reduced mdf has all entries reduced:

M=reduce(K)

M =

-1 - s 2 + s

------ -----

-1 + s 3 + s

-2 + s -1 + s

------ ------

-3 + s 1 + s

Again, for all matrix-denominator fractions H equal to each other, the result of reduce(coprime(H)) is unique.


The mdf

command

Matrix-denominator fractions may also be entered in terms of their numerator and denominator matrices

F = mdf([s+2 s+3], [s+3,s+1])

F =

2 + s 3 + s

----- -----

3 + s 1 + s

Needless to say, all entries of the denominator matrix must be nonzero. All functions described above for sdf work for mdf as well.

Left and right polynomial matrix fractions

Mathematicians sometimes like to describe matrices of fractions (rational matrices)

as fractions of two polynomial matrices. So the 2x3 rational matrix

          s^2     -s       1
         -----   -----   -----
         1 + s   1 + s   1 + s
 R(s) =
           1       1       1
         -----   -----   -----
         1 + s   1 + s   1 + s

can either be described by the polynomial matrices

 Ar(s) = 1 + s    0       0       Br(s) = s^2  -s  1
          0      1 + s    0               1     1  1
          0       0      1 + s

as the so-called right matrix fraction

 R(s) = Br(s) * Ar(s)^-1

or by two other polynomial matrices

 Al(s) = 1   s         Bl(s) = s  0  1
         0   1 + s             1  1  1

as the so-called left matrix fraction

 R(s) = Al(s)^-1 * Bl(s).

Such polynomial matrix fractions are now often used in automatic control and other engineering fields. To handle the right and the left polynomial matrix


Right-denominator fraction

The rdf

command

fractions, the Polynomial Toolbox is equipped with two objects, rdf and ldf. These objects have both a matrix numerator and a matrix denominator (the right one or the left one, respectively).

These objects are usually entered using the standard MATLAB division operator (slash)

Ar=[1+s,0,0;0,1+s,0;0,0,1+s],Br=[s^2,-s,1;1,1,1]
Ar =
 1 + s 0 0
 0 1 + s 0
 0 0 1 + s
Br =
 s^2 -s 1
 1 1 1
R=Br/Ar
R =
 s^2 -s 1 / 1 + s 0 0
 1 1 1 / 0 1 + s 0
 / 0 0 1 + s

Right-denominator-fractions may also be entered in terms of their numerator and

right denominator matrices

F = rdf(Br,Ar)

F =

s^2 -s 1 / 1 + s 0 0

1 1 1 / 0 1 + s 0

/ 0 0 1 + s

Note the order of the input arguments, which come as they appear in the right fraction. Needless to say, the denominator matrix must be nonsingular here, but it may possess zero entries.


Left-denominator fraction

The ldf

command

Coprime,

reduce

These objects are usually entered using the standard MATLAB left division operator (backslash)

Al=[1 s;0 s+1],Bl=[s 0 1; 1 1 1],G=Al\Bl
Al =
 1 s
 0 1 + s
Bl =
 s 0 1
 1 1 1
G =
 1 s \ s 0 1
 0 1 + s \ 1 1 1

Left-denominator-fractions may also be entered in terms of their numerator and left

denominator matrices

G = ldf(Al,Bl)

G =

1 s \ s 0 1

0 1 + s \ 1 1 1

Note the order of the input arguments, which come as they appear in the left fraction. Needless to say, the denominator matrix must be nonsingular here, but it may possess zero entries.

A left-denominator fraction F is considered coprime if the polynomial matrices F.num and F.den are left coprime, i.e. their only common left factors are unimodular matrices. Every ldf can be made coprime

F=[3+s 1; 2 2+s]\[s+1 0;0 s+1]

F =

3 + s 1 \ 1 + s 0

2 2 + s \ 0 1 + s

FF=coprime(F)


FF =

0.25 -0.25 \ 0.25 -0.25

1.2 + 0.24s 0.74 + 0.24s \ 0.21 + 0.24s 0.27 + 0.24s

The reduced form of a left-denominator matrix fraction is a special form where the denominator is in row reduced echelon form; see Chapter 2, Polynomials in s and p, Section Reduced and canonical forms. Recall that, among other properties, the denominator is then row reduced. The command

FFF=reduce(FF)

FFF =

-1 1 \ -1 1

4 + s 0 \ 2 + s -1

The dual concepts for right-denominator fractions are easily constructed.

Mutual conversions of polynomial matrix fraction objects

Polynomial matrix fraction objects can be mutually converted by means of their basic

constructors sdf, mdf, rdf, and ldf. For example,

N = [-1+s 0 -s+s^3;-s-s^2 s 1-s]; d = -1+s+s^2;
Gsdf=sdf(N,d)
Gsdf =
 -1 + s 0 -s + s^3
 -s - s^2 s 1 - s
 --------------------------
 -1 + s + s^2
class(Gsdf)
ans =
sdf


can be converted as follows

Gmdf=mdf(Gsdf)

Gmdf =

-1 + s 0 -s + s^3

------------ ------------ ------------

-1 + s + s^2 -1 + s + s^2 -1 + s + s^2

-s - s^2 s 1 - s

------------ ------------ ------------

-1 + s + s^2 -1 + s + s^2 -1 + s + s^2

class(Gmdf)

ans =

mdf

Grdf=rdf(Gsdf)

Grdf.numerator =

-1 + s 0 -s + s^3

-s - s^2 s 1 - s

Grdf.denominator =

-1 + s + s^2 0 0

0 -1 + s + s^2 0

0 0 -1 + s + s^2

class(Grdf)

ans =

rdf

Gldf=ldf(Gsdf)

Gldf =

-1 + s + s^2 0 \ -1 + s 0 -s + s^3

0 -1 + s + s^2 \ -s - s^2 s 1 - s

class(Gldf)


ans =

ldf

Of course, the real entity behind remains the same, so that the "contents" of all the various expressions are equal
Gsdf==Gmdf
ans =
 1 1 1
 1 1 1
Gmdf==Grdf
ans =
 1 1 1
 1 1 1
Grdf==Gldf
ans =
 1 1 1
 1 1 1

The user should keep in mind, however, that the conversions above may sometimes require nontrivial computation, namely solving a polynomial equation. This is why they may be influenced by the zeroing tolerance. For large data they can be both time-consuming and subject to numerical inaccuracies, so they should be avoided where possible.

Operations with polynomial matrix fractions

Addition,

subtraction and

multiplication

In the Polynomial Toolbox, polynomial matrix fractions are objects for which all standard operations are defined. Operands of different object types may usually be combined in one operation.

Define the polynomial fractions

F=s^-1,G=s/(1+s)


F =
 1
 -
 s
G =
 s / 1 + s

The sum and product follow easily:
F+G
ans =
 1 + s + s^2
 -----------
 s + s^2
G+F
ans =
 1 + s + s^2 / s + s^2
F*G
ans =
 s
 -------
 s + s^2
G*F
ans =
 s / s + s^2

For polynomial matrix fractions
a=1./s;b=1./(s+1);F=[1 a; a+b b],G=[b a;a b]
F =
          1
 1        -
          s
 1 + 2s     1
 -------  -----
 s + s^2  1 + s
G =
   1      1
 -----    -
 1 + s    s
   1      1
   -    -----
   s    1 + s

the operations work as expected
F+G
ans =
 2 + s    2
 -----    -
 1 + s    s
 2 + 3s     2
 -------  -----
 s + s^2  1 + s
F*G
ans =
 1 + s + s^2    2 + s
 -----------   -------
  s^2 + s^3    s + s^2
     2 + 3s        1 + 3s + 3s^2
 --------------  ----------------
 s + 2s^2 + s^3  s^2 + 2s^3 + s^4

Entrywise and matrix division

Entry-wise division of polynomial matrix fractions
T = F./G
results in another fraction
ans =
        s
 1 + s  -
        s
 s + 2s^2  1 + s
 --------  -----
 s + s^2   1 + s

while matrix division yields
F/G
ans =
 0.5s + s^2 - 0.5s^4    0.5s^3 + 0.5s^4
 -------------------    ---------------
     0.5s + s^2           0.5s + s^2
  -0.5s^3 - 0.5s^4    0.5s + 2s^2 + 2s^3 + 0.5s^4
 -------------------  ---------------------------
 0.5s + 1.5s^2 + s^3      0.5s + 1.5s^2 + s^3
F\G
ans =
       -s - 2s^2                    0
 ---------------------  ---------------------
 -s - 3s^2 - s^3 + s^4  -s - 3s^2 - s^3 + s^4
          s^4            -s - 3s^2 - s^3 + s^4
 ---------------------  ----------------------
 -s - 3s^2 - s^3 + s^4   -s - 3s^2 - s^3 + s^4

Concatenation and working with submatrices

All standard MATLAB operations for concatenating matrices or selecting submatrices also apply to polynomial matrix fractions. Typing

W = [F G]

results in

W =

1 1 1

1 - ----- -

s 1 + s s

1 + 2s 1 1 1

------- ----- - -----

s + s^2 1 + s s 1 + s

The last row of W may be selected by typing
w = W(2,:)
w =
 1 + 2s    1     1    1
 -------  -----  -  -----
 s + s^2  1 + s  s  1 + s
Submatrices may be assigned values by similar commands.

The user should be aware that, depending on the fraction type used, even concatenation and submatrix selection may require extensive computing when working with fractions:
L=[s 1; 1 s+1]\[1;1]


L =
 s  1      \ 1
 1  1 + s  \ 1
L(1)
ans =
 -1 + s + s^2 \ s

Coefficients and coefficient matrices

Conjugation and transposition

A coefficient of a fraction is defined as the corresponding coefficient of its Laurent series

>> l=(z-1)^-1

l =

1

------

-1 + z

>> l{-1}

ans =

1

>> l{-2}

ans =

1

>> l{-2:0}

ans =

1 1 0
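The values returned by l{-1}, l{-2} and l{-2:0} can be checked by hand: the Laurent series of 1/(z-1) at infinity is z^-1 + z^-2 + z^-3 + ..., so the coefficient of z^k is 1 for every k <= -1 and 0 otherwise. A small illustrative Python check:

```python
# Laurent series of 1/(z - 1) at z = infinity: z^-1 + z^-2 + z^-3 + ...
def coeff(k):
    """Coefficient of z^k in the series of 1/(z - 1)."""
    return 1 if k <= -1 else 0

print([coeff(k) for k in (-2, -1, 0)])  # [1, 1, 0], matching l{-2:0}
```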

Conjugation and transposition work as expected for fractions

T=[a b;a 0]
T =
 1    1
 -  -----
 s  1 + s
 1
 -    0
 s
T'
ans =
  1    1
 --    --
 -s    -s
   1
 -----  0
 1 - s
T.'
ans =
 1   1
 -   -
 s   s
   1
 -----  0
 1 + s

Values

Fractions may also be considered as functions, namely rational functions. For evaluation, the command value is available.
F=(2+3*s+4*s^2)/(1+s^2)
F =
 2 + 3s + 4s^2 / 1 + s^2
x=1
x =
 1
value(F,x)
ans =
 4.5000

Derivative

In a special case, the argument may be infinite

value(F,Inf)

ans =

4

If the argument is a matrix, a matrix of values is produced where each entry is

evaluated separately.

X=[1 0;0 1]

X =

1 0

0 1

value(F,X)

ans =

4.5000 2.0000

2.0000 4.5000
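The numbers produced by value follow directly from the definition of a rational function: F(1) = (2+3+4)/(1+1) = 9/2, and the value at infinity is the ratio of the leading coefficients when numerator and denominator have equal degree. The helper below is a hypothetical illustration, not the toolbox routine.

```python
from fractions import Fraction

def polyval(coeffs, x):
    """Evaluate a polynomial given by coefficients (constant term first)."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

num, den = [2, 3, 4], [1, 0, 1]   # F = (2 + 3s + 4s^2)/(1 + s^2)
print(Fraction(polyval(num, 1), polyval(den, 1)))  # 9/2, i.e. 4.5000
print(Fraction(num[-1], den[-1]))  # value at s = Inf: ratio of leading coefficients, 4
```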

One can even substitute a matrix for the variable using function mvalue which

is based on matrix powers

mvalue(F,X)

ans =

4.5000 0

0 4.5000

The derivative of a matrix polynomial fraction may be computed, the result being a fraction of the same kind
f=1/s;
g=deriv(f)
g =
 -1 / s^2
f=1\s;
g=deriv(f)
g =
 1
f=s\1;
g=deriv(f)
g =
 s^2 \ -1
f=1./s;
g=deriv(f)
g =
 -1
 ---
 s^2
f=inv(s);
g=deriv(f)
g =
 -1
 ---
 s^2

Composition
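The deriv results above follow from the quotient rule (n/d)' = (n'd - nd')/d^2. As an illustration of that rule on coefficient lists (not the toolbox code), a Python sketch:

```python
def pderiv(p):
    """Derivative of a polynomial (coefficients, constant term first)."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

def pmul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def deriv_fraction(num, den):
    """Quotient rule: (n/d)' = (n'd - n d')/d^2."""
    nd = pmul(pderiv(num), den)
    dn = pmul(num, pderiv(den))
    w = max(len(nd), len(dn))
    nd += [0] * (w - len(nd)); dn += [0] * (w - len(dn))
    return [a - b for a, b in zip(nd, dn)], pmul(den, den)

# f = 1/s  ->  f' = -1/s^2
print(deriv_fraction([1], [0, 1]))  # ([-1, 0], [0, 0, 1])
```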

Fractions, treated as functions, can be composed, i.e., a compound function is created:
F=(1+s+s^2)/(1+s)
F =
 1 + s + s^2 / 1 + s
G=(1+s)/(1-s)
G =
 1 + s / 1 - s
H=F(G)
H =
 -1.5 - 0.5s^2 / -1 + s

Fractions with complex polynomials

Fractions can also have complex coefficients, similarly as polynomials can. The functions isreal, real, imag and conj work as expected. Note, however, that testing whether a fraction is real is more involved here: a complex common factor may in fact be present

f=(2+s)/(3+s);

f.num=f.num*(1+j*s);

f.den=f.den*(1+j*s)

f =

2+0i + (1+2i)s + (0+1i)s^2 / 3+0i + (1+3i)s + (0+1i)s^2

isreal(f)

ans =

1

Of course, it is:
fr=reduce(coprime(f))
fr =
 2 + s / 3 + s
imag(fr)
ans =
 0

Signals and transforms

Fractions as signals

For control engineers, the variable s means the derivative operator d/dt or the variable of the Laplace transform. Polynomials in s are considered as differential operators. Fractions in s can also be considered as operators, but a more concrete interpretation is possible: a fraction F(s) corresponds to a signal f(t) by means of the Laplace transform.
The variable z means the advance shift operator acting on discrete-time signals, and the variable z^-1 means the delay shift operator. Polynomials in z or in z^-1 are considered as recurrent (regressive) operators. A fraction F(z) corresponds to a signal f_t by means of the z-transform.

Properness

Laurent series

Mathematically, a fraction F(s) is proper if it acquires a finite value at the point s = infinity. Specially, when this value is zero, F(s) is called strictly proper.
Properness plays a role in system theory. In the continuous-time case, F(s) corresponds to a regular signal f(t) (i.e. free of Dirac delta functions) if F(s) is strictly proper. In the discrete-time case, F(z) corresponds to a causal signal f_t (i.e. nonzero only for t >= 0) if F(z) is proper. Note that the point z = infinity corresponds to the point z^-1 = 0.

In the Polynomial Toolbox, the tester is
isproper((2+s)/(3+s))
ans =
 1
or
isproper((2+s)/(3+s),'strict')

ans =

0

In the discrete-time case,

isproper(z/(3+z))

ans =

1

isproper(zi/(3+zi))

ans =

1
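For scalar fractions in s, the properness test reduces to comparing degrees. The Python sketch below (a hypothetical helper, not the toolbox implementation) mirrors the calls above:

```python
def is_proper(num, den, strict=False):
    """A scalar fraction in s is proper when deg(num) <= deg(den),
    strictly proper when deg(num) < deg(den). Coefficient lists run
    from the constant term upward."""
    dn, dd = len(num) - 1, len(den) - 1
    return dn < dd if strict else dn <= dd

print(is_proper([2, 1], [3, 1]))                 # (2+s)/(3+s): True
print(is_proper([2, 1], [3, 1], strict=True))    # not strictly proper: False
```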

The discrete-time fraction can be expanded at the properness point z = infinity into the Laurent series

 F(z) = ... + f_{-2}z^2 + f_{-1}z + f_0 + f_1z^-1 + f_2z^-2 + ...

This formula is nothing else than the z-transform. Note that for f_t the direction of growing t is that of descending powers of z.
The Toolbox commands
pformat reverse


F=z^3/(z-.9)

F =

z^3 / z - 0.9

laurent(F,3)
deliver the series up to the power z^-3:
ans =
 z^2 + 0.9z + 0.81 + 0.73z^-1 + 0.66z^-2 + 0.59z^-3
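The displayed coefficients are just the geometric sequence 0.9^k attached to descending powers of z, since z^3/(z - 0.9) = z^2/(1 - 0.9z^-1). A quick illustrative Python check:

```python
# z^3/(z - 0.9) = z^2 * (1 + 0.9/z + 0.81/z^2 + ...), so the
# coefficients of z^2, z, 1, z^-1, z^-2, z^-3 are 0.9**k for k = 0..5.
coeffs = [0.9 ** k for k in range(6)]
print([round(c, 2) for c in coeffs])  # [1.0, 0.9, 0.81, 0.73, 0.66, 0.59]
```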

The continuous-time fraction F(s) can also be expanded, say

 F(s) = ... + F_{-2}s^2 + F_{-1}s + F_0 + F_1s^-1 + F_2s^-2 + ...

The interpretation is, however, different. In the Laplace transform, the corresponding signal f(t) is

 f(t) = F_{-2}delta''(t) + F_{-1}delta'(t) + F_0 delta(t) + (F_1 + F_2 t + F_3 t^2/2 + ...) eta(t)

where eta(t) is the unit-step function and delta(t), delta'(t), delta''(t), ... are the Dirac delta function and its derivatives. For strictly proper F(s), the signal f(t) is free of delta functions. In such a case, the coefficients F_1, F_2, F_3, ... equal f(0), f'(0), f''(0), ...

In the Polynomial Toolbox, however, there are no polynomials in s^-1. So the commands
pformat reverse
F=s^3/(s+1);
[H,Q]=laurent(F,3)
deliver the series up to the power s^-3 in two parts: the polynomial H and the three-dimensional coefficient array Q:
H =
 s^2 - s
Q(:,:,1) =
 1
Q(:,:,2) =
 -1
Q(:,:,3) =
 1
Q(:,:,4) =
 -1
It means

 F(s) = s^2 - s + 1 - s^-1 + s^-2 - s^-3 + ...

Coefficients

Laplace transform

When considering a fraction F(z) as equivalent to its Laurent series f_t, we can extract the individual f_t as coefficients of F(z):
pformat
F=z^3/(z-.9);
laurent(F,3)
ans =
 0.59z^-3 + 0.66z^-2 + 0.73z^-1 + 0.81 + 0.9z + z^2
F{1}
ans =
 0.9000
F{-3:2}
ans =
 0.5905 0.6561 0.7290 0.8100 0.9000 1.0000

For a fraction F(s), the inverse Laplace transform f(t) can be computed, with degree 3 and sampling period 0.5, by the commands

pformat normal

G=laplace(1/(1+s),3,0.5)

G =

1 + 0.61z^-1 + 0.37z^-2 + 0.22z^-3

Note that the sampling period is recorded
G.h
ans =
 0.5000

Norms
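The coefficients returned by laplace can be checked by hand: for F(s) = 1/(1+s) the signal is f(t) = exp(-t), and sampling it at period 0.5 gives the displayed values. An illustrative Python check:

```python
import math

# Samples of f(t) = exp(-t) at t = 0, 0.5, 1.0, 1.5 reproduce
# 1 + 0.61z^-1 + 0.37z^-2 + 0.22z^-3 from laplace(1/(1+s),3,0.5).
samples = [round(math.exp(-0.5 * k), 2) for k in range(4)]
print(samples)  # [1.0, 0.61, 0.37, 0.22]
```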

For a fraction F(s), equivalent to a time signal f(t), the H2-norm may be found

h2norm(1/(s+1))

ans =

0.7071

However, note what may happen:

h2norm(1/(s-1))

Warning: Denominator matrix is unstable. H2 norm is

infinite.

ans =

Inf

h2norm(s/(s-1))

Warning: Matrix fraction is not strictly proper. H2 norm is

infinite.

ans =

Inf

The H-infinity norm is computed by

hinfnorm(1/(s+1))

ans =

1
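The H-infinity norm is the supremum of |F(jw)| over all frequencies w; for F(s) = 1/(s+1) this is 1/sqrt(1 + w^2), maximized at w = 0. A small illustrative Python check over a frequency grid:

```python
import math

# |F(jw)| for F(s) = 1/(s+1) equals 1/sqrt(1 + w^2); its supremum
# over frequencies is attained at w = 0 with value 1.
mags = [1 / math.sqrt(1 + (0.01 * k) ** 2) for k in range(10000)]
print(round(max(mags), 4))  # 1.0
```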

The exceptional cases are
hinfnorm(1/s)
Warning: Denominator has a purely imaginary root.
H-inf norm is infinite.
ans =
 Inf

hinfnorm(s^2/(s+1))

Warning: Fraction is nonproper. H-inf norm is infinite.

ans =


Inf

Sampling, unsampling and resampling

Sampling

This section is devoted to sampling of fractions F(s), corresponding to time signals f(t), and, furthermore, to unsampling and resampling of fractions G(z), corresponding to time signals g_t. For sampling, unsampling and resampling of transfer functions combined with a holding operation (zero-order hold, first-order hold, second-order hold), see Chapter 6, LTI Systems, Section Sampling with holding.

A fraction in s or p corresponds, by means of the Laplace transform, to a continuous-time signal f(t). If F(s) is strictly proper, then f(t) is a regular function, i.e. it does not contain Dirac delta impulses. Such a signal can be sampled with sampling period h: a discrete-time signal g_k = f(kh) is created for integer time k. It corresponds, by means of the z-transform, to a causal fraction G(z). This relation between F(s) and G(z) may be called sampling of fractions with period h. The signals f(t) and g_k are pictured in Fig.

For example

F=1/(s+1);h=2;

G=samp(F,h)

G =

z / -0.14 + z

Check the sampling period of the result


G.h

ans =

2
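The numbers in this example can be reproduced from first principles: each pole s_i of F(s) maps to z_i = exp(s_i*h), so for F(s) = 1/(s+1) and h = 2 the sampled fraction is z/(z - exp(-2)). An illustrative Python check (not the toolbox code):

```python
import math

# Sampling F(s) = 1/(s+1), i.e. f(t) = exp(-t), with period h = 2:
# the pole s = -1 maps to z = exp(-h), so G(z) = z/(z - exp(-2)).
h = 2.0
print(round(math.exp(-h), 4))  # 0.1353, matching z / (-0.14 + z)
# the samples g_k = f(k*h) agree with the Laurent coefficients of G(z)
print([round(math.exp(-k * h), 4) for k in range(5)])
```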

Several values of g_k may be obtained by
laurent(G,4)
ans =
 1 + 0.14z^-1 + 0.018z^-2 + 0.0025z^-3 + 0.00034z^-4
or simply
G{-4:0}
ans =
 0.0003 0.0025 0.0183 0.1353 1.0000

or, in the same order as before,

fliplr(G{-4:0})

ans =

1.0000 0.1353 0.0183 0.0025 0.0003

A more general case is sampling with period h and phase tau, where 0 <= tau < h. The correspondence is g_k = f(kh + tau). This case is pictured in Fig.
The command is
tau=0.2;
GT=samp(F,h,tau)


GT =

0.82z / -0.14 + z

fliplr(GT{-4:0})

ans =

0.8187 0.1108 0.0150 0.0020 0.0003
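For this signal, sampling with phase tau simply scales the samples: g_k = f(kh + tau) = exp(-tau) * exp(-kh), which is where the factor 0.82 in GT comes from. An illustrative Python check:

```python
import math

# Sampling with period h = 2 and phase tau = 0.2: g_k = f(k*h + tau).
# For f(t) = exp(-t) the samples scale by exp(-tau), giving
# GT(z) = 0.82z / (-0.14 + z) above.
h, tau = 2.0, 0.2
print([round(math.exp(-(k * h + tau)), 4) for k in range(5)])
```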

For some problems of analysis and control design, we need G(z) not for one individual tau but for several values of tau. To this aim, the argument tau may be a vector of numbers. The result is then not a single fraction but a cell vector, every cell containing the result for the corresponding tau. So

GG=samp(F,h,0:.2:2)
GG =
 Columns 1 through 7
 [1x1 rdf] [1x1 rdf] [1x1 rdf] [1x1 rdf] [1x1 rdf] [1x1 rdf] [1x1 rdf]
 Columns 8 through 11
 [1x1 rdf] [1x1 rdf] [1x1 rdf] [1x1 rdf]
GG{1:3}
ans =
 z / -0.14 + z
ans =
 0.82z / -0.14 + z
ans =
 0.67z / -0.14 + z

All the GG's can be plotted in one picture, each at its correct times. Here for degree 2:


Result of

sampling

Compare it with the picture of the original

Note that for nonzero tau, the requirement for F(s) to be strictly proper (i.e. for f(t) to be free of delta impulses) is no longer necessary. The delta impulses sitting at the point t = 0 do not influence the sampling process: the sampling takes place at points other than t = 0.

Let us investigate the correspondence between the fraction F(s) to be sampled and the result G(z). Every pole s_i of F(s) corresponds to a pole z_i = exp(s_i*h). As the exponential function of a complex variable is periodic with period 2*pi*j, poles s_i, s_l whose difference is s_i - s_l = j*n*omega, where omega = 2*pi/h is the sampling frequency, lead to the same result z_i = z_l.

Furthermore, the exponential function never yields zero: z_i is never 0. So the mapping s_i -> z_i maps the strip -omega/2 < Im(s_i) <= omega/2 onto the whole plane z_i, with the exception of the point z_i = 0. The mapping is one-to-one in the interior of the strip, but the border points s_i, s_l with the same real parts and with the imaginary parts -omega/2, omega/2 lead to the same z_i = z_l, real negative. E.g. sampling f(t) = cos(t) with the period pi yields g_k = (-1)^k.
CC=samp(s/(s^2+1),pi)
CC =
 z / 1 + z
laurent(CC,5)
ans =
 1 - z^-1 + z^-2 - z^-3 + z^-4 - z^-5
Here the same root appeared also in the numerator and cancelling took place.

Unsampling
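The aliasing of the two border poles can be checked numerically; this Python fragment is an illustration of the pole mapping only:

```python
import cmath

# The poles of s/(s^2 + 1) are +j and -j. With h = pi both map to
# exp(j*pi) = exp(-j*pi) = -1: a real negative pole on the border of
# the strip, so the two continuous-time poles alias to a single one.
h = cmath.pi
z1 = cmath.exp(1j * h)
z2 = cmath.exp(-1j * h)
print(round(z1.real, 6), round(z2.real, 6))  # -1.0 -1.0
```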

The operation inverse to sampling is unsampling. The function G(z) to be unsampled must not have a pole z_i = 0. Moreover, the point z_i = 0 must be a zero point of G(z), as can be seen in the above example G = z/(z-0.14). Furthermore, the pole z_i = infinity is also excluded: G(z) would not be causal. For G(z) satisfying these requirements, e.g. in our above examples, the unsampling goes

unsamp(G,2)

ans =

1 / 1 + s

unsamp(GT,2,0.2)

ans =

1 / 1 + s

In these commands, the sampling period argument may be omitted, the default being

the internally recorded G.h:

unsamp(G)

ans =

1 / 1 + s


Case of

complex result

Case of zero

sampling period

The sampling phase argument, however, when nonzero, cannot be omitted, as this

information is not internally recorded.

unsamp(GT,[],0.2)

ans =

1 / 1 + s

For G(z) with real coefficients, the result F(s) of unsampling is usually also real. The exception is G(z) having a real negative pole. In such a case, the only corresponding real F(s) is of higher degree. The Polynomial Toolbox, however, looks for a result of the same degree; the only such result is complex. In our above example

CC=samp(s/(s^2+1),pi)

CC =

z / 1 + z

unsamp(CC,pi)
Warning: Principal matrix logarithm is not defined for A with
nonpositive real eigenvalues. A non-principal matrix
logarithm is returned.
ans =
 1 / 0-1i + s

The corresponding continuous-time result here is not real.

In the samp command, a zero sampling period is also allowed. However, this case is degenerate: for any pole s_i, the resulting z_i is always 1. In the sampling process, all information about the original function is lost, barring the gain and the relative degree

K=samp(2/(s+1),0)

K =

2z / -1 + z

K=samp(2/s,0)

K =

2z / -1 + z


Continuous-time and discrete-time indeterminate variable

Change sampling period and phase

In the unsamp command, only exceptional cases of G(z) are allowed for zero sampling period

unsamp(K,0)

ans =

2 / s

unsamp(z/(z-0.9),0)

??? Error using ==> frac.unsamp at 174

Invalid 1st argument for zero sampling period.

By default, the indeterminate variable for discrete-time fractions is z and that for

continuous-time fractions is s. This is used by commands samp and unsamp. The

variable can be changed globally by

gprops('diszi','contp')

Gzi=samp(F,h)

Gzi =

1 / 1 - 0.14z^-1

Fp=unsamp(Gzi,h)

Fp =

1 / 1 + p

gprops('disz','conts')

or locally by

samp(F,h,zi)

ans =

1 / 1 - 0.14z^-1

Change sampling period and phase

When G(z) satisfies the requirement for unsampling, we can consider unsampling
followed by another sampling, with a different sampling period and phase. Such a
process can be performed in one go by the chsamp command:

F=1/(s+1);h=2;tau=0.2;


Resampling

GT=samp(F,h,tau)

GT =

0.82z / -0.14 + z

GT=samp(F,h,tau),h,tau

GT =

0.82z / -0.14 + z

h =

2

tau =

0.2000

newh=1;newtau=0.5;

HT=chsamp(GT,h,newh,tau, newtau)

HT =

0.61z / -0.37 + z

Check the conversion backwards

KT=chsamp(HT,newh,h,newtau,tau)

KT =

0.82z / -0.14 + z

Check also

FT=unsamp(HT,newh,newtau)

FT =

1 / 1 + s

Another process is resampling. Every discrete-time fraction G(z) can be resampled
with a given ratio and phase, irrespective of whether G(z) is a result of sampling of some
continuous-time F(s) or not. The meaning is the same as for resampling of discrete-time
polynomials. E.g.

F=z/(z-0.9);

fliplr(F{-8:0})

ans =


Columns 1 through 7

1.0000 0.9000 0.8100 0.7290 0.6561 0.5905 0.5314

Columns 8 through 9

0.4783 0.4305

G=resamp(F,3)

G =

z / -0.73 + z

G0=resamp(F,3,0)

G0 =

z / -0.73 + z

fliplr(G0{-2:0})

ans =

1.0000 0.7290 0.5314

G1=resamp(F,3,1)

G1 =

0.9z / -0.73 + z

fliplr(G1{-2:0})

ans =

0.9000 0.6561 0.4783

G2=resamp(F,3,2)

G2 =

0.81z / -0.73 + z

fliplr(G2{-2:0})

ans =

0.8100 0.5905 0.4305
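The effect of resamp can be reproduced directly on the impulse-response coefficients: for F = z/(z - 0.9) they form the geometric sequence 0.9^t, and resampling with ratio 3 and phase k keeps the subsequence 0.9^k (0.9^3)^t, which is exactly the fraction G_k displayed above. A plain-Python sketch:

```python
# impulse-response coefficients of z/(z - 0.9): 1, 0.9, 0.81, ...
f = [0.9**t for t in range(9)]

def resample(seq, ratio, phase=0):
    # keep every `ratio`-th sample, starting at index `phase`
    return seq[phase::ratio]

g0 = resample(f, 3, 0)   # 1, 0.729, ...      -> pole 0.9**3 = 0.729, leading coefficient 1
g1 = resample(f, 3, 1)   # 0.9, 0.6561, ...   -> same pole, leading coefficient 0.9
g2 = resample(f, 3, 2)   # 0.81, 0.59049, ... -> leading coefficient 0.81
```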


6 LTI Systems

Introduction

Linear time-invariant (LTI) systems are a very important class of models to be used

in automatic control and related fields. Even though the "real world" is without doubt

thoroughly nonlinear, linear models provide an extraordinarily useful tool for study

of dynamical systems.

In this Chapter, both continuous-time and discrete-time LTI systems will be treated.

Continuous-time state space equations

A very well known model for an LTI system is of course the familiar state-space
description, which for continuous-time systems takes the form

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

with u the input, y the output and x the state variable, and where the time t is

continuous variable. All these signals may be vector-valued and A, B, C and D are

constant matrices of appropriate dimensions.

The state space equation can be solved by means of the Laplace transform. Assuming
zero initial conditions, we obtain

s x̂(s) = A x̂(s) + B û(s)
ŷ(s) = C x̂(s) + D û(s)

The circumflex denotes the Laplace transform. Solving for the output we have

ŷ(s) = [C (sI - A)^-1 B + D] û(s)

The transfer function

H(s) = C (sI - A)^-1 B + D

is rational in s, i.e. it is a matrix polynomial fraction. We can see it to be proper, i.e.
H(∞) = D, a finite value.
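The formula H(s) = C(sI - A)^-1 B + D is easy to check numerically by evaluating both sides at a test point. The sketch below (plain Python with numpy, using the illustrative data A = [-2 1; 2 -3], B = I, C = [1 1], the same example matrices that appear later in this chapter) compares the evaluated matrix with the closed-form entries (s+5)/(s^2+5s+4) and (s+3)/(s^2+5s+4):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [2.0, -3.0]])
B = np.eye(2)
C = np.array([[1.0, 1.0]])
D = np.zeros((1, 2))

def H(s):
    # evaluate the transfer matrix H(s) = C (sI - A)^-1 B + D at the point s
    return C @ np.linalg.solve(s * np.eye(2) - A, B) + D

s = 1.0
val = H(s)
expected = np.array([[s + 5.0, s + 3.0]]) / (s**2 + 5.0*s + 4.0)
```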

In the time domain, the corresponding function

h(t) = C e^(At) B σ(t) + D δ(t)

is called the impulse response or impulse characteristics. Here σ(t) is the unit-step
function and δ(t) is the Dirac delta function. In the special case of D = 0, the transfer
function H(s) is strictly proper and the impulse characteristics is free of delta
functions.

Generalized state space equations

The above state space equations are not the most general ones for linear time-invariant
systems. For example, the simple case of the differentiator y(t) = u'(t) does not fit
into this scheme. Generalized equations are

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D0 u(t) + D1 u'(t) + ... + Dn u^(n)(t)

Here the Laplace transform yields the transfer function

H(s) = C (sI - A)^-1 B + D(s)

with the matrix polynomial D(s) = D0 + D1 s + ... + Dn s^n. It is a rational function
but generally not proper.

The impulse characteristics

h(t) = C e^(At) B σ(t) + D0 δ(t) + D1 δ'(t) + ... + Dn δ^(n)(t)

contains Dirac delta functions of higher order.

Descriptor systems

The most general LTI systems, so-called descriptor systems, are described by the
generalized state space equations

E x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

with the matrix E not necessarily square or nonsingular. The Laplace transform yields

sE x̂(s) = A x̂(s) + B û(s)

In the case of a non-invertible matrix sE - A, the system may have (for given initial
conditions) a non-unique solution or may have no solution at all. Such systems, of
course, have no transfer function.

When sE - A is invertible (except at a finite number of points), the
transfer function reads

H(s) = C (sE - A)^-1 B + D

It is rational but generally not proper. So this class of systems is the same as the
above one described by the generalized state space equations.

Input-output equations, transfer function fractions

Matrix

denominator

fraction

In physical or technical applications, writing down the equations of the system, we
usually do not obtain the above state space equations directly. Instead, we have a set
of differential equations (of order higher than one) and of algebraic equations. Here
the output variable y(t), the input variable u(t) and "intermediary" variables ξ(t) are
present. Solving the equations by means of the Laplace transform and eliminating ξ̂(s),
we arrive at

ŷ(s) = H(s) û(s)

where H(s) is a rational transfer function.

Depending on the physical nature of the system, and on our needs, the transfer

function may have, or may be convertible to, various forms.

In most occasions, the matrix transfer function between the vector-valued input û(s)
and the vector-valued output ŷ(s),

[ ŷ1(s) ]   [ H11(s) ... H1m(s) ] [ û1(s) ]
[  ...  ] = [   ...        ...  ] [  ...  ]
[ ŷp(s) ]   [ Hp1(s) ... Hpm(s) ] [ ûm(s) ]

is a matrix whose entries are transfer functions between individual inputs and
outputs. They have individual numerators and denominators:

       [ N11(s)/D11(s) ... N1m(s)/D1m(s) ]
H(s) = [      ...               ...      ]
       [ Np1(s)/Dp1(s) ... Npm(s)/Dpm(s) ]

The Polynomial Toolbox supports this form as the matrix-denominator fraction (mdf).


Scalar

denominator

fraction

Left

denominator

fraction

Right

denominator

fraction

Sometimes it is useful to convert the denominator of all entries to a common one,

usually equal to the characteristic polynomial of the system. Such form

N11( s) N1m ( s)




Np1( s) N pm ( s)


H() s


Ds ()

is supported by the Polynomial Toolbox as scalar-denominator fraction (sdf).

The next form of matrix transfer function arises when we write the equation of the

system between the input us ˆ( ) and the output ys ˆ( ) , all intermediate variable

eliminated. In the scalar case

or

D y( t) D y( t) D y ( t) N u( t) N u( t) N u ( t)

( n) ( n)

0 1 n 0 1

p


D d dt y( t) N d dt u( t)

with polynomials D, N in the operator d dt . In the general case with vector-valued

input usand ˆ( ) ys ˆ( ) , we have polynomial matrices D, N with D square nonsingular.

Solving the equation by Laplace transform, we obtain matrix transfer function

where N( s), D( s) have matrix coefficients

H s D s G s

1

( ) ( ) ( )

N() s N N s N s

0 1

D() s D D s D s

0 1

This form is called left-denominator-fraction (ldf).

Right denominator fraction

The system equation between the input u(t), the output y(t) and the intermediate
variable ξ(t) may be written in the form

D0 ξ(t) + D1 ξ'(t) + ... + Dn ξ^(n)(t) = u(t)
y(t) = N0 ξ(t) + N1 ξ'(t) + ... + Nn ξ^(n)(t)

or

D(d/dt) ξ(t) = u(t)
y(t) = N(d/dt) ξ(t)

The transfer function between û(s) and ŷ(s) is

H(s) = N(s) D^-1(s)

Generally, N(s) and D(s) are matrix polynomials with D(s) square nonsingular.
This form is called the right-denominator fraction (rdf).

The ldf and rdf forms of matrix transfer functions are especially important for

the design of control by means of matrix polynomial equations.

State space and fraction conversions

State space to

fractions

The state space model A, B, C, D can be converted to a fraction. The transfer function

can in fact be computed simply by commands mdf, sdf, ldf, rdf:

A=[-2 1;2 -3];B=[1 0; 0 1];C=[1 1];

gprops red

F=mdf(A,B,C)

F =

5 + s 3 + s

------------ ------------

4 + 5s + s^2 4 + 5s + s^2

G=sdf(A,B,C)

G =

5 + s 3 + s

---------------

4 + 5s + s^2

H=ldf(A,B,C)

H =

4 + 5s + s^2 \ 5 + s 3 + s

In the left-denominator fraction H, the numerator and the denominator are left
coprime. The denominator is row reduced and its row degrees are the observability
indices of the state space system.

K=rdf(A,B,C)

K =

1 1 / 2 + s -1
/ -2 3 + s

In the right-denominator fraction K, the numerator and the denominator are right
coprime. The denominator is column reduced and its column degrees are the
controllability indices of the state space system.

In these examples, argument D was omitted, the default zero having been used. With

an explicit D, the functions work similarly

D=[1 0];

F2=mdf(A,B,C,D)

F2 =

9 + 6s + s^2 3 + s

------------ ------------

4 + 5s + s^2 4 + 5s + s^2

G2=sdf(A,B,C,D)

G2 =

9 + 6s + s^2 3 + s
----------------------
4 + 5s + s^2

H2=ldf(A,B,C,D)

H2 =

4 + 5s + s^2 \ 9 + 6s + s^2 3 + s

K2=rdf(A,B,C,D)

K2 =

3 + s 0 / 2 + s -1

/ -2 3 + s

Fraction to state space

The backward conversion from fraction to state space is performed by the command
abcd:

[a,b,c,d]=abcd(F)

a =

-5.0000 1.0000
-4.0000 0

b =

1.0000 1.0000
5.0000 3.0000

c =

1 0

d =

1.0000 0

[a,b,c,d]=abcd(K)

a =

-2.0000 1.0000
2.0000 -3.0000

b =

1 0
0 1

c =

1.0000 1.0000

d =

0 0

We should not be surprised that, in some of these examples, a, b, c, d are not equal to

A, B, C, D. Nevertheless, both these quadruples are equivalent: there exists a

nonsingular matrix T, transforming one of them to the other:

a = T^-1 A T,  b = T^-1 B,  c = C T,  d = D

Such equivalent systems define the same set of input-output pairs u, y.

The state space representation and input-output representation (the transfer

function matrix) are equivalent in the above sense of input output pairs only if the

state space system is controllable and observable.

Example of uncontrollable system

We consider the state space equation

x'(t) = x(t)
y(t) = x(t) + u(t)

Obviously, the system is observable but not controllable. We enter its data as

A=1;B=0;C=1;D=1;

Conversion to a left-denominator fraction representation yields



pformat symbs

H=ldf(A,B,C,D)

H =

1 \ 1 reduced, proper

The polynomials H.num and H.den are obviously coprime. The input-output model
corresponding to H is

y(t) = u(t)

This model is not equivalent to the original state space system because its output
shows no trace of the uncontrollable but observable mode corresponding to the pole
1. We may obtain the correct model by computing

gprops ncop

H=C*((s-A)\B)+D

H =

-1 + s \ -1 + s

We thus have the correct input-output representation

dy(t)/dt - y(t) = du(t)/dt - u(t)

Example of unobservable system

The non-coprime right-denominator fraction

gprops ncop
K=(1+s)/(1+s)

K =

1 + s / 1 + s

is converted to state space by the command

[A,B,C,D]=abcd(K)

which yields the unobservable but controllable system

A =

-1.0000

B =

1

C =

0

D =

1

Nonproper fractions

Nonproper fractions are also handled; the abcd function delivers the generalized
state space model with a polynomial d:

H=(1+s^2)/(2+s)

H =

1 + s^2 / 2 + s

[a,b,c,d]=abcd(H)

a =

-2.0000

b =

1

c =

5.0000

d =

-2 + s

mdf(a,b,c,d)

ans =

1 + s^2

-------

2 + s

Another possibility for a nonproper fraction is the function abcde delivering the

descriptor system.

[A,B,C,D,E]=abcde(H)

A =

-2.0000 0 0
0 1.0000 0
0 0 1.0000

B =

1
0
1

C =

5.0000 -1.0000 0

D =

-2.0000

E =

1 0 0
0 0 1
0 0 0

K=mdf(A,B,C,D,E)

K =

1 + s^2
-------
2 + s

Generalized state space systems and descriptor systems

The generalized state space systems and the descriptor systems can also be converted
into each other by the commands abcd and abcde. In our example

[A,B,C,D,E]=abcde(a,b,c,d)

A =

-2.0000 0 0
0 1.0000 0
0 0 1.0000

B =

1
0
1

C =

5.0000 -1.0000 0

D =

-2.0000

E =

1 0 0
0 0 1
0 0 0

[a,b,c,d]=abcd(A,B,C,D,E)

a =

-2.0000

b =

-1

c =

-5.0000

d =

-2 + s

Discrete-time state space equations


In the discrete-time case, the state space equations are

x_{t+1} = A x_t + B u_t
y_t = C x_t + D u_t

where now t assumes integer values. The solution by means of the z-transform for zero
initial conditions,

z x̂(z) = A x̂(z) + B û(z)
ŷ(z) = C x̂(z) + D û(z)

yields the transfer function

H(z) = C (zI - A)^-1 B + D

or

H(z^-1) = C z^-1 (I - A z^-1)^-1 B + D

It is proper, i.e. H(∞) = D, a finite value.

In the time domain, properness corresponds to causality: the impulse
characteristics h_t is zero for t < 0. The formula for h_t is

h_t = C A^(t-1) B σ_{t-1} + D δ_t

where σ_t is the unit-step sequence

σ_t = 1 for t >= 0
    = 0 for t < 0

while δ_t is the unit-impulse sequence

δ_t = 1 for t = 0
    = 0 for t ≠ 0

Generalized state space equations

The generalized state space equations are

x_{t+1} = A x_t + B u_t
y_t = C x_t + D0 u_t + D1 u_{t+1} + ... + Dn u_{t+n}

The z-transform transfer function is

H(z) = C (zI - A)^-1 B + D(z)

with the matrix polynomial D(z). The function H(z) is in general not proper. The
impulse characteristics

h_t = C A^(t-1) B σ_{t-1} + D0 δ_t + D1 δ_{t+1} + ... + Dn δ_{t+n}

is noncausal: it is nonzero for a finite number of negative times t.
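The causal part of the formula, h_0 = D and h_t = C A^(t-1) B for t >= 1, can be checked on the scalar fraction z/(z - 0.9) used earlier: one (assumed, minimal) realization is a = 0.9, b = 1, c = 0.9, d = 1, since z/(z - 0.9) = 1 + 0.9/(z - 0.9), and the resulting impulse sequence is the geometric sequence 0.9^t:

```python
# minimal realization of H(z) = z/(z - 0.9) = 1 + 0.9/(z - 0.9)
a, b, c, d = 0.9, 1.0, 0.9, 1.0

def impulse(n):
    # h_0 = d, h_t = c * a**(t-1) * b for t >= 1
    return [d] + [c * a**(t - 1) * b for t in range(1, n)]

h = impulse(5)   # 1, 0.9, 0.81, 0.729, 0.6561, i.e. 0.9**t
```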

Descriptor systems

The discrete-time descriptor systems are

E x_{t+1} = A x_t + B u_t
y_t = C x_t + D u_t

with E possibly singular. In the case of invertible zE - A, the transfer function is

H(z) = C (zE - A)^-1 B + D

or

H(z^-1) = C z^-1 (E - A z^-1)^-1 B + D

and may be noncausal, as in the above case with polynomial D(z).

Discrete-time input-output equations, discrete-time transfer function fractions


The input-output equations for discrete-time systems are similar to those for
continuous-time ones. Instead of differential equations, we have recurrent (or
regressive) equations. They may be written either in advanced values of the signals
u_t, y_t, or in delayed ones. The equations leading to left-denominator fractions are

D0 y_t + D1 y_{t+1} + ... + Dn y_{t+n} = N0 u_t + N1 u_{t+1} + ... + Nr u_{t+r}

or

D(z) y_t = N(z) u_t

with polynomial matrices D, N in the advance operator z. The transfer function is

H(z) = D^-1(z) N(z)

Alternatively,

d0 y_t + d1 y_{t-1} + ... + dn y_{t-n} = n0 u_t + n1 u_{t-1} + ... + nr u_{t-r}

or

d(z^-1) y_t = n(z^-1) u_t

with polynomial matrices d, n in the delay operator z^-1. The transfer function is

h(z^-1) = d^-1(z^-1) n(z^-1)

In the Polynomial Toolbox, fractions H and h are treated as equal

h==H

ans =

1

and converted each to other by

h=reverse(H);

H=reverse(h);
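The equality H == h can be understood pointwise: the advance form H(z) and the delay form h(z^-1) take the same value whenever the arguments are matched as z and z^-1. A plain-Python spot check for H(z) = z/(z - 0.9):

```python
def H_adv(z):
    # advance-operator form z/(z - 0.9)
    return z / (z - 0.9)

def h_del(zi):
    # delay-operator form 1/(1 - 0.9*zi), where zi stands for z**-1
    return 1.0 / (1.0 - 0.9 * zi)

z = 2.0
lhs = H_adv(z)
rhs = h_del(1.0 / z)   # both equal 2/1.1
```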

In the conversion between state space form and fractions, the indeterminate variable

may be set globally or locally

A=[-2 1;2 -3];B=[1 0; 0 1];C=[1 1];

gprops red z

F=mdf(A,B,C)

F =

5 + z 3 + z

------------ ------------

4 + 5z + z^2 4 + 5z + z^2

G=sdf(A,B,C,'z')

G =

5 + z 3 + z


---------------

4 + 5z + z^2

H=sdf(A,B,C,'zi')

H =

z^-1 + 5z^-2 z^-1 + 3z^-2

-----------------------------

1 + 5z^-1 + 4z^-2

K=sdf(A,B,C,'z^-1')

K =

z^-1 + 5z^-2 z^-1 + 3z^-2

-----------------------------

1 + 5z^-1 + 4z^-2

In the backward conversion abcd from fraction to state space form, the function
returns matrices A, B, C, D such that, in the case of a z-fraction,
C (zI - A)^-1 B + D is equal to F(z),
or, in the case of a z^-1-fraction,
C z^-1 (I - A z^-1)^-1 B + D is equal to G(z^-1).

Fractions and Control System Toolbox LTI objects
C zI A B is equal to

The Control System Toolbox for Matlab recognizes LTI objects in four different
formats: state space, descriptor system, transfer function and zero-pole-gain.

Overloaded versions of the commands ss, dss, tf and zpk are available in

Polynomial Toolbox to create Control System Toolbox objects in corresponding

formats from polynomial matrix fractions. This, of course, requires presence of

Control System Toolbox.

For example,

Nl=[s^2 (2+s)*(1+s)]

Nl =

s^2 2 + 3s + s^2

Dl=s^2*(1+s)

Dl =

s^2 + s^3

Conversion to ss

Fl=Dl\Nl

Fl =

s^2 + s^3 \ s^2 2 + 3s + s^2

can be converted to an ss object by typing

sys1=ss(Fl)

a =
x1 x2 x3
x1 -1 1 0
x2 0 0 1
x3 0 0 0

b =
u1 u2
x1 1 1
x2 0 3
x3 0 2

c =
x1 x2 x3
y1 1 0 0

d =
u1 u2
y1 0 0

Continuous-time model.

The same result could have been obtained by commands

[A,B,C,D]=abcd(Fl);

sys1=ss(A,B,C,D);


Here A, B, C and D are numerical matrices.

Conversion to dss

sys2=dss(Fl)

a =
x1 x2 x3
x1 -1 1 0
x2 0 0 1
x3 0 0 0

b =
u1 u2
x1 1 1
x2 0 3
x3 0 2

c =
x1 x2 x3
y1 1 0 0

d =
u1 u2
y1 0 0

e =
x1 x2 x3
x1 1 0 0
x2 0 1 0
x3 0 0 1

Continuous-time model.

The same result could have been obtained by typing

[A,B,C,D,E]=abcde(Fl);
sys2=dss(A,B,C,D,E);

Conversion to tf

sys3=tf(Fl)

Transfer function from input 1 to output:

1

-----

s + 1

Transfer function from input 2 to output:

s + 2

-----

s^2

Conversion to zpk

sys4=zpk(Fl)

Zero/pole/gain from input 1 to output:

1

-----

(s+1)

Zero/pole/gain from input 2 to output:

(s+2)

-----

s^2

Converting nonproper fractions

Nonproper fractions do not correspond to a state space system. Hence in the ss
command, the input fraction must be proper:

ss(s^2/(1+s))

??? Error using ==> frac.ss at 50

Fraction is not proper.

The other commands, dss, tf, and zpk handle nonproper fractions quite well


dss(s^2/(1+s))

a =
x1 x2 x3
x1 -1 0 0
x2 0 1 0
x3 0 0 1

b =
u1
x1 1
x2 0
x3 1

c =
x1 x2 x3
y1 1 -1 0

d =
u1
y1 -1

e =
x1 x2 x3
x1 1 0 0
x2 0 0 1
x3 0 0 0

Continuous-time model.

tf(s^2/(1+s))

Transfer function:

s^2


-----
s + 1

zpk(s^2/(1+s))

Zero/pole/gain:

s^2

-----

(s+1)

Conversion from LTI objects to fractions

Conversely, any Control System Toolbox object can be converted into a polynomial
matrix fraction by the Polynomial Toolbox commands sdf, mdf, ldf or rdf.

Fm=mdf(sys1)

Fm =

s^2 2 + 3s + s^2

--------- ------------

s^2 + s^3 s^2 + s^3

Fm=reduce(coprime(Fm))

Fm =

1 2 + s

----- -----

1 + s s^2

The same result can be obtained from sys2, sys3 or sys4.

The original Fl can be recovered

or

Fl=ldf(sys1)

Fl =

s^2 + s^3 \ s^2 2 + 3s + s^2

Fl=ldf(sys2)

Fl =

s^2 + s^3 \ s^2 2 + 3s + s^2



Similarly

Fr=rdf(sys1)

Fr =

0 1 / 1.8 + 2.7s + 0.9s^2 1 + s
/ -0.9s^2 0

Fr=reduce(Fr)

Fr =

1 2 + s / 1 + s 0
/ 0 s^2

Fr=rdf(sys3)

Fr =

0.58 0.82 + 0.41s / 0.58 + 0.58s 0
/ 0 0.41s^2

Fr=reduce(Fr)

Fr =

1 2 + s / 1 + s 0
/ 0 s^2

Logical relations with different objects

Logical relations work well even with different objects. Two objects are considered
equal if they represent "the same system." So it is

Fm==sys1

ans =

1 1

Fm==sys2

ans =

1 1

Fm==sys3

ans =

1 1

Fm==sys4

ans =

1 1

It is, of course, also

Fl==sys1

ans =

1 1

Fr==sys1

ans =

1 1

and so on.

Arithmetical operations with different objects

In arithmetical operations, Polynomial Toolbox objects and Control System Toolbox

objects can be freely mixed, the necessary conversions being performed automatically

G=[3 4]./s + sys1

G =

3s + 4s^2 2 + 7s + 5s^2

--------- -------------

s^2 + s^3 s^2 + s^3

G=reduce(coprime(G))

G =

3 + 4s 2 + 5s

------- ------

s + s^2 s^2

The class of the result is determined by the class of the first operand; in our case, it is

mdf. If we add the same operands in the reversed order, the class of the result is ss:

sys5=sys1 + [3 4]./s

a =
x1 x2 x3 x4
x1 -1 1 0 0
x2 0 0 1 0
x3 0 0 0 0
x4 0 0 0 0

b =
u1 u2
x1 1 1
x2 0 3
x3 0 2
x4 3 4

c =
x1 x2 x3 x4
y1 1 0 0 1

d =
u1 u2
y1 0 0

Continuous-time model.

Command

G2=mdf(sys5)

G2 =

3s + 4s^2 2 + 7s + 5s^2

--------- -------------

s^2 + s^3 s^2 + s^3

verifies correctness of the claim.

When we need the result in another class than the default one, we can explicitly
apply the function which creates the desired class:

mdf(sys1 + [3 4]./s)

ans =

3s + 4s^2 2 + 7s + 5s^2

--------- -------------

s^2 + s^3 s^2 + s^3


Fractions and Symbolic Math Toolbox objects

sym

Matrix polynomial and fraction objects of the Polynomial Toolbox can be converted by

sym commands to objects of the Symbolic Math Toolbox:

Fm=[1./(1+s) (2+s)./s^2]

Fm =

1 2 + s

----- -----

1 + s s^2

Fs=sym(Fm)

Fs =

[ 1/(s + 1), (s + 2)/s^2]

class(Fs)

ans =

sym

This command, of course, requires the presence of Symbolic Math Toolbox.

Conversion from symbolic objects to fractions

A backward conversion is also possible

Fn=mdf(Fs)

Fn =

1 2 + s

----- -----

1 + s s^2

Of course, only those symbolic objects can be converted to Polynomial Toolbox objects
which are rational in the recognized variables:

syms x;

H=1/(1+x)

H =

1/(x + 1)

rdf(H)

??? Error using ==> rdf.rdf at 274

Invalid variable symbol in symbolic expression


Logical and

arithmetical

operations

In logical and arithmetical operations, Polynomial Toolbox objects and Symbolic Math
Toolbox objects can be freely mixed, the necessary conversions being performed
automatically. So, for the comparison of the results of the above example:

Fm=[1./(1+s) (2+s)./s^2];

Fs=sym(Fm);

Fs==Fm

ans =

1 1

The class of the result of an arithmetic operation is determined by the class of the

first operand.

Gm=[3 4]./s + Fs

Gm =

3 + 4s 2 + 5s

------- ------

s + s^2 s^2

class(Gm)

ans =

mdf

Adding the same operands in reversed order results in

Gs=Fs+[3 4]./s

Gs =

[ 1/(s + 1) + 3/s, (s + 2)/s^2 + 4/s]

class(Gs)

ans =

sym

simplify(Gs)

ans =

[ 1/(s + 1) + 3/s, (5*s + 2)/s^2]

Sampling with holding

When a fraction F(s) is interpreted as a signal f(t), it can be sampled, as described
above in Chapter 5, Polynomial matrix fractions, Section Sampling.


Zero order holding

However, when fractions are interpreted as transfer functions, the relation between
the continuous-time transfer function and its discrete-time counterpart is a bit
different. The continuous-time transfer function F(s) corresponds to f(t), the response
to the Dirac delta impulse. The discrete-time transfer function of the continuous plant
is, however, the response to a specific "unit" impulse, created by a "holding element".

The simplest case is zero order holding. The unit pulse is held at 1 during the interval
0 <= t < L:

u(t) = 1 for 0 <= t < L
     = 0 for t >= L

In the Laplace transform,

U(s) = (1 - e^(-sL)) / s

The sampling with holding means that, before sampling, F(s)U(s) is created, i.e. in the
time domain f(t) is convolved with u(t). The holding interval is usually equal to the
sampling period h but a case of general L is also thinkable.

The sampling points are 0, h, 2h, .... The requirement on F(s) is: when some of the
sampling points coincide with 0 or L, the fraction must be proper.

The command for sampling with zero order holding is samph:

F=1/(s+1)

F =

1 / 1 + s

With sampling period 2,

G=samph(F,2)

G =

0.86z^-1 / 1 - 0.14z^-1

To see a picture, use commands

GG=samph(F,2,0:.2:2);

picture(GG,2,2,0:.2:2)
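For the first-order lag the zero order hold equivalent is available in closed form: sampling F(s) = 1/(s+1) with h = L gives G(z) = (1 - e^(-h)) z^-1 / (1 - e^(-h) z^-1). The sketch below (plain Python, an assumed derivation rather than Toolbox code) recovers the rounded coefficients 0.86 and 0.14 printed by samph:

```python
import math

h = 2.0                # sampling period, equal to the holding interval L
a = math.exp(-h)       # discrete-time pole of the ZOH-sampled lag 1/(s + 1)
gain = 1.0 - a         # numerator coefficient: G(z) = gain * z^-1 / (1 - a * z^-1)
```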

First order holding

In some cases of control, when a smoother input signal to the controlled plant is
required, first order holding is used. Here the unit pulse is a first order spline:

u(t) = t        for 0 <= t <= L
     = 2L - t   for L <= t <= 2L
     = 0        for t >= 2L

In the Laplace transform

U(s) = ((1 - e^(-sL)) / s)^2

The requirement on F(s): when some of the sampling points 0, T, 2T, ... coincide
with 0, L or 2L, the expression s F(s) must be proper.

The commands

Gh1=samph1(F,2)

Gh1 =

1.1z^-1 + 0.59z^-2 / 1 - 0.14z^-1

GG=samph1(1/(s+1),2,0:.2:2);

picture(GG,2,2,0:.2:2)



Second order holding

In the case of second order holding, the unit pulse is a second order spline:

u(t) = t^2/2                                for 0 <= t <= L
     = (t^2 - 3(t - L)^2)/2                 for L <= t <= 2L
     = (t^2 - 3(t - L)^2 + 3(t - 2L)^2)/2   for 2L <= t <= 3L
     = 0                                    for t >= 3L

It is a continuous function of time, its derivative being also continuous. In the Laplace
transform

U(s) = ((1 - e^(-sL)) / s)^3

The requirement on F(s): when some of the sampling points 0, T, 2T, ... coincide
with 0, L, 2L or 3L, the expression s^2 F(s) must be proper.

The commands

Gh2=samph2(F,2)

Gh2 =

0.86z^-1 + 2.3z^-2 + 0.32z^-3 / 1 - 0.14z^-1

GG=samph2(1/(s+1),2,0:.2:2);
picture(GG,2,2,0:.2:2)


Unsamph

The commands inverse to samph, samph1, samph2 are unsamph, unsamph1,
unsamph2, respectively. The Polynomial Toolbox can solve this problem only for the
most common case: holding interval L equal to the sampling period h. The input
argument function G(z) must be proper, i.e., it must not have a pole at z = ∞.
Furthermore, for unsamph, it must not have a pole at z = 0. For unsamph1, G(z)
must not have a higher than single pole at z = 0. For unsamph2, the pole at z = 0
must not be higher than double. In our examples

F=1/(s+1);

Gh=samph(F,2);

Gh1=samph1(F,2);

Gh2=samph2(F,2);

unsamph(Gh)

ans =

1 / 1 + s

unsamph1(Gh1)

ans =

1 / 1 + s

unsamph2(Gh2)

ans =

1 / 1 + s


Resampling with holding

Zero order holding

The process of sampling with holding also has its purely discrete-time counterpart,
resampling with holding. Instead of sampling continuous-time F(s) with period h and
creating discrete-time G(z), we resample discrete-time F(z) with ratio r and create
G(z) whose z means an r-times greater shift than that of F(z).

In the case of zero order holding, the unit pulse, created by the (discrete-time) holder,
is

u_t = 1 for t = 0, 1, ..., L-1
    = 0 for t = L, L+1, ...

In the z-transform

U(z) = 1 + z^-1 + ... + z^-(L-1) = (1 - z^-L) / (1 - z^-1)

Example with resampling ratio r = 6:

F=zi/(1-.9*zi)

F =

z^-1 / 1 - 0.9z^-1

Gh=resamph(F,6)

Gh =

4.7z^-1 / 1 - 0.53z^-1

GG=resamph(F,6,0:5);
picture(GG,2,6,0:5)
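The resamph result can be reproduced by hand: convolve the impulse sequence of F with the length-L hold pulse and keep every L-th sample. For F = z^-1/(1 - 0.9 z^-1) and L = 6 the retained samples form a geometric sequence with leading value (1 - 0.9^6)/(1 - 0.9) ≈ 4.7 and ratio 0.9^6 ≈ 0.53, matching Gh above. A numpy sketch:

```python
import numpy as np

L = 6
# impulse sequence of z^-1/(1 - 0.9 z^-1): 0, 1, 0.9, 0.81, ...
f = np.array([0.0] + [0.9**(t - 1) for t in range(1, 40)])
u = np.ones(L)                # zero order hold pulse of length L
held = np.convolve(f, u)      # time-domain product F(z) U(z)
g = held[L::L]                # keep every L-th sample
lead = g[0]                   # (1 - 0.9**6)/(1 - 0.9), about 4.7
ratio = g[1] / g[0]           # 0.9**6, about 0.53
```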


First order holding

In the case of first order holding, the unit pulse is

u_t = t        for t = 0, 1, ..., L-1
    = 2L - t   for t = L, L+1, ..., 2L-1
    = 0        for t = 2L, 2L+1, ...

In the z-transform

U(z) = z^-1 ((1 - z^-L) / (1 - z^-1))^2

Our example:

Gh1=resamph1(F,6)

Gh1 =

13z^-1 + 15z^-2 / 1 - 0.53z^-1

GG=resamph1(F,6,0:5);
picture(GG,2,6,0:5)

Second order holding

In the case of second order holding, the unit pulse is

u_t = t^2/2                                for t = 0, 1, ..., L-1
    = (t^2 - 3(t - L)^2)/2                 for t = L, L+1, ..., 2L-1
    = (t^2 - 3(t - L)^2 + 3(t - 2L)^2)/2   for t = 2L, 2L+1, ..., 3L-1
    = 0                                    for t = 3L, 3L+1, ...

In the z-transform

U(z) = z^-2 ((1 - z^-L) / (1 - z^-1))^3


Our example:

Gh2=resamph2(F,6)

Gh2 =

19z^-1 + 1.1e+002z^-2 + 38z^-3 / 1 - 0.53z^-1

GG=resamph2(F,6,0:5);

picture(GG,2,6,0:5)


7 Control Systems Design

Introduction

In the context of polynomial methods, control system design amounts to the selection

of a polynomial matrix fraction description for a dynamic output feedback

compensator to satisfy given specifications. This is one of the steps in industrial

design that needs to be complemented with other steps such as exploratory analysis,

identification, analysis, simulation, evaluation and assessment.

The Polynomial Toolbox provides several routines to solve typical design tasks. Their

modifications as well as polynomial solutions to many other design problems can

easily be built with the help of the basic tools of the Polynomial Toolbox.

Polynomial Toolbox control design functions are usually based on solving polynomial

or polynomial matrix equations. The functions often work in several modes. In the

simplest mode, an inexperienced user typically enters a plant transfer function

matrix and the functions returns a resulting controller. In this mode the plant is only

allowed to be in the form of a left-denominator fraction or a right-denominator

fraction while the resulting controller is returned as "opposite-side fraction" that is

the right-denominator fraction or left-denominator fraction, respectively.

In the sophisticated mode offered to users familiar with the polynomial methods, all

controllers to meet the goals are returned in a parametric form expressed by several

polynomial matrices.

For consistency with older Polynomial Toolbox versions, entering the plant via two

separate polynomial matrices, the numerator matrix and the denominator one, is also

allowed.

In this chapter we successively discuss several basic control design routines, H2
optimization, and H-infinity optimization.

Basic control routines

The Polynomial Toolbox offers basic functions to

place closed-loop poles by dynamic output feedback

design deadbeat controllers for discrete-time systems


Stabilization

Youla-Kucera

parametrization

Table 3 lists the corresponding routines.

Table 3. Basic control design routines

stab Stabilization and Youla-Kucera parametrization

pplace Pole placement

debe Deadbeat design

A simple random stabilization can be achieved as follows. Given a linear time-invariant
plant with transfer matrix P(v), where the variable v can be any of
s, p, z, z^-1, q, d, the command

C = stab(P)

computes a stabilizing controller with transfer matrix C(v).

If the input P is a left-denominator fraction, the output C is a right-denominator
fraction. Conversely, if P is a right-denominator fraction, then C is a left-denominator
fraction. The other two fraction types, the scalar-denominator fraction and the
matrix-denominator fraction, are not allowed.

The resulting closed-loop poles are randomly placed in the stability region, whose

shape of course depends on the choice of the variable.

For the same plant expressed by a left-denominator fraction P, the command

[Nc,Dc,E,F] = stab(P)

is used to obtain the parametrization of all stabilizing controllers in the form

C(v) = [Nc(v) U(v) + E(v) T(v)] [Dc(v) U(v) - F(v) T(v)]^-1

Here U(v) is an arbitrary but stable polynomial matrix parameter of compatible size,
and T(v) is another (not necessarily stable) arbitrary polynomial matrix parameter of
compatible size. The parameters can be chosen at will but so that the resulting
controller is proper (or causal). If any common factor in C(v) is cancelled then the
above formula is the standard Youla-Kucera parametrization of all stabilizing
controllers and det U(v) is the resulting closed-loop characteristic polynomial.


Similarly, for a plant with transfer matrix expressed by a right-denominator fraction
P(v), the command

[Nc,Dc,E,F] = stab(P)

gives rise to the parametrization

C(v) = [U(v) Dc(v) - T(v) F(v)]^-1 [U(v) Nc(v) + T(v) E(v)]

Again, U(v) is an arbitrary but stable polynomial matrix parameter of compatible size,
and T(v) is another (not necessarily stable) arbitrary polynomial matrix parameter of
compatible size. The parameters can be chosen at will but such that the resulting
controller is proper (or causal).

Example

Consider the simple continuous-time plant defined by

P=(s+1)/(2-3*s+s^2)

P =

1 + s / 2 - 3s + s^2

It has two unstable poles:

roots(P.den)

ans =

2

1

To obtain a stabilizing controller, type

C = stab(P)

C =

-0.59 + s \ 8.9 + 7.3s

Indeed, this controller gives rise to the closed-loop characteristic polynomial

cl = P.den*C.den+P.num*C.num

cl =

7.7 + 20s + 3.7s^2 + s^3

roots(cl)

ans =

-1.6619 + 3.9773i
-1.6619 - 3.9773i
-0.4132

and all closed-loop poles are in the left half plane:

roots(cl)

ans =

-0.8297 + 6.1991i

-0.8297 - 6.1991i

-2.3816
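Such a stability check is easy to repeat numerically: all roots of the closed-loop polynomial 7.7 + 20s + 3.7s^2 + s^3 obtained above must have negative real parts. A plain numpy check (note that numpy.roots expects the highest-degree coefficient first):

```python
import numpy as np

# closed-loop characteristic polynomial 7.7 + 20s + 3.7s^2 + s^3,
# written with the highest-degree coefficient first for numpy.roots
cl = [1.0, 3.7, 20.0, 7.7]
poles = np.roots(cl)
stable = bool(all(p.real < 0 for p in poles))
```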

Using the polynomial approach you can get not just one but all stabilizing controllers.

Type

[nc,dc,e,f] = stab(P)

nc =

23 + 4s

dc =

4.3 + s

e =

0.5 - 0.75s + 0.25s^2

f =

0.25 + 0.25s

to get another (because of the random character of the macro) stabilizing controller,
along with the parametrization of all stabilizing compensators

C(s) = [(23 + 4s) u(s) + (0.5 - 0.75s + 0.25s^2) t(s)] / [(4.3 + s) u(s) - (0.25 + 0.25s) t(s)]

Taking u(s) = 1 and t(s) arbitrary we get all the controllers that assign the closed-loop
characteristic polynomial to be

m = P.den*dc+P.num*nc

m =

31 + 16s + 5.3s^2 + s^3

The closed-loop poles are positioned at

roots(m)

ans =


-3.3643

-0.9802 + 2.8975i

-0.9802 - 2.8975i

Similarly, for u(s) fixed stable and t(s) arbitrary we always obtain the closed-loop
characteristic polynomial m(s)u(s), unless we perform a cancellation in the controller.
If t(s) is chosen such that m(s) cancels then the resulting closed-loop characteristic
polynomial equals exactly u(s).

Thus, for

c = (s+1)*(s+2)*(s+3);

and

t

t =

-23 + 46s + 16s^2

we obtain

nc1 = nc*c+e*t

nc1 =

1.3e+002 + 3.2e+002s + 1.5e+002s^2 + 47s^3 + 8s^4

dc1=dc*c-f*t

dc1 =

31 + 47s + 21s^2 + 6.3s^3 + s^4

which are both divisible by ms () . Indeed,

nc2=coprime(nc1/m), dc2=coprime(dc1/m)

nc2 =

dc2 =

4 + 8s

1 + s

Applying this controller we have

cl2 = P.den*dc2+P.num*nc2

cl2 =

6 + 11s + 6s^2 + s^3



roots(cl2)

ans =

-3.0000

-2.0000

-1.0000

which equals the desired u(s).
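The whole cancellation computation can be replayed with plain numpy polynomial arithmetic (coefficients listed in ascending powers of s):

```python
import numpy as np
from numpy.polynomial import polynomial as P

den = [2.0, -3.0, 1.0]    # plant denominator 2 - 3s + s^2
num = [1.0, 1.0]          # plant numerator 1 + s
dc2 = [1.0, 1.0]          # reduced controller denominator 1 + s
nc2 = [4.0, 8.0]          # reduced controller numerator 4 + 8s

# closed-loop polynomial P.den*dc2 + P.num*nc2
cl2 = P.polyadd(P.polymul(den, dc2), P.polymul(num, nc2))
# 6 + 11s + 6s^2 + s^3 = (s+1)(s+2)(s+3)
```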

Example

Consider the three-input two-output discrete-time plant given by the transfer matrix
P(z^-1) = N(z^-1) D^-1(z^-1) with

N = [2 zi zi+1; 1-2*zi 0 zi]

N =

2 z^-1 1 + z^-1

1 - 2z^-1 0 z^-1

and

Dd = diag([2*zi+1 zi-1 1]);
D=[1 1 1; 0 1 zi; 0 0 1]*Dd*[1 0 0; zi+1 1 0; zi 1 1]

D =

3z^-1 + z^-2 z^-1 1
-1 + 2z^-2 -1 + 2z^-1 z^-1
z^-1 1 1

P=N/D

P.numerator =

2 z^-1 1 + z^-1
1 - 2z^-1 0 z^-1

P.denominator =

3z^-1 + z^-2 z^-1 1
-1 + 2z^-2 -1 + 2z^-1 z^-1
z^-1 1 1

The plant is stabilized by the controller C(z^-1) = DC^-1(z^-1) NC(z^-1) with

C=stab(P)


C.denominator =

Columns 1 through 2

1 - 0.63z^-1           -8.6e-019 + 0.34z^-1
-1.1z^-1               1 + 0.62z^-1
8.9e-016 - 0.15z^-1    0.18z^-1

Column 3

-4.5e-017 + 0.21z^-1 - 0.056z^-2
-0.56z^-1 - 0.17z^-2
1 + 0.63z^-1 - 0.2z^-2

C.numerator =

-0.86     1.4 - 0.29z^-1
-1.3      1 - 0.45z^-1
0.073     0.21 + 0.024z^-1

class(C)

ans =

ldf

Nc=C.num,Dc=C.den

Nc =

-0.86     1.4 - 0.29z^-1
-1.3      1 - 0.45z^-1
0.073     0.21 + 0.024z^-1

Dc =

Columns 1 through 2

1 - 0.63z^-1           -8.6e-019 + 0.34z^-1
-1.1z^-1               1 + 0.62z^-1
8.9e-016 - 0.15z^-1    0.18z^-1

Column 3

-4.5e-017 + 0.21z^-1 - 0.056z^-2
-0.56z^-1 - 0.17z^-2
1 + 0.63z^-1 - 0.2z^-2

Indeed, the feedback matrix

Cl = Dc*D+Nc*N

Cl =

Columns 1 through 2

-0.35 - 0.4z^-1 - 0.11z^-2                   -4.4e-017 + 1.1e-015z^-1
-2.7 - 3.1z^-1 - 0.87z^-2 - 2.2e-016z^-3     -1 - 0.52z^-1 - 2.2e-016z^-2
0.36 + 0.42z^-1 + 0.12z^-2 + 1.7e-016z^-3    1 + 0.52z^-1 + 1.7e-016z^-2

Column 3

0.14 + 0.096z^-1 + 5.6e-017z^-2
-1.3 - 0.95z^-1 - 2.8e-016z^-2
1.1 + 0.76z^-1 - 5.6e-017z^-2

is stable as proved by typing

isstable(Cl)

ans =

1

Pole placement

Typically, the closed-loop poles should not only be stable but also located in prescribed positions within the stability region. The routine pplace takes care of this. Given the plant with transfer matrix

P = D^-1 N

and a vector of desired closed-loop pole locations poles, the command

C = pplace(P,poles)

computes a controller with transfer matrix

C = NC DC^-1

which places the closed-loop poles at the locations poles. The multiplicity of the

poles is increased if necessary. The resulting system may have real or complex

coefficients depending on whether or not the desired poles are self-conjugate.


For the same plant and the same desired locations vector the command

[Nc,Dc,E,F,degT] = pplace(P,poles)

may be used to obtain the parametrization

C = [ NC + E T ] [ DC - F T ]^-1

of all other controllers yielding the same dynamics. T is an arbitrary polynomial matrix parameter of compatible size and of degree bounded by degT.

The pole placement technique is particularly useful for single-input single-output plants. The macro does its job for multi-input multi-output systems as well, but the user should be aware that assigning pole locations alone need not be enough. In the multi-input multi-output case the desired behavior typically also depends on the closed-loop invariant polynomials rather than on the pole locations only. In fact, the assignment of invariant polynomials is very easy: all that is needed is to place the desired invariant polynomials p_i(s) into a diagonal matrix R(s) of the same size as D(s) and call

C = pplace(P,R).

Dually, if the plant transfer matrix is given by a right-denominator fraction, the controller is returned as a left-denominator one.

Example

Consider a simple continuous-time plant described by

d = 2-3*s+s^2;

n = s+1;

P=n/d

P =

1 + s / 2 - 3s + s^2

The plant has two unstable poles

roots(d)

ans =

2.0000

1.0000

The poles may be shifted arbitrarily with a first-order controller. Hence, the resulting feedback system has three poles. We place them at the locations s1 = -1, s2 = -1 + j and s3 = -1 - j. A controller that puts the poles at these locations results from

C = pplace(P,[-1,-1+j,-1-j])

C =

1 + s \ 5s

and, hence, has the transfer function

C(s) = 5s / (s + 1)

Indeed,

r = d*dc+n*nc

r =

2 + 4s + 3s^2 + s^3

and

roots(r)

ans =

-1.0000 + 1.0000i
-1.0000 - 1.0000i
-1.0000

as desired. There are no other proper controllers placing the poles exactly that way, as

[nc,dc,f,g,degT] = pplace(P,[-1,-1+j,-1-j])

nc =

5s

dc =

1 + s

f =

2 - 3s + s^2

g =

1 + s

degT =

-Inf

The parameter T(s) = 0 leaves no degree of freedom.
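The closed-loop identity d*dc + n*nc = 2 + 4s + 3s^2 + s^3 computed above can be confirmed independently; a small NumPy check with ascending coefficient vectors:

```python
import numpy as np

d  = [2, -3, 1.0]   # plant denominator 2 - 3s + s^2
n  = [1, 1.0]       # plant numerator 1 + s
dc = [1, 1.0]       # controller denominator 1 + s
nc = [0, 5.0]       # controller numerator 5s

r = np.convolve(d, dc)          # degree 3 -> 4 coefficients
r[:3] += np.convolve(n, nc)
assert np.allclose(r, [2, 4, 3, 1])                               # 2 + 4s + 3s^2 + s^3
assert np.allclose(sorted(np.roots(r[::-1]).real), [-1, -1, -1])  # poles -1 and -1 +/- j
```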


Deadbeat controller

In discrete-time systems, there is one pole location of particular interest. The closed-loop system can be forced to have a finite time response from any initial condition by making the closed-loop characteristic polynomial equal to a suitable power of z or q. Equivalently, the characteristic polynomial equals 1 for systems described by a delay operator (z^-1 or d).

The resulting performance is called deadbeat control and can be achieved as follows. Given a discrete-time plant with transfer matrix P(z), the command

C = debe(P)

computes a deadbeat controller with transfer matrix C(z). If the plant is expressed as a left-denominator fraction, the controller is returned in the form of a right-denominator fraction. If a right-denominator fraction is input, a left-denominator fraction results.

The resulting closed-loop response to any initial condition, as well as to any finite-length disturbance, disappears in a finite number of steps.

The function works similarly for the other discrete-time operators q, d and z^-1.

If any other deadbeat regulators exist then they can be obtained from the parametrization

C(z) = [ NC(z) + E(z) T(z) ] [ DC(z) - F(z) T(z) ]^-1

which is computed by the command

[Nc,Dc,E,F,degT] = debe(P)

with a left-denominator fraction P(z). If P(z) is a right-denominator fraction, the same call returns the alternative parametrization

C(z) = [ DC(z) - F(z) T(z) ]^-1 [ NC(z) + E(z) T(z) ]

of the controller. Here T(z) is an arbitrary polynomial matrix parameter of compatible size with degree limited by degT. Any such choice of T(z) results in a proper controller yielding the desired dynamics.

If the design is made in the backward-shift operator d or z^-1 then the degree of T is not limited. Any choice of T results in a causal controller that guarantees a finite response (with the number of steps depending on the degree, of course). In this case the output argument degT is useless and is returned empty.

Example

Consider a simple third-order discrete-time plant with the scalar strictly proper transfer function P(z) = D^-1(z) N(z) given by

N = pol([1 1 1],2,'z')

N =

1 + z + z^2

D = pol([4 3 2 1],3,'z')

D =

4 + 3z + 2z^2 + z^3

P=D\N

P =

4 + 3z + 2z^2 + z^3 \ 1 + z + z^2

A deadbeat regulator is designed simply by typing

C = debe(P)

C =

-2.3 - 2.3z - 2.7z^2 / 0.57 + 0.71z + z^2

Computing the closed-loop characteristic polynomial

D*C.den+N*C.num

ans =

z^5

reveals finite modes only and hence confirms the desired deadbeat performance.
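The deadbeat identity D*Dc + N*Nc = z^5 can be reproduced with plain polynomial convolution; a NumPy sketch using the printed (two-significant-digit) coefficients, so the check holds only up to that rounding:

```python
import numpy as np

D  = [4, 3, 2, 1]              # 4 + 3z + 2z^2 + z^3 (ascending powers)
N  = [1, 1, 1]                 # 1 + z + z^2
Dc = [0.57, 0.71, 1]           # 0.57 + 0.71z + z^2 (rounded, as printed)
Nc = [-2.3, -2.3, -2.7]        # -2.3 - 2.3z - 2.7z^2 (rounded, as printed)

cl = np.convolve(D, Dc)        # degree 5 -> 6 coefficients
cl[:5] += np.convolve(N, Nc)
# up to the rounding of the printed coefficients this is z^5
assert np.allclose(cl, [0, 0, 0, 0, 0, 1], atol=0.1)
```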

Trying

[Nc,Dc,E,F,degT] = debe(P)

Nc =

-2.3 - 2.3z - 2.7z^2

Dc =

0.57 + 0.71z + z^2

E =

4 + 3z + 2z^2 + z^3

F =

1 + z + z^2

degT =

-Inf

shows (as degT = -Inf) that there is no other proper deadbeat regulator such that the resulting system is of order 5. Deadbeat controllers of higher order can be found by making the design in d or by solving the associated Diophantine equation directly.

For instance, the command

[Dc,Nc,F,E] = axbyc(D,N,z^6)

Dc =

-0.47 + 0.1z + 0.25z^2 + z^3

Nc =

1.9 - 0.88z - 1.4z^2 - 2.2z^3

F =

-1 - z - z^2

E =

4 + 3z + 2z^2 + z^3

C=Nc/Dc

C =

1.9 - 0.88z - 1.4z^2 - 2.2z^3 / -0.47 + 0.1z + 0.25z^2 + z^3

yields a set of third-order controllers parametrized by a constant T.
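axbyc solves linear polynomial equations of the form AX + BY = C. Independently of the toolbox, such an equation reduces to a linear system with a Sylvester-type coefficient matrix; a hedged NumPy sketch of that reduction for D*x + N*y = z^6 (a sketch of the idea, not the toolbox algorithm):

```python
import numpy as np

D = np.array([4, 3, 2, 1.0])   # 4 + 3z + 2z^2 + z^3, ascending powers
N = np.array([1, 1, 1.0])      # 1 + z + z^2
c = np.zeros(7); c[6] = 1.0    # right-hand side z^6

def conv_matrix(p, cols, rows):
    # rows x cols matrix M such that M @ x gives the coefficients of p(z)*x(z)
    M = np.zeros((rows, cols))
    for j in range(cols):
        M[j:j + len(p), j] = p
    return M

# unknowns: x (deg 3, 4 coefficients) and y (deg 3, 4 coefficients);
# 7 equations, 8 unknowns -> a one-parameter family, as the text says
S = np.hstack([conv_matrix(D, 4, 7), conv_matrix(N, 4, 7)])
sol = np.linalg.lstsq(S, c, rcond=None)[0]   # one particular solution
x, y = sol[:4], sol[4:]
r = np.convolve(D, x)
r[:6] += np.convolve(N, y)
assert np.allclose(r, c)       # D*x + N*y = z^6
```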

For comparison we now perform the same design in the backward-shift operator z^-1. To convert the plant transfer function into z^-1, type

Pzi=reverse(P)

Pzi =

1 + 2z^-1 + 3z^-2 + 4z^-3 \ z^-1 + z^-2 + z^-3

Typing

Czi = debe(Pzi)

Czi =

-2.7 - 2.3z^-1 - 2.3z^-2 / 1 + 0.71z^-1 + 0.57z^-2

leads to the same regulator as previously (the only deadbeat regulator of second order). Causal regulators of higher order can be obtained from

[Ncneg,Dcneg,Eneg,Fneg,degTneg] = debe(Pzi)

Ncneg =

-2.7 - 2.3z^-1 - 2.3z^-2

Dcneg =

1 + 0.71z^-1 + 0.57z^-2

Eneg =

0.17 + 0.35z^-1 + 0.52z^-2 + 0.7z^-3

Fneg =

0.17z^-1 + 0.17z^-2 + 0.17z^-3

degTneg =

[]

A parameter T(z^-1) of any degree may be used. For instance, the choice of T(z^-1) = 1 yields a third-order regulator with

Nc_other = Ncneg+Eneg

Nc_other =

-2.5 - 1.9z^-1 - 1.8z^-2 + 0.7z^-3

Dc_other=Dcneg-Fneg

Dc_other =

1 + 0.54z^-1 + 0.4z^-2 - 0.17z^-3

The check

Pzi.den*Dc_other+Pzi.num*Nc_other

ans =

1.0000

confirms the deadbeat performance.


Example

Consider the two-input two-output plant with transfer matrix P(z) given by

N = [1-z z; 2-z 1]

N =

1 - z    z
2 - z    1

D = [1+2*z-z^2 -1+z+z^2; 2-z 2+3*z+2*z^2]

D =

1 + 2z - z^2    -1 + z + z^2
2 - z           2 + 3z + 2z^2

P=N/D

P =

1 - z    z    /    1 + 2z - z^2    -1 + z + z^2
2 - z    1    /    2 - z           2 + 3z + 2z^2

The deadbeat regulator C(z) is found by typing

C = debe(P)

C =

-1.7 + z    -0.78        \    3.7 + 2.3z     -0.17 + 1.4z
-0.74       -0.78 + z    \    0.65 - 0.7z    0.83 + 0.43z

Indeed, the resulting closed-loop denominator matrix

C.den*P.den+C.num*P.num

ans =

-z^3    z^3
0       2z^3

reveals that only finite step modes are present.


H-2 optimization

Introduction

H-2 or linear-quadratic-Gaussian (LQG) control is a modern technique for designing optimal dynamic controllers. It makes it possible to trade off regulation performance against control effort, and to take process and measurement noise into account. The Polynomial Toolbox offers the two macros listed in Table 4 for H-2 optimization by polynomial methods.

Table 4. H-2 optimization routines

splqg    Scalar H-2 optimization
plqg     Matrix H-2 optimization


Scalar case

The function call

[C,regpoles,obspoles] = splqg(P,Q,R,rho,mu)

results in the solution of the SISO LQG problem defined as follows. Response of the measured output to the control input:

y = P(s) u

Response of the controlled output to the control input:

z = Q(s) u

Response of the measured output to the disturbance input:

y = R(s) v

In state-space form

ẋ = Ax + Bu + Gv,    P(s) = C (sI - A)^-1 B
z = Dx,              Q(s) = D (sI - A)^-1 B
y = Cx + w,          R(s) = C (sI - A)^-1 G

The scalar white state noise v has intensity 1, and the white measurement noise w has intensity µ. The compensator C(s) minimizes the steady-state value of

E[ z^2(t) + ρ u^2(t) ]

The output argument regpoles contains the regulator poles and obspoles contains the observer poles. Together the regulator and observer poles are the closed-loop poles.

Example

Consider the LQG problem for the plant with transfer function

P(s) = 10^-4 (s^4 + 0.16s^3 + 10.0088s^2 + 0.4802s + 9.0072) / [ s^2 (s^4 + 0.08s^3 + 2.5022s^2 + 0.06002s + 0.56295) ]

This is a scaled version of a mechanical positioning system discussed by Dorf, 1989 (pp. 544–546). The definition of the LQG problem is completed by choosing Q = R = P so that p = q = n. Furthermore, we let ρ = 1 and µ = 10^-6.

We input the data as follows:

n = 1e-4*(s^4+0.16*s^3+10.0088*s^2+0.4802*s+9.0072);
d = s^2*(s^4+0.08*s^3+2.5022*s^2+0.06002*s+0.56295);


P = n/d; Q = P; R = P;
rho = 1; mu = 1e-6;

We can now compute the optimal compensator:

[C,regpoles,obspoles] = splqg(P,Q,R,rho,mu)

MATLAB returns

C.numerator =

0.9 + 34s + 7.5s^2 + 1.5e+002s^3 + 6.4s^4 + 61s^5

C.denominator =

1 + 2.7s + 4.8s^2 + 5.5s^3 + 4.4s^4 + 1.9s^5 + s^6

regpoles =

-0.0300 + 1.5000i
-0.0300 - 1.5000i
-0.0101 + 0.5000i
-0.0101 - 0.5000i
-0.0284 + 0.0282i
-0.0284 - 0.0282i

obspoles =

-0.0692 + 1.5048i
-0.0692 - 1.5048i
-0.6931 + 0.3992i
-0.6931 - 0.3992i
-0.1734 + 0.7684i
-0.1734 - 0.7684i


MIMO case

Consider the linear time-invariant plant transfer matrix P(v), where v can be any of the variables s, p, z, q, z^-1 and d. The command

C = plqg(P,Q1,R1,Q2,R2)

computes an LQG optimal regulator as in Fig. 4 with transfer matrix C(v).


Fig. 4. LQG feedback structure

The controller minimizes the steady-state value of the expected cost

E[ y^T(t) Q2 y(t) + u^T(t) R2 u(t) ]

Q2 and R2 are symmetric nonnegative-definite weighting matrices. The symmetric nonnegative-definite matrices Q1 and R1 represent the intensities (covariance matrices) of the input and measurement white noises, respectively, and need to be nonsingular.


Examples

Example 1 (Kucera, 1991, pp. 298–303). Consider the discrete-time plant described by a left matrix fraction P(z) = D^-1(z) N(z), where

N = [1; 1];

D = [z-1/2 0; 0 z^2];

P = D\N

P =

-0.5 + z 0 \ 1

0 z^2 \ 1

Define the nonnegative-definite matrices

Q1 = 1;

Q2 = [7 0; 0 7];

R1 = [0 0; 0 1];

R2 = 1;

The optimal LQG controller is obtained by typing

[Nc,Dc] = plqg(N,D,Q1,R1,Q2,R2)

MATLAB returns

Nc =


Dc =

-0.10633z^2 0

-0.21266z^2 - 0.85065z^3 0

0.10633 + 0.34409z^2 - 1.3764z^3 1.1756

If the same plant is described by a right matrix fraction description N(z) D^-1(z), with

N = [z^2; z-1/2]
D = z^2*(z-1/2)

then the controller results by typing

[Nc,Dc] = plqg(N,D,Q1,R1,Q2,R2, 'r')

Constant polynomial matrix: 1-by-2

Nc =

0.5 0

Dc =

1 + 4z

Example 2. Consider now a continuous-time problem described by

N = [1; 1];

D = [s-2 0; 0 s^2+1];

P = D\N

P =

-2 + s 0 \ 1

0 1 + s^2 \ 1

and the weighting matrices

Q1 = 1;

R1 = eye(2);

Q2 = [10 0; 0 2];

R2 = 1;

The call

C = plqg(P,Q1,R1,Q2,R2)

returns the optimal LQG feedback controller


C.numerator =

-0.25 + 0.65s    0.48

C.denominator =

-0.16 + 0.15s + 0.016s^2    0.14 + 0.026s
1.2 + 0.63s + 0.14s^2       -0.21 - 0.48s

The closed-loop poles of the optimal feedback system follow as

roots(N*Nc+D*Dc)

ans =

-3.7297
-2.2300
-0.4236 + 1.0797i
-0.4236 - 1.0797i
-0.3887 + 1.0519i
-0.3887 - 1.0519i


State space design

Introduction

Many modern regulators are based on state feedback. As an alternative to traditional state-space methods, such a state feedback controller can also be designed via powerful polynomial techniques. The Polynomial Toolbox offers the two functions listed in Table 5 for state feedback design via the polynomial approach.

Table 5. State feedback design routines

psseig    Eigenstructure assignment via state feedback
psslqr    Linear quadratic regulator design


Eigenstructure assignment

A state feedback controller achieving a desired eigenstructure can easily be designed by the polynomial approach. Given a linear system

ẋ = Fx + Gu

where F is an n x n constant matrix and G is an n x m constant matrix, and a set of polynomials

P = { p1(s), p2(s), ..., pr(s) },    r <= m,

the command

L = psseig(F,G,P)

returns, if possible, a constant matrix L such that the closed-loop matrix of the controlled system

ẋ = (F - GL) x

has invariant polynomials q1(s), q2(s), ..., qn(s), where

q1(s) = p1(s) q2(s)
q2(s) = p2(s) q3(s)
...
qr(s) = pr(s)
q_{r+1}(s) = ... = qn(s) = 1

Such a matrix exists if and only if the fundamental degree inequality

deg q1 + deg q2 + ... + deg qk >= c1 + c2 + ... + ck

holds for all k = 1, 2, ..., r, where c1 >= c2 >= ... >= cr are the controllability indices of the pair (F,G). Moreover, equality must hold for k = r. If the input polynomials P do not satisfy these conditions an error message is issued.

Example

The dynamics of an inverted pendulum linearized about the equilibrium position are described by the equation

ẋ = Fx + Gu

where

F = [ 0 1 0 0; 10.7800 0 0 0; 0 0 0 1; -0.9800 0 0 0 ],    G = [ 0; -0.2000; 0; 0.2000 ]

The desired closed-loop poles are selected as

-1 ± j,    -2 ± 2j

This yields the invariant polynomial

q1(s) = s^4 + 6s^3 + 18s^2 + 24s + 16

Since m = 1, one has

q2(s) = q3(s) = q4(s) = 1.

We aim to find a feedback gain matrix L so that the state feedback law u = -Lx assigns these invariant polynomials to the closed-loop system matrix F - GL. The corresponding code is as follows:

F=[0,1,0,0;10.78,0,0,0;0,0,0,1;-0.98,0,0,0]

F =

0 1.0000 0 0

10.7800 0 0 0

0 0 0 1.0000

-0.9800 0 0 0

G=[0;-0.2000;0;0.2000]

G =

0

-0.2000

0

0.2000

P = s^4 + 6*s^3 + 18*s^2 + 24*s + 16;

L=psseig(F,G,P)

L =

-152.0633 -42.2449 -8.1633 -12.2449

As expected,

det(s*eye(4)-F+G*L)-P

ans =

0
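The determinant check can be reproduced outside MATLAB; a NumPy sketch using the (rounded) gain printed above:

```python
import numpy as np

F = np.array([[0, 1, 0, 0], [10.78, 0, 0, 0], [0, 0, 0, 1], [-0.98, 0, 0, 0]])
G = np.array([[0], [-0.2], [0], [0.2]])
L = np.array([[-152.0633, -42.2449, -8.1633, -12.2449]])

# characteristic polynomial of the closed-loop matrix F - G*L, descending
# powers; compare with s^4 + 6s^3 + 18s^2 + 24s + 16 (L is rounded, so
# the match is only up to that rounding)
charpoly = np.poly(F - G @ L)
assert np.allclose(charpoly, [1, 6, 18, 24, 16], atol=1e-2)
```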

Linear quadratic regulator

The optimal state feedback controller known as the linear-quadratic regulator can also be designed via polynomial techniques. Given a linear system

ẋ = Fx + Gu

where F is an n x n constant matrix and G is an n x m constant matrix, and a regulated variable

z = Hx + Ju

where H is a p x n constant matrix and J is a p x m constant matrix, the command

L = psslqr(F, G, H, J)

returns a constant matrix L such that the control law u = -Lx minimizes the L2-norm of z for every initial state x(0). It is assumed that

J^T H = 0,    J^T J = I

Example

For example (tracking system with an amplifier-motor, see A. Tewari, Modern Control Design with Matlab and Simulink, 2nd ed., Wiley, 2002, Ex. 6.3, p. 303), consider the following state-space system with

F = [ 0 1 0; 0 -0.01 0.3; 0 -0.003 -10 ],    G = [ 0 0; 0 -1; 0.1 0 ]

and the cost given as the minimum of the regulated variable defined by

H = [ 1 0 0; 0 1 0; 0 0 1; 0 0 0; 0 0 0 ],    J = [ 0 0; 0 0; 0 0; 1 0; 0 1 ]

When inputting the data


F=[0 1 0; 0 -0.01 0.3;0 -.003 -10]

G=[0 0;0 -1; .1 0]

H=[eye(3);zeros(2,3)],J=[zeros(3,2);eye(2)]

F =

0 1.0000 0

0 -0.0100 0.3000

0 -0.0030 -10.0000

G =

0 0

0 -1.0000

0.1000 0


H =

1 0 0
0 1 0
0 0 1
0 0 0
0 0 0

J =

0 0
0 0
0 0
1 0
0 1

the optimum state feedback is computed using command

K=psslqr(F,G,H,J)

K =

0.0025 0.0046 0.0051

-1.0000 -1.7220 -0.0462
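Because J^T H = 0 and J^T J = I here, minimizing the L2-norm of z is the standard LQR problem with state weight Q = H^T H = I and input weight R = J^T J = I. As an independent cross-check (not the polynomial method psslqr uses), the associated Riccati equation can be solved in NumPy via the stable invariant subspace of the Hamiltonian matrix:

```python
import numpy as np

F = np.array([[0, 1, 0], [0, -0.01, 0.3], [0, -0.003, -10.0]])
G = np.array([[0, 0], [0, -1.0], [0.1, 0]])
Q = np.eye(3)   # H'H
R = np.eye(2)   # J'J

# stable invariant subspace of the Hamiltonian gives the stabilizing
# solution X of F'X + XF - X G R^-1 G' X + Q = 0
Ham = np.block([[F, -G @ np.linalg.inv(R) @ G.T],
                [-Q, -F.T]])
w, V = np.linalg.eig(Ham)
Vs = V[:, w.real < 0]                     # eigenvectors of the 3 stable eigenvalues
X = np.real(Vs[3:] @ np.linalg.inv(Vs[:3]))
K = np.linalg.inv(R) @ G.T @ X            # u = -Kx; compare with the gain K above

res = F.T @ X + X @ F - X @ G @ np.linalg.inv(R) @ G.T @ X + Q
assert np.allclose(res, 0, atol=1e-6)                # Riccati equation satisfied
assert max(np.linalg.eigvals(F - G @ K).real) < 0    # closed loop is stable
```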

Linear Gaussian filter

Thanks to the well-known duality, the same procedure can be applied to design optimal linear Gaussian filters. As an example, consider the linearized model of the vertical-plane dynamics of an AIRC aircraft described by the equations

ẋ = Fx + GL v
y = Hx + JL v

where

F = [ 0 0 1.1320 0 -1; 0 -0.0538 -0.1712 0 0.0705; 0 0 0 1 0; 0 0.0485 0 -0.8556 -1.0130; 0 -0.2909 0 1.0532 -0.6859 ],

H = [ 1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0 ]

We want to design a linear Gaussian filter for the covariance matrices given by

GL = [ 0 0 0 0 0 0; -0.1200 1 0 0 0 0; 0 0 0 0 0 0; 4.4190 0 1.6650 0 0 0; 1.5750 0 -0.0732 0 0 0 ],

JL = [ 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1 ]

The corresponding code is as follows:

F = [0,0,1.1320,0,-1;0,-0.0538,-.1712,0,0.0705;0,0,0,1,0;

0,0.0485,0,-0.8556,-1.0130;0,-0.2909,0,1.0532,-0.6859]

F =

0 0 1.1320 0 -1.0000

0 -0.0538 -0.1712 0 0.0705

0 0 0 1.0000 0

0 0.0485 0 -0.8556 -1.0130

0 -0.2909 0 1.0532 -0.6859

H = [1,0,0,0,0;0,1,0,0,0;0,0,1,0,0]

H =

1 0 0 0 0

0 1 0 0 0

0 0 1 0 0

GL = [0,0,0,0,0,0;-0.12,1,0,0,0,0;0,0,0,0,0,0;4.4190,0,1.665,0,0,0;
1.575,0,-0.0732,0,0,0]

GL =

0 0 0 0 0 0

-0.1200 1.0000 0 0 0 0

0 0 0 0 0 0

4.4190 0 1.6650 0 0 0

1.5750 0 -0.0732 0 0 0

JL = [0,0,0,1,0,0;0,0,0,0,1,0;0,0,0,0,0,1]

JL =

0 0 0 1 0 0

0 0 0 0 1 0

0 0 0 0 0 1


L = psslqr(F',H',GL',JL')

L =

1.0423     0.0663     -0.2106    -0.4498    -0.8060
0.0663     0.9445     -0.0688    -0.0527    -0.2484
-0.2106    -0.0688    1.8029     1.6497     2.1948


Descriptor system design

Introduction

Descriptor systems are usually described by equations of the form

Eẋ = Ax + Bu
y = Cx + Du

where E is a possibly singular n x n constant matrix. If E happens to be nonsingular, the above equations can be transformed into standard state-space equations. If it is singular, we have a more general system called a descriptor system. The transfer function matrix of a descriptor system may be improper (with numerator degrees higher than the denominator ones), while it is proper or even strictly proper for a state-space system. The Polynomial Toolbox offers several functions for descriptor system control.

Regularization of a descriptor system

Regularization of a descriptor system of the form

Eẋ = Ax + B [w; u]
[z; y] = Cx + D [w; u]

means its transformation into an equivalent form

eẋ = ax + b [w; u]
[z; y] = cx + d [w; u]

with

d = [ d11 d12
      d21 d22 ]

such that d12 has full column rank and d21 has full row rank. "Equivalent" means that the two plants have the same transfer matrices.

Regularization is achieved by calling


[a,b,c,d,e] = dssreg(A,B,C,D,E,nmeas,ncon)

where the dimension of y is nmeas and the dimension of u is ncon.

Examples

In the section devoted to the command dsshinf the descriptor representation of a generalized plant is derived. When considering the subsystem

z2 = c(1 + rs) u

two pseudo state variables are defined as x3 = u and x4 = u̇, which leads to the descriptor equations

ẋ3 = x4
0 = x3 - u

The output equation is rendered as

z2 = c(1 + rs) u = c r x4 + c u

The output equation, however, equally well could be chosen as

z2 = c(1 + rs) u = c x3 + c r x4

This brings the generalized plant in the form

For this plant we have

E = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0 ]
A = [ 0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 -1 0 ]
B = [ sqrt(2) 0; 1 1; 0 0; 0 1 ]

and

C = [ 1 0 0 0; 0 0 c cr; -1 0 0 0 ]
D = [ 1 0; 0 0; -1 0 ]

so that

D12 = [ 0; 0 ],    D21 = -1

and D12 does not have full column rank. We apply dssreg to this plant for c = 0.1, r = 0.1.

c = 0.1; r = 0.1;

E = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0];


A = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 -1 0];

B = [sqrt(2) 0; 1 1; 0 0; 0 1];

C = [1 0 0 0; 0 0 c c*r; -1 0 0 0];

D = [1 0; 0 0; -1 0];

ncon = 1; nmeas = 1;

We now apply dssreg.

[a,b,c,d,e] = dssreg(A,B,C,D,E,nmeas,ncon)

a =

0 1 0 0
0 0 0 0
0 0 0 1
0 0 -1 0

b =

1.4142    0
1.0000    1.0000
0         1.0000
0         1.0000

c =

1.0000     0    0         0
0          0    0.1000    0.0100
-1.0000    0    0         0

d =

1.0000     0
0          0.0100
-1.0000    0

e =

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 0

We now have

d12 = [ 0; 0.0100 ],    d21 = -1

so that the transformed plant is "regular."

As a second example we consider the standard plant

ẋ = x + w + u
z = x
y = x

for which neither D12 nor D21 has full rank. We obtain the following result.

E = 1; A = 1; B = [1 1]; C = [1; 1]; D = [0 0;0 0];

nmeas = 1; ncon = 1;

[a,b,c,d,e] = dssreg(A,B,C,D,E,nmeas,ncon)

a =

1 0 0
0 1 0
0 0 1

b =

1 1
0 1
1 0

c =

1 1 0
1 0 1

d =

0 1
1 0

e =

1 0 0
0 0 0
0 0 0

Instead of a state representation of dimension 1 we now have a 3-dimensional descriptor representation, which, however, is "regular."
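The claimed equivalence of transfer matrices can be spot-checked numerically by evaluating G(s) = C(sE - A)^-1 B + D for both realizations at a few test points; a NumPy sketch using the descriptor matrices printed above:

```python
import numpy as np

# transformed 3-dimensional descriptor realization printed above
e = np.diag([1.0, 0.0, 0.0])
a = np.eye(3)
b = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
c = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
d = np.array([[0.0, 1.0], [1.0, 0.0]])

def tf(s, E, A, B, C, D):
    return C @ np.linalg.solve(s * E - A, B) + D

for s in (3.0, 2.0 + 1.0j, 0.5j):
    # original plant: E=1, A=1, B=[1 1], C=[1;1], D=0 -> every entry is 1/(s-1)
    G_orig = np.full((2, 2), 1.0 / (s - 1.0))
    assert np.allclose(tf(s, e, a, b, c, d), G_orig)
```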


8 Robust Control with parametric

uncertainties

Introduction

Single parameter uncertainty


Modern control theory addresses various problems involving uncertainty. A

mathematical model of a system to be controlled typically includes uncertain

quantities. In a large class of practical design problems the uncertainty may be

attributed to certain coefficients of the plant transfer matrix. The uncertainty usually

originates from physical parameters whose values are only specified within given

bounds. An ideal solution to overcome the uncertainty is to find a robust controller: a simple, fixed controller, designed off-line, which guarantees desired behavior and

stability for all expected values of the uncertain parameters.

The Polynomial Toolbox offers several simple tools that are useful for robust control

analysis and design for systems with parametric uncertainties. The relevant macros

are briefly introduced in this chapter. More details on the underlying methods as well

as other solutions that can also be built from Polynomial Toolbox macros are

described in Barmish (1996), Bhattacharya, Chapellat and Keel (1995) and other

textbooks.

Many systems of practical interest involve a single uncertain parameter. At the time

of design the parameter is only known to lie within a given interval. Quite often even

more complex problems (with a more complex uncertainty structure) may be reduced

to the single parameter case. Needless to say, the strongest results are available for this simple case.

Even though there is only a single uncertain parameter, it may well appear in several coefficients of the transfer matrix at the same time. Quite in the spirit of the

Polynomial Toolbox the coefficients are assumed to be polynomial functions of the

uncertain parameter.

Example 1

Robust stability interval. To analyze the single parameter uncertain polynomial

p(s, q) = 3 + (10 + q)s + 12s^2 + (6 + q)s^3 + s^4

first check whether p(s, q) is stable for q = 0. Then find its left-sided and right-sided stability margins, that is, the smallest negative qmin and the largest positive qmax such that p(s, q) remains stable for any q in (qmin, qmax).

With the Polynomial Toolbox this is an easy task: first express the given polynomial as

p(s, q) = p0(s) + q p1(s)

and enter the data

p0 = 3 + 10*s + 12*s^2 + 6*s^3 + s^4;

p1 = s + s^3;

Then type

isstable(p0)

ans =

1

to verify nominal stability (that is, stability for q = 0). Finally, call

[qmin,qmax] = stabint(p0,p1)

qmin =

qmax =

-5.6277

Inf

to determine the stability margins. This result discloses that p(s, q) is not merely stable for q = 0 but also for all q in (-5.6277, Inf). When q = -5.6277, stability is lost. Note that nothing is claimed for q < -5.6277 beyond the fact that stability is no longer guaranteed.

If you also have the Control System Toolbox available then you can combine it with the Polynomial Toolbox and visualize the result by plotting the root locus of a fictitious plant p1(s)/p0(s) under a fictitious feedback gain q ranging over (-5.6277, Inf). Typing

rlocus(ss(p1,p0),qmin:.1:100)

produces the root locus plot shown below. It confirms that all the roots of

p(s, q) = p0(s) + q p1(s)

stay inside the stability region for all q in (-5.6277, 100]. Note also the role of the macro ss, which converts the polynomial fraction into the Control System Toolbox state-space format.
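The left margin returned by stabint can be reproduced without the toolbox by bisecting on q, with numeric root computation standing in for the exact method; a NumPy sketch:

```python
import numpy as np

def is_stable(q):
    # p(s,q) = p0 + q*p1 = 3 + (10+q)s + 12s^2 + (6+q)s^3 + s^4, descending powers
    return max(np.roots([1, 6 + q, 12, 10 + q, 3]).real) < 0

lo, hi = -6.0, 0.0      # is_stable(-6) is False, is_stable(0) is True
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if is_stable(mid) else (mid, hi)
print(round(hi, 4))      # -5.6277, matching qmin
```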


Root locus plot

Example 2

Robust stabilization. Consider the plant with transfer matrix

P(s, q) = D^-1(s, q) N(s, q)

with

D(s, q) = [ s^2 1+q; 1+q^2 s ],    N(s, q) = [ 1+s 0; q 1 ]

which depends on an uncertain parameter q. Suppose that q may take any value in the interval [0, 1] and that its nominal value is q0 = 0. The plant is described by a left-sided fraction of polynomial matrices in two variables, D(s, q) and N(s, q), that may be written as

D(s, q) = D0(s) + q D1(s) + q^2 D2(s) = [ s^2 1; 1 s ] + q [ 0 1; 0 0 ] + q^2 [ 0 0; 1 0 ]

and

N(s, q) = N0(s) + q N1(s) = [ 1+s 0; 0 1 ] + q [ 0 0; 1 0 ]
Robust control structure

If a feedback controller with transfer matrix

C = Nc(s) Dc^-1(s)

is applied in the robust control structure shown above then the resulting closed-loop denominator matrix is

P(s, q) = D(s, q) Dc(s) + N(s, q) Nc(s)

The denominator matrix may also be expressed as

P(s, q) = P0(s) + q P1(s) + q^2 P2(s)
        = [ D0(s)Dc(s) + N0(s)Nc(s) ] + q [ D1(s)Dc(s) + N1(s)Nc(s) ] + q^2 [ D2(s)Dc(s) + N2(s)Nc(s) ]

where N2 = 0 for this plant. To enter the data, type

D0 = [ s^2 1; 1 s ];

D1 = [ 0 1; 0 0 ];

D2 = [ 0 0; 1 0 ];

N0 = [ 1+s 0; 0 1 ];

N1 = [ 0 0; 1 0 ];

Nominally (that is, for q = 0), the transfer matrix

P(s, 0) = D0^-1(s) N0(s),    D0(s) = [ s^2 1; 1 s ],    N0(s) = [ 1+s 0; 0 1 ]

is unstable because

roots(D0)

ans =

-0.5000 + 0.8660i
-0.5000 - 0.8660i
1.0000

To stabilize the nominal plant, call

C=stab(D0\N0)

C =

1.4e+002 + 17s    -3.6 + 2.6s    /    -16 + s    -2.6
16                4.9            /    -1         3.6 + s

This controller gives rise to the feedback denominator matrix defined by

Dc1=C.den; Nc1=C.num;
P0 = D0*Dc1+N0*Nc1, P1 = D1*Dc1+N1*Nc1, P2 = D2*Dc1

P0 =

1.4e+002 + 1.6e+002s + s^2 + s^3    0
0                                   2.3 + 3.6s + s^2

P1 =

-1                3.6 + s
1.4e+002 + 17s    -3.6 + 2.6s

P2 =

0          0
-16 + s    -2.6

This denominator is nominally stable, as expected, because

roots(P0)

ans =

-0.0526 +12.6416i

-0.0526 -12.6416i

-2.7347

-0.8902

-0.8422

To check robust stability simply type

[qmin,qmax] = stabint(P0,P1,P2)

qmin =

-0.9290


qmax =

0.3888

This result reveals that the closed-loop system only remains stable on the interval q in (-0.9290, 0.3888), which does not include the entire desired interval [0, 1]. Hence, the controller is nominally but not robustly stabilizing. Let us try another one:

C=stab(D0\N0)

C =

19 + 9.4s    -13 + 12s    /    3.1 + s    -12
-3.1         56           /    -1         13 + s

This second controller yields

Dc2=C.den; Nc2=C.num;
P0 = D0*Dc2+N0*Nc2, P1 = D1*Dc2+N1*Nc2, P2 = D2*Dc2

P0 =

18 + 29s + 12s^2 + s^3    0
0                         44 + 13s + s^2

P1 =

-1           13 + s
19 + 9.4s    -13 + 12s

P2 =

0          0
3.1 + s    -12

Again, as expected the controller is nominally stabilizing:

roots(P0)

ans =

-9.6741

-6.4147 + 1.6390i

-6.4147 - 1.6390i

-1.5221

-1.2364

Its robust stability interval is

[qmin,qmax] = stabint(P0,P1,P2)

qmin =

-1.1088

qmax =

1.0346

Because [0, 1] is contained in (-1.1088, 1.0346), the second controller evidently guarantees stability on the whole required uncertainty-bounding interval. Hence, it is the desired robustly stabilizing controller.


Interval polynomials

Another important class of uncertain systems is described by interval polynomials with independent uncertainties in the coefficients. An interval polynomial looks like

p(s, q) = sum_{i=0}^{n} [q_i^-, q_i^+] s^i

with [q_i^-, q_i^+] denoting the bounding interval for the ith coefficient. Using the Polynomial Toolbox, it is convenient to describe interval polynomials by their "lower" and "upper" elements

p^-(s) = sum_{i=0}^{n} q_i^- s^i    and    p^+(s) = sum_{i=0}^{n} q_i^+ s^i

In many applications interval polynomials arise when an original uncertainty structure is known but too complex (e.g., highly nonlinear) to be tractable, yet may be "overbounded" by a simple interval once an independent uncertainty structure is imposed.

Example 3

Graphical Method. Consider the continuous-time interval polynomial (Barmish, 1996)

p(s, q) = [0.45, 0.55] + [1.95, 2.05]s + [2.95, 3.05]s^2 + [5.95, 6.05]s^3 + [3.95, 4.05]s^4 + [3.95, 4.05]s^5 + s^6

The first step in the graphical test for robust stability requires establishing that at least one polynomial in the family is stable. Using the midpoint of each of the intervals we obtain

p_mid = pol([0.5 2 3 6 4 4 1],6)

p_mid =

0.5 + 2s + 3s^2 + 6s^3 + 4s^4 + 4s^5 + s^6

isstable(p_mid)

ans =

1

Next we enter the given interval polynomial in terms of the two "lumped" polynomials

pminus = 0.45+1.95*s+2.95*s^2+5.95*s^3+3.95*s^4+3.95*s^5+s^6;
pplus = 0.55+2.05*s+3.05*s^2+6.05*s^3+4.05*s^4+4.05*s^5+s^6;

Using these polynomials we plot the value sets p(jω, q), consisting of what are called the "Kharitonov rectangles", for 0 <= ω <= 1 using the command

khplot(pminus,pplus,0:.001:1)

This results in the figure below. Since none of the rectangles touches the point z = 0, the Zero Exclusion Condition 0 ∉ p(jω, q) is satisfied, and we conclude that the interval polynomial is robustly stable. Note that as long as all the polynomial coefficients are real numbers we only need to investigate ω >= 0. The plot for ω <= 0 is symmetric, as p(-jω, q) is the complex conjugate of p(jω, q).

Kharitonov rectangles

Example 4

Test Using Kharitonov Polynomials. For continuous-time interval polynomials we have an even simpler method available: an interval polynomial of invariant degree (with real coefficients) is known to be stable if and only if just its four "extreme" polynomials (called the Kharitonov polynomials)

K1(s) = q0^- + q1^- s + q2^+ s^2 + q3^+ s^3 + q4^- s^4 + q5^- s^5 + q6^+ s^6
K2(s) = q0^+ + q1^+ s + q2^- s^2 + q3^- s^3 + q4^+ s^4 + q5^+ s^5 + q6^- s^6
K3(s) = q0^+ + q1^- s + q2^- s^2 + q3^+ s^3 + q4^+ s^4 + q5^- s^5 + q6^- s^6
K4(s) = q0^- + q1^+ s + q2^+ s^2 + q3^- s^3 + q4^- s^4 + q5^+ s^5 + q6^+ s^6

are stable (written here for degree 6). For the interval polynomial of Example 3 the Kharitonov polynomials are computed by

computed by

format bank
pformat symb

[stability,K1,K2,K3,K4] = kharit(pminus,pplus)

stability =

1.00

K1 =

0.45 + 1.95s + 3.05s^2 + 6.05s^3 + 3.95s^4 + 3.95s^5 + s^6

K2 =

0.55 + 2.05s + 2.95s^2 + 5.95s^3 + 4.05s^4 + 4.05s^5 + s^6

K3 =

0.55 + 1.95s + 2.95s^2 + 6.05s^3 + 4.05s^4 + 3.95s^5 + s^6

K4 =

0.45 + 2.05s + 3.05s^2 + 5.95s^3 + 3.95s^4 + 4.05s^5 + s^6

The macro also checks the stability of the Kharitonov polynomials. The resulting

value of stability confirms that all the four polynomials are stable and we

conclude that the interval polynomial is robustly stable.
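The four extreme polynomials follow a fixed sign pattern of period four in the coefficient bounds; a NumPy sketch that rebuilds them from the lower and upper coefficients and re-checks their stability numerically:

```python
import numpy as np

lo = [0.45, 1.95, 2.95, 5.95, 3.95, 3.95, 1.0]   # q_i lower bounds
hi = [0.55, 2.05, 3.05, 6.05, 4.05, 4.05, 1.0]   # q_i upper bounds

# Kharitonov patterns: which bound to take at coefficient i (period 4)
patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 1), (0, 1, 1, 0)]

def hurwitz(c_ascending):
    # numeric stability check: all roots in the open left half plane
    return max(np.roots(c_ascending[::-1]).real) < 0

K = [[hi[i] if p[i % 4] else lo[i] for i in range(7)] for p in patterns]
assert K[0] == [0.45, 1.95, 3.05, 6.05, 3.95, 3.95, 1.0]   # matches K1 above
assert all(hurwitz(k) for k in K)   # all four stable -> robustly stable family
```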

Example 5

Robust stability of discrete-time interval polynomials. For discrete-time polynomials (of degree 4 and higher), Kharitonov-like extremal results are not available. However, the graphical method may be applied to discrete-time polynomials as well as to other stability regions.

Consider the interval polynomial

p(z, q) = [10, 20] + [20, 30]z + [128, 138]z^2 + [260, 270]z^3 + 168z^4.

To test its robust stability, we write

p(z, q) = p0(z) + q1 p1(z) + q2 p2(z) + q3 p3(z) + q4 p4(z),

where p0(z) = 10 + 20z + 128z^2 + 260z^3 + 168z^4, p1(z) = 1, p2(z) = z, p3(z) = z^2 and p4(z) = z^3. Such an expression is called polytopic and will be discussed later in a more general setting. For the moment it enables us to describe each interval coefficient by a separate uncertain parameter qi ranging over [0, 10].

To analyze the interval polynomial we first enter the data

p0 = 10 + 20*z + 128*z^2 + 260*z^3 + 168*z^4;
p1 = 1; p2 = z; p3 = z^2; p4 = z^3;
Qbounds = [ 0 10; 0 10; 0 10; 0 10 ];

Next we check that p0 is stable

isstable(p0)

ans =

1

Then we plot the value sets p(c,q), but now with c sweeping around the unit circle.
Note that the value sets no longer have a rectangular shape and we must use the more
general command

ptopplot(p0,p1,p2,p3,p4,Qbounds,exp(j*(0:0.001:1)*2*pi))

to obtain the plot

Value sets


To see better what is happening in the neighborhood of the point 0 we zoom the graph
for generalized frequencies e^(jω) in the critical range ω ∈ [0.6π, 1.4π]:

ptopplot(p0,p1,p2,p3,p4,Qbounds,exp(j*(0.3:0.001:0.7)*2*pi))

This yields the plot

Polytopes of polynomials

Zoomed plot of value sets

Indeed, zero is excluded from all the octagons (0 ∉ p(c,q) for all c on the unit
circle) and we conclude that the discrete-time interval polynomial is robustly stable.
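The same conclusion can be cross-checked numerically in a few lines of plain Python. This is an illustrative sketch, not the toolbox implementation: a Schur-Cohn recursion (is_schur, our own name) stands in for isstable, and because the family is affine in q the value set at each point z is the polygon spanned by the images of the 16 vertices of the box [0,10]^4.

```python
from itertools import product

p0 = [10, 20, 128, 260, 168]    # 10 + 20z + 128z^2 + 260z^3 + 168z^4, lowest power first

def is_schur(c):
    """Schur-Cohn test: True iff all roots lie strictly inside the unit circle.
    c lists coefficients lowest power first."""
    a = list(c)
    while len(a) > 1:
        if abs(a[0]) >= abs(a[-1]):
            return False
        # reduced polynomial (a_n p(z) - a_0 p*(z)) / z, one degree lower
        a = [a[-1] * a[k] - a[0] * a[-1 - k] for k in range(1, len(a))]
    return True

def horner(c, z):
    """Evaluate the polynomial at a complex point z."""
    v = 0j
    for coef in reversed(c):
        v = v * z + coef
    return v

def vertex_values(z):
    """Images of the vertices of [0,10]^4; the value set is their polygon."""
    basis = [1, z, z ** 2, z ** 3]
    return [horner(p0, z) + sum(q * b for q, b in zip(qs, basis))
            for qs in product([0.0, 10.0], repeat=4)]
```

At z = 1, for instance, all vertex images are real and the value set is the interval [586, 626], comfortably away from zero; sweeping z around the unit circle reproduces the octagons of the plot.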

A more general class of systems is described by uncertain polynomials whose
coefficients depend linearly on several parameters, but where each parameter may
occur simultaneously in several coefficients. Such an uncertain polynomial may look
like

p(s,q) = Σ_{i=0..n} ai(q) s^i

with each coefficient ai(q) an affine function of q. That is, for each i = 0, 1, 2, ..., n
there exist a column vector αi and a scalar βi such that

ai(q) = αi^T q + βi .


Uncertain polynomials with the affine uncertainty structure form polytopes in the
space of polynomials. Similarly to the single-parameter case such polynomials may
always be expressed as

p(s,q) = p0(s) + q1 p1(s) + q2 p2(s) + ... + qn pn(s)

This form is preferred in the Polynomial Toolbox. Thus, a polytope of polynomials
with n parameters is always described by the n+1 polynomials p0(s), p1(s), ..., pn(s)
along with n parameter bounding intervals [q1^-, q1^+], [q2^-, q2^+], ..., [qn^-, qn^+].
To keep an invariant degree over the whole polytope it is usually assumed that
deg pi(s) < deg p0(s) for all i ≥ 1.

One reason why the affine linear uncertainty structure is so important is that it is
preserved under feedback interconnection. To see this, consider an uncertain plant

P(s,q) = N(s,q) / D(s,q)

connected in the standard feedback configuration with a compensator

C(s) = Nc(s) / Dc(s)

A simple calculation leads to the closed-loop transfer function

Pcl(s,q) = N(s,q) Dc(s) / ( D(s,q) Dc(s) + N(s,q) Nc(s) )

If the plant has an affine linear uncertainty structure then the closed-loop transfer
function has an affine linear uncertainty structure as well. Indeed, if we write

N(s,q) = N0(s) + Σ_{i=1..n} qi Ni(s)

and

D(s,q) = D0(s) + Σ_{i=1..n} qi Di(s)

then the closed-loop characteristic polynomial follows as

Dcl(s,q) = D0(s)Dc(s) + N0(s)Nc(s) + Σ_{i=1..n} qi ( Di(s)Dc(s) + Ni(s)Nc(s) )


while the numerator of the closed-loop transfer function is

Ncl(s,q) = N0(s)Dc(s) + Σ_{i=1..n} qi Ni(s)Dc(s)

Inspection shows that Dcl(s,q) and Ncl(s,q) have affine linear uncertainty
structures. In fact, every transfer function of practical interest has this structure.

The affine linear uncertainty structure is also (roughly speaking) preserved under
linear fractional transformation of s and has many other interesting features.

Example 6

Improvement over rectangular bounds. For the polytope of polynomials P described by
(Barmish, 1996, p. 146)

p(s,q) = (2 + q1 - 2q2) + (1 + q2)s + (4 + 2q1 - q2)s^2 + (1 + 2q2)s^3 + s^4

with q1 ∈ [-0.5, 2] and q2 ∈ [-0.3, 0.3], we carry out two robust stability analyses.

Part 1: Conservatism of Overbounding. First replace p(s,q) by the overbounding
interval polynomial described by

p(s,q) ∈ [0.9, 4.6] + [0.7, 1.3]s + [2.7, 8.3]s^2 + [0.4, 1.6]s^3 + s^4 .

To apply the Kharitonov test we enter the bounds

pminus = pol([0.9 0.7 2.7 0.4 1],4);
pplus = pol([4.6 1.3 8.3 1.6 1],4);

[stable,K1,K2,K3,K4] = kharit(pminus,pplus); stable

stable =

     0

we conclude that the overbounding interval polynomial is not robustly stable. It is
easy to verify that the third Kharitonov polynomial is unstable:

isstable(K3)

ans =

0

Part 2: Value Set Comparison. To begin the second analysis, we express p(s,q) as

p(s,q) = p0(s) + q1 p1(s) + q2 p2(s) ,

where

p0(s) = 2 + s + 4s^2 + s^3 + s^4
p1(s) = 1 + 2s^2
p2(s) = -2 + s - s^2 + 2s^3

The data are entered as

p0 = pol([2 1 4 1 1],4);
p1 = pol([1 0 2],2);
p2 = pol([-2 1 -1 2],3);
Qbounds = [-0.5 2; -0.3 0.3];

Next we verify the critical precondition for application of the Zero Exclusion
Condition. Indeed, p0(s) is a stable member of the given polytopic family, as shown
by

isstable(p0)

ans =

1

Next we generate 80 polygonal value sets corresponding to frequencies evenly spaced
between 0 and 2:

ptopplot(p0,p1,p2,Qbounds,j*(0:0.025:2))


Extremal polygons

Within computational limits, we conclude from the plot that 0 ∉ p(jω, Q) for all
ω ≥ 0. Hence, by the Zero Exclusion Condition we conclude that P is robustly stable.
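Because p(s,q) is affine in q, the value set at each frequency is the polygon spanned by the images of the four vertices of the parameter box, which is what ptopplot draws. A small pure-Python sketch of that vertex computation (an illustration under our own naming, not the toolbox code):

```python
from itertools import product

# fixed polynomials, coefficients lowest power first
p0 = [2, 1, 4, 1, 1]
p1 = [1, 0, 2]
p2 = [-2, 1, -1, 2]
qbox = [(-0.5, 2.0), (-0.3, 0.3)]

def horner(c, s):
    """Evaluate the polynomial at a complex point s."""
    v = 0j
    for a in reversed(c):
        v = v * s + a
    return v

def vertex_values(w):
    """Images of the q-box vertices at s = jw; the value set is their hull."""
    s = 1j * w
    return [horner(p0, s) + q1 * horner(p1, s) + q2 * horner(p2, s)
            for q1, q2 in product(*qbox)]
```

At ω = 0 the value set collapses to the real interval [0.9, 4.6] (the constant-coefficient interval of the overbounding polynomial), and at ω = 1 every vertex image has real part at most -0.2, so zero is excluded there as well; sweeping ω reproduces the polygons of the plot.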

It may also be illuminating to picture the Kharitonov rectangles for the overbounding
interval polynomial:

khplot(pminus,pplus,0:0.025:2)


Example 7

Kharitonov rectangles

It is clear from the resulting plot that the Zero Exclusion Condition is violated for

the Kharitonov rectangles even though it holds for the polygons of the previous plot.

Summarizing, working with the overbounding interval polynomial is inconclusive,
while working with polygonal value sets leads us to the unequivocal conclusion that
p(s,q) is robustly stable.


Robust stability degree design for a polytopic plant. Consider the plant transfer
function

P(s,q) = N(s,q) / D(s,q)
       = ( (1 + q1) + (1 + q2)s ) / ( (2 + q1) + (1 - 3q2)s + (2 + 2q1)s^2 - 2s^3 )

with two uncertain parameters q1 ∈ [0, 0.2] and q2 ∈ [0, 0.2]. The plant is to be
robustly stabilized with robust stability degree 0.9.

Both the numerator and the denominator of the transfer function are uncertain
polynomials with a polytopic (affine) uncertainty structure. Write

N(s,q) = N0(s) + q1 N1(s) + q2 N2(s) = (1 + s) + q1 + q2 s

and

D(s,q) = D0(s) + q1 D1(s) + q2 D2(s) = (2 + s + 2s^2 - 2s^3) + q1(1 + 2s^2) - 3q2 s

and enter the data:

D0 = 2+s+2*s^2-2*s^3;

D1 = 1+2*s^2;

D2 = -3*s;

N0 = 1+s;

N1 = 1;

N2 = s;

Qbounds = [ 0 0.2; 0 0.2 ]

As the nominal plant

P(s,0) = N0(s) / D0(s) = (1 + s) / (2 + s + 2s^2 - 2s^3)

is unstable,

isstable(D0)

ans =

     0

we stabilize it by placing the closed-loop poles at -2, -3, -4 and -2 ± j:

[Nc,Dc] = pplace(N0,D0,[-2,-2+j,-2-j,-3,-4])

Nc =

     128.2000 + 115.9000s + 73.3000s^2

Dc =

     -4.1000 - 7.0000s - 0.5000s^2

Note that the nominal positions of the closed-loop poles comfortably satisfy the
required stability degree 0.9.

When the resulting controller

C(s) = Nc(s) / Dc(s) = (128.2 + 115.9s + 73.3s^2) / (-4.1 - 7s - 0.5s^2)

is connected with the uncertain plant, the closed-loop characteristic polynomial
becomes uncertain but the polytopic structure is preserved. The characteristic
polynomial may be written as

pcl(s,q) = P0(s) + q1 P1(s) + q2 P2(s)

where

P0 = D0*Dc+N0*Nc, P1 = D1*Dc+N1*Nc, P2 = D2*Dc+N2*Nc

P0 =

     120.0000 + 226.0000s + 173.0000s^2 + 67.0000s^3 + 13.0000s^4
     + 1.0000s^5

P1 =

     124.1000 + 108.9000s + 64.6000s^2 - 14.0000s^3 - 1.0000s^4

P2 =

     140.5000s + 136.9000s^2 + 74.8000s^3
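These closed-loop coefficients can be cross-checked with ordinary polynomial multiplication, i.e. convolution of coefficient sequences. A plain-Python verification sketch (our own helper names conv and padd), with coefficients listed lowest power first:

```python
def conv(a, b):
    """Multiply two polynomials given as coefficient lists (lowest power first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    """Add two coefficient lists of possibly different lengths."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

D0, D1, D2 = [2, 1, 2, -2], [1, 0, 2], [0, -3]
N0, N1, N2 = [1, 1], [1], [0, 1]
Nc = [128.2, 115.9, 73.3]
Dc = [-4.1, -7.0, -0.5]

P0 = padd(conv(D0, Dc), conv(N0, Nc))   # nominal closed-loop polynomial
P1 = padd(conv(D1, Dc), conv(N1, Nc))
P2 = padd(conv(D2, Dc), conv(N2, Nc))
```

Evaluating the three sums reproduces, up to rounding, exactly the P0, P1, P2 printed above.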

Recall that we require a robustly stable system with stability degree 0.9. The
polytopic family naturally has a member that is stable with at least the required
stability degree (remember the roots of p0(s)), so we can test the motion of the
polygonal value set p(c,q) by sweeping c along the shifted stability boundary
c = -0.9 + jω. Starting with values 0 ≤ ω ≤ 4 we type

ptopplot(10*P0,10*P1,10*P2,Qbounds,-.9+j*(0:.01:4))

Fig. 5. Value sets

The plot of Fig. 5 seems to indicate that zero is excluded. To be completely confident,
we must zoom the picture to see the critical range 0 ≤ ω ≤ 1:

ptopplot(P0,P1,P2,Qbounds,-.9+j*(0:.01:1))

It is evident from Fig. 6 that 0 ∉ p(-0.9 + jω, q) for all ω and, hence, the Zero
Exclusion Condition is verified. We conclude that the desired design specifications
are satisfied: the closed-loop system is robustly stable with robust stability degree
0.9.

General uncertainty structure

Example 8

vset, vsetplot, sarea, sareaplot, tsyp

Fig. 6. Zoomed value sets

Even very general parametric uncertainty structures can be handled by the
Polynomial Toolbox, as long as you can express the structure by a MATLAB
expression.

Consider an uncertain continuous-time polynomial with multilinear uncertainty

structure

p(s, q1, q2) = p0(s) + q1 p1(s) + q2 p2(s) + q1 q2 p12(s)

composed of four fixed polynomials

p0(s) = 1.853 + 3.164s + 2.871s^2 + 2.56s^3 + s^4
p1(s) = 3.773 + 4.841s + 2.06s^2 + s^3
p2(s) = 1.985 + 1.561s + 1.561s^2 + s^3
p12(s) = 4.032 + 1.06s + s^2

and check its robust stability for q1 ∈ [0, 1] and q2 ∈ [0, 3]. To this end, first
enter the data

p0 = pol([1.853 3.164 2.871 2.56 1],4);

p1 = pol([3.773 4.841 2.06 1],3);

p2 = pol([1.985 1.561 1.561 1],3);

p12 = pol([4.032 1.06 1],2);

describe the uncertainty structure

expr = 'p0+q1*p1+q2*p2+q1*q2*p12'

and define a reasonable grid for the parameter intervals

q1 = 0:1/50:1; q2=0:3/50:3;

As the polynomials are of continuous-time nature it is necessary to plot value sets
for several critical frequencies on the imaginary axis. Hence, choose ωi = 1.3, 1.4,
1.5, 1.6 and type

V = vset(q1,q2,expr,p0,p1,p2,p12,j*[1.3:.1:1.6]);

vsetplot(V,'points')

to obtain the plot

Note that the value sets are not convex. This typically happens whenever the
uncertainty structure is multilinear or more complex.

As one of the value sets (that for ω = 1.4) seems to include the critical point 0, we
zoom the plot in to see more details.

V = vset(q1,q2,expr,p0,p1,p2,p12,j*[1.4]);



Example 9

vsetplot(V,'points')

It is evident that 0 ∈ V(1.4) and, hence, the family is not robustly stable.
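The gridded evaluation behind vset is easy to mimic in plain Python. The sketch below (illustration only, not the toolbox code; valueset_min is our own name) evaluates p(jω, q1, q2) on the same 51×51 grid and confirms that at ω = 1.4 the value set passes essentially through the origin:

```python
p0  = [1.853, 3.164, 2.871, 2.56, 1]    # coefficients lowest power first
p1  = [3.773, 4.841, 2.06, 1]
p2  = [1.985, 1.561, 1.561, 1]
p12 = [4.032, 1.06, 1]

def horner(c, s):
    """Evaluate the polynomial at a complex point s."""
    v = 0j
    for a in reversed(c):
        v = v * s + a
    return v

def valueset_min(w, n=51):
    """Minimum modulus of p(jw, q1, q2) over a grid on [0,1] x [0,3]."""
    s = 1j * w
    v0, v1, v2, v12 = (horner(p, s) for p in (p0, p1, p2, p12))
    best = float("inf")
    for i in range(n):
        q1 = i / (n - 1)
        for k in range(n):
            q2 = 3 * k / (n - 1)
            best = min(best, abs(v0 + q1 * v1 + q2 * v2 + q1 * q2 * v12))
    return best
```

Near (q1, q2) ≈ (0.6, 0.54) the value at ω = 1.4 comes within a few thousandths of zero, which is the numerical counterpart of the zoomed plot showing 0 inside V(1.4).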

Now consider a family of discrete-time polynomials with quite complicated
uncertainty

p(z^-1, k, l, m) = e(z^-1) + sin(k) f(z^-1) - cos(m) k g(z^-1) + l^2 h(z^-1)

where

e(z^-1) = (z^-1 - 1.5)(z^-1 + 2)(z^-1 - 2)
f(z^-1) = 1
g(z^-1) = z^-1
h(z^-1) = z^-2

and k, l, m ∈ [-1, 1]. Here the data to be entered are

e = (zi-1.5)*(zi+2)*(zi-2); f = 1; g = zi; h = zi^2;

uncrty = 'e+sin(k)*f-cos(m)*k*g+(l^2)*h';

and, say,

k = -1:.1:1; l = k; m = k;

Before using the Zero Exclusion Condition to test robust stability we must check
that the family contains at least one stable member. Indeed, the nominal polynomial
p(z^-1, 0, 0, 0) = e(z^-1) is stable:

isstable(e)

ans =

1

Now we evaluate and plot value sets at 40 generalized frequencies evenly spread
around the unit circle:

V = vset(k,l,m,uncrty,e,f,g,h,exp(j*(0:2*pi/40:2*pi)));

vsetplot(V)

and obtain the picture
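The same kind of gridded check can be written directly in Python (illustration only; pval is our own name). At the generalized frequency z^-1 = 1 the family values are real, and the minimum over the grid stays well to the right of zero:

```python
import math

def pval(w, k, l, m):
    """p(w, k, l, m), with w standing for z^-1."""
    e = (w - 1.5) * (w + 2) * (w - 2)
    return e + math.sin(k) - math.cos(m) * k * w + (l ** 2) * w ** 2

grid = [i / 10 - 1 for i in range(21)]      # -1, -0.9, ..., 1
vals = [pval(1.0, k, l, m) for k in grid for l in grid for m in grid]
worst = min(vals)                            # closest approach to the critical point
```

Here worst = 1.5 - sin(1) + cos(1) ≈ 1.199, attained at k = -1, l = 0, m = ±1; repeating the evaluation for other points on the unit circle mirrors the plotted value sets.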



Incorrect calls

Example 10

As all the sets are far enough to the right of the critical point, robust stability
is verified.

The user must not forget to call these functions with named variable arguments.
Even if the parameter vector

q0 = 1:5;

already exists in the workspace, it must be passed by its name. The following call is
definitely incorrect:

vset(1:5,'q0*p',p,j)

??? Error using ==> vset at 92

Parameter vector must be a named variable.

Consider an uncertain polynomial

p(s, q1, q2) = p0(s) + (q1 + q2) p1(s) + sqrt(|q2|) p2(s)

composed of three fixed polynomials

p0(s) = 4 + 8s + 5s^2 + s^3
p1(s) = 1 - s + s^2
p2(s) = s + s^4

and two real parameters q1 ∈ [-6, 12] and q2 ∈ [-5, 15]. Suppose you want to check
which values of q1 and q2 give rise to a stable p(s, q1, q2). As there are two
parameters and the uncertainty structure is quite complicated, there is hardly any
theoretical method known to help. Nevertheless, simple gridding can do the job in a
reasonable time.

To start, insert the data

p0 = 4+8*s+5*s^2+s^3; p1 = 1-s+s^2; p2 = s+s^4;

and choose an appropriate grid, such as



Example 11

q1 = -6:.1:12; q2=-5:.1:15;

Then construct the stability area array by typing

S = sarea(q1,q2,'p0+(q1+q2)*p1+sqrt(abs(q2))*p2',p0,p1,p2);

and plot it with the help of

sareaplot(q1,q2,S)

What you get is a really nice picture: it shows which combinations of parameter
values yield a stable polynomial.
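What sarea does is conceptually simple: grid the parameter box, evaluate the uncertain polynomial at each gridpoint, and run a stability test on each member. A minimal Python analogue on a coarse integer grid (illustration only; member and is_hurwitz are our own names, with a Routh test standing in for isstable):

```python
import math

p0 = [4, 8, 5, 1]            # 4 + 8s + 5s^2 + s^3, lowest power first
p1 = [1, -1, 1]
p2 = [0, 1, 0, 0, 1]         # s + s^4

def member(q1, q2):
    """Coefficients of p0 + (q1+q2) p1 + sqrt(|q2|) p2."""
    c = [0.0] * 5
    for i, x in enumerate(p0):
        c[i] += x
    for i, x in enumerate(p1):
        c[i] += (q1 + q2) * x
    for i, x in enumerate(p2):
        c[i] += math.sqrt(abs(q2)) * x
    while len(c) > 1 and abs(c[-1]) < 1e-12:   # drop vanishing leading terms
        c.pop()
    return c

def is_hurwitz(c):
    """Routh-Hurwitz test, coefficients lowest power first, positive leading."""
    a = list(c[::-1])
    top = a[0::2]
    bot = a[1::2] + [0.0] * (len(top) - len(a[1::2]))
    col = [top[0]]
    while any(abs(x) > 1e-12 for x in bot):
        col.append(bot[0])
        if bot[0] == 0:
            return False
        top, bot = bot, [top[i + 1] - top[0] * bot[i + 1] / bot[0]
                         for i in range(len(top) - 1)] + [0.0]
    return all(x > 0 for x in col)

S = {(q1, q2): is_hurwitz(member(q1, q2))
     for q1 in range(-6, 13) for q2 in range(-5, 16)}   # coarse integer grid
```

For instance the nominal point (0, 0) gives p0 with roots -1, -2, -2 and is marked stable, while (12, 15) produces a negative s-coefficient and is marked unstable; plotting the dictionary over a finer grid would reproduce the sareaplot picture.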

Here it is mandatory to use names rather than values as the input arguments, both
for the parameters and for the polynomials. Violation of this rule causes an error
message:

S=sarea(-6:.1:12,q2,'p0+(q1+q2)*p1+sqrt(abs(q2))*p2',p0,p1,p2);

??? Error using ==> sarea at 70

Parameter vector must be a named variable.

3-D examples are even nicer but, of course, more time consuming. Consider a
three-parameter uncertain polynomial

p(s, q1, q2, q3) = p0(s) + (q1 + q1 q2) q3 p1(s) + q1^2 q2^2 p2(s)

with

p0(s) = 2 + 4s + 3s^2 + s^3
p1(s) = -1.7 + 0.13s + 0.29s^2
p2(s) = 1.2 + 1.2s - 0.038s^2

and q1, q2, q3 ∈ [-20, 20].

When inputting the data

p0 = 2+4*s+3*s^2+s^3;

p1 = -1.7+0.13*s+0.29*s^2;

p2 = 1.2+1.2*s-0.038*s^2;

q1 = -20:.5:20;q2=q1;q3=q1;

expr = 'p0+(q1+q1*q2)*q3*p1+(q1^2*q2^2)*p2';



Spherical uncertainty

Example 12

the function called by

S3 = sarea(q1,q2,q3,expr,p0,p1,p2);

needs around one hour on an average PC. The command

sareaplot(q1,q2,q3,S3)

results in a beautiful picture

Such a 3-D plot can of course be zoomed or rotated with the mouse in the standard
MATLAB manner.

A family of polynomials P = {p(·,q) : q ∈ Q} is said to be spherical if p(·,q) has an
independent uncertainty structure and the uncertainty set Q is an ellipsoid. Such a
family can be expressed in the centered form

p(s,q) = p0(s) + Σ_{i=0..n} qi s^i

where the weighted Euclidean norm of the vector of uncertain parameters is
bounded by

‖q‖2,W ≤ r

The Polynomial Toolbox offers a tool for testing robust stability of spherical
families using the Zero Exclusion Condition. Here we can plot value sets using the
function spherplot.

Consider the uncertain polynomial

p(s,q) = (0.5 + q0) + (1 + q1)s + (6 + q2)s^2 + (4 + q3)s^3

Example 13

with the uncertainty bound ‖q‖2,W ≤ 1 and the weighting matrix W = diag(2, 5, 3, 1),
that is,

2q0^2 + 5q1^2 + 3q2^2 + q3^2 ≤ 1 .

Use the graphical method of the Zero Exclusion Principle to test the robust
stability of the given uncertain polynomial. First we express the given polynomial in
the centered form

p(s,q) = 0.5 + s + 6s^2 + 4s^3 + Σ_{i=0..3} qi s^i

with the uncertainty bound unchanged. Now type

p0 = 0.5+s+6*s^2+4*s^3;

weight = [2,5,3,1];

r = 1; omega = 0:.01:1;

isstable(p0)

ans =

1


The graphical representation of the value set for the given range of frequencies is

generated by

spherplot(p0,omega,r,weight)

It can be seen that the Zero Exclusion Condition is violated so we conclude that the

given polynomial family is not robustly stable.

Similarly to the previous example, test the following polynomial [1, p. 268] for
robust stability

p(s,q) = (2 + q0) + (1.4 + q1)s + (1.5 + q2)s^2 + (1 + q3)s^3

with the uncertain parameters subject to

‖q‖2 ≤ 0.011

We type

p0 = 2+1.4*s+1.5*s^2+s^3; r = 0.011; omega = 0:0.005:1.4;

isstable(p0)

ans =

1

spherplot(p0,omega,r)

This results in

In this case, the origin is excluded from the value set and we conclude that the

polynomial family is robustly stable.
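For the unweighted case this graphical test can even be made exact in a few lines. For real q with ‖q‖2 ≤ r the perturbation q0 + q1(jω) + q2(jω)^2 + q3(jω)^3 has real part q0 - q2 ω^2 and imaginary part ω(q1 - q3 ω^2); these two linear maps are orthogonal, so the value set at each ω is an axis-aligned ellipse around the nominal value, and zero is excluded iff the nominal value lies outside that ellipse. An illustrative Python sketch of this derivation (our own function zero_excluded, not the spherplot code):

```python
import math

p0 = [2, 1.4, 1.5, 1]        # 2 + 1.4s + 1.5s^2 + s^3, lowest power first

def zero_excluded(w, r):
    """Exact zero-exclusion test at s = jw for the spherical family
    p0 + sum(q_i s^i) with ||q||_2 <= r (identity weighting)."""
    re = p0[0] - p0[2] * w * w                 # Re p0(jw)
    im = p0[1] * w - p0[3] * w ** 3            # Im p0(jw)
    if w == 0:                                 # value set degenerates to a segment
        return abs(re) > r
    A = r * math.sqrt(1 + w ** 4)              # real semi-axis of the ellipse
    B = r * w * math.sqrt(1 + w ** 4)          # imaginary semi-axis
    return (re / A) ** 2 + (im / B) ** 2 > 1

omega = [0.005 * i for i in range(281)]        # 0 ... 1.4, as in the example
robust = all(zero_excluded(w, 0.011) for w in omega)
```

With r = 0.011 the test succeeds at every frequency on the grid, matching the plotted conclusion; enlarging the radius to, say, r = 0.1 makes it fail near ω ≈ 1.16, where the nominal value passes closest to the origin.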



9 Conclusions

This is just a first version of the Manual. Future corrections, modifications and

additions are expected.



A

addition · 13, 25, 103

adj · 27

adjoint · 27

advanced operations and functions · 17

axb · 51

axbyc · 35, 39, 49, 52

B

backslash · 100

basic control routines · 159

basic operations on polynomial matrices · 25

Bézout equations · 50

C

canonical forms · 42

clements · 63

Clements form · 63

coefficient · 108

coefficient matrices · 16, 22, 23

coefficients · 16, 22, 23

column degrees · 43

column reduced · 43

common

divisor · 35

common left multiple · 40

common right divisor · 40

common right multiple · 39

compan · 33

companion matrix · 32

concatenation · 15, 22, 107

Conjugate transpose · 68

conjugation · 16

Conjugation · 108

constant

matrices · 32

control system design · 159

conversion

polynomial matrix fraction objects · 101

cop · 93

coprime · 35

left · 38

Coprime · 91, 96


D

deadbeat

compensator design · 159

control · 169

feedback controller · 169

regulator · 170

debe · 169

default indeterminate variable · 21

deg · 43

degree · 43

deriv · 66

descriptor system · 61

det · 26

determinant · 26

Diophantine equation · 48

Diophantine equations · 48

Discrete-time spectral factorization · 77

display format · 12

division · 34

entrywise · 106

matrix · 106

with remainder · 34

divisor · 33

right · 40

E

echelon · 46

echelon form · 46

entering polynomial matrices · 19

equation solvers · 52, 53

Euclidean division · 35

F

factor · 34

freeing variables · 21

G

General properties · 93

gensym · 21

gld · 35, 38

greatest common

divisor · 35

left divisor · 38

right divisor · 40


H

H-2 optimization · 173

help · 11

hermite · 45

Hermite form · 45

how to define a polynomial matrix · 12

hurwitz · 33

Hurwitz matrix · 32

Hurwitz stability · 31

I

indeterminate variables · 19

infinite roots · 30

initialization · 11

integral · 67

interval polynomials · 195

discrete-time · 197

invariant polynomials · 47

placement by feedback · 167

inverse · 26

isequal · 36

isfullrank · 28, 43

isprime · 35, 39

issingular · 29

isstable · 31

K

kharit · 197, 201

Kharitonov

polynomials · 196

rectangles · 196

khplot · 196, 204

Kronecker canonical form · 61

L

Laurent series · 108

lcoef · 43

ldf · 100

ldiv · 37

leading column coefficient matrix · 43

leading row coefficient matrix · 43

least common multiple · 36

least common right multiple · 39

Left-denominator-fraction · 100

llm · 36

LQG control · 173, 174

MIMO systems · 175


SISO systems · 174, 179, 180

lrm · 40

lu · 45

M

matrix denominator · 95

matrix division · 36, 106

matrix division with remainder · 37

matrix divisors · 36

matrix multiples · 36

matrix numerator · 95

matrix pencil · 61

matrix pencil Lyapunov equations · 63

matrix pencil routines · 61

matrix polynomial equations · 50

mdf · 98

minbasis · 30

minimal basis · 30

Moore-Penrose pseudoinverse · 28

multiple · 33

left · 40

multiplication · 13, 25, 103

N

Norms · 116

null · 29

null space · 28, 29

O

one-sided equations · 51

P

para-Hermitian · 57

parameter uncertainty · 189

single parameter · 189

pencan · 62

pinit · 11

pinv · 28

plqg · 175

plyap · 64

pol · 20, 90

pole placement · 159, 166

polynomial

basis · 30

polynomial matrix editor · 83

main window · 83, 84


matrix pad · 83

polynomial matrix Editor

matrix pad · 85

polynomial matrix equations · 48, 74

polynomial matrix fraction

computing with · 103

Polynomial matrix fractions · 88

polytopes · 199

of polynomials · 199

Popov form · 46

pplace · 167

prand · 37

prime

relatively · 35

product · 25

pseudoinverse · 28

ptoplot · 198

Q

quick start · 11

quotient · 34

R

range · 29

rank · 28, 29

normal · 29

rdf · 99

red · 93

Reduce · 92

reduced forms · 42

Resampling · 78

reverse · 93

right matrix fraction · 98

Right-denominator-fraction · 99

robust stabilization · 191

roots · 30

roots · 30

row degrees · 43

row reduced

form · 44

rowred · 44

S

Sampling · 117, 120, 121, 122

Sampling period · 78

scalar denominator · 88

Scalar-denominator-fraction · 88

sdf · 90


simple operations with polynomial matrices · 13,

103, 215

slash · 99

smith · 47

Smith form · 47

span · 29

spcof · 58

spectral co-factorization · 57, 77

spectral factorization · 57, 77

spf · 58

stab · 160, 193

stability · 30

degree · 204, 207

interval · 189

margins · 190

stabilization · 159, 160

all stabilizing controllers · 160

stabint · 190, 193

staircase form · 44

startup · 11

state space system · 61

submatrices · 15, 22, 107

subtraction · 13, 25, 103

sum · 25

sylv · 33

Sylvester matrix · 32

T

transformation

to Clements form · 63

to Kronecker canonical form · 61

transpose · 17

transposition · 16, 108

tri · 44

triangular form · 44, 45

tutorial · 19, 65, 88

two-sided equations · 51

U

unimodular · 26

Y

Youla-Kucera parametrization · 160

Z

zero exclusion condition. · 202


zeros · 30, 71

zpplot · 31



