7/31/2019 Thesis Nam
1/87
Author name(s)
Book title
Monograph
March 10, 2012
Springer
Use the template dedic.tex together with the Springer document class SVMono for monograph-type books or SVMult for contributed volumes to style a quotation or a dedication at the very beginning of your book in the Springer layout.
Foreword
Use the template foreword.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) to style your foreword in the Springer layout.

The foreword covers introductory remarks preceding the text of a book that are written by a person other than the author or editor of the book. If applicable, the foreword precedes the preface, which is written by the author or editor of the book.
Place, month year Firstname Surname
Preface
Use the template preface.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) to style your preface in the Springer layout.

A preface is a book's preliminary statement, usually written by the author or editor of a work, which states its origin, scope, purpose, plan, and intended audience, and which sometimes includes afterthoughts and acknowledgments of assistance. When written by a person other than the author, it is called a foreword. The preface or foreword is distinct from the introduction, which deals with the subject of the work.

Customarily, acknowledgments are included as the last part of the preface.
Place(s), Firstname Surname
month year Firstname Surname
Acknowledgements
Use the template acknow.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) if you prefer to set your acknowledgement section as a separate chapter instead of including it as the last part of your preface.
Contents
Part I BACKGROUND

1 Set Theoretic Methods in Control
  1.1 Convex sets
    1.1.1 Set terminology
    1.1.2 Ellipsoidal set
    1.1.3 Polyhedral set
  1.2 Set invariance theory
    1.2.1 Basic definitions
    1.2.2 Ellipsoidal invariant sets
    1.2.3 Polyhedral invariant sets
  1.3 Enlarging the domain of attraction
    1.3.1 Problem formulation
    1.3.2 Saturation nonlinearity modeling - A linear differential inclusion approach
    1.3.3 Enlarging the domain of attraction - Ellipsoidal set approach
    1.3.4 Enlarging the domain of attraction - Polyhedral set approach

2 Optimal and Constrained Control - An Overview
  2.1 Dynamic programming
  2.2 Pontryagin's maximum principle
  2.3 Model predictive control
    2.3.1 Implicit model predictive control
    2.3.2 Recursive feasibility and stability
    2.3.3 Explicit model predictive control - Parameterized vertices
  2.4 Vertex control

References

Glossary

Solutions
Acronyms
Use the template acronym.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) to style your list(s) of abbreviations or symbols in the Springer layout.

Lists of abbreviations, symbols and the like are easily formatted with the help of the Springer-enhanced description environment.
ABC Spelled-out abbreviation and definition
BABI Spelled-out abbreviation and definition
CABR Spelled-out abbreviation and definition
Part I
BACKGROUND
Use the template part.tex together with the Springer document class SVMono
(monograph-type books) or SVMult (edited books) to style your part title page and,
if desired, a short introductory text (maximum one page) on its verso page in the
Springer layout.
Chapter 1
Set Theoretic Methods in Control
Abstract The first aim of this chapter is to briefly review some of the set families used in control and to comment on the strengths and weaknesses of each of them. The tools of choice throughout the manuscript will be ellipsoidal and polyhedral sets, due to their mix of numerical tractability and flexibility. Then the concepts of robust invariant and robust controlled invariant sets are introduced, and some algorithms are given for computing such sets. The chapter ends with an original contribution on estimating the domain of attraction for time-varying and uncertain discrete-time systems with a saturated input.
1.1 Convex sets
1.1.1 Set terminology
Definition 1.1. (Convex set) A set $C \subseteq \mathbb{R}^n$ is convex if for all $x_1 \in C$ and $x_2 \in C$ it holds that
$$\alpha x_1 + (1-\alpha)x_2 \in C, \quad \forall \alpha \in [0, 1]$$
The point
$$x = \alpha x_1 + (1-\alpha)x_2$$
where $0 \le \alpha \le 1$ is called a convex combination of the pair $x_1$ and $x_2$. The set of all such points is the segment connecting $x_1$ and $x_2$. In other words, a set $C$ is convex if the line segment between any two points in $C$ lies in $C$.

Definition 1.2. (Convex function) A function $f: C \to \mathbb{R}$ with $C \subseteq \mathbb{R}^n$ is convex if and only if the set $C$ is convex and
$$f(\alpha x_1 + (1-\alpha)x_2) \le \alpha f(x_1) + (1-\alpha)f(x_2)$$
for all $x_1 \in C$, $x_2 \in C$ and for all $\alpha \in [0, 1]$.
Definition 1.3. (Closed set) A set $C$ is closed if it contains its own boundary. In other words, any point outside $C$ has a neighborhood disjoint from $C$.

Definition 1.4. (Closure of a set) The closure of a set $C$ is the intersection of all closed sets containing $C$. The closure of a set $C$ is denoted as $\mathrm{cl}(C)$.

Definition 1.5. (Bounded set) A set $C \subseteq \mathbb{R}^n$ is bounded if it is contained in some ball $B_R = \{x \in \mathbb{R}^n : \|x\|_2 \le R\}$ of finite radius $R > 0$.

Definition 1.6. (Compact set) A set $C \subseteq \mathbb{R}^n$ is compact if it is closed and bounded.

Definition 1.7. (C-set) A set $S \subseteq \mathbb{R}^n$ is a C-set if it is a convex and compact set containing the origin in its interior.

Definition 1.8. (Convex hull) The convex hull of a set $C \subseteq \mathbb{R}^n$ is the smallest convex set containing $C$.

Definition 1.9. (Support function) The support function of a set $C \subseteq \mathbb{R}^n$, evaluated at $z \in \mathbb{R}^n$, is defined as
$$\sigma_C(z) = \sup_{x \in C} z^T x$$
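For a polytope given by finitely many vertices, the supremum in the support function is attained at a vertex, so it reduces to a maximum over finitely many inner products. The following sketch illustrates this on a hypothetical unit square (not an example from the text):

```python
import numpy as np

# Support function sigma_C(z) = sup_{x in C} z^T x for a polytope given in
# vertex representation: the supremum is attained at one of the vertices.
# The unit square below is a hypothetical example.
V = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]).T  # 2 x 4

def support(V, z):
    """sigma_C(z) = max_i z^T v_i for C = Conv{v_1, ..., v_r}."""
    return float(np.max(V.T @ z))

z = np.array([1.0, 2.0])
print(support(V, z))  # 3.0: attained at the vertex (1, 1)
```

The reduction to a vertex maximum is what makes support-function evaluations cheap for polytopes, in contrast to general convex sets.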
1.1.2 Ellipsoidal set
Ellipsoidal sets, or ellipsoids, are a well-known class of convex sets. They are widely used in the control of dynamic systems due to their simple numerical representation [19], [44]. Next we provide a formal definition of ellipsoidal sets and a few of their properties.
Definition 1.10. (Ellipsoidal set) An ellipsoidal set $E(P, x_0) \subseteq \mathbb{R}^n$ with center $x_0$ and shape matrix $P$ is a set of the form
$$E(P, x_0) = \{x \in \mathbb{R}^n : (x - x_0)^T P^{-1} (x - x_0) \le 1\}$$
where $P \in \mathbb{R}^{n \times n}$ is a positive definite matrix.

If the ellipsoid is centered at the origin, it is possible to write
$$E(P) = \{x \in \mathbb{R}^n : x^T P^{-1} x \le 1\}$$
Define $Q = P^{\frac{1}{2}}$ as the symmetric square root of the matrix $P$, which satisfies $Q^T Q = Q Q^T = P$. With the matrix $Q$, it is possible to give an alternative, dual representation of an ellipsoidal set,
$$D(Q, x_0) = \{x \in \mathbb{R}^n : x = x_0 + Qz, \; z^T z \le 1\}$$
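The equivalence of the two descriptions can be checked numerically: every image of the unit ball under $x_0 + Qz$ satisfies the quadratic inequality. The shape matrix and center below are arbitrary illustrative values:

```python
import numpy as np

# Sanity check of the two ellipsoid descriptions: every x = x0 + Q z with
# z^T z <= 1 and Q = P^{1/2} satisfies (x - x0)^T P^{-1} (x - x0) <= 1.
rng = np.random.default_rng(0)
P = np.array([[4.0, 1.0], [1.0, 2.0]])      # positive definite shape matrix
x0 = np.array([1.0, -1.0])

# symmetric square root of P via its eigendecomposition
w, U = np.linalg.eigh(P)
Q = U @ np.diag(np.sqrt(w)) @ U.T            # Q Q^T = Q^T Q = P
Pinv = np.linalg.inv(P)

for _ in range(1000):
    z = rng.normal(size=2)
    z = z / max(1.0, np.linalg.norm(z))      # force z^T z <= 1
    x = x0 + Q @ z
    assert (x - x0) @ Pinv @ (x - x0) <= 1.0 + 1e-9
print("dual representation verified")
```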
Ellipsoidal sets are very popular in the control field since they are associated with powerful tools such as the Lyapunov equation or Linear Matrix Inequalities (LMI) [59], [19]. When using ellipsoidal sets, most of the optimization problems arising in the control field can be reduced to the optimization of a linear function under LMI constraints. This class of optimization problems is convex and has become a powerful tool in many control applications.

A linear matrix inequality is a condition of the type [59], [19]
$$F(x) \succeq 0$$
where $x \in \mathbb{R}^n$ is a vector variable and the matrix $F(x)$ is affine in $x$, that is,
$$F(x) = F_0 + \sum_{i=1}^{n} F_i x_i$$
with symmetric matrices $F_i \in \mathbb{R}^{m \times m}$.

LMIs can either be feasibility conditions or constraints for optimization problems. Optimization of a linear function over LMI constraints is called semidefinite programming, which can be considered an extension of linear programming. A major benefit of using LMIs is that several polynomial-time algorithms for solving LMI problems have been developed and implemented in software packages such as LMI Lab [28], YALMIP [50], CVX [31], etc.
The Schur complement lemma is one of the most important results when working with LMIs. It states that the nonlinear conditions of the special forms
$$P(x) \succ 0, \quad R(x) - Q(x) P(x)^{-1} Q(x)^T \succ 0$$
or
$$R(x) \succ 0, \quad P(x) - Q(x)^T R(x)^{-1} Q(x) \succ 0$$
can be equivalently written as the LMI
$$\begin{bmatrix} P(x) & Q(x)^T \\ Q(x) & R(x) \end{bmatrix} \succ 0$$
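The equivalence can be illustrated numerically by comparing the definiteness of the block matrix with that of the Schur complement on random symmetric data (arbitrary test matrices, not from the text):

```python
import numpy as np

# Numerical illustration of the Schur complement lemma: with R > 0, the
# block matrix [[P, Q^T], [Q, R]] is positive definite iff the Schur
# complement P - Q^T R^{-1} Q is.  The matrices are arbitrary test data.
rng = np.random.default_rng(1)

def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

for _ in range(200):
    A = rng.normal(size=(2, 2)); P = A @ A.T + 0.1 * np.eye(2)
    B = rng.normal(size=(2, 2)); R = B @ B.T + 0.1 * np.eye(2)
    Q = rng.normal(size=(2, 2))
    block = np.block([[P, Q.T], [Q, R]])
    schur = P - Q.T @ np.linalg.inv(R) @ Q
    assert is_pd(block) == is_pd(schur)
print("Schur complement equivalence confirmed on random samples")
```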
The Schur complement thus allows one to convert certain nonlinear matrix inequalities into LMIs. For example, it is well known [44] that the support function of the ellipsoid $E(P, x_0)$, evaluated at the vector $z$, is
$$\sigma_{E(P,x_0)}(z) = z^T x_0 + \sqrt{z^T P z} \quad (1.1)$$
Based on equation (1.1), it is apparent that the ellipsoid $E(P)$ is a subset of the polyhedral set $P(f, 1) = \{x \in \mathbb{R}^n : |f^T x| \le 1\}$ with $f \in \mathbb{R}^n$ if and only if
$$f^T P f \le 1$$
or, by using the Schur complement, this condition can be rewritten as [19], [36]
$$\begin{bmatrix} 1 & f^T P \\ P f & P \end{bmatrix} \succeq 0 \quad (1.2)$$
Obviously an ellipsoidal set $E(P, x_0) \subseteq \mathbb{R}^n$ is uniquely defined by its shape matrix $P$ and its center $x_0$. Since the matrix $P$ is symmetric, the complexity of the representation is
$$\frac{n(n+1)}{2} + n = \frac{n(n+3)}{2}$$
The main drawback of ellipsoids, however, is that their fixed and symmetric structure may be too conservative, and this conservativeness is increased by the related set operations. It is well known that [44]:

- The convex hull of a set of ellipsoids is not, in general, an ellipsoid.
- The sum of two ellipsoids is not, in general, an ellipsoid.
- The difference of two ellipsoids is not, in general, an ellipsoid.
- The intersection of two ellipsoids is not, in general, an ellipsoid.
1.1.3 Polyhedral set
Polyhedral sets provide a useful geometrical representation for the linear constraints that appear in diverse fields such as control and optimization. In a convex setting, they provide a good compromise between complexity and flexibility. Due to their linear and convex nature, the basic set operations are relatively easy to implement [45], [64]. Principally, this is related to their dual (half-space/vertex) representation [54], which allows one to choose whichever formulation is best suited for a particular problem. With respect to their flexibility, it is worth noticing that any convex body can be approximated arbitrarily closely by a polytope [20].

We begin this section by recalling some theoretical concepts.
Definition 1.11. (Hyperplane) A hyperplane $H \subseteq \mathbb{R}^n$ is a set of the form
$$H = \{x \in \mathbb{R}^n : f^T x = g\}$$
where $f \in \mathbb{R}^n$, $g \in \mathbb{R}$.

Definition 1.12. (Half-space) A closed half-space $H \subseteq \mathbb{R}^n$ is a set of the form
$$H = \{x \in \mathbb{R}^n : f^T x \le g\}$$
where $f \in \mathbb{R}^n$, $g \in \mathbb{R}$.

Definition 1.13. (Polyhedral set) A convex polyhedral set $P(F, g)$ is a set of the form
$$P(F, g) = \{x \in \mathbb{R}^n : F_i x \le g_i, \; i = 1, 2, \ldots, n_1\}$$
where $F_i \in \mathbb{R}^{1 \times n}$ denotes the $i$-th row of the matrix $F \in \mathbb{R}^{n_1 \times n}$ and $g_i$ is the $i$-th component of the vector $g \in \mathbb{R}^{n_1}$.

A polyhedral set includes the origin if and only if $g \ge 0$, and includes the origin in its interior if and only if $g > 0$.

Definition 1.14. (Polytope) A polytope is a bounded polyhedral set.

Definition 1.15. (Dimension of a polytope) A polytope $P \subseteq \mathbb{R}^n$ is of dimension $d \le n$ if there exists a $d$-dimensional ball with radius $\epsilon > 0$ contained in $P$ and there exists no $(d+1)$-dimensional ball with radius $\epsilon > 0$ contained in $P$.

Definition 1.16. (Face, facet, vertex, edge) A face $F_a^i$ of the polytope $P(F, g)$ is defined as a set of the form
$$F_a^i = P \cap \{x \in \mathbb{R}^n : F_i x = g_i\}$$
The intersection of two faces of dimension $n-1$ usually gives an $(n-2)$-dimensional face. The faces of the polytope $P$ of dimension $0$, $1$ and $n-1$ are called vertices, edges and facets, respectively.
One of the fundamental properties of a polytope is that it can be presented in half-space representation, as in Definition 1.13, or in vertex representation as follows,
$$P(V) = \left\{x \in \mathbb{R}^n : x = \sum_{i=1}^{r} \alpha_i v_i, \; 0 \le \alpha_i \le 1, \; \sum_{i=1}^{r} \alpha_i = 1\right\}$$
where $v_i \in \mathbb{R}^n$ denotes the $i$-th column of the matrix $V \in \mathbb{R}^{n \times r}$.
Fig. 1.1 Half-space representation of a polytope, with half-spaces $f_i^T x \le g_i$, $i = 1, \ldots, 7$.

Fig. 1.2 Vertex representation of a polytope, with vertices $v_1, \ldots, v_7$.
This dual (half-space/vertex) representation has very practical consequences in methodological and numerical applications. Due to this duality, we are allowed to use either representation when solving a particular problem. Note that the transformation from one representation to the other may be time-consuming; several well-known algorithms exist for it: Fourier-Motzkin elimination [25], CDD [26], Equality Set Projection [39].
Note that the expression $x = \sum_{i=1}^{r} \alpha_i v_i$ with a given set of vectors $\{v_1, v_2, \ldots, v_r\}$ and
$$\sum_{i=1}^{r} \alpha_i = 1, \quad \alpha_i \ge 0$$
describes a point of the convex hull of the set of vectors $\{v_1, v_2, \ldots, v_r\}$, denoted as
$$x \in \mathrm{Conv}\{v_1, v_2, \ldots, v_r\}$$
Definition 1.17. (Simplex) A simplex $C \subseteq \mathbb{R}^n$ is an $n$-dimensional polytope which is the convex hull of its $n+1$ vertices.

For example, a 2-simplex is a triangle, a 3-simplex is a tetrahedron, and a 4-simplex is a pentachoron.
Definition 1.18. (Minimal representation) A half-space or vertex representation of a polytope $P$ is minimal if and only if the removal of any facet or any vertex would change $P$, i.e. there are no redundant half-spaces or redundant vertices.

A minimal representation of a polytope can be achieved by removing from the half-space (vertex) representation all the redundant half-spaces (vertices), whose definition is provided next.

Definition 1.19. (Redundant half-space) For a given polytope $P(F, g)$, let the polyhedral set $P(\tilde{F}, \tilde{g})$ be defined by removing the $i$-th half-space $F_i$ from the matrix $F$ together with the corresponding component $g_i$ of $g$. The half-space $F_i x \le g_i$ is redundant if and only if
$$g_i^* \le g_i$$
where
$$g_i^* = \max_{\tilde{F} x \le \tilde{g}} \{F_i x\}$$
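This redundancy test is a single linear program per half-space. A minimal sketch using `scipy.optimize.linprog`, on a hypothetical unit box with one deliberately redundant extra constraint:

```python
import numpy as np
from scipy.optimize import linprog

# LP-based redundancy test: half-space i of P(F, g) is redundant iff
# max{F_i x : F~ x <= g~} <= g_i, where (F~, g~) is the description with
# row i removed.  Hypothetical data: the unit box plus a redundant row.
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
g = np.array([1.0, 1.0, 1.0, 1.0, 3.0])     # x1 + x2 <= 3 is redundant

def is_redundant(F, g, i):
    keep = [j for j in range(len(g)) if j != i]
    # maximize F_i x  <=>  minimize -F_i x over the remaining constraints
    res = linprog(-F[i], A_ub=F[keep], b_ub=g[keep],
                  bounds=[(None, None)] * F.shape[1])
    return res.status == 0 and -res.fun <= g[i] + 1e-9

print(is_redundant(F, g, 4))  # True:  max of x1 + x2 over the box is 2 <= 3
print(is_redundant(F, g, 0))  # False: removing x1 <= 1 changes the set
```

Note that `linprog` defaults variables to be nonnegative, so the unbounded `bounds` argument is required here.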
Definition 1.20. (Redundant vertex) For a given polytope $P(V)$, let the polyhedral set $P(\tilde{V})$ be defined by removing the $i$-th vertex $v_i$ from the matrix $V$. The vertex $v_i$ is redundant if and only if
$$p_i^* \le 1$$
where
$$p_i^* = \min_{\tilde{V} p = v_i, \; p \ge 0} \{\mathbf{1}^T p\}$$
Some basic operations on polytopes will now be briefly reviewed. Note that although the focus lies on polytopes, most of the operations described here are directly, or with minor changes, applicable to polyhedral sets. Additional details on polytope computation can be found in [65], [32], [27].

Definition 1.21. (Intersection) The intersection of two polytopes $P_1 \subseteq \mathbb{R}^n$, $P_2 \subseteq \mathbb{R}^n$ is a polytope
$$P_1 \cap P_2 = \{x \in \mathbb{R}^n : x \in P_1, \; x \in P_2\}$$
Definition 1.22. (Minkowski sum) The Minkowski sum of two polytopes $P_1 \subseteq \mathbb{R}^n$, $P_2 \subseteq \mathbb{R}^n$ is a polytope
$$P_1 \oplus P_2 = \{x_1 + x_2 : x_1 \in P_1, \; x_2 \in P_2\}$$
It is well known that if $P_1$ and $P_2$ are presented in vertex representation, i.e.
$$P_1 = \mathrm{Conv}\{v_{11}, v_{12}, \ldots, v_{1p}\}, \quad P_2 = \mathrm{Conv}\{v_{21}, v_{22}, \ldots, v_{2q}\}$$
then the Minkowski sum can be computed as [65]
$$P_1 \oplus P_2 = \mathrm{Conv}\{v_{1i} + v_{2j}\}, \quad i = 1, 2, \ldots, p, \; j = 1, 2, \ldots, q$$
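The vertex formula for the Minkowski sum translates directly into code: form all pairwise vertex sums and take their convex hull. A sketch on two hypothetical axis-aligned boxes:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Minkowski sum of two polytopes in vertex representation:
# P1 ⊕ P2 = Conv{v1i + v2j}.  The two boxes are hypothetical data;
# ConvexHull discards the redundant pairwise sums.
V1 = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
V2 = 0.5 * V1                                     # a smaller box

sums = np.array([v + w for v in V1 for w in V2])  # all p*q pairwise sums
hull = ConvexHull(sums)
vertices = sums[hull.vertices]                    # minimal vertex set

print(len(vertices))             # 4: the sum of two axis-aligned boxes is a box
print(np.max(np.abs(vertices)))  # 1.5
```

Of the $p \cdot q = 16$ candidate points, only the four extreme corners survive as vertices of the sum.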
Definition 1.23. (Pontryagin difference) The Pontryagin difference of two polytopes $P_1 \subseteq \mathbb{R}^n$, $P_2 \subseteq \mathbb{R}^n$ is a polytope
$$P_1 \ominus P_2 = \{x_1 \in \mathbb{R}^n : x_1 + x_2 \in P_1, \; \forall x_2 \in P_2\}$$
Fig. 1.3 Minkowski sum $P_1 \oplus P_2$.

Fig. 1.4 Pontryagin difference $P_1 \ominus P_2$.
Note that the Pontryagin difference is not the complement of the Minkowski sum. For two polytopes $P_1$ and $P_2$, it holds only that $(P_1 \ominus P_2) \oplus P_2 \subseteq P_1$.
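When $P_1$ is in half-space form and $P_2$ is given by vertices, the Pontryagin difference only shrinks the right-hand side: each bound $g_i$ is tightened by the support of $P_2$ in direction $F_i$. A minimal sketch on two hypothetical boxes:

```python
import numpy as np

# Pontryagin difference with P1 = {x : F x <= g} in half-space form and P2
# given by its vertices: P1 ⊖ P2 = {x : F x <= g - h}, where
# h_i = max_{w in P2} F_i w is computed row-wise over the vertices of P2.
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
g = np.array([2.0, 2.0, 2.0, 2.0])                 # P1: box of radius 2
V2 = np.array([[0.5, 0.5], [0.5, -0.5], [-0.5, 0.5], [-0.5, -0.5]])

h = np.max(F @ V2.T, axis=1)     # row-wise support of P2
g_diff = g - h                   # P1 ⊖ P2 = {x : F x <= g_diff}
print(g_diff)  # [1.5 1.5 1.5 1.5]: a box of radius 1.5, as expected
```

This row-wise tightening is exactly the operation used later, in equation (1.22), to account for the disturbance in the pre-image set computation.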
Definition 1.24. (Projection) Given a polytope $P \subseteq \mathbb{R}^{n_1 + n_2}$, the orthogonal projection onto the $x_1$-space $\mathbb{R}^{n_1}$ is defined as
$$\mathrm{Proj}_{x_1}(P) = \{x_1 \in \mathbb{R}^{n_1} : \exists x_2 \in \mathbb{R}^{n_2} \text{ such that } [x_1^T \; x_2^T]^T \in P\}$$
It is well known that the Minkowski sum operation on polytopes in their half-space representation is complexity-wise equivalent to a projection [65]. Current projection methods for polytopes that can operate in general dimensions can be grouped into four classes: Fourier elimination [40], block elimination [4], vertex-based approaches and wrapping-based techniques [39].

Fig. 1.5 Projection of a 2-dimensional polytope $P$ onto the line $x_1$.

Apparently, the complexity of the representation of a polytope is not a function of the space dimension only; it may be arbitrarily large. For the half-space (vertex) representation, the complexity of a polytope is a linear function of the number of rows of the matrix $F$ (the number of columns of the matrix $V$). As far as the complexity issue is concerned, neither of these representations can be regarded as more convenient in general. One can define a polytope with relatively few vertices which nevertheless has a surprisingly large number of facets; this happens, for example, when some vertices contribute to many facets. Equally, one can define a polytope with relatively few facets which has many more vertices; this happens, for example, when some facets have many vertices.
The main advantage of polytopes is their flexibility. It is well known [20] that any convex body can be approximated arbitrarily closely by a polytope. In particular, for a given bounded, convex and closed set $S$ and for a given $\epsilon$ with $0 < \epsilon < 1$, there exists a polytope $P$ such that
$$(1 - \epsilon) S \subseteq P \subseteq S$$
for an inner approximation of the set $S$, and
$$S \subseteq P \subseteq (1 + \epsilon) S$$
for an outer approximation of the set $S$.
1.2 Set invariance theory
1.2.1 Basic definitions
Set invariance is a fundamental concept in the analysis and controller design for constrained systems, since constraint satisfaction can be guaranteed for all time if and only if the initial state is contained in an invariant set. Two types of systems will be considered in this section, namely autonomous discrete-time uncertain nonlinear systems
$$x(k+1) = f(x(k), w(k)) \quad (1.3)$$
and systems with external control inputs
$$x(k+1) = f(x(k), u(k), w(k)) \quad (1.4)$$
where $x(k) \in \mathbb{R}^n$, $u(k) \in \mathbb{R}^m$ and $w(k) \in \mathbb{R}^d$ are, respectively, the system state, the control input and the unknown disturbance.

The state vector $x(k)$, the control vector $u(k)$ and the disturbance $w(k)$ are subject to the constraints
$$x(k) \in X, \quad u(k) \in U, \quad w(k) \in W, \quad \forall k \ge 0 \quad (1.5)$$
where the sets $X$, $U$ and $W$ are assumed to be closed and bounded.
Definition 1.25. (Robust positively invariant set) [14], [41] The set $\Omega \subseteq X$ is robust positively invariant for the system (1.3) if and only if
$$f(x(k), w(k)) \in \Omega$$
for all $x(k) \in \Omega$ and for all $w(k) \in W$.

Hence if the state of the system (1.3) reaches a robust positively invariant set, it remains inside the set in spite of the disturbance $w(k)$. The term "positively" refers to the fact that only forward evolutions of the system (1.3) are considered, and will be omitted in future sections for brevity.

The maximal robust invariant set $\Omega_{\max} \subseteq X$ is a robust invariant set that contains all the robust invariant sets contained in $X$.

Definition 1.26. (Robust contractive set) [14] For a given scalar $\lambda$ with $0 \le \lambda \le 1$, the set $\Omega \subseteq X$ is robust $\lambda$-contractive for the system (1.3) if and only if
$$f(x(k), w(k)) \in \lambda\Omega$$
for all $x(k) \in \Omega$ and for all $w(k) \in W$.

Obviously, in Definition 1.26, if $\lambda = 1$ we recover robust invariance.

Definition 1.27. (Robust controlled invariant set) [14], [41] The set $C \subseteq X$ is robust controlled invariant for the system (1.4) if for all $x(k) \in C$ there exists a control value $u(k) \in U$ such that
$$x(k+1) = f(x(k), u(k), w(k)) \in C$$
for all $w(k) \in W$.

The maximal robust controlled invariant set $C_{\max} \subseteq X$ is a robust controlled invariant set that contains all the robust controlled invariant sets contained in $X$.
Definition 1.28. (Robust controlled contractive set) [14] For a given scalar $\lambda$ with $0 < \lambda \le 1$, the set $C \subseteq X$ is robust controlled $\lambda$-contractive for the system (1.4) if for all $x(k) \in C$ there exists a control value $u(k) \in U$ such that
$$x(k+1) = f(x(k), u(k), w(k)) \in \lambda C$$
for all $w(k) \in W$.
1.2.2 Ellipsoidal invariant sets
Ellipsoidal sets are very popular for robust stability analysis and controller synthesis of constrained systems, due to the computational efficiency of LMIs and the fact that their representation complexity is fixed [19], [59]. This approach, however, may lead to conservative results. To begin, let us consider the following system,
$$x(k+1) = A(k)x(k) + B(k)u(k) \quad (1.6)$$
where the matrices $A(k) \in \mathbb{R}^{n \times n}$, $B(k) \in \mathbb{R}^{n \times m}$ satisfy
$$A(k) = \sum_{i=1}^{q} \alpha_i(k) A_i, \quad B(k) = \sum_{i=1}^{q} \alpha_i(k) B_i, \quad \sum_{i=1}^{q} \alpha_i(k) = 1, \quad \alpha_i(k) \ge 0, \; i = 1, 2, \ldots, q \quad (1.7)$$
with given and fixed matrices $A_i \in \mathbb{R}^{n \times n}$ and $B_i \in \mathbb{R}^{n \times m}$, $i = 1, 2, \ldots, q$.
Remark 1.1. Matrices $A(k)$ and $B(k)$ given as
$$A(k) = \sum_{i=1}^{q_1} \alpha_i(k) A_i, \quad B(k) = \sum_{j=1}^{q_2} \beta_j(k) B_j \quad (1.8)$$
with
$$\sum_{i=1}^{q_1} \alpha_i(k) = 1, \; \alpha_i(k) \ge 0, \; i = 1, 2, \ldots, q_1, \qquad \sum_{j=1}^{q_2} \beta_j(k) = 1, \; \beta_j(k) \ge 0, \; j = 1, 2, \ldots, q_2$$
can be translated into the form of (1.7) as follows,
$$\begin{aligned}
x(k+1) &= \sum_{i=1}^{q_1} \alpha_i(k) A_i x(k) + \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \\
&= \sum_{i=1}^{q_1} \alpha_i(k) A_i x(k) + \sum_{i=1}^{q_1} \alpha_i(k) \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \\
&= \sum_{i=1}^{q_1} \alpha_i(k) \Big( A_i x(k) + \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \Big) \\
&= \sum_{i=1}^{q_1} \sum_{j=1}^{q_2} \alpha_i(k) \beta_j(k) \big( A_i x(k) + B_j u(k) \big)
\end{aligned}$$
Consider the polytope $P_c$ whose vertices are given by taking all possible combinations $\{A_i, B_j\}$ with $i = 1, 2, \ldots, q_1$ and $j = 1, 2, \ldots, q_2$. Since
$$\sum_{i=1}^{q_1} \sum_{j=1}^{q_2} \alpha_i(k) \beta_j(k) = \sum_{i=1}^{q_1} \alpha_i(k) \sum_{j=1}^{q_2} \beta_j(k) = 1$$
it is clear that $\{A(k), B(k)\}$ can be expressed as a convex combination of the vertices of $P_c$.
Both the state vector $x(k)$ and the control vector $u(k)$ are subject to the constraints
$$x(k) \in X, \; X = \{x : |F_i x| \le 1, \; i = 1, 2, \ldots, n_1\}, \qquad u(k) \in U, \; U = \{u : |u_i| \le u_{i,\max}, \; i = 1, 2, \ldots, m\} \quad (1.9)$$
where $F_i \in \mathbb{R}^{1 \times n}$ is the $i$-th row of the matrix $F \in \mathbb{R}^{n_1 \times n}$ and $u_{i,\max}$ is the $i$-th component of the vector $u_{\max} \in \mathbb{R}^{m}$. It is assumed that the matrix $F$ and the vector $u_{\max}$ are constant, with $u_{\max} > 0$, so that the origin is contained in the interior of $X$ and $U$.
Let us now consider the problem of checking robust controlled invariance. The ellipsoid $E(P) = \{x \in \mathbb{R}^n : x^T P^{-1} x \le 1\}$ is controlled invariant if and only if for all $x \in E(P)$ there exists an input $u = \phi(x) \in U$ such that
$$(A_i x + B_i \phi(x))^T P^{-1} (A_i x + B_i \phi(x)) \le 1 \quad (1.10)$$
for all $i = 1, 2, \ldots, q$.

It is well known [17] that for the time-varying and uncertain linear discrete-time system (1.6), it is sufficient to check condition (1.10) only for all $x$ on the boundary of $E(P)$, i.e. for all $x$ such that $x^T P^{-1} x = 1$. Therefore condition (1.10) can be transformed into
$$(A_i x + B_i \phi(x))^T P^{-1} (A_i x + B_i \phi(x)) \le x^T P^{-1} x, \quad i = 1, 2, \ldots, q \quad (1.11)$$
One possible choice for $u = \phi(x)$ is a linear state feedback controller $u = Kx$. By denoting $A_{ci} = A_i + B_i K$ with $i = 1, 2, \ldots, q$, condition (1.11) is equivalent to
$$x^T A_{ci}^T P^{-1} A_{ci} x \le x^T P^{-1} x, \quad i = 1, 2, \ldots, q$$
or
$$A_{ci}^T P^{-1} A_{ci} \preceq P^{-1}, \quad i = 1, 2, \ldots, q$$
By using the Schur complement, this condition can be rewritten as
$$\begin{bmatrix} P^{-1} & A_{ci}^T \\ A_{ci} & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q$$
The condition provided here is not linear in $P$. It is equivalent to
$$P - A_{ci} P A_{ci}^T \succeq 0, \quad i = 1, 2, \ldots, q$$
or, by using the Schur complement again,
$$\begin{bmatrix} P & A_{ci} P \\ P A_{ci}^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q$$
By substituting $A_{ci} = A_i + B_i K$ with $i = 1, 2, \ldots, q$, one obtains
$$\begin{bmatrix} P & A_i P + B_i K P \\ P A_i^T + P K^T B_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q$$
This condition is still nonlinear, since both $P$ and $K$ are unknown. It can, however, be re-parameterized into a linear condition by setting $Y = KP$. The above condition is then equivalent to
$$\begin{bmatrix} P & A_i P + B_i Y \\ P A_i^T + Y^T B_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q \quad (1.12)$$
Condition (1.12) is necessary and sufficient for the ellipsoid $E(P)$ with linear state feedback $u = Kx$ to be robust invariant. Concerning constraint satisfaction (1.9), based on equation (1.2) it is obvious that:

- The state constraints are satisfied if and only if $E(P)$ is a subset of $X$, hence
$$\begin{bmatrix} 1 & F_i P \\ P F_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, n_1 \quad (1.13)$$
- The input constraints are satisfied if and only if $E(P)$ is a subset of the polyhedral set $X_u$, where
$$X_u = \{x \in \mathbb{R}^n : |K_i x| \le u_{i,\max}, \; i = 1, 2, \ldots, m\}$$
and $K_i$ is the $i$-th row of the matrix $K \in \mathbb{R}^{m \times n}$, hence
$$\begin{bmatrix} u_{i,\max}^2 & K_i P \\ P K_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, m$$
By noticing that $K_i P = Y_i$, where $Y_i$ is the $i$-th row of the matrix $Y \in \mathbb{R}^{m \times n}$, one gets
$$\begin{bmatrix} u_{i,\max}^2 & Y_i \\ Y_i^T & P \end{bmatrix} \succeq 0 \quad (1.14)$$
Define the vector $T_i \in \mathbb{R}^{1 \times m}$ as
$$T_i = [0 \; 0 \; \ldots \; 0 \underbrace{1}_{i\text{-th position}} 0 \; \ldots \; 0 \; 0]$$
It is clear that $Y_i = T_i Y$. Therefore equation (1.14) can be transformed into
$$\begin{bmatrix} u_{i,\max}^2 & T_i Y \\ Y^T T_i^T & P \end{bmatrix} \succeq 0 \quad (1.15)$$
Among all the ellipsoids satisfying the invariance condition (1.12) and the constraint-satisfaction conditions (1.13), (1.15), we would like to choose the largest one. In the literature, the largeness of the ellipsoid $E(P)$ is usually measured by the determinant or the trace of the matrix $P$; see [63]. Here the trace of the matrix $P$ is chosen due to its linearity. The trace of a square matrix is defined as the sum of the elements on its main diagonal; maximizing the trace corresponds to searching for the maximal sum of the eigenvalues of the matrix. With this objective function, the problem of maximizing the robust invariant ellipsoid can be formulated as
$$J = \max_{P, Y} \{\mathrm{trace}(P)\} \quad (1.16)$$
subject to
- the invariance condition (1.12),
- the constraint-satisfaction conditions (1.13), (1.15).

Note that the solution $P$, $Y$ of problem (1.16) may lead to a controller $K = YP^{-1}$ such that the closed-loop system with matrices $A_{ci} = A_i + B_i K$, $i = 1, 2, \ldots, q$, is at the stability margin. In other words, the ellipsoid $E(P)$ thus obtained might not be contractive (although being invariant), and the system trajectories might not converge to the origin. In order to ensure $x(k) \to 0$ as $k \to \infty$, it is required that for all $x$ on the boundary of $E(P)$, i.e. for all $x$ such that $x^T P^{-1} x = 1$,
$$(A_i x + B_i \phi(x))^T P^{-1} (A_i x + B_i \phi(x)) < 1, \quad i = 1, 2, \ldots, q$$
With the same argument as in the discussion above, one can conclude that the ellipsoid $E(P)$ with the linear controller $u = Kx$ is robust contractive if the following set of LMI conditions is satisfied,
$$\begin{bmatrix} P & A_i P + B_i Y \\ P A_i^T + Y^T B_i^T & P \end{bmatrix} \succ 0, \quad i = 1, 2, \ldots, q \quad (1.17)$$
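For a fixed stabilizing gain, the core matrix inequality behind these LMIs, $A_{ci} P A_{ci}^T \preceq P$, can be verified numerically without a semidefinite solver by solving a discrete Lyapunov equation. The sketch below uses a single hypothetical closed-loop vertex (so $q = 1$); the matrix $A_c$ is arbitrary illustrative data:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Check the invariance condition A_c P A_c^T ⪯ P behind LMI (1.12) for a
# single hypothetical closed-loop vertex A_c.  Solving the discrete
# Lyapunov equation A_c P A_c^T - P = -I yields a P > 0 for which the
# condition holds with margin I.
Ac = np.array([[0.5, 0.4], [-0.3, 0.6]])    # Schur stable: |eig| < 1
assert np.all(np.abs(np.linalg.eigvals(Ac)) < 1)

P = solve_discrete_lyapunov(Ac, np.eye(2))  # P - A_c P A_c^T = I
assert np.all(np.linalg.eigvalsh(P) > 0)    # P is positive definite

gap = P - Ac @ P @ Ac.T                     # should equal the identity
print(np.allclose(gap, np.eye(2)))          # True: E(P) is contractively invariant
```

Solving the actual optimization (1.16) additionally requires a semidefinite programming tool (e.g. one of the packages cited above); this snippet only checks the feasibility of one candidate $P$.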
1.2.3 Polyhedral invariant sets
The problem of computing the domain of attraction using polyhedral sets is addressed in this section. Under linear constraints on the state and control variables, polyhedral invariant sets are preferable to ellipsoidal invariant sets, since they offer a better approximation of the domain of attraction [24], [35], [12]. To begin, let us consider the following uncertain discrete-time system,
$$x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k) \quad (1.18)$$
where $x(k) \in \mathbb{R}^n$, $u(k) \in \mathbb{R}^m$ and $w(k) \in \mathbb{R}^d$ are, respectively, the state, input and disturbance vectors. The matrices $A(k) \in \mathbb{R}^{n \times n}$, $B(k) \in \mathbb{R}^{n \times m}$ and $D(k) \in \mathbb{R}^{n \times d}$ satisfy
$$A(k) = \sum_{i=1}^{q} \alpha_i(k) A_i, \quad B(k) = \sum_{i=1}^{q} \alpha_i(k) B_i, \quad D(k) = \sum_{i=1}^{q} \alpha_i(k) D_i, \quad \sum_{i=1}^{q} \alpha_i(k) = 1, \; \alpha_i(k) \ge 0$$
where the matrices $A_i$, $B_i$ and $D_i$ are given.

The state, the control and the disturbance are subject to the following polytopic constraints,
$$x(k) \in X, \; X = \{x \in \mathbb{R}^n : F_x x \le g_x\}, \qquad u(k) \in U, \; U = \{u \in \mathbb{R}^m : F_u u \le g_u\}, \qquad w(k) \in W, \; W = \{w \in \mathbb{R}^d : F_w w \le g_w\} \quad (1.19)$$
where the matrices $F_x$, $F_u$, $F_w$ and the vectors $g_x$, $g_u$ and $g_w$ are assumed to be constant, with $g_x > 0$, $g_u > 0$, $g_w > 0$, so that the origin is contained in the interior of $X$, $U$ and $W$. Recall that the inequalities are element-wise.
When the control input is a state feedback $u(k) = Kx(k)$, the closed-loop system is
$$x(k+1) = A_c(k)x(k) + D(k)w(k) \quad (1.20)$$
where
$$A_c(k) = A(k) + B(k)K \in \mathrm{Conv}\{A_{ci}\}$$
with $A_{ci} = A_i + B_i K$, $i = 1, 2, \ldots, q$. The state constraints of the closed-loop system are of the form
$$x \in X_c, \quad X_c = \{x \in \mathbb{R}^n : F_c x \le g_c\} \quad (1.21)$$
where
$$F_c = \begin{bmatrix} F_x \\ F_u K \end{bmatrix}, \quad g_c = \begin{bmatrix} g_x \\ g_u \end{bmatrix}$$
The following definition plays an important role in computing a robust invariant set for the system (1.20) under the constraints (1.21).

Definition 1.29. (Pre-image set) For the system (1.20), the one-step admissible pre-image set of the set $X_c$ is a set $X_c^1 \subseteq X_c$ such that for all $x \in X_c^1$ it holds that
$$A_{ci} x + D_i w \in X_c$$
for all $w \in W$ and all $i = 1, 2, \ldots, q$. The pre-image set $X_c^1 = \mathrm{Pre}(X_c)$ can be computed as [16], [13]
$$X_c^1 = \left\{x \in X_c : F_c A_{ci} x \le g_c - \max_{w \in W}\{F_c D_i w\}, \; i = 1, 2, \ldots, q\right\} \quad (1.22)$$
where the maximization is taken row-wise.
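Formula (1.22) is straightforward to implement: stack the rows $F_c A_{ci}$ and tighten the right-hand side by the row-wise support of the disturbance. The sketch below uses a hypothetical single-vertex instance ($q = 1$) with a box disturbance, for which $\max_{w \in W} F_c D w$ reduces to $w_{\max} \|{\cdot}\|_1$ per row:

```python
import numpy as np

# One-step admissible pre-image set (1.22) in half-space form.  For box
# disturbances W = {w : |w|_inf <= wmax}, the row-wise tightening
# max_{w in W} Fc D w equals wmax * ||row of Fc D||_1.  Hypothetical data.
Ac = np.array([[1.0, 0.2], [0.0, 0.8]])
D = np.eye(2)
Fc = np.vstack([np.eye(2), -np.eye(2)])   # Xc: box |x|_inf <= 1
gc = np.ones(4)
wmax = 0.1

tightening = wmax * np.sum(np.abs(Fc @ D), axis=1)
# X1c = {x in Xc : Fc Ac x <= gc - tightening}
F1 = np.vstack([Fc, Fc @ Ac])
g1 = np.concatenate([gc, gc - tightening])

x = np.array([0.5, 0.5])                  # a test point
print(bool(np.all(F1 @ x <= g1)))         # True: x lies in the pre-image set
```

For $q > 1$ vertices, one simply stacks one block of rows per vertex, exactly as in (1.22).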
Example 1.1. Consider the following uncertain system,
$$x(k+1) = A(k)x(k) + Bu(k) + Dw(k)$$
where
$$A(k) = \alpha(k)A_1 + (1 - \alpha(k))A_2, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad D = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
with $0 \le \alpha(k) \le 1$ and
$$A_1 = \begin{bmatrix} 1.1 & 1 \\ 0 & 1 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.6 & 1 \\ 0 & 1 \end{bmatrix}$$
The constraints on the state, on the input and on the disturbance are
$$x \in X, \; X = \{x \in \mathbb{R}^n : F_x x \le g_x\}, \qquad u \in U, \; U = \{u \in \mathbb{R}^m : F_u u \le g_u\}, \qquad w \in W, \; W = \{w \in \mathbb{R}^d : F_w w \le g_w\}$$
where
$$F_x = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \; g_x = \begin{bmatrix} 3 \\ 3 \\ 3 \\ 3 \end{bmatrix}, \qquad F_u = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \; g_u = \begin{bmatrix} 2 \\ 2 \end{bmatrix}, \qquad F_w = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \; g_w = \begin{bmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{bmatrix}$$
The feedback controller is chosen as
$$K = [-0.3856 \; -1.0024]$$
With this feedback controller, the closed-loop matrices are
$$A_{c1} = \begin{bmatrix} 1.1000 & 1.0000 \\ -0.3856 & -0.0024 \end{bmatrix}, \quad A_{c2} = \begin{bmatrix} 0.6000 & 1.0000 \\ -0.3856 & -0.0024 \end{bmatrix}$$
The state constraint set $X_c$ is
$$X_c = \left\{x \in \mathbb{R}^n : \begin{bmatrix} 1.0000 & 0 \\ -1.0000 & 0 \\ -0.3590 & -0.9333 \\ 0.3590 & 0.9333 \end{bmatrix} x \le \begin{bmatrix} 3.0000 \\ 3.0000 \\ 0.9311 \\ 0.9311 \end{bmatrix}\right\}$$
Based on equation (1.22), the one-step admissible pre-image set $X_c^1$ of the set $X_c$ is given by
$$X_c^1 = \left\{x \in \mathbb{R}^n : \begin{bmatrix} F_c \\ F_c A_{c1} \\ F_c A_{c2} \end{bmatrix} x \le \begin{bmatrix} g_c \\ g_c - \max_{w \in W}\{F_c w\} \\ g_c - \max_{w \in W}\{F_c w\} \end{bmatrix}\right\} \quad (1.23)$$
After removing redundant inequalities, the set $X_c^1$ can be represented as
$$X_c^1 = \left\{x \in \mathbb{R}^n : \begin{bmatrix} 1.0000 & 0 \\ -1.0000 & 0 \\ -0.3590 & -0.9333 \\ 0.3590 & 0.9333 \\ 0.7399 & 0.6727 \\ -0.7399 & -0.6727 \\ -0.3753 & -0.9269 \\ 0.3753 & 0.9269 \end{bmatrix} x \le \begin{bmatrix} 3.0000 \\ 3.0000 \\ 0.9311 \\ 0.9311 \\ 1.8835 \\ 1.8835 \\ 1.7474 \\ 1.7474 \end{bmatrix}\right\}$$
The sets $X$, $X_c$ and $X_c^1$ are presented in Figure 1.6.
It is clear that a set $\Omega \subseteq X_c$ is robust invariant if it equals its one-step admissible pre-image set, that is, if for all $x \in \Omega$ and for all $w \in W$ it holds that
$$A_{ci} x + D_i w \in \Omega$$
for all $i = 1, 2, \ldots, q$. Based on this observation, the following algorithm can be used for computing a robust invariant set for the system (1.20) under the constraints (1.21).

Procedure 2.1: Robust invariant set computation
Input: The matrices $A_{c1}, A_{c2}, \ldots, A_{cq}$, $D_1, D_2, \ldots, D_q$ and the sets $X_c$, $W$.
Fig. 1.6 One-step pre-image set for Example 1.1.
Output: The robust invariant set $\Omega$.
1. Set $i = 0$, $F_0 = F_c$, $g_0 = g_c$ and $X_0 = \{x \in \mathbb{R}^n : F_0 x \le g_0\}$.
2. Set $X_i = X_0$.
3. Eliminate redundant inequalities of the following polytope,
$$P = \left\{x \in \mathbb{R}^n : \begin{bmatrix} F_0 \\ F_0 A_{c1} \\ \vdots \\ F_0 A_{cq} \end{bmatrix} x \le \begin{bmatrix} g_0 \\ g_0 - \max_{w \in W}\{F_0 D_1 w\} \\ \vdots \\ g_0 - \max_{w \in W}\{F_0 D_q w\} \end{bmatrix}\right\}$$
4. Set $X_0 = P$.
5. If $X_0 = X_i$, then stop and set $\Omega = X_0$. Else continue.
6. Set $i = i + 1$ and go to step 2.
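The iteration in Procedure 2.1 can be sketched compactly for the disturbance-free, single-vertex case ($q = 1$, $w = 0$): repeatedly add the pre-image inequalities and stop when every candidate row is redundant, deciding redundancy by one LP per row. All system data is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of Procedure 2.1 for q = 1, w = 0: iterate X <- X ∩ Pre(X) until
# no new half-space cuts the current set.  Redundancy of a candidate row
# (f, g) w.r.t. {F x <= G} is decided by the LP max{f^T x : F x <= G} <= g.
def is_redundant(F, G, f, g):
    res = linprog(-f, A_ub=F, b_ub=G, bounds=[(None, None)] * F.shape[1])
    return res.status == 0 and -res.fun <= g + 1e-9

def invariant_set(Ac, F0, g0, max_iter=50):
    F, G = F0.copy(), g0.copy()
    for _ in range(max_iter):
        new_F, new_g = [], []
        for f, g in zip(F @ Ac, G):          # candidate pre-image rows
            if not is_redundant(F, G, f, g):
                new_F.append(f); new_g.append(g)
        if not new_F:                        # fixed point: X = Pre(X) ∩ X
            return F, G
        F = np.vstack([F, new_F]); G = np.concatenate([G, new_g])
    raise RuntimeError("no fixed point within max_iter")

Ac = np.array([[0.8, 0.5], [-0.4, 0.6]])     # stable closed-loop matrix
F0 = np.vstack([np.eye(2), -np.eye(2)])      # state constraints |x|_inf <= 1
g0 = np.ones(4)
F, G = invariant_set(Ac, F0, g0)
print(F.shape[0])  # 6 half-spaces describe the invariant set here
```

The full procedure with disturbances and several vertices adds, per vertex, the row-wise tightening term $\max_{w \in W} F_0 D_i w$ exactly as in step 3.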
The natural question concerning procedure 2.1 is whether there exists a finite index $i$ such that $X_0 = X_i$. In the absence of disturbances, the following theorem holds [15].

Theorem 1.1. [15] Assume that the system (1.20) is robustly asymptotically stable. Then there exists a finite index $i = i_{\max}$ such that $X_0 = X_i$.

Apparently, the sensitive part of procedure 2.1 is step 5. Checking the equality of two polytopes $X_0$ and $X_i$ is computationally demanding, i.e. one has to check both $X_0 \subseteq X_i$ and $X_i \subseteq X_0$. Note that if at step $i$ of procedure 2.1 the set is invariant, then the following set of inequalities
$$P_r = \left\{x \in \mathbb{R}^n : \begin{bmatrix} F_0 A_{c1} \\ F_0 A_{c2} \\ \vdots \\ F_0 A_{cq} \end{bmatrix} x \le \begin{bmatrix} g_0 - \max_{w \in W}\{F_0 D_1 w\} \\ g_0 - \max_{w \in W}\{F_0 D_2 w\} \\ \vdots \\ g_0 - \max_{w \in W}\{F_0 D_q w\} \end{bmatrix}\right\}$$
is redundant with respect to the set
$$\Omega = \{x \in \mathbb{R}^n : F_0 x \le g_0\}$$
Hence procedure 2.1 can be modified for computing a robust invariant set as follows.
Procedure 2.2: Robust invariant set computation
Input: The matrices A^c_1, A^c_2, ..., A^c_q, D_1, D_2, ..., D_q and the sets X^c, W
Output: The robust invariant set Ω

1. Set i = 0, F0 = Fc, g0 = gc and X0 = {x ∈ Rⁿ : F0 x ≤ g0}.
2. Eliminate redundant inequalities of the following polytope

       P = { x ∈ Rⁿ : [ F0; F0 A^c_1; F0 A^c_2; ...; F0 A^c_q ] x ≤
                      [ g0; g0 − max_{w∈W}{F0 D_1 w}; g0 − max_{w∈W}{F0 D_2 w}; ...; g0 − max_{w∈W}{F0 D_q w} ] }

   starting from the following set of inequalities

       { x ∈ Rⁿ : F0 A^c_j x ≤ g0 − max_{w∈W}{F0 D_j w} }

   with j = q, q−1, ..., 1.
3. If all of these inequalities are redundant, then stop and set Ω = X0. Else continue.
4. Eliminate redundant inequalities of the polytope P for the set of inequalities

       { x ∈ Rⁿ : F0 x ≤ g0 }

5. Set X0 = P.
6. Set i = i + 1 and go to step 2.
It is well known [29], [43], [17] that the set resulting from Procedure 2.1 or Procedure 2.2 turns out to be the maximal robust invariant set for system (1.20) with respect to the constraints (1.19), that is, Ω = Ω_max.
Example 1.2. Consider the uncertain system in example 1.1 with the same constraints on the state, on the input and on the disturbance. Applying Procedure 2.2, the maximal robust invariant set is obtained as
    Ω_max = { x ∈ Rⁿ :
        [  0.3590   0.9333 ]      [ 0.9311 ]
        [ -0.3590  -0.9333 ]      [ 0.9311 ]
        [  0.6739   0.7388 ]      [ 1.2075 ]
        [ -0.6739  -0.7388 ]  x ≤ [ 1.2075 ]
        [  0.8979   0.4401 ]      [ 1.7334 ]
        [ -0.8979  -0.4401 ]      [ 1.7334 ]
        [  0.3753   0.9269 ]      [ 1.7474 ]
        [ -0.3753  -0.9269 ]      [ 1.7474 ]  }
The sets X, X^c and Ω_max are depicted in Figure 1.7.
Fig. 1.7 Maximal robust invariant set Ω_max for example 1.2 (axes x1, x2; sets X, X^c, Ω_max shown).
Definition 1.30. (One step robust controlled set) Given the polytopic system (1.18), the one step robust controlled set of the set C0 = {x ∈ Rⁿ : F0 x ≤ g0} is given by all states that can be steered into C0 in one step when a suitable control action is applied. The one step robust controlled set, denoted C1, can be shown to be [16], [13]

    C1 = { x ∈ Rⁿ : ∃ u ∈ U : F0 (A_i x + B_i u) ≤ g0 − max_{w∈W}{F0 D_i w} }      (1.24)

for all w ∈ W and for all i = 1, 2, ..., q.
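Membership of a given state in C1 of (1.24) is a feasibility test in u alone. For a scalar input the admissible controls form an interval intersection, so no LP solver is needed. A sketch under assumed toy data (one vertex q = 1, a unit-box target C0, and a precomputed disturbance term; none of these numbers come from the examples above):

```python
import numpy as np

# Assumed toy data: q = 1 vertex, scalar input, C0 the unit box
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
F0 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
g0 = np.ones(4)
w_term = 0.05 * np.ones(4)          # stands in for max_{w in W} F0 D w (assumed)
u_lo, u_hi = -1.0, 1.0              # U = [-1, 1]

def in_one_step_controlled_set(x):
    """Eq. (1.24) for m = 1: does some u in U give F0 (A x + B u) <= g0 - w_term?"""
    lo, hi = u_lo, u_hi
    rhs = g0 - w_term - F0 @ (A @ np.asarray(x, dtype=float))
    coef = (F0 @ B).ravel()
    for c, r in zip(coef, rhs):
        if abs(c) < 1e-12:
            if r < -1e-12:          # row does not involve u and is violated
                return False
        elif c > 0:
            hi = min(hi, r / c)     # row gives an upper bound on u
        else:
            lo = max(lo, r / c)     # row gives a lower bound on u
    return lo <= hi + 1e-12
```

For m > 1 the same test becomes a linear-programming feasibility problem in u.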
Remark 1.2. If the set C0 is robust controlled invariant, then C0 ⊆ C1. Hence C1 is a robust controlled invariant set.
Recall that the set Ω_max is a maximal robust invariant set. Define CN as the set of all states that can be steered to Ω_max in no more than N steps along an admissible
trajectory, i.e. a trajectory satisfying control, state and disturbance constraints. This
set can be generated recursively by the following procedure
Procedure 2.3: Robust controlled invariant set computation
Input: The matrices A_1, A_2, ..., A_q, B_1, B_2, ..., B_q, D_1, D_2, ..., D_q and the sets X, U, W and the maximal robust invariant set Ω_max
Output: The robust controlled invariant set CN

1. Set i = 0 and C0 = Ω_max, and let the matrices F0, g0 be the half-space representation of the set C0, i.e. C0 = {x ∈ Rⁿ : F0 x ≤ g0}.
2. Compute the expanded set Pi ⊂ R^{n+m}

       Pi = { (x, u) ∈ R^{n+m} : [ Fi(A_1 x + B_1 u); Fi(A_2 x + B_2 u); ...; Fi(A_q x + B_q u) ] ≤
                                 [ gi − max_{w∈W}{Fi D_1 w}; gi − max_{w∈W}{Fi D_2 w}; ...; gi − max_{w∈W}{Fi D_q w} ] }

3. Compute the projection of Pi on Rⁿ

       Pi^n = { x ∈ Rⁿ : ∃ u ∈ U such that (x, u) ∈ Pi }

4. Set

       Ci+1 = Pi^n ∩ X

   and let the matrices Fi+1, gi+1 be the half-space representation of the set Ci+1, i.e.

       Ci+1 = { x ∈ Rⁿ : Fi+1 x ≤ gi+1 }

5. If Ci+1 = Ci, then stop and set CN = Ci. Else continue.
6. If i = N, then stop. Else continue.
7. Set i = i + 1 and go to step 2.
As a consequence of the fact that Ω_max is a robust invariant set, it follows that Ci−1 ⊆ Ci for each i; hence each Ci is a robust controlled invariant set, and the sets form a sequence of nested polytopes.
Note that the complexity of the set CN does not have any analytic dependence on N and may increase without bound, thus placing a practical limitation on the choice of N.
The set CN resulting from Procedure 2.3 is robust controlled invariant, in the sense that for all x(k) ∈ CN there exists an input u(k) ∈ Rᵐ such that x(k+1) ∈ CN for all unknown admissible α_i(k) and for all w(k) ∈ W. If the parameters α_i(k) are not known a priori, but their instantaneous value is available at each sampling
period, then the following procedure can be used for reducing conservativeness in computing CN [16].
Procedure 2.4: Controlled invariant set computation
Input: The matrices A_1, A_2, ..., A_q, B_1, B_2, ..., B_q, D_1, D_2, ..., D_q and the sets X, U, W and the invariant set Ω_max
Output: The controlled invariant set CN

1. Set i = 0 and C0 = Ω_max, and let the matrices F0, g0 be the half-space representation of the set C0, i.e. C0 = {x ∈ Rⁿ : F0 x ≤ g0}.
2. Compute the expanded sets Pij ⊂ R^{n+m}

       Pij = { (x, u) ∈ R^{n+m} : Fi(A_j x + B_j u) ≤ gi − max_{w∈W}{Fi D_j w} },   j = 1, 2, ..., q

3. Compute the projection of Pij on Rⁿ

       Pij^n = { x ∈ Rⁿ : ∃ u ∈ U such that (x, u) ∈ Pij },   j = 1, 2, ..., q

4. Set

       Ci+1 = X ∩ ⋂_{j=1}^{q} Pij^n

   and let the matrices Fi+1, gi+1 be the half-space representation of the set Ci+1, i.e.

       Ci+1 = { x ∈ Rⁿ : Fi+1 x ≤ gi+1 }

5. If Ci+1 = Ci, then stop and set CN = Ci. Else continue.
6. If i = N, then stop. Else continue.
7. Set i = i + 1 and go to step 2.
For a better understanding of the two procedures 2.3 and 2.4, it is important to underline where the one step controlled set is calculated. In Procedure 2.3 this set is represented by Pi^n, while in Procedure 2.4 the one step controlled set corresponds to the intersection ⋂_{j=1}^{q} Pij^n.
Example 1.3. Consider the uncertain system in example 1.1. The constraints on the
state, on the input and on the disturbance are the same.
Using Procedure 2.3, one obtains the robust controlled invariant sets CN as shown in Figure 1.8 with N = 1 and N = 7. The set C1 is the set of all states that can be steered into Ω_max in one step when a suitable control action is applied. The set C7 is the set of all states that can be steered into Ω_max in seven steps when a suitable control action is applied. Note that C7 = C8, so C7 is the maximal robust controlled invariant set.
The set C7 can be presented in half-space representation as
Fig. 1.8 Robust controlled invariant set for example 1.3 (axes x1, x2; sets X, C0 = Ω_max, C1, C7 shown).
    C7 = { x ∈ Rⁿ :
        [  0.3731   0.9278 ]      [ 1.3505 ]
        [ -0.3731  -0.9278 ]      [ 1.3505 ]
        [  0.4992   0.8665 ]      [ 1.3946 ]
        [ -0.4992  -0.8665 ]      [ 1.3946 ]
        [  0.1696   0.9855 ]      [ 1.5289 ]
        [ -0.1696  -0.9855 ]  x ≤ [ 1.5289 ]
        [  0.2142   0.9768 ]      [ 1.4218 ]
        [ -0.2142  -0.9768 ]      [ 1.4218 ]
        [  0.7399   0.6727 ]      [ 1.8835 ]
        [ -0.7399  -0.6727 ]      [ 1.8835 ]
        [  1.0000   0      ]      [ 3.0000 ]
        [ -1.0000   0      ]      [ 3.0000 ]  }
1.3 Enlarging the domain of attraction
This section presents an original contribution on estimating the domain of attraction for uncertain and time-varying linear discrete-time systems with a saturated input. Ellipsoidal and polyhedral sets will be used for characterizing the domain of attraction. Ellipsoidal sets offer a simple characterization as the solution of an LMI problem, while polyhedral sets offer a better approximation of the domain of attraction.
1.3.1 Problem formulation
Consider the following time-varying or uncertain linear discrete-time system
    x(k+1) = A(k)x(k) + B(k)u(k)      (1.25)

where

    A(k) = Σ_{i=1}^{q} α_i(k) A_i,   B(k) = Σ_{i=1}^{q} α_i(k) B_i,
    Σ_{i=1}^{q} α_i(k) = 1,   α_i(k) ≥ 0      (1.26)
with given matrices A_i ∈ R^{n×n} and B_i ∈ R^{n×m}, i = 1, 2, ..., q. Both the state vector x(k) and the control vector u(k) are subject to the constraints

    x(k) ∈ X,  X = { x ∈ Rⁿ : F_i x ≤ g_i },  i = 1, 2, ..., n1
    u(k) ∈ U,  U = { u ∈ Rᵐ : u_il ≤ u_i ≤ u_iu },  i = 1, 2, ..., m      (1.27)

where F_i ∈ R^{1×n} is the i-th row of the matrix Fx ∈ R^{n1×n}, g_i is the i-th component of the vector gx ∈ R^{n1×1}, and u_il and u_iu are respectively the i-th components of the vectors u_l and u_u, which are the lower and upper bounds of the input u. It is assumed that the matrix Fx and the vectors u_l ∈ R^{m×1}, u_u ∈ R^{m×1} are constant with u_l < 0 and u_u > 0, such that the origin is contained in the interior of X and U.
Assume that, using established results in control theory (LQR, LQG, LMI based, etc.), one can find a feedback gain K ∈ R^{m×n} such that

    u(k) = Kx(k)      (1.28)

robustly quadratically stabilizes the system (1.25). We would like to estimate the domain of attraction of the origin for the closed loop system

    x(k+1) = A(k)x(k) + B(k) sat(Kx(k))      (1.29)
1.3.2 Saturation nonlinearity modeling - A linear differential inclusion approach
In this section, the linear differential inclusion approach used for modeling the saturation function is briefly reviewed. This modeling framework was first proposed by Hu et al. in [36], [37], [38]; its generalization was then developed by Alamo et al. [1], [2]. The main idea of the differential inclusion approach is to use an auxiliary vector variable v, and to compose the output of the saturation function as a convex combination of the actual control signal u and v.
Fig. 1.9 The saturation function (input u, output sat(u), bounds u_l and u_u).
The saturation function is defined as follows

    sat(u_i) = u_il   if u_i ≤ u_il
             = u_i    if u_il ≤ u_i ≤ u_iu      (1.30)
             = u_iu   if u_iu ≤ u_i

for i = 1, 2, ..., m, where u_il and u_iu are respectively the lower bound and the upper bound of u_i.
To simply present the approach, let us first consider the case when m = 1. In this case u and v are scalars. Clearly, if

    u_l ≤ v ≤ u_u      (1.31)

then the saturation function can be rewritten as a convex combination of u and v

    sat(u) = λu + (1 − λ)v      (1.32)

with 0 ≤ λ ≤ 1, or

    sat(u) = Conv{u, v}      (1.33)
Figure 1.10 illustrates this fact.
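For the scalar case this decomposition is available in closed form: solving (1.32) for λ gives λ = (sat(u) − v)/(u − v) whenever u ≠ v, and (1.31) guarantees λ ∈ [0, 1]. A small sketch (the numeric values are chosen for illustration only):

```python
def sat(u, ul=-1.0, uu=1.0):
    """Scalar saturation (1.30) with bounds ul < 0 < uu."""
    return min(max(u, ul), uu)

def convex_lambda(u, v):
    """The lambda of (1.32): sat(u) = lambda*u + (1 - lambda)*v, for u != v."""
    return (sat(u) - v) / (u - v)

lam = convex_lambda(3.0, 0.5)   # u = 3 saturates at 1; v = 0.5 is within bounds
```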
Analogously, for m = 2 and v such that

    u_1l ≤ v_1 ≤ u_1u,   u_2l ≤ v_2 ≤ u_2u      (1.34)

the saturation function can be expressed as

    sat(u) = λ_1 [u_1; u_2] + λ_2 [u_1; v_2] + λ_3 [v_1; u_2] + λ_4 [v_1; v_2]      (1.35)

where
Fig. 1.10 Linear differential inclusion approach (Case 1: u ≤ u_l gives sat(u) = u_l; Case 2: u_l ≤ u ≤ u_u gives sat(u) = u; Case 3: u_u ≤ u gives sat(u) = u_u).
    Σ_{i=1}^{4} λ_i = 1,   λ_i ≥ 0      (1.36)
or, equivalently

    sat(u) = Conv{ [u_1; u_2], [u_1; v_2], [v_1; u_2], [v_1; v_2] }      (1.37)
Denote now D_m as the set of m×m diagonal matrices whose diagonal elements are either 0 or 1. For example, if m = 2 then

    D_2 = { [0 0; 0 0], [1 0; 0 0], [0 0; 0 1], [1 0; 0 1] }

There are 2^m elements in D_m. Denote each element of D_m as E_i, i = 1, 2, ..., 2^m, and define E_i^- = I − E_i. For example, if

    E_1 = [0 0; 0 0]

then

    E_1^- = [1 0; 0 1] − [0 0; 0 0] = [1 0; 0 1]

Clearly, if E_i ∈ D_m, then E_i^- is also in D_m. The generalization of the results (1.33)–(1.37) is reported in the following lemma [36], [37], [38].
Lemma 1.1. [37] Consider two vectors u ∈ Rᵐ and v ∈ Rᵐ such that u_il ≤ v_i ≤ u_iu for all i = 1, 2, ..., m. Then it holds that
    sat(u) ∈ Conv{ E_i u + E_i^- v },  i = 1, 2, ..., 2^m      (1.38)

Consequently, there exist λ_i, i = 1, 2, ..., 2^m, with

    λ_i ≥ 0,   Σ_{i=1}^{2^m} λ_i = 1

such that

    sat(u) = Σ_{i=1}^{2^m} λ_i (E_i u + E_i^- v)
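Lemma 1.1 is constructive: choosing component-wise ratios γ_i with sat(u_i) = γ_i u_i + (1 − γ_i) v_i, and taking products of the γ_i over the 0/1 patterns of D_m, yields an explicit set of λ_i. A numpy sketch for m = 2 (the bounds and the test vector are assumptions of this illustration):

```python
import itertools
import numpy as np

m = 2
u_max = np.ones(m)
sat = lambda u: np.clip(u, -u_max, u_max)

# the 2^m diagonal 0/1 matrices forming D_m
E_list = [np.diag(np.array(d, dtype=float))
          for d in itertools.product((0, 1), repeat=m)]

def lemma_decomposition(u, v):
    """Explicit lambdas with sat(u) = sum_i lambda_i (E_i u + E_i^- v)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    s = sat(u)
    gamma = np.ones(m)
    mask = np.abs(u - v) > 1e-12
    gamma[mask] = (s[mask] - v[mask]) / (u[mask] - v[mask])
    lambdas, points = [], []
    for E in E_list:
        d = np.diag(E)
        lambdas.append(np.prod(np.where(d == 1, gamma, 1.0 - gamma)))
        points.append(E @ u + (np.eye(m) - E) @ v)
    return np.array(lambdas), np.array(points)

u = np.array([2.0, -5.0])           # both components saturate
lam, pts = lemma_decomposition(u, np.zeros(m))
```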
1.3.3 Enlarging the domain of attraction - Ellipsoidal set approach
The aim of this section is twofold. First, we provide an invariance condition of ellipsoidal sets for discrete-time linear time-varying or uncertain systems with a saturated input and state constraints. This invariance condition is an extended version of the previously published results in [37] to the robust case. Second, we propose a method for computing a nonlinear controller u(k) = sat(Kx(k)) which makes a given ellipsoid invariant.
For simplicity of exposition, consider the case in which the upper bound and the lower bound are opposite, precisely

    u_l = −u_max,   u_u = u_max

It is assumed that the polyhedral constraint set X is symmetric with g_i = 1 for all i = 1, 2, ..., n1. Clearly this assumption involves no loss of generality, since

    F_i x ≤ g_i  ⟺  (F_i / g_i) x ≤ 1

for all g_i > 0.
For a matrix H ∈ R^{m×n}, define X^c as the intersection between the state constraint set X and the polyhedral set F(H, u_max) = { x : |Hx| ≤ u_max }, i.e.

    X^c = { x ∈ Rⁿ : [ Fx; H; −H ] x ≤ [ 1; u_max; u_max ] }

We are now ready to state the main result of this section.
Theorem 1.2. If there exist a symmetric matrix P ∈ R^{n×n} and a matrix H ∈ R^{m×n} such that

    [ P                                    {A_i + B_i(E_j K + E_j^- H)} P ]
    [ P {A_i + B_i(E_j K + E_j^- H)}ᵀ      P                              ]  ⪰ 0      (1.39)
7/31/2019 Thesis Nam
43/87
1.3 Enlarging the domain of attraction 29
for all i = 1, 2, ..., q, j = 1, 2, ..., 2^m, and E(P) ⊆ X^c, then the ellipsoid E(P) is a robust invariant set.
Proof. Assume that there exist matrices P and H such that the conditions (1.39) are satisfied. Based on Lemma 1.1 and by choosing v = Hx with |Hx| ≤ u_max, one has
    sat(Kx) = Σ_{j=1}^{2^m} λ_j (E_j K x + E_j^- H x)

and subsequently

    x(k+1) = Σ_{i=1}^{q} α_i(k) [ A_i + B_i Σ_{j=1}^{2^m} λ_j (E_j K + E_j^- H) ] x(k)
           = Σ_{i=1}^{q} α_i(k) [ Σ_{j=1}^{2^m} λ_j A_i + B_i Σ_{j=1}^{2^m} λ_j (E_j K + E_j^- H) ] x(k)
           = Σ_{i=1}^{q} α_i(k) Σ_{j=1}^{2^m} λ_j [ A_i + B_i (E_j K + E_j^- H) ] x(k)
           = Σ_{i=1,j=1}^{q,2^m} α_i(k) λ_j [ A_i + B_i (E_j K + E_j^- H) ] x(k) = A_c(k) x(k)

where

    A_c(k) = Σ_{i=1,j=1}^{q,2^m} α_i(k) λ_j [ A_i + B_i (E_j K + E_j^- H) ]
From the fact that

    Σ_{i=1,j=1}^{q,2^m} α_i(k) λ_j = ( Σ_{i=1}^{q} α_i(k) ) ( Σ_{j=1}^{2^m} λ_j ) = 1

it is clear that A_c(k) belongs to the polytope P_c, the vertices of which are given by taking all possible combinations of A_i + B_i(E_j K + E_j^- H), where i = 1, 2, ..., q and j = 1, 2, ..., 2^m.
The ellipsoid E(P) = { x ∈ Rⁿ : xᵀ P⁻¹ x ≤ 1 } is invariant if and only if, for all x ∈ Rⁿ such that xᵀ P⁻¹ x ≤ 1, it holds that

    xᵀ A_c(k)ᵀ P⁻¹ A_c(k) x ≤ 1      (1.40)
With the same argument as in Section 1.2.2, it is clear that condition (1.40) can be transformed into

    [ P            A_c(k) P ]
    [ P A_c(k)ᵀ    P        ]  ⪰ 0      (1.41)
The left-hand side of (1.40) can be treated as a function of k; it reaches its maximum at one of the vertices of A_c(k), so the set of LMI conditions to be satisfied to check invariance is the following

    [ P                                    {A_i + B_i(E_j K + E_j^- H)} P ]
    [ P {A_i + B_i(E_j K + E_j^- H)}ᵀ      P                              ]  ⪰ 0

for all i = 1, 2, ..., q, j = 1, 2, ..., 2^m.
Note that the conditions (1.39) involve a multiplication between the two unknown parameters H and P. By denoting Y = HP, the conditions (1.39) can be rewritten as

    [ P                                               A_i P + B_i E_j K P + B_i E_j^- Y ]
    [ P A_iᵀ + P Kᵀ E_j B_iᵀ + Yᵀ E_j^- B_iᵀ          P                                 ]  ⪰ 0      (1.42)

for all i = 1, 2, ..., q, j = 1, 2, ..., 2^m. Thus the unknown matrices P and Y enter linearly in the conditions (1.42).
Again, as in Section 1.2.2, in general one would like to have the largest invariant ellipsoid for system (1.25) under the feedback u(k) = sat(Kx(k)) with respect to the constraints (1.27). This can be done by solving the following LMI problem

    J = max_{P,Y} { trace(P) }      (1.43)

subject to

- Invariance condition

      [ P                                               A_i P + B_i E_j K P + B_i E_j^- Y ]
      [ P A_iᵀ + P Kᵀ E_j B_iᵀ + Yᵀ E_j^- B_iᵀ          P                                 ]  ⪰ 0      (1.44)

  for all i = 1, 2, ..., q, j = 1, 2, ..., 2^m.

- Constraint satisfaction
  - On the state:

        [ 1         F_i P ]
        [ P F_iᵀ    P     ]  ⪰ 0,   i = 1, 2, ..., n1

  - On the input:

        [ u_imax²   Y_i ]
        [ Y_iᵀ      P   ]  ⪰ 0,   i = 1, 2, ..., m
Example 1.4. Consider the following uncertain discrete-time system

    x(k+1) = A(k)x(k) + B(k)u(k)

with

    A(k) = α(k)A_1 + (1 − α(k))A_2,   B(k) = α(k)B_1 + (1 − α(k))B_2

and

    A_1 = [ 1  0.1 ]     B_1 = [ 0 ]
          [ 0  1   ],          [ 1 ],

    A_2 = [ 1  0.2 ]     B_2 = [ 0   ]
          [ 0  1   ],          [ 1.5 ]
At each sampling time, α(k) ∈ [0, 1] is a uniformly distributed pseudo-random number. The constraints are

    −10 ≤ x_1 ≤ 10,   −10 ≤ x_2 ≤ 10,   −1 ≤ u ≤ 1
The feedback gain matrix is chosen as

    K = [−1.8112  −0.8092]
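The printed gain appears to have lost its minus signs in extraction; a double-integrator-type plant can only be stabilized with negative position and velocity feedback, so the sketch below assumes K = [−1.8112, −0.8092]. It checks a necessary vertex condition for robust stability, namely that both vertex closed-loop matrices A_i + B_i K are Schur stable:

```python
import numpy as np

A1 = np.array([[1.0, 0.1], [0.0, 1.0]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[1.0, 0.2], [0.0, 1.0]]); B2 = np.array([[0.0], [1.5]])
K = np.array([[-1.8112, -0.8092]])      # assumed signs (lost in extraction)

def spectral_radius(M):
    """Largest eigenvalue magnitude; < 1 means the discrete-time map is stable."""
    return max(abs(np.linalg.eigvals(M)))

rho1 = spectral_radius(A1 + B1 @ K)
rho2 = spectral_radius(A2 + B2 @ K)
```

Vertex Schur stability is only necessary, not sufficient, for robust stability of the whole polytope; the actual certificate comes from the quadratic LMI conditions (1.43)-(1.44).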
By solving the optimization problem (1.43) one obtains the matrices P and Y

    P = [ 5.0494   8.9640  ]     Y = [ 0.4365  4.2452 ]
        [ 8.9640   28.4285 ],

Hence

    H = Y P⁻¹ = [ 0.4058  0.2773 ]
Based on the LMI problem (1.17), an invariant ellipsoid E(P_1) is obtained under the linear feedback u(k) = Kx(k) with

    P_1 = [ 1.1490  3.1747 ]
          [ 3.1747  9.9824 ]
Figure 1.11 presents invariant sets under different kinds of controllers. The set E(P) is obtained with the saturated controller u(k) = sat(Kx(k)), while the set E(P_1) is obtained with the linear controller u(k) = Kx(k).
Fig. 1.11 Invariant sets with different kinds of controllers for example 1.4. The set E(P) is obtained with the saturated controller u(k) = sat(Kx(k)), while the set E(P_1) is obtained with the linear controller u(k) = Kx(k).
Figure 1.12 shows different state trajectories of the closed loop system with the controller u(k) = sat(Kx(k)), depending on the realizations of α(k) and on the initial conditions.
Fig. 1.12 State trajectories of the closed loop system for example 1.4.
In the first part of this section, Theorem 1.2 was exploited in the following manner: if the ellipsoid E(P) is robust invariant for the system

    x(k+1) = A(k)x(k) + B(k) sat(Kx(k))

then there exists a linear controller u(k) = Hx(k), with |Hx(k)| ≤ u_max, such that the ellipsoid E(P) is robust invariant for the system

    x(k+1) = A(k)x(k) + B(k)Hx(k)

and the matrix gain H ∈ R^{m×n} is obtained from the optimization problem (1.43).
Theorem 1.2 will now be exploited in a different manner. We would like to design a saturated feedback u(k) = sat(Kx(k)) that makes a given invariant ellipsoid E(P) contractive with a maximal contraction. This invariant ellipsoid E(P) can be obtained together with a linear feedback gain u(k) = Hx(k) by maximizing some convex objective function J(P), for example trace(P). Designing the invariant ellipsoid E(P) and the controller u(k) = Hx(k) can be done by solving the LMI problem (1.17). In the second stage, based on the gain H and the ellipsoid E(P), a saturated controller u(k) = sat(Kx(k)) with contraction factor 1 − g, obtained by maximizing g, is computed.
It is worth noticing that the invariance condition (1.17) corresponds to the case of condition (1.42) with E_j = 0 and E_j^- = I − E_j = I. Following the proof of Theorem 1.2, it is clear that for the system

    x(k+1) = A(k)x(k) + B(k) sat(Kx(k))
the ellipsoid E(P) is contractive with contraction factor 1 − g if

    [A_i + B_i(E_j K + E_j^- H)]ᵀ P⁻¹ [A_i + B_i(E_j K + E_j^- H)] − P⁻¹ ⪯ −g P⁻¹

for all i = 1, 2, ..., q and for all j = 1, 2, ..., 2^m such that E_j ≠ 0. By using the Schur complement, this problem can be converted into the LMI optimization

    J = max_{g,K} { g }      (1.45)

subject to

    [ (1 − g) P⁻¹                       (A_i + B_i(E_j K + E_j^- H))ᵀ ]
    [ A_i + B_i(E_j K + E_j^- H)        P                             ]  ⪰ 0

for all i = 1, 2, ..., q and j = 1, 2, ..., 2^m with E_j ≠ 0. Recall that here the only unknown parameters are the matrix K ∈ R^{m×n} and the scalar g, the matrices P and H being given.
Remark 1.3. The proposed two-stage control design presented here benefits from global uniqueness properties of the solution. This is due to the one-way dependence of the two (prioritized) objectives: the trace maximization precedes the associated contraction factor optimization.
Example 1.5. Consider the uncertain system in example 1.4 with the same constraints on the state vector and on the input vector. In the first stage, by solving the optimization problem (1.17), one obtains the matrices P and Y

    P = [ 100.0000   43.1051  ]     Y = [ 3.5691  6.5121 ]
        [  43.1051   100.0000 ],
Hence H = Y P⁻¹ = [0.0783  0.0989]. In the second stage, by solving the optimization problem (1.45), one obtains the feedback gain

    K = [0.3342  0.7629]
Figure 1.13 shows the invariant ellipsoid E(P). This figure also shows the state trajectories of the closed loop system under the saturated feedback u(k) = sat(Kx(k)) for different initial conditions and different realizations of α(k).
For the initial condition x(0) = [4 10]ᵀ, Figure 1.14 presents the state trajectories of the closed loop system with the saturated controller u(k) = sat(Kx(k)) and with the linear controller u(k) = Hx(k). It can be observed that the time to regulate the plant to the origin using the linear controller is longer than using the saturated controller. The explanation is that when using the controller u(k) = Hx(k), the control action is saturated only at some points of the boundary of the ellipsoid E(P), while using the controller u(k) = sat(Kx(k)), the control action is saturated not only on the boundary of the
Fig. 1.13 Invariant ellipsoid and state trajectories of the closed loop system for example 1.5.
set E(P), but also inside the set E(P). This effect can be seen in Figure 1.15, which also shows the realization of α(k).
Fig. 1.14 State trajectories as a function of time for example 1.5. The green line is obtained by using the saturated feedback u(k) = sat(Kx(k)). The dashed red line is obtained by using the linear feedback gain u(k) = Hx(k).
1.3.4 Enlarging the domain of attraction - Polyhedral set approach
In this section, the problem of estimating the domain of attraction is addressed by using polyhedral sets.
For the given linear state feedback controller u(k) = Kx(k), it is clear that the largest polyhedral invariant set is the maximal robust invariant set Ω_max. The set
Fig. 1.15 Input trajectory and realization of α(k) as a function of time for example 1.5. The green line is obtained by using the saturated feedback u(k) = sat(Kx(k)). The dashed red line is obtained by using the linear feedback gain u(k) = Hx(k).
Ω_max can be readily found using Procedure 2.1 or Procedure 2.2. From this point on, it is assumed that the set Ω_max is known.
Our aim in this section is to find the largest polyhedral invariant set characterizing an estimation of the domain of attraction for system (1.25) under the saturated controller u(k) = sat(Kx(k)). To this aim, recall from Lemma 1.1 that the saturation function can be expressed as

    sat(Kx) = Σ_{i=1}^{2^m} λ_i (E_i K x + E_i^- v),   Σ_{i=1}^{2^m} λ_i = 1,  λ_i ≥ 0      (1.46)
with u_l ≤ v ≤ u_u, where E_i is an element of D_m (the set of m×m diagonal matrices whose diagonal elements are either 0 or 1) and E_i^- = I − E_i.
It is worth noticing that the parameters λ_i in (1.46) are a function of the state x [22]. To see this, let us consider the case in which m = 1, u_l = −1, u_u = 1, and assume v = 0. In this case one has

    sat(Kx) = λ_1 Kx

If, for example, −1 ≤ Kx ≤ 1, then λ_1 = 1. If Kx = 2, then λ_1 = 0.5. If Kx = 5, then λ_1 = 0.2.
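These values follow directly from λ_1 = sat(Kx)/Kx when v = 0; a two-line check:

```python
def sat(u, lo=-1.0, hi=1.0):
    """Scalar saturation with bounds [-1, 1]."""
    return min(max(u, lo), hi)

def lambda1(kx):
    """lambda_1 with sat(kx) = lambda_1 * kx when v = 0 (m = 1); requires kx != 0."""
    return sat(kx) / kx
```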
Similarly, if for example v = 0.5Kx, then

    sat(Kx) = (λ_1 + 0.5 λ_2) Kx

If −1 ≤ Kx ≤ 1, then λ_1 = 1, λ_2 = 0. If Kx = 2, then λ_1 = 0, λ_2 = 1.
With equation (1.46) the closed loop system can be rewritten as
    x(k+1) = Σ_{i=1}^{q} α_i(k) [ A_i x(k) + B_i Σ_{j=1}^{2^m} λ_j (E_j K x(k) + E_j^- v) ]
           = Σ_{i=1}^{q} α_i(k) [ Σ_{j=1}^{2^m} λ_j A_i x(k) + B_i Σ_{j=1}^{2^m} λ_j (E_j K x(k) + E_j^- v) ]
           = Σ_{i=1}^{q} α_i(k) Σ_{j=1}^{2^m} λ_j [ A_i x(k) + B_i (E_j K x(k) + E_j^- v) ]

or

    x(k+1) = Σ_{j=1}^{2^m} λ_j Σ_{i=1}^{q} α_i(k) [ (A_i + B_i E_j K) x(k) + B_i E_j^- v ]      (1.47)
The variable v ∈ Rᵐ can be considered as an external controllable input for system (1.47). Hence, the problem of finding the largest polyhedral invariant set Ω_s for system (1.25) boils down to the problem of computing the largest controlled invariant set for system (1.47).
Since the parameters λ_j are a function of the state x(k), they are known once the state vector x(k) is available. Therefore, system (1.47) can be considered as an uncertain system with respect to the parameters α_i and a time-varying system with respect to the known parameters λ_j. Hence the following procedure, based on the results in Section 1.2.3, can be used to obtain the largest polyhedral invariant set Ω_s for system (1.47): Procedure 2.4 is used to deal with the known λ_j, while Procedure 2.3 is used to deal with the uncertain α_i.
Procedure 2.5: Invariant set computation
Input: The matrices A_1, A_2, ..., A_q, B_1, B_2, ..., B_q and the sets X, U and the invariant set Ω_max
Output: The invariant set Ω_s

1. Set t = 0 and C0 = Ω_max, and let the matrices F0, g0 be the half-space representation of the set C0, i.e. C0 = {x ∈ Rⁿ : F0 x ≤ g0}.
2. Compute the expanded sets Ptj ⊂ R^{n+m}

       Ptj = { (x, v) ∈ R^{n+m} : [ Ft{(A_1 + B_1 E_j K)x + B_1 E_j^- v}; Ft{(A_2 + B_2 E_j K)x + B_2 E_j^- v}; ...;
                                   Ft{(A_q + B_q E_j K)x + B_q E_j^- v} ] ≤ [ gt; gt; ...; gt ] }

3. Compute the projection of Ptj on Rⁿ

       Ptj^n = { x ∈ Rⁿ : ∃ v ∈ U such that (x, v) ∈ Ptj },   j = 1, 2, ..., 2^m

4. Set

       Ct+1 = X ∩ ⋂_{j=1}^{2^m} Ptj^n

   and let the matrices Ft+1, gt+1 be the half-space representation of the set Ct+1, i.e.

       Ct+1 = { x ∈ Rⁿ : Ft+1 x ≤ gt+1 }

5. If Ct+1 = Ct, then stop and set Ω_s = Ct. Else continue.
6. Set t = t + 1 and go to step 2.
It is clear that Ct−1 ⊆ Ct, since the set Ω_max is robust invariant. Hence each Ct is a robust controlled invariant set. The set sequence {C0, C1, ...} converges to Ω_s, which is the largest polyhedral invariant set.
Remark 1.4. Each one of the polytopes Ct is an estimation of the domain of attraction for the system (1.25) under the saturated controller u(k) = sat(Kx(k)). That means Procedure 2.5 can be stopped at any time before converging to the true largest invariant set Ω_s.
It is worth noticing that the matrix H ∈ R^{m×n} resulting from the optimization problem (1.43) can also be employed for computing the polyhedral invariant set Ω_s^H with respect to the saturated controller u(k) = sat(Kx(k)). Clearly the set Ω_s^H is a subset of Ω_s, since the vector v is now restricted to the form v(k) = Hx(k). In this case, based on equation (1.47), one gets
    x(k+1) = Σ_{j=1}^{2^m} λ_j Σ_{i=1}^{q} α_i(k) (A_i + B_i E_j K + B_i E_j^- H) x(k)      (1.48)
Define the set X_H as follows

    X_H = { x ∈ Rⁿ : F_H x ≤ g_H }      (1.49)

where

    F_H = [ Fx; H; −H ],   g_H = [ gx; u_u; −u_l ]
With the set X_H, the following procedure can be used for computing the polyhedral invariant set Ω_s^H.
Procedure 2.6: Invariant set computation
Input: The matrices A_1, A_2, ..., A_q, B_1, B_2, ..., B_q and the set X_H and the invariant set Ω_max
Output: The invariant set Ω_s^H

1. Set t = 0 and C0 = Ω_max, and let the matrices F0, g0 be the half-space representation of the set C0, i.e. C0 = {x ∈ Rⁿ : F0 x ≤ g0}.
2. Compute the sets Ptj ⊂ Rⁿ

       Ptj = { x ∈ Rⁿ : [ Ft(A_1 + B_1 E_j K + B_1 E_j^- H)x; Ft(A_2 + B_2 E_j K + B_2 E_j^- H)x; ...;
                          Ft(A_q + B_q E_j K + B_q E_j^- H)x ] ≤ [ gt; gt; ...; gt ] }

3. Set

       Ct+1 = X_H ∩ ⋂_{j=1}^{2^m} Ptj

   and let the matrices Ft+1, gt+1 be the half-space representation of the set Ct+1, i.e.

       Ct+1 = { x ∈ Rⁿ : Ft+1 x ≤ gt+1 }

4. If Ct+1 = Ct, then stop and set Ω_s^H = Ct. Else continue.
5. Set t = t + 1 and go to step 2.
Since the matrices (A_i + B_i E_j K + B_i E_j^- H) are asymptotically stable for i = 1, 2, ..., q and j = 1, 2, ..., 2^m, Procedure 2.6 terminates in finite time [17]. In other words, there exists t = t_max such that C_{t_max} = C_{t_max+1}.
Example 1.6. Consider again example 1.4. The constraints on the state vector and on the input vector are the same. The feedback controller is

    K = [−1.8112  −0.8092]

By using Procedure 2.5, one obtains the robust polyhedral invariant set Ω_s as shown in Figure 1.16. Procedure 2.5 terminated with t = 121. Figure 1.16 also shows the robust polyhedral invariant set Ω_s^H obtained with the auxiliary matrix H, where

    H = [0.4058  0.2773]
and the robust polyhedral invariant set Ω_max obtained with the controller u(k) = Kx(k). The sets Ω_s and Ω_s^H can be presented in half-space representation as
Fig. 1.16 Robust invariant sets with different kinds of controllers and different methods for example 1.6. The polyhedral set Ω_s is obtained with respect to the controller u(k) = sat(Kx(k)). The polyhedral set Ω_s^H is obtained with respect to the controller u(k) = sat(Kx(k)) using an auxiliary matrix H. The polyhedral set Ω_max is obtained with the controller u(k) = Kx(k).
    Ω_s = { x ∈ R² :
        [  0.9996   0.0273 ]      [ 3.5340 ]
        [ -0.9996  -0.0273 ]      [ 3.5340 ]
        [  0.9993   0.0369 ]      [ 3.5104 ]
        [ -0.9993  -0.0369 ]      [ 3.5104 ]
        [  0.9731   0.2305 ]      [ 3.4720 ]
        [ -0.9731  -0.2305 ]      [ 3.4720 ]
        [  0.9164   0.4004 ]      [ 3.5953 ]
        [ -0.9164  -0.4004 ]      [ 3.5953 ]
        [  0.8434   0.5372 ]  x ≤ [ 3.8621 ]
        [ -0.8434  -0.5372 ]      [ 3.8621 ]
        [  0.7669   0.6418 ]      [ 4.2441 ]
        [ -0.7669  -0.6418 ]      [ 4.2441 ]
        [  0.6942   0.7198 ]      [ 4.7132 ]
        [ -0.6942  -0.7198 ]      [ 4.7132 ]
        [  0.6287   0.7776 ]      [ 5.2465 ]
        [ -0.6287  -0.7776 ]      [ 5.2465 ]
        [  0.5712   0.8208 ]      [ 5.8267 ]
        [ -0.5712  -0.8208 ]      [ 5.8267 ]  }
    Ω_s^H = { x ∈ R² :
        [  0.8256   0.5642 ]      [ 2.0346 ]
        [ -0.8256  -0.5642 ]      [ 2.0346 ]
        [  0.9999   0.0108 ]      [ 2.3612 ]
        [ -0.9999  -0.0108 ]      [ 2.3612 ]
        [  0.9986   0.0532 ]      [ 2.3467 ]
        [ -0.9986  -0.0532 ]  x ≤ [ 2.3467 ]
        [  0.6981   0.7160 ]      [ 2.9453 ]
        [ -0.6981  -0.7160 ]      [ 2.9453 ]
        [  0.9791   0.2033 ]      [ 2.3273 ]
        [ -0.9791  -0.2033 ]      [ 2.3273 ]
        [  0.4254   0.9050 ]      [ 4.7785 ]
        [ -0.4254  -0.9050 ]      [ 4.7785 ]  }
Figure 1.17 presents state trajectories of the closed loop system with the controller u(k) = sat(Kx(k)) for different initial conditions and different realizations of α(k).
Fig. 1.17 State trajectories of the closed loop system with the controller u(k) = sat(Kx(k)) for example 1.6.
Chapter 2
Optimal and Constrained Control - An Overview
Abstract In this chapter some of the most important approaches to constrained and optimal control are briefly reviewed. The chapter is organized into the following sections:
1. Dynamic programming.
2. Pontryagin's maximum principle.
3. Model predictive control, implicit and explicit solution.
4. Vertex control.
2.1 Dynamic programming
The purpose of this section is to present a brief introduction to dynamic program-
ming, which provides a sufficient condition for optimality.
Dynamic programming was developed by R.E. Bellman in the early fifties [5], [6], [7], [8]. It is a powerful method for solving control problems for various classes of systems, e.g. linear, time-varying or nonlinear. The optimal solution is expressed in a time-varying state-feedback form.
Dynamic programming is based on the principle of optimality [9]
An optimal policy has the property that whatever the initial state and initial
decision are, the remaining decisions must constitute an optimal policy with regard
to the state resulting from the first decision.
To begin, let us consider the following optimal control problem

    min_{x,u} { Σ_{k=0}^{N−1} L(x(k), u(k)) + E(x(N)) }      (2.1)

subject to
    x(k+1) = f(x(k), u(k)),  k = 0, 1, ..., N−1
    u(k) ∈ U,  k = 0, 1, ..., N−1
    x(k) ∈ X,  k = 0, 1, ..., N
    x(0) = x0
where
- x(k) and u(k) are respectively the state and control variables.
- N > 0 is called the time horizon.
- L(x(k), u(k)) is the Lagrange objective function, which represents a cost along the trajectory.
- E(x(N)) is the Mayer objective function, which represents the terminal cost.
- U and X are the sets of constraints on the input and state variables, respectively.
- x(0) is the initial condition.
Define the value function Vi(x(i)) as follows

    Vi(x(i)) = min_u { E(x(N)) + Σ_{k=i}^{N−1} L(x(k), u(k)) }      (2.2)

subject to

    x(k+1) = f(x(k), u(k)),  k = i, i+1, ..., N−1
    u(k) ∈ U,  k = i, i+1, ..., N−1
    x(k) ∈ X,  k = i, i+1, ..., N

for i = N, N−1, N−2, ..., 0.
Clearly Vi(x(i)) is the optimal cost on the remaining horizon [i, N], starting from the state x(i). Based on the principle of optimality, one has

    Vi(x(i)) = min_{u(i)} { L(x(i), u(i)) + Vi+1(x(i+1)) }
By substituting

    x(i+1) = f(x(i), u(i))

one gets

    Vi(x(i)) = min_{u(i)} { L(x(i), u(i)) + Vi+1(f(x(i), u(i))) }      (2.3)
The problem (2.3) is much simpler than the one in (2.1) because it involves only one decision variable, u(i). To actually solve this problem, we work backwards in time from i = N, starting with

    V_N(x(N)) = E(x(N))
Based on the value function Vi+1(x(i+1)) with i = N−1, N−2, ..., 0, the optimal control values u*(i) can be obtained as

    u*(i) = argmin_{u(i)} { L(x(i), u(i)) + Vi+1(f(x(i), u(i))) }
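For a linear system with quadratic cost the backward recursion can be carried out in closed form, since V_i(x) = P_i x²: minimizing over u(i) in (2.3) gives the discrete Riccati update. A scalar sketch with assumed toy data, cross-checked against a brute-force search over a control grid:

```python
# Assumed toy problem: x(k+1) = a x + b u, cost sum(q x^2 + r u^2) + qf x(N)^2
a, b, q, r, qf, N = 1.0, 1.0, 1.0, 1.0, 1.0, 2

# Backward DP: V_i(x) = P_i x^2, starting from V_N(x) = qf x^2
P = qf
for _ in range(N):
    P = q + a * P * a - (a * P * b) ** 2 / (r + b * P * b)

def total_cost(x0, controls):
    """Cost of (2.1) along the trajectory generated by a fixed control sequence."""
    x, c = x0, 0.0
    for u in controls:
        c += q * x * x + r * u * u
        x = a * x + b * u
    return c + qf * x * x

# brute force over a grid of (u(0), u(1)) pairs
x0 = 1.0
grid = [i / 100.0 for i in range(-150, 1)]
best = min(total_cost(x0, (u0, u1)) for u0 in grid for u1 in grid)
```

With these numbers the recursion gives P_0 = 1.6, and the grid search recovers the same optimal cost for x(0) = 1 (the optimal controls u*(0) = −0.6 and u*(1) = −0.2 lie exactly on the grid).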
2.2 Pontryagin's maximum principle
The second direction in optimal control theory is Pontryagin's maximum principle [57], [23]. This approach, in contrast to the classical calculus of variations approach, allows us to solve control problems in which the control input is subject to constraints in a very general way. Therefore the maximum principle is a basic mathematical technique for calculating the optimal control in many important problems of mathematics, engineering, economics, etc. Here, for illustration, we consider the following simple optimal control problem
    min_{x,u} { Σ_{k=0}^{N−1} L(x(k), u(k)) + E(x(N)) }      (2.4)

subject to

    x(k+1) = f(x(k), u(k)),  k = 0, 1, ..., N−1
    u(k) ∈ U,  k = 0, 1, ..., N−1
    x(0) = x0
For simplicity the state variables are considered unconstrained. For solving the optimal control problem (2.4) with Pontryagin's maximum principle, the following Hamiltonian H_k(·) is defined

    H_k(x(k), u(k), λ(k+1)) = L(x(k), u(k)) + λᵀ(k+1) f(x(k), u(k))      (2.5)
where λ(k), k = 1, 2, ..., N, are called the co-state or adjoint variables. For problem (2.4), these variables must satisfy the so-called co-state equation

    λ(k) = ∂H_k / ∂x(k),  k = 1, 2, ..., N−1

and

    λ(N) = ∂E(x(N)) / ∂x(N)
For given state and co-state variables, the optimal control value is achieved by
choosing control u(k) that minimizes the Hamiltonian at each time instant, i.e.
Hk(x(k), u(k),(k+ 1))Hk(x
(k), u(k),(k+ 1)), u(k) U
Note that the convexity assumption on the Hamiltonian is needed, i.e. the func-
tion Hk(x(k), u(k),(k+ 1)) is convex with respect to u(k).
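For a linear-quadratic instance, the Hamiltonian minimization over u(k) has a closed form: with L = x^T Q x + u^T R u and f = Ax + Bu, setting ∂H_k/∂u = 0 gives u* = -(1/2) R^{-1} B^T λ(k+1). The sketch below checks this against a crude grid search; the matrices and the co-state value are illustrative assumptions, not from the text:

```python
import numpy as np

# Minimizing H_k = x'Qx + u'Ru + lam'(Ax + Bu) over u. Stationarity gives
# u* = -(1/2) R^{-1} B' lam(k+1). All numeric values are assumptions.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
Q = np.eye(2)
R = np.array([[1.0]])

x = np.array([2.0, 1.0])
lam = np.array([3.0, -1.0])            # some co-state value (assumption)

def H(u):
    u = np.atleast_1d(u)
    return float(x @ Q @ x + u @ R @ u + lam @ (A @ x + B @ u))

u_star = -0.5 * np.linalg.solve(R, B.T @ lam)   # closed-form minimizer

# crude grid search as a sanity check of the stationarity condition
grid = np.linspace(-5.0, 5.0, 10001)
u_grid = grid[np.argmin([H(u) for u in grid])]
```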
44 2 Optimal and Constrained Control - An Overview
2.3 Model predictive control
Model predictive control (MPC), or receding horizon control, is one of the most advanced control approaches and, in the last decades, has become the leading industrial control technology for systems with constraints [21], [53], [18], [58], [30], [51]. MPC is an optimization-based strategy, where a model of the plant is used to predict the future evolution of the system, see [53], [51]. This prediction uses the current state of the plant as the initial state and, at each time instant k, the controller computes an optimal control sequence. Then the first control in the sequence is applied to the plant at time instant k, and at time instant k+1 the optimization procedure is repeated with a new plant measurement. This open loop optimal feedback mechanism1 of MPC compensates for the prediction error due to structural mismatch between the model and the real system as well as for disturbances and measurement noise.

The main advantage which makes MPC industrially desirable is that it can take constraints into account in the control problem. This feature is very important for several reasons:

- Often the best performance, which may correspond to the most efficient operation, is obtained when the system is made to operate near the constraints.
- The possibility to explicitly express constraints in the problem formulation offers a natural way to state complex control objectives.
2.3.1 Implicit model predictive control
Consider the problem of regulating to the origin the following discrete-time linear time-invariant system

x(k+1) = A x(k) + B u(k)   (2.6)

where x(k) ∈ R^n and u(k) ∈ R^m are respectively the state and the input variables, and A ∈ R^{n×n} and B ∈ R^{n×m} are the system matrices. Both the state vector x(k) and the control vector u(k) are subject to polytopic constraints

x(k) ∈ X, X = {x : F_x x ≤ g_x}
u(k) ∈ U, U = {u : F_u u ≤ g_u}
∀k ≥ 0   (2.7)

where the matrices F_x ∈ R^{n1×n}, F_u ∈ R^{m1×m} and the vectors g_x ∈ R^{n1}, g_u ∈ R^{m1} are assumed to be constant with g_x > 0, g_u > 0, such that the origin is contained in the interior of X and U. Here the inequalities are taken element-wise.

It is assumed that the pair (A, B) is stabilizable, i.e. all uncontrollable states have stable dynamics.
1 So it has been named OLOF (Open Loop Optimal Feedback) control, after the author of [33], whose name, by chance, is Per Olof.
Provided that the state x(k) is available from the measurements, the finite horizon MPC optimization problem is defined as

V(x(k)) = min_{u=[u(0),u(1),...,u(N-1)]} { sum_{t=1}^{N} x^T(t) Q x(t) + sum_{t=0}^{N-1} u^T(t) R u(t) }   (2.8)

subject to

x(t+1) = A x(t) + B u(t), t = 0, 1, ..., N-1
x(t) ∈ X, t = 1, 2, ..., N
u(t) ∈ U, t = 0, 1, ..., N-1
x(0) = x(k)

where

- Q ∈ R^{n×n} is a real symmetric positive semi-definite n×n matrix.
- R ∈ R^{m×m} is a real symmetric positive definite m×m matrix.
- N is a fixed integer greater than 0. N is called the time horizon or the prediction horizon.

The conditions on Q and R guarantee that the cost function is well defined. In terms of eigenvalues, the eigenvalues of Q should be non-negative, while those of R should be positive.
It is clear that the first term x^T(t) Q x(t) penalizes the deviation of the state x from the origin, while the second term u^T(t) R u(t) measures the input deviation. In other words, selecting Q large means that, to keep the cost small, the state x(t) must be small. On the other hand, selecting R large means that the control input u(t) must be small to keep the cost small.
An alternative is a performance measure based on l1-norms or l∞-norms

min_{u=[u(0),u(1),...,u(N-1)]} { sum_{t=1}^{N} ||Q x(t)||_1 + sum_{t=0}^{N-1} ||R u(t)||_1 }   (2.9)

min_{u=[u(0),u(1),...,u(N-1)]} { sum_{t=1}^{N} ||Q x(t)||_∞ + sum_{t=0}^{N-1} ||R u(t)||_∞ }   (2.10)
Based on the state space model (2.6), the future state variables are computed sequentially using the set of future control parameters

x(1) = A x(0) + B u(0)
x(2) = A x(1) + B u(1) = A^2 x(0) + A B u(0) + B u(1)
...
x(N) = A^N x(0) + A^{N-1} B u(0) + A^{N-2} B u(1) + ... + B u(N-1)
   (2.11)

The set of equations (2.11) can be rewritten in a compact matrix form as

x = A_a x(0) + B_a u = A_a x(k) + B_a u   (2.12)
with

x = [x^T(1) x^T(2) ... x^T(N)]^T

and

A_a = [ A
        A^2
        ...
        A^N ],

B_a = [ B          0          ...  0
        AB         B          ...  0
        ...        ...        ...  ...
        A^{N-1}B   A^{N-2}B   ...  B ]
The MPC optimization problem (2.8) can be expressed as

V(x(k)) = min_u { x^T Q_a x + u^T R_a u }   (2.13)

where

Q_a = [ Q 0 ... 0
        0 Q ... 0
        ... ... ... ...
        0 0 ... Q ],

R_a = [ R 0 ... 0
        0 R ... 0
        ... ... ... ...
        0 0 ... R ]

and by substituting (2.12) in (2.13), one gets

V(x(k)) = min_u { u^T H u + 2 x^T(k) F u + x^T(k) Y x(k) }   (2.14)

where

H = B_a^T Q_a B_a + R_a,  F = A_a^T Q_a B_a  and  Y = A_a^T Q_a A_a   (2.15)
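The construction of A_a, B_a and the condensed matrices H, F, Y of (2.12)-(2.15) can be sketched as follows. The helper name `condense` is ours; the numeric data are those of Example 2.1 below (A = [[1,1],[0,1]], B = [[1],[0.7]], Q = I, R = 1, N = 3):

```python
import numpy as np

def condense(A, B, Q, R, N):
    """Build A_a, B_a of (2.12) and the condensed cost matrices of (2.15)."""
    n, m = B.shape
    Aa = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Ba = np.zeros((N * n, N * m))
    for i in range(N):          # block row i predicts x(i+1)
        for j in range(i + 1):  # contribution of u(j) is A^{i-j} B
            Ba[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qa = np.kron(np.eye(N), Q)  # block-diagonal Q_a
    Ra = np.kron(np.eye(N), R)  # block-diagonal R_a
    H = Ba.T @ Qa @ Ba + Ra
    F = Aa.T @ Qa @ Ba
    Y = Aa.T @ Qa @ Aa
    return Aa, Ba, H, F, Y

# Data of Example 2.1: Q = I, R = 1, prediction horizon N = 3
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
Aa, Ba, H, F, Y = condense(A, B, np.eye(2), np.array([[1.0]]), 3)
```

The resulting H and F reproduce the numerical values reported in Example 2.1.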
Consider now the constraints on the state and on the input along the horizon. From (2.7) it can be shown that

F_{ax} x ≤ g_{ax}
F_{au} u ≤ g_{au}
   (2.16)

where

F_{ax} = [ F_x 0 ... 0
           0 F_x ... 0
           ... ... ... ...
           0 0 ... F_x ],
g_{ax} = [ g_x
           g_x
           ...
           g_x ]

F_{au} = [ F_u 0 ... 0
           0 F_u ... 0
           ... ... ... ...
           0 0 ... F_u ],
g_{au} = [ g_u
           g_u
           ...
           g_u ]

Using (2.12), the state constraints along the horizon can be expressed as

F_{ax} {A_a x(k) + B_a u} ≤ g_{ax}

or

F_{ax} B_a u ≤ -F_{ax} A_a x(k) + g_{ax}   (2.17)
Combining (2.16) and (2.17), one obtains

G u ≤ E x(k) + W   (2.18)

where

G = [ F_{au}
      F_{ax} B_a ],
E = [ 0
      -F_{ax} A_a ],
W = [ g_{au}
      g_{ax} ]
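The stacking of (2.16)-(2.18) can be sketched for the data of Example 2.1, writing the box constraints -2 ≤ x1 ≤ 2, -5 ≤ x2 ≤ 5, -1 ≤ u ≤ 1 in the form F_x x ≤ g_x, F_u u ≤ g_u (the particular row ordering of F_x below is our choice):

```python
import numpy as np

# Stacked constraint matrices G, E, W of (2.18) for Example 2.1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
N = 3
Fx = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
gx = np.array([2.0, 5.0, 2.0, 5.0])
Fu = np.array([[1.0], [-1.0]])
gu = np.array([1.0, 1.0])

# prediction matrices of (2.12)
Aa = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
Ba = np.zeros((N * 2, N * 1))
for i in range(N):
    for j in range(i + 1):
        Ba[i*2:(i+1)*2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Fax = np.kron(np.eye(N), Fx)           # block-diagonal state constraints
Fau = np.kron(np.eye(N), Fu)           # block-diagonal input constraints
gax = np.tile(gx, N)
gau = np.tile(gu, N)

G = np.vstack([Fau, Fax @ Ba])         # (2.18)
E = np.vstack([np.zeros((Fau.shape[0], 2)), -Fax @ Aa])
W = np.concatenate([gau, gax])
```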
Based on (2.13) and (2.18), the MPC quadratic program formulation can be defined as

V_1(x(k)) = min_u { u^T H u + 2 x^T(k) F u }   (2.19)

subject to

G u ≤ E x(k) + W

where the term x^T(k) Y x(k) is removed since it does not influence the optimal argument. The value of the cost function at the optimum is simply obtained from (2.19) by

V(x(k)) = V_1(x(k)) + x^T(k) Y x(k)

A simple on-line algorithm for MPC is

Algorithm 3.1. Model predictive control - Implicit approach
1. Measure or estimate the current state of the system x(k).
2. Compute the control sequence u by solving (2.19).
3. Apply the first element of the control sequence u as input to the system (2.6).
4. Wait for the next time instant k := k + 1.
5. Go to step 1 and repeat.
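The receding-horizon loop of Algorithm 3.1 can be sketched as follows. For illustration only, the constraints (2.18) are assumed inactive, so that step 2 reduces to the unconstrained minimizer u = -H^{-1} F^T x(k) of (2.19); an actual implementation would call a QP solver at that step. H and F are the numerical values of Example 2.1:

```python
import numpy as np

# Receding-horizon loop of Algorithm 3.1, unconstrained case (assumption:
# constraints inactive; otherwise a QP solver replaces the linear solve).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
H = np.array([[12.12, 6.76, 2.89],
              [6.76, 5.87, 2.19],
              [2.89, 2.19, 2.49]])    # values of Example 2.1
F = np.array([[5.1, 2.7, 1.0],
              [13.7, 8.5, 3.7]])

x = np.array([2.0, 1.0])
first_inputs = []
for k in range(20):
    u_seq = -np.linalg.solve(H, F.T @ x)   # step 2: minimize (2.19), unconstrained
    first_inputs.append(u_seq[0])          # step 3: apply only the first element
    x = A @ x + B[:, 0] * u_seq[0]         # plant update, then repeat
```

Note that the very first unconstrained input is about -1.83, which violates -1 ≤ u ≤ 1; this is precisely why the constrained QP is needed, consistent with the input saturation visible in Figure 2.2.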
Example 2.1. Consider the following discrete-time linear time-invariant system

x(k+1) = [ 1 1
           0 1 ] x(k) + [ 1
                          0.7 ] u(k)   (2.20)

and the MPC problem with weighting matrices Q = I and R = 1 and the prediction horizon N = 3. The constraints are

-2 ≤ x1 ≤ 2,  -5 ≤ x2 ≤ 5,  -1 ≤ u ≤ 1

Based on equations (2.15) and (2.18), the MPC problem can be described as the QP problem

min_{u=[u(0),u(1),u(2)]} { u^T H u + 2 x^T(k) F u }

with
H = [ 12.1200   6.7600   2.8900
       6.7600   5.8700   2.1900
       2.8900   2.1900   2.4900 ],

F = [  5.1000   2.7000   1.0000
      13.7000   8.5000   3.7000 ]

and subject to the following constraints

G u ≤ W + E x(k)

where

G = [  1.0000        0        0
      -1.0000        0        0
            0   1.0000        0
            0  -1.0000        0
            0        0   1.0000
            0        0  -1.0000
       1.0000        0        0
       0.7000        0        0
      -1.0000        0        0
      -0.7000        0        0
       1.7000   1.0000        0
       0.7000   0.7000        0
      -1.7000  -1.0000        0
      -0.7000  -0.7000        0
       2.4000   1.7000   1.0000
       0.7000   0.7000   0.7000
      -2.4000  -1.7000  -1.0000
      -0.7000  -0.7000  -0.7000 ],

E = [  0   0
       0   0
       0   0
       0   0
       0   0
       0   0
      -1  -1
       0  -1
       1   1
       0   1
      -1  -2
       0  -1
       1   2
       0   1
      -1  -3
       0  -1
       1   3
       0   1 ],

W = [ 1  1  1  1  1  1  2  5  2  5  2  5  2  5  2  5  2  5 ]^T
For the initial condition x(0) = [2 1]T and by using the implicit MPC method,
Figure 2.1 shows the state trajectories as a function of time.
Fig. 2.1 State trajectories as a function of time
Figure 2.2 presents the input trajectory as a function of time.
Fig. 2.2 Input trajectory as a function of time.
2.3.2 Recursive feasibility and stability
Recursive feasibility of the optimization problem and stability of the resulting closed-loop system are two important aspects when designing an MPC controller. Recursive feasibility of the optimization problem (2.19) means that if the problem (2.19) is feasible at time instant k, it will also be feasible at time instant k+1. In other words, there exists an admissible control value that holds the system within the state constraints. Feasibility problems can arise due to model errors, disturbances or the choice of the cost function.

Stability analysis necessitates the use of Lyapunov theory [42], since the presence of the constraints makes the closed-loop system nonlinear. In addition, it is well known that unstable input-constrained systems cannot be globally stabilized [62], [52], [60]. Another problem is that the control law is generated by the solution of an optimization problem and generally there does not exist any simple closed-form expression for the solution, although it can be shown that the solution is a piecewise affine state feedback law [11].
Recursive feasibility and stability can be assured by adding a terminal cost function to the objective function in (2.8) and by including the final state of the prediction horizon in a terminal positively invariant set. Let the matrix P ∈ R^{n×n} be the unique solution of the following discrete-time algebraic Riccati equation

P = A^T P A - A^T P B (B^T P B + R)^{-1} B^T P A + Q   (2.21)

and the matrix gain K ∈ R^{m×n} is defined as
K = -(B^T P B + R)^{-1} B^T P A   (2.22)

It is well known [3], [46], [47], [48] that the matrix gain K is a solution of the optimization problem (2.8) when the time horizon N = ∞ and there are no constraints on the state vector and on the input vector. In this case the cost function is defined as

V(x(0)) = sum_{k=0}^{∞} { x^T(k) Q x(k) + u^T(k) R u(k) }
        = sum_{k=0}^{∞} x^T(k) (Q + K^T R K) x(k) = x^T(0) P x(0)
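The Riccati solution P of (2.21) and the gain K of (2.22) can be obtained by fixed-point iteration, sketched below for the system of Example 2.1; `scipy.linalg.solve_discrete_are` would do the same job directly:

```python
import numpy as np

# Fixed-point (value) iteration for the discrete-time algebraic Riccati
# equation (2.21), then the infinite-horizon gain K of (2.22).
# System and weights are those of Example 2.1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.eye(2)
for _ in range(500):  # P <- A'PA - A'PB (B'PB + R)^{-1} B'PA + Q
    P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(B.T @ P @ B + R,
                                                    B.T @ P @ A) + Q

K = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)   # (2.22)
```

Since (A, B) is stabilizable and Q is positive semi-definite, the iteration converges and A + BK has all its eigenvalues strictly inside the unit circle.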
Once the stabilizing feedback gain u(k) = Kx(k) is defined, the terminal set Ω can be computed as a maximal invariant set associated with the control law u(k) = Kx(k) for system (2.6) and with respect to the constraints (2.7). Generally, the terminal set is chosen to be in ellipsoidal or polyhedral form, see Section 1.2.2 and Section 1.2.3.
Consider now the following MPC optimization problem

min_{u=[u(0),u(1),...,u(N-1)]} { x^T(N) P x(N) + sum_{t=0}^{N-1} ( x^T(t) Q x(t) + u^T(t) R u(t) ) }   (2.23)

subject to

x(t+1) = A x(t) + B u(t), t = 0, 1, ..., N-1
x(t) ∈ X, t = 1, 2, ..., N
u(t) ∈ U, t = 0, 1, ..., N-1
x(N) ∈ Ω
x(0) = x(k)
then the following theorem holds [53]
Theorem 2.1. [53] Assuming feasibility at the initial state, the MPC controller
(2.23) guarantees recursive feasibility and asymptotic stability.
Proof. See [53].
The MPC problem considered here uses both a terminal cost function and a terminal set constraint and is called the dual-mode MPC. This MPC scheme is the most attractive version in the MPC literature. In general, it offers better performance when compared with other MPC versions and allows a wider range of control problems to be handled.
2.3.3 Explicit model predictive control - Parameterized vertices
Note that implicit model predictive control requires running on-line optimization algorithms to solve a quadratic programming (QP) problem associated with the objective function (2.8) or to solve a linear programming (LP) problem with
the objective function (2.9), (2.10). Although computational speed and optimization algorithms are continuously improving, solving a QP or LP problem can be computationally costly, especially when the prediction horizon is large, and this has traditionally limited MPC to applications with a relatively low complexity/sampling interval ratio.

Indeed, the state vector can be interpreted as a vector of parameters in the optimization problem (2.23). The exact optimal solution can be expressed as a piecewise affine function of the state over a polyhedral partition of the state space, and the MPC control computation can be moved off-line, see [11], [18], [61], [55]. The control action