Close
About
FAQ
Home
Collections
Login
USC Login
Register
0
Selected
Invert selection
Deselect all
Deselect all
Click here to refresh results
Click here to refresh results
USC
/
Digital Library
/
University of Southern California Dissertations and Theses
/
LQ feedback formulation for H∞ output feedback control
(USC Thesis Other)
LQ feedback formulation for H∞ output feedback control
PDF
Download
Share
Open document
Flip pages
Contact Us
Contact Us
Copy asset link
Request this asset
Transcript (if available)
Content
LQ FEEDBACK FORMULATION FORH
1
OUTPUT FEEDBACK CONTROL
by
Anantha Karthikeyan
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)
August 2013
Copyright 2013 Anantha Karthikeyan
Dedicated to my family
ii
Contents
Dedication ii
List of Figures v
List of Tables vi
Abstract vii
Acknowledgements viii
Chapter 1: Introduction 1
Chapter 2: Preliminaries 4
2.1 Notation and Terminology . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Schur Decomposition . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.2 SchurJ-factorization . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.3 Matrix Pencil . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Problem Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.1 StandardH
1
Control Problem . . . . . . . . . . . . . . . . . . 10
2.3.2 OptimalH
1
Control Problem . . . . . . . . . . . . . . . . . . 11
2.3.3 H
1
Full-Information and Full-Control Problems . . . . . . . . 11
2.3.4 H
1
Disturbance Feed-forward and Output Estimation Problems 12
2.3.5 LQ State-feedback Control Problem . . . . . . . . . . . . . . . 13
Chapter 3: Background 15
3.1 Completion of Squares (CoS) Identities . . . . . . . . . . . . . . . . . 15
3.1.1 LQ Completion of Squares . . . . . . . . . . . . . . . . . . . . 15
3.1.2 LQ Interpretation ofH
1
. . . . . . . . . . . . . . . . . . . . . 18
3.2 Squared-Down Plants . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.1 Squaring-DownD
12
. . . . . . . . . . . . . . . . . . . . . . . 21
3.2.2 Squaring-DownD
21
. . . . . . . . . . . . . . . . . . . . . . . 22
iii
Chapter 4: H
1
Full-Information and Full-Control 24
4.1 H
1
Full-Information Feedback . . . . . . . . . . . . . . . . . . . . . . 24
4.2 H
1
Full-Control Feedback . . . . . . . . . . . . . . . . . . . . . . . . 25
4.3 H
1
Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Solution methods for the sub-optimalH
1
Control Problem . . . . . . . 28
4.4.1 Hamiltonian matrix approach to the sub-optimal H
1
control
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4.2 Matrix pencil approach to the sub-optimalH
1
control problem 32
Chapter 5: Main Result 35
5.1 LQ FeedbackH
1
“All-solutions” Controller Formula . . . . . . . . . . 35
5.2 Matrix PencilH
1
“All-solutions” Controller Formula . . . . . . . . . . 37
Chapter 6: Examples 39
Chapter 7: Conclusion 43
Bibliography 45
Appendices 47
Appendix A: Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
A.1 Proofs forLQ Feedback formulation theorems forH
1
Full-Information,
H
1
Full-Control andH
1
Output Feedback “All-solutions” cases . . . . 48
A.2 Proof for Matrix pencil “All-solutions”H
1
Controller formula . . . . . 52
iv
List of Figures
2.1 General control configuration . . . . . . . . . . . . . . . . . . . . . . . 5
5.1 A representation ofH
1
“all-solutions” controller . . . . . . . . . . . . 36
6.1 Mixed SensitivityH
1
control problem . . . . . . . . . . . . . . . . . . 40
6.2 Singular-value Bode plot of closed-loop functions . . . . . . . . . . . . 41
v
List of Tables
2.1 MATLAB script forSJF function . . . . . . . . . . . . . . . . . . . . . 8
2.2 MATLAB script forlqy function . . . . . . . . . . . . . . . . . . . . . 14
6.1 H
1
Controller Design Summary . . . . . . . . . . . . . . . . . . . . . 42
vi
Abstract
In this thesis we present a simple, unified formula for discrete and continuous-timeH
1
“all-solutions” controllers. By observing a “cost” equivalence between the standardH
1
control problem and a certainLQ optimal regulator problem, an elegant controller struc-
ture reminiscent of anLQG optimal controller is developed. Our choice of notation also
simplifies the derivation and existence conditions considerably, whereby all unnecessary
assumptions on plant state-space matrices and “loop-shifting” transformations are elim-
inated. Additionally, with our focus entirely on input-output weighting “cost functions”
this derivation offers a “behavioral” theory interpretation for all solutions of a standard
H
1
control problem.
In this thesis we also present a simplified matrix pencil formula for solving the H
1
control problem for the case 0 (D
11
)
. This formula is useful in developing
a more numerically robust algorithm in H
1
control. A significant feature of this for-
mula is that each element of the pencils is expressed directly in terms of the original
descriptor-form state space matrices of the plant and even pencil eigenspaces computed
using a numerically robust even pencil algorithm. There are no data-corrupting numer-
ical operations required to form any of the matrices that appear in our “all-solutions”
controller formula.
vii
Acknowledgements
First and foremost, I wish to express my sincere gratitude towards my Ph.D. advisor
Dr. Michael G. Safonov for the inspiration, motivation and guidance throughout the
course of my studies here at USC. I also wish to thank all my Ph.D. defense committee
members starting with Dr. Edmond Jonckheere, Dr. Paul Newton, Dr. Si-Zhao Qin and
Dr. Firdaus Udwadia for their constant support. A special note of thanks is due to Prof.
Jerry Lockenour who introduced me to the joys of Flight Mechanics and Flight vehicle
stability and control.
Over the years I have received a lot of support from the Ming Hsieh Department of
Electrical Engineering. I enjoyed working as a Teaching Assistant for the department
which gave me plenty of opportunities to teach a variety of courses at the Graduate and
Undergraduate level. In this regard I am indeed grateful to Ms. Diane Demetras, Ms.
Christina Fontenot, Mr. Shane Goodoff, Ms. Annie Yu and Mr. Tim Boston who have
been extremely helpful. I am also thankful to Dr. Keith Chugg, Dr. Michael A. Enright,
Prof. Mark Redekopp, Dr. Ali Zadeh, Dr. Allan Weber, Dr. Ashok Patel and many
others who helped me in carrying out my duties as a teaching assistant. On a separate
note, I wish to thank my former colleagues at the Department of Environmental Health
and Safety at USC, especially my supervisors Ms. Lisa Sanchez and Dr. John Edward
Becker who were very supportive during the early days of my Doctoral Program.
viii
I am indeed grateful to all members of the Controls group at USC, especially Dr.
Srideep Musuvathy, Mr. Eugenio Grippo, Dr. Michael Chang, Dr. Mubarak Alha-
rashani, Dr. Shin-Young Cheong, Dr. Yun Wang, Mr. Prashanth Harshangi and Mr.
Rajit Chatterjea who have helped me a lot over the years. I am also thankful to all
my friends including Mr. Anup Menon, Mr. Amrut Dash, Mr. Nikandan Kumar, Mr.
Abhiram B.J., Mr. Nikhil Saraf, Mr. Ajit Raut, Ms. Chhavi Mishra, Mr. Krishnakanth
Chimalamarri, Mr. Rachit Lavasa, Mr. Ramzi El-Khoury, Mr. Kiran Nandanan, Mr.
Christo Singh, Mr. Arjun Mohan, Mr. Pradeep Mohanraj, Mr. Devesh Thanvi and Mr.
Prashanth Venkateswaran who supported me throughout my time at USC.
I am in-debt to the search and rescue teams of L.A. County and San Bernandino
County and Mt. Baldy park ranger Mr. Nathan who came to rescue me and my friends
after an unfortunate hiking accident last December. I am thankful to Dr. Reza Omid,
Dr. Michael Abdulian, Dr. Michael Lim and Ms. Diane Lapa of Keck Medical Cen-
ter of USC who helped with my surgery. I am also thankful to Mr. Kenneth Kim
and the support staff of USC Division of Biokinesiology who helped in my subsequent
recovery process. I am extremely grateful to the Doctoral Programs Coordinator Ms.
Tracy Charles and The Director of Doctoral Programs Ms. Jennifer Gerson who were
extremely supportive during this time.
Above all, I am grateful to my family especially my mother Mrs. Rajalakshmi
Karthikeyan, my father Mr. V . Karthikeyan and my brother Mr. K. Pranav Shashid-
har for their patience, understanding and undying support which has kept me going
throughout this research endeavor. Thanks also to my uncle Dr. Anantha Sundararajan
for inspiring me over the years. Many thanks to my grandparents who have enriched
my life with their presence. Thanks also to my uncle Dr.Srinivasan and his family and
to my cousins for their love and support. Thanks also to Ms. Uma Syamala who helped
me a lot during my stay in the U.S.
ix
Last but not least, I wish to thank Ms. Patsy Carter and Ms. Tracy Carter for their
kind hospitality for over six years .
Anantha Karthikeyan
Los Angeles, 2013
x
Chapter 1
Introduction
The idea of applyingLQ game theory to robust control problems can be traced to the
1967 paper by Medanic [Medanic, 1967]. Building upon this idea, Mageirou and Ho
developed optimal small gain feedback theory [Mageirou and Ho, 1977] in 1977. After
a brief hiatus, the idea re-surfaced in the form ofLQ ‘completion of squares’ identity
in the seminal work of Doyle et al. [Doyle et al., 1989]. By then, techniques to simplify
H
1
theory via “loop shifting” concept were also in place [Safonov et al., 1989]. Finally
a characterization of all solutions to the four block general distance problem was pre-
sented by Limebeer et al. [Limebeer et al., 1988] in 1991. In these works, restrictive
conditions on the state-spaceA;B;C;D-matrices were used to keep the equations sim-
ple, and then the case of generalA;B;C;D-matrices is handled via a sequence of addi-
tional loop-shifting transformations [Safonov et al., 1989] which add considerably to the
complexity of the derivations and to the controller formulae. An LQ game theoretic for-
mulation [Medanic, 1967, Mageirou and Ho, 1977, Doyle et al., 1989] of continuous-
time H
1
output-feedback control was presented in [Karthikeyan and Safonov, 2010],
along with an “all-solutions” characterization of the controller. The result of this
paper showed that the seminal works of [Doyle et al., 1989] can be simplified in the
case of generalA;B;C;D-matrices without any additional “loop-shifting” transforma-
tions [Safonov et al., 1989], thereby significantly reducing the complexity of the deriva-
tions, controller formulae and existence conditions.
As for the case of discrete-time H
1
control, an indirect approach has
been to use bilinear transforms along with the continuous-time results to
1
find a suitable controller. To circumvent the use of bilinear transforms
many techniques have been proposed, of which, significant contributions were
made by [Limebeer et al., 1989, Green and Limebeer, 1995, Iglesias and Glover, 1991,
Ionescu et al., 1999, Stoorvogel et al., 1994] and [Petkov et al., 1999]. Furthermore, in
[Karthikeyan and Safonov, 2009] the author showed the existence of a deeper connec-
tion between the discrete and continuous-time cases, although an “all-solutions” char-
acterization of the controller was not presented here.
The motivation for the LQ feedback formulation results in this thesis are there-
fore two-fold. First we wish to demonstrate that the derivation for the continuous-time
case has almost a one-to-one mapping to the discrete-time “all-solutions” case, which
calls for a unified theory and software implementation. Second, we wish to show that
this “LQ cost function” approach completely eliminates numerically corrupting opera-
tions such as “loop-shifting” transformation and bilinear transforms fromH
1
controller
derivations.
When it comes to the question of numerically robust methods in H
1
control we
see that techniques like [Benner et al., 2007] have been developed based on gamma
iteration and a novel extended matrix pencil formulation of the state space solution of
the sub-optimal H
1
control problem. This approach is based on solving even gen-
eralized eigenproblems instead of Riccati equations and unstructured matrix pencils.
Such methods avoid potentially error causing matrix algebra involving summation and
inversion of ill-conditioned matrices which would otherwise be encountered while
constructing the Riccati equation or Hamiltonian matrices used for solving the gamma
iteration problem. The enhanced numerical robustness in this method comes from
preserving the spectral symmetries which are inherent in the structure of the problem.
Furthermore, these methods are found to be useful even if the pencil has eigenvalues on
the imaginary axis. However, the problem of bringing in numerical robustness into the
2
controller formula was not addressed in [Benner et al., 2007]. Therefore, in this thesis
we present another simplified “all-solutions” formula using inverse free matrix pencils,
where each element of the controller is expressed in terms of the original descriptor
form state-space representation of the plant and even pencil eigenspaces computed
using the structure preserving algorithm of [Benner et al., 2007]. This implies reduced
numerical manipulation and correspondingly data corruption while solving the H
1
control problem.
3
Chapter 2
Preliminaries
2.1 Notation and Terminology
1. The operator ‘’ is defined as
=
8
>
>
<
>
>
:
z; (Discrete-time)
s; (Continuous-time)
where ‘z’ is the forward time-shift operator and ‘s’ is the differentiation operator.
2. We consider a real linear time-invariant system G having state-space system
matrix
S(G)
ss
=
2
4
A B
C D
3
5
(2.1.1)
The state-space systemS(G) has transfer function
G() = C(IA)
1
B +D
3. We denote by RH
1
the set of real LTI transfer function matrices which are stable
and proper.
4
Figure 2.1: General control configuration
4. For G 2 RH
1
, we define the H
1
norm as (e.g.,
[Skogestad and Postlethwaite, 2005]),
kG()k
1
=
8
>
>
<
>
>
:
sup
!
(G()j
=e
j!
); =z
sup
!
(G()j
=j!
); =s
where denotes the greatest singular value of a matrix.
5. Given a plantG and controllerK partitioned compatibly as follows
G =
2
4
G
11
G
12
G
21
G
22
3
5
K =
2
4
K
11
K
12
K
21
K
22
3
5
:
If G
22
K
11
is well defined and square, and IG
22
K
11
is invertible, then T =
5
lft(G;K) forms the Redheffer star product or linear fractional transformation with
respect to this partition [Matlab, 2010a], [Redheffer, 1960] (see Fig.2.1).
6. `
2
[0;1) denotes the Hilbert space of vector valued functions defined on [0;1)
with inner product
hf;gi =
1
X
k=0
(f(k)
T
g(k)): (2.1.2)
7. L
2
[0;1) denotes the Hilbert space of vector valued functions defined on [0;1)
with inner product
hf;gi =
Z
1
0
(f(t)
T
g(t))dt: (2.1.3)
8. By the notationL
2
[0;1) we refer to the Hilbert space `
2
[0;1) in the discrete-
time case or the spaceL
2
[0;1) in the continuous-time case.
9. Given a discrete or continuous-time signalu we define theL
2
norm
kuk
2
L2
=
8
>
>
<
>
>
:
kuk
2
`
2[0;1)
; (Discrete-time)
kuk
2
L
2
[0;1)
; (Continuous-time)
10. CoS is used as an abbreviation for the phrase: “completion of squares”.
6
2.2 Definitions
2.2.1 Schur Decomposition
Definition 1 (Schur Decomposition [Skogestad and Postlethwaite, 2005]). Given a
matrixA partitioned as follows
A =
2
4
A
11
A
12
A
21
A
22
3
5
; (2.2.1)
ifA
22
is non-singular thenA has the decomposition
A =
2
4
I A
12
A
1
22
0 I
3
5
2
4
Y 0
0 A
22
3
5
2
4
I 0
A
1
22
A
21
I
3
5
(2.2.2)
whereY =A
11
A
12
A
1
22
A
21
.
2.2.2 SchurJ-factorization
Definition 2 (Schur J-factorization). Let A be a symmetric matrix partitioned as in
equation (2.2.1). IfA has a Schur decomposition (2.2.2) withA
22
> 0 andY < 0, then
for a given value of
there exists a SchurJ-factorization
A =
2
4
I S
T
21
0 I
3
5
2
4
0 S
T
1
S
T
2
0
3
5
J
2
4
0 S
2
S
1
0
3
5
2
4
I 0
S
21
I
3
5
(2.2.3)
7
Table 2.1: MATLAB script forSJF function
function [S1,S2,S21,exist]=SJF(A,a1,gamma)
A11 =A(1:a1,1:a1); % partition A
A12 =A(1:a1,a1+1:end); A21 =A12';
A22 =A(a1+1:end,a1+1:end);
Y = A11-A12
*
(A22\A12');
% check existence and compute Schur J-Factors
exist1=true((eig(A22))'>(zeros(size(eig(A22))))')
exist2=true((eig(Y))'<(zeros(size(eig(Y))))')
if and(exist1,exist2)
S1 =sqrt(-Y)./gamma;
S2 =sqrt(A22); S21 =(A22\A12');
exist=true;
else
S1=[];S2=[];S21=[];
exist=false;
end
where the matrixJ and SchurJ-factorsS
1
;S
2
;S
21
are given by equations (2.2.4-2.2.7).
J
=
2
4
I 0
0
2
I
3
5
; (2.2.4)
S
1
=
1
(Y )
1
2
; (2.2.5)
S
2
= (A
22
)
1
2
; (2.2.6)
S
21
= A
1
22
A
T
12
: (2.2.7)
The SJF(A;
) function
Definition 3. (SJF(A;
) The notation SJF(A;
) denotes a function that computes the
SchurJ-factorization of (2.2.3) whenever such a factorization exists.
Remark 1. Table (2.1) gives aMATLAB script which can be used to find Schur-J factors
ofA when they exist.
8
2.2.3 Matrix Pencil
Definition 4. Matrix pencils are order one polynomial matrices of the form M(s) =
sM
+M
, where M
, M
are matrices. A pencil M(s) is said the be even if the
matrices M
, M
are real and M(s) = M
T
(s). A generalized eigenvector
0
and
zero ats
0
can be evaluated ifM(s
0
)
0
=0 andM(s)
0
6= 0 for alls6=s
0
.
9
2.3 Problem Statements
2.3.1 StandardH
1
Control Problem
The standardH
1
control problem concerns finding a controllerK for a plantG defined
by the state equations
2
6
6
6
4
x
y
1
y
2
3
7
7
7
5
= S(G)
2
6
6
6
4
x
u
1
u
2
3
7
7
7
5
; (2.3.1)
where
S(G)
ss
=
2
6
6
6
6
4
A B
1
B
2
C
1
D
11
D
12
C
2
D
21
D
22
3
7
7
7
7
5
: (2.3.2)
D
22
2R
m
2
r
2
,D
11
2R
m
1
r
1
,m =m
1
+m
2
,r =r
1
+r
2
.
Definition 5. is the set of all
0 for which an internally stabilizing controller exists
and
T
y
1
u
1
= lft(G;K)
satisfies
kT
y
1
u
1
k
1
<
: (2.3.3)
Problem 1 (StandardH
1
Control Problem). [Doyle et al., 1989]
Given
0, the standardH
1
problem is to determine if
2 and, if so, to compute
a realization forK. 2
10
2.3.2 OptimalH
1
Control Problem
Problem 2 (OptimalH
1
Control).
The optimalH
1
problem is to compute the infimum of
opt
= min
and a corresponding controller (e.g.,[Benner et al., 2007]). 2
In practice, the solution to optimal Problem 2 is computed via the
-iteration algo-
rithm (e.g., [Benner et al., 2007]) in which a convergent sequence upper and lower
bounds on
opt
is computed via iterative solution of Problem 1. Although, our main
concern here is the solution of the standard Problem 1.
2.3.3 H
1
Full-Information and Full-Control Problems
Problem 3 (H
1
Full-Information Feedback).
We refer to Problem 1 as theH
1
Full-information problem when the plantG is replaced
by the corresponding Full-Information plantG
FI
S(G
FI
)
ss
=
2
6
6
6
6
6
6
6
4
A B
1
B
2
C
1
D
11
D
12
2
4
I
n
0
3
5
2
4
0
I
r
1
3
5
2
4
0
0
3
5
3
7
7
7
7
7
7
7
5
: (2.3.4)
2
Problem 4 (H
1
Full-Control Feedback).
We refer to Problem 1 as theH
1
Full-control problem when the plantG is replaced by
the corresponding Full-Control plantG
FC
11
S(G
FC
)
ss
=
2
6
6
6
6
4
A B
1
h
I 0
i
C
1
D
11
h
0 I
i
C
2
D
21
h
0 0
i
3
7
7
7
7
5
: (2.3.5)
2
2.3.4 H
1
Disturbance Feed-forward and Output Estimation Prob-
lems
Problem 5 (H
1
Disturbance Feed-forward).
We refer to Problem 1 as theH
1
Disturbance Feed-forward problem when the plantG
is replaced by the corresponding Disturbance Feed-forward plantG
DF
S(G
DF
)
ss
=
2
6
6
6
6
4
A B
1
B
2
C
1
D
11
D
12
C
2
I 0
3
7
7
7
7
5
: (2.3.6)
To motivate the name disturbance feed-forward consider the special case withC
2
= 0.
Then there is no feedback and the measurement is exactlyu
1
. 2
Problem 6 (H
1
Output Estimation).
We refer to Problem 1 as the H
1
Output Estimation problem when the plant G is
replaced by the corresponding Output Estimation plantG
OE
12
S(G
OE
)
ss
=
2
6
6
6
6
4
A B
1
B
2
C
1
D
11
I
C
2
D
21
0
3
7
7
7
7
5
: (2.3.7)
2
Remark 2. The Output Estimation problem is dual to the Disturbance Feed-forward
problem just as Full-Control was dual to the Full-Information control problem.
2.3.5 LQ State-feedback Control Problem
Problem 7 (LQ State-feedback Control Problem).
Given a state space system (2.1.1) and cost matrices Q;R;N, the Linear Quadratic
(LQ) state-feedback control problem concerns finding a control lawu =Fx that mini-
mizes the corresponding “output-weighting” cost function
=
*
2
4
y
u
3
5
;
2
4
Q N
N
T
R
3
5
2
4
y
u
3
5
+
(2.3.8)
whereQ is any symmetric matrix.
Sincey = Cx +Du, we may re-write the above cost in standard LQR optimal control
framework as follows (see for e.g.,[Lancaster and Rodman, 1995, Matlab, 2008])
=
*
2
4
x
u
3
5
;
2
4
Q
x
N
x
N
T
x
R
x
3
5
2
4
x
u
3
5
+
(2.3.9)
13
Table 2.2: MATLAB script forlqy function
function [F,Ru,exist]=lqy(G,Q,R,N)
[A,B,C,D,T]=ssdata(G);
Qx = C'
*
Q
*
C;
Rx = R + D'
*
Q
*
D + N'
*
D + D'
*
N;
Nx = C'
*
(Q
*
D + N);
if (T==0) % continuous-time case
[P,L,F] = care(A,B,Qx,Rx,Nx);
Ru = Rx;
else % discrete-time case
[P,L,F] = dare(A,B,Qx,Rx,Nx);
Ru = Rx + B'
*
P
*
B;
end
exist= (:isequal(F,[]))&&(:isequal(P,[]));
F=-F; % change sign for positive feedback u=Fx
where,
2
4
Q
x
N
x
N
T
x
R
x
3
5
=
2
4
C
T
0
D
T
I
3
5
2
4
Q N
N R
3
5
2
4
C D
0 I
3
5
:
2
The lqy function
Definition 6. (lqy) The notation lqy(G;Q;R;N) denotes a function that finds the
unique stabilizing feedback F that solves problem 7 for a given plant G and choice
ofQ;R;N, whenever such anF exists.
2
Remark 3. AMATLAB script for such a function is given in Table (2.2).
14
Chapter 3
Background
3.1 Completion of Squares (CoS) Identities
3.1.1 LQ Completion of Squares
Lemma 3.1.1 (LQ Completion of Squares Identity). [Brockett, 1970,
Lancaster and Rodman, 1995]
For a given plant G and cost matrices Q;R;N, if there exists a stabilizing feedback
F that solves problem 7, then, for all u2 L
2
[0;1), u
= Fx and initial condition
x(0) = 0, it holds that
=h(uu
);R
u
(uu
)i; (3.1.1)
where
R
u
=
8
>
>
<
>
>
:
R
x
+B
T
PB; (Discrete-time)
R
x
; (Continuous-time).
(3.1.2)
P in equation (3.1.2) is the unique stabilizing solution of the discrete algebraic Riccati
equation
P =A
T
PA +Q
x
(N
T
x
+B
T
PA)
T
R
1
P
(N
T
x
+B
T
PA): (3.1.3)
Proof. For the continuous-time case [Karthikeyan and Safonov, 2010], it is known
[Brockett, 1970, Lancaster and Rodman, 1995] that for all u(t) 2 R
m
the cost J
at
15
time with controlu and initial statex
0
can be represented in terms of any solutionP
to the following Riccati equation
A
T
P +PA (PB +N
x
)R
x
1
(PB +N
x
)
T
+Q
x
= 0 (3.1.4)
and feedback gain matrix
F =R
x
1
(N
x
T
+B
T
P ): (3.1.5)
Now, ifP is any symmetric solution to the Riccati equation 3.1.4, then for anyu2 U,
u
=Fx and initial conditionx
0
,
J
=x
0
T
Px
0
x
T
Px
+
Z
0
(uu
)
T
R
x
(uu
)dt:
where
u
=Fx:
Moreover, ifu2 L
2
[0;1),x
0
= 0, then lim
!1
x
= 0 and lim
!1
J
= J. There-
fore,
J =
Z
1
0
(uu
)
T
R
x
(uu
)dt: (3.1.6)
For discrete-time case [Karthikeyan and Safonov, 2009] it is known [Brockett, 1970,
Lancaster and Rodman, 1995] that for all u(k)2 R
m
the cost J at k with control u
16
and initial state x(0) can be represented in terms of any solution P to the following
Riccati equation
P =A
T
PA +
~
Q (
~
S
T
+B
T
PA)
T
(R
P
)
1
(
~
S
T
+B
T
PA) (3.1.7)
and feedback gain matrix
F =R
P
1
(
~
S
T
+B
T
PA) (3.1.8)
whereR
P
=
~
R +B
T
PB. Now, ifP is any symmetric solution to the Riccati equation
stated above, then for anyu2U and initial conditionx(0)
J
K
=x(K)
T
Px(K) +x(0)
T
Px(0) +
K
X
k=0
(u(k)u(k)
)
T
R
P
(u(k)u(k)
)
(3.1.9)
where
u(k)
=Fx(k) (3.1.10)
Moreover, ifu2`
2
[0;1),x(0) = 0 and lim
K!1
x(K) = 0, then
lim
K!1
J
K
=
1
X
k=0
(u(k)u(k)
)
T
R
P
(u(k)u(k)
) (3.1.11)
2
Remark 4. It must be stressed that the “completion of squares” matrix R
u
in equa-
tion (3.1.6) is the only point of disparity between the continuous and discrete-time case.
17
3.1.2 LQ Interpretation ofH
1
It is a well-known consequence of Parseval’s theorem that the H
1
control objective
kT
y
1
u
1
k
1
<
can be expressed equivalently in the time-domain in terms of an
inequality satisfied by anLQ cost function.
Lemma 3.1.2 (LQ cost interpretation of H
1
). [Mageirou and Ho, 1977,
Doyle et al., 1989]
For a given plantG and controllerK
kT
y
1
u
1
k
1
<
if and only if
H1
< 0 for allu
1
2L
2[0;1)
; (3.1.12)
where
H1
= ky
1
k
2
L2
2
ku
1
k
2
L2
: (3.1.13)
2
18
By defining cost matricesQ
c
;R
c
;N
c
as follows
Q
c
=
2
4
I
m
1
m
1
0
0 0
3
5
2R
mm
; (3.1.14)
R
c
=
2
4
2
I
r
1
r
1
0
0 0
3
5
2R
rr
; (3.1.15)
N
c
= 02R
mr
: (3.1.16)
equation (3.1.13) can be expressed as
H1
=
*
2
4
y
u
3
5
;
2
4
Q
c
N
c
N
T
c
R
c
3
5
2
4
y
u
3
5
+
: (3.1.17)
Comparing cost functions
H1
(3.1.17) and (2.3.8), we notice that
H1
is a special
case of whereQ =Q
c
;R =R
c
;N =N
c
. Note that we do not requireR
c
> 0.
Lemma 3.1.3 (H
1
CoS Identity).
For a given plantG and cost matricesQ
c
;R
c
;N
c
theH
1
“completion of squares” iden-
tity (3.1.18) holds for ally
1
; y
1
;u
1
; u
1
;u
2
;x2 L
2
if and only if there exists a stabiliz-
ing feedbackF solving problem 7 and the correspondingLQ “completion of squares”
matrixR
u
given by (3.1.2) has a SchurJ-factorization (2.2.3).
ky
1
k
2
L2
2
ku
1
k
2
L2
= k y
1
k
2
L2
2
k u
1
k
2
L2
(3.1.18)
where
19
2
4
y
1
u
1
3
5
=
2
4
0 S
u
2
S
u
1
0
3
5
2
4
I 0
S
u
21
I
3
5
2
4
u
1
u
1
u
2
u
2
3
5
(3.1.19)
2
4
u
1
u
2
3
5
=
2
4
F
1
F
2
3
5
x (3.1.20)
2
4
F
1
F
2
3
5
= F: (3.1.21)
S
u
1
=
1
(R
u
12
R
1
u
22
R
T
u
12
R
u
11
)
1
2
(3.1.22)
S
u
2
= (R
u
22
)
1
2
(3.1.23)
S
u
21
= R
1
u
22
R
T
u
12
(3.1.24)
Note that by takingA = R
u
in equation (2.2.3), the SchurJ-factorsS
u
1
;S
u
2
;S
u
21
were derived above using equations (3.1.22-2.2.7).
Proof. Specializing Lemma 3.1.1 to the case ofH
1
cost matricesQ = Q
c
, R = R
c
,
N =N
c
, equation (3.1.6) becomes
H1
= h(uu
);R
u
(uu
)i: (3.1.25)
Using a SchurJ-factorization ofR
u
(2.2.3) in equation (3.1.25)
H1
= k y
1
k
2
L2
2
k u
1
k
2
L2
: (3.1.26)
2
20
3.2 Squared-Down Plants
3.2.1 Squaring-DownD
12
Lemma 3.2.1 (Squaring-DownD
12
).
Consider the squared-down plant
G with a state-space representation
S(
G)
ss
=
2
6
6
6
6
4
A B
1
S
1
u
1
B
2
S
u
2
F
2
S
u
2
S
u
21
S
1
u
1
S
u
2
C
2
D
21
S
1
u
1
D
22
3
7
7
7
7
5
(3.2.1)
where
A =A +B
1
F
1
,
C
2
=C
2
+D
21
F
1
.
Under the conditions of Lemma 3.1.3, a feedbacku
2
=Ky
2
that solves theH
1
control
problem for the squared-down plant
G also solves the H
1
control problem for the
plant G, provided that it stabilizes G. Conversely, a feedback u
2
= Ky
2
that solves
the H
1
control problem for the plant G also solves the H
1
control problem for the
squared-down plant
G, provided that it stabilizes
G.
Proof. Given a plantG with state-space equations (2.3.1), the equations for a squared
down plant are obtained by substituting u
1
= S
u
1
(u
1
u
1
), replacing they
1
output by
y
1
= S
u
2
((u
2
u
2
) +S
u
21
(u
1
u
1
)) and regrouping terms. Since by hypothesis both
G andG are stabilized byK, the closed-loop response signalsy
1
;y
2
;u
2
;x; y
1
; u
2
are in
L
2
for allu
1
; u
2
inL
2
. Hence, by Lemma 3.1.3,
ky
1
k
2
L2
2
ku
1
k
2
L2
= k y
1
k
2
L2
2
k u
1
k
2
L2
(3.2.2)
and the result follows immediately via Parseval’s theorem. 2
21
-0.1in By defining the cost matricesQ
o
;R
o
;N
o
as given below
Q
o
=
2
4
I
r
1
r
1
0
0 0
3
5
2R
rr
(3.2.3)
R
o
=
2
4
2
I
m
1
m
1
0
0 0
3
5
2R
mm
(3.2.4)
N
o
= 02R
rm
(3.2.5)
the following dual result is immediate.
3.2.2 Squaring-DownD
21
Lemma 3.2.2 (Squaring-DownD
21
).
Given a plantG and cost matricesQ
o
;R
o
;N
o
, if forG
T
there exists a stabilizing feed-
backH
T
solving problem 7 such that the corresponding CoS matrixR
y
has SchurJ-
factorsS
y
1
;S
y
2
;S
y
21
, then a feedbacku
2
= Ky
2
that solves the standardH
1
control
Problem 1 for the following squared-down plant
~
G
S(
~
G)
ss
=
2
6
6
6
6
4
~
A H
2
S
y
2
~
B
2
S
1
y
1
C
1
S
1
y
1
S
T
y
21
S
y
2
S
1
y
1
D
12
C
2
S
y
2
D
22
3
7
7
7
7
5
(3.2.6)
where
~
A = A +H
1
C
1
,
~
B
2
= B
2
+H
1
D
12
, also solves the H
1
control problem for
the plantG, provided that it stabilizesG. Conversely, a feedbacku
2
=Ky
2
that solves
the H
1
control problem for the plant G also solves the H
1
control problem for the
22
squared-down plant
~
G, provided that it stabilizes
~
G.
Proof. SincekGk
1
=kG
T
k
1
, the result follows directly by transposingG, applying
Lemma 3.2.1 and finally transposing the resultant equations. 2
23
Chapter 4
H
1
Full-Information and Full-Control
4.1 H
1
Full-Information Feedback
Theorem 4.1.1 (H
1
Full-Information Feedback).
A solution to Problem 3 exists if and only if the following conditions hold:
(i) There exists a full-state feedback F and CoS matrix R
u
such that the H
1
CoS
identity (3.1.18) holds for the full-information plantG
FI
(2.3.4) and cost matrices
Q
c
;R
c
;N
c
.
(ii) The matrixA +B
2
h
S
u
21
I
r
2
i
F is Hurwitz.
Furthermore if a solution exists, then forX2RH
1
,kXk
1
<
, all solutions are given
by
u
2
= lft(K
FI
;X)
2
4
x
u
1
3
5
; (4.1.1)
using the “central full-information controller”
K
FI
= lft(M
FI
;S
FI
); (4.1.2)
24
where
M
FI
=
2
6
6
6
6
4
0 0 I
F
2
4
I
0
3
5
0
3
7
7
7
7
5
; (4.1.3)
S
FI
=
2
4
S
u
21
I S
1
u
2
S
u
1
0 0
3
5
: (4.1.4)
Proof. See Appendix. 2
Remark 5. The condition (ii) in Theorem 4.1.1 is equivalent to the requirement that the
solutionP to the algebraic Riccati equation for the correspondingLQ problem be pos-
itive semidefinite [Safonov et al., 1989, Thm 4]. From a computational standpoint, the
Hurwitz condition (ii) in Theorem 4.1.1 is preferable. This is in part because the Hurwitz
condition directly checks that the feedback gainF is stabilizing. But, more importantly,
condition (ii) circumvents the numerical sensitivity issues that would arise in attempting
to distinguish an indefinite Riccati equation solutionP from one that is merely semidef-
inite. Furthermore, in Theorem 4.1.1, all checks on existence of Riccati solution are
handled within the lqy function by standard Riccati solvers (for e.g.,dare,care from
MATLAB).
The dual of the full-information feedbackH
1
problem is the problem of computing
anH
1
observer gainH. This is addressed in the following corollary to Theorem 4.1.1.
4.2 H
1
Full-Control Feedback
Corollary 4.2.1 (H
1
Full-control Feedback).
A solution to Problem 6 exists if and only if the following conditions hold:
25
(i) There exists a full-state feedbackH
T
and matrixR
y
such that theH
1
CoS iden-
tity (3.1.18) holds for the full-information plant (G
FC
)
T
(2.3.7) and cost matrices
Q
o
;R
o
;N
o
.
(ii) The matrixA +H
h
S
T
y
21
I
r
2
i
C
2
is Hurwitz.
Furthermore if a solution exists, then forX2RH
1
,kXk
1
<
, all solutions are given
by
u
2
= lft(K
FC
;X)y
2
(4.2.1)
using the “central full-control controller”
K
FC
= lft(M
FC
;S
FC
); (4.2.2)
where
M
FC
=
2
6
6
6
6
4
0 H
0
h
I 0
i
I 0
3
7
7
7
7
5
; (4.2.3)
S
FC
=
2
6
6
6
6
4
S
T
y
21
S
y
1
I 0
S
y
2
1
0
3
7
7
7
7
5
: (4.2.4)
Proof. See Appendix.
26
4.3 H
1
Observer
TheH
1
observer for a plantG in (2.3.1) is given by
^ x = A^ x +B
2
u
2
+H
2
4
^ y
1
3
5
(4.3.1)
^ y
1
= C
1
^ x +D
12
u
2
(4.3.2)
^ y
2
= C
2
^ x +D
22
u
2
(4.3.3)
= ^ y
2
y
2
(4.3.4)
^
~ u
1
= S
1
y
2
(4.3.5)
Lemma 4.3.1 (H
1
observer ).
Suppose the feedback gain H exists and the observer equations (4.3.1-4.3.5) for the
plantG are used to compute an estimate ^ x. Now, consider the equations of the squared
down plant (3.2.6)
2
6
6
6
4
x
~ y
1
y
2
3
7
7
7
5
=S(
~
G)
2
6
6
6
4
x
~ u
1
u
2
3
7
7
7
5
;
then the observer errore
=x ^ x satisfies
e = (A +HC)e
where (A +HC) is Hurwitz.
27
Proof. The error dynamics are given by
e =x^ x
=
~
AxH
2
S
y
2
u
1
+
~
B
2
u
2
(
~
A^ x +
~
B
2
u
2
+H
2
)
= (
~
A +H
2
C
2
)e
= (A +H
1
C
1
+H
2
C
2
)e
=A
e
e
= (A +HC)e
We note that (A +HC) is Hurwitz sinceH
T
is a stabilizing feedback that solves prob-
lem 7 forG
T
and cost matricesQ
o
;R
o
;N
o
. 2
Ifx(0) = ^ x(0) = 0 thene(k) = 08k (Discrete-time) ore(t) = 08t (Continuous-
time). Therefore we may substitute x(k) with ^ x(k) or x(t) by ^ x(t) without affecting
kT
~ y
1
~ u
1
k
1
, whereT
~ y
1
~ u
1
= lft(
~
G;K).
4.4 Solution methods for the sub-optimal H
1
Control
Problem
Consider the linear time invariant plant
G(s) =
2
4
G
11
(s) G
12
(s)
G
21
(s) G
22
(s)
3
5
28
which has descriptor form state-space representation
G(s)
ss
=
2
6
6
6
6
4
Es +A B
1
B
2
C
1
D
11
D
12
C
2
D
21
D
22
3
7
7
7
7
5
=
2
4
C
1
C
2
3
5
(EsA)
1
h
B
1
B
2
i
:
with statex2R
n
, inputsu
1
2R
r
1
;u
2
2R
r2
, and Outputsy
1
2R
m
1
y
2
2R
m
2
. When
the feedback lawu
2
(s) =K(s)y
2
(s) is applied to the plantG(s), we have
T
y
1
u
1
(s) = lft (G(s);K(s)) (4.4.1)
= G
11
(s) +G
12
(s)K(s)(IG
22
(s)K(s))
1
G
21
(s) (4.4.2)
In this part we make the following assumptions:
A1: (A;B
2
;C
2
) is stabilizable and detectable
A2:D
12
has full column rank andD
21
has full row rank
A3:
2
4
sE +A B
2
C
1
D
12
3
5
and
2
4
sE
T
+A
T
C
2
T
B
1
T
D
21
T
3
5
have no zeros on the j! axis and
r
2
m
1
andm
2
r
1
A4: The plant matrixE is invertible
Note that A1,A2 and A3 are standard assumptions, required for well-posedness of the
all-solutions controller formula of [Limebeer et al., 1988]. The last assumption A4
might be inessential, but is required here to ensure that the plant is proper and can be
transformed to the standard state-space form of [Limebeer et al., 1988, Thm. 5.1’] upon
which we base the derivations in this paper.
29
Eigenspace methods for solving the Optimal H
1
control problem include
(A) the original Hamiltonian matrix methods (e.g., [Glover and Doyle, 1988,
Limebeer et al., 1988]), and (B) the more numerically robust extended matrix-
pencil methods (e.g., [Benner et al., 2007, K.C.Goh and M.G.Safonov, 1993,
Gahinet and Pandey, 1991]). We briefly describe key elements of each of these
two methods below.
4.4.1 Hamiltonian matrix approach to the sub-optimalH
1
control
problem
[Benner et al., 2007] Let us consider
H =
2
4
F G
K F
T
3
5
(4.4.3)
to be a Hamiltonian matrix, whereG;K are symmetric andF;G;K2R
n;n
. H has no
eigenvalues on the j! axis and the eigenvalues ofH have spectral symmetry. We also
note that to each Hamiltonian matrix there corresponds an algebraic Riccati equation of
the form
F
T
X +XF +KXGX = 0: (4.4.4)
A solution X of the above equation is said to be stabilizing if X=X
T
and F-GX is
Hurwitz.
Let us define as in [Benner et al., 2007] the symmetric matrices depending on
as a
parameter to be:
30
R
H
=
2
4
D
11
T
D
12
T
3
5
h
D
11
D
12
i
2
4
2
I
r1
0
0 0
3
5
(4.4.5)
R
J
=
2
4
D
11
D
21
3
5
h
D
11
T
D
21
T
i
2
4
2
I
m1
0
0 0
3
5
(4.4.6)
and take ^
H
= max(
2RjR
H
is singular), ^
J
= max(
2RjR
J
is singular) and ^
=
max(^
H
; ^
J
)
Finally let us define the Hamiltonian matrices as in [Benner et al., 2007]
H(
) =
2
4
A 0
C
T
1
C
1
A
T
3
5
+
2
4
B
1
B
2
C
T
1
D
11
C
T
1
D
12
3
5
R
1
H
2
4
D
T
11
C
1
B
T
1
D
T
12
C
1
B
T
2
3
5
(4.4.7)
J(
) =
2
4
A
T
0
B
1
B
T
1
A
3
5
+
2
4
C
T
1
C
T
2
B
1
D
T
11
B
1
D
T
21
3
5
R
1
J
2
4
D
11
B
T
11
C
1
D
21
B
T
1
C
2
3
5
: (4.4.8)
Under the assumptions A1-A3, for the given plant with R
H
and R
J
as defined
above there exists an internally stabilizing controller such thatkT
y
1
u
1
(s)k
1
<
if
and only if the following conditions hold [Benner et al., 2007, Glover and Doyle, 1988,
Zhou et al., 1995].
1.
> ^
where ^
is as defined above.
2. There exists a positive semidefinite stabilizing solution X
H
for the algebraic
Riccati equation associated with H(
).
31
3. There exists a positive semidefinite stabilizing solution X
J
for the algebraic
Riccati equation associated with J(
).
4.
2
>(X
H
X
J
).
4.4.2 Matrix pencil approach to the sub-optimalH
1
control prob-
lem
We note that there are numerical difficulties associated with the explicit solution of
the Riccati equations and the spectral radius condition. In order to overcome such
problems, a matrix-pencil reformulation of the above conditions has been developed
[Gahinet and Pandey, 1991, K.C.Goh and M.G.Safonov, 1993, Benner et al., 2007].
In [K.C.Goh and M.G.Safonov, 1993], the following two matrix pencils replace
the Hamiltonian matrices corresponding to the two Riccati equations of
[Limebeer et al., 1988]:
M
12
(s) = sM
E
12
+M
A
12
(4.4.9)
M
12
(s) =
2
6
6
6
6
6
6
6
4
0 sE +A B
1
B
2
sE
T
+A
T
C
1
T
C
1
C
1
T
D
11
C
1
T
D
12
B
1
T
D
11
T
C
1
2
I +D
11
T
D
11
D
11
T
D
12
B
2
T
D
12
T
C
1
D
12
T
D
11
D
12
T
D
12
3
7
7
7
7
7
7
7
5
(4.4.10)
32
M
21
(s) = sM
E
21
+M
A
21
(4.4.11)
M
21
(s) =
2
6
6
6
6
6
6
6
4
0 sE
T
+A
T
C
1
T
C
2
T
sE +A B
1
B
1
T
B
1
D
11
T
B
1
D
21
T
C
1
D
11
B
1
T
2
I +D
11
D
11
T
D
11
D
21
T
C
2
D
21
B
1
T
D
21
D
11
T
D
21
D
21
T
3
7
7
7
7
7
7
7
5
(4.4.12)
In the extended matrix pencil framework developed by [Benner et al., 2007], M
12
(s),
M
21
(s) are replaced
^
M
12
(s) and
^
M
21
(s) respectively, where
^
M
12
(s) is defined as
^
M
12
(s) =
2
6
6
6
6
6
6
6
6
6
6
4
0 sE +A B
1
B
2
0
sE
T
+A
T
0 0 0 C
1
T
B
1
T
0
2
I 0 D
11
T
B
2
T
0 0 0 D
12
T
0 C
1
D
11
D
12
I
3
7
7
7
7
7
7
7
7
7
7
5
; (4.4.13)
and
^
M
21
(s) is defined as
^
M
21
(s) =
2
6
6
6
6
6
6
6
6
6
6
4
0 sE
T
+A
T
C
1
T
C
2
T
0
sE +A 0 0 0 B
1
C
1
0
2
I 0 D
11
C
2
0 0 0 D
21
0 B
1
T
D
11
T
D
21
T
I
3
7
7
7
7
7
7
7
7
7
7
5
(4.4.14)
33
12
=
2
6
6
6
6
6
6
6
4
12
X
12
V
12
U
12
3
7
7
7
7
7
7
7
5
and
21
=
2
6
6
6
6
6
6
6
4
21
X
21
V
21
U
21
3
7
7
7
7
7
7
7
5
form the bases of the eigenspaces corresponding
toC
zeros ofM
12
(s) andM
21
(s). Similarly the bases for the generalized eigenspaces
of
^
M
12
(s) and
^
M
21
(s) can be expressed as
^
12
=
2
6
6
6
6
6
6
6
6
6
6
4
12
X
12
V
12
U
12
W
12
3
7
7
7
7
7
7
7
7
7
7
5
and
^
21
=
2
6
6
6
6
6
6
6
6
6
6
4
21
X
21
V
21
U
21
W
21
3
7
7
7
7
7
7
7
7
7
7
5
The extended matrix pencils
^
M
12
(s);
^
M
21
(s) of [Benner et al., 2007] have the advantage
over the pencilsM
12
(s);M
21
(s) that they are defined directly in terms of the plant data
without the need for potentially data-corrupting multiplication or addition.
Methods for extracting the eigenspaces of matrix pencils has been given in
[Dooren, 1981, Dooren, 1979]. A numerically robust computation of these eigenspaces
is crucial for computing the optimal H
1
cost
opt
via
-iteration technique. Benner
et. [Benner et al., 2007] have indeed developed such an algorithm which preserves the
structural symmetry of the eigenspaces implicit in the even structure of the pencils
^
M
12
(s);
^
M
21
(s). As a continuation to the effort of solving the H
1
control problem,
we present the controller formulae based on these inverse free pencils in the following
section.
34
Chapter 5
Main Result
5.1 LQ Feedback H
1
“All-solutions” Controller For-
mula
Theorem 5.1.1 (H
1
“All-solutions” Controller Formula).
Given a plantG, a solution to theH
1
control problem exists if and only if the following
existence conditions hold:
(i) For the plant G, there exists a stabilizing solution H to the corresponding H
1
Full-control problem.
(ii) For the squared-down plant
~
G, there exists a stabilizing solution
~
F to the corre-
spondingH
1
Full-information problem.
When the above conditions hold, then, forX2 RH
1
,kXk
1
<
, the reconstructed-
state output-feedbackH
1
“all-solutions” controller is given by
u
2
= lft(
~
K
FI
;X)
2
4
^ x
^
~ u
1
3
5
; (5.1.1)
where the “central full-information controller”
~
K
FI
= lft(
~
M
FI
;
~
S
FI
); (5.1.2)
35
G
A B
C D
PLANT MODEL
H
X
OBSERVER
CONTROLLER
-
+
+
+
+ +
Figure 5.1: A representation ofH
1
“all-solutions” controller
and
~
M
FI
;
~
S
FI
are obtained by applying Theorem 4.1.1 to the squared-down plant
~
G.
Furthermore, the reconstructed full-information vector comprising of state estimate ^ x
and exogenous input
^
~ u
1
is given by theH
1
observer (4.3.1-4.3.5).
Proof. See appendix. 2
36
5.2 Matrix Pencil H
1
“All-solutions” Controller For-
mula
Let us define:
(s) =
2
6
6
6
6
6
6
6
6
6
6
4
2
(sE
T
+A
T
) 0 0 0 0
0 sE +A B
1
B
2
0
0 C
1
D
11
D
12
0
0 C
2
D
21
D
22
0
0 0 0 0
2
D
11
T
3
7
7
7
7
7
7
7
7
7
7
5
(5.2.1)
also define:[K.C.Goh and M.G.Safonov, 1993]
^
D
11
= D
11
T
(
2
ID
11
D
11
T
)
1
(5.2.2)
^
D
12
= [D
12
T
(I
2
D
11
D
11
T
)
1
D
12
]
1
2
(5.2.3)
^
D
21
= [D
21
(I
2
D
11
T
D
11
)
1
D
21
T
]
1
2
(5.2.4)
Q(s) is defined as any r
2
m
2
stable transfer function matrix such that
([K.C.Goh and M.G.Safonov, 1993])
k
^
D
12
Q(s)
^
D
21
k
1
<
Theorem 5.2.1 (Matrix PencilH
1
Controller Formulae).
Suppose (D
11
) <
. Then, the sub-optimal H
1
control problem for the given plant
has an internally stabilizing controllerK
Q
(s), provided the conditions 1 to 4 hold.
The internally stabilizing controller is then given by:
K
Q
(s)=F(K(s),Q(s))
37
The descriptor representation of K(s) is as follows:
K(s)
des
=
2
6
6
6
6
4
sE
k
+A
k
B
k1
B
k2
C
k1
D
k11
D
k12
C
k2
D
k21
D
k22
3
7
7
7
7
5
(5.2.5)
Where:
[sE
k
+A
k
] = [
^
T
21
(s)
^
12
] (5.2.6)
h
B
k1
B
k2
i
=
h
^
T
21
i
2
6
6
6
6
6
6
6
6
6
6
4
0
B
B
B
B
B
B
B
B
B
B
@
0
0
0
I
m2
0
1
C
C
C
C
C
C
C
C
C
C
A
(s)
0
B
B
B
B
B
B
B
B
B
B
@
0
0
0
I
r2
0
1
C
C
C
C
C
C
C
C
C
C
A
3
7
7
7
7
7
7
7
7
7
7
5
(5.2.7)
2
4
C
k1
C
k2
3
5
=
2
6
4
0 0 0 I
r2
0
0 0 0 I
m2
0
(s)
3
7
5
h
^
12
i
(5.2.8)
2
4
D
k11
D
k12
D
k21
D
k22
3
5
=
2
4
0 I
r2
I
m2
D
22
+D
21
^
D
11
D
12
3
5
(5.2.9)
Proof. See appendix. 2
38
Chapter 6
Examples
Example 1. Given the plant
P (s) =
(s 1)
(s + 1)
2
; (6.0.1)
consider the “mixed” sensitivityH
1
control problem (Problem 2) for
T
y
1
u
1
=
2
6
6
6
4
W
p
(s)1=(1 +P (s)K(s))
W
u
(s)K=(1 +P (s)K(s))
W
t
(s)P (s)K(s)=(1 +P (s)K(s))
3
7
7
7
5
; (6.0.2)
with the following choice of “weights”
W
p
(s) =
0:1(s + 100)
(100s + 1)
; (6.0.3)
W
u
(s) = 0:1; (6.0.4)
W
t
(s) = 0: (6.0.5)
For T
y
1
u
1
in equation (6.0.2) the generalized plant G is given by (see Sec.3.8.1,
[Skogestad and Postlethwaite, 2005], also see Fig.6.1)
39
Figure 6.1: Mixed SensitivityH
1
control problem
G(s) =
2
6
6
6
6
6
6
6
4
W
p
(s) W
p
(s)P (s)
0 W
u
(s)
0 W
t
(s)P (s)
1 P (s)
3
7
7
7
7
7
7
7
5
: (6.0.6)
AnH
1
optimal controller is then computed using our main Theorem 5.1.1 (see summary
in Appendix:Table 6.1). From Fig. 6.2 we see that all design requirements were met as
our result agrees with the output ofMATLAB’s routinemixsyn (see [Matlab, 2010b]).
40
10
-4
10
-3
10
-2
10
-1
10
0
10
1
10
2
10
3
10
4
-140
-120
-100
-80
-60
-40
-20
0
20
40
60
Singular Values
Frequency (rad/sec)
Singular Values (dB)
1/(1+PK)
PK/(1+PK)
GAM/W
p
GAM*P/ss(W
u
)
Figure 6.2: Singular-value Bode plot of closed-loop functions
41
Table 6.1:H
1
Controller Design Summary
Design Parameter Values (refer to Fig.5.1)
A
2
4
0:01000 0:22096 0:15624
0 1 1:41421
0 0
3
5
B
2
4
0:31248 0
0 0
0 2
3
5
C
2
4
0:31998 0:00071 0:00050
0 0 0
0 0:70711 0:50000
3
5
D
2
4
0:00100 0
0 0:10000
1 0
3
5
H
2
4
0 0 0:31248
0 0 0
0 0 0
3
5
S
y
2
[1]
~
F
43:66786 8:14796 2:89797
102:75678 19:42774 7:12868
~
S
FI
0 1 10
0:99999 0 0
X [0]
42
Chapter 7
Conclusion
Building upon the seminal works of [Glover and Doyle, 1989], [Doyle et al., 1989],
[Limebeer et al., 1988] in the continuous-time case and prominent results
such as [Limebeer et al., 1989, Green and Limebeer, 1995, Petkov et al., 1999,
Iglesias and Glover, 1991, Ionescu et al., 1999, Stoorvogel et al., 1994] in the discrete-
time case, we present a unified formula (5.1.1) and representation structure (Fig.5.1) for
H
1
“all-solutions” controllers. With our focus on input-output weighting “cost” func-
tions, we revisit the “completion of squares” identity (Lemma 3.1.1) to show that the
continuous and discrete-time cases differ only in the choice of a “completion of squares”
matrixR
u
(3.1.2). With a simpler set of existence conditions we see that this result can
be easily implemented in software to handle continuous and discrete-time plants alike,
without the hassle of bilinear transforms and “loop-shifting” transformations. This
result is simpler than any earlier formula appearing in the aforementioned works (e.g.,
[Doyle et al., 1989, Iglesias and Glover, 1991] etc.). Otherwise intricate pencils and/or
Hamiltonians are eliminated from both our derivations and controller formula. Instead
these details become inessential details in subroutines of established LQ solution
formulae. The controller realization preserves and extends to the general case the
internal plant model controller structure first identified by [Glover and Doyle, 1989].
As shown in Figure 5.1, the general “all-solutions”H
1
controller realization derived in
this thesis contains at its core anH
1
optimal state-estimator with an exact copy of the
plant model.
43
The second main result of this thesis builds upon the work of [Benner et al., 2007]
which gives us a numerically robust even matrix pencil algorithm for computing the
optimal value of
via
-iteration, we have followed up in this work with simplified
matrix pencil formulae for the all-solutions H
1
controller too. A significant feature
of our formulae is that each element of the pencils is expressed directly in terms of the
original descriptor-form state space matrices of the plant and the even pencil eigenspaces
computed by the even pencil algorithm of [Benner et al., 2007], so that there are no data-
corrupting numerical operations required to form any of the matrices that appear in our
“all-solutions” controller formulae.
44
Bibliography
[Benner et al., 2007] Benner, P., Byers, R., and Xu, H. (2007). A robust numerical
method for the
-iteration inH
1
control. Linear Algebra and Applications, 425(2-
3):548–570.
[Brockett, 1970] Brockett, R. (1970). Finite Dimensional Linear Systems. Wiley, NY .
[Dooren, 1979] Dooren, P. (1979). The computation of kronecker’s canonical form of
a singular pencil. Linear Algebra and its Applications, 27:103–140.
[Dooren, 1981] Dooren, P. V . (1981). A generalized eigenvalue approach for solving
riccati equations. SIAM Journal of Scientific and Statistical Computing, 2(2):262–
283.
[Doyle et al., 1989] Doyle, J., Glover, K., Khargonekar, P., and Francis, B. (1989). State
space solutions to standard H
2
and H
1
control problems. IEEE Transactions in
Automatic Control, AC-34(8):731–747.
[Gahinet and Pandey, 1991] Gahinet, P. M. and Pandey, P. (1991). Fast and numerically
robust algorithm for computing the H
1
optimum. In Proceedings of 30th IEEE
Conference on Decision and Control, Brighton, England.
[Glover and Doyle, 1988] Glover, K. and Doyle, J. (1988). State space formulae for all
stabilizing controllers that satisfy anH
1
norm bound and relations to risk sensitivity.
Systems and Control Letters, 11:167–172.
[Glover and Doyle, 1989] Glover, K. and Doyle, J. C. (1989). A state-space approach
to H
1
optimal control. In Three Decades of Mathematical System Theory, pages
179–218. Springer-Verlag, NY .
[Green and Limebeer, 1995] Green, M. and Limebeer, D. (1995). Linear Robust Con-
trol. Prentice Hall.
[Iglesias and Glover, 1991] Iglesias, P. and Glover, K. (1991). State-space approach to
discrete-timeH
1
control. International Journal of Control, 54(5):1031–73.
45
[Ionescu et al., 1999] Ionescu, V ., Oara, C., and Weiss, M. (1999). Generalized Riccati
theory and robust control: A Popov function approach. John Wiley.
[Karthikeyan and Safonov, 2009] Karthikeyan, A. and Safonov, M. G. (2009). LQ
feedback formulation for discrete-timeH
1
output feedback. In Proceedings of IEEE
Conference on Decision and Control, Shanghai, CN.
[Karthikeyan and Safonov, 2010] Karthikeyan, A. and Safonov, M. G. (2010). LQ
approach to an all-solutions formula forH
1
output feedback control. In Proceedings
of IEEE Conference on Decision and Control, Atlanta, GA.
[Karthikeyan and Safonov, 2012] Karthikeyan, A. and Safonov, M. G. (2012). A simple
unified formula for discrete and continuous-timeH
1
“all-solutions” controllers. In
Proceedings of 7
th
IFAC Symposium on Robust Control Design, Aalborg, Denmark.
[K.C.Goh and M.G.Safonov, 1993] K.C.Goh and M.G.Safonov (1993). H
1
con-
trol:inverse free formulation for D
11
6= 0 and eliminating pole-zero cancellations
via interpolation. In Proceedings of 32nd IEEE Conference on Decision and Control,
San Antonio, TX.
[Lancaster and Rodman, 1995] Lancaster, P. and Rodman, L. (1995). Algebraic Riccati
Equations. Oxford Science Publications.
[Limebeer et al., 1989] Limebeer, D., Green, M., and Walker, D. (1989). Discrete time
H
1
control. In Proceedings of Conference on Decision and Control, pages 392–396,
Tampa, FL.
[Limebeer et al., 1988] Limebeer, D. J., Kasenally, E. M., Jaimouka, I., and Safonov,
M. G. (1988). All solutions to the four block general distance problem. In Proceed-
ings of IEEE Conference on Decision and Control, Austin, TX.
[Mageirou and Ho, 1977] Mageirou, E. and Ho, Y . (1977). Decentralized stabilization
via game theoretic methods. Automatica, 13:393–399.
[Matlab, 2010a] Matlab (2001-2010a). lft - Generalized feedback interconnection of
two LTI models. In Control System Toolbox. MathWorks, Natick, MA.
[Matlab, 2010b] Matlab (2001-2010b). mixsyn - H
1
mixed-sensitivity synthesis
method for robust control. In Robust Control Toolbox. MathWorks, Natick, MA.
[Matlab, 2008] Matlab (2008). Control Systems Toolbox User’s Guide. MathWorks,
Natick, MA.
[Medanic, 1967] Medanic, J. (1967). Bounds on the performance index and the Ric-
cati equation in differential games. IEEE Transactions in Automatic Control, AC-
7(5):613–614.
46
[Petkov et al., 1999] Petkov, P., Gu, D., and Konstantinov, M. (1999). Fortran 77 rou-
tines forH
1
andH
2
design of discrete-time linear control systems. In NICONET.
WGS.
[Redheffer, 1960] Redheffer, A. (1960). On a certain linear fractional transformation.
Journal of Math. Physics, 39(7):269–286.
[Safonov et al., 1989] Safonov, M. G., Limebeer, D. J., and Chiang, R. Y . (1989). Sim-
plifying the H
1
theory via loop-shifting, matrix-pencil and descriptor concepts.
International Journal of Control, 50(6):2467–2488.
[Skogestad and Postlethwaite, 2005] Skogestad, S. and Postlethwaite, I. (2005). Multi-
variable Feedback Control, 2nd Edition. Wiley, NY .
[Stoorvogel et al., 1994] Stoorvogel, A., Saberi, A., and Chen, B. (1994). The discrete-
time H
1
control problem with measurement feedback. International Journal of
Robust and Nonlinear Control, 4:457–479.
[Zhou et al., 1995] Zhou, K., Doyle, J., and K.Glover (1995). Robust and Optimal Con-
trol. Prentice-Hall, Upper Saddle River, NJ.
47
Appendix A
Proofs
A.1 Proofs for LQ Feedback formulation theorems for
H
1
Full-Information, H
1
Full-Control and H
1
Output Feedback “All-solutions” cases
Theorem 4.1.1:H
1
Full-Information Feedback. [Karthikeyan and Safonov, 2010,
Karthikeyan and Safonov, 2012, Theorem 1,Theorem 11]
By Lemma 3.2.1, a controller K
FI
solves the standard H
1
problem for the full
information plant S(G
FI
) if (a) it stabilizes G
FI
and (b) it solves the H
1
control
problem for the corresponding squared down plant
S(
G
FI
) =
2
6
6
6
6
6
6
6
4
A B
1
S
1
u
1
B
2
S
u
2
F
2
S
u
2
S
u
21
S
1
u
1
S
u
2
2
4
I
F
1
3
5
2
4
0
S
1
u
1
3
5
2
4
0
0
3
5
3
7
7
7
7
7
7
7
5
where
A =A +B
1
F
1
.
Denote byX the closed-loop system
X
= lft(
G
FI
;K) (A.1.1)
48
so that
y
1
=X u
1
8 u
1
: (A.1.2)
From the first output equation ofS(
G
FI
), we have
y
1
=S
u
2
F
2
x +S
u
2
S
u
21
S
1
u
1
u
1
+S
u
2
u
2
8 u
1
;u
2
(A.1.3)
Substituting equation (A.1.2) into (A.1.3) we have
X u
1
=S
u
2
F
2
x +S
u
2
S
u
21
S
1
u
1
u
1
+S
u
2
u
2
8 u
1
;u
2
solving foru
2
in terms ofx; u
1
, we obtain theH
1
full-information control law (4.1.1).
u
2
= u
2
(S
u
21
S
u
2
1
XS
u
1
)(u
1
u
1
) (A.1.4)
= lft(K
FI
;X)
2
4
x
u
1
3
5
(A.1.5)
So, from Lemma 3.2.1 the result follows providedK stabilizesG
FI
.
The systemT
y
1
u
1
can be decomposed as
T
y
1
u
1
= lft(G; lft(K
FI
;X)) (A.1.6)
= lft(T;X) (A.1.7)
whereT
= lft(G;K
FI
). Now, by (3.1.19) we have
u
2
= u
2
S
u
21
(u
1
u
1
): (A.1.8)
49
Substituting (A.1.8) and (3.1.19) into (2.3.1), we find that for allu
1
; y
1
it holds that
2
4
y
1
u
1
3
5
=T
2
4
u
1
y
1
3
5
(A.1.9)
whereT has state-space representation
S(T ) =
2
6
6
6
6
4
A +B
2
(F
2
+S
u
21
F
1
) B
1
B
2
S
u
21
B
2
S
1
u
2
C
1
+D
12
(F
2
+S
u
21
F
1
) (IS
u
21
)D
11
D
12
S
1
u
2
S
u
1
F
1
S
u
1
0
3
7
7
7
7
5
(A.1.10)
for all x;u
2
;y
1
;y
2
; u
1
; u
2
satisfying the system equations (16)-(22) and (26). By
condition iii) of Theorem 4.1.1, T is stable and hence we have T
y
1
u
1
= lft(T;X) is
stable for the special caseX = 0. Further, from equation (3.1.18) the completion of the
squares Lemma 3.2.1, it holds for all
2
4
y
1
u
1
3
5
=T
2
4
u
1
y
1
3
5
that
2
4
y
1
u
1
3
5
=
2
4
u
1
y
1
3
5
: (A.1.11)
It follows that
2
4
1 0
0
3
5
T
2
4
1=
0
0 1
3
5
1
1; from which it follows by the small
gain stability theorem that T
y
1
u
1
= lft(T;X) is internally stable for all X 2 RH
1
satisfyingkXk
1
<
. 2
50
Corollary 4.2.1:H
1
Full-control feedback.
Applying Theorem 4.1.1 to the transpose ofT
y
1
u
1
and transposing back, the result fol-
lows immediately.
The matrix
h
H
1
H
2
i
in Corollary 4.2.1 is called anH
1
observer gain matrix. Con-
sider theH
1
observer squared down plant
~
G (3.2.6). Let
~
G have a stabilizing feedback
~
F that solves problem 7 forQ
c
;
~
N
c
= 0 and
~
R
c
=
2
4
2
I
m
2
m
2
0
0 0
3
5
2R
(m
2
+r
2
)(m
2
+r
2
)
: (A.1.12)
Also, letR
~ u
have a SchurJ-factorization (2.2.3) withS
~ u
1
;S
~ u
2
;S
~ u
21
given by equations
(3.1.22-2.2.7). Now, recall from “completion of squares” that the all-solutions con-
troller for the squared-down full-information plant is given by equation (4.1.1) where
X is any transfer function such that,kXk
1
<
. As theH
1
observer (4.3.1-4.3.5) has
e(k) = 08k whenx(0) = ^ x(0) = 0, we can replacex with ^ x and ~ u
1
with
^
~ u
1
. Thus, we
have the proof of our main result 5.1.1.
Theorem 5.1.1:H
1
“All-solutions” Controller Formula.
From Corollary 4.2.1, it follows that the necessary existence conditions for a solution
to Problem 1 are the existence of a stabilizing feedbackH andA + (H
2
+H
1
S
T
y
21
)C
2
being Hurwitz. When these conditions hold, it follows from Lemma 3.2.2 thatK solves
Problem 1 for the plantG if and only if it solves the problem for the squared-down plant
~
G. The results then follow directly from Lemma 4.3.1, and Theorem 4.1.1.
51
A.2 Proof for Matrix pencil “All-solutions” H
1
Con-
troller formula
We begin by assuming without loss of generality that E = I and D
11
=
0. For E 6= I it suffices to notice that our matrix pencil formulae in Theo-
rem 5.2.1 remain invariant under the change of variables [A;B
1
;B
2
;
12
;X
21
] !
[E
1
A;E
1
B
1
;E
1
B
2
;E
T
12
;E
T
X
21
]. [This change of variables corresponds to pre-
multiplying the first row and post-multiplying the first column of the pencil
^
M
12
by
E
1
and (E
1
)T respectively, and similarly pre-multiplying the second row and post-
multiplying the second column of the pencil
^
M
21
(s) byE
1
and (E
1
)
T
respectively].
For the caseD
11
6= 0, the result follows from [Safonov et al., 1989, Lemma 1], which
establishes thatK(s) solves the sub-optimal H
1
problem forG(s) if and only if it solves
the problem for the plant
^
G(s)
dss
=
2
6
6
6
6
4
Es +A
^
B
1
^
B
2
^
C
1
0
^
D
12
^
C
2
^
D
21
D
22
+D
21
^
D
11
D
12
3
7
7
7
7
5
which has a zeroD
11
-matrix.
Consider the formulae given under Theorem 5:1
0
and Theorem 5:2
0
of
[Limebeer et al., 1988] which hold under the assumptions A1 to A3 along with
D
22
= 0 andkD
11
k
2
<
. AdditionallyD
21
D
T
21
=I
m
2
andD
12
T
D
12
=I
r
2
.
According to [Limebeer et al., 1988, Theorem 5.2’] all internally stabilizing controllers
satisfyingkF
l
(P;K)k
1
are given by
52
K(s)
dss
=
2
6
6
6
6
4
sE
k
+A
k
B
k1
B
k2
C
k1
D
k11
D
k12
C
k2
D
k21
D
k22
3
7
7
7
7
5
(A.2.1)
Where:
E
k
= Y
11
T
X
11
2
Y
12
T
X
12
B
k1
= (
2
Y
11
T
B
1
+Y
12
T
C
1
T
D
11
+Y
12
T
C
2
T
D
21
(
2
ID
11
T
D
11
))
(
2
I
~
D
T
?
~
D
?
D
11
T
D
11
)
1
D
21
T
C
k1
= D
12
T
(
2
ID
11
D
11
T
D
?
D
?
T
)
1
(
2
C
1
X
11
+D
11
B
1
T
X
12
+ (
2
ID
11
D
11
T
)D
12
B
2
T
X
12
)
D
k
12
= (ID
12
T
D
11
(
2
ID
11
T
D
?
D
?
T
D
11
)
1
D
11
T
D
12
)
1=2
D
k
21
= (ID
21
D
11
T
(
2
ID
11
~
D
T
?
~
D
?
D
11
)
1
D
11
D
21
T
)
1=2
D
k
22
= (D
k21
1
)
T
D
21
D
11
T
(
2
ID
11
~
D
T
?
~
D
?
D
11
T
)
1
D
12
D
k12
A
k
= E
k
T
x
+B
k1
D
k21
1
C
k2
B
k2
= (Y
11
T
B
2
+ (Y
11
T
B
1
~
D
T
?
~
D
?
D
T
11
+Y
12
T
(C
1
T
C
2
T
D
21
D
11
T
))
(
2
I
~
D
T
?
~
D
?
D
11
T
D
11
)
1
D
12
)D
k12
C
k2
= D
k21
(C
2
X
11
+D
21
(
2
ID
11
T
D
?
D
?
T
D
11
)
1
(D
11
T
D
?
D
?
T
C
1
X
11
+B
1
T
X
12
D
11
T
D
12
B
2
T
X
12
))
53
Remark 6. It is known that [Benner et al., 2007, K.C.Goh and M.G.Safonov, 1993,
Gahinet and Pandey, 1991] that
h
X
11
; X
12
i
=
h
X
12
;
12
i
; (A.2.2)
h
Y
11
; Y
12
i
=
h
X
21
;
21
i
: (A.2.3)
Using this and the fact the
12
;
21
span the respective right eigenspaces of the pen-
cilsM
12
(s);M
21
(s), we may simplify each of the terms of the controller and arrive at
our modified formulae. Consider first theB
k1
term.
$$
\begin{aligned}
B_{k1} &= \big(\gamma^{2} Y_{11}^T B_1 + Y_{12}^T C_1^T D_{11} + Y_{12}^T C_2^T D_{21}(\gamma^{2} I - D_{11}^T D_{11})\big)
          \big(\gamma^{2} I - \tilde{D}_{\perp}^T \tilde{D}_{\perp} D_{11}^T D_{11}\big)^{-1} D_{21}^T\\
&= \big(\gamma^{2} X_{21}^T B_1 + \Pi_{21}^T C_1^T D_{11} + \Pi_{21}^T C_2^T D_{21}(\gamma^{2} I - D_{11}^T D_{11})\big)
   \big(\gamma^{2} I - \tilde{D}_{\perp}^T \tilde{D}_{\perp} D_{11}^T D_{11}\big)^{-1} D_{21}^T\\
&= \big(\gamma^{2} X_{21}^T \hat{B}_1 + \Pi_{21}^T \hat{C}_2^T \hat{D}_{21}\,\gamma^{2}\big)\,\gamma^{-2}\,\hat{D}_{21}^T\\
&= X_{21}^T \hat{B}_1 \hat{D}_{21}^T + \Pi_{21}^T \hat{C}_2^T\\
&= X_{21}^T \hat{B}_1 \hat{D}_{21}^T - \big(\hat{D}_{21}\hat{B}_1^T X_{21} + \hat{D}_{21}\hat{D}_{21}^T U_{21}\big)^T\\
&= X_{21}^T \hat{B}_1 \hat{D}_{21}^T - X_{21}^T \hat{B}_1 \hat{D}_{21}^T - U_{21}^T \hat{D}_{21}\hat{D}_{21}^T\\
&= -U_{21}^T\\
&= \hat{U}_{21}^T\\
&= \hat{\Pi}_{21}^T\,\begin{bmatrix} 0 & 0 & 0 & I_{m_2} & 0\end{bmatrix}^T
\end{aligned}
$$
The results for $B_{k2}$, $C_{k1}$, $C_{k2}$ and the remaining $D_k$ terms can be derived in a similar fashion.
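Numerically, bases of the form $[X_{12};\,\Pi_{12}]$ or $[X_{21};\,\Pi_{21}]$ correspond to stable right deflating subspaces of the associated pencils, and such a basis can be extracted with an ordered QZ factorization. A hedged sketch; the pencil below is a random placeholder and the block partition mentioned in the final comment is only an assumption about how such a basis would be split:

    import numpy as np
    from scipy.linalg import ordqz

    rng = np.random.default_rng(1)
    k = 4
    M, N = rng.standard_normal((k, k)), np.eye(k)   # placeholder pencil s*N - M

    # Reorder the generalized Schur form so left-half-plane eigenvalues lead.
    AA, BB, alpha, beta, Q, Z = ordqz(M, N, sort='lhp', output='real')
    n_stable = int(np.sum((alpha / beta).real < 0))
    basis = Z[:, :n_stable]   # columns span the stable right deflating subspace
    # A basis like this would then be partitioned into blocks, e.g. [X12; Pi12].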
It remains now to establish the equivalence of our formula for $-sE_k + A_k$ with that of [Limebeer et al., 1988]. Clearly,
$$
E_k = Y_{11}^T X_{11} - \gamma^{-2} Y_{12}^T X_{12}
    = X_{21}^T X_{12} - \gamma^{-2}\,\Pi_{21}^T \Pi_{12}
    = \begin{bmatrix} X_{21}\\ \Pi_{21}\end{bmatrix}^T
      \begin{bmatrix} I_n & 0\\ 0 & -\gamma^{-2} I_n\end{bmatrix}
      \begin{bmatrix} X_{12}\\ \Pi_{12}\end{bmatrix}.
$$
And,
$$
A_k = \begin{bmatrix} X_{21}\\ \Pi_{21}\end{bmatrix}^T
      \begin{bmatrix} I_n & 0\\ 0 & -\gamma^{-2} I_n\end{bmatrix}
      \begin{bmatrix} X_{12}\\ \Pi_{12}\end{bmatrix} T_x
      + B_{k1} D_{k21}^{-1} C_{k2}.
$$
From [Limebeer et al., 1988, Theorem 5.1], we have
$$
\begin{bmatrix} X_{11}\\ X_{12}\end{bmatrix} T_x = H_\infty \begin{bmatrix} X_{11}\\ X_{12}\end{bmatrix}.
$$
Therefore,
$$
A_k = \begin{bmatrix} X_{21}\\ \Pi_{21}\end{bmatrix}^T
      \begin{bmatrix} I_n & 0\\ 0 & -\gamma^{-2} I_n\end{bmatrix}
      H_\infty
      \begin{bmatrix} X_{12}\\ \Pi_{12}\end{bmatrix}
      + B_{k1} D_{k21}^{-1} C_{k2}.
$$
We now apply the loop shift and set $\hat{D}_{11} = 0$. The Hamiltonian matrix can then be simplified as follows:
$$
H_\infty = \begin{bmatrix} H_{\infty 11} & H_{\infty 12}\\ H_{\infty 21} & H_{\infty 22}\end{bmatrix}
         = \begin{bmatrix}
           \hat{A} - \hat{B}_2\hat{D}_{12}^T\hat{C}_1 & \gamma^{-2}\hat{B}_1\hat{B}_1^T - \hat{B}_2\hat{B}_2^T\\
           -\hat{C}_1^T(I - \hat{D}_{12}\hat{D}_{12}^T)\hat{C}_1 & -\hat{A}^T + \hat{C}_1^T\hat{D}_{12}\hat{B}_2^T
           \end{bmatrix}
$$
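As an aside, the basis $[X_{11};\,X_{12}]$ associated with a Hamiltonian matrix of this form can be obtained from an ordered real Schur decomposition. A minimal sketch with random placeholder data; the comments indicate intended roles only, not the exact expressions used in this thesis:

    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(2)
    n = 3
    A  = rng.standard_normal((n, n)) - 2.0 * np.eye(n)
    R  = rng.standard_normal((n, n)); R = R @ R.T      # plays the role of the (1,2) block
    Qm = rng.standard_normal((n, n)); Qm = Qm @ Qm.T   # plays the role of the (2,1) block
    H  = np.block([[A, R], [-Qm, -A.T]])               # Hamiltonian-structured matrix

    T, U, sdim = schur(H, sort='lhp')                  # stable eigenvalues ordered first
    V = U[:, :sdim]                                    # columns span the stable invariant subspace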
Therefore, combining the expression for $A_k$ above with this Hamiltonian,
$$
-sE_k + A_k = \begin{bmatrix} X_{21}\\ \Pi_{21}\end{bmatrix}^T
  \begin{bmatrix} -sI + H_{\infty 11} & H_{\infty 12}\\ -\gamma^{-2}H_{\infty 21} & \gamma^{-2}sI - \gamma^{-2}H_{\infty 22}\end{bmatrix}
  \begin{bmatrix} X_{12}\\ \Pi_{12}\end{bmatrix}
  + B_{k1} D_{k21}^{-1} C_{k2}.
$$
Simplifying using the pencils $M_{12}$ and $M_{21}$ with $\hat{D}_{11} = 0$,
$$
\begin{aligned}
-sE_k + A_k
&= \begin{bmatrix} \Pi_{21}^T & X_{21}^T & V_{21}^T & U_{21}^T \end{bmatrix}
\begin{bmatrix}
\gamma^{-2}\big[sI + \hat{A}^T - \hat{C}_1^T\hat{D}_{12}\hat{B}_2^T\big] & \gamma^{-2}\hat{C}_1^T(I - \hat{D}_{12}\hat{D}_{12}^T)\hat{C}_1 & 0 & 0\\
\gamma^{-2}\hat{B}_1\hat{B}_1^T - \hat{B}_2\hat{B}_2^T & -sI + \hat{A} - \hat{B}_2\hat{D}_{12}^T\hat{C}_1 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & \hat{C}_2 & \hat{D}_{21} & \hat{D}_{22}
\end{bmatrix}
\begin{bmatrix} \Pi_{12}^T & X_{12}^T & V_{12}^T & U_{12}^T \end{bmatrix}^T\\[4pt]
&= \Pi_{21}^T
\begin{bmatrix}
\gamma^{-2}\big[sI + \hat{A}^T - \hat{C}_1^T\hat{D}_{12}\hat{B}_2^T\big] & \gamma^{-2}\hat{C}_1^T(I - \hat{D}_{12}\hat{D}_{12}^T)\hat{C}_1 & 0 & 0\\
-\hat{B}_2\hat{B}_2^T & -sI + \hat{A} - \hat{B}_2\hat{D}_{12}^T\hat{C}_1 & \hat{B}_1 & 0\\
0 & \hat{C}_1 & 0 & 0\\
0 & \hat{C}_2 & \hat{D}_{21} & \hat{D}_{22}
\end{bmatrix}
\Pi_{12}\\[4pt]
&= \Pi_{21}^T
\begin{bmatrix}
\gamma^{-2}\big[sI + \hat{A}^T - \hat{C}_1^T\hat{D}_{12}\hat{B}_2^T\big] & \gamma^{-2}\hat{C}_1^T(I - \hat{D}_{12}\hat{D}_{12}^T)\hat{C}_1 & 0 & 0\\
0 & -sI + \hat{A} & \hat{B}_1 & \hat{B}_2\\
0 & \hat{C}_1 & 0 & 0\\
0 & \hat{C}_2 & \hat{D}_{21} & \hat{D}_{22}
\end{bmatrix}
\Pi_{12}\\[4pt]
&= \Pi_{21}^T
\begin{bmatrix}
\gamma^{-2}\big[sI + \hat{A}^T - \hat{C}_1^T\hat{D}_{12}\hat{B}_2^T\big] & 0 & 0 & 0\\
0 & -sI + \hat{A} & \hat{B}_1 & \hat{B}_2\\
0 & (I - \hat{D}_{12}\hat{D}_{12}^T)\hat{C}_1 & 0 & 0\\
0 & \hat{C}_2 & \hat{D}_{21} & \hat{D}_{22}
\end{bmatrix}
\Pi_{12}\\[4pt]
&= \Pi_{21}^T
\begin{bmatrix}
\gamma^{-2}(sI + \hat{A}^T) & 0 & 0 & 0\\
0 & -sI + \hat{A} & \hat{B}_1 & \hat{B}_2\\
0 & \hat{C}_1 & 0 & \hat{D}_{12}\\
0 & \hat{C}_2 & \hat{D}_{21} & \hat{D}_{22}
\end{bmatrix}
\Pi_{12}.
\end{aligned}
$$
Using the extended matrix pencil framework and expressing the result in terms of the original state-space matrices,
$$
-sE_k + A_k = \hat{\Pi}_{21}^T
\begin{bmatrix}
\frac{1}{\gamma^{2}}(sI + A^T) & 0 & 0 & 0 & 0\\
0 & -sI + A & B_1 & B_2 & 0\\
0 & C_1 & D_{11} & D_{12} & 0\\
0 & C_2 & D_{21} & D_{22} & 0\\
0 & 0 & 0 & 0 & \frac{1}{\gamma^{2}} D_{11}^T
\end{bmatrix}
\hat{\Pi}_{12}.
$$
Therefore, $[-sE_k + A_k] = \hat{\Pi}_{21}^T\,\hat{M}(s)\,\hat{\Pi}_{12}$. Q.E.D.
Abstract
In this thesis we present a simple, unified formula for discrete and continuous-time H∞ "all-solutions" controllers. By observing a "cost" equivalence between the standard H∞ control problem and a certain LQ optimal regulator problem, an elegant controller structure reminiscent of an LQG optimal controller is developed. Our choice of notation also simplifies the derivation and existence conditions considerably, whereby all unnecessary assumptions on plant state-space matrices and "loop-shifting" transformations are eliminated. Additionally, with our focus entirely on input-output weighting "cost functions", this derivation offers a "behavioral" theory interpretation for all solutions of a standard H∞ control problem.

In this thesis we also present a simplified matrix pencil formula for solving the H∞ control problem for the case where the maximum singular value of D₁₁ is less than γ. This formula is useful in developing a more numerically robust algorithm in H∞ control. A significant feature of this formula is that each element of the pencils is expressed directly in terms of the original descriptor-form state-space matrices of the plant, and the even-pencil eigenspaces are computed using a numerically robust even-pencil algorithm. There are no data-corrupting numerical operations required to form any of the matrices that appear in our "all-solutions" controller formula.