A REVISED COMPUTATIONAL PROCEDURE FOR CALCULATING ZAMES-FALB MULTIPLIERS

by

Michael Weikai Chang

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)

December 2010

Copyright 2010 Michael Weikai Chang

Epigraph

"For those who harm you, do not hate them nor seek revenge. For those who help you, repay them two-fold."

Dedication

To
I Chung Chang
Song-Fung Chou Chang
Tung Sheng Chen
Kwei-Van Hong Chen
Chu-Sheng Lin
Tie Ji Lin
Wan-An Kao
Hsiu-Ying Hsu Kao
Edward S. Chang
Jenny C. Chang
Stanley Lin
Rachel Lin
Eve Chang

Acknowledgments

My deepest thanks go to my family for their unyielding support and infinite love: my father, Doctor Edward S. Chang; my mother, Mrs. Jenny C. Chang; my father-in-law, Doctor Stanley Lin; my mother-in-law, Mrs. Rachel Lin; and my wife, Mrs. Eve Chang.

I also greatly appreciate the guidance provided to me, in regard to research as well as life, by my advisor, Professor Michael Safonov. It has truly been a pleasure to work under him, and I have learned greatly under such a giant of science.

I also thank the additional members of my guidance committee: Professor Edmond Jonckheere, Professor Petros Ioannou, Professor Kenneth Alexander, and Professor Jerry Mendel. You have taught me so much about science and what it means to be a scientist.

I also thank my other mentors, Professor Ricardo Mancera and Professor Huiyu Jin, for their contributions to and inspiration for my work not limited to the area of research covered by this thesis.
Table of Contents

Epigraph
Dedication
Acknowledgments
List of Tables
List of Figures
Abbreviations
Abstract
Preface
Chapter 1: Introduction
Chapter 2: Preliminaries
  2.1 Notation
  2.2 Problem Formulation
    2.2.1 Definitions
    2.2.2 Monotone Nonlinearities
    2.2.3 Problem Statement
  2.3 Stability Criteria
    2.3.1 The Popov Stability Criterion
    2.3.2 The Circle Stability Criterion (Non-Incremental Version)
    2.3.3 The Circle Stability Criterion (Incremental Version)
    2.3.4 The Off-Axis Circle Stability Criterion
    2.3.5 A Generalized Multiplier Set (Zames-Falb Multipliers)
Chapter 3: Calculating Zames-Falb Multipliers
  3.1 A High Dimensional Linear Program
  3.2 A Low Dimensional Linear Program
    3.2.1 Background of Gapski and Geromel's Algorithm (Algorithm 1)
    3.2.2 Gapski and Geromel's Algorithm (Algorithm 1)
    3.2.3 Subalgorithm 1 of Algorithm 1
Chapter 4: Subalgorithm 2
  4.1 The Simplex Method
    4.1.1 Overview of the Simplex Method
    4.1.2 Application of the Simplex Method
  4.2 Statement of Subalgorithm 2
  4.3 Theorems About Subalgorithms 1 and 2
    4.3.1 The General Case
    4.3.2 The Case of a Single Frequency
  4.4 Remarks About Subalgorithm 2
    4.4.1 Sector Maximum
    4.4.2 A Practical Choice of Time
    4.4.3 Degeneracy
    4.4.4 Starting at the Origin versus at the Previous Optimum
  4.5 Example
Chapter 5: A Generalized Basis Set
  5.1 Background
  5.2 The Linear Program
  5.3 Application of Subalgorithm 2
  5.4 Remarks
    5.4.1 The mod Function
    5.4.2 Usage of the Maximum Index
    5.4.3 Advantages in Computational Complexity
  5.5 Example
Chapter 6: Conclusion
Bibliography

List of Tables

2.1 Stability criteria.
4.1 Comparison of starting at the origin versus starting at the previous optimum for each iteration of Algorithm 1.
4.2 Data for the example G(s).
4.3 Programming parameters for the example G(s).
5.1 Summary of the discussion on the background of the generalized basis set.
5.2 Comparison of the computational complexity of three common algorithms for solving LPs with n variables that can be encoded in b input bits.
5.3 Data for the example G(s) using Subalgorithm 2-gen.
5.4 Programming parameters for the example G(s) using the generalized basis set.

List of Figures

1 Repayment.
2.1 System with monotone nonlinear feedback.
2.2 φ(y) is strictly inside sector [α, β].
2.3 φ(y) is incrementally strictly inside sector [α, β].
2.4 φ(y) is strictly inside [0, ∞] but not incrementally strictly inside [0, ∞].
2.5 φ(y) is incrementally strictly inside [0, ∞] but not strictly inside [0, ∞].
2.6 Operational amplifier.
2.7 Input-output graph of an operational amplifier.
2.8 Diode.
2.9 Input-output graph of a diode.
2.10 Motor.
2.11 Input-output graph of a motor.
2.12 The Popov stability criterion.
2.13 The circle stability criterion (both non-incremental and incremental versions).
2.14 The off-axis circle stability criterion.
3.1 Gapski and Geromel's Algorithm (Algorithm 1).
3.2 A typical progression of Algorithm 1 up to N = 3.
3.3 Step 2 of Algorithm 1.
3.4 A graphical representation of Step 2 of Algorithm 1.
3.5 Subalgorithm 1 of Algorithm 1.
4.1 η*_N for the example G(s).
4.2 Relative cost for the example G(s).
4.3 Nyquist plot of M_N(jω)G(jω) for the example G(s) with the original Subalgorithm 1.
4.4 Nyquist plot of M_N(jω)G(jω) for the example G(s) with the original Subalgorithm 1 (zoomed in).
4.5 Nyquist plot of M_N(jω)G(jω) for the example G(s) with the new Subalgorithm 2.
4.6 Nyquist plot of M_N(jω)G(jω) for the example G(s) with the new Subalgorithm 2 (zoomed in).
5.1 Nyquist plot for the example G(s) with the generalized basis set.
5.2 η*_N for the example G(s) with the generalized basis set.
5.3 Nyquist plot of M_N(jω)G(jω) for the example G(s) with the generalized basis set.
5.4 Nyquist plot of M_N(jω)G(jω) for the example G(s) with the generalized basis set (zoomed in).

Abbreviations

LP = linear program
LTI = linear time-invariant
MIMO = multi-input/multi-output
SISL = stable in the sense of Lyapunov
SISO = single-input/single-output

Abstract

The search for Zames-Falb multipliers is a method by which the stability of a system with a monotone nonlinear feedback can be determined. An initial algorithm for computing these multipliers proposed by Safonov and Wyetzner uses a large set of basis elements and is thus computationally intensive. Later, Gapski and Geromel proposed a new algorithm which treats the problem as an optimization problem and uses a small set of basis elements. However, it is found that occasionally their algorithm terminates prematurely and fails to find the optimal multiplier. This thesis proposes an improvement that always finds an ascent direction and a multiplier that improves the cost function whenever one exists. Application of the new procedure for calculating Zames-Falb multipliers is further extended from the case of a basis set comprised of impulses to the generalized basis set proposed by Chen and Wen comprised of decaying exponentials.

Preface

After moving to Los Angeles in 2004, I was delighted to find the open arms and support of countless relatives, friends, classmates, and professors. The University of Southern California, located in Los Angeles, CA, USA, has proven to be a nourishing incubator of my development as a scientist.

Analogous to the problem to be addressed in this thesis, the object of every scientific research effort is to find the optimal answer within the N dimensions of what is currently knowable. Occasionally, however, science reaches a standstill when it is realized that no further improvement can be made within these N dimensions.
It is only when a new (N + 1)-st dimension is discovered, and research begins again from the origin, that science takes a major leap forward. My development as a scientist, and as a human, blossomed into countless new dimensions with the guidance of those whom I met at USC. It is a wonderful thing that to learn is to open the world.

"Man's mind, once stretched by a new idea, never regains its original dimensions." - Oliver Wendell Holmes

Of these former strangers at USC, the one who has brought me the most joy in life is the one who decided to marry me. In her eyes, I see our future children; her mind, our future goals; and her heart, our future spirit. It is a curious thing that when a heart is lost, the mind will think of past regrets; but when the heart has found true love, the mind will think of all the joy to come.

"I would rather have eyes that cannot see; ears that cannot hear; lips that cannot speak, than a heart that cannot love." - Robert Tizon

Not a stranger in the least, my father has been an inspiration throughout my life. Without him, I would know nothing about the world or about myself. In easy times and in trying times, he has always been a rock. He has taught me that all problems in life are solvable when a goal is set and we work to achieve it. It is a sure thing that obstacles, conflicts, and self-doubt appear distant when looking back at them from the destination.

"The world stands aside to let anyone pass who knows where he is going." - David Starr Jordan

No tougher a foe nor gentler a friend, my mother nurtured my confidence when it was low and tempered my ego when it was high. My image of myself comes from her image of me. When I grew slowly - physically, mentally, and socially, behind all the other kids in school - she reminded me that bitter beginnings make tough people and sweet endings. It is an awesome thing that the flowers that are the most beautiful are those that bloom late.

"Just remember in the winter, far beneath the bitter snows, lies the seed that with the sun's love, in the spring becomes the rose." - "The Rose," by Bette Midler

The family that I have joined through my wife is far from me in distance but close to me in my heart. My parents-in-law have welcomed me with open arms since I first met them. They have shown by example that castles can be built with little more than hard work and sheer force of will. It is a just thing that we all write our own histories.

"It matters not how strait the gate, How charged with punishments the scroll, I am the master of my fate: I am the captain of my soul." - "Invictus," by William Ernest Henley

My advisor, whose path I happened to cross, is a mentor without equal. As a thinker and a scholar, he is a king; but as a teacher and a guide, he is not haughty. It is a funny thing how a person so smart can teach a person so dumb.

"Genius is eternal patience." - Michelangelo

During my journey to Los Angeles and my journey through USC, so much has happened that I realize there is so much more to come. Along the way, there has been, and there will be, those who drag you down and those who lift you up. It is a wise thing to forgive those who have wronged you and to focus on becoming successful, at which point, only scores in your favor should be tabulated.

"For those who harm you, do not hate them nor seek revenge. For those who help you, repay them two-fold."

[Figure 1: Repayment.]

Chapter 1
Introduction

Graphical criteria for determining the absolute stability of a closed-loop system were generalized by Zames and Falb [ZF68] into a class of multipliers. Safonov and Wyetzner [SW87] treated the problem of finding the optimal multiplier as a large scale linear program (LP). Both they and Chen and Wen [CW95] used a large predetermined set of basis elements, thus making the problem computationally intensive.
Gapski and Geromel [GG94] insightfully observed that the optimal multiplier problem is a convex optimization problem. They devised a remarkably efficient algorithm which starts with a small basis that expands until the solution is reached. However, at certain iterations the algorithm occasionally selects a new basis element that causes no improvement in the objective function when selecting another one would have, thus terminating prematurely without finding the optimal multiplier. As such, this thesis proposes a new subalgorithm in which the objective function is improved at each iteration whenever possible. Application of this subalgorithm is further extended from the case of a basis set comprised of impulses used by Gapski and Geromel [GG94] to the generalized basis set comprised of decaying exponentials proposed by Chen and Wen [CW95].

In the next chapter, notation is defined, the problem of stability is formulated, and some stability criteria are given. Then the issue of calculating Zames-Falb multipliers is addressed, and Gapski and Geromel's algorithm is discussed. Next, the main result of this thesis, a new basis element selection subalgorithm, is proposed, along with related theorems and an example. Then the new subalgorithm is applied to the case of a generalized basis set of decaying exponentials, and an example is included. Conclusions follow.
Chapter 2
Preliminaries

2.1 Notation

N = {1, 2, ...}
N₊ = {0, 1, 2, ...}
N_N = {1, 2, ..., N}
Z = {..., −2, −1, 0, 1, 2, ...}
R = the real numbers
R^N = {x : x_i ∈ R, ∀i ∈ N_N}
Dom(H) = the domain of system H
Ran(H) = the range of system H
0 = [0 ··· 0]′
0_k = [0 ··· 0]′ ∈ R^k
I = the identity matrix
I_k = the k×k identity matrix
⟨u, v⟩ = u′v for u, v ∈ R^N
∇g = the gradient of the function g
δ(t) = the unit impulse function
Re{x} = the real part of x
Im{x} = the imaginary part of x
|x| = the magnitude of x
x mod y = the remainder after x is divided by y
x m̄od y = ((x − 1) mod y) + 1
v_{i:j} = elements i through j of vector v
S₁ \ S₂ = {x : x ∈ set S₁, x ∉ set S₂}
⇒ = "implies"

2.2 Problem Formulation

Consider the LTI system described by the state space equations

  ẋ = Ax + Bu
  y = Cx        (2.1)
  u = φ(y)

where x ∈ R^n, y ∈ R, u ∈ R, and G(s) = C(sI − A)⁻¹B. (cf. Figure 2.1.)

Remark 2.2 In this thesis, we deal only with the SISO case. □

[Figure 2.1: System with monotone nonlinear feedback.]

2.2.1 Definitions

Definition 2.3 (L₂-norm) Within the set of functions y(t) defined on the interval t ∈ [0, ∞), the L₂-norm is defined as

  ‖y‖ = ( ∫₀^∞ y²(t) dt )^{1/2}. □

Definition 2.4 (L₂ₑ-norm) Within the set of functions y(t) defined on the interval t ∈ [0, ∞), the L₂ₑ-norm is defined as

  ‖y‖_τ = ( ∫₀^τ y²(t) dt )^{1/2}. □

Definition 2.5 (Euclidean norm) The Euclidean norm of the vector x = [x₁ ··· x_N]′ is defined as

  |x| = ( Σᵢ₌₁^N xᵢ² )^{1/2}. □

Definition 2.6 (L₂-space) Within the set of functions y(t) defined on the interval t ∈ [0, ∞), the L₂-space is defined as

  L₂ = {y(t) : ‖y‖ < ∞}. □

Definition 2.7 (L₂ₑ-space) Within the set of functions y(t) defined on the interval t ∈ [0, ∞), the L₂ₑ-space is defined as

  L₂ₑ = {y(t) : ‖y‖_τ < ∞, ∀ 0 ≤ τ < ∞}. □

Remark 2.8 Note that L₂ ⊂ L₂ₑ. □

Definition 2.9 (SISL) A system H with state vector x is stable in the sense of Lyapunov (SISL) if ∀k₁ > 0, ∃k₂ > 0 such that

  |x(0)| < k₂ ⇒ sup_t |x(t)| < k₁,

where |·| denotes the Euclidean norm. □
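The L₂- and L₂ₑ-norms above are easy to illustrate numerically. The following is a minimal sketch in Python with NumPy (an assumption of this example; the thesis itself reports MATLAB experiments), using the illustrative signal y(t) = e^{−t}, whose L₂-norm is exactly 1/√2; the truncated norm ‖y‖_τ approaches this value as τ grows.

```python
import numpy as np

def l2e_norm(y, t):
    """Truncated norm ||y||_tau = sqrt(integral_0^tau y(t)^2 dt), trapezoid rule on the grid."""
    integrand = y ** 2
    return np.sqrt(np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(t)))

t = np.linspace(0.0, 30.0, 200_000)   # tau = 30 already approximates tau -> infinity here
y = np.exp(-t)                        # illustrative signal y(t) = e^{-t}

norm_tau = l2e_norm(y, t)
exact = 1.0 / np.sqrt(2.0)            # closed-form L2-norm of e^{-t}
print(norm_tau, exact)
```

Computing ‖y‖_τ for increasing τ also illustrates Remark 2.8: every y ∈ L₂ has finite truncated norms for all τ, hence lies in L₂ₑ.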
Definition 2.10 (L₂ₑ-finite gain stability) A system H : x → y is L₂ₑ-finite gain stable if ∀x ∈ L₂ₑ, ∃K < ∞ such that

  ‖y‖_τ ≤ K‖x‖_τ, ∀τ > 0. □

Definition 2.11 (L₂ₑ-continuous stability) A system H : x_i → y_i is L₂ₑ-continuous stable if ∀x₁, x₂ ∈ L₂ₑ such that x₁ ≠ x₂, ∃K < ∞ such that

  ‖y₁ − y₂‖_τ ≤ K‖x₁ − x₂‖_τ, ∀τ > 0. □

Remark 2.12 For convenience, unless otherwise indicated, we make the following simplifications throughout this thesis.
1. The term stable will refer to L₂ₑ-finite gain stability.
2. The term incrementally stable will refer to L₂ₑ-continuous stability. □

Definition 2.13 (Hurwitz) [Kha02] A transfer function G(s) is said to be Hurwitz if each of its poles has a negative real part. □

Definition 2.14 (Strictly inside a sector) [Zam66a] A function φ(y) is said to be strictly inside sector [α, β] if ∃δ > 0 such that ∀y we have

  (φ(y) − ay)(φ(y) − by) ≤ 0,

where a = α + δ and b = β − δ. (cf. Figure 2.2.) □

[Figure 2.2: φ(y) is strictly inside sector [α, β].]

Definition 2.15 (Incrementally strictly inside) [Zam66a] A function φ(y) is said to be incrementally strictly inside sector [α, β] if ∃δ > 0 such that ∀y₁ ≠ y₂ we have

  a ≤ (φ(y₁) − φ(y₂)) / (y₁ − y₂) ≤ b,

where a = α + δ and b = β − δ. (cf. Figure 2.3.) □

[Figure 2.3: φ(y) is incrementally strictly inside sector [α, β].]

Remark 2.16 To be strictly inside a sector and to be incrementally strictly inside a sector are not equivalent. For example, the graph in Figure 2.4 is strictly inside [0, ∞] but not incrementally strictly inside [0, ∞], as demonstrated by the negative slope of the curve in the region highlighted by the circle. On the other hand, the graph in Figure 2.5 is incrementally strictly inside [0, ∞] but not strictly inside [0, ∞], as demonstrated by the part of the curve in the second quadrant highlighted by the circle. □

[Figure 2.4: φ(y) is strictly inside [0, ∞] but not incrementally strictly inside [0, ∞].]
[Figure 2.5: φ(y) is incrementally strictly inside [0, ∞] but not strictly inside [0, ∞].]

The loop shifting transformation [DV75, SW87]

  [ỹ; ũ] = [1, −1/(β − α); −α, 1] [y; u]

can be used to transform the nonlinearity φ(y) from incrementally strictly inside the sector [α, β] to incrementally strictly inside the sector [0, ∞].
The transformed G̃(s) is

  G̃(s) = G(s) / (1 − αG(s)) − 1/(β − α).

The inverse transformation exists, and

  G(s) = (G̃(s) + 1/(β − α)) / (1 + α(G̃(s) + 1/(β − α))).

The transformation does not affect the stability properties of the system. As such, without loss of generality, this thesis simply considers φ(y) incrementally strictly inside the sector [0, ∞]. This is equivalent to having φ(y) be a monotone nonlinearity.

2.2.2 Monotone Nonlinearities

The motivation and usefulness of the theory presented in this thesis are that monotone nonlinearities arise in a number of engineering applications. We present three examples in this section.

Example 2.17 (Operational amplifier) Consider the operational amplifier in Figure 2.6 whose input-output graph is shown in Figure 2.7. The equations governing this device are

  v_in = v_in+ − v_in−
  v_out = v_out+ − v_out−
  A = gain
  v_CC = supply voltage.

Here, the nonlinearity occurs in the saturation region where the output voltage is saturated at ±v_CC. □

[Figure 2.6: Operational amplifier.]
[Figure 2.7: Input-output graph of an operational amplifier.]

Example 2.18 (Diode) Consider the diode in Figure 2.8 whose input-output graph is shown in Figure 2.9. A small forward voltage input results in a large forward current output, but it takes a large reverse voltage input to cause a significant reverse current output. □

[Figure 2.8: Diode.]
[Figure 2.9: Input-output graph of a diode.]

Example 2.19 (Motor) Consider the motor in Figure 2.10 whose input-output graph is shown in Figure 2.11. Here, the nonlinear input-output relation is

  RPM = k(iv)^{1/3},

where RPM is the spin rate of the motor in revolutions per minute, k is a constant, i is the input current, v = v₊ − v₋ is the input voltage, and iv is the power. □

[Figure 2.10: Motor.]
[Figure 2.11: Input-output graph of a motor.]

2.2.3 Problem Statement

Throughout this thesis, unless otherwise stated, the following assumptions are made:

Assumption 2.20 G(s) is Hurwitz. □

Assumption 2.21 φ(y) is an arbitrary monotone nonlinearity, i.e. φ(y) is incrementally strictly inside the sector [0, ∞], and φ(0) = 0. □

Assumption 2.22 φ(y) is time-invariant. □

The main problem to be considered in this thesis is stated as follows.

Problem Statement 2.23 Find conditions on G(s) such that the closed-loop system of Figure 2.1 is stable. □

This problem has been the subject of much research, focusing on both the time domain as well as the frequency domain. Some well-established results of frequency domain stability criteria are provided in the next section. Later, we shall see that these criteria are special cases of a certain class of multipliers that guarantee stability.

2.3 Stability Criteria

In this section, we consider the system of Figure 2.1. Table 2.1 gives a summary of the stability criteria discussed in this section. Throughout this section, the term stable, when applied to a specific stability criterion and when no confusion will arise, will refer to the stability listed in Table 2.1 for that criterion.

Table 2.1: Stability criteria.

  Criterion               | Nonlinearity bound [0, ∞]                 | Stability proven  | Equivalent Zames-Falb multiplier | References
  Popov                   | strictly inside                           | SISL              | 1 + jωq                          | [Pop62]
  Circle: Non-Incremental | strictly inside                           | L₂ₑ-finite gain   | 1                                | [San64], [Zam66b]
  Circle: Incremental     | incrementally strictly inside, φ(0) = 0   | L₂ₑ-continuous    | 1                                | [Zam64], [Zam66b]
  Off-Axis Circle         | incrementally strictly inside, φ(0) = 0   | SISL              | e^{jθ}                           | [CN68]
  Zames-Falb Multipliers  | incrementally strictly inside, φ(0) = 0   | L₂ₑ-finite gain   | N/A                              | [ZF68]

Note: Some proofs in the references claim a weaker stability result than is actually proven.
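The sector definitions can also be checked numerically for a concrete nonlinearity. A small sketch (the choice φ(y) = tanh(y) and the sampling grid are illustrative assumptions): tanh is monotone with slope in (0, 1], so all of its difference quotients should lie inside the sector [0, 1], and in particular inside [0, ∞].

```python
import numpy as np

def incremental_slopes(phi, ys):
    """All pairwise difference quotients (phi(y1) - phi(y2)) / (y1 - y2) for y1 != y2."""
    vals = phi(ys)
    dy = ys[:, None] - ys[None, :]
    dphi = vals[:, None] - vals[None, :]
    mask = dy != 0.0
    return dphi[mask] / dy[mask]

ys = np.linspace(-5.0, 5.0, 201)
slopes = incremental_slopes(np.tanh, ys)
# A monotone nonlinearity: every difference quotient lies in [0, 1] (here strictly in (0, 1)).
print(slopes.min(), slopes.max())
```

The same routine applied to a nonlinearity with a locally negative slope (as in Figure 2.4) would produce negative quotients, flagging that the incremental sector condition fails.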
2.3.1 The Popov Stability Criterion

The Popov criterion for stability [Pop62] is stated as follows. Suppose that the following conditions hold:
1. G(s) is Hurwitz,
2. φ(y) is strictly inside the sector [0, k], and
3. ∃q, δ > 0 such that

  Re{(1 + jωq)G(jω)} ≥ −1/k + δ, ∀ω.

Then, the closed-loop system is stable.

Graphically, this is equivalent to having the curve of Re{G(jω)} versus ω Im{G(jω)} lie strictly to the right of the line through the point −1/k with slope 1/q. (cf. Figure 2.12.) For φ(y) strictly inside the sector [0, ∞], we have

  Re{(1 + jωq)G(jω)} ≥ 0, ∀ω.    (2.24)

[Figure 2.12: The Popov stability criterion.]

2.3.2 The Circle Stability Criterion (Non-Incremental Version)

The non-incremental version of the circle criterion for stability [San64, Zam66b] is stated as follows. Suppose that the following conditions hold:
1. G(s) is Hurwitz,
2. φ(y) is strictly inside the sector [α, β] with 0 < α < β,
3. ∃δ > 0 such that

  |G(jω) + (1/2)(1/α + 1/β)| ≥ (1/2)(1/α − 1/β) + δ, ∀ω, and

4. G(jω) does not encircle the point −(1/2)(1/α + 1/β).

Then, the closed-loop system is stable. Graphically, this is equivalent to having the Nyquist curve of G(jω) lie outside the disk of radius (1/2)(1/α − 1/β) centered at the point −(1/2)(1/α + 1/β). (cf. Figure 2.13.)

[Figure 2.13: The circle stability criterion (both non-incremental and incremental versions).]

For φ(y) strictly inside the sector [0, ∞], we have [San64, Zam66b]

  Re{G(jω)} ≥ 0, ∀ω.    (2.25)
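The Popov condition (2.24) can be tested on a finite frequency grid. A sketch (the plant G(s) = 1/(s² + s + 1) and the grids are illustrative assumptions, and a grid test is a numerical check rather than a proof): for this plant, q = 1 makes Re{(1 + jωq)G(jω)} = 1/|(jω)² + jω + 1|² > 0 for all ω, while q = 0, which is the circle criterion condition (2.25), fails.

```python
import numpy as np

def popov_margin(num, den, q, omegas):
    """min over the grid of Re{(1 + j*omega*q) * G(j*omega)} for G = num/den (poly coeffs)."""
    s = 1j * omegas
    G = np.polyval(num, s) / np.polyval(den, s)
    return np.min(np.real((1.0 + s * q) * G))

omegas = np.logspace(-2, 3, 5000)
# Illustrative plant G(s) = 1/(s^2 + s + 1); with q = 1 the margin is positive at every omega.
margin = popov_margin([1.0], [1.0, 1.0, 1.0], q=1.0, omegas=omegas)
print(margin)
```

Comparing margins for q = 1 and q = 0 on the same grid shows why the Popov multiplier 1 + jωq enlarges the set of plants that can be certified relative to the circle criterion.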
Figure 2.13.) For (y) incrementally strictly inside the sector [0;1] with (0) = 0, we have [Zam64, Zam66b] RefG(j!)g 0; 8!: 2.3.4 The O-Axis Circle Stability Criterion The o-axis circle criterion for stability [CN68] is stated as follows. Suppose that the following conditions hold: 1. G(s) is Hurwitz, 2. (y) is incrementally strictly inside the sector [;] with 0<< and (0) = 0, 20 3.9c i 2R and > 0 such that jG(j!)cj r +; 8!, where c = c r +jc i = the center of a circle with c r = 1 2 1 + 1 ; r = 1 2 1 1 +jc i = its radius, and 4. G(j!) does not encircle c. Then, the closed-loop system is stable. Note that the circle criterion is a special case of the o-axis circle criterion when c i = 0. Graphically, the o-axis circle criterion is equivalent to having the Nyquist curve of G(j!) lie outside any disk through the points 1 and 1 . (cf. Figure 2.14.) For (y) incrementally strictly inside the sector [0;1] with (0) = 0, we have Re e j G(j!) 0; 8!; for some 2 2 ; + 2 : (2.26) 2.3.5 A Generalized Multiplier Set (Zames-Falb Multipliers) The Popov, circle, and o-axis circle stability criteria are special cases of a generalized class of multipliers M, or a limit point of a sequence of multipliers in M, dened as M = M(j!) : M(j!) =m 0 Z 1 1 e j!t f(t)dt , where (2.27) f2 Z = f : f(t) 0 8t2R; Z 1 1 jf(t)jdtm 0 : (2.28) 21 Figure 2.14: The o-axis circle stability criterion. 22 Without loss of generality, we can set m 0 = 1, and dene M = M(j!) : M(j!) = 1 Z 1 1 e j!t f(t)dt , and (2.29) f2Z = f : f(t) 0 8t2R; Z 1 1 jf(t)jdt 1 : (2.30) For convenience, throughout this thesis, we denote M(j!) = 1 Z 1 1 e j!t f(t)dt: (2.31) Note that M(j!) is the Fourier transform of (t)f(t). Formally, the stability theorem for this classM of henceforth known as Zames-Falb multipliers is stated as follows. Theorem 2.32 [ZF68] Given the system of Figure 2.1, suppose that the following conditions hold: 1. G(s) is Hurwitz. 2. 
(y) is incrementally strictly inside the sector [0;1] with (0) = 0, and 3.9M2M given by (2.29) such that RefM(j!)G(j!)g 0; 8!: Then, the closed-loop system is stable. 2 Refer to Table 2.1 for a summary of the equivalent Zames-Falb multipliers for the aforementioned stability criteria. The multiplier for the Popov criterion (2.24) is obtained 23 as [SW87] 1 +j!q = lim n!1 M n (j!); 8!2 [0;1), where M n (j!) = 1 +j!q 1 + j! n 2 M, and m 0 = nq: The multiplier for the circle criterion (2.25) is M(j!) = 12M. The multiplier for the o-axis circle criterion (2.26) is obtained as the limit point of a sequence of rational RC or RL multipliers converging to e j ; 8!2 (0;1) [NT73]. The results of Theorem 2.32 were extended into the repeated SISO [KS02], nonrepeated MIMO [SK00], and repeated MIMO [MS05] cases in continuous time as well as the non- repeated SISO [Wil71], repeated SISO [KS02], nonrepeated MIMO [Man03,MS05] and re- peated MIMO [Man03,MS05] cases in discrete time. This thesis focuses on the continuous time nonrepeated SISO case. Safonov and Wyetzner [SW87] observed that the problem of stability given by Prob- lem Statement 2.23 can be stated as the innite dimensional linear programming (LP) optimization problem = max f2Z min !2R Re 1 Z 1 1 e j!t f(t)dt G(j!) : (2.33) If 0 then the system is stable. In other words, this LP leads to a new stability criterion, the optimal multiplier stability criterion. This is stated formally in the following theorem. Theorem 2.34 [Saf85] Given the system of Figure 2.1, suppose that the following conditions hold: 1. G(s) is Hurwitz. 2. (y) is incrementally strictly inside the sector [0;1], and (0) = 0. 24 Then, the closed-loop system is stable if the LP given by (2.33) has the optimum 0. 2 The study of Zames-Falb multipliers has been extended in [HW07] to the case of con- strained linear model predictive control, where quadratic programs are involved. 
The study of Zames-Falb multipliers in this thesis is restricted to the linear case. Note that, as this thesis discusses the calculation of Zames-Falb multipliers, attention is restricted to nonlinearities incrementally strictly inside [0;1] with (0) = 0 (Assump- tion 2.21), and the goal is to prove stability, not incremental stability. (cf. Table 2.1.) Given Theorem 2.34, we now turn our attention to the issue of how to calculate the multipliers in the setM given by (2.29). 25 Chapter 3 Calculating Zames-Falb Multipliers In this section, the evolution of methods of calculating the Zames-Falb multipliers M2M given by (2.29) is discussed, leading up to the main contribution of this thesis. 3.1 A High Dimensional Linear Program In [SW87], problem (2.33) was approximately solved by approximating the function f(t) by a nite sum of N 0 impulses as f(t) = N X i=1 z i (tt i ): (3.1) M(j!) is thus approximated by M N (j!) = " 1 N X i=1 e j!t i z i # : (3.2) Dene the real vector z = [ z 1 z 2 z N ] 0 2R N : 26 Then the approximation to problem (2.33) becomes the LP as follows. N = max z2Z N N (z), where (3.3) N (z) = min !2R RefM N (j!)G(j!)g , and (3.4) Z N = ( z2R N : z i 0; N X i=1 z i 1 ) : (3.5) Given a set of N timesft 1 ;:::;t N g and a set of L frequenciesf! 1 ;:::;! L g, the LP in standard form is as follows. 8 > > > > < > > > > : min x2R N+1 cx subject to: Ax b x 2:N+1 0 N (3.6) where c = h 1 0 0 i (3.7) x = h z 1 z N i 0 A = 2 6 6 6 6 6 6 6 4 0 1 1 1 1 1 1 N . . . 1 L 1 L N 3 7 7 7 7 7 7 7 5 (3.8) b = h 1 b 1 b L i 0 (3.9) l i = Re e j! l t i G(j! l ) (3.10) b l = RefG(j! l )g (3.11) i 2 N N l 2 N L : 27 Remark 3.12 The constraint 0 is omitted so as to make the problem feasible. As we shall see in Chapter 4, the new subalgorithm for calculating Zames-Falb multipliers is still valid. (cf. Remark 4.15.) 2 In [SW87], an algorithm was proposed to solve this LP by running it with large sets of times and frequencies. 
The [SW87] algorithm is discussed in detail there and is summarized here only with its key points, as follows.

Algorithm [SW87]
Step 1. Initialize: N = 200, L = 200, {t_i} a logarithmically spaced set of N times, and {ω_l} a logarithmically spaced set of L frequencies.
Step 2. Run the LP, and determine if a feasible solution exists.
Step 3. If a feasible solution z is found from Step 2, then verify Re{M(jω)G(jω)} ≥ 0 for all ω ∈ R.
  If so: the system is stable; end program.
  Else: expand the set of frequency samples and go to Step 2.
If no feasible solution is found, expand the set of time samples and go to Step 2.

If the algorithm does not terminate after several iterations, then no conclusion can be drawn about the stability of the system.

The main drawback of this algorithm is the need to run the high dimensional LP within the limits of existing computing power. Monte Carlo simulations with MATLAB's linprog function frequently terminate without an answer due to the following factors:
1. The maximum number of iterations has been reached.
2. The maximum space allocation has been exceeded.
3. The processing time is too long.

This problem of computational complexity is compounded by the fact that, in the original algorithm as stated in [SW87], the expansion of the set of sample times is done by doubling the time interval and halving the sampling period. As such, the set of sample times quadruples in size with each iteration, i.e., N ← 4N. As we shall see in the next section, it is possible to solve problem (2.33) using a low dimensional LP by taking advantage of some of its key properties.

3.2 A Low Dimensional Linear Program

Gapski and Geromel [GG94] observed that the optimization problem of (2.33) is equivalent to a convex programming problem. They devised an algorithm whereby, at each iteration, a carefully selected new time t_{N+1} is computed to add a new basis element δ(t − t_{N+1}) to the approximation (3.1) used in solving the LP (2.33).
As such, the set of sample times can start empty and need only increase in size until a solution is reached, which typically occurs by N = 10 or N = 20, according to [GG94] and the Monte Carlo simulations done to generate the example ofx4.5. A background of Gapski and Geromel's algorithm [GG94], identied as Algorithm 1 in this thesis, follows. Then the details of Algorithm 1 are stated. Following that, their subalgorithm for choosing the new time t N+1 , identied as Subalgorithm 1 in this thesis, is stated. 3.2.1 Background of Gapski and Geromel's Algorithm (Algorithm 1) It is desired to derive an algorithm, called Algorithm 1, that discretizes and eciently solves (2.33) by iteratively adding to an expanding collection of times in the approximation (3.1). Note that (Lemma 1 of [GG94]) (f) = min !2R Re 1 Z 1 1 e j!t f(t)dt G(j!) (3.13) 29 is a concave functional, since ^ (f;!) = Re 1 Z 1 1 e j!t f(t)dt G(j!) is linear inf for each xed!2R and, by Danskin's Theorem (Proposition B.25 of [Ber99]), the minimum of a set of linear functionals is a concave functional. For the approximate LP problem given by (3.3), note that f2Z given by (2.30) if and only if z2Z N given by (3.5). Let W (z) be the set of arguments to the solution of (3.4), i.e. W (z) =f! : N (z) = RefM N (j!)G(j!)gg: (3.14) For z =z given, dene W =W (z ); (3.15) where W is given by (3.14). Denote the set of sample times as T N =ft 1 ;:::;t N g: (3.16) For z 0 2Z N and T N given, an element of the subgradient of (3.4) is Re 8 > > > > > > > < > > > > > > > : 2 6 6 6 6 6 6 6 4 e j! 0 t 1 e j! 0 t 2 . . . e j! 0 t N 3 7 7 7 7 7 7 7 5 G(j! 0 ) 9 > > > > > > > = > > > > > > > ; 2@ N (z 0 ); (3.17) 30 where ! 0 2W . For T N and ! l given, dene l = [ l 1 l N ] 0 2R N ; (3.18) where l i is given by (3.10). 
Because Z_{N+1} ⊃ Z_N, we have

  ψ*_{N+1} ≥ ψ*_N,  ∀ N ≥ 1, ∀ T_N.

As such, T_{N+1} = T_N ∪ {t_{N+1}} can be chosen iteratively by observing that the right directional derivative of ψ_{N+1}(·) at z_0 ∈ Z_{N+1} in the direction d ∈ R^{N+1} is given by [Las70]

  Dψ_{N+1}(z_0; d) = min{ ξ′d : ξ ∈ ∂ψ_{N+1}(z_0) }.   (3.19)

Define d̄ = [ 0 ⋯ 0 1 ]′ ∈ R^{N+1}, and define z* ∈ Z_N as the optimal solution of (3.3) for N time samples. Then we get, for

  z̄ = [ (z*)′  0 ]′ ∈ Z_{N+1},   (3.20)

  Dψ_{N+1}(z̄; d̄) = −max_{ω∈W*} { |G(jω)| cos(ωt_{N+1} − ∠G(jω)) },   (3.21)

where ∠G(·) denotes the phase of G(·). In [GG94], it is proposed that the new time t_{N+1} be chosen such that (3.21) is maximized. A possible motivation for this choice of t_{N+1} in [GG94] is that if (3.21) is positive and if d̄ is a feasible direction, then clearly ψ*_{N+1} > ψ*_N. As shall be shown in Chapter 4, the choice of t_{N+1} given by the maximization of (3.21) sometimes results in no improvement in ψ*_N, i.e. ψ*_{N+1} = ψ*_N, leading to a premature suboptimal termination of Gapski and Geromel's algorithm [GG94]. First, a formal statement of the algorithm is made.

3.2.2 Gapski and Geromel's Algorithm (Algorithm 1)

The following is a restatement of the algorithm used in [GG94] to solve (2.33), defined in this thesis as Algorithm 1. (cf. Figure 3.1.)

Algorithm 1 [GG94]

Step 1. Solve the unidimensional optimization problem

  ψ*_0 = min_{ω∈R} Re{G(jω)}.   (3.22)

Set T_1 = {t_1}, N = 1, and ε > 0 sufficiently small.

Step 2. Solve the convex programming problem

  ψ*_N = max_{z∈Z_N} ψ_N(z).   (3.23)

Step 3. If ψ*_N − ψ*_{N−1} < ε: go to Step 4. Else: set T_{N+1} = T_N ∪ {t_{N+1}}, N ← N + 1, and go back to Step 2.

Step 4. If ψ*_N > 0, then the closed-loop system is stable for the sector [0, ∞).

Figure 3.2 shows a typical progression of Algorithm 1 up to N = 3. Note that the constraint set is a simplex. Problem (3.23) can be solved by any outer-linearization technique [Las70], such as Kelley's cutting-plane algorithm [Kel60]. In [GG94], Problem (3.23) is solved as follows. (cf. Figure 3.3.)
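The outer loop of Algorithm 1 stated above can be sketched as follows. This is a skeleton only: `solve_step2` stands in for Step 2 and `pick_time` for Step 3's subalgorithm, both supplied by the caller; the dummy functions in the usage lines are fabricated for the test and are not from the thesis.

```python
def algorithm1(solve_step2, pick_time, t1, eps=1e-6, max_iter=50):
    """Skeleton of Algorithm 1 [GG94]: grow the time set T_N by one
    sample per iteration until psi*_N stops improving by at least eps."""
    T = [t1]
    psi_prev = float("-inf")
    while len(T) <= max_iter:
        psi_star, data = solve_step2(T)       # Step 2
        if psi_star - psi_prev < eps:         # Step 3 stopping test
            break
        psi_prev = psi_star
        T.append(pick_time(data))             # T_{N+1} = T_N U {t_{N+1}}
    return psi_star, T                        # Step 4: stable if psi_star > 0

# Dummy stand-ins: psi* climbs toward 1 as the time set grows.
solve = lambda T: (1.0 - 2.0 ** (-len(T)), None)
pick = lambda data: 0.0
psi, T = algorithm1(solve, pick, t1=1.0, eps=0.2)
```

With the dummy solver above, improvements are 0.25 and then 0.125, so the loop stops once the gain drops below ε = 0.2, after the third Step 2 solve.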
Figure 3.1: Gapski and Geromel's Algorithm (Algorithm 1).

Figure 3.2: A typical progression of Algorithm 1 up to N = 3.

Figure 3.3: Step 2 of Algorithm 1.

Step 2 of Algorithm 1

Step 2a. For N fixed, set ε > 0 sufficiently small, L = 1, z^1 = 0_N, and calculate ψ_N(z^1) and ξ^1 ∈ ∂ψ_N(z^1). Set Ω = the ω corresponding to ξ^1.

Step 2b. Using the L elements of Ω, solve the LP problem

  σ^{L+1} = max_{z∈Z_N} { σ : σ ≤ ψ_N(z^l) + ⟨ξ^l, z − z^l⟩, l ∈ N_L }   (3.24)

and let z^{L+1} be its optimal solution.

Step 2c. If |σ^{L+1} − ψ_N(z^{L+1})| < ε: stop; set ψ*_N = σ^{L+1} and W* = W(z^{L+1}). Else: set L ← L + 1, calculate ψ_N(z^{L+1}) and ξ^{L+1} ∈ ∂ψ_N(z^{L+1}), set Ω ← Ω ∪ {ω} for the ω corresponding to ξ^{L+1}, and go back to Step 2b.

Remark 3.25 For Step 2c, note that W* is the set of all frequencies W(z^{L+1}), whereas Ω appends only the one ω ∈ W(z^{L+1}) corresponding to the arbitrarily selected ξ^{L+1}. In other words, when Step 2c exits, W* contains all the frequencies for which Re{M_N(jω)G(jω)} attains its minimum value given z^{L+1}. On the other hand, when Step 2c exits, Ω contains the L frequencies used to calculate each ξ^i, i ∈ N_L, for the last Lth LP given by (3.24). As we shall see, Gapski and Geromel's Subalgorithm 1 used to select t_{N+1} uses W*, while the new Subalgorithm 2 proposed in this thesis uses Ω. □

See Figure 3.4 for a graphical representation of Step 2 of Algorithm 1 given an arbitrary dimension z_j with j ∈ N_N. Note that most frequencies are inactive constraints. Also note that the objective function is the minimum of a set of linear
In the (z j ;)-plane, the constraint line C(! l ) is C(! l ) = f(z j ;)g, where = l j z j +K; K = X i2N N ;i6=j l i z i +b l = a constant in z j ; l i is given by (3.10), and b l is given by (3.11). See [Las70,Kel60] for discussions of outer- linearization techniques and Kelly's cutting-plane algorithm. z j ! " z j N z j z j C 5 C 6 z j C 8 C ( 8 ) Figure 3.4: A graphical representation of Step 2 of Algorithm 1. 37 3.2.3 Subalgorithm 1 of Algorithm 1 The following is a statement of the subalgorithm used in [GG94] to select t N+1 in Step 3 (and t 1 in Step 1) of Algorithm 1, dened in this thesis as Subalgorithm 1. Subalgorithm 1 Step 3a. Use W obtained from Step 2 to calculate D N+1 given by (3.21). Step 3b. Set t N+1 = the argument of the solution to maximizing D N+1 over t N+1 . See Figure 3.5 for a sample graph of D N+1 given by (3.21). Note that in this case, both frequencies ! 1 and ! 2 are active constraints. For convenience, denote h(!;t) =jG(j!)j cos(!t\G(j!)): If (3.21) is positive and if d is a feasible direction, then for this choice of t N+1 , clearly N+1 > N . As shall be shown in Chapter 4, the choice oft N+1 given by the maximization of (3.21) can result in no improvement in N , i.e. N+1 = N , if d is not feasible, i.e. P N i=1 z i = 1. This can lead to a premature suboptimal termination of Gapski and Geromel's algorithm [GG94]. In the next chapter, a new subalgorithm, Subalgorithm 2, is proposed to choose a t N+1 in Step 3 of Algorithm 1 that always improves N whenever possible. 38 t! G j 2 t N h 1 t h 2 t h 1 t h 2 t G j 1 Figure 3.5: Subalgorithm 1 of Algorithm 1. 39 Chapter 4 Subalgorithm 2 In this section, a new Subalgorithm 2 is proposed to replace Subalgorithm 1. It is proved that the new Subalgorithm 2 is superior in the sense that it always results in an improvement of the objective function, i.e. N+1 > N , wheneverz 2Z N is not optimal, and that there exist cases when Subalgorithm 1 does not. 
First, a brief review of the simplex method is presented.

4.1 The Simplex Method

The new Subalgorithm 2 proposed in §4.2 is derived from the simplex method, explained below.

4.1.1 Overview of the Simplex Method

This section is a basic review of some key properties of the simplex method for solving LP problems. There are several good texts that cover this topic in more detail, including [Ber99, LY08]. Consider the LP

  min_{x∈R^n} cx  subject to:  Ax ≤ b,  x ≥ 0_n,   (4.1)

where

  c = [ c_1 ⋯ c_n ],  x = [ x_1 ⋯ x_n ]′,
  A = [ a_{1,1} ⋯ a_{1,n} ; ⋮ ; a_{m,1} ⋯ a_{m,n} ],  b = [ b_1 ⋯ b_m ]′,

n is the number of variables, and m is the number of constraints. The simplex method begins with the canonical initial tableau composed of the c, A, and b matrices as follows.

  1 0 ⋯ 0 | a_{1,1} ⋯ a_{1,n} | b_1
  0 1 ⋯ 0 | a_{2,1} ⋯ a_{2,n} | b_2
  ⋮       | ⋮            ⋮    | ⋮
  0 0 ⋯ 1 | a_{m,1} ⋯ a_{m,n} | b_m
  0 0 ⋯ 0 | c_1     ⋯ c_n     | 0        (4.2)

The method proceeds by successively selecting a pivot element from a pivot column and generating a series of tableaux of the form

  y_{1,1} ⋯ y_{1,m} | y_{1,m+1} ⋯ y_{1,m+n} | y_{1,0}
  ⋮                 | ⋮                      | ⋮
  y_{m,1} ⋯ y_{m,m} | y_{m,m+1} ⋯ y_{m,m+n} | y_{m,0}
  r_1     ⋯ r_m     | r_{m+1}   ⋯ r_{m+n}   | v        (4.3)

where y_{i,j} = the coefficient for row i and column j; y_{i,0} = the coefficient for row i and the last column; r_j = the relative cost for column j; i ∈ N_m, j ∈ N_{m+n}; and v = the value associated with the objective function.

4.1.2 Application of the Simplex Method

Consider the simplex method for solving this LP problem [Ber99, LY08]. Given N and L, the LP in Step 2b of Algorithm 1 (§3.2.2) is given by (3.6). Let m = L + 1 and n = N + 1. The method begins with the canonical (m+1) × (m+n+1) initial tableau

  I_m   | A | b
  0′_m  | c | 0      (4.4)

where A, b, and c are respectively given by (3.8), (3.9), and (3.7). In expanded form, this tableau is

  1 0 ⋯ 0 |  0  1     ⋯ 1     | 1
  0 1 ⋯ 0 |  1  γ^1_1 ⋯ γ^1_N | b_1
  ⋮       |  ⋮  ⋮        ⋮    | ⋮
  0 0 ⋯ 1 |  1  γ^L_1 ⋯ γ^L_N | b_L
  0 0 ⋯ 0 | −1  0     ⋯ 0     | 0      (4.5)

where γ^l_i is given by (3.10) and b_l is given by (3.11).
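The canonical initial tableau (4.2) can be assembled mechanically from c, A, and b. A minimal Python sketch (the specific γ values in the usage lines are illustrative, not from the thesis):

```python
def initial_tableau(c, A, b):
    """Canonical initial tableau (4.2): [I_m | A | b] stacked over
    [0_m | c | 0], stored as an (m+1) x (m+n+1) list of lists."""
    m = len(A)
    rows = [[1.0 if j == i else 0.0 for j in range(m)] + list(A[i]) + [b[i]]
            for i in range(m)]
    rows.append([0.0] * m + list(c) + [0.0])
    return rows

# Shape of (4.5) with N = 2, L = 1 (so m = 2, n = 3): the first
# constraint is the simplex row z_1 + z_2 <= 1, the second is one cut
# sigma + g1 z_1 + g2 z_2 <= b1, and c = [-1, 0, 0] maximizes sigma.
A_ex = [[0.0, 1.0, 1.0], [1.0, 0.3, -0.2]]
b_ex = [1.0, 0.5]
c_ex = [-1.0, 0.0, 0.0]
T = initial_tableau(c_ex, A_ex, b_ex)
```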
Successively selecting pivots gives tableaux of the form given by (4.3). Define

  r = [ r_s  r_t ], where   (4.6)
  r_s = [ r_1 ⋯ r_m ], and   (4.7)
  r_t = [ r_{m+1} ⋯ r_{m+n} ].

A standard result of the simplex method which will be used later in this thesis is

  r_t = r_s A + c.   (4.8)

We are now ready to state Subalgorithm 2.

4.2 Statement of Subalgorithm 2

This section is a statement of the proposed Subalgorithm 2 to replace Subalgorithm 1 in Step 3 of Gapski and Geromel's algorithm, Algorithm 1 (§3.2.2). Define Γ(Ω, t) ∈ R^{L+1} as

  Γ(Ω, t) = [ 1  Re{e^{−jω_1 t} G(jω_1)}  ⋯  Re{e^{−jω_L t} G(jω_L)} ]′,   (4.9)

where Ω = {ω_1, …, ω_L}. For Ω and T_N given, define

  r_t(t) = r_s Γ(Ω, t),   (4.10)
  𝒯_{N+1} = {t ∈ R : t ≠ 0, r_t(t) < 0, t ∉ T_N},   (4.11)
  t_min ∈ argmin_{t∈𝒯_{N+1}} {r_t(t)}, and   (4.12)
  t_loc ∈ argmin_{t∈𝒯_{N+1}} { |t| : (d/dt) r_t(t) = 0 }.   (4.13)

As such, 𝒯_{N+1} = the set of all available times t with negative relative cost; t_min = an element of the set of times t for which the relative cost is a minimum, when the set is nonempty; and t_loc = an element of the set of times t that is smallest in magnitude for which the relative cost is a local minimum, when the set is nonempty.

Given iteration N and Step 3 of Algorithm 1, the main result of this thesis is to propose the use of the following subalgorithm, called Subalgorithm 2, to select t_{N+1}.

Subalgorithm 2

Step 3a. From the last iteration of Step 2 of Algorithm 1, obtain L, Ω ∈ R^L, and r_s ∈ R^{L+1} from the LP in Step 2b.¹

Step 3b. Calculate r_t(t), the scalar function of t given by (4.10).

Step 3c. If 𝒯_{N+1} is empty: stop and go to Step 4. Else: set t_{N+1} = t_min given by (4.12).

¹ r_s can be obtained as an output of a standard linear programming function, such as MatLab's linprog function.

Remark 4.14 As r_s is not available for the first iteration, t_1 needs to be calculated using the original Subalgorithm 1. □

Remark 4.15 The constraint σ ≥ 0 is omitted so as to make the linear program feasible.
This is valid, as Subalgorithm 2 chooses a new (N+1)st dimension based on Γ(Ω, t). In other words, the column of the simplex tableau corresponding to σ cannot be selected as the new dimension. To place the constraint σ ∈ R in standard form, it is a common linear programming technique to define an unconstrained variable as the difference between two nonnegative variables, as follows.

  σ = σ⁺ − σ⁻  subject to:  σ⁺ ≥ 0,  σ⁻ ≥ 0.

Refer to Theorem 4.19 for a proof that Subalgorithm 2 improves the objective function whenever possible. □

Remark 4.16 Note the condition in 𝒯_{N+1} given by (4.11) that t ≠ 0. If t_i = 0 for any i, then the corresponding multiplier in the set M given by (2.27) can be rescaled so that m_0 = 1 and the multiplier is in M given by (2.29). Hence, although valid, selecting t = 0 is useless. □

4.3 Theorems About Subalgorithms 1 and 2

In this section, some important theoretical results are presented.

4.3.1 The General Case

This section contains some important theorems about the theoretical consequences of using Subalgorithm 1 (§3.2.3) and Subalgorithm 2 (§4.2) in Step 3 of Algorithm 1 (§3.2.2).

Theorem 4.17 There exists a G(jω) such that, for some iteration N of Algorithm 1, (i) using Subalgorithm 1 to select t_{N+1} results in no improvement in ψ*_N (i.e. ψ*_{N+1} = ψ*_N), whereas (ii) selecting another t_{N+1} results in an improvement in ψ*_N (i.e. ψ*_{N+1} > ψ*_N). □

Proof (Theorem 4.17) Refer to §4.5 for such an example. □

The main theorem of this thesis follows. Note that z̄ given by (3.20) is optimal if

  z̄ = argmax_{z∈Z_{N+1}} ψ_{N+1}(z),   (4.18)

where ψ_N is given by (3.4).

Theorem 4.19 Suppose Subalgorithm 2 is used in Algorithm 1 and that (i) any t_{N+1} ∈ 𝒯_{N+1} is selected for Step 3 whenever 𝒯_{N+1} given by (4.11) is nonempty, and (ii) the solution of the last LP of Step 2b is nondegenerate.² Then ψ*_{N+1} = ψ*_N if (4.18) holds. Otherwise, ψ*_{N+1} > ψ*_N. □

Proof (Theorem 4.19) An arbitrary column (i+1) ∈ {2, …, N+1} of A given by (3.8) is [ 1 γ^1_i ⋯ γ^L_i ]′, where γ^l_i is defined by (3.10) for each
ω_l ∈ Ω, l ∈ N_L, and t_i ∈ T_N. The column of A for an arbitrary time t ∈ R, had it been included in A, would be Γ(Ω, t) given by (4.9). By inspection of (3.7), the corresponding element of c for this time would equal 0. Combined with (4.8), we get (4.10).

If (4.18) holds, then there is no ascent direction. The simplex method ensures that r_t(t) ≥ 0 ∀t, so 𝒯_{N+1} is empty. The Optimality Condition Theorem in §3.3 of [LY08] implies that ψ*_{N+1} = ψ*_N. Conversely, if (4.18) does not hold, then there exists an ascent direction. The simplex method ensures that r_t(t) < 0 for some t, so 𝒯_{N+1} is nonempty. By the unnamed theorem in §3.3 of [LY08], the objective function can be strictly improved by selecting this as the pivot column.³ Hence, ψ*_{N+1} > ψ*_N if t_{N+1} ∈ 𝒯_{N+1}. □

² For a definition and discussion of degeneracy, cf. §4.4.3.
³ Equivalence of (24) and (25) in §3.3 of [LY08] means c_j − z_j in [LY08] is r_t(t).

Remark 4.20 Note that although r_t(t) is calculated from the Lth LP of iteration N, it is used only to select t_{N+1}. Once t_{N+1} is selected, Algorithm 1 reverts to the initial tableau with z^1 = 0_{N+1} and the initial values of γ^l_i given by (3.10). See §4.4.4 for a more detailed discussion. □

Corollary 4.21 Suppose Subalgorithm 2 is used in Algorithm 1, 𝒯_{N+1} is nonempty, and condition (ii) of Theorem 4.19 holds. If t_min given by (4.12) or t_loc given by (4.13) is selected for Step 3, when it exists, then ψ*_{N+1} = ψ*_N if (4.18) holds. Otherwise, ψ*_{N+1} > ψ*_N. □

Proof (Corollary 4.21) By Theorem 4.19, any t_{N+1} ∈ 𝒯_{N+1} is sufficient for ψ*_{N+1} > ψ*_N, and t_min and t_loc are two such cases. □

Corollary 4.21 implies that whenever improving ψ*_N is possible, an improvement can be made using Subalgorithm 2.

4.3.2 The Case of a Single Frequency

In this section, the claim in [GG94] that Subalgorithm 1 (§3.2.3) can be used for the case of a single frequency is proved. Consider Algorithm 1 (§3.2.2) and its solution z* ∈ Z_N after N iterations. Consider the case when W* given by (3.15) consists of a single frequency ω*.
It is claimed by the next theorem that, in this case, t_{N+1} chosen by [GG94] as

  t_{N+1} = ((2k* + 1)π + ∠G(jω*)) / ω*, where   (4.22)
  k* = argmin_{k∈Z} { |t_{N+1}| : t_{N+1} ≠ 0 },

will still result in ψ*_{N+1} > ψ*_N, even if Σ_{i=1}^N z*_i = 1 so that d̄ is an infeasible direction. First define

  γ_i = −Re{ e^{−jω* t_i} G(jω*) }  ∀ i ∈ N_{N+1},
  γ = [ γ_1 ⋯ γ_{N+1} ]′ ∈ R^{N+1},   (4.23)
  e_i = the ith coordinate vector = [ 0 ⋯ 0 1 0 ⋯ 0 ]′ ∈ R^{N+1},
  S = Σ_{i=1}^{N+1} z̄_i for z̄ given by (3.20) with z̄ ∈ Z_{N+1}, and   (4.24)
  J = { i ∈ N_N : γ_i < γ_{N+1}, z̄_i > 0 }.   (4.25)

Theorem 4.26 Suppose Gapski and Geromel's subalgorithm, Subalgorithm 1, is used in Algorithm 1 with t_{N+1} chosen by (4.22) and that (i) ψ*_N < 0, and (ii) equation (4.18) does not hold. If W* consists of a single point ω*, then ψ*_{N+1} > ψ*_N. □

Proof (Theorem 4.26) Clearly, condition (i) implies that |G(jω*)| > 0; otherwise, the algorithm would terminate with ψ*_{N+1} = ψ*_N = 0. Abbreviate ψ_{N+1}(z̄) as ψ̄. Because W* consists of a single point, ∂ψ̄ given by (3.17) consists of only one element γ ∈ R^{N+1} given by (4.23). As such, ∇ψ̄ exists and γ = ∇ψ̄. Choosing t_{N+1} by (4.22), which maximizes (3.21), ensures that γ_{N+1} = |G(jω*)| ≥ γ_i ∀ i ∈ N_N. Note that γ_{N+1} > 0.

Assume that S = 1 with S given by (4.24) and that J given by (4.25) is empty. As such, ∀ i ∈ N_N, z̄_i > 0 implies γ_i = γ_{N+1}. Since Z_{N+1} is convex, the set of all feasible directions from z̄ is {(z − z̄) : z ∈ Z_{N+1}}. For all such directions,

  (z − z̄)′∇ψ̄ = (z − z̄)′γ
    = Σ_{i: z̄_i = 0} γ_i z_i + Σ_{i: z̄_i > 0} γ_i (z_i − z̄_i)
    ≤ γ_{N+1} ( Σ_{i=1}^{N+1} z_i − S ) ≤ 0.

So there does not exist a feasible ascent direction, and condition (ii) would not hold. Thus either S < 1, or J is nonempty, or both.

Suppose S < 1. Now e_{N+1}, z̄ ∈ Z_{N+1} and Z_{N+1} is convex, so (e_{N+1} − z̄) is a feasible direction. Also,

  (e_{N+1} − z̄)′∇ψ̄ = (e_{N+1} − z̄)′γ = γ_{N+1} − Σ_{i=1}^N z̄_i γ_i > 0,

so it is an ascent direction. Thus ψ*_{N+1} > ψ*_N.

Now suppose J is nonempty, and fix j ∈ J. Consider d_j = (e_{N+1} − e_j) ∈ R^{N+1}, and let any ε be such that 0 < ε ≤ z̄_j.
Then

  (z̄ + εd_j) = [ z̄_1 ⋯ (z̄_j − ε) ⋯ z̄_{N+1} + ε ]′ = [ z̄_1 ⋯ (z̄_j − ε) ⋯ ε ]′ ∈ Z_{N+1}.

Since z̄ ∈ Z_{N+1} and Z_{N+1} is convex, εd_j is a feasible direction. Also,

  (εd_j)′∇ψ̄ = (εd_j)′γ = ε(γ_{N+1} − γ_j) > 0,

so it is an ascent direction. Thus ψ*_{N+1} > ψ*_N. □

Theorem 4.26 implies that Subalgorithm 1 can result in premature termination of Algorithm 1 only if there are at least two points in W*, a situation that typically occurs after two or more iterations (i.e. N ≥ 2). The main result of this thesis, Theorem 4.19, establishes that substituting the new Subalgorithm 2 for Gapski and Geromel's subalgorithm, Subalgorithm 1, prevents this.

4.4 Remarks About Subalgorithm 2

This section contains some miscellaneous remarks about Subalgorithm 2 (§4.2).

4.4.1 Sector Maximum

Given G(jω), it is frequently desired to calculate b_max, defined as

  b_max = max_{b≥0} { b : ∃ M(jω) : Re{M(jω)G̃(jω)} ≥ 0, ∀ω }, where
  G̃(jω) = G(jω) + 1/b, and   (4.27)
  M(jω) is given by (2.31).

As such, ψ* ≥ 0 implies absolute stability for φ(y) incrementally inside the sector [0, b_max] with φ(0) = 0 [SW87, ZF68]. Define b^k_max as b_max using Subalgorithm k ∈ {1, 2} for the same G(jω), and define ψ*^k similarly. Then if b^1_max < b^2_max (i.e. ∃ ψ*^1, ψ*^2 such that ψ*^1 < 0 < ψ*^2), using Subalgorithm 2 over Subalgorithm 1 could result in a substantial improvement in b_max for an appropriately defined G(jω). In fact, it may be that b^2_max = ∞ > b^1_max by redefining

  G_new(jω) = G̃(jω) + 1/b^2_max = G(jω) + 1/b^1_max + 1/b^2_max.

4.4.2 A Practical Choice of Time

Given iteration N of Algorithm 1, in theory the optimal choice of t_{N+1} would appear to be the global minimizer of r_t(t) given by (4.12). However, in practice, when |t_{N+1}| is large, e^{−jωt_{N+1}} is sensitive to small changes in ω, including computational rounding errors. As such, the calculation of W* given by (3.15) requires more computation time for the same precision as for a smaller t_{N+1}, or it is less precise.
In this case, the LP of the (N+1)st iteration of Algorithm 1 might have a constraint row that causes the solution ψ*_{N+1} to be such that ψ*_{N+1} < ψ*_N. As such, the best choice of t_{N+1} was experimentally found to be the time t with the smallest |t| that is a local minimum, as given by (4.13). Corollary 4.21 shows that this t_{N+1} also improves ψ*_N.

4.4.3 Degeneracy

The degeneracy condition in Theorem 4.19 is included for technical reasons. For a detailed discussion of degeneracy, refer to §2.3 and "Degeneracy" in §3.4 of [LY08]. Degeneracy is defined as follows.

Definition 4.28 (Degeneracy) [Las70] An LP is degenerate if, in a basic feasible solution, one of the basic variables takes on a zero value. Such a solution is also said to be degenerate. □

When an LP is degenerate, it is possible for the simplex method to perform a pivot operation whereby the objective function value is not improved. For example, consider the LP in the standard form given by (4.3) as follows (the pivot element is marked with *, and the lower-right entry stores the negative of the objective value).

   1  0 |  1   1   0  | 1
   0  1 |  0  −1   1* | 0
  −1  0 |  0   0  −1  | 1      (4.29)

The second basic variable has value 0, so this solution is degenerate. Pivoting on the marked element in (4.29) gives the LP as follows.

   1  0 |  1   1*  0 | 1
   0  1 |  0  −1   1 | 0
  −1  1 |  0  −1   0 | 1      (4.30)

Note that there is no improvement in the objective function value of −1 between (4.29) and (4.30). However, pivoting on the marked element in (4.30) shows that an improvement is possible, as shown in the following LP.

   1  0 |  1   1   0 | 1
   1  1 |  1   0   1 | 1
   0  1 |  1   0   0 | 2

Here the objective function value is −2. Because of degeneracy, it is possible for an LP to enter into a cycle. In light of this, it is conceivable that the new Subalgorithm 2 (§4.2) may choose a t_{N+1} such that the objective value does not improve (i.e. ψ*_{N+1} = ψ*_N) for many iterations, if Algorithm 1 is forced to continue, because it is in a cycle rather than at an optimum. As such, a perfect algorithm for finding the optimal Zames-Falb multiplier would include a check for cycling. However, the degeneracy condition can be dropped in practice, since degeneracy rarely occurs [LY08].
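The degenerate-then-improving pivot sequence around (4.29)-(4.30) can be replayed mechanically. A Python sketch; the tableau values are as reconstructed here (the extraction dropped the minus signs), with the lower-right entry storing the negative of the objective value.

```python
def pivot(T, r, c):
    """Gauss-Jordan pivot on element (r, c) of tableau T (lists of lists)."""
    p = T[r][c]
    T[r] = [x / p for x in T[r]]
    for i in range(len(T)):
        if i != r:
            f = T[i][c]
            T[i] = [a - f * e for a, e in zip(T[i], T[r])]

# Tableau in the form (4.29); row 1 has a basic variable of value 0,
# so the first pivot is degenerate.
T = [[ 1.0, 0.0, 1.0,  1.0,  0.0, 1.0],
     [ 0.0, 1.0, 0.0, -1.0,  1.0, 0.0],
     [-1.0, 0.0, 0.0,  0.0, -1.0, 1.0]]
pivot(T, 1, 4)      # degenerate pivot: objective stays at -1
v1 = T[2][-1]
pivot(T, 0, 3)      # second pivot: objective improves to -2
v2 = T[2][-1]
```

The first pivot leaves the lower-right entry at 1 (objective −1, no improvement); the second moves it to 2 (objective −2).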
It is included in Theorem 4.19 simply so that the proof can invoke the unnamed theorem in §3.3 of [LY08]. Without it, a modified algorithm that takes cycling into account is possible. Cycling can be avoided by using Bland's pivot rule [Bla77], for example. Many programming codes implement anti-cycling procedures [LY08].

4.4.4 Starting at the Origin versus at the Previous Optimum

Note that although r_t(t) is calculated from the Lth LP of iteration N, it is used only to select t_{N+1}. Once t_{N+1} is selected, Algorithm 1 reverts to the initial tableau with z^1 = 0_{N+1} and the initial values of γ^l_i given by (3.10). The value of r_t(t) is the correct value of the relative cost for the column corresponding to t_{N+1} in the Lth LP. However, the values of the other elements in this column are in general not equal to γ^l_{N+1} (i.e. γ^l_i given by (3.10) with i = N+1 in the original A matrix given by (3.8) had t_{N+1} been included), where l ∈ N_L.

More formally, suppose Step 2 of iteration N of Algorithm 1 began with the Lth LP given by (4.5) and terminated with the tableau shown in standard form given by (4.3) as follows (slack columns elided).

  ⋯ | p^0_0  p^0_1    ⋯  p^0_N    | ⋯
  ⋯ | p^1_0  p^1_1    ⋯  p^1_N    | ⋯
    | ⋮      ⋮            ⋮       |
  ⋯ | p^L_0  p^L_1    ⋯  p^L_N    | ⋯
  r_{s,0} ⋯ r_{s,L} | r_{t,0}  r_t(t_1)  ⋯  r_t(t_N) | ψ*_N      (4.31)

Suppose t_{N+1} has been chosen and that the corresponding column in (4.31) would be

  p_{N+1} = [ p^0_{N+1}  p^1_{N+1}  ⋯  p^L_{N+1}  r_t(t_{N+1}) ]′.   (4.32)

In general, γ^l_{N+1} ≠ p^l_{N+1} for l ∈ N_L. In light of this fact, after selecting t_{N+1}, Algorithm 1 can proceed from iteration N to N+1 in two ways, as follows.

1. (Start at previous optimum) Populate the p_{N+1} that corresponds to the last LP given by (4.31) of iteration N. Append this LP with column p_{N+1}. Start iteration N+1 with this appended LP using the last optimum z^1 = [ (z*)′ 0 ]′ ∈ Z_{N+1} with z* ∈ Z_N.

2. (Start at origin) Start iteration N+1 with L = 1 and the first LP at the origin, z^1 = 0_{N+1}, and use the original γ^l_{N+1}'s.
For choice 1, the computational complexity lies in populating p_{N+1}, as the pivot operations would need to be recorded and then used to calculate the new p_{N+1}. The advantage is that iteration N+1 can start at the previous optimum. For choice 2, the computational complexity lies in recalculating ψ*_{N+1} from the origin, z^1 = 0_{N+1}. The advantage is that the original γ^l_{N+1}'s can be used without having to populate p_{N+1}. This issue is summarized in Table 4.1. In this thesis, choice 2 is used; i.e. the origin is used as the starting point. Theoretical results and experimental simulations comparing these two starting points are potential future work. As unreported simulations done in preparation for this thesis typically terminate at N = 10 or N = 20 iterations, it is conjectured that no significant difference will be found.

This thesis shall now provide an in-depth analysis of an example that illustrates the advantages of Subalgorithm 2 (§4.2) over Subalgorithm 1 (§3.2.3).

Table 4.1: Comparison of starting at the origin versus starting at the previous optimum for each iteration of Algorithm 1.

  Start at:                          z^1 = [ (z*)′ 0 ]′ ∈ Z_{N+1}    z^1 = 0_{N+1}
  populating p_{N+1} or γ^l_{N+1}:   complex                          simple
  finding the new optimum:           simple                           complex

4.5 Example

Consider the randomly generated stable transfer function G(s) given in Table 4.2.

Table 4.2: Data for the example G(s).

  G(s) = G̃(s) + 1/b
  G̃(s) zeros:  +7.423 ± 2.115i,  −2.040 ± 2.725i
  G̃(s) poles:  −3.122 ± 9.681i,  −0.828 ± 7.906i,  −0.116 ± 2.832i
  G̃(s) gain:   2.052
  b:           +6.467069

Algorithm 1 was run twice, first with Subalgorithm 1 and then with Subalgorithm 2, for this G(s). In both cases, the programming parameters given in Table 4.3 were used. Define N̄ to be the last iteration number N when Algorithm 1 stops. Define N̄^k, ψ*^k_N, t^k_i, T^k, z*^k, and M^k_N(jω) to be N̄, ψ*_N, t_i, T, z*, and M_N(jω), respectively, when using Subalgorithm k ∈ {1, 2}.
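The example system can be assembled from the zero-pole-gain data of Table 4.2, and the relative-cost scan of Subalgorithm 2 ((4.10)-(4.12)) applied to it. A Python sketch: the signs of the poles and zeros are as reconstructed above, and the vector r_s, the frequency set Ω, and the time grid below are hypothetical stand-ins, not the values produced by the thesis's actual runs.

```python
import cmath

def make_G(zeros, poles, gain, b):
    """G(s) = G~(s) + 1/b from zero-pole-gain data as in Table 4.2."""
    def G(s):
        num, den = gain, 1.0
        for z in zeros:
            num *= (s - z)
        for p in poles:
            den *= (s - p)
        return num / den + 1.0 / b
    return G

def relative_cost(r_s, Omega, G, t):
    """Eq. (4.10): r_t(t) = r_s . Gamma(Omega, t), with Gamma as in (4.9)."""
    col = [1.0] + [(cmath.exp(-1j * w * t) * G(1j * w)).real for w in Omega]
    return sum(r * g for r, g in zip(r_s, col))

zeros = [7.423 + 2.115j, 7.423 - 2.115j, -2.040 + 2.725j, -2.040 - 2.725j]
poles = [-3.122 + 9.681j, -3.122 - 9.681j, -0.828 + 7.906j,
         -0.828 - 7.906j, -0.116 + 2.832j, -0.116 - 2.832j]
G = make_G(zeros, poles, 2.052, 6.467069)

# Hypothetical r_s and Omega; times with r_t(t) < 0 are candidates (4.11).
r_s, Omega = [0.0, 1.0], [2.832]
ts = [0.05 * k for k in range(1, 200)]
neg = [t for t in ts if relative_cost(r_s, Omega, G, t) < 0]
```

Since r_t(t) here oscillates like |G(jω)|cos(ωt − ∠G(jω)), the scan finds candidate times with negative relative cost, which is exactly the condition under which Theorem 4.19 guarantees improvement.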
Algorithm 1 first used Subalgorithm 1 with this G(jω), and it stopped at N = N̄^1 = 3 with ψ*^1_N < 0, thus not proving stability. (cf. Figure 4.1.)

Table 4.3: Programming parameters for the example G(s).

  ε:                                1e−6
  MatLab LP function:               linprog
  Simplex termination tolerance:    1e−8
  Largescale termination tolerance: 1e−10

Figure 4.1: ψ*_N for the example G(s).

Next, Subalgorithm 2 was run with the local minimum t_loc given by (4.13) instead of the global minimum t_min given by (4.12) (cf. §4.4.2). This simulation was initialized with the same times {t_1, t_2} as Subalgorithm 1, so that Subalgorithm 2 began by choosing a new t_{N+1} = t_3. The algorithm stopped at N = N̄^2 = 3 with ψ*^2_N > 0, thus proving stability. In other words, an improvement in the objective function occurs, i.e. ψ*_{N+1} > ψ*_N with ψ*_N given by (3.3), if and only if the subalgorithm chooses a t_{N+1} for which r_t(t_{N+1}) < 0. Here we see that the original Subalgorithm 1 (§3.2.3) makes the wrong choice, whereas the new Subalgorithm 2 (§4.2) performs correctly, as predicted by Theorems 4.17 and 4.19.

It is noteworthy to consider iteration N = 2, where each subalgorithm chooses a different t_{N+1}. (cf. Figure 4.2.) Here, Subalgorithm 1 chooses a t_{N+1} with relative cost r_t > 0, and so no improvement is possible, i.e. ψ*^1_3 = ψ*^1_2. However, Subalgorithm 2 chooses a t_{N+1} with relative cost r_t < 0, and so an improvement is made, i.e. ψ*^2_3 > ψ*^2_2. The Nyquist plots of Figures 4.3-4.6 confirm the result.

Figure 4.2: Relative cost for the example G(s).

Figure 4.3: Nyquist plot of M_N(jω)G(jω) for the example G(s) with the original Subalgorithm 1.

Figure 4.4: Nyquist plot of M_N(jω)G(jω) for the example G(s) with the original Subalgorithm 1 (zoomed in).

Figure 4.5: Nyquist plot of M_N(jω)G(jω) for the example G(s) with the new Subalgorithm 2.

Figure 4.6: Nyquist plot of M_N(jω)G(jω) for the example G(s) with the new Subalgorithm 2 (zoomed in).

Chapter 5
A Generalized Basis Set

Considering M_N(jω)
given by (3.2), it is observed that the basis set of M(jω) given by (2.31) used in Algorithm 1 (§3.2.2) is the set of impulses

  { δ(t − t_i) : t_i ∈ R, i ∈ N },   (5.1)

whose Fourier transform is { e^{−jωt_i} : t_i ∈ R, i ∈ N }. However, impulses are impractical to implement in the real world, and they lead to high-frequency dependence, as illustrated by the loops in Figures 4.3-4.6 for the example of §4.5. Fortunately, any basis set B can be used, so long as: (i) f is an element of Z given by (2.30), and (ii) every element of Z can be represented by elements of B. One such set is the set of decaying exponentials introduced by Chen and Wen [CW95]. This issue is explored further in this chapter.

5.1 Background

The discussion in this section is summarized in Table 5.1.

Table 5.1: Summary of the discussion on the background of the generalized basis set.

                      impulses                        decaying exponentials
  basis element       δ(t − t_i)                      e^{−t} t^i, t ≥ 0;  e^{t} t^i, t ≤ 0
  advantage 1         (infinite magnitude,            finite magnitude,
                      zero duration)                  non-zero duration
  advantage 2         (high-frequency dependence)     no high-frequency dependence
  Fourier transform   e^{−jωt_i}, i ∈ N               i!/(s+1)^{i+1}, −i!/(s−1)^{i+1}, i ∈ N⁺
  M_N(jω)             1 − Σ_{i=1}^N e^{−jωt_i} z_i    1 − Σ_{i=0}^N [a_i/(jω+1)^{i+1} − b_i/(jω−1)^{i+1}] i!
  constraint 1        z_i ≥ 0                         Σ_{i=0}^N a_i (−1)^i s^{2i} (s+1)^N (−s+1)^N ≥ 0,
                                                      Σ_{i=0}^N b_i s^{2i} (s+1)^N (−s+1)^N ≥ 0,
                                                      ∀ s = jω_1, ω_1 ∈ R
  constraint 2        Σ_{i=1}^N z_i ≤ 1               Σ_{i=0}^N (a_i + (−1)^i b_i) i! < 1

In [CW95], it is observed that the function f ∈ Z given by (2.30) can be expressed as f(t) = f_+(t) + f_−(t), where

  f_+(t) = { f(t), t ≥ 0;  0, t < 0 }, and
  f_−(t) = { 0, t ≥ 0;  f(t), t < 0 }.

In [CW95, Sze75], it is shown that for such an f_+, ∀ ε > 0, ∃ a = (a_0, a_1, …, a_N) ∈ R^{N+1} such that

  ∫_0^∞ | f_+(t) − Σ_{i=0}^N a_i e^{−t} t^{(i+ν)} | dt < ε,

where ν > −1. Replacing t by −t, the same result holds for f_−. In [CW95], it is proposed that, by choosing ν = 0, the following can be used as the basis for the approximation of M(jω) given by (2.31).
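Before stating the new basis formally, the transform pair underlying it (the Fourier-transform row of Table 5.1) can be checked numerically: the Fourier transform of t^i e^{−t} on t ≥ 0 is i!/(jω+1)^{i+1}. A Python sketch using a simple midpoint-rule quadrature (the truncation length and step count are arbitrary numerical choices):

```python
import cmath
import math

def ft_decaying_basis(i, w, T=60.0, n=60000):
    """Midpoint-rule approximation of the integral over t >= 0 of
    t^i e^{-t} e^{-j w t} dt, which should equal i! / (j w + 1)^(i+1)."""
    h = T / n
    acc = 0j
    for k in range(n):
        t = (k + 0.5) * h
        acc += (t ** i) * cmath.exp(-(1.0 + 1j * w) * t) * h
    return acc

num = ft_decaying_basis(2, 1.0)
ref = math.factorial(2) / (1j * 1.0 + 1.0) ** 3
```

The quadrature reproduces the closed form to within the discretization error, which supports using the rational functions 1/(s±1)^{i+1} as the frequency-domain basis below.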
  { e⁺_i(t), e⁻_i(t) : ∀ i ∈ N⁺ }, where   (5.2)

  e⁺_i(t) = { e^{−t} t^i, t ≥ 0;  0, t < 0 }, and
  e⁻_i(t) = { 0, t > 0;  e^{t} t^i, t ≤ 0 }.

Its Fourier transform is

  { i!/(s+1)^{i+1},  −i!/(s−1)^{i+1} : i ∈ N⁺ },

giving the basis set

  { 1/(s+1)^{i+1},  1/(s−1)^{i+1} : i ∈ N⁺ }.   (5.3)

As such, f(t) can then be approximated as

  f(t) = Σ_{i=0}^N ( a_i e⁺_i(t) + b_i e⁻_i(t) ),  a_i, b_i ∈ R.   (5.4)

Then the Nth-order approximation M_N of multiplier M ∈ M given by (2.27) is

  M_N(jω) = m_0 − Σ_{i=0}^N [ a_i/(jω+1)^{i+1} − b_i/(jω−1)^{i+1} ] i!,  a_i, b_i ∈ R.

Without loss of generality, we can set m_0 = 1, and the Nth-order approximation M_N of multiplier M ∈ M given by (2.29) is

  M_N(jω) = 1 − Σ_{i=0}^N [ a_i/(jω+1)^{i+1} − b_i/(jω−1)^{i+1} ] i!,  a_i, b_i ∈ R.   (5.5)

The multiplier class M with the constraint set Z given by (2.30) leads to constraints on {a_i, b_i}, as derived in [CW95] and restated as the following theorem.

Theorem 5.6 [CW95] A multiplier M_N is an element of M given by (2.29) subject to the Nth-order approximation given by (5.4) if and only if the following conditions hold for a_i, b_i ∈ R.

1. M_N(jω) = 1 − Σ_{i=0}^N [ a_i/(jω+1)^{i+1} − b_i/(jω−1)^{i+1} ] i!.

2. Σ_{i=0}^N (a_i + (−1)^i b_i) i! < 1.   (5.7)

3. Σ_{i=0}^N a_i (−1)^i s^{2i} (s+1)^N (−s+1)^N ≥ 0 and Σ_{i=0}^N b_i s^{2i} (s+1)^N (−s+1)^N ≥ 0, ∀ s = jω_1, ω_1 ∈ R.   (5.8)

□

If a multiplier satisfying the conditions of Theorem 5.6 can be found such that Re{M_N(jω)G(jω)} ≥ 0 ∀ω, then Theorem 2.32 implies that the closed-loop system of Figure 2.1 is stable. Theorem 5.6 leads to a new linear program for calculating M_N. Gapski and Geromel's Algorithm (Algorithm 1 (§3.2.2)) can still be applied with the modification that the dimensions for expanding the feasible region depend not on t_i ∈ T_N, but rather on i ∈ J_N = {1, …, N}. Subalgorithm 2 (§4.2) can also be used, but some modifications are necessary. Before applying Subalgorithm 2, the new linear program is described.

5.2 The Linear Program

In this section, the linear program to calculate a multiplier of Theorem 5.6 is described.
First note that, by a standard linear programming technique, the unconstrained variables a_i, b_i ∈ R can be constrained as

  a_i = a⁺_i − a⁻_i,  b_i = b⁺_i − b⁻_i,  a⁺_i, a⁻_i, b⁺_i, b⁻_i ≥ 0.

Define the vector z of 4N variables as

  z = [ a⁺_1 ⋯ a⁺_N  b⁺_1 ⋯ b⁺_N  a⁻_1 ⋯ a⁻_N  b⁻_1 ⋯ b⁻_N ]′ ∈ R^{4N}.   (5.9)

Define the set Z_N as

  Z_N = { z given by (5.9) such that (5.7) and (5.8) hold }.

Recall equations (3.4), (3.14), and (3.15), restated as follows.

  ψ_N(z) = min_{ω∈R} Re{M_N(jω)G(jω)},   (5.10)
  W(z) = {ω : ψ_N(z) = Re{M_N(jω)G(jω)}}, and   (5.11)
  W* = W(z*).   (5.12)

For a given z* ∈ Z_N and M_N given by (5.5), an element of the subgradient of (5.10) is

  ξ^l = [ (ξ^l_+)′ (ξ^l_−)′ ]′ ∈ R^{4N}, where   (5.13)
  ξ^l_+ = [ (ξ^l_{a+})′ (ξ^l_{b+})′ ]′,  ξ^l_− = [ (ξ^l_{a−})′ (ξ^l_{b−})′ ]′,
  ξ^l_{a+} = [ ξ^l_{a+,1} ⋯ ξ^l_{a+,N} ]′,   (5.14)
  ξ^l_{b+} = [ ξ^l_{b+,1} ⋯ ξ^l_{b+,N} ]′,   (5.15)
  ξ^l_{a−} = [ ξ^l_{a−,1} ⋯ ξ^l_{a−,N} ]′,   (5.16)
  ξ^l_{b−} = [ ξ^l_{b−,1} ⋯ ξ^l_{b−,N} ]′,   (5.17)
  ξ^l_{a+,i} = Re{ −i!/(jω_l+1)^{i+1} G(jω_l) },
  ξ^l_{b+,i} = Re{ +i!/(jω_l−1)^{i+1} G(jω_l) },
  ξ^l_{a−,i} = Re{ +i!/(jω_l+1)^{i+1} G(jω_l) },
  ξ^l_{b−,i} = Re{ −i!/(jω_l−1)^{i+1} G(jω_l) },
  ω_l ∈ W* given by (5.12).

Recall the LP given by (3.24) to be solved in Step 2b of Algorithm 1 (§3.2.2), restated as follows.

  σ^{L+1} = max_{z∈Z_N} { σ : σ ≤ ψ_N(z^l) + ⟨ξ^l, z − z^l⟩, l ∈ N_L },   (5.18)

where L is the number of elements of Ω defined in the statement of the algorithm. In standard form, this LP is expanded as follows.

  min_{x∈R^{4N+1}} cx  subject to:  Ax ≤ b,  x_{2:4N+1} ≥ 0_{4N},   (5.19)

where

  c = [ −1 0 ⋯ 0 ] ∈ R^{4N+1},
  x = [ σ z′ ]′ ∈ R^{4N+1},

  A = [  1  −(ξ^1_{a+})′  −(ξ^1_{b+})′  −(ξ^1_{a−})′  −(ξ^1_{b−})′ ;
         ⋮       ⋮             ⋮             ⋮             ⋮       ;
         1  −(ξ^L_{a+})′  −(ξ^L_{b+})′  −(ξ^L_{a−})′  −(ξ^L_{b−})′ ;
         0    (β^1_+)′       0′_N         (β^1_−)′       0′_N      ;
         ⋮       ⋮             ⋮             ⋮             ⋮       ;
         0    (β^L_+)′       0′_N         (β^L_−)′       0′_N      ;
         0      0′_N        (δ^1_+)′       0′_N         (δ^1_−)′   ;
         ⋮       ⋮             ⋮             ⋮             ⋮       ;
         0      0′_N        (δ^L_+)′       0′_N         (δ^L_−)′   ;
         0     α_{a+}        α_{b+}        α_{a−}        α_{b−}    ;
        −1      0′_N          0′_N          0′_N          0′_N     ]
    ∈ R^{(3L+2)×(4N+1)},

  b = [ Re{G(jω_1)} ⋯ Re{G(jω_L)}  0′_{2L}  1  1 ]′ ∈ R^{3L+2},
  z = given by (5.9),
  ξ^l_{a+}, ξ^l_{b+}, ξ^l_{a−}, ξ^l_{b−} = given by (5.14)-(5.17),
  β^l_{+,i} = −(−1)^i (jω_l)^{2i} (jω_l+1)^N (−jω_l+1)^N,
  β^l_{−,i} = +(−1)^i (jω_l)^{2i} (jω_l+1)^N (−jω_l+1)^N,
  δ^l_{+,i} = −(jω_l)^{2i} (jω_l+1)^N (−jω_l+1)^N,
  δ^l_{−,i} = +(jω_l)^{2i} (jω_l+1)^N (−jω_l+1)^N,
  α_{a+} = [ 1! ⋯ N! ],
  α_{b+} = [ (−1)^1 1! ⋯ (−1)^N N! ],
  α_{a−} = [ −1! ⋯ −N! ],
  α_{b−} = [ −(−1)^1 1! ⋯ −(−1)^N N! ],
  i ∈ N_N,  l ∈ N_L,  ω_l ∈ Ω defined in Algorithm 1 (§3.2.2).

The additional constraint of the (3L+2)nd row of A, that −σ ≤ 1, is to ensure that the linear program is not unbounded. Adding this constraint is valid because the stability criterion only requires that any M_N given by (5.5) such that ψ*_N ≥ 0 be found. The constraint σ ≥ 0 is omitted so as to make the linear program feasible. See Remark 4.15 for a discussion of this issue as it pertains to the original Subalgorithm 2 (§4.2). Having set up the LP, we now turn our attention to the application of Subalgorithm 2.

5.3 Application of Subalgorithm 2

Intuitively, it would appear that Step 3 of Algorithm 1 (§3.2.2) should iteratively select each index i in the basis set given by (5.2) in sequential order until a solution is reached. Formally,

Subalgorithm (Not used.)

Step 3a. Set J_{N+1} = J_N ∪ {N+1}, where J_N = {1, …, N}.   (5.20)

However, a similar problem may arise as with Gapski and Geromel's Subalgorithm 1 (§3.2.3), in that the chosen new indices may correspond to columns with nonnegative relative costs. In this case, Algorithm 1 might terminate prematurely, i.e. ψ*_{N+1} = ψ*_N, when another choice of indices would have improved the objective function, i.e. ψ*_{N+1} > ψ*_N. Some modifications of the LP given by (5.19) are in order, stated as follows.
$\min_{x \in \mathbb{R}^{4N+1}} cx$ subject to $Ax \leq b$, $\; x_{2:4N+1} \geq 0_{4N}$, (5.21)

where

$c = [\, -1 \;\; 0 \cdots 0 \,] \in \mathbb{R}^{1 \times (4N+1)}$, $\quad x = [\, \sigma \;\; z' \,]' \in \mathbb{R}^{4N+1}$,

$A \in \mathbb{R}^{(3L+2) \times (4N+1)}$ is the matrix (5.22) with the same row structure as in (5.19): rows $[\, 1 \;\; -l^{l}_{a+}{}' \;\; -l^{l}_{b+}{}' \;\; -l^{l}_{a-}{}' \;\; -l^{l}_{b-}{}' \,]$ for $l = 1, \ldots, L$; rows $[\, 0 \;\; \beta^{l}_{+}{}' \;\; 0_N' \;\; -\beta^{l}_{-}{}' \;\; 0_N' \,]$ and $[\, 0 \;\; 0_N' \;\; \gamma^{l}_{+}{}' \;\; 0_N' \;\; -\gamma^{l}_{-}{}' \,]$ for $l = 1, \ldots, L$; and final rows $[\, 0 \;\; \alpha_{a+} \;\; \alpha_{b+} \;\; \alpha_{a-} \;\; \alpha_{b-} \,]$ and $[\, 1 \;\; 0_{4N}' \,]$,

$b = [\, \operatorname{Re}\{G(j\omega_1)\} \cdots \operatorname{Re}\{G(j\omega_L)\} \;\; 0_{2L}' \;\; 1 \;\; 1 \,]' \in \mathbb{R}^{3L+2}$,

$z$ given by (5.9),

$l_{a+} = [\, l_{a+,1} \cdots l_{a+,N} \,]'$, $\quad l_{b+} = [\, l_{b+,1} \cdots l_{b+,N} \,]'$, $\quad l_{a-} = [\, l_{a-,1} \cdots l_{a-,N} \,]'$, $\quad l_{b-} = [\, l_{b-,1} \cdots l_{b-,N} \,]'$,

$l_{a+,i} = \operatorname{Re}\!\left\{ \dfrac{-\,I_i!}{(j\omega_l + 1)^{I_i+1}}\, G(j\omega_l) \right\}$, (5.23)

$l_{b+,i} = \operatorname{Re}\!\left\{ \dfrac{+\,I_i!}{(j\omega_l - 1)^{I_i+1}}\, G(j\omega_l) \right\}$, (5.24)

$l_{a-,i} = \operatorname{Re}\!\left\{ \dfrac{+\,I_i!}{(j\omega_l + 1)^{I_i+1}}\, G(j\omega_l) \right\}$, (5.25)

$l_{b-,i} = \operatorname{Re}\!\left\{ \dfrac{-\,I_i!}{(j\omega_l - 1)^{I_i+1}}\, G(j\omega_l) \right\}$, (5.26)

with $\beta^l_\pm$ and $\gamma^l_\pm$ stacked as before and

$\beta^l_{+,i} = \dfrac{(-1)^{I_i} (j\omega_l)^{2I_i}}{(j\omega_l + 1)^{\bar N} (-j\omega_l + 1)^{\bar N}}$, (5.27)

$\beta^l_{-,i} = \dfrac{(-1)^{I_i} (j\omega_l)^{2I_i}}{(j\omega_l + 1)^{\bar N} (-j\omega_l + 1)^{\bar N}}$, (5.28)

$\gamma^l_{+,i} = \dfrac{(j\omega_l)^{2I_i}}{(j\omega_l + 1)^{\bar N} (-j\omega_l + 1)^{\bar N}}$, (5.29)

$\gamma^l_{-,i} = \dfrac{(j\omega_l)^{2I_i}}{(j\omega_l + 1)^{\bar N} (-j\omega_l + 1)^{\bar N}}$, (5.30)

$\alpha_{a+} = [\, I_1! \cdots I_N! \,]$, (5.31)

$\alpha_{b+} = [\, (-1)^{I_1} I_1! \cdots (-1)^{I_N} I_N! \,]$, (5.32)

$\alpha_{a-} = [\, I_1! \cdots I_N! \,]$, (5.33)

$\alpha_{b-} = [\, (-1)^{I_1} I_1! \cdots (-1)^{I_N} I_N! \,]$, (5.34)

$i \in \mathbb{N}_N$, $\; l \in \mathbb{N}_L$, $\; I_i \in J_N$,

$J_N = \{\, I_1, \ldots, I_N : I_i \in \mathbb{N}^+ \,\}$, (5.35)

$\bar N = \max J_N$, (5.36)

$\omega_l \in \Omega$ defined in Algorithm 1 (§3.2.2).

Note that an element $I_i \in J_N$ given by (5.35) need not satisfy $I_i = i$, as was the case with (5.20). The constraint $\sigma \geq 0$ is omitted so as to make the linear program feasible. See Remark 4.15 for a discussion of this issue as it pertains to the original Subalgorithm 2 (§4.2).
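As a coding aid, the positive/negative variable splitting that underlies the stacked vector $z$ in (5.9), reused unchanged in (5.21), can be sketched as follows. This is a minimal Python illustration; the function names are ours, not the thesis's:

```python
def split(v):
    """Write a real v as v = v_plus - v_minus with both parts >= 0."""
    return (v, 0.0) if v >= 0 else (0.0, -v)

def build_z(a, b):
    """Stack [a+ ...; b+ ...; a- ...; b- ...] from signed lists a, b, as in (5.9)."""
    ap, am = zip(*(split(x) for x in a))
    bp, bm = zip(*(split(x) for x in b))
    return list(ap) + list(bp) + list(am) + list(bm)

def recover(z, N):
    """Invert the stacking: a_i = a_i^+ - a_i^-, b_i = b_i^+ - b_i^-."""
    a = [z[i] - z[2 * N + i] for i in range(N)]
    b = [z[N + i] - z[3 * N + i] for i in range(N)]
    return a, b
```

For example, `build_z([1.5, -2.0], [0.0, 3.0])` yields an 8-vector of non-negative entries from which `recover` reproduces the signed coefficients; this is exactly why the LP can impose $x_{2:4N+1} \geq 0_{4N}$ without restricting the original $a_i, b_i$.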
With these modifications, Algorithm 1 (§3.2.2) with Subalgorithm 2 (§4.2) can proceed at iteration $N$ with the generalized basis set given by (5.2), analogously to the case of the basis set of impulses given by (5.1). We call this generalized subalgorithm Subalgorithm 2-gen. It is formally stated as follows.

Subalgorithm 2-gen

Step 3a. From the last iteration of Step 2 of Algorithm 1, obtain $L$, $\Omega \in \mathbb{R}^L$, and $r_s \in \mathbb{R}^{3L+2}$ from the LP in Step 2b.¹

Step 3b. Given a set of $\hat N$ available indices $\hat J = \{ j_1, \ldots, j_{\hat N} \}$, let (5.37)

$I_1 = j_1, \;\ldots,\; I_{\hat N} = j_{\hat N}$.

Populate the $(3L+2) \times (4\hat N)$ subset of the $A$ matrix given by (5.22) composed of columns 2 through $(4\hat N + 1)$. Denote this matrix as $\hat A$.

Step 3c. Calculate the $(4\hat N)$-dimensional vector $r = r_s' \hat A = [\, r_1 \;\ldots\; r_{4\hat N} \,]$, and let

$\hat j_{N+1} = \{\, j \in \mathbb{N}_{4\hat N} : r_j < 0 \,\}$. (5.38)

Step 3d. If $\hat j_{N+1}$ is empty, then stop and go to Step 4. Else, set $J_{N+1} = J_N \cup \{ j_{N+1} \}$, where $j_{N+1} = j_k$ and

$k = \left( \operatorname{argmin}_{j \in \hat j_{N+1}} \{ r_j \} \right) \bmod \hat N$.

Set $\hat J \leftarrow \hat J \setminus \{ j_{N+1} \}$.

¹ $r_s$ can be obtained as an output of a standard linear programming function, such as MatLab's linprog function.

With these modifications, Subalgorithm 2-gen can be used with the generalized basis set given by (5.2) in the same way that Subalgorithm 2 can be used with the original basis set given by (5.1). The example of §5.5 will show that there are advantages to using the former over the latter. Before doing so, some remarks are in order.

5.4 Remarks

The remarks in this section pertain to the use of Subalgorithm 2-gen (§5.3) in Algorithm 1 with the generalized basis set given by (5.2). As these remarks are primarily useful for coding purposes, readers solely interested in theoretical discussions may wish to skip this section.

5.4.1 The mod Function

The mod function in Step 3d of Subalgorithm 2-gen (§5.3) recovers the original index into $\hat J$ of the minimum element of $r$.
Formally stated, with

$z = [\, z_1 \cdots z_{\hat N} \;\; z_{\hat N+1} \cdots z_{2\hat N} \;\; z_{2\hat N+1} \cdots z_{3\hat N} \;\; z_{3\hat N+1} \cdots z_{4\hat N} \,]' = [\, a_1^+ \cdots a_{\hat N}^+ \;\; b_1^+ \cdots b_{\hat N}^+ \;\; a_1^- \cdots a_{\hat N}^- \;\; b_1^- \cdots b_{\hat N}^- \,]' \in \mathbb{R}^{4\hat N}$,

we have $z_j \in \{ a_k^+, b_k^+, a_k^-, b_k^- \}$, where $j \in \mathbb{N}_{4\hat N}$ and $k = j \bmod \hat N \in \mathbb{N}_{\hat N}$.

Even with the mod function, Subalgorithm 2-gen still works, because selecting any column of $\hat A$ derived in Step 3b with a negative relative cost as the pivot column (cf. "The Simplex Method" of §4.1) results in an improvement in the objective function, i.e. $\sigma_{N+1} > \sigma_N$.

5.4.2 Usage of the Maximum Index

Note that $\bar N$ given by (5.36), rather than $N$, is used in $\beta^l_{+,i}$, $\beta^l_{-,i}$, $\gamma^l_{+,i}$, and $\gamma^l_{-,i}$ given by (5.27)-(5.30). If these terms were to depend on the total number of indices, $N$, then Subalgorithm 2-gen would not be valid for dimension $N+1$, as the vector $r_s$ of Step 3a would change. Any index $j_i \in \hat J$ given by (5.37) with $j_i < \bar N$ is valid for step $N+1$. Thus, to ensure that Subalgorithm 2-gen can be used for the entire set of available indices $\hat J$, it would be necessary to use $\bar N = \max \hat J$ rather than (5.36). In fact, it would be necessary to start with $J_1 = \{ \max \hat J \}$.

In practice, computational limitations prohibit solving an LP with factorial terms (cf. (5.23)-(5.26) and (5.31)-(5.34)) to high precision, due, for example, to sensitivity to rounding errors in $\omega_l$. As such, the magnitude of $\bar N$, and thus the number of elements of $\hat J$, is restricted. Simulations show that Algorithm 1 (§3.2.2) with Subalgorithm 2-gen (§5.3) still performs well with the generalized basis set given by (5.2). Refer to §5.5 for an example.

5.4.3 Advantages in Computational Complexity

In the case of the original basis set given by (5.1), which is a set of impulses, it is possible to calculate the scalar function $r_t(t)$ given by (4.10) to find $t_{N+1}$. However, with the generalized basis set given by (5.2), it may be necessary to calculate all $4\hat N$ elements of $r$ given in Step 3c before a negative relative cost is found.
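The relative-cost scan of Step 3c and the index recovery of Step 3d can be sketched together in code. This is a hedged Python illustration with hypothetical function names; it uses 0-based indexing, so `divmod` plays the role of the thesis's mod convention (where a result of 0 denotes $\hat N$):

```python
def reduced_costs(r_s, A_hat):
    """r = r_s' * A_hat: one relative cost per column of A_hat (Step 3c)."""
    return [sum(rs * a for rs, a in zip(r_s, col)) for col in zip(*A_hat)]

def select_index(r_s, A_hat, N_hat):
    """Return (k, block) for the most negative relative cost, or None
    if every cost is non-negative, in which case Algorithm 1 stops (Step 3d)."""
    r = reduced_costs(r_s, A_hat)
    negative = [j for j, rj in enumerate(r) if rj < 0]
    if not negative:
        return None
    j = min(negative, key=lambda j: r[j])   # argmin over negative costs
    block, k = divmod(j, N_hat)             # which of a+, b+, a-, b-, and where
    return k + 1, ("a+", "b+", "a-", "b-")[block]
```

Any column with a negative relative cost would do as a pivot, as noted above; taking the most negative one is simply the usual greedy simplex heuristic.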
This raises the question of how using Subalgorithm 2-gen (§5.3) to choose $j_{N+1}$ in Step 3d has any advantage over calculating the solution to one LP with all $\hat N$ elements of $\hat J$ given by (5.37). To answer this question, we examine the computational complexity of these two methods, formally stated as follows.

Method 1. Use Algorithm 1 (§3.2.2) with a limited number of elements from the set $\hat J$ of $\hat N$ elements. At the end of iteration $N$, use Subalgorithm 2-gen (§5.3) to select the next element $j_{N+1}$.

Method 2. Do not use Algorithm 1. Instead, solve the LP given by (5.22) using all $\hat N$ elements of $\hat J$.

To compare Methods 1 and 2, refer to Table 5.2 for a summary of the computational complexity of three common algorithms for solving LPs.

Table 5.2: Comparison of the computational complexity of three common algorithms for solving LPs with $n$ variables that can be encoded in $b$ input bits. Here, $O(\cdot)$ refers to the "order" of the problem.

  Simplex Algorithm: worst case $2^{O(n)}$ (exponential time) [DT97, DT03, PS98, Mur83]; typical/random problem $O(n^3)$ (cubic time) [DT97, DT03, Tod02].

  Ellipsoid Algorithm: $O(n^4 b)$ operations on $O(b)$ digits [Kha80].

  Interior Point Method (Projective Method): $O(n^{3.5} b)$ operations on $O(b)$ digits [Kar84].

Simulations with randomly generated transfer functions using the generalized basis set given by (5.2), performed in preparation for this thesis, typically reached solutions within $N_{typ} = 6$ or $N_{typ} = 7$ iterations. Thus, in a typical case, Method 1 has a computational complexity of

$L_{typ} \left[ O(1^3) + O(2^3) + \cdots + O(N_{typ}^3) \right] < L_{typ} \, N_{typ} \, O(N_{typ}^3) = L_{typ} \, O(N_{typ}^4)$,

while Method 2 has a computational complexity of

$L_{typ} \, O(\hat N^3)$,

where $L_{typ}$ is the typical number of constraints, $\hat N$ is the number of elements in the set $\hat J$ of available indices, and $O(\cdot)$ refers to the "order" of the complexity of the problem. Thus, when $\hat N$ is large, i.e.
$\hat N > N_{typ}^{4/3} = 7^{4/3} \approx 13$, Method 1 reaches a solution faster than Method 2 in the typical case.

5.5 Example

Consider the randomly generated stable transfer function $G(s)$ given by Table 5.3. The Nyquist plot of Figure 5.1 shows that $\min_{\omega \in \mathbb{R}} \operatorname{Re}\{G(j\omega)\} = -4.57 \times 10^{-3}$. Algorithm 1 was run with this $G(s)$ using Subalgorithm 2-gen (§5.3) and the parameters shown in Table 5.4. As shown in Figure 5.2, the simulation stopped at $N = 7$ with $\sigma_N = +7.83 \times 10^{-4}$. Figures 5.3 and 5.4 confirm that $\min_{\omega \in \mathbb{R}} \operatorname{Re}\{M_N(j\omega)G(j\omega)\} = +7.83 \times 10^{-4} > 0$, thus proving stability and confirming that Subalgorithm 2-gen can be used with the generalized basis set given by (5.2).

Table 5.2 [sic, 5.3]: Data for the example $G(s)$ using Subalgorithm 2-gen.

  $G(s) = \tilde G(s) + 1/b$
  $\tilde G(s)$ zeros: $+7.818 \pm 3.317i$
  $\tilde G(s)$ poles: $-2.788 \pm 7.865i$, $\; -2.209 \pm 4.301i$, $\; -3.462 \pm 0.117i$, $\; -0.963$
  $\tilde G(s)$ gain: $+3.975$
  $b$: $+500$

Figure 5.1: Nyquist plot for the example $G(s)$ with the generalized basis set.

Table 5.4: Programming parameters for the example $G(s)$ using the generalized basis set.

  $\varepsilon$: $1 \times 10^{-6}$
  MatLab LP function: linprog
  Simplex termination tolerance: $1 \times 10^{-6}$
  Set of available indices $\hat J$: $\{0, \ldots, 6\}$
  Initial set of indices $J_1$: $\{6\}$

Figure 5.2: $\sigma_N$ for the example $G(s)$ with the generalized basis set.

Figure 5.3: Nyquist plot of $M_N(j\omega)G(j\omega)$ for the example $G(s)$ with the generalized basis set.

Figure 5.4: Nyquist plot of $M_N(j\omega)G(j\omega)$ for the example $G(s)$ with the generalized basis set (zoomed in).

It should be noted that one of the main advantages of using the generalized basis set comprised of decaying exponentials given by (5.2), rather than the basis set comprised of impulses given by (5.1), is that high-frequency dependence is eliminated. This is demonstrated by the far smaller number of loops in the Nyquist plot of Figure 5.3 as compared to Figures 4.3 and 4.5. Furthermore, it is impossible to implement a true continuous-time impulse $\delta(t)$, with infinite magnitude and zero duration, in real life.
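The frequency sweep behind Figure 5.1 can be reproduced approximately in a few lines. The sketch below rebuilds $G(s)$ from the zero-pole-gain data of Table 5.3 (the negative signs of the pole real parts are inferred from the stated stability of $G$) and scans $\operatorname{Re}\{G(j\omega)\}$ over a finite log-spaced grid, which stands in for the continuum of frequencies. With this data the scanned minimum should come out small and negative, consistent with the $-4.57 \times 10^{-3}$ of Figure 5.1, though the exact value depends on the grid resolution:

```python
ZEROS = [7.818 + 3.317j, 7.818 - 3.317j]
POLES = [-2.788 + 7.865j, -2.788 - 7.865j,
         -2.209 + 4.301j, -2.209 - 4.301j,
         -3.462 + 0.117j, -3.462 - 0.117j,
         -0.963 + 0.0j]
GAIN, B = 3.975, 500.0

def G(s):
    """G(s) = G~(s) + 1/b, with G~(s) in zero-pole-gain form (Table 5.3)."""
    num = GAIN
    for z in ZEROS:
        num *= (s - z)
    den = 1.0 + 0.0j
    for p in POLES:
        den *= (s - p)
    return num / den + 1.0 / B

# Log-spaced grid over omega in [1e-2, 1e2]; a stand-in for min over all omega.
omegas = [10.0 ** (-2.0 + 4.0 * k / 4000) for k in range(4001)]
min_re = min(G(1j * w).real for w in omegas)
```

A full stability check would then build $M_N(j\omega)$ from the computed multiplier and repeat the same scan on $\operatorname{Re}\{M_N(j\omega)G(j\omega)\}$, as in Figures 5.3 and 5.4.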
Chapter 6

Conclusion

Graphical criteria for determining the stability of a system with a monotone nonlinear feedback element [Pop62, San64, Zam64, Zam66b, CN68] were generalized into the class of Zames-Falb multipliers [ZF68]. Safonov and Wyetzner treated the problem of finding the optimal multiplier as a large-scale linear program [Saf85, SW87]. Gapski and Geromel [GG94] devised an efficient algorithm, Algorithm 1 (§3.2.2), that starts with a small basis set and iteratively adds a basis element until a solution is reached. However, it was found that Algorithm 1 may terminate prematurely when using their Subalgorithm 1 (§3.2.3) for determining $t_{N+1}$, failing to find the optimal multiplier.

This thesis has proposed an alternative subalgorithm, Subalgorithm 2 (§4.2), that, barring degeneracy, always finds an ascent direction and a multiplier that improves the objective function whenever one exists, as shown by Corollary 4.21. Additionally, the claim in [GG94] that (4.22) can be used in the case of a single constraining frequency was proven by Theorem 4.26. The example of §4.5 showed that Subalgorithm 1 can cause Algorithm 1 to terminate prematurely at $\sigma < 0$ where Subalgorithm 2 causes it to terminate at $\sigma > 0$.

The basis set was then extended from a set of impulses given by (5.1) to a generalized basis set comprised of decaying exponentials given by (5.2) [CW95]. The example of §5.5 showed that Subalgorithm 2-gen still performs as expected, causing Algorithm 1 to terminate at $\sigma > 0$.

Bibliography

[Ber99] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, second edition, 1999.

[Bla77] Robert G. Bland. New finite pivoting rules for the simplex method. Mathematics of Operations Research, 2(2):103–107, May 1977.

[CN68] Yo-Sung Cho and Kumpati S. Narendra. An off-axis circle criterion for the stability of feedback systems with a monotonic nonlinearity. IEEE Transactions on Automatic Control, 13(4):413–416, August 1968.

[CW95] Xin Chen and John T. Wen.
Robustness analysis of LTI systems with structured incrementally sector bounded nonlinearities. In Proceedings of the American Control Conference, pages 3883–3887, Seattle, WA, June 1995.

[DT97] George B. Dantzig and Mukund N. Thapa. Linear Programming 1: Introduction. Springer-Verlag, New York, NY, 1997.

[DT03] George B. Dantzig and Mukund N. Thapa. Linear Programming 2: Theory and Extensions. Springer-Verlag, New York, NY, 2003.

[DV75] C. A. Desoer and M. Vidyasagar. Feedback Systems: Input-Output Properties. Academic Press, New York, NY, 1975.

[GG94] P. B. Gapski and J. C. Geromel. A convex approach to the absolute stability problem. IEEE Transactions on Automatic Control, 39(9):1929–1932, September 1994.

[HW07] W. P. Heath and A. G. Wills. Zames-Falb multipliers for quadratic programming. IEEE Transactions on Automatic Control, 52(10):1948–1951, October 2007.

[Kar84] Narendra Karmarkar. A new polynomial time algorithm for linear programming. Combinatorica, 4(4):373–395, 1984.

[Kel60] J. E. Kelley, Jr. The cutting-plane method for solving convex programs. Journal of SIAM, 8(4):703–712, December 1960.

[Kha80] Leonid G. Khachiyan. Polynomial algorithms in linear programming. USSR Computational Mathematics and Mathematical Physics, 20(1):53–72, 1980.

[Kha02] Hassan K. Khalil. Nonlinear Systems. Prentice Hall, 2002.

[KS02] V. V. Kulkarni and M. G. Safonov. All multipliers for repeated monotone nonlinearities. IEEE Transactions on Automatic Control, 47(7):1209–1212, 2002.

[Las70] Leon S. Lasdon. Optimization Theory for Large Systems. Macmillan, New York, NY, 1970.

[LY08] David G. Luenberger and Yinyu Ye. Linear and Nonlinear Programming. Springer Science+Business Media, LLC, New York, NY, third edition, 2008.

[Man03] Ricardo Mancera. Multipliers for MIMO Nonlinearities: Theory and Algorithms. PhD thesis, University of Southern California, Los Angeles, CA 90089, USA, May 2003.

[MS05] R. Mancera and M. Safonov.
All stability multipliers for repeated MIMO nonlinearities. Systems and Control Letters, 54:389–397, 2005.

[Mur83] Katta G. Murty. Linear Programming. Wiley, New York, NY, 1983.

[NT73] K. S. Narendra and J. H. Taylor. Frequency Domain Criteria for Absolute Stability. Academic Press, New York, NY, 1973.

[Pop62] V. M. Popov. Absolute stability of nonlinear systems of automatic control. Automation and Remote Control, 22:857–875, 1962.

[PS98] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover Publications, Mineola, NY, 1998.

[Saf85] Michael G. Safonov. Optimization of multipliers for nonlinear input-output stability tests. Proceedings of the American Control Conference, pages 1585–1587, June 1985.

[San64] I. W. Sandberg. A frequency-domain condition for the stability of feedback systems containing a single time-varying nonlinear element. Bell System Technical Journal, 43(4):1601–1608, July 1964.

[SK00] M. G. Safonov and V. V. Kulkarni. Zames-Falb multipliers for MIMO nonlinearities. International Journal of Robust and Nonlinear Control, 10(11/12):1025–1038, 2000.

[SW87] Michael G. Safonov and Gary Wyetzner. Computer-aided stability analysis renders Popov criterion obsolete. IEEE Transactions on Automatic Control, AC-32(12):1128–1131, December 1987.

[Sze75] Gábor Szegő. Orthogonal Polynomials. American Mathematical Society, Providence, RI, 1975.

[Tod02] Michael J. Todd. The many facets of linear programming. Mathematical Programming, 91(3), February 2002.

[Wil71] J. C. Willems. The Analysis of Feedback Systems. MIT Press, Cambridge, MA, 1971.

[Zam64] George Zames. On the stability of nonlinear, time-varying feedback systems. Proceedings of the National Electronics Conference, 20:725–730, October 1964.

[Zam66a] George Zames. On the input-output stability of time-varying nonlinear feedback systems, Part I: Conditions derived using concepts of loop gain, conicity, and positivity.
IEEE Transactions on Automatic Control, 11(2):228–238, April 1966.

[Zam66b] George Zames. On the input-output stability of time-varying nonlinear feedback systems, Part II: Conditions involving circles in the frequency plane and sector nonlinearities. IEEE Transactions on Automatic Control, AC-11(3):465–476, July 1966.

[ZF68] George Zames and Peter Falb. Stability conditions for systems with monotone and slope restricted nonlinearities. SIAM Journal on Control, 6:89–108, February 1968.
Abstract
The search for Zames-Falb multipliers is a method by which the stability of a system with a monotone nonlinear feedback can be determined. An initial algorithm for computing these multipliers proposed by Safonov and Wyetzner uses a large set of basis elements and is thus computationally intensive. Later, Gapski and Geromel proposed a new algorithm which treats the problem as an optimization problem and uses a small set of basis elements. However, it is found that occasionally their algorithm terminates prematurely and fails to find the optimal multiplier. This thesis proposes an improvement that always finds an ascent direction and a multiplier that improves the cost function whenever one exists. Application of the new procedure for calculating Zames-Falb multipliers is further extended from the case of a basis set comprised of impulses to the generalized basis set proposed by Chen and Wen comprised of decaying exponentials.
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m3532