New Approaches in Modeling and Control of Dynamical Systems
by
Prasanth Babu Koganti
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(CIVIL ENGINEERING)
December 2015
Copyright 2015 Prasanth Babu Koganti
Acknowledgements
The completion of this thesis would not have been possible without the help and contributions
of several friends, colleagues and well-wishers. I would like to take a moment to thank them
from the bottom of my heart. Although this thesis bears only my name on it, major credit should
go to Prof. Udwadia. I do not even know where to start listing his contributions. He has a very
intuitive understanding of the field of mechanics, and with his keen eye, he was able to see
potential in what seemed to me to be useless threads of research and hence not worth pursuing.
As it turns out, almost this entire thesis is made up of ideas that resulted from pursuing those ‘unworthy’ threads. Without his guidance, I would have been just bouncing from idea to idea and probably never been able to complete this thesis. I am really grateful to him for giving me the
opportunity to work with him and learn from him.
Prof. Sacker has made mathematics fun and simple for me with his teaching style consisting of
always giving geometric constructions for mathematical proofs. This line of thinking has really
helped me whenever I had to prove a mathematical result. I would like to thank him and Prof.
Ketan Savla for being a part of my thesis committee. I would also like to thank Prof. Ketan Savla for asking me some interesting questions during the departmental seminar and pointing out possible
improvements he could see.
I really appreciate all the help I received from Prof. Erik Johnson. Throughout my studies, he
helped me understand the department procedures, took a keen interest in my research, and gave me invaluable comments about my research and presentation skills. I am deeply indebted to Prof. Vincent Lee and Prof. Mike Safonov for serving on my qualifying committee and giving me
useful feedback.
I would like to thank Sumo, Hancheol, Harsha for discussions about my research and sometimes
cross-checking my work, giving me feedback and all other help. Charan has been my roommate
for the most part of my PhD. He took care of me when I was sick or feeling down, discussed
research ideas with me and has been very patient with me. I would also like to thank
Ramakrishna, Yadi for critiquing my work and giving me useful suggestions.
The staff members of the Civil Engineering department have facilitated my life at USC. Big
thanks go to them. I am also very grateful for the generous support I received from the USC Graduate School in the form of a Provost’s Fellowship.
I would like to thank my teachers who taught me and encouraged me to learn more throughout
my life. I should especially mention the contributions of Prof. C S Manohar, who sparked my interest in dynamics and encouraged me to pursue a PhD. Prof. J M Chandra Kishen and Dr. Debraj Ghosh not only taught me very important courses during my masters, but they also helped me immensely in securing admission to the PhD program. I am indebted to them.
No amount of words can describe my gratitude towards my family. My dad and my sisters have
supported me in every possible way through the past 5 years. They gave me the freedom to
pursue my dreams. They let me travel far in the pursuit of my ambitions but always kept me
close to their hearts. I would not have been where I am today without their sacrifices.
Table of Contents
Abstract
Chapter 1. Introduction
1.1 Motivation and Background
1.2 Literature Review
1.3 Organization
Chapter 2. Stable Control of Nonlinear Mechanical System
2.1 Introduction
2.2 Main Result
2.3 Numerical Examples
2.4 Conclusions
Chapter 3. Stable Control of Nonlinear Dynamical System
3.1 Introduction
3.2 General Dynamical System
3.3 Application to Linear Systems
3.4 Consistent (V, w) Pairs for Mechanical Systems
3.5 Conclusions
Chapter 4. Stable Control of Mechanical System with Constraint on Maximum Control Effort
4.1 Introduction
4.2 Main Result
4.3 Numerical Examples
4.4 Conclusions
Chapter 5. Decentralized Control of Mechanical Systems
5.1 Introduction
5.2 Decentralized Control
5.3 Numerical Examples
5.4 Conclusions
Chapter 6. Dynamics and Control of a Multi-body Planar Pendulum
6.1 Introduction
6.2 Derivation of the General Equations of Motion of an n-body Pendulum
6.3 Control of an n-body Damped Planar Pendulum
6.4 Control of the System Under Uncertainties
6.5 Conclusions
Chapter 7. Unified Approach to Modeling and Control of Multi-body Mechanical Systems
7.1 Introduction
7.2 Description of Constraints
7.3 Fundamental Equation of Motion
7.4 Inconsistent Constraints
7.5 Consistent Constraints
7.6 Geometric Explanation of the Control Approach
7.7 Numerical Examples
7.8 Conclusions
Chapter 8. Conclusions
8.1 Conclusions
8.2 Scope for Further Work
Appendix A
Appendix B
Appendix C
Appendix D
References
Abstract
The current thesis deals with new approaches in the modeling and control of highly nonlinear,
nonautonomous dynamical systems. In the field of Analytical Dynamics, the “fundamental
equation of motion” developed by Udwadia and Kalaba has been traditionally used to derive the
equations of motion that describe a mechanical system modeled using certain modeling
constraints. One way of looking at constrained motion is that nature applies the ‘control force’
necessary to satisfy the (modeling) constraints. The fundamental equation of motion gives an
explicit expression to compute the exact force required so the system satisfies the modeling
constraints. Recently, an alternate use has been developed for it in the control of mechanical
systems, where it is applied to obtain the control force necessary to satisfy certain types of
control objectives. As long as the control objectives can be formally expressed as constraints
which are linear in accelerations, the fundamental equation of motion can be used to obtain the
necessary control force. In the current work, an approach using Lyapunov’s theorem as the vehicle to synthesize constraints that, when satisfied, ensure global asymptotic stability of a dynamical system has been explored. Several refinements have been proposed to this approach that add more practical value. These refinements include limitations on the maximum
control effort, non-full state control, decentralized control of complex dynamical systems, and
control under uncertainty. In addition, the current work also proposes a unifying framework for
modeling and control of dynamical systems in which control requirements (constraints) are
present in addition to the physical/modeling constraints that are needed to describe the physical
system appropriately.
Chapter 1. Introduction
1.1 Motivation and Background
A common problem studied in the control of nonlinear dynamical systems is designing a control
that can ensure that the controlled system has an asymptotic equilibrium point at the origin (i.e
the zero state). The usefulness of studying such a problem is enhanced by the fact that we can
stabilize the system at any other desired state by coordinate transformation so that the origin is at
the desired state in the new coordinate frame. Two important goals that need to be achieved in
designing such a control are (i) the stability of the controlled system and (ii) the optimality as
measured by a user-specified cost function. While there are standard methods for linear systems that also guarantee stability, for which the specified cost function is usually quadratic in the state variables and control input, there are no such standard methods for nonlinear dynamical systems that guarantee both stability and optimality. Since it is very difficult to find a
method that works well for all nonlinear dynamical systems, there are a large number of
techniques available for designing nonlinear control, out of which one or another may be more
suited to a particular problem at hand. So usually the choice of the design methodology is based
on the specific dynamical system for which the control is synthesized. Here, we propose a fresh
take on this classical problem, which is quite different from the existing well-trodden and recent
methods, some of which are reviewed in the next section. Our approach relies on combining
some recent results in the field of Analytical Dynamics with Lyapunov stability theory.
The design of control for nonlinear systems gets more complicated when there are additional
requirements that the controller needs to meet like when the control resources are limited or
when control cannot be applied to all degrees of freedom of the system. Every practical control
system has limits on the resources (like power, current, etc.) that it can draw upon. So it is
important to estimate the maximum control required for the system to be controllable. We also
often come across under-actuated mechanical systems in practice with examples including
cranes, robotic arms, autonomous under-water vehicles, etc. This work presents some ways in
which the stable control proposed here can be extended to such problems. More work still needs
to be done in this area, and we point out where improvements can be made in Chapter 8.
A different type of complexity arises for large complex systems where the system is controlled
by several controllers acting locally with little or no information being shared between the
controllers. One local controller may not have the knowledge of the state of another subsystem
“operated on” by a different local controller. Recently, a lot of research interest has been
generated in this field with applications in the area of formation flight of unmanned aerial
vehicles, decentralized control of large scale power systems, decentralized thermal control of
buildings etc. But most of the research that has been conducted thus far focuses on linear
systems, and today few, if any, methodologies are available that work in practical situations in
which nonlinear, nonautonomous systems need to be decentrally controlled. We propose an
approach that combines the ideas of stable control of nonlinear systems presented here with a
generalized sliding surface controller to design decentralized control for large-scale nonlinear
systems.
The control of pendulum systems, stabilizing them in an inverted position, has been at the center of attention of the scientific community for a long time. The reason is that these systems can be thought of as generic approximations of several real-life mechanical systems such as robotic
manipulators, robot arms, biped robots, multi-body systems, etc. In addition, they are highly
unstable, highly nonlinear and can pose quite a few challenges in control design. For this reason,
control theorists typically use pendulum systems as testing grounds to develop, test, validate, and
compare various control approaches. Since pendulum systems are mechanical systems described
by second order nonlinear differential equations, the control techniques developed for them can
be successfully applied to other mechanical systems such as aerospace systems, attitude control
of satellites, robotic manipulators, and so on. We have applied the control methodology
developed here to control a chain pendulum system in various ‘inverted’ configurations. The
system studied here is not composed of straight links but of links of arbitrary shapes. We are not aware of any previous attempts in the literature to model or control such systems with links of arbitrary shapes, though their presence is usually the norm rather than the exception in the real-life systems that these pendulum systems are usually meant to generically approximate. We have
obtained the Lagrange’s equations of motion for the multi-pendulum system using a novel, yet
simple, recursive approach based on mathematical induction. We also look at the control of such
a system both when its description is accurately known, and when it is only imprecisely known
with (bounded) estimates on its uncertain description.
In several approaches used in the area of control design for mechanical systems, control
constraints are synthesized that satisfy the control objectives when enforced. Then, these
constraints are enforced on the system through the application of (generalized) control forces.
While this has been done successfully when no physical constraints are present, it becomes a
challenging task when there are already some physical/modeling constraints present that the
system needs to satisfy. In general, it is usually hard to synthesize control constraints such that
they are consistent with the modeling constraints. Recently, there have been attempts to solve the
problem by using simple control constraints but relaxing the requirement that they be satisfied at
all times. We propose an approach that satisfies the control requirements (constraints) in the least
square sense while never violating the modeling constraints; a violation of the modeling
constraints would be tantamount to using an incorrect description of the physical system that
needs to be controlled.
1.2 Literature Review
Before we present our work, we review some of the popular recent techniques usually adopted
for controlling nonlinear dynamical systems.
Feedback linearization is a method in which a nonlinear coordinate transformation transforms the
nonlinear system into a linear system in new coordinates. Then, the standard LQ methods for
linear systems can be used to design control for the system in new coordinates. This method
applies only to the class of so-called feedback linearizable systems [1].
Sliding mode control guarantees stability of a nonlinear dynamical system by driving the system
to a reduced order space called a sliding surface, in which the system is asymptotically stable.
Typically, the controller has a signum function in it. Thus, control switches between various
continuous control regimes depending on the sign of the sliding variables. This high frequency
switching leads to the so called “chattering problem” and reduces the life and performance of
actuators, etc. It might, and often does, also cause instability by exciting unmodeled dynamics in
real-life systems. This chattering problem is usually resolved by replacing the signum function
with a saturation function with a high slope at the origin [2-4].
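As a point of reference (this is standard sliding-mode practice and not a construction specific to this thesis): if s denotes the sliding variable and $\phi > 0$ the boundary-layer thickness, the switching term $\mathrm{sign}(s)$ is replaced by the saturation function $\mathrm{sat}(s/\phi)$, where $\mathrm{sat}(y) = y$ for $|y| \le 1$ and $\mathrm{sat}(y) = \mathrm{sign}(y)$ otherwise, so that the control varies continuously (with slope $1/\phi$) inside the boundary layer $|s| \le \phi$ instead of switching discontinuously across $s = 0$.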
Sontag proved a converse theorem which states that if a nonlinear dynamical system is stabilizable, then there exists a Control Lyapunov Function (CLF) for the system [5]. A Control Lyapunov Function V for a dynamical system whose equation in the presence of the control is given as $\dot{x} = f(x,t) + g(x,t)u(x,t)$ is defined as a proper positive definite function that satisfies (i) $\frac{\partial V}{\partial x}g = 0,\ x \neq 0 \Rightarrow \frac{\partial V}{\partial x}f < 0$, and (ii) for any $\varepsilon > 0$ there exists a $\delta > 0$ such that if $x \neq 0$ and $\|x\| < \delta$, there exists a u with $\|u\| < \varepsilon$ such that $\frac{\partial V}{\partial x}(f + gu) < 0$ [1, 6]. The second property above is called the “small control property”. If a control Lyapunov function is given, Sontag has given an explicit formula to obtain a control that will ensure the stability of the system. The control thus obtained is discontinuous. There are certain classes of problems for which CLFs can be easily found, such as feedback linearizable systems. This method has been further expanded to minimize the $L_{2}$ norm of the control in Refs. [7-8].
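For reference, the explicit formula mentioned above is usually quoted in the following form, stated here for a time-invariant system $\dot{x} = f(x) + g(x)u$ with $L_{f}V = \frac{\partial V}{\partial x}f$ and $L_{g}V = \frac{\partial V}{\partial x}g$; this statement is taken from the standard literature rather than from the present text:
$u(x) = -\dfrac{L_{f}V + \sqrt{(L_{f}V)^{2} + \|L_{g}V\|^{4}}}{\|L_{g}V\|^{2}}\,(L_{g}V)^{T}$ when $L_{g}V \neq 0$, and $u(x) = 0$ when $L_{g}V = 0$.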
All the methods discussed so far ensure stability of a dynamical system with little or no consideration of the optimality of the control. At the other end of the spectrum, there are methods that try to
obtain control inputs by solving an optimization problem. A cost function that depends on the
state and control input is defined. Then, the cost function needs to be minimized subject to the
constraint that the state variables satisfy the system equation. There are usually two approaches
followed to solve this optimization problem. One is by taking the route of dynamic programming
and trying to solve the Hamilton-Jacobi-Bellman (HJB) equations and the other is by using
Pontryagin’s minimum principle. It’s extremely hard to find closed form solutions to these
problems. Hence, numerical solutions are often computed online. Due to computational difficulties, only solutions that guarantee optimality over a finite time horizon can be computed. However, while optimal solutions for infinite time horizons may guarantee stability of the system, the finite time horizon solutions do not guarantee stability. Moreover, they also do not guarantee optimality over an infinite time horizon.
In addition to the above mentioned methods, there also have been numerous attempts at
obtaining controllers for particular applications. For example, see references [9-14]. The usual
approach taken is to propose a controller, often based on experience, or heuristic grounds and
then prove its stability using Lyapunov’s second theorem. Although there are some existing
methods and guidelines both to obtain a controller and to prove its stability, it’s often time
consuming to search for a Lyapunov function to prove the stability.
In contrast to the above mentioned methods, recent advances in the control of nonlinear,
nonautonomous mechanical systems have led to the development of a new perspective in which
control requirements are viewed as control constraints that are imposed on these systems. Instead
of the use of conventional control theory, this approach relies on recent results from the field of analytical dynamics through the use of the fundamental equation of mechanics [15-16]. The control forces required to enforce these control constraints are obtained in closed form without
the need for any linearizations/approximations of the dynamical systems involved or the need for
imposing any a priori structure on the nature of the controller [17-18]. These control forces are
readily computable making real time control of complex nonlinear, nonautonomous dynamical
systems possible. For nonlinear, nonautonomous systems whose dynamical models are assumed
to be known and that permit full-state control, this approach ensures that the control
requirements are exactly satisfied while simultaneously minimizing a user-specified quadratic
control cost at each instant of time [17-19]. For example, the tracking control of a set of slave
gyroscopes that exhibit chaotic motions so that each slave exactly tracks the chaotic motions of a
master gyroscope, while simultaneously minimizing a quadratic control cost, is demonstrated in
Ref. [20]. The same approach is further used to obtain the closed form tracking control of a
cluster of nonlinearly coupled slave-gyros so they exactly track the chaotic motions of a master-
gyro [21]. Applications to the control of systems with complex and highly nonlinear dynamics
such as the formation flight of spacecraft in non-uniform gravity fields illustrate the simplicity
and effectiveness of the closed form approach [22-24]. Current extensions to dynamical systems
subjected to (generalized) forces that are only imprecisely known and/or to systems whose
description is only imprecisely known can be found in Ref. [25-28].
We take the above mentioned perspective to control of nonlinear dynamic systems in which
control objectives are viewed as control constraints that are then enforced on the system. The
control methodology proposed here uses Lyapunov’s stability theorem to generate the constraints
that ensure asymptotic stability of the system. This eliminates the need to search for a Lyapunov
function to prove the stability of the controller. The control we propose is also optimal in the sense that a suitable norm of the control effort is minimized at each instant of time. It should be noted that we minimize the cost function only at each instant of time, which is not the usual choice; cost functions are usually defined as an integral over time. So the method we propose does not guarantee that the integral of a cost function over time is minimized. It should also be noted that nature operates by minimizing the Gaussian (which can be thought of as a cost function) at each instant of time. The derivation of the control forces presented here closely follows the derivation of the fundamental equation presented by Udwadia et al. in [15-16, 29].
Another problem that has received wide interest in the research community recently is the design of decentralized control. The objective here is to stabilize a conglomerate system consisting of mutually coupled subsystems, applying control forces on each subsystem based only on local information. By local information, we mean only the states of the subsystem on which the controller is acting. In engineering applications, we often come across complex systems
whose dynamics are coupled together. Often, information about the state of the entire system
may not be available, or if available it would be so enormous as to prevent real time control
because of data gathering and information processing overheads. Thus, for large complex
systems, one is often constrained to using only the ‘locally available’ information about each
subsystem that comprises the entire interacting conglomerate in order to control the entire
conglomerate in a desired manner. Such problems of decentralized control arise in numerous
fields where large complex systems are involved such as in process control, formation flight of
multiple UAV's, project management, and in the analysis of economic and social systems, to
name a few. Thus, decentralized control design is an important problem when dealing with large
complex systems, and the development of methods to ensure effective and efficient control of an
interconnected, conglomerate nonlinear, nonautonomous dynamical system through the control
of each of its composite subsystems by using only information that is locally available has
become a topic of intensive research in recent years.
Since the literature in this area is enormous we cite only a few relevant papers. Wang et al. [30]
applied decentralized control techniques to reduce the response of a building subjected to earthquakes. Fallah et al. [31] applied decentralized control to reduce the response of a cable-stayed bridge under seismic loads. Lu et al. [32] investigated the application of
sliding mode control in control of a building using MR dampers. They have also given the
damper configuration required for decentralized control. Stipanović et al. and Stanković et al.
[33-34] have done significant work on decentralized overlapping control of formations of
unmanned vehicles. Most of the above-mentioned work uses linear system models, and tries to
minimize a cost function that is quadratic in the state variable and in the control cost, and is
integrated over time. They achieve decentralized control by applying a constraint over the
structure of the control gain matrix. Sandell et al. [35] note that linear problems are the ones
mostly studied, since nonlinear feedback control theory is not developed nearly as far even for
the case of centralized control. Witsenhausen [35-36] formulated a very simple counterexample
problem that shows that the performance of linear feedback control is inferior, compared to
nonlinear feedback control strategies, when full information is not shared between the
subsystems. The methodology developed in the current work differs from the above-mentioned
approaches in the sense that: (i) the system is not assumed to be linear (ii) a linear structure is not
imposed on the controller, and (iii) rather than minimizing the integral of a cost over the duration over which the control is effected, the cost is minimized at each instant of time.
The obvious difficulty in designing localized, or decentralized, control is that we have limited
information regarding the global state of the combined system. This, in particular, raises
considerable issues about the stability of the entire controlled system when it is controlled by
decentralized controllers, each of which does not have information about the entire state of the
system and the behavior of any of the other controllers. In this work, we show that such a control
design is possible, and can be simply effected.
In what follows, we shall refer to the mechanical system that we want to control as the ‘actual
system’. We develop the control design in two steps. In the first step, we define a ‘nominal
system’ which is an imaginary system that doesn’t exist in reality, but is an approximation of the
real life ‘actual system’ in some sense. The nominal system consists of ‘nominal subsystems’
whose equations of motion can be independently integrated. We obtain the control forces to be
applied to this nominal system, so the controlled nominal system has an asymptotically stable
equilibrium at the origin. Closed-form controllers are obtained which use user-prescribed
positive definite functions defined over local domains. These control forces are computed in such
a way that user-prescribed cost functions are also simultaneously minimized at each instant of
time. The control of each subsystem is done so that stability of the conglomerate system is
assured from the manner in which the subsystems are controlled. One of the main advantages of
doing this is that we don’t need to search for a Lyapunov function to ensure the stability of the
entire system under the decentralized control scheme developed herein.
In the second step, we design additional compensating controllers that ensure that each
controlled actual subsystem tracks the trajectory of the corresponding nominal subsystem to
within prespecified error bounds [25, 37]. Since the nominal subsystem satisfies the control
objective, this ensures that each controlled actual subsystem satisfies the same. The additional
controllers required for each subsystem are designed using the concept of generalized sliding
surfaces. This gives us an additional advantage that the controlled actual system is robust to
uncertainties. A limitation of this approach is that, we need a bound on the mismatch between the
nominal and the actual subsystems. This can be overcome by having a very crude estimate of this
bound, and then multiplying it by a suitable factor based on the extent of conservatism desired,
since overestimating this bound does not have a significant impact on the magnitude of
additional (generalized) control forces needed to compensate for our lack of knowledge of the
bound on the mismatch.
The derivation of the explicit equations of motion for a multi-body planar pendulum, though
straight-forward, can get rather complex when dealing with a pendulum consisting of more than
four bodies. For example, Refs. [38] and [39] that deal with the development of such equations
for an inverted planar pendulum spend several pages deriving the equations of motion, which
are then employed for either a two-link or three-link pendulum. Refs. [40] and [41] derive the
equations of motion of a similar system—an inverted pendulum with a follower force; they show
the considerable degree of algebraic complexity in getting the general equations of motion. Ref.
[42] presents an algorithm to generate the equations of motion for robotic manipulators
automatically using a computer program. The Newton-Euler formalism is used to obtain
equations of motion instead of the Lagrangian formalism. However, the resulting method is still
quite cumbersome as seen from the application of this approach to just a double pendulum
system [42].
All the equations of motion obtained to date deal with bodies composed of straight links. In
Chapter 6, we present an explicit set of equations for a more general situation wherein the center
of mass of each body (in the N-body pendulum) may lie at a point that is not on the line joining
the hinges in that body. The derivation of the general equation is short and simple and relies on
an approach based on mathematical induction on the number of bodies in the pendulum system.
We are unaware of the use of this sort of approach being applied to get the Lagrange equations of
motion for any N-body system with this level of complexity. The equations are then extended to
include pendulum systems with nonlinear (and linear) damping. The pendulum system whose
description is known precisely is controlled using the optimal, stable control methodology
proposed earlier whereas the system with norm-bounded uncertainty in its description is
controlled using the two step methodology used in the case of decentralized control. The
pendulum system case can be more complex, since uncertainty in the knowledge of the masses can cause significant problems in control design. Hence, the sliding mode controller used to compensate for uncertainty in the case of decentralized control has been suitably modified to make it applicable here.
Despite the variety of problems that the general control methodology proposed here has been
able to handle, its application to large-scale complex dynamical systems requires that it be able
to simultaneously handle both modeling constraints and control requirements in an effective
manner. Most large-scale, complex nonlinear systems are modelled through the use of modeling
constraints because their use significantly simplifies the task of the modeler [43]. The modelling
constraints are essential for providing a proper mathematical model of the physical
mechanical system at hand and they must be satisfied at all times in order to maintain the
integrity of system’s physical description. When such systems are further subjected to control
requirements, such as, when required to follow specific trajectories (as in trajectory tracking),
these control requirements, as stated above, can also be interpreted as constraints on the
mechanical system--but with a difference. For, were the control constraints not to be exactly
satisfied, this would naturally lead to an inadequate satisfaction of the control requirements,
resulting in poor control. But if the modeling constraints are violated, the integrity of the
physical description of the mechanical system is jeopardized, since now one is not dealing with
the proper model of the physical system.
Traditionally, when modeling constraints are present in addition to control requirements, control
has been designed under simplified assumptions by simplifying/approximating the dynamical
system and/or imposing structure on the controller. For example in [44], the constrained motion
of a biped robot is tackled by linearizing the system about an operating point and assuming the
modeling constraints to be holonomic. Under these simplified assumptions linear control has
been obtained for the nonlinear system by pole placement techniques. In [45], a geometric
approach inspired by Refs. [46] and [47] has been presented for analysis and control of
constrained mechanical systems. The proposed control is of the PID type, and with proper choices of gain matrices, the tracking error and the deviation of the control force from the desired control force are driven to zero asymptotically. A different approach would be to generate
feasible trajectories (that satisfy the modeling constraints) in real-time as presented in [48] and
then design a suitable feedback controller such that the system is stabilized along this reference
trajectory.
In Chapter 7, a unified approach for modeling and control is developed so that when the control
constraints (requirements) are consistent with the modeling constraints both sets of constraints
can be (exactly) satisfied simultaneously and the physical system will precisely meet the control
requirements while simultaneously minimizing a user-prescribed control cost. But, when
inconsistent, satisfaction of the modeling constraints is always enforced (since these constraints
pertain to the proper description of the physical system) at the expense of not exactly satisfying
the control constraints (requirements), while still minimizing a user-desired control cost.
The task of providing such a unified approach was first introduced in a landmark paper by
Schutte [49]. Motivated by the general result for constrained systems that do not satisfy
d’Alembert’s principle [50], Schutte developed an approach for modeling and controlling
mechanical systems with general holonomic and nonholonomic constraints. Ref. [50] assumes
that control constraints are not consistent with the modeling constraints. First, control forces are
obtained that enforce the control constraints and minimize the Gaussian at each instant of time.
Next, these control forces are projected into the space of forces which produce accelerations
consistent with the modeling constraints. These control forces are referred to in Ref. [49] as
permissible forces since their addition does not violate the modeling constraints. However, as
stated in in Ref. [49], the method does not guarantee that the control requirements will be
ultimately met.
The approach proposed here has the advantage that it is more generally applicable. Specifically,
it can be used to obtain the control forces irrespective of the consistency of the constraints. So
the user need not check for consistency to choose whether to use the fundamental equation of
motion [15-16] or the approach in Ref. [49]. In addition, the control forces that enforce the
control constraints also minimize a user-prescribed quadratic control cost at each instant of time.
As explained later on, the control cost function could be the same as the Gaussian or different.
When the constraints are consistent, the approach (a) minimizes the user-prescribed control cost,
(b) makes sure that the modeling constraints are exactly satisfied, and (c) the control
requirements are exactly satisfied. When the constraints are inconsistent, (a) the norm of
deviation of the system from the set of control requirements is minimized along with the
minimization of a user-prescribed control cost. When consistent, it is shown that all the
requirements, that is the modelling and control constraints, are exactly satisfied even when the
cost functions being minimized are different in enforcing the two different sets of constraints.
1.3 Organization
This document is organized as follows.
In Chapter 2, a control strategy is presented for the stable control of nonlinear nonautonomous
mechanical systems. The controller is obtained by applying a constraint that requires that a user
prescribed candidate Lyapunov function, V, decrease along the trajectories of the controlled
system at a rate prescribed by another user prescribed positive definite function, w. The approach
is inspired by the recent advances in the theory of constrained motion and doesn’t make use of
concepts from variational calculus or LQR control theory. The derived controller ensures the
stability of the system, while also minimizing a user specified control cost at each instant of time.
In Chapter 3, we extend this approach to the control of general nonlinear dynamical systems
described by a set of first order nonlinear, nonautonomous differential equations. The
consistency condition that is required for the approach to work is discussed in detail and the
effect of not satisfying this condition is demonstrated using numerical examples. It is also shown
that the current approach is capable of reproducing the LQR result for linear dynamic systems
through a proper choice of the positive definite function pair and the control cost.
In Chapter 4, we consider the case, when the maximum control force that can be applied on the
nonlinear dynamical system is limited. We derive a controller that enforces the stability
requirement (constraint) as much as possible (as explained later) while not exceeding the limit
on the maximum available control force. We also provide an estimate for the control force
required to ensure the stability of the system.
In Chapter 5, we propose a novel approach to implement decentralized control of a conglomerate
mechanical system. We do this in two steps. In the first step, we define a nominal system that is
uncoupled. We compute the nominal control force required to stabilize this nominal system using
positive definite functions defined over local domains, using the approach proposed in Chapter 2.
In the second step, we use the concept of generalized sliding surfaces to derive an additional
control force that ensures that the controlled actual system tracks the trajectories of the nominal
system within a user specified error tolerance, thus ensuring the stability of the actual system.
In Chapter 6, we apply the control approach we have presented in Chapter 2 to a multi-body
planar pendulum system. We show that the equations of motion can be derived in a simple way
using a recursive approach based on mathematical induction by only looking at the effect of adding the ‘(N+1)th’ body to a system consisting of N bodies. The approach does away with
exponential blow-up of terms in Lagrange’s equations as the number of bodies in the pendulum
system increases. We also show that this highly nonlinear system can be controlled when the
system is only imprecisely known in its description with a given norm bounded uncertainty.
In Chapter 7, we propose a unified approach to enforce control requirements (constraints) on a
mechanical system in the presence of modeling constraints. We consider all possible scenarios
involving whether the control constraints are consistent or inconsistent with the modeling
constraints, and whether the control objective is the same as or different from the Gaussian. We show that
if the modeling constraints and the control requirements (constraints) are consistent, then the
proposed methodology provides the exact control force that ensures that the controlled system
satisfies both these types of constraints simultaneously. If they are inconsistent, then the
modeling constraints are exactly satisfied as required for a proper description of the physical
system, and simultaneously the norm of the error in enforcing the control requirements
(constraints) is minimized.
In Chapter 8, we conclude with a brief summary and a discussion of unresolved issues that need
further investigation.
Chapter 2. Stable Control of Nonlinear Mechanical Systems
2.1 Introduction
In this chapter a novel methodology is proposed to control a nonlinear mechanical system, in
such a way that the system has an asymptotically stable equilibrium point at the origin. In the
presented approach, positive definite functions specified by the user are used to compute the
control forces that ensure the stability of the system. The control forces are computed so that a
suitable control cost specified by the user is minimized at each instant of time. The stability of
the controlled system is ensured by the way these controllers are designed, eliminating the
burden of searching for Lyapunov functions to prove the system's stability.
Consider a mechanical system whose equation of motion is given by
$M(x,t)\,\ddot{x} = F(x,\dot{x},t)$.  (2.1)
We shall assume that Eq. (2.1) is defined over the domain $D \times \mathbb{R}$, where $D \subset \mathbb{R}^{n} \times \mathbb{R}^{n}$. $M(x,t)$ is the n by n symmetric, positive definite mass matrix of the mechanical system, and $F(x,\dot{x},t)$ is the n-dimensional force vector acting on it. We shall assume that M and F are at least $C^{1}$ functions of their arguments.
The equation of motion of the system, in the presence of the control force $Q^{C}(x,\dot{x},t)$, is given as
$M(x,t)\,\ddot{x} = F(x,\dot{x},t) + Q^{C}(x,\dot{x},t)$.  (2.2)
The control objective is to drive this system to $x = \dot{x} = 0$, and also to minimize a suitable control cost specified in the form of a norm at each instant of time. Let the control cost be given as
$J(t) = Q^{C}(x,\dot{x},t)^{T}\, N(x,\dot{x},t)\, Q^{C}(x,\dot{x},t)$,  (2.3)
where $N(x,\dot{x},t)$ is a symmetric, positive definite matrix.
Consider a Lyapunov function $V(x,\dot{x},t)$ such that
1. $V_{L}(x,\dot{x}) \le V(x,\dot{x},t) \le V_{U}(x,\dot{x})$,  (2.4)
where $V_{L}(x,\dot{x})$ and $V_{U}(x,\dot{x})$ are positive definite functions on the domain D, and
2. $\dot{V} = \frac{\partial V}{\partial x}\dot{x} + \frac{\partial V}{\partial \dot{x}}\ddot{x} + \frac{\partial V}{\partial t} \le -w(x,\dot{x})$  (2.5)
in D, where $w(x,\dot{x})$ is a positive definite function in D. We assume that we are given a candidate Lyapunov function $V(x,\dot{x},t)$ and a positive definite function $w(x,\dot{x})$. Any controller that causes the dynamics of the controlled system described by Eq. (2.2) to satisfy condition (2.5) for a candidate Lyapunov function satisfying Eq. (2.4) ensures that the controlled system has an asymptotically stable equilibrium at $x = \dot{x} = 0$ [1, 51-54]. Equation (2.5) can alternatively be written as
$\frac{\partial V}{\partial \dot{x}}\,\ddot{x} = -w(x,\dot{x}) - \frac{\partial V}{\partial x}\dot{x} - \frac{\partial V}{\partial t}$.  (2.6)
To ensure that the dynamics of the controlled system satisfy Eq. (2.5), we impose a constraint of
the form given in Eq. (2.6). Thus, we want to calculate the control force required to impose this
constraint, while minimizing the control cost given in Eq. (2.3). The derivation of this control force closely resembles the derivation of the fundamental equation of motion [15-16] for a constrained system.
2.2 Main Result
From here on, we suppress the arguments of various quantities unless required for clarity. Before
we state the main result of this chapter, let us introduce the following notation. Let us define,
$A := \frac{\partial V}{\partial \dot{x}}$,  (2.7)
$b := -w - \frac{\partial V}{\partial x}\dot{x} - \frac{\partial V}{\partial t}$.  (2.8)
With these definitions, the constraint equation (2.6) can be written more elegantly as
$A\,\ddot{x} = b$.  (2.9)
Result: The control force required to impose the constraint in Eq. (2.9), which simultaneously minimizes the control cost in Eq. (2.3), thus ensuring that the controlled system in Eq. (2.2) has an asymptotically stable equilibrium at $x = \dot{x} = 0$, is explicitly given as
$Q^{C}(x,\dot{x},t) = N^{-1/2}\,G^{+}\left(b - AM^{-1}F\right)$,  (2.10)
where
$G := AM^{-1}N^{-1/2}$.  (2.11)
Proof: Using the equation of motion of the controlled system given in Eq. (2.2) in the constraint
equation, Eq. (2.9), we obtain
$AM^{-1}Q^{C} = b - AM^{-1}F$.  (2.12)
Observing the form of the control cost given in Eq. (2.3), let us define a new quantity z as
$z := N^{1/2}Q^{C}$.  (2.13)
Thus, the control cost given in Eq. (2.3) becomes
$J(t) = z^{T}z$.  (2.14)
Using this newly defined quantity, Eq. (2.12) can be rewritten as
$AM^{-1}N^{-1/2}z = b - AM^{-1}F$.  (2.15)
Noting the definition of G in Eq. (2.11), Eq. (2.15) can be compactly written as
$Gz = b - AM^{-1}F$.  (2.16)
Thus we desire to find a vector z that satisfies Eq. (2.16) and simultaneously minimizes the cost function given in Eq. (2.14) at each instant of time. Such a z is obtained as [15-16]
$z = G^{+}\left(b - AM^{-1}F\right)$.  (2.17)
Throughout the thesis, the symbol ‘+’ in the superscript denotes the Moore-Penrose pseudoinverse. Thus, the control force is explicitly obtained as
$Q^{C} = N^{-1/2}G^{+}\left(b - AM^{-1}F\right)$.  (2.18)
Observing that G is a row vector, we can further simplify Eq. (2.18) as
$Q^{C} = N^{-1/2}\frac{G^{T}}{GG^{T}}\left(b - AM^{-1}F\right) = \frac{N^{-1}M^{-1}A^{T}}{AM^{-1}N^{-1}M^{-1}A^{T}}\left(b - AM^{-1}F\right)$.  (2.19)
It should be noted that for this entire control scheme to work, the constraint equation Eq. (2.9)
needs to be consistent (i.e. at least one solution to Eq. (2.9) must exist) at all instants of time.
Since A is a row vector, this equation is always consistent if we can ensure that b goes to zero
whenever A becomes zero. A detailed discussion of the consistency is provided in Section 3.2.2.
In Section 3.4, a class of positive definite function pairs (V, w) which ensure that Eq. (2.9) is always consistent is provided.
To summarize, in this chapter we have shown a simple approach to control mechanical systems such that a user-defined control cost is minimized at each instant of time and the controlled system has an asymptotically stable equilibrium point at the origin. The control methodology relies on Lyapunov’s theory to provide a suitable control constraint, and on the fundamental equation of motion to provide the control force that enforces the Lyapunov constraint and minimizes the control cost at each instant of time.
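To make the closed-form nature of Eq. (2.19) concrete, the short sketch below evaluates the control force at a single instant of time for an arbitrary mechanical system. It is a minimal illustrative implementation written for this transcript (in Python/NumPy rather than the Matlab environment used later in Section 2.3), with made-up inputs; it assumes the consistency requirement discussed above holds, so that the scalar denominator does not vanish.

```python
import numpy as np

def control_force(M, F, A, b, N):
    """Eq. (2.19): Q^C = N^-1 M^-1 A^T (b - A M^-1 F) / (A M^-1 N^-1 M^-1 A^T).
    M, N : (n, n) symmetric positive definite matrices
    F    : (n,) force vector acting on the uncontrolled system
    A    : (n,) row vector dV/d(xdot) at the current state
    b    : scalar -w - (dV/dx) xdot - dV/dt at the current state
    """
    Minv, Ninv = np.linalg.inv(M), np.linalg.inv(N)
    num = Ninv @ Minv @ A             # N^-1 M^-1 A^T (A is 1-D, so no explicit transpose)
    den = A @ Minv @ Ninv @ Minv @ A  # scalar A M^-1 N^-1 M^-1 A^T
    return num * (b - A @ Minv @ F) / den

# Illustrative values only: verify that the controlled acceleration satisfies A xddot = b.
M = np.diag([1.0, 2.0, 1.5])
N = np.linalg.inv(M)                  # the cost weighting used later in Section 2.3
F = np.array([0.3, -0.1, 0.2])
A = np.array([0.5, 1.0, -0.2])        # stands in for dV/d(xdot) at this instant
b = -0.7
Qc = control_force(M, F, A, b, N)
xddot = np.linalg.solve(M, F + Qc)
assert np.isclose(A @ xddot, b)
```

With the particular choice $N = M^{-1}$ made later, this expression collapses to the simpler form used in the numerical examples.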
In the subsequent chapters, various approaches to deal with more complex systems are provided
that build on this basic result.
2.3 Numerical Examples
To show the efficacy of our control methodology and the ease in its implementation, we consider
two illustrative examples of nonlinear, nonautonomous mechanical systems. All the
computations are done in the Matlab environment. We use the ODE15s package to perform the numerical integration, with a relative error tolerance of $10^{-8}$ and an absolute error tolerance of $10^{-12}$. We use a radially unbounded positive definite function of the form
$V = \tfrac{1}{2}a_{1}x^{T}x + \tfrac{1}{2}a_{2}\dot{x}^{T}\dot{x} + a_{12}x^{T}\dot{x}$,  $w = \alpha V$,  (2.20)
where
$a_{12} = \frac{\alpha a_{2}}{2}$.  (2.21)
As shown later, this choice of positive definite functions ensures the consistency of the constraint equation. It is to be noted that with this choice of the positive definite function w, the constraint equation takes the form
$\dot{V} = -\alpha V$.  (2.22)
Thus, the control renders the system exponentially stable, and the rate of convergence to the equilibrium point can be altered by choosing an appropriate value for $\alpha$. Using these positive definite functions,
$A = \frac{\partial V}{\partial \dot{x}} = a_{2}\dot{x}^{T} + a_{12}x^{T}$.  (2.23)
Also, we choose the positive definite matrix N used in the definition of the control cost as $N = M^{-1}$. Then the equation for the control force can be simplified as
$Q^{C} = \frac{A^{T}}{AM^{-1}A^{T}}\left(b - AM^{-1}F\right)$.  (2.24)
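For completeness (this algebraic step is ours and is not spelled out in the original): with $N = M^{-1}$, the numerator of Eq. (2.19) becomes $N^{-1}M^{-1}A^{T} = MM^{-1}A^{T} = A^{T}$ and the scalar denominator becomes $AM^{-1}N^{-1}M^{-1}A^{T} = AM^{-1}A^{T}$, which is exactly Eq. (2.24).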
Example 1:
For our first example, let us consider a nonlinear, nonautonomous mechanical system whose equation of motion is
$M\ddot{x} = F := -Kx - K_{nl}(x)$.  (2.25)
In the above equation, $x = [x_{1}, x_{2}, x_{3}]^{T} \in \mathbb{R}^{3}$ is the displacement 3-vector, and $K_{nl}(x)$ is a nonlinear stiffness term chosen as $K_{nl}(x) = [x_{1}^{3}x_{2},\ x_{2}^{3}x_{3},\ x_{3}^{3}x_{1}]^{T}$. M is the 3 by 3 symmetric, positive definite, time-dependent mass matrix
$M(t) = \mathrm{diag}\left(\frac{t+1}{t+2},\ 2\,\frac{t+2}{t+3},\ 1.5\,\frac{2t+1}{t+1}\right)$,
and K is the 3 by 3 symmetric stiffness matrix
$K = \begin{bmatrix} 100 & 100 & 0 \\ 100 & 150 & 50 \\ 0 & 50 & 100 \end{bmatrix}$.
We choose the initial conditions as $x(0) = [1,\ 2,\ 1]^{T}$ and $\dot{x}(0) = [2,\ 3,\ 0]^{T}$.
On this system, we apply a control force explicitly computed using Eq. (2.24). Let us use the
positive definite functions defined in Eq. (2.20) with parameters $a_{1} = 1$, $a_{2} = 8$, $a_{12} = 1$, and $\alpha = 1/4$. The equation of motion of the controlled system is
$M\ddot{x} = F + Q^{C} = -Kx - K_{nl}(x) + Q^{C}$.  (2.26)
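As a reproducibility aid, the following is a compact simulation sketch of this controlled system. It is not the author's code (the thesis reports using Matlab's ode15s); it is written in Python/SciPy, uses the system data exactly as reconstructed above (so any transcription uncertainty in M(t) or K carries over), and implements the controller of Eq. (2.24) with the (V, w) pair of Eq. (2.20).

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a12, alpha = 1.0, 8.0, 1.0, 0.25           # parameters of Eq. (2.20) for Example 1
K = np.array([[100.0, 100.0,   0.0],
              [100.0, 150.0,  50.0],
              [  0.0,  50.0, 100.0]])

def M(t):
    # time-varying diagonal mass matrix of Example 1, as reconstructed above
    return np.diag([(t + 1)/(t + 2), 2*(t + 2)/(t + 3), 1.5*(2*t + 1)/(t + 1)])

def Knl(x):
    return np.array([x[0]**3 * x[1], x[1]**3 * x[2], x[2]**3 * x[0]])

def rhs(t, y):
    x, xd = y[:3], y[3:]
    F = -K @ x - Knl(x)                             # uncontrolled force, Eq. (2.25)
    V = 0.5*a1*(x @ x) + 0.5*a2*(xd @ xd) + a12*(x @ xd)
    A = a2*xd + a12*x                               # dV/d(xdot), Eq. (2.23)
    b = -alpha*V - (a1*x + a12*xd) @ xd             # -w - (dV/dx) xdot  (V has no explicit t)
    Minv = np.linalg.inv(M(t))
    Qc = A * (b - A @ Minv @ F) / (A @ Minv @ A)    # Eq. (2.24), with N = M^{-1}
    return np.concatenate([xd, Minv @ (F + Qc)])

y0 = np.array([1.0, 2.0, 1.0, 2.0, 3.0, 0.0])       # x(0) and xdot(0)
sol = solve_ivp(rhs, (0.0, 40.0), y0, method="LSODA", rtol=1e-8, atol=1e-12)
print(sol.y[:3, -1])                                # displacements decay toward zero (cf. Figure 1)
```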
Figure 1 shows the time history of displacement response of the controlled system for the first 40
seconds. We can see the asymptotic convergence of the response to zero.
Figure 1. Displacement history of the controlled system.
Figure 2 shows the projection of the phase portrait of the controlled system onto the $x_{1}$-$\dot{x}_{1}$, $x_{2}$-$\dot{x}_{2}$, and $x_{3}$-$\dot{x}_{3}$ planes. Initial positions are indicated by square markers. The positions at the end of 40 seconds of integration are marked by circular markers. In the plots, we can observe the system asymptotically going to zero.
Figure 2. Projection of the phase portrait on the $x_{1}$-$\dot{x}_{1}$, $x_{2}$-$\dot{x}_{2}$, and $x_{3}$-$\dot{x}_{3}$ planes.
The control forces computed explicitly using Eq. (2.24) are shown in Figure 3. These control
forces ensure the stability of the system, while also minimizing the control cost in Eq. (2.3).
Figure 3. Control force $Q^{C}$ calculated using Eq. (2.24).
Figure 4(a) shows the variation of the Lyapunov function V with time. We can observe an exponential decay in the value of V, as expected because of the constraint in Eq. (2.22). Figure 4(b) shows the error in enforcing the constraint,
$e(t) := \dot{V} + \alpha V = AM^{-1}\left(F + Q^{C}\right) - b$.  (2.27)
Figure 4. (a) Variation of the Lyapunov function V with time. (b) Error in satisfying the constraint e(t) given in Eq. (2.27).
Example 2:
Let us consider a mechanical system described by
$M\ddot{x} = -Kx - S(x) - C(\dot{x})$,  (2.28)
where $S(x)$ is a cubic stiffness term defined as
$S(x) := \begin{bmatrix} 1 & 2 & 0 \\ 2 & 3 & 1 \\ 0 & 1 & 2 \end{bmatrix}\begin{bmatrix} x_{1}^{3} \\ x_{2}^{3} \\ x_{3}^{3} \end{bmatrix}$,
and $C(\dot{x})$ is a nonlinear damping term defined as
$C(\dot{x}) := [\dot{x}_{1}\dot{x}_{2},\ \dot{x}_{2}\dot{x}_{3},\ \dot{x}_{3}\dot{x}_{1}]^{T}$.
The mass matrix is taken as
$M(t) = \mathrm{diag}\left(2\,\frac{t+1}{t+2},\ \frac{t+3}{t+2},\ \frac{2t+1}{t+2}\right)$,
and the linear stiffness matrix K is chosen to be
$K = \begin{bmatrix} 100 & 100 & 0 \\ 100 & 200 & 100 \\ 0 & 100 & 100 \end{bmatrix}$.
We choose the initial conditions as $x(0) = [2,\ 2,\ 1]^{T}$ and $\dot{x}(0) = [2,\ 1,\ 3]^{T}$.
A control force calculated using Eq. (2.24) is applied to this system. The parameters in the
definition of positive definite functions in Eq. (2.20) are chosen as $a_{1} = 1$, $a_{2} = 6$, $a_{12} = 1$, and $\alpha = 1/3$. The equation of motion of the controlled system is
$M\ddot{x} = -Kx - S(x) - C(\dot{x}) + Q^{C}$.  (2.29)
This equation is numerically integrated for 40 seconds and the displacement response of the
system is plotted as a function of time in Figure 5.
Figure 5. Displacement history of the controlled system.
Figure 6 shows the projection of the phase portrait of the controlled system onto the $x_{1}$-$\dot{x}_{1}$, $x_{2}$-$\dot{x}_{2}$, and $x_{3}$-$\dot{x}_{3}$ planes. Initial positions are indicated by square markers. The positions at the end of 40 seconds of integration are marked by circular markers. In the plots, we can verify that the system asymptotically goes to zero.
Figure 6. Projection of the phase portrait on the $x_{1}$-$\dot{x}_{1}$, $x_{2}$-$\dot{x}_{2}$, and $x_{3}$-$\dot{x}_{3}$ planes.
The control forces explicitly computed using Eq. (2.24) are shown in Figure 7. These control
forces ensure the stability of the system, while also minimizing the control cost in Eq. (2.3).
Figure 7. Control force $Q^{C}$ calculated using Eq. (2.24).
Figure 8(a) shows the variation of the Lyapunov function V with time. We can observe an exponential decay in the value of V, as expected because of the constraint in Eq. (2.22). Figure 8(b) shows the error in enforcing the constraint e(t), computed using Eq. (2.27).
Figure 8. (a) Variation of the Lyapunov function V with time. (b) Error in satisfying the constraint e(t) given in Eq. (2.27).
2.4 Conclusions
In this chapter, we have proposed a new methodology for obtaining sets of stable controllers for
nonlinear, non-autonomous mechanical systems. We have shown that it is possible to reverse the
current philosophy of first designing a controller and then checking for its stability, by first
choosing a candidate Lyapunov function V and a positive definite function w, and obtaining in
closed-form a nonlinear controller using these functions so that: (i) the candidate Lyapunov
function is indeed the Lyapunov function for the controlled system, thus ensuring stability, and
(ii) the controller minimizes at each instant of time a user-prescribed control cost. Depending on
the Lyapunov function V chosen, the positive definite function w used, the weighting matrix N
desired by the user, and the specific parameter values involved in these three entities, one can
obtain, in closed form, sets of stable controllers for a given mechanical system. These controllers
are not only easy to obtain but they can also save considerable effort in searching for Lyapunov
functions; they simultaneously minimize a user-prescribed control cost. The applicability of this
approach appears to be quite general, since no approximations/linearizations are made regarding
the mechanical system and no structure is imposed on the controller. The examples show the
simplicity and efficacy of the approach.
Chapter 3. Stable Control of Nonlinear Dynamical System
3.1 Introduction
In this chapter, the control methodology proposed in Chapter 2 is extended to general dynamical
systems described by a set of first order differential equations. Controllers are derived that ensure that the dynamical system has an asymptotically stable equilibrium point at the zero state. First, a candidate Lyapunov function and a positive definite function dictating the rate of decay of the Lyapunov function are chosen, and then a set of nonlinear controllers is obtained in closed form such that the dynamics ensure that the candidate Lyapunov function is indeed a Lyapunov function for the system. Once again, no linearizations and/or approximations of the dynamical system are made, and no a priori structure is imposed on the controllers. The consistency condition for the positive definite function pair, mentioned in passing in Chapter 2, is discussed in more detail. A family of positive definite function pairs that satisfy this consistency condition is provided for mechanical systems.
In Section 3.2, the control is derived for general nonlinear non-autonomous dynamical systems described by a set of first order differential equations. The consistency condition, which can be used to check whether a given pair of positive definite functions (V, w) can be used with the current method to produce stable control, is also formalized. Numerical examples are provided that demonstrate the efficacy of the method and the ease and simplicity of its application; they also demonstrate the use of the consistency condition to check positive definite function pairs before using them with the method. In Section 3.3, it is shown that LQR control can be derived for linear systems using the current approach through the use of a particular positive definite function pair (V, w). In Section 3.4, a family of positive definite function pairs (V, w) is provided that satisfies the consistency condition for mechanical systems and hence can be used to obtain optimal stable control.
3.2 General dynamical systems
3.2.1 Main Result
Consider a general dynamical system described by the first order ordinary differential equation
$$\dot x = f(x,t) + B(x,t)\,u(x,t), \qquad (3.1)$$
where $f: D\times[0,\infty) \to R^n$, $u: D\times[0,\infty) \to R^p$, and $B: D\times[0,\infty) \to R^{n\times p}$ are all piecewise continuous in $t$ and locally Lipschitz on $D\times[0,\infty)$, and $D \subset R^n$ is a domain that contains the origin. In addition, $B(x,t)$ and $u(x,t)$ are assumed to be bounded on $D\times[0,\infty)$. In the above, $u(x,t)$ is the control input vector, and $f$ and $B$ are known quantities. The system has a uniformly asymptotically stable equilibrium point at $x = 0$ if the dynamics ensure that there exists a continuously differentiable positive definite Lyapunov function $V(x,t)$ such that $V_L(x) \le V(x,t) \le V_U(x)$, where $V_L(x)$ and $V_U(x)$ are continuous positive definite functions on $D$, and a positive definite function $w(x)$ on $D$, such that the derivative of $V(x,t)$ along the trajectories of Eq. (3.1) satisfies the relation [1, 51-54]
$$\dot V(x,t) := \frac{dV(x,t)}{dt} \le -w(x). \qquad (3.2)$$
Enforcing relation (3.2) with the equality sign and writing it in expanded form gives
$$\frac{\partial V}{\partial x}\dot x + \frac{\partial V}{\partial t} = -w(x). \qquad (3.3)$$
Let us denote
$$A := \frac{\partial V}{\partial x} \qquad \text{and} \qquad b := -w(x) - \frac{\partial V}{\partial t}. \qquad (3.4)$$
Then Eq. (3.3) can be expressed concisely as
$$A(x,t)\,\dot x = b(x,t). \qquad (3.5)$$
The problem at hand is the following. Given a positive definite (candidate Lyapunov) function $V(x,t)$ and a positive definite function $w(x)$, we want to devise a control input $u(x,t)$ such that the controlled system described by Eq. (3.1) satisfies relationship (3.2) and simultaneously the control cost
$$J(t) = u(x,t)^T N(x,t)\, u(x,t) \qquad (3.6)$$
is minimized at each instant of time. The matrix $N(x,t)$ in Eq. (3.6) is a user-prescribed positive definite matrix. For brevity, in what follows, we shall suppress the arguments of the various quantities unless required for clarity.
Result 1: The control input that ensures that the controlled system satisfies relation (3.2), while simultaneously minimizing the control cost given in Eq. (3.6) at each instant of time $t$, is given by
$$u = N^{-1/2} G^{+}(b - Af) = \frac{N^{-1}B^T A^T}{A B N^{-1} B^T A^T}\,(b - Af). \qquad (3.7)$$
In the above equation,
$$G := A B N^{-1/2}, \qquad (3.8)$$
where $A$ and $b$ are defined in Eq. (3.4). The matrix $G^{+}$ in Eq. (3.7) is the Moore-Penrose inverse of the matrix $G$.
Proof: Substituting Eq. (3.1) in Eq. (3.5), we have
$$A B u = b - A f. \qquad (3.9)$$
For a control $u$ to exist that makes the controlled system satisfy the Lyapunov constraint (3.5), it is necessary and sufficient that Eq. (3.9) have at least one solution at every instant of time. This is called the consistency condition, which is discussed further in Section 3.2.2.
Define $z(x,t)$ as
$$z(x,t) = N^{1/2}(x,t)\,u(x,t). \qquad (3.10)$$
Then the control cost is given by
$$J(t) = \lVert z(t) \rVert_2^2, \qquad (3.11)$$
where $\lVert\cdot\rVert_2$ is the Euclidean norm. Therefore, in order to minimize $J(t)$, we have to minimize $\lVert z \rVert_2$.
Noting Eq. (3.10), Eq. (3.9) can be rewritten as
$$A B N^{-1/2} z = b - A f. \qquad (3.12)$$
The vector $z$ that solves Eq. (3.12) while minimizing $\lVert z \rVert_2$ is given by [15-16]
$$z = \left(A B N^{-1/2}\right)^{+}(b - Af) = G^{+}(b - Af). \qquad (3.13)$$
Since $G$ is a row vector, $G^{+} = \dfrac{G^T}{G G^T} = \dfrac{N^{-1/2} B^T A^T}{A B N^{-1} B^T A^T}$. So we explicitly obtain
$$u = N^{-1/2} G^{+}(b - Af) = \frac{N^{-1} B^T A^T}{A B N^{-1} B^T A^T}\,(b - Af). \qquad (3.14)$$
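As an illustration of how Eq. (3.14) can be evaluated numerically, the following MATLAB sketch computes the control input at one instant from the current values of A, b, f, B, and N; the function name and its interface are illustrative assumptions, not part of the development above.

% Sketch (assumed MATLAB): one evaluation of the control law of Eq. (3.14).
% A : 1-by-n row vector dV/dx;  b : scalar, -w - dV/dt;  f : n-by-1 drift;
% B : n-by-p input matrix;  N : p-by-p positive definite weighting matrix.
function u = stable_control(A, b, f, B, N)
    AB    = A*B;                         % 1-by-p row vector
    denom = AB*(N\AB');                  % scalar  A*B*N^{-1}*B'*A'
    u     = (N\AB') * ((b - A*f)/denom); % Eq. (3.14); assumes AB is not zero
end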
3.2.2 Consistency of the Lyapunov constraint
Result 2: When the matrix $AB \ne 0$, the Lyapunov constraint is always consistent. When $AB = 0$, the Lyapunov constraint is consistent if and only if $b - Af = 0$.
Proof: In the derivation of the previous result, it has been assumed that at all times there exists a vector $u$ that ensures that the system satisfies the Lyapunov constraint (3.5). In the derivation of the main result in Section 3.2.1, it is shown that this is equivalent to the statement that at least one solution $u$ of Eq. (3.9) exists at all instants of time. The necessary and sufficient condition for the existence of a solution to Eq. (3.9) is [15]
$$(AB)(AB)^{+}(b - Af) = b - Af. \qquad (3.15)$$
Since $AB$ is a row vector, this equation can be expanded as
$$\frac{(AB)(AB)^T}{(AB)(AB)^T}\,(b - Af) = b - Af, \qquad (3.16)$$
which is always true when $AB \ne 0$. When $AB$ is identically zero, the left hand side of Eq. (3.15) is zero, and hence for the Lyapunov constraint to be consistent we require that the right hand side of Eq. (3.15) also be zero, so that
$$b - Af = 0. \qquad (3.17)$$
Thus, if $b - Af = 0$ whenever $AB = 0$, the pair $(V, w)$ used in obtaining the corresponding $A$ and $b$ (see Eq. (3.4)) can be used to obtain a suitable control $u$.
From here on, we then say that the positive definite function pair $(V, w)$ provides a consistent Lyapunov (stability) constraint, or that the $(V, w)$ pair is consistent, for short.
The next example illustrates the importance of this consistency requirement.
3.2.3 Numerical Examples:
Example 1(a)
Consider the Lorenz system with control applied to only the first state, described by the equations
$$\begin{aligned} \dot x_1 &= \sigma(x_2 - x_1) + u,\\ \dot x_2 &= \rho x_1 - x_2 - x_1 x_3,\\ \dot x_3 &= x_1 x_2 - \beta x_3, \end{aligned} \qquad (3.18)$$
where $\sigma, \rho, \beta > 0$ are a given set of parameters. Since the control is applied only to the first state, the control input $u \in R$ is a scalar and the matrix $B$ in Eq. (3.18) is a 3-vector,
$$B = [1,\, 0,\, 0]^T. \qquad (3.19)$$
The vector $f$ is simply
$$f = \left[\sigma(x_2 - x_1),\; \rho x_1 - x_2 - x_1 x_3,\; x_1 x_2 - \beta x_3\right]^T. \qquad (3.20)$$
Consider the candidate positive definite function pair $(V, w)$ given by
$$V = \tfrac{1}{2}\left(x_1^2 + \alpha x_2^2 + \alpha x_3^2\right) \qquad \text{and} \qquad w(x) = \beta_1 x_1^2 + \beta_2 x_2^2 + \beta_2 x_3^2, \qquad (3.21)$$
where $\alpha > 0$, $\beta_1 > 0$, and $\beta_2 > 0$ are positive scalars chosen with reference to the parameters of the Lorenz oscillator. Using this pair, the Lyapunov constraint is
$$\dot V(x,t) = \frac{\partial V}{\partial x}\dot x = -w. \qquad (3.22)$$
For the pair $(V, w)$ given by Eq. (3.21) to be usable, it must be consistent. Here, $A$ and $b$ are computed as
$$A = \frac{\partial V}{\partial x} = [x_1,\; \alpha x_2,\; \alpha x_3], \qquad b = -w = -\left(\beta_1 x_1^2 + \beta_2 x_2^2 + \beta_2 x_3^2\right). \qquad (3.23)$$
Since $AB = x_1$, $AB = 0$ implies $x_1 = 0$. Using Eqs. (3.23) and (3.20), it can be verified that the quantity $b - Af$ vanishes whenever $x_1 = 0$,
$$\left.(b - Af)\right|_{x_1 = 0} = 0. \qquad (3.24)$$
Thus, the positive definite function pair $(V, w)$ provides a consistent Lyapunov constraint and can be used to obtain a stabilizing control for the system.
The control cost to be minimized is specified as $J(t) = u(x,t)^T N(x,t)\,u(x,t)$. Since $u$ is a scalar in this example, $N$ is just a positive scalar and, without loss of generality, it can be set to unity. Since $AB = x_1$, the expression for the control force simplifies to
$$u = \frac{N^{-1}B^T A^T}{A B N^{-1} B^T A^T}\,(b - Af) = \frac{b - Af}{x_1}. \qquad (3.25)$$
Because the quantity $b - Af$ computed from Eqs. (3.23) and (3.20) contains $x_1$ as a factor (see Eq. (3.24)), its explicit expression (Eq. (3.26)) can be divided by $x_1$ to yield the explicit control of Eq. (3.27).
The Lyapunov function $V$ in Eq. (3.21) is radially unbounded, and hence the controlled system is globally asymptotically stable. It is important to realize that the control given in Eq. (3.27) not only ensures global asymptotic stability, but is also simultaneously optimal in the sense that, for the chosen $(V, w)$ pair, it provides a control that minimizes $J(t) = u^2(t)$ at each instant of time.
Figure 1. (a) Time history of the response of the controlled Lorenz system with control applied only to the first state. (b) Time history of the control input $u(t)$.
For the simulation, the system parameter values are chosen as $\sigma = 10$, $\rho = 28$, and $\beta = 2/3$. The (uncontrolled) system's behavior for this set of parameters is chaotic. The parameters defining the $(V, w)$ pair are chosen as 1, 0.5, and 0.5 (see Eq. (3.21)). The initial conditions are $x_1(0) = 6$, $x_2(0) = 4$, and $x_3(0) = 1$.
The controlled system given in Eq. (3.18) is integrated numerically using the ODE15s package in the MATLAB environment. The error tolerances for the numerical integration are chosen as $10^{-8}$ for the relative error and $10^{-12}$ for the absolute error. The results of the simulation are shown in Figures 1-3. Figure 1(a) shows the response of the controlled system, showing asymptotic convergence to the equilibrium point $x_1 = x_2 = x_3 = 0$. Since $V$ is radially unbounded, the origin is globally attractive, and thus the system can be controlled starting from any set of initial conditions.
Figure 1(b) shows the time history of the control input $u(t)$. The error in satisfying the Lyapunov constraint is given by
$$e(t) := \frac{\partial V}{\partial x}\dot x + w = A\dot x - b. \qquad (3.28)$$
It is of the same order of magnitude as the error tolerance ($10^{-12}$) used for numerically integrating the equations.
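A closed-loop simulation of this example can be sketched in MATLAB as follows. The gradient dVdx and the decay function w correspond to a (V, w) pair of the form in Eq. (3.21); their coefficients, and the 20-second integration horizon, are illustrative assumptions rather than the exact values used for the figures.

% Sketch (assumed MATLAB): controlled Lorenz system of Eq. (3.18), control on x1.
sigma = 10;  rho = 28;  beta = 2/3;               % Lorenz parameters
f    = @(x) [sigma*(x(2)-x(1));
             rho*x(1) - x(2) - x(1)*x(3);
             x(1)*x(2) - beta*x(3)];
dVdx = @(x) [x(1), 0.5*x(2), 0.5*x(3)];           % A = dV/dx (assumed coefficients)
w    = @(x) 0.5*x(1)^2 + 0.5*x(2)^2 + 0.5*x(3)^2; % decay function (assumed coefficients)
B    = [1; 0; 0];
u    = @(x) (-w(x) - dVdx(x)*f(x)) / (dVdx(x)*B); % Eq. (3.25), with N = 1
rhs  = @(t,x) f(x) + B*u(x);
opts = odeset('RelTol',1e-8,'AbsTol',1e-12);
[t, X] = ode15s(rhs, [0 20], [6; 4; 1], opts);    % illustrative 20 s horizon
plot(t, X)                                        % cf. Figure 1(a)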
Example 1(b)
To demonstrate the consequences of choosing a positive definite function pair $(V, w)$ that leaves the Lyapunov constraint inconsistent at some point on the trajectory, let us again consider the Lorenz system, this time with control applied only to the third state. The equation of the controlled system is now
$$\begin{aligned} \dot x_1 &= \sigma(x_2 - x_1),\\ \dot x_2 &= \rho x_1 - x_2 - x_1 x_3,\\ \dot x_3 &= x_1 x_2 - \beta x_3 + u. \end{aligned} \qquad (3.29)$$
The various quantities defining this dynamical system are identified as
$$B = [0,\, 0,\, 1]^T, \qquad f = \left[\sigma(x_2 - x_1),\; \rho x_1 - x_2 - x_1 x_3,\; x_1 x_2 - \beta x_3\right]^T. \qquad (3.30)$$
The positive definite function pair $(V, w)$ is chosen to be identical to that in Example 1(a) (see Eq. (3.21)). Thus, the Lyapunov constraint given in Eq. (3.22) is still valid, with the quantities $A$ and $b$ defined in Eq. (3.23). But the matrix $B$ is different in this case, and $AB$ is now $\alpha x_3$. When $x_3 = 0$, the quantity $b - Af$ reduces to the expression in Eq. (3.31), which does not vanish in general: the parameter $\beta_2$ cannot be chosen so as to make it vanish whenever $x_3$ is zero, since the required value of $\beta_2$ would be negative. Thus, the Lyapunov constraint is not guaranteed to be consistent for this $(V, w)$ pair. The explicit expression for the control is obtained by dividing the expression for $b - Af$ (the first equality of Eq. (3.24)) by $AB = \alpha x_3$, and is given in simplified form by
$$u = \frac{b - Af}{\alpha x_3}. \qquad (3.32)$$
All the parameter values are kept the same as in Example 1(a). Using the control given in Eq. (3.32), the controlled system shown in Eq. (3.29) is integrated starting from the same initial conditions as before. The integration fails to meet the prescribed error tolerances at around 0.23 seconds and stops when the Lyapunov constraint becomes inconsistent.
Figure 2. (a) Time history of the response of the controlled Lorenz system (control applied only to the third state). (b) Variation of $x_3$ and $b - Af$ along the trajectory of the system.
Figure 2(a) shows the evolution of the state of the controlled system with time. Figure 2(b) shows the variation of $x_3$ and $b - Af$ along the trajectory of the system. The square marker indicates the initial state and the circular marker shows the final position. It can be seen that when the integration fails, $x_3$ has reached zero but $b - Af$ has not (recall that $AB = \alpha x_3 = 0$ when the Lyapunov constraint becomes inconsistent).
Example 1(c)
We next consider the system when only the second state is controlled. The controlled nonlinear system is described by the equations
$$\begin{aligned} \dot x_1 &= \sigma(x_2 - x_1),\\ \dot x_2 &= \rho x_1 - x_2 - x_1 x_3 + u,\\ \dot x_3 &= x_1 x_2 - \beta x_3. \end{aligned} \qquad (3.33)$$
The control input $u$ for this case is again a scalar, and the matrix $B$ is simply $B = [0,\, 1,\, 0]^T$. Using the same pair $(V, w)$ given in Eq. (3.21), the scalar $AB$ is found to be equal to $\alpha x_2$. Therefore, $AB = 0$ implies that $x_2 = 0$. We again obtain, as before, the expression for $b - Af$ of Eq. (3.26) (Eq. (3.34)). A suitable choice of one of the parameters in Eq. (3.21) makes the quantity $b - Af$ zero when $x_2 = 0$. The Lyapunov constraint is then consistent, and this $(V, w)$ pair can be used to yield a control that is globally asymptotically stable.
Using Eq. (3.14) with this choice of parameters, the globally asymptotically stable control is explicitly given by
$$u = \frac{b - Af}{\alpha x_2}. \qquad (3.35)$$
This control minimizes, at each instant of time, the control cost $J(t) = u^2$ for the pair $(V, w)$ chosen in Eq. (3.21) with this parameter choice.
All the parameter values are kept the same as in Example 1(a), except for two of the parameters of the $(V, w)$ pair, which are set to 0.1 and 1, respectively.
Figure 3. (a) Time history of the response of the controlled Lorenz system (control applied only to the second state). (b) Variation of $x_2$ and $b - Af$ along the trajectory of the system.
Figure 3(a) shows the evolution of the state of the controlled system with time. Figure 3(b) shows the variation of $x_2$ and $b - Af$ along the trajectory of the system. The square marker again indicates the initial state and the circular marker shows the final position. Figure 4(a) shows the time history of the control input that enforces the Lyapunov constraint and simultaneously minimizes the control cost. Similarly, Figure 4(b) shows the error in the satisfaction of the Lyapunov constraint. The error $e(t)$ in satisfying the Lyapunov constraint (see Eq. (3.28)) is found to be of the same order of magnitude as the error tolerances used in numerically integrating the equations describing the controlled system.
Figure 4. (a) Time history of the control input $u(t)$. (b) Time history of the error $e(t)$ in enforcing Lyapunov stability (see Eq. (3.28)).
Example 1(d)
We next consider the system when both the second and the third states are simultaneously controlled, so that the control input is a vector. The controlled nonlinear system is now described by the equations
$$\begin{aligned} \dot x_1 &= \sigma(x_2 - x_1),\\ \dot x_2 &= \rho x_1 - x_2 - x_1 x_3 + u_2,\\ \dot x_3 &= x_1 x_2 - \beta x_3 + u_3, \end{aligned} \qquad (3.36)$$
and the matrix $B$ is the 3 by 2 matrix given by
$$B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}. \qquad (3.37)$$
The control input is the 2-vector $u = [u_2,\, u_3]^T$. Using the same $(V, w)$ pair given in Eq. (3.21) as before, the row vector $AB$ is given by $AB = [\alpha x_2,\; \alpha x_3]$, so that $AB = 0$ implies $x_2 = x_3 = 0$. We again obtain, as before, the expression for $b - Af$ of Eq. (3.26) (Eq. (3.38)). As in Example 1(c), the same choice of parameters makes the quantity $b - Af$ zero when $x_2 = x_3 = 0$, and this $(V, w)$ pair is thus made consistent. Choosing the weighting matrix to be
$$N = \mathrm{diag}\!\left(1 + x_2^2,\; 1 + x_3^2\right), \qquad (3.39)$$
the requisite globally asymptotically stable control is found using Eq. (3.14) and is given in closed form by
$$u = \frac{N^{-1}B^T A^T}{A B N^{-1} B^T A^T}\,(b - Af). \qquad (3.40)$$
This control minimizes, at each instant of time, the control cost $J(t) = u^T N u$ for the pair $(V, w)$ chosen in Eq. (3.21) and the aforementioned weighting matrix $N$.
All the parameter values are kept identical to those in Example 1(c).
Figure 5. (a) Time history of the response of the controlled Lorenz system (control applied to the second and third states only). (b) Time history of the control input $u(t)$.
Figure 5(a) shows the evolution of the state of the controlled system with time. Figure 5(b) shows the time history of the control input that enforces the Lyapunov constraint and simultaneously minimizes, for this $(V, w)$ pair, the control cost given by Eq. (3.6) with the weighting matrix given by Eq. (3.39). The error $e(t)$ in satisfying the Lyapunov constraint is of $O(10^{-13})$, which is consistent with the error tolerances used in integrating the equations numerically.
3.3 Application to linear systems
In this section, the control method developed in Section 3.2 is applied to linear systems, and it is shown that the method can recover the solution of the LQR control approach, thereby connecting this method with results obtained from conventional control theory.
Consider a linear system described by the linear differential equation
$$\dot x = \hat A x + B u. \qquad (3.41)$$
The matrices $\hat A \in R^{n\times n}$ and $B \in R^{n\times p}$ are constant matrices, and the pair $(\hat A, B)$ is assumed to be controllable. The control input is $u \in R^p$. The control objective is to minimize the cost defined as
$$\hat J = \int_0^{\infty}\left(x^T Q x + u^T R u\right)dt, \qquad (3.42)$$
where $Q \in R^{n\times n}$ and $R \in R^{p\times p}$ are constant positive definite matrices. This problem has been well studied in the literature under the name 'LQR control' [e.g., 55]. The optimal control is found to be linear and is given by
$$u = -R^{-1}B^T P x, \qquad (3.43)$$
where $P$ is a symmetric, positive definite matrix obtained by solving the algebraic Riccati equation
$$\hat A^T P + P\hat A - 2PBR^{-1}B^T P + Q = -PBR^{-1}B^T P. \qquad (3.44)$$
Result 3: If the positive definite pair $(V, w)$ is chosen as (see Remarks 1 and 2 below)
$$V = x^T P x, \qquad w = x^T\!\left(Q + PBR^{-1}B^T P\right)x, \qquad P > 0,\; Q > 0, \qquad (3.45)$$
where the matrix $P$ satisfies relation (3.44), and the positive definite weighting matrix $N$ is chosen as $N = R$, then the control vector $u$ obtained using Eq. (3.14) is the same as that given in Eq. (3.43).
Proof: From Eq. (3.45), the various quantities required for obtaining the control $u$ using Eq. (3.14) are
$$A := \frac{\partial V}{\partial x} = 2x^T P, \qquad b := -w - \frac{\partial V}{\partial t} = -x^T\!\left(Q + PBR^{-1}B^T P\right)x. \qquad (3.46)$$
Since $f = \hat A x$, on substituting these quantities in Eq. (3.14), $u$ is found explicitly to be
$$u = \frac{2R^{-1}B^T P x}{4\,x^T PBR^{-1}B^T P x}\left[-x^T\!\left(Q + PBR^{-1}B^T P\right)x - 2x^T P\hat A x\right]. \qquad (3.47)$$
On substituting for $A$ from Eq. (3.46), and noting that $x^T P\hat A x$ is a scalar so that $2x^T P\hat A x = x^T(P\hat A + \hat A^T P)x$, we obtain
$$u = -\frac{R^{-1}B^T P x}{2\,x^T PBR^{-1}B^T P x}\; x^T\!\left(Q + PBR^{-1}B^T P + P\hat A + \hat A^T P\right)x. \qquad (3.48)$$
Using the algebraic Riccati equation (3.44), Eq. (3.48) reduces to
$$u = -\frac{R^{-1}B^T P x}{2\,x^T PBR^{-1}B^T P x}\left(2\,x^T PBR^{-1}B^T P x\right) = -R^{-1}B^T P x. \qquad (3.49)$$
The significance of Result 3 is that the LQR controller is obtained by using a particular $(V, w)$ pair and applying the general methodology proposed herein.
Remark 1: Consider the matrix $PBR^{-1}B^T P$. If we denote $\hat P = PB$, then $PBR^{-1}B^T P = \hat P R^{-1}\hat P^T$ is an $n$ by $n$ symmetric positive semidefinite matrix of the same rank as $\hat P$, because $R$ is a positive definite matrix. Hence $w$ is a positive definite function, since $Q$ is a positive definite matrix.
Remark 2: The $(V, w)$ pair given in Eq. (3.45) is consistent. To verify this, using Eq. (3.46) we find $AB = 2x^T PB$, and following the same steps shown in Eqs. (3.47)-(3.49), we obtain $b - Af = -2x^T PBR^{-1}B^T P x$, so that $b - Af$ is zero whenever $AB$ is zero.
Remark 3: While the result obtained through the use of Eq. (3.14) is identical to that given by LQR theory, what may be less well known is that this LQR control $u$ also minimizes, at each instant of time, the cost $u^T R u$ when the $(V, w)$ pair given in Eq. (3.45) is used.
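The following MATLAB sketch (using the Control System Toolbox function care) illustrates Result 3 numerically on an arbitrary two-state system: the control of Eq. (3.14), evaluated with the (V, w) pair of Eq. (3.45) and N = R, coincides with the LQR feedback of Eq. (3.43). The system matrices and the test state are illustrative assumptions.

% Sketch (assumed MATLAB + Control System Toolbox): Eq. (3.14) recovers LQR.
Ahat = [0 1; -2 -3];   B = [0; 1];     % illustrative controllable pair
Q = eye(2);            R = 1;
P = care(Ahat, B, Q, R);               % P solves the algebraic Riccati equation
x = [1; -2];                           % arbitrary test state
A = 2*x'*P;                            % A = dV/dx for V = x'*P*x
b = -x'*(Q + P*B*(R\B')*P)*x;          % b = -w, from Eq. (3.45)
u1 = (R\(B'*A')) * ((b - A*(Ahat*x)) / (A*B*(R\(B'*A'))));  % Eq. (3.14), N = R
u2 = -(R\(B'*P*x));                    % LQR control, Eq. (3.43)
disp([u1 u2])                          % the two values agree to rounding error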
3.4 Consistent (V, w) pairs for mechanical systems
In this section, the special case of mechanical systems, which are described by second order nonlinear non-autonomous differential equations, is considered. These systems have a specific structure with respect to how the control inputs (generalized control forces) influence the dynamics of the system. This structure is exploited here to provide a general class of $(V, w)$ pairs which satisfy the consistency condition and can therefore be used to design optimal stable control, in the sense described before. Furthermore, since the functions $V$ used in these pairs are positive definite functions that are also radially unbounded, the control realized through their use is globally asymptotically stabilizing. We assume that each degree of freedom of the mechanical system is subjected to a generalized control force (see Eq. (3.51) below).
Consider the non-autonomous mechanical system whose dynamics are described by the equations
$$M(q,t)\,\ddot q = F(q,\dot q,t), \qquad (3.50)$$
where $M \in R^{n\times n}$ is a symmetric positive definite mass matrix and $F$ is a prescribed force vector of dimension $n$. The $n$-vector $q$ contains the generalized positions and the $n$-vector $\dot q$ is the vector of generalized velocities. The equation of motion of the controlled system is
$$M(q,t)\,\ddot q = F(q,\dot q,t) + Q^C(q,\dot q,t), \qquad (3.51)$$
where $Q^C$ is the control force. The (generalized) control force is applied to each (generalized) degree of freedom of the system. The objective of the control design is to minimize the control cost
$$J(t) = (Q^C)^T N(q,\dot q,t)\, Q^C \qquad (3.52)$$
and simultaneously coerce the controlled system to have an asymptotic equilibrium point at $q = 0$, $\dot q = 0$ by enforcing a suitable Lyapunov constraint.
If we denote the generalized velocity of the system by $v$ ($v := \dot q$), the controlled system can be represented in state-space form as
$$\begin{bmatrix} \dot q \\ \dot v \end{bmatrix} = \begin{bmatrix} v \\ M^{-1}F \end{bmatrix} + \begin{bmatrix} 0 \\ M^{-1}Q^C \end{bmatrix}. \qquad (3.53)$$
This equation can be represented in our standard generalized dynamical system form (Eq. (3.1)) by denoting
$$x = \begin{bmatrix} q \\ v \end{bmatrix}, \qquad f = \begin{bmatrix} v \\ M^{-1}F \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ M^{-1} \end{bmatrix}, \qquad u = Q^C. \qquad (3.54)$$
Consider the positive definite function pair $(V, w)$, where
$$V = \tfrac{1}{2}q^T P_1 q + \tfrac{1}{2}v^T P_2 v + v^T P_{12} q, \qquad w = \tfrac{1}{2}q^T Q_1 q + \tfrac{1}{2}v^T Q_2 v + v^T Q_3 q, \qquad (3.55)$$
and
$$P_1 = \mathrm{diag}(a_1), \quad P_2 = \mathrm{diag}(a_2), \quad P_{12} = \mathrm{diag}(a_{12}), \qquad Q_1 = D P_1, \quad Q_2 = D P_2, \quad Q_3 = D P_{12}. \qquad (3.56)$$
Here $a_1$, $a_2$, $a_{12}$, and $\eta$ are each $n$-vectors, and the diagonal matrix $D = \mathrm{diag}(\eta)$.
We shall denote, for convenience, the element-wise product of two $n$-vectors $l$ and $m$ by the $n$-vector $[lm]$, so that its $i$-th element $[lm]_i$ is simply given by the product of $l_i$ and $m_i$; also, for an $n$-vector $l$, we mean by $l^c$ ($l$ raised to the power $c$) the $n$-vector whose $i$-th component is $l_i^c$.
Result 4: The positive definite function pair $(V, w)$, given by Eqs. (3.55)-(3.56), satisfies the consistency condition if
$$a_1 > 0, \quad a_2 > 0, \quad \eta > 0, \quad 0 < a_{12} < [a_1 a_2]^{1/2}, \quad \eta = 2\,[a_{12}\,a_2^{-1}], \qquad (3.57)$$
where the signs '$>$ ($<$)' are to be interpreted as 'greater (less) than' element-wise.
Proof: It can easily be verified that the functions $V$ and $w$ in Eqs. (3.55) and (3.56) are positive definite, since
$$a_1 > 0, \quad a_2 > 0, \quad \eta > 0, \quad 0 < a_{12} < [a_1 a_2]^{1/2}. \qquad (3.58)$$
It is important to note that the function $V$ is radially unbounded.
The matrix $A$ and the scalar $b$ are given by
$$A = \frac{\partial V}{\partial x} = \left[\frac{\partial V}{\partial q},\; \frac{\partial V}{\partial v}\right], \qquad b = -w. \qquad (3.59)$$
Differentiating the expression for $V$ in Eq. (3.55) gives the required partial derivatives,
$$\frac{\partial V}{\partial q} = q^T P_1 + v^T P_{12}, \qquad \frac{\partial V}{\partial v} = v^T P_2 + q^T P_{12}. \qquad (3.61)$$
The quantity $AB$ can then be obtained, using Eqs. (3.59) and (3.54), as $AB = \dfrac{\partial V}{\partial v}\,M^{-1}$. Since $M^{-1}$ is a positive definite matrix, $AB$ is zero if and only if $\dfrac{\partial V}{\partial v}$ is zero. Using Eq. (3.61), it can be concluded that $AB$ can be zero if and only if
$$v = -P_2^{-1}P_{12}\, q. \qquad (3.62)$$
The next step in the proof is to compute the quantity $b - Af$ when $AB$ is zero. When $\dfrac{\partial V}{\partial v}$ is zero, $Af$ is computed as
$$Af = \frac{\partial V}{\partial q}\, v + \frac{\partial V}{\partial v}\, M^{-1}F = \frac{\partial V}{\partial q}\, v. \qquad (3.63)$$
In the last equality above, we have used the fact that $\dfrac{\partial V}{\partial v}$ is zero when $AB$ is zero. On substituting from Eq. (3.61), this simplifies to
$$\left.Af\right|_{AB=0} = q^T P_1 v + v^T P_{12} v. \qquad (3.64)$$
When $AB$ is zero, Eq. (3.62) holds true and therefore
$$\left.Af\right|_{AB=0} = -q^T P_1 P_2^{-1}P_{12}\, q + q^T P_{12}P_2^{-1}P_{12}P_2^{-1}P_{12}\, q = q^T\!\left[P_{12}P_2^{-1}\!\left(P_{12}P_2^{-1}P_{12} - P_1\right)\right] q, \qquad (3.65)$$
where we have made use of the commutative property of diagonal matrices. In order to compute $b = -w$ when $AB$ is zero, Eq. (3.55) is used to obtain
$$\left.w\right|_{AB=0} = \tfrac{1}{2}q^T D P_1 q + \tfrac{1}{2}q^T D P_{12}P_2^{-1}P_{12}\, q - q^T D P_{12}P_2^{-1}P_{12}\, q = \tfrac{1}{2}q^T D\!\left(P_1 - P_{12}P_2^{-1}P_{12}\right)q. \qquad (3.66)$$
Since $\eta = 2\,[a_{12}\,a_2^{-1}]$ (see Eq. (3.57)) implies that $D = 2P_{12}P_2^{-1}$, hence
$$\left.b\right|_{AB=0} = -\left.w\right|_{AB=0} = q^T\!\left[P_{12}P_2^{-1}\!\left(P_{12}P_2^{-1}P_{12} - P_1\right)\right] q. \qquad (3.67)$$
From Eqs. (3.65) and (3.67), it is seen that $b - Af = 0$ whenever $AB = 0$. Hence, the positive definite function pairs $(V, w)$ given in Eqs. (3.55)-(3.56) always satisfy the consistency condition.
The significance of Result 4 is that the family of positive definite function pairs (V, w) provided
here can now be used to obtain optimal globally stable control for any mechanical system.
Remark 4: When $a_1$, $a_2$, $a_{12}$, and $\eta$ are each constant $n$-vectors, so that $a_1 = a_1[1,1,\ldots,1]^T$, $a_2 = a_2[1,1,\ldots,1]^T$, $a_{12} = a_{12}[1,1,\ldots,1]^T$, and $\eta = \eta[1,1,\ldots,1]^T$, the $(V, w)$ pairs in Result 4 simplify, and Eq. (3.55) becomes
$$V = \tfrac{1}{2}a_1\, q^T q + \tfrac{1}{2}a_2\, v^T v + a_{12}\, v^T q \qquad \text{and} \qquad w = \eta V. \qquad (3.68)$$
Consistency of these $(V, \eta V)$ pairs then requires that the constants $a_1$, $a_2$, $a_{12}$, and $\eta$ satisfy the relations
$$a_1 > 0, \quad a_2 > 0, \quad 0 < a_{12} < \sqrt{a_1 a_2}, \quad \eta = \frac{2a_{12}}{a_2}, \qquad (3.69)$$
where the similarity between these relations and those in Eq. (3.57) is obvious.
Due to the particular $(V, w)$ pair chosen in Eq. (3.68), the Lyapunov constraint can be written as
$$\dot V = -\eta V, \qquad (3.70)$$
which, when enforced, ensures that the Lyapunov function $V$ decays exponentially with time along the trajectory of the controlled system, and hence that the controlled system is exponentially stable. One can alter the rate of convergence of the system to the origin by altering the decay rate $\eta$.
Remark 5: If we denote
$$\bar A = \frac{\partial V}{\partial v} \qquad \text{and} \qquad \bar b = b - \frac{\partial V}{\partial q}\, v, \qquad (3.71)$$
the quantities $AB$ and $Af$ can be expressed as
$$AB = \bar A M^{-1}, \qquad Af = \frac{\partial V}{\partial q}\, v + \bar A M^{-1} F. \qquad (3.72)$$
Substituting from the above equation into Eq. (3.14), the control force $Q^C$ is obtained in the form
$$u = Q^C = \frac{N^{-1}M^{-1}\bar A^T}{\bar A M^{-1} N^{-1} M^{-1} \bar A^T}\left(\bar b - \bar A M^{-1}F\right), \qquad (3.73)$$
which is of the same form as the expression for $Q^C$ obtained in Chapter 2 (Eq. (2.19)).
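A direct MATLAB transcription of Eq. (3.73) is sketched below; the function name and argument list are illustrative assumptions. Here Abar denotes the row vector dV/dv and bbar denotes b - (dV/dq)v.

% Sketch (assumed MATLAB): control force of Eq. (3.73) for a mechanical system.
function Qc = mech_control(M, F, Abar, bbar, N)
    AM    = Abar / M;                  % Abar*M^{-1}, a row vector
    denom = AM * (N \ AM');            % scalar denominator of Eq. (3.73)
    Qc    = (N \ AM') * ((bbar - AM*F) / denom);
end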
3.5 Conclusions
In this chapter, the result of Chapter 2 is extended, and optimal, stable controllers are provided for nonlinear non-autonomous dynamical systems described by first order differential equations. The controllers derived in this chapter retain the attractive characteristics of the controllers derived for mechanical systems: (i) stability is built into the design of the controller, eliminating the need to search for a Lyapunov function, and (ii) the controller minimizes, at each instant of time, a user-prescribed control cost. Conditions that guarantee consistency of the Lyapunov constraint are presented. A class of (V, w) pairs that guarantees consistency has been provided for nonlinear non-autonomous mechanical systems. For a linear system, a particular (V, w) pair is shown to yield the result obtained from conventional LQR control theory. Numerical examples have been provided that illustrate the importance of the consistency condition and demonstrate the efficacy of the approach.
Chapter 4. Stable Control of Mechanical System with Constraint on Maximum Control
Effort
4.1 Introduction
In Chapter 2, we designed a controller for a general nonlinear mechanical system that had stability built into it and was also optimal in the sense that a suitable norm of the control effort was minimized at each instant of time. In engineering practice, there is often a limit on the resources (such as the power or voltage available) at one's disposal for the controller, thus limiting the maximum control forces that can be applied. It is also desirable to know the maximum control force required to stabilize the system, so that the controller can be provided with suitable actuators. We deal with these two problems in this chapter. We assume that there is a constraint on the maximum control effort (Jmax) that can be used, and derive the control force that needs to be applied. We also calculate the minimum Jmax that needs to be provided so that the stability of the system is ensured.
4.2 Main Result
Consider a mechanical system whose mass matrix is $M(x,\dot x,t)$, subjected to an external force $F(x,\dot x,t)$ and a control force $Q^C(x,\dot x,t)$. The equation of motion for the system alone, without the control force, is
$$M(x,\dot x,t)\,\ddot x = F(x,\dot x,t). \qquad (4.1)$$
In the presence of the control force $Q^C(x,\dot x,t)$, its equation of motion is
$$M(x,\dot x,t)\,\ddot x = F(x,\dot x,t) + Q^C(x,\dot x,t). \qquad (4.2)$$
Following the line of thought developed in Chapter 2, we calculate the control force $Q^C(x,\dot x,t)$ so as to enforce a stability constraint
$$\dot V(x,\dot x,t) = -w(x,\dot x) \qquad (4.3)$$
along the trajectories of the solution to Eq. (4.2), where $V(x,\dot x,t)$ and $w(x,\dot x)$ are user-prescribed positive definite functions. In addition, the control force is also required to minimize the control cost
$$J(t) = (Q^C)^T N(x,\dot x,t)\, Q^C \qquad (4.4)$$
at each instant of time, where $N(x,\dot x,t)$ is a user-prescribed, symmetric, positive definite matrix. Let us assume that, due to limitations in resources, the control force that can be applied is limited in the form of a maximum constraint on the control effort,
$$J(t) \le J_{\max}. \qquad (4.5)$$
In the following, we suppress the arguments of the various quantities for the sake of simplicity, except when required for clarity. The stability constraint in Eq. (4.3) can be rewritten as
$$\frac{\partial V}{\partial x}\dot x + \frac{\partial V}{\partial \dot x}\ddot x + \frac{\partial V}{\partial t} = -w, \qquad (4.6)$$
which can be further simplified to
$$A(x,\dot x,t)\,\ddot x = b(x,\dot x,t), \qquad (4.7)$$
where
$$A(x,\dot x,t) := \frac{\partial V(x,\dot x,t)}{\partial \dot x}, \qquad b(x,\dot x,t) := -w - \frac{\partial V}{\partial x}\dot x - \frac{\partial V}{\partial t}. \qquad (4.8)$$
Result: The control force that ensures that relationship (4.3) is satisfied as closely as possible, given the constraint on the maximum control effort in Eq. (4.5), is explicitly given in closed form as
$$Q^C = \begin{cases} N^{-1/2}\, G^{+}\, b_1 & \text{if } \lVert G^{+} b_1 \rVert_2 \le Z_{\max},\\[4pt] N^{-1/2}\, Z_{\max}\, \dfrac{G^{+} b_1}{\lVert G^{+} b_1 \rVert_2} & \text{if } \lVert G^{+} b_1 \rVert_2 > Z_{\max}, \end{cases} \qquad (4.9)$$
where $G := A M^{-1} N^{-1/2}$, $Z_{\max} := \sqrt{J_{\max}}$, and $b_1 = b - A M^{-1} F$.
Proof:
Substituting for $\ddot x$ in Eq. (4.7) using Eq. (4.2), we get
$$A M^{-1} Q^C = b - A M^{-1} F =: b_1. \qquad (4.10)$$
Observing the form of the cost function in Eq. (4.4), let us define a new quantity $z(t)$,
$$z(t) := N^{1/2} Q^C. \qquad (4.11)$$
Thus, minimizing the cost function $J(t)$ is the same as minimizing $\lVert z \rVert_2$. The constraint in Eq. (4.5) reduces to
$$\lVert z \rVert_2 \le Z_{\max} := \sqrt{J_{\max}}. \qquad (4.12)$$
Using Eq. (4.11), let us rewrite Eq. (4.10) as
$$A M^{-1} N^{-1/2} z =: G z = b_1. \qquad (4.13)$$
Now we want to satisfy the stability constraint in Eq. (4.13) in the least squares sense, given the limitation on the control force expressed in Eq. (4.12); that is, we want to minimize $\lVert G z - b_1 \rVert_2$ subject to the constraint in Eq. (4.12). The solution to this problem is (see Appendix D for the proof)
$$z = \begin{cases} G^{+} b_1 & \text{if } \lVert G^{+} b_1 \rVert_2 \le Z_{\max},\\[4pt] Z_{\max}\, \dfrac{G^{+} b_1}{\lVert G^{+} b_1 \rVert_2} & \text{if } \lVert G^{+} b_1 \rVert_2 > Z_{\max}. \end{cases} \qquad (4.14)$$
In the above equation, the '+' in the superscript indicates the Moore-Penrose pseudoinverse. Since $G$ is a row vector,
$$G^{+} = \frac{G^T}{G G^T} = \frac{G^T}{\lVert G \rVert_2^2}. \qquad (4.15)$$
Using Eq. (4.15) and the fact that $b_1$ is a scalar, we have
$$\lVert G^{+} b_1 \rVert_2 = \frac{\lvert b_1 \rvert}{\lVert G \rVert_2}. \qquad (4.16)$$
Substituting Eqs. (4.15) and (4.16) into Eq. (4.14),
$$z = \begin{cases} \dfrac{G^T}{G G^T}\, b_1 & \text{if } \dfrac{\lvert b_1 \rvert}{\lVert G \rVert_2} \le Z_{\max},\\[6pt] \mathrm{sgn}(b_1)\, Z_{\max}\, \dfrac{G^T}{\lVert G \rVert_2} & \text{if } \dfrac{\lvert b_1 \rvert}{\lVert G \rVert_2} > Z_{\max}, \end{cases} \qquad (4.17)$$
where $\mathrm{sgn}(b_1)$ is the sign function defined as
$$\mathrm{sgn}(b_1) = \begin{cases} 1 & \text{if } b_1 \ge 0,\\ -1 & \text{if } b_1 < 0. \end{cases} \qquad (4.18)$$
Using the definition of $G$ in Eq. (4.13), we can simplify Eq. (4.17) to obtain a closed form expression for the control force $Q^C$:
$$Q^C = \begin{cases} \dfrac{N^{-1}M^{-1}A^T}{A M^{-1} N^{-1} M^{-1} A^T}\, b_1 & \text{if } \lvert b_1 \rvert \le \lVert G \rVert_2\, Z_{\max},\\[8pt] \mathrm{sgn}(b_1)\, Z_{\max}\, \dfrac{N^{-1}M^{-1}A^T}{\sqrt{A M^{-1} N^{-1} M^{-1} A^T}} & \text{if } \lvert b_1 \rvert > \lVert G \rVert_2\, Z_{\max}. \end{cases} \qquad (4.19)$$
Now that we have obtained the controller that does the best possible job of enforcing the stability constraint in Eq. (4.3), given the constraint on the control force in Eq. (4.5), we are interested in finding the least value of $J_{\max}$ that ensures that the controlled system represented by Eq. (4.2) is stable. A general bound on $J_{\max}$ appears hard to obtain, as we shall see shortly. So, in the rest of this section, we assume the specific forms of $V$ and $w$ obtained in Chapter 3. They are
$$V = \tfrac{1}{2}a_1\, x^T x + \tfrac{1}{2}a_2\, \dot x^T \dot x + a_{12}\, \dot x^T x, \qquad (4.20)$$
$$w = \eta V, \qquad (4.21)$$
where $\eta$, $a_1$, $a_2$, $a_{12}$ are scalars chosen such that
$$\eta = \frac{2a_{12}}{a_2}, \qquad a_1 > 0, \qquad a_2 > 0, \qquad 0 < a_{12}, \qquad a_{12}^2 \le \tfrac{1}{2}a_1 a_2. \qquad (4.22)$$
Result 2: For the specific forms of $V$ and $w$ chosen using Eqs. (4.20)-(4.22), the system is stable if $J_{\max}$ satisfies the inequality
$$\sqrt{J_{\max}} \ge \frac{1}{\lambda_1\lambda_2}\left(\frac{a_1}{2a_{12}a_2}\,\lVert A\rVert_2 + \lVert a\rVert_2\right) \qquad (4.23)$$
over the entire domain $D$ and over the entire duration of integration. In the above equation, $a$ is the acceleration of the uncontrolled system, $a := M^{-1}F$, and $\lambda_1$ and $\lambda_2$ are the minimum eigenvalues of the matrices $M^{-1}$ and $N^{-1/2}$, respectively.
Proof:
Observing Eq. (4.19), let us consider two cases, A and B: one in which $\lvert b_1\rvert \le \lVert G\rVert_2 Z_{\max}$ and the other in which $\lvert b_1\rvert > \lVert G\rVert_2 Z_{\max}$. We prove stability of the controlled system by verifying that the sign of $\dot V$ is negative in both cases.
Case A: When $\lvert b_1\rvert \le \lVert G\rVert_2 Z_{\max}$, Eq. (4.19) gives the control force as
$$Q^C = \frac{N^{-1}M^{-1}A^T}{A M^{-1}N^{-1}M^{-1}A^T}\,b_1.$$
By the definition of $\dot V$,
$$\dot V = A M^{-1} Q^C + A M^{-1} F + \frac{\partial V}{\partial x}\dot x + \frac{\partial V}{\partial t}. \qquad (4.24)$$
Substituting for $Q^C$ in the above equation and using the definition of $b_1$, we obtain $\dot V = -w \le 0$. Thus, $\dot V$ is negative in this case.
Case B: When $\lvert b_1\rvert > \lVert G\rVert_2 Z_{\max}$, let us again consider two cases, B.1 and B.2: one in which $b_1 > \lVert G\rVert_2 Z_{\max}$ and the other in which $b_1 < -\lVert G\rVert_2 Z_{\max}$.
Case B.1: If $b_1 > \lVert G\rVert_2 Z_{\max}$, Eq. (4.19) gives the control force explicitly as
$$Q^C = Z_{\max}\,\frac{N^{-1}M^{-1}A^T}{\sqrt{A M^{-1}N^{-1}M^{-1}A^T}}. \qquad (4.25)$$
Thus we have
$$A M^{-1}Q^C = Z_{\max}\sqrt{A M^{-1}N^{-1}M^{-1}A^T} = \lVert G\rVert_2\, Z_{\max}. \qquad (4.26)$$
From Eq. (4.10), we have
$$A M^{-1}F = b - b_1. \qquad (4.27)$$
From the definition of $b$ in Eq. (4.8), we have
$$\frac{\partial V}{\partial x}\dot x + \frac{\partial V}{\partial t} = -w - b. \qquad (4.28)$$
Substituting for $A M^{-1}F$ from Eq. (4.27), and for $\frac{\partial V}{\partial x}\dot x + \frac{\partial V}{\partial t}$ from Eq. (4.28), in Eq. (4.24), we obtain
$$\dot V = A M^{-1}Q^C + (b - b_1) + (-w - b) = A M^{-1}Q^C - w - b_1. \qquad (4.29)$$
Using Eq. (4.26) in the above equation,
$$\dot V = \lVert G\rVert_2\, Z_{\max} - w - b_1. \qquad (4.30)$$
Since $b_1 > \lVert G\rVert_2 Z_{\max}$, we have
$$\dot V \le -w \le 0. \qquad (4.31)$$
Thus, $\dot V$ is negative in this case too. Observe that we have not used any specific form for $V$ and $w$ up to this point; the proof so far is valid for any general positive definite functions $V$ and $w$.
Case B.2: If $b_1 < -\lVert G\rVert_2 Z_{\max}$, Eq. (4.19) gives the control force explicitly as
$$Q^C = -Z_{\max}\,\frac{N^{-1}M^{-1}A^T}{\sqrt{A M^{-1}N^{-1}M^{-1}A^T}}. \qquad (4.32)$$
Let us obtain bounds for each of the terms in the expansion of $\dot V$ given in Eq. (4.24). Defining $\lambda_1$ and $\lambda_2$ as the minimum eigenvalues of $M^{-1}$ and $N^{-1/2}$, respectively, and noting that $A$ is a row vector, we have the inequality
$$\lVert A M^{-1}N^{-1/2}\rVert_2 \ge \lambda_1\lambda_2\,\lVert A\rVert_2. \qquad (4.33)$$
In the above inequality, $\lambda_1$ and $\lambda_2$ are positive real numbers, since $M^{-1}$ and $N^{-1/2}$ are symmetric, positive definite matrices. Using this inequality, we obtain the bound
$$A M^{-1}Q^C = -Z_{\max}\,\lVert A M^{-1}N^{-1/2}\rVert_2 \le -\lambda_1\lambda_2\,\lVert A\rVert_2\, Z_{\max}. \qquad (4.34)$$
Similarly, we have the inequality
$$A M^{-1}F = A\,a \le \lVert A\rVert_2\,\lVert a\rVert_2. \qquad (4.35)$$
Now all we need is a bound on $\frac{\partial V}{\partial x}\dot x$. At this point, let us choose the specific positive definite function $V$ defined in Eq. (4.20) in order to bound this term. For this choice of $V$,
$$\frac{\partial V}{\partial x}\dot x = a_1\, x^T\dot x + a_{12}\,\dot x^T\dot x. \qquad (4.36)$$
By the definition of $A$, we have
$$A^T = a_{12}\,x + a_2\,\dot x, \qquad (4.37)$$
and
$$\lVert A\rVert_2^2 = a_{12}^2\lVert x\rVert_2^2 + a_2^2\lVert \dot x\rVert_2^2 + 2a_{12}a_2\, x^T\dot x. \qquad (4.38)$$
Rearranging the above equation,
$$x^T\dot x = \frac{\lVert A\rVert_2^2 - a_{12}^2\lVert x\rVert_2^2 - a_2^2\lVert \dot x\rVert_2^2}{2a_{12}a_2}. \qquad (4.39)$$
Substituting this in Eq. (4.36),
$$\frac{\partial V}{\partial x}\dot x = \frac{a_1}{2a_{12}a_2}\lVert A\rVert_2^2 - \frac{a_1 a_{12}}{2a_2}\lVert x\rVert_2^2 + \left(a_{12} - \frac{a_1 a_2}{2a_{12}}\right)\lVert \dot x\rVert_2^2. \qquad (4.40)$$
If we choose $2a_{12}^2 \le a_1 a_2$, the right-most two terms in the above equation are non-positive, and we have
$$\frac{\partial V}{\partial x}\dot x \le \frac{a_1}{2a_{12}a_2}\lVert A\rVert_2^2. \qquad (4.41)$$
Thus we have
$$\dot V \le \frac{a_1}{2a_{12}a_2}\lVert A\rVert_2^2 + \lVert A\rVert_2\lVert a\rVert_2 - \lambda_1\lambda_2\lVert A\rVert_2\, Z_{\max} = \lVert A\rVert_2\left(\frac{a_1}{2a_{12}a_2}\lVert A\rVert_2 + \lVert a\rVert_2 - \lambda_1\lambda_2\, Z_{\max}\right), \qquad (4.42)$$
and we obtain the following sufficient condition for stability:
$$Z_{\max} \ge \frac{1}{\lambda_1\lambda_2}\left(\frac{a_1}{2a_{12}a_2}\lVert A\rVert_2 + \lVert a\rVert_2\right). \qquad (4.43)$$
Remarks:
1. We had to choose a specific form of the positive definite function $V$ to obtain a bound on $\frac{\partial V}{\partial x}\dot x$ in terms of $\lVert A\rVert_2$.
2. If we choose a different positive definite function $V$, we obtain a different bound in Eq. (4.43). We can still follow a similar procedure to obtain the bound; Eqs. (4.36)-(4.41) need to be modified in that case.
4.3 Numerical Examples
To show the efficacy of our control methodology and the ease of its implementation, we consider two illustrative examples: the mechanical systems that we considered in Chapter 2, but this time subjected to a constraint on the maximum control effort. All the computations are done in the MATLAB environment. We use the ODE15s package to perform the numerical integration, with a relative error tolerance of $10^{-8}$ and an absolute error tolerance of $10^{-12}$. We use a radially unbounded positive definite function of the form
$$V = \tfrac{1}{2}a_1\, x^T x + \tfrac{1}{2}a_2\, \dot x^T \dot x + a_{12}\, \dot x^T x, \qquad w = \eta V, \qquad (4.44)$$
where $\eta = 2a_{12}/a_2$. Thus, we enforce a constraint of the form
$$\dot V = -\eta V. \qquad (4.45)$$
Using these positive definite functions,
$$A = \frac{\partial V}{\partial \dot x} = a_{12}\,x^T + a_2\,\dot x^T. \qquad (4.46)$$
Also, we choose the positive definite matrix $N$ used in the definition of the control cost as $N = M^{-1}$. With these choices, the equations for the control force can be simplified to
$$Q^C = \begin{cases} \dfrac{A^T}{A M^{-1}A^T}\, b_1 & \text{if } \lvert b_1\rvert \le \sqrt{A M^{-1}A^T\, J_{\max}},\\[8pt] \mathrm{sgn}(b_1)\,\sqrt{J_{\max}}\,\dfrac{A^T}{\sqrt{A M^{-1}A^T}} & \text{if } \lvert b_1\rvert > \sqrt{A M^{-1}A^T\, J_{\max}}. \end{cases} \qquad (4.47)$$
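The two branches of Eq. (4.47) translate directly into the following MATLAB sketch, with the (V, w) pair of Eq. (4.44) and N = M^{-1}; the function name and its inputs are illustrative assumptions.

% Sketch (assumed MATLAB): saturated control force of Eq. (4.47).
function Qc = sat_control(x, xdot, M, F, a1, a2, a12, eta, Jmax)
    V  = 0.5*a1*(x'*x) + 0.5*a2*(xdot'*xdot) + a12*(xdot'*x);  % Eq. (4.44)
    A  = a12*x' + a2*xdot';                  % A = dV/dxdot, Eq. (4.46)
    b  = -eta*V - (a1*x' + a12*xdot')*xdot;  % b of Eq. (4.8) for this (V, w)
    b1 = b - A*(M\F);                        % b1 = b - A*M^{-1}*F
    d  = A*(M\A');                           % scalar A*M^{-1}*A'
    if abs(b1) <= sqrt(Jmax*d)
        Qc = A' * (b1/d);                    % unsaturated branch
    else
        Qc = sign(b1) * sqrt(Jmax) * A' / sqrt(d);   % saturated branch
    end
end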
Example 1:
For our first example, let us consider the same mechanical system considered in Example 1 of Chapter 2. We describe the system once again for convenience. The equation of motion for the mechanical system is
$$M\ddot x = F := -Kx - K_{nl}(x). \qquad (4.48)$$
In the above equation, $x = [x_1, x_2, x_3]^T \in R^3$ is the displacement 3-vector, $K_{nl}(x)$ is a nonlinear cubic stiffness term that couples the degrees of freedom pairwise, $K_{nl}(x) = [x_1^3 x_2,\; x_2^3 x_3,\; x_3^3 x_1]^T$, $M$ is a 3 by 3 time-dependent, symmetric, positive definite diagonal mass matrix, and $K$ is the 3 by 3 symmetric stiffness matrix
$$K = \begin{bmatrix} 100 & -100 & 0\\ -100 & 150 & -50\\ 0 & -50 & 100 \end{bmatrix}.$$
We choose the initial conditions as $x_0 = [1, 2, 1]^T$ and $\dot x_0 = [2, 3, 0]^T$.
We assume that $J_{\max}$ in Eq. (4.5) is $1\times 10^{5}$. This is about 12% of the maximum control cost $J(t)$ for the system without a constraint on the control force, which is about $8\times 10^{5}$.
On this system, we apply a control force explicitly computed using Eq. (4.47). We use the positive definite functions defined in Eq. (4.44) with parameters $a_1 = 1$, $a_2 = 4$, $a_{12} = 1.4$, and $\eta = 0.7$. The equation of motion of the controlled system is
$$M\ddot x = F + Q^C := -Kx - K_{nl}(x) + Q^C. \qquad (4.49)$$
We integrate the above equation using the ODE15s package in the MATLAB environment for 40 seconds. The response during the first 40 seconds is shown in Figure 1, where the responses of the controlled and the uncontrolled systems are placed side by side for ease of comparison. The controlled system goes to zero, whereas the uncontrolled system keeps on undergoing oscillations.
Figure 1. (a), (b), (c) Time history of the response of the controlled system. (d), (e), (f) Time history of the response of the uncontrolled system.
Figure 2 shows the projection of the phase portraits of both the controlled system and the uncontrolled system on the $x_1$-$\dot x_1$, $x_2$-$\dot x_2$, and $x_3$-$\dot x_3$ planes. The relevant plots for the controlled and uncontrolled systems are arranged side by side. Initial positions are indicated by square markers. The positions at the end of the 40 seconds of integration are marked by circular markers. In the plots, we can observe the controlled system asymptotically going to zero.
Figure 2. (a), (b), (c) Projection of the phase portrait of the controlled system on the various planes. (d), (e), (f) Projection of the phase portrait of the uncontrolled system on the various planes.
The control forces, computed using Eq. (4.47), are plotted as a function of time in Figure 3.
Figure 3. Control force $Q^C$.
Figure 4. (a) Variation of the control cost $J(t)$ with time for the system subjected to the maximum control constraint (given in Eq. (4.5)). (b) Variation of the control cost $J(t)$ with time for the case when there is no limit on the maximum control cost.
Figure 4(a) shows the variation of the control cost $J(t)$ with time. We can see that $J(t)$ at no time exceeds $J_{\max} = 1\times 10^{5}$. Figure 4(b) shows the variation of $J(t)$ for the case when $J_{\max}$ is infinite (i.e., no limit on the maximum control effort).
Figure 5. (a) Variation of the Lyapunov function with time for the controlled system. (b) Variation of the Lyapunov function with time for the uncontrolled system.
Figure 5(a) shows the variation of the Lyapunov function with time for the controlled system, and Figure 5(b) shows the corresponding plot for the uncontrolled system. We can observe that the controlled system loses energy quickly, while the uncontrolled system does not.
Figure 6. Error $e(t)$ in satisfying the constraint.
Figure 6 shows the error in satisfying the constraint,
$$e(t) := \dot V + \eta V = A M^{-1}Q^C - b_1. \qquad (4.50)$$
The constraint is not satisfied at certain times, because we do not have sufficient control force available to impose it.
Example 2:
Let us consider a mechanical system described by
$$M\ddot x + Kx + S(x) + C(\dot x) = 0, \qquad (4.51)$$
where $S(x)$ is a cubic stiffness term defined as
$$S(x) := \begin{bmatrix} 1 & 2 & 0\\ 2 & 3 & 1\\ 0 & 1 & 2 \end{bmatrix}\begin{bmatrix} x_1^3\\ x_2^3\\ x_3^3 \end{bmatrix},$$
$C(\dot x)$ is a nonlinear damping term defined as
$$C(\dot x) := \begin{bmatrix} \dot x_1\dot x_2\\ \dot x_2\dot x_3\\ \dot x_3\dot x_1 \end{bmatrix},$$
the mass matrix $M(t)$ is a prescribed time-dependent, symmetric, positive definite diagonal matrix, and the linear stiffness matrix $K$ is chosen to be
$$K = \begin{bmatrix} 100 & -100 & 0\\ -100 & 200 & -100\\ 0 & -100 & 100 \end{bmatrix}.$$
We choose the initial conditions as $x_0 = [2,\, 2,\, 1]^T$ and $\dot x_0 = [2,\, 1,\, 3]^T$.
We assume that $J_{\max}$ in Eq. (4.5) is $3\times 10^{3}$. This is less than 25% of the maximum control cost $J(t)$ for the system without a constraint on the control force, which is about $1.3\times 10^{4}$.
On this system, we apply a control force explicitly computed using Eq. (4.47). We use the positive definite functions defined in Eq. (4.44) with parameters $a_1 = 1$, $a_2 = 6$, $a_{12} = 1.5$, and $\eta = 0.5$. The equation of motion of the controlled system is
$$M\ddot x + Kx + S(x) + C(\dot x) = Q^C. \qquad (4.52)$$
We integrate this equation, and the equation for the uncontrolled system, Eq. (4.51), using the ODE15s package in the MATLAB environment for 40 seconds. The response during the first 40 seconds is shown in Figure 7, where the responses of the controlled and the uncontrolled systems are shown side by side. The controlled system goes to zero, whereas the uncontrolled system keeps on undergoing oscillations.
Figure 7. (a), (b), (c) Time history of the response of the controlled system. (d), (e), (f) Time history of the response of the uncontrolled system.
Figure 8 shows the projection of the phase portraits of both the controlled system and the uncontrolled system on the $x_1$-$\dot x_1$, $x_2$-$\dot x_2$, and $x_3$-$\dot x_3$ planes. The relevant plots for the controlled and uncontrolled systems are arranged side by side. Initial positions are indicated by square markers. The positions at the end of the 40 seconds of integration are marked by circular markers. In the plots, we can observe the controlled system asymptotically going to zero.
Figure 8. (a), (b), (c) Projection of the phase portrait of the controlled system on the various planes. (d), (e), (f) Projection of the phase portrait of the uncontrolled system on the various planes.
The control forces, computed using Eq. (4.47), are plotted as a function of time in Figure 9. These control forces enforce the constraint (4.45) as closely as possible while not violating the maximum control constraint (4.5).
Figure 9. Control force $Q^C$.
Figure 10. (a) Variation of the control cost $J(t)$ with time for the system subjected to the maximum control constraint (given in Eq. (4.5)). (b) Variation of the control cost $J(t)$ with time for the case when there is no limit on the maximum control cost.
Figure 10(a) shows the variation of the control cost $J(t)$ with time. We can see that $J(t)$ at no time exceeds $J_{\max} = 3\times 10^{3}$. Figure 10(b) shows the variation of $J(t)$ for the case when $J_{\max}$ is infinite (i.e., no limit on the maximum control effort).
Figure 11. (a) Variation of the Lyapunov function with time for the controlled system. (b) Variation of the Lyapunov function with time for the uncontrolled system.
Figure 11(a) shows the variation of the Lyapunov function with time for the controlled system, and Figure 11(b) shows the corresponding plot for the uncontrolled system. We can observe that the controlled system loses energy quickly, while the uncontrolled system does not.
Figure 12. Error $e(t)$ in satisfying the constraint.
Figure 12 shows the error $e(t)$ in satisfying the constraint, given in Eq. (4.50). The constraint is not satisfied at certain times, because providing sufficient control force to enforce it would violate the maximum control force constraint.
4.4 Conclusions
In this chapter, we have shown how we can modify and extend the results obtained in Chapter 2
to the situation when a limit is placed on the maximum allowable control force available to
control the dynamical system. In this case, the stability constraint is not satisfied exactly but only
in the least square sense. It has been shown that the system can still be stabilized provided the
maximum control effort allowed is high enough. It has also been shown how the limit on the
control effort required to stabilize the system can be estimated. Numerical examples have been
provided to demonstrate the efficacy of the control strategy proposed here.
Chapter 5. Decentralized Control of Mechanical System
5.1 Introduction
In this chapter, we show a simple approach to decentralized control design. Each subsystem of
an interconnected interacting system is controlled in a decentralized manner using locally
available information related only to the state of that particular subsystem. The approach is
different from traditional approaches in that the control is designed to ensure stability from the ground up, eliminating the need to search for a global Lyapunov function to ensure the stability
of the entire coupled system. The methodology is developed in two steps. In the first step, we
define what we call a ‘nominal system’, which consists of ‘nominal subsystems’. The nominal
subsystems are assumed to be acted upon by forces that can be computed using only locally
available information. We obtain an asymptotically stable control for each nominal subsystem
that simultaneously minimizes a suitable norm of the control effort at each instant of time. In the
second step, we determine the control force that needs to be applied to the actual subsystem in
addition to the control force calculated for the nominal subsystem, so each actual subsystem
tracks the state of the controlled nominal subsystem as closely as desired. This additional
compensating controller is obtained using the concept of a generalized sliding surface control.
The design of this additional controller needs as its input an estimate of the bound on the
mismatch between the nominal and the actual subsystems. Examples of non-autonomous,
nonlinear, distributed systems are provided that demonstrate the efficacy and ease of
implementation of the control methodology.
5.2. Decentralized Control
5.2.1 Actual System
An ‘actual system’ is the real life mechanical system that we are trying to control. Let us
consider a general mechanical system consisting of n nonlinear, non-autonomous mechanical
subsystems, which are mutually coupled, and whose dynamics are described by the equations
$$M^{(i)}(x^{(i)},t)\,\ddot x^{(i)} = F^{(i)}(x,\dot x,t), \qquad i = 1,2,\ldots,n, \qquad (5.1)$$
where $M^{(i)}$ is the $n_i$ by $n_i$ symmetric, positive definite mass matrix that describes the $i$-th subsystem, the vector $x^{(i)} \in R^{n_i}$ describes the configuration of the $i$-th subsystem, and $F^{(i)} \in R^{n_i}$ is the external force vector acting on it. In what follows, the superscript '$(i)$' on a quantity indicates that the quantity pertains to the $i$-th subsystem. The vector $x = [x^{(1)T}, x^{(2)T},\ldots, x^{(n)T}]^T$ gives the configuration vector of the interacting conglomerate system, and it has dimension $N = \sum_{i=1}^{n} n_i$. As noted from the right hand side of Eq. (5.1), in addition to the externally applied forces on the system, which may depend on its global state, each subsystem can also, in general, exert forces on every other. The dots on top of the variables indicate derivatives with respect to time. Equation (5.1) can be written more compactly as
$$M(x,t)\,\ddot x := \mathrm{diag}\!\left(M^{(1)}(x^{(1)},t),\ldots,M^{(n)}(x^{(n)},t)\right)\ddot x = \left[F^{(1)T},\ldots,F^{(n)T}\right]^T =: F(x,\dot x,t), \qquad (5.2)$$
where $M$ is the $N$ by $N$ positive definite mass matrix and $F$ is an $N$-vector, each of whose components depends on $t$, $x$, and $\dot x$. Equation (5.2) is defined over the domain $D\times R^{+}$, where $D \subset R^{N}\times R^{N}$. We shall assume that the $M^{(i)}$'s and $F^{(i)}$'s are at least $C^1$ functions of their arguments.
Our aim is to control the above system in such a way that the controlled actual system has an equilibrium point at $x = 0$, $\dot x = 0$. We do this in two steps. First, we define what we shall call a 'nominal system'; this is done in the next section, where we also derive a decentralized control for this nominal system that ensures its asymptotic stability. Then, in a subsequent section, we derive an additional controller that forces the actual system to track the trajectories of the nominal system as closely as desired, thus ensuring the stability of the controlled actual system.
5.2.2 Nominal System
Let us take a typical nominal subsystem whose mass matrix is $M^{(i)}$, displacement vector is $x_n^{(i)}$, and velocity vector is $\dot x_n^{(i)}$. The subscript $n$ indicates quantities that correspond to the nominal system. The global displacement vector for the nominal system is $x_n = [x_n^{(1)T}, x_n^{(2)T},\ldots, x_n^{(n)T}]^T$. Let us define $\tilde x_n^{(i)}$, an approximation of the global displacement vector, by substituting in $x_n$ zeros for the displacements of all subsystems except the $i$-th subsystem ($x_n^{(j)} = 0$, $j \in (1,n)$, $j \ne i$),
$$\tilde x_n^{(i)} = \left[0^T,\ldots, x_n^{(i)T},\ldots, 0^T\right]^T. \qquad (5.3)$$
Similarly, we can define an approximation of the global velocity vector as
$$\tilde{\dot x}_n^{(i)} = \left[0^T,\ldots, \dot x_n^{(i)T},\ldots, 0^T\right]^T, \qquad (5.4)$$
and, in like manner, the external force on the $i$-th nominal subsystem by substituting these approximate global vectors into the expression for the external force, as
$$\bar F^{(i)}(x_n^{(i)},\dot x_n^{(i)},t) := F^{(i)}(\tilde x_n^{(i)}, \tilde{\dot x}_n^{(i)}, t). \qquad (5.5)$$
It should be noted that the force on a nominal subsystem depends only on the state of that particular subsystem. The equation of motion for the entire nominal system can then be written in simplified form as
$$M\ddot x_n = \left[\bar F^{(1)}(x_n^{(1)},\dot x_n^{(1)},t)^T,\ldots,\bar F^{(n)}(x_n^{(n)},\dot x_n^{(n)},t)^T\right]^T =: \bar F_n. \qquad (5.6)$$
We apply control forces $Q_c^{(i)}(x_n^{(i)},\dot x_n^{(i)},t) \in R^{n_i}$ on each nominal subsystem, so that the controlled nominal system has an equilibrium point at $x_n = \dot x_n = 0$. Let us define the global control force vector for the nominal system as
$$Q_c := \left[Q_c^{(1)T}, Q_c^{(2)T},\ldots, Q_c^{(n)T}\right]^T. \qquad (5.7)$$
In the presence of this control force, the equation of motion for the controlled nominal system is
$$M\ddot x_n = \bar F_n + Q_c. \qquad (5.8)$$
Let us consider a Lyapunov function $V(x_n,\dot x_n,t)$ for this controlled nominal system, which is described by Eq. (5.8), such that
(i) $$V_L(x_n,\dot x_n) \le V(x_n,\dot x_n,t) \le V_U(x_n,\dot x_n), \qquad (5.9)$$
where $V_L(x_n,\dot x_n)$ and $V_U(x_n,\dot x_n)$ are positive definite functions on the domain $D$, and
(ii) $$\dot V := \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x_n}\dot x_n + \frac{\partial V}{\partial \dot x_n}\ddot x_n \le -w(x_n,\dot x_n) \qquad (5.10)$$
in $D$, where $w(x_n,\dot x_n)$ is a positive definite function in $D$. Any controller that causes the dynamics of the entire controlled nominal system to satisfy Eq. (5.10), for a given candidate Lyapunov function that satisfies Eq. (5.9), ensures that the controlled nominal system has an asymptotically stable equilibrium point at $x_n = 0$, $\dot x_n = 0$ [1, 51-54].
Our aim is to design the decentralized controllers $Q_c^{(i)}$ by considering a user-prescribed candidate Lyapunov function $V$ -- a function that satisfies only relation (5.9) above -- and a user-prescribed positive definite function $w$. Since we are interested in localized control, we further assume that the candidate Lyapunov function $V$ is obtained as the sum of $n$ prescribed local candidate Lyapunov functions $V^{(i)}$, $i = 1,2,\ldots,n$, one for each of the $n$ subsystems, each depending only on the locally available states, so that
$$V(x_n,\dot x_n,t) = \sum_{i=1}^{n} V^{(i)}(x_n^{(i)},\dot x_n^{(i)},t). \qquad (5.11)$$
Similarly, the function $w$ is obtained as the sum of $n$ local positive definite functions $w^{(i)}$, $i = 1,2,\ldots,n$, one for each of the $n$ nominal subsystems, so that
$$w(x_n,\dot x_n) = \sum_{i=1}^{n} w^{(i)}(x_n^{(i)},\dot x_n^{(i)}). \qquad (5.12)$$
In addition, we shall require each of the decentralized controllers $Q_c^{(i)}$ to minimize a user-prescribed cost function $J^{(i)}(x_n^{(i)},\dot x_n^{(i)},t)$, $i = 1,2,\ldots,n$, at each instant of time $t$. The cost function for the $i$-th controller is assumed to be of the form
$$J^{(i)}(x_n^{(i)},\dot x_n^{(i)},t) = Q_c^{(i)T}\, N^{(i)}(x_n^{(i)},\dot x_n^{(i)},t)\, Q_c^{(i)}, \qquad (5.13)$$
where $N^{(i)}(x_n^{(i)},\dot x_n^{(i)},t)$, $i = 1,2,\ldots,n$, are again user-prescribed positive definite matrices. From here on, we will not explicitly show the arguments of the various quantities unless required for clarity.
Derivation of the control force on the $i$-th nominal subsystem:
We begin by considering the $i$-th nominal subsystem. To ensure stability of the proposed control, we shall require the time derivative of each candidate Lyapunov function $V^{(i)}$ along the trajectories of the solution of the controlled nominal subsystem to be negative. To do this, let us enforce the following $n$ constraints on the nominal system described by Eq. (5.6):
$$\dot V^{(i)} := \frac{\partial V^{(i)}}{\partial t} + \frac{\partial V^{(i)}}{\partial x_n^{(i)}}\dot x_n^{(i)} + \frac{\partial V^{(i)}}{\partial \dot x_n^{(i)}}\ddot x_n^{(i)} = -w^{(i)}(x_n^{(i)},\dot x_n^{(i)}), \qquad i = 1,2,\ldots,n. \qquad (5.14)$$
Denoting
$$A^{(i)} := \frac{\partial V^{(i)}}{\partial \dot x_n^{(i)}}, \qquad (5.15)$$
Eq. (5.14) can be rearranged as
$$A^{(i)}\ddot x_n^{(i)} = -w^{(i)} - \frac{\partial V^{(i)}}{\partial x_n^{(i)}}\dot x_n^{(i)} - \frac{\partial V^{(i)}}{\partial t}, \qquad i = 1,2,\ldots,n, \qquad (5.16)$$
which, upon use of Eq. (5.8), can be rearranged as
$$A^{(i)}M^{(i)-1}Q_c^{(i)} = -w^{(i)} - \frac{\partial V^{(i)}}{\partial x_n^{(i)}}\dot x_n^{(i)} - \frac{\partial V^{(i)}}{\partial t} - A^{(i)}M^{(i)-1}\bar F^{(i)} =: b^{(i)}, \qquad i = 1,2,\ldots,n. \qquad (5.17)$$
We can write Eq. (5.17) more compactly as
$$B^{(i)}Q_c^{(i)} = b^{(i)}, \qquad i = 1,2,\ldots,n, \qquad (5.18)$$
where
$$B^{(i)} = A^{(i)}M^{(i)-1}. \qquad (5.19)$$
By defining the vector
$$z^{(i)} = N^{(i)1/2}Q_c^{(i)}, \qquad (5.20)$$
the cost function given in Eq. (5.13) becomes
$$J^{(i)} = z^{(i)T}z^{(i)}. \qquad (5.21)$$
Observing the form of the cost function in Eq. (5.21), let us rewrite Eq. (5.18) as
$$G^{(i)}z^{(i)} := B^{(i)}N^{(i)-1/2}\left(N^{(i)1/2}Q_c^{(i)}\right) = b^{(i)}, \qquad (5.22)$$
where
$$G^{(i)} := B^{(i)}N^{(i)-1/2}. \qquad (5.23)$$
Thus, we seek the $z^{(i)}$ that satisfies Eq. (5.22) and at the same time minimizes the cost function given in Eq. (5.21). This is obtained as [15-16]
$$z^{(i)} = G^{(i)+}b^{(i)}, \qquad (5.24)$$
where the '+' in the superscript indicates the Moore-Penrose inverse. Then, by Eq. (5.20), the required control force that satisfies the constraint in Eq. (5.14) and minimizes the cost function given in Eq. (5.13) is obtained as
$$Q_c^{(i)} = N^{(i)-1/2}G^{(i)+}b^{(i)}. \qquad (5.25)$$
Observing that $G^{(i)}$ is a row vector, we can further simplify Eq. (5.25) to
$$Q_c^{(i)} = N^{(i)-1/2}\,\frac{G^{(i)T}}{G^{(i)}G^{(i)T}}\,b^{(i)} = \frac{N^{(i)-1}B^{(i)T}}{B^{(i)}N^{(i)-1}B^{(i)T}}\,b^{(i)}. \qquad (5.26)$$
Result: The control forces $Q_c^{(i)}$, $i = 1,2,\ldots,n$, obtained in Eq. (5.25) ensure that the entire controlled nominal system represented by Eq. (5.8) has an asymptotically stable equilibrium point at $x_n = 0$, $\dot x_n = 0$.
Proof: Using the positive definite candidate Lyapunov function given in Eq. (5.11), its time derivative along the trajectories of the solution of Eq. (5.8) is given by
$$\dot V = \sum_{i=1}^{n}\dot V^{(i)} = -\sum_{i=1}^{n} w^{(i)} = -w. \qquad (5.27)$$
Thus, the controlled nominal system of Eq. (5.8) has an asymptotically stable equilibrium point at $x_n = 0$, $\dot x_n = 0$.
For this control scheme to work, we need Eq. (5.18) to be consistent. We can use the same class of positive definite functions $V^{(i)}$ and corresponding $w^{(i)}$ defined in the Appendix following Chapter 2, so that Eq. (5.18) is consistent.
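For one nominal subsystem, Eq. (5.26) can be evaluated with only local information, as in the MATLAB sketch below; the function name and arguments are illustrative assumptions. Ai is the row vector dV(i)/dxdot_n(i), dVdx_i is the row vector dV(i)/dx_n(i), and wi is the value of w(i); the local V(i) is assumed to have no explicit time dependence.

% Sketch (assumed MATLAB): decentralized nominal control force, Eq. (5.26).
function Qci = nominal_control(Mi, Fbar_i, Ai, dVdx_i, wi, xdot_i, Ni)
    Bi    = Ai / Mi;                          % B(i) = A(i)*M(i)^{-1}, Eq. (5.19)
    bi    = -wi - dVdx_i*xdot_i - Bi*Fbar_i;  % right-hand side of Eq. (5.17)
    denom = Bi * (Ni \ Bi');                  % scalar B(i)*N(i)^{-1}*B(i)'
    Qci   = (Ni \ Bi') * (bi / denom);        % Eq. (5.26)
end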
5.2.3 Controlled Actual System
Since the controller of an actual subsystem does not have knowledge of the entire global state vector, it cannot accurately evaluate the external force $F^{(i)}$ in Eq. (5.1). Our nominal system adduces the force $\bar F^{(i)}$ from locally available information, namely $x_n^{(i)}$ and $\dot x_n^{(i)}$. The disregard of non-local information regarding the force $F^{(i)}$ could make the actual system unstable, and this, in fact, is the crux of the problem of decentralized control.
To ensure the stability of the controlled actual system, we add an additional compensating controller. This controller utilizes the concept of generalized sliding surfaces to ensure that the controlled actual system tracks the solution trajectories of the nominal system within pre-specified error bounds, thus ensuring its stability. As we shall see shortly, this controller needs a bound on the difference between the force $F^{(i)}$ acting on the actual subsystem and the approximate force $\bar F^{(i)}$ acting on the nominal subsystem, to ensure that the controlled actual system can track the trajectories of the controlled nominal system well.
The equation of motion of the controlled actual subsystem in the presence of a compensating controller $Q_u^{(i)}$ is
$$M^{(i)}\ddot x^{(i)} = F^{(i)}(x,\dot x,t) + Q_c^{(i)}(t) + Q_u^{(i)}(x,\dot x,t), \qquad i = 1,2,\ldots,n. \qquad (5.28)$$
To obtain the additional controller $Q_u^{(i)}$, we first define the tracking error for a typical subsystem (the difference between the state of the controlled actual subsystem and the controlled nominal subsystem),
$$e^{(i)}(t) = x^{(i)}(t) - x_n^{(i)}(t), \qquad i = 1,2,\ldots,n. \qquad (5.29)$$
Let us define a sliding surface for the subsystem as
$$s^{(i)}(t) = \dot e^{(i)} + L^{(i)}e^{(i)}, \qquad i = 1,2,\ldots,n. \qquad (5.30)$$
We denote $e := [e^{(1)T}, e^{(2)T},\ldots, e^{(n)T}]^T$ and $s := [s^{(1)T}, s^{(2)T},\ldots, s^{(n)T}]^T$, where $e^{(i)}, s^{(i)} \in R^{n_i}$. If we could restrict the dynamics of the controlled actual subsystem to lie on the sliding surface $s = 0$, it would slide along this surface to the asymptotic equilibrium point $e = 0$, $\dot e = 0$; the controlled actual system would then track the trajectories of the controlled nominal system. However, restricting the system exactly to the sliding surface requires a discontinuous control force. Instead, we provide a continuous control force that restricts the system to a region enclosing the sliding surface, which can be made as close to the sliding surface as we desire. Let us denote this region by $\Omega$.
To ensure that the controlled actual subsystem is restricted to such a region enclosing the sliding surface, we apply an additional compensating control force, which is explicitly given as (see Appendix 2 at the end of this chapter)

Q_u^{(i)} = -M^{(i)} \left[ L^{(i)} \dot{e}^{(i)} + \beta^{(i)} f^{(i)}(s^{(i)}) \right],   i = 1, 2, ..., n.    (5.31)

In the above equation, \beta^{(i)} is a positive constant chosen such that \beta^{(i)} \ge \| M^{(i)-1} ( F^{(i)} - F_n^{(i)} ) \|_\infty for all t \ge 0, where \| \cdot \|_\infty represents the infinity norm. The function f^{(i)}(s^{(i)}) is a vector-valued function whose j-th component is defined as

f_j^{(i)}(s^{(i)}) = g^{(i)}( s_j^{(i)} / \varepsilon^{(i)} ),    (5.32)

where s_j^{(i)} is the j-th component of s^{(i)}, and g^{(i)} is an odd, monotonically increasing function such that g^{(i)}( s_j^{(i)} / \varepsilon^{(i)} ) \ge 1 whenever s_j^{(i)} \ge \varepsilon^{(i)}.
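The Python sketch below illustrates one possible implementation of the compensating force of Eqs. (5.31)-(5.32), using the cubic choice g(y) = y^3 that is adopted later in Example 1. The function and variable names are mine, and the form Q_u = -M (L \dot{e} + \beta f(s)) follows the reconstruction of Eq. (5.31) given above; it is a sketch under those assumptions, not the thesis code.

    import numpy as np

    def compensating_force(M, L, e, e_dot, beta, eps):
        """Continuous sliding-surface compensator, Eqs. (5.31)-(5.32),
        with the odd, monotonically increasing choice g(y) = y**3."""
        s = L * e + e_dot                   # sliding variable, Eq. (5.30), scalar gain L
        f = (s / eps) ** 3                  # f_j = g(s_j / eps), Eq. (5.32)
        return -M @ (L * e_dot + beta * f)

    # Illustrative call with made-up numbers for a 3-DOF subsystem.
    M = np.diag([1.0, 2.0, 1.5])
    e = np.array([1e-4, -2e-4, 5e-5])
    e_dot = np.array([1e-3, 0.0, -5e-4])
    Qu = compensating_force(M, L=10.0, e=e, e_dot=e_dot, beta=5000.0, eps=1e-4)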
The controller thus requires an estimate of the quantity \| M^{(i)-1} ( F^{(i)} - F_n^{(i)} ) \|_\infty over the time horizon over which the control is done. Providing an overestimate of this quantity, however, has only a small influence on the magnitude of the control force Q_u^{(i)}. Later in this section we come back to how a rough estimate of \beta^{(i)} can be obtained.

This control force ensures that the controlled actual subsystem is restricted to a region (which can be made as close to the surface s^{(i)} = 0 as we desire) around the sliding surface. The proof is given in Appendix 2. We also show in the Appendix that the asymptotic bound on the error in tracking the displacements of the nominal subsystem is

\lim_{t \to \infty} \| e^{(i)}(t) \| \le \frac{ \delta^{(i)} }{ L^{(i)} },    (5.33)

where

\delta^{(i)} := \varepsilon^{(i)} \, (g^{(i)})^{-1}(1).    (5.34)

Similarly, the bound on the error in tracking the velocities is

\lim_{t \to \infty} \| \dot{e}^{(i)}(t) \| \le 2 \delta^{(i)}.    (5.35)
It should be noted that we can choose the functions g^{(i)} so as to satisfy user-prescribed bounds on the tracking errors. This is demonstrated in Example 1 in the following section.

We now return to the problem of estimating \| M^{(i)-1} ( F^{(i)} - F_n^{(i)} ) \|. Since \dot{V}^{(i)} is negative throughout the duration of control for the nominal system, we must have

V_L^{(i)}(x_n^{(i)}, \dot{x}_n^{(i)}) \le V^{(i)}(x_n^{(i)}, \dot{x}_n^{(i)}, t) \le V^{(i)}(x_n^{(i)}(0), \dot{x}_n^{(i)}(0), 0) \le V_U^{(i)}(x_n^{(i)}(0), \dot{x}_n^{(i)}(0)) := V_0^{(i)}.

This implies that (x_n^{(i)}, \dot{x}_n^{(i)}) at any time t will lie in a local domain D_0^{(i)}, which is defined as

D_0^{(i)} := \{ (x_n^{(i)}, \dot{x}_n^{(i)}) : V_L^{(i)}(x_n^{(i)}, \dot{x}_n^{(i)}) \le V_0^{(i)} \}.    (5.36)

Thus, the global state of the nominal system is restricted to the domain

D_0 := D_0^{(1)} \times D_0^{(2)} \times \cdots \times D_0^{(n)}.    (5.37)

Since we assume that the norms of the tracking errors in displacement and velocity, \| x^{(i)} - x_n^{(i)} \| and \| \dot{x}^{(i)} - \dot{x}_n^{(i)} \|, are very small, we can obtain an approximate initial estimate for \beta^{(i)}(t) as the supremum of \| M^{(i)-1}( F^{(i)} - F_n^{(i)} ) \| over this domain:

\beta^{(i)} \ge \sup_{(x_n, \dot{x}_n) \in D_0, \; t \ge 0} \left\| M^{(i)-1} \left[ F^{(i)}(x_n, \dot{x}_n, t) - F_n^{(i)}(x_n^{(i)}, \dot{x}_n^{(i)}, t) \right] \right\|.    (5.38)
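In practice, the supremum in Eq. (5.38) can be approximated numerically, for instance by sampling states inside the domain D_0 and times in the control horizon and keeping the largest value found. The short Python sketch below does exactly that; the sampling strategy, the function names, and the simple rejection test V_L <= V_0 are my own illustrative choices, not a prescription from the thesis.

    import numpy as np

    def estimate_beta(M_inv, F_actual, F_nominal, V_L, V0, sample_state, t_max,
                      n_samples=20000, rng=None):
        """Crude Monte Carlo over-estimate of the supremum in Eq. (5.38).
        sample_state(rng) must return a candidate (x, xdot) pair; states with
        V_L(x, xdot) > V0 are rejected so that only points of D_0 (Eq. (5.36)) are used."""
        rng = rng or np.random.default_rng()
        beta = 0.0
        for _ in range(n_samples):
            x, xdot = sample_state(rng)
            if V_L(x, xdot) > V0:                     # outside D_0
                continue
            t = rng.uniform(0.0, t_max)
            diff = M_inv @ (F_actual(x, xdot, t) - F_nominal(x, xdot, t))
            beta = max(beta, np.linalg.norm(diff, np.inf))
        return beta

Since a somewhat overestimated bound only mildly affects Q_u^{(i)}, the value returned by such a sampling scheme would typically be inflated by a safety factor before use.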
5.3. Numerical Examples

Example 1:

To show the efficacy of the control we have developed, we provide an example of a nonlinear, unstable mechanical system consisting of two mutually coupled subsystems. Let the equations of motion for the subsystems be

M^{(1)}(t) \ddot{x}^{(1)} = -k^{(1)} x^{(1)} - u^{(1)}(x^{(1)}) - v^{(1)}(x, \dot{x}, t) := F^{(1)}(x, \dot{x}, t),
M^{(2)}(t) \ddot{x}^{(2)} = -k^{(2)} x^{(2)} - u^{(2)}(x^{(2)}) - v^{(2)}(x, \dot{x}, t) := F^{(2)}(x, \dot{x}, t).    (5.39)

In the above equations,

x^{(1)} = [x_1^{(1)}, x_2^{(1)}, x_3^{(1)}]^T \in R^3,   x^{(2)} = [x_1^{(2)}, x_2^{(2)}]^T \in R^2,    (5.40)

M^{(1)}(t) and M^{(2)}(t) are diagonal, time-dependent mass matrices of sizes 3 by 3 and 2 by 2 respectively (Eq. (5.41)), v^{(1)}(x, \dot{x}, t) and v^{(2)}(x, \dot{x}, t) are nonlinear coupling forces formed from products of components of the states of the two subsystems (Eq. (5.42)), and u^{(1)}(x^{(1)}), u^{(2)}(x^{(2)}) are cubic nonlinear stiffness terms that depend only on the local states (Eq. (5.43)). Further, k^{(1)}, k^{(2)} are symmetric, positive semi-definite stiffness matrices of the same dimensions as x^{(1)}, x^{(2)} respectively. In our example we use

k^{(1)} = [ 100  -100  0 ;  -100  150  -50 ;  0  -50  100 ]   and   k^{(2)} = [ 150  -50 ;  -50  100 ].

The force on the first nominal subsystem is obtained by substituting x_n^{(2)} = \dot{x}_n^{(2)} = 0 in the expression for F^{(1)} in Eq. (5.39) as

F_n^{(1)} := -k^{(1)} x_n^{(1)} - u^{(1)}(x_n^{(1)}).    (5.44)

Similarly, the force on the second nominal subsystem is obtained by substituting x_n^{(1)} = \dot{x}_n^{(1)} = 0 in the expression for F^{(2)} as

F_n^{(2)} := -k^{(2)} x_n^{(2)} - u^{(2)}(x_n^{(2)}).    (5.45)
The equations of motion of the nominal system are

M^{(1)} \ddot{x}_n^{(1)} = -k^{(1)} x_n^{(1)} - u^{(1)}(x_n^{(1)}) := F_n^{(1)}(x_n^{(1)}, \dot{x}_n^{(1)}, t),
M^{(2)} \ddot{x}_n^{(2)} = -k^{(2)} x_n^{(2)} - u^{(2)}(x_n^{(2)}) := F_n^{(2)}(x_n^{(2)}, \dot{x}_n^{(2)}, t),    (5.46)

and they depend only on the local states. We next compute the control forces Q_c^{(1)}, Q_c^{(2)} using Eq. (5.25). For that, let us choose the required parameters as N^{(1)} = M^{(1)-1}, N^{(2)} = M^{(2)-1}. Let us choose the positive definite functions V^{(1)}, V^{(2)} as

V^{(1)} = (1/2) a_1^{(1)} x_n^{(1)T} x_n^{(1)} + (1/2) a_2^{(1)} \dot{x}_n^{(1)T} \dot{x}_n^{(1)} + a_{12}^{(1)} x_n^{(1)T} \dot{x}_n^{(1)},
V^{(2)} = (1/2) a_1^{(2)} x_n^{(2)T} x_n^{(2)} + (1/2) a_2^{(2)} \dot{x}_n^{(2)T} \dot{x}_n^{(2)} + a_{12}^{(2)} x_n^{(2)T} \dot{x}_n^{(2)},    (5.47)

and w^{(1)}, w^{(2)} as

w^{(1)} = \alpha^{(1)} V^{(1)},   w^{(2)} = \alpha^{(2)} V^{(2)},    (5.48)

where a_1^{(1)} = 1, a_2^{(1)} = 8, a_{12}^{(1)} = 1, \alpha^{(1)} = 1/4, and a_1^{(2)} = 1, a_2^{(2)} = 4, a_{12}^{(2)} = 1, \alpha^{(2)} = 1/2. These values cause Eq. (5.18) to be consistent, as can be verified from Appendix 1.

With this choice of positive definite functions, we obtain

A^{(1)} = a_2^{(1)} \dot{x}_n^{(1)T} + a_{12}^{(1)} x_n^{(1)T},   A^{(2)} = a_2^{(2)} \dot{x}_n^{(2)T} + a_{12}^{(2)} x_n^{(2)T},    (5.49)
and the explicit control forces from Eq. (5.26) are given by

Q_c^{(1)} = \frac{A^{(1)T}}{A^{(1)} M^{(1)-1} A^{(1)T}} \left( -\alpha^{(1)} V^{(1)} - a_1^{(1)} x_n^{(1)T} \dot{x}_n^{(1)} - a_{12}^{(1)} \dot{x}_n^{(1)T} \dot{x}_n^{(1)} - A^{(1)} M^{(1)-1} F_n^{(1)} \right),
Q_c^{(2)} = \frac{A^{(2)T}}{A^{(2)} M^{(2)-1} A^{(2)T}} \left( -\alpha^{(2)} V^{(2)} - a_1^{(2)} x_n^{(2)T} \dot{x}_n^{(2)} - a_{12}^{(2)} \dot{x}_n^{(2)T} \dot{x}_n^{(2)} - A^{(2)} M^{(2)-1} F_n^{(2)} \right).    (5.50)
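A compact way to evaluate these nominal control forces numerically is sketched below in Python; it follows the reconstructed expressions of Eqs. (5.47)-(5.50) above, with the constraint \dot{V}^{(i)} = -\alpha^{(i)} V^{(i)}, and all names are illustrative.

    import numpy as np

    def nominal_control(M, F_n, x, xdot, a1, a2, a12, alpha):
        """Q_c for one subsystem, per Eqs. (5.47)-(5.50) as reconstructed above."""
        V = 0.5*a1*(x @ x) + 0.5*a2*(xdot @ xdot) + a12*(x @ xdot)
        A = a2*xdot + a12*x                                   # row vector, Eq. (5.49)
        b = -alpha*V - a1*(x @ xdot) - a12*(xdot @ xdot)
        denom = A @ np.linalg.solve(M, A)                     # A M^{-1} A^T
        if denom < 1e-12:                                     # A = 0: Moore-Penrose inverse gives zero force
            return np.zeros_like(x)
        return A * (b - A @ np.linalg.solve(M, F_n)) / denom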
Thus, the equation of motion of the controlled nominal system is

M^{(1)} \ddot{x}_n^{(1)} = F_n^{(1)}(x_n^{(1)}, \dot{x}_n^{(1)}, t) + Q_c^{(1)},   M^{(2)} \ddot{x}_n^{(2)} = F_n^{(2)}(x_n^{(2)}, \dot{x}_n^{(2)}, t) + Q_c^{(2)}.    (5.51)

We can define the tracking errors between the controlled nominal system and the controlled actual system as in Eq. (5.29) by

e^{(1)} = x^{(1)} - x_n^{(1)}   and   e^{(2)} = x^{(2)} - x_n^{(2)}.    (5.52)

Similarly, we define the tracking errors in velocities as

\dot{e}^{(1)} = \dot{x}^{(1)} - \dot{x}_n^{(1)}   and   \dot{e}^{(2)} = \dot{x}^{(2)} - \dot{x}_n^{(2)}.    (5.53)

We choose the following parameters for the additional compensating controllers in Eq. (5.31): L^{(1)} = L^{(2)} = 10, \beta^{(1)} = \beta^{(2)} = 5000.

We note that the computations require estimates of \beta^{(1)} and \beta^{(2)}; however, the additional control forces Q_u^{(i)} are relatively insensitive to the values of these estimates, as long as these values exceed \| M^{(i)-1}(F^{(i)} - F_n^{(i)}) \|. That is, using overestimates of \beta^{(1)} and \beta^{(2)} does not significantly affect the magnitudes of the additional control forces to be applied.

The sliding surfaces are defined as

s^{(1)} = L^{(1)} e^{(1)} + \dot{e}^{(1)}   and   s^{(2)} = L^{(2)} e^{(2)} + \dot{e}^{(2)}.    (5.54)

Here s^{(1)} \in R^3 and s^{(2)} \in R^2. We define the functions f^{(1)}(s^{(1)}), f^{(2)}(s^{(2)}) in Eq. (5.31) as

f_j^{(1)}(s^{(1)}) := g^{(1)}(s_j^{(1)}/\varepsilon) = (s_j^{(1)}/\varepsilon)^3,   j = 1, 2, 3,
f_j^{(2)}(s^{(2)}) := g^{(2)}(s_j^{(2)}/\varepsilon) = (s_j^{(2)}/\varepsilon)^3,   j = 1, 2,    (5.55)

where the subscript j represents the j-th component of a vector, and \varepsilon is a small number that can be chosen depending on how closely we want to track the nominal system. Observing the bound on the tracking error in Eq. (5.33), the bound on the tracking error for this particular choice of g^{(i)}, i = 1, 2, is

\lim_{t \to \infty} \| e^{(i)} \| \le \frac{\varepsilon}{L^{(i)}},   i = 1, 2,    (5.56)

\lim_{t \to \infty} \| \dot{e}^{(i)} \| \le 2\varepsilon,   i = 1, 2.    (5.57)

Thus, if the user provides a desired error bound, we can design the additional compensating controller by choosing an appropriate value for \varepsilon. For this example, we choose \varepsilon to be 1 \times 10^{-4}. Substituting this value of \varepsilon and L^{(1)} = L^{(2)} = 10 in Eq. (5.33), the bound on the tracking error in displacement is 1 \times 10^{-5}. Similarly, using Eq. (5.35), the bound on the tracking error in velocities is obtained as 2 \times 10^{-4}.
With the above defined quantities, the explicit expressions for the additional compensating control forces on each subsystem are given by

Q_u^{(1)} = -M^{(1)} \left[ L^{(1)} \dot{e}^{(1)} + \beta^{(1)} f^{(1)}(s^{(1)}) \right],
Q_u^{(2)} = -M^{(2)} \left[ L^{(2)} \dot{e}^{(2)} + \beta^{(2)} f^{(2)}(s^{(2)}) \right].    (5.58)

The equation of motion for the controlled actual system is then

M^{(1)} \ddot{x}^{(1)} = F^{(1)}(x, \dot{x}, t) + Q_c^{(1)}(t) + Q_u^{(1)}(x, \dot{x}, t),
M^{(2)} \ddot{x}^{(2)} = F^{(2)}(x, \dot{x}, t) + Q_c^{(2)}(t) + Q_u^{(2)}(x, \dot{x}, t).    (5.59)

We use the ODE15s package in the Matlab environment to perform numerical integration of Eqs. (5.59) and (5.51), using a relative error tolerance of 10^{-8} and an absolute error tolerance of 10^{-12}.
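The thesis integrates the controlled nominal and actual systems together with Matlab's ode15s; the Python sketch below shows the same idea with scipy's stiff BDF solver. The state layout, the callable names (f_nominal, f_actual, q_c, q_u) and the way the controls are passed in are illustrative assumptions, not the thesis code.

    import numpy as np
    from scipy.integrate import solve_ivp

    def coupled_rhs(t, z, n_dof, M_n, M_a, f_nominal, f_actual, q_c, q_u):
        """Stacked first-order dynamics: z = [x_n, xdot_n, x, xdot].
        The nominal control q_c is computed from the nominal state only, as in Eq. (5.51);
        the compensator q_u uses the tracking error between the two systems, as in Eq. (5.59)."""
        x_n, v_n = z[:n_dof], z[n_dof:2*n_dof]
        x,   v   = z[2*n_dof:3*n_dof], z[3*n_dof:]
        Qc = q_c(t, x_n, v_n)
        acc_n = np.linalg.solve(M_n(t), f_nominal(t, x_n, v_n) + Qc)
        Qu = q_u(t, x - x_n, v - v_n)
        acc = np.linalg.solve(M_a(t), f_actual(t, x, v) + Qc + Qu)
        return np.concatenate([v_n, acc_n, v, acc])

    # sol = solve_ivp(coupled_rhs, (0.0, 25.0), z0, method="BDF",
    #                 rtol=1e-8, atol=1e-12,
    #                 args=(n_dof, M_n, M_a, F_n, F, Qc_fun, Qu_fun))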
Figure 1 shows the displacement response of the controlled actual subsystem 1 as a function of time.

Figure 1. Displacement history of controlled actual subsystem 1.

The displacement history of the second subsystem as a function of time is plotted in Figure 2.

Figure 2. Displacement history of controlled actual subsystem 2.

Figure 3 shows the projection of the phase portrait of the first subsystem on the x_1^{(1)}-\dot{x}_1^{(1)}, x_2^{(1)}-\dot{x}_2^{(1)}, and x_3^{(1)}-\dot{x}_3^{(1)} planes. Initial positions are indicated by square markers and final positions are indicated by circular markers.

Figure 3. Phase plots of controlled actual subsystem 1.

Figure 4 shows the projection of the phase portrait of the second subsystem on the x_1^{(2)}-\dot{x}_1^{(2)} and x_2^{(2)}-\dot{x}_2^{(2)} planes. Once again, initial positions are indicated by square markers and final positions are indicated by circular markers.

Figure 4. Phase plots of controlled actual subsystem 2.
Figure 5. (a), (c), (e) Control force on subsystem 1 computed from the response of the nominal system, Q_c^{(1)}. (b), (d), (f) Additional compensating control force on subsystem 1, Q_u^{(1)}, with \varepsilon = 10^{-4}.

Figure 5 shows the decentralized control forces (as functions of time) calculated for nominal subsystem 1 on the left side and compares them against the additional compensating control forces on the right-hand side. The additional compensating control force is seen to be quite small when compared with the nominal control forces (less than 10% of the nominal control forces).

Figure 6. (a), (c) Control force on subsystem 2 computed from the response of the nominal system, Q_c^{(2)}. (b), (d) Additional compensating control force on subsystem 2, Q_u^{(2)}, with \varepsilon = 10^{-4}.

Figure 6 shows the decentralized control forces calculated for the nominal subsystem 2 on the left side and compares them against the additional compensating control forces on the right-hand side. Again, the time history of the additional compensating control force shows that it is small when compared to the nominal control force (less than 2% of the nominal control forces).
Figure 7. (a) Tracking errors in displacement, e^{(1)}(t), for subsystem 1 with \varepsilon = 10^{-4}. (b) Tracking errors in velocity, \dot{e}^{(1)}(t), for subsystem 1 with \varepsilon = 10^{-4}.

The tracking errors in displacement and velocity between the nominal and the actual subsystems, given in Eqs. (5.52) and (5.53), are shown in Figures 7 and 8. These are smaller than the error bounds given in Eqs. (5.33) and (5.35), which are 1 \times 10^{-5} and 2 \times 10^{-4} in our particular case.

Figure 8. (a) Tracking errors in displacement, e^{(2)}(t), for subsystem 2 with \varepsilon = 10^{-4}. (b) Tracking errors in velocity, \dot{e}^{(2)}(t), for subsystem 2 with \varepsilon = 10^{-4}.

Figure 9. (a) Variation of the Lyapunov functions with time. (b) Error in satisfying the constraint on the nominal system as a function of time.

Figure 9(a) shows the variation of the Lyapunov functions given in Eq. (5.47) with time. Figure 9(b) shows the error in satisfying the constraints imposed on the nominal system, given in Eq. (5.14).
To demonstrate how to choose appropriate functions g^{(i)} to satisfy a pre-specified tracking error tolerance, let us assume that we want the displacement tracking error for each subsystem, e^{(i)}, to be less than 1 \times 10^{-7}. Observing Eq. (5.56), we can choose an appropriate value for \varepsilon as

\varepsilon = L^{(i)} \times 10^{-7} = 10 \times 10^{-7} = 1 \times 10^{-6}.    (5.60)

The corresponding velocity tracking error, \dot{e}^{(i)}, is then bounded by 2 \times 10^{-6} (see Eq. (5.57)). Thus, we use this value for \varepsilon, keeping all other parameters the same, and show the results below. The displacements of the two subsystems look quite similar to the ones shown for the case when \varepsilon = 1 \times 10^{-4}, and are not shown for brevity. We only show plots for the compensating control forces and the corresponding tracking errors for the two subsystems.
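A trivial helper capturing this design rule (\varepsilon = L \times displacement tolerance, with velocity errors then bounded by 2\varepsilon per Eq. (5.57)) might look as follows in Python; it is only an illustration of the arithmetic above.

    def design_epsilon(displacement_tol, L):
        """Pick epsilon from a desired asymptotic displacement tracking tolerance, Eq. (5.60)."""
        eps = L * displacement_tol
        velocity_bound = 2.0 * eps          # Eq. (5.57)
        return eps, velocity_bound

    # design_epsilon(1e-7, L=10.0) -> (1e-6, 2e-6), the values used below.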
Figure 10 shows the additional compensating control force for subsystem 1 as a function of time for the case when \varepsilon = 1 \times 10^{-6}. Figure 11 shows the corresponding plots for subsystem 2.
Figure 10. Additional compensating control force on subsystem 1, Q_u^{(1)}, with \varepsilon = 10^{-6}.

Figure 11. Additional compensating control force on subsystem 2, Q_u^{(2)}, with \varepsilon = 10^{-6}.

Figure 12. (a) Tracking error in displacement, e^{(1)}(t), for subsystem 1 with \varepsilon = 10^{-6}. (b) Tracking error in velocity, \dot{e}^{(1)}(t), for subsystem 1 with \varepsilon = 10^{-6}.

Figure 13. (a) Tracking errors in displacement, e^{(2)}(t), for subsystem 2 with \varepsilon = 10^{-6}. (b) Tracking errors in velocity, \dot{e}^{(2)}(t), for subsystem 2 with \varepsilon = 10^{-6}.

Figures 12 and 13 show the tracking errors for subsystems 1 and 2 respectively. As seen, the tracking errors in displacement are less than 1 \times 10^{-7}, and the tracking errors in velocity are less than 2 \times 10^{-6}, as expected.
Example 2:

To show the efficacy of the additional compensating controller in tracking the nominal system, we consider a second example in which the nonlinear interaction forces are quite substantial. Let us consider the following system, which has two subsystems:

M^{(1)} \ddot{x}^{(1)} = -K^{(1)} x^{(1)} - C^{(1)} \dot{x}^{(1)} - K_{nl}^{(1)}(x, \dot{x}, t) - C_{nl}^{(1)}(x, \dot{x}, t) + h^{(1)}(t) := F^{(1)}(x, \dot{x}, t),
M^{(2)} \ddot{x}^{(2)} = -K^{(2)} x^{(2)} - C^{(2)} \dot{x}^{(2)} - K_{nl}^{(2)}(x, \dot{x}, t) - C_{nl}^{(2)}(x, \dot{x}, t) + h^{(2)}(t) := F^{(2)}(x, \dot{x}, t).    (5.61)

In the above equations,

x^{(1)} = [x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}]^T \in R^4   and   x^{(2)} = [x_1^{(2)}, x_2^{(2)}, x_3^{(2)}, x_4^{(2)}, x_5^{(2)}]^T \in R^5,    (5.62)

M^{(1)} = diag{1.5, 1.2, 2, 3}   and   M^{(2)} = diag{2, 1.8, 3, 1.5, 2.25},    (5.63)

K^{(1)} = [ 2k  -k  0  0 ;  -k  2k  -k  0 ;  0  -k  2k  -k ;  0  0  -k  2k ]   and   K^{(2)} is the corresponding 5 by 5 tridiagonal matrix assembled from the spring constant k,    (5.64)

and the damping matrices are of Rayleigh type,

C^{(1)} = \mu^{(1)} M^{(1)} + \lambda^{(1)} K^{(1)}   and   C^{(2)} = \mu^{(2)} M^{(2)} + \lambda^{(2)} K^{(2)}.    (5.65)

The nonlinear coupling forces are

K_{nl}^{(1)}(x, \dot{x}, t) = [ k_{nl}^{1}(x_1^{(1)} - x_1^{(2)})^3 + k_{nl}^{2}(x_1^{(1)} - x_1^{(2)})^5 ;  k_{nl}^{1}(x_2^{(1)} - x_2^{(2)})^3 + k_{nl}^{2}(x_2^{(1)} - x_2^{(2)})^5 ;  k_{nl}^{1}(x_3^{(1)} - x_3^{(2)})^3 + k_{nl}^{2}(x_3^{(1)} - x_3^{(2)})^5 ;  0 ],
C_{nl}^{(1)}(x, \dot{x}, t) = c_{nl} [ (\dot{x}_1^{(1)} - \dot{x}_1^{(2)})^3 ;  (\dot{x}_2^{(1)} - \dot{x}_2^{(2)})^3 ;  (\dot{x}_3^{(1)} - \dot{x}_3^{(2)})^3 ;  0 ],    (5.66)

K_{nl}^{(2)} = [ K_{nl}^{(1)} ; 0 ]   and   C_{nl}^{(2)} = [ C_{nl}^{(1)} ; 0 ],    (5.67)

and the external excitations are

h^{(1)}(t) = 12 \sin(1.4 t) [1, 1, 1, 1]^T   and   h^{(2)}(t) = 10 \sin(1.9 t) [1, 1, 1, 1, 1]^T.    (5.68)

We choose the various parameters as k = 5000, \mu^{(1)} = 0.18355, \mu^{(2)} = 0.915845, \lambda^{(1)} = 5.234 \times 10^{-4}, \lambda^{(2)} = 6.9527 \times 10^{-3}, k_{nl}^{1} = 2 \times 10^{6}, k_{nl}^{2} = 1 \times 10^{7}, c_{nl} = 2 \times 10^{3}.

Following the first step in our approach, we define a nominal system. The force on the first nominal subsystem is obtained by substituting x_n^{(2)} = \dot{x}_n^{(2)} = 0 in the expression for F^{(1)} as
F_n^{(1)}(x^{(1)}, \dot{x}^{(1)}, t) := -K^{(1)} x^{(1)} - C^{(1)} \dot{x}^{(1)} - K_{nl}^{(1)}(x^{(1)}, \dot{x}^{(1)}, t) - C_{nl}^{(1)}(x^{(1)}, \dot{x}^{(1)}, t) + h^{(1)}(t),    (5.69)

where

K_{nl}^{(1)}(x^{(1)}, \dot{x}^{(1)}, t) = [ k_{nl}^{1}(x_1^{(1)})^3 + k_{nl}^{2}(x_1^{(1)})^5 ;  k_{nl}^{1}(x_2^{(1)})^3 + k_{nl}^{2}(x_2^{(1)})^5 ;  k_{nl}^{1}(x_3^{(1)})^3 + k_{nl}^{2}(x_3^{(1)})^5 ;  0 ],
C_{nl}^{(1)}(x^{(1)}, \dot{x}^{(1)}, t) = c_{nl} [ (\dot{x}_1^{(1)})^3 ;  (\dot{x}_2^{(1)})^3 ;  (\dot{x}_3^{(1)})^3 ;  0 ].    (5.70)

Similarly, the force on the second nominal subsystem, obtained by substituting x_n^{(1)} = \dot{x}_n^{(1)} = 0 in the expression for F^{(2)}, is

F_n^{(2)}(x^{(2)}, \dot{x}^{(2)}, t) := -K^{(2)} x^{(2)} - C^{(2)} \dot{x}^{(2)} - K_{nl}^{(2)}(x^{(2)}, \dot{x}^{(2)}, t) - C_{nl}^{(2)}(x^{(2)}, \dot{x}^{(2)}, t) + h^{(2)}(t),    (5.71)

where

K_{nl}^{(2)}(x^{(2)}, \dot{x}^{(2)}, t) = [ k_{nl}^{1}(x_1^{(2)})^3 + k_{nl}^{2}(x_1^{(2)})^5 ;  k_{nl}^{1}(x_2^{(2)})^3 + k_{nl}^{2}(x_2^{(2)})^5 ;  k_{nl}^{1}(x_3^{(2)})^3 + k_{nl}^{2}(x_3^{(2)})^5 ;  0 ;  0 ],
C_{nl}^{(2)}(x^{(2)}, \dot{x}^{(2)}, t) = c_{nl} [ (\dot{x}_1^{(2)})^3 ;  (\dot{x}_2^{(2)})^3 ;  (\dot{x}_3^{(2)})^3 ;  0 ;  0 ].    (5.72)

With these forces thus defined, the equations of motion for the nominal system are

M^{(1)} \ddot{x}_n^{(1)} = F_n^{(1)}(x_n^{(1)}, \dot{x}_n^{(1)}, t),
M^{(2)} \ddot{x}_n^{(2)} = F_n^{(2)}(x_n^{(2)}, \dot{x}_n^{(2)}, t).    (5.73)
We use the same positive definite functions used in the earlier example, given in Eqs. (5.47) and (5.48). Let us choose the parameters as a_1^{(1)} = 1, a_2^{(1)} = 8, a_{12}^{(1)} = 1, \alpha^{(1)} = 1/4, a_1^{(2)} = 1, a_2^{(2)} = 4, a_{12}^{(2)} = 2/3, \alpha^{(2)} = 1/3. These values are chosen in such a way as to ensure that Eq. (5.18) is consistent. As earlier, we specify the cost functions by specifying N^{(1)} = M^{(1)-1}, N^{(2)} = M^{(2)-1}. We calculate the explicit control forces using Eqs. (5.49) and (5.50). The equation for the controlled nominal system is given in Eq. (5.51).

In the second step, we apply an additional compensating controller, obtained using Eqs. (5.52)-(5.58). We choose the required parameters as L^{(1)} = L^{(2)} = 10, \beta^{(1)} = \beta^{(2)} = 10^{4}, and \varepsilon = 10^{-4}. The equation of motion for the controlled actual system is given in Eq. (5.59). We integrate this equation using the ODE15s package in Matlab with a relative error tolerance of 10^{-8} and an absolute error tolerance of 10^{-12}.

Figure 14 shows the displacement response of the controlled actual subsystem 1 as a function of time. Figure 15 shows the corresponding plots for subsystem 2.
Figure 14. Displacement history of controlled actual subsystem 1.

Figure 15. Displacement history of controlled actual subsystem 2.

Figure 16. Projection of the phase portrait of controlled actual subsystem 1 on various planes.

In Figure 16, the projections of the phase portrait of subsystem 1 on the x_1^{(1)}-\dot{x}_1^{(1)}, x_2^{(1)}-\dot{x}_2^{(1)}, x_3^{(1)}-\dot{x}_3^{(1)}, and x_4^{(1)}-\dot{x}_4^{(1)} planes are plotted. Initial positions are shown using square markers and final positions using circular markers. The plots show the asymptotic convergence to zero.

Figure 17. Projection of the phase portrait of controlled actual subsystem 2 on various planes.

In Figure 17, the projections of the phase portrait of subsystem 2 on the x_1^{(2)}-\dot{x}_1^{(2)}, x_2^{(2)}-\dot{x}_2^{(2)}, x_3^{(2)}-\dot{x}_3^{(2)}, x_4^{(2)}-\dot{x}_4^{(2)}, and x_5^{(2)}-\dot{x}_5^{(2)} planes are plotted. As per our convention, initial positions are shown using square markers and final positions using circular markers. Asymptotic convergence to the origin can be observed from the plots.
Figure 18. Comparison of nominal control forces (Q_c^{(1)}) and compensating control forces (Q_u^{(1)}) for subsystem 1.

In Figure 18, the nominal control forces on subsystem 1 are contrasted with the additional compensating control forces on subsystem 1. Similarly, in Figure 19, the nominal control forces on subsystem 2 are compared with the additional compensating control forces on subsystem 2. We observe from these plots that the compensating control forces are comparable to the nominal control forces. This is because the nonlinear coupling forces are strong in this example.

Figure 19. Comparison of nominal control forces (Q_c^{(2)}) and compensating control forces (Q_u^{(2)}) for subsystem 2.

The errors in tracking the trajectories of the nominal system are shown in Figures 20 and 21. We note that our control ensured that the controlled actual system tracked the trajectories of the nominal system quite well despite the strong interaction forces present between the two subsystems. We also note that the tracking errors in displacement and velocity are less than 1 \times 10^{-5} and 2 \times 10^{-4}, as predicted by Eqs. (5.56) and (5.57) for our chosen value of \varepsilon = 1 \times 10^{-4}.
Figure 20. (a) Tracking error in displacement, e^{(1)}(t), for subsystem 1. (b) Tracking error in velocity, \dot{e}^{(1)}(t), for subsystem 1.

Figure 21. (a) Tracking error in displacement, e^{(2)}(t), for subsystem 2. (b) Tracking error in velocity, \dot{e}^{(2)}(t), for subsystem 2.
To show that the magnitude of the additional control force does not vary much with \beta, we run the simulations for \beta = 1000 and also for \beta = 5 \times 10^{4} (we used \beta = 1 \times 10^{4} for the earlier simulation). We observe that the additional control forces obtained are not significantly different. For brevity, we only show the additional control forces acting on m_1^{(1)} for these different values of \beta in Figures 22(a) and 22(b) (compare them with the corresponding compensating control force shown in Figure 18 for the case when \beta = 1 \times 10^{4}).

Figure 22. (a) Additional control force (Q_u) on m_1^{(1)} for the case when \beta = 10^{3}. (b) Additional control force (Q_u) on m_1^{(1)} for the case when \beta = 5 \times 10^{4}.
5.4. Conclusions

We provide a simple methodology to design decentralized controllers for decentralized, non-autonomous, nonlinear mechanical systems. Our approach to the control of decentralized systems is developed in two steps. First, we define a nominal system that can be constructed based on locally available information (measurements of displacements and velocities) about each subsystem. We compute the control forces to be applied to this nominal system in order to ensure that each nominal subsystem has an asymptotically stable equilibrium point at the origin. In computing these control forces, we use user-prescribed positive definite functions V_k, w_k defined over the domains of the local subsystems and minimize a suitable norm of the control force at each instant of time. No approximations or linearizations in modeling the dynamics and no a priori assumptions regarding the structure of the control are made in this step.

In the second step of the approach, we add another compensating controller, which ensures that the controlled actual system tracks this nominal system as closely as desired, thereby ensuring that the controlled actual system also has an asymptotically stable equilibrium point at the origin. The method requires an estimate of the bound on \| M^{(i)-1}(F^{(i)} - F_n^{(i)}) \| over the time interval during which the control is executed. As demonstrated, this additional controller can be designed based on the tracking error desired. Two examples are provided: the first considers an unstable decentralized system and the second a highly coupled decentralized system. The examples illustrate the efficacy of the approach and the ease with which it can be implemented.
Chapter 6. Dynamics and Control of a Multi-Body Planar Pendulum

6.1 Introduction

In this chapter, the control methods developed so far are applied to a multi-body planar pendulum system. First, the Lagrange equations of motion are obtained using a recursive approach based on mathematical induction. This approach is short and simple and hence decreases the chance of making errors in the modeling process. The pendular system considered consists of links whose centers of mass need not necessarily lie on the line joining the hinges. We do not know of any attempt in the literature to tackle such systems; the systems studied in the literature to date assume straight links in which the center of mass lies on the line joining the hinges.

In the second step, we apply the stable control methodology presented in Chapter 2 to control the system starting from any given initial conditions so that it reaches a desired final state. It is assumed that the multi-body system (called the 'nominal system') is accurately known, and a nonlinear controller is obtained in closed form that simultaneously minimizes a user-specified norm of the control effort and enforces a Lyapunov constraint that ensures its stability. As explained in Chapter 2, the Lyapunov function chosen must be consistent.

The third part of this chapter deals with the development of a simple continuous controller that tracks the motion of the nominal system, within user-specified error bounds, in the presence of norm-bounded uncertainties in our knowledge of the system. This controller relies on generalized sliding surface control, which was first developed in Ref. [25] and is used here in simplified form. It is a little different from the generalized sliding surface controller used in Chapter 5, the difference arising from the fact that the uncertain pendular system has uncertainty in the mass matrix too, which is more complicated than uncertainty in just the force vector.

Simulation results are presented showing the efficacy of the control methodology and the simplicity of its implementation for the nominal system and the uncertain system.
Figure 1. An n-body planar pendulum.
6.2 Derivation of the General Equations of Motion of an n-body Pendulum

Consider the n-body pendulum that undergoes planar motions suspended from the point O (see Figure 1). The i-th body in the pendulum has mass m_i, and its moment of inertia about its center of mass (CM), which is located at C_i, is J_i. The coordinates of the point C_i in the inertial coordinate frame OXY are (x_i, y_i). The distance between the upper and the lower hinge in the i-th body is l_i, and the distance from the upper hinge to its CM is b_i l_i. The line joining the two hinges in the i-th body makes an angle \theta_i with the vertical, measured in the counter-clockwise direction (see Figure 1). Similarly, the line joining the upper hinge to the CM of the i-th body makes an angle \phi_i with the line joining the two hinges; as shown in the figure, this angle is again measured counter-clockwise from the line joining the two hinges.

For the purpose of uniformity in our formulation of the equations of motion, we have used two hinges within each body. When viewed in the stable static equilibrium position of the n-body pendulum, the upper hinge in each body is the location where it is connected to the body immediately above, and the lower hinge is where it is connected to the body immediately below. However, since the n-th body has nothing below it, we can locate an imaginary 'lower hinge' (shown by the square in Figure 1) at any point in this body. Specifically, for convenience, were this imaginary lower hinge placed at the CM of this n-th body, then we would have \phi_n = 0; if, further, b_n were set to unity, then the distance from the CM of the n-th body to its upper hinge, which is given by b_n l_n, would simply be l_n.

A similar situation arises when the pendulum system comprises just a single body suspended from the point O. An imaginary 'second hinge' can be placed at any location in this body other than at O, say at the point O1 as shown in the figure. The line OO1 joining the two hinges is shown by a dashed line. The purpose of this second hinge is simply to provide a direction so that the angle \theta_1 can be measured in the counter-clockwise direction from the vertical. Using elementary Newtonian mechanics and taking moments about the fixed origin O, we get the equation of motion for this one-body pendulum as

[ J_1 + m_1 (b_1 l_1)^2 ] \ddot{\theta}_1 + m_1 g (b_1 l_1) \sin(\theta_1 + \phi_1) = 0.    (6.1)

Alternatively, as discussed before, the point O1 could have been chosen to coincide with the location C_1 of the CM of the body, so that \phi_1 = 0. Now \theta_1 is the angle made by the line OC_1 with the vertical. If, further, one sets b_1 = 1, then l_1 can be interpreted as simply the distance of the CM of the body from the hinge at O. Setting b_1 = 1 and \phi_1 = 0 in Eq. (6.1), we get the equation of motion of the pendulum in its standard form, namely,

[ J_1 + m_1 l_1^2 ] \ddot{\theta}_1 + m_1 g l_1 \sin\theta_1 = 0,    (6.2)

where l_1 is now the distance of the CM of the body from O. Though a trivial result, it is required in the proof of our general result, where we will be primarily interested in obtaining the equations of motion of a planar n-body pendulum with n \ge 1.

The algebraic complexity of the derivation of the Lagrange equations of motion increases greatly as the number of bodies in the system increases, and here we use a new and novel approach that utilizes mathematical induction on the number of bodies (degrees of freedom) of the system. We begin by stating our result.
Result: The Lagrange equations of motion of the undamped n-body pendulum described above (see Figure 1) are given by

M^{(n)}(\theta^{(n)}) \ddot{\theta}^{(n)} + S^{(n)}(\theta^{(n)}) [\dot{\theta}^{2}]^{(n)} + F^{(n)}(\theta^{(n)}) = 0,    (6.3)

where the n-vector (n by 1 vector) \theta^{(n)} = [\theta_1, \theta_2, ..., \theta_n]^T, and the n-vector [\dot{\theta}^{2}]^{(n)} := [\dot{\theta}_1^2, \dot{\theta}_2^2, ..., \dot{\theta}_n^2]^T. The superscripts '(n)' in Eq. (6.3) explicitly point out that the equations are for an n-body pendulum; the matrices M^{(n)} and S^{(n)} are therefore n by n matrices, and the vector F^{(n)} is an n by 1 column vector (n-vector). The elements m_{i,j}^{(n)} of the symmetric matrix M^{(n)} are explicitly given by the relations

m_{i,i}^{(n)} = J_i + l_i^2 \{ b_i^2 m_i + \bar{m}_i \},   1 \le i \le n,    (6.4)

and

m_{i,j}^{(n)} = l_i l_j [ m_j b_j \cos(\theta_{ij} - \phi_j) + \bar{m}_j \cos(\theta_{ij}) ],   1 \le i < j \le n,    (6.5)

where we define

\theta_{ij} := \theta_i - \theta_j,   \bar{m}_i := \sum_{k=i+1}^{n} m_k,   and   \bar{m}_n = 0.    (6.6)

The elements s_{i,j}^{(n)} of the skew-symmetric matrix S^{(n)}, whose diagonal elements are all zero, are given by the relations

s_{i,j}^{(n)} = l_i l_j [ m_j b_j \sin(\theta_{ij} - \phi_j) + \bar{m}_j \sin(\theta_{ij}) ],   1 \le i < j \le n,    (6.7)

and the elements f_i^{(n)} of the n-vector F^{(n)} are given by the relations

f_i^{(n)} = g l_i \{ m_i b_i \sin(\theta_i + \phi_i) + \bar{m}_i \sin(\theta_i) \},   1 \le i \le n.    (6.8)
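Before turning to the proof, note that the element formulas (6.4)-(6.8), as reconstructed above, are easy to assemble numerically. The Python sketch below builds M^{(n)}, S^{(n)} and F^{(n)} for given link parameters; the symbol names (theta, phi, b, l, m, J) mirror the reconstruction and are, of course, illustrative.

    import numpy as np

    def pendulum_matrices(theta, m, J, l, b, phi, g=9.81):
        """Assemble M, S, F of Eq. (6.3) from the element formulas (6.4)-(6.8).
        All arguments are length-n numpy arrays."""
        n = len(theta)
        mbar = np.array([m[i+1:].sum() for i in range(n)])     # \bar m_i, Eq. (6.6)
        M = np.zeros((n, n))
        S = np.zeros((n, n))
        F = np.zeros(n)
        for i in range(n):
            M[i, i] = J[i] + l[i]**2 * (b[i]**2 * m[i] + mbar[i])              # Eq. (6.4)
            F[i] = g * l[i] * (m[i]*b[i]*np.sin(theta[i] + phi[i])
                               + mbar[i]*np.sin(theta[i]))                     # Eq. (6.8)
            for j in range(i+1, n):
                tij = theta[i] - theta[j]
                M[i, j] = M[j, i] = l[i]*l[j]*(m[j]*b[j]*np.cos(tij - phi[j])
                                               + mbar[j]*np.cos(tij))          # Eq. (6.5)
                S[i, j] = l[i]*l[j]*(m[j]*b[j]*np.sin(tij - phi[j])
                                     + mbar[j]*np.sin(tij))                    # Eq. (6.7)
                S[j, i] = -S[i, j]                                             # skew-symmetry
        return M, S, F

    # With n = 1, b[0] = 1 and phi[0] = 0 this reduces to Eq. (6.2):
    # M = [[J_1 + m_1 l_1^2]] and F = [m_1 g l_1 sin(theta_1)].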
Proof: We note, using Eqs. (6.4) and (6.8), that the result is trivially true for n = 1, as seen from Eq. (6.1). To proceed with our proof by induction, we assume that Eqs. (6.3)-(6.8) are true for any n = N \ge 1, i.e., for an N-body pendulum. We then need to prove that they are true for n = N + 1, i.e., for an (N+1)-body pendulum.

A comparison of the elements of the symmetric matrices M^{(N)} and M^{(N+1)} using Eqs. (6.4) and (6.5) reveals that if these equations are correct for N + 1, then

m_{i,j}^{(N+1)} = m_{i,j}^{(N)} + \Delta m_{i,j},   \Delta m_{i,j} := m_{N+1} l_i l_j \cos(\theta_{ij}),   1 \le i, j \le N,
m_{i,N+1}^{(N+1)} = m_{N+1,i}^{(N+1)} = m_{N+1} b_{N+1} l_{N+1} l_i \cos(\theta_{i,N+1} - \phi_{N+1}),   1 \le i \le N,
m_{N+1,N+1}^{(N+1)} = J_{N+1} + m_{N+1} b_{N+1}^2 l_{N+1}^2.    (6.9)

Similarly, a comparison of the elements of the skew-symmetric matrices S^{(N)} and S^{(N+1)} using Eq. (6.7) shows that if this relation is correct, then

s_{i,j}^{(N+1)} = s_{i,j}^{(N)} + \Delta s_{i,j},   \Delta s_{i,j} := m_{N+1} l_i l_j \sin(\theta_{ij}),   1 \le i, j \le N,  i \ne j,
s_{i,N+1}^{(N+1)} = -s_{N+1,i}^{(N+1)} = m_{N+1} l_i b_{N+1} l_{N+1} \sin(\theta_{i,N+1} - \phi_{N+1}),   1 \le i \le N.    (6.10)

Lastly, a comparison of Eq. (6.8) for the vectors F^{(N)} and F^{(N+1)} shows that, if correct, we must have

f_i^{(N+1)} = f_i^{(N)} + \Delta f_i,   \Delta f_i := m_{N+1} g l_i \sin\theta_i,   1 \le i \le N,
f_{N+1}^{(N+1)} = m_{N+1} g b_{N+1} l_{N+1} \sin(\theta_{N+1} + \phi_{N+1}).    (6.11)

Hence we are required to show that in going from an N-body pendulum to an (N+1)-body pendulum (N \ge 1), the changes in the matrices M and S, and in the vector F, must be as prescribed by relations (6.9)-(6.11). Showing this would then complete our proof.
But the difference between the N-body pendulum and the (N+1)-body pendulum is just the addition of the (N+1)st body! Hence all the terms on the right-hand sides of relations (6.9), (6.10), and (6.11) must come solely from the Lagrangian L of only the (N+1)st body. Since the coordinates (x_{N+1}, y_{N+1}) of the CM of the (N+1)st body are (for N \ge 1)

x_{N+1} = \sum_{j=1}^{N} l_j \sin\theta_j + b_{N+1} l_{N+1} \sin(\theta_{N+1} + \phi_{N+1}),
y_{N+1} = -\sum_{j=1}^{N} l_j \cos\theta_j - b_{N+1} l_{N+1} \cos(\theta_{N+1} + \phi_{N+1}),    (6.12)

its Lagrangian L is

L := T - V = \frac{1}{2} m_{N+1} ( \dot{x}_{N+1}^2 + \dot{y}_{N+1}^2 ) + \frac{1}{2} J_{N+1} \dot{\theta}_{N+1}^2 - m_{N+1} g\, y_{N+1},    (6.13)

where T denotes the kinetic energy of the (N+1)st body and V denotes its gravitational potential energy.

Our aim is to find how this Lagrangian L, through its inclusion, alters the Lagrange equations of motion of the N-body pendulum, thereby providing us with the Lagrange equations for the (N+1)-body pendulum.

We denote the Lagrange operator by L_i(\cdot) := \frac{d}{dt}\frac{\partial(\cdot)}{\partial\dot{\theta}_i} - \frac{\partial(\cdot)}{\partial\theta_i}. Then from relation (6.12) we obtain

L_i(L) = m_{N+1} \left[ \ddot{x}_{N+1} \frac{\partial x_{N+1}}{\partial\theta_i} + \ddot{y}_{N+1} \frac{\partial y_{N+1}}{\partial\theta_i} \right] + \delta_{i,N+1} J_{N+1} \ddot{\theta}_{N+1} + \frac{\partial V}{\partial\theta_i},    (6.14)

where \delta_{i,N+1} is the Kronecker delta. In arriving at Eq. (6.14) we have used the relations \partial\dot{x}_{N+1}/\partial\dot{\theta}_i = \partial x_{N+1}/\partial\theta_i and d(\partial x_{N+1}/\partial\theta_i)/dt = \partial\dot{x}_{N+1}/\partial\theta_i, and similar relations for y_{N+1}. Using relation (6.12), we then have

\ddot{x}_{N+1} = \sum_{j=1}^{N} l_j [ \cos\theta_j \ddot{\theta}_j - \sin\theta_j \dot{\theta}_j^2 ] + b_{N+1} l_{N+1} [ \cos(\theta_{N+1}+\phi_{N+1}) \ddot{\theta}_{N+1} - \sin(\theta_{N+1}+\phi_{N+1}) \dot{\theta}_{N+1}^2 ],    (6.15)

and similarly,

\ddot{y}_{N+1} = \sum_{j=1}^{N} l_j [ \sin\theta_j \ddot{\theta}_j + \cos\theta_j \dot{\theta}_j^2 ] + b_{N+1} l_{N+1} [ \sin(\theta_{N+1}+\phi_{N+1}) \ddot{\theta}_{N+1} + \cos(\theta_{N+1}+\phi_{N+1}) \dot{\theta}_{N+1}^2 ].    (6.16)
Using relations (6.12), (6.15) and (6.16) in Eq. (6.14) gives L_i(L) for 1 \le i \le N, and we get

L_i(L) = m_{N+1} l_i b_{N+1} l_{N+1} [ \cos\theta_i \cos(\theta_{N+1}+\phi_{N+1}) \ddot{\theta}_{N+1} - \cos\theta_i \sin(\theta_{N+1}+\phi_{N+1}) \dot{\theta}_{N+1}^2 ]
+ m_{N+1} l_i \cos\theta_i \sum_{j=1}^{N} l_j [ \cos\theta_j \ddot{\theta}_j - \sin\theta_j \dot{\theta}_j^2 ] + m_{N+1} l_i \sin\theta_i \sum_{j=1}^{N} l_j [ \sin\theta_j \ddot{\theta}_j + \cos\theta_j \dot{\theta}_j^2 ]
+ m_{N+1} l_i b_{N+1} l_{N+1} [ \sin\theta_i \sin(\theta_{N+1}+\phi_{N+1}) \ddot{\theta}_{N+1} + \sin\theta_i \cos(\theta_{N+1}+\phi_{N+1}) \dot{\theta}_{N+1}^2 ] + \frac{\partial V}{\partial\theta_i},    (6.17)

which simplifies to

L_i(L) = \sum_{j=1}^{N} \underbrace{m_{N+1} l_i l_j \cos(\theta_{ij})}_{\Delta m_{i,j}} \ddot{\theta}_j + \underbrace{m_{N+1} l_i b_{N+1} l_{N+1} \cos(\theta_{i,N+1} - \phi_{N+1})}_{m_{i,N+1}^{(N+1)}} \ddot{\theta}_{N+1}
+ \sum_{j=1}^{N} \underbrace{m_{N+1} l_i l_j \sin(\theta_{ij})}_{\Delta s_{i,j}} \dot{\theta}_j^2 + \underbrace{m_{N+1} l_i b_{N+1} l_{N+1} \sin(\theta_{i,N+1} - \phi_{N+1})}_{s_{i,N+1}^{(N+1)}} \dot{\theta}_{N+1}^2
+ \underbrace{m_{N+1} g l_i \sin\theta_i}_{\Delta f_i},   1 \le i \le N.    (6.18)

This equation identifies the terms that need to be added to the (i, j) elements of the matrices M^{(N)} and S^{(N)} to obtain the corresponding (i, j) elements of the matrices M^{(N+1)} and S^{(N+1)} for 1 \le i \le N and 1 \le j \le N+1. They are the same as those shown in Eqs. (6.9) and (6.10). Also, the elements \Delta f_i shown in Eq. (6.18) need to be added to the N elements of the N-vector F^{(N)} to obtain the corresponding (first) N elements of the vector F^{(N+1)}; they too are the same as those given in relation (6.11). The third term on the right-hand side of Eq. (6.18) shows that the upper left N by N submatrix of S^{(N+1)} is skew-symmetric; this is because the N by N matrix S^{(N)} is assumed to be skew-symmetric and we are adding to it another skew-symmetric matrix \Delta s_{i,j}.

In a similar manner we compute the additional Lagrange equation that results when we go from an N-body pendulum to an (N+1)-body pendulum (N \ge 1). We have, for i = N+1,

L_{N+1}(L) = m_{N+1} b_{N+1} l_{N+1} \cos(\theta_{N+1}+\phi_{N+1}) \sum_{i=1}^{N} l_i [ \cos\theta_i \ddot{\theta}_i - \sin\theta_i \dot{\theta}_i^2 ]
+ m_{N+1} b_{N+1}^2 l_{N+1}^2 [ \cos^2(\theta_{N+1}+\phi_{N+1}) \ddot{\theta}_{N+1} - \cos(\theta_{N+1}+\phi_{N+1}) \sin(\theta_{N+1}+\phi_{N+1}) \dot{\theta}_{N+1}^2 ]
+ m_{N+1} b_{N+1} l_{N+1} \sin(\theta_{N+1}+\phi_{N+1}) \sum_{i=1}^{N} l_i [ \sin\theta_i \ddot{\theta}_i + \cos\theta_i \dot{\theta}_i^2 ]
+ m_{N+1} b_{N+1}^2 l_{N+1}^2 [ \sin^2(\theta_{N+1}+\phi_{N+1}) \ddot{\theta}_{N+1} + \sin(\theta_{N+1}+\phi_{N+1}) \cos(\theta_{N+1}+\phi_{N+1}) \dot{\theta}_{N+1}^2 ]
+ J_{N+1} \ddot{\theta}_{N+1} + \frac{\partial V}{\partial\theta_{N+1}},    (6.19)

which simplifies to

L_{N+1}(L) = \sum_{i=1}^{N} \underbrace{m_{N+1} b_{N+1} l_{N+1} l_i \cos(\theta_{i,N+1} - \phi_{N+1})}_{m_{N+1,i}^{(N+1)}} \ddot{\theta}_i + \sum_{i=1}^{N} \underbrace{\left( -m_{N+1} b_{N+1} l_{N+1} l_i \sin(\theta_{i,N+1} - \phi_{N+1}) \right)}_{s_{N+1,i}^{(N+1)}} \dot{\theta}_i^2
+ \underbrace{[ m_{N+1} b_{N+1}^2 l_{N+1}^2 + J_{N+1} ]}_{m_{N+1,N+1}^{(N+1)}} \ddot{\theta}_{N+1} + \underbrace{m_{N+1} g\, b_{N+1} l_{N+1} \sin(\theta_{N+1} + \phi_{N+1})}_{f_{N+1}^{(N+1)}}.    (6.20)

On the right-hand side of Eq. (6.21), terms related to the elements of the last row of the matrices M^{(N+1)} and S^{(N+1)} are identified; they are the same as those given in relations (6.9) and (6.10). We note from relations (6.22) and (6.23) that s_{N+1,i}^{(N+1)} = -s_{i,N+1}^{(N+1)}, 1 \le i \le N+1, and therefore the matrix S^{(N+1)} is skew-symmetric. Also, the (N+1)st element of the vector F^{(N+1)} is given by f_{N+1}^{(N+1)}, as required by relation (6.11). We observe from relations (6.24) and (6.25) that the mass matrix M^{(N+1)} is symmetric, yet one does not need to prove this: since the coordinates of the center of mass (x_i, y_i) of the i-th body, 1 \le i \le N+1, when expressed in the coordinates \theta_i, do not contain time explicitly, Lagrangian mechanics guarantees that the mass matrix is symmetric!

q.e.d.
Remark 1: For the n-th body (and only for the n-th body) of the n-body pendulum, we can set \phi_n = 0 and, as discussed before, also set b_n = 1, so that l_n is now the distance of the CM of the n-th body to the hinge in it. The other l_i, 1 \le i \le n-1, of course retain their usual meanings. The angle \theta_n (and only \theta_n, none of the other \theta_i) now denotes the angle (measured counter-clockwise) that the line joining the hinge in the n-th body to its CM makes with the vertical (see Figure 1).

Then, from Eqs. (6.4)-(6.26), the elements m_{i,j}^{(n)} of the symmetric matrix M^{(n)} in Eq. (6.3) are explicitly given by

m_{i,i}^{(n)} = J_i + l_i^2 \{ b_i^2 m_i + \bar{m}_i \},   1 \le i \le n-1;
m_{i,j}^{(n)} = l_i l_j [ m_j b_j \cos(\theta_{ij} - \phi_j) + \bar{m}_j \cos\theta_{ij} ],   1 \le i < j \le n-1;
m_{i,n}^{(n)} = m_n l_i l_n \cos\theta_{in},   1 \le i \le n-1;   m_{n,n}^{(n)} = J_n + m_n l_n^2,    (6.27)

where, as before, \bar{m}_i = \sum_{k=i+1}^{n} m_k with \bar{m}_n = 0.

Similarly, from Eq. (6.28), the elements s_{i,j}^{(n)} of the skew-symmetric matrix S^{(n)} in Eq. (6.3) can be explicitly written as

s_{i,j}^{(n)} = l_i l_j [ m_j b_j \sin(\theta_{ij} - \phi_j) + \bar{m}_j \sin(\theta_{ij}) ],   1 \le i < j \le n-1,
s_{i,n}^{(n)} = m_n l_i l_n \sin\theta_{in},   1 \le i \le n-1,    (6.29)

and the elements f_i^{(n)} of the column vector F^{(n)} can be written as

f_i^{(n)} = g l_i \{ m_i b_i \sin(\theta_i + \phi_i) + \bar{m}_i \sin(\theta_i) \},   1 \le i \le n-1,
f_n^{(n)} = m_n g l_n \sin\theta_n.    (6.30)

Remark 2: If the n-body pendulum is made up of all straight links, as has nearly always been assumed in the research literature hitherto, then the center of mass of each body lies on the line joining its hinges, and hence \phi_i = 0, 1 \le i \le n. The equation of motion (6.3) then simplifies further, since in the relations given in Eqs. (6.4)-(6.8) and Eqs. (6.31)-(6.32) for the elements of M^{(n)}, S^{(n)}, and F^{(n)}, we set \phi_i = 0, 1 \le i \le n.
Remark 3: Consider a damped n-body planar pendulum in which the hinges provide dissipative torques. Assume that the torque generated at the i-th hinge (the first hinge is at O, the second at O1, etc.; see Figure 1) is expressed as

\tau_i = -\hat{c}_i ( \dot{\theta}_i - \dot{\theta}_{i-1} ) - \hat{d}_i ( \dot{\theta}_i - \dot{\theta}_{i-1} )^3,   1 \le i \le n,   with \dot{\theta}_0 := 0,    (6.33)

where the \hat{c}_i, \hat{d}_i, i = 1, ..., n, are positive constants. The above relation thus assumes 'linear plus cubic' damping. Then the equation of motion of this damped n-body pendulum is given by

M^{(n)}(\theta^{(n)}) \ddot{\theta}^{(n)} + S^{(n)}(\theta^{(n)}) [\dot{\theta}^{2}]^{(n)} + C^{(n)} \dot{\theta}^{(n)} + D^{(n)}(\dot{\theta}^{(n)}) + F^{(n)}(\theta^{(n)}) = 0,    (6.34)

where the n-vector \dot{\theta} := d\theta/dt. The elements c_{i,j}^{(n)} of the n by n constant, symmetric, tridiagonal matrix C^{(n)} are obtained by using the virtual work done by the dissipative torques as

c_{i,i}^{(n)} = \hat{c}_i + \hat{c}_{i+1},   1 \le i \le n,
c_{i,i+1}^{(n)} = c_{i+1,i}^{(n)} = -\hat{c}_{i+1},   1 \le i \le n-1,    (6.35)

where we define \hat{c}_{n+1} := 0. Similarly, using virtual work, the n-vector D^{(n)} has elements d_i^{(n)} given by

d_i^{(n)} = \hat{d}_i ( \dot{\theta}_i - \dot{\theta}_{i-1} )^3 - \hat{d}_{i+1} ( \dot{\theta}_{i+1} - \dot{\theta}_i )^3,   1 \le i \le n,    (6.36)

where we again define \dot{\theta}_0 = \hat{d}_{n+1} = 0.

The term C^{(n)} \dot{\theta}^{(n)} in Eq. (6.34) gives rise to linear damping in the system, and the vector D^{(n)}(\dot{\theta}^{(n)}) gives rise to nonlinear cubic damping.
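The damping terms of Eqs. (6.35)-(6.36) are likewise straightforward to assemble; the Python sketch below (a continuation of the earlier pendulum_matrices sketch, with the same illustrative naming) builds the tridiagonal matrix C and the cubic damping vector D from the hinge coefficients.

    import numpy as np

    def damping_terms(theta_dot, c_hat, d_hat):
        """C^{(n)} and D^{(n)}(theta_dot) of Eqs. (6.35)-(6.36); c_hat, d_hat are length-n arrays."""
        n = len(theta_dot)
        c_ext = np.append(c_hat, 0.0)                 # defines c_hat_{n+1} = 0
        d_ext = np.append(d_hat, 0.0)                 # defines d_hat_{n+1} = 0
        C = np.diag(c_ext[:n] + c_ext[1:n+1])         # diagonal: c_i + c_{i+1}
        C -= np.diag(c_ext[1:n], 1) + np.diag(c_ext[1:n], -1)    # off-diagonal: -c_{i+1}
        rel = np.diff(np.concatenate(([0.0], theta_dot)))        # theta_dot_i - theta_dot_{i-1}
        rel3 = rel**3
        D = d_ext[:n]*rel3 - d_ext[1:n+1]*np.append(rel3[1:], 0.0)
        return C, D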
6.3 Control of an n-body damped planar pendulum

Our aim in this section is to obtain an explicit closed-form control of the n-body pendulum described in the previous section. We would like the control to be Lyapunov asymptotically stable, and global, so that we can control the nonlinear system from any given initial state (\theta^{initial}, \dot{\theta}^{initial}) to any final state (\theta^{final}, \dot{\theta}^{final}). To do this we use the fundamental equation of mechanics [15-16].

Assuming that the n-mass pendulum is damped as in Remark 3, the equation of motion of the controlled system is given by (we have removed the superscript (n) for clarity)

M(\theta) \ddot{\theta} + S(\theta) [\dot{\theta}^{2}] + C \dot{\theta} + D(\dot{\theta}) + F(\theta) = Q^{C}(\theta, \dot{\theta}),    (6.38)

where Q^{C} is the control force n-vector that needs to be determined. The system is highly nonlinear and has nonlinear damping. We rewrite equation (6.39) for further notational convenience as

M \ddot{\theta} = -\{ S [\dot{\theta}^{2}] + C \dot{\theta} + D + F \} + Q^{C} := Q + Q^{C},    (6.40)

where we have suppressed the arguments of the various quantities.

We begin by choosing a suitable Lyapunov function V(\theta, \dot{\theta}) that is positive definite, with V(\theta^{final}, \dot{\theta}^{final}) = 0 and V positive everywhere else, and, utilizing the result we derived in Chapter 2, enforce the constraint on the controlled system so that its trajectory always satisfies the relation

\frac{dV(\theta, \dot{\theta})}{dt} = -\alpha V(\theta, \dot{\theta}),    (6.41)

where \alpha > 0 is a suitable constant. From Lyapunov's second method we know [1, 51-54] that if the equality (6.42) is satisfied, the system will have an asymptotically stable equilibrium point at (\theta^{final}, \dot{\theta}^{final}). Furthermore, if the choice of our Lyapunov function is such that it is radially unbounded, then we are assured of global asymptotic stability. Equation (6.43) can be written as

A(\theta, \dot{\theta}) \ddot{\theta} = b(\theta, \dot{\theta}),    (6.44)

where the row n-vector A and the scalar b are respectively given by

A = \frac{\partial V}{\partial \dot{\theta}},   and   b = -\alpha V - \frac{\partial V}{\partial \theta} \dot{\theta}.    (6.45)

The explicit control that enforces the constraint equation (6.46) while simultaneously minimizing the control cost

J(t) = [Q^{C}]^{T} N(\theta) Q^{C}    (6.47)

for a given positive definite weighting matrix N(\theta) is obtained as

Q^{C} = N^{-1/2} G^{+} ( b - A M^{-1} Q ),    (6.48)

where the matrix G = A M^{-1} N^{-1/2} and G^{+} denotes the Moore-Penrose inverse of the matrix G. It is important that the function V and the parameter \alpha be chosen so that the constraint given by equation (6.49) is consistent at all times. As mentioned before, if V is radially unbounded, the equilibrium point (\theta^{final}, \dot{\theta}^{final}) is globally asymptotically stable.
In the following subsection we provide a numerical example.

6.3.1 Numerical Example

We consider a 10-mass damped planar pendulum in which all the links are identical and are each L-shaped. Figure 2 shows the geometry of the i-th link. The dimensions of the outer edges of the arms of each L-shaped link are taken to be a and b, and the width is t (see Figure 2). We assume that all the links are of uniform (and constant) thickness t_h, and that the density of the material of each link is \rho.

Figure 2. Geometry of the L-shaped link that forms the i-th body in the 10-link pendulum. The hinges are located at O_i and O_{i+1}. For a = 0.4 m, b = 0.5 m and t = 0.1 m, the angle \phi_i = 52.125°, and the second angle shown in the figure is about 24.444°.

For simplicity, the Lyapunov function V is chosen to be

V = \frac{1}{2} a_1 (\theta - \theta^{final})^T (\theta - \theta^{final}) + \frac{1}{2} a_2 \dot{\theta}^T \dot{\theta} + a_{12} (\theta - \theta^{final})^T \dot{\theta},    (6.50)

so that the equilibrium point is (\theta^{final}, 0). To ensure that V is positive definite we require that a_1 a_2 > a_{12}^2. The parameter \alpha in Eq. (6.51) is chosen to equal 2 a_{12} / a_2 to ensure consistency of this equation, and the weighting matrix N in Eq. (6.52) is set to M^{-1}. With this weighting matrix, and noting that A is a row vector, relation (6.53) becomes

Q^{C} = \frac{A^{T}}{A M^{-1} A^{T}} ( b - A M^{-1} Q ).    (6.54)

We note in passing that this would be the constraint force that Nature would apply to the pendular system were it required to satisfy the constraint given in Eq. (6.55); it is the constraint force given directly by the fundamental equation of motion.
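Putting Eqs. (6.45), (6.50) and (6.54) together, the closed-form control can be evaluated at each state as in the Python sketch below; it can be used with the pendulum_matrices and damping_terms sketches above, and, as before, the naming and structure are illustrative assumptions rather than the thesis implementation.

    import numpy as np

    def control_force(theta, theta_dot, theta_final, M, Q, a1=1.0, a2=1.0, a12=0.5):
        """Q^C of Eq. (6.54) for the V of Eq. (6.50), with N = M^{-1} and alpha = 2*a12/a2.
        M is the assembled mass matrix and Q collects the remaining (uncontrolled) forces,
        i.e. M*theta_ddot = Q + Q^C as in Eq. (6.40)."""
        alpha = 2.0 * a12 / a2
        dtheta = theta - theta_final
        V = 0.5*a1*(dtheta @ dtheta) + 0.5*a2*(theta_dot @ theta_dot) + a12*(dtheta @ theta_dot)
        A = a2*theta_dot + a12*dtheta                   # dV/d(theta_dot), Eq. (6.45)
        b = -alpha*V - (a1*dtheta + a12*theta_dot) @ theta_dot
        denom = A @ np.linalg.solve(M, A)               # A M^{-1} A^T
        if denom < 1e-12:                               # A = 0: Moore-Penrose inverse gives zero force
            return np.zeros_like(theta)
        return A * (b - A @ np.linalg.solve(M, Q)) / denom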
For the simulations shown below, the following parameter values are used for the geometry of each of the L-shaped links: a = 0.4 m, b = 0.5 m, t = 0.1 m (see Figure 2). The mass of each link is then 0.08 \rho t_h, and its moment of inertia about its center of mass is 0.002833 \rho t_h. For the numerical computations we take t_h = 0.01 m and \rho = 7850 kg/m^3. The damping parameters described in Remark 3 of Section 6.2 are chosen to be \hat{c}_i = 0.01 N-m-s/rad and \hat{d}_i = 0.001 N-m-s^3/rad^3, i = 1, ..., n. This then specifies the 10-link pendulum along with its nonlinear damping. The parameters used to describe the Lyapunov function in Eq. (6.56) are a_1 = 1, a_2 = 1, a_{12} = 0.5, so that \alpha = 1 in Eq. (6.57).

Starting from the static equilibrium position, the 10-link damped pendulum is "swung up" from rest. The following three cases are simulated:

(i) The system is swung and is required to come to rest 'upside down' with all its hinges aligned vertically above the fixed pivot O (see Figure 3(a)), so that \theta_i^{final} = \pi, \dot{\theta}_i^{final} = 0, \forall i;

(ii) the system is swung and is required to come to rest past its inverted 'upside down' position, so that all the hinges lie along the line x + y = 0 in the second quadrant of the XY plane (see Figure 3(b)), with \theta_i^{final} = 5\pi/4, \dot{\theta}_i^{final} = 0, \forall i; and

(iii) the system is swung and is required to make one complete revolution around the fixed pivot O before coming to rest again 'upside down' with all the hinges again aligned vertically above the pivot O, so that \theta_i^{final} = 3\pi, \dot{\theta}_i^{final} = 0, \forall i.

Figure 3(a) The final configuration of the system in case (i), when \theta_i^{final} = \pi, \dot{\theta}_i^{final} = 0, \forall i. Figure 3(b) The final configuration of the system in case (ii), when \theta_i^{final} = 5\pi/4, \dot{\theta}_i^{final} = 0, \forall i. (The black dot indicates the position of the 1st hinge, square dots indicate the locations of the other hinges, and the red line indicates the line joining the hinges.)

The equation of motion (6.58) of the controlled system, with Q^{C} explicitly specified by relation (6.59), is integrated in each of the abovementioned three cases using ode15s with a relative error tolerance of 10^{-7} and an absolute tolerance of 10^{-9}. Each simulation was run for 25 seconds.
Figure 4(a) The variation of the angles \theta_i, i = 1, 3, 5 with time for case (i). Figure 4(b) The variation of the angles \theta_i, i = 7, 9, 10 with time for case (i).

Figure 5(a) The errors e_i, i = 1, 3, 5 with time for case (i). Figure 5(b) The variation of the Lyapunov function V with time for case (i).

Figure 6(a) The control torques on the hinges Q_i^C, i = 1, 3, 5 as a function of time for case (i). Figure 6(b) The control torques on the hinges Q_i^C, i = 7, 9, 10 as a function of time for case (i).

For the first situation described above, Figure 4(a) shows the results of the simulation in which the angles \theta_i, i = 1, 3, 5, are plotted as functions of time; Figure 4(b) similarly shows the variation in the angles \theta_i, i = 7, 9, 10. Figure 5(a) shows the errors e_i = (\theta_i - \theta_i^{final}), i = 1, 5, 10, and Figure 5(b) shows the variation of the Lyapunov function with time. Figure 6(a) shows the control torques required to be applied at the first, third and fifth hinges, while Figure 6(b) shows those needed at the seventh, ninth and tenth hinges. The final position of the inverted pendulum at the end of the simulation is shown in Figure 3(a).

Figures 7, 8 and 9 similarly show results for the swing-up described in case (ii) above, in which the final state is a stable position with \theta_i^{final} = 5\pi/4, \dot{\theta}_i^{final} = 0, \forall i. The final position acquired by the 10-link pendulum at the end of the simulation is shown in Figure 3(b).

Figure 7(a) The variation of the angles \theta_i, i = 1, 3, 5 with time for case (ii). Figure 7(b) The variation of the angles \theta_i, i = 7, 9, 10 with time for case (ii).

Figure 8(a) The errors e_i, i = 1, 3, 5 with time for case (ii). Figure 8(b) The variation of the Lyapunov function V with time for case (ii).

Figure 9(a) The control torques on the hinges Q_i^C, i = 1, 3, 5 as a function of time for case (ii). Figure 9(b) The control torques on the hinges Q_i^C, i = 7, 9, 10 as a function of time for case (ii).

Figure 10(a) The variation of the angles \theta_i, i = 1, 3, 5 with time for case (iii). Figure 10(b) The variation of the angles \theta_i, i = 7, 9, 10 with time for case (iii).

Figure 10 shows the changes in \theta_i with time, as before, for case (iii), wherein the pendulum system makes one complete revolution about the fixed pivot before coming to its final 'inverted' state. The final position of the pendulum is identical to that shown in Figure 3(a), and is not shown for brevity. Figure 11(a) shows the errors e_i = (\theta_i - \theta_i^{final}), i = 1, 5, 10, and Figure 11(b) shows the change in the Lyapunov function with time. Figure 12 shows the requisite control torques as functions of time for achieving this.

Figure 11(a) The errors e_i, i = 1, 3, 5 with time for case (iii). Figure 11(b) The variation of the Lyapunov function V with time for case (iii).

Figure 12(a) The control torques on the hinges Q_i^C, i = 1, 3, 5 as a function of time for case (iii). Figure 12(b) The control torques on the hinges Q_i^C, i = 7, 9, 10 as a function of time for case (iii).
6.4 Control of the system under uncertainties

In the previous section, we looked at systems whose parameters are perfectly known. In reality, there is always some error in our knowledge of the system. In the current section, we look at a pendular system whose parameters are not known accurately; that is, the dimensions, the density of the material of the pendulum, or the damping coefficients could all be uncertain. Our best estimates of these parameters are available to us, and the errors in the parameter values are norm bounded. From here on, by 'actual system' we refer to the actual pendular system whose parameters are known only with bounded uncertainty. We call the system with our best-estimate parameter values the 'nominal system'. Since we know the nominal system exactly, we can control it starting from any initial state to any final desired state using the control derived in Section 3; this control is globally asymptotically stable. But if we apply this control force to the actual system, we might not get the desired result, because the actual system is different from the nominal system; the control could even become unstable. This problem can be tackled by the use of an additional controller that ensures that the controlled actual system tracks the trajectories of the controlled nominal system. This additional controller needs to be robust to uncertainties, and hence sliding mode control is a natural choice because of its robustness against norm-bounded uncertainties [3, 5]. However, there are a few disadvantages associated with discontinuous sliding mode control, such as causing damage to the actuators due to high-frequency switching, exciting unmodeled dynamics, etc. [3-4]. Because of these disadvantages, a continuous controller has been proposed in [25] using the concept of generalized sliding surfaces. In this approach, a user-supplied smooth function is utilized in the control design instead of the sign function used traditionally.

Using such an additional controller, it is not guaranteed that the actual system exactly tracks the trajectories of the nominal system in finite time. However, it is guaranteed that the actual system asymptotically tracks the trajectories of the nominal system within user-specified error bounds. In addition, since the nominal system and the actual system start out with the same initial conditions, the tracking errors in position and velocity start within these error bounds and stay within them forever. Thus, the actual system is guaranteed to asymptotically converge to an attracting region, specified by the user, that is close to the desired final state and contains it. This attracting region can be made arbitrarily small.
( ) ( , ) ( ) ( , )
Cu
a a a a a a a a
M Q Q t Q . (6.60)
The subscript ‘a’ under various quantities denotes that they belong to the actual system. The
nominal control force is shown here as only a function of time since it does not depend on the
state of the actual system. Its dependence is only on the state of nominal system as shown in Eq.
(6.48).
u
Q is the additional control force applied to the actual system so it can track the nominal
system within a certain bound. In the following, the method to compute
u
Q will be described.
Details of the proof are provided in the Appendix C.
We can define the errors in tracking as,
a
e ,
a
e . (6.61)
In the above, ( , ) is the state of the nominal system. We can define a sliding surface as,
153
s e ke (6.62)
where, 0 k is an arbitrary positive number. If the actual system can be restricted to stay on the
sliding surface 0 s , we are guaranteed that the actual system exactly tracks the trajectories of
the nominal system (since they both start out with same initial conditions). However, since we
intend to use a smooth function, we can ensure that the actual system stays within a small region
around the origin, s
. This region
defined as,
:|
n
s R s
(6.63)
can be made arbitrarily smaller as will be seen shortly.
On defining
11
( ) ( )
CC
aa
q M Q Q M Q Q
, (6.64)
Then, the method requires the computation of the following estimates,
(i)
1
min
: min{ of }
a
eigenvalues M
, (6.65)
(ii)
min
q k e
, t . (6.66)
In the above equations, denotes the
2
L norm. Having obtained these quantities, the simple
closed form expression for the additional control force is given as,
( / )
u
Qs . (6.67)
154
In this expression, is a positive number, which can be chosen by the user so as to meet desired
tracking tolerances. The tracking errors are guaranteed to be within the bounds given by (for a
Proof, see the Appendix C),
Using this additional controller, the tracking errors are guaranteed to be within the bounds,
1
i
e
k
, 2
i
e , 1,2, , in . (6.68)
Thus, as seen from the above equation, decreasing the value of has the effect of shrinking the
region
and reducing the maximum possible errors in tracking.
Remark: It has to be noted that the Lyapunov constraint (Eq. (6.41)) is not satisfied by the
controlled actual system and hence it does not converge to the exact final state asymptotically.
However, it does converge to a small region around the desired final state which can be made
arbitrarily smaller.
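A direct transcription of the estimates (6.65)-(6.66) and the control law (6.67), as reconstructed above, is given below in Python; the function names are illustrative, and in practice rho would be taken with a safety margin over the supremum required by Eq. (6.66).

    import numpy as np

    def rho_lower_bound(M_a_inv, q, e_dot, k):
        """Instantaneous value of (||q|| + k||e_dot||)/lambda_min; rho must exceed its supremum over time."""
        lam_min = np.min(np.linalg.eigvalsh(M_a_inv))        # Eq. (6.65), M_a^{-1} is symmetric
        return (np.linalg.norm(q) + k * np.linalg.norm(e_dot)) / lam_min

    def robust_compensator(rho, k, eps):
        """Q^u of Eq. (6.67); rho must over-bound the quantity in Eq. (6.66) for all t."""
        def Q_u(e, e_dot):
            s = e_dot + k * e            # sliding variable, Eq. (6.62)
            return -rho * s / eps        # Eq. (6.67)
        return Q_u

    # Example with the values used below: eps = 1e-4, k = 10, rho = 1e5.
    Qu = robust_compensator(rho=1e5, k=10.0, eps=1e-4)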
6.4.1 Numerical Example

We consider the ten-link pendulum system considered previously in Section 6.3, and now assume that the density of the material of the pendulum is not known exactly. The actual system has the density \rho_a = 7850 kg/m^3, whereas the nominal system has \rho = 7000 kg/m^3. We keep the rest of the parameters the same as before. The uncertainty in the density leads to uncertainty in the mass and the mass moment of inertia of each individual link. This ultimately leads to uncertainty in both the mass matrix and the force vector of the pendulum system, i.e., M_a \ne M, Q_a \ne Q.

The equation of motion of the controlled nominal system is

M \ddot{\theta} = Q + Q^{C},    (6.69)

where the nominal control force Q^{C} is computed using Eq. (6.54). The equation of motion of the controlled actual system is

M_a \ddot{\theta}_a = Q_a + Q^{C} + Q^{u},    (6.70)

where the additional control force Q^{u} is computed using Eq. (6.67). We choose the parameters for the additional controller to be \epsilon = 10^{-4}, k = 10, \rho = 10^{5}. For these chosen parameters, we are guaranteed that the tracking errors in position and velocity are bounded as

| e_i | \le \frac{\epsilon}{k} = 10^{-5},   | \dot{e}_i | \le 2\epsilon = 2 \times 10^{-4},   i = 1, 2, ..., n.    (6.71)

Equations (6.69) and (6.70) are numerically integrated simultaneously using the ode15s package on the MATLAB platform. The relative and absolute tolerances for the integration are chosen to be 10^{-7} and 10^{-10} respectively.

Once again, the three scenarios mentioned in the numerical example of Section 3.1 are simulated, in which (i) \theta_i^{final} = \pi, \dot{\theta}_i^{final} = 0, \forall i; (ii) \theta_i^{final} = 5\pi/4, \dot{\theta}_i^{final} = 0, \forall i; and (iii) \theta_i^{final} = 3\pi, \dot{\theta}_i^{final} = 0, \forall i.
156
Figure 13(a) The time variation of the angles of the actual pendulum
,
, 1,3,5
ai
i for case (i).
Figure 13(b) The time variation of the angles of the actual pendulum
,
, 7,9,10
ai
i for case(i).
Figure 14(a) The errors in tracking the nominal system , 1,5,10
i
ei with time for case (i).
Figure 14(b) The tracking errors in velocity , 1,5,10
i
ei with time for case(i).
157
Figure 15(a) The additional control torques on the hinges , 1,3,5
u
i
Qi as a function of time for
case (i). Figure 15(b) The additional control torques on the hinges , 7,9,10
u
i
Qi as a function
of time for case(i).
For the first case, Figure 13(a) shows the results of the simulation in which the angles of the
actual system
,
, 1,3,5,
ai
i are plotted as functions of time; Figure 13(b) similarly shows the
variation in the angles
,
, 7,9,10
ai
i . These can be compared with similar figures for the
nominal system shown in Figure 4(a) and 4(b). Figure 14(a) shows the error in tracking the
nominal system
,
()
i a i i
e , 1,5,10 i , and Figure 14(b) shows the error in tracking the
velocity of the nominal system
,
()
i a i i
e , 1,5,10 i . These values are seen to be smaller
than the bounds of
5
10
and
4
2 10
guaranteed (see Eq. (6.71)) due to our choice of the
parameter . Figure 15(a) shows the additional control torques required to be applied at the first,
third and fifth hinges, while Figure 15(b) shows those applied at the seventh, ninth and tenth
hinges. Figure 19(a) shows the variation of
158
Figure 16(a) The time variation of the angles of the actual pendulum
,
, 1,3,5
ai
i for case (ii).
Figure 16(b) The time variation of the angles of the actual pendulum
,
, 7,9,10
ai
i for case(ii).
the Lyapunov function with time. As mentioned in the remark following Eq. (6.68), the
Lyapunov function of the actual system is not guaranteed to decay with time. But since, the
trajectories of the actual system lie with in a tight region around the nominal system, the plot for
the time history of the Lyapunov function V for the actual system look almost exactly same as
that for the nominal system (compare Fig. 19(a) with Fig. 5(b)).
Figure 17(a) The errors $e_i$, $i = 1,5,10$, in tracking the nominal system with time for case (ii). Figure 17(b) The tracking errors in velocity $\dot e_i$, $i = 1,5,10$, with time for case (ii).
Figure 18(a) The additional control torques on the hinges $Q_i^u$, $i = 1,3,5$, as a function of time for case (ii). Figure 18(b) The additional control torques on the hinges $Q_i^u$, $i = 7,9,10$, as a function of time for case (ii).
Figures 16, 17 and 18 similarly show results for the swing-up described in the second case, in which the final state is a stable position with $\theta_i^{final} = 5\pi/4$, $\dot\theta_i^{final} = 0$, $\forall i$. Figure 19(b) shows the plot of the Lyapunov function for this case. Again, the plot looks quite similar to the one obtained for the nominal system (compare with Fig. 8(b)).
Figures 20, 21 and 22 show the corresponding plots for the third situation, in which the desired final state is at $\theta_i^{final} = 3\pi$, $\dot\theta_i^{final} = 0$, $\forall i$. We do not show the plot of the Lyapunov function for this situation, but it looks similar to what was obtained for the nominal system (Fig. 11(b)). In all three cases, we see that the controlled actual system closely tracks the trajectories of the nominal system, thus satisfying the control objective.
Figure 19(a) The variation of the Lyapunov function for the actual system with time for case (i). Figure 19(b) The variation of the Lyapunov function for the actual system with time for case (ii).
Figure 20(a) The time variation of the angles of the actual pendulum $\theta_{a,i}$, $i = 1,3,5$, for case (iii). Figure 20(b) The time variation of the angles of the actual pendulum $\theta_{a,i}$, $i = 7,9,10$, for case (iii).
Figure 21(a) The errors $e_i$, $i = 1,5,10$, in tracking the nominal system with time for case (iii). Figure 21(b) The tracking errors in velocity $\dot e_i$, $i = 1,5,10$, with time for case (iii).
Figure 22(a) The additional control torques on the hinges $Q_i^u$, $i = 1,3,5$, as a function of time for case (iii). Figure 22(b) The additional control torques on the hinges $Q_i^u$, $i = 7,9,10$, as a function of time for case (iii).
6.5 Conclusions
The current chapter demonstrates the use of mathematical induction to derive the equations of motion for an n-body planar pendulum system. This method simplifies the modeling process and reduces the algebraic complexity involved in deriving the equations. We do not assume that the centre of mass of each link lies on the straight line joining the hinges. For the case in which the properties of the system are known perfectly (the nominal system), a simple yet elegant control method is suggested for driving the system to any desired final state. This method makes use of the fundamental equation of mechanics to enforce a Lyapunov constraint on the system, thereby obtaining an explicit expression for the optimal control force that minimizes a user-specified control cost at each instant of time and is also globally asymptotically stable. Further, for the case in which the parameters of the system are known only with bounded uncertainty, a control methodology is suggested that adds a further control force which compels the controlled system to track the trajectories of the nominal system to within user-prescribed bounds. The actual controlled system is then stable in the sense that its trajectories are asymptotically attracted to a small region around the desired equilibrium point, and this region can be made arbitrarily small. Numerical examples are provided showing the simplicity and efficacy of the control methodology.
Chapter 7. Unified Approach to Modeling and Control of Multi-body Mechanical Systems
7.1 Introduction
In this chapter we develop a unified approach to modeling and control of multi-body mechanical systems. Often, modeling a complex mechanical system requires the use of more variables than the least number possible. In such cases, the relationships between the variables are described using modeling constraints. Satisfaction of these modeling constraints is essential so that the model accurately captures the physical behavior of the system. In this chapter, we propose a control approach for such systems that ensures that the modeling constraints are satisfied at all instants of time. Following the work in the earlier chapters, control objectives are cast as control constraints that, when enforced, ensure that the system satisfies the desired control requirements. The control constraints could be consistent with the modeling constraints or they could be inconsistent. However, it is assumed that each set of constraints is consistent within itself.
The task of designing controls for mechanical systems in the presence of modeling constraints was first taken up in [49]. Schutte's approach was inspired by the general result for constrained systems that do not satisfy d'Alembert's principle [62]. It was assumed that the control constraints are not consistent with the modeling constraints. Control forces are obtained in two steps. In the first step, only the control constraints are used, and the fundamental equation of motion is used to obtain a nominal control force that enforces the control constraints and minimizes the Gaussian at each instant of time. In the second step, these nominal forces are projected into the space of forces that produce accelerations consistent with the modeling constraints, yielding the control forces. The second step ensures that the modeling constraints are always satisfied. However, due to this projection, the controlled system does not in general satisfy the control constraints, which is a drawback of this method. It has nonetheless been shown through examples that although the control constraints are not satisfied at each instant of time, the method can ultimately satisfy the control requirements (albeit with no mathematical guarantees) and thus has practical significance.
The current approach does not make any assumption regarding the consistency of the control and modeling constraints. Specifically, it relieves the user of the need to check for consistency at each instant of time in order to choose between using either the fundamental equation of motion [15-16] or the approach in [49] to obtain the control forces. Thus it is more generally applicable. In addition, the control cost minimized at each instant of time can be any user-prescribed quadratic function. When the modeling and control constraints are consistent, the current approach ensures that the controlled system satisfies both the control and modeling constraints; when they are inconsistent, the norm of the error in satisfying the control constraints is minimized.
7.2 Description of constraints
Consider a system whose unconstrained motion can be described by the equations
$M(q,t)\,\ddot q = Q(q,\dot q,t)$,   (7.1)
where $M$ is a symmetric positive definite mass matrix and $Q$ is the generalized impressed (given) force vector. $q \in \mathbb{R}^n$ is the $n$-dimensional column vector that represents the configuration of the system. A dot on top of a variable represents the derivative with respect to time and two dots represent the second derivative with respect to time.
Let us assume that $n_m$ constraints need to be imposed on the unconstrained system described by Eq. (7.1) so that the constrained system now provides a proper description of the physical system under consideration. Let these constraints be of the form
$\varphi_m(q,\dot q,t) = 0$.   (7.2)
In the above equation, $\varphi_m \in \mathbb{R}^{n_m}$ is a column vector. These could consist of both holonomic as well as non-holonomic constraints.
It is important to note that these modeling constraints are essential in providing a proper mathematical description of the physical system, and they must be satisfied at all times if the mathematical model is to represent the physical system with probity. Furthermore, these constraints must be consistent, else, once again, the mathematical modelling of the system would be incorrect.
We assume that the constraints are smooth enough to be differentiated a sufficient number of times to get equations of the form [15]
$A_m(q,\dot q,t)\,\ddot q = b_m(q,\dot q,t)$.   (7.3)
Nature applies a force $Q^m(q,\dot q,t)$ (called the 'constraint force') on the system so that these modeling constraints are enforced. According to Gauss' principle, nature applies the constraint force in such a way that the Gaussian $G(t)$, defined as
$G(t) = Q^m(q,\dot q,t)^T\, M^{-1}(q,t)\, Q^m(q,\dot q,t)$,   (7.4)
is minimized at each instant of time. Thus, nature appears to minimize a quadratic cost $G(t)$ with the specific weighting matrix $M^{-1}$. The consequent equation of motion of the constrained mechanical system is then
$M(q,t)\,\ddot q = Q(q,\dot q,t) + Q^m(q,\dot q,t)$.   (7.5)
When control requirements (objectives) are placed on this mechanical system, these requirements can be viewed as a set of additional constraints. Thus, the system is subjected to control requirements of the form
$\varphi_c(q,\dot q,t) = 0$,   (7.6)
where $\varphi_c$ is a column vector of dimension $n_c$.
We assume that these constraints are smooth enough to be differentiated and expressed in the form
$A_c(q,\dot q,t)\,\ddot q = b_c(q,\dot q,t)$.   (7.7)
To satisfy the control objectives one needs to apply a (generalized) control force $Q^C(q,\dot q,t)$ to the system so that the system also satisfies the set of control constraints given in Eq. (7.7). This control force $Q^C$ is obtained by minimizing the control cost
$J_c(q,\dot q,t) = Q^C(q,\dot q,t)^T\, W(q,\dot q,t)\, Q^C(q,\dot q,t)$,   (7.8)
where $W(q,\dot q,t)$ is a user-prescribed symmetric, positive definite matrix.
The two sets of constraints given in Eqs. (7.3) and (7.7) are very different in character. While the modelling constraints given in Eq. (7.3) must be exactly satisfied for the mathematical model to represent the physical system with integrity, the control constraints given by Eq. (7.7) may or may not be. When the control constraints are exactly satisfied, the control objectives for (controlling) the physical system are met; when they are not, then we have inadequate control of the physical system.
The goal is to find the (generalized) force n-vectors $Q^m$ and $Q^C$ such that the controlled system described by the equation
$M(q,t)\,\ddot q = Q(q,\dot q,t) + Q^m(q,\dot q,t) + Q^C(q,\dot q,t)$   (7.9)
satisfies the following conditions:
A. $Q^m$ is found such that
(i) the modeling constraints (7.3) are satisfied, and
(ii) the Gaussian $G(t)$ shown in Eq. (7.4) is minimized.
B. $Q^C$ is found such that
(i) the control constraints (7.7) are satisfied, and
(ii) the control cost $J_c(t)$ shown in Eq. (7.8) is minimized.
In what follows, instead of the accelerations, we use 'scaled' accelerations, which were first introduced in Ref. [63]. 'Scaling' consists of pre-multiplying Eq. (7.9) by the matrix $M^{-1/2}$. Thus, after scaling, Eq. (7.9) can be rewritten as
$M^{1/2}\ddot q = M^{-1/2} Q + M^{-1/2} Q^m + M^{-1/2} Q^C$,   (7.10)
which, upon defining the scaled accelerations as
$\ddot q_s = M^{1/2}\ddot q$, $\quad a_s = M^{-1/2} Q$, $\quad \ddot q_s^m = M^{-1/2} Q^m$, $\quad \ddot q_s^c = M^{-1/2} Q^C$,   (7.11)
simplifies to
$\ddot q_s = a_s + \ddot q_s^m + \ddot q_s^c$.   (7.12)
The subscript 's' denotes scaled accelerations. From here on, we refer to $\ddot q_s$ as the scaled acceleration of the controlled system, $a_s$ as the scaled acceleration of the unconstrained system, $\ddot q_s^m$ as the scaled constraint acceleration, and $\ddot q_s^c$ as the scaled control acceleration. From here on, scaled acceleration vectors will simply be referred to as accelerations for brevity, unless required for clarity.
The modeling constraint can now be written in scaled form as
$B_m \ddot q_s = b_m$, $\quad B_m := A_m M^{-1/2}$,   (7.13)
and the control constraint as
$B_c \ddot q_s = b_c$, $\quad B_c := A_c M^{-1/2}$.   (7.14)
With the two sets of constraints given in Eqs. (7.13) and (7.14), and the two different cost functions given in Eqs. (7.4) and (7.8) to be minimized in enforcing these constraints, several different situations are possible.
In the next section, we first briefly review the fundamental equation of motion. Then in Section 7.4, we consider the most general case in which the model constraints (7.13) and the control constraints (7.14) need not be consistent with each other. In Section 7.5, we deal with the special case in which the constraints are consistent. Here again there are two possibilities: the control cost function $J_c(t)$ could be the same as the Gaussian $G(t)$, or it could be different. We investigate both cases in detail and show how simplifications emerge. The expression for the control force in Section 7.5 can be viewed as a simplified version of that obtained in the general case dealt with in Section 7.4. Section 7.6 provides a geometric understanding of the current approach. Numerical examples that demonstrate its effectiveness are provided in Section 7.7. Conclusions are given in Section 7.8.
7.3 Fundamental Equation of Motion
Before dealing with the general case involving both control and modeling constraints, let us first ignore the control constraints and look at the system with only the modeling constraints. Consider the motion of the constrained system whose unconstrained motion is described by Eq. (7.1), along with the modeling constraints described by Eq. (7.2) (or alternatively, by Eq. (7.13)). The equation of motion of the constrained system is given as
$M(q,t)\,\ddot q = Q(q,\dot q,t) + Q^m(q,\dot q,t)$,   (7.15)
which in terms of scaled accelerations can be rewritten as
$\ddot q_s = a_s + \ddot q_s^m$.   (7.16)
The constraint acceleration $\ddot q_s^m$ of the constrained system is explicitly found using the fundamental equation of motion as
$\ddot q_s^m = B_m^+ (b_m - B_m a_s)$.   (7.17)
Thus, the equation of motion of the constrained system, when the only constraints to be satisfied are the modeling constraints, can be written simply as
$\ddot q_s = a_s + B_m^+ (b_m - B_m a_s)$,   (7.18)
or alternatively as
$\ddot q_s = (I_n - B_m^+ B_m)\, a_s + B_m^+ b_m$.   (7.19)
In the above, $I_n$ denotes an $n \times n$ identity matrix. We denote the scaled acceleration of the physical system subjected only to the modeling constraints, and as yet not subjected to any control constraints, by $u_s$ (referred to as the scaled acceleration of the uncontrolled system), so that
$u_s := (I_n - B_m^+ B_m)\, a_s + B_m^+ b_m$.   (7.20)
Two important properties of the matrix $(I_n - B_m^+ B_m)$ in Eqs. (7.19) and (7.20) are noteworthy:
(i) the matrix $(I_n - B_m^+ B_m)$ is an orthogonal projection matrix, and
(ii) it projects any scaled acceleration vector into the null space of $B_m$, thus ensuring that the modeling constraint, Eq. (7.13), is always satisfied.
The first property can be quickly shown as below:
$(I_n - B_m^+ B_m)^2 = I_n - 2 B_m^+ B_m + B_m^+ B_m B_m^+ B_m = I_n - 2 B_m^+ B_m + B_m^+ B_m = I_n - B_m^+ B_m$.   (7.21)
In the above, we have made use of the Moore-Penrose condition $B_m B_m^+ B_m = B_m$. Furthermore, the matrix $(I_n - B_m^+ B_m)$ is symmetric since $(B_m^+ B_m)^T = B_m^+ B_m$.
To verify the second property, consider any n-vector $v$. Its projection is $(I_n - B_m^+ B_m)v$. Then we have
$B_m (I_n - B_m^+ B_m)\, v = (B_m - B_m B_m^+ B_m)\, v = (B_m - B_m)\, v = 0$.   (7.22)
Also, the minimum-norm, least-squares solution to the constraint equation $B_m \ddot q_s = b_m$ is given by $\ddot q_s = B_m^+ b_m$. When the constraints are consistent, which they must be if the modelling is done correctly, consistency then implies that [15]
$B_m B_m^+ b_m = b_m$.   (7.23)
Premultiplying Eq. (7.19) by $B_m$ and using Eqs. (7.22) and (7.23), one obtains
$B_m \ddot q_s = B_m (I_n - B_m^+ B_m)\, a_s + B_m B_m^+ b_m = b_m$,   (7.24)
thus ensuring that the scaled acceleration vector $\ddot q_s$ satisfies the constraint given by Eq. (7.13).
From a geometrical viewpoint, Eq. (7.19) can now be interpreted as follows. Nature appears to enforce the constraint given in Eq. (7.13) in two steps. First, she projects the unconstrained acceleration vector $a_s$ onto the null space of $B_m$ and then adds the correction vector $B_m^+ b_m$ so that the modeling constraint given in Eq. (7.13) is always satisfied (see Section 7.6 for more details).
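As a rough numerical illustration of Eqs. (7.19)-(7.24), the following MATLAB sketch checks the projector properties and the satisfaction of the modeling constraint. The numerical data for $M$, $Q$, $A_m$ and $b_m$ are hypothetical placeholders; only the structure of the computation matters here.

    n  = 4;                          % number of coordinates (illustrative)
    M  = diag([1 2 3 4]);            % example mass matrix (positive definite)
    Q  = [0; -9.81; 1; 0];           % example impressed force
    Am = [1 -1 0 0; 0 0 1 1];        % example modeling-constraint matrix
    bm = [0.3; -0.1];                % example right-hand side

    Mih = M^(-1/2);                  % M^{-1/2}
    as  = Mih*Q;                     % scaled unconstrained acceleration, Eq. (7.11)
    Bm  = Am*Mih;                    % Eq. (7.13)
    P   = eye(n) - pinv(Bm)*Bm;      % projector onto the null space of B_m

    disp(norm(P*P - P));             % property (i): P^2 = P  (~1e-16)
    disp(norm(P - P'));              % property (i): P is symmetric
    us  = P*as + pinv(Bm)*bm;        % Eq. (7.20)
    disp(norm(Bm*us - bm));          % Eq. (7.24): modeling constraint satisfied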
7.3.1 Adding an additional force to the constrained system
Now, if we were to apply a (generalized) control force $Q^C$ in addition to $Q$ on the system, the equation of motion of the controlled system becomes
$M\ddot q = (Q + Q^C) + Q^m$,   (7.25)
which can be rewritten in terms of the accelerations as
$\ddot q_s = (a_s + \ddot q_s^c) + \ddot q_s^m$.   (7.26)
Use of Eq. (7.17) then gives the constraint acceleration $\ddot q_s^m$ as
$\ddot q_s^m = B_m^+ \big[\,b_m - B_m (a_s + \ddot q_s^c)\,\big]$.   (7.27)
Thus, the equation of motion of the controlled system is
$\ddot q_s = a_s + B_m^+\big[\,b_m - B_m(a_s + \ddot q_s^c)\,\big] + \ddot q_s^c = (I_n - B_m^+ B_m)\, a_s + B_m^+ b_m + (I_n - B_m^+ B_m)\,\ddot q_s^c$.   (7.28)
Equation (7.28) shows the effect of adding a control force to the system. Comparing Eqs. (7.20) and (7.28), the scaled acceleration $\ddot q_s$ is modified in the presence of $Q^C$ by the addition of the projection of the scaled control acceleration $\ddot q_s^c$ into the null space of $B_m$. It is important to note that Eq. (7.28) ensures that the modeling constraints given by Eq. (7.13) are always satisfied, no matter what the scaled control acceleration $\ddot q_s^c$ is.
7.4 Inconsistent constraints
When the constraints are inconsistent, no acceleration n-vector $\ddot q_s$ can be found that simultaneously satisfies both the constraint sets given by relations (7.13) and (7.14). In such a situation, as explained before, it is still required that the modeling constraints be always satisfied, else the integrity of the proper physical description of the mechanical system will be compromised. One could thus imagine that:
(i) the controller applies a control force $Q^C$, and
(ii) the appropriate constraint force $Q^m$ is created in response to this by the physical mechanical system; one then needs to ensure that in the presence of $Q^C$ the mathematical model is in compliance with the proper physical description of the system.
In other words, $Q^m$ may be considered a function of $Q^C$. The physical mechanical system sees the control force $Q^C$ as an externally applied force and reacts appropriately to it, ensuring that the modeling constraints, which ensure the integrity of the physical description of the system, are always satisfied. Hence, once an explicit expression for $Q^m$ as a function of $Q^C$ and $Q$ is obtained, $Q^C$ can be determined by minimizing the norm of the error $e$ in satisfying the control constraints,
$e = A_c \ddot q - b_c = B_c \ddot q_s - b_c$.   (7.29)
The equation of the system in the presence of these two forces is given as
$M\ddot q = Q + Q^m + Q^C$.   (7.30)
For this system, for a given impressed force $Q$ and a given control force $Q^C$, the constraint force $Q^m$ that ensures that (i) the modeling constraints (7.13) are satisfied and (ii) the Gaussian $G(t)$ in Eq. (7.4) is minimized, is given by the fundamental equation of motion.
Result 1: The (generalized) control force $Q^C$ that (i) minimizes the norm of the error in satisfying the control constraints (see Eq. (7.29)) and (ii) simultaneously minimizes the control cost $J_c$ shown in Eq. (7.8) is
$Q^C = M^{1/2}\,\ddot q_s^c$,   (7.31)
where the control acceleration $\ddot q_s^c$ is given by
$\ddot q_s^c = S\,B_{cms}^+\,(b_c - B_c u_s)$.   (7.32)
The various quantities in the above equation are, respectively,
$S = M^{-1/2} W^{-1/2}$, $\quad B_{cms} = B_c (I_n - B_m^+ B_m)\, S$, $\quad u_s = (I_n - B_m^+ B_m)\, a_s + B_m^+ b_m$.   (7.33)
Proof: In Section 7.3.1, the equation of motion of the controlled system has been obtained in terms of the accelerations as
$\ddot q_s = a_s + B_m^+ (b_m - B_m a_s) + (I_n - B_m^+ B_m)\,\ddot q_s^c$.   (7.34)
If we denote the acceleration of the uncontrolled system (see Eq. (7.20)) as
$u_s = (I_n - B_m^+ B_m)\, a_s + B_m^+ b_m$,   (7.35)
Eq. (7.34) simplifies to
$\ddot q_s = u_s + (I_n - B_m^+ B_m)\,\ddot q_s^c$.   (7.36)
By substituting Eq. (7.36) in Eq. (7.29), we obtain
$e = B_c u_s + B_c (I_n - B_m^+ B_m)\,\ddot q_s^c - b_c$,   (7.37)
which can be simplified as
$e = B_c (I_n - B_m^+ B_m)\,\ddot q_s^c - (b_c - B_c u_s)$.   (7.38)
The control cost $J_c(t)$ given in Eq. (7.8) can be written in terms of the quantity $\ddot q_s^c$ as
$J_c = (Q^C)^T W\, Q^C = (\ddot q_s^c)^T M^{1/2} W^{1/2}\, W^{1/2} M^{1/2}\,\ddot q_s^c$.   (7.39)
If we define the quantities
$S^{-1} := W^{1/2} M^{1/2}$ $\quad$ and $\quad z_c := W^{1/2} M^{1/2}\,\ddot q_s^c = S^{-1}\ddot q_s^c$,   (7.40)
then the control cost is simply $J_c = z_c^T z_c$ and $\ddot q_s^c = S z_c$. Minimizing $J_c$ would then mean selecting a vector $z_c$ with minimum $L_2$ norm.
Using these quantities, Eq. (7.38) reduces to
$e = B_c (I_n - B_m^+ B_m)\, S z_c - (b_c - B_c u_s)$.   (7.41)
The problem of finding $\ddot q_s^c$ is therefore reduced to finding the vector $z_c$ that minimizes the norm of the error in Eq. (7.41) and has minimum norm (since $J_c = \|z_c\|^2$). By denoting
$B_{cms} := B_c (I_n - B_m^+ B_m)\, S$,   (7.42)
the solution is simply given using the Moore-Penrose pseudo-inverse as
$z_c = B_{cms}^+\,(b_c - B_c u_s)$.   (7.43)
Thus, the explicit expression for the control acceleration $\ddot q_s^c$ is
$\ddot q_s^c = S z_c = S\,B_{cms}^+\,(b_c - B_c u_s)$.   (7.44)
Using this relation, the generalized control force $Q^C$ can be obtained from Eq. (7.31).
Remark 1: If the constraints are not consistent, the control given in relation (7.31) minimizes $\|A_c\ddot q - b_c\|$. In other words, we are satisfying the control constraint given in Eq. (7.7) in the least-squares sense while minimizing the control cost $J_c = (Q^C)^T W Q^C$.
Remark 2: It must be noted (see Eq. (7.38)) that as the weighting matrix $W$ changes, the control force $Q^C$ changes, but the norm of the error in satisfying the control constraints, $\|A_c\ddot q - b_c\|$, remains the same and is the minimum possible among all control forces that satisfy the modelling constraints. The control force that minimizes this norm and also minimizes the user-specified control cost $J_c$ given in Eq. (7.39) is obtained from Eqs. (7.44) and (7.31).
Remark 3: The equation of motion of the controlled system in its final form is
$M\ddot q = Q + Q^m + Q^C$,   (7.45)
the control force is explicitly given as
$Q^C = M^{1/2}\, S\, B_{cms}^+\,(b_c - B_c u_s)$,   (7.46)
and the constraint force in closed form is
$Q^m = M^{1/2} B_m^+\big[\,b_m - B_m(a_s + \ddot q_s^c)\,\big] = M^{1/2} B_m^+\big[\,b_m - B_m a_s - B_m S B_{cms}^+(b_c - B_c u_s)\,\big]$.   (7.47)
It should be observed that $(b_m - B_m a_s)$ in the first member above signifies the extent to which the unconstrained acceleration does not satisfy the modeling constraints. In the second member above, $(b_c - B_c u_s)$ signifies the extent to which the acceleration $u_s$ (of the system in the absence of any control) does not satisfy the control constraints.
To recap, the expression in Eq. (7.46) can be used to obtain the control force in the general situation when (a) the control constraints are not consistent with the modeling constraints and (b) the control cost $J_c$ to be minimized is different from the Gaussian, i.e., $W \ne M^{-1}$. This explicit expression for the control force is obtained by minimizing the norm of the error in satisfying the control constraints and minimizing the control cost $J_c$, while ensuring that the modelling constraints are always satisfied. Thus, the control obtained in the current approach has an in-built way of always respecting and satisfying the modeling constraints, which must be satisfied at all instants of time in order to preserve the proper correspondence between the physical system and its mathematical model. This is different from the approach used in Ref. [49], where a suitable control force is first found to satisfy the control constraints, and then this control force is made to satisfy the modeling constraints by projecting it onto the space of "permissible" control forces.
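The general-case expressions (7.33), (7.46) and (7.47) translate directly into a few lines of MATLAB. The sketch below uses hypothetical numerical data standing in for $M$, $Q$, $A_m$, $b_m$, $A_c$, $b_c$ and $W$; with the single modeling and single control constraint chosen here the two constraints happen to be consistent, so both residuals come out at round-off level.

    n  = 4;
    M  = diag([1 2 3 4]);   Q  = [0; -9.81; 1; 0];
    Am = [1 -1 0 0];        bm = 0.2;            % modeling constraint (illustrative)
    Ac = [1  1 1 1];        bc = -0.5;           % control constraint (illustrative)
    W  = diag([2 1 1 3]);                        % user-prescribed weighting

    Mih = M^(-1/2);
    as  = Mih*Q;   Bm = Am*Mih;   Bc = Ac*Mih;
    P   = eye(n) - pinv(Bm)*Bm;                  % projector onto null space of B_m
    us  = P*as + pinv(Bm)*bm;                    % Eq. (7.35)
    S   = M^(-1/2)*W^(-1/2);                     % Eq. (7.33)
    Bcms = Bc*P*S;                               % Eq. (7.42)

    qsc = S*pinv(Bcms)*(bc - Bc*us);             % Eq. (7.44)
    QC  = M^(1/2)*qsc;                           % Eq. (7.46), control force
    Qm  = M^(1/2)*pinv(Bm)*(bm - Bm*(as + qsc)); % Eq. (7.47), constraint force

    qs  = us + P*qsc;                            % Eq. (7.36), controlled acceleration
    disp(norm(Bm*qs - bm));                      % modeling constraint: ~0 always
    disp(norm(Bc*qs - bc));                      % control-constraint error (minimized)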
Next, the situation where both the model constraints (Eq. (7.13)) and the control constraints (Eq.
(7.14)) are consistent is taken up.
7.5 Consistent constraints
When both the model constraints and the control constraints are consistent, there are again two possible situations, because the weighting matrix $W$ could be the same as $M^{-1}$ or it could be different. Consider first the case when they are not the same.
7.5.1 Unequal weighting matrices, $W \ne M^{-1}$
Such a situation could arise when we are interested in designing a control system for a mechanical system which already has some modelling constraints imposed on it, and the control system needs to optimize a norm of the (generalized) control force which is different from the Gaussian.
The main result of this section is the proof that the control force $Q^C$ obtained using Eq. (7.46) ensures that the control constraints are satisfied exactly; the modelling constraints are, of course, always satisfied.
Before we state and prove this result, we prove a lemma which will be used later.
Lemma: Equations (7.13) and (7.14) are consistent if and only if
$B_{cms} B_{cms}^+\,(b_c - B_c B_m^+ b_m) = b_c - B_c B_m^+ b_m$.   (7.48)
Proof: The necessary and sufficient condition for both the equations (7.13) and (7.14) to be consistent is that there exists an n-vector $\zeta$ such that
$B_m \zeta = b_m$ $\quad$ and $\quad B_c \zeta = b_c$.   (7.49)
From the first equality in Eq. (7.49), we know that there exists a vector $\eta$ such that
$\zeta = B_m^+ b_m + (I_n - B_m^+ B_m)\,\eta$.   (7.50)
Substituting for $b_m$ from the first equality in Eq. (7.49) into Eq. (7.50), we get
$\zeta = B_m^+ B_m \zeta + (I_n - B_m^+ B_m)\,\eta$.   (7.51)
Rearranging the terms in Eq. (7.51), we have
$(I_n - B_m^+ B_m)\,\zeta = (I_n - B_m^+ B_m)\,\eta$.   (7.52)
Thus, Eq. (7.50) yields
$\zeta = B_m^+ b_m + (I_n - B_m^+ B_m)\,\zeta$.   (7.53)
Substituting for $\zeta$ from Eq. (7.53) in the second equality of Eq. (7.49), we get
$B_c B_m^+ b_m + B_c (I_n - B_m^+ B_m)\,\zeta = b_c$.   (7.54)
Therefore, there exists a vector $\zeta$ such that
$B_c (I_n - B_m^+ B_m)\,\zeta = b_c - B_c B_m^+ b_m$.   (7.55)
Hence there exists a vector $\mu = S^{-1}\zeta$ such that
$B_c (I_n - B_m^+ B_m)\, S\mu = b_c - B_c B_m^+ b_m$.   (7.56)
Using the definition of $B_{cms}$ (see Eq. (7.42)), Eq. (7.56) reduces to
$B_{cms}\,\mu = b_c - B_c B_m^+ b_m$.   (7.57)
Thus we conclude that there exists a vector $\mu$ such that (7.57) is true, and hence
$B_{cms} B_{cms}^+\,(b_c - B_c B_m^+ b_m) = b_c - B_c B_m^+ b_m$.   (7.58)
Result 2: If Eqs. (7.13) and (7.14) are consistent, then the control force $Q^C$ given in Eq. (7.46) ensures that the control constraints $B_c\ddot q_s = b_c$ (or alternatively, $A_c\ddot q = b_c$) are exactly satisfied.
Proof: Using Eq. (7.34), we have
$B_c\ddot q_s = B_c\big[a_s + B_m^+(b_m - B_m a_s) + (I_n - B_m^+B_m)\,\ddot q_s^c\big]$,   (7.59)
and substituting for $\ddot q_s^c$ from Eq. (7.44), we get
$B_c\ddot q_s = B_c(I_n - B_m^+B_m)\,a_s + B_cB_m^+ b_m + B_c(I_n - B_m^+B_m)\,S\,B_{cms}^+(b_c - B_c u_s)$.   (7.60)
The third member on the right-hand side of the above equation can be simplified as
$B_c(I_n - B_m^+B_m)\,S\,B_{cms}^+(b_c - B_c u_s) = B_{cms}B_{cms}^+\big[b_c - B_c(I_n - B_m^+B_m)a_s - B_cB_m^+ b_m\big]$
$= B_{cms}B_{cms}^+(b_c - B_cB_m^+ b_m) - B_{cms}B_{cms}^+ B_c(I_n - B_m^+B_m)\,S S^{-1} a_s$
$= (b_c - B_cB_m^+ b_m) - B_{cms}B_{cms}^+ B_{cms}\,S^{-1} a_s$
$= (b_c - B_cB_m^+ b_m) - B_{cms}\,S^{-1} a_s$
$= b_c - B_cB_m^+ b_m - B_c(I_n - B_m^+B_m)\,a_s$.   (7.61)
In the third equality above, we have used Eq. (7.58). Substituting Eq. (7.61) in Eq. (7.60) yields
$B_c\ddot q_s = B_c(I_n - B_m^+B_m)\,a_s + B_cB_m^+ b_m + b_c - B_cB_m^+ b_m - B_c(I_n - B_m^+B_m)\,a_s = b_c$.   (7.62)
7.5.2 Equal weighting matrices, $W = M^{-1}$
Such a situation could arise in real life when studying the effect of an additional set of physical constraints on a mechanical system; these additional constraints would then show up in the mathematical model as additional modelling constraints. Another example is when we might be interested in designing a control system to control a mechanical system which already has some modeling constraints imposed on it, and the weighting matrix $W$ that describes the quadratic control cost is set equal to $M^{-1}$.
Since $W = M^{-1}$, from the first relation in Eq. (7.33) we have $S = I_n$. This simplifies various quantities, such as
$B_{cms} = B_{cm} := B_c (I_n - B_m^+ B_m)$, $\quad$ and $\quad \ddot q_s^c = B_{cms}^+\,(b_c - B_c u_s)$.   (7.63)
As a result, simpler expressions for the constraint force $Q^m$ and the control force $Q^C$ are obtained.
Corollary: The constraint and control forces $Q^m$, $Q^C$ required to enforce the constraints given in Eqs. (7.3) and (7.7), and simultaneously minimize the cost functions in Eqs. (7.4) and (7.8) when $W = M^{-1}$, are
$Q^m = M^{1/2} B_m^+\big[\,b_m - B_m(a_s + \ddot q_s^c)\,\big]$,
$Q^C = M^{1/2}\,\ddot q_s^c = M^{1/2}\big[B_c(I_n - B_m^+B_m)\big]^+(b_c - B_c u_s) = M^{1/2} B_{cm}^+\,(b_c - B_c u_s)$,   (7.64)
where $u_s$ is given in Eq. (7.35).
Proof: The first relation is Eq. (7.27). The second relation is obtained from Eq. (7.44) by setting $S = I_n$.
Remark 4: When the control requirements are not satisfied by the system at the initial time ($t = 0$), one can use, instead of the constraints $\varphi_c(q,\dot q,t) = 0$, the modified constraint equations [17-18]
$\dot\varphi_c^i + \alpha_i\,\varphi_c^i = 0, \qquad \alpha_i > 0$,   (7.65)
for each control requirement that can be expressed as a nonholonomic constraint, and the modified constraint equations [17-18]
$\ddot\varphi_c^i + \alpha_i\,\dot\varphi_c^i + \beta_i\,\varphi_c^i = 0, \qquad \alpha_i,\ \beta_i > 0$,   (7.66)
for each control requirement that can be expressed as a holonomic constraint. In fact, from a numerical point of view it appears that the use of these modified constraints is useful even when the system starts out satisfying the constraint requirements.
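The way a holonomic control requirement is converted into the form $A_c\ddot q = b_c$ through Eq. (7.66) can be illustrated with a tiny MATLAB sketch. The requirement $\varphi_c = q_1 + q_2^2 - 1$, the gains and the state used below are purely illustrative placeholders.

    % One holonomic requirement phi_c(q) = 0 turned into A_c*qddot = b_c via Eq. (7.66).
    alpha = 2;  beta = 10;                        % stabilization gains (> 0)
    q    = [0.3; 0.8];   qdot = [0.1; -0.2];      % current state (hypothetical)

    phi    = q(1) + q(2)^2 - 1;                   % phi_c(q)   (illustrative choice)
    grad   = [1; 2*q(2)];                         % d(phi_c)/dq
    Hess   = [0 0; 0 2];                          % second derivatives of phi_c
    phidot = grad.'*qdot;

    % phi_ddot = grad'*qddot + qdot'*Hess*qdot, so Eq. (7.66) gives:
    Ac = grad.';                                  % row multiplying qddot
    bc = -qdot.'*Hess*qdot - alpha*phidot - beta*phi;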
7.6 Geometric explanation of the control approach
In this section, a geometric interpretation of the method is provided. For ease of exposition, $B_m$ and $B_c$ are taken to be row vectors, and the modeling constraints are taken to be consistent with the control constraints. Figure 1 shows the general representation of the unconstrained system in $\mathbb{R}^2$. O is the origin in the scaled acceleration space and the vector OA represents the scaled unconstrained acceleration $a_s$. The modeling constraint is represented by the plane $P_m$ described by the equation $B_m\ddot q_s = b_m$. Any vector that starts at the origin and whose head lies on this plane satisfies the modeling constraint.
Figure 1. Representation of the unconstrained system and the constraints in scaled acceleration space in $\mathbb{R}^2$ (inset shows the unit vectors perpendicular to the planes $P_m$ and $P_c$).
Any vector that lies wholly in the plane $P_m$ lies in the null space of the matrix $B_m$. Similarly, the control constraint is represented by the plane $P_c$ whose equation is $B_c\ddot q_s = b_c$. These two constraint planes $P_m$ and $P_c$ (in $\mathbb{R}^2$) intersect in the point E, at which the modelling and the control constraint are both exactly satisfied.
The lines OC and OD are perpendicular to the planes $P_m$ and $P_c$, and the unit vectors along these lines are given by
$e_m = \dfrac{B_m^T}{\|B_m\|}$, $\quad$ and $\quad e_c = \dfrac{B_c^T}{\|B_c\|}$,   (7.67)
respectively. Both $e_m$ and $e_c$ are column vectors.
The inset of Figure 1 shows a closer view of these unit vectors. The angle between them is $\theta$, and denoting its cosine by $\gamma$, we have
$\gamma := \cos\theta = e_m^T e_c$.   (7.68)
The vector $\gamma e_m$ is thus the projection of the vector $e_c$ along $e_m$, and $e_c - \gamma e_m$ is a vector perpendicular to $e_m$ (and thus in a direction along the plane $P_m$) with magnitude $(1-\gamma^2)^{1/2}$. Hence, one can write $e_c = \gamma e_m + (1-\gamma^2)^{1/2}\,e_{c,m}$, where $e_{c,m}$ is a unit vector orthogonal to $e_m$. Being orthogonal, we note that $e_m^T e_{c,m} = 0$.
We begin by considering the situation when only modelling constraints are present, as shown in Figure 2. The acceleration of the constrained system is then given by $u_s = a_s + \ddot q_s^m$. The vector OA represents $a_s$, and the vector AB $= B_m^+(b_m - B_m a_s) = \ddot q_s^m$. AB is in the direction of $B_m^T$, and therefore in the direction of $e_m$, which is perpendicular to the plane $P_m$. The vector OB represents the acceleration of the uncontrolled system, $u_s$ (see Eqs. (7.16)-(7.20)). It is the vector sum of (i) the projection CB of the acceleration vector $a_s$ along the plane $P_m$, and (ii) the vector OC, which equals $B_m^+ b_m$, is perpendicular to the plane $P_m$, and is the shortest distance from O to the plane. In brief, Figure 2 shows that $u_s =$ OB $=$ OC $+$ CB $=$ OA $+$ AB.
Figure 2. Representation of the uncontrolled system with only modeling constraints.
Figure 3 shows the situation when both modeling and control constraints are present. For simplicity, the case when $W = M^{-1}$ is considered. From Eq. (7.64), the scaled control acceleration $\ddot q_s^c$ is given by
$\ddot q_s^c = \big[B_c(I_n - B_m^+B_m)\big]^+(b_c - B_c u_s)$.   (7.69)
Since
$B_m^+ B_m = \dfrac{B_m^T B_m}{B_m B_m^T} = \dfrac{B_m^T}{\|B_m\|}\,\dfrac{B_m}{\|B_m\|} = e_m e_m^T$,   (7.70)
Figure 3. Representation of the controlled system with both modeling and control constraints.
and
$B_c(I_n - B_m^+B_m) = B_c - B_c\,e_m e_m^T = \|B_c\|\,(e_c - \gamma e_m)^T = \|B_c\|\,(1-\gamma^2)^{1/2}\,e_{c,m}^T$,   (7.71)
the vector $\big[B_c(I_n - B_m^+B_m)\big]^+$ is therefore given by
$\big[B_c(I_n - B_m^+B_m)\big]^+ = \dfrac{1}{\|B_c\|\,(1-\gamma^2)^{1/2}}\,e_{c,m}$.   (7.72)
Thus $\ddot q_s^c$ is a vector in the direction of $e_{c,m}$ whose length is (using Eqs. (7.69) and (7.72))
$\|\ddot q_s^c\| = \dfrac{b_c - B_c u_s}{\|B_c\|\,(1-\gamma^2)^{1/2}}$.   (7.73)
In Figure 3, the vector BF that is orthogonal to the plane $P_c$ is given by
$\mathrm{BF} = B_c^+(b_c - B_c u_s) = (B_c^T/\|B_c\|^2)\,(b_c - B_c u_s)$.   (7.74)
Hence its length is
$l = \dfrac{b_c - B_c u_s}{\|B_c\|}$.   (7.75)
Therefore the distance of the intersection point E of the two planes $P_m$ and $P_c$ from B in the direction $e_{c,m}$ is obtained from the right triangle BEF as
$\mathrm{BE} = \dfrac{l}{\sin\theta} = \dfrac{l}{(1-\gamma^2)^{1/2}} = \dfrac{b_c - B_c u_s}{\|B_c\|\,(1-\gamma^2)^{1/2}}$,   (7.76)
which is exactly the length of the vector $\ddot q_s^c$ found in Eq. (7.73). Thus, $\ddot q_s^c = \mathrm{BE}$.
From Eq. (7.12) we have
$\ddot q_s = a_s + \ddot q_s^m + \ddot q_s^c$,   (7.77)
and using Eq. (7.64), we find that
$\ddot q_s^m = B_m^+(b_m - B_m a_s) - B_m^+B_m\,\ddot q_s^c = B_m^+(b_m - B_m a_s) = \mathrm{AB}$,   (7.78)
where the last equality follows from $e_m^T e_{c,m} = 0$. Since the vector $e_m$ is along the vector $B_m^T$ while $\ddot q_s^c$ is along the vector $e_{c,m}$, the two vectors $B_m^T$ and $\ddot q_s^c$ are orthogonal, and their inner product $B_m\ddot q_s^c$ must equal zero. Therefore $\ddot q_s^m$ is simply AB, and from Eq. (7.77) we obtain
$\ddot q_s = a_s + \ddot q_s^m + \ddot q_s^c = \mathrm{OA} + \mathrm{AB} + \mathrm{BE} = \mathrm{OE}$.   (7.79)
Thus the scaled acceleration of the system is represented by OE, where the point E is exactly at the point of intersection of the two planes $P_m$ and $P_c$! Hence Eq. (7.79) shows that the acceleration $\ddot q_s$ of the controlled system satisfies both the modelling and control constraints exactly.
The approach taken in Ref. [49] does not offer such a guarantee when the control constraints are consistent with the modeling constraints. Using the approach given in Ref. [49], the vector that starts at the point B in Figure 3 and moves along the plane $P_m$ may not, in general, reach the point of intersection E of the two planes; consequently, though the modelling constraint is satisfied, the control requirement (constraint) may not be.
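The two-dimensional picture above is easy to reproduce numerically. In the MATLAB sketch below, the row vectors $B_m$, $B_c$, the scalars $b_m$, $b_c$ and the point A are hypothetical choices (with $M = I_2$, so scaled and unscaled quantities coincide); the check at the end confirms that the point E reached by the construction satisfies both constraints.

    Bm = [1 1];   bm = 1;          % modeling 'plane' P_m: Bm*x = bm (illustrative)
    Bc = [1 -2];  bc = 0.5;        % control  'plane' P_c: Bc*x = bc (illustrative)
    as = [0.7; -0.3];              % scaled unconstrained acceleration (point A)

    P   = eye(2) - pinv(Bm)*Bm;            % projection along P_m
    us  = P*as + pinv(Bm)*bm;              % point B, Eq. (7.20)
    qsc = pinv(Bc*P)*(bc - Bc*us);         % Eq. (7.69); the vector BE, lying in P_m
    qs  = us + qsc;                        % point E, Eq. (7.79)

    disp([Bm*qs - bm, Bc*qs - bc]);        % both residuals ~1e-16: E lies on both lines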
7.7 Numerical Examples
Example 1:
Consider a 'dumbbell' system consisting of two point masses $m_1$ and $m_2$ connected by a massless rigid bar of length $l$. The coordinates of mass $m_1$ in an inertial frame of reference are $(x_1, y_1, z_1)$, and those of mass $m_2$ are $(x_2, y_2, z_2)$. The body is acted upon by the downward force of gravity, so the equation of motion of the unconstrained system (the point masses without the bar) is
$M\ddot q = Q$,   (7.80)
where $q := [x_1, y_1, z_1, x_2, y_2, z_2]^T$, $M$ is the symmetric positive definite mass matrix $M := \mathrm{diag}(m_1, m_1, m_1, m_2, m_2, m_2)$, and $Q$ is the impressed force vector due to gravity, $Q = [0, 0, -m_1 g, 0, 0, -m_2 g]^T$. The equation of motion of the unconstrained system can also be expressed using scaled accelerations as
$\ddot q_s = a_s$,   (7.81)
where the scaled acceleration $\ddot q_s$ is given as
$\ddot q_s = M^{1/2}\,[\ddot x_1, \ddot y_1, \ddot z_1, \ddot x_2, \ddot y_2, \ddot z_2]^T$,   (7.82)
and the scaled unconstrained acceleration $a_s$ is
$a_s = M^{-1/2}Q = [0, 0, -\sqrt{m_1}\,g, 0, 0, -\sqrt{m_2}\,g]^T$.   (7.83)
The rigid bar is modeled using the modeling constraint
$\varphi_m := (x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2 - l^2 = 0$,   (7.84)
which needs to be satisfied at all instants of time. The modeling constraint can be put in the desired form by twice differentiating Eq. (7.84) with respect to time to obtain
$A_m\ddot q = b_m$,   (7.85)
where the row vector $A_m$ is
$A_m = \big[\,x_1 - x_2,\ y_1 - y_2,\ z_1 - z_2,\ -(x_1 - x_2),\ -(y_1 - y_2),\ -(z_1 - z_2)\,\big]$,   (7.86)
and the scalar $b_m$ is
$b_m = -\big[(\dot x_1 - \dot x_2)^2 + (\dot y_1 - \dot y_2)^2 + (\dot z_1 - \dot z_2)^2\big]$.   (7.87)
Alternatively, Eq. (7.85) can be expressed in terms of scaled accelerations as $B_m\ddot q_s = b_m$, with
$B_m := A_m M^{-1/2} = \left[\dfrac{x_1 - x_2}{\sqrt{m_1}},\ \dfrac{y_1 - y_2}{\sqrt{m_1}},\ \dfrac{z_1 - z_2}{\sqrt{m_1}},\ -\dfrac{x_1 - x_2}{\sqrt{m_2}},\ -\dfrac{y_1 - y_2}{\sqrt{m_2}},\ -\dfrac{z_1 - z_2}{\sqrt{m_2}}\right]$.   (7.88)
In the presence of the modeling constraint, the equation of motion of the system is modified to
$M\ddot q = Q + Q^m$,   (7.89)
or equivalently (see Section 7.3),
$\ddot q_s = a_s + \ddot q_s^m = a_s + B_m^+(b_m - B_m a_s) =: u_s$.   (7.90)
In the above, $\ddot q_s^m$ is the scaled constraint acceleration and $u_s$ is the scaled acceleration of the uncontrolled system.
We wish to control the system so that the controlled system described by
$M\ddot q = Q + Q^m + Q^C$   (7.91)
satisfies the trajectory requirement (control objective)
$\varphi_c := x_1 + \dfrac{y_1}{2} + \dfrac{z_1^2}{4} = 0$.   (7.92)
Since our initial conditions may not lie on this trajectory, the control objective is modified to [17]
$\ddot\varphi_c + \alpha\,\dot\varphi_c + \beta\,\varphi_c = 0$,   (7.93)
where $\alpha, \beta > 0$ are constants. It should be noted that even in cases where the system starts on the manifold $\varphi_c = 0$, using the modified constraint in Eq. (7.93) improves the computational stability.
On simplifying Eq. (7.93), the control constraint is obtained in the form
$A_c\ddot q = b_c$,   (7.94)
where
$A_c = \left[1,\ \dfrac{1}{2},\ \dfrac{z_1}{2},\ 0,\ 0,\ 0\right]$, $\qquad b_c = -\dfrac{\dot z_1^2}{2} - \alpha\,\dot\varphi_c - \beta\,\varphi_c$.   (7.95)
The equation of motion of the controlled system in terms of the scaled accelerations is
$\ddot q_s = a_s + \ddot q_s^m + \ddot q_s^c$.   (7.96)
We choose the weighting matrix $W = M^{-1}$, so that the control cost minimized at each instant of time is $J_c = (Q^C)^T M^{-1} Q^C$. Thus $S = M^{-1/2}W^{-1/2} = I$, and the scaled control acceleration $\ddot q_s^c$ is then obtained using Eqs. (7.64) and (7.63) as
$\ddot q_s^c = \big[B_c(I - B_m^+B_m)\big]^+(b_c - B_c u_s) = B_{cm}^+\,(b_c - B_c u_s)$.   (7.97)
Also, the scaled constraint acceleration $\ddot q_s^m$ is given by
$\ddot q_s^m = B_m^+\big[\,b_m - B_m(a_s + \ddot q_s^c)\,\big]$.   (7.98)
Thus the control force and the constraint force are explicitly obtained, respectively, as
$Q^C = M^{1/2}\ddot q_s^c = M^{1/2}B_{cm}^+\,(b_c - B_c u_s)$,   (7.99)
and
$Q^m = M^{1/2}\ddot q_s^m = M^{1/2}B_m^+\big[\,b_m - B_m(a_s + \ddot q_s^c)\,\big] = M^{1/2}B_m^+\big[\,b_m - B_m\{a_s + B_{cm}^+(b_c - B_c u_s)\}\,\big]$,   (7.100)
where $u_s$ is given in Eq. (7.90). For the numerical simulation, we choose the parameter values
$m_1 = 1,\quad m_2 = 2,\quad l = 1,\quad \alpha = 10,\quad \beta = 2,\quad g = 9.81$,   (7.101)
and the initial conditions
$q(0) = [0, 0, 0, 2, 0, 0]^T$, $\qquad \dot q(0) = [0, 0, 0, 0, 0, 0]^T$.   (7.102)
The equation of motion of the controlled system given by Eq. (7.91) has been numerically integrated for 5 seconds using ode45 on the MATLAB platform with a relative error tolerance of $10^{-8}$ and an absolute error tolerance of $10^{-12}$. The simulation results are presented in Figures 4-6.
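Because the control and constraint forces are available in closed form, the simulation is straightforward to set up. The MATLAB sketch below is one possible implementation of this example, written against the reconstructed expressions (7.86)-(7.99); the function names are our own, gravity is taken along the negative z-axis, and the state is stacked as $y = [q;\dot q]$.

    function dumbbell_example
      m1 = 1; m2 = 2; g = 9.81; alpha = 10; beta = 2;
      M  = diag([m1 m1 m1 m2 m2 m2]);
      y0 = [0;0;0;2;0;0; zeros(6,1)];                 % [q(0); qdot(0)], Eq. (7.102)
      opts  = odeset('RelTol',1e-8,'AbsTol',1e-12);
      [t,y] = ode45(@(t,y) rhs(t,y,M,g,alpha,beta), [0 5], y0, opts);
      plot(t, y(:,1:6));                              % response, cf. Figure 4
    end

    function dy = rhs(~,y,M,g,alpha,beta)
      q = y(1:6);  qd = y(7:12);
      Q   = [0;0;-M(1,1)*g; 0;0;-M(4,4)*g];           % gravity along -z
      d   = q(1:3)-q(4:6);   dd = qd(1:3)-qd(4:6);
      Am  = [d.', -d.'];     bm = -(dd.'*dd);         % Eqs. (7.86)-(7.87)
      phi = q(1) + q(2)/2 + q(3)^2/4;                 % Eq. (7.92)
      phid= qd(1) + qd(2)/2 + q(3)*qd(3)/2;
      Ac  = [1, 1/2, q(3)/2, 0, 0, 0];                % Eq. (7.95)
      bc  = -qd(3)^2/2 - alpha*phid - beta*phi;
      Mih = M^(-1/2);
      as  = Mih*Q;  Bm = Am*Mih;  Bc = Ac*Mih;
      P   = eye(6) - pinv(Bm)*Bm;
      us  = P*as + pinv(Bm)*bm;                       % Eq. (7.90)
      qsc = pinv(Bc*P)*(bc - Bc*us);                  % Eq. (7.97), W = M^{-1}
      qsm = pinv(Bm)*(bm - Bm*(as + qsc));            % Eq. (7.98)
      dy  = [qd; Mih*(as + qsm + qsc)];               % qddot = M^{-1/2}*(scaled accel.)
    end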
Figure 4. (a) Time history of the response, first three degrees of freedom ($x_1, y_1, z_1$). (b) Time history of the response, last three degrees of freedom ($x_2, y_2, z_2$).
Figure 4 shows the time history of the response of the system, and Figure 5 shows the control force computed using Eq. (7.99). These control forces minimize the Gaussian at each instant of time. As the constraints are consistent in this case, they also ensure that the controlled system satisfies both the control and modeling constraints exactly.
Figure 5. (a) Time history of the first three components of the control force ($Q^C_1, Q^C_2, Q^C_3$). (b) Time history of the last three components of the control force ($Q^C_4, Q^C_5, Q^C_6$).
Figure 6. (a) Error in satisfying the modeling constraint ($e_m$) as a function of time. (b) Error in satisfying the control constraint ($e_c$) as a function of time.
Figure 6(a) shows the error in satisfying the modeling constraint, $e_m$, defined as
$e_m := A_m\ddot q - b_m = B_m\ddot q_s - b_m$.   (7.103)
Similarly, Figure 6(b) shows the variation with time of the error in satisfying the control constraint, $e_c$,
$e_c := A_c\ddot q - b_c = B_c\ddot q_s - b_c$.   (7.104)
As seen from the figure, these errors are of $O(10^{-13})$ and are comparable to the error tolerances used in the numerical integration.
Example 2:
Consider a system consisting of a hollow cylinder of density $\rho_c$ with length $h$, external radius $R$, and thickness $a$, as shown in Figure 7(a). A rigid rod of mass density $\rho_r$ and radius $r$ connects the centers of the two circular end-faces of the cylinder, which have thickness $a$. A chain of $n$ point masses $m_i$, $i = 1,2,\ldots,n$, in which each mass is connected to its nearest neighbor by nonlinear springs, slides without friction along this central rod (see Fig. 7(b)).
This example has real-life applications in the modeling and control of a composite spacecraft system wherein the three-dimensional tumbling of the spacecraft and its internal composite parts (modelled here by point masses) are both required to be controlled with high precision. The aim is to model and control the 3-D motion of the system as it tumbles so that the cylinder has a desired constant angular velocity and the masses stay at prespecified desired distances along the rod.
To simplify the modeling, the system is considered initially as a composite system consisting of: (i) the cylinder and the central rod (hereon called the cylinder-rod), and (ii) the $n$ point masses (that slide along the rod). The coordinates of the center of mass of the cylinder-rod are $(X_c, Y_c, Z_c)$ in an inertial frame of reference. The positions of the masses along the rod measured from the center of mass of the cylinder-rod are denoted by $w_i$, $i = 1,2,\ldots,n$. As shown in Fig. 7(b), the spring $k_{i+1}$ connecting mass $m_i$ to mass $m_{i+1}$ consists of a linear elastic spring with stiffness $k_{i+1}^l$ in parallel with a cubically nonlinear elastic spring with stiffness $k_{i+1}^n$. We assume that the equilibrium positions of the masses are given by $w_i^e$, $i = 1,2,\ldots,n$ (as measured from the center of mass of the cylinder-rod system). The potential energy stored in these springs is given in Eq. (7.118).
Figure 7. (a) The composite cylinder-rod system considered in Example 2. The inertial coordinates X, Y, and Z are shown. The body-fixed x-axis is along the rod AB. (b) Close-up view of the rigid central rod connecting the two end faces of the cylinder showing the point masses connected with non-linear spring elements.
The rigid cylinder-rod system has six degrees of freedom, and each point mass has one degree of freedom. Thus a total of $n+6$ coordinates are required to describe the configuration of the system. However, in what follows an additional coordinate will be used to describe the system's configuration, and its rotational motion will be described by four quaternions. We fix a right-handed orthogonal coordinate frame to the cylinder-rod whose origin is located at the center of mass of the cylinder-rod and whose x-axis is along the rod.
The mass of the cylinder, $M_c$, and the mass of the rod, $M_r$, are given as
$M_c = \rho_c\big[\pi R^2 h - \pi(R-a)^2(h-2a)\big]$,   (7.105)
$M_r = \rho_r\,\pi r^2 (h - 2a)$.   (7.106)
The body-fixed coordinate system has its x-axis along the line AB, and two orthogonal axes y and z perpendicular to AB. The mass moments of inertia about these body-fixed x, y, z axes for the cylinder are, respectively,
$J_{c,x} = \dfrac{1}{2}\rho_c\big[\pi R^4 h - \pi(R-a)^4(h-2a)\big]$,
$J_{c,y} = J_{c,z} = \dfrac{1}{4}\rho_c\big[\pi R^4 h - \pi(R-a)^4(h-2a)\big] + \dfrac{1}{12}\rho_c\big[\pi R^2 h^3 - \pi(R-a)^2(h-2a)^3\big]$.   (7.107)
Similarly, the mass moments of inertia of the rod are
$J_{r,x} = \dfrac{1}{2}\rho_r\,\pi r^4(h-2a)$,
$J_{r,y} = J_{r,z} = \dfrac{1}{4}\rho_r\,\pi r^4(h-2a) + \dfrac{1}{12}\rho_r\,\pi r^2(h-2a)^3$.   (7.108)
The rotational displacement of the system is described by the unit quaternion $u = [u_0, u_1, u_2, u_3]^T$. Denoting $\bar u = [u_1, u_2, u_3]^T$, the active rotation matrix that gives the transformation from the body coordinates to the inertial coordinates is given by
$S(u) = (2u_0^2 - 1)\,I_3 + 2u_0\,\tilde u + 2\,\bar u\bar u^T$,   (7.109)
where the skew-symmetric matrix $\tilde u$ is defined as
$\tilde u := \begin{bmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0\end{bmatrix}$.   (7.110)
The angular velocity of the cylinder-rod is given by
$\omega = H\dot u$,   (7.111)
where the $H$ matrix is defined as
$H := 2\,\big[-\bar u,\ \ u_0 I_3 - \tilde u\,\big]$.   (7.112)
Thus, the rotational kinetic energy of the cylinder-rod system is given as
$T_R = \dfrac{1}{2}\,\omega^T J_T\,\omega = \dfrac{1}{2}\,\dot u^T H^T J_T H\,\dot u$.   (7.113)
In the above equation, the total mass moment of inertia matrix $J_T$ is obtained by summing the mass moments of inertia of the cylinder and the rod,
$J_T = \mathrm{diag}\big(J_{c,x}+J_{r,x},\ J_{c,y}+J_{r,y},\ J_{c,z}+J_{r,z}\big)$.   (7.114)
The positions of the point masses in the body-fixed coordinates are $(w_i, 0, 0)$, $i = 1,2,\ldots,n$. The coordinates of each mass in the inertial coordinate system are denoted by $p_i = [p_{i1}, p_{i2}, p_{i3}]^T$. These are computed, using the active rotation matrix $S(u)$, as
$p_i = [p_{i1}, p_{i2}, p_{i3}]^T = [X_c, Y_c, Z_c]^T + S(u)\,[w_i, 0, 0]^T = [X_c, Y_c, Z_c]^T + w_i\,S_1(u)$,   (7.115)
where $S_1(u)$ is the first column of the matrix $S(u)$. It is given as
$S_1(u) = \big[\,2u_0^2 + 2u_1^2 - 1,\ \ 2u_1u_2 + 2u_0u_3,\ \ 2u_1u_3 - 2u_0u_2\,\big]^T$.   (7.116)
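The quaternion quantities in Eqs. (7.109)-(7.116) are easy to spot-check numerically. The MATLAB sketch below uses an arbitrary unit quaternion and quaternion rate (our own placeholder values) and verifies that $S(u)$ is a rotation, that its first column agrees with Eq. (7.116), and that $\dot H\dot u = 0$, the identity used later in obtaining Eq. (7.129).

    u  = [0.8; 0.2; -0.4; 0.4];  u = u/norm(u);     % a unit quaternion (hypothetical)
    ud = [0.1; -0.3; 0.2; 0.05];                    % an arbitrary quaternion rate

    ut = [0 -u(4) u(3); u(4) 0 -u(2); -u(3) u(2) 0];        % Eq. (7.110)
    S  = (2*u(1)^2 - 1)*eye(3) + 2*u(1)*ut + 2*(u(2:4)*u(2:4).');  % Eq. (7.109)
    S1 = [2*u(1)^2 + 2*u(2)^2 - 1;
          2*u(2)*u(3) + 2*u(1)*u(4);
          2*u(2)*u(4) - 2*u(1)*u(3)];               % Eq. (7.116)
    H  = 2*[-u(2:4), u(1)*eye(3) - ut];             % Eq. (7.112)
    omega = H*ud;                                   % body-frame angular velocity, Eq. (7.111)

    disp(norm(S.'*S - eye(3)));                     % S is a rotation matrix (~1e-16)
    disp(norm(S(:,1) - S1));                        % first column matches Eq. (7.116)
    udt  = [0 -ud(4) ud(3); ud(4) 0 -ud(2); -ud(3) ud(2) 0];
    Hdot = 2*[-ud(2:4), ud(1)*eye(3) - udt];
    disp(norm(Hdot*ud));                            % Hdot*udot = 0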
The translational kinetic energy of the entire system is now obtained as
$T_T = \dfrac{1}{2}(M_c + M_r)\big(\dot X_c^2 + \dot Y_c^2 + \dot Z_c^2\big) + \dfrac{1}{2}\sum_{i=1}^{n} m_i\,\dot p_i^T\dot p_i$.   (7.117)
The potential energy of the system is
$U = (M_c + M_r)\,g\,Z_c + \dfrac{1}{2}k_1^l(w_1 - w_1^e)^2 + \dfrac{1}{4}k_1^n(w_1 - w_1^e)^4 + \sum_{i=1}^{n} m_i\,g\,p_{i3} + \sum_{i=1}^{n-1}\Big[\dfrac{1}{2}k_{i+1}^l\big((w_{i+1} - w_i) - (w_{i+1}^e - w_i^e)\big)^2 + \dfrac{1}{4}k_{i+1}^n\big((w_{i+1} - w_i) - (w_{i+1}^e - w_i^e)\big)^4\Big]$.   (7.118)
Thus, the Lagrangian for the system can be formulated as
$L = T_R + T_T - U$.   (7.119)
The generalized degrees of freedom of the system are $n+7$ in number, namely,
$q := [X_c, Y_c, Z_c, u_0, u_1, u_2, u_3, w_1, w_2, \ldots, w_n]^T$.   (7.120)
Thus we obtain $n+7$ equations describing the motion of the system, along with one modeling constraint which enforces the unit-quaternion requirement. The equations of motion are
$\dfrac{d}{dt}\left(\dfrac{\partial L}{\partial \dot q_i}\right) - \dfrac{\partial L}{\partial q_i} = 0, \qquad i = 1,2,\ldots,n+7$.   (7.121)
These equations can then be expressed in the standard form
$M(q,t)\,\ddot q = Q(q,\dot q,t)$,   (7.122)
where $M$ is the generalized mass matrix and $Q$ is the generalized force vector. The scaled unconstrained acceleration is then obtained as
$a_s = M^{-1/2}Q$.   (7.123)
The modeling constraint is simply given by the relation
$\varphi_m := u^T u - 1 = 0$.   (7.124)
By differentiating this equation twice with respect to time, we obtain the modelling constraint equation in the form we desire,
$A_m(q,\dot q,t)\,\ddot q = b_m(q,\dot q,t)$; $\qquad A_m := \big[0_3^T,\ u^T,\ 0_n^T\big]$, $\quad b_m := -\dot u^T\dot u$,   (7.125)
where $0_n$ represents an n-vector whose elements are all zero. In the presence of this modeling constraint, the scaled uncontrolled acceleration is given as
$u_s = a_s + B_m^+(b_m - B_m a_s)$.   (7.126)
Equations (7.121)-(7.122) are algebraically tedious to compute, and they are obtained here symbolically by using Maple.
Control Requirements: We would like to control this system so that the cylinder-rod has a desired constant angular velocity and the point masses move in a desired harmonic manner. In other words,
(i) the angular velocity 3-vector of the cylinder-rod $\omega \to \omega_d$, and
(ii) the masses execute harmonic motions along the rod described by $w_i \to w_{i_d} := w_i^e + d_i\cos(\omega_i t)$, $i = 1,2,\ldots,n$.
Here, $\omega_d$ is a 3-vector which contains the desired constant angular velocity of the cylinder-rod about the three body-fixed x, y, and z axes. The desired harmonic motion of the i-th point mass $m_i$ along the rod is described by its amplitude $d_i$ and its frequency $\omega_i$.
We achieve these control objectives by simply imposing the following control constraints:
$\dot\omega + \alpha\,(\omega - \omega_d) = 0, \qquad \alpha > 0$,   (7.127)
and
$\ddot w_i + \alpha_1\,(\dot w_i - \dot w_{i_d}) + \beta_1\,(w_i - w_{i_d}) = -d_i\,\omega_i^2\cos(\omega_i t), \qquad i = 1,\ldots,n, \quad \alpha_1,\ \beta_1 > 0$,   (7.128)
where the angular acceleration vector is given as
$\dot\omega = H\ddot u$.   (7.129)
In the above relations we use the stabilization parameters $\alpha$, $\alpha_1$, and $\beta_1$, since the system does not start out satisfying the desired control requirements [17].
In obtaining Eq. (7.129), we have used the identity $\dot H\dot u = 0$. Equations (7.127)-(7.129) can be put together in the form
$A_c(q,\dot q,t)\,\ddot q = b_c(q,\dot q,t)$.   (7.130)
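The assembly of $A_c$ and $b_c$ from Eqs. (7.127)-(7.129) can be sketched in a few lines of MATLAB. The helper below is our own (its name, argument list, and ordering conventions are assumptions, not code from the thesis); it uses the coordinate ordering $q = [X_c, Y_c, Z_c, u_0,\ldots,u_3, w_1,\ldots,w_n]^T$ of Eq. (7.120).

    function [Ac, bc] = controlConstraints(t, u, udot, w, wdot, we, d, Om, ...
                                           omega_d, alpha, alpha1, beta1)
      % Hypothetical helper assembling A_c*qddot = b_c for Eqs. (7.127)-(7.128).
      n  = numel(w);
      ut = [0 -u(4) u(3); u(4) 0 -u(2); -u(3) u(2) 0];   % Eq. (7.110)
      H  = 2*[-u(2:4), u(1)*eye(3) - ut];                % Eq. (7.112)
      omega = H*udot;                                    % Eq. (7.111)

      % Eq. (7.127), with omegadot = H*uddot (identity Hdot*udot = 0):
      Ac1 = [zeros(3,3), H, zeros(3,n)];
      bc1 = -alpha*(omega - omega_d);

      % Eq. (7.128), one row per point mass:
      wid    = we(:) + d(:).*cos(Om(:)*t);               % desired w_i(t)
      widot  = -d(:).*Om(:).*sin(Om(:)*t);
      widdot = -d(:).*Om(:).^2.*cos(Om(:)*t);
      Ac2 = [zeros(n,7), eye(n)];
      bc2 = widdot - alpha1*(wdot(:) - widot) - beta1*(w(:) - wid);

      Ac = [Ac1; Ac2];   bc = [bc1; bc2];                % Eq. (7.130)
    end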
The controlled system is then described by the equation
$M\ddot q = Q + Q^m + Q^C$,   (7.131)
in which $Q^m$ and $Q^C$ are explicitly obtained in closed form in Section 7.5.
To demonstrate the approach for a general cost function other than the Gaussian, we choose the weighting matrix $W = I_n$. Hence the constraint force $Q^m$ and the control force $Q^C$ are obtained using Eqs. (7.47) and (7.46) for the general case.
Numerical Results: For the purposes of numerical simulation, we consider two point masses, $m_1$ and $m_2$, that slide along the central rod (see Figure 7). The various parameters (in consistent units) describing the cylinder-rod, the point masses, and the springs are chosen as
$\rho_c = 7\times 10^3,\ \ \rho_r = 7\times 10^3,\ \ R = 4,\ \ r = 0.1,\ \ a = 0.02,\ \ h = 12,\ \ m_1 = m_2 = 4.1\times 10^4,\ \ g = 9.81$,   (7.132)
$k_1^l = 1\times 10^5,\ \ k_2^l = 1.5\times 10^5,\ \ k_3^l = 1.2\times 10^5,\ \ k_1^n = 150,\ \ k_2^n = 10^3,\ \ k_3^n = 300,\ \ w_1^e = 1,\ \ w_2^e = -1$,   (7.133)
and the initial conditions are chosen as
$q(0) = [0, 0, 0, 1, 0, 0, 0, 2, 4.5]^T$, $\qquad \dot q(0) = [10, 0, 20, 0, 0, 0.5, 0, 0, 0]^T$.   (7.134)
With these parameter values, $m_1 + m_2 \approx 0.7\,M_c$, and the tumbling dynamics is considerably affected by the motion of the point masses along the rod. For example, as seen in Fig. 8, the angular velocity of the uncontrolled system about the body-fixed y-axis ($\omega_2$) exhibits oscillations, whereas if the system were to act as a rigid body it should stay constant at the initial value, $\omega_2(t) = \omega_2(0) = 1$.
Figure 8. Time history of the angular velocity of the uncontrolled system about the body-fixed y-axis.
The parameters describing the control requirements are chosen as
$d_1 = 1,\ \ \omega_1 = 1,\ \ d_2 = 2,\ \ \omega_2 = 0,\ \ \omega_d = [0.5,\ 2,\ 0.1]^T,\ \ \alpha = 10,\ \ \alpha_1 = 10,\ \ \beta_1 = 30$.   (7.135)
These parameter values describe the following control requirements (see Figure 7).
(i) The cylinder-rod system is required to tumble about its three body-fixed axes and reach a constant angular velocity (asymptotically) given by the 3-vector $\omega_d = [0.5, 2, 0.1]^T$. Its spin angular velocity (about its body-fixed x-axis, along AB) is required to be 0.5 rad/s, its angular velocity about its body-fixed y-axis is required to be 2 rad/s, and that about its z-axis is required to be 0.1 rad/s.
(ii) The mass $m_1$ is required to move (asymptotically) along the rod AB of the cylinder-rod system with a harmonic motion of amplitude unity and period $2\pi$, since $d_1 = 1$ and $\omega_1 = 1$. This amplitude is measured from the center of mass of the cylinder-rod.
(iii) The mass $m_2$ is required to come to rest (asymptotically) along the rod, since $d_2 = 2$ and $\omega_2 = 0$.
(iv) The control torques and forces are to be found so that the generalized control vector $Q^C$ minimizes the control cost $J(t) = (Q^C)^T Q^C$ at each instant of time.
(v) The modelling constraint, $\varphi_m := u^T u - 1 = 0$, is required to be satisfied at all times.
Below we demonstrate (a) the motion of the cylinder-rod system, (b) the motions, $w_1$ and $w_2$, of the two point masses along the rod AB, and (c) the corresponding generalized control forces that are required to accomplish the control requirements stated above.
Figure 9. Time history of the coordinates of the center of mass (CM) of the cylinder-rod in the inertial frame of reference.
The 3-D tumbling motion of the cylinder-rod system is very complex, and its interaction with the point masses that move along the rod AB is described by highly nonlinear equations. As stated before, no simplifications such as linearizations or approximations of the nonlinear dynamical system are made whatsoever. The generalized control forces are obtained in closed form.
The equation of motion of the controlled system given in Eq. (7.131) is numerically integrated using ode15s on the MATLAB platform with a relative error tolerance of $10^{-10}$ and an absolute error tolerance of $10^{-13}$. The results of the simulation are shown in Figures 9-15.
Figure 10. With control: (a) Time history of the position, $w_1$, and velocity, $\dot w_1$, of the first mass, $m_1$, relative to the CM of the cylinder-rod. (b) Time history of the position and velocity of the second mass, $m_2$, relative to the CM of the cylinder-rod.
Figure 11. Without control: (a) Time history of the position, $w_1$, and velocity, $\dot w_1$, of the first mass, $m_1$, relative to the CM of the cylinder-rod. (b) Time history of the position and velocity of the second mass, $m_2$, relative to the CM of the cylinder-rod.
Figure 12. (a) Time history of the error in tracking the desired positions of the point masses. (b) Time history of the error in tracking the desired angular velocity of the cylinder-rod.
Figure 9 shows the time history of the coordinates of the center of mass (CM) of the cylinder-rod. Figure 10(a) shows the time histories of the position and the velocity of mass $m_1$ relative to the center of mass of the cylinder-rod. This can be compared with the similar plot for the uncontrolled system shown in Fig. 11(a). The error in tracking the desired position can be seen in Figure 12(a). At the end of 15 seconds, the magnitude of this error, $w_1 - w_{1_d}$, in tracking the position is of $O(10^{-10})$. Similarly, Figure 10(b) shows the time histories of the position and velocity of the point mass $m_2$ relative to the center of mass of the cylinder-rod. It can be observed that the control objective has been achieved, and the second point mass comes to rest at its desired position. Figure 11(b) shows the similar plot for the uncontrolled system. Again, the error in tracking the desired position is shown in Fig. 12(a) and is of $O(10^{-14})$ at the end of 15 seconds.
Figure 12(b) shows the error in tracking the desired angular velocity 3-vector,
$e_\omega = \omega - \omega_d$,   (7.136)
as a function of time. The magnitude of the error at the end of 15 seconds is of $O(10^{-10})$. Figure 13 shows the time history of the error in satisfying the unit-quaternion constraint $\varphi_m$ defined in Eq. (7.124). Its magnitude is of $O(10^{-10})$.
Figure 13. Time history of the error in satisfying the quaternion constraint $\varphi_m = 0$ (see Eq. (7.124)).
Figure 14(a) shows the time history of the three components of the control force on the cylinder-rod. Figure 14(b) shows the time history of the control torques on the cylinder-rod, which can be computed as [63]
$\Gamma = \dfrac{1}{2}\,H\,\Gamma^u$,   (7.137)
where $\Gamma^u$ is the generalized control force in the quaternions (the vector consisting of the 4th to 7th components of $Q^C$). Figure 15 shows the corresponding control forces on the point masses that act along the rod. The control force on mass $m_1$ is denoted in the figure by $Q^C_8$ and that on mass $m_2$ by $Q^C_9$.
Figure 14. (a) Components of the control forces on the cylinder-rod along the three inertial-frame coordinates. (b) Control torques on the cylinder-rod (see Eq. (7.137)) about its body-fixed axes.
Figure 15. Control forces on the point masses acting along the rod AB.
7.8 Conclusions
A unified approach has been proposed to obtain the generalized control force as well as the constraint force for a mechanical system when control objectives (requirements) are prescribed and a user-specified control cost is to be minimized. The control objectives are cast as constraints that are imposed on the system. The approach ensures that the modeling constraints, which pertain to the proper description of the physical mechanical system, are always satisfied.
The proposed approach obtains the generalized control force such that:
1. the modeling requirements (constraints) are always exactly satisfied, irrespective of the control objectives (constraints) desired;
2. when the control requirements and the modeling requirements are consistent with each other, both the control and the modeling requirements are exactly satisfied;
3. if the control requirements are not consistent with the modeling requirements, then the control requirements are satisfied to the extent possible and the $L_2$ norm of the error in their satisfaction is minimized;
4. the generalized control force obtained always minimizes a user-specified control cost at each instant of time.
The approach does not involve simplifying assumptions such as linearizations and/or approximations of the dynamical system or the constraints. No a priori structure is imposed on the controller. A geometric explanation of the control methodology has been provided to enhance its understanding.
Two numerical examples are considered. The first example deals with a simple two-mass dumbbell and has pedagogical significance. The second is more substantive and has real-life applications. It deals with the three-dimensional motion of a cylinder in which point masses connected by nonlinear springs are permitted to move along a rod along its center-line. Accurate three-dimensional tumbling control of the cylinder is demonstrated, together with simultaneous desired control of the different motions of the point masses. This second example has real-life applications in the modeling and tumbling control of composite spacecraft in which the motions of their internal composite parts, modelled here by point masses, are also to be simultaneously controlled along with their three-dimensional tumbling behavior. These examples demonstrate the great simplicity, ease, and high accuracy with which the closed-form generalized control forces obtained in this chapter satisfy the control requirements.
Chapter 8. Conclusions
This chapter summarizes the work contributed by this thesis. It also discusses the questions
which are not concretely answered by the current thesis and where there is scope for further
research.
8.1 Conclusions
Before we present the future work, let us briefly review the work that has been accomplished so
far.
1. A general stable control methodology for a general nonlinear nonautonomous mechanical
system is proposed. The approach takes a user defined positive definite function to synthesize a
Lyapunov constraint and ensures stability by enforcing it. It simultaneously minimizes a user-
defined control cost at each instant of time. The method does not impose any structure (such as
linearization) on the nonlinear equations or the controller.
2. The extension of this approach to a general dynamical system described by first order
differential equations is provided. Consistency conditions which are necessary for the control
methodology to work are formalized. Numerical examples are provided that illustrate the use of
consistency to check whether a positive definite function pair (V, w) can be used to obtain
control or not. It is shown that when applied to linear systems, the current control methodology is
capable of reproducing the classical LQR control.
3. A control strategy is proposed for the case when the maximum allowable control force is required to be less than a user-prescribed limit. Conditions under which this strategy ensures stability of the controlled system are given, and the minimum value of the limit on the control force that ensures stability is obtained.
4. A methodology to achieve decentralized control of a coupled mechanical system is proposed.
The control is developed in two steps. In the first step, a ‘nominal system’ is defined which is our
best estimate of the actual system. The nominal system is controlled using the stable controller
designed earlier. In the second step, an additional controller is obtained using the concept of
generalized sliding surfaces that ensures that the actual system tracks the trajectories of the
nominal system to within prescribed bounds and hence ensures the stability of the entire coupled,
conglomerate system. Numerical examples are provided that demonstrate the efficacy of the
method.
5. A new approach to obtaining Lagrange's equations of motion for a multi-body planar pendulum system using mathematical induction has been proposed. A stable control that can drive the system from any initial starting point to any desired final configuration is developed. The case in which the parameters of the pendulum system are not known accurately is also considered. In this case, a nominal system is defined using our best estimate of the parameters, and a nominal control is designed. The uncertainties are assumed to be norm bounded, and an additional controller is obtained that ensures that the actual pendulum tracks the trajectories of the nominal system to within user-defined tolerances. Stable control for swinging up an uncertainly known (that is, imprecisely described) 10-body pendulum from any initial state to a variety of final so-called 'inverted positions' is demonstrated.
6. A new control strategy is proposed for mechanical systems that are modeled with physical
constraints. When the control constraints are not consistent with the modeling constraints, this
approach satisfies the control constraints in the least-squares sense without violating the modeling
constraints. It has been shown that this method finds the correct control force when the two sets
of constraints are consistent with each other. It has also been shown that if the objective
function minimized by the control force is the Gaussian, then the final equation of motion given
by the proposed approach coincides with that found using the fundamental equation of motion.
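The following minimal sketch, written in Python/NumPy purely for illustration and not taken from the thesis, shows the constraint-based viewpoint referred to in items 1 and 2 above: for a linear system with a quadratic Lyapunov function V(x) = x'Px and a prescribed decay rate w(x) = x'Wx, the requirement dV/dt = -w(x) is a single constraint that is linear in the control, and a minimum-effort control enforcing it follows from a Moore-Penrose inverse. The system matrices, the choices of P and W, and the assumption of full actuation are made only to keep the sketch self-contained.

    import numpy as np

    # Illustrative sketch only (not the thesis's closed-form controller).
    # Linear system dx/dt = A x + B u, Lyapunov function V(x) = x'P x,
    # prescribed decay w(x) = x'W x.  Enforcing dV/dt = -w(x) is the single
    # scalar constraint (2 x'P B) u = -x'(A'P + P A + W) x, which is linear
    # in u; the minimum-norm u enforcing it uses the Moore-Penrose inverse.

    A = np.array([[0.0, 1.0], [2.0, -1.0]])   # an open-loop unstable example
    B = np.eye(2)                             # fully actuated, for simplicity
    P = np.eye(2)                             # user-chosen positive definite V
    W = np.eye(2)                             # user-chosen positive definite w

    def control(x):
        a = (2.0 * x @ P @ B).reshape(1, -1)      # constraint row (1 x m)
        b = -x @ (A.T @ P + P @ A + W) @ x        # scalar right-hand side
        return np.linalg.pinv(a) @ np.array([b])  # minimum-norm solution

    x, dt = np.array([1.0, -0.5]), 1e-3           # crude Euler check that V decays
    for _ in range(10000):
        x = x + dt * (A @ x + B @ control(x))
    print("||x|| after 10 s:", np.linalg.norm(x))  # decays roughly like e^(-t/2)

When there are fewer actuators than states, the constraint row 2x'PB can vanish at states where the right-hand side does not; this is precisely the consistency issue between (V, w) pairs noted in item 2 and revisited in the list of open questions below.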
8.2 Scope for further work
The approach to control design taken in this thesis is quite new, and hence it opens up many
questions. While every effort has been made in this thesis to answer the important questions
related to modeling and control design, several remain unanswered and require further
exploration. Some of the issues that need further consideration are mentioned below.
1. While a primitive approach to the control of underactuated systems has been mentioned in
Chapter 3, it requires that the constraints be consistent. It is not always easy to find Lyapunov
functions that ensure consistency, so further work is needed to determine whether systematic
ways of constructing them exist. That is, are there general methods to determine (V, w) pairs
that are consistent for mechanical systems? For general dynamical systems?
2. When the maximum control force available is limited, we have proposed in Chapter 4 that the
control constraints be satisfied to the extent allowed by the available resources. One could ask
whether this greedy strategy of doing the best at every instant of time is indeed the best strategy,
because once we deviate from the stable trajectory, nothing is known about whether the
constraints can remain consistent in the future.
3. In Chapter 5, the decentralized control we have proposed relies on estimating the discrepancy
between the nominal and the actual systems. Although we have shown numerically that
overestimating this discrepancy does not have any effect on the magnitude of the control force,
this has not been shown analytically. It should also be investigated whether there are methods
that can generate these estimates on the fly, or adaptively, while the system is being controlled.
4. In Chapter 7, we proposed that the control constraints be best satisfied while not violating the
physical constraints. It is again questionable whether this is the best strategy in all cases, since it
is not known whether the constraints will be consistent in the future. In this context, a method to
predict whether the constraints will be consistent in the future would be very helpful.
Appendix A
Properties of Moore-Penrose Inverse
The Moore-Penrose (MP for short) inverse of a matrix $A \in \mathbb{R}^{m \times n}$ is usually denoted by $A^{+}$ ($A^{+} \in \mathbb{R}^{n \times m}$). Every matrix $A$ has a unique MP inverse, which satisfies the following properties:

1. $(A A^{+})^{T} = A A^{+}$, (A.1)

2. $(A^{+} A)^{T} = A^{+} A$, (A.2)

3. $A A^{+} A = A$, and (A.3)

4. $A^{+} A A^{+} = A^{+}$. (A.4)

Using the properties above, we can easily demonstrate the following properties:

(i) $A \, (I - A^{+} A) = 0$, (A.5)

(ii) $(I - A^{+} A) \, A^{+} = 0$, (A.6)

(iii) $(I - A A^{+})(I - A A^{+}) = (I - A A^{+})$, and (A.7)

(iv) $(I - A^{+} A)(I - A^{+} A) = (I - A^{+} A)$. (A.8)

Proof: The first relation above can be verified by expanding the left-hand side as $A \, (I - A^{+} A) = A - A A^{+} A = A - A = 0$. Similarly, the second property is verified as $(I - A^{+} A) \, A^{+} = A^{+} - A^{+} A A^{+} = A^{+} - A^{+} = 0$. The third relation can be verified as $(I - A A^{+})(I - A A^{+}) = I - 2 A A^{+} + A A^{+} A A^{+} = I - 2 A A^{+} + A A^{+} = I - A A^{+}$. The fourth equation can also be similarly verified.
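As a quick numerical sanity check (not part of the thesis; a small Python/NumPy sketch with names chosen here), the Penrose conditions (A.1)-(A.4) and the derived identities (A.5)-(A.8) can be verified for a generic rectangular matrix using numpy.linalg.pinv:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 6))     # a generic 4 x 6 matrix
    Ap = np.linalg.pinv(A)              # its Moore-Penrose inverse A^+
    I_m, I_n = np.eye(4), np.eye(6)
    same = lambda X, Y: np.allclose(X, Y)

    # Penrose conditions (A.1)-(A.4)
    print(same((A @ Ap).T, A @ Ap), same((Ap @ A).T, Ap @ A))
    print(same(A @ Ap @ A, A), same(Ap @ A @ Ap, Ap))

    # Derived identities (A.5)-(A.8)
    print(same(A @ (I_n - Ap @ A), np.zeros((4, 6))))           # A (I - A^+ A) = 0
    print(same((I_n - Ap @ A) @ Ap, np.zeros((6, 4))))          # (I - A^+ A) A^+ = 0
    print(same((I_m - A @ Ap) @ (I_m - A @ Ap), I_m - A @ Ap))  # (I - A A^+) idempotent
    print(same((I_n - Ap @ A) @ (I_n - Ap @ A), I_n - Ap @ A))  # (I - A^+ A) idempotent

Every check prints True for any matrix A, since the MP inverse satisfies the four Penrose conditions by construction.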
Appendix B
Result: The compensating control force given in Eq. (5.31) ensures that the dynamics of the i-th controlled actual subsystem asymptotically converge to a region $\Omega^{(i)}$ which can be made as close to the sliding surface $s^{(i)} = 0$ as we desire. The region $\Omega^{(i)}$ is so defined that the functions $f^{(i)}(s^{(i)})$ defined in Eq. (5.31) satisfy $\| f^{(i)}(s^{(i)}) \| \le 1$ inside $\Omega^{(i)}$.

Proof: Let us first differentiate the tracking error in Eq. (5.29) twice with respect to time, to get

$\ddot{e}^{(i)} = \ddot{x}^{(i)} - \ddot{x}_n^{(i)}$. (B.1)

Using the equations of motion of the controlled nominal system and the controlled actual system given in (5.8) and (5.28) respectively, we get

$\ddot{e}^{(i)} = \big( M^{(i)} \big)^{-1} \big[ F^{(i)}(x^{(i)}, \dot{x}^{(i)}, t) - F_n^{(i)}(x_n^{(i)}, \dot{x}_n^{(i)}, t) \big] + \big( M^{(i)} \big)^{-1} Q_u^{(i)}$. (B.2)

Let us denote

$u^{(i)} := \big( M^{(i)} \big)^{-1} Q_u^{(i)}$, (B.3)

and

$\delta^{(i)}(x) := \big( M^{(i)} \big)^{-1} \big[ F^{(i)}(x^{(i)}, \dot{x}^{(i)}, t) - F_n^{(i)}(x_n^{(i)}, \dot{x}_n^{(i)}, t) \big]$. (B.4)

We can obtain a bound on $\delta^{(i)}(x)$ as

$\big\| \delta^{(i)}(x) \big\| \le \big\| \big( M^{(i)} \big)^{-1} \big\| \, \big\| F^{(i)} - F_n^{(i)} \big\|$. (B.5)

Consider the Lyapunov function with respect to the sliding surface defined in Eq. (5.30),

$V_s^{(i)} = \tfrac{1}{2} \, s^{(i)T} s^{(i)}$. (B.6)

Differentiating Eq. (5.30) once, we get

$\dot{s}^{(i)} = L^{(i)} \dot{e}^{(i)} + \ddot{e}^{(i)}$. (B.7)

Differentiating Eq. (B.6) once and substituting Eq. (B.7), we get

$\dot{V}_s^{(i)} = s^{(i)T} \dot{s}^{(i)} = s^{(i)T} \big( L^{(i)} \dot{e}^{(i)} + \ddot{e}^{(i)} \big)$. (B.8)

Substituting Eq. (B.2) and using the notation in Eqs. (B.3) and (B.4), we obtain

$\dot{V}_s^{(i)} = s^{(i)T} \big( L^{(i)} \dot{e}^{(i)} + \delta^{(i)}(x) + u^{(i)} \big)$. (B.9)

Let us choose $u^{(i)}$ to be of the form

$u^{(i)} = -L^{(i)} \dot{e}^{(i)} - \beta^{(i)} f^{(i)}(s^{(i)})$. (B.10)

As explained in Section 2.3, $f^{(i)}(s^{(i)})$ is a vector-valued function whose j-th component is defined as

$f_j^{(i)}(s_j^{(i)}) = g^{(i)} \big( s_j^{(i)} / \varepsilon^{(i)} \big)$, (B.11)

where $s_j^{(i)}$ is the j-th component of $s^{(i)}$, and $g^{(i)}$ is an odd, monotonically increasing function such that $g^{(i)} \big( s_j^{(i)} / \varepsilon^{(i)} \big) \ge 1$ if $s_j^{(i)} \ge \varepsilon^{(i)}$. Since we want to drive the system as close as desired to the sliding surface $s^{(i)} = 0$, we want $\dot{V}_s^{(i)}$ in Eq. (B.9) to be negative outside a small region $\Omega^{(i)}$ around the sliding surface. This region $\Omega^{(i)}$ is so defined that the functions $f^{(i)}(s^{(i)})$ defined in Eq. (B.10) satisfy $\| f^{(i)}(s^{(i)}) \| \ge 1$ outside $\Omega^{(i)}$. Substituting Eq. (B.10) in Eq. (B.9), we obtain

$\dot{V}_s^{(i)} = s^{(i)T} \big( \delta^{(i)}(x) - \beta^{(i)} f^{(i)}(s^{(i)}) \big)$. (B.12)

Since $g^{(i)}$ is an odd, monotonically increasing function,

$s^{(i)T} f^{(i)}(s^{(i)}) = \big| s^{(i)} \big|^T \big| f^{(i)}(s^{(i)}) \big|$. (B.13)

Also, noting that $\big\| s^{(i)} \big\|_1 \le \sqrt{n_i} \, \big\| s^{(i)} \big\|$, we can bound $\dot{V}_s^{(i)}$ as

$\dot{V}_s^{(i)} \le \big\| s^{(i)} \big\| \big( \sqrt{n_i} \, \big\| \delta^{(i)}(x) \big\| - \beta^{(i)} \big\| f^{(i)}(s^{(i)}) \big\| \big)$. (B.14)

Since $\| f^{(i)}(s^{(i)}) \| \ge 1$ outside the region $\Omega^{(i)}$, we get

$\dot{V}_s^{(i)} \le \big\| s^{(i)} \big\| \big( \sqrt{n_i} \, \big\| \delta^{(i)}(x) \big\| - \beta^{(i)} \big)$. (B.15)

Thus, if we choose $\beta^{(i)}(x, t) \ge \sqrt{n_i} \, \big\| \big( M^{(i)} \big)^{-1} \big\| \, \big\| F^{(i)} - F_n^{(i)} \big\| + \alpha^{(i)}$, with $\alpha^{(i)} > 0$, we have $\dot{V}_s^{(i)} < 0$ outside $\Omega^{(i)}$, and thus we have attractivity to the surface $s^{(i)} = 0$ enclosed by the region $\Omega^{(i)}$. Since $\| f^{(i)}(s^{(i)}) \| \le 1$ inside $\Omega^{(i)}$, once the system reaches the surface of $\Omega^{(i)}$ we have

$\big\| s^{(i)} \big\|_\infty \le \varepsilon^{(i)} \, \big( g^{(i)} \big)^{-1}(1) =: \bar{\varepsilon}^{(i)}$. (B.16)

Noting the definition of $s^{(i)}$ in Eq. (5.30) and the asymptotic bound on $s^{(i)}$ in Eq. (B.16), the tracking error is asymptotically bounded as $\lim_{t \to \infty} \big\| e^{(i)}(t) \big\| \le \bar{\varepsilon}^{(i)} / L^{(i)}$, and the tracking error in velocity is bounded as $\lim_{t \to \infty} \big\| \dot{e}^{(i)}(t) \big\| \le 2 \bar{\varepsilon}^{(i)}$.
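The attractivity argument above can be made concrete with a scalar toy computation (illustrative only, in Python; the mismatch bound, the gain, and the choice of the function g are made here and are not taken from the thesis): with u = -beta*g(s/eps) and beta larger than the bound on the mismatch, the sliding variable enters, and then remains inside, a band of half-width of order eps.

    import numpy as np

    # Scalar toy version of the argument in this appendix (names/values mine).
    # delta(t) stands in for the bounded mismatch delta^(i)(x) in (B.4); g is an
    # odd, monotonically increasing function with g(x) >= 1 for x >= 1, as
    # required below (B.11).  We integrate  ds/dt = delta(t) - beta * g(s/eps).
    eps, beta = 0.05, 1.0               # band parameter; beta > sup|delta| = 0.8
    g = np.cbrt                         # g(x) = x^(1/3): odd, increasing, g(1) = 1
    delta = lambda t: 0.8 * np.sin(3.0 * t)

    dt, s, t, tail = 1e-3, 2.0, 0.0, []
    for _ in range(20000):
        s += dt * (delta(t) - beta * g(s / eps))
        t += dt
        if t > 10.0:                    # record |s| after the initial transient
            tail.append(abs(s))
    print("eps =", eps, "  max |s| after transient =", max(tail))

With these numbers the recorded maximum settles well inside the band, mirroring the bound in Eq. (B.16).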
Appendix C
In this appendix, the proofs for the results regarding the additional controller used in Chapter 6,
section 6.4 are provided. For convenience, we recall the following.
The equation of motion of the controlled nominal system is

$M(\theta) \, \ddot{\theta} = Q(\theta, \dot{\theta}) + Q^C(t)$. (C.1)

The equation of motion of the controlled actual system is

$M_a(\theta_a) \, \ddot{\theta}_a = Q_a(\theta_a, \dot{\theta}_a) + Q^C(t) + Q^u(\theta_a, \dot{\theta}_a)$. (C.2)

In the above equation, the additional (generalized) control force $Q^u$ is computed using the expression

$Q^u = -\gamma \, (s / \varepsilon)$, (C.3)

where $\varepsilon$ is a small positive number, $s$ is the sliding variable

$s := \dot{e}_a + k \, e_a, \qquad k > 0$, (C.4)

and $\gamma$ is a positive number satisfying the condition

$\gamma \, \lambda_{\min} \ge \| q \| + k \, \| \dot{e}_a \|, \quad \forall t, \qquad \text{where} \quad \lambda_{\min} := \min \{ \text{eigenvalues of } M_a^{-1} \}$. (C.5)

In Eq. (C.5), $q$ is a quantity defined as

$q := M_a^{-1} \big( Q_a + Q^C \big) - M^{-1} \big( Q + Q^C \big)$, (C.6)

and $\| \cdot \|$ represents the $L_2$ norm of a vector.

Result 1: The additional control force $Q^u$ given by

$Q^u = -\gamma \, (s / \varepsilon)$, (C.7)

where $\varepsilon$ is a small positive number and $\gamma$ is a positive number satisfying the condition given in Eq. (C.5), ensures that the controlled actual system

$M_a(\theta_a) \, \ddot{\theta}_a = Q_a(\theta_a, \dot{\theta}_a) + Q^C(t) + Q^u(\theta_a, \dot{\theta}_a)$ (C.8)

stays within the region $\Omega$ defined by

$\Omega := \{ \, s \in \mathbb{R}^n : \| s \| \le \varepsilon \, \}$. (C.9)

Proof: Noting the definition of the sliding variable in Eq. (C.4), its derivative with respect to time is

$\dot{s}(t) = \ddot{e}_a + k \, \dot{e}_a$. (C.10)

Upon differentiating the tracking error $e_a(t) = \theta_a(t) - \theta(t)$ twice, we have

$\ddot{e}_a(t) = \ddot{\theta}_a(t) - \ddot{\theta}(t)$. (C.11)

Using the equations of motion of the controlled nominal system (Eq. (C.1)) and the controlled actual system (Eq. (C.8)), Eq. (C.11) becomes

$\ddot{e}_a = M_a^{-1} \big( Q_a + Q^C \big) - M^{-1} \big( Q + Q^C \big) + M_a^{-1} Q^u = q + M_a^{-1} Q^u$. (C.12)

The last equality above is obtained from the definition of $q$ in Eq. (C.6). Thus, the time derivative of the sliding variable can be simplified using Eq. (C.12) as

$\dot{s}(t) = \ddot{e}_a + k \, \dot{e}_a = q + M_a^{-1} Q^u + k \, \dot{e}_a$. (C.13)

Considering the Lyapunov function

$V = \tfrac{1}{2} \, s^T s$, (C.14)

its rate of change along the trajectories of the dynamical system is given by

$\dot{V} = s^T \dot{s} = s^T \big( q + M_a^{-1} Q^u + k \, \dot{e}_a \big) = s^T \big( q + k \, \dot{e}_a \big) - \dfrac{\gamma}{\varepsilon} \, s^T M_a^{-1} s$. (C.15)

Observing that $s^T M_a^{-1} s \ge \lambda_{\min} \| s \|^2$, we have

$\dot{V} \le \| s \| \, \| q \| + k \, \| s \| \, \| \dot{e}_a \| - \dfrac{\gamma \, \lambda_{\min}}{\varepsilon} \, \| s \|^2 = \| s \| \left( \| q \| + k \, \| \dot{e}_a \| - \gamma \, \lambda_{\min} \, \dfrac{\| s \|}{\varepsilon} \right)$. (C.16)

The region $\Omega$ is defined such that for $s$ outside $\Omega$ we have $\| s \| / \varepsilon \ge 1$. Hence, outside $\Omega$, the right-hand side of Eq. (C.16) is strictly negative when $\gamma \, \lambda_{\min} > \| q \| + k \, \| \dot{e}_a \|$. Since the controlled actual system starts inside the region $\Omega$, it stays within this attracting region and cannot escape from it.

As pointed out in Section 6.4, if the nominal system and the uncertain system do not start with the same initial conditions, then any trajectories of the controlled uncertain system that start from outside $\Omega$ are globally attracted to the region $\Omega$.

Result 2: If the controlled actual system is restricted to stay within the region $\Omega$, the errors in tracking the nominal system are bounded by

$| e_{a,i} | \le \dfrac{\varepsilon}{k}, \qquad | \dot{e}_{a,i} | \le 2 \varepsilon, \qquad i = 1, 2, \ldots, n$. (C.17)

Proof: Inside the region $\Omega$, $\| s \| \le \varepsilon$ and hence

$| s_i | \le \varepsilon, \qquad i = 1, \ldots, n$. (C.18)

From the relation $s_i = \dot{e}_{a,i} + k \, e_{a,i}$, we get

$| \dot{e}_{a,i} + k \, e_{a,i} | \le \varepsilon, \qquad i = 1, \ldots, n$. (C.19)

This inequality can be alternatively expressed as

$-\varepsilon \le \dot{e}_{a,i} + k \, e_{a,i} \le \varepsilon$, (C.20)

which can further be simplified to

$-\varepsilon - k \, e_{a,i} \le \dot{e}_{a,i} \le \varepsilon - k \, e_{a,i}$. (C.21)

Considering $e_{a,i}$ as a dynamical system, if we can prove that $e_{a,i} \, \dot{e}_{a,i} < 0$ (which is the derivative of the Lyapunov function $\tfrac{1}{2} \, e_{a,i} e_{a,i}$) outside a region $L_i$, we can conclude that the region $L_i$ is an attracting region. Defining $L_i$ as

$L_i := \Big\{ \, e_{a,i} \in \mathbb{R} : | e_{a,i} | \le \dfrac{\varepsilon}{k} \, \Big\}$, (C.22)

there are two possible cases in which $e_{a,i}$ could lie outside $L_i$. Let us look at both of them.

Case 1: If $e_{a,i} > \dfrac{\varepsilon}{k} > 0$, then $\varepsilon - k \, e_{a,i} < 0$. From Eq. (C.21), we then have

$e_{a,i} \, \dot{e}_{a,i} \le e_{a,i} \, ( \varepsilon - k \, e_{a,i} ) < 0$. (C.23)

Case 2: If $e_{a,i} < -\dfrac{\varepsilon}{k} < 0$, then $-\varepsilon - k \, e_{a,i} > 0$. Also, $e_{a,i} < 0$, and so from the left inequality in (C.21) we have

$e_{a,i} \, \dot{e}_{a,i} \le e_{a,i} \, ( -\varepsilon - k \, e_{a,i} ) < 0$. (C.24)

We note that initially $e_{a,i} = 0$, and therefore it will remain inside the region $L_i$ thereafter, or in other words, $| e_{a,i} | \le \dfrac{\varepsilon}{k}$. From relation (C.19), we observe that

$| \dot{e}_{a,i} | \le | \dot{e}_{a,i} + k \, e_{a,i} | + k \, | e_{a,i} | \le \varepsilon + k \, | e_{a,i} |$, (C.25)

which further yields

$| \dot{e}_{a,i} | \le 2 \varepsilon$. (C.26)
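Result 2 can also be checked with a short numerical sketch (illustrative only, in Python; the signal s(t), the gain k, and eps are chosen here and are not from the thesis): if the sliding variable s = de/dt + k*e stays within eps and e(0) = 0, then |e| stays within eps/k and |de/dt| within 2*eps.

    import numpy as np

    # Illustrative check of Result 2 (names and numbers are mine): if
    # s = de/dt + k*e satisfies |s| <= eps and e(0) = 0, then |e| <= eps/k
    # and |de/dt| <= 2*eps for all time.
    eps, k, dt = 0.1, 2.0, 1e-3
    t = np.arange(0.0, 20.0, dt)
    s = eps * np.sin(5.0 * t)             # any signal with |s| <= eps will do

    e, e_hist, edot_hist = 0.0, [], []
    for si in s:                          # integrate de/dt = s - k*e (Euler)
        edot = si - k * e
        e_hist.append(abs(e)); edot_hist.append(abs(edot))
        e += dt * edot

    print("max |e|     =", max(e_hist),    "   bound eps/k =", eps / k)
    print("max |de/dt| =", max(edot_hist), "   bound 2*eps =", 2.0 * eps)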
Appendix D
In this appendix, we prove that the vector that minimizes $f(x) = \| A x - b \|$ subject to $\| x \| \le c$, for $0 < c \le \| A^{+} b \|$, where $x \in \mathbb{R}^{n \times 1}$, $A \in \mathbb{R}^{1 \times n}$, and $b \in \mathbb{R} \setminus \{0\}$, is $x^{*} := c \, \dfrac{A^{+} b}{\| A^{+} b \|}$.

Result: $x^{*} := c \, \dfrac{A^{+} b}{\| A^{+} b \|}$ minimizes $f(x)$ amongst all $\{ \, x : \| x \| \le c \, \}$.

Proof: Since $A$ is a row vector, $A^{+}$ is simply given by

$A^{+} = \dfrac{A^{T}}{A A^{T}}$. (D.1)

Then, the quantity $A x^{*}$ can be computed as

$A x^{*} = c \, \dfrac{A A^{+} b}{\| A^{+} b \|} = c \, \dfrac{A A^{T}}{A A^{T}} \, \dfrac{b}{\| A^{+} b \|} = c \, \dfrac{b}{\| A^{+} b \|}$. (D.2)

Thus, $f(x^{*})$ is given by

$f(x^{*}) = \| A x^{*} - b \| = \left\| c \, \dfrac{b}{\| A^{+} b \|} - b \right\| = \left| 1 - \dfrac{c}{\| A^{+} b \|} \right| | b | = \left( 1 - \dfrac{c}{\| A^{+} b \|} \right) | b |$. (D.3)

The last equality above is due to the fact that $c \le \| A^{+} b \|$. Any vector $x$ such that $\| x \| \le c$ and $x \ne x^{*}$ can be decomposed as

$x = \alpha \, x^{*} + x_{\perp}, \qquad \alpha := \dfrac{\langle x, \, x^{*} \rangle}{\langle x^{*}, \, x^{*} \rangle}, \qquad x_{\perp} := x - \alpha \, x^{*}$, (D.4)

where $\alpha \, x^{*}$ is the component of $x$ along $x^{*}$ and $x_{\perp}$ is the component perpendicular to $x^{*}$. In the above, $\langle \cdot, \cdot \rangle$ denotes the dot product of two vectors. The orthogonality between $x_{\perp}$ and $x^{*}$ can be quickly verified as

$\langle x_{\perp}, x^{*} \rangle = \langle x - \alpha \, x^{*}, \, x^{*} \rangle = \langle x, x^{*} \rangle - \alpha \, \langle x^{*}, x^{*} \rangle = 0$. (D.5)

Expanding $\langle x_{\perp}, x^{*} \rangle$ as

$\langle x_{\perp}, x^{*} \rangle = \left\langle x_{\perp}, \; c \, \dfrac{A^{+} b}{\| A^{+} b \|} \right\rangle = \dfrac{c \, b}{A A^{T} \, \| A^{+} b \|} \, \langle x_{\perp}, A^{T} \rangle = 0$, (D.6)

it can be established that $\langle x_{\perp}, A^{T} \rangle = 0$. Then, it can be easily verified that $A x_{\perp} = 0$ as follows:

$A x_{\perp} = \langle A^{T}, x_{\perp} \rangle = 0$. (D.7)

By the Cauchy-Schwarz inequality, $\langle x, x^{*} \rangle \le \| x \| \, \| x^{*} \|$. Since $\| x \| \le c$ and $\| x^{*} \| = c$, we have $\langle x, x^{*} \rangle \le c^{2} = \langle x^{*}, x^{*} \rangle$. Therefore,

$\alpha \le 1$. (D.8)

Thus, for any vector $x$ such that $\| x \| \le c$ and $x \ne x^{*}$,

$A x = \alpha \, A x^{*} + A x_{\perp} = \alpha \, c \, \dfrac{b}{\| A^{+} b \|}$. (D.9)

The value of the objective function is then computed as

$f(x) = \| A x - b \| = \left\| \alpha \, c \, \dfrac{b}{\| A^{+} b \|} - b \right\| = \left| 1 - \alpha \, \dfrac{c}{\| A^{+} b \|} \right| | b |$. (D.10)

Since $\alpha \le 1$ and $c \le \| A^{+} b \|$, we have

$\left| 1 - \alpha \, \dfrac{c}{\| A^{+} b \|} \right| \ge 1 - \dfrac{c}{\| A^{+} b \|}$. (D.11)

Therefore,

$\| A x - b \| \ge \| A x^{*} - b \|$, or equivalently $f(x) \ge f(x^{*})$. (D.12)

Thus, we conclude that $x^{*} = c \, \dfrac{A^{+} b}{\| A^{+} b \|}$ minimizes $f(x)$ amongst all $\{ \, x : \| x \| \le c \, \}$.
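The closed-form minimizer derived above is easy to check numerically (a small Python/NumPy sketch, not part of the thesis; the dimensions, radius, and sample count are arbitrary choices made here): for a random row vector A, a nonzero b, and a radius c <= ||A^+ b||, the candidate x* = c A^+ b / ||A^+ b|| should achieve an objective value no larger than that of any randomly sampled feasible point.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    A = rng.standard_normal((1, n))          # row vector A (1 x n)
    b = np.array([2.0])                      # nonzero scalar b
    Apb = np.linalg.pinv(A) @ b              # A^+ b, the unconstrained least-squares solution
    c = 0.5 * np.linalg.norm(Apb)            # any radius with 0 < c <= ||A^+ b||

    x_star = c * Apb / np.linalg.norm(Apb)   # claimed constrained minimizer
    f = lambda x: np.linalg.norm(A @ x - b)

    best_sampled = np.inf                    # compare against random points with ||x|| <= c
    for _ in range(10000):
        v = rng.standard_normal(n)
        x = c * rng.uniform() ** (1.0 / n) * v / np.linalg.norm(v)
        best_sampled = min(best_sampled, f(x))

    print("f(x*)          =", f(x_star))
    print("best sampled f =", best_sampled, " (never smaller than f(x*))")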
References
1. Khalil H.K., Nonlinear Systems, Prentice Hall, New Jersey, 2002.
2. Utkin, V. I., “Sliding Modes and their Application in Variable Structure Systems”, Mir
Publishers (English Translation), Moscow, Russia, 1978.
3. Spurgeon, S. K., “Sliding Mode Observers: A Survey”, International Journal of Systems
Science, Vol. 39 (8), 2008.
4. Young, K. D., Utkin, V. I., Özgüner, Ü., “A Control Engineer’s Guide to Sliding Mode
Control”, IEEE Transactions on Control Systems Technology, Vol. 7, No. 3, pp. 328-342,
1999.
5. Sontag E. D., “A “Universal” Construction of Artstein’s Theorem on Nonlinear
Stabilization”, Systems and Control Letters, Vol.13, No.2, pp.117-123, 1989.
6. Sontag E. D., “A Lyapunov-Like Characterization of Asymptotic Controllability”, SIAM
Journal of Control and Optimization, Vol. 21, No. 3, pp.462-471, 1983.
7. Freeman R. A. and P. V. Kokotovic, "Inverse Optimality in Robust Stabilization", SIAM
Journal of Control and Optimization, 34, pp. 1365-1392, 1996.
8. Freeman R. A., J. A. Primbs, "Control Lyapunov Functions: New Ideas From an Old Source", Proc. of the 35th IEEE Conference on Decision and Control, pp. 3926-3931, 1996.
9. Hokayem P. F., Stipanović D. M., Spong M. W., “Coordination and Collision Avoidance for
Lagrangian Systems With Disturbances”, Applied Mathematics and Computation, Vol. 217,
No. 3, pp.1085-1094, 2010.
10. Hokayem P. F., Stipanović D. M., Spong M. W., “Semiautonomous Control of Multiple
Networked Lagrangian Systems”, International Journal of Robust and Nonlinear Control,
Vol. 19, No. 18, pp. 2040-2055, 2008.
11. Zou Y., Pagilla P. R., Ratliff R. T., “Distributed Formation Flight Control Using Constraint
Forces”, Journal of Guidance, Control, and Dynamics, Vol. 32, No. 1, pp. 112-120, 2009.
12. Zou Y., Pagilla P. R., Misawa E., “Formation of a Group of Vehicles with Full Information
Using Constraint Forces”, Journal of Dynamic Systems, Measurement and Control, Vol. 129,
No. 5, pp. 654-661, 2007.
13. Zou Y., Pagilla P. R., “A Distributed Constraint Force Approach to Coordination of Multiple
Mobile Robots”, Journal of Intelligent and Robotic Systems, Vol. 56, No. 1-2, pp. 5-21, 2009
14. Tang, Y., Tomizuka, M., Guerrero, G., and Montemayor, G., "Decentralized Robust Control of Mechanical Systems", IEEE Transactions on Automatic Control, Vol. 45, No. 4, 2000.
15. Udwadia, F.E., “Analytical Dynamics”, Cambridge University Press, 2008.
16. Udwadia F.E and R. E. Kalaba, “A New Perspective on Constrained Motion”, Proceedings of
the Royal Society of London, Series A, Vol. 439, November, pp. 407-410, 1992.
17. Udwadia, F.E., “A New Perspective on the Tracking Control of Nonlinear Structural and
Mechanical Systems”, Proceedings of the Royal Society of London, Series A, Vol. 459, pp.
1783-1800, 2003.
18. Udwadia, F.E., “Optimal Tracking Control of Nonlinear Dynamical Systems”, Proceedings
of the Royal Society of London, Series A, Vol. 464, pp.2341-2363, 2008.
19. Udwadia, F.E., and Schutte, A.D., “A Unified Approach to Rigid Body Rotational Dynamics
and Control”, Proceedings of the Royal Society of London, Series A, Vol. 468, pp. 395-414,
2012.
20. Udwadia, F.E., and Han, B., “Synchronization of Multiple Chaotic Gyroscopes Using the
Fundamental Equation of Mechanics”, Journal of Applied Mechanics, Vol. 75, 02011, 2008.
21. Mylapilli H., “Constrained Motion Approach to the Synchronization of the Multiple Coupled
Slave Gyroscopes”, Journal of Aerospace Engineering, Vol. 6, 814-828, 2011.
22. Cho, H., and Udwadia, F.E., “Explicit Control Force and Torque Determination for Satellite
Formation-Keeping with Attitude Requirements”, Journal of Guidance, Control and
Dynamics, Vol. 36, No. 2, pp. 589-605, 2013. DOI: 10.2514/1.55873
23. Cho, H., Udwadia, F.E., “Explicit Solution to the Full Nonlinear Problem for Satellite
Formation-Keeping”, Acta Astronautica, Vol. 67, pp. 369-387, 2010.
24. Udwadia, F. E., Schutte A., and Lam T., “Nonlinear Dynamics and Control of Multi-body
elastic spacecraft systems,” Advances in Nonlinear Analysis: Theory, Methods, and
Applications, Cambridge Scientific Publishers, pp. 263-285, 2009.
25. Udwadia, F. E., Wanichanon, T., “Control of Uncertain Nonlinear Multibody Mechanical
Systems”, Journal of Applied Mechanics, Vol. 81(4), 2014.
26. Udwadia, F. E., Koganti, P. B., Wanichanon, T., Stipanović, D. M., “Decentralized Control
of Nonlinear Dynamical Systems”, International Journal of Control, Vol. 87, Issue 4, 2014.
27. Udwadia, F.E., Wanichanon, T., and Cho, H., “Methodology for Satellite Formation-Keeping
in the Presence of System Uncertainties”, Journal of Guidance, Control, and Dynamics, Vol.
37, No. 5, pp. 1611-1624, 2014.
28. Udwadia, F. E., Koganti, P. B., “Dynamics and Control of a Multi-body Pendulum”,
Nonlinear Dyn., 2015, doi:10.1007/s11071-015-2034-0
29. Udwadia, F. E., “A New Approach to Stable Optimal Control of Complex Nonlinear
Dynamical Systems”, Journal of Applied Mechanics, Vol. 81(3), 2013.
30. Wang, Y., Swartz, R.A., Lynch, J.P., Law, K.H., Lu, K-C., and, Loh, C-H., “Decentralized
civil structural control using real-time wireless sensing and embedded computing”, Smart
Structures and Systems, Vol. 3, No. 3, pp.321-340, 2007.
31. Fallah, A.Y., Taghikhany, T., “Time-delayed decentralized H2/LQG controller for cable-
stayed bridge under seismic loading”, Structural Control and Health Monitoring, 2011.
32. Lu, K-C., Loh, C-H., Yang, J N., Lin, P-Y., “Decentralized Sliding Mode Control of a
Building Using MR Dampers”, Smart Materials and Structures, Vol. 17, Issue 5, p. 055006,
2008.
33. Stipanović, D.M., İnalhan, G., Teo, R., Tomlin, C.J., "Decentralized overlapping control of a formation of unmanned aerial vehicles", Automatica, 40, pp. 1285-1296, 2004.
34. Stanković, S.S., Stipanović, D.M., Stanković, M.S., “Decentralized Overlapping Tracking
Control of a Formation of Autonomous Unmanned Vehicles”, American Control Conference,
2009.
35. Sandell, N.R Jr., Varaiya, P., Athans, M., Safonov M.G., “Survey of Decentralized Control
Methods for Large Scale Systems”, IEEE Transactions on Automatic Control, Vol. AC-23,
No. 2, pp. 108-128, April 1978.
36. Witsenhausen, H.S., “A counterexample in Stochastic Optimum Control,” SIAM Journal of
Control, Vol. 6, No. 1, pp. 131-147, 1968.
37. Wanichanon, T., “On the Synthesis of Controls for General Nonlinear Constrained
Mechanical Systems”, PhD thesis submitted to University of Southern California, 2012.
38. Eltohamy, K. H., and Kuo, C-Y., “Nonlinear generalized Equations of motion for multi-link
inverted pendulums”, International Journal of Systems Science, Vol. 30, pp. 505-515, 1999.
39. Larcombe, L. J., “On the control of 2-dimensional multi-link inverted pendulum: the form of
the dynamic equations from choice of co-ordinate system”, International Journal of Systems
Science, Vol. 23, pp. 2265-2289, 1992.
40. Lobas L. G., “Generalized Mathematical Model of an Inverted multi-link Pendulum with
Follower Forces”, Vol. 41, No. 5, pp. 566-572, 2005.
41. Lobas L. G., “Dynamic behavior of Multi-link Pendulums under Follower Forces”,
International Applied Mechanics, Vol. 41, No. 6, pp. 587-613, 2005.
42. Cheng, P-Y., Cheng-I, W., Chen, C-K., “Symbolic Derivation of Dynamic Equations of
Motion for Robot Manipulators Using Piogram Symbolic Method”, IEEE Journal of
Robotics and Automation, Vol. 4, No. 6, pp, 599-609, 1988.
43. Schutte, A.D., Udwadia, F.E., “New Approach to the Modeling of Complex Multibody
Dynamical Systems”, Journal of Applied Mechanics, Vol. 78, pp. 021018-1 to 021018-11,
2011
44. Hemami, H., Wyman, B. F., “Modeling and Control of Constrained Dynamic Systems with
Application to Biped Locomotion in the Frontal Plane”, IEEE Transactions on Automatic
Control, Vol. AC-24, No.4, pp.526-535, 1979.
45. Liu, G., Li, Z., “A Unified Geometric Approach to Modeling and Control of Constrained
Mechanical Systems”, IEEE Transactions on Robotics and Automation, Vol. 18, No. 4, pp.
574-587, 2002.
46. Blajer, W., “A Geometric Unification of Constrained System Dynamics,” Multibody System
Dynamics, Vol. 1, pp. 3-21, 1997.
47. Blajer, W., “A Geometrical Interpretation and Uniform Matrix Formulation of Multibody
System Dynamics”, Vol. 4, pp. 247-259, 2001.
48. Milam, M. B., Mushambi, K., Murray, R. M., "A New Computational Approach to Real-Time Trajectory Generation for Constrained Mechanical Systems", Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, pp. 845-851, 2000.
49. Schutte A. D., “Permissible Control of General Constrained Mechanical Systems”, Journal of
the Franklin Institute, Vol. 347, No. 1, pp. 209-227, 2010.
50. Udwadia, F.E., Kalaba, R. E., “On the Foundations of Analytical Dynamics”, International
Journal of Non-Linear Mechanics, Vol. 37, pp. 1079-1090, 2002.
51. Lefschetz S., Differential Equations: Geometric Theory, Dover, 1977.
52. Perko L., Differential Equations and Dynamical Systems, Springer, 1996.
53. Sontag E., Mathematical Theory of Control: Deterministic Finite Dimensional Systems,
Springer, 1998.
54. Vidyasagar M., Nonlinear Systems Analysis, Prentice Hall, 1993.
55. Burl, J., “Linear Optimal Control”, Addison Wesley, 1998.
56. Bolza, O., “Lectures on the Calculus of Variations”, The University of Chicago Press, 1904.
57. Pars, L. A., “An Introduction to the calculus of Variations”, Dover, 1962.
58. Gelfand, L. M., Fomin, S. V., “Calculus of Variations”, Dover, 1963.
59. Zubov V.I., Mathematical Theory of Motion Stability, Univ. of St. Petersburg Press, 1997.
60. Levant, A., “Higher-order Sliding Modes, Differentiation and Output-feedback Control”,
International Journal of Control, Vol. 76, No. 9/10, 924-941, pp. 924-941, 2003.
61. Qu, Z., Dorsey, J. F., “Robust Control by Two Lyapunov Functions”, International Journal of
Control, Vol. 55, No. 6, pp. 1335-1350, 1992.
62. Udwadia, F.E., Kalaba, R. E., "What is the General Form of Explicit Equations of Motion
for Constrained Mechanical Systems?”, Journal of Applied Mechanics, Vol. 69, No. 3, pp.
335-339, 2002.
63. Udwadia, F.E., Schutte, A.D., “An Alternative Derivation of the Quaternion Equations of
Motion for Rigid-Body Rotational Dynamics”, Journal of Applied Mechanics, Vol. 77,
044505, pp. 1-4, 2010.
64. Krstic M., I. Kanellakopoulos, and P. V. Kokotovic, “Nonlinear and Adaptive Control
Design”, Wiley, 1995.
65. Çimen T., "State-Dependent Riccati Equation (SDRE) Control: A Survey", Proceedings of the 17th World Congress, IFAC, Seoul, Korea, July 6-11, 2008.
66. McDuffie J. H., Shtessel Y. B., "A De-coupled Sliding Mode Controller and Observer for Satellite Attitude Control", Proc. of the 29th Southeastern Symposium on System Theory, pp. 92-97, 1997.
67. Dashkovskiy, S., Kosmykov, M., and Wirth, F., “A Small Gain Condition for
Interconnections of ISS Systems with Mixed ISS Characterizations”, IEEE Transactions on
Automatic Control, V. 56(2011)5, pp. 1247-1258.
68. Polushin, I. G., Dashkovskiy, S. N., Takhmar, A., Patel, R. V., "A Small Gain Framework for Networked Cooperative Force-Reflecting Teleoperation", Automatica, Vol. 49 (2013), 2, pp. 338-348.
69. Šiljak, D. D., “Large-Scale Dynamic Systems: Stability and Structure”, North-Holland, New
York, New York, 1978.
70. Šiljak, D. D., “Decentralized Control of Complex Systems”, Academic Press, Boston,
Massachusetts, 1991.
71. Udwadia, F. E., Koganti, P. B., “Optimal Stable Control for Nonlinear Dynamical Systems:
an Analytical Dynamics Based Approach”, Nonlinear Dyn., 2015, doi:10.1007/s11071-015-
2175-1
72. Udwadia, F. E., Koganti, P. B., “Synthesis of Stable Optimal Controls From Lyapunov Based
Constraints”,
73. Udwadia, F.E., and Mylapilli, H., “Constrained Motion of Mechanical Systems and Tracking
Control of Nonlinear Systems: Connections and Closed-form Results.”, Nonlinear Dynamics
and Systems Theory, Vol. 15, No. 1, pp. 73-89, 2015.
Abstract
The current thesis deals with new approaches to the modeling and control of highly nonlinear, nonautonomous dynamical systems. In the field of Analytical Dynamics, the "fundamental equation of motion" developed by Udwadia and Kalaba has traditionally been used to derive the equations of motion that describe a mechanical system modeled using certain modeling constraints. One view of constrained motion is that nature applies the 'control force' necessary to satisfy the (modeling) constraints. The fundamental equation of motion gives an explicit expression for the exact force required so that the system satisfies the modeling constraints. Recently, an alternate use has been developed for it in the control of mechanical systems, where it is applied to obtain the control force necessary to satisfy certain types of control objectives. As long as the control objectives can be formally expressed as constraints that are linear in the accelerations, the fundamental equation of motion can be used to obtain the necessary control force. In the current work, an approach is explored that uses Lyapunov's theorem as the vehicle to synthesize constraints which, when satisfied, ensure global asymptotic stability of a dynamical system. Several refinements that add practical value have been proposed to this approach. These refinements include limits on the maximum control effort, non-full-state control, decentralized control of complex dynamical systems, and control under uncertainty. In addition, the current work also proposes a unifying framework for modeling and control of dynamical systems in which control requirements (constraints) are present in addition to the physical/modeling constraints that are needed to describe the physical system appropriately.