ESTIMATION OF STATES AND PARAMETERS IN
NONLINEAR PHARMACOKINETIC MODELS BY
EXTENDED KALMAN FILTERING

by

Carol Sue Christensen

A Thesis Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF SCIENCE
(Applied Mathematics)

December 1984
UMI Number: EP54415

All rights reserved

INFORMATION TO ALL USERS
The quality of this reproduction is dependent upon the quality of the copy submitted.
In the unlikely event that the author did not send a complete manuscript
and there are missing pages, these will be noted. Also, if material had to be removed,
a note will indicate the deletion.

UMI Dissertation Publishing
UMI EP54415
Published by ProQuest LLC (2014). Copyright in the Dissertation held by the Author.
Microform Edition © ProQuest LLC.
All rights reserved. This work is protected against
unauthorized copying under Title 17, United States Code.

ProQuest LLC
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106-1346
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007

This thesis, written by
Carol Sue Christensen
under the direction of her Thesis Committee,
and approved by all its members, has been presented
to and accepted by the Dean of The Graduate School,
in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE

Dean

THESIS COMMITTEE
DEDICATION

To My Family
ACKNOWLEDGEMENTS

I would like to thank my advisor, Professor Alan Schumitzky, for
introducing me to this subject and for all the time and help he has
given me with my thesis, in addition to his instruction and
encouragement over the last few years.

I also wish to acknowledge Professors Francesco Parisi-Presicce
and Mark Schilling for their guidance as instructors and thesis
committee members.
TABLE OF CONTENTS

                                                           Page
DEDICATION                                                   ii
ACKNOWLEDGEMENTS                                            iii
TABLE OF CONTENTS                                            iv
LIST OF FIGURES                                               v

CHAPTER
1  INTRODUCTION                                               1
   1.1  Background                                            1
2  DEFINITION OF THE KALMAN FILTER                            5
   2.1  Kalman Filtering Applied to Nonlinear Systems         8
3  PHARMACOKINETIC EXAMPLE                                   15
4  CONCLUSIONS                                               36
REFERENCES                                                   37
LIST OF FIGURES

Figure                                                     Page
2.1.1  Linearized Kalman Filter                              11
2.1.2  Extended Kalman Filter                                13
3.1    Dose and Observation Information                      22
3.2    Equations Used in the Pharmacokinetic Problem         25
3.3    Results of the Time Invariant Program                 34
CHAPTER 1

INTRODUCTION AND BACKGROUND

Kalman filtering is a technique, developed by R.E. Kalman, which
optimally estimates the state of a linear system. This technique can
be applied to either a discrete-time or continuous-time model. This
paper will deal with the discrete-time model since it is more
naturally implemented on a digital computer. Also, it describes more
accurately the physical application of estimation theory, where
measurement data is obtained at discrete instances of time [4].

An optimal estimator is an algorithm which processes error
corrupted measurement data to determine a minimum error estimate of
the state of the system. Estimation problems include:

1) Smoothing - the time at which the estimate is desired falls
within the available measurement data.

2) Filtering - the time of interest is equal to the time of the
last measurement point.

3) Prediction - the time of interest occurs after the last
measurement point.
1.1 BACKGROUND

The beginning of optimal estimation theory can be traced back
to Karl F. Gauss, who in 1795, at 18 years of age, invented the least
squares method. The problem that led to this invention was that of
orbit determination. The parameters that characterized the motion
of a planet had to be estimated from measurement data [2]. This same
type of problem, orbit determination, has also prompted much of the
modern-day study of estimation techniques.
In his studies, Gauss developed several ideas which relate
closely to work being done today. He noted that the measurements and
observations of a system are an approximation to its true state, as
are the calculations which use them. To obtain a close approximation,
some combination of the observations should be used, but there should
be more observations than are necessary to determine the parameters.
In addition, an approximation of the system should be known. In his
solution, Gauss states that the most probable value for the unknown
parameters would be one which minimizes the sum of the squares of the
residuals, with each residual being multiplied by a weight which
indicates the amount of confidence to place in that estimate. The
residual is the difference between the observation and the computed
measurement [2].

Gauss didn't publish his least squares method until 1809. By
that time Legendre had also invented the method and had published his
results in 1806. However, enough evidence has been found to credit
Gauss with the development of least squares.
In the 1940's, first Kolmogorov and then Wiener, independently of
one another, set forth a linear minimum mean-square estimation
technique. This technique allowed continuous measurements as well as
discrete, whereas Gauss had only considered the discrete-time model.
Another difference was that Kolmogorov and Wiener allowed the
parameter being estimated to change at every estimate. This technique
was limited to stationary systems and its equations were solved using
spectral factorization.
In 1955, J. W. Follin proposed that each new estimate be
recursively determined from the new measurement and the most recent
estimate. This seemed logical, since the most recent estimate would
be based on previous measurements. It was also less difficult than
recalculating a weighting function at each new measurement. This
approach was fundamental in the work of Richard Bucy who later,
together with Kalman, developed the continuous-time version of the
filter equations [2].

In 1960, Kalman published a paper describing a mean square filter
which recursively estimates the parameters of a system using discrete
samples of noisy measurements and gives a prediction of the parameters
based on previous measurements.
Differences can be seen between the classical estimation
techniques used by Gauss and those used today. In the classical
approach, systems were analyzed in terms of their input and output,
using basically the Laplace and Fourier transforms. Today, the
"state-space" concept is emphasized, which deals with the basic system
that gives rise to the observed output. Difference and differential
equations are also used today instead of the integral equations.
Another difference is the use of high speed digital computers
to generate numerical solutions rather than the requirement for closed
form pencil and paper solutions. This allows the use of more
sophisticated mathematical models and broadens the range of problems
which may be studied [4].

More recent applications of Kalman filtering can be found in
Reference [13].
A brief outline of the thesis follows:

In Chapter 2, the Kalman filter equations are defined. In
addition, the application of Kalman filtering to nonlinear systems is
discussed and the linearized and extended Kalman filters are given.

Chapter 3 presents a pharmacokinetic problem using the extended
Kalman filter. Two different methods of modeling this problem are
covered, with one method being implemented in a computer program.

Chapter 4 summarizes the two different methods and suggests
additions to the computer program which may improve its results.
CHAPTER 2

DEFINITION OF THE KALMAN FILTER

A complete derivation of the Kalman filter can be found in
Reference [1]. Here, the model will be defined, the problem
described, and the solution presented without any formal proof.
Model:

    x(k) = A(k,k-1)x(k-1) + G(k,k-1)w(k-1)        Dynamical system

where
    x = State Vector                        n x 1
    A = Transition Matrix                   n x n
    w = System Noise Vector (Plant Noise)   1 x 1
    G = Noise Transition Matrix             n x 1

Measurements:

    z(k) = H(k)x(k) + v(k)

where
    z = Observation Vector                  m x 1
    H = Observation Matrix                  m x n
    v = Additive Noise Vector               m x 1

Assumptions:

    E[w(k)] = 0                   for all k
    E[v(k)] = 0                   for all k
    Cov[x(0),x(0)] = P(0)
    Cov[w(k),w(j)] = Q(k)δ_jk     δ is the Kronecker delta
    Cov[v(k),v(j)] = R(k)δ_jk     Q and R are nonnegative definite
    Cov[w(k),v(j)] = [0]          for all j,k
    Cov[w(k),x(0)] = [0]          for all k
    Cov[v(k),x(0)] = [0]          for all k

where w and v are normally distributed with mean zero, and w, v, and
the initial state x(0) are all uncorrelated.
Problem:

    Given Z(k) = [z(1), z(2), ..., z(k)], find the linear, unbiased,
minimum mean square estimate of x(k), denoted by x̂(k/k).

    The conditional mean E[x / z(1), z(2), ..., z(k)] minimizes the mean
square error, E[(x - x̂)^T (x - x̂)], over all x̂ which are functions of
the observations. Further, x̂ is unbiased, that is, E[x̂] = E[E(x/Z)] = E[x].
A proof of this can be found in Reference [1].

    The determination of this conditional mean is among the
calculations needed to derive the Kalman filter given below, which is
the solution to the above stated problem.
Discrete-Time Kalman Filter

General recursive cycle: from x̂(k-1/k-1) to x̂(k/k) and
P(k-1/k-1) to P(k/k):

    x̂(k/k-1) = A(k,k-1) x̂(k-1/k-1)                               State extrapolate

    P(k/k-1) = A(k,k-1) P(k-1/k-1) A^T(k,k-1) + G(k,k-1) Q(k-1) G^T(k,k-1)
                                                                  Covariance extrapolate

    K(k) = P(k/k-1) H^T(k) [H(k) P(k/k-1) H^T(k) + R(k)]^(-1)     Kalman gain

    x̂(k/k) = x̂(k/k-1) + K(k)[z(k) - H(k) x̂(k/k-1)]                State update

    P(k/k) = P(k/k-1) - K(k) H(k) P(k/k-1)                        Covariance update

P is the estimated error covariance matrix of the state vector x, and is
used as a measure of the accuracy of the estimates. The Kalman gain,
K, is a measure of the confidence the filter places in the current
measurement.

Since there is no measurement data at the start of the estimation
procedure, the initial conditions are:

    x̂(0/-1) = E[x(0)]

with covariance

    P(0/-1) = Cov[x(0),x(0)] = P(0)
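For illustration only, one cycle of this recursion might be sketched in
Python/NumPy as follows (the function and variable names are not taken from
any program described in this thesis; A, G, Q, H, R are assumed to be
supplied by the caller):

```python
import numpy as np

def kalman_step(x_prev, P_prev, z, A, G, Q, H, R):
    """One cycle of the discrete-time Kalman filter:
    (x(k-1/k-1), P(k-1/k-1)) -> (x(k/k), P(k/k))."""
    # State and covariance extrapolation
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + G @ Q @ G.T
    # Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # State and covariance update with the new measurement z(k)
    x_upd = x_pred + K @ (z - H @ x_pred)
    P_upd = P_pred - K @ H @ P_pred
    return x_upd, P_upd
```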
The assumptions given in this problem are very similar to those
made by Gauss. The noise being independent from one sampling time to
the next is equivalent to Gauss' assumption that the residual (z - Hx)
is independent between sampling times. Also, the noise and initial
state are Gaussian in the Kalman filter. The linearity of the system
causes the state and measurements to be Gaussian. This implies that
the residual is Gaussian, which Gauss assumed. One difference between
the Kalman filter and Gauss' problem is that Kalman allows the state
to change from one time to another. This requires Gauss' problem to
be modified to include the noise (w(k)) as an error in the plant model
at each stage [2].

The Kalman filter has several advantages over previously
developed filters. These include the fact that the equations are easy
to implement on a digital computer. In addition, there are well
established numerical procedures for solving differential equations.
The problem was developed in a general framework which allows the
assumptions and results to be seen more clearly. The results can be
analyzed within this framework and insights into many different types
of problems can be obtained from computational studies [3].
2.1 KALMAN FILTER APPLIED TO NONLINEAR SYSTEMS

The Kalman filter assumes that the dynamical system can be
represented by a first-order system of linear difference equations and
that the measurements are linearly related to the state vector.
However, most realistic filtering problems are described by a nonlinear
stochastic differential equation

    ẋ(t) = f(x(t),t) + G(x(t),t) w(t)

where f is a nonlinear function of the state vector. x(t) is to be
estimated from sampled nonlinear measurements of the form

    z(t_k) = h(x(t_k), t_k) + v(t_k),    k = 1, 2, ...

where h depends on both the index k and the state at each sampling
time. The other terms are as described in the model for the Kalman
filter.
To deal with the nonlinearities, the linearized Kalman filter and
the extended Kalman filter have been developed. These are summarized
in Figure 2.1.1 and Figure 2.1.2. Their derivation is found in
Reference [1].

The linearized Kalman filter assumes that a nominal solution,
x̄(t), to the differential equation exists and describes the deviations
from this reference by linear equations. Since x̄(t) is given before
any measurement data is processed, the gains can be precalculated.

The extended Kalman filter uses the current estimate of the state
vector as the reference value at each stage for linearization. The
estimate, x̂(t), is usually obtained as the solution to the ordinary
differential equation dx̂(t)/dt = f(x̂(t),t) that describes the plant
behavior. Since the current estimate is used, the gains cannot be
precomputed as with the linearized Kalman filter. However, the
extended Kalman filter is generally more accurate than the linearized
Kalman filter [3].

One problem that can cause poor results is divergence. This
occurs when the error covariance matrix becomes too small compared to
the actual error in the estimate. Not enough weight is given to the
new measurement data, and any errors in the plant model can build up,
causing the estimates to be less and less accurate. Because of this,
a plant error term, w(k), should be included for most problems.
Divergence can usually be reduced by using the extended Kalman
filter. However, P(k) and P(k-1) are still linear approximations of
the error covariance matrices, and f and h are approximations of the
physical system, so modeling errors can exist [2].

Both the linearized and extended Kalman filter equations are
written for a continuous-discrete time system. This is a discrete
system which can be derived from a continuous system. There is also a
discrete-discrete time system which can be used for all discrete
systems. However, if the transition matrix of the system is not
invertible, the computations required can become quite difficult [3].
This will be discussed further in a later section.
Model:

    ẋ(t) = f(x(t),t) + G w(t)                                  Dynamical system

    z(k) = h(k)(x(t(k))) + v(k)                                Measurements

Assumptions:

    Nominal trajectory x̄(t) is available
    Assumptions for linear Kalman filter

Initial Conditions:

    x(0) is normally distributed with x̂(0/-1) = E[x(0)] and
    P(0/-1) = Cov[x(0),x(0)] = P(0)

Equations:

    dx̂(t)/dt = f(x̄(t),t) + F(x̄(t),t)[x̂(t) - x̄(t)]              State extrapolate

    Ṗ(t) = F(x̄(t),t)P(t) + P(t)F^T(x̄(t),t) + G Q(t) G^T         Covariance extrapolate

    K(k) = P(k/k-1) H^T(k)(x̄(t(k))) [H(k)(x̄(t(k))) P(k/k-1) H^T(k)(x̄(t(k)))
           + R(k)]^(-1)                                         Kalman gain

    x̂(k/k) = x̂(k/k-1) + K(k)[z(k) - h(k)(x̄(t(k)))
             - H(k)(x̄(t(k)))[x̂(k/k-1) - x̄(t(k))]]               State update

    P(k/k) = P(k/k-1) - K(k) H(k)(x̄(t(k))) P(k/k-1)             Covariance update

where

    x̂(k/ℓ) = E[x(t(k)) / Z(ℓ)]

    F(x̄(t),t) = ∂f(x(t),t)/∂x evaluated at x(t) = x̄(t)

    H(k)(x̄(t(k))) = ∂h(k)(x(t(k)))/∂x evaluated at x(t(k)) = x̄(t(k))

Figure 2.1.1
Linearized Kalman Filter
Model:

    ẋ(t) = f(x(t),t) + G w(t)                                  Dynamical system

    z(k) = h(k)(x(t(k))) + v(k)                                Measurements

Assumptions:

    Assumptions for linear Kalman filter

Initial Conditions:

    x(0) is normally distributed with x̂(0/-1) = E[x(0)] and
    P(0/-1) = Cov[x(0),x(0)] = P(0)

Equations:

    dx̂(t)/dt = f(x̂(t),t)                                       State extrapolate

    Ṗ(t) = F(x̂(t),t)P(t) + P(t)F^T(x̂(t),t) + G Q(t) G^T         Covariance extrapolate

    K(k) = P(k/k-1) H^T(k)(x̂(k/k-1)) [H(k)(x̂(k/k-1)) P(k/k-1) H^T(k)(x̂(k/k-1))
           + R(k)]^(-1)                                         Kalman gain

    x̂(k/k) = x̂(k/k-1) + K(k)[z(k) - h(k)(x̂(k/k-1))]             State update

    P(k/k) = P(k/k-1) - K(k) H(k)(x̂(k/k-1)) P(k/k-1)            Covariance update

where

    x̂(k/ℓ) = E[x(t(k)) / Z(ℓ)]

    H(k)(x̂(k/k-1)) = ∂h(k)(x)/∂x evaluated at x = x̂(k/k-1)

Figure 2.1.2
Extended Kalman Filter
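As a rough sketch only (not the program described later in this thesis),
the continuous-discrete cycle of Figure 2.1.2 could be mechanized by
integrating the state and covariance differential equations between
measurements with simple Euler steps and then applying the discrete update;
f, h, and their Jacobians F_jac, H_jac are assumed to be supplied by the caller:

```python
import numpy as np

def cd_ekf_cycle(x_est, P, z, t0, t1, f, F_jac, h, H_jac, G, Q, R, n_steps=100):
    """One continuous-discrete extended Kalman filter cycle:
    integrate dx/dt = f(x,t) and dP/dt = FP + PF^T + GQG^T from t0 to t1,
    then apply the measurement update at t1."""
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        F = F_jac(x_est, t)
        x_est = x_est + dt * f(x_est, t)
        P = P + dt * (F @ P + P @ F.T + G @ Q @ G.T)
        t += dt
    # Discrete measurement update at t1
    H = H_jac(x_est)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - h(x_est))
    P = P - K @ H @ P
    return x_est, P
```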
CHAPTER 3

PHARMACOKINETIC EXAMPLE

3.0 PHARMACOKINETIC EXAMPLE

In this problem, a drug is given to a patient and at certain
times an observation is taken to determine the serum concentration of
the drug in the patient's system. The patient's model has two unknown
parameters, denoted by x2 and x3 below. The extended Kalman filter
algorithm was used to obtain estimates of these parameters based on
observed serum levels.
The model for this problem is given below.

Dynamical system:

    x = (x1, x2, x3)^T

    ẋ(t) = f(x(t), t)

    ẋ1(t) = -(a + x2(t) b(t)) x1(t) + u(t)
    ẋ2(t) = ẋ3(t) = 0

    G(t) = 0    (i.e. no model noise)

    a = known constant
    b(t), u(t) are known piecewise constant functions

Measurements:

    z(t_n) = x3(t_n) x1(t_n) + v(t_n)

    t_0 = 0, t_1, t_2, ... are not evenly spaced
Observations:

    h_n(x(t_n)) = x3(t_n) x1(t_n)

    H_n(x(t_n)) = ∂h_n(x(t_n))/∂x = ( x3(t_n), 0, x1(t_n) )

The state transition matrix A(t, t0) can be found by using the fact that

    ∂A(t, t0)/∂t = F(x(t), t) A(t, t0),        A(t0, t0) = I

where

    F(x(t),t) = ∂f(x(t),t)/∂x = [ F11(x(t),t)   F12(x(t),t)   0 ]
                                [      0             0        0 ]
                                [      0             0        0 ]

    F11(x(t),t) = -(a + x2(t) b(t)),    F12(x(t),t) = -b(t) x1(t)

and

    A(t, t0) = [ L11   L12   L13 ]
               [ L21   L22   L23 ]
               [ L31   L32   L33 ]

We need only to compute L11 and L12, since

    L̇ij(t) = 0,    i = 2, 3;  j = 1, 2, 3

so that, from the initial condition A(t0, t0) = I,

    L22(t) = L33(t) = 1    and    L21(t) = L23(t) = L31(t) = L32(t) = 0,

and

    L̇11 = F11 L11 + F12 L21 = F11 L11,            L11(t0) = 1

    L̇12 = F11 L12 + F12 L22 = F11 L12 + F12,      L12(t0) = 0

    L̇13 = F11 L13 + 0 = F11 L13,                  L13(t0) = 0,  therefore L13(t) = 0.
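As a quick symbolic check (not part of the thesis), sympy confirms the
expected form of L11 under the simplifying assumption that b(t), and hence
F11 = -(a + x2 b) =: -lam, is constant over the interval considered:

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
lam = sp.symbols('lam', positive=True)   # lam = a + x2*b, constant over the interval
L11 = sp.Function('L11')

# dL11/dt = F11*L11 = -lam*L11 with L11(0) = 1, as derived above
sol = sp.dsolve(sp.Eq(L11(t).diff(t), -lam * L11(t)), ics={L11(0): 1})
print(sol)   # Eq(L11(t), exp(-lam*t))
```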
To compute L11 and L12, the following linear ordinary differential
equation must be solved:

    L̇(t) = α(t) L(t) + β(t),        L(0) = γ.

The solution to this is given by:

    L(t) = e^(h(t)) γ + ∫_0^t e^(h(t) - h(s)) β(s) ds,        h(t) = ∫_0^t α(s) ds.

As can be seen, for the continuous-discrete time system, numerical
integration is required each time the extended Kalman filter algorithm
is invoked. To avoid this, the problem was modeled as a discrete-
dynamics discrete-time system. This could be done because the control
variables of the dynamical system, u and b, are piecewise constant and
the differential equation could be solved analytically. The equation
can be written as:
    ẋ1(t) = -λ x1(t) + u,    where

    ẋ2(t) = 0
    ẋ3(t) = 0

    λ = a + x2(t) b = constant
    b = b(t)
    u(t) = u = constant
    x1(t0) is given.

The solution is

    x1(t) = e^(-λ(t - t0)) x1(t0) + (u/λ)(1 - e^(-λ(t - t0))).

Let t ∈ [t_n, t_(n+1)], with

    u = u_n,
    λ = λ_n = a + x2 b_n,
    Δ_n = t_(n+1) - t_n;

then x1(t) can be written as

    x1(t_(n+1)) = e^(-λ_n Δ_n) x1(t_n) + (u_n/λ_n)(1 - e^(-λ_n Δ_n)).
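A propagation step based on this analytic solution might look like the
following sketch (the function name and arguments are illustrative, not
taken from the thesis program):

```python
import numpy as np

def propagate_x1(x1, u_n, lam_n, dt):
    """Advance x1 over an interval of length dt during which the infusion
    rate u_n and the rate constant lam_n = a + x2*b_n are constant."""
    decay = np.exp(-lam_n * dt)
    return decay * x1 + (u_n / lam_n) * (1.0 - decay)

# Example: the first half hour of the regimen in Figure 3.1
# (IV rate 160, lam = x2 = 0.22 after the later change of variable, x1(0) = 0)
x1_at_half_hour = propagate_x1(0.0, 160.0, 0.22, 0.5)
```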
The discrete-discrete extended Kalman filter is given by Schweppe [6]
as follows:

Model:

    x(n+1) = Φ[x(n),n] + G[x(n),n] w(n)            Dynamical system

    z(n) = h[x(n),n] + v(n)                        Measurements, n = 1, ..., N

where

    Φ = K_x vector nonlinear function of the K_x vector state x and time n
    h = K_z vector nonlinear function of the K_x vector state x and time n
    G = K_x by K_w matrix
    w, v are uncertain processes
    x(0) is the uncertain initial state.

Assumptions:

    Same as those for the linear Kalman filter.

Equations:

    x̂(n+1/n) = Φ[x̂(n/n),n]                                         State extrapolate

    P(n+1/n) = A[x̂(n/n),n] P(n/n) A^T[x̂(n/n),n]
               + G[x̂(n/n),n] Q(n) G^T[x̂(n/n),n]                    Covariance extrapolate

    P(n+1/n+1) = [H^T[x̂(n+1/n),n+1] R^(-1)(n+1) H[x̂(n+1/n),n+1]
                  + P^(-1)(n+1/n)]^(-1)                             Covariance update

    K(n+1) = P(n+1/n+1) H^T[x̂(n+1/n),n+1] R^(-1)(n+1)               Kalman gain

    x̂(n+1/n+1) = x̂(n+1/n) + K(n+1)[z(n+1) - h[x̂(n+1/n),n+1]]        State update

    P(0/0) = Cov[x(0),x(0)]
    x̂(0/0) = E[x(0)]

    A[x̂(n/n),n] = ∂Φ[x,n]/∂x evaluated at x = x̂(n/n)

    H[x̂(n+1/n),n+1] = ∂h[x,n+1]/∂x evaluated at x = x̂(n+1/n)
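A sketch of one cycle of this discrete-discrete filter, with the transition
and measurement functions and their Jacobians supplied by the caller, might
look like the following (illustrative names only); note that this form of
the covariance update inverts both R and the extrapolated P, a point that
becomes important later when P is singular:

```python
import numpy as np

def dd_ekf_cycle(x_est, P, z, Phi, A_jac, h, H_jac, G, Q, R):
    """One cycle of the discrete-discrete extended Kalman filter as quoted
    from Schweppe, using the information-form covariance update."""
    # Extrapolation
    x_pred = Phi(x_est)
    A = A_jac(x_est)
    P_pred = A @ P @ A.T + G @ Q @ G.T
    # Covariance update (requires P_pred and R to be invertible)
    H = H_jac(x_pred)
    R_inv = np.linalg.inv(R)
    P_upd = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(P_pred))
    # Gain and state update
    K = P_upd @ H.T @ R_inv
    x_upd = x_pred + K @ (z - h(x_pred))
    return x_upd, P_upd
```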
In the above notation the pharmacokinetic example is written as
follows:

    x(n) = (x1(n), x2(n), x3(n))^T

    x1(n) = x1(t_n), and x2(n) and x3(n) are constants

    Φ(x(n),n) = ( L1(x(n),n), L2(x(n),n), L3(x(n),n) )^T

    x1(n+1) = L1(x(n),n) = e^(-λ_n Δ_n) x1(n) + (u_n/λ_n)(1 - e^(-λ_n Δ_n))
    x2(n+1) = L2(x(n),n) = x2(n) = x2
    x3(n+1) = L3(x(n),n) = x3(n) = x3

    λ_n = a + b_n x2

    G[x(n),n] = 0    (i.e. no system noise)

    h[x(n),n] = x1(n) x3
    z(n) = h[x(n),n] + v(n)

    H[x(n),n] = (x3, 0, x1(n))

    A[x(n),n] = [ ∂L1[x(n),n]/∂x1   ∂L1[x(n),n]/∂x2   0 ]
                [        0                 1          0 ]
                [        0                 0          1 ]

since

    ∂L1[x(n),n]/∂x3 = ∂L2[x(n),n]/∂x1 = ∂L2[x(n),n]/∂x3
                    = ∂L3[x(n),n]/∂x1 = ∂L3[x(n),n]/∂x2 = 0

and

    ∂L2[x(n),n]/∂x2 = ∂L3[x(n),n]/∂x3 = 1.
Figure 3.1 shows the dose times and amounts along with the
observation times and values. One of the control variables, b (OCR in
this figure), was the same throughout the observations. Because of
this it was decided to make a change of variable so that the other
constants could have the following values:

    a = 0,  b = 1,    making λ = x2
Observation Information

Observation    Time in    Measured level for    Standard
number         hours      each output (z)       deviation

1              1.5        4.4                   .44
2              4.5        2.2                   .22
3              5.0        1.6                   .16
4              9.5        0.5                   .05
5              59.5       1.1                   .11
6              61.0       6.5                   .65

Dosage Regimen Information

Event    Time in    Rate and/or amount for all inputs
         hours         IV        OCR

1        0.0          160.0      58.0
2        0.5            0.0      58.0
3        11.5          80.0      58.0
4        12.5           0.0      58.0
5        19.5         120.0      58.0
6        20.5           0.0      58.0
7        27.5         120.0      58.0
8        28.5           0.0      58.0
9        35.5         120.0      58.0
10       36.5           0.0      58.0
11       43.5         120.0      58.0
12       44.5           0.0      58.0
13       51.5         120.0      58.0
14       52.5           0.0      58.0
15       59.5         120.0      58.0
16       60.5           0.0      58.0

Figure 3.1
Observation and Dose Information
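For use in a program, the data of Figure 3.1 might be encoded as plain
Python structures (the variable names are illustrative):

```python
# Observations from Figure 3.1: (time in hours, measured level z, standard deviation)
observations = [
    (1.5, 4.4, 0.44), (4.5, 2.2, 0.22), (5.0, 1.6, 0.16),
    (9.5, 0.5, 0.05), (59.5, 1.1, 0.11), (61.0, 6.5, 0.65),
]

# Dosage regimen from Figure 3.1: (event time in hours, IV rate, OCR).
# Each IV rate stays in effect until the next event; OCR is constant at 58.0.
dose_events = [
    (0.0, 160.0, 58.0), (0.5, 0.0, 58.0), (11.5, 80.0, 58.0), (12.5, 0.0, 58.0),
    (19.5, 120.0, 58.0), (20.5, 0.0, 58.0), (27.5, 120.0, 58.0), (28.5, 0.0, 58.0),
    (35.5, 120.0, 58.0), (36.5, 0.0, 58.0), (43.5, 120.0, 58.0), (44.5, 0.0, 58.0),
    (51.5, 120.0, 58.0), (52.5, 0.0, 58.0), (59.5, 120.0, 58.0), (60.5, 0.0, 58.0),
]
```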
With this change the initial conditions are as given below:

    x̂1(0/0) = 0                  [ 0      0       0     ]
    x̂2(0/0) = .22     P(0/0) =   [ 0     .022     0     ]
    x̂3(0/0) = .045               [ 0      0      .0045  ]

The initial state x1(0) = 0 is known exactly, which is why the
corresponding row and column of P(0/0) are zero. As can be seen from the
equation above for the updated covariance, it is required that the
covariance matrix P be invertible. Since this was not true for the above
conditions, the covariance update equation was rederived to relax this
assumption and is included in the final equations used in this
problem. See Figure 3.2.
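A small numerical check (illustrative values only) shows why the
rederivation was needed: with this singular P, the information-form update
quoted earlier from Schweppe cannot even be formed, while the update used in
Figure 3.2 needs only the inverse of H P H^T + R:

```python
import numpy as np

P = np.diag([0.0, 0.022, 0.0045])       # singular: x1(0) is known exactly
H = np.array([[0.045, 0.0, 60.0]])      # H = (x3, 0, x1) at illustrative values
R = np.array([[0.44**2]])               # variance of the first observation

# The information-form update would require P^{-1}, which does not exist:
# np.linalg.inv(P)  -> LinAlgError: singular matrix

# The rederived update of Figure 3.2 works, since only (H P H^T + R) is inverted:
S = H @ P @ H.T + R
P_updated = P - P @ H.T @ np.linalg.inv(S) @ H @ P
print(P_updated)
```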
Since the observations are not taken at equal time intervals,
and these times are not the same as the dose times, the model will be
different for each observation. That is, a new L1(x(n),n) and
A[x(n),n] have to be calculated. These are given below. q(k) will
denote x1 at dose time k. x(n) will denote x at observation number
n and is a function of the q(k) since the previous observation. We
therefore have:

    x(0) = (0, x2, x3)^T

    q(0) = 0,    q(.5) = 0 + (u/x2)(1 - e^(-.5 x2))

    q(1.5) = e^(-1.0 x2) q(.5) + 0 = L1(x(0),0)
Model:

    x1(n+1) = e^(-x2 Δ_n) x1(n) + (u_n/x2)(1 - e^(-x2 Δ_n))
    x2(n+1) = x2
    x3(n+1) = x3

    z(n) = h[x(n),n] + v(n)
    h[x(n),n] = x1(n) x3

Extended Kalman Filter:

    State extrapolate
        x̂(N+1/N) = Φ[x̂(N/N),N]

    Covariance extrapolate
        P(N+1/N) = A[x̂(N/N),N] P(N/N) A^T[x̂(N/N),N]

    Covariance update
        P(N+1/N+1) = P(N+1/N) - P(N+1/N) H^T[x̂(N+1/N),N+1]
                     × [H[x̂(N+1/N),N+1] P(N+1/N) H^T[x̂(N+1/N),N+1] + R(N+1)]^(-1)
                     × H[x̂(N+1/N),N+1] P(N+1/N)

    Gain
        K(N+1) = P(N+1/N+1) H^T[x̂(N+1/N),N+1] R^(-1)(N+1)

    State update
        x̂(N+1/N+1) = x̂(N+1/N) + K(N+1)[z(N+1) - h[x̂(N+1/N),N+1]]

Figure 3.2
Equations Used in the Pharmacokinetic Problem
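A sketch of one cycle of these equations for this model (not the thesis
program itself; names are illustrative) might be:

```python
import numpy as np

def pk_ekf_cycle(x_est, P, z, u_n, dt, R):
    """One cycle of the extended Kalman filter of Figure 3.2 for the
    pharmacokinetic model with a = 0, b = 1 (so lam = x2).
    x_est = (x1, x2, x3); z is the measured serum level, or None if no
    observation is taken at this time point; R is the measurement variance."""
    x1, x2, x3 = x_est
    decay = np.exp(-x2 * dt)
    # State extrapolation through the analytic solution
    x1_pred = decay * x1 + (u_n / x2) * (1.0 - decay)
    x_pred = np.array([x1_pred, x2, x3])
    # Transition Jacobian A (rows for x2 and x3 are the identity)
    dL1_dx2 = -dt * decay * x1 + (u_n / x2) * (dt * decay - (1.0 - decay) / x2)
    A = np.array([[decay, dL1_dx2, 0.0],
                  [0.0,   1.0,     0.0],
                  [0.0,   0.0,     1.0]])
    P_pred = A @ P @ A.T
    if z is None:
        return x_pred, P_pred               # extrapolation only
    # Measurement h = x1*x3, H = (x3, 0, x1), and the covariance update of
    # Figure 3.2, which does not require P to be invertible
    H = np.array([[x3, 0.0, x1_pred]])
    S = (H @ P_pred @ H.T).item() + R
    P_upd = P_pred - (P_pred @ H.T @ H @ P_pred) / S
    K = P_upd @ H.T / R
    x_upd = x_pred + (K * (z - x1_pred * x3)).ravel()
    return x_upd, P_upd
```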
    ∂L1[x(0),0]/∂x1 = 0

    ∂L1[x(0),0]/∂x2 = (u/x2)[1.5 e^(-1.5 x2) - 1.0 e^(-1.0 x2)
                      - (1/x2)(e^(-1.0 x2) - e^(-1.5 x2))]

    x(1) = (q(1.5), x2, x3)^T

    q(4.5) = e^(-3.0 x2) q(1.5) + 0 = e^(-3.0 x2) x1(1) = L1(x(1),1)

    ∂L1[x(1),1]/∂x1 = e^(-3.0 x2)

    ∂L1[x(1),1]/∂x2 = -3.0 e^(-3.0 x2) x1(1)

    x(2) = (q(4.5), x2, x3)^T

    q(5.0) = e^(-.5 x2) q(4.5) = L1(x(2),2)

    ∂L1[x(2),2]/∂x1 = e^(-.5 x2)

    ∂L1[x(2),2]/∂x2 = -.5 e^(-.5 x2) q(4.5)

    x(3) = (q(5.0), x2, x3)^T

    q(9.5) = e^(-4.5 x2) q(5.0) = L1(x(3),3)

    ∂L1[x(3),3]/∂x1 = e^(-4.5 x2)

    ∂L1[x(3),3]/∂x2 = -4.5 e^(-4.5 x2) q(5.0)

    x(4) = (q(9.5), x2, x3)^T

    q(11.5) = e^(-2 x2) q(9.5)
    q(12.5) = e^(-x2) q(11.5) + (80/x2)(1 - e^(-x2))
    q(19.5) = e^(-7 x2) q(12.5) + 0
    q(20.5) = e^(-x2) q(19.5) + (120/x2)(1 - e^(-x2))
    q(27.5) = e^(-7 x2) q(20.5) + 0
    q(28.5) = e^(-x2) q(27.5) + (120/x2)(1 - e^(-x2))
    q(35.5) = e^(-7 x2) q(28.5) + 0
    q(36.5) = e^(-x2) q(35.5) + (120/x2)(1 - e^(-x2))
    q(43.5) = e^(-7 x2) q(36.5) + 0
    q(44.5) = e^(-x2) q(43.5) + (120/x2)(1 - e^(-x2))
    q(51.5) = e^(-7 x2) q(44.5) + 0
    q(52.5) = e^(-x2) q(51.5) + (120/x2)(1 - e^(-x2))
    q(59.5) = e^(-7 x2) q(52.5) + 0 = L1(x(4),4)

Here the partial derivatives are taken with respect to the components of
x(4), so ∂q(9.5)/∂x1 = 1 and ∂q(9.5)/∂x2 = 0. Then

    ∂L1(x(4),4)/∂x1 = e^(-7 x2) (∂/∂x1) q(52.5) = e^(-7 x2) e^(-x2) (∂/∂x1) q(51.5)
                    = e^(-8 x2) e^(-7 x2) (∂/∂x1) q(44.5) = e^(-15 x2) e^(-x2) (∂/∂x1) q(43.5)
                    = e^(-16 x2) e^(-7 x2) (∂/∂x1) q(36.5) = e^(-23 x2) e^(-x2) (∂/∂x1) q(35.5)
                    = e^(-24 x2) e^(-7 x2) (∂/∂x1) q(28.5) = e^(-31 x2) e^(-x2) (∂/∂x1) q(27.5)
                    = e^(-32 x2) e^(-7 x2) (∂/∂x1) q(20.5) = e^(-39 x2) e^(-x2) (∂/∂x1) q(19.5)
                    = e^(-40 x2) e^(-7 x2) (∂/∂x1) q(12.5) = e^(-47 x2) e^(-x2) (∂/∂x1) q(11.5)
                    = e^(-48 x2) e^(-2 x2) = e^(-50 x2)

    ∂L1(x(4),4)/∂x2 = -7 e^(-7 x2) q(52.5) + e^(-7 x2) (∂/∂x2) q(52.5)

with

    (∂/∂x2) q(52.5) = -e^(-x2) q(51.5) + e^(-x2) (∂/∂x2) q(51.5)
                      + (120/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    (∂/∂x2) q(51.5) = -7 e^(-7 x2) q(44.5) + e^(-7 x2) (∂/∂x2) q(44.5)

    (∂/∂x2) q(44.5) = -e^(-x2) q(43.5) + e^(-x2) (∂/∂x2) q(43.5)
                      + (120/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    (∂/∂x2) q(43.5) = -7 e^(-7 x2) q(36.5) + e^(-7 x2) (∂/∂x2) q(36.5)

    (∂/∂x2) q(36.5) = -e^(-x2) q(35.5) + e^(-x2) (∂/∂x2) q(35.5)
                      + (120/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    (∂/∂x2) q(35.5) = -7 e^(-7 x2) q(28.5) + e^(-7 x2) (∂/∂x2) q(28.5)

    (∂/∂x2) q(28.5) = -e^(-x2) q(27.5) + e^(-x2) (∂/∂x2) q(27.5)
                      + (120/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    (∂/∂x2) q(27.5) = -7 e^(-7 x2) q(20.5) + e^(-7 x2) (∂/∂x2) q(20.5)

    (∂/∂x2) q(20.5) = -e^(-x2) q(19.5) + e^(-x2) (∂/∂x2) q(19.5)
                      + (120/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    (∂/∂x2) q(19.5) = -7 e^(-7 x2) q(12.5) + e^(-7 x2) (∂/∂x2) q(12.5)

    (∂/∂x2) q(12.5) = -e^(-x2) q(11.5) + e^(-x2) (∂/∂x2) q(11.5)
                      + (80/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    (∂/∂x2) q(11.5) = -2 e^(-2 x2) q(9.5)

    x(5) = (q(59.5), x2, x3)^T

    q(60.5) = e^(-x2) q(59.5) + (120/x2)(1 - e^(-x2))

    q(61.0) = e^(-.5 x2) q(60.5) + 0 = L1(x(5),5)

    ∂L1(x(5),5)/∂x1 = e^(-.5 x2) (∂/∂x1) q(60.5) = e^(-.5 x2) e^(-x2) = e^(-1.5 x2)

    ∂L1(x(5),5)/∂x2 = -.5 e^(-.5 x2) q(60.5) + e^(-.5 x2) (∂/∂x2) q(60.5)

    (∂/∂x2) q(60.5) = -e^(-x2) q(59.5) + (120/x2)(e^(-x2) - (1 - e^(-x2))/x2)

    x(6) = (q(61.0), x2, x3)^T
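The recursion just carried out by hand can be mechanized. A sketch (not the
thesis program) that propagates q and the sensitivities ∂q/∂x1 and ∂q/∂x2
through an arbitrary sequence of piecewise-constant intervals, under the same
change of variable a = 0, b = 1, is:

```python
import numpy as np

def propagate_with_sensitivities(q, dq_dx1, dq_dx2, x2, u, dt):
    """Advance q = x1 over one interval of length dt with constant infusion
    rate u (u = 0 for a no-dose interval), carrying the sensitivities with
    respect to x1 and x2 used in the hand computation above."""
    decay = np.exp(-x2 * dt)
    q_new = decay * q + (u / x2) * (1.0 - decay)
    dq_dx1_new = decay * dq_dx1
    dq_dx2_new = (-dt * decay * q + decay * dq_dx2
                  + (u / x2) * (dt * decay - (1.0 - decay) / x2))
    return q_new, dq_dx1_new, dq_dx2_new

# Example: observation 5 is reached from x(4) = (q(9.5), x2, x3) by stepping
# through the intervals between t = 9.5 and t = 59.5 hours (Figure 3.1).
intervals = [(2.0, 0.0), (1.0, 80.0)] + [(7.0, 0.0), (1.0, 120.0)] * 5 + [(7.0, 0.0)]
q, d1, d2 = 10.0, 1.0, 0.0      # q(9.5) = x1(4) (value illustrative), dq/dx1 = 1
for dt, u in intervals:
    q, d1, d2 = propagate_with_sensitivities(q, d1, d2, 0.22, u, dt)
# d1 now equals exp(-50 * 0.22), matching dL1(x(4),4)/dx1 = e^(-50 x2) above
# (here x2 = 0.22, the initial estimate of x2)
```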
Each x(n) is evaluated with the latest estimate x̂(n-1/n-1) to
produce the extrapolated state x̂(n/n-1). Similarly, the state
transition matrix A for observation n is evaluated with estimate
x̂(n-1/n-1) and then used in the equation for the extrapolated
covariance. The calculation of A past the first observation requires
the model for the previous A. The previous model is also evaluated
with the latest estimate before the current A is computed.

Realistically, this scenario is one of the simplest for a
pharmacokinetic type problem. Yet writing a program to implement
these equations on the computer is not a straightforward task. In
addition, the resulting program would be difficult to test and/or
debug completely.

To avoid these difficulties the problem was modified to be time
invariant. Since the dose and observation times fell mainly on a
half-hour boundary, the time unit was chosen to be a half hour. This
could be increased or decreased for different sets of input times, if
necessary. The model does not have to be recalculated for each
observation, but remains constant for every time point. It is given
below:
    L1(x(n),n) = e^(-.5 x2) x1 + (u_n/x2)(1 - e^(-.5 x2))        for all n

    ∂L1(x(n),n)/∂x1 = e^(-.5 x2)        n = 2, ..., 6
                    = 0.0               n = 1

    ∂L1(x(n),n)/∂x2 = -.5 e^(-.5 x2) x1 + (u_n/x2)(.5 e^(-.5 x2) - (1 - e^(-.5 x2))/x2)
A program which used the equations in Figure 3.2, the above model,
and the dose and observation data in Figure 3.1 was written.
Estimates were generated for the 6 observations and the results are
given in Figure 3.3.

The program keeps track of the actual dose and observation
times. If there is no dose at a particular time point, the u for that
time is set to 0.0. If there is no observation, R is considered
infinite. This has the effect of zeroing certain terms, so that the
only thing done at that time point by the Kalman filter equations is
the calculation of A and the state and covariance extrapolation.
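A self-contained sketch of the main loop just described (not the actual
program, which was written with data statements; names and structure are
illustrative) could be:

```python
import numpy as np

def run_time_invariant_filter(x0, P0, dose_events, observations, t_end, dt=0.5):
    """Step every half hour: always extrapolate the state and covariance;
    run the measurement update only when an observation falls on the current
    time point (otherwise R is in effect infinite and the update is skipped).
    dose_events and observations are lists as in Figure 3.1."""
    dose = {t: iv for t, iv, _ in dose_events}          # IV rate taking effect at t
    obs = {t: (z, sd) for t, z, sd in observations}
    x_est, P = np.array(x0, float), np.array(P0, float)
    u, t, estimates = 0.0, 0.0, []
    while t < t_end:
        u = dose.get(t, u)                              # rate stays in effect until changed
        x1, x2, x3 = x_est
        decay = np.exp(-x2 * dt)
        # Extrapolation with the time-invariant model (half-hour step)
        x1_new = decay * x1 + (u / x2) * (1.0 - decay)
        dL1_dx2 = -dt * decay * x1 + (u / x2) * (dt * decay - (1.0 - decay) / x2)
        A = np.array([[decay, dL1_dx2, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        x_est = np.array([x1_new, x2, x3])
        P = A @ P @ A.T
        t = round(t + dt, 6)
        if t in obs:                                    # Kalman filter update only here
            z, sd = obs[t]
            H = np.array([[x_est[2], 0.0, x_est[0]]])
            S = (H @ P @ H.T).item() + sd**2
            P = P - (P @ H.T @ H @ P) / S
            K = P @ H.T / sd**2
            x_est = x_est + (K * (z - x_est[0] * x_est[2])).ravel()
            estimates.append((t, x_est.copy()))
    return estimates

# With x0 = [0.0, 0.22, 0.045], P0 = np.diag([0.0, 0.022, 0.0045]), the dose and
# observation lists of Figure 3.1, and t_end = 61.0, this produces one estimate
# per observation, analogous to Figure 3.3.
```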
Estimate for observation number 1    z = 4.400
    x(1) = 61.0061
    x(2) = .2169
    x(3) = .0716

Estimate for observation number 2    z = 2.200
    x(1) = 30.6689
    x(2) = .2192
    x(3) = .0718

Estimate for observation number 3    z = 1.600
    x(1) = 23.8421
    x(2) = .2261
    x(3) = .0728

Estimate for observation number 4    z = .500
    x(1) = 7.1122
    x(2) = .2291
    x(3) = .0745

Estimate for observation number 5    z = 1.100
    x(1) = 14.7734
    x(2) = .2341
    x(3) = .0781

Estimate for observation number 6    z = 6.500
    x(1) = 106.7392
    x(2) = .2336
    x(3) = .0705

Figure 3.3
Results of the Time Invariant Program
CHAPTER 4

CONCLUSIONS

The pharmacokinetic example is modeled in two different ways. Using
the first method, at each time point the model has to be modified
since the dose and observation times are different and unevenly
spaced. Any changes to these times require that each model be
rewritten. The second method keeps the model constant by selecting a
time interval and assuming an observation is taken every interval.
The state vector and covariance matrix are extrapolated at each time
point, but the Kalman filter algorithm is only executed at the actual
observation times. A change in these times requires only that the
input to the program be changed accordingly.

It would be desirable to make the program interactive so that
anyone using this model could simply enter the dose and observation
times and have the program print out the estimates. At the present
time, the data is initialized at the start of the program by data
statements.

To increase the accuracy of the estimates, noise could be added
to the dynamical system. The assumption that the model is error free
is only a first approximation to reality. Further study could be done
to determine additional sources for these model noise terms.
REFERENCES

[1] Le May, J.L., and Brogan, W.L., Kalman Filtering (class notes),
U.C.L.A., 1982.

[2] Sorenson, H.W., "Least-Squares Estimation: from Gauss to Kalman,"
IEEE Spectrum, Vol. 7, pp. 63-68, July 1970.

[3] Technical Staff of the Analytical Sciences Corporation, Applied
Optimal Estimation, A. Gelb, ed. Massachusetts: M.I.T. Press, 1974.

[4] Sorenson, H.W., "Kalman Filtering Techniques," in Advances in
Control Systems Theory and Applications, Vol. 3, C.T. Leondes, ed.
New York: Academic Press, 1966.

[5] Bozic, S.M., Digital and Kalman Filtering. New York: John Wiley
and Sons, 1979.

[6] Schweppe, F.C., Uncertain Dynamic Systems. Englewood Cliffs, New
Jersey: Prentice-Hall, 1973.

[7] Jazwinski, A.H., Stochastic Processes and Filtering Theory. New
York: Academic Press, 1973.

[8] Luenberger, D.G., Optimization by Vector Space Methods. New York:
John Wiley and Sons, 1969.

[9] Lobdill, J., "Kalman Mileage Predictor-Monitor," Byte, Vol. 6,
pp. 230-248, July 1981.

[10] Kalman, R.E., "A New Approach to Linear Filtering and Prediction
Problems," Journal of Basic Engineering, Vol. 82D, pp. 35-47, March 1960.

[11] du Plessis, R.M., Poor Man's Explanation of Kalman Filtering
(Technical Report), North American Aviation, Anaheim, California,
June 1967.

[12] Hayashi, H.S., The Discrete Kalman Filter With Application to
Target Motion Analysis (Technical Report), Naval Undersea Research
and Development Center, Pasadena, California, June 1970.

[13] Special Issue on Applications of Kalman Filtering, IEEE
Transactions on Automatic Control, H.W. Sorenson, ed., Vol. AC-28,
March 1983.