Copyright © IFAC Computer Aided Design in Control Systems, Beijing, PRC, 1988
A MODULAR IMPLEMENTATION OF THE EXTENDED KALMAN FILTER

I. Solberg
Division of Engineering Cybernetics, Norwegian Institute of Technology, N-7034 Trondheim, Norway
Abstract: A number of dynamic simulation packages offer the possibility to build up large systems by connecting smaller modules together. Each module then contains the algebraic and difference or differential equations for a part of the system. For the extended Kalman filter the partial derivatives of the process model equations are needed for the covariance calculations. The state and covariance predictions have normally been done by separate program modules. A method is described by which the a priori state covariance matrix will be updated by the same modules that compute the a priori state estimate. A similar method is used for the calculation of the predicted measurements from the a priori state estimate.

Keywords: Kalman filters, Data processing, Square root filtering, State-space methods, Stochastic systems, Nonlinear filtering, Computer programming.

INTRODUCTION
It is necessary to specify matrix calculations along with the process and measurement predictions when implementing a time-varying or extended Kalman filter. The normal way of doing this has been to write separate statements in a computer program to make the predictions and to specify the matrix elements involved. Thus, if a change is made to the model, this change has to be made correctly in more than one place in the program. Errors in this process lead to inconsistency, which in turn leads to incorrect estimates. It may be very hard to find such inconsistencies.

These problems are particularly extensive in the design phase of a Kalman filter, as frequent model changes normally occur. In a CACSD (Computer Aided Control System Design) package a possibility for efficient (with respect to development time) implementation of the extended Kalman filter ought to exist. Though such programs exist (Onshus, 1978), it is still necessary to specify both the matrix elements and the process prediction.

Such problems, and the need to specify not only the model but the matrices as well, may have been one reason for the increased popularity of simple linear estimation techniques based on polynomial models like ARMAX.

Modern programming languages like Pascal and Ada offer many possibilities which are not available in Fortran. Up to now Fortran has been the most widely used language for scientific problems like simulation and filtering. However, it lacks structured variables (except arrays) and recursive procedure calls, and since it has no strict type checking of parameters in procedure calls, programs are prone to error. As structured variables are almost a prerequisite for the method described in this paper, I have used Pascal in the current implementation. I have considered implementing it in Ada later, as this will give the implementation of a process model a more readable form. The use of IMPACT (Rimvall, 1986) or another well-structured CACSD package/language will also be considered.

THE EXTENDED KALMAN FILTER

A nonlinear stochastic discrete-time model is described by the equation

    x(k+1) = f(x(k), v(k))

where x is the state vector and v is a vector of process noise.

If x̂(k) is the a posteriori estimate at time step k with covariance X(k), and v̄(k) is the mean of v(k) with covariance V(k), then the extended Kalman filter gives

    x̄(k+1) = f(x̂(k), v̄(k))

    X̄(k+1) = (∂f/∂x) X(k) (∂f/∂x)^T + (∂f/∂v) V(k) (∂f/∂v)^T

where x̄(k+1) is the next a priori estimate with covariance X̄(k+1).

Square root algorithms are often used for Kalman filters to achieve better numerical properties. The U-D factorization (Bierman, 1977) factorizes the covariance matrices into a diagonal matrix and an upper triangular matrix with a unit diagonal:

    X(k) = U(k) D(k) U(k)^T
    V(k) = U_v(k) D_v(k) U_v(k)^T

In this case the equations giving the predicted states and covariances are

    x̄(k+1) = f(x̂(k), v̄(k))
    Ū(k+1) = (∂f/∂x) U(k)
    Ū_v(k+1) = (∂f/∂v) U_v(k)

Then there is a special algorithm that calculates U(k+1) and D(k+1) from Ū(k+1), Ū_v(k+1), D(k) and D_v(k). The vector x̄(k+1) and the partial derivatives (or rather matrices multiplied by the partial derivatives) are required for both formulations.
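Written out in the conventional way, the prediction step therefore needs the model and its Jacobians coded side by side. A minimal sketch of that conventional arrangement (in Python/NumPy rather than the Pascal used later in the paper; the two-state model is an illustrative assumption, not from the paper):

    # Sketch: conventional EKF prediction with hand-coded Jacobians.
    import numpy as np

    def f(x, v):
        # example model: x1' = x1, x2' = x1*x2 + v
        return np.array([x[0], x[0]*x[1] + v[0]])

    def dfdx(x, v):
        # hand-coded Jacobian of f with respect to x -- must be kept consistent with f
        return np.array([[1.0, 0.0],
                         [x[1], x[0]]])

    def dfdv(x, v):
        # hand-coded Jacobian of f with respect to v
        return np.array([[0.0],
                         [1.0]])

    def predict(x_hat, X, v_bar, V):
        # x_bar(k+1) = f(x_hat(k), v_bar(k))
        # X_bar(k+1) = (df/dx) X (df/dx)^T + (df/dv) V (df/dv)^T
        F = dfdx(x_hat, v_bar)
        G = dfdv(x_hat, v_bar)
        return f(x_hat, v_bar), F @ X @ F.T + G @ V @ G.T

    x_bar, X_bar = predict(np.array([1.2, 2.0]), np.eye(2),
                           np.array([0.0]), np.array([[0.1]]))

Any later change to f must be mirrored by hand in dfdx and dfdv; this is exactly the consistency problem described in the introduction.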
This motivates the search for a method that can calculate both the function value and the matrix product (∂f/∂x) X by using the same specification, and without requiring that the Jacobian matrix ∂f/∂x should be specified.

COMMON METHODS FOR THE CALCULATION OF JACOBIAN MATRICES

The most common method for the numerical calculation of Jacobian matrices is to vary the argument of a function and compare this with the change in the output:

    ∂f(x)/∂x ≈ (f(x + Δx) - f(x)) / Δx

This is easily implemented on a computer: f can be implemented as a function with x as the argument. If x is a vector of dimension n, this will generally require n+1 evaluations of f. The accuracy of the result depends on the value of Δx.

Another method is to differentiate the function symbolically and put the resulting expression into the program. This gives more accurate results than the method above. It will probably be used more in the future as new tools come on the market; these will make it possible to have the associated procedure generated automatically from the function. Programs such as MACSYMA from Symbolics are able to support this to some extent.
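A minimal sketch of the finite-difference approach (Python/NumPy; the step size and the example function are illustrative assumptions):

    # Sketch: numerical Jacobian by forward differences, n+1 evaluations of f.
    import numpy as np

    def numerical_jacobian(f, x, delta=1e-6):
        fx = f(x)                        # 1 evaluation
        J = np.empty((fx.size, x.size))
        for i in range(x.size):          # n further evaluations
            xp = x.copy()
            xp[i] += delta
            J[:, i] = (f(xp) - fx) / delta
        return J

    # example: f(x) = [sin(x1*x2), x1]
    f = lambda x: np.array([np.sin(x[0]*x[1]), x[0]])
    print(numerical_jacobian(f, np.array([1.2, 2.0])))

The choice of the step delta trades truncation error against rounding error, which is the accuracy problem mentioned above.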
USE OF A COMPOSITE DATA STRUCTURE

A function is normally composed from a few basic operators and functions. The idea is to have a data structure that contains both the value and the derivatives, and to define the basic functions and operators to operate on this structure correctly. In the following, pseudo-Pascal will be used for the descriptions. This is not true Pascal since:

  - vector operations are assumed to exist,
  - operators may be defined for new data types (as in Ada),
  - different functions may have the same name as long as their arguments or results are of different type (as with overloading in Ada),
  - the syntactic rules are violated in some places to create a more readable form.

The data structure struct is defined:

    type
      vector = array[1..n] of real;
      struct = record
                 value : real;
                 diff  : vector;
               end;

For a variable x of this type the .value component holds the value, and the .diff component is related to the derivatives. A tilde (x̃) is used to denote a variable of the type struct. A variable of this type will frequently be written together with its components this way:

    x̃ = { x.value [ x.diff ] }

or with numerical values (n = 2):

    { 1.3 [ 2.5, 3.6 ] }

The general formula for a (basic) function of the type struct with arguments of the same type is:

    function f_name ( a, b, c ... : struct ) : struct;
    begin
      f_name.value := f_name( a.value, b.value, c.value ... );
      f_name.diff  := f_nameda * a.diff + f_namedb * b.diff + f_namedc * c.diff + ... ;
    end;

Here f_nameda = ∂f/∂a, f_namedb = ∂f/∂b, ..., and the implementor must supply these. (Notice that there are two functions with the name f_name, one of the type real and one of the type struct.) Another way to formulate this (with the previously defined notation) is: if

    ã = { a [ a.diff ] },   b̃ = { b [ b.diff ] },   ...

then

    f(ã, b̃, c̃, ...) = { f(a, b, c, ...) [ (∂f/∂a) a.diff + (∂f/∂b) b.diff + ... ] }
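The same rule can be written compactly in a modern language. The following sketch (Python rather than the paper's pseudo-Pascal; the class name Struct and its methods are illustrative choices) stores a value together with a row of derivatives and lets a few basic operators propagate both; the proof that the .diff component then carries (∂f/∂x) X follows below:

    # Sketch: a composite value/derivative type with a few overloaded basic operators.
    import math
    import numpy as np

    class Struct:
        """Value together with a row of derivatives (the .diff component)."""
        def __init__(self, value, diff):
            self.value = float(value)
            self.diff = np.asarray(diff, dtype=float)

        @staticmethod
        def constant(c, n):
            # a constant has zero derivative with respect to everything
            return Struct(c, np.zeros(n))

        def __add__(self, other):
            # d(a+b) = da + db
            return Struct(self.value + other.value, self.diff + other.diff)

        def __mul__(self, other):
            # product rule: d(ab) = a db + b da
            return Struct(self.value * other.value,
                          self.value * other.diff + other.value * self.diff)

        def sin(self):
            # chain rule for an elementary function
            return Struct(math.sin(self.value), math.cos(self.value) * self.diff)

        def __repr__(self):
            return "{ %.3f [ %s ] }" % (self.value,
                                        ", ".join("%.3f" % d for d in self.diff))

Each method is one instance of the general formula above: the new .value is the ordinary function value, and the new .diff is the sum of the partial derivatives times the argument .diff vectors.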
Proof: We have a function f(x) and a matrix X, where x is an m-dimensional vector and X has dimension m × n. The matrix X may be split into row vectors

    X = ( X_1  X_2  ...  X_m )^T

and the vector x into its components

    x = ( x_1, x_2, ..., x_m )^T

A vector x̃ of type struct is defined with m components:

    x̃_i = { x_i [ X_i ] },   i = 1, ..., m
A function f(x̃) giving a composite result is defined as in the previous section:

    f.value = f(x)
    f.diff  = Σ_{i=1..m} (∂f/∂x_i) x̃_i.diff

Inserting for x̃ and introducing matrix notation we get

    f(x̃) = { f(x) [ (∂f/∂x) X ] }

If X is initialized to I, then the Jacobian matrix ∂f/∂x (here a row vector) will come out as the .diff component. Thus the proof for a basic function is complete.

The extension to vector functions f is very simple. Using the same x, X and x̃ as above and defining

    f(x) = ( f_1(x), f_2(x), ..., f_n(x) )^T

where each f_i is defined as above for f, one will get

    f(x̃) = ( f_1(x̃).value, f_2(x̃).value, ... )^T

and

    (∂f/∂x) X = ( f_1(x̃).diff^T, f_2(x̃).diff^T, ... )^T

that is, row i of the matrix product is the .diff component of f_i(x̃).

The method also takes care of the chain rule, so that the result of an inner function may be used as input for an outer function. Given a function f(g(x)), a vector x and a matrix X, and initialising as before with g of dimension l, g(x̃) is calculated first, giving the result ( g_1(x̃), g_2(x̃), ..., g_l(x̃) )^T with components

    g_i(x̃) = { g_i(x) [ (∂g_i/∂x) X ] }

This result is used as the argument when calling f. By combining the .value and .diff parts of the resulting components we get the result f(g(x)) and the matrix product

    (∂f/∂g) (∂g/∂x) X

in which the first product can be recognized as the chain rule.

EXAMPLES

The implementation of some operators and functions will now be described; these are followed by a numerical example. Addition is implemented this way:

    function '+' (a, b : struct) : struct;
    begin
      '+'.value := a.value + b.value;
      '+'.diff  := a.diff + b.diff;
    end;

multiplication is implemented

    function '*' (a, b : struct) : struct;
    begin
      '*'.value := a.value * b.value;
      '*'.diff  := a.value*b.diff + b.value*a.diff;
    end;

and the sine function will be

    function sin (a : struct) : struct;
    begin
      sin.value := sin(a.value);
      sin.diff  := cos(a.value)*a.diff;
    end;

and similarly for the other operators and functions.

A more complex function, sin(a*b), will be very simple to implement:

    function sinab (a, b : struct) : struct;
    begin
      sinab := sin(a*b);
    end;

The same applies to all functions which are composed from already implemented functions. The function sinab above could also be defined this way:

    function sinab (a, b : struct) : struct;
    var
      product : real;
      dsin    : real;
    begin
      product     := a.value * b.value;
      sinab.value := sin(product);
      dsin        := cos(product);
      sinab.diff  := dsin*b.value*a.diff + dsin*a.value*b.diff;
    end;

This implementation requires one scalar multiplication of a vector less than the previous one.
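In the Python sketch given earlier the same two variants might look as follows (sinab_composed reuses the overloaded operators, sinab_direct mirrors the hand-optimized version; both rely on the Struct class defined above):

    # Sketch: composed versus hand-optimized variant, using the Struct class above.
    import math

    def sinab_composed(a, b):
        return (a * b).sin()          # builds the intermediate a*b as a Struct

    def sinab_direct(a, b):
        product = a.value * b.value
        dsin = math.cos(product)
        return Struct(math.sin(product),
                      dsin * b.value * a.diff + dsin * a.value * b.diff)

Both return the same .value and .diff; the direct version merely avoids forming the intermediate object and saves one scalar multiplication of a vector, as noted above.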
A numerical example: calculate the value of

    f(x) = sin(x_1 · x_2)

and the Jacobian

    [ ∂f/∂x_1   ∂f/∂x_2 ]

First the variables x̃_1 and x̃_2 are initialized:

    x̃_1 := { 1.2 [ 1.0, 0.0 ] }
    x̃_2 := { 2.0 [ 0.0, 1.0 ] }

Then we make the function call

    f̃ := sinab(x̃_1, x̃_2)
The language then requires the expression x̃_1 * x̃_2 to be evaluated first, by a call to the function '*'. This will return the value

    { 2.4 [ 2.0, 1.2 ] }

This is passed to the sin function, which will return

    { 0.68 [ -1.47, -0.88 ] }

which in turn is assigned to f̃. The total result is

    f(x̃_1, x̃_2) = { sin(x_1·x_2) [ ∂f/∂x_1   ∂f/∂x_2 ] } = { 0.68 [ -1.47, -0.88 ] }
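Run through the Python sketch from the previous section, the same example gives the same numbers (up to the rounding used in the text):

    # Sketch: reproducing the numerical example with the Struct class above.
    x1 = Struct(1.2, [1.0, 0.0])
    x2 = Struct(2.0, [0.0, 1.0])
    f = (x1 * x2).sin()
    print(f)    # -> { 0.675 [ -1.475, -0.885 ] }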
This way of finding derivatives may be extended to higher orders by making minor modifications to the functions and the data structure. This will not be described here.

APPLICATION WITH U-D-FACTORIZATION

The data and program structure described in the previous section can be directly applied to the extended Kalman filter using U-D factorization.
The prediction equations

    x̄(k+1) = f(x̂(k), v̄(k))
    Ū(k+1) = (∂f/∂x) U(k)
    Ū_v(k+1) = (∂f/∂v) U_v(k)

can be rewritten using variables of the type struct. Define the vector ẑ = ( x̂(k)^T, v̄(k)^T )^T and the matrix

    Z = [ U(k)     0
          0        U_v(k) ]

A vector z̃ with composite components

    z̃_i = { ẑ_i [ Z_i ] }

is defined together with the function f(z̃) with components

    f_i(z̃) = { f_i(ẑ) [ (∂f_i/∂z) Z ] }

Inserting for ẑ and Z we get

    f_i(z̃) = { x̄_i(k+1) [ (∂f_i/∂x) U(k)   (∂f_i/∂v) U_v(k) ] }

Thus, by the calculation of f(z̃), all the three wanted elements x̄(k+1), Ū(k+1) and Ū_v(k+1) are available.
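Concretely, in the Python sketch this amounts to seeding the .diff components with the rows of the block matrix [ U 0 ; 0 U_v ] and making one ordinary model call (the model and the numbers below are the illustrative ones used earlier, and Struct is the class from the earlier sketch):

    # Sketch: U-D prediction via seeded .diff rows; one model call returns
    # the predicted state and the blocks (df/dx)U and (df/dv)Uv.
    import numpy as np

    U  = np.array([[1.0, 0.3],
                   [0.0, 1.0]])              # unit upper triangular factor of X(k)
    Uv = np.array([[1.0]])                   # factor of V(k)
    Z  = np.block([[U, np.zeros((2, 1))],
                   [np.zeros((1, 2)), Uv]])

    z_hat = np.array([1.2, 2.0, 0.0])        # (x_hat(k), v_bar(k))
    z = [Struct(z_hat[i], Z[i, :]) for i in range(3)]

    # model from the earlier sketch: x1' = x1, x2' = x1*x2 + v
    f1 = z[0]
    f2 = z[0] * z[1] + z[2]

    x_bar = np.array([f1.value, f2.value])   # predicted state x_bar(k+1)
    W = np.vstack([f1.diff, f2.diff])        # [ (df/dx)U   (df/dv)Uv ]
    U_bar, Uv_bar = W[:, :2], W[:, 2:]

The block matrix W = [ Ū  Ū_v ], together with the diagonals D and D_v, is exactly what the U-D time-update algorithm of Bierman (1977) consumes.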
In addition to the process model there are measurements:

    y(k) = g(x(k), w(k))

where w is a vector of noise. The predicted measurement is

    ȳ(k) = g(x̄(k), w̄(k))

with assumed covariance

    Y(k) = (∂g/∂x) X̄(k) (∂g/∂x)^T + (∂g/∂w) W(k) (∂g/∂w)^T

The similarity with the expression for X̄(k+1) is clear. If the last product of this sum is a diagonal matrix, then it is possible to make the update from one measurement after the other. (If it is not diagonal, one should try a transformation making it diagonal.) This makes it possible to use a very robust update algorithm together with the U-D factorization. A factorization of W can be made:

    W(k) = U_w(k) D_w(k) U_w(k)^T

A new index j is introduced that is increased after the processing of each measurement. x̂_j is the estimate of x after the update with measurement j, and U_j and D_j describe the corresponding covariance; x̃_j is the corresponding vector of variables of the type struct. (The time argument k is dropped.)

The measurement model for each measurement can be implemented using the same data structure and procedures as the process model. Calculating g_j with the .diff components seeded from U_{j-1} and U_w will give the result

    g_j(x̃_{j-1}, w̃) = { ȳ_j [ (∂g_j/∂x) U_{j-1}   (∂g_j/∂w) U_w ] }

which together with U_{j-1} and the diagonal matrices D_{j-1} and D_w contains all the information that the algorithm described in Bierman (1977) needs to calculate U_j, D_j and the gain K. The new estimate is calculated from

    x̂_j = x̂_{j-1} + K (y_j - ȳ_j)

For the U-D factorization we thus have the scalars y and ȳ (processing one measurement after the other), the vectors x̂, x̄, v̄, w̄ and K, the matrices U, D, U_v, D_v, U_w and D_w, and the intermediate matrices D̄, D̄_v, Ū = (∂g/∂x) U_{j-1} and Ū_w = (∂g/∂w) U_w. These are (except for the diagonal matrices and y) included as components of vectors of the type struct. These vectors are defined with components as follows (U_i denoting row i of U, and similarly for U_v and U_w):

    x̃_i = { x̂_i [ U_i       0 ] }
    ṽ_i = { v̄_i [ 0     U_{v,i} ] }
    w̃_i = { w̄_i [ 0     U_{w,i} ] }

and the result of the measurement function is

    ỹ = { ȳ [ (∂g/∂x) U   (∂g/∂w) U_w ] }
The main loop of the Kalman filter program using U-D factorization will be:

    while true do
    begin
      readmeasure;
      for i := 1 to ny do
        if measured[i] then
        begin
          measure(i);
          udmeasure;
        end;
      model;
      udpredict;
    end;

The user has to implement the procedures readmeasure, measure and model. The procedure readmeasure should supply the program with the actual measurements from the process. Model must implement f(x̂, v̄) and put the result into x̄. Measure must implement the function g_i(x̂, w̄) for each measurement, and return one measurement in ỹ for each call.

Udmeasure uses Ū = (∂g_i/∂x) U_{i-1}, Ū_w = (∂g_i/∂w) U_w, D_{i-1} and D_w to find U_i, D_i and the gain K by the algorithm described by Bierman (1977). It then updates the estimate by

    x̂_i := x̂_{i-1} + K (y_i - ȳ_i)

Udpredict uses Ū, Ū_v, D and D_v to find U_0 and D_0 for the next time step, by the algorithm in Bierman (1977).
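The two U-D routines themselves are not spelled out in the paper beyond the reference. The following is a sketch of the commonly used scalar-measurement U-D update and the modified weighted Gram-Schmidt time update of the kind described in Bierman (1977), written in Python/NumPy with illustrative names; the authoritative formulations should be taken from the reference:

    # Sketch: what udmeasure and udpredict might do (standard U-D formulations).
    import numpy as np

    def ud_measurement_update(U, D, f, r):
        # U, D: factors of the covariance before this measurement
        #       (U unit upper triangular, D the diagonal as a vector).
        # f:    the row (dg/dx)U taken from the .diff of the measurement struct,
        #       which equals U^T (dg/dx)^T.
        # r:    effective variance of this scalar measurement,
        #       here (dg/dw) Uw Dw Uw^T (dg/dw)^T.
        n = len(D)
        U = U.copy(); D = D.copy()
        v = D * f                          # v_j = D_j f_j
        b = np.zeros(n)
        alpha = r + f[0] * v[0]
        D[0] *= r / alpha
        b[0] = v[0]
        for j in range(1, n):
            beta = alpha
            alpha += f[j] * v[j]
            lam = -f[j] / beta
            D[j] *= beta / alpha
            for i in range(j):
                temp = U[i, j]
                U[i, j] = temp + b[i] * lam
                b[i] += temp * v[j]
            b[j] = v[j]
        return U, D, b / alpha             # updated factors and the gain K

    def ud_time_update(W, d):
        # W = [ (df/dx)U  (df/dv)Uv ] from the .diff components of the model result,
        # d = the diagonals D and Dv concatenated into one vector.
        # Modified weighted Gram-Schmidt: U0 D0 U0^T = W diag(d) W^T.
        n = W.shape[0]
        W = W.copy()
        U0 = np.eye(n)
        D0 = np.zeros(n)
        for j in range(n - 1, -1, -1):
            D0[j] = np.sum(d * W[j] * W[j])
            for i in range(j):
                U0[i, j] = np.sum(d * W[i] * W[j]) / D0[j]
                W[i] = W[i] - U0[i, j] * W[j]
        return U0, D0

Note that the measurement update can work directly from the .diff component of the measurement struct, so the Jacobian of g never has to be formed explicitly.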
The data flow can be illustrated by a diagram:

    [Data flow diagram of the filter loop - not reproduced here.]

A PROCESS EXAMPLE

The example is two linear first-order processes with estimation of the states (b and d) and the time constants (a and c). For use by the extended Kalman filter the time constants must also be specified as parts of the state vector x:

    a(k+1) = a(k)
    b(k+1) = a(k)·b(k) + v(k)
    c(k+1) = c(k)
    d(k+1) = c(k)·d(k) + ( b(k) + b(k+1) ) / 2

The measurements are

    yb(k) = b(k) + wb(k)
    yd(k) = d(k) + wd(k)

The elements can be put into vectors:

    x = [ a, b, c, d ]^T
    y = [ yb, yd ]^T
    w = [ wb, wd ]^T
    v = [ v ]

Before the implementation is made, a new data type is defined:

    type
      tstate = record
                 estimate : struct;
                 predict  : struct;
               end;

Then the necessary variables are declared:

    var
      a, b, c, d : tstate;
      v          : struct;
      wb, wd     : struct;
      y          : struct;

The procedures model and measure can then be implemented:

    procedure model;

      procedure order1 (var state : tstate; timeconst, input : struct);
      begin
        state.predict := timeconst*state.estimate + input;
      end;

      procedure parameter (var state : tstate);
      begin
        state.predict := state.estimate;
      end;

    begin
      parameter (a);
      order1 ( b, a.estimate, v );
      parameter (c);
      order1 ( d, c.estimate, (b.estimate + b.predict)/2 );
    end;

    procedure measure ( i : integer );
    begin
      case i of
        1: y := b.estimate + wb;
        2: y := d.estimate + wd;
      end;
    end;

Modularity has been used for the process model (the local procedures order1 and parameter in model).

The user does not need to specify the partial derivatives, and should be glad, as even for this quite simple example they become quite complex:

    ∂f/∂x = [ 1        0        0   0
              b        a        0   0
              0        0        1   0
              b/2    (a+1)/2    d   c ]

The code presented here is not true Pascal. When actually implementing this in Pascal, procedure calls must be substituted for the operators, and some more variables must be declared to hold intermediate results. Further information can be found in Solberg (1988).
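Under the same assumptions as the earlier Python sketches (the Struct class and illustrative numbers), the model for this example can be written directly from the difference equations, and seeding the .diff rows with the identity reproduces the hand-derived Jacobian:

    # Sketch: the two-process example with the Struct class from the earlier sketch.
    import numpy as np

    def model(a, b, c, d, v):
        # a(k+1) = a(k);  b(k+1) = a(k)*b(k) + v(k)
        # c(k+1) = c(k);  d(k+1) = c(k)*d(k) + (b(k) + b(k+1))/2
        a1 = a
        b1 = a * b + v
        c1 = c
        d1 = c * d + (b + b1) * Struct.constant(0.5, len(b.diff))
        return a1, b1, c1, d1

    # seed the .diff rows with the identity (zero for the noise) to get df/dx directly
    a, b, c, d = (Struct(val, row)
                  for val, row in zip([0.9, 1.0, 0.8, 2.0], np.eye(4)))
    v = Struct(0.0, np.zeros(4))

    jac = np.vstack([s.diff for s in model(a, b, c, d, v)])
    print(jac)   # rows: [1 0 0 0], [b a 0 0], [0 0 1 0], [b/2 (a+1)/2 d c]

In the filter itself the same model call would be made with the .diff rows seeded from [ U 0 ; 0 U_v ] instead of the identity, as in the prediction sketch above.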
COMPUTATIONAL REQUIREMENTS

The covariance update represents most of the computational burden when using an extended Kalman filter. The method utilizes the frequent sparsity of the Jacobian matrices and can therefore be faster than an implementation using straightforward matrix multiplication. In some cases the method will need somewhat more vector operations than an optimal solution.

As modularity is implemented at the lowest level (basic operators and functions), one is free to make higher-level modules consisting of the basic modules. These modules can be executed on different processors, and thus parallel computing is possible.

CONCLUSION

The presented method solves two problems involved in the implementation of an extended Kalman filter: the specification of partial derivatives, and the problem of achieving consistency between the process model and the partial derivatives (Jacobian matrices). It does not change the behaviour of the filter (convergence, numerical properties, etc.).

The top-level procedure, the procedures for factorization and update, and the low-level procedures should be made available as a software package. This package should also contain procedures for proper initialization of the data structures.

The method makes it easy to make a modular implementation of a process model. The model implementation will be similar to one made for the simulation of a process with a discrete representation.
REFERENCES

Bierman, Gerald J.: Factorization Methods for Discrete Sequential Estimation. Academic Press, New York, 1977. ISBN 0-12-097350-2.

Onshus, Tor E.: EXKALM. Interaktivt program for Kalmanfiltrering. Division of Automatic Control, SINTEF, 1978. (STF48 A78005)

Rimvall, Carl Magnus: Man-Machine Interfaces and Implementational Issues in Computer-Aided Control System Design. Dissertation at Swiss Federal Institute of Technology (ETH), Zürich, 1986.

Solberg, Ingar: A Modular Implementation of the Extended Kalman Filter with Application to a Crushing and Screening Circuit. Dissertation at Norwegian Institute of Technology, Trondheim, 1988.