Comput. & Elect. Engng, Vol. 4, pp. 59-69. Pergamon Press, 1977. Printed in Great Britain
SPECIFIC OPTIMAL DISCRETE FILTERING MECHANIZATIONS

VIJAYENDRA M. GUPTA
Computer Systems Section, Ford Aerospace and Communications Corporation, Houston, TX 77058, U.S.A.

and

JAMES R. ROWLAND
School of Electrical Engineering and Center for Systems Science, Oklahoma State University, Stillwater, OK 74074, U.S.A.

(Received 13 October 1976)
Abstract--An optimization procedure is developed for obtaining the best discrete representation of a specified form for modeling a continuous, steady-state Kalman filter. This technique is particularly useful for digital processors in real-time filtering applications. The generalized optimization format is developed, an optimal digital mechanization is obtained, and numerical comparisons are made with the Euler method and the second-order Runge-Kutta numerical integration formula. Significant improvements are shown for several fixed values of the discretization interval.

INTRODUCTION
Achieving an acceptable level of filtering accuracy at a suitable computational speed has placed new constraints on the traditional problem of estimating the system state from noise-corrupted measurements. Since such filtering often is to be performed digitally, even for continuous system equations, the digital mechanization of continuous filtering algorithms has become quite important. The optimization format of this paper is developed to determine these optimal discrete representations for the continuous linear filter derived by Kalman [1].

Previous work has emphasized highly accurate Bayesian nonlinear estimators for linear systems having non-Gaussian inputs and for nonlinear systems [2-4]. Schwartz and Stear [5] made numerical comparisons on the basis of estimation accuracy alone between several approximate nonlinear filters, and Wishner et al. [6] continued these comparisons. Kaminski et al. [7] applied discrete square-root filtering concepts to the estimation problem, and Bierman [8] compared several versions of these discrete filtering algorithms. With regard to computational speed for filtering applications, Mendel [9] determined the computational requirements for a discrete Kalman filter. Gaston and Rowland [10] obtained real-time digital integration results for mechanizing continuous Kalman filters for nonlinear systems. The problem of determining efficient numerical integration formulas for discretizing approximations has been investigated by Sage and Smith [11], Benyon [12], and Rowland and Holmes [13]. Basic results on the digital simulation of stochastic systems have been derived by Rowland and Gupta [14] and generalized to the time-varying case by Rowland [15]. Preliminary results on optimal constrained discrete representations for continuous filters have also been obtained by Rowland and Gupta [16]. Sims and Melsa [17, 18] utilized specific optimal estimation concepts to obtain continuous fixed-configuration filters.

This paper describes a direct procedure for determining a specific optimal discrete representation for a continuous Kalman filter in the steady state. It is shown first that the parameters of the fixed-configuration discrete filter cannot be adjusted to eliminate completely the modeling error. The utilization of a generalized, direct performance measure yields the optimum parameter settings. Comparisons are made on the basis of minimizing estimation error as well as the modeling of the corresponding continuous Kalman filter. The result is an efficient discrete filter capable of superior performance for on-line filtering applications.

PROBLEM FORMULATION
Consider the continuous, linear, dynamic system described by

$$\dot{x}(t) = A x(t) + B w(t) \tag{1}$$

$$z(t) = C x(t) + v(t) \tag{2}$$
where x(t) is the n-vector representing the system state, A is an n by n system matrix and B is an n by m weighting matrix for the zero-mean white noise input w(t). The measurement vector z(t) is an r-vector. The system noise m-vector w(t) and measurement noise r-vector v(t) have constant covariance matrices Q_w and Q_v, respectively, which are defined by

$$E\{w(t)\,w^T(\tau)\} = Q_w\,\delta(t-\tau) \tag{3}$$

$$E\{v(t)\,v^T(\tau)\} = Q_v\,\delta(t-\tau). \tag{4}$$
It is assumed that x(t_0), w(t), and v(t) are all uncorrelated and that Q_w and Q_v are symmetric, positive-definite matrices. The continuous Kalman filtering equations are given by

$$\dot{\hat{x}}(t) = A\hat{x}(t) + K(t)\,[z(t) - C\hat{x}(t)] \tag{5}$$

$$K(t) = P_e(t)\,C^T Q_v^{-1} \tag{6}$$

where the error covariance equation is

$$\dot{P}_e(t) = A P_e(t) + P_e(t)A^T + B Q_w B^T - P_e(t)\,C^T Q_v^{-1} C\,P_e(t). \tag{7}$$
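For reference, the steady-state gain implied by (5)-(7) can be computed numerically by solving the algebraic Riccati equation obtained from (7); a minimal Python sketch using scipy's solver is given below (the function and variable names are illustrative, not from the paper).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def steady_state_kalman_gain(A, B, C, Qw, Qv):
    """Steady-state error covariance Pe and gain K for the filter (5)-(7).

    Setting the derivative in (7) to zero gives the algebraic Riccati equation
        A Pe + Pe A' + B Qw B' - Pe C' Qv^{-1} C Pe = 0,
    which is the dual of the control ARE, so the solver is called with
    (A', C', B Qw B', Qv).
    """
    Pe = solve_continuous_are(A.T, C.T, B @ Qw @ B.T, Qv)
    K = Pe @ C.T @ np.linalg.inv(Qv)   # gain of eqn (6)
    return Pe, K
```

For the first-order example treated later (A = -1, B = C = 1, Q_w = 3, Q_v = 1) this returns P_e = 1 and K = 1.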
The Kalman filter in (5)-(7) yields the least mean-square error estimate x̂ for the state x in (1) if both w(t) and v(t) are Gaussian. The problem investigated in this paper is to obtain an optimal discrete filtering model according to a given cost functional. Let a discrete representation of l components of x̂(t), where l ≤ n, be given by

$$\hat{x}_l(t_{k+1}) = \alpha\,\hat{x}_l(t_k) + \beta\,z_d(t_k). \tag{8}$$
The l-vector x̂_l(t_k) represents the discrete filtering model state, and α and β are l by l and l by r matrices composed of free optimization parameters. A cost functional J is to be selected such that the minimization of J results in a suitable discrete representation of the continuous Kalman filter by the discrete model in (8). It is required that if the discrete model order l is equal to the system order n and if a sufficiently small discretization interval T is selected, then the discrete modeling error in representing (5) by (8) should be arbitrarily small at t = t_k for all k. If l equals n but T is not sufficiently small, then the form of the discrete model should be the same as a direct discretization of (5) except that different values of α and β should be obtained.

Two approaches are commonly used to obtain an estimate of x(t) at discrete points in time. First, the continuous Kalman filtering eqn (5) may be integrated on the digital computer by using either single-step (Runge-Kutta) or multi-step (Adams-Bashforth or Adams-Moulton) numerical integration formulas. Second, the continuous state eqn (1) may be discretized directly, and an optimal discrete Kalman filtering algorithm may be applied to the resulting discrete equations. In this paper a generalized optimization format which includes the two above approaches as special cases when l equals n and T is sufficiently small is investigated. As one approach toward this more general discrete optimization, let a cost functional J be defined as

$$J = E\{[\hat{x}_d^l(t_k) - \hat{x}_l(t_k)][\hat{x}_d^l(t_k) - \hat{x}_l(t_k)]^T\} \tag{9}$$

where x̂_d^l contains those l-ordered discrete filter states corresponding to the l estimates in x̂. The notation x_d and x̂_d is used to denote the discretized forms of x and x̂ from (1) and (5), respectively. An alternate approach would be to select J as

$$J = E\{[x_d^l(t_k) - \hat{x}_l(t_k)][x_d^l(t_k) - \hat{x}_l(t_k)]^T\} \tag{10}$$

where x_d^l contains the l-ordered states from x_d corresponding to x̂_l.
The J in (9) is based on forming an optimal discrete model for the continuous Kalman filter. On the other hand, the J in (10) attempts to obtain an optimal discrete fixed-configuration filter for estimating the x(t) in (1). This paper utilizes the cost functional in (10) for determining the optimal values of α and β. Comparisons using 100 Monte Carlo simulation runs are made for both the J in (10) and the J in (9) using these optimal parameters.

STEADY-STATE OPTIMIZATIONS
Let an augmented vector of dimension 2n be defined by

$$x_c(t) = \begin{pmatrix} x(t) \\ \hat{x}(t) \end{pmatrix}. \tag{11}$$

Let the 2n by 2n covariance matrix P_c associated with x_c be denoted as

$$P_c = \begin{pmatrix} P_{c11} & P_{c12} \\ P_{c12}^T & P_{c22} \end{pmatrix} = \begin{pmatrix} E\{xx^T\} & E\{x\hat{x}^T\} \\ E\{\hat{x}x^T\} & E\{\hat{x}\hat{x}^T\} \end{pmatrix} \tag{12}$$

where the means of both x and x̂ are zero. Therefore, P_c satisfies

$$\dot{P}_c = A_c P_c + P_c A_c^T + B_c Q_c B_c^T \tag{13}$$

where A_c, B_c and Q_c are the augmented system matrix, input matrix and noise covariance matrix formed from A, B, C, K and the covariances Q_w and Q_v.
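A sketch of the steady-state solution of (13) is given below; the block forms assumed for A_c, B_c and Q_c (consistent with (1) and (5) for a constant gain K) are an assumption of this sketch rather than a statement of the paper's definitions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def augmented_steady_state_covariance(A, B, C, K, Qw, Qv):
    """Steady-state solution of (13) for x_c = [x; xhat].

    Assumed block structure (consistent with (1) and (5) for constant K):
        A_c = [[A, 0], [K C, A - K C]],
        B_c = [[B, 0], [0, K]],   Q_c = [[Qw, 0], [0, Qv]].
    """
    n = A.shape[0]
    m = B.shape[1]
    r = C.shape[0]
    A_c = np.block([[A, np.zeros((n, n))], [K @ C, A - K @ C]])
    B_c = np.block([[B, np.zeros((n, r))], [np.zeros((n, m)), K]])
    Q_c = np.block([[Qw, np.zeros((m, r))], [np.zeros((r, m)), Qv]])
    # Solve  A_c P_c + P_c A_c^T + B_c Q_c B_c^T = 0  for P_c.
    return solve_continuous_lyapunov(A_c, -B_c @ Q_c @ B_c.T)
```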
Let discrete representations of (1) and (5) be written as

$$\begin{pmatrix} x_d(t_{k+1}) \\ \hat{x}_d(t_{k+1}) \end{pmatrix} = \Phi \begin{pmatrix} x_d(t_k) \\ \hat{x}_d(t_k) \end{pmatrix} + H \begin{pmatrix} w_d(t_k) \\ v_d(t_k) \end{pmatrix}. \tag{14}$$

Moreover, the covariance equation for this discrete version of the system and filter equations is

$$P_d(t_{k+1}) = \Phi P_d(t_k) \Phi^T + H Q_d H^T \tag{15}$$

where Φ is defined as e^{A_c T}, T is the discretization interval and

$$Q_d = \begin{pmatrix} Q_{wd} & 0 \\ 0 & Q_{vd} \end{pmatrix}. \tag{16}$$
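A short sketch of the propagation in (15) is given below, with Φ formed as the matrix exponential of A_c T; the discrete input matrix H and noise covariance Q_d of (16) are supplied by the caller, since their selection follows [14].

```python
import numpy as np
from scipy.linalg import expm

def propagate_discrete_covariance(A_c, H, Q_d, T, P0, steps):
    """Iterate the discrete covariance equation (15),
        P_d(k+1) = Phi P_d(k) Phi^T + H Q_d H^T,   Phi = expm(A_c * T),
    starting from P0 and returning the covariance history."""
    Phi = expm(A_c * T)
    P = np.array(P0, dtype=float)
    history = [P.copy()]
    for _ in range(steps):
        P = Phi @ P @ Phi.T + H @ Q_d @ H.T
        history.append(P.copy())
    return history
```

Iterating until P_d(t_{k+1}) is essentially equal to P_d(t_k) gives the discrete steady state used in the sequel.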
Observe that (13) and (15) provide continuous and discrete covariance equations for the augmented system-filter state vector. The steady-state values are obtained for the continuous case by equating the derivative to zero; for the discrete case the covariance matrix P_d(t_{k+1}) is equated to P_d(t_k). The matrices Q_{wd} and Q_{vd} in (16) are selected, as described in [14], to form the best discrete representation of x_c in (11) by the augmented vector in (14).

Consider the cost functional J in (10), which may be expressed as

$$J = P_{11}(t_k) + P_{22}(t_k) - P_{12}(t_k) - P_{12}^T(t_k) \tag{17}$$
where the P's in (17) are yet to be determined. Let

$$P'_{11}(t_k) \triangleq E\{x_d(t_k)\,x_d^T(t_k)\}, \qquad P'_{12}(t_k) \triangleq E\{x_d(t_k)\,\hat{x}_l^T(t_k)\}. \tag{18}$$
The l by l matrix P_{11}(t_k), which is the covariance matrix of x_d^l(t_k), is obtained from the n by n matrix P'_{11}(t_k) by selecting only those elements of P'_{11}(t_k) corresponding to the l by l entries in P_{22}(t_k), which is the covariance matrix of x̂_l(t_k). Similarly, the l by l matrix P_{12}(t_k), defined as E{x_d^l(t_k) x̂_l^T(t_k)}, is obtained from P'_{12}(t_k) by retaining appropriate entries. Note that P'_{11}(t_k) was identified earlier in (15). Expressions for P_{12}(t_k) and P_{22}(t_k) are obtained by forming a matrix difference equation for the augmented vector x_a given by

$$x_a(t_k) = \begin{pmatrix} x_d(t_k) \\ \hat{x}_l(t_k) \end{pmatrix}. \tag{19}$$

The resulting matrix equation is

$$P_a(t_{k+1}) = \Phi_a P_a(t_k) \Phi_a^T + H_a Q_d H_a^T \tag{20}$$

where Φ_a and H_a are system matrices for x_a(t_k) defined in a manner similar to that shown in (14). Equating P_a(t_{k+1}) = P_a(t_k) yields steady-state expressions for the matrices in (17) as functions of α and β defined in (8). A standard minimization algorithm by Fletcher and Powell [19] may then be utilized to determine the optimal values of the matrices α and β. An analogous treatment can be employed to minimize the J in (9) if x_a(t_k) in (19) is replaced by x_b(t_k), where

$$x_b(t_k) = \begin{pmatrix} \hat{x}_d(t_k) \\ \hat{x}_l(t_k) \end{pmatrix}. \tag{21}$$
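A minimal sketch of this procedure, for the cost functional in (10), is given below. The augmented matrices Φ_a and H_a are formed under the assumption that the discretized plant is x_d(t_{k+1}) = Φ_x x_d(t_k) + H_x w_d(t_k) and that the filter (8) is driven by z_d(t_k) = C x_d(t_k) + v_d(t_k); scipy's BFGS routine stands in for the Fletcher and Powell algorithm, and the trace of the matrix in (17) is used as the scalar objective.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from scipy.optimize import minimize

def cost_J(params, Phi_x, H_x, C, Qwd, Qvd, sel):
    """Steady-state cost (10) for the fixed-form filter (8).

    params packs alpha (l x l) and beta (l x r); sel lists the l state
    indices being estimated.  Assumed one-step model for x_a = [x_d; xhat_l]:
        Phi_a = [[Phi_x, 0], [beta C, alpha]],   H_a = blockdiag(H_x, beta).
    """
    n, m = H_x.shape
    r = C.shape[0]
    l = len(sel)
    alpha = params[:l * l].reshape(l, l)
    beta = params[l * l:].reshape(l, r)
    Phi_a = np.block([[Phi_x, np.zeros((n, l))], [beta @ C, alpha]])
    H_a = np.block([[H_x, np.zeros((n, r))], [np.zeros((l, m)), beta]])
    Q_d = np.block([[Qwd, np.zeros((m, r))], [np.zeros((r, m)), Qvd]])
    # Steady state of (20):  P_a = Phi_a P_a Phi_a^T + H_a Q_d H_a^T.
    P_a = solve_discrete_lyapunov(Phi_a, H_a @ Q_d @ H_a.T)
    P11 = P_a[np.ix_(sel, sel)]
    P12 = P_a[np.ix_(sel, range(n, n + l))]
    P22 = P_a[n:, n:]
    return np.trace(P11 + P22 - P12 - P12.T)   # scalar form of (17)

# result = minimize(cost_J, params0, args=(Phi_x, H_x, C, Qwd, Qvd, sel),
#                   method="BFGS")   # BFGS standing in for Fletcher-Powell
```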
Corresponding covariance matrices may be defined, as in (17), (18) and (20), to express J as a function of α and β.

EXAMPLE
Consider the simple first-order system given by

$$\dot{x}(t) = -x(t) + w(t)$$
$$z(t) = x(t) + v(t) \tag{22}$$
with its state estimate x̂(t) given by

$$\dot{\hat{x}}(t) = -\hat{x}(t) + K(t)\,[x(t) + v(t) - \hat{x}(t)] \tag{23}$$

where

$$K(t) = \frac{P_e(t)}{Q_v}, \qquad \dot{P}_e = -2P_e + Q_w - P_e^2/Q_v. \tag{24}$$
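For the noise intensities used in the numerical work below (Q_w = 3, Q_v = 1), the steady state of (24) can be checked directly by solving the quadratic obtained when the derivative of P_e is set to zero; a few illustrative lines follow.

```python
import numpy as np

# Steady state of (24):  -2 Pe + Qw - Pe**2 / Qv = 0, i.e.
# Pe**2 + 2 Qv Pe - Qw Qv = 0, whose positive root is taken.
Qw, Qv = 3.0, 1.0
Pe = -Qv + np.sqrt(Qv**2 + Qw * Qv)
K = Pe / Qv
print(Pe, K)   # both equal 1.0 for Qw = 3, Qv = 1
```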
The discrete representations of (22) and (23) may be written as

$$x_d(t_{k+1}) = e^{-T} x_d(t_k) + (1 - e^{-T})\,w_d(t_k) \tag{25}$$

$$\begin{aligned}
\hat{x}_d(t_{k+1}) = {} & \bigl(e^{-T} - e^{-(1+K(t_k))T}\bigr)\,x_d(t_k) + e^{-(1+K(t_k))T}\,\hat{x}_d(t_k) \\
& + \Bigl[(1 - e^{-T}) - \tfrac{1}{1+K(t_k)}\bigl(1 - e^{-(1+K(t_k))T}\bigr)\Bigr] w_d(t_k) \\
& + \tfrac{K(t_k)}{1+K(t_k)}\bigl(1 - e^{-(1+K(t_k))T}\bigr)\,v_d(t_k). \tag{26}
\end{aligned}$$

Let a specific optimal discrete filter have the form

$$\hat{x}_l(t_{k+1}) = \alpha(t_k)\,\hat{x}_l(t_k) + \beta(t_k)\,x_d(t_k) + \beta(t_k)\,v_d(t_k). \tag{27}$$
Corresponding to (13), the covariance equations for (22) and (23) are given by

$$\dot{P}_{c11} = -2P_{c11} + Q_w$$
$$\dot{P}_{c12} = P_{c11} - 3P_{c12} \tag{28}$$
$$\dot{P}_{c22} = 2P_{c12} - 4P_{c22} + Q_v$$

where the steady-state gain K = 1, obtained from (24) for the values Q_w = 3 and Q_v = 1 used below, has been substituted. In steady state

$$\dot{P}_{c11} = \dot{P}_{c12} = \dot{P}_{c22} = 0,$$

which gives

$$P_{c11} = \frac{Q_w}{2}, \qquad P_{c12} = \frac{Q_w}{6}, \qquad P_{c22} = \frac{Q_w}{12} + \frac{Q_v}{4}. \tag{29}$$
The component covariance equations for x_a(t_k) in (19) may be determined as in (20) by using (25) and (27) to yield

$$P_{d11}(t_{k+1}) = E\{x_d^2(t_{k+1})\} = e^{-2T} P_{d11}(t_k) + (1 - e^{-T})^2 Q_{wd} \tag{30}$$

$$P_{d12}(t_{k+1}) = E\{x_d(t_{k+1})\,\hat{x}_l(t_{k+1})\} = \alpha e^{-T} P_{d12}(t_k) + \beta e^{-T} P_{d11}(t_k) \tag{31}$$

$$P_{d22}(t_{k+1}) = E\{\hat{x}_l^2(t_{k+1})\} = \alpha^2 P_{d22}(t_k) + \beta^2 P_{d11}(t_k) + 2\alpha\beta P_{d12}(t_k) + \beta^2 Q_{vd}. \tag{32}$$

Setting P_d(t_{k+1}) = P_d(t_k) for steady-state conditions gives

$$P_{d11}(t_k) = \frac{(1 - e^{-T})\,Q_{wd}}{1 + e^{-T}} \tag{33}$$

$$P_{d12}(t_k) = \frac{0.5\,\beta\,e^{-T} Q_w}{1 - \alpha e^{-T}} \tag{34}$$

$$P_{d22}(t_k) = \frac{\beta^2\bigl[(0.5Q_w + Q_{vd})(1 - \alpha e^{-T}) + \alpha e^{-T} Q_w\bigr]}{(1 - \alpha^2)(1 - \alpha e^{-T})} \tag{35}$$

where the exact state discretization established below in (36), which gives P_{d11}(t_k) = Q_w/2, has been used in (34) and (35).
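The steady-state values in (33)-(35) can also be obtained numerically by iterating (30)-(32) until the covariances settle; a minimal sketch, with the recursions written exactly as above, is given below.

```python
import numpy as np

def steady_state_covariances(alpha, beta, T, Qwd, Qvd, iters=5000):
    """Iterate the scalar recursions (30)-(32) to their fixed point and
    return (Pd11, Pd12, Pd22) in steady state."""
    eT = np.exp(-T)
    P11 = P12 = P22 = 0.0
    for _ in range(iters):
        P11, P12, P22 = (eT**2 * P11 + (1.0 - eT)**2 * Qwd,
                         alpha * eT * P12 + beta * eT * P11,
                         alpha**2 * P22 + beta**2 * P11
                         + 2.0 * alpha * beta * P12 + beta**2 * Qvd)
    return P11, P12, P22
```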
Numerical results in the following section show that it is not possible to obtain an exact discrete representation (27) of the steady-state Kalman filter in (23). However, the optimum J in (10) is obtained, and comparisons are made with Euler's method and the second-order Runge-Kutta integration formula (RK2).

NUMERICAL COMPARISONS
Initially, an attempt is made to match term-by-term the corresponding elements in (29) with those in (33), (34) and (35). Setting P_{c11} = P_{d11}(t_k) yields

$$Q_{wd} = \frac{(1 + e^{-T})}{2(1 - e^{-T})}\,Q_w \tag{36}$$

which gives the exact discrete realization of (22) by (25). Equating P_{c12} to P_{d12}(t_k) and P_{c22} to P_{d22}(t_k) gives

$$\frac{0.5\,\beta\,e^{-T} Q_w}{1 - \alpha e^{-T}} = \frac{Q_w}{6} \tag{37}$$

$$\frac{\beta^2\bigl[(0.5Q_w + Q_{vd})(1 - \alpha e^{-T}) + \alpha e^{-T} Q_w\bigr]}{(1 - \alpha^2)(1 - \alpha e^{-T})} = \frac{Q_w}{12} + \frac{Q_v}{4}. \tag{38}$$
A correct expression for Q_{vd} in (38) is needed. If Q_w = 3 and Q_v = 1, then K(t_k) = 1 in the steady state according to the results in (24). Under these conditions, the covariance equation for (25) and (26) may be written in the form specified in (15), where

$$\Phi = \begin{pmatrix} e^{-T} & 0 \\ \tfrac{1}{2}(1 - e^{-2T}) & e^{-2T} \end{pmatrix}, \qquad H = \begin{pmatrix} 1 - e^{-T} & 0 \\ 0 & \tfrac{1}{2}(1 - e^{-2T}) \end{pmatrix}. \tag{39}$$

In component form, this covariance equation is

$$P_{d11}(t_{k+1}) = \phi_{11}\bigl(\phi_{11}P_{d11}(t_k) + \phi_{12}P_{d12}(t_k)\bigr) + \phi_{12}\bigl(\phi_{11}P_{d12}(t_k) + \phi_{12}P_{d22}(t_k)\bigr) + (1 - e^{-T})^2 Q_{wd} \tag{40}$$

$$P_{d12}(t_{k+1}) = \phi_{11}\bigl(\phi_{21}P_{d11}(t_k) + \phi_{22}P_{d12}(t_k)\bigr) + \phi_{12}\bigl(\phi_{21}P_{d12}(t_k) + \phi_{22}P_{d22}(t_k)\bigr) \tag{41}$$

$$P_{d22}(t_{k+1}) = \phi_{21}\bigl(\phi_{21}P_{d11}(t_k) + \phi_{22}P_{d12}(t_k)\bigr) + \phi_{22}\bigl(\phi_{21}P_{d12}(t_k) + \phi_{22}P_{d22}(t_k)\bigr) + \tfrac{1}{4}(1 - e^{-2T})^2 Q_{vd}. \tag{42}$$

Solving (42) in steady state, with the aid of (40) and (41), yields

$$P_{d22}(t_k) = \frac{\phi_{21}^2 P_{d11}(t_k) + 2\phi_{21}\phi_{22}P_{d12}(t_k) + \tfrac{1}{4}(1 - e^{-2T})^2 Q_{vd}}{1 - \phi_{22}^2} = \frac{(3 + e^{-2T})Q_w + 6(1 - e^{-2T})Q_{vd}}{24(1 + e^{-2T})}. \tag{43}$$

Matching the right-hand sides of (29) and (43) gives

$$\frac{(3 + e^{-2T})Q_w + 6(1 - e^{-2T})Q_{vd}}{24(1 + e^{-2T})} = \frac{Q_w}{12} + \frac{Q_v}{4}. \tag{44}$$

Solving for Q_{vd} gives

$$Q_{vd} = \frac{6Q_v(1 + e^{-2T}) - Q_w(1 - e^{-2T})}{6(1 - e^{-2T})}. \tag{45}$$
The difference between modeling Q_{vd} as Q_v/T and by (45) is evident in Fig. 1. It can be observed that a small modeling error would be introduced if Q_v/T were used instead of the value of Q_{vd} given by (45).

[Fig. 1. Modeling Q_{vd} by Q_v/T and by (45), plotted against the step size T.]
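The comparison in Fig. 1 is easy to tabulate for the noise intensities of the example; the short sketch below prints Q_v/T alongside the matched value from (45).

```python
import numpy as np

# Fig. 1 comparison: naive Qv/T versus the matched Qvd of (45), for Qw = 3, Qv = 1.
Qw, Qv = 3.0, 1.0
for T in np.arange(0.05, 0.55, 0.05):
    e2 = np.exp(-2.0 * T)
    Qvd = (6.0 * Qv * (1.0 + e2) - Qw * (1.0 - e2)) / (6.0 * (1.0 - e2))
    print(f"T = {T:.2f}   Qv/T = {Qv / T:6.2f}   eqn (45): {Qvd:6.2f}")
```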
Returning now to the original problem with this optimal value of Q_{vd}, there are two unknowns, α and β, and two equations, (37) and (38). These two equations can be rewritten to yield

$$\beta = \frac{e^{T} - \alpha}{3} \tag{46}$$

$$\beta = \sqrt{\frac{\bigl(\tfrac{Q_w}{12} + \tfrac{Q_v}{4}\bigr)(1 - \alpha^2)(1 - \alpha e^{-T})}{(0.5Q_w + Q_{vd})(1 - \alpha e^{-T}) + \alpha e^{-T} Q_w}}. \tag{47}$$
These equations for β in terms of α are plotted in Fig. 2 for Q_w = 3, Q_v = 1 and T = 0.5. Since the two curves do not intersect, an exact discrete representation of the filtering algorithm is not possible. In other words, the cost functional J in (17) will not be exactly unity for this example, which is the value obtained for the continuous case from (29). To obtain the minimum value of the performance index, based on the optimal values of α and β for given T, Q_w and Q_v, Fletcher and Powell's unconstrained minimization technique is used. The cost functionals obtained from the optimal (α, β) values and from the Euler method are plotted in Fig. 3 for three values of T. The optimal discrete filter coefficients are α = 0.8160 and β = 0.0889 for T = 0.1, α = 0.5959 and β = 0.1830 for T = 0.25, and α = 0.3517 and β = 0.2546 for T = 0.5.

[Fig. 2. Plots of β vs α from (46) and (47) for T = 0.5.]

[Fig. 3. Comparisons of J in eqn (10) for Euler's method and the specific optimal discrete filter, plotted against the step size T.]
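The reported coefficients can be compared with a plain Euler mechanization of (23) by evaluating the steady-state cost directly from (34), (35) and the matched noise value (45). In the sketch below, the Euler coefficients α = 1 - (1+K)T and β = KT are taken as the one-step Euler integration of (23) with K = 1; this is an assumption about how the Euler curves of Fig. 3 were generated.

```python
import numpy as np

def J_scalar(alpha, beta, T, Qw=3.0, Qv=1.0):
    """J of eqn (10) in steady state for the scalar example, using
    Pd11 = Qw/2 together with (34), (35), the matched Qvd of (45),
    and J = Pd11 + Pd22 - 2*Pd12 as in (17)."""
    eT, e2T = np.exp(-T), np.exp(-2.0 * T)
    Qvd = (6.0 * Qv * (1.0 + e2T) - Qw * (1.0 - e2T)) / (6.0 * (1.0 - e2T))
    Pd11 = 0.5 * Qw
    Pd12 = 0.5 * beta * eT * Qw / (1.0 - alpha * eT)
    Pd22 = (beta**2 * ((0.5 * Qw + Qvd) * (1.0 - alpha * eT) + alpha * eT * Qw)
            / ((1.0 - alpha**2) * (1.0 - alpha * eT)))
    return Pd11 + Pd22 - 2.0 * Pd12

T, K = 0.25, 1.0
print(J_scalar(0.5959, 0.1830, T))               # reported optimal pair for T = 0.25
print(J_scalar(1.0 - (1.0 + K) * T, K * T, T))   # Euler coefficients, for comparison
```

Minimizing this function over (α, β) with a quasi-Newton routine, as sketched after (21), should recover values close to the coefficients listed above.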
These values of α and β were used in (27) for 100 Monte Carlo simulation runs, which were ensemble-averaged to obtain error covariance results. The results are compared with the same number of Monte Carlo simulation runs for the Euler method in Figs. 4-6 using step sizes T = 0.1, 0.25 and 0.5. These curves verify the optimal discrete representation of the filtering technique for the first-order linear system in the steady state for the cost functional J given by (10). In Figs. 7-9, the curves are plotted for the cost functional J given by (9) by using the optimal α and β parameters determined from (10). Comparisons with the second-order Runge-Kutta numerical integration formula (RK2) are also shown in Figs. 4-9. It can be concluded from the curves that RK2 yields better results than the first-order specific optimal discrete filter in (27).
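A Monte Carlo check of the kind described here can be sketched as follows: the state is propagated by (25), the filter by (27), and the discrete noise variances are taken from (36) and (45), which is an assumption about how the simulation runs were driven.

```python
import numpy as np

def ensemble_error_covariance(alpha, beta, T, runs=100, t_final=5.0,
                              Qw=3.0, Qv=1.0, seed=0):
    """Ensemble-averaged squared estimation error (x_d - xhat_l)^2 versus time
    for the scalar example, averaged over `runs` Monte Carlo simulations."""
    rng = np.random.default_rng(seed)
    steps = int(round(t_final / T))
    eT, e2T = np.exp(-T), np.exp(-2.0 * T)
    Qwd = (1.0 + eT) * Qw / (2.0 * (1.0 - eT))                               # eqn (36)
    Qvd = (6.0 * Qv * (1.0 + e2T) - Qw * (1.0 - e2T)) / (6.0 * (1.0 - e2T))  # eqn (45)
    err2 = np.zeros(steps + 1)
    for _ in range(runs):
        x = xhat = 0.0
        for k in range(steps):
            z = x + rng.normal(0.0, np.sqrt(Qvd))   # measurement at t_k
            w = rng.normal(0.0, np.sqrt(Qwd))
            x = eT * x + (1.0 - eT) * w             # state update, eqn (25)
            xhat = alpha * xhat + beta * z          # filter update, eqn (27)
            err2[k + 1] += (x - xhat) ** 2
    return err2 / runs

curve_opt = ensemble_error_covariance(0.5959, 0.1830, T=0.25)
curve_euler = ensemble_error_covariance(0.5, 0.25, T=0.25)   # Euler: 1-(1+K)T, KT
```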
[Fig. 4. Estimation error covariance for T = 0.1, comparing the Euler method, RK2 and the optimal (α, β) filter, plotted against time (sec).]

[Fig. 5. Estimation error covariance for T = 0.25, comparing the Euler method, RK2 and the optimal (α, β) filter, plotted against time (sec).]
[Fig. 6. Estimation error covariance for T = 0.5.]

[Fig. 7. Modeling error covariance for T = 0.1.]

[Fig. 8. Modeling error covariance for T = 0.25.]

[Fig. 9. Modeling error covariance for T = 0.5.]

DISCUSSION AND EXTENSIONS
Three types of errors which occur in forming specific optimal discrete filters are (1) filtering errors due to signal and measurement noise, (2) discretization errors in modeling the continuous filter, and (3) modeling errors due to the use of reduced-order filters.
The first of these errors is inherent in all filtering problems, but its effect is minimized for linear systems having Gaussian noises by utilizing the Kalman filter. In particular, the continuous Kalman filter yielded an error covariance P_e of unity for the scalar example in (22) and (23). The second type of error is illustrated in Fig. 3, where the covariance of the modeling error increases with the step size T. Observe that the error variance in Fig. 3 approaches unity for small T, i.e. the lowest error variance possible for the specific optimal discrete filter in (27) is fixed by the inherent signal estimation error for the continuous case. The third type of error occurs when the order of the discrete filter is selected to be less than the system order, i.e. l < n. For example, only the second state of a second-order system may be of interest. Consequently, a first-order discrete filter may be utilized for specific optimal estimation, especially if data rates are relatively high for the available digital processor.

Accuracy comparisons between RK2 and the specific optimal discrete filter (α, β) in Figs. 4-6 indicate an important possible extension of the basic results. Since RK2 yielded increasingly improved filtering accuracy as T increased, the use of higher-order forms of discrete filters is suggested. Obviously, the RK2 format can be used with several free optimization parameters, and the optimized second-order discrete filter will be at least as good as the RK2 result. Some convergence problems were encountered in preliminary attempts at using Fletcher and Powell's algorithm for this case, and further work is indicated. Real-time applications require a modified Runge-Kutta format, since the noise-corrupted measurement z_d(t_{k+1}) is not yet available during the interval t_k ≤ t < t_{k+1} and some time is needed for digital processing after receiving this data. Another possibility, suggested earlier in [10], is the format of open multi-step formulas, such as the second-order Adams-Bashforth formula (AB2).

A further extension is the application of the digital simulation technique in [15] to yield a specific optimal discrete filter for the time-varying case. Cost functionals, similar to those in (9) and (10), may be formed for this generalized optimization procedure. Thereafter, the maximum principle may be applied to yield a two-point boundary value problem in terms of state and co-state equations using Lagrange multipliers. The solution of these equations would provide the optimized matrices of parameters, α(t_k) and β(t_k), for the time-varying specific optimal discrete filter.

CONCLUSIONS
A technique has been developed in this paper to determine the optimal discrete filter of a specific configuration for modeling a continuous, steady-state Kalman filter. Two cost functionals were considered, with explicit results being derived for the direct estimation case. Numerical comparisons for a particular example indicated that an optimal first-order discrete filter was better than the Euler method for integrating the continuous Kalman filter equation. On the other hand, RK2 performed better than this specific optimal discrete filter,
which indicated that even further improvements are achievable from higher-order discrete filters. The optimization framework of this paper provides a valuable technique for deriving optimal digital mechanizations under the constraint of real-time operating conditions.

REFERENCES
1. R. E. Kalman, A new approach to linear filtering and prediction problems. Trans. ASME, J. Basic Engng 82-D, 35-45 (1960).
2. Y. C. Ho and R. C. K. Lee, A Bayesian approach to problems in stochastic estimation and control. IEEE Trans. Automatic Control 9, 333-339 (1964).
3. H. W. Sorenson and D. L. Alspach, Recursive Bayesian estimation using Gaussian sums. Automatica 7, 465-479 (1971).
4. Y. G. Jan and R. J. P. de Figueiredo, Approximations to optimal nonlinear filters via spline functions. Proc. 2nd Symposium on Nonlinear Estimation Theory and Its Applications, San Diego, California, 127-131 (1971).
5. L. Schwartz and E. B. Stear, A computational comparison of several nonlinear filters. IEEE Trans. Automatic Control 13, 83-86 (1968).
6. R. P. Wishner, J. A. Tabaczynski and M. Athans, A comparison of three nonlinear filters. Automatica 5, 487-496 (1969).
7. P. G. Kaminski, A. E. Bryson and S. F. Schmidt, Discrete square-root filtering: A survey of current techniques. IEEE Trans. Automatic Control 16, 727-736 (1971).
8. G. J. Bierman, A comparison of discrete filtering algorithms. IEEE Trans. Aerospace Electron. Systems 9, 28-37 (1973).
9. J. M. Mendel, Computational requirements for a discrete Kalman filter. IEEE Trans. Automatic Control 16, 748-758 (1971).
10. J. A. Gaston and J. R. Rowland, Real-time digital integration for continuous Kalman filtering in nonlinear systems. Comput. Elect. Engng 2, 131-140 (1975).
11. A. P. Sage and S. L. Smith, Real-time digital simulation for systems control. Proc. IEEE 54, 1802-1812 (1966).
12. P. R. Benyon, A review of numerical methods for digital simulation. Simulation 11, 219-238 (1968).
13. J. R. Rowland and W. M. Holmes, A variational approach to digital integration. IEEE Trans. Computers 20, 894-900 (1971).
14. J. R. Rowland and V. M. Gupta, Digital simulations for Monte Carlo analysis. Proc. 15th Midwest Symp. on Circuit and Systems Theory, Rolla, Mo., Paper V.3 (1972).
15. J. R. Rowland, Optimal digital simulations for random linear systems with integration constraints. Comput. Elect. Engng 1, 111-118 (1973).
16. J. R. Rowland and V. M. Gupta, Optimal constrained discrete representations of continuous filtering algorithms. Proc. 6th Symp. on Nonlinear Estimation Theory and Its Applications, San Diego, California, 219-222 (1975).
17. C. S. Sims and J. L. Melsa, Specific optimal estimation. IEEE Trans. Automatic Control 14, 183-186 (1969).
18. C. S. Sims and J. L. Melsa, A survey of specific optimal techniques in control and estimation. Intern. J. Control 14, 299-308 (1971).
19. R. Fletcher and M. J. D. Powell, A rapidly convergent descent method for minimization. Computer J. 6, 163-168 (1963).