Computers chem. Engng, Vol. 15, No. 12, pp. 853-857, 1991. Printed in Great Britain. All rights reserved. 0098-1354/91 $3.00 + 0.00. Copyright (c) 1991 Pergamon Press plc

SHORT NOTE

PROCESS CONTROL APPLICATIONS OF AN EXTENDED KALMAN FILTER ALGORITHM

M. A. MYERS and R. H. LUECKE
Department of Chemical Engineering, University of Missouri-Columbia, Columbia, MO 65211, U.S.A.

(Received 4 February 1991; final revision received 1 August 1991; received for publication 29 August 1991)

Abstract: An efficient new algorithm is described and illustrated on process examples for solution of the extended Kalman filter equations for a continuous dynamic system with discrete measurements. Implicit simultaneous methods, which are powerful in terms of accuracy and efficiency, are utilized for numerical integration. At the internal integration step level, the new algorithm exploits the decoupled nature of the state estimate and error covariance equations along with the symmetry of the error covariance matrix. The error control strategy includes both the state estimates and error covariances.

INTRODUCTION

To attain optimality, model predictive process controllers must predict process output states based on current measurements. In many process situations, however, measuring the current states is not feasible, and those state measurements that are available are often corrupted with noise and by unmeasured process disturbances. On-line state estimation techniques can provide estimates for the unmeasured states and reduce the effect of noise. The Kalman filter (Kalman and Bucy, 1961) develops maximum likelihood estimates of measured and unmeasured process states by combining information from a mathematical model of the process with actual process measurements. To develop the Kalman filter, the general multivariable dynamic control problem is formulated as a set of nonlinear ordinary differential equations with general nonlinear observation equations (Gelb, 1974; Brown, 1983; Lewis, 1986):

    dx/dt = a(x, u, t) + G(t)w;   x(0) = x_0 + w_0,   (1)

    z_k = h[x(t_k), k] + v_k,   (2)

where w and v_k are uncorrelated zero-mean white noise processes with covariance matrices Q and R, respectively, and x_0 is the initial state with the error covariance P_0. The extended Kalman filter is comprised of two recursive update steps: the model-based update is performed over any interval for which no measurements are available, and then, when measurements are available, the measurement update is performed.

Model-based update equations:

    dx̂/dt = a(x̂, u, t);   x̂(0) = x̂_0,   (3)

    dP/dt = AP + PA^T + GQG^T;   P(0) = P_0.   (4)

Measurement-based update equations:

    K_k = P^- H^T [H P^- H^T + R]^{-1},   (5)

    x̂_k = x̂_k^- + K_k [z_k - h(x̂_k^-, k)],   (6)

    P = P^- - K_k H P^-.   (7)
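The measurement-based update amounts to a few lines of linear algebra. A minimal sketch of equations (5)-(7), assuming NumPy; the function name and array shapes are illustrative, not from the paper:

```python
import numpy as np

# Minimal sketch of the measurement-based update, equations (5)-(7).
# H is the measurement Jacobian evaluated at the prior estimate.

def measurement_update(x_prior, P_prior, z, h, H, R):
    """Return the updated state estimate and error covariance."""
    # equation (5): Kalman gain
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    # equation (6): correct the estimate with the measurement innovation
    x_post = x_prior + K @ (z - h(x_prior))
    # equation (7): update the error covariance
    P_post = P_prior - K @ H @ P_prior
    return x_post, P_post
```

With a scalar state, unit prior covariance and unit measurement noise, the gain is 0.5 and the posterior covariance is halved, as expected from equation (7).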

An algorithm solving the extended Kalman filter equations for a general (stiff or nonstiff) process model is described here that achieves efficient solution by decomposing the set of simultaneous state estimate and covariance differential equations at the integration step size level. Well-known numerical integration methods (Hindmarsh, 1980; Dunker, 1984; Leis and Kramer, 1984) were modified so that at each integration step the state estimate equations (3) are solved first, and then the unique set of covariances from equations (4) are calculated explicitly using the state estimate solution.

METHOD OF SOLUTION

Solving the model-based update equations is the cumbersome component of implementing the Kalman filter. Algorithm efficiency can be enhanced most productively with an effective way to integrate the update equations. Since the solution to the state equations (3) does not depend on the solution to the covariance equations (4), the state estimates can be computed independently. With implicit methods using Gear's backward differentiation formulas (Gear, 1971; Holland and Liapis, 1983), the solution vectors are stored as Nordsieck arrays of Taylor series terms (Nordsieck, 1962). In our new algorithm, two solution vectors are calculated, one for the state estimates x̂ and one for the covariances P. At each integration step, the state estimate solution vector is calculated first and then the covariance solution vector is calculated using the identical step size and


order of integration. The form of equation (4) with the transpose terms does not allow simple or direct utilization of standard ordinary differential equation solution algorithms. Since the covariance matrix P is symmetric, an N(N + 1)/2-dimensional vector p of the unique elements of the covariance matrix can be formed by concatenation of the columns of the lower-triangular elements of the N x N-dimensional P matrix. A similar compact vector representation q of the matrix GQG^T is also formed. Equation (4) then can be rewritten as the set of linear ordinary differential equations:

    dp/dt = Jp + q.   (8)
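The construction of p and of the coefficient matrix J can be sketched as follows. This is an illustrative NumPy implementation, not the paper's code: `vech` stacks the lower-triangular columns of P, and `build_J` assembles J from the elements of A, once per problem dimension, so that dp/dt = Jp + q reproduces equation (4) for symmetric P.

```python
import numpy as np

def vech(M):
    """Stack the lower-triangular columns of a symmetric matrix into a vector."""
    n = M.shape[0]
    return np.concatenate([M[j:, j] for j in range(n)])

def build_J(A):
    """Coefficient matrix J such that d vech(P)/dt = J vech(P) + vech(G Q G^T)
    reproduces Pdot = A P + P A^T for symmetric P."""
    n = A.shape[0]
    idx, pairs = {}, []
    for j in range(n):
        for i in range(j, n):          # column-wise lower triangle, as in vech
            idx[(i, j)] = len(pairs)
            pairs.append((i, j))
    J = np.zeros((len(pairs), len(pairs)))
    for r, (i, j) in enumerate(pairs):
        for k in range(n):
            # (A P)_ij contributes A[i,k] * P[k,j]; map (k,j) to its vech slot
            J[r, idx[(max(k, j), min(k, j))]] += A[i, k]
            # (P A^T)_ij contributes P[i,k] * A[j,k]
            J[r, idx[(max(i, k), min(i, k))]] += A[j, k]
    return J
```

Because only the N(N + 1)/2 unique covariances are propagated, the linear system solved at each step is roughly half the size of the naive vectorization of P.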

Since the structure of J is solely a function of the model dimension, it is analyzed only once, at the beginning of the problem, and stored as a set of indices for subsequent calculations. J is obtained from the elements of A in a manner that is transparent to the user. As with the state estimate solution vector, the covariance solution vector is also formulated in terms of the Nordsieck array. From the predictor-corrector equations, equation (8) can be rewritten in the Nordsieck representation as:

    l_1 Δt (dp/dt)_n^0 + (p_n - p_n^0) = l_1 Δt J_n p_n + l_1 Δt q_n.   (9)

In turn, this equation can be rearranged to provide the following explicit noniterative formula for the solution of p at each step t_n:

    p_n = [I - l_1 Δt J_n]^{-1} [p_n^0 - l_1 Δt (dp/dt)_n^0 + l_1 Δt q_n].   (10)

This is the key equation for the new algorithm. Note that equation (10) would be implicit if J_n were unknown. Since the elements of A at t_n are known from the state estimate equations, this explicit formula allows the simultaneous solution of the state estimate and covariance equations. This solution method avoids the necessity of Newton iterations on the entire set of model-based update equations. Newton iterations are performed only on the state estimate equations; the covariances are then computed explicitly using equation (10). Local truncation errors in the solution vectors are estimated using the accumulated correction vector of the Nordsieck arrays. Weighted-mean-square error inequalities, with user-selected relative and/or absolute error tolerances for each solution vector, are checked at each integration step. This is referred to as a mutually-dependent error control strategy (Leis and Kramer, 1984). In this strategy, the behavior of both the states and the covariances is taken into account by providing assurance that errors in the covariances are controlled. This is important since, if errors are allowed to accumulate in the covariances, filter divergence may occur.
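Equation (10) amounts to a single linear solve per integration step. A minimal sketch, assuming NumPy and taking l_1 = 1 (in general l_1 depends on the integration order); with the predicted derivative set to zero this reduces to a backward-Euler step for dp/dt = Jp + q:

```python
import numpy as np

def covariance_corrector(p0, pdot0, Jn, qn, dt, l1=1.0):
    """Noniterative corrector for the covariance vector, equation (10):
    p_n = [I - l1*dt*J_n]^{-1} (p0 - l1*dt*pdot0 + l1*dt*qn),
    where p0 and pdot0 are the predicted value and derivative taken
    from the Nordsieck array."""
    lhs = np.eye(len(p0)) - l1 * dt * Jn
    rhs = p0 - l1 * dt * pdot0 + l1 * dt * qn
    return np.linalg.solve(lhs, rhs)
```

Since J_n is already available from the converged state estimate step, no Newton iteration is needed here, which is the efficiency gain the text describes.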


EXAMPLES

In an isothermal CSTR, the irreversible first-order reactions A → B → C take place (Example 5.2.5, Ray, 1989). Only the concentration of B is measured but the concentrations of both A and B are estimated. Two separate cases were considered for this example. In the first case, which is identical to Ray's (perfect

Fig. 1. State estimation for an isothermal CSTR: only the concentration of the intermediate product B is measured while concentrations of both A and B are estimated. True values are solid curves, estimates are broken lines and measured are points.

Fig. 2. State estimation and control for an adiabatic CSTR: only the reactor temperature is measured (a); the estimates for reactant conversion are used for control (b). True values are solid curves, estimates are broken lines and measured are points.

CSTR modelling), the estimates converged even with high measurement error and poor initial estimates. For the second case, illustrated in Fig. 1, Ray's problem is extended to include imperfect CSTR modelling. Zero-mean simulated process noise with a standard deviation of 0.05 was added to each state. Note that the filtered estimates oscillate around the true values when process noise is added. In an adiabatic CSTR example, the irreversible first-order reaction A → B takes place (Example 5.3.2, Ray, 1989). Only the reactor temperature can be measured (Fig. 2a) while estimates of the reactant conversion (Fig. 2b) are used to operate a feedback PID controller. The controller regulates the temperature of cooling coils in the reactor to maintain the conversion of A at 80%. The controller was tuned by trial and error (Seborg et al., 1988). Control parameters of 43, 0.9 and 0.02 for the gain, reset time and derivative time constant, respectively, were used. These parameters appear to differ from the values used, but not reported, by Ray. Note that after a short transient period the estimator and controller perform quite well.
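The feedback PID controller described above can be sketched in discrete positional form. Only the gain, reset time and derivative time are taken from the text; the sampling interval `dt` is an illustrative assumption:

```python
# Minimal discrete PID sketch (positional form) of the conversion
# controller described above.

class PID:
    def __init__(self, Kc, tau_I, tau_D, dt):
        self.Kc, self.tau_I, self.tau_D, self.dt = Kc, tau_I, tau_D, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, estimate):
        """Controller output computed from the filtered state estimate."""
        error = setpoint - estimate
        self.integral += error * self.dt          # rectangular integration
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.Kc * (error
                          + self.integral / self.tau_I
                          + self.tau_D * derivative)

# parameters reported in the text: gain 43, reset time 0.9, derivative time 0.02
controller = PID(Kc=43.0, tau_I=0.9, tau_D=0.02, dt=0.01)
```

The key point of the example is that `estimate` is the filtered conversion from the extended Kalman filter, not a direct measurement.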

Table 1. Filter parameters and initial conditions for the E. coli fermentor

    Parameter    Value
    Q(1,1)       0.0001
    R(1,1)       0.0025
    R(2,2)       0.25

    Initial conditions
    P(1,1) = 1.0 x 10^-4
    P(2,2) = 1.0 x 10^-6
    P(3,3) = 1.0 x 10^-6

STATE ESTIMATION FOR A FED-BATCH E. COLI FERMENTOR

Since state variables are often very difficult to measure in biochemical reactor systems, this example demonstrates state estimation for an optimally controlled fed-batch E. coli fermentor. The reactor is aerated and fed a stream of glucose substrate and nutrients. The model for this fermentor was described by Bajpai and Luecke (Schuler, 1989). The system is described by equations (11)-(14), where μ represents the specific growth rate and X, S and C_L represent the concentrations of biomass, glucose and dissolved oxygen, respectively:

    dX/dt = μX - (F/V)X,   (11)

    μ = μ_m [S/(K_S + S)][C_L/(K_O + C_L)],   (12)

    dS/dt = -(μ/Y_S)X + (F/V)[S_F - S] + w,   (13)

    dC_L/dt = k_L a[C_L* - C_L] - (F/V)C_L - [μ/Y_O + m_O]X.   (14)

All values for model parameters and initial conditions are the same as reported by Bajpai and Luecke. The nonzero elements of the matrices Q, R and P(0) are

Fig. 3. State estimation for a fed-batch E. coli fermentor: with measurements of biomass and dissolved oxygen concentrations, estimates are made for biomass concentration, dissolved oxygen concentration (a) and specific growth rate (b). True values are solid curves, estimates are broken lines and measured are points. Estimates for the biomass and dissolved oxygen concentrations are nearly identical to their true values.

given in Table 1. The nutrient feed rate, F, was calculated from optimal control results (Lin, 1986). Measurements for the biomass and dissolved oxygen concentrations are used to calculate filtered estimates for the specific growth rate as well as for the biomass and dissolved oxygen concentrations. The true, measured and estimated concentrations of biomass and dissolved oxygen are compared in Fig. 3a. The estimates for the biomass and dissolved oxygen concentrations are essentially identical to their true values. The true and estimated values for the specific growth rate, important for optimal control of the reactor, are compared in Fig. 3b. During the initial period, when the relative error in the measurements is large, some unsteady differences are noted. After this period, the true and estimated states are indistinguishable.

CONCLUSIONS

The extended Kalman filter algorithm described here exploits the decoupled nature of the state estimate and error covariance equations as well as the symmetry of the error covariance matrix. It utilizes proven numerical methods and imposes control over the errors in both the state estimates and error covariances. Since better estimation of process conditions leads to better process control, this algorithm will fill a need in the chemical process industry.

REFERENCES

Brown R. G., Introduction to Random Signal Analysis and Kalman Filtering. Wiley, New York (1983).
Dunker A. M., The decoupled direct method for calculating sensitivity coefficients in chemical kinetics. J. Chem. Phys. 81, 2385-2393 (1984).
Gear C. W., Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall, Englewood Cliffs, NJ (1971).
Gelb A. (Ed.), Applied Optimal Estimation. MIT Press, Cambridge, MA (1974).
Hindmarsh A. C., LSODE and LSODI, two new initial value ordinary differential equation solvers. ACM SIGNUM Newslett. 15, 10-11 (1980).
Holland C. D. and A. I. Liapis, Computer Methods for Solving Dynamic Separation Problems. McGraw-Hill, New York (1983).
Kalman R. E. and R. S. Bucy, New results in linear filtering and prediction theory. Trans. ASME J. Basic Engng 83, 95-108 (1961).
Leis J. R. and M. A. Kramer, Sensitivity analysis of general differential systems. AIChE National Meeting, San Francisco, CA, Paper 27c (1984).
Lewis F. L., Optimal Estimation: With an Introduction to Stochastic Control Theory. Wiley, New York (1986).
Lin H. Y., Dynamic matrix control of nonlinear processes. Ph.D. Dissertation, University of Missouri-Columbia (1986).
Nordsieck A., On numerical integration of ordinary differential equations. Math. Comput. 16, 22-49 (1962).
Ray W. H., Advanced Process Control. Butterworths, Stoneham, MA (1989).
Schuler M. L. (Ed.), Chemical Engineering Problems in Biotechnology. American Institute of Chemical Engineers, New York (1989).
Seborg D. E., T. F. Edgar and D. A. Mellichamp, Process Dynamics and Control. Wiley, New York (1988).