Chemical Engineering Science, 1969, Vol. 24, pp. 65-74. Pergamon Press. Printed in Great Britain.

Identification of parameters in partial differential equations

J. H. SEINFELD
Department of Chemical Engineering, California Institute of Technology, Pasadena, California

(First received 20 February 1968; in revised form 20 May 1968)

Abstract - The identification of parameters in partial differential equations from experimental output data is investigated. It is assumed that a physical process can be represented by a system of nonlinear hyperbolic or parabolic partial differential equations of known form but containing unknown parameters. The parameters may enter in the equations themselves or the boundary conditions. A steep descent algorithm is derived based on minimizing the difference between the experimentally observed output and that predicted by the model. The question of observability of distributed systems is considered. The determination of the reaction velocity constant for a first-order decomposition in an isothermal, laminar-flow tubular reactor is treated in detail.

INTRODUCTION

Usually the form of the differential equations used to describe a physical process can be specified from basic conservation principles. However, parameters, such as reaction velocity constants and transport coefficients, often are unknown and, in fact, are determined from a comparison of experimental measurements of the process and the solutions of the differential equations describing the process. This technique is an integral part of the analysis of experimental data in terms of a model of known form with unknown coefficients.

The estimation of parameters in ordinary differential equations has received considerable interest recently in the technical literature. An excellent review of the techniques of generating and analyzing parameter estimates appears in the book by Rosenbrock and Storey [1]. More recently, Bellman, Sridhar and co-workers have shown how techniques from nonlinear filtering and estimation theory can be applied to the estimation of parameters in ordinary differential equations [2-4].

In many cases it is necessary that the model of a physical process be formulated in terms of partial differential equations. Again, the equations may contain unknown parameters which are to be estimated from experimental measurements. If the model equations are linear an analytical solution is usually obtainable, from which parameters may be estimated by standard nonlinear regression techniques, such as nonlinear least squares. If the partial differential equations are nonlinear, however, some sort of extremely time-consuming trial and error integration of the equations would be necessary to find the values of the parameters that produce a satisfactory compliance with the experimental data. It is thus desirable to establish an orderly iterative technique that proceeds from an initial guess of the parameter values to values which in some sense make the model output correspond to the actual output.

Two additional factors tend to complicate the problem. First, parameters may enter into the boundary conditions of the equations as well as the equations themselves. Such a case occurs in estimating heat and mass transfer coefficients on the boundary of a distributed medium. Second, the measurements of the process may be functions of space and time or represent integrated functions of the system state, e.g., point values of concentrations vs. overall integrated concentrations in a distributed reacting system.

We will pose the parameter identification problem in terms of a class of nonlinear parabolic and hyperbolic systems that represents a large majority of systems of chemical engineering interest. A steep descent algorithm is derived based on the minimization of a least square error fit of the experimental data and the model output. Starting with an initial guess of the parameter values, the algorithm iteratively improves the guesses in the direction of the gradient of the least square criterion. The method is applied to the determination of the reaction velocity constant for a first-order decomposition in a laminar-flow tubular reactor.

FORMULATION OF THE PROBLEM

Consider a distributed parameter system whose state at any time t can be specified by a set of n functions u_i(t,x), where the scalar variable x assumes values from a spatial domain Ω. We assume that if there exist manipulatable input variables to the system (e.g. controls), distributed either wholly or partially on Ω, they are completely known. We desire to obtain a mathematical model of the system. We assume that the model consists of a set of partial differential equations of known form but containing a number of unknown parameters. The method we will use to determine those parameters is based on using the difference between the solution of the model equations and the actual measured system response to continually improve on an initial guess of the unknown parameters.

In actual situations the state variables u*(t,x) may not be directly measurable; instead, only certain prescribed functions of u*(t,x) are obtained. We will call these measured variables the output of the system and denote them by y_j, j = 1, 2, ..., m ≤ n. The observed output variables can be divided into four classes, dependent on the nature of the transformation from the state u(t,x) to the output y_j:

(1) Time- and spatially-dependent output transformation. The observed variables y_j depend on both t and x, for example,

    y_j(t,x) = h_j(t,x,u(t,x))    j = 1, 2, ..., m_1    (1)

where h_j represents a known continuous mapping from the state function space to the output function space.

(2) Time-dependent output transformation. The observed variables y_j depend on t only, for example,

    y_j(t) = ∫_Ω h_j(t,x,u(t,x)) dΩ    j = m_1+1, ..., m_2    (2)

In this case there exists a continuous mapping from the state function space to a finite-dimensional Euclidean output space.

(3) Spatially-dependent output transformation. The observed variables y_j depend on x only, for example,

    y_j(x) = ∫_0^T h_j(t,x,u(t,x)) dt    j = m_2+1, ..., m_3    (3)

(4) Time- and spatially-independent output transformation. The observed variables y_j are independent of both x and t, for example,

    y_j = ∫_0^T ∫_Ω h_j(t,x,u(t,x)) dΩ dt    j = m_3+1, ..., m    (4)
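As a concrete illustration of these four classes, the fragment below evaluates discrete analogues of (1)-(4) for a state stored on a (t, x) grid. It is a minimal sketch: the observation function h, the grids, and the state u are invented for the example, and simple Riemann sums stand in for the integrals over Ω and [0, T].

```python
import numpy as np

# Hypothetical discrete analogues of the output transformations (1)-(4).
# h(t, x, u), the grids, and u(t, x) are illustrative choices, not from the paper.
t = np.linspace(0.0, 1.0, 51)          # time grid on [0, T]
x = np.linspace(0.0, 1.0, 41)          # spatial grid on Omega = [0, 1]
dt, dx = t[1] - t[0], x[1] - x[0]
u = np.exp(-t[:, None]) * np.cos(np.pi * x[None, :])   # some state u(t, x)

def h(t, x, u):
    return u**2                         # an invented output function h(t, x, u)

hu = h(t[:, None], x[None, :], u)

y1 = hu                                 # class (1): y_j(t, x), the full field
y2 = hu.sum(axis=1) * dx                # class (2): y_j(t), integral over Omega
y3 = hu.sum(axis=0) * dt                # class (3): y_j(x), integral over [0, T]
y4 = hu.sum() * dx * dt                 # class (4): scalar y_j, integral over both
```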

In general, the experimental measurements y_j contain random errors due to the presence of external noise and imperfections in the measuring instruments, and, in most cases, the statistical character of these errors is unknown. We will assume these errors are small in magnitude when compared to y_j. This implies that once our best estimates of the unknown parameters and the corresponding model state variables u(t,x) have been obtained, each of (1)-(4) will not be satisfied identically, but the discrepancies will be small.

We wish to consider the class of parabolic or hyperbolic systems whose dynamical behavior can be described by the set of nonlinear partial differential equations and initial conditions

    ∂u_i(t,x)/∂t = f_i(t, x, u(t,x), ∂u(t,x)/∂x, ∂²u(t,x)/∂x², v(t,x), k)    (5)

    u_i(0,x) = u_i0(x)    i = 1, 2, ..., n

where v(t,x) is a p-dimensional vector of known input variables and k is a q-dimensional vector of unknown parameters.


The boundary conditions of (5) can be given at two values of x, which we take for convenience here to be x = 0 and x = 1. In general, the boundary conditions may also contain unknown parameters. Arbitrarily, the boundary condition at x = 0 will contain an r-dimensional vector of unknown parameters a. We write the two conditions as

    g_1(u(t,0), ∂u(t,0)/∂x, a) = 0    (6)

and

    g_2(u(t,1), ∂u(t,1)/∂x) = 0    i = 1, 2, ..., n.    (7)

The parameters k and a will be selected so as to minimize some criterion function characterizing the differences between the model and the actual system. In particular, we will employ a least square error criterion that is a positive definite function of the difference between the observed and computed results. We will designate this difference as e_j, which, in terms of the sample outputs (1)-(4), becomes

    e_j(t,x) = y_j(t,x) - h_j(t,x,u(t,x))    j = 1, ..., m_1    (8)

    e_j(t) = y_j(t) - ∫_Ω h_j(t,x,u(t,x)) dΩ    j = m_1+1, ..., m_2    (9)

    e_j(x) = y_j(x) - ∫_0^T h_j(t,x,u(t,x)) dt    j = m_2+1, ..., m_3    (10)

    e_j = y_j - ∫_0^T ∫_Ω h_j(t,x,u(t,x)) dΩ dt    j = m_3+1, ..., m    (11)

where y_j are the observed output variables and u(t,x) is the solution of (5)-(7) corresponding to the values of k and a we have assigned. Two forms of error functional are possible, an instantaneous error criterion defined at a definite spatial location and/or time, and an integral average error criterion defined over all or part of Ω × [0, T]. To be specific, an instantaneous error criterion evaluated at t = T and x = 0, for example, takes the forms corresponding to (8)-(10), respectively,

    J = e_i(T,0) α_ij e_j(T,0)    (12)

    J = e_i(T) α_ij e_j(T)    (13)

    J = e_i(0) α_ij e_j(0)    (14)

where we employ the summation convention in which summation is carried out over all appropriate values of a repeated index. α_ij represent elements of a weighting matrix A, assumed symmetric and positive definite. An integral average error criterion based on (8)-(10) assumes the following forms,

    J = ∫_0^T ∫_Ω e_i(t,x) α_ij e_j(t,x) dΩ dt    (15)

    J = ∫_0^T e_i(t) α_ij e_j(t) dt    (16)

    J = ∫_Ω e_i(x) α_ij e_j(x) dΩ.    (17)

J is a function of the parameters k_1, ..., k_q, a_1, ..., a_r. Our best choice of these parameters will lie at the minimum of J. Due to the existence of experimental errors in an actual situation, as we noted, min J > 0. We desire to locate this minimum by a method of steep descent.
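In discrete form the criteria above are ordinary weighted sums of squares. The fragment below is a hypothetical illustration of the summation convention with a weighting matrix A: the error array e, the grid sizes, and the weights are invented, and Riemann sums replace the integrals in (15).

```python
import numpy as np

# Hypothetical discrete versions of the criteria (12) and (15).
m1, nt, nx = 3, 51, 41                      # number of outputs and grid sizes (invented)
rng = np.random.default_rng(1)
e = rng.standard_normal((m1, nt, nx))       # errors e_j(t, x) of Eq. (8) on the grid
A = np.diag([1.0, 2.0, 0.5])                # symmetric, positive definite weighting matrix
dt, dx = 1.0 / (nt - 1), 1.0 / (nx - 1)

# Instantaneous criterion (12): J = e_i(T,0) alpha_ij e_j(T,0)
e_T0 = e[:, -1, 0]
J_inst = e_T0 @ A @ e_T0

# Integral average criterion (15): J = integral over [0,T] x Omega of e_i alpha_ij e_j
J_avg = np.einsum('itx,ij,jtx->', e, A, e) * dt * dx
```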

THE STEEP DESCENT ALGORITHM

It is necessary to compute the components of the gradient of J in the space of k and a. For simplicity let us take A as a diagonal matrix with elements α_i. In terms of instantaneous criterion (12) we have

    ∂J/∂k_l = -2 α_i e_i(T,0) (∂h_i/∂u_j)(∂u_j/∂k_l)|_{t=T, x=0}    l = 1, 2, ..., q    (18)

and, similarly,

    ∂J/∂a_l = -2 α_i e_i(T,0) (∂h_i/∂u_j)(∂u_j/∂a_l)|_{t=T, x=0}    l = 1, 2, ..., r.    (19)


Analogous results are obtained for instantaneous criteria (13) and (14). For the integral average criterion (15) the gradients are

    ∂J/∂k_l = -∫_0^T ∫_Ω 2 α_i e_i(t,x) (∂h_i/∂u_j)(∂u_j/∂k_l) dΩ dt    l = 1, 2, ..., q    (20)

and

    ∂J/∂a_l = -∫_0^T ∫_Ω 2 α_i e_i(t,x) (∂h_i/∂u_j)(∂u_j/∂a_l) dΩ dt    l = 1, 2, ..., r.    (21)

Analogous results are obtained for integral average criteria (16) and (17). Now, ∂u_j/∂k_l and ∂u_j/∂a_l are simply the sensitivity coefficients of the model output u_j(t,x) to changes in the parameters k_l and a_l. These sensitivity coefficients, which we denote as λ_jl(t,x) and μ_jl(t,x), are elements of the n × q matrix Λ(t,x) and the n × r matrix M(t,x), respectively. Equations for λ_jl(t,x) and μ_jl(t,x) can be obtained by assuming that

    ∂/∂t (∂u_j/∂k_l) = ∂/∂k_l (∂u_j/∂t)    and    ∂/∂t (∂u_j/∂a_l) = ∂/∂a_l (∂u_j/∂t).    (22)

Thus, we obtain

    ∂λ_jl/∂t = (∂f_j/∂u_s) λ_sl + (∂f_j/∂(∂u_s/∂x)) ∂λ_sl/∂x + (∂f_j/∂(∂²u_s/∂x²)) ∂²λ_sl/∂x² + ∂f_j/∂k_l    (23)

and

    ∂μ_jl/∂t = (∂f_j/∂u_s) μ_sl + (∂f_j/∂(∂u_s/∂x)) ∂μ_sl/∂x + (∂f_j/∂(∂²u_s/∂x²)) ∂²μ_sl/∂x².    (24)

The initial conditions for (23) and (24) are

    λ_jl(0,x) = 0    (25)

    μ_jl(0,x) = 0.    (26)

At x = 0, the boundary conditions for (23) and (24) are

    (∂g_1j/∂u_s) λ_sl + (∂g_1j/∂(∂u_s/∂x)) ∂λ_sl/∂x = 0    (27)

and

    (∂g_1j/∂u_s) μ_sl + (∂g_1j/∂(∂u_s/∂x)) ∂μ_sl/∂x + ∂g_1j/∂a_l = 0.    (28)

Similarly, at x = 1,

    (∂g_2j/∂u_s) λ_sl + (∂g_2j/∂(∂u_s/∂x)) ∂λ_sl/∂x = 0    (29)

and

    (∂g_2j/∂u_s) μ_sl + (∂g_2j/∂(∂u_s/∂x)) ∂μ_sl/∂x = 0.    (30)

We can now compute ∂J/∂k_l and ∂J/∂a_l. e_j is obtained by measuring the system and model outputs. Λ and M are calculated from (23)-(30), which are integrated with the model equations from t = 0 to t = T on Ω. The steep descent algorithm consists of iteratively correcting k and a by means of

    k_l^(i+1) = k_l^i - γ_1 (∂J/∂k_l)    l = 1, 2, ..., q    (31)

    a_l^(i+1) = a_l^i - γ_2 (∂J/∂a_l)    l = 1, 2, ..., r    (32)

where γ_1 and γ_2 are arbitrary step lengths and k^i and a^i denote the parameter vectors at the ith iteration step. As in all steep descent techniques, convergence is dependent on a reasonable starting guess of the unknown parameters. The step lengths γ_1 and γ_2 determine how far along the gradient the next guess is to be taken. Obviously, if γ_1 and γ_2 are too small, the algorithm will require many steps until convergence, and if γ_1 and γ_2 are too large, the minimum may be overstepped rapidly.

The algorithm is used as follows:
1. Guess the q + r initial values, k_1^0, ..., k_q^0, a_1^0, ..., a_r^0.
2. Integrate (5) subject to (6) and (7) and evaluate the appropriate errors e_j.
3. Integrate (23) and (24) subject to (25)-(30).
4. Evaluate the gradients ∂J/∂k and ∂J/∂a.
5. Form updated estimates of k and a from (31) and (32). If the new estimates increase rather than decrease J, the minimum has been overstepped. Go back to the values from the previous iteration and cut γ_1 and γ_2 by a predetermined factor, for example two, before forming the new estimates.
6. When k^(i+1) - k^i and a^(i+1) - a^i become sufficiently small so that it is evident that the minimum is close, stop the iteration, e.g.

    |k_l^(i+1) - k_l^i| / |k_l^i| ≤ ε.
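The six steps can be collected into a short driver routine. The sketch below is a hypothetical illustration, not the author's program: solve_model, solve_sensitivities, criterion, and gradients stand for numerical integrations of (5)-(7) and (23)-(30) and for evaluation of the criterion and the gradients (18)-(21), and would have to be supplied for a particular problem.

```python
import numpy as np

def steep_descent(k0, a0, solve_model, solve_sensitivities, criterion, gradients,
                  gamma1=1.0, gamma2=1.0, eps=1e-3, max_iter=200):
    """Hypothetical sketch of steps 1-6: steepest descent on J(k, a)."""
    k, a = np.asarray(k0, float), np.asarray(a0, float)   # step 1: initial guesses
    u = solve_model(k, a)                                  # step 2: integrate (5)-(7)
    J = criterion(u, k, a)
    for _ in range(max_iter):
        lam, mu = solve_sensitivities(u, k, a)             # step 3: integrate (23)-(30)
        dJdk, dJda = gradients(u, lam, mu, k, a)           # step 4
        while True:
            k_new = k - gamma1 * dJdk                      # step 5: update via (31)
            a_new = a - gamma2 * dJda                      # and (32)
            u_new = solve_model(k_new, a_new)
            J_new = criterion(u_new, k_new, a_new)
            if J_new < J or gamma1 < 1e-12:
                break
            gamma1, gamma2 = gamma1 / 2.0, gamma2 / 2.0    # overshoot: halve the steps
        # step 6: stop when the relative parameter change is small
        if (np.all(np.abs(k_new - k) <= eps * np.maximum(np.abs(k), 1e-12)) and
                np.all(np.abs(a_new - a) <= eps * np.maximum(np.abs(a), 1e-12))):
            return k_new, a_new
        k, a, u, J = k_new, a_new, u_new, J_new
    return k, a
```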

It should be apparent that the ability to obtain estimates of parameters will automatically produce estimates of the true system state, since the only unknowns in the model equations are the parameters. It is important to realize, however, that one may not be able to estimate parameters from the particular experimental measurements proposed. In other words, it may be impossible to estimate or calculate parameter values from the output data obtained. This raises the general question of the observability of a distributed parameter dynamical system.

OBSERVABILITY OF DISTRIBUTED PARAMETER SYSTEMS

The concept of observability of a dynamical system described by ordinary differential equations was introduced by Kalman [5]. The fundamental question is: Given a mathematical model of a free dynamical system and its output transformation, is it possible to determine the system state at any time t by observing the measured output over a finite time interval [t, t + T]? Evidently, if the system state can be recovered from the output data, parameter values upon which the state depends can also be recovered. The idea of observability of a system described by partial differential equations has been introduced by Wang [8], whose development we will briefly sketch in the following.

Let Φ(t,t_0) represent a given continuous transformation of the system (5) such that, given the state at t_0, u(t_0,x), the state at t is given by

    u(t,x) = Φ(t,t_0) u(t_0,x).    (33)

In addition, let H represent the particular output transformation, for example, (1)-(4), so that

    y = H u(t,x) = H Φ(t,t_0) u(t_0,x)    (34)

where y has components y_1, ..., y_m. The system (5)-(7) will be called completely observable at t_0 if it is possible to determine u(t_0,x) by observing y over [t_0, t_0 + T], T finite. If the system is completely observable at any t it will be called completely observable. For any of the output transformations (2)-(4), it is conceivable that solutions corresponding to all initial states u(t_0,x) vary in such a manner that the observed values y are all equal over the interval [t_0,T]. It would thus be impossible to recover u(t_0,x) from y on [t_0,T]. In selecting experimental measurements it is necessary to verify the observability of the system with respect to the particular output transformation.

Let us see what definite statements we can make regarding observability. The only analytical results that may be obtained will be for the case when (5)-(7) are linear. We consider this case first. The problem (5)-(7) is well-posed if the following condition is met: the operators Φ(t,t_0) are uniformly bounded. Loosely speaking, this condition implies that the solution depends continuously on the initial data. Then, if the system is completely observable, small errors in the output lead only to small errors in the recovered initial state. Let us assume H is a bounded operator; it then follows that H Φ(t,t_0) is a bounded linear operator. We require that there exist a finite time T and a continuous one-to-one mapping from y to u(t,x) for the system to be observable. If the system is completely observable at t_0, we can recover its initial state from

    u(t_0,x) = [(H Φ(t_1,t_0))* (H Φ(t_1,t_0))]^(-1) (H Φ(t_1,t_0))* y    (35)

where (H Φ(t_1,t_0))* denotes the adjoint operator and is defined by

    (G(x), H Φ(t_1,t_0) F(t,x))_1 = ((H Φ(t_1,t_0))* G(x), F(t,x))_2.    (36)
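For a discretized linear system, (35) is simply the least-squares (normal-equations) solution of y = H Φ u(t_0,x). The fragment below is a hypothetical finite-dimensional analogue: the matrix A stands for a discretized H Φ(t_1,t_0), and recovery of the initial state succeeds only when the Gram matrix A^T A is well conditioned, the finite-dimensional counterpart of the bounded-inverse condition discussed next.

```python
import numpy as np

# Hypothetical finite-dimensional analogue of (35): y = A u0 with A ~ H Phi(t1, t0).
rng = np.random.default_rng(0)
n_state, n_obs = 20, 50
A = rng.standard_normal((n_obs, n_state))      # discretized observation operator
u0_true = rng.standard_normal(n_state)         # "initial state" to be recovered
y = A @ u0_true                                # noise-free output record

gram = A.T @ A                                 # discrete (H Phi)* (H Phi)
if np.linalg.cond(gram) < 1e12:                # bounded (well-conditioned) inverse?
    u0_recovered = np.linalg.solve(gram, A.T @ y)   # normal equations, Eq. (35)
    print(np.allclose(u0_recovered, u0_true))       # True: state recovered
else:
    print("Gram operator is effectively singular; the state cannot be recovered.")
```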


(·,·)_1 and (·,·)_2 denote inner products in L_2(Ω) and L_2([t_0,t_1] × Ω), L_2 being the set of all square-integrable functions defined on Ω or [t_0,t_1] × Ω. Thus, a necessary and sufficient condition for a linear distributed parameter system to be completely observable at t_0 is that the linear self-adjoint operator (H Φ(t,t_0))* (H Φ(t,t_0)) has a bounded inverse for some finite t > t_0.

Let us apply the preceding ideas to a one-dimensional linear system, in which the solution of (5)-(7) can be expressed by the integral equation

    u(t,x) = Φ(t,t_0) u(t_0,x) = ∫_Ω W(t,x,ξ) u(t_0,ξ) dξ.    (37)

We consider an output of form (2), in particular,

    y(t) = H u(t,x) = ∫_Ω g(t,x) u(t,x) dx.    (38)

This system is completely observable at t_1 if the linear, self-adjoint operator

    (H Φ(t_1,t_0))* (H Φ(t_1,t_0)) = ∫_{t_0}^{t_1} ∫_Ω g(t,ξ') W(t,ξ',x) dξ' ∫_Ω ∫_Ω g(t,ξ'') W(t,ξ'',ξ) (·) dξ dξ'' dt    (39)

has a bounded inverse for some finite t_1 > t_0. Consider a linear diffusion system governed by

    ∂u(t,x)/∂t = ∂²u(t,x)/∂x²

    u(0,x) = u_0(x)    (40)

    u(t,0) = u(t,1) = 0

the solution to which can be readily shown to be

    u(t,x) = ∫_0^1 W(t,x,ξ) u_0(ξ) dξ    (41)

where the Green's function

    W(t,x,ξ) = Σ_{n=1}^∞ exp[-n²π²t] sin(nπx) sin(nπξ).    (42)

First, assume that we measure u(t,x) at only one fixed point η ∈ (0,1) for all t ∈ [0,t_1]. Then y(t) = u(t,η), or g(t,x) in (38) equals δ(x - η), the Dirac delta function. The operator (39) becomes

    (H Φ(t_1,0))* (H Φ(t_1,0)) = ∫_0^{t_1} W(t,η,x) ∫_0^1 W(t,η,ξ) (·) dξ dt.    (43)

Combining (42) and (43),

    (H Φ(t_1,0))* (H Φ(t_1,0)) = ∫_0^{t_1} Σ_{n=1}^∞ Σ_{m=1}^∞ exp[-(n² + m²)π²t] sin(nπη) sin(mπη) sin(nπx) ∫_0^1 sin(mπξ) (·) dξ dt    (44)

for which a bounded inverse does not exist for any finite t_1. As an example, consider the case where u_0(x) = sin(4πx). From (41) and (42) we find that u(t,x) = exp(-16π²t) sin(4πx). If we choose η = 1/2, then y(t) = u(t,1/2) = 0 for all t ≥ 0, and u_0(x) cannot be determined from y(t).

Next, assume that instead of measuring u(t,x) at a fixed location η, we allow our instrument to scan from x = 0 to x = 1 over [0,t_1], along the line t = t_1 x. For a restricted class of initial functions Wang points out that the system is completely observable with this measuring scheme, or in fact with any that scans along a continuous monotone increasing curve intersecting x = 0 and x = 1.

For simple linear distributed systems, the solution of which can be expressed in the form (37), it is possible to determine a priori whether a certain measurement scheme will enable recovery of past states of the system. If we assume that the model solution u(t,x) is a unique function of the parameter values, the complete observability of the system is a sufficient condition for determination of the parameter values. Two very important points arise, though. First, a significant fraction of the relevant problems involve nonlinear systems or highly coupled linear systems, such that a convenient analytical solution is unobtainable. Second, it is possible that if a model contains several parameters, there exist a finite number of combinations of parameter values which cause the model output to match the experimental output.
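The point-measurement case can be checked numerically. The fragment below is a hypothetical illustration, not part of the original computations: it evaluates the solution (41)-(42) for u_0(x) = sin(4πx) by a truncated sine series on a coarse grid and confirms that the record y(t) = u(t, 1/2) vanishes, so this u_0 is indistinguishable from u_0 = 0 for a sensor at η = 1/2.

```python
import numpy as np

def series_solution(u0_vals, x_grid, t, n_terms=50):
    """u(t,x) from (41)-(42): truncated sine series, coefficients by a simple quadrature."""
    dx = x_grid[1] - x_grid[0]
    u = np.zeros_like(x_grid)
    for n in range(1, n_terms + 1):
        bn = 2.0 * np.sum(u0_vals * np.sin(n * np.pi * x_grid)) * dx
        u += bn * np.exp(-(n * np.pi) ** 2 * t) * np.sin(n * np.pi * x_grid)
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.sin(4 * np.pi * x)                  # initial state of the counterexample
i_eta = np.argmin(np.abs(x - 0.5))          # sensor location eta = 1/2

y = [series_solution(u0, x, t)[i_eta] for t in np.linspace(0.0, 0.2, 21)]
print(max(abs(v) for v in y))               # ~1e-16: the record y(t) = u(t, 1/2) carries no information
```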

Thus, a more general condition of observability than the preceding is desirable. It seems that a sufficient condition for observability of both linear and nonlinear distributed systems might be the convergence of the steep descent algorithm presented above. Convergence depends on the nonvanishing of the sensitivity matrices Λ and M and the uniqueness of y_j as a function of h_j. The significance of the foregoing development and remarks may be seen more clearly by application to an actual system.

EXAMPLE

Cleland and Wilhelm [6] studied the isothermal, liquid-phase hydrolysis of acetic anhydride in a laminar-flow tubular reactor. If axial diffusion is negligible compared to radial diffusion and there is no volume change on reaction, the mass balance for acetic anhydride is

    v_0 (1 - r²/R²) ∂c/∂z = D (∂²c/∂r² + (1/r) ∂c/∂r) - kc.    (45)

We let

    u = c/c_0,    x = r/R,    t = z/v_0,    β = D/R²

and (45) becomes

    (1 - x²) ∂u/∂t = β (∂²u/∂x² + (1/x) ∂u/∂x) - ku.    (46)

The initial and boundary conditions for (46) are

    u(0,x) = 1,    ∂u/∂x = 0 at x = 0, 1.    (47)

The variables t and β are not the same as the authors used because we want to isolate k in (46). Cleland and Wilhelm studied the range of validity of (46) as a model for the reactor by comparing experimental conversions to those predicted from (46). In their work, values for the first-order reaction velocity constant k were determined beforehand from independent batch experiments. The problem we wish to consider is the opposite one, namely, the estimation of k from experimental measurements. We assume that (46) is a valid model for the reactor and, knowing β, we desire to estimate k using the steep descent algorithm developed above.

In order to examine the convergence of the algorithm it is desirable to know the true value of k. Naturally in actual practice this will not be possible. The experimental values of u(t,x) were generated by numerically integrating (46) from t = 0 to t = T using the assumed true value of k. The experimentally observed output may assume any of the general forms (1)-(4). Two particular forms were used here. The first represents the integral average effluent conversion based on the volume flow rate,

    y(t) = 4 ∫_0^1 x (1 - x²) u(t,x) dx    (48)

the quantity measured experimentally by Cleland and Wilhelm, and the second, the wall concentrations down the length of the reactor,

    y(t) = u(t,1).    (49)

The sensitivity coefficient λ = ∂u/∂k is governed by

    (1 - x²) ∂λ/∂t = β (∂²λ/∂x² + (1/x) ∂λ/∂x) - kλ - u    (50)

with λ(0,x) = 0 and ∂λ/∂x = 0 at x = 0, 1. It follows from (48) and (49) that

    e(t) = y(t) - 4 ∫_0^1 x (1 - x²) u(t,x) dx    (51)

and

    e(t) = y(t) - u(t,1)    (52)

respectively. Since the average concentration is only measured at t = T, the error criterion of (51) was chosen as

    J = e(T)².    (53)

The measurements of u(t,1) can be made at all t, so we selected

    J = ∫_0^T e(t)² dt    (54)
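A small numerical sketch of this estimation problem is given below. It is a hypothetical illustration rather than the 41 × 41 computation reported in the paper: (46) and (50) are discretized in x by central differences and marched explicitly in t, the output (48) and criterion (53) are evaluated by a simple quadrature, and the update (31) with step halving is applied to k. The grid, time step, and step length γ are choices made here for brevity, so the iteration counts and final criterion values will not reproduce the tabulated results exactly.

```python
import numpy as np

beta, T, Nx = 0.1, 1.0, 41
x = np.linspace(0.0, 1.0, Nx)
dx = x[1] - x[0]
w = 4.0 * x * (1.0 - x**2) * dx            # quadrature weights for y(t) in (48)
interior = slice(1, Nx - 1)
coef = 1.0 - x[interior]**2                # (1 - x^2), nonzero at interior nodes

def march(k):
    """Integrate (46) for u and (50) for lambda = du/dk from t = 0 to t = T."""
    u = np.ones(Nx)                        # u(0,x) = 1, Eq. (47)
    lam = np.zeros(Nx)                     # lambda(0,x) = 0
    dt = 0.25 * dx**2 * coef.min() / beta  # explicit stability restriction
    steps = int(np.ceil(T / dt))
    dt = T / steps
    for _ in range(steps):
        lap_u = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        grad_u = (u[2:] - u[:-2]) / (2.0 * dx)
        lap_l = (lam[2:] - 2.0 * lam[1:-1] + lam[:-2]) / dx**2
        grad_l = (lam[2:] - lam[:-2]) / (2.0 * dx)
        du = (beta * (lap_u + grad_u / x[interior]) - k * u[1:-1]) / coef
        dlam = (beta * (lap_l + grad_l / x[interior]) - k * lam[1:-1] - u[1:-1]) / coef
        u[1:-1] += dt * du
        lam[1:-1] += dt * dlam
        u[0], u[-1] = u[1], u[-2]          # Neumann conditions du/dx = 0 at x = 0, 1
        lam[0], lam[-1] = lam[1], lam[-2]
    return u, lam

def criterion_and_gradient(k, y_obs):
    u, lam = march(k)
    e = y_obs - w @ u                      # e(T) from (48) and (51)
    J = e**2                               # criterion (53)
    dJdk = -2.0 * e * (w @ lam)            # dJ/dk via the sensitivity coefficient lambda
    return J, dJdk

k_true = 1.0
y_obs = w @ march(k_true)[0]               # synthetic "experimental" y(1), as in the paper

k, gamma = 0.5, 10.0                       # initial guess and step length (illustrative)
J, dJdk = criterion_and_gradient(k, y_obs)
for it in range(200):
    k_new = k - gamma * dJdk               # update (31)
    J_new, dJdk_new = criterion_and_gradient(k_new, y_obs)
    if J_new > J:                          # overshoot: keep old k and halve the step
        gamma /= 2.0
        continue
    if abs(k_new - k) <= 1e-3 * abs(k):    # convergence test of step 6 with eps = 0.001
        k = k_new
        break
    k, J, dJdk = k_new, J_new, dJdk_new
print(f"estimated k = {k:.4f}")            # approaches the true value k = 1
```

Because the synthetic data are generated with the same solver, the iteration recovers k close to 1 within the tolerance ε, mirroring the qualitative behaviour reported for output (48), although the number of iterations depends on the discretization and step length chosen here.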


corresponding to (52). The computation of ∂J/∂k in each case is straightforward.

We selected β = 0.1 and the true value of k = 1. Using these values, (46) was numerically integrated from t = 0 to t = T, T = 1. A discussion of the details of this computation can be found elsewhere [7]. In all our computations a 41 × 41 grid was employed. The results of a preliminary integration on a 21 × 21 mesh agreed identically with those presented in the book by Lapidus [7].

Two basic questions are of interest in the application of the steep descent algorithm. First, can we correctly identify the true value of k from either of the output observations (48) and (49), namely, is the system observable? Second, if the algorithm converges, what is the effect of the initial guess of k on the rate of convergence?

Consider first the output (48), for which y(1) = 0.1378, corresponding to β = 0.1 and k = 1. The steep descent algorithm was applied with three different initial guesses of k and ε = 0.001, with the following results:

    Initial values              Final values
    k       γ       J           k           J × 10^7    Iterations
    0.5     10      0.03846     1.000060    0.2062      33
    2.0     10      0.01175     1.000061    0.1962      11
    0       0.2     0.5987      1.000060    0.1962      89

The algorithm required approximately 1 sec per iteration on an IBM 7094. Convergence was obtained in each case studied, indicating that the system (46)-(48) is observable, or that a unique value of k corresponds to each value of the integral average conversion, y(1). Thus, measurement of the integral average conversion from a laminar-flow tubular reactor represents a valid method of obtaining first-order kinetic rate data.

The second type of experimental observation proposed, (49), is measurement of the wall concentrations down the length of the reactor. Admittedly this would be difficult to implement in practice; however, it was chosen to afford a comparison to effluent conversion output measurements. The steep descent algorithm was used with the same three initial guesses of k as above and ε = 0.001:

    Initial values              Final values
    k       γ       J           k           J × 10^7    Iterations
    0.5     2       0.04524     1.00159     4.941       9
    2.0     2       0.02984     1.00151     4.471
    0       2       0.5137      1.00126     3.106

Again convergence was obtained in each case, indicating that the system (46), (47), (49) is observable. We might have anticipated, on the other hand, from the linear diffusion example, that observing solely the concentration at x = 1 would not be sufficient to recover k. The flux boundary condition (47) leaves u(t,1) unknown, and because of the unique relation between k and each u(t,1), the system is observable. It appears that one cannot make a general statement about the type of measurements required in a distributed medium. Each problem must be handled separately within the framework above.

The algorithm converged rapidly in each case and did not oscillate as k = 1 was approached. A different choice of ε will produce a slightly different final result. We have assumed that the experimental measurements are error free or nearly so. When significant experimental errors prevail in y_j it is necessary to perform several similar experiments, from which the model parameters are estimated. The resulting parameter values determined can be analyzed in terms of statistical properties.

CONCLUSION

A steep descent algorithm for the identification of parameters in partial differential equations and associated boundary conditions is derived. Although the discussion includes only hyperbolic and parabolic systems of the class (5), the results may readily be extended to elliptic systems. In addition, extension to include parameters entering in the initial conditions and the input vector v(t,x) is straightforward. The convergence of the algorithm is proposed as a sufficient condition for observability of distributed parameter systems, particularly nonlinear systems.

The powerful and conceptually simple steep descent algorithm represents an efficient technique for data analysis. The value of the algorithm is evident upon considering the parameter identification problem in a highly coupled nonlinear distributed system. The analysis of kinetic data from a packed bed catalytic reactor in the regime of transport limitation represents a problem of this type for which the algorithm is ideally suited.

Finally, it is important to point out that we have neglected the effect of both dynamical and measurement errors on the identification problem. Often this cannot be done. The estimation (as opposed to identification) of states and parameters in nonlinear distributed systems subject to stochastic inputs and measurement disturbances is treated in the accompanying paper.

Acknowledgment - Acknowledgment is made to the donors of the Petroleum Research Fund, administered by the American Chemical Society, for partial support of this research.

NOTATION

    a        vector of unknown parameters in boundary condition (6)
    A        weighting matrix in J
    c        concentration of acetic anhydride
    D        binary diffusivity
    e_j      difference between observed and calculated output, e.g. (8)-(11)
    f_i      function of known form in (5), i = 1, ..., n
    F        arbitrary vector function in (36)
    g_1, g_2 functions of known form in (6) and (7)
    G        arbitrary vector function in (36)
    h_j      function of known form in output transformation, j = 1, ..., m
    H        output transformation operator
    J        criterion function
    k        vector of unknown parameters in (5)
    M        matrix of sensitivity coefficients μ_jl
    m        number of output observations
    n        number of state variables in model
    p        number of input variables in model
    q        number of unknown parameters k
    r        number of unknown parameters a, and radius variable
    R        radius of tubular reactor
    t, T     time variables
    u        state vector
    v        input variable vector
    v_0      centerline velocity in tubular reactor
    W        Green's function
    x        spatial variable
    y_j      experimental output observation, j = 1, ..., m
    z        axial variable along tubular reactor

Greek
    α_ij     elements of weighting matrix A
    β        variable in (45)
    γ        step length in steep descent algorithm
    δ        Dirac delta function
    ε        convergence criterion
    ξ        integration variable
    η        spatial location
    λ_jl     sensitivity coefficient, ∂u_j/∂k_l
    Λ        matrix of sensitivity coefficients λ_jl
    μ_jl     sensitivity coefficient, ∂u_j/∂a_l

Superscripts
    i        iteration number

REFERENCES
[1] ROSENBROCK H. H. and STOREY C., Computational Techniques for Chemical Engineers. Pergamon Press 1966.
[2] BELLMAN R. E., JACQUEZ J., KALABA R. and SCHWIMMER S., Math. Biosciences 1967 1 71.
[3] BELLMAN R. E., KAGIWADA H. H., KALABA R. and SRIDHAR R., J. Astronaut. Sci. 1966 13 3, 110.
[4] DETCHMENDY D. M. and SRIDHAR R., J. Basic Engng June 1966, 362.
[5] KALMAN R. E., Proc. 1st Int. Congr. Automatic Control 1960, 1961, 481.
[6] CLELAND F. A. and WILHELM R. H., A.I.Ch.E. Jl 1956 2 489.
[7] LAPIDUS L., Digital Computation for Chemical Engineers. McGraw-Hill 1962.
[8] WANG P. K. C., In Advances in Control Systems - 1 (C. T. LEONDES, editor). Academic Press 1964.


