0005-1098/86 $3.00 + 0.00 Pergamon Press Ltd. © 1986 International Federation of Automatic Control

Automatica, Vol. 22, No. 1, pp. 111-116, 1986 Printed in Great Britain.

Brief Paper

Application of Decomposition/Coordination Methods to Parameter Identification Problems in Interconnected Distributed Parameter Systems*

AXEL MUNACK† and MANFRED THOMA†

Key Words--Distributed parameter systems; large-scale systems; decomposition techniques; parameter identification; sensitivity analysis.

Abstract--Application of two decomposition/coordination methods to parameter identification problems for interconnected distributed parameter systems of parabolic type is treated. After formulation of both methods--penalization and re-injection--and some remarks with respect to the treatment of parameters occurring simultaneously in several subsystems, the problem of convergence of the decomposed estimates is addressed. This is attacked by a linearization around the actual parameter set, which allows us to solve the identification problems on the subsystem level directly. A simple example is used to demonstrate some features of coupled and decomposed identification procedures and to show the actual regions of local convergence for different sensor locations.

1. Introduction
Detailed modelling of chemical or biochemical processes in modern reactors often leads to distributed parameter models in which, due to mass transfer between phases, the partial differential equations describing the balances are coupled. Parameter identifications indicate that these processes often show unpredictable but slow variations of the system's parameters, which may be caused by variations of the biological material or by model inaccuracies. Hence, adaptive control schemes have turned out to be very powerful for such kinds of processes, see e.g. Munack (1980a). In these (heuristic) algorithms, optimization and parameter identification problems have to be solved on-line. Therefore, there is a great need for fast algorithms. In this paper, an application of decomposition/coordination methods to the latter problem is treated. We will not be including here an application of the methods to industrial processes. However, the procedure has already been applied to identify unknown parameters in the tower loop bioreactor which is described in detail in Luttmann, Munack and Thoma (1985).

Decomposition methods have already been used for optimization of distributed parameter systems in the last decade, cf. e.g. Bensoussan, Glowinski and Lions (1973), Cambon and LeLetty (1973), Pradin and Titli (1975) or Pradin (1979). Furthermore, theoretical results concerning a convergence analysis in the linear quadratic case have been developed, cf. Cohen (1980). Also a rigorous treatment of the parameter identification problem by use of optimal control theory has been performed by Chavent (1974). However, to the knowledge of the authors, no application of decomposition methods to this parameter identification procedure has been reported yet, besides Munack and Thoma (1982, the original version of this paper). The purpose of this paper is to demonstrate the features of decomposed parameter identification procedures from an applications point of view. In the opinion of the authors, the promising numerical results should encourage people who are treating complex parameter identification problems (e.g. in chemical or biotechnical plants, oil or water reservoirs, or in economic studies) to use these methods. On the other hand, further theoretical work may also be initiated.

2. Statement of the coupled problem and its solution
We treat systems described by a set of N coupled parabolic partial differential equations of the form ($i \in \{1, 2, \ldots, N\}$)

$$\frac{\partial y_i}{\partial t} - \frac{\partial}{\partial x}\left[ a_{2i}(x,P)\, \frac{\partial y_i}{\partial x} \right] + a_{1i}(x,P)\, \frac{\partial y_i}{\partial x} + a_{0i}(x,P)\, y_i = -\sum_{k\ne i} a_{0ik}(x,P)\, y_k + b_i(x,P)\, u_i + f_i(x,P) \quad \text{in } ]0,1[\ \times\ ]0,T[; \tag{1a}$$

with initial conditions

$$y_i(0) = y_{i0} \qquad \text{in } ]0,1[; \tag{1b}$$

and boundary conditions

$$-a_{2i}(0,P)\,\frac{\partial y_i}{\partial x}\Big|_{x=0} + c_{0i}\, y_i\big|_{x=0} = c_{0i}\, y_{e0i},$$
$$a_{2i}(1,P)\,\frac{\partial y_i}{\partial x}\Big|_{x=1} + c_{1i}\, y_i\big|_{x=1} = c_{1i}\, y_{e1i} \qquad \text{in } ]0,T[. \tag{1c}$$

For the left-hand side of (1a), including the boundary conditions (1c), we will also write

$$\frac{\partial y_i}{\partial t} + A_i(x,P)\, y_i. \tag{1d}$$

Arguments of the dependent variables are only written where this clarifies the notation. P is a vector of unknown parameters. We want to emphasize that a more general class of systems than that of equation (1) can be treated, but due to space limitations we restrict ourselves to the problem specified above. Without going into details, we also assume that each of the N equations, as well as the whole system, admits a unique solution in an appropriate solution space V over ]0,1[ × ]0,T[, cf. Lions (1971). The parameter identification problem is treated in the following using the method proposed by Chavent. It consists of reformulating the problem as an optimization problem and solving this by optimal control theory. To be concrete, we assume that
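To make the structure of system (1) concrete, the following sketch integrates a simplified two-subsystem instance of (1a) by an explicit method of lines. The function name, the grid sizes, the constant coupling coefficient 10, the unit forcing, and the homogeneous Dirichlet boundary conditions (used here in place of the third-kind conditions (1c)) are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

def simulate_coupled(a2, T=0.5, M=21, K=500):
    """Explicit finite-difference sketch of two coupled 1-D parabolic equations,
    dy_i/dt = a2[i] * d2y_i/dx2 + 10*(y_k - y_i) + 1, a simplified instance of (1a).
    Homogeneous Dirichlet BCs stand in for (1c); all numbers are illustrative.
    Stability of the explicit scheme requires dt <= dx**2 / (2*max(a2))."""
    dx = 1.0 / (M - 1)
    dt = T / K
    y = np.zeros((2, M))
    for _ in range(K):
        lap = np.zeros_like(y)
        # second-order central difference for the diffusion term
        lap[:, 1:-1] = (y[:, 2:] - 2 * y[:, 1:-1] + y[:, :-2]) / dx**2
        # the -a_{0ik} y_k coupling terms, here with constant coefficient 10
        coupling = np.array([y[1] - y[0], y[0] - y[1]])
        y = y + dt * (np.array(a2)[:, None] * lap + 10.0 * coupling + 1.0)
        y[:, 0] = y[:, -1] = 0.0  # boundary conditions
    return y
```

With equal conductivities the two subsystems receive identical updates, so their states coincide; this is a quick sanity check of the coupling implementation.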

*Received 6 September 1983; revised 4 December 1984; revised 13 May 1985. The original version of this paper was presented at the 3rd IFAC Symposium on Control of Distributed Parameter Systems which was held in Toulouse, France during June 1982. The published Proceedings of this IFAC Meeting may be ordered from Pergamon Press Limited, Headington Hill Hall, Oxford OX3 0BW, U.K. This paper was recommended for publication in revised form by Associate Editor T. Başar under the direction of Editor H. Kwakernaak.
†Institut für Regelungstechnik, Universität Hannover, Appelstraße 11, D-3000 Hannover 1, F.R.G.


there are L sensors located at each of the subsystems, which means that we have L × N measurements

$$z_i^j(t) = \int_0^1 \chi_i^j(x)\, y_i(x,t)\, \mathrm{d}x, \qquad i = 1,2,\ldots,N;\ \ j = 1,\ldots,L, \tag{2}$$

with $\chi_i^j$ denoting the spatial characteristic of the respective sensor. Assuming that the $\chi_i^j$ are known, we define an error

$$\varepsilon_i^j(t) = \int_0^1 \chi_i^j(x)\, y_{Mi}(x,t;P)\, \mathrm{d}x - z_i^j(t), \tag{3}$$

where $y_{Mi}(P)$ stands for the state of the model (1) with parameter vector P. In order to minimize the error, we introduce a functional

$$J(P) = \sum_{i=1}^{N} J^i(P), \qquad J^i(P) = \frac{1}{2}\sum_{j=1}^{L} \int_0^T w_i^j \big(\varepsilon_i^j(t)\big)^2\, \mathrm{d}t, \tag{4}$$

which enables us to formulate the parameter identification problem as an optimization problem. Note that the overall functional consists of a sum of subfunctionals, each subfunctional being evaluated at the corresponding subsystem. The objective is to find $P_{opt}$ with $J(P_{opt}) \le J(P)\ \forall P \in P_{ad}$, $P_{opt} \in P_{ad}$, where $P_{ad}$ is a set of admissible parameters, usually defined by physical considerations. After definition of the adjoint states $p_i$, characterized by

$$-\frac{\partial p_i}{\partial t} + A_i^*(x,P)\, p_i = \sum_{j=1}^{L} \chi_i^j\, w_i^j\, \varepsilon_i^j - \sum_{k\ne i} a_{0ki}(x,P)\, p_k, \tag{5a}$$

with final conditions

$$p_i(T) = 0, \tag{5b}$$

and boundary conditions

$$-a_{2i}(0,P)\,\frac{\partial p_i}{\partial x}\Big|_{x=0} + \big(c_{0i} - a_{1i}(0,P)\big)\, p_i\big|_{x=0} = 0,$$
$$a_{2i}(1,P)\,\frac{\partial p_i}{\partial x}\Big|_{x=1} + \big(c_{1i} + a_{1i}(1,P)\big)\, p_i\big|_{x=1} = 0 \qquad \text{in } ]0,T[, \tag{5c}$$

one can calculate the gradient of the functional J w.r.t. the parameter vector P. Since the gradient is known, one can effectively make use of rather sophisticated optimization algorithms. In particular, modified Newton methods have proved to work highly satisfactorily in the identification of practical problems (Munack, 1980b) when the initial guess of P is not too bad. On the other hand, one has to face the situation that the functional is not convex in nearly all non-academic problems. Even for very simple problems one can get very ill-behaved functionals; a simple example is shown in the Appendix, see also Fig. 3. So in the case of coupled distributed parameter systems there is usually no guarantee that the absolute minimum of the functional can be found by gradient techniques. Problems of observability and identifiability, however, seem to be not as crucial as in the single-system case: by means of the couplings between different systems, usually all modes of the systems are excited. However, as will be shown, the local convergence properties of the decomposed identification algorithms may be strongly influenced by the sensor positions.

3. Resolution by decomposition/coordination methods
In the following, we treat solutions by decomposition/coordination methods, particularly those based on penalization of the decomposed functional and the so-called 're-injection' strategy or 'equality method'. In our research, where several simulation studies were performed, these two methods have turned out to be more efficient for the resolution of optimization and particularly identification problems than the equally well-known Lagrange multiplier techniques.

For the moment, we will assume that the functional (4) may be decomposed into subfunctionals $J^i(P_i)$, where the $P_i$ are subvectors of P and form a non-overlapping partition of P. This means that at each subsystem a subfunctional $J^i$ has to be minimized w.r.t. a subvector $P_i$. Couplings between the subsystems then only occur via the states. The case of couplings via the parameters (overlapping partition of the parameter vector) will be addressed in Remark 3.

3.1. Penalty function method. The application of the penalty function method consists of a parametric decomposition of the N coupled differential equations (1) and a modification of the subsystem functionals (4); in particular, a substitution of $\sum_{k\ne i} a_{0ik}(x,P)\, y_k$ by $\sum_{k\ne i} a_{0ik}(x,P)\, v_k$ leads to

$$\frac{\partial y_{Mi}}{\partial t} + A_i(x,P)\, y_{Mi} = -\sum_{k\ne i} a_{0ik}(x,P)\, v_k + b_i(x,P)\, u_i + f_i(x,P), \tag{6}$$

the coordination variables $v_k$ being fixed for each optimization at the subsystem level. The restriction $v_k = y_{Mk}$, k = 1, ..., N, is introduced into (4) by penalty terms, yielding

$$J^i_{mod}(P_i) = J^i(P_i) + \frac{h}{2} \int_0^T \big\langle y_{Mi} - v_i,\ D_i\,(y_{Mi} - v_i) \big\rangle\, \mathrm{d}t, \tag{7}$$

where the $D_i$ are positive definite, symmetrical operators on the solution space V. So one has decoupled subproblems, which may be minimized on the subsystem level by means of, for example, Newton techniques, wherein the gradient is computed using the adjoint states $p_i$, which are given by

$$-\frac{\partial p_i}{\partial t} + A_i^*(x,P)\, p_i = \sum_{j=1}^{L} \chi_i^j\, w_i^j\, \varepsilon_i^j + h\, D_i\,(y_{Mi} - v_i) \tag{8}$$

together with (5b) and (5c). This results in a parameter vector $P^*$ with $J^i_{mod}(P^*) \le J^i_{mod}(P)\ \forall P \in P_{ad}$; $P^* \in P_{ad}$. At the coordination level one makes use of the fact that the gradient of the overall performance index w.r.t. v is known and that this functional attains a minimum for the optimal v. So a gradient strategy for updating v at iteration l on the upper level leads to

$$v_i^{(l+1)} = v_i^{(l)} - \rho_l\, \frac{\partial J_{mod}}{\partial v_i}\Big|^{(l)}. \tag{9}$$

h is increased at each iteration, and in the limit as h → ∞ the overall optimum of the functional (4) is achieved. It could be shown in simulations, cf. Ronge (1981), that in the case $D_i$ = identity it may be advantageous to pass to the limit h → ∞ in (9); this leads to a state re-injection type iteration with

$$v_i^{(l+1)} = y_{Mi}^{(l)}\big(\hat P^{(l)}\big). \tag{10}$$

On the subsystem level, however, one has to take a finite h in order to guarantee convergence.

3.2. Re-injection method. Application of the re-injection strategy consists of a parametric decomposition of the complete set of optimality conditions for the coupled problem. This leads to a parameterization (6) of equation (1) as stated above and a parameterization of the corresponding adjoint equations, yielding adjoint equations for each subsystem of the form

$$-\frac{\partial p_i}{\partial t} + A_i^*(x,P)\, p_i = \sum_{j=1}^{L} \chi_i^j\, w_i^j\, \varepsilon_i^j - \sum_{k\ne i} a_{0ki}(x,P)\, q_k, \tag{11}$$

with (5b) and (5c), the adjoint coordination variables $q_k$ being fixed at the subsystem level. This means that on the subsystem level a modified functional

$$J^i_{mod}(P_i) = J^i(P_i) + \sum_{k\ne i} \int_0^T\!\!\int_0^1 \big( a_{0ki}\, q_k\, y_{Mi} - a_{0ik}\, v_k\, p_i \big)\, \mathrm{d}x\, \mathrm{d}t \tag{12}$$

is minimized. The coordinator uses the simple strategy

$$v_i^{(l+1)} = y_{Mi}^{(l)}\big(\hat P^{(l)}\big), \qquad q_i^{(l+1)} = p_i^{(l)}\big(\hat P^{(l)}\big). \tag{13}$$
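The coordinator rule (13) is a plain fixed-point substitution, which the following sketch expresses generically. Here `solve_subsystem` is a hypothetical callable standing in for the subsystem-level identification with frozen coordination variables, and the contractive toy map in the test is purely illustrative.

```python
def reinjection_coordinate(solve_subsystem, v0, q0, iters=20):
    """Sketch of the re-injection coordinator (13): each subsystem problem is
    solved with the coordination variables (v, q) frozen, then v and q are
    re-injected from the resulting state and adjoint. `solve_subsystem` is a
    user-supplied callable (hypothetical interface) returning (y_M, p)."""
    v, q = v0, q0
    for _ in range(iters):
        y, p = solve_subsystem(v, q)
        v, q = y, p  # coordinator strategy (13): re-inject state and adjoint
    return v, q
```

For a contractive subsystem map the iteration settles at the fixed point where v equals the model state and q equals the adjoint, which is exactly the stationarity situation described in Remark 1.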

A favourable start for the procedure is to take v⁰ from a simulation of the coupled system with the initially guessed parameters, and q⁰ = 0. Note that in the optimum, without measurement noise and without structural errors, $p_i = 0$ holds; this is different from optimal control problems.

Remark 1 (Functional modification): The two terms in brackets contained in the modified functional (12) form a so-called zero-sum modification of the global cost. This means that at the end, when convergence of the procedure has occurred, (13) becomes stationary with $v_k = y_k$ and $q_k = p_k$. Then one can see by rearranging the terms in (12) that the two sums just cancel. So, at the end, the original overall cost functional is attained.

Remark 2 (Coupling measurements): In the particular case where coupling measurements are available, these measurements may be taken directly to decouple the system equations. But in most practical cases, these measurements are not available. Furthermore, if there is measurement noise, one has to decide whether to take these measurements directly or not; this is a situation comparable with that discussed by Tacker and Sanders (1980) for the state estimation problem.

Remark 3 (Identification of coupled parameters): The above stated algorithms may be used as they are for unknown parameters in all coefficient functions except special cases of $a_{0i}$ and $a_{0ik}$. These exceptions are imposed by physical considerations concerning, for example, mass transfer between phases, heat transfer, etc. There the coefficients in different subsystems are related to each other in order to fulfil the balance equations. For instance, in heat transfer between two subsystems m and n, a meaningful model must include the equality between the outgoing heat flux from subsystem m and the incoming heat flux into subsystem n, both described by the heat transfer coefficient and the temperature difference between the two subsystems. If the heat transfer coefficient is an unknown parameter, it occurs as unknown in both subsystems, but must come out with the same estimated value. To treat a rather general case, we assume a relation to be given for the parameters $P_m$ and $P_n$ by

$$f_m(P_m) = f_n(P_n). \tag{14}$$

In order to incorporate this relation into the decomposed identification procedure, it would be possible to identify these two parameters at the coordination level. This corresponds to a separation of the set of unknown parameters into local subsets and a globally treated subset, which seems quite reasonable. However, a great disadvantage lies in this type of strategy: one loses the interdependence of parameter changes on the subsystem level, since the globally treated subset of parameters remains fixed during optimization of the subproblems. If the couplings between different subsystems are not too strong, then the influences of the various parameters of a single subsystem on the state of just this subsystem, and their mutual interdependences, are more significant than the influences of the subsystem's parameters on other subsystems via the state couplings. So it seems more advantageous to identify these parameters at the subsystem level, too. The functional relations of type (14) then have to be included in some way in the identification functional in order to guarantee their fulfilment, at least at the end of the iteration process. Since the functional itself is not convex, Lagrange-type inclusion is not possible in general. A pure penalty term is not suitable either, for this would deteriorate the convergence rate of the algorithm. Highly satisfactory results, however, have been obtained with an augmented Lagrangian modification, cf. Bertsekas (1976):

$$J_{ad} = J + \lambda\big(f_m(P_m) - f_n(P_n)\big) + \frac{h'}{2}\Big[\big(f_m(P_m) - \bar f^{*}_{mn}\big)^2 + \big(f_n(P_n) - \bar f^{*}_{mn}\big)^2\Big]. \tag{15}$$

Here $\bar f^{*}_{mn}$ denotes an average value of $f_m(\hat P^{*}_m)$ and $f_n(\hat P^{*}_n)$ from the last step. Starting with λ = h' = 0, at the coordination level at each iteration h' is increased and a gradient step is performed for maximization of $J_{ad}$ w.r.t. λ. The observed convergence rate is pretty fast, see Ronge (1981), and convergence is obtained without passing to the limit h' → ∞.
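The multiplier-plus-penalty coordination behind (15) can be sketched on a toy problem. The quadratic cost, the identity choice for $f_m$ and $f_n$, the classical penalty $(h'/2)(P_m - P_n)^2$ used in place of the paper's averaged terms, and all step sizes are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def augmented_lagrangian_equality(iters=30):
    """Method-of-multipliers sketch for a coupled-parameter constraint of
    type (14): toy cost (Pm-3)^2 + (Pn-1)^2 with the constraint Pm = Pn.
    Structurally analogous to (15), with the classical penalty form."""
    lam, h = 0.0, 1.0
    Pm = Pn = 0.0
    for _ in range(iters):
        # the inner minimization is quadratic, so solve the 2x2 linear system
        # from the stationarity conditions of the augmented Lagrangian
        A = np.array([[2.0 + h, -h], [-h, 2.0 + h]])
        b = np.array([6.0 - lam, 2.0 + lam])
        Pm, Pn = np.linalg.solve(A, b)
        lam += h * (Pm - Pn)  # multiplier (gradient ascent) step
        h *= 1.5              # penalty weight increased at each iteration
    return Pm, Pn, lam
```

For this toy problem the constrained optimum is Pm = Pn = 2 with multiplier λ = 2, and the iteration reaches it without h' having to grow without bound, mirroring the behaviour reported above.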

4. Convergence analysis for the decomposed identification procedure
A global convergence analysis for the decomposed identification procedure has not been accomplished until now. Local results, however, can be obtained by studying the behaviour of the linearized problem, which is derived by a sensitivity analysis. We assume the ideal situation that no stochastic processes are involved and that there are neither structural model errors nor errors in the other system parameters which are not to be identified. Furthermore, we presume that the identification functional of the coupled problem exhibits a (local) minimum for the true parameter set. Then we can state that an identification by means of a treatment of the coupled system leads to the correct parameter set if the starting values are not too bad. The question to be answered now is whether the identification procedure with decomposition/coordination leads to the same estimated parameters. A necessary condition for this to hold is that the modified identification functional shows a (local) minimum at just this parameter set. However, this does not imply that this minimum is computable with the decomposition/coordination method. To show this, the single steps involved in the re-injection identification procedure are examined in some detail in the following.

For the local consideration it is assumed that identification of the unknown parameters has successfully been performed. Then a small but arbitrary variation δP of the system parameters is introduced, which leads to a small perturbation in the measurements. It is checked whether the decomposition/coordination procedure is able to identify this small variation in the system parameters, which means that successive estimates $\delta\hat P^{(l)}$, l = 1, 2, ..., do converge to δP. The objective is to establish a relation

$$\delta \hat P^{(l)} = T^{(l)}\, \delta P \tag{16}$$

and to compute the $T^{(l)}$, such that $\|T^{(l)} - I\|$ for l → ∞ may be checked for convergence. In computations, only a few consecutive members of $T^{(l)}$ can be computed, and one may object that this of course does not give any proof of convergence or divergence. From practical considerations, however, one has to face the situation that in numerical calculations for identification purposes, too, only a small number of iterations is performed, and the optimization routine is halted after a prescribed maximum number of iterations or when the norm of the difference of consecutive estimates or functionals is small enough. Therefore, the following procedure will give the region of 'practical convergence' of the algorithm. As already mentioned, the local consideration starts with the assumption that the identification of the true parameter vector $\bar P$ has been accomplished, which means

$$\frac{\partial y_{Mi}}{\partial t} + A_i(x,\bar P)\, y_{Mi} = -\sum_{k\ne i} a_{0ik}(x,\bar P)\, v_k + b_i(x,\bar P)\, u_i + f_i(x,\bar P), \tag{17a}$$

$$y_{Mi}(0) = y_{i0}; \qquad v_i = y_{Mi} = y_i; \qquad p_i = q_i = 0; \qquad \hat P = \bar P. \tag{17b}$$

Now a small variation of the measurements is introduced, caused by a small, arbitrary perturbation of the system parameters. This variation is given by


$$\delta z_i^j = \left[\int_0^1 \chi_i^j(x)\, \frac{\partial y_i}{\partial P}\, \mathrm{d}x\right] \delta P. \tag{18}$$

Small variations of the state of the decomposed model used for identification can be described approximately by the sensitivity


functions $\delta y_{Mi}$, neglecting higher-order terms. These are solutions of

$$\frac{\partial\, \delta y_{Mi}}{\partial t} + A_i(x,P)\, \delta y_{Mi} = -\left[\frac{\partial}{\partial P}\big(A_i(x,P)\, y_{Mi}\big)\right]^{T} \delta \hat P - \sum_{k\ne i} a_{0ik}\, \delta v_k, \tag{19a}$$

$$\delta y_{Mi}(0) = 0, \tag{19b}$$

$$-a_{2i}(0)\,\frac{\partial\, \delta y_{Mi}}{\partial x}\Big|_{x=0} + c_{0i}\, \delta y_{Mi}\big|_{x=0} = \left[\frac{\partial a_{2i}(0)}{\partial P}\right]^{T} \delta \hat P\ \frac{\partial y_{Mi}}{\partial x}\Big|_{x=0},$$
$$a_{2i}(1)\,\frac{\partial\, \delta y_{Mi}}{\partial x}\Big|_{x=1} + c_{1i}\, \delta y_{Mi}\big|_{x=1} = -\left[\frac{\partial a_{2i}(1)}{\partial P}\right]^{T} \delta \hat P\ \frac{\partial y_{Mi}}{\partial x}\Big|_{x=1}. \tag{19c}$$

Therefore the variation of the model state at iteration step l can be expressed as a superposition, caused on the one hand by variations of the estimated parameters and on the other hand by variations of the coordination variables:

$$\delta y_{Mi}^{(l)} = \delta y_{Mi,P}\, \delta \hat P^{(l)} + \delta y_{Mi,v}^{(l)}. \tag{20}$$

The variation $\delta p_i$ of the adjoint state (which is equal to the adjoint state $p_i$ itself, since $p_i = 0$ for the nominal parameter vector) is given by

$$-\frac{\partial\, \delta p_i}{\partial t} + A_i^*(x,P)\, \delta p_i = \sum_{j=1}^{L} \chi_i^j\, w_i^j \left( \int_0^1 \chi_i^j\, \delta y_{Mi}\, \mathrm{d}x - \delta z_i^j \right) - \sum_{k\ne i} a_{0ki}\, \delta q_k, \tag{21a}$$

$$\delta p_i(T) = 0, \tag{21b}$$

and boundary conditions (5c), writing $\delta p_i$ instead of $p_i$. Variations of the adjoint state are caused by variations of the modelled measurements (cf. (20)), of the measurements (18), and of the adjoint coordination variables:

$$\delta p_i^{(l)} = \delta p_{i,P}\, \delta \hat P^{(l)} + \delta p_{i,v}^{(l)} + \delta p_{i,z}\, \delta P + \delta p_{i,q}^{(l)}. \tag{22}$$
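Sensitivity functions such as those governed by (19) can always be cross-checked against finite differences. The scalar toy dynamics below, the explicit Euler discretization, and all numbers are illustrative assumptions, chosen only to show that the forward sensitivity recursion is the exact derivative of the discretized state recursion.

```python
def state_and_sensitivity(a, T=1.0, K=1000):
    """Forward-sensitivity sketch in the spirit of (19): for the toy dynamics
    dy/dt = -a*y + 1, the sensitivity s = dy/da obeys the linearized equation
    ds/dt = -a*s - y, integrated alongside the state by explicit Euler.
    A scalar stand-in for the PDE sensitivity system, not the paper's equations."""
    dt = T / K
    y, s = 0.0, 0.0
    for _ in range(K):
        # simultaneous update: the s-step uses the old y, which is exactly
        # the derivative of the discrete Euler map w.r.t. a
        y, s = y + dt * (-a * y + 1.0), s + dt * (-a * s - y)
    return y, s

# cross-check against a central finite difference (illustrative values)
a0, eps = 2.0, 1e-5
_, s = state_and_sensitivity(a0)
fd = (state_and_sensitivity(a0 + eps)[0] - state_and_sensitivity(a0 - eps)[0]) / (2 * eps)
```

Because the sensitivity recursion differentiates the discrete map itself, the two numbers agree to far better accuracy than the discretization error of either trajectory.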

Optimization at the subsystem level makes use of the fact that the functional derivatives w.r.t. the unknown parameters may be computed using the results of (17), (19) and (21) by means of the well-known gradient formulae, cf. Chavent (1974) or Seinfeld and Chen (1974). A necessary condition for optimality is that the derivatives become zero. Due to the linear expressions, this condition can be evaluated analytically at the subsystem level. In the first step, $\delta v_i^{(1)} \equiv 0$ and $\delta q_i^{(1)} \equiv 0$ lead to $\delta p_{i,v}^{(1)} \equiv \delta p_{i,q}^{(1)} \equiv 0$. Thus the gradient calculations using (22) result in a linear relation

$$\delta \hat P^{(1)} = T^{(1)}\, \delta P, \tag{23}$$

implying that the states $\delta y^{(1)}$ and $\delta p^{(1)}$ are linear in δP, too. Since the re-injection type coordination strategy simply consists of

$$\delta v_i^{(l+1)} = \delta y_{Mi}^{(l)}, \tag{24a}$$
$$\delta q_i^{(l+1)} = \delta p_i^{(l)}, \tag{24b}$$

one can show, using the same arguments for consecutive iteration steps, that both $\delta p_{i,v}^{(l)}$ and $\delta p_{i,q}^{(l)}$ are linear in δP. This results in

$$\delta \hat P^{(l)} = T^{(l)}\, \delta P \tag{25}$$

for all steps, which is the relation in demand. In order to clarify this result, we want to emphasize again that for a check of 'practical convergence', which is needed for computational calculations, one has to show that $T^{(l)} \to I$ (up to a reasonable accuracy) within only a few iterations.

The computation of $T^{(l)}$ is rather time-consuming in complex problems. To be concrete, for Q unknown parameters it is necessary to solve the coupled system of N PDEs once for the nominal parameter set and Q times for the computation of the state sensitivities of the coupled system. Next, one has to evaluate the state sensitivities of the decomposed subsystems and of the adjoint systems, which requires 2Q computations of the decoupled systems' and adjoint equations. After this 'setting-up' of the procedure, the calculation of the consecutive matrices $T^{(l)}$ requires the solution of Q decoupled systems and their adjoints for each iteration step, where the effects of the last terms of (20) and (22), respectively, on the subsystems' states and adjoint states are computed. All other calculations (gradient components, superposition of solutions, etc.) are simple and not time-consuming at all. The final result, however, i.e. the answer to the question whether a certain set of parameters may be identified by use of the decomposition/coordination procedure, is essential for the application of the method. Computations with different sets of nominal parameters enable one to determine the regions in the parameter space for which an identification by decomposition methods is possible. This will be demonstrated in the next section by means of a simple but quite instructive example.

5. Example
The system consists of two coupled heat conductors, where the heat conductivity in each subsystem is unknown. System equation and identification functional (two pointwise measurements) are shown in the Appendix. The functional plot for true system parameters a21 = 3.5, a22 = 2.5 (Fig. 1) shows that in addition to the absolute minimum there exists another relative minimum. In case of an initial estimate ① of the parameters, the coupled as well as the decomposed parameter estimation algorithm converges to the nominal parameters. The decrease of the identification functional w.r.t. computation time is remarkably faster for the decomposed algorithm, see Fig. 2. Starting at a point ②, cf. Fig. 1, leads to another situation: though identification by the coupled algorithm leads to the relative minimum, this does not hold for the decoupled methods. As Fig. 1 shows, identification by means of the decomposition methods happens to give the correct parameters in this case. The converse may also be true. As demonstrated in Fig. 3 for true system parameters a21 = 0.001, a22 = 0.0001, there exist cases where the coupled algorithm converges to the global optimum while the decoupled one ends in a relative one. Performing now the local convergence analysis for this system, one obtains the results shown in Fig. 4. These results clearly indicate that identification of a system with true parameter set a21 = 0.001, a22 = 0.0001 by the decomposition/coordination methods presented above is not possible.

FIG. 1. Plot of the functional 10·J¹ for a21 = 3.5, a22 = 2.5 and typical identification pathways: solid line, coupled identification; dotted line, decomposed identification (re-injection strategy).
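The 'practical convergence' criterion of Section 4 (whether $T^{(l)}$ approaches the identity within a few iterations) can be expressed as a small helper. The function name, the Frobenius-norm tolerance, and the matrix sequence in the test are illustrative assumptions; the test sequence is a fabricated contraction, not a computed example.

```python
import numpy as np

def practical_convergence(T_seq, tol=0.1):
    """Check of 'practical convergence' from Section 4: the decomposed
    procedure is accepted if some T^(l) comes within `tol` of the identity
    in Frobenius norm. T_seq holds the computed matrices T^(1), T^(2), ...
    Returns the first such iteration index, or None if none qualifies."""
    for l, T in enumerate(T_seq, start=1):
        if np.linalg.norm(T - np.eye(T.shape[0])) < tol:
            return l  # converged after l iterations
    return None
```

Only a few leading members of the sequence are ever computed in practice, which is exactly why the check is phrased over a short finite list rather than as a limit.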


FIG. 4. Area of local convergence for the decomposed identification procedure in case of the example (re-injection strategy).


FIG. 2. Course of different identification procedures (initialization at ①; cf. Fig. 1).

The absolute minimum of Fig. 1 and the relative minimum of Fig. 3 both lie in the region of convergence. This causes the procedure to identify this parameter set. The question arises whether there is any possibility to enlarge the region of local convergence. A first attempt may be to use a theoretical result developed by Cohen (1980), who proved convergence in the linear quadratic case if a term

$$\epsilon\, \big\| \hat P^{(l)} - \hat P^{*(l-1)} \big\|^2, \qquad \epsilon > 0, \tag{26}$$

is included in the functional for sufficiently large ε. In some numerical tests for this example it could be confirmed that convergence in fact can be assured for parameter identifications by this functional modification, too. However, since the necessary values of ε are very large, a very slow convergence behaviour is observed. In our numerical experience, this can be improved by variable scaling,

$$\epsilon\, \big(\hat P^{(l)} - \hat P^{*(l-1)}\big)^{T}\, W(P^*)\, \big(\hat P^{(l)} - \hat P^{*(l-1)}\big), \tag{27a}$$

with

$$w_{ii} = \big(\hat P^{*(l-1)}_i\big)^{-2}; \qquad w_{ij} = 0,\ \ i \ne j. \tag{27b}$$

But still, convergence is much slower than for the coupled identification procedure. Therefore this modification can only be recommended if reasons other than savings of computation time require decomposition/coordination techniques to be applied (e.g. small RAM in the computer), or if other attempts to enlarge the region of convergence have already failed. Another possibility is to check whether the measurements used are the best one can take. For the example discussed here, the optimal sensor positions for each parameter set can easily be calculated by means of standard techniques based on the Fisher information matrix, cf. Qureshi, Ng and Goodwin (1980) and (for this example) Munack (1985). This leads to a drastic enlargement of the region of local convergence, as shown in Fig. 5. Remark: Note the logarithmic scaling; in linear scaling, nearly the whole first quadrant is filled by the marked area.

6. Conclusions
It has been demonstrated that decomposition/coordination methods may successfully be applied to parameter identification problems with interconnected distributed parameter systems. The simple example presented here, as well as a number of simulations, shows that a decoupled treatment offers great advantages compared with a solution of the overall identification problem. This is due to the fact that at some distance from the true parameters the global identification functional is highly nonquadratic, which results in slow convergence for the coupled optimization. In the very near region of the optimum, the greatest drawback of coupled optimization lies in inaccurate gradient calculations, and therefore again very slow convergence is achieved. (This latter feature does not occur in the example, since
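Fisher-information-based sensor placement, as referenced above (Qureshi, Ng and Goodwin, 1980), can be sketched as a discrete search over candidate positions. The sensitivity fields, the grid, the D-optimality criterion, and the unit-noise assumption below are fabricated for illustration and are not the example's actual sensitivities.

```python
import numpy as np

def best_sensor_position(sens, xs):
    """D-optimal sensor placement sketch: sens[q, k, t] holds the sensitivity
    of the output at candidate position xs[k] w.r.t. parameter q over time.
    The Fisher information matrix at a position is the Gram matrix of the
    sensitivities sampled there (unit measurement noise assumed); we pick
    the position maximizing its determinant. Illustrative toy only."""
    best_k, best_det = 0, -np.inf
    for k in range(len(xs)):
        S = sens[:, k, :]   # Q x T sensitivity samples at this position
        M = S @ S.T         # Fisher information matrix
        d = np.linalg.det(M)
        if d > best_det:
            best_k, best_det = k, d
    return xs[best_k], best_det

# fabricated sensitivities for two parameters on a 1-D grid
xs = np.linspace(0.0, 1.0, 11)
t = np.linspace(0.0, 1.0, 50)
sens = np.stack([np.outer(np.sin(np.pi * xs), t),
                 np.outer(np.sin(2.0 * np.pi * xs), 1.0 - t)])
x_best, d = best_sensor_position(sens, xs)
```

The chosen position is the one where both parameters' sensitivities are simultaneously large and linearly independent in time, which is the mechanism behind the enlarged convergence region of Fig. 5.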

FIG. 3. Plot of the functional J¹ for a21 = 0.001, a22 = 0.0001 and typical identification pathways: solid line, coupled identification; dotted line, decomposed identification (re-injection strategy).

"

,

1

i

Ig a21

FIG. 5. Area of local convergence for the decomposed identification procedure in case of optimally allocated sensors.


the system used here is too simple.) So, mainly numerical considerations suggest the use of the decomposed algorithms, even when no parallel computation is available. Using parallel computation, e.g. a multi-microprocessor system, there is no doubt that the decomposed algorithms should be chosen. In cases where several relative minima exist, it is not obvious to which point both algorithms converge. If the measurements allow the functional to attain a local minimum for the true parameter set, which is the case in nearly all non-academic applications, the coupled algorithm is locally convergent to the nominal parameter vector. For the decomposed identification procedure using the re-injection method, a local convergence analysis has been presented in this paper. This convergence analysis is used to check whether a given parameter vector may be identified by decomposition/coordination methods or not, which means that some a priori knowledge about the possible range of the system parameters is needed. This, however, is also the case for the coupled algorithms, since meaningful initial values are needed in order to avoid suboptimal solutions. It has been shown that optimization of sensor positions leads to a larger area of local convergence for the decomposed algorithm. A further functional modification may be added in critical cases; however, the rate of convergence may be diminished drastically by this.

References
Bensoussan, A., R. Glowinski and J. L. Lions (1973). Méthode de décomposition appliquée au contrôle optimal de systèmes distribués. In 5th IFIP Conf. on Optimization Techniques, Rome, Pt I, Lecture Notes in Control and Information Sciences 3, pp. 141-151. Springer, Berlin.
Bertsekas, D. P. (1976). Multiplier methods: a survey. Automatica, 12, 133-145.
Cambon, Ph. and L. LeLetty (1973). Applications of decomposition and multi-level techniques to the optimization of distributed parameter systems. In 5th IFIP Conf. on Optimization Techniques, Rome, Pt I, Lecture Notes in Control and Information Sciences 3, pp. 538-553. Springer, Berlin.
Chavent, G. (1974). Identification of functional parameters in partial differential equations. In R. E. Goodson and M. Polis (Eds.), Identification of Parameters in Distributed Systems, pp. 31-48. ASME, New York.
Cohen, G. (1980). Auxiliary problem principle and decomposition of optimization problems. JOTA, 32, 277-305.
Lions, J. L. (1971). Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin.
Luttmann, R., A. Munack and M. Thoma (1985). Mathematical modelling, parameter identification and adaptive control of single cell protein processes in tower loop bioreactors. In A. Fiechter (Ed.), Advances in Biochemical Engineering, 32, pp. 95-205. Springer, Berlin.
Munack, A. (1980a). Application of adaptive control to a bubble-column fermenter. In Proc. 4th Int. Conf. on Analysis and Optimization of Systems, Versailles, Lecture Notes in Control and Information Sciences 28, pp. 516-535. Springer, Berlin.
Munack, A. (1980b). Zur Theorie und Anwendung adaptiver Steuerungsverfahren für eine Klasse von Systemen mit verteilten Parametern. Dissertation, Universität Hannover.
Munack, A. (1985). Parameter identification problems for interconnected distributed parameter systems and applications to a biotechnological plant. In 2nd Conf. on Control Theory for Distributed Parameter Systems and Applications, Vorau, Lecture Notes in Control and Information Sciences, to appear.
Pradin, B. (1979). Calcul hiérarchisé pour la commande en boucle ouverte de systèmes à paramètres répartis. Thèse Docteur d'État, Université Paul Sabatier, Toulouse.
Pradin, B. and A. Titli (1975). Methods of decomposition-coordination for the optimization of interconnected, distributed parameter systems. 6th IFAC World Congress, Boston, Pt IA, Paper 15.3.
Qureshi, Z. H., T. S. Ng and G. C. Goodwin (1980). Optimum experimental design for identification of interconnected, distributed parameter systems. Int. J. Control, 31, 21-29.
Ronge, P. (1981). Parameteridentifikation für eine Klasse von gekoppelten Systemen mit verteilten Parametern mit Hilfe von Dekompositionsmethoden. Diplomarbeit, Institut für Regelungstechnik, Universität Hannover (unpublished).
Seinfeld, J. H. and W. H. Chen (1974). Estimation of parameters in distributed systems. In R. E. Goodson and M. Polis (Eds.), Identification of Parameters in Distributed Systems, pp. 69-90. ASME, New York.
Tacker, E. C. and C. W. Sanders (1980). Decentralized structures for state estimation in large-scale systems. Large Scale Systems, 1, 39-49.

Appendix: Description of the example used in Section 5

PDE:
$$\frac{\partial y_1}{\partial t} = a_{21}\, \frac{\partial^2 y_1}{\partial x^2} + 10\,(y_2 - y_1),$$
$$\frac{\partial y_2}{\partial t} = a_{22}\, \frac{\partial^2 y_2}{\partial x^2} + 10\,(y_1 - y_2) \qquad \text{in } ]0,T[\ \times\ ]0,1[;$$

IC:
$$y_1(0,x) = y_2(0,x) = 0 \qquad \text{in } ]0,1[;$$

BC:
$$y_1(t,0) = 1, \qquad \frac{\partial y_1}{\partial x}\Big|_{x=1} = 0 \qquad \text{in } ]0,T[,$$
$$y_2(t,1) = -1, \qquad \frac{\partial y_2}{\partial x}\Big|_{x=0} = 0 \qquad \text{in } ]0,T[.$$

Functional:
$$J^1 = \int_0^T \big(y_{M1}(t,0.9) - y_1(t,0.9)\big)^2 + \big(y_{M2}(t,0.1) - y_2(t,0.1)\big)^2\, \mathrm{d}t.$$
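A minimal numerical sketch of this Appendix example, assuming an explicit finite-difference scheme, illustrative grid sizes, and a rectangle-rule quadrature for J¹; it is not the authors' implementation.

```python
import numpy as np

def measurements(a21, a22, T=0.2, M=51, K=4000):
    """Explicit finite differences for the Appendix system; returns the two
    pointwise measurement time series at x = 0.9 and x = 0.1.
    Stability requires dt <= dx**2 / (2*max(a21, a22)); grid is illustrative."""
    dx, dt = 1.0 / (M - 1), T / K
    i9, i1 = int(round(0.9 * (M - 1))), int(round(0.1 * (M - 1)))
    y1, y2 = np.zeros(M), np.zeros(M)
    z1, z2 = [], []
    for _ in range(K):
        l1, l2 = np.zeros(M), np.zeros(M)
        l1[1:-1] = (y1[2:] - 2 * y1[1:-1] + y1[:-2]) / dx**2
        l2[1:-1] = (y2[2:] - 2 * y2[1:-1] + y2[:-2]) / dx**2
        # simultaneous update so the coupling uses the old states
        y1, y2 = (y1 + dt * (a21 * l1 + 10.0 * (y2 - y1)),
                  y2 + dt * (a22 * l2 + 10.0 * (y1 - y2)))
        y1[0] = 1.0;  y1[-1] = y1[-2]    # y1: Dirichlet at x=0, Neumann at x=1
        y2[-1] = -1.0; y2[0] = y2[1]     # y2: Dirichlet at x=1, Neumann at x=0
        z1.append(y1[i9]); z2.append(y2[i1])
    return np.array(z1), np.array(z2)

def J1(model_params, true_params, T=0.2, K=4000):
    """Identification functional J^1 of the Appendix, rectangle rule in time."""
    zm1, zm2 = measurements(*model_params)
    zs1, zs2 = measurements(*true_params)
    return (T / K) * float(np.sum((zm1 - zs1)**2 + (zm2 - zs2)**2))
```

Evaluating J1 over a grid of (a21, a22) pairs against fixed "true" parameters reproduces the kind of functional landscape plotted in Figs 1 and 3.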