Computers chem. Engng, Vol. 14, No. 4/5, pp. 469-479, 1990. Printed in Great Britain. All rights reserved.
0098-1354/90 $3.00 + 0.00. Copyright © 1990 Pergamon Press plc.
FEEDBACK CONTROL OF CHEMICAL PROCESSES USING ON-LINE OPTIMIZATION TECHNIQUES

J. W. EATON and J. B. RAWLINGS†

Department of Chemical Engineering, The University of Texas at Austin, Austin, TX 78712, U.S.A.

(Received 23 October 1989; received for publication 29 November 1989)

†To whom all correspondence should be addressed.
Abstract: The objective of this research is to develop tools for feedback control and sensitivity analysis of systems modelled by nonlinear differential-algebraic equations. Features of this novel approach include: direct use of nonlinear models without linearization, the ability to handle multiple inputs and outputs without pairing, and the ability to handle input and output constraints without complicated anti-reset windup logic. In the model-predictive control framework, an optimal control policy for the nominal plant is determined using a simultaneous optimization and model solution approach. The sensitivity of the optimal solution with respect to the model parameters is also computed. The feedback from the measurements is used to update the important process model parameters and output disturbances. The optimal profile is then recomputed for the updated model. Confidence intervals can also be placed on the optimal control profiles by considering second-order variations in the Lagrangian. This approach will be illustrated with several examples including: batch and continuous chemical reactors, and a batch crystallizer, which demonstrates the method on a nonlinear, distributed parameter system.
1. INTRODUCTION

Many chemical processes are best modelled by nonlinear differential-algebraic equations. These models have the advantage of being accurate over wide ranges of operating conditions but do not generally conform to the needs of conventional process control, except as tools for analysis or starting points for developing linear models. Such linear descriptions
necessarily lose information about the process which they are intended to represent. This can cause serious problems even for continuous operations, and may make control impossible for batch processes which operate over a wide range of conditions. In this paper we present a method that makes use of nonlinear models to solve a general class of control problems. Preliminary results and a discussion of the algorithm are given by Eaton and Rawlings (1988), and Rawlings and Eaton (1988) apply the method to the solution of a linear MIMO control problem with input and output constraints and model uncertainty. Patwardhan et al. (1990) compare the nonlinear model predictive approach to global linearization techniques. The structure of the method is shown in Fig. 1. Each of the blocks is in general assumed to be nonlinear. G is the process to be controlled, C is the controller, E is a state and/or parameter estimator, and G_l represents load dynamics. Φ is a user-specified performance objective, and u, l and y are the plant inputs, loads and measured outputs, respectively. The controller and estimator include a process model and involve the solution of a nonlinear program to determine future control moves or updated model parameters.

Fig. 1. Feedback structure.
Sections 2 and 4 discuss the efficient solution of these optimization problems. The algorithm may be started by estimating a set of model parameters given past input-output data. The controller then determines the sequence of inputs which minimizes the performance index Φ over some future time horizon. A portion of this control profile is implemented, and the previous steps are repeated. In addition, by using an optimization approach for the controller it is possible to obtain parametric sensitivity estimates from the first-order necessary conditions with only a small increase in computational effort. Confidence intervals on the manipulated variables may also be computed from the Hessian of the Lagrangian. This allows one to approximate the relative degree to which the control moves affect the objective function. The development of these sensitivities is given in Section 3.
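A deliberately simplified, runnable sketch of this receding-horizon loop is given below. It uses a one-step horizon, an invented scalar plant/model pair and a simple bias update standing in for the output-disturbance estimator of Section 4.1; none of the numerical values are taken from the paper.

```python
# Minimal runnable sketch of the receding-horizon loop described above,
# using an invented scalar plant and model; the bias update plays the role
# of the output-disturbance estimator of Section 4.1.
import numpy as np
from scipy.optimize import minimize_scalar

def plant(x, u):            # "true" plant (invented for illustration)
    return 0.9 * x + 0.5 * u

def model(x, u):            # mismatched nominal model
    return 0.8 * x + 0.6 * u

x_plant, x_model, d, setpoint = 1.0, 1.0, 0.0, 0.0
for k in range(20):
    # Controller step: choose the input minimizing the predicted tracking
    # error over a one-step horizon, using the current disturbance estimate d.
    obj = lambda u: (model(x_model, u) + d - setpoint) ** 2 + 1e-3 * u ** 2
    u = minimize_scalar(obj, bounds=(-2.0, 2.0), method="bounded").x

    # Implement the move on the plant, update the disturbance estimate from
    # the new measurement (feedback), and shift the horizon.
    y = plant(x_plant, u)
    d = y - model(x_model, u)
    x_plant, x_model = y, model(x_model, u)

print(x_plant, d)
```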
2. THE CONTROL PROBLEM

The problem to be solved by the controller may be stated as:

min_{u(t)}  Φ[u(t), x(t); p],

subject to

dx/dt − f[x(t), u(t); p] = 0,
h[x(t), u(t); p] = 0,
g[x(t), u(t); p] ≥ 0,
x(t₀) = x₀,                    (NLP1)

where x(t) and u(t) are vector-valued state and manipulated variable functions, respectively, and p is a vector of model parameters, which may include disturbances.
Note that the objective function Φ is not required to take any special form, though the particular choice of the objective function will obviously have an effect on the problem's solution and the effort required to obtain it. The right-hand sides of the differential equations are also not required to take any special form and may include integral and/or spatial derivative terms as well as time delays. The solution of a batch crystallization control problem involving both integral and spatial derivative terms is examined in Section 5.3. There are several methods for computing solutions to NLP1. The classic methods, such as control vector iteration and conversion to a two-point boundary value problem, are derived using the necessary conditions of the maximum principle (Ray, 1981; Bryson and Ho, 1975). Another method is to simply use an ODE solver in combination with an optimization algorithm and solve the problem sequentially. Though straightforward, these approaches are generally inefficient because they require that the model equations be solved accurately at each iteration within the optimization, even when the iterates are far from the final optimal solution. In order to avoid solving the model equations to an unnecessary degree of accuracy at each step, the differential equations are approximated by a set of algebraic equations using a weighted residual method (Galerkin's method, orthogonal collocation, etc.). The model equations are then solved simultaneously with the other constraints within an infeasible path optimization, such as successive quadratic programming. Using this approach, the differential equations are treated in the same manner as the other constraints and are not solved accurately until the final iteration of the optimization. For this work, orthogonal collocation on finite elements has been used in connection with successive quadratic programming to solve NLP1. The original problem is restated as:

min_{x,u}  Φ(x, u; p),

subject to

Ax − f(x, u; p) = 0,
h(x, u; p) = 0,
g(x, u; p) ≥ 0,
x(t₀) = x₀,                    (NLP2)
where A is a matrix of collocation weights (Villadsen and Stewart, 1967). Any derivatives with respect to spatial coordinates may be handled in a similar manner, and integral terms may be included efficiently by using appropriate quadrature formulae (Villadsen and Michelsen, 1978; Davis and Rabinowitz, 1984). These issues are discussed in greater detail in Section 5.3. In all of the simulation examples presented subsequently, the successive quadratic programming code NPSOL (Gill et al., 1983) is used to solve the resulting nonlinear program. The response of the plant to the optimal manipulated variable profile is computed using the differential-algebraic equation solver DASSL (Petzold, 1983) and does not involve the collocation approximation that is in the controller. For the other solution techniques, only the system inputs need to be considered as the decision variables in the optimization. With this approach, both u and x are treated as decision variables, though the states may be determined by the model once the inputs are given. The efficiency of this method has been demonstrated by Cuthrell and Biegler (1987) and Renfro et al. (1987). The solution of NLP2 offers several advantages over solving NLP1 by traditional methods. A wide range of constraints may be easily incorporated, the manipulated variable profile may be either continuous or discrete, and the solution of the optimization problem provides useful sensitivity information at little additional cost. State, manipulated variable, and rate-of-change constraints may be incorporated naturally in the problem statement by imposing bounds on the variables (or functions of the variables) at the collocation points. Additional steps must be taken in order to handle the same class of constraints using other methods. To include state constraints, Bryson and Ho (1975) added a penalty term to the objective function. This is undesirable as it may result in a sluggish, suboptimal solution. Sargent and Sullivan (1978) included an additional state variable which handles the constraints without unnecessarily sacrificing performance, but at the expense of evaluating the state constraint functions at a sufficiently large number of points to ensure they are not violated. The functions used to approximate the state and manipulated variables and the finite element grids on which they are defined need not be the same. It has been our experience that they often should not be the same for reasons of efficiency and accuracy. The sensitivity of the solution of NLP2 yields valuable information concerning which parameters and manipulated variables are most important, and the time periods over which they matter most. The sensitivities may be used off-line to diagnose controller difficulties and may indicate the need for a better model of the process, additional or different measurements, or design changes.
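A small self-contained sketch of this simultaneous (infeasible-path) formulation is given below. An implicit-Euler discretization stands in for orthogonal collocation on finite elements, SciPy's SLSQP stands in for NPSOL, and the scalar dynamics, horizon and tuning values are invented for the example.

```python
# Minimal sketch: simultaneous solution of a control NLP in the spirit of NLP2.
# Implicit Euler replaces orthogonal collocation; SLSQP replaces NPSOL.
import numpy as np
from scipy.optimize import minimize

N, dt = 20, 0.05
x0, x_set = 1.0, 0.3

def f(x, u):                      # example dynamics dx/dt = -u * x
    return -u * x

def unpack(z):
    return z[:N], z[N:]           # states x_1..x_N, inputs u_1..u_N

def objective(z):
    x, u = unpack(z)
    return np.sum((x - x_set) ** 2) * dt + 1e-3 * np.sum(u ** 2) * dt

def model_residuals(z):           # discretized model imposed as equalities
    x, u = unpack(z)
    xprev = np.concatenate(([x0], x[:-1]))
    return x - xprev - dt * f(x, u)

z0 = np.concatenate((np.full(N, x0), np.zeros(N)))
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": model_residuals}],
               bounds=[(None, None)] * N + [(0.0, 5.0)] * N)
x_opt, u_opt = unpack(res.x)
```

As in the text, the model equations appear only as constraints, so they are not satisfied exactly until the optimizer converges.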
On-line, the parametric sensitivities may be used to determine which of the parameters are most important and need to be estimated most accurately. The manipulated variable sensitivities will indicate which of the inputs has the greatest effect on the solution of the controller subproblem. This information could be used to solve possible conditioning problems which arise in solving NLP2 when there are too many degrees of freedom in the manipulated variables. These issues are discussed in more detail in Sections 3 and 4.

3. SENSITIVITY ANALYSIS

3.1. Parametric sensitivity

Given the nonlinear program NLP2, it is desired to find the sensitivity of the optimal solution with respect to changes in parameter values p. These sensitivities will prove useful in determining which of the model parameters have the greatest effect on the solution of the dynamic optimization and must be known most accurately. Sensitivity results for this class of problems may be found in Fiacco (1983) and are summarized below. At the optimum, the following necessary conditions hold, with all decision variables represented by x:

∇_x L(x, p) = 0,
h_i(x, p) = 0,   i = 1, ..., n,

where L is the Lagrangian function

L(x, w, p) = Φ(x, p) − Σ_{i=1}^{n} w_i h_i(x, p).

By differentiating the necessary conditions and noting that at the optimum the constraints must remain satisfied, the sensitivity equations may be written as:
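For the Lagrangian just defined, differentiating the two necessary conditions with respect to p along the optimal solution gives the standard pair of linear equations summarized by Fiacco (1983):

∇_xx L (∂x*/∂p) − ∇_x hᵀ (∂w*/∂p) + ∇_xp L = 0,
∇_x h (∂x*/∂p) + ∇_p h = 0.

Solving this linear system gives ∂x*/∂p and ∂w*/∂p, the parametric sensitivities referred to below.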
The solution of these equations at the optimum yields the sensitivity of the solution of the approximate nonlinear program to changes in parameter values.

3.2. Sensitivity of the optimal solution to independent variables

It is desired to find the sensitivity of the objective function to changes in the independent variables while requiring that the constraints remain satisfied. Physically this corresponds to changing one of the independent variables, solving the model equations (the constraints) and observing the change in the objective function. What follows is a derivation of the second-order constrained optimality conditions with an extension applicable to cases in which the decision variables may be partitioned into independent and dependent sets. Let δu, δx, δh and δΦ be variations in the independent and dependent variables and the constraint and objective functions, respectively. In solving control problems, the independent variables are the manipulated variables, and the dependent variables are the states. The result given below requires that the number of dependent variables be the same as the number of active constraints. Because of this, one must choose additional manipulated variables to consider as dependent for cases involving auxiliary constraints (for example, problems which have specified values at a fixed final time, or problems with active state constraints). Given the following problem, in which all active constraints are included in the vector h:

min Φ(x, u),
s.t. h(x, u) = 0,

the Lagrangian is defined as

L(x, u) = Φ(x, u) − Σ_{i=1}^{n} w_i h_i(x, u)

and the first-order necessary conditions for an optimum are

∇L(x, u) = 0.

Let (x*, u*) satisfy these conditions. Then a Taylor expansion about this point gives:

δL = [∇_x L | ∇_u L] [δx; δu] + ½ [δxᵀ | δuᵀ] [∇_xx L  ∇_xu L; ∇_ux L  ∇_uu L] [δx; δu] + ...

Substitution of the first-order necessary conditions gives:

δL(x*, u*) = ½ [δxᵀ | δuᵀ] [∇_xx L  ∇_xu L; ∇_ux L  ∇_uu L] [δx; δu].                    (1)

Expanding the constraint functions in a Taylor series gives:

δh(x*, u*) = [∇_x h | ∇_u h] [δx; δu] + ...                    (2)

For any point (x* + δx, u* + δu) to remain feasible, δh must be zero. Ignoring higher-order terms, equation (2) may be solved to find the required change in the dependent variables with respect to a given change in the independent variables. Substitution of this result into equation (1) and rearrangement yields:

δΦ(x*, u*) ≈ ½ δuᵀ M δu,                    (3)

with

M = ∇_uu L − ∇_u hᵀ ∇_x h⁻ᵀ ∇_xu L − ∇_ux L ∇_x h⁻¹ ∇_u h + ∇_u hᵀ ∇_x h⁻ᵀ ∇_xx L ∇_x h⁻¹ ∇_u h.

Used directly, this result provides an estimate of the change in the objective function for a given deviation from the optimal path. If one assumes independent variations in the elements of u, equation (3) may be solved to determine the magnitude of the change in any one manipulated variable which will give rise to a specified change in the objective function.
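As a small numerical illustration of equation (3) (with arbitrary placeholder matrices, not values from the examples of Section 5), M and the input deviation producing a specified objective change can be evaluated directly from the Lagrangian derivatives:

```python
# Sketch: evaluate M from equation (3) and the input deviation giving a
# specified objective change. All matrices are arbitrary placeholders.
import numpy as np

Lxx = np.array([[2.0, 0.1], [0.1, 1.5]])   # second derivatives of L w.r.t. x
Lxu = np.array([[0.3], [0.2]])             # mixed x-u derivatives of L
Luu = np.array([[1.0]])                    # second derivatives of L w.r.t. u
hx  = np.array([[1.0, 0.5], [0.0, 1.0]])   # constraint Jacobian w.r.t. x (square)
hu  = np.array([[0.4], [0.1]])             # constraint Jacobian w.r.t. u

hx_inv = np.linalg.inv(hx)
Lux = Lxu.T
M = (Luu
     - hu.T @ hx_inv.T @ Lxu
     - Lux @ hx_inv @ hu
     + hu.T @ hx_inv.T @ Lxx @ hx_inv @ hu)

# Scalar-input deviation producing a 1% change in an assumed objective value.
phi_star = 10.0                            # assumed reference value, not from the paper
delta_phi = 0.01 * abs(phi_star)
delta_u = np.sqrt(2.0 * delta_phi / M[0, 0])
print(M, delta_u)
```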
4. THE ESTIMATION PROBLEM

4.1. Output disturbance estimation

A convenient and useful means of incorporating feedback into the algorithm is to estimate a disturbance in each output based on its measurement. This is the approach of internal model control (Garcia and Morari, 1982) and dynamic matrix control (Cutler and Ramaker, 1979; Prett and Gillette, 1979). In the usual implementation one computes a disturbance, d_i(t_k), at the kth sampling time in the ith output:

d_i(t_k) = y_{m,i}(t_k) − ŷ_i(t_k),

where y_{m,i}(t_k) is the measured value and ŷ_i(t_k) is the model prediction based on measurements up to t_{k−1}. The updated disturbance estimates are sent to the controller, which then uses them to update the model predictions. Normally one sets all future disturbance estimates to the current value:

ŷ_i(t) = y_i(t) + d_i(t_k),   t ≥ t_k,

in which ŷ_i(t) is the model prediction to be used in the evaluation of the controller objective function and y_i is the uncorrected model solution. One can obviously apply a more sophisticated scheme for predicting future disturbances and reducing the estimator's sensitivity to measurement noise if these issues are important. For linear systems, the internal model approach has the attractive feature that, given open-loop stability for the plant and the steady-state controller gain equal to the inverse of the steady-state gain of the model, it provides zero offset in the face of nonzero steady-state disturbances and plant-model mismatch. Similar conditions exist for nonlinear systems (Economou, 1986; Economou et al., 1986), though analysis for the algorithm presented here is complicated by the fact that the controller involves the iterative solution of a nonlinear program.

4.2. Model parameter estimation

For processes whose parameters vary with time or are not well known, it may be necessary to estimate them on-line. There are many possible methods available for solving nonlinear parameter estimation problems. It is not our intent to discuss the relative merits of different solution techniques [Bard (1974) and Bates and Watts (1988) provide comprehensive reviews]. Instead, we will concentrate on one method which may be used to solve the following problem efficiently:

min_p  Φ[x(t), p; u(t)],

subject to

dx/dt − f[x(t), p; u(t)] = 0,
h[x(t), p; u(t)] = 0,
g[x(t), p; u(t)] ≥ 0,
x(t₀) = x₀.
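A least-squares instance of this estimation problem, combined with the residual disturbance update of Section 4.1, might be organized as in the following sketch. The straight-line "model", the synthetic data and the sequential least-squares solver are all placeholders for the collocation-discretized process model, real measurements and the simultaneous approach used in the paper.

```python
# Sketch: least-squares parameter estimation over past data, with the
# remaining prediction error treated as an output disturbance (Section 4.1).
import numpy as np
from scipy.optimize import least_squares

def predict_outputs(p, t):
    return p[0] + p[1] * t                 # placeholder model prediction

t_meas = np.linspace(0.0, 1.0, 11)
y_meas = 0.3 + 1.7 * t_meas + 0.2 * t_meas**2   # "plant" data with structural mismatch

def residuals(p):
    return predict_outputs(p, t_meas) - y_meas

p_hat = least_squares(residuals, x0=[0.0, 1.0]).x   # min sum of squared errors

# Whatever error the fitted parameters cannot explain is treated as an output
# disturbance d(t_k), held constant over the future prediction horizon.
d = y_meas[-1] - predict_outputs(p_hat, t_meas[-1])
print(p_hat, d)
```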
This nonlinear program is similar to the one solved in the controller step. The process parameters are now the degrees of freedom rather than the future manipulated variables. As before, the objective function Φ is not required to take any special form, though a likely choice is the sum of the squares of the errors between the measurements and the model predictions. Any of the techniques used to solve the controller subproblem may also be used to solve this subproblem. For this work, we have chosen to use the same methods of approximation as developed above for the control calculations. As before, this has the advantage of solving the optimization efficiently while taking advantage of the accuracy provided by the nonlinear model. If the structure of the model is incorrect, then after the parameters are optimized, there will still be a difference between the model predictions and the output measurements. This remaining error can be used to estimate a disturbance in each output. In other words, the parameter estimator removes as much of the error as it can, and the remaining signal is treated as a disturbance in the standard internal model approach. This combines both output disturbance estimation and parameter estimation in the feedback path.

5. CHEMICAL PROCESS CONTROL EXAMPLES
The following three examples are provided to illustrate several features of the approach. The first example uses a continuous chemical reactor to demonstrate how the approach may be used to solve setpoint tracking problems. The second example shows the computation of the optimal temperature profile for a batch reactor. Parametric and independent variable sensitivities are also computed. The final example demonstrates the ability of the method to solve difficult optimal control problems in which the model is a set of coupled nonlinear integrodifferential equations. The optimal cooling curve for a batch crystallization process is computed and results in a significant improvement over isothermal operation.
Fig. 2. Isothermal CSTR.

5.1. Continuous reactor

Consider the second-order chemical reaction:

A + A → B   (rate constant k),

which takes place in the isothermal CSTR shown in Fig. 2. It is desired to regulate the concentration of A in the reactor by manipulating the inlet flow rate. In order to accomplish this, the performance functional:

Φ = ∫_{t₀}^{t₀+t_h} [C_{A,set} − C_A(t)]² dt,                    (4)

is minimized subject to the process model:

V dC_A/dt = F (C_{A,in} − C_A) − k V C_A².

The following algorithm is applied to solve the continuous reactor problem. First, the performance functional is minimized over a future time horizon t_h (assumed to be at least one sampling time) and the optimal inputs are used to drive the actual plant. At the end of a sampling time, measurements are taken, and output disturbances and (optionally) model parameters are estimated. Finally, the time horizon is redefined and the previous steps are repeated. The reactor volume, inlet concentration and reaction rate constant are assumed to be constant with the values shown in Table 1. Figure 4 shows the optimal unconstrained inputs for a setpoint change from C_A = 0.5 to C_A = 0.55, and Fig. 3 shows the system response for a sampling time of 0.2 and a prediction horizon of 1. To account for the plant-model mismatch and drive the plant to setpoint, output disturbances are estimated using the method described in Section 4.1. Note that for the simulations shown, the plant parameters have not been estimated from the measurements. By applying the model parameter estimation technique described in Section 4.2 one could expect better performance, since the actual source of the error is a difference in model and plant parameter values, not an output disturbance.

Although the relatively large input moves and the corresponding oscillations in the concentration of A are optimal for equation (4), they may not be entirely desirable. In order to smooth the response, one could add a constraint specifying the maximum change in the input in one time step or a bound on the maximum flow rate. This is shown in Figs 5 and 6, in which the flow rate was required to remain below 1.8 l h⁻¹.
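A minimal sequential (simulate-then-optimize) sketch of this setpoint problem is shown below. It uses the model-column values of Table 1 and piecewise-constant flow moves over a 1 h horizon, with general-purpose SciPy routines rather than the collocation/NPSOL formulation of Section 2; the flow bound of 1.8 l h⁻¹ corresponds to the constrained case of Figs 5 and 6.

```python
# Sketch: finite-horizon tracking for the isothermal CSTR,
# V dCA/dt = F (CA_in - CA) - k V CA^2, with piecewise-constant F.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

V, CA_in, kV = 10.0, 3.0, 10.0          # model values from Table 1
CA_set, dt, n_moves = 0.55, 0.2, 5      # sampling time 0.2 h, horizon 1 h

def tracking_cost(CA0, F_moves):
    CA, cost = CA0, 0.0
    for F in F_moves:
        rhs = lambda t, c: (F * (CA_in - c) - kV * c**2) / V
        sol = solve_ivp(rhs, (0.0, dt), [CA], dense_output=True)
        ts = np.linspace(0.0, dt, 20)
        cost += np.trapz((CA_set - sol.sol(ts)[0]) ** 2, ts)
        CA = sol.y[0, -1]
    return cost

res = minimize(lambda F: tracking_cost(0.5, F), x0=np.ones(n_moves),
               bounds=[(0.0, 1.8)] * n_moves, method="L-BFGS-B")
print(res.x)   # flow-rate moves, constrained below 1.8 l/h
```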
Fig. 3. Optimal output for the CSTR of Section 5.1.
Fig. 4. Optimal control profile for the CSTR of Section 5.1.

Table 1. Model parameters for the continuous reactor example

Parameter                  Model    Plant
V (l)                      10       10.5
C_{A,in} (mol l⁻¹)         3        3.5
kV (l² mol⁻¹ h⁻¹)          10       15.8

Fig. 5. Optimal output for the CSTR of Section 5.1.
Fig. 6. Optimal control profile for the CSTR of Section 5.1.
Fig. 8. Output profiles for the batch reactor of Section 5.2.

5.2. Batch reactor

In this example, from Ray (1981), the reaction:

A → B (k₁),   A → C (k₂),
is carried out in the batch reactor shown in Fig. 7. It is desired to maximize the concentration of B at the end of a given reaction time. The following dimensionless model is used to describe the dynamics of the reactant and product concentrations:

dx₁/dt = −(u + a u^p) x₁,   x₁(0) = 1,
dx₂/dt = u x₁,   x₂(0) = 0,

where x₁ = C_A/C_{A0}, x₂ = C_B/C_{A0} and

u = k₁ = k₁₀ e^{−E₁/RT}.

The model parameters p and a are defined as follows:

p = E₂/E₁, the ratio of activation energies,
a = k₂₀/k₁₀, the ratio of preexponential factors.

For the example shown below, p = 2, a = 0.5 and u is required to be between 0 and 5. The objective is simply to maximize the yield of B at the final time:

max Φ = x₂(t_f).

Figure 9 shows the optimal temperature profile for the given 1 h reaction time. This result is obtained by solving for the optimal inputs over the entire reaction time. To include feedback for batch problems of this type, the algorithm given in Section 5.1 for continuous problems may be applied, provided the method used to update the time horizon is modified to account for the fixed final time. Note that the upper bound on the input is active at the final time step. This is easily handled using piecewise-constant input moves and does not require special treatment to ensure that interpolated values also remain within the bound (as is needed if the input moves are polynomials of degree greater than one). The sensitivity of the objective function to changes in the manipulated variable profile is shown in Fig. 10. The center line is the optimal profile, and the upper and lower lines indicate the input which, if used for any one of the steps, would result in [to the second-order approximation given by equation (3)] a 1% change in the objective function. If any two steps are at the upper or lower lines (positive or negative deviations or both), the objective function will change by approx. 2%, and so on. This result demonstrates several important features of this problem. First, the objective function is relatively insensitive to errors in the input. Assuming that the model is perfect (see below), one would have to miss the optimal profile by a significant amount for a relatively long period of time to see a reduction in the yield of even 5%. Even though the magnitude of the change is small, it is more important to stay near the optimal profile at the beginning of the run. In fact, over the last several time steps, it is virtually impossible to affect the value of the objective function without violating input constraints. The sensitivities give quantitative support to the physically intuitive strategy of maintaining low temperatures near the beginning to favor selectivity of product B and high temperatures near the final time to achieve complete conversion of A.
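A corresponding sketch for this batch problem, again sequential and with SciPy standing in for the collocation/NPSOL formulation, optimizes ten piecewise-constant values of u on [0, 5]; the number of moves and the optimizer are arbitrary choices for illustration.

```python
# Sketch: piecewise-constant u(t) in [0, 5] maximizing x2(tf) for
# dx1/dt = -(u + a u**p) x1, dx2/dt = u x1, with p = 2, a = 0.5.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

a_par, p_par, tf, n_moves = 0.5, 2.0, 1.0, 10

def yield_of_B(u_moves):
    x = np.array([1.0, 0.0])
    dt = tf / n_moves
    for u in u_moves:
        rhs = lambda t, x: [-(u + a_par * u**p_par) * x[0], u * x[0]]
        sol = solve_ivp(rhs, (0.0, dt), x)
        x = sol.y[:, -1]
    return x[1]

res = minimize(lambda u: -yield_of_B(u), x0=np.full(n_moves, 1.0),
               bounds=[(0.0, 5.0)] * n_moves, method="L-BFGS-B")
print(res.x, -res.fun)   # input profile and final yield of B
```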
Fig. 7. Batch reactor.
Fig. 9. Optimal control profile for the batch reactor of Section 5.2.
Fig. 10. Objective function sensitivity.
Fig. 11. State variable sensitivity.
Fig. 12. Manipulated variable sensitivity.

Notice that although raising the temperature late in the batch is optimal, the sensitivity is so small that the large final temperature values are irrelevant. One could modify the objective function to optimize selectivity, achieve a much better conditioned control problem, and lose almost no performance. The sensitivities of the optimal state profiles to the model parameters are shown in Fig. 11. Notice that for positive changes in either parameter, less B will be produced, though the amount of A which remains at the final time will not be greatly affected. This is due to the fact that the reaction is being carried out to near completion and the production of B will be traded for C. The magnitudes of the manipulated variable sensitivities are given in Fig. 12. This figure indicates that the optimal manipulated variable profile is much more sensitive to changes in the model parameters near the final time, although it is zero for the final time step due to the presence of a constraint. It is interesting to note that during the time when parameter values are most important, the manipulated variable profile is least important, and that the converse is also true (see Fig. 10). This indicates that (for this problem) if one begins with reasonable estimates for the plant parameters and implements the optimal profile for approximately the first three-fourths of the reaction time, it makes almost no difference what happens during the remaining time, as the effect of errors in the parameters late in the reaction will go essentially unnoticed due to the insensitivity of the objective function to the manipulated variable.

5.3. Nonlinear distributed parameter systems

In order to use the method outlined above to handle process models which take the form of partial differential equations or integro-differential equations, one introduces another level of approximation. We have chosen orthogonal collocation and Gaussian quadrature to rewrite these models in the form of ordinary differential equations, although other weighted residual and numerical integration methods could be used. The resulting set of differential equations is then solved within the optimization as before. If the original model is in the form of a partial differential equation, the algebraic constraints solved in the optimization will represent the solution of the model at all collocation points in the spatial dimensions and time. As an example of a nonlinear distributed parameter system, consider a batch crystallization carried out in the crystallizer shown in Fig. 13. Under the assumption that the crystallizer is well mixed, the crystallizer temperature is directly related to the flow rate of a cooling medium [shown here as u(t)] and may be considered as the manipulated variable. Applying the population balance approach to model the crystal size distribution (CSD), the following dynamic model is obtained:

∂f/∂t + G ∂f/∂r = 0,

d(εc)/dt = −3 k_v ρ G ∫₀^∞ f(r, t) r² dr,

0 = ε − 1 + k_v ∫₀^∞ f(r, t) r³ dr.

Fig. 13. Batch crystallizer.
In which f(r, t) dr is the concentration of crystals of size r to r + dr at time t. The variable c is the solute concentration, ε is the continuous phase volume fraction and G is the crystal growth rate. It is convenient to split the CSD into seed and nucleated subpopulations:

f = f_s + f_n.

The following initial and boundary conditions are given:

c(t = 0) = c₀,
f_s(r, t = 0) = f_{s0}(r),
f_n(r, t = 0) = 0,
f_n(r = 0, t) = B/G,

in which B is the crystal nucleation rate. Power-law constitutive relations for the nucleation and growth rates are assumed:

G = k_g Δc^g e^{−E_g/RT},
B = k_b Δc^b e^{−E_b/RT},
Δc = c − c_sat,

where Δc is the supersaturation driving force for nucleation and growth. Note that the model is nonlinear in temperature due to the activation energy terms and the nonlinear dependence of c_sat on T. Witkowski and Rawlings (1987) discuss additional modelling and stability issues for this system. The following objective function is used:

Φ = [∫₀^∞ f_n(r, t_f) r³ dr] / [∫₀^∞ f_s(r, t_f) r³ dr].

This represents a desire to maximize the final mass of seed crystals while keeping the mass of nucleated crystals small. A constraint which specifies a minimum acceptable yield must be added in order for nucleation and growth to occur at all. We have chosen the specification:

c(t_f) ≤ c_f,

with c_f = 0.51. Other parameter values used in the simulations are: b = 2, E_g = 2200, E_b = 0, k_g = 0.00935, k_b = 21,400. The saturation concentration as a function of temperature was computed as follows:

c_sat = −0.05 T̄² + 0.5 T̄ + 0.14,   where T̄ = (T − 273)/(333 − 273).
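The integral terms above can be handled with standard quadrature. The sketch below only illustrates the moment integrals on a finite size interval with an arbitrary placeholder CSD and an assumed maximum size; it is not the full collocation solution of the population balance.

```python
# Sketch: Gauss-Legendre quadrature for the moment integrals appearing in the
# crystallizer model and objective, over a finite size interval [0, rmax].
import numpy as np

rmax, n_quad = 1000.0, 20                       # size range in microns (assumed)
nodes, weights = np.polynomial.legendre.leggauss(n_quad)
r = 0.5 * rmax * (nodes + 1.0)                  # map [-1, 1] -> [0, rmax]
w = 0.5 * rmax * weights

def moment(f_vals, order):
    # approximates the integral of f(r) r**order dr with the rule above
    return np.sum(w * f_vals * r**order)

# placeholder CSD: a Gaussian peak standing in for the seed distribution
f_seed = np.exp(-0.5 * ((r - 350.0) / 30.0) ** 2)

second_moment = moment(f_seed, 2)   # enters the solute balance d(εc)/dt
third_moment = moment(f_seed, 3)    # enters ε and the objective function
print(second_moment, third_moment)
```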
Fig. 14. Case 1. Crystal size distribution due to constant temperature profile.
Fig. 15. Case 1. Concentration profile due to constant temperature profile.
In order to minimize the objective function and satisfy the additional constraint, the crystallizer temperature is chosen as a manipulated variable. This allows one to vary Δc, the driving force for nucleation and growth. Figure 15 shows the concentration profile in the reactor as a function of time for an isothermal run. As shown in Fig. 14, the large driving force leads to a high nucleation rate in the beginning of the
run, and these crystals grow to sizes which make up a significant portion of the final mass of crystals. The optimal concentration profile, which holds the nucleation rate at small values until near the final time, is shown in Fig. 17. This agrees qualitatively with previous theoretical and experimental results which indicate that a constant nucleation rate leads to increased growth of seed crystals while keeping nucleation at a minimum (Jones, 1974). Figure 16 shows the CSD for the optimal temperature profile as a function of time. For both cases, note that the seed crystals (whose size distribution is represented by the peak centered at 350 microns at t = 0) grow to approximately the same size at the final time. The CSD obtained for the optimal temperature profile is clearly superior to the isothermal case and represents a reduction in the objective function by a factor of 10, demonstrating the potential for product improvement with advanced control.
Fig. 16. Case 2. Optimal crystal size distribution.
Fig. 17. Case 2. Optimal concentration profile.

6. CONCLUSIONS

An efficient method for solving nonlinear optimal control problems has been exploited to provide a method of feedforward and feedback control of nonlinear multivariable chemical processes with constraints. Sensitivity analysis of the optimal solution provides a useful tool for evaluation of process performance. Although computationally intensive compared to simple fixed-parameter feedback laws, it seems entirely feasible that this method can be implemented in real time on nontrivial chemical processes using workstation-level computer hardware. Because of the complexity of the dynamics of nonlinear systems, and the complexity of the control law (iterative numerical solution of an optimization problem subject to constraints), developing the theoretical underpinning of this method poses a very difficult mathematical challenge. At this juncture, it seems prudent to demonstrate the industrial usefulness of the approach vis-à-vis other control methodologies before deciding on the future course of research in this area.
NOMENCLATURE

A = Matrix of first-derivative collocation weights
b = Nucleation order
B = Crystal birth (nucleation) rate
c = Concentration
C = Controller
E_b = Activation energy for crystal nucleation
E_g = Activation energy for crystal growth
E = Estimator
f = Crystal number density
F = Flow rate
g = Growth order
G = Crystal growth rate
g = Inequality constraints
G = Plant
h = Equality constraints
k_b = Crystal nucleation rate constant
k_g = Crystal growth rate constant
k_v = Volume constant
L = Lagrangian function
l = Load inputs
p = Ratio of activation energies
p = Model parameters
r = Characteristic length of crystals
R = Gas constant
t = Time
T = Temperature
u = Manipulated variables
V = Reactor volume
w = Lagrange multipliers
x = Model states
y = Plant outputs

Greek letters
a = Ratio of preexponential factors
ε = Void fraction
Φ = Controller or estimator objective function
ρ = Density

Superscripts
* = Optimum
^ = Model prediction
T = Transpose

Subscripts
0 = Initial time
f = Final time
i = Index of output
k = Index of sampling time
n = Nucleated crystals
s = Seed crystals

Acknowledgements: The authors would like to thank Shell Oil Company, Eastman Kodak, and 3M for support of this research.

REFERENCES

Bard Y., Nonlinear Parameter Estimation. Academic Press, New York (1974).
Bates D. M. and D. G. Watts, Nonlinear Regression Analysis and its Applications. Wiley, New York (1988).
Bryson A. E. and Y. Ho, Applied Optimal Control. Hemisphere, New York (1975).
Cuthrell J. E. and L. T. Biegler, On the optimization of differential-algebraic systems. AIChE Jl 33, 1257-1270 (1987).
Cutler C. R. and B. L. Ramaker, Dynamic matrix control - a computer control algorithm. AIChE National Meeting, Houston, Texas (1979).
Davis P. J. and P. Rabinowitz, Methods of Numerical Integration, 2nd Edn. Academic Press, Orlando, Florida (1984).
Eaton J. W. and J. B. Rawlings, Feedback control of chemical processes using on-line optimization techniques. AIChE National Meeting, Washington, D.C. (1988).
Economou C. G., An Operator Theory Approach to Nonlinear Controller Design. Ph.D. Thesis, California Institute of Technology (1986).
Economou C. G., M. Morari and B. Palsson, Internal model control. 5. Extension to nonlinear systems. Ind. Engng Chem. Process Des. Dev. 25, 403-411 (1986).
Fiacco A. V., Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. Academic Press, New York (1983).
Garcia C. E. and M. Morari, Internal model control. 1. A unifying review and some new results. Ind. Engng Chem. Process Des. Dev. 21, 308-323 (1982).
Gill P. E., W. Murray, M. A. Saunders and M. H. Wright, User's Guide for SOL/NPSOL: A Fortran Package for Nonlinear Programming. Technical Report SOL 83-12, Systems Optimization Laboratory, Department of Operations Research, Stanford University (1983).
Jones A. G., Optimal operation of a batch crystallizer. Chem. Engng Sci. 29, 1075-1087 (1974).
Patwardhan A. A., J. B. Rawlings and T. F. Edgar, Nonlinear model predictive control. Chem. Engng Commun. In press (1990).
Petzold L. R., A description of DASSL: a differential/algebraic system solver. Scientific Computing (Stepleman R. S., Ed.), pp. 65-68. North-Holland, Amsterdam (1983).
Prett D. M. and R. D. Gillette, Optimization and constrained multivariable control of a catalytic cracking unit. AIChE National Meeting, Houston, Texas (1979).
Rawlings J. B. and J. W. Eaton, Optimal control and model identification applied to the Shell standard control problem. Presented at the Shell Process Control Workshop II (1988).
Ray W. H., Advanced Process Control. McGraw-Hill, New York (1981).
Renfro J. G., A. M. Morshedi and O. A. Asbjørnsen, Simultaneous optimization and solution of systems
described by differential/algebraic equations. Comput. chem. Engng 11, 503-517 (1987).
Sargent R. W. H. and G. R. Sullivan, The development of an efficient optimal control package. Lecture Notes in Control and Information Sciences 7, pp. 158-168. 8th IFIP Conf. Optimization Techniques. Springer-Verlag, New York (1978).
Villadsen J. and M. L. Michelsen, Solution of Differential Equation Models by Polynomial Approximation. Prentice-Hall, Englewood Cliffs, New Jersey (1978).
Villadsen J. V. and W. E. Stewart, Solution of boundary-value problems by orthogonal collocation. Chem. Engng Sci. 22, 1483-1501 (1967).
Witkowski W. R. and J. B. Rawlings, Modelling and control of crystallizers. Proc. 1987 Am. Control Conf., Minneapolis, Minnesota, pp. 1400-1405 (1987).