Computers and Chemical Engineering 58 (2013) 334–343
Optimizing process economics online using model predictive control

Rishi Amrit^a, James B. Rawlings^a, Lorenz T. Biegler^b

^a Department of Chemical and Biological Engineering, University of Wisconsin–Madison, 1415 Engineering Drive, Madison, WI 53706, USA ([email protected]; [email protected])
^b Department of Chemical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA ([email protected])
Article history: Received 1 March 2013; received in revised form 11 June 2013; accepted 30 July 2013.

Keywords: Nonlinear control; Economic MPC; Direct methods; Collocation; Automatic differentiation
Abstract

Optimizing process economics in model predictive control traditionally has been done using a two-step approach in which the economic objectives are first converted to steady-state operating points, and the dynamic regulation is then designed to track these setpoints. Recent research has shown that process economics can be optimized directly in the dynamic control problem, which can take advantage of potentially higher-profit transients to give superior economic performance. In practice, however, the solution of such nonlinear MPC dynamic control problems can be challenging due to the nonlinearity of the model and/or nonconvexity of the economic cost function. In this work we propose the use of direct methods to formulate the nonlinear control problem as a large-scale NLP, which is then solved using an interior point nonlinear solver in conjunction with automatic differentiation. Two case studies demonstrate the computational performance of this approach along with the economic performance of the economic MPC formulation.
1. Introduction

Devising control methods to directly optimize higher-level objectives such as process economics has received increasing attention recently. Model predictive control (MPC) is the most widely used advanced control technique because of its industrial acceptance and demonstrated capability to handle nonlinear and constrained multivariable systems (Rawlings & Amrit, 2009). Extending MPC to optimize process economics is therefore an attractive topic of research. Engell (2007) provides an overview of existing hierarchical approaches and discusses their disadvantages. The idea of using economic objectives in the dynamic regulation problem has been proposed in many earlier works (Renfro & Lu, 2009; Young, Bartusiak, & Fontaine, 2001). Recent research (Amrit, Rawlings, & Angeli, 2011; Angeli, Amrit, & Rawlings, 2012; Diehl, Amrit, & Rawlings, 2011; Rawlings, Bonné, Jørgensen, Venkat, & Jørgensen, 2008) has extended the MPC stability theory to handle economic objectives by proposing new stability analysis tools and methods for adjusting the chosen economic stage cost to ensure asymptotic stability of the best steady state for nonlinear systems. The necessary conditions for closed-loop stability have been derived, identifying the class of nonlinear systems for which
closed-loop stability is expected. It has been shown that, unlike in the standard setpoint tracking problems, optimal nonlinear economic control may not lead to closed-loop stability, and unstable operation, such as periodic operation, might outperform the best steady-state solution on a time average. Angeli, Amrit, and Rawlings (2009) and Huang, Harinath, and Biegler (2011) propose economic cyclic operation schemes for such cases. Unlike the standard tracking regulation, the economic objective is not necessarily convex, and dealing with nonlinear systems can lead to dynamic optimization problems that are hard to solve numerically. Numerous methods have been proposed to solve dynamic optimization problems (see Binder et al. (2001) for a review of these methods). In this study, we use the direct approach, converting the dynamic regulation problem into a standard NLP that can be solved using a nonlinear solver.

Section 2 introduces the MPC dynamic regulation problem. Section 3 briefly reviews the direct methods for dynamic optimization and introduces the simultaneous dynamic optimization approach. In Section 4, an NLP formulation for the MPC regulation problem is derived. In Section 5 we present two case studies to show the benefit of optimizing process economics as compared to the standard tracking regulation.
2. Model predictive control

We consider the following nonlinear DAE system

dx_d/dt = f(x(t), u(t))
0 = g(x(t), u(t))

with state x = (x_d, x_a), where x_d ∈ X_d ⊂ R^d and x_a ∈ X_a ⊂ R^a, control u ∈ U ⊂ R^m, and state transition maps f : X × U → R^d and g : X × U → R^a, where d and a are the numbers of differential and algebraic variables respectively, with d + a = n. Without loss of generality, we assume that the DAE system is index 1. The measurements are related to the state and input variables by the mapping h : X × U → R^p:

y(t) = h(x(t), u(t))

For this study we assume that sufficient accuracy is provided by a discrete time domain, which is partitioned into discrete time elements of size equal to the sample time. If needed, this assumption can easily be relaxed to consider a finer finite element mesh. The continuous time problem is then parametrized in terms of the values of the state, input and measurement variables at the boundaries of these finite elements. We discuss the finite horizon problem with a fixed control and prediction horizon N, which is also the number of finite elements. As mentioned previously, the constraints are imposed directly at the boundary points. The system is subject to the mixed constraint

(x(k), u(k), y(k)) ∈ Z,   k ∈ I_{≥0}
for some compact set Z ⊆ X × U × Y. The performance measure is denoted by ℓ : Y × U → R. Consider the following steady-state problem:

min_{(y,u,x) ∈ Z} ℓ(y, u)   subject to   f(x, u) = 0,   0 = g(x, u),   y = h(x, u)
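For illustration, this steady-state economic optimization can be posed directly in an off-the-shelf NLP solver. The sketch below uses SciPy's SLSQP method and assumes user-supplied callables f, g, h and stage cost l with the signatures introduced above; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def best_steady_state(l, f, g, h, nx, w0, bounds=None):
    """Minimize l(h(x,u), u) over w = [x; u] subject to f(x,u) = 0 and g(x,u) = 0.
    nx is the number of states; w0 is an initial guess for the stacked vector."""
    split = lambda w: (w[:nx], w[nx:])
    cost = lambda w: l(h(*split(w)), split(w)[1])
    residual = lambda w: np.concatenate([np.atleast_1d(f(*split(w))),
                                         np.atleast_1d(g(*split(w)))])
    sol = minimize(cost, w0, method="SLSQP", bounds=bounds,
                   constraints=[{"type": "eq", "fun": residual}])
    x_s, u_s = split(sol.x)
    return x_s, u_s, sol
```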
The solution of the steady-state problem is denoted by (x_s, y_s, u_s), and is assumed to be unique. The dynamic MPC regulation problem is set up as follows:

min_u  Σ_{k=0}^{N−1} ℓ(y(k), u(k)) + V_f(y(N))      (1a)

subject to

dx/dt = f(x(t), u(t))      (1b)
0 = g(x(t), u(t))      (1c)
y(t) = h(x(t), u(t))      (1d)
c_l ≤ C(y(t), u(t)) ≤ c_u      (1e)
where the control sequence is denoted as u := {u(0), u(1), . . ., u(N − 1)}. The nonlinear system is stabilized using MPC by adding either a terminal equality constraint (Angeli et al., 2012), x(N) = x_s, and setting V_f(x) = 0, or a terminal inequality constraint of the form x(N) ∈ X_f for some appropriately chosen compact terminal region X_f and a corresponding terminal penalty V_f(·) (Amrit et al., 2011). The terminal constraint, along with the other constraints, including bounds on the state, input and measurement variables, is generalized into the nonlinear constraint (1e). In the standard tracking regulation, the performance measure is chosen as the distance from the optimal steady state (Rawlings & Mayne, 2009); hence ℓ_track(y, u) = (1/2)(|y − y_s|²_Q + |u − u_s|²_R), where Q and R are tuning parameters that govern the speed of convergence. For economics-optimizing regulation, the stage cost ℓ(·) is chosen as the controller performance measure, such as the operating cost or the negative of the profit. Note that in economic MPC ℓ(·) is not necessarily positive definite nor convex with respect to the optimal steady state, as it is in the standard MPC tracking problem. This makes it hard to guarantee global optimality of the solution. Amrit (2011) shows that optimality is not required to ensure closed-loop stability of
the feedback controlled system. This also allows earlier termination of optimization iterations, if needed to speed up calculation time for large scale problems.
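As a concrete, hypothetical illustration of the two stage-cost choices just discussed, the following Python sketch contrasts a quadratic tracking cost with a generic economic cost and assembles the finite horizon objective (1a). The economic cost shown is a placeholder, not one of the case-study costs used later in the paper.

```python
import numpy as np

def stage_cost_tracking(y, u, y_s, u_s, Q, R):
    """l_track(y, u) = 1/2 (|y - y_s|_Q^2 + |u - u_s|_R^2)."""
    dy, du = y - y_s, u - u_s
    return 0.5 * (dy @ Q @ dy + du @ R @ du)

def stage_cost_economic(y, u, utility_price, product_price):
    """Placeholder economic stage cost (utility cost minus product revenue);
    unlike the tracking cost, it need not be positive definite or convex
    about the optimal steady state."""
    return utility_price @ u - product_price @ y

def horizon_objective(Y, U, stage_cost, terminal_penalty=lambda yN: 0.0):
    """Objective (1a): sum_{k=0}^{N-1} l(y(k), u(k)) + V_f(y(N)).
    Y has N+1 entries (y(0), ..., y(N)); U has N entries."""
    N = len(U)
    return sum(stage_cost(Y[k], U[k]) for k in range(N)) + terminal_penalty(Y[N])
```

In practice, stage_cost would be a partial application (e.g. via functools.partial) that binds the setpoints, weights, or prices before the objective is evaluated.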
3. Brief review of direct methods for dynamic optimization

Chemical processes are modeled dynamically using differential and algebraic equations (DAEs). Dynamic equations such as mass and energy balances make up the differential equations, and physical and thermodynamic relations contribute the algebraic part of the process model. In model predictive control, a dynamic regulation problem is formulated in which these DAEs are used to predict the future behavior of the system. Methods for solving such dynamic optimization problems are classified into different approaches based on how the DAEs are handled. Biegler (2010), Binder et al. (2001), and Cervantes, Wächter, Tütüncü, and Biegler (2000) provide comprehensive reviews of these approaches. In this study we briefly review the direct methods, which we use for our simulation studies.

The basic idea of the direct methods for the solution of optimal control problems is to transcribe the original infinite-dimensional problem into a finite-dimensional nonlinear programming problem (NLP), which is solved using NLP solvers. Such methods can be separated into two categories: sequential and simultaneous methods. In sequential simulation and optimization, also known as control vector parametrization, only the control variables are discretized. In these techniques, the model equations are solved by numerical integration for the current guess of the control parameters at every iteration step. The control variables are represented as piecewise polynomials (Barton, Allgor, Feehery, & Galán, 1998; Vassiliadis, 1993) and optimization is performed with respect to the polynomial coefficients. The gradients of the objective function with respect to the control coefficients and parameters are calculated either from direct sensitivity equations of the DAE system or by integration of the adjoint equations (Hindmarsh et al., 2005). However, it is well known that sequential approaches cannot handle open-loop instability (Ascher & Petzold, 1998; Flores-Tlacuahuac, Biegler, & Saldívar-Guerra, 2005). Cervantes and Biegler (2001) and Vassiliadis (1993) review these methods in detail.

Multiple shooting serves as a hybrid between the sequential and simultaneous approaches. Here the time domain is partitioned into smaller time elements and the DAE models are integrated separately in each element (Bock & Plitt, 1984; Leineweber, 1999). Control variables are parametrized in the same way as in the sequential approach, and gradient information is obtained for both the control variables and the initial conditions of the state variables in each element. The inequality constraints for state and control variables are imposed directly at the boundary points.

In this study, we focus on the simultaneous approach, also known as direct transcription. In this technique both the state and control variables are discretized, which leads to a large number of optimization variables. The resulting large-scale NLPs require special solution strategies (Betts & Frank, 1994; Betts & Huffman, 1990; Cervantes & Biegler, 1998, 2000; Cervantes et al., 2000).
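For contrast with the simultaneous transcription used in the remainder of the paper, the following is a minimal sketch of the sequential (control vector parametrization) strategy described above, for an ODE model dx/dt = f(x, u) with piecewise-constant inputs. Gradients here come from the optimizer's internal finite differences rather than from sensitivity or adjoint integration; all names are illustrative, under the stated assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def sequential_objective(u_flat, f, stage_cost, x0, N, dt, nu):
    """Re-integrate the model over the horizon for the current control guess
    and accumulate a rectangle-rule approximation of the stage cost."""
    U = u_flat.reshape(N, nu)
    x, J = np.asarray(x0, dtype=float), 0.0
    for k in range(N):
        sol = solve_ivp(lambda t, x_: f(x_, U[k]), (0.0, dt), x, rtol=1e-8)
        J += stage_cost(x, U[k]) * dt
        x = sol.y[:, -1]
    return J

def solve_sequential(f, stage_cost, x0, N, dt, nu, u_guess, u_bounds=None):
    """u_bounds, if given, is a list of (lo, hi) pairs of length N*nu."""
    res = minimize(sequential_objective, u_guess.ravel(),
                   args=(f, stage_cost, x0, N, dt, nu),
                   method="L-BFGS-B", bounds=u_bounds)
    return res.x.reshape(N, nu), res
```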
The simultaneous approach has a number of advantages over other approaches to dynamic optimization.

1 Unlike the sequential approaches, the coupling of the DAE solution with the optimization problem in the simultaneous method avoids intermediate solutions that may not exist or may require excessive computational effort. The DAE system is solved only once, at the optimal point.
2 The simultaneous approach is more robust to problems with unstable and ill-conditioned systems, which cause DAE integrators to fail.
3 The approach is advantageous for problems with path constraints and with instabilities that occur for a range of inputs. As shown by the dichotomy property (Ascher & Petzold, 1998), it can "pin down" unstable modes (i.e. increasing modes in the forward direction) by enforcing the appropriate boundary conditions.
4 These methods allow the direct enforcement of state and control variable constraints, at the same level of discretization as the state variables of the DAE system.
On the other hand, there are some disadvantages to the simultaneous approach. As mentioned before, this approach leads to large-scale NLPs, and hence special methods are required to solve them efficiently. Also, for optimal control problems in which the control variables are discretized at the same level as the state variables, there are a number of open questions regarding convergence to the solution of the original variational problem. Biegler (2010) and the references therein discuss these questions in detail. In the next section, we formulate the model predictive control dynamic regulation problem using the simultaneous approach.

4. Formulation using simultaneous approach

The DAE optimization problem (1), which has the system dynamics in continuous time, is converted into a discrete time NLP by approximating the state and control profiles by a family of polynomials on finite elements. These polynomials can be represented as power series, sums of orthogonal polynomials, or in Lagrange form. Here, we use the following monomial basis representation for the differential profiles, which is popular for Runge-Kutta discretizations:

x(t) = x(k − 1) + h(k) Σ_{q=1}^{K} Ω_q((t − t(k − 1))/h(k)) (dx/dt)_{k,q}      (2)

Here x(k − 1) is the value of the differential variable at the beginning of element k, h(k) the length of element k, (dx/dt)_{k,q} the value of its first derivative in element k at collocation point q, and Ω_q is a polynomial of degree K, satisfying

Ω_q(0) = 0      for q = 1, . . ., K
Ω'_q(ρ_r) = δ_{q,r}      for q, r = 1, . . ., K

where ρ_r is the location of the rth collocation point within each element. Continuity of the differential profiles is enforced by

x(k) = x(k − 1) + h(k) Σ_{q=1}^{K} Ω_q(1) (dx/dt)_{k,q}      (3)

Here we use Radau collocation points because they allow constraints to be set at the end of each element and they stabilize the system more efficiently if high-index DAEs are present (Biegler, 2010). In addition, the control and algebraic profiles are approximated using a Lagrange basis representation, which takes the form

y(t) = Σ_{q=1}^{K} ψ_q((t − t(k − 1))/h(k)) y_{k,q}      (4)
u(t) = Σ_{q=1}^{K} ψ_q((t − t(k − 1))/h(k)) u_{k,q}      (5)

Here y_{k,q} and u_{k,q} represent the values of the algebraic and control variables, respectively, in element k at collocation point q, and ψ_q is the Lagrange polynomial of degree K satisfying ψ_q(ρ_r) = δ_{q,r} for q, r = 1, . . ., K. Note that u_{k,K} = u(k) and y_{k,K} = y(k), i.e. the last collocation point in each finite element lies on the boundary of that element (see Fig. 1).

Fig. 1. Full discretization and direct collocation for approximating the dynamic system.

Substituting (2)–(5) into (1b)–(1e) gives the following NLP:

min over {(dx/dt)_{k,q}, u_{k,q}, y_{k,q}, x_{k,q}, x(k)} of   Σ_{k=0}^{N−1} ℓ(y(k), u(k)) + V_f(y(N))

subject to, for k = 1, . . ., N and q = 1, . . ., K:

(dx/dt)_{k,q} = f(x_{k,q}, u_{k,q})
x_{k,q} = x(k − 1) + h(k) Σ_{q'=1}^{K} Ω_{q'}(ρ_q) (dx/dt)_{k,q'}
x(k) = x(k − 1) + h(k) Σ_{q=1}^{K} Ω_q(1) (dx/dt)_{k,q}
0 = g(x_{k,q}, u_{k,q})
y_{k,q} = h(x_{k,q}, u_{k,q})
g_l ≤ G(y_{k,q}, u_{k,q}) ≤ g_u

Additionally, since the inputs are applied in a zero-order hold fashion on the actual system, we impose an additional constraint that fixes the input values across a given finite element:

u_{k,q} = u_{k,0},   for all q = 1, . . ., K

This is applicable because we allow the inputs to be discontinuous between finite elements, and it also facilitates the incorporation of move-size constraints and a move-suppression penalty in the objective function, which are commonly used in real-world applications. This formulation leads to a standard NLP of the form

min_z f(z)   subject to   c(z) = 0,   z_l ≤ z ≤ z_u

where z = (x(k), x_{k,q}, (dx/dt)_{k,q}, u_{k,q}, y_{k,q}).
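As a small numerical illustration of this discretization, the collocation coefficients Ω_q(ρ_r) and Ω_q(1) that enter the element and continuity equations above can be computed directly from their definition (Ω_q(0) = 0 and dΩ_q/dτ equal to the Lagrange polynomial on the collocation points). The sketch below does this for K = 3 Radau points; it is illustrative and not the authors' implementation.

```python
import numpy as np

K = 3
# Radau points on (0, 1] for K = 3 (last point on the element boundary).
rho = np.array([(4.0 - np.sqrt(6.0)) / 10.0,
                (4.0 + np.sqrt(6.0)) / 10.0,
                1.0])

def omega(q, tau):
    """Omega_q(tau): integral from 0 to tau of the Lagrange polynomial l_q
    built on the collocation points, so Omega_q(0) = 0 and Omega_q'(rho_r) = delta_{q,r}."""
    l_q = np.poly1d([1.0])
    for r in range(K):
        if r != q:
            l_q = l_q * np.poly1d([1.0, -rho[r]]) * (1.0 / (rho[q] - rho[r]))
    antideriv = np.polyint(l_q)
    return np.polyval(antideriv, tau) - np.polyval(antideriv, 0.0)

# Omega_at_rho[r, q] = Omega_q(rho_r): used in x_{k,q} = x(k-1) + h(k) sum_q' Omega_q'(rho_q) xdot_{k,q'}
Omega_at_rho = np.array([[omega(q, rho[r]) for q in range(K)] for r in range(K)])
# Omega_at_1[q] = Omega_q(1): used in the continuity equation (3)
Omega_at_1 = np.array([omega(q, 1.0) for q in range(K)])
print(Omega_at_rho)
print(Omega_at_1)
```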
This NLP is solved using an efficient interior-point-based large-scale nonlinear optimization algorithm, IPOPT (Wächter & Biegler, 2006). Specifically, the IPOPT algorithm exploits the sparsity of large-scale systems by requiring that the nonzero elements of the Jacobian of the constraints and the Hessian of the problem Lagrangian be supplied in a compressed row sparse format. The performance, including the rate of convergence, depends critically on the quality of the first and second-order derivative information. For this reason, we use ADOL-C (Walther, 2008; Walther & Griewank, 2009), an open-source automatic differentiation package. The numerical values of the derivative vectors are obtained free of truncation errors, at a cost which is a small multiple of the run time and random access memory required by the given function evaluation. This algorithm allows the computation of the sparsity patterns of the Jacobian and the Hessian before starting the optimization algorithm, which then requests only the nonzero values of the exact first and second-order derivatives, instead of the whole matrices, making it highly efficient for large-scale problems. At each execution cycle, IPOPT is initialized with a warm start constructed from the previous solution. IPOPT is tuned to begin with a small initial value of the barrier parameter (e.g. 10^−3), thus preventing the initialization from being strongly perturbed away from the previous solution. This is also facilitated by choosing a small value for the required minimum distance from the initial point to the bounds.
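The warm-starting strategy just described maps onto standard IPOPT options. The sketch below lists one plausible set of such settings; the option names are taken from the IPOPT documentation, but the specific values (other than the 10^−3 barrier parameter mentioned above) are illustrative rather than the authors' exact configuration.

```python
# Standard IPOPT options for warm starting: reuse the previous primal/dual
# point, start with a small barrier parameter, and keep the initial point
# close to the bounds (values below are assumptions for illustration).
ipopt_warm_start_options = {
    "warm_start_init_point": "yes",
    "mu_init": 1e-3,
    "warm_start_bound_push": 1e-6,
    "warm_start_mult_bound_push": 1e-6,
}

def write_ipopt_options(options, path="ipopt.opt"):
    """Write options in the plain 'name value' format that IPOPT reads from
    an ipopt.opt file in the working directory."""
    with open(path, "w") as f:
        for name, value in options.items():
            f.write(f"{name} {value}\n")
```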
5. Case studies

We now present two case studies to demonstrate the application of the discussed methodology to solve nonlinear dynamic regulation problems and to show the benefits of optimizing process economics directly in the dynamic regulation problem. To gauge the benefit of using economics-based MPC over the standard tracking MPC, we define the following performance measure

G = (P_{T,eco} − P_{T,track}) / (T P_s) × 100%

where P_{T,eco} and P_{T,track} are the cumulative profits over T time steps of the closed-loop systems under the economic and tracking MPC respectively, and P_s is the steady-state profit. G captures the difference in economic performance between the economics-optimizing and tracking-type controllers as a fraction of the nominal steady-state profit. Note that the instantaneous profit for both controllers at steady state is the same, and hence any improvements observed occur when the system is in transient.

5.1. Evaporation process

We first consider an evaporation process that removes a volatile species from a nonvolatile solvent, thus concentrating the solution. It consists of a heat exchange vessel with a recirculating pump. The overhead vapor is condensed by a process heat exchanger. The details of the mathematical model can be found in Newell and Lee (1989). We use the following modified process model (Fig. 2).

Fig. 2. Evaporator system.

Differential equations:

M dX2/dt = F1 X1 − F2 X2
C dP2/dt = F4 − F5

Process liquid energy balance:

T2 = 0.5616 P2 + 0.3126 X2 + 48.43
T3 = 0.507 P2 + 55
F4 = (Q100 − F1 Cp (T2 − T1)) / λ

Heat steam jacket:

T100 = 0.1538 P100 + 90
Q100 = UA1 (T100 − T2)
UA1 = 0.16 (F1 + F3)
F100 = Q100 / λs

Condenser:

Q200 = UA2 (T3 − T200) / (1 + UA2/(2 Cp F200))
F5 = Q200 / λ

Level controller:

F2 = F1 − F4

In the above equations, M = 20 kg is the liquid holdup in the evaporator, C = 4 kg/kPa is a constant, UA1 and UA2 = 6.84 kW/K are the products of the heat transfer coefficients and the heat transfer areas in the evaporator and condenser respectively, Cp = 0.07 kW/kg·min is the heat capacity of water, λ = 38.5 kW/kg·min is the latent heat of evaporation of water, and λs = 36.6 kW/kg·min is the latent heat of steam at saturated conditions.
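For reference, the evaporator model above translates directly into a small Python right-hand-side function. The state, input, and disturbance orderings and the returned auxiliary flows are illustrative choices made here; the parameter values are those listed above.

```python
import numpy as np

# Parameters as listed above
M, C = 20.0, 4.0          # liquid holdup, pressure constant
UA2, Cp = 6.84, 0.07      # condenser heat transfer, heat capacity of water
lam, lam_s = 38.5, 36.6   # latent heats of evaporation and of saturated steam

def evaporator_rhs(x, u, d):
    """x = [X2, P2], u = [P100, F200], d = [F1, X1, F3, T1, T200].
    Returns (dx/dt, auxiliary flows used in the operating cost)."""
    X2, P2 = x
    P100, F200 = u
    F1, X1, F3, T1, T200 = d
    # Algebraic relations
    T2 = 0.5616 * P2 + 0.3126 * X2 + 48.43
    T3 = 0.507 * P2 + 55.0
    T100 = 0.1538 * P100 + 90.0
    UA1 = 0.16 * (F1 + F3)
    Q100 = UA1 * (T100 - T2)
    F100 = Q100 / lam_s
    F4 = (Q100 - F1 * Cp * (T2 - T1)) / lam
    Q200 = UA2 * (T3 - T200) / (1.0 + UA2 / (2.0 * Cp * F200))
    F5 = Q200 / lam
    F2 = F1 - F4            # level controller
    # Differential equations
    dX2 = (F1 * X1 - F2 * X2) / M
    dP2 = (F4 - F5) / C
    return np.array([dX2, dP2]), {"F2": F2, "F3": F3, "F100": F100, "F200": F200}
```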
Fig. 3. Closed-loop input (c),(d) and state (e),(f) profiles, and the instantaneous cost (b) of the evaporator process under unmeasured disturbance (a) in the operating pressure.
The economic objective of the evaporator process is to minimize the following operating cost (in 10^−3 $/h), which consists of the cost of electricity, steam and cooling water (Govatsmark & Skogestad, 2001; Wang & Cameron, 1994):

ℓ_eco = 1.009(F2 + F3) + 600 F100 + 0.6 F200      (6)

The product composition X2 and the operating pressure P2 are the state variables as well as the available measurements, and the steam pressure P100 and the cooling water flow rate F200 are the manipulated variables. Table 1 shows the variables that are the source of disturbances in the system.

Table 1. Disturbance variables in the evaporator system and their nominal values.

Variable   Description                       Value
F1         Feed flow rate                    10 kg/min
C1         Feed composition                  5%
F3         Circulating flow rate             50 kg/min
T1         Feed temperature                  40 °C
T200       Cooling water inlet temperature   25 °C

The following bounds are imposed:

X2 ≥ 25%
40 kPa ≤ P2 ≤ 80 kPa
P100 ≤ 400 kPa
F200 ≤ 400 kg/min

At the steady state under nominal conditions, the minimizer of ℓ_eco (6) leads to input and output variables with the following values:

X2 = 25,   P2 = 50.57,   P100 = 193.45,   F200 = 207.33

We set up the dynamic MPC problems with a sample time of 1 min and a control horizon of N = 200. The economic MPC problem is set up with the operating cost as the stage cost, and the tracking problem uses ℓ_track = |x − x_s|²_Q + |u − u_s|²_R + |Δu|²_S, where Q = diag(0.92, 0.3), R = diag(1, 1) and S = 0, as the stage cost. Both controllers are implemented with a terminal state constraint, i.e. x(N) = x_s.
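A sketch of the two evaporator stage costs as just specified, i.e. the operating cost (6) and the quadratic tracking cost with the listed weights (S = 0, so the move-suppression term is dropped):

```python
import numpy as np

# Economically optimal steady state and tracking weights from the text
x_s = np.array([25.0, 50.57])        # [X2, P2]
u_s = np.array([193.45, 207.33])     # [P100, F200]
Q = np.diag([0.92, 0.3])
R = np.diag([1.0, 1.0])

def l_eco_evaporator(F2, F3, F100, F200):
    """Operating cost (6), in 1e-3 $/h."""
    return 1.009 * (F2 + F3) + 600.0 * F100 + 0.6 * F200

def l_track_evaporator(x, u):
    """Quadratic tracking stage cost about the optimal steady state."""
    dx, du = x - x_s, u - u_s
    return dx @ Q @ dx + du @ R @ du
```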
Fig. 4. Closed-loop input (c),(d) and state (e),(f) profiles, and the instantaneous cost (b) of the evaporator process under measured disturbances (a).
In order to demonstrate a fair comparison between the tracking and economic controllers, the tracking penalties were generated using the diagonal elements of the Hessian of the economic cost at the nominal steady state. Trial and error around these values showed an increase in the benefit from the economic MPC, suggesting that these values were reasonable choices for the comparative study. We also dial between the two extremes of a pure economic and a pure tracking controller using a simple convex combination of the two stage costs:

ℓ = α ℓ_eco + (1 − α) ℓ_track

This allows us to study the effects of adding convex terms to the cost function.

We first subject the system to a disturbance in the operating pressure P2. A periodic drop in the pressure was injected into the system and the performance of the two controllers was compared. Fig. 3 shows the input and state profiles as well as the corresponding instantaneous operating costs of the two closed-loop systems. A pressure drop of 8 kPa was injected with a repetition period of 20 min. A drop in the operating pressure decreases the instantaneous operating cost. Hence the system under economic MPC tends to keep the pressure (P100) low for a longer time, in contrast to the tracking MPC, which returns to the optimal steady-state value more quickly. The corrective control action resulting from this disturbance also increases the product composition, which in turn increases the operating cost. Hence the economic MPC drives the product composition (X2) to steady state faster than the tracking MPC. The benefit obtained from economic MPC over the tracking controller is G = 6% of the steady-state operating cost.

Next we subject the system to a measured disturbance in the feed flow rate (F1), the feed temperature (T1) and the cooling water inlet temperature (T100). Fig. 4 shows the input and state profiles as well as the corresponding instantaneous operating costs of the two closed-loop systems. An increase in the feed flow rate drops the product concentration. To counter this disturbance both controllers increase the steam pressure and the cooling water flow rate, increasing the product concentration, the operating pressure, and the operating cost. The economic controller keeps the system at a lower concentration and pressure than the tracking controller does. In order to reduce the instantaneous operating cost, the economic controller reduces the inlet water flow rate when a disturbance increases the cooling water inlet temperature. The economic controller drops the water flow more than the tracking controller does, leading to higher instantaneous profit. In this scenario the benefit obtained from economic MPC over the tracking controller is G = 2.25% of the steady-state operating cost.
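Both the convex blend of stage costs introduced above and the benefit measure G used to report these results are one-line computations; a minimal sketch (assuming the two stage costs accept the same arguments):

```python
def blended_stage_cost(alpha, l_eco, l_track):
    """Return l = alpha*l_eco + (1 - alpha)*l_track; both costs are assumed
    to take the same arguments (e.g. (y, u))."""
    return lambda *args: alpha * l_eco(*args) + (1.0 - alpha) * l_track(*args)

def benefit_G(P_T_eco, P_T_track, P_s, T):
    """G = (P_T,eco - P_T,track) / (T * P_s) * 100%."""
    return (P_T_eco - P_T_track) / (T * P_s) * 100.0
```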
Fig. 5. Closed-loop input (c),(d) and state (e),(f) profiles, and the instantaneous profit (b) of the Williams–Otto reactor under step disturbance in the feed flow rate of A (a).
As we dial from the pure economic to the pure tracking controller, we see the performance drop. Table 2 shows the average operating cost of the evaporator process under all these controllers in the two disturbance scenarios discussed above. The benefit of economic MPC is seen to be 6% of the nominal steady-state profit under unmeasured disturbances, which drops to 0.65% when the contribution of the economic cost is reduced to α = 70% in the controller objective function.

During the closed-loop simulations, the dynamic optimization NLP was always started with a warm start constructed from the previous solution. This helped the solver converge faster. The details of the NLP generated for the problem and the computational performance are given in Table 4. Although no convergence failures were observed by the NLP solver, we see that with increasing frequency of disturbances, the average CPU time and the number of iterations increase.

Table 2. Performance comparison of the evaporator process under economic and tracking MPC.

             Unmeasured disturbance              Measured disturbance
Controller   Average operating cost   G          Average operating cost   G
track-MPC    6.17                     –          5.85                     –
α = 0.7      6.13                     0.65%      5.82                     0.45%
α = 0.9      6                        3%         5.79                     1%
α = 0.95     5.94                     4%         5.74                     1.8%
eco-MPC      5.83                     6%         5.72                     2.25%

Table 3. Performance comparison of the Williams–Otto reactor under economic and tracking MPC.

             Step disturbance                        Random disturbance (high frequency)
Controller   Average operating cost ($/s)   G        Average operating cost ($/s)   G
track-MPC    9.37                           –        8.8                            –
α = 10^−4    9.42                           0.4%     8.6                            0.8%
α = 10^−5    9.55                           1.7%     9                              5.14%
α = 10^−6    9.62                           2.3%     9.6                            10.6%
eco-MPC      9.64                           2.5%     9.83                           13%

5.2. Williams–Otto reactor

The Williams–Otto reactor is one unit of the Williams–Otto plant model (Williams & Otto, 1960). The reactor is a CSTR with mass holdup 2104.7 kg and temperature TR.
Fig. 6. Closed-loop input (c),(d) and state (e),(f) profiles, and the instantaneous profit (b) of the Williams–Otto reactor under large random disturbances in the feed flow rate of A (a).
The reactor is fed with two pure component reactant streams FA and FB. The following three simultaneous reactions involving six components occur inside the reactor:

A + B → C      k1 = 1.6599 × 10^6 e^(−6666.7/TR) s^−1
B + C → P + E      k2 = 7.2117 × 10^8 e^(−8333.3/TR) s^−1
C + P → G      k3 = 2.6745 × 10^12 e^(−11111/TR) s^−1

The following dynamic mass balances represent the plant behavior:

W dXA/dt = FA − (FA + FB) XA − r1
W dXB/dt = FB − (FA + FB) XB − r1 − r2
W dXC/dt = −(FA + FB) XC + 2 r1 − 2 r2 − r3
W dXE/dt = −(FA + FB) XE + r2
W dXG/dt = −(FA + FB) XG + 1.5 r3
W dXP/dt = −(FA + FB) XP + r2 − 0.5 r3

where XA, XB, XC, XE, XG and XP are the mass fractions of the respective components, and r1 = k1 XA XB W, r2 = k2 XB XC W and r3 = k3 XC XP W are the three reaction rates. The economic objective of the process is to maximize the profit, which is the difference between the sales of the products E and P and the costs of raw materials A and B (Xiong & Jutan, 2003):

−ℓ_eco = 5554.1 (FA + FB) XP + 125.91 (FA + FB) XE − 370.3 FA − 555.42 FB      (7)

The mass fractions of all the components are the state variables as well as the available measurements, and the input flow rate of component B (FB) and the reactor temperature (TR) are the control variables. The following bounds are imposed on the input variables:

50 °C ≤ TR ≤ 150 °C
2 kg/s ≤ FB ≤ 10 kg/s

The flow rate of component A (FA) is the source of disturbance, with a nominal value of 1.8 kg/s. We set up the dynamic MPC problems with a sample time of 0.5 min and a control horizon of N = 100, with x(N) = x_s. At the steady state under nominal conditions, the minimizer of ℓ_eco (7) leads to the input variables with the following values:

FB = 6.1 kg/s,   TR = 92.9 °C
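For reference, the reactor mass balances and the profit function (7) above translate into the following Python sketch. The conversion of TR to kelvin inside the Arrhenius expressions and the argument ordering are assumptions made here for illustration, since the text states the temperature bounds in °C.

```python
import numpy as np

W = 2104.7  # reactor mass holdup, kg

def wo_rates(x, TR_C):
    """Reaction rates r1, r2, r3 from the Arrhenius expressions above
    (TR converted to kelvin here -- an assumption for this sketch)."""
    XA, XB, XC, XE, XG, XP = x
    TR = TR_C + 273.15
    k1 = 1.6599e6  * np.exp(-6666.7 / TR)
    k2 = 7.2117e8  * np.exp(-8333.3 / TR)
    k3 = 2.6745e12 * np.exp(-11111.0 / TR)
    return k1 * XA * XB * W, k2 * XB * XC * W, k3 * XC * XP * W

def wo_rhs(x, u, FA):
    """dx/dt from the mass balances above (divided through by W).
    x = [XA, XB, XC, XE, XG, XP]; u = [FB, TR]; FA is the measured disturbance."""
    XA, XB, XC, XE, XG, XP = x
    FB, TR_C = u
    r1, r2, r3 = wo_rates(x, TR_C)
    F = FA + FB
    return np.array([FA - F * XA - r1,
                     FB - F * XB - r1 - r2,
                     -F * XC + 2.0 * r1 - 2.0 * r2 - r3,
                     -F * XE + r2,
                     -F * XG + 1.5 * r3,
                     -F * XP + r2 - 0.5 * r3]) / W

def wo_profit(x, u, FA):
    """Instantaneous profit, the negative of the economic stage cost (7)."""
    XE, XP = x[3], x[5]
    FB = u[0]
    F = FA + FB
    return 5554.1 * F * XP + 125.91 * F * XE - 370.3 * FA - 555.42 * FB
```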
Table 4. NLP problem size and performance. All simulations were carried out on an Intel Core i7 (Sandy Bridge) 3.4 GHz processor.

              NLP dimensions             Disturbance   Iterations   CPU time (s) (average)
              Unknowns   Constraints                   (average)    Overall   Function evals
Evaporator    2800       3402            Measured      15           0.28      0.2
                                         Unmeasured    15           0.25      0.18
WO Reactor    4800       4600            Step          13           1.1       0.9
                                         Random        17           1.7       1.4

The economic MPC problem is set up with the profit as the stage cost, and the tracking problem uses ℓ_track = |x − x_s|²_Q + |u − u_s|²_R + |Δu|²_S, where Q = 10^−2 I_6, R = 0 and S = 0, as the stage cost. We first subject the system to periodic step changes in the feed flow rate of A (FA). The economic controller tries to extract more profit from the disturbance by raising the product concentrations (XP and XE) more than the tracking controller. Fig. 5 shows the control moves and the mass fractions of the products P and E of the two closed-loop systems. To simulate more transient behavior, FA, which is a measured disturbance variable, is disturbed with large variations. Fig. 6 shows the control moves and the mass fractions of the products P and E of the two closed-loop systems.

Table 3 shows the average operating cost of the Williams–Otto reactor under the two controllers in the two disturbance scenarios discussed above. The benefit of economic MPC is G = 13% of the nominal steady-state profit under large random measured disturbances. While dialing from purely economic to purely tracking cost functions, we note a steep decline in the benefit for this example. The relative benefits compared to the tracking-type controller become negligible at very small values of α. Table 4 shows again that the average CPU time and the number of iterations increase with increasing frequency of disturbances.

6. Conclusions

The objective of this work is to apply the simultaneous solution strategy for dynamic optimization problems to solve nonlinear MPC problems online, and to demonstrate the economic performance benefit of optimizing process economics online in the dynamic regulation problem. The solution strategy was implemented to solve two chemical process examples from the literature, simulated under different disturbance scenarios. The processes were studied under feedback control from an economics-optimizing controller and a tracking controller separately, and the economic performance of the two closed-loop systems was compared under the different scenarios.

The improvement in economic performance is extracted from the transient period of the process. It is observed that the economics-optimizing controller takes advantage of the unmeasured process disturbances, which perturb the system to increase instantaneous profit, by decelerating the settling rate back to the steady state that has a lower profit. On the other hand, the tracking controller settles the system back to the steady state at the same rate irrespective of whether the disturbance reduces or increases the instantaneous profit. When the system is subject to measured disturbances, the optimal steady state changes depending on the disturbance, and the control actions of the tracking-type controller are aimed at tracking this steady-state trajectory. The economic controller follows the same trends, but with different magnitudes, in order to achieve higher instantaneous profit. A higher frequency of disturbances forces the system to remain in transient for longer times, producing a higher cumulative benefit. The two simulation studies show that optimizing process economics in the dynamic regulation problem can take advantage of the type and frequency of disturbances and can score
an economic advantage of 5–10% of the steady-state profit over tracking-type controllers.

Acknowledgments

The authors would like to thank Prof. Andrea Walther and Dr. Kshitij Kulshreshtha for their assistance with the automatic differentiation setup. The authors acknowledge financial support from the industrial members of the Texas–Wisconsin–California Control Consortium (TWCCC), and the NSF through grant #CTS-0825306.

References

Amrit, R. (2011). Optimizing process economics in model predictive control. Ph.D. thesis, University of Wisconsin–Madison.
Amrit, R., Rawlings, J. B., & Angeli, D. (2011). Economic optimization using model predictive control with a terminal cost. Annual Reviews in Control, 35, 178–186.
Angeli, D., Amrit, R., & Rawlings, J. B. (2009). Receding horizon cost optimization for overly constrained nonlinear plants. In Proceedings of the Conference on Decision and Control, Shanghai, China.
Angeli, D., Amrit, R., & Rawlings, J. B. (2012). On average performance and stability of economic model predictive control. IEEE Transactions on Automatic Control, 57(7), 1615–1626.
Ascher, U. N., & Petzold, L. R. (1998). Computer methods for ordinary differential equations and differential-algebraic equations. Philadelphia: SIAM.
Barton, P. I., Allgor, R. J., Feehery, W. F., & Galán, S. (1998). Dynamic optimization in a discontinuous world. Industrial and Engineering Chemistry Research, 37, 966–981.
Betts, J. T., & Frank, P. D. (1994). A sparse nonlinear optimization algorithm. The Journal of Optimization Theory and Applications, 82(3), 519–541.
Betts, J., & Huffman, W. (1990). The application of sparse nonlinear programming to trajectory optimization. In AIAA Guidance, Navigation and Control Conference, Portland, OR (pp. 1198–1218).
Biegler, L. T. (2010). Nonlinear programming: Concepts, algorithms, and applications to chemical processes. Philadelphia, PA, USA: SIAM.
Binder, T., Blank, L., Bock, H., Bulirsch, R., Dahmen, W., Diehl, M., et al. (2001). Introduction to model based optimization of chemical processes on moving horizons. In Online optimization of large scale systems: State of the art. Berlin Heidelberg, Germany: Springer.
Bock, H. G., & Plitt, K. J. (1984). A multiple shooting algorithm for direct solution of optimal control problems. In Proceedings of the 9th IFAC World Congress, Budapest (pp. 242–247). Pergamon Press.
Cervantes, A., & Biegler, L. T. (1998). Large-scale DAE optimization using a simultaneous NLP formulation. AIChE Journal, 44(5), 1038–1050.
Cervantes, A. M., & Biegler, L. T. (2000). A stable elemental decomposition for dynamic process optimization. Journal of Computation and Applied Mathematics, 120(1–2), 41–57.
Cervantes, A., & Biegler, L. T. (2001). Optimization strategies for dynamic systems. Encyclopedia of Optimization, 5, 400–413.
Cervantes, A. M., Wächter, A., Tütüncü, R. H., & Biegler, L. T. (2000). A reduced space interior point strategy for optimization of differential algebraic systems. Computers and Chemical Engineering, 24(1), 39–51.
Diehl, M., Amrit, R., & Rawlings, J. B. (2011). A Lyapunov function for economic optimizing model predictive control. IEEE Transactions on Automatic Control, 56(3), 703–707.
Engell, S. (2007). Feedback control for optimal process operation. Journal of Process Control, 17, 203–219.
Flores-Tlacuahuac, A., Biegler, L. T., & Saldívar-Guerra, E. (2005). Dynamic optimization of HIPS open-loop unstable polymerization reactors. Industrial and Engineering Chemistry Research, 44(8), 2659–2674.
Govatsmark, M. S., & Skogestad, S. (2001). Control structure selection for an evaporation process. Computers and Chemical Engineering, 9, 657–662.
Hindmarsh, A. C., Brown, P. N., Grant, K. E., Lee, S. L., Serban, R., Shumaker, D. E., & Woodward, C. S. (2005). SUNDIALS: Suite of nonlinear and differential/algebraic equation solvers. ACM Transactions on Mathematical Software, 31(3), 363–396.
Huang, R., Harinath, E., & Biegler, L. T. (2011). Lyapunov stability of economically oriented NMPC for cyclic processes. Journal of Process Control, 21, 501–509.
Leineweber, D. (1999). Efficient reduced SQP methods for the optimization of chemical processes described by large sparse DAE models. Tech. Rep., University of Heidelberg, Heidelberg, Germany.
Newell, R. B., & Lee, P. L. (1989). Applied process control: A case study. Sydney: Prentice Hall.
Rawlings, J. B., & Amrit, R. (2009). Optimizing process economic performance using model predictive control. In L. Magni, D. M. Raimondo, & F. Allgöwer (Eds.), Nonlinear model predictive control, vol. 384 of Lecture Notes in Control and Information Sciences (pp. 119–138). Berlin: Springer.
Rawlings, J. B., Bonné, D., Jørgensen, J. B., Venkat, A. N., & Jørgensen, S. B. (2008). Unreachable setpoints in model predictive control. IEEE Transactions on Automatic Control, 53(9), 2209–2215.
Rawlings, J. B., & Mayne, D. Q. (2009). Model predictive control: Theory and design. Madison, WI: Nob Hill Publishing.
Renfro, J., & Lu, J. (2009). Apparatus and method for model predictive control (MPC) of a nonlinear process. United States Patent #8046089.
Vassiliadis, V. (1993). Computational solution of dynamic optimization problems with general differential-algebraic constraints. Ph.D. thesis, Imperial College, London.
Wächter, A., & Biegler, L. T. (2006). On the implementation of a primal-dual interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1), 25–57.
Walther, A. (2008). Computing sparse Hessians with automatic differentiation. ACM Transactions on Mathematical Software, 34(1), 1–15.
Walther, A., & Griewank, A. (2009). Getting started with ADOL-C. Combinatorial Scientific Computing, 181.
Wang, F., & Cameron, I. (1994). Control studies on a model evaporation process: constrained state driving with conventional and higher relative degree systems. Journal of Process Control, 4(2), 59–75.
Williams, T., & Otto, R. (1960). A generalized chemical processing model for the investigation of computer control. AIEE Transactions, 79, 458.
Xiong, Q., & Jutan, A. (2003). Continuous optimization using a dynamic simplex method. Chemical Engineering Science, 58(16), 3817–3828.
Young, R. E., Bartusiak, R. D., & Fontaine, R. W. (2001). Evolution of an industrial nonlinear model predictive controller. In J. B. Rawlings, B. A. Ogunnaike, & J. W. Eaton (Eds.), Chemical Process Control–VI: Sixth International Conference on Chemical Process Control, AIChE Symposium Series, Volume 98, Number 326 (pp. 342–351). Tucson, Arizona.