Journal of Process Control 21 (2011) 311–321
A two-layer architecture for economically optimal process control and operation
Lynn Würth, Ralf Hannemann, Wolfgang Marquardt∗
AVT – Lehrstuhl für Prozesstechnik, RWTH Aachen University, D-52056 Aachen, Germany
Article history: Received 5 August 2010; received in revised form 11 November 2010; accepted 1 December 2010.
Keywords: Dynamic real-time optimization; Economic optimization; Nonlinear model-predictive control; Multi-layer architecture; Neighboring-extremal control
Abstract

A two-layer architecture for dynamic real-time optimization (or nonlinear model-predictive control (NMPC) with an economic objective) is presented, where the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem with an economic objective function is solved at a slow time-scale, which captures slow trends in the process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the trajectory in order to deal with fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture is able to address control systems with complex models leading to a high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, solving a new rigorous optimization problem is not necessary at each sampling time if the process has rather slow dynamics compared to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process. © 2010 Elsevier Ltd. All rights reserved.
1. Introduction

In the past twenty years, model-predictive control (MPC) has established itself in industry as one of the most important tools for controlling continuous processes operated at steady-state set-points. In these processes, the control architecture often consists of several layers operating on different time-scales. Tatjewski [1] and Scattolini [2] have reviewed vertical as well as horizontal decomposition approaches for MPC. Fig. 1(a) shows the well-known two-layer architecture involving a model-based constrained controller (MPC) and a real-time optimizer (RTO). A separation in time-scales and control objectives is performed between both layers. An RTO problem based on a stationary process model is solved at the upper layer with an economic objective function to compute set-points for the underlying MPC. The lower layer operates at a high sampling rate; its functionality comprises regulatory tasks to enforce the set-points under the influence of disturbances. The MPC at the lower layer often relies on a linear model valid in the vicinity of the operating point, whereas a rigorous nonlinear model is employed at the upper layer. The base layer typically comprises decentralized control loops to stabilize the plant. Often, an additional task is performed at the MPC layer. This so-called steady-state target optimization [1,3] recalculates
∗ Corresponding author. Tel.: +49 241 809 6712. E-mail addresses: [email protected] (L. Würth), [email protected] (R. Hannemann), [email protected] (W. Marquardt).
doi:10.1016/j.jprocont.2010.12.008
the set-points at the sampling rate of the MPC. Detailed reviews of this architecture can be found for example in the work of Qin and Badgwell [4] or Engell [5]. Robust approaches to integrate both layers taking into account process uncertainties have been presented recently by Adetola et al. [6] and Odloak [7]. The two-layer RTO architecture has already been successfully applied to large-scale and complex plants in the chemical and petrochemical industries. However, the two-layer RTO architecture has some disadvantages, as stressed, for example, by Engell [5]. It was pointed out that the models used in the optimization and in the control layer are not consistent; in particular, their steady-state gains are often different. The slow time-scale on the upper layer may lead to unsatisfactory economic performance. Since the economic optimization is only carried out when the plant reaches steady-state, the waiting time required to reach a new steady-state leads to a delay for computing a new set-point. It may thus happen that, due to uncertainties, the economically optimal operating point of the plant has shifted but the controller still tries to enforce the previously optimal operating point. Furthermore, due to frequent external disturbances, some plants are rarely operating in a steady-state mode. This will lead to infrequent optimization instances as the plant first has to reach a steady-state before a new optimization may be performed. Because of these disadvantages of the RTO architecture, several authors have shifted towards integrating the economic optimization and MPC problem into a single dynamic optimization problem (e.g. [5,8,9]). The integrated problem shown in Fig. 1(b) corresponds to a nonlinear model-predictive control (NMPC) problem with economic objective function, which will also be termed dynamic real-time optimization (DRTO) problem in the following. However,
Fig. 1. Vertical architectures for process control: (a) steady-state RTO providing set-points to an MPC with target correction, which sends set-points and manipulated variables to the plant and base-layer control; (b) single-layer dynamic RTO acting directly on the plant and base-layer control; (c) dynamic RTO providing a set-point trajectory to a tracking controller, which acts on the plant and base-layer control.
the single-layer approach has been confronted with the objections that the nonlinear dynamic problem is too complex to be solved reliably and efficiently, that safety and reliability issues arise [1], and that there is a lack of transparency for the operators. So far, very few industrial applications of the single-layer DRTO approach are known. If the optimization problem can be solved reliably and in real time, the single-layer approach may become attractive in the future. A lot of progress has been made in recent years to improve the solution efficiency of NMPC problems. For example, Diehl et al. [10] have developed methods to provide fast approximations to the optimal solutions. However, the error of these approximations is not controlled, and it may become large if strong nonlinearities and large uncertainties are present in the process. Zavala and Biegler [11] exploit parametric sensitivities to provide fast updates, while iterating to convergence during the sampling time. However, the strategy relies on the assumption that the solution to the fully converged optimization problem can be obtained within one sampling time of the process. An alternative to the previously mentioned single-layer approaches for DRTO is the two-layer strategy shown in Fig. 1(c), which was originally proposed by Helbig et al. [8]. Different types of dynamics in the uncertainties can be handled by this architecture, since the upper-layer rigorous optimization problem is solved at a lower sampling rate, while the lower-layer controller acts on a fast time-scale. However, the architecture as presented by Helbig et al. [8] was not implemented at the time due to technical limitations. Rather, Dünnebier et al. [12] computed the trajectory on the upper layer off-line and tracked it on-line by a time-variant linear MPC involving a least-squares objective and a model derived from linearization of the original model along the trajectory, but did not update the optimal trajectory on-line. A similar two-layer scheme, where a dynamic optimization problem based on a reduced slow-scale model is solved at the upper layer, has been proposed by Tosukhowong and Lee [13]. If a linear time-variant MPC controller is employed to track the trajectory provided by the upper layer, the inconsistency in the control objectives on both layers may lead to conflicts when disturbances arise [3]. Instead of using a tracking linear MPC controller, Kadam and Marquardt [14] have computed fast updates of the economically optimal trajectory by means of a neighboring-extremal approach. This approach avoids the inconsistency in the control objectives on both layers and the resulting potential conflicts. However, this architecture was implemented without a time-scale separation of the disturbances and without considering the computational delay of the solution of the rigorous optimization problem, which cannot be obtained instantaneously. In this work, consistent objectives are employed on both layers, as the upper layer solves an economic optimization problem and the lower layer implements a neighboring-extremal (linear time-variant) model-predictive controller, which updates the economically optimal trajectory obtained from the upper layer and suppresses high-frequency disturbances to the extent possible.
Compared to our previous work [14], a time-scale separation of the disturbances is now performed such that only the slow trend of the disturbances is taken into account on the upper layer. Furthermore, the implementation of the two-layer architecture is more realistic, as the computational delay of the rigorous optimization problem is accounted for on the upper layer of the control architecture.

The paper is organized as follows: in Section 2, the formulation of the single-layer DRTO problem is introduced. Section 3 presents the architecture for performing DRTO on two time-scales. Section 4 describes the integration of dynamic real-time optimization and neighboring-extremal (model-predictive) control. The architecture is assessed in Section 5 with a case study of a simulated polymerization process. The article concludes with an outline of some theoretical issues related to the two-layer architecture to be addressed in the future.

2. Single-layer DRTO problem

We first introduce the optimization problem formulation used in the single-layer DRTO approach without time-scale separation. The index j denotes the sampling time instants t_j of the single-layer DRTO controller. The following optimization problem must be solved by the controller on the time horizon [t_j, t_j^f] at each sampling instant t_j of the process:
min_{u_j(t)}  Φ_eco(x_j(t), u_j(t), t_j, t_j^f),    (P1)

s.t.  ẋ_j(t) = f(x_j(t), u_j(t), d̂_j(t)),  x_j(t_j) = x̂_j,    (1)
      y_j(t) = g(x_j(t), u_j(t), d̂_j(t)),    (2)
      0 ≥ h(x_j(t), y_j(t), u_j(t)),    (3)
      0 ≥ e(x_j(t_j^f)),    (4)
      t ∈ [t_j, t_j^f],
      t_j := t_{j-1} + Δt,    (5)
      t_j^f := t_{j-1}^f + Δt,    (6)
      j = 0, 1, . . ., J.
f(·) comprises the dynamic process model (1). x_j(t) ∈ R^{n_x} are the state variables with initial conditions x̂_j; y_j(t) ∈ R^{n_y} are the algebraic output variables, which are updated by Eq. (2). The time-dependent control variables u_j(t) ∈ R^{n_u} are the degrees of freedom of the optimization problem on horizon j. Eqs. (3) and (4) describe the path constraints h(·) on the control and state variables and the endpoint constraints e(·) on the state variables, respectively. Disturbances d̂_j(t) with different dynamics enter the process model. They may be caused by measurement, process or model uncertainty. The optimization horizon is shifted by the sampling interval Δt at each sampling time t_j. For batch processes, the problem formulation equally holds, but the final time is fixed, giving rise to a shrinking-horizon control problem [15]. An extension to process models represented by differential-algebraic systems of differential index 1 [16] is straightforward.
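To make the structure of (P1) concrete, the following sketch transcribes a moving-horizon economic optimization for a hypothetical scalar model with piecewise-constant controls and explicit Euler integration, solved with a general-purpose SLSQP solver. The model, cost coefficients, bounds and horizon are illustrative assumptions and do not correspond to the process or to the solution strategy used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of problem (P1) for a toy scalar model
#   x_dot = -x + u + d,  economic cost Phi_eco = integral of (price*u - value*x) dt,
# with piecewise-constant controls and explicit Euler integration.
N, dt_s = 20, 0.25          # control intervals and integration step on [t_j, t_j^f]
x_hat, d_hat = 1.0, 0.1     # estimated initial state and (slow) disturbance
price, value = 1.0, 2.0     # hypothetical economic weights

def simulate(u, x0, d):
    """Integrate the toy model over the horizon for a given control sequence."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt_s * (-x[k] + u[k] + d)
    return x

def phi_eco(u):
    """Economic objective Phi_eco approximated by a Riemann sum."""
    x = simulate(u, x_hat, d_hat)
    return np.sum(price * u - value * x[:-1]) * dt_s

def path_constraint(u):
    """Path constraint of type (3), written as g(u) >= 0 for SLSQP: x(t) <= 1.5."""
    return 1.5 - simulate(u, x_hat, d_hat)

res = minimize(phi_eco, x0=np.zeros(N), method="SLSQP",
               bounds=[(0.0, 1.0)] * N,
               constraints=[{"type": "ineq", "fun": path_constraint}])
u_opt = res.x   # only u_opt[0] would be applied before the horizon is shifted by Δt
```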
After formulating the single-layer DRTO problem, two parameters remain to be set:

(1) Sampling time Δt: The choice of the sampling time of the controller typically depends on the time-scales of the process system and on the frequency content of the disturbances.
(2) Optimization horizon [t_j, t_j^f]: In practice, the optimization horizon is often chosen such that it is significantly longer than the process time constants in order to improve closed-loop stability [4].

In order to avoid computational delays and a deterioration of the control performance, the optimization problem must be solved in a time period d_j which is smaller than the sampling time period Δt. However, models of chemical processes are often complex and nonlinear, leading to long solution times of the optimization problem. Furthermore, long optimization horizons and many degrees of freedom increase the computational load. Thus, for large-scale processes, one is faced with the trade-off between either providing suboptimal control profiles to the process or calculating a rigorous solution to the optimal control problem (P1) with a certain time delay [17,18]. In the following, we will present a two-layer approach where the computational load problem is addressed by solving the rigorous optimization problem on a slower time-scale and by tracking the rigorous solution trajectory at a faster time-scale with a computationally inexpensive neighboring-extremal (model-predictive) controller.

3. Decomposition of DRTO problem on two layers

The design of multi-layer architectures is related to different objectives, which can be ranked in the following way:

(1) First of all, the objective of process operation is to maximize profit. Therefore, process operation is determined by economic decision criteria, which are incorporated in the objective function of the upper-layer optimization. Process uncertainty should be accounted for by updating the optimization model by means of measurements at a low sampling frequency.
(2) Furthermore, the aim of process control is to achieve stable operation under the influence of process uncertainties, which should be guaranteed by the lower-layer controllers. These should operate at a high frequency and should be able to respond without time delay to counteract disturbances.

The structural decomposition proposed in this work (Fig. 2) is based on the two-layer architecture introduced by Helbig et al. [8] for the integration of dynamic real-time optimization (DRTO) and tracking MPC. The criteria for economic optimization at the upper layer are provided by a decision maker. The operational objective could, for example, consist of a load change or a product specification change to be performed at minimal cost, which has to be translated into the formulation of the objective function and the constraints. On the lower layer, the tracking controller receives the trajectory computed by the upper layer and calculates the set-point trajectories for the base-layer control system. The current state values are required to provide initial conditions for the DRTO and the tracking controller, and they have to be estimated by means of measurements from the process. Thus, a time-scale separation of the measurements from the process must be performed, as the upper layer is operating at a lower sampling rate and should only take into account the slow trend in the measurements.
Fig. 2. Decomposition of the control problem on two levels: a decision maker provides the objective and constraints to the upper-layer DRTO (optimal trajectory design), which passes the trajectories x_i(t), y_i(t), u_i(t) to the lower-layer tracking controller; state estimation and time-scale separation provide x̂_i, d̂_i(t) to the upper layer and x̂_j, d̂_j(t) to the lower layer; the controls u_j(t) are applied to the process including the base-layer control system.
The integration between the sampling instants of the rigorous optimization and of the lower-layer controller is illustrated in Fig. 3. The upper layer solves a rigorous optimization problem at the time instants t_i, t_{i+1}, t_{i+2}, . . .. The solid lines are the trajectories corresponding to these rigorous solutions, while the dotted lines represent the trajectories computed by the tracking controller. In the example of Fig. 3, first a rigorous solution obtained from the upper layer is implemented at t_j. A new rigorous optimization is triggered at t_{i+1}, and the trajectory u(t_{i+1}) is passed to the lower-layer controller after the problem has converged, i.e. after some time delay d_i, which is an integer multiple of the sampling period (say 2Δt in the example of Fig. 3). During the computational time delay at the upper layer, the lower-layer controller tracks the trajectory with the smaller sampling time interval Δt. Negligible computational time is required because a neighboring-extremal update is applied. As the process has evolved during the computational time delay d_i, an update of the rigorous control trajectory u(t_{i+1}) is implemented after convergence has been obtained (at t_{j+3} in the example of Fig. 3). Note that the upper-layer rigorous optimization does not have a fixed sampling period, but is triggered based on the uncertainty in the process, as will be explained in Section 4.4. The formulation of the DRTO problem on two layers is presented next.
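The timing of Fig. 3 amounts to simple bookkeeping: the estimated solve time is rounded up to an integer number of lower-layer sampling intervals, and the triggered trajectory is only applied at the corresponding later instant. The following minimal sketch illustrates this; the function name and the numbers are hypothetical.

```python
import math

def availability_index(j_trigger: int, solve_time_estimate: float, dt: float) -> int:
    """Return the lower-layer sampling index at which a rigorous trajectory
    triggered at index j_trigger becomes available, assuming the computational
    delay d_i is rounded up to an integer multiple of the sampling period dt."""
    k_delay = math.ceil(solve_time_estimate / dt)   # e.g. 2 intervals in Fig. 3
    return j_trigger + k_delay

# Example: trigger at t_j, estimated solve time 9 min, sampling time 5 min
# -> the new trajectory u(t_{i+1}) is implemented two intervals later.
print(availability_index(j_trigger=0, solve_time_estimate=9.0, dt=5.0))
```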
Fig. 3. Illustration of the sampling times on the upper and lower layers: a rigorous optimization triggered at t_{i+1} becomes available only after the computational delay d_i, while the lower layer updates the controls u(t_{j+1}), u(t_{j+2}), . . . at every sampling instant.
3.1. Slow time-scale

At the upper layer (UL) in Fig. 2, the economic optimization problem is defined as follows:

min_{u_i(t)}  Φ_UL(x_i(t), u_i(t), t_i, t_i^f),    (P2)

s.t.  ẋ_i(t) = f(x_i(t), u_i(t), d̂_i(t)),  x_i(t_i) = x̂_i,    (7)
      y_i(t) = g(x_i(t), u_i(t), d̂_i(t)),    (8)
      0 ≥ h(x_i(t), y_i(t), u_i(t)),    (9)
      0 ≥ e(x_i(t_i^f)),    (10)
      t ∈ [t_i, t_i^f],
      t_i := t_{i-1} + Δt_i,    (11)
      t_i^f := t_{i-1}^f + Δt_i,    (12)
      i = 0, 1, . . ., I.

The upper-layer DRTO optimizes an economic objective with Φ_UL = Φ_eco. The sampling time of the upper layer is Δt_i and the optimization time horizon is [t_i, t_i^f]. The sampling time of the upper layer Δt_i should fulfill the condition

d_i ≤ Δt_i,    (13)

where d_i is the estimated computational time required for solving the optimization problems on the upper layer. During the computational time, the control variables corresponding to the time delay are fixed to their previously computed values to ensure closed-loop stability [19]. Thus, the maximum estimated computational time is chosen for fixing the optimization variables, because the computational delay of the optimization problem is not known in advance.

3.2. Fast time-scale

The lower-layer controller tracks the optimal trajectory under the influence of high-frequency disturbances and is executed at the sampling times t_j of the process. Besides controllers not relying on rigorous models, such as PID controllers, different configurations of model-based controllers are conceivable:

(1) Set-point tracking controllers, such as NMPC or linear (time-variant) MPC [12]. The problem formulation of these controllers is similar to (P1); however, a quadratic objective function is used to minimize the deviation from the set-points, which may read as

Φ_LL = ∫_{t_j}^{t_j^f} [ (y_j − y_i^ref)^T Q (y_j − y_i^ref) + (u_j − u_i^ref)^T R (u_j − u_i^ref) + Δu_j^T S Δu_j ] dt.    (14)

(2) Neighboring-extremal (NE) controllers, which compute a correction of the economically optimal trajectory under uncertainties, such that the update of the objective function is

Φ_LL = Φ_UL + ΔΦ_UL.    (15)

NE-control was originally introduced in the context of indirect optimal control methods, e.g. [20,21], and was recently extended by Gros et al. [22] for tracking singular arcs. In the indirect methods, a neighboring-optimal feedback law is applied to compute an almost instantaneous correction of the control trajectory for small deviations in the state vector. A neighboring-extremal approach based on direct methods for the optimization problem with discretized control variables was also applied in our previous work to compute fast updates in a single-layer DRTO scheme [18].

The NE-controller does not follow any set-points, but computes a linear update of the optimal trajectory as a function of the process uncertainties. The objective functions used on the upper layer in the DRTO problem and in the NE-controller are consistent, whereas this is not the case for an MPC tracking controller. The loss of performance due to this inconsistency has been shown by Rawlings and Amrit [3] for the two-layer RTO architecture. Similar problems may also arise for the DRTO architecture if an MPC controller is used for tracking at the lower layer. If a new persistent disturbance arises during the sampling time of the upper layer, the MPC controller still tries to enforce the set-point (trajectory) given by the previous solution of the DRTO problem, whereas the economic optimum might have changed under the influence of the uncertainties.

3.3. Time-scale separation

A time-scale separation of the process uncertainties is performed according to the scheme of the two-layer control architecture (Fig. 2), as the uncertainty in chemical processes may have different types of dynamics. For instance, the disturbances (measured or estimated) can fluctuate at a high frequency, or they can have sudden jumps, such as a change in flow rate caused by a defective valve or by a pump failure. On the other hand, slowly-varying uncertainties also affect the process. These could, for example, be due to variations at low frequency or a drift in reaction kinetics or heat transfer resistances which are not accounted for in the model. It is assumed that the model is structurally correct and that all uncertainty is captured in the parameters and disturbances d̂_j(t). Thus, the uncertainty considered in this work is parametric, as opposed to the stochastic uncertainty considered in robust optimization. Furthermore, the unmeasurable model states and disturbances are assumed to be observable from the measurements.

3.3.1. Time-scale separation of disturbances

We assume in the following that the process disturbances can be decomposed into a slowly-varying part (or drift) and a high-frequency part, i.e.,

d̂_j(t) = d̂_i(t) + Δd̂_j(t).    (16)

The slowly-varying part d̂_i(t) can be identified by applying a low-pass filter to the (measured or estimated) disturbance signal d̂_j(t) [8], which could for instance be based on a wavelet decomposition of the disturbance signal. Alternatively, a trend detection mechanism [23] can be applied to the slowly-varying, denoised part of the disturbances d̂_i(t) at the upper layer of the architecture to detect a constant, linear or higher-order polynomial trend in the multi-variable disturbance signal. In contrast to the filter approach, this strategy provides a functional representation of the disturbance which can be used for the prediction of its future behavior on the optimization horizon.

3.3.2. Integration with state estimation

There are different possibilities to perform the time-scale separation of the disturbances integrated with estimation, as shown in Fig. 4. In Fig. 4(a), state estimation is performed first. A time-scale separation is executed by applying a low-pass filter to the estimated states and disturbances of the past sampling times. As state estimation itself acts as a filter, another possibility is shown in Fig. 4(b), where two different state estimators are executed on different time-scales. For instance, a moving horizon estimator could be used at the upper layer at a low sampling rate, considering n_k measurements over a time horizon [t_{j−n_k}, t_j] in the past, while an (extended) Kalman filter could be used at the lower layer employing the measurement of the last sampling time only.
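Before discussing the estimation variants of Fig. 4, the following sketch illustrates the decomposition (16) concretely: a sampled disturbance signal is split into a slowly-varying part and a high-frequency residual with the exponential moving-average filter used later in Eq. (18). The filter constant and the synthetic signal are illustrative assumptions.

```python
import numpy as np

def split_time_scales(d_meas: np.ndarray, alpha: float = 0.1):
    """Decompose a sampled disturbance signal into a slowly-varying part d_i
    (exponential moving average, cf. Eq. (18)) and a high-frequency residual
    Delta d_j = d_j - d_i, following Eq. (16). alpha is a tuning choice."""
    d_slow = np.empty_like(d_meas)
    d_slow[0] = d_meas[0]                      # filtered value equals the first sample
    for j in range(1, len(d_meas)):
        d_slow[j] = alpha * d_meas[j] + (1.0 - alpha) * d_slow[j - 1]
    return d_slow, d_meas - d_slow

# Illustrative signal: a slow drift plus high-frequency fluctuations.
t = np.arange(200)
d_meas = 0.5 + 0.002 * t + 0.05 * np.random.randn(200)
d_slow, d_fast = split_time_scales(d_meas, alpha=0.1)
```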
Fig. 4. Integration of time-scale separation with state estimation: (a) state estimation and subsequent separation of time-scales and (b) state estimation on different time-scales.

The design of this variant is not straightforward as it involves several design parameters, such as the choice of the estimator and the setting of the sampling times and horizon lengths on both layers. However, it would allow the use of different models at each layer, such that a more detailed model could be used at the upper layer, as the estimation problem is solved at a lower sampling rate. Since the integration and decomposition of the estimation problem is not considered in this work, we will assume subsequently that the state variables and disturbances are estimated with negligible estimation error, which corresponds to the assumption of measurable states and disturbances. The further investigation of the most appropriate integration strategies between state estimation and time-scale separation is not within the scope of this work, which focuses on the decomposition of the DRTO problem.

4. Two-layer optimal control algorithm

The solution of the dynamic optimization problem on the upper layer is integrated with the neighboring-extremal controller according to the algorithm which is presented in detail below.

4.1. Time-scale separation of disturbances and states

The time-scale separation is performed on the estimated disturbances d̂_j(t) and states x̂_j sampled at short sampling time intervals Δt, as shown in Fig. 4(a). The time-dependent disturbances may be parameterized as

d̂_j(t) = Σ_{k=1}^{n_k} d̄_{j,k} φ_{j,k}^{(m)}(t),    (17)

where k is the index of the discretization interval and m is the order of the spline basis functions φ_{j,k}^{(m)}. If we parameterize the disturbances with one piecewise-constant basis function per sampling interval Δt, we set d̄_j = d̄_{j,k} with k = 1. A low-pass filter is applied to the time series of past states and uncertain parameters to remove the noise. An exponential moving average filter is chosen to compute the filtered signals from

d̄_j^f = α d̄_j + (1 − α) d̄_{j−1}^f,    (18)

where α is the tuning parameter of the filter. At the first sampling instant j = 1, d̄_j^f = d̄_j. By applying the above formula recursively for n_k time instants, the exponential moving average filter gives priority to recent values and puts less weight on old values. This weighting allows a fast detection of a change in the slowly-varying part of the disturbances.

As outlined in Section 3.3, the estimated process disturbances are subsequently decomposed into a slowly-varying and a high-frequency part

d̄_j = d̄_i + Δd̄_j,    (19)

where d̄_i = d̄_j^f and Δd̄_j = d̄_j − d̄_j^f.

The initial conditions of the estimated states can also be viewed as parametric uncertainties to reflect imperfections of the state estimator, as the state values are updated by the state estimator at each sampling time. Furthermore, uncertainty in the states arises at the upper layer due to the computational delay of the rigorous solution. The system's state prediction at time instant t_i, when the rigorous optimization is triggered, is different from the state after the delay due to process uncertainties arising during the computation of the optimal solution. The values of the initial conditions required at the time instants t_i are filtered similarly to the disturbances and are decomposed as follows:

x̂_j = x̂_i + Δx̂_j.    (20)

Finally, the uncertainties on the upper layer d̄_i and the initial conditions x̂_i are gathered in the vector of parameters p_i defined as

p_i = [d̄_i^T, x̂_i^T]^T ∈ R^{n_p}.    (21)

Similarly, the uncertainties on the lower layer are gathered in the vector of parameters p_j, which is defined as

p_j = [d̄_j^T, x̂_j^T]^T ∈ R^{n_p}.    (22)

4.2. Solution of the upper-layer optimization problem

The infinite-dimensional optimal control problem (P2) is solved by means of an adaptive shooting method [24], which is implemented in the software tool DyOS maintained at RWTH Aachen University. The continuous control variables in optimization problem (P2) are discretized using B-splines of order m; typically, piecewise-constant (m = 1) or piecewise-linear (m = 2) approximations are chosen. The discretization and the formulation of the NLP for each control variable u_l(t) read as

u_l(t) = Σ_{i=1}^{n_i} ū_{l,i} φ_i^{(m)}(t) = ū_l^T φ^{(m)}(t),    (23)

where i is the index of the discretization interval and m is the order of the splines. The path constraints of the optimal control problem (P2) are relaxed by a pointwise formulation on the discretized grid.

Choosing the discretized controls z := [ū_1^T, . . ., ū_{n_u}^T]^T as the n_z optimization variables, the dynamic optimization problem (P2) can be transcribed into the nonlinear program

min_z  Φ(z_i, p_i),    (NLP)
s.t.  g(z_i, p_i) ≥ 0,    (24)
      z_{d,i} = z_{d,i−1}.    (25)

g(·) : R^{n_z} × R^{n_p} → R^{n_g} represents the discretized constraints of the optimal control problem (P2). The variables z_{d,i} constitute the part of the control vector z_i which is fixed during the optimization, as these control variables are not available due to the delay in the computation time of the optimization problem. Fixing these variables to their previous values is necessary to ensure the closed-loop stability of the upper-layer DRTO [19]. The nonlinear program (NLP) is solved subsequently using a sequential quadratic programming (SQP) strategy. The objective function, the constraints and the gradients are computed by
simultaneous integration of the model Eq. (7) and the associated sensitivity equation system by means of a tailored integration algorithm based on a semi-explicit Euler method with extrapolation, which has favorable properties in the case of non-smooth right-hand sides typical for optimal control problems [25]. In the following, the reference solution is defined as the rigorous optimal solution of (NLP) computed at a low sampling rate for a set of uncertain parameters p_i estimated at sampling time t_i. Reference sensitivities of first order, Φ_z(p_i), g_z(p_i), g_p(p_i), and of second order, L_zz(p_i), L_zp(p_i), are computed using a fast computational method based on composite adjoints developed by Hannemann and Marquardt [26,27]. These first- and second-order sensitivities are required for computing the neighboring-extremal solution of the lower-layer control problem, as shown in Section 4.3.

4.3. Solution of the lower-layer problem

Let us consider z_0, λ_0 as the optimal solution of problem (NLP) corresponding to the reference parameter values p_0. At the sampling times t_i, the reference trajectory is updated by the upper layer and the reference values of the uncertain parameters and optimization variables are set to p_0 = p_i and z_0 = z_i, respectively. The Lagrange function of (NLP) is defined as

L(z_0, p_0, λ_0) = Φ(z_0, p_0) − λ_0^T g(z_0, p_0),    (26)
where λ_0 ∈ R^{n_g} is the vector of Lagrange multipliers. The necessary conditions of optimality (NCO) for this problem read as

L_z(z_0, p_0, λ_0) := Φ_z(z_0, p_0) − λ_0^T g_z(z_0, p_0) = 0,    (27a)
g_m^a(z_0, p_0) = 0,  λ_{m,0}^a > 0,  m = 1, . . ., n_g^a ∈ G^a,    (27b)
g_n^{ina}(z_0, p_0) > 0,  λ_{n,0}^{ina} = 0,  n = 1, . . ., n_g − n_g^a ∈ G^{ina}.    (27c)

G^a is the set of active constraints and the set G^{ina} comprises all inactive constraints at the nominal solution p_0. Note that the active and inactive sets are determined from the complementarity condition using, for example, an active-set strategy. New estimates of the uncertainties p_j are obtained from some estimator (not considered in this work) at each sampling time t_j on the lower layer of the control architecture. The deviation of the estimated states and disturbances from their reference values results in a parametric uncertainty for the optimization problem (NLP). In the vicinity of the optimal reference solution, a linear update of the optimal solution can be computed as follows. The differentiation of the NCO at the optimal solution provides parametric sensitivities dz_0/dp_0, which can be used in a Taylor expansion to compute a linear update of the optimal solution [28]. However, this method assumes that the set of active constraints G^a remains constant. Alternatively, the update of the optimal control trajectory can be computed by solving a quadratic problem, while directly determining the new set of active constraints [14,29,30]:

min_{Δz_j}  (1/2) Δz_j^T L_zz(p_0) Δz_j + Δp_j^T L_zp^T(p_0) Δz_j + Φ_z^T(p_0) Δz_j,    (QP)
s.t.  g(p_0) + g_z(p_0) Δz_j + g_p(p_0) Δp_j ≥ 0.    (28)
The corrections Δz_j = z_j − z_0 follow from the deviations of the uncertain parameters Δp_j = p_j − p_0 after solving (QP). Note that the linearization of the nonlinear inequality constraints (24) leading to Eq. (28) may prevent the determination of the correct active set and Lagrange multipliers [30]. This only happens if the constraints show high curvature at the point of linearization or if the deviation of the parameters p_j from the nominal value p_0 is relatively large. In that case, the correct active set may be determined if additional iterations are triggered with updated Lagrange multipliers and sensitivities, similarly to the algorithm presented in Würth et al. [18].

The updated control values z_j are implemented in the process at the current sampling time. If the perturbations Δp_j are small, the updated control parameters z_j are close to the correct optimal solution. However, if these perturbations are large and the process shows strong nonlinearities, the error of the optimal objective value increases with the size of the perturbation. In that case, the linear update is not sufficient to provide an updated optimal solution of appropriate quality. Hence, if the uncertainties and the approximation error are large, a new rigorous solution should be computed.
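The following sketch illustrates the neighboring-extremal update of (QP) and (28) for a small placeholder problem: given the reference sensitivities delivered by the upper layer and the current parameter deviation Δp_j, the correction Δz_j is obtained from a small inequality-constrained QP. A general-purpose SLSQP call stands in for a dedicated QP solver, and all numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder sensitivity data for a two-dimensional control vector z and a
# single inequality constraint, as they would be delivered by the upper layer.
L_zz  = np.array([[2.0, 0.2], [0.2, 1.0]])   # Hessian of the Lagrangian, L_zz(p_0)
L_zp  = np.array([[0.5], [0.1]])             # mixed second-order sensitivity, L_zp(p_0)
phi_z = np.array([0.3, -0.2])                # objective gradient at the reference, Phi_z(p_0)
g0    = np.array([0.4])                      # constraint value g(p_0)
g_z   = np.array([[1.0, -1.0]])              # constraint Jacobian g_z(p_0)
g_p   = np.array([[0.2]])                    # constraint sensitivity g_p(p_0)
dp    = np.array([0.3])                      # current deviation Delta p_j = p_j - p_0

def qp_objective(dz):
    # 1/2 dz'L_zz dz + dp'L_zp' dz + phi_z' dz   (objective of (QP))
    return 0.5 * dz @ L_zz @ dz + (L_zp @ dp) @ dz + phi_z @ dz

def qp_constraint(dz):
    # g(p_0) + g_z dz + g_p dp >= 0               (constraint (28))
    return g0 + g_z @ dz + g_p @ dp

res = minimize(qp_objective, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "ineq", "fun": qp_constraint}])
dz_j = res.x   # correction Delta z_j; the update z_j = z_0 + dz_j is then implemented
```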
4.4. Triggering upper-layer optimization

The rigorous optimization on the upper layer runs on a slow time-scale and optimizes the process by taking into account the slow trend in the dynamics of the uncertainties. A criterion is required to detect persistent disturbances acting on the process and to detect whether the optimal trajectory is sensitive to these disturbances. Since the computation of the upper-layer optimization depends on these uncertainties, it does not need to be performed at regular sampling intervals but is triggered dynamically with a sampling time of Δt_i = k_i Δt, where k_i is an integer and Δt_i a multiple of the sampling time at the lower layer. The triggering mechanisms are introduced in this section.

4.4.1. Trigger based on filtered disturbance values

A dynamic optimization is triggered based on an optimality criterion analyzing the fulfilment of the NCO under the current uncertainties. The trigger evaluates a measure of the error in the sensitivity of the Lagrangian (ε_opt) and of the nonlinear constraint infeasibility (ε_infs) of the updated trajectories with respect to the uncertainty according to

ε_opt = ||L_z(z_{i−1}, p_i, λ_{i−1})||_∞ / (||λ_{i−1}||_2 + 1),    ε_infs = ||g(z_{i−1}, p_i)||_∞ / (||z_{i−1}||_2 + 1),    (29)

where the parameters p_i correspond to the filtered and parameterized values of the disturbances d̄_i and states x̂_i at time t_i, and z_{i−1} to the solution of the last rigorous optimization problem. Hence, if a change in the uncertainties occurs which also has an impact on feasibility or optimality, a new rigorous optimization is triggered. The trigger based on optimality criteria can be combined with a trend detection mechanism outlined below, which monitors changes in the disturbances and improves the prediction quality of the optimization triggered on the slow sampling scale.
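A minimal sketch of evaluating the trigger measures of Eq. (29) is given below; the residuals, multipliers and tolerances are placeholder values, since the paper does not report the thresholds used.

```python
import numpy as np

def evaluate_trigger(L_z, g, lam, z, tol_opt=1e-2, tol_infs=1e-2):
    """Evaluate the optimality and infeasibility measures of Eq. (29) for the
    previous rigorous solution (z_{i-1}, lambda_{i-1}) at the current filtered
    parameters p_i, and decide whether a new rigorous optimization is triggered.
    The tolerances are illustrative placeholders."""
    eps_opt  = np.linalg.norm(L_z, np.inf) / (np.linalg.norm(lam, 2) + 1.0)
    eps_infs = np.linalg.norm(g,  np.inf) / (np.linalg.norm(z,   2) + 1.0)
    return (eps_opt > tol_opt) or (eps_infs > tol_infs), eps_opt, eps_infs

# Example with placeholder values for the residuals and the previous solution.
trigger, e_opt, e_infs = evaluate_trigger(
    L_z=np.array([0.005, -0.002]), g=np.array([0.08, 0.3]),
    lam=np.array([0.4]), z=np.array([1.0, 0.5]))
```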
4.4.2. Incorporation of trend detection

The trend detection developed by Flehmig and Marquardt [23] can be applied on the upper layer to detect a trend in the slowly-varying part of the disturbances from past measurements. Hence, this strategy makes it possible to incorporate the trend as a disturbance predictor on the optimization horizon in the rigorous optimization on the upper layer. Depending on the slowly-varying part of the uncertainties d̂_i(t), two cases may be distinguished:
(1) If the trend is constant over time, it is expected that the upperlayer DRTO solution trajectory does not change. The lower-layer controller takes control action due to high-frequency disturbances only, and should not deviate substantially from the reference trajectory.
(2) If a new trend is detected in dˆ i (t), a rigorous optimization is triggered provided that the trigger (29) signals sufficient sensitivity of the Lagrange function and the constraints to the new trend.
The detected (e.g. constant, linear or higher-order polynomial) trend is translated into a time-dependent disturbance signal d_trend(t), which can be incorporated as a future prediction of the uncertainties into the optimization problem by setting d̂_i(t) = d_trend(t). Ideally, this prediction should hold on the entire optimization horizon, but its validity might be restricted to a smaller time period depending on the reliability of the trend prediction.

4.5. Summary of the control algorithm

The developments are summarized in the algorithm presented below, where N_j is the number of sampling instants; a schematic sketch of the resulting loop follows the listing:

(1) Set the counters i := 1, j := 1, and trigger = 0.
(2) For j = 1 to N_j do
   (a) Upper-layer optimization: If j = 1 or trigger = 1,
      (i) Solve (NLP).
      (ii) Compute the sensitivities of first order, Φ_z(p_i), g_z(p_i), g_p(p_i), and of second order, L_zz(p_i), L_zp(p_i).
      (iii) Pass the solution trajectory z_i and the sensitivities of first order, Φ_z(p_i), g_z(p_i), g_p(p_i), and of second order, L_zz(p_i), L_zp(p_i), to the lower layer.
      (iv) i := i + 1.
   (b) Lower-layer optimization:
      (i) If j = 1 or trigger = 1, obtain the new reference solution and sensitivities and set the reference values p_0 = p_i, z_0 = z_i and trigger = 0.
      (ii) At t_j, obtain measurements from the process and/or state and disturbance estimates x̂_j, d̂_j from the state estimator.
      (iii) Filter the disturbances and state variables and perform the decomposition d̄_j = d̄_i + Δd̄_j and x̂_j = x̂_i + Δx̂_j. Set p_i = [x̂_i^T, d̄_i^T]^T and p_j = [x̂_j^T, d̄_j^T]^T.
      (iv) Solve (QP) to compute Δz_j with Δp_j = p_j − p_0. Compute z_j = z_0 + Δz_j.
      (v) Implement the controls u_j at sampling time t_j on the process.
   (c) Apply the trigger criteria: If ε_opt or ε_infs exceeds its tolerance, set trigger = 1.
   (d) t_{j+1} = t_j + Δt.
end
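The following schematic sketch mirrors the loop above on a toy scalar problem. The "rigorous" optimization, its sensitivity, the neighboring-extremal correction and the trigger are drastically simplified stand-ins for the methods of Sections 4.2–4.4, and all names and numbers are assumptions rather than the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_j, delay_steps, alpha = 30, 2, 0.2    # sampling steps, delay d_i in multiples of Δt, filter

def rigorous_optimum(p):
    """Toy upper-layer 'rigorous' solution of min_z (z - p)^2: returns z_ref and dz/dp."""
    return p, 1.0

def ne_update(z_ref, dz_dp, p_ref, p_now):
    """Toy first-order neighboring-extremal correction replacing the QP of Section 4.3."""
    return z_ref + dz_dp * (p_now - p_ref)

# measured disturbance: a slow drift plus high-frequency noise (plays the role of d_hat_j)
d_meas = 0.5 + 0.01 * np.arange(N_j) + 0.05 * rng.standard_normal(N_j)

d_slow = d_meas[0]
z_ref, dz_dp = rigorous_optimum(d_slow)   # initial reference solution (j = 1)
p_ref = d_slow
pending = None                            # running upper-layer job: (ready_step, z, dz_dp, p)

for j in range(N_j):
    d_slow = alpha * d_meas[j] + (1 - alpha) * d_slow          # slow trend, cf. Eq. (18)

    # upper layer: collect a finished rigorous optimization after its delay
    if pending is not None and j >= pending[0]:
        _, z_ref, dz_dp, p_ref = pending
        pending = None
    # trigger a new rigorous optimization (crude stand-in for the criteria of Eq. (29))
    if pending is None and abs(d_slow - p_ref) > 0.05:
        z_new, s_new = rigorous_optimum(d_slow)
        pending = (j + delay_steps, z_new, s_new, d_slow)

    # lower layer: fast neighboring-extremal correction at every sampling time
    u_j = ne_update(z_ref, dz_dp, p_ref, d_meas[j])
    # u_j would be passed to the base-layer control system here
```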
The two-layer strategy is implemented in a modularized software architecture consisting of different components, which perform the simulation of the plant behavior using a plant replacement model, the rigorous optimization, the tracking NE-control and the data exchange over an OPC server. The OPC server also serves as an interface to the distributed control system (DCS) of the process, which replaces the simulated process if the control system is put on-line.

5. Illustrative case study

The case study considered in this work is an industrial continuous polymerization process introduced by Bayer AG as a test case for the research project INCOOP [12]. Typically, a continuous polymerization plant produces several polymer grades sequentially, requiring frequent changes in operating point from one product to another. This motivates the use of nonlinear dynamic optimization technology for these processes. The flowsheet of the process is shown in Fig. 5. The exothermic copolymerization involving multiple reactions takes place in a continuously-stirred tank reactor equipped with an evaporative cooling system. The reactor is operated at an open-loop unstable operating point corresponding to a medium level of conversion and stabilized by a temperature controller. It is followed by a separation unit for separating the polymer from unreacted monomer and solvent. Unreacted monomer and solvent are recycled back to the reactor via a recycle tank, while the polymer melt is sent to downstream processing and blending units. The reactor hold-up is maintained at a desired set-point using a proportional controller that manipulates the reactor outlet flow rate. The validated, rigorous, dynamic simulation model is implemented in gPROMS and comprises approximately 200 states and 2000 algebraic variables [12]. For confidentiality reasons, no more details about the process and the model equations can be given.

5.1. Grade transition problem

The operational task is to perform a change from polymer grade A of molecular weight MW_A = 0.690 ± 0.013 to grade B of molecular weight MW_B = 0.974 ± 0.025 in minimum time. The parameters affecting the final product properties are the polymer viscosity, which is related to the molecular weight MW, and the polymer content at the reactor outlet, which is related to the conversion Θ in the reactor. The economic objective function minimizes the deviation from the new product specifications and at the same time maximizes the product flow rate from the splitter F_prod.
Fig. 5. Flowsheet of the Bayer polymerization process example (monomer feed u1, catalyst feed u2, recycle from the buffer tank u3; reactor with evaporative cooling, temperature and level control, polymer separation and downstream processing).
The integral of the product flow rate is scaled with time, and appropriate weights W_1 and W_2 are chosen such that the different terms have similar sizes. The objective is given as

min_{F_M,in(t), F_M,R(t), F_Cat,in(t)}  Φ(t_f) = ∫_{t_j}^{t_j^f} [ −W_1 F_prod/(t + 1) + W_2 ((MW − MW_B)^2 + (Θ − Θ_B)^2) ] dt.    (30)
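As a small illustration, the integrand of Eq. (30), in the reconstructed form with the production term scaled by 1/(t + 1), can be evaluated numerically on sampled trajectories as sketched below; the weights and trajectories are placeholders, not data from the case study.

```python
import numpy as np

def grade_transition_cost(t, F_prod, MW, Theta, MW_B, Theta_B, W1, W2):
    """Numerically evaluate the objective (30) on sampled trajectories:
    a time-scaled production term plus the squared deviation from the
    grade-B specifications. All inputs are placeholder arrays/values."""
    integrand = -W1 * F_prod / (t + 1.0) + W2 * ((MW - MW_B) ** 2 + (Theta - Theta_B) ** 2)
    return np.trapz(integrand, t)

# Illustrative trajectories over a 5 h horizon (values are not from the paper).
t = np.linspace(0.0, 5.0, 61)
cost = grade_transition_cost(t, F_prod=np.full_like(t, 1.0),
                             MW=np.linspace(0.69, 0.974, t.size),
                             Theta=np.full_like(t, 1.0),
                             MW_B=0.974, Theta_B=1.0, W1=1.0, W2=10.0)
```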
During the grade change, operational path constraints are enforced on the reactor outlet flow rate F_R,out, the buffer tank holdup V_RT, the solid content in the reactor Θ, the polymer molecular weight MW, the reactor temperature T_R and the solvent concentration C_S. The control variables available for optimization are the monomer flow rate F_M,in, the catalyst flow rate F_Cat,in and the recycle rate F_M,R. Accordingly, the path constraints are formulated as:

F_R,out^L ≤ F_R,out ≤ F_R,out^U,    (31)
V_RT^L ≤ V_RT ≤ V_RT^U,    (32)
Θ^L ≤ Θ ≤ Θ^U,    (33)
MW^L ≤ MW ≤ MW^U,    (34)
T_R^L ≤ T_R ≤ T_R^U,    (35)
C_S^L ≤ C_S ≤ C_S^U,    (36)
F_M,in^L ≤ F_M,in ≤ F_M,in^U,    (37)
F_M,R^L ≤ F_M,R ≤ F_M,R^U,    (38)
F_Cat,in^L ≤ F_Cat,in ≤ F_Cat,in^U.    (39)
During the operation, the flow rate of monomer 2 is fed in a fixed ratio relative to the flow rate of monomer 1. However, disturbances in the inlet flow of monomer 2 lead to fluctuations in this ratio, as shown in Fig. 6. It is assumed that the disturbed feed flow rate of monomer 2 can be measured by a flow sensor.

Fig. 6. Relative feed rate of monomer 2 to feed rate of monomer 1 (measured and filtered disturbance signal over time).

5.2. Results and discussion

5.2.1. Comparison of control strategies

Different control strategies are compared in the simulation study.

(1) First, a single-layer DRTO system is applied as presented in Section 2, where the computational time delay is ignored. This strategy could ideally be employed if the rigorous solution of the DRTO problem could be obtained instantaneously.
(2) The second strategy solves a rigorous optimization problem, but takes the solution delay into account. Obviously, this setting is more realistic.
(3) The third strategy uses a neighboring-extremal strategy without rigorous optimization, which is similar to the concept of real-time iterations [10], where only one iteration is performed per sampling interval. Note that in this strategy new first- and second-order sensitivities are computed at each sampling time based on the current process states and uncertainties.
(4) The two-layer control strategy as presented in the algorithm in Section 4.5, including the triggering mechanism, is finally applied.

The sampling time Δt of the process is set to 5 min. The prediction horizon for dynamic optimization is set to 5 h. For the solution of the rigorous optimization problem at the upper layer, the maximum computational delay is estimated as 20 min after performing some numerical experiments. A moving average filter is used to filter the states and uncertainties on the upper layer. The optimization on the upper layer accounts for trends in the disturbances as presented in Section 4.4.2. In this case study, only constant trends are identified and incorporated in the optimization, while an extension to higher-order trends is planned in the future.

5.2.2. Computational performance

The rigorous optimization takes 10–20 min, depending on the initial guess and the number of iterations required to reach convergence. The computational time required for the computation of the second-order sensitivities by the method of composite adjoints in strategies 3 and 4, which are necessary to evaluate the neighboring-extremal controller on the lower control layer, is around 2–4 min from a given initialization. These computational times refer to a standard PC (Intel dual-core 2.66 GHz processor and 3226 MB RAM). Hence, strategies 3 and 4 are computationally very attractive, as a first-order update of the optimal trajectory can be computed at each sampling time without time delay.

5.2.3. Control performance

Results of the on-line optimization of the manipulated variables and the product specifications are shown in Figs. 7–10 for the control strategies mentioned in Section 5.2.1. The profiles look similar for all strategies as long as the disturbances are small at the beginning of the transition. However, as the disturbances become larger, the control variables of strategies 3 and 4 exhibit larger control moves than the corresponding control trajectories of the rigorous optimization. These control moves are probably due to the first-order updates computed by the NE-controller, which reacts more strongly to minimize the deviation from the set-point compared to the converged solution of strategy 1. An improvement of performance seems possible with a modified tuning of the filter and the controller settings.

Table 1
Objective function and accumulated constraint violations.

            Sum of objective values    Sum of constraint
            in optimization            violations
Strategy 1  0.59                       1.58
Strategy 2  1.18                       16.2
Strategy 3  0.74                       1.97
Strategy 4  0.61                       2.08
Fig. 7. Single-layer DRTO architecture (strategy 1, ignoring computational delay). The panels show the flow rate of monomer 1, the recycle flow rate, the flow rate of catalyst, the molecular weight and the conversion over time.
Fig. 8. Rigorous optimization with delay (strategy 2). The panels show the flow rate of monomer 1, the recycle flow rate, the flow rate of catalyst, the molecular weight and the conversion over time.

In terms of performance, the objective function values recorded in Table 1 show that, due to the computational delay, the objective function and the constraint satisfaction deteriorate substantially for strategy 2 compared to strategy 1. Strategies 3 and 4 are more advantageous, since the neighboring-extremal updates are computed at each sampling time almost without delay. The comparison of strategies 3 and 4 shows that the objective function value is better for strategy 4, which is due to the fact that the rigorous optimization is triggered twice after the larger disturbance at t = 1500 s affects the process at t = 2000 s. Certain constraints, such as the upper bound on the level in the recycle tank, are temporarily violated for all control strategies due to the disturbances in the inlet flow rate. Since the reoptimization is only carried out every 1200 s in strategy 2, the constraint violations are highest for the rigorous optimization with delay. Despite only computing first-order updates of the optimal solution, the constraint violation observed in strategy 4 is similar to that of strategy 1, as the lower-layer controller does not have any computational delay.
Fig. 9. Neighboring-extremal updates without rigorous reoptimization (strategy 3). The panels show the flow rate of monomer 1, the recycle flow rate, the flow rate of catalyst, the molecular weight and the conversion over time.
Fig. 10. Two-layer DRTO architecture (strategy 4). The panels show the flow rate of monomer 1, the recycle flow rate, the flow rate of catalyst, the molecular weight and the conversion over time.
6. Conclusions

In this paper, a two-layer architecture is presented for the dynamic real-time optimization of chemical processes. This strategy is attractive for large-scale processes and complex models, which often lead to long computational times during optimization, whereas short sampling times are required by the process. For this kind of system, single-layer DRTO, where a rigorous optimization problem is solved at each sampling time, may not be applicable because of computational delays. On the other hand, performing only neighboring-extremal updates without iterating to convergence may also lead to a deterioration in performance if high uncertainty is present in the system. The two-layer architecture is advantageous, since it allows an economically optimal operation by taking slow trends in uncertainties into account on the slow time-scale and by providing fast responses to disturbances on the fast time-scale.
In this contribution, the focus was put on the development of strategies for time-scale separation and on the integration of the optimization problems to be solved on both layers of the architecture. The incorporation of higher-order trend predictions into the optimization at the upper layer is still an open issue. In addition, the incorporation of model uncertainties and the integration of the time-scale separation and state estimation tasks are subject to future work.

Acknowledgements

This work was supported in part by the European Commission (MRTN-CT-2004-512441) and by the German Research Foundation DFG (MA 1188/28-1).

References

[1] P. Tatjewski, Advanced control and on-line process optimization in multilayer structures, Annual Reviews in Control 32 (2008) 71–85.
[2] R. Scattolini, Architectures for distributed and hierarchical model predictive control – a review, Journal of Process Control 19 (2009) 723–731.
[3] J.B. Rawlings, R. Amrit, Optimizing process economic performance using model predictive control, in: L. Magni, D. Raimondo, F. Allgöwer (Eds.), Nonlinear Model-Predictive Control: Towards New Challenging Applications, Springer-Verlag, 2009, pp. 119–138.
[4] S. Qin, T.A. Badgwell, An overview of nonlinear model predictive control applications, in: F. Allgöwer, A. Zheng (Eds.), Nonlinear Model Predictive Control, Birkhäuser, 2000, pp. 369–392.
[5] S. Engell, Feedback control for optimal process operation, Journal of Process Control 17 (2007) 203–219.
[6] V. Adetola, D. DeHaan, M. Guay, Adaptive model predictive control for constrained nonlinear systems, Systems and Control Letters 58 (5) (2009) 320–326.
[7] D. Odloak, Robust integration of RTO and MPC, in: Proceedings of the 10th International Symposium on Process Systems Engineering, Salvador, Brazil, 2009.
[8] A. Helbig, O. Abel, W. Marquardt, Structural concepts for optimization based control of transient processes, in: F. Allgöwer, A. Zheng (Eds.), Nonlinear Model Predictive Control, Volume 26 of Progress in Systems and Control Theory, Birkhäuser, Basel, 2000, pp. 295–311.
[9] A.C. Zanin, M.T. Gouvea, D. Odloak, Industrial implementation of a real-time optimization strategy for maximizing production of LPG in a FCC unit, Computers and Chemical Engineering 24 (2000) 525–531.
[10] M. Diehl, H. Bock, J. Schlöder, R. Findeisen, Z. Nagy, F. Allgöwer, Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations, Journal of Process Control 12 (4) (2002) 577–585.
[11] V. Zavala, L. Biegler, The advanced-step NMPC controller: optimality, stability and robustness, Automatica 45 (1) (2009) 86–93.
[12] G. Dünnebier, D. van Hessem, J. Kadam, K.-U. Klatt, M. Schlegel, Optimization and control of polymerization processes, Chemical Engineering and Technology 28 (5) (2005) 575–580.
[13] T. Tosukhowong, J.H. Lee, Real-time economic optimization for an integrated plant via a dynamic optimization scheme, in: Proceedings of the 2004 American Control Conference, Boston, Massachusetts, June 2004.
[14] J. Kadam, W. Marquardt, Sensitivity-based solution updates in closed-loop dynamic optimization, in: Proceedings of the IFAC Symposium "Dynamics and Control of Process Systems" – DYCOPS 7, Cambridge, USA, July 2004.
[15] P. Terwiesch, M. Agarwal, D.W.T. Rippin, Batch unit optimization with imperfect modelling: a survey, Journal of Process Control 4 (4) (1994) 238–258.
[16] K.E. Brenan, S.L. Campbell, L.R. Petzold, Numerical Solution of Initial Value Problems in Differential-Algebraic Equations, North-Holland, New York, 1989.
[17] M. Alamir, A framework for monitoring control updating period in real-time NMPC schemes, in: L. Magni, D. Raimondo, F. Allgöwer (Eds.), Nonlinear Model-Predictive Control: Towards New Challenging Applications, Springer-Verlag, 2009, pp. 433–455.
[18] L. Würth, R. Hannemann, W. Marquardt, Neighboring-extremal updates for nonlinear model-predictive control and dynamic real-time optimization, Journal of Process Control 19 (2009) 1277–1288.
[19] R. Findeisen, F. Allgöwer, Computational delay in nonlinear model predictive control, in: Proceedings of the International IFAC Symposium "Advanced Control of Chemical Processes" – ADCHEM 3, 2004.
[20] H. Pesch, Real-time computation of feedback controls for constrained optimal control problems. Part 2: a correction method based on multiple shooting, Optimal Control Applications & Methods 10 (2) (1989) 147–171.
[21] A.E. Bryson, Y.-C. Ho, Applied Optimal Control, Taylor & Francis, Bristol, PA, 1975.
[22] S. Gros, B. Srinivasan, B. Chachuat, D. Bonvin, Neighboring-extremal control for singular dynamic optimization problems. I – Single-input case, International Journal of Control (6) (2009) 1099–1112.
[23] F. Flehmig, W. Marquardt, Detection of multivariable trends in measured process quantities, Journal of Process Control 16 (2006) 947–957.
[24] M. Schlegel, K. Stockmann, T. Binder, W. Marquardt, Dynamic optimization using adaptive control vector parameterization, Computers & Chemical Engineering 29 (8) (2005) 1731–1751.
[25] M. Schlegel, W. Marquardt, R. Ehrig, U. Nowak, Sensitivity analysis of linearly-implicit differential-algebraic systems by one-step extrapolation, Applied Numerical Mathematics 48 (1) (2003) 83–102.
[26] R. Hannemann, W. Marquardt, Fast computation of the Hessian of the Lagrangian in shooting algorithms for dynamic optimization, in: Proceedings of the IFAC Symposium "Dynamics and Control of Process Systems" – DYCOPS 8, Cancun, Mexico, July 2007.
[27] R. Hannemann, W. Marquardt, Continuous and discrete composite adjoints for the Hessian of the Lagrangian in shooting algorithms for dynamic optimization, SIAM Journal on Scientific Computing 31 (6) (2010) 4675–4695.
[28] C. Büskens, H. Maurer, Sensitivity analysis and real-time control of parametric optimal control problems using nonlinear programming methods, in: M. Grötschel, S.O. Krumke, J. Rambau (Eds.), Online Optimization of Large Scale Problems, Springer-Verlag, Berlin-Heidelberg, 2001, pp. 57–68.
[29] M. Diehl, Real-Time Optimization of Large Scale Nonlinear Processes, Ph.D. Thesis, Ruprecht-Karls-Universität Heidelberg, 2001.
[30] N. Ganesh, L. Biegler, A reduced Hessian strategy for sensitivity analysis of optimal flowsheets, AIChE Journal 33 (2) (1987) 282–296.