4th IFAC Nonlinear Model Predictive Control Conference International Federation of Automatic Control Noordwijkerhout, NL. August 23-27, 2012
Model Predictive Control for changing economic targets

Daniel Limón ∗, Antonio Ferramosca ∗∗, Teodoro Alamo ∗, Alejandro H. González ∗∗, Darci Odloak ∗∗∗
∗ Departamento de Ingeniería de Sistemas y Automática, Universidad de Sevilla, Escuela Superior de Ingenieros, Camino de los Descubrimientos s/n, 41092 Sevilla, Spain (e-mail: {limon,alamo}@cartuja.us.es)
∗∗ Institute of Technological Development for the Chemical Industry (INTEC), CONICET-Universidad Nacional del Litoral (UNL), Güemes 3450, (3000) Santa Fe, Argentina (e-mail: {ferramosca,alejgon}@santafe-conicet.gov.ar)
∗∗∗ Department of Chemical Engineering, University of São Paulo, Av. Prof. Luciano Gaulberto, trv 3 380, 61548, São Paulo, Brazil (e-mail:
[email protected].)

Abstract: The objective of this paper is to present recent results on model predictive control for tracking in the context of the economic operation of industrial plants. The well-established hierarchical economic control structure is based on a Real Time Optimizer that provides the economic target to the advanced controller, in this case a model predictive controller. Changes in the economic parameters or constraints, or the existence of disturbances and modelling errors, may make this target change throughout the plant evolution. The MPC for tracking is an appealing formulation to deal with this issue, since it maintains recursive feasibility and convergence under any change of the target. This MPC formulation is summarized together with its properties. In virtue of these properties, it is demonstrated how the economic operation can be improved by integrating the Steady State Target Optimizer in the MPC. It is also shown how the proposed MPC can deal with practical problems such as zone control or distributed control. Finally, the economic operation of the plant can be enhanced by adopting an economic MPC approach. A formulation capable of ensuring economic optimality and target tracking is also presented.

1. INTRODUCTION

The main goal of an advanced control system in the process industries is to ensure a safe operation of the plant while the economic profit is maximized, according to the policies of the plant operator. The economic control of the plant in the process industries is implemented in a hierarchical control structure [Qin and Badgwell, 2003, Engell, 2007, Tatjewski, 2008]: at the top of this structure, an economic scheduler and planner decides what, when and how much the plant has to produce, taking into account information from the market and from the plant itself. The outputs of this layer are production goals, prices, cost functions and constraints that are sent to a Real Time Optimizer (RTO). The RTO is a model-based system, operated in closed loop, whose task is to provide the economic targets of the process variables controlled by the control level, taking into account information such as production goals, prices of products, energy costs, and constraints.

⋆ This work has been funded by the National Plan Projects DPI2008-05818 and DPI2010-21589-C05-01 of the Spanish Ministry of Science and Innovation and FEDER funds, and ANPCYT, Argentina (PICT 2008, contract number 1833).
It employs a stationary model of the plant and for this reason its sampling time is usually larger than that of the process layer. The economic operating point of the plant is calculated by solving the following optimization problem:

    min_{xs, us}  Φ(xs, us)
    s.t.  F(xs, us, w) = 0
          H(xs, us) ≤ 0

where Φ(xs, us) is the profit function of the process and F(xs, us, w) = 0 is the stationary model of the plant, which depends on a set of parameters/signals w that are provided by the operator or estimated from plant data, such as estimated disturbances, updated model parameters or corrected measurements from the data reconciliation procedure. The inequalities H(xs, us) ≤ 0 define the operational constraints of the plant.
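As a rough illustration of this layer, the following is a minimal Python sketch of such a steady-state economic optimization, assuming a hypothetical two-variable plant; the profit function, stationary model and bounds are illustrative placeholders, not data from the paper:

```python
import numpy as np
from scipy.optimize import minimize

w = 0.8  # estimated parameter/disturbance provided by data reconciliation

def neg_profit(z):            # -Phi(xs, us): profit to be maximized
    xs, us = z
    return -(2.0 * xs - 0.5 * us**2)

def stationary_model(z):      # F(xs, us, w) = 0, e.g. steady-state gain model
    xs, us = z
    return xs - w * us

def operational(z):           # H(xs, us) <= 0 written as -H(xs, us) >= 0
    xs, us = z
    return np.array([1.5 - xs, 2.0 - us])   # xs <= 1.5, us <= 2.0

cons = [{"type": "eq", "fun": stationary_model},
        {"type": "ineq", "fun": operational}]
res = minimize(neg_profit, np.array([0.5, 0.5]), method="SLSQP", constraints=cons)
xs_opt, us_opt = res.x        # economic target sent to the advanced controller
```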
The RTO provides the economic targets yt for the set of process variables y controlled by the advanced control scheme. The most widespread advanced control scheme is model predictive control [Qin and Badgwell, 1997]. It must be designed to maintain the controlled variables as close as possible to the targets yt, and hence to operate the plant close to the economic optimum. Typically, as will be shown later, the advanced control is split into two layers. The control actions calculated by the MPC are the setpoints for the low-level regulation loops. A nice property of this structure is that there exists a separation of objectives, models and time-scales between the different layers. While the RTO optimizes the operation of the plant at medium time-scales, the advanced control scheme deals with the tracking and disturbance rejection problem at a faster time-scale [Engell, 2007].

The main disadvantage of the RTO is that the control structure exhibits a slow reaction to process variations, such as disturbances, due to the infrequent solution of the RTO and the existing mismatch between the model used in the RTO and the dynamic model used by the advanced controller. The model mismatch may render the economic target calculated by the RTO inconsistent with the dynamic model or the constraints used in the advanced control [Kadam and Marquardt, 2007].

In order to enhance the economic performance, some methods tending to reduce the gap between the predictive controller and the RTO have been proposed. One widely used solution is the addition of a steady state target optimization (SSTO) layer to the MPC [Muske, 1997, Rao and Rawlings, 1999]. In this case, the advanced control is split into two levels: in the upper level, denoted as steady state target optimizer (SSTO), the setpoint of the predictive controller (x∗s, u∗s) is calculated by solving a mathematical programming problem of the form

    (x∗s, u∗s) = arg min_{xs, us}  ℓeco(ys − yt)
                 s.t.  xs = f(xs, us)
                       xs ∈ X,  us ∈ U

where ℓeco(ys − yt) is a local approximation of the profit function, typically a linear or a quadratic function, f(·,·) is the state space dynamic model function and X and U are the constraints of the plant. The optimal setpoint is then calculated taking into account information from the RTO and using as plant model the prediction model of the MPC, leading to a reduction of the inconsistencies [Engell, 2007]. Notice that the setpoints are updated with the same time scale as the MPC and can take into account the estimated disturbances.
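For a linear prediction model, this SSTO problem is a small quadratic (or linear) program. A minimal Python/cvxpy sketch is given below, assuming a quadratic local approximation of ℓeco and box constraints; the matrices A, B, C and the target yt are illustrative placeholders:

```python
import cvxpy as cp
import numpy as np

# Placeholder linear model xs = A xs + B us, ys = C xs, and target yt from the RTO.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
yt = np.array([1.2])

xs = cp.Variable(2)
us = cp.Variable(1)
ys = C @ xs

cost = cp.sum_squares(ys - yt)      # quadratic local approximation l_eco(ys - yt)
constraints = [
    xs == A @ xs + B @ us,          # steady-state condition xs = f(xs, us)
    cp.abs(xs) <= 2.0,              # xs in X (box)
    cp.abs(us) <= 1.0,              # us in U (box)
]
cp.Problem(cp.Minimize(cost), constraints).solve()
xs_sp, us_sp = xs.value, us.value   # setpoint passed to the lower-level MPC
```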
In the lower level, the predictive controller is designed to regulate the plant to the desired setpoints. It is derived from the solution of the following optimization problem [Muske and Rawlings, 1993]:

    min_u  Σ_{j=0}^{N-1} ℓ(x(j) − x∗s, u(j) − u∗s) + Vf(x(N) − x∗s)
    s.t.   x(0) = x,  x(j+1) = f(x(j), u(j)),  j = 0, ..., N−1
           u(j) ∈ U,  j = 0, ..., N−1
           x(j) ∈ X,  j = 0, ..., N−1
           x(N) − x∗s ∈ Ω
where ℓ(·,·) is a positive definite function that measures the tracking error with respect to the setpoint (x∗s, u∗s) and x is the current state of the plant. The terminal cost function Vf and the terminal constraint Ω are designed to ensure asymptotic stability of the setpoint [Rawlings and Mayne, 2009]. The control law is derived by means of the receding horizon technique: κN(x) = u0(0; x).

In [Rao and Rawlings, 1999] the authors analyze the implementation of a steady state optimizer with linear MPC. In [Zanin et al., 2002] the authors present the formulation and industrial implementation of a combined nonlinear steady state optimizer with MPC for a fluidized-bed catalytic cracker (FCC).

A step forward in the integration of RTO and MPC is the formulation of model predictive controllers that implement economic cost functions, by considering the economic cost function ℓeco(y − yt) as stage cost in the following optimization problem:
    min_u  Σ_{i=0}^{N-1} ℓeco(y(i) − yt)
    s.t.   u(j) ∈ U,  x(j) ∈ X,  j = 0, ..., N−1
           x(N) = x∗s
These controllers have shown good practical performance [Engell, 2007, Kadam and Marquardt, 2007], but their stability proof requires further study. In [Rawlings et al., 2008, Rawlings and Amrit, 2009] the authors prove asymptotic stability of the economically optimal admissible steady state, and show that the closed-loop system exhibits better performance with respect to the setpoint than standard setpoint-tracking MPC formulations. In [Diehl et al., 2011] the authors formulate the problem considering a generic economic criterion as stage cost, and also show that these economic MPC schemes admit a Lyapunov function to establish stability properties.

1.1 Feasibility loss of MPC under setpoint changes

Throughout the operation of the plant, the economic target calculated by the RTO may experience frequent changes. This can be caused by the effect of the estimated disturbances, the update of the steady state model or by changes in the profit function due to variations of the economic criteria. This change of the economic target may lead to a loss of feasibility of the predictive controller, which derives from the finite horizon and the stabilizing design based on a terminal cost function and a terminal constraint. This effect is shown in figure 1. The system is initially at x0 and the setpoint is r1. The MPC is designed to ensure stability taking a prediction horizon N = 3 and O∞(r1) as terminal constraint, where O∞(r1) denotes the maximal admissible invariant set for the LQR at the reference r1. Its domain of attraction is denoted as X3(r1). For this setpoint the optimization problem is feasible and the MPC steers the system to the setpoint. But, if the setpoint changes to r2, then the stabilizing terminal region O∞(r2) changes (this is not a simple translation) and the optimization problem will never be feasible. The controller should then be redesigned for this new setpoint.

Fig. 1. Example of the loss of feasibility.

Several predictive control schemes for tracking references have been proposed [Rossiter et al., 1996, Chisci and Zappa, 2003, Pannocchia and Kerrigan, 2005, Pannocchia, 2004, Findeisen et al., 2000, Magni et al., 2001b, Magni and Scattolini, 2005]. In this note, the model predictive control for tracking formulation proposed in [Limon et al., 2008] has been adopted, together with its extensions [Alvarado, 2007, Ferramosca, 2011]. This predictive controller is briefly summarized in the following section. Based on it, its integration in the economic operation of the plant is shown and different formulations oriented to enhance the economic optimality of the control scheme are presented.
2. MPC FOR TRACKING

Consider a system described by a nonlinear time-invariant discrete time model

    x+ = f(x, u)    (1a)
    y  = h(x, u)    (1b)
where x ∈ Rn is the system state, u ∈ Rm is the current control vector, y ∈ Rp is the controlled output and x+ is the successor state. The functions of the model f(x, u) and h(x, u) are assumed to be continuous at every equilibrium point. The solution of this system for a given sequence of control inputs u and initial state x is denoted as x(j) = ϕ(j; x, u), where x = ϕ(0; x, u). The state of the system and the control input applied at sampling time k are denoted as x(k) and u(k) respectively. The system is subject to hard constraints on state and control:

    (x(k), u(k)) ∈ Z,  for all k ≥ 0    (2)

where Z ⊂ Rn+m is a closed set.
The steady state, input and output of the plant (xs, us, ys) are such that (1) is fulfilled, i.e. xs = f(xs, us) and ys = h(xs, us). In order to avoid equilibrium points where the constraints are active (for stability reasons), the following restricted constraint set is defined

    Z̃ = {z : z + v ∈ Z, ∀ ∥v∥ ≤ ϵ}    (3)

where ϵ > 0 is arbitrarily small. Then, let us define the set of admissible equilibrium states with non-active constraints as

    Zs = {(x, u) ∈ Z̃ : x = f(x, u)}      (4)
    Ys = {y = h(x, u) : (x, u) ∈ Zs}      (5)

Assumption 1. It is assumed that the output of the system is chosen in such a way that the steady output ys univocally defines the equilibrium point (xs, us), and that there exist functions gx and gu such that

    xs = gx(ys),  us = gu(ys)    (6)

It is also assumed that the function gx(ys) is Lipschitz continuous and the function gu(ys) is continuous in Ys.

The main ingredients of the MPC for tracking are:

(i) an artificial steady state and input (xs, us), characterized by ys, are considered as decision variables;
(ii) the stage cost penalizes the deviation of the predicted trajectory from the artificial steady conditions;
(iii) an offset cost function VO : Rp → R is added to penalize the deviation between the artificial steady output and the target (ys − yt); VO is assumed to be a positive definite convex function;
(iv) the invariant set for tracking [Limon et al., 2008] Γ ⊆ IRn+p is considered as an extended terminal constraint on the terminal state and the artificial reference ys.

The cost function of the proposed MPC is given by:

    VN(x, yt; u, ys) = Σ_{j=0}^{N-1} ℓ(x(j) − xs, u(j) − us) + Vf(x(N) − xs, ys) + VO(ys − yt)

where x(j) = ϕ(j; x, u), xs = gx(ys), us = gu(ys) and yt is the target of the controlled variables. The controller is derived from the solution of the optimization problem PN(x, yt) given by:

    min_{u, ys}  VN(x, yt; u, ys)
    s.t.  x(0) = x,  x(j+1) = f(x(j), u(j)),  j = 0, ..., N−1
          (x(j), u(j)) ∈ Z,  j = 0, ..., N−1
          (xs, us) ∈ Zs,  ys ∈ Ys
          (x(N), ys) ∈ Γ

The optimal cost and the optimal decision variables will be denoted as VN0(x, yt) and (u0, ys0) respectively. Considering the receding horizon policy, the control law is given by κN(x, yt) = u0(0; x, yt). Since the set of constraints of PN(x, yt) does not depend on yt, its feasibility region does not depend on the target operating point yt. Hence, there exists a region XN ⊆ X such that for all x ∈ XN and for all yt ∈ Rp, PN(x, yt) is feasible.

Consider the following assumption:

Assumption 2.
(1) There exists a K function αℓ such that the stage cost function fulfills ℓ(x̃, ũ) ≥ αℓ(|x̃|).
(2) The set of admissible outputs Ys is a convex set.
(3) The model function f(x, u) is differentiable at any equilibrium point (xs, us) ∈ Zs and the
linearized model given by the pair of matrices (A(xs, us), B(xs, us)) is controllable. Furthermore, there exist positive constants ϵx, ϵu, b > 0 and σ > 1 such that

    Σ_{i=0}^{N-1} ℓ(x(i) − xs, u(i) − us) ≤ b ∥x − xs∥^σ

holds for any feasible solution (u, ys) of PN(x, yt) such that ∥x − xs∥ ≤ ϵx and ∥u(i) − us∥ ≤ ϵu.
(4) There exist a control law u = κ(x, ys), a terminal cost function Vf and a terminal constraint set Γ such that, for all (x, ys) ∈ Γ,
(a) (x, κ(x, ys)) ∈ Z, ys ∈ Ys and (f(x, κ(x, ys)), ys) ∈ Γ;
(b) Vf satisfies the following Lyapunov condition

    Vf(f(x, κ(x, ys)) − xs, ys) − Vf(x − xs, ys) ≤ −ℓ(x − xs, κ(x, ys) − us)

Remark 1. If the set {y = h(x, u) : (x, u) ∈ Zs} is not convex, then the set Ys must be chosen as a convex set contained in it.

A set Γ satisfying (4.a) is called an invariant set for tracking [Limon et al., 2008, Ferramosca, 2011]. Asymptotic stability and constraint satisfaction of the controlled system are stated in the following theorem, whose proof can be found in Ferramosca [2011].

Theorem 1. (Asymptotic Stability). Assume that Assumptions 1 and 2 hold and that the prediction horizon satisfies N ≥ n. Then for any target operation point yt and for any feasible initial state x0 ∈ XN, the system controlled by the proposed MPC controller κN(x, yt) is stable, converges to an equilibrium point, fulfils the constraints throughout the time evolution and, besides,

(i) If yt ∈ Ys, then lim_{k→∞} |y(k) − yt| = 0.
(ii) If yt ∉ Ys, then lim_{k→∞} |y(k) − ys∗| = 0, where

    ys∗ = arg min_{ys ∈ Ys} VO(ys − yt)
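To make point (ii) of Theorem 1 concrete, the following Python/cvxpy sketch computes the optimal reachable steady output ys∗ for an unreachable target, assuming a polyhedral set Ys described by Gy ys ≤ gy and a quadratic offset cost VO; the specific matrices are illustrative only:

```python
import cvxpy as cp
import numpy as np

# Illustrative polyhedral set of admissible steady outputs Ys = {ys : Gy @ ys <= gy}.
Gy = np.vstack([np.eye(2), -np.eye(2)])
gy = np.array([1.0, 1.0, 1.0, 1.0])        # box |ys_i| <= 1

yt = np.array([2.0, 0.5])                  # unreachable target (outside Ys)

ys = cp.Variable(2)
VO = cp.sum_squares(ys - yt)               # quadratic offset cost VO(ys - yt)
cp.Problem(cp.Minimize(VO), [Gy @ ys <= gy]).solve()

ys_star = ys.value                         # closest admissible steady output, here [1.0, 0.5]
```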
2.1 Linear case

Consider the case where the dynamics are described by a discrete-time linear system

    x+ = Ax + Bu
    y  = Cx + Du    (7)

and the constraint set Z is a compact convex polyhedron containing the origin in its interior. In this case, the steady state and input are characterized by (xs, us) = M ys, where the columns of M define the null space of the matrix [A − I, B]. The sets Zs and Ys are then also convex polyhedra. If the stage cost is the quadratic function ℓ(x̃, ũ) = ∥x̃∥²_Q + ∥ũ∥²_R, then the stabilizing assumption of Theorem 1 can be stated as follows.

Assumption 3.
(1) Let R ∈ Rm×m be a positive definite matrix and Q ∈ Rn×n a positive semi-definite matrix such that the pair (Q^{1/2}, A) is observable.
(2) Let K ∈ Rm×n be a stabilizing control gain such that (A + BK) has its eigenvalues inside the unit circle.
(3) Let P ∈ Rn×n be a positive definite matrix such that

    (A + BK)′ P (A + BK) − P = −(Q + K′RK)

(4) Let Γ be an admissible polyhedral invariant set for the system

    [ x+  ]   [ A + BK   BL ] [ x  ]
    [ ys+ ] = [   0      Ip ] [ ys ]

such that Γ ⊆ {(x, ys) : (x, Kx + Lys) ∈ Z, ys ∈ Ys}, where the matrix L is given by L = [−K I] M.

The extended terminal set Γ can be calculated using standard algorithms for the computation of invariant sets for constrained linear systems [Limon et al., 2008]. The resulting optimization problem is convex and, for some instances of the offset cost function, such as quadratic or linear functions, it is a quadratic programming problem. In this case, the optimization problem can be cast as a multi-parametric quadratic program with respect to (x, yt), and then the control law κN(x, yt) is a piecewise affine function of (x, yt) that can be explicitly calculated by means of existing multi-parametric programming tools [Bemporad et al., 2002].
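As a concrete illustration of the linear case, the following is a minimal Python/cvxpy sketch of the optimization problem PN(x, yt) for a toy double-integrator-like system. For simplicity it replaces the invariant set for tracking Γ with the terminal equality constraint x(N) = xs (in the spirit of the equality-constraint variant discussed for the nonlinear case below); all matrices and bounds are illustrative assumptions, not data from the paper:

```python
import cvxpy as cp
import numpy as np

# Illustrative double integrator with position output.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R, T = np.eye(2), 0.1 * np.eye(1), 10.0 * np.eye(1)   # stage and offset weights
N = 10
x0 = np.array([0.0, 0.0])
yt = np.array([3.0])                       # target provided by the RTO/SSTO

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
ys = cp.Variable(1)                        # artificial reference
xs = cp.Variable(2)
us = cp.Variable(1)

cost = cp.quad_form(ys - yt, T)            # offset cost VO(ys - yt)
cons = [x[:, 0] == x0,
        xs == A @ xs + B @ us,             # (xs, us) is an equilibrium pair
        ys == C @ xs]
for j in range(N):
    cost += cp.quad_form(x[:, j] - xs, Q) + cp.quad_form(u[:, j] - us, R)
    cons += [x[:, j + 1] == A @ x[:, j] + B @ u[:, j],
             cp.abs(u[:, j]) <= 1.0,       # (x, u) in Z (box constraints here)
             cp.abs(x[:, j]) <= 5.0]
cons += [x[:, N] == xs, cp.abs(ys) <= 4.5] # simplified terminal condition and ys in Ys

cp.Problem(cp.Minimize(cost), cons).solve()
u_mpc = u[:, 0].value                      # receding horizon control action
```

Note that the constraint set above does not depend on yt, which is precisely the property that preserves feasibility when the target changes.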
2.2 Nonlinear case

As is standard in MPC for regulation, one of the main problems of the stabilizing design of the MPC for tracking is the calculation of the terminal constraint set. For this reason, a sensible choice in the design of the MPC for tracking for nonlinear systems is to take as terminal constraint the set of steady states and outputs, that is

    Γ = {(xs, ys) : xs = gx(ys), ys ∈ Ys}

This is an extension of the equality terminal constraint. In this case, the terminal cost function is chosen as Vf(x, ys) = 0. The domain of attraction of the controller and its optimality can be enhanced if a terminal cost function and a terminal constraint set are considered. A local approximation of the plant around the equilibrium point can be used to obtain a local terminal region and a local cost function [Ferramosca, 2011]. The improvement can be more significant if a prediction horizon larger than the control horizon is considered, as proposed in [Magni et al., 2001a, Ferramosca, 2011]. Extending the results of [Limon et al., 2006], it can be proved that, by weighting the terminal cost function, the terminal constraint can be removed while maintaining the recursive feasibility of the problem and asymptotic stability.

2.3 Properties

Stability under any change in the target yt. The constraint set of the proposed optimization problem does not depend on the given economic target yt. Hence, the recursive feasibility of the controller is ensured for any value of yt, that is, the controller ensures constraint satisfaction for any value of yt even if it is time-varying. Moreover, if yt converges to a reachable value, then the controller steers the system to the target. If it is not reachable, then the controller steers the plant to the closest equilibrium point, according to the offset cost function VO.

Larger domain of attraction. For a given prediction horizon, the proposed controller provides a larger domain of attraction than that of the MPC for regulation. This remarkable property allows us to extend the controllability of the predictive controller to a larger region at the expense of p additional decision variables. This property makes the proposed controller interesting even for regulation objectives. Notice that any admissible equilibrium (with no active constraint) is contained in the domain of attraction. Hence, if the system is initially in an admissible steady state, the proposed controller steers the plant to any reachable target for any prediction horizon N ≥ n. In order to show this property, the domain of attraction of the MPC for the continuous stirred tank reactor (CSTR) of [Magni et al., 2001a] has been calculated.

Fig. 2. Domains of attraction of the MPC for tracking with terminal equality constraints XN for N = 2, 10, 17, and domain of attraction of the MPC for regulation ΩN with N = 17.

The domains of attraction of the MPC for tracking with N = 2, N = 10 and N = 17 are represented respectively by black, red and blue solid lines and are denoted X2, X10 and X17. The domain of attraction of the MPC for regulation with N = 17 is represented by a dashed line and is denoted Ω17. Note how the MPC for tracking achieves a larger domain of attraction than the one given by the MPC for regulation. It is also relevant that the domains of attraction of the MPC for tracking cover the entire steady state manifold even for N = 2.

3. INTEGRATION OF SSTO IN MPC

It is not unusual that the target yt provided by the RTO is not consistent with the prediction model or with the constraints, that is, yt ∉ Ys. In this case, the problem is solved by the Steady State Target Optimizer (SSTO), which calculates the best admissible setpoint for the MPC according to the function ℓeco(·). However, this problem is naturally solved by the MPC for tracking: if the target is reachable, yt ∈ Ys, then the system is steered to the target; but if the target is not reachable, i.e. yt ∉ Ys, then the controller steers the system to the optimal operating point according to the offset cost function VO(·). Therefore, if the MPC for tracking is designed taking the economic function ℓeco(y) as the offset cost function, i.e. VO(y) = ℓeco(y), then the control law steers the system to the equilibrium point (x∗s, u∗s, ys∗) such that

    ys∗ = arg min_{ys ∈ Ys} ℓeco(ys − yt)

Consequently, the SSTO is naturally integrated in the MPC for tracking. Taking into account that closed-loop stability is proved for any convex offset cost function, the results of the theorem still hold if this cost function varies with time. This makes it possible to use economic functions ℓeco that can be adapted throughout the evolution of the system to enhance the economic performance of the plant. In [Zanin et al., 2002, De Souza et al., 2010], predictive controllers integrating the SSTO have been proposed and the obtained results demonstrate the economic profit of this technique.

4. ZONE CONTROL

Exploiting the flexibility in the design of the offset cost function, the MPC for tracking can also be used to deal with the so-called zone control problem.

In many cases in the process industries, the target associated with the optimal operating condition is not given by a point in the output space (a fixed setpoint), but by a region in which the output should lie most of the time. In general, based on operational requirements, process outputs can be classified into two broad categories: 1) setpoint controlled outputs, to be controlled at a desired value, and 2) set-interval controlled outputs, to be controlled within a desired range. Conceptually, the output intervals are not output constraints, since they are desired steady state zones that can be transitorily disregarded, while the (dynamic) constraints must be respected at all times. In addition, the determination of the output intervals is related to the steady state operability of the process, and it is not a trivial problem. Special care should be taken with the compatibility between the available input set (given by the input constraints) and the desired output set (given by the output intervals). In practice, however, the operators may select control zones that are fully or partly unreachable. The MPC controller has to deal with this poor selection of the control zones and the possible loss of feasibility.

From a theoretical point of view, the control objective of the zone control problem can be seen as a target set Yt ⊂ IRp instead of a target point, since inside the zone there is no preference between one point and another. As proposed in [Ferramosca et al., 2010], the MPC for tracking can cope with this problem using a suitable offset cost function VO(ys, Yt) such that
(i) if ys ∈ Yt, then VO(ys, Yt) = 0;
(ii) otherwise, VO(ys, Yt) > 0.
In this case, the resulting controller ensures that
(i) if Yt ∩ Ys ≠ ∅, the closed-loop system asymptotically converges to a steady output y(∞) ∈ Yt;
(ii) if Yt ∩ Ys = ∅, the closed-loop system asymptotically converges to a steady output y(∞) = ys∗, such that

    ys∗ ≜ arg min_{ys ∈ Ys} VO(ys, Yt)
A sensible choice of the offset cost function is the distance from the set

    VO(ys, Yt) ≜ min_{y ∈ Yt} ∥ys − y∥_q
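For a box-shaped zone Yt and the ∞-norm, this offset cost can be evaluated (or embedded in the MPC problem through its epigraph) as a small convex program. A minimal Python/cvxpy sketch with an illustrative two-output zone is:

```python
import cvxpy as cp
import numpy as np

# Illustrative target zone Yt = {y : lb <= y <= ub} and a candidate steady output ys.
lb, ub = np.array([0.0, 0.0]), np.array([1.0, 1.0])
ys = np.array([1.4, -0.3])

y = cp.Variable(2)                          # point of Yt closest to ys
VO = cp.norm(ys - y, "inf")                 # distance in the infinity norm
prob = cp.Problem(cp.Minimize(VO), [y >= lb, y <= ub])
prob.solve()

print(prob.value)   # VO(ys, Yt): 0 whenever ys lies inside the zone, here 0.4
```

In the full controller, ys would itself be a decision variable, and the minimization over y ∈ Yt is simply absorbed into the MPC optimization problem.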
Particularly interesting is the case where the chosen norm is the 1-norm or the ∞-norm, since for linear systems the resulting optimization problem can then be cast as a quadratic programming problem [Ferramosca et al., 2010].

In [Ferramosca, 2011, Chapter 4], the MPC for tracking for zone control has been applied to a simulation model of the four-tanks plant, taking the ∞-norm case and N = 3. Figure 3 depicts the evolution of the levels (h1, h2) when the system is subject to changes in the target zones. Notice that when the target zone intersects the set of admissible outputs Ys, the system is steered to an admissible operating point inside the zone, whereas when the target zone does not intersect it, the system evolves to the operating point that minimizes the distance to the zone.

Fig. 3. State-space evolution for the ∞-norm formulation.
5. DISTRIBUTED MPC

In the process industries, plants are usually considered as large scale systems, consisting of linked units of operation. Therefore, they can be divided into a number of subsystems, connected by networks of different nature, such as material, energy or information streams [Stewart et al., 2010]. The interconnected structure of the plant can be exploited to develop the advanced control of the plant, leading to a decentralized or distributed control scheme.

In a decentralized control scheme [Magni and Scattolini, 2006] each subsystem is controlled independently, without interchange of information between the different subsystems. The information that flows in the network is usually considered as a disturbance by each subsystem [Sandell Jr. et al., 1978]. The drawback of this control formulation is the big loss of information when the interactions between subsystems are strong. The performance of this control technique can be improved if the agents are coordinated [Liu et al., 2009, 2008].

Distributed control is a control strategy in which different agents, instead of a centralized controller, control each subsystem, and may or may not share information. In noncooperative controllers, each agent makes decisions on its own subsystem, considering the information of the other subsystems only locally [Dunbar, 2007]. Cooperative distributed controllers, on the other hand, consider the effect of all the control actions on all subsystems in the network: each controller optimizes an overall plant objective function, such as the centralized objective [Stewart et al., 2010, Pannocchia et al., 2011].

Let us consider that the plant can be partitioned into a collection of M coupled subsystems. Following [Stewart et al., 2010, Section 3.1.1] and [Rawlings and Mayne, 2009, Chapter 6, pp. 421-422], each subsystem can be modeled as follows:

    xi+ = Ai xi + Σ_{j=1}^{M} B̄ij uj    (8)

where xi ∈ Rni, uj ∈ Rmj, yi ∈ Rpi, Ai ∈ Rni×ni and B̄ij ∈ Rni×mj. Without loss of generality, it is considered that u = (u1, ..., uM).

Among the existing solutions for the distributed predictive control problem, we focus our attention on the cooperative game presented in [Stewart et al., 2010] and [Rawlings and Mayne, 2009, Chapter 6, p. 433]. In this case, all the agents share a common (and hence coupled) objective, which can be taken as the overall plant objective VN(x, yt; u, ys), where x = (x1, ..., xM). Each i-th agent calculates its corresponding input ui by solving an iterative decentralized optimization problem, given an initial feasible solution ui^[0]. The solution of agent i at iteration p will be denoted as ui^[p]. Based on this, the solution of each agent at the next iteration p + 1 is calculated from the solution of the p-th iteration u^[p] and the solution of the following optimization problem for the i-th agent:

    min_{ui, ys}  VN(x, yt; u, ys)                                        (9)
    s.t.  x(0) = x
          xq(j+1) = Aq xq(j) + Σ_{ℓ=1}^{M} B̄qℓ uℓ(j),  q ∈ I
          uℓ(j) = uℓ^[p](j),  ℓ ∈ I \ i,  with (u1^[p], ..., uM^[p]) = u^[p]
          u(j) ∈ U,  j = 0, ..., N−1
          x(j) ∈ X,  j = 0, ..., N−1
          (xs, us) = M ys
          (x(N), ys) ∈ Γ

where I = {1, 2, ..., M}, x(j) = (x1(j), x2(j), ..., xM(j)) and u(j) = (u1(j), u2(j), ..., uM(j)).
Based on the solution of this optimization problem for each agent, namely ui0, the solution at iteration p + 1 is given by

    ui^[p+1] = wi ui0 + (1 − wi) ui^[p]    (10)

where wi ∈ (0, 1), with Σ_{i=1}^{M} wi = 1, is a given set of weights. The initial sequence of this recursion, ui^[0], is calculated from the optimal solution obtained at the previous sampling time and the terminal control law, as proposed in [Ferramosca et al., 2011]. The convergence of the proposed control algorithm and its stabilizing properties are stated in the following theorem [Ferramosca et al., 2011].

Theorem 2. (Asymptotic stability). Assume that the assumptions of Theorem 1 hold. Let XN be the feasible set of states of problem (9). Then for all x(0) ∈ XN and for all yt, the closed-loop system is asymptotically stable and converges to an equilibrium point (x∗s, u∗s) = M ys∗ such that

    ys∗ = arg min_{ys ∈ Ys} VO(ys, yt)
Moreover, if yt ∈ Ys, then ys∗ = yt.

This control scheme inherits the properties of its centralized counterpart, such as the enlargement of the domain of attraction and admissible tracking under changing economic targets. A further nice property is the integration of the SSTO in the MPC. In cooperative MPC, the target problem solved in a distributed way converges to the centralized optimum only if the constraints are uncoupled; in the case of coupled constraints, it is recommended to use the centralized approach to solve the target problem [Rawlings and Mayne, 2009, Section 6.3.4]. The proposed controller ensures convergence to the centralized optimal equilibrium point, since every agent solves an optimization problem with a centralized offset cost function. Remarkably, this property holds for any suboptimal solution provided by the controller due, for instance, to the effect of coupled constraints between agents, or to a small number of iterations p̄. Furthermore, this equilibrium point is the admissible equilibrium which minimizes the offset cost function, that is, ℓeco(ys − yt).
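The cooperative iteration of equation (10) can be sketched in a few lines of Python. The per-agent optimization is abstracted here as a placeholder function solve_agent_problem (a name introduced for illustration, not from the paper), which would solve problem (9) for agent i with the other agents' inputs frozen; the weights and iteration count are illustrative:

```python
import numpy as np

M_AGENTS = 2                 # number of subsystems/agents
w = np.array([0.5, 0.5])     # convex-combination weights, w_i in (0, 1), summing to 1
P_MAX = 10                   # cooperative iterations per sampling time

def solve_agent_problem(i, u_fixed):
    """Placeholder for problem (9): return agent i's optimal input sequence u_i0
    with the other agents' sequences frozen at u_fixed (e.g. a QP like the one
    sketched in Section 2.1)."""
    raise NotImplementedError

def cooperative_iteration(u_init):
    """One sampling time of the cooperative scheme: warm start u^[0], then apply (10)."""
    u = [ui.copy() for ui in u_init]     # u^[0], built from the previous optimal solution
    for _ in range(P_MAX):
        u_opt = [solve_agent_problem(i, u) for i in range(M_AGENTS)]
        # Update (10): u_i^[p+1] = w_i u_i0 + (1 - w_i) u_i^[p]
        u = [w[i] * u_opt[i] + (1.0 - w[i]) * u[i] for i in range(M_AGENTS)]
    return u                             # suboptimal but feasible plant-wide input sequences
```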
6. ECONOMIC MPC UNDER CHANGING TARGETS

As commented before, the main property of economic MPC is that the economic cost function ℓeco(y − yt) is taken as the stage cost function of the MPC. This leads to control laws that provide a closed-loop trajectory with better performance according to the considered economic function. Although stability of economic MPC has recently been proved [Diehl et al., 2011], the possible loss of feasibility when the economic target changes may still occur. It would be desirable to extend the MPC for tracking to the economic MPC paradigm, ensuring that if the optimal setpoint is not reachable the controller still steers the system to the setpoint, but in such a way that the economic optimality property is recovered as soon as possible.

To extend the MPC for tracking scheme to the economic MPC scheme in the linear case, the following cost function is defined

    ℓt(z) = ℓeco(z + ys∗ − yt)

where ys∗ is the output corresponding to the optimal setpoint (x∗s, u∗s). Notice that this function is a simple translation of the economic cost function. The cost function of the proposed economic MPC for tracking is given by:

    VN(x, yt; u, ys) = Σ_{j=0}^{N-1} ℓt(y(j) − ys) + c ∥ys − ys∗∥

where x(j) = ϕ(j; x, u), xs = gx(ys), us = gu(ys) and yt is the target of the controlled variables. The controller is derived from the solution of the optimization problem PNe(x, yt) given by:

    min_{u, ys}  VN(x, yt; u, ys)
    s.t.  x(0) = x,  x(j+1) = Ax(j) + Bu(j),  j = 0, ..., N−1
          (x(j), u(j)) ∈ Z,  j = 0, ..., N−1
          ys ∈ Ys
          x(N) = gx(ys)

As in [Diehl et al., 2011], the stabilizing controller is formulated in terms of the rotated cost function, defined as

    Lr(z) = ℓt(z) + λᵀ(x − Ax − Bu) − ℓeco(ys∗ − yt)

where λ is such that the rotated cost is a positive definite function of ∥z∥. Such a λ exists thanks to the convexity of the function ℓeco. In [Ferramosca, 2011] it is proved that the MPC for tracking taking the cost Lr(y − ys) as stage cost provides the same control law as the proposed economic MPC. Therefore, it can be proved that the control law is such that ys(k) converges to ys∗. Since ℓt(y − ys∗) = ℓeco(y − yt), it would be desirable that the artificial reference ys converge to ys∗ in finite time, recovering the economic MPC controller. In virtue of the results of [Ferramosca et al., 2009], taking a constant c sufficiently large, once the state reaches the feasibility region of the economic MPC, the control law of the economic MPC for tracking coincides with the control law of the economic MPC.
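A minimal Python/cvxpy sketch of this economic MPC for tracking is given below for a toy linear system with a convex (and here strictly convex) economic cost; the terminal constraint x(N) = gx(ys) is imposed through the equilibrium conditions, and all numerical data, bounds and the constant c are illustrative assumptions, not values from the paper:

```python
import cvxpy as cp
import numpy as np

# Illustrative system, constraints and economic data.
A = np.array([[0.8, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 1.0]])
yt = np.array([5.0])              # economic target from the RTO (possibly unreachable)
N, c_w = 8, 100.0                 # horizon and weight c of the offset term
x0 = np.zeros(2)
u_max, y_max = 1.0, 3.0           # simple bounds standing in for Z and Ys

def l_eco(v):                     # illustrative strictly convex economic cost l_eco(v)
    return 0.5 * cp.sum_squares(v) - 1.0 * cp.sum(v)

# Optimal admissible steady output ys* = argmin_{ys in Ys} l_eco(ys - yt).
ys_v = cp.Variable(1)
cp.Problem(cp.Minimize(l_eco(ys_v - yt)), [cp.abs(ys_v) <= y_max]).solve()
ys_star = ys_v.value

# Economic MPC for tracking: stage cost l_t(y(j) - ys) = l_eco(y(j) - ys + ys* - yt),
# plus the offset term c * ||ys - ys*||.
x, u = cp.Variable((2, N + 1)), cp.Variable((1, N))
ys, xs, us = cp.Variable(1), cp.Variable(2), cp.Variable(1)

cost = c_w * cp.norm(ys - ys_star, 1)
cons = [x[:, 0] == x0,
        xs == A @ xs + B @ us,    # (xs, us) is the equilibrium pair generating ys
        ys == C @ xs,
        cp.abs(ys) <= y_max]      # ys in Ys (simplified surrogate)
for j in range(N):
    cost += l_eco(C @ x[:, j] - ys + ys_star - yt)
    cons += [x[:, j + 1] == A @ x[:, j] + B @ u[:, j],
             cp.abs(u[:, j]) <= u_max,
             cp.abs(C @ x[:, j]) <= y_max]
cons += [x[:, N] == xs]           # terminal constraint x(N) = gx(ys)

cp.Problem(cp.Minimize(cost), cons).solve()
u_econ = u[:, 0].value            # receding horizon control action
```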
7. CONCLUSIONS
This note has been devoted to the advanced control of plants under changing economic targets provided by the Real Time Optimizer. The formulation adopted is the MPC for tracking, which has the great advantage of ensuring feasibility under any change of the target. The theoretical results show that this formulation can be naturally adapted as the advanced control of the plant for linear as well as for nonlinear systems. It has also been shown that the proposed approach can cope with practical control problems such as zone control, distributed control or economic control, ensuring in every case local optimality, recursive feasibility and asymptotic stability under changing targets.

REFERENCES

I. Alvarado. Model Predictive Control for Tracking Constrained Linear Systems. PhD thesis, Univ. de Sevilla, 2007.
A. Bemporad, M. Morari, V. Dua, and E. Pistikopoulos. The explicit linear quadratic regulator for constrained systems. Automatica, 38:3–20, 2002.
L. Chisci and G. Zappa. Dual mode predictive tracking of piecewise constant references for constrained linear systems. Int. J. Control, 76:61–72, 2003.
G. De Souza, D. Odloak, and A. C. Zanin. Real time optimization (RTO) with model predictive control (MPC). Computers and Chemical Engineering, 34:1999–2006, 2010.
M. Diehl, R. Amrit, and J. B. Rawlings. A Lyapunov function for economic optimizing model predictive control. IEEE Transactions on Automatic Control, 56(3):703–707, 2011.
W. B. Dunbar. Distributed receding horizon control of dynamically coupled nonlinear systems. IEEE Transactions on Automatic Control, 52(7):1249–1263, 2007.
S. Engell. Feedback control for optimal process operation. Journal of Process Control, 17:203–219, 2007.
A. Ferramosca. Model Predictive Control for Systems with Changing Setpoints. PhD thesis, Univ. de Sevilla, 2011. http://fondosdigitales.us.es/tesis/autores/1537/.
A. Ferramosca, D. Limon, I. Alvarado, T. Alamo, and E. F. Camacho. MPC for tracking with optimal closed-loop performance. Automatica, 45:1975–1978, 2009.
A. Ferramosca, D. Limon, A. H. González, D. Odloak, and E. F. Camacho. MPC for tracking zone regions. Journal of Process Control, 20:506–516, 2010.
A. Ferramosca, D. Limon, J. B. Rawlings, and E. F. Camacho. Cooperative distributed MPC for tracking. In Proceedings of the 18th IFAC World Congress, 2011.
R. Findeisen, H. Chen, and F. Allgöwer. Nonlinear predictive control for setpoint families. In Proc. of the American Control Conference, 2000.
J. V. Kadam and W. Marquardt. Integration of economical optimization and control for intentionally transient process operation. In R. Findeisen, F. Allgöwer, and L. T. Biegler, editors, International Workshop on Assessment and Future Direction of Nonlinear Model Predictive Control, pages 419–434. Springer, 2007.
D. Limon, T. Alamo, F. Salas, and E. F. Camacho. On the stability of MPC without terminal constraint. IEEE Transactions on Automatic Control, 42:832–836, 2006.
D. Limon, I. Alvarado, T. Alamo, and E. F. Camacho. MPC for tracking of piece-wise constant references for constrained linear systems. Automatica, 44:2382–2387, 2008.
J. Liu, D. Muñoz de la Peña, B. J. Ohran, P. D. Christofides, and J. F. Davis. A two-tier architecture for networked process control. Chem. Eng. Sci., 63(22):5394–5409, 2008.
J. Liu, D. Muñoz de la Peña, and P. D. Christofides. Distributed model predictive control of nonlinear process systems. AIChE Journal, 55(5):1171–1184, 2009.
L. Magni and R. Scattolini. Stabilizing decentralized model predictive control for nonlinear systems. Automatica, 42(7):1231–1236, 2006.
L. Magni and R. Scattolini. On the solution of the tracking problem for non-linear systems with MPC. Int. J. of Systems Science, 36(8):477–484, 2005.
L. Magni, G. De Nicolao, L. Magnani, and R. Scattolini. A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica, 37:1351–1362, 2001a.
L. Magni, G. De Nicolao, and R. Scattolini. Output feedback and tracking of nonlinear systems with model predictive control. Automatica, 37:1601–1607, 2001b.
K. Muske. Steady-state target optimization in linear model predictive control. In Proceedings of the ACC, 1997.
K. Muske and J. B. Rawlings. Model predictive control with linear models. AIChE Journal, 39:262–287, 1993.
G. Pannocchia. Robust model predictive control with guaranteed setpoint tracking. Journal of Process Control, 14:927–937, 2004.
G. Pannocchia and E. Kerrigan. Offset-free receding horizon control of constrained linear systems. AIChE Journal, 51:3134–3146, 2005.
G. Pannocchia, J. B. Rawlings, and S. J. Wright. Conditions under which suboptimal nonlinear MPC is inherently robust. Systems & Control Letters, 60:747–755, 2011.
S. J. Qin and T. A. Badgwell. A survey of industrial model predictive control technology. Control Engineering Practice, 11:733–764, 2003.
S. J. Qin and T. A. Badgwell. An overview of industrial model predictive control technology. In Proceedings of the Conference on Chemical Process Control, 1997.
C. V. Rao and J. B. Rawlings. Steady states and constraints in model predictive control. AIChE Journal, 45:1266–1278, 1999.
J. B. Rawlings and R. Amrit. Optimizing process economic performance using model predictive control. In L. Magni, D. M. Raimondo, and F. Allgöwer, editors, International Workshop on Assessment and Future Direction of Nonlinear Model Predictive Control, pages 315–323. Springer, 2009.
J. B. Rawlings and D. Q. Mayne. Model Predictive Control: Theory and Design. Nob-Hill Publishing, 1st edition, 2009.
J. B. Rawlings, D. Bonne, J. B. Jorgensen, A. N. Venkat, and S. B. Jorgensen. Unreachable setpoints in model predictive control. IEEE Transactions on Automatic Control, 53:2209–2215, 2008.
J. A. Rossiter, B. Kouvaritakis, and J. R. Gossner. Guaranteeing feasibility in constrained stable generalized predictive control. IEE Proc. Control Theory Appl., 143:463–469, 1996.
N. R. Sandell Jr., P. Varaiya, M. Athans, and M. Safonov. Survey of decentralized control methods for large scale systems. IEEE Transactions on Automatic Control, 23(2):108–128, 1978.
B. T. Stewart, A. N. Venkat, J. B. Rawlings, S. J. Wright, and G. Pannocchia. Cooperative distributed model predictive control. Systems & Control Letters, 59:460–469, 2010.
P. Tatjewski. Advanced control and on-line process optimization in multilayer structures. Annual Reviews in Control, 32:71–85, 2008.
A. C. Zanin, M. Tvrzska de Gouvea, and D. Odloak. Integrating real time optimization into the model predictive controller of the FCC system. Control Engineering Practice, 10:819–831, 2002.