Copyright © IFAC Advanced Control of Chemical Processes, Pisa, Italy, 2000
ROBUST STEADY-STATE TARGETS FOR MODEL PREDICTIVE CONTROL

Dean E. Kassmann and Thomas A. Badgwell
Aspen Technology, Inc., 1293 Eldridge Parkway, Houston, Texas 77077
Abstract: In practice, Model Predictive Control (MPC) algorithms are typically embedded within a multi-level hierarchy of control functions. The MPC algorithm itself is usually implemented in two pieces: a steady-state target calculation followed by a dynamic optimization. We present a new formulation of the steady-state target calculation that explicitly accounts for model uncertainty. When model uncertainty is incorporated, the linear program associated with the steady-state target calculation can be recast as a semi-definite program (SDP) or a second-order cone program (SOCP). A simulation example illustrates the advantages of the proposed algorithm. Copyright © 2000 IFAC

Keywords: model predictive control, robust LP, semi-definite programming, second-order cone programming
1. INTRODUCTION
Model Predictive Control (MPC) refers to a class of computer control algorithms that compute a sequence of manipulated variable adjustments in order to optimize the future behavior of a plant. As its name suggests, an MPC algorithm uses an explicit model to predict how a process will evolve in time. The optimal control moves are computed using an online optimization which is, in general, a full-blown nonlinear program (NLP). In practice, linear models are most often used and the resulting optimizations are linear or quadratic programs. While originally developed to address the needs of power plants and petroleum refineries, MPC technology is now used in a wide variety of applications ranging from food processing to pulp and paper (Qin and Badgwell, 1997).

In modern processing plants, MPC is implemented as part of a multi-level hierarchy of control functions as shown in Figure 1. At the top of the structure a plant-wide optimizer determines optimal steady-state settings for each unit in the plant. These may be sent to local optimizers at each unit, which run more frequently or consider a more detailed unit model than is possible at the plant-wide level. The local optimizer computes an optimal economic steady-state and passes this information to the MPC algorithm for implementation. The MPC algorithm must move the plant from one constrained steady-state to another while minimizing constraint violations along the way.

Steady-state targets must be recalculated at each time step in order to remove the effects of disturbances entering the process. In principle one can compute new steady-state targets while simultaneously solving the dynamic optimization, but this is rarely done in practice. Controller tuning is greatly simplified by implementing the MPC algorithm with separate steady-state and dynamic optimizations, as shown in Figure 1.

In this work we assume the model used in the steady-state calculation is not known exactly. This leads to a robust steady-state target calculation in which the controller has the ability to rigorously account for model mismatch. Section 2 introduces the nominal steady-state target calculation. Section 3 describes one possible uncertainty description derived from process identification results. Section 4 introduces the robust LP, and Section 5 describes the proposed numerical solution method. In Section 6 we provide an illustrative simulation example.
Fig. 1. Model Predictive Control Hierarchy: Global Steady-State Calculation (every day/week); Local Steady-State Calculation (every 30 min - 6 hours); Model Predictive Control (MPC), consisting of the MPC Steady-State Target Calculation (every 1-2 minutes) and the MPC Dynamic Optimization (every 1-2 minutes); Basic Dynamic Control (every second).

2. THE NOMINAL LP

The steady-state target calculation for linear MPC commonly takes the form of a linear program (LP). Any steady-state linear model, whether it be state-space, step response, impulse response, or other, can be cast at steady-state in the following form:

$$\Delta y = G\,\Delta u.$$

Here $\Delta y \in R^m$ represents the change between the future steady-state output and the open-loop value of the output at steady-state, and $\Delta u \in R^n$ represents the change between the future steady-state input and its open-loop value at steady-state:

$$\Delta u = u_s - u_{ol}, \qquad \Delta y = y_s - y_{ol}.$$

The vectors $y_s$ and $u_s$ contain the future steady-state outputs and inputs, respectively. The vectors $y_{ol}$ and $u_{ol}$ are the open-loop values of $y$ and $u$ at steady-state:

$$u_{ol} = u_{k-1}, \qquad y_{ol} = f(u_{ol}).$$

The open-loop value of the input is simply its value at the last time step. The open-loop value of the output is the predicted output value at steady-state when the input is held constant at its previous value. Here $f$ represents the dynamic model of the system.

The matrix $G \in R^{m \times n}$ is the steady-state gain matrix for the system. The LP finds optimal steady-state targets by minimizing an economic objective:

$$J_s = c^T u_s + d^T y_s, \quad (1a)$$

while maintaining the relationship between the steady-state inputs and outputs:

$$\Delta y = G\,\Delta u, \quad (1b)$$

respecting input and output constraints arising from process specifications:

$$\underline{u} \le u_s \le \overline{u}, \quad (1c)$$
$$\underline{y} \le y_s \le \overline{y}, \quad (1d)$$

and ensuring the resulting solution does not force the dynamic MPC calculation to become infeasible:

$$N_c\,\underline{\Delta u} \le \Delta u \le N_c\,\overline{\Delta u}. \quad (1e)$$

In (1c) and (1d) $\underline{u}$ and $\overline{u}$ are minimum and maximum bounds for $u_s$, respectively, with similar notation employed for the outputs $y_s$. In (1e) $N_c$ refers to the control horizon of the dynamic MPC calculation, and $\underline{\Delta u}$ and $\overline{\Delta u}$ are the minimum and maximum bounds for the rate of change constraints in the dynamic MPC calculation. Equation (1e) ensures the steady-state target is compatible with the rate of change or velocity constraints in the dynamic MPC calculation.

In instances when it is desirable to maintain an individual output (or input) at some predetermined value, a penalty term can be added to the objective. It is common to use a weighted $\ell_1$ or squared $\ell_2$ norm:

$$\|\Delta y - y_t\|_1 \quad \text{or} \quad \|\Delta y - y_t\|_2^2.$$

Here $y_t \in R^q$, $q \le m$, is some target. When an $\ell_1$ norm is used the problem can still be cast as an LP with an objective of the form in equation (1a), but with more variables. When a squared $\ell_2$ norm is used the problem becomes a quadratic program (QP). While we do not address this case directly here, the results presented hold for both linear and quadratic objectives. Thus, we can without loss of generality consider only objectives of the form in equation (1a).

To avoid real-time infeasibilities in the LP, it is common to recast the hard output constraints $\underline{y} \le y_s \le \overline{y}$ in (1d) as soft constraints by adding slack variables $\overline{\epsilon} \ge 0 \in R^m$ and $\underline{\epsilon} \ge 0 \in R^m$ that allow for some amount of violation in the constraint:

$$\underline{y} - \underline{\epsilon} \le y_s \le \overline{y} + \overline{\epsilon}. \quad (2)$$
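To make the structure of (1)-(2) concrete, the following sketch (not from the paper) assembles a small steady-state target LP with hypothetical gains, costs, and bounds and solves it with scipy.optimize.linprog; the rate-of-change constraint (1e) is omitted for brevity and the open-loop point is taken as the origin.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-input / 2-output steady-state problem.
G = np.array([[2.0, 0.5],
              [0.3, 1.5]])                  # steady-state gain matrix
c = np.array([1.0, 0.5])                    # input costs
d = np.array([-3.0, -1.0])                  # output costs
u_lo, u_hi = np.full(2, -1.0), np.full(2, 1.0)
y_lo, y_hi = np.full(2, -1.0), np.full(2, 1.0)
rho = 1.0e3                                 # penalty on output constraint violations
n, m = 2, 2

# Decision vector z = [u_s, y_s, eps_hi, eps_lo]; minimize c'u_s + d'y_s + rho*sum(eps).
cost = np.concatenate([c, d, rho * np.ones(2 * m)])

# Steady-state model (1b), written around u_ol = 0, y_ol = 0: y_s - G u_s = 0.
A_eq = np.hstack([-G, np.eye(m), np.zeros((m, 2 * m))])
b_eq = np.zeros(m)

# Softened output bounds (2): y_s - eps_hi <= y_hi and -y_s - eps_lo <= -y_lo.
A_ub = np.vstack([
    np.hstack([np.zeros((m, n)),  np.eye(m), -np.eye(m),       np.zeros((m, m))]),
    np.hstack([np.zeros((m, n)), -np.eye(m), np.zeros((m, m)), -np.eye(m)]),
])
b_ub = np.concatenate([y_hi, -y_lo])

# Hard input bounds (1c) enter as variable bounds; slacks are nonnegative.
bounds = ([(lo, hi) for lo, hi in zip(u_lo, u_hi)]
          + [(None, None)] * m + [(0, None)] * (2 * m))

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("u_s target:", res.x[:n], " y_s target:", res.x[n:n + m])
```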
The sizes of the violations are minimized by appending the slacks to the objective function:

$$J_s = c^T u_s + d^T y_s + e^T \epsilon, \quad (3)$$

where $e \in R^{2m}$ is a vector which penalizes output deviations. For our discussion, we assume without loss of generality that

$$e = [1\ 1\ \cdots\ 1]^T; \quad (4)$$

however, the elements of $e$ may be arbitrarily large in practice. The vector $\epsilon$ is comprised of the stacked upper and lower slack vectors:

$$\epsilon = [\overline{\epsilon}^T\ \ \underline{\epsilon}^T]^T. \quad (5)$$

Additionally, a bias is introduced into the model to incorporate feedback. It is assumed that the difference between the model prediction and the measured output at the current time is due to a constant step disturbance at the output. While other types of disturbance models can be incorporated into the MPC framework (Muske and Rawlings, 1993), the constant output step disturbance assumption is standard in industrial applications (Qin and Badgwell, 1997). The model bias $b \in R^m$ is based on a comparison between the current predicted output $y$ and the current measured output $\hat y \in R^m$:

$$b = \hat y - y. \quad (6)$$

The bias is added to the model:

$$\Delta y = G\,\Delta u + b.$$

It is also common to calculate the moves $\Delta u$ and $\Delta y$, instead of the inputs and outputs directly. The nominal steady-state target calculation with soft constraints and an output bias can be expressed in velocity form as:

$$\begin{aligned}
\min_{\Delta u,\ \Delta y,\ \epsilon}\quad & c^T \Delta u + d^T \Delta y + e^T \epsilon \\
\text{subject to}\quad & \Delta y = G\,\Delta u + b \\
& A_u\,\Delta u \le b_u \\
& A_y\,\Delta y \le b_y + \epsilon \\
& \epsilon \ge 0,
\end{aligned} \quad (7)$$

where $A_u \in R^{4n \times n}$ and $b_u \in R^{4n}$ are given by

$$A_u = \begin{bmatrix} I \\ -I \\ I \\ -I \end{bmatrix}, \quad (8a) \qquad
b_u = \begin{pmatrix} \overline{u} - u_{ol} \\ -\underline{u} + u_{ol} \\ N_c\,\overline{\Delta u} \\ -N_c\,\underline{\Delta u} \end{pmatrix}, \quad (8b)$$

while $A_y \in R^{2m \times m}$ and $b_y \in R^{2m}$ are given by

$$A_y = \begin{bmatrix} I \\ -I \end{bmatrix}, \qquad
b_y = \begin{pmatrix} \overline{y} - y_{ol} \\ -\underline{y} + y_{ol} \end{pmatrix}. \quad (8c)$$

Additionally, since $\Delta y$ depends linearly upon $\Delta u$, the entire problem can be expressed in terms of $\Delta u$ and $\epsilon$, reducing the number of decision variables. The resulting LP can be cast in standard form and passed to an optimization algorithm such as the simplex method (Chvatal, 1983).
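As a minimal illustration of equations (8a)-(8c), the helper below (hypothetical data, not from the paper) builds the velocity-form constraint matrices around the current operating point, as used in problem (7).

```python
import numpy as np

def velocity_constraints(u_ol, y_ol, u_lo, u_hi, y_lo, y_hi, du_lo, du_hi, Nc):
    """Build A_u, b_u, A_y, b_y of eqs. (8a)-(8c) for the velocity-form LP (7)."""
    n, m = len(u_ol), len(y_ol)
    I_n, I_m = np.eye(n), np.eye(m)
    A_u = np.vstack([I_n, -I_n, I_n, -I_n])                      # (8a)
    b_u = np.concatenate([u_hi - u_ol, -u_lo + u_ol,
                          Nc * du_hi, -Nc * du_lo])              # (8b)
    A_y = np.vstack([I_m, -I_m])                                 # (8c)
    b_y = np.concatenate([y_hi - y_ol, -y_lo + y_ol])
    return A_u, b_u, A_y, b_y

# Hypothetical single-input, single-output data.
A_u, b_u, A_y, b_y = velocity_constraints(
    u_ol=np.array([0.2]), y_ol=np.array([0.4]),
    u_lo=np.array([-1.0]), u_hi=np.array([1.0]),
    y_lo=np.array([-1.0]), y_hi=np.array([1.0]),
    du_lo=np.array([-0.1]), du_hi=np.array([0.1]), Nc=5)
print(A_u.shape, b_u, A_y.shape, b_y)   # A_u is 4n x n, A_y is 2m x m
```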
3. ELLIPTIC UNCERTAINTY DESCRIPTION

Our goal is to develop a robust LP that explicitly accounts for uncertainty in the steady-state gain matrix $G$. In model identification, if the process noise and model parameters are assumed to be normally distributed or Gaussian variables, the natural uncertainty description is an ellipsoidal bound on the parameters (Bard, 1974). Let the estimated variable be $\theta = [\theta_1 \cdots \theta_{n_\theta}]^T \in R^{n_\theta}$, with covariance $V$ and mean $\theta_c$. An elliptic uncertainty description of the joint confidence region is given by

$$\mathcal{E} \overset{\text{def}}{=} \{\theta : (\theta - \theta_c)^T W (\theta - \theta_c) \le 1\}. \quad (9)$$

The center of the ellipse $\theta_c$ is the mean of the normal distribution and the symmetric positive-definite matrix $W$ defines the size and orientation of the ellipsoid. In particular, the square roots of the reciprocals of the eigenvalues of $W$ are the lengths of the semi-axes of the ellipsoid, and the eigenvectors of $W$ give the directions of the semi-axes.

The confidence region defined by the ellipsoid in (9) can be parameterized by a vector $s$ that sweeps out a unit sphere:

$$\theta \in \mathcal{E} \overset{\text{def}}{=} \{\theta_c + \beta V^{1/2} s : \|s\| \le 1\}. \quad (10)$$

Here we have replaced the matrix $W$ with its statistical interpretation:

$$W = \frac{1}{\beta^2}\,V^{-1}. \quad (11)$$

The matrix $V$ is the covariance or estimate of the covariance of the estimated parameters. The term $\beta$ is related to the radius of the ellipse and is given by

$$\beta = \Phi^{-1}(1 - \alpha), \quad (12)$$

where $\Phi(x)$ is the univariate normal distribution function with zero mean and unit variance:

$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\,e^{-s^2/2}\,ds, \quad (13)$$

and $1 - \alpha$ is the confidence or probability level. We can always guarantee a value of $\beta$ to exist for any probability level $1 - \alpha$, even when $V$ is only an estimate of the true covariance (Bard, 1974). For (10) to remain convex, $\alpha \le 0.5$, or equivalently, $\beta \ge 0$.

The uncertainty description given by (10) is more general than (9) as it allows $V$ to be rank deficient. When the covariance matrix $V$ does not have full rank, the ellipsoid becomes degenerate, collapsing in specific directions corresponding to parameters that are known exactly. The directions in which the ellipse collapses correspond to the semi-axes whose associated eigenvalues are zero (i.e., those directions for which a nonzero $z$ produces a $V^{1/2} z$ which is zero).
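A small sketch of the uncertainty description above, assuming SciPy is available: it computes β = Φ⁻¹(1−α) as in (12), forms W = V⁻¹/β² as in (11), and tests membership in the ellipsoid (9). The parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def beta_from_confidence(alpha):
    """beta = Phi^{-1}(1 - alpha) for the univariate normal, as in eq. (12)."""
    # For the ellipsoid (10) to stay convex we need alpha <= 0.5 (beta >= 0).
    assert alpha <= 0.5
    return norm.ppf(1.0 - alpha)

alpha = 0.05                                       # 95% confidence level
beta = beta_from_confidence(alpha)

theta_c = np.array([0.5, 1.2])                     # mean parameter estimate (hypothetical)
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])                       # parameter covariance (hypothetical)

# W = V^{-1} / beta^2 defines the ellipsoid (theta - theta_c)^T W (theta - theta_c) <= 1.
W = np.linalg.inv(V) / beta**2

# Membership test for a candidate parameter vector.
theta = np.array([0.6, 1.0])
inside = (theta - theta_c) @ W @ (theta - theta_c) <= 1.0
print(beta, inside)
```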
4. THE ROBUST LP

The robust LP explicitly accounts for uncertainty in the steady-state gain matrix $G$ of the nominal algorithm. Because the solution of the LP is always guaranteed to lie on the boundary of the feasible set, uncertainty in the gain matrix has a direct consequence on the steady-state targets passed to the dynamic MPC calculation.

When the model is not known exactly, the boundaries of the feasible region become uncertain. To see how such uncertainty can be handled, consider the standard form linear program:

$$\min_x\ c^T x \quad \text{subject to} \quad A x \le b \quad \forall\, A \in \mathcal{U}, \quad (14)$$

where $b \in R^m$, $x \in R^n$, and the data matrix $A$ is only known to lie in some set $\mathcal{U}$. When $\mathcal{U}$ is a convex, bounded polytope (such as the case of a box uncertainty description) the problem can be recast as an LP with an increased number of constraints: Dantzig (1963) showed that (14) can be cast equivalently as

$$\min_x\ c^T x \quad \text{subject to} \quad A^* x \le b, \quad (15)$$

where $A^*$ is a matrix created by enumerating all the possible combinations of $A$. In this case, the semi-infinite constraint has been replaced by an equivalent deterministic constraint.

Now consider the case in which $\mathcal{U}$ is described by an ellipsoid. The standard form LP can be written constraint-wise:

$$\min_x\ c^T x \quad \text{subject to} \quad a_i^T x \le b_i \quad \forall\, a_i \in \mathcal{E}_i, \quad (16a)$$

for $i = 1, \ldots, m$, with uncertainty in the data $a_i$ given by the ellipsoid:

$$\mathcal{E}_i \overset{\text{def}}{=} \{\bar a_i + \beta V_i^{1/2} s : \|s\| \le 1\}. \quad (16b)$$

The semi-infinite constraint in (16a) can be recast as the following constrained maximization:

$$\max\{a_i^T x : a_i \in \mathcal{E}_i\} \le b_i, \quad (17)$$

which in turn is equivalent to the following deterministic constraint:

$$\bar a_i^T x + \beta\,\|V_i^{1/2} x\| \le b_i. \quad (18)$$

We now return to the steady-state target calculation. The output constraint, ignoring bias and slacks, is nominally given by:

$$A_y\,\Delta y = A_y G\,\Delta u \le b_y. \quad (19)$$

The matrix $G$ is assumed to have some known amount of uncertainty. Let the set of possible values of $G$ be denoted $\mathcal{U}$. Then one possible solution is to require that the constraint hold for all possible $G$ in $\mathcal{U}$. This yields the semi-infinite constraint:

$$A_y G\,\Delta u \le b_y \quad \forall\, G \in \mathcal{U}. \quad (20)$$

Now consider a simplified version of our original problem, in which the constraints are collected into a single matrix $A_G$ that depends on the uncertain gains:

$$\min_x\ c^T x \quad \text{subject to} \quad A_G\, x \le b \quad \forall\, G \in \mathcal{U}, \quad (21)$$

in which $G$ represents the steady-state gain matrix. The constraint can be rewritten componentwise as:

$$g^T (D_i\, x) \le b_i \quad \forall\, g \in \mathcal{U}, \quad (22)$$

for $i = 1, \ldots, 2m$, where $g$ is the vector of the rows of $G$ stacked lengthwise, and the vector $e$ contains all ones. Here $D_i$ is defined by:

$$D_i = [\,\mathrm{diag}(a_{i1} e)\ \ \mathrm{diag}(a_{i2} e)\ \cdots\ \mathrm{diag}(a_{im} e)\,]^T. \quad (23)$$

Equation (22) is similar to the standard form constraint described above; it can be rewritten as the deterministic constraint

$$\bar g^T (D_i\, x) + \beta\,\|V^{1/2} D_i\, x\| \le b_i, \quad (24)$$

for $i = 1, \ldots, 2m$. Thus, problem (21) can be recast as a deterministic problem with a finite number of second-order cone constraints. The full robust steady-state target calculation in model predictive control is then:

$$\begin{aligned}
\min_{\Delta u,\ \Delta y,\ \epsilon}\quad & c^T \Delta u + d^T \Delta y + e^T \epsilon & (25a)\\
\text{subject to}\quad & \Delta y = \bar G\,\Delta u + b & (25b)\\
& A_u\,\Delta u \le b_u & (25c)\\
& \bar g^T (D_i\, \Delta u) + \beta\,\|V^{1/2} D_i\, \Delta u\| + (A_y b)_i \le b_{y,i} + \epsilon_i & (25d)\\
& \epsilon \ge 0, & (25e)
\end{aligned}$$

for $i = 1, \ldots, 2m$. This optimization problem must be solved at every controller execution. It is a semi-definite program (SDP); it can be solved directly or cast as a second-order cone program (SOCP) and solved with greater efficiency.
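The deterministic constraints (18) and (24) are simply the worst case of the uncertain left-hand side over the ellipsoid. The short numerical check below (not from the paper, data hypothetical) compares the closed-form worst case with a Monte Carlo sweep of the ellipsoid parameterization (16b).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a_bar = rng.normal(size=n)                 # nominal (mean) constraint row
M = rng.normal(size=(n, n))
V = M @ M.T                                # covariance of the row coefficients
beta = 1.96                                # e.g. beta = Phi^{-1}(1 - alpha), alpha = 0.025
x = rng.normal(size=n)                     # a fixed candidate decision vector

# Symmetric square root of V, matching the paper's V^{1/2}.
w, Q = np.linalg.eigh(V)
V_half = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

# Closed-form worst case of a^T x over the ellipsoid {a_bar + beta V^{1/2} s : ||s|| <= 1}.
worst_case = a_bar @ x + beta * np.linalg.norm(V_half @ x)

# Monte Carlo check: sample unit-norm directions s and evaluate a^T x.
S = rng.normal(size=(20000, n))
S /= np.linalg.norm(S, axis=1, keepdims=True)
sampled = a_bar @ x + beta * (S @ (V_half @ x))
print(worst_case, sampled.max())           # sampled max approaches the closed form from below
```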
5. NUMERICAL SOLUTION

In this section we show how the robust LP can be cast as a semi-definite or second-order cone program. A general semi-definite program (SDP) has the following form:

$$\min_x\ f^T x \quad \text{subject to} \quad F(x) \ge 0, \quad (26)$$

where the matrix $F(x)$ is defined by:

$$F(x) = F_0 + \sum_{i=1}^{m} x_i F_i, \quad (27)$$

with $m + 1$ symmetric matrices $F_0, \ldots, F_m \in R^{n \times n}$. The inequality $F(x) \ge 0$ means $F(x)$ is positive semidefinite. The robust LP can be cast as a semi-definite program by recognizing that a constraint of the form

$$\|A x + b\| \le c^T x + d \quad (28)$$

can be cast as the following matrix inequality:

$$F(x) = \begin{bmatrix} (c^T x + d)\,I & (A x + b) \\ (A x + b)^T & (c^T x + d) \end{bmatrix} \ge 0. \quad (29)$$

In terms of our original problem:

$$A \leftarrow \beta V^{1/2} D_i, \quad b \leftarrow 0, \quad c \leftarrow -D_i^T \bar g, \quad d \leftarrow b_{y,i} + \epsilon_i - (A_y b)_i, \quad x \leftarrow \Delta u. \quad (30)$$

The most common solution methods for semi-definite programs are primal-dual interior point methods. These algorithms developed as extensions to linear programming theory. Because of the special structure of the SDP, however, it may be more beneficial to cast the problem as a second-order cone program.

A general second-order cone program (SOCP) has the following form:

$$\min_x\ f^T x \quad \text{subject to} \quad \|A_i x + b_i\| \le c_i^T x + d_i, \quad i = 1, \ldots, N, \quad (31)$$

where $x \in R^n$ and $b_i \in R^m$. It uses the constraint (28) directly. Second-order cone programs are not as general as SDPs. A wide variety of nonlinear convex optimization problems can be cast as SOCPs (Lobo et al., 1998) and SDPs (Vandenberghe and Boyd, 1996).

The two principal advantages to casting the robust LP in this form are:

• First and foremost, the original semi-infinite optimization is recast in the form of a standard optimization problem with a finite dimensional constraint.
• Second, there has been tremendous activity in extending primal-dual interior-point methods to these more general problem classes, resulting in new efficient solution methods.

Currently, there are fewer publicly available codes for solving SOCPs than for solving SDPs. Only time will tell which formulation will ultimately prove superior for this class of problems. The primary reference for interior-point methods for second-order cone and semi-definite programming is the text by Nesterov and Nemirovski (1994).
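As a sketch of how the robust target calculation (25) can be posed for a modern conic solver, the following uses CVXPY as a stand-in for the SOCP routine of Lobo et al. (1997) used in the paper; the cone constraints correspond to (28)/(31). The gain statistics, costs, and bounds are hypothetical, and the rate-of-change rows of (8a)-(8b) are omitted.

```python
import numpy as np
import cvxpy as cp

# Hypothetical 2x2 problem: nominal gains, covariance of the stacked gains, bounds.
n = m = 2
G_bar = np.array([[2.0, 0.5],
                  [0.3, 1.5]])
g_bar = G_bar.flatten()                       # rows of G stacked lengthwise
V = 0.01 * np.eye(n * m)                      # covariance of the stacked gain vector
beta = 1.96                                   # Phi^{-1}(1 - alpha) for alpha = 0.025
c = np.array([1.0, 0.5]); d = np.array([-3.0, -1.0]); rho = 1.0e3
b = np.zeros(m)                               # output bias (6); zero in this sketch
A_y = np.vstack([np.eye(m), -np.eye(m)])      # (8c) with y_ol = 0 and unit bounds
b_y = np.ones(2 * m)
A_u = np.vstack([np.eye(n), -np.eye(n)])      # input bounds only
b_u = np.ones(2 * n)

w, Q = np.linalg.eigh(V)                      # symmetric square root V^{1/2}
V_half = Q @ np.diag(np.sqrt(w)) @ Q.T

def D(i):
    """D_i of (23): g^T (D_i @ du) equals row i of A_y @ G @ du."""
    return np.vstack([A_y[i, j] * np.eye(n) for j in range(m)])

du = cp.Variable(n)
dy = cp.Variable(m)
eps = cp.Variable(2 * m, nonneg=True)

constraints = [dy == G_bar @ du + b, A_u @ du <= b_u]
for i in range(2 * m):
    Di = D(i)
    # Robust output row i, cf. (25d): worst case over the gain ellipsoid.
    constraints.append(g_bar @ (Di @ du) + beta * cp.norm(V_half @ (Di @ du))
                       + (A_y @ b)[i] <= b_y[i] + eps[i])

prob = cp.Problem(cp.Minimize(c @ du + d @ dy + rho * cp.sum(eps)), constraints)
prob.solve()
print("robust target move:", du.value, " predicted output move:", dy.value)
```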
6. SISO EXAMPLE

Assume that we have identified the process model $\tilde g$ from tests on the plant $g$:

$$\tilde g = \frac{0.5}{3s+1}, \qquad g = \frac{2.0}{3s+1}.$$

Assume that a nominal MPC controller has been designed using model $\tilde g$, consisting of an LP for the steady-state target algorithm coupled with a QDMC algorithm for dynamic control. A step-response model with 30 coefficients is used to describe both the nominal model and the plant. The dynamic calculation uses a version of the QDMC algorithm (Garcia and Morshedi, 1986) with a control horizon of 5, a prediction horizon of 35, and simple tuning weights of $W_u = 0.1$ and $W_y = 0.5$. The QDMC algorithm has been modified to force the sum of moves over the entire control horizon to equal the steady-state move computed by the LP. The LP is solved using a variation of the simplex method (Dantzig, 1963; Grace, 1995). The dynamic calculation is solved using an active set SQP method with a BFGS update for the Hessian (Grace, 1995).

Assume that a robust MPC controller has also been designed, consisting of the steady-state target calculation (25) coupled with the same QDMC dynamic controller used by the nominal MPC algorithm. The robust steady-state target optimization is solved using the routine SOCP by Lobo et al. (1997). This SOCP routine makes use of Nesterov and Nemirovski's (1994) potential reduction algorithm for second-order cone programming.

Assume unit constraints on the input and output:

$$-1 \le y \le 1, \qquad -1 \le u \le 1.$$

Assume the steady-state economic objective is given by:

$$J_s = u - 3y.$$

The coefficients are such that minimizing the objective drives the process toward large inputs and outputs.

As illustrated in Figure 2, the nominal MPC controller cycles because of the model mismatch. The steady-state algorithm is trying to find a target that agrees with the feedback it has received while satisfying the constraints. Because the model it is using is not correct, the steady-state target iterates end up 'bouncing' around their correct values. The input, in fact, is forced to cross from one side of its operating region to the other. The output is not maximized as the LP costs dictate. In contrast, Figure 3 shows that the robust MPC controller successfully stabilizes the closed loop system with the same level of model mismatch. The system moves to the best possible location for the given level of model uncertainty.
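A back-of-the-envelope, steady-state-only version of the example (not the closed-loop simulation of the paper): with the gain mismatch above and an assumed uncertainty half-width large enough to cover the true gain, the scalar calculation below contrasts the nominal and robust input targets. It ignores the feedback bias, so it only illustrates the first target move.

```python
import numpy as np

g_model, g_plant = 0.5, 2.0        # nominal model gain and true plant gain of the example
beta_sigma = 1.5                   # assumed half-width of the gain confidence interval
u_max, y_max = 1.0, 1.0            # unit input and output bounds
c, d = 1.0, -3.0                   # steady-state costs: J_s = u - 3 y

# Nominal LP: minimize (c + d * g_model) * u subject to |u| <= 1, |g_model * u| <= 1.
# The reduced cost c + d * g_model = -0.5 < 0, so the nominal LP pushes u to its
# upper limit.  The plant then delivers y = g_plant * u = 2, overshooting the output
# bound, which is consistent with the cycling reported in Figure 2.
u_nominal = min(u_max, y_max / g_model)

# Robust LP (scalar version of (25)): require g * u <= y_max for every gain in the
# assumed interval [g_model - beta_sigma, g_model + beta_sigma].
u_robust = min(u_max, y_max / (g_model + beta_sigma))

print(u_nominal, g_plant * u_nominal)   # 1.0, 2.0  -> output constraint violated
print(u_robust, g_plant * u_robust)     # 0.5, 1.0  -> output held at its bound
```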
Fig. 2. Conventional MPC controller using the nominal LP (input and output trajectories).

Fig. 3. MPC controller using the robust LP (input and output trajectories).

7. CONCLUSIONS

When investigating the theoretical properties of model predictive control, it is nearly universally assumed that the steady-state target received by the dynamic controller does not change. In practice, however, the target results from a steady-state optimization whose purpose is to reject disturbances and drive the system to some economic optimum. Mismatch between the model used in the target calculation and the true system can cause very poor control.

For the case of a steady-state target calculation based on a linear program, we have shown one way that model uncertainty can be incorporated. For a linear model with elliptic uncertainty on the model parameters, the result is a robust target calculation which takes the form of a second-order cone program (SOCP). This SOCP can greatly improve control by rigorously accounting for modeling uncertainty. The resulting SOCP structure can be exploited to develop efficient numerical solutions based on primal-dual interior-point methods. A simulation has been provided to demonstrate the improvement possible with this approach.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the financial support of Aspen Technology, Inc., The National Science Foundation, and the Texas Higher Education Coordinating Board.

8. REFERENCES

Bard, Y. (1974). Nonlinear Parameter Estimation. Academic Press, Inc., New York.
Chvatal, V. (1983). Linear Programming. W. H. Freeman and Company, New York.
Dantzig, G. B. (1963). Linear Programming and Extensions. Princeton University Press, New Jersey.
Garcia, C. E. and A. M. Morshedi (1986). Quadratic Programming Solution of Dynamic Matrix Control (QDMC). Chem. Eng. Commun. 46, 73-87.
Grace, A. (1995). MATLAB Optimization Toolbox User's Guide. The MathWorks, Inc., Natick, Mass.
Lobo, M. S., L. Vandenberghe and S. Boyd (1997). SOCP: Software for Second-Order Cone Programming. Beta Version.
Lobo, M. S., L. Vandenberghe, S. Boyd and H. Lebret (1998). Applications of Second-Order Cone Programming. Linear Algebra Appl., to appear.
Muske, K. R. and J. B. Rawlings (1993). Model Predictive Control with Linear Models. AIChE J. 39(2), 262-287.
Nesterov, Yu. and A. Nemirovski (1994). Interior-Point Polynomial Methods in Convex Programming. Vol. 13 of Studies in Applied Mathematics. SIAM, Philadelphia, PA.
Qin, S. J. and T. A. Badgwell (1997). An Overview of Industrial Model Predictive Control Technology. In: Fifth International Conference on Chemical Process Control (J. C. Kantor, C. E. Garcia and B. Carnahan, Eds.). AIChE Symposium Series 316, Vol. 93, pp. 232-256.
Vandenberghe, L. and S. Boyd (1996). Semidefinite Programming. SIAM Rev. 38(1), 49-95.