Automatica 43 (2007) 158 – 163 www.elsevier.com/locate/automatica
Technical communique
Improving off-line approach to robust MPC based on nominal performance cost☆
BaoCang Ding a,∗, YuGeng Xi b, Marcin T. Cychowski c, Thomas O'Mahony c
a School of Electricity and Automation, Hebei University of Technology, Tianjin 300130, PR China
b Institute of Automation, Shanghai Jiaotong University, Shanghai 200240, PR China
c Department of Electronic Engineering, Cork Institute of Technology, Rossa Avenue, Cork, Ireland
Received 17 July 2005; received in revised form 3 July 2006; accepted 21 July 2006 Available online 26 September 2006
Abstract
This paper gives two alternative off-line synthesis approaches to robust model predictive control (RMPC) for systems with polytopic description. In each approach, a sequence of explicit control laws corresponding to a sequence of nested asymptotically invariant ellipsoids is constructed off-line. In order to accommodate a wider class of systems, a nominal performance cost is chosen to substitute for the "worst-case" one in an existing technique. In the design of the control law for a larger ellipsoid, the second approach further incorporates the knowledge of the control laws associated with all smaller ellipsoids, so as to further improve feasibility and optimality. The effectiveness of the alternative approaches is illustrated by a simulation example.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: Robust model predictive control; Off-line method; Nominal performance cost; Asymptotically invariant ellipsoid; Linear matrix inequality
1. Introduction

Synthesis approaches for robust model predictive control (RMPC), which have been widely investigated, can be classified into on-line RMPC (see e.g. Angeli, Casavola, & Mosca, 2002; Kothare, Balakrishnan, & Morari, 1996; Kouvaritakis, Rossiter, & Schuurmans, 2000; Wan & Kothare, 2003a) and off-line RMPC (see e.g. Cychowski, Ding, Tang, & O'Mahony, 2004; Wan & Kothare, 2003b). The latter involves no on-line optimization. Kothare et al. (1996) optimized on-line a single state feedback gain F(k) such that a "worst-case" infinite-horizon quadratic optimization problem is solved. This method usually imposes a heavy on-line computational burden and limits the achievable optimality and feasibility. Many techniques have been proposed for improvement. Kouvaritakis et al. (2000)
☆ This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form under the direction of Editor A.L. Tits.
∗ Corresponding author. E-mail address: [email protected] (B.C. Ding).
0005-1098/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.automatica.2006.07.022
off-line designed a state feedback gain F to generate "closed-loop" state predictions; on-line, only a summation of the 2-norms of the perturbations on F is minimized. Angeli et al. (2002) off-line constructed a series of nested ellipsoids, the smallest one corresponding to F, such that the state in one ellipsoid can be steered into the neighboring smaller one in one step. On-line, by properly choosing one of the ellipsoids as the terminal constraint set, a standard MPC (see Mayne, Rawlings, Rao, & Scokaert, 2000) with control horizon M = 1 converges the state towards the smallest ellipsoid. More recently, Wan and Kothare (2003b) directly solved the optimization in Kothare et al. (1996) off-line, such that N state feedback gains F_i with corresponding nested ellipsoidal domains of attraction ε_i are constructed. On-line, when the state lies between two adjacent ellipsoids, the real-time control law is chosen as a linear interpolation of the two corresponding off-line laws. This off-line RMPC has a considerably lower on-line computational burden. However, with respect to optimality and feasibility it is apparently worse than Kothare et al. (1996). In the present paper, we propose two improved alternatives to Wan and Kothare (2003b). The first alternative adopts a nominal performance cost in substitution of the
"worst-case" one, such that feasibility can be improved. In calculating F_i for a larger ellipsoid, the second alternative further takes advantage of F_{i+1}, ..., F_N associated with the smaller ellipsoids ε_{i+1}, ..., ε_N, such that optimality and feasibility can be improved. Inspired by the fixed-horizon on-line feedback RMPC (see Section 4.6 of Mayne et al., 2000), we name the second alternative the heuristic varying-horizon off-line feedback RMPC.

Notation. For any vector x and positive-definite matrix W, ‖x‖²_W = xᵀWx. x(k+i|k) is the value of vector x at time k+i, predicted at time k. I is the identity matrix of appropriate dimension. The symbol ∗ denotes the transpose of the corresponding lower block of a symmetric matrix.

2. Problem description

Consider the following time-varying uncertain system:

x(k+1) = A(k)x(k) + B(k)u(k),  k ≥ 0,   (1)

where u ∈ R^m and x ∈ R^n are the input and the measurable state, respectively. The constraints are

−ū ≤ u(k+i) ≤ ū,  −ψ̄ ≤ Ψx(k+i+1) ≤ ψ̄,  ∀i ≥ 0,   (2)

where ū := [ū₁, ū₂, ..., ū_m]ᵀ, ū_j > 0, j ∈ {1, ..., m}; ψ̄ := [ψ̄₁, ψ̄₂, ..., ψ̄_q]ᵀ, ψ̄_s > 0, s ∈ {1, ..., q}; Ψ ∈ R^{q×n}. Assume that [A(k)|B(k)] ∈ Ω = Co{[A₁|B₁], [A₂|B₂], ..., [A_L|B_L]}, ∀k ≥ 0, i.e., there exist L nonnegative coefficients λ_l(k), Σ_{l=1}^L λ_l(k) = 1, such that [A(k)|B(k)] = Σ_{l=1}^L λ_l(k)[A_l|B_l]. Denote [Â|B̂] ∈ Ω as the nominal model that is most likely to be the actual system (Wan & Kothare, 2003a) (e.g., [Â|B̂] = Σ_{l=1}^L [A_l|B_l]/L).

The purpose is to design a robust predictive controller that drives system (1)–(2) to the steady state (x_ss, u_ss) = (0, 0), minimizing J_{true,∞} = Σ_{i=0}^∞ [‖x(i)‖²_Q + ‖u(i)‖²_R], where Q > 0 and R > 0 are weighting matrices. Due to the uncertainty, J_{true,∞} cannot be directly minimized. Kothare et al. (1996) turned to solving, at each time k, the following problem:

min_{u(k+i|k)=F(k)x(k+i|k), P(k)}  max_{[A(k+i)|B(k+i)]∈Ω, i≥0}  J_∞(k) = Σ_{i=0}^∞ [‖x(k+i|k)‖²_Q + ‖u(k+i|k)‖²_R],   (3a)

s.t.  −ū ≤ u(k+i|k) ≤ ū,  −ψ̄ ≤ Ψx(k+i+1|k) ≤ ψ̄,  ∀i ≥ 0,   (3b)

x(k+i+1|k) = A(k+i)x(k+i|k) + B(k+i)u(k+i|k),  x(k|k) = x(k),  ∀i ≥ 0,   (3c)

‖x(k+i+1|k)‖²_{P(k)} − ‖x(k+i|k)‖²_{P(k)} ≤ −‖x(k+i|k)‖²_Q − ‖u(k+i|k)‖²_R,  ∀[A(k+i)|B(k+i)] ∈ Ω, i ≥ 0,  P(k) > 0,   (3d)

where F(k) is a state feedback gain; (3d) is for guaranteeing cost monotonicity and robust stability. For a stable closed-loop system, summing (3d) from i = 0 to i = ∞ yields max_{[A(k+i)|B(k+i)]∈Ω, i≥0} J_∞(k) ≤ ‖x(k)‖²_{P(k)} ≤ γ, where γ > 0 is a scalar. Define Q = γP(k)⁻¹ and F(k) = YQ⁻¹; then (3d) and ‖x(k)‖²_{P(k)} ≤ γ can be transformed into the following LMIs:

[ Q            ∗   ∗    ∗  ]
[ A_lQ + B_lY  Q   ∗    ∗  ]  ≥ 0,   l ∈ {1, ..., L},   (4)
[ Q^{1/2}Q     0   γI   ∗  ]
[ R^{1/2}Y     0   0    γI ]

[ 1     ∗ ]
[ x(k)  Q ]  ≥ 0,  Q > 0.   (5)

With (4)–(5) satisfied, (3b) is guaranteed by the following LMIs:

[ Q  ∗ ]
[ Y  Z ]  ≥ 0,  Z_jj ≤ ū_j²,  j ∈ {1, ..., m},   (6)

[ Q               ∗ ]
[ Ψ(A_lQ + B_lY)  Γ ]  ≥ 0,  Γ_ss ≤ ψ̄_s²,  l ∈ {1, ..., L},  s ∈ {1, ..., q},   (7)

where Z_jj (Γ_ss) is the jth (sth) diagonal element of Z (Γ). Thus, problem (3) can be solved by

min_{γ, Q, Y, Z, Γ}  γ,   s.t. (4)–(7).   (8)

On-line solution of (8) incurs a heavy computational burden, especially for high-dimensional systems. Wan and Kothare (2003b) proposed the off-line RMPC to transform (8) into an off-line computation. The alternative techniques in the present paper concern two points regarding the off-line RMPC based on (8). First, the "worst-case" performance cost produces L LMIs as in (4). Second, it is a feedback RMPC with control horizon M = 0 (see Mayne et al., 2000). Both are restrictions on obtaining better feasibility and optimality. Therefore, we will first adopt a nominal performance cost, and then further apply a heuristic varying-horizon feedback formulation.

3. The control scheme adopting nominal performance cost

Let us solve the following minimization problem at each time k:

min_{u(k+i|k)=F(k)x(k+i|k), P(k)}  J_{n,∞}(k) = Σ_{i=0}^∞ [‖x̂(k+i|k)‖²_Q + ‖u(k+i|k)‖²_R],   (9a)

s.t.  x̂(k+i+1|k) = Âx̂(k+i|k) + B̂u(k+i|k),  x̂(k|k) = x(k),  ∀i ≥ 0,  (3b) and (3c),   (9b)

‖x(k+i+1|k)‖²_{P(k)} − ‖x(k+i|k)‖²_{P(k)} < 0,  ∀i ≥ 0,  P(k) > 0,   (9c)

‖x̂(k+i+1|k)‖²_{P(k)} − ‖x̂(k+i|k)‖²_{P(k)} ≤ −‖x̂(k+i|k)‖²_Q − ‖u(k+i|k)‖²_R,  ∀i ≥ 0,   (9d)
where x̂ denotes the nominal state; (9c) is for guaranteeing robust stability; (9d) is for cost monotonicity. Since [Â|B̂] ∈ Ω, both (9c) and (9d) are less restrictive than (3d). Hence, compared with (3), problem (9) is more easily feasible, i.e., (9) can be applied to a wider class of systems. For a stable closed-loop system, x̂(∞|k) = 0. Hence, summing (9d) from i = 0 to ∞ yields J_{n,∞}(k) ≤ ‖x(k)‖²_{P(k)}, where the same notation as in Section 2 is adopted, which should not cause confusion. According to Wan and Kothare (2003a), constraint (9d) is equivalent to

(Â + B̂F(k))ᵀP(k)(Â + B̂F(k)) − P(k) ≤ −Q − F(k)ᵀRF(k).   (10)

Define Q = γP(k)⁻¹ and F(k) = YQ⁻¹; then (10) and (9c) can be transformed into the LMIs

[ Q          ∗   ∗    ∗  ]
[ ÂQ + B̂Y    Q   ∗    ∗  ]  ≥ 0,   (11)
[ Q^{1/2}Q   0   γI   ∗  ]
[ R^{1/2}Y   0   0    γI ]

[ Q            ∗ ]
[ A_lQ + B_lY  Q ]  > 0,  l ∈ {1, ..., L}.   (12)

For treating the input and state constraints, (12) plays the same role as (4), i.e., (5) and (12) also lead to x(k+i|k)ᵀQ⁻¹x(k+i|k) ≤ 1, ∀i ≥ 0 (see Lemma 1 of Kothare et al., 1996). Therefore, with (5) and (12) satisfied, (6)–(7) guarantee satisfaction of (3b) (the proofs are the same as those in Kothare et al., 1996; for the input constraint also refer to Wan & Kothare, 2003a). Thus, problem (9) can be solved by

min_{γ, Q, Y, Z, Γ}  γ,   s.t. (11), (12) and (5)–(7).   (13)

Eq. (11) is a necessary condition of (4), and (12) is a part of (4). Hence, (13) is more easily feasible than (8).

Algorithm 1. Stage 1: Off-line, choose states x_i, i ∈ {1, ..., N}. Substitute x(k) in (5) by x_i, and solve (13) to obtain the corresponding matrices {Q_i, Y_i}, ellipsoids ε_i = {x ∈ Rⁿ | xᵀQ_i⁻¹x ≤ 1} and feedback gains F_i = Y_iQ_i⁻¹. Notice that x_i should be chosen such that ε_{i+1} ⊂ ε_i, ∀i ≠ N. For each i ≠ N, check whether the following is satisfied:

Q_i⁻¹ − (A_l + B_lF_{i+1})ᵀQ_i⁻¹(A_l + B_lF_{i+1}) ≥ 0,  l ∈ {1, ..., L}.   (14)

Stage 2: On-line, at each time k adopt the following state feedback law:

u(k) = F(k)x(k) = { F(λ_i(k))x(k),  x(k) ∈ ε_i, x(k) ∉ ε_{i+1}, i ≠ N;  F_Nx(k),  x(k) ∈ ε_N },   (15)

where F(λ_i(k)) = λ_i(k)F_i + (1 − λ_i(k))F_{i+1} and
(i) if (14) is satisfied, then λ_i(k) ∈ (0, 1] and x(k)ᵀ[λ_i(k)Q_i⁻¹ + (1 − λ_i(k))Q_{i+1}⁻¹]x(k) = 1;
(ii) if (14) is not satisfied, then λ_i(k) = 1.

Theorem 1. Given an initial state x(0) ∈ ε₁, Algorithm 1 asymptotically stabilizes the closed-loop system. Moreover, if (14) is satisfied for all i ≠ N, then the control law (15) in Algorithm 1 is a continuous function of the system state x.

Proof. For x(k) ∈ ε_i, since {Y_i, Q_i, Z_i, Γ_i} satisfy (5)–(7) and (12), F_i is feasible and stabilizing. For x(k) ∈ ε_i\ε_{i+1}, denote Q(λ_i(k))⁻¹ = λ_i(k)Q_i⁻¹ + (1 − λ_i(k))Q_{i+1}⁻¹ and X(λ_i(k)) = λ_i(k)X_i + (1 − λ_i(k))X_{i+1}, X ∈ {Z, Γ}. If (14) is satisfied and {Y_i, Q_i} satisfies (12), then

Q_i⁻¹ − (A_l + B_lF(λ_i(k)))ᵀQ_i⁻¹(A_l + B_lF(λ_i(k))) > 0,  l ∈ {1, ..., L}.   (16)

Moreover, if both {Y_i, Q_i, Z_i, Γ_i} and {Y_{i+1}, Q_{i+1}, Z_{i+1}, Γ_{i+1}} satisfy (6) and (7), then

[ Q(λ_i(k))⁻¹  ∗          ]
[ F(λ_i(k))    Z(λ_i(k))  ]  ≥ 0,  with Z(λ_i(k))_jj ≤ ū_j²,  j ∈ {1, ..., m},   (17)

[ Q(λ_i(k))⁻¹              ∗          ]
[ Ψ(A_l + B_lF(λ_i(k)))    Γ(λ_i(k))  ]  ≥ 0,  l ∈ {1, ..., L},  with Γ(λ_i(k))_ss ≤ ψ̄_s²,  s ∈ {1, ..., q}.   (18)

Eqs. (16)–(18) indicate that u(k) = F(λ_i(k))x(k) will keep the state inside ε_i and drive it towards ε_{i+1}, with the hard constraints satisfied. For more details, refer to Wan and Kothare (2003b).

4. Further improvement based on a heuristic varying-horizon formulation

In our second approach, the off-line feedback gains are obtained in a backward manner. That is, we calculate F_{N−h} by increasing h gradually from h = 0 to h = N − 1. We optimize all F_{N−h} that correspond to ε_{N−h} satisfying the inclusion condition ε₁ ⊃ ε₂ ⊃ ··· ⊃ ε_N. By the technique of Wan and Kothare (2003b) or Algorithm 1, for any initial state x(k) = x_{N−h}, the following is imposed:

u(k + i|k) = F_{N−h}x(k + i|k),  ∀i ≥ 0.   (19)

However, after applying F_{N−h} in ε_{N−h} (h > 0), the state may have been driven into the smaller ellipsoid ε_{N−h+1}, in which F_{N−h+1} is more appropriate than F_{N−h}. Hence, we can substitute (19) by

u(k + i|k) = F_{N−h+i}x(k + i|k),  i ∈ {0, ..., h − 1};  u(k + i|k) = F_Nx(k + i|k),  ∀i ≥ h.   (20)
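The on-line stage of Algorithm 1 — locating the active ellipsoid, solving the scalar equation for λ_i(k) in (15)(i), forming the interpolated gain, and checking condition (14) at the polytope vertices — involves only elementary matrix computations. A minimal numerical sketch, in which all ellipsoid matrices, gains and vertices are illustrative placeholders rather than values from the paper:

```python
import numpy as np

# Illustrative data (not from the paper): two nested ellipsoids
# eps_i = {x : x' inv(Q_i) x <= 1}, with eps_{i+1} inside eps_i,
# and their associated off-line feedback gains.
Q_i   = np.diag([4.0, 4.0])
Q_ip1 = np.diag([1.0, 1.0])
F_i, F_ip1 = np.array([[-0.4, -0.6]]), np.array([[-0.8, -1.0]])

def in_ellipsoid(x, Q):
    """Membership test x' inv(Q) x <= 1."""
    return float(x.T @ np.linalg.inv(Q) @ x) <= 1.0

def interpolation_coeff(x, Q_i, Q_ip1):
    """Solve x'[lam*inv(Q_i) + (1-lam)*inv(Q_{i+1})]x = 1 for lam;
    the equation is linear in lam: lam*a + (1-lam)*b = 1."""
    a = float(x.T @ np.linalg.inv(Q_i) @ x)
    b = float(x.T @ np.linalg.inv(Q_ip1) @ x)
    return (b - 1.0) / (b - a)

def condition_14_holds(Q_i, F_ip1, vertices):
    """Check inv(Q_i) - (A_l + B_l F_{i+1})' inv(Q_i) (A_l + B_l F_{i+1}) >= 0
    at every vertex [A_l|B_l], via the smallest eigenvalue."""
    Qinv = np.linalg.inv(Q_i)
    for A, B in vertices:
        Acl = A + B @ F_ip1
        if np.min(np.linalg.eigvalsh(Qinv - Acl.T @ Qinv @ Acl)) < 0.0:
            return False
    return True

# A state between the two ellipsoids: inside eps_i, outside eps_{i+1}
x = np.array([[1.2], [0.4]])
lam = interpolation_coeff(x, Q_i, Q_ip1)    # here lam = 0.5
F_lam = lam * F_i + (1.0 - lam) * F_ip1     # interpolated gain of (15)
u = F_lam @ x

# Hypothetical polytope vertices for which (14) happens to hold
vertices = [(np.diag([0.3, 0.3]), np.array([[0.0], [0.5]])),
            (np.array([[0.3, 0.1], [0.0, 0.3]]), np.array([[0.0], [0.5]]))]
ok = condition_14_holds(Q_i, F_ip1, vertices)
```

For x on the boundary of ε_{i+1}, b = 1 gives λ = 0 and the law hands over to F_{i+1}, consistent with the continuity property asserted in Theorem 1.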
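The substitution of (20) for (19) can be mimicked numerically by rolling the nominal model under the scheduled gains and accumulating the nominal cost; comparing the two rollouts mirrors the motivation for (20). A sketch with an illustrative nominal model and gain sequence (none of these values come from the paper):

```python
import numpy as np

# Illustrative nominal model [A_hat|B_hat] and off-line gains F_{N-h}, F_{N-h+1}, F_N
A_hat = np.array([[1.0, 0.1], [0.0, 1.0]])
B_hat = np.array([[0.0], [1.0]])
gains = [np.array([[-0.5, -0.9]]),    # F_{N-h} for the current ellipsoid
         np.array([[-0.8, -1.1]]),    # F_{N-h+1}
         np.array([[-1.0, -1.3]])]    # F_N, held for all remaining steps
Q, R = np.eye(2), np.array([[1.0]])

def nominal_cost(x0, schedule, steps=60):
    """Roll x(k+i+1|k) = A_hat x(k+i|k) + B_hat u(k+i|k) under a gain
    schedule: u(k+i|k) = schedule[min(i, len(schedule)-1)] x(k+i|k),
    accumulating the nominal stage costs x'Qx + u'Ru."""
    x, J = x0.astype(float), 0.0
    for i in range(steps):
        F = schedule[min(i, len(schedule) - 1)]
        u = F @ x
        J += float(x.T @ Q @ x + u.T @ R @ u)
        x = A_hat @ x + B_hat @ u
    return J

x0 = np.array([[1.0], [0.0]])
J_19 = nominal_cost(x0, gains[:1])  # (19): keep F_{N-h} for all i
J_20 = nominal_cost(x0, gains)      # (20): advance through the gains, then hold F_N
```

Whether the varying-horizon schedule actually lowers the cost depends on the gains and the state, which is why the construction in this section remains heuristic.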
If

x(k) ∈ ε_{N−h}, x(k) ∉ ε_{N−h+1} ⇒ x(k + i) ∈ ε_{N−h+i},  ∀i ∈ {0, ..., h},   (21)

then choosing (20) is apparently better than choosing (19).

4.1. Calculating {Q_N, F_N} and {Q_{N−h}, F_{N−h}}, h = 1

Firstly, Q_N and F_N are determined by Algorithm 1, by which P_N = γ_NQ_N⁻¹ is obtained. Then, considering an x(k) = x_{N−1} ∉ ε_N, we select the control laws as

u(k) = F_{N−1}x(k),  u(k + i|k) = F_Nx(k + i|k),  ∀i ≥ 1.   (22)

If x(k+1|k) ∈ ε_N, then according to the procedure for calculating Q_N and F_N, it follows that Σ_{i=1}^∞ [‖x̂(k+i|k)‖²_Q + ‖u(k+i|k)‖²_R] ≤ ‖x̂(k+1|k)‖²_{P_N} and J_{n,∞}(k) ≤ J̄_{n,∞}(N−1, k) = ‖x(k)‖²_Q + ‖u(k)‖²_R + ‖x̂(k+1|k)‖²_{P_N}. Let

γ_{N−1} − x(k)ᵀP(k)x(k) ≥ 0,
Q + F_{N−1}ᵀRF_{N−1} + (Â + B̂F_{N−1})ᵀP_N(Â + B̂F_{N−1}) ≤ P(k);   (23)

then by applying (22) it follows that J̄_{n,∞}(N−1, k) ≤ γ_{N−1}. Let us consider, instead of (9), the following optimization problem:

min_{u(k)=F_{N−1}x(k), P(k), γ_{N−1}}  γ_{N−1},   s.t. (3b), (3c), (9c) and (23), where i = 0.   (24)

Define Q_{N−1} = γ_{N−1}P(k)⁻¹ and F_{N−1} = Y_{N−1}Q_{N−1}⁻¹; then (23) can be transformed into the LMIs

[ 1      ∗       ]
[ x(k)   Q_{N−h} ]  ≥ 0,   (25)

[ Q_{N−h}               ∗                    ∗          ∗        ]
[ ÂQ_{N−h} + B̂Y_{N−h}   γ_{N−h}P_{N−h+1}⁻¹   ∗          ∗        ]  ≥ 0.   (26)
[ Q^{1/2}Q_{N−h}        0                    γ_{N−h}I   ∗        ]
[ R^{1/2}Y_{N−h}        0                    0          γ_{N−h}I ]

In addition, (9c) for i = 0 is transformed into the LMI

[ Q_{N−h}                  ∗       ]
[ A_lQ_{N−h} + B_lY_{N−h}  Q_{N−h} ]  > 0,  l ∈ {1, ..., L},   (27)

while (3b) for i = 0 is guaranteed by

LMIs (6)–(7) with {Y, Q, Z, Γ} substituted by {Y_{N−h}, Q_{N−h}, Z_{N−h}, Γ_{N−h}}.   (28)

Thus, problem (24) can be solved by

min_{γ_{N−1}, Y_{N−1}, Q_{N−1}, Z_{N−1}, Γ_{N−1}}  γ_{N−1},   s.t. (25)–(28), where h = 1.   (29)

Notice that imposing (26)–(27) cannot guarantee x(k+1|k) ∈ ε_N. Hence, the above procedure for calculating {Q_{N−1}, F_{N−1}} is heuristic.

4.2. Calculating {Q_{N−h}, F_{N−h}}, h ∈ {2, ..., N − 1}

The procedure for calculating Q_{N−1}, F_{N−1} can be generalized. Considering an x(k) = x_{N−h} ∉ ε_{N−h+1}, we select the control moves as (20), where F_{N−h+1}, F_{N−h+2}, ..., F_N have been obtained earlier. By induction, if x(k+j|k) ∈ ε_{N−h+j} for all j ∈ {1, ..., h}, then according to the procedure for calculating Q_{N−h+j} and F_{N−h+j}, it follows that Σ_{i=1}^∞ [‖x̂(k+i|k)‖²_Q + ‖u(k+i|k)‖²_R] ≤ ‖x̂(k+1|k)‖²_{P_{N−h+1}} and J_{n,∞}(k) ≤ J̄_{n,∞}(N−h, k) = ‖x(k)‖²_Q + ‖u(k)‖²_R + ‖x̂(k+1|k)‖²_{P_{N−h+1}}, where P_{N−h+1} = γ_{N−h+1}Q_{N−h+1}⁻¹. Let

γ_{N−h} − x(k)ᵀP(k)x(k) ≥ 0,
Q + F_{N−h}ᵀRF_{N−h} + (Â + B̂F_{N−h})ᵀP_{N−h+1}(Â + B̂F_{N−h}) ≤ P(k);   (30)

then by applying (20) it follows that J̄_{n,∞}(N−h, k) ≤ γ_{N−h}. Let us solve, instead of (9), the following optimization problem:

min_{u(k)=F_{N−h}x(k), P(k), γ_{N−h}}  γ_{N−h},   s.t. (3b), (3c), (9c) and (30), where i = 0.   (31)

By defining Q_{N−h} = γ_{N−h}P(k)⁻¹ and F_{N−h} = Y_{N−h}Q_{N−h}⁻¹, problem (31) can be solved by

min_{γ_{N−h}, Y_{N−h}, Q_{N−h}, Z_{N−h}, Γ_{N−h}}  γ_{N−h},   s.t. (25)–(28).   (32)

Since x(k+j|k) ∈ ε_{N−h+j} for all j ∈ {1, ..., h} cannot be guaranteed, the above procedure for calculating {Q_{N−h}, F_{N−h}} is heuristic.

Algorithm 2. Stage 1: Off-line, generate states x_i, i ∈ {1, ..., N}. Substitute x(k) in (5) by x_N and solve (13) to obtain the matrices {Q_N, Y_N}, the ellipsoid ε_N and the feedback gain F_N = Y_NQ_N⁻¹. For x_{N−h}, h ∈ {1, ..., N − 1}, substitute x(k) in (25) by x_{N−h} and solve (32) to obtain the matrices {Q_{N−h}, Y_{N−h}}, the ellipsoid ε_{N−h} and the feedback gain F_{N−h} = Y_{N−h}Q_{N−h}⁻¹. Notice that x_i should be chosen such that ε_{i+1} ⊂ ε_i, ∀i ≠ N. For each i ≠ N, check whether (14) is satisfied.
Stage 2: See Stage 2 of Algorithm 1, with the notation accommodated accordingly.

Remark 1. In Cychowski et al. (2004) (adopting the "worst-case" performance cost), the solution in the form of (20) is utilized by explicitly imposing

x(k) ∈ ε_{N−h}, x(k) ∉ ε_{N−h+1} ⇒ x(k + i|k) ∈ ε_{N−h+i},  ∀i ∈ {0, ..., h},   (33)

which means that the state in a larger ellipsoid will be driven into the neighboring smaller ellipsoid in one step. This idea has also been utilized in Angeli et al. (2002), in another context. The adverse effect of imposing (33) is that the number of ellipsoids tends to become very large in order to attain feasibility. Without imposing (33), (32) is much more easily feasible, and N of Algorithm 2 can be dramatically reduced. There are further underlying reasons. Since we are dealing with uncertain
systems and ellipsoidal domains of attraction, imposing (33) is very conservative for guaranteeing (21). Hence, by imposing (33), it is nearly impossible to guarantee the following (which is the best case):

x(k) ∈ ε_{N−h}, x(k) ∉ ε_{N−h+1} ⇒ x(k + i) ∈ ε_{N−h+i}, x(k + i) ∉ ε_{N−h+i+1}, ∀i ∈ {1, ..., h − 1},  x(k + h) ∈ ε_N.   (34)

It is more likely that (34) can be achieved by properly constructing a smaller number of ellipsoids.

Remark 2. According to (20) and (15), at time k, if x(k) ∈ ε_i and x(k) ∉ ε_{i+1}, then the on-line receding-horizon predictive control law sequence is (implicitly)

u(k) = F(k)x(k);  u(k + j|k) = F_{i+j}x(k + j|k), ∀j ∈ {1, ..., N − i − 1};  u(k + j|k) = F_Nx(k + j|k), ∀j ≥ N − i,

where only u(k) = F(k)x(k) is applied. This represents a feedback MPC with control horizon M = N − i (Mayne et al., 2000). At time k + 1, a new receding-horizon control law sequence is implicitly generated, perhaps with a different control horizon. Hence, Algorithm 2 can be regarded as a varying-horizon off-line feedback RMPC. As seen from (3), (9) or (19), both the technique in Wan and Kothare (2003b) and Algorithm 1 only present feedback RMPC with control horizon M = 0.

Remark 3. In Algorithms 1 and 2, it is better to satisfy (14) by appropriately spacing the x_i. With F_{i+1} known, (14) can be transformed into an LMI and incorporated into (13) or (32) for calculating {ε_i, F_i}. In this way, it is much easier to satisfy (14); however, some optimality may be lost.

Theorem 2. Given an initial state x(0) ∈ ε₁, Algorithm 2 asymptotically stabilizes the closed-loop system. Moreover, if (14) is satisfied for all i ≠ N, then the control law (15) in Algorithm 2 is a continuous function of the system state x.

Proof. The proof is based on the same rationale as that of Theorem 1. In Algorithm 2, if γ_{N−h}P_{N−h+1}⁻¹ in (26) is replaced by Q_{N−h} for all h ∈ {1, ..., N − 1}, then Algorithm 1 is retrieved. Hence, the only difference between Algorithm 1 and Algorithm 2 lies in (11) and (26). We do not use (11) and (26) for proving stability of the algorithms.

The complexity of solving the LMI optimization problem (8), (13) or (32) is polynomial-time; for the fastest interior-point algorithms it is proportional to K³r, where K is the number of scalar decision variables and r the number of LMI rows (Gahinet, Nemirovski, Laub, & Chilali, 1995). The parameter K is given by 1 + ½(n² + n) + mn + ½(m² + m) + ½(q² + q) (for (8), (13) and (32)), while r is (4n + m + q)L + 2n + 2m + q + 1 (for (8)) or (3n + q)L + 5n + 3m + q + 1 (for (13) and (32)). For the on-line computation, a complexity analysis can be found in Section 3.3 of Wan and Kothare (2003b).

5. Numerical example
Consider the two-state uncertain system

x(k+1) = A(k)x(k) + [0, 1]ᵀu(k),

where the diagonal entries of A(k) are 1 − β and the (2,1) entry of A(k) is an uncertain parameter K(k) ∈ [0.5, 2.5]. The constraint is |u| ≤ 2. Choose the nominal model Â by replacing K(k) with 1.5, and B̂ = [0, 1]ᵀ. Take Q = I and R = 1. The true state is generated by K(k) = 1.5 + sin(k). Simulate "Algorithm 2" of Wan and Kothare (2003b) (when its equation (13) is not satisfied, choose λ_i = 1), Algorithm 1 and Algorithm 2. For i ∈ {1, ..., N}, choose x_{N−i+1} = [0.6 + d(i − 1), 0]ᵀ, where d represents the spacing of the ellipsoids. Denote x₁^{(1),max} as the maximum value such that, when x₁ = [x₁^{(1),max}, 0]ᵀ, the corresponding optimization problem remains feasible. By varying d, we find that: (i) for β = 0, Algorithm 1 is much more easily feasible than "Algorithm 2" of Wan and Kothare (2003b); (ii) for β = ±0.1, Algorithm 2 gives a smaller cost value than Algorithm 1; (iii) for β = 0, either Algorithm 2 is more easily feasible, or it gives a smaller cost value than Algorithm 1. Table 1 lists the simulation results for four typical cases: (A) β = −0.1, d = 0.1, N = 16, x(0) = [2.1, 0]ᵀ; (B) β = 0, d = 0.02, N = 76,
Table 1
The simulation results by "Algorithm 2" of Wan and Kothare (2003b) (WK2), Algorithm 1 (A1) and Algorithm 2 (A2) under the four typical cases

          x₁^{(1),max}                 Set of i for which (14) is not satisfied
Case      WK2      A1       A2        WK2        A1               A2
(A)       2.1      2.1      2.1       {1}        {2,1}            {5,4,3,2,1}
(B)       59.28    74.66    72.46     {}         {68,67,64}       {75,71,64,61,59}
(C)       55.6     70.6     90.6      {11,10}    {11,10}          {11,10,9,8}
(D)       4.4      4.4      4.4       {}         {38}             {38,35}

          J_{true,∞}
Case      WK2        A1         A2          Control horizon M for Algorithm 2, for k = 0, 1, 2, 3, ...
(A)       60.574     57.932     54.99       M = 15, 11, 10, 6, 0, 0, 0, ...
(B)       37.172     34.687     34.485      M = 75, 47, 35, 0, 0, 0, ...
(C)       64407677   61469235   64801789    M = 11(7), 10(10), 9(15), 8(14), 7(11), 6(7), 5(6), 4(5), 3(14), 2(11), 1(15), 0, 0, 0, ...
(D)       575        552        542         M = 38, 30, 29, 26, 23, 19, 16, 11, 2, 0, 0, 0, ...

a(b) indicates that M = a is repeated b times.
x(0) = [2.1, 0]ᵀ; (C) β = 0, d = 5, N = 12, x(0) = [55.6, 0]ᵀ; and (D) β = 0.1, d = 0.1, N = 39, x(0) = [4.4, 0]ᵀ. For simplicity, we have spaced the x_i equally. Unequal spacing can give a lower cost value J_{true,∞}, result in a larger x₁^{(1),max}, and render (14) satisfied for all i ≠ N, especially for Algorithm 2.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC, Grant No. 60504013). The authors would like to thank the reviewers for many pertinent comments and suggestions.

References

Angeli, D., Casavola, A., & Mosca, E. (2002). Ellipsoidal low-demanding MPC schemes for uncertain polytopic discrete-time systems. Proceedings of the 41st IEEE conference on decision and control, Las Vegas, NV, USA (pp. 2935–2940).
Cychowski, M. T., Ding, B., Tang, H., & O'Mahony, T. (2004). A new approach to off-line constrained robust model predictive control. Proceedings of the 2004 UK control conference (p. 146).
Gahinet, P., Nemirovski, A., Laub, A. J., & Chilali, M. (1995). LMI control toolbox for use with MATLAB, user's guide. Natick, MA, USA: The MathWorks Inc.
Kothare, M. V., Balakrishnan, V., & Morari, M. (1996). Robust constrained model predictive control using linear matrix inequalities. Automatica, 32, 1361–1379.
Kouvaritakis, B., Rossiter, J. A., & Schuurmans, J. (2000). Efficient robust predictive control. IEEE Transactions on Automatic Control, 45, 1545–1549.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., & Scokaert, P. O. M. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36, 789–814.
Wan, Z., & Kothare, M. V. (2003a). Efficient robust constrained model predictive control with a time varying terminal constraint set. Systems and Control Letters, 48, 375–383.
Wan, Z., & Kothare, M. V. (2003b). An efficient off-line formulation of robust model predictive control using linear matrix inequalities. Automatica, 39, 837–846.