Min–max optimal control of linear systems with uncertainty and terminal state constraints

Automatica 49 (2013) 1809–1815


Brief paper

Min–max optimal control of linear systems with uncertainty and terminal state constraints✩

Changzhi Wu^a,1, Kok Lay Teo^b, Soonyi Wu^c

a SITE, University of Ballarat, Victoria, Australia
b Department of Mathematics and Statistics, Curtin University, WA, Australia
c National Center for Theoretical Sciences, National Cheng Kung University, Tainan, Taiwan

Article history: received 15 April 2012; received in revised form 20 December 2012; accepted 24 January 2013; available online 19 March 2013.

Keywords: min–max optimal control; terminal state constraints; semi-definite programming

Abstract. In this paper, a class of min–max optimal control problems with continuous dynamical systems and quadratic terminal constraints is studied. The main contribution is that the original terminal state constraint, in which the disturbance is involved, is transformed into an equivalent linear matrix inequality without disturbance under certain conditions. The original min–max optimal control problem is then solved via a sequence of semi-definite programming problems. An example is presented to illustrate the proposed method. © 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Optimal control theory has many successful applications across a wide range of disciplines, from engineering to biomedicine. In real-world applications, the information used to model a problem is usually noisy, incomplete, or even erroneous, since exact measurements are often not available. In this case, the nominal optimal solution is no longer optimal or, even worse, may become infeasible. Thus, the min–max criterion is often adopted to deal with the uncertainty. Many results are available in the literature for optimal control problems with uncertainty. See, for example, (Baillieul & Wong, 2009; Basar, 1991; Basar & Bernhard, 1991; Bemporad, Borrelli, & Morari, 2003; Bertsimas & Brown, 2007; Chernousko, 2005; Diehl & Bjornberg, 2004; Goulart, Kerrigan, & Alamo, 2009; Kim, 2004a,b; Kostyukova & Kostina, 2006; Mayne & Schroeder, 1997; Scokaert & Mayne, 1998; Vinter, 2005; Yoon, Ugrinovskii, & Petersen, 2005) and the references cited therein. For the case of a linear discrete-time system with quadratic cost and l2-norm bounded uncertainty, it

✩ Changzhi Wu was partially supported by NSFC 11001288, the Key Project of Chinese Ministry of Education under the grant 210179, CMEC KJ090802. The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Akira Kojima under the direction of Editor Ian R. Petersen. E-mail addresses: [email protected] (C. Wu), [email protected] (K.L. Teo), [email protected] (S. Wu). 1 Tel.: +61 3 53279814; fax: +61 3 53279077.

0005-1098/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.automatica.2013.02.052

is studied in Bertsimas and Brown (2007). If there is no constraint, then the problem is equivalent to a semi-definite programming (SDP) problem. If there are constraints, then it can be approximated by a sequence of second-order cone programs (SOCP) (Bertsimas & Brown, 2007). It is well known (see, for example, Boyd & Vandenberghe, 2004) that SDP and SOCP problems can be solved in polynomial time. Thus, these linear quadratic discrete-time optimal control problems with l2-norm bounded disturbances can be solved efficiently. However, if the uncertainty set is a polyhedral set, then the corresponding min–max optimal control problems are computationally demanding. This class of optimal control problems is formulated and studied in Scokaert and Mayne (1998). The results obtained in Scokaert and Mayne (1998) are extended to a more general case in Bemporad et al. (2003). In Diehl and Bjornberg (2004), this problem is solved by a duality approach, where the dynamic programming recursion is performed on the dual problem. For other results on problems governed by linear discrete-time systems, see, for example, Diehl and Bjornberg (2004), Mayne and Schroeder (1997) and Scokaert and Mayne (1998). Min–max optimal control problems governed by continuous-time linear systems are much more challenging. In Vinter (2005), necessary conditions in the form of a maximum principle are derived; the focus there is on theoretical analysis rather than on numerical algorithms. In Chernousko (2005), the cost is assumed to be linear with respect to the state and quadratic with respect to the control. In this case, the original min–max optimal control problem can be efficiently solved by applying the maximum principle. However, if constraints are added or the cost


is nonlinear, no effective numerical algorithms are available in the literature. In this paper, we focus on a class of min–max open-loop optimal control problems governed by continuous-time linear dynamical systems subject to a quadratic terminal state constraint. In Basar and Bernhard (1991), it is shown that the solution of such a min–max optimal control problem can be obtained by solving two different Riccati equations. However, this elegant result is obtained for the min–max optimal control problem without constraints on the state or control; in particular, no terminal constraint on the state is allowed. Thus, the result in Basar and Bernhard (1991) is not applicable to the min–max optimal control problem considered in this paper due to the presence of the terminal constraint. The parameterization method (i.e., approximating the original infinite-dimensional min–max optimal control problem by a finite-dimensional min–max optimization problem) is a feasible approach. In the existing direct parameterization methods (Teo, Goh, & Wong, 1991), both the feasible infinite-dimensional control set and the disturbance set are parameterized by finite-dimensional sets. However, the drawback is that there is no guarantee that the solution obtained is feasible for the original problem. Clearly, this is not satisfactory (Scokaert, Mayne, & Rawlings, 1999). Due to this deficiency, we propose an effective solution approach for this class of min–max optimal control problems. The novelty of our method is that feasibility of the terminal constraint is maintained during the iterations, which is not the case for the existing methods based on direct parameterization. A numerical example is used to illustrate the effectiveness of the proposed method.

2. Problem statement

Consider the continuous linear uncertain dynamical system defined on the time horizon [0, T] given below:

  ẋ(t) = A(t)x(t) + B(t)u(t) + C(t)w(t),  t ∈ [0, T],  x(0) = x0,   (1)

where x(t) ∈ Rⁿ is the state vector, u(t) ∈ Rᵐ is the control input, w(t) ∈ Rʳ is the disturbance, A(t) = [a_{i,j}(t)], B(t) = [b_{i,j}(t)], C(t) = [c_{i,j}(t)] are matrices with appropriate dimensions, and x0 is a given initial condition. The disturbance w(t) in (1) can be caused either by the external environment or by errors in measurement and implementation. Amongst many different kinds of disturbances, the class of L²-norm bounded disturbances is often considered (Bertsimas & Brown, 2007; Kostyukova & Kostina, 2006). In this paper, we assume that the disturbance w(t) belongs to the L²-norm bounded set Wρ given below:

  Wρ = {w ∈ L²([0, T], Rʳ): ‖w‖² ≤ ρ²},   (2)

where ‖w‖² = ∫₀ᵀ w⊤(t)w(t)dt and ρ > 0 is a given constant. Furthermore, the control u is also restricted to an L²-norm bounded set Uδ given below:

  Uδ = {u ∈ L²([0, T], Rᵐ): ‖u‖² ≤ δ²},   (3)

where ‖u‖² = ∫₀ᵀ u⊤(t)u(t)dt and δ > 0 is a given constant. The performance index is given as a quadratic function as follows:

  J(u, w) = ∫₀ᵀ [x⊤(t)Q(t)x(t) + u⊤(t)R(t)u(t)] dt,   (4)

where Q(t) = [q_{i,j}(t)] and R(t) = [r_{i,j}(t)], t ∈ [0, T], are matrices with appropriate dimensions. It is assumed that the following requirement on the terminal state is satisfied:

  ‖x(T) − x∗‖ ≤ γ,  ∀ w ∈ Wρ,   (5)

where x∗ is the desired terminal state and γ > 0 is a given constant. A u ∈ Uδ is called a feasible control if the constraint (5) is satisfied. We may now specify our problem formally: given the dynamical system (1) and the terminal constraint (5), find a control u ∈ Uδ such that the worst-case performance J(u, w) is minimized over Uδ, i.e., find a control u ∈ Uδ that solves the following min–max optimal control problem:

  min_{u∈Uδ} max_{w∈Wρ} J(u, w).   (6)

This problem is referred to as Problem (P). Clearly, without the disturbance w in (1), Problem (P) is easy to solve by existing optimal control methods. However, in the presence of disturbances, Problem (P) becomes much more complicated. In this paper, our main task is to develop a computational scheme for solving this min–max optimal control problem. To proceed further, we assume that the matrices A(t), B(t), C(t), Q(t) and R(t) are all continuous on [0, T], that is, each of their elements is a continuous function on [0, T]. Furthermore, Q(t) is positive semi-definite and R(t) is positive definite for all t ∈ [0, T].

3. Problem transformation and approximation

Let F(t, τ) be the transition matrix of (1). For each u and w, define

  T0(u) = F(T, 0)x0 + ∫₀ᵀ F(T, τ)B(τ)u(τ)dτ,

  T1(u)(t) = Q^{1/2}(t)F(t, 0)x0 + ∫₀ᵗ Q^{1/2}(t)F(t, τ)B(τ)u(τ)dτ,

  F0(w) = ∫₀ᵀ F(T, τ)C(τ)w(τ)dτ,

  F1(w)(t) = ∫₀ᵗ Q^{1/2}(t)F(t, τ)C(τ)w(τ)dτ.

When no confusion can occur, the same notation ⟨·, ·⟩ is used for the inner product in L² as well as in Rⁿ. The cost function (4) and the constraint (5) can be rewritten as

  J(u, w) = ⟨T1(u) + F1(w), T1(u) + F1(w)⟩ + ⟨R^{1/2}u, R^{1/2}u⟩,   (7)

and

  ⟨T0(u) + F0(w) − x∗, T0(u) + F0(w) − x∗⟩ ≤ γ²,  ∀ w ∈ Wρ.   (8)

We have the following theorem.

Theorem 1. (1) T1 is a bounded linear operator from L²([0, T], Rᵐ) to L²([0, T], Rⁿ). Furthermore, if un ∈ Uδ and un ⇀ u, then T1(un) → T1(u). (2) F1 is a bounded linear operator from L²([0, T], Rʳ) to L²([0, T], Rⁿ). Furthermore, if wn ∈ Wρ and wn ⇀ w, then F1(wn) → F1(w). Here, ⇀ and → denote convergence in the weak topology and the strong topology of L², respectively.

Proof. The proof is given in Appendix A. □
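For a time-invariant instance, the transition matrix and the operator T0 are straightforward to evaluate numerically. The sketch below uses illustrative double-integrator data of our own choosing (not from the paper) and checks, under those assumptions, that T0(u) coincides with the terminal state of (1) obtained by direct simulation when w ≡ 0:

```python
import numpy as np

# Hypothetical double integrator: A = [[0,1],[0,0]], B = [0,1]^T, x0 = 0,
# T = 1, u(t) = 1, w = 0.  Since A^2 = 0, F(t, tau) = I + A (t - tau).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])
x0 = np.zeros(2)
T, n = 1.0, 2000

def F(t, tau):
    # Transition matrix of the time-invariant system: expm(A (t - tau)).
    return np.eye(2) + A * (t - tau)

# T0(u) = F(T,0) x0 + \int_0^T F(T,tau) B u(tau) dtau, via the trapezoidal rule.
taus = np.linspace(0.0, T, n + 1)
vals = np.array([F(T, tau) @ B for tau in taus])          # u(tau) = 1
dt = T / n
T0 = F(T, 0.0) @ x0 + dt * ((vals[0] + vals[-1]) / 2 + vals[1:-1].sum(axis=0))

# Terminal state of (1) with w = 0, simulated by classical RK4.
x = x0.copy()
rhs = lambda x: A @ x + B                                  # u(t) = 1
h = T / n
for _ in range(n):
    k1 = rhs(x); k2 = rhs(x + h / 2 * k1)
    k3 = rhs(x + h / 2 * k2); k4 = rhs(x + h * k3)
    x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(T0, x)  # both approximately [0.5, 1.0]
```

For this toy system both routes give x(T) = [T²/2, T]⊤, so the variation-of-constants representation behind T0 and F0 can be trusted before any optimization is attempted.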



Note that both T1 and F1 are bounded operators; hence, it is easy to show that there exists a constant K > 0 such that 0 ≤ J(u, w) < K for all (u, w) ∈ Uδ × Wρ. For a given u ∈ Uδ, let {wn} ⊂ Wρ ⊂ L² be a maximizing sequence, meaning that J(u, wn) → sup_{w∈Wρ} J(u, w). Since the L² space is reflexive, Wρ, which is a ball of radius ρ, is weakly sequentially compact (Kurdila & Zabarankin, 2005). Thus, there exists a subsequence of {wn}, still denoted by {wn}, such that wn ⇀ w(u) ∈


Wρ. Thus, by Theorem 1, it follows that F1(wn) → F1(w(u)), and hence J(u, wn) → J(u, w(u)). Clearly, J(u, w(u)) = sup_{w∈Wρ} J(u, w). More precisely,

  J(u, w(u)) = max_{w∈Wρ} J(u, w).

Obviously, w(u) may not be unique, but all such maximizers share the same objective function value max_{w∈Wρ} J(u, w). For Problem (P), the following theorem is useful.

Theorem 2. Problem (P) has a unique solution u∗ ∈ Uδ such that u∗ satisfies (5) and

  J(u∗, w(u∗)) = min_{u∈Uδ} max_{w∈Wρ} J(u, w).   (9)

Proof. The proof is given in Appendix A. □

Note that the disturbances w ∈ Wρ are involved in the constraint (5), so no existing method can be applied to solve such a problem directly. A closer examination yields the following theorem.

Theorem 3. Define

  S = ∫₀ᵀ F(T, τ)C(τ)C⊤(τ)F⊤(T, τ)dτ   (10)

and let λmax(S) be the largest eigenvalue of S. Suppose that S is invertible. If Problem (P) has a feasible control, then

  λmax(S)ρ² ≤ γ².   (11)

Furthermore, the constraint (8) is equivalent to the existence of a ς ≥ 0 such that

  [ I                T0(u) − x∗     I    ]
  [ (T0(u) − x∗)⊤    γ² − ρ²ς       0    ]  ≽ 0.   (12)
  [ I                0              ςS⁻¹ ]

Proof. The proof is given in Appendix A. □

The matrix S is the controllability gramian of the pair (A(·), C(·)). Thus, S is invertible if and only if the pair (A(·), C(·)) is controllable. If the system (1) is time-invariant, then S is invertible if and only if [C, AC, . . . , Aⁿ⁻¹C] has full rank. In what follows, we assume that S is invertible. By Theorem 3, (8) and (12) are equivalent, so Problem (P) is equivalent to the problem defined by (6) and (12). Clearly, the problem defined by (6) and (12) is a convex infinite-dimensional optimization problem. Although the maximization with respect to w ∈ Wρ is carried out only in J(u, w), without involving the constraint (12), the problem defined by (6) and (12) is still too complicated to be solved analytically. It is inevitable to resort to numerical methods.

Suppose that {γi}∞_{i=1} and {ψi}∞_{i=1} are orthonormal bases (OB) of L²([0, T], Rᵐ) and L²([0, T], Rʳ), respectively. We approximate u and w by the truncated OB as u(t) = Γ_N(t)θ and w(t) = Ψ_N(t)ϑ, where N is the truncation number, Γ_N(t) = [γ1(t), γ2(t), . . . , γ_N(t)], Ψ_N(t) = [ψ1(t), ψ2(t), . . . , ψ_N(t)], θ = [θ1, θ2, . . . , θ_N]⊤ ∈ Rᴺ and ϑ = [ϑ1, ϑ2, . . . , ϑ_N]⊤ ∈ Rᴺ. Denote Θ_N = {θ ∈ Rᴺ: ‖θ‖ ≤ δ}, U_N = {Γ_N(t)θ: θ ∈ Θ_N}, Π_N = {ϑ ∈ Rᴺ: ‖ϑ‖ ≤ ρ} and W_N = {Ψ_N(t)ϑ: ϑ ∈ Π_N}. Then, the parameterized finite-dimensional optimization problem is defined as: find a control u ∈ Uδ ∩ U_N such that the cost function max_{w∈Wρ∩W_N} J(u, w) is minimized subject to the constraint (12). Let this problem be referred to as Problem (Pᴺ). Following a similar proof to that given for Theorem 3, we have the following theorem.

Theorem 4. Problem (Pᴺ) is equivalent to the following SDP problem:

  min_{θ∈Θ_N, t1, t2, ς1≥0, ς2≥0}  t1 + t2 + 2q_N⊤θ + μ0   (13)

subject to

  [ I              P_N^{1/2}θ ]
  [ θ⊤P_N^{1/2}    t2         ]  ≽ 0,   (14)

  [ t1 − ς1ρ²           −(θ⊤Q_N + r_N⊤) ]
  [ −(Q_N⊤θ + r_N)       ς1 I − R_N     ]  ≽ 0,   (15)

  [ I                        V x0 − x∗ + V_Nθ     I     ]
  [ (V x0 − x∗ + V_Nθ)⊤      γ² − ρ²ς2            0     ]  ≽ 0,   (16)
  [ I                        0                    ς2S⁻¹ ]

where the explicit expressions of P_N, Q_N, R_N, q_N, r_N, V, V_N and μ0 are given in Appendix B.

Proof. The proof is given in Appendix A. □

In light of Theorem 4, the solution of Problem (Pᴺ) is easily obtained by solving the SDP problem defined by (13)–(16). For the relation between Problem (P) and Problem (Pᴺ), we have the following theorem.

Theorem 5. Suppose that u∗ is the optimal solution of Problem (P) and θ∗_N ∈ Θ_N is the optimal solution of Problem (Pᴺ). Let u∗_N(t) = Γ_N(t)θ∗_N. Then, (1) u∗_N ⇀ u∗ as N → ∞; (2) lim_{N→∞} J(u∗_N) = J(u∗, w(u∗)).

Proof. The proof is given in Appendix A. □
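The truncated orthonormal-basis approximation underlying Theorem 5 can be illustrated numerically. The sketch below is our own check (not from the paper): it uses the normalized shifted Legendre basis, which the example in Section 4 also employs, verifies its orthonormality on [0, 1], and shows the L² projection error of a fixed test function (here exp, an assumption of this sketch) shrinking as the truncation number N grows:

```python
import numpy as np
from numpy.polynomial import legendre

# Normalized shifted Legendre basis on [0, 1]: gamma_i(t) = sqrt(2 i + 1) P_i(2 t - 1).
def gamma(i, t):
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legendre.legval(2 * t - 1, c)

# Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1].
xg, wg = legendre.leggauss(60)
t, w = (xg + 1) / 2, wg / 2

# Orthonormality: the Gram matrix of the first 6 basis functions is the identity.
G = np.array([[np.sum(w * gamma(i, t) * gamma(j, t)) for j in range(6)]
              for i in range(6)])

# L2 projection error of f(t) = exp(t) for truncation numbers N = 3 and N = 6.
f = np.exp(t)
def l2_error(N):
    fN = sum(np.sum(w * f * gamma(i, t)) * gamma(i, t) for i in range(N))
    return np.sqrt(np.sum(w * (f - fN) ** 2))

err3, err6 = l2_error(3), l2_error(6)
print(err3, err6)  # the error drops sharply as N grows
```

This rapid decay for smooth functions is consistent with the fast convergence in N reported for the example in Section 4.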



Theorem 5 shows that the min–max optimal control problem (P) is approximated by a sequence of finite-dimensional convex optimization problems (Pᴺ). An intuitive scheme to solve Problem (P) can thus be stated as: for a given tolerance ϵ > 0, solve Problem (Pᴺ) for increasing N until |J(u∗_{N+1}) − J(u∗_N)| ≤ ϵ.

Problem (P) with ρ = 0 is a standard optimal control problem without disturbance. Let it be referred to as Problem (P̄). Similarly, we can solve Problem (P̄) through solving a sequence of approximate optimal control problems, denoted Problems (P̄ᴺ), by restricting the feasible control u to U_N ∩ Uδ. We have the following obvious result.

Corollary 6. Problem (P̄ᴺ) is equivalent to the following SDP problem:

  min_{θ∈Θ_N, t≥0}  t + 2q_N⊤θ + μ0   (17)

subject to

  [ I              P_N^{1/2}θ ]
  [ θ⊤P_N^{1/2}    t          ]  ≽ 0,   (18)

  [ I                        V x0 + V_Nθ − x∗ ]
  [ (V x0 + V_Nθ − x∗)⊤      γ²               ]  ≽ 0.   (19)

Remark 1. During our computation, both u and w are approximated by truncated orthonormal bases. Suppose that u∗_N = Γ_N(t)θ∗, where θ∗ is the optimal solution of Problem (Pᴺ); then T0(u∗_N) = V x0 + V_Nθ∗. Since θ∗ satisfies the linear matrix inequality (16), u∗_N satisfies the linear matrix inequality (12). Thanks to Theorem 3, the terminal inequality constraint (5) holds for all w ∈ Wρ. Thus, u∗_N is a feasible solution. This feature is not shared by the available direct approximation method (Teo et al., 1991).


More specifically, if we directly approximate Uδ and Wρ by U_N and W_N, then the approximated problem can also be transformed into an SDP, which is different from that defined by (13)–(16). Let the solution obtained by this method be ū∗_N. Then, the terminal inequality constraint (5) is only satisfied for w ∈ W_N ⊂ Wρ, not for all w ∈ Wρ. Thus, the approximate solution ū∗_N may be infeasible. For our proposed approach, the approximations of u and w only affect the computation of the cost function value (4); the feasibility of the terminal constraint (5) is maintained for all w ∈ Wρ. This important feature clearly shows the novelty of the proposed approach.
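The feasibility certificate behind this remark is the linear matrix inequality (12), which for given data reduces to an eigenvalue test. The sketch below uses small hypothetical data of our own choosing (gramian S, radius ρ, tolerance γ, terminal error v = T0(u) − x∗ and multiplier ς are all assumptions, not from the paper) to check (12), its 2×2-block Schur reduction used in Appendix A, and the sampled terminal constraint they certify:

```python
import numpy as np

# Hypothetical data (our choice, not the paper's example).
S = np.array([[1.0 / 3.0, 0.5], [0.5, 1.0]])   # gramian, positive definite
rho, gamma = 0.1, 1.0
v = np.array([0.2, 0.1])                        # v = T0(u) - x*
sigma = 2.0                                     # candidate multiplier, sigma >= 0
n = v.size
Sinv = np.linalg.inv(S)

# Block matrix of (12): [[I, v, I], [v^T, g^2 - rho^2 s, 0], [I, 0, s S^-1]].
M = np.zeros((2 * n + 1, 2 * n + 1))
M[:n, :n] = np.eye(n)
M[:n, n] = v;  M[n, :n] = v
M[:n, n + 1:] = np.eye(n);  M[n + 1:, :n] = np.eye(n)
M[n, n] = gamma ** 2 - rho ** 2 * sigma
M[n + 1:, n + 1:] = sigma * Sinv

# Equivalent 2x2-block form (Schur complement of the top-left identity block).
M2 = np.zeros((n + 1, n + 1))
M2[0, 0] = gamma ** 2 - v @ v - sigma * rho ** 2
M2[0, 1:] = -v;  M2[1:, 0] = -v
M2[1:, 1:] = sigma * Sinv - np.eye(n)

print(np.linalg.eigvalsh(M).min(), np.linalg.eigvalsh(M2).min())  # nonnegative here

# What the certificate guarantees: ||v + h|| <= gamma whenever h^T S^-1 h <= rho^2.
lam, W = np.linalg.eigh(S)
Shalf = W @ np.diag(np.sqrt(lam)) @ W.T
rng = np.random.default_rng(0)
worst = max(np.linalg.norm(v + rho * Shalf @ (z / np.linalg.norm(z)))
            for z in rng.normal(size=(1000, n)))
print(worst, "<=", gamma)
```

The sampled worst case stays below γ exactly because the LMI is feasible for this ς, which is the mechanism by which u∗_N remains feasible for every w ∈ Wρ.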

Table 1
The optimal cost of Problem (P̄ᴺ) and the optimal cost of Problem (Pᴺ).

N     Optimal cost of Problem (P̄ᴺ)    Optimal cost of Problem (Pᴺ)
5     2.157374508                      2.283417844
6     2.157349390                      2.283381916
7     2.157334886                      2.283363764
8     2.157331918                      2.283360165
9     2.157331543                      2.283359716
10    2.157331514                      2.283359680
11    2.157331514                      2.283359680


4. Illustrative example

In this section, we consider worst-case control of a DC motor. The mathematical model of a DC motor is expressed as two linear differential equations (Gao, Kostyukova, & Chong, 2009):

  U = Ra i + La di/dt + Ce ω,
  Cm i = Jr dω/dt + μω + m,   (20)

where U is the voltage applied to the rotor circuit, i is the current, ω is the rotation speed, m is the resistant torque reduced to the motor shaft, Ra and La are the resistance and the inductance of the circuit, Jr is the inertia moment, Ce and Cm are the constants of the motor, and μ is the coefficient of viscous friction. Let x(t) = [ω(t), i(t)]⊤, u(t) = U(t), w(t) = m(t). Then, (20) can be rewritten as

  ẋ(t) = Ax(t) + Bu(t) + Cw(t),

where

  A = [ −μ/Jr     Cm/Jr  ],    B = [ 0    ],    C = [ −1/Jr ].
      [ −Ce/La    −Ra/La ]         [ 1/La ]         [ 0     ]
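With the nominal parameter values used in this section, the system matrices and the controllability check invoked later are immediate to compute; a minimal sketch (the numerical values printed are our own evaluation of the formulas above):

```python
import numpy as np

# Nominal DC-motor parameters of this section.
mu, Jr, Cm, La, Ce, Ra = 0.01, 0.028, 0.58, 0.16, 0.58, 3.0

A = np.array([[-mu / Jr, Cm / Jr],
              [-Ce / La, -Ra / La]])
B = np.array([0.0, 1.0 / La])
C = np.array([-1.0 / Jr, 0.0])

# Controllability of the disturbance pair (A, C): [C, AC] must have full rank
# for the gramian S in (10) to be invertible.
ctrb = np.column_stack([C, A @ C])
print(A)                               # approx [[-0.357, 20.714], [-3.625, -18.75]]
print(np.linalg.matrix_rank(ctrb))     # 2
```

Both eigenvalues of A turn out to have negative real part, so the open-loop motor dynamics are stable and the gramian integrand decays over the horizon.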

We suppose that the initial condition is x(0) = [0, 0]⊤. The goal of the optimal control is to drive the system into a neighborhood of the desired state x∗ for all disturbances w ∈ Wρ using minimum energy. In this case, Q = 0 and R = 1 in (4). The nominal parameters of the DC motor are μ = 0.01, Jr = 0.028, Cm = 0.58, La = 0.16, Ce = 0.58 and Ra = 3. Let T = 1, δ = 5, ρ = 0.01, γ = 0.2, x∗ = [3, 1]⊤. The two orthonormal bases {γi}∞_{i=1} and {ψi}∞_{i=1} are taken as the normalized shifted Legendre polynomials, i.e.,

  γ_i(t) = ψ_i(t) = √(2i + 1) P_i(2t − 1),  i = 0, 1, 2, . . . ,

where P_i(t) is the i-th order Legendre polynomial. During our simulation, we use SeDuMi (Sturm, 1999) and YALMIP (Löfberg, 2004) to solve the SDP problem defined by (13)–(16) and the SDP problem defined by (17)–(19). Note that the system (20) is time-invariant. By direct computation, the matrix [C, AC] has full rank; thus, S in (10) is invertible. Computing S by Simpson's rule, we obtain

  S = [ 176.8542   −27.7387 ]
      [ −27.7387     5.3628 ]

Moreover, λmax(S) = 181.2293, which indicates that γ must be taken considerably larger than ρ; indeed, with ρ = 0.01 and γ = 0.2 we have λmax(S)ρ² = 0.0181 ≤ 0.04 = γ², so the necessary condition (11) holds. We set the tolerance ϵ to 10⁻⁸ and start solving Problem (Pᴺ) from N = 5. For N = 10, we have |J(u∗_{N+1}) − J(u∗_N)| ≤ 10⁻⁸.

So we stop the computation. Meanwhile, we have |J̄(u∗_{N+1}) − J̄(u∗_N)| ≤ 10⁻⁸, where J̄(u∗_N) is the optimal cost function value of Problem (P̄ᴺ). The cost function values obtained are given in

Fig. 1. (a) The nominal state x∗_11. (b) The solution u∗_11 of Problem (P¹¹).

Table 1. Table 1 shows that the convergence is very fast, both with and without disturbance. Fig. 1 depicts the nominal state [ω(t), i(t)]⊤ and the worst-case optimal control u∗_11. The terminal constraint (5) holds for every w ∈ Wρ, as ensured by Theorem 3. For comparison, we apply the piecewise-constant approximation method of Teo et al. (1991) to the case without uncertainty. With N = 100, the optimal cost function value obtained is 2.15785; to achieve an optimal cost less than 2.1574, N must exceed 200. This test shows that the truncated orthonormal basis achieves faster convergence than the piecewise-constant approximation method of Teo et al. (1991).

5. Conclusion

In this paper, we have developed a computational method for a class of min–max open-loop optimal control problems governed by a continuous-time linear dynamical system subject to a quadratic terminal state constraint, where the cost is of quadratic form. We first prove a fundamental result, Theorem 3, that the terminal


constraint with disturbance is transformed equivalently into a linear matrix inequality without disturbance. Based on Theorem 3, the original min–max optimal control problem with infinitely many constraints is transformed into an equivalent min–max optimal control problem with only one linear matrix inequality constraint. Then, an effective approximation scheme is devised to solve this transformed min–max optimal control problem. The significance of our method is that the feasibility of the terminal constraint is maintained during the iterations, which is not the case for the existing methods based on direct parameterization. The limitation of the proposed approach is that it is developed under the assumption that the matrix S is invertible. If S is singular, our method does not work. The extension of our approach to the singular case remains open and is an interesting research topic.

Appendix A

Proof of Theorem 1. It is easy to show that T1 is a bounded linear operator from L²([0, T], Rᵐ) to L²([0, T], Rⁿ). Suppose that un ∈ Uδ and un ⇀ u. Define

  F̄(t, τ) = { F(t, τ),   if τ ≤ t,
             { 0_{n×n},   otherwise.

Clearly, F̄(t, ·) is a continuous function except at τ = t. Then, for each given t ∈ [0, T], we have

  lim_{n→∞} (T1(un) − T1(u))(t) = lim_{n→∞} ∫₀ᵗ Q^{1/2}(t)F(t, τ)B(τ)(un(τ) − u(τ))dτ = 0,

since un ⇀ u and Q^{1/2}(t)F̄(t, ·)B(·) belongs to L². On the other hand, since un ∈ Uδ and Q^{1/2}(t)F(t, τ)B(τ) is continuous with respect to (t, τ) ∈ [0, T] × [0, t], we can easily show that there exists a constant ϰ such that

  | [∫₀ᵗ Q^{1/2}(t)F(t, τ)B(τ)un(τ)dτ]_i | ≤ ϰ

for all t ∈ [0, T], where [·]_i denotes the i-th element of ∫₀ᵗ Q^{1/2}(t)F(t, τ)B(τ)un(τ)dτ. Now, by applying the Lebesgue dominated convergence theorem (Teo et al., 1991), it follows that T1(un) → T1(u). Similarly, we can show that F1 is a bounded linear operator from L²([0, T], Rʳ) to L²([0, T], Rⁿ) and that F1(wn) → F1(w) whenever wn ∈ Wρ and wn ⇀ w. □

Proof of Theorem 2. Suppose that un ⇀ u. Since R(t) is positive definite, ∫₀ᵀ u⊤(t)R(t)u(t)dt is strictly convex. Hence,

  ∫₀ᵀ un⊤(t)R(t)un(t)dt ≥ ∫₀ᵀ u⊤(t)R(t)u(t)dt + 2∫₀ᵀ u⊤(t)R(t)(un(t) − u(t))dt.   (21)

Thus,

  ∫₀ᵀ u⊤(t)R(t)u(t)dt ≤ lim inf_{n→∞} ∫₀ᵀ un⊤(t)R(t)un(t)dt.

Note that, for any un,

  ⟨T1(un) + F1(w(un)), T1(un) + F1(w(un))⟩ = max_{w∈Wρ} ⟨T1(un) + F1(w), T1(un) + F1(w)⟩   (22)
    ≥ ⟨T1(un) + F1(w(u)), T1(un) + F1(w(u))⟩.   (23)

By Theorem 1, T1(un) → T1(u) when un ⇀ u. Taking the limit inferior on both sides of (23) gives

  lim inf_{n→∞} ⟨T1(un) + F1(w(un)), T1(un) + F1(w(un))⟩ ≥ ⟨T1(u) + F1(w(u)), T1(u) + F1(w(u))⟩.   (24)

By (21) and (24), J(u, w(u)) is weakly lower semicontinuous, i.e.,

  J(u, w(u)) ≤ lim inf_{n→∞} J(un, w(un))   as un ⇀ u.

In addition, we can show that max_{w∈Wρ}⟨T1(u) + F1(w), T1(u) + F1(w)⟩ is convex in u. Furthermore, the constraint (26) is convex in u and ∫₀ᵀ u⊤(t)R(t)u(t)dt is strictly convex in u. Thus, Problem (P) is strictly convex, and the conclusion of the theorem follows readily. □

Proof of Theorem 3. To prove Theorem 3, we first show that

  F0(Wρ) = H,   (25)

where F0(Wρ) = {F0(w): w ∈ Wρ} and H = {h ∈ Rⁿ: h⊤S⁻¹h ≤ ρ²}. For notational simplicity, let G(τ) = F(T, τ)C(τ) and write S^{−1/2}G(τ) = [g1(τ), . . . , gn(τ)]⊤ row-wise, so that gi ∈ L²([0, T], Rʳ), i = 1, . . . , n. The linear map w ↦ ∫₀ᵀ S^{−1/2}G(τ)w(τ)dτ has squared operator norm

  λmax( ∫₀ᵀ S^{−1/2}F(T, τ)C(τ)C⊤(τ)F⊤(T, τ)S^{−1/2}dτ ) = λmax(S^{−1/2}SS^{−1/2}) = 1.

Hence, for any w ∈ Wρ,

  (F0(w))⊤S⁻¹F0(w) = Σ_{i=1}^{n} ⟨gi, w⟩² ≤ ‖w‖² ≤ ρ².

Thus, F0(Wρ) ⊂ H. On the other hand, for any h ∈ H, define w(t) = G⊤(t)S⁻¹h. Then,

  ∫₀ᵀ w⊤(t)w(t)dt = ∫₀ᵀ h⊤S⁻¹G(t)G⊤(t)S⁻¹h dt = h⊤S⁻¹h ≤ ρ²,

and

  F0(w) = ∫₀ᵀ G(t)w(t)dt = ∫₀ᵀ G(t)G⊤(t)S⁻¹h dt = h.

Thus, H ⊂ F0(Wρ). Therefore, F0(Wρ) = H. In light of (25), the constraint (8) is equivalent to the constraint

  ⟨T0(u) + h − x∗, T0(u) + h − x∗⟩ ≤ γ²,  ∀ h ∈ H.   (26)

For notational simplicity, let Vu = T0(u) − x∗. We homogenize the inequality constraint (26) as follows:

  h⊤h + 2tVu⊤h + t²Vu⊤Vu ≤ γ²t²,  ∀ (h, t): h⊤S⁻¹h ≤ ρ²t².

The above inequality can be rewritten as

  [ t ]⊤ [ γ² − Vu⊤Vu   −Vu⊤ ] [ t ]
  [ h ]  [ −Vu          −I   ] [ h ]  ≥ 0   for all (t, h) such that

  [ t ]⊤ [ ρ²    0    ] [ t ]
  [ h ]  [ 0    −S⁻¹  ] [ h ]  ≥ 0.   (27)


By the S-lemma (Boyd & Vandenberghe, 2004), (27) holds if and only if there exists a ς ≥ 0 such that

  [ γ² − Vu⊤Vu − ςρ²    −Vu⊤      ]
  [ −Vu                  ςS⁻¹ − I ]  ≽ 0,   (28)

which can be equivalently rewritten as (12) by the Schur complement lemma. The inequality (28) implies that ςS⁻¹ − I ≽ 0 and γ² − Vu⊤Vu − ςρ² ≥ 0, so that ς ≥ λmax(S) and ςρ² ≤ γ². Thus, λmax(S)ρ² ≤ γ². This completes the proof. □

Proof of Theorem 4. Clearly, for u^N = Γ_N(t)θ and w^N = Ψ_N(t)ϑ,

  J(u^N, w^N) = θ⊤P_Nθ + 2θ⊤Q_Nϑ + ϑ⊤R_Nϑ + 2q_N⊤θ + 2r_N⊤ϑ + μ0.

Then, min_{θ∈Θ_N} max_{ϑ∈Π_N} J(u^N, w^N) can be equivalently rewritten as

  min_{θ∈Θ_N, t1, t2}  t1 + t2 + 2q_N⊤θ + μ0   (29)

  s.t.  θ⊤P_Nθ ≤ t2,   (30)

        ϑ⊤R_Nϑ + 2(θ⊤Q_N + r_N⊤)ϑ ≤ t1,  ∀ ϑ ∈ Π_N.   (31)

By the Schur complement lemma, (30) is equivalent to (14). Using a similar argument as in the proof of Theorem 3, we can show that (31) is equivalent to (15). The results of Theorem 4 then follow readily. □

Proof of Theorem 5. Suppose that u∗ is the optimal solution of Problem (P) and θ∗_N ∈ Θ_N is the optimal solution of Problem (Pᴺ). Let u∗_N(t) = Γ_N(t)θ∗_N, and let w(u∗) be such that

  J(u∗, w(u∗)) = max_{w∈Wρ} J(u∗, w).

Note that the maximizer w(u∗) may not be unique, although all maximizers yield the same value of max_{w∈Wρ} J(u∗, w). Let w∗ be one of these maximizers. Similarly, let w∗_N be one of the maximizers w(u∗_N). Without loss of generality, we suppose that u∗_N ⇀ ū and w∗_N ⇀ w̄. Let u^{N,∗} = Γ_N(t)θ^{N,∗} and w^{N,∗} = Ψ_N(t)ϑ^{N,∗} denote the projections of u∗ onto U_N and of w∗ onto W_N, respectively. Then, u^{N,∗} → u∗ and w^{N,∗} → w∗. Thus,

  J(u∗_N, w∗_N) = min_{u∈U_N∩Uδ} J(u, w∗_N) ≤ J(u^{N,∗}, w∗_N) → J(u∗, w̄) ≤ J(u∗, w∗).

On the other hand,

  J(u∗, w∗) = min_{u∈Uδ} J(u, w∗) ≤ J(ū, w∗) ≤ lim inf_{N→∞} J(u∗_N, w^{N,∗})
    ≤ lim inf_{N→∞} max_{w∈W_N∩Wρ} J(u∗_N, w) = lim inf_{N→∞} J(u∗_N, w∗_N).

Therefore, lim_{N→∞} Jᴺ(θ∗_N) = lim_{N→∞} J(u∗_N, w∗_N) = J(u∗, w(u∗)).

We now show that u∗_N ⇀ u∗ by contradiction. Suppose that it is not true. Then, there exist a subsequence {u∗_{Nk}} of {u∗_N} and a subsequence {w∗_{Nk}} of {w(u∗_{Nk})} such that u∗_{Nk} ⇀ ū ≠ u∗ and w∗_{Nk} ⇀ w̄. Let w_ū be one of the maximizers w(ū). Then, by virtue of the uniqueness of the solution of Problem (P) (Theorem 2), it is clear that

  J(u∗, w∗) < J(ū, w_ū).   (32)

Let w^{Nk}(t) = Ψ_{Nk}(t)ϑ^{Nk} denote the projection of w_ū onto W_{Nk}; then w^{Nk} → w_ū. Since J(u, w(u)) is weakly sequentially lower semi-continuous (Kurdila & Zabarankin, 2005), we obtain

  J(ū, w_ū) ≤ lim inf_{Nk→∞} J(u∗_{Nk}, w^{Nk}) ≤ lim inf_{Nk→∞} max_{w∈W_{Nk}∩Wρ} J(u∗_{Nk}, w)
    = lim_{Nk→∞} J(u∗_{Nk}, w∗_{Nk}) = J(u∗, w(u∗)),

which contradicts (32). Thus, u∗_N ⇀ u∗. □

Appendix B

  P_N = ∫₀ᵀ (∫₀ᵗ F_{B,Γ}(t, τ)dτ)⊤ Q(t) (∫₀ᵗ F_{B,Γ}(t, τ)dτ) dt + ∫₀ᵀ Γ_N⊤(t)R(t)Γ_N(t)dt,

  Q_N = ∫₀ᵀ (∫₀ᵗ F_{B,Γ}(t, τ)dτ)⊤ Q(t) (∫₀ᵗ F_{C,Ψ}(t, τ)dτ) dt,

  R_N = ∫₀ᵀ (∫₀ᵗ F_{C,Ψ}(t, τ)dτ)⊤ Q(t) (∫₀ᵗ F_{C,Ψ}(t, τ)dτ) dt,

  q_N = ∫₀ᵀ (∫₀ᵗ Γ_N⊤(τ)B⊤(τ)F⊤(t, τ)dτ) Q(t)F(t, 0)x0 dt,

  r_N = ∫₀ᵀ (∫₀ᵗ Ψ_N⊤(τ)C⊤(τ)F⊤(t, τ)dτ) Q(t)F(t, 0)x0 dt,

  V = F(T, 0),    V_N = ∫₀ᵀ F(T, t)B(t)Γ_N(t)dt,

  μ0 = x0⊤ (∫₀ᵀ F⊤(t, 0)Q(t)F(t, 0)dt) x0,

where F_{B,Γ}(t, τ) = F(t, τ)B(τ)Γ_N(τ) and F_{C,Ψ}(t, τ) = F(t, τ)C(τ)Ψ_N(τ).
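The two identities used in the proof of Theorem 3 — that w(t) = G⊤(t)S⁻¹h has energy h⊤S⁻¹h and steers the disturbance channel exactly to h — are easy to confirm numerically. A small self-contained sketch, on double-integrator data of our own choosing (not the paper's example):

```python
import numpy as np

# Illustrative data: A = [[0,1],[0,0]], C = [0,1]^T, T = 1,
# so G(tau) = F(T, tau) C = [1 - tau, 1]^T and S = \int_0^T G G^T dtau.
T, n = 1.0, 2000                       # n even, for Simpson's rule
taus = np.linspace(0.0, T, n + 1)
G = np.vstack([1.0 - taus, np.ones_like(taus)])   # shape (2, n+1)

wts = np.ones(n + 1); wts[1:-1:2] = 4.0; wts[2:-1:2] = 2.0
wts *= T / (3 * n)                     # composite Simpson weights
S = (G * wts) @ G.T                    # gramian, exactly [[1/3, 1/2], [1/2, 1]] here
Sinv = np.linalg.inv(S)

h = np.array([0.1, 0.2])               # target; h^T S^{-1} h = 0.04, so rho = 0.2 suffices
w = G.T @ (Sinv @ h)                   # w(tau) = G(tau)^T S^{-1} h, sampled on the grid

reached = G @ (wts * w)                # \int_0^T G(tau) w(tau) dtau
energy = np.sum(wts * w ** 2)          # \int_0^T w(tau)^2 dtau
print(reached, energy)                 # reached ≈ [0.1, 0.2], energy ≈ 0.04
```

The construction thus attains any h with h⊤S⁻¹h ≤ ρ² at exactly the energy h⊤S⁻¹h, which is the inclusion H ⊂ F0(Wρ) in (25).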

References

Baillieul, J., & Wong, W. S. (2009). The standard parts problem and the complexity of control communication. In Proceedings of the 48th IEEE CDC.
Basar, T. (1991). Optimum performance levels for minimax filters, predictors and smoothers. Systems & Control Letters, 16, 309–317.
Basar, T., & Bernhard, P. (1991). H∞-optimal control and related minimax design problems. New Jersey: Birkhauser.
Bemporad, A., Borrelli, F., & Morari, M. (2003). Min–max control of constrained uncertain discrete-time linear systems. IEEE Transactions on Automatic Control, 48, 1600–1606.
Bertsimas, D., & Brown, D. B. (2007). Constrained stochastic LQC: a tractable approach. IEEE Transactions on Automatic Control, 52, 1826–1841.
Boyd, S., & Vandenberghe, L. (2004). Convex optimization. http://www.stanford.edu/~boyd/cvxbook/.
Chernousko, F. L. (2005). Minimax control for a class of linear systems subject to disturbances. Journal of Optimization Theory and Applications, 127, 535–548.
Diehl, M., & Bjornberg, J. (2004). Robust dynamic programming for min–max model predictive control of constrained uncertain systems. IEEE Transactions on Automatic Control, 49(12), 2253–2257.
Gao, Y., Kostyukova, O., & Chong, K. T. (2009). Worst-case optimal control for an electrical drive system with time-delay. Asian Journal of Control, 11, 386–395.
Goulart, P. J., Kerrigan, E. C., & Alamo, T. (2009). Control of constrained discrete-time systems with bounded l2 gain. IEEE Transactions on Automatic Control, 54(5), 1105–1111.
Kim, K. B. (2004a). Disturbance attenuation for constrained discrete-time systems via receding horizon control. IEEE Transactions on Automatic Control, 49(5), 797–801.
Kim, K. B. (2004b). Design of receding horizon controls for constrained time-varying systems. IEEE Transactions on Automatic Control, 49(12), 2248–2252.
Kostyukova, O., & Kostina, E. (2006). Robust optimal feedback for terminal linear-quadratic control problems under disturbances. Mathematical Programming, 107, 131–153.
Kurdila, A. J., & Zabarankin, M. (2005). Convex functional analysis. Basel, Switzerland: Birkhauser Verlag.
Löfberg, J. (2004). YALMIP: a toolbox for modeling and optimization in MATLAB. In Proceedings of the International Symposium on CACSD (pp. 284–289). Taipei, Taiwan.
Mayne, D. Q., & Schroeder, W. R. (1997). Robust time-optimal control of constrained linear systems. Automatica, 33, 2103–2118.
Scokaert, P. O. M., & Mayne, D. Q. (1998). Min–max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control, 43, 1136–1142.
Scokaert, P. O. M., Mayne, D. Q., & Rawlings, J. B. (1999). Suboptimal model predictive control (feasibility implies stability). IEEE Transactions on Automatic Control, 44(3), 648–654.
Sturm, J. F. (1999). Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods & Software, 12, 625–633.
Teo, K. L., Goh, C. J., & Wong, K. H. (1991). A unified computational approach to optimal control problems. Longman Scientific & Technical.
Vinter, R. B. (2005). Minimax optimal control. SIAM Journal on Control and Optimization, 44, 939–968.
Yoon, M. G., Ugrinovskii, V. A., & Petersen, I. R. (2005). On the worst-case disturbance of minimax optimal control. Automatica, 41, 847–855.

Changzhi Wu received the bachelor’s degree from Anhui Normal University, China, in 2001, and the Ph.D. degree from Zhongshan University, China, in 2006. He joined Chongqing Normal University as a Lecturer in 2006. In 2010, he joined University of Ballarat as a research fellow. His main interests include both theoretical and practical aspects of optimization and optimal control and their applications in signal processing.

Kok Lay Teo received his Ph.D. degree in electrical engineering from the University of Ottawa in Canada. He has held academic positions in the Department of Applied Mathematics, University of New South Wales, Australia; the Department of Industrial and Systems Engineering, National University of Singapore, Singapore; and the Department of Mathematics, University of Western Australia, Australia. He joined the Department of Mathematics and Statistics, Curtin University, Australia, as the Chair of Applied Mathematics in 1996. He was Chair Professor of Applied Mathematics and Head of the Department of Applied Mathematics at the Hong Kong Polytechnic University from 1999 to 2004. He returned to Australia in 2005 as the Chair of Applied Mathematics and Head of the Department of Mathematics and Statistics at Curtin University until 2010. He is currently John Curtin Distinguished Professor at Curtin University. He has published 5 books and over 400 journal papers, and has developed a software package, MISER3.3, for solving general constrained optimal control problems. He is the Editor-in-Chief of the Journal of Industrial and Management Optimization; Numerical Algebra, Control and Optimization; and Dynamics of Continuous, Discrete and Impulsive Systems, Series B. He is a regional editor of Nonlinear Dynamics and Systems Theory, and serves as an associate editor of a number of international journals, including Automatica, Journal of Optimization Theory and Applications, Journal of Global Optimization, Optimization Letters, Discrete and Continuous Dynamic Systems, International Journal of Innovational Computing and Information Control, ANZIAM Journal, and Journal of Inequalities and Applications. His research interests include both the theoretical and practical aspects of optimal control and optimization, and their practical applications such as in signal processing and telecommunications, and financial portfolio optimization.

Soonyi Wu was born in Taiwan in 1956. He is a distinguished professor in the Department of Mathematics at National Cheng Kung University, Taiwan. He received his Ph.D. in Engineering from Cambridge University. His research interests include infinite optimization, semi-infinite optimization, and mathematical programming.