Model predictive control of linear systems with multiplicative unbounded uncertainty and average constraints


Preprints, 5th IFAC Conference on Nonlinear Model Predictive Control, September 17-20, 2015. Seville, Spain




IFAC-PapersOnLine 48-23 (2015) 266–271

Model predictive control of linear systems with multiplicative unbounded uncertainty and average constraints

Marcello Farina ∗ Riccardo Scattolini ∗

∗ Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, via Ponzio 34/5, 20133 Milan, Italy (e-mail: {marcello.farina,riccardo.scattolini}@polimi.it).

Abstract: In this paper a novel Stochastic Model Predictive Control algorithm is developed for systems characterized by multiplicative and possibly unbounded model uncertainty, average constraints on the states and inputs, and a quadratic cost function. The stochastic control problem, and in particular the average constraints, are reformulated in deterministic terms. The properties of the algorithm, namely the recursive feasibility and the pointwise convergence of the state, are proven by suitably selecting the terminal cost and the constraints on the mean and the variance of the state at the end of the prediction horizon, and by considering the mean and the covariance of the state at the beginning of the prediction horizon as additional optimization variables. The numerical issues related to the off-line selection of the algorithm's parameters and to its on-line implementation are also discussed in depth. A simulation campaign witnesses the effectiveness of the proposed control scheme.

Keywords: Model predictive control, stochastic systems, multiplicative uncertainty.

© 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2015.11.294

1. INTRODUCTION

Stochastic Model Predictive Control (S-MPC) is an active field of research, motivated by the need to develop innovative algorithms for systems subject to random noises with known distribution, characterized by less conservative performances with respect to robust deterministic methods. A large number of techniques are nowadays available, which can be roughly divided into two classes: analytic algorithms and randomized methods. In analytical approaches, mainly developed for linear discrete-time systems, the cost function to be minimized and the probabilistic/average constraints on the state and input variables are reformulated in deterministic terms, see for instance Primbs and Sung [2009], Cannon et al. [2009a], Farina et al. [2015], so that the resulting algorithm to be implemented on-line is of complexity similar to that of a nominal MPC, usually at the price of some degree of conservativeness and of a cumbersome off-line design phase. On the other hand, in randomized, or scenario-based, methods a properly chosen number of noise realizations is randomly generated at any time instant to compute the optimal solution with a given level of accuracy, see for instance Blackmore et al. [2010], Calafiore and Fagiano [2013]. Notably, scenario-based algorithms can deal with very generic systems, cost functions, and state and control constraints, often at the price of a very heavy on-line computational load, which could represent a serious limitation in many application fields.

In the wide class of analytical methods, perhaps the most popular assumptions are that the linear model is affected by additive uncertainties, see for instance Cannon et al. [2012], Farina et al. [2013, 2015], or multiplicative uncertainties, see e.g. Primbs and Sung [2009], Cannon et al. [2009a,b], Evans et al. [2012], Paulson and Streif [2015]. In Farina et al. [2013, 2015] a state and output feedback algorithm has been developed for linear systems subject to an additive, possibly unbounded uncertainty, assumed to be a zero-mean white noise with known variance. This method makes it possible to guarantee the recursive feasibility of the algorithm and the convergence of the state to a suitable neighborhood of the origin. In particular, the former is guaranteed by considering at any time instant the current value of the mean and of the covariance of the state as optimization variables, to be properly selected according to two alternative strategies.

In this paper we address linear models affected by multiplicative noise, motivated by the valuable possibility of describing parametric model uncertainties in terms of multiplicative noise in different application scenarios, e.g., in financial applications, Primbs [2007], Dombrovskii and Obyedko [2015], where stock prices and the portfolio wealth dynamics are represented in this way.

Along the lines reported in Farina et al. [2013, 2015], in this paper a new S-MPC algorithm is developed for systems characterized by multiplicative and possibly unbounded model uncertainty, average constraints on the states and inputs, and a quadratic cost function. The recursive feasibility of the method and the pointwise convergence of the state are proven by suitably selecting the terminal cost and the constraints on the mean and the variance of the state at the beginning of the prediction horizon. The proposed approach shares many similarities with the one proposed in Primbs and Sung [2009]; in our paper, however, a solution for enforcing recursive feasibility of the MPC problem is proposed.

The paper is organized as follows. In Section 2 the system to be controlled is introduced together with the considered average constraints. Section 3 is devoted to defining the structure of the control law, transforming the state and control constraints into deterministic terms, formulating the optimization problem, including the state constraints to be considered at the beginning and at the end of the prediction horizon, and presenting the main convergence result.



In Section 4 the numerical issues related to the off-line design and the on-line implementation are discussed, and numerical algorithms are provided to compute the main design parameters. Section 5 reports simulation results obtained considering the benchmark example reported in Primbs and Sung [2009], while conclusions are drawn in Section 6. For clarity of presentation, all proofs are postponed to Appendices A and B.


2. THE SYSTEM

We consider the following discrete-time linear system

x_{t+1} = A x_t + B u_t + Σ_{j=1}^{q} (C_j x_t + D_j u_t) w_t^j    (1)

where x_t ∈ R^n is the state, u_t ∈ R^m is the input and w_t^j ∈ R, for all j = 1, . . . , q, is a zero-mean white noise with unitary variance and possibly unbounded support. Furthermore, we assume that E{w_t^j w_k^i} = 0 for all t, k and for all i ≠ j. Perfect state information is assumed, together with the stabilizability of the pair (A, B). Similarly to Primbs and Sung [2009], we assume that the state and input variables are subject to expectation constraints of the type

E{b_r^T x_k} ≤ 1,  r = 1, . . . , n_r    (2a)
E{c_s^T u_k} ≤ 1,  s = 1, . . . , n_s    (2b)

where E{φ} denotes the expected value of φ.

3. MPC ALGORITHM: FORMULATION AND PROPERTIES

According to the standard procedure of MPC, at any time instant t a future prediction horizon of length N is considered and a suitable optimization problem is solved. The main ingredients of the optimization problem are now introduced.

3.1 Control law

Define x̄_t in such a way that x̄_t = E{x_t}. For system (1), we consider a state-feedback control law of the type, for k ≥ t,

u_k = ū_k + K_k (x_k − x̄_k)    (3)

where the input sequence ū_k and the gain sequence K_k, k = t, t+1, . . . , are defined (see later for details) as the result of a suitable optimization problem solved at time t (and therefore independently of the sequences w_t^j, w_{t+1}^j, . . . , for all j = 1, . . . , q). In view of the fact that w_k^j is a zero-mean white noise, x̄_k, for k > t, evolves according to

x̄_{k+1} = A x̄_k + B ū_k    (4)

Define δx_k = x_k − x̄_k and X_k = var{δx_k} = E{δx_k δx_k^T}. Similarly to Primbs and Sung [2009], it is possible to show that X_k evolves according to the equation

X_{k+1} = (A + B K_k) X_k (A + B K_k)^T + Σ_{j=1}^{q} (C_j + D_j K_k) X_k (C_j + D_j K_k)^T + Σ_{j=1}^{q} (C_j x̄_k + D_j ū_k)(C_j x̄_k + D_j ū_k)^T    (5)

Also, note that U_k = var{u_k − ū_k} = K_k X_k K_k^T.
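To make the moment dynamics (4)-(5) concrete, the following Python sketch (not part of the original paper; all function and variable names are illustrative) propagates the mean and the covariance over the prediction horizon for given candidate sequences ū_k and K_k.

```python
# Minimal sketch: forward propagation of the mean (4) and covariance (5)
# over the prediction horizon, given candidate sequences u_bar[k] and K[k].
import numpy as np

def propagate_moments(A, B, C_list, D_list, x_bar0, X0, u_bar_seq, K_seq):
    """Return the predicted means x_bar[k] and covariances X[k], k = 0..N."""
    x_bar, X = [np.asarray(x_bar0, float)], [np.asarray(X0, float)]
    for u_bar, K in zip(u_bar_seq, K_seq):
        # mean dynamics (4): x_bar_{k+1} = A x_bar_k + B u_bar_k
        x_next = A @ x_bar[-1] + B @ u_bar
        # covariance dynamics (5)
        Acl = A + B @ K
        X_next = Acl @ X[-1] @ Acl.T
        for Cj, Dj in zip(C_list, D_list):
            Gj = Cj + Dj @ K
            vj = Cj @ x_bar[-1] + Dj @ u_bar
            X_next += Gj @ X[-1] @ Gj.T + np.outer(vj, vj)
        x_bar.append(x_next)
        X.append(X_next)
    return x_bar, X
```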

3.2 Constraints

Recalling that E{x_k} = x̄_k and that E{u_k} = ū_k, we can rewrite (2) in a deterministic way, i.e., in terms of variables whose evolution is deterministically defined. It results that (2) are verified (for all time instants k ≥ t) if

b_r^T x̄_k ≤ 1,  r = 1, . . . , n_r    (6a)
c_s^T ū_k ≤ 1,  s = 1, . . . , n_s    (6b)

3.3 Cost function

The considered cost function is

J = E{ Σ_{k=t}^{t+N−1} ( ‖x_k‖²_Q + ‖u_k‖²_R ) + ‖x_{t+N}‖²_P }    (7)

where Q and R are positive definite, symmetric matrices of appropriate size and P is the solution to the algebraic equation

(A + BK̄)^T P (A + BK̄) + Σ_{j=1}^{q} (C_j + D_j K̄)^T P (C_j + D_j K̄) − P = −Q − K̄^T R K̄    (8)

where K̄ is a suitable stabilizing gain for the pair (A, B). As also discussed in Cannon et al. [2009a], the computation of suitable matrices P and K̄, if possible, can be carried out using linear matrix inequalities; for more details, see Section 4. Using standard procedures, we can write

J = J_m + J_v    (9)

where

J_m = Σ_{k=t}^{t+N−1} ( ‖x̄_k‖²_Q + ‖ū_k‖²_R ) + ‖x̄_{t+N}‖²_P    (10)
J_v = Σ_{k=t}^{t+N−1} tr{(Q + K_k^T R K_k) X_k} + tr{P X_{t+N}}    (11)

Note that J_m depends on ū_{t...t+N−1} and on the initial condition of the mean value x̄_t, while J_v depends on ū_{t...t+N−1}, K_{t...t+N−1}, and on the initial conditions of x̄_t and X_t, in view of the fact that the evolution of X_k in (5) depends also on x̄_k and ū_k.
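As a complement, a minimal sketch (illustrative, not from the paper) of how J_m and J_v in (10)-(11) can be evaluated from the propagated moments, e.g. those returned by the propagate_moments() sketch above:

```python
# Sketch: evaluation of the cost terms J_m (10) and J_v (11) from the
# propagated means and covariances over the horizon.
import numpy as np

def cost_terms(Q, R, P, x_bar, X, u_bar_seq, K_seq):
    Jm = sum(xb @ Q @ xb + ub @ R @ ub for xb, ub in zip(x_bar[:-1], u_bar_seq))
    Jm += x_bar[-1] @ P @ x_bar[-1]                      # terminal mean cost
    Jv = sum(np.trace((Q + K.T @ R @ K) @ Xk)
             for Xk, K in zip(X[:-1], K_seq))
    Jv += np.trace(P @ X[-1])                            # terminal variance cost
    return Jm, Jv                                        # J = Jm + Jv, see (9)
```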

3.4 Terminal constraints

As usual in MPC with guaranteed stability (see e.g. Mayne et al. [2000]), constraints are enforced at the end of the prediction horizon. In this case, terminal constraints are needed on the mean value x̄_{t+N}, i.e.,

x̄_{t+N} ∈ X̄^f    (12)

The set X̄^f is a positively invariant set for the system (4) with the control law ū_k = K̄ x̄_k, see Kolmanovsky and Gilbert [1998], that is

(A + BK̄) x̄ ∈ X̄^f   for all x̄ ∈ X̄^f    (13)

The following inequalities must also hold, for all x̄ ∈ X̄^f:

b_r^T x̄ ≤ 1    (14a)
c_s^T K̄ x̄ ≤ 1    (14b)
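A possible way to construct a polyhedral terminal set X̄^f satisfying (13)-(14) is the iterative procedure of Kolmanovsky and Gilbert [1998]. The sketch below is an assumption-laden illustration (using SciPy, not the authors' implementation): it stacks the constraints b_r^T (A+BK̄)^k x ≤ 1 and c_s^T K̄ (A+BK̄)^k x ≤ 1 until the newly generated rows become redundant.

```python
# Sketch: polyhedral positively invariant set X_f = {x : H x <= 1} for
# x_bar_{k+1} = (A + B Kbar) x_bar_k, satisfying (13)-(14).
import numpy as np
from scipy.optimize import linprog

def terminal_set(A, B, Kbar, b_rows, c_rows, max_iter=50):
    Acl = A + B @ Kbar
    H0 = np.vstack([np.atleast_2d(b_rows), np.atleast_2d(c_rows) @ Kbar])
    H = H0.copy()
    Ak = np.eye(A.shape[0])
    for _ in range(max_iter):
        Ak = Ak @ Acl
        H_new = H0 @ Ak
        # a new row h is redundant if max h^T x s.t. H x <= 1 is <= 1
        redundant = True
        for h in H_new:
            res = linprog(-h, A_ub=H, b_ub=np.ones(H.shape[0]),
                          bounds=[(None, None)] * len(h))
            if not res.success or -res.fun > 1 + 1e-9:
                redundant = False
                break
        if redundant:
            return H          # X_f = {x : H x <= 1}
        H = np.vstack([H, H_new])
    raise RuntimeError("invariant set construction did not terminate")
```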



3.5 Initial conditions for the mean and the covariance

Similarly to the solution adopted in Farina et al. [2015], for feasibility purposes the initial conditions (x̄_t, X_t) at the current time instant must also be accounted for as free variables in the optimization problem. More specifically, the following alternative strategies can be selected.

Strategy 1 - Reset of the initial state: in the MPC optimization problem set x̄_{t|t} = x_t, X_{t|t} = 0.

Strategy 2 - Prediction: in the MPC optimization problem set x̄_{t|t} = x̄_{t|t−1}, X_{t|t} = X_{t|t−1}.

This will result in including, in the MPC optimization problem, the following constraint

(x̄_t, X_t) ∈ {(x_t, 0), (x̄_{t|t−1}, X_{t|t−1})}    (15)

3.6 MPC problem

The Stochastic MPC problem is the following:

min_{ū_{t...t+N−1}, K_{t...t+N−1}, (x̄_t, X_t)}  J    (16)

subject to the dynamics (4) and (5), to the constraints (6) for all k = 0, . . . , N−1, to the initial constraint (15), and to the terminal constraint (12).

Denoting by ū_{t...t+N−1|t} = {ū_{t|t}, . . . , ū_{t+N−1|t}}, K_{t...t+N−1|t} = {K_{t|t}, . . . , K_{t+N−1|t}}, and (x̄_{t|t}, X_{t|t}) the optimal solution of the S-MPC problem, and according to the receding horizon principle, the feedback control law actually used is, consistently with (3),

u_t = ū_{t|t} + K_{t|t} (x_t − x̄_{t|t})

The recursive feasibility and convergence properties of the resulting control system are stated in the following result.

Theorem 1. If, at time t = 0, the S-MPC problem admits a solution, the optimization problem is recursively feasible and E{‖x_t‖²_Q} → 0 as t → +∞. Furthermore, the state and input expectation constraints (2) are verified for all t ≥ 0.

A remark is due at this point. An ambiguity could arise when formulating expectation constraints of the type of, e.g., (2a). In general, expectation operators are defined conditionally. In the state-feedback approach presented in Primbs and Sung [2009], for example, conditional expectation constraints of the type E{b_r^T x_{t+k} | x_t} ≤ 1, k = 1, 2, . . . (in the case of linear constraints) are enforced in the MPC optimization problem formulated at time step t. In view of the receding horizon principle, this may result in E{b_r^T x_{t+1} | x_t} ≤ 1 for all t, meaning that the 1-step forward conditional expectation constraint is verified at each time step. In our paper, however, the feasibility issue is solved by allowing two possible initial conditions (15). It is important to remark, indeed, that the two possible initializations imply different definitions of expected value. Indeed, if the reset Strategy 1 is adopted at time t, we implicitly enforce E{b_r^T x_{t+1} | x_t} ≤ 1, while, if the prediction Strategy 2 is adopted, we verify E{b_r^T x_{t+1} | x_{t−τ}} ≤ 1, where t − τ is the most recent past time step when the reset strategy has been adopted. Finally, note that our approach allows one to enforce E{b_r^T x_{t+1} | x_0} ≤ 1, i.e., the fulfillment of the "non-conditional" expectation constraint, by disregarding, at each time step (even if feasible and optimal), the reset strategy, i.e., by setting (x̄_t, X_t) = (x̄_{t|t−1}, X_{t|t−1}) for all t ≥ 0. Remark that the latter choice would not compromise the recursive feasibility and convergence results provided by Theorem 1.

4. NUMERICAL ISSUES AND SIMPLIFIED FORMULATIONS

In this section the design and implementation issues of the proposed control scheme are discussed.

4.1 Offline design

The main design problem consists in the computation of matrices K̄ and P such that equation (8) is verified.

Solution with LMIs. The equation (8) can be solved by defining a suitable LMI. Namely, if we define Π = P⁻¹ and S = K̄P⁻¹, we can cast the corresponding inequality

P − { (A + BK̄)^T P (A + BK̄) + Σ_{j=1}^{q} (C_j + D_j K̄)^T P (C_j + D_j K̄) + Q + K̄^T R K̄ } ≥ 0

as the following LMI (where ∗ denotes the entries that make the matrix symmetric)

[ Π              ∗    ∗   ⋯   ∗    ∗     ∗   ]
[ AΠ + BS        Π    0   ⋯   0    0     0   ]
[ C_1Π + D_1S    0    Π   ⋯   0    0     0   ]
[   ⋮                     ⋱                   ]
[ C_qΠ + D_qS    0    0   ⋯   Π    0     0   ]
[ Π              0    0   ⋯   0   Q⁻¹    0   ]
[ S              0    0   ⋯   0    0    R⁻¹  ]  ≥ 0    (17)
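For illustration, the LMI (17) can be set up with an off-the-shelf modeling layer. The sketch below assumes CVXPY and a semidefinite solver are available and, for brevity, writes the block matrix for q = 1 (as in the example of Section 5); function and variable names are illustrative.

```python
# Sketch: offline design via LMI (17), specialized to q = 1.
# Solve for Pi = P^{-1} and S = Kbar P^{-1}, then recover P and Kbar.
import numpy as np
import cvxpy as cp

def design_P_Kbar(A, B, C1, D1, Q, R):
    n, m = A.shape[0], B.shape[1]
    Pi = cp.Variable((n, n), symmetric=True)
    S = cp.Variable((m, n))
    Zn, Znm = np.zeros((n, n)), np.zeros((n, m))
    # Block LMI (17), written symmetrically (the '*' blocks are the
    # transposes of the corresponding first-column blocks).
    M = cp.bmat([
        [Pi,               (A @ Pi + B @ S).T, (C1 @ Pi + D1 @ S).T, Pi,               S.T],
        [A @ Pi + B @ S,   Pi,                 Zn,                   Zn,               Znm],
        [C1 @ Pi + D1 @ S, Zn,                 Pi,                   Zn,               Znm],
        [Pi,               Zn,                 Zn,                   np.linalg.inv(Q), Znm],
        [S,                Znm.T,              Znm.T,                Znm.T,            np.linalg.inv(R)],
    ])
    prob = cp.Problem(cp.Minimize(0), [M >> 0, Pi >> 1e-6 * np.eye(n)])
    prob.solve()
    P = np.linalg.inv(Pi.value)
    Kbar = S.value @ P          # since S = Kbar P^{-1}
    return P, Kbar
```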

Solution using small gain arguments. An alternative solution for the computation of a matrix P satisfying (8) can be found thanks to the following result.

Proposition 1. If there exists a matrix K̄ such that

(µ² / (1 − λ²)) Σ_{j=1}^{q} ‖C_j + D_j K̄‖² < 1    (18)

where µ > 0 and λ ∈ [0, 1) are positive scalars defined in such a way that ‖(A + BK̄)^k‖ ≤ µλ^k, if we set P(0) = 0, and if we define P(i) according to the following iterative equation

P(i+1) = (A + BK̄)^T P(i) (A + BK̄) + Σ_{j=1}^{q} (C_j + D_j K̄)^T P(i) (C_j + D_j K̄) + Q + K̄^T R K̄    (19a)

then P(i) → P as i → ∞, where P verifies (8).

Proposition 1 provides a constructive method for computing the matrix P, provided that a suitable gain satisfying (18) has been computed. The problem of computing K̄ could be addressed by solving the following nonlinear optimization problem:

min_{K̄} γ
subject to  ρ(A + BK̄) < 1
            (µ² / (1 − λ²)) Σ_{j=1}^{q} ‖C_j + D_j K̄‖² ≤ γ
            γ < 1

where ρ(A + BK̄) is the spectral radius of A + BK̄.
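A minimal sketch of the iteration (19a), preceded by a rough numerical check of the small-gain condition (18), could look as follows; the constants µ and λ are estimated crudely here, so this is an illustration rather than the authors' procedure.

```python
# Sketch: fixed-point iteration (19a) for P given a stabilizing gain Kbar,
# with a crude check of the small-gain condition (18).
import numpy as np

def compute_P_iteratively(A, B, C_list, D_list, Kbar, Q, R,
                          tol=1e-10, max_iter=10000):
    Acl = A + B @ Kbar
    lam = max(abs(np.linalg.eigvals(Acl)))            # spectral radius, must be < 1
    assert lam < 1.0, "Kbar must stabilize (A, B)"
    lam_bar = (1.0 + lam) / 2.0                       # lam < lam_bar < 1
    # rough bound ||Acl^k|| <= mu * lam_bar^k, estimated over a finite window
    mu = max(np.linalg.norm(np.linalg.matrix_power(Acl, k), 2) / lam_bar**k
             for k in range(50))
    gamma = mu**2 / (1 - lam_bar**2) * sum(
        np.linalg.norm(Cj + Dj @ Kbar, 2)**2 for Cj, Dj in zip(C_list, D_list))
    if gamma >= 1:
        print("warning: small-gain condition (18) not verified with this bound")
    P = np.zeros_like(A)                              # P(0) = 0, as in Proposition 1
    for _ in range(max_iter):
        P_next = Acl.T @ P @ Acl + Q + Kbar.T @ R @ Kbar
        for Cj, Dj in zip(C_list, D_list):
            Gj = Cj + Dj @ Kbar
            P_next += Gj.T @ P @ Gj
        if np.linalg.norm(P_next - P) < tol:
            return P_next
        P = P_next
    raise RuntimeError("iteration (19a) did not converge")
```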


4.2 Online implementation

In this section we discuss the numerical problems related to the online implementation of the algorithm, focusing in particular on the optimization problem (16). Similarly to Primbs and Sung [2009], it is possible to show that the update equation (5) can be reformulated as a suitable LMI. In fact, under the assumption that X_k > 0 and defining Y_k = K_k X_k, we can write

[ X_{k+1}                  ∗     ∗    ⋯   ∗     ∗   ⋯   ∗  ]
[ (A X_k + B Y_k)^T        X_k   0    ⋯   0     0   ⋯   0  ]
[ (C_1 X_k + D_1 Y_k)^T    0     X_k  ⋯   0     0   ⋯   0  ]
[   ⋮                                 ⋱                     ]
[ (C_q X_k + D_q Y_k)^T    0     0    ⋯   X_k   0   ⋯   0  ]
[ (C_1 x̄_k + D_1 ū_k)^T    0     0    ⋯   0     I   ⋯   0  ]
[   ⋮                                           ⋱          ]
[ (C_q x̄_k + D_q ū_k)^T    0     0    ⋯   0     0   ⋯   I  ]  ≥ 0    (20)

Note also that the constraints (6) are linear in the optimization variables. Overall, we can conclude that the S-MPC problem (16) is a quadratic optimization problem with linear constraints, allowing for an affordable computational load of the control scheme. Also, it is worth noting that the binary choice (15) of the initial conditions in problem (16) does not compromise the computational simplicity of the optimization problem. Indeed, (16) must be solved starting from both initial conditions; if the problem is feasible for both initializations, we select the one leading to the smallest value of the cost function J. To further simplify the on-line implementation from the numerical side, it is possible to set, similarly to Farina et al. [2015], a constant control gain K_k = K̄ for all k. This does not compromise the recursive feasibility and the convergence properties of the control scheme, but significantly reduces the number of degrees of freedom of the control scheme, and consequently the corresponding computational burden.
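The resulting on-line logic, including the binary initialization choice (15), can be sketched as follows; solve_smpc_qp() is a hypothetical routine that solves problem (16) for a fixed initialization (x̄_t, X_t) and returns None when infeasible, and the solution fields are likewise illustrative.

```python
# Sketch of one receding-horizon step with the binary initialization (15):
# problem (16) is solved for Strategy 1 (reset) and Strategy 2 (prediction),
# and the feasible solution with the lowest cost is applied.
import numpy as np

def smpc_step(x_t, prev_prediction, solve_smpc_qp):
    candidates = [(x_t, np.zeros((x_t.size, x_t.size)))]   # Strategy 1: reset
    if prev_prediction is not None:                        # Strategy 2: prediction
        candidates.append(prev_prediction)                 # (x_bar_{t|t-1}, X_{t|t-1})
    best = None
    for x_bar0, X0 in candidates:
        sol = solve_smpc_qp(x_bar0, X0)                    # None if infeasible
        if sol is not None and (best is None or sol.cost < best.cost):
            best = sol
    if best is None:
        raise RuntimeError("S-MPC problem (16) infeasible for both initializations")
    # control actually applied, consistently with (3)
    u_t = best.u_bar[0] + best.K[0] @ (x_t - best.x_bar[0])
    # one-step-ahead prediction reused at time t+1 by Strategy 2
    next_prediction = (best.x_bar[1], best.X[1])
    return u_t, next_prediction
```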

5. EXAMPLE

We consider the example illustrated in Primbs and Sung [2009]. More specifically, it is set

A = [ 1.02  −0.1 ;  0.1  0.98 ],   B = [ 0.1  0 ;  0.05  0.01 ]
C_1 = [ 0.04  0 ;  0  0.04 ],   D_1 = [ 0.04  0 ;  −0.04  0.008 ]

The disturbance is w_t^1 ∼ N(0, 1). The weighting matrices are Q = diag(2, 1) and R = diag(5, 20). Constraints are imposed on the state variables only: in (2a), it is set b^T = [−2 1]/2.3. For a better comparison with Primbs and Sung [2009], we consider different initial conditions. More specifically, we test the following cases: (i) x_0 = [−0.1, 1.2]^T; (ii) x_0 = [−0.1, 1.3]^T; (iii) x_0 = [−0.1, 1.4]^T; (iv) x_0 = [−0.1, 1.5]^T.

In Figures 1 and 2 we show the results of a Monte Carlo campaign consisting of 100 runs of the closed-loop controlled system. In particular, Figure 1 illustrates the behaviour of the trajectories x̄_t, for all realizations, while Figure 2 shows the "rate of transit" of the simulated perturbed trajectories x_t in the different points of the state space. It is possible to note that, in all cases, the constraint (2a) is respected on average, since the corresponding deterministic constraint is respected, for all realizations, by the trajectory x̄_t.

Assume that, at time instant t, a feasible solution of S-MPC is available, i.e., (x¯t|t , Xt|t ) with optimal sequences u¯t...t+N−1|t , Kt...t+N−1|t . At time t + 1, a feasible solution to S-MPC exf ists, i.e., (x¯t+1|t , Xt+1|t ) with admissible sequences u¯t+1...t+N|t = f {u¯t+1|t , . . . , u¯t+N−1|t , K¯ x¯t+N|t }, Kt+1...t+N|t = {Kt+1|t , . . . , ¯ Kt+N−1|t , K}. In fact, constraint (6a) is verified for all values of x¯t+1+k|t , k = 0, . . . , N − 2, in view of the feasibility of S-MPC at time t. Furthermore, in view of (12) and of the condition (14a), we have that bT x¯t+N|t ≤ 1 i.e., constraint (6a) is verified. Analogously, constraint (6b) is verified for all values of u¯t+1+k|t , k = 0, . . . , N − 2, in view of the feasibility of S-MPC at time t. Furthermore, in view of (12) and of the condition (14b), we have that cT K¯ x¯t+N|t ≤ 1 i.e., constraint (6b) is verified. From (12) and the invariance property (13) it follows that ¯ f hence verifying (12) at time ¯ x¯t+N|t ∈ X x¯t+N+1|t = (A + BK) t + 1. In view of the feasibility, at time t + 1, of the possibly suboptif f mal solution u¯t+1...t+N|t , Kt+1...t+N|t , and (x¯t+1|t , Xt+1|t ), we have

2015 IFAC NMPC 270 September 17-20, 2015. Seville, Spain

Marcello Farina et al. / IFAC-PapersOnLine 48-23 (2015) 266–271

Fig. 2. Results of the Monte Carlo simulation campaign, consisting of 100 runs for each initial condition, with t ∈ [0, 99] and different noise realizations. The histogram levels represent the rates of occurrence of points xt of the simulated trajectories in the neighborhoods of the centroids of their bases. The light blue surface denotes the constraint bT x ≤ 1. Initial conditions: (i) x0 = [−0.1, 1.2]T (upper left panel); (ii) x0 = [−0.1, 1.3]T (upper right panel); (iii) x0 = [−0.1, 1.4]T (lower left panel); (iv) x0 = [−0.1, 1.5]T (lower right panel). that the optimal cost function computed at time t + 1 is J ∗ (t + 1) = Jm∗ (t + 1) + Jv∗(t + 1). From the optimality of J ∗ (t + 1) J ∗ (t + 1) ≤ Jm (t + 1|t) + Jv(t + 1|t)

(A.1)

where Jm (t + 1|t) = Jm∗ (t) − �x¯t|t �2Q − �u¯t|t �2R + �x¯t+N|t �2Q + �K¯ x¯t+N|t �2R − �x¯t+N|t �2P + ¯ x¯t+N|t �2P �(A + BK)

= �x¯t+N|t �2∑q

(A.5)

¯ T P(C j +D j K) ¯

j=1 (C j +D j K)

�x¯t+N|t �2(Q+K¯ T RK)−P+(A+B ¯ ¯ T P(A+BK)+ ¯ ∑q K)

¯ T P(C j +D j K) ¯

j=1 (C j +D j K)

(A.2)

≤ J ∗ (t) − E{�xt �2Q } Using standard arguments we conclude that t → +∞.

(A.3)

(A.6) E{�xt �2Q }

→ 0 as

Appendix B. PROOF OF PROPOSITION 1 Proof of Proposition 1



The proof is divided in three steps.



Boundedness The boundedness of P(k), if computed by iterating (19a), can be proved along the lines of Dashkovskiy et al. [2007]. We rewrite (19a) as the following system of coupled equations ¯ T P(i)(A + BK) ¯ P(i + 1) = (A + BK) (B.1a)

Considering (A.3) and recalling (8): ¯ t+N|t − PXt+N|t + P(A + BK)X ¯ t+N|t (A + BK) ¯ T tr{(Q + K¯ T RK)X

q

¯ T ∆(i)(C j + D j K) ¯ + Q + K¯ T RK¯ + ∑ (C j + D j K)

q

¯ t+N|t (C j + D j K) ¯ T} + P ∑ (C j + D j K)X

j=1

j=1

¯ T ∆(i)(A + BK) ¯ ∆(i + 1) = (A + BK)

¯ − P + (A + BK) ¯ T P(A + BK) ¯ = tr{(Q + K¯ T RK)

(B.1b)

q

q

¯ T P(C j + D j K))X ¯ t+N|t } = 0 + ∑ (C j + D j K)

j=1

From (A.1)-(A.5) and recalling (8) again, we obtain J ∗ (t + 1) ≤ J ∗ (t) − E{�xt �2Q + �ut �2R }+

and T RKt|t )Xt|t } Jv (t + 1|t) = Jv∗ (t) − tr{(Q + Kt|t ¯ t+N|t + tr{(Q + K¯ T RK)X ¯ t+N|t (A + BK) ¯ T − PXt+N|t + P(A + BK)X q ¯ T ¯ t+N|t (C j + D j K) + P (C j + D j K)X j=1 q T ¯ x¯t+N|t x¯t+N|t ¯ T} (C j + D j K) + P (C j + D j K) j=1

q

T ¯ x¯t+N|t x¯t+N|t ¯ T} (C j + D j K) tr{P ∑ (C j + D j K)

(A.4)

j=1

¯ T P(i)(C j + D j K) ¯ + Q + K¯ T RK¯ + ∑ (C j + D j K) j=1

(B.1c)

Also, the last line in (A.3) results in 270

2015 IFAC NMPC September 17-20, 2015. Seville, Spain

Marcello Farina et al. / IFAC-PapersOnLine 48-23 (2015) 266–271

with P(0) = ∆(0), from which we derive the following solutions ¯ k ¯ T )k P(0)(A + BK) (B.2a) P(k) = ((A + BK) k

q

i=1

j=1

¯ T )k−i { ∑ (C j + D j K) ¯ T ∆(i − 1)(C j + D j K) ¯ + ∑ ((A + BK) ¯ ¯ k−i + Q + K¯ T RK}(A + BK) ¯ T )k ∆(0)(A + BK) ¯ k ∆(k) = ((A + BK)

271

Using recursion arguments, we obtain that, under the stated initialization, P(k + 1) ≥ P(k) for all k ≥ 0. In view of the boundedness property, P(k) → P¯ as k → ∞, where P¯ results to be the solution to the algebraic equation (8). REFERENCES

Blackmore, L., Ono, M., Bektassov, A., and Williams, B. (2010). A probabilistic particle-control approximation of q k chance-constrained stochastic predictive control. IEEE ¯ T )k−i { ∑ (C j + D j K) ¯ T P(i − 1)(C j + D j K) ¯ + ∑ ((A + BK) Transactions on Robotics, 26(3), 502–517. i=1 j=1 Calafiore, G.C. and Fagiano, L. (2013). Robust model predic¯ ¯ k−i tive control via scenario optimization. IEEE Transactions on + Q + K¯ T RK}(A + BK) Automatic Control, 58(1), 219–224. From this we obtain that Cannon, M., Cheng, Q., Kouvaritakis, B., and Rakovi´c, S.V. k ¯ k−i �2 ¯ k �2 �P(0)� + ∑ �(A + BK) (2012). Stochastic tube MPC with state estimation. Auto�P(k)� ≤ �(A + BK) matica, 48(3), 536–541. i=1 q Cannon, M., Kouvaritakis, B., and Wu, X. (2009a). Model ¯ ¯ 2 max �∆(h)� + �Q + K¯ T RK�} × { ∑ �C j + D j K� predictive control for systems with stochastic multiplicative h∈[0,k] j=1 uncertainty and probabilistic constraints. Automatica, 45(1), q 2 167–172. µ ¯ k �2 �P(0)� + ¯ 2 { ∑ �C j + D j K� ≤ �(A + BK) Cannon, M., Kouvaritakis, B., and Wu, X. (2009b). Probabilis1 − λ 2 j=1 tic constrained MPC for multiplicative and additive stochas¯ × max �∆(h)� + �Q + K¯ T RK�} (B.3a) tic uncertainty. IEEE Transactions on Automatic Control, h∈[0,k] 54(7), 1626–1632. q µ2 Dashkovskiy, S., R¨uffer, B.S., and Wirth, F. (2007). An ISS k 2 2 ¯ ¯ { ∑ �C j + D j K� �∆(k)� ≤ �(A + BK) � �∆(0)� + small gain theorem for general networks. Mathematics of 1 − λ 2 j=1 Control, Signals, and Systems, 19, 93–122. ¯ × max �P(h)� + �Q + K¯ T RK�} (B.3b) Dombrovskii, V. and Obyedko, T. (2015). Model predich∈[0,k] tive control for constrained systems with serially correlated Denote δk = maxh∈[0,k] �∆(h)� and pk = maxh∈[0,k] �P(h)�. stochastic parameters and portfolio optimization. AutomatFrom (B.3) we obtain that ica, 54, 325 – 331. pk ≤ γδk + q Evans, M., Cannon, M., and Kouvaritakis, B. (2012). Linear δk ≤ γ pk + q stochastic MPC under finitely supported multiplicative un2 2 q ¯ 2 and q = µ 2 �P(0)�+ µ 2 �Q+ certainty. In American Control Conference (ACC), 442–447. where γ = 1−µ λ 2 ∑ j=1 �C j +D j K� Farina, M., Giulioni, L., Magni, L., and Scattolini, R. (2013). A 1−λ ¯ since P(0) = ∆(0). We define K¯ T RK� probabilistic approach to model predictive control. In IEEE   52nd Annual Conference on Decision and Control (CDC), 0 γ Γ= 7734–7739. γ 0 Farina, M., Giulioni, L., Magni, L., and Scattolini, R. (2015). and we write     An approach to output-feedback MPC of stochastic linear q p discrete-time systems. Automatica, 55, 140 – 149. (I − Γ) k ≤ q δk Kolmanovsky, I. and Gilbert, E.G. (1998). Theory and comAccording to Dashkovskiy et al. [2007], if the spectral radius putation of disturbance invariant sets for discrete-time linear of Γ is strictly smaller than one, i.e., if γ < 1, for every systems. Mathematical Problems in Engineering, 4(4), 317– initial condition (see, e.g., Lemma 13 for the general nonlinear 367. case), the solution to the system (B.1) exists and is uniformly Mayne, D.Q., Rawlings, J.B., Rao, C.V., and Scokaert, P.O.M. bounded, since q does not depend on k. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36, 789–814. Monotonicity Paulson, J.A. and Streif, S. Mesbah, A. (2015). Stability for Define receding-horizon stochastic model predictive control. In ¯ T P(i)(A + BK) ¯ + ∑q (C j + D j K) ¯ T P(i)(C j + f (P) = (A + BK) arXiv:1410.5083 [cs.SY]. j=1 ¯ To obtain the proof of Proposition 1, we Primbs, J.A. (2007). Portfolio optimization applications of ¯ + Q + K¯ T RK. 
D j K) stochastic receding horizon control. In American Control need to show that, if PA ≥ PB (where both matrices PA and PB Conference (ACC), 1811–1816. are symmetric semi-positive definite), then f (PA ) ≥ f (PB ). This Primbs, J.A. and Sung, C.H. (2009). Stochastic receding horifollows from the fact that zon control of constrained linear systems with state and con¯ T (PA − PB)(A + BK) ¯ f (PA ) − f (PB ) = (A + BK) q trol multiplicative noise. IEEE Transactions on Automatic ¯ T (PA − PB)(C j + D j K) ¯ ≥0 + ∑ (C j + D j K) Control, 54(2), 221–230. (B.2b)

j=1

Convergence Consider now that P(0) = 0. Therefore P(1) = f (P(0)) ≥ 0 = P(0). In view of the monotonicity property, f (P(1)) ≥ f (P(0)). 271