Journal of the Franklin Institute 352 (2015) 73–92
Constrained robust distributed model predictive control for uncertain discrete-time Markovian jump linear systems

Yan Song^a,*, Shuai Liu^b, Guoliang Wei^a

^a Department of Control Science and Engineering, Key Laboratory of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China
^b Business School, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China

Received 19 May 2014; received in revised form 6 August 2014; accepted 21 September 2014; available online 14 October 2014
Abstract

This paper is concerned with the robust distributed model predictive control (MPC) problem for a class of uncertain discrete-time Markovian jump linear systems (MJLSs) subject to constraints on the inputs and states. Polytopic uncertainties in both the system matrices and the transition probability matrix of the Markov process are taken into consideration. The global system is decomposed into several subsystems, which can exchange information with each other over a network. Furthermore, by constructing a novel Lyapunov functional, a sufficient condition is derived to guarantee the robust stability in the mean-square sense for each subsystem under admissible constraints and uncertainties. By means of the Cauchy–Schwarz inequality, the constrained problem of minimizing an upper bound on the worst-case infinite horizon cost function is transformed into a convex optimization problem involving linear matrix inequalities (LMIs). By solving a series of LMIs, a novel Jacobi iterative algorithm is proposed to design a distributed mode-dependent state-feedback controller, which ensures local optimality at each sampling instant. Finally, two numerical simulation examples are employed to show the effectiveness of the proposed distributed algorithm in comparison with centralized and decentralized control schemes.

© 2014 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
☆ This work was supported in part by the National Natural Science Foundation of China under Grants 61403254, 61374039 and 61203143, the Shanghai Pujiang Program under Grant 13PJ1406300, the Shanghai Natural Science Foundation of China under Grant 13ZR1428500, the Innovation Program of Shanghai Municipal Education Commission under Grant 14YZ083, and the Hujiang Foundation of China (A14001, B1402/D1402).
* Corresponding author. E-mail address: [email protected] (Y. Song).
http://dx.doi.org/10.1016/j.jfranklin.2014.09.016
0016-0032/© 2014 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
1. Introduction

In the past few decades, stochastic models have come to play an important role in many branches of science, such as biology, economics and engineering applications. Extensive research attention has been paid to linear and nonlinear systems with stochastic perturbations, such as stochastic noises, stochastic nonlinearities and randomly occurring incomplete information; see, e.g., [1–5]. As an effective tool to describe stochastic phenomena, the Markovian jump linear system (MJLS) has received intensive research interest, since it is capable of maintaining an acceptable behavior and meeting performance requirements even in the presence of abrupt changes, for instance, random failures of components, sudden disturbances and variations of the environment, and changes of the subsystem interconnections. Examples of such situations can be found in economic systems, aircraft control systems, robotic manipulator systems and large flexible structures for space stations; see, e.g., [6,7]. In these systems the parameters jump among a finite set of modes, and the mode switching is governed by a Markov process. For MJLSs, the issues of stability, stabilization, optimal control, quadratic optimal control, H∞ control and filtering have been well investigated; see, e.g., [8,6,9–15]. Recently, in the context of model predictive control, discrete-time Markovian jump systems (DMJSs) have drawn some research attention; see, e.g., [8,9,11] and the references therein. In most of the existing literature, the transition probabilities of the jumping process, which determine the system behavior to a great extent, have been assumed to be completely accessible. However, such an ideal assumption inevitably limits the applicability of the established results, because of the difficulty and cost of obtaining all the transition probabilities precisely. Some results have been obtained in [8,9] for MJLSs with polytopic uncertainties in the transition probability matrix.
Model predictive control (MPC), also called receding horizon control, is a model-based open-loop optimal control strategy. At each sampling instant, by solving a discrete-time optimal control problem over a given horizon, an optimal control input sequence is obtained and only the first control in that sequence is applied. At the next sampling instant, a new optimal control problem is formulated and solved based on the new measurements. As a popular optimal control strategy, MPC has attracted extensive research attention, since it is capable of providing controllers that ensure stability, robustness, constraint satisfaction and tractable computation for linear and nonlinear systems [16–19]. An attractive attribute of MPC technology is its ability to systematically account for system constraints on the states, inputs and outputs; see, e.g., [9,20–23]. However, a main drawback of conventional MPC design techniques is their inability to deal with model uncertainty. For this reason, robust model predictive control (RMPC) has become a significant branch of MPC; see, e.g., [9,20,24,25] and the references therein. Taking the above two aspects into account, Kothare et al. [20] proposed a robust MPC subject to input and output constraints, and transformed the constraints into a series of linear matrix inequalities via convex optimization techniques. In [9], Lu et al. presented a multi-step mode-dependent control law for the MPC of uncertain discrete-time Markovian jump linear systems subject to constraints on the inputs and states. On the other hand, because of the unavailability of an exact model and limits on computational complexity, centralized MPC becomes impractical and unsuitable for large-scale systems.
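The receding-horizon mechanism just described (optimize over a finite horizon, apply only the first input, then re-solve from the new measurement) can be sketched in a few lines. The scalar model, horizon length and LQR-style placeholder optimizer below are purely illustrative, not taken from the paper:

```python
def solve_finite_horizon(x0, horizon, a=0.9, b=1.0, q=1.0, r=0.1):
    """Finite-horizon scalar LQR via a backward Riccati recursion,
    returning the open-loop optimal input sequence from state x0."""
    p, gains = q, []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)   # stage feedback gain
        p = q + a * a * p - a * b * p * k   # Riccati update (backward in time)
        gains.append(k)
    u_seq, x = [], x0
    for k in reversed(gains):               # gains were built back-to-front
        u_seq.append(-k * x)
        x = a * x + b * u_seq[-1]
    return u_seq

def receding_horizon(x0, steps=30, horizon=10):
    """Apply only the first move of each optimal sequence, then re-solve."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = solve_finite_horizon(x, horizon)[0]  # first move only
        x = 0.9 * x + 1.0 * u                    # plant step (same model here)
        traj.append(x)
    return traj

traj = receding_horizon(1.0)
```

In a robust setting the plant step would differ from the prediction model, which is exactly what motivates the worst-case formulations discussed below.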
To deal with the drawbacks of the centralized MPC scheme, the distributed MPC scheme has been proposed and has received more and more attention for large-scale systems, such as power systems, water distribution systems, traffic systems, manufacturing systems and economic systems; see, e.g., [26–28]. Distributed MPC is a control scheme in which the global system is divided into a number of subsystems, and the information from each subsystem can be
exchanged by transmission over a network. For example, Dunbar et al. [29] proposed a distributed MPC framework in which the subsystems have independent dynamics but are linked through their cost functions and constraints. Distributed MPC algorithms for unconstrained linear time-invariant systems were proposed in [30]. Al-Gherwi et al. [31] investigated a robust distributed model predictive control algorithm. Zhang et al. [32] presented a distributed MPC algorithm for polytopic uncertain systems subject to actuator saturation, and a similar method was adopted in [33] under randomly occurring actuator saturation and packet loss. Unfortunately, up to now, the robust distributed MPC problem for discrete-time Markovian jump systems has not been investigated, let alone the case where constraints on the inputs and states are present as well, which gives rise to the main motivation of our research.

In this paper, we focus on the robust distributed MPC problem for a class of uncertain discrete-time Markovian jump linear systems with input and state constraints. The main contributions can be highlighted as follows.

Fig. 1. Invariant sets comparison (centralized MPC, Subsystem 1, Subsystem 2; panels for iterations 1 and 2).
Fig. 2. Upper bounds versus iterations (centralized, Subsystem 1, Subsystem 2).
Fig. 3. Upper bounds versus time interval (centralized, Subsystem 1, Subsystem 2).

(1) For the uncertain discrete-time Markovian jump linear
system, the robust distributed MPC problem has seldom been discussed. By decomposing the control input, a distributed system is obtained from the global system, and all the subsystems can communicate with each other. (2) To make the presented techniques more flexible and applicable in practice, polytopic uncertainties in both the system matrices and the transition probability matrix of the Markov process are taken into account. (3) Constraints on the inputs and states are considered, and they are transformed into linear matrix inequalities by convex optimization techniques. (4) A novel state feedback control law is obtained by solving a series of LMI optimization problems, and an iterative algorithm is proposed to achieve cooperation among the subsystems.

The rest of this paper is organized as follows. In Section 2, the plant, modeled by an uncertain discrete-time Markovian jump linear system with constraints on the inputs and states, is introduced, and by decomposing the control input the global system is decomposed into M subsystems. Then, the distributed control law is applied to the distributed system. In Section 3, considering the unconstrained and constrained cases, respectively, sufficient conditions are provided to guarantee the stability and feasibility of the system, and the controller gain of each subsystem is derived in terms of the solutions to a sequence of linear matrix inequalities. An iterative distributed MPC algorithm is proposed to achieve cooperation among the subsystems. In Section 4, two simulation examples demonstrate the effectiveness of the proposed distributed MPC algorithm. Finally, we conclude the paper in Section 5.

Notation: Most notation used in the paper is fairly standard. R^n and R^{n×m} denote the n-dimensional Euclidean space and the set of all n×m real matrices, respectively. ||A|| refers to the norm of a matrix A defined by ||A|| = √(tr(A^T A)).
x_i(k+n|k) refers to the predicted state of subsystem i at time k+n based on the measurements at time k, and x_i(k|k) refers to the state of subsystem i at time k. u_i(k+n|k) refers to the control move at time k+n, and u_i(k|k) refers to the control move to be implemented at time k. r_{k+n|k} denotes the predicted mode at time k+n based on time k. M^T represents the transpose of the matrix M. I and 0 denote the identity matrix and the zero matrix of compatible dimensions. E{x} stands for the expectation of the stochastic variable x. P > 0 means that the matrix P is real symmetric and positive definite. The asterisk * in a matrix denotes a term induced by symmetry. Matrices, if their dimensions are not explicitly specified, are assumed to have compatible dimensions.
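As a quick sanity check, the matrix norm defined above coincides with NumPy's Frobenius norm; the example matrix is arbitrary:

```python
import numpy as np

# The paper's definition ||A|| = sqrt(tr(A^T A)), checked on a small example.
A = np.array([[3.0, 0.0], [4.0, 0.0]])
norm_def = float(np.sqrt(np.trace(A.T @ A)))   # sqrt(9 + 16) = 5
```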
Fig. 4. Ellipsoids for a series of time intervals (Subsystem 1 and Subsystem 2).
2. Problem formulation

2.1. Models

Consider the following MJLS with polytopic uncertainties in the system matrices:

x(k+1) = A^{(ξ)}(r_k)x(k) + B^{(ξ)}(r_k)u(k)
y(k) = C^{(ξ)}(r_k)x(k)        (1)

where x(k) ∈ R^{n_x}, u(k) ∈ R^{n_u}, y(k) ∈ R^{n_y} and r_k are the state vector, control input, controlled output and system mode, respectively. The initial state is x_0 and the initial mode is r_0. The true "state" involves two parts: the continuous part, i.e., x(k), and the discrete part, i.e., the mode r_k. For a fixed mode r_k, the MJLS is in essence a linear time-varying system. Let r_k (k ∈ [0, N)) be a Markov chain taking values in a finite state space M = {1, 2, …, S} with transition probabilities given by

ϱ(g, h) = Prob{r_{k+1} = h | r_k = g},  ∀g, h ∈ M        (2)

where ϱ(g, h) ≥ 0 (g, h ∈ M) is the transition probability from mode g to mode h, and Σ_{h=1}^S ϱ(g, h) = 1 for all g ∈ M. It is assumed that, for all r_k ∈ M, A^{(ξ)}(r_k), B^{(ξ)}(r_k) and C^{(ξ)}(r_k) are unknown matrices contained in a convex polyhedral set Ω(r_k) described by L vertices:

Ω(r_k) := Co{[A^{(1)}(r_k), B^{(1)}(r_k), C^{(1)}(r_k)], …, [A^{(L)}(r_k), B^{(L)}(r_k), C^{(L)}(r_k)]}
Fig. 5. Mode.
where Co refers to the convex hull, that is,

[A^{(ξ)}(r_k), B^{(ξ)}(r_k), C^{(ξ)}(r_k)] = Σ_{l=1}^L λ_l [A^{(l)}(r_k), B^{(l)}(r_k), C^{(l)}(r_k)],  λ_l ≥ 0,  Σ_{l=1}^L λ_l = 1        (3)

The transition probabilities from mode g ∈ M, [ϱ(g, 1), ϱ(g, 2), …, ϱ(g, S)], are assumed not to be known exactly, but to belong to a polytopic set as follows:

[ϱ(g, 1), ϱ(g, 2), …, ϱ(g, S)] ∈ Ω_P = Co{P^{(1)}(g, :), P^{(2)}(g, :), …, P^{(T)}(g, :)}        (4)

where P^{(t)}(g, :) = [ϱ(g, 1, t), ϱ(g, 2, t), …, ϱ(g, S, t)], t = 1, …, T, g ∈ M.

Remark 1. Because of uncertainties resulting from modeling errors and inevitable external environmental disturbances, it is hard to obtain an exact system model. To tackle this challenge, uncertain system models have been adopted as a powerful tool and investigated extensively during the past few decades. In this paper, we consider polytopic uncertainties in both the system matrices and the transition probability matrix of the Markov process, which is closer to the real system. Under this engineering background, assume that input/output data sets are available at different operating points, or at different times. From each data set a linear model is obtained, and for simplicity we assume that the various linear models refer to the same state vector. It is then reasonable that any analysis and synthesis for the polytopic system with vertices given by these linear models applies to the real system.

The states and control inputs in model (1) can be decomposed into M subsystems as follows:

[x_{11}(k+1); …; x_{ii}(k+1); …; x_{MM}(k+1)] = A^{(ξ)}(r_k)[x_{11}(k); …; x_{ii}(k); …; x_{MM}(k)] + [B_1^{(ξ)}(r_k), …, B_i^{(ξ)}(r_k), …, B_M^{(ξ)}(r_k)][u_1(k); …; u_i(k); …; u_M(k)]

y_i(k) = C_i^{(ξ)}(r_k)[x_{11}^T(k), …, x_{ii}^T(k), …, x_{MM}^T(k)]^T
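The mode process (2) under a transition matrix drawn from the polytope (4) can be simulated directly. In the sketch below the vertex matrices, the mixing weight and the number of modes are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vertex transition matrices (S = 2 modes, T = 2 vertices).
P1 = np.array([[0.9, 0.1], [0.3, 0.7]])
P2 = np.array([[0.6, 0.4], [0.2, 0.8]])

def sample_chain(lam, r0=0, steps=1000):
    """Simulate the mode process r_k under a fixed convex combination
    of the vertex matrices (lam in [0, 1])."""
    P = lam * P1 + (1.0 - lam) * P2            # one point of the polytope (4)
    assert np.allclose(P.sum(axis=1), 1.0)     # rows still sum to one
    r, modes = r0, [r0]
    for _ in range(steps):
        r = int(rng.choice(2, p=P[r]))
        modes.append(r)
    return modes

modes = sample_chain(0.5)
```

Any convex combination of row-stochastic vertices is again row-stochastic, which is why the polytopic description (4) is well posed.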
Fig. 6. Tracking comparison (centralized, distributed and decentralized output values over the time interval).
Assumption 1. The vector of states x_i = [x_{11}^T, …, x_{ii}^T, …, x_{MM}^T]^T, x_{ii} ∈ R^{n_{x_{ii}}}, includes all the states of the system, that is, the local state x_{ii}, which can be measured or estimated, as well as the other subsystems' states exchanged via communication. For uniformity of form, we denote it as x_i to stress that this is the vector used for computing the local control input u_i.

Assumption 2. Control agents are synchronous and communicate only once within a sampling interval.

Assumption 3. In this paper, we neglect the effects of packet loss, packet disorder, actuator saturation, time delay, etc.; that is, we assume that the system operates under ideal conditions.

Based on the above assumptions, the distributed subsystems with MJLS dynamics can be rewritten as

x_i(k+1) = A_i^{(ξ)}(r_k)x_i(k) + B_i^{(ξ)}(r_k)u_i(k) + Σ_{j=1, j≠i}^M B_j^{(ξ)}(r_k)u_j(k)
y_i(k) = C_i^{(ξ)}(r_k)x_i(k),  i = 1, 2, …, M        (5)

where x_i ∈ R^{n_x}, u_i ∈ R^{n_{u_i}} and y_i ∈ R^{n_{y_i}} are the state, control input and controlled output of subsystem i at time interval k, respectively. It should be pointed out that x_i contains all the states of the system, but only x_{ii} is directly measured; the rest of the states are obtained via communication or estimated. Accordingly, A_i^{(ξ)}(r_k) and C_i^{(ξ)}(r_k) contain all the elements of the matrices A^{(ξ)}(r_k) and C^{(ξ)}(r_k), respectively, that is, A_i^{(ξ)}(r_k) = A^{(ξ)}(r_k) and C_i^{(ξ)}(r_k) = C^{(ξ)}(r_k), while B_i^{(ξ)}(r_k) is the i-th column block of B^{(ξ)}(r_k). Then, using the concept of a polytopic model given in Eq. (3), it is assumed that the model in Eq. (5) can be represented as follows:

[A_i^{(ξ)}(r_k), B_i^{(ξ)}(r_k), C_i^{(ξ)}(r_k)] = Σ_{l=1}^L λ_l [A_i^{(l)}(r_k), B_i^{(l)}(r_k), C_i^{(l)}(r_k)]        (6)
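The column-wise decomposition of B^{(ξ)}(r_k) underlying Eq. (5) can be illustrated numerically. The matrices and inputs below are arbitrary, and the check confirms that the subsystem form reproduces the global update of model (1):

```python
import numpy as np

# Illustrative global model (mode fixed): x(k+1) = A x(k) + B u(k),
# with B split column-wise into M = 2 subsystem input channels.
A = np.array([[0.8, 0.1], [0.0, 0.9]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
B1, B2 = B[:, [0]], B[:, [1]]            # B_i: the i-th column block of B

x = np.array([[1.0], [1.0]])
u1, u2 = np.array([[-0.5]]), np.array([[-0.2]])

# Subsystem form (5): each subsystem propagates the full state using its
# own input plus the inputs communicated by the other subsystems.
x_next_sub = A @ x + B1 @ u1 + B2 @ u2
x_next_glob = A @ x + B @ np.vstack([u1, u2])
assert np.allclose(x_next_sub, x_next_glob)  # both forms agree
```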
Remark 2. With the rapid development of technology, control requirements in both accuracy and speed have become more and more stringent, which increases the complexity of systems. In this paper, by decomposing the control input n_u = Σ_{i=1}^M n_{u_i}, the system (5) gives rise to a distributed MPC problem, that is, each subsystem exchanges information with the other subsystems via communication. Compared with centralized MPC, distributed MPC strategies can reduce the computational complexity and the required communication bandwidth, and have thus received an increasing amount of research interest [26–33]. In addition, because of the parameter uncertainties under consideration, the problem becomes a robust distributed MPC problem, whose purpose is to design a distributed controller for each closed-loop subsystem so as to reach a global performance.

Fig. 7. Tracking comparison (centralized, distributed and decentralized output values over the time interval).
Here, we consider the following constraints on the inputs and states:

|[u_i(k+n, r_{k+n})]_j| ≤ [ū_i(r_{k+n})]_j,  n ≥ 0, r_{k+n} ∈ M, j = 1, 2, …, n_{u_i}, i = 1, 2, …, M        (7)

|[φ_i]_j x_i(k+n)| ≤ [x̄_i]_j,  n ≥ 0, j = 1, 2, …, n_{φ_i}, i = 1, 2, …, M        (8)

where φ_i ∈ R^{n_{φ_i} × n_x}, and [·]_j denotes the j-th element of a vector or the j-th row of a matrix.

Remark 3. In this paper, componentwise peak bounds on the inputs and states are taken into consideration. It should be particularly noted that the constraints on the states are not mode-dependent. In detail, suppose that subsystem i is in mode r_k at time instant k and the control input is determined as u_i(k); once the control move is implemented, the state x_i(k+1) at time instant k+1 is determined regardless of the mode, and hence the dynamics of each subsystem are determined at time instant k+1.

2.2. Distributed MPC controller design

In this paper, the worst-case infinite horizon cost function for subsystem i is designed at each time interval k ≥ 0 as follows:

min_{u_i(k+n|k), n ≥ 0, i = 1, …, M}  max_{[A_i^{(ξ)}(r_{k+n|k}), B_i^{(ξ)}(r_{k+n|k}), C_i^{(ξ)}(r_{k+n|k})] ∈ Ω(r_k), i = 1, …, M, r_{k+n|k} ∈ M}  J_i(k)        (9)
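Componentwise peak bounds of the form (7)-(8) are straightforward to check for a candidate input/state pair; the bounds and the selection matrix φ_i below are illustrative:

```python
import numpy as np

# Componentwise peak bounds as in (7)-(8); all values are illustrative.
u_bar = np.array([0.5])            # input bound for subsystem i
x_bar = np.array([1.0, 1.0])       # state bounds
phi = np.eye(2)                    # phi_i selects the constrained state rows

def satisfies_constraints(u, x):
    ok_u = np.all(np.abs(u) <= u_bar)         # |[u_i]_j| <= [u_bar_i]_j
    ok_x = np.all(np.abs(phi @ x) <= x_bar)   # |[phi_i]_j x| <= [x_bar_i]_j
    return bool(ok_u and ok_x)

assert satisfies_constraints(np.array([0.3]), np.array([0.4, -0.9]))
assert not satisfies_constraints(np.array([0.6]), np.array([0.0, 0.0]))
```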
Fig. 8. Dynamic input (centralized, distributed and decentralized input values over the time interval).
where

J_i(k) = E_k[ Σ_{n=0}^∞ ( ||x_i(k+n|k)||²_{Q(r_{k+n|k})} + ||u_i(k+n|k)||²_{R_i(r_{k+n|k})} + Σ_{j=1, j≠i}^M ||u_j*(k+n|k)||²_{R_j(r_{k+n|k})} ) ]
and Q(r_{k+n|k}) > 0, R_i(r_{k+n|k}) > 0 and R_j(r_{k+n|k}) > 0 are given symmetric weighting matrices. The objective accounts for the overall control objective of the entire system, since it takes into account the goals of the current controller as well as those of the other controllers. The superscript "*" indicates that the corresponding solution was obtained in a previous iteration and is kept fixed in the current iteration. Since this term is fixed during the current iteration, it does not affect the objective function during optimization; however, the value of the controller produced at the previous iteration does affect the constraints of the problem. The distributed MPC controller to be designed for each subsystem i takes the following state-feedback form:

u_i(k+n|k) = F_{ii}(k, r_{k+n|k})x_{ii}(k+n|k) + Σ_{j=1, j≠i}^M F_{ij}(k, r_{k+n|k})x_{ij}(k+n|k)
           = F_i(k, r_{k+n|k})x_i(k+n|k),  n ≥ 0        (10)

Similarly,

u_j*(k+n|k) = F_{jj}*(k, r_{k+n|k})x_{jj}(k+n|k) + Σ_{i=1, i≠j}^M F_{ji}*(k, r_{k+n|k})x_{ji}(k+n|k)
            = F_j*(k, r_{k+n|k})x_j(k+n|k),  n ≥ 0        (11)

In particular, when n = 0, u_i(k|k) = u_i(k). Here x_{ij} has the same meaning as x_{ii}; it denotes the state of subsystem j received via communication or estimated. Next, we use x_{i,x_0} to denote the trajectory of the system (5) under the controllers (10) and (11) starting from the initial value x_0. The notion of the "domain of attraction in the mean-square sense" is introduced in the following definition.

Definition 1. The set D = {x_0 ∈ R^{n_x} : lim_{k→∞} E||x_{i,x_0}||² = 0} is said to be the mean-square domain of attraction of the origin of the system (5) with the feedback control laws (10) and (11).
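Mean-square convergence in the sense of Definition 1 can be estimated by Monte Carlo simulation for a given closed-loop realization. The mode-dependent closed-loop matrices and transition matrix below are illustrative stand-ins, not gains produced by the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-mode closed-loop matrices A_cl(r) = A(r) + B(r)F(r),
# both chosen stable here, and a fixed transition matrix.
A_cl = [np.array([[0.5, 0.1], [0.0, 0.4]]),
        np.array([[0.3, 0.0], [0.2, 0.6]])]
P = np.array([[0.8, 0.2], [0.5, 0.5]])

def ms_norm(x0, steps=40, runs=200):
    """Monte Carlo estimate of E||x_k||^2 at the final step."""
    acc = 0.0
    for _ in range(runs):
        x, r = x0.copy(), 0
        for _ in range(steps):
            x = A_cl[r] @ x                  # mode-dependent closed loop
            r = int(rng.choice(2, p=P[r]))   # next Markov mode
        acc += float(x @ x)
    return acc / runs

final = ms_norm(np.array([1.0, 1.0]))
```

For this (deliberately contractive) example the estimate decays toward zero, consistent with x_0 lying in the mean-square domain of attraction.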
Fig. 9. Dynamic input (centralized, distributed and decentralized input values over the time interval).
Define an ellipsoid Π(P_i, ρ_i) = {x_i ∈ R^{n_x} : x_i^T P_i x_i ≤ ρ_i}, where P_i ∈ R^{n_x × n_x} is a positive definite matrix and ρ_i ∈ R is a positive scalar. The purpose of this paper is to design the distributed MPC controller described by Eqs. (10) and (11) for subsystem (5). More specifically, we are interested in determining the ellipsoid parameters P_i and ρ_i such that the zero solution of the system (5) under the control laws (10) and (11) is locally mean-square stable and the ellipsoid Π(P_i, ρ_i) is contained in its mean-square domain of attraction D.

Lemma 1. Let A^{(ξ)} and B^{(ξ)} be polytopic uncertainties with appropriate dimensions and let F be a constant matrix; then S^{(ξ)} = A^{(ξ)} + B^{(ξ)}F is also a polytopic uncertainty.

Proof. Since A^{(ξ)} and B^{(ξ)} are polytopic uncertainties, S^{(ξ)} = A^{(ξ)} + B^{(ξ)}F = Σ_{l=1}^L λ_l A^{(l)} + Σ_{l=1}^L λ_l B^{(l)}F. Noticing that F is constant, we have S^{(ξ)} = Σ_{l=1}^L λ_l M^{(l)} with M^{(l)} = A^{(l)} + B^{(l)}F. We can therefore conclude that S^{(ξ)} is a polytopic uncertainty. This completes the proof. □

In the sequel, for presentation convenience, we denote each possible mode by r_{k+n|k} = p (p ∈ M).

3. Main results

3.1. Unconstrained distributed MPC

Fig. 10. Mode.

Consider the following Lyapunov functional:

V_i(k+n|k) = x_i^T(k+n|k)P_i(k, p)x_i(k+n|k),  i = 1, …, M, n ≥ 0, p ∈ M

where the symmetric positive definite weighting matrices P_i(k, p) will be determined by solving the optimization problem of the distributed MPC. To derive an upper bound on Eq. (9), impose the following robust stability constraints:

V_i(k+n+1|k) − V_i(k+n|k) ≤ −[ x_i^T(k+n|k)Q(p)x_i(k+n|k) + u_i^T(k+n|k)R_i(p)u_i(k+n|k) + Σ_{j=1, j≠i}^M u_j*^T(k+n|k)R_j(p)u_j*(k+n|k) ],  n ≥ 0, i = 1, …, M.        (12)
Provided that the robust stability constraints (12) are satisfied, the ellipsoid Π(P_i(k), ρ_i(k)) is contained in the mean-square domain of attraction of the closed-loop system (5). Furthermore, in the following lemma, by means of Eq. (12), we obtain an upper bound for the optimization problem.

Lemma 2. For each subsystem i, the robust stability constraints (12) are satisfied if there exist a positive scalar ρ_i(k), positive definite matrices W_i(k, p) = ρ_i(k)P_i^{-1}(k, p) > 0, and matrices Y_i(k, p) = F_i(k, p)W_i(k, p), such that, for l = 1, …, L and t = 1, …, T, the following LMIs:

[ −W_i(k, p)              *                         ⋯   *                          *           *         ]
[  Φ_i(k, p)             −ϱ^{−1}(p, 1, t)W_i(k, 1)   ⋯   0                          0           0         ]
[  ⋮                      ⋮                         ⋱   ⋮                          ⋮           ⋮         ]
[  Φ_i(k, p)              0                         ⋯  −ϱ^{−1}(p, S, t)W_i(k, S)   0           0         ]
[  Ψ_i^{1/2}(p)W_i(k, p)  0                         ⋯   0                         −ρ_i(k)I     0         ]
[  R_i^{1/2}(p)Y_i(k, p)  0                         ⋯   0                          0          −ρ_i(k)I   ]  ≤ 0        (13)

[ −1        *           ]
[  x_i(k)  −W_i(k, r_k) ]  ≤ 0        (14)
hold. Then the gain matrix is F_i(k, p) = Y_i(k, p)W_i^{-1}(k, p), and the upper bound of the optimization problem is ρ_i(k), where Φ_i(k, p) = Â_i^{(l)}(p)W_i(k, p) + B_i^{(l)}(p)Y_i(k, p) and Ψ_i(p) = Q(p) + Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p).

Proof. The difference of V_i(k+n|k) along the system (5) can be calculated as follows:

E_{k+n|k}[ΔV_i(k+n|k)] = E_{k+n|k}[V_i(k+n+1|k) − V_i(k+n|k)]
= E_{k+n|k}[x_i^T(k+n+1|k)P_i(k, r_{k+n+1|k})x_i(k+n+1|k)] − x_i^T(k+n|k)P_i(k, p)x_i(k+n|k)
= x_i^T(k+n|k){ [Â_i^{(ξ)}(p) + B_i^{(ξ)}(p)F_i(k, p)]^T E_{k+n|k}[P_i(k, r_{k+n+1|k})][Â_i^{(ξ)}(p) + B_i^{(ξ)}(p)F_i(k, p)] − P_i(k, p) } x_i(k+n|k)        (15)

where Â_i^{(ξ)}(p) = A_i^{(ξ)}(p) + Σ_{j=1, j≠i}^M B_j^{(ξ)}(p)F_j*(k, p). By Lemma 1, Â_i^{(ξ)}(p) is a polytopic uncertainty. Furthermore, Eq. (12) can be reformulated as follows:

E_{k+n|k}[ΔV_i(k+n|k)] ≤ −x_i^T(k+n|k)[ Q(p) + F_i^T(k, p)R_i(p)F_i(k, p) + Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p) ]x_i(k+n|k)        (16)

Combining Eqs. (15) and (16), we obtain the inequality

x_i^T(k+n|k){ [Â_i^{(ξ)}(p) + B_i^{(ξ)}(p)F_i(k, p)]^T E_{k+n|k}[P_i(k, r_{k+n+1|k})][Â_i^{(ξ)}(p) + B_i^{(ξ)}(p)F_i(k, p)] − P_i(k, p) + Q(p) + F_i^T(k, p)R_i(p)F_i(k, p) + Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p) }x_i(k+n|k) ≤ 0

According to Eq. (2), it is clear that E_{k+n|k}[P_i(k, r_{k+n+1|k})] = Σ_{h=1}^S ϱ(p, h)P_i(k, h). The above inequality is then equivalent to

x_i^T(k+n|k){ [Â_i^{(ξ)}(p) + B_i^{(ξ)}(p)F_i(k, p)]^T Σ_{h=1}^S ϱ(p, h)P_i(k, h) [Â_i^{(ξ)}(p) + B_i^{(ξ)}(p)F_i(k, p)] − P_i(k, p) + Q(p) + F_i^T(k, p)R_i(p)F_i(k, p) + Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p) }x_i(k+n|k) ≤ 0

Since Â_i^{(ξ)}(p), B_i^{(ξ)}(p) and ϱ(g, h) are polytopic uncertainties, according to Eqs. (4) and (6) the above inequality is satisfied if and only if, at every vertex,

x_i^T(k+n|k){ [Â_i^{(l)}(p) + B_i^{(l)}(p)F_i(k, p)]^T Σ_{h=1}^S ϱ(p, h, t)P_i(k, h) [Â_i^{(l)}(p) + B_i^{(l)}(p)F_i(k, p)] − P_i(k, p) + Q(p) + F_i^T(k, p)R_i(p)F_i(k, p) + Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p) }x_i(k+n|k) ≤ 0        (17)

Pre- and post-multiplying the bracketed term in Eq. (17) by P_i^{-1}(k, p) results in

[Â_i^{(l)}(p)P_i^{-1}(k, p) + B_i^{(l)}(p)F_i(k, p)P_i^{-1}(k, p)]^T Σ_{h=1}^S ϱ(p, h, t)P_i(k, h) [Â_i^{(l)}(p)P_i^{-1}(k, p) + B_i^{(l)}(p)F_i(k, p)P_i^{-1}(k, p)] − P_i^{-1}(k, p) + P_i^{-1}(k, p)Q(p)P_i^{-1}(k, p) + (F_i(k, p)P_i^{-1}(k, p))^T R_i(p)(F_i(k, p)P_i^{-1}(k, p)) + P_i^{-1}(k, p)[ Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p) ]P_i^{-1}(k, p) ≤ 0        (18)

Then, multiplying Eq. (18) by ρ_i(k) and substituting W_i(k, p) = ρ_i(k)P_i^{-1}(k, p) and Y_i(k, p) = F_i(k, p)W_i(k, p), Eq. (18) can be further simplified to

[Â_i^{(l)}(p)W_i(k, p) + B_i^{(l)}(p)Y_i(k, p)]^T Σ_{h=1}^S ϱ(p, h, t)W_i^{-1}(k, h) [Â_i^{(l)}(p)W_i(k, p) + B_i^{(l)}(p)Y_i(k, p)] − W_i(k, p) + W_i^T(k, p)ρ_i^{-1}(k)Q(p)W_i(k, p) + Y_i^T(k, p)ρ_i^{-1}(k)R_i(p)Y_i(k, p) + W_i(k, p)ρ_i^{-1}(k)[ Σ_{j=1, j≠i}^M F_j*^T(k, p)R_j(p)F_j*(k, p) ]W_i(k, p) ≤ 0

By the Schur complement, this inequality is satisfied if LMI (13) holds. In the following, based on the robust stability constraints (12), an upper bound on the worst-case infinite horizon cost (9) is obtained. First, taking the expected values of both sides of Eq. (12) conditional on the information available at time instant k, the following inequalities hold:

E_k[V_i(k+n+1|k) − V_i(k+n|k)] ≤ −E_k[ x_i^T(k+n|k)Q(p)x_i(k+n|k) + u_i^T(k+n|k)R_i(p)u_i(k+n|k) + Σ_{j=1, j≠i}^M u_j*^T(k+n|k)R_j(p)u_j*(k+n|k) ]

Then, summing both sides of the above inequalities from n = 0 to ∞, we have

E_k[V_i(k)] − E_k[V_i(∞|k)] ≥ E_k[ Σ_{n=0}^∞ ( x_i^T(k+n|k)Q(p)x_i(k+n|k) + u_i^T(k+n|k)R_i(p)u_i(k+n|k) + Σ_{j=1, j≠i}^M u_j*^T(k+n|k)R_j(p)u_j*(k+n|k) ) ]

Since V_i(∞|k) = 0, it follows that

J_i(k) = E_k[ Σ_{n=0}^∞ ( ||x_i(k+n|k)||²_{Q(r_{k+n|k})} + ||u_i(k+n|k)||²_{R_i(r_{k+n|k})} + Σ_{j=1, j≠i}^M ||u_j*(k+n|k)||²_{R_j(r_{k+n|k})} ) ] ≤ E_k[V_i(k)] = x_i^T(k)P_i(k, r_k)x_i(k)

Define an upper bound x_i^T(k)P_i(k, r_k)x_i(k) ≤ ρ_i(k); by the Schur complement this can be written as Eq. (14), and we obtain J_i(k) ≤ ρ_i(k). Thus ρ_i(k) is an upper bound on the expected cost function. This completes the proof. □

Next, we minimize this upper bound in order to approximately minimize the worst-case infinite horizon expected cost function. We are now ready to present the unconstrained distributed MPC of MJLSs with polytopic uncertainties in terms of a minimization problem at each time instant k:

min_{W_i(k, p), Y_i(k, p), p ∈ M} ρ_i(k)  s.t. (13), (14)        (19)

In the following theorem, we discuss the feasibility and stability of Eq. (19).
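The Schur-complement step used to rewrite the bound x_i^T(k)P_i(k, r_k)x_i(k) ≤ ρ_i(k) as the LMI (14), with W_i = ρ_i P_i^{-1}, can be verified numerically; the matrix P, the bound ρ and the test points below are arbitrary:

```python
import numpy as np

# Check: x^T P x <= rho  <=>  [[-1, x^T], [x, -W]] <= 0 with W = rho * P^{-1}.
rho = 2.0
P = np.array([[2.0, 0.5], [0.5, 1.0]])
W = rho * np.linalg.inv(P)

def lmi_holds(x):
    M = np.block([[-np.ones((1, 1)), x[None, :]],
                  [x[:, None], -W]])
    return bool(np.max(np.linalg.eigvalsh(M)) <= 1e-9)  # negative semidefinite?

x_in = np.array([0.5, 0.5])    # x^T P x = 1.0  <= rho: LMI should hold
x_out = np.array([1.5, 0.0])   # x^T P x = 4.5  >  rho: LMI should fail
assert lmi_holds(x_in) and not lmi_holds(x_out)
```

Taking the Schur complement with respect to the (1,1) block gives −W + x x^T ≤ 0, i.e. x^T W^{-1} x ≤ 1, which is exactly x^T P x ≤ ρ.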
Theorem 1. Consider the unconstrained uncertain MJLS described by Eqs. (4)–(6). If there is a feasible solution to the optimization problem (19) at time instant k for the initial state x_k and initial mode r_k, then there also exists a feasible solution at every time instant t ≥ k, and the distributed MPC control law F_i(k, r_k) = Y_i(k, r_k)W_i^{-1}(k, r_k) based on Eq. (19) guarantees closed-loop stability of the system in the mean-square sense.

Proof. The proof of this theorem is similar to that of Theorem 3 in [8]. For brevity, it is omitted here. □

3.2. Constrained distributed MPC

In this section, we focus on the constrained distributed MPC design for uncertain MJLSs described by Eqs. (4)–(6) subject to the constraints (7)–(8) on the inputs and states, by incorporating these constraints into the optimization (19) in terms of LMIs.

Lemma 3. The constraints (7)–(8) on the inputs and states are satisfied if there exist symmetric matrices W_i(k, p), X_i(k, p), Z_i(k, p) and matrices Y_i(k, p) = F_i(k, p)W_i(k, p), p ∈ M, such that the following LMIs:

[ −X_i(k, p)   Y_i(k, p) ]
[     *       −W_i(k, p) ]  < 0,  [X_i(k, p)]_{jj} ≤ [ū_i(p)]_j²,  p ∈ M, j = 1, …, n_{u_i}, i = 1, 2, …, M        (20)

[ −Z_i(k, p)   φ_i W_i(k, p) ]
[     *        −W_i(k, p)    ]  < 0,  [Z_i(k, p)]_{jj} ≤ [x̄_i]_j²,  p ∈ M, j = 1, …, n_{φ_i}, i = 1, 2, …, M        (21)
hold.

Proof. Considering the constraints (7) on the inputs, according to Eq. (14) and the Cauchy–Schwarz inequality, it follows that

|[u_i(k+n|k)]_j|² = |e_j F_i(k, p)x_i(k+n|k)|²
= |e_j Y_i(k, p)W_i^{-1}(k, p)x_i(k+n|k)|²
= |e_j Y_i(k, p)W_i^{-1/2}(k, p)W_i^{-1/2}(k, p)x_i(k+n|k)|²
≤ ||e_j Y_i(k, p)W_i^{-1/2}(k, p)||² ||W_i^{-1/2}(k, p)x_i(k+n|k)||²
≤ ||e_j Y_i(k, p)W_i^{-1/2}(k, p)||²
= |e_j Y_i(k, p)W_i^{-1}(k, p)Y_i^T(k, p)e_j^T|

where e_j denotes the j-th row of the identity matrix. Defining Y_i(k, p)W_i^{-1}(k, p)Y_i^T(k, p) < X_i(k, p), Eq. (7) is then guaranteed by Eq. (20). Next, considering the constraints (8) on the states, we have

|[φ_i]_j x_i(k+n|k)|² = |[φ_i]_j W_i^{1/2}(k, p)W_i^{-1/2}(k, p)x_i(k+n|k)|² ≤ ||[φ_i]_j W_i^{1/2}(k, p)||² = |e_j φ_i W_i(k, p)φ_i^T e_j^T| ≤ [x̄_i]_j²        (22)

Defining φ_i W_i(k, p)φ_i^T < Z_i(k, p), this is guaranteed by Eq. (21). □
After the transformation of the constraints into LMIs, the distributed MPC optimization problem of Eq. (19) subject to constraints on the inputs and states can be transformed into the
following one:

min_{W_i(k, p), Y_i(k, p), X_i(k, p), Z_i(k, p)} ρ_i(k)  s.t. (13), (14), (20), (21),  p ∈ M, i = 1, 2, …, M.        (23)
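The per-subsystem problems (23) are coupled through the previous-iteration gains F_j* held fixed in each local problem. Such coupling can be resolved by a Jacobi-type fixed-point iteration, sketched below with a placeholder local update: `local_solve` stands in for solving (23), and its averaging rule is purely illustrative, chosen only to make the loop runnable:

```python
import numpy as np

# Jacobi-style iteration over coupled subsystem problems: each subsystem
# updates its gain with the *other* gains frozen at the previous iterate.
def local_solve(i, gains):
    others = [g for j, g in enumerate(gains) if j != i]
    return 0.5 * gains[i] + 0.5 * np.mean(others, axis=0)

def jacobi_iterate(gains, eps=1e-8, t_max=100):
    for t in range(1, t_max + 1):
        new = [local_solve(i, gains) for i in range(len(gains))]  # in parallel
        if max(np.linalg.norm(n - g) for n, g in zip(new, gains)) <= eps:
            return new, t                 # converged within tolerance eps
        gains = new                       # exchange solutions, iterate again
    return gains, t_max

gains, iters = jacobi_iterate([np.array([1.0]), np.array([0.0]), np.array([0.5])])
```

For this contractive toy update, all three "gains" converge to a common value well before the iteration cap, mirroring the convergence behavior claimed for the distributed algorithm below.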
Theorem 2. Consider the constrained uncertain MJLSs described by Eqs. (4)–(6). If there is a feasible solution to the optimization problem (23) at time instant k for the initial state x_k and initial mode r_k, then there also exists a feasible solution at every time instant t ≥ k, and the distributed MPC state-feedback control law F_i(k, r_k) = Y_i(k, r_k)W_i^{-1}(k, r_k) based on Eq. (23) guarantees closed-loop stability of the system in the mean-square sense.

The proof follows along the same lines as in [8,9,20] and is therefore omitted.

Remark 4. Consider the convex optimization problem (23) with the LMI conditions (13), (14), (20) and (21). For a fixed mode r_k, the centralized MPC algorithm requires (L^(2Mn_ui + 1) + 2M(n_ui + n_φi)) LMIs to be solved, whereas the distributed MPC algorithm requires only (L^(2n_ui + 1) + 2(n_ui + n_φi)) LMIs for each subsystem. In other words, compared with the centralized MPC algorithm, applying Theorem 1 with the distributed MPC algorithm greatly reduces the computational burden of solving the LMIs, and the computation time can be reduced accordingly by the subsystems working together on different computers.

3.3. Robust distributed MPC algorithm

To deal with the coupling in Eq. (23), we propose the following on-line robust distributed MPC algorithm at the given initial mode r_k. The algorithm adopts the Jacobi iterative method to solve the coupled subproblems.

Step 0: At the first sampling interval k = 0, set an initial feedback law F_i(0, r_k), i = 1, 2, …, M.
Step 1: At time interval k, all subsystems exchange their local state measurements and initial feedback laws F_i(k, r_k) via communication; set the iteration counter t = 1 and F_i(k, r_k) = F_i(0, r_k).
Step 2: Solve all M LMI problems (23) in parallel to obtain the optimal Y_i^(t)(k, r_k) and W_i^(t)(k, r_k), and form the feedback laws F_i^(t)(k, r_k) = Y_i^(t)(k, r_k)W_i^{-1(t)}(k, r_k).
Step 3: For all subsystems, check convergence of the feedback law against a specified error tolerance ε_i:

‖F_i^(t)(k, r_k) − F_i^(t−1)(k, r_k)‖ ≤ ε_i,  i = 1, 2, …, M.

If the convergence condition is satisfied or t = t_max, the current F_i^(t)(k, r_k) is taken as the optimal feedback law. Otherwise, update the initial feedback laws with F_i(k, r_k) = F_i^(t)(k, r_k), set t = t + 1, exchange the solutions with the other subsystems, and repeat Step 2.
Step 4: Apply the optimal control u_i(k) = F_i(k, r_k)x_i(k) to the corresponding subsystem. Set the time interval k = k + 1 and go back to Step 1.

Generally speaking, Step 1 performs the updating, Steps 2 and 3 constitute the iteration, and Step 4 is the implementation.

Lemma 4 (Zhang et al. [32], Song and Fang [33]). For the robust distributed MPC algorithm, as the number of iterations increases, the performance indices ρ_1(k), …, ρ_M(k) of the subsystems converge to the result of the centralized problem, i.e., ρ_1^(t)(k) = ⋯ = ρ_M^(t)(k) = ρ(k),
where ρ(k) is the upper bound of the centralized MPC at time interval k and t denotes the number of Jacobi iterations.

Remark 5. From Lemma 4, it can be concluded that the robust distributed MPC algorithm can reach the same performance as the centralized MPC algorithm given enough iterations. However, as the number of iterations increases, the computational time inevitably grows, so there is a tradeoff between computational time and distributed performance.

Remark 6. The system (5) under consideration is quite comprehensive: it reflects uncertain transition probabilities as well as constraints on the states and inputs. By decomposing the control input, the distributed MPC problem is studied for this system, a mode-dependent state-feedback controller is designed, and robust stability in the mean-square sense is guaranteed for each subsystem under admissible constraints and uncertainties. In addition, the proposed Jacobi iterative algorithm is suitable for on-line design. Simulation examples illustrating the effectiveness of the proposed algorithm are given in the next section.

3.4. Set-point tracking problem

In the following, we consider the set-point tracking problem for the robust distributed MPC algorithm. The system output is required to track a reference trajectory y_r(k), where the steady state x_r(k) and control input u_r(k) are computed by solving the following equations:

x_r(k+1) = A_r x_r(k) + B_r u_r(k)
y_r(k) = C_r x_r(k)                                        (24)
x_r(k+1) = x_r(k)

where (A_r, B_r, C_r) is the nominal model of the global system. For subsystem i, the corresponding set-point tracking cost function can be defined as follows:

J_i(k) = E_k ∑_{n=0}^{∞} [ ‖y_i(k+n|k) − y_{i,r}(k+n|k)‖²_{Q(r_{k+n|k})} + ‖u_i(k+n|k) − u_{i,r}(k+n|k)‖²_{R_i(r_{k+n|k})} + ∑_{j=1, j≠i}^{M} ‖u_j(k+n|k) − u_{j,r}(k+n|k)‖²_{R_j(r_{k+n|k})} ]

where y_{i,r}(k+n|k) and u_{i,r}(k+n|k) denote the reference output and input of subsystem i at time k+n based on the measurements at time k, respectively. In particular, the reference state x_{i,r}(k+n|k) and reference input u_{i,r}(k+n|k) are computed off-line. As discussed in [34], in order to cast the set-point tracking distributed MPC problem into the standard form (23), we define a shifted state x̃_i = x_i − x_{i,r} and a shifted input ũ_i = u_i − u_{i,r}. The shifted state-space model is then given by

x̃_i(k+1) = A_i^(ξ)(r_k) x̃_i(k) + B_i^(ξ)(r_k) ũ_i(k) + ∑_{j=1, j≠i}^{M} B_j^(ξ)(r_k) ũ_j(k)
ỹ_i(k) = C_i^(ξ)(r_k) x̃_i(k)
with the initial state x̃_i(0) = x_i(0) − x_{i,r}(0). Let Q̃ = C_i^(ξ)T(r_k) Q C_i^(ξ)(r_k). We then have the following cost function for the set-point tracking distributed MPC problem:

J_i(k) = E_k ∑_{n=0}^{∞} [ ‖x̃_i(k+n|k)‖²_{Q̃(r_{k+n|k})} + ‖ũ_i(k+n|k)‖²_{R_i(r_{k+n|k})} + ∑_{j=1, j≠i}^{M} ‖ũ_j(k+n|k)‖²_{R_j(r_{k+n|k})} ]
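The reduction to the standard form rests on the fact that shifting by a steady-state reference pair commutes with the nominal dynamics. The following small numerical check illustrates this with hypothetical single-subsystem data; the coupling terms and mode dependence are dropped purely for illustration, and A, B and the reference pair below are assumptions, not taken from the paper's examples:

```python
import numpy as np

# Hypothetical single-subsystem data (illustration only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.2]])

# A reference pair (x_r, u_r) satisfying the steady-state
# condition x_r = A x_r + B u_r from Eq. (24).
u_r = np.array([1.0])
x_r = np.linalg.solve(np.eye(2) - A, B @ u_r)

# One step of the original model and of the shifted model.
x = np.array([0.5, -0.3])
u = np.array([0.2])
x_next = A @ x + B @ u                         # original dynamics
x_tilde_next = A @ (x - x_r) + B @ (u - u_r)   # shifted dynamics

# Because (x_r, u_r) is a steady state, the shift commutes with
# the dynamics: x~(k+1) = x(k+1) - x_r.
assert np.allclose(x_tilde_next, x_next - x_r)
```

This is exactly why the tracking cost above has the same structure as the regulation cost used in (23): in the shifted coordinates, tracking y_r reduces to driving (x̃, ũ) to the origin.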
Then, the corresponding stabilizing controller is given by

u_i(k+n|k) = F_i(k, r_{k+n|k})[x_i(k+n|k) − x_{i,r}(k+n|k)] + u_{i,r}(k+n|k)

4. Illustrative examples

In this section, two simulation examples are presented to demonstrate the effectiveness of the proposed robust distributed MPC algorithm for MJLSs subject to the constraints on the inputs and states given by Eq. (1). In both examples, the system is assumed to have two modes and two polytopic vertices in both the system matrices and the transition probability matrices.

4.1. Example 1

The system data are given as follows:

Mode 1:
A^(1)(r1) = [1.1 0; 0.75 0.9],  B^(1)(r1) = [0.066 0.07; 0 0.24],  A^(2)(r1) = [0.5 0; 0 0.5],  B^(2)(r1) = [0 0.2; 0 0]

Mode 2:
A^(1)(r2) = [1 0.3; 0 0.1],  B^(1)(r2) = [0.092 0; 0 0.09],  A^(2)(r2) = [0.5 1.5; 0 0.1],  B^(2)(r2) = [0.07 0.01; 0.08 0.8]

where r1 and r2 denote mode 1 and mode 2, respectively. The transition probability matrices with polytopic uncertainties are

P^(1) = [0.85 0.15; 0.7 0.3],  P^(2) = [0.7 0.3; 0.65 0.35]

To implement the robust distributed MPC algorithm, the global system is decomposed into two subsystems, each with one control input. The constraints on the inputs and states are taken as ū(r1) = ū(r2) = [1, 1]^T and φ = I_2, x̄ = [1, 1]^T. The proposed robust distributed MPC algorithm is used to control these two subsystems with initial state x_0 = [−0.5, 0]^T, initial mode r_0 = 1, and weighting matrices Q_1(r1) = Q_2(r1) = Q_1(r2) = Q_2(r2) = I_2 and R_1(r1) = R_2(r1) = R_1(r2) = R_2(r2) = 1. The performance of the robust distributed MPC algorithm is compared with that of the centralized MPC algorithm under the same weighting matrices. Fig. 1 shows the invariant sets obtained by the different algorithms at the first sampling interval. After one iteration, subsystem 2 reaches the same invariant set as the centralized MPC, and after two iterations both subsystems reach the same invariant set as the centralized MPC, which validates Lemma 4. Note that, since P_i(k, r_k) depends on the mode r_k, we only present the case in which the mode transfers from 1 to 1; in other cases the number of iterations may differ, but by Lemma 4 all cases eventually reach the same result as the centralized MPC. According to Fig. 1, the comparison of
computational time between the centralized MPC and the distributed MPC at the first sampling interval is shown in Table 1, where the distributed algorithm is run with one iteration and with two iterations, respectively. From Table 1, the distributed MPC algorithm takes only 0.4983 s after two iterations, which is less than the 2.0833 s required by the centralized algorithm. Fig. 2 shows the upper-bound trends of the centralized MPC and of subsystems 1 and 2, respectively; after only two iterations, the upper bounds of subsystems 1 and 2 almost converge to that of the centralized MPC. Fig. 3 shows that the upper bounds all converge to 0 as the time interval k tends to infinity and, in accordance with the theory, ρ_i(k) does not depend on the mode. Fig. 4 shows the ellipsoids after one iteration at the time intervals k = 1, 3, 4, 7; the ellipsoids of the two subsystems converge to 0 as the time interval increases. Fig. 5 shows the mode of the Markovian jump process corresponding to Fig. 4.

4.2. Example 2

In the following, the set-point tracking problem is discussed. Both the reference model and the predictive model are assumed to have 2 states, 2 inputs, and 2 outputs. The required tracking set-points are

y_r(k) = 2 for 0 ≤ k < 10;  y_r(k) = 1 for 10 ≤ k < 20;  y_r(k) = 2.5 for 20 ≤ k < 30.

The data of the reference model (24) are

A_r = [0.55 0; 0.025 0.5],  B_r = [0.46 0.07; 0 0.16],  C_r = [4.2 0; 0 4]

The data of the predictive model (1) are given as follows:

Mode 1:
A^(1)(r1) = [0.2 0.5; 0 0.4],  B^(1)(r1) = [0.25 0; 0.17 0.24],  A^(2)(r1) = [0.5 0; 0 0.5],  B^(2)(r1) = [1 0; 2.7 1.8]

Mode 2:
A^(1)(r2) = [0.2 0.75; 0 0.65],  B^(1)(r2) = [0.85 0; 0.07 0.24],  A^(2)(r2) = [0.5 0; 0 0.5],  B^(2)(r2) = [1.5 0; 2.7 1.3]
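The off-line computation of the reference state x_r and input u_r amounts to solving Eq. (24) at steady state. A possible sketch of that computation for the reference-model data of this example, under the assumption that the stacked steady-state system is square and nonsingular (which holds here, since B_r and C_r are invertible):

```python
import numpy as np

# Nominal reference model (A_r, B_r, C_r) from Eq. (24), Example 2 data.
A_r = np.array([[0.55, 0.0], [0.025, 0.5]])
B_r = np.array([[0.46, 0.07], [0.0, 0.16]])
C_r = np.array([[4.2, 0.0], [0.0, 4.0]])

def steady_state_target(y_r):
    """Solve Eq. (24) at steady state: x_r = A_r x_r + B_r u_r, y_r = C_r x_r.

    Stacked as one linear system in (x_r, u_r):
        [A_r - I  B_r] [x_r]   [ 0 ]
        [C_r       0 ] [u_r] = [y_r]
    """
    n = A_r.shape[0]
    m = B_r.shape[1]
    M = np.block([[A_r - np.eye(n), B_r],
                  [C_r, np.zeros((C_r.shape[0], m))]])
    rhs = np.concatenate([np.zeros(n), np.asarray(y_r, dtype=float)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]

# Target pair for the first set-point level, y_r = [2, 2]^T.
x_r, u_r = steady_state_target([2.0, 2.0])
```

Repeating the call for each set-point level (y_r = 1 and y_r = 2.5) yields the full off-line reference schedule.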
Figs. 6 and 7 show the tracking of the first and second output variables, respectively, and Figs. 8 and 9 show the corresponding first and second input variables. The robust distributed MPC algorithm with two iterations tracks the set-point similarly to the centralized MPC, which is better than the performance of the decentralized MPC algorithm. Moreover, as Fig. 8 shows, the control input of the robust distributed MPC algorithm is even smoother than that of the centralized MPC algorithm. This example therefore indicates the effectiveness of the proposed robust distributed MPC algorithm for the set-point tracking problem. Fig. 10 shows the corresponding mode over the 30 time intervals.

5. Conclusion

In this paper, we have dealt with the robust distributed MPC problem for a class of uncertain discrete-time Markovian jump linear systems subject to constraints on the inputs and states. To reflect more realistic situations, polytopic uncertainties both in the system matrices and in the transition probability matrices of the Markov process have been considered. To overcome the centralized MPC drawbacks of computational complexity and communication bandwidth limitations, the global system has been decomposed into several subsystems by decomposing the control input, and each subsystem communicates with the others via a network. For each subsystem, by using the Cauchy–Schwarz inequality and convex optimization techniques, the problem of minimizing an upper bound on the worst-case infinite horizon cost function, subject to constraints, has been transformed into solving a series of linear matrix inequalities (LMIs). A novel Jacobi iterative algorithm has been proposed to design the mode-dependent controller for each subsystem at each time instant. Finally, two examples have demonstrated the effectiveness of the proposed algorithm.

Further research topics include the extension of our results to more complex systems with incomplete information, such as packet losses, saturation, time-delays and so on.
References

[1] J. Hu, Z.D. Wang, B. Shen, H.J. Gao, Quantized recursive filtering for a class of nonlinear systems with multiplicative noises and missing measurements, Int. J. Control 86 (4) (2013) 650–663.
[2] J. Hu, Z.D. Wang, H.J. Gao, Recursive filtering with random parameter matrices, multiple fading measurements and correlated noises, Automatica 49 (11) (2013) 3440–3448.
[3] J. Hu, Z.D. Wang, H.J. Gao, L.K. Stergioulas, Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements, Automatica 48 (9) (2012) 2007–2015.
[4] J.L. Liang, F.B. Sun, X.H. Liu, Finite-horizon H∞ filtering for time-varying delay systems with randomly varying nonlinearities and sensor saturations, Syst. Sci. Control Eng.: An Open Access J. 2 (1) (2014) 108–118.
[5] M. Kermani, A. Sakly, Stability analysis for a class of switched nonlinear time-delay systems, Syst. Sci. Control Eng.: An Open Access J. 2 (1) (2014) 80–89.
[6] O.L.V. Costa, M.D. Fragoso, R.P. Marques, Discrete-time Markov Jump Linear Systems, Springer, London, 2005.
[7] A.N. Venkat, I.A. Hiskens, J.B. Rawlings, S.J. Wright, Distributed MPC strategies with application to power system automatic generation control, IEEE Trans. Control Syst. Technol. 16 (6) (2008) 1192–1206.
[8] B. Park, W.H. Kwon, Robust one-step receding horizon control of discrete-time Markovian jump uncertain systems, Automatica 38 (2002) 1229–1235.
[9] J.B. Lu, D.W. Li, Y.G. Xi, Constrained model predictive control synthesis for uncertain discrete-time Markovian jump linear systems, IET Control Theory Appl. 7 (5) (2013) 707–719.
[10] G.L. Wei, Z.D. Wang, H.S. Shu, Nonlinear H∞ control of stochastic time-delay systems with Markovian switching, Chaos Solitons Fractals 35 (2008) 442–451.
[11] B.G. Park, J.W. Lee, W.H. Kwon, Robust one-step receding horizon control for constrained systems, Int. J. Robust Nonlinear Control 9 (7) (1999) 381–395.
[12] H.L. Dong, Z.D. Wang, H.J. Gao, Fault detection for Markovian jump systems with sensor saturations and randomly varying nonlinearities, IEEE Trans. Circuits Syst. I: Regul. Pap. 59 (10) (2012) 2354–2362.
[13] H.L. Dong, Z.D. Wang, H.J. Gao, Distributed H∞ filtering for a class of Markovian jump nonlinear time-delay systems over lossy sensor networks, IEEE Trans. Ind. Electron. 60 (10) (2013) 4665–4672.
[14] H.L. Dong, Z.D. Wang, D.W.C. Ho, H.J. Gao, Robust H∞ filtering for Markovian jump systems with randomly occurring nonlinearities and sensor saturation: the finite-horizon case, IEEE Trans. Signal Process. 59 (7) (2011) 3048–3057.
[15] E.K. Boukas, H. Yang, Stability of discrete-time linear systems with Markovian jumping parameters, Math. Control Signals Syst. 8 (1995) 390–402.
[16] J.B. Rawlings, D.Q. Mayne, Model Predictive Control: Theory and Design, Nob Hill, Madison, 2009.
[17] J.H. Lee, Model predictive control: review of the three decades of development, Int. J. Control Autom. Syst. 9 (3) (2011) 415–424.
[18] D. Limon, I. Alvarado, T. Alamo, E.F. Camacho, MPC for tracking piecewise constant references for constrained linear systems, Automatica 44 (9) (2008) 2382–2387.
[19] D. Fu, E. Aghezzaf, R.D. Keyser, A model predictive control framework for centralised management of a supply chain dynamical system, Syst. Sci. Control Eng.: An Open Access J. 2 (1) (2014) 250–260.
[20] M.V. Kothare, V. Balakrishnan, M. Morari, Robust constrained model predictive control using linear matrix inequalities, Automatica 32 (10) (1996) 1361–1379.
[21] D.Q. Mayne, J.B. Rawlings, C.V. Rao, P.O.M. Scokaert, Constrained model predictive control: stability and optimality, Automatica 36 (6) (2000) 789–814.
[22] B. Pluymers, L. Roobrouck, J. Buijs, J.A.K. Suykens, B.D. Moor, Constrained linear MPC with time-varying terminal cost using convex combinations, Automatica 41 (5) (2005) 831–837.
[23] J.B. Rawlings, K.R. Muske, The stability of constrained receding horizon control, IEEE Trans. Autom. Control 38 (10) (1993) 1512–1516.
[24] C. Lovaas, M.M. Seron, G.C. Goodwin, Robust output-feedback model predictive control for systems with unstructured uncertainty, Automatica 44 (8) (2008) 1933–1943.
[25] D.W. Li, Y.G. Xi, F.R. Gao, Synthesis of dynamic output feedback RMPC with saturated inputs, Automatica 49 (2013) 949–954.
[26] A.N. Venkat, I.A. Hiskens, J.B. Rawlings, S.J. Wright, Distributed MPC strategies with application to power system automatic generation control, IEEE Trans. Control Syst. Technol. 16 (6) (2008) 1192–1206.
[27] A. Ferramosca, D. Limon, I. Alvarado, E.F. Camacho, Cooperative distributed MPC for tracking, Automatica 49 (4) (2013) 906–914.
[28] B.T. Stewart, A.N. Venkat, J.B. Rawlings, S.J. Wright, G. Pannocchia, Cooperative distributed model predictive control, Syst. Control Lett. 59 (8) (2010) 460–469.
[29] W.B. Dunbar, R.M. Murray, Distributed receding horizon control for multi-vehicle formation stabilization, Automatica 42 (4) (2006) 539–558.
[30] E. Camponogara, D. Jia, B.H. Krogh, S. Talukdar, Distributed model predictive control, IEEE Control Syst. Mag. 22 (1) (2002) 42–52.
[31] W. Al-Gherwi, H. Budman, A. Elkamel, A robust distributed model predictive control algorithm, J. Process Control 21 (8) (2011) 1127–1137.
[32] L.W. Zhang, J.C. Wang, C. Li, Distributed model predictive control for polytopic uncertain systems subject to actuator saturation, J. Process Control 23 (8) (2013) 1075–1089.
[33] Y. Song, X.S. Fang, Distributed model predictive control for polytopic uncertain systems with randomly occurring actuator saturation and packet loss, IET Control Theory Appl. 8 (5) (2014) 297–310.
[34] H. Kwakernaak, R. Sivan, Linear Optimal Control Systems, Wiley-Interscience, New York, 1972.