Robust stabilization of stochastic Markovian jumping systems via proportional-integral control

Shuping He a,b,*,1, Fei Liu a

a Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi 214122, PR China
b College of Electrical Engineering and Automation, Anhui University, Hefei 230601, PR China

Signal Processing 91 (2011) 2478–2486

Article history: Received 30 October 2010; received in revised form 8 March 2011; accepted 16 April 2011; available online 30 April 2011.

Abstract: This paper studies the proportional-integral (PI) control problem for stochastic Markovian jump systems (MJSs) with uncertain parameters. Under complete access to the system states, the PI controller design procedure is turned into a static output feedback control problem that makes the closed-loop dynamics of this class of uncertain MJSs robustly stochastically stable. A sufficient condition for the existence of the PI controller is presented and proved by means of linear matrix inequality (LMI) techniques. The results are then extended to the case in which the system states are not accessible. In order to satisfy the relevant structural equality constraints with satisfactory precision, the problem is described as a semidefinite programming (SDP) one via disciplined convex optimization. Simulation results illustrate the validity of the proposed algorithms.

Keywords: Markovian jump systems (MJSs); Proportional-integral (PI) control; Static output feedback; Semidefinite programming; Linear matrix inequalities

This work was supported in part by the National Natural Science Foundation of P.R. China under Grant nos. 60974001 and 60904045, the Natural Science Foundation of Jiangsu Province under Grant BK2009068, the Program for Postgraduate Scientific Research and Innovation of Jiangsu Province under Grant no. CX09B_169Z, and the Doctor Candidate Foundation of Jiangnan University under Grant no. JUDCF09026.
* Corresponding author. E-mail addresses: [email protected] (S. He), fl[email protected] (F. Liu).
1 Shuping He is currently a visiting doctoral student at the Control Systems Centre, School of Electrical and Electronic Engineering, University of Manchester, UK.

1. Introduction

The proportional-integral (PI) and proportional-integral-derivative (PID) controllers have wide applications in conventional control systems due to their functional simplicity. In a feedback control system, a controller without integral action may cause steady-state offset under external disturbances and set-point adjustments; the integral action is therefore an effective complement to proportional feedback. Most PI and PID controllers are applied to single-input single-output (SISO) systems because most traditional PI and PID control schemes are frequency-domain based. In practice, however, PI and PID controllers often have to operate on multiple-input multiple-output (MIMO) plants. Fortunately, the time-domain approach has been demonstrated to be effective in dealing with such MIMO systems; see [1–9] and the references therein. Recently, linear matrix inequality (LMI) techniques [10] have been applied to PI and PID controller design for MIMO systems, and a number of good results have been obtained in the existing literature [2–7]. In these papers, a usual way is to transform the dynamic system into a static output feedback one and then to solve an optimization problem through iterative LMI-based algorithms. For example, an iterative-LMI based static output feedback PID controller is designed in [3], and results with reduced conservatism are proposed in [4]. For stochastic systems, a generalized PI and PID control strategy in the discrete-time context is presented in [5,6] for solving the constrained tracking problem. However, very few results in the literature consider the PI and PID controller design problem for dynamic MIMO systems with Markovian jumping. In this paper, we investigate MIMO plants under PI control through a time-domain approach for this class of Markovian jump systems (MJSs).


The objective is to establish an effective scheme for designing a robust PI controller that stabilizes the controlled dynamics. The presented results are also extended to the case in which the system states are not accessible, because in some cases the feedback PI control approach fails to guarantee stabilizability when some of the system states are not measurable. In order to satisfy the relevant structural equality constraints with satisfactory precision, we describe the problem as a semidefinite programming (SDP) one via disciplined convex optimization [11]. Finally, numerical examples are included to illustrate the effectiveness of the developed techniques.

Notations: $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ stand for the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices, respectively; $A^{T}$ and $A^{-1}$ denote the matrix transpose and matrix inverse; $\mathrm{diag}\{A\ B\}$ represents the block-diagonal matrix of $A$ and $B$; $\sigma_{\max}(C)$ denotes the maximal eigenvalue of a positive-definite matrix $C$; $\sum_{i<j}^{N}$ denotes, for example, for $N=3$, $\sum_{i<j}^{N}a_{ij}\triangleq a_{12}+a_{13}+a_{23}$; $\|\cdot\|$ denotes the Euclidean norm of vectors; $\mathbf{E}\{\cdot\}$ denotes the mathematical expectation of a stochastic process or vector; $L_2^n[0,\infty)$ is the space of $n$-dimensional square-integrable vector functions over $[0,\infty)$; $P<0$ (or $P>0$) stands for a negative-definite (or positive-definite) matrix; $I$ and $0$ are the identity and zero matrices with appropriate dimensions; $*$ denotes the symmetric terms in a symmetric matrix.

2. Problem formulation

Given a probability space $(\Omega,\mathcal{F},\mathcal{P})$, where $\Omega$ is the sample space, $\mathcal{F}$ is the algebra of events and $\mathcal{P}$ is the probability measure defined on $\mathcal{F}$, let the random form process $\{r_t,\,t\ge 0\}$ be a continuous-time discrete-state Markov stochastic process taking values in a finite set $\mathcal{L}=\{1,2,\ldots,L\}$ with mode transition probabilities given by
$$
\Pr\{r_{t+\Delta t}=j\mid r_t=i\}=
\begin{cases}
\pi_{ij}\Delta t+o(\Delta t), & i\neq j\\
1+\pi_{ii}\Delta t+o(\Delta t), & i=j
\end{cases}
\tag{1}
$$
where $\Delta t>0$ and $\lim_{\Delta t\downarrow 0}o(\Delta t)/\Delta t=0$; $\pi_{ij}\ge 0$ ($i\neq j$) is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t+\Delta t$, and $\sum_{j=1,\,j\neq i}^{L}\pi_{ij}=-\pi_{ii}$.

Consider the following uncertain MJSs described over the probability space $(\Omega,\mathcal{F},\mathcal{P})$:
$$
\begin{cases}
\dot{x}(t)=[A(r_t)+\Delta A(r_t,t)]x(t)+[B(r_t)+\Delta B(r_t,t)]u(t)\\
y(t)=C(r_t)x(t)\\
x(0)=x_0,\quad r(0)=r_0
\end{cases}
\tag{2}
$$
where $x(t)\in\mathbb{R}^n$ is the state, $y(t)\in\mathbb{R}^m$ is the measured output, $u(t)\in\mathbb{R}^l$ is the controlled input, $x_0$ is the initial state and $r_0$ is the initial mode. $A(r_t)$, $B(r_t)$ and $C(r_t)$ are known mode-dependent matrices with appropriate dimensions, and $r_t$ represents the continuous-time discrete-state Markov stochastic process taking values in the finite set $\mathcal{L}$. $\Delta A(r_t,t)$ and $\Delta B(r_t,t)$ are mode-dependent unknown matrices representing time-varying but norm-bounded parameter uncertainties satisfying
$$
[\Delta A(r_t)\ \ \Delta B(r_t)]=E(r_t)\Gamma(r_t,t)[F_1(r_t)\ \ F_2(r_t)]
\tag{3}
$$
where $E(r_t)$, $F_1(r_t)$ and $F_2(r_t)$ are known mode-dependent matrices with appropriate dimensions and $\Gamma(r_t,t)$ is a time-varying unknown matrix function with Lebesgue-measurable elements satisfying $\Gamma^{T}(r_t,t)\Gamma(r_t,t)\le I$. For notational simplicity, when $r_t=i$ the matrices $A(r_t)$, $B(r_t)$, $C(r_t)$, $\Delta A(r_t,t)$, $\Delta B(r_t,t)$, $E(r_t)$, $F_1(r_t)$ and $F_2(r_t)$ are labeled as $A_i$, $B_i$, $C_i$, $\Delta A_i$, $\Delta B_i$, $E_i$, $F_{1i}$ and $F_{2i}$.

Remark 1. The parameter uncertainty structure in (3) has been widely used in the robust stability and stabilization study of uncertain systems, and it can represent parameter uncertainty in many physical cases. In fact, any norm-bounded uncertain parameters can be expressed in the form of (3). Note that the unknown mode-dependent matrix $\Gamma_i(t)$ in (3) can also be allowed to be state-dependent, i.e., $\Gamma_i(t)=\Gamma_i(t,x(t))$, as long as $\|\Gamma_i(t,x(t))\|\le 1$ is satisfied. For more results on this matter, we refer readers to [12–30] and the references therein.

Definition 1. MJSs (2) with $u(t)=0$ are stochastically stable if, for any initial state $x_0$ and initial mode $r_0$,
$$
\lim_{T\to\infty}\mathbf{E}\left\{\int_0^{T}\|x(t,x_0,r_0)\|^2\,\mathrm{d}t\ \Big|\ x_0,r_0\right\}<\infty.
\tag{4}
$$

Definition 2 (Mao [13]). Let $V(x(t),r_t,t>0)=V(x(t),i)$ be a stochastic positive functional; its weak infinitesimal operator is defined as
$$
\mathfrak{I}V(x(t),i)=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\Big[\mathbf{E}\{V(x(t+\Delta t),r_{t+\Delta t},t+\Delta t)\mid x(t),r_t=i\}-V(x(t),i,t)\Big].
\tag{5}
$$
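The transition-rate description (1) has a direct sampling interpretation: the chain remains in mode $i$ for an exponentially distributed holding time with rate $-\pi_{ii}$ and then jumps to mode $j\neq i$ with probability $\pi_{ij}/(-\pi_{ii})$. The following minimal Python sketch (illustrative only, not part of the original paper; the function name and interface are ours) simulates such a mode trajectory from a given generator matrix.

```python
# Illustrative sketch (not from the paper): sampling the jump process r_t of (1)
# from a generator matrix Pi = [pi_ij] with pi_ii = -sum_{j != i} pi_ij.
import numpy as np

def simulate_modes(Pi, r0, horizon, rng=np.random.default_rng(0)):
    """Return (jump_times, modes) of the continuous-time Markov chain on [0, horizon]."""
    t, mode = 0.0, r0
    times, modes = [0.0], [r0]
    while True:
        rate = -Pi[mode, mode]                 # exit rate of the current mode
        if rate <= 0:                          # absorbing mode: no further jumps
            break
        t += rng.exponential(1.0 / rate)       # holding time ~ Exp(rate)
        if t >= horizon:
            break
        probs = Pi[mode].copy()
        probs[mode] = 0.0
        mode = int(rng.choice(len(probs), p=probs / rate))  # next mode ~ pi_ij / (-pi_ii)
        times.append(t)
        modes.append(mode)
    return np.array(times), np.array(modes)
```

A two-mode generator such as the one used later in Example 1 can be passed directly as a NumPy array.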


For MJSs (2), we consider the following proportional-integral (PI) controller:
$$
u(t)=K_{Pi}\,y(t)+K_{Ii}\int_0^{t}y(s)\,\mathrm{d}s
\tag{6}
$$
where $K_{Pi}$ and $K_{Ii}$ are the PI controller gain matrices to be designed.

Remark 2. Without the integral term, the PI controller (6) reduces to output proportional control. In that case, static/dynamic output feedback control schemes can be used to check stochastic stability and to design stabilizing feedback controllers. For more details on this topic, we refer the reader to [16,25] and the references therein.

Our objective in this part is to design a PI controller of the form (6) that makes MJSs (2) stochastically stabilizable. Let
$$
\begin{cases}
z_1(t)=x(t)\\
z_2(t)=\int_0^{t}y(s)\,\mathrm{d}s
\end{cases}
\tag{7}
$$
Then we get
$$
\begin{cases}
\dot{z}_1(t)=\dot{x}(t)=(A_i+\Delta A_i)z_1(t)+(B_i+\Delta B_i)u(t)\\
\dot{z}_2(t)=y(t)=C_i z_1(t)
\end{cases}
\tag{8}
$$
Define
$$
\begin{cases}
y_1(t)=y(t)=C_i z_1(t)\\
y_2(t)=\int_0^{t}y(s)\,\mathrm{d}s=z_2(t)
\end{cases}
\tag{9}
$$
Then the PI controller (6) can be written as
$$
u(t)=K_{Pi}\,y_1(t)+K_{Ii}\,y_2(t).
\tag{10}
$$
Thus, the closed-loop MJSs with the PI controller can be rewritten as
$$
\begin{cases}
\dot{z}_1(t)=[(A_i+\Delta A_i)+(B_i+\Delta B_i)K_{Pi}C_i]z_1(t)+(B_i+\Delta B_i)K_{Ii}z_2(t)\\
\dot{z}_2(t)=C_i z_1(t)
\end{cases}
\tag{11}
$$

Remark 3. Notice that the PI controller design in this part assumes complete access to the system states. In this case, we only need to design proper controllers that make the closed-loop dynamic MJSs (11) robustly stochastically stable; the PI controller design procedure can thus be turned into a static output feedback control problem. In the following, the results developed in the literature can be used either to check stochastic stability, or to design state feedback or output feedback controllers that stochastically stabilize the presented system.
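For intuition only (this fragment is not from the paper), the augmentation (7), (8) and the PI law (6)/(10) can be advanced in discrete time by carrying the integrator state alongside the plant state. The dictionaries A, B, C, KP, KI below are hypothetical mode-indexed data standing in for $A_i$, $B_i$, $C_i$, $K_{Pi}$, $K_{Ii}$; the uncertainty terms are omitted.

```python
# Minimal sketch (not from the paper): one Euler step of the nominal part of the
# augmented dynamics (8) under the mode-dependent PI law (6)/(10).
import numpy as np

def pi_step(z1, z2, mode, A, B, C, KP, KI, dt):
    """Advance (z1, z2) of (8) by one Euler step in the given Markov mode.

    z1 : plant state x(t);  z2 : integrator state  int_0^t y(s) ds.
    A, B, C, KP, KI : dicts indexed by the Markov mode (assumed data).
    """
    y = C[mode] @ z1                        # measured output, eq. (2)
    u = KP[mode] @ y + KI[mode] @ z2        # PI law, eqs. (6) and (10)
    z1_next = z1 + dt * (A[mode] @ z1 + B[mode] @ u)   # nominal part of (8)
    z2_next = z2 + dt * y                   # integrator state, eq. (7)
    return z1_next, z2_next
```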

3. Robust stabilization of uncertain MJSs via PI control

In order to prove the stochastic stabilization of MJSs via PI control, the following lemmas are required.

Lemma 1 (Wang et al. [31]). Let $T$, $L_1$ and $L_2$ be real matrices with appropriate dimensions. Then, for all time-varying unknown matrix functions $\Gamma(t)$ satisfying $\Gamma^{T}(t)\Gamma(t)\le I$, the relation
$$
T+L_1\Gamma(t)L_2+L_2^{T}\Gamma^{T}(t)L_1^{T}<0
\tag{12}
$$
holds if and only if there exists a positive scalar $\alpha>0$ such that
$$
T+\alpha L_1L_1^{T}+\alpha^{-1}L_2^{T}L_2<0.
\tag{13}
$$

Lemma 2 (Feng et al. [12]). Stochastic stability is equivalent to almost sure (asymptotic) stability.

Theorem 1. If there exist a set of mode-dependent positive definite matrices $X_i$, $Y_i$ and $U_i$, a set of mode-dependent matrices $V_i$ and $W_i$, and mode-dependent scalars $\alpha_i$ such that the following relations hold for all $i\in\mathcal{L}$:
$$
C_iX_i=U_iC_i
\tag{14}
$$
$$
\begin{bmatrix}
\Omega_i & B_iW_i+X_iC_i^{T} & X_iF_{1i}^{T}+C_i^{T}V_i^{T}F_{2i}^{T} & M(X_i) & 0\\
* & \pi_{ii}Y_i & W_i^{T}F_{2i}^{T} & 0 & M(Y_i)\\
* & * & -\alpha_iI & 0 & 0\\
* & * & * & -N(X_i) & 0\\
* & * & * & * & -N(Y_i)
\end{bmatrix}<0
\tag{15}
$$
where
$$
\Omega_i=A_iX_i+X_iA_i^{T}+B_iV_iC_i+C_i^{T}V_i^{T}B_i^{T}+E_iE_i^{T}+\pi_{ii}X_i,
$$
$$
M(X_i)=\begin{bmatrix}\sqrt{\pi_{i1}}X_i & \cdots & \sqrt{\pi_{i(i-1)}}X_i & \sqrt{\pi_{i(i+1)}}X_i & \cdots & \sqrt{\pi_{iL}}X_i\end{bmatrix},
$$
$$
N(X_i)=\mathrm{diag}\{X_1\ \cdots\ X_{i-1}\ X_{i+1}\ \cdots\ X_L\},\qquad
N(Y_i)=\mathrm{diag}\{Y_1\ \cdots\ Y_{i-1}\ Y_{i+1}\ \cdots\ Y_L\},
$$
$$
M(Y_i)=\begin{bmatrix}\sqrt{\pi_{i1}}Y_i & \cdots & \sqrt{\pi_{i(i-1)}}Y_i & \sqrt{\pi_{i(i+1)}}Y_i & \cdots & \sqrt{\pi_{iL}}Y_i\end{bmatrix},
$$
then the closed-loop MJSs (11) are stochastically stable and the PI controller gain matrices are given by
$$
K_{Pi}=V_iU_i^{-1},\quad K_{Ii}=W_iY_i^{-1},\quad i\in\mathcal{L}.
\tag{16}
$$

Proof. Let the mode at time $t$ be $i$, that is, $r_t=i\in\mathcal{L}$. Take the stochastic Lyapunov–Krasovskii functional $V(z_1(t),z_2(t),i,t>0):\mathbb{R}^n\times\mathbb{R}^m\times\mathcal{L}\times\mathbb{R}^{+}\to\mathbb{R}^{+}$ as
$$
V(z_1(t),z_2(t),i)=z_1^{T}(t)P_iz_1(t)+z_2^{T}(t)Q_iz_2(t)
$$
where $P_i,Q_i>0$ are mode-dependent symmetric positive-definite matrices for each mode $i\in\mathcal{L}$. Along the trajectories of MJSs (11), the weak infinitesimal operator of $V(z_1(t),z_2(t),i)$ is given by
$$
\begin{aligned}
\mathfrak{I}V(z_1(t),z_2(t),i)={}&2z_1^{T}(t)P_i[(A_i+\Delta A_i)+(B_i+\Delta B_i)K_{Pi}C_i]z_1(t)\\
&+2z_1^{T}(t)P_i(B_i+\Delta B_i)K_{Ii}z_2(t)+2z_2^{T}(t)Q_iC_iz_1(t)\\
&+z_1^{T}(t)\sum_{j=1}^{L}\pi_{ij}P_jz_1(t)+z_2^{T}(t)\sum_{j=1}^{L}\pi_{ij}Q_jz_2(t).
\end{aligned}
$$
Thus, $\mathfrak{I}V(z_1(t),z_2(t),i)<0$ can be guaranteed by
$$
\Pi_i+\Delta\Pi_i<0
\tag{17}
$$
where
$$
\Pi_i=\begin{bmatrix}
P_i(A_i+B_iK_{Pi}C_i)+(A_i+B_iK_{Pi}C_i)^{T}P_i+\sum_{j=1}^{L}\pi_{ij}P_j & P_iB_iK_{Ii}+C_i^{T}Q_i\\
* & \sum_{j=1}^{L}\pi_{ij}Q_j
\end{bmatrix},
$$
$$
\Delta\Pi_i=\begin{bmatrix}
P_i(\Delta A_i+\Delta B_iK_{Pi}C_i)+(\Delta A_i+\Delta B_iK_{Pi}C_i)^{T}P_i & P_i\Delta B_iK_{Ii}\\
* & 0
\end{bmatrix}.
$$
According to Lemma 1, $\Delta\Pi_i$ can be bounded in the following form:
$$
\Delta\Pi_i=L_{1i}\Gamma_i(t)L_{2i}+L_{2i}^{T}\Gamma_i^{T}(t)L_{1i}^{T}<\alpha_iL_{1i}L_{1i}^{T}+\alpha_i^{-1}L_{2i}^{T}L_{2i}
$$
where
$$
L_{1i}=\begin{bmatrix}P_iE_i\\0\end{bmatrix},\qquad
L_{2i}=\begin{bmatrix}F_{1i}+F_{2i}K_{Pi}C_i & F_{2i}K_{Ii}\end{bmatrix}.
$$
Then inequality (17) leads to
$$
\begin{bmatrix}
\Lambda_i & P_iB_iK_{Ii}+C_i^{T}Q_i & F_{1i}^{T}+C_i^{T}K_{Pi}^{T}F_{2i}^{T}\\
* & \sum_{j=1}^{L}\pi_{ij}Q_j & K_{Ii}^{T}F_{2i}^{T}\\
* & * & -\alpha_iI
\end{bmatrix}<0
\tag{18}
$$


where
$$
\Lambda_i=P_i(A_i+B_iK_{Pi}C_i)+(A_i+B_iK_{Pi}C_i)^{T}P_i+P_iE_iE_i^{T}P_i+\sum_{j=1}^{L}\pi_{ij}P_j.
$$
Let $X_i=P_i^{-1}$ and $Y_i=Q_i^{-1}$. Pre- and post-multiplying inequality (18) by the block-diagonal matrix $\mathrm{diag}\{X_i,Y_i,I\}$ and applying the Schur complement formula, (18) is equivalent to the following inequality:
$$
\begin{bmatrix}
\Sigma_i & B_iK_{Ii}Y_i+X_iC_i^{T} & X_iF_{1i}^{T}+X_iC_i^{T}K_{Pi}^{T}F_{2i}^{T} & M(X_i) & 0\\
* & \pi_{ii}Y_i & Y_iK_{Ii}^{T}F_{2i}^{T} & 0 & M(Y_i)\\
* & * & -\alpha_iI & 0 & 0\\
* & * & * & -N(X_i) & 0\\
* & * & * & * & -N(Y_i)
\end{bmatrix}<0
$$
where
$$
\Sigma_i=A_iX_i+X_iA_i^{T}+B_iK_{Pi}C_iX_i+X_iC_i^{T}K_{Pi}^{T}B_i^{T}+E_iE_i^{T}+\pi_{ii}X_i.
$$
Now, if we let $C_iX_i=U_iC_i$, $V_i=K_{Pi}U_i$ and $W_i=K_{Ii}Y_i$ hold for each $i\in\mathcal{L}$, we obtain the main result of Theorem 1. Moreover, if matrix inequality (15) holds, there exists a matrix $\bar{\Lambda}_i>0$ such that
$$
\mathfrak{I}V(z(t),i)\le-z^{T}(t)\bar{\Lambda}_iz(t)
$$
where $z(t)=[z_1^{T}(t)\ \ z_2^{T}(t)]^{T}$. Since $\mathfrak{I}V(z(t),i)<0$, we get
$$
V(z(t),i)<V(z(0),r_0)\big|_{t=0}=z^{T}(0)P(r_0)z(0).
$$
Then the following relation holds:
$$
\frac{\mathfrak{I}V(z(t),i)}{V(z(t),i)}\le\frac{-z^{T}(t)\bar{\Lambda}_iz(t)}{V(z(0),r_0)}.
$$
Define $M_1=\inf_{0\le\delta\le t}\mathbf{E}\{\|z(\delta)\|^2\}$, $M_2=\mathbf{E}\{\|z(0)\|^2\}$, $\sigma_1=\min_{i\in\mathcal{L}}\sigma_{\min}(\bar{\Lambda}_i)$ and $\sigma_2=\max_{i\in\mathcal{L}}\sigma_{\max}(P(r_0))$. Therefore, there exists a positive number $\sigma>0$ satisfying
$$
\frac{\mathfrak{I}V(z(t),i)}{V(z(t),i)}\le\frac{-z^{T}(t)\bar{\Lambda}_iz(t)}{V(z(0),r_0)}\le-\frac{M_1\sigma_1}{M_2\sigma_2}=-\sigma.
$$
Since $M_1>0$, $M_2>0$, $\sigma_1>0$, $\sigma_2>0$ and $\sigma>0$, we have $\mathfrak{I}V(z(t),i)<-\sigma V(z(t),i)$, that is,
$$
\mathbf{E}\{V(z(t),i)\}<\exp(-\sigma t)\,V(z(0),r_0).
$$
By letting $\rho=M_2\sigma_2$, for a given small positive scalar $\lambda>0$ we get
$$
\lambda\,\mathbf{E}\{z^{T}(t)z(t)\}\le\mathbf{E}\{V(z(t),i)\}\le\rho\exp(-\sigma t),
$$
which implies that $z(t)\to 0$ as $t\to\infty$. Taking the limit as $T\to\infty$, it follows that
$$
\lim_{T\to\infty}\mathbf{E}\left\{\int_0^{T}z^{T}(t)z(t)\,\mathrm{d}t\ \Big|\ z(0),r_0\right\}\le\lim_{t\to\infty}\frac{\rho}{\lambda\sigma}\big(1-\exp(-\sigma t)\big)=\frac{\rho}{\lambda\sigma}<\infty.
$$
This proves that the closed-loop MJSs (11) under study are almost surely (asymptotically) robustly stable by Definition 1 and Lemma 2, and this completes the proof. □

Remark 4. The sufficient conditions developed here may be conservative due to the presence of the constraints $C_iX_i=U_iC_i$, $i\in\mathcal{L}$, but these constraints are needed in the LMI setting to design the PI controller for MJSs (2). For the uncertain class of systems, the LMI constraints in Theorem 1 guarantee that the system is robustly stochastically stable for all admissible uncertainties.

Remark 5. Different from the main results given by Boukas [16], the method proposed in this paper pays more attention to the PI controller design for MJSs with parameter uncertainties. By Theorem 1, we can easily obtain the relevant PI controllers that make the uncertain MJSs (2) stochastically stabilizable.

In fact, we can also consider the following PI controller driven by the output error signal:
$$
u(t)=K_{Pi}[r(t)-y(t)]+K_{Ii}\int_0^{t}[r(s)-y(s)]\,\mathrm{d}s
\tag{19}
$$
where $r(t)$ is a given reference signal and $K_{Pi}$, $K_{Ii}$ are the PI controller gain matrices to be designed. Considering the problem of reference trajectory tracking, the integral of the error signal is introduced into MJSs (2); the dynamics of the generalized system are then described by
$$
\dot{\bar{z}}(t)=\bar{A}_i\bar{z}(t)+\bar{B}_iu(t)+\bar{B}_{ri}r(t)
\tag{20}
$$
where
$$
\bar{z}(t)=\begin{bmatrix}x(t)\\ \eta(t)\end{bmatrix}=\begin{bmatrix}x(t)\\ \int_0^{t}[r(s)-y(s)]\,\mathrm{d}s\end{bmatrix},\quad
\bar{A}_i=\begin{bmatrix}A_i+\Delta A_i & 0\\ -C_i & 0\end{bmatrix},\quad
\bar{B}_i=\begin{bmatrix}B_i+\Delta B_i\\ 0\end{bmatrix},\quad
\bar{B}_{ri}=\begin{bmatrix}0\\ I\end{bmatrix}.
$$
For convenience, we take the reference signal to be a step, that is, $r(t)=r$ for $t>0$. Denote
$$
\begin{cases}
x_e(t)=x(t)-x(t)\big|_{t\to\infty}\\
\eta_e(t)=\eta(t)-\eta(t)\big|_{t\to\infty}\\
u_e(t)=u(t)-u(t)\big|_{t\to\infty}
\end{cases}
\tag{21}
$$
Since the constant reference contribution cancels in these differences, the PI controller (19) can then be written as
$$
u_e(t)=-K_{Pi}C_ix_e(t)+K_{Ii}\eta_e(t).
\tag{22}
$$
Thus, the closed-loop MJSs with the PI controller can be rewritten as
$$
\begin{cases}
\dot{x}_e(t)=[(A_i+\Delta A_i)-(B_i+\Delta B_i)K_{Pi}C_i]x_e(t)+(B_i+\Delta B_i)K_{Ii}\eta_e(t)\\
\dot{\eta}_e(t)=-C_ix_e(t)
\end{cases}
\tag{23}
$$


To obtain the robustly stabilizing PI controller, we can derive the following Theorem 2 by choosing a proper stochastic Lyapunov–Krasovskii functional.

Theorem 2. If there exist a set of mode-dependent positive definite matrices $X_i$, $Y_i$ and $U_i$, a set of mode-dependent matrices $V_i$ and $W_i$, and mode-dependent scalars $\alpha_i$ such that the following relations hold for all $i\in\mathcal{L}$:
$$
C_iX_i=U_iC_i
\tag{24}
$$
$$
\begin{bmatrix}
\Xi_i & B_iW_i-X_iC_i^{T} & X_iF_{1i}^{T}-C_i^{T}V_i^{T}F_{2i}^{T} & M(X_i) & 0\\
* & \pi_{ii}Y_i & W_i^{T}F_{2i}^{T} & 0 & M(Y_i)\\
* & * & -\alpha_iI & 0 & 0\\
* & * & * & -N(X_i) & 0\\
* & * & * & * & -N(Y_i)
\end{bmatrix}<0
\tag{25}
$$
where $\Xi_i=A_iX_i+X_iA_i^{T}-B_iV_iC_i-C_i^{T}V_i^{T}B_i^{T}+E_iE_i^{T}+\pi_{ii}X_i$, then the closed-loop MJSs (23) are robustly stochastically stable with the PI controller gain matrices given by
$$
K_{Pi}=V_iU_i^{-1},\quad K_{Ii}=W_iY_i^{-1},\quad i\in\mathcal{L}.
\tag{26}
$$

Proof. For the closed-loop MJSs (23) under the PI controller, we introduce the Lyapunov–Krasovskii functional
$$
V(x_e(t),\eta_e(t),i)=x_e^{T}(t)P_ix_e(t)+\eta_e^{T}(t)Q_i\eta_e(t).
$$
Then, following a proof similar to that of Theorem 1, the main result of Theorem 2 is readily obtained. This completes the proof. □

Remark 6. Under complete access to the system states, the results developed in Theorem 2 can be used either to check the output tracking of step signals or to design output feedback controllers that stochastically stabilize the uncertain MJSs.

4. Robust stabilization via observer-based PI control

In practice, complete access to the states is often unavailable for many reasons, such as the lack of sensors to measure some of the state variables; consequently, the previous control approach may not be feasible. To overcome this problem, we resort to estimating the state. For this purpose, we use the following observer-based PI controller:
$$
\begin{cases}
\dot{\hat{x}}(t)=A_i\hat{x}(t)+K_{Pi}[\hat{y}(t)-y(t)]+B_iu(t)+B_iv(t)\\
\dot{v}(t)=-v(t)+K_{Ii}[\hat{y}(t)-y(t)]\\
\hat{y}(t)=C_i\hat{x}(t)\\
u(t)=K_i\hat{x}(t)
\end{cases}
\tag{27}
$$
where $\hat{x}(t)\in\mathbb{R}^n$, $\hat{y}(t)\in\mathbb{R}^m$ and $v(t)\in\mathbb{R}^l$ are, respectively, the estimated state, the estimated output and the integral state, and $K_{Pi}$, $K_{Ii}$ and $K_i$ are, respectively, the proportional, integral and state feedback gains to be designed.

Using the estimate of the state, we can apply the controller (27), in which the state $x(t)$ is replaced by $\hat{x}(t)$, to stabilize the MJSs (2). Substituting the controller expression (27) into MJSs (2) and letting $e(t)=x(t)-\hat{x}(t)$, we obtain the following closed-loop system:
$$
\begin{cases}
\dot{e}(t)=(A_i+K_{Pi}C_i-\Delta B_iK_i)e(t)+(\Delta A_i+\Delta B_iK_i)x(t)-B_iv(t)\\
\dot{x}(t)=[A_i+\Delta A_i+(B_i+\Delta B_i)K_i]x(t)-(B_i+\Delta B_i)K_ie(t)\\
\dot{v}(t)=-v(t)-K_{Ii}C_ie(t)
\end{cases}
\tag{28}
$$
Similar to Theorems 1 and 2, we can obtain the following Theorem 3 by choosing a proper stochastic Lyapunov–Krasovskii functional. In the main result of Theorem 3, the PI controller and the feedback controller are, respectively, designed by LMI techniques and relevant matrix transformations.

Theorem 3. If there exist a set of mode-dependent positive definite matrices $X_i$, $Y_i$ and $U_i$, a set of mode-dependent matrices $V_i$, $W_i$ and $Z_i$, and mode-dependent scalars $\beta_i$ such that the following relations hold for all $i\in\mathcal{L}$:
$$
C_iX_i=U_iC_i
\tag{29}
$$
$$
\Phi_i=\begin{bmatrix}
\Phi_{1i} & -Z_i^{T}B_i^{T}+E_iE_i^{T} & -B_iY_i-C_i^{T}W_i^{T} & -Z_i^{T}F_{2i}^{T} & M(X_i) & 0 & 0\\
* & \Phi_{2i} & 0 & X_iF_{1i}^{T}+Z_i^{T}F_{2i}^{T} & 0 & M(X_i) & 0\\
* & * & \Phi_{3i} & 0 & 0 & 0 & M(Y_i)\\
* & * & * & -\beta_iI & 0 & 0 & 0\\
* & * & * & * & -N(X_i) & 0 & 0\\
* & * & * & * & * & -N(X_i) & 0\\
* & * & * & * & * & * & -N(Y_i)
\end{bmatrix}<0
\tag{30}
$$
where
$$
\Phi_{1i}=A_iX_i+X_iA_i^{T}+V_iC_i+C_i^{T}V_i^{T}+E_iE_i^{T}+\pi_{ii}X_i,
$$
$$
\Phi_{2i}=A_iX_i+X_iA_i^{T}+B_iZ_i+Z_i^{T}B_i^{T}+E_iE_i^{T}+\pi_{ii}X_i,
$$
$$
\Phi_{3i}=-Y_i-Y_i^{T}+\pi_{ii}Y_i,
$$
then the closed-loop MJSs (28) are robustly stochastically stable and the controller gain matrices are given by
$$
K_{Pi}=V_iU_i^{-1},\quad K_{Ii}=W_iU_i^{-1},\quad K_i=Z_iX_i^{-1},\quad i\in\mathcal{L}.
\tag{31}
$$


Proof. Define the stochastic Lyapunov–Krasovskii functional
$$
V(e(t),x(t),v(t),i)=e^{T}(t)P_{1i}e(t)+x^{T}(t)P_{2i}x(t)+v^{T}(t)P_{3i}v(t)
$$
where $P_{1i},P_{2i},P_{3i}>0$ are mode-dependent symmetric positive-definite matrices for each mode $i\in\mathcal{L}$. Then the weak infinitesimal operator of $V(e(t),x(t),v(t),i)$ along the trajectories of (28) is
$$
\begin{aligned}
\mathfrak{I}V(e(t),x(t),v(t),i)={}&2e^{T}(t)P_{1i}(A_i+K_{Pi}C_i-\Delta B_iK_i)e(t)-2e^{T}(t)P_{1i}B_iv(t)\\
&+2e^{T}(t)P_{1i}(\Delta A_i+\Delta B_iK_i)x(t)+2x^{T}(t)P_{2i}[A_i+\Delta A_i+(B_i+\Delta B_i)K_i]x(t)\\
&-2x^{T}(t)P_{2i}(B_i+\Delta B_i)K_ie(t)-2v^{T}(t)P_{3i}v(t)-2v^{T}(t)P_{3i}K_{Ii}C_ie(t)\\
&+e^{T}(t)\sum_{j=1}^{L}\pi_{ij}P_{1j}e(t)+x^{T}(t)\sum_{j=1}^{L}\pi_{ij}P_{2j}x(t)+v^{T}(t)\sum_{j=1}^{L}\pi_{ij}P_{3j}v(t).
\end{aligned}
$$
Thus, $\mathfrak{I}V(e(t),x(t),v(t),i)<0$ can be guaranteed by
$$
\Psi_i+\Delta\Psi_i<0
\tag{32}
$$
where
$$
\Psi_i=\begin{bmatrix}
\Psi_{1i} & -K_i^{T}B_i^{T}P_{2i} & -P_{1i}B_i-C_i^{T}K_{Ii}^{T}P_{3i}\\
* & \Psi_{2i} & 0\\
* & * & \Psi_{3i}
\end{bmatrix},
$$
$$
\Delta\Psi_i=\begin{bmatrix}
-P_{1i}\Delta B_iK_i-K_i^{T}\Delta B_i^{T}P_{1i} & P_{1i}(\Delta A_i+\Delta B_iK_i)-K_i^{T}\Delta B_i^{T}P_{2i} & 0\\
* & P_{2i}(\Delta A_i+\Delta B_iK_i)+(\Delta A_i+\Delta B_iK_i)^{T}P_{2i} & 0\\
* & * & 0
\end{bmatrix},
$$
$$
\Psi_{1i}=P_{1i}(A_i+K_{Pi}C_i)+(A_i+K_{Pi}C_i)^{T}P_{1i}+\sum_{j=1}^{L}\pi_{ij}P_{1j},
$$
$$
\Psi_{2i}=P_{2i}(A_i+B_iK_i)+(A_i+B_iK_i)^{T}P_{2i}+\sum_{j=1}^{L}\pi_{ij}P_{2j},
$$
$$
\Psi_{3i}=-P_{3i}-P_{3i}^{T}+\sum_{j=1}^{L}\pi_{ij}P_{3j}.
$$
According to Lemma 1, $\Delta\Psi_i$ can be bounded as follows:
$$
\Delta\Psi_i=L_{3i}\Gamma_i(t)L_{4i}+L_{4i}^{T}\Gamma_i^{T}(t)L_{3i}^{T}<\beta_iL_{3i}L_{3i}^{T}+\beta_i^{-1}L_{4i}^{T}L_{4i}
$$
where
$$
L_{3i}=\begin{bmatrix}P_{1i}E_i\\P_{2i}E_i\\0\end{bmatrix},\qquad
L_{4i}=\begin{bmatrix}-F_{2i}K_i & F_{1i}+F_{2i}K_i & 0\end{bmatrix}.
$$
Then it yields
$$
\begin{bmatrix}
\Psi_{1i}+P_{1i}E_iE_i^{T}P_{1i} & -K_i^{T}B_i^{T}P_{2i}+P_{1i}E_iE_i^{T}P_{2i} & -P_{1i}B_i-C_i^{T}K_{Ii}^{T}P_{3i} & -K_i^{T}F_{2i}^{T}\\
* & \Psi_{2i}+P_{2i}E_iE_i^{T}P_{2i} & 0 & F_{1i}^{T}+K_i^{T}F_{2i}^{T}\\
* & * & \Psi_{3i} & 0\\
* & * & * & -\beta_iI
\end{bmatrix}<0
\tag{33}
$$
Let $P_{1i}=P_{2i}$, $X_i=P_{1i}^{-1}=P_{2i}^{-1}$ and $Y_i=P_{3i}^{-1}$. Pre- and post-multiplying inequality (33) by the block-diagonal matrix $\mathrm{diag}\{X_i,X_i,Y_i,I\}$ and applying the Schur complement formula, inequality (33) is equivalent to
$$
\begin{bmatrix}
\Theta_{1i} & -X_iK_i^{T}B_i^{T}+E_iE_i^{T} & -B_iY_i-X_iC_i^{T}K_{Ii}^{T} & -X_iK_i^{T}F_{2i}^{T} & M(X_i) & 0 & 0\\
* & \Theta_{2i} & 0 & X_iF_{1i}^{T}+X_iK_i^{T}F_{2i}^{T} & 0 & M(X_i) & 0\\
* & * & \Phi_{3i} & 0 & 0 & 0 & M(Y_i)\\
* & * & * & -\beta_iI & 0 & 0 & 0\\
* & * & * & * & -N(X_i) & 0 & 0\\
* & * & * & * & * & -N(X_i) & 0\\
* & * & * & * & * & * & -N(Y_i)
\end{bmatrix}<0
$$
where
$$
\Theta_{1i}=A_iX_i+X_iA_i^{T}+K_{Pi}C_iX_i+X_iC_i^{T}K_{Pi}^{T}+E_iE_i^{T}+\pi_{ii}X_i,
$$
$$
\Theta_{2i}=A_iX_i+X_iA_i^{T}+B_iK_iX_i+X_iK_i^{T}B_i^{T}+E_iE_i^{T}+\pi_{ii}X_i.
$$
Consequently, we obtain the main result of Theorem 3 by letting $C_iX_i=U_iC_i$, $V_i=K_{Pi}U_i$, $W_i=K_{Ii}U_i$ and $Z_i=K_iX_i$. Following the main proof of Theorem 1, the closed-loop MJSs (28) are almost surely (asymptotically) robustly stable by Lemma 2. This completes the proof. □

When there are difficulties in solving the structural equality constraints (14) (or (24), (29)), we can transform them into the following semidefinite programming (SDP) problems via disciplined convex optimization:
$$
\begin{aligned}
\min\ &\delta\\
\text{s.t.}\ &\begin{bmatrix}\delta I & C_iX_i-U_iC_i\\ X_iC_i^{T}-C_i^{T}U_i^{T} & \delta I\end{bmatrix}\ge 0
\end{aligned}
\tag{34}
$$

Remark 7. Solutions of Theorems 1–3 can be obtained by solving the SDP problems via disciplined convex optimization with (34) together with the LMIs (15), (25) and (30). In order to make $C_iX_i$ approximate $U_iC_i$ with satisfactory precision, we can first select a sufficiently small scalar $\delta>0$ to meet (34). Using the relevant Matlab toolboxes, it is straightforward to check the feasibility of the disciplined convex optimization problems and the LMIs.
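The paper points to Matlab toolboxes; purely as an illustration (not part of the original paper), the relaxation (34) can be written in any disciplined-convex-programming tool. The sketch below uses Python with CVXPY for a single mode; the output matrix values, the small positive-definiteness margin and the SCS solver are all assumptions. A full synthesis would append the mode-dependent LMI (15), (25) or (30) for every mode as additional constraints.

```python
# Minimal sketch of the SDP relaxation (34) for one mode, using CVXPY.
# Hypothetical data; in the paper this constraint is coupled with LMI (15)/(25)/(30).
import numpy as np
import cvxpy as cp

n, m = 2, 2
Ci = np.array([[0.1, 0.1],
               [0.0, 0.2]])                   # example output matrix (assumed values)

Xi = cp.Variable((n, n), symmetric=True)       # Lyapunov-type variable X_i > 0
Ui = cp.Variable((m, m))                       # slack variable U_i in C_i X_i = U_i C_i
delta = cp.Variable(nonneg=True)               # relaxation level delta in (34)

R = Ci @ Xi - Ui @ Ci                          # residual of the structural constraint
constraints = [
    Xi >> 1e-6 * np.eye(n),                    # X_i positive definite (small margin)
    cp.bmat([[delta * np.eye(m), R],
             [R.T, delta * np.eye(n)]]) >> 0,  # LMI form of (34)
    # ... the mode-dependent LMI (15), (25) or (30) would be appended here
]
prob = cp.Problem(cp.Minimize(delta), constraints)
prob.solve(solver=cp.SCS)
print("optimal delta:", delta.value)
```

Minimizing delta drives the residual of $C_iX_i=U_iC_i$ toward zero, which is exactly the role of (34) in Remark 7.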


Remark 8. Indeed, PID and PI control methods are used comprehensively in industrial control processes. It should be pointed out that the contributions of this paper are mainly theoretical. The application of the proposed methods to widely used practical stochastic systems can be considered in future research. In order to illustrate the effectiveness of the developed techniques, we give two numerical examples in the following Section 5.

Remark 9. As popular industrial control methods, PID and PI control have been widely investigated in both theoretical and practical aspects. In this paper, we have succeeded in designing stabilizing PI controllers for stochastic MJSs with uncertain parameters, both when the system states are accessible and when they are not completely accessible. Referring to the main results in Boukas [16], the PI controller design procedure can be turned into a static output feedback control problem. Compared with the PID controller design for output PDFs of stochastic systems [5,6], our research focuses on how to simplify the PI controller design procedure by LMI techniques and SDP optimization; meanwhile, the PI controller design schemes also apply to systems in which the states are not completely accessible.

5. Numerical examples

Example 1. We consider the following MIMO system with two jumping operation modes described by
$$
A_1=\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix},\quad
A_2=\begin{bmatrix}2 & 1\\ 1 & 1\end{bmatrix},\quad
B_1=\begin{bmatrix}0.1 & 0.2\\ 0.2 & 0.1\end{bmatrix},\quad
B_2=\begin{bmatrix}0 & 0.2\\ 0.1 & 0.1\end{bmatrix},
$$
$$
C_1=\begin{bmatrix}0.1 & 0.1\\ 0 & 0.2\end{bmatrix},\quad
C_2=\begin{bmatrix}0.1 & 0\\ 0.1 & 0.1\end{bmatrix},\quad
E_1=\begin{bmatrix}0.2\\ 0.1\end{bmatrix},\quad
E_2=\begin{bmatrix}0.1\\ 0.2\end{bmatrix},
$$
$$
F_{11}=\begin{bmatrix}0.1 & 0.2\end{bmatrix},\quad
F_{12}=\begin{bmatrix}0.1 & 0.1\end{bmatrix},\quad
F_{21}=\begin{bmatrix}0.1 & 0.1\end{bmatrix},\quad
F_{22}=\begin{bmatrix}0.2 & 0.1\end{bmatrix}.
$$
The mode switching is governed by a Markov chain with the following transition rate matrix:
$$
\Pi=\begin{bmatrix}-0.6 & 0.6\\ 0.4 & -0.4\end{bmatrix}.
$$
By solving the SDP optimization problem in (34) and LMI (15), we obtain the optimal $\delta=1.66344\times 10^{-11}$ and the following PI controller parameters:
$$
K_{P1}=\begin{bmatrix}4.9311 & 2.1900\\ 9.0483 & 6.5243\end{bmatrix},\quad
K_{P2}=\begin{bmatrix}1.4530 & 1.2455\\ 0.5835 & 0.3357\end{bmatrix},
$$
$$
K_{I1}=\begin{bmatrix}0.1797 & 0.2152\\ 0.0641 & 0.1020\end{bmatrix},\quad
K_{I2}=\begin{bmatrix}0.1030 & 0.0048\\ 0.1531 & 0.1479\end{bmatrix}.
$$
With the initial conditions $x_1(0)=2.0$, $x_2(0)=1.5$ and $r_0=1$, the simulation results for the jumping modes, the response of the state $x(t)$, the response of the control input $u(t)$ and the output signal $y(t)$ are shown in Figs. 1–4.

Fig. 1. Estimation of changing between modes during the simulation with the initial mode $r_0=1$.

Fig. 2. The response of the system state $x(t)$ (states $x_1(t)$, $x_2(t)$ stabilized by PI control).

Fig. 3. The response of the control input $u(t)$ ($u_1(t)$, $u_2(t)$) via PI control.

Fig. 4. The output signal $y(t)$ ($y_1(t)$, $y_2(t)$) via PI control.
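Purely as an illustration of how such a closed loop can be replayed numerically (this is not a re-derivation of Figs. 1–4), the two sketches given earlier in Section 2 can be combined: sample a mode path from a generator matrix and integrate the PI loop between jumps. The matrices and gains below are placeholder two-mode data (assumed values), and only the nominal system is simulated.

```python
# Illustrative closed-loop replay with placeholder data (nominal system only).
import numpy as np

# Two-mode placeholder data standing in for the system matrices and PI gains.
A  = {0: np.array([[0.0, 1.0], [-1.0, 0.0]]), 1: np.array([[-2.0, 1.0], [1.0, -1.0]])}
B  = {0: np.array([[0.1, 0.2], [0.2, 0.1]]),  1: np.array([[0.0, 0.2], [0.1, 0.1]])}
C  = {0: np.array([[0.1, 0.1], [0.0, 0.2]]),  1: np.array([[0.1, 0.0], [0.1, 0.1]])}
KP = {0: -np.eye(2), 1: -np.eye(2)}            # hypothetical PI gains (illustration only)
KI = {0: -0.1 * np.eye(2), 1: -0.1 * np.eye(2)}
Pi = np.array([[-0.6, 0.6], [0.4, -0.4]])      # two-mode transition rate matrix

dt, horizon = 1e-3, 20.0
times, modes = simulate_modes(Pi, r0=0, horizon=horizon)   # helper sketch from Section 2
z1, z2 = np.array([2.0, 1.5]), np.zeros(2)
t, k = 0.0, 0
while t < horizon:
    while k + 1 < len(times) and times[k + 1] <= t:
        k += 1                                  # index of the currently active mode
    z1, z2 = pi_step(z1, z2, modes[k], A, B, C, KP, KI, dt)  # helper sketch from Section 2
    t += dt
print("final ||x(T)|| =", np.linalg.norm(z1))
```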

Remark 10. In Example 1, we first obtain the optimal $\delta$ by solving the SDP optimization problem in (34) and LMI (15). The simulation shows that the system states eventually converge to zero, although some oscillations occur. Thus, the designed robust PI controller stabilizes the uncertain MJSs well; the output signal is finite and convergent, and the control input remains bounded.

Example 2. With the system data of Example 1, we consider the observer-based PI control of the uncertain MJSs. By solving the SDP optimization problem in (34) and LMI (30), we obtain the optimal $\delta=4.31486\times 10^{-12}$ and the following PI controller and observer gain parameters:
$$
K_{P1}=\begin{bmatrix}9.8360 & 11.1693\\ 11.1693 & 1.1667\end{bmatrix},\quad
K_{P2}=\begin{bmatrix}4.4207 & 5.5847\\ 5.5847 & 9.8360\end{bmatrix},
$$
$$
K_{I1}=\begin{bmatrix}1 & 0.5\\ 2 & 1.5\end{bmatrix},\quad
K_{I2}=\begin{bmatrix}1 & 1\\ 1 & 1\end{bmatrix},
$$
$$
K_{1}=\begin{bmatrix}1.2854 & 1.8655\\ 2.8432 & 1.3589\end{bmatrix},\quad
K_{2}=\begin{bmatrix}0.1130 & 1.8907\\ 0.5871 & 0.3630\end{bmatrix}.
$$
With the initial conditions $x_1(0)=\hat{x}_1(0)=1.0$, $x_2(0)=\hat{x}_2(0)=1.0$ and $r_0=1$, the simulation results for the real state $x(t)$ and the estimated state $\hat{x}(t)$, the response of the control input $u(t)$ and the output signal $y(t)$ are shown in Figs. 5–7. From Fig. 5, the real states and the estimated states can be observed, and the system is stabilized by the designed observer-based PI controller.

Fig. 5. The response of the system state $x(t)$ and the estimated state $\hat{x}(t)$.

Fig. 6. The response of the control input $u(t)$ ($u_1(t)$, $u_2(t)$) via observer-based PI control.

Fig. 7. The output signal $y(t)$ ($y_1(t)$, $y_2(t)$) via observer-based PI control.

Remark 11. From Figs. 5–7, the states can be estimated and observed, and the output performance is quite satisfactory although there exist some transient tracking errors. We also see from the simulation results that the control signal oscillates before eventually converging to zero. Thus, the designed robust PI controller can stabilize the uncertain MJSs in which the states are not completely accessible. The results also illustrate, via simulation, the equivalence of stochastic stability and almost sure asymptotic stability for MJSs.
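In the same illustrative spirit (not from the paper), the observer-based law (27) can be stepped numerically together with the nominal plant (2); the helper below uses hypothetical mode-indexed data A, B, C, KP, KI, K and omits the uncertainty terms.

```python
# Minimal sketch (not from the paper): one Euler step of the nominal plant (2)
# coupled with the observer-based PI controller (27).
import numpy as np

def observer_pi_step(x, xhat, v, mode, A, B, C, KP, KI, K, dt):
    """Advance plant state x, observer state xhat and integral state v by dt."""
    y = C[mode] @ x                 # measured plant output
    yhat = C[mode] @ xhat           # estimated output, eq. (27)
    u = K[mode] @ xhat              # state-feedback part of (27)
    x_next = x + dt * (A[mode] @ x + B[mode] @ u)                    # nominal plant (2)
    xhat_next = xhat + dt * (A[mode] @ xhat + KP[mode] @ (yhat - y)
                             + B[mode] @ u + B[mode] @ v)            # observer, eq. (27)
    v_next = v + dt * (-v + KI[mode] @ (yhat - y))                   # integral state, eq. (27)
    return x_next, xhat_next, v_next
```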

6. Conclusion

In this paper, we have studied the PI controller design problem for uncertain MJSs. By applying SDP optimization via disciplined convex programming together with LMI techniques, PI controllers are effectively designed to stabilize the uncertain MJSs, and the main results are extended to the case in which the system states are not accessible. Simulation examples demonstrate the effectiveness of the developed techniques.

References

[1] B. Shafai, S. Beale, H.H. Niemann, J.L. Stoustrup, LTR design of discrete-time proportional-integral observers, IEEE Trans. Automat. Control 41 (7) (1996) 1056–1062.
[2] M. Mattei, Robust multivariable PID controller for linear parameter varying systems, Automatica 37 (12) (2001) 1997–2003.
[3] F. Zheng, Q. Wang, T.H. Lee, On the design of multivariable PID controllers via LMI approach, Automatica 38 (3) (2002) 517–526.
[4] C. Lin, Q. Wang, T.H. Lee, An improvement on multivariable PID controller design via iterative LMI approach, Automatica 40 (3) (2004) 519–525.
[5] L. Guo, H. Wang, Generalized discrete-time PI control of output PDFs using square root B-spline expansion, Automatica 41 (1) (2005) 159–162.
[6] L. Guo, H. Wang, PID controller design for output PDFs of stochastic systems using linear matrix inequalities, IEEE Trans. Syst., Man, Cybern. B, Cybern. 35 (1) (2005) 65–71.
[7] C. Lin, Q. Wang, Y. He, G. Wen, X. Han, G. Li, Z. Zhang, On stabilizing PI controller ranges for multivariable systems, Chaos, Solitons & Fractals 35 (2) (2008) 620–625.
[8] B. Sulikowski, K. Galkowski, E. Rogers, PI output feedback control of differential linear repetitive processes, Automatica 44 (5) (2008) 1442–1445.
[9] D. Valério, J. Sá da Costa, Tuning of fractional PID controllers with Ziegler–Nichols-type rules, Signal Process. 86 (10) (2006) 2771–2784.
[10] S. Boyd, L.E. Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.
[11] M. Grant, S. Boyd, Y. Ye, Disciplined convex programming, Springer, New York, 2006.


[12] X. Feng, K.A. Loparo, Y. Ji, H.J. Chizeck, Stochastic stability properties of jump linear systems, IEEE Trans. Automat. Control 37 (1) (1992) 38–53.
[13] X. Mao, Stability of stochastic differential equations with Markovian switching, Stoch. Process. Appl. 79 (1) (1999) 45–67.
[14] Z. Wang, H. Qiao, K.J. Burnham, On stabilization of bilinear uncertain time-delay stochastic systems with Markovian jumping parameters, IEEE Trans. Automat. Control 47 (4) (2002) 640–646.
[15] J. Xiong, J. Lam, H. Gao, Daniel W.C. Ho, On robust stabilization of Markovian jump systems with uncertain switching probabilities, Automatica 41 (5) (2005) 897–903.
[16] E.K. Boukas, Static output-feedback control for stochastic hybrid systems: LMI approach, Automatica 63 (3-1) (2005) 301–310.
[17] P. Shi, M. Mahmoud, S.K. Nguang, A. Ismail, Robust filtering for jumping systems with mode-dependent delays, Signal Process. 86 (1) (2006) 140–152.
[18] G. Wang, Q. Zhang, V. Sreeram, Design of reduced-order H∞ filtering for Markovian jump systems with mode-dependent time delays, Signal Process. 89 (2) (2009) 187–196.
[19] Y. Zhang, S. Xu, B. Zhang, Robust output feedback stabilization for uncertain discrete-time fuzzy Markovian jump systems with time-varying delays, IEEE Trans. Fuzzy Syst. 17 (2) (2009) 411–420.
[20] S. He, F. Liu, Fuzzy model-based fault detection for Markov jump systems, Int. J. Robust Nonlinear Control 19 (11) (2009) 1248–1266.
[21] S. He, F. Liu, Robust peak-to-peak filtering for Markov jump systems, Signal Process. 90 (2) (2010) 513–522.
[22] S. He, F. Liu, Robust finite-time stabilization of uncertain fuzzy jump systems, Int. J. Innov. Comput. Inform. Control 6 (9) (2010) 3853–3862.
[23] C. Han, H. Zhang, Linear optimal filtering for discrete-time systems with random jump delays, Signal Process. 89 (6) (2009) 1121–1128.
[24] H. Zhang, A.S. Mehr, Y. Shi, Improved robust energy-to-peak filtering for uncertain linear systems, Signal Process. 90 (9) (2010) 2667–2675.
[25] Z. Shu, J. Lam, J. Xiong, Static output-feedback stabilization of discrete-time Markovian jump linear systems: a system augmentation approach, Automatica 46 (4) (2010) 687–694.
[26] G. Nakura, Stochastic optimal tracking with preview by state feedback for linear discrete-time Markovian jump systems, Int. J. Innov. Comput. Inform. Control 1 (6) (2010) 15–28.
[27] Y. Xia, Z. Zhu, M.S. Mahmoud, H2 control for networked control systems with Markovian data losses and delays, ICIC Express Lett. 3 (3A) (2009) 271–276.
[28] Q. Ding, M. Zhong, On designing H∞ fault detection filter for Markovian jump linear systems with polytopic uncertainties, Int. J. Innov. Comput. Inform. Control 6 (3A) (2010) 995–1004.
[29] X. Luan, F. Liu, P. Shi, Neural network based stochastic optimal control for nonlinear Markov jump systems, Int. J. Innov. Comput. Inform. Control 6 (8) (2010) 3715–3724.
[30] J. Qiu, K. Lu, New robust passive stability criteria for uncertain singularly Markov jump systems with time delays, ICIC Express Lett. 3 (3B) (2009) 651–656.
[31] Y. Wang, L. Xie, C.E. de Souza, Robust control of a class of uncertain nonlinear systems, Syst. Control Lett. 19 (2) (1992) 139–149.