Journal of the Franklin Institute 356 (2019) 1856–1879 www.elsevier.com/locate/jfranklin
A co-design methodology for cyber-physical systems under actuator fault and cyber attack

Dan Ye a,b,∗, Shengping Luo a

a College of Information Science and Engineering, Northeastern University, Shenyang 110819 Liaoning, China
b State Key Laboratory of Synthetical Automation of Process Industries, Northeastern University, Shenyang 110819 Liaoning, China

Received 2 March 2018; received in revised form 24 December 2018; accepted 8 January 2019; available online 14 January 2019
Abstract

This paper investigates the controller design problem for cyber-physical systems (CPSs), with the aim of ensuring reliability and security when actuator faults in the physical layer and attacks in the cyber layer occur simultaneously. The actuator faults are time-varying and cover bias fault, outage, loss of effectiveness, and stuck fault. In addition, state-dependent cyber attacks are launched in the control input and system measurement channels, and may even drive the state information in the opposite direction. A novel co-design controller scheme is constructed by adopting a new Lyapunov function, a Nussbaum-type function, and a direct adaptive technique, which further relaxes the requirements on actuator/sensor attack information. It is proven that the states of the closed-loop system asymptotically converge to zero even if actuator faults, actuator attacks, and sensor attacks are time-varying and co-existing. Finally, simulation results are presented to show the effectiveness of the proposed control method.

© 2019 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
R This work is supported by the National Natural Science Foundation of China (Nos. 61773097, U1813214) and the Fundamental Research Funds for the Central Universities (No. N160402004).
∗ Corresponding author at: College of Information Science and Engineering, Northeastern University, Shenyang 110819 Liaoning, China.
E-mail addresses: [email protected] (D. Ye), [email protected] (S. Luo).
https://doi.org/10.1016/j.jfranklin.2019.01.009
0016-0032/© 2019 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Cyber-physical systems (CPSs) monitor/control physical processes by combining embedded computers and networks. With advances in communication and computing technologies, the
studies of CPSs have become an important direction of academic research worldwide. So far, many applications have been addressed from the perspective of CPSs, such as power systems, smart grids, transport networks, and gas/water distribution networks [1–4]. However, CPSs are vulnerable when a malicious attacker destroys the communication networks, and this raises many theoretical and practical challenges. Therefore, the safety and security of CPSs are vital [5–9]. At present, denial-of-service (DoS) attacks, replay attacks, and false data injection attacks are the main kinds of cyber attacks. DoS attacks can occupy network resources to block the transmission of control signals and degrade system performance. In [10], by employing linear matrix inequalities (LMIs), the stability analysis of CPSs under DoS attack is transformed into the stability analysis of a system under switched controllers. The issue of security-constrained optimal control is studied in [11], where measurement packets and control signals are transmitted through a communication network and the packets may be blocked or destroyed by attackers. A replay attacker can degrade system performance by recording the readings of compromised sensors and actuators and replaying them over a certain period of time while injecting external control inputs. An attack-resilient receding-horizon control law under replay attack has been introduced in [12], where a simple and explicit relation between the infinite-horizon cost, the computational cost, and the attack horizon is derived. False data injection attacks are deemed the most hazardous cyber attacks, as attackers can inject malicious data that degrade or even destabilize the system. In [13], based on the relationship between s-strong detectability and 2s-sparse detectability, a state observer is presented via an adaptive switching mechanism, and the observation error system remains stable even under injection attacks.
A graph-based method for power systems is proposed in [14] to retain state estimation accuracy against false data injection attacks. A rigorous analysis of the adverse effects of false data injection attacks and a mitigation method for sensor and actuator attacks are proposed for discrete-time distributed multi-agent systems [15]. As a kind of stealthy attack, the zero-dynamics attack exploits the system zeros to destabilize the system while evading detection, but an accurate system model is needed [6]. Furthermore, a new alternative zero-dynamics attack is proposed for uncertain systems in [16], which does not require precise system knowledge to remain stealthy. Recently, some state-dependent false data injection sensor attacks have been considered in [17]. The result was then extended in [18] to the case where time-varying sensor and actuator attacks coexist, and an adaptive method is used to ensure that the closed-loop system is uniformly ultimately bounded. However, the sensor attacks in [17] and [18] are not allowed to change the state information to the opposite direction. Moreover, the actuator attack in [18] is assumed to be parameterized by a nonlinear function with a known structure. On the other hand, as an important component of CPSs, the reliability of the physical systems is also vital. Since physical systems operate under high loads over long periods, the sensors, actuators, and other components will inevitably malfunction. Once faults occur, the stability and control performance of the system will be destroyed or significantly degraded, which may further lead to huge economic losses and even endanger personal safety [19]. Fault-tolerant control theories are effective in improving the reliability and safety of complicated systems [20,21].
In the past few years, adaptive techniques (direct or indirect adaptive methods) have been widely used in fault-tolerant control due to their potential regulation ability [22–30]. In [25–27], the controller parameters are estimated directly to tolerate faults by direct adaptive methods, while in the literature on indirect adaptive fault-tolerant control methods [24,28–31], some unknown fault parameters are estimated on-line first, and the controller parameters are then updated according to the estimated parameters. Furthermore, both time-invariant and time-varying multiplicative actuator faults arise in practical systems. Some results are devoted to dealing with time-invariant multiplicative actuator faults [25–28,31–33]. The direct adaptive method and the actuator redundancy condition are used to design fault-tolerant controllers against stuck faults in [25–27]. In [28], the reduced-order integral sliding mode technique has been introduced to obtain controller gains under some allowable outage and stuck fault modes. In [32], an adaptive neural fault-tolerant control scheme is proposed for a three-degrees-of-freedom model helicopter against constant actuator faults. Recently, the indirect adaptive method and the linear matrix inequality technique have been combined to deal with time-varying multiplicative fault cases [29,30], and some novel controller structures that are affinely dependent on the fault parameters are designed. Unfortunately, the results in [29,30] cannot handle stuck faults, and it is still a challenge to investigate time-varying loss of effectiveness faults and stuck faults at the same time. To the authors' best knowledge, few existing works consider actuator faults in physical layers and cyber attacks in cyber layers simultaneously for CPSs. In this paper, a co-design methodology for linear CPSs is investigated to tolerate the adverse effects of some kinds of state-dependent attacks and actuator faults, which also guarantees that the closed-loop dynamic systems are asymptotically stable. Compared with the existing works, the main contributions of this paper can be summarized as follows: (i) The considered actuator faults are more general than those in [24–34], covering time-varying loss of effectiveness faults and time-varying stuck faults at the same time.
(ii) Some restrictions on the actuator/sensor attack models in [17,18] have been removed by combining the adaptive technique, the fuzzy logic method, and a Nussbaum-type function. (iii) Different from [28–30], the lower bounds of the multiplicative fault parameters are not required to design the controller gains here, and a novel controller architecture is proposed based on a direct adaptive method to deal with multiple cases of outage and stuck faults. (iv) The asymptotic stability of the closed-loop system is guaranteed regardless of whether the unknown state-dependent sensor/actuator attack parameters are time-varying, which improves the results in [17,18].

The remaining part of the paper is organized as follows. In Section 2, some preliminaries and the problem description are presented. In Section 3, a co-design methodology for CPSs under cyber attack and actuator fault and the corresponding stability proof are given. In Section 4, an example is used to show the effectiveness of the proposed method. Finally, the conclusion is drawn in Section 5.

The following notations are used throughout the paper. The superscript "T" stands for matrix transposition; Rⁿ denotes the n-dimensional Euclidean space; ‖·‖ represents the Euclidean norm of vectors or matrices; diag(X₁, X₂, ..., Xₙ) denotes a block-diagonal matrix with matrices X₁, X₂, ..., Xₙ on its main diagonal; λmin(X) and λmax(X) are defined as the minimum and maximum eigenvalues of the matrix X, respectively.
2. Problem statement and preliminaries

2.1. Problem statement

Consider the following class of continuous-time linear systems:

ẋ(t) = Ax(t) + Bu(t)
(1)
Fig. 1. Closed-loop dynamical system under sensor and actuator attacks as well as actuator fault.

Table 1
Fault type.

Fault type              ρ_i(t)^(j)    σ_i(t)^(j)
Normal                  1             0
Loss of effectiveness   (0,1]         0
Stuck                   0             1
Outage                  0             0
Bias fault              1             1
where x(t) ∈ Rⁿ is the state vector, u(t) ∈ Rᵐ is the control input, and A, B are known constant matrices with appropriate dimensions. Here, both cyber attacks in the cyber layer and actuator faults in the physical layer are considered. The corresponding system structure diagram is given in Fig. 1.

2.1.1. Actuator fault model in physical layers
Here, a general actuator fault model is given as follows:

u_i^F(t) = ρ_i(t)^(j) u_i(t) + σ_i(t)^(j) u_si(t)

where u_i^F(t) is the output signal from the ith actuator that has failed and u_i(t) denotes the input signal of the ith actuator, i = 1, ..., m, j = 1, ..., L. ρ_i(t)^(j) is an unknown time-varying function, j denotes the jth fault mode, and L is the number of total fault modes. u_si(t) is an unparameterizable bounded time-varying function, denoted as the stuck fault parameter of the ith actuator. Denote ρ(t)^(j) = diag(ρ₁(t)^(j), ρ₂(t)^(j), ..., ρ_m(t)^(j)), σ(t)^(j) = diag(σ₁(t)^(j), σ₂(t)^(j), ..., σ_m(t)^(j)), and u_s(t) = [u_s1(t), u_s2(t), ..., u_sm(t)]ᵀ. Then, for all L possible faulty modes, the unified fault model can be written as follows:

u^F(t) = [u₁^F(t), u₂^F(t), ..., u_m^F(t)]ᵀ = ρ(t)u(t) + σ(t)u_s(t)    (2)

The actuator fault model (2) describes outage, loss of effectiveness, bias fault, and stuck fault. Table 1 shows the possible fault types more clearly.

2.1.2. Cyber attack model in cyber layers
The following state-dependent attack models are borrowed from [17] and [18] but are more general. It is assumed that adversaries have system knowledge of actuators and sensors, and
can access networks and modify control signals and measurements. x˜(t ) = x(t ) + ηs (t , x(t )),
(3)
u˜ (t ) = u(t ) + ηa (t , x(t ))
(4)
where ηs(t, x(t)) and ηa(t, x(t)) are the false-data-injection sensor and actuator attack signals, respectively, and x̃(t) and ũ(t) are the attacked state feedback signal and control input signal, respectively. The sensor attack in Eq. (3) can be described as ηs(t, x(t)) = ω(t)x(t), where ω(t) is an unknown variable satisfying ‖ω(t)‖ ≤ ω̄ and ‖ω̇(t)‖ ≤ ω̄̇, with ω̄, ω̄̇ unknown constants. Since ω(t) ≡ −1 may result in x̃(t) ≡ 0, it is impossible in that case to design a control law that regains the ideal system performance asymptotically. Thus, we suppose that ω(t) ≠ −1 to construct a feasible control law, which means the sensor attack can change the state information to the opposite direction. It should be noted that the original state x(t) under attack is not available. The considered actuator attack in (4) can be described as

ηa(t, x(t)) = ηa(t, x̃(t)) + ε(t, x(t))
(5)
where ε(t, x(t)) ∈ Rᵐ is an unknown bounded error and ηa(·) is assumed to be a continuous function defined on a compact set Ω. In practice, a worst-case actuator attack often results in amplitude saturation of the actuator. Without loss of generality, we assume here that ‖ε(t, x(t))‖ is bounded by an unknown constant ε̄. Specifically, the sensor attack is modeled in a multiplicative form, in which the attacker may destroy sensor measurements proportionally. Taking a practical system as an example, a spiteful attack on the automobile speed sensor can be modeled by Eq. (3): it reports only a small portion of the true speed, which can lead to an unintentional increase in the regulated vehicle speed. Besides, the actuator attack model indicates an additive state-dependent signal, which is a parameterization of the system attack modes or any residual signal.

Remark 1. In fact, the state-dependent sensor attack model (3) is more general than those in [17] and [18], where the parameter ω(t) was assumed to satisfy ω(t) > −1, meaning the sensor attacks could not change the state information to the opposite direction. Moreover, the state-dependent actuator attack considered in [18] is ηa(t, x(t)) = Wᵀ(t)φ(x(t)), where φ(x) is a nonlinear function with a known structure; that is to say, ηa(·) in [18] needs to be partly known. In this paper, the above-mentioned two restrictions have been removed. Furthermore, state-independent actuator faults in the physical layer are considered at the same time, and a co-design controller is proposed to tolerate the adverse effects of attacks and faults by different techniques, which is more practical for CPSs.

The following assumptions are standard and widely used in the literature [25–30].

Assumption 1. The pair [A, B] is stabilizable.

Assumption 2. rank(Bρ(t)) = rank(B) = p (p ≤ m) for any actuator fault mode ρ(t) ∈ {ρ(t)^(1), ρ(t)^(2), ..., ρ(t)^(L)}.

Assumption 3.
The unparameterizable stuck faults are bounded functions; that is, there exists an unknown constant ū_s > 0 such that

‖u_s(t)‖ ≤ ū_s
(6)
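As a plain numerical illustration, the fault model (2) and the sensor attack (3) can be sketched as follows. This is a minimal sketch; the function names and numeric values are hypothetical and chosen only for illustration, not taken from the paper.

```python
def faulty_actuator(u, rho, sigma, u_stuck):
    # Actuator fault model (2): u_i^F(t) = rho_i(t) * u_i(t) + sigma_i(t) * u_si(t)
    return [r * ui + s * us for ui, r, s, us in zip(u, rho, sigma, u_stuck)]

def sensor_attack(x, omega):
    # Sensor attack (3): x_tilde = x + omega(t) * x = (1 + omega(t)) * x, omega != -1
    return [(1.0 + omega) * xi for xi in x]

# Actuator 1 keeps 60% effectiveness; actuator 2 is stuck at u_s = 0.5.
u_f = faulty_actuator([1.0, -2.0], rho=[0.6, 0.0], sigma=[0.0, 1.0], u_stuck=[0.0, 0.5])
# A sensor attack with omega = -0.5 halves every measured state.
x_tilde = sensor_attack([1.0, 2.0], omega=-0.5)
```

Note how with ρ_i = 0 the control input no longer reaches the plant at all, which is why the redundancy condition of Assumption 2 below is needed.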
By Assumption 1, it follows that there exist matrices K and P > 0 such that the following LMI holds:

(A + BK)ᵀP + P(A + BK) < 0
(7)
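As a quick sanity check, inequality (7) can be verified numerically for a small example. All matrices below are hypothetical (with K = 0 so that A + BK = A, and P obtained off-line from AᵀP + PA = −I); negative definiteness of a symmetric 2×2 matrix is checked via Sylvester's criterion.

```python
def mat_mul(X, Y):
    # Plain-Python matrix product for small dense matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[0.0, 1.0], [-2.0, -3.0]]    # already Hurwitz in this toy example
B = [[0.0], [1.0]]
K = [[0.0, 0.0]]                  # so A + BK = A
P = [[1.25, 0.25], [0.25, 0.25]]  # solves A^T P + P A = -I

Acl = mat_add(A, mat_mul(B, K))
M = mat_add(mat_mul(transpose(Acl), P), mat_mul(P, Acl))
# M is negative definite iff M[0][0] < 0 and det(M) > 0 (Sylvester on -M).
neg_def = M[0][0] < 0 and (M[0][0] * M[1][1] - M[0][1] * M[1][0]) > 0
```

In the paper's setting, K and P would instead be obtained from an LMI solver; this block only illustrates how a candidate pair is certified.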
Remark 2. Time-varying multiplicative faults and stuck faults coexist in this paper, which cannot be directly handled by the methods in [24–31]; therefore, the fault model (2) is more general than those in [24–31]. Moreover, the actuator redundancy condition, that is, Assumption 2, is a necessary condition for full compensation of stuck faults.

Then, the corresponding closed-loop system under the attacks (3), (4) and the actuator fault (2) can be written as

ẋ(t) = Ax(t) + Bρ(t)[u(t) + ηa(t, x̃(t)) + ε(t, x(t))] + Bσ(t)u_s(t)
(8)
The main objective of this paper is to construct a novel controller ensuring asymptotic stability of the considered CPSs even when actuator faults in the physical layer and attacks in the cyber layer occur simultaneously.

2.2. Preliminaries

The following definition and lemmas will be used to derive our main results. Firstly, to deal with the unknown control input direction, some results about Nussbaum functions are introduced.

Definition 1 [35]. If a function N(ς) has the following properties:

lim sup_{z→∞} (1/z) ∫₀^z N(ς) dς = +∞,  lim inf_{z→∞} (1/z) ∫₀^z N(ς) dς = −∞,

then N(ς) is called a Nussbaum-type function.

Lemma 1 [36]. Let V(t) and ς(t) be functions defined on [0, t_f), with V(t) > 0, ∀t ∈ [0, t_f), and let N(ς) be an even smooth Nussbaum-type function. If the following inequality holds:

0 ≤ V(t) ≤ c₀ + e^(−c₁t) ∫₀^t (g(τ)N(ς(τ)) ± 1) ς̇(τ) e^(c₁τ) dτ
where c₁ > 0, g(t) is a time-varying parameter which takes values in an unknown closed interval I := [l⁻, l⁺] with 0 ∉ I, and c₀ represents some suitable constant, then V(t), ς(t) and ∫₀^t (g(τ)N(ς(τ)) ± 1)ς̇(τ) dτ must be bounded on [0, t_f).

Then, some preliminaries about fuzzy logic systems (FLSs) are given to deal with unknown actuator attacks.

Lemma 2 [37]. Let f(x(t)) be a continuous function defined on a compact set Ω. Then, for any constant ϵ > 0, there exists an FLS (9) such that

sup_{x∈Ω} |f(x(t)) − Ŵ_sᵀφ_s(x(t))| ≤ ϵ
(9)
where φ_s(x) = [φ_s1(x), φ_s2(x), ..., φ_sN(x)]ᵀ and φ_si(x) = ∏_{j=1}^n μ_{H_i^j}(x_j) / Σ_{i=1}^N [∏_{j=1}^n μ_{H_i^j}(x_j)] are the fuzzy basis functions. μ_{H_i^j}(x_j) is the fuzzy membership function of the variable x_j in the IF-THEN
rules. Ŵ_sᵀ = [Ŵ_s1, Ŵ_s2, ..., Ŵ_sN] is the adaptive parameter vector, and Ŵ_si is the inference variable corresponding to the ith IF-THEN rule.

Remark 3. Neural networks and FLSs are common and effective approaches for approximating unknown functions in nonlinear systems. They transform the unknown function into the product of a weight vector and a basis-function vector, and then the weight vector is updated; see, for example, [27,32,38,39]. However, the basis functions in a neural network are manually selected, so it is difficult to ascertain whether the selected basis or activation function is appropriate. On the other hand, FLSs are usually used to identify or model unknown functions by incorporating expert experience and experimental knowledge [39]. The basis functions of an FLS can be seen as normalized neural network basis functions.

Based on Lemma 2 and [27], define an optimal weight vector W_s*(t) as

W_s*(t) = arg min_{Ŵ_s ∈ Ω_θ} sup_{x∈Ω} |f̂(t, x(t)|Ŵ_s(t)) − f(t, x(t))|

where θ_m > 0, Ω is a compact set for x(t), and Ω_θ = {Ŵ_s ∈ R^N | Ŵ_sᵀ(t)Ŵ_s(t) ≤ θ_m²} is a set for Ŵ_s(t). f̂(t, x(t)|Ŵ_s(t)) is an estimate of f(t, x(t)) and is given by f̂(t, x(t)|Ŵ_s(t)) = Ŵ_sᵀ(t)φ_s(x(t)), where Ŵ_s(t) is an adaptive weight vector. Then it follows that

f(t, x(t)) = W_s*ᵀ(t)φ_s(x(t)) + ϵ(t, x(t)),  x(t) ∈ Ω ⊂ Rⁿ

where ϵ(t, x(t)) is the fuzzy approximation error. The weight vector W_s*(t) and the error ϵ(t, x(t)) are assumed to be bounded by unknown constants θ_m and ϵ̄, respectively.

Next, a lemma about the projection algorithm is given.

Lemma 3 [38]. Let W* ∈ Ω_θ and let Ŵ_s be updated according to the following dynamics: Ŵ̇_s = Proj(Ŵ_s, a), Ŵ_s(t₀) ∈ Ω_θ, where for given a ∈ R^N the projection algorithm Proj is defined as follows:

Proj(θ, a) = a,  if θᵀθ ≤ θ_m²
Proj(θ, a) = a,  if θᵀθ ≥ θ_m² and θᵀa ≤ 0
Proj(θ, a) = a − ((θᵀθ − θ_m²)/(r θᵀθ)) θᵀa θ,  if θᵀθ ≥ θ_m² and θᵀa > 0

Then, the following hold:
1. Ŵ_s(t) ∈ Ω_r, ∀t ≥ 0,
2. (Ŵ_s − W_s*)ᵀ(Proj(Ŵ_s, a) − a) ≤ 0, ∀t ≥ 0,
where Ω_r = {Ŵ_s(t) ∈ R^N | Ŵ_sᵀ(t)Ŵ_s(t) ≤ θ_m² + r} and r is a positive constant.

3. A co-design controller design strategy under faults and attacks

The following controller with four parts is designed to make the closed-loop system (8) asymptotically stable under the attacks (3), (4) and the fault (2). Fig. 2 is a flow chart showing the signal relationships in the controller.

u(t) = K̂₁(t)x̃(t) + N(ς(t))K₂(t)x̃(t) + N(ς(t))K₃(t)x̃(t) + u_F(t)
(10)
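The projection operator of Lemma 3, which later drives the update law for Ŵ_s, can be sketched as follows. This is a sketch of the reconstructed three-case definition above; the scaling constant r is the one defining Ω_r, and the argument names are illustrative.

```python
def proj(theta, a, theta_m_sq, r):
    # Projection algorithm of Lemma 3: leaves `a` unchanged inside the ball
    # theta^T theta <= theta_m^2 (or when theta^T a <= 0); otherwise it bends
    # the update inward so that theta remains inside Omega_r.
    tt = sum(t * t for t in theta)               # theta^T theta
    ta = sum(t * ai for t, ai in zip(theta, a))  # theta^T a
    if tt <= theta_m_sq or ta <= 0.0:
        return list(a)
    scale = (tt - theta_m_sq) / (r * tt) * ta
    return [ai - scale * t for ai, t in zip(a, theta)]

inside = proj([0.5, 0.0], [1.0, 0.0], theta_m_sq=1.0, r=1.0)   # returned unchanged
outside = proj([2.0, 0.0], [1.0, 0.0], theta_m_sq=1.0, r=1.0)  # outward growth damped
```

The second call shows the key property: the component of the update along θ is reduced whenever θ has left the nominal ball and the raw update would push it further out.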
Fig. 2. Controller structure.
Part I: K̂₁(t)x̃(t) compensates the loss of effectiveness fault.

Here K̂₁(t) ∈ R^(m×n) is the estimate of the unknown matrix K₁(t) in Eq. (12), K̂₁ᵢ = [K̂₁ᵢ₁, K̂₁ᵢ₂, ..., K̂₁ᵢₙ] ∈ R^(1×n) is the ith row of K̂₁, and it is updated by

K̂̇₁ᵢⱼ(t) = 0,   if (K̂₁ᵢⱼ ≥ ρ⁻¹K̄ and Fᵢⱼ > 0) or (K̂₁ᵢⱼ ≤ −ρ⁻¹K̄ and Fᵢⱼ < 0);
K̂̇₁ᵢⱼ(t) = Fᵢⱼ, otherwise,  i = 1, 2, ..., m; j = 1, 2, ..., n    (11)

where Fᵢ = −e^(kt)Γᵢx̃x̃ᵀPbᵢ, i = 1, 2, ..., m, Fᵢⱼ is the jth element of Fᵢ, bᵢ is the ith column of B, and k is a positive constant. Γᵢ is a positive-definite diagonal matrix whose elements are to be chosen, and P is a positive-definite matrix satisfying Eq. (7). The minimum lower bound of all the non-zero loss of effectiveness parameters is denoted by ρ, which is chosen by the designer; the smaller ρ is, the greater the fault range that can be tolerated. The following algorithm (Algorithm 1) is used to obtain the constant K̄; the concrete calculation process of K̄ for a numerical example is demonstrated in Section 4.

Lemma 4. If Assumptions 1–2 hold, then for all possible ρ(t) there always exists a function K₁(t) satisfying both ‖K₁(t)‖ ≤ ρ⁻¹K̄ and the following inequality:

(A + Bρ(t)K₁(t))ᵀP + P(A + Bρ(t)K₁(t)) < 0
(12)
where P is the same as that in Eq. (7), and ρ is the minimum lower bound of all the non-zero loss of effectiveness parameters of ρ(t).

Proof. In fact, the condition rank(Bρ(t)) = rank(B) in Assumption 2 indicates that any linear combination of the columns of B can be expressed as a linear combination of the columns of Bρ(t). That is, there exists a time-varying K₁(t) such that Bρ(t)K₁(t) = BK for each ρ(t), where K is a feasible solution of Eq. (7). Therefore, according to Assumptions 1 and 2, Eq. (12) is satisfied with the same P as in Eq. (7).
There are multiple cases of stuck/outage faults as described in Eq. (2). For the hth case, it can be seen that if K₁(t) satisfies

ρ(t)^(h)K₁(t) = Π^(h)K^(h)    (13)

then K₁(t) is one feasible solution of Eq. (12). Here, Π^(h) and K^(h) are the same matrices as in Algorithm 1, so we can construct Eq. (13) to prove the existence of K₁(t).

Algorithm 1. Algorithm to obtain K̄.
Step 1. Select p numbers from {1, 2, ..., m} to obtain the set N = {α, β, ..., γ}, α, β, ..., γ ∈ {1, 2, ..., m}. There are C_m^(m−p) cases in all.
Step 2. Let Πᵢ = 0 if i ∈ N and Πᵢ = 1 if i ∉ N, and set Π^(h) = diag(Π₁, Π₂, ..., Π_m), h = 1, 2, ..., C_m^(m−p).
Step 3. Solve (A + BΠ^(h)K^(h))ᵀP + P(A + BΠ^(h)K^(h)) < 0 to obtain K^(h), h = 1, 2, ..., C_m^(m−p).
Step 4. Solve (A + BK^(0))ᵀP + P(A + BK^(0)) < 0 to obtain K^(0).
Step 5. K̄ = max{‖K^(0)‖, ‖K^(1)‖, ..., ‖K^(C_m^(m−p))‖}.

Denote ρ(t)^(h) = diag(ρ₁(t)^(h), ρ₂(t)^(h), ..., ρ_m(t)^(h)), K₁(t)ᵀ = [K₁₁(t)ᵀ, K₁₂(t)ᵀ, ..., K₁ₘ(t)ᵀ], Π^(h) = diag(Π₁^(h), Π₂^(h), ..., Π_m^(h)), and K^(h)ᵀ = [K₁^(h)ᵀ, K₂^(h)ᵀ, ..., K_m^(h)ᵀ]. Then Eq. (13) is equivalent to ρᵢ(t)^(h)K₁ᵢ(t) = Πᵢ^(h)Kᵢ^(h), i = 1, 2, ..., m. That is to say, if ρᵢ(t)^(h) = 0, then Πᵢ^(h) = 0 and we let K₁ᵢ(t) = Kᵢ^(h) = 0; if ρᵢ(t)^(h) ∈ [ρ, 1], then Πᵢ^(h) = 1 and we let K₁ᵢ(t) = (1/ρᵢ(t)^(h))Kᵢ^(h). It can be obtained that

‖K₁(t)‖² = Σ_{i=1}^m K₁ᵢᵀ(t)K₁ᵢ(t) = Σ_{i=1}^m (1/ρᵢ(t)^(h))² Kᵢ^(h)ᵀKᵢ^(h) ≤ (1/ρ²) Σ_{i=1}^m Kᵢ^(h)ᵀKᵢ^(h) = (1/ρ²)‖K^(h)‖²    (14)

Since K̄ = max{‖K^(0)‖, ‖K^(1)‖, ..., ‖K^(C_m^(m−p))‖}, it follows that ‖K₁(t)‖ ≤ ρ⁻¹K̄.

Part II: N(ς(t))K₂(t)x̃(t) compensates for the impact of sensor attacks on K̂₁(t); N(ς) will be introduced in Part IV. Since ‖ω(t)‖ ≤ ω̄ and ω(t) ≠ −1, there exists an unknown positive constant k₁ satisfying

‖K̂₁(t)ω(t)‖ / (1 + ω(t))² ≤ k₁    (15)
k̂₁(t) is the estimate of k₁ and is updated according to the following law:

k̂̇₁(t) = 2e^(kt)γ₁‖BᵀPx̃(t)‖‖x̃(t)‖ − 2γ₁e^(kt)σ₀(t)k̂₁(t)
(16)
where k > 0 and σ₀(t) ∈ R⁺ is any positive, uniformly continuous and bounded function satisfying

lim_{t→∞} ∫_{t₀}^{t} σ₀(s) ds ≤ σ̄ < ∞
D. Ye and S. Luo / Journal of the Franklin Institute 356 (2019) 1856–1879
1865
Here, we choose σ₀(t) = v₁e^(−v₂t) with two positive constants v₁ and v₂. The auxiliary control function is as follows:

K₂(t) = −k̂₁²(t)BᵀP‖x̃(t)‖² / (‖x̃ᵀ(t)PB‖k̂₁(t)‖x̃(t)‖ + σ₀(t))
(17)
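The choice σ₀(t) = v₁e^(−v₂t) indeed satisfies the integrability condition above, since ∫₀^∞ σ₀(s) ds = v₁/v₂ < ∞. A quick Riemann-sum check, with hypothetical constants v₁, v₂:

```python
import math

v1, v2 = 1.0, 2.0                       # hypothetical design constants
sigma0 = lambda t: v1 * math.exp(-v2 * t)

# Midpoint Riemann sum of sigma0 over [0, T]; for large T it approaches v1/v2,
# so sigma_bar = v1/v2 works in the condition lim int sigma0 <= sigma_bar < inf.
T, n = 50.0, 100000
h = T / n
integral = sum(sigma0((i + 0.5) * h) * h for i in range(n))
```

This decaying σ₀ also makes the regularized denominators in (17) and (20) strictly positive at all times while vanishing asymptotically.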
Part III: N(ς(t))K₃(t)x̃(t) compensates for the stuck and bias faults, the actuator attack parameterization errors, and the approximation errors. Based on Assumption 3, there exists an unknown positive constant k₂ satisfying

(1/(1 + ω(t)))(‖σ(t)u_s(t)‖ + ‖ε(t, x(t))‖ + ‖ϵ(t, x(t))‖) ≤ (1/(1 + ω(t)))(‖σ‖ū_s + ε̄ + ϵ̄) ≤ k₂    (18)

Here, k̂₂(t) is the estimate of k₂, whose derivative is

k̂̇₂(t) = 2e^(kt)γ₂‖BᵀPx̃(t)‖ − 2γ₂e^(kt)σ₀(t)k̂₂(t)
(19)
where k is the same as in Eqs. (11) and (16). Then the following function is designed:

K₃(t) = −k̂₂²(t)BᵀP / (‖x̃ᵀ(t)PB‖k̂₂(t) + σ₀(t))
(20)
Part IV: u_F(t) is an adaptive fuzzy control law to compensate for the actuator attacks. Ŵ_s(t) is the estimate of W_s*(t) and is updated by

Ŵ̇_s(t) = Proj(Ŵ_s, 2e^(kt)γ_s‖BᵀPx̃(t)‖φ_s(x̃(t))), Ŵ_s(0) ∈ Ω_θ,
(21)
where γ_s is a positive parameter to be designed. N(ς) is a Nussbaum-type function used to solve the unknown control direction problem caused by the sensor attack. In this paper, the Nussbaum-type function is chosen as ς²cos(0.75ς), and the updating law of ς is as follows:

ς̇(t) = 2‖x̃ᵀ(t)PB‖²k̂₁²(t)‖x̃(t)‖² / (‖x̃ᵀ(t)PB‖k̂₁(t)‖x̃(t)‖ + σ₀(t)) + 2‖x̃ᵀ(t)PB‖²k̂₂²(t) / (‖x̃ᵀ(t)PB‖k̂₂(t) + σ₀(t)) + (2‖x̃ᵀ(t)PB‖² / (‖x̃ᵀ(t)PB‖ + σ₀(t))) Ŵ_sᵀ(t)φ_s(x̃(t)).
(22)
The control function u_F(t) is designed as

u_F(t) = −(N(ς)BᵀPx̃(t) / (‖BᵀPx̃(t)‖ + σ₀(t))) Ŵ_sᵀ(t)φ_s(x̃(t)).
(23)
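The defining property of the Nussbaum gain in Definition 1 can be observed numerically: the running average (1/z)∫₀^z N(ς)dς of N(ς) = ς²cos(0.75ς) swings between ever larger positive and negative values. The sketch below evaluates the average at the peaks and troughs of sin(0.75z); the quadrature step is an implementation detail, not part of the method.

```python
import math

a = 0.75
N = lambda s: s * s * math.cos(a * s)  # the even Nussbaum-type function above

def avg_integral(z, n=40000):
    # (1/z) * int_0^z N(s) ds via a midpoint Riemann sum.
    h = z / n
    return sum(N((i + 0.5) * h) for i in range(n)) * h / z

# sin(a*z) = +1 at z = (pi/2 + 2*pi*j)/a, giving large positive averages;
# sin(a*z) = -1 at z = (3*pi/2 + 2*pi*j)/a, giving large negative ones.
pos = [avg_integral((0.5 * math.pi + 2 * math.pi * j) / a) for j in (2, 6, 12)]
neg = [avg_integral((1.5 * math.pi + 2 * math.pi * j) / a) for j in (2, 6, 12)]
```

The growing oscillation of this average is exactly what lets the controller sweep through both possible control directions induced by the sensor attack until the correct one is locked in.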
Now the whole controller (10) is designed completely with the above-mentioned four parts.

Remark 4. It should be noticed that Algorithm 1 in Part I is an off-line algorithm, which aims to obtain the maximum matrix norm K̄ of the different K₁(t) for all possible ρ(t) in Eq. (12). From Eq. (11) and Lemma 4, it can be obtained that |K̂₁ᵢⱼ(t)| ≤ ρ⁻¹K̄ and ‖K₁(t)‖ ≤ ρ⁻¹K̄, respectively. Then the estimation error K̃₁ᵢⱼ(t) is constrained to a bounded interval [−2ρ⁻¹K̄, 2ρ⁻¹K̄], which further guarantees that Σ_{i=1}^m ρ̇ᵢ(t)K̃₁ᵢᵀ(t)Γᵢ⁻¹K̃₁ᵢ(t) in the derivative of the Lyapunov function (30) is bounded. In addition, we can also use the techniques in Part
III to deal with possible bounded matched disturbances in the system (1). Moreover, if we only consider cyber attacks without the occurrence of faults, then Part I can be simplified to Kx(t), where K satisfies Eq. (7), and Parts II–IV remain the same as before. So the proposed method also works when only cyber attacks are considered.

Denote K̃₁(t) = K̂₁(t) − K₁(t), k̃₁(t) = k̂₁(t) − k₁, k̃₂(t) = k̂₂(t) − k₂, and W̃_s(t) = Ŵ_s(t) − W_s*(t); then we obtain
K̃̇₁(t) = K̂̇₁(t) − K̇₁(t),
(24)
k̃̇₁(t) = 2e^(kt)γ₁‖BᵀPx̃(t)‖‖x̃(t)‖ − 2γ₁e^(kt)σ₀(t)(k̃₁(t) + k₁),
(25)
k̃̇₂(t) = 2e^(kt)γ₂‖BᵀPx̃(t)‖ − 2γ₂e^(kt)σ₀(t)(k̃₂(t) + k₂),
(26)
W̃̇_s(t) = Proj(Ŵ_s, 2e^(kt)γ_s‖BᵀPx̃(t)‖φ_s(x̃(t))) − Ẇ_s*(t).
(27)
Based on Assumption 2 and [26], there exists a positive constant μ such that

x̃ᵀ(t)PBρ(t)BᵀPx̃(t) ≥ μ‖x̃ᵀ(t)PB‖²

Furthermore, there exists a function μ(ρ(t)) such that

x̃ᵀ(t)PBρ(t)BᵀPx̃(t) = μ(ρ(t))‖x̃ᵀ(t)PB‖²    (28)

where 0 < μ ≤ μ(ρ(t)) ≤ 1; for example, μ(ρ(t)) = x̃ᵀ(t)PBρ(t)BᵀPx̃(t) / ‖x̃ᵀ(t)PB‖². Define the function f(t, x̃(t)) as follows:

f(t, x̃(t)) = x̃ᵀ(t)PBρ(t)ηa(t, x̃(t)) / ‖x̃ᵀ(t)PB‖
(29)
Here f(t, x̃(t)) will be approximated by the fuzzy logic technique. Moreover, it can be seen that x̃ᵀ(t)PBρ(t)ηa(t, x̃(t)) = f(t, x̃(t))‖x̃ᵀ(t)PB‖. Thus, the resultant approximation error ϵ, which is multiplied by ‖x̃ᵀ(t)PB‖, can be treated as a matched disturbance and handled by adaptive methods. Based on the above-mentioned design process, the following theorem can be derived. All the variables, with their definitions, are summarized in Table 2.

Theorem 1. Consider the linear system (1) with both cyber attacks (3) and (4) and the actuator fault (2). If Assumptions 1–3 are satisfied, then with the control law (10) and the adaptive laws (11), (16), (19), (21), and (22), the closed-loop system (8) is asymptotically stable for bounded initial conditions.

Proof. Consider the Lyapunov function candidate V(t) = V₁(t) + V₂(t), where

V₁(t) = xᵀ(t)Px(t)
Table 2
Variable summary table.

Variable           Definition/Introduction
x(t)               State of the system
x̃(t)               State under sensor attack; x̃(t) = x(t) + ηs(t, x(t))
u(t)               Control input
ũ(t)               Control input under actuator attack; ũ(t) = u(t) + ηa(t, x(t))
ηs(t, x(t))        Sensor attack; ηs(t, x(t)) = ω(t)x(t)
ηa(t, x(t))        Actuator attack; ηa(t, x(t)) = ηa(t, x̃(t)) + ε(t, x(t))
ε(t, x(t))         Actuator attack parameterization error
u^F(t)             Output of the actuator in fault
N(ς)               Nussbaum function; counters a sensor attack that may reverse the state direction
f(t, x̃(t))         Function to be approximated by the FLS; f(t, x̃(t)) = W_s*ᵀ(t)φ_s(x̃(t)) + ϵ(t, x̃(t))
W_s*(t)            Optimal weight vector
Ŵ_s(t)             Adaptive weight vector; estimate of W_s*(t)
ϵ(t, x(t))         FLS approximation error
φ_s(·)             Fuzzy basis function
ū_s                Upper bound of the stuck fault
k₁                 Upper bound of the impact of sensor attacks on K̂₁
k₂                 Upper bound of (1 + ω(t))⁻¹(‖σu_s(t)‖ + ‖ε(t, x(t))‖ + ‖ϵ(t, x(t))‖)
k̂₁(t), k̂₂(t)       Estimates of k₁, k₂
V₂(t) = e^(−kt)[ (1/(2(1 + ω(t))²)) Σ_{i=1}^m ρᵢ(t)(K̃₁ᵢᵀ(t)Γᵢ⁻¹K̃₁ᵢ(t) + W̃_sᵀ(t)γ_s⁻¹W̃_s(t)) + (1/2)γ₁⁻¹k̃₁²(t) + (1/2)γ₂⁻¹k̃₂²(t) ]

Let δ(t) = 1/(1 + ω(t)); then x(t) = δ(t)x̃(t). As it is assumed that ‖ω(t)‖ ≤ ω̄ and ω(t) ≠ −1, one can obtain that 0 < δ ≤ δ(t) ≤ δ̄, where δ and δ̄ are unknown constants. According to Eqs. (10), (17), (20), (23) and (29), the time derivative of V(t) for t > 0 can be derived as follows:
V̇(t) ≤ xᵀ(t)(P(A + Bρ(t)K̂₁(t)) + (A + Bρ(t)K̂₁(t))ᵀP)x(t) + 2xᵀ(t)PBK̂₁(t)ω(t)x(t)
 − δ(t)N(ς(t)) 2x̃ᵀ(t)PBρ(t)BᵀPx̃(t)k̂₁²(t)‖x̃(t)‖² / (k̂₁(t)‖x̃ᵀ(t)PB‖‖x̃(t)‖ + σ₀(t))
 − δ(t)N(ς(t)) 2x̃ᵀ(t)PBρ(t)BᵀPx̃(t)k̂₂²(t) / (k̂₂(t)‖x̃ᵀ(t)PB‖ + σ₀(t))
 − δ(t)N(ς(t)) (2x̃ᵀ(t)PBρ(t)BᵀPx̃(t) / (‖x̃ᵀ(t)PB‖ + σ₀(t))) Ŵ_sᵀ(t)φ_s(x̃(t))
 + 2‖xᵀ(t)PB‖(‖σ(t)‖ū_s + ε̄ + ϵ̄) + 2‖x̃ᵀ(t)PB‖W_s*ᵀ(t)φ_s(x̃(t))
 + e^(−kt)(1/(1 + ω(t))²) Σ_{i=1}^m ρᵢ(t)(K̃₁ᵢᵀ(t)Γᵢ⁻¹K̃̇₁ᵢ(t) + W̃_sᵀ(t)γ_s⁻¹W̃̇_s(t))
 + e^(−kt)(1/(2(1 + ω(t))²)) Σ_{i=1}^m ρ̇ᵢ(t)(K̃₁ᵢᵀ(t)Γᵢ⁻¹K̃₁ᵢ(t) + W̃_sᵀ(t)γ_s⁻¹W̃_s(t))
 − e^(−kt)(ω̇(t)/(1 + ω(t))³) Σ_{i=1}^m ρᵢ(t)(K̃₁ᵢᵀ(t)Γᵢ⁻¹K̃₁ᵢ(t) + W̃_sᵀ(t)γ_s⁻¹W̃_s(t))
 + e^(−kt)γ₁⁻¹k̃₁(t)k̃̇₁(t) + e^(−kt)γ₂⁻¹k̃₂(t)k̃̇₂(t) − kV₂(t)
(30)
Define −Q = (A + BK)ᵀP + P(A + BK). Due to the proof of Lemma 4, Eqs. (15) and (18), and Eqs. (24)–(28), it yields that

V̇ ≤ −xᵀQx + D + 2‖x̃ᵀPB‖k₁‖x̃‖
 − 2‖x̃ᵀPB‖²k̂₁²‖x̃‖² / (k̂₁‖x̃ᵀPB‖‖x̃‖ + σ₀) + 2‖x̃ᵀPB‖²k̂₁²‖x̃‖² / (k̂₁‖x̃ᵀPB‖‖x̃‖ + σ₀)
 − 2‖x̃ᵀPB‖²k̂₂² / (k̂₂‖x̃ᵀPB‖ + σ₀) + 2‖x̃ᵀPB‖²k̂₂² / (k̂₂‖x̃ᵀPB‖ + σ₀) + 2‖x̃ᵀPB‖k₂
 − (2‖x̃ᵀPB‖² / (‖x̃ᵀPB‖ + σ₀)) Ŵ_sᵀφ_s(x̃) + (2‖x̃ᵀPB‖² / (‖x̃ᵀPB‖ + σ₀)) Ŵ_sᵀφ_s(x̃)
 + (2‖x̃ᵀPB‖σ₀ / (‖x̃ᵀPB‖ + σ₀)) Ŵ_sᵀφ_s(x̃)
 − δN(ς) 2μ(ρ)‖x̃ᵀPB‖²k̂₁²‖x̃‖² / (k̂₁‖x̃ᵀPB‖‖x̃‖ + σ₀) − δN(ς) 2μ(ρ)‖x̃ᵀPB‖²k̂₂² / (k̂₂‖x̃ᵀPB‖ + σ₀)
 − δN(ς) (2μ(ρ)‖x̃ᵀPB‖² / (‖x̃ᵀPB‖ + σ₀)) Ŵ_sᵀφ_s(x̃)
 − 2e^(−kt)(1/(1 + ω)²) Σ_{i=1}^m ρ̇ᵢK̃₁ᵢᵀΓᵢ⁻¹K₁ᵢ + e^(−kt)(1/(2(1 + ω)²)) Σ_{i=1}^m ρ̇ᵢK̃₁ᵢᵀΓᵢ⁻¹K̃₁ᵢ
 − e^(−kt)(ω̇/(1 + ω)³) Σ_{i=1}^m ρᵢW̃_sᵀγ_s⁻¹W̃_s + e^(−kt)γ_s⁻¹W̃_sᵀ(Ŵ̇_s − 2e^(kt)γ_s‖BᵀPx̃‖φ_s(x̃)) − e^(−kt)W̃_sᵀγ_s⁻¹Ẇ_s*
 + 2‖x̃ᵀPB‖k̃₁‖x̃‖ + 2‖x̃ᵀPB‖k̃₂ − 2σ₀(k̃₁² + k̃₁k₁ + k̃₂² + k̃₂k₂) − kV₂
(31)
where \(D = 2x^TPB\rho\tilde{K}_1x + 2e^{-kt}\frac{1}{(1+\omega)^2}\sum_{i=1}^{m}\rho_i\tilde{K}_{1i}^T\Gamma_i^{-1}\dot{\hat{K}}_{1i}\). Based on the adaptive law (11), \(D \le 0\) can be obtained as follows:

Case 1. \(\dot{\hat{K}}_{1ij} = F_{ij}\). It can be easily obtained that \(D = 0\).

Case 2. \(\dot{\hat{K}}_{1ij} = 0\). In this situation, \(\hat{K}_{1ij} = \rho_i^{-1}\bar{K}\) and \(F_{ij} > 0\), or \(\hat{K}_{1ij} = -\rho_i^{-1}\bar{K}\) and \(F_{ij} < 0\). Then
\begin{align*}
D &= \frac{2}{(1+\omega)^2}\sum_{i=1}^{m}\tilde{x}^TPb_i\rho_i\tilde{K}_{1i}^T\tilde{x} = \frac{2}{(1+\omega)^2}\sum_{i=1}^{m}\rho_i\tilde{K}_{1i}^T\tilde{x}\,\tilde{x}^TPb_i \\
&= \frac{2}{(1+\omega)^2}\sum_{i=1}^{m}\rho_i\big(\rho_i^{-1}\bar{K}_{1i} - K_{1i}\big)^T\Gamma_i^{-1}\Gamma_i\tilde{x}\,\tilde{x}^TPb_i \\
&= \frac{2}{(1+\omega)^2}\sum_{i=1}^{m}\sum_{j=1}^{n}\Gamma_i^{-1}\rho_i\big(\rho_i^{-1}\bar{K}_{1ij} - K_{1ij}\big)\big(-e^{-kt}F_{ij}\big) \le 0
\end{align*}
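The case analysis above is the standard behavior of a projection-type adaptive law: the nominal update is applied unless the estimate sits on the projection bound and the update would push it outside. The sketch below illustrates this logic for a single scalar entry (the bound \(\pm\rho_i^{-1}\bar{K}\) follows the text; the function name and scalar form are illustrative simplifications, not the paper's exact Eq. (11)):

```python
def projected_update(k_hat, f, k_bar):
    """Projection-type adaptive law for one scalar gain estimate.

    k_hat : current estimate of one entry of K1
    f     : nominal adaptation speed F_ij
    k_bar : projection bound (rho_i^{-1} * K_bar in the paper's notation)

    Case 1: inside the bounds, or on a bound but moving inward -> follow f.
    Case 2: on a bound with f pointing outward -> freeze (derivative 0).
    """
    if k_hat >= k_bar and f > 0:      # at the upper bound, outward push
        return 0.0
    if k_hat <= -k_bar and f < 0:     # at the lower bound, outward push
        return 0.0
    return f                          # Case 1: nominal update
```

Under Euler integration with a sufficiently small step, the estimate therefore never leaves the interval \([-\rho_i^{-1}\bar{K},\ \rho_i^{-1}\bar{K}]\), which is exactly what Lemma 4's bound on \(\tilde{K}_{1i}\) relies on.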
According to Lemma 2 and Eq. (21), it follows that
\[
\hat{W}_s(t) \in \Omega_\theta,\quad \forall t \ge 0, \qquad \tilde{W}_s^T\big(\dot{\hat{W}}_s - 2e^{kt}\Gamma_sB^TPx\varphi_s(\tilde{x})\big) \le 0,\quad \forall t \ge 0 \tag{32}
\]
From Eq. (32), one has
\[
\|\hat{W}_s\|^2 \le \theta_m^2 + r \tag{33}
\]
Since \(0 \le \varphi_{sh}(\tilde{x}) \le 1\), \(h = 1, 2, \ldots, N\), and \(\sum_{h=1}^{N}\varphi_{sh}(\tilde{x}) = 1\), it follows that
\[
\|\varphi_s(\tilde{x})\|^2 = \sum_{h=1}^{N}\varphi_{sh}^2(\tilde{x}) \le 1 \tag{34}
\]
Due to Eqs. (33) and (34), there exists a positive constant \(c = \theta_m^2 + r + 1\) satisfying
\[
2\|\hat{W}_s^T\varphi_s(\tilde{x})\| \le \|\hat{W}_s\|^2 + \|\varphi_s(\tilde{x})\|^2 \le c
\]
In addition, we obtain the following equations:
\[
-\frac{2\|\tilde{x}^TPB\|^2\hat{k}_1^2\|\tilde{x}\|^2}{\hat{k}_1\|\tilde{x}^TPB\|\|\tilde{x}\| + \sigma_0} + 2k_1\|\tilde{x}^TPB\|\|\tilde{x}\| + 2\tilde{k}_1\|\tilde{x}^TPB\|\|\tilde{x}\| = \frac{2\|\tilde{x}^TPB\|\|\tilde{x}\|\hat{k}_1\sigma_0}{\hat{k}_1\|\tilde{x}^TPB\|\|\tilde{x}\| + \sigma_0}
\]
and
\[
-\frac{2\|\tilde{x}^TPB\|^2\hat{k}_2^2}{\hat{k}_2\|\tilde{x}^TPB\| + \sigma_0} + 2k_2\|\tilde{x}^TPB\| + 2\tilde{k}_2\|\tilde{x}^TPB\| = \frac{2\|\tilde{x}^TPB\|\hat{k}_2\sigma_0}{\hat{k}_2\|\tilde{x}^TPB\| + \sigma_0}
\]
From Eq. (11) and Lemma 4, it follows that \(\tilde{K}_{1i}^T\tilde{K}_{1i} \le 2n\underline{\rho}^{-1}\bar{K}^2\) and \(-\tilde{K}_{1i}^TK_{1i} \le K_{1i}^TK_{1i} + n\underline{\rho}^{-1}\bar{K}^2 \le 2n\underline{\rho}^{-1}\bar{K}^2\). Moreover, \(\tilde{W}_s^T\dot{W}_s \le 2\sqrt{\theta_m^2 + r}\,\|\dot{W}_s\| \le c_1\), where \(\|\dot{W}_s\|\) is bounded and \(c_1\) is a positive constant.

For any \(a > 0\), \(b > 0\), the inequality \(0 \le \frac{ab}{a+b} \le a\) holds. Furthermore, it follows that
\[
\frac{2\|\tilde{x}^TPB\|\|\tilde{x}\|\hat{k}_1\sigma_0}{\hat{k}_1\|\tilde{x}^TPB\|\|\tilde{x}\| + \sigma_0} + \frac{2\|\tilde{x}^TPB\|\hat{k}_2\sigma_0}{\hat{k}_2\|\tilde{x}^TPB\| + \sigma_0} + \frac{2\|\tilde{x}^TPB\|\sigma_0}{\|\tilde{x}^TPB\| + \sigma_0}\|\hat{W}_s^T\varphi_s(\tilde{x})\| \le (4 + c)\sigma_0
\]
Then,
\begin{align}
\dot{V} \le{}& -x^TQx + \Big(\frac{2\|\tilde{x}^TPB\|^2\hat{k}_1^2\|\tilde{x}\|^2}{\hat{k}_1\|\tilde{x}^TPB\|\|\tilde{x}\| + \sigma_0} + \frac{2\|\tilde{x}^TPB\|^2\hat{k}_2^2}{\hat{k}_2\|\tilde{x}^TPB\| + \sigma_0} + \frac{2\|\tilde{x}^TPB\|^2}{\|\tilde{x}^TPB\| + \sigma_0}\|\hat{W}_s^T\varphi_s(\tilde{x})\|\Big)\big(1 - \delta\mu(\rho)N(\varsigma)\big) \nonumber\\
&+ e^{-kt}\Big(4nm\underline{\rho}^{-1}\bar{\delta}^2\bar{\Gamma}\bar{K}^2\Big(1 + \frac{\dot{\bar{\rho}}}{\underline{\rho}} + \frac{\dot{\bar{\omega}}}{(1+\bar{\omega})^3}\Big) + c_1m\bar{\Gamma}\Big) + (4 + c)\sigma_0 - \sigma_0(\tilde{k}_1^2 + \tilde{k}_1k_1 + \tilde{k}_2^2 + \tilde{k}_2k_2) - kV_2 \tag{35}
\end{align}
where \(\bar{\Gamma} = \max\{\|\Gamma_1\|, \|\Gamma_2\|, \ldots, \|\Gamma_m\|\}\). Substituting Eq. (22) into Eq. (35), the derivative of V(t) satisfies
\begin{align}
\dot{V} &\le -\frac{\lambda_{\min}(Q)}{\lambda_{\max}(P)}V_1 + \dot{\varsigma}\big(1 - \delta\mu(\rho)N(\varsigma)\big) + e^{-kt}\xi + \sigma_0\kappa - kV_2 \nonumber\\
&\le -c_0V + \dot{\varsigma}\big(1 - \delta\mu(\rho)N(\varsigma)\big) + e^{-kt}\xi + \sigma_0\kappa \tag{36}
\end{align}
where \(\kappa = \frac{k_1^2}{4} + \frac{k_2^2}{4} + 4 + c\), \(\xi = 4nm\underline{\rho}^{-1}\bar{\delta}^2\bar{\Gamma}\bar{K}^2\big(1 + \frac{\dot{\bar{\rho}}}{\underline{\rho}} + \frac{\dot{\bar{\omega}}}{(1+\bar{\omega})^3}\big) + c_1m\bar{\Gamma}\), and \(c_0\) is any positive constant such that \(c_0 < \min\big\{\frac{\lambda_{\min}(Q)}{\lambda_{\max}(P)}, k\big\}\).
Multiplying Eq. (36) by \(e^{c_0t}\), it becomes
\[
\frac{d}{dt}\big(V(t)e^{c_0t}\big) \le \dot{\varsigma}\big(1 - \delta\mu(\rho)N(\varsigma)\big)e^{c_0t} + \big(e^{-kt}\xi + \sigma_0\kappa\big)e^{c_0t} \tag{37}
\]
Because \(\sigma_0(t) = v_1e^{-v_2t}\), it is easy to choose \(v_2\) satisfying \(v_2 > c_0\). By integrating Eq. (37) over \([0, t]\), we have
\[
V(t) \le \Big(V(0) + \frac{v_1\kappa}{v_2 - c_0} + \frac{\xi}{k - c_0}\Big)e^{-c_0t} + e^{-c_0t}\int_0^t \dot{\varsigma}\big(1 - \delta\mu(\rho)N(\varsigma)\big)e^{c_0\tau}\,d\tau
\]
Applying Lemma 1, we can conclude that \(V(t)\) and \(\int_0^t \dot{\varsigma}(1 - \delta\mu(\rho)N(\varsigma))\,d\tau\) are bounded on \([0, t)\). Let \(c_\delta\) be the upper bound of \(\int_0^t \dot{\varsigma}(1 - \delta\mu(\rho)N(\varsigma))\,d\tau\); then Eq. (36) implies that
\[
\lim_{t\to\infty}\int_0^t \lambda_{\min}(Q)\|x(\tau)\|^2\,d\tau \le V(t_0) + c_\delta + \kappa\bar{\sigma} + \frac{\xi}{k} \tag{38}
\]
Applying the Barbalat Lemma [40] to Eq. (38) yields \(\lim_{t\to\infty} x(t; 0, x(0)) = 0\). This ends the proof.
Remark 5. Since the considered loss-of-effectiveness actuator faults are time-varying, an extra term related to \(\dot{\rho}(t)\) appears compared with the time-invariant case; thus the problem cannot be solved by the methods in [25–27]. In this paper, we introduce the exponential term \(e^{-kt}\) into the Lyapunov function and adaptive laws, and utilize Eq. (11) to eliminate the effects of \(\dot{\rho}_i(t)\). Also, since the original states are affected by the sensor attack, a novel term \(e^{-kt}\frac{1}{(1+\omega(t))^2}\sum_{i=1}^{m}\rho_i(t)\tilde{K}_{1i}^T(t)\Gamma_i^{-1}\tilde{K}_{1i}(t)\) is adopted in the Lyapunov function.

Remark 6. Compared with [17,18], the Nussbaum function and Lemma 1 are utilized to handle the fact that sensor attacks may reverse the direction of the state information. Meanwhile, fuzzy logic approximation is used to estimate the parameterized actuator attack, so the structure of \(\eta_a(\cdot)\) in Eq. (4) is not required to be partly known as in [18]. Moreover, Theorem 1 shows that the closed-loop system is asymptotically stable even if the attack parameters \(\eta_a(\cdot)\) and \(\omega(t)\) are time-varying. The results in [17,18] cannot deal with these issues, which implies that the proposed method has greater theoretical and practical value.

Remark 7. Since indirect adaptive methods are used to estimate fault parameters in [28–30], the lower and upper bounds of the multiplicative fault parameters for different fault modes are needed to obtain the controller gains, which may lead to more conservative design conditions than our direct adaptive results. Moreover, to deal with outage and stuck faults, the lower bound of \(\hat{\rho}(t)\) must be chosen as 0 in [28–30], which adds a further restriction to Assumption 2. In this paper, only LMI (7) (ensured by Assumption 1) needs to be solved, which results in less conservative design conditions for more general fault cases.

Remark 8. In the existing literature, there are effective attack detection methods, such as Luenberger observer-based attack detection and identification [8] and Kalman filter-based attack detection [9]. Here, however, we mainly focus on a co-designed controller that tolerates the adverse effects of certain state-dependent attacks and more general actuator fault types. The proposed scheme is independent of detectors, and detecting the cyber attack is unnecessary.
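The Nussbaum-type function invoked in Remark 6 works because its running integral mean swings between arbitrarily large positive and negative values, so the sign-indefinite factor \(1-\delta\mu(\rho)N(\varsigma)\) can be kept bounded whatever the (unknown) sign of \(\delta\). The paper's specific choice of \(N(\cdot)\) is not restated in this excerpt; the sketch below uses the common choice \(N(\varsigma)=\varsigma^2\cos(\varsigma)\) from [35] and verifies the defining property numerically via its closed-form integral:

```python
import math

def nussbaum(z: float) -> float:
    """A common Nussbaum-type function (assumption: z^2 * cos(z))."""
    return z * z * math.cos(z)

def running_mean(s: float) -> float:
    """(1/s) * integral_0^s nussbaum(z) dz, using the closed form
    int z^2 cos z dz = z^2 sin z + 2 z cos z - 2 sin z."""
    integral = s * s * math.sin(s) + 2.0 * s * math.cos(s) - 2.0 * math.sin(s)
    return integral / s

# The running mean is ~ +s at s = pi/2 + 2k*pi and ~ -s at s = 3*pi/2 + 2k*pi,
# so sup = +inf and inf = -inf, the defining Nussbaum property.
peaks = [running_mean(math.pi / 2 + 2 * k * math.pi) for k in range(5)]
troughs = [running_mean(3 * math.pi / 2 + 2 * k * math.pi) for k in range(5)]
assert all(p2 > p1 for p1, p2 in zip(peaks, peaks[1:]))      # unbounded above
assert all(t2 < t1 for t1, t2 in zip(troughs, troughs[1:]))  # unbounded below
```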
Fig. 3. Response curves of ω(t) in the two attack cases.
Remark 9. It should be noted that the sensor attack considered here is state-dependent and satisfies boundedness restrictions. However, some kinds of state-independent attacks, such as the zero-dynamics attacks in [6,16] (for example, in [6] the attack is "\(a_k = v_k g\)", which is not state-dependent), cannot be defended against by the proposed method.

4. Simulation results

In this section, a linearized rocket fairing structural-acoustic model [25] is given to illustrate the advantages of the proposed method. The corresponding system parameter matrices are shown below:
\[
A = \begin{bmatrix} 0 & 1 & 0.0802 & 1.0415 \\ -0.1980 & -0.115 & -0.0318 & 0.3 \\ -3.0500 & 1.1880 & -0.4650 & 0.9 \\ 0 & 0.0805 & 1 & 0 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 & 1.55 & 0.75 \\ 0.975 & 0.8 & 0.85 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
The state variables are pitch rate (rad/s), true airspeed (m/s), angle of attack (rad) and pitch angle (rad). The input variables are elevator deflection, total thrust and horizontal stabilizer, respectively. In the following simulation, the actuator attack is
\[
\eta_a(t, x(t)) = [1, 1, 1]^T\,0.2\sin(2.5t) + [0.1\sin(2t), 0.5\cos(t), 0.2\sin(0.5t)]^T\,0.2\cos(x_1)\sin(x_2)
\]
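As a sanity check of Assumption-style boundedness, the actuator attack above can be evaluated directly; each channel is bounded by \(0.2 + 0.2\cdot 0.5 = 0.3\) even though the attack is state-dependent. A minimal sketch (function name is illustrative):

```python
import math

def eta_a(t, x):
    """Actuator attack from the simulation section:
    [1,1,1]*0.2*sin(2.5t) + [0.1 sin 2t, 0.5 cos t, 0.2 sin 0.5t]*0.2*cos(x1)*sin(x2)."""
    common = 0.2 * math.sin(2.5 * t)
    gain = 0.2 * math.cos(x[0]) * math.sin(x[1])        # |gain| <= 0.2
    dep = [0.1 * math.sin(2 * t), 0.5 * math.cos(t), 0.2 * math.sin(0.5 * t)]
    return [common + gain * d for d in dep]

# Grid check: every channel stays within +/- 0.3.
samples = [abs(v)
           for t in (0.1 * i for i in range(200))
           for x1 in (-3.0, 0.0, 2.0)
           for x2 in (-2.0, 1.0, 3.0)
           for v in eta_a(t, [x1, x2, 0.0, 0.0])]
assert max(samples) <= 0.3 + 1e-9
```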
Fig. 4. State curves in two attack cases by the method in [18].
Here, two different sensor attacks are given to demonstrate that sensor attacks may reverse the state direction. Attack case 1: \(\omega(t) = -1.25 - 0.15\sin(2.5t)\). Attack case 2: \(\omega(t) = -0.25 - 0.15\sin(2.5t)\). The upper bounds of \(|\omega(t)|\) are 1.4 and 0.4 in Attack case 1 and Attack case 2, respectively. Since \(\tilde{x}(t) = (1 + \omega(t))x(t)\), the sign of \((1 + \omega(t))\) indicates whether the sensor attack changes the state direction. For instance, \(\omega(t) < -1\) means the compromised states are opposite to the original direction. The response curves of \(\omega(t)\) in these two cases are given in Fig. 3.

Both time-varying multiplicative faults and multiple cases of stuck faults are considered here. Fault case 1: the system operates normally before 10 s; then the third actuator is stuck at −20 and the other two actuators suffer time-varying loss of effectiveness, i.e., \(\rho_1(t) = 0.5 + 0.4\sin(5t)\) and \(\rho_2(t) = 0.4 + 0.3\cos(2t)\), respectively. Fault case 2: after 2 s, the first actuator is stuck at \(10 + 3\sin(1.5t)\), and the second and third actuators lose effectiveness with \(\rho_2(t) = 0.4\) and \(\rho_3(t) = 0.5 + 0.3\sin(7t)\).

According to Algorithm 1, the computing process of \(\bar{K}\) is described as follows. It is easy to check that \(\mathrm{rank}(B\rho) = \mathrm{rank}(B) = 2\). Then here we have \(p = 2\), \(m = 3\), \(C_m^{m-p} = 3\) (see Step 1), which further implies \(\rho^{(1)} = \mathrm{diag}(0, 1, 1)\), \(\rho^{(2)} = \mathrm{diag}(1, 0, 1)\), \(\rho^{(3)} = \mathrm{diag}(1, 1, 0)\) (see Step 2). Then solve the LMI \((A + B\rho^{(h)}K^{(h)})^TP + P(A + B\rho^{(h)}K^{(h)}) < 0\) for \(h = 1, 2, 3\), respectively (see Step 3). It can be seen that the solutions of P are the same for \(h = 1, 2, 3\).
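Step 3 amounts to certifying a Lyapunov LMI for each fault pattern. The snippet below sketches the underlying feasibility check on an illustrative second-order system (the matrices and gain are toy values chosen for the example, not the paper's model; a full design would solve for P and K jointly with an LMI solver):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative open-loop system and stabilizing gain (NOT the paper's model).
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-6.0, -3.0]])          # places closed-loop poles at -2, -2

A_cl = A + B @ K                       # closed-loop matrix [[0, 1], [-4, -4]]

# Solve the Lyapunov equation A_cl^T P + P A_cl = -Q with Q = I.
Q = np.eye(2)
P = solve_continuous_lyapunov(A_cl.T, -Q)

# P symmetric positive definite certifies the strict LMI
# (A_cl)^T P + P (A_cl) < 0, i.e. the Step 3 condition for this pattern.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(A_cl.T @ P + P @ A_cl) < 0)
```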
Fig. 5. State curves in two attack cases.
The resultant P and \(K^{(h)}\) are as follows:
\[
P = \begin{bmatrix} 0.0256 & -0.0029 & -0.0054 & -0.0161 \\ -0.0029 & 0.0208 & 0.0030 & 0.0077 \\ -0.0054 & 0.0030 & 0.0330 & 0.0165 \\ -0.0161 & 0.0077 & 0.0165 & 0.0419 \end{bmatrix},
\]
\[
K^{(1)} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 156.6914 & 147.4626 & -7.5003 & -49.1007 \\ -352.6890 & -115.7351 & 48.7047 & 173.9461 \end{bmatrix},
\]
\[
K^{(2)} = \begin{bmatrix} -8.9153 & 5.2367 & 44.5727 & 20.3241 \\ 0 & 0 & 0 & 0 \\ 9.8169 & -6.5919 & -53.0681 & -24.4088 \end{bmatrix},
\]
\[
K^{(3)} = \begin{bmatrix} 1.1552 & -1.3590 & -8.9890 & -4.3835 \\ -1.7409 & 1.0228 & 8.8717 & 4.1139 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
\]
Then, for the normal case, solve the LMI \((A + BK^{(0)})^TP + P(A + BK^{(0)}) < 0\) (see Step 4). It follows that
\[
K^{(0)} = \begin{bmatrix} -44.2421 & 79.7507 & 102.3463 & 170.0960 \\ -46.3142 & -60.6775 & -4.6555 & -7.4861 \\ 161.4703 & -42.7510 & -129.3303 & -231.6462 \end{bmatrix}.
\]
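Step 5 takes \(\bar{K}\) as the largest gain norm over \(\{K^{(0)}, \ldots, K^{(3)}\}\). Assuming the matrix 2-norm (spectral norm) is meant, the reported value 459.4264 can be reproduced from the gains above, with the maximum attained by \(K^{(1)}\):

```python
import numpy as np

K0 = np.array([[-44.2421, 79.7507, 102.3463, 170.0960],
               [-46.3142, -60.6775, -4.6555, -7.4861],
               [161.4703, -42.7510, -129.3303, -231.6462]])
K1 = np.array([[0.0, 0.0, 0.0, 0.0],
               [156.6914, 147.4626, -7.5003, -49.1007],
               [-352.6890, -115.7351, 48.7047, 173.9461]])
K2 = np.array([[-8.9153, 5.2367, 44.5727, 20.3241],
               [0.0, 0.0, 0.0, 0.0],
               [9.8169, -6.5919, -53.0681, -24.4088]])
K3 = np.array([[1.1552, -1.3590, -8.9890, -4.3835],
               [-1.7409, 1.0228, 8.8717, 4.1139],
               [0.0, 0.0, 0.0, 0.0]])

# Step 5: K_bar = max spectral norm over all fault-pattern gains.
k_bar = max(np.linalg.norm(K, 2) for K in (K0, K1, K2, K3))
```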
Fig. 6. State and N(ς ) curves in Attack & Fault case 1.
Furthermore, we obtain the upper bound \(\bar{K} = \max\{\|K^{(0)}\|, \|K^{(1)}\|, \|K^{(2)}\|, \|K^{(3)}\|\} = 459.4264\) (see Step 5). The following parameters are used in the simulations: \(x(0) = [20, -8, 0.8, 3]^T\), \(\hat{K}_{1i}(0) = [0, 0, 0, 0]^T\), \(\hat{k}_1(0) = 5\), \(\hat{k}_2(0) = 5\), \(\sigma_0(t) = 2e^{-t}\), \(\Gamma_i = I\), \(\gamma_1 = \gamma_2 = 1000\), \(k = 0.001\), \(i = 1, 2, 3\).

In order to demonstrate the superiority of our method over that in [18], some simulation results are given for the case where only cyber attacks exist. The sensor attack reverses the state direction in Attack case 1, which violates the assumption in [18]; thus the method in [18] can only deal with Attack case 2. Fig. 4 further demonstrates this fact. On the other hand, the state curves in the two attack cases under our method are shown in Fig. 5. It can be seen that the systems are asymptotically stable in both cases thanks to the Nussbaum-type function. Moreover, Fig. 5 also indicates that the control performance degrades as the bound of \(\omega(t)\) increases.

Furthermore, the effects of both actuator faults and attacks on the system states are also simulated in different cases. Figs. 6–8 and Figs. 9–11 show the state and parameter curves of the closed-loop systems in Attack & Fault case 1 and Attack & Fault case 2, respectively. In agreement with the theoretical results of this paper, the system states are asymptotically stable even when different cases of time-varying actuator faults and sensor/actuator attacks coexist. Moreover, from Figs. 3–9, it can be seen that the sign of N(ς) can compensate for the direction changes of the states caused by sensor attacks.

It should be noticed that the results in [17,18] only focus on attacks in the cyber layers, while faults in the physical layers of CPSs are ignored. On the other hand, the existing results
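The sign claim for the two attack cases, that \((1+\omega(t))\) stays negative throughout Attack case 1 (direction reversed) and positive throughout Attack case 2 (direction preserved), is easy to verify numerically:

```python
import math

def omega_case1(t):
    """Attack case 1: omega in [-1.4, -1.1], so 1 + omega in [-0.4, -0.1]."""
    return -1.25 - 0.15 * math.sin(2.5 * t)

def omega_case2(t):
    """Attack case 2: omega in [-0.4, -0.1], so 1 + omega in [0.6, 0.9]."""
    return -0.25 - 0.15 * math.sin(2.5 * t)

ts = [0.01 * i for i in range(10001)]          # 0 .. 100 s
assert all(1 + omega_case1(t) < 0 for t in ts)  # case 1 flips the direction
assert all(1 + omega_case2(t) > 0 for t in ts)  # case 2 preserves it
# The stated upper bounds on |omega(t)| also hold: 1.4 and 0.4.
assert max(abs(omega_case1(t)) for t in ts) <= 1.4 + 1e-9
assert max(abs(omega_case2(t)) for t in ts) <= 0.4 + 1e-9
```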
Fig. 7. Response curves of kˆ 1 (t ), kˆ 2 (t ) and Wˆ s in Attack & Fault case 1.
Fig. 8. Estimates of K1 in Attack & Fault case 1.
Fig. 9. State and N(ς ) curves in Attack & Fault case 2.
Fig. 10. Response curves of kˆ 1 (t ), kˆ 2 (t ) and Wˆ s in Attack & Fault case 2.
Fig. 11. Estimates of K1 in Attack & Fault case 2.
about FTC [15,24–32,34] cannot be directly used to deal with the time-varying actuator faults in (2) and cannot tolerate the considered cyber attacks.

5. Conclusion

In this paper, a co-design controller scheme with an adaptive mechanism has been developed to address the security and safety of CPSs under actuator faults and cyber attacks. More general actuator faults, sensor attacks and actuator attacks are considered simultaneously. The direct adaptive technique with projection, the fuzzy logic approximation method and a Nussbaum-type function have been combined to construct a novel controller that ensures the closed-loop CPSs are asymptotically stable. The upper bounds of unknown parameters are estimated online, which relaxes some requirements on sensor or actuator attacks. Simulation results are presented to demonstrate the effectiveness of the proposed method. The security of CPSs under more complex attacks in the cyber layers and actuator saturation in the physical layers will be considered in our future work.

References

[1] D. Wang, Z.D. Wang, B. Shen, F.E. Alsaadi, T. Hayat, Recent advances on filtering and control for cyber-physical systems under security and resource constraints, J. Frankl. Inst. 353 (11) (2016) 2451–2466.
[2] J. Singh, O. Hussain, Cyber-physical systems as an enabler for next generation applications, in: Proceedings of the IEEE 15th International Conference on Network-Based Information Systems (NBiS), 2012, pp. 417–422.
[3] J.S. Priya, S.P. Rajagopalan, M. Ramakrishnan, Medical cyber physical system security-mitigating attacks using trust model, J. Med. Imaging Health Inform. 6 (7) (2016) 1572–1575.
[4] W.D. Dai, C. Pang, V. Vyatkin, J.H. Christensen, X.P. Guan, Discrete-event-based deterministic execution semantics with timestamps for industrial cyber-physical systems, IEEE Trans. Syst. Man Cybern. Syst. (2017), doi:10.1109/TSMC.2017.2736339.
[5] S. Bi, Y.J. Zhang, Using covert topological information for defense against malicious attacks on DC state estimation, IEEE J. Sel. Areas Commun. 32 (7) (2014) 1471–1485.
[6] A. Teixeira, I. Shames, H. Sandberg, K.H. Johansson, A secure control framework for resource-limited adversaries, Automatica 51 (2015) 135–148.
[7] R. Mitchell, R. Chen, Modeling and analysis of attacks and counter defense mechanisms for cyber physical systems, IEEE Trans. Reliab. 65 (1) (2016) 350–358.
[8] F. Pasqualetti, F. Dörfler, F. Bullo, Attack detection and identification in cyber-physical systems, IEEE Trans. Autom. Control 58 (11) (2013) 2715–2729.
[9] K. Manandhar, X. Cao, F. Hu, Y. Liu, Detection of faults and attacks including false data injection attack in smart grid using Kalman filter, IEEE Trans. Control Netw. Syst. 1 (4) (2014) 370–379.
[10] A.Y. Lu, G.H. Yang, Input-to-state stabilizing control for cyber-physical systems with multiple transmission channels under denial-of-service, IEEE Trans. Autom. Control 63 (6) (2017) 1813–1820.
[11] S. Amin, A.A. Cárdenas, S.S. Sastry, Safe and secure networked control systems under denial-of-service attacks, in: Hybrid Systems: Computation and Control, Lecture Notes in Computer Science, Berlin/Heidelberg, 2009, pp. 31–45.
[12] M.H. Zhu, S. Martinez, On the performance analysis of resilient networked control systems under replay attacks, IEEE Trans. Autom. Control 59 (3) (2014) 804–808.
[13] L.W. An, G.H. Yang, Secure state estimation against sparse sensor attacks with adaptive switching mechanism, IEEE Trans. Autom. Control 63 (8) (2017) 2596–2603.
[14] S. Bi, Y.J. Zhang, Graphical methods for defense against false-data injection attacks on power system state estimation, IEEE Trans. Smart Grid 5 (3) (2014) 1216–1227.
[15] A. Mustafa, H. Modares, Attack analysis and resilient control design for discrete-time distributed multi-agent systems (2018), arXiv:1801.00870.
[16] G. Park, H. Shim, C. Lee, Y. Eun, K.H. Johansson, When adversary encounters uncertain cyber-physical systems: robust zero-dynamics attack with disclosure resources, in: Proceedings of the Decision and Control, IEEE, 2016, pp. 5085–5090.
[17] T. Yucelen, W.M. Haddad, E.M. Feron, Adaptive control architectures for mitigating sensor attacks in cyber-physical systems, Cyber-Phys. Syst. 2 (1–4) (2016) 24–52.
[18] X. Jin, W.M. Haddad, T. Yucelen, An adaptive control architecture for mitigating sensor and actuator attacks in cyber-physical systems, IEEE Trans. Autom. Control 62 (11) (2017) 6058–6064.
[19] M. Blanke, C. Frei, F. Kraus, R.J. Patton, M. Staroswiecki, What is fault-tolerant control? IFAC Proc. Vol. 33 (11) (2000) 41–52.
[20] S. Chen, D.W.C. Ho, Sampled-data approach to state estimation performance for heterogeneous distributed system with fault, Asian J. Control 17 (6) (2015) 1–11.
[21] H. Li, H. Liu, H. Gao, P. Shi, Reliable fuzzy control for active suspension systems with actuator delay and fault, IEEE Trans. Fuzzy Syst. 20 (2012) 242–357.
[22] Y. Shen, L.J. Liu, E.H. Dowell, Adaptive fault-tolerant robust control for a linear system with adaptive fault identification, IET Control Theory Appl. 7 (2) (2013) 246–252.
[23] B. Chen, Y.G. Niu, Y.Y. Zou, Adaptive sliding mode control for stochastic Markovian jumping systems with actuator degradation, Automatica 49 (6) (2013) 1748–1754.
[24] K. Zhang, B. Jiang, P. Shi, Fast fault estimation and accommodation for dynamical systems, IET Control Theory Appl. 3 (2) (2009) 189–199.
[25] X.J. Li, G.H. Yang, Robust adaptive fault-tolerant control for uncertain linear systems with actuator failures, IET Control Theory Appl. 6 (10) (2012) 1544–1551.
[26] L.B. Wu, G.H. Yang, D. Ye, Robust adaptive fault-tolerant control for linear systems with actuator failures and mismatched parameter uncertainties, IET Control Theory Appl. 8 (6) (2014) 441–449.
[27] Y.X. Li, G.H. Yang, Robust fuzzy adaptive fault-tolerant control for a class of nonlinear systems with mismatched uncertainties and actuator faults, Nonlinear Dyn. 81 (1–2) (2015) 395–409.
[28] L.Y. Hao, J.H. Park, D. Ye, Fuzzy logic systems-based integral sliding mode fault-tolerant control for a class of uncertain non-linear systems, IET Control Theory Appl. 10 (3) (2016) 300–311.
[29] D. Ye, J.H. Park, Q.Y. Fan, Adaptive robust actuator fault compensation for linear systems using a novel fault estimation mechanism, Int. J. Robust Nonlinear Control 26 (8) (2016) 1597–1614.
[30] D. Ye, L. Su, J.L. Wang, Y.N. Pan, Adaptive reliable H∞ optimization control for linear systems with time-varying actuator fault and delays, IEEE Trans. Syst. Man Cybern.: Syst. 47 (7) (2017) 1635–1643.
[31] M. Liu, D.W.C. Ho, P. Shi, Adaptive fault-tolerant compensation control for Markovian jump systems with mismatched external disturbance, Automatica 58 (2015) 5–14.
[32] M. Chen, P. Shi, C.C. Lim, Adaptive neural fault-tolerant control of a 3-DOF model helicopter system, IEEE Trans. Syst. Man Cybern.: Syst. 46 (2) (2016) 260–270.
[33] G.S. Zhang, J.H. Qin, W.X. Zheng, Y. Kang, Fault-tolerant coordination control for second-order multi-agent systems with partial actuator effectiveness, Inf. Sci. 423 (2018) 115–127.
[34] D.Y. Zhao, Y. Liu, M. Liu, J.Y. Yu, Adaptive sliding mode control for high-frequency sampled-data systems with actuator faults, Designs 2 (1) (2018) 3, doi:10.3390/designs2010003.
[35] R.D. Nussbaum, Some remarks on a conjecture in parameter adaptive control, Syst. Control Lett. 3 (5) (1983) 243–246.
[36] T.P. Zhang, S.S. Ge, Adaptive neural control of MIMO nonlinear state time-varying delay systems with unknown dead-zones and gain signs, Automatica 43 (6) (2007) 1021–1033.
[37] L.X. Wang, Stable adaptive fuzzy control of nonlinear systems, IEEE Trans. Fuzzy Syst. 1 (2) (1993) 146–155.
[38] H.N. Wu, X.H. Qiang, L. Guo, L∞-gain adaptive fuzzy fault accommodation control design for nonlinear time-delay systems, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 41 (3) (2011) 817–827.
[39] L. Liu, Z.S. Wang, Z.J. Huang, H.G. Zhang, Adaptive predefined performance control for MIMO systems with unknown direction via generalized fuzzy hyperbolic model, IEEE Trans. Fuzzy Syst. 25 (3) (2017) 527–542.
[40] J.J. Slotine, W.P. Li, et al., Applied Nonlinear Control, vol. 199, Prentice Hall, Englewood Cliffs, NJ, 1991.