Automatica 43 (2007) 2070–2076
www.elsevier.com/locate/automatica

Brief paper

Progressive accommodation of parametric faults in linear quadratic control

Marcel Staroswiecki (a,*), Hao Yang (b), Bin Jiang (b)

a SATIE, ENS Cachan, USTL, CNRS, UniverSud, 61 avenue du Président Wilson, 94235 Cachan Cedex, France
b College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, 29 YuDao Street, Nanjing 210016, China

Received 23 February 2006; received in revised form 7 December 2006; accepted 3 April 2007. Available online 13 August 2007.

Abstract

In this paper, a strategy based on the linear quadratic design, which progressively accommodates the feedback control law, is proposed. It significantly reduces the loss of performance that results from the time delay needed by fault accommodation algorithms to provide a solution. An aircraft example is given to illustrate the efficiency of progressive accommodation. © 2007 Elsevier Ltd. All rights reserved.

Keywords: Fault tolerant control; Linear quadratic control; Anytime algorithm; Progressive accommodation

1. Introduction

Fault tolerant control (FTC) aims at preserving the functionality of a faulty system with acceptable performances when compared to normal operation (Blanke, Kinnaert, Lunze, & Staroswiecki, 2003). As opposed to passive FTC, active FTC implements decisions that are specific to the diagnosed fault, and depends on the functionality to be preserved: optimizing a performance index (Moerder, Halyo, & Broussard, 1989; Staroswiecki, 2003), matching a given model (Gao & Antsaklis, 1991; Jiang, 1994; Staroswiecki, 2005), or tracking a given trajectory (Gao & Antsaklis, 1992). However, very few papers consider the delays associated with computation times (Staroswiecki, 2004; Zhang, Parisini, & Polycarpou, 2004). When a fault occurs, the faulty system works under the nominal control until accommodation is performed, which may cause severe loss of performance and stability. Progressive accommodation (PA) was proposed (Staroswiecki, 2004) to minimize the effect of the accommodation delay. It is based on the "anytime" property of the Newton-Raphson (NR) scheme introduced in Kleinman (1968) and Kwakernaak and Sivan (1972) for computing the solution of the LQ problem. This paper investigates the optimality of PA, and illustrates it on an aircraft example. Section 2 describes the fault tolerant LQ control problem and shows the limits of the ideal fault accommodation approach. PA is introduced and analyzed in Section 3. Section 4 presents simulation results. Conclusions are given in Section 5.

Footnote: A part of this paper has been presented at the 6th IFAC Symposium Safeprocess 2006. This paper was recommended for publication in revised form by Associate Editor Denis Dochain under the direction of Editor Frank Allgöwer. This work is partially supported by the National Natural Science Foundation of China (60574083). Corresponding author: M. Staroswiecki. Tel.: +33 03 20 33 71 90; fax: +33 03 20 33 71 89. E-mail addresses: [email protected] (M. Staroswiecki), [email protected] (H. Yang), [email protected] (B. Jiang).

2. Fault tolerant linear quadratic control

2.1. An optimal control problem

Problem setting. Consider the system

ẋ = An x + Bn u,  t ∈ Tn ≜ [0, tf[,
ẋ = Af x + Bf u,  t ∈ Tf ≜ [tf, ∞),    (1)

where x ∈ R^n is the state, u ∈ R^m is the control, tf > 0 is the finite switching time between the two system models, An, Bn and Af, Bf are real constant matrices of appropriate dimensions, and both pairs are stabilizable. The control objective is to transfer the system state from the initial value x(0) = x0 to some unspecified final value x(∞), while minimizing the cost function

J(u, x0, tf) = ∫_0^∞ (u'Ru + x'Qx) dt,    (2)

where ' stands for transposition, and Q and R are symmetric matrices with Q = D'D, R > 0, and both (D, An) and (D, Af) detectable.

Problem solution. The optimal solution is known to be uσ = −R^{-1} Bσ' Pσ x ≜ −Fσ x on each time interval Tσ, σ = n, f, where Pσ is the unique positive definite solution of the algebraic Riccati equation (ARE)

Pσ Aσ + Aσ' Pσ + Q − Pσ Bσ R^{-1} Bσ' Pσ = 0,    (3)

such that the closed-loop system ẋ = (Aσ − Bσ Fσ)x is stable. The optimal control leads to the final state x(∞) = 0, and the minimal cost is

J*(x0, tf) = x0' Pn x0 + xf' (Pf − Pn) xf,    (4)

where xf ≜ x(tf).
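For illustration only (this code is not part of the original paper), the design step above can be sketched numerically with SciPy's ARE solver; lq_gain is a hypothetical helper name.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lq_gain(A, B, Q, R):
    """Solve the ARE (3) for one mode (sigma = n or f) and return (P_sigma, F_sigma)."""
    P = solve_continuous_are(A, B, Q, R)   # unique stabilizing solution of (3)
    F = np.linalg.solve(R, B.T @ P)        # F_sigma = R^{-1} B_sigma' P_sigma, so u = -F x
    return P, F
```

Calling lq_gain(An, Bn, Q, R) and lq_gain(Af, Bf, Q, R) then yields (Pn, Fn) and (Pf, Ff), respectively.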

2.2. Interpretation and practical issues

Interpretation. The above problem is interpreted as follows:
(I1) The pair (An, Bn) is the nominal model; it is changed at time tf into (Af, Bf) as the result of parametric faults.
(I2) The quadratic cost (2) stems from a compromise between the cost ∫_0^∞ x'Qx dt of tracking the reference trajectory x = 0 and the cost of control ∫_0^∞ u'Ru dt.
(I3) tf, Af, Bf are given by the diagnostic algorithm, with no delay and no error.
(I4) Solving (3) for σ = f is the optimal way to accommodate the fault (from Bellman's optimality principle).
(I5) If the cost (4) is below a specified limit, then system (1) is tolerant to the fault (Af, Bf) occurring at time tf; otherwise, the fault is declared to be unrecoverable: although the system is optimally controlled, the performance degradation is unacceptable (Staroswiecki, 2003).

Discussion.
(D1) Parametric faults do not change the equilibrium point, so tracking the reference trajectory x = 0 is still a valid objective after the fault occurrence.
(D2) By (Af, Bf) being stabilizable and (D, Af) being detectable, it is assumed that the fault is such that the accommodation problem has a unique optimal solution. The limit on the cost mentioned in (I5) defines the "graceful degradation" of the system performances that can be accepted.
(D3) In order to keep focused, and for the sake of brevity, the state is assumed to be always available. Letting y = Cn x (resp. y = Cf x) describe the available sensors in the nominal (resp. faulty) situation, this boils down to assuming that (Cn, An) and (Cf, Af) are observable, and that a state observer is constructed (Staroswiecki, Hoblos, & Aitouche, 2004 discuss fault tolerant estimation).
(D4) Should the fault be unrecoverable (no solution, or a solution with unacceptable cost), then objective reconfiguration (Blanke et al., 2003) would be needed. Note that a critical situation is reached if the fault is such that the assumptions in (D2) and (D3) do not hold. Objective reconfiguration is not considered in this paper.
(D5) Obviously, the assumption of a perfect diagnosis is not realistic. Assuming there is no diagnostic error, three time instants tf, tfdi, tftc and four time windows must be considered (Staroswiecki, 2004): nominal operation takes place on [0, tf[, i.e. system (An, Bn) is controlled by un; [tf, tfdi[ is the diagnostic delay, during which (Af, Bf) is controlled by un; [tfdi, tftc[ is the fault accommodation delay, where (Af, Bf) is still controlled by un; finally, on [tftc, ∞) the fault is accommodated, i.e. (Af, Bf) is controlled by uf. The practical fault accommodation problem is to minimize tftc − tfdi (minimizing tfdi − tf is an objective for the diagnostic problem). In the sequel, we investigate the PA strategy, where an ever-improving control is applied at each iteration.

3. Progressive accommodation

3.1. The algorithm

The NR scheme. Let Pi be the unique solution of the Lyapunov equation

Pi (Af − Bf Fi−1) + (Af − Bf Fi−1)' Pi = −Q − Fi−1' R Fi−1,    (5)

where Fi = R^{-1} Bf' Pi for all i = 1, 2, ... and the initial F0 is given. If Af − Bf F0 is Hurwitz, Kleinman (1968) has proven that this algorithm produces the result

(a) Af − Bf Fi is Hurwitz for all i = 1, 2, ...,
(b) Pf ≤ ... ≤ Pi+1 ≤ Pi ≤ ... ≤ P0, i = 1, 2, ...,
(c) lim_{i→∞} Pi = Pf,    (6)

where Pf is the solution of (3) for σ = f.
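The iteration (5) is straightforward to prototype. The sketch below is an illustration added here (not the authors' code); kleinman_iteration is a hypothetical helper name, and SciPy's Lyapunov solver is assumed to be available.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_iteration(Af, Bf, Q, R, F0, n_iter=5):
    """NR/Kleinman iterates for the faulty-system ARE.

    F0 must make Af - Bf @ F0 Hurwitz; the returned list [F1, ..., Fn_iter]
    is the gain sequence that PA applies as it becomes available."""
    F, gains = F0, []
    for _ in range(n_iter):
        Ac = Af - Bf @ F                               # closed loop under the previous gain
        # Lyapunov equation (5):  Pi Ac + Ac' Pi = -(Q + F' R F)
        Pi = solve_continuous_lyapunov(Ac.T, -(Q + F.T @ R @ F))
        F = np.linalg.solve(R, Bf.T @ Pi)              # Fi = R^{-1} Bf' Pi
        gains.append(F)
    return gains
```

By property (6-a), every intermediate gain in the returned list is already stabilizing, which is precisely what PA exploits.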

The "anytime" property. In situations where the decision time may not be long enough for convergence to be obtained, using an anytime algorithm, i.e. an algorithm whose result quality improves as computation time increases, gives a rationale for solving the decision problem: the current solution is the best one available (Zilberstein, 1996). Since from (6-b) the NR scheme has the anytime property, the idea of PA is to apply the feedback control law ui = −Fi x on the time interval [ti, ti+1[. The system behavior after the fault occurrence is therefore

ẋ = (Af − Bf Fn)x,  t ∈ [tf, t0[,    (7)
ẋ = (Af − Bf F0)x,  t ∈ [t0, t1[,    (8)
ẋ = (Af − Bf Fi)x,  t ∈ [ti, ti+1[,  i = 1, 2, ...,    (9)

where t0, F0 define the algorithm initialization (note that in general t0 > tfdi because time is needed to initialize the algorithm after the fault has been detected, isolated and estimated). In the sequel, we prove some properties of this fault accommodation scheme.
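To make the switching behavior (7)-(9) concrete, here is a small simulation sketch (added for illustration, with hypothetical helper names); it propagates the state from one commutation time to the next with matrix exponentials.

```python
import numpy as np
from scipy.linalg import expm

def pa_states_at_switches(Af, Bf, gains, switch_times, x_tf, t_end):
    """State at the commutation instants under (7)-(9).

    gains[k] is held on [switch_times[k], switch_times[k+1][; the first entry
    is the gain active from the fault time (Fn, then F0, F1, ...), and the
    last gain is kept until t_end. x_tf is the state at switch_times[0]."""
    ts = list(switch_times) + [t_end]
    x, states = np.asarray(x_tf, dtype=float), []
    for F, t_on, t_off in zip(gains, ts[:-1], ts[1:]):
        x = expm((Af - Bf @ F) * (t_off - t_on)) @ x   # closed-loop flow on one interval
        states.append((t_off, x))
    return states
```

For instance, pa_states_at_switches(Af, Bf, [Fn, F0, F1, ...], [tf, t0, t1, ...], x_tf, t_end) returns the state reached at each commutation time.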

3.2. Stabilization

Theorem 1. If Af − Bf F0 is Hurwitz, the PA scheme stabilizes the faulty system.

Proof. From (6-a), every closed-loop matrix Af − Bf Fi is stable (this can be seen using x'(t)Pi x(t) as a Lyapunov function for the unchanged equilibrium x = 0). Moreover, instability is not introduced at the commutation times ti, since from (6-b) it follows that x'(ti+1)Pi+1 x(ti+1) ≤ x'(ti+1)Pi x(ti+1). □

If the faulty system is not destabilized by the nominal control, taking F0 = Fn is the quickest initialization. If Af − Bf Fn is unstable, a stabilizing feedback F0 has first to be found (remember that (Af, Bf) is stabilizable); t0 − tfdi is the time needed to select an appropriate F0.
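The paper does not prescribe how F0 is selected. One possible way (sketched here as an assumption, not the authors' procedure) is pole placement on the identified faulty pair; the pole locations and the helper name are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import place_poles

def stabilizing_gain(Af, Bf, poles):
    """Return an F0 such that Af - Bf @ F0 is Hurwitz.

    `poles` must contain as many (stable) locations as there are states;
    any stabilizing F0 is acceptable as an initialization of the NR scheme."""
    F0 = place_poles(Af, Bf, poles).gain_matrix
    assert np.linalg.eigvals(Af - Bf @ F0).real.max() < 0
    return F0
```

Any set of distinct stable poles will do, provided the faulty pair allows the assignment; optimality of F0 is not required.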

3.3. Optimality

After N > 1 iterations, algorithm (5) initialized with F0 has produced the sequence F1, ..., FN. Let us compare the strategy CA, where the feedback FN is first computed (C) and then applied (A), leading to the two post-fault control phases

ẋ = (Af − Bf Fn)x,  t ∈ [tf, tN[,    (10)
ẋ = (Af − Bf FN)x,  t ∈ [tN, ∞),    (11)

with the strategy PA, where the control is progressively (P) applied (A), i.e. one has the closed-loop matrix Af − Bf F0 on [t0, t1[ and the closed-loop matrices Af − Bf Fi on [ti, ti+1[, i = 1, 2, .... The CA and PA controls differ only on [t0, tN[. Let JCA and JPA be the associated costs. Note that for N large enough, convergence is obtained, i.e. one has FN = Ff + εN with εN ≈ 0, but the developments below are valid for any N > 1.

Case 1: Let us first consider the case when Af − Bf Fn is stable, and prove the better performance of PA.

Theorem 2. If the closed-loop matrix Af − Bf Fn is Hurwitz, then the initialization F0 = Fn leads to JPA ≤ JCA.

Proof. The proof is in three parts: (a) set the comparison frame, (b) compute the costs and (c) compare the costs.

(a) Set the comparison frame. The control sequence associated with CA is

SCA: (Fn) on [t0, t1[, (Fn) on [t1, t2[, ..., (Fn) on [tN−1, tN[, (FN) on [tN, ∞),    (12)

while with the initialization F0 = Fn, the PA control sequence is

SPA: (Fn) on [t0, t1[, (F1) on [t1, t2[, ..., (FN−1) on [tN−1, tN[, (FN) on [tN, ∞).    (13)

For k = 1, ..., N − 1, define the family of control sequences

Sk: (Fn) on [t0, t1[, (F1) on [t1, t2[, ..., (Fk) on [tk, tk+1[, ..., (Fk) on [tN−1, tN[, (FN) on [tN, ∞),

and let S0 ≜ SCA. Obviously SN−1 = SPA. For k = 1, 2, ..., N − 1, we compare Sk and Sk−1 and show that J(Sk) ≤ J(Sk−1) holds. The two state trajectories are identical on [t0, tk[, resulting in the same initial state x(tk) for the time window [tk, tN[. They differ on [tk, tN[ due to different controllers, leading to the two states

x(tN) = e^{(Af − Bf Fk−1)Δk} x(tk) ≜ Φk−1 x(tk),
x̄(tN) = e^{(Af − Bf Fk)Δk} x(tk) ≜ Φk x(tk),    (14)

where Δk ≜ tN − tk. They also differ on [tN, ∞) due to the different initial conditions (14). Remember that all feedbacks Fk are stabilizing since Fn is stabilizing, and are linked by Fk = R^{-1} Bf' Pk with Pk solution of (5).

(b) Compute the costs. The cost associated with Sk−1 is J(Sk−1) = J̄(t0, tk) + ∫_{tk}^{tN} x'(Q + Fk−1' R Fk−1)x dt + ∫_{tN}^{∞} x'(Q + FN' R FN)x dt, where J̄(t0, tk) is the cost already spent on [t0, tk[. From (5) and the definition of Fk it follows that J(Sk−1) = J̄(t0, tk) + x'(tk)Pk x(tk) − x'(tN)(Pk − PN+1)x(tN), and using (14) one gets

J(Sk−1) − J̄(t0, tk) = x'(tk)Pk x(tk) − x'(tk)Φk−1'(Pk − PN+1)Φk−1 x(tk).    (15)

Similarly, the cost associated with Sk is

J(Sk) − J̄(t0, tk) = x'(tk)Pk+1 x(tk) − x'(tk)Φk'(Pk+1 − PN+1)Φk x(tk).    (16)

(c) Compare the costs. For simplicity, write x instead of x(tk). From (15) and (16) the difference is J(Sk−1) − J(Sk) = x'[Pk − Pk+1 + Φk'(Pk+1 − PN+1)Φk − Φk−1'(Pk − PN+1)Φk−1]x. Let Wk ≜ Pk − PN+1; from (6-b) one has Wk > 0 for all k. The difference can be written as J(Sk−1) − J(Sk) = x'[Wk − Wk+1 + Φk' Wk+1 Φk − Φk−1' Wk Φk−1]x. In order to show that it is positive, we first prove that

x'[Wk − Φk−1' Wk Φk−1]x ≥ x'[Wk − Φk' Wk Φk]x    (17)

and then

x'[Wk − Φk' Wk Φk]x ≥ x'[Wk+1 − Φk' Wk+1 Φk]x.    (18)

Eq. (18) holds because for all k, Wk − Wk+1 ≥ 0 and λmax(Φk) < 1, since Af − Bf Fk is stable, which leads to Wk − Wk+1 ≥ Φk'[Wk − Wk+1]Φk. It remains to prove (17), i.e. Φk' Wk Φk − Φk−1' Wk Φk−1 ≥ 0, or equivalently Φk−1'[Φk−1'^{-1} Φk' Wk Φk Φk−1^{-1} − Wk]Φk−1 ≥ 0. From the definitions of Φk−1 and Φk one gets Φk Φk−1^{-1} = e^{Bf(Fk−1 − Fk)Δk}. Consider the auxiliary system (AS): ż = −Bf(Fk−1 − Fk)z and the Lyapunov function L = z'(Pk−1 − Pk)z. Its time derivative is L̇ = −z'(Pk−1 − Pk)Bf(Fk−1 − Fk)z − z'(Fk−1 − Fk)' Bf'(Pk−1 − Pk)z. From the definitions of Fk and Fk−1 one gets Fk−1 − Fk = R^{-1} Bf'(Pk−1 − Pk), so L̇ = −2z'(Pk−1 − Pk)Bf R^{-1} Bf'(Pk−1 − Pk)z ≤ 0. From the stability of (AS) it follows that λmin(Φk Φk−1^{-1}) > 1 and [Φk Φk−1^{-1}]' Wk [Φk Φk−1^{-1}] − Wk ≥ 0, which is the result. □
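Theorem 2 can also be checked numerically on a given example by integrating the running cost along each switching sequence. The sketch below is illustrative only (hypothetical helper name, infinite horizon truncated at a large t_end).

```python
import numpy as np
from scipy.integrate import solve_ivp

def switching_cost(Af, Bf, Q, R, gains, times, x0, t_end):
    """Approximate J = int x'(Q + F' R F) x dt for a switching feedback sequence.

    gains[k] is held on [times[k], times[k+1][, and the last gain is kept
    until t_end (a large value standing in for the infinite horizon)."""
    ts = list(times) + [t_end]
    x, J = np.asarray(x0, dtype=float), 0.0
    for F, t_on, t_off in zip(gains, ts[:-1], ts[1:]):
        A_cl, W = Af - Bf @ F, Q + F.T @ R @ F
        def rhs(t, z):                      # state dynamics plus running cost
            return np.concatenate([A_cl @ z[:-1], [z[:-1] @ W @ z[:-1]]])
        sol = solve_ivp(rhs, (t_on, t_off), np.concatenate([x, [0.0]]), rtol=1e-8)
        x, J = sol.y[:-1, -1], J + sol.y[-1, -1]
    return J
```

Evaluating it with the CA sequence (12) and the PA sequence (13) on a concrete system should reproduce the ordering JPA ≤ JCA asserted by Theorem 2.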

[Figure 1 here: four panels plotting x1, x2, x3 and x4 against t/s.]
Fig. 1. State trajectories (PA—continuous line; CA—discontinuous line; perfect FTC—dotted line).

[Figure 2 here: two panels plotting the cost of states and the cost of control against t/s.]
Fig. 2. Cost comparison (PA—continuous line; CA—discontinuous line; perfect FTC—dotted line).

Case 2: When Af − Bf Fn is unstable, the CA control sequence is still given by (12), while in both the SPA and the S0 control sequences Fn is replaced by F0, where F0 is the stabilizing initial feedback. Since JPA < J(S0) (from Theorem 2), it is interesting to compare JCA and J(S0). The two costs are given by JCA = J̄(tf, t0) + ∫_{t0}^{tN} x'(t)(Q + Fn' R Fn)x(t) dt + x'(tN)PN x(tN) and J(S0) = J̄(tf, t0) + ∫_{t0}^{tN} x̄'(t)(Q + F0' R F0)x̄(t) dt + x̄'(tN)PN x̄(tN), where the states x(t) = Φn(t)x(t0) and x̄(t) = Φ0(t)x(t0) are defined from Φn(t) ≜ e^{(Af − Bf Fn)t} and Φ0(t) ≜ e^{(Af − Bf F0)t}, with x(t0) as the common initial condition, resulting in x(tN) = Φn(Δ)x(t0) ≜ Φn x(t0) and x̄(tN) = Φ0(Δ)x(t0) ≜ Φ0 x(t0), where Δ = tN − t0. Note that there is no relation between Fn and (Af, Bf) (except that Af − Bf Fn is unstable) and there is no relation between F0 and (Af, Bf) (except that Af − Bf F0 is stable); therefore no general inequality can be derived to compare JCA and J(S0). Indeed, during the transient, the control Fn x (although it destabilizes system (Af, Bf)) might be of lower energetic cost than the control F0 x that is used to initialize the NR iterations. However, comparisons can be done over the cost associated with the state and the cost associated with the control, for the terminal period [tN, ∞) (Proposition 3) and for the transient period [t0, tN[ (Proposition 4).

[Figure 3 here: four panels plotting x1, x2, x3 and x4 against t/s.]
Fig. 3. State trajectories (PA—continuous line; CA—discontinuous line; perfect FTC—dotted line).

Proposition 3. The terminal cost associated with S0 is always less than the one associated with SCA.

Proof. Since Af − Bf F0 is Hurwitz, λmax(Φ0) < 1, while the instability of Af − Bf Fn results in λmax(Φn) > 1. It follows that (using the notation x instead of x(t0))

x'(tN)PN x(tN) − x̄'(tN)PN x̄(tN) = x'Φ0'[Φ0'^{-1} Φn' PN Φn Φ0^{-1} − PN]Φ0 x > 0.  □

Let us now consider the transient period [t0, tN[. The cost difference JCA − J(S0) is written as JCA − J(S0) = Δstate(t0, tN) + Δcontrol(t0, tN), where Δstate = x'(∫_{t0}^{tN} (Φn'(t) Q Φn(t) − Φ0'(t) Q Φ0(t)) dt)x and Δcontrol = x'(∫_{t0}^{tN} (Φn'(t) Fn' R Fn Φn(t) − Φ0'(t) F0' R F0 Φ0(t)) dt)x.

Proposition 4. (i) The cost associated with the state satisfies: ∀x ∈ R^n: Δstate(t0, tN) > 0. (ii) ∃T > t0 such that the cost associated with the control satisfies: ∀τ > T, ∀x ∈ R^n: Δcontrol(t0, τ) > 0.

Proof. The two statements directly follow from the fact that ∀t > 0, λmax(Φ0(t)) < 1 and λmax(Φn(t)) > 1.  □

Remark. The value of T depends on the pair (Fn, F0). Obviously, if T ≤ tN then JCA > J(S0) > JPA. If T > tN, the control cost associated with the stabilizing initialization F0 is higher during the transient period [t0, tN[ than the one associated with the (unstable) nominal control Fn. Looking for the "best" F0 is not a sound problem: indeed, the solution is Ff, which is what the algorithm intends to determine.

4. Example

The example is taken from Wu, Zhang, and Zhou (2000) (longitudinal aircraft dynamic control). The nominal system is

An = [ −0.0226  −36.6   −18.9   −32.1
        0        −1.9     0.983    0
        0.0123  −11.7    −2.63     0
        0         0       1        0 ],

Bn = [   0       0
        −0.414   0
       −77.8    22.4
         0       0 ].

The state variables are forward velocity, angle of attack, pitch rate and pitch angle; the control inputs are the elevon and canard positions, and the whole state is measured. The weighting matrices are Q = diag(1, 1, 1, 1) and R = diag(1, 1), leading to the optimal feedback

Fn = [  0.9619  −1.8377  −1.1620  −1.8352
       −0.2679   0.4506   0.3322   0.5247 ].

Case 1: Af − Bf Fn stable. A loss of effectiveness of the two actuating channels occurs at tf = 0.2 s. It is described by

Af = An,   Bf = [  0        0
                  −0.0414   0
                  −7.78     2.24
                   0        0 ].
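Using the numbers above, the nominal design and the Case 1 stability check can be reproduced along the following lines (an illustrative sketch; variable names are our own, and the computed gain may differ from the printed Fn in the last digits because of rounding).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

An = np.array([[-0.0226, -36.6, -18.9, -32.1],
               [ 0.0,     -1.9,   0.983,  0.0],
               [ 0.0123, -11.7,  -2.63,   0.0],
               [ 0.0,      0.0,   1.0,    0.0]])
Bn = np.array([[0.0, 0.0], [-0.414, 0.0], [-77.8, 22.4], [0.0, 0.0]])
Q, R = np.eye(4), np.eye(2)

Pn = solve_continuous_are(An, Bn, Q, R)
Fn = np.linalg.solve(R, Bn.T @ Pn)              # nominal LQ gain (approximately Fn above)

Af, Bf = An, 0.1 * Bn                           # Case 1: 90% loss on both channels
print(np.linalg.eigvals(Af - Bf @ Fn).real.max() < 0)   # expected True: Af - Bf*Fn stable
```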

[Figure 4 here: two panels plotting the cost of states and the cost of control against t/s.]
Fig. 4. Cost comparison (PA—continuous line; CA—discontinuous line; perfect FTC—dotted line).

The accommodated control is uf = −Ff x, where

Ff = [  0.9953  −2.4611  −2.4012  −3.8674
       −0.0284   0.0691   0.0688   0.1111 ].

It is obtained after 5 iterations, by initializing the NR algorithm with F0 = Fn. Assume it takes 1 s for fault diagnosis, 0.2 s for checking that Af − Bf Fn is stable, and 0.5 s for each iteration. The CA approach would apply Fn until t = 3.9 s and then F5, while the PA approach applies Fn until 1.4 s and then the sequence F1, F2, F3, F4 and F5 at the respective times 1.9, 2.4, 2.9, 3.4 and 3.9 s. Figs. 1 and 2 show the state trajectories and the evolution of the system cost in the three cases associated with CA, PA and ideal (instantaneous) fault accommodation. It is seen that the trajectories associated with PA are much closer to the ideal trajectories than those associated with CA, thus widely improving the system performance during the fault accommodation transient.

Case 2: Af − Bf Fn unstable. A loss of effectiveness in the elevon actuator, coupled with damage to the structure, occurs at tf = 0.2 s:

Af = [ −0.0226  −36.6   −18.9   −32.1
        0        −0.19    −1.1     0
        0.0123  −11.7      1.3     0
        0         0        1       0 ],

Bf = [   0       0
        −0.0414  0
        −7.78   22.4
         0       0 ].

The new feedback is

Ff = [ −0.4761  −0.3609   0.7406   1.6247
       −0.8530   0.4860   1.3432   2.0848 ].

Since Af − Bf Fn is unstable, the stabilizing initial feedback

F0 = [ 0.5  −8.5  −0.5  1.5
       0.2  −5.0   0.1  0.8 ]

is first computed. Convergence to Ff requires 5 iterations. Assume that the fault diagnosis takes 1 s, finding the initial F0 takes 0.4 s (so t0 = 1.6 s), and each iteration takes 0.5 s. The classical approach would apply Fn until t = 4.1 s and then F5. The PA approach applies the sequence F0, F1, F2, F3, F4 and F5 at the respective times 1.6, 2.1, 2.6, 3.1, 3.6 and 4.1 s. Figs. 3 and 4 show the state trajectories and the system costs in the same three cases (PA, CA, ideal), leading to the same conclusion. It is also seen that the assumption of ideal fault accommodation completely fails to explain the actual behavior of the system, where strict physical limitations on the state variables might be violated when using the CA strategy.
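The Case 2 data can be assembled in the same way. The sketch below (again illustrative, reusing the hypothetical kleinman_iteration helper sketched in Section 3.1) checks that F0 stabilizes the faulty pair and generates the PA gain sequence F1, ..., F5.

```python
import numpy as np

# Case 2 data from the example (structural damage plus elevon effectiveness loss).
Af2 = np.array([[-0.0226, -36.6, -18.9, -32.1],
                [ 0.0,     -0.19,  -1.1,   0.0],
                [ 0.0123, -11.7,    1.3,   0.0],
                [ 0.0,      0.0,    1.0,   0.0]])
Bf2 = np.array([[0.0, 0.0], [-0.0414, 0.0], [-7.78, 22.4], [0.0, 0.0]])
F0  = np.array([[0.5, -8.5, -0.5, 1.5],
                [0.2, -5.0,  0.1, 0.8]])
Q, R = np.eye(4), np.eye(2)

assert np.linalg.eigvals(Af2 - Bf2 @ F0).real.max() < 0    # F0 stabilizes (Af, Bf)
gains = kleinman_iteration(Af2, Bf2, Q, R, F0, n_iter=5)   # F1, ..., F5 of the PA sequence
```

The last element of gains should approach the Case 2 feedback Ff printed above.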

5. Conclusion

This paper emphasizes the importance of improving the system behavior during the fault accommodation delay. In the linear quadratic control framework, progressive accommodation implements an "anytime" approach, splitting the problem into system stabilization and performance improvement. Its application to the longitudinal control of an aircraft under actuator and process faults shows that it can minimize the large transients which may occur while the faulty system is still controlled by the nominal control law. Future research will investigate further algorithms for solving the Riccati equation and analyze their "anytime" potential, as well as the robustness issues that arise from uncertainties in both the nominal and faulty system models.


References

Blanke, M., Kinnaert, M., Lunze, J., & Staroswiecki, M. (2003). Diagnosis and fault-tolerant control. Berlin, Heidelberg: Springer.
Gao, Z., & Antsaklis, P. J. (1991). Stability of the pseudo-inverse method for reconfigurable control. International Journal of Control, 53(3), 717–729.
Gao, Z., & Antsaklis, P. J. (1992). Reconfigurable control systems design via perfect model following. International Journal of Control, 56(4), 783–798.
Jiang, J. (1994). Design of reconfigurable control system using eigenstructure assignments. International Journal of Control, 59(2), 395–410.
Kleinman, D. L. (1968). On an iterative technique for Riccati equation computation. IEEE Transactions on Automatic Control, AC-13, 114–115.
Kwakernaak, H., & Sivan, R. (1972). Linear optimal control systems. New York: Wiley.
Moerder, D. D., Halyo, N., Broussard, J. R., et al. (1989). Application of precomputed control laws in a reconfigurable aircraft flight control system. Journal of Guidance, Control and Dynamics, 12, 325–333.
Staroswiecki, M. (2003). Actuator faults and the linear quadratic control problem. In Proceedings of the 42nd IEEE conference on decision and control (pp. 959–965), Hawaii, USA.
Staroswiecki, M. (2004). Progressive accommodation of actuator faults in the linear quadratic control problem. In Proceedings of the 44th IEEE conference on decision and control (pp. 5234–5241), Paradise Island, The Bahamas.
Staroswiecki, M. (2005). Fault tolerant control: The pseudo-inverse method revisited. In Proceedings of the 16th IFAC World Congress (pp. Th-E05TO/2), Prague, Czech Republic.
Staroswiecki, M., Hoblos, G., & Aitouche, A. (2004). Sensor network design for fault tolerant estimation. International Journal of Adaptive Control and Signal Processing, 18, 55–72.
Wu, N. E., Zhang, Y. M., & Zhou, K. M. (2000). Detection, estimation and accommodation of loss of control effectiveness. International Journal of Adaptive Control and Signal Processing, 14(3), 775–795.
Zhang, X., Parisini, T., & Polycarpou, M. M. (2004). Adaptive fault-tolerant control of nonlinear uncertain systems: An information-based diagnostic approach. IEEE Transactions on Automatic Control, 49(8), 1259–1274.
Zilberstein, S. (1996). Using anytime algorithms in intelligent systems. Artificial Intelligence Magazine, 3, 73–83.

Marcel Staroswiecki was born in Melitopol (Ukraine) in 1945. He obtained the Engineering Degree from Ecole Nationale Supérieure des Arts et Métiers in 1968, a Ph.D. in Automatic Control in 1970, and a D.Sc. in Physical Sciences in 1979. He is currently a full professor at the University of Lille. His research at Ecole Normale Supérieure de Cachan (SATIE) addresses Fault Detection and Isolation and Fault Tolerance.

Hao Yang was born in Nanjing, China, in 1982. He is currently a Ph.D. candidate in Nanjing University of Aeronautics and Astronautics, China, and University of Lille 1, France. His research interests include fault tolerant control of hybrid systems.

Bin Jiang was born in Jiangxi, China, in 1966. He obtained his Ph.D. in Automatic Control from Northeastern University, Shenyang, China, in 1995. Currently he is a full professor and department head of Automatic Control in Nanjing University of Aeronautics and Astronautics. He serves as associate editor for International Journal of System Science, International Journal of Control, Automation and Systems, etc. His research interests include fault diagnosis and fault tolerant control.