Consensus rate regulation for general linear multi-agent systems under directed topology





Tao Feng a, Huaguang Zhang a,b,∗, Yanhong Luo a, Hongjing Liang a

a College of Information Science and Engineering, Northeastern University, 110819 Shenyang, LN, PR China
b State Key Laboratory of Synthetical Automation for Process Industries (Northeastern University), 110004 Shenyang, LN, PR China

Keywords: Consensus rate; Consensus region; Convergence speed; Inverse optimal; Optimal distributed consensus protocols

Abstract

Recently, optimization in distributed multi-agent coordination has been studied with respect to convergence speed. The optimal convergence speed of consensus for multi-agent systems consisting of general linear node dynamics is still an open problem. This paper aims to design optimal distributed consensus protocols for general identical linear continuous-time cooperative systems which not only minimize some local quadratic performance indices, but also regulate the consensus rate (including the convergence rate and the damping rate) of the multi-agent systems. The graph topology is assumed to be fixed and directed. The inverse optimal design method is utilized, and the resulting optimal distributed protocols place part of the closed-loop poles of the global disagreement system at specified locations asymptotically, while the remaining poles are pushed far enough from the imaginary axis. It turns out that for identical linear continuous-time multi-agent systems, the convergence speed has no upper bound. The main advantages of the developed method over the LQR design method are that the resulting multi-agent systems can achieve a specified consensus rate asymptotically and the resulting protocols have the whole right half complex plane as their asymptotic consensus region. Numerical examples are given to illustrate the effectiveness of the proposed design procedures. © 2015 Published by Elsevier Inc.

1. Introduction

Cooperative distributed control of multi-agent systems accomplishing a common task has received increasing demand and has become a priority research subject for a variety of military and civilian applications. All agents can communicate with each other, exchange information such as their relative positions and targets, and use these data to develop distributed but coordinated control policies. Different from the centralized approach, the distributed approach does not require a central station for control. The distributed approach is believed to be more promising and shows many advantages in achieving cooperative group performance, especially high robustness, strong adaptivity, and lower system requirements. Pioneering works are generally referred to [1,2]; thereafter, consensus algorithms under various information-flow constraints have been studied in [3–17]. Recently, many researchers have focused on heterogeneous multi-agent systems. Adaptive control methods have been developed by adopting a fuzzy disturbance observer for heterogeneous multi-agent systems consisting of heterogeneous nonlinear dynamic nodes; see [18] and references therein. For an excellent survey, please see [19].





Recently, a considerable amount of the existing literature has been dedicated to finding optimal strategies subject to given constraints [20–28]. Optimal consensus rate is one of the key optimization issues in distributed multi-agent coordination [19,29]. The consensus rate includes two indexes [30]: the convergence speed, which characterizes how fast consensus can be achieved, and the damping rate, which evaluates the oscillating behaviors of the agents. Since the graph topology interplays with the system dynamics, the locations of the global closed-loop poles are hard to determine even if the closed-loop poles of each agent system are placed in a specified region. For multi-agent systems consisting of single-integrator kinematics, the smallest nonzero eigenvalue of the Laplacian matrix determines the worst-case convergence speed [12]; hence the convergence speed has been maximized by choosing optimal weights associated with the edges [31,32]. Another common line of work focuses on the case when all agents converge to the average of the initial states [33], which makes the optimization problem challenging if that average is unknown. In [22,23], the inverse optimal design method was introduced to obtain optimal distributed consensus protocols by constructing a global performance index. The inverse optimal problem was first raised by Kalman [36] and solved for single-input linear systems, and was then extended to the multi-input case in [38]. From the practical viewpoint, it is often preferable in LQ regulator design to give up the weight selection and instead design state feedback controls which are optimal for some unknown weights, thereby simplifying the design procedure. In [37], the optimal regulator was designed by satisfying a pole assignment requirement which the LQR design method could hardly address. However, in [23], the authors only considered the consensus performance for a class of directed graphs. It is worth pointing out that optimal convergence for multi-agent systems consisting of general linear node dynamics on directed graphs is still an open problem [19].

One way to evaluate the performance of consensus protocols is to show how consensus depends on structural parameters of the communication graph by using the concept of consensus region [34], which is closely related to the gain margin of the feedback gain used in distributed protocol design. Obviously, a large consensus region is more desirable. In [35], an unbounded consensus region was obtained by using the LQR optimal design method. However, the well-developed LQR optimal consensus design methods are incapable of handling consensus rate issues. Improvements of the convergence rate and of the consensus region, both vital characteristics in cooperative distributed design, should be analyzed together.

Motivated by the above facts, this paper aims to address the issues of convergence rate and consensus region in cooperative distributed design by means of inverse optimal design methods [36–38]. The resulting distributed protocol is locally optimal and its control gain contains two parameters: one is designed based on the asymptotic objective, which is to place part of the closed-loop poles at specified locations, and the other is tuned to ensure that the asymptotic objective is achieved while gradually eliminating the interplay from the graph topology, so that the consensus region extends to the whole right half complex plane. Therefore, the multi-agent system can achieve a specified consensus rate and a desirable consensus region.
What is more, the results clearly indicate that for general linear continuous-time multi-agent systems, the convergence speed has no upper bound; hence there is no optimal convergence rate on any given graph topology. The main contributions of this paper are stated as follows:
• A novel and simple inverse optimal result is proposed which yields an optimal regulator that has an explicit parameterization and at least a specified guaranteed gain margin.
• The consensus problem for general linear multi-agent systems on fixed, directed graphs is solved by using the proposed inverse optimal design method. The resulting distributed protocol contains two parameters which can be designed such that the convergence speed and the damping rate are asymptotically achieved.
• The interplay from the graph topology can be gradually eliminated as the gain increases, so the consensus region extends to the whole right half complex plane. This is fairly significant if the number of agent nodes is large and the eigenvalues of the corresponding communication matrix are hard to determine or even troublesome to estimate.

The paper is organized as follows. Section 2 introduces the necessary graph theory. In Section 3, a new and simple time-domain solution of the inverse optimal problem of the LQ regulator is proposed. Such a time-domain solution leads to a fairly simple design procedure for optimal partial pole placement. The designed LQ regulator places part of the closed-loop poles at specified locations exactly and has the specified gain margin. In Section 4, the cooperative distributed control problem, including the leader following consensus problem and the leaderless consensus problem, is addressed by the developed inverse optimal design method, and novel distributed cooperative design methods are developed. The resulting distributed consensus protocols are locally optimal, and the asymptotic design objective is achieved by resorting to possibly high gain regulators. The consensus rate and consensus region issues are both well addressed. Simulation examples are presented in Section 5, and a conclusion is given in Section 6.

Notations: Matrix A > 0 (≥ 0) means A is positive definite (semi-definite); A < 0 (≤ 0) means A is negative definite (semi-definite). The Kronecker product is denoted by ⊗. The transposition of matrix A is denoted by A^T. The Hermitian transposition of matrix A is denoted by A^*. In denotes the n-dimensional identity matrix in R^{n×n}. 1N ∈ R^N is the vector with all elements equal to 1.

2. Preliminaries

2.1. Graph theory

Consider a weighted digraph G = (V, E, A) with a nonempty finite set of N nodes V = {v1, v2, . . . , vN}, a set of edges E ⊂ V × V and the associated adjacency matrix A = [aij] ∈ R^{N×N}. An edge rooted at node j and ended at node i is denoted by (vj, vi), which means the information flows from node j to node i. The weight aij of edge (vj, vi) is positive, i.e., aij > 0 if (vj, vi) ∈ E; otherwise, aij = 0. In this paper, it is assumed that there are no repeated edges and no self loops, i.e., aii = 0, ∀i ∈ N, where N = {1, 2, . . . , N}. If


(vj, vi) ∈ E, then node j is called a neighbor of node i. The set of neighbors of node i is denoted as Ni = {j | (vj, vi) ∈ E}. Define the in-degree matrix as D = diag{di} ∈ R^{N×N} with di = ∑_{j∈Ni} aij and the Laplacian matrix as L = D − A. Obviously, L1N = 0. A graph is said to be directed if the information on the communication topology flows from any agent to all of the rest. The graph is said to be connected if every two vertices can be joined by a path, and strongly connected if every two vertices can be joined by a directed path. If G is strongly connected, the zero eigenvalue of L is simple, and ker L = span{1N}. A digraph is said to have a spanning tree if there is a node ir such that there is a directed path from node ir to every other node in the graph.
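For concreteness, the quantities above can be formed directly from an adjacency matrix. The following Python sketch (ours, not part of the paper; the 3-node weight pattern is hypothetical) builds D and L = D − A and checks that L1N = 0 and that the zero eigenvalue of L is simple.

import numpy as np

def laplacian(adj):
    """adj[i, j] = a_ij > 0 iff edge (v_j, v_i) exists; zero diagonal."""
    D = np.diag(adj.sum(axis=1))      # in-degree matrix D = diag{d_i}
    return D - adj                    # L = D - A, so L @ 1_N = 0

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)   # hypothetical example weights
L = laplacian(adj)
print(np.allclose(L @ np.ones(3), 0))      # True: L 1_N = 0
eigs = np.linalg.eigvals(L)
print("zero eigenvalue simple:", np.sum(np.abs(eigs) < 1e-9) == 1)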

3. Inverse optimality of linear systems Consider the following continuous-time LQ regulator problem

ẋ = Ax + Bu,  (1a)

J = ∫_0^∞ (x^T Qx + u^T Ru) dt,  (1b)

where x ∈ Rn is the state vector, u ∈ Rm is the control input. Throughout the paper, it is assumed that (A, B) is controllable and the input matrix B is of full column rank m. If Q and R are given, then the optimal control, named the LQ regulator, is given by a state feedback control u = −Kx, where

K = R^{-1}B^T P,  (2)

and P is the unique symmetric positive definite solution of the algebraic Riccati equation

A^T P + PA − K^T RK + Q = 0.  (3)
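As an illustration only (not from the paper), the LQ regulator (2)-(3) can be computed numerically with SciPy's Riccati solver; the pair (A, B) below is a hypothetical double integrator.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # hypothetical controllable pair (A, B)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # Q = Q^T > 0
R = np.array([[1.0]])                     # R = R^T > 0

P = solve_continuous_are(A, B, Q, R)      # A^T P + P A - P B R^{-1} B^T P + Q = 0
K = np.linalg.solve(R, B.T @ P)           # K = R^{-1} B^T P, cf. (2)
# Residual of (3): A^T P + P A - K^T R K + Q should be (numerically) zero.
print(np.allclose(A.T @ P + P @ A - K.T @ R @ K + Q, 0, atol=1e-8))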

3.1. Solution of the inverse optimal control problem The inverse optimal control problem [36] is to find the necessary and sufficient condition for a given stable control

u = −Kx,  (4)

such that the control (4) minimizes the cost (1b) for some symmetric Q > 0 and R > 0.

Remark 1. Restricting Q > 0 rather than Q ≥ 0 greatly simplifies the later development, although there is a slight loss of generality.

Theorem 1. The state feedback gain K is optimal for (1) with some Q = Q^T > 0 and R = R^T > 0, if and only if
(C1) there exists 0 < γ ≤ 1/2 such that A − γBK is Hurwitz,
(C2) KB is a simple matrix and all its eigenvalues are positive.

Proof. Necessity: if K is LQ optimal for some Q = Q^T > 0 and R = R^T > 0, then K satisfies (2) and (3), which indicates



(A − (1/2)BK)^T P + P(A − (1/2)BK) = −Q < 0.  (5)

According to the Lyapunov stability theorem, A − (1/2)BK is Hurwitz. Therefore, (C1) is true. (C2) is a direct conclusion of [Th. 5.1, [38]]. Sufficiency: using [Th. 5.1, [38]], if (C1) and (C2) hold, then there exist some P = P^T > 0, R = R^T > 0 and Q̃ = Q̃^T such that

 

(A − γBK)^T P + P(A − γBK) + Q̃ + (γK)^T (R/γ)(γK) = 0,  (6a)

B^T P = (R/γ)(γK).  (6b)

The state feedback control law u* = −γKx minimizes the corresponding performance index

J(x0; u) = ∫_0^∞ (x^T Q̃ x + γ^{-1} u^T Ru) dt,

and the optimal value is given by

J*(x0) = x0^T P x0 = ∫_0^∞ x^T (Q̃ + γK^T RK) x dt,  ∀x0 ∈ R^n, x0 ≠ 0.  (7)

Since P > 0, (7) indicates that Q̃ + γK^T RK > 0. Let Q = Q̃ + (1 − γ)K^T RK; then Q ≥ Q̃ + γK^T RK > 0 since 0 < γ ≤ 1/2. Adding 2(γ − 1)K^T RK to both sides of (6a) yields

(A − BK)^T P + P(A − BK) = −Q − K^T RK < 0.  (8)

According to the Lyapunov stability theorem, A − BK is Hurwitz. Since KB is simple and all its eigenvalues are positive, K is optimal by Theorem 5.1 of [38]. As an observation, (8) is the algebraic Riccati equation under the state feedback control u = −Kx, which indicates that K is LQ optimal for Q = Q^T > 0 and R = R^T > 0. □
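The two conditions of Theorem 1 are easy to test numerically. The sketch below (ours; the matrices are hypothetical) checks (C2) by requiring KB to be diagonalizable with real positive eigenvalues and (C1) by checking that A − γBK is Hurwitz; the rank test on the eigenvector matrix is a numerical heuristic for "simple".

import numpy as np

def is_hurwitz(M):
    return np.max(np.linalg.eigvals(M).real) < 0

def check_inverse_optimality(A, B, K, gamma=0.5):
    KB = K @ B
    lam, V = np.linalg.eig(KB)
    simple = np.linalg.matrix_rank(V) == KB.shape[0]   # diagonalizable ("simple")
    c2 = simple and np.all(lam.real > 0) and np.allclose(lam.imag, 0)
    c1 = is_hurwitz(A - gamma * B @ K)                 # condition (C1)
    return c1, c2

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # hypothetical data for illustration
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 3.0]])
print(check_inverse_optimality(A, B, K, gamma=0.5))    # (True, True)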


Remark 2. If B is not of full column rank, then (C2) should be modified as rank(KB) = rank(B) = rank(K) and all eigenvalues of KB are nonnegative [Th. 4.1, [7]], where rank(X) denotes the rank of the matrix X.

Let V(x) = x^T Px and take the derivative along the state trajectory ẋ = (A − σBK)x, where σ is a positive number; we have

V̇(x) = x^T [(A − σBK)^T P + P(A − σBK)]x = x^T [(A − γBK)^T P + P(A − γBK) + 2(γ − σ)K^T RK]x < 2(γ − σ)x^T K^T RKx,

so V̇(x) < 0 if σ ≥ γ. Hence we obtain the following theorem.

Theorem 2. The feedback control gain K satisfying (C1) and (C2) in Theorem 1 ensures the guaranteed gain margin of (γ, +∞), 0 < γ ≤ 1/2.

Remark 3. The significance of this result is that we can specify γ such that the resulting LQ regulator ensures a larger gain margin than the (1/2, +∞) ensured by the normal LQ regulator, as in [39]. It is readily seen that if the system is open-loop stable, the LQ regulator ensures the guaranteed gain margin of (0, +∞). In the case where the system is open-loop unstable, it is worth noting that a larger gain margin generally leads to a larger magnitude of the LQ regulator, so a trade-off must be made.

3.2. Optimal partial pole placement

In this section, we use the developed inverse optimality result to solve the optimal partial pole placement problem, which aims to design a feedback control that not only satisfies the optimality criterion but also places part of the closed-loop poles at specified locations exactly. The optimality of K is invariant under any state coordinate transformation x = Gz (with G nonsingular) of the system (1a), in the sense that K is optimal for the system (1a) if and only if the control KG is optimal for the transformed system ż = G^{-1}AGz + G^{-1}Bu [37]. Without loss of generality, it is assumed that A and B are of the form

A = [ A11  A12 ;  A21  A22 ],   B = [ 0 ;  Im ],  (9)

where A11 ∈ R(n−m)×(n−m) , A22 ∈ Rm×m . Let K be partitioned as K = [K1 , K2 ], where K2 ∈ Rm×m , then take



T = [ In−m  0 ;  S  Im ],  (10)

we have

T(A − BK)T^{-1} = [ A11 − A12S   A12 ;  M   SA12 + A22 − K2 ],  (11)

where M = SA11 + A21 − (SA12 + A22)S − (K1 − K2S). Let M = 0; then the n − m eigenvalues of A − BK which are placed at specified locations by S, due to the controllability of (A11, A12), are designed as the dominant poles in practical applications. The other m eigenvalues, which are placed as the eigenvalues of SA12 + A22 − K2 and located further to the left than the eigenvalues of SA12 + A22, are designed as the nondominant poles. Let K2 = Λ be a simple and positive definite matrix; then the candidate optimal feedback control gain K is parameterized as

K = [SA11 + A21 − SA12S − A22S, 0] + Λ[S, Im].  (12)

Lemma 1. If A11 − A12S is Hurwitz, then there exist some simple matrices Λ > 0 such that A − γBK is Hurwitz, where γ > 0.

Proof. Using T defined by (10), we have

L̄ = T(A − γBK)T^{-1} = [ A11 − A12S   A12 ;  (1 − γ)(SA11 + A21 − SA12S − A22S)   SA12 + A22 − γΛ ],  (13)

obviously, L̄ is similar to A − γBK. Let

Ξ = L̄^T [ Σ  0 ;  0  Im ] + [ Σ  0 ;  0  Im ] L̄ = [ Ξ11  Ξ12 ;  Ξ21  Ξ22 ],  (14)

where Σ = Σ^T > 0, Σ ∈ R^{(n−m)×(n−m)}, and

Ξ11 = (A11 − A12S)^T Σ + Σ(A11 − A12S),  (15)

Ξ12 = Ξ21^T = (1 − γ)(SA11 + A21 − SA12S − A22S)^T + ΣA12,  (16)

Ξ22 = (SA12 + A22 − γΛ)^T + (SA12 + A22 − γΛ).  (17)

Therefore, L̄ is Hurwitz if Ξ < 0 for some Σ = Σ^T > 0 and Λ > 0. According to the Schur complement lemma, Ξ < 0 if and only if (1) Ξ11 < 0 and (2) Ξ22 − Ξ21 Ξ11^{-1} Ξ12 < 0. Since A11 − A12S is Hurwitz, there exists Σ = Σ^T > 0 such that Ξ11 = (A11 − A12S)^T Σ + Σ(A11 − A12S) < 0. Set Λ = ωIm; then ω can be selected large enough such that Ξ22 < Ξ21 Ξ11^{-1} Ξ12 for a given γ > 0, which implies that condition (2) is always satisfied for some simple matrices Λ > 0. This completes the proof. □

To sum up, we have the following theorem.

Theorem 3. A solution of the LQ regulator problem (1) which is LQ optimal for some Q = Q^T > 0 and R = R^T > 0, places n − m poles at specified locations in the left half complex plane, and has gain margin (γ, +∞), 0 < γ ≤ 1/2, can be given by (12), where S and Λ are determined by the following procedure.

Procedure 1.
(1) Design S such that A11 − A12S places n − m poles at the specified locations in the left half complex plane.
(2) Design a simple matrix Λ > 0 such that Ξ < 0.

Remark 4. Note that Ξ < 0 is a linear matrix inequality (LMI) with respect to Σ and Λ; hence a feasible Λ (for simplicity, let Λ = ωIm, ω > 0) can be easily obtained using the LMI approach.

Remark 5. If the system is open-loop unstable, we can let Λ = ωIm and γ = ω^{-1/2}; then γ → 0 as ω → +∞, hence Lemma 1 still holds and the resulting optimal feedback gain of form (12) ensures the guaranteed gain margin of (0, +∞) asymptotically.

Example 1. Consider the flight control design problem for the F-4 fighter aircraft as treated in [37,39]; the system matrices are given as

A0 = [ −0.746    0.387   −12.9      0         0.952    6.05
        0.024   −0.174      4.31    0        −1.76    −0.416
        0.006   −0.999     −0.0578  0.0369    0.0092  −0.0012
        1        0           0      0         0        0
        0        0           0      0       −10        0
        0        0           0      0         0       −5 ],

B0 = [ 0   0
       0   0
       0   0
       0   0
      20   0
       0  10 ].

Take the following similarity transformation

A = V −1 A0V, B = V −1 B0 , where V = diag(1, 1, 1, 1, 20, 10), hence A and B are of form (9). Select



S = [ −0.0064  −0.0305   0.0822  0.0008
       0.0566   0.0154  −0.2393  0.0031 ],

which is the same as L1 in [39] and coincides with F1 in [37], to place the four (n − m) dominant poles at {−4.00, −0.63 ± 2.42j, −0.05}. Note that the system is open-loop stable. The eigenvalues of SA12 + A22 are −1.66 and −9.02. To place the two nondominant poles of A − BK at desired locations, say 20 further to the left of the existing values, we select Λ = 20I2, so the optimal feedback K in the original coordinates is determined by (12) and given as

K = [ −0.1656  −0.9595   2.2665  0.0284  1  0
       1.1874   0.6096  −5.8702  0.0570  0  2 ].

The eigenvalues of A − BK are {−4.00, −0.63 ± 2.42 j, −0.05, −21.66, −29.02}. As an observation, the four dominant poles are placed at the specified locations exactly and the resulting optimal LQ regulator has gain margin of (0, +∞). Compared with the existing work [37,39], our method is fairly simple and effective.
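A short Python sketch of this construction is given below (ours; the numerical data are transcribed from the matrices above, so treat them as illustrative). It forms K by (12) in the transformed coordinates, maps it back to the original coordinates, and prints the closed-loop eigenvalues, which should match the dominant and nondominant poles reported in the example.

import numpy as np

A0 = np.array([[-0.746,  0.387, -12.9,    0.0,     0.952,  6.05  ],
               [ 0.024, -0.174,   4.31,   0.0,    -1.76,  -0.416 ],
               [ 0.006, -0.999,  -0.0578, 0.0369,  0.0092,-0.0012],
               [ 1.0,    0.0,     0.0,    0.0,     0.0,    0.0   ],
               [ 0.0,    0.0,     0.0,    0.0,   -10.0,    0.0   ],
               [ 0.0,    0.0,     0.0,    0.0,     0.0,   -5.0   ]])
B0 = np.zeros((6, 2)); B0[4, 0] = 20.0; B0[5, 1] = 10.0
V = np.diag([1, 1, 1, 1, 20, 10]).astype(float)
A, B = np.linalg.inv(V) @ A0 @ V, np.linalg.inv(V) @ B0       # form (9)

S = np.array([[-0.0064, -0.0305,  0.0822, 0.0008],
              [ 0.0566,  0.0154, -0.2393, 0.0031]])
Lam = 20.0 * np.eye(2)                                        # Lambda = 20 I_2

A11, A12 = A[:4, :4], A[:4, 4:]
A21, A22 = A[4:, :4], A[4:, 4:]
M0 = S @ A11 + A21 - S @ A12 @ S - A22 @ S                    # cf. (12)
K = np.hstack([M0 + Lam @ S, Lam])                            # gain in new coords
K0 = K @ np.linalg.inv(V)                                     # back to original coords
print(np.sort_complex(np.linalg.eigvals(A0 - B0 @ K0)))
# per the text: poles near -4, -0.63 +/- 2.42j, -0.05, -21.66, -29.02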


4. Consensus rate regulation for linear cooperative systems In this section, the developed inverse optimal results are used to design the distributed consensus protocols for the consensus problem of multi-agent systems. It is well known that the optimal convergence problem of the general linear multi-agent systems is still an open problem [19]. By using the inverse optimal design method, the dominant poles of the global closed-loop error system can be placed at any specified locations asymptotically, hence desired consensus rate (convergence rate and damping rate) can be achieved. The importance of this result is that it indicates that for the general linear multi-agent systems on directed graphs, the convergence speed of the agents has no upper bound and can be regulated to reach the desired objective. Assume that the multi-agent systems consist of a group of N nodes, distributed on a directed communication graph G, which have the following identical linear time-invariant dynamics

ẋi = Axi + Bui,  ∀i ∈ N,  (18)

where the state xi ∈ R^n and the input ui ∈ R^m. The global form of (18) follows

ẋ = (IN ⊗ A)x + (IN ⊗ B)u,  (19)

where the state x = (x1^T, x2^T, . . . , xN^T)^T ∈ R^{nN} and the input u = (u1^T, u2^T, . . . , uN^T)^T ∈ R^{mN}.

4.1. Consensus of the multi-agent systems

Generally, the consensus problem for coordination systems can be categorized into the leader following consensus problem and the leaderless consensus problem.

The leader following consensus problem is to design distributed consensus protocols ui, ∀i ∈ N, such that all nodes of the multi-agent system synchronize to the state trajectory of the leader node. The dynamics of the leader is given by

ẋ0 = Ax0,  (20)

where x0 ∈ Rn is the state. Denote the pinning matrix as G = diag{g1 , . . . , gN }. Assume the digraph G contains a spanning tree and the root node ir can observe information from the leader node, hence all eigenvalues of matrix L + G have positive real part [34]. The local neighborhood error is defined as

εi = ∑_{j∈Ni} aij(xi − xj) + gi(xi − x0)  (21)

and the global neighborhood tracking error is given by ξ = [(L + G) ⊗ In]δ, where the global disagreement error is δ = x − 1N ⊗ x0 ∈ R^{nN}. The distributed consensus protocol considered has the following form [35]

ui = −cKεi,  (22)

where the scalar coupling gain c > 0 and the feedback control gain matrix K ∈ R^{m×n}. The global form of the distributed consensus protocol is

u = −c[(L + G) ⊗ K]δ,  (23)

where G = diag{g1, . . . , gN} is the pinning matrix. Then, the global system using the protocol (23) is given as

ẋ = (IN ⊗ A)x − c[(L + G) ⊗ (BK)]δ,  (24)

hence we have the global error system

δ̇ = (IN ⊗ A)δ + (IN ⊗ B)u.  (25)

Using the protocol (23), the global closed-loop error system is formed as

δ̇ = [IN ⊗ A − c(L + G) ⊗ (BK)]δ.  (26)

To achieve the synchronization, (25) or (26) must be asymptotically stabilized to the origin. Let λi (i ∈ N) be the eigenvalues of the matrix L + G. For the global error system (25), the synchronization is achieved if and only if all matrices

A − cλiBK,  ∀i ∈ N  (27)

are Hurwitz. For leaderless consensus problems, all agents are to achieve the same state, i.e., xi − xj → 0 as t → ∞, ∀i, j ∈ N. The local neighborhood error is defined as

εi = ∑_{j∈Ni} aij(xi − xj)  (28)


and the global neighborhood error is given as ξ = (L ⊗ In)x. Also, we consider the distributed consensus protocol of the form

ui = −cKεi,  (29)

where the scalar coupling gain c > 0 and the feedback control gain matrix K ∈ R^{m×n}. The global form of the distributed consensus protocol is

u = −c(L ⊗ K)x,  (30)

which gives the global closed-loop system

ẋ = [IN ⊗ A − cL ⊗ (BK)]x.  (31)

Assume that the graph is strongly connected and let λi, i ∈ N, be the eigenvalues of the Laplacian matrix L, where λ1 = 0. The consensus of the agents is achieved if and only if all matrices

A − cλiBK,  i = 2, 3, . . . , N  (32)

are Hurwitz. To solve the leader following problem or the leaderless problem, it is necessary to investigate the locations of eigenvalues of A − cλi BK. In the following, it is assumed that A and B are of the form



A = [ A11  A12 ;  A21  A22 ],   B = [ 0 ;  B2 ],  (33)

where A11 ∈ R^{(n−m)×(n−m)} and A22, B2 ∈ R^{m×m}. Note that B is assumed to be of full column rank, hence B2 is nonsingular. Inspired by the control gain K of form (12), the parameterization of the optimal distributed protocol gain K with respect to (33) is given as

K = B2^{-1}[SA11 + A21 − SA12S − A22S + ωS,  ωIm],  (34)

where Λ = ωIm, just for simplicity.

4.2. Consensusability analysis

For the leader following consensus problem, we have the following theorem.

Theorem 4. For the global error system (25), assume that the graph contains a spanning tree with at least one nonzero pinning gain connecting into a root node. Then there exist optimal distributed consensus protocols of form (23) using the control K formed as (34), such that synchronization is achieved, where S and Λ are determined by Procedure 1 and c ≥ γ/min{Re λi}, 0 < γ ≤ 1/2, ∀i ∈ N.

Proof. For the matrices (27), since all eigenvalues of L + G have positive real part, let λi = αi + βij, ∀i ∈ N, where αi > 0, βi ∈ R and j is the imaginary unit with property j^2 = −1. Using Theorem 1, for K of form (12), there exist some R = R^T > 0 and P = P^T > 0 such that B^T P = RK, hence K^T B^T P = PBK = K^T RK. Let V(z) = z^T Pz and take the derivative along the state trajectory ż = (A − cλiBK)z; then we have

V̇(z) = z^T [(A − cλiBK)^* P + P(A − cλiBK)]z
     = z^T [(A − cαiBK)^T P + P(A − cαiBK) + cβij(K^T B^T P − PBK)]z
     = z^T [(A − γBK)^T P + P(A − γBK) + 2(γ − cαi)K^T RK]z
     < 2(γ − cαi)z^T K^T RKz,

so V̇(z) < 0 if cαi ≥ γ, i.e., c ≥ γ/αi. Therefore, A − cλiBK, ∀i ∈ N, are Hurwitz if c ≥ γ/min{Re λi}, ∀i ∈ N, hence the consensus of the agents is achieved. □

By a development similar to that of Theorem 4, for the leaderless consensus problem we have the following theorem.

Theorem 5. For the global system (31), assume that the graph is strongly connected. Then there exist optimal distributed consensus protocols of form (30) using the control K formed as (34), such that consensus is achieved, where S and Λ are determined by Procedure 1 and c ≥ γ/min{Re λi}, 0 < γ ≤ 1/2, i = 2, . . . , N.

Remark 6. If γ = 1/2, then the lower bound of the coupling gain can be given by c = 1/(2 min{Re λi}), which is the case of the normal LQR design methods in [35].
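A minimal numerical check of Theorems 4 and 5 is sketched below (ours, with hypothetical data for the eigenvalues of L + G and for a gain K satisfying Theorem 1): choose c ≥ γ/min Re λi and verify that every A − cλiBK is Hurwitz.

import numpy as np

def coupling_gain(lams, gamma=0.5):
    # c >= gamma / min_i Re(lambda_i) over the relevant eigenvalues
    return gamma / min(lam.real for lam in lams)

def consensus_achieved(A, B, K, lams, c):
    # all A - c*lambda_i*B*K must be Hurwitz
    return all(np.max(np.linalg.eigvals(A - c * lam * B @ K).real) < 0
               for lam in lams)

lams = np.array([1.0, 1.5 + 0.5j, 1.5 - 0.5j])   # hypothetical spectrum of L + G
A = np.array([[0.0, 1.0], [1.0, 0.0]])           # hypothetical (A, B, K)
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 3.0]])
c = coupling_gain(lams, gamma=0.5)
print(c, consensus_achieved(A, B, K, lams, c))   # 0.5 True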

4.3. Consensus region

One way to evaluate the performance of consensus protocols is to show how consensus depends on structural parameters of the communication graph by using the concept of consensus region.

Definition 4 [34]. Consider the consensus protocols (23) and (30); the consensus region is the complex region defined as S ≜ {s ∈ C | A − sBK is Hurwitz}.

By Lemma 2 and Theorem 4, consensus is achieved if cλi ∈ S, ∀i ∈ N. An unbounded consensus region is a desired property of a consensus protocol [34,35].

Corollary 1. For protocols (23) and (30) satisfying Theorem 4, the consensus region is unbounded. More specifically, a conservative consensus region is S ≜ {α + βj | α ∈ [γ, +∞), β ∈ (−∞, +∞)}, 0 < γ ≤ 1/2.

Remark 7. Note that the consensus region given in Corollary 1 can be specified by the parameter γ, and it contains the LQR method as a special case, where γ = 1/2.

Example 2. Consider the following system [35]:



A0 = [ −2  −1 ;  2  1 ],   B0 = [ 1 ;  0 ].

Take the following similarity transformation

A = W^{-1}A0W,  B = W^{-1}B0,

where W = [ 0  1 ;  1  0 ]; then the original system is transformed into

A = [ 1  2 ;  −1  −2 ],   B = [ 0 ;  1 ],

hence A and B are of form (9). Now consider the LQR optimal feedback gain K = [1.5440 1.8901]; the consensus region is S1 = {x + yj | 1 + 1.5440x > 0, (5.331x − 1.5473)y^2 + 2.2362(1 + 1.5440x)^2 x > 0}, which is unbounded, as shaded in Fig. 1. Then we use the proposed inverse optimal design method by setting γ = 0.1, and the corresponding optimal feedback gain can be simply designed as K = [10 10] in the form of (34) using Remark 4. The consensus region is S2 = {x + yj | 1 + 10x > 0, 100(1 + 10x)y^2 + 10(1 + 10x)^2 x − 100y^2 > 0}, which turns out to be the whole right half complex plane, as shaded in Fig. 2. This demonstrates the superiority of the proposed inverse optimal design method.
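The claim about S2 can be probed numerically. The following rough sketch (ours, not from the paper) samples points of the right half plane and checks the Hurwitz property of A − sBK for the inverse-optimal gain K = [10 10].

import numpy as np

A = np.array([[1.0, 2.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[10.0, 10.0]])

def in_region(s):
    # s is in the consensus region iff A - s*B*K is Hurwitz
    return np.max(np.linalg.eigvals(A - s * B @ K).real) < 0

samples = [x + 1j * y for x in np.linspace(0.05, 5.0, 25)
                      for y in np.linspace(-50.0, 50.0, 41)]
print(all(in_region(s) for s in samples))   # expect True: right half plane covered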

Fig. 1. Consensus region using LQR based optimal design: Q = I2 , R = 1.


Fig. 2. Consensus region using inverse optimal design for γ = 0.1: ω = 10, S = 1.

4.4. Consensus rate

Inspired by the consensus rate definition in [30], two indices with respect to the dominant eigenvalues are proposed to evaluate the consensus rate of the agents:
• Convergence rate: the dominant eigenvalue ωρ with minimum absolute value of real part ρ.
• Damping rate: the dominant eigenvalue ωθ with maximum argument θ from the negative direction of the real axis.

The convergence rate is used to evaluate the convergence speed of the agents, and the damping rate is used to evaluate the oscillating behaviors of the agents. To obtain a desired consensus rate, it is necessary to investigate the locations of the eigenvalues of A − cλiBK, ∀i ∈ N, for the leader following consensus problem and of A − cλiBK, i = 2, . . . , N, for the leaderless consensus problem. The following lemma plays an important role in the later development.

Lemma 2 [40]. For the following matrix equation

DW0 − (W22 + L0W12)D − DW12D + L0W0 = 0,  (35)

if W22 is nonsingular and satisfies

‖W22^{-1}‖ < (1/3)(‖W0‖ + ‖W12‖ ‖L0‖)^{-1},  (36)

then a unique root of (35) exists satisfying

0 ≤ ‖D‖ ≤ 2‖W0‖ ‖L0‖ / (‖W0‖ + ‖W12‖ ‖L0‖).  (37)

The following theorem shows the asymptotic behaviors of the eigenvalues of A − cλiBK and of the consensus region.

Theorem 6. If K is given by (34), and S is selected such that the n − m eigenvalues of A − BK are placed at {s_1^d, . . . , s_{n−m}^d} in the left half complex plane, then as ω → +∞,
1. there are n − m eigenvalues {s_1^i, . . . , s_{n−m}^i} of the matrix A − cλiBK satisfying s_j^i → s_j^d,
2. there are m eigenvalues {s_{n−m+1}^i, . . . , s_n^i} of the matrix A − cλiBK satisfying s_j^i → −ωcλi,
3. the consensus region S → {α + βj | α ∈ (0, +∞), β ∈ (−∞, +∞)}, i.e., the whole right half complex plane,
where cRe λi ≥ 1, ∀i ∈ N.

Proof. Consider the following dynamic system

δ̇i = (A − cλiBK)δi.  (38)

Similar to the development of (13) and using the transformation zi = Tδi, where zi = [(z_i^1)^T, (z_i^2)^T]^T and T is defined in (10), we have

T(A − cλiBK)T^{-1} = [ A11 − A12S   A12 ;  (1 − cλi)(SA11 + A21 − SA12S − A22S)   SA12 + A22 − ωcλiIm ]
                   ≜ [ H11   H12 ;  H21   H22 − ωcλiIm ].

Let μ = ω^{-1}; then (38) is transformed into

ż_i^1 = H11 z_i^1 + H12 z_i^2,  (39a)

μż_i^2 = μH21 z_i^1 + (μH22 − cλiIm) z_i^2.  (39b)

Let y = z_i^2 + μL z_i^1; then

μẏ = μż_i^2 + μ^2 L ż_i^1 = {μH21 + μ^2 LH11 − [μ^2 LH12 + (μH22 − cλiIm)](μL)} z_i^1 + [μ^2 LH12 + (μH22 − cλiIm)] y.

Hence, if there exist μ and L such that

μH21 + μ^2 LH11 − [μ^2 LH12 + (μH22 − cλiIm)](μL) = 0,  (40)

then (38) is transformed as

[ ż_i^1 ;  ẏ ] = [ H11 − μH12L   H12 ;  0   μLH12 + (H22 − ωcλiIm) ] [ z_i^1 ;  y ].  (41)

Note that cRe λi ≥ 1. As ω → +∞, μ → 0, and then

H11 − μH12L → H11 = A11 − A12S,
μLH12 + (H22 − ωcλiIm) → −ωcλiIm,

hence (1) and (2) are true. Note that for μ ≠ 0, (40) is equivalent to

H21 + (μL)H11 − (μL)H12(μL) − (H22 − ωcλiIm)(μL) = 0.  (42)

Now we seek μ and L such that (42) holds and μL is of the form μL = L0 + D, where L0 = (H22 − ωcλiIm)^{-1}H21. Let W22 = H22 − ωcλiIm, W0 = H11 − H12(H22 − ωcλiIm)^{-1}H21 and W12 = H12; then (42) is transformed into

DW0 − (W22 + L0W12)D − DW12D + L0W0 = 0.  (43)

Apparently, W22 must be nonsingular for sufficiently large ω since cRe λi ≥ 1. Note that lim_{ω→+∞} ‖W22^{-1}‖ = 0 and lim_{ω→+∞} ‖W0‖ = ‖A11 − A12S‖ > 0; then

lim_{ω→+∞} (1/3)(‖W0‖ + ‖W12‖ ‖L0‖)^{-1} = (1/3)‖A11 − A12S‖^{-1} > 0.  (44)

Therefore, there exists some sufficiently large ω such that

‖W22^{-1}‖ < (1/3)(‖W0‖ + ‖W12‖ ‖L0‖)^{-1}.  (45)

Using Lemma 2, a unique root D of (43) exists, so there are some μ and L such that (42) holds. Conclusion (3) follows from Remark 5 and Corollary 1. □

Theorem 6 indicates that all eigenvalues of A − cλiBK asymptotically approach the eigenvalues of A − BK as ω → +∞. Obviously, the n − m finite eigenvalues {s_1^d, . . . , s_{n−m}^d} are the dominant eigenvalues, since they completely determine the convergence rate and the damping rate of the agents. Therefore, a desired consensus performance of the agents can be achieved asymptotically by specifying the set of dominant eigenvalues {s_1^d, . . . , s_{n−m}^d} for each agent, which yields the following distributed cooperative design procedures (a code sketch of Procedure 2.1 is given after the list).

Procedure 2.1. Inverse optimal design method for the leaderless consensus problem.
1. Give 0 < γ ≤ 1/2, and record λi, i = 2, . . . , N, the eigenvalues of L.
2. Set c ≥ γ/min_{i=2,...,N}{Re λi}.
3. Design S such that A11 − A12S places n − m eigenvalues at the specified locations {s_1^d, . . . , s_{n−m}^d}.
4. Obtain K by (34) and let ω → +∞.
5. Determine the distributed consensus protocols by (30).
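The following Python sketch (ours) outlines Procedure 2.1 with standard tools; the pole-placement call and the finite value of ω are assumptions made so that the illustration is concrete and runnable (the procedure itself lets ω → +∞).

import numpy as np
from scipy.signal import place_poles

def procedure_2_1(A, B2, m, dominant_poles, lap_eigs, gamma=0.5, omega=1e3):
    """Steps 1-4 of Procedure 2.1; returns (K, c). omega is kept finite here."""
    n = A.shape[0]
    A11, A12 = A[:n - m, :n - m], A[:n - m, n - m:]
    A21, A22 = A[n - m:, :n - m], A[n - m:, n - m:]
    # Step 3: choose S so that eig(A11 - A12 S) = dominant_poles.
    S = place_poles(A11, A12, dominant_poles).gain_matrix
    # Step 4: protocol gain by (34), K = B2^{-1}[... + omega*S, omega*I_m].
    K = np.linalg.solve(B2, np.hstack([S @ A11 + A21 - S @ A12 @ S - A22 @ S + omega * S,
                                       omega * np.eye(m)]))
    # Steps 1-2: coupling gain from the nonzero Laplacian eigenvalues.
    nonzero = [lam for lam in lap_eigs if abs(lam) > 1e-9]
    c = gamma / min(lam.real for lam in nonzero)
    return K, c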

Procedure 2.2. Inverse optimal design method for the leader following consensus problem.
1. Give 0 < γ ≤ 1/2, and record λi, ∀i ∈ N, the eigenvalues of L + G.
2. Set c ≥ γ/min_{i∈N}{Re λi}.
3. Design S such that A11 − A12S places n − m eigenvalues at the specified locations {s_1^d, . . . , s_{n−m}^d}.
4. Obtain K by (34) and let ω → +∞.
5. Determine the distributed consensus protocols by (23).

5. Simulations In this section, two numerical examples are given to demonstrate the advantage of the developed inverse optimal based design methods. Example 3 (Leaderless case). Consider the following multi-agent systems with six nodes:

ẋi = Axi + Bui,  i = 1, . . . , 6,  (46)

where

A = [ −1   2   4
       0  −1   2
      −4   0  −1 ],   B = [ 0
                            0
                            1 ].  (47)

The initial states of each agent are generated randomly in [0, 1]. The communication topology is described in Fig. 3. The minimum positive real part of the nonzero eigenvalues of L is 1.2652, hence we can set the coupling gain c = 1. Selecting the weighting matrices Q = I3 and R = 1, we use the LQR based optimal design method proposed in [21]; the corresponding convergence rate and damping rate are shown in Table 1. The consensus process of the states is shown in Fig. 4, where consensus is reached within 8 s. We now design the optimal distributed consensus protocols (30) such that the resulting global closed-loop system (31) asymptotically achieves the desired convergence rate and damping rate of {−3 ± 0.5j} by running Procedure 2.1. We set ω = 500, and the corresponding convergence rate and damping rate are shown in Table 1. The desired consensus performance is asymptotically reached, and the resulting evolutionary process of consensus is shown in Fig. 5. Compared with the agents' behaviors in Fig. 4, consensus is reached faster (within 2.5 s) and the oscillating behaviors are also improved. All these facts show the effectiveness of the developed inverse optimal based design methods.

Example 4 (Leader following case: two-mass–spring system). This example is taken from [21]. It is well known that many industrial applications can be modeled as mass–spring systems. Here, we consider the two-mass–spring system with a single force input shown in Fig. 6, where k1 and k2 are the spring constants, m1 and m2 are the two masses, the force input u acts on mass 1, and y1, y2 denote the displacements of the two masses, respectively. Define the state vector x = [x1, x2, x3, x4]^T = [y2, ẏ2, y1, ẏ1]^T; then the considered two-mass–spring system is modeled as

ẋ = Ax + Bu,  (48)

where

A = [ 0         1   0              0
     −k2/m2     0   k2/m2          0
      0         0   0              1
      k2/m1     0  −(k1 + k2)/m1   0 ],   B = [ 0
                                                0
                                                0
                                                1/m1 ].  (49)

Fig. 3. Communication topology.

Table 1
Consensus performance using LQR optimal design and inverse optimal design.

                                                 Convergence rate ωρ    Damping rate ωθ
LQR optimal design: Q = I3, R = 1                −0.9580 ± 3.6586j      −1.0141 ± 4.5501j
Inverse optimal design: {−3 ± 0.5j}, ω = 500     −2.9669 ± 0.5154j      −2.9761 ± 0.5368j
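For readers who wish to reproduce the qualitative behavior of Figs. 4 and 5, the sketch below (ours) designs K for the dominant poles {−3 ± 0.5j} with ω = 500 and simulates the closed loop (31); since the digraph of Fig. 3 is not reproduced here, a hypothetical strongly connected 6-node topology is used instead, so the trajectories will differ in detail.

import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

A = np.array([[-1.0,  2.0,  4.0],
              [ 0.0, -1.0,  2.0],
              [-4.0,  0.0, -1.0]])
B = np.array([[0.0], [0.0], [1.0]])
N, m, omega, c = 6, 1, 500.0, 1.0

# Step 3: S places the two dominant poles of A11 - A12*S at -3 +/- 0.5j.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
S = place_poles(A11, A12, [-3 + 0.5j, -3 - 0.5j]).gain_matrix
# Step 4: K by (34); here B2 = 1.
K = np.hstack([S @ A11 + A21 - S @ A12 @ S - A22 @ S + omega * S,
               omega * np.eye(m)])

adj = np.ones((N, N)) - np.eye(N)                    # hypothetical topology
L = np.diag(adj.sum(axis=1)) - adj
Acl = np.kron(np.eye(N), A) - c * np.kron(L, B @ K)  # closed loop (31)

x0 = np.random.rand(N * 3)                           # random initial states in [0, 1]
sol = solve_ivp(lambda t, x: Acl @ x, (0.0, 5.0), x0, method="Radau")
spread = np.ptp(sol.y[:, -1].reshape(N, 3), axis=0)  # disagreement across agents at t = 5
print(spread)                                        # all entries close to zero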


Fig. 4. Consensus process using LQR optimal based distributed consensus protocols: Q = I3 , R = 1.


Fig. 5. Consensus process using inverse optimal based optimal distributed consensus protocols: {−3 ± 0.5 j}, ω = 500.

Let one unforced two-mass–spring system act as the leader node and produce a desired state trajectory, and let the other six two-mass–spring systems act as the follower nodes. Assume that the six nodes can obtain state information from their neighbors through the communication topology described by Fig. 7. Let ui, yi,1 and yi,2 be the force input, the displacement of mass 1 and the displacement of mass 2 for the ith node, respectively, where i = 0, 1, . . . , 6. It is desired to design the distributed optimal protocols for the follower nodes such that, from a practical point of view, the displacements of the two masses of each two-mass–spring system synchronize to those of the leader node smoothly and rapidly.


Fig. 6. Two-mass–spring system.

Fig. 7. Communication topology.


Fig. 8. Consensus process using LQR based optimal distributed consensus protocols: Q = I4, R = 1.

Table 2
Consensus performance using LQR optimal design and inverse optimal design.

                                                          Convergence rate ωρ    Damping rate ωθ
LQR optimal design: Q = I4, R = 1                         −0.2246 ± 1.2696j      −0.2246 ± 1.2696j
Inverse optimal design: {−0.5 ± 1.2j, −0.5}, ω = 100      −0.4927 ± 1.2033j      −0.4927 ± 1.2033j

Set m1 = 0.9 kg, m2 = 1.1 kg, k1 = 1.5 N/m, k2 = 1 N/m. The minimum real part of all eigenvalues of L + G is 1, hence the coupling gain can be set as c = 1. Choosing Q = I4 and R = 1, we use the LQR based optimal design method proposed in [21]; the corresponding convergence rate and damping rate are shown in Table 2. The evolutionary process of consensus is shown in Fig. 8, where consensus is reached within 20 s.
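A sketch of the corresponding inverse optimal design step for this example is given below (ours); it builds (A, B) from (49) with the stated parameters and forms K by (34) with the dominant poles {−0.5 ± 1.2j, −0.5} and ω = 100. The pinning matrix G and the digraph of Fig. 7 are not reproduced, so closing the loop as in (23) would require additional (hypothetical) topology data.

import numpy as np
from scipy.signal import place_poles

m1, m2, k1, k2 = 0.9, 1.1, 1.5, 1.0
A = np.array([[ 0.0,       1.0, 0.0,             0.0],
              [-k2 / m2,   0.0, k2 / m2,         0.0],
              [ 0.0,       0.0, 0.0,             1.0],
              [ k2 / m1,   0.0, -(k1 + k2) / m1, 0.0]])
B = np.array([[0.0], [0.0], [0.0], [1.0 / m1]])
omega = 100.0

A11, A12, A21, A22 = A[:3, :3], A[:3, 3:], A[3:, :3], A[3:, 3:]
S = place_poles(A11, A12, [-0.5 + 1.2j, -0.5 - 1.2j, -0.5]).gain_matrix
B2_inv = np.array([[m1]])                                  # B2 = 1/m1
K = B2_inv @ np.hstack([S @ A11 + A21 - S @ A12 @ S - A22 @ S + omega * S,
                        omega * np.eye(1)])                # cf. (34)
print(np.sort_complex(np.linalg.eigvals(A - B @ K)))
# expected: poles near the specified {-0.5 +/- 1.2j, -0.5} plus one fast pole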



Fig. 9. Consensus process using inverse optimal based optimal distributed consensus protocols: {−0.5 ± 1.2 j, −0.5}, ω = 100.

To obtain a more desirable consensus performance, we design the optimal distributed protocols (23) such that the resulting global closed-loop error system (25) asymptotically reaches the desired convergence rate and damping rate of {−0.5 ± 1.2j, −0.5} by Procedure 2.2. Set ω = 100; the corresponding convergence rate and damping rate are shown in Table 2. We see that the desired consensus performance is asymptotically reached, and the resulting evolutionary process of consensus is shown in Fig. 9. Compared with the agents' behaviors in Fig. 8, consensus is reached faster (within 5 s) and the oscillating behaviors of the agents are also more desirable, i.e., the displacements of all six two-mass–spring systems synchronize to those of the leader node more rapidly and smoothly. The above facts show the advantage of the developed inverse optimal based design methods.

6. Conclusion

In this paper, novel optimal distributed consensus protocol design methods have been proposed by using inverse optimal results. A new time-domain solution of the inverse optimal problem has been established and a simple optimal partial pole placement procedure has been given. Then the cooperative distributed control problem, including the leader following consensus problem and the leaderless consensus problem, has been addressed by the developed inverse optimal design method. The resulting distributed consensus protocols are locally optimal, and the asymptotic design objective can be achieved by resorting to possibly high gain regulators. The main advantages of the developed method over the LQR design method are that the resulting multi-agent systems can achieve a specified consensus rate asymptotically and the resulting protocols have the whole right half complex plane as the asymptotic consensus region. Numerical examples are given to illustrate the effectiveness of the proposed design procedures.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61034005, 61433004, 61104010), the National High Technology Research and Development Program of China (2012AA040104) and IAPI Fundamental Research Funds 2013ZCX14. This work was also supported by the development project of key laboratory of Liaoning province.

References

[1] J.N. Tsitsiklis, Problems in Decentralized Decision Making and Computation, Ph.D. dissertation, Dept. Electr. Eng. Comput. Sci., Mass. Inst. Technol., Cambridge, 1984.
[2] J.N. Tsitsiklis, D.P. Bertsekas, M. Athans, Distributed asynchronous deterministic and stochastic gradient optimization algorithms, IEEE Trans. Autom. Control 31 (9) (1986) 803–812.
[3] J.A. Fax, R.M. Murray, Information flow and cooperative control of vehicle formations, IEEE Trans. Autom. Control 49 (9) (2004) 1465–1476.
[4] R. Olfati-Saber, Flocking for multi-agent dynamic systems: algorithms and theory, IEEE Trans. Autom. Control 51 (3) (2006) 401–420.


[5] A. Jadbabaie, J. Lin, A. Morse, Coordination of groups of mobile autonomous agents using nearest neighbor rules, IEEE Trans. Autom. Control 48 (6) (2003) 988–1001.
[6] H. Liang, H. Zhang, Z. Wang, J. Zhang, Output regulation for heterogeneous linear multi-agent systems based on distributed internal model compensator, Appl. Math. Comput. 242 (1) (2014) 736–747.
[7] Y. Huang, H. Zhang, Z. Wang, Multistability of complex-valued recurrent neural networks with real-imaginary-type activation functions, Appl. Math. Comput. 229 (25) (2014) 187–200.
[8] J. Wang, H. Zhang, Z. Wang, B. Wang, Local exponential synchronization in complex dynamical networks with time-varying delay and hybrid coupling, Appl. Math. Comput. 225 (1) (2013) 16–32.
[9] G. Belta, V. Kumar, Abstraction and control for groups of robots, IEEE Trans. Robot. 20 (5) (2004) 865–875.
[10] H. Liang, H. Zhang, Z. Wang, J. Wang, Output regulation of state-coupled linear multi-agent systems with globally reachable topologies, Neurocomputing 123 (2014) 337–343.
[11] W. Ren, R. Beard, E. Atkins, Information consensus in multivehicle cooperative control, IEEE Control Syst. Mag. 27 (2) (2007) 71–82.
[12] R. Olfati-Saber, R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays, IEEE Trans. Autom. Control 49 (9) (2004) 1520–1533.
[13] W. Ren, R.W. Beard, Consensus seeking in multiagent systems under dynamically changing interaction topologies, IEEE Trans. Autom. Control 50 (5) (2005) 655–661.
[14] L. Moreau, Stability of multi-agent systems with time-dependent communication links, IEEE Trans. Autom. Control 50 (2) (2005) 169–182.
[15] T.H. Lee, M.J. Park, J.H. Park, O.M. Kwon, S.M. Lee, Extended dissipative analysis for neural networks with time-varying delays, IEEE Trans. Neural Netw. Learn. Syst. 25 (10) (2014) 1936–1941.
[16] T.H. Lee, S. Lakshmanan, J.H. Park, P. Balasubramaniam, State estimation for genetic regulatory networks with mode-dependent leakage delays, time-varying delays, and Markovian jumping parameters, IEEE Trans. NanoBiosci. 12 (4) (2013) 363–375.
[17] Y. Wang, H. Zhang, X. Wang, D. Yang, Networked synchronization control of coupled dynamic networks with time-varying delay, IEEE Trans. Syst., Man, Cybern., B: Cybern. 40 (6) (2010) 1468–1479.
[18] T.H. Lee, J.H. Park, D.H. Ji, H.Y. Jung, Leader-following consensus problem of heterogeneous multi-agent systems with nonlinear dynamics using fuzzy disturbance observer, Complexity 19 (4) (2014) 20–31.
[19] W. Ren, R. Beard, E. Atkins, A survey of consensus problems in multi-agent coordination, Proceedings of the American Control Conference, Portland, OR, 2005, pp. 1859–1864.
[20] F. Borelli, T. Keviczky, Distributed LQR design for identical dynamically decoupled systems, IEEE Trans. Autom. Control 53 (8) (2008) 1901–1912.
[21] H. Zhang, F.L. Lewis, Lyapunov, adaptive, and optimal design techniques for cooperative systems on directed communication graphs, IEEE Trans. Ind. Electron. 59 (7) (2011) 3026–3041.
[22] H.M. Kristian, F.L. Lewis, Cooperative optimal control for multi-agent systems on directed graph topologies, IEEE Trans. Autom. Control 44 (9) (1999) 1746–1749.
[23] H.G. Zhang, T. Feng, G.H. Yang, H.J. Liang, Distributed cooperative optimal control for multiagent systems on directed graphs: an inverse optimal approach, IEEE Trans. Cybern. 45 (7) (2015) 1315–1326.
[24] H. Zhang, J. Zhang, G.H. Yang, Y. Luo, Leader-based optimal coordination control for the consensus problem of multi-agent differential games via fuzzy adaptive dynamic programming, IEEE Trans. Fuzzy Syst. (2014), doi:10.1109/TFUZZ.2014.2310238.
[25] W. Dong, Distributed optimal control of multiple systems, Int. J. Control 83 (10) (2010) 2067–2079.
[26] Y. Cao, W. Ren, Optimal linear-consensus algorithms: an LQR perspective, IEEE Trans. Syst., Man, Cybern. B: Cybern. 40 (3) (2010) 819–830.
[27] Z. Qu, M. Simaan, J. Doug, Inverse optimality of cooperative control for networked systems, in: Proceedings of the 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, PR China, December 2009, pp. 16–18.
[28] H. Zhang, D. Liu, Y. Luo, D. Wang, Adaptive Dynamic Programming for Control: Algorithms and Stability, Springer Science & Business Media, 2012.
[29] M. Zabarankin, R. Murphey, R. Murray, Optimization of convergence rate and stability margin of information flow in cooperative systems, Automatica 49 (7) (2012) 2030–2038.
[30] D. Tsubakino, S. Hara, Eigenvector-based characterization for hierarchical multi-agent dynamical systems with low rank interconnection, in: Proceedings of the 2010 IEEE International Conference on Control Applications (CCA), 2010, pp. 2023–2028.
[31] Y. Kim, M. Mesbahi, On maximizing the second smallest eigenvalue of a state-dependent graph Laplacian, IEEE Trans. Autom. Control 51 (1) (2006) 116–120.
[32] Y. Kim, Bisection algorithm of increasing algebraic connectivity by adding an edge, IEEE Trans. Autom. Control 55 (1) (2010) 170–174.
[33] Y. Kim, D.W. Gu, I. Postlethwaite, Spectral radius minimization for optimal average consensus and output feedback stabilization, Automatica 45 (6) (2009) 1379–1386.
[34] Z. Li, Z. Duan, G. Chen, L. Huang, Consensus of multiagent systems and synchronization of complex networks: a unified viewpoint, IEEE Trans. Circuits Syst. I 57 (1) (2010) 213–224.
[35] H. Zhang, F.L. Lewis, Optimal design for synchronization of cooperative systems: state feedback, observer and output feedback, IEEE Trans. Autom. Control 56 (8) (2011) 1948–1953.
[36] R.E. Kalman, When is a linear control system optimal? Trans. ASME J. Basic Eng., Ser. D 86 (1964) 81–90.
[37] T. Fujii, A new approach to the LQ design from the viewpoint of the inverse regulator problem, IEEE Trans. Autom. Control 32 (11) (1987) 995–1004.
[38] A. Jameson, E. Kreindler, Inverse problem of linear optimal control, SIAM J. Control Optim. 11 (1) (1973) 1–19.
[39] K. Sugimoto, Partial pole assignment by LQ regulators: an inverse problem approach, IEEE Trans. Autom. Control 43 (5) (1998) 706–708.
[40] P.V. Kokotovic, A Riccati equation for block-diagonalization of ill-conditioned systems, IEEE Trans. Autom. Control 20 (6) (1975) 812–814.