On the positive stability of P^2-matrices


Linear Algebra and its Applications 503 (2016) 190–214


Olga Y. Kushel
Shanghai Jiao Tong University, Department of Mathematics, Dong Chuan Road 800, 200240 Shanghai, China
E-mail address: [email protected]


Article history: Received 10 April 2014; Accepted 1 April 2016; Available online 12 April 2016. Submitted by V. Mehrmann.

MSC: 15A48; 15A18; 15A75

Abstract. In this paper, we study the positive stability of P-matrices. We prove that a matrix A is positive stable if A is a P^2-matrix and there is at least one nested sequence of principal submatrices of A each of which is also a P^2-matrix. This result generalizes the result by Carlson, which establishes the positive stability of sign-symmetric P-matrices, and the result by Tang, Simsek, Ozdaglar and Acemoglu, which establishes the positive stability of P-matrices that are strictly row (column) square diagonally dominant for every order of minors. © 2016 Elsevier Inc. All rights reserved.

Keywords: P-matrices; P^2-matrices; Q-matrices; Stable matrices; Exterior products; P-matrix powers; Stabilization

0. Introduction

Given an n × n matrix A, we denote by a_ij its entry in the ith row and jth column: A = {a_ij}_{i,j=1}^n. For subsets α = (i_1, ..., i_k), 1 ≤ i_1 < ... < i_k ≤ n, and β = (j_1, ..., j_k), 1 ≤ j_1 < ... < j_k ≤ n, k = 1, ..., n, we denote by A(α | β) = A(i_1 ... i_k | j_1 ... j_k) the submatrix of A with the entries a_ij such that i ∈ α, j ∈ β. We use the notation A[α | β] = A[i_1 ... i_k | j_1 ... j_k] for the determinant of the submatrix A(α | β). If α = β = (i_1, ..., i_k), 1 ≤ k ≤ n, we call A(α | α) a principal submatrix of A and its determinant A[α | α] = A[i_1 ... i_k | i_1 ... i_k] a principal minor of A of the kth order. We use the notation σ(A) for the spectrum of the matrix A, i.e. for the set of all its eigenvalues. Finally, "a positive diagonal matrix" always means a diagonal matrix with positive principal diagonal entries.

In this paper, we study P-matrices.

Definition 1. A real n × n matrix A is called a P-matrix if all its principal minors are positive, i.e. the inequality A[i_1 ... i_k | i_1 ... i_k] > 0 holds for all (i_1, ..., i_k), 1 ≤ i_1 < ... < i_k ≤ n, and all k, 1 ≤ k ≤ n.

Having been introduced by Fiedler and Pták in the 1960s, P-matrices have found a great number of applications in various disciplines, including physics, economics, communication networks and biology. The spectral properties of P-matrices have received great attention.

Definition 2. A matrix is called positive stable, or simply stable, if all its eigenvalues have positive real parts.

The notion of positive stability is of fundamental importance in matrix theory. It is easy to see that a P-matrix need not be positive stable. This paper deals with the relation between positive stability and the positivity of the principal minors of matrix powers.

The paper is organized as follows. In Section 1, we analyze the background and the open problems on the positive stability of structured matrices. Here we formulate the main result of the paper and analyze its place among previously known results. In Section 2, we develop the methods needed for the proof; namely, we recall the definitions and statements concerning additive compound matrices introduced in [5]. In Section 3, results on stabilization by a diagonal matrix are studied. Revisiting the proof from [1], we show that a stabilization matrix with certain additional properties can be chosen. Section 4 deals with the proof of our main result (Theorem 14). In Section 5, we analyze the known classes of positive stable P^2-matrices.
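Before turning to the background results, the following short sketch makes Definitions 1 and 2 concrete: it checks the P-matrix property by brute force over all principal minors and exhibits a P-matrix that is not positive stable. The test matrix, the function name and the use of NumPy are our own illustration, not part of the paper.

```python
import itertools
import numpy as np

def is_P_matrix(A, tol=0.0):
    """Check whether every principal minor of A is positive (Definition 1)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        for alpha in itertools.combinations(range(n), k):
            idx = list(alpha)
            # principal minor A[alpha | alpha]
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True

# A 3 x 3 P-matrix that is not positive stable (cf. the remark after Definition 2):
B = np.array([[1.0, 0.0, 3.0],
              [3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])
print(is_P_matrix(B))         # True: all principal minors equal 1 or 28
print(np.linalg.eigvals(B))   # approx. 4 and -0.5 +/- 2.598i -- not positive stable
```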

1. Background statements

The following result characterizes the real eigenvalues of P-matrices (see [6, p. 385, Theorem 3.3]).

Theorem 3 (Fiedler, Pták). The following properties of a real matrix A are equivalent:

1. All principal minors of A are positive.
2. Every real eigenvalue of A, as well as of each principal submatrix of A, is positive.

However, P-matrices may have non-real eigenvalues as well. Let us mention the following result by Kellogg concerning the complex eigenvalues of P-matrices (see [14, p. 174, Corollary 1]).

Theorem 4 (Kellogg). Let λ be an eigenvalue of an n × n P-matrix A. Then

|arg(λ)| < π − π/n.

Since P-matrices are not necessarily stable, sufficient conditions for their stability become a question of great interest. The major result on this topic was obtained by Carlson (see [4, p. 1]).

Definition 5. A matrix A is called sign-symmetric if the inequality

A[i_1 ... i_k | j_1 ... j_k] A[j_1 ... j_k | i_1 ... i_k] ≥ 0

holds for all sets of indices (i_1, ..., i_k), (j_1, ..., j_k), where 1 ≤ i_1 < ... < i_k ≤ n, 1 ≤ j_1 < ... < j_k ≤ n, k = 1, ..., n.

Theorem 6 (Carlson). A sign-symmetric P-matrix is positive stable.

The proof of Carlson's theorem is based on the following implications.

(a) A is a sign-symmetric P-matrix ⇒ A^2 is a P-matrix.
(b) A is a sign-symmetric P-matrix ⇒ DA is a sign-symmetric P-matrix and (DA)^2 is a P-matrix for every diagonal matrix D with positive principal diagonal entries.

Several attempts to generalize Carlson's theorem to wider classes of matrices were made afterwards. To analyze them, we first consider the following generalization of the class of P-matrices which preserves the same spectral properties (see [11, p. 83, Theorem 1]).

Definition 7. A real matrix A is called a Q-matrix if the inequality

Σ_{(i_1,...,i_k)} A[i_1 ... i_k | i_1 ... i_k] > 0

holds for all k, 1 ≤ k ≤ n, where the sum is taken over all (i_1, ..., i_k), 1 ≤ i_1 < ... < i_k ≤ n.
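By analogy with the sketch after Definition 1, the Q-matrix property of Definition 7 can also be tested by brute force: for each order k only the sum of all k × k principal minors has to be positive. The function name and the demonstration matrix are ours, not the paper's.

```python
import itertools
import numpy as np

def is_Q_matrix(A, tol=0.0):
    """Check that, for every k, the sum of all k x k principal minors of A is positive."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        s = sum(np.linalg.det(A[np.ix_(list(a), list(a))])
                for a in itertools.combinations(range(n), k))
        if s <= tol:
            return False
    return True

# Every P-matrix is a Q-matrix, but not conversely:
C = np.array([[-1.0, 2.0],
              [-2.0, 3.0]])   # sums of principal minors of orders 1 and 2 are 2 and 1,
print(is_Q_matrix(C))          # True, although a_11 < 0, so C is not a P-matrix
```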

Theorem 8 (Hershkowitz). A set {λ_1, ..., λ_n} of complex numbers is the spectrum of some P-matrix if and only if it is the spectrum of some Q-matrix.

Corollary 1. Every real eigenvalue of a Q-matrix A is positive. Moreover, if λ is an eigenvalue of an n × n Q-matrix then

|arg(λ)| < π − π/n.

In [12], the following matrix classes are considered.

Definition 9. A matrix A is called a PM-matrix (QM-matrix) if A^k is a P-matrix (respectively, a Q-matrix) for every positive integer k.

In all known examples of PM-matrices, the eigenvalues are positive. In the case of QM-matrices, an example given in [12] (see p. 164) shows that they may not even be positive stable. The following still open questions were raised in [12].

If A is a PM-matrix, does λ ∈ σ(A) imply λ > 0?

and

Are the spectra of PM-matrices the same as those of QM-matrices?

In connection with Carlson's theorem and the above questions, the following classes of matrices were introduced in [13].

Definition 10. A matrix A is called a P^2-matrix (Q^2-matrix) if both A and A^2 are P- (respectively, Q-) matrices.

The following question was raised (see [13, p. 122, Question 6.2]).

Are P^2-matrices positive stable?

This question is still open. For the case of Q^2-matrices, Hershkowitz and Keller proved their positive stability for n ≤ 3 (see [13, p. 112, Proposition 3.1]). However, the example given in [12] (see [12, p. 164]) and the reasoning in [13] (see [13, p. 123, Corollary 6.9]) show that Q^2-matrices of order n ≥ 4 may have eigenvalues in the left half of the complex plane.

The following determinantal conditions were introduced in [17].

Definition 11. An n × n matrix A is called strictly row square diagonally dominant for every order of minors if the inequality

(A[α | α])^2 > Σ_{α,β∈[n], α≠β} (A[α | β])^2

holds for any α = (i_1, ..., i_k), β = (j_1, ..., j_k) and all k = 1, ..., n. A matrix A is called strictly column square diagonally dominant if A^T is strictly row square diagonally dominant.
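The following sketch makes Definition 11 checkable numerically. One reading of the summation is assumed here: for each index set α of size k, the sum runs over all index sets β of the same size with β ≠ α. The function names are ours, and the routine is exponential in n, so it is only meant for small examples.

```python
import itertools
import numpy as np

def minor(A, rows, cols):
    return np.linalg.det(A[np.ix_(list(rows), list(cols))])

def is_row_square_dominant(A):
    """Strict row square diagonal dominance for every order of minors (Definition 11),
    read as: A[a|a]^2 > sum over b != a of A[a|b]^2 for every index set a."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        sets = list(itertools.combinations(range(n), k))
        for a in sets:
            rhs = sum(minor(A, a, b) ** 2 for b in sets if b != a)
            if minor(A, a, a) ** 2 <= rhs:
                return False
    return True

def is_column_square_dominant(A):
    return is_row_square_dominant(np.asarray(A, dtype=float).T)
```

Under this reading, the matrix A of Example 15 below already fails the test for k = 1 in its first row (6^2 = 36 < 30^2 + 1^2 + 10^2 = 1001), and similarly in its first column, which is one way to see that Example 15 is not covered by Theorem 12.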

Later we will show that P-matrices which satisfy the square diagonal dominance conditions are P^2-matrices as well. The square diagonal dominance conditions guarantee the positive stability of a P-matrix (see [17, p. 27, Theorem 3]).

Theorem 12 (Tang et al.). Let A be a P-matrix. If A is strictly row (column) square diagonally dominant for every order of minors, then A is positive stable.

The main result of this paper generalizes both the results of [4] and [17]. It also provides sufficient conditions for the stability of P^2-matrices. Here we use the following definition (see, for example, [10]).

Definition 13. We say that an n × n matrix A has a nested sequence of positive principal minors, or simply a nest, if there is a permutation (i_1, ..., i_n) of the set of indices [n] such that

A[i_1 ... i_j | i_1 ... i_j] > 0,    j = 1, ..., n.

Note that the indices (i_1, ..., i_j) in each minor are ordered lexicographically.

Theorem 14. Let an n × n P^2-matrix A have a nested sequence of principal submatrices each of which is also a P^2-matrix. Then A is positive stable.

Let us give an example illustrating Theorem 14.

Example 15. Let n = 4 and

A = [  6  −30   1   10
       1    2   1   10
       1    1  10   10
      10   10  10  100 ].

As it is easy to check, A is a P-matrix and A^2 is also a P-matrix. The matrix A is obviously not sign-symmetric; thus it does not satisfy the conditions of Theorem 6 (Carlson). It also does not satisfy the conditions of row (column) square diagonal dominance for every order of minors. Moreover, for the diagonal matrix D = diag{1, 1, 0.01, 0.01}, we have

(DA)^2 = [ 7.01   −238.99   −22.9   −229.9
           9.01    −24.99     4.1     40.1
           0.081    −0.269    0.04     0.31
           0.801    −2.699    0.31     3.01 ],

which is not even a Q-matrix, so implication (b) does not hold and we cannot apply the reasoning of the proof of Theorem 6 (Carlson). However, the matrix A satisfies the conditions of Theorem 14, since it has a nested sequence of principal submatrices, each of which is also a P^2-matrix. We obtain this sequence by deleting the first, the second and the third row and column, successively:

A_1 = [  2   1   10
         1  10   10
        10  10  100 ],

A_12 = [ 10   10
         10  100 ],

A_123 = (100).

It is not difficult to check that A is positive stable, with four positive eigenvalues λ_1 = 102.837, λ_2 = 8.89601, λ_3 = 5.34722, λ_4 = 0.9199.
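The claims of Example 15 are easy to confirm numerically. The sketch below reuses the brute-force principal-minor test from the Introduction; the eigenvalues printed agree with the values quoted above to the stated precision. The helper names are ours.

```python
import itertools
import numpy as np

def is_P_matrix(A, tol=0.0):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(list(a), list(a))]) > tol
               for k in range(1, n + 1)
               for a in itertools.combinations(range(n), k))

A = np.array([[ 6.0, -30.0,  1.0,  10.0],
              [ 1.0,   2.0,  1.0,  10.0],
              [ 1.0,   1.0, 10.0,  10.0],
              [10.0,  10.0, 10.0, 100.0]])

print(is_P_matrix(A), is_P_matrix(A @ A))        # True True: A is a P^2-matrix
print(np.sort(np.linalg.eigvals(A).real)[::-1])  # approx. 102.837, 8.896, 5.347, 0.920

# the nested sequence of principal submatrices from Example 15 is also P^2:
for idx in ([1, 2, 3], [2, 3], [3]):
    S = A[np.ix_(idx, idx)]
    print(is_P_matrix(S), is_P_matrix(S @ S))    # True True for each submatrix
```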

2. Exterior products and additive compound matrices

Let {e_i}_{i=1}^n be an arbitrary basis in R^n. Let x_1, ..., x_j (2 ≤ j ≤ n) be any vectors in R^n, defined by their coordinates x_i = (x_i^1, ..., x_i^n), i = 1, ..., j, in the basis {e_i}_{i=1}^n. Then the exterior product x_1 ∧ ... ∧ x_j of the vectors x_1, ..., x_j is a vector φ in R^{C(n,j)}, where C(n,j) = n!/(j!(n−j)!), with the coordinates of the form

(φ)_α := det [ x_1^{i_1}  ...  x_1^{i_j}
               ...        ...  ...
               x_j^{i_1}  ...  x_j^{i_j} ],

where α is the number of the set of indices (i_1, ..., i_j) ⊆ [n] in the lexicographic ordering ([n], as usual, denotes the set {1, ..., n}).

We consider the jth exterior power ∧^j R^n of the space R^n as the space R^{C(n,j)}. The set of all exterior products of the form e_{i_1} ∧ ... ∧ e_{i_j}, where 1 ≤ i_1 < ... < i_j ≤ n, forms a canonical basis in ∧^j R^n (see, for example, [8]). For the definition and properties of the exterior product of operators in R^n, we refer to [9, p. 326]. The following special case of the exterior products of operators is studied in [5] (see [5, p. 394]).

Definition 16. For a linear operator A : R^n → R^n and two positive integers j, m (1 ≤ m ≤ j ≤ n), the generalized jth compound operator ∧_m^j A on ∧^j R^n is defined by the following formula:

(∧_m^j A)(x_1 ∧ ... ∧ x_j) = (1 / C(j,m)) Σ_{(α_1,...,α_j)} A^{α_1} x_1 ∧ ... ∧ A^{α_j} x_j,

where the sum is taken with respect to all the possible sets (α_1, ..., α_j) such that each α_i is either 0 or 1 and Σ_{i=1}^j α_i = m (here A^0 = I, A^1 = A).

Note that the formula for the generalized jth compound operator provided by Fiedler in [5] differs slightly from the definition given above: it does not contain the coefficient 1/C(j,m).

The matrix of ∧_m^j A in the canonical basis {e_{i_1} ∧ ... ∧ e_{i_j}}, where 1 ≤ i_1 < ... < i_j ≤ n, is called the generalized jth compound matrix of the initial matrix A and is denoted A_m^(j). It follows from the above definitions that

A_m^(j) = {ξ_αβ}_{α,β=1}^{C(n,j)},

where

ξ_αβ = (1 / C(j,m)) Σ_{(l_1,...,l_m)⊆[j]} A[ i_{l_1} ... i_{l_m} | k_{l_1} ... k_{l_m} ],

the sets (i_1, ..., i_j), 1 ≤ i_1 < ... < i_j ≤ n, and (k_1, ..., k_j), 1 ≤ k_1 < ... < k_j ≤ n, correspond to the numbers α and β in the lexicographic numeration, respectively. The sum is taken with respect to all the possible subsets (l_1, ..., l_m), 1 ≤ l_1 < ... < l_m ≤ j, of the set [j] = (1, ..., j).

Example 17. Let us consider an n × n diagonal matrix D of the form

D = diag{ε_1, ε_2, ..., ε_n},

where ε_1 = 1 > ε_2 > ... > ε_n > 0. In this case D_m^(j), where 1 ≤ m ≤ j ≤ n, is a C(n,j) × C(n,j) diagonal matrix of the form

D_m^(j) = diag{d^m_{11}, ..., d^m_{C(n,j) C(n,j)}},

where

d^m_αα = (1 / C(j,m)) Σ_{(k_1,...,k_m)⊆(i_1,...,i_j)} det diag{ε_{k_1}, ε_{k_2}, ..., ε_{k_m}} = (1 / C(j,m)) Σ_{(k_1,...,k_m)⊆(i_1,...,i_j)} ε_{k_1} ... ε_{k_m},    (1)

α is the number in the lexicographic numeration of the set of indices (i_1, ..., i_j), 1 ≤ i_1 < ... < i_j ≤ n, and the sum is taken with respect to all the possible subsets of m indices (k_1, ..., k_m) from the set (i_1, ..., i_j), i_1 ≤ k_1 < ... < k_m ≤ i_j.

In the case n = 3, we have

D = [ ε_1   0    0
      0     ε_2  0
      0     0    ε_3 ],      I = [ 1  0  0
                                   0  1  0
                                   0  0  1 ].

Thus

D_1^(2) = (1/2) [ ε_1 + ε_2   0           0
                  0           ε_1 + ε_3   0
                  0           0           ε_2 + ε_3 ].

In the case m = j, the above definition gives the linear operator ∧^j A, defined by the equality

(∧^j A)(x_1 ∧ ... ∧ x_j) = Ax_1 ∧ ... ∧ Ax_j.

The operator ∧^j A is called the jth exterior power of the initial operator A, or the jth compound operator. It is easy to see that ∧^1 A = A and that ∧^n A is one-dimensional and coincides with det A. If A = {a_ij}_{i,j=1}^n is the matrix of A in the basis {e_i}_{i=1}^n, then the matrix of ∧^j A in the basis {e_{i_1} ∧ ... ∧ e_{i_j}}, where 1 ≤ i_1 < ... < i_j ≤ n, equals the jth compound matrix A^(j) of the initial matrix A. (Here the jth compound matrix A^(j) consists of all the minors of the jth order A[ i_1 ... i_j | k_1 ... k_j ], where 1 ≤ i_1 < ... < i_j ≤ n, 1 ≤ k_1 < ... < k_j ≤ n, of the initial n × n matrix A, listed in the lexicographic order (see, for example, [16]).)

The following properties of compound matrices are well known.

1. Let A, B be n × n matrices. Then (AB)^(j) = A^(j) B^(j) for j = 1, ..., n (the Cauchy–Binet formula).
2. The jth compound matrix A^(j) of an invertible matrix A is also invertible, and the following equality holds: (A^(j))^{-1} = (A^{-1})^(j), j = 1, ..., n (the Jacobi formula).
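As a quick sanity check of the objects just introduced, the sketch below builds the jth compound matrix A^(j) from all j × j minors in lexicographic order and verifies the Cauchy–Binet and Jacobi formulas numerically on a random matrix; the function name and test data are ours.

```python
import itertools
import numpy as np

def compound(A, j):
    """jth compound matrix: all j x j minors of A in lexicographic order."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    sets = list(itertools.combinations(range(n), j))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in sets] for r in sets])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
for j in range(1, 5):
    # Cauchy-Binet: (AB)^(j) = A^(j) B^(j)
    assert np.allclose(compound(A @ B, j), compound(A, j) @ compound(B, j))
    # Jacobi: (A^(j))^{-1} = (A^{-1})^(j)
    assert np.allclose(np.linalg.inv(compound(A, j)), compound(np.linalg.inv(A), j))
print("Cauchy-Binet and Jacobi formulas verified for a random 4 x 4 matrix.")
```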

3. Stabilization by a diagonal matrix

To describe matrix stabilization, let us give the following definitions.

Definition 18. An n × n positive diagonal matrix D_(A) is called a stabilization matrix for an n × n matrix A if all the eigenvalues of D_(A) A have positive real parts.

Definition 19. An n × n real matrix A is called stabilizable if there is at least one stabilization matrix D_(A).

The following sufficient conditions for the existence of a stabilization matrix were provided by Fisher and Fuller (see [1, p. 728, Theorem 1]; see also [7]).

Theorem 20 (Fisher, Fuller). Let A be an n × n real matrix, all of whose leading principal minors are positive. Then there is an n × n positive diagonal matrix D_(A) such that all of the eigenvalues of D_(A) A are positive and simple.

An obvious consequence of the Fisher–Fuller theorem is the following statement.

Corollary 2. An n × n real matrix A is stabilizable if it has at least one nested sequence of positive principal minors.

Let us prove the following lemma, which describes a possible choice of a stabilization matrix.

Lemma 1. Let A be an n × n matrix with positive leading principal minors. Then it is stabilizable. Moreover, for any sequence (γ_1, ..., γ_{n−1}), γ_i > 0, i = 1, ..., n − 1, there is a stabilization matrix D_(A) = diag{ε_1, ε_2, ..., ε_n} with the following properties:

1. ε_1 = 1 > ε_2 > ... > ε_n > 0;
2. ε_i / ε_{i+1} ≥ γ_i for i = 1, ..., n − 1.

Proof. For the proof, we repeat the reasoning of Proof 1 of the Fisher–Fuller theorem (see [1, pp. 728–729]). We use induction on n. For n = 1, the result is trivial: A = {a_11} and D_(A) = {1}. Let us prove Lemma 1 for n = 2. In this case,

A = [ a_11  a_12
      a_21  a_22 ],

and we define

D_(A) = [ 1  0
          0  ε_2 ],

where ε_2 > 0 will be chosen later. Then

D_(A) A = [ a_11      a_12
            ε_2 a_21  ε_2 a_22 ].

Let M_2(ε_2, λ) := det(D_(A) A − λI), which depends on ε_2. Then M_2(0, λ) has two simple roots: λ_1 = a_11 and λ_2 = 0. The roots of M_2(ε_2, λ) are close to a_11 and 0 for sufficiently small values of ε_2. That means both of them are real (since complex eigenvalues must appear in conjugate pairs), and at least one of them is positive. Since det(D_(A) A) is positive, the other eigenvalue is also positive. The continuity of the eigenvalues implies that there is a positive number ε_2^0 < 1 such that the above spectral properties hold for all ε_2 with 0 < ε_2 ≤ ε_2^0. Thus we can choose ε_2 ≤ min(ε_2^0, 1/γ_1). The above reasoning shows that the positive diagonal matrix D_(A) := diag{1, ε_2} is a stabilization matrix for A satisfying ε_1/ε_2 ≥ γ_1. For n = 2, the lemma is proven.

Assume the lemma holds for n − 1. Let us prove it for n. Given a sequence (γ_1, ..., γ_{n−1}) of positive numbers and an n × n matrix A, let us show how ε_n can be chosen to satisfy the inequalities 0 < ε_n < ε_{n−1}. Repeating the proof of the Fisher–Fuller theorem, we apply the induction hypothesis to the (n − 1) × (n − 1) leading principal submatrix A_11, obtained from A by deleting the last row and the last column. We use the following partitions:

A = [ A_11  A_12
      A_21  A_22 ],      D_(A) = [ D_1  0
                                   0    ε_n ],

where the matrix D_1 = diag{ε_1, ε_2, ..., ε_{n−1}} is obtained by applying the induction hypothesis to the matrix A_11 and ε_n will be chosen later. Note that, by the induction hypothesis, ε_1 = 1 > ε_2 > ... > ε_{n−1} > 0 and ε_i/ε_{i+1} ≥ γ_i for i = 1, ..., n − 2. Repeating the above reasoning, we put M_n(ε_n, λ) := det(D_(A) A − λI). The nonzero roots of M_n(0, λ) are the same as those of det(D_1 A_11 − λI); besides, it has a simple root at 0. Considering sufficiently small ε_n and taking into account that det(D_(A) A) > 0, we obtain that all the roots of M_n(ε_n, λ) are positive and simple. Using the continuity of the eigenvalues, we get that there is a positive number ε_n^0, 0 < ε_n^0 < ε_{n−1}, such that the above spectral properties hold for all ε_n which satisfy 0 < ε_n ≤ ε_n^0. Thus we can choose ε_n satisfying 0 < ε_n < min(ε_n^0, ε_{n−1}/γ_{n−1}). So we have found a stabilization matrix D_(A) of the form D_(A) = diag{ε_1, ε_2, ..., ε_n}, where ε_1 = 1 > ε_2 > ... > ε_n > 0, which satisfies the conditions ε_i/ε_{i+1} ≥ γ_i for i = 1, ..., n − 1. □
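A naive numerical illustration of Lemma 1 (and of the Fisher–Fuller construction behind it): starting from ε_1 = 1, each subsequent ε_k is shrunk, respecting a prescribed ratio bound, until all eigenvalues of the scaled leading block are positive, simple and real. This greedy search is only a sketch of the inductive argument above, with our own helper names, shrink factor and tolerances; it is not the constructive proof itself.

```python
import numpy as np

def stabilization_matrix(A, gammas, shrink=10.0, max_steps=200, tol=1e-9):
    """Search for D = diag(eps) with eps_1 = 1 > eps_2 > ... > eps_n > 0 and
    eps_i / eps_{i+1} >= gammas[i], such that D @ A has positive simple real eigenvalues.
    Assumes all leading principal minors of A are positive."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eps = [1.0]
    for k in range(2, n + 1):
        cand = eps[-1] / max(gammas[k - 2], 1.0 + 1e-3)   # respect the ratio bound
        for _ in range(max_steps):
            lam = np.linalg.eigvals(np.diag(eps + [cand]) @ A[:k, :k])
            real_ok = np.all(np.abs(lam.imag) < tol) and np.all(lam.real > tol)
            simple_ok = np.min(np.diff(np.sort(lam.real))) > tol
            if real_ok and simple_ok:
                break
            cand /= shrink                                  # shrink eps_k and retry
        eps.append(cand)
    return np.diag(eps)

A = np.array([[ 6.0, -30.0,  1.0,  10.0],
              [ 1.0,   2.0,  1.0,  10.0],
              [ 1.0,   1.0, 10.0,  10.0],
              [10.0,  10.0, 10.0, 100.0]])
D = stabilization_matrix(A, gammas=[2.0, 2.0, 2.0])
print(np.linalg.eigvals(D @ A))   # all real and positive if the search succeeded
```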

4. Spectra of P^2-matrices

The following properties of P-matrices are well known (see, for example, [18, Theorem 3.1]).

Lemma 2. Let A be a P-matrix. Then the following matrices are also P-matrices:

1. A^T (the transpose of A);
2. A^{−1} (the inverse of A);
3. DAD^{−1}, where D is an invertible diagonal matrix;
4. PAP^{−1}, where P is a permutation matrix;
5. any principal submatrix of A;
6. the Schur complement of any principal submatrix of A;
7. both DA and AD, where D is a positive diagonal matrix.

Let us list the following properties of P^2- and Q^2-matrices, which will be used later.

Lemma 3. Let A be a P^2- (Q^2-) matrix. Then the following matrices are also P^2- (Q^2-) matrices:

1. A^T (the transpose of A);
2. A^{−1} (the inverse of A);
3. DAD^{−1}, where D is an invertible diagonal matrix;
4. PAP^{−1}, where P is a permutation matrix.

Proof. (1) Since A is a P-matrix, A^T is also a P-matrix, by Lemma 2, Part 1. Since A^2 is a P-matrix and (A^2)^T = (A^T)^2, we obtain that (A^T)^2 is also a P-matrix. So we conclude that A^T is a P^2-matrix.
(2) For the proof it is enough to check that (A^{−1})^2 = (A^2)^{−1} and to apply Lemma 2, Part 2, to both A and A^2.
(3) The proof follows from the equality (DAD^{−1})^2 = DA^2D^{−1} and Part 3 of Lemma 2.
(4) The proof follows from the equality (PAP^{−1})^2 = PA^2P^{−1} and Part 4 of Lemma 2.
The case of a Q^2-matrix is treated analogously, using the obvious fact that if A is a Q-matrix then A^T, A^{−1}, DAD^{−1} and PAP^{−1} are also Q-matrices. □

Note that principal submatrices and Schur complements of a P^2-matrix may fail to be P^2-matrices. Unlike the case of P-matrices, a product of the form DA, where A is a P^2-matrix and D is an arbitrary positive diagonal matrix, also may fail to be a P^2-matrix, or even a Q^2-matrix. The following lemma shows the link between the properties of the compound matrices of diagonal scalings and generalized compound matrices.

Lemma 4. Let an n × n matrix A be a P^2-matrix. Let D be a positive diagonal matrix such that the following inequalities hold:

Tr( D_k^(j) A^(j) D_m^(j) A^(j) ) > 0

for every j = 1, ..., n and every pair (k, m), 0 ≤ k, m ≤ j. Then D_t A = (tI + (1 − t)D)A is both a P-matrix and a Q^2-matrix for all t ∈ [0, 1].

Proof. To prove that D_t A is a P-matrix, it is enough to observe that D_t is a positive diagonal matrix for all t ∈ [0, 1] and then apply Lemma 2, Part 7.

Now let us prove that (D_t A)^2 is a Q-matrix or, equivalently, that Tr(((D_t A)^2)^(j)) > 0 for all j = 1, ..., n and all t ∈ [0, 1]. Applying the Cauchy–Binet formula, we obtain that ((D_t A)^2)^(j) = ((D_t A)^(j))^2 for all j = 1, ..., n and all t ∈ [0, 1]. Using the Cauchy–Binet formula again, we expand (D_t A)^(j) as follows:

(D_t A)^(j) = (D_t)^(j) A^(j) = (tI + (1 − t)D)^(j) A^(j).

Now, using the properties of the exterior products and the binomial formula, we obtain

(tI + (1 − t)D)^(j) = Σ_{k=0}^{j} C(j,k) t^{j−k} (1 − t)^k  I ∧ ... ∧ I ∧ D ∧ ... ∧ D   (with j − k factors I and k factors D)
                    = Σ_{k=0}^{j} C(j,k) t^{j−k} (1 − t)^k D_k^(j),

where D_0^(j) = I^(j). Thus we have

((tI + (1 − t)D)^(j) A^(j))^2 = ( Σ_{k=0}^{j} C(j,k) t^{j−k} (1 − t)^k D_k^(j) A^(j) ) ( Σ_{m=0}^{j} C(j,m) t^{j−m} (1 − t)^m D_m^(j) A^(j) )
 = Σ_{k,m=0}^{j} C(j,k) C(j,m) t^{2j−(k+m)} (1 − t)^{k+m} D_k^(j) A^(j) D_m^(j) A^(j).

Since the trace function is linear, we obtain

Tr( ((tI + (1 − t)D)^(j) A^(j))^2 ) = Σ_{k,m=0}^{j} C(j,k) C(j,m) t^{2j−(k+m)} (1 − t)^{k+m} Tr( D_k^(j) A^(j) D_m^(j) A^(j) ).    (2)

According to the conditions of the lemma, each term of Sum (2) is positive. Thus the whole sum is positive. □
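The binomial expansion just used can be checked numerically for a diagonal matrix D, where D_k^(j) is given explicitly by Formula (1). The sketch below compares the jth compound of tI + (1 − t)D with the sum Σ_k C(j,k) t^{j−k}(1 − t)^k D_k^(j); the helper names are ours and only diagonal D is treated.

```python
import itertools
import math
import numpy as np

def compound_diag(d, j):
    """jth compound of diag(d): diagonal with products over j-element index sets."""
    return np.diag([np.prod([d[i] for i in s])
                    for s in itertools.combinations(range(len(d)), j)])

def generalized_compound_diag(d, j, m):
    """D_m^(j) for D = diag(d), following Formula (1)."""
    entries = []
    for s in itertools.combinations(range(len(d)), j):
        terms = [np.prod([d[i] for i in sub]) for sub in itertools.combinations(s, m)]
        entries.append(sum(terms) / math.comb(j, m))
    return np.diag(entries)

d = np.array([1.0, 0.7, 0.4, 0.2])   # eps_1 = 1 > eps_2 > ... > eps_n > 0
t, j, n = 0.3, 2, len(d)
lhs = compound_diag(t * np.ones(n) + (1 - t) * d, j)
rhs = sum(math.comb(j, k) * t ** (j - k) * (1 - t) ** k * generalized_compound_diag(d, j, k)
          for k in range(j + 1))
print(np.allclose(lhs, rhs))   # True
```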

Let us introduce the following notations and basic facts. Let A = {a_ij}_{i,j=1}^n be an n × n matrix; then |A| denotes the matrix whose entries are equal to the absolute values of a_ij (i, j = 1, ..., n), i.e. |A| = {|a_ij|}_{i,j=1}^n. In this case, the triangle inequality shows that |AB| ≤ |A||B| entry-wise for any n × n matrices A and B.

Let D = diag{d_11, ..., d_nn} be a positive diagonal matrix which satisfies the estimate D < αI for some positive value α (by this notation we mean that d_ii < α for all i = 1, ..., n). Then

|Tr(DA)| = | Σ_{i=1}^n d_ii a_ii | ≤ Σ_{i=1}^n d_ii |a_ii| < α Σ_{i=1}^n |a_ii| = α Tr|A|,    (3)

for any n × n matrix A.

Let D_1, D_2 be positive diagonal matrices which satisfy the estimates D_1 < αI, D_2 < βI, respectively, for some positive values α and β. Then

|Tr(D_1 A D_2 A)| < α Tr(|A D_2 A|) ≤ α Tr(|A| |D_2| |A|) = α Tr(D_2 |A|^2) < αβ Tr(|A|^2),    (4)

for any n × n matrix A.

Given an n × n matrix A, we denote by A^(j)[1, ..., m], where j = 1, ..., n and m = 0, 1, ..., j, the principal submatrix of the jth compound matrix A^(j) which consists of all the minors of the jth order of the following form:

A[ 1 ... m i_{m+1} ... i_j | 1 ... m k_{m+1} ... k_j ],

where m < i_{m+1} < ... < i_j ≤ n, m < k_{m+1} < ... < k_j ≤ n, with A^(j)[0] := A^(j).
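The submatrices A^(j)[1, ..., m] and the positivity condition on the diagonal of (A^(j)[1, ..., m])^2, which appears in Lemma 5 below and again in Lemma 6, can be tested directly for small matrices. A brute-force sketch, with our own function names:

```python
import itertools
import numpy as np

def compound_block(A, j, m):
    """A^(j)[1,...,m]: rows/columns of the jth compound matrix indexed by the
    j-element sets containing {1, ..., m} (0-based: {0, ..., m-1})."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    head = tuple(range(m))
    sets = [head + tail for tail in itertools.combinations(range(m, n), j - m)]
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in sets] for r in sets])

def diagonal_condition_holds(A):
    """All principal diagonal entries of (A^(j)[1,...,m])^2 positive,
    for all j = 1, ..., n and m = 0, 1, ..., j (the hypothesis of Lemma 5)."""
    n = np.asarray(A).shape[0]
    for j in range(1, n + 1):
        for m in range(0, j + 1):
            B = compound_block(A, j, m)
            if np.any(np.diag(B @ B) <= 0):
                return False
    return True
```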

Now let us prove the following lemma, which shows that under some additional conditions on a P^2-matrix A some of its diagonal scalings are Q^2-matrices.

Lemma 5. Let A be an n × n P-matrix which satisfies the following conditions: all principal diagonal entries of the matrices (A^(j)[1, ..., m])^2 are positive for all j = 1, ..., n and all m = 0, 1, ..., j. Then there is a positive diagonal matrix D^0_(A) with the following two properties:

1. D^0_(A) is a stabilization matrix for A.
2. D^0_t A = (tI + (1 − t)D^0_(A))A is a Q^2-matrix for all t ∈ [0, 1].

Proof. Let us search for the required matrix D^0_(A) in the following form:

D^0_(A) = diag{ε^0_1, ε^0_2, ..., ε^0_n},  where 1 = ε^0_1 > ε^0_2 > ... > ε^0_n > 0.    (5)

For this, let us first construct a matrix D_(A) which has the form (5) and satisfies the following inequalities:

Tr( (D_(A))_k^(j) A^(j) (D_(A))_m^(j) A^(j) ) > 0

for every j = 1, ..., n and every pair (k, m), 0 ≤ k, m ≤ j.

First, let us consider the case when k = 0 or m = 0. Without loss of generality, assume m = 0 and 0 ≤ k ≤ j. Since (D_(A))_0^(j) = I^(j), we obtain

Tr( (D_(A))_k^(j) A^(j) (D_(A))_0^(j) A^(j) ) = Tr( (D_(A))_k^(j) (A^(j))^2 ) > 0,

since (D_(A))_k^(j) is positive diagonal for every positive diagonal matrix D_(A), and (A^(j))^2 has all positive principal diagonal entries.

For the case k, m ≠ 0, we choose the entries ε_1, ..., ε_n of the required matrix D_(A) consecutively, taking into account the inequalities ε_1 = 1 > ε_2 > ... > ε_n > 0. First, we put ε_1 := 1. Now let us choose ε_2. Let us examine all the pairs (k, m) such that max(k, m) = 1. Since k ≥ 1 and m ≥ 1, there is only one pair, k = 1, m = 1, which satisfies this condition. So let us choose ε_2 such that ε_2 < 1 and Tr( (D_(A))_1^(j) A^(j) (D_(A))_1^(j) A^(j) ) > 0 for all j = 1, ..., n − 1. According to Formula (1), the following equalities hold for the entries d^1_αα of the matrix (D_(A))_1^(j):

d^1_αα = (1/j) Σ_{k=1}^{j} ε_{i_k},    (6)

where α is the number in the lexicographic ordering of the set of indices (i_1, ..., i_j), 1 ≤ i_1 < ... < i_j ≤ n. Examining the lexicographic ordering, it is easy to see that exactly the first C(n−1, j−1) of the sets (i_1, ..., i_j) contain the first index 1, and so exactly the first C(n−1, j−1) entries d^1_αα contain ε_1 = 1. Thus we can represent (D_(A))_1^(j) in the following form:

(D_(A))_1^(j) = (1/j) [ I_{C(n−1,j−1)}  * ; 0  0 ] + O_1(ε_2),

where I_{C(n−1,j−1)} is a C(n−1,j−1) × C(n−1,j−1) identity matrix (the blocks marked * and 0 being zero here) and O_1(ε_2) is a C(n,j) × C(n,j) positive diagonal matrix. Taking into account Formula (6) and the inequalities ε_1 = 1 > ε_2 > ... > ε_n, we obtain the following estimate:

|O_1(ε_2)| < ε_2 I.

Multiplying [ I_{C(n−1,j−1)}  0 ; 0  0 ] by A^(j), we obtain the matrix whose first C(n−1,j−1) rows are nonzero while the rest are zero. Taking into account that the C(n−1,j−1) × C(n−1,j−1) leading principal submatrix of A^(j) is exactly the matrix A^(j)[1], we obtain the following representation:

(D_(A))_1^(j) A^(j) (D_(A))_1^(j) A^(j) =
( (1/j) [ A^(j)[1]  * ; 0  0 ] + O_1(ε_2) A^(j) ) ( (1/j) [ A^(j)[1]  * ; 0  0 ] + O_1(ε_2) A^(j) ) =
(1/j^2) [ (A^(j)[1])^2  * ; 0  0 ] + (1/j) [ A^(j)[1]  * ; 0  0 ] O_1(ε_2) A^(j) + (1/j) O_1(ε_2) A^(j) [ A^(j)[1]  * ; 0  0 ] + O_1(ε_2) A^(j) O_1(ε_2) A^(j)
= (1/j^2) [ (A^(j)[1])^2  * ; 0  0 ] + Θ(ε_2).

The above equality implies

Tr( (D_(A))_1^(j) A^(j) (D_(A))_1^(j) A^(j) ) = (1/j^2) Tr( (A^(j)[1])^2 ) + Tr(Θ(ε_2)).

Since |O_1(ε_2)| < ε_2 I, we have, by Formulae (3) and (4), the estimates

|Tr(Θ(ε_2))| ≤ |Tr( O_1(ε_2) (A^(j)[1])^2 )| + |Tr( O_1(ε_2) A^(j) O_1(ε_2) A^(j) )| + |Tr( O_1(ε_2) (A^(j)[1])^2 )| < ε_2 ( 2 Tr(|A^(j)[1]|^2) + ε_2 Tr(|A^(j)|^2) ) → 0  as ε_2 → 0.

Since Tr( (A^(j)[1])^2 ) > 0, we obtain the estimates

Tr( (D_(A))_1^(j) A^(j) (D_(A))_1^(j) A^(j) ) ≥ (1/j^2) Tr( (A^(j)[1])^2 ) − |Tr(Θ(ε_2))| >
(1/j^2) Tr( (A^(j)[1])^2 ) − ε_2 ( 2 Tr(|A^(j)[1]|^2) + ε_2 Tr(|A^(j)|^2) ) ≥
min_j (1/j^2) Tr( (A^(j)[1])^2 ) − ε_2 ( 2 max_j Tr(|A^(j)[1]|^2) + ε_2 max_j Tr(|A^(j)|^2) ),

for all j = 1, ..., n − 1. Thus we can choose ε_2 such that ε_2 < 1 and

min_j Tr( (A^(j)[1])^2 ) − n ε_2 ( 2 max_j Tr(|A^(j)[1]|^2) + n ε_2 max_j Tr(|A^(j)|^2) ) > 0.

This inequality implies

Tr( (D_(A))_1^(j) A^(j) (D_(A))_1^(j) A^(j) ) > 0  for all j = 1, ..., n − 1.

Assume we have already chosen ε_1, ..., ε_l (2 ≤ l < n) such that ε_1 = 1 > ... > ε_l > 0 and Tr( (D_(A))_k^(j) A^(j) (D_(A))_m^(j) A^(j) ) > 0 for all j = 1, ..., n − 1 and all the pairs (k, m) with max(k, m) ≤ l − 1, 1 ≤ k, m ≤ j. Let us choose ε_{l+1}. For this, we examine all pairs (k, m) with max(k, m) = l and all j satisfying j ≥ l. Since l ≤ n − 1, such pairs do exist. Assume that k = l, m ≤ l (the case m = l, k ≤ l is treated analogously). We choose ε_{l+1} satisfying the following conditions:

(1) 0 < ε_{l+1} < ε_l;
(2) Tr( (D_(A))_l^(j) A^(j) (D_(A))_m^(j) A^(j) ) > 0 for all m, 1 ≤ m ≤ l, and all j, j ≥ l.

For this, we apply the same reasoning as above. According to Formula (1), the following equalities hold for the entries d^l_αα of the matrix (D_(A))_l^(j):

d^l_αα = (1 / C(j,l)) Σ_{(k_1,...,k_l)⊆(i_1,...,i_j)} ε_{k_1} ... ε_{k_l},

where α is the number of the set of indices (i_1, ..., i_j), 1 ≤ i_1 < ... < i_j ≤ n, in the lexicographic ordering. Then, considering the lexicographic ordering of the sets (i_1, ..., i_j), 1 ≤ i_1 < ... < i_j ≤ n, we obtain that exactly the first C(n−l, j−l) of them contain the first indices (1, ..., l). Thus exactly the first C(n−l, j−l) entries d^l_αα contain the product ε_1 ε_2 ... ε_l, and we can present (D_(A))_l^(j) in the following form:

(D_(A))_l^(j) = (ε_1 ... ε_l / C(j,l)) [ I_{C(n−l,j−l)}  0 ; 0  0 ] + O_l(ε_{l+1}),

where I_{C(n−l,j−l)} is a C(n−l,j−l) × C(n−l,j−l) identity matrix and O_l(ε_{l+1}) is a C(n,j) × C(n,j) positive diagonal matrix. Since we consider ε_1 = 1 > ε_2 > ... > ε_l > ε_{l+1} > ... > ε_n > 0, the following estimate holds:

|O_l(ε_{l+1})| < ε_1 ... ε_{l−1} ε_{l+1} I.

Now let us consider (D_(A))_m^(j), m ≤ l. Using the same reasoning as above, we represent it in the following form:

(D_(A))_m^(j) = ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) [ I_{C(n−l,j−l)}  0 ; 0  0 ] + O_m(ε_{l+1}),

where I_{C(n−l,j−l)} is a C(n−l,j−l) × C(n−l,j−l) identity matrix and O_m(ε_{l+1}) is a C(n,j) × C(n,j) positive diagonal matrix. Since we consider ε_1 = 1 > ε_2 > ... > ε_l > ε_{l+1} > ... > ε_n > 0, Formulae (1) imply the following estimate:

|O_m(ε_{l+1})| < ε_1 ... ε_{m−1} ε_{l+1} I.

Then, multiplying [ I_{C(n−l,j−l)}  0 ; 0  0 ] by A^(j), we obtain the matrix whose first C(n−l,j−l) rows are nonzero while the rest are zero. Taking into account that the C(n−l,j−l) × C(n−l,j−l) leading principal submatrix of A^(j) is exactly the matrix A^(j)[1, ..., l], we obtain the following representation:

(D_(A))_l^(j) A^(j) (D_(A))_m^(j) A^(j) =
( ε_1 ... ε_l [ A^(j)[1, ..., l]  * ; 0  0 ] + O_l(ε_{l+1}) A^(j) ) ( ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) [ A^(j)[1, ..., l]  * ; 0  0 ] + O_m(ε_{l+1}) A^(j) ) =
ε_1 ... ε_l ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) [ (A^(j)[1, ..., l])^2  * ; 0  0 ]
+ ε_1 ... ε_l [ A^(j)[1, ..., l]  * ; 0  0 ] O_m(ε_{l+1}) A^(j)
+ O_l(ε_{l+1}) A^(j) ( ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) [ A^(j)[1, ..., l]  * ; 0  0 ] + O_m(ε_{l+1}) A^(j) )
= ε_1 ... ε_l ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) [ (A^(j)[1, ..., l])^2  * ; 0  0 ] + Θ(ε_{l+1}).

The above equality implies

Tr( (D_(A))_l^(j) A^(j) (D_(A))_m^(j) A^(j) ) = ε_1 ... ε_l ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) Tr( (A^(j)[1, ..., l])^2 ) + Tr(Θ(ε_{l+1})).

Since |O_l(ε_{l+1})| < ε_1 ... ε_{l−1} ε_{l+1} I and |O_m(ε_{l+1})| < ε_1 ... ε_{m−1} ε_{l+1} I, we obtain the following estimate, using Inequalities (3) and (4):

|Tr(Θ(ε_{l+1}))| ≤
( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) |Tr( O_l(ε_{l+1}) (A^(j)[1, ..., l])^2 )|
+ |Tr( O_l(ε_{l+1}) A^(j) O_m(ε_{l+1}) A^(j) )|
+ ε_1 ... ε_l |Tr( O_m(ε_{l+1}) (A^(j)[1, ..., l])^2 )| <
C(j,l) ε_1 ... ε_{l−1} ε_{l+1} ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) Tr( |A^(j)[1, ..., l]|^2 )
+ C(j,l) C(j,m) ε_1 ... ε_{l−1} ε_{l+1} ε_1 ... ε_{m−1} ε_{l+1} Tr( |A^(j)|^2 )
+ C(j,m) ε_1 ... ε_{m−1} ε_{l+1} ε_1 ... ε_l Tr( |A^(j)[1, ..., l]|^2 )  → 0  as ε_{l+1} → 0.

Since Tr( (A^(j)[1, ..., l])^2 ) > 0, we have

Tr( (D_(A))_l^(j) A^(j) (D_(A))_m^(j) A^(j) ) ≥ ε_1 ... ε_l ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) Tr( (A^(j)[1, ..., l])^2 ) − |Tr(Θ(ε_{l+1}))| >
ε_1 ... ε_l ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) Tr( (A^(j)[1, ..., l])^2 )
− C(j,l) ε_1 ... ε_{l−1} ε_{l+1} ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) Tr( |A^(j)[1, ..., l]|^2 )
− C(j,l) C(j,m) ε_1 ... ε_{l−1} ε_{l+1} ε_1 ... ε_{m−1} ε_{l+1} Tr( |A^(j)|^2 )
− C(j,m) ε_1 ... ε_{m−1} ε_{l+1} ε_1 ... ε_l Tr( |A^(j)[1, ..., l]|^2 ).

Then we can choose ε_{l+1} such that ε_{l+1} < ε_l and

ε_1 ... ε_l ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) min_j Tr( (A^(j)[1, ..., l])^2 )
− C(n,l) ε_1 ... ε_{l−1} ε_{l+1} ( Σ_{(i_1,...,i_m)⊂[l]} ε_{i_1} ... ε_{i_m} ) max_j Tr( |A^(j)[1, ..., l]|^2 )
− C(n,l) C(n,m) ε_1 ... ε_{l−1} ε_{l+1} ε_1 ... ε_{m−1} ε_{l+1} max_j Tr( |A^(j)|^2 )
− C(n,m) ε_1 ... ε_{m−1} ε_{l+1} ε_1 ... ε_l max_j Tr( |A^(j)[1, ..., l]|^2 ) > 0.

This inequality implies Tr( (D_(A))_l^(j) A^(j) (D_(A))_m^(j) A^(j) ) > 0 for all j = l, ..., n − 1.

Applying the above reasoning n − 1 times, we construct all the entries ε_1, ..., ε_n of the matrix D_(A). When j = n, the inequality

Tr( (D_(A))_k^(n) A^(n) (D_(A))_m^(n) A^(n) ) > 0

is obvious for any positive diagonal matrix D_(A) and any pair (k, m), 0 ≤ k, m ≤ n. Now, putting γ_i := ε_i/ε_{i+1} and applying Lemma 1, we find a stabilization matrix D^0_(A). It is easy to see that, for sufficiently big values of ε_i/ε_{i+1}, the matrix D^0_(A) also satisfies the inequalities

Tr( (D^0_(A))_k^(j) A^(j) (D^0_(A))_m^(j) A^(j) ) > 0

for every j = 1, ..., n and every pair (k, m), 0 ≤ k, m ≤ j. Since it satisfies the conditions of Lemma 4, applying Lemma 4, we complete the proof. □

Now let us recall the following basic definitions and facts, namely Sylvester's Determinant Identity (see, for example, [16, p. 3]) and the Schur complement formula (see, for example, [2,3]).

For a given n × n matrix A, two sets of indices (i_1, ..., i_k) and (j_1, ..., j_k) from [n] and each l ∈ [n] \ (i_1, ..., i_k), r ∈ [n] \ (j_1, ..., j_k), we define

b_lr = A[ i_1 ... i_k l | j_1 ... j_k r ].

(In such notations, we always mean that the set of indices is arranged in its natural increasing order.) Then the following identity (Sylvester's Determinant Identity) holds for the minors of the (n − k) × (n − k) matrix B = {b_lr}:

B[ l_1 ... l_p | r_1 ... r_p ] = ( A[ i_1 ... i_k | j_1 ... j_k ] )^{p−1} A[ i_1 ... i_k l_1 ... l_p | j_1 ... j_k r_1 ... r_p ].

Now let us write an initial n × n matrix A in the following block form:

A = [ A_kk       A_{k,n−k}
      A_{n−k,k}  A_{n−k,n−k} ],

where A_kk is a nonsingular leading principal submatrix of A, spanned by the basis vectors e_1, ..., e_k, 1 ≤ k < n, and A_{n−k,n−k} is the principal submatrix spanned by the remaining basis vectors. Then the Schur complement A|A_kk of A_kk in A is defined as follows:

A|A_kk = A_{n−k,n−k} − A_{n−k,k} A_kk^{−1} A_{k,n−k}.

For the entries of the Schur complement, we have the following equality: A|A_kk = {c_lr}, where

c_lr = A[ 1 ... k l | 1 ... k r ] / A[ 1 ... k | 1 ... k ],    k + 1 ≤ l, r ≤ n.

The following formula connects the inverse of the Schur complement A|A_kk with a principal submatrix of A^{−1}:

(A|A_kk)^{−1} = (A^{−1})_{n−k,n−k}.    (7)
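Formula (7) is the key fact used in the proof of Lemma 6 below, and it is easy to confirm numerically. A small sketch (block sizes, names and test data are ours):

```python
import numpy as np

def schur_complement(A, k):
    """Schur complement of the leading k x k block A_kk in A."""
    A = np.asarray(A, dtype=float)
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    return A22 - A21 @ np.linalg.inv(A11) @ A12

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # comfortably invertible
for k in range(1, 5):
    # Formula (7): (A|A_kk)^{-1} equals the trailing principal block of A^{-1}
    lhs = np.linalg.inv(schur_complement(A, k))
    rhs = np.linalg.inv(A)[k:, k:]
    assert np.allclose(lhs, rhs)
print("Formula (7) verified for k = 1, ..., 4.")
```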

Using the properties of Schur complements, we shall prove the following statement.

Lemma 6. Let A be a P^2-matrix. Let A have at least one nested sequence of principal submatrices, each of which is also a P^2-matrix. Then there is a permutation θ of [n], with permutation matrix P_θ, such that the matrix B = P_θ^{−1} A^{−1} P_θ is a P^2-matrix and satisfies the following conditions:

each principal diagonal entry of the matrix (B^(j)[1, ..., m])^2 is positive    (8)

for all j = 1, ..., n and all m = 1, ..., j.

Proof. Let us use the notation A( i_1 ... i_j | i_1 ... i_j ) for the j × j principal submatrix of A determined by the rows with the indices (i_1, ..., i_j) and the columns with the same indices. (Note that the indices (i_1, ..., i_j) in this notation are re-ordered in increasing order, as they are in the initial matrix.) We consider the given nested sequence of principal submatrices of A, each of which is a P^2-matrix:

a_{i_1 i_1},  A( i_1 i_2 | i_1 i_2 ),  ...,  A( i_1 ... i_n | i_1 ... i_n ).

This sequence is defined by the permutation τ = (i_1, ..., i_n) of the set of indices [n]. Let us construct the permutation θ by the following rule: θ(i_m) = n − m + 1, m = 1, ..., n. In this case, we obtain the following equalities for the principal submatrices of the matrix à = P_θ^{−1} A P_θ:

Ã( m+1 ... n | m+1 ... n ) = A( i_1 ... i_{n−m} | i_1 ... i_{n−m} ),    m = 0, ..., n − 1.

Now let us consider the matrix B = (Ã)^{−1} = P_θ^{−1} A^{−1} P_θ and its Schur complements of the form B|B_mm, where B_mm is the principal submatrix of B which consists of the first m rows and columns, m = 1, ..., n. Using Formula (7), we obtain

(B|B_mm)^{−1} = Ã( m+1 ... n | m+1 ... n ) = A( i_1 ... i_{n−m} | i_1 ... i_{n−m} ),    m = 1, ..., n − 1.

By the conditions, A( i_1 ... i_{n−m} | i_1 ... i_{n−m} ) is a P^2-matrix. Thus (B|B_mm)^{−1} is also a P^2-matrix. By Lemma 3 and Lemma 2 we obtain that B is also a P^2-matrix.

Now we shall prove Conditions (8). Applying Lemma 3 and Lemma 2 to all (B|B_mm)^{−1}, m = 1, ..., n − 1, each of which is a P^2-matrix, we obtain that B|B_mm is also a P^2-matrix for every m = 1, ..., n − 1. Now let us fix m, 1 ≤ m ≤ n − 1, and consider the matrices B^(j)[1, ..., m] for each positive integer j, m < j ≤ n. The matrix B^(j)[1, ..., m] consists of all the minors of the form

B[ 1 ... m i_{m+1} ... i_j | 1 ... m k_{m+1} ... k_j ],

where m + 1 ≤ i_{m+1} < ... < i_j ≤ n, m + 1 ≤ k_{m+1} < ... < k_j ≤ n. Applying the Sylvester determinant identity, we obtain

(B|B_mm)[ l_1 ... l_{j−m} | r_1 ... r_{j−m} ] ( B[ 1 ... m | 1 ... m ] )^{j−m} = ( B[ 1 ... m | 1 ... m ] )^{j−m−1} B[ 1 ... m l_1 ... l_{j−m} | 1 ... m r_1 ... r_{j−m} ].

Thus

(B|B_mm)[ l_1 ... l_{j−m} | r_1 ... r_{j−m} ] = B[ 1 ... m l_1 ... l_{j−m} | 1 ... m r_1 ... r_{j−m} ] / B[ 1 ... m | 1 ... m ],

and we obtain the following equality for the compound matrices:

(B|B_mm)^{(j−m)} = B^(j)[1, ..., m] / B[ 1 ... m | 1 ... m ].    (9)

Applying the Cauchy–Binet formula,

( (B|B_mm)^2 )^{(j−m)} = ( (B|B_mm)^{(j−m)} )^2 = ( B^(j)[1, ..., m] )^2 / ( B[ 1 ... m | 1 ... m ] )^2,    (10)

for all m = 1, ..., n and all j = m + 1, ..., n. Since B|B_mm is a P^2-matrix, all principal diagonal entries of ((B|B_mm)^2)^{(j−m)} are positive for all j = m + 1, ..., n. Since B is a P-matrix, we have B[ 1 ... m | 1 ... m ] > 0 for all m = 1, ..., n. Thus Equality (10) shows that all the principal diagonal entries of the matrix

( B^(j)[1, ..., m] )^2 = ( B[ 1 ... m | 1 ... m ] )^2 ( (B|B_mm)^2 )^{(j−m)}

are positive for all j = m + 1, ..., n. □

The proof of Carlson's theorem uses the fact that if A is a sign-symmetric P-matrix and D is an arbitrary positive diagonal matrix then the matrix DA is a P^2-matrix. However, the proof of our main result does not require (DA)^2 to be a P-matrix (or even a Q-matrix) for every positive diagonal matrix D. This condition is too strong. In the proof below, we use a much weaker condition: under certain assumptions on a P^2-matrix A, there is at least one stabilization matrix D^0_(A) such that (tI + (1 − t)D^0_(A))A is a Q^2-matrix for every t ∈ [0, 1] (note that here we multiply A by positive diagonal matrices of a very special form).

Proof of Theorem 14. Applying Lemma 6 to A, we find a permutation matrix P_θ and construct a P^2-matrix B = P_θ^{−1} A^{−1} P_θ which satisfies the following conditions: all principal diagonal entries of (B^(j)[1, ..., m])^2 are positive for all m = 1, ..., j and all j = 1, ..., n. Since B satisfies the conditions of Lemma 5, it is stabilizable, and we construct a stabilization matrix D^0_(B) such that D^0_t B = (tI + (1 − t)D^0_(B))B is a Q^2-matrix for all t ∈ [0, 1].

Now we apply to B the reasoning of the proof of Carlson's theorem (see [4]). Since D^0_(B) is a stabilization matrix for B, all the eigenvalues of the matrix D^0_(B) B have positive real parts (in fact, they are all positive). Let us consider (D^0_t B)^2, which is a Q-matrix for every t ∈ [0, 1] (as was shown above). Then, applying Corollary 1 (Hershkowitz), we obtain that (D^0_t B)^2 cannot have nonpositive real eigenvalues. Since the eigenvalues of (D^0_t B)^2 are just the squares of the eigenvalues of D^0_t B, we obtain that D^0_t B cannot have eigenvalues on the imaginary axis for any t ∈ [0, 1]. For t = 0, we have D^0_t B = D^0_(B) B, and all of its eigenvalues lie in the right half of the complex plane (they are all positive). Since the eigenvalues depend continuously on t, the eigenvalues of D^0_t B cannot cross the imaginary axis for any t ∈ [0, 1]. So they must stay in the right half of the complex plane. Thus, putting t = 1, we obtain that B is positive stable. Taking into account that B = P_θ^{−1} A^{−1} P_θ and that the eigenvalues of B are just the inverses of the eigenvalues of A, we conclude that A is also positive stable. □

Note that, applying Corollary 1 to the matrices (D^0_t B)^2, t ∈ [0, 1], we can localize the spectrum of the initial matrix A more precisely: λ ∈ σ(A) implies

|arg(λ)| < π/2 − π/(2n).

5. Examples and applications

Theorem 14 implies Theorem 6 (Carlson) and Theorem 12 (Tang et al.).

Proof of Theorem 6. If A is a sign-symmetric P-matrix then all its principal submatrices are also sign-symmetric P-matrices. It is easy to see that A is a P^2-matrix and all its principal submatrices are also P^2-matrices. Thus A satisfies the conditions of Theorem 14. Applying Theorem 14, we get that A is positive stable. □

Proof of Theorem 12. For the proof, it is enough to show that A^2 is a P-matrix and that A has a nested sequence of principal submatrices each of which is also a P^2-matrix. In this case, we can show even more: every principal submatrix of A is a P^2-matrix.

First, let us show that if A is strictly row square diagonally dominant for every order of minors then so is every principal submatrix of A. (The same is true for the column dominance.) In this case we have, by Definition 11,

(A[ α | α ])^2 > Σ_{α,β⊂[n], α≠β} (A[ α | β ])^2,

where α = (i_1, ..., i_j), 1 ≤ i_1 < ... < i_j ≤ n, β = (l_1, ..., l_j), 1 ≤ l_1 < ... < l_j ≤ n, j = 1, ..., n. Let us fix an arbitrary (n − 1) × (n − 1) submatrix A_k, obtained from A by deleting the row and the column with the number k, 1 ≤ k ≤ n. Since all the terms in the right-hand side of the above inequality are nonnegative, we obtain for j = 1, ..., n − 1:

Σ_{α,β⊂[n], α≠β} (A[ α | β ])^2 ≥ Σ_{α,β⊂[n]\{k}, α≠β} (A[ α | β ])^2.

Since the minors of A_k are exactly the minors A[ α | β ] with α, β ⊂ [n] \ {k}, it follows that A_k is strictly row square diagonally dominant for every order of minors 1, ..., n − 1, and this holds for each value of k, 1 ≤ k ≤ n.

Now let us show that both A^2 and A_k^2 are P-matrices. The proof follows the corresponding reasoning of Tang et al. (see [17, p. 27, proof of Theorem 3]). First, we obtain the estimate

A^2[ α | α ] = Σ_{β⊂[n]} A[ α | β ] A[ β | α ] = (A[ α | α ])^2 + Σ_{α,β⊂[n], α≠β} A[ α | β ] A[ β | α ] >

[by the condition of square diagonal dominance]

Σ_{α,β⊂[n], α≠β} ( (A[ α | β ])^2 + A[ α | β ] A[ β | α ] ) =
(1/2) Σ_{α,β⊂[n], α≠β} ( (A[ α | β ])^2 + 2 A[ α | β ] A[ β | α ] + (A[ β | α ])^2 ) =
(1/2) Σ_{α,β⊂[n], α≠β} ( A[ α | β ] + A[ β | α ] )^2 > 0.

Thus a strictly row square diagonally dominant P-matrix A is a P^2-matrix, as well as all its principal submatrices. Applying Theorem 14, we obtain that A is positive stable. □

The following matrix classes obviously satisfy the conditions of Theorem 14.

1. Hermitian positive definite matrices.
2. Strictly totally positive matrices (an n × n matrix A is called strictly totally positive if A is (entry-wise) positive and its jth compound matrices A^(j) are also positive for all j = 2, ..., n).
3. Strictly totally J-sign-symmetric matrices (for the definition and properties see [15]).

Acknowledgments

The research leading to these results was carried out at Technische Universität Berlin and has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013)/ERC grant agreement No. 259173. The research was also supported by the National Science Foundation of China (grant number 11550110187). The author thanks Dr. Michal Wojtylak and Alexander Dyachenko for their valuable comments.


References

[1] C.S. Ballantine, Stabilization by a diagonal matrix, Proc. Amer. Math. Soc. 25 (1970) 728–734.
[2] D. Carlson, T.L. Markham, Schur complements of diagonally dominant matrices, Czechoslovak Math. J. 29 (1979) 246–251.
[3] D. Crabtree, E. Haynsworth, An identity for the Schur complement of a matrix, Proc. Amer. Math. Soc. 22 (1969) 364–366.
[4] D. Carlson, A class of positive stable matrices, J. Res. Natl. Bur. Stand. Sect. B 78B (1974) 1–2.
[5] M. Fiedler, Additive compound matrices and an inequality for eigenvalues of symmetric stochastic matrices, Czechoslovak Math. J. 24 (1974) 392–402.
[6] M. Fiedler, V. Pták, On matrices with non-positive off-diagonal elements and positive principal minors, Czechoslovak Math. J. 12 (87) (1962) 382–400.
[7] M.E. Fisher, A.T. Fuller, On the stabilization of matrices and the convergence of linear iterative processes, Math. Proc. Cambridge Philos. Soc. 54 (1958) 417–425.
[8] I.M. Glazman, Yu.I. Liubich, Finite-Dimensional Linear Analysis: A Systematic Presentation in Problem Form, MIT Press, 1974.
[9] A. Grothendieck, La théorie de Fredholm, Bull. Soc. Math. France 84 (1956) 319–384.
[10] D.A. Grundy, C.R. Johnson, D.D. Olesky, P. van den Driessche, Products of M-matrices and nested sequences of principal minors, Electron. J. Linear Algebra 16 (2007) 380–388.
[11] D. Hershkowitz, On the spectra of matrices having nonnegative sums of principal minors, Linear Algebra Appl. 55 (1983) 81–86.
[12] D. Hershkowitz, C.R. Johnson, Spectra of matrices with P-matrix powers, Linear Algebra Appl. 80 (1986) 159–171.
[13] D. Hershkowitz, N. Keller, Positivity of principal minors, sign symmetry and stability, Linear Algebra Appl. 364 (2003) 105–124.
[14] R.B. Kellogg, On complex eigenvalues of M and P matrices, Numer. Math. 19 (1972) 170–175.
[15] O.Y. Kushel, Cone-theoretic generalization of total positivity, Linear Algebra Appl. 436 (2012) 537–560.
[16] A. Pinkus, Totally Positive Matrices, Cambridge University Press, 2010.
[17] A.K. Tang, A. Simsek, A. Ozdaglar, D. Acemoglu, On the stability of P-matrices, Linear Algebra Appl. 426 (2007) 22–32.
[18] M. Tsatsomeros, Generating and detecting matrices with positive principal minors, Asian Inf.-Sci.-Life 1 (2002) 115–132.