Perron spectratopes and the real nonnegative inverse eigenvalue problem

Linear Algebra Appl. 493 (2016) 281–300

Charles R. Johnson a, Pietro Paparella b,*

a Department of Mathematics, College of William & Mary, Williamsburg, VA 23187-8795, USA
b Division of Engineering and Mathematics, University of Washington Bothell, Bothell, WA 98011-8246, USA

Article info

Article history: Received 22 August 2015; Accepted 20 November 2015; Available online 17 December 2015; Submitted by R. Brualdi

MSC: 15A18; 15B48; 15A29; 05B20; 05E30

Keywords: Perron spectracone; Perron spectratope; Real nonnegative inverse eigenvalue problem; Hadamard matrix; Association scheme; Relative gain array

Abstract

Call an n-by-n invertible matrix S a Perron similarity if there is a real non-scalar diagonal matrix D such that SDS^{-1} is entrywise nonnegative. We give two characterizations of Perron similarities and study the polyhedra C(S) := {x ∈ R^n : SD_xS^{-1} ≥ 0, D_x := diag(x)} and P(S) := {x ∈ C(S) : x_1 = 1}, which we call the Perron spectracone and Perron spectratope, respectively. The set of all normalized real spectra of diagonalizable nonnegative matrices may be covered by Perron spectratopes, so that enumerating them is of interest. The Perron spectracone and spectratope of Hadamard matrices are of particular interest and tend to have large volume. For the canonical Hadamard matrix (as well as other matrices), the Perron spectratope coincides with the convex hull of its rows. In addition, we provide a constructive version of a result due to Fiedler [9, Theorem 2.4] for Hadamard orders, and a constructive version of the Boyle–Handelman theorem [2, Theorem 5.1] for Sule˘ımanova spectra.

© 2015 Elsevier Inc. All rights reserved.

* Corresponding author.
E-mail addresses: [email protected] (C.R. Johnson), [email protected] (P. Paparella).
URL: http://faculty.washington.edu/pietrop/ (P. Paparella).
http://dx.doi.org/10.1016/j.laa.2015.11.033


1. Introduction

The real nonnegative inverse eigenvalue problem (RNIEP) is to determine which sets of n real numbers occur as the spectrum of an n-by-n nonnegative matrix. The RNIEP is unsolved for n ≥ 5, and the following variations, which are also unsolved for n ≥ 5, are relevant to this work (additional background information on the RNIEP can be found in, e.g., [5,18,20]):

• Diagonalizable RNIEP (D-RNIEP): Determine which sets of n real numbers occur as the spectrum of an n-by-n diagonalizable nonnegative matrix.
• Symmetric NIEP (SNIEP): Determine which sets of n real numbers occur as the spectrum of an n-by-n symmetric nonnegative matrix.
• Doubly stochastic RNIEP (DS-RNIEP): Determine which sets of n real numbers occur as the spectrum of an n-by-n doubly stochastic matrix.
• Doubly stochastic SNIEP (DS-SNIEP): Determine which sets of n real numbers occur as the spectrum of an n-by-n symmetric doubly stochastic matrix.

The RNIEP and the SNIEP are equivalent when n ≤ 4 and distinct otherwise (see [13]). Notice that there is no distinction between the SNIEP and the D-SNIEP since every symmetric matrix is diagonalizable.

The set σ = {λ_1, ..., λ_n} ⊂ R is said to be realizable if there is an n-by-n nonnegative matrix with spectrum σ. If A is a nonnegative matrix that realizes σ, then A is called a realizing matrix for σ. It is well-known that if σ is realizable, then

$$s_k(\sigma) := \sum_{i=1}^n \lambda_i^k \ge 0, \quad \forall k \in \mathbb{N}, \tag{1.1}$$

$$\rho(\sigma) := \max_i |\lambda_i| \in \sigma, \tag{1.2}$$

$$s_k^m(\sigma) \le n^{m-1} s_{km}(\sigma), \quad \forall k, m \in \mathbb{N}. \tag{1.3}$$

Condition (1.3), known as the J-LL condition, was proven independently by Johnson in [12], and by Loewy and London in [17].

In this paper, we introduce several polyhedral sets whose points correspond to spectra of entrywise nonnegative matrices. In particular, given a nonsingular matrix S, we define several polytopic subsets of the polyhedral cone C(S) := {x ∈ R^n : SD_xS^{-1} ≥ 0, D_x := diag(x)} and use them to verify the known necessary and sufficient conditions for the RNIEP and SNIEP when n ≤ 4. For a nonsingular matrix S, we provide a necessary and sufficient condition such that C(S) is nontrivial. For every n ≥ 1, we characterize C(H_n), where H_n is the Walsh matrix of order 2^n, which resolves a problem posed in [7, p. 48]. Our proof method yields a highly-structured (2^n − 1)-class (commutative) association scheme and, as a consequence, a highly structured Bose–Mesner algebra. In addition, we provide a constructive version of a result due to Fiedler [9, Theorem 2.4] for Hadamard orders, and a constructive version of [2, Theorem 5.1] for Sule˘ımanova spectra.

The introduction of these convex sets extends techniques and ideas found in (e.g.) [6–8,21–23,25] and provides a framework for investigating the aforementioned problems.
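As a quick illustration (not part of the original article), the necessary conditions (1.1)–(1.3) are easy to test numerically for a candidate real spectrum; the following Python sketch does so for hypothetical spectra, truncating the infinite family of exponents at a small bound.

```python
# Illustrative only: numerically check the necessary conditions (1.1)-(1.3)
# for a candidate real spectrum sigma, up to a chosen exponent bound.
import numpy as np

def satisfies_necessary_conditions(sigma, k_max=6):
    """Check (1.1), (1.2), and the J-LL condition (1.3) for exponents up to k_max."""
    sigma = np.asarray(sigma, dtype=float)
    n = sigma.size
    s = lambda k: np.sum(sigma ** k)                        # power sums s_k(sigma)
    if not np.isclose(np.max(np.abs(sigma)), np.max(sigma)):
        return False                                        # (1.2): rho(sigma) must belong to sigma
    for k in range(1, k_max + 1):
        if s(k) < -1e-12:                                   # (1.1): s_k(sigma) >= 0
            return False
        for m in range(1, k_max + 1):
            if s(k) ** m > n ** (m - 1) * s(k * m) + 1e-9:  # (1.3): s_k^m <= n^(m-1) s_{km}
                return False
    return True

print(satisfies_necessary_conditions([1, 0.5, -0.4, -0.6]))   # True (hypothetical spectrum)
print(satisfies_necessary_conditions([1, -0.7, -0.7]))        # False: s_1(sigma) < 0
```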


2. Notation and background

Denote by N the set of natural numbers and by N_0 the set N ∪ {0}. For n ∈ N, the set {1, ..., n} ⊂ N is denoted by ⟨n⟩. If σ = {λ_1, ..., λ_n} ⊂ R, then σ is called normalized if λ_1 = 1 ≥ λ_2 ≥ · · · ≥ λ_n.

The set of m-by-n matrices with entries from a field F (in this paper, F is either C or R) is denoted by M_{m,n}(F) (when m = n, M_{n,n}(F) is abbreviated to M_n(F)). The set of all n-by-1 column vectors is identified with the set of all ordered n-tuples with entries in F and is thus denoted by F^n. The set of nonsingular matrices over F is denoted by GL_n(F), and the set of n-by-n orthogonal matrices is denoted by O(n).

For A = [a_{ij}] ∈ M_n(C), the transpose of A is denoted by A^⊤; the spectrum of A is denoted by σ = σ(A); the spectral radius of A is denoted by ρ = ρ(A); and Diag(A) denotes the n-by-1 column vector [a_{11} · · · a_{nn}]^⊤. Given x ∈ F^n, x_i denotes the ith entry of x and D_x ∈ M_n(F) denotes the diagonal matrix whose (i,i)-entry is x_i. Notice that for scalars α, β ∈ F and vectors x, y ∈ F^n, D_{αx+βy} = αD_x + βD_y.

The Hadamard product of A, B ∈ M_{m,n}(F), denoted by A ∘ B, is the m-by-n matrix whose (i,j)-entry is a_{ij}b_{ij}. The direct sum of A_1, ..., A_k, where A_i ∈ M_{n_i}(C), denoted by A_1 ⊕ · · · ⊕ A_k, or $\bigoplus_{i=1}^k A_i$, is the n-by-n matrix

$$\begin{bmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_k \end{bmatrix},$$

where $n = \sum_{i=1}^k n_i$. The Kronecker product of A ∈ M_{m,n}(F) and B ∈ M_{p,q}(F), denoted by A ⊗ B, is the mp-by-nq matrix defined by

$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix}.$$

For S ∈ GL_n(C), the relative gain array of S, denoted by Φ(S), is defined by Φ(S) = S ∘ S^{−⊤}, where S^{−⊤} := (S^{−1})^⊤ = (S^⊤)^{−1}. It is well-known (see, e.g., [15]) that if A = SD_xS^{−1}, then

$$\Phi(S)x = \operatorname{Diag}(A). \tag{2.1}$$
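The identity (2.1) is easy to confirm numerically. The sketch below (illustrative only; the matrix S and the vector x are arbitrary choices, not taken from the paper) checks that the diagonal of SD_xS^{-1} agrees with Φ(S)x.

```python
# Illustrative sketch of the relative gain array Phi(S) = S o S^{-T} and identity (2.1).
import numpy as np

S = np.array([[1.0, 1.0, 0.0],
              [1.0, -1.0, 1.0],
              [1.0, -1.0, -1.0]])          # a hypothetical invertible matrix
x = np.array([1.0, 0.3, -0.5])

Phi = S * np.linalg.inv(S).T               # Hadamard product S o (S^{-1})^T
A = S @ np.diag(x) @ np.linalg.inv(S)      # A = S D_x S^{-1}

print(np.allclose(np.diag(A), Phi @ x))    # True, verifying (2.1)
```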

For the following, the size of each matrix will be clear from the context in which it appears:

• I denotes the identity matrix;
• e_i denotes the ith column of I;
• e denotes the all-ones vector;
• J denotes the all-ones matrix, i.e., J = ee^⊤; and
• K denotes the exchange matrix, i.e., K = [e_n | · · · | e_1].

A matrix A is called symmetric, persymmetric, or centrosymmetric if A − A^⊤ = 0, AK − KA^⊤ = 0, or AK − KA = 0, respectively. A matrix possessing any two of these symmetry conditions can be shown to possess the third; thus we define a trisymmetric matrix as any matrix possessing two of the aforementioned properties.

If A is an entrywise nonnegative (positive) matrix, then we write A ≥ 0 (A > 0, respectively). An n-by-n nonnegative matrix A is called (row) stochastic if $\sum_{j=1}^n a_{ij} = 1$ for all i ∈ ⟨n⟩; column stochastic if $\sum_{i=1}^n a_{ij} = 1$ for all j ∈ ⟨n⟩; and doubly stochastic if it is row stochastic and column stochastic.

We recall the well-known Perron–Frobenius theorem (see, e.g., [1], [11, Chapter 8], or [20]).

Theorem 2.1 (Perron–Frobenius). If A ≥ 0, then ρ ∈ σ, and there is a nonnegative vector x such that Ax = ρx.

Remark 2.2. The scalar ρ is called the Perron root of A. When $\sum_{i=1}^n x_i = 1$, x is called the (right) Perron vector of A and the pair (ρ, x) is called the Perron eigenpair of A. Since the nonnegativity of A is necessary and sufficient for the nonnegativity of A^⊤, if A ≥ 0, then, following Theorem 2.1, there is a nonnegative vector y such that y^⊤A = ρy^⊤. When $\sum_{i=1}^n y_i = 1$, y is called the left Perron vector of A.

Given vectors v_1, ..., v_n ∈ R^n and scalars α_1, ..., α_n ∈ R, the linear combination $\sum_{i=1}^n \alpha_i v_i$ is called a conical combination if α_i ≥ 0 for every i ∈ ⟨n⟩, and a convex combination if, in addition, $\sum_{i=1}^n \alpha_i = 1$. The conical hull of the vectors v_1, ..., v_n, denoted by coni(v_1, ..., v_n), is the set of all conical combinations of the vectors, and the convex hull, denoted by conv(v_1, ..., v_n), is the set of all convex combinations of the vectors.

Given A ∈ M_{m,n}(R) and b ∈ R^m, the polyhedron determined by A and b is the set P(A, b) := {x ∈ R^n : Ax ≤ b}. When b = 0, P(A, 0) is called the polyhedral cone determined by A and is denoted by C(A). Lastly, recall that a polytope is a bounded polyhedron. We say that S ⊆ R^n is polyhedral if S = P(A, b) for some m × n matrix A and b ∈ R^m.

3. Spectrahedral sets and Perron similarities

Definition 3.1. For S ∈ GL_n(R), let C(S) := {x ∈ R^n : SD_xS^{−1} ≥ 0} and A(S) := {A ∈ M_n(R) : A = SD_xS^{−1}, x ∈ C(S)}.

Remark 3.2. Since SIS^{−1} = I ≥ 0 for every invertible matrix S, it follows that the sets C(S) and A(S) are always nonempty. Specifically, coni(e) ⊆ C(S) for every invertible


matrix S. In the sequel, we will state a necessary and sufficient condition on S such that coni(e) ⊂ C(S). Moreover, if α, β ≥ 0 and x, y ∈ C(S), then αx + βy ∈ C(S), so that C(S) is a convex cone. We refer to C(S) as the Perron spectracone of S.

The convex cone A(S) is a nonnegative commutative algebra. In Section 5, we will show that if H_n is the Walsh matrix of order 2^n, then A(H_n) is a nonnegative Bose–Mesner algebra.

Before cataloging basic properties of the spectracone, we require the following lemma.

Lemma 3.3. If P is a permutation matrix and x ∈ C^n, then PD_xP^⊤ = D_y, where y = Px.

Proof. Because a permutation similarity effects a simultaneous permutation of the rows and columns of a matrix, it follows that PD_xP^⊤ is diagonal, say D_y. Following (2.1), y = Φ(P)x = (P ∘ (P^{−1})^⊤)x = Px. □

Proposition 3.4. If S ∈ GL_n(R) and P is a permutation matrix, then

(i) C(S) is a polyhedral cone;
(ii) C(SP) = P^⊤C(S) := {y ∈ R^n : y = P^⊤x, x ∈ C(S)};
(iii) C(PS) = C(S);
(iv) C(D_vS) = C(S) for any v > 0;
(v) C(SD_v) = C(S) for any nonzero v ∈ R^n; and
(vi) C(S) = C((S^⊤)^{−1}).

Proof. The proofs of parts (iii)–(vi) are straightforward exercises; thus, we provide proofs for parts (i) and (ii).

(i) The matricial inequality SD_xS^{−1} ≥ 0 specifies n² linear homogeneous inequalities in the variables x_1, ..., x_n; specifically, if S = [s_{ij}] and S^{−1} = [t_{ij}], then the (i,j)-entry of SD_xS^{−1} is $\sum_{k=1}^n s_{ik}t_{kj}x_k$. If vec(·) denotes columnwise vectorization, then

$$SD_xS^{-1} \ge 0 \iff \operatorname{vec}\left(SD_xS^{-1}\right) \ge 0 \iff -\begin{bmatrix} s_1 \circ t_1 \\ \vdots \\ s_n \circ t_1 \\ s_1 \circ t_2 \\ \vdots \\ s_n \circ t_2 \\ \vdots \\ s_1 \circ t_n \\ \vdots \\ s_n \circ t_n \end{bmatrix} x \le 0, \tag{3.1}$$

where s_i and t_i denote the ith row of S and the ith column of S^{−1}, respectively.


(ii) Following Lemma 3.3,

$$y \in \mathcal{C}(SP) \iff SPD_yP^\top S^{-1} \ge 0 \iff SD_xS^{-1} \ge 0,\ x = Py \iff x \in \mathcal{C}(S) \iff y \in P^\top\mathcal{C}(S). \qquad \Box$$
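The description (3.1) translates directly into code. The following sketch (not part of the paper) stacks the Hadamard products s_i ∘ t_j into a single matrix G, so that x ∈ C(S) exactly when Gx ≥ 0, and compares the result against a direct computation for S = H_1 of Section 5.

```python
# Illustrative sketch of the polyhedral description (3.1) of the Perron spectracone.
import numpy as np

def spectracone_inequalities(S):
    Sinv = np.linalg.inv(S)
    n = S.shape[0]
    rows = []
    for j in range(n):            # column index of S^{-1} (columnwise vectorization order)
        for i in range(n):        # row index of S
            rows.append(S[i, :] * Sinv[:, j])   # the row s_i o t_j of length n
    return np.array(rows)         # an n^2-by-n matrix G with: x in C(S) iff G x >= 0

S = np.array([[1.0, 1.0], [1.0, -1.0]])         # H_1, as in Section 5
G = spectracone_inequalities(S)
x = np.array([1.0, 0.25])
print(np.all(G @ x >= 0), np.all(S @ np.diag(x) @ np.linalg.inv(S) >= -1e-12))  # True True
```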

Let B^n := {x ∈ R^n : ‖x‖_∞ ≤ 1}. For k ∈ ⟨n⟩, let P_k be the (n − 1)-by-n matrix obtained by deleting the kth row of I, and define π_k : R^n → R^{n−1} by π_k(x) = P_kx.

Definition 3.5. For S ∈ GL_n(R), let W(S) := C(S) ∩ B^n, P(S) := {x ∈ C(S) : x_1 = 1}, and P¹(S) := {y ∈ R^{n−1} : y = π_1(x), x ∈ P(S)}.

Remark 3.6. Since SIS^{−1} = I ≥ 0 for every invertible matrix S, it follows that the sets W(S) and P(S) are always nonempty; if n ≥ 2, then P¹(S) is always nonempty.

Proposition 3.7. If S ∈ GL_n(R), then the sets W(S), P(S), and P¹(S) are polytopes.

Proof. A polyhedral description of W(S) is obtained by appending the inequalities

$$x_i \le 1,\ i \in \langle n\rangle, \qquad -x_i \le 1,\ i \in \langle n\rangle \tag{3.2}$$

to (3.1). Since W(S) ⊂ B^n, it follows that it is bounded and hence a polytope.

Similarly, appending the inequalities x_1 ≤ 1 and −x_1 ≤ −1 to (3.1) and (3.2) yields a half-space description of P(S). Since P(S) ⊂ B^n, it follows that it is bounded and hence a polytope.

It can be shown via Fourier–Motzkin elimination that the projection of a polyhedron is a polyhedron (see, e.g., [4] and references therein). Thus, P¹(S) is a polytope. □

Remark 3.8. For S ∈ GL_n(R), we refer to P(S) as the Perron spectratope of S.

Proposition 3.9. If S = T ⊕ U ∈ GL_n(R), then

(i) C(S) = C(T) × C(U);
(ii) P(S) = P(T) × W(U); and
(iii) P¹(S) = P¹(T) × W(U).

Proof. All three parts are straightforward exercises. □

Recall that a scalar matrix is any matrix of the form A = αI, α ∈ F. Let S ∈ GL_n(R) and suppose there is a real diagonal matrix D and a nonnegative, nonscalar matrix A such


that A = SDS^{−1}. Following Theorem 2.1, there is an i ∈ ⟨n⟩ such that Se_i and e_i^⊤S^{−1} are both nonnegative (or both nonpositive). In consideration of part (v) of Proposition 3.4, we may assume that they are both nonnegative. This motivates the following definition.

Definition 3.10. We call an invertible matrix S a Perron similarity if there is an i ∈ ⟨n⟩ such that Se_i and e_i^⊤S^{−1} are nonnegative.

Given S ∈ GL_n(R), it is natural to determine necessary and sufficient conditions so that S is a Perron similarity; to that end, we require the following theorem.

Theorem 3.11. Let

$$S = \begin{bmatrix} s_1^\top \\ \vdots \\ s_n^\top \end{bmatrix} \in GL_n(\mathbb{R}),$$

where s_k ∈ R^n for every k ∈ ⟨n⟩. If y^⊤ := e_i^⊤S^{−1}, then y ≥ 0 if and only if e_i ∈ coni(s_1, ..., s_n). Moreover, y > 0 if and only if e_i ∈ int(coni(s_1, ..., s_n)).

Proof. If y ≥ 0, then

$$e_i = S^\top y = \sum_{k=1}^n y_k s_k \in \operatorname{coni}(s_1, \ldots, s_n). \tag{3.3}$$

If y > 0, then (3.3) implies e_i ∈ int(coni(s_1, ..., s_n)).

Conversely, if e_i ∈ coni(s_1, ..., s_n), then there are nonnegative scalars λ_1, ..., λ_n such that

$$e_i = \sum_{k=1}^n \lambda_k s_k = S^\top\lambda,$$

where λ = [λ_1 · · · λ_n]^⊤. By hypothesis, S^⊤y = e_i, and since S^⊤ is invertible, it follows that y = λ ≥ 0. Lastly, if e_i ∈ int(coni(s_1, ..., s_n)), then y = λ > 0. □

Corollary 3.12. Let S ∈ GL_n(R) and suppose that

$$S = \begin{bmatrix} s_1^\top \\ \vdots \\ s_n^\top \end{bmatrix} \quad \text{and} \quad (S^{-1})^\top = \begin{bmatrix} t_1^\top \\ \vdots \\ t_n^\top \end{bmatrix}.$$

Then S is a Perron similarity if and only if there is an i ∈ ⟨n⟩ such that e_i ∈ coni(s_1, ..., s_n) and e_i ∈ coni(t_1, ..., t_n).
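Definition 3.10 also gives a simple computational test. The sketch below (illustrative only; the tolerance is ad hoc) scans the columns of S and the corresponding rows of S^{-1}, allowing the sign flip discussed before Definition 3.10, and recovers the conclusion of Example 3.15 below.

```python
# Illustrative check of Definition 3.10: S is a Perron similarity if some column
# S e_i and the matching row e_i^T S^{-1} are both nonnegative (or both nonpositive,
# by the discussion preceding Definition 3.10).
import numpy as np

def is_perron_similarity(S, tol=1e-12):
    Sinv = np.linalg.inv(S)
    for i in range(S.shape[0]):
        col, row = S[:, i], Sinv[i, :]
        if (np.all(col >= -tol) and np.all(row >= -tol)) or \
           (np.all(col <= tol) and np.all(row <= tol)):
            return True
    return False

H1 = np.array([[1.0, 1.0], [1.0, -1.0]])
S_bad = np.array([[1.0, 0.5], [1.0, 1.0]])                     # matrix of Example 3.15
print(is_perron_similarity(H1), is_perron_similarity(S_bad))   # True False
```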


Corollary 3.13. If Q ∈ O(n), then Q is a Perron similarity if and only if there is an i ∈ ⟨n⟩ such that Qe_i ≥ 0.

Remark 3.14. It is well-known that a nonnegative matrix A is stochastic if and only if Ae = e [11, §8.7, Problem 4]. Thus, A is doubly stochastic if and only if Ae = e and e^⊤A = e^⊤. If there are real scalars α and β such that Se_i = αe and e_i^⊤S^{−1} = βe^⊤, then P(S) contains spectra that are doubly stochastically realizable; however, notice that the converse does not hold: for example, if

$$S = \begin{bmatrix} 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

and v = [1 λ 1]^⊤, where λ ∈ [−1, 1], then M = SD_vS^{−1} is doubly stochastic. If Q ∈ O(n), then P(Q) contains spectra that are symmetrically, doubly stochastically realizable if and only if Qe_i = e and e_i^⊤Q = e^⊤.

Example 3.15. Although the nonsingular matrix

$$S = \begin{bmatrix} 1 & 1/2 \\ 1 & 1 \end{bmatrix}$$

has two positive columns, it is not a Perron similarity; indeed, note that

$$S^{-1} = \begin{bmatrix} 2 & -1 \\ -2 & 2 \end{bmatrix}.$$

Remark 3.16. We briefly digress to make the following observation: recall that a nonsingular M-matrix is any matrix of the form S = αI − T, where T ≥ 0 and α > ρ(T). It is well-known that S is an M-matrix if and only if S^{−1} ≥ 0 (Plemmons lists forty characterizations in [24]). Thus, following Theorem 3.11, S is an M-matrix if and only if e_i ∈ coni(s_1, ..., s_n) for every i ∈ ⟨n⟩.

Corollary 3.17. If S ∈ GL_n(R), then C(S) \ coni(e) ≠ ∅ if and only if S is a Perron similarity.

Proof. If S is a Perron similarity, then there is an i ∈ ⟨n⟩ such that the vectors x := Se_i and y^⊤ := e_i^⊤S^{−1} are nonnegative. Thus, SD_{e_i}S^{−1} = xy^⊤ ≥ 0, so that e_i ∈ C(S).

Conversely, if C(S) \ coni(e) ≠ ∅, then there is a vector x ∉ coni(e) such that SD_xS^{−1} ≥ 0. The result now follows from Theorem 2.1. □

4. RNIEP and SNIEP for low dimensions

In this section, we verify the known necessary and sufficient conditions for the SNIEP and RNIEP when n ≤ 4.


Fig. 1. RNIEP & SNIEP for n = 2.

For n ≥ 2, notice that, following (1.1) and (1.2),

$$\bigcup_{S \in GL_n(\mathbb{R})} \mathcal{P}^1(S) \subseteq \mathcal{T}^{n-1},$$

where T^{n−1} := {x ∈ B^{n−1} : 1 + e^⊤x ≥ 0}. The region T^{n−1} is known as the trace-nonnegative polytope [16].

Theorem 4.1. If σ = {λ_1, ..., λ_n} and n ≤ 4, then σ is realizable if and only if σ satisfies (1.1) and (1.2). Furthermore, the realizing matrix can be taken to be symmetric.

Proof. We give a proof for 2 ≤ n ≤ 4, given that the result is trivial when n = 1 (however, note that the similarity H_0 := [1] yields all possible spectra).

Case n = 2. Fig. 1 depicts the spectrahedral sets for the matrix

$$H_1 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},$$

which are established in Theorem 5.2 and Corollary 5.4. This solves the SNIEP and RNIEP when n = 2, since P¹(H_1) = B¹ = T¹ and the realizing matrix is

$$\frac{1}{2}\begin{bmatrix} \lambda_1 + \lambda_2 & \lambda_1 - \lambda_2 \\ \lambda_1 - \lambda_2 & \lambda_1 + \lambda_2 \end{bmatrix}, \quad \lambda_1 \ge \lambda_2.$$

Case n = 3. Let

$$S := H_1 \oplus H_0 = \begin{bmatrix} 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{and} \quad P := \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$$


Fig. 2. RNIEP & SNIEP for n = 3.

For a ∈ [0, 1], let b := 1 − a and

$$S_a := \begin{bmatrix} 1 & 1 & 0 \\ 1 & -a & 1 \\ 1 & -a & -1 \end{bmatrix}.$$

Fig. 2 depicts the projected Perron spectratopes for the matrices S, SP, and S_a. A straightforward calculation reveals that if v = [1 x y]^⊤, then the matrix S_aD_vS_a^{−1} equals

$$\frac{1}{2(1+a)}\begin{bmatrix} 2(x + a) & 1 - x & 1 - x \\ 2a(1 - x) & a(x + y) + y + 1 & a(x - y) - y + 1 \\ 2a(1 - x) & a(x - y) - y + 1 & a(x + y) + y + 1 \end{bmatrix}.$$

Furthermore, if $u = [1\ \sqrt{2a}\ \sqrt{2a}]^\top$, then the matrix D_u^{−1}S_aD_vS_a^{−1}D_u equals

$$\frac{1}{2(1+a)}\begin{bmatrix} 2(x + a) & \sqrt{2a}\,(1 - x) & \sqrt{2a}\,(1 - x) \\ \sqrt{2a}\,(1 - x) & a(x + y) + y + 1 & a(x - y) - y + 1 \\ \sqrt{2a}\,(1 - x) & a(x - y) - y + 1 & a(x + y) + y + 1 \end{bmatrix};$$

thus, the realizing matrix can be taken to be symmetric.

Case n = 4. Without loss of generality, assume that λ_1 ≥ λ_2 ≥ λ_3 ≥ λ_4. If v := [λ_1 λ_4 λ_2 λ_3]^⊤ and S := H_1 ⊕ H_1, then

$$SD_vS^{-1} = \frac{1}{2}\begin{bmatrix} \lambda_1 + \lambda_4 & \lambda_1 - \lambda_4 & 0 & 0 \\ \lambda_1 - \lambda_4 & \lambda_1 + \lambda_4 & 0 & 0 \\ 0 & 0 & \lambda_2 + \lambda_3 & \lambda_2 - \lambda_3 \\ 0 & 0 & \lambda_2 - \lambda_3 & \lambda_2 + \lambda_3 \end{bmatrix}.$$


Fig. 3. P¹(H_2).

Thus, C(S) contains all spectra such that

(i) λ_1 ≥ λ_2 ≥ λ_3 ≥ λ_4 ≥ 0;
(ii) λ_1 ≥ λ_2 ≥ λ_3 ≥ 0 > λ_4; or
(iii) λ_1 ≥ λ_2 ≥ 0 > λ_3 ≥ λ_4, and λ_2 + λ_3 ≥ 0.

If H_2 := H_1 ⊗ H_1, then H_2D_vH_2^{−1} equals

$$\frac{1}{4}\begin{bmatrix} \lambda_1 + \lambda_4 + \lambda_2 + \lambda_3 & \lambda_1 - \lambda_4 + \lambda_2 - \lambda_3 & \lambda_1 + \lambda_4 - \lambda_2 - \lambda_3 & \lambda_1 - \lambda_4 - \lambda_2 + \lambda_3 \\ \lambda_1 - \lambda_4 + \lambda_2 - \lambda_3 & \lambda_1 + \lambda_4 + \lambda_2 + \lambda_3 & \lambda_1 - \lambda_4 - \lambda_2 + \lambda_3 & \lambda_1 + \lambda_4 - \lambda_2 - \lambda_3 \\ \lambda_1 + \lambda_4 - \lambda_2 - \lambda_3 & \lambda_1 - \lambda_4 - \lambda_2 + \lambda_3 & \lambda_1 + \lambda_4 + \lambda_2 + \lambda_3 & \lambda_1 - \lambda_4 + \lambda_2 - \lambda_3 \\ \lambda_1 - \lambda_4 - \lambda_2 + \lambda_3 & \lambda_1 + \lambda_4 - \lambda_2 - \lambda_3 & \lambda_1 - \lambda_4 + \lambda_2 - \lambda_3 & \lambda_1 + \lambda_4 + \lambda_2 + \lambda_3 \end{bmatrix}.$$

Thus, C(H_2) contains all spectra such that

(i) λ_1 ≥ λ_2 ≥ 0 > λ_3 ≥ λ_4, and λ_2 + λ_3 < 0; or
(ii) λ_1 ≥ 0 > λ_2 ≥ λ_3 ≥ λ_4.

Fig. 3 depicts the projected Perron spectratope of H_2, as established in Corollary 5.4. □
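The case analysis for n = 4 is readily automated. The following sketch (illustrative only; the spectrum is an arbitrary choice satisfying (1.1) and (1.2)) forms v = [λ_1 λ_4 λ_2 λ_3]^⊤ and tries the two Perron similarities H_1 ⊕ H_1 and H_2 = H_1 ⊗ H_1 used in the proof, returning whichever yields a nonnegative (and in both cases symmetric) matrix.

```python
# Illustrative sketch of the n = 4 construction in the proof of Theorem 4.1.
import numpy as np

H1 = np.array([[1.0, 1.0], [1.0, -1.0]])
S_direct = np.block([[H1, np.zeros((2, 2))], [np.zeros((2, 2)), H1]])   # H1 (+) H1
H2 = np.kron(H1, H1)                                                    # H1 (x) H1

def realize4(l1, l2, l3, l4):
    D = np.diag([l1, l4, l2, l3])                 # v = [l1, l4, l2, l3]
    for S in (S_direct, H2):
        A = S @ D @ np.linalg.inv(S)
        if np.all(A >= -1e-12):
            return A
    return None

A = realize4(1.0, 0.2, -0.5, -0.6)                # hypothetical spectrum, s_1 >= 0
print(np.allclose(sorted(np.linalg.eigvals(A).real), [-0.6, -0.5, 0.2, 1.0]))  # True
```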

Fig. 2 suggests that the trace-nonnegative polytope T² cannot be covered by countably many projected Perron spectratopes; this can be proven via the relative gain array.

Lemma 4.2. Suppose that S is a 3-by-3 Perron similarity. If x, y ∈ C(S) are such that e^⊤x = e^⊤y = 0, then x = y.


Proof. If x ≠ y, then rank(Φ(S)) = 1; hence

$$\Phi(S) = \begin{bmatrix} a & b & c \\ ka & kb & kc \\ \ell a & \ell b & \ell c \end{bmatrix}.$$

Since Φ(S) has row and column sums equal to one [15, §2, Observation 1], it follows that k = ℓ = 1 and a = b = c = 1/3; however, as noted in [15, §5, Example 2], the equation Φ(X) = (1/3)J has no real solutions. □

Corollary 4.3. If {S_α}_{α∈I} is a collection of invertible 3-by-3 matrices such that

$$\bigcup_{\alpha \in I} \mathcal{P}^1(S_\alpha) = \mathcal{T}^2,$$

then I is uncountable.

5. Perron spectratopes of Hadamard matrices

Recall that H = [h_{ij}] is a Hadamard matrix if h_{ij} ∈ {±1} and HH^⊤ = nI. If n is a positive integer such that there is a Hadamard matrix of order n, then n is called a Hadamard order. The longstanding Hadamard conjecture asserts that there is a Hadamard matrix of order 4k for every k ∈ N.

Let H_0 = [1], and for n ∈ N, let

$$H_n := H_1 \otimes H_{n-1} = \begin{bmatrix} H_{n-1} & H_{n-1} \\ H_{n-1} & -H_{n-1} \end{bmatrix} \in M_{2^n}(\mathbb{R}). \tag{5.1}$$

It is well-known that H_n is a Hadamard matrix for every n ∈ N_0. The construction given in (5.1) is called the Sylvester construction, and the matrix H_n, n ∈ N_0, is called the canonical Hadamard, or Walsh, matrix of order 2^n (for brevity, and given that Sylvester matrix is reserved for another matrix, we will use the latter term). Walsh matrices satisfy the following additional well-known properties:

(i) H_n^⊤ = H_n;
(ii) H_n^{−1} = 2^{−n}H_n;
(iii) tr(H_n) = 0, n ≥ 1;
(iv) e_1^⊤H_n = e^⊤;
(v) H_ne_1 = e; and
(vi) e^⊤H_n = 2^ne_1^⊤.

Lastly, note that Hn is a Perron similarity for every n.
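The Sylvester construction (5.1) and the properties (i)–(vi) can be spot-checked numerically; the following sketch (not part of the paper) does so for n = 3.

```python
# Illustrative sketch: build the Walsh matrix via (5.1) and verify properties (i)-(vi).
import numpy as np

def walsh(n):
    H = np.array([[1.0]])                                      # H_0
    for _ in range(n):
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)    # H_k = H_1 (x) H_{k-1}
    return H

n = 3
H = walsh(n)
N = 2 ** n
e, e1 = np.ones(N), np.eye(N)[:, 0]
checks = [
    np.allclose(H, H.T),                      # (i)   H_n is symmetric
    np.allclose(np.linalg.inv(H), H / N),     # (ii)  H_n^{-1} = 2^{-n} H_n
    np.isclose(np.trace(H), 0.0),             # (iii) trace zero for n >= 1
    np.allclose(H[0, :], e),                  # (iv)  e_1^T H_n = e^T
    np.allclose(H @ e1, e),                   # (v)   H_n e_1 = e
    np.allclose(e @ H, N * e1),               # (vi)  e^T H_n = 2^n e_1^T
]
print(all(checks))                            # True
```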


Proposition 5.1. Let

$$P_{11} := \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad P_{12} := \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$

For n ≥ 2, let

$$P_{nk} := \begin{cases} P_{11} \otimes P_{(n-1)k} = \begin{bmatrix} P_{(n-1)k} & 0 \\ 0 & P_{(n-1)k} \end{bmatrix} \in M_{2^n}(\mathbb{R}), & k \in \langle 2^{n-1}\rangle, \\[1.5ex] P_{12} \otimes P_{(n-1)\ell} = \begin{bmatrix} 0 & P_{(n-1)\ell} \\ P_{(n-1)\ell} & 0 \end{bmatrix} \in M_{2^n}(\mathbb{R}), & k \in \langle 2^n\rangle \setminus \langle 2^{n-1}\rangle, \end{cases}$$

where ℓ = k − 2^{n−1}. Then, for n ∈ N:

(i) P_{n1} = I;
(ii) P_{n2^n} = K;
(iii) P_{nk} is a trisymmetric permutation matrix for every k ∈ ⟨2^n⟩;
(iv) $\sum_{k=1}^{2^n} P_{nk} = J$;
(v) if 𝒫_n := {P_{n1}, ..., P_{n2^n}}, then {𝒫_n, ×} ≅ (Z_2)^n;
(vi) P_{nk}^2 = I;
(vii) if v^⊤ := e_k^⊤H_n, then P_{nk} = 2^{−n}H_nD_vH_n, for every k ∈ ⟨2^n⟩;
(viii) ⟨P_{nk}, P_{nℓ}⟩ := tr(P_{nk}^⊤P_{nℓ}) = 0 for every k ≠ ℓ; and
(ix) for every k, ℓ ∈ ⟨2^n⟩, there is a j ∈ ⟨2^n⟩ such that P_{nk}P_{nℓ} = P_{nℓ}P_{nk} = P_{nj}.

Proof. Parts (i)–(v) follow readily by induction on n; part (vi) follows from part (iii); part (viii) follows from parts (iii) and (iv); and part (ix) follows from parts (v) and (vii) (alternately, part (ix) follows from part (vii) and the fact that the rows of H_n form a group with respect to the Hadamard product).

For part (vii), we proceed by induction on n: for n = 1, the result follows by a direct computation. Assume that the result holds when n = m ≥ 1. If v ∈ R^{2^{m+1}}, and u and w are the vectors in R^{2^m} defined by

$$v = \begin{bmatrix} u \\ w \end{bmatrix}, \tag{5.2}$$

then D_v = D_u ⊕ D_w and

$$H_{m+1}D_vH_{m+1} = \begin{bmatrix} H_mD_{u+w}H_m & H_mD_{u-w}H_m \\ H_mD_{u-w}H_m & H_mD_{u+w}H_m \end{bmatrix}. \tag{5.3}$$

We distinguish the following cases:

(i) k ∈ ⟨2^m⟩. If v^⊤ := e_k^⊤H_{m+1}, then u = w = H_me_k. Following (5.3) and the induction hypothesis,

$$H_{m+1}D_vH_{m+1} = \begin{bmatrix} 2H_mD_uH_m & 0 \\ 0 & 2H_mD_uH_m \end{bmatrix} = \begin{bmatrix} 2 \cdot 2^mP_{mk} & 0 \\ 0 & 2 \cdot 2^mP_{mk} \end{bmatrix} = 2^{m+1}P_{(m+1)k},$$

and the result is established.

(ii) k ∈ ⟨2^{m+1}⟩ \ ⟨2^m⟩. If v^⊤ := e_k^⊤H_{m+1}, then u = −w = H_me_ℓ, where ℓ := k − 2^m. Following (5.3) and the induction hypothesis,

$$H_{m+1}D_vH_{m+1} = \begin{bmatrix} 0 & 2H_mD_uH_m \\ 2H_mD_uH_m & 0 \end{bmatrix} = \begin{bmatrix} 0 & 2 \cdot 2^mP_{m\ell} \\ 2 \cdot 2^mP_{m\ell} & 0 \end{bmatrix} = 2^{m+1}P_{(m+1)k},$$

and the result is established. □

Theorem 5.2. The Perron spectracone of the Walsh matrix of order 2^n is the conical hull of its rows.

Proof. For x ∈ C^{2^n}, let v^⊤ := x^⊤H_n, and M = M_x := 2^{−n}H_nD_vH_n. Since

$$v^\top = x^\top H_n = \left(\sum_{k=1}^{2^n} x_k e_k^\top\right)H_n = \sum_{k=1}^{2^n} x_k e_k^\top H_n = \sum_{k=1}^{2^n} x_k v_k^\top,$$

where v_k^⊤ := e_k^⊤H_n, it follows from part (vii) of Proposition 5.1 that

$$M = 2^{-n}H_nD_vH_n = \sum_{k=1}^{2^n} x_k\left(2^{-n}H_nD_{v_k}H_n\right) = \sum_{k=1}^{2^n} x_kP_{nk},$$

i.e., M ∈ span(𝒫_n). In view of parts (iii) and (iv) of Proposition 5.1, it follows that the kth entry of x appears exactly once in each row and column of M. Thus, M ≥ 0 if and only if x ≥ 0, i.e., if and only if v ∈ coni(v_1, ..., v_{2^n}). □

The following result is useful in quickly determining whether a vector belongs to C(H_n).


Parts (i), (iii), (iv), and (ix) of Proposition 5.1 demonstrate that Pn is a (2n −1)-class symmetric (and hence commutative) association scheme (see, e.g., the survey [19] and references therein). As a consequence, A(Hn ) is a nonnegative Bose–Mesner algebra, i.e., it is closed with respect to matrix transposition, matrix multiplication, and Hadamard product. A straightforward proof by induction shows that ⎤ x Pn1 .. ⎥ ⎢ Mx = ⎣ ⎦ = [ Pn1 x · · · Pn2k x ] . . x Pn2k ⎡

For example, if x ∈ C8 , then ⎡

x1 ⎢x ⎢ 2 ⎢ ⎢ x3 ⎢ ⎢ x4 Mx = ⎢ ⎢x ⎢ 5 ⎢ ⎢ x6 ⎢ ⎣ x7 x8

x2 x1 x4 x3

x3 x4 x1 x2

x4 x3 x2 x1

x5 x6 x7 x8

x6 x5 x8 x7

x7 x8 x5 x6

x6 x5 x8 x7

x7 x8 x5 x6

x8 x7 x6 x5

x1 x2 x3 x4

x2 x1 x4 x3

x3 x4 x1 x2

⎤ x8 x7 ⎥ ⎥ ⎥ x6 ⎥ ⎥ x5 ⎥ ⎥. x4 ⎥ ⎥ ⎥ x3 ⎥ ⎥ x2 ⎦ x1

Moreover, if x, y ∈ C, then Mx ◦ My = Mx◦y and Mx My = Mz , where ⎡ ⎢ ⎢ z=⎢ ⎣





⎤ x y ⎥ ⎢ x Pn2 y ⎥ ⎥ ⎢ ⎥ ⎥=⎢ ⎥. .. ⎦ ⎣ ⎦ .

x Pn1 y x Pn2 y .. . x Pn2k y

x Ky
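These closure properties are easy to verify numerically. The following sketch (illustrative only, with random vectors) checks M_x ∘ M_y = M_{x∘y} and M_xM_y = M_z for n = 3.

```python
# Illustrative sketch (n = 3, N = 8): closure of A(H_n) under the Hadamard and
# ordinary products, with M_x = 2^{-n} H_n D_v H_n and v^T = x^T H_n.
import numpy as np

def walsh(n):
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H

n, N = 3, 8
H = walsh(n)
M = lambda x: H @ np.diag(H @ x) @ H / N                       # M_x, as in Theorem 5.2
P = [H @ np.diag(H[k, :]) @ H / N for k in range(N)]           # P_{nk}, Prop. 5.1(vii)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(N), rng.standard_normal(N)
z = np.array([x @ P[k] @ y for k in range(N)])                 # z as displayed above

print(np.allclose(M(x) * M(y), M(x * y)), np.allclose(M(x) @ M(y), M(z)))  # True True
```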

Corollary 5.4. The Perron spectratope of the Walsh matrix of order 2^n is the convex hull of its rows.

The import of Corollary 5.4 can be viewed through the following lens: given an affinely independent set V = {v_1, ..., v_{n+1}} ⊂ R^n, recall that the n-simplex of V is the set S(V) := conv(V). It is well-known (see, e.g., [26]) that

$$\operatorname{Vol}(\mathcal{S}(V)) = \frac{1}{n!}\left|\det M\right|,$$

where

$$M = \begin{bmatrix} 1 & v_1^\top \\ \vdots & \vdots \\ 1 & v_{n+1}^\top \end{bmatrix}. \tag{5.4}$$


Thus,

$$\operatorname{Vol}(\mathcal{W}(H_n)) = \frac{2^{n2^{n-1}}}{(2^n)!} \quad \text{and} \quad \operatorname{Vol}\left(\mathcal{P}^1(H_n)\right) = \frac{2^{n2^{n-1}}}{(2^n - 1)!}.$$

We call a nonsingular matrix S a strong Perron similarity if there is a unique i ∈ ⟨n⟩ such that Se_i > 0 and e_i^⊤S^{−1} > 0. If S ∈ GL_n(R) is a strong Perron similarity, then, without loss of generality, S = [e s_2 · · · s_n], where ‖s_i‖_∞ = 1. Since a Hadamard matrix has maximal determinant among matrices whose entries are less than or equal to 1 in absolute value, it follows that if P¹(S) = S(s_2, ..., s_{2^n}), then

$$\operatorname{Vol}\left(\mathcal{P}^1(S)\right) \le \operatorname{Vol}\left(\mathcal{P}^1(H_n)\right).$$

It is an open question whether P¹(H_n) has maximal volume for all Perron spectratopes.

Moreover, Corollary 5.4 does not seem to hold for general Hadamard matrices. If H is a Hadamard matrix, then any matrix resulting from negating its rows or columns is also a Hadamard matrix. Indeed, if u denotes the ith column of H and w denotes its ith row, then the ith row and ith column of the Hadamard matrix Ĥ = D_{h_{ii}u}HD_w are positive. Thus, without loss of generality, we assume that the first row and column of any Hadamard matrix are positive and refer to such a matrix as a normalized Hadamard matrix.

It can be verified via the MATLAB command hadamard(12) that only the first row of the normalized Hadamard matrix of order twelve belongs to its Perron spectratope. However, it is clear that every row of a normalized Hadamard matrix is realizable, since every row (sans the first) contains an equal number of positive and negative entries and thus is realizable (after a permutation) by the Perron similarity $\bigoplus_{i=1}^{n/2} H_1$.

6. Sule˘ımanova spectra, the DS-RNIEP, and the DS-SNIEP

We begin with the following definition.

Definition 6.1. We call σ = {λ_1, ..., λ_n} ⊂ R a Sule˘ımanova spectrum if s_1(σ) ≥ 0 and σ contains exactly one positive value.

In [27], Sule˘ımanova stated that every such spectrum is realizable (for a proof via companion matrices, and references to other proofs, see Friedland [10]). Fiedler [9] proved that every Sule˘ımanova spectrum is symmetrically realizable. In [14], Johnson et al. posed the following.

Problem 6.2. If σ is a normalized Sule˘ımanova spectrum, is σ realizable by a doubly stochastic matrix?

We will show that for Hadamard orders, the answer is 'yes' and the realizing matrix can be taken to be symmetric.


Theorem 6.3. If H is a normalized Hadamard matrix of order n and σ = {λ_1, ..., λ_n} is a normalized Sule˘ımanova spectrum, then σ is realizable by a symmetric, doubly stochastic matrix.

Proof. It suffices to show that v := [1 λ_2 · · · λ_n]^⊤ ∈ P(H). For k ∈ ⟨n⟩, k ≠ 1, let D_k := e_1e_1^⊤ − e_ke_k^⊤ and M_k := n^{−1}HD_kH^⊤. Then

$$nM_k = H\left(e_1e_1^\top - e_ke_k^\top\right)H^\top = He_1(He_1)^\top - He_k(He_k)^\top = J - He_k(He_k)^\top.$$

Because the matrix He_k(He_k)^⊤ has entries in {±1}, the matrix J − He_k(He_k)^⊤ has entries in {0, 2}, i.e., M_k ≥ 0. Moreover, Diag(D_k) = e_1 − e_k, so that e_1 − e_k ∈ P(H). Since P(H) is convex and {e_1, e_1 − e_2, ..., e_1 − e_n} ⊂ P(H), it follows that conv(e_1, e_1 − e_2, ..., e_1 − e_n) ⊆ P(H).

If μ_1 := s_1(σ) and μ_k := −λ_k for k ≥ 2, then μ_k ≥ 0, $\sum_{k=1}^n \mu_k = 1$, and

$$v = e_1 + \sum_{k=2}^n \lambda_k e_k = e_1 + \sum_{k=2}^n (\lambda_k + \mu_k)e_1 + \sum_{k=2}^n \lambda_k e_k = \mu_1 e_1 + \sum_{k=2}^n \mu_k(e_1 - e_k).$$

Thus, v ∈ conv(e_1, e_1 − e_2, ..., e_1 − e_n) and the result is established. □

Remark 6.4. For any normalized Hadamard matrix H of order n, notice that

$$\mathcal{H}_n := \operatorname{conv}(e, e_1 - e_2, \ldots, e_1 - e_n) \subseteq \mathcal{P}(H).$$

Since

$$\frac{1}{n}\left(e + \sum_{k=2}^n (e_1 - e_k)\right) = e_1,$$

it follows that conv(e_1, e_1 − e_2, ..., e_1 − e_n) ⊂ ℋ_n ⊆ P(H). Thus,

$$\bigcap \mathcal{P}(H) = \mathcal{H}_n,$$

where the intersection is taken over every normalized Hadamard matrix H of order n.


In terms of the projected Perron spectratope, notice that

$$\pi_1(\mathcal{H}_n) = \operatorname{conv}(e, -e_2, \ldots, -e_n) \subseteq \mathcal{P}^1(H) \subset \mathcal{T}^{n-1}.$$

If

$$S = \begin{bmatrix} 1 & e^\top \\ e & -I \end{bmatrix} \in GL_n(\mathbb{R}), \quad n \ge 2,$$

then det S = (−1)^{n+1}n. Following (5.4),

$$\operatorname{Vol}\left(\mathcal{P}^1(H)\right) \ge \operatorname{Vol}(\pi_1(\mathcal{H}_n)) = \frac{n}{(n-1)!} = \frac{n^2}{n!}.$$

The following result is corollary to Corollary 5.4 and Theorem 6.3.

Corollary 6.5. If σ = {λ_1, ..., λ_{2^n}} is a normalized Sule˘ımanova spectrum, then σ is realizable by a trisymmetric doubly stochastic matrix.

In [9], Fiedler showed that every Sule˘ımanova spectrum is symmetrically realizable. However, his proof is by induction and therefore does not explicitly yield a realizing matrix (the computation of which is of interest for numerical purposes), which is common for many NIEP results. Indeed, according to Chu:

    Very few of these theoretical results are ready for implementation to actually compute [the realizing] matrix. The most constructive result we have seen is the sufficient condition studied by Soules [25]. But the condition there is still limited because the construction depends on the specification of the Perron vector – in particular, the components of the Perron eigenvector need to satisfy certain inequalities in order for the construction to work [3, p. 18].

Thus, Theorem 6.3 and Corollary 6.5 are constructive versions of Fiedler's result for Hadamard powers.

Corollary 6.6. If σ = {λ_1, ..., λ_n} is a normalized Sule˘ımanova spectrum, then there is a nonnegative integer N such that

$$\hat{\sigma} := \sigma \cup \overbrace{\{0, \ldots, 0\}}^{N}$$

is realized by a symmetric, doubly stochastic (n + N)-by-(n + N) matrix. If n + N is a power of two, then the realizing matrix is trisymmetric.
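The constructions behind Theorem 6.3 and Corollary 6.6 can be carried out explicitly. The sketch below (illustrative only; for simplicity it pads with zeros up to the next power of two, which may use more zeros than strictly necessary) realizes a hypothetical normalized Sule˘ımanova spectrum by a symmetric doubly stochastic matrix via the Walsh matrix.

```python
# Illustrative sketch of Theorem 6.3 / Corollary 6.6: pad a normalized Suleimanova
# spectrum with zeros up to the next power of two and realize it as A = n^{-1} H D_v H^T,
# where H is the (normalized Hadamard) Walsh matrix of that order.
import numpy as np

def realize_suleimanova(spectrum):
    lam = sorted(spectrum, reverse=True)
    assert lam[0] == 1 and sum(lam) >= 0 and all(x <= 0 for x in lam[1:])
    N = 1 << (len(lam) - 1).bit_length()          # next power of two (a Hadamard order)
    v = np.array(lam + [0.0] * (N - len(lam)))    # the padded spectrum sigma-hat
    H = np.array([[1.0]])
    for _ in range(N.bit_length() - 1):
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H @ np.diag(v) @ H.T / N               # symmetric and doubly stochastic

A = realize_suleimanova([1.0, -0.2, -0.3, -0.4, -0.05])
print(np.all(A >= -1e-12), np.allclose(A, A.T), np.allclose(A.sum(axis=0), 1.0))  # True True True
```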


Remark 6.7. If the Hadamard conjecture holds, then N ≤ 3. Moreover, Corollary 6.6 is a constructive version of the celebrated Boyle–Handelman theorem [2, Theorem 5.1] for Sule˘ımanova spectra.

A natural variation of Problem 6.2 is the following.

Problem 6.8. If σ = {λ_1, ..., λ_n}, n ≥ 2, is a normalized Sule˘ımanova spectrum such that s_1(σ) = 0, is σ realizable by a doubly stochastic matrix?

Although the trace-zero assumption is rather restrictive, we demonstrate it is a nontrivial problem: if A is a 3-by-3 doubly stochastic matrix and tr(A) = 0, then

$$A = \begin{bmatrix} 0 & a & 1 - a \\ 1 - a & 0 & a \\ a & 1 - a & 0 \end{bmatrix}, \quad a \in [0, 1],$$

and σ(A) = {1, −1/2 ± √3(a − 1/2)i}, where i := √−1. Clearly, σ is real if and only if a = 1/2. Thus, {1, −1/2, −1/2} is the only normalized Sule˘ımanova spectrum that is realizable by a 3-by-3 trace-zero doubly stochastic matrix.

We conclude with the observation that the Perron spectratopes of H_1 and H_2 resolve Problem 6.8 when n = 2 and n = 4, respectively (see Fig. 1 and Fig. 3).

References

[1] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics Appl. Math., vol. 9, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994. Revised reprint of the 1979 original.
[2] M. Boyle, D. Handelman, The spectra of nonnegative matrices via symbolic dynamics, Ann. of Math. (2) 133 (2) (1991) 249–316.
[3] M.T. Chu, Inverse eigenvalue problems, SIAM Rev. 40 (1) (1998) 1–39.
[4] G. Dahl, Combinatorial properties of Fourier–Motzkin elimination, Electron. J. Linear Algebra 16 (2007) 334–346.
[5] P.D. Egleston, T.D. Lenker, S.K. Narayan, The nonnegative inverse eigenvalue problem, in: Tenth Conference of the International Linear Algebra Society, Linear Algebra Appl. 379 (2004) 475–490.
[6] L. Elsner, R. Nabben, M. Neumann, Orthogonal bases that lead to symmetric nonnegative matrices, Linear Algebra Appl. 271 (1998) 323–343.
[7] S.D. Eubanks, On the construction of nonnegative symmetric and normal matrices with prescribed spectral data, PhD thesis, Washington State University, ProQuest LLC, Ann Arbor, MI, 2009.
[8] S.D. Eubanks, J.J. McDonald, On a generalization of Soules bases, SIAM J. Matrix Anal. Appl. 31 (3) (2009) 1227–1234.
[9] M. Fiedler, Eigenvalues of nonnegative symmetric matrices, Linear Algebra Appl. 9 (1974) 119–142.
[10] S. Friedland, On an inverse problem for nonnegative and eventually nonnegative matrices, Israel J. Math. 29 (1) (1978) 43–60.
[11] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.
[12] C.R. Johnson, Row stochastic matrices similar to doubly stochastic matrices, Linear Multilinear Algebra 10 (2) (1981) 113–130.
[13] C.R. Johnson, T.J. Laffey, R. Loewy, The real and the symmetric nonnegative inverse eigenvalue problems are different, Proc. Amer. Math. Soc. 124 (12) (1996) 3647–3651.
[14] C.R. Johnson, C. Marijuán, M. Pisonero, Personal communication, 2015.


[15] C.R. Johnson, H.M. Shapiro, Mathematical aspects of the relative gain array (A ∘ A^{−⊤}), SIAM J. Algebr. Discrete Methods 7 (4) (1986) 627–644.
[16] C. Knudsen, J.J. McDonald, A note on the convexity of the realizable set of eigenvalues for nonnegative symmetric matrices, Electron. J. Linear Algebra 8 (2001) 110–114.
[17] R. Loewy, D. London, A note on an inverse problem for nonnegative matrices, Linear Multilinear Algebra 6 (1) (1978/1979) 83–90.
[18] C. Marijuán, M. Pisonero, R.L. Soto, A map of sufficient conditions for the real nonnegative inverse eigenvalue problem, Linear Algebra Appl. 426 (2–3) (2007) 690–705.
[19] W.J. Martin, H. Tanaka, Commutative association schemes, European J. Combin. 30 (6) (2009) 1497–1525.
[20] H. Minc, Nonnegative Matrices, Wiley-Intersci. Ser. Discrete Math. Optim., John Wiley & Sons, Inc., New York, 1988. A Wiley–Interscience Publication.
[21] H. Perfect, On positive stochastic matrices with real characteristic roots, Proc. Cambridge Philos. Soc. 48 (1952) 271–276.
[22] H. Perfect, Methods of constructing certain stochastic matrices, Duke Math. J. 20 (1953) 395–404.
[23] H. Perfect, Methods of constructing certain stochastic matrices. II, Duke Math. J. 22 (1955) 305–311.
[24] R.J. Plemmons, M-matrix characterizations. I. Nonsingular M-matrices, Linear Algebra Appl. 18 (2) (1977) 175–188.
[25] G.W. Soules, Constructing symmetric nonnegative matrices, Linear Multilinear Algebra 13 (3) (1983) 241–251.
[26] P. Stein, Classroom notes: a note on the volume of a simplex, Amer. Math. Monthly 73 (3) (1966) 299–301.
[27] H.R. Sule˘ımanova, Stochastic matrices with real characteristic numbers, Dokl. Akad. Nauk SSSR (N. S.) 66 (1949) 343–345.