Linear Algebra and its Applications 463 (2014) 154–170
A tree-based approach to joint spectral radius determination

Claudia Möller, Ulrich Reif

Article history: Received 3 May 2013; Accepted 17 August 2014; Available online 20 September 2014; Submitted by R. Brualdi
MSC: 15A18; 15A60
Keywords: Joint spectral radius; Finiteness property; Exact computation
DOI: http://dx.doi.org/10.1016/j.laa.2014.08.009

Abstract. We suggest a novel method to determine the joint spectral radius of finite sets of matrices by validating the finiteness property. It is based on finding a certain finite tree with nodes representing sets of matrix products. Our approach accounts for cases where one or several matrix products satisfy the finiteness property. Moreover, it is potentially functional even for reducible sets of matrices. © 2014 Elsevier Inc. All rights reserved.
1. Introduction

The concept of the joint spectral radius (JSR) has gained some attention in recent years. As a generalization of the standard spectral radius, this characteristic of a set of matrices plays an important role in various fields of modern mathematics, see e.g. the monograph [18]. First introduced by Rota and Strang in 1960 [23], the JSR was almost forgotten, and then rediscovered in 1992 by Daubechies and Lagarias [7] in the context of the analysis of refinable functions. In general, the JSR is not exactly computable [3], and even its approximation is NP-hard in some sense [2]. In practice, the situation is as follows: Beyond the analysis of special cases, the few known algorithms for a potential exact evaluation are based on establishing the finiteness property (FP), as introduced below, for a given problem. This approach cannot claim universality because families of matrices exist¹ that do not exhibit the FP. However, to the best of our knowledge, all problems coming from applications have permitted a treatment via the FP so far. In [10,5,14,13,21,12,20], the FP is established by the construction of a special norm with certain extremal properties. The approach suggested here also aims at the FP but differs in the way it is verified. Some pros and cons will be discussed in the final section of this paper.

The idea of a graph-theoretical analysis has proven to be useful before. In [11], a branch and bound algorithm on the tree of all matrix products is used for approximating the JSR. This method yields arbitrarily small enclosing intervals but, except for some very special cases, cannot be used for an exact determination. In [16], the range of C¹-parameters of the four-point scheme is determined explicitly by considering certain infinite paths in the tree of all matrix products. Though we adapted some ideas from the latter work, the trees to be considered in the following are different from those in [11] and [16]: Their nodes represent sets of matrices instead of single matrix products. This crucial idea potentially reduces the analysis of infinite sets of products to a study of finite subtrees. In particular, this aspect facilitates automated verification of the FP by computer programs.

After introducing some notation, our main results are presented in Section 3 and then proven in the subsequent section. Section 5 illustrates our new method by some model problems but does not comprise numerical tests since the focus of this work is on theoretical aspects. In particular, we demonstrate the capabilities of our method by settling two problems raised in [12]. Some concluding remarks can be found in Section 6.

2. Setup

We consider a finite set A = {A_1, …, A_m} of matrices in C^{d×d}. To deal with products of its elements, we introduce the sets

I_0 := {∅},   I_k := {1, …, m}^k,   I := ⋃_{k∈N0} I_k,

of completely positive index vectors of length k ∈ N0 and arbitrary length, respectively. By contrast, an index vector may also contain negative entries, whose special meaning will be explained in the next section. For k ∈ N, we define the matrix product

A_I := A_{i_k} ⋯ A_{i_1},   I = [i_1, …, i_k] ∈ I_k.    (1)
¹ Non-constructive proofs of this fact can be found in [1,4,19], while an explicit counterexample is given in [17].
Otherwise, if k = 0, let A_∅ := Id be the identity matrix. The length of the vector I ∈ I_k is denoted by |I| := k. That is, any index vector I ∈ I encodes a matrix product A_I with |I| factors. Let ‖·‖ denote any norm on C^d, and also the induced matrix norm. The joint spectral radius (JSR) of A is defined as

ρ̂(A) := lim sup_{k→∞} max_{I∈I_k} ‖A_I‖^{1/k}.

In particular, if m = 1, then ρ̂(A) = ρ(A_1) is the standard spectral radius (SSR) of A_1. Clearly, the JSR is independent of the chosen norm. As shown in [7], upper and lower bounds on ρ̂(A) are given by

max_{I∈I_k} ρ(A_I)^{1/k} ≤ ρ̂(A) ≤ max_{I∈I_k} ‖A_I‖^{1/k}    (2)

for any k ∈ N. The set A is said to have the finiteness property (FP) if there exists a completely positive index vector J ∈ I_k such that equality holds on the left-hand side of the latter display,

ρ(A_J)^{1/k} = ρ̂(A).    (3)
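For small k, the two-sided bound (2) can be evaluated by brute-force enumeration of all length-k products. A minimal sketch, assuming NumPy; the function name and the toy usage are ours, not part of the paper's method:

```python
from itertools import product
import numpy as np

def jsr_bounds(matrices, k):
    """Evaluate the bound (2): over all products A_I with |I| = k,
    return (max rho(A_I)^(1/k), max ||A_I||^(1/k))."""
    d = matrices[0].shape[0]
    lo = hi = 0.0
    for I in product(matrices, repeat=k):
        P = np.eye(d)
        for A in I:                      # A_I = A_{i_k} ... A_{i_1}
            P = A @ P
        lo = max(lo, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
        hi = max(hi, np.linalg.norm(P, 2) ** (1.0 / k))   # spectral norm
    return lo, hi
```

Both sequences converge to ρ̂(A) as k grows, but only slowly, which is why exact determination relies on the finiteness property rather than on brute force.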
The method suggested here, as well as the ideas developed in [10,5,14,13,21,12,20], is based on verifying (3) for some sophisticated guess J. The issue of finding such a J is a problem in its own right and is not discussed here; see also the comments in [10]. The SSR as well as the JSR are homogeneous functions, i.e.,

ρ(βA_J) = |β| ρ(A_J),   ρ̂(βA) = |β| ρ̂(A),   β ∈ C,    (4)

where βA := {βA_1, …, βA_m}. Hence, discarding the trivial case ρ(A_J) = 0, we can scale the family A such that at least one of the dominant eigenvalues of A_J equals 1, and in particular ρ(A_J) = 1. Eq. (3), which has to be demonstrated, then reads

ρ̂(A) = ρ(A_J) = 1.    (5)

By (2), this is possible only if

ρ(A_i) ≤ 1,   i = 1, …, m,    (6)

so that we assume this property throughout.

3. Main results

In the following, we consider a matrix family A = {A_1, …, A_m} satisfying (6) for which the normalized equation (5) shall be proven.
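The cyclic permutations π(J) introduced next are harmless for spectral radii because ρ(BC) = ρ(CB): every cyclic shift of J yields a product with the same spectral radius. A minimal bookkeeping sketch, assuming NumPy (the function names are ours):

```python
import numpy as np

def cyclic_perms(J):
    """pi(J): all cyclic permutations of the index vector J."""
    return [J[s:] + J[:s] for s in range(len(J))]

def matrix_product(A, J):
    """A_J = A_{j_k} ... A_{j_1} for 1-based indices, as in (1)."""
    P = np.eye(A[0].shape[0])
    for j in J:
        P = A[j - 1] @ P
    return P

def spectral_radius(M):
    """rho(M): largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(M)))
```

Consequently, a candidate generator only needs to be stored up to cyclic shift.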
Given some index vector J ∈ I, let π(J) denote the set of all cyclic permutations of J. Arbitrary repetitions of such vectors form the set Π(J) := {I^k : I ∈ π(J), k ∈ N}. If ρ(A_J) = 1 for some index vector J, then clearly ρ(A_{J′}) = 1 for all J′ ∈ Π(J). The algorithm suggested in [10] is based on the concept of dominance. It terminates only if there exists an index vector J such that ρ(A_{J′}) = 1 for J′ ∈ Π(J), but ρ(A_{J′}) < 1 otherwise. The approach in [12] introduces a weaker condition, called asymptotic simplicity: There may exist several index vectors J_1, …, J_n with ρ(A_{J′}) = 1 for J′ ∈ Π(J_1) ∪ ⋯ ∪ Π(J_n), but, in addition, some quite restrictive conditions on certain leading eigenvectors have to be satisfied. Our work aims at abandoning such assumptions. The family J of index vectors considered here contains index vectors with ρ(A_J) = 1. Moreover, for good reasons, we also allow index vectors with ρ(A_J) < 1. As will be demonstrated by an example in Section 5, such index vectors may significantly reduce the complexity of the trees to be constructed.

Definition 3.1. Given matrices A = {A_1, …, A_m}, consider some non-empty set J = {J_1, …, J_n} of completely positive index vectors J_i ∈ I. If

max_{J∈J} ρ(A_J) = 1,

then J is called a generator set of A, and each element J ∈ J is called a generator of J.

It is our goal to relate properties of generator sets to the equality ρ̂(A) = 1. By (2), existence of a generator set implies ρ̂(A) ≥ 1, so that (5) becomes equivalent to ρ̂(A) ≤ 1. In the following, let the set A = {A_1, …, A_m} of matrices and the generator set J = {J_1, …, J_n} be fixed. Whenever a generator is mentioned, it is understood as a generator of J. To address products of matrices in A conveniently, we introduce the sets

K_0 := {∅},
then J is called a generator set of A, and each element J ∈ J is called a generator of J . It is our goal to relate properties of generator sets to the equality ρˆ(A) = 1. By (2), existence of a generator set implies ρˆ(A) ≥ 1 so that (5) becomes equivalent to ρˆ(A) ≤ 1. In the following, let the set A = {A1 , . . . , Am } of matrices and the generator set J = {J1 , . . . , Jn } be fixed. Whenever a generator is mentioned, it is understood as a generator of J . To address products of matrices in A conveniently, we introduce the sets K0 := {∅},
K := {−n, . . . , −1, 1, . . . , m} ,
K :=
K ,
∈N0
of index vectors of length ∈ N0 and arbitrary length, respectively. As before, the length of K ∈ K is denoted by |K| := . Index vectors K ∈ K encode sets of matrix products in the following way: While single positive indices correspond to singletons, single negative indices correspond to infinite sets containing special matrix powers, A :=
{A } {AkJ− : k ∈ N0 }
if > 0, if < 0.
Defining products of sets as sets of products, i.e., P · Q := {P Q : P ∈ P, Q ∈ Q}, let AK := Ak · · · Ak1 ,
K = [k1 , . . . , k ] ∈ K ,
for ℓ ∈ N, and A_∅ := {Id}. This definition is similar to (1), but A_I is a single matrix, while A_K is always a set, even if K ∈ I is completely positive. In this case, A_K = {A_K} is a singleton, while otherwise, it is typically² a denumerable set. We need some more notation and definitions: Concatenation of vectors P ∈ K_i and S ∈ K_j is denoted by [P, S] := [p_1, …, p_i, s_1, …, s_j] ∈ K_{i+j}. Powers indicate concatenation of an index vector with itself,

K^1 := K,   K^{ℓ+1} := [K^ℓ, K].

If K = [P, S], then P is a prefix and S is a suffix of K. The sets of prefixes and suffixes of K ∈ K are denoted by

P(K) := {P : K = [P, S] for some S},
S(K) := {S : K = [P, S] for some P},

respectively.

Example. Let J = {J_1, J_2} with J_1 = [1, 2] and J_2 = [2, 2, 1] be a generator set for a matrix set A = {A_1, A_2}. We have A_{[−1]} = {(A_2A_1)^k : k ∈ N0} and A_{[−2]} = {(A_1A_2^2)^k : k ∈ N0}. For K = [1, 1, −1, 1, 2], we obtain

A_K = {A_2A_1 (A_2A_1)^k A_1^2 : k ∈ N0}.

The first two entries of K generate the rightmost factor A_1^2, the negative entry −1 refers to the generator J_1 = [1, 2] and leads to the powers (A_2A_1)^k, and the last two entries generate the leftmost product A_2A_1. The index vector P = [1, 1, −1] ∈ P(K) is a prefix and S = [1, 2] ∈ S(K) is the complementary suffix of K. In this special case, we observe that A_K ⊂ A_P, which may be surprising at first sight. This phenomenon, where a set of matrix products is completely covered by that of a prefix, will play a prominent role below.

In a natural way, the set K of index vectors can be given the structure of a directed tree, denoted by T: The elements of K are the nodes, the empty vector ∅ is the root, and an edge connects the parent node P with the child node C if and only if C = [P, i] for some index i ∈ K_1.

² For instance, if all eigenvalues of the matrices A_J happen to be 0, then A_K is finite even if K is not completely positive.
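In computations, a set A_K with negative entries can only be enumerated up to a finite cut-off on the powers. A sketch of such a truncated enumeration, assuming NumPy; the function and the cut-off parameter kmax are our illustration, not part of the paper's formalism:

```python
import numpy as np

def truncated_A_K(A, generators, K, kmax=5):
    """Finite truncation of A_K: a negative index -j contributes the
    powers (A_{J_j})^k for k = 0..kmax, a positive index i contributes A_i."""
    def factor_set(idx):
        if idx > 0:
            return [A[idx - 1]]
        G = np.eye(A[0].shape[0])
        for j in generators[-idx - 1]:      # build A_{J_{-idx}}
            G = A[j - 1] @ G
        return [np.linalg.matrix_power(G, k) for k in range(kmax + 1)]
    prods = [np.eye(A[0].shape[0])]         # A_empty = {Id}
    for idx in K:                           # A_K = A_{k_l} ... A_{k_1}:
        prods = [F @ P for F in factor_set(idx) for P in prods]
    return prods
```

For the example above, K = [1, 1, −1, 1, 2] yields kmax + 1 products, one per truncated power of A_{J_1}.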
Definition 3.2. A node K ∈ K is called

• positive or negative if so is the suffix i ∈ K_1 when writing K = [P, i],
• 1-bounded if ‖A_K‖ := sup{‖A‖ : A ∈ A_K} ≤ 1,
• covered if there exists a prefix P ∈ P(K) such that A_K ⊂ A_P, and the complementary suffix S is completely positive and not empty.

Typically, covered nodes appear in the following situation: Let P = [P′, ℓ] be a negative node, i.e., ℓ < 0. Then its descendant K = [P, J_{−ℓ}] is covered since

A_{[ℓ, J_{−ℓ}]} = A_{J_{−ℓ}} · A_ℓ = {A^{k+1}_{J_{−ℓ}} : k ∈ N0} ⊂ {A^k_{J_{−ℓ}} : k ∈ N0} = A_ℓ.

The example above is constructed in exactly this way.

The following theorem provides a sufficient condition for establishing the JSR of a family of matrices. This condition is based on properties of a finite subtree of T and thus can be verified (though not falsified) numerically or analytically in finite time. The shape of the tree depends on the chosen norm, and there is no a priori bound for its depth.

Theorem 3.3. Let A = {A_1, …, A_m}. If there exists a generator set J and a finite subtree T_* ⊂ T such that

• the root ∅ of T_* has exactly m positive children,
• every leaf of T_* is either 1-bounded or covered,
• every other node of T_* has either exactly m positive children or an arbitrary number of negative children,

then ρ̂(A) = 1.

Positive children of the root ∅ are demanded merely to simplify forthcoming arguments. Negative children, though of little use in practice, would be equally fine from a theoretical point of view. The approach suggested in [10] assumes irreducibility of the family A and is successful if and only if, possibly after scaling, this family has a spectral gap at 1, as defined below. The class of problems decidable by Theorem 3.3 is actually larger: First, Theorem 3.5 shows that a tree of the requested form always exists if A has these properties. Choosing an appropriate norm, this case is in fact trivial because the nodes of the tree can all be chosen to be completely positive, i.e., no infinite sets of matrices have to be employed. Second, an example in Section 5 shows that it is possible to establish the JSR of families which do not have a spectral gap at 1. Also, irreducibility is not assumed.

The two properties mentioned above, namely irreducibility and the spectral gap at 1, mean the following: The first one concerns a possible reduction of dimensionality. The family A = {A_1, …, A_m} is called irreducible if the matrices A_1, …, A_m have no common
nontrivial invariant linear subspace. If A is reducible, JSR computation can be split into lower-dimensional problems until irreducibility is attained, see [18], Proposition 1.5. The second one concerns separation of spectral radii in the set of matrix products. Given a single generator J, ρ(A_J) = 1 immediately implies ρ(A_I) = 1 for any index vector I ∈ Π(J) which is a cyclic permutation of a power of J. A spectral gap at 1 means that the SSR of no other product matrix can come close to 1. More precisely, we define:

Definition 3.4. The matrix family A has a spectral gap at 1 if there exists a completely positive index vector J with ρ(A_J) = 1 such that

• there exists q < 1 such that

ρ(A_I) ≤ q    (7)

for any product A_I, unless I = ∅ or I = [S, J^r, P] for some r ∈ N0 and some partition [P, S] = J of J,

• the Jordan normal form Λ of A_J is

Λ := V^{−1} A_J V = [ 1  0 ; 0  Λ_* ],   ρ(Λ_*) < 1.    (8)

In this case, J is called a dominant generator.

Since A is finite, the JSR can be characterized also by

ρ̂(A) = lim sup_{k→∞} max_{I∈I_k} ρ(A_I)^{1/k}.

Therefore, a spectral gap at 1 implies ρ̂(A) = 1. Following [9], Lemma 4, an irreducible set A with ρ̂(A) = 1 is product bounded. That is, there exists a constant c_A such that ‖A_I‖ < c_A for all I ∈ I. An algorithm for computing an admissible constant can be found in [22]. If, in addition, A has a dominant generator J, we may scale the matrix V in (8) such that its columns v_j and the columns w_i of W := V^{−t} satisfy ‖w_1‖_2 = 1 and ‖v_j‖_2 ≤ c_A^{−1}, j ≥ 2, where the constant c_A is taken with respect to the Euclidean norm ‖·‖_2. Now, we define the matrix norm

‖A‖_V := max_j Σ_i |w_i^t A v_j|

as the standard 1-norm of the transformed matrix W^t A V. At least with respect to this norm, the implication of Theorem 3.3 is in fact an equivalence:

Theorem 3.5. Let A be irreducible with spectral gap at 1. Using the norm ‖·‖_V, there exists a subtree T_* ⊂ T according to the specifications of Theorem 3.3. Moreover, all nodes in this tree can be requested to be completely positive.
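The norm ‖·‖_V is simply an induced 1-norm after a change of basis. A sketch assuming NumPy; the scaling of V described above is taken as already done, and the function name is ours:

```python
import numpy as np

def v_norm(A, V):
    """||A||_V = max_j sum_i |w_i^t A v_j|: the induced 1-norm
    (maximum absolute column sum) of W^t A V, where W := V^{-t}."""
    W_t = np.linalg.inv(V)          # W^t = (V^{-t})^t = V^{-1}
    return np.linalg.norm(W_t @ A @ V, 1)
```

Since the entries of W^t A V are exactly w_i^t A v_j, the maximum absolute column sum reproduces the definition above.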
4. Proofs

4.1. Proof of Theorem 3.3

Below, T_* is assumed to be a fixed subtree of T according to the conditions of the theorem. Dependencies of variables on T_* will not be declared explicitly. The proof is based on the following ideas: By the existence of a generator set, we have ρ̂(A) ≥ 1. To establish ρ̂(A) ≤ 1, it suffices to show that there exists a monotone increasing polynomial p : N0 → R such that all matrix products are bounded with respect to their length,

‖A_I‖ ≤ p(|I|).

Convergence of p(|I|)^{1/|I|} to 1 then leads to the desired upper bound on ρ̂(A). To reach this goal, a partition of completely positive index vectors, depending on T_*, is defined. Lemmata 4.1 and 4.2 give upper bounds for the factors of this partition, together leading to the existence of p.

In the following, the finite set of nodes of T_* is denoted by K_*, and the union of all contained matrix products by A_* := ⋃_{K∈K_*} A_K. The corresponding set of completely positive index vectors is I_* := {I ∈ I : A_I ∈ A_*}. That is, {A_I : I ∈ I_*} = A_* and K_* ∩ I ⊆ I_*. The index vectors in K are ordered totally by setting K < K′ for |K| < |K′|, and applying lexicographic order for index vectors of equal length. To any completely positive index vector I ∈ I \ {∅}, we assign the following two objects:

• The I_*-maximal prefix M(I) := argmax(I_* ∩ P(I)) of I is defined as the longest prefix of I which is contained in I_*.
• The K_*-maximal node N(I) := max{K ∈ K_* : A_{M(I)} ∈ A_K} of I is defined as the maximal node in K_* with the property that the corresponding set A_{N(I)} contains the product A_{M(I)} belonging to the I_*-maximal prefix of I.

We note that, in general, M(I) is not a node of T_*, while N(I) is by definition. On the other hand, N(I) is generally not a prefix of I. Further, M(I) is completely positive by definition, coding the singleton {A_{M(I)}}. In contrast, N(I) may have negative entries, then coding a denumerable set of products which contains A_{M(I)}. If I ∈ I is a node of T_*, then M(I) = I. Further, for any I ∈ I \ {∅}, the length of the I_*-maximal prefix M(I) is at least 1 because the first entry of I is necessarily a child of the root of T_*.
Lemma 4.1. If I ∉ I_*, then its K_*-maximal node N(I) is a 1-bounded leaf of T_*.

Proof. Let P := M(I) and K := N(I) denote the I_*-maximal prefix and the K_*-maximal node of I, respectively.

First, assume that K has positive children. Writing I = [i_1, …, i_k] and P = [i_1, …, i_ℓ], we know that ℓ + 1 ≤ k because I ∉ I_*. Since, by assumption, positive children always come as a complete set of m siblings, K′ := [K, i_{ℓ+1}] ∈ K_* is a node of T_*. Consider the prefix P′ := [i_1, …, i_{ℓ+1}] of I. Then

A_{P′} = A_{i_{ℓ+1}} · A_P ∈ A_{i_{ℓ+1}} · A_K = A_{K′} ⊂ A_*.

This implies P′ ∈ I_*, contradicting maximality of the prefix P in I_*.

Second, assume that K has a negative child K′ = [K, j] for some j < 0. Since A_j = {A^k_{J_{−j}} : k ∈ N0} contains the identity matrix, we have A_P ∈ A_K ⊂ A_{K′}, contradicting maximality of K.

Third, assume that K is a covered node. Thus, by definition, K = [Q, S] for some S ∈ I \ {∅} and A_P ∈ A_K ⊆ A_Q. By properties of S, Q has positive children. Again, the argument used in the first part of this proof yields a contradiction to maximality of the prefix P.

The first two cases imply that the node K is one of the leaves of T_*. These are either 1-bounded or covered, the latter being excluded by the third case. □

The I_*-maximal prefix partition P_1, …, P_r of I ∈ I \ {∅} is characterized by

I = [P_1, …, P_r],   P_ℓ = M([P_ℓ, …, P_r]),   ℓ = 1, …, r.

Algorithmically, the vectors P_ℓ can be determined by a recursive process, starting from P_1 = M(I): Regard the complementary suffix of I with respect to the prefix P_ℓ; then P_{ℓ+1} is its I_*-maximal prefix. The algorithm terminates on finding an I_*-maximal prefix P_r = M(P_r), which implies P_r ∈ I_*. The number r cannot exceed |I| because I_*-maximal prefixes of non-empty vectors have length ≥ 1. Lemma 4.1 provides information on the I_*-maximal prefixes P_1, …, P_{r−1}, while the suffix S = P_r is covered by the following result:

Lemma 4.2. There exists a monotone increasing polynomial p : N0 → R, depending only on A and T_*, such that ‖A_S‖ ≤ p(|S|) for any S ∈ I_*.
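The greedy computation of the I_*-maximal prefix partition described above can be sketched as follows; membership in I_* is abstracted as a predicate, which in an actual implementation would query the stored tree T_* (our simplification):

```python
def maximal_prefix_partition(I, in_I_star):
    """Greedy partition I = [P_1, ..., P_r]: each P_l is the longest
    prefix of the remaining suffix that lies in I_star."""
    parts = []
    rest = list(I)
    while rest:
        best = None
        for l in range(1, len(rest) + 1):   # scan for the longest prefix
            if in_I_star(rest[:l]):
                best = l
        if best is None:                     # prefixes of length >= 1 exist
            raise ValueError("first entry must be a child of the root")
        parts.append(rest[:best])
        rest = rest[best:]
    return parts
```

Since every step consumes at least one index, the loop terminates after at most |I| rounds, matching the bound r ≤ |I| above.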
Proof. Let B := {A_1, …, A_m, A_{J_1}, …, A_{J_n}}. Since ρ(B) ≤ 1 for all B ∈ B, the entries of powers B^r grow at most polynomially. That is, there exists a monotone increasing polynomial q : N0 → [1, ∞) with

‖B^r‖ ≤ q(r),   B ∈ B, r ∈ N0.

Let h := max{|K| : K ∈ K_*} denote the height of T_*. Then p := q^h is also a monotone increasing polynomial on N0. For S ∈ I_*, the product A_S is a member of the set A_K for some K = [k_1, …, k_ℓ] ∈ K_*, and thus can be written as

A_S = B_ℓ^{r_ℓ} ⋯ B_1^{r_1},

where r_i ∈ N0, B_i ∈ B, and B_i^{r_i} ∈ A_{k_i}. The exponents are bounded by r_i ≤ |S|, and the number of factors by ℓ ≤ h. Hence,

‖A_S‖ ≤ ∏_{i=1}^{ℓ} ‖B_i^{r_i}‖ ≤ ∏_{i=1}^{ℓ} q(r_i) ≤ q(|S|)^ℓ ≤ p(|S|),

as stated. □

Now, we are prepared to accomplish the proof of Theorem 3.3:

Proof. Let I ∈ I_k be any completely positive index vector, and P_1, …, P_r its I_*-maximal prefix partition. Then

A_I = A_{P_r} · A_{P_{r−1}} ⋯ A_{P_1}.

For ℓ = 1, …, r − 1, [P_ℓ, …, P_r] ∉ I_* and A_{P_ℓ} ∈ A_{N([P_ℓ,…,P_r])} by construction. So ‖A_{P_ℓ}‖ ≤ ‖A_{N([P_ℓ,…,P_r])}‖ ≤ 1 due to Lemma 4.1. A bound on the norm of A_{P_r} is given by Lemma 4.2. By monotonicity of the polynomial p and |P_r| ≤ |I| = k, we obtain

‖A_I‖ ≤ ‖A_{P_r}‖ ≤ p(|P_r|) ≤ p(k).

Hence, by (2),

1 = max{ρ(A_J) : J ∈ J} ≤ ρ̂(A) ≤ p(k)^{1/k}.

As k → ∞, the right hand side converges to 1, thus verifying the claim. □
4.2. Proof of Theorem 3.5

Let I := (i_k)_{k∈N} denote a sequence of positive indices i_k ∈ {1, …, m}. Adapting notation in the obvious way, we denote prefixes of I by I_k := [i_1, …, i_k] ∈ P(I). Further, J^∞ := [J, J, …] is the sequence obtained by infinite repetition of J. If ‖A_{I_k}‖ > 1 for all k ∈ N0, then I is called an infinite path. The proof of Theorem 3.5 consists of three steps: First, in Lemma 4.3, we characterize the structure of such an infinite path. Second, in Lemma 4.4, we exclude the existence of an infinite path with respect to the V-norm ‖·‖_V. Third, we deduce the finiteness of a certain set-valued tree which satisfies the specifications of Theorem 3.3 and has only completely positive nodes.

Lemma 4.3. Let J be a dominant generator of the irreducible family A with spectral gap at 1. Then any infinite path has the form I = [P, J^∞] for some prefix P ∈ I.

Proof. Since A is product bounded, there exists a convergent subsequence B_ℓ := A_{I_{k(ℓ)}}, ℓ ∈ N0, with limit B^* := lim_ℓ B_ℓ. Let λ ∈ N and L_{ℓ,λ} ∈ I such that [I_{k(ℓ)}, L_{ℓ,λ}] = I_{k(ℓ+λ)} for ℓ ∈ N0. Then

B_{ℓ+λ} = C_{ℓ,λ} B_ℓ,   C_{ℓ,λ} := A_{L_{ℓ,λ}},   ℓ ∈ N0.

The right hand side of

‖(C_{ℓ,λ} − Id) B^*‖ = ‖C_{ℓ,λ} B^* − B_ℓ + B_{ℓ+λ} − B^*‖ ≤ c_A ‖B^* − B_ℓ‖ + ‖B^* − B_{ℓ+λ}‖

tends to 0. Thus,

lim_{ℓ→∞} C_{ℓ,λ} B^* = B^*.

By assumption, ‖B_ℓ‖ > 1 for all ℓ and hence B^* ≠ 0. So, recalling (7), the displayed equation shows that there exists ℓ_0 ∈ N such that ρ(C_{ℓ,λ}) > q for all ℓ ≥ ℓ_0. Since J is dominant, we obtain by definition

L_{ℓ,λ} = [S_{ℓ,λ}, J^{r_{ℓ,λ}}, P_{ℓ,λ}],   ℓ ≥ ℓ_0,

for partitions [P_{ℓ,λ}, S_{ℓ,λ}] = J. Substituting this representation into [L_{ℓ,1}, L_{ℓ+1,1}] = L_{ℓ,2} yields

[S_{ℓ,1}, J^{r_{ℓ,1}}, P_{ℓ,1}, S_{ℓ+1,1}, J^{r_{ℓ+1,1}}, P_{ℓ+1,1}] = [S_{ℓ,2}, J^{r_{ℓ,2}}, P_{ℓ,2}].

This is possible only if [P_{ℓ,1}, S_{ℓ+1,1}] = J, implying P_{ℓ,1} = P_{ℓ+1,1} and S_{ℓ,1} = S_{ℓ+1,1}. That is, P_{ℓ,1} = P_{ℓ_0,1} and S_{ℓ,1} = S_{ℓ_0,1} for all ℓ ≥ ℓ_0. We recall B_{ℓ_0} = A_{I_{k(ℓ_0)}} and abbreviate r_i := r_{ℓ_0+i,1}, P̃ := P_{ℓ_0,1}, S̃ := S_{ℓ_0,1} to find
I = [I_{k(ℓ_0)}, S̃, J^{r_0}, P̃, S̃, J^{r_1}, P̃, …] = [I_{k(ℓ_0)}, S̃, J^∞],

and the claim follows with P := [I_{k(ℓ_0)}, S̃]. □

While the lemma above makes no assumptions concerning the underlying norm, the next one shows that the V-norm ‖·‖_V is special.

Lemma 4.4. If A is irreducible with spectral gap at 1, then there is no infinite path with respect to the V-norm.

Proof. Assume that there exists an infinite path. According to the previous lemma, it is given by I = [P, J^∞] with J being a dominant generator. The corresponding limit of matrix products S := lim_k A_{[P,J^k]} exists because Λ^∞ := lim_k Λ^k = diag([1, 0, …, 0]) exists. Since ‖A_{J^k}‖_V = ‖Λ^k‖_1 = 1 for k sufficiently large, P cannot be empty. In this case, we know that ‖A_{[P,J^k]}‖_V > 1 and ρ(A_{[P,J^k]}) ≤ q < 1 for all k ∈ N. Hence, ‖S‖_V ≥ 1 and ρ(S) ≤ q. With T := A_P, the matrix S̃ := W^t S V is given by

S̃ = Λ^∞ W^t T V = [ w_1^t T v_1   w_1^t T v_2   ⋯   w_1^t T v_d
                     0             0             ⋯   0
                     ⋮             ⋮                  ⋮
                     0             0             ⋯   0 ].

Since S and S̃ are similar, we have |w_1^t T v_1| = ρ(S̃) = ρ(S) < 1. Further, recalling ‖w_1‖_2 = 1 and ‖v_j‖_2 ≤ c_A^{−1}, we find |w_1^t T v_j| ≤ ‖w_1‖_2 ‖T‖_2 ‖v_j‖_2 < 1 for j ≥ 2. Thus, we obtain the contradiction 1 ≤ ‖S‖_V = ‖S̃‖_1 < 1. □

The lemma enables us to prove Theorem 3.5:

Proof. Let T_* be the largest subtree of T with the following properties: ∅ is contained as a node, all nodes are completely positive, and each 1-bounded node is a leaf. Assuming that the set K_* of nodes is infinite, we define K′ ⊂ K_* as the set of nodes which are prefix of infinitely many other nodes in K_*. Then K′ is not empty because the root ∅ belongs to it. Further, if K ∈ K′, then there exists at least one child [K, k] ∈ K′. That is, the recursion

i_k := min{i ∈ I_1 : [i_1, …, i_{k−1}, i] ∈ K′}

defines an infinite path I = (i_k)_{k∈N}, contradicting Lemma 4.4. □
5. Examples

In this section, we illustrate our method by considering some model problems. Unless indicated otherwise, all trees are constructed with respect to the maximum absolute row sum norm ‖·‖_∞.
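All examples below are already normalized in the sense of (5); for a general family one would first rescale via the homogeneity (4) so that the candidate generators attain spectral radius 1. A minimal sketch, assuming NumPy (the function names are ours):

```python
import numpy as np

def spectral_radius(M):
    """rho(M): largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(M)))

def normalize(A, J_set):
    """Rescale the family so that max_{J in J_set} rho(A_J) = 1,
    using (4); indices in J_set are 1-based, rho(A_J) > 0 assumed."""
    def prod(J):
        P = np.eye(A[0].shape[0])
        for j in J:
            P = A[j - 1] @ P
        return P
    # beta = max_J rho(A_J)^(1/|J|); dividing each A_i by beta scales
    # rho(A_J) by beta^(-|J|), so the maximizing J reaches radius 1.
    beta = max(spectral_radius(prod(J)) ** (1.0 / len(J)) for J in J_set)
    return [Ai / beta for Ai in A]
```

After this step, J_set is a generator set in the sense of Definition 3.1, and condition (6) holds automatically for singleton generators.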
Fig. 1. Tree for ε = 1/8 (left) and ε = 0 (right) according to Theorem 3.3. Colors and shapes of markers indicate properties of nodes: gray 1-bounded, white covered, square negative child w.r.t. generator J1 = [1], triangle negative child w.r.t. generator J2 = [2], black other.
First, let A = {A_1, A_2} with

A_1 = [ 10/9   1/3 ; −1/3   0 ],   A_2 = [ 0   √(1−ε)/5 ; −√(1−ε)/5   26/25 − ε ].

Then ρ(A_1) = 1 and ρ(A_2) = 1 − ε. Let us consider a few special cases:

• For ε = 1/8, we choose the generator set J = {[1]}. It leads to a finite tree that satisfies the conditions of Theorem 3.3, see Fig. 1 (left). That is, ρ̂(A) = ρ(A_1) = 1.
• For ε = 0, both matrices A_1, A_2 have spectral radius 1. Hence, A does not have a spectral gap at 1. While this case is explicitly excluded in [10], Fig. 1 (right) shows the corresponding tree according to Theorem 3.3 for J = {J_1, J_2} with J_1 = [1], J_2 = [2], thus proving ρ̂(A) = 1 also in this case.
• For ε close to 0, the asserted benefits of generators J with ρ(A_J) < 1 become apparent. For ε = 0.01, the spectral radius of A_2 is 0.99. In principle, it is sufficient to use only the generator J_1 = [1], see Fig. 2 (left). However, Fig. 2 (right) shows that the depth of the resulting tree is reduced significantly when using the additional generator J_2 = [2]. This is because the slow decay of norms of matrix powers A_2^k is subsumed in the single negative node marked by a triangle. For smaller values of ε, the effect becomes even more drastic. For instance, ε = 0.001 leads to depth 224 when using a single generator, while the two generators still yield the same trim pattern shown in Fig. 2 (right).

The next two examples are taken from [12]. These pairs of matrices are not asymptotically simple, which is an obstacle to a validation by means of a finite structure when
Fig. 2. Tree for ε = 0.01 using a single generator J1 = [1] (left) and two generators J1 = [1], J2 = [2] (right).
following the approach suggested there.³ In contrast, our method meets the challenge and establishes the JSR in a rather efficient way.

Let B = {B_1, B_2} with

B_1 = [ cos(1)   −sin(1) ; sin(1)   cos(1) ],   B_2 = [ 1/2   −i/2 ; 0   0 ].
When choosing J = {[1]}, we obtain a finite tree according to Theorem 3.3, see Fig. 3 (left). Thus, ρ̂(B) = ρ(B_1) = 1.

Let C = {C_1, C_2} with

C_1 = ((3−√5)/2) [ 1   1 ; 1   2 ],   C_2 = ((3−√5)/2) [ 2   1 ; 1   1 ].
When choosing J = {[1], [2]}, we obtain a finite tree according to Theorem 3.3, see Fig. 3 (right). Thus, ρ̂(C) = ρ(C_1) = ρ(C_2) = 1.

The following two examples arise from the smoothness analysis of subdivision schemes. They show that our method is able to deal with matrix triples as well as with matrices of larger size. First, we consider the ternary subdivision scheme with tension parameter μ proposed in [15]. According to [24], smoothness is highest for μ = 1/11. In this case, the limit curves are in C^{2,α} for any α < α* := log_3 11 − 2 ≈ 0.1827. Skipping the details of the well known matrix formalism for determining the critical Hölder exponent of a refinable function, see for instance [8], we note that this result is equivalent to ρ̂(D) = 1 for the triple D = {D_1, D_2, D_3} with

³ According to a referee's remark, it is possible to construct a variant of that method which is able to deal with these examples. However, we were not able to find more information regarding this in the published literature.
Fig. 3. Tree for B with single generator J1 = [1] (left) and for C with two generators J1 = [1], J2 = [2] (right).
Fig. 4. Tree for D with generator set J = {[1], [2], [3]} and norm · 2 .
D_1 = (1/9) [ 5   −4   0 ; −4   5   0 ; 0   9   0 ],
D_2 = (1/9) [ −4   5   0 ; 0   9   0 ; 0   5   −4 ],
D_3 = (1/9) [ 0   9   0 ; 0   5   −4 ; 0   −4   5 ].
Fig. 4 shows the corresponding finite tree for the generator set J = {[1], [2], [3]} with respect to the norm ‖·‖_2, confirming the result of [24]. Second, we consider the eight-point scheme of Deslauriers and Dubuc [6], which requires the JSR analysis of a pair E = {E_1, E_2} of 8 × 8 matrices, given by
Fig. 5. Tree for E with generators J = {[1], [2]} (left) and a zoom on the leftmost subtree (right).
E_1 = [  30  −14  −14   30    0    0    0    0
         −5  −56  154  −56   −5    0    0    0
          0   30  −14  −14   30    0    0    0
          0   −5  −56  154  −56   −5    0    0
          0    0   30  −14  −14   30    0    0
          0    0   −5  −56  154  −56   −5    0
          0    0    0   30  −14  −14   30    0
          0    0    0   −5  −56  154  −56   −5 ],

E_2 = [  −5  −56  154  −56   −5    0    0    0
          0   30  −14  −14   30    0    0    0
          0   −5  −56  154  −56   −5    0    0
          0    0   30  −14  −14   30    0    0
          0    0   −5  −56  154  −56   −5    0
          0    0    0   30  −14  −14   30    0
          0    0    0   −5  −56  154  −56   −5
          0    0    0    0   30  −14  −14   30 ].

Fig. 5 shows that ρ̂(E) = ρ(E_1) = ρ(E_2), implying that the scheme generates limits in C^{3,α} for any α < α* := 8 − log_2 ρ̂(E) ≈ 0.5511.

6. Conclusion

We have presented an alternative to known algorithms for rigorous certification of the JSR of a finite set of matrices. It is based on the observation that the existence of a certain finite tree with nodes representing sets of matrix products implies validity of the FP for a given set of matrices. Such trees can be searched automatically by computer programs using a variant of depth-first search. The main advantage of our method is its broad range of applicability, as it makes no a priori assumptions on the properties of the generator set. Problems revealing a spectral gap at 1 are guaranteed to be decidable by our algorithm when choosing the norm appropriately.
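The depth-first search mentioned above, restricted to completely positive nodes (the trivial variant that Theorem 3.5 guarantees to suffice under a spectral gap), can be sketched as follows; the depth cap and the names are ours, and a full implementation would also handle negative and covered nodes:

```python
import numpy as np

def verify_positive_tree(A, norm=lambda M: np.linalg.norm(M, np.inf),
                         max_depth=50):
    """Depth-first construction of a subtree T_* as in Theorem 3.3 with
    completely positive nodes only: a node K is a leaf once ||A_K|| <= 1,
    otherwise it gets all m positive children.  Returns True if the tree
    is finite within max_depth, i.e. the sufficient condition holds."""
    d = A[0].shape[0]
    stack = [(np.eye(d), 0)]            # pairs (A_K, depth); root K = []
    while stack:
        P, depth = stack.pop()
        if depth > 0 and norm(P) <= 1.0:
            continue                    # 1-bounded leaf: stop this branch
        if depth >= max_depth:
            return False                # give up: no finite tree found
        stack.extend((Ai @ P, depth + 1) for Ai in A)
    return True
```

Note that for a generator whose powers decay slowly in norm, this purely positive tree may be very deep or infinite; this is precisely what the negative, set-valued nodes of the full method remedy.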
Despite possible principal advantages, we have to acknowledge that typical run-times of our current implementation can hardly compete with the impressive results of Protasov, Guglielmi and others. Ongoing work aims at increasing the efficiency of our code, and at improving our understanding of the influence of various parameters, like the chosen norm.

References

[1] T. Bousch, J. Mairesse, Asymptotic height optimization for topical IFS, tetris heaps, and the finiteness conjecture, J. Amer. Math. Soc. 15 (1) (2002) 77–111.
[2] V.D. Blondel, Y. Nesterov, Computationally efficient approximations of the joint spectral radius, SIAM J. Matrix Anal. Appl. 27 (1) (2005) 256–272.
[3] V.D. Blondel, J.N. Tsitsiklis, The boundedness of all products of a pair of matrices is undecidable, Systems Control Lett. 41 (2) (2000) 135–140.
[4] V.D. Blondel, J. Theys, A.A. Vladimirov, An elementary counterexample to the finiteness conjecture, SIAM J. Matrix Anal. Appl. 24 (4) (2003) 963–970.
[5] A. Cicone, N. Guglielmi, S. Serra-Capizzano, M. Zennaro, Finiteness property of pairs of 2 × 2 sign-matrices via real extremal polytope norms, Linear Algebra Appl. 432 (2–3) (2010) 796–816.
[6] G. Deslauriers, S. Dubuc, Symmetric iterative interpolation processes, in: Constructive Approximation, Springer, 1989, pp. 49–68.
[7] I. Daubechies, J.C. Lagarias, Sets of matrices all infinite products of which converge, Linear Algebra Appl. 161 (1992) 227–263.
[8] N. Dyn, D. Levin, Subdivision schemes in geometric modelling, Acta Numer. 11 (2002) 73–144.
[9] L. Elsner, The generalized spectral-radius theorem: an analytic-geometric proof, Linear Algebra Appl. 220 (1995) 151–159.
[10] N. Guglielmi, V.Y. Protasov, Exact computation of joint spectral characteristics of linear operators, Found. Comput. Math. 13 (1) (2013) 37–97.
[11] G. Gripenberg, Computing the joint spectral radius, Linear Algebra Appl. 234 (1996) 43–60.
[12] N. Guglielmi, F. Wirth, M. Zennaro, Complex polytope extremality results for families of matrices, SIAM J. Matrix Anal. Appl. 27 (3) (2005) 721–743.
[13] N. Guglielmi, M. Zennaro, An algorithm for finding extremal polytope norms of matrix families, Linear Algebra Appl. 428 (10) (2008) 2265–2282.
[14] N. Guglielmi, M. Zennaro, Finding extremal complex polytope norms for families of real matrices, SIAM J. Matrix Anal. Appl. 31 (2) (2009) 602–620.
[15] M.F. Hassan, I.P. Ivrissimitzis, N.A. Dodgson, M.A. Sabin, An interpolating 4-point C² ternary stationary subdivision scheme, Comput. Aided Geom. Design 19 (1) (2002) 1–18.
[16] J. Hechler, B. Mößner, U. Reif, C¹-continuity of the generalized four-point scheme, Linear Algebra Appl. 430 (11–12) (2009) 3019–3029.
[17] K.G. Hare, I.D. Morris, N. Sidorov, J. Theys, An explicit counterexample to the Lagarias–Wang finiteness conjecture, Adv. Math. 226 (6) (2011) 4667–4701.
[18] R.M. Jungers, The Joint Spectral Radius, Theory and Applications, Springer, 2009.
[19] V. Kozyakin, A dynamical systems construction of a counterexample to the finiteness conjecture, in: Proceedings of the 44th IEEE Conference on Decision and Control and ECC 2005, 2005, pp. 2338–2343.
[20] M. Maesumi, Joint spectral radius and Hölder regularity of wavelets, Comput. Math. Appl. 40 (1) (2000) 145–155.
[21] M. Maesumi, Optimal norms and the computation of joint spectral radius of matrices, Linear Algebra Appl. 428 (10) (2008) 2324–2338.
[22] V.Y. Protasov, The joint spectral radius and invariant sets of the several linear operators, Fundam. Prikl. Mat. 2 (1) (1996) 205–231.
[23] G.C. Rota, W.G. Strang, A note on the joint spectral radius, Indag. Math. 22 (4) (1960) 379–381.
[24] H. Zheng, H. Zhao, Z. Ye, M. Zhou, Differentiability of a 4-point ternary subdivision scheme and its application, IAENG Int. J. Appl. Math. 36 (1) (2007) 19–24.