Products of idempotent and square-zero matrices


Linear Algebra and its Applications 497 (2016) 116–133


J.D. Botha ¹
Department of Mathematical Sciences, University of South Africa, P.O. Box 392, UNISA, 0003, South Africa
E-mail address: [email protected]
http://dx.doi.org/10.1016/j.laa.2016.02.026
¹ This paper was written while the author was on sabbatical leave granted by the University of South Africa.

Article history: Received 16 June 2015; Accepted 18 February 2016; Available online 27 February 2016. Submitted by R. Brualdi.
MSC: 15A23; 15B33

Abstract. This paper presents necessary and sufficient conditions for a square matrix over an arbitrary field to be expressed as a product of idempotent and square-zero matrices. Since the similarity structures of such factors are completely determined by their rank, we also investigate the ranks that these factors may have. The cases where the factors are all idempotent or all square-zero have been investigated by various authors, some of whom are listed in the references of this paper. © 2016 Elsevier Inc. All rights reserved.

Keywords: Idempotent matrix; Square-zero matrix; Matrix factorization

1. Introduction

Matrices which can be factored as a product of idempotent matrices were first investigated by Erdos [6], following an analogous investigation by Howie [7] for the semigroup of transformations of a set. Various authors have since published on this topic, including Ballantine [1], who investigated the minimum number of factors required for a given matrix, and Botha [2], who investigated the ranks which the factors may have (see also Knüppel and Nielsen [8]).


Factorization in terms of square-zero matrices has a more recent history, beginning with a paper by Novak [10] and a recent paper by Botha [4]. In this paper, we investigate necessary and sufficient conditions for a square matrix over an arbitrary field to be expressed as a product of idempotent and square-zero matrices. Since the similarity structures of such factors are completely determined by their rank, we also investigate the ranks that these factors may have. These investigations form part of a larger study of quadratic factorization of matrices; see for example [5] and [11], the latter of which contains references to a series of papers by the author on quadratic factorization over the complex field. The motivation for investigating products of idempotent and square-zero matrices is the fact that a singular quadratic matrix is either square-zero or a scalar multiple of an idempotent matrix. For research on the factorization of matrices over rings, such as the ring of integers, the reader is referred to [9] and the references therein.

We will show that, up to similarity, a product G ∈ Mm×m(F) of idempotent and square-zero matrices over an arbitrary field F can be arranged in such a way that the idempotent matrices appear first, in increasing order of nullity, followed by the square-zero matrices in increasing order of rank. This arrangement suits our presentation best; thus we assume that

G = E1 · · · Ek Z1 · · · Zl,   (1)

where the Ei (1 ≤ i ≤ k) are idempotent with 0 ≤ n(E1) ≤ · · · ≤ n(Ek) ≤ n(G) and the Zj (1 ≤ j ≤ l) are square-zero with r(G) ≤ r(Z1) ≤ · · · ≤ r(Zl) ≤ ⌊m/2⌋.

The presence of square-zero matrices in the factorization of G implies that r(G) ≤ ⌊m/2⌋. This condition is also sufficient to ensure the existence of factorizations (1) for any k, l ≥ 1 if no restriction is placed on the nullities of the Ei and the ranks of the Zj (Theorems 1, 2 and 3). It is interesting to note that in any event (i.e., for any k, l ≥ 1) the ranks of the square-zero factors can always be chosen arbitrarily, subject to the obvious condition r(G) ≤ r(Zj) ≤ ⌊m/2⌋. Only when there are three or more square-zero factors present (i.e., l ≥ 3) can the nullities of the idempotent factors also be chosen arbitrarily, subject to 0 ≤ n(Ei) ≤ n(G). For l = 1 and l = 2, an additional condition on the nullities of the idempotent factors is required in each case, namely

n(E1) + · · · + n(Ek) ≥ r(Z1) − dim(R(G) ∩ N(G))   (Theorem 1)

and

n(E1) + · · · + n(Ek) ≥ r(G) − n(G) + dim(R(G) ∩ N(G))   (Theorem 2),

respectively.


It is for this reason that the organization of the paper is such that the cases l = 1 and l = 2 are treated separately, in Sections 3 and 4 respectively, while the remaining cases (that is, l ≥ 3) are treated simultaneously in Section 5. Section 6 is separate from the rest of the paper and briefly refers to a related topic, namely factorizations of the type G = ZE (equivalently, G = EZ) where E is idempotent, Z is square-zero, and either E or Z is prescribed.

We now fix some notation. All matrices are assumed to be over an arbitrary fixed field F. For a given matrix G ∈ Mm×n(F), it will sometimes be useful to consider the linear mapping from Fn to Fm associated with G with respect to the standard bases for Fn and Fm. This mapping is also denoted by G (hence G(v) = Gv for all v ∈ Fn), since the intended meaning will be clear from the context. We denote the kernel (equivalently, null space) of G by N(G), the image (equivalently, range or column space) of G by R(G), and their respective dimensions by n(G) and r(G). The row space of G is denoted by row(G). Similarity in Mn×n(F) is denoted by ≈. The restriction of a linear mapping T : V → W to a subspace U of V is denoted by T|U. If V = W, then U is T-invariant if T(U) ⊆ U; in this case T|U is a linear operator on U. A basis B for a vector space V is sometimes denoted by B = basis(V). For a real number r, ⌊r⌋ denotes the largest integer less than or equal to r.

2. Preliminary results

Let nE1, · · · , nEk and rZ1, · · · , rZl denote integers such that 0 ≤ nEi ≤ n(G), 1 ≤ i ≤ k, and r(G) ≤ rZj ≤ ⌊m/2⌋, 1 ≤ j ≤ l. In this paper we investigate conditions for the existence of idempotent matrices Ei with n(Ei) = nEi, 1 ≤ i ≤ k, and square-zero matrices Zj with r(Zj) = rZj, 1 ≤ j ≤ l, such that G = E1 · · · Ek Z1 · · · Zl.

Note.

(a) It suffices to investigate whether G is similar to a product of the above type, since

X⁻¹GX = E1 · · · Ek Z1 · · · Zl ⇒ G = (XE1X⁻¹) · · · (XEkX⁻¹)(XZ1X⁻¹) · · · (XZlX⁻¹),

and XEiX⁻¹ is idempotent with the same nullity as Ei, while XZjX⁻¹ is square-zero with the same rank as Zj.

(b) Up to similarity, the matrices in the above factorization of G may occur in any order, due to the following transposition property for square (m × m) matrices A and B over F: AB ≈ (AB)ᵀ = BᵀAᵀ, and Aᵀ has the same rank, nullity and similarity class as A, and similarly for B. Hence we may assume without loss of generality that the Ei's appear first and that

0 ≤ nE1 ≤ · · · ≤ nEk ≤ n(G) and r(G) ≤ rZ1 ≤ · · · ≤ rZl ≤ ⌊m/2⌋.
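Note (a) is easy to check numerically. The following sketch is my own illustration and not part of the paper; it works over the reals with numpy and floating-point tolerances. It conjugates an idempotent and a square-zero matrix by a random invertible X and confirms that the structure, rank and nullity of each factor are preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

E = np.diag([1., 1., 0., 0.])                   # idempotent, nullity 2
Z = np.zeros((4, 4)); Z[0, 2] = Z[1, 3] = 1.    # square-zero, rank 2
X = rng.normal(size=(4, 4))                     # generically invertible
Xinv = np.linalg.inv(X)

E_c, Z_c = X @ E @ Xinv, X @ Z @ Xinv           # conjugated factors

assert np.allclose(E_c @ E_c, E_c)              # still idempotent
assert np.allclose(Z_c @ Z_c, 0)                # still square-zero
assert np.linalg.matrix_rank(E_c) == np.linalg.matrix_rank(E)
assert np.linalg.matrix_rank(Z_c) == np.linalg.matrix_rank(Z)
print("conjugation preserves the structure, rank and nullity of both factors")
```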


Lemma 1.

(a) Let G ∈ Mm×m(F), F an arbitrary field, and let nE be an integer such that 0 ≤ nE ≤ n(G). Then there exists an idempotent matrix E ∈ Mm×m(F) with n(E) = nE such that G = GE.

(b) Let G ∈ Mm×m(F), F an arbitrary field, and let k be a positive integer. For any integers nE1, · · · , nEk such that 0 ≤ nEi ≤ n(G), 1 ≤ i ≤ k, there exist idempotent matrices Ei ∈ Mm×m(F) with n(Ei) = nEi such that G = GE1 · · · Ek.

Proof. (a) Let E ∈ Mm×m(F) be an idempotent matrix of nullity nE such that N(E) ⊆ N(G) (such an E exists since nE ≤ n(G): take any projection of Fm whose kernel is an nE-dimensional subspace of N(G)). Since N(E) = R(I − E), it follows that G(I − E) = 0, thus G = GE.

(b) It follows from (a) that there exists an idempotent matrix Ek ∈ Mm×m(F) with n(Ek) = nEk such that G = GEk. Similarly G = GEk−1, so G = GEk = GEk−1Ek. The result follows by repeated application of (a). □

Lemma 2. Let U and W be subspaces of a finite-dimensional vector space V over an arbitrary field F.

(a) If {v1, · · · , vk} ⊆ V is linearly independent and span{v1, · · · , vk} ∩ W = {0}, then {v1 + w1, · · · , vk + wk} is linearly independent for arbitrary vectors w1, · · · , wk ∈ W, and span{v1 + w1, · · · , vk + wk} ∩ W = {0}.

(b) There exists a subspace S of V with dim(S) = min{dim(U), dim(W)} − dim(U ∩ W) such that S ⊆ U + W and S ∩ U = S ∩ W = {0}.

(c) If dim(V) ≥ dim(U) + dim(W), then there exists a subspace Ŝ of V such that dim(Ŝ) = min{dim(U), dim(W)} and Ŝ ∩ U = Ŝ ∩ W = {0}.


Proof. (a) For ai ∈ F, 1 ≤ i ≤ k,

a1(v1 + w1) + · · · + ak(vk + wk) ∈ W ⇒ a1v1 + · · · + akvk ∈ span{v1, · · · , vk} ∩ W = {0} ⇒ a1 = · · · = ak = 0,

and the result follows.

(b) Write

U + W = Ũ ⊕ (U ∩ W) ⊕ W̃, where Ũ ⊆ U and W̃ ⊆ W.

Assume dim(U) ≥ dim(W), and choose a linearly independent set {u1, · · · , uk} ⊆ Ũ and a set {w1, · · · , wk} ⊆ W̃ such that {w1, · · · , wk} is a basis for W̃. It follows from (a) that

S = span{u1 + w1, · · · , uk + wk}

satisfies the requirements.

(c) Let Ũ, W̃ and S be as in (b). Choose a subspace Ṽ of V such that

(U + W) ∩ Ṽ = {0} and dim((U + W) ⊕ Ṽ) = dim(U) + dim(W).

Then Ŝ = S ⊕ Ṽ satisfies the requirements. □

G-decomposition of Fm

Let G ∈ Mm×m(F), F an arbitrary field, be such that r(G) ≤ ⌊m/2⌋ (equivalently, r(G) ≤ n(G)). Let N ⊆ N(G), R ⊆ R(G) and W be subspaces of Fm such that

R(G) + N(G) = N ⊕ (R(G) ∩ N(G)) ⊕ R and Fm = (R(G) + N(G)) ⊕ W,

where dim(W) = dim(R(G) ∩ N(G)) (thus N(G) = N ⊕ (R(G) ∩ N(G)) and R(G) = (R(G) ∩ N(G)) ⊕ R). By Lemma 2(c) there exists a subspace V of Fm such that dim(V) = r(G) and V ∩ N(G) = V ∩ R(G) = {0}. Thus G maps V one-to-one onto R(G), and hence there exists a basis {v1, · · · , vr(G)} of V such that

basis(R(G) ∩ N(G)) = {Gv1, · · · , Gvt} and basis(R) = {Gvt+1, · · · , Gvr(G)},

where t = dim(R(G) ∩ N(G)). We refer to all of the above construction as the G-decomposition of Fm.
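The construction in Lemma 1(a), and the quantity t = dim(R(G) ∩ N(G)) appearing in the G-decomposition, are easy to realize numerically. The sketch below is my own illustration over the reals (numpy/scipy, floating-point ranks) and is not code from the paper; it chooses E as an orthogonal projector, which is one valid choice of a projection with the required kernel.

```python
import numpy as np
from scipy.linalg import null_space

def idempotent_right_identity(G, n_E):
    """Lemma 1(a): an idempotent E with n(E) = n_E and N(E) ⊆ N(G), so that G = GE.
    Here E is chosen as the orthogonal projector along an n_E-dimensional
    subspace of N(G); any projector with that kernel would do."""
    m = G.shape[0]
    N = null_space(G)                          # orthonormal basis of N(G)
    assert n_E <= N.shape[1], "need n_E <= n(G)"
    K = N[:, :n_E]                             # kernel of E, inside N(G)
    return np.eye(m) - K @ K.T                 # projector with N(E) = span(K)

def dim_intersection(A, B):
    """dim of the intersection of the column spans: dim A + dim B - dim(A + B)."""
    rk = np.linalg.matrix_rank
    return rk(A) + rk(B) - rk(np.hstack([A, B]))

# Sample matrix: square-zero of rank 2, so R(G) = N(G) and t = dim(R(G) ∩ N(G)) = 2.
G = np.zeros((4, 4)); G[0, 2] = G[1, 3] = 1.

E = idempotent_right_identity(G, n_E=1)
assert np.allclose(E @ E, E)                   # E is idempotent
assert np.allclose(G @ E, G)                   # G = GE
print("n(E) =", G.shape[0] - np.linalg.matrix_rank(E))                 # 1
print("t = dim(R(G) ∩ N(G)) =", dim_intersection(G, null_space(G)))   # 2
```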


3. Products with one square-zero factor

Theorem 1. Let G ∈ Mm×m(F), F an arbitrary field, and let k ≥ 1 be an integer. Then G can be expressed as

G = E1 · · · Ek Z, where the Ei (1 ≤ i ≤ k) are idempotent and Z is square-zero,   (2)

if and only if r(G) ≤ ⌊m/2⌋.

Moreover, if a factorization (2) exists, then the nullities of the Ei (1 ≤ i ≤ k) and the rank of Z can be chosen arbitrarily subject to

n(E1) + · · · + n(Ek) ≥ r(Z) − dim(R(G) ∩ N(G))   (3)

and the basic conditions 0 ≤ n(Ei) ≤ n(G) and r(G) ≤ r(Z) ≤ ⌊m/2⌋.

Proof. Suppose G can be expressed as in (2). Then r(G) ≤ ⌊m/2⌋, since Z is square-zero. To show that (3) holds we proceed as follows. Let

H = E1 · · · Ek.

Since Z is square-zero and divides G on the right, we have R(Z) ⊆ N(Z) ⊆ N(G). So Z(N(G)) ⊆ R(Z) ⊆ N(G), and since G(N(G)) = HZ(N(G)) = {0} implies Z(N(G)) ⊆ N(H), we have

dim(N(H) ∩ N(G)) ≥ dim(Z(N(G))) = n(G) − n(Z) (since N(Z) ⊆ N(G)) = r(Z) − r(G).   (4)

Now refer to the G-decomposition of Fm. Since Z maps V into N(G) and since N(G) ∩ R = {0}, it follows from Lemma 2(a) that the set

{Zvt+1 − Gvt+1, · · · , Zvr(G) − Gvr(G)}   (5)

is linearly independent and that span{(Z − G)vt+1, · · · , (Z − G)vr(G)} ∩ N(G) = {0}.


Now v ∈ N(H) ∩ N(G) ⇒ (I − H)v = v ∈ N(G), and for t < i ≤ r(G),

(I − H)(Zvi) = Zvi − Gvi ∈ span{(Z − G)vt+1, · · · , (Z − G)vr(G)}.

Thus R(I − H) contains both N(H) ∩ N(G), which lies in N(G), and the span above, which meets N(G) trivially, so their dimensions add and

r(I − H) ≥ {r(Z) − r(G)} + {r(G) − dim(R(G) ∩ N(G))}, from (4) and (5),
         = r(Z) − dim(R(G) ∩ N(G)).

Thus it follows from the theorem in [2] that

n(E1) + · · · + n(Ek) ≥ r(I − H) ≥ r(Z) − dim(R(G) ∩ N(G)).

Conversely, suppose r(G) ≤ ⌊m/2⌋, and let

0 ≤ nE1 ≤ · · · ≤ nEk ≤ n(G) and r(G) ≤ rZ ≤ ⌊m/2⌋

be integers such that nE1 + · · · + nEk ≥ rZ − dim(R(G) ∩ N(G)). We will establish the converse by first showing that G can be expressed as G = HZ, where Z is square-zero of rank rZ and H is such that

n(H) = nH := max{nEk, rZ − r(G)} and nE1 + · · · + nEk ≥ r(I − H).

Now refer to the G-decomposition of Fm and further decompose N as follows:

N = span{yt+1, · · · , yr(G)} ⊕ N1 ⊕ N2 ⊕ N3,

where {yt+1, · · · , yr(G)} is linearly independent and dim(N1) = dim(N2) = rZ − r(G).


Thus

dim(N3) = {n(G) − dim(R(G) ∩ N(G))} − {r(G) − t} − dim(N1) − dim(N2)
        = {n(G) − dim(R(G) ∩ N(G))} − {r(G) − t} − 2(rZ − r(G))
        = m − 2rZ.

Define Z on Fm = N(G) ⊕ V as follows:

• vi → Gvi, 1 ≤ i ≤ t
• vi → yi, t < i ≤ r(G)
• N1 → N2 isomorphically
• (R(G) ∩ N(G)) ⊕ span{yt+1, · · · , yr(G)} ⊕ N2 ⊕ N3 → 0

Then

R(Z) = (R(G) ∩ N(G)) ⊕ span{yt+1, · · · , yr(G)} ⊕ N2,
N(Z) = (R(G) ∩ N(G)) ⊕ span{yt+1, · · · , yr(G)} ⊕ N2 ⊕ N3,

so R(Z) ⊆ N(Z) ⊆ N(G), which implies that Z is square-zero of rank rZ and a right divisor of G.

Define H on Fm = N(G) ⊕ R ⊕ W as follows:

• span{Zv1, · · · , Zvt} → span{Zv1, · · · , Zvt} identically (thus H(Zvi) = Zvi = Gvi, 1 ≤ i ≤ t)
• N1 ⊕ N3 ⊕ W → N1 ⊕ N3 ⊕ W idempotently, with rank to be determined
• N2 → 0
• span{Zvt+1, · · · , Zvr(G)} ⊕ R → span{Zvt+1, · · · , Zvr(G)} ⊕ R as follows:
  Zvi → Gvi, t + 1 ≤ i ≤ r(G)
  Gvi → Gvi, t + 1 ≤ i ≤ t + ñ
  Gvi → Zvi, t + ñ < i ≤ r(G)
  where ñ = min{r(G) − t, nH + r(G) − rZ}

(thus the Zvi's in the last line fall away if ñ = r(G) − t, and H becomes idempotent). It follows that G = HZ. By choosing the rank of H on N1 ⊕ N3 ⊕ W appropriately, we may arrange that n(H) = nH = max{nEk, rZ − r(G)}. To determine r(I − H), we consider the following two separate cases.

(a) ñ = r(G) − t ≤ nH − (rZ − r(G)).


In this case H is idempotent, and therefore r(I − H) = n(H). Since

nE1 + · · · + nEk ≥ rZ − dim(R(G) ∩ N(G)) ≥ rZ − r(G),

it follows that

nE1 + · · · + nEk ≥ max{nEk, rZ − r(G)} = n(H) = r(I − H).

According to the theorem in [2], H can be expressed as H = E1 · · · Ek, where Ei is idempotent with n(Ei) = nEi (1 ≤ i ≤ k). Thus the result follows in this case.

(b) ñ = nH − (rZ − r(G)) < r(G) − t.

It follows that n(H) = nH = ñ + (rZ − r(G)) = ñ + dim(N2), and that the null space of H is the direct sum of N2 and the null space of H acting on span{Zvt+1, · · · , Zvr(G)} ⊕ R. Thus H acts as the identity on both span{Zv1, · · · , Zvt} and N1 ⊕ N3 ⊕ W, so that

r(I − H) = dim(N2) + (r(G) − t) = (rZ − r(G)) + (r(G) − t) = rZ − dim(R(G) ∩ N(G)),

since I − H acts as the identity on N2 and the image of I − H on span{Zvt+1, · · · , Zvr(G)} ⊕ R is generated by

(I − H)(Zvi) = Zvi − Gvi, t + 1 ≤ i ≤ r(G),
(I − H)(Gvi) = 0, t + 1 ≤ i ≤ t + ñ,
(I − H)(Gvi) = Gvi − Zvi, t + ñ < i ≤ r(G),

which by Lemma 2(a) is of dimension r(G) − t. Thus the result also follows in this case, which completes the proof. □
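To make the construction concrete, here is a small worked instance of Theorem 1 with k = l = 1 (my own example over the reals, not taken from the paper). For G = diag(1, 0, 0, 0) the recipe of the converse part produces a square-zero Z sending e1 to e2 and an idempotent factor sending e2 to e1 while fixing e1, e3, e4.

```python
import numpy as np

# m = 4, G = diag(1,0,0,0):  r(G) = 1 <= m/2  and  R(G) ∩ N(G) = {0}.
G = np.diag([1., 0., 0., 0.])

Z = np.zeros((4, 4)); Z[1, 0] = 1.               # Z: e1 -> e2, square-zero of rank 1
E = np.zeros((4, 4))                             # E: e1 -> e1, e2 -> e1, e3 -> e3, e4 -> e4
E[0, 0] = E[0, 1] = E[2, 2] = E[3, 3] = 1.       # idempotent of nullity 1

assert np.allclose(Z @ Z, 0)                     # Z is square-zero
assert np.allclose(E @ E, E)                     # E is idempotent
assert np.allclose(E @ Z, G)                     # G = EZ

# Condition (3) with k = 1:  n(E) >= r(Z) - dim(R(G) ∩ N(G)) = 1 - 0 = 1.
n_E = 4 - np.linalg.matrix_rank(E)
r_Z = np.linalg.matrix_rank(Z)
print("G = EZ with n(E) =", n_E, "and r(Z) =", r_Z)   # both equal 1
```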


4. Products with two square-zero factors

For G ∈ Mm×m(F) we denote by n0(G) the maximum number of linearly independent vectors in N(G) with zero height, that is,

n0(G) := n(G) − dim(R(G) ∩ N(G)).

Lemma 3. Let G ∈ Mm×m(F), F an arbitrary field, be such that

r(G) ≤ n0(G) = n(G) − dim(R(G) ∩ N(G)),

and let k ≥ 1 and l ≥ 2 be integers. Then G can be expressed as

G = E1 · · · Ek Z1 · · · Zl,

where the Ei are idempotent and the Zj square-zero, and where the nullities of the Ei (1 ≤ i ≤ k) and the ranks of the Zj (1 ≤ j ≤ l) can be chosen arbitrarily, subject only to the basic conditions 0 ≤ n(Ei) ≤ n(G) and r(G) ≤ r(Zj) ≤ ⌊m/2⌋.

Proof. From Lemma 1(b) there exist idempotent matrices Ei with arbitrary nullities 0 ≤ n(Ei) ≤ n(G), 1 ≤ i ≤ k, such that G = GE1 · · · Ek, and from [4, Theorems 3 and 7] there exist square-zero matrices Zj with arbitrary ranks r(G) ≤ r(Zj) ≤ ⌊m/2⌋, 1 ≤ j ≤ l, such that G = Z1 · · · Zl. Thus

G = Z1 · · · Zl E1 · · · Ek,

and by Notes (a) and (b) the factors may be rearranged, up to similarity, so that the idempotent factors appear first. □
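The quantity n0(G) used throughout this section is straightforward to compute. A minimal sketch (my own, over the reals with numpy/scipy; not code from the paper) that evaluates n0(G) and checks whether the hypothesis r(G) ≤ n0(G) of Lemma 3 holds for two sample matrices:

```python
import numpy as np
from scipy.linalg import null_space

def n0(G):
    """n0(G) = n(G) - dim(R(G) ∩ N(G)), using
    dim(R ∩ N) = r(G) + n(G) - dim(R(G) + N(G))."""
    rk = np.linalg.matrix_rank
    m = G.shape[0]
    r, n = rk(G), m - rk(G)
    N = null_space(G)                              # columns span N(G)
    dim_R_cap_N = r + n - rk(np.hstack([G, N]))    # columns of G span R(G)
    return n - dim_R_cap_N

G1 = np.diag([1., 0., 0., 0.])                     # r = 1, R(G) ∩ N(G) = {0}, n0 = 3
G2 = np.zeros((4, 4)); G2[0, 2] = G2[1, 3] = 1.    # square-zero: r = 2, n0 = 0
for G in (G1, G2):
    r = np.linalg.matrix_rank(G)
    print("r(G) =", r, " n0(G) =", n0(G), " Lemma 3 applies:", r <= n0(G))
```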

Theorem 2. Let G ∈ Mm×m(F), F an arbitrary field, and let k ≥ 1 be an integer. Then G can be expressed as

G = E1 · · · Ek Z1Z2, where the Ei are idempotent and Z1, Z2 square-zero,   (6)

if and only if r(G) ≤ ⌊m/2⌋.

Moreover, if a factorization (6) exists, then the nullities of the Ei (1 ≤ i ≤ k) and the ranks of Z1, Z2 can be chosen arbitrarily subject to

n(E1) + · · · + n(Ek) ≥ r(G) − n0(G)   (7)


and the basic conditions 0 ≤ n(Ei) ≤ n(G) and r(G) ≤ r(Z1), r(Z2) ≤ ⌊m/2⌋.

Proof. Suppose G can be expressed as in (6). Then r(G) ≤ ⌊m/2⌋, since G contains a square-zero factor. To show that (7) holds we proceed as follows. Let H = E1 · · · Ek and F = Z1Z2. It follows from [4, Theorem 3] that r(F) ≤ n(F) − dim(R(F) ∩ N(F)). If r(G) ≤ n0(G), then (7) holds trivially. Assume therefore that r(G) > n0(G). Let

R(G) ∩ N(F) = {R(G) ∩ R(F) ∩ N(F)} ⊕ span{Gu1, · · · , Gun},

where {Gu1, · · · , Gun} is linearly independent. Then the vectors (I − H)(Fui) = Fui − Gui (1 ≤ i ≤ n) are linearly independent by Lemma 2(a), since R(F) ∩ span{Gu1, · · · , Gun} = {0}. Thus

r(I − H) ≥ dim(R(G) ∩ N(F)) − dim(R(G) ∩ R(F) ∩ N(F)).

Now

dim(R(G) ∩ N(F)) = dim(R(G) ∩ N(G) ∩ N(F)), since N(F) ⊆ N(G),
                 = dim(R(G) ∩ N(G)) + n(F) − dim(R(G) ∩ N(G) + N(F))
                 ≥ dim(R(G) ∩ N(G)) + n(F) − n(G), since R(G) ∩ N(G) + N(F) ⊆ N(G),
                 = n(F) − n0(G),   (8)

so


r(I − H) ≥ dim(R(G) ∩ N(F)) − dim(R(G) ∩ R(F) ∩ N(F))
         ≥ n(F) − n0(G) − dim(R(F) ∩ N(F)), from (8),
         ≥ n(F) − n0(G) + r(F) − n(F), since r(F) ≤ n(F) − dim(R(F) ∩ N(F)),
         = r(F) − n0(G)
         ≥ r(G) − n0(G).

Thus it follows from the theorem in [2] that

n(E1) + · · · + n(Ek) ≥ r(I − H) ≥ r(G) − n0(G).

Conversely, suppose r(G) ≤ ⌊m/2⌋, and let

0 ≤ nE1 ≤ · · · ≤ nEk ≤ n(G) and r(G) ≤ rZ1 ≤ rZ2 ≤ ⌊m/2⌋

be integers such that nE1 + · · · + nEk ≥ r(G) − n0(G). If r(G) ≤ n0(G), then the converse follows from Lemma 3. Assume therefore that r(G) > n0(G) = n(G) − dim(R(G) ∩ N(G)). We will establish the converse by first showing that G can be expressed as G = HF, where F is such that

N(F) = N(G) and r(F) = n(F) − dim(R(F) ∩ N(F)),

and H is such that

n(H) = nH := max{nEk, r(G) − n0(G)} and nE1 + · · · + nEk ≥ r(I − H).

Now refer to the G-decomposition of Fm. Let s = r(G) − n0(G); then 0 < s ≤ dim(R(G) ∩ N(G)) = t. Choose {y1, · · · , ys} ⊆ W linearly independent and let W = span{y1, · · · , ys} ⊕ W̃ for some subspace W̃ of Fm.

Define F on Fm = N(G) ⊕ V as follows:


• vi → yi, 1 ≤ i ≤ s
• vi → Gvi, s < i ≤ r(G)
• N(G) → 0

Then

• R(F) = span{y1, · · · , ys} ⊕ span{Gvs+1, · · · , Gvr(G)}
• N(F) = N(G)
• R(F) ∩ N(F) = span{Gvs+1, · · · , Gvt},

so F is a right divisor of G such that r(F) = r(G) and

dim(R(F) ∩ N(F)) = t − s = dim(R(G) ∩ N(G)) − (r(G) − n0(G)) = n(G) − r(G).

Thus r(F) = n(F) − dim(R(F) ∩ N(F)), since r(F) = r(G) and n(F) = n(G). From [4, Theorem 3] there exist square-zero matrices Z1 and Z2 with ranks rZ1 and rZ2, respectively, such that F = Z1Z2.

Define H on Fm = N ⊕ R(G) ⊕ W as follows:

• span{Fvs+1, · · · , Fvr(G)} → span{Fvs+1, · · · , Fvr(G)} identically (thus H(Fvi) = Fvi = Gvi, s < i ≤ r(G))
• N ⊕ W̃ → N ⊕ W̃ idempotently, with rank to be determined
• span{Fv1, · · · , Fvs} ⊕ span{Gv1, · · · , Gvs} → span{Fv1, · · · , Fvs} ⊕ span{Gv1, · · · , Gvs} as follows:
  Fvi → Gvi, 1 ≤ i ≤ s
  Gvi → Gvi, 1 ≤ i ≤ s

It follows that G = HF, where H is idempotent. By choosing the rank of H on N ⊕ W̃ appropriately, n(H) can assume any value such that s ≤ n(H) ≤ n(G).


Choose therefore n(H) = max{nEk, s}, where s = r(G) − n0(G). Since nE1 + · · · + nEk ≥ s, we have

nE1 + · · · + nEk ≥ max{nEk, s} = n(H) = r(I − H).

According to the theorem in [2], H can be expressed as H = E1 · · · Ek, where Ei is idempotent with n(Ei) = nEi (1 ≤ i ≤ k). This completes the proof. □

Corollary 1. It follows from the proof of Theorem 2 that if a factorization (6) exists for given nullities n(Ei) (1 ≤ i ≤ k) and ranks r(Z1), r(Z2), then the idempotent factors can be chosen in such a way that their product H = E1 · · · Ek is also idempotent. □
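A small worked instance of Theorem 2 and Corollary 1 with k = 1 (my own example over the reals, not taken from the paper): the two square-zero factors multiply to an idempotent F, and a single idempotent E then carries F to G.

```python
import numpy as np

G = np.diag([1., 0., 0., 0.])                     # r(G) = 1 <= m/2

Z1 = np.zeros((4, 4)); Z1[0, 2] = Z1[1, 3] = 1.   # e3 -> e1, e4 -> e2
Z2 = np.zeros((4, 4)); Z2[2, 0] = Z2[3, 1] = 1.   # e1 -> e3, e2 -> e4
E  = np.diag([1., 0., 1., 0.])                    # idempotent, nullity 2 <= n(G) = 3

F = Z1 @ Z2                                       # F = diag(1,1,0,0), itself idempotent
assert np.allclose(Z1 @ Z1, 0) and np.allclose(Z2 @ Z2, 0)
assert np.allclose(E @ E, E)
assert np.allclose(E @ F, G)                      # G = E Z1 Z2

# Condition (7): n(E) >= r(G) - n0(G) = 1 - 3, which places no restriction here,
# while the basic conditions r(G) = 1 <= r(Z1) = r(Z2) = 2 <= m/2 hold.
print("G = E Z1 Z2 verified; r(Z1) = r(Z2) =", np.linalg.matrix_rank(Z1))
```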

Corollary 2. Let G ∈ Mm×m(F), F an arbitrary field, be such that r(G) ≤ ⌊m/2⌋, and let

0 ≤ nE1 ≤ · · · ≤ nEk ≤ n(G) and r(G) ≤ rZ1 ≤ rZ2 ≤ ⌊m/2⌋

be integers such that nE1 + · · · + nEk + n(G) ≥ r(I − G). Then G can be expressed as G = E1 · · · Ek Z1Z2, where the Ei are idempotent and Z1, Z2 square-zero, such that n(Ei) = nEi (1 ≤ i ≤ k) and r(Z1) = rZ1, r(Z2) = rZ2.

Proof. Since N(I − G) ⊆ R(G) and N(I − G) ∩ N(G) = {0}, we have

n(I − G) ≤ r(G) − dim(R(G) ∩ N(G)),

so


r(I − G) ≥ n(G) + dim(R(G) ∩ N(G)) ≥ r(G) + dim(R(G) ∩ N(G)),

and therefore

nE1 + · · · + nEk ≥ r(I − G) − n(G) ≥ r(G) − n0(G).

Thus the result follows from Theorem 2.

Alternatively, we can prove the result as follows. It follows from the theorem in [2] that G can be expressed as a product of idempotent matrices

G = E1 · · · Ek Ek+1, where n(Ei) = nEi (1 ≤ i ≤ k) and n(Ek+1) = n(G).   (9)

Since n(Ek+1) = n(G) and r(G) ≤ ⌊m/2⌋, we have r(Ek+1) = r(G) ≤ n(Ek+1), and since R(Ek+1) ∩ N(Ek+1) = {0}, this gives r(Ek+1) ≤ n(Ek+1) = n0(Ek+1). Therefore it follows from [4, Theorem 3] that Ek+1 can be expressed as a product of square-zero matrices Ek+1 = Z1Z2 such that r(Z1) = rZ1 and r(Z2) = rZ2. Substitution in (9) yields the result. □

Remark 1. A square-zero matrix Z ∈ Mm×m(F) is not necessarily a product of two square-zero matrices (e.g., when r(Z) = n(Z)), nor is a product F ∈ Mm×m(F) of two square-zero matrices necessarily square-zero (e.g., when F is idempotent and r(F) = n(F)). Thus the lower bounds for n(E1) + · · · + n(Ek) in Theorems 1 and 2 are not generally related. Indeed, if G ∈ Mm×m(F) is such that r(G) = n(G) and R(G) ∩ N(G) = {0}, then the lower bound in Theorem 1 is larger than the one in Theorem 2, and if G is square-zero with r(G) = n(G), then the lower bound in Theorem 2 is larger than the one in Theorem 1.

5. Products with three or more square-zero factors

Theorem 3. Let G ∈ Mm×m(F), F an arbitrary field, and let k ≥ 1 and l ≥ 3 be integers. Then G can be expressed as

G = E1 · · · Ek Z1 · · · Zl, where the Ei are idempotent and the Zj square-zero,   (10)

if and only if r(G) ≤ ⌊m/2⌋.

Moreover, if (10) holds, then the nullities of the Ei (1 ≤ i ≤ k) and the ranks of the Zj (1 ≤ j ≤ l) can be chosen arbitrarily, subject only to the basic conditions 0 ≤ n(Ei) ≤ n(G) and r(G) ≤ r(Zj) ≤ ⌊m/2⌋.


Proof. Suppose G can be expressed as in (10). Then r(G) ≤ ⌊m/2⌋, since G contains a square-zero factor.

Conversely, suppose r(G) ≤ ⌊m/2⌋. From Lemma 1(b) there exist idempotent matrices Ei with arbitrary nullities 0 ≤ n(Ei) ≤ n(G) (1 ≤ i ≤ k) such that G = GE1 · · · Ek, and from [4, Theorem 7] there exist square-zero matrices Zj with arbitrary ranks r(G) ≤ r(Zj) ≤ ⌊m/2⌋ (1 ≤ j ≤ l) such that G = Z1 · · · Zl; thus

G = Z1 · · · Zl E1 · · · Ek. □
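A small illustration of Theorem 3 with k = 1 and l = 3 (my own example over the reals, not from the paper): G = diag(1, 0, 0, 0) written as an idempotent factor followed by three rank-one square-zero factors, with all nullities and ranks inside the basic bounds.

```python
import numpy as np

def outer(i, j, m=4):
    """Rank-one matrix e_i e_j^T (0-based indices), i.e. the map e_j -> e_i."""
    M = np.zeros((m, m)); M[i, j] = 1.
    return M

G  = np.diag([1., 0., 0., 0.])      # r(G) = 1 <= m/2 = 2
Z3 = outer(1, 0)                     # e1 -> e2
Z2 = outer(2, 1)                     # e2 -> e3
Z1 = outer(0, 2)                     # e3 -> e1
E  = np.diag([1., 1., 1., 0.])       # idempotent, nullity 1 <= n(G) = 3

for Z in (Z1, Z2, Z3):
    assert np.allclose(Z @ Z, 0)     # each factor is square-zero of rank 1 = r(G)
assert np.allclose(E @ E, E)
assert np.allclose(E @ Z1 @ Z2 @ Z3, G)
print("G = E Z1 Z2 Z3 verified")
```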

Remark 2. It follows from [4, Theorem 7] that the result in Theorem 3 also holds if k = 0, that is, when there are no idempotent factors.

6. Factorizations with a prescribed idempotent or square-zero factor

In conclusion, we remark briefly on factorizations of the type G = ZE (equivalently, G = EZ) where E is idempotent, Z is square-zero, and either E or Z is prescribed. Such factorizations are treated in a more general context in [4, Theorem 1] and [3, Theorem 2]. However, in the special case where it is given that E is idempotent, we have the following stronger result.

Proposition 1. Let G, E ∈ Mm×m(F), where F is a field and E an idempotent matrix. Then

(a) G = ZE with Z ∈ Mm×m(F) square-zero if and only if N(E) ⊆ N(G) and R(G) ∩ R(E) ⊆ N(G);

(b) G = EZ with Z ∈ Mm×m(F) square-zero if and only if R(G) ⊆ R(E) and R(G) ⊆ N(G) + N(E).


Proof. (a) Note that E(R(G) ∩ R(E)) = R(G) ∩ R(E), since E is idempotent, and that E(N(G)) ⊆ N(G) whenever N(E) ⊆ N(G), since

Fm = N(E) ⊕ R(E) ⇒ N(G) = N(E) ⊕ (R(E) ∩ N(G)) ⇒ E(N(G)) = E(R(E) ∩ N(G)) = R(E) ∩ N(G) ⊆ N(G).

Thus

N(E) ⊆ N(G) and R(G) ∩ R(E) ⊆ N(G) ⇔ N(E) ⊆ N(G) and R(G) ∩ R(E) ⊆ E(N(G)) = R(E|N(G)),

and hence the result follows from [4, Theorem 1(c)].

(b) Suppose G = EZ, where Z ∈ Mm×m(F) is square-zero. Then R(G) ⊆ R(E), and E(G − Z) = EG − EZ = G − EZ = 0, hence

Gv ∈ R(Z) + N(E) ⊆ N(G) + N(E) for all v ∈ Fm,

since Z is square-zero and therefore R(Z) ⊆ N(Z) ⊆ N(G).

Conversely, suppose R(G) ⊆ R(E) and R(G) ⊆ N(G) + N(E). Then for v ∈ Fm,

Gv = Ew = y + z, where w ∈ Fm, y ∈ N(G) and z ∈ N(E)
   ⇒ Gv = E²w = Ey, where y ∈ N(G)
   ⇒ [ G    ]       [ E ]
     [      ] v  =  [   ] y,  where y ∈ N(G),
     [ Om×m ]       [ G ]

and hence the result follows from [4, Corollary 1(e)]. □
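The criteria of Proposition 1(a) are easy to test numerically for a given pair (G, E). The sketch below is my own illustration over the reals with numpy/scipy; the helper functions contained and col_space_intersection are ad hoc and not part of the paper. It checks the two subspace conditions and then verifies a concrete factorization G = ZE.

```python
import numpy as np
from scipy.linalg import null_space

def contained(A, B):
    """span(columns of A) ⊆ span(columns of B), tested via ranks."""
    rk = np.linalg.matrix_rank
    return rk(np.hstack([B, A])) == rk(B)

def col_space_intersection(A, B):
    """Columns spanning R(A) ∩ R(B): whenever A x = B y, the vector A x lies in both."""
    K = null_space(np.hstack([A, -B]))             # pairs (x; y) with A x = B y
    return A @ K[:A.shape[1], :]

# Example (my own): E idempotent, G = e2 e1^T.  Proposition 1(a) predicts that
# G = ZE for some square-zero Z; here Z = G itself works.
E = np.diag([1., 1., 0., 0.])
G = np.zeros((4, 4)); G[1, 0] = 1.

cond_a = (contained(null_space(E), null_space(G))                      # N(E) ⊆ N(G)
          and contained(col_space_intersection(G, E), null_space(G)))  # R(G) ∩ R(E) ⊆ N(G)
assert cond_a

Z = G                                              # candidate square-zero factor
assert np.allclose(Z @ Z, 0) and np.allclose(Z @ E, G)
print("Proposition 1(a) conditions hold and G = ZE with Z square-zero")
```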

Acknowledgements

The author wishes to express his sincere appreciation to the reviewers for their constructive and useful comments.

References

[1] C.S. Ballantine, Products of idempotent matrices, Linear Algebra Appl. 19 (1978) 81–86.
[2] J.D. Botha, Idempotent factorization of matrices, Linear Multilinear Algebra 40 (1996) 365–371.


[3] J.D. Botha, Matrix division with an idempotent divisor or quotient, Linear Algebra Appl. 446 (2014) 71–90.
[4] J.D. Botha, Square-zero factorization of matrices, Linear Algebra Appl. 488 (2016) 71–85.
[5] F. Bünger, F. Knüppel, K. Nielsen, The product of two quadratic matrices, Linear Algebra Appl. 331 (2001) 31–41.
[6] J.A. Erdos, On products of idempotent matrices, Glasg. Math. J. 8 (1967) 118–122.
[7] J.M. Howie, The subsemigroup generated by the idempotents of a full transformation semigroup, J. Lond. Math. Soc. 41 (1966) 707–716.
[8] F. Knüppel, K. Nielsen, A short proof of Botha's theorem on products of idempotent linear mappings, Linear Algebra Appl. 438 (2013) 2520–2522.
[9] P. Lenders, J. Xue, Factorization of singular integer matrices, Linear Algebra Appl. 428 (2008) 1046–1055.
[10] N. Novak, Products of square-zero operators, J. Math. Anal. Appl. 339 (2008) 10–17.
[11] J.-H. Wang, Factorization of matrices into quadratic ones. III, Linear Algebra Appl. 240 (1996) 21–39.