Matrix polynomials: Factorization via bisolvents

Accepted Manuscript

Nir Cohen, Edgar Pereira

PII: S0024-3795(17)30270-7
DOI: http://dx.doi.org/10.1016/j.laa.2017.04.033
Reference: LAA 14142
To appear in: Linear Algebra and its Applications
Received date: 10 December 2014
Accepted date: 24 April 2017

Please cite this article in press as: N. Cohen, E. Pereira, Matrix polynomials: Factorization via bisolvents, Linear Algebra Appl. (2017), http://dx.doi.org/10.1016/j.laa.2017.04.033

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Matrix Polynomials: Factorization via Bisolvents

Nir Cohen¹, Edgar Pereira¹

Abstract:

We reconsider the classification of all the factorizations of a matrix polynomial P as P = QR with Q a matrix polynomial and R(λ) = λT − S a regular matrix pencil. It is shown that the entire classification problem can be reduced to the simpler classification of factors R with commuting coefficients S, T. It is then shown that, for these commuting factors, S and T must satisfy a certain algebraic equation which we call the bisolvent equation. This extends the generalized Bézout theorem which associates monic factors λI − S with solutions S of a solvent equation. In case P is regular, the classification of commuting pairs (S, T) of this type (up to left equivalence) is described in terms of enlarged standard pairs, following a well known approach. Under a non-derogatory generic condition on the roots of P, the number of such pairs associated with degree-minimal factorizations is finite, and admits explicit description in terms of Jordan chains.

MSC2010: 15A23, 15A21, 15A22

Keywords: Matrix polynomial, solvent, standard pair, commuting matrices, factorization, Weierstrass form.

1 Introduction

There is interest in extending the fundamental theorem of algebra, from polynomials over an algebraically closed field F, to matrix-valued polynomials P(λ) = Σ_{j=0}^{π} λ^j P_j over F. Here, one needs to reconsider the fundamental notions of factorability and roots, due to the lack of commutativity, and to re-evaluate the equivalence between them. As far as factorability is concerned, the existence and uniqueness of the factorization are no more guaranteed, irreducible factors may be nonlinear, and the sum of degrees of the factors may exceed the degree of their product. Known results are either based on spectral analysis, as summarized e.g. in [16] §7, or numerical approaches [2], both mostly restricted to regular matrix polynomials (det P(λ) ≢ 0).

¹ Department of Mathematics, UFRN - Universidade Federal do Rio Grande do Norte, Natal, Brazil ([email protected], [email protected]).

At the other front, attempts to define "matrix-valued roots" of a matrix-valued polynomial go back to several papers of Sylvester from the 1880s (cited in [18] and [23]). For second order polynomials P(λ) = Aλ² + Bλ + C, natural candidates include square matrix solutions X of the unilateral equation AX² + BX + C = 0, often called right solvents (or the left-handed equivalent), or solutions of the Riccati equation XAX + BX + C = 0. For higher order polynomials, say of order π, more recent studies center around the equation

P(S) = Σ_{j=0}^{π} P_j S^j = 0,    (1)

whose solutions are also often called right solvents of P. The generalized B´ezout theorem reaffirms the equivalence with factorability: Theorem 1 ([11] vol. I, pp. 81-82, 228): A monic pencil R(λ) = λI − S is a right divisor of P (with left factor Q, necessarily of degree deg Q = π − 1) iff S is a right solvent of P . The calculation of solvents, too, can be carried out numerically (Dennis et al [7], [8], Higham-Kim [18],[19], Shieh et al [26]) or using spectral notions (Roth [23], Bell [3], Pereira [21],[22]). A natural application is into systems of linear ODEs, where S in (1) is the derivative operator (see a detailed discussion in [20] §20). One limitation of Theorem 1 is that it only describes monic first order right factors. We may extend the theory to comonic polynomials P, Q, R by symmetry (λ → 1/λ) , and then to regular polynomials P, Q, R using bilinear change of variable μ = (aλ + b)/(cλ + d)). However, this algorithmic approach, which involves two steps of reduction, is indirect and computationally complicated, and does not yield an explicit direct extension of Theorem 1. In this paper we present a direct extension of Theorem 1 which matches a complete classification of polynomial factorizations P = QR,

deg R = 1,

det R(λ) ≡ 0;

(2)

namely, with R(λ) = λT − S a regular matrix pencil and Q a matrix polynomial of any degree. This is done in two steps: first, assuming the commuting assumption T S = ST 2

(3)

and then considering the general non-commuting case. Our approach for commuting pencils requires a generalization of the algebraic equation (1). As is well known, in the matrix case the finite roots are no more adequate for a complete spectral description of P(λ), and an infinite root has to be added. Equivalently, one replaces roots s ∈ F, based on the polynomial identity Σ_{i=0}^{π} P_i s^i = 0, by elementary divisors (s, t) ∈ F²/∼, based on the homogeneous identity Σ_{i=0}^{π} P_i s^i t^{π−i} = 0 (see Gantmacher [11] vol. II, p. 26-28, following Kronecker's work). In analogy, here we abandon the solvent identity (1) in favor of the identity

P_L(S, T; k) = Σ_{j=0}^{π} P_j S^j T^{k−j} = 0.    (4)

If the pair S, T is a solution of this equation, we call the pencil λT − S a right bisolvent of P, and consider k to be the bisolvent order. Observe that

P_L(S, T; k + 1) = P_L(S, T; k) T    for all k ≥ π.    (5)

Thus, a bisolvent of order k is also a bisolvent of order k + 1. Unlike right solvents, right bisolvents lack most of the properties of elementary divisors as factors of P; however, under the commutation assumption (3) we reach a complete analogue of Theorem 1:

Theorem 2 Let R(λ) be a regular commuting pencil. The following are equivalent for any k ≥ π: (i) R is a right factor in a factorization P = QR with deg Q ≤ k − 1; (ii) R is a right bisolvent of order k of P.

Proofs for our main results are given in §3. A factorization of type (i) in Theorem 2 will be called a k-factorization. The monic case T = I is treated by Theorem 1, and the comonic case S = I is analogous, though not as commonly met (but see e.g. [27]). Solutions of the equation Σ_{i=0}^{π} P_i S^{k−i} = 0 are called reverse solvents. Defining the reverse polynomial P̃(λ; k) = Σ_{i=0}^{π} P_i λ^{k−i} = λ^k P(1/λ), it is readily seen that every reverse solvent of P is a solvent of P̃(λ; k). We digress to comment on the determination of k ≥ π in the factorization P = QR. The parameter k = deg Q + deg R = deg Q + 1 admits the obvious lower bound k = π = deg P; and in §2 we show that when R is a regular m×m pencil we also have the upper bound k = π + m. Clearly,

factorizations of low degrees are of particular interest, and especially degree-minimal factorizations with k = π; however, the existence of those is not guaranteed in general. In §6 we show that even a minimal factorization (in the sense of the McMillan degree, see [24] and references therein) of a regular matrix polynomial may fail to be degree-minimal. This state of affairs justifies our interest in the entire allowable range [π, π + m] of factorization degrees. We address now the more general case of non-commuting factors. The idea is to reduce the general case to the commuting case, using the following elementary reasoning. Together with any given factorization P = QR we also have the factorization P = (QG^{−1})(GR) for all G ∈ GL_m(F). Thus, right factors form complete orbits w.r.t. left equivalence: [R] = {GR : G ∈ GL_m(F)}. Furthermore, each orbit [λT − S] contains commuting pencils (in fact, separable pencils, see Lemma 7(i)). By choosing a commuting representer from each orbit, the entire classification of factors R(λ) can be performed, up to left equivalence, reaching our second central result.

Theorem 3 Let R be a non-constant regular pencil. The following are equivalent for k ∈ N: (i) R is a right factor in a k-factorization of P (i.e. deg Q + deg R ≤ k); (ii) R is left equivalent to a commuting right bisolvent of order k of P.

As observed earlier, w.l.o.g. we may restrict k to the interval [π, π + m]. The regularity assumption on R in Theorems 2 and 3 is essential (see §7); however, P and Q are not required to be regular or even square. An extreme case is the trivial polynomial P(λ) ≡ 0 ∈ M_{n,m}(F), which admits any possible (regular m × m) pencil as both right bisolvent and right factor. As the commuting representer is not unique, its choice poses a certain difficulty, treated in Lemma 7. Let us call the orbit [λT − S] essentially monic (resp. essentially comonic) when T (resp. S) is invertible.
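For an essentially monic orbit the representer is λI − S, and the bisolvent equation (4) with T = I collapses to the solvent equation (1). The "if" direction of Theorem 1 can be checked numerically; the sketch below uses randomly generated matrices (hypothetical data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a 2x2 matrix S and a monic left factor Q(lam) = lam*I + Q0.
# The product P(lam) = Q(lam)(lam*I - S) then has coefficients
#   P2 = I,  P1 = Q0 - S,  P0 = -Q0 @ S.
S = rng.standard_normal((2, 2))
Q0 = rng.standard_normal((2, 2))
P2, P1, P0 = np.eye(2), Q0 - S, -Q0 @ S

# Generalized Bezout theorem (Theorem 1): since lam*I - S is a right divisor
# of P, the matrix S is a right solvent: P(S) = P2 S^2 + P1 S + P0 = 0.
assert np.allclose(P2 @ S @ S + P1 @ S + P0, 0.0)
```

Conversely, starting from a right solvent S, division of P on the right by λI − S leaves zero remainder, which is the other direction of the theorem.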
Clearly, an essentially monic orbit admits a unique monic representer, and an essentially comonic orbit admits a unique comonic representer. (In case both S and T are invertible, the monic and comonic representers may be different!) If the regular orbit [λT − S] is so that both S, T are singular matrices, it admits (up to left equivalence and, if T, S do not commute, bilinear change of variable) no representer of the monic or comonic type, but may still admit a commuting bisolvent as a representer. It is in these situations where Theorem 3 improves upon Theorem 1. As a result of Theorem 3, the two classification problems (of right regular pencil factors, and of commuting bisolvents) are shown to be equivalent, and one approach, sketched here in §4, is to follow (with some adaptations) the classical spectral analysis of factorization theory, initially developed in the monic case (P_π = I) by I. Gohberg and collaborators in the late 1970s, based on standard/Jordan pairs (X, J) (see [16] Corollary 4.7 or [15] Theorems 2.1-2), then extended to the regular case (det P ≢ 0) in the early 1980s ([4], see also [16] §7.1). Assuming P to be regular, the associated Jordan pair, as defined in [4], is a pair of matrices X ∈ M_{m,πm}(F), J ∈ M_{πm,πm}(F) codifying, respectively, Jordan chains and roots of P. Here, however, a slightly generalized version will be needed, in which the standard pair has augmented size: X ∈ M_{m,km}(F), J ∈ M_{km,km}(F). This generalization is both straightforward and necessary, if we want to describe factorizations with k = deg Q + deg R > deg P = π (the case k = π is sufficient if e.g. P, Q, R are assumed monic). This will be done in §4. It can be seen that the increase in size does not affect the partial multiplicities of finite roots, but inflates the multiplicities at the root at ∞. The reader may consult [25] for the related technicalities. As it turns out, a right factor R of P in a k-factorization P = QR can be constructed from any J-invariant subspace V ⊂ F^{km} of dimension ρm, where ρ = deg R (we shall really only need the case ρ = 1). Roth's 1930 paper [23] is remarkable for its pioneering use of Jordan chains in the construction of solvents, even if P is not regular. It shows that, unlike the regular case, in the singular case part of the spectrum of the solvent may be chosen arbitrarily.
Roth's use of truncated Jordan chains ([23]) is not as general as the invariant subspace approach, since truncation always defines a marked invariant subspace, and it is still an open problem whether every factorization (hence, every commuting bisolvent) defines a marked restriction (this point is discussed in §4.3; see also the discussion in [17] §2). However, it can easily be verified that if P satisfies a simple (and, within the class of regular matrix polynomials, generic) non-derogatory condition, every restriction of a (minimal) Jordan pair (X, J) is a true truncation (for the monic case, see [17] Thm 2.9.2):

Lemma 4 (i) Assume that every root of P (including the root at ∞) has geometric multiplicity 1. Then P admits a finite number of distinct left equivalence orbits [R(λ)] with R(λ) a regular pencil so that P = QR is a minimal-degree polynomial factorization. (ii) Furthermore, let (X, J) be a Jordan pair of P. Then every orbit described in (i) is determined by an m-dimensional truncation of (X, J). (For a proof, see Lemma 11.)

Both the non-derogatory and marked properties are local, i.e. can be defined at each eigenvalue of P. Accordingly, Lemma 4 admits weaker, local versions where P is assumed non-derogatory w.r.t. some part of its spectrum. See e.g. Pereira [21]. It would be interesting to extend the analysis of commuting bisolvents to singular matrix polynomials, as well as to right factors R(λ) of degree ρ > 1.
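The finiteness in Lemma 4(i) is easy to see concretely in the simplest monic setting. The following sketch (the quadratic below is hypothetical example data) reads eigenpairs of P off its companion linearization; every choice of m eigenpairs with independent eigenvectors yields a right solvent, and with 2m distinct eigenvalues the count is finite:

```python
import numpy as np
from itertools import combinations

# Hypothetical monic 2x2 quadratic P(lam) = lam^2 I + P1 lam + P0 whose
# determinant is (lam-1)(lam-2)(lam-5)(lam-7): four distinct roots, each of
# geometric multiplicity 1, so P is non-derogatory.
P0 = np.diag([2.0, 35.0])
P1 = np.diag([-3.0, -12.0])

m = 2
# Companion linearization: eigenvectors of C have the form [v; lam*v] with
# P(lam) v = 0, so the top halves are eigenvectors of P.
C = np.block([[np.zeros((m, m)), np.eye(m)], [-P0, -P1]])
evals, evecs = np.linalg.eig(C)

solvents = []
for idx in combinations(range(2 * m), m):
    V = evecs[:m, list(idx)]           # candidate eigenvector basis of F^m
    if abs(np.linalg.det(V)) < 1e-8:   # the m eigenvectors must be independent
        continue
    S = V @ np.diag(evals[list(idx)]) @ np.linalg.inv(V)
    assert np.allclose(S @ S + P1 @ S + P0, 0.0, atol=1e-8)  # right solvent
    solvents.append(S)

# Of the 6 ways to choose 2 of the 4 eigenpairs, exactly 4 give independent
# eigenvectors, so this P has exactly 4 right solvents (4 monic factor orbits).
assert len(solvents) == 4
```

This is precisely the truncation procedure of Lemma 4(ii) in the diagonalizable case: each solvent corresponds to an m-dimensional invariant subspace of the Jordan matrix J.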

2 On the factorization degree

Every matrix polynomial P over F is decomposable, in the sense that P admits non-trivial polynomial factorizations P = QR; but not necessarily strongly decomposable, in the sense that deg Q + deg R = deg P. The monic 2 × 2 strongly decomposable matrix polynomials over R or over C are described in [5]; the comonic case is analogous. These cases provide examples of a polynomial of arbitrary degree π which admits no bisolvent of order π, hence no minimal degree factorization. To date, as far as we know, there is no description of strongly decomposable matrix polynomials for m > 2. The only restriction which binds the three degrees of any factorization of matrix polynomials, P = QR, is deg P ≤ deg Q + deg R. In other words, using π = deg P and k = deg Q + deg R, k has the lower bound π, but not an upper bound. A trivial example is 0 = 0R with R of arbitrary degree. Now, assuming a priori that R is an m × m regular pencil, there is also an upper bound. In case R is essentially monic, the factorization is degree-minimal; otherwise, let ν(R) be the largest partial multiplicity in R of the root at infinity (i.e. the size of the largest "infinite Jordan cell" in the Weierstrass form, [11] vol. II, p. 25). Clearly, 0 ≤ ν(R) ≤ m, with ν(R) = 0 iff R is essentially monic.

Lemma 5 Given a regular m × m pencil R, and non-negative integers π, κ, the following are equivalent:

(i) π = deg P and κ = deg Q + 1 for some matrix polynomials P, Q so that P = QR;
(ii) 0 ≤ κ − π' ≤ ν(R), where π' := max{π, 1}.

Proof: (i) → (ii) The left inequality is trivial. For the right inequality, assume that R is not essentially monic, with Weierstrass form

R(λ) = G R̂(λ) H^{−1},    R̂(λ) = (λI − J') ⊕ (λJ'' − I),    G, H ∈ GL_m(F),

with J'' nilpotent of order ν := ν(R) ≥ 1 [11]. Under a suitable block division, G = [G', G''] and H = [H', H'']. Assuming furthermore that P = QR we have

P(λ)H' = [Q(λ)G'](λI − J'),    P(λ)H'' = [Q(λ)G''](λJ'' − I).

The first factorization involves a monic right factor, hence is degree-minimal; due to the identity

I = (Σ_{j=0}^{ν−1} λ^j J''^j)(I − λJ'')    (6)

the second factorization can be written as Q(λ)G'' = −P(λ)H'' (Σ_{j=0}^{ν−1} λ^j J''^j). Now we have

deg Q + deg R = max{deg QG' + 1, deg QG'' + 1} ≤ max{deg PH', deg PH'' + (ν − 1) + 1} ≤ deg P + ν.

(ii) → (i) First assume that R(λ) = I − λJ'' where J'' is a nilpotent Jordan block of size ν. For all j (0 ≤ j ≤ ν − 1) and π' = π > 0 define

P(λ) = λ^π J''^j,    Q(λ) = P(λ) Σ_{i=0}^{ν−j−1} (λJ'')^i.    (7)

We have P = QR and deg Q + deg R − deg P = ν − j. As 0 ≤ j ≤ ν − 1 we get factorizations with π arbitrary and 1 ≤ k − π ≤ ν. To get k − π = 0 we use the alternative factorization with Q = λ^{π−1} I and P = QR. The case π = 0 < k is handled similarly, concluding the proof for a single cell. When the Weierstrass form of R has more than one cell, and R is not essentially monic, without loss of generality we may write R in Weierstrass form, as a direct sum of cells R_i (1 ≤ i ≤ α), some monic and some comonic. Let ν_i be the size of the cell, if comonic, and 0 otherwise. To construct a suitable factorization P = QR, we assume that P and Q have compatible direct sum structures in terms of blocks P_i, Q_i. Fixing L, a non-negative integer, we choose Q_i = λ^{k_i} I where

k_i = L + ν_i. From the first part of the proof it follows that P_i = Q_i R_i has degree π_i which is limited only by the inequality k_i − ν_i ≤ π'_i ≤ k_i, namely, L ≤ π'_i ≤ L + ν_i, where π'_i := max{π_i, 1}. Thus, setting ν = max ν_i, we have that π' = max π'_i can assume any number between L = deg Q − ν and L + ν = deg Q.

By taking the worst case ν(R) = m in the previous Lemma, we get the following immediate corollary:

Lemma 6 Given an n × m matrix polynomial P(λ) of order π > 0, and an integer k, a necessary condition for the existence of a k-factorization P = QR with R a regular pencil is that π ≤ k ≤ π + m.

The value k − π expresses the redundancy of factorization, in terms of the net length of the Jordan chains at ∞; and one way of producing redundant factorizations is to choose R(λ) to be unimodular (i.e. of constant nonzero determinant). In fact, as the reader can easily verify, to attain the upper bound k = π + m it suffices to choose R(λ) = λT − I with T a suitably chosen nilpotent matrix of nilpotency order m. Not every matrix polynomial attains the lower bound k = π in Lemma 6. One example is

P(λ) = [ 1  λ² ]
       [ 0  1  ],

which is not a product of two pencils (see §6.5). Recall [25] that P = QR is a minimal factorization in the sense of the extended McMillan degree δ', including poles at infinity, if δ'(P) = δ'(Q) + δ'(R) (see also [17] §7.1). It is interesting to find an upper bound for k in Lemma 6 under this additional constraint. We return to this issue in §6.
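The unimodular mechanism behind the upper bound can be exercised numerically. In the sketch below (hypothetical data with m = 2, π = 1), R(λ) = λT − I with T nilpotent of nilpotency order m is unimodular, and Q = PR^{−1} = −P(I + λT + ... + λ^{m−1}T^{m−1}) is a polynomial of degree π + m − 1, so the factorization attains k = π + m:

```python
import numpy as np

m = 2
T = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # nilpotent, T^2 = 0, nilpotency order m = 2
# R(lam) = lam*T - I is unimodular: det R(lam) = 1 for every lam.

# Hypothetical P of degree pi = 1: P(lam) = lam*P1 + P0 with P1 = I.
P1 = np.eye(m)
P0 = np.array([[1.0, 2.0],
               [3.0, 4.0]])

# Q(lam) = -P(lam)(I + lam*T) has coefficients:
Q0 = -P0
Q1 = -(P1 + P0 @ T)
Q2 = -(P1 @ T)
assert np.any(Q2 != 0)            # deg Q = 2, so k = deg Q + 1 = 3 = pi + m

# Coefficientwise check of P = Q R, where Q(lam)(lam*T - I) is expanded:
assert np.allclose(-Q0, P0)               # lam^0 coefficient
assert np.allclose(Q0 @ T - Q1, P1)       # lam^1 coefficient
assert np.allclose(Q1 @ T - Q2, 0.0)      # lam^2 coefficient (uses T^2 = 0)
assert np.allclose(Q2 @ T, 0.0)           # lam^3 coefficient
```

The redundancy k − π = m here is carried entirely by the Jordan chains of R at infinity, as described above.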

3 Separability

Before proving our main results, Theorems 2 and 3, we need to redefine the notion of a canonical form of a regular matrix pencil under left equivalence in a context wider than the monic or comonic. It is not difficult to observe that every factorization orbit [λT − S] which contains one commuting member (S, T) actually contains an infinite number of commuting members; in fact, in some cases the entire orbit consists of commuting pencils, including a single monic pencil and a single (usually different!) comonic pencil. Besides the lack of uniqueness of the representer, one should settle the question of existence. A pencil will be called separable if it is of the form

G[(λI − S') ⊕ (λT'' − I)]G^{−1},    G ∈ GL_m(F);    (8)

and weakly separable if it admits the more general form

G[(λT' − S') ⊕ (λT'' − S'')]G^{−1},    G ∈ GL_m(F)    (9)

with (S', T') and (S'', T'') both commuting and with T', S'' invertible.

Lemma 7 (i) Every regular pencil R(λ) = λT − S is left (as well as right) equivalent to a separable pencil. (ii) Every regular commuting pencil is weakly separable.

Proof. The proofs of items (i) and (ii) are based on two classical results considered unrelated in general: the Weierstrass form and a result on two commuting matrices. In item (i), for definiteness we only show the left equivalence claim. Indeed, every regular pencil R is equivalent to a pencil in Weierstrass form [11], say

R(λ) = H[(λI − J_1) ⊕ (λJ_2 − I)]G^{−1},    G, H ∈ GL_m(F);    (10)

but then R is left equivalent to the separable pencil (8). In item (ii), assume that R(λ) = λT − S is regular and T, S commute. We may find a similarity

G = [G', G''],    G' ∈ M_{m,m'}(F),  G'' ∈ M_{m,m''}(F),    (11)

with m = m' + m'', so that

T = G[T' ⊕ T'']G^{−1},    T' ∈ M_{m'}(F) invertible,  T'' ∈ M_{m''}(F) nilpotent.

Commutation of R implies that S = G[S' ⊕ S'']G^{−1}, with both pencils λT' − S' and λT'' − S'' commuting. Regularity of R implies that λT'' − S'' is regular. In addition, T'' and S'' commute and T'' is nilpotent, and this can be shown to imply that S'' is invertible.

Separable pencils are regular and commuting. However, not every regular commuting pencil is separable, as shown by the example

(λ − 1) [ 1  1 ]
        [ 0  1 ].

We now prove the main results announced earlier.

Proof of Theorem 2. By Lemma 7 we may assume that R is weakly separable, as in (9). Defining the rational matrix function Q := P R^{−1}, and writing G = [G', G''] as in (11), we get

X'(λ) := Q(λ)G' = [P(λ)G'][λT' − S']^{−1},    (12)

X''(λ) := Q(λ)G'' = [P(λ)G''][λT'' − S'']^{−1}.    (13)

As both pairs S', T' and S'', T'' commute, and T', S'' are invertible, we may use the power expansions

(λT' − S')^{−1} = Σ_{j=0}^{∞} λ^{−j−1} T'^{−j−1} S'^j    (|λ| >> 1),    (14)

(λT'' − S'')^{−1} = −Σ_{j=0}^{∞} λ^j T''^j S''^{−j−1}    (|λ| << 1),    (15)

along with the finite expansion P(λ) = Σ_{i=0}^{π} P_i λ^i valid for all λ ∈ C. Substitution of these expansions in (12), (13) leads to the converging power series

X'(λ) = Σ_{i=−∞}^{π−1} X'_i λ^i,    X''(λ) = Σ_{i=0}^{+∞} X''_i λ^i.

Now, the factorability assumption means that Q, hence both X', X'', are matrix polynomials of degree ≤ k − 1, which is equivalent to the vanishing of the coefficients X'_i (i < 0) and X''_i (i ≥ k); and direct calculation provides

X'_{−t} = Y' T'^{−t−k} S'^{t−1}    (Y' = Σ_{i=0}^{π} P_i G' T'^{k−i} S'^i,    t = 1, 2, ...),

X''_t = Y'' T''^{t−k} S''^{−t−1}    (Y'' = −Σ_{i=0}^{π} P_i G'' T''^{k−i} S''^i,    t = k, k + 1, ...)

(observe that only the invertible matrices T', S'' assume negative powers). Thus, Q is a polynomial of order ≤ k − 1 iff both Y' and Y'' vanish, i.e. iff Σ_{i=0}^{π} P_i T^{k−i} S^i vanishes, namely, iff R(λ) is a bisolvent of order k. In the special case S = I, where R(λ) is comonic, Theorem 2 and the proof given here can be found in Wegge's paper [27], reaching the reverse solvent equation.

Proof of Theorem 3. The implication (ii) → (i) follows from Theorem 2. For the converse, let λT − S be a right factor. Then λT̂ − Ŝ = (αT − S)^{−1}(λT − S), with det(αT − S) ≠ 0, is a commuting factor which by Theorem 2 is a bisolvent of P.

We include here a strengthened version of the implication (ii) → (i) just proven.

Lemma 8 Every regular right factor of P is left equivalent to a commuting separable right bisolvent of P in the form (8), with S' and T'' in Jordan form.

Proof. By Lemma 7, R(λ) is left equivalent to a separable pencil S(λ). Since both belong to the same orbit, S(λ) is also a right factor, and so by Theorem 2 it is also a right bisolvent. By an additional similarity operation, both S' and T'' can be put in Jordan form.

4 Spectral description of right factors

As explained in the introduction, we describe a general spectral procedure which recovers all the regular factorization orbits [λT − S] for a regular matrix polynomial P. As this procedure is based on existing theory, it will only be sketched. The construction includes three basic steps. First we extend the definition of standard pair, obtaining an enlarged version.

4.1. The direct spectral problem. (For more details see [4], [16] §7.1-7.6, where the special case k = π is described.) Let P be a regular m × m matrix polynomial of order π and let k ≥ π. A standard pair of order k, (X, J; k), for P is a pair X = [X', X''] ∈ M_{m,km}(F), J = J' ⊕ J'' ∈ M_{km}(F), under a suitable block division, satisfying two conditions:

Extended controllability of order k:

det [ X'           X''J''^{k−1} ]
    [ X'J'         X''J''^{k−2} ]
    [   ⋮               ⋮       ]
    [ X'J'^{k−1}   X''          ]  ≠ 0;    (16)

Bisolvency of order k: The separable pencil R(λ) := (λI − J') ⊕ (λJ'' − I) is a bisolvent of order k for P(λ)X. Namely,

Σ_{j=0}^{π} P_j X' J'^j = 0,    Σ_{j=0}^{π} P_j X'' J''^{k−j} = 0.    (17)

The standard pair (X, J; k) is called a Jordan pair of order k if in addition J is in Jordan form. We interpret the size mk as the total (k-dependent) root multiplicity over all finite and infinite roots of P. This value coincides with the extended McMillan degree only when P is essentially monic and k = π [25]. The chains of vectors of X in a Jordan pair form Jordan chains for P ([4], [16] §7.1), indexed by the corresponding roots in J. It can always be assumed that J'' is nilpotent, in which case the finite spectrum is encoded in (X', J') and the infinite spectrum in (X'', J''). Every regular matrix polynomial has standard and Jordan pairs of any order k ≥ π. The existence of pairs of order π is demonstrated in [4], and the existence for k > π can be obtained by the following extension process. Assume that (X, J; π) is of order π and J has q cells at infinity. If (X̃, J̃; k) is the extended pair, each of the q cells at infinity should be increased by k − π, and m − q new cells of size k − π should be added. The number of cells at infinity has increased from q to m. Each of the Jordan chains in X can be extended arbitrarily to its new size, subject to the only restriction that the m eigenvectors at infinity be linearly independent. This guarantees the controllability condition (16). Meanwhile, partial multiplicities and Jordan chains at finite points remain unchanged. It can be verified that the new pair still satisfies the bisolvency condition (17).

4.2. The inverse spectral problem. The Jordan pair (X, J; k) encodes the spectral data of P; the question is how to recover P from its Jordan/standard pair.

Lemma 9 Given any extended controllable pair (X, J; k) (16), there exists a regular matrix polynomial P of order π ≤ k, unique up to left equivalence, w.r.t. which (X, J; k) satisfies the bisolvency condition (17). We do not assume that J is in Jordan form, nor that J'' is nilpotent.

Furthermore, when k = π the reconstruction of P from (X, J; k) is explicit and can be found, together with a proof of Lemma 9, in [4] or [16] Theorem 7.8. The same proof and formula remain in effect for k > π.

4.3.

Description of factors. When P is essentially monic, square factors R of P are also essentially monic and, moreover, are spectrally subordinate to P. Namely, R has the same roots as P, with smaller or equal (and possibly nil) partial multiplicities (see [16] §7.7, or [23] §I-II). More generally, when P is regular, so is R, even when R is not linear, and subordination extends also to the root at infinity, in a well-defined sense. Namely, if we have P = QR where deg Q = κ and deg R = ρ, we define the root at infinity of P, Q or R (and its partial multiplicities) as representing a root at the origin of the respective matrix polynomial

λ^k P(1/λ),    λ^κ Q(1/λ),    λ^ρ R(1/λ)    (k = κ + ρ)

(and its partial multiplicities). In the non-minimal case (k > π) this requires an artificial increase of these multiplicities. Subordination also implies that "R inherits some of the Jordan chain structure of P"; however, here a more rigorous treatment is needed. Consider two Jordan pairs,

P(λ) → (X, J; k)    (X = [X', X''],  J = J' ⊕ J''),

R(λ) → (X̃, J̃; ρ)    (X̃ = [X̃', X̃''],  J̃ = J̃' ⊕ J̃'')

with k > ρ. Following a formalism obtained in [15] and [17] §4.1, we call (X̃, J̃; ρ):

1) a restriction of (X, J; k) if there is a left invertible matrix V = V' ⊕ V'' ∈ M_{mk×mρ}, under a joint block structure, so that Im V is J-invariant, with X̃ = XV and V J̃ = JV.

2) a marked restriction if in addition to item 1 Im V is a marked subspace [6]. This is equivalent to V = GV_0, where V_0 is a truncation matrix and G is a matrix which commutes with J (in a typical upper Toeplitz block structure, [9]).

3) a truncation of (X, J; k) if in addition to item 1 V is a "truncation matrix", namely, a column-stochastic 0,1 matrix in row (or column) echelon form; invariance then implies that X̃ consists of "left-truncated" subchains of X.

Clearly, every truncation is a marked restriction and every marked restriction is a restriction.

Lemma 10 Let (X̃, J̃; ρ) be a restriction of a Jordan pair (X, J; k) of P. Then (X̃, J̃; ρ) is a Jordan pair for some regular m × m matrix polynomial R of degree at most ρ. Namely, it is controllable and solves the bisolvency identity

Σ_{j=0}^{ρ} R_j X̃' J̃'^j = 0,    Σ_{j=0}^{ρ} R_j X̃'' J̃''^{ρ−j} = 0.

R is uniquely defined by this identity up to left equivalence, and Q := P R^{−1} is a matrix polynomial of order at most k − ρ. Moreover, every polynomial factorization P = QR of these degree restrictions with R regular is defined in this way, up to left equivalence on R (right equivalence on Q).

A proof of the direct statement for k = π can be found in [15] Theorems 2.1, 2.2 or [16] Theorems 7.10, 7.13 and Corollary 7.11; the proof for k > π is analogous. The proof of controllability and bisolvency is straightforward (restriction of the same properties in (X, J; k) to an invariant subspace). Bisolvency then implies factorability by Theorem 2. The uniqueness claim follows from the inverse spectral problem (§4.2). A proof of the converse statement is by constructing, from the Jordan pairs of Q and R, a standard pair (X̂, Ĵ; k) for QR = P and a natural restriction which produces (X̃, J̃; ρ) from (X̂, Ĵ; k). Since (X, J; k) and (X̂, Ĵ; k) are similar, (X̃, J̃; ρ) is also a restriction of (X, J; k). For k = π see Lemma 4 or [16] Section 7.7.

In our next definition and result we address the two more restricted concepts of marked restriction and truncation. We define a polynomial P as non-derogatory if it satisfies any of the following equivalent conditions:

(i) For every finite root λ of P we have dim ker P(λ) = 1. Namely, λ does not admit

an eigenspace of dimension 2. At infinity, the requirement is that λ^π P(1/λ) be non-derogatory at the origin.

(ii) If (X, J; π) is a Jordan pair for P, with J = J' ⊕ J'', then both matrices J' and J'' are non-derogatory in the usual sense. Namely, the minimal and characteristic polynomials are equal.

(iii) The geometric multiplicity of any root is equal to one.

We observe that a generic regular matrix polynomial is non-derogatory. The situation described in Lemma 10 simplifies considerably in case ρ = 1, in which both X̃, J̃ are m × m matrices and X̃ is invertible. This is the case treated in this paper.

Lemma 11 Let P be a regular non-derogatory matrix polynomial of order π. Let (X, J; π) be a Jordan pair of P. Then (i) Each restriction of (X, J) in Jordan form is marked, and the corresponding linear right factor is attained by truncation of (X, J). (ii) The number of restrictions of (X, J), hence also the number of left equivalence orbits of degree-minimal factors for P, is finite.

Proof. (i) Let ([X̃_1, X̃_2], J̃_1 ⊕ J̃_2; 1) be a restriction of ([X_1, X_2], J_1 ⊕ J_2; π) with J̃_1, J̃_2 in Jordan form. For definiteness, we assume that J_2, J̃_2 are nilpotent, representing a zero at infinity. Let V = [V_1, V_2] be the transpose of the associated restriction matrix. Observe that each invariant subspace of J_1 and J_2 is a union of truncations on the individual Jordan chains of blocks of J_i. This applies in particular to the invariant space Im V. In particular, there exists a block permutation matrix S of the form

S = I ⊕ [ 0  I ] ⊕ I
        [ I  0 ]

of suitable dimensions so that

S(J_1 ⊕ J_2)S^{−1} = Ĵ := [ J̃_1 ⊕ J̃_2   * ]
                          [ 0            * ]

(possibly no more in Jordan form) and SV' = [G, 0]' where G = G_1 ⊕ G_2 ∈ GL_m(F). Under this change of basis, the J-invariant subspace Im V is mapped to the space Im(SV') which is supported on the first m coordinates. Now, the identity J_i V_i = V_i J̃_i implies that SV' = G̃[I, 0]' where [I, 0]' is a truncation and G̃ := G_1 ⊕ G_2 ⊕ I commutes with Ĵ_1 ⊕ Ĵ_2. Thus, by definition, (X̃, J̃; 1) is a marked restriction of (XS^{−1}, Ĵ; π). Since S is a block permutation matrix which preserves the order within each Jordan chain, it follows that (X̃, J̃; 1) is also a marked restriction of (X, J; π). Moreover, Lemma

˜ J; ˜ 1) satisfies the extended controllability condition 16, which for matrix 10 guarantees that (X, ˜ = [X ˜1, X ˜ 2 ] ∈ GLm (F ). pencils means that X ˜ −1 SV = [I, 0] and consider the pair ([X ¯1, X ¯ 2 ], J¯1 ⊕ J¯2 ; 1) obtained from Now, Define W = G ¯ i = G−1 Xi , ([X1 , X2 , ], Jˆ1 ⊕ Jˆ2 ; π) by restriction w.r.t. W . Clearly, W is a truncation, and we have X i J¯i = J˜i where we used the commutation of Gi and J˜i . It is easy to see that the factors defined by ˜1, X ˜ 2 , ], Jˆ1 ⊕ Jˆ2 ; π) and by ([X ¯1, X ¯ 2 , ], J¯1 ⊕ J¯2 ; π), namely, R(λ) ˜ ˜ and ([X = (λI − J˜1 ) ⊕ (I − λJ˜2 )X ¯ ¯ are related by left multiplication by G1 ⊕ G2 , hence define the R(λ) = (λI − J¯1 ) ⊕ (I − λJ¯2 )X, ˜ ˆ J), ˆ and since same left-equivalence orbit. Thus, the orbit of R(λ) is defined by a truncation of (X, S is a block permutation matrix, defined by a truncation of (X, J) itself. (ii) It suffices to observe that the lattice of invariant subspaces of a non-derogatory matrix J is finite. According to Lemma 10, each invariant subspace defines a unique truncation and, by Lemma 10, a unique linear regular right factor, up to left equivalence. ˜ J˜ are m × m Item (i) is essentially [17] Theorem 2.9.2. Here, in the restriction process both X, matrices. Lemma 12 (I) Let (X, J; k) be a Jordan pair for the regular m × m matrix polynomial P . Let ˜ J; ˜ 1) be a restriction of (X, J; k) onto an invariant subspace of dimension m. Then X ˜ is (X, invertible and the pencil ˜ ˜ ˜ −1 R(λ) = X[(λI − J˜ ) ⊕ (λJ˜ − I)]X

(18)

˜ J) ˜ up to left equivais a separable right bisolvent, and a right factor, of P , uniquely defined by (X, lence. (II) Moreover, each (non-constant) regular linear right factor of P is left equivalent to a pencil of the form (18). Proof. (referring to §4.1). Invertibility of X is implied by controllability of order 1, and P ˜ is obtained from the bisolvency identity of standard pairs if we right multiply both bisolvency of R ˜ −1 . These properties, plus factorability and surjectivity, are guaranteed by sides of the latter by X Lemma 10. Under the (generic) condition that P is non-derogatory, Lemma 11 and 12 provide an explicit procedure for calculating the entire (finite!) set of left-equivalence orbits of bisolvents for P , and with them, a complete list of regular right factors of the form λT − S, a procedure suggested by Roth in [23]. 15

5 Bisolvents: Remarks and Examples

5.1. It often makes sense to divide the Jordan pair (X, J) of P into three rather than two parts, separating the origin and infinity from the remaining roots:

$$G\,[(\lambda I - J') \oplus (\lambda I - J^{*}) \oplus (\lambda J'' - I)]\,G^{-1}.$$

Here J' and J'' are nilpotent and J* is invertible, all in Jordan form. This division is within the general framework of the Weierstrass form. In these terms, the restriction matrix splits as a direct sum V = V' ⊕ V* ⊕ V''. This form is unique up to block permutation within each of the three blocks. Solvents correspond to restrictions into subspaces not intersecting V'', and reverse solvents correspond to restrictions into subspaces not intersecting V'. See relevant material in [14], [16] §1.8–1.10, and Corollary 4.7. The number of solvents (resp. reverse solvents) is finite provided J' ⊕ J* (resp. J* ⊕ J'') is a non-derogatory matrix ([21], see also [3]). In handling reverse solvents, the pair (X*, J*) may be manipulated so as to replace λI − J* by a comonic pencil, for esthetic reasons.

Consider the matrix polynomial in direct sum form P(λ) = λ ⊕ (λ − 1) ⊕ 1. We may choose the Jordan pair

$$X' = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad X^{*} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad X'' = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad J' = 0, \quad J^{*} = 1, \quad J'' = 0. \tag{19}$$

The entire space is a trivial restriction defining the factorization P = IP with right factor (and separable bisolvent) λT − S, with S = 0 ⊕ 1 ⊕ 1, T = 1 ⊕ 1 ⊕ 0.

5.2. If R(λ) = λT − S is a commuting (e.g. separable) bisolvent and T is non-singular, then T⁻¹S is a solvent in the traditional sense. Again, if S is non-singular, then S⁻¹T is a reverse solvent. Finally, every solvent and reverse solvent is constructed in this way. In this sense, separable bisolvents extend the existing approaches for solvents and reverse solvents.

5.3. A regular matrix polynomial may admit separable bisolvents without admitting any solvent or reverse solvent. This requires P(λ) to have roots at both 0 and ∞. One example is λ² ⊕ 1. Observe that the right factor in (λ² ⊕ 1) = (λ ⊕ 1)(λ ⊕ 1) commutes, hence by Theorem 2 defines an (essentially unique) bisolvent of P. As a second example, consider P(λ) = P₂λ² + P₁λ + P₀,

where

$$P_2 = \frac{1}{10}\begin{bmatrix} -33 & 15 & 16 \\ -32 & 10 & 14 \\ 19 & -5 & -8 \end{bmatrix}, \qquad
P_1 = \frac{1}{215}\begin{bmatrix} -788 & 55 & 281 \\ 931 & -110 & -347 \\ 1097 & -670 & -589 \end{bmatrix}, \qquad
P_0 = \frac{1}{43}\begin{bmatrix} -60 & 76 & 30 \\ -81 & 6 & -36 \\ 80 & 10 & 74 \end{bmatrix}.$$

Spectral analysis leads to the standard pair (notation as in §5.1):

$$X' = \begin{bmatrix} 5 \\ 2 \\ 2 \end{bmatrix}, \quad
X^{*} = \begin{bmatrix} 2 & 2 \\ 4 & 4 \\ 1 & 1 \end{bmatrix}, \quad
X'' = \begin{bmatrix} 1 & 1 & 3 \\ -1 & -1 & 1 \\ 3 & 3 & 4 \end{bmatrix},$$

$$J' = 0, \quad
J^{*} = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}, \quad
J'' = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$$
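The truncation search in this example can be mechanized. The following sketch is ours, not from the paper; the column data is as reconstructed in the display above, and the chain grouping (three chains of length 1 for the finite roots, one chain of length 3 at infinity) is our reading of J. Indices are 0-based, so [0, 1, 3] means "first, second and fourth columns":

```python
# Sketch (ours): enumerate the "truncations" of the Jordan pair above, i.e.
# chain-closed selections of 3 columns of X = [X' X* X''] whose 3x3 matrix
# is invertible.  Column values follow the reconstructed display.
from itertools import product

cols = [[5, 2, 2],          # X',  eigenvalue 0
        [2, 4, 1],          # X*,  eigenvalue 2
        [2, 4, 1],          # X*,  eigenvalue 3
        [1, -1, 3],         # X'', Jordan chain at infinity
        [1, -1, 3],
        [3, 1, 4]]
chains = [[0], [1], [2], [3, 4, 5]]   # column indices grouped by Jordan chain

def det3(a, b, c):
    # 3x3 determinant of the matrix with columns a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

truncations = []
# a truncation picks a prefix of each chain, with 3 columns chosen in total
for prefixes in product(*[range(len(ch) + 1) for ch in chains]):
    if sum(prefixes) != 3:
        continue
    sel = [i for ch, p in zip(chains, prefixes) for i in ch[:p]]
    if det3(*(cols[i] for i in sel)) != 0:
        truncations.append(sel)

print(truncations)   # -> [[0, 2, 3], [0, 1, 3]]
```

The two surviving selections are exactly the "first and fourth columns, plus either the second or the third" found by hand below.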

Clearly, P is non-derogatory and so, according to Lemma 11(i), we only need to consider Jordan chain truncations. Looking for solvents, we seek a basis among the first three columns of X = [X' X* X''], corresponding to finite roots; for reverse solvents, among the last five columns, corresponding to non-zero roots, and respecting the order within each chain; as both tasks are impossible, P(λ) admits no direct or reverse solvents. However, among all six columns of X, respecting chain order, P(λ) does admit two possible truncations, giving rise to two bisolvents; namely, the first and fourth columns, plus either the second or the third. Thus, the bisolvent λT − S inherits from P three roots: 0, ∞ and 2 (resp. 3).

5.4. If P(λ) is a matrix polynomial which is not regular, the spectral data of P(λ) can still be used to construct "standard pairs" which encode roots and eigenvectors; and, formally, the same restriction method described here can be used to generate a large number of separable bisolvents, representing regular first order right factors with spectral subordination. However, the construction of standard pairs is problematic due to the existence of right and left null vectors (i.e. "Forney indices", see [10]). Roth [23] indeed reports on the possibility of right factors which are monic pencils but not spectrally subordinate. The construction of singular right factors is a wide open area for research (see also §7.2).

5.5. We mention in passing two alternative definitions for a separable bisolvent (multiplicative and additive) which may be of some interest. Multiplicatively, λT − S is a separable bisolvent if S, T satisfy

(i) $\sum_{j=0}^{\pi} P_j S^j T^{\pi-j} = 0$,  and  (ii) S = ΠSΠ + (I − Π), T = Π + (I − Π)T(I − Π),

where Π is idempotent. Similarly, in the additive version with Π as before, λT₀ − S₀ is a separable bisolvent if S₀, T₀ satisfy

(iii) $\sum_{j=0}^{\pi} P_j (S_0^j + T_0^{\pi-j}) = 0$,  and  (iv) S₀ = ΠS₀Π, T₀ = (I − Π)T₀(I − Π),

where (iii) is the additive analogue of the bisolvent equation. The three formalisms are completely equivalent. For example, given S₀, T₀ we define S, T via

S = S₀ + (I − Π),  T = T₀ + Π.

Conversely, given S, T we define

S₀ = ΠSΠ,  T₀ = (I − Π)T(I − Π).

We then have S₀ⁱ + T₀ʲ = SⁱTʲ for all i, j ≥ 1 (and, under the convention S₀⁰ = Π and T₀⁰ = I − Π, for all i, j ≥ 0). Both pairs (S₀, T₀) and (S, T) are commutative, and S₀T₀ = T₀S₀ = 0.

6 Notions of factorization redundancy

The theory of rational matrix functions, based on the Smith–McMillan degree, and the theory of matrix polynomials, based on the Weierstrass form, diverge in their definitions of spectral data, resulting in two different concepts of factorization redundancy. The Weierstrass form is based on roots (or elementary divisors, see [11] vol. II §XII and [16] §7), whereas the other theory defines poles and zeros (see [1] §VI). Roots and zeros have the same structure at finite points, but may have different structure at infinity, where the matrix polynomial has a pole ([24], [25]).

In local terms of finite poles and zeros, every factorization of regular matrix polynomials is minimal, since no finite poles are involved. Since finite zeros and roots have the same partial multiplicities, this sense of minimality extends to finite roots. However, the point at infinity is treated differently in the two approaches. A pole/zero of P(λ) at ∞ is defined, up to partial multiplicities, by a pole/zero of P(1/λ) at the origin, while a root of P(λ) is defined as a root of λᵏP(1/λ) at the origin. Even if we choose k = π, the partial root multiplicities turn out to be higher than the zero analogues [24]; in addition, sometimes k must be chosen larger than π, further inflating the multiplicities of the roots at infinity.

On a global level, consider the following three degrees of redundancy for a polynomial factorization P = QR:

a := deg Q + deg R − deg P,  b := δ(Q⁻¹) + δ(R⁻¹) − δ(P⁻¹),  c := δ′(Q) + δ′(R) − δ′(P),

where δ is the McMillan degree (sum of pole multiplicities over F) and δ′ is the extended McMillan degree (including the point at infinity, i.e. over F ∪ {∞}), see [25]. The identity b = 0 (resp. c = 0) implies minimality of the factorization at all the finite points (resp. also at ∞). We also have δ(P⁻¹) ≤ δ′(P) ≤ mπ, with equality iff P is essentially monic.

The cases of minimality in the three degrees of redundancy (i.e. a = 0 or b = 0 or c = 0) do not necessarily coincide. For example, some factorizations which are minimal (c = 0, i.e. no pole-zero cancellations) may fail to be degree-minimal (a = 0). In addition, some relevant and interesting factorizations may fail to be both minimal and minimal-degree at certain finite points (e.g. non-canonical spectral (Wiener–Hopf) factorizations of a rational matrix function, [1] §VI), or at infinity (e.g. strong equivalence via unimodular matrix polynomials, [16] §1 and §1.6). Thus, it might be overly restrictive to consider only degree-minimal factorizations (a = 0) in Theorem 3 (i.e. to impose k = π). Below we restrict ourselves to some examples of the situation at ∞.

6.1. If P is essentially monic, the π-factorizations (a = 0) are those where both factors are again essentially monic. These factorizations are minimal (b = c = 0).

6.2. In the two factorizations

$$\begin{bmatrix} \lambda-1 & 0 \\ 0 & \lambda-2 \end{bmatrix}
= \begin{bmatrix} \lambda-1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & \lambda-2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & \lambda-2 \end{bmatrix}\begin{bmatrix} \lambda-1 & 0 \\ 0 & 1 \end{bmatrix} \tag{20}$$

P is essentially both monic and comonic, b = c = 0 but a = 1 > 0. Observe that the right factors in (20) are commuting bisolvents of degree 2, but not of degree 1.

6.3. In the example

P(λ) = λ² ⊕ I₂,  Q(λ) = λ ⊕ (I₂ + λT),  R(λ) = λ ⊕ (I₂ − λT)

(with T being a non-zero 2 × 2 nilpotent matrix) a = 0 but c ≠ 0. Indeed, the total root multiplicities at infinity add up (4 = 2 + 2), but the total zero multiplicities do not (0 ≠ 1 + 1).

6.4. An important example of redundancy at ∞ with R(λ) comonic is when R(λ) is unimodular. Take for example the case P = I; for example,

$$I_2 = (I_2 - \lambda T)(I_2 + \lambda T) = (I_2 + \lambda T)(I_2 - \lambda T), \qquad
T = \begin{bmatrix} 0 & \pm 1 \\ 0 & 0 \end{bmatrix}. \tag{21}$$

P admits no bisolvents of orders 0 or 1, and two reverse solvents of order 2, namely ±T, corresponding to the right factors in (21).
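The cancellation in (21) is immediate to verify by multiplying the two degree-1 factors coefficient-wise; the following sketch is ours, with matrix polynomials represented as lists of coefficient matrices:

```python
# Sketch (ours): verify I2 = (I2 - lam*T)(I2 + lam*T) for the nilpotent T
# of (21), multiplying matrix polynomials via their coefficient lists.
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_mul(F, G):
    # F, G: lists of square-matrix coefficients, lowest degree first
    n = len(F[0])
    H = [[[0] * n for _ in range(n)] for _ in range(len(F) + len(G) - 1)]
    for i, Fi in enumerate(F):
        for j, Gj in enumerate(G):
            Pij = mul(Fi, Gj)
            H[i + j] = [[a + b for a, b in zip(r, s)]
                        for r, s in zip(H[i + j], Pij)]
    return H

I2 = [[1, 0], [0, 1]]
T  = [[0, 1], [0, 0]]                    # the nilpotent T of (21), with +1
negT = [[-x for x in r] for r in T]

prod = poly_mul([I2, negT], [I2, T])     # (I2 - lam*T)(I2 + lam*T)
assert prod[0] == I2                     # constant term is I2
assert all(x == 0 for C in prod[1:] for r in C for x in r)   # all higher terms vanish
```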

The existence of unimodular right factors shows that every matrix polynomial P admits bisolvents, but not necessarily bisolvents of order π. In other words, P admits factorizations, but not necessarily minimal-order factorizations. An example is

$$P(\lambda) = \begin{bmatrix} 1 & \lambda^2 \\ 0 & 1 \end{bmatrix}.$$

6.5. Another source of redundancy occurs when we multiply P by λ, obtaining a matrix polynomial with vanishing free coefficient and an inflated root multiplicity at the origin. Some factorizations of λP may fail to be associated with (polynomial) factorizations of P. An example is the minimal factorization λI₂ = (λ ⊕ 1)(1 ⊕ λ) with P(λ) = I₂.

7 Remarks on Theorem 2

7.1. On commutativity. Theorem 2 shows that, under a commutativity assumption, R(λ) = λT − S is a right factor iff it is a bisolvent. The commutativity condition is essential, and without it both implications fail. Factors need not be bisolvents: indeed, the set of right factors of P is closed under left equivalence, and in general this cannot be said about the corresponding set of bisolvents, even if we restrict the generality to regular P, R. Bisolvents need not be factors: if P, Q, R are regular and P = QR, we expect the roots of R in F to be subordinated to those of P. However, every pair (S, T) with S, T nilpotent of order k and with ST = 0 forms a bisolvent λT − S for P(λ), independently of spectral subordination. This includes the trivial pencil (S = T = 0).

7.2. On regularity. Regularity of R is essential in proving the implication (ii) → (i) in Theorem 2, namely, that every regular (commuting) right factor λT − S is a bisolvent. For example, in any block division m = m₁ + m₂, consider the pencil R(λ) = λT − S with S = S′ ⊕ 0 and T = 0 ⊕ T″, where the blocks S′, T″ are nilpotent. This pencil is non-regular and commuting, and is automatically a right bisolvent for any matrix polynomial P of degree π ≥ m. This includes the trivial example S = T = 0. However, such a pencil λT − S cannot be a factor of any regular matrix polynomial P.

We observe that Lemma 5 (ii) → (i) is no longer valid if we require P (hence also Q) to be regular. For example, when π = 0 and R(λ) = I − λN (N nilpotent), essentially the only factorization is I = R⁻¹(λ)R(λ), with k − π′ = ν(N); and if R(λ) = B − λA with A, B invertible, no polynomial factorization with π = 0 is possible.
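The claim in 7.1 that any nilpotent pair with ST = 0 is a bisolvent is easy to check numerically. Below is a sketch of ours, taking S = T a single 2 × 2 nilpotent block and arbitrary made-up coefficients Pⱼ of a degree-2 matrix polynomial (the Pⱼ values are illustrative, not from the paper):

```python
# Sketch (ours): for S, T nilpotent with S*T = 0, every middle term
# S^j T^(pi-j) contains an S-then-T product and vanishes, while the end terms
# S^pi and T^pi vanish once pi is at least the nilpotency order.  Hence
# sum_j Pj S^j T^(pi-j) = 0 regardless of the coefficients Pj.
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mpow(A, k):
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = mul(R, A)
    return R

S = T = [[0, 1], [0, 0]]            # nilpotent of order 2, S*T = 0
assert mul(S, T) == [[0, 0], [0, 0]]

P = [[[3, 1], [4, 1]],              # P0, P1, P2: arbitrary coefficients
     [[5, 9], [2, 6]],
     [[5, 3], [5, 8]]]
pi = 2                              # degree >= nilpotency order of S, T

total = [[0, 0], [0, 0]]
for j in range(pi + 1):
    term = mul(P[j], mul(mpow(S, j), mpow(T, pi - j)))
    total = [[a + b for a, b in zip(r, s)] for r, s in zip(total, term)]

assert total == [[0, 0], [0, 0]]    # the bisolvent equation (i) holds
```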

References

[1] H. Bart, I. Gohberg and M. Kaashoek, Minimal Factorization of Matrix and Operator Functions, Birkhäuser, Basel, 1979.

[2] H. Bart, I. Gohberg, M. Kaashoek and P. Van Dooren, Factorizations of transfer functions, SIAM J. Control Optim., 18(6):675–69, 1980.

[3] J. Bell, Families of solutions of the unilateral matrix equation, Proc. Amer. Math. Soc., 1:151–159, 1950.

[4] N. Cohen, Spectral analysis of regular matrix polynomials, Integral Equations and Operator Theory, 6:161–183, 1983.

[5] N. Cohen, 2x2 monic irreducible matrix polynomials, Linear Multilinear Algebra, 23:325–331, 1988.

[6] A. Compta, J. Ferrer and M. Peña, Dimension of the orbit of marked subspaces, Linear Algebra Appl., 379:239–248, 2004.

[7] J. Dennis, J. Traub and R. Weber, The algebraic theory of matrix polynomials, SIAM J. Numer. Anal., 13:831–845, 1976.

[8] J. Dennis, J. Traub and R. Weber, Algorithms for solvents of matrix polynomials, SIAM J. Numer. Anal., 15:523–533, 1978.

[9] J. Ferrer, F. Puerta and X. Puerta, Geometric characterization and classification of marked subspaces, Linear Algebra Appl., 235:15–34, 1996.

[10] G. Forney, Jr., Minimal bases of rational vector spaces, with applications to multivariable linear systems, SIAM J. Control, 13:493–520, 1975.

[11] F. Gantmacher, The Theory of Matrices, Vols. I-II, Chelsea, New York, 1960 (Reprinted, 2000).


[12] I. Gohberg and L. Rodman, On spectral analysis of non-monic matrix and operator polynomials, I. Reduction to monic polynomials, Israel J. Math., 30:133–151, 1978.

[13] I. Gohberg and L. Rodman, On spectral analysis of non-monic matrix and operator polynomials, II. Dependence on the finite spectral data, Israel J. Math., 30:321–334, 1978.

[14] I. Gohberg and L. Rodman, On spectral structure of monic matrix polynomials and the extension problem, Linear Algebra Appl., 24:157–172, 1979.

[15] I. Gohberg, M. Kaashoek, L. Lerer and L. Rodman, Common multiples and common divisors of matrix polynomials, I. Spectral method, Indiana Univ. Math. J., 30:321–356, 1981.

[16] I. Gohberg, P. Lancaster and L. Rodman, Matrix Polynomials, Acad. Press, 1982 (repr. SIAM, Classics in Applied Math 58, 2009).

[17] I. Gohberg, P. Lancaster and L. Rodman, Invariant Subspaces of Matrices with Applications, J. Wiley, New York, 1986 (repr. SIAM, Classics in Applied Math 51, 2006).

[18] N.J. Higham and H.-M. Kim, Numerical analysis of a quadratic matrix equation, IMA Journal of Numerical Analysis, 20:499–519, 2000.

[19] N.J. Higham and H.-M. Kim, Solving a quadratic matrix equation by Newton's method with exact line searches, SIAM J. Matrix Anal. Appl., 23:303–316, 2001.

[20] S.G. Krein, Linear Differential Equations in Banach Spaces, Transl. Math. Monographs 29, AMS, 1972.

[21] E. Pereira, On solvents of matrix polynomials, Applied Numerical Mathematics, 42:197–208, 2003.

[22] E. Pereira, On solvents of nonmonic matrix polynomials, Commun. Appl. Math. Comput., 28(4):390–401, 2014.

[23] W. Roth, On the unilateral equation in matrices, Trans. Amer. Math. Soc., 32:61–80, 1930.

[24] P. Van Dooren and P. Dewilde, The eigenstructure of an arbitrary polynomial matrix: computational aspects, Linear Algebra Appl., 50:545–579, 1983.

[25] G. Verghese, P. Van Dooren and T. Kailath, Properties of a system matrix of a generalized state space system, International J. Control, 30(2):235–243, 1979.

[26] L.S. Shieh, Y.T. Tsay and N.P. Coleman, Algorithms for solvents and spectral factors of matrix polynomials, Int. J. System Sciences, 12:1303–1316, 1981.

[27] L. Wegge, Sylvester matrix and common factors in polynomial matrices, Working paper no. 07-4, Dept. of Economics, Univ. of CA, Davis, 2007.

[28] K. Weierstrass, Zur Theorie der bilinearen und quadratischen Formen, Monatsber. Akad. Wiss. Berlin, 310–338, 1867.
