Further remarks on multivariate polynomial matrix factorizations

Linear Algebra and its Applications 465 (2015) 204–213
Jinwang Liu (a), Mingsheng Wang (b,*)

(a) College of Mathematics and Computation, Hunan Science and Technology University, Xiangtan, Hunan 411201, China
(b) Lab of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China

Article history: Received 16 April 2014; Accepted 16 September 2014; Available online xxxx; Submitted by L. Rodman

MSC: 15A23, 15A06, 13P05, 13P10

Abstract. Multivariate (n-D) polynomial matrix factorizations are basic research problems in multidimensional systems and signal processing. In this paper Youla's MLP Lemma [19] is extended to the general case. Based on this extension, generalizations of some results in [16] are proved which might be useful for further investigations on problems of factorizations of multivariate polynomial matrices. © 2014 Elsevier Inc. All rights reserved.

Keywords: Multivariate polynomial matrices; Matrix factorizations; MLP Lemma

✩ This paper was supported partially by the National Science Foundation of China (11471108, 11171323).
* Corresponding author. E-mail address: [email protected] (M. Wang).
http://dx.doi.org/10.1016/j.laa.2014.09.027
0024-3795/© 2014 Elsevier Inc. All rights reserved.

1. Introduction

The basic structure of multivariate (n-D) polynomial matrices and n-D polynomial matrix factorizations have been investigated over the past several decades because of their wide range of applications in multidimensional (n-D) circuits, systems, controls, signal processing, and other related areas (see, e.g., [1–4,7–11,19,20]). The theory and algorithms of univariate matrix factorizations have played an important part in the design and analysis of control systems. For the bivariate (2-D) case, polynomial matrix factorization was completely solved over thirty years ago by means of matrix primitive factorizations and Smith reduction over the rational function field in one variable [14,4]. The factorization problems for n-D (n > 2) polynomial matrices have been open for almost thirty years. In recent years, factorization theory and algorithms for a large class of n-D polynomial matrices have been developed using an approach completely different from that of the 2-D case [9,10,15–18,12]. Using these new results and algorithms, a new constructive approach to bivariate polynomial matrix factorization was proposed which avoids computation over the rational function field in one variable [13]. In [17], some progress on n-D factor prime factorization was obtained; in particular, under the condition that every factor is regular, a necessary and sufficient condition for factor prime factorizations was given. Up to now, a complete characterization of factor primeness for an arbitrary n-D polynomial matrix remains open, and this is a major problem in this research direction. In order to attack this important open problem, some new research methods need to be developed; for the time being, we wish to examine our approach to matrix factorizations more carefully.

In [16], one of the major theorems is a one-to-one correspondence between the existence of an MLP factorization and the freeness of a certain submodule. This approach was extended to the general case in [17] using different proof methods. On the other hand, the methods of [16] might be useful for further research, so this paper extends the major result of [16] to the general case.

The paper is organized as follows. In Section 2, some preliminaries and the motivation of the research problem are introduced. In Section 3, we first extend Youla's "MLP Lemma" [19] to the general case, then use this result to extend Theorem 4 in [16] to the general case, and finally present an application that illustrates the result.

2. Preliminaries and research problem

Let n be an integer with n ≥ 1. Let k be an arbitrary but fixed field, and let R = k[z_1, ..., z_n] be the polynomial ring in the n variables z_1, ..., z_n over k. R^{l×m} denotes the free module of l × m matrices with entries in R. We also write R^{1×m} as R^m; it is a free module of rank m over R. Without loss of generality, the size of a given matrix is assumed to be l × m with l ≤ m. For computations concerning modules and matrices over the polynomial ring we refer to [5,6].

Let F ∈ R^{l×m} be of full row rank. We denote the greatest common divisor of all i × i minors of F by d_i(F) for i = 1, ..., l. As in [16,12], we set d(F) = d_l(F) for the g.c.d. of the minors of maximal order of F, and ρ(F) denotes the submodule of R^m generated by the rows of F. Also, for f ∈ R, [fI_l | F] ∈ R^{l×(l+m)} stands for the matrix obtained by concatenating fI_l and F, where I_l is the identity matrix of order l.
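For instance, the quantities d_i(F) can be computed directly with a computer algebra system. The following minimal SymPy sketch does this for a toy 2 × 3 matrix over Q[z_1, z_2, z_3]; the matrix is a hypothetical example chosen only to illustrate the notation.

    from functools import reduce
    from itertools import combinations
    from sympy import symbols, Matrix, gcd

    z1, z2, z3 = symbols('z1 z2 z3')

    # Hypothetical full-row-rank 2 x 3 polynomial matrix F.
    F = Matrix([[z1, z2, z1*z2],
                [0,  z1, z1*z3]])

    def d_i(F, i):
        # g.c.d. of all i x i minors of F
        minors = [F.extract(list(r), list(c)).det()
                  for r in combinations(range(F.rows), i)
                  for c in combinations(range(F.cols), i)]
        return reduce(gcd, minors)

    print(d_i(F, 1))       # 1
    print(d_i(F, F.rows))  # z1, i.e. d(F) = d_l(F) = z1 for this toy matrix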


Let A ∈ R^{n×n}. For i, j = 1, ..., n, let M_ij(A) denote the (n−1) × (n−1) minor of A obtained by deleting the ith row and the jth column of A; the element (−1)^{i+j} M_ij(A) is the (i, j)th cofactor of A, denoted cof_ij(A). The adjoint matrix adj(A) of an n × n matrix A is defined by [adj(A)]_ij = cof_ji(A) for all i, j = 1, ..., n. Thus, by the Laplace theorem, A adj(A) = adj(A) A = det(A) I_n.

The following concepts are from multidimensional systems theory [19].

Definition 2.1. Let F ∈ R^{l×m} be of full row rank. Then F is said to be:

(i) zero left prime (ZLP) if the l × l minors of F are zero coprime, i.e., are devoid of any common zeros;
(ii) minor left prime (MLP) if the l × l minors of F are factor coprime, i.e., are devoid of any non-trivial common factors;
(iii) factor left prime (FLP) if in any polynomial matrix factorization F = F_1F_2 in which F_1 is a square matrix, F_1 is necessarily a unimodular matrix, i.e., det F_1 is a nonzero constant in k.

Zero right prime (ZRP), minor right prime (MRP), etc. can be defined similarly for matrices F ∈ R^{m×l} with m ≥ l. Notice that ZLP ⇒ MLP ⇒ FLP. When n ≥ 3, these concepts are pairwise different; when n = 2, ZLP is not equivalent to MLP, but MLP is the same as FLP; when n = 1 all three concepts coincide [19]. When k is an algebraically closed field, F is a ZLP matrix if and only if the l × l minors of F have no common zeros in k^n.
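As a quick illustration of Definition 2.1, consider the toy matrix below (again a hypothetical example): its 2 × 2 minors are factor coprime, so it is MLP, but they all vanish at z_1 = z_2 = 0, so over an algebraically closed field it is not ZLP. A minimal SymPy check:

    from functools import reduce
    from itertools import combinations
    from sympy import symbols, Matrix, gcd

    z1, z2, z3 = symbols('z1 z2 z3')

    # Hypothetical 2 x 3 matrix over k[z1, z2, z3].
    F = Matrix([[z1, z2, 0],
                [0,  z1, z2]])

    minors = [F.extract([0, 1], list(c)).det()
              for c in combinations(range(F.cols), 2)]
    print(minors)               # [z1**2, z1*z2, z2**2]
    print(reduce(gcd, minors))  # 1 -> the minors are factor coprime, so F is MLP

    # All three minors vanish at z1 = z2 = 0, so they are not zero coprime
    # and F is not ZLP.
    print([p.subs({z1: 0, z2: 0}) for p in minors])  # [0, 0, 0]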

Definition 2.2. Let F ∈ R^{l×m} be of full row rank, and let f | d(F). F is said to admit a matrix factorization with respect to f if F can be factorized as

    F = G_1F_1,                                                          (1)

where F_1 ∈ R^{l×m}, G_1 ∈ R^{l×l}, and det G_1 = f.

In Definition 2.2, if F_1 is ZLP (MLP, FLP), then (1) is said to be a ZLP (MLP, FLP) factorization. For related results we refer readers to [15–18]. In order to motivate our research problem, we recall the following definition [17]:

Definition 2.3. Let F ∈ R^{l×m} be of full row rank, and let f | d(F). f is said to be regular with respect to F if d([fI_l | F]) = f up to multiplication by a nonzero constant.
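The condition in Definition 2.3 can be tested by one more g.c.d. computation on maximal minors. The sketch below reuses the toy matrix F with d(F) = z_1 from above (a hypothetical example) and checks that the factor f = z_1 is regular with respect to F:

    from functools import reduce
    from itertools import combinations
    from sympy import symbols, Matrix, gcd, eye

    z1, z2, z3 = symbols('z1 z2 z3')

    F = Matrix([[z1, z2, z1*z2],
                [0,  z1, z1*z3]])   # hypothetical example with d(F) = z1
    f = z1
    l = F.rows

    # Form [f*I_l | F] and take the g.c.d. of its l x l minors.
    M = Matrix.hstack(f * eye(l), F)
    minors = [M.extract(list(range(l)), list(c)).det()
              for c in combinations(range(M.cols), l)]
    print(reduce(gcd, minors))   # z1: equals f up to a constant, so f is regular w.r.t. F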

Definition 2.4. Let F ∈ R^{l×m} be of full row rank, and let f | d(F). f is said to be weakly regular with respect to F if there exists h ∈ R such that d([hI_l | F]) = f up to multiplication by a nonzero constant. In this case, f is said to be weakly regular with respect to F and h.

By Lemma 1 in [12], for f | d(F), f is weakly regular with respect to F if and only if there exists g | f such that d([gI_l | F]) = f. For a regular factor f of F, by Theorem 3.1 in [17], there exists a matrix factorization F = GF_1 with det G = f if and only if ρ(F) : f is a free R-module; in this case ρ(F_1) = ρ(F) : f is uniquely determined. However, for a non-regular factor f of F, even when a factorization F = GF' with det G = f exists, ρ(F') is not uniquely determined; in other words, if F = G_1F_1 = G'F' with det G_1 = det G' = f, then ρ(F_1) and ρ(F') might not be the same. Thus, in such cases, a factorization cannot be obtained by directly computing a suitable submodule. We are therefore interested in the following problem.

Problem. For which factors f of d(F) is ρ(F_1) uniquely determined, under the condition that a factorization F = GF_1 with det G = f exists?

Generally speaking, even in the one- or two-variable case the problem is not clear. However, for a weakly regular factor f of F, there exists g | f such that d([gI_l | F]) = f. If ρ(F) : g is a free R-module, then there exists a factorization F = GF_1 such that det G = f and ρ(F_1) = ρ(F) : g. In this case, we are interested in the conditions under which ρ(F_1) = ρ(F) : f. In this paper we give a positive solution in this particular case.

3. Main results and their proofs

First we recall Youla's MLP Lemma, which was used to give another (elegant) proof of the Serre problem over the polynomial ring in two variables [19].

Lemma 3.1. Let C(z) ∈ R^{l×m} (l < m) be a full row rank matrix. Then C(z) is MLP if and only if, for every i = 1, ..., n, it can be incorporated into the first l rows of an m × m matrix C_e(z) whose determinant ψ_i(z) is nontrivial and independent of z_i.

The starting point of the paper is to extend the MLP Lemma to the following theorem.

Theorem 3.1. Let C(z) ∈ R^{l×m} (l < m) be a full row rank matrix, and let d be the greatest common divisor of all l × l minors of C(z). Then for every i, 1 ≤ i ≤ n, C(z) can be incorporated into the first l rows of an m × m matrix C_e(z) with det C_e(z) = dϕ_i(z), where ϕ_i(z) is nonzero and independent of z_i; that is, there exists C_1(z) ∈ R^{(m−l)×m} such that

    C_e(z) = [ C(z)   ]
             [ C_1(z) ]  ∈ R^{m×m}    and    det C_e(z) = dϕ_i(z).

Proof. For every fixed i, 1 ≤ i ≤ n, the entries of C(z) can be viewed as polynomials in z_i with coefficients in k[z_i^c], where z_i^c denotes the remaining variables; that is, as elements of k[z_i^c][z_i]. According to the diagonalization method for polynomial matrices in one variable, i.e., Smith reduction, there exists a matrix V(z) ∈ R^{m×m} such that det V(z) = v(z) is non-trivial and independent of z_i, and

    C(z)V(z) = [ E(z) | 0_{l×(m−l)} ].                                   (2)

Let V(z) = [V_1(z)  V_2(z)], where V_1(z) ∈ R^{m×l}, V_2(z) ∈ R^{m×(m−l)}. By (2), we have C(z)V_1(z) = E(z). By the Cauchy–Binet theorem,

    det E(z) = Σ_{i=1}^{α} Δ_iV_{1i} = d · Σ_{i=1}^{α} Δ'_iV_{1i},       (3)

where α = C_m^l, the Δ_i and V_{1i} are the l × l minors of C(z) and V_1(z) respectively, arranged in a suitable ordering, and Δ_i = dΔ'_i, i = 1, ..., α. By (3), we have d | det E(z). Let the adjoint matrix of V(z) be



    V*(z) = [ B(z)   ]
            [ C_1(z) ],                                                  (4)

where B(z) ∈ R^{l×m}, C_1(z) ∈ R^{(m−l)×m}. Let

    C_e(z) = [ C(z)   ]
             [ C_1(z) ];                                                 (5)

thus C(z) is embedded into the first l rows of C_e(z). Note that

    C_1(z)V(z) = [ 0_{(m−l)×l} | v(z)I_{m−l} ].                          (6)

By (2), we get

    C_e(z)V(z) = [ E(z)          0_{l×(m−l)} ]
                 [ 0_{(m−l)×l}   v(z)I_{m−l} ].                          (7)

By taking determinants of the above matrices, we have

    det C_e(z) = v(z)^{m−l−1} · det E(z).                                (8)

Hence det C_e(z)/d is independent of z_i if and only if det E(z)/d is independent of z_i (since v(z) is non-trivial and independent of z_i). On the other hand, multiplying both sides of (2) on the right by V*(z), we have

    v(z)C(z) = [ E(z) | 0_{l×(m−l)} ] V*(z) = E(z)[ I_l | 0_{l×(m−l)} ] V*(z).   (9)

Hence det E(z) divides the greatest common divisor of all l × l minors of v(z)C(z), i.e.,

    det E(z) | v(z)^l d.                                                 (10)

Thus det E(z)/d | v(z)^l. Since v(z) does not involve z_i, det E(z)/d = φ_i(z) does not involve z_i either. Therefore det C_e(z) = v^{m−l−1}(z) · dφ_i(z) = d · ϕ_i(z), where ϕ_i(z) = v^{m−l−1}(z)φ_i(z) does not involve z_i.

Theorem 3.1 has the following immediate corollary.

Corollary 3.1. Let C(z) ∈ R^{l×m} (l < m) be a full row rank matrix, and let d be the greatest common divisor of all l × l minors of C(z). Then for every i, 1 ≤ i ≤ n, there exists a matrix V_i(z) ∈ R^{m×l} such that C(z)V_i(z) = dϕ_i(z)I_l, where ϕ_i(z) is nonzero and independent of z_i.
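For the toy matrix used in Section 2 (a hypothetical example with d = z_1), the conclusion of Corollary 3.1 can be verified for i = 1 and i = 3 with hand-picked matrices V_1 and V_3; the SymPy sketch below only checks the statement, it does not reproduce the construction in the proof.

    from sympy import symbols, Matrix, eye

    z1, z2, z3 = symbols('z1 z2 z3')

    F = Matrix([[z1, z2, z1*z2],
                [0,  z1, z1*z3]])    # hypothetical example, d(F) = z1

    # i = 1: F * V1 = (z1*z3) * I_2, and phi_1 = z3 does not involve z1.
    V1 = Matrix([[z3, -z2],
                 [0,   0],
                 [0,   1]])
    print(F * V1 - z1*z3*eye(2))     # Matrix([[0, 0], [0, 0]])

    # i = 3: F * V3 = z1**2 * I_2, and phi_3 = z1 does not involve z3.
    V3 = Matrix([[z1, -z2],
                 [0,   z1],
                 [0,   0]])
    print(F * V3 - z1**2*eye(2))     # Matrix([[0, 0], [0, 0]])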

Theorem 3.1 has some interesting applications, by which we may obtain more general theorems than those of [16]. We first prove the following proposition, which generalizes Proposition 3 in [16]; the latter was proved under the premise that an MLP factorization exists.

Proposition 3.1. Let F ∈ R^{l×m} be a full row rank matrix, d = d(F), and K = ρ(F). Then (K : d)/K = Torsion(R^m/K).

Proof. (K : d)/K ⊆ Torsion(R^m/K) is obvious, so we only need to show Torsion(R^m/K) ⊆ (K : d)/K. For any x̄ = x + K ∈ Torsion(R^m/K), there exists a non-zero a ∈ R with ax ∈ K. Thus there exists a vector u ∈ R^l such that ax = uF. For F with d(F) = d, by Corollary 3.1, for every i = 1, ..., n, there exists a matrix V_i ∈ R^{m×l} such that FV_i = dϕ_iI_l, where ϕ_i does not involve z_i. Then axV_i = uFV_i = dϕ_iu. Since ϕ_i does not involve z_i, a divides every element of du, hence du/a is a polynomial vector. Further, dx = (d/a) · ax = (du/a)F ∈ K, so x ∈ K : d, i.e., x̄ = x + K ∈ (K : d)/K. Thus (K : d)/K = Torsion(R^m/K).

Proposition 3.2. Let F ∈ R^{l×m} be a full row rank matrix, d = d(F), and f | d. Suppose there exists a factorization F = GF_1 such that G ∈ R^{l×l}, F_1 ∈ R^{l×m} and det G = f. Then K_1 : (d/f) = K : d, where K = ρ(F) and K_1 = ρ(F_1).

Proof. Since F = GF_1, we have K ⊆ K_1. Since F = GF_1 with det G = f, we have fF_1 = adj(G) · F, where adj(G) is the adjoint matrix of G. Thus for every row vector v of F_1, fv can be linearly represented by the row vectors of F, i.e., fρ(F_1) ⊆ K, hence K_1 ⊆ K : f. By f | d, we have K : f ⊆ K : d, hence K_1 : (d/f) ⊆ (K : f) : (d/f) = K : d, i.e., K_1 : (d/f) ⊆ K : d.

In the following, we show K : d ⊆ K_1 : (d/f). For any x ∈ K : d, since K = ρ(F), there exists a vector u ∈ R^l such that dx = uF = uGF_1. Note that d(F_1) = d/f; by Corollary 3.1, for every i, 1 ≤ i ≤ n, there exists V_i ∈ R^{m×l} such that F_1V_i = ϕ_i(d/f)I_l. Thus dxV_i = uGF_1V_i = (d/f)ϕ_iuG, where ϕ_i does not contain z_i. Therefore d divides every component of (d/f)uG, i.e., (1/d)((d/f)uG) = uG/f is a polynomial vector. Hence (d/f)x = ((1/f)uG)F_1 ∈ K_1, i.e., x ∈ K_1 : (d/f). Hence K : d ⊆ K_1 : (d/f). Thus we have proved K_1 : (d/f) = K : d.

By Proposition 3.1 and Proposition 3.2, we have the following corollary.

Corollary 3.2. Let F ∈ R^{l×m} be of full row rank, d = d(F). Suppose there exists a factorization F = GF_1 such that G ∈ R^{l×l}, F_1 ∈ R^{l×m} with det G = f. Let K = ρ(F) and K_1 = ρ(F_1). Then (K_1 : (d/f))/K = (K : d)/K = Torsion(R^m/K).
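The hypotheses of Proposition 3.2 are easy to exhibit concretely. For the toy matrix F above (hypothetical, with d(F) = z_1) and f = z_1, the factorization below has det G = f, and d(F_1) = d/f = 1, as the following SymPy sketch confirms:

    from functools import reduce
    from itertools import combinations
    from sympy import symbols, Matrix, gcd

    z1, z2, z3 = symbols('z1 z2 z3')

    F  = Matrix([[z1, z2, z1*z2],
                 [0,  z1, z1*z3]])   # hypothetical example, d(F) = z1
    G  = Matrix([[1, 0],
                 [0, z1]])           # det G = z1 = f
    F1 = Matrix([[z1, z2, z1*z2],
                 [0,  1,  z3]])

    print(G * F1 == F)               # True: F = G * F1
    print(G.det())                   # z1

    minors_F1 = [F1.extract([0, 1], list(c)).det()
                 for c in combinations(range(3), 2)]
    print(reduce(gcd, minors_F1))    # 1, i.e. d(F1) = d(F)/f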

The following result is a generalization of Theorem 4 in [16]. Here we give a proof by using Theorem 3.1.

Lemma 3.2. Let F ∈ R^{l×m} be a full row rank matrix, d = d(F), and K = ρ(F). Let g | d with gcd(g, d/g) = 1. If K : g = K, then g is a constant.

Proof. Assume that g is not a constant. Then g involves some variable z_i, i ∈ {1, ..., n}; without loss of generality, we assume that g contains the variable z_1. For F ∈ R^{l×m}, by Theorem 3.1, there exists a matrix X ∈ R^{(m−l)×m} such that

    F_1 = [ F ]
          [ X ]  ∈ R^{m×m}

and det F_1 = dϕ_1(z), where ϕ_1(z) does not contain z_1. Let the adjoint matrix of F_1 be as follows:

    F_1^* = [ A_{m×l} | B_{m×(m−l)} ]

          = [ A_11  A_21  ···  A_l1  A_(l+1)1  ···  A_m1 ]
            [ A_12  A_22  ···  A_l2  A_(l+1)2  ···  A_m2 ]
            [  ⋮     ⋮          ⋮      ⋮             ⋮   ]
            [ A_1m  A_2m  ···  A_lm  A_(l+1)m  ···  A_mm ].              (11)

Then F_1F_1^* = dϕ_1(z)I_m. Hence FF_1^* = [dϕ_1(z)I_l | 0_{l×(m−l)}], and

    FA = dϕ_1I_l.                                                        (12)

By the Cauchy–Binet theorem,

    (ϕ_1d)^l = Σ_{i=1}^{β} a_iA_{j_i},                                   (13)

where a_1, ..., a_β are the l × l minors of F, A_{j_1}, ..., A_{j_β} are the l × l minors of A arranged in the corresponding ordering, β = C_m^l, and a_i = ge_i (note that g | d and d divides every a_i). In the following, we first prove that there exists some element A_{i_0j_0} of A which is not divisible by g. Assume that every entry of A is divisible by g. Then by (13),

    (ϕ_1d)^l = Σ_{i=1}^{β} a_iA_{j_i} = g^{l+1} Σ_{i=1}^{β} e_iA'_{j_i},          (14)

where A_{j_i} = g^lA'_{j_i}; indeed, since every entry of A is divisible by g, every l × l minor of A is divisible by g^l.

Hence g^{l+1} | (ϕ_1d)^l, i.e., g | ϕ_1^l(d/g)^l. From gcd(g, d/g) = 1, we get g | ϕ_1^l. Thus ϕ_1 must involve the variable z_1 (since g does), which is impossible. Therefore there exists some entry A_{i_0j_0} of A which is not divisible by g, where 1 ≤ i_0 ≤ l, 1 ≤ j_0 ≤ m.

Consider the product of the row of F_1^* containing A_{i_0j_0} with F_1 (here a_{ij} are the entries of F and x_{ij} are the entries of X):

    (A_{1j_0}, ..., A_{i_0j_0}, ..., A_{lj_0}, ..., A_{mj_0}) F_1

      = (A_{1j_0}, ..., A_{i_0j_0}, ..., A_{lj_0}, ..., A_{mj_0})
        [ a_11      a_12      ···  a_1l      a_1(l+1)      ···  a_1m     ]
        [ a_21      a_22      ···  a_2l      a_2(l+1)      ···  a_2m     ]
        [  ⋮         ⋮              ⋮          ⋮                  ⋮      ]
        [ a_l1      a_l2      ···  a_ll      a_l(l+1)      ···  a_lm     ]
        [ x_(l+1)1  x_(l+1)2  ···  x_(l+1)l  x_(l+1)(l+1)  ···  x_(l+1)m ]
        [  ⋮         ⋮              ⋮          ⋮                  ⋮      ]
        [ x_m1      x_m2      ···  x_ml      x_m(l+1)      ···  x_mm     ]

      = (0, ..., dϕ_1, ..., 0) = dϕ_1e_{j_0}.                            (15)

Hence

    ( Σ_{i=1}^{l} a_{i1}A_{ij_0} + Σ_{i=l+1}^{m} x_{i1}A_{ij_0}, ..., Σ_{i=1}^{l} a_{im}A_{ij_0} + Σ_{i=l+1}^{m} x_{im}A_{ij_0} ) = dϕ_1e_{j_0}.

So

    ( Σ_{i=1}^{l} a_{i1}A_{ij_0}, Σ_{i=1}^{l} a_{i2}A_{ij_0}, ..., Σ_{i=1}^{l} a_{im}A_{ij_0} )

      = ( −Σ_{i=l+1}^{m} x_{i1}A_{ij_0}, −Σ_{i=l+1}^{m} x_{i2}A_{ij_0}, ..., ϕ_1d − Σ_{i=l+1}^{m} x_{ij_0}A_{ij_0}, ..., −Σ_{i=l+1}^{m} x_{im}A_{ij_0} ).      (16)

Note that, for i = l+1, ..., m, the A_{ij_0} appearing on the right-hand side of the above formula is the cofactor of x_{ij_0} in F_1. By the Laplace theorem it is easy to see that g | A_{ij_0} for i = l+1, ..., m: expanding the corresponding (m−1) × (m−1) minor of F_1 along the rows of F, every term contains an l × l minor of F, and each such minor is divisible by g. Hence

    (A_{1j_0}, A_{2j_0}, ..., A_{lj_0}) F = g(α_1, α_2, ..., α_{j_0}, ..., α_m) = gh,      (17)

where

    α_{j_0} = (d/g)ϕ_1 − Σ_{i=l+1}^{m} x_{ij_0}(A_{ij_0}/g),
    α_j = −Σ_{i=l+1}^{m} x_{ij}(A_{ij_0}/g),   j = 1, ..., j_0−1, j_0+1, ..., m.

Note that

    gh = (A_{1j_0}, ..., A_{i_0j_0}, ..., A_{lj_0}) F ∈ K,               (18)

hence h ∈ K : g. Since g ∤ A_{i_0j_0} and F has full row rank, it is easy to see that h ∉ K. Thus there exists h such that h ∈ K : g but h ∉ K, hence K : g ≠ K. This is a contradiction. So g is a constant.

Lemma 3.3. Let F ∈ R^{l×m} be of full row rank, d = d(F), and f | d. Let K = ρ(F). If K : f = K, then f is a constant.

Proof. Assume that f is not a constant. Since f | d, we know d is not a constant. Hence, by Theorem 4 in [16] or by Lemma 3.2, K : d ≠ K; i.e., there exists a vector x ∈ K : d with x ∉ K. Let d = p_1^{q_1}p_2^{q_2} ··· p_t^{q_t} be the factorization of d, where p_1, ..., p_t are distinct irreducible polynomials in R and q_i ≥ 1. We consider two cases.

(1) Suppose f = p_1^{r_1}p_2^{r_2} ··· p_t^{r_t}, 1 ≤ r_i ≤ q_i, i = 1, ..., t. We prove that K : d = K. Indeed, let y ∈ K : d; then p_1^{q_1}p_2^{q_2} ··· p_t^{q_t}y ∈ K. Hence f(p_1^{q_1−1}p_2^{q_2} ··· p_t^{q_t}y) ∈ K, and thus p_1^{q_1−1}p_2^{q_2} ··· p_t^{q_t}y ∈ K : f = K. By a similar reasoning, p_1^{q_1−2}p_2^{q_2} ··· p_t^{q_t}y ∈ K : f = K. Continuing in this way, we obtain p_2^{q_2} ··· p_t^{q_t}y ∈ K. Treating the exponents of p_2, p_3, ..., p_t by the same reasoning, we obtain y ∈ K. Hence K : d = K, which contradicts K : d ≠ K. So this case cannot occur.

(2) Suppose f = p_1^{r_1}p_2^{r_2} ··· p_k^{r_k} ≠ 1, where 1 ≤ r_i ≤ q_i, i = 1, ..., k, with k < t. Let g = p_1^{q_1}p_2^{q_2} ··· p_k^{q_k}. With the same proof as in (1), we can show K : g = K. Since gcd(g, d/g) = 1, Lemma 3.2 implies that g is a constant, which is a contradiction.

Hence f is a constant.

Based on Lemma 3.3, we can obtain the main theorem of the paper, which is a generalization of Theorem 4 in [16].

Theorem 3.2. Let F ∈ R^{l×m} be of full row rank, d = d(F). Let K = ρ(F), f ∈ R. Then K : f = K if and only if f and d are coprime.

Proof. ⇐: Assume that f is coprime with d; we want to prove K : f = K. It suffices to prove K : f ⊆ K. For any x ∈ K : f, we have fx ∈ K. Thus there exists a vector u ∈ R^l such that fx = uF. By Theorem 3.1, for every i = 1, ..., n, there exists a matrix V_i ∈ R^{m×l} such that FV_i = dϕ_iI_l, where ϕ_i does not contain the variable z_i. Thus fxV_i = uFV_i = dϕ_iu. Hence f divides every component of du. By gcd(f, d) = 1, f divides every component of u. Thus x = (u/f)F ∈ K, i.e., K : f ⊆ K.

⇒: Let h = gcd(f, d). Then K : h ⊆ K : f = K. It is obvious that K ⊆ K : h, so K : h = K. Note that h | d; by Lemma 3.3, we have h = 1, i.e., f is coprime with d.
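As a toy illustration of Theorem 3.2, consider the hypothetical one-row example n = 2, F = (z_1, z_1z_2) ∈ R^{1×2}, so that d = d(F) = z_1 and K = ρ(F) = R·(z_1, z_1z_2). For f = z_2, which is coprime with d, suppose z_2x ∈ K, say z_2x = a(z_1, z_1z_2) with x = (x_1, x_2); then z_1 | x_1, say x_1 = z_1b, which forces a = z_2b and x_2 = z_1z_2b, so x = b(z_1, z_1z_2) ∈ K, and hence K : z_2 = K. For f = z_1 = d, on the other hand, (1, z_2) ∈ K : z_1 while (1, z_2) ∉ K, so K : z_1 ≠ K, in accordance with the theorem.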

As a direct application of Theorem 3.2, we have the following result:

Theorem 3.3. Let F ∈ R^{l×m} be of full row rank, d = d(F). Let K = ρ(F), f ∈ R. Assume that F has a factorization F = GF_1 such that det G = f, and ρ(F_1) = K : g for a factor g of f. Then ρ(F_1) = K : f if and only if gcd(f/g, d/f) = 1.

Proof. Let f = gh for some h ∈ R. Then K : f = (K : g) : h. Let K_1 = ρ(F_1). Then K_1 = K : f if and only if K_1 = K_1 : h. Note that d(F_1) = d(F)/f. Then, by Theorem 3.2, K_1 = K_1 : h if and only if gcd(h, d/f) = 1.

Acknowledgements

The authors would like to thank the referee for his suggestions on the revision of this paper.

References

[1] N.K. Bose, B. Buchberger, J.P. Guiver, Multidimensional Systems Theory and Applications, Kluwer, Dordrecht, The Netherlands, 2003.
[2] W.C. Brown, Matrices over Commutative Rings, Marcel Dekker, Inc., New York, 1993.
[3] A. Fabianska, A. Quadrat, Applications of the Quillen–Suslin theorem to multidimensional systems theory, in: H. Park, G. Regensburger (Eds.), Gröbner Bases in Control Theory and Signal Processing, Walter de Gruyter, Berlin, 2006, pp. 23–106.
[4] J.P. Guiver, N.K. Bose, Polynomial matrix primitive factorization over arbitrary coefficient field and related results, IEEE Trans. Circuits Syst. 29 (Oct. 1982) 649–657.
[5] G.M. Greuel, G. Pfister, A Singular Introduction to Commutative Algebra, 2nd edition, Springer, 2008.
[6] T.Y. Lam, Serre's Problem on Projective Modules, Springer-Verlag, Berlin, Germany, 2006.
[7] Z. Lin, On matrix fraction descriptions of multivariate linear n-D systems, IEEE Trans. Circuits Syst. 35 (10) (1988) 1317–1322.
[8] Z. Lin, N.K. Bose, A generalization of Serre's conjecture and related issues, Linear Algebra Appl. 338 (2001) 125–138.
[9] Z. Lin, J. Ying, L. Xu, Factorizations for nD polynomial matrices, Circuits Systems Signal Process. 20 (6) (2001) 601–618.
[10] Z. Lin, L. Xu, H. Fan, On minor prime factorization for n-D polynomial matrices, IEEE Trans. Circuits Syst. II, Express Briefs 52 (9) (Sep. 2005) 568–571.
[11] Z. Lin, L. Xu, N.K. Bose, A tutorial on Gröbner bases with applications in signals and systems, IEEE Trans. Circuits Syst. I, Regul. Pap. 55 (1) (Jan. 2008) 445–461.
[12] J. Liu, M. Wang, Notes on factor prime factorizations for n-D polynomial matrices, Multidimens. Syst. Signal Process. 21 (1) (2010) 87–97.
[13] J. Liu, M. Wang, New results for multivariate polynomial matrix factorizations, Linear Algebra Appl. 438 (1) (2013) 87–95.
[14] M. Morf, B.C. Levy, S.Y. Kung, New results in 2-D systems theory, part I: 2-D polynomial matrices, factorizations and coprimeness, Proc. IEEE 65 (4) (1977) 861–872.
[15] M. Wang, D. Feng, On Lin–Bose problem, Linear Algebra Appl. 390 (Oct. 2004) 279–285.
[16] M. Wang, C.P. Kwong, On multivariate polynomial matrix factorization problems, Math. Control Signals Systems 17 (4) (2005) 297–311.
[17] M. Wang, On factor prime factorization for n-D polynomial matrices, IEEE Trans. Circuits Syst. I, Regul. Pap. 54 (6) (Jun. 2007) 1398–1405.
[18] M. Wang, Remarks on n-D polynomial matrix factorization problems, IEEE Trans. Circuits Syst. II, Express Briefs 55 (1) (Jan. 2008) 61–64.
[19] D.C. Youla, G. Gnavi, Notes on n-dimensional system theory, IEEE Trans. Circuits Syst. CAS-26 (2) (Feb. 1979) 105–111.
[20] D.C. Youla, P.F. Pickel, The Quillen–Suslin theorem and the structure of n-dimensional elementary polynomial matrices, IEEE Trans. Circuits Syst. CAS-31 (6) (Jun. 1984) 513–518.