Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations


Linear Algebra and its Applications 438 (2013) 136–152


Ivan Kyrchei
Pidstrygach Institute for Applied Problems of Mechanics and Mathematics of NAS of Ukraine, str. Naukova 3b, Lviv 79060, Ukraine
E-mail address: [email protected]
http://dx.doi.org/10.1016/j.laa.2012.07.049

Article history: Received 2 December 2011; accepted 13 July 2012; available online 11 September 2012. Submitted by C.-K. Li.

AMS classification: 15A15; 16W10.

Keywords: Matrix equation; Least squares solution; Moore–Penrose generalized inverse; Quaternion matrix; Cramer rule; Column determinant; Row determinant.

Abstract. Within the framework of the theory of the column and row determinants, we obtain explicit representation formulas (analogs of Cramer's rule) for the minimum norm least squares solutions of the quaternion matrix equations AX = B, XA = B and AXB = D. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

In recent years quaternion matrix equations have been investigated by many authors (see, e.g., [1–16]). For example, Jiang et al. [1] studied the solutions of the general quaternion matrix equation $AXB - CYD = E$, Liu [3] studied the least squares Hermitian solution of the quaternion matrix equation $(A^{H}XA,\, B^{H}XB)=(C,D)$, and Wang et al. [10] derived the common solution to six quaternion matrix equations. However, the problem of determinantal representations of the least squares solutions of quaternion matrix equations, in particular
$$AX=B,\qquad(1)$$
$$XA=B,\qquad(2)$$
$$AXB=D,\qquad(3)$$
has not yet been treated satisfactorily. The reason was the lack of an appropriate noncommutative determinant. Many authors have tried to define the determinant of a quaternion matrix (see, e.g., [17–23]); unfortunately, none of those definitions yields a determinantal representation of an inverse matrix. In [24], however, we defined the row and column determinants and the double determinant of a square matrix over the quaternion skew field. As applications we obtained the determinantal representations of an inverse matrix by an analog of the adjoint matrix in [24] and of the Moore–Penrose inverse over the quaternion skew field H in [25]. In [16] we also gave Cramer's rule for the solution of the nonsingular quaternion matrix equations (1)–(3) within the framework of the theory of the column and row determinants. In [13,14], the authors obtained generalized Cramer rules for the unique solution of the matrix equation (3) under some restricted conditions within the same framework.

In this paper we aim to obtain explicit representation formulas (analogs of Cramer's rule) for the minimum norm least squares solutions of the quaternion matrix equations (1)–(3) without any restriction.

The paper is organized as follows. In Section 2 we recall some basic concepts and results from the theory of the column and row determinants, which are necessary for the sequel; this theory is developed in full in [24]. In Section 3 we state the theorem on determinantal representations of the Moore–Penrose inverse over the quaternion skew field derived in [25]. In Section 4 we obtain explicit representation formulas for the minimum norm least squares solutions of the quaternion matrix equations (1)–(3). In Section 5 we give a numerical example to illustrate the main result.

2. Elements of the theory of the column and row determinants

Throughout the paper we denote the real number field by $\mathbb{R}$, the quaternion algebra by
$$\mathbb{H}=\{a_0+a_1 i+a_2 j+a_3 k \mid i^2=j^2=k^2=-1,\; a_0,a_1,a_2,a_3\in\mathbb{R}\},$$
the set of all $m\times n$ matrices over $\mathbb{H}$ by $\mathbb{H}^{m\times n}$, and its subset of matrices of rank $r$ by $\mathbb{H}^{m\times n}_r$. Let $M(n,\mathbb{H})$ be the ring of $n\times n$ quaternion matrices. The conjugate of a quaternion $a=a_0+a_1 i+a_2 j+a_3 k\in\mathbb{H}$ is defined by $\overline{a}=a_0-a_1 i-a_2 j-a_3 k$. The Hermitian adjoint matrix of $A=(a_{ij})\in\mathbb{H}^{m\times n}$ is the matrix $A^{*}=(a^{*}_{ij})\in\mathbb{H}^{n\times m}$ with $a^{*}_{ij}=\overline{a_{ji}}$ for all $i=1,\dots,n$ and $j=1,\dots,m$. A matrix $A\in M(n,\mathbb{H})$ is Hermitian if $A^{*}=A$. Suppose $S_n$ is the symmetric group on the set $I_n=\{1,\dots,n\}$.
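As a quick concreteness check of these conventions, the Hamilton product and the quaternion conjugate can be sketched in a few lines of Python (our own illustration, not part of the paper; quaternions are stored as component 4-tuples):

```python
# Quaternions as 4-tuples (a0, a1, a2, a3) representing a0 + a1*i + a2*j + a3*k.
def qmul(p, q):
    """Hamilton product of two quaternions (noncommutative)."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(q):
    """Conjugate: a0 + a1 i + a2 j + a3 k -> a0 - a1 i - a2 j - a3 k."""
    a0, a1, a2, a3 = q
    return (a0, -a1, -a2, -a3)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # -> (0, 0, 0, 1), i.e. ij = k
print(qmul(j, i))   # -> (0, 0, 0, -1), i.e. ji = -k: H is noncommutative
```

The conjugate is an anti-automorphism, i.e. the conjugate of a product is the product of the conjugates in reverse order, which is what makes the Hermitian adjoint behave as expected.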

Definition 2.1. The $i$th row determinant of $A=(a_{ij})\in M(n,\mathbb{H})$ is defined by
$$\operatorname{rdet}_i A=\sum_{\sigma\in S_n}(-1)^{n-r}\,a_{i\,i_{k_1}}a_{i_{k_1} i_{k_1+1}}\cdots a_{i_{k_1+l_1}\,i}\cdots a_{i_{k_r} i_{k_r+1}}\cdots a_{i_{k_r+l_r}\,i_{k_r}}$$
for all $i=1,\dots,n$. The elements of the permutation $\sigma$ are the indices of each monomial. The left-ordered cycle notation of the permutation $\sigma$ is written as follows:
$$\sigma=\bigl(i\,i_{k_1} i_{k_1+1}\cdots i_{k_1+l_1}\bigr)\bigl(i_{k_2} i_{k_2+1}\cdots i_{k_2+l_2}\bigr)\cdots\bigl(i_{k_r} i_{k_r+1}\cdots i_{k_r+l_r}\bigr).$$
The index $i$ opens the first cycle from the left, and the other cycles satisfy the conditions $i_{k_2}<i_{k_3}<\dots<i_{k_r}$ and $i_{k_t}<i_{k_t+s}$ for all $t=2,\dots,r$ and $s=1,\dots,l_t$.


Definition 2.2. The $j$th column determinant of $A=(a_{ij})\in M(n,\mathbb{H})$ is defined by
$$\operatorname{cdet}_j A=\sum_{\tau\in S_n}(-1)^{n-r}\,a_{j_{k_r} j_{k_r+l_r}}\cdots a_{j_{k_r+1} j_{k_r}}\cdots a_{j\, j_{k_1+l_1}}\cdots a_{j_{k_1+1} j_{k_1}}\,a_{j_{k_1} j}$$
for all $j=1,\dots,n$. The right-ordered cycle notation of the permutation $\tau\in S_n$ is written as follows:
$$\tau=\bigl(j_{k_r+l_r}\cdots j_{k_r+1}\, j_{k_r}\bigr)\cdots\bigl(j_{k_2+l_2}\cdots j_{k_2+1}\, j_{k_2}\bigr)\bigl(j_{k_1+l_1}\cdots j_{k_1+1}\, j_{k_1}\, j\bigr).$$
The index $j$ opens the first cycle from the right, and the other cycles satisfy the conditions $j_{k_2}<j_{k_3}<\dots<j_{k_r}$ and $j_{k_t}<j_{k_t+s}$ for all $t=2,\dots,r$ and $s=1,\dots,l_t$.

The following lemmas enable us to expand $\operatorname{rdet}_i A$ by cofactors along the $i$th row and $\operatorname{cdet}_j A$ along the $j$th column, respectively, for all $i,j=1,\dots,n$.

Lemma 2.1 ([24]). Let $R_{ij}$ be the right $ij$th cofactor of $A\in M(n,\mathbb{H})$, that is, $\operatorname{rdet}_i A=\sum_{j=1}^{n}a_{ij}\,R_{ij}$ for all $i=1,\dots,n$. Then
$$R_{ij}=\begin{cases}-\operatorname{rdet}_j A^{ii}_{.\,j}(a_{.\,i}), & i\ne j,\\ \operatorname{rdet}_k A^{ii}, & i=j,\end{cases}$$
where $A^{ii}_{.\,j}(a_{.\,i})$ is obtained from $A$ by replacing the $j$th column with the $i$th column and then deleting both the $i$th row and column, and $k=\min\{I_n\setminus\{i\}\}$.

Lemma 2.2 ([24]). Let $L_{ij}$ be the left $ij$th cofactor of $A\in M(n,\mathbb{H})$, that is, $\operatorname{cdet}_j A=\sum_{i=1}^{n}L_{ij}\,a_{ij}$ for all $j=1,\dots,n$. Then
$$L_{ij}=\begin{cases}-\operatorname{cdet}_i A^{jj}_{i\,.}(a_{j\,.}), & i\ne j,\\ \operatorname{cdet}_k A^{jj}, & i=j,\end{cases}$$
where $A^{jj}_{i\,.}(a_{j\,.})$ is obtained from $A$ by replacing the $i$th row with the $j$th row and then deleting both the $j$th row and column, and $k=\min\{J_n\setminus\{j\}\}$.

Since these matrix functionals do not satisfy all the axioms of a noncommutative determinant, we call them "determinants" only by convention. By the following theorems, however, we introduce the concepts of the determinant of a Hermitian matrix and of the double determinant, which do satisfy the axioms of a noncommutative determinant and can be expanded by cofactors along an arbitrary row or column using the row-column determinants. This enables us to obtain determinantal representations of the inverse matrix.

Theorem 2.1 ([24]). If $A=(a_{ij})\in M(n,\mathbb{H})$ is Hermitian, then $\operatorname{rdet}_1 A=\dots=\operatorname{rdet}_n A=\operatorname{cdet}_1 A=\dots=\operatorname{cdet}_n A\in\mathbb{R}$.
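To make Definitions 2.1 and 2.2 concrete, consider the simplest case $n=2$ (a worked illustration added here; it follows by direct enumeration of $S_2$):

```latex
% For A = (a_{ij}) \in M(2, \mathbb{H}) the sum runs over S_2 = \{e, (12)\}.
% The identity has r = 2 cycles, sign (-1)^{2-2} = +1; the transposition
% has r = 1 cycle, sign (-1)^{2-1} = -1. Hence
\operatorname{rdet}_1 A = a_{11}a_{22} - a_{12}a_{21}, \qquad
\operatorname{cdet}_1 A = a_{22}a_{11} - a_{12}a_{21}.
% Over a commutative field these coincide; over \mathbb{H} they differ in
% general, since a_{11}a_{22} \ne a_{22}a_{11}. By Theorem 2.1 all row and
% column determinants do coincide whenever A is Hermitian.
```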

Remark 2.1. By Theorem 2.1 we can define the determinant of a Hermitian matrix $A\in M(n,\mathbb{H})$ by putting
$$\det A:=\operatorname{rdet}_i A=\operatorname{cdet}_i A$$
for all $i=1,\dots,n$.

Theorem 2.2 ([24]). If for a Hermitian matrix $A\in M(n,\mathbb{H})$ we have $\det A\ne 0$, then there exist a unique right inverse matrix $(RA)^{-1}$ and a unique left inverse matrix $(LA)^{-1}$ of $A$, where $(RA)^{-1}=(LA)^{-1}=:A^{-1}$, and they possess the following determinantal representations:
$$(RA)^{-1}=\frac{1}{\det A}\begin{pmatrix}R_{11}&R_{21}&\dots&R_{n1}\\ R_{12}&R_{22}&\dots&R_{n2}\\ \dots&\dots&\dots&\dots\\ R_{1n}&R_{2n}&\dots&R_{nn}\end{pmatrix},\qquad(4)$$
$$(LA)^{-1}=\frac{1}{\det A}\begin{pmatrix}L_{11}&L_{21}&\dots&L_{n1}\\ L_{12}&L_{22}&\dots&L_{n2}\\ \dots&\dots&\dots&\dots\\ L_{1n}&L_{2n}&\dots&L_{nn}\end{pmatrix},\qquad(5)$$
where $R_{ij}$, $L_{ij}$ are the right and left $ij$th cofactors of $A$, respectively, for all $i,j=1,\dots,n$.

Theorem 2.3 ([24]). If $A\in M(n,\mathbb{H})$, then $\det AA^{*}=\det A^{*}A$.

According to Theorem 2.3 we introduce the concept of a double determinant.

Definition 2.3. The determinant of the corresponding Hermitian matrix ($A^{*}A$ or $AA^{*}$) of $A\in M(n,\mathbb{H})$ is called its double determinant, i.e., $\operatorname{ddet}A:=\det(A^{*}A)=\det(AA^{*})$.

Suppose $A^{ij}$ denotes the submatrix of $A$ obtained by deleting both the $i$th row and the $j$th column. Let $a_{.\,j}$ be the $j$th column and $a_{i\,.}$ the $i$th row of $A$; denote by $a^{*}_{.\,j}$ and $a^{*}_{i\,.}$ the $j$th column and the $i$th row of the Hermitian adjoint matrix $A^{*}$ as well. Suppose $A_{.\,j}(b)$ denotes the matrix obtained from $A$ by replacing its $j$th column with the column $b$, and $A_{i\,.}(b)$ the matrix obtained from $A$ by replacing its $i$th row with the row $b$. We have the following theorem on the determinantal representation of the inverse matrix over $\mathbb{H}$.

Theorem 2.4 ([24]). The necessary and sufficient condition for invertibility of $A\in M(n,\mathbb{H})$ is $\operatorname{ddet}A\ne 0$. In that case $A^{-1}=(LA)^{-1}=(RA)^{-1}$, where
$$(LA)^{-1}=(A^{*}A)^{-1}A^{*}=\frac{1}{\operatorname{ddet}A}\begin{pmatrix}L_{11}&L_{21}&\dots&L_{n1}\\ L_{12}&L_{22}&\dots&L_{n2}\\ \dots&\dots&\dots&\dots\\ L_{1n}&L_{2n}&\dots&L_{nn}\end{pmatrix},\qquad(6)$$
$$(RA)^{-1}=A^{*}(AA^{*})^{-1}=\frac{1}{\operatorname{ddet}A}\begin{pmatrix}R_{11}&R_{21}&\dots&R_{n1}\\ R_{12}&R_{22}&\dots&R_{n2}\\ \dots&\dots&\dots&\dots\\ R_{1n}&R_{2n}&\dots&R_{nn}\end{pmatrix},\qquad(7)$$
and
$$L_{ij}=\operatorname{cdet}_j\bigl((A^{*}A)_{.\,j}(a^{*}_{.\,i})\bigr),\qquad R_{ij}=\operatorname{rdet}_i\bigl((AA^{*})_{i\,.}(a^{*}_{j\,.})\bigr)$$
for all $i,j=1,\dots,n$.


Remark 2.2. Since by Theorem 2.4
$$\operatorname{ddet}A=\operatorname{cdet}_j\bigl(A^{*}A\bigr)=\sum_i L_{ij}\,a_{ij},\qquad \operatorname{ddet}A=\operatorname{rdet}_i\bigl(AA^{*}\bigr)=\sum_j a_{ij}\,R_{ij}$$
for all $i,j=1,\dots,n$, $L_{ij}$ is called the left double $ij$th cofactor and $R_{ij}$ the right double $ij$th cofactor of the entry $a_{ij}$ of $A\in M(n,\mathbb{H})$.

3. Determinantal representation of the Moore–Penrose inverse

For $A\in\mathbb{H}^{m\times n}$, a generalized inverse of $A$ is a quaternion matrix $X$ satisfying some of the following Penrose conditions:
$$(1)\;(AX)^{*}=AX;\qquad(2)\;(XA)^{*}=XA;\qquad(3)\;AXA=A;\qquad(4)\;XAX=X.\qquad(8)$$

Definition 3.1. For $A\in\mathbb{H}^{m\times n}$, a matrix $X\in\mathbb{H}^{n\times m}$ is said to be an $(i,j,\dots)$ generalized inverse of $A$ if it satisfies the Penrose conditions $(i),(j),\dots$ in (8). We denote such an $X$ by $A^{(i,j,\dots)}$ and the set of all $A^{(i,j,\dots)}$ by $A\{i,j,\dots\}$.

By [26] we know that for every $A\in\mathbb{H}^{m\times n}$ the set $A\{i,j,\dots\}$ is nonempty, and that $A^{(1,2,3,4)}$ exists and is unique. The matrix $A^{(1,2,3,4)}$ is called the Moore–Penrose inverse of $A$, denoted $A^{+}:=A^{(1,2,3,4)}$. In [25] the determinantal representations of the Moore–Penrose inverse over the quaternion skew field were derived from the following limit representation.

Theorem 3.1. If $A\in\mathbb{H}^{m\times n}$ and $A^{+}$ is its Moore–Penrose inverse, then
$$A^{+}=\lim_{\alpha\to 0}A^{*}\bigl(AA^{*}+\alpha I\bigr)^{-1}=\lim_{\alpha\to 0}\bigl(A^{*}A+\alpha I\bigr)^{-1}A^{*},$$
where $\alpha\in\mathbb{R}_{+}$.
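Theorem 3.1 is easy to observe numerically. The sketch below (our own illustration, using NumPy over the complex field for simplicity; the quaternionic statement is analogous through the complex adjoint representation of quaternion matrices) compares the regularized inverse with the Moore–Penrose inverse for a small $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

alpha = 1e-10
n = A.shape[1]
# Limit representation: A+ = lim_{alpha -> 0} (A*A + alpha I)^{-1} A*
approx = np.linalg.solve(A.conj().T @ A + alpha * np.eye(n), A.conj().T)
exact = np.linalg.pinv(A)
print(np.max(np.abs(approx - exact)))  # small, and shrinking with alpha
```

Note that $A^{*}A+\alpha I$ is invertible for every $\alpha>0$ even when $A$ is rank deficient, which is what makes the limit representation useful.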

Corollary 3.1. If $A\in\mathbb{H}^{m\times n}$, then the following statements are true.
(i) If rank $A=n$, then $A^{+}=(A^{*}A)^{-1}A^{*}$.
(ii) If rank $A=m$, then $A^{+}=A^{*}(AA^{*})^{-1}$.
(iii) If rank $A=n=m$, then $A^{+}=A^{-1}$.

We shall use the following notation. Let $\alpha:=\{\alpha_1,\dots,\alpha_k\}\subseteq\{1,\dots,m\}$ and $\beta:=\{\beta_1,\dots,\beta_k\}\subseteq\{1,\dots,n\}$ be index subsets of order $1\le k\le\min\{m,n\}$. By $A^{\alpha}_{\beta}$ we denote the submatrix of $A$ determined by the rows indexed by $\alpha$ and the columns indexed by $\beta$; then $A^{\alpha}_{\alpha}$ denotes the principal submatrix determined by the rows and columns indexed by $\alpha$. If $A\in M(n,\mathbb{H})$ is Hermitian, then $\bigl|A^{\alpha}_{\alpha}\bigr|$ denotes the corresponding principal minor of $\det A$. For $1\le k\le n$, denote by $L_{k,n}:=\{\alpha:\alpha=(\alpha_1,\dots,\alpha_k),\;1\le\alpha_1<\dots<\alpha_k\le n\}$ the collection of strictly increasing sequences of $k$ integers chosen from $\{1,\dots,n\}$. For fixed $i\in\alpha$ and $j\in\beta$, let $I_{r,m}\{i\}:=\{\alpha:\alpha\in L_{r,m},\;i\in\alpha\}$ and $J_{r,n}\{j\}:=\{\beta:\beta\in L_{r,n},\;j\in\beta\}$.

The following theorem and remarks give the determinantal representations of the Moore–Penrose inverse which we shall use below.
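The index sets $L_{k,n}$ and $I_{r,m}\{i\}$ are plain combinatorial objects; a possible sketch in Python (illustrative only, with hypothetical helper names):

```python
from itertools import combinations

def L(k, n):
    """L_{k,n}: all strictly increasing k-tuples chosen from {1, ..., n}."""
    return list(combinations(range(1, n + 1), k))

def I_sets(r, m, i):
    """I_{r,m}{i}: the members of L(r, m) that contain the fixed index i."""
    return [alpha for alpha in L(r, m) if i in alpha]

print(L(2, 3))          # [(1, 2), (1, 3), (2, 3)]
print(I_sets(2, 3, 1))  # [(1, 2), (1, 3)]
```

For instance, $J_{2,3}$ has three members, which is why the denominator sums in the numerical example of Section 5 contain three principal minors.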


Theorem 3.2 ([25]). If $A\in\mathbb{H}^{m\times n}_{r}$, then the Moore–Penrose inverse $A^{+}=(a^{+}_{ij})\in\mathbb{H}^{n\times m}$ possesses the following determinantal representations:
$$a^{+}_{ij}=\frac{\sum_{\beta\in J_{r,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(a^{*}_{.\,j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|},\qquad(9)$$
or
$$a^{+}_{ij}=\frac{\sum_{\alpha\in I_{r,m}\{j\}}\operatorname{rdet}_j\bigl((AA^{*})_{j\,.}(a^{*}_{i\,.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r,m}}\bigl|(AA^{*})^{\alpha}_{\alpha}\bigr|},\qquad(10)$$
for all $i=1,\dots,n$, $j=1,\dots,m$.

Remark 3.1. If rank $A=n$, then by Corollary 3.1, $A^{+}=(A^{*}A)^{-1}A^{*}$. Considering $(A^{*}A)^{-1}$ as a left inverse, we get the following representation of $A^{+}$:
$$A^{+}=\frac{1}{\operatorname{ddet}A}\begin{pmatrix}\operatorname{cdet}_1\bigl((A^{*}A)_{.\,1}(a^{*}_{.\,1})\bigr)&\dots&\operatorname{cdet}_1\bigl((A^{*}A)_{.\,1}(a^{*}_{.\,m})\bigr)\\ \dots&\dots&\dots\\ \operatorname{cdet}_n\bigl((A^{*}A)_{.\,n}(a^{*}_{.\,1})\bigr)&\dots&\operatorname{cdet}_n\bigl((A^{*}A)_{.\,n}(a^{*}_{.\,m})\bigr)\end{pmatrix}.\qquad(11)$$
If $m>n$, then by Theorem 3.2 we have (9) for $A^{+}$ as well.

Remark 3.2. If rank $A=m$, then by Corollary 3.1, $A^{+}=A^{*}(AA^{*})^{-1}$. Considering $(AA^{*})^{-1}$ as a right inverse, we get the following representation of $A^{+}$:
$$A^{+}=\frac{1}{\operatorname{ddet}A}\begin{pmatrix}\operatorname{rdet}_1\bigl((AA^{*})_{1\,.}(a^{*}_{1\,.})\bigr)&\dots&\operatorname{rdet}_m\bigl((AA^{*})_{m\,.}(a^{*}_{1\,.})\bigr)\\ \dots&\dots&\dots\\ \operatorname{rdet}_1\bigl((AA^{*})_{1\,.}(a^{*}_{n\,.})\bigr)&\dots&\operatorname{rdet}_m\bigl((AA^{*})_{m\,.}(a^{*}_{n\,.})\bigr)\end{pmatrix}.\qquad(12)$$
If $m<n$, then by Theorem 3.2 we also have (10).
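In the full-rank cases, Remarks 3.1 and 3.2 reduce to the familiar closed forms $A^{+}=(A^{*}A)^{-1}A^{*}$ and $A^{+}=A^{*}(AA^{*})^{-1}$. A quick numerical sanity check over the complex field (our own illustration; the quaternion case works the same through the complex adjoint representation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Full column rank (rank A = n): A+ = (A*A)^{-1} A*
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
assert np.allclose(np.linalg.pinv(A),
                   np.linalg.inv(A.conj().T @ A) @ A.conj().T)

# Full row rank (rank B = m): B+ = B* (BB*)^{-1}
B = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
assert np.allclose(np.linalg.pinv(B),
                   B.conj().T @ np.linalg.inv(B @ B.conj().T))
print("full-rank Moore-Penrose formulas verified")
```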

4. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations

Denote by $\lVert A\rVert$ the Frobenius norm of the quaternion matrix $A\in\mathbb{H}^{m\times n}$.

Definition 4.1. Consider a matrix equation
$$AX=B,\qquad(13)$$
where $A\in\mathbb{H}^{m\times n}$ and $B\in\mathbb{H}^{m\times s}$ are given and $X\in\mathbb{H}^{n\times s}$ is unknown. Put
$$H_{R}=\{X\mid X\in\mathbb{H}^{n\times s},\;\lVert AX-B\rVert=\min\}.$$
The matrices $X\in H_{R}$ are called least squares solutions of the matrix equation (13). If $\lVert X_{LS}\rVert=\min_{X\in H_{R}}\lVert X\rVert$, then $X_{LS}$ is called the minimum norm least squares solution of (13). If Eq. (13) has no exact solution, then $X_{LS}$ is its best approximation.

The following important theorem is well known.

Theorem 4.1 ([2]). The least squares solutions of (13) are
$$X=A^{+}B+\bigl(I_n-A^{+}A\bigr)C,$$
where $C\in\mathbb{H}^{n\times s}$ is an arbitrary quaternion matrix, and the minimum norm least squares solution is $X_{LS}=A^{+}B$.
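The structure of Theorem 4.1 can be observed numerically. The sketch below (our own, over the complex field; quaternion matrices behave the same way via their complex adjoint representation) checks that every member of the family $A^{+}B+(I-A^{+}A)C$ attains the same minimal residual, while $A^{+}B$ has the smallest Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 6, 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A[:, 3] = A[:, 0] - A[:, 1]                  # make A rank deficient (rank 3)
B = rng.standard_normal((m, s)) + 1j * rng.standard_normal((m, s))

Ap = np.linalg.pinv(A)
X_ls = Ap @ B                                # minimum norm least squares solution
C = rng.standard_normal((n, s)) + 1j * rng.standard_normal((n, s))
X = X_ls + (np.eye(n) - Ap @ A) @ C          # another least squares solution

r0 = np.linalg.norm(A @ X_ls - B)
r1 = np.linalg.norm(A @ X - B)
print(abs(r0 - r1))                              # same (minimal) residual
print(np.linalg.norm(X_ls), np.linalg.norm(X))   # X_ls has the smaller norm
```

The norm comparison works because $(I-A^{+}A)C$ is orthogonal to the row space of $A$, in which $A^{+}B$ lies.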

We denote $A^{*}B=:\hat B=(\hat b_{ij})\in\mathbb{H}^{n\times s}$.

Theorem 4.2.

(i) If rank $A=r\le m<n$, then for the minimum norm least squares solution $X_{LS}=(x_{ij})\in\mathbb{H}^{n\times s}$ of (13), for all $i=1,\dots,n$, $j=1,\dots,s$, we have
$$x_{ij}=\frac{\sum_{\beta\in J_{r,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(\hat b_{.\,j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|}.\qquad(14)$$

(ii) If rank $A=n$, then for $X_{LS}=(x_{ij})\in\mathbb{H}^{n\times s}$ of (13), for all $i=1,\dots,n$, $j=1,\dots,s$, we have
$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(\hat b_{.\,j})\bigr)}{\operatorname{ddet}A},\qquad(15)$$
where $\hat b_{.\,j}$ is the $j$th column of $\hat B$ for all $j=1,\dots,s$.

Proof. (i) If rank $A=r\le m<n$, then by Theorem 3.2 we can represent the matrix $A^{+}$ by (9). Therefore, for all $i=1,\dots,n$, $j=1,\dots,s$, we obtain
$$x_{ij}=\sum_{k=1}^{m}a^{+}_{ik}b_{kj}=\sum_{k=1}^{m}\frac{\sum_{\beta\in J_{r,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(a^{*}_{.\,k})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|}\,b_{kj}=\frac{\sum_{\beta\in J_{r,n}\{i\}}\sum_{k=1}^{m}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(a^{*}_{.\,k})\bigr)^{\beta}_{\beta}\,b_{kj}}{\sum_{\beta\in J_{r,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|}.$$
Since
$$\sum_{k}a^{*}_{.\,k}b_{kj}=\begin{pmatrix}\sum_k a^{*}_{1k}b_{kj}\\ \sum_k a^{*}_{2k}b_{kj}\\ \vdots\\ \sum_k a^{*}_{nk}b_{kj}\end{pmatrix}=\hat b_{.\,j},$$
where $\hat b_{.\,j}$ is the $j$th column of $\hat B$, (14) follows.

(ii) If rank $A=n$, then by Corollary 3.1, $A^{+}=(A^{*}A)^{-1}A^{*}$. Representing $(A^{*}A)^{-1}$ by (5), we obtain for all $i=1,\dots,n$, $j=1,\dots,s$
$$x_{ij}=\frac{1}{\operatorname{ddet}A}\sum_{k=1}^{n}L_{ki}\,\hat b_{kj},$$
where $L_{ij}$ is the left $ij$th cofactor of $A^{*}A$ for all $i,j=1,\dots,n$. From this, by Lemma 2.2 and denoting the $j$th column of $\hat B$ by $\hat b_{.\,j}$, (15) follows. □

Corollary 4.1 (Theorem 3.1 in [16]). Suppose
$$AX=B\qquad(16)$$
is a right matrix equation, where $A,B\in M(n,\mathbb{H})$ are given and $X\in M(n,\mathbb{H})$ is unknown. If $\operatorname{ddet}A\ne 0$, then (16) has a unique solution, namely
$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(\hat b_{.\,j})\bigr)}{\operatorname{ddet}A},\qquad(17)$$
where $\hat b_{.\,j}$ is the $j$th column of $\hat B$ for all $i,j=1,\dots,n$.

Definition 4.2. Consider a matrix equation
$$XA=B,\qquad(18)$$
where $A\in\mathbb{H}^{m\times n}$ and $B\in\mathbb{H}^{s\times n}$ are given and $X\in\mathbb{H}^{s\times m}$ is unknown. Put
$$H_{L}=\{X\mid X\in\mathbb{H}^{s\times m},\;\lVert XA-B\rVert=\min\}.$$
The matrices $X\in H_{L}$ are called least squares solutions of the matrix equation (18). If $\lVert X_{LS}\rVert=\min_{X\in H_{L}}\lVert X\rVert$, then $X_{LS}$ is called the minimum norm least squares solution of (18).

The following theorem can be obtained by analogy with Theorem 4.1.

Theorem 4.3. The least squares solutions of (18) are
$$X=BA^{+}+C\bigl(I_m-AA^{+}\bigr),$$
where $C\in\mathbb{H}^{s\times m}$ is an arbitrary quaternion matrix, and the minimum norm least squares solution is $X_{LS}=BA^{+}$.

We denote $BA^{*}=:\check B=(\check b_{ij})\in\mathbb{H}^{s\times m}$.

Theorem 4.4.
(i) If rank $A=r\le n<m$, then for the minimum norm least squares solution $X_{LS}=(x_{ij})\in\mathbb{H}^{s\times m}$ of (18), for all $i=1,\dots,s$, $j=1,\dots,m$, we have
$$x_{ij}=\frac{\sum_{\alpha\in I_{r,m}\{j\}}\operatorname{rdet}_j\bigl((AA^{*})_{j\,.}(\check b_{i\,.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r,m}}\bigl|(AA^{*})^{\alpha}_{\alpha}\bigr|}.\qquad(19)$$

(ii) If rank $A=m$, then for $X_{LS}=(x_{ij})\in\mathbb{H}^{s\times m}$ of (18), for all $i=1,\dots,s$, $j=1,\dots,m$, we have
$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((AA^{*})_{j\,.}(\check b_{i\,.})\bigr)}{\operatorname{ddet}A},\qquad(20)$$
where $\check b_{i\,.}$ is the $i$th row of $\check B$ for all $i=1,\dots,s$.

Proof. (i) If rank $A=r\le n<m$, then by Theorem 3.2 we can represent the matrix $A^{+}$ by (10). Therefore, for all $i=1,\dots,s$, $j=1,\dots,m$, we get
$$x_{ij}=\sum_{k=1}^{m}b_{ik}a^{+}_{kj}=\sum_{k=1}^{m}b_{ik}\,\frac{\sum_{\alpha\in I_{r,m}\{j\}}\operatorname{rdet}_j\bigl((AA^{*})_{j\,.}(a^{*}_{k\,.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r,m}}\bigl|(AA^{*})^{\alpha}_{\alpha}\bigr|}=\frac{\sum_{\alpha\in I_{r,m}\{j\}}\sum_{k=1}^{m}b_{ik}\operatorname{rdet}_j\bigl((AA^{*})_{j\,.}(a^{*}_{k\,.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r,m}}\bigl|(AA^{*})^{\alpha}_{\alpha}\bigr|}.$$
Since
$$\sum_{k}b_{ik}a^{*}_{k\,.}=\Bigl(\sum_k b_{ik}a^{*}_{k1},\;\sum_k b_{ik}a^{*}_{k2},\;\dots,\;\sum_k b_{ik}a^{*}_{kn}\Bigr)=\check b_{i\,.},$$
where $\check b_{i\,.}$ is the $i$th row of $\check B$, (19) follows.

(ii) If rank $A=m$, then by Corollary 3.1, $A^{+}=A^{*}(AA^{*})^{-1}$. Representing $(AA^{*})^{-1}$ by (4), we obtain for all $i=1,\dots,s$, $j=1,\dots,m$
$$x_{ij}=\frac{1}{\operatorname{ddet}A}\sum_{k=1}^{m}\check b_{ik}\,R_{jk},$$
where $R_{ij}$ is the right $ij$th cofactor of $AA^{*}$ for all $i,j=1,\dots,m$. From this, by Lemma 2.1 and denoting the $i$th row of $\check B$ by $\check b_{i\,.}$, (20) follows. □

Corollary 4.2 (Theorem 3.2 in [16]). Suppose
$$XA=B\qquad(21)$$
is a left matrix equation, where $A,B\in M(n,\mathbb{H})$ are given and $X\in M(n,\mathbb{H})$ is unknown. If $\operatorname{ddet}A\ne 0$, then (21) has a unique solution, namely
$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((AA^{*})_{j\,.}(\check b_{i\,.})\bigr)}{\operatorname{ddet}A},\qquad(22)$$
where $\check b_{i\,.}$ is the $i$th row of $\check B$ for all $i,j=1,\dots,n$.

Definition 4.3. Consider a matrix equation
$$AXB=D,\qquad(23)$$
where $A\in\mathbb{H}^{m\times n}$, $B\in\mathbb{H}^{p\times q}$, $D\in\mathbb{H}^{m\times q}$ are given and $X\in\mathbb{H}^{n\times p}$ is unknown. Put
$$H_{D}=\{X\mid X\in\mathbb{H}^{n\times p},\;\lVert AXB-D\rVert=\min\}.$$
The matrices $X\in H_{D}$ are called least squares solutions of the matrix equation (23). If $\lVert X_{LS}\rVert=\min_{X\in H_{D}}\lVert X\rVert$, then $X_{LS}$ is called the minimum norm least squares solution of (23).

The following important theorem is well known.

Theorem 4.5 ([15]). The least squares solutions of (23) are
$$X=A^{+}DB^{+}+\bigl(I_n-A^{+}A\bigr)V+W\bigl(I_p-BB^{+}\bigr),$$
where $V,W\in\mathbb{H}^{n\times p}$ are arbitrary quaternion matrices, and the least squares solution with minimum norm is $X_{LS}=A^{+}DB^{+}$.

We denote $\widetilde D=A^{*}DB^{*}$.
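The minimum norm solution $X_{LS}=A^{+}DB^{+}$ of Theorem 4.5 can likewise be verified numerically (our own sketch over the complex field; the quaternion case is analogous via the complex adjoint representation). The assertions check the stationarity condition $A^{*}(AXB-D)B^{*}=0$ and the minimum-norm property $X=A^{+}AX=XBB^{+}$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p, q = 5, 4, 3, 6
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
D = rng.standard_normal((m, q)) + 1j * rng.standard_normal((m, q))
A[:, 3] = A[:, 0] + A[:, 1]      # make A rank deficient
B[2] = B[0] - B[1]               # make B rank deficient

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
X = Ap @ D @ Bp                  # candidate minimum norm least squares solution

# Stationarity of ||A X B - D||: A*(A X B - D)B* = 0
assert np.allclose(A.conj().T @ (A @ X @ B - D) @ B.conj().T, 0)
# Minimum norm: X = A+A X = X BB+
assert np.allclose(Ap @ A @ X, X)
assert np.allclose(X @ B @ Bp, X)
print("X = A+ D B+ satisfies the optimality conditions of Theorem 4.5")
```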

Theorem 4.6.
(i) If rank $A=r_1<m$ and rank $B=r_2<p$, then for the minimum norm least squares solution $X_{LS}=(x_{ij})\in\mathbb{H}^{n\times p}$ of (23) we have
$$x_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(d^{B}_{.\,j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|\;\sum_{\alpha\in I_{r_2,p}}\bigl|(BB^{*})^{\alpha}_{\alpha}\bigr|},\qquad(24)$$
or
$$x_{ij}=\frac{\sum_{\alpha\in I_{r_2,p}\{j\}}\operatorname{rdet}_j\bigl((BB^{*})_{j\,.}(d^{A}_{i\,.})\bigr)^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_1,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|\;\sum_{\alpha\in I_{r_2,p}}\bigl|(BB^{*})^{\alpha}_{\alpha}\bigr|},\qquad(25)$$
where
$$d^{B}_{.\,j}=\Bigl(\sum_{\alpha\in I_{r_2,p}\{j\}}\operatorname{rdet}_j\bigl((BB^{*})_{j\,.}(\tilde d_{k\,.})\bigr)^{\alpha}_{\alpha}\Bigr)_{k=1}^{n}\in\mathbb{H}^{n\times 1},\qquad(26)$$
$$d^{A}_{i\,.}=\Bigl(\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(\tilde d_{.\,l})\bigr)^{\beta}_{\beta}\Bigr)_{l=1}^{p}\in\mathbb{H}^{1\times p},\qquad(27)$$
are the column vector and the row vector, respectively, and $\tilde d_{i\,.}$, $\tilde d_{.\,j}$ are the $i$th row and the $j$th column of $\widetilde D$ for all $i=1,\dots,n$, $j=1,\dots,p$.

(ii) If rank $A=n$ and rank $B=p$, then for $X_{LS}=(x_{ij})\in\mathbb{H}^{n\times p}$ of (23) we have
$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(d^{B}_{.\,j})\bigr)}{\operatorname{ddet}A\cdot\operatorname{ddet}B},\qquad(28)$$
or
$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^{*})_{j\,.}(d^{A}_{i\,.})\bigr)}{\operatorname{ddet}A\cdot\operatorname{ddet}B},\qquad(29)$$
where
$$d^{B}_{.\,j}:=\bigl(\operatorname{rdet}_j((BB^{*})_{j\,.}(\tilde d_{1\,.})),\dots,\operatorname{rdet}_j((BB^{*})_{j\,.}(\tilde d_{n\,.}))\bigr)^{T},\qquad(30)$$
$$d^{A}_{i\,.}:=\bigl(\operatorname{cdet}_i((A^{*}A)_{.\,i}(\tilde d_{.\,1})),\dots,\operatorname{cdet}_i((A^{*}A)_{.\,i}(\tilde d_{.\,p}))\bigr)\qquad(31)$$
are, respectively, the column vector and the row vector; $\tilde d_{i\,.}$ is the $i$th row of $\widetilde D$ for all $i=1,\dots,n$, and $\tilde d_{.\,j}$ is the $j$th column of $\widetilde D$ for all $j=1,\dots,p$.

(iii) If rank $A=n$ and rank $B=r_2<p$, then for $X_{LS}=(x_{ij})\in\mathbb{H}^{n\times p}$ of (23) we have
$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(d^{B}_{.\,j})\bigr)}{\operatorname{ddet}A\,\sum_{\alpha\in I_{r_2,p}}\bigl|(BB^{*})^{\alpha}_{\alpha}\bigr|},\qquad(32)$$
or
$$x_{ij}=\frac{\sum_{\alpha\in I_{r_2,p}\{j\}}\operatorname{rdet}_j\bigl((BB^{*})_{j\,.}(d^{A}_{i\,.})\bigr)^{\alpha}_{\alpha}}{\operatorname{ddet}A\,\sum_{\alpha\in I_{r_2,p}}\bigl|(BB^{*})^{\alpha}_{\alpha}\bigr|},\qquad(33)$$
where $d^{B}_{.\,j}$ is (26) and $d^{A}_{i\,.}$ is (31).

(iv) If rank $A=r_1<n$ and rank $B=p$, then for $X_{LS}=(x_{ij})\in\mathbb{H}^{n\times p}$ of (23) we have
$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^{*})_{j\,.}(d^{A}_{i\,.})\bigr)}{\sum_{\beta\in J_{r_1,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|\cdot\operatorname{ddet}B},\qquad(34)$$
or
$$x_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.\,i}(d^{B}_{.\,j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|\cdot\operatorname{ddet}B},\qquad(35)$$
where $d^{B}_{.\,j}$ is (30) and $d^{A}_{i\,.}$ is (27).

Proof.

×n , B ∈ H ∈ Hm and r1 < m, r2 < p, then by Theorem 3.2 the Moore–Penrose r1   r2   + + + inverses A = aij ∈ Hn×m and B+ = aij ∈ Hq×p posses the following determinantal

(i) If A

representations respectively, 

 +

β∈Jr1 , n {i} cdeti

=

aij



β∈Jr1 , n

 +

=

bij

α∈Ir2 ,p {j} rdetj



By Theorem 4.5, XLS

xij

=

q  s=1

α∈Ir2 ,p



(A ∗ A ) . i a.∗j

  ∗ (A A )

 β β



β β

,

  (BB∗ )j . (bi∗. ) αα   . (BB∗ ) α 

(36)

α

= A + DB+ and entries of XLS = (xij ) are



⎞ m  + + ⎝ aik dks ⎠bsj .

(37)

k=1

for all i = 1, . . . , n, j = 1, . . . , p. Denote by dˆ.s the sth column of A ∗ D  from k a.∗k dks = dˆ. s that m  + aik dks

k=1

=

m 



β∈Jr1 , n {i} cdeti

=: Dˆ = (dˆ ij ) ∈ Hn×q for all s = 1, . . . , q. It follows 



(A ∗ A ) . i a.∗k

 β β

  · dks β  ∗ k=1 β∈Jr1 , n (A A ) β   ∗  ∗  β  m β∈Jr1 , n {i} k=1 cdeti (A A ) . i a.k β · dks   =  β  ∗ β∈Jr1 , n (A A ) β      β ∗ ˆ β∈Jr1 , n {i} cdeti (A A ) . i d. s β   =  β  ∗ β∈Jr , n (A A ) β  

(38)

1

Suppose es. and e. s are respectively the unit row-vector and the unit column-vector whose components are 0, except the sth components, which are 1. Substituting (38) and (36) in (37), we obtain       β ∗ ∗ ∗  α q ˆ  β∈Jr1 , n {i} cdeti (A A ) . i d. s α∈Ir2 ,p {j} rdetj (BB )j . (bs. ) α β     . xij =   β  ∗ ∗ α  (A A )  α∈Ir ,p (BB ) α s=1 β∈J r1 , n

β

2

Since dˆ. s

=

n  l=1

e. l dˆls ,

bs∗.

=

p  t =1

b∗st et . ,

q  s=1

dˆls b∗st

dlt , =

(39)

I. Kyrchei / Linear Algebra and its Applications 438 (2013) 136–152

147

then we have q

s =1

xij =

p

t =1

=

p

t =1

n

l =1

n

l =1



β∈Jr1 , n {i} cdeti





 α

 β ((A ∗ A ) . i (e. l )) β dˆls b∗st α∈Ir2 ,p {j} rdetj (BB∗ )j . (et . )

β∈Jr1 , n

  ∗ (A A )

 β  β  α∈Ir

2 ,p

  (BB∗ ) α  α

 α β   ∗ ∗ β∈Jr1 , n {i} cdeti ((A A ) . i (e. l )) β dlt α∈Ir2 ,p {j} rdetj (BB )j . (et . ) α   .    β   ∗ ∗ α  β∈Jr , n (A A ) β  α∈Ir ,p (BB ) α



1

α

(40)

2

Denote by ditA



:=

cdeti

β∈Jr1 , n {i}

    β A∗ A . i  d. t β

n 

=



cdeti

l=1 β∈Jr1 , n {i}

A A the tth component of a row-vector diA. = (di1 , . . . , dip ) for all t (40), we have  α p A ∗ α∈Ir2 ,p {j} rdetj (BB )j . (et . ) α t =1 dit   xij =    . β   ∗ ∗ α  β∈Jr , n (A A ) β  α∈Ir ,p (BB ) α 1

p

Since = If we denote by A t =1 dit et .

dljB

:=

p  t =1

then it follows (25).

α∈Ir2 ,p {j}

rdetj



(BB∗ )j . (et . )

the lth component of a column-vector d.Bj it in (40), we obtain n xij

=

n

= 1, . . . , p. Substituting it in

2

diA. ,



 dlt

 ∗   β A A . i (e. l ) β  dlt

α α

=





rdetj

α∈Ir2 ,p {j}



dl . ) (BB∗ )j . (

α α

(41)

B B T = (d1j , . . . , djn ) for all l = 1, . . . , n and substitute



β B ∗ β∈Jr1 , n {i} cdeti ((A A ) . i (e. l )) β dlj   .    β   ∗ ∗ α  β∈Jr1 , n (A A ) β  α∈Ir2 ,p (BB ) α

l=1

= d.Bj , then it follows (24). −1 −1 (ii) If rank A = n and rank B = p, then by Corollary 3.1, A + = (A ∗ A ) A ∗ and B+ = B∗ (BB∗ ) . −1 ∗ −1 ∗ If we represent (A A ) as the left inverse by ( 5) and (BB ) as the right inverse by (4) , then Since

l=1

e.l dljB

we obtain XLS

= (A ∗ A )−1 A ∗ DB∗ (BB∗ )−1 ⎛

⎞ ⎛ x x . . . x1p LA ⎜ 11 12 ⎟ ⎜ 11 ⎜ ⎟ ⎜ ⎜x x . . . x ⎟ ⎜LA ⎜ 21 22 ⎜ 12 2p ⎟ 1 ⎜ ⎟ ⎜ =⎜ = ⎟ ddetA ⎜ ⎜. . . . . . . . . . . . ⎟ ⎜. . . ⎜ ⎟ ⎜ ⎝ ⎠ ⎝ xn1 xn2 . . . xnp LA ⎛ ⎞ ⎛ 1n RB d˜ d˜ . . . d˜ 1m ⎜ 11 12 ⎟ ⎜ 11 ⎜ ⎟ ⎜ ⎜d˜ d˜ . . . d˜ ⎟ ⎜ RB ⎜ 21 22 ⎜ 12 2m ⎟ 1 ⎜ ⎟ ⎜ ×⎜ ⎟ ddetA ⎜ ⎜. . . . . . . . . . . . ⎟ ⎜. . . ⎜ ⎟ ⎜ ⎝ ⎠ ⎝ d˜ n1 d˜ n2 . . . d˜ nm RB1p

A L21 A L22

A . . . Ln1

...

⎞ ⎟ ⎟

A ⎟ Ln2 ⎟

⎟ ⎟

. . . . . . . . .⎟ ⎟ A L2n

RB21 RB22

A . . . Lnn



. . . RBp1 ...

⎞ ⎟ ⎟

RBp2 ⎟ ⎟

⎟, ⎟

. . . . . . . . .⎟ ⎟ RB2p

. . . RBpp



148

I. Kyrchei / Linear Algebra and its Applications 438 (2013) 136–152

where LijA is a left ijth cofactor of (A ∗ A ) for all i, j = 1, . . . , n and RiBj is a right ijth cofactor of (BB∗ ) for all i, j = 1, . . . , p. This implies  p n A˜ B s=1 k=1 Lki d ks Rjs , (42) xij = ddetA · ddetB for all i = 1, . . . , n, j = 1, . . . , p. We obtain the sum in parentheses by Lemma 2.2 and denote it, n    A˜ ˜ . s =: diAs , Lki dk s = cdeti (A ∗ A ). i d k=1

˜ . s is the sth column-vector of D˜ for all s where d





= 1, . . . , p. Suppose diA. := diA1 , . . . , diAp is

 B the row-vector for all i = 1, . . . , n. Reducing the sum ns=1 diAs Rjs by Lemma 2.1, we obtain an analog of Cramer’s rule for the minimum norm least squares solution of (23) by (28). Interchanging the order of summation in (42), we have   n A p ˜ B s=1 d ks Rjs k=1 Lki xij = . ddetA · ddetB Further, we obtain the sum in parentheses by Lemma 2.1 and denote it, p  s=1

d˜ k s RjBs





= rdetj (BB∗ )j. d˜ k . := dkB j ,

˜ k . is the kth row-vector of D˜ for all k where d



= 1, . . . , n. Suppose d.Bj := d1B j , . . . , dnB j

T

is  A B the column-vector for all j = 1, . . . , p. Using Lemma 2.2 for reducing the sum nk=1 Lki dk j , we obtain an analog of Cramer’s rule for (23) by (29). ×n , B ∈ Hp×q and r = n, r < p, then by Remark 3.1 and Theorem 3.2 the (iii) If A ∈ Hm r2 2 r1 1    + + + n×m and B+ = bij ∈ Hq×p possess the following Moore–Penrose inverses A = aij ∈ H determinantal representations respectively,   cdeti (A ∗ A ) . i a.∗j + , aij = ddetA   ∗  α ∗ α∈Ir2 ,p {j} rdetj (BB )j . (bi. ) α +   bij = .  ∗ α  α∈Ir ,p (BB ) α

(43)

2

= A + DB+ , then an entry of XLS = (xij ) is (37). Denote by dˆ.s the sth  ˆ = (dˆ ij ) ∈ Hn×q for all s = 1, . . . , q. It follows from k a.∗k dks = dˆ. s that column of A ∗ D =: D Since by Theorem 4.5, XLS

m  + aik dks

k=1

=

  m  cdeti (A ∗ A ) . i a.∗k ddetA k=1

· dks =

  cdeti (A ∗ A ) . i dˆ. s ddetA

Substituting (44) and (43) in (37), and using (39) we have    ∗ ∗  α q cdet (A ∗ A ) ˆ  i . i d. s α∈Ir2 ,p {j} rdetj (BB )j . (bs. ) α   xij =  (BB∗ ) α  ddetA α∈Ir2 ,p

s=1

q

=

s=1

p

t =1

n

l=1

α

=

   cdeti (A ∗ A ) . i (e. l )dˆls b∗st α∈Ir ,p {j} rdetj (BB∗ )j . (et . ) α α 2     ddetA α∈Ir ,p (BB∗ ) α α 2

(44)

I. Kyrchei / Linear Algebra and its Applications 438 (2013) 136–152

p

t =1

=

n

l=1

   dlt α∈Ir ,p {j} rdetj (BB∗ )j . (et . ) α cdeti (A ∗ A ) . i (e. l )  α 2   .   ddetA α∈Ir ,p (BB∗ ) α α

149

(45)

2

If we substitute (41) in (45), then we get n B ∗ l=1 cdeti (A A ) . i (e. l ) dlj   . xij =  (BB∗ ) α  ddetA α∈Ir2 ,p

n

Since again l=1 e.l dljB If we denote by ditA

:=

n  l=1

α

= d.Bj , then it follows (32), where d.Bj is (26).

    d. t cdeti A ∗ A . i 

=

n  l=1

  dlt cdeti A ∗ A . i (e. l ) 

A A the tth component of a row-vector diA. = (di1 , . . . , dip ) for all t (45), we obtain  α p A ∗ α∈Ir2 ,p {j} rdetj (BB )j . (et . ) α t =1 dit   . xij =   ddetA α∈Ir ,p (BB∗ ) α α

= 1, . . . , p and substitute it in

2

p

Since again t =1 ditA et . = diA. , then it follows (33), where diA. is (31). (iv) The proof is similar to the proof of (iii).  Corollary 4.3 (Theorem 3.3 in [16]). Suppose AXB

=C

(46)

is a two-sided matrix equation, where {A , B, C} ∈ M(n, H) are given, X ∈ M(n, H) is unknown. If ddetA  = 0 and ddetB  = 0 , then (46) has a unique solution, and the solution is   rdetj (BB∗ )j. ciA. xi j = , (47) ddetA · ddetB or xi j

=

  cdeti (A ∗ A ). i c.Bj ddetA · ddetB

,

(48)

   where ciA. := (cdeti (A ∗ A ). i (˜c.1 ) , . . ., cdeti (A ∗ A ). i (˜c.n )) is the row vector and c.Bj := rdetj (BB∗ )j. c˜ 1 . ,   . . . , rdetj (BB∗ )j. c˜ n . T is the column vector and c˜ i . , c˜ . j are the ith row vector and the jth column vector of  C, respectively, for all i, j = 1, . . . , n.   β    designates ith column of (A ∗ A ) . i a.∗j , Remark 4.1. In Eq. (24), the index i in cdeti (A ∗ A ) . i d B. j β    β ∗ may be placed in a column with the another the entries of a but in the submatrix (A ∗ A ) . i a.∗j .j β index. Similarly, we have for j in (25). In (14) and (19) we have equivalently.

5. An example

In this section we give an example to illustrate our results. Let us consider the matrix equation
$$AXB=D,\qquad(49)$$
where
$$A=\begin{pmatrix}1&i&j\\ -k&i&1\\ k&j&-i\\ j&-1&i\end{pmatrix},\qquad B=\begin{pmatrix}i&1&j\\ j&k&-i\end{pmatrix},$$
and $D\in\mathbb{H}^{4\times 3}$ is a given right-hand side. Then we have
$$A^{*}A=\begin{pmatrix}4&2i+2j&2j+2k\\ -2i-2j&4&-2i-2k\\ -2j-2k&2i+2k&4\end{pmatrix},\qquad BB^{*}=\begin{pmatrix}3&-3k\\ 3k&3\end{pmatrix},$$
$$\widetilde D=A^{*}DB^{*}=\begin{pmatrix}2+2i+2j&-i+2j-2k\\ 1-i-j&i-j-k\\ 1-2j&2i-k\end{pmatrix}.$$
Since $\operatorname{ddet}A=\det A^{*}A=0$ and
$$\det\begin{pmatrix}4&2i+2j\\ -2i-2j&4\end{pmatrix}=8\ne 0,$$
we have rank $A=2$. Similarly rank $B=1$. So we are in case (i) of Theorem 4.6, and we shall find the minimum norm least squares solution $X_{LS}$ of (49) by (24). We obtain
$$\sum_{\alpha\in I_{1,2}}\bigl|(BB^{*})^{\alpha}_{\alpha}\bigr|=3+3=6,$$
$$\sum_{\beta\in J_{2,3}}\bigl|(A^{*}A)^{\beta}_{\beta}\bigr|=\det\begin{pmatrix}4&2i+2j\\ -2i-2j&4\end{pmatrix}+\det\begin{pmatrix}4&2j+2k\\ -2j-2k&4\end{pmatrix}+\det\begin{pmatrix}4&-2i-2k\\ 2i+2k&4\end{pmatrix}=24,$$
so the denominator in (24) equals $24\cdot 6=144$. By (26) we can get
$$d^{B}_{.\,1}=\begin{pmatrix}2+2i+2j\\ 1-i-j\\ 1-2j\end{pmatrix},\qquad d^{B}_{.\,2}=\begin{pmatrix}-i+2j-2k\\ i-j-k\\ 2i-k\end{pmatrix}.$$
Since
$$(A^{*}A)_{.\,1}\bigl(d^{B}_{.\,1}\bigr)=\begin{pmatrix}2+2i+2j&2i+2j&2j+2k\\ 1-i-j&4&-2i-2k\\ 1-2j&2i+2k&4\end{pmatrix},$$
we finally obtain
$$x_{11}=\frac{\operatorname{cdet}_1\begin{pmatrix}2+2i+2j&2i+2j\\ 1-i-j&4\end{pmatrix}+\operatorname{cdet}_1\begin{pmatrix}2+2i+2j&2j+2k\\ 1-2j&4\end{pmatrix}}{144}=\frac{4+5i+6j-k}{72}.$$
Similarly,
$$x_{12}=\frac{\operatorname{cdet}_1\begin{pmatrix}-i+2j-2k&2i+2j\\ i-j-k&4\end{pmatrix}+\operatorname{cdet}_1\begin{pmatrix}-i+2j-2k&2j+2k\\ 2i-k&4\end{pmatrix}}{144}=\frac{-1-2i+5j-4k}{72},$$
$$x_{21}=\frac{\operatorname{cdet}_2\begin{pmatrix}4&2+2i+2j\\ -2i-2j&1-i-j\end{pmatrix}+\operatorname{cdet}_1\begin{pmatrix}1-i-j&-2i-2k\\ 1-2j&4\end{pmatrix}}{144}=\frac{i-2j-k}{72},$$
$$x_{22}=\frac{\operatorname{cdet}_2\begin{pmatrix}4&-i+2j-2k\\ -2i-2j&i-j-k\end{pmatrix}+\operatorname{cdet}_1\begin{pmatrix}i-j-k&-2i-2k\\ 2i-k&4\end{pmatrix}}{144}=\frac{-2+2i+j-k}{72},$$
$$x_{31}=\frac{\operatorname{cdet}_2\begin{pmatrix}4&2+2i+2j\\ -2j-2k&1-2j\end{pmatrix}+\operatorname{cdet}_2\begin{pmatrix}4&1-i-j\\ 2i+2k&1-2j\end{pmatrix}}{144}=\frac{1-4i-3j}{72},$$
$$x_{32}=\frac{\operatorname{cdet}_2\begin{pmatrix}4&-i+2j-2k\\ -2j-2k&2i-k\end{pmatrix}+\operatorname{cdet}_2\begin{pmatrix}4&i-j-k\\ 2i+2k&2i-k\end{pmatrix}}{144}=\frac{3i-3j-2k}{72}.$$
Then
$$X_{LS}=\frac{1}{72}\begin{pmatrix}4+5i+6j-k&-1-2i+5j-4k\\ i-2j-k&-2+2i+j-k\\ 1-4i-3j&3i-3j-2k\end{pmatrix}$$
is the minimum norm least squares solution of (49).

References

[1] T.S. Jiang, Y.H. Liu, M.S. Wei, Quaternion generalized singular value decomposition and its applications, Appl. Math. J. Chin. Univ. 21 (1) (2006) 113–118.
[2] T. Jiang, L. Chen, Algebraic algorithms for least squares problem in quaternionic quantum theory, Comput. Phys. Comm. 176 (2007) 481–485.
[3] Y.H. Liu, On the best approximation problem of quaternion matrices, J. Math. Study 37 (2) (2004) 129–134.
[4] H. Liping, The matrix equation AXB − GXD = E over the quaternion field, Linear Algebra Appl. 234 (1996) 197–208.
[5] G. Wang, S. Qiao, Solving constrained matrix equations and Cramer rule, Appl. Math. Comput. 159 (2004) 333–340.
[6] Q.W. Wang, A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity, Linear Algebra Appl. 384 (2004) 43–54.
[7] Q.W. Wang, Bisymmetric and centrosymmetric solutions to system of real quaternion matrix equation, Comput. Math. Appl. 49 (2005) 641–650.
[8] Q.W. Wang, The general solution to a system of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 665–675.
[9] Q.W. Wang, S.W. Yu, C.Y. Lin, Extreme ranks of a linear quaternion matrix expression subject to triple quaternion matrix equations with applications, Appl. Math. Comput. 195 (2008) 733–744.
[10] Q.W. Wang, H.X. Chang, Q. Ning, The common solution to six quaternion matrix equations with applications, Appl. Math. Comput. 198 (2008) 209–226.
[11] Q.W. Wang, H.X. Chang, C.Y. Lin, P-(skew)symmetric common solutions to a pair of quaternion matrix equations, Appl. Math. Comput. 195 (2008) 721–732.
[12] Q.W. Wang, F. Zhang, The reflexive re-nonnegative definite solution to a quaternion matrix equation, Electron. J. Linear Algebra 17 (2008) 88–101.
[13] G.J. Song, Q.W. Wang, H.X. Chang, Cramer rule for the unique solution of restricted matrix equations over the quaternion skew field, Comput. Math. Appl. 61 (6) (2011) 1576–1589.
[14] G.J. Song, Q.W. Wang, Condensed Cramer rule for some restricted quaternion linear equations, Appl. Math. Comput. 218 (7) (2011) 3110–3121.
[15] C. Wensheng, Solvability of a quaternion matrix equation, Appl. Math. J. Chinese Univ. Ser. B 17 (4) (2002) 490–498.
[16] I.I. Kyrchei, Cramer's rule for some quaternion matrix equations, Appl. Math. Comput. 217 (5) (2010) 2024–2030.
[17] H. Aslaksen, Quaternionic determinants, Math. Intelligencer 18 (3) (1996) 57–65.
[18] L. Chen, Definition of determinant and Cramer solutions over quaternion field, Acta Math. Sinica (N.S.) 7 (1991) 171–180.
[19] L. Chen, Inverse matrix and properties of double determinant over quaternion field, Sci. China Ser. A 34 (1991) 528–540.
[20] N. Cohen, S. De Leo, The quaternionic determinant, Electron. J. Linear Algebra 7 (2000) 100–111.
[21] J. Dieudonné, Les déterminants sur un corps non-commutatif, Bull. Soc. Math. France 71 (1943) 27–45.
[22] I. Gelfand, V. Retakh, Determinants of matrices over noncommutative rings, Funct. Anal. Appl. 25 (2) (1991) 91–102.
[23] I. Gelfand, V. Retakh, A theory of noncommutative determinants and characteristic functions of graphs, Funct. Anal. Appl. 26 (4) (1992) 1–20.
[24] I.I. Kyrchei, Cramer's rule for quaternion systems of linear equations, J. Math. Sci. 155 (6) (2008) 839–858.
[25] I.I. Kyrchei, Determinantal representations of the Moore–Penrose inverse over the quaternion skew field and corresponding Cramer's rules, Linear and Multilinear Algebra 59 (2011) 413–431.
[26] D. Huylebrouck, R. Puystjens, J.V. Geel, The Moore–Penrose inverse of a matrix over a semi-simple Artinian ring, Linear and Multilinear Algebra 16 (1984) 239–246.