Least squares Hermitian solution of the matrix equation (AXB,CXD)=(E,F) with the least norm over the skew field of quaternions

Mathematical and Computer Modelling 48 (2008) 91–100 www.elsevier.com/locate/mcm

Least squares Hermitian solution of the matrix equation (AXB, CXD) = (E, F) with the least norm over the skew field of quaternions☆

Shifang Yuan∗, Anping Liao, Yuan Lei

College of Mathematics and Econometrics, Hunan University, Changsha, Hunan, 410082, PR China

Received 7 March 2007; received in revised form 26 June 2007; accepted 7 August 2007

Abstract

By using the complex representations of quaternion matrices, the Moore–Penrose generalized inverse and the Kronecker product of matrices, we derive the expression of the least squares Hermitian solution of the matrix equation (AXB, CXD) = (E, F) with the least norm over the skew field of quaternions, and provide a numerical algorithm to calculate the aforementioned solution.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Matrix equation; Least squares solution; Moore–Penrose generalized inverse; Kronecker product; Quaternion matrices

1. Introduction

For convenience, we list some notation:

R^{m×n}, C^{m×n}: the sets of m × n real and complex matrices, respectively;
Q, Q^{m×n}: the set of quaternions and the set of m × n quaternion matrices, respectively;
SR^{n×n}: the set of n × n real symmetric matrices;
ASR^{n×n}: the set of n × n real anti-symmetric matrices;
Re(A), Im(A): the real and imaginary parts of the complex matrix A;
Ā, A^T: the conjugate and the transpose of A, respectively;
A^H, A^+: the conjugate transpose and the Moore–Penrose generalized inverse of A, respectively;
O, I_n: the zero matrix of suitable size and the identity matrix of order n, respectively;
A ⊗ B: the Kronecker product of A and B.

☆ This work was supported in part by the Hunan Provincial Natural Science Fund of China (No. 03JJY6028).
∗ Corresponding author.
E-mail addresses: [email protected] (S. Yuan), [email protected] (A. Liao).

0895-7177/$ - see front matter © 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.mcm.2007.08.009


A ∈ Q^{n×n} is called a Hermitian matrix if A^H = A [1]. The set of all n × n quaternion Hermitian matrices is denoted by HQ^{n×n}. Any A ∈ HQ^{n×n} can be uniquely expressed as A = A0 + A1 i + A2 j + A3 k, where A0 ∈ SR^{n×n} and A1, A2, A3 ∈ ASR^{n×n}. In this paper, by using the complex representations of quaternion matrices, the Moore–Penrose generalized inverse and the Kronecker product of matrices, we mainly consider the following problem.

Problem I. Given A ∈ Q^{m×n}, B ∈ Q^{n×s}, C ∈ Q^{m×n}, D ∈ Q^{n×t}, E ∈ Q^{m×s}, and F ∈ Q^{m×t}, let

HL = {X | X ∈ HQ^{n×n}, ‖AXB − E‖² + ‖CXD − F‖² = min}.

Find X̂ ∈ HL such that

‖X̂‖ = min_{X ∈ HL} ‖X‖,   (1)

where ‖·‖ is the Frobenius norm of a quaternion matrix. The solution X̂ of Problem I is called the least squares Hermitian solution of the matrix equation

(AXB, CXD) = (E, F)   (2)

with the least norm over the skew field of quaternions. The set HL is nonempty and is a closed convex set (this is proved in Section 2); therefore, there exists a unique solution for Problem I.

A quaternion a can be uniquely expressed as a = a0 + a1 i + a2 j + a3 k with real coefficients a0, a1, a2, a3, where

i² = j² = k² = −1,  ij = −ji = k,  jk = −kj = i,  ki = −ik = j,

and a can also be uniquely expressed as a = c1 + c2 j, where c1 and c2 are complex numbers. We define Re(a) = a0, the real part of a; Im(a) = a1 i + a2 j + a3 k, the imaginary part of a; ā = a0 − a1 i − a2 j − a3 k, the conjugate of a; and |a| = √(aā) = √(a0² + a1² + a2² + a3²), the norm of a. A quaternion a is said to be a unit quaternion if its norm is 1.

Any A ∈ Q^{m×n} can be uniquely expressed as A = A1 + A2 j, where A1, A2 ∈ C^{m×n}. Denote by f(A) the complex representation matrix of the quaternion matrix A:

f(A) = [ A1   A2 ]
       [ −Ā2  Ā1 ] ∈ C^{2m×2n}.   (3)

Obviously, f(A) is uniquely determined by A. We define the inner product ⟨A, B⟩ = tr(A^H B) for all A, B ∈ Q^{m×n}; then Q^{m×n} is a Hilbert inner product space, and the norm of a quaternion matrix generated by this inner product is the matrix Frobenius norm ‖·‖. For a vector x, ‖x‖ denotes the 2-norm.

The following facts follow from the definitions and are useful in this paper; see, for example, [1] or [2] for more details. If q, q1, q2 ∈ Q, then

1. |q| = 0 if and only if q = 0;
2. |q1 + q2| ≤ |q1| + |q2| and |q1 q2| = |q1||q2|;
3. |q1|² + |q2|² = (1/2)(|q1 + q2|² + |q1 − q2|²);
4. |q̄| = |q|;
5. \overline{q1 q2} = q̄2 q̄1;
6. qd = dq for every q ∈ Q if and only if d is a real number;
7. q = |q|u for some unit quaternion u;
8. jc = c̄j (equivalently, jcj^{−1} = c̄) for any complex number c; and
9. if q ≠ 0, then q^{−1}q = qq^{−1} = 1, where q^{−1} = |q|^{−2} q̄.

If A ∈ Q^{m×n}, B ∈ Q^{n×s}, and f(A) is the complex representation matrix of the quaternion matrix A, then

1. (Ā)^T = \overline{A^T};
2. (AB)^H = B^H A^H;
3. \overline{AB} ≠ Ā B̄ in general;
4. (AB)^T ≠ B^T A^T in general;
5. f(AB) = f(A) f(B);
6. f(A + B) = f(A) + f(B);
7. f(A^H) = f(A)^H;
8. f(αA) = α f(A) for any real number α; and
9. ‖A‖² = ‖Re(A1)‖² + ‖Im(A1)‖² + ‖Re(A2)‖² + ‖Im(A2)‖² for any A = A1 + A2 j ∈ Q^{m×n}.
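Properties 5–7 of the complex representation are easy to check numerically. A minimal sketch, storing a quaternion matrix A = A1 + A2 j as the complex pair (A1, A2); the helper names f, qmul, and randq are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def randq(m, n):
    """A random quaternion matrix A = A1 + A2*j, stored as the pair (A1, A2)."""
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)),
            rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))

def f(A):
    """Complex representation of eq. (3): [[A1, A2], [-conj(A2), conj(A1)]]."""
    A1, A2 = A
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def qmul(A, B):
    """Quaternion product via pairs: using j*c = conj(c)*j,
    (A1 + A2 j)(B1 + B2 j) = (A1 B1 - A2 conj(B2)) + (A1 B2 + A2 conj(B1)) j."""
    A1, A2 = A
    B1, B2 = B
    return (A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj())

A, B = randq(3, 4), randq(4, 2)
assert np.allclose(f(qmul(A, B)), f(A) @ f(B))   # property 5: f(AB) = f(A) f(B)
```

The assertion passes for any sizes, because the pair representation makes the lost conjugations (property 8 of the quaternion facts) explicit.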

In [3–13], the authors studied the matrix equation (2) where A, B, C, D, E, and F are given matrices of suitable sizes over the real or complex field. For quaternion matrix equations there are also many important results. For example, using the generalized singular value decomposition of quaternion matrices, Jiang, Liu, and Wei [14] studied the solutions of the general quaternion matrix equation AXB − CYD = E, and Liu [15] studied the least squares Hermitian solution of the quaternion matrix equation (A^H XA, B^H XB) = (C, D). Jiang and Wei [16] studied the solution of the general quaternion matrix equation AXB − CYD = E by using real representations of quaternion matrix equations. Over the real quaternion algebra, Wang [17,18] studied bisymmetric and centrosymmetric solutions to certain matrix equations, as well as the general solution to the system of matrix equations A1X = C1, A2X = C2, A3X = C3, and A4X = C4. However, to our knowledge, the problem of finding the least squares constrained solutions of the quaternion matrix equation (2) has not yet been treated at a satisfactory level. The reasons are twofold: (i) solving the problem needs a new technique, and (ii) the main obstacles in the study of quaternion matrices come, as expected, from the noncommutative multiplication of quaternions, i.e., ab ≠ ba for a, b ∈ Q in general. Our approach is based on a new way of studying vec(ABC), which successfully overcomes the difficulty arising from the noncommutative multiplication of quaternions.

This paper is organized as follows. In Section 2, we derive the explicit expression for the solution of Problem I and provide a numerical algorithm for finding it. Finally, we give some brief comments in Section 3.

2. The solution of Problem I

In order to study Problem I, we first state some lemmas. For a matrix A ∈ Q^{m×n}, let a_i = (a_{1i}, a_{2i}, ..., a_{mi}) (i = 1, 2, ..., n), and denote by vec(A) the following vector containing all the entries of A:

vec(A) = (a1, a2, ..., an)^T.   (4)

Definition 1. For a matrix A ∈ Q^{n×n}, let a1 = (a11, √2 a21, ..., √2 a_{n1}), a2 = (a22, √2 a32, ..., √2 a_{n2}), ..., a_{n−1} = (a_{(n−1)(n−1)}, √2 a_{n(n−1)}), a_n = a_{nn}, and denote by vec_S(A) the following vector:

vec_S(A) = (a1, a2, ..., a_{n−1}, a_n)^T ∈ Q^{n(n+1)/2}.   (5)

Definition 2. For a matrix B ∈ Q^{n×n}, let b1 = (b21, b31, ..., b_{n1}), b2 = (b32, b42, ..., b_{n2}), ..., b_{n−2} = (b_{(n−1)(n−2)}, b_{n(n−2)}), b_{n−1} = b_{n(n−1)}, and denote by vec_A(B) the following vector:

vec_A(B) = √2 (b1, b2, ..., b_{n−2}, b_{n−1})^T ∈ Q^{n(n−1)/2}.   (6)
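The √2 scaling in Definitions 1 and 2 makes vec_S and vec_A norm-preserving, which is what the column orthogonality in Lemma 1 below rests on. A minimal real-valued sketch (the function names vec_s and vec_a are ours):

```python
import numpy as np

def vec_s(A):
    """vec_S of eq. (5): stack the lower triangle column by column,
    scaling the strictly-lower entries by sqrt(2)."""
    n = A.shape[0]
    parts = []
    for j in range(n):
        col = A[j:, j].astype(float)      # diagonal entry first
        col[1:] *= np.sqrt(2)             # strictly-lower entries scaled
        parts.append(col)
    return np.concatenate(parts)

def vec_a(B):
    """vec_A of eq. (6): sqrt(2) times the strictly-lower triangle, by columns."""
    n = B.shape[0]
    return np.sqrt(2) * np.concatenate([B[j + 1:, j] for j in range(n - 1)])

X = np.array([[1., 2.], [2., 5.]])        # symmetric
assert np.isclose(np.linalg.norm(vec_s(X)), np.linalg.norm(X))
Y = np.array([[0., -3.], [3., 0.]])       # anti-symmetric
assert np.isclose(np.linalg.norm(vec_a(Y)), np.linalg.norm(Y))
```

For the symmetric example, vec_s(X) = (1, 2√2, 5)^T, whose squared norm 1 + 8 + 25 equals ‖X‖² = 1 + 4 + 4 + 25.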

Lemma 1. Suppose X ∈ R^{n×n}. Then

(i) X ∈ SR^{n×n} ⇐⇒ vec(X) = K_n vec_S(X),   (7)

where vec_S(X) is as in (5), and the matrix K_n ∈ R^{n²×n(n+1)/2} is of the following form:

K_n = (1/√2) ×
[ √2e1  e2  e3  ⋯  e_{n−1}  e_n   0     0   ⋯  0        0     ⋯  0          0        0     ]
[ 0     e1  0   ⋯  0        0     √2e2  e3  ⋯  e_{n−1}  e_n   ⋯  0          0        0     ]
[ 0     0   e1  ⋯  0        0     0     e2  ⋯  0        0     ⋯  0          0        0     ]
[ ⋮     ⋮   ⋮       ⋮        ⋮     ⋮     ⋮       ⋮        ⋮         ⋮          ⋮        ⋮     ]
[ 0     0   0   ⋯  e1       0     0     0   ⋯  e2       0     ⋯  √2e_{n−1}  e_n      0     ]
[ 0     0   0   ⋯  0        e1    0     0   ⋯  0        e2    ⋯  0          e_{n−1}  √2e_n ],   (8)

(ii) X ∈ ASR^{n×n} ⇐⇒ vec(X) = L_n vec_A(X),   (9)

where vec_A(X) is as in (6), and the matrix L_n ∈ R^{n²×n(n−1)/2} is of the following form:

L_n = (1/√2) ×
[ e2   e3   ⋯  e_{n−1}  e_n   0    0   ⋯  0    ⋯  0        ]
[ −e1  0    ⋯  0        0     e3   e4  ⋯  e_n  ⋯  0        ]
[ 0    −e1  ⋯  0        0     −e2  0   ⋯  0    ⋯  0        ]
[ ⋮    ⋮        ⋮        ⋮     ⋮    ⋮       ⋮        ⋮        ]
[ 0    0    ⋯  −e1      0     0    0   ⋯  0    ⋯  e_n      ]
[ 0    0    ⋯  0        −e1   0    0   ⋯  −e2  ⋯  −e_{n−1} ],   (10)

where e_i is the ith column of I_n. Obviously, K_n and L_n are standard column orthogonal, that is,

K_n^T K_n = I_{n(n+1)/2},  L_n^T L_n = I_{n(n−1)/2}.
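The structure of K_n and L_n can be checked numerically: each column of K_n is vec of a normalized symmetric basis matrix, and each column of L_n is vec of a normalized anti-symmetric one, ordered column by column over the lower triangle as in (5) and (6). A sketch (helper names ours):

```python
import numpy as np

def unit(n, i, j):
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

def K(n):
    """K_n of (8): columns vec(E_jj) and (vec(E_ij) + vec(E_ji))/sqrt(2), i > j."""
    cols = []
    for j in range(n):
        cols.append(unit(n, j, j).flatten('F'))          # column-stacking vec, as in (4)
        for i in range(j + 1, n):
            cols.append((unit(n, i, j) + unit(n, j, i)).flatten('F') / np.sqrt(2))
    return np.column_stack(cols)

def L(n):
    """L_n of (10): columns (vec(E_ij) - vec(E_ji))/sqrt(2), i > j."""
    cols = []
    for j in range(n):
        for i in range(j + 1, n):
            cols.append((unit(n, i, j) - unit(n, j, i)).flatten('F') / np.sqrt(2))
    return np.column_stack(cols)

n = 4
Kn, Ln = K(n), L(n)
assert np.allclose(Kn.T @ Kn, np.eye(n * (n + 1) // 2))   # standard column orthogonality
assert np.allclose(Ln.T @ Ln, np.eye(n * (n - 1) // 2))

# vec(X) of a symmetric X lies in the column space of K_n, as (7) asserts
rng = np.random.default_rng(1)
S = rng.standard_normal((n, n)); S = S + S.T
assert np.allclose(Kn @ (Kn.T @ S.flatten('F')), S.flatten('F'))
```

Because the columns are orthonormal, projecting vec(S) back through K_n K_n^T reproduces it exactly, which is the ⇐ direction of (7).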

Proof. We only consider (ii); a similar argument applies to (i). Any X ∈ ASR^{n×n} can be expressed as

X = [ 0           −x21        ⋯  −x_{(n−1)1}  −x_{n1}     ]
    [ x21         0           ⋯  −x_{(n−1)2}  −x_{n2}     ]
    [ ⋮           ⋮                ⋮            ⋮           ]
    [ x_{(n−1)1}  x_{(n−1)2}  ⋯  0            −x_{n(n−1)} ]
    [ x_{n1}      x_{n2}      ⋯  x_{n(n−1)}   0           ]

= x21(e2, −e1, …, 0, 0) + ⋯ + x_{(n−1)1}(e_{n−1}, 0, …, −e1, 0) + x_{n1}(e_n, 0, …, 0, −e1) + ⋯ + x_{(n−1)2}(0, e_{n−1}, …, −e2, 0) + x_{n2}(0, e_n, …, 0, −e2) + ⋯ + x_{n(n−1)}(0, 0, …, e_n, −e_{n−1}).

Applying vec to both sides, it then follows that

vec(X) = √2 x21 · (1/√2)(e2^T, −e1^T, 0, …, 0)^T + ⋯ + √2 x_{n(n−1)} · (1/√2)(0, …, 0, e_n^T, −e_{n−1}^T)^T,

that is, each coordinate √2 x_{ij} of vec_A(X) multiplies the corresponding column of L_n, and it yields vec(X) = L_n vec_A(X). Conversely, if X ∈ R^{n×n} satisfies vec(X) = L_n vec_A(X), then it is easy to see that X ∈ ASR^{n×n}. The proof is completed. □

Lemma 2. The matrix equation Ax = b, with A ∈ R^{m×n} and b ∈ R^m, has a solution x ∈ R^n if and only if

AA^+ b = b;   (11)

in that case it has the general solution

x = A^+ b + (I − A^+ A)y,   (12)

where y ∈ R^n is an arbitrary vector.
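Lemmas 2 and 3 are standard Moore–Penrose facts; numerically, A^+ supplies both the general least squares solution and the one of least norm. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 8))   # rank-deficient, 6 x 8
b = rng.standard_normal(6)

Ap = np.linalg.pinv(A)
x_min = Ap @ b                                   # least squares solution with least norm
y = rng.standard_normal(8)
x_gen = x_min + (np.eye(8) - Ap @ A) @ y         # general least squares solution (12)/(13)

# both are least squares solutions, and x_min has the smaller norm
assert np.allclose(A @ x_min, A @ x_gen)
assert np.linalg.norm(x_min) <= np.linalg.norm(x_gen) + 1e-12

# consistency test of Lemma 2: A A^+ b = b holds when Ax = b is solvable
b_cons = A @ rng.standard_normal(8)
assert np.allclose(A @ (Ap @ b_cons), b_cons)
```

The second term of x_gen ranges over the null space of A, so it changes the norm of the solution but not the residual.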

Lemma 3. The least squares solutions of the matrix equation Ax = b, with A ∈ R^{m×n} and b ∈ R^m, can be represented as

x = A^+ b + (I − A^+ A)y,   (13)

where y ∈ R^n is an arbitrary vector, and the least squares solution of Ax = b with the least norm is x = A^+ b.

It is well known that for A ∈ C^{m×n}, B ∈ C^{n×s}, and C ∈ C^{s×t}, one can get

vec(ABC) = (C^T ⊗ A)vec(B).   (14)

However, for quaternion matrices A ∈ Q^{m×n}, B ∈ Q^{n×s}, and C ∈ Q^{s×t}, (14) cannot hold, because of the noncommutative multiplication of quaternions. Thus, we have to adopt a new way to study vec(ABC). To identify the quaternion q = c1 + c2 j with a complex vector in C^{1×2}, we denote such an identification by the ≅ symbol, i.e., c1 + c2 j = q ≅ q̃ = (c1, c2). Thus, for A = A1 + A2 j ∈ Q^{m×n}, A ≅ Ã = (A1, A2), and

vec(A1) + vec(A2)j = vec(A) ≅ vec(Ã) = [ vec(A1) ]
                                       [ vec(A2) ],   ‖vec(A)‖ = ‖vec(Ã)‖.

We denote A⃗ = (Re(A1), Im(A1), Re(A2), Im(A2)),

vec(A⃗) = [ vec(Re(A1)) ]        vec_H(A⃗) = [ vec_S(Re(A1)) ]
          [ vec(Im(A1)) ],                  [ vec_A(Im(A1)) ]
          [ vec(Re(A2)) ]                   [ vec_A(Re(A2)) ]
          [ vec(Im(A2)) ]                   [ vec_A(Im(A2)) ];

obviously, ‖vec(Ã)‖ = ‖vec(A⃗)‖. In particular, for A = A1 + A2 i ∈ C^{m×n} with A1, A2 ∈ R^{m×n}, A ≅ A⃗ = (A1, A2), and

vec(A1) + vec(A2)i = vec(A) ≅ vec(A⃗) = [ vec(A1) ]
                                       [ vec(A2) ].

Addition of two quaternion matrices A = A1 + A2 j and B = B1 + B2 j is given by

(A1 + B1) + (A2 + B2)j = A + B ≅ (A + B)~ = (A1 + B1, A2 + B2),

whereas multiplication is given by

AB = (A1 + A2 j)(B1 + B2 j) = (A1B1 − A2B̄2) + (A1B2 + A2B̄1)j,

then AB ≅ (AB)~, and (AB)~ can be expressed as

(AB)~ = (A1B1 − A2B̄2, A1B2 + A2B̄1) = (A1, A2) [ B1   B2 ]
                                               [ −B̄2  B̄1 ] = Ã f(B).

Lemma 4. For A = A1 + A2 j ∈ Q^{m×n}, B = B1 + B2 j ∈ Q^{n×s}, and C = C1 + C2 j ∈ Q^{s×t}, one gets

vec((ABC)~) = ( f(C)^T ⊗ A1, f(Cj)^H ⊗ A2 ) [ vec(B̃)       ]
                                            [ vec((−jBj)~) ].   (15)

Proof. From

(ABC)~ = Ã f(BC) = Ã f(B) f(C) = (A1, A2) [ B1   B2 ] [ C1   C2 ]
                                          [ −B̄2  B̄1 ] [ −C̄2  C̄1 ]

= (A1B1C1 − A2B̄2C1 − A1B2C̄2 − A2B̄1C̄2,  A1B1C2 − A2B̄2C2 + A1B2C̄1 + A2B̄1C̄1),

we can get

vec((ABC)~) = [ (C1^T ⊗ A1)vec(B1) − (C̄2^T ⊗ A1)vec(B2) − (C1^T ⊗ A2)vec(B̄2) − (C̄2^T ⊗ A2)vec(B̄1) ]
              [ (C2^T ⊗ A1)vec(B1) + (C̄1^T ⊗ A1)vec(B2) − (C2^T ⊗ A2)vec(B̄2) + (C̄1^T ⊗ A2)vec(B̄1) ]

= ( f(C)^T ⊗ A1, f(Cj)^H ⊗ A2 ) [ vec(B̃)       ]
                                [ vec((−jBj)~) ],

where vec(B̃) = (vec(B1)^T, vec(B2)^T)^T and −jBj = B̄1 + B̄2 j, so vec((−jBj)~) = (vec(B̄1)^T, vec(B̄2)^T)^T. This proves the assertion. □
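Lemma 4 can be checked numerically with the pair representation; the conj calls restore the overbars that carry the noncommutativity (helper names ours):

```python
import numpy as np

rng = np.random.default_rng(4)
cm = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
vec = lambda M: M.flatten('F')                       # column-stacking vec

def f(P):
    """Complex representation (3) of the pair P = (P1, P2)."""
    P1, P2 = P
    return np.block([[P1, P2], [-P2.conj(), P1.conj()]])

m, n, s, t = 2, 3, 3, 2
A1, A2 = cm(m, n), cm(m, n)
B1, B2 = cm(n, s), cm(n, s)
C1, C2 = cm(s, t), cm(s, t)

# the two complex components of ABC, from the proof of Lemma 4
M1 = A1@B1@C1 - A1@B2@C2.conj() - A2@B2.conj()@C1 - A2@B1.conj()@C2.conj()
M2 = A1@B1@C2 + A1@B2@C1.conj() - A2@B2.conj()@C2 + A2@B1.conj()@C1.conj()
lhs = np.concatenate([vec(M1), vec(M2)])

FC  = f((C1, C2))
FCj = f((-C2, C1))                                   # Cj = -C2 + C1*j
rhs = (np.kron(FC.T, A1) @ np.concatenate([vec(B1), vec(B2)])
       + np.kron(FCj.conj().T, A2) @ np.concatenate([vec(B1.conj()), vec(B2.conj())]))

assert np.allclose(lhs, rhs)                         # eq. (15)
```

The first Kronecker term is (f(C)^T ⊗ A1) vec(B̃), the second is (f(Cj)^H ⊗ A2) vec((−jBj)~), exactly the two blocks of (15).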

For any X = X1 + X2 j ∈ Q^{n×n} with X1, X2 ∈ C^{n×n}, it is easy to see that

X ∈ HQ^{n×n} ⇔ (Re(X1)^T, Im(X1)^T, Re(X2)^T, Im(X2)^T) = (Re(X1), −Im(X1), −Re(X2), −Im(X2)).   (16)

Then we have

Lemma 5. For X = X1 + X2 j ∈ HQ^{n×n}, one gets

[ vec(X̃)       ]   [ K_n  iL_n   O    O    ] [ vec_S(Re(X1)) ]
[ vec((−jXj)~) ] = [ O    O      L_n  iL_n ] [ vec_A(Im(X1)) ]
                   [ K_n  −iL_n  O    O    ] [ vec_A(Re(X2)) ]
                   [ O    O      L_n  −iL_n] [ vec_A(Im(X2)) ].

Proof. For X = X1 + X2 j ∈ HQ^{n×n}, we have

[ vec(X̃)       ]   [ vec(X1) ]
[ vec((−jXj)~) ] = [ vec(X2) ]
                   [ vec(X̄1) ]
                   [ vec(X̄2) ].

By (16) and Lemma 1, vec(X1) = K_n vec_S(Re(X1)) + iL_n vec_A(Im(X1)) and vec(X2) = L_n vec_A(Re(X2)) + iL_n vec_A(Im(X2)), while vec(X̄1) and vec(X̄2) are their conjugates; stacking these four identities gives the asserted block formula. The proof is completed. □

By Lemmas 1, 4 and 5, we have

Lemma 6. For A = A1 + A2 j ∈ Q^{m×n}, X = X1 + X2 j ∈ HQ^{n×n}, and B = B1 + B2 j ∈ Q^{n×s}, one gets

vec((AXB)~) = ( f(B)^T ⊗ A1, f(Bj)^H ⊗ A2 ) [ K_n  iL_n   O    O    ] [ vec_S(Re(X1)) ]
                                            [ O    O      L_n  iL_n ] [ vec_A(Im(X1)) ]
                                            [ K_n  −iL_n  O    O    ] [ vec_A(Re(X2)) ]
                                            [ O    O      L_n  −iL_n] [ vec_A(Im(X2)) ].   (17)

Based on the discussions mentioned above, we now study Problem I.

Theorem 1. Given A ∈ Q^{m×n}, B ∈ Q^{n×s}, C ∈ Q^{m×n}, D ∈ Q^{n×t}, E ∈ Q^{m×s}, and F ∈ Q^{m×t}; let K_n be as in (8), L_n as in (10), vec(X) as in (4), and W = diag(K_n, L_n, L_n, L_n); write A = A1 + A2 j and C = C1 + C2 j with A1, A2, C1, C2 ∈ C^{m×n}. Let

Q1 = ( f(B)^T ⊗ A1, f(Bj)^H ⊗ A2 ) [ K_n  iL_n   O    O    ]
                                   [ O    O      L_n  iL_n ]
                                   [ K_n  −iL_n  O    O    ]
                                   [ O    O      L_n  −iL_n],

Q2 = ( f(D)^T ⊗ C1, f(Dj)^H ⊗ C2 ) [ K_n  iL_n   O    O    ]
                                   [ O    O      L_n  iL_n ]
                                   [ K_n  −iL_n  O    O    ]
                                   [ O    O      L_n  −iL_n],

Q0 = [ Q1 ]                                        [ vec(Re(Ẽ)) ]
     [ Q2 ],  P1 = Re(Q0),  P2 = Im(Q0),  e =     [ vec(Re(F̃)) ]
                                                  [ vec(Im(Ẽ)) ]
                                                  [ vec(Im(F̃)) ].   (18)

Then the set HL of Problem I can be expressed as

HL = {X | vec(X⃗) = W(P1^+ − H^T P2 P1^+, H^T)e + W(I − P1^+P1 − RR^+)y},   (19)

where

R = (I − P1^+P1)P2^T,
H = R^+ + (I − R^+R)Z P2 P1^+ (P1^+)^T (I − P2^T R^+),
Z = (I + (I − R^+R)P2 P1^+ (P1^+)^T P2^T (I − R^+R))^{−1},
S11 = I − P1P1^+ + (P1^+)^T P2^T Z (I − R^+R) P2 P1^+,
S12 = −(P1^+)^T P2^T (I − R^+R) Z,
S22 = (I − R^+R)Z,

and y is an arbitrary real vector of appropriate order.

Proof. By Lemmas 1 and 6, we can get

‖AXB − E‖² + ‖CXD − F‖² = ‖vec(AXB) − vec(E)‖² + ‖vec(CXD) − vec(F)‖²
= ‖vec((AXB)~) − vec(Ẽ)‖² + ‖vec((CXD)~) − vec(F̃)‖²
= ‖ Q0 vec_H(X⃗) − [ vec(Ẽ) ; vec(F̃) ] ‖²
= ‖ [ P1 ; P2 ] vec_H(X⃗) − e ‖².

By Lemmas 2 and 3, it follows that

vec_H(X⃗) = [ P1 ; P2 ]^+ e + ( I − [ P1 ; P2 ]^+ [ P1 ; P2 ] ) y,

thus

vec(X⃗) = W(P1^+ − H^T P2 P1^+, H^T)e + W(I − P1^+P1 − RR^+)y.

The proof is completed. □

Remark 1. In the proof of Theorem 1, we applied the results of [19]:

[ P1 ; P2 ]^+ = (P1^+ − H^T P2 P1^+, H^T),   I − [ P1 ; P2 ][ P1 ; P2 ]^+ = [ S11    S12 ]
                                                                            [ S12^T  S22 ];

please see [19] for details.

Corollary 1. With the notation and conditions of Theorem 1, the quaternion matrix equation (2) has a solution X ∈ HQ^{n×n} if and only if

[ S11    S12 ]
[ S12^T  S22 ] e = 0.   (20)

In that case, denote by H_E the solution set of the quaternion matrix equation (2); then H_E can be expressed as

H_E = {X | vec(X⃗) = W(P1^+ − H^T P2 P1^+, H^T)e + W(I − P1^+P1 − RR^+)y},   (21)

where y is an arbitrary real vector of appropriate order. Furthermore, if (20) holds, then the quaternion matrix equation (2) has a unique solution X ∈ H_E if and only if

rank [ P1 ]
     [ P2 ] = 2n² − n.   (22)

In that case H_E can be expressed as

H_E = {X | vec(X⃗) = W(P1^+ − H^T P2 P1^+, H^T)e},   (23)

where the notation is the same as in Theorem 1.

Theorem 2. With the notation and conditions of Theorem 1, there exists a unique solution X̂ ∈ HL for Problem I, and X̂ can be expressed as

vec(X̂⃗) = W(P1^+ − H^T P2 P1^+, H^T)e.   (24)

Proof. From (19), it is easy to verify that the solution set HL is nonempty and is a closed convex set. Then there exists a unique solution X̂ ∈ HL for Problem I. We now prove that X̂ can be expressed as (24). From (19), and because the matrix W is standard column orthogonal, we have

min_{X∈HL} ‖X‖ = min_{X∈HL} ‖vec(X⃗)‖ = min_{X∈HL} ‖W vec_H(X⃗)‖ = min_{X∈HL} ‖vec_H(X⃗)‖;

by Lemma 3,

vec_H(X̂⃗) = [ P1 ; P2 ]^+ e,

thus

vec(X̂⃗) = W(P1^+ − H^T P2 P1^+, H^T)e.

This proves the assertion. □
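As a sanity check on Theorem 2 (and not the paper's Kronecker-product formulas), Problem I can be solved by brute force on a small instance: parametrize the Hermitian quaternion unknown by an orthonormal real basis, assemble the real least squares system column by column, and take the minimal-norm solution via the pseudoinverse. All names below are ours; quaternion matrices are stored as complex pairs (X1, X2) with X = X1 + X2 j:

```python
import numpy as np

rng = np.random.default_rng(5)
cm = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
randq = lambda m, n: (cm(m, n), cm(m, n))

def qmul(A, B):
    A1, A2 = A; B1, B2 = B
    return (A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj())

def realvec(Q):
    """Stack Re/Im of both complex components; its 2-norm is the quaternion Frobenius norm."""
    Q1, Q2 = Q
    return np.concatenate([Q1.real.ravel(), Q1.imag.ravel(),
                           Q2.real.ravel(), Q2.imag.ravel()])

def herm_basis(n):
    """Orthonormal real basis of HQ^{n x n}: X1 Hermitian, X2 (complex) anti-symmetric."""
    basis, Z = [], np.zeros((n, n), complex)
    for j in range(n):
        D = np.zeros((n, n)); D[j, j] = 1.0
        basis.append((D.astype(complex), Z))
        for i in range(j + 1, n):
            S = np.zeros((n, n)); S[i, j] = S[j, i] = 1.0 / np.sqrt(2)
            W = np.zeros((n, n)); W[i, j] = 1.0 / np.sqrt(2); W[j, i] = -W[i, j]
            basis.append((S.astype(complex), Z))      # real part (symmetric)
            basis.append((1j * W, Z))                 # i part (anti-symmetric)
            basis.append((Z, W.astype(complex)))      # j part
            basis.append((Z, 1j * W))                 # k part
    return basis

n, m, s, t = 3, 2, 2, 2
A, B = randq(m, n), randq(n, s)
C, D = randq(m, n), randq(n, t)
E, F = randq(m, s), randq(m, t)

basis = herm_basis(n)
M = np.column_stack([np.concatenate([realvec(qmul(qmul(A, Xb), B)),
                                     realvec(qmul(qmul(C, Xb), D))]) for Xb in basis])
r = np.concatenate([realvec(E), realvec(F)])
p = np.linalg.pinv(M) @ r              # minimal-norm least squares coefficients (Lemma 3)

X1 = sum(pk * b1 for pk, (b1, b2) in zip(p, basis))
X2 = sum(pk * b2 for pk, (b1, b2) in zip(p, basis))
assert np.allclose(X1, X1.conj().T) and np.allclose(X2, -X2.T)   # X is Hermitian
assert np.allclose(M.T @ (M @ p - r), 0, atol=1e-8)              # least squares optimality
```

Because the basis is orthonormal in the Frobenius inner product, the minimal-norm coefficient vector p also gives the minimal-norm matrix X̂, mirroring the role of the column-orthogonal W in (24).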

Corollary 2. With the notation and conditions of Theorem 1, the least norm problem

‖X̂‖ = min_{X ∈ H_E} ‖X‖

has a unique solution X̂ ∈ H_E, and X̂ can be expressed as (24).

Remark 2. The problem associated with Problem I is the following.

Problem II. Given A ∈ Q^{m×n}, B ∈ Q^{n×s}, C ∈ Q^{m×n}, D ∈ Q^{n×t}, E ∈ Q^{m×s}, F ∈ Q^{m×t}, and X* ∈ Q^{n×n}, let SL = {X | X ∈ HQ^{n×n}, ‖AXB − E‖² + ‖CXD − F‖² = min}. Find X̂ ∈ SL such that

‖X̂ − X*‖ = min_{X ∈ SL} ‖X − X*‖.   (25)

Obviously, when X* = O, the solution of Problem II is the solution of Problem I. Since the methods and results are similar, we omit the detailed discussion in this paper.

According to the discussions above, we now provide a numerical algorithm for finding the solution of Problem I.

Algorithm 1 (For Problem I).
(1) Input A, B, C, D, E, F, K_n, and L_n (A ∈ Q^{m×n}, B ∈ Q^{n×s}, C ∈ Q^{m×n}, D ∈ Q^{n×t}, E ∈ Q^{m×s}, and F ∈ Q^{m×t}).
(2) Compute P1, P2, R, H, S11, S12, S22, and e.
(3) If (20) and (22) hold, then calculate X̂ (X̂ ∈ H_E) according to (24).
(4) If (20) holds, then calculate X̂ (X̂ ∈ H_E) according to (24); otherwise go to the next step.
(5) Calculate X̂ (X̂ ∈ HL) according to (24).

The authors have implemented Algorithm 1 in Matlab 6.5.

3. Comments

In this paper, we have discussed the Hermitian solution of the quaternion matrix equation (2). An analytical expression of the least squares Hermitian solution with the least norm of the matrix equation (2) over the skew field of quaternions is derived, and an algorithm for finding the least norm solution is established. To our knowledge, the problem of finding the least squares constrained solutions of the quaternion matrix equation (2) had not previously been treated at a satisfactory level. Based on a new way of studying vec(ABC),


we apply the complex representations of quaternion matrices, the Moore–Penrose generalized inverse and the Kronecker product of matrices to investigate the Hermitian solutions of the quaternion matrix equation (2). It is worth noting that our approach is also useful for finding the least squares constrained solutions of other quaternion matrix equations. Although the paper is essentially a theoretical one, a numerical algorithm for finding the least norm solution of the quaternion matrix equation (2) is given. More work, e.g. iterative methods, error analysis and numerical stability, remains to be investigated.

Acknowledgements

The authors are very much indebted to the anonymous referees for their constructive and valuable comments and suggestions which greatly improved the original manuscript of this paper.

References

[1] F.Z. Zhang, Quaternions and matrices of quaternions, Linear Algebra Appl. 251 (1997) 21–57.
[2] D.R. Farenick, B.A.F. Pidkowich, The spectral theorem in quaternions, Linear Algebra Appl. 371 (2003) 75–102.
[3] S.K. Mitra, Common solutions to a pair of linear matrix equations A1XB1 = C1 and A2XB2 = C2, Proc. Cambridge Philos. Soc. 74 (1973) 213–216.
[4] K.W.E. Chu, Singular value and generalized singular value decompositions and the solution of linear matrix equations, Linear Algebra Appl. 88–89 (1987) 83–98.
[5] A.P. Liao, A generalization of a class of inverse eigenvalue problem, J. Hunan Univ. 22 (1995) 7–10.
[6] S.K. Mitra, A pair of simultaneous linear matrix equations A1XB1 = C1 and A2XB2 = C2 and a matrix programming problem, Linear Algebra Appl. 131 (1990) 107–123.
[7] A. Navarra, P.L. Odell, D.M. Young, A representation of the general common solution to the matrix equations A1XB1 = C1 and A2XB2 = C2 with applications, Comput. Math. Appl. 41 (7–8) (2001) 929–935.
[8] J.W. van der Woude, On the existence of a common solution X to the matrix equations AiXBj = Cij, (i, j) ∈ Γ, Linear Algebra Appl. 375 (2003) 135–145.
[9] Y.X. Yuan, On the two classes of best approximation problems, Math. Numer. Sin. 23 (2001) 429–436.
[10] Y.X. Yuan, The minimum norm solutions of two classes of matrix equations, Numer. Math. J. Chin. Univ. 24 (2002) 127–134.
[11] Y.X. Yuan, The optimal solution of linear matrix equation by matrix decompositions, Math. Numer. Sin. 24 (2002) 165–176.
[12] N. Shinozaki, M. Sibuya, Consistency of a pair of matrix equations with an application, Keio Engrg. Rep. 27 (1974) 141–146.
[13] A.P. Liao, Y. Lei, Least squares solution with the minimum-norm for the matrix equation (AXB, GXH) = (C, D), Comput. Math. Appl. 50 (2005) 539–549.
[14] T.S. Jiang, Y.H. Liu, M.S. Wei, Quaternion generalized singular value decomposition and its applications, Appl. Math. J. Chin. Univ. 21 (1) (2006) 113–118.
[15] Y.H. Liu, On the best approximation problem of quaternion matrices, J. Math. Study 37 (2) (2004) 129–134.
[16] T.S. Jiang, M.S. Wei, Real representations of quaternion matrix equations, Acta Math. Sci. 26A (4) (2006) 578–584 (in Chinese).
[17] Q.W. Wang, Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 641–650.
[18] Q.W. Wang, The general solution to a system of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 665–675.
[19] J.R. Magnus, L-structured matrices and linear matrix equations, Linear Multilinear Algebra 14 (1983) 67–88.