Applied Mathematics and Computation 235 (2014) 157–167
A fast structure-preserving method for computing the singular value decomposition of quaternion matrices

Ying Li a, Musheng Wei b,*, Fengxia Zhang a, Jianli Zhao a

a College of Mathematical Sciences, Liaocheng University, Shandong 252000, PR China
b College of Mathematics and Science, Shanghai Normal University, Shanghai 200234, PR China
Keywords: Quaternion matrix; Quaternion singular value decomposition; Structure-preserving algorithm

Abstract
In this paper we propose a fast structure-preserving algorithm for computing the singular value decomposition of quaternion matrices. The algorithm is based on the structure-preserving bidiagonalization of the real counterpart of a quaternion matrix by applying orthogonal JRS-symplectic matrices. The algorithm is efficient and numerically stable.

© 2014 Elsevier Inc. All rights reserved.
1. Introduction

Quaternions and quaternion matrices have wide applications in applied science, such as special relativity and non-relativistic quantum mechanics [11,13], group representations [9,10,12], relativistic dynamics [14,15], field theory [16], the Lagrangian formalism [17], the electroweak model [18] and grand unification theories [21]. With the rapid development of the above disciplines, it is becoming more and more necessary to further study the theoretical properties and numerical computations of quaternions and quaternion matrices.

There are several ways to design an algorithm for numerical computations involving quaternion matrices. The first way is to directly use the methods for real matrix computations; the algorithm will then need quaternion arithmetic, and so is sometimes inefficient. The second way is to design an algorithm based on the real representation of quaternion matrices. Notice that both the rows and the columns of the real representation of a quaternion matrix are four times those of the original quaternion matrix; therefore, in this case we need to pay extra attention to designing an efficient algorithm. When quaternion matrices have special properties, one may design efficient algorithms by exploiting those properties. As a special case, consider the following linear complex symmetric system
\[
Ax = b, \quad A \in \mathbb{C}^{n\times n}, \quad x, b \in \mathbb{C}^{n}, \qquad (1.1)
\]
where A = W + iT, in which W is real symmetric positive definite and T ≠ 0 is real symmetric positive semi-definite, and x = y + iz, b = c + id with y, z, c, d n-dimensional real vectors. Complex symmetric linear systems of this kind arise in many problems in scientific computing and engineering applications, including diffuse optical tomography, FFT-based solution of certain time-dependent PDEs, quantum mechanics, molecular scattering, structural dynamics, and lattice quantum chromodynamics. If we designed an iterative method directly from (1.1), we would use complex arithmetic throughout the code, which would be wasteful and inefficient.

☆ Supported by the National Natural Science Foundation of China 11171226 and 11301247, the Natural Science Foundation of Shandong under Grant ZR2012FQ005, and the Science Foundation of Liaocheng University under Grants 31805 and 318011318.
* Corresponding author. E-mail address: [email protected] (M. Wei).
http://dx.doi.org/10.1016/j.amc.2014.02.068
0096-3003/© 2014 Elsevier Inc. All rights reserved.
During recent years, several efficient iterative algorithms for solving (1.1) have been proposed.

1. Benzi et al. [4] proposed the following procedure. Using the real representation of a complex matrix, \( f(W + iT) = \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \), the complex symmetric Eq. (1.1) can be rewritten as
\[
\begin{bmatrix} W & -T \\ T & W \end{bmatrix}
\begin{bmatrix} y \\ z \end{bmatrix}
=
\begin{bmatrix} c \\ d \end{bmatrix}. \qquad (1.2)
\]
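The equivalence between (1.1) and its real block form can be checked numerically. Below is a minimal NumPy sketch; the sizes, the random SPD/SPSD construction and the direct solver are illustrative assumptions, not part of the iterative methods discussed here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
W = M @ M.T + n * np.eye(n)        # real symmetric positive definite
N = rng.standard_normal((n, n))
T = N @ N.T                        # real symmetric positive semi-definite
y0 = rng.standard_normal(n)
z0 = rng.standard_normal(n)
b = (W + 1j * T) @ (y0 + 1j * z0)  # right-hand side with known solution

# real 2n x 2n block system of (1.2): [W -T; T W][y; z] = [c; d]
K = np.block([[W, -T], [T, W]])
yz = np.linalg.solve(K, np.concatenate([b.real, b.imag]))
x = yz[:n] + 1j * yz[n:]
assert np.allclose(x, y0 + 1j * z0)
```

Only real arithmetic is used once K is formed, which is exactly the point of the reformulation.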
Because the matrix on the left side is real, one can apply any appropriate iterative method and preconditioner, such as the Hermitian/skew-Hermitian splitting (HSS) [3] and the preconditioned HSS (PHSS).

2. Bai et al. [1] observed that the complex symmetric system (1.1) can be rewritten as
\[
(\alpha I + W)x = (\alpha I - iT)x + b, \qquad (\alpha I + T)x = (\alpha I + iW)x - ib, \qquad (1.3)
\]
in which α > 0 is a parameter. From the above equations the authors of [1] proposed the modified HSS (MHSS) iterative method.

3. Bai et al. [2] observed that the complex symmetric system (1.1) can also be rewritten as
\[
(\alpha V + W)x = (\alpha V - iT)x + b, \qquad (\alpha V + T)x = (\alpha V + iW)x - ib, \qquad (1.4)
\]
in which α > 0 is a parameter and V is a real symmetric positive definite matrix. From the above equations the authors of [2] proposed the preconditioned MHSS (PMHSS) iterative method.

For detailed analysis, examples and additional references, we refer to [4,1,2].

The singular value decomposition (SVD) [19,20] is one of the most powerful tools in matrix computations. The SVD of a quaternion matrix was theoretically derived in 1997 by Zhang [28]. The quaternion SVD has now been widely used in reduced-rank signal processing, where the idea is to extract the significant parts of a signal [6], and in algebraic feature extraction for colour images in image pattern recognition [26,7,25]. We now describe the application background of the quaternion SVD.

1. Reduced-rank signal processing. A color image can be represented by a quaternion matrix Q ∈ H^{m×n}. Let the SVD of Q be
\[
Q = U \Sigma V^{H}, \qquad (1.5)
\]
in which U ∈ H^{m×m} and V ∈ H^{n×n} are quaternion unitary matrices, and V^H is the conjugate transpose of V,
Σ = diag(σ_1, …, σ_l) with l = min{m, n}, and σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_l are the singular values of the matrix Q. The reduced-rank signal processing model can then be described as: find a quaternion matrix Q̂_0 ∈ H^{m×n} such that
\[
\operatorname{rank}(\hat{Q}_0) = \min\{\operatorname{rank}(\hat{Q}) : \hat{Q} \in \mathbb{H}^{m\times n}\}, \quad \text{s.t. } \|Q - \hat{Q}_0\|_2 < \epsilon, \qquad (1.6)
\]
where ε > 0 is a problem-related parameter and ‖·‖_2 is the spectral matrix norm. If the singular values of Q satisfy σ_1 ≥ ⋯ ≥ σ_k ≥ ε > σ_{k+1} ≥ ⋯ ≥ σ_l, then a solution to (1.6) has the form
\[
\hat{Q}_0 = U_1 \Sigma_1 V_1^{H}, \qquad (1.7)
\]
where Σ_1 = diag(σ_1, …, σ_k), and U_1, V_1 consist of the first k columns of U and V, respectively.

2. The Karhunen–Loève transform. The Karhunen–Loève transform is a well-known technique in image and signal processing, and is based on the EVD of the covariance matrix of the different lines or columns. Given an image Q ∈ H^{m×n}, one can build the covariance matrix of q_l, the lth column of Q, as
\[
C_l = \tilde{q}_l \tilde{q}_l^{H}, \qquad (1.8)
\]
where q̃_l = q_l − m[q_l] is the lth centered (corrected from its mean color value) column of Q. Then the mean covariance matrix is given as
\[
C = \frac{1}{n}\sum_{l=1}^{n} C_l = \frac{1}{n}\tilde{Q}\tilde{Q}^{H}, \qquad (1.9)
\]
in which Q̃ = (q̃_1, …, q̃_n), and C is a positive semi-definite quaternion matrix. Then C has the following quaternion eigenvalue decomposition:
\[
C = U \Lambda U^{H}, \qquad (1.10)
\]
in which U is a quaternion unitary matrix and Λ = diag(λ_1, …, λ_n) is a positive semi-definite diagonal matrix. From the above transformation we obtain a new image
\[
Y = U^{T} Q, \qquad (1.11)
\]
~ can be denoted as here U denotes the transpose of the matrix U. On the other hand, the SVD of Q T
\[
\tilde{Q} = U \Sigma V^{H}, \qquad (1.12)
\]
where U is the same quaternion unitary matrix as in the quaternion eigenvalue decomposition of C; therefore we can compute the SVD of Q̃ instead of the QEVD of C to obtain U.

Le Bihan [6,5] introduced the computation of the quaternion SVD using the complex adjoint matrix. Sangwine and Le Bihan [23] presented a method to compute the quaternion SVD based on transforming the quaternion matrix to bidiagonal form using quaternion Householder transformations. Le Bihan and Sangwine [8] proposed a Jacobi SVD algorithm for quaternion matrices. Recently, the authors of [22] proposed a structure-preserving algorithm for solving the right eigenvalue problem of Hermitian quaternion matrices, using the special structures and properties of the real representation of Hermitian quaternion matrices. By applying the same idea, the authors of [27] proposed a structure-preserving algorithm to compute the Cholesky decomposition of a Hermitian positive definite quaternion matrix.

In this paper we compute the SVD of a quaternion matrix Q using the same idea as [22]. Let Υ_Q be the real representation of Q. Then Υ_Q has a nice structure: generalized JRS-symmetry. This structure is unchanged under orthogonal JRS-symplectic transformations (for the definitions of generalized JRS-symmetry and orthogonal JRS-symplectic transformations, see §2). That makes it possible to derive a structure-preserving method for the bidiagonalization of Υ_Q. This structure-preserving method costs only about a quarter of the arithmetic operations of the Householder bidiagonalization algorithm applied directly to Υ_Q. What is more important, the bidiagonal matrix obtained from an m × n quaternion matrix by the structure-preserving method is still generalized JRS-symmetric, and has the following form
\[
\begin{bmatrix}
D & 0 & 0 & 0 \\
0 & D & 0 & 0 \\
0 & 0 & D & 0 \\
0 & 0 & 0 & D
\end{bmatrix},
\]
where D is an m × n real bidiagonal matrix. Therefore, to evaluate the singular values of the quaternion matrix, we only need to compute the singular values of D.

This paper is organized as follows. In Section 2, we recall some preliminary results used in the paper. In Section 3, we propose a structure-preserving algorithm for computing the SVD of quaternion matrices. In Section 4, we provide experiments comparing this algorithm with two other standard algorithms, to demonstrate its efficiency. Finally, in Section 5 we make some concluding remarks.

2. Preliminaries

In this section we recall some basic properties of quaternions and quaternion matrices for completeness. A quaternion q ∈ H is expressed as
\[
q = a + bi + cj + dk,
\]
where a, b, c, d ∈ R, and the three imaginary units i, j, k satisfy
\[
i^2 = j^2 = k^2 = ijk = -1.
\]
The quaternion skew-field H is an associative but non-commutative algebra of rank four over R, endowed with an involutory antiautomorphism
\[
q \mapsto \bar{q} = a - bi - cj - dk.
\]
Every non-zero quaternion is invertible, and the unique inverse is given by \(1/q = \bar{q}/|q|^2\), where the quaternion norm |q| is defined by
\[
|q|^2 = q\bar{q} = a^2 + b^2 + c^2 + d^2.
\]
For any quaternion matrix Q = Q_0 + Q_1 i + Q_2 j + Q_3 k, where Q_0, Q_1, Q_2, Q_3 ∈ R^{m×n}, its real counterpart can be defined as
\[
\Upsilon_Q \equiv \begin{bmatrix}
Q_0 & Q_2 & Q_1 & Q_3 \\
-Q_2 & Q_0 & Q_3 & -Q_1 \\
-Q_1 & -Q_3 & Q_0 & Q_2 \\
-Q_3 & Q_1 & -Q_2 & Q_0
\end{bmatrix}. \qquad (2.1)
\]
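As a quick numerical illustration of (2.1) (a NumPy sketch; the 5 × 3 random size is an arbitrary choice), the singular values of the real counterpart occur in quadruples — each singular value of the quaternion matrix appears four times:

```python
import numpy as np

def real_counterpart(Q0, Q1, Q2, Q3):
    # 4m x 4n real counterpart of Q = Q0 + Q1*i + Q2*j + Q3*k, block layout of (2.1)
    return np.block([[ Q0,  Q2,  Q1,  Q3],
                     [-Q2,  Q0,  Q3, -Q1],
                     [-Q1, -Q3,  Q0,  Q2],
                     [-Q3,  Q1, -Q2,  Q0]])

rng = np.random.default_rng(0)
Q0, Q1, Q2, Q3 = (rng.standard_normal((5, 3)) for _ in range(4))
Y = real_counterpart(Q0, Q1, Q2, Q3)
sv = np.linalg.svd(Y, compute_uv=False)   # 12 values, sorted descending
quads = sv.reshape(-1, 4)                 # consecutive groups of four
assert np.allclose(quads.max(axis=1), quads.min(axis=1))
```

This quadruple structure is why the bidiagonal form derived later only needs the single real block D.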
Now we study the structural properties of Υ_Q.

Definition 2.1 [22]. Define three unitary matrices
\[
J_k = \begin{bmatrix}
0 & 0 & I_k & 0 \\
0 & 0 & 0 & I_k \\
-I_k & 0 & 0 & 0 \\
0 & -I_k & 0 & 0
\end{bmatrix}, \qquad (2.2)
\]
\[
R_k = \begin{bmatrix}
0 & -I_k & 0 & 0 \\
I_k & 0 & 0 & 0 \\
0 & 0 & 0 & I_k \\
0 & 0 & -I_k & 0
\end{bmatrix}, \quad
S_k = \begin{bmatrix}
0 & 0 & 0 & -I_k \\
0 & 0 & I_k & 0 \\
0 & -I_k & 0 & 0 \\
I_k & 0 & 0 & 0
\end{bmatrix}, \qquad (2.3)
\]
where I_k denotes the k × k identity matrix, k = m or n. Let the superscripts T and * denote the transpose and the conjugate transpose, respectively. A real matrix M ∈ R^{4m×4n} is called generalized JRS-symmetric if J_m M J_n^T = M, R_m M R_n^T = M and S_m M S_n^T = M.

Definition 2.2. A real matrix M ∈ R^{4m×4n} is called JRS-symplectic if M J_n M^T = J_m, M R_n M^T = R_m and M S_n M^T = S_m, where J_k, R_k and S_k (k = m, n) are defined in Definition 2.1.

By simple computations, we observe that Υ_Q, the real counterpart of the quaternion matrix Q ∈ H^{m×n}, is generalized JRS-symmetric [22]. Therefore, if we want to compute the SVD of an m × n quaternion matrix Q, we can deal with the SVD of the 4m × 4n real counterpart matrix Υ_Q. When we exploit the structure of Υ_Q, the computational workload, computational time and storage space can be greatly reduced.

We also need the following results from [22]. For given g = g_0 + g_1 i + g_2 j + g_3 k ∈ H, g_0, g_1, g_2, g_3 ∈ R, if g ≠ 0, define G_1 as
\[
G_1 = \begin{bmatrix}
\cos\alpha_0 & \cos\alpha_2 & \cos\alpha_1 & \cos\alpha_3 \\
-\cos\alpha_2 & \cos\alpha_0 & \cos\alpha_3 & -\cos\alpha_1 \\
-\cos\alpha_1 & -\cos\alpha_3 & \cos\alpha_0 & \cos\alpha_2 \\
-\cos\alpha_3 & \cos\alpha_1 & -\cos\alpha_2 & \cos\alpha_0
\end{bmatrix}
= \begin{bmatrix}
g_0 & g_2 & g_1 & g_3 \\
-g_2 & g_0 & g_3 & -g_1 \\
-g_1 & -g_3 & g_0 & g_2 \\
-g_3 & g_1 & -g_2 & g_0
\end{bmatrix}\Big/|g| = \Upsilon_g/|g|,
\]
then
\[
G_1^{T}\,\Upsilon_g = \Upsilon_g\,G_1^{T} = |g|\, I_4, \qquad G_1^{T} G_1 = G_1 G_1^{T} = I_4.
\]
Therefore, G_1 is an orthogonal matrix, which is the condensed form of the following JRS-symplectic Givens matrix G_l. The matrix G_l coincides with the identity I_{4n} except in rows and columns l, n + l, 2n + l and 3n + l, which carry the entries of G_1 (blank entries below are zero):
\[
G_l = \begin{bmatrix}
I_{l-1} & & & & & & & & & & & \\
& \cos\alpha_0 & & & \cos\alpha_2 & & & \cos\alpha_1 & & & \cos\alpha_3 & \\
& & I_{n-l} & & & & & & & & & \\
& & & I_{l-1} & & & & & & & & \\
& -\cos\alpha_2 & & & \cos\alpha_0 & & & \cos\alpha_3 & & & -\cos\alpha_1 & \\
& & & & & I_{n-l} & & & & & & \\
& & & & & & I_{l-1} & & & & & \\
& -\cos\alpha_1 & & & -\cos\alpha_3 & & & \cos\alpha_0 & & & \cos\alpha_2 & \\
& & & & & & & & I_{n-l} & & & \\
& & & & & & & & & I_{l-1} & & \\
& -\cos\alpha_3 & & & \cos\alpha_1 & & & -\cos\alpha_2 & & & \cos\alpha_0 & \\
& & & & & & & & & & & I_{n-l}
\end{bmatrix}.
\]
The direct sum of four identical n × n Householder matrices has the following form:
\[
H_l = \begin{bmatrix}
I_n - \beta v v^T & 0 & 0 & 0 \\
0 & I_n - \beta v v^T & 0 & 0 \\
0 & 0 & I_n - \beta v v^T & 0 \\
0 & 0 & 0 & I_n - \beta v v^T
\end{bmatrix}.
\]
Notice that both G_l and H_l are orthogonal JRS-symplectic matrices of the form
\[
W = \begin{bmatrix}
W_0 & W_2 & W_1 & W_3 \\
-W_2 & W_0 & W_3 & -W_1 \\
-W_1 & -W_3 & W_0 & W_2 \\
-W_3 & W_1 & -W_2 & W_0
\end{bmatrix},
\]
where 1 ≤ l ≤ n; α_0, α_1, α_2, α_3 ∈ [−π/2, π/2) with cos²α_0 + cos²α_1 + cos²α_2 + cos²α_3 = 1; v is a vector of length n whose first l − 1 elements are zero; β is a scalar with β(βv^Tv − 2) = 0; and W_0, W_1, W_2, W_3 ∈ R^{n×n}.

Suppose that U and V are 4m × 4m and 4n × 4n orthogonal JRS-symplectic matrices, respectively. If X is generalized JRS-symmetric, then
\[
\begin{aligned}
J_m(U^T X V)J_n^T &= U^T (U J_m U^T) X (V J_n^T V^T) V = U^T (J_m X J_n^T) V = U^T X V,\\
R_m(U^T X V)R_n^T &= U^T (U R_m U^T) X (V R_n^T V^T) V = U^T (R_m X R_n^T) V = U^T X V,\\
S_m(U^T X V)S_n^T &= U^T (U S_m U^T) X (V S_n^T V^T) V = U^T (S_m X S_n^T) V = U^T X V,
\end{aligned}
\]
therefore U^T X V is still generalized JRS-symmetric. Hence orthogonal JRS-symplectic transformations preserve the generalized JRS-symmetric structure.

3. The structure-preserving algorithm

In this section, we propose an algorithm for the bidiagonalization of the real counterpart of a quaternion matrix. Based on this algorithm, we present the standard form of the real counterpart of a quaternion matrix under orthogonal JRS-symplectic transformations.

First, we recall the algorithms for generating a generalized symplectic Givens rotation [22] and for generating the Householder vector and the scale b1 of a Householder matrix [19,20], which will be used as subprograms.

Algorithm 3.1 [22]. A method for generating a 4 × 4 generalized symplectic Givens rotation.
Function: G1 = JRSGivens(g0, g1, g2, g3).
If g1 = g2 = g3 = 0,
G1 = I_4;
else
\[
G_1 = \begin{bmatrix}
g_0 & g_2 & g_1 & g_3 \\
-g_2 & g_0 & g_3 & -g_1 \\
-g_1 & -g_3 & g_0 & g_2 \\
-g_3 & g_1 & -g_2 & g_0
\end{bmatrix}\Big/\sqrt{g_0^2 + g_1^2 + g_2^2 + g_3^2}.
\]
This algorithm costs 24 flops, including one square root operation.

Algorithm 3.2 [19,20]. A method for generating the Householder vector and the scale b1 of an n × n Householder matrix.
Function: [u, b1] = House(x, n)
u = x; a1 = norm(x);
if u(1) >= 0
    u(1) = u(1) + a1; b1 = 1/(u(1)*a1);
else
    u(1) = u(1) - a1; b1 = -1/(u(1)*a1);
end
In fact, the Householder matrix H = eye(n) - b1*u*u' need not be generated explicitly; we only need the Householder vector u and the scalar b1, so this algorithm involves only about 2n flops.

Now we propose an algorithm for the bidiagonalization of the real counterpart of a quaternion matrix.
Algorithm 3.3. For a given quaternion matrix Q = Q0 + Q1 i + Q2 j + Q3 k ∈ H^{m×n}, where Qp ∈ R^{m×n}, p = 0, 1, 2, 3, this algorithm produces orthogonal JRS-symplectic matrices U ∈ R^{4m×4m}, V ∈ R^{4n×4n} and a bidiagonal matrix D ∈ R^{m×n} satisfying Theorem 3.4. Without loss of generality, we suppose m ≥ n in this algorithm.

Function: D = Bidiagq(Q0, Q1, Q2, Q3).
[m, n] = size(Q0);
1. for s = 1 : n - 2
2.   for t = s : m
3.     Generate a generalized symplectic Givens rotation for the (t, s)th elements of Qp by
         G1 = JRSGivens(Q0(t,s), Q1(t,s), Q2(t,s), Q3(t,s));
4.     Update the sth to nth elements in the tth row of Qp by
         Z = G1' * [Q0(t,s:n); Q2(t,s:n); Q1(t,s:n); Q3(t,s:n)];
         Q0(t,s:n) = Z(1,:); Q2(t,s:n) = Z(2,:); Q1(t,s:n) = Z(3,:); Q3(t,s:n) = Z(4,:);
5.   end
6.   Generate a Householder vector for the sth to mth elements in the sth column of Q0:
       [u, b1] = House(Q0(s:m,s), m+1-s);
7.   Update the submatrix formed by the sth to mth rows and the sth to nth columns of Qp, p = 0, 1, 2, 3, by
       Qp(s:m,s:n) = Qp(s:m,s:n) - (b1*u)*(u'*Qp(s:m,s:n));
8.   for t = s+1 : n
9.     Generate a generalized symplectic Givens rotation for the (s, t)th elements of Qp by
         G1 = JRSGivens(Q0(s,t), Q1(s,t), Q2(s,t), Q3(s,t));
10.    Update the sth to mth elements in the tth column of Qp by
         y = [Q0(s:m,t), Q2(s:m,t), Q1(s:m,t), Q3(s:m,t)] * G1';
         Q0(s:m,t) = y(:,1); Q2(s:m,t) = y(:,2); Q1(s:m,t) = y(:,3); Q3(s:m,t) = y(:,4);
11.  end
12.  Generate a Householder vector for the (s+1)th to nth elements in the sth row of Q0:
       [v, b1] = House(Q0(s,s+1:n)', n-s);
13.  Update the submatrix formed by the sth to mth rows and the (s+1)th to nth columns of Qp, p = 0, 1, 2, 3, by
       Qp(s:m,s+1:n) = Qp(s:m,s+1:n) - (Qp(s:m,s+1:n)*v)*(b1*v');
14. end
15. s = n - 1;
16. for t = s : m
17.   Generate a generalized symplectic Givens rotation for the (t, s)th elements of Qp by
        G1 = JRSGivens(Q0(t,s), Q1(t,s), Q2(t,s), Q3(t,s));
18.   Update the (n-1)th to nth elements in the tth row of Qp by
        Z = G1' * [Q0(t,n-1:n); Q2(t,n-1:n); Q1(t,n-1:n); Q3(t,n-1:n)];
        Q0(t,n-1:n) = Z(1,:); Q2(t,n-1:n) = Z(2,:); Q1(t,n-1:n) = Z(3,:); Q3(t,n-1:n) = Z(4,:);
19. end
20. Generate a Householder vector for the (n-1)th to mth elements in the (n-1)th column of Q0:
      [u, b1] = House(Q0(n-1:m,n-1), m+1-s);
21. Update the submatrix formed by the (n-1)th to mth rows and the (n-1)th to nth columns of Qp, p = 0, 1, 2, 3, by
      Qp(n-1:m,n-1:n) = Qp(n-1:m,n-1:n) - (b1*u)*(u'*Qp(n-1:m,n-1:n));
22. Generate a generalized symplectic Givens rotation for the (n-1, n)th elements of Qp by
      G1 = JRSGivens(Q0(n-1,n), Q1(n-1,n), Q2(n-1,n), Q3(n-1,n));
23. Update the (n-1, n)th elements of Qp by
      y = [Q0(n-1,n), Q2(n-1,n), Q1(n-1,n), Q3(n-1,n)] * G1';
      Q0(n-1,n) = y(1); Q2(n-1,n) = y(2); Q1(n-1,n) = y(3); Q3(n-1,n) = y(4);
24. s = n;
25. for t = s : m
26.   Generate a generalized symplectic Givens rotation for the (t, n)th elements of Qp by
        G1 = JRSGivens(Q0(t,n), Q1(t,n), Q2(t,n), Q3(t,n));
27.   Update the (t, n)th elements of Qp by
        Z = G1' * [Q0(t,n); Q2(t,n); Q1(t,n); Q3(t,n)];
        Q0(t,n) = Z(1); Q2(t,n) = Z(2); Q1(t,n) = Z(3); Q3(t,n) = Z(4);
28. end
29. Generate a Householder vector for the nth to mth elements in the nth column of Q0:
      [u, b1] = House(Q0(n:m,n), m+1-n);
30. Update the nth to mth elements in the nth column of Qp, p = 0, 1, 2, 3, by
      Qp(n:m,n) = Qp(n:m,n) - (b1*u)*(u'*Qp(n:m,n));
31. D = Q0;
32. end
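The essence of Algorithm 3.3 can be sketched by working entry-wise with quaternion arithmetic instead of the condensed four-block updates (a NumPy sketch under that simplification; quaternions are stored along a length-4 last axis, and the 6 × 4 size is an arbitrary assumption). Each column and row is first rotated onto the real axis by unit-quaternion scalings, after which real Householder reflections do the actual zeroing:

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternion arrays stored as (..., 4) in the order (1, i, j, k)
    a0, a1, a2, a3 = np.moveaxis(a, -1, 0)
    b0, b1, b2, b3 = np.moveaxis(b, -1, 0)
    return np.stack([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0], axis=-1)

def house(x):
    # Householder vector and scale (as in Algorithm 3.2): (I - b1*u*u') x = -+||x|| e1
    u = x.copy()
    a1 = np.linalg.norm(x)
    if a1 == 0.0:
        return u, 0.0
    if u[0] >= 0:
        u[0] += a1; b1 = 1.0 / (u[0] * a1)
    else:
        u[0] -= a1; b1 = -1.0 / (u[0] * a1)
    return u, b1

def quat_bidiag(Q):
    # Q: (m, n, 4) quaternion matrix with m >= n; returns a real bidiagonal D (m, n)
    Q = Q.astype(float).copy()
    m, n = Q.shape[:2]
    cj = np.array([1.0, -1.0, -1.0, -1.0])       # quaternion conjugation mask
    for s in range(n):
        for t in range(s, m):                    # rotate column-s entries onto the real axis
            g = Q[t, s]; ng = np.linalg.norm(g)
            if ng > 0:
                Q[t, s:] = qmul((g * cj / ng)[None, :], Q[t, s:])
        u, b1 = house(Q[s:, s, 0])               # real Householder clears column s below (s, s)
        for p in range(4):
            Q[s:, s:, p] -= np.outer(b1 * u, u @ Q[s:, s:, p])
        if s < n - 1:
            for t in range(s + 1, n):            # rotate row-s entries onto the real axis
                g = Q[s, t]; ng = np.linalg.norm(g)
                if ng > 0:
                    Q[s:, t] = qmul(Q[s:, t], (g * cj / ng)[None, :])
            v, b1 = house(Q[s, s + 1:, 0])       # real Householder clears row s beyond (s, s+1)
            for p in range(4):
                Q[s:, s + 1:, p] -= np.outer(Q[s:, s + 1:, p] @ v, b1 * v)
    return Q[..., 0]

rng = np.random.default_rng(1)
Q = rng.standard_normal((6, 4, 4))
D = quat_bidiag(Q)
assert np.allclose(D, np.triu(np.tril(D, 1)))    # D is real bidiagonal
```

As a consistency check, the singular values of D (each taken four times) agree with those of the 4m × 4n real counterpart of Q, since every operation above is quaternion-unitary.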
Remark 3.1. In Algorithm 3.3, because of the special JRS-symmetric structure of all intermediate matrices and the JRS-symplectic structure of all orthogonal transformations, we only need to evaluate the first m rows, or the first n columns, of the intermediate matrices in order to update them. By using this strategy, we need only a quarter of the arithmetic operations mentioned above; the algorithm is numerically stable and efficient, and the real counterpart need not be generated explicitly.

Algorithm 3.3 takes about 44(mn² − n³/3) flops for the bidiagonalization of the real counterpart Υ_Q ∈ R^{4m×4n} of an m × n quaternion matrix Q. Recall that the Householder bidiagonalization of the 4m × 4n matrix Υ_Q needs about 4(4m)(4n)² − 4(4n)³/3 = 256(mn² − n³/3) flops. What is more important, Algorithm 3.3 is strongly backward stable, since in every step the structure of Υ_Q is preserved and only real orthogonal transformations are used.

From the above algorithm, we can check the following result.

Theorem 3.4. Suppose that Q ∈ H^{m×n} and Υ_Q is the real counterpart of Q. Then there exist orthogonal JRS-symplectic matrices U ∈ R^{4m×4m} and V ∈ R^{4n×4n} such that
\[
U^{T}\,\Upsilon_Q\, V = \begin{bmatrix}
D & 0 & 0 & 0 \\
0 & D & 0 & 0 \\
0 & 0 & D & 0 \\
0 & 0 & 0 & D
\end{bmatrix}, \qquad (3.1)
\]
where D ∈ R^{m×n} is a bidiagonal matrix.
Once Algorithm 3.3 has been performed, we obtain a real bidiagonal matrix D, which contains all singular values of Q. We can then perform the standard iterative procedure to compute the singular values of D [20].

4. Numerical experiments

In this section we present some numerical examples. In all examples, we perform the structure-preserving algorithm (Algorithm 3.3, followed by the function svd in MATLAB) to compute the singular values of a quaternion matrix Q, and compare our algorithm with two other standard algorithms: one is the function svd in the Quaternion Toolbox for Matlab [24], which is based on bidiagonalization of a quaternion matrix to a real or complex bidiagonal matrix using quaternion Householder transformations [23]; the other is the function svd applied to the real counterpart Υ_Q. All computations are performed on an Intel Core i7-2600 @ 3.40 GHz / 8 GB computer using MATLAB 7.10.

Example 4.1. Given a quaternion matrix Q = Q0 + Q1 i + Q2 j + Q3 k ∈ H^{ak×bk}, where Qp (p = 0, 1, 2, 3) are random real matrices, and a, b are two given positive integers; a, b and k determine the dimension of Q.

1. When a = 3, b = 2 and k = 4, compute the singular values of Q by performing the structure-preserving algorithm.
2. Compare the CPU time of three different algorithms: the structure-preserving algorithm, svd of the Quaternion Toolbox for Matlab [24] applied to Q, and svd applied to Υ_Q, with k = 1 : 50.
3. For a = 4, b = 2 and k = 30, evaluate the singular values of Q by performing the structure-preserving algorithm and svd of the Quaternion Toolbox, to compare the accuracy of the two algorithms.

Solution. 1. For a random quaternion matrix Q = Q0 + Q1 i + Q2 j + Q3 k ∈ H^{12×8}, where
Q0, Q1, Q2 and Q3 are 12 × 8 random real matrices (their explicit entries are not reproduced here),
by performing the structure-preserving algorithm, we have
singular values ¼ ½10:2115; 2:8739; 2:6999; 1:9684; 1:5409; 1:3002; 0:9291; 0:6532; and the process takes about 0.0173 s. 4. For a ¼ 10; b ¼ 5; and k ¼ 1 : 50, we apply three algorithms to compute the singular values of Q. From Fig. 1, we observe that, for n large, our structure-preserving method cost about one-third CPU time of that by running svd of Quaternion toolbox, and about one-sixth CPU time of that by running svdof real counterpart. Therefore, Algorithm 3.3 is efficient and stable. 5. For a ¼ 4; b ¼ 2; for k ¼ 30, Fig. 2 shows the singular values computed by the structure-preserving method and svd of Quaternion toolbox [24]. The two sets of singular values are almost identical. 30 *−−structure−preserving mathod .−−svd of Quaternion toolbox +−−svd of real counterpart
CPU time(second)
25
20
15
10
5
0
0
10
20
30
40
k=1:50 Fig. 1. CPU times for computing singular values.
50
Fig. 2. Accuracy comparison of singular values (structure-preserving method vs. svd of the Quaternion Toolbox; k = 30).
Fig. 3. SVD faces in color of 16 faces from Faces95.
Example 4.2. As an application of the SVD of quaternion matrices, we use the reduced-rank signal processing model described in (1.5)–(1.7) for face recognition. We use the same sets of data as in [22], from Faces95 [29]. There are 16 faces in color, and every color face is a 128 × 128 quaternion matrix H_t = R_t i + G_t j + B_t k for t = 1 : 16. The matrix Q is defined as Q = [vec(H_1), …, vec(H_16)], where vec is the vectorizing operator. Therefore, Q is a 16384 × 16 quaternion matrix, and A = Q Q^H is a 16384 × 16384 quaternion matrix. We apply the structure-preserving algorithm proposed in this paper to evaluate the SVD of Q for face recognition, and it takes about 0.000016 s to do the work. We also perform the svd function of the Quaternion Toolbox, which takes about 0.00021 s to do the same work. The resulting figures are shown in Fig. 3. We observe that these recovered figures are at least as good as those in [22]. Recall that the authors of [22] perform the structure-preserving algorithm for solving the right eigenvalue problem of the Hermitian quaternion matrix A = Q Q^H. Because the dimension of A is much bigger than that of Q, the computational time is much larger: the structure-preserving algorithm proposed in [22] takes about 0.015 s to do this work, and the eig function of the Quaternion Toolbox takes about 0.203 s to do the same work.
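The reduced-rank model (1.5)–(1.7) used in this example can also be exercised at the real-counterpart level. In the following NumPy sketch (random 8 × 6 data instead of face images, and rank k = 2, are arbitrary assumptions), keeping k quaternion singular values means keeping 4k singular values of the real counterpart, and the spectral-norm error of the truncation is exactly the next singular value:

```python
import numpy as np

def real_counterpart(Q0, Q1, Q2, Q3):
    # 4m x 4n real counterpart, block layout as in (2.1)
    return np.block([[ Q0,  Q2,  Q1,  Q3],
                     [-Q2,  Q0,  Q3, -Q1],
                     [-Q1, -Q3,  Q0,  Q2],
                     [-Q3,  Q1, -Q2,  Q0]])

rng = np.random.default_rng(7)
Y = real_counterpart(*(rng.standard_normal((8, 6)) for _ in range(4)))
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 2                                    # keep k quaternion singular values = 4k real ones
Yk = (U[:, :4 * k] * s[:4 * k]) @ Vt[:4 * k]
# Eckart-Young: the spectral-norm error of the rank-k model (1.7) is the next singular value
assert np.isclose(np.linalg.norm(Y - Yk, 2), s[4 * k])
```

For actual face data one would apply the structure-preserving bidiagonalization of Algorithm 3.3 instead of a dense svd of the full real counterpart; the sketch only illustrates the truncation model.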
Remark 4.1. From the above examples, we observe the following facts.

1. All three methods are numerically stable, because they use orthogonal or unitary transformations to obtain bidiagonal matrices. Our structure-preserving method costs about one-third of the CPU time of svd from the Quaternion Toolbox, and about one-sixth of the CPU time of svd applied to the real counterpart. Therefore, the structure-preserving method is more efficient than the other two methods.
2. Notice that the function svd of the Quaternion Toolbox is based on quaternion Householder transformations producing a real bidiagonal matrix, as proposed by Sangwine and Le Bihan [23]. We would like to point out that, if one performs quaternion Householder transformations by exploiting the special structural properties of the real representation of quaternion matrices, one can also obtain a more efficient structure-preserving algorithm based on the real representation of quaternion Householder transformations.

5. Conclusions

In this paper we have proposed a structure-preserving method to compute the singular values of quaternion matrices. This method is based on the observations that the real representation of a quaternion matrix is generalized JRS-symmetric, and that this structure is unchanged under orthogonal JRS-symplectic transformations. Therefore, we can propose a fast structure-preserving algorithm to evaluate the singular values of a quaternion matrix. The algorithm is fast and stable. The examples in this paper also demonstrate the efficiency of the algorithm.

Acknowledgments

The authors are grateful to two anonymous referees for their valuable comments and suggestions, which greatly improved the original manuscript of this paper.

References

[1] Z.-Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) 93–111.
[2] Z.-Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms 56 (2011) 297–317.
[3] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[4] M. Benzi, D. Bertaccini, Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal. 28 (2008) 598–618.
[5] N.L. Bihan, Traitement algébrique des signaux vectoriels. Application en séparation d'ondes sismiques (Ph.D. thesis), Institut National Polytechnique de Grenoble, 2001.
[6] N.L. Bihan, J. Mars, Singular value decomposition of quaternion matrices: a new tool for vector-sensor signal processing, Signal Process. 84 (2004) 1177–1199.
[7] N.L. Bihan, S.J. Sangwine, Quaternion principal component analysis of color images, in: IEEE International Conference on Image Processing, Barcelona, Spain, September 2003.
[8] N.L. Bihan, S.J. Sangwine, Jacobi method for quaternion matrix singular value decomposition, Appl. Math. Comput. 187 (2007) 1265–1271.
[9] N. Cohen, S. De Leo, The quaternionic determinant, Electron. J. Linear Algebra 7 (2000) 100–111.
[10] P.M. Cohn, Skew Field Constructions, Cambridge University Press, Cambridge, 1977.
[11] A.J. Davies, B.H. McKellar, Observability of quaternionic quantum mechanics, Phys. Rev. A 46 (1992) 3671–3675.
[12] A.J. Davies, B.H. McKellar, Non-relativistic quaternionic quantum mechanics, Phys. Rev. A 40 (1989) 4209–4214.
[13] G.M. Dixon, Division Algebras: Octonions, Quaternions, Complex Numbers and the Algebraic Design of Physics, Kluwer, Dordrecht, 1994.
[14] H. Faßbender, D.S. Mackey, N. Mackey, Hamilton and Jacobi come full circle: Jacobi algorithms for structured Hamiltonian problems, Linear Algebra Appl. 332–334 (2001) 37–80.
[15] D. Finkelstein, J.M. Jauch, D. Speiser, Notes on quaternion quantum mechanics, in: Logico-Algebraic Approach to Quantum Mechanics, vol. II, Reidel, Dordrecht, 1979, pp. 367–421.
[16] D. Finkelstein, J.M. Jauch, S. Schiminovich, D. Speiser, Foundations of quaternion quantum mechanics, J. Math. Phys. 3 (1962) 207–220.
[17] D. Finkelstein, J.M. Jauch, S. Schiminovich, D. Speiser, Principle of general Q-covariance, J. Math. Phys. 4 (1963) 788–796.
[18] D. Finkelstein, J.M. Jauch, D. Speiser, Quaternionic representations of compact groups, J. Math. Phys. 4 (1963) 136–140.
[19] G.H. Golub, C. Reinsch, Singular value decomposition and least squares solutions, Numer. Math. 14 (1970) 403–420.
[20] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, Baltimore, MD, 1996.
[21] F. Gürsey, C.H. Tze, On the Role of Division, Jordan and Related Algebras in Particle Physics, World Scientific, Singapore, 1996.
[22] Z. Jia, M. Wei, S. Ling, A new structure-preserving method for quaternion Hermitian eigenvalue problems, J. Comput. Appl. Math. 239 (2013) 12–24.
[23] S.J. Sangwine, N.L. Bihan, Quaternion singular value decomposition based on bidiagonalization to a real or complex matrix using quaternion Householder transformations, Appl. Math. Comput. 182 (2006) 727–738.
[24] S. Sangwine, N.L. Bihan, Quaternion Toolbox for Matlab.