A minimal residual algorithm for the inconsistent matrix equation AXB = C over symmetric matrices

Applied Mathematics and Computation 188 (2007) 499–513
doi:10.1016/j.amc.2006.10.011

Yuan Lei *, An-ping Liao
College of Mathematics and Econometrics, Hunan University, Changsha 410082, PR China

Abstract

An iterative method with short recurrences is presented by Peng [Z.-Y. Peng, An iteration method for the least squares symmetric solution of the linear matrix equation AXB = C, Appl. Math. Comput. 170 (2005) 711–723] for solving the nearness problem associated with the inconsistent matrix equation AXB = C over symmetric matrices. The solution of this nearness problem can be computed with little work and low storage requirements per iteration. However, that algorithm may converge slowly when the residual norm of AXB = C behaves irregularly. To remedy this, a modification based on the idea of the classical CG method is presented in this paper and an error bound is given. Finally, numerical experiments are reported. © 2006 Elsevier Inc. All rights reserved.

Keywords: Iterative method; The matrix nearness problem; Least-squares symmetric solution; The minimum norm solution

1. Introduction

Throughout this paper, we denote the sets of all $m \times n$ real matrices and $n \times n$ real symmetric matrices by $\mathbb{R}^{m\times n}$ and $SR^{n\times n}$ respectively. For a matrix $A \in \mathbb{R}^{m\times n}$, we denote its transpose, Frobenius norm, trace, column space and null space by $A^T$, $\|A\|$, $\operatorname{tr}(A)$, $R(A)$ and $N(A)$ respectively. The symbol $\operatorname{vec}(\cdot)$ represents the vec operator, i.e., $\operatorname{vec}(A) = (a_1^T, a_2^T, \ldots, a_n^T)^T$ for the matrix $A = (a_1, a_2, \ldots, a_n) \in \mathbb{R}^{m\times n}$, $a_i \in \mathbb{R}^{m\times 1}$ ($1 \le i \le n$), and $A \otimes B$ stands for the Kronecker product of the matrices $A$ and $B$; see [1] for details.

The matrix nearness problem, in the sense of the Frobenius norm, consists of finding a nearest member $\widehat X$ of a constraint matrix set to a given matrix $\bar X$. Because the preliminary estimation $\bar X$ is frequently obtained from experiments, it may not satisfy the given restrictions; hence it is necessary to find a nearest matrix $\widehat X$ in this constraint set to replace the estimation $\bar X$ [2]. The matrix nearness problem associated with a matrix equation arises in structural modification and model updating [3,4]. For example, to update the stiffness and damping matrices simultaneously, a class of finite element model updating (FEMU) problems can be transformed to the

* Corresponding author. E-mail addresses: [email protected] (Y. Lei), [email protected] (A.-p. Liao).



matrix nearness problem associated with the inconsistent matrix equation $AXB + CYD = E$ [5]. Furthermore, if the stiffness matrices are assumed known, or have been updated beforehand in order to improve the estimation precision of the damping matrices [6,7], then the corresponding FEMU problem reduces to the matrix nearness problem associated with the inconsistent matrix equation
$$AXB = C. \qquad (1)$$

In this paper, we consider the matrix nearness problem associated with Eq. (1) for the symmetric unknown matrix $X$, which can be described as follows.

Problem I. Given matrices $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{n\times p}$, $C \in \mathbb{R}^{m\times p}$ and $\bar X \in SR^{n\times n}$. Let
$$S_E = \Big\{ X \;\Big|\; X \in SR^{n\times n},\ \|AXB - C\| = \min_{Y \in SR^{n\times n}} \|AYB - C\| \Big\}.$$
Then find $\widehat X \in S_E$ such that
$$\|\widehat X - \bar X\| = \min_{X \in S_E} \|X - \bar X\|.$$

If there does not exist a preliminary analysis damping matrix [7], i.e., if $\bar X = 0$ in Problem I, then $\widehat X$ is the least-squares symmetric solution of Eq. (1) with minimum norm.

For dense matrices $A$ and $B$, Liao and Lei [8] obtained an analytical expression for the solution $\widehat X$ of Problem I by simultaneously using the GSVD and the CCD of matrix pairs. An iterative method was constructed by Peng [9] for solving Problem I with large and sparse coefficient matrices; by choosing an appropriate initial matrix, the solution $\widehat X$ of Problem I can be obtained within finitely many iteration steps in the absence of roundoff errors. That iterative method is based on short recurrences, which keep work and storage requirements constant at each iteration. However, the iteration is defined only by a Galerkin condition and lacks a minimization property, which means that the algorithm may exhibit a rather irregular convergence behavior in the residual norm of Eq. (1), and this often results in very slow convergence.

This motivates us to construct an iterative method for solving Problem I that converges faster at roughly the same computational cost per iteration. The main idea is based on the classical CG method. This CG-type method both maintains the short recurrences and satisfies a minimization property, i.e., the approximate solution minimizes the residual norm over a special affine subspace, which ensures that the method converges more smoothly than the one presented in [9]. We call the resulting algorithm the minimal residual method in this paper. In exact arithmetic, the solution $\widehat X$ can again be obtained within finitely many iteration steps by choosing a special kind of initial symmetric matrix. For the minimal residual method, it is possible to obtain an error bound which is essentially the same as that of the classical CG method.

Finally, one more note. Similar to [9,10], Problem I is equivalent to the problem of finding the least-squares symmetric solution, with minimum norm, of a new matrix equation $\widetilde A X \widetilde B = \widetilde C$. Therefore, without loss of generality, we can always assume that the preliminary estimation $\bar X = 0$ in Problem I; that is to say, we only consider the minimum norm least-squares symmetric solution of Eq. (1) in this paper.

This paper is organized as follows. In Section 2, we establish the minimal residual method for solving Problem I and describe its basic properties. In Section 3, we show that the method possesses a minimization property, which ensures that the algorithm converges smoothly. In Section 4, an error bound is derived. In Section 5, we present several numerical examples, and we end the paper with some brief concluding remarks in Section 6.

2. The minimal residual method for solving Problem I

In this section, based on the main idea of the CG method, we construct a minimal residual method for solving Problem I, which is a modification of the iterative method presented in [9]. We then describe some basic properties of this method. Finally, we show that the iteration terminates within finitely many steps in exact arithmetic, and that the minimum norm least-squares symmetric solution $\widehat X$ of Eq. (1) can be obtained by choosing a special kind of initial matrix.


By applying the Kronecker product [1], we can easily verify that the matrix equation
$$A^T A X B B^T + B B^T X A^T A = A^T C B^T + B C^T A \qquad (2)$$
is the normal equation of the pair of matrix equations
$$\begin{cases} AXB = C, \\ B^T X A^T = C^T \end{cases} \qquad (3)$$
in $\mathbb{R}^{n\times n}$. For all $X \in \mathbb{R}^{n\times n}$, we have
$$\begin{aligned}
\|AXB - C\|^2 + \|B^T X A^T - C^T\|^2 &= \|AS(X)B - C + AR(X)B\|^2 + \|B^T S(X) A^T - C^T + B^T R(X) A^T\|^2 \\
&= \|AS(X)B - C\|^2 + \|B^T S(X) A^T - C^T\|^2 + 2\|AR(X)B\|^2 \\
&\ge \|AS(X)B - C\|^2 + \|B^T S(X) A^T - C^T\|^2 = 2\|AS(X)B - C\|^2,
\end{aligned}$$
where $S(X)$ and $R(X)$ denote the symmetric and anti-symmetric parts of the matrix $X$ respectively. If the matrix $X_0 \in \mathbb{R}^{n\times n}$ is a least-squares solution of (3), then the inequality above indicates that its symmetric part $S(X_0)$ is also a least-squares solution of (3); i.e., the normal equation (2) is always consistent over symmetric matrices, and it can also be regarded as the normal equation of Eq. (1) over symmetric matrices. Consequently, we obtain a conclusion analogous to the one in [9].

Lemma 1. Problem I is equivalent to the problem of finding the minimum norm symmetric solution of the normal equation (2).

Hence, the minimal residual method presented in this paper is also an iterative method for solving the normal equation (2), and it is formulated as follows.

Algorithm I. For an arbitrary initial matrix $X_0 \in SR^{n\times n}$, compute

Step 1: $S_0 = C - A X_0 B$; $R_0 = A^T S_0 B^T + B S_0^T A$; $P_0 = R_0$; $k := 0$.
Step 2: If $R_k = 0$, then stop; else compute
Step 3:
$$X_{k+1} = X_k + \alpha_k P_k, \qquad \alpha_k = \frac{\operatorname{tr}(R_k P_k)}{2\operatorname{tr}(B^T P_k A^T A P_k B)};$$
$$R_{k+1} = R_k - \alpha_k (A^T A P_k B B^T + B B^T P_k A^T A);$$
$$P_{k+1} = R_{k+1} - \beta_k P_k, \qquad \beta_k = \frac{\operatorname{tr}(B^T P_k A^T A R_{k+1} B)}{\operatorname{tr}(B^T P_k A^T A P_k B)};$$
Step 4: $k := k + 1$; go to Step 2.
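For readers who want to experiment with the method, the following MATLAB sketch implements Algorithm I as stated above. The function name minres_axb, the stopping tolerance tol and the iteration cap maxit are our own choices, not part of the paper.

```matlab
function [X, res] = minres_axb(A, B, C, X0, tol, maxit)
% Minimal residual method (Algorithm I) for min ||AXB - C|| over
% symmetric X.  X0 must be symmetric; X0 = zeros(n) yields the
% minimum norm least-squares symmetric solution (Theorem 1).
X = X0;
R = A'*(C - A*X*B)*B';  R = R + R';     % residual of normal equation (2)
P = R;
res = norm(A*X*B - C, 'fro');
for k = 1:maxit
    if norm(R, 'fro') <= tol, break; end           % Step 2
    APB   = A*(P*B);
    denom = norm(APB, 'fro')^2;                    % = tr(B'P A'A P B)
    if denom == 0, break; end                      % alpha = Inf: X optimal (Lemma 3)
    alpha = sum(sum(R.*P)) / (2*denom);            % tr(R_k P_k)/(2 tr(B'P A'A P B))
    X = X + alpha*P;                               % Step 3
    W = A'*APB*B';                                 % A'A P BB'
    R = R - alpha*(W + W');                        % W' = BB' P A'A since P = P'
    beta = sum(sum(APB .* (A*(R*B)))) / denom;     % tr(B'P A'A R_{k+1} B)/denom
    P = R - beta*P;
    res(end+1) = norm(A*X*B - C, 'fro');           %#ok<AGROW>
end
end
```

Keeping $P_k$, $R_k$ and $X_k$ as full matrices, each step costs only a handful of products with $A$ and $B$, which matches the short-recurrence character of the method.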

Evidently, the matrices $X_i$, $R_i$ and $P_i$ generated by Algorithm I are symmetric, and $R_i$ is the residual of the normal equation (2), where $i = 0, 1, 2, \ldots$ In the next part, we establish the properties of the iteration by induction. (Here and below, $\alpha_i = \infty$ denotes the breakdown case in which the denominator $\operatorname{tr}(B^T P_i A^T A P_i B)$ vanishes.)

Lemma 2. For the matrices $R_i$ and $P_i$ generated by Algorithm I, if there exists a positive integer $k$ such that $R_i \ne 0$, $\alpha_i \ne 0$ and $\alpha_i \ne \infty$ for all $i = 0, 1, 2, \ldots, k$, then we have

(1) $\operatorname{tr}(R_i R_j) = 0$, $i, j = 0, 1, 2, \ldots, k$, $i \ne j$;
(2) $\operatorname{tr}(B^T P_i A^T A P_j B) = 0$, $i, j = 0, 1, 2, \ldots, k$, $i \ne j$;
(3) $\operatorname{tr}(R_i P_j) = 0$, $0 \le j < i \le k$;
(4) $\operatorname{tr}(R_i P_j) = \operatorname{tr}(R_j P_j)$, $0 \le i \le j \le k$.


Proof. For $k = 1$, it follows that
$$\begin{aligned}
\operatorname{tr}(R_1 R_0) &= \operatorname{tr}[(R_0 - \alpha_0 (A^T A P_0 B B^T + B B^T P_0 A^T A)) R_0] = \operatorname{tr}(R_0 R_0) - \alpha_0 \operatorname{tr}(A^T A P_0 B B^T R_0 + B B^T P_0 A^T A R_0) \\
&= \operatorname{tr}(R_0 R_0) - 2\alpha_0 \operatorname{tr}(B^T P_0 A^T A P_0 B) = 0,
\end{aligned}$$
$$\begin{aligned}
\operatorname{tr}(B^T P_1 A^T A P_0 B) &= \operatorname{tr}[B^T (R_1 - \beta_0 P_0) A^T A P_0 B] = \operatorname{tr}(B^T R_1 A^T A P_0 B) - \beta_0 \operatorname{tr}(B^T P_0 A^T A P_0 B) \\
&= \operatorname{tr}(B^T R_1 A^T A P_0 B) - \operatorname{tr}(B^T P_0 A^T A R_1 B) = 0,
\end{aligned}$$
$$\begin{aligned}
\operatorname{tr}(R_1 P_0) &= \operatorname{tr}[(R_0 - \alpha_0 (A^T A P_0 B B^T + B B^T P_0 A^T A)) P_0] = \operatorname{tr}(R_0 P_0) - \alpha_0 \operatorname{tr}(A^T A P_0 B B^T P_0 + B B^T P_0 A^T A P_0) \\
&= \operatorname{tr}(R_0 P_0) - 2\alpha_0 \operatorname{tr}(B^T P_0 A^T A P_0 B) = 0.
\end{aligned}$$
Assume that the conclusions $\operatorname{tr}(R_s R_j) = 0$, $\operatorname{tr}(B^T P_s A^T A P_j B) = 0$ and $\operatorname{tr}(R_s P_j) = 0$ hold for all $j \le s - 1$ ($0 < s < k$); then
$$\begin{aligned}
\operatorname{tr}(R_{s+1} R_j) &= \operatorname{tr}(R_s R_j) - \alpha_s \operatorname{tr}[(A^T A P_s B B^T + B B^T P_s A^T A) R_j] \\
&= \operatorname{tr}(R_s R_j) - \alpha_s \operatorname{tr}[(A^T A P_s B B^T + B B^T P_s A^T A)(P_j + \beta_{j-1} P_{j-1})] = 0,
\end{aligned}$$
$$\begin{aligned}
\operatorname{tr}(B^T P_{s+1} A^T A P_j B) &= \operatorname{tr}[B^T (R_{s+1} - \beta_s P_s) A^T A P_j B] = \operatorname{tr}(R_{s+1} A^T A P_j B B^T) \\
&= \operatorname{tr}\Big(R_{s+1}\, \frac{A^T A P_j B B^T + B B^T P_j A^T A}{2}\Big) = \operatorname{tr}\Big(R_{s+1}\, \frac{R_j - R_{j+1}}{2\alpha_j}\Big) = -\frac{1}{2\alpha_j} \operatorname{tr}(R_{s+1} R_{j+1}),
\end{aligned}$$
$$\operatorname{tr}(R_{s+1} P_j) = \operatorname{tr}[(R_s - \alpha_s (A^T A P_s B B^T + B B^T P_s A^T A)) P_j] = \operatorname{tr}(R_s P_j) - \alpha_s \operatorname{tr}[(A^T A P_s B B^T + B B^T P_s A^T A) P_j] = 0.$$
Algorithm I shows that the matrix $P_i$ ($i \le s$) can be expressed as
$$P_i = R_i - \beta_{i-1} P_{i-1} = R_i - \beta_{i-1} R_{i-1} + \beta_{i-1}\beta_{i-2} R_{i-2} - \cdots + (-1)^{i-1}\beta_{i-1}\cdots\beta_1 R_1 + (-1)^i \beta_{i-1}\cdots\beta_0 R_0,$$
which implies that
$$\operatorname{tr}(R_i R_i) = \operatorname{tr}(R_i P_i). \qquad (4)$$
From (4), we have
$$\begin{aligned}
\operatorname{tr}(R_{s+1} R_s) &= \operatorname{tr}(R_s R_s) - \alpha_s \operatorname{tr}[(A^T A P_s B B^T + B B^T P_s A^T A)(P_s + \beta_{s-1} P_{s-1})] \\
&= \operatorname{tr}(R_s R_s) - 2\alpha_s \operatorname{tr}(B^T P_s A^T A P_s B) = \operatorname{tr}(R_s R_s) - \operatorname{tr}(R_s P_s) = 0,
\end{aligned}$$


$$\operatorname{tr}(B^T P_{s+1} A^T A P_s B) = \operatorname{tr}[B^T (R_{s+1} - \beta_s P_s) A^T A P_s B] = \operatorname{tr}(B^T R_{s+1} A^T A P_s B) - \beta_s \operatorname{tr}(B^T P_s A^T A P_s B) = 0,$$
$$\operatorname{tr}(R_{s+1} P_s) = \operatorname{tr}(R_s P_s) - \alpha_s \operatorname{tr}[(A^T A P_s B B^T + B B^T P_s A^T A) P_s] = \operatorname{tr}(R_s P_s) - 2\alpha_s \operatorname{tr}(B^T P_s A^T A P_s B) = 0.$$
Then the conclusion $\operatorname{tr}(R_{s+1} R_s) = 0$ and the assumption $\operatorname{tr}(R_s R_j) = 0$ show that $\operatorname{tr}(B^T P_{s+1} A^T A P_j B) = 0$ holds for all $j \le s - 1$. By the principle of induction, conclusion (3) holds for all $0 \le j < i \le k$, and conclusions (1) and (2) hold for all $i, j = 0, 1, 2, \ldots, k$, $i \ne j$, due to the fact that $\operatorname{tr}(B^T A) = \operatorname{tr}(A^T B)$ holds for all matrices $A$ and $B$ in $\mathbb{R}^{m\times n}$.

By Algorithm I we have
$$\begin{aligned}
R_i &= R_{i+1} + \alpha_i (A^T A P_i B B^T + B B^T P_i A^T A) \\
&= R_{i+2} + \alpha_{i+1}(A^T A P_{i+1} B B^T + B B^T P_{i+1} A^T A) + \alpha_i (A^T A P_i B B^T + B B^T P_i A^T A) \\
&= R_j + \alpha_{j-1}(A^T A P_{j-1} B B^T + B B^T P_{j-1} A^T A) + \cdots + \alpha_i (A^T A P_i B B^T + B B^T P_i A^T A),
\end{aligned}$$
which, together with conclusion (2), implies that conclusion (4) holds for all $i, j$ ($0 \le i \le j \le k$). □

Lemma 2 shows that the residuals $R_0, R_1, \ldots$ generated by Algorithm I are mutually orthogonal in the finite dimensional matrix space $SR^{n\times n}$, and it is easy to show that the matrices $P_0, P_1, \ldots$ are linearly independent. Hence, there exists a positive integer $m \le \frac{n(n+1)}{2}$ such that $R_m = 0$; i.e., the iteration terminates in at most $\frac{n(n+1)}{2}$ steps. Moreover, for all $k \le m$ we have
$$\operatorname{span}\{R_0, R_1, \ldots, R_k\} = \operatorname{span}\{P_0, P_1, \ldots, P_k\}. \qquad (5)$$

It is worth noting that the conclusions of Lemma 2 may fail without the assumptions $\alpha_i \ne 0$ and $\alpha_i \ne \infty$. In particular, $\alpha_i = \infty$ means that the iteration breaks down before $R_m = 0$ for $i < m$. Hence, it is necessary to consider the cases $\alpha_i = 0$ and $\alpha_i = \infty$.

If $\alpha_i = 0$, i.e., $\operatorname{tr}(R_i P_i) = 0$, it follows from (4) that $\operatorname{tr}(R_i R_i) = 0$, which implies $R_i = 0$. If $\alpha_i = \infty$, i.e., $\operatorname{tr}(B^T P_i A^T A P_i B) = 0$, then we have $A P_i B = 0$. Therefore, the matrix $P_i$ satisfies the equation
$$A^T A P_i B B^T + B B^T P_i A^T A = 0,$$
which implies
$$\operatorname{vec}(P_i) \in N(B B^T \otimes A^T A + A^T A \otimes B B^T). \qquad (6)$$
For an arbitrary initial matrix $X_0 \in SR^{n\times n}$, the matrix $P_0$ can be expressed as
$$P_0 = R_0 = A^T A (X^* - X_0) B B^T + B B^T (X^* - X_0) A^T A,$$
where $X^*$ is a symmetric solution of the normal equation (2). Therefore, by Algorithm I, we know that there exists a matrix $H_i \in SR^{n\times n}$ such that
$$P_i = A^T A H_i B B^T + B B^T H_i A^T A,$$
which implies
$$\operatorname{vec}(P_i) \in R(B B^T \otimes A^T A + A^T A \otimes B B^T). \qquad (7)$$
Combining (6) and (7), we have $P_i = 0$, and it follows from Algorithm I that $R_i = \beta_{i-1} P_{i-1}$. From conclusion (3) in Lemma 2, we know that
$$\operatorname{tr}(R_i R_i) = \beta_{i-1} \operatorname{tr}(R_i P_{i-1}) = 0,$$
which means $R_i = 0$. The discussion of the coefficient $\alpha_i$ in Algorithm I is summarized as follows.


Lemma 3. For an arbitrary initial matrix $X_0 \in SR^{n\times n}$, if there exists a positive integer $i \le m$ such that the coefficient $\alpha_i = 0$ or $\alpha_i = \infty$, then the iterate $X_i$ is already a least-squares symmetric solution of Eq. (1) in exact arithmetic.

Lemma 3 indicates that the iteration does not break down before $R_m = 0$ in the absence of roundoff errors, which is just our desired result. By choosing a special kind of initial symmetric matrix, we obtain the main result of this section. Its proof is analogous to those in [9,10] and is omitted.

Theorem 1. Let $S = \{Y \mid Y = A^T A H B B^T + B B^T H A^T A,\ H \in SR^{n\times n}\}$. If we choose the initial iteration matrix $X_0 \in S$ (in particular, $X_0 = 0$), then the unique solution $\widehat X$ of Problem I is obtained within finitely many iteration steps by Algorithm I in exact arithmetic. Furthermore, there exists a certain matrix $\widehat H \in SR^{n\times n}$ such that $\widehat X = A^T A \widehat H B B^T + B B^T \widehat H A^T A$.

3. The minimization property of Algorithm I

In this section, the minimization property of the minimal residual algorithm is characterized by means of a special quadratic matrix function; it ensures that the algorithm, unlike the one presented in [9], converges smoothly. To this end, we first define a matrix function on $SR^{n\times n}$ as follows:
$$f(Y) = B^T Y A^T A Y B - 2 B^T Y A^T C. \qquad (8)$$

Theorem 2. The matrix $\widehat X \in SR^{n\times n}$ is the solution of Problem I if and only if it satisfies the following minimization problem:
$$\operatorname{tr}[f(\widehat X)] = \min_{Y \in S} \operatorname{tr}[f(Y)],$$
where $S$ is the matrix set defined in Theorem 1.

Proof. Obviously, $S$ is a linear subspace of $SR^{n\times n}$. If the matrix $\widehat X$ is the solution of Problem I, we know that $\widehat X \in S$ by Theorem 1. For all $Y \in S$, we have
$$\begin{aligned}
f(Y) - f(\widehat X) &= B^T Y A^T A Y B - 2 B^T Y A^T C - B^T \widehat X A^T A \widehat X B + 2 B^T \widehat X A^T C \\
&= B^T (Y - \widehat X) A^T A (Y - \widehat X) B - 2 (B^T \widehat X A^T A \widehat X B - B^T \widehat X A^T C) \\
&\quad + (B^T \widehat X A^T A Y B + B^T Y A^T A \widehat X B - 2 B^T Y A^T C). \qquad (9)
\end{aligned}$$
Noticing that the matrix $\widehat X$ is also a symmetric solution of the normal equation (2), we have
$$\operatorname{tr}(B^T \widehat X A^T A Y B + B^T Y A^T A \widehat X B) = \operatorname{tr}[(A^T A \widehat X B B^T + B B^T \widehat X A^T A) Y] = \operatorname{tr}(A^T C B^T Y + B C^T A Y) = 2\operatorname{tr}(B^T Y A^T C) \qquad (10)$$
and
$$\operatorname{tr}(B^T \widehat X A^T A \widehat X B) = \operatorname{tr}\Big(\widehat X\, \frac{A^T A \widehat X B B^T + B B^T \widehat X A^T A}{2}\Big) = \operatorname{tr}\Big(\widehat X\, \frac{A^T C B^T + B C^T A}{2}\Big) = \operatorname{tr}(B^T \widehat X A^T C). \qquad (11)$$
Combining (9)–(11), we know
$$\operatorname{tr}[f(Y) - f(\widehat X)] = \operatorname{tr}[B^T (Y - \widehat X) A^T A (Y - \widehat X) B] \ge 0.$$
By the fact that $Y - \widehat X \in S$, we can conclude that $\operatorname{tr}[f(Y)] = \operatorname{tr}[f(\widehat X)]$ if and only if $Y = \widehat X$.

Conversely, let $\{E_i\}_{i=1}^r$ be a basis of the subspace $S$, whose dimension is $r$. If the matrix $X \in S$ satisfies $\operatorname{tr}[f(X)] = \min_{Y \in S} \operatorname{tr}[f(Y)]$, then the inequality
$$\operatorname{tr}[f(X)] \le \operatorname{tr}[f(X + t E_i)] \qquad (12)$$


holds for all real numbers $t$ and all basis matrices $E_i$ ($1 \le i \le r$). By the definition (8), we have
$$\operatorname{tr}[f(X + t E_i)] = \operatorname{tr}[f(X)] + t \operatorname{tr}(B^T E_i A^T A X B + B^T X A^T A E_i B - 2 B^T E_i A^T C) + t^2 \operatorname{tr}(B^T E_i A^T A E_i B).$$
Combining this with the inequality (12), we know
$$t \big[\operatorname{tr}(B^T E_i A^T A X B + B^T X A^T A E_i B - 2 B^T E_i A^T C) + t \operatorname{tr}(B^T E_i A^T A E_i B)\big] \ge 0.$$
Letting $t \to 0^+$, by the basic properties of limits, we have
$$\operatorname{tr}(B^T E_i A^T A X B + B^T X A^T A E_i B - 2 B^T E_i A^T C) \ge 0,$$
and letting $t \to 0^-$ we have
$$\operatorname{tr}(B^T E_i A^T A X B + B^T X A^T A E_i B - 2 B^T E_i A^T C) \le 0.$$
These two inequalities indicate that the equations
$$\operatorname{tr}(B^T E_i A^T A X B + B^T X A^T A E_i B - 2 B^T E_i A^T C) = \operatorname{tr}[E_i (A^T A X B B^T + B B^T X A^T A - A^T C B^T - B C^T A)] = 0$$
hold for all $E_i$ ($1 \le i \le r$). As a consequence, we have
$$A^T A X B B^T + B B^T X A^T A - A^T C B^T - B C^T A = 0,$$
due to the fact that $A^T A X B B^T + B B^T X A^T A - A^T C B^T - B C^T A \in S$. From Lemma 2.4 in [9], we know $X$ is the minimum norm symmetric solution of (2); i.e., $X = \widehat X$ is the solution of Problem I. Thus, we complete the proof. □

From Theorem 2, we know that Problem I is equivalent to a minimization problem for the quadratic matrix function $\operatorname{tr}[f(X)]$ over the subspace $S$; the idea is analogous to that of the steepest descent method and the CG method. By the definition of the matrix function $f(X)$, we can further demonstrate the following conclusion.

Theorem 3. For an arbitrary initial symmetric matrix $X_0$, the approximate solution $X_k$ generated by Algorithm I at the $k$th step satisfies the minimization problem
$$\operatorname{tr}[f(X_k)] = \min_{X \in X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}} \operatorname{tr}[f(X)]. \qquad (13)$$

Proof. For all $X \in X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}$, there exists a set of real numbers $\{t_i\}_{i=0}^{k-1}$ such that
$$X = X_0 + \sum_{i=0}^{k-1} t_i P_i.$$
Denote
$$g(t_0, t_1, \ldots, t_{k-1}) = \operatorname{tr}\Big[f\Big(X_0 + \sum_{i=0}^{k-1} t_i P_i\Big)\Big];$$
then, by the definition (8) and conclusion (2) in Lemma 2, we have
$$\begin{aligned}
g(t_0, t_1, \ldots, t_{k-1}) &= \operatorname{tr}[f(X_0)] + \sum_{i=0}^{k-1}\sum_{j=0}^{k-1} t_i t_j \operatorname{tr}(B^T P_i A^T A P_j B) + \sum_{i=0}^{k-1} t_i \operatorname{tr}(B^T P_i A^T A X_0 B + B^T X_0 A^T A P_i B - 2 B^T P_i A^T C) \\
&= \operatorname{tr}[f(X_0)] + \sum_{i=0}^{k-1} t_i^2 \operatorname{tr}(B^T P_i A^T A P_i B) + \sum_{i=0}^{k-1} t_i \operatorname{tr}(B^T P_i A^T A X_0 B + B^T X_0 A^T A P_i B - 2 B^T P_i A^T C).
\end{aligned}$$


Because $g(t_0, t_1, \ldots, t_{k-1})$ is a continuous and differentiable function of the $k$ variables $t_i$, $i = 0, 1, \ldots, k-1$, we easily see that $g(t_0, t_1, \ldots, t_{k-1})$ attains its minimum if and only if
$$\frac{\partial g(t_0, t_1, \ldots, t_{k-1})}{\partial t_i} = 0.$$
It follows from conclusion (4) in Lemma 2 that
$$t_i = -\frac{\operatorname{tr}(B^T P_i A^T A X_0 B + B^T X_0 A^T A P_i B - 2 B^T P_i A^T C)}{2\operatorname{tr}(B^T P_i A^T A P_i B)} = \frac{\operatorname{tr}(R_0 P_i)}{2\operatorname{tr}(B^T P_i A^T A P_i B)} = \frac{\operatorname{tr}(R_i P_i)}{2\operatorname{tr}(B^T P_i A^T A P_i B)} = \alpha_i.$$
By the fact that
$$\min g(t_0, t_1, \ldots, t_{k-1}) = \min_{X \in X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}} \operatorname{tr}[f(X)],$$
we complete the proof. □

Theorem 3 shows that the approximate solution $X_k$ minimizes the quadratic matrix function $\operatorname{tr}[f(X)]$ over the affine subspace $X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}$ for any initial symmetric matrix $X_0$. Furthermore, we have
$$\operatorname{tr}[f(X_k)] \le \operatorname{tr}[f(X_{k-1})],$$
due to the fact that $X_{k-1} \in X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}$. It follows from direct computations that
$$\operatorname{tr}[f(X_k)] = \operatorname{tr}(B^T X_k A^T A X_k B - 2 B^T X_k A^T C) = \operatorname{tr}[(B^T X_k A^T - C^T)(A X_k B - C) - C^T C] = \|A X_k B - C\|^2 - \|C\|^2.$$
Consequently, the minimization problem (13) is equivalent to
$$\|A X_k B - C\| = \min_{X \in X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}} \|A X B - C\|,$$
and we have
$$\|A X_k B - C\| \le \|A X_{k-1} B - C\|,$$
which shows that the sequence $\|A X_0 B - C\|, \|A X_1 B - C\|, \ldots$ is monotonically decreasing. So the approximate solution $X_k$ minimizes the residual norm over the affine subspace $X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}$ for any initial symmetric $X_0$, and this descent property of the residual norm of Eq. (1) leads to the faster convergence of Algorithm I compared with the iterative method presented in [9].

4. Results on the error bound

In this section, we derive an error bound for the minimal residual method which is essentially the same as that for the classical CG method. For simplicity, we denote
$$G = A^T A \otimes B B^T + B B^T \otimes A^T A,$$
so that the matrix $G$ is symmetric positive semidefinite, and we define a new norm
$$\|x\|_G = \sqrt{x^T G x}$$
on the subspace $R(G)$. Then for every matrix $X$ in the subspace $S$ defined in Theorem 1, we have
$$\|\operatorname{vec}(X)\|_G = \sqrt{[\operatorname{vec}(X)]^T G [\operatorname{vec}(X)]} = \sqrt{2\operatorname{tr}(B^T X A^T A X B)} = \sqrt{2}\,\|A X B\|.$$
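Both identities derived above — the $G$-norm identity and $\operatorname{tr}[f(X)] = \|AXB - C\|^2 - \|C\|^2$ — are easy to confirm numerically. The following throwaway MATLAB check is a sketch of ours with random data; none of these names come from the paper.

```matlab
% Sanity checks for ||vec(X)||_G = sqrt(2)||AXB|| (symmetric X) and for
% tr[f(X)] = ||AXB - C||^2 - ||C||^2.  Random data; all names are ours.
rng(0); n = 4;
A = randn(5, n); B = randn(n, 3); C = randn(5, 3);
X = randn(n); X = (X + X')/2;                       % random symmetric X
G = kron(A'*A, B*B') + kron(B*B', A'*A);
x = X(:);                                           % vec(X)
err1 = abs(sqrt(x'*G*x) - sqrt(2)*norm(A*X*B, 'fro'));
trf  = norm(A*X*B, 'fro')^2 - 2*trace(B'*X*A'*C);   % tr[f(X)]
err2 = abs(trf - (norm(A*X*B - C, 'fro')^2 - norm(C, 'fro')^2));
assert(err1 < 1e-8 && err2 < 1e-8)
```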


Before beginning our analysis, we give an estimate of the nonzero eigenvalues of the matrix $G$. Let the nonzero singular values of the matrices $A$ and $B$ be $\sigma_i$ ($i = 1, 2, \ldots, r_A$) and $\delta_i$ ($i = 1, 2, \ldots, r_B$), where $r_A$ and $r_B$ denote the ranks of $A$ and $B$ respectively, ordered so that
$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{r_A} > 0 \quad\text{and}\quad \delta_1 \ge \delta_2 \ge \cdots \ge \delta_{r_B} > 0.$$
Then by the properties of the Kronecker product [1] and of symmetric matrices [11], the nonzero eigenvalues $\lambda_i$ of $G$ are determined by the singular values of $A$ and $B$, and satisfy
$$2\sigma_{r_A}^2 \delta_{r_B}^2 \le \lambda_i \le 2\sigma_1^2 \delta_1^2 \quad\text{for all } i = 1, 2, \ldots, r_G, \qquad (14)$$
where $r_G$ denotes the rank of $G$ and $r_G = r_A r_B$.

Assume that Algorithm I continues without breakdown up to the $k$th step and that the initial matrix satisfies $X_0 \in S$; then by (5), we know
$$\operatorname{span}\{\operatorname{vec}(R_0), \operatorname{vec}(R_1), \ldots, \operatorname{vec}(R_{k-1})\} = \mathcal{K}(G, \operatorname{vec}(R_0), k),$$
where $\mathcal{K}(G, \operatorname{vec}(R_0), k)$ is a Krylov subspace of dimension $k$. The minimization problem (13) in Theorem 3 can then be reformulated as follows:
$$\operatorname{tr}[f(X_k)] = \min_{\operatorname{vec}(X) \in \operatorname{vec}(X_0) + \mathcal{K}(G, \operatorname{vec}(R_0), k)} \operatorname{tr}[f(X)]. \qquad (15)$$

By making use of the definition (8) again, for all $X \in X_0 + \operatorname{span}\{P_0, P_1, \ldots, P_{k-1}\}$ we have
$$\begin{aligned}
\operatorname{tr}[f(X)] &= \operatorname{tr}(B^T X A^T A X B - 2 B^T X A^T C) = \operatorname{tr}(B^T X A^T A X B - X A^T A \widehat X B B^T - X B B^T \widehat X A^T A) \\
&= \operatorname{tr}[B^T (X - \widehat X) A^T A (X - \widehat X) B] - \operatorname{tr}(B^T \widehat X A^T A \widehat X B) = \frac{1}{2} \|\operatorname{vec}(X) - \operatorname{vec}(\widehat X)\|_G^2 - \frac{1}{2} \|\operatorname{vec}(\widehat X)\|_G^2,
\end{aligned}$$
where $\widehat X$ is the solution of Problem I. It then follows from (15) that
$$\|\operatorname{vec}(X_k) - \operatorname{vec}(\widehat X)\|_G^2 = 2\operatorname{tr}[f(X_k)] + \|\operatorname{vec}(\widehat X)\|_G^2 = \min_{\operatorname{vec}(X) \in \operatorname{vec}(X_0) + \mathcal{K}(G, \operatorname{vec}(R_0), k)} \|\operatorname{vec}(X) - \operatorname{vec}(\widehat X)\|_G^2.$$
Let $d_k = \operatorname{vec}(\widehat X) - \operatorname{vec}(X_k)$; then $X_k$ is of the form
$$\operatorname{vec}(X_k) = \operatorname{vec}(X_0) + q_k(G)\operatorname{vec}(R_0),$$
where $q_k$ is a polynomial of degree $k - 1$ such that
$$\|(I - G q_k(G)) d_0\|_G = \min_{q \in P_{k-1}} \|(I - G q(G)) d_0\|_G.$$
From the previous discussion, it is known that $\|\operatorname{vec}(X_k) - \operatorname{vec}(\widehat X)\|_G$ minimizes the $G$-norm of the error over polynomials $p(t)$, i.e.,
$$\|\operatorname{vec}(X_k) - \operatorname{vec}(\widehat X)\|_G = \min_{p \in P_k,\, p(0) = 1} \|p(G) d_0\|_G, \qquad (16)$$

where $p(t) = 1 - t q(t)$. In order to obtain an upper bound for (16), we assume that $\phi_1, \phi_2, \ldots, \phi_{r_G}$ are orthonormal eigenvectors of $G$ associated with the nonzero eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{r_G}$. Then the initial error $d_0$ can be expressed as
$$d_0 = \sum_{i=1}^{r_G} \xi_i \phi_i,$$
due to the fact that $d_0 \in R(G)$. Substituting this into the right-hand side of (16) yields
$$\|p(G) d_0\|_G^2 = \Big\|p(G)\sum_{i=1}^{r_G} \xi_i \phi_i\Big\|_G^2 = \sum_{i=1}^{r_G} \lambda_i p(\lambda_i)^2 \xi_i^2 \le \max_i p(\lambda_i)^2\, \|d_0\|_G^2 \le \max_{\lambda \in [2\sigma_{r_A}^2\delta_{r_B}^2,\, 2\sigma_1^2\delta_1^2]} p(\lambda)^2\, \|d_0\|_G^2.$$
Therefore, we have
$$\|\operatorname{vec}(X_k) - \operatorname{vec}(\widehat X)\|_G \le \min_{p \in P_k,\, p(0) = 1}\ \max_{\lambda \in [2\sigma_{r_A}^2\delta_{r_B}^2,\, 2\sigma_1^2\delta_1^2]} |p(\lambda)|\, \|d_0\|_G.$$

With a derivation analogous to that of the error bound for the CG method in [12], the main result of this section can be formulated as follows.

Theorem 4. For an initial symmetric matrix $X_0 \in S$, where $S$ is the subspace defined in Theorem 1, the matrix $X_k$ generated by $k$ steps of Algorithm I and the solution $\widehat X$ of Problem I satisfy
$$\|\operatorname{vec}(X_k) - \operatorname{vec}(\widehat X)\|_G \le 2\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^k \|\operatorname{vec}(X_0) - \operatorname{vec}(\widehat X)\|_G, \qquad (17)$$
in which $\kappa = \dfrac{\sigma_1^2 \delta_1^2}{\sigma_{r_A}^2 \delta_{r_B}^2}$ is the spectral condition number.
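Since $\kappa$ involves only the extreme nonzero singular values of $A$ and $B$, the bound (17) can be evaluated cheaply before iterating. The following MATLAB helper is a sketch of ours (its name, the rank tolerance, and the exponent-2 reading of the garbled printed formula are our assumptions):

```matlab
function [kappa, rate] = bound_kappa(A, B)
% Condition number kappa of Theorem 4 and the contraction factor in (17),
% computed from the nonzero singular values of A and B (a sketch).
sA = svd(A); sA = sA(sA > 1e-12*max(sA));   % drop (numerically) zero ones
sB = svd(B); sB = sB(sB > 1e-12*max(sB));
kappa = (sA(1)*sB(1) / (sA(end)*sB(end)))^2;
rate  = (sqrt(kappa) - 1) / (sqrt(kappa) + 1);
end
```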

For an initial matrix $X_0 \in S$, by Algorithm I and Theorem 1, the approximation $X_k$ at the $k$th step and the exact solution $\widehat X$ also belong to the subspace $S$. Hence, the error bound (17) can be rewritten in matrix form as
$$\|A(X_k - \widehat X)B\| \le 2\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^k \|A(X_0 - \widehat X)B\|.$$
As observed in [13,14], the minimal residual method converges fast if $\kappa \approx 1$. In general, however, the spectral condition number may be seriously amplified, and the convergence becomes very slow when $\kappa \gg 1$.

5. Examples for the minimal residual method

In this section, we give some numerical examples to illustrate our results. All the tests are performed using MATLAB 6.5, which has a machine precision of around $10^{-16}$. Because of rounding errors, the iteration will not stop within finitely many steps, so we regard the approximate solution $X_k$ as the solution of Problem I once the corresponding residual satisfies $\|R_k\| \le 1.0\mathrm{e}{-}010$.

Example 5.1. Let
$$A = \begin{pmatrix} \operatorname{zeros}(4) & \operatorname{zeros}(4) \\ \operatorname{hankel}(1{:}4) & \operatorname{ones}(4) \end{pmatrix}, \quad B = \begin{pmatrix} \operatorname{toeplitz}(1{:}4) & \operatorname{ones}(4) \\ \operatorname{zeros}(4) & \operatorname{ones}(4) \end{pmatrix}, \quad DC = \begin{pmatrix} \operatorname{pascal}(4) & \operatorname{zeros}(4) \\ \operatorname{zeros}(4) & \operatorname{zeros}(4) \end{pmatrix},$$

where pascal(n) denotes the $n$th order Pascal matrix, toeplitz(1:n) and hankel(1:n) denote the $n$th order Toeplitz and Hankel matrices whose first rows are $(1, 2, \ldots, n)$, and ones(n) and zeros(n) denote the $n \times n$ matrices all of whose elements are one and zero respectively. We take $C = AXB + DC$, where $X = \operatorname{ones}(8)$. Then we can show theoretically that $X$ is a least-squares symmetric solution of Eq. (1), by the fact that $A^T \, DC \, B^T = 0$. Choosing the initial matrix $X_0 = \operatorname{zeros}(8)$, we obtain the following approximate solution at the 32nd step of Algorithm I:

$$X_{32} = \begin{pmatrix}
0.8516 & 0.9629 & 0.9536 & 0.9420 & 1.1484 & 1.1484 & 1.1484 & 1.1484 \\
0.9629 & 0.9907 & 0.9884 & 0.9855 & 1.0371 & 1.0371 & 1.0371 & 1.0371 \\
0.9536 & 0.9884 & 0.9855 & 0.9819 & 1.0464 & 1.0464 & 1.0464 & 1.0464 \\
0.9420 & 0.9855 & 0.9819 & 0.9774 & 1.0580 & 1.0580 & 1.0580 & 1.0580 \\
1.1484 & 1.0371 & 1.0464 & 1.0580 & 0.8516 & 0.8516 & 0.8516 & 0.8516 \\
1.1484 & 1.0371 & 1.0464 & 1.0580 & 0.8516 & 0.8516 & 0.8516 & 0.8516 \\
1.1484 & 1.0371 & 1.0464 & 1.0580 & 0.8516 & 0.8516 & 0.8516 & 0.8516 \\
1.1484 & 1.0371 & 1.0464 & 1.0580 & 0.8516 & 0.8516 & 0.8516 & 0.8516
\end{pmatrix}.$$
By concrete computations, we have
$$\|R_{32}\| = 5.7703\mathrm{e}{-}011 \quad\text{and}\quad \|A X_{32} B - C\| = \|AXB - C\| = 26.4008,$$
and we can further verify that
$$\|X_{32}\| = 7.9610 < \|X\| = 8.0000.$$
If we choose the initial matrix $X_0 = A^T A I_8 B B^T + B B^T I_8 A^T A$, where $I_8$ denotes the identity matrix of order 8, Algorithm I yields the same approximate solution at the 27th step, i.e., $X_{27}$ equals the matrix $X_{32}$ displayed above, with
$$\|R_{27}\| = 5.7703\mathrm{e}{-}011.$$
If instead we let the initial matrix be $X_0 = I_8$, noting that there does not exist a matrix $H \in SR^{n\times n}$ such that $I_8 = A^T A H B B^T + B B^T H A^T A$, then we have
$$\widetilde X_{31} = \begin{pmatrix}
1.0398 & 1.0099 & 1.0124 & 1.0155 & 0.9602 & 0.9602 & 0.9602 & 0.9602 \\
1.0099 & 1.0025 & 1.0031 & 1.0039 & 0.9901 & 0.9901 & 0.9901 & 0.9901 \\
1.0124 & 1.0031 & 1.0039 & 1.0049 & 0.9876 & 0.9876 & 0.9876 & 0.9876 \\
1.0155 & 1.0039 & 1.0049 & 1.0061 & 0.9845 & 0.9845 & 0.9845 & 0.9845 \\
0.9602 & 0.9901 & 0.9876 & 0.9845 & 1.7898 & 0.7898 & 0.7898 & 0.7898 \\
0.9602 & 0.9901 & 0.9876 & 0.9845 & 0.7898 & 1.7898 & 0.7898 & 0.7898 \\
0.9602 & 0.9901 & 0.9876 & 0.9845 & 0.7898 & 0.7898 & 1.7898 & 0.7898 \\
0.9602 & 0.9901 & 0.9876 & 0.9845 & 0.7898 & 0.7898 & 0.7898 & 1.7898
\end{pmatrix}.$$
It is easy to verify that
$$\|R_{31}\| = 4.2249\mathrm{e}{-}011, \quad \|A \widetilde X_{31} B - C\| = 26.4008 \quad\text{and}\quad \|X_{32}\| = 7.9610 < \|\widetilde X_{31}\| = 8.2084.$$
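This example can be reproduced by feeding the matrices above to the minres_axb sketch from Section 2 (again, minres_axb and the tolerance are our own choices, not the authors'):

```matlab
% Reproducing Example 5.1 with the minres_axb sketch from Section 2.
A  = [zeros(4) zeros(4); hankel(1:4) ones(4)];
B  = [toeplitz(1:4) ones(4); zeros(4) ones(4)];
DC = [pascal(4)  zeros(4); zeros(4) zeros(4)];
X  = ones(8);
C  = A*X*B + DC;                 % A'*DC*B' = 0, so X solves the LS problem
[Xk, res] = minres_axb(A, B, C, zeros(8), 1.0e-10, 500);
norm(A*Xk*B - C, 'fro')          % expected: 26.4008
norm(Xk, 'fro')                  % expected: about 7.9610 < 8 = ||X||
```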

[Fig. 1. The comparison of residual norm between these two algorithms (residual norm versus iteration number, 0–50; curves: Algorithm 2.1 and Algorithm I).]


Example 5.1 shows that the computed results accord with the theory established in this paper. The following example verifies that the minimal residual method converges more smoothly than the method presented in [9].

Example 5.2. Suppose that the matrices $A$, $B$ and $C$ are given by Example 3.1 in [9], and let the initial matrix $X_0$ be the zero matrix of suitable size. Applying Algorithm I and Algorithm 2.1 in [9] respectively, we obtain the results shown in Figs. 1 and 2. These figures show clearly that the minimal residual algorithm converges smoothly; the residual norm at the 19th step is
$$\|R_{19}\| = 4.8746\mathrm{e}{-}011 \quad\text{and}\quad \|A X_{19} B - C\| = 179.0445,$$
which means that Algorithm I needs far fewer iteration steps than Algorithm 2.1 for the same precision requirement.

[Fig. 2. The comparison of residual norm of normal Eq. (2) between these two algorithms (residual norm of the normal equation, scale ×10^4, versus iteration number, 0–50; curves: Algorithm 2.1 and Algorithm I).]

The last example tests the convergence behavior for different condition numbers $\kappa$ at roughly the same computational cost per iteration.

Example 5.3. In this example we take $A$ to be a real $n \times n$ block-diagonal matrix, where $n = 2l + 6$, whose first $l$ blocks are of the form
$$\begin{pmatrix} a_i & b_i \\ b_i & a_i \end{pmatrix}$$
and whose last two blocks are
$$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Therefore, the nonzero eigenvalues of $A^T A$ are $(a_1 + b_1)^2, (a_1 - b_1)^2, \ldots, (a_l + b_l)^2, (a_l - b_l)^2$ and 1. We take $B$ to be a real $n \times n$ block-diagonal matrix built from the $3 \times 3$ block
$$\begin{pmatrix} e & d & 0 \\ d & e & d \\ 0 & d & e \end{pmatrix}.$$


[Fig. 3. Test Algorithm I for different condition number (residual norm, logarithmic scale from 10^-12 to 10^4, versus iteration number, 0–50; curves: cases (1), (2) and (3)).]

[Fig. 4. Test Algorithm 2.1 for different condition number (residual norm, logarithmic scale, versus iteration number, 0–50; curves: cases (1), (2) and (3)).]

In order to simplify the discussion, we only consider two cases, namely $d = \sqrt{2}$, $e = 0$ and $d = \sqrt{2}$, $e = 2$; the corresponding nonzero eigenvalues of $B B^T$ are then 4, and 4 and 16, respectively.
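For completeness, here is one way the test matrices of this example could be assembled in MATLAB. The helper name build_example53 is ours, and the code assumes the reconstruction above (both cases use $d = \sqrt{2}$, and the two fixed $3\times 3$ blocks of $A$ are as displayed); $n = 2l + 6$ must be divisible by 3 for the block structure of $B$, which holds for $l = 45$.

```matlab
function [A, B] = build_example53(a, b, d, e)
% Block-diagonal test matrices of Example 5.3 (a sketch, not the
% authors' code): l blocks [a_i b_i; b_i a_i], two fixed 3x3 blocks,
% and B built from copies of the tridiagonal block [e d 0; d e d; 0 d e].
l = numel(a);
A = zeros(0, 0);
for i = 1:l
    A = blkdiag(A, [a(i) b(i); b(i) a(i)]);
end
A = blkdiag(A, [0 1 0; 0 0 1; 0 0 0], [0 0 1; 0 0 0; 0 0 0]);
n = 2*l + 6;                      % must be divisible by 3 (true for l = 45)
B = kron(eye(n/3), [e d 0; d e d; 0 d e]);
end
```

For instance, case (1) below would be obtained with

```matlab
l = 45; i = 1:l;
[A, B] = build_example53(sqrt(i), 2*sqrt(i), sqrt(2), 0);   % kappa = 9*l
```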


e B, then minX 2SRnn kAXB  Ck ¼ 0. We intend to test Algorithm I for the following three cases respecC ¼ AX tively with the initial matrix X0 = zeros(n). pffi (1) ai ¼ i, bi = 2ai (1 6 i 6 l) and d = 2, e = 0, then j = 9l; (2) ai = 2, bi = 2ai (1 6 i 6 l) and d = 2, pffiffieffi = 0, then j = 36; (3) ai = 2, bi = 2ai (1 6 i 6 l) and d ¼ 2, e = 2, then j = 144. In cases (1) and (2), the condition number is only dependent on the singular values of A, but in case (3) is determined by that of A and B simultaneously. In Fig. 3, we present the results obtained by applying Algorithm I with l = 45, and the results show that the convergence becomes very slow for the great condition number. By applying Algorithm 2.1 in [9] again for these cases, we obtain the results shown in Fig. 4, which shows that the convergence behavior of Algorithm 2.1 is also correlative with the condition number defined in Theorem 4, but comparing with Fig. 3, we observe that the convergence of Algorithm 2.1 is more slow and presents the irregular behavior. 6. Concluding remarks In this paper, we have constructed a minimal residual algorithm, i.e., Algorithm I, for solving the nearness problem associated with the inconsistent matrix equation AXB = C for unknown symmetric matrix X. By choosing the appropriate initial matrix, we can obtain the least-squares symmetric solution of equation AXB = C with minimum norm within at most nðnþ1Þ iteration steps in exact arithmetic. 2 We have shown that the approximate solution Xk, generated by Algorithm I at the kth step, minimizes the residual norm kAXB  Ck in an affine subspace. The minimization property means that the sequence of residual norm is monotonically decreasing, and ensures that Algorithm I converges smoothly. We have derived an error bound analogous with that of the classic CG method, which indicates that the convergence behavior of Algorithm I is correlative with the condition number defined in Theorem 4. Finally, several numerical examples were given to support the theories established in this paper. Algorithm I is a modification of Algorithm 2.1 in [9], and compared with Algorithm 2.1, exhibits a more smooth and fast convergence behavior with the roughly same computational cost per iteration. The fact has been confirmed through the numerical experiments. In this paper, we have made an attempt to solve the nearness problem associated with Eq. (1) by borrowing the idea of the classic CG method, but some problems need to be further considered. Theorem 4 and Example 3 have shown that the convergence of Algorithm I becomes very slow when the condition number j 1. Hence, how to apply the appropriate precondition techniques for Algorithm I or construct a more efficient algorithm will be the emphases of our work. Acknowledgement The work was supported in part by Natural Science Fundation of Hunan Province (03JJY6028). References [1] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991. [2] N.J. Higham, Matrix nearness problems and applications, in: M.J.C. Gover, S. Barnett (Eds.), Applications of Matrix Theory, Oxford University Press, 1989, pp. 1–27. [3] M.I. Friswell, J.E. Mottershead, Finite Element Model Updating in Structural Dynamics, Kluwer Academic Publication, 1995. [4] J.-H. Zhang, Modeling of Dynamical Systems, National Defence and Industry Press, Beijing, 2000 (in Chinese). [5] A.-P. Liao, Z.-Z. Bai, Y. Lei, Best approximation solution of matrix equation AXB + CYD = E, SIAM J. Matrix Anal. Appl. 27 (2005) 675–688. [6] S.Y. Chen, M.S. 
Ju, Y.G. Tsuei, Estimation of system matrices by dynamic condensation and application to structural modification, AIAA J. 3 (2) (1995) 199–204. [7] C.-F. Kong, H. Dai, Updating of the viscous damping matrix from incomplete modal experimental data, J. Vib. Eng. 18:1 (2005) 85–90 (in Chinese). [8] A.-P. Liao, Y. Lei, Optimal approximate solution of the matrix equation AXB = C over symmetric matrices, J. Comput. Math., in press.

Y. Lei, A.-p. Liao / Applied Mathematics and Computation 188 (2007) 499–513

513

[9] Z.-Y. Peng, An iteration method for the least squares symmetric solution of the linear matrix equation AXB = C, Appl. Math. Comput. 170 (2005) 711–723. [10] Y.-X. Peng, X.-Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput. 160 (2005) 763–777. [11] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985. [12] Y. Saad, Iterative methods for sparse linear systems, PWS, Boston, 1996. [13] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, Baltimore and London, 1996. [14] O. Axelsson, Iterative Solution Methods, Cambridge University Press, 1996.