Applied Mathematics and Computation 259 (2015) 212–219
The least squares anti-bisymmetric solution and the optimal approximation solution for Sylvester equation

Li-Ying Hu*, Gong-De Guo, Chang-Feng Ma

School of Mathematics and Computer Science, Fujian Normal University, Fuzhou 350007, PR China
Keywords: Sylvester equation; Modified conjugate gradient iterative method; Anti-bisymmetric matrix; Least squares solution; Optimal approximation

Abstract. In this paper, a modified conjugate gradient iterative method for solving the Sylvester equation is presented. By this iterative method, the least squares anti-bisymmetric solution and the optimal approximation solution can be obtained. We present the derivation and a theoretical analysis of the iterative method. Numerical results illustrate its feasibility and effectiveness. © 2015 Elsevier Inc. All rights reserved.
1. Introduction

As is well known, the Sylvester matrix equation is one of the most important matrix equations in matrix algebra. Its study is of great importance in control theory and applications, such as pole assignment, observer design and the construction of Lyapunov functions. A large number of papers have been devoted to solving various types of Sylvester matrix equations [1-20]. In [21], Don presented the general symmetric solution of the matrix equation AX = B by applying a formula for the partitioned minimum-norm reflexive generalized inverse. In [22], the symmetric solution of linear matrix equations of the form AX = B was considered using the singular value, generalized singular value, real Schur, and real generalized Schur decompositions. Following these works, further results were obtained on solvability conditions, solution formulas and least squares approximate solutions for more general linear matrix equations. For example, Liao and Bai [23] studied the symmetric positive semidefinite solution of the matrix equation AX_1 A^T + BX_2 B^T = C and the bisymmetric positive semidefinite solution of the matrix equation D^T XD = C. In [24] they considered the least squares solution of AXB = D with respect to a symmetric positive semidefinite matrix X. In [25], together with Lei, they presented an algorithm for finding the best approximate solution of the matrix equation AXB + CYD = E. Wang [26] considered the bisymmetric solution of the real quaternion matrix equation AX = B. In [27], Zhao et al. obtained the bisymmetric least squares solution of the matrix equation AX = B under a central principal submatrix constraint. In [28], Sheng et al. gave conditions for the solvability of the inverse problem of anti-bisymmetric matrices. The anti-bisymmetric solution of the matrix equation AX = B was derived by the generalized singular value decomposition in [29]. In [30], Li et al.
presented the least squares anti-bisymmetric solution of the matrix equation AX = B by a special transformation and an approximation treatment; their iterative method was based on the conjugate gradient method. Following the approach of [30], we present an iterative method for computing the least squares anti-bisymmetric solution of the Sylvester matrix equation AX + XB = C.
* Corresponding author. E-mail address: [email protected] (L.-Y. Hu).
http://dx.doi.org/10.1016/j.amc.2015.02.056
0096-3003/© 2015 Elsevier Inc. All rights reserved.
In this paper, we mainly consider the following problems.

Problem I. Given n × n real matrices A, B and C, find an n × n anti-bisymmetric matrix X such that

||AX + XB − C|| = min.

Problem II. Given an n × n real matrix X*, find X̂ ∈ S_E such that

||X̂ − X*|| = min_{X ∈ S_E} ||X − X*||,
where ||·|| is the Frobenius norm and S_E is the solution set of Problem I.

The organization of this paper is as follows. In Section 2, we derive an equivalent transformation of Problem I. In Section 3, we present an iterative method for solving Problem I. In Section 4, we obtain the solution of Problem II using the iterative method proposed in Section 3. Numerical results are given in Section 5 to show the feasibility and effectiveness of our iterative method. Finally, in Section 6, we draw a brief conclusion and make some remarks.

The notations and symbols used in this paper are summarized as follows. Let R^{n×n} and BASR^{n×n} be the set of n × n real matrices and the set of n × n real anti-bisymmetric matrices, respectively. The symbols A^T, r(A), R(A) and tr(A) stand for the transpose, rank, column space and trace of the matrix A, respectively. <A, B> = tr(B^T A) is the inner product of two matrices, which generates the Frobenius norm, i.e. ||A||_F^2 = <A, A> = tr(A^T A). The Kronecker product of two matrices A and B is denoted by A ⊗ B. The vec operator is denoted by vec(·), with vec(A) = (a_1^T, a_2^T, …, a_n^T)^T, where a_k is the kth column of A. I_n and e_j denote the identity matrix of order n and the jth column of I_n, respectively. The n × n reverse unit matrix is S = (e_n, e_{n−1}, …, e_1); it is simple to verify that S^2 = I_n and S = S^T = S^{−1}. An n × n matrix X is called anti-bisymmetric if X = −X^T and X = SXS.

2. The equivalent transformation of Problem I

In this section, we first review two lemmas which will be used in what follows.

Lemma 2.1. If the matrix X ∈ R^{n×n} is anti-symmetric (X = −X^T), then X + SXS ∈ BASR^{n×n}.

Lemma 2.2 [31]. Suppose that A ∈ R^{m×n}, x ∈ R^{n×1}, b ∈ R^{m×1}. Then finding the least squares solution of the linear equation Ax = b is equivalent to finding the solution of the linear equation A^T Ax = A^T b.

For the convenience of discussion, we introduce the following notation:
F(X) = (A^T A + BB^T)X + X(A^T A + BB^T) + (AXB^T + A^T XB) − (AXB^T + A^T XB)^T
       + S[(A^T A + BB^T)X + X(A^T A + BB^T)]S + S[(AXB^T + A^T XB) − (AXB^T + A^T XB)^T]S,

H = A^T C + CB^T − (A^T C + CB^T)^T + S[A^T C + CB^T − (A^T C + CB^T)^T]S.

Obviously, H ∈ BASR^{n×n}.

Theorem 2.1. Problem I is equivalent to finding the anti-bisymmetric solution of the matrix equation
F(X) = H,    (2.1)
and it is always consistent.

Proof. According to the definition of an anti-bisymmetric matrix and the properties of the Frobenius norm, for X ∈ BASR^{n×n} we have ||AX + XB − C|| = ||XA^T + B^T X + C^T|| = ||ASXS + SXSB − C|| = ||SXSA^T + B^T SXS + C^T||, so that

min_{X ∈ BASR^{n×n}} ||AX + XB − C||^2
= (1/4) min_{X ∈ BASR^{n×n}} ( ||AX + XB − C||^2 + ||XA^T + B^T X + C^T||^2 + ||ASXS + SXSB − C||^2 + ||SXSA^T + B^T SXS + C^T||^2 )
= (1/4) min_{X ∈ BASR^{n×n}} || [AX + XB; XA^T + B^T X; ASXS + SXSB; SXSA^T + B^T SXS] − [C; −C^T; C; −C^T] ||^2.    (2.2)

Using the vec operator, we see that Problem I is equivalent to the following minimum residual problem
min_{X ∈ BASR^{n×n}} || [ I ⊗ A + B^T ⊗ I;  A ⊗ I + I ⊗ B^T;  S ⊗ (AS) + (B^T S) ⊗ S;  (AS) ⊗ S + S ⊗ (B^T S) ] vec(X) − [ vec(C); −vec(C^T); vec(C); −vec(C^T) ] ||,    (2.3)

where the semicolons separate the four block rows.
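The passage to (2.3) rests on the identity vec(MXN) = (N^T ⊗ M) vec(X) for the column-stacking vec operator. The following NumPy sketch is our own sanity check (not part of the paper) of the first and third block rows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, X = (rng.standard_normal((n, n)) for _ in range(3))
S = np.fliplr(np.eye(n))          # reverse unit matrix S = (e_n, ..., e_1)
I = np.eye(n)

vec = lambda M: M.flatten("F")    # column-stacking vec operator

# vec(AX + XB) = (I x A + B^T x I) vec(X)
lhs1 = vec(A @ X + X @ B)
rhs1 = (np.kron(I, A) + np.kron(B.T, I)) @ vec(X)
print(np.allclose(lhs1, rhs1))    # True

# vec(ASXS + SXSB) = (S x (AS) + (B^T S) x S) vec(X)
lhs2 = vec(A @ S @ X @ S + S @ X @ S @ B)
rhs2 = (np.kron(S, A @ S) + np.kron(B.T @ S, S)) @ vec(X)
print(np.allclose(lhs2, rhs2))    # True
```

The other two block rows follow the same pattern with the roles of the left and right factors exchanged.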
By Lemma 2.2, we know (2.3) is equivalent to finding the solution of the following linear equation
[ I ⊗ A + B^T ⊗ I;  A ⊗ I + I ⊗ B^T;  S ⊗ (AS) + (B^T S) ⊗ S;  (AS) ⊗ S + S ⊗ (B^T S) ]^T [ I ⊗ A + B^T ⊗ I;  A ⊗ I + I ⊗ B^T;  S ⊗ (AS) + (B^T S) ⊗ S;  (AS) ⊗ S + S ⊗ (B^T S) ] vec(X)
= [ I ⊗ A + B^T ⊗ I;  A ⊗ I + I ⊗ B^T;  S ⊗ (AS) + (B^T S) ⊗ S;  (AS) ⊗ S + S ⊗ (B^T S) ]^T [ vec(C); −vec(C^T); vec(C); −vec(C^T) ].    (2.4)
By direct calculations we have
[ I ⊗ (A^T A) + B ⊗ A + B^T ⊗ A^T + (BB^T) ⊗ I + (A^T A) ⊗ I + A ⊗ B + A^T ⊗ B^T + I ⊗ (BB^T)
  + I ⊗ (SA^T AS) + (SBS) ⊗ (SAS) + (SB^T S) ⊗ (SA^T S) + (SBB^T S) ⊗ I + (SA^T AS) ⊗ I + (SAS) ⊗ (SBS) + (SA^T S) ⊗ (SB^T S) + I ⊗ (SBB^T S) ] vec(X)
= (I ⊗ A^T + B ⊗ I) vec(C) − (A^T ⊗ I + I ⊗ B) vec(C^T) + (S ⊗ (SA^T) + (SB) ⊗ S) vec(C) − ((SA^T) ⊗ S + S ⊗ (SB)) vec(C^T).    (2.5)
Hence, it holds that
vec( A^T AX + AXB^T + A^T XB + XBB^T + XA^T A + BXA^T + B^T XA + BB^T X + SA^T ASX + (SAS)X(SB^T S) + (SA^T S)X(SBS) + XSBB^T S + XSA^T AS + (SBS)X(SA^T S) + (SB^T S)X(SAS) + SBB^T SX )
= vec( (A^T A + BB^T)X + X(A^T A + BB^T) + AXB^T + A^T XB + BXA^T + B^T XA + S[ (A^T A + BB^T)X + X(A^T A + BB^T) + AXB^T + A^T XB + BXA^T + B^T XA ]S )
= vec( A^T C + CB^T − C^T A − BC^T + S(A^T C + CB^T − C^T A − BC^T)S ),    (2.6)

where the second equality uses X = SXS for X ∈ BASR^{n×n}.
It follows from the above equation, together with X = −X^T, that vec(F(X)) = vec(H). So Problem I is equivalent to finding the anti-bisymmetric solution of the matrix Eq. (2.1). Since Eq. (2.4) is a normal equation, it is consistent, and it follows that the matrix Eq. (2.1) is consistent.

Next, we prove that the matrix Eq. (2.1) always has an anti-bisymmetric solution. Suppose that X̃ is a solution of matrix Eq. (2.1) (not necessarily anti-bisymmetric); then

F(X̃) = H.    (2.7)

Let M(X̃) = [X̃ − X̃^T + S(X̃ − X̃^T)S]/4. It is obvious that M(X̃) ∈ BASR^{n×n} by the definition of an anti-bisymmetric matrix and Lemma 2.1, and it is easy to prove that M(X̃) is a solution of the matrix Eq. (2.1). □
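The constructions of this section (the reverse unit matrix S, the operator F, the matrix H and the projection M(X̃)) are easy to exercise numerically. The following NumPy sketch is our own illustration (the helper names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A, B, C, Xt = (rng.standard_normal((n, n)) for _ in range(4))
S = np.fliplr(np.eye(n))                        # reverse unit matrix, S^2 = I

def F(X):
    # F(X) = T + S T S with T = (A^T A + B B^T)X + X(A^T A + B B^T) + K - K^T
    G = A.T @ A + B @ B.T
    K = A @ X @ B.T + A.T @ X @ B
    T = G @ X + X @ G + K - K.T
    return T + S @ T @ S

def make_H(C):
    # H = h + S h S with h the anti-symmetric part of 2(A^T C + C B^T)
    h = A.T @ C + C @ B.T
    h = h - h.T
    return h + S @ h @ S

def is_anti_bisymmetric(M):
    return np.allclose(M.T, -M) and np.allclose(S @ M @ S, M)

M = (Xt - Xt.T + S @ (Xt - Xt.T) @ S) / 4       # the projection M(X~)
print(is_anti_bisymmetric(M))                   # True
print(is_anti_bisymmetric(make_H(C)))           # True
print(is_anti_bisymmetric(F(M)))                # True
```

The last check confirms that F maps BASR^{n×n} into itself, which is what makes the iteration of the next section stay inside the constraint set.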
3. An iterative method for solving Problem I

Now, we introduce an iterative method for solving Problem I based on the modified conjugate gradient method.

Algorithm 3.1 (A modified conjugate gradient algorithm).

Step 0. Input matrices A, B, C ∈ R^{n×n}, a random matrix X̃_0 ∈ R^{n×n} and a tolerance ε; compute X_0 := X̃_0 − X̃_0^T + S(X̃_0 − X̃_0^T)S.
Step 1. Compute
    R_0 = C − AX_0 − X_0 B,  P_0 = H − F(X_0),  Q_0 = F(P_0),  k := 0.
Step 2. If ||R_k|| ≤ ε or ||P_k|| ≤ ε, stop; else set k := k + 1.
Step 3. Compute
    X_k = X_{k−1} + (||P_{k−1}||^2 / ||Q_{k−1}||^2) Q_{k−1},
    R_k = C − AX_k − X_k B,
    P_k = H − F(X_k) = P_{k−1} − (||P_{k−1}||^2 / ||Q_{k−1}||^2) F(Q_{k−1}),
    Q_k = F(P_k) − (<F(P_k), Q_{k−1}> / ||Q_{k−1}||^2) Q_{k−1}.
Step 4. Go to Step 2.

Similar to the literature [30], we have the following lemmas; the proofs are straightforward and will be omitted.

Lemma 3.1. Suppose that M, N ∈ R^{n×n} and L_1, L_2 ∈ BASR^{n×n}. Then

<L_1 M L_2, N> = <M, L_1 N L_2>.

Lemma 3.2. Suppose that X, Y ∈ R^{n×n}. Then (1) F(X − Y) = F(X) − F(Y); (2) <F(X), Y> = <X, F(Y)>.

Theorem 3.1. Suppose that X* is a solution of Problem I. Then, for any initial anti-bisymmetric matrix X_0, the matrices X_k, P_k and Q_k generated by Algorithm 3.1 satisfy

(1) <F(P_k), X* − X_k> = ||P_k||^2;  (2) <Q_k, X* − X_k> = ||P_k||^2  (k = 0, 1, 2, …).

Proof. (1) We have F(X*) = H because X* is a solution of Problem I. For k = 0, 1, 2, …, using Lemma 3.2, we have <F(P_k), X* − X_k> = <P_k, F(X* − X_k)> = <P_k, H − F(X_k)> = ||P_k||^2.

(2) We prove this conclusion by induction. When k = 0, we have
<Q_0, X* − X_0> = <F(P_0), X* − X_0> = ||P_0||^2.

Assume that the conclusion holds for k = l (l > 0), that is, <Q_l, X* − X_l> = ||P_l||^2. For k = l + 1, it follows that

<Q_{l+1}, X* − X_{l+1}> = <F(P_{l+1}) − (<F(P_{l+1}), Q_l> / ||Q_l||^2) Q_l, X* − X_{l+1}>
= <F(P_{l+1}), X* − X_{l+1}> − (<F(P_{l+1}), Q_l> / ||Q_l||^2) <Q_l, X* − X_{l+1}>
= ||P_{l+1}||^2 − (<F(P_{l+1}), Q_l> / ||Q_l||^2) <Q_l, X* − X_l − (||P_l||^2 / ||Q_l||^2) Q_l>
= ||P_{l+1}||^2 − (<F(P_{l+1}), Q_l> / ||Q_l||^2) (||P_l||^2 − ||P_l||^2) = ||P_{l+1}||^2.

By the principle of induction, we have <Q_k, X* − X_k> = ||P_k||^2 (k = 0, 1, 2, …). □
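The iteration is easy to state in code. The following Python/NumPy sketch is our own translation of Algorithm 3.1 (the paper's experiments used Matlab; the function names and the iteration cap are ours):

```python
import numpy as np

def solve_problem_i(A, B, C, X0t, eps=1e-10, max_iter=10_000):
    """Sketch of Algorithm 3.1 (modified conjugate gradient) for
    min ||AX + XB - C|| over anti-bisymmetric X."""
    n = A.shape[0]
    S = np.fliplr(np.eye(n))                     # reverse unit matrix

    def F(X):                                    # the operator F of Section 2
        G = A.T @ A + B @ B.T
        K = A @ X @ B.T + A.T @ X @ B
        T = G @ X + X @ G + K - K.T
        return T + S @ T @ S

    h = A.T @ C + C @ B.T                        # the matrix H of Section 2
    h = h - h.T
    H = h + S @ h @ S

    X = X0t - X0t.T + S @ (X0t - X0t.T) @ S      # Step 0: anti-bisymmetric X_0
    P = H - F(X)                                 # Step 1
    Q = F(P)
    for _ in range(max_iter):                    # Steps 2-4
        R = C - A @ X - X @ B
        if np.linalg.norm(R) <= eps or np.linalg.norm(P) <= eps:
            break
        X = X + (np.linalg.norm(P) / np.linalg.norm(Q)) ** 2 * Q
        P = H - F(X)
        FP = F(P)
        Q = FP - (np.sum(FP * Q) / np.linalg.norm(Q) ** 2) * Q
    return X
```

Note that starting from X0t = 0 gives X_0 = 0 = F(0), so by Theorem 3.3 below the returned matrix is then the least Frobenius norm anti-bisymmetric solution.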
For the sequences P_k, Q_k generated by Algorithm 3.1, Theorem 3.1 implies that if P_k ≠ 0, then Q_k ≠ 0. That is to say, if P_k ≠ 0, Algorithm 3.1 does not break down.

Theorem 3.2. For the matrices P_k, Q_k generated by Algorithm 3.1, if there exists a positive integer l such that P_k ≠ 0 (k = 0, 1, 2, …, l), then <P_i, P_j> = 0 and <Q_i, Q_j> = 0 (i ≠ j; i, j = 0, 1, 2, …, l).

Proof. Because <A, B> = <B, A>, we need only consider the case 0 ≤ i < j ≤ l. If l = 1, then
<P_0, P_1> = <P_0, P_0 − (||P_0||^2 / ||Q_0||^2) F(Q_0)> = ||P_0||^2 − (||P_0||^2 / ||Q_0||^2) <P_0, F(Q_0)>
= ||P_0||^2 − (||P_0||^2 / ||Q_0||^2) <F(P_0), Q_0> = 0,

<Q_0, Q_1> = <Q_0, F(P_1) − (<F(P_1), Q_0> / ||Q_0||^2) Q_0> = <Q_0, F(P_1)> − (<F(P_1), Q_0> / ||Q_0||^2) <Q_0, Q_0> = 0.
Suppose that the conclusion holds for l = s (s ≥ 1). Then, for l = s + 1, we have
<P_s, P_{s+1}> = <P_s, P_s − (||P_s||^2 / ||Q_s||^2) F(Q_s)> = ||P_s||^2 − (||P_s||^2 / ||Q_s||^2) <P_s, F(Q_s)>
= ||P_s||^2 − (||P_s||^2 / ||Q_s||^2) <F(P_s), Q_s>
= ||P_s||^2 − (||P_s||^2 / ||Q_s||^2) <Q_s + (<F(P_s), Q_{s−1}> / ||Q_{s−1}||^2) Q_{s−1}, Q_s>
= ||P_s||^2 − ||P_s||^2 − (||P_s||^2 / ||Q_s||^2)(<F(P_s), Q_{s−1}> / ||Q_{s−1}||^2) <Q_{s−1}, Q_s> = 0,

<Q_s, Q_{s+1}> = <Q_s, F(P_{s+1}) − (<F(P_{s+1}), Q_s> / ||Q_s||^2) Q_s> = <Q_s, F(P_{s+1})> − (<F(P_{s+1}), Q_s> / ||Q_s||^2) <Q_s, Q_s> = 0.
It is easy to show that <P_0, P_{s+1}> = 0. When j = 1, 2, …, s − 1, we have

<P_j, P_{s+1}> = <P_j, P_s − (||P_s||^2 / ||Q_s||^2) F(Q_s)> = <P_j, P_s> − (||P_s||^2 / ||Q_s||^2) <P_j, F(Q_s)>
= −(||P_s||^2 / ||Q_s||^2) <F(P_j), Q_s>
= −(||P_s||^2 / ||Q_s||^2) <Q_j + (<F(P_j), Q_{j−1}> / ||Q_{j−1}||^2) Q_{j−1}, Q_s> = 0.

If j = 0, 1, 2, …, s − 1, then

<Q_j, Q_{s+1}> = <Q_j, F(P_{s+1}) − (<F(P_{s+1}), Q_s> / ||Q_s||^2) Q_s> = <Q_j, F(P_{s+1})> − (<F(P_{s+1}), Q_s> / ||Q_s||^2) <Q_j, Q_s>
= <F(Q_j), P_{s+1}> = (||Q_j||^2 / ||P_j||^2) <P_j − P_{j+1}, P_{s+1}> = 0.
So the conclusion holds for l = s + 1. We have thus proved the theorem. □

Theorem 3.3. For any initial matrix X̃_0 ∈ R^{n×n}, we get an anti-bisymmetric matrix X_0 by Step 0 of Algorithm 3.1, and the sequence {X_k} generated by Algorithm 3.1 converges to an anti-bisymmetric solution of Problem I within finitely many iteration steps. Moreover, if the initial matrix is X_0 = F(U), where U is an arbitrary n × n real anti-bisymmetric matrix, then the solution obtained by Algorithm 3.1 is the least Frobenius norm anti-bisymmetric solution of Problem I.

The proof of Theorem 3.3 is quite similar to that of Theorem 4 in [30] and is therefore omitted.

4. The solution of Problem II

Given a matrix X* ∈ R^{n×n}, let X̄* = [X* − (X*)^T + S(X* − (X*)^T)S]/4; it is easy to verify that X̄* ∈ BASR^{n×n}. Let X̄ = X − X̄*, C̄ = C − AX̄* − X̄*B and

H̄ = A^T C̄ + C̄B^T − (A^T C̄ + C̄B^T)^T + S[A^T C̄ + C̄B^T − (A^T C̄ + C̄B^T)^T]S.

Then we have the following theorem.

Theorem 4.1. Let the initial matrix be X_0 = F(U), where U is an arbitrary anti-bisymmetric matrix of order n. Then Problem II is equivalent to the matrix equation

F(X − X̄*) = F(X̄) = H̄,    (4.1)
and it is always consistent.

Proof. For any X ∈ S_E, note that a symmetric matrix is orthogonal to an anti-symmetric matrix, and that an anti-bisymmetric matrix is orthogonal to any anti-symmetric matrix Y satisfying SYS = −Y. Hence we have

||X − X*||^2 = || X − (X* − (X*)^T)/2 − (X* + (X*)^T)/2 ||^2
= || X − (X* − (X*)^T)/2 ||^2 + || (X* + (X*)^T)/2 ||^2
= || X − [X* − (X*)^T + S(X* − (X*)^T)S]/4 − [X* − (X*)^T − S(X* − (X*)^T)S]/4 ||^2 + || (X* + (X*)^T)/2 ||^2
= || X − X̄* ||^2 + || [X* − (X*)^T − S(X* − (X*)^T)S]/4 ||^2 + || (X* + (X*)^T)/2 ||^2.

So Problem II is equivalent to the following problem:

min_{X ∈ S_E} || X − [X* − (X*)^T + S(X* − (X*)^T)S]/4 || = min_{X ∈ S_E} || X − X̄* ||.

We know that

AX + XB = C  ⟺  A(X − X̄*) + (X − X̄*)B = C − AX̄* − X̄*B  ⟺  AX̄ + X̄B = C̄.

So

min_{X ∈ S_E} ||AX + XB − C||  ⟺  min_{X ∈ S_E} ||A(X − X̄*) + (X − X̄*)B − (C − AX̄* − X̄*B)||  ⟺  min ||AX̄ + X̄B − C̄||.

Using Theorems 2.1 and 3.3, we complete the proof. □
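The orthogonal decomposition behind the proof can be checked numerically: for anti-bisymmetric X, the difference ||X − X*||^2 − ||X − X̄*||^2 is a constant independent of X. A small NumPy sketch of our own (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
S = np.fliplr(np.eye(n))
Xstar = rng.standard_normal((n, n))              # the given matrix X*

def random_abs():
    # a random anti-bisymmetric matrix (X = -X^T and X = SXS)
    W = rng.standard_normal((n, n))
    return W - W.T + S @ (W - W.T) @ S

skew = (Xstar - Xstar.T) / 2
Xbar_star = (skew + S @ skew @ S) / 2            # = [X* - X*^T + S(X* - X*^T)S]/4
const = (np.linalg.norm((Xstar + Xstar.T) / 2) ** 2
         + np.linalg.norm((skew - S @ skew @ S) / 2) ** 2)

for _ in range(3):
    X = random_abs()
    gap = np.linalg.norm(X - Xstar) ** 2 - np.linalg.norm(X - Xbar_star) ** 2
    print(np.isclose(gap, const))                # True each time
```

This is exactly why minimizing ||X − X*|| over S_E is the same as minimizing ||X − X̄*||.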
Let the initial matrix be X_0 = F(U), where U is an arbitrary n × n real anti-bisymmetric matrix. By Theorem 3.3, the solution obtained by Algorithm 3.1 is the least norm anti-bisymmetric solution of Problem I, and by Theorem 2.1 it is also the solution of the matrix equation F(X) = H. Then, taking C̄, H̄ and X̄ in place of C, H and X,
using Algorithm 3.1, we can obtain the least norm anti-bisymmetric solution X̄ of the matrix equation F(X̄) = H̄. According to Theorem 4.1 and X̄ = X − X̄*, we then obtain the solution of Problem II: X̂ = X̄ + X̄*.

5. Computational experiments

In this section, we report several numerical examples computing the least squares anti-bisymmetric solution and the optimal approximation solution of the matrix equation AX + XB = C. Our experiments were performed in Matlab 7.11 on an i3-2310M [email protected] GHz PC with 4 GB memory. Throughout, we set ε = 10^{−10}.
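Before the paper's examples, here is a self-contained driver of our own (in Python/NumPy rather than Matlab, and with randomly generated data instead of the matrices below) that reports the same quantities the examples tabulate, ||R_k|| and ||P_k||:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
S = np.fliplr(np.eye(n))
A, B, W = (rng.standard_normal((n, n)) for _ in range(3))
Xtrue = W - W.T + S @ (W - W.T) @ S       # planted anti-bisymmetric solution
C = A @ Xtrue + Xtrue @ B                 # consistent right-hand side
eps = 1e-10

def F(X):
    G = A.T @ A + B @ B.T
    K = A @ X @ B.T + A.T @ X @ B
    T = G @ X + X @ G + K - K.T
    return T + S @ T @ S

h = A.T @ C + C @ B.T
h = h - h.T
H = h + S @ h @ S

X = np.zeros((n, n))                      # X_0 = 0 = F(0): least norm start
P = H - F(X)
Q = F(P)
k = 0
while (np.linalg.norm(C - A @ X - X @ B) > eps
       and np.linalg.norm(P) > eps and k < 1000):
    X = X + (np.linalg.norm(P) / np.linalg.norm(Q)) ** 2 * Q
    P = H - F(X)
    FP = F(P)
    Q = FP - (np.sum(FP * Q) / np.linalg.norm(Q) ** 2) * Q
    k += 1

R_norm = np.linalg.norm(C - A @ X - X @ B)
P_norm = np.linalg.norm(H - F(X))
print(k, R_norm, P_norm)
```

Since the right-hand side is consistent here, both norms should be driven to essentially zero in a small number of iterations, mirroring the behaviour reported in Examples 5.1-5.3.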
Example 5.1. Consider the linear matrix equation AX + XB = C with

A = [ 3  1  2  1  1
      1  2  1  1  1
      3  1  4  2  4
      1  4  6  2  4
      1  1  3  5  1 ],

B = [ 2  4  6  1  1
      2  2  3  4  4
      5  6  5  0  1
      5  4  5  3  3
      1  1  1  1  1 ],

C = [  13.0740   46.7421  107.4750  112.0234  113.2612
       18.4270   24.0853   83.5018  118.3101    6.3182
       41.3453   60.5960    7.9701   55.5425   71.7410
       38.3454    1.5151    2.5653   79.1948   13.9751
      110.4849  146.3087  149.7608   88.8010   59.4637 ],

X̃_0 = [ 0.4904  0.5650  0.0821  0.5737  0.0634
        0.8530  0.6403  0.1057  0.0521  0.8604
        0.8739  0.4170  0.1420  0.9312  0.9344
        0.2703  0.2060  0.1665  0.7287  0.9844
        0.2085  0.9479  0.6210  0.7378  0.8589 ].

We apply Algorithm 3.1 with the initial random matrix X̃_0 to solve the equation. After 4 iterations, we obtained the following results:
||R_4|| = ||C − AX_4 − X_4B|| = 0.00014927,    ||P_4|| = ||H − F(X_4)|| = 1.8595 × 10^{−11}.
Example 5.2. Consider the linear matrix equation AX + XB = C with the same A, B and initial random matrix X̃_0 as in Example 5.1, and

C = [ 104  142  116  142   62
       54   90   26   34   98
      170  276  302  388   64
      130  170  212  382  110
       34  170  184  110   60 ].

After 4 iterations, we obtained the following results:

||R_4|| = ||C − AX_4 − X_4B|| = 787.3352,    ||P_4|| = ||H − F(X_4)|| = 2.4184 × 10^{−11}.
Example 5.3. Consider the linear matrix equation AX + XB = C with

A = [ 4  3  1  3  1  3  2
      3  2  3  4  3  2  2
      4  3  1  3  1  3  2
      3  1  3  1  3  2  1
      4  3  1  3  1  3  2
      3  1  3  1  3  2  1
      2  8  4  3  2  1  2 ],

B = [ 5  3  5  5  3  3  6
      3  9  3  2  4  6  7
      1  2  4  5  3  9  2
      2  1  2  5  8  2  4
      3  2  3  7  2  4  8
      2  6  9  1  5  8  4
      3  2  1  5  3  9  8 ],

C = [ 43   84   41   50    9   34   23
      21   22   34   87   55   42   43
      12    9    4   56   90   45    2
      20   38   98   43   33    6   88
      23  109   40   48   54   31   22
       1    2    3    1    2    3    4
       7    8    6    3   23    9   11 ].

The initial random matrix X̃_0 used to solve Problem I is

X̃_0 = [ 0.9234  0.2581  0.1174  0.8010  0.4588  0.6791  0.7962
        0.4302  0.4087  0.2967  0.0292  0.9631  0.3955  0.0987
        0.1848  0.5949  0.3188  0.9289  0.5468  0.3674  0.2619
        0.9049  0.2622  0.4242  0.7303  0.5211  0.9880  0.3354
        0.9797  0.6028  0.5079  0.4886  0.2316  0.0377  0.6797
        0.4389  0.7112  0.0855  0.5785  0.4889  0.8852  0.1366
        0.7212  0.1111  0.2217  0.2625  0.2373  0.6241  0.9133 ].

After 15 iterations, we obtained the following results:

||R_15|| = ||C − AX_15 − X_15B|| = 270.814,    ||P_15|| = ||H − F(X_15)|| = 8.737 × 10^{−11}.
Example 5.4. The given matrices A, B, C are the same as in Example 5.2. Given the matrix

X* = [  1   5   1  10   3
        3   1   9   2   5
        1   3   4   1   8
        3   3   4   2   3
       11   4   1   6   2 ],

find the solution of Problem II. Using Algorithm 3.1, after 4 iterations we obtained X̄_4 and then the solution of Problem II, X̂ = X̄_4 + X̄*. We finally obtained the following result:

min_{X ∈ S_E} ||X − X*|| = ||X̂ − X*|| = 57.2271.
Example 5.5. The given matrices A, B, C are the same as in Example 5.3. Given the matrix

X* = [ 0       0.7000  0.1000  3.4000  2.2000  0.7000  0
       0.7000  0       1.1000  0.7000  0.2000  0       0.6000
       0.1000  1.1000  0       1.4000  0       0.2000  2.2000
       3.4000  0.7000  1.4000  0       1.4000  0.7000  3.4000
       2.2000  0.2000  0       1.4000  0       1.1000  0.1000
       0.7000  0       0.2000  0.7000  1.1000  0       0.7000
       0       0.6000  2.2000  3.4000  0.1000  0.7000  0 ],

find the solution of Problem II. Using Algorithm 3.1, after 18 iterations we obtained X̄_18 and then the solution of Problem II, X̂ = X̄_18 + X̄*. We finally obtained the following result:
min_{X ∈ S_E} ||X − X*|| = ||X̂ − X*|| = 0.1960.

6. Conclusions

In this work, we have presented an iterative method for computing the least squares anti-bisymmetric solution and the optimal approximation solution of the matrix equation AX + XB = C. The method is based on the modified conjugate gradient method and makes use of a special transformation and an approximation treatment. We have shown that, under suitable conditions, the iteration sequence generated by the proposed algorithm converges to the desired solution.
Acknowledgements

We would like to thank the anonymous editors and reviewers for their helpful suggestions and comments. This work was supported by the National Natural Science Foundation of China (Grant Nos. 61175123, 11461046 and 11261012).

References

[1] G.H. Golub, S. Nash, C. Van Loan, A Hessenberg-Schur method for the problem AX + XB = C, IEEE Trans. Automat. Control AC-24 (1979) 909-913.
[2] R. Bitmead, H. Weiss, On the solution of the discrete-time Lyapunov matrix equation in controllable canonical form, IEEE Trans. Automat. Control AC-24 (1979) 481-482.
[3] R. Bitmead, Explicit solutions of the discrete-time Lyapunov matrix equation and Kalman-Yakubovich equations, IEEE Trans. Automat. Control AC-26 (1981) 1291-1294.
[4] T. Mori, A. Derese, A brief summary of the bounds on the solution of the algebraic matrix equations in control theory, Int. J. Control 39 (1984) 247-256.
[5] E.L. Wachspress, Iterative solution of the Lyapunov matrix equation, Appl. Math. Lett. 1 (1988) 87-90.
[6] G. Starke, W. Niethammer, SOR for AX − XB = C, Linear Algebra Appl. 154 (1991) 355-375.
[7] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2001) 603-626.
[8] O. Axelsson, Z.-Z. Bai, S.-X. Qiu, A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part, Numer. Algorithms 35 (2004) 351-372.
[9] Y.-B. Deng, Z.-Z. Bai, Y.-H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations, Numer. Linear Algebra Appl. 13 (2006) 801-823.
[10] Z.-Z. Bai, Several splittings for non-Hermitian linear systems, Sci. China A 51 (2008) 1339-1348.
[11] Z.-Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) 93-111.
[12] L. Xie, Y.-J. Liu, H.-Z. Yang, Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F, Appl. Math. Comput. 217 (2010) 2191-2199.
[13] Q. Niu, X. Wang, L.-Z. Lu, A relaxed gradient based algorithm for solving Sylvester equations, Asian J. Control 13 (2011) 461-464.
[14] Z.-Z. Bai, On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations, J. Comput. Math. 29 (2011) 185-198.
[15] X. Wang, W.-H. Wu, A finite iterative algorithm for solving the generalized (P,Q)-reflexive solution of the linear systems of matrix equations, Math. Comput. Model. 54 (2011) 2117-2131.
[16] X. Wang, D. Liao, The optimal convergence factor of the gradient based iterative algorithm for linear matrix equations, Filomat 26 (2012) 607-613.
[17] X. Wang, L. Dai, D. Liao, A modified gradient based algorithm for solving Sylvester equations, Appl. Math. Comput. 218 (2012) 5620-5628.
[18] X. Wang, Y. Li, L. Dai, On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB = C, Comput. Math. Appl. 65 (2013) 657-664.
[19] X. Wang, W.-W. Li, L.-Z. Mao, On positive-definite and skew-Hermitian splitting iteration methods for continuous Sylvester equation AX + XB = C, Comput. Math. Appl. 66 (2013) 2352-2361.
[20] Y.-G. Peng, X. Wang, A finite iterative algorithm for solving the least-norm generalized (P,Q)-reflexive solution of the matrix equations A_i X B_i = C_i, J. Comput. Anal. Appl. 17 (2014) 547-561.
[21] F.J.H. Don, On the symmetric solutions of a linear matrix equation, Linear Algebra Appl. 93 (1987) 1-7.
[22] K.W.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl. 119 (1989) 35-50.
[23] A.-P. Liao, Z.-Z. Bai, The constrained solutions of two matrix equations, Acta Math. Sin., Engl. Ser. 18 (2002) 671-678.
[24] A.-P. Liao, Z.-Z. Bai, Least-squares solution of AXB = D over symmetric positive semidefinite matrices X, J. Comput. Math. 21 (2003) 175-182.
[25] A.-P. Liao, Z.-Z. Bai, Y. Lei, Best approximate solution of matrix equation AXB + CYD = E, SIAM J. Matrix Anal. Appl. 27 (2005) 675-688.
[26] Q.-W. Wang, Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 641-650.
[27] L.-J. Zhao, X.-Y. Hu, L. Zhang, Least squares solutions to AX = B for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation, Linear Algebra Appl. 428 (2008) 871-880.
[28] Y.-P. Sheng, D.-X. Xie, The solvability conditions for the inverse problem of anti-bisymmetric matrices, J. Numer. Comput. Appl. 2 (2002) 111-120.
[29] X.-D. Zhang, Z.-N. Zhang, The anti-bisymmetric matrices optimal approximation solution of matrix equation AX = B, Acta Math. Appl. Sin. 32 (2009) 810-818.
[30] L. Li, X.-J. Yuan, H. Liu, An iterative method for the least squares anti-bisymmetric solution of the matrix equation AX = B, in: Intelligent Science and Intelligent Data Engineering, Lect. Notes Comput. Sci. 7202 (2012) 81-88.
[31] Y.-P. Cheng, K.-Y. Zhang, Z. Xu, Matrix Theory, Northwestern Polytechnical University Press, Xi'an, 2004.