On the Hermitian positive definite solutions of the nonlinear matrix equation $X^s + \sum_{i=1}^m A_i^* X^{-t_i} A_i = Q$




Applied Mathematics and Computation 243 (2014) 950–959



Aijing Liu $^{a,b}$, Guoliang Chen $^{b,*}$

$^a$ School of Mathematical Sciences, Qufu Normal University, Qufu 273165, Shandong, PR China
$^b$ Department of Mathematics, East China Normal University, Shanghai 200062, PR China

Keywords: Nonlinear matrix equations; Hermitian positive definite solutions; Iterative method; Maximal solution; Error estimation

Abstract. In this paper, the Hermitian positive definite solutions of the nonlinear matrix equation $X^s + \sum_{i=1}^m A_i^* X^{-t_i} A_i = Q$ are considered, where $Q$ is a Hermitian positive definite matrix, the $A_i$ are nonsingular complex matrices, $s$ is a positive number, $m$ a positive integer, and $0 < t_i \le 1$, $i = 1, \dots, m$. Necessary and sufficient conditions for the existence of Hermitian positive definite solutions are derived. A sufficient condition for the existence of a unique Hermitian positive definite solution is given. In addition, further necessary conditions and sufficient conditions for the existence of Hermitian positive definite solutions are presented. Finally, an iterative method is proposed to compute the maximal Hermitian positive definite solution, and numerical examples are given to show the efficiency of the proposed method. © 2014 Published by Elsevier Inc.

1. Introduction

We consider the nonlinear matrix equation
$$X^s + \sum_{i=1}^m A_i^* X^{-t_i} A_i = Q, \qquad (1.1)$$
where $Q$ is an $n \times n$ Hermitian positive definite matrix, the $A_i$ are $n \times n$ nonsingular complex matrices, $s$ is a positive number, and $0 < t_i \le 1$, $i = 1, \dots, m$. Here $A_i^*$ stands for the conjugate transpose of the matrix $A_i$. Nonlinear matrix equations of the form (1.1) have many applications in control theory, dynamic programming, ladder networks, stochastic filtering, statistics, and elsewhere. The solutions of practical interest are the Hermitian positive definite (HPD) solutions.

The existence of HPD solutions of Eq. (1.1) has been investigated in several special cases. The authors of [12,13,3,2] studied the matrix equation with $m = 1$, $s = 1$, $t = 1$. Hasanov [10,11] investigated the equation with $m = 1$, $s = 1$, $t \in (0,1]$. Peng et al. [17] proposed iterative methods for the extremal positive definite solutions of Eq. (1.1) for $m = 1$, $s = 1$ in the two cases $0 < t \le 1$ and $t \ge 1$. Cai and Chen [4,5] studied the equation for $m = 1$ in two settings: $s$ and $t$ positive integers, and $s \ge 1$, $0 < t \le 1$ or $0 < s \le 1$, $t \ge 1$, respectively. He and Long [16] considered the case $s = 1$, $t_i = 1$, $Q = I$, and Sarhan et al. [1] investigated the case $Q = I$.

Research supported by the Natural Science Foundation of China (No. 11071079), the Shanghai Natural Science Foundation (No. 09ZR1408700), and the Doctoral Scientific Research Starting Foundation of Qufu Normal University (No. bsqd20130141). Corresponding author: G. Chen, e-mail [email protected].

http://dx.doi.org/10.1016/j.amc.2014.05.090
0096-3003/© 2014 Published by Elsevier Inc.


In this paper, we study the HPD solutions of Eq. (1.1). The paper is organized as follows. In Section 2, we derive necessary and sufficient conditions for the existence of HPD solutions of Eq. (1.1) and give a sufficient condition for the existence of a unique HPD solution. We also present further necessary conditions and sufficient conditions for the existence of HPD solutions. In Section 3, we propose an iterative method for obtaining the maximal HPD solution of Eq. (1.1). Numerical examples in Section 4 show the efficiency of the proposed method, and conclusions are given in Section 5.

We start with some notation used throughout this paper. The symbol $\mathbb{C}^{m\times n}$ denotes the set of $m \times n$ complex matrices. We write $B > 0$ ($B \ge 0$) if the matrix $B$ is positive definite (semidefinite), and $B > C$ ($B \ge C$) if $B - C$ is positive definite (semidefinite). We use $\lambda_1(B)$ and $\lambda_n(B)$ to denote the maximal and minimal eigenvalues of a matrix $B$, and $\|B\|$ and $\|B\|_F$ to denote its spectral and Frobenius norms; $\|b\|$ denotes the $l_2$-norm of a vector $b$. We use $X_S$ and $X_L$ to denote the minimal and maximal HPD solutions of Eq. (1.1); that is, $X_S \le X \le X_L$ for any HPD solution $X$. The symbol $I$ denotes the $n \times n$ identity matrix, and $\rho(B)$ the spectral radius of $B$. Let $[B, C] = \{X \mid B \le X \le C\}$ and $(B, C) = \{X \mid B < X < C\}$. For matrices $B = (b_1, b_2, \dots, b_n) = (b_{ij})$ and $C$, $B \otimes C = (b_{ij} C)$ is the Kronecker product, and $\mathrm{vec}(B) = (b_1^T, b_2^T, \dots, b_n^T)^T$.

2. Solvability conditions and properties of the HPD solutions

In this section, we derive necessary and sufficient conditions for Eq. (1.1) to have an HPD solution, give a sufficient condition for the existence of a unique HPD solution, and present further necessary conditions and sufficient conditions for the existence of HPD solutions.

Lemma 2.1 [7]. If $A \ge B > 0$ (or $A > B > 0$), then $A^\alpha \ge B^\alpha > 0$ (or $A^\alpha > B^\alpha > 0$) for all $\alpha \in (0, 1]$, and $B^\alpha \ge A^\alpha > 0$ (or $B^\alpha > A^\alpha > 0$) for all $\alpha \in [-1, 0)$.
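The Kronecker product and vec operator defined above reappear in the contraction estimate of Theorem 2.4 through the identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$ (column-stacking vec). A quick numerical check of this identity with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, X, B = (rng.standard_normal((4, 4)) for _ in range(3))

def vec(M):
    # column-stacking vec, matching the paper's definition
    return M.flatten(order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```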

Lemma 2.2 [9]. Let $A$ and $B$ be positive operators on a Hilbert space $H$ such that $0 < m_1 I \le A \le M_1 I$, $0 < m_2 I \le B \le M_2 I$, and $0 < A \le B$. Then
$$A^t \le \left(\frac{M_1}{m_1}\right)^{t-1} B^t \quad \text{and} \quad A^t \le \left(\frac{M_2}{m_2}\right)^{t-1} B^t$$
hold for any $t \ge 1$.

Lemma 2.3 [14]. Let $f(x) = x^t(\eta - x^s)$ with $\eta > 0$ and $x \ge 0$. Then

(1) $f$ is increasing on $\left[0, \left(\frac{t\eta}{s+t}\right)^{1/s}\right]$ and decreasing on $\left[\left(\frac{t\eta}{s+t}\right)^{1/s}, +\infty\right)$;

(2) $f_{\max} = f\!\left(\left(\frac{t\eta}{s+t}\right)^{1/s}\right) = \frac{s}{s+t}\left(\frac{t}{s+t}\right)^{t/s} \eta^{\frac{t}{s}+1}$.
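Lemma 2.3 is used repeatedly below (in Theorems 2.5 and 2.6). A numerical sanity check of the claimed maximizer and maximum of $f(x) = x^t(\eta - x^s)$, with arbitrary illustrative parameters:

```python
import numpy as np

s, t, eta = 2.0, 0.2, 3.0                 # sample parameters with eta > 0
f = lambda x: x**t * (eta - x**s)

x_star = (t * eta / (s + t)) ** (1 / s)   # claimed maximizer
f_max = s / (s + t) * (t / (s + t)) ** (t / s) * eta ** (t / s + 1)

xs = np.linspace(1e-9, 5.0, 200001)
print(np.isclose(f(x_star), f_max))       # closed form agrees with f at x*
print(f(xs).max() <= f_max + 1e-9)        # no grid point exceeds the claimed max
```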

Lemma 2.4 [15]. If $C$ and $P$ are Hermitian matrices of the same order with $P > 0$, then $CPC + P^{-1} \ge 2C$.

Lemma 2.5 [8]. If $0 < h \le 1$, and $P$ and $Q$ are positive definite matrices of the same order with $P, Q \ge bI > 0$, then $\|P^h - Q^h\| \le h b^{h-1}\|P - Q\|$ and $\|P^{-h} - Q^{-h}\| \le h b^{-(h+1)}\|P - Q\|$. Here $\|\cdot\|$ stands for one kind of matrix norm.
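A numerical spot-check of the two perturbation bounds of Lemma 2.5 in the spectral norm, on randomly generated Hermitian positive definite matrices with $P, Q \ge bI$ (the helper `hpow` computes fractional powers by eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, b = 5, 0.5, 1.0

def hermitian_pd(n):
    # random Hermitian matrix with eigenvalues shifted above b
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (M + M.conj().T) / 2
    w = np.linalg.eigvalsh(H)
    return H + (b - w.min() + 0.1) * np.eye(n)

def hpow(H, p):
    # fractional power of a Hermitian positive definite matrix
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.conj().T

P, Q = hermitian_pd(n), hermitian_pd(n)
lhs1 = np.linalg.norm(hpow(P, h) - hpow(Q, h), 2)
rhs1 = h * b**(h - 1) * np.linalg.norm(P - Q, 2)
lhs2 = np.linalg.norm(hpow(P, -h) - hpow(Q, -h), 2)
rhs2 = h * b**(-(h + 1)) * np.linalg.norm(P - Q, 2)
print(lhs1 <= rhs1, lhs2 <= rhs2)  # True True
```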

Theorem 2.1. Eq. (1.1) has an HPD solution if and only if $A_i Q^{-1/2}$ admits the factorization
$$A_i Q^{-1/2} = (L^* L)^{\frac{t_i}{2s}} N_i, \quad i = 1, 2, \dots, m, \qquad (2.1)$$
where $L$ is a nonsingular matrix and $\begin{pmatrix} LQ^{-1/2} \\ N_1 \\ \vdots \\ N_m \end{pmatrix}$ is column-orthonormal.

Proof. If Eq. (1.1) has an HPD solution $X$, then $X^s > 0$. Let $X^s = L^* L$ be its Cholesky factorization, where $L$ is a nonsingular matrix. Then Eq. (1.1) can be rewritten as
$$Q^{-1/2} L^* L Q^{-1/2} + \sum_{i=1}^m Q^{-1/2} A_i^* (L^* L)^{-\frac{t_i}{2s}} (L^* L)^{-\frac{t_i}{2s}} A_i Q^{-1/2} = I. \qquad (2.2)$$
Let $N_i = (L^* L)^{-\frac{t_i}{2s}} A_i Q^{-1/2}$, $i = 1, 2, \dots, m$; then $A_i Q^{-1/2} = (L^* L)^{\frac{t_i}{2s}} N_i$, $i = 1, 2, \dots, m$, and (2.2) turns into
$$Q^{-1/2} L^* L Q^{-1/2} + N_1^* N_1 + \cdots + N_m^* N_m = I, \qquad (2.3)$$
i.e.,
$$\begin{pmatrix} LQ^{-1/2} \\ N_1 \\ \vdots \\ N_m \end{pmatrix}^{\!*} \begin{pmatrix} LQ^{-1/2} \\ N_1 \\ \vdots \\ N_m \end{pmatrix} = I,$$
which means that $\begin{pmatrix} LQ^{-1/2} \\ N_1 \\ \vdots \\ N_m \end{pmatrix}$ is column-orthonormal.

Conversely, if each $A_i Q^{-1/2}$ has a decomposition of the form (2.1), let $X = (L^* L)^{1/s}$. Then $X$ is an HPD matrix, and it follows from (2.1) and (2.3) that
$$X^s + \sum_{i=1}^m A_i^* X^{-t_i} A_i = L^* L + Q^{1/2} N_1^* N_1 Q^{1/2} + \cdots + Q^{1/2} N_m^* N_m Q^{1/2} = Q^{1/2}\left(Q^{-1/2} L^* L Q^{-1/2} + N_1^* N_1 + \cdots + N_m^* N_m\right) Q^{1/2} = Q.$$

Hence Eq. (1.1) has an HPD solution. $\square$

For $m = 2$, Eq. (1.1) becomes
$$X^s + A_1^* X^{-t_1} A_1 + A_2^* X^{-t_2} A_2 = Q. \qquad (2.4)$$

And we have the following theorem.

Theorem 2.2. Eq. (2.4) has an HPD solution if and only if there exist a unitary matrix $V \in \mathbb{C}^{n\times n}$, a column-orthonormal matrix $U = \begin{pmatrix} U_1 \\ U_2 \end{pmatrix} \in \mathbb{C}^{2n\times n}$, in which $U_1, U_2 \in \mathbb{C}^{n\times n}$, and diagonal matrices $C > 0$ and $S \ge 0$ with $C^2 + S^2 = I$, such that
$$A_1 = (Q^{1/2} V^* C^2 V Q^{1/2})^{\frac{t_1}{2s}}\, U_1 S V Q^{1/2}, \qquad (2.5)$$
$$A_2 = (Q^{1/2} V^* C^2 V Q^{1/2})^{\frac{t_2}{2s}}\, U_2 S V Q^{1/2}. \qquad (2.6)$$

Proof. If Eq. (2.4) has an HPD solution, we have by Theorem 2.1 that
$$A_1 Q^{-1/2} = (L^* L)^{\frac{t_1}{2s}} N_1, \qquad (2.7)$$
$$A_2 Q^{-1/2} = (L^* L)^{\frac{t_2}{2s}} N_2, \qquad (2.8)$$
and the matrix $\begin{pmatrix} LQ^{-1/2} \\ N_1 \\ N_2 \end{pmatrix}$ is column-orthonormal. According to the CS decomposition theorem (Theorem 3.8 in [6]), there exist unitary matrices $P = \begin{pmatrix} P_1 & 0 \\ 0 & P_2 \end{pmatrix} \in \mathbb{C}^{3n\times 3n}$ and $V \in \mathbb{C}^{n\times n}$, in which $P_1 \in \mathbb{C}^{n\times n}$ and $P_2 \in \mathbb{C}^{2n\times 2n}$, such that
$$\begin{pmatrix} P_1 & 0 \\ 0 & P_2 \end{pmatrix} \begin{pmatrix} LQ^{-1/2} \\ N_1 \\ N_2 \end{pmatrix} V^* = \begin{pmatrix} C \\ S \\ 0 \end{pmatrix}, \qquad (2.9)$$
where $C = \mathrm{diag}(\cos\theta_1, \dots, \cos\theta_n)$, $S = \mathrm{diag}(\sin\theta_1, \dots, \sin\theta_n)$, and $0 \le \theta_1 \le \cdots \le \theta_n \le \pi/2$. Thus the diagonal matrices satisfy $C, S \ge 0$ and $C^2 + S^2 = I$. Furthermore, noting that $L$ is nonsingular, (2.9) gives
$$C = P_1 L Q^{-1/2} V^* > 0, \qquad (2.10)$$
$$P_2 \begin{pmatrix} N_1 \\ N_2 \end{pmatrix} V^* = \begin{pmatrix} S \\ 0 \end{pmatrix}. \qquad (2.11)$$
Eq. (2.11) is equivalent to $\begin{pmatrix} N_1 \\ N_2 \end{pmatrix} = P_2^* \begin{pmatrix} S \\ 0 \end{pmatrix} V$. Let $P_2^*$ be partitioned as $P_2^* = \begin{pmatrix} U_1 & U_3 \\ U_2 & U_4 \end{pmatrix}$, in which $U_i \in \mathbb{C}^{n\times n}$, $i = 1, 2, 3, 4$. Then we have
$$\begin{pmatrix} N_1 \\ N_2 \end{pmatrix} = \begin{pmatrix} U_1 & U_3 \\ U_2 & U_4 \end{pmatrix} \begin{pmatrix} S \\ 0 \end{pmatrix} V = \begin{pmatrix} U_1 S V \\ U_2 S V \end{pmatrix},$$
from which it follows that $N_1 = U_1 S V$ and $N_2 = U_2 S V$. By (2.10), we have $L = P_1^* C V Q^{1/2}$. Then by (2.7) and (2.8),
$$A_1 = (L^* L)^{\frac{t_1}{2s}} N_1 Q^{1/2} = (Q^{1/2} V^* C^2 V Q^{1/2})^{\frac{t_1}{2s}}\, U_1 S V Q^{1/2},$$
$$A_2 = (L^* L)^{\frac{t_2}{2s}} N_2 Q^{1/2} = (Q^{1/2} V^* C^2 V Q^{1/2})^{\frac{t_2}{2s}}\, U_2 S V Q^{1/2}.$$

Conversely, assume that $A_1$ and $A_2$ have the decompositions (2.5) and (2.6). Let $X = (Q^{1/2} V^* C^2 V Q^{1/2})^{1/s}$, which is an HPD matrix. It is then easy to verify that $X$ is an HPD solution of Eq. (2.4). $\square$

Theorem 2.3. If Eq. (1.1) has an HPD solution $X$, then $X \in (M, N)$, where

$$M = \frac{1}{m}\sum_{i=1}^m \left(\frac{\mu_i}{\nu_i}\right)^{\frac{1-t_i}{t_i}} (A_i Q^{-1} A_i^*)^{\frac{1}{t_i}}, \qquad N = \left(Q - \sum_{i=1}^m A_i^* Q^{-\frac{t_i}{s}} A_i\right)^{\frac{1}{s}}, \qquad (2.12)$$
in which $\mu_i$ and $\nu_i$ are the minimal and maximal eigenvalues of $A_i Q^{-1} A_i^*$, respectively, $i = 1, \dots, m$.

Proof. Let $X$ be an HPD solution of Eq. (1.1). Then it follows from $0 < X^s < Q$ and Lemma 2.1 that $X^{-t_i} > Q^{-t_i/s}$, $i = 1, \dots, m$. Hence
$$X^s = Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i < Q - \sum_{i=1}^m A_i^* Q^{-\frac{t_i}{s}} A_i.$$
Thus we have
$$X < \left(Q - \sum_{i=1}^m A_i^* Q^{-\frac{t_i}{s}} A_i\right)^{\frac{1}{s}} = N.$$
On the other hand, from $A_i^* X^{-t_i} A_i < Q$, $i = 1, \dots, m$, it follows that
$$Q^{-1/2} A_i^* X^{-\frac{t_i}{2}}\, X^{-\frac{t_i}{2}} A_i Q^{-1/2} < I, \quad X^{-\frac{t_i}{2}} A_i Q^{-1} A_i^* X^{-\frac{t_i}{2}} < I, \quad A_i Q^{-1} A_i^* < X^{t_i}, \quad i = 1, \dots, m.$$
Let $\mu_i$ and $\nu_i$ be the minimal and maximal eigenvalues of $A_i Q^{-1} A_i^*$, respectively, $i = 1, \dots, m$. Since $\mu_i I \le A_i Q^{-1} A_i^* \le \nu_i I$, by Lemma 2.2 (applied with $t = 1/t_i \ge 1$) we get
$$\left(\frac{\mu_i}{\nu_i}\right)^{\frac{1-t_i}{t_i}} (A_i Q^{-1} A_i^*)^{\frac{1}{t_i}} < X, \quad i = 1, \dots, m.$$
Hence
$$X > \frac{1}{m}\sum_{i=1}^m \left(\frac{\mu_i}{\nu_i}\right)^{\frac{1-t_i}{t_i}} (A_i Q^{-1} A_i^*)^{\frac{1}{t_i}} = M. \qquad \square$$
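The bounds of Theorem 2.3 can be checked numerically by planting a solution: pick an HPD matrix $X$ and nonsingular $A_i$, define $Q$ from Eq. (1.1), and compare $X$ against $M$ and $N$. A sketch with random illustrative data (real matrices, so $A^* = A^T$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 4, 2.0
ts = [0.3, 0.8]                      # exponents t_i in (0, 1]

def hpow(H, p):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.T

# build a problem instance with a known (planted) HPD solution X
B = rng.standard_normal((n, n))
X = B @ B.T + n * np.eye(n)          # HPD by construction
As = [rng.standard_normal((n, n)) for _ in ts]
Q = hpow(X, s) + sum(A.T @ hpow(X, -t) @ A for A, t in zip(As, ts))

# bounds M and N of Theorem 2.3
N = hpow(Q - sum(A.T @ hpow(Q, -t / s) @ A for A, t in zip(As, ts)), 1 / s)
M = np.zeros_like(Q)
for A, t in zip(As, ts):
    W = A @ np.linalg.inv(Q) @ A.T
    mu, nu = np.linalg.eigvalsh(W)[[0, -1]]
    M += (mu / nu) ** ((1 - t) / t) * hpow(W, 1 / t)
M /= len(As)

print(np.linalg.eigvalsh(X - M).min() > 0)   # M < X
print(np.linalg.eigvalsh(N - X).min() > 0)   # X < N
```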

Theorem 2.4. If
$$\sum_{i=1}^m A_i^* X^{-t_i} A_i \le Q - M^s \quad \text{for all } X \in [M, Q^{1/s}], \qquad \text{and} \qquad \frac{1}{s}\sum_{i=1}^m t_i\, \lambda_n^{-(s+t_i)}(M)\, \|A_i\|_F^2 < 1,$$
where $M$ is defined by (2.12), then Eq. (1.1) has a unique HPD solution.

Proof. By the definition of $M$, we have $M > 0$, hence $\lambda_n(M) > 0$. Consider the map $F(X) = \left(Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i\right)^{1/s}$ on the set $\Omega = \{X \mid M \le X \le Q^{1/s}\}$. Obviously, $\Omega$ is a convex, closed, and bounded set, and $F$ is continuous on $\Omega$. By the hypothesis of the theorem, for any $X \in \Omega$ we have
$$Q^{1/s} \ge \left(Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i\right)^{1/s} \ge (Q - Q + M^s)^{1/s} = M,$$
i.e., $M \le F(X) \le Q^{1/s}$. Hence $F(\Omega) \subseteq \Omega$.


For arbitrary $X, Y \in \Omega$, we have $\sum_{i=1}^m A_i^* X^{-t_i} A_i \le Q - M^s$ and $\sum_{i=1}^m A_i^* Y^{-t_i} A_i \le Q - M^s$. Hence
$$F(X) = \left(Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i\right)^{1/s} \ge (Q - Q + M^s)^{1/s} = M \ge \lambda_n(M)\, I, \qquad (2.13)$$
$$F(Y) = \left(Q - \sum_{i=1}^m A_i^* Y^{-t_i} A_i\right)^{1/s} \ge (Q - Q + M^s)^{1/s} = M \ge \lambda_n(M)\, I. \qquad (2.14)$$
From (2.13) and (2.14), it follows that
$$\|F(X)^s - F(Y)^s\|_F = \left\|\sum_{i=0}^{s-1} F(X)^i (F(X) - F(Y)) F(Y)^{s-1-i}\right\|_F = \left\|\mathrm{vec}\!\left[\sum_{i=0}^{s-1} F(X)^i (F(X) - F(Y)) F(Y)^{s-1-i}\right]\right\|$$
$$= \left\|\sum_{i=0}^{s-1} \left((F(Y)^{s-1-i})^T \otimes F(X)^i\right) \mathrm{vec}(F(X) - F(Y))\right\| \ge \sum_{i=0}^{s-1} \lambda_n^{s-1}(M)\, \|\mathrm{vec}(F(X) - F(Y))\| = s\, \lambda_n^{s-1}(M)\, \|F(X) - F(Y)\|_F. \qquad (2.15)$$
According to the definition of the map $F$, we have
$$F(X)^s - F(Y)^s = \left(Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i\right) - \left(Q - \sum_{i=1}^m A_i^* Y^{-t_i} A_i\right) = \sum_{i=1}^m A_i^* (Y^{-t_i} - X^{-t_i}) A_i. \qquad (2.16)$$
Combining (2.15) and (2.16), we have by Lemma 2.5 that
$$\|F(X) - F(Y)\|_F \le \frac{1}{s\lambda_n^{s-1}(M)}\|F(X)^s - F(Y)^s\|_F = \frac{1}{s\lambda_n^{s-1}(M)}\left\|\sum_{i=1}^m A_i^*(Y^{-t_i} - X^{-t_i}) A_i\right\|_F \le \frac{1}{s\lambda_n^{s-1}(M)}\sum_{i=1}^m \|A_i\|_F^2\, \|Y^{-t_i} - X^{-t_i}\|_F$$
$$\le \frac{1}{s\lambda_n^{s-1}(M)}\sum_{i=1}^m t_i\, \lambda_n^{-(t_i+1)}(M)\, \|A_i\|_F^2\, \|Y - X\|_F = \frac{1}{s}\sum_{i=1}^m t_i\, \lambda_n^{-(s+t_i)}(M)\, \|A_i\|_F^2\, \|X - Y\|_F = p\, \|X - Y\|_F.$$

Since $p < 1$, the map $F$ is a contraction on $\Omega$. By the Banach fixed point theorem, $F$ has a unique fixed point in $\Omega$, and this shows that Eq. (1.1) has a unique HPD solution in $[M, Q^{1/s}]$. $\square$

Theorem 2.5. If Eq. (1.1) has an HPD solution $X$, then

$$\lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right) \le \frac{s}{s+t}\left(\frac{t}{s+t}\right)^{t/s} \quad \text{and} \quad X \le \hat{a}\, Q^{1/s},$$
where $t = \min_{1\le i\le m} t_i$, and $\hat{a}$ is a solution of the equation $y^t(1 - y^s) = \lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right)$ in $\left[\left(\frac{t}{s+t}\right)^{1/s}, 1\right]$.

Proof. Consider the sequence defined as follows:
$$a_0 = 1, \qquad a_{k+1} = \left(1 - \frac{\lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right)}{a_k^t}\right)^{1/s}, \quad k = 0, 1, 2, \dots$$
Let $X$ be an HPD solution of Eq. (1.1); then
$$X = \left(Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i\right)^{1/s} < Q^{1/s} = a_0 Q^{1/s}.$$
Assuming that $X < a_k Q^{1/s}$, by Lemma 2.1 we have
$$X^s = Q - \sum_{i=1}^m A_i^* X^{-t_i} A_i < Q - \sum_{i=1}^m A_i^* (a_k Q^{1/s})^{-t_i} A_i \le Q - \frac{1}{a_k^t}\sum_{i=1}^m A_i^* Q^{-\frac{t_i}{s}} A_i = Q^{1/2}\left(I - \frac{Q^{-1/2}\sum_{i=1}^m A_i^* Q^{-\frac{t_i}{s}} A_i\, Q^{-1/2}}{a_k^t}\right) Q^{1/2}$$
$$\le Q^{1/2}\left(1 - \frac{\lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right)}{a_k^t}\right) Q^{1/2} = a_{k+1}^s\, Q.$$
Therefore $X < a_{k+1} Q^{1/s}$, and by the principle of induction, $X < a_k Q^{1/s}$ for all $k = 0, 1, 2, \dots$

Noting that the sequence $a_k$ is monotonically decreasing and positive, it is convergent. Let $\lim_{k\to\infty} a_k = \hat{a}$; then
$$\hat{a} = \left(1 - \frac{\lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right)}{\hat{a}^t}\right)^{1/s},$$
i.e., $\hat{a}$ is a solution of the equation $y^t(1 - y^s) = \lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right)$. Consider the function $f(y) = y^t(1 - y^s)$. Since $\max_{y\in[0,1]} f(y) = f\!\left(\left(\frac{t}{s+t}\right)^{1/s}\right) = \frac{s}{s+t}\left(\frac{t}{s+t}\right)^{t/s}$, it follows that
$$\lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right) \le \frac{s}{s+t}\left(\frac{t}{s+t}\right)^{t/s}.$$
Next we show that $\hat{a} \in \left[\left(\frac{t}{s+t}\right)^{1/s}, 1\right]$. Obviously $\hat{a} \le 1$. On the other hand, since $a_0 = 1 > \left(\frac{t}{s+t}\right)^{1/s}$, we may assume without loss of generality that $a_k > \left(\frac{t}{s+t}\right)^{1/s}$. Then
$$a_{k+1} = \left(1 - \frac{\lambda_n\!\left(\sum_{i=1}^m Q^{-1/2} A_i^* Q^{-\frac{t_i}{s}} A_i Q^{-1/2}\right)}{a_k^t}\right)^{1/s} \ge \left(1 - \frac{\frac{s}{s+t}\left(\frac{t}{s+t}\right)^{t/s}}{a_k^t}\right)^{1/s} > \left(1 - \frac{\frac{s}{s+t}\left(\frac{t}{s+t}\right)^{t/s}}{\left(\frac{t}{s+t}\right)^{t/s}}\right)^{1/s} = \left(\frac{t}{s+t}\right)^{1/s}.$$
Hence $a_k > \left(\frac{t}{s+t}\right)^{1/s}$ for all $k = 0, 1, 2, \dots$, so $\hat{a} = \lim_{k\to\infty} a_k \ge \left(\frac{t}{s+t}\right)^{1/s}$. Consequently, $\hat{a} \in \left[\left(\frac{t}{s+t}\right)^{1/s}, 1\right]$. This completes the proof. $\square$
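The quantity $\hat a$ is exactly what Algorithm 3.1 in Section 3 needs in order to choose its parameter $\gamma \in (\hat a, 1)$, and it can be computed by running the scalar recursion $a_{k+1} = (1 - c/a_k^t)^{1/s}$ from the proof until it settles. A sketch with small random illustrative data (assuming the data satisfies the solvability condition of the theorem):

```python
import numpy as np

def hpow(H, p):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.T

def a_hat(As, ts, Q, s, iters=200):
    # limit of the scalar sequence a_k from the proof of Theorem 2.5
    Qm = hpow(Q, -0.5)
    c = np.linalg.eigvalsh(
        sum(Qm @ A.T @ hpow(Q, -t / s) @ A @ Qm for A, t in zip(As, ts))
    ).min()
    t = min(ts)
    a = 1.0
    for _ in range(iters):
        a = (1.0 - c / a**t) ** (1.0 / s)
    return a

# illustrative data (not from the paper's examples)
rng = np.random.default_rng(3)
n, s, ts = 4, 2.0, [0.5]
As = [0.2 * rng.standard_normal((n, n))]
Q = 2.0 * np.eye(n)
a = a_hat(As, ts, Q, s)
t = min(ts)
print((t / (s + t)) ** (1 / s) <= a <= 1.0)  # a_hat lies in the proven interval
```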

Theorem 2.6. Suppose that $A_j Q^{1/2} = Q^{1/2} A_j$, $j = 1, \dots, m$. If Eq. (1.1) has an HPD solution, then
$$\left(\rho(A_j)\right)^2 \le \frac{s}{s+t_j}\left(\frac{t_j}{s+t_j}\right)^{t_j/s} \lambda_1^{\frac{t_j}{s}+1}(Q), \quad j = 1, \dots, m.$$

Proof. If $A_j Q^{1/2} = Q^{1/2} A_j$, $j = 1, \dots, m$, then also $A_j Q^{-1/2} = Q^{-1/2} A_j$ and $A_j^* Q^{-1/2} = Q^{-1/2} A_j^*$, $j = 1, \dots, m$. So Eq. (1.1) can be written as
$$Q^{-1/2} X^s Q^{-1/2} + \sum_{i=1}^m A_i^* Q^{-1/2} X^{-t_i} Q^{-1/2} A_i = I. \qquad (2.17)$$
For any eigenvalue $\lambda$ of $A_j$, $j = 1, \dots, m$, let $x$ be a corresponding eigenvector. Multiplying Eq. (2.17) by $x^*$ on the left and by $x$ on the right, we have
$$x^* Q^{-1/2} X^s Q^{-1/2} x + x^* A_j^* Q^{-1/2} X^{-t_j} Q^{-1/2} A_j x + \sum_{\substack{i=1 \\ i\ne j}}^m x^* A_i^* Q^{-1/2} X^{-t_i} Q^{-1/2} A_i x = x^* x,$$
which yields
$$x^* Q^{-1/2} X^s Q^{-1/2} x + |\lambda|^2\, x^* Q^{-1/2} X^{-t_j} Q^{-1/2} x \le x^* x. \qquad (2.18)$$
Since $X > 0$, there exists a unitary matrix $U$ such that $X = U^* \Lambda U$, where $\Lambda = \mathrm{diag}(\eta_1, \dots, \eta_n) > 0$. Then (2.18) turns into
$$x^* Q^{-1/2} U^* \Lambda^s U Q^{-1/2} x + |\lambda|^2\, x^* Q^{-1/2} U^* \Lambda^{-t_j} U Q^{-1/2} x \le x^* x. \qquad (2.19)$$
Let $y = (y_1, y_2, \dots, y_n)^T = U Q^{-1/2} x$; then (2.19) reduces to
$$y^* \Lambda^s y + |\lambda|^2\, y^* \Lambda^{-t_j} y \le y^* U Q U^* y,$$
from which we obtain
$$|\lambda|^2 \le \frac{y^*(U Q U^* - \Lambda^s) y}{y^* \Lambda^{-t_j} y} \le \frac{y^*(\lambda_1(Q) I - \Lambda^s) y}{y^* \Lambda^{-t_j} y} = \frac{\sum_{i=1}^n |y_i|^2 (\lambda_1(Q) - \eta_i^s)}{\sum_{i=1}^n |y_i|^2\, \eta_i^{-t_j}}.$$
From Lemma 2.3, we know that
$$\eta_i^{t_j}(\lambda_1(Q) - \eta_i^s) \le \frac{s}{s+t_j}\left(\frac{t_j}{s+t_j}\right)^{t_j/s} \lambda_1^{\frac{t_j}{s}+1}(Q),$$
i.e.,
$$\lambda_1(Q) - \eta_i^s \le \frac{s}{s+t_j}\left(\frac{t_j}{s+t_j}\right)^{t_j/s} \lambda_1^{\frac{t_j}{s}+1}(Q)\, \eta_i^{-t_j}.$$
Noting that $y \ne 0$, we get
$$\sum_{i=1}^n |y_i|^2 (\lambda_1(Q) - \eta_i^s) \le \frac{s}{s+t_j}\left(\frac{t_j}{s+t_j}\right)^{t_j/s} \lambda_1^{\frac{t_j}{s}+1}(Q) \sum_{i=1}^n |y_i|^2\, \eta_i^{-t_j}.$$
Consequently,
$$|\lambda|^2 \le \frac{s}{s+t_j}\left(\frac{t_j}{s+t_j}\right)^{t_j/s} \lambda_1^{\frac{t_j}{s}+1}(Q).$$
Then $(\rho(A_j))^2 \le \frac{s}{s+t_j}\left(\frac{t_j}{s+t_j}\right)^{t_j/s} \lambda_1^{\frac{t_j}{s}+1}(Q)$, $j = 1, \dots, m$. $\square$
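In the scalar case ($n = 1$, where $A$ and $Q$ trivially commute) the necessary condition of Theorem 2.6 is easy to check directly: plant a solution $x > 0$ of $x^s + |a|^2 x^{-t} = q$ and verify the bound on $|a|^2$ (illustrative numbers):

```python
import numpy as np

# scalar instance (n = 1): x^s + |a|^2 * x^(-t) = q
s, t, x = 3.0, 0.4, 1.3             # pick a solution x > 0 first
a = 0.7                             # A = (a), so rho(A) = |a|
q = x**s + a**2 * x**(-t)           # choose Q = (q) so that x solves the equation

bound = s / (s + t) * (t / (s + t)) ** (t / s) * q ** (t / s + 1)
print(a**2 <= bound)  # the necessary condition of Theorem 2.6 holds
```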

3. Iterative method for the maximal HPD solution

In this section, we consider an iterative method for obtaining the maximal HPD solution $X_L$ of Eq. (1.1). We propose the following algorithm, which avoids computing matrix inverses during the iteration.

Algorithm 3.1.

Step 1. Input initial matrices
$$X_0 = \gamma\, Q^{1/s}, \qquad Y_0 = \frac{\gamma+1}{2\gamma}\, Q^{-1/s},$$
where $\gamma \in (\hat{a}, 1)$ and $\hat{a}$ is defined in Theorem 2.5.

Step 2. For $k = 0, 1, 2, \dots$, compute
$$Y_{k+1} = Y_k (2I - X_k Y_k), \qquad X_{k+1} = \left(Q - \sum_{i=1}^m A_i^* Y_{k+1}^{t_i} A_i\right)^{1/s}.$$

Theorem 3.1. If Eq. (1.1) has an HPD solution, then it has the maximal one, $X_L$. Moreover, for the sequences $X_k$ and $Y_k$ generated by Algorithm 3.1, we have
$$X_0 > X_1 > X_2 > \cdots, \quad \lim_{k\to\infty} X_k = X_L; \qquad Y_0 < Y_1 < Y_2 < \cdots, \quad \lim_{k\to\infty} Y_k = X_L^{-1}.$$

Proof. Since $X_L$ is an HPD solution of Eq. (1.1), by Theorem 2.5 we have $X_L \le \hat{a}\, Q^{1/s}$; thus
$$X_0 = \gamma\, Q^{1/s} > \hat{a}\, Q^{1/s} \ge X_L, \qquad Y_0 = \frac{\gamma+1}{2\gamma}\, Q^{-1/s} < \frac{1}{\gamma}\, Q^{-1/s} < \frac{1}{\hat{a}}\, Q^{-1/s} \le X_L^{-1}.$$
By Lemma 2.1 and Lemma 2.4, we have
$$Y_1 = Y_0(2I - X_0 Y_0) = 2Y_0 - Y_0 X_0 Y_0 \le X_0^{-1} < X_L^{-1},$$
and
$$Y_1 - Y_0 = Y_0 - Y_0 X_0 Y_0 = Y_0 (Y_0^{-1} - X_0) Y_0 = \frac{1-\gamma^2}{4\gamma}\, Q^{-1/s} > 0.$$
According to Lemma 2.1 and $Y_1 < X_L^{-1}$, we have
$$X_1 = \left(Q - \sum_{i=1}^m A_i^* Y_1^{t_i} A_i\right)^{1/s} > \left(Q - \sum_{i=1}^m A_i^* X_L^{-t_i} A_i\right)^{1/s} = X_L,$$
and
$$X_1^s - X_0^s = -\sum_{i=1}^m A_i^* (Y_1^{t_i} - Y_0^{t_i}) A_i < 0,$$
i.e., $X_1^s < X_0^s$; by Lemma 2.1 again, it follows that $X_1 < X_0$. Hence $X_0 > X_1 > X_L$ and $Y_0 < Y_1 < X_L^{-1}$.

Assume that $X_{k-1} > X_k > X_L$ and $Y_{k-1} < Y_k < X_L^{-1}$. We will prove the inequalities $X_k > X_{k+1} > X_L$ and $Y_k < Y_{k+1} < X_L^{-1}$. By Lemmas 2.1 and 2.4, we have
$$Y_{k+1} = 2Y_k - Y_k X_k Y_k \le X_k^{-1} < X_L^{-1},$$
and
$$X_{k+1} = \left(Q - \sum_{i=1}^m A_i^* Y_{k+1}^{t_i} A_i\right)^{1/s} > \left(Q - \sum_{i=1}^m A_i^* X_L^{-t_i} A_i\right)^{1/s} = X_L.$$
Since $Y_k \le X_{k-1}^{-1} < X_k^{-1}$, we have $Y_k^{-1} > X_k$; thus we have by Lemma 2.1 that
$$Y_{k+1} - Y_k = Y_k (Y_k^{-1} - X_k) Y_k > 0,$$
and
$$X_{k+1}^s - X_k^s = -\sum_{i=1}^m A_i^* (Y_{k+1}^{t_i} - Y_k^{t_i}) A_i < 0,$$
i.e., $X_{k+1}^s < X_k^s$; by Lemma 2.1 again, it follows that $X_{k+1} < X_k$.

Hence we have by induction that
$$X_0 > X_1 > X_2 > \cdots > X_k > X_L \quad \text{and} \quad Y_0 < Y_1 < Y_2 < \cdots < Y_k < X_L^{-1}$$
hold for all $k = 0, 1, 2, \dots$, and so $\lim_{k\to\infty} X_k$ and $\lim_{k\to\infty} Y_k$ exist. Suppose $\lim_{k\to\infty} X_k = \hat{X}$ and $\lim_{k\to\infty} Y_k = \hat{Y}$. Taking limits in Algorithm 3.1 leads to $\hat{Y} = \hat{X}^{-1}$ and $\hat{X} = \left(Q - \sum_{i=1}^m A_i^* \hat{X}^{-t_i} A_i\right)^{1/s}$. Therefore $\hat{X}$ is an HPD solution of Eq. (1.1), and thus $\hat{X} \le X_L$. Moreover, since each $X_k > X_L$, we have $\hat{X} \ge X_L$, hence $\hat{X} = X_L$. The theorem is proved. $\square$

Theorem 3.2. If Eq. (1.1) has an HPD solution and after $n$ iterative steps of Algorithm 3.1 we have $\|I - X_n Y_n\| < \varepsilon$, then

$$\left\|X_n^s + \sum_{i=1}^m A_i^* X_n^{-t_i} A_i - Q\right\| \le \varepsilon\, \lambda_n^{-1}(M) \sum_{i=1}^m t_i\, \lambda_1^{\frac{1-t_i}{s}}(Q)\, \|A_i\|^2,$$
where $M$ is defined by (2.12).

Proof. From the proof of Theorem 3.1, we have $Q^{-1/s} < \frac{\gamma+1}{2\gamma}\, Q^{-1/s} < Y_n < X_n^{-1} < X_L^{-1}$ for all $n = 1, 2, \dots$ Thus we have by Theorem 2.3 that $Q^{-1/s} < Y_n < X_n^{-1} < M^{-1}$, and this implies
$$\lambda_1^{-1/s}(Q)\, I < Y_n < X_n^{-1} < \lambda_n^{-1}(M)\, I.$$
Since
$$X_n^s + \sum_{i=1}^m A_i^* X_n^{-t_i} A_i - Q = \left(Q - \sum_{i=1}^m A_i^* Y_n^{t_i} A_i\right) + \sum_{i=1}^m A_i^* X_n^{-t_i} A_i - Q = \sum_{i=1}^m A_i^* \left(X_n^{-t_i} - Y_n^{t_i}\right) A_i,$$
we have by Lemma 2.5 that
$$\left\|X_n^s + \sum_{i=1}^m A_i^* X_n^{-t_i} A_i - Q\right\| = \left\|\sum_{i=1}^m A_i^* (X_n^{-t_i} - Y_n^{t_i}) A_i\right\| \le \sum_{i=1}^m \|A_i\|^2\, \|X_n^{-t_i} - Y_n^{t_i}\| \le \sum_{i=1}^m t_i\, \lambda_1^{\frac{1-t_i}{s}}(Q)\, \|A_i\|^2\, \|X_n^{-1} - Y_n\|$$
$$\le \sum_{i=1}^m t_i\, \lambda_1^{\frac{1-t_i}{s}}(Q)\, \|A_i\|^2\, \|X_n^{-1}\|\, \|I - X_n Y_n\| \le \varepsilon\, \lambda_n^{-1}(M) \sum_{i=1}^m t_i\, \lambda_1^{\frac{1-t_i}{s}}(Q)\, \|A_i\|^2. \qquad \square$$

4. Numerical examples

In this section, we give some numerical examples to illustrate the efficiency of the proposed algorithm. All tests were performed in MATLAB 7.0 with machine precision around $10^{-16}$. We stop the iteration when the residual satisfies $\left\|X_k^s + \sum_{i=1}^m A_i^* X_k^{-t_i} A_i - Q\right\|_F \le 1.0\times 10^{-10}$.

Example 4.1. Let $m = 1$, $s = 2$, $t_1 = 0.2$, and

let $A_1$ be a given nonsingular $6\times 6$ integer matrix (its entries are garbled beyond recovery in this copy), and
$$Q = \begin{pmatrix}
310 & 12 & 48 & 60 & 135 & 62 \\
12 & 168 & 38 & 54 & 116 & 10 \\
48 & 38 & 289 & 22 & 168 & 76 \\
60 & 54 & 22 & 202 & 16 & 220 \\
135 & 116 & 168 & 16 & 374 & 50 \\
62 & 10 & 76 & 220 & 50 & 278
\end{pmatrix}.$$


By calculation, $\hat{a} \approx 0.9984984$, so we choose $\gamma = 0.999$. Using Algorithm 3.1 and iterating nine steps, we obtain the maximal HPD solution $X_L$ of Eq. (1.1) as
$$X_L \approx X_9 = \begin{pmatrix}
16.6193 & 0.0450 & 2.3234 & 1.9639 & 4.2784 & 1.4450 \\
0.0450 & 12.0544 & 0.6912 & 2.4099 & 3.6401 & 0.2898 \\
2.3234 & 0.6912 & 15.5761 & 0.4933 & 5.3575 & 2.3218 \\
1.9639 & 2.4099 & 0.4933 & 9.8559 & 1.1515 & 9.4797 \\
4.2784 & 3.6401 & 5.3575 & 1.1515 & 17.4170 & 1.3839 \\
1.4450 & 0.2898 & 2.3218 & 9.4797 & 1.3839 & 13.2603
\end{pmatrix},$$
with residual $\|X_9^2 + A_1^* X_9^{-0.2} A_1 - Q\|_F = 8.1270\times 10^{-11}$.

Example 4.2. Let $m = 2$, $s = 5$, $t_1 = 0.2$, $t_2 = 0.5$, let $A_1$ be as in Example 4.1, let $A_2$ be a given nonsingular $6\times 6$ integer matrix (its entries are garbled beyond recovery in this copy), and
$$Q = \begin{pmatrix}
105 & 62 & 56 & 11 & 41 & 71 \\
62 & 154 & 67 & 50 & 82 & 121 \\
56 & 67 & 109 & 15 & 59 & 61 \\
11 & 50 & 15 & 28 & 37 & 53 \\
41 & 82 & 59 & 37 & 113 & 132 \\
71 & 121 & 61 & 53 & 132 & 250
\end{pmatrix}.$$

By calculation, $\hat{a} \approx 0.8412949$, so we choose $\gamma = 0.87$. Using Algorithm 3.1 and iterating 27 steps, we obtain the maximal HPD solution $X_L$ of Eq. (1.1) as
$$X_L \approx X_{27} = \begin{pmatrix}
1.6740 & 0.1069 & 0.2218 & 0.0033 & 0.0775 & 0.2502 \\
0.1069 & 1.8446 & 0.2356 & 0.2854 & 0.2327 & 0.2553 \\
0.2218 & 0.2356 & 1.7428 & 0.0088 & 0.2549 & 0.0884 \\
0.0033 & 0.2854 & 0.0088 & 1.1526 & 0.1433 & 0.1666 \\
0.0775 & 0.2327 & 0.2549 & 0.1433 & 1.6075 & 0.4349 \\
0.2502 & 0.2553 & 0.0884 & 0.1666 & 0.4349 & 2.1978
\end{pmatrix},$$
with residual $\|X_{27}^5 + A_1^* X_{27}^{-0.2} A_1 + A_2^* X_{27}^{-0.5} A_2 - Q\|_F = 4.9831\times 10^{-11}$.
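Results like those above can be cross-checked against the plain fixed-point iteration $X_{k+1} = \left(Q - \sum_i A_i^* X_k^{-t_i} A_i\right)^{1/s}$ underlying the map $F$ of Theorem 2.4; a minimal sketch on small random data (not the example matrices):

```python
import numpy as np

def hpow(H, p):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.T

def fixed_point(As, ts, Q, s, iters=200):
    # iterate F(X) = (Q - sum_i A_i^T X^{-t_i} A_i)^{1/s}, starting from Q^{1/s}
    X = hpow(Q, 1 / s)
    for _ in range(iters):
        X = hpow(Q - sum(A.T @ hpow(X, -t) @ A for A, t in zip(As, ts)), 1 / s)
    return X

rng = np.random.default_rng(5)
n, s, ts = 4, 3.0, [0.4]
As = [0.1 * rng.standard_normal((n, n))]
Q = 2.0 * np.eye(n)
X = fixed_point(As, ts, Q, s)
res = np.linalg.norm(hpow(X, s) + sum(A.T @ hpow(X, -t) @ A
                                      for A, t in zip(As, ts)) - Q, "fro")
print(res < 1e-10)
```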

5. Conclusions

In this paper, we discuss the HPD solutions of Eq. (1.1). We derive necessary and sufficient conditions for the existence of an HPD solution of Eq. (1.1), give a sufficient condition for the existence of a unique HPD solution, and present further necessary conditions and sufficient conditions for the existence of HPD solutions. We also propose an iterative method for obtaining the maximal HPD solution of Eq. (1.1). Numerical results show that the proposed algorithm is quite efficient.

References

[1] A.M. Sarhan, N.M. El-Shazly, E.M. Shehata, On the existence of extremal positive definite solutions of the nonlinear matrix equation $X^r + \sum_{i=1}^m A_i^* X^{\delta_i} A_i = I$, Math. Comput. Model. 51 (2010) 1107–1117.
[2] J.C. Engwerda, On the existence of a positive definite solution of the matrix equation $X + A^T X^{-1} A = I$, Linear Algebra Appl. 194 (1993) 91–108.
[3] J.C. Engwerda, A.C.M. Ran, A.L. Rijkeboer, Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation $X + A^* X^{-1} A = Q$, Linear Algebra Appl. 186 (1993) 255–275.
[4] J. Cai, G. Chen, Some investigation on Hermitian positive definite solutions of the matrix equation $X^s + A^* X^{-t} A = Q$, Linear Algebra Appl. 430 (2009) 2448–2456.
[5] J. Cai, G. Chen, On the Hermitian positive definite solutions of nonlinear matrix equation $X^s + A^* X^{-t} A = Q$, Appl. Math. Comput. 217 (2010) 117–123.
[6] J. Sun, Matrix Perturbation Analysis, Science Press, Beijing, China, 2001.
[7] M. Parodi, La localisation des valeurs caractéristiques des matrices et ses applications, Gauthier-Villars, Paris, 1959.
[8] R. Bhatia, Matrix Analysis, Graduate Texts in Mathematics, Springer, Berlin, 1997.
[9] T. Furuta, Operator inequalities associated with Hölder–McCarthy and Kantorovich inequalities, J. Inequal. Appl. 6 (1998) 137–148.
[10] V.I. Hasanov, Positive definite solutions of the matrix equations $X \pm A^* X^{-q} A = Q$, Linear Algebra Appl. 404 (2005) 166–182.
[11] V.I. Hasanov, S.M. El-Sayed, On the positive definite solutions of nonlinear matrix equation $X + A^* X^{-\delta} A = Q$, Linear Algebra Appl. 412 (2006) 154–160.
[12] W.N. Anderson, T.D. Morley, G.E. Trapp, Positive solutions to $X = A - BX^{-1}B^*$, Linear Algebra Appl. 134 (1990) 53–62.
[13] X. Chen, W. Li, On the matrix equation $X + A^* X^{-1} A = P$: solution and perturbation analysis, Math. Numer. Sin. 27 (2005) 303–310 (in Chinese).
[14] X. Duan, A. Liao, On the existence of Hermitian positive definite solutions of the matrix equation $X^s + A^* X^{-t} A = Q$, Linear Algebra Appl. 429 (2008) 673–687.
[15] X. Zhan, Computing the extremal positive definite solutions of a matrix equation, SIAM J. Sci. Comput. 17 (1996) 1167–1174.


[16] Y. He, J. Long, On the Hermitian positive definite solution of the nonlinear matrix equation $X + \sum_{i=1}^m A_i^* X^{-1} A_i = I$, Appl. Math. Comput. 216 (2010) 3480–3485.
[17] Z. Peng, S.M. El-Sayed, X. Zhang, Iterative methods for the extremal positive definite solution of the matrix equation $X + A^* X^{-\alpha} A = Q$, J. Comput. Appl. Math. 200 (2007) 520–527.