Modified alternately linearized implicit iteration method for M-matrix algebraic Riccati equations


Jinrui Guan
Department of Mathematics, Taiyuan Normal University, China
E-mail address: [email protected]

Applied Mathematics and Computation 347 (2019) 442–448. https://doi.org/10.1016/j.amc.2018.11.025

Funding: National Natural Science Foundation of China (11371275, 11401424) and Natural Science Foundation of Shanxi Province, China (201601D011004).

MSC: 65F30, 65F10, 15A24

Keywords: M-matrix algebraic Riccati equation; Minimal nonnegative solution; Newton method; ALI iteration method

Abstract. Research on the theory and efficient numerical methods for the M-matrix algebraic Riccati equation (MARE) has become a hot topic in recent years. In this paper, we consider the numerical solution of the MARE and propose a modified alternately linearized implicit (MALI) iteration method for computing its minimal nonnegative solution. Convergence of the MALI method is proved, with proper choices of the parameters, when the associated matrix is a nonsingular M-matrix or an irreducible singular M-matrix. Theoretical analysis and numerical experiments show that the MALI method is effective and efficient in some cases.

1. Introduction and preliminaries

We consider the numerical solution of the M-matrix algebraic Riccati equation (MARE) of the form

XCX − XD − AX + B = 0,        (1)

where A, B, C and D are real matrices of sizes m × m, m × n, n × m and n × n respectively, and



K = [  D   −C ]
    [ −B    A ]        (2)

is an M-matrix. The M-matrix algebraic Riccati equation arises in many branches of applied mathematics, such as transport theory, the Wiener-Hopf factorization of Markov chains, stochastic processes, and so on [2,3,5,7,15,18,21]. Research on the theory and efficient numerical methods for the MARE has become a hot topic in recent years [1,5–7,9–14,17,19,20].

The following are some notations and definitions we need in the sequel. For any matrices A = (aij), B = (bij) ∈ Rm×n, we write A ≥ B (A > B) if aij ≥ bij (aij > bij) for all i, j. A is called a Z-matrix if aij ≤ 0 for all i ≠ j. A Z-matrix A is called an M-matrix if there exists a nonnegative matrix B such that A = sI − B and s ≥ ρ(B), where ρ(B) is the spectral radius of B. In particular, A is called a nonsingular M-matrix if s > ρ(B) and a singular M-matrix if s = ρ(B).

We now review some basic results on M-matrices. The following lemmas can be found in [4].

Lemma 1.1. Let A be a Z-matrix. Then the following statements are equivalent:
(1) A is a nonsingular M-matrix;
(2) A−1 ≥ 0;




(3) Av > 0 for some vector v > 0;
(4) All eigenvalues of A have positive real part.

Lemma 1.2. Let A and B be Z-matrices. If A is a nonsingular M-matrix and A ≤ B, then B is also a nonsingular M-matrix. In particular, for any nonnegative real number α, B = αI + A is a nonsingular M-matrix.

Lemma 1.3. Let A be a nonsingular M-matrix or an irreducible singular M-matrix, and let A be partitioned as

A = [ A11   A12 ]
    [ A21   A22 ],

where A11 and A22 are square matrices. Then A11 and A22 are nonsingular M-matrices.

The solution of the MARE of practical interest is the minimal nonnegative solution, for which we have the following important result [5,7,8,11].

Lemma 1.4. If K is a nonsingular M-matrix or an irreducible singular M-matrix, then (1) has a unique minimal nonnegative solution S. If K is nonsingular, then A − SC and D − CS are also nonsingular M-matrices. If K is irreducible, then S > 0 and A − SC and D − CS are also irreducible M-matrices.

There are many methods for computing the minimal nonnegative solution of the MARE, such as the Schur method, the matrix sign function method, fixed-point iterations, the Newton method, the ALI iteration method, doubling algorithms, and so on [1,5,7,9,13,17,20]. In this paper, based on the ALI iteration method [1], we propose a modified alternately linearized implicit (MALI) iteration method for computing the minimal nonnegative solution of the MARE. Convergence of the MALI method is proved under proper choices of the parameters when K is a nonsingular M-matrix or an irreducible singular M-matrix. Theoretical analysis and numerical experiments show that the MALI iteration method is efficient in some cases.

The rest of the paper is organized as follows. In Section 2, we discuss some existing methods and then propose the modified ALI iteration method. In Section 3 we prove the convergence of the MALI iteration method. In Section 4, we use some numerical examples to show the effectiveness of the new method. Concluding remarks are given in Section 5.
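As a small illustration of the criteria in Lemma 1.1, the following NumPy sketch tests whether a given matrix is a nonsingular M-matrix via criterion (4); the function name and tolerance are illustrative choices.

```python
import numpy as np

def is_nonsingular_m_matrix(A, tol=1e-12):
    """Test a real square matrix against Lemma 1.1: A must be a Z-matrix
    (nonpositive off-diagonal entries) and, by criterion (4), all of its
    eigenvalues must have positive real part."""
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):  # a positive off-diagonal entry: not a Z-matrix
        return False
    return bool(np.all(np.linalg.eigvals(A).real > tol))
```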

2. Modified ALI iteration method

In the Newton method (see [7,9]) for computing the minimal nonnegative solution of the MARE, a Sylvester matrix equation of the following form

Xk+1(D − CXk) + (A − XkC)Xk+1 = B + XkCXk

must be solved at each iteration, which is very costly when the matrices are large. The same situation occurs for the fixed-point iteration methods in [7]. In [1], Z.-Z. Bai et al. proposed the alternately linearized implicit (ALI) iteration method for the MARE as follows.
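For concreteness, a minimal NumPy/SciPy sketch of one Newton step is given below; it assumes SciPy's scipy.linalg.solve_sylvester is available, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_step(Xk, A, B, C, D):
    """One Newton step for XCX - XD - AX + B = 0: solve the Sylvester equation
    (A - Xk C) X_{k+1} + X_{k+1} (D - C Xk) = B + Xk C Xk  for X_{k+1}."""
    left = A - Xk @ C          # m x m coefficient acting from the left
    right = D - C @ Xk         # n x n coefficient acting from the right
    rhs = B + Xk @ C @ Xk
    return solve_sylvester(left, right, rhs)
```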

The ALI iteration method
• Set X0 = 0 ∈ Rm×n.
• For k = 0, 1, . . ., until {Xk} converges, compute Xk+1 from Xk by solving the following two systems of linear matrix equations:

Xk+1/2(αI + (D − CXk)) = (αI − A)Xk + B,
(αI + (A − Xk+1/2C))Xk+1 = Xk+1/2(αI − D) + B,        (3)

where α > 0 is a given parameter.
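A minimal NumPy sketch of one ALI sweep (3) is shown below; the right-coefficient equation in the first half-step is handled by solving the transposed system, and the names are illustrative.

```python
import numpy as np

def ali_step(Xk, A, B, C, D, alpha):
    """One ALI sweep (3). Both coefficient matrices depend on the current
    iterate, so each half-step needs a fresh dense solve."""
    m, n = B.shape
    Im, In = np.eye(m), np.eye(n)
    # X_{k+1/2} (alpha I + D - C Xk) = (alpha I - A) Xk + B  (right coefficient)
    M1 = alpha * In + D - C @ Xk
    R1 = (alpha * Im - A) @ Xk + B
    X_half = np.linalg.solve(M1.T, R1.T).T
    # (alpha I + A - X_{k+1/2} C) X_{k+1} = X_{k+1/2} (alpha I - D) + B
    M2 = alpha * Im + A - X_half @ C
    R2 = X_half @ (alpha * In - D) + B
    return np.linalg.solve(M2, R2)
```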

In the ALI iteration method, only linear matrix equations need to be solved, which is easier than solving Sylvester matrix equations. However, the ALI method is still not very effective for the MARE, since at each iteration a matrix equation has to be solved exactly, which is costly. Several improvements to the ALI iteration method have been suggested. In [16], a new ALI iteration method was proposed by using the Shamanskii technique at each iteration. A modified ALI iteration method was proposed in [22], which is more effective than the ALI iteration method when the difference in magnitude between A and D is large. In [6], the NALI iteration method was proposed as follows.


The NALI iteration method
• Set X0 = 0 ∈ Rm×n.
• For k = 0, 1, . . ., until {Xk} converges, compute Xk+1 from Xk by solving the following two systems of linear matrix equations:

Xk+1/2(αI + D) = (αI − A + XkC)Xk + B,
(βI + A)Xk+1 = Xk+1/2(βI − D + CXk) + B,        (4)

where α > 0, β > 0 are two given parameters.
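Since the coefficient matrices αI + D and βI + A in (4) do not change during the iteration, they can be factorized once and the factors reused in every sweep; a sketch using SciPy's LU routines (illustrative names) follows.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def nali_setup(A, D, alpha, beta):
    """Factor the two fixed NALI coefficient matrices once, outside the loop."""
    m, n = A.shape[0], D.shape[0]
    lu_right = lu_factor((alpha * np.eye(n) + D).T)   # for X_{k+1/2} (alpha I + D) = R1
    lu_left = lu_factor(beta * np.eye(m) + A)         # for (beta I + A) X_{k+1} = R2
    return lu_right, lu_left

def nali_step(Xk, A, B, C, D, alpha, beta, lu_right, lu_left):
    """One NALI sweep (4), reusing the LU factors from nali_setup."""
    m, n = B.shape
    R1 = (alpha * np.eye(m) - A + Xk @ C) @ Xk + B
    X_half = lu_solve(lu_right, R1.T).T               # right coefficient via transpose
    R2 = X_half @ (beta * np.eye(n) - D + C @ Xk) + B
    return lu_solve(lu_left, R2)
```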

The NALI iteration method is more efficient than the ALI iteration method, even though it needs more iterations, since the coefficient matrices of its linear matrix equations are fixed throughout the iteration. Hence, less CPU time is required for solving the MARE. However, in some cases the NALI iteration method is still not very satisfactory. So, in the following, we propose a modified alternately linearized implicit (MALI) iteration method for computing the minimal nonnegative solution of the MARE. Compared with ALI and NALI, at each iteration of the MALI iteration method only two triangular linear matrix equations need to be solved, hence less CPU time is needed.

First, let D = LD − UD, where LD is the lower triangular part of D (including the diagonal) and UD = LD − D is the negative of the strictly upper triangular part of D; since the off-diagonal entries of D are nonpositive, UD ≥ 0. We split A in the same way, A = LA − UA. Then we have the new method as follows.

The MALI iteration method
• Set X0 = 0 ∈ Rm×n.
• For k = 0, 1, . . ., until {Xk} converges, compute Xk+1 from Xk by solving the following two triangular systems of linear matrix equations:

Xk+1/2(αI + LD) = (αI − A + XkC)Xk + XkUD + B,
(βI + LA)Xk+1 = Xk+1/2(βI − D + CXk+1/2) + UAXk+1/2 + B,        (5)

where α > 0, β > 0 are two given parameters.
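A minimal dense sketch of the complete MALI iteration (5) is given below. It uses SciPy triangular solves, takes α = max{aii}, β = max{djj} as default parameters (the choice used later in the experiments), and its simple stopping rule and iteration cap are illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

def mali(A, B, C, D, alpha=None, beta=None, tol=1e-6, maxit=9000):
    """MALI iteration (5) for the minimal nonnegative solution of (1).
    Both coefficient matrices are triangular and fixed, so each half-step
    is a single triangular solve."""
    m, n = B.shape
    alpha = np.max(np.diag(A)) if alpha is None else alpha   # alpha >= max a_ii
    beta = np.max(np.diag(D)) if beta is None else beta      # beta  >= max d_jj
    LA, LD = np.tril(A), np.tril(D)        # lower triangular parts (with diagonal)
    UA, UD = LA - A, LD - D                # A = LA - UA, D = LD - UD, with UA, UD >= 0
    M1 = (alpha * np.eye(n) + LD).T        # upper triangular after transposing
    M2 = beta * np.eye(m) + LA             # lower triangular
    X = np.zeros((m, n))
    for _ in range(maxit):
        # X_{k+1/2} (alpha I + L_D) = (alpha I - A + Xk C) Xk + Xk U_D + B
        R1 = (alpha * np.eye(m) - A + X @ C) @ X + X @ UD + B
        X_half = solve_triangular(M1, R1.T, lower=False).T
        # (beta I + L_A) X_{k+1} = X_{k+1/2}(beta I - D + C X_{k+1/2}) + U_A X_{k+1/2} + B
        R2 = X_half @ (beta * np.eye(n) - D + C @ X_half) + UA @ X_half + B
        X = solve_triangular(M2, R2, lower=True)
        residual = X @ C @ X - X @ D - A @ X + B
        if np.linalg.norm(residual, np.inf) <= tol * np.linalg.norm(B, np.inf):
            break
    return X
```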

Compared with the ALI and NALI iteration methods, the MALI iteration method is more efficient since the coefficient matrices of its linear matrix equations are triangular and fixed throughout the iteration. Hence, less CPU time is required for solving the MARE.

3. Convergence analysis of the MALI iteration method

In the following, we give the convergence analysis of the MALI iteration method. First, we need several lemmas.

Lemma 3.1. Let {Xk} be the matrix sequence generated by the MALI method, let

R(X) = XCX − XD − AX + B,

and let S be the minimal nonnegative solution to (1). Then for any k ≥ 0, the following equalities hold:
(a) (Xk+1/2 − S)(αI + LD) = (αI − A + SC)(Xk − S) + (Xk − S)(CXk + UD);
(b) (Xk+1/2 − Xk)(αI + LD) = R(Xk);
(c) R(Xk+1/2) = (αI − A + Xk+1/2C)(Xk+1/2 − Xk) + (Xk+1/2 − Xk)(UD + CXk);
(d) (βI + LA)(Xk+1 − S) = (Xk+1/2 − S)(βI − D + CS) + (UA + Xk+1/2C)(Xk+1/2 − S);
(e) (βI + LA)(Xk+1 − Xk+1/2) = R(Xk+1/2);
(f) R(Xk+1) = (Xk+1 − Xk+1/2)(βI − D + CXk+1/2) + (UA + Xk+1C)(Xk+1 − Xk+1/2).

Proof. We first prove (a). From SCS − SD − AS + B = 0, we have S(αI + LD) = (αI − A + SC)S + SUD + B. Thus

(Xk+1/2 − S)(αI + LD) = Xk+1/2(αI + LD) − S(αI + LD)
= (αI − A)(Xk − S) + XkCXk − SCS + (Xk − S)UD
= (αI − A + SC)(Xk − S) + (Xk − S)(CXk + UD).

Next we prove (b) and (c). We have

(Xk+1/2 − Xk)(αI + LD) = Xk+1/2(αI + LD) − Xk(αI + LD)
= (αI − A + XkC)Xk + XkUD + B − Xk(αI + D) − XkUD
= R(Xk).


From (5), we have Xk+1/2 D = (α I − A + XkC )Xk + XkUD + B − Xk+1/2 (α I + UD ). Thus

R(Xk+1/2) = Xk+1/2CXk+1/2 − AXk+1/2 − Xk+1/2D + B
= (αI − A + Xk+1/2C)(Xk+1/2 − Xk) + (Xk+1/2 − Xk)(UD + CXk).

Finally, we prove (d), (e) and (f). We have

(βI + LA)(Xk+1 − S) = (βI + LA)Xk+1 − (βI + LA)S
= Xk+1/2(βI − D + CXk+1/2) + UAXk+1/2 + B − S(βI − D + CS) − UAS − B
= (Xk+1/2 − S)(βI − D + CS) + (UA + Xk+1/2C)(Xk+1/2 − S).

(βI + LA)(Xk+1 − Xk+1/2) = (βI + LA)Xk+1 − (βI + LA)Xk+1/2
= Xk+1/2(βI − D + CXk+1/2) + UAXk+1/2 + B − (βI + A)Xk+1/2 − UAXk+1/2
= R(Xk+1/2).

R(Xk+1) = Xk+1CXk+1 − AXk+1 − Xk+1D + B
= (Xk+1 − Xk+1/2)(βI − D) + UA(Xk+1 − Xk+1/2) + Xk+1CXk+1 − Xk+1/2CXk+1/2    (using (5) to eliminate AXk+1)
= (Xk+1 − Xk+1/2)(βI − D + CXk+1/2) + (UA + Xk+1C)(Xk+1 − Xk+1/2).

Lemma 3.2. Let {Xk} be the matrix sequence generated by the MALI method, let K in (2) be a nonsingular M-matrix or an irreducible singular M-matrix, and let S be the minimal nonnegative solution to (1). If the parameters α, β of the MALI method satisfy

α ≥ max{aii},    β ≥ max{djj},

then for any k ≥ 0,

0 ≤ Xk+1/2 ≤ S,    0 ≤ Xk+1 ≤ S.        (6)

Proof. When K in (2) is a nonsingular M-matrix or an irreducible singular M-matrix, A and D are nonsingular M-matrices by Lemma 1.3. Thus UA ≥ 0 and UD ≥ 0, since the off-diagonal entries of A and D are nonpositive, and αI + LD, βI + LA are both nonsingular M-matrices by Lemma 1.2. From Lemma 1.1 we know (αI + LD)−1 ≥ 0, (βI + LA)−1 ≥ 0.

We prove (6) by induction. When k = 0, from (5) we have X1/2(αI + LD) = B. Thus

X1/2 = B(α I + LD )−1 ≥ 0. On the other hand, by Lemma 3.1(a), we have

(X1/2 − S )(α I + LD ) = −(α I − A + SC )S − SUD ≤ 0, thus

X1/2 − S = −((α I − A + SC )S + SUD )(α I + LD )−1 ≤ 0. Similarly, from (5) we have

(βI + LA)X1 = X1/2(βI − D + CX1/2) + UAX1/2 + B. When β ≥ max{djj}, we have βI − D ≥ 0, thus

X1 = (β I + LA )−1 (X1/2 (β I − D + CX1/2 ) + UA X1/2 + B ) ≥ 0. From Lemma 3.1(d), we have

(β I + LA )(X1 − S ) = (X1/2 − S )(β I − D + CS ) + (UA + X1/2C )(X1/2 − S ). Since X1/2 − S ≤ 0, we get

X1 − S = (β I + LA )−1 ((X1/2 − S )(β I − D + CS ) + (UA + X1/2C )(X1/2 − S )) ≤ 0. Thus we have proved

0 ≤ X1/2 ≤ S,

0 ≤ X1 ≤ S.

446

J. Guan / Applied Mathematics and Computation 347 (2019) 442–448

Suppose that the assertions (6) hold for k = l − 1. From (5), we have

Xl+1/2 (α I + LD ) = (α I − A + Xl C )Xl + Xl UD + B, thus

Xl+1/2 = ((α I − A + Xl C )Xl + Xl UD + B )(α I + LD )−1 ≥ 0. On the other hand, by Lemma 3.1(a), we have

(Xl+1/2 − S )(α I + LD ) = (α I − A + SC )(Xl − S ) + (Xl − S )(CXl + UD ), thus

Xl+1/2 − S = ((α I − A + SC )(Xl − S ) + (Xl − S )(CXl + UD ))(α I + LD )−1 ≤ 0. Similarly, from (5) we have

(β I + LA )Xl+1 = Xl+1/2 (β I − D + CXl+1/2 ) + UA Xl+1/2 + B, thus

Xl+1 = (β I + LA )−1 (Xl+1/2 (β I − D + CXl+1/2 ) + UA Xl+1/2 + B ) ≥ 0. On the other hand, by Lemma 3.1(d), we have

(β I + LA )(Xl+1 − S ) = (Xl+1/2 − S )(β I − D + CS ) + (UA + Xl+1/2C )(Xl+1/2 − S ), thus

Xl+1 − S = (βI + LA)−1((Xl+1/2 − S)(βI − D + CS) + (UA + Xl+1/2C)(Xl+1/2 − S)) ≤ 0.

Hence the assertions (6) hold for k = l. Thus we have proved by induction that the assertions (6) hold for all k ≥ 0.



Lemma 3.3. Let the assumptions be as in Lemma 3.2. Then for any k ≥ 0, we have

Xk ≤ Xk+1/2 ≤ Xk+1,    R(Xk+1/2) ≥ 0,    R(Xk+1) ≥ 0.

Proof. We prove this by induction. When k = 0, it is clear that 0 = X0 ≤ X1/2 . From Lemma 3.1(c), we have

R(X1/2 ) = (α I − A + X1/2C )X1/2 + X1/2UD ≥ 0. By Lemma 3.1(e), we have (β I + LA )(X1 − X1/2 ) = R(X1/2 ). Thus

X1 − X1/2 = (β I + LA )−1 R(X1/2 ) ≥ 0. By Lemma 3.1(f), we have

R(X1) = (X1 − X1/2)(βI − D + CX1/2) + (UA + X1C)(X1 − X1/2) ≥ 0.

Suppose that the assertions hold for k = l − 1. By Lemma 3.1(b), we have (Xl+1/2 − Xl)(αI + LD) = R(Xl). Thus

Xl+1/2 − Xl = R(Xl )(α I + LD )−1 ≥ 0. From Lemma 3.1(c), we have

R(Xl+1/2 ) = (α I − A + Xl+1/2C )(Xl+1/2 − Xl ) + (Xl+1/2 − Xl )(UD + CXl ) ≥ 0. From Lemma 3.1(e), we have (β I + LA )(Xl+1 − Xl+1/2 ) = R(Xl+1/2 ). Thus

Xl+1 − Xl+1/2 = (β I + LA )−1 R(Xl+1/2 ) ≥ 0. By Lemma 3.1(f), we have

R(Xl+1) = (Xl+1 − Xl+1/2)(βI − D + CXl+1/2) + (UA + Xl+1C)(Xl+1 − Xl+1/2) ≥ 0.

Hence the assertions hold for k = l. Thus we have proved by induction that the assertions hold for all k ≥ 0.



Using the lemmas above, we can prove the following convergence theorem for the MALI method.

Theorem 3.1. For the MARE (1), if K in (2) is a nonsingular M-matrix or an irreducible singular M-matrix and the parameters α, β satisfy

α ≥ max{aii},    β ≥ max{djj},

then {Xk}, generated by the MALI method, is well defined, monotonically increasing, and converges to S, the minimal nonnegative solution.

Proof. Combining Lemma 3.2 with Lemma 3.3, we have shown that {Xk} is nonnegative, monotonically increasing and bounded from above. Thus there is a nonnegative matrix S∗ such that limk→∞ Xk = S∗, and it also holds that limk→∞ Xk+1/2 = S∗. From Lemma 3.2, we have S∗ ≤ S. On the other hand, taking the limit in the MALI method shows that S∗ is a solution of the MARE (1), thus S ≤ S∗. Hence S = S∗.
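As a quick illustration of the hypotheses of Theorem 3.1, the following sketch builds a tiny MARE whose matrix K is verifiably a nonsingular M-matrix and reads off admissible parameters α, β; the data are made up purely for illustration.

```python
import numpy as np

# Tiny illustrative blocks: K = [D, -C; -B, A] is strictly diagonally dominant
# with positive diagonal and nonpositive off-diagonal entries, hence a
# nonsingular M-matrix, so Theorem 3.1 applies.
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
D = np.array([[3.0, -1.0], [-1.0, 3.0]])
B = np.array([[0.5, 0.5], [0.5, 0.5]])
C = np.array([[0.5, 0.5], [0.5, 0.5]])

K = np.block([[D, -C], [-B, A]])
is_z_matrix = np.all(K - np.diag(np.diag(K)) <= 0)
print(is_z_matrix and np.all(np.linalg.eigvals(K).real > 0))  # True
alpha, beta = np.max(np.diag(A)), np.max(np.diag(D))          # admissible parameters
print(alpha, beta)                                            # 3.0 3.0
```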


Table 1. Numerical results of Experiment 1.

m     Methods   IT    CPU      RES
5     Newton    2     0.0072   1.3048e−08
      FP        3     0.0087   5.1826e−08
      ALI       5     0.0042   9.0345e−08
      NALI      5     0.0040   8.3996e−08
      MALI      6     0.0038   3.9328e−07
10    Newton    3     0.2178   2.5071e−11
      FP        5     0.2207   3.4722e−08
      ALI       9     0.0974   4.8599e−07
      NALI      9     0.0517   3.9908e−08
      MALI      13    0.0486   7.5643e−07
20    Newton    3     3.0800   6.9198e−07
      FP        13    5.4770   6.2371e−08
      ALI       20    0.3141   5.0121e−07
      NALI      24    0.2558   9.3599e−08
      MALI      32    0.2030   9.9759e−07

4. Numerical experiments

In this section we use several examples to show the effectiveness of the MALI iteration method (MALI). In particular, we compare the numerical behaviour of the MALI iteration method with the ALI iteration method (ALI) in [1], the NALI iteration method (NALI) in [6], and the Newton method (Newton) and the fixed-point iteration method (FP) in [7,9], with respect to the number of iterations (IT), the CPU time (CPU) and the residual (RES), which is defined as

RES := ‖XCX − XD − AX + B‖∞ / (‖XCX‖∞ + ‖XD‖∞ + ‖AX‖∞ + ‖B‖∞).
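In NumPy terms, this residual can be computed as follows (a direct transcription of the formula above; the function name is illustrative).

```python
import numpy as np

def res(X, A, B, C, D):
    """Relative residual RES of an approximate solution X, using infinity norms."""
    nrm = lambda M: np.linalg.norm(M, np.inf)
    numerator = nrm(X @ C @ X - X @ D - A @ X + B)
    denominator = nrm(X @ C @ X) + nrm(X @ D) + nrm(A @ X) + nrm(B)
    return numerator / denominator
```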

In our implementations, all iterations are run in MATLAB (version 2012a) on a personal computer and are terminated when the current iterate satisfies RES < 10^{-6} or when the number of iterations exceeds 9000. In the experiments, we choose the parameters α, β in the MALI and NALI iteration methods to be α = max{aii}, β = max{djj}, and the parameter α in the ALI iteration method to be α = max{max{aii}, max{djj}}.

Experiment 1 (see [1]). Consider the MARE (1) for which

A = D = Tridiag(−I, T, −I) ∈ Rn×n

are block tridiagonal matrices,

C = (1/50) tridiag(1, 2, 1) ∈ Rn×n

is a tridiagonal matrix, and B = SD + AS − SCS, such that

S = (1/50) ee^T ∈ Rn×n

is the minimal nonnegative solution of (1), where

T = tridiag(−1, 4 + 200/(m + 1)^2, −1) ∈ Rm×m,    e = (1, . . . , 1)^T ∈ Rn,

and n = m^2. For different sizes of m, the numerical results are summarized in Table 1.

Experiment 2 (see [1]). Consider the MARE (1) for which A, B, C, D are generated as follows. First set R = rand(2n, 2n); then set W = diag(Re) − R, with e = (1, 1, . . . , 1)^T ∈ R2n; finally, define

D = W(1:n, 1:n) + I,    C = −W(1:n, n+1:2n),
B = −W(n+1:2n, 1:n),    A = W(n+1:2n, n+1:2n) + I,

where I is the identity matrix of size n. In this case, the corresponding K is a nonsingular M-matrix. For different sizes of n, the numerical results are summarized in Table 2. From Tables 1 and 2 we can see that, although it needs more iterations, the MALI iteration method requires the least CPU time. Hence it is effective.
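For reference, a NumPy sketch of the Experiment 2 problem generator described above is given below; the random seed is an added choice for reproducibility only.

```python
import numpy as np

def experiment2_problem(n, seed=0):
    """Random MARE data of Experiment 2: W = diag(Re) - R has zero row sums,
    and adding the identity to the diagonal blocks makes K = W + I a
    nonsingular M-matrix."""
    rng = np.random.default_rng(seed)
    R = rng.random((2 * n, 2 * n))
    W = np.diag(R.sum(axis=1)) - R
    D = W[:n, :n] + np.eye(n)
    C = -W[:n, n:]
    B = -W[n:, :n]
    A = W[n:, n:] + np.eye(n)
    return A, B, C, D
```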


Table 2. Numerical results of Experiment 2.

n     Methods   IT    CPU        RES
50    Newton    5     0.4330     1.4172e−09
      FP        33    0.2749     9.1695e−07
      ALI       22    0.0329     9.1544e−07
      NALI      30    0.0253     6.9917e−07
      MALI      34    0.0232     9.9718e−07
100   Newton    5     2.0103     1.3607e−07
      FP        46    1.3109     8.9286e−07
      ALI       29    0.1207     9.7380e−07
      NALI      39    0.1053     8.6849e−07
      MALI      46    0.1018     8.6957e−07
500   Newton    6     111.3035   9.3985e−08
      FP        84    234.3724   9.6281e−07
      ALI       53    9.1768     9.6947e−07
      NALI      75    8.1253     9.9217e−07
      MALI      88    7.1428     9.6192e−07

5. Conclusions

We have proposed a modified alternately linearized implicit iteration method (MALI) for computing the minimal nonnegative solution of the MARE. Convergence of the MALI method is guaranteed for a MARE associated with a nonsingular M-matrix or an irreducible singular M-matrix. Numerical experiments have shown that the MALI method is efficient and effective in some cases. However, in the critical or near-critical case the ALI-type methods converge very slowly, and in that case the Newton method is preferable.

References
[1] Z.Z. Bai, X.X. Guo, S.F. Xu, Alternately linearized implicit iteration methods for the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equations, Numer. Linear Algebra Appl. 13 (2006) 655–674.
[2] N.G. Bean, M.M. O'Reilly, P.G. Taylor, Algorithms for return probabilities for stochastic fluid flows, Stoch. Models 21 (2005) 149–184.
[3] N.G. Bean, M.M. O'Reilly, P.G. Taylor, Algorithms for the Laplace-Stieltjes transforms of first return times for stochastic fluid flows, Methodol. Comput. Appl. Probab. 10 (2008) 381–408.
[4] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1994.
[5] D.A. Bini, B. Iannazzo, B. Meini, Numerical Solution of Algebraic Riccati Equations, SIAM Series on Fundamentals of Algorithms, Philadelphia, 2012.
[6] J.R. Guan, L.Z. Lu, New alternately linearized implicit iteration for M-matrix algebraic Riccati equations, J. Math. Study 50 (1) (2017) 54–64.
[7] C.H. Guo, Nonsymmetric algebraic Riccati equations and Wiener-Hopf factorization for M-matrices, SIAM J. Matrix Anal. Appl. 23 (2001) 225–242.
[8] C.H. Guo, A note on the minimal nonnegative solution of a nonsymmetric algebraic Riccati equation, Linear Algebra Appl. 357 (2002) 299–302.
[9] C.H. Guo, N.J. Higham, Iterative solution of a nonsymmetric algebraic Riccati equation, SIAM J. Matrix Anal. Appl. 29 (2007) 396–412.
[10] C.H. Guo, B. Iannazzo, B. Meini, On the doubling algorithm for a (shifted) algebraic Riccati equation, SIAM J. Matrix Anal. Appl. 29 (2007) 1083–1100.
[11] C.H. Guo, On algebraic Riccati equations associated with M-matrices, Linear Algebra Appl. 439 (2013) 2800–2814.
[12] C.H. Guo, D. Lu, On algebraic Riccati equations associated with regular singular M-matrices, Linear Algebra Appl. 493 (2016) 108–119.
[13] X.X. Guo, W.W. Lin, S.F. Xu, A structure-preserving doubling algorithm for nonsymmetric algebraic Riccati equation, Numer. Math. 103 (2006) 393–412.
[14] T.M. Huang, W.Q. Huang, R.C. Li, W.W. Lin, A new two-phase structure-preserving doubling algorithm for critically singular M-matrix algebraic Riccati equations, Numer. Linear Algebra Appl. 23 (2016) 291–313.
[15] J. Juang, W.W. Lin, Nonsymmetric algebraic Riccati equations and Hamilton-like matrices, SIAM J. Matrix Anal. Appl. 20 (1999) 228–243.
[16] H.Z. Lu, C.F. Ma, A new linearized implicit iteration method for nonsymmetric algebraic Riccati equations, J. Appl. Math. Comput. 50 (2015) 1–15.
[17] L.Z. Lu, Solution form and simple iteration of a nonsymmetric algebraic Riccati equation arising in transport theory, SIAM J. Matrix Anal. Appl. 26 (2005) 679–685.
[18] L.C.G. Rogers, Fluid models in queueing theory and Wiener-Hopf factorization of Markov chains, Ann. Appl. Probab. 4 (1994) 390–413.
[19] J.G. Xue, R.C. Li, Highly accurate doubling algorithms for M-matrix algebraic Riccati equations, Numer. Math. 135 (2017) 733–767.
[20] W.G. Wang, W.C. Wang, R.C. Li, Alternating-directional doubling algorithm for M-matrix algebraic Riccati equations, SIAM J. Matrix Anal. Appl. 33 (2012) 170–194.
[21] K.M. Zhou, P.P. Khargonekar, An algebraic Riccati equation approach to H∞ optimization, Syst. Control Lett. 11 (1988) 85–91.
[22] A.K. Zubair, Q.M. Ghulam, M.M. Dur, S.C. Muhammad, A modified linearized implicit iteration methods for nonsymmetric algebraic Riccati equations, Sci. Int. (Lahore) 28 (5) (2016) 4569–4572.