Fast verified computation for stabilizing solutions of discrete-time algebraic Riccati equations

Shinya Miyajima
Faculty of Science and Engineering, Iwate University, 020-8551, Japan
Email address: [email protected] (Shinya Miyajima)

Abstract

Fast iterative algorithms for computing interval matrices containing solutions of discrete-time algebraic Riccati equations are proposed. These algorithms involve only cubic complexity per iteration. The stabilizability and uniqueness of the contained solution can moreover be verified by these algorithms. Numerical results show the properties of the algorithms.

Key words: Algebraic Riccati equations, Stabilizing solution, Verified computation
2010 MSC: 15A24, 65F99, 65G20, 93C05

1. Introduction

Consider the following discrete-time algebraic Riccati equations (DAREs):
$$F(X) := A^H X A - X - A^H X B(R + B^H X B)^{-1} B^H X A + Q = 0, \quad (1)$$
where $A, Q \in \mathbb{C}^{n\times n}$, $B \in \mathbb{C}^{n\times m}$ and $R \in \mathbb{C}^{m\times m}$ are given, $Q$ and $R$ are Hermitian, $A^H$ and $B^H$ denote the conjugate transposes of $A$ and $B$, respectively, $X \in \mathbb{C}^{n\times n}$ is to be determined, and $m \le n$. DAREs appear in important problems in science and technology, e.g., discrete-time LQ-optimal control problems and an equation in ladder networks [1]. The solution $X$ of interest is a Hermitian matrix, and a Hermitian solution $X$ is called stabilizing if all eigenvalues of $A_X := A - B(R + B^H X B)^{-1} B^H X A$ are inside the unit disk. The stabilizing solution is required in practical applications; hence we focus on its computation. Throughout this paper, we say a solution $X$ has the "stabilizability" if $X$ is a stabilizing solution.

One common approach to solving (1) is to compute an $n$-dimensional deflating subspace of the matrix pencil $\lambda M - N$, where [2]
$$M := \begin{pmatrix} I_n & 0 & 0 \\ 0 & -A^H & 0 \\ 0 & 0 & R \end{pmatrix}, \qquad N := \begin{pmatrix} A & 0 & B \\ Q & -I_n & 0 \\ 0 & -B^H & 0 \end{pmatrix}, \quad (2)$$
and $I_n$ denotes the $n \times n$ identity matrix. If this subspace is spanned by $[V_1^T, V_2^T, V_3^T]^T$, where $V_1, V_2 \in \mathbb{C}^{n\times n}$ and $V_3 \in \mathbb{C}^{m\times n}$, and $V_1$ is nonsingular, then $V_2 V_1^{-1}$ is a solution of (1). We refer the reader to the books [1, 3] for details concerning the main theoretical properties, together with a description of the main tools for the design and analysis of solution algorithms. For example, other pencils for solving (1) are introduced in [1].
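The deflating-subspace approach just described can be realized numerically with an ordered QZ decomposition. The following MATLAB sketch is only an illustration of that common approach (plain floating-point arithmetic, assuming A, B, Q, R are given and a stabilizing solution exists); it is not one of the verified algorithms proposed in this paper.

```matlab
% Minimal floating-point sketch: solve the DARE (1) via the pencil
% lambda*M - N in (2) and an ordered QZ decomposition (not a verified computation).
n = size(A, 1);  m = size(B, 2);
M = blkdiag(eye(n), -A', R);
N = [A, zeros(n), B; Q, -eye(n), zeros(n, m); zeros(m, n), -B', zeros(m)];
[AA, BB, QQ, ZZ] = qz(N, M);                   % generalized Schur form of the pencil (N, M)
[~, ~, ~, ZS] = ordqz(AA, BB, QQ, ZZ, 'udi');  % eigenvalues inside the unit disk first
V  = ZS(:, 1:n);                               % basis of the n-dimensional deflating subspace
V1 = V(1:n, :);  V2 = V(n+1:2*n, :);
X  = V2 / V1;                                  % candidate solution V2*V1^{-1}
X  = (X + X') / 2;                             % symmetrize
```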

The work presented in this paper addresses the problem of computing verified solutions of (1), specifically, computing an interval matrix which is guaranteed to contain a solution. The pioneering work is the iterative algorithms proposed in [4, 5]. In [4] and [5], two algorithms and one algorithm are proposed, respectively, and the algorithm in [5] coincides with the first algorithm in [4]. In these algorithms, the special structure of (1) is skillfully exploited. These algorithms require $O(n^6)$ operations per iteration. To the author's best knowledge, no other verification algorithm for (1) has been written down in the literature. For continuous-time algebraic Riccati equations, on the other hand, verification algorithms which require only $O(n^3)$ operations have been proposed in [6, 7, 8].

The purpose of this paper is to propose two iterative algorithms for computing verified solutions of (1). These algorithms require only $O(n^3)$ operations per iteration, and moreover verify that the solution contained in the interval is stabilizing and unique. The verification of stabilizability and uniqueness seems to be the first attempt in the context of (1). The first algorithm utilizes a numerical spectral decomposition of $A_{\tilde X}^H$, where $\tilde X$ denotes a numerical solution of (1), and is applicable when a numerically computed eigenvector matrix of $A_{\tilde X}^H$ is not ill-conditioned. The second algorithm is based on a simple reformulation of (1) as a fixed point equation, and does not utilize the spectral decomposition. Although this algorithm is generally less reliable than the first one, it has the advantage of not breaking down when the eigenvector matrix is ill-conditioned.

This paper is organized as follows: In Section 2, notation and theories used in this paper are introduced. In Sections 3 and 4, the first and second algorithms are proposed, respectively. In Section 5, numerical results are reported. Section 6 finally summarizes the results in this paper and highlights possible extensions and future work.

2. Preliminaries

For $M \in \mathbb{C}^{m\times n}$, let $M_{ij}$, $M_{:j}$ and $\overline{M}$ be the $(i,j)$ element, the $j$-th column and the complex conjugate of $M$, respectively, $|M| := (|M_{ij}|)$, $M^T := (M_{ji})$, $M^H := \overline{M}^T$, $\|M\|_\infty := \max_i \sum_j |M_{ij}|$, $\|M\|_1 := \max_j \sum_i |M_{ij}|$ and $\|M\|_{\max} := \max_{i,j} |M_{ij}|$. If, in particular, $M$ is nonsingular, let $M^{-H} := (M^H)^{-1}$. For $M, N \in \mathbb{R}^{m\times n}$, $M \le N$ means that $M_{ij} \le N_{ij}$ holds for all $i$ and $j$. For $v \in \mathbb{C}^n$, $v_i$ denotes the $i$-th component of $v$. Let $\mathrm{diag}(v_1,\dots,v_n)$ be the $n\times n$ diagonal matrix whose $(i,i)$ element is $v_i$ for $i = 1,\dots,n$. For $v, w \in \mathbb{C}^n$ with $\|w\|_\infty < 1$ and $M, N \in \mathbb{C}^{n\times n}$ with $\|N\|_{\max} < 1$, define $\|v\|_w := \max_i(|v_i|/(1 - |w_i|))$ and $\|M\|_N := \max_{i,j}(|M_{ij}|/(1 - |N_{ij}|))$, respectively. Let eps, realmin, $I_n$, $\otimes$ and $./$ be the machine epsilon, the smallest positive normalized floating point number (in particular, eps $= 2^{-52}$ and realmin $= 2^{-1022}$ in IEEE 754 double precision), the $n\times n$ identity matrix, the Kronecker product (see, e.g., [9]) and pointwise division, respectively, and $e^{(n)} := (1,\dots,1)^T \in \mathbb{R}^n$. For $M^c \in \mathbb{C}^{n\times n}$ and $M^r \in \mathbb{R}^{n\times n}$ with $M^r \ge 0$, $\langle M^c, M^r\rangle$ denotes the interval matrix whose center and radius are $M^c$ and $M^r$, respectively. For a Fréchet differentiable matrix function $F(X)$, where $X \in \mathbb{C}^{n\times n}$, denote the Fréchet derivative (see, e.g., [10]) of $F$ at $X$ applied to the matrix $H$ by $F'_X(H)$. For an interval matrix $\boldsymbol{M}$, $\mathrm{int}(\boldsymbol{M})$ denotes the interior of $\boldsymbol{M}$. The notation $\mathrm{fl}(\cdot)$ denotes a result of floating point computation, where all operations inside the parentheses are executed by ordinary floating point arithmetic in rounding-to-nearest mode. The notations $\overline{\mathrm{fl}}(\cdot)$ and $\underline{\mathrm{fl}}(\cdot)$ denote rigorous upper and lower bounds, respectively, for the expressions inside the parentheses, obtained by rounding-mode-controlled floating point computations. For $M \in \mathbb{C}^{n\times n}$ and $v \in \mathbb{C}^{n^2}$, define
$$\mathrm{vec}(M) := \begin{pmatrix} M_{:1} \\ \vdots \\ M_{:n} \end{pmatrix} \quad\text{and}\quad \mathrm{mat}(v) := \begin{pmatrix} v_1 & v_{n+1} & \cdots & v_{n(n-1)+1} \\ \vdots & \vdots & & \vdots \\ v_n & v_{2n} & \cdots & v_{n^2} \end{pmatrix},$$
respectively. We then have $\mathrm{mat}(\mathrm{vec}(M)) = M$, $\mathrm{vec}(\mathrm{mat}(v)) = v$ and $\|M\|_N = \|\mathrm{vec}(M)\|_{\mathrm{vec}(N)}$ for $N \in \mathbb{C}^{n\times n}$ with $\|N\|_{\max} < 1$. If $v_i \ne 0$ for all $i$, then
$$(\mathrm{diag}(v_1,\dots,v_{n^2}))^{-1}\mathrm{vec}(M) = \mathrm{vec}(M./\mathrm{mat}(v)). \quad (3)$$
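As a quick sanity check of the vec/mat conventions and of identity (3), the following self-contained MATLAB lines (an illustration added here, not part of the original algorithms) can be run:

```matlab
% Small numerical check of the vec/mat conventions and identity (3).
n = 4;
M = randn(n) + 1i*randn(n);
v = randn(n^2, 1) + 1i*randn(n^2, 1);         % assume all entries nonzero
vecM = reshape(M, [], 1);                     % vec(M): stack the columns
matv = reshape(v, n, n);                      % mat(v): inverse operation
err1 = norm(reshape(vecM, n, n) - M, 'fro');  % mat(vec(M)) = M
err2 = norm(diag(v) \ vecM - reshape(M ./ matv, [], 1));  % identity (3)
disp([err1, err2])                            % both at rounding-error level
```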

We now cite Lemmas 1 to 4, Theorem 1, Corollary 1 and Lemma 5, which will be used frequently in what follows.

Lemma 1 (e.g., Meyer [11]). If $\|Z\|_\infty < 1$ for $Z \in \mathbb{C}^{n\times n}$, then $I_n - Z$ is nonsingular.

Lemma 2 (e.g., Horn and Johnson [9]). For any complex matrices $L$, $M$ and $N$ with compatible sizes, it holds that $\mathrm{vec}(LMN) = (N^T \otimes L)\,\mathrm{vec}(M)$.

Lemma 3 (Minamihata [12]). Let $Z \in \mathbb{C}^{n\times n}$, $f \in \mathbb{C}^n$ and $s := |Z|e^{(n)}$. If $\|s\|_\infty < 1$, then $I_n - Z$ is nonsingular and $|(I_n - Z)^{-1}||f| \le |f| + \|f\|_s\, s$.

Remark 1. The proof of Lemma 3 can be found in [13, Section 2].

Lemma 4 (Ionescu and Weiss [14]). Let $M$ and $N$ be as in (2), and let the $n$-dimensional deflating subspace of $\lambda M - N$ be spanned by $V = [V_1^T, V_2^T, V_3^T]^T$, where $V_1, V_2 \in \mathbb{C}^{n\times n}$ and $V_3 \in \mathbb{C}^{m\times n}$, i.e., $S \in \mathbb{C}^{n\times n}$ such that $NV = MVS$ exists. If all eigenvalues of $S$ lie inside the unit disk, then $V_1^H V_2 = V_2^H V_1$.

Corollary 1. Let $F(X)$ and $A_X$ be as in Section 1. If $F(X) = 0$ and all the eigenvalues of $A_X$ are inside the unit disk, then $X^H = X$.

Proof. Let $V := [I_n, X^T, (-(R + B^H X B)^{-1}B^H X A)^T]^T$, and let $M$ and $N$ be as in (2). Then $\mathrm{rank}(V) = n$, and we have $NV = MVA_X$ from $F(X) = 0$, i.e., $V$ spans the $n$-dimensional deflating subspace of $\lambda M - N$. Since all the eigenvalues of $A_X$ are inside the unit disk, Lemma 4 gives $V_1^H V_2 = V_2^H V_1$ with $V_1 = I_n$ and $V_2 = X$, i.e., $X = X^H$, which proves the result. □

Theorem 1 (Ionescu and Weiss [14]). The stabilizing solution of (1) is unique if it exists.

Lemma 5. Let $A, X, H \in \mathbb{C}^{n\times n}$, $B \in \mathbb{C}^{n\times m}$, $R \in \mathbb{C}^{m\times m}$, $R_X := R + B^H X B$, let $R_X$ and $R_{X+H}$ be nonsingular, and let $A_X$ be as in Section 1. Then
$$A^H(X+H)BR_{X+H}^{-1}B^H(X+H)A = A^H X B R_X^{-1}B^H X A + A^H H B R_X^{-1}B^H X A + A^H X B R_X^{-1}B^H H A_X + (A^H - A^H X B R_X^{-1}B^H)HBR_{X+H}^{-1}B^H H A_X.$$

Proof. For $M, N \in \mathbb{C}^{n\times n}$, we have $(M+N)^{-1} = M^{-1} - (M+N)^{-1}NM^{-1} = M^{-1} - M^{-1}N(M+N)^{-1}$ if $M$ and $M+N$ are nonsingular. Therefore,
$$R_{X+H}^{-1} = (R_X + B^H H B)^{-1} = R_X^{-1} - R_{X+H}^{-1}B^H H B R_X^{-1} \quad (4)$$
$$\phantom{R_{X+H}^{-1}} = R_X^{-1} - R_X^{-1}B^H H B R_{X+H}^{-1}. \quad (5)$$
Substituting (4) into (5) gives
$$R_{X+H}^{-1} = R_X^{-1} - R_X^{-1}B^H H B R_X^{-1} + R_X^{-1}B^H H B R_{X+H}^{-1}B^H H B R_X^{-1}. \quad (6)$$
From (4), (5) and (6), we finally obtain
$$A^H(X+H)BR_{X+H}^{-1}B^H(X+H)A = A^H X B(R_X^{-1} - R_X^{-1}B^H H B R_X^{-1} + R_X^{-1}B^H H B R_{X+H}^{-1}B^H H B R_X^{-1})B^H X A + A^H H B(R_X^{-1} - R_{X+H}^{-1}B^H H B R_X^{-1})B^H X A + A^H H B R_{X+H}^{-1}B^H H A + A^H X B(R_X^{-1} - R_X^{-1}B^H H B R_{X+H}^{-1})B^H H A$$
$$= A^H X B R_X^{-1}B^H X A + A^H H B R_X^{-1}B^H X A + A^H X B R_X^{-1}B^H H A_X + (A^H - A^H X B R_X^{-1}B^H)HBR_{X+H}^{-1}B^H H A_X. \qquad \square$$
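Since Lemma 5 is a purely algebraic identity, it can be checked numerically. The following self-contained MATLAB script (an illustration with randomly generated data, not part of the original paper) compares both sides; the difference should be at the level of rounding errors.

```matlab
% Numerical check of the identity in Lemma 5 with random data.
n = 5;  m = 3;
A = randn(n) + 1i*randn(n);  B = randn(n, m) + 1i*randn(n, m);
X = randn(n);  X = (X + X')/2;  H = randn(n);  H = (H + H')/2;
R = eye(m);
RX  = R + B'*X*B;          RXH = R + B'*(X + H)*B;
AX  = A - B*(RX\(B'*X*A));                 % A_X as defined in Section 1
lhs = A'*(X + H)*B*(RXH\(B'*(X + H)*A));
rhs = A'*X*B*(RX\(B'*X*A)) + A'*H*B*(RX\(B'*X*A)) ...
    + A'*X*B*(RX\(B'*H*AX)) + (A' - A'*X*B*(RX\B'))*H*B*(RXH\(B'*H*AX));
disp(norm(lhs - rhs, 'fro'))               % at rounding-error level
```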

Let $AV \approx V\Lambda$ be a numerical spectral decomposition of $A \in \mathbb{C}^{n\times n}$, where $\Lambda \in \mathbb{C}^{n\times n}$ is diagonal, and let $W \in \mathbb{C}^{n\times n}$ be the result of a numerical inversion of $V$. We can then expect $AV - V\Lambda \approx 0$ and $I_n - WV \approx 0$. All eigenvalues of $A$ can be enclosed using $V$, $\Lambda$ and $W$.

Lemma 6 (Miyajima [13]). Let $A, \Lambda, V, W \in \mathbb{C}^{n\times n}$ be given, $\Lambda$ be diagonal, $\hat E := W(AV - V\Lambda)$ and $Z := I_n - WV$. If $\||Z|e^{(n)}\|_\infty < 1$, then $V$ and $W$ are nonsingular, and all eigenvalues of $A$ are included in $\bigcup_{i=1}^n \langle\Lambda_{ii}, r_i\rangle$, where $r := |\hat E|e^{(n)} + \||\hat E|e^{(n)}\|_{|Z|e^{(n)}}\,|Z|e^{(n)}$.
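In floating point arithmetic, the quantities appearing in Lemma 6 can be evaluated as in the sketch below. This sketch uses ordinary rounding-to-nearest arithmetic only, so it merely approximates the enclosure radii; a rigorous implementation would evaluate the bounds with directed rounding or interval arithmetic (e.g., INTLAB). The matrix A and its order n are assumed to be given.

```matlab
% Approximate version of the eigenvalue enclosure of Lemma 6
% (no directed rounding, hence not rigorous; shown to illustrate the quantities).
[V, Lam] = eig(A);                 % numerical spectral decomposition A*V ~ V*Lam
W    = inv(V);                     % numerical inverse of V
e    = ones(n, 1);
Ehat = W*(A*V - V*Lam);            % residual of the decomposition
Z    = eye(n) - W*V;               % residual of the inversion
s    = abs(Z)*e;
if norm(s, inf) < 1
    t = abs(Ehat)*e;
    r = t + max(t ./ (1 - s)) * s; % r as in Lemma 6 (||.||_s is the weighted norm)
    % each eigenvalue of A then lies in the disk <Lam(i,i), r(i)> (up to rounding)
end
```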

3. A new verification algorithm utilizing a spectral decomposition

Let $F(X)$ and $A_X$ be as in Section 1, and let $\tilde X$ be a Hermitian numerical solution of (1). Then, the nonsingularity of $R + B^H\tilde X B$ and $F(\tilde X) \approx 0$ can be expected. Assume that, as the results of a numerical spectral decomposition and inversion, we have $\Lambda, V \in \mathbb{C}^{n\times n}$ with $\Lambda$ diagonal and $W \in \mathbb{C}^{n\times n}$ such that $A_{\tilde X}^H V \approx V\Lambda$ and $WV \approx I_n$, respectively. Define $E := \Lambda - WA_{\tilde X}^H V$ and $Z := I_n - WV$. Provided that $V$ is not ill-conditioned, we can expect $E \approx 0$ and $Z \approx 0$. If $\|Z\|_\infty < 1$, Lemma 1 gives that $I_n - Z$ is nonsingular, which implies the nonsingularity of $V$ and $W$. Hereafter, let $Y := V^{-1}XV^{-H}$, $\tilde Y := V^{-1}\tilde X V^{-H}$, $R_X := R + B^H X B$, $S_Y := R + (V^H B)^H Y V^H B$ and $\hat A_Y := A - BS_Y^{-1}B^H V Y V^H A$.

We first consider computing the interval matrix containing the solution $X^*$ of (1). The verification of the stabilizability and uniqueness will be discussed later. We have $F(X) = 0 \Leftrightarrow WF(X)W^H = 0$. The relation $X = VYV^H$ moreover gives
$$WF(X)W^H = WA^H V Y V^H A W^H - WVYV^H W^H + WQW^H - WA^H V Y V^H B S_Y^{-1} B^H V Y V^H A W^H.$$
Hence, (1) is equivalent to $G(Y) = 0$, where
$$G(Y) := WA^H V Y V^H A W^H - WVY(WV)^H + WQW^H - (V^H A W^H)^H Y V^H B S_Y^{-1} (V^H B)^H Y V^H A W^H.$$
Although $G(Y)$ seems to be more complicated than $F(X)$, its advantage becomes apparent when we consider its Fréchet derivative. In fact, Lemma 5 applied to $A := V^H A W^H$, $X := Y$ and $B := V^H B$ gives
$$(V^H A W^H)^H(Y+H)V^H B S_{Y+H}^{-1}(V^H B)^H(Y+H)V^H A W^H = (V^H A W^H)^H Y V^H B S_Y^{-1}(V^H B)^H Y V^H A W^H + WA^H V H V^H B S_Y^{-1} B^H V Y V^H A W^H + WA^H V Y V^H B S_Y^{-1} B^H V H V^H \hat A_Y W^H + o(\|H\|),$$
where the norm is any matrix norm. Therefore,
$$G(Y+H) = W(A^H - A^H V Y V^H B S_Y^{-1} B^H)V H V^H \hat A_Y W^H + G(Y) - WVH(WV)^H + o(\|H\|),$$
which shows
$$G'_Y(H) = W(A^H - A^H V Y V^H B S_Y^{-1} B^H)V H V^H \hat A_Y W^H - WVH(WV)^H.$$
This, $\tilde Y = V^{-1}\tilde X V^{-H}$, $S_{\tilde Y} = R_{\tilde X}$, $A_{\tilde X} = A - BR_{\tilde X}^{-1}B^H\tilde X A$, $\hat A_{\tilde Y} = A_{\tilde X}$, $\tilde X^H = \tilde X$, $E = \Lambda - WA_{\tilde X}^H V$ and $Z = I_n - WV$ yield
$$G'_{\tilde Y}(H) = WA_{\tilde X}^H V H(WA_{\tilde X}^H V)^H - WVH(WV)^H = (\Lambda - E)H(\Lambda - E)^H - (I_n - Z)H(I_n - Z)^H, \quad (7)$$
whereas $F'_{\tilde X}(H) = A_{\tilde X}^H H A_{\tilde X} - H$. Namely, the coefficient matrices of $G'_{\tilde Y}(H)$ are not too far from diagonal, whereas the coefficients $A_{\tilde X}^H$ and $A_{\tilde X}$ of $F'_{\tilde X}(H)$ are dense in general. In order to exploit the special structure of the coefficient matrices of $G'_{\tilde Y}(H)$, we deal with $G(Y) = 0$ instead of $F(X) = 0$. Namely, we compute an interval matrix $\boldsymbol Y$ containing $Y^* \in \mathbb{C}^{n\times n}$ such that $G(Y^*) = 0$, and enclose $X^*$ by computing a superset of $\{VYV^H : Y \in \boldsymbol Y\}$. Since $X^* = VY^*V^H$, the superset contains $X^*$.

We now discuss how to obtain $\boldsymbol Y$. If $G'_{\tilde Y}$ is invertible, we can define the Newton operator $N(Y) := Y - (G'_{\tilde Y})^{-1}G(Y)$, and $N(Y) = Y$ is a fixed point equation for $Y$. For computing $\boldsymbol Y$, we thus verify the invertibility of $G'_{\tilde Y}$ and the inclusion $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\} \subseteq \mathrm{int}(\langle\tilde Y, Y^r\rangle)$ for a given $Y^r \in \mathbb{R}^{n\times n}$ with $Y^r > 0$. If these hold, Brouwer's fixed point theorem implies $Y^* \in \mathrm{int}(\langle\tilde Y, Y^r\rangle)$, which gives $Y^* = N(Y^*) \in \{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\}$. Hence, an interval matrix including $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\}$ can be regarded as $\boldsymbol Y$.

We verify the invertibility of $G'_{\tilde Y}$ by the following idea: from (7) and Lemma 2, $G'_{\tilde Y}(H)$ can be represented in terms of a matrix-vector product as
$$\mathrm{vec}(G'_{\tilde Y}(H)) = \mathcal{P}\,\mathrm{vec}(H), \qquad \mathcal{P} := \overline{(\Lambda - E)}\otimes(\Lambda - E) - \overline{(I_n - Z)}\otimes(I_n - Z).$$
Therefore, if $\mathcal{P}$ is nonsingular, $G'_{\tilde Y}(H)$ is invertible. The nonsingularity of $\mathcal{P}$ can be verified with only $O(n^3)$ operations by exploiting the special structure.

Lemma 7. Let $\tilde X, V, \Lambda, W \in \mathbb{C}^{n\times n}$ with $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_n)$ be given, and let $A_{\tilde X}$, $E$, $Z$, $Y$, $R_X$ and $G'_{\tilde Y}(H)$ be as in the above discussion. Assume $R_{\tilde X}$ is nonsingular. Define $\lambda := (\lambda_1,\dots,\lambda_n)^T$, $D := \lambda\lambda^H - e^{(n)}e^{(n)T}$ and
$$T := |E|e^{(n)}\bigl(|WA_{\tilde X}^H V|e^{(n)}\bigr)^T + |\lambda|\bigl(|E|e^{(n)}\bigr)^T + |Z|e^{(n)}\bigl(|WV|e^{(n)}\bigr)^T + e^{(n)}\bigl(|Z|e^{(n)}\bigr)^T.$$
If $\||Z|e^{(n)}\|_\infty < 1$, then $V$ and $W$ are nonsingular. If additionally $|D| > 0$ and $\|T./|D|\|_{\max} < 1$, then $G'_{\tilde Y}(H)$ is invertible.

Proof. Let $\mathcal{P}$ be as in the above discussion, $\Delta := \overline{\Lambda}\otimes\Lambda - I_{n^2}$, $\Omega := \overline{(WA_{\tilde X}^H V)}\otimes E + \overline{E}\otimes\Lambda - \overline{(WV)}\otimes Z - \overline{Z}\otimes I_n$, and $\Omega_S := |WA_{\tilde X}^H V|\otimes|E| + |E|\otimes|\Lambda| + |WV|\otimes|Z| + |Z|\otimes I_n$. We prove the nonsingularity of $\mathcal{P}$. Since $\Lambda$ is diagonal, $\Delta$ is also diagonal. The elements $\Delta_{11},\dots,\Delta_{n^2n^2}$ can be written as $\overline{\lambda_1}\lambda_1 - 1,\dots,\overline{\lambda_1}\lambda_n - 1,\dots,\overline{\lambda_n}\lambda_1 - 1,\dots,\overline{\lambda_n}\lambda_n - 1$, respectively. From this and $D_{ij} = \lambda_i\overline{\lambda_j} - 1$, $i, j = 1,\dots,n$, we have $\mathrm{mat}((\Delta_{11},\dots,\Delta_{n^2n^2})^T) = D$. This and $|D| > 0$ give $\Delta_{kk} \ne 0$ for all $k$. Thus $\Delta$ is nonsingular, which shows
$$\mathcal{P} = \overline{\Lambda}\otimes\Lambda - I_n\otimes I_n - \overline{(\Lambda - E)}\otimes E - \overline{E}\otimes\Lambda + \overline{(I_n - Z)}\otimes Z + \overline{Z}\otimes I_n = \Delta - \Omega = \Delta(I_{n^2} - \Delta^{-1}\Omega). \quad (8)$$
Therefore, if $\|\Delta^{-1}\Omega\|_\infty < 1$, Lemma 1 yields the nonsingularity of $\mathcal{P}$. We hence prove $\|\Delta^{-1}\Omega\|_\infty < 1$. It holds from $|\Delta^{-1}\Omega| = |\Delta^{-1}||\Omega|$, $|\Omega| \le \Omega_S$, $|\Lambda|e^{(n)} = |\lambda|$, $\mathrm{mat}((\Delta_{11},\dots,\Delta_{n^2n^2})^T) = D$, (3) and Lemma 2 that
$$|\Delta^{-1}\Omega|e^{(n^2)} \le |\Delta^{-1}|\Omega_S\,\mathrm{vec}(e^{(n)}e^{(n)T}) = |\Delta^{-1}|\Bigl(\mathrm{vec}\bigl(|E|e^{(n)}(|WA_{\tilde X}^H V|e^{(n)})^T\bigr) + \mathrm{vec}\bigl(|\lambda|(|E|e^{(n)})^T\bigr) + \mathrm{vec}\bigl(|Z|e^{(n)}(|WV|e^{(n)})^T\bigr) + \mathrm{vec}\bigl(e^{(n)}(|Z|e^{(n)})^T\bigr)\Bigr) = |\Delta^{-1}|\mathrm{vec}(T) = \mathrm{vec}(T./|D|), \quad (9)$$
which gives $\|\Delta^{-1}\Omega\|_\infty = \||\Delta^{-1}\Omega|e^{(n^2)}\|_\infty \le \|\mathrm{vec}(T./|D|)\|_\infty = \|T./|D|\|_{\max}$. This and $\|T./|D|\|_{\max} < 1$ show $\|\Delta^{-1}\Omega\|_\infty < 1$. □
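For orientation, the test of Lemma 7 requires only a few matrix products and elementwise operations. The sketch below evaluates it in ordinary floating point, hence only approximately; Algorithm 1 evaluates these quantities rigorously. The variables Lam, V, W, AXt (a numerical $A_{\tilde X}$) and n are assumed to be given.

```matlab
% Floating-point illustration of the invertibility test in Lemma 7.
% Lam, V, W come from a spectral decomposition of AXt' (i.e., of A_{X~}^H).
lam = diag(Lam);  e = ones(n, 1);
E   = Lam - W*AXt'*V;
Z   = eye(n) - W*V;
D   = lam*lam' - e*e';                                   % D_{ij} = lam_i*conj(lam_j) - 1
T   = (abs(E)*e)*(abs(W*AXt'*V)*e)' + abs(lam)*(abs(E)*e)' ...
    + (abs(Z)*e)*(abs(W*V)*e)' + e*(abs(Z)*e)';
ok  = norm(abs(Z)*e, inf) < 1 && all(abs(D(:)) > 0) && max(max(T ./ abs(D))) < 1;
% If ok holds (with rigorous bounds), G'_{Y~} is invertible by Lemma 7.
```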

We verify the inclusion $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\} \subseteq \mathrm{int}(\langle\tilde Y, Y^r\rangle)$ by computing a superset of $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\}$. The superset can be computed by the following idea: the equality $N(Y) = Y - (G'_{\tilde Y})^{-1}G(Y)$ is equivalent to $G'_{\tilde Y}(N(Y)) = G'_{\tilde Y}(Y) - G(Y)$. Therefore, $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\}$ is the set of all solutions to the parameterized matrix equations
$$(\Lambda - E)N_Y(\Lambda - E)^H - (I_n - Z)N_Y(I_n - Z)^H = G'_{\tilde Y}(Y) - G(Y), \quad (10)$$
where $N_Y \in \mathbb{C}^{n\times n}$ is unknown and $Y \in \langle\tilde Y, Y^r\rangle$ is the parameter, which can be represented as $\mathcal{P}\,\mathrm{vec}(N_Y) = \mathrm{vec}(G'_{\tilde Y}(Y) - G(Y))$. Hence, the superset can be obtained by enclosing the solution set. The solution set can be enclosed with $O(n^3)$ operations by exploiting (8) and (9).

Lemma 8. Let $F(X)$, $\tilde Y$, $S_Y$ and $N(Y)$ be as in the above discussion, let $\tilde X$, $V$, $W$, $D$ and $T$ be as in Lemma 7, and let $Y^r \in \mathbb{R}^{n\times n}$ with $Y^r > 0$ be given. With all the assumptions in Lemma 7, suppose $S_Y$ is nonsingular for all $Y \in \langle\tilde Y, Y^r\rangle$. Let $G_S \ge |WF(\tilde X)W^H|$, $G_N \ge |WA_{\tilde X}^H V|\,Y^r\,|V^H B S_Y^{-1}(V^H B)^H|\,Y^r\,|WA_{\tilde X}^H V|^T$ for all $Y \in \langle\tilde Y, Y^r\rangle$, $G_D := (G_S + G_N)./|D|$ and $Y^\varepsilon := G_D + \|G_D\|_{T./|D|}\,(T./|D|)$. Then $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\} \subseteq \langle\tilde Y, Y^\varepsilon\rangle$.

Proof. We prove that $\langle\tilde Y, Y^\varepsilon\rangle$ includes the solution set of (10), i.e., $|\tilde Y - N_Y| \le Y^\varepsilon$ holds for any $Y \in \langle\tilde Y, Y^r\rangle$. Let $\mathcal{P}$ be as in the above discussion, and let $\Delta$ and $\Omega$ be as in the proof of Lemma 7. From Lemma 7, $\mathcal{P}$, $\Delta$ and $I_{n^2} - \Delta^{-1}\Omega$ are nonsingular. Any $Y \in \langle\tilde Y, Y^r\rangle$ can be written as $Y = \tilde Y + Y_P^r$, where $Y_P^r \in \mathbb{C}^{n\times n}$ satisfies $|Y_P^r| \le Y^r$. This representation, the nonsingularity of $S_Y$, $\tilde X = V\tilde Y V^H$, $S_{\tilde Y} = R_{\tilde X}$ and Lemma 5 applied to $A := V^H A W^H$, $X := \tilde Y$, $H := Y_P^r$ and $B := V^H B$ give
$$(V^H A W^H)^H Y V^H B S_Y^{-1}(V^H B)^H Y V^H A W^H = WA^H\tilde X B R_{\tilde X}^{-1}B^H\tilde X A W^H + WA^H V Y_P^r V^H B R_{\tilde X}^{-1}B^H\tilde X A W^H + WA^H\tilde X B R_{\tilde X}^{-1}B^H V Y_P^r(WA_{\tilde X}^H V)^H + G_P,$$
where $G_P := WA_{\tilde X}^H V Y_P^r V^H B S_Y^{-1}(V^H B)^H Y_P^r(WA_{\tilde X}^H V)^H$, which shows
$$G(Y) = WA^H V(\tilde Y + Y_P^r)V^H A W^H - WV(\tilde Y + Y_P^r)(WV)^H + WQW^H - WA^H\tilde X B R_{\tilde X}^{-1}B^H\tilde X A W^H - WA^H V Y_P^r V^H B R_{\tilde X}^{-1}B^H\tilde X A W^H - WA^H\tilde X B R_{\tilde X}^{-1}B^H V Y_P^r(WA_{\tilde X}^H V)^H - G_P = WF(\tilde X)W^H + G'_{\tilde Y}(Y_P^r) - G_P$$
for any $Y \in \langle\tilde Y, Y^r\rangle$. From this and $G'_{\tilde Y}(Y) = G'_{\tilde Y}(\tilde Y) + G'_{\tilde Y}(Y_P^r)$, we have $G'_{\tilde Y}(\tilde Y) - G'_{\tilde Y}(Y) + G(Y) = WF(\tilde X)W^H - G_P$. This, $\mathcal{P}\,\mathrm{vec}(N_Y) = \mathrm{vec}(G'_{\tilde Y}(Y) - G(Y))$, $\mathrm{mat}((\Delta_{11},\dots,\Delta_{n^2n^2})^T) = D$, (3) and (8) yield
$$\mathrm{vec}(\tilde Y - N_Y) = \mathrm{vec}(\tilde Y) - \mathcal{P}^{-1}\mathrm{vec}(G'_{\tilde Y}(Y) - G(Y)) = \mathcal{P}^{-1}\bigl(\mathcal{P}\,\mathrm{vec}(\tilde Y) - \mathrm{vec}(G'_{\tilde Y}(Y) - G(Y))\bigr) = (\Delta(I_{n^2} - \Delta^{-1}\Omega))^{-1}\mathrm{vec}(G'_{\tilde Y}(\tilde Y) - G'_{\tilde Y}(Y) + G(Y)) = (I_{n^2} - \Delta^{-1}\Omega)^{-1}\Delta^{-1}\mathrm{vec}(WF(\tilde X)W^H - G_P) = (I_{n^2} - \Delta^{-1}\Omega)^{-1}\mathrm{vec}\bigl((WF(\tilde X)W^H - G_P)./D\bigr). \quad (11)$$
From $|Y_P^r| \le Y^r$, it follows for any $Y \in \langle\tilde Y, Y^r\rangle$ that
$$|G_P| \le |WA_{\tilde X}^H V|\,|Y_P^r|\,|V^H B S_Y^{-1}(V^H B)^H|\,|Y_P^r|\,|WA_{\tilde X}^H V|^T \le G_N.$$
From this, $G_S \ge |WF(\tilde X)W^H|$, (9), (11) and Lemma 3, we finally obtain
$$\mathrm{vec}(|\tilde Y - N_Y|) \le |(I_{n^2} - \Delta^{-1}\Omega)^{-1}|\,\mathrm{vec}\bigl((|WF(\tilde X)W^H| + |G_P|)./|D|\bigr) \le |(I_{n^2} - \Delta^{-1}\Omega)^{-1}|\,\mathrm{vec}(G_D) \le \mathrm{vec}(G_D) + \|\mathrm{vec}(G_D)\|_{|\Delta^{-1}\Omega|e^{(n^2)}}\,|\Delta^{-1}\Omega|e^{(n^2)} \le \mathrm{vec}(G_D) + \|\mathrm{vec}(G_D)\|_{\mathrm{vec}(T./|D|)}\,\mathrm{vec}(T./|D|) = \mathrm{vec}(Y^\varepsilon),$$
which proves $|\tilde Y - N_Y| \le Y^\varepsilon$ for any $Y_P^r$, i.e., for any $Y$. □

Lemmas 7 and 8 yield the theory for computing the interval matrix containing $X^*$, i.e., the superset of $\{VYV^H : Y \in \boldsymbol Y\}$.

Theorem 2. Let $\tilde X$ and $V$ be as in Lemma 7, let $Y^r$ and $Y^\varepsilon$ be as in Lemma 8, and let $X^\varepsilon \ge |V|Y^\varepsilon|V|^T$. With all the assumptions in Lemmas 7 and 8, suppose $Y^\varepsilon < Y^r$. Then $\langle\tilde X, X^\varepsilon\rangle$ includes the solution $X^*$ of (1).

Proof. Let $Y^*$, $\boldsymbol Y$ and $N(Y)$ be as in the above discussion. From $Y^\varepsilon < Y^r$ and Lemmas 7 and 8, $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\} \subseteq \langle\tilde Y, Y^\varepsilon\rangle \subseteq \mathrm{int}(\langle\tilde Y, Y^r\rangle)$. Hence, Brouwer's fixed point theorem implies $Y^* \in \mathrm{int}(\langle\tilde Y, Y^r\rangle)$, which shows $Y^* = N(Y^*) \in \{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\} \subseteq \langle\tilde Y, Y^\varepsilon\rangle$. We can thus put $\boldsymbol Y = \langle\tilde Y, Y^\varepsilon\rangle$. This, $X^* = VY^*V^H$ and a center-radius interval arithmetic evaluation (e.g., [17]) yield $X^* \in \{VYV^H : Y \in \boldsymbol Y\} \subseteq V\langle\tilde Y, Y^\varepsilon\rangle V^H \subseteq \langle\tilde X, |V|Y^\varepsilon|V|^T\rangle \subseteq \langle\tilde X, X^\varepsilon\rangle$. □

We next discuss the verification of the stabilizability and uniqueness. Let $X^* \in \langle\tilde X, X^\varepsilon\rangle$. Since the eigenvalues of $A_{X^*}$ and $A_{X^*}^H$ have the same magnitude, we consider the eigenvalues of $A_{X^*}^H$ instead, and show that all eigenvalues of $A_X^H$ are inside the unit disk for any $X \in \langle\tilde X, X^\varepsilon\rangle$. If this is true, $A_{X^*}$ also has this eigenvalue property. This and Corollary 1 give $(X^*)^H = X^*$. These properties show the stabilizability of $X^*$. If the eigenvalue property of $A_X^H$ for any $X \in \langle\tilde X, X^\varepsilon\rangle$ is shown, moreover, we can prove not only the stabilizability but also the uniqueness, i.e., that no other solution is included in $\langle\tilde X, X^\varepsilon\rangle$.

Theorem 3. Let $\tilde X$, $V$, $\Lambda$, $W$ and $\lambda_i$, $i = 1,\dots,n$, be as in Lemma 7, let $Z$ be as in the above discussion, let $\langle\tilde X, X^\varepsilon\rangle$ contain the solution, let $R_X$ be nonsingular for all $X \in \langle\tilde X, X^\varepsilon\rangle$, let $A_X^c \in \mathbb{C}^{n\times n}$ and $A_X^r \in \mathbb{R}^{n\times n}$ with $A_X^r \ge 0$ satisfy $\{A_X : X \in \langle\tilde X, X^\varepsilon\rangle\} \subseteq \langle A_X^c, A_X^r\rangle$, $\hat E := W((A_X^c)^H V - V\Lambda)$ and $v := |\hat E|e^{(n)} + |W|(A_X^r)^T|V|e^{(n)}$. Suppose $\||Z|e^{(n)}\|_\infty < 1$ and define $r := v + \|v\|_{|Z|e^{(n)}}\,|Z|e^{(n)}$. If $\max_i(|\lambda_i| + r_i) < 1$, the contained solution is stabilizing and unique.

Proof. We show that all the eigenvalues of $A_X^H$ are inside the unit disk for any $X \in \langle\tilde X, X^\varepsilon\rangle$. Let $\hat E_X := W(A_X^H V - V\Lambda)$. From $\||Z|e^{(n)}\|_\infty < 1$ and Lemma 6, all the eigenvalues of $A_X^H$ are included in $\bigcup_{i=1}^n\langle\lambda_i, u_i\rangle$, where $u := |\hat E_X|e^{(n)} + \||\hat E_X|e^{(n)}\|_{|Z|e^{(n)}}\,|Z|e^{(n)}$. The matrix $A_X$ can be written as $A_X = A_X^c + A_P^r$, where $A_P^r \in \mathbb{C}^{n\times n}$ satisfies $|A_P^r| \le A_X^r$. This representation gives
$$|\hat E_X|e^{(n)} = |\hat E + W(A_P^r)^H V|e^{(n)} \le |\hat E|e^{(n)} + |W||A_P^r|^T|V|e^{(n)} \le v.$$
This implies $u \le r$; hence $\bigcup_{i=1}^n\langle\lambda_i, r_i\rangle$ also contains all the eigenvalues. From this and $\max_i(|\lambda_i| + r_i) < 1$, all the eigenvalues are inside the unit disk. This and the above discussion show the stabilizability. We can moreover prove the uniqueness from the eigenvalue property: if two solutions were included in $\langle\tilde X, X^\varepsilon\rangle$, both of them would be stabilizing, which contradicts Theorem 1. □

Remark 2. In the proposed algorithm, $V$, $\Lambda$ and $W$ are updated exploiting a numerical spectral decomposition of $(A_X^c)^H$. By this update, we can expect each component of $r$ to become smaller.

Based on these theories, we propose Algorithm 1 for enclosing the solution.

Algorithm 1. Let $\alpha$ and $k_{\max}$ be a real number and an integer such that $\alpha > 1$ and $k_{\max} > 1$, respectively, and let $\langle A_{\tilde X}^c, A_{\tilde X}^r\rangle$ and $\boldsymbol{S}_Y$ be interval matrices enclosing $A_{\tilde X}$ and $\{S_Y : Y \in \langle\tilde Y, Y^r\rangle\}$, respectively. This algorithm computes $\tilde X$ and $X^\varepsilon$ such that $\langle\tilde X, X^\varepsilon\rangle \ni X^*$. The stabilizability and uniqueness are also verified.

Step 1. Compute $\tilde X$ by numerically solving (1) using a known algorithm. If $\tilde X$ is not Hermitian, update $\tilde X$ such that $\tilde X = \mathrm{fl}((\tilde X + \tilde X^H)/2)$.

Step 2. Verify the nonsingularity of $R_{\tilde X}$ utilizing a known algorithm. If the verification failed, terminate with failure. Otherwise, compute $\langle A_{\tilde X}^c, A_{\tilde X}^r\rangle$.

Step 3. Compute $\Lambda$ and $V$ by a numerical spectral decomposition of $(A_{\tilde X}^c)^H$. Calculate $W$ such that $W = \mathrm{fl}(V^{-1})$.

Step 4. Compute $\mathrm{fl}(|Z|e^{(n)})$. If $\mathrm{fl}(\||Z|e^{(n)}\|_\infty) \ge 1$, terminate with failure.

Step 5. Compute $\mathrm{fl}(|D|)$. If $\mathrm{fl}(\min_{i,j}|D_{ij}|) = 0$, terminate with failure.

Step 6. Compute $\mathrm{fl}(T./|D|)$. If $\mathrm{fl}(\|T./|D|\|_{\max}) \ge 1$, terminate with failure.

Step 7. Let $G_S = \mathrm{fl}(|WF(\tilde X)W^H|)$, and initialize $k$ and $Y^r$ such that $k = 1$ and $Y^r = \mathrm{fl}(\alpha G_S)$, respectively. If $Y^r_{ij} < \mathrm{eps}$, update the corresponding entries as $Y^r_{ij} = \mathrm{eps}$.

Step 8. If $k = k_{\max}$, terminate with failure. Otherwise, compute $\boldsymbol{S}_Y$ and verify the nonsingularity of all matrices contained in $\boldsymbol{S}_Y$. If the verification failed, terminate with failure.

Step 9. Compute $\mathrm{fl}(Y^\varepsilon)$ using $\boldsymbol{S}_Y$. If $\mathrm{fl}(Y^\varepsilon) < Y^r$, go to Step 10. Otherwise, update $k$ and $Y^r$ such that $k = k+1$ and $Y^r = \mathrm{fl}(\mathrm{eps}\,|\mathrm{fl}(\tilde Y)| + (1+\mathrm{eps})Y^\varepsilon)$, respectively, and go back to Step 8.

Step 10. Let $X^\varepsilon = \mathrm{fl}(|V|Y^\varepsilon|V|^T)$. Verify the nonsingularity of $R_X$ for all $X \in \langle\tilde X, X^\varepsilon\rangle$. If the verification failed, the validation of the stabilizability and uniqueness failed. Terminate.

Step 11. Compute $A_X^c$ and $A_X^r$, and update $\Lambda$, $V$ and $W$ exploiting the spectral decomposition of $(A_X^c)^H$. If $\mathrm{fl}(\||Z|e^{(n)}\|_\infty) \ge 1$, the validation failed. Terminate. Otherwise, compute $\mathrm{fl}(r)$. If $\mathrm{fl}(\max_i(|\lambda_i| + r_i)) < 1$, the contained solution is stabilizing and unique. Terminate.

Remark 3. We have $F(\tilde X) = A^H\tilde X A_{\tilde X} - \tilde X + Q$. In Algorithms 1 and 2, therefore, we reuse $\langle A_{\tilde X}^c, A_{\tilde X}^r\rangle$ in the evaluation of $F(\tilde X)$.

Remark 4. Let $Y_P^r$ be as in the proof of Lemma 8. Since $S_Y = R + B^H\tilde X B + (V^H B)^H Y_P^r V^H B$ and $Y_P^r \in \langle 0, Y^r\rangle$, $\boldsymbol{S}_Y$ can be obtained as the result of an interval arithmetic evaluation of $R + B^H\tilde X B + (V^H B)^H\langle 0, Y^r\rangle V^H B$, that is, $V^{-1}$ and $V^{-H}$ are unnecessary in the computation of $\boldsymbol{S}_Y$. In the computation of $\mathrm{fl}(\tilde Y)$, in contrast, we need to consider $V^{-1}$ and $V^{-H}$. In this computation, on the other hand, only an approximation of $\tilde Y$ is required, i.e., its enclosure is unnecessary.

Remark 5. The update of $Y^r$ in Step 9 is based on epsilon inflation [15, 16], which has a precise theoretical justification [16]. In the epsilon inflation, a new candidate set is the product of $\langle 1, \mathrm{eps}\rangle$ and an interval enclosing $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\}$. The update roughly corresponds to this product, since $\{N(Y) : Y \in \langle\tilde Y, Y^r\rangle\} \subseteq \langle\tilde Y, Y^\varepsilon\rangle$ and $\langle 1, \mathrm{eps}\rangle\langle\tilde Y, Y^\varepsilon\rangle \subseteq \langle\tilde Y, \mathrm{eps}|\tilde Y| + (1+\mathrm{eps})Y^\varepsilon\rangle$.

The following proposition clarifies the computational efficiency of Algorithm 1.

Proposition 1. Algorithm 1 has a cost of $O(n^3)$ operations per iteration.

Proof. Computing an enclosure for $\{S_Y^{-1}(V^H B)^H : S_Y \in \boldsymbol{S}_Y\}$ has cost $O(n^3)$. All the other matrix-matrix operations (multiplications, additions or pointwise divisions) in Steps 8 and 9 involve $n\times m$ or $n\times n$ matrices, so their costs are again $O(n^3)$. Therefore, the cost for computing $\mathrm{fl}(Y^\varepsilon)$ is $O(n^3)$. □

Suppose $\tilde X$ can be obtained with $O(n^3)$ operations. This assumption is acceptable; in fact, the common approach discussed in Section 1 has cost $O(n^3)$. Then, Steps 1 to 7, 10 and 11 also have costs $O(n^3)$, since the costs for the numerical spectral decompositions and inversions are cubic in $m$ or $n$, and computing an enclosure for $R_{\tilde X}^{-1}B^H$ and all the other matrix-matrix operations also have costs $O(n^3)$. Therefore, Algorithm 1 requires $O(kn^3)$ operations, where $k$ is the number of iterations. If $\mathrm{fl}(Y^\varepsilon) < Y^r$ does not hold within $k_{\max}$ iterations, the algorithm fails and involves $O(k_{\max}n^3)$ operations.

4. A new verification algorithm based on a simple reformulation

While Algorithm 1 works for many examples, an essential limitation is that it requires $V$ to be not ill-conditioned. If $V$ is ill-conditioned, we cannot expect $E \approx 0$ and $Z \approx 0$; hence $\||Z|e^{(n)}\|_\infty < 1$ or $\|T./|D|\|_{\max} < 1$ are less likely to hold. A striking example of this phenomenon is [18, Example 4.1], which is taken up in Section 5. To overcome this issue, we propose a different algorithm. While this algorithm is straightforward and succeeds on fewer examples, it does not require $V$ to be well-conditioned, since it does not utilize the numerical spectral decomposition $A_{\tilde X}^H V \approx V\Lambda$.

We rewrite (1) as follows: we can write $X = \tilde X + U$ for an unknown correction matrix $U$. As mentioned above, $\tilde X$ is Hermitian. If $X$ is a Hermitian solution, then $U$ is also Hermitian. From $\tilde X^H = \tilde X$ and Lemma 5 applied to $X := \tilde X$ and $H := U$, moreover, we obtain
$$F(\tilde X + U) = A_{\tilde X}^H U A_{\tilde X} - U - A_{\tilde X}^H U B R_{\tilde X+U}^{-1}B^H U A_{\tilde X} + F(\tilde X).$$
Therefore, (1) is equivalent to
$$U = P(U), \qquad P(U) := A_{\tilde X}^H U(I_n - BR_{\tilde X+U}^{-1}B^H U)A_{\tilde X} + F(\tilde X),$$
which is also mentioned in [4]. We thus compute an interval matrix $\boldsymbol{P}$ such that $\boldsymbol{P} \supseteq \{P(U) : U \in \boldsymbol{U}\}$ for a given interval matrix $\boldsymbol{U}$. If $\boldsymbol{P} \subseteq \mathrm{int}(\boldsymbol{U})$, the Brouwer theorem implies that $\mathrm{int}(\boldsymbol{U})$ contains $U^* \in \mathbb{C}^{n\times n}$ such that $U^* = P(U^*)$. We then have $U^* = P(U^*) \in \{P(U) : U \in \boldsymbol{U}\} \subseteq \boldsymbol{P}$, so that $X^* \in \tilde X + \boldsymbol{P}$. This discussion gives

Algorithm 2. Let $l_{\max}$ and $\boldsymbol{F}$ be an integer and an interval matrix such that $l_{\max} > 1$ and $\boldsymbol{F} \ni F(\tilde X)$, respectively. This algorithm computes an interval matrix $\boldsymbol{X}$ such that $\boldsymbol{X} \ni X^*$, and verifies the stabilizability and uniqueness.

Steps 1 and 2. Similar to those in Algorithm 1.

Step 3. Compute $\boldsymbol{F}$ and initialize $l$ and $\boldsymbol{U}$ such that $l = 1$ and $\boldsymbol{U} = \langle 1, \mathrm{eps}\rangle\boldsymbol{F} + \langle 0, \mathrm{realmin}\rangle e^{(n)}e^{(n)T}$, respectively.

Step 4. If $l = l_{\max}$, terminate with failure. Otherwise, verify the nonsingularity of $R_{\tilde X+U}$ for all $U \in \boldsymbol{U}$. If the verification failed, terminate with failure. Otherwise, compute $\boldsymbol{P}$.

Step 5. If $\boldsymbol{P} \subseteq \mathrm{int}(\boldsymbol{U})$, go to Step 6. Otherwise, update $l$ and $\boldsymbol{U}$ such that $l = l + 1$ and $\boldsymbol{U} = \langle 1, \mathrm{eps}\rangle\boldsymbol{P} + \langle 0, \mathrm{realmin}\rangle e^{(n)}e^{(n)T}$, respectively, and go back to Step 4.

Step 6. Compute $\boldsymbol{X}$ such that $\boldsymbol{X} = \tilde X + \boldsymbol{P}$.

Step 7. Verify the stabilizability and uniqueness similarly to Steps 10 and 11 of Algorithm 1 by putting $\langle\tilde X, X^\varepsilon\rangle := \boldsymbol{X}$. Terminate.

The update of $\boldsymbol{U}$ in Step 5 is again based on the epsilon inflation. We obtain a result similar to Algorithm 1 regarding the computational cost.

Proposition 2. Algorithm 2 has a cost of $O(n^3)$ operations per iteration.

Proof. Computing an enclosure for $\{R_{\tilde X+U}^{-1}B^H : U \in \boldsymbol{U}\}$ and all the other parts in Steps 4 and 5 have costs $O(n^3)$. Thus, the result follows. □
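The core of Algorithm 2 is the evaluation of an enclosure P of {P(U) : U in U} and the inclusion test P ⊆ int(U). The following INTLAB-style sketch shows the shape of one iteration; it is only an outline under the assumptions that Xt (a Hermitian numerical solution), an interval enclosure AXt of $A_{\tilde X}$, an interval matrix FX containing $F(\tilde X)$, the data B, R and the bound lmax are given, and that in0 tests inclusion in the interior. The actual implementations (M3/M4 in Section 5) additionally verify the nonsingularity of $R_{\tilde X+U}$ and perform Steps 6 and 7.

```matlab
% One iteration of Algorithm 2 in INTLAB-style notation (sketch only).
n = size(Xt, 1);
U = midrad(1, eps)*FX + midrad(0, realmin)*ones(n);    % initial candidate set (Step 3)
for l = 1:lmax
    RXU = intval(R) + B'*(intval(Xt) + U)*B;           % R_{X~+U} over U
    W1  = verifylss(RXU, B'*(U*AXt));                  % encloses R_{X~+U}^{-1} B^H U A_{X~}
    P   = AXt'*(U*AXt) - (AXt'*(U*B))*W1 + FX;         % encloses {P(U) : U in U}
    if all(all(in0(P, U)))                             % P in int(U): fixed point verified
        X = Xt + P;  break
    end
    U = midrad(1, eps)*P + midrad(0, realmin)*ones(n); % epsilon inflation (Step 5)
end
```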

Remark 6. A change of basis analogous to [8, Section 4] is possible. The author incorporated this change into Algorithm 2 and applied the resulting algorithm to the examples in Section 5; since it did not give better results than Algorithm 2, we omit it.

5. Numerical results

We used a computer with an Intel Core 2.60GHz CPU, 8.00GB RAM and MATLAB R2012a with the Intel Math Kernel Library and IEEE 754 double precision. We denote the compared algorithms as follows:

LOT1: the first verification algorithm in [4],
LOT2: the second verification algorithm in [4],
M1: Algorithm 1,
M2: M1 with an accurate evaluation of $F(\tilde X)$,
M3: Algorithm 2, and
M4: M3 with the accurate evaluation of $F(\tilde X)$.

In all the algorithms, we computed $\tilde X$ by the function dare.m from the MATLAB Control System Toolbox. The enclosures for the products of inverse matrices and matrices were computed by the INTLAB [19] function verifylss.m, except $R_{\tilde X}^{-1}B^H$ in the evaluations of $A_{\tilde X}$ and $F(\tilde X)$ in M2 and M4. The function verifylss.m computes intervals containing solutions of linear systems. In Step 2 of LOT1, $\delta$ was set to $2^{-1}$ in the initial stage, and it was divided by 2 whenever the verification failed. This process was repeated at most 50 times as long as the verification failed; we call this iteration the "existence iteration". Once the verification succeeded, $\tilde X$ was refined, $\delta$ was divided by 2, and the verification was repeated at most 50 times as long as it succeeded; we call this iteration the "improvement iteration". Note that LOT2 also includes these iterations, since this algorithm calls LOT1. In LOT2, the refinement of $\tilde X$, i.e., Steps 2 and 3 of that algorithm, was repeated at most 50 times as long as $\mathrm{fl}(\|F(\tilde X)\|_\infty)$ decreased; we call this iteration the "refinement iteration". In M1 and M2, we set $\alpha = 2$ and $k_{\max} = 50$. The numerical spectral decompositions and inversions were executed by the MATLAB functions eig.m and \ (mldivide.m), respectively. In M2 and M4, the enclosure (i.e., the center and radius) of $R_{\tilde X}^{-1}B^H$ in $F(\tilde X)$ was computed utilizing a matrix version of Lemma 3 (see [13, Corollary 2.4]), and the center is stored as two terms: the first and second terms are an approximation of $R_{\tilde X}^{-1}B^H$ and its correction matrix, respectively. The other parts in $F(\tilde X)$ were evaluated using the INTLAB function AccDot.m, which computes dot products with extended precision. Although input matrices in AccDot.m must be real, M2 and M4 worked in the examples below, since the input matrices in the examples were real. Note that not only the evaluation of $F(\tilde X)$ but also that of $A_{\tilde X}$ differs between M1 and M3, and between M2 and M4, since the evaluation of $A_{\tilde X}$ is reused in that of $F(\tilde X)$ (see Remark 3). In M3 and M4, we set $l_{\max} = 2n$. See http://web.cc.iwate-u.ac.jp/~miyajima/DARE.zip for the details of the implementations, where the INTLAB codes of these algorithms (denoted by LOT1.m, LOT2.m, M1.m, M2.m, M3.m and M4.m) are uploaded.
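For reference, the numerical solution $\tilde X$ used in Step 1 of both algorithms can be computed and symmetrized as follows; the residual evaluation is only a floating-point plausibility check, not a verification.

```matlab
% Computing the numerical solution Xt used as the center of the enclosures
% (floating point only; the verification is done by the algorithms above).
Xt  = dare(A, B, Q, R);             % Control System Toolbox solver
Xt  = (Xt + Xt')/2;                 % enforce Hermitian symmetry (Step 1)
RX  = R + B'*Xt*B;
AX  = A - B*(RX\(B'*Xt*A));         % numerical closed-loop matrix A_{X~}
res = norm(A'*Xt*AX - Xt + Q, 1);   % residual ||F(Xt)||_1 via Remark 3
```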

Let $\langle\tilde X, X^\varepsilon\rangle \ni X^*$. To assess the quality of the enclosures, define the maximum radius as $\max_{i,j}X^\varepsilon_{ij}$. In all the examples below, we chose $R = I_m$. In the first and second examples, we computed $A$, $B$ and $Q$ via floating point operations in rounding-to-nearest mode, treated 100 problems for each parameter using randomly generated matrices, took the maximum values of the maximum radii, and calculated the mean values of the computing times and of the numbers of iterations. Therefore, Tables 2 and 5 contain iteration numbers that are not integers. For some parameters, the algorithms failed in fewer than 100 problems. In these cases, we show the maximum and mean values over the succeeded problems.

Example 1. We observe the properties of the algorithms for [20, Example 2], where $m = 4$, $A = A_0/\|A_0\|_1$ and $A_0$ is obtained from the centered finite difference discretization of the operator
$$L(u) = \Delta u - 10y\frac{\partial u}{\partial x} - 10\frac{\partial u}{\partial y} - 100u$$
on the unit square $[0,1]\times[0,1]$ with homogeneous Dirichlet boundary conditions. Then $n = n_0^2$, where $n_0$ is the number of inner grid points in each direction. The matrices $B$ and $Q$ were generated by the MATLAB codes B = rand(n,m); and C = rand(m,n); Q = C'*C;, respectively.
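One plausible realization of this construction is sketched below (my reading of the discretization in [20, Example 2]; the exact scaling used there may differ), as a self-contained MATLAB script.

```matlab
% Possible construction of A0 for Example 1: centered finite differences for
% L(u) = Laplacian(u) - 10*y*u_x - 10*u_y - 100*u on the unit square with
% homogeneous Dirichlet boundary conditions (one plausible reading of [20]).
n0 = 10;  h = 1/(n0 + 1);  n = n0^2;
I1 = eye(n0);
T  = full(gallery('tridiag', n0, 1, -2, 1));          % 1D second-difference matrix
C  = full(gallery('tridiag', n0, -1, 0, 1));          % 1D centered first difference
y  = (1:n0)'*h;                                       % inner grid points in y
A0 = (kron(I1, T) + kron(T, I1))/h^2 ...              % Laplacian
   - 10*kron(diag(y), C)/(2*h) ...                    % -10*y*du/dx
   - 10*kron(C, I1)/(2*h) ...                         % -10*du/dy
   - 100*eye(n);
A  = A0 / norm(A0, 1);
B  = rand(n, 4);  Ctmp = rand(4, n);  Q = Ctmp'*Ctmp; % as in the text, m = 4
```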

Table 1 displays the maximum radii and computing times (sec) of the algorithms for various $n_0$. Table 2 shows the numbers of iterations. For some parameters, the algorithms could not prove the existence of the solution and/or the stabilizability and uniqueness within the 100 problems; Table 3 therefore shows the numbers of problems where the algorithms could not prove them. The reasons for the failure of proving the existence are listed below. In some items, two reasons are given: since 100 problems were treated, some problems failed due to the first reason and the others due to the second. The same applies to the next three lists.

LOT1 and LOT2 when $n_0 = 7$: They did not succeed in computing the enclosures after 50 iterations,
LOT1 and LOT2 when $n_0 = 8$ or more: Due to out of memory,
M1 when $n_0 = 9, 10$: Either it did not succeed after 50 iterations, or the nonsingularity of $S_Y$, $\forall Y \in \langle\tilde Y, Y^r\rangle$ could not be verified,
M1 when $n_0 = 11, 12$: The nonsingularity could not be verified,
M1 when $n_0 = 16$ or more: Either the nonsingularity or $\|T./|D|\|_{\max} < 1$ could not be verified,
M3 when $n_0 = 3$, and M4 when $n_0 = 3, 4$: They did not succeed after $2n$ iterations,
M4 when $n_0 = 5$: Either it did not succeed after $2n$ iterations, or the nonsingularity of $R_{\tilde X+U}$, $\forall U \in \boldsymbol{U}$ could not be verified, and
M3 when $n_0 = 4$ or more, and M4 when $n_0 = 6$ or more: The nonsingularity could not be verified.

The reasons for the failure of proving the stabilizability and uniqueness are:

M1 when $n_0 = 4$ to 6, and M2: The inequality $\mathrm{fl}(\max_i(|\lambda_i| + r_i)) < 1$ was not satisfied,
M1 when $n_0 = 7$: Either the inequality was not satisfied, or the nonsingularity of $R_X$, $\forall X \in \langle\tilde X, X^\varepsilon\rangle$ could not be verified, and
M1 when $n_0 = 8$ or more: The nonsingularity could not be verified.

We see from Table 1 that the radii by M1 and M2 were larger and smaller, respectively, than those by LOT1 and LOT2. The algorithms M1 and M2 were faster than LOT1 and LOT2. In particular, the growth rates of the computing times of M1 and M2 were smaller than those of LOT1 and LOT2. These results coincide with the fact that the computational costs of M1 and M2 are $O(n^3)$ operations per iteration, whereas those of LOT1 and LOT2 are $O(n^6)$ operations per iteration. As mentioned above, when $n_0 = 8$ or more, LOT1 and LOT2 failed due to out of memory. This is because $n^2\times n^2$ (i.e., $n_0^4\times n_0^4$) matrices are stored in memory during their executions. On the other hand, at most $n\times n$ matrices are stored during the executions of M1 and M2; therefore, they did not run out of memory.

Example 2. We observe the properties for [20, Example 3], where $m = 10$, $A = (M - \delta_t K)^{-1}M$, $B = \delta_t(M - \delta_t K)^{-1}F$, $Q$ was obtained similarly to Example 1,
$$M = \frac{1}{6n}\begin{pmatrix} 4 & 1 & & \\ 1 & \ddots & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 4 \end{pmatrix}, \qquad K = -\alpha n\begin{pmatrix} 2 & -1 & & \\ -1 & \ddots & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix},$$
$\delta_t = 0.01$, $\alpha = 0.002$, and $F \in \mathbb{R}^{n\times m}$ was obtained by F = rand(n,m);.
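A direct transcription of this setup could read as follows (a sketch assuming n is given; the tridiagonal matrices follow the formulas above).

```matlab
% Construction of the Example 2 data (1D mass/stiffness matrices), following
% the formulas in the text.
delta_t = 0.01;  alpha = 0.002;  m = 10;
Mmat = full(gallery('tridiag', n, 1, 4, 1))/(6*n);       % mass matrix
Kmat = -alpha*n*full(gallery('tridiag', n, -1, 2, -1));  % stiffness matrix
F    = rand(n, m);
A    = (Mmat - delta_t*Kmat) \ Mmat;                     % A = (M - delta_t*K)^{-1}*M
B    = delta_t*((Mmat - delta_t*Kmat) \ F);              % B = delta_t*(M - delta_t*K)^{-1}*F
Cq   = rand(m, n);  Q = Cq'*Cq;  R = eye(m);             % Q as in Example 1, R = I_m
```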

Table 1: The maximum radii (upper part) and computing times (lower part) in Example 1.

n0    LOT1      LOT2      M1        M2        M3      M4
Maximum radii
 3    9.3e-11   9.2e-11   8.6e-8    1.4e-12   failed  failed
 4    2.6e-9    2.6e-9    4.4e-6    5.7e-12   failed  failed
 5    2.6e-8    2.5e-8    1.1e-4    3.6e-11   failed  failed
 6    3.5e-7    3.5e-7    3.0e-3    2.6e-10   failed  failed
 7    1.8e-6    1.4e-6    8.2e-2    1.3e-9    failed  failed
 8    failed    failed    2.6e-1    2.1e-9    failed  failed
 9    failed    failed    2.4e+0    1.0e-7    failed  failed
10    failed    failed    1.2e+1    7.8e-8    failed  failed
11    failed    failed    1.8e+1    1.3e-8    failed  failed
12    failed    failed    failed    8.7e-8    failed  failed
16    failed    failed    failed    5.1e-7    failed  failed
20    failed    failed    failed    1.5e-6    failed  failed
Computing times (sec)
 3    4.5e-1    4.9e-1    2.5e-2    1.8e-1    failed  failed
 4    9.5e-1    9.8e-1    2.8e-2    4.2e-1    failed  failed
 5    4.2e+0    4.2e+0    3.6e-2    9.7e-1    failed  failed
 6    1.7e+1    1.7e+1    4.7e-2    2.0e+0    failed  failed
 7    6.4e+1    6.4e+1    6.4e-2    3.6e+0    failed  failed
 8    failed    failed    7.4e-2    6.2e+0    failed  failed
 9    failed    failed    1.1e-1    1.0e+1    failed  failed
10    failed    failed    1.6e-1    1.6e+1    failed  failed
11    failed    failed    2.4e-1    2.4e+1    failed  failed
12    failed    failed    failed    3.6e+1    failed  failed
16    failed    failed    failed    1.4e+2    failed  failed
20    failed    failed    failed    4.3e+2    failed  failed

Tables 4, 5 and 6 display the quantities similar to Tables 1, 2 and 3, respectively, for various $n$; they show tendencies analogous to Example 1. The reasons for the failure of proving the existence are:

LOT1 and LOT2 when $n = 30$ to 50: They did not succeed after 50 iterations,
LOT1 and LOT2 when $n = 60$ or more: Due to out of memory,
M1 when $n = 50, 60$: Either the nonsingularity of $S_Y$, $\forall Y \in \langle\tilde Y, Y^r\rangle$ could not be verified, or it did not succeed after 50 iterations,
M1 when $n = 100$: Either the nonsingularity or $\|T./|D|\|_{\max} < 1$ could not be verified,
M1 when $n = 200$ or more: The inequality could not be verified,
M3 when $n = 10$, and M4 when $n = 10, 20$: They did not succeed after $2n$ iterations, and
M3 when $n = 20$ or more, and M4 when $n = 30$ or more: The nonsingularity of $R_{\tilde X+U}$, $\forall U \in \boldsymbol{U}$ could not be verified.

Table 2: The numbers of the iterations in Example 1.

      existence       improvement     refinement
n0    LOT1    LOT2    LOT1    LOT2    LOT2      M1      M2    M3      M4
 3    9.7     8.7     26.3    27.3    3.7       2.0     2.0   failed  failed
 4    12.5    11.1    18.9    20.3    4.1       2.5     2.0   failed  failed
 5    14.7    13.0    13.0    14.6    4.2       2.9     2.0   failed  failed
 6    16.8    15.1    7.7     9.4     3.9       3.1     2.0   failed  failed
 7    18.5    16.8    3.4     5.0     3.3       3.4     2.0   failed  failed
 8    failed  failed  failed  failed  failed    3.3     2.0   failed  failed
 9    failed  failed  failed  failed  failed    3.2     2.0   failed  failed
10    failed  failed  failed  failed  failed    3.3     2.0   failed  failed
11    failed  failed  failed  failed  failed    2.6     2.0   failed  failed
12    failed  failed  failed  failed  failed    failed  2.0   failed  failed
16    failed  failed  failed  failed  failed    failed  2.0   failed  failed
20    failed  failed  failed  failed  failed    failed  2.0   failed  failed

The reasons for the failure of proving the stabilizability and uniqueness are:

M1 when $n = 20, 30$, and M2: The inequality $\mathrm{fl}(\max_i(|\lambda_i| + r_i)) < 1$ was not satisfied,
M1 when $n = 40$: Either the inequality was not satisfied, or the nonsingularity of $R_X$, $\forall X \in \langle\tilde X, X^\varepsilon\rangle$ could not be verified, and
M1 when $n = 50$ or more: The nonsingularity could not be verified.

Example 3. We observe the properties in the case when $V$ is ill-conditioned. We consider [18, Example 4.1], which is also mentioned in Section 4, where $m = 1$,
$$A = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}, \qquad Q = I_n.$$
In this example, it is known [18] that the stabilizing solution is $X^* = \mathrm{diag}(1,\dots,n)$. We hence have $A_{X^*} = A$, which implies $A_{X^*}^H$ is not diagonalizable. Therefore, $V$ becomes singular or ill-conditioned. Tables 7 and 8 show the quantities similar to Tables 1 and 2, respectively.
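With the data written out above, the claim that $X^* = \mathrm{diag}(1,\dots,n)$ solves the DARE can be checked directly; the following self-contained MATLAB lines do so in floating point for a small n.

```matlab
% Direct check that X = diag(1,...,n) solves the DARE of Example 3
% (A = upper shift matrix, B = last unit vector, Q = I, R = 1).
n = 6;
A = diag(ones(n-1, 1), 1);          % ones on the superdiagonal, zero last row
B = [zeros(n-1, 1); 1];
Q = eye(n);  R = 1;
X = diag(1:n);
FX = A'*X*A - X - A'*X*B*((R + B'*X*B)\(B'*X*A)) + Q;
disp(norm(FX, 1))                   % zero up to rounding; note B'*X*A is exactly zero
```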

Table 3: The numbers of failed problems in Example 1.

      existence                                  stabilizability and uniqueness
n0    LOT1   LOT2   M1    M2   M3    M4          M1    M2    M3    M4
 3    0      0      0     0    100   100         0     0     100   100
 4    0      0      0     0    100   100         22    0     100   100
 5    0      0      0     0    100   100         100   0     100   100
 6    0      0      0     0    100   100         100   0     100   100
 7    7      2      0     0    100   100         100   1     100   100
 8    100    100    0     0    100   100         100   4     100   100
 9    100    100    3     0    100   100         100   47    100   100
10    100    100    51    0    100   100         100   96    100   100
11    100    100    89    0    100   100         100   100   100   100
12    100    100    100   0    100   100         100   100   100   100
16    100    100    100   0    100   100         100   100   100   100
20    100    100    100   0    100   100         100   100   100   100

The reasons for the failure of proving the existence are:

LOT1 and LOT2 when $n = 60$ or more: Due to out of memory,
M1 and M2 when $n = 10$ to 100: The nonsingularity of $S_Y$, $\forall Y \in \langle\tilde Y, Y^r\rangle$ could not be verified,
M1 when $n = 200$, and M2 when $n = 200, 300$: The inequality $\|T./|D|\|_{\max} < 1$ could not be verified, and
M1 when $n = 300$: We could not verify $\||Z|e^{(n)}\|_\infty < 1$ in Step 4.

For all the problems, M3 failed in proving the stabilizability and uniqueness. When $n = 10$ and 20, in contrast, M4 succeeded; when $n = 30$ or more, M4 failed. The reasons for the failure are:

M3, and M4 when $n = 30$ to 200: The inequality $\mathrm{fl}(\max_i(|\lambda_i| + r_i)) < 1$ was not satisfied, and
M4 when $n = 300$: We could not verify $\||Z|e^{(n)}\|_\infty < 1$ in Theorem 3.

Unlike the previous examples, M3 and M4 succeeded in proving the existence for all the problems, whereas M1 and M2 failed. In particular, M3 and M4 succeeded even for large $n$, whereas LOT1 and LOT2 failed.

6. Conclusion

In this paper, we proposed verification algorithms for the solutions of (1), and reported numerical results. By slightly extending the theories in this paper, it is possible to develop verification algorithms for solutions of generalized DAREs. As shown in the numerical results, the proposed algorithms could not prove the stabilizability and uniqueness for large $n$. Therefore, our future work will be to develop an algorithm which can prove them even for large $n$.

Table 4: The maximum radii (upper part) and computing times (lower part) in Example 2.

n     LOT1     LOT2     M1       M2       M3      M4
Maximum radii
 10   2.4e-9   2.4e-9   1.4e-7   4.2e-10  failed  failed
 20   7.1e-7   6.8e-7   8.3e-5   1.7e-10  failed  failed
 30   1.6e-6   1.9e-6   4.8e-3   1.3e-10  failed  failed
 40   failed   failed   2.2e-2   1.2e-10  failed  failed
 50   failed   failed   1.9e-1   5.2e-10  failed  failed
 60   failed   failed   5.6e-1   6.2e-10  failed  failed
100   failed   failed   failed   3.7e-10  failed  failed
200   failed   failed   failed   7.1e-10  failed  failed
300   failed   failed   failed   1.9e-8   failed  failed
Computing times (sec)
 10   4.6e-1   4.9e-1   2.3e-2   2.8e-1   failed  failed
 20   1.5e+0   1.6e+0   3.4e-2   8.0e-1   failed  failed
 30   6.5e+0   6.3e+0   5.2e-2   1.7e+0   failed  failed
 40   failed   failed   7.0e-2   3.0e+0   failed  failed
 50   failed   failed   1.2e-1   5.0e+0   failed  failed
 60   failed   failed   2.2e-1   8.1e+0   failed  failed
100   failed   failed   failed   4.3e+1   failed  failed
200   failed   failed   failed   1.9e+2   failed  failed
300   failed   failed   failed   5.3e+2   failed  failed

Acknowledgments

The author thanks Prof. Dr. Wolfram Luther of the University of Duisburg-Essen for providing the technical report related to [4]. The author also acknowledges the reviewers for valuable comments. This work was partially supported by JSPS KAKENHI Grant Numbers JP23560066 and JP16K05270.

References

[1] D.A. Bini, B. Iannazzo, B. Meini, Numerical Solution of Algebraic Riccati Equations, SIAM Publications, Philadelphia, 2012.
[2] A. Emami-Naeini, G. Franklin, Comments on the numerical solution of the discrete-time algebraic Riccati equation, IEEE Trans. Automat. Contr. 25 (1980) 1015–1016.
[3] P. Lancaster, L. Rodman, Algebraic Riccati Equations, Clarendon Press, Oxford, 1995.
[4] W. Luther, W. Otten, H. Traczinski, Verified calculation of the solution of continuous- and discrete-time algebraic Riccati equation, Schriftenreihe des Fachbereichs Mathematik der Gerhard-Mercator-Universität Duisburg (1998) SM-DU-422.


Table 5: The numbers of the iterations in Example 2.

      existence       improvement     refinement
n     LOT1    LOT2    LOT1    LOT2    LOT2      M1      M2    M3      M4
 10   10.8    10.4    21.5    21.9    3.9       2.0     2.0   failed  failed
 20   16.4    16.1    7.2     7.6     3.6       3.1     2.0   failed  failed
 30   19.0    18.0    1.0     2.0     4.0       4.9     2.0   failed  failed
 40   failed  failed  failed  failed  failed    7.4     2.0   failed  failed
 50   failed  failed  failed  failed  failed    12.8    2.0   failed  failed
 60   failed  failed  failed  failed  failed    27.5    2.0   failed  failed
100   failed  failed  failed  failed  failed    failed  2.0   failed  failed
200   failed  failed  failed  failed  failed    failed  2.0   failed  failed
300   failed  failed  failed  failed  failed    failed  2.1   failed  failed

Table 6: The numbers of failed problems in Example 2.

      existence                                  stabilizability and uniqueness
n     LOT1   LOT2   M1    M2   M3    M4          M1    M2    M3    M4
 10   0      0      0     0    100   100         0     0     100   100
 20   0      0      0     0    100   100         100   0     100   100
 30   99     98     0     0    100   100         100   0     100   100
 40   100    100    0     0    100   100         100   2     100   100
 50   100    100    2     0    100   100         100   31    100   100
 60   100    100    47    0    100   100         100   95    100   100
100   100    100    100   0    100   100         100   100   100   100
200   100    100    100   0    100   100         100   100   100   100
300   100    100    100   0    100   100         100   100   100   100

[5] W. Luther, W. Otten, Verified calculation of the solution of algebraic Riccati equation, in: T. Csendes (Ed.), Developments in Reliable Computing, Kluwer Academic Publishers, Dordrecht, 1999, pp. 105–118.
[6] B. Hashemi, Verified computation of symmetric solutions to continuous-time algebraic Riccati matrix equations, Proc. SCAN Conference, Novosibirsk, 2012, pp. 54–56. http://conf.nsc.ru/files/conferences/scan2012/139586/Hashemi-scan2012.pdf
[7] S. Miyajima, Fast verified computation for solutions of continuous-time algebraic Riccati equations, Jpn. J. Ind. Appl. Math. 32 (2015) 529–544.
[8] T. Haqiri, F. Poloni, Methods for verified solutions to continuous-time algebraic Riccati equations, arXiv:1509.02015 [math.NA].
[9] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994.

Table 7: The maximum radii (upper part) and computing times (lower part) in Example 3.

n     LOT1     LOT2     M1      M2      M3       M4
Maximum radii
 10   2.7e-15  1.4e-14  failed  failed  1.6e-14  2.7e-15
 20   5.3e-15  6.0e-14  failed  failed  6.8e-14  5.3e-15
 30   5.3e-15  1.3e-13  failed  failed  1.3e-13  5.3e-15
 40   1.1e-14  2.6e-13  failed  failed  2.7e-13  1.1e-14
 50   1.1e-14  4.0e-13  failed  failed  4.1e-13  1.1e-14
 60   failed   failed   failed  failed  5.5e-13  1.1e-14
100   failed   failed   failed  failed  1.6e-12  2.1e-14
200   failed   failed   failed  failed  6.5e-12  4.3e-14
300   failed   failed   failed  failed  1.5e-11  8.5e-14
Computing times (sec)
 10   6.1e-1   6.4e-1   failed  failed  4.6e-2   1.2e-1
 20   3.3e+0   3.3e+0   failed  failed  8.3e-2   3.4e-1
 30   2.0e+1   1.9e+1   failed  failed  1.2e-1   6.9e-1
 40   7.3e+1   7.4e+1   failed  failed  1.7e-1   1.2e+0
 50   2.3e+2   2.3e+2   failed  failed  2.3e-1   1.8e+0
 60   failed   failed   failed  failed  3.4e-1   2.6e+0
100   failed   failed   failed  failed  8.3e-1   7.2e+0
200   failed   failed   failed  failed  5.0e+0   3.4e+1
300   failed   failed   failed  failed  2.2e+1   1.1e+2

[10] N.J. Higham, Functions of Matrices: Theory and Computation, SIAM Publications, Philadelphia, 2008.
[11] C.D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM Publications, Philadelphia, 2000.
[12] A. Minamihata, private communication, 2013.
[13] S. Miyajima, Fast enclosure for all eigenvalues and invariant subspaces in generalized eigenvalue problems, SIAM J. Matrix Anal. Appl. 35 (2014) 1205–1225.
[14] V. Ionescu, M. Weiss, On computing the stabilizing solution of the discrete-time Riccati equation, Linear Algebra Appl. 174 (1992) 229–238.
[15] O. Caprani, K. Madsen, Iterative methods for interval inclusion of fixed points, BIT 18 (1978) 42–51.
[16] S.M. Rump, Verification methods for dense and sparse systems of equations, in: J. Herzberger (Ed.), Topics in Validated Computations - Studies in Computational Mathematics, Elsevier, Amsterdam, 1994, pp. 63–136.
[17] H. Arndt, On the interval systems [x] = [A][x] + [b] and the powers of interval matrices in complex interval arithmetics, Reliab. Comput. 13 (2007) 245–259.

Table 8: The numbers of the iterations in Example 3.

      existence       improvement     refinement
n     LOT1    LOT2    LOT1    LOT2    LOT2      M1      M2      M3    M4
 10   1       1       48      46      9         failed  failed  9     9
 20   1       1       47      43      3         failed  failed  20    20
 30   1       1       47      42      3         failed  failed  29    29
 40   1       1       46      41      3         failed  failed  40    40
 50   1       1       46      41      3         failed  failed  50    50
 60   failed  failed  failed  failed  failed    failed  failed  60    60
100   failed  failed  failed  failed  failed    failed  failed  100   100
200   failed  failed  failed  failed  failed    failed  failed  200   200
300   failed  failed  failed  failed  failed    failed  failed  300   300

[18] J. Abels, P. Benner, DAREX - A collection of benchmark examples for discrete-time algebraic Riccati equations, SLICOT Working Note 1999-16, 1999. http://slicot.org/objects/software/reports/SLWN1999-16.ps.gz
[19] S.M. Rump, INTLAB - INTerval LABoratory, in: T. Csendes (Ed.), Developments in Reliable Computing, Kluwer Academic Publishers, Dordrecht, 1999, pp. 77–104.
[20] A. Bouhamidi, M. Heyouni, K. Jbilou, Block Arnoldi-based methods for large scale discrete-time algebraic Riccati equations, J. Comput. Appl. Math. 236 (2011) 1531–1542.
