Mathematical and Computer Modelling 55 (2012) 622–633
The interval Lyapunov matrix equation: Analytical results and an efficient numerical technique for outer estimation of the united solution set

Behnam Hashemi^a, Mehdi Dehghan^b,∗

a Department of Mathematics, Faculty of Basic Sciences, Shiraz University of Technology, Modarres Boulevard, Shiraz 71555-313, Iran
b Department of Applied Mathematics, Faculty of Mathematics and Computer Science, Amirkabir University of Technology, No. 424, Hafez Avenue, Tehran 15914, Iran
Article history: Received 1 February 2010; received in revised form 20 August 2011; accepted 22 August 2011

Keywords: Lyapunov matrix equation; Interval analysis; United solution set; AE-solution sets
Abstract. This note proposes an efficient method for obtaining outer estimations of the so-called united solution set of the interval Lyapunov matrix equation AX + XA^T = F, where A and F are known real interval matrices while X is the unknown matrix, all of dimension n × n. We first explore the equation in the more general setting of AE-solution sets, and show that only a small part of Shary's results on the AE-solution sets of interval linear systems can be generalized to the interval Lyapunov matrix equation. Then we propose a modification of the Krawczyk operator which reduces the computational complexity of obtaining an outer estimation for the united solution set to cubic, provided that the midpoint of A is diagonalizable.
1. Introduction

Consider a time-invariant, continuous-time dynamical system of the form

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t),    (1)
where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, D ∈ R^{p×m} and the vector-valued functions u, x, and y are called the input, the state, and the output of the system, respectively. The controllability Gramian P and the observability Gramian Q, which play an important role in balancing and model reduction of system (1), are solutions of the following closely related pair of Lyapunov equations [1–3]:

AP + PA^T = −BB^T,    A^T Q + QA = −C^T C.    (2)
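As an aside, the pair (2) is directly solvable by standard software. A minimal floating-point sketch, assuming Matlab's Control System Toolbox, whose lyap(A, Q) solves AX + XA^T + Q = 0; the data below are hypothetical:

A = [-2 1; 0 -3];  B = [1; 1];  C = [1 0];   % a small stable test system
P = lyap(A,  B*B');                          % controllability Gramian: AP + PA' = -BB'
Q = lyap(A', C'*C);                          % observability Gramian: A'Q + QA = -C'C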
These Gramians are very important since they describe how the energy of the system is distributed over the coordinates of the state-space. See e.g. [4,5] for examples of large-scale stable Lyapunov equations. In this paper, we consider the Lyapunov matrix equation

AX + XA^T = F,    (3)
where A and F are known real matrices of dimension n × n, while the unknown matrix X is an n × n real matrix. This matrix equation plays a vital role in numerous applications. For example, in control theory [2] it arises not only in determining controllability and observability Gramians, but also in stability and robust stability analyses, and in computing the H2-norm. The solution of the Lyapunov equation (3) is also needed for the implementation of some iterative methods for solving algebraic Riccati matrix equations, such as Newton's method. The Schur method proposed by Bartels and Stewart [6] is now widely used as an effective method for numerically computing a solution of the Lyapunov equation (3).

Although the Lyapunov matrix equation (3) is well studied in the literature, less attention has been paid to the uncertainties that may occur in the elements of A and F. The elements of A and F occurring in practice are usually obtained from experiments, and most measuring instruments have a built-in uncertainty, so a result is only known up to that uncertainty. See, for example, the website of the American National Institute of Standards and Technology (NIST) for internationally recommended values of fundamental physical constants [7]; there, one can find for each constant a value and its "standard uncertainty". The choice of a model of uncertainty depends on the type and quantity of information available. If only fragmentary information on uncertain parameters is available, it is possible to establish the least favourable response and the most favourable response of the problem using interval analysis.

Interval computation has various real-life applications ranging from robotics, automatic control, astrophysics, traffic control and expert systems to ergonomics, social sciences and economics [8]. One can, for example, find information regarding the essentials of expressing measurement uncertainty in different forms, including intervals, for fundamental physical constants in [7]. The main interval computation website, which has an excellent collection of interval arithmetic related sources, is http://www.cs.utep.edu/intervalcomp/main.html. The most well-known interval software is INTLAB, developed by Rump [9]. It is a Matlab toolbox supporting real and complex interval scalars, vectors, and matrices, as well as sparse real and complex interval matrices. It is designed to be very fast and is available online at http://www.ti3.tu-harburg.de/rump/intlab/. Another interval software package, which is a collection of INTLAB programs, is VERSOFT provided by Rohn [10]. VERSOFT is able to compute verified solutions of various numerical linear algebraic problems with exact or interval-valued data and is freely available online at http://uivtx.cs.cas.cz/~rohn/matlab/index.html.

In this paper, we represent the uncertain elements of the Lyapunov equation in interval form, and hence we have the following interval Lyapunov matrix equation

AX + XA^T = F,    (4)
where A and F are known real interval matrices. We also assume that the interval elements in the matrices are mutually independent. The interval Lyapunov equation (4) can be transformed [10–13] to an interval linear system of the form

Px = f,    (5)

where P = In ⊗ A + A ⊗ In, x = vec(X) and f = vec(F). Herein, ⊗ denotes the Kronecker product. For two interval matrices T ∈ IR^{m×n} and S ∈ IR^{k×t} it is given by the mk × nt block interval matrix

T ⊗ S = ( t11·S  · · ·  t1n·S
            ⋮      ⋱      ⋮
          tm1·S  · · ·  tmn·S ).
For A = (aij) ∈ IR^{n×n} the vector vec(A) ∈ IR^{n²} is obtained by stacking the columns of A, i.e., vec(A) = (a11, . . . , an1, a12, . . . , an2, . . . , a1n, . . . , ann)^T. So, vec(X) and vec(F) are real and interval vectors of length n², respectively. In what follows we use uppercase letters to denote matrices and the corresponding lowercase letters to denote the vector related to the matrix via the vec operator.

Rohn and Kreinovich [14] proved that computing the exact solution of a square interval linear system is NP-hard. Even checking solvability of an interval linear system or obtaining an enclosure for its solution set is NP-hard [15,16]. Seif and his colleagues [11] examined different techniques for approximating an outer estimation of the united solution set of the interval Sylvester matrix equation AX + XB = C. Most of the methods proposed in [11] have a complexity that is exponential in n; a sensitivity analysis approach has also been proposed in [11] which is claimed to be of computational complexity O(n^5). Shashikhin [12,13] also used the above-mentioned correspondence between the interval Sylvester matrix equation and the interval linear system (5) in order to find an outer approximation of the united solution set, and Rohn [10] used the same approach in the VERMATREQN.m code of VERSOFT. Such methods have a computational complexity of O(n^6), which is very high. The approach to be presented here aims at reducing the cost to O(n^3).
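To make this O(n^6) baseline concrete, the following hedged INTLAB sketch (assuming INTLAB is initialized; the data are hypothetical) forms the n² × n² system (5) in interval arithmetic and encloses its united solution set with INTLAB's verified solver verifylss:

n = 5;
A = midrad(randn(n), 1e-6);             % hypothetical interval data
F = midrad(randn(n), 1e-6);
P = intval(zeros(n^2));                 % P = In (x) A + A (x) In, built blockwise
for j = 1:n
    for i = 1:n
        rows = (i-1)*n+1 : i*n;  cols = (j-1)*n+1 : j*n;
        blk = A(i,j) * intval(eye(n));  % block (i,j) of A (x) In is a_ij * In
        if i == j, blk = blk + A; end   % diagonal blocks of In (x) A are A
        P(rows, cols) = blk;
    end
end
x = verifylss(P, F(:));                 % verified enclosure; O(n^6) work
X = reshape(x, n, n);                   % interval enclosure of the solution X

Already for moderate n, forming and solving this dense n² × n² interval system dominates the cost, which is exactly what the method developed below avoids.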
In this paper, we use the standard notation of interval analysis [17], so interval quantities will always be typeset in boldface. R denotes the field of real numbers, R^{n×n} the vector space of n × n matrices with real coefficients, IR the set of intervals and IR^{n×n} the set of all n × n interval matrices. If x ∈ IR, then min x =: x̲ and max x =: x̄ are the lower and upper bounds of x, respectively. The width of x is wid x := x̄ − x̲ ≥ 0, its radius is rad x := (1/2)(x̄ − x̲), its midpoint is mid x := (1/2)(x̲ + x̄), and the absolute value of x is abs x := max{|x| : x ∈ x} = max{|x̲|, |x̄|}. The same operations are applied componentwise to interval vectors and matrices. Hence, for the interval matrix A we can write A = [mid A − rad A, mid A + rad A], where mid A and rad A are the real midpoint matrix and the real radius matrix of the interval matrix A, respectively. Similarly, IC^{m×n}_disc denotes the set of circular complex interval matrices of size m × n. Whenever we need complex interval arithmetic in this paper, we use circular interval arithmetic. We also use the convention that each complex interval in IC is represented by a circular interval in IC_disc; so, by IC we mean the set of circular intervals. The fundamental property of real or circular complex interval arithmetic, i.e., the enclosure property, states that for a, b ∈ IC and any arithmetic operation ◦ ∈ {+, −, ∗, /}

a ∈ a, b ∈ b ⇒ a ◦ b ∈ a ◦ b.

Even for real intervals INTLAB uses the mid-rad representation, which is the restriction of circular complex arithmetic to real intervals. For more details about the basic results of interval analysis, the interested reader may consult the textbooks [18–20].

The organization of this paper is as follows. In Section 2, we define and characterize the AE-solution set of the interval Lyapunov matrix equation (4). In Section 3, we focus our attention on the united solution set, explain the difficulties of the available methods in the literature, which provides the motivation for our new approach, propose our modified Krawczyk operator and provide some numerical examples. Section 4 ends this paper with a short conclusion and an emphasis on directions for future research.

2. The generalized AE-solution set and its characterizations

Consider the interval Lyapunov matrix equation (4), with A ∈ IR^{n×n} and F ∈ IR^{n×n}. Clearly, (4) is a collection of all real linear matrix equations whose elements belong to A and F, respectively. Shary introduced the concept of AE-solution sets for a system of interval linear equations in order to specify the various possible ways of distributing uncertainty types over the interval parameters of the system [21]. We define the AE-solution set of the interval Lyapunov matrix equation (4) using a similar convention. Different components of the interval matrices A and F can correspond to different types of uncertainty. So, in order to describe the most general case, let the entire set of index pairs (i, j) that describe the components aij of the matrix A be divided into two non-intersecting parts Ω′ = {ω1′, ω2′, . . . , ωp′} and Ω″ = {ω1″, ω2″, . . . , ωr″}, in which p + r = n². These sets have the following interpretation: if (i, j) ∈ Ω′, then the parameter aij is of A-uncertainty. Here, "A" stands for all, and we say that an interval aij is of A-uncertainty if a specific property (which may be a point equation, inequality, etc.) holds for all members of aij [21]. If (i, j) ∈ Ω″, then the parameter aij is of E-uncertainty, where "E" stands for exists, meaning that the desired property holds only for some, not necessarily all, members of aij. Similarly, we introduce two non-intersecting sets of index pairs Θ′ = {θ1′, θ2′, . . . , θs′} and Θ″ = {θ1″, θ2″, . . . , θt″}, in which s + t = n². These sets have the following interpretation: if (i, j) ∈ Θ′, then the parameter fij is of A-uncertainty; if (i, j) ∈ Θ″, then the parameter fij is of E-uncertainty. Some of the sets Ω′, Ω″, Θ′, Θ″ may be empty.

Definition 2.1. We define the AE-solution set of type αβ to the interval Lyapunov matrix equation (4) as the set
Ξαβ(A, F) = {X ∈ R^{n×n} | (∀aω1′ ∈ aω1′) · · · (∀aωp′ ∈ aωp′)(∀fθ1′ ∈ fθ1′) · · · (∀fθs′ ∈ fθs′)
            (∃aω1″ ∈ aω1″) · · · (∃aωr″ ∈ aωr″)(∃fθ1″ ∈ fθ1″) · · · (∃fθt″ ∈ fθt″) (AX + XA^T = F)},

where the n × n quantifier matrices α = (αij) and β = (βij) are defined as follows:

αij = ∀ if (i, j) ∈ Ω′,  αij = ∃ if (i, j) ∈ Ω″,   and   βij = ∀ if (i, j) ∈ Θ′,  βij = ∃ if (i, j) ∈ Θ″.
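For instance (a toy illustration of this notation): for n = 2, choosing Ω′ = {(1, 1), (2, 2)} and Ω″ = {(1, 2), (2, 1)} gives the quantifier matrix α with α11 = α22 = ∀ and α12 = α21 = ∃, so the diagonal entries of A carry A-uncertainty and are quantified universally, while the off-diagonal entries carry E-uncertainty and are quantified existentially.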
Now, the n × n interval matrices A∀ = (a∀ij), A∃ = (a∃ij), F∀ = (f∀ij) and F∃ = (f∃ij) can be defined as follows:

a∀ij = aij if αij = ∀, and a∀ij = 0 otherwise;    a∃ij = aij if αij = ∃, and a∃ij = 0 otherwise;
f∀ij = fij if βij = ∀, and f∀ij = 0 otherwise;    f∃ij = fij if βij = ∃, and f∃ij = 0 otherwise.

Thus A = A∀ + A∃ and F = F∀ + F∃, and for all i, j we have a∀ij · a∃ij = 0 and f∀ij · f∃ij = 0. Hence, the matrices A∀, A∃, F∀, F∃ form disjoint decompositions of A and F. The above interpretation enables us to formulate what we mean by a solution of the interval Lyapunov matrix equation (4). Using the above general definition, we obtain the following solution sets for the interval Lyapunov matrix equation (4) as particular cases of Ξαβ(A, F):
• The united solution set

Ξ∃∃(A, F) := {X ∈ R^{n×n} | (∃A ∈ A)(∃F ∈ F) AX + XA^T = F}.    (6)

• The tolerable solution set

Ξ∀∃(A, F) := {X ∈ R^{n×n} | (∀A ∈ A)(∃F ∈ F) AX + XA^T = F},    (7)

formed by all matrices X ∈ R^{n×n} such that AX + XA^T falls into F for any A ∈ A.
• The controllable solution set

Ξ∃∀(A, F) := {X ∈ R^{n×n} | (∀F ∈ F)(∃A ∈ A) AX + XA^T = F},    (8)

which contains all matrices X ∈ R^{n×n} such that for any desired F ∈ F we can find an appropriate A ∈ A satisfying AX + XA^T = F.
2.1. Characterization of the AE-solution sets

Here, we derive some equivalent descriptions of the AE-solution sets to the interval Lyapunov matrix equation (4) and study some properties of these solution sets.

Theorem 2.2.

Ξαβ(A, F) = ⋂_{A′∈A∀} ⋂_{F′∈F∀} ⋃_{A″∈A∃} ⋃_{F″∈F∃} {X ∈ R^{n×n} | (A′ + A″)X + X(A′ + A″)^T = F′ + F″}.
Proof. Using the matrices A∀, A∃, F∀, and F∃ introduced above, we can rewrite the definition of the AE-solution set Ξαβ(A, F) in the following equivalent form:

Ξαβ(A, F) = {X ∈ R^{n×n} | (∀A′ ∈ A∀)(∀F′ ∈ F∀)(∃A″ ∈ A∃)(∃F″ ∈ F∃) ((A′ + A″)X + X(A′ + A″)^T = F′ + F″)}.    (9)

So, according to the definition of the intersection and the union of sets, we have

Ξαβ(A, F) = ⋂_{A′∈A∀} ⋂_{F′∈F∀} {X ∈ R^{n×n} | (∃A″ ∈ A∃)(∃F″ ∈ F∃) ((A′ + A″)X + X(A′ + A″)^T = F′ + F″)}
          = ⋂_{A′∈A∀} ⋂_{F′∈F∀} ⋃_{A″∈A∃} ⋃_{F″∈F∃} {X ∈ R^{n×n} | (A′ + A″)X + X(A′ + A″)^T = F′ + F″}.  □
Corollary 1. In particular, for the united solution set (6) of the interval Lyapunov equation (4) we have

Ξ∃∃(A, F) = ⋃_{A∈A} ⋃_{F∈F} {X ∈ R^{n×n} | AX + XA^T = F},

which demystifies the name "united solution set". For the tolerable solution set (7), we have

Ξ∀∃(A, F) = ⋂_{A∈A} ⋃_{F∈F} {X ∈ R^{n×n} | AX + XA^T = F}.

For the controllable solution set (8), we have

Ξ∃∀(A, F) = ⋂_{F∈F} ⋃_{A∈A} {X ∈ R^{n×n} | AX + XA^T = F}.
Remark 2.3. For an interval linear system Ax = b with independent data, Shary proved in Theorem 3.4 of [21] that x ∈ Ξαβ(A, b) if and only if A∀x − b∀ ⊆ b∃ − A∃x. This result cannot be generalized in a completely analogous manner to our interval Lyapunov equation (4); in contrast to Shary's result, we provide Theorems 2.6 and 2.7 below. A first reason for this difference is the dependency between the elements of P in the interval linear system (5) corresponding to the interval Lyapunov equation (4). Indeed, the interval components of P in the interval linear system Px = f are not independent, even though we have already supposed that the interval components of the Lyapunov equation (4) are mutually independent. Shary's result for interval linear systems with a single right-hand side vector cannot be trivially generalized even to interval linear systems of the form AX = B [22]. To obtain more insight into this important difference, we give more details in the following.

Remark 2.4. Suppose that X ∈ R^{n×k} and the elements of the interval matrix A vary independently of each other. The exact equality

A · X = {AX | A ∈ A},    (10)

with an interval matrix A is valid only if X is a real vector, i.e., if k = 1. Note that the left-hand side of the equality sign in (10) is an interval operation while its right-hand side is a power set operation. In case X is an n × k real matrix with k > 1, the weaker equality

A · X = □{AX | A ∈ A},    (11)

takes place instead of (10), where □ denotes the interval hull. Indeed, when k = 1 the right-hand side of (11) is the same as the right-hand side of (10). So, in the most general case k ≥ 1, relation (11) holds true instead of (10).
Fig. 1. Interval operation and power set operation for matrix–matrix multiplication in Remark 2.4.
As a simple example, let A = ([1, 2]) be a 1 × 1 interval matrix and X = (x1 x2) = (1 2) a real 1 × 2 matrix. In interval matrix–vector algebra we have A · (x1 x2) = (Ax1 Ax2) = ([1, 2] [2, 4]), while the set {AX | A ∈ A} = {(Ax1 Ax2) | A ∈ A} = {(A 2A) | A ∈ [1, 2]} forming the right-hand side of (10) occupies only the diagonal of the 1 × 2 interval matrix ([1, 2] [2, 4]). See Fig. 1. Equality (11) is also in accordance with the definitions of arithmetic operations between interval matrices in [20, p. 78].

Remark 2.5. Despite the validity of formula (11) in Remark 2.4, we only have

A · X + X · A^T − F ⊇ {AX + XA^T − F | A ∈ A, F ∈ F}.    (12)

The reason is that the expression in the left-hand side of (11) is a single-use expression (SUE), while A · X + X · A^T − F is not. So the straightforward interval evaluation of the latter expression can only lead to an enclosure of the hull of the set in the right-hand side of (12).

Lemma 1 ([20, p. 83]). Let A ∈ IR^{n×n}, and let Σ, Σ′ be subsets of R^{n×n}. Then
(a) Σ ⊆ A ⟹ □Σ ⊆ A,
(b) Σ ⊆ Σ′ ⟹ □Σ ⊆ □Σ′.

Lemma 2. For two intervals a and b, we have a ⊆ b ⟺ −a ⊆ −b.

Proof. This is evident by the inclusion isotonicity of interval arithmetic. See e.g. [18,20].  □
Lemma 3. The real matrix X belongs to the AE-solution set Ξαβ(A, F) if and only if

{A′X + XA′^T − F′ | A′ ∈ A∀, F′ ∈ F∀} ⊆ {F″ − A″X − XA″^T | A″ ∈ A∃, F″ ∈ F∃}.    (13)

Proof. Let X ∈ Ξαβ(A, F). Then (9) holds true, and therefore for all A′ ∈ A∀ and all F′ ∈ F∀ there exist A″ ∈ A∃ and F″ ∈ F∃ such that

A′X + XA′^T − F′ = F″ − A″X − XA″^T.    (14)

Now assume that Z ∈ {A′X + XA′^T − F′ | A′ ∈ A∀, F′ ∈ F∀}. So, for some A′ ∈ A∀ and F′ ∈ F∀ we have Z = A′X + XA′^T − F′. Using (14), we conclude that there exist A″ ∈ A∃ and F″ ∈ F∃ such that Z = F″ − A″X − XA″^T. So, Z ∈ {F″ − A″X − XA″^T | A″ ∈ A∃, F″ ∈ F∃} and (13) holds.

Finally, we have to prove the converse statement. If (13) holds, then every Z in {A′X + XA′^T − F′ | A′ ∈ A∀, F′ ∈ F∀} also belongs to {F″ − A″X − XA″^T | A″ ∈ A∃, F″ ∈ F∃}. Since every element of the set {A′X + XA′^T − F′ | A′ ∈ A∀, F′ ∈ F∀} is of the form A′X + XA′^T − F′ for A′ ∈ A∀ and F′ ∈ F∀, we conclude that for every A′ ∈ A∀ and every F′ ∈ F∀ there exist A″ ∈ A∃ and F″ ∈ F∃ such that A′X + XA′^T − F′ = F″ − A″X − XA″^T. By (9) this means that X ∈ Ξαβ(A, F), and the proof is completed.  □
Let us now fix all the entries of the n × n quantifier matrix α = (αij) to be existential quantifiers, i.e., αij = ∃ for all i, j. We denote the corresponding AE-solution set by Ξ∃β(A, F). Another important case arises when we fix all the entries of α = (αij) to be universal quantifiers, i.e., αij = ∀ for all i, j. The corresponding AE-solution set will be denoted by Ξ∀β(A, F). We characterize the solution sets Ξ∃β(A, F) and Ξ∀β(A, F) as follows.

Theorem 2.6. If the real matrix X belongs to the AE-solution set Ξ∃β(A, F), then

F∀ ⊆ AX + XA^T − F∃.    (15)

Proof. Let X ∈ Ξ∃β(A, F). By Lemma 3, the second part of Lemma 1, and formula (12), we have

{−F′ | F′ ∈ F∀} = −F∀ ⊆ □{F″ − AX − XA^T | A ∈ A, F″ ∈ F∃} ⊆ F∃ − A · X − X · A^T,    (16)

which by Lemma 2 completes the proof.  □
Theorem 2.7. If

AX + XA^T − F∀ ⊆ F∃    (17)

then the real matrix X belongs to the AE-solution set Ξ∀β(A, F).

Proof. Suppose that (17) holds true. We then have

{AX + XA^T − F′ | A ∈ A, F′ ∈ F∀} ⊆ □{AX + XA^T − F′ | A ∈ A, F′ ∈ F∀} ⊆ AX + XA^T − F∀ ⊆ F∃ = {F″ | F″ ∈ F∃}.    (18)

Therefore, for all A ∈ A and for all F′ ∈ F∀ there exists at least one F″ ∈ F∃ such that AX + XA^T − F′ = F″. This means that X ∈ Ξ∀β(A, F).  □

Theorem 2.8. If the real matrix X belongs to the AE-solution set Ξ∃β(A, F), then
|(mid A)X + X(mid A)^T − mid F| ≤ (rad A)|X| + |X|(rad A)^T + rad F∃ − rad F∀.    (19)

Proof. Neumaier [20] noted that if G, H ∈ IR^{m×n}, then

G ⊆ H ⟺ H̲ ≤ G̲ and Ḡ ≤ H̄ ⟺ |mid H − mid G| ≤ rad H − rad G.

Hence the characterization (15) can be rewritten as follows:

|mid(AX + XA^T − F∃) − mid F∀| ≤ rad(AX + XA^T − F∃) − rad F∀.    (20)

On the other hand, we also have [20]

rad(G ± H) = rad G + rad H,    mid(G ± H) = mid G ± mid H.

Therefore (20) holds if and only if

|mid(AX) + mid(XA^T) − mid F∃ − mid F∀| ≤ rad(AX) + rad(XA^T) + rad F∃ − rad F∀.

Also, since

mid(A · X) = (mid A) · X,   mid(X · A) = X · (mid A),   rad(A · X) = (rad A) · |X|,   rad(X · A) = |X| · (rad A),

we have

|(mid A)X + X(mid A)^T − mid F| ≤ (rad A)|X| + |X|(rad A)^T + rad F∃ − rad F∀.  □

The following result characterizes the united solution set of the interval Lyapunov matrix equation; it can also be derived from a result in [11].

Corollary 2. If the real matrix X belongs to the united solution set (6) of the interval Lyapunov matrix equation, then we have

0 ∈ AX + XA^T − F,    (21)

where 0 ∈ R^{n×n} denotes the zero matrix, and also

|(mid A)X + X(mid A)^T − mid F| ≤ (rad A)|X| + |X|(rad A)^T + rad F.    (22)
Proof. The proof is clear: just let F∀ = 0 in Theorems 2.6 and 2.8 to get (21) and (22), respectively.  □
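In floating point, condition (22) gives a cheap necessary test for membership of a point matrix X in the united solution set. A minimal sketch using INTLAB's mid and rad accessors (rounding errors ignored; a fully verified test would add outward rounding):

lhs = abs(mid(A)*X + X*mid(A).' - mid(F));
rhs = rad(A)*abs(X) + abs(X)*rad(A).' + rad(F);
isCandidate = all(all(lhs <= rhs));   % false => X certainly violates (22), so X is not in the united solution set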
Corollary 3. If the real matrix X belongs to the controllable solution set Ξ∃∀(A, F), then F ⊆ AX + XA^T, and also

|(mid A)X + X(mid A)^T − mid F| ≤ (rad A)|X| + |X|(rad A)^T − rad F.

Proof. This is also clear; we just need to let F∃ = 0 in Theorems 2.6 and 2.8 to get both results.  □
Corollary 4. If either

AX + XA^T ⊆ F,    (23)

or

|(mid A)X + X(mid A)^T − mid F| ≤ rad F − (rad A)|X| − |X|(rad A)^T,    (24)

then the real matrix X belongs to the tolerable solution set Ξ∀∃(A, F) of the interval Lyapunov matrix equation.

Proof. The proof is clear if we set F∀ = 0 in the statement of Theorem 2.7. Similarly to the proof of Theorem 2.8, it is also easy to prove that (23) and (24) are equivalent.  □

3. Efficient computation of outer estimates for the united solution set

The united solution set (6) is the widest in the collection of all AE-solution sets to the interval Lyapunov matrix equation (4), i.e., Ξαβ(A, F) ⊆ Ξ∃∃(A, F). This is a first reason to pay special attention to the united solution set. Moreover, the united solution set is the one with numerous applications in so-called verified numerical computations based on interval analysis (also called scientific computing with automatic result verification) [23,19]. For example, in order to find verified solutions of a nonlinear system of equations f(x) = 0, f : R^n → R^n, one can use the interval Newton method [24], which needs the united solution set of a specially-defined interval linear system Ax = b. Therefore, we focus our attention on the united solution set of the interval Lyapunov matrix equation (4). In future research, we will try to apply the approach of this paper to problems in the area of verified computations.

In general, the united solution set Ξ∃∃(A, F) has a very complicated structure. It can even be unbounded: a trivial 1 × 1 example is

[−1/2, 1/2] x + x [−1/2, 1/2]^T = ([1, 1]),

whose united solution set is (−∞, −1] ∪ [+1, +∞), since 2ax = 1 is solvable with a ∈ [−1/2, 1/2] exactly when |x| ≥ 1. Our goal here is to compute approximations to Ξ∃∃(A, F). In modern interval analysis, the most commonly encountered ways of approximating a solution set are the inner interval estimation and the outer interval estimation. The aim of the inner estimation problem is to find an interval matrix that is contained in the solution set Ξ∃∃(A, F) (if it is nonempty). In the outer estimation problem, we are interested in finding an interval matrix that contains the solution set Ξ∃∃(A, F). In the present paper, we only discuss the outer estimation problem.

The tightest interval matrix enclosing the united solution set Ξ∃∃(A, F) is its interval hull □Ξ∃∃(A, F). If the united solution set Ξ∃∃(A, F) is bounded, then we can define its interval hull as

□Ξ∃∃(A, F) = [X̲, X̄],    (25)

where X̲ = (x̲ij), X̄ = (x̄ij) with

x̲ij = min{xij | X ∈ Ξ∃∃(A, F)},    x̄ij = max{xij | X ∈ Ξ∃∃(A, F)},    i, j = 1, 2, . . . , n.    (26)
Theorem 3.1 ([11]). A sufficient condition for the interval Lyapunov matrix equation (4) to have a nonempty and bounded united solution set is that 0 ∉ λi(A) + λj(A) for i, j = 1, . . . , n, in which λi(A) = {λi(A) | A ∈ A} is the i-th interval eigenvalue of A.

The most commonly used method for computing outer estimations of the united solution set of an interval Lyapunov matrix equation is to first transform it into the interval linear system Px = f as in (5), and then use a technique for computing outer estimations of the united solution set of that interval linear system. This is the approach used by Seif and his colleagues [11], Shashikhin [12,13], and Rohn in the VERMATREQN.m code of VERSOFT [10]. The main difficulty with these approaches is that solving such an interval linear system needs a computational cost of O(n^6), which is prohibitive even for Lyapunov equations of small size; moreover, merely storing the dense n² × n² matrix P requires O(n^4) memory.
3.1. Our approach

For an interval linear system Ax = b, a given real vector x̃ and a given real matrix R, the standard Krawczyk operator k(x̃, x) is defined as

k(x̃, x) := x̃ − R(Ax̃ − b) + (I − R · A)(x − x̃),    x̃ ∈ x.    (27)
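As an illustration, a hedged INTLAB sketch of how (27), in the error form of Theorem 3.2 below, verifies an interval linear system Ax = b; the trial radius is an arbitrary assumption and would normally come from ε-inflation:

R  = inv(mid(A));                        % approximate inverse as preconditioner
xt = R * mid(b);                         % approximate solution x~
x  = midrad(zeros(size(xt)), 1e-2);      % trial enclosure of the error x - x~
k  = R*(b - A*xt) + (eye(length(xt)) - R*A)*x;
verified = all(in0(k, x));               % k in the interior of x triggers Theorem 3.2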
The following theorem summarizes the important properties of k(x̃, x).

Theorem 3.2 ([25,26]). Let A ∈ IR^{n×n}, b ∈ IR^n be given, and suppose that for some R ∈ R^{n×n}, x̃ ∈ R^n and x ∈ IR^n

R · (b − A · x̃) + (I − R · A) · x ⊆ int(x).

Then R and every matrix A ∈ A are nonsingular, and for all A ∈ A, b ∈ b the corresponding linear system Ax = b is uniquely solvable with solution x̂ satisfying x̂ ∈ x̃ + x. Therefore, Ξ∃∃(A, b) ⊆ x̃ + x.

The standard Krawczyk operator applied to the large interval linear system Px = f in (5) is given by

k(x̃, x) = x̃ − R[(In ⊗ A + A ⊗ In)x̃ − f] + [In² − R(In ⊗ A + A ⊗ In)](x − x̃),    (28)

where R ∈ R^{n²×n²} is a computed (approximate) inverse of mid(In ⊗ A + A ⊗ In). It is evident that computing such an approximate inverse R is too costly, since In ⊗ A + A ⊗ In is an n² × n² matrix.

The following theorem, which expresses the essence of all Krawczyk-type verification methods, has recently been proved in [27] for a general function f : C^N → C^N. In its formulation x is represented as x̃ + z, thus separating the approximate zero x̃ from the enclosure z of its error.

Theorem 3.3 ([27]). Assume that f : D ⊂ C^N → C^N is continuous in D. Let x̃ ∈ D and z ∈ IC^N be such that x̃ + z ⊆ D. Moreover, assume that A ⊂ C^{N×N} is a set of matrices containing all slopes A(x̃, y) for y ∈ x̃ + z =: x. Finally, let R ∈ C^{N×N}. Denote by Kf(x̃, R, z, A) the set
Kf(x̃, R, z, A) := {−Rf(x̃) + (I − RA)z : A ∈ A, z ∈ z}.    (29)

Then, if

Kf(x̃, R, z, A) ⊆ int z,    (30)

the function f has a zero x∗ in x̃ + Kf(x̃, R, z, A) ⊆ x. Moreover, if A also contains all slope matrices A(y, x) for x, y ∈ x, then this zero is unique in x.
Lemma 4. For any three point matrices A, B and C of compatible sizes, we have
(a) vec(ABC) = (C^T ⊗ A) vec(B),
(b) (Diag(vec(A)))^{-1} vec(B) = vec(B ./ A), where ./ denotes the Hadamard (elementwise) division,
(c) σ(I ⊗ A + B ⊗ I) = σ(A) + σ(B), where σ denotes the spectrum.

Parts (a) and (c) are from [28], while part (b) is taken from [27]. In [27], the authors developed two variants of the Krawczyk operator based on Theorem 3.3 for obtaining verified square roots of a real matrix. Here, we partly use Theorem 3.3 to obtain outer estimations for the united solution set of the Lyapunov equation (4) with uncertain interval data. Let A and F be two arbitrary point matrices in A and F, respectively. Assume that f(x) = Px − f, where P = In ⊗ A + A ⊗ In and f = vec(F). Let us also assume that the real point matrix mid A is diagonalizable, i.e.,

mid A = V Λ W,  with V, W, Λ ∈ C^{n×n}, Λ = Diag(λ1, . . . , λn) diagonal, VW = I.    (31)

So, we have

f′(x) = In ⊗ A + A ⊗ In = (V ⊗ W^{-1}) · [In ⊗ (WAW^{-1}) + (V^{-1}AV) ⊗ In] · (V^{-1} ⊗ W).

We stress again that A in the above formula is not necessarily equal to mid A, but can be any point matrix in A. In contrast, the matrices V and W are those in the spectral factorization of mid A. From the above equality, we see that an approximate inverse of mid(In ⊗ A + A ⊗ In) can be given in factorized form by the complex matrix

R = (V ⊗ W^{-1}) · ∆^{-1} · (V^{-1} ⊗ W),  where ∆ = In ⊗ Λ + Λ ⊗ In.    (32)
Even though we assumed that mid A is a real matrix, its eigenvalues can be complex, and therefore R in (32) can be a complex matrix. Now assume that V, W and Λ are numerically computed quantities obtained by a standard method for the decomposition (31), e.g., Matlab's eig function. Note that we assume V, the matrix of right eigenvectors, to be a computed quantity, as well as W, the matrix of left eigenvectors. Similarly, the computed diagonal matrix Λ will generally not have the exact eigenvalues on its diagonal.
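As a plain floating-point sanity check of the factorization (32) and of the role of Lemma 4 (all names below are local to this hedged sketch), one can verify that R · vec(G) can be formed with n × n operations only:

n   = 4;
Am  = randn(n);                            % stands in for mid A
[V, Lam] = eig(Am);  W = inv(V);           % Am = V*Lam*W with V*W = I, cf. (31)
lam = diag(Lam);  D = lam + lam.';         % D(i,j) = lambda_i + lambda_j, cf. (33)
G   = randn(n);  g = G(:);                 % g = vec(G)
Delta = kron(eye(n), Lam) + kron(Lam, eye(n));
r1 = kron(V, inv(W)) * (Delta \ (kron(inv(V), W) * g));      % explicit n^2-by-n^2 route
r2 = reshape(inv(W) * ((W*G*inv(V).') ./ D) * V.', [], 1);   % n-by-n route via Lemma 4(a),(b)
norm(r1 - r2) / norm(r1)                   % should be at rounding-error level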
For the n² × n² matrix R in (32), any n × n matrix A ∈ A and any vector z ∈ C^{n²} we have

u := (In² − R · (In ⊗ A + A ⊗ In)) · z = (V ⊗ W^{-1}) ∆^{-1} (∆ − In ⊗ (WAW^{-1}) − (V^{-1}AV) ⊗ In)(V^{-1} ⊗ W) z.

The latter expression is rich in Kronecker products, so that on account of the first part of Lemma 4 we can efficiently enclose u as described in lines 1–9 of Algorithm 1. We recall our convention on the implicit relation between upper and lower case letters when denoting variables, so z = vec(Z), u = vec(U), etc. The interval matrix Z in line 1 of Algorithm 1 should be chosen such that (30) is likely to take place. We use standard ε-inflation [29] to obtain Z; see also line 8 of Algorithm 2, where ε denotes the machine precision. Moreover, IV and IW are interval matrices which are known to contain the exact values of V^{-1} and W^{-1}, respectively. Whenever an expression in the algorithms is not specified exactly due to missing brackets, we evaluate "from right to left", i.e., we compute Y = W(Z IV^T) in the first line of Algorithm 1. The vectors di in line 5 of Algorithm 1 are the columns of the matrix D, for which

∆ = Diag(vec(D)) ∈ C^{n²×n²},    D = [d1 | . . . | dn] ∈ C^{n×n}.    (33)

Due to the second part of Lemma 4, multiplication by ∆^{-1} can be replaced by the Hadamard division of matrices, as done in line 8 (and again in line 13) of Algorithm 1. Note that, as a result of the enclosure property of circular arithmetic, the interval matrix U from line 9 of Algorithm 1 satisfies

vec(U) ⊇ {(In² − R · (In ⊗ A + A ⊗ In)) · z : A ∈ A, z ∈ z}.
On the other hand, we use lines 10–14 of Algorithm 1 to enclose

l := −R · f(x̃) = −R · ((In ⊗ A + A ⊗ In) · x̃ − f),

for arbitrary A ∈ A and F ∈ F in a similar manner. This means that the interval matrix L computed in line 14 of Algorithm 1 satisfies

vec(L) ⊇ {−R · ((In ⊗ A + A ⊗ In) · x̃ − f) : A ∈ A, f ∈ f}.

Algorithm 1 Computation of an interval matrix K such that vec(K) contains Kf from (29) for any A ∈ A and any F ∈ F
1: Compute Y = W(Z IV^T)    {the j-th column of Y will be denoted Yj}
2: Compute S = IV · A · V    {S is an n × n interval matrix with entries Sij}
3: Compute T = W · A · IW
4: for i = 1, . . . , n do
5:   compute ci = (Diag(di) − Sii I − T) Yi
6: end for
7: Compute Q = −Y S0^T + [c1 | · · · | cn]    {S0 = S − Diag(S11, . . . , Snn)}
8: Compute N = Q ./ D
9: Compute U = IW(N V^T)
10: {lines 11–14 evaluate −R f(x̃)}
11: Compute F̃ = A · X̃ + X̃ · A^T − F
12: Compute G = W(F̃ IV^T)
13: Compute H = G ./ D
14: Compute L = −IW(H V^T)
15: Compute K = L + U
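A hedged INTLAB transcription of Algorithm 1 may look as follows; the function name and interface are ours. A and F are intval matrices, Xt is the approximate solution X̃, Z the current candidate enclosure, V, W, D the computed (possibly complex) point matrices from (31) and (33), and IV, IW verified enclosures of V^{-1} and W^{-1}:

function K = krawczyk_lyap(A, F, Xt, Z, V, W, IV, IW, D)
n = size(A, 1);
Y = W * (Z * IV.');                      % line 1
S = IV * A * V;                          % line 2
T = W * A * IW;                          % line 3
C = intval(zeros(n));
for i = 1:n                              % lines 4-6
    C(:, i) = (diag(D(:, i)) - S(i, i)*eye(n) - T) * Y(:, i);
end
S0 = S - diag(diag(S));
Q  = -Y * S0.' + C;                      % line 7
N  = Q ./ D;                             % line 8
U  = IW * (N * V.');                     % line 9
Ft = A * Xt + Xt * A.' - F;              % line 11
G  = W * (Ft * IV.');                    % line 12
H  = G ./ D;                             % line 13
L  = -IW * (H * V.');                    % line 14
K  = L + U;                              % line 15
end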
Theorem 3.4. Let A ∈ IR^{n×n}, F ∈ IR^{n×n} be given, and suppose that K obtained by Algorithm 1 satisfies K ⊆ int(Z) for some R ∈ C^{n²×n²}, Z ∈ IC^{n×n} and X̃ ∈ R^{n×n}. Then R is nonsingular and λi(A) + λj(A) ≠ 0 for all eigenvalues λi(A) and λj(A) of every A ∈ A. Moreover, for each A ∈ A and F ∈ F the corresponding Lyapunov matrix equation AX + XA^T = F is uniquely solvable with solution X∗ satisfying X∗ ∈ X̃ + Z. Therefore, Ξ∃∃(A, F) ⊆ X̃ + Z.

Proof. The proof is similar to the proof of Theorem 2.1 in [25], except for K, which we obtain through Algorithm 1. As already noted, for any A ∈ A and any F ∈ F, the enclosure property of circular arithmetic, which extends to the operations between interval matrices and interval vectors, gives

R · [f − (In ⊗ A + A ⊗ In) · x̃] + [In² − R · (In ⊗ A + A ⊗ In)] · z ∈ vec(L) + vec(U) = vec(K)

for every z ∈ vec(Z), where f := vec(F) and x̃ := vec(X̃). Now, let P = In ⊗ A + A ⊗ In. Since K ⊆ int(Z), we get

R · (f − P · x̃) + (IN − R · P) · z ∈ int(z),    (34)
Algorithm 2 If successful, this algorithm provides an interval matrix X containing the united solution set Ξ∃∃(A, F) of the interval Lyapunov matrix equation AX + XA^T = F
1: Use a floating point algorithm to get an approximate solution X̃ of the Lyapunov matrix equation (mid A)X + X(mid A)^T = mid F
2: Use a floating point algorithm to get approximations V, W and Λ in the spectral decomposition (31) of mid A
3: Compute an interval matrix IV containing V^{-1}    {take verifylss.m from INTLAB, e.g.}
4: Compute an interval matrix IW containing W^{-1}
5: Compute L, an interval matrix containing −Rf(x̃), as in lines 11–14 of Algorithm 1
6: Put K := L and k := 0
7: repeat
8:   Put Z = □(0, K · [1 − ε, 1 + ε]), increment k    {ε-inflation}
9:   Compute U for input Z as in lines 1–9 of Algorithm 1
10:  Put K = L + U
11: until (K ⊆ int Z or k = 7)
12: if K ⊆ int Z then    {successful termination}
13:   Output X := X̃ + K ⊇ Ξ∃∃(A, F)
14: else
15:   Output "no outer estimation computed"
16: end if
where all operations on the left-hand side are point (non-interval) vector and matrix operations, z = vec(Z) and N = n². Now, define the function h : C^N → C^N,

h(z) := R · (f − P · x̃) + (IN − R · P) · z = z + R · [f − P · (x̃ + z)].

This is a continuous function that by (34) maps the non-empty, convex and compact set z into itself. In fact, according to (34) we even have

h(z) ⊆ int(z).    (35)

Therefore, Brouwer's fixed-point theorem implies the existence of some ẑ ∈ z such that

h(ẑ) = ẑ.    (36)

Hence,

R · [f − P · (x̃ + ẑ)] = 0.    (37)

Now assume that Py = 0 for some y ∈ C^N. According to the definition of h we get, for all λ ∈ R,

h(ẑ + λy) = ẑ + λy + R · [f − P · (x̃ + ẑ + λy)] = ẑ + R · [f − P · (x̃ + ẑ)] + λy − λRPy = h(ẑ) + λy − 0 = ẑ + λy.

So, h(ẑ + λy) = ẑ + λy for all λ ∈ R whenever Py = 0. For y ≠ 0 with Py = 0 there would be some λ̂ such that ẑ + λ̂y ∈ ∂z, which contradicts (35). So Py = 0 is true only for y = 0, which means that P = In ⊗ A + A ⊗ In is nonsingular. This implies that zero is not an eigenvalue of P, which by the third part of Lemma 4 means that 0 ∉ {λi(A) + λj(A) : A ∈ A} for i, j = 1, . . . , n. So, every Lyapunov equation AX + XA^T = F with A ∈ A and F ∈ F is uniquely solvable.

Now, we show that the matrix R is nonsingular. Suppose that Ry = 0 for some y ∈ C^N. Then, for all λ ∈ R we have

h(ẑ + λP^{-1}y) = ẑ + λP^{-1}y + R · [f − P · (x̃ + ẑ + λP^{-1}y)] = ẑ + λP^{-1}y.

Now suppose that y ≠ 0. Since P is nonsingular, P^{-1}y ≠ 0 and there would be some λ̂ such that ẑ + λ̂P^{-1}y ∈ ∂z and ẑ + λ̂P^{-1}y = h(ẑ + λ̂P^{-1}y), which contradicts (35). So y = 0, which means that R is nonsingular. Therefore, by (37) we get f − P · (x̃ + ẑ) = 0, i.e., P^{-1}f = x̃ + ẑ. Since ẑ ∈ z, we get P^{-1}f ∈ x̃ + z, and these conclusions hold for all A ∈ A and F ∈ F, because A and F were chosen arbitrarily. So, we get Ξ∃∃(A, F) ⊆ X̃ + Z.  □

Theorem 3.5. Algorithm 2 requires O(n^3) arithmetic operations.

Proof. It is clear that the first four lines of Algorithm 2 need O(n^3) operations. In line 5, we have to perform lines 11–14 of Algorithm 1, which involve matrix–matrix multiplications with matrices of size n × n and thus need O(n^3) operations. The cost of the remaining part of Algorithm 2 depends mainly on lines 1–9 of Algorithm 1. Lines 1–3 of Algorithm 1 clearly have cubic computational complexity. In lines 4–6 the cost for each i is O(n²), since we have a matrix–vector multiplication of size n for each i; in total, the for-loop thus also has a cost of O(n^3). In lines 7–9 of Algorithm 1 we again have to perform matrix–matrix multiplications, which is again O(n^3). So, overall, Algorithm 2 requires O(n^3) arithmetic operations.  □
We now compare our approach with the straightforward method of converting the interval Lyapunov equation to the interval linear system (5) proposed in [10–13], via two numerical examples.

Example 3.6. Consider the interval Lyapunov equation with matrices of size n = 40 obtained by the following Matlab commands, which we quote from VERSOFT:

alpha = 1e-6;
Al = 2*rand(n,n) - ones(n,n);
Au = Al + alpha*rand(n,n);
A  = infsup(Al, Au);
A  = A*A.';
Fl = 2*rand(n,n) - ones(n,n);
Fu = Fl + alpha*rand(n,n);
F  = infsup(Fl, Fu);

We obtained after just 1.0 s an outer estimation X of the hull of the united solution set. Our interval matrix X has a maximum componentwise radius of 21.8 and an average radius of 2.9. VERSOFT, on the other hand, obtains an interval outer estimation matrix R with a maximum componentwise radius of 16.9 and an average radius of 2.0 after 53.2 s. The matrices X and R obtained are too large to be reproduced here; we may look, e.g., at their first five diagonal entries:

X11 = [1.34, 6.39], X22 = [1.84, 10.25], X33 = [−0.55, 1.71], X44 = [2.86, 16.55], X55 = [−10.46, 1.19];
R11 = [2.09, 5.64], R22 = [3.09, 8.99], R33 = [−0.12, 1.29], R44 = [4.50, 14.90], R55 = [−8.47, −0.79].

Example 3.7. We now consider a larger Lyapunov equation arising in the control of heat flow in a thin rod [30,3]. The matrices A and F have size 150 × 150, and we randomly perturb them so that the maximum componentwise radii of A and F become 9.53 × 10^−5 and 9.9 × 10^−3, respectively. VERSOFT fails to obtain any solution because it runs out of memory; indeed, examples of this size or larger cannot be tried with VERSOFT, probably because of its high computational complexity. Algorithm 2, on the other hand, succeeds in obtaining an enclosure of the united solution set after 7 iterations and 21.92 s, with a maximum componentwise radius of 2.28 and an average componentwise radius of 1.07.

4. Conclusion and future directions

First, we generalized the concept of AE-solution sets to the interval Lyapunov matrix equation and proved some analytical characterizations of those solution sets. We then proposed a modified Krawczyk operator which, in contrast to the standard Krawczyk operator and the available methods in the literature, keeps the computational complexity of computing outer estimations for the united solution set down to cubic. Note that we use outward rounding wherever necessary in our implementation of the modified Krawczyk operator, so that our outer estimation is a verified result that takes care of rounding errors everywhere.

The present work is, however, only a starting point for research concerning the interval Lyapunov matrix equation, and several problems remain to be tackled. Two interesting problems are the detailed study of the tolerable and controllable solution sets of the interval Lyapunov equation, and the design of methods that can efficiently approximate outer estimations for them; the tolerable and the controllable solution sets have been studied in [31–33] for interval linear systems. Another interesting problem, which we leave for future research, is to develop methods for computing inner estimations of the different AE-solution sets of the interval Lyapunov matrix equation, including the united, the tolerable and the controllable solution sets. This cannot easily be done, because of the observations in Remark 2.4; it seems that a way to deal with the problem of computing such inner estimations is to employ the theory of parameter-dependent interval linear systems [34,35].

Acknowledgments

The authors would like to thank the referees and the editor. They are especially indebted to the referees for their many thoughtful suggestions and constructive criticisms. We are also grateful to Professor Jiri Rohn for his help with this paper.

References
[1] A. Antoulas, Approximation of Large-Scale Dynamical Systems, in: Advances in Design and Control, SIAM, Philadelphia, 2005.
[2] B.N. Datta, Numerical Methods for Linear Control Systems, Academic Press, San Diego, 2004.
[3] D. Kressner, V. Mehrmann, T. Penzl, CTDSX—a collection of benchmark examples for state-space realizations of continuous-time dynamical systems, Tech. Report SLICOT Working Note 1998-9, 1998. http://www.slicot.org/REPORTS/SLWN1998-9.ps.gz.
[4] T. Penzl, A cyclic low-rank Smith method for large sparse Lyapunov equations, SIAM J. Sci. Comput. 21 (2000) 1401–1418.
[5] Y. Saad, Numerical solution of large Lyapunov equations, in: M.A. Kaashoek, J.H. van Schuppen, A.C. Ran (Eds.), Signal Processing, Scattering, Operator Theory and Numerical Methods, in: Proc. of the Internat. Symposium MTNS-89, vol. 3, Birkhauser, Boston, 1990, pp. 503–511.
[6] R.H. Bartels, G.W. Stewart, Algorithm 432: solution of the matrix equation AX + XB = C, Commun. ACM 15 (1972) 820–826.
[7] CODATA internationally recommended values of the fundamental physical constants, The Physics Laboratory of the National Institute of Standards. Available online at: http://physics.nist.gov/cuu/Constants.
[8] R.B. Kearfott, V. Kreinovich, Applications of Interval Computations, Kluwer Academic Publishers, Dordrecht, Boston, London, 1996.
[9] S.M. Rump, INTLAB—INTerval LABoratory, in: Tibor Csendes (Ed.), Developments in Reliable Computing, Kluwer, Dordrecht, Netherlands, 1999, pp. 77–105.
[10] J. Rohn, Verification software in MATLAB/INTLAB. Available online at: http://uivtx.cs.cas.cz/~rohn/matlab.
[11] N.P. Seif, S.A. Hussein, A.S. Deif, The interval Sylvester matrix equation, Computing 52 (1994) 233–244.
[12] V.N. Shashikhin, Robust stabilization of linear interval systems, J. Appl. Math. Mech. 66 (2002) 393–400.
[13] V.N. Shashikhin, Robust assignment of poles in large-scale interval systems, Autom. Remote Control 63 (2002) 200–208.
[14] J. Rohn, V. Kreinovich, Computing exact componentwise bounds on solutions of linear systems with interval data is NP-hard, SIAM J. Matrix Anal. Appl. 16 (1995) 415.
[15] V. Kreinovich, A. Lakeyev, J. Rohn, P. Kahl, Computational Complexity and Feasibility of Data Processing and Interval Computations, Kluwer, Dordrecht, 1998.
[16] J. Rohn, Enclosing solutions of linear interval equations is NP-hard, Computing 53 (1994) 365–368.
[17] R.B. Kearfott, M.T. Nakano, A. Neumaier, S.M. Rump, S.P. Shary, P. van Hentenryck, Standardized notation in interval analysis, Reliab. Comput. 15 (2010) 7–13.
[18] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.
[19] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.
[20] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.
[21] S.P. Shary, A new technique in systems analysis under interval uncertainty and ambiguity, Reliab. Comput. 8 (2002) 321–418.
[22] B. Hashemi, M. Dehghan, Results concerning interval linear systems with multiple right-hand sides and the interval matrix equation AX = B, J. Comput. Appl. Math. 235 (2011) 2969–2978.
[23] E. Adams, U. Kulisch (Eds.), Scientific Computing with Automatic Result Verification, in: Mathematics in Science and Engineering, vol. 189, Academic Press, Boston, 1993.
[24] R.B. Kearfott, Interval analysis: interval Newton methods, in: Encyclopedia of Optimization, vol. 3, Kluwer, 2001, pp. 76–78.
[25] C. Jansson, S.M. Rump, Rigorous solution of linear programming problems with uncertain data, ZOR, Methods Models Oper. Res. 35 (1991) 87–111.
[26] S.M. Rump, Verification methods for dense and sparse systems of equations, in: J. Herzberger (Ed.), Topics in Validated Computations, North-Holland, Amsterdam, 1994, pp. 63–136.
[27] A. Frommer, B. Hashemi, Verified computation of square roots of a matrix, SIAM J. Matrix Anal. Appl. 31 (2009) 1279–1302. Preprint available as technical report BUW-SC 09/2, Bergische Universität Wuppertal. www-ai.math.uni-wuppertal.de/SciComp/preprints/SC0902.pdf.
[28] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991.
[29] S.M. Rump, A note on epsilon-inflation, Reliab. Comput. 4 (1998) 371–375.
[30] A.S. Hodel, K.P. Poolla, B. Tenison, Numerical solution of the Lyapunov equation by approximate power iteration, Linear Algebra Appl. 236 (1996) 205–230.
[31] S.P. Shary, Solving the interval tolerance problem, Math. Comput. Simul. 39 (1995) 53–85.
[32] S.P. Shary, Algebraic approach to the interval linear static identification, tolerance and control problems, or one more application of Kaucher arithmetic, Reliab. Comput. 2 (1996) 3–33.
[33] S.P. Shary, Controllable solution set to interval static systems, Appl. Math. Comput. 86 (1997) 185–196.
[34] E. Popova, Quality of the solution sets of parameter-dependent interval linear systems, Z. Angew. Math. Mech. 82 (2002) 723–727.
[35] E. Popova, W. Krämer, Inner and outer bounds for the solution set of parametric interval linear systems, J. Comput. Appl. Math. 199 (2007) 310–316.