Linear Algebra and its Applications ••• (••••) •••–•••
A backward error for the symmetric generalized inverse eigenvalue problem

Wei Ma¹
School of Mathematics and Statistics, Nanyang Normal University, Nanyang, Henan 473061, People's Republic of China

Article history: Received 10 July 2013; Accepted 17 November 2013; Available online xxxx. Submitted by J.L. Barlow.

MSC: 65F18; 15A18; 49J52; 90C56

Abstract. In this paper, we discuss the backward error for the symmetric generalized inverse eigenvalue problem, which extends the result of Sun (1999) [10]. The optimal backward error is defined for the symmetric generalized inverse eigenvalue problem with respect to an approximate solution, and upper and lower bounds are derived for the optimal backward error. These expressions may be useful for testing the stability of practical algorithms. © 2013 Published by Elsevier Inc.

Keywords: Generalized inverse eigenvalue problem; Optimal backward error; Upper bound; Lower bound
1. Introduction

Throughout this paper, we use $\mathbb{R}_n^{n\times n}$ to denote the set of $n\times n$ real nonsingular matrices, $\mathcal{S}_n$ the set of $n\times n$ real symmetric matrices, and $\mathcal{D}_n$ the set of $n\times n$ real diagonal matrices. $\|\cdot\|$ denotes any unitarily invariant norm, and $\|\cdot\|_2$ the spectral norm and the Euclidean vector norm. The superscript $T$ is for transpose. The symbol $I$ is the $n\times n$ identity matrix. $A>0$ ($A\geqslant 0$) denotes that $A$ is a symmetric positive definite (symmetric positive semidefinite) matrix. For four matrices $A,B,C$ and $D\in\mathcal{S}_n$, the relation $(A,B)\sim(C,D)$ means that the pairs $(A,B)$ and $(C,D)$ have the same generalized eigenvalues. $A\geqslant B$ means that $A-B$ is a positive semidefinite matrix.
¹ E-mail address: [email protected]. The research of this author was partially supported by the Special Project Grant of Nanyang Normal University.

0024-3795/$ – see front matter © 2013 Published by Elsevier Inc.
http://dx.doi.org/10.1016/j.laa.2013.11.025
Consider the following symmetric generalized inverse eigenvalue problem:

Problem GIEP. Given $2n+2$ real symmetric $n\times n$ matrices $\{A_i\}_{i=0}^n$, $\{B_i\}_{i=0}^n$ and $n$ real numbers $\lambda_1\leqslant\lambda_2\leqslant\cdots\leqslant\lambda_n$, find $c=(c_1,c_2,\ldots,c_n)$ with real components such that

$$\bigl(A(c),B(c)\bigr)\sim(\Lambda,I),$$

where

$$A(c):=A_0+\sum_{i=1}^n c_iA_i,\qquad B(c):=B_0+\sum_{i=1}^n c_iB_i\geqslant\alpha I,$$

$\Lambda=\mathrm{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n)$ and $\alpha>0$.

Generalized inverse eigenvalue problems arise from a remarkable variety of applications, including the discrete analogue of inverse Sturm–Liouville problems [3] and structural design [4]. A special case of the GIEP in which $B(c)=I_n$ is known as the algebraic inverse eigenvalue problem [3,4]. One may refer to [5] and [6] for various applications and numerical methods for GIEPs.

Sun [10] defined the backward error of the symmetric matrix inverse eigenvalue problem and derived an explicit expression for it. Liu and Bai [9] extended Sun's approach to more general inverse eigenvalue problems and derived upper and lower bounds for the optimal backward error. Chen [2] extended Sun's approach to the inverse singular value problem and derived an explicit expression for the optimal backward error. However, the optimal backward error of the symmetric generalized inverse eigenvalue problem has rarely been treated in the literature. In this paper we consider the optimal backward error of the symmetric generalized inverse eigenvalue problem and derive upper and lower bounds for it, which extends the result of Sun [10].

Let $\tilde c=(\tilde c_1,\tilde c_2,\ldots,\tilde c_n)$ be an approximate solution to Problem GIEP. In general, there are many backward perturbations $\Delta A_0,\Delta A_1,\ldots,\Delta A_n,\Delta B_0,\Delta B_1,\ldots,\Delta B_n\in\mathcal{S}_n$ and $\Delta\Lambda\in\mathcal{D}_n$ such that

$$\Bigl(A_0+\Delta A_0+\sum_{i=1}^n\tilde c_i(A_i+\Delta A_i),\; B_0+\Delta B_0+\sum_{i=1}^n\tilde c_i(B_i+\Delta B_i)\Bigr)\sim(\Lambda+\Delta\Lambda,I)$$

and

$$B_0+\Delta B_0+\sum_{i=1}^n\tilde c_i(B_i+\Delta B_i)\geqslant\alpha I.$$
One may well ask: how close is the nearest Problem GIEP for which $\tilde c$ is the solution? In order to define backward errors for measuring the distance between the original problem and the perturbed problems, we define the normwise backward error $\eta(\tilde c)$ by

$$\eta(\tilde c)=\min_{(\Delta A_0,\ldots,\Delta A_n,\Delta B_0,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{G}}\Biggl[\sum_{k=0}^n\biggl(\frac{\|\Delta A_k\|}{\|A_k\|}\biggr)^2+\sum_{k=0}^n\biggl(\frac{\|\Delta B_k\|}{\|B_k\|}\biggr)^2+\biggl(\frac{\|\Delta\Lambda\|}{\|\Lambda\|}\biggr)^2\Biggr]^{1/2}\tag{1}$$

where

$$\mathcal{G}=\left\{(\Delta A_0,\ldots,\Delta A_n,\Delta B_0,\ldots,\Delta B_n,\Delta\Lambda):\begin{array}{l}\bigl(A_0+\Delta A_0+\sum_{k=1}^n\tilde c_k(A_k+\Delta A_k),\,B_0+\Delta B_0+\sum_{k=1}^n\tilde c_k(B_k+\Delta B_k)\bigr)\sim(\Lambda+\Delta\Lambda,I),\\[2pt] \Delta A_0,\ldots,\Delta A_n,\Delta B_0,\ldots,\Delta B_n\in\mathcal{S}_n,\ \Delta\Lambda\in\mathcal{D}_n\end{array}\right\}.\tag{2}$$
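To fix ideas, the following sketch builds a tiny instance of Problem GIEP with made-up diagonal data (illustrative values only, not taken from this paper; numpy is assumed to be available) and verifies that a given $c$ reproduces the prescribed spectrum.

```python
import numpy as np

# Hypothetical 2x2 instance of Problem GIEP (illustrative data only).
n = 2
A = [np.diag([1.0, 2.0]), np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # A_0, A_1, A_2
B = [np.eye(n), np.zeros((n, n)), np.zeros((n, n))]                  # B_0, B_1, B_2
c = np.array([1.0, 1.0])

# A(c) = A_0 + sum_i c_i A_i,  B(c) = B_0 + sum_i c_i B_i
Ac = A[0] + sum(ci * Ai for ci, Ai in zip(c, A[1:]))
Bc = B[0] + sum(ci * Bi for ci, Bi in zip(c, B[1:]))

# generalized eigenvalues of the pair (A(c), B(c)); here B(c) = I >= alpha*I with alpha = 1
lam = np.sort(np.linalg.eigvals(np.linalg.solve(Bc, Ac)).real)

# this particular c solves the toy GIEP with Lambda = diag(2, 3)
assert np.allclose(lam, [2.0, 3.0])
```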
Note that the normwise backward error $\eta(\tilde c)$ can also be called the optimal backward perturbation bound [10]. From the definition of the backward error, we can find upper and lower bounds for the optimal backward error $\eta(\tilde c)$, which may be useful for testing the stability of practical algorithms.

This paper is organized as follows. In Section 2 we introduce some preliminary results on the generalized eigenvalue problem. In Section 3 we derive the upper and lower bounds for $\eta(\tilde c)$ using the techniques described in [10]. In Section 4 we use a numerical example to illustrate our result.

2. Preliminaries

In the following, we review some preliminary lemmas. We first recall some useful properties of the generalized eigenvalue problem (GEP) of a real matrix pair [7]:

Lemma 2.1. Suppose $A,B\in\mathcal{S}_n$ with $B>0$. Then there exists a matrix $X=(x_1,x_2,\ldots,x_n)\in\mathbb{R}_n^{n\times n}$ such that

$$X^TAX=\mathrm{diag}(\gamma_1,\gamma_2,\ldots,\gamma_n),\qquad X^TBX=I,$$
and $Ax_i=\gamma_iBx_i$, $i=1,2,\ldots,n$.

Lemma 2.2. Let the conditions be as described in Lemma 2.1, and suppose in addition that $B\geqslant\alpha I$ with $\alpha>0$. Then

$$\|X\|_2\leqslant\frac{1}{\sqrt\alpha}\qquad\text{and}\qquad\bigl\|X^{-1}\bigr\|_2\geqslant\sqrt\alpha.$$

Proof. From $B\geqslant\alpha I$ we get

$$\alpha X^TX\leqslant X^TBX=I.$$

Thus

$$\|X\|_2\leqslant\frac{1}{\sqrt\alpha}.$$

It is easy to see that $B^{-1}\leqslant\frac{1}{\alpha}I$; by [1, Lemma 8.4.1], we obtain

$$\bigl\|B^{-1}\bigr\|_2\leqslant\frac{1}{\alpha},$$

and so

$$1=\bigl\|X^{-1}B^{-1}X^{-T}\bigr\|_2\leqslant\bigl\|X^{-1}\bigr\|_2^2\bigl\|B^{-1}\bigr\|_2\leqslant\frac{1}{\alpha}\bigl\|X^{-1}\bigr\|_2^2.$$

Thus

$$\bigl\|X^{-1}\bigr\|_2\geqslant\sqrt\alpha.\qquad\Box$$
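Lemma 2.2 can be checked numerically. The following sketch (with randomly generated matrices; numpy assumed available) constructs $X$ as in Lemma 2.1 via the symmetric eigendecomposition of $B^{-1/2}AB^{-1/2}$ and verifies both norm bounds.

```python
import numpy as np

# Numerical check of Lemma 2.2 with random data.
rng = np.random.default_rng(0)
n, alpha = 5, 0.5

A = rng.standard_normal((n, n)); A = (A + A.T) / 2      # A symmetric
C = rng.standard_normal((n, n))
B = C @ C.T + alpha * np.eye(n)                         # B symmetric with B >= alpha*I

# Construct X with X^T B X = I and X^T A X diagonal (as in Lemma 2.1):
w, Q = np.linalg.eigh(B)
Bih = Q @ np.diag(w**-0.5) @ Q.T                        # B^{-1/2}
gam, V = np.linalg.eigh(Bih @ A @ Bih)
X = Bih @ V

assert np.allclose(X.T @ B @ X, np.eye(n))              # X^T B X = I
assert np.allclose(X.T @ A @ X, np.diag(gam))           # X^T A X diagonal

# Lemma 2.2: ||X||_2 <= 1/sqrt(alpha) and ||X^{-1}||_2 >= sqrt(alpha)
assert np.linalg.norm(X, 2) <= 1 / np.sqrt(alpha) + 1e-12
assert np.linalg.norm(np.linalg.inv(X), 2) >= np.sqrt(alpha) - 1e-12
```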
In order to discuss spectral variations of the definite generalized eigenvalue problem, we have the following fundamental result for two positive definite Hermitian matrices [8, Lemma 2.5].

Lemma 2.3. Let $B$ and $D$ be two positive definite Hermitian matrices. For any unitarily invariant norm $\|\cdot\|$, we have

$$\bigl\|B^{-1/2}-D^{-1/2}\bigr\|\leqslant\frac{\bigl\|B^{-1}\bigr\|_2\bigl\|D^{-1}\bigr\|_2}{\bigl\|B^{-1/2}\bigr\|_2+\bigl\|D^{-1/2}\bigr\|_2}\,\|B-D\|.$$
Lemma 2.4. Suppose $A,B,C$ and $D\in\mathcal{S}_n$ with $B,D>0$, and let $\lambda(A,B)=\{(\sigma_i,1),\ i=1,2,\ldots,n\}$ and $\lambda(C,D)=\{(\tau_j,1),\ j=1,2,\ldots,n\}$, where $\sigma_i,\tau_j\in\mathbb{R}$, $i,j=1,2,\ldots,n$, and $\sigma_1\leqslant\sigma_2\leqslant\cdots\leqslant\sigma_n$, $\tau_1\leqslant\tau_2\leqslant\cdots\leqslant\tau_n$. Then, for any unitarily invariant norm $\|\cdot\|$,

$$\bigl\|\mathrm{diag}(\sigma_1-\tau_1,\sigma_2-\tau_2,\ldots,\sigma_n-\tau_n)\bigr\|\leqslant\bigl\|B^{-1}\bigr\|_2\|A-C\|+\bigl\|B^{-1}\bigr\|_2\|C\|_2\bigl\|D^{-1}\bigr\|_2\|B-D\|.\tag{3}$$

Proof. Since $B,D>0$ and $(\sigma_i,1)\in\lambda(A,B)$, $(\tau_j,1)\in\lambda(C,D)$, we have

$$\sigma_i\in\lambda\bigl(B^{-1/2}AB^{-1/2}\bigr)\qquad\text{and}\qquad\tau_j\in\lambda\bigl(D^{-1/2}CD^{-1/2}\bigr)$$

for $i,j=1,2,\ldots,n$, and $B^{-1/2}AB^{-1/2}$, $D^{-1/2}CD^{-1/2}$ are both Hermitian matrices. By the Mirsky inequality (see, e.g., [11, Chapter IV, Theorem 4.11 and Corollary 4.12]),

$$\bigl\|\mathrm{diag}(\sigma_1-\tau_1,\sigma_2-\tau_2,\ldots,\sigma_n-\tau_n)\bigr\|\leqslant\bigl\|B^{-1/2}AB^{-1/2}-D^{-1/2}CD^{-1/2}\bigr\|.\tag{4}$$

On the other hand,

$$B^{-1/2}AB^{-1/2}-D^{-1/2}CD^{-1/2}=B^{-1/2}(A-C)B^{-1/2}+B^{-1/2}C\bigl(B^{-1/2}-D^{-1/2}\bigr)+\bigl(B^{-1/2}-D^{-1/2}\bigr)CD^{-1/2},$$

which, together with Lemma 2.3, yields

$$\bigl\|B^{-1/2}AB^{-1/2}-D^{-1/2}CD^{-1/2}\bigr\|\leqslant\bigl\|B^{-1/2}\bigr\|_2^2\|A-C\|+\bigl(\bigl\|B^{-1/2}\bigr\|_2+\bigl\|D^{-1/2}\bigr\|_2\bigr)\|C\|_2\bigl\|B^{-1/2}-D^{-1/2}\bigr\|\leqslant\bigl\|B^{-1}\bigr\|_2\|A-C\|+\bigl\|B^{-1}\bigr\|_2\|C\|_2\bigl\|D^{-1}\bigr\|_2\|B-D\|.\tag{5}$$

In the above derivation we have employed the facts $\|B^{-1/2}\|_2=\|B^{-1}\|_2^{1/2}$ and $\|D^{-1/2}\|_2=\|D^{-1}\|_2^{1/2}$, since $B^{-1}$ and $D^{-1}$ are both positive definite Hermitian. The inequality (3) is a consequence of (4) and (5). □
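The bound (3) is easy to exercise numerically. The sketch below (random data, Frobenius norm as the unitarily invariant norm; numpy assumed available) computes both sides of (3) for a random definite pair.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6

def sym(M):
    return (M + M.T) / 2

def spd(shift=0.3):
    C = rng.standard_normal((n, n))
    return C @ C.T + shift * np.eye(n)

A, C = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
B, D = spd(), spd()

def gen_eigs(S, T):
    # eigenvalues of the definite pair (S, T): eigenvalues of T^{-1/2} S T^{-1/2}
    w, Q = np.linalg.eigh(T)
    Tih = Q @ np.diag(w**-0.5) @ Q.T
    return np.sort(np.linalg.eigvalsh(Tih @ S @ Tih))

sigma, tau = gen_eigs(A, B), gen_eigs(C, D)

nrm2 = lambda M: np.linalg.norm(M, 2)
lhs = np.linalg.norm(sigma - tau)              # Frobenius norm of diag(sigma_i - tau_i)
rhs = (nrm2(np.linalg.inv(B)) * np.linalg.norm(A - C, 'fro')
       + nrm2(np.linalg.inv(B)) * nrm2(C) * nrm2(np.linalg.inv(D)) * np.linalg.norm(B - D, 'fro'))

assert lhs <= rhs + 1e-10                      # inequality (3) holds
```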
3. Main results

For an approximate solution $\tilde c=(\tilde c_1,\tilde c_2,\ldots,\tilde c_n)$ of Problem GIEP, we define $\tilde A$ and $\tilde B$ by

$$\tilde A=A_0+\sum_{k=1}^n\tilde c_kA_k,\qquad\tilde B=B_0+\sum_{k=1}^n\tilde c_kB_k\geqslant\alpha I.\tag{6}$$

By Lemmas 2.1 and 2.2, $\tilde A$ and $\tilde B$ have the generalized eigenvalue decomposition

$$\tilde A=\tilde U^{-T}\tilde\Lambda\tilde U^{-1},\qquad\tilde B=\tilde U^{-T}\tilde U^{-1},\tag{7}$$

where $\tilde U\in\mathbb{R}_n^{n\times n}$, $\|\tilde U\|_2\leqslant\frac{1}{\sqrt\alpha}$, and $\tilde\Lambda=\mathrm{diag}(\tilde\lambda_1,\tilde\lambda_2,\ldots,\tilde\lambda_n)$ with $\tilde\lambda_1\leqslant\tilde\lambda_2\leqslant\cdots\leqslant\tilde\lambda_n$.
Let

$$M=\frac{1}{\sqrt\alpha},\qquad N=\bigl\|\tilde U^{-1}\bigr\|_2,\qquad K=\max\bigl\{1,\ \|\tilde A\|_2\bigl\|\tilde B^{-1}\bigr\|_2M^2\bigr\},$$

and

$$\theta_1=\frac{\|A_0\|}{\|A_1\|},\quad\theta_2=\frac{\|A_0\|}{\|A_2\|},\quad\ldots,\quad\theta_n=\frac{\|A_0\|}{\|A_n\|},\qquad\theta_{n+1}=\frac{\|A_0\|}{\|B_1\|},\quad\theta_{n+2}=\frac{\|A_0\|}{\|B_2\|},\quad\ldots,\quad\theta_{2n}=\frac{\|A_0\|}{\|B_n\|},\qquad\theta_{2n+1}=\frac{\|A_0\|}{\|B_0\|},\qquad\theta_{2n+2}=\frac{\|A_0\|}{\|\Lambda\|}.$$

The following result gives the upper and lower bounds for the optimal backward error $\eta(\tilde c)$.

Theorem 3.1. Let $\tilde c=(\tilde c_1,\tilde c_2,\ldots,\tilde c_n)$ be an approximate solution to Problem GIEP, and let $\eta(\tilde c)$ be the backward error defined by (1) and (2). Define $\tilde A$ and $\tilde B$ by (6), and let $\tilde A$ and $\tilde B$ have the generalized eigenvalue decomposition (7). Moreover, define $\rho$, $g$ and $\Theta$ by

$$\rho=\frac{1}{K}\|\Lambda-\tilde\Lambda\|,\qquad g=\Bigl(|\tilde c_1|,\ldots,|\tilde c_n|,|\tilde c_1|,\ldots,|\tilde c_n|,1,\frac{1}{K}\Bigr)^T,\qquad\Theta=\mathrm{diag}(\theta_1,\theta_2,\ldots,\theta_{2n+2}).\tag{8}$$

Then

$$\frac{\|\Lambda-\tilde\Lambda\|}{K\|A_0\|\sqrt{1+g^T\Theta^{-2}g}}\leqslant\eta(\tilde c)\leqslant\frac{N^2\|\Lambda-\tilde\Lambda\|}{\|A_0\|\sqrt{1+g^T\Theta^{-2}g}}.\tag{9}$$
Proof. Define the sets $\mathcal{L}$ and $\mathcal{L}_\rho$ by

$$\mathcal{L}=\bigl\{(\Delta A_1,\ldots,\Delta A_n,\Delta B_0,\ldots,\Delta B_n,\Delta\Lambda):\ \Delta A_k,\Delta B_0,\Delta B_k\in\mathcal{S}_n,\ k=1,2,\ldots,n,\ \Delta\Lambda\in\mathcal{D}_n\bigr\},$$

$$\mathcal{L}_\rho=\Bigl\{(\Delta A_1,\ldots,\Delta A_n,\Delta B_0,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}:\ \sum_{k=1}^n|\tilde c_k|\|\Delta A_k\|+\sum_{k=1}^n|\tilde c_k|\|\Delta B_k\|+\|\Delta B_0\|+\frac{1}{K}\|\Delta\Lambda\|\leqslant\rho\Bigr\}.$$

By (1) and (2), we have

$$\eta(\tilde c)^2=\frac{1}{\|A_0\|^2}\min_{(\Delta A_1,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}}\Biggl\{\sum_{k=1}^n\theta_k^2\|\Delta A_k\|^2+\sum_{j=1}^n\theta_{n+j}^2\|\Delta B_j\|^2+\theta_{2n+1}^2\|\Delta B_0\|^2+\theta_{2n+2}^2\|\Delta\Lambda\|^2+\min_{W\in\mathbb{R}_n^{n\times n},\,\|W\|_2\leqslant M}\Bigl[\Bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\Bigl(\tilde A+\sum_{k=1}^n\tilde c_k\Delta A_k\Bigr)\Bigr\|+\Bigl\|W^{-T}W^{-1}-\Bigl(\tilde B+\Delta B_0+\sum_{k=1}^n\tilde c_k\Delta B_k\Bigr)\Bigr\|\Bigr]^2\Biggr\}.\tag{10}$$
Set

$$X_1=\min_{(\Delta A_1,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}_\rho}\Biggl\{\sum_{k=1}^n\theta_k^2\|\Delta A_k\|^2+\sum_{j=1}^n\theta_{n+j}^2\|\Delta B_j\|^2+\theta_{2n+1}^2\|\Delta B_0\|^2+\theta_{2n+2}^2\|\Delta\Lambda\|^2+\min_{W\in\mathbb{R}_n^{n\times n},\,\|W\|_2\leqslant M}\Bigl[\Bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\Bigl(\tilde A+\sum_{k=1}^n\tilde c_k\Delta A_k\Bigr)\Bigr\|+\Bigl\|W^{-T}W^{-1}-\Bigl(\tilde B+\Delta B_0+\sum_{k=1}^n\tilde c_k\Delta B_k\Bigr)\Bigr\|\Bigr]^2\Biggr\}$$

$$\geqslant\min_{(\Delta A_1,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}_\rho}\Biggl\{\sum_{k=1}^n\theta_k^2\|\Delta A_k\|^2+\sum_{j=1}^n\theta_{n+j}^2\|\Delta B_j\|^2+\theta_{2n+1}^2\|\Delta B_0\|^2+\theta_{2n+2}^2\|\Delta\Lambda\|^2+\Bigl[\min_{W\in\mathbb{R}_n^{n\times n},\,\|W\|_2\leqslant M}\bigl(\bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\tilde A\bigr\|+\bigl\|W^{-T}W^{-1}-\tilde B\bigr\|\bigr)-\sum_{k=1}^n|\tilde c_k|\|\Delta A_k\|-\sum_{k=1}^n|\tilde c_k|\|\Delta B_k\|-\|\Delta B_0\|\Bigr]^2\Biggr\}.\tag{11}$$

From Lemma 2.4 and $\|W\|_2\leqslant M$, we obtain

$$\|\Lambda+\Delta\Lambda-\tilde\Lambda\|\leqslant\bigl\|\bigl(W^{-T}W^{-1}\bigr)^{-1}\bigr\|_2\bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\tilde A\bigr\|+\bigl\|\bigl(W^{-T}W^{-1}\bigr)^{-1}\bigr\|_2\|\tilde A\|_2\bigl\|\tilde B^{-1}\bigr\|_2\bigl\|W^{-T}W^{-1}-\tilde B\bigr\|\leqslant M^2\bigl(\bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\tilde A\bigr\|+\|\tilde A\|_2\bigl\|\tilde B^{-1}\bigr\|_2\bigl\|W^{-T}W^{-1}-\tilde B\bigr\|\bigr)\leqslant K\bigl(\bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\tilde A\bigr\|+\bigl\|W^{-T}W^{-1}-\tilde B\bigr\|\bigr).\tag{12}$$

Therefore

$$\bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\tilde A\bigr\|+\bigl\|W^{-T}W^{-1}-\tilde B\bigr\|\geqslant\frac{1}{K}\|\Lambda+\Delta\Lambda-\tilde\Lambda\|.$$

Together with (11), this shows

$$X_1\geqslant\min_{(\Delta A_1,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}_\rho}\Biggl\{\sum_{k=1}^n\theta_k^2\|\Delta A_k\|^2+\sum_{j=1}^n\theta_{n+j}^2\|\Delta B_j\|^2+\theta_{2n+1}^2\|\Delta B_0\|^2+\theta_{2n+2}^2\|\Delta\Lambda\|^2+\Bigl[\frac{1}{K}\|\Lambda-\tilde\Lambda\|-\sum_{k=1}^n|\tilde c_k|\|\Delta A_k\|-\sum_{k=1}^n|\tilde c_k|\|\Delta B_k\|-\|\Delta B_0\|-\frac{1}{K}\|\Delta\Lambda\|\Bigr]^2\Biggr\}.\tag{13}$$

Let

$$\delta_k=\|\Delta A_k\|,\ k=1,2,\ldots,n;\qquad\delta_{n+j}=\|\Delta B_j\|,\ j=1,2,\ldots,n;\qquad\delta_{2n+1}=\|\Delta B_0\|;\qquad\delta_{2n+2}=\|\Delta\Lambda\|;\qquad d=(\delta_1,\delta_2,\ldots,\delta_{2n+2})^T.$$

Together with (13), we observe the following fact:

$$X_1\geqslant\min_{\delta_1,\delta_2,\ldots,\delta_{2n+2}\geqslant0,\ g^Td\leqslant\rho}f(d),$$
where the function $f(d)$ is defined by

$$f(d)=d^T\Theta^2d+\bigl(\rho-g^Td\bigr)^2.$$

The gradient $\nabla f$ of $f$ has the expression

$$\nabla f=2\bigl[\bigl(\Theta^2+gg^T\bigr)d-g\rho\bigr].$$

Therefore, a vector $d_*$ satisfies $\nabla f(d_*)=0$ if and only if

$$d_*=\bigl(\Theta^2+gg^T\bigr)^{-1}g\rho=\frac{\Theta^{-2}g\rho}{1+g^T\Theta^{-2}g},$$

which satisfies $g^Td_*\leqslant\rho$. The Hessian matrix $H$ of $f$ has the expression

$$H=2\bigl(\Theta^2+gg^T\bigr),$$

which shows that $H$ is symmetric positive definite. Hence, the function $f(d)$ has a unique global minimal point $d_*$, and

$$\min_{\delta_1,\delta_2,\ldots,\delta_{2n+2}\geqslant0,\ g^Td\leqslant\rho}f(d)=f(d_*)=\frac{\rho^2}{1+g^T\Theta^{-2}g}.$$
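This closed-form minimization is easy to verify numerically. The sketch below (illustrative values for $\Theta$, $g$ and $\rho$; numpy assumed available) checks that $d_*$ is feasible, attains $\rho^2/(1+g^T\Theta^{-2}g)$, and is not beaten by random feasible points.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
theta = rng.uniform(0.5, 2.0, m)                 # diagonal of Theta > 0
g = np.abs(rng.standard_normal(m)) + 0.1         # g > 0
rho = 1.7

f = lambda d: d @ np.diag(theta**2) @ d + (rho - g @ d) ** 2
s = g @ (g / theta**2)                           # g^T Theta^{-2} g

# closed-form minimizer d* = Theta^{-2} g rho / (1 + g^T Theta^{-2} g)
d_star = (g / theta**2) * rho / (1 + s)

assert abs(f(d_star) - rho**2 / (1 + s)) < 1e-12  # f(d*) = rho^2 / (1 + g^T Theta^{-2} g)
assert g @ d_star <= rho and np.all(d_star >= 0)  # d* is feasible

# d* is the unconstrained global minimizer (H > 0), so no point does better
for _ in range(200):
    d = np.abs(rng.standard_normal(m))
    assert f(d) >= f(d_star) - 1e-12
```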
On the other hand, if $(\Delta A_1,\ldots,\Delta A_n,\Delta B_0,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}\setminus\mathcal{L}_\rho$, then

$$\sum_{k=1}^n|\tilde c_k|\|\Delta A_k\|+\sum_{k=1}^n|\tilde c_k|\|\Delta B_k\|+\|\Delta B_0\|+\frac{1}{K}\|\Delta\Lambda\|>\rho,$$

i.e., $\rho<g^Td$. Hence, by the Cauchy–Schwarz inequality,

$$\rho^2<\bigl(g^Td\bigr)^2\leqslant\bigl(g^T\Theta^{-2}g\bigr)\bigl(d^T\Theta^2d\bigr)\leqslant d^T\Theta^2d\bigl(1+g^T\Theta^{-2}g\bigr),$$

i.e.,

$$d^T\Theta^2d>\frac{\rho^2}{1+g^T\Theta^{-2}g}.$$
From the above discussion and (10), we get

$$\eta(\tilde c)^2=\frac{1}{\|A_0\|^2}\min\Biggl\{X_1,\ \min_{(\Delta A_1,\ldots,\Delta B_n,\Delta\Lambda)\in\mathcal{L}\setminus\mathcal{L}_\rho}\Bigl[d^T\Theta^2d+\min_{W\in\mathbb{R}_n^{n\times n},\,\|W\|_2\leqslant M}\Bigl(\Bigl\|W^{-T}(\Lambda+\Delta\Lambda)W^{-1}-\Bigl(\tilde A+\sum_{k=1}^n\tilde c_k\Delta A_k\Bigr)\Bigr\|+\Bigl\|W^{-T}W^{-1}-\Bigl(\tilde B+\Delta B_0+\sum_{k=1}^n\tilde c_k\Delta B_k\Bigr)\Bigr\|\Bigr)^2\Bigr]\Biggr\}\geqslant\frac{\rho^2}{\|A_0\|^2\bigl(1+g^T\Theta^{-2}g\bigr)}.$$
Therefore

$$\eta(\tilde c)\geqslant\frac{\|\Lambda-\tilde\Lambda\|}{K\|A_0\|\sqrt{1+g^T\Theta^{-2}g}}.$$

To derive the other inequality, take the specific perturbations $\Delta A_k^*$ ($k=1,2,\ldots,n$), $\Delta B_j^*$ ($j=1,2,\ldots,n$), $\Delta B_0^*$ and $\Delta\Lambda^*$ defined by

$$\Delta A_k^*=\tilde U^{-T}\Sigma_k^*\tilde U^{-1},\qquad\Sigma_k^*=\frac{\mathrm{sign}(\tilde c_k)|\tilde c_k|}{(1+g^T\Theta^{-2}g)\theta_k^2}(\Lambda-\tilde\Lambda),\qquad k=1,2,\ldots,n;$$

$$\Delta B_j^*=\tilde U^{-T}\Sigma_{n+j}^*\tilde U^{-1},\qquad\Sigma_{n+j}^*=\frac{\mathrm{sign}(\tilde c_j)|\tilde c_j|}{(1+g^T\Theta^{-2}g)\theta_{n+j}^2}(\Lambda-\tilde\Lambda),\qquad j=1,2,\ldots,n;$$

$$\Delta B_0^*=\tilde U^{-T}\Sigma_{2n+1}^*\tilde U^{-1},\qquad\Sigma_{2n+1}^*=\frac{1}{(1+g^T\Theta^{-2}g)\theta_{2n+1}^2}(\Lambda-\tilde\Lambda);$$

$$\Delta\Lambda^*=N^2\Sigma_{2n+2}^*,\qquad\Sigma_{2n+2}^*=\frac{-1}{K(1+g^T\Theta^{-2}g)\theta_{2n+2}^2}(\Lambda-\tilde\Lambda);$$

hence

$$\Delta A_0^*=\tilde U^{-T}\bigl(\Lambda+\Delta\Lambda^*+I\bigr)\tilde U^{-1}-\Bigl(\tilde A+\sum_{k=1}^n\tilde c_k\Delta A_k^*\Bigr)-\Bigl(\tilde B+\Delta B_0^*+\sum_{j=1}^n\tilde c_j\Delta B_j^*\Bigr).$$
From (1) and (2), we have

$$\eta(\tilde c)^2\leqslant\frac{1}{\|A_0\|^2}\Biggl[\sum_{k=1}^n\theta_k^2\bigl\|\Delta A_k^*\bigr\|^2+\sum_{j=1}^n\theta_{n+j}^2\bigl\|\Delta B_j^*\bigr\|^2+\theta_{2n+1}^2\bigl\|\Delta B_0^*\bigr\|^2+\theta_{2n+2}^2\bigl\|\Delta\Lambda^*\bigr\|^2+\bigl\|\Delta A_0^*\bigr\|^2\Biggr]$$

$$\leqslant\frac{N^4}{\|A_0\|^2}\Biggl[\sum_{k=1}^n\theta_k^2\bigl\|\Sigma_k^*\bigr\|^2+\sum_{j=1}^n\theta_{n+j}^2\bigl\|\Sigma_{n+j}^*\bigr\|^2+\theta_{2n+1}^2\bigl\|\Sigma_{2n+1}^*\bigr\|^2+\theta_{2n+2}^2\bigl\|\Sigma_{2n+2}^*\bigr\|^2+\Bigl\|\Lambda+\Delta\Lambda^*-\tilde\Lambda-\sum_{k=1}^n\tilde c_k\Sigma_k^*-\sum_{j=1}^n\tilde c_j\Sigma_{n+j}^*-\Sigma_{2n+1}^*\Bigr\|^2\Biggr]$$

$$\leqslant\frac{N^4}{\|A_0\|^2}\Biggl[\frac{\|\Lambda-\tilde\Lambda\|^2}{(1+g^T\Theta^{-2}g)^2}\Biggl(\sum_{k=1}^n\frac{|\tilde c_k|^2}{\theta_k^2}+\sum_{j=1}^n\frac{|\tilde c_j|^2}{\theta_{n+j}^2}+\frac{1}{\theta_{2n+1}^2}+\frac{1}{K^2\theta_{2n+2}^2}\Biggr)+\Bigl\|\Lambda-\tilde\Lambda-\frac{g^T\Theta^{-2}g}{1+g^T\Theta^{-2}g}(\Lambda-\tilde\Lambda)\Bigr\|^2\Biggr]$$

$$=\frac{N^4\|\Lambda-\tilde\Lambda\|^2}{\|A_0\|^2(1+g^T\Theta^{-2}g)^2}\bigl(g^T\Theta^{-2}g+1\bigr)=\frac{N^4\|\Lambda-\tilde\Lambda\|^2}{\|A_0\|^2\bigl(1+g^T\Theta^{-2}g\bigr)}.$$

Therefore

$$\eta(\tilde c)\leqslant\frac{N^2\|\Lambda-\tilde\Lambda\|}{\|A_0\|\sqrt{1+g^T\Theta^{-2}g}}.\qquad\Box$$
Remark 3.2. Let $B(c)=I_n$ and $\alpha=1$. By Lemmas 2.1 and 2.2, we know that $M=1$ and $N=1$. Also, from (12), it is easy to check that $K=1$. Taking

$$\theta_{n+j}=0,\quad j=1,2,\ldots,n,\qquad\text{and}\qquad\theta_{2n+1}=0,$$

we have

$$\eta(\tilde c)=\frac{\|\Lambda-\tilde\Lambda\|}{\bigl[\|A_0\|^2+\sum_{k=1}^n|\tilde c_k|^2\|A_k\|^2+\|\Lambda\|^2\bigr]^{1/2}},$$

which is the result of Sun [10] for the symmetric matrix inverse eigenvalue problem.

4. A numerical example

To illustrate the result of the previous section, in this section a numerical example is given, carried out using MATLAB 7.10.0 (R2010a) with machine epsilon $2.2\times10^{-16}$. For convenience, all vectors are written as row vectors.

Example 4.1. This is a generalized inverse eigenvalue problem with distinct eigenvalues. Let $n=5$,
$$A_0=\mathrm{diag}(9,11,10,8,14),\qquad A_1=B_1=I,$$

$$A_2=\begin{pmatrix}0&2&0&0&0\\2&0&1&0&0\\0&1&0&1&0\\0&0&1&0&1\\0&0&0&1&0\end{pmatrix},\qquad A_3=\begin{pmatrix}0&0&3&0&0\\0&0&0&2&0\\3&0&0&0&-1\\0&2&0&0&0\\0&0&-1&0&0\end{pmatrix},\qquad A_4=\begin{pmatrix}0&0&0&1&0\\0&0&0&0&1\\0&0&0&0&0\\1&0&0&0&0\\0&1&0&0&0\end{pmatrix},$$

$$B_0=\mathrm{diag}(11,13,15,11,10),$$

$$B_2=\begin{pmatrix}0&1&0&0&0\\1&0&1&0&0\\0&1&0&-1&0\\0&0&-1&0&-1\\0&0&0&-1&0\end{pmatrix},\qquad B_3=\begin{pmatrix}0&0&-1&0&0\\0&0&0&-1&0\\-1&0&0&0&1\\0&-1&0&0&0\\0&0&1&0&0\end{pmatrix},\qquad B_4=\begin{pmatrix}0&0&0&2&0\\0&0&0&0&1\\0&0&0&0&0\\2&0&0&0&0\\0&1&0&0&0\end{pmatrix},$$
Table 1
The upper and lower bounds of the optimal backward error $\eta(\tilde c)$.

alpha          0.1                      1                        8
lower bounds   1.398139466175644e-010   1.397152456631111e-009   1.070673934140234e-008
upper bounds   4.396950291075646e-008   4.393846286068193e-008   4.208897055922883e-008
$$A_5=\begin{pmatrix}0&0&0&1&0\\0&0&0&0&1\\0&0&0&0&0\\1&0&0&0&0\\0&1&0&0&0\end{pmatrix}=B_5,$$

and

$$A(c)=A_0+\sum_{i=1}^5c_iA_i,\qquad B(c)=B_0+\sum_{i=1}^5c_iB_i.$$
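Given the data $\{A_i\}$, $\{B_i\}$, a candidate $\tilde c$ and a prescribed $\Lambda$, the two bounds in (9) can be evaluated mechanically. The sketch below does so on small synthetic data (not the matrices of this example; numpy assumed available), in the degenerate case where $\tilde c$ is exact, so $\Lambda=\tilde\Lambda$ and both bounds collapse to zero.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
sym = lambda M: (M + M.T) / 2
A_mats = [sym(rng.standard_normal((n, n))) for _ in range(n + 1)]       # A_0,...,A_n
B_mats = [np.eye(n), np.diag([0.1, 0, 0]),
          np.diag([0, 0.1, 0]), np.diag([0, 0, 0.1])]                   # B_0,...,B_n
c_tilde = np.ones(n)

Atil = A_mats[0] + sum(c * M for c, M in zip(c_tilde, A_mats[1:]))
Btil = B_mats[0] + sum(c * M for c, M in zip(c_tilde, B_mats[1:]))
alpha = 0.5 * np.linalg.eigvalsh(Btil).min()          # ensures Btil >= alpha*I

# decomposition (7): Util^T Atil Util = Lam_til, Util^T Btil Util = I
w, Q = np.linalg.eigh(Btil)
Bih = Q @ np.diag(w**-0.5) @ Q.T
lam_til, V = np.linalg.eigh(Bih @ Atil @ Bih)
Util = Bih @ V

nrm = lambda M: np.linalg.norm(M, 2)
M_ = 1 / np.sqrt(alpha)
N_ = nrm(np.linalg.inv(Util))
K_ = max(1.0, nrm(Atil) * nrm(np.linalg.inv(Btil)) * M_**2)

# c_tilde is exact here: prescribe Lambda = Lam_til, so ||Lambda - Lam_til||_2 = 0
Lam = np.diag(lam_til)
theta = np.array([nrm(A_mats[0]) / nrm(Mk) for Mk in A_mats[1:]]
                 + [nrm(A_mats[0]) / nrm(Mk) for Mk in B_mats[1:]]
                 + [nrm(A_mats[0]) / nrm(B_mats[0]), nrm(A_mats[0]) / nrm(Lam)])
g = np.concatenate([np.abs(c_tilde), np.abs(c_tilde), [1.0, 1.0 / K_]])
s = g @ (g / theta**2)                                # g^T Theta^{-2} g

gap = nrm(Lam - np.diag(lam_til))                     # = 0 by construction
lower = gap / (K_ * nrm(A_mats[0]) * np.sqrt(1 + s))  # lower bound in (9)
upper = N_**2 * gap / (nrm(A_mats[0]) * np.sqrt(1 + s))  # upper bound in (9)
assert lower == 0.0 and upper == 0.0
```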
The eigenvalues are prescribed to be

$$\lambda^*=(0.43278721102,\ 0.66366274839,\ 0.94385900467,\ 1.10928454002,\ 1.49235323254).$$

We use the iterative approach given in [6] (see also [5]) to solve this problem and obtain the following computed solution:

$$\tilde c=(1,1,1,1,1).$$

Take $\|\cdot\|=\|\cdot\|_2$ in (9) and use the ged function in MATLAB to compute the generalized eigenvalue decomposition (7). For different $\alpha$, the upper and lower bounds of the optimal backward error $\eta(\tilde c)$ are listed in Table 1. For all sufficiently small $\alpha$,
$$\eta(\tilde c)\ll1,$$

which shows that the computed solution $\tilde c$ is the exact solution of a slightly perturbed symmetric generalized inverse eigenvalue problem. In other words, the computation producing $\tilde c$ has proceeded stably.

Acknowledgements

The author would like to thank Professor Zheng-Jian Bai for his supervision and guidance. The author is also indebted to the referees for their comments and suggestions.

References
[1] D.S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas, second ed., Princeton University Press, Princeton, NJ, 2009.
[2] X.S. Chen, A backward error for the inverse singular value problem, J. Comput. Appl. Math. 234 (2010) 2450–2455.
[3] M.T. Chu, Inverse eigenvalue problems, SIAM Rev. 40 (1998) 1–39.
[4] M.T. Chu, Structured inverse eigenvalue problems, Acta Numer. 11 (2002) 1–71.
[5] H. Dai, An algorithm for symmetric generalized inverse eigenvalue problem, Linear Algebra Appl. 296 (1999) 79–98.
[6] H. Dai, P. Lancaster, Newton's method for a generalized inverse eigenvalue problem, Numer. Linear Algebra Appl. 4 (1997) 1–21.
[7] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, Baltimore, MD, 1996.
[8] R.C. Li, A perturbation bound for definite pencils, Linear Algebra Appl. 179 (1993) 191–202.
[9] X.G. Liu, Z.J. Bai, A note on the backward errors for inverse eigenvalue problems, J. Comput. Math. 21 (2003) 201–206.
[10] J.G. Sun, Backward errors for the inverse eigenvalue problem, Numer. Math. 82 (1999) 339–349.
[11] G.W. Stewart, J.G. Sun, Matrix Perturbation Theory, Academic Press, New York, 1990.