Componentwise backward error analysis of Neville elimination

Rong Huang*, Li Zhu

School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, Hunan, China

* Corresponding author. E-mail address: [email protected] (R. Huang). This work was supported by the technology program of Hunan Province (Grant No. 2014FJ3087) and the National Natural Science Foundation for Youths of China (Grant No. 11001233).

Linear Algebra and its Applications 451 (2014) 33–48
Article history: Received 27 November 2013; Accepted 11 March 2014; Available online 1 April 2014; Submitted by Y. Wei.

MSC: 65G50; 15A23; 65F05.

Keywords: Neville elimination; Componentwise error analysis; Sign regular matrices; Pivoting.
Abstract

In this paper, we perform a backward error analysis of Neville elimination in the case where row exchanges are needed. Componentwise backward error bounds are presented for this elimination procedure applied to any nonsingular matrix. Consequently, it is shown that Neville elimination with two-determinant pivoting, proposed by Cortes and Peña (2007) [5], is an excellent method for the triangularization of nonsingular sign regular matrices, including totally nonnegative and totally nonpositive matrices: it has a pleasantly small componentwise relative backward error. In particular, a small componentwise relative error bound is also provided for the bidiagonal factorization of totally nonnegative matrices.
1. Introduction

Neville elimination is a classical elimination technique which, unlike Gaussian elimination, produces zeros in a column by subtracting from each row an adequate multiple of the previous one. This elimination procedure has been precisely described and
extensively studied in [7–11]. In particular, it has been found that this method plays an important role in dealing with sign regular matrices (a matrix is called sign regular if, for each $k$, all its minors of order $k$ have the same sign or are zero) [5,15,16], rank-structured matrices [12] and quasiseparable matrices [3].

For a matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$, this elimination procedure consists of $n-1$ successive steps
$$A = \tilde{A}^{(1)} \to A^{(1)} \to \tilde{A}^{(2)} \to \cdots \to \tilde{A}^{(n-1)} \to A^{(n-1)} \to \tilde{A}^{(n)} = A^{(n)} = U, \tag{1}$$
where $U$ is an upper triangular matrix, and for all $1 \le t \le n-1$, $A^{(t)} = (a^{(t)}_{ij})$ is obtained from $\tilde{A}^{(t)}$ by reordering the rows $t, \ldots, n$ such that
$$a^{(t)}_{it} = 0,\ i \ge t \quad \Rightarrow \quad a^{(t)}_{ht} = 0,\ \forall h \ge i; \tag{2}$$
$\tilde{A}^{(t+1)} = (\tilde{a}^{(t+1)}_{ij})$ is obtained from $A^{(t)}$ according to the formula
$$\tilde{a}^{(t+1)}_{ij} = \begin{cases} a^{(t)}_{ij}, & \text{if } i \le t, \\[2pt] a^{(t)}_{ij} - \dfrac{a^{(t)}_{it}}{a^{(t)}_{i-1,t}}\, a^{(t)}_{i-1,j}, & \text{if } i \ge t+1 \text{ and } a^{(t)}_{i-1,t} \ne 0, \\[2pt] a^{(t)}_{ij}, & \text{if } i \ge t+1 \text{ and } a^{(t)}_{i-1,t} = 0. \end{cases} \tag{3}$$
Denote by $E_j(\alpha)$ the elementary matrix obtained from the identity matrix $I_n$ by changing the $(j, j-1)$ entry to $\alpha$. Then formula (3) can be described in matrix form as
$$E_{t+1}(-\alpha_{t+1,t}) \cdots E_n(-\alpha_{nt})\, A^{(t)} = \tilde{A}^{(t+1)},$$
which, in turn, gives
$$A^{(t)} = E_n(\alpha_{nt}) \cdots E_{t+1}(\alpha_{t+1,t})\, \tilde{A}^{(t+1)}, \tag{4}$$
where
$$\alpha_{t+1,t} = \frac{a^{(t)}_{t+1,t}}{a^{(t)}_{tt}}, \qquad \ldots, \qquad \alpha_{n-1,t} = \frac{a^{(t)}_{n-1,t}}{a^{(t)}_{n-2,t}}, \qquad \alpha_{nt} = \frac{a^{(t)}_{nt}}{a^{(t)}_{n-1,t}} \tag{5}$$
are well defined because of (2). It is known that no row exchanges are needed in the Neville elimination of totally nonnegative matrices (i.e., matrices all of whose minors are nonnegative) [7]. Matrices whose Neville elimination can be performed with no row exchanges are said to satisfy the WR condition [10].

Backward error analysis is one of the most powerful tools for studying the numerical stability of numerical methods. Since Wilkinson's work [21,20], the backward error analysis of Gaussian elimination has been studied extensively.
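For concreteness, the following is a minimal computational sketch of the procedure (1)–(5) (our own illustration in Python/NumPy, not code from the references): it records the multipliers (5) and, at each step, a row order satisfying (2); any reordering satisfying (2) is admissible here, and particular strategies such as two-determinant pivoting make a specific choice.

```python
import numpy as np

def neville_elimination(A):
    """Sketch of Neville elimination, following (1)-(5).

    Returns the final upper triangular matrix U, the multiplier table alpha
    (alpha[i, t] is the multiplier applied to row i at step t, 0-based), and
    the row order chosen at each step so that condition (2) holds.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    alpha = np.zeros((n, n))
    orders = []
    for t in range(n - 1):
        # Reorder rows t..n-1 so that zero entries of column t end up at the
        # bottom: one admissible choice for condition (2).
        rows = sorted(range(t, n), key=lambda i: A[i, t] == 0)
        orders.append(list(range(t)) + rows)
        A[t:, :] = A[rows, :]
        # Produce zeros in column t below the diagonal by subtracting from each
        # row a multiple of the previous one, cf. (3); going bottom-up keeps row
        # i-1 at its step-t value while row i is updated.
        for i in range(n - 1, t, -1):
            if A[i - 1, t] != 0:
                alpha[i, t] = A[i, t] / A[i - 1, t]
                A[i, :] -= alpha[i, t] * A[i - 1, :]
                A[i, t] = 0.0        # the (i, t) entry is annihilated exactly
            # otherwise A[i, t] is already zero by (2) and the row is unchanged
    return A, alpha, orders
```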
It is known [13, p. 164] that when Gaussian elimination without row exchanges is performed on a matrix $A \in \mathbb{R}^{n\times n}$, the computed factors $\hat{L}$, $\hat{U}$ of $A$ satisfy the following componentwise backward error:
$$\hat{L}\hat{U} = A + E, \qquad \text{where } |E| \le \gamma_n |\hat{L}|\,|\hat{U}|, \tag{6}$$
where $\gamma_n := \frac{n\mu}{1 - n\mu}$ provided $n\mu < 1$, $\mu$ is the unit roundoff, and the matrix inequality is meant componentwise. When Gaussian elimination with row exchanges is performed on a matrix $A \in \mathbb{R}^{n\times n}$, its backward error analysis is equivalent to that of Gaussian elimination without row exchanges performed on $B = QA$, where $Q$ is a permutation matrix. So the error result (6) suffices to bound the backward errors of the Gaussian elimination method applied to any matrix.

It is natural to consider the backward error analysis of Neville elimination. Indeed, Alonso, Gasca and Peña [1] have provided backward error bounds for this elimination method in the special case that no row exchanges are performed, as follows: let $A \in \mathbb{R}^{n\times n}$ be a nonsingular matrix satisfying the WR condition such that Neville elimination of $A$ has been performed without row exchanges in finite floating point arithmetic; then the computed factors $\hat{L} := \hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1}$ and $\hat{U}$ of $A$ satisfy
$$\hat{L}\hat{U} = A + E, \qquad \text{where } |E| \le \mu \sum_{j=1}^{n-1} |\hat{F}_1| \cdots |\hat{F}_j|\, \big|\hat{A}^{(j+1)}\big|, \tag{7}$$
where the factors $\hat{F}_i$ ($i = 1, 2, \ldots, n-1$), $\hat{A}^{(j)}$ ($j = 2, \ldots, n$) and $\hat{U}$ are those of [1, Theorem 3.1]; in particular, for a nonsingular totally nonnegative matrix $A \in \mathbb{R}^{n\times n}$ with sufficiently high precision,
$$\hat{L}\hat{U} = A + E, \qquad \text{where } |E| \le \frac{\varphi_n}{1 - \varphi_n}\,|A|, \quad \varphi_n = (1 + \mu)^{n-1} - 1.$$
Recall that, because of (2), row exchanges are in general necessary for Neville elimination. Therefore, it remains to consider how to bound the backward errors of Neville elimination whenever row exchanges are performed. A natural question is whether the error result (7) for Neville elimination without row exchanges is sufficient for bounding the backward errors of Neville elimination with row exchanges, as is the case for Gaussian elimination. However, this is not true, as illustrated by the following example: let $A$ be the nonsingular totally nonpositive matrix (i.e., all its minors are nonpositive)
$$A = \begin{bmatrix} 0 & -1 & -1 \\ -2 & -2 & -2 \\ -1 & -1 & 0 \end{bmatrix}.$$
For the triangularization of $A$ by Neville elimination in the procedure (1), row exchanges must be performed in the first and second steps, and so there does not exist a permutation matrix $Q$ such that the triangularization of $QA$ can be performed by Neville elimination without row exchanges. Therefore, much work remains to be done to bound the backward errors of Neville elimination with row exchanges; this is exactly the motivation of the present paper.
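Running the sketch from above on this matrix (again only an illustration, under the same assumptions) shows that a reordering is forced both in the first and in the second step:

```python
import numpy as np

A = np.array([[ 0., -1., -1.],
              [-2., -2., -2.],
              [-1., -1.,  0.]])

U, alpha, orders = neville_elimination(A)   # sketch from above

for t, order in enumerate(orders, start=1):
    print(f"step {t}: row order {order}, exchange needed: {order != sorted(order)}")
print(U)
# Both steps report an exchange, so no permutation Q applied up front can make
# QA satisfy the WR condition.
```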
To perform our error analysis, the following standard model is adopted:
$$\mathrm{fl}(x \circ y) = (x \circ y)(1 + \delta), \quad |\delta| \le \mu; \qquad \mathrm{fl}(x \circ y) = \frac{x \circ y}{1 + \epsilon}, \quad |\epsilon| \le \mu, \tag{8}$$
where $\circ = +, -, *$ or $/$. Moreover, we assume that neither overflow nor underflow occurs. In addition, we will use the following result for accumulating rounding errors [13, Lemma 3.1]: if $|\delta_i| \le \mu$ and $\rho_i = \pm 1$ for all $i = 1, 2, \ldots, k$, and $k\mu < 1$, then
$$\prod_{i=1}^{k} (1 + \delta_i)^{\rho_i} = 1 + \theta_k, \qquad \text{where } |\theta_k| \le \frac{k\mu}{1 - k\mu} =: \gamma_k. \tag{9}$$
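The constant $\gamma_k$, and the constants $\varphi_n$ and $\psi_n$ appearing in the bounds below, are easily evaluated; the following small helper (ours, assuming the IEEE double precision unit roundoff $\mu = 2^{-53}$) shows how small they are even for fairly large $n$:

```python
import math

mu = 2.0 ** -53                 # unit roundoff in IEEE double precision

def gamma(k):                   # k*mu / (1 - k*mu), cf. (9)
    return k * mu / (1 - k * mu)

def phi(n):                     # (1 + mu)**(n-1) - 1, used in Theorems 3 and 4
    return math.expm1((n - 1) * math.log1p(mu))

def psi(n):                     # (1 + mu)**(n*(n-1)/2) - 1, used in Theorem 2
    return math.expm1(n * (n - 1) / 2 * math.log1p(mu))

for n in (10, 100, 1000):
    print(f"n={n:5d}  gamma_{n-1}={gamma(n-1):.2e}  phi_n={phi(n):.2e}  psi_n={psi(n):.2e}")
```

(The expm1/log1p formulation avoids the fact that $1 + \mu$ rounds to $1$ in double precision; for $n = 100$ one gets $\varphi_n \approx 1.1\mathrm{e}{-14}$ and $\psi_n \approx 5.5\mathrm{e}{-13}$.)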
The rest of this paper is organized as follows. In Section 2, we perform a backward error analysis of Neville elimination whenever row exchanges are needed. Componentwise backward error bounds are presented for this elimination procedure applied to any nonsingular matrix. Consequently, in Section 3, it is shown that Neville elimination with two-determinant pivoting, proposed by Cortes and Peña [5], is an excellent method for the triangularization of nonsingular sign regular matrices, including totally nonnegative and totally nonpositive matrices: it has a pleasantly small componentwise relative backward error. In addition, a small componentwise relative backward error bound is also provided for the bidiagonal factorization of totally nonnegative matrices.

2. Neville elimination of nonsingular matrices

In this section, we perform a componentwise backward error analysis for Neville elimination of any nonsingular matrix. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular. According to the procedure (1), we immediately get by (4) and (5) that $A$ is factored as
$$A = Q_1 L_1 Q_2 L_2 \cdots Q_{n-1} L_{n-1} U, \tag{10}$$
where $U$ is upper triangular, and for all $t = 1, 2, \ldots, n-1$, $Q_t$ is the permutation matrix corresponding to (2) and $L_t = E_n(\alpha_{nt}) E_{n-1}(\alpha_{n-1,t}) \cdots E_{t+1}(\alpha_{t+1,t})$. In particular, if the matrix $A$ satisfies the WR condition, then, because $E_i(x) E_j(y) = E_j(y) E_i(x)$ for any $|i - j| > 1$,
$$A = L_1 L_2 \cdots L_{n-1} U = F_1 F_2 \cdots F_{n-1} U, \tag{11}$$
where $U = (u_{ij})$ is upper triangular, and $F_i$ is bidiagonal of the form
$$F_i = \begin{bmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & \alpha_{n-i+1,1} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & \alpha_{ni} & 1
\end{bmatrix}, \qquad i = 1, 2, \ldots, n-1. \tag{12}$$
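As an illustration of the structure (12), the following sketch (our own helper, building on the neville_elimination sketch from Section 1) assembles $F_i$ from the multiplier table and checks the factorization (11) on a small totally positive matrix, for which no row exchanges are needed:

```python
import numpy as np

def F_factor(alpha, i):
    """F_i of (12): unit diagonal and subdiagonal entries
    alpha_{n-i+1,1}, ..., alpha_{n,i} in rows n-i+1, ..., n (1-based)."""
    n = alpha.shape[0]
    F = np.eye(n)
    for j in range(1, i + 1):
        r = n - i + j                           # 1-based row index n-i+j
        F[r - 1, r - 2] = alpha[r - 1, j - 1]   # entry alpha_{n-i+j, j}
    return F

n = 5
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix, totally positive
U, alpha, _ = neville_elimination(A)        # no row exchanges occur for this A
B = np.eye(n)
for i in range(1, n):
    B = B @ F_factor(alpha, i)
print(np.allclose(B @ U, A))                # A = F_1 F_2 ... F_{n-1} U, cf. (11)
```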
Further, if Neville elimination of $U^T$ can be performed without row exchanges, then
$$A = F_1 \cdots F_{n-1} D C_{n-1} \cdots C_1, \tag{13}$$
where $D = \operatorname{diag}(d_{ii})$ is diagonal and $C_i$ is bidiagonal of the form
$$C_i = \begin{bmatrix}
1 & 0 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 1 & 0 & & & \\
 & & & 1 & \beta_{1,n-i+1} & & \\
 & & & & \ddots & \ddots & \\
 & & & & & 1 & \beta_{in} \\
 & & & & & & 1
\end{bmatrix}, \qquad i = 1, 2, \ldots, n-1. \tag{14}$$
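To illustrate (13)–(14): applying the same multiplier extraction to $U^T$ produces a matrix that is in fact diagonal (this is the factor $D$), and transposing the associated bidiagonal factors gives the $C_i$. The sketch below reuses neville_elimination and F_factor from above; it is only meant to exhibit the structure of (13) and is not the high-relative-accuracy algorithm of [15].

```python
import numpy as np

def bidiagonal_factorization(A):
    """A = F_1 ... F_{n-1} D C_{n-1} ... C_1 as in (13)-(14); assumes that no
    row exchanges are needed for A or for U^T (e.g. A totally positive)."""
    n = A.shape[0]
    U, alphaA, _ = neville_elimination(A)      # A = F_1 ... F_{n-1} U
    L, alphaU, _ = neville_elimination(U.T)    # U^T = F'_1 ... F'_{n-1} D with D diagonal
    D = np.diag(np.diag(L))
    Fs = [F_factor(alphaA, i) for i in range(1, n)]
    Cs = [F_factor(alphaU, i).T for i in range(1, n)]  # C_i = (F'_i)^T, upper bidiagonal as in (14)
    return Fs, D, Cs

n = 5
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])   # Hilbert matrix
Fs, D, Cs = bidiagonal_factorization(A)
H = np.linalg.multi_dot(Fs + [D] + Cs[::-1])   # F_1 ... F_{n-1} D C_{n-1} ... C_1
print(np.max(np.abs(H - A) / np.abs(A)))       # componentwise relative error
```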
Next we will provide componentwise backward error bounds for the factorizations (10), (11) and (13) computed in finite floating point arithmetic. We assume that all computed quantities wear a hat. The following lemma is needed.

Lemma 1. (See [13, p. 74].) If $X_j + \Delta X_j \in \mathbb{R}^{n\times n}$ satisfies $|\Delta X_j| \le \delta_j |X_j|$ for all $j$, then
$$\left| \prod_{j=0}^{m} (X_j + \Delta X_j) - \prod_{j=0}^{m} X_j \right| \le \left( \prod_{j=0}^{m} (1 + \delta_j) - 1 \right) \prod_{j=0}^{m} |X_j|.$$
Our main results in this section are the following theorems.

Theorem 2. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular, and let us apply Neville elimination to $A$ in finite floating point arithmetic. Set $\psi_n = (1 + \mu)^{n(n-1)/2} - 1$. Then the computed factors of (10) satisfy
$$Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{U} = A + E,$$
where
$$|E| \le \psi_n\, Q_1 |\hat{L}_1| Q_2 |\hat{L}_2| \cdots Q_{n-1} |\hat{L}_{n-1}|\, |\hat{U}|.$$

Proof. Let us apply Neville elimination to a nonsingular matrix $A \in \mathbb{R}^{n\times n}$ according to the procedure (1). Then, in exact arithmetic, for all $1 \le t \le n-1$, $A^{(t+1)} = (a^{(t+1)}_{ij})$ is obtained from $A^{(t)} = (a^{(t)}_{ij})$ as follows:
$$E_{t+1}(-\alpha_{t+1,t}) \cdots E_{n-1}(-\alpha_{n-1,t}) E_n(-\alpha_{nt})\, A^{(t)} = \tilde{A}^{(t+1)}, \qquad Q_{t+1} \tilde{A}^{(t+1)} = A^{(t+1)},$$
where $Q_{t+1}$ is the corresponding permutation matrix and all the entries
$$\alpha_{t+1,t} = \frac{a^{(t)}_{t+1,t}}{a^{(t)}_{tt}}, \qquad \ldots, \qquad \alpha_{n-1,t} = \frac{a^{(t)}_{n-1,t}}{a^{(t)}_{n-2,t}}, \qquad \alpha_{nt} = \frac{a^{(t)}_{nt}}{a^{(t)}_{n-1,t}}.$$
Thus, in finite floating point arithmetic, the computed matrix $\hat{A}^{(t+1)} = (\hat{a}^{(t+1)}_{ij})$ is obtained from $\hat{A}^{(t)} = (\hat{a}^{(t)}_{ij})$ by the following procedure:
$$\begin{cases} \mathrm{fl}\big(E_n(-\hat{\alpha}_{nt})\, \hat{A}^{(t)}\big) = \hat{C}^{(1)}, \\ \mathrm{fl}\big(E_{n-1}(-\hat{\alpha}_{n-1,t})\, \hat{C}^{(1)}\big) = \hat{C}^{(2)}, \\ \quad\vdots \\ \mathrm{fl}\big(E_{t+1}(-\hat{\alpha}_{t+1,t})\, \hat{C}^{(n-t-1)}\big) = \hat{C}^{(n-t)}, \\ Q_{t+1} \hat{C}^{(n-t)} = \hat{A}^{(t+1)}, \end{cases}$$
where $\hat{C}^{(s)} = (\hat{c}^{(s)}_{ij})$ for all $s = 1, \ldots, n-t$, and the computed multipliers are
$$\hat{\alpha}_{nt} = \mathrm{fl}\left(\frac{\hat{a}^{(t)}_{nt}}{\hat{a}^{(t)}_{n-1,t}}\right), \quad \hat{\alpha}_{n-1,t} = \mathrm{fl}\left(\frac{\hat{c}^{(1)}_{n-1,t}}{\hat{c}^{(1)}_{n-2,t}}\right), \quad \ldots, \quad \hat{\alpha}_{t+1,t} = \mathrm{fl}\left(\frac{\hat{c}^{(n-t-1)}_{t+1,t}}{\hat{c}^{(n-t-1)}_{tt}}\right).$$

We first focus on the operation $\mathrm{fl}(E_n(-\hat{\alpha}_{nt})\, \hat{A}^{(t)}) = \hat{C}^{(1)}$. Obviously,
$$\hat{c}^{(1)}_{ij} = \begin{cases} 0, & \text{if } i = n,\ j = t; \\ \mathrm{fl}\big(\hat{a}^{(t)}_{nj} - \mathrm{fl}(\hat{\alpha}_{nt} \hat{a}^{(t)}_{n-1,j})\big), & \text{if } i = n,\ t < j \le n; \\ \hat{a}^{(t)}_{ij}, & \text{otherwise}. \end{cases}$$
Using the model (8), we have that for all $t+1 \le j \le n$,
$$\hat{c}^{(1)}_{nj} = \big(\hat{a}^{(t)}_{nj} - \hat{\alpha}_{nt} \hat{a}^{(t)}_{n-1,j}(1 + \delta_{nj})\big)\frac{1}{1 + \epsilon_{nj}}, \qquad |\epsilon_{nj}| \le \mu,\ |\delta_{nj}| \le \mu,$$
from which it follows that
$$\hat{a}^{(t)}_{nj} = \hat{c}^{(1)}_{nj}(1 + \epsilon_{nj}) + (1 + \delta_{nj}) \hat{\alpha}_{nt} \hat{a}^{(t)}_{n-1,j} = \hat{c}^{(1)}_{nj}(1 + \epsilon_{nj}) + (1 + \delta_{nj}) \hat{\alpha}_{nt} \hat{c}^{(1)}_{n-1,j}. \tag{15}$$
In addition, since
$$\hat{\alpha}_{nt} = \mathrm{fl}\left(\frac{\hat{a}^{(t)}_{nt}}{\hat{a}^{(t)}_{n-1,t}}\right) = \frac{\hat{a}^{(t)}_{nt}}{\hat{a}^{(t)}_{n-1,t}} \cdot \frac{1}{1 + \delta_{nt}}, \qquad |\delta_{nt}| \le \mu,$$
we get that
$$\hat{a}^{(t)}_{nt} = \hat{\alpha}_{nt} \hat{a}^{(t)}_{n-1,t}(1 + \delta_{nt}) = \hat{c}^{(1)}_{nt}(1 + \epsilon_{nt}) + (1 + \delta_{nt}) \hat{\alpha}_{nt} \hat{c}^{(1)}_{n-1,t} \tag{16}$$
by considering $\hat{c}^{(1)}_{nt} = 0$ and $\hat{a}^{(t)}_{n-1,t} = \hat{c}^{(1)}_{n-1,t}$. Thus the facts (15) and (16) imply that
$$\hat{a}^{(t)}_j = \big(E_n(\hat{\alpha}_{nt}) + \Delta E^{(j)}_{nt}\big)\, \hat{c}^{(1)}_j, \qquad j = t, \ldots, n, \tag{17}$$
where $\hat{a}^{(t)}_j$ and $\hat{c}^{(1)}_j$ denote the $j$th columns of $\hat{A}^{(t)}$ and $\hat{C}^{(1)}$, respectively, and $\Delta E^{(j)}_{nt} = (e_{sk})$ has the form
$$e_{sk} = \begin{cases} \epsilon_{nj}, & \text{if } s = n,\ k = n; \\ \delta_{nj} \hat{\alpha}_{nt}, & \text{if } s = n,\ k = n-1; \\ 0, & \text{otherwise}, \end{cases}$$
and so $|\Delta E^{(j)}_{nt}| \le \mu |E_n(\hat{\alpha}_{nt})|$. Moreover, applying a similar argument to
$$\mathrm{fl}\big(E_{n-r}(-\hat{\alpha}_{n-r,t})\, \hat{C}^{(r)}\big) = \hat{C}^{(r+1)}, \qquad r = 1, \ldots, n-t-1,$$
we have that
$$\hat{c}^{(r)}_j = \big(E_{n-r}(\hat{\alpha}_{n-r,t}) + \Delta E^{(j)}_{n-r,t}\big)\, \hat{c}^{(r+1)}_j, \qquad j = t, \ldots, n, \tag{18}$$
where $\hat{c}^{(r)}_j$ and $\hat{c}^{(r+1)}_j$ denote the $j$th columns of $\hat{C}^{(r)}$ and $\hat{C}^{(r+1)}$, respectively, and $\Delta E^{(j)}_{n-r,t} = (e_{sk})$ has the form
$$e_{sk} = \begin{cases} \epsilon_{n-r,j}, & \text{if } s = n-r,\ k = n-r; \\ \delta_{n-r,j} \hat{\alpha}_{n-r,t}, & \text{if } s = n-r,\ k = n-r-1; \\ 0, & \text{otherwise}, \end{cases} \qquad |\epsilon_{n-r,j}| \le \mu,\ |\delta_{n-r,j}| \le \mu, \tag{19}$$
and so $|\Delta E^{(j)}_{n-r,t}| \le \mu |E_{n-r}(\hat{\alpha}_{n-r,t})|$.

Consider that $\hat{C}^{(n-t)} = Q_{t+1} \hat{A}^{(t+1)}$. Thus, combining (17) with (18), we conclude that
$$\hat{a}^{(t)}_j = M^{(j)}_t Q_{t+1} \hat{a}^{(t+1)}_j, \qquad j = t, t+1, \ldots, n,\ t = 1, 2, \ldots, n-1,$$
where $\hat{a}^{(t)}_j$ and $\hat{a}^{(t+1)}_j$ denote the $j$th columns of $\hat{A}^{(t)}$ and $\hat{A}^{(t+1)}$, respectively, and
$$M^{(j)}_t = \big(E_n(\hat{\alpha}_{nt}) + \Delta E^{(j)}_{nt}\big)\big(E_{n-1}(\hat{\alpha}_{n-1,t}) + \Delta E^{(j)}_{n-1,t}\big) \cdots \big(E_{t+1}(\hat{\alpha}_{t+1,t}) + \Delta E^{(j)}_{t+1,t}\big).$$
Further, since all the $k$th entries ($t \le k \le n$) in the first $t-1$ columns of $\hat{A}^{(t)}$ and $\hat{A}^{(t+1)}$ are zero, it is not difficult to see that
$$\hat{a}^{(t)}_j = M^{(j)}_t Q_{t+1} \hat{a}^{(t+1)}_j, \qquad j = 1, 2, \ldots, n,\ t = 1, 2, \ldots, n-1. \tag{20}$$
Consider that $\hat{A}^{(1)} = Q_1 A$, $\hat{A}^{(n)} = \hat{U}$ and $Q_n = I_n$. Hence, we get by using (20) that
$$a_j = Q_1 \hat{a}^{(1)}_j = Q_1 M^{(j)}_1 Q_2 M^{(j)}_2 \cdots Q_{n-1} M^{(j)}_{n-1} \hat{u}_j, \qquad j = 1, 2, \ldots, n, \tag{21}$$
where $a_j$, $\hat{a}^{(1)}_j$ and $\hat{u}_j$ denote the $j$th columns of $A$, $\hat{A}^{(1)}$ and $\hat{U}$, respectively. Now let
$$\hat{L}_t = E_n(\hat{\alpha}_{nt}) E_{n-1}(\hat{\alpha}_{n-1,t}) \cdots E_{t+1}(\hat{\alpha}_{t+1,t}), \qquad t = 1, 2, \ldots, n-1.$$
It is worth pointing out that, for all $t$,
$$|\hat{L}_t| = |E_n(\hat{\alpha}_{nt})|\, |E_{n-1}(\hat{\alpha}_{n-1,t})| \cdots |E_{t+1}(\hat{\alpha}_{t+1,t})|.$$
Thus, using Lemma 1 we get that for all $j = 1, \ldots, n$,
$$|a_j - Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{u}_j| \le \psi_n\, Q_1 |\hat{L}_1| Q_2 |\hat{L}_2| \cdots Q_{n-1} |\hat{L}_{n-1}|\, |\hat{u}_j|,$$
where $\psi_n = (1 + \mu)^{n(n-1)/2} - 1$. This means that for all $j = 1, \ldots, n$,
$$Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{u}_j = a_j + e_j, \qquad \text{where } |e_j| \le \psi_n\, Q_1 |\hat{L}_1| Q_2 |\hat{L}_2| \cdots Q_{n-1} |\hat{L}_{n-1}|\, |\hat{u}_j|.$$
So,
$$Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{U} = A + E_1, \qquad \text{where } |E_1| \le \psi_n\, Q_1 |\hat{L}_1| Q_2 |\hat{L}_2| \cdots Q_{n-1} |\hat{L}_{n-1}|\, |\hat{U}|.$$
Thus, the result is proved. □
Theorem 3. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular such that (11) is computed by Neville elimination without row exchanges in finite floating point arithmetic. Set $\varphi_n = (1 + \mu)^{n-1} - 1$. Then
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{U} = A + E, \tag{22}$$
where
$$|E| \le \varphi_n |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{U}|.$$

Proof. Since there is no row exchange, according to the proof of Theorem 2 we immediately have from (21) that
$$a_j = M^{(j)}_1 M^{(j)}_2 M^{(j)}_3 \cdots M^{(j)}_{n-1} \hat{u}_j, \qquad j = 1, 2, \ldots, n,$$
where $a_j$ and $\hat{u}_j$ denote the $j$th columns of $A$ and of the computed upper triangular matrix $\hat{U}$, respectively, and for all $t = 1, \ldots, n-1$,
$$M^{(j)}_t = \big(E_n(\hat{\alpha}_{nt}) + \Delta E^{(j)}_{nt}\big)\big(E_{n-1}(\hat{\alpha}_{n-1,t}) + \Delta E^{(j)}_{n-1,t}\big) \cdots \big(E_{t+1}(\hat{\alpha}_{t+1,t}) + \Delta E^{(j)}_{t+1,t}\big).$$
Notice that each $\Delta E^{(j)}_{n-r,t}$ ($r = 0, 1, \ldots, n-t-1$) is of the form (19). So, for all $|k - s| > 1$,
$$\big(E_k(\hat{\alpha}_{kt}) + \Delta E^{(j)}_{kt}\big)\big(E_s(\hat{\alpha}_{st}) + \Delta E^{(j)}_{st}\big) = \big(E_s(\hat{\alpha}_{st}) + \Delta E^{(j)}_{st}\big)\big(E_k(\hat{\alpha}_{kt}) + \Delta E^{(j)}_{kt}\big).$$
Now let $M^{(j)} = M^{(j)}_1 \cdots M^{(j)}_{n-2} M^{(j)}_{n-1}$. Then $M^{(j)}$ can be rewritten as
$$\begin{aligned} M^{(j)} &= \big(E_n(\hat{\alpha}_{n1}) + \Delta E^{(j)}_{n1}\big) \cdots \big(E_2(\hat{\alpha}_{21}) + \Delta E^{(j)}_{21}\big) \cdots \big(E_n(\hat{\alpha}_{n,n-2}) + \Delta E^{(j)}_{n,n-2}\big) \\ &\quad\times \big(E_{n-1}(\hat{\alpha}_{n-1,n-2}) + \Delta E^{(j)}_{n-1,n-2}\big)\big(E_n(\hat{\alpha}_{n,n-1}) + \Delta E^{(j)}_{n,n-1}\big) \\ &= \big(E_n(\hat{\alpha}_{n1}) + \Delta E^{(j)}_{n1}\big)\big(E_{n-1}(\hat{\alpha}_{n-1,1}) + \Delta E^{(j)}_{n-1,1}\big)\big(E_n(\hat{\alpha}_{n2}) + \Delta E^{(j)}_{n2}\big) \cdots \\ &\quad\times \big(E_2(\hat{\alpha}_{21}) + \Delta E^{(j)}_{21}\big)\big(E_3(\hat{\alpha}_{32}) + \Delta E^{(j)}_{32}\big) \cdots \big(E_n(\hat{\alpha}_{n,n-1}) + \Delta E^{(j)}_{n,n-1}\big) \\ &= (\hat{F}_1 + \Delta\hat{F}_1)(\hat{F}_2 + \Delta\hat{F}_2) \cdots (\hat{F}_{n-1} + \Delta\hat{F}_{n-1}), \end{aligned}$$
where, for each $i$, $\hat{F}_i$ and $\hat{F}_i + \Delta\hat{F}_i$ are of the form (12) and $|\Delta\hat{F}_i| \le \mu |\hat{F}_i|$. Hence, we get by using Lemma 1 that
$$|a_j - \hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{u}_j| \le \varphi_n |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{u}_j|,$$
where $\varphi_n = (1 + \mu)^{n-1} - 1$. So,
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{U} = A + E, \qquad \text{where } |E| \le \varphi_n |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{U}|.$$
The result is proved. □
Remark 1. According to (9), we have that $\varphi_n \le \gamma_{n-1}$. So our error result (22), like the result (7), is similar to that of Gaussian elimination in (6).

Theorem 4. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular such that (13) is computed by Neville elimination without row exchanges in finite floating point arithmetic. Set $\varphi_n = (1 + \mu)^{n-1} - 1$. Then
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{D} \hat{C}_{n-1} \cdots \hat{C}_2 \hat{C}_1 = A + E,$$
where
$$|E| \le 3\varphi_n |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1|.$$

Proof. According to Theorem 3, when Neville elimination of $A$ is performed without row exchanges to get an upper triangular matrix $\hat{U}$, we have that
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{U} = A + E_1, \qquad \text{where } |E_1| \le \varphi_n |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{U}|;$$
when proceeding with Neville elimination of $\hat{U}^T$ without row exchanges, we have that
$$\hat{D} \hat{C}_{n-1} \cdots \hat{C}_2 \hat{C}_1 = \hat{U} + E_2, \qquad \text{where } |E_2| \le \varphi_n |\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1|.$$
Therefore, we conclude that
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{D} \hat{C}_{n-1} \cdots \hat{C}_2 \hat{C}_1 = A + E,$$
where
$$\begin{aligned} |E| &\le |E_1| + |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|E_2| \\ &\le \big(2\varphi_n + \varphi_n^2\big)\, |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1| \\ &\le 3\varphi_n |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1|, \end{aligned}$$
and so the result is proved. □

3. Neville elimination of sign regular matrices

In this section, we perform a componentwise backward error analysis for Neville elimination of sign regular matrices, including totally nonnegative and totally nonpositive matrices. In the last decades, sign regular matrices have been studied extensively [2,4,18] and have found a wide variety of applications in approximation theory, numerical mathematics, statistics, economics, computer aided geometric design, and other fields [14,19].
For Neville elimination of sign regular matrices, Cortes and Peña [5] proposed the two-determinant pivoting strategy, which requires only $O(n)$ operations. Let us briefly describe how the permutation matrices of this pivoting strategy are determined (for more details, the reader is referred to [5, pp. 56–57]). Let $A \in \mathbb{R}^{n\times n}$ be nonsingular sign regular, let $A[\alpha|\beta]$ be the submatrix of $A$ with rows indexed by $\alpha$ and columns indexed by $\beta$, and let $A[\alpha] = A[\alpha|\alpha]$. Denote by $I_n \in \mathbb{R}^{n\times n}$ the identity matrix, and let $P_n = (p_{ij}) \in \mathbb{R}^{n\times n}$ be the permutation matrix with $p_{ij} = 1$ if $i + j = n + 1$ and $p_{ij} = 0$ otherwise. Because of [5, Lemma 3.3], the permutation matrix $Q_t \in \mathbb{R}^{n\times n}$ ($1 \le t \le n-1$) of the two-determinant pivoting strategy for obtaining $A^{(t)} = (a^{(t)}_{ij})$ from $\tilde{A}^{(t)}$ by reordering the rows $t, \ldots, n$ in the procedure (1) is determined as follows (a small computational sketch follows the list):
• If $\det \tilde{A}^{(t)}[t, t+1] > 0$ or $\det \tilde{A}^{(t)}[n-1, n \mid t, t+1] > 0$, then $Q_t = I_n$.
• If $\det \tilde{A}^{(t)}[t, t+1] < 0$ or $\det \tilde{A}^{(t)}[n-1, n \mid t, t+1] < 0$, then $Q_t = \operatorname{diag}(I_{t-1}, P_{n-t+1})$.
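Read literally, the two cases translate into the following decision sketch (0-based indices, our own reading of the description above; [5] remains the authoritative source, in particular for the case where both test determinants vanish). Here $P_m$ simply reverses the order of the last $m$ rows.

```python
def det2(M, r1, r2, c1, c2):
    """2x2 minor of a NumPy array M taken from rows r1, r2 and columns c1, c2."""
    return M[r1, c1] * M[r2, c2] - M[r1, c2] * M[r2, c1]

def two_determinant_order(At, t):
    """Row order for the paper's step t+1 (t is 0-based here); At plays the
    role of the current matrix A~^(t)."""
    n = At.shape[0]
    d1 = det2(At, t, t + 1, t, t + 1)        # det A~^(t)[t, t+1]          (1-based indices)
    d2 = det2(At, n - 2, n - 1, t, t + 1)    # det A~^(t)[n-1, n | t, t+1] (1-based indices)
    if d1 > 0 or d2 > 0:
        return list(range(n))                                  # Q_t = I_n
    if d1 < 0 or d2 < 0:
        return list(range(t)) + list(range(n - 1, t - 1, -1))  # Q_t = diag(I_{t-1}, P_{n-t+1})
    return list(range(n))    # both tests vanish: keep the current order (see [5])
```

Replacing the generic reordering in the neville_elimination sketch of Section 1 by this choice gives a naive implementation of Neville elimination with two-determinant pivoting.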
A remarkable feature of Neville elimination with two-determinant pivoting is that the sign regularity of sign regular matrices is inherited, as shown in the following lemma. In the sequel, by $\varepsilon_1(A)$ we mean that all the nonzero entries of $A$ have the same sign $\varepsilon_1$.

Lemma 5. (See [5, Theorem 3.4].) Let $A \in \mathbb{R}^{n\times n}$ be nonsingular sign regular, and let us apply Neville elimination with two-determinant pivoting to $A$ in the procedure (1). Then for all $t = 1, \ldots, n$, the submatrix $\tilde{A}^{(t)}[t, \ldots, n]$ is nonsingular sign regular with $\varepsilon_1(\tilde{A}^{(t)}) = \varepsilon_1(A^{(t)}) = \varepsilon_1(A)$.

Using Lemma 5, we immediately have the following factorization result.

Lemma 6. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular sign regular, and let us apply Neville elimination with two-determinant pivoting to $A$. Then $A$ can be factored as
$$A = Q_1 L_1 Q_2 L_2 \cdots Q_{n-1} L_{n-1} U, \tag{23}$$
where $U$ is upper triangular with $\varepsilon_1(U) = \varepsilon_1(A)$, and for all $t = 1, \ldots, n-1$, $Q_t$ is the permutation matrix associated with the pivoting strategy and
$$L_t = E_n(\alpha_{nt}) E_{n-1}(\alpha_{n-1,t}) \cdots E_{t+1}(\alpha_{t+1,t}), \qquad \alpha_{nt} \ge 0, \ldots, \alpha_{t+1,t} \ge 0.$$

Proof. From Lemma 5, we have the fact $\varepsilon_1(A^{(t)}) = \varepsilon_1(A)$ for all $t = 1, \ldots, n$, which implies that $\alpha_{ij} \ge 0$ in (5) for all $i > j$, and $\varepsilon_1(U) = \varepsilon_1(A)$. Thus, the result is true. □

Remark 2. As shown in Lemma 5, we have in exact arithmetic that $\varepsilon_1(A^{(t)}) = \varepsilon_1(A)$ for all $t = 1, \ldots, n$. Thus, we assume that in finite floating point arithmetic the computed matrices $\hat{A}^{(t)}$ satisfy
$$\varepsilon_1\big(\hat{A}^{(t)}\big) = \varepsilon_1(A), \qquad t = 1, 2, \ldots, n, \tag{24}$$
which is always true for strictly sign regular matrices (i.e., sign regular matrices all of whose minors are nonzero) computed with sufficiently high precision.

The following result shows that another remarkable feature of Neville elimination with two-determinant pivoting is that it is an excellent method for the triangularization of nonsingular sign regular matrices: it has a pleasantly small componentwise relative backward error.

Theorem 7. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular sign regular, and let us apply Neville elimination with two-determinant pivoting to $A$ in finite floating point arithmetic with the condition (24) being satisfied. Set $\psi_n = (1 + \mu)^{n(n-1)/2} - 1$. Then the computed factors of (23) satisfy
$$Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{U} = A + E, \qquad |E| \le \frac{\psi_n}{1 - \psi_n}\,|A|.$$

Proof. Because of the condition (24), the computed entries $\hat{\alpha}_{ij} \ge 0$ in (5) for all $i > j$, and $\hat{U} \ge 0$ if $A \ge 0$, or $\hat{U} \le 0$ if $A \le 0$. Consequently, according to Theorem 2, the computed factors of (23) satisfy
$$Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{U} = A + E,$$
where
$$|E| \le \psi_n\, Q_1 |\hat{L}_1| Q_2 |\hat{L}_2| \cdots Q_{n-1} |\hat{L}_{n-1}|\, |\hat{U}| = \psi_n\, |Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{U}|,$$
and so
$$|Q_1 \hat{L}_1 Q_2 \hat{L}_2 \cdots Q_{n-1} \hat{L}_{n-1} \hat{U}| \le \frac{1}{1 - \psi_n}\,|A| \quad \Rightarrow \quad |E| \le \frac{\psi_n}{1 - \psi_n}\,|A|.$$
Thus the result is proved. □

Two important classes of sign regular matrices are the totally nonnegative matrices and the totally nonpositive matrices. According to the proof of [5, Theorem 3.4], if Neville elimination with two-determinant pivoting is applied for the triangularization of a nonsingular totally nonnegative matrix $A$, then there is no row exchange, and thus $A$ is factored as in (11), i.e.,
$$A = F_1 F_2 \cdots F_{n-1} U, \qquad \text{where } F_1 \ge 0, \ldots, F_{n-1} \ge 0,\ U \ge 0;$$
if Neville elimination with two-determinant pivoting is applied for the triangularization of a nonsingular totally nonpositive matrix $A$, then there are row exchanges only in the first and second steps, and thus $A$ is factored as
$$A = Q_1 E_n(\alpha_{n1}) \cdots E_2(\alpha_{21}) Q_2 F_2 \cdots F_{n-1} U, \tag{25}$$
where $Q_1 = P_n$, $Q_2 = \operatorname{diag}(1, P_{n-1})$, $U \le 0$ is upper triangular, $\alpha_{n1} \ge 0, \ldots, \alpha_{21} \ge 0$, and for all $i = 2, \ldots, n-1$,
$$F_i = \begin{bmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & \alpha_{n-i+2,2} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & \alpha_{ni} & 1
\end{bmatrix}, \qquad \alpha_{n-i+2,2} \ge 0, \ldots, \alpha_{ni} \ge 0.$$

Combining with Theorem 3, we immediately have the following results.

Corollary 8. (See [1, Theorem 4.1].) Let $A \in \mathbb{R}^{n\times n}$ be nonsingular totally nonnegative such that (11) is computed by Neville elimination in finite floating point arithmetic with the condition (24) being satisfied. Set $\varphi_n = (1 + \mu)^{n-1} - 1$. Then
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{U} = A + E, \qquad |E| \le \frac{\varphi_n}{1 - \varphi_n}\,|A|.$$

Corollary 9. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular totally nonpositive such that (25) is computed by Neville elimination in finite floating point arithmetic with the condition (24) being satisfied. Set $\zeta_n = (1 + \mu)^{2n-3} - 1$. Then
$$Q_1 E_n(\hat{\alpha}_{n1}) \cdots E_2(\hat{\alpha}_{21}) Q_2 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{U} = A + E, \qquad |E| \le \frac{\zeta_n}{1 - \zeta_n}\,|A|.$$

It is known [6,10,11] that $A \in \mathbb{R}^{n\times n}$ is nonsingular totally nonnegative if and only if it can be factored in the bidiagonal form (13), i.e., $A = F_1 \cdots F_{n-1} D C_{n-1} \cdots C_1$, where $D = \operatorname{diag}(d_{ii})$ is nonnegative diagonal, and $F_i$ and $C_i$ ($i = 1, 2, \ldots, n-1$) are nonnegative bidiagonal matrices of the form (12) and (14), respectively, with all the entries $\alpha_{ij} \ge 0$ and $\beta_{ij} \ge 0$ satisfying
$$\alpha_{ij} = 0,\ i > j \quad \Rightarrow \quad \alpha_{rj} = 0,\ \forall r > i; \qquad \beta_{ij} = 0,\ i < j \quad \Rightarrow \quad \beta_{ih} = 0,\ \forall h > j.$$
So these $n^2$ independent elements $\alpha_{ij}$, $d_{ii}$ and $\beta_{ij}$ re-parameterize the class of totally nonnegative matrices. Importantly, it has been shown in [15–17] that these $n^2$ parameters determine the inverse, the LDU decomposition, the eigenvalues, and the SVD of a nonsingular totally nonnegative matrix to high relative accuracy. Therefore, it is meaningful to perform a backward error analysis for computing the bidiagonal factorization that yields these parameters. In finite floating point arithmetic, we assume that the computed matrices satisfy
$$\hat{D} \ge 0, \quad \hat{C}_i \ge 0, \quad \hat{F}_i \ge 0, \qquad i = 1, 2, \ldots, n-1, \tag{26}$$
which is always true for totally positive matrices (i.e., matrices all of whose minors are positive) computed with sufficiently high precision. Combining with Theorem 4, we have the following result.

Theorem 10. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular totally nonnegative such that (13) is computed by Neville elimination in finite floating point arithmetic with the condition (26) being satisfied. Set $\varphi_n = (1 + \mu)^{n-1} - 1$. Then
$$\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{D} \hat{C}_{n-1} \cdots \hat{C}_2 \hat{C}_1 = A + E, \qquad \text{where } |E| \le \frac{3\varphi_n}{1 - 3\varphi_n}\,|A|.$$

Proof. Because of the condition (26), the computed factors satisfy
$$|\hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{D} \hat{C}_{n-1} \cdots \hat{C}_2 \hat{C}_1| = |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1|,$$
and thus, by Theorem 4, we have that
$$|\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1| \le \frac{1}{1 - 3\varphi_n}\,|A|.$$
So, the result is true by using Theorem 4 again. □

Finally, we provide several numerical examples bounding the componentwise backward errors of the bidiagonal factorization (13) of nonsingular matrices. The numerical computations are done in MATLAB, where the roundoff unit is $\mu \approx 2.2204\mathrm{e}{-16}$. For the computed bidiagonal factorization $H := \hat{F}_1 \hat{F}_2 \cdots \hat{F}_{n-1} \hat{D} \hat{C}_{n-1} \cdots \hat{C}_2 \hat{C}_1$ of a nonsingular matrix $A \in \mathbb{R}^{n\times n}$, we measure the following errors:
$$e_1 = \max_{i,j} \frac{(|H - A|)_{ij}}{(|A|)_{ij}}, \qquad e_2 = \max_{i,j} \frac{(|H - A|)_{ij}}{(|G|)_{ij}},$$
with the convention $\frac{0}{0} = 0$, where $G := |\hat{F}_1|\,|\hat{F}_2| \cdots |\hat{F}_{n-1}|\,|\hat{D}|\,|\hat{C}_{n-1}| \cdots |\hat{C}_2|\,|\hat{C}_1|$.
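The two measures can be evaluated directly from the computed factors; the sketch below (ours, written for the factor lists produced by the bidiagonal_factorization illustration of Section 2) handles the convention $0/0 = 0$ explicitly.

```python
import numpy as np

def backward_errors(A, Fs, D, Cs):
    """e1 and e2 for H = F_1 ... F_{n-1} D C_{n-1} ... C_1, with 0/0 read as 0."""
    H = np.linalg.multi_dot(Fs + [D] + Cs[::-1])
    G = np.linalg.multi_dot([np.abs(F) for F in Fs] + [np.abs(D)]
                            + [np.abs(C) for C in Cs[::-1]])
    R = np.abs(H - A)
    with np.errstate(divide="ignore", invalid="ignore"):
        e1 = np.where(R == 0, 0.0, R / np.abs(A)).max()
        e2 = np.where(R == 0, 0.0, R / G).max()
    return e1, e2
```

For the Hilbert-matrix illustration above, Theorem 10 bounds $e_1$ by $3\varphi_n/(1 - 3\varphi_n)$, i.e. roughly $3(n-1)\mu$; the values reported in Table 2 are well within this bound.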
Example 1. We consider a nonsingular totally nonnegative matrix $A \in \mathbb{R}^{n\times n}$ all of whose parameters are randomly chosen in $(0, 1)$ by MATLAB's function rand. In fact, such a matrix $A$ is totally positive. We obtain the bidiagonal factorization (13) by Neville elimination without row exchanges. The error results are reported in Table 1, and they confirm the result of Theorem 10.
Table 1
A totally nonnegative matrix A ∈ R^{n×n}.

size    n = 10         n = 20         n = 50         n = 100
e1      4.6435e−016    6.7248e−016    3.8687e−015    3.8657e−014
Table 2
The totally nonnegative Hilbert matrix A ∈ R^{n×n}.

size    n = 10         n = 20         n = 50         n = 100
e1      6.2450e−016    9.7145e−016    1.2629e−015    3.0444e−015
Table 3
A nonsingular matrix A ∈ R^{n×n}.

size    n = 10         n = 20         n = 50         n = 100
e1      7.3071e−011    1.1224e−011    7.8159e−009    2.0424e−007
e2      3.7500e−016    5.9256e−016    8.2451e−016    1.2860e−015
Example 2. We consider the well-known totally nonnegative Hilbert matrix $A \in \mathbb{R}^{n\times n}$,
$$A = \left(\frac{1}{i + j - 1}\right)_{i,j=1}^{n}.$$
Although its bidiagonal factorization can be computed accurately by (3.6) of [15], here we compute the bidiagonal factorization (13) by Neville elimination only in order to confirm the result of Theorem 10. The error results are given in Table 2.

Example 3. We consider a nonsingular matrix $A \in \mathbb{R}^{n\times n}$ all of whose entries are randomly chosen in $(0, 1)$ by MATLAB's function rand, such that the bidiagonal factorization (13) is computed by Neville elimination without row exchanges. The errors $e_1$ and $e_2$ are reported in Table 3, and they confirm the result of Theorem 4.

References

[1] P. Alonso, M. Gasca, J.M. Peña, Backward error analysis of Neville elimination, Appl. Numer. Math. 23 (1997) 193–204.
[2] T. Ando, Totally positive matrices, Linear Algebra Appl. 90 (1987) 165–219.
[3] R. Bevilacqua, E. Bozzo, G.M. Del Corso, qd-type methods for quasiseparable matrices, SIAM J. Matrix Anal. Appl. 32 (2011) 722–747.
[4] C. de Boor, A. Pinkus, Backward error analysis for totally positive linear systems, Numer. Math. 27 (1977) 485–490.
[5] V. Cortes, J.M. Peña, Sign regular matrices and Neville elimination, Linear Algebra Appl. 421 (2007) 53–62.
[6] S.M. Fallat, Bidiagonal factorizations of totally nonnegative matrices, Amer. Math. Monthly 108 (2001) 697–712.
[7] M. Gasca, J.M. Peña, Total positivity and Neville elimination, Linear Algebra Appl. 44 (1992) 25–44.
[8] M. Gasca, J.M. Peña, Total positivity, QR-factorization and Neville elimination, SIAM J. Matrix Anal. Appl. 14 (1993) 345–356.
[9] M. Gasca, J.M. Peña, Scaled pivoting in Gauss and Neville elimination for totally positive systems, Appl. Numer. Math. 13 (1993) 345–355.
[10] M. Gasca, J.M. Peña, A matricial description of Neville elimination with applications to total positivity, Linear Algebra Appl. 202 (1994) 33–45.
[11] M. Gasca, J.M. Peña, On factorizations of totally positive matrices, in: Total Positivity and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996, pp. 109–130.
[12] L. Gemignani, Neville elimination for rank-structured matrices, Linear Algebra Appl. 428 (2008) 978–991.
[13] N.J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., SIAM, Philadelphia, 2002.
[14] S. Karlin, Total Positivity, Stanford University Press, Stanford, 1968.
[15] P. Koev, Accurate eigenvalues and SVDs of totally nonnegative matrices, SIAM J. Matrix Anal. Appl. 27 (2005) 1–23.
[16] P. Koev, Accurate computations with totally nonnegative matrices, SIAM J. Matrix Anal. Appl. 29 (2007) 731–751.
[17] P. Koev, F. Dopico, Accurate eigenvalues of certain sign regular matrices, Linear Algebra Appl. 424 (2007) 435–447.
[18] J.M. Peña, Backward stability of a pivoting strategy for sign-regular linear systems, BIT 37 (1997) 910–924.
[19] J.M. Peña, Shape Preserving Representations in Computer-Aided Geometric Design, Nova Science Publishers, Commack, New York, 1999.
[20] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, Oxford, 1965.
[21] J.H. Wilkinson, Rounding Errors in Algebraic Processes, Notes Appl. Sci., vol. 32, Her Majesty's Stationery Office, London, 1965.