Infinity norm bounds for the inverse of Nekrasov matrices using scaling matrices

H. Orera∗, J.M. Peña

Departamento de Matemática Aplicada/IUMA, Universidad de Zaragoza, Zaragoza, Spain

Applied Mathematics and Computation 358 (2019) 119–127

MSC: 65F35; 15A60; 65F05; 90C33

Keywords: Infinity matrix norm; Inverse matrix; Nekrasov matrices; H-matrices; Strictly diagonally dominant matrices; Scaling matrix

Abstract. For many applications, it is convenient to have good upper bounds for the norm of the inverse of a given matrix. In this paper, we obtain such bounds when $A$ is a Nekrasov matrix, by means of a scaling matrix transforming $A$ into a strictly diagonally dominant matrix. Numerical examples and comparisons with other bounds are included. The scaling matrices are also used to derive new error bounds for linear complementarity problems when the involved matrix is a Nekrasov matrix. These error bounds can considerably improve other previous bounds. © 2019 Elsevier Inc. All rights reserved.

1. Introduction

Providing upper bounds for the infinity norm of the inverse of a matrix has many potential applications in Computational Mathematics: for instance, for bounding the condition number of the matrix, for bounding errors in linear complementarity problems (cf. [1]) or, in the class of H-matrices, for proving the convergence of matrix splitting and matrix multisplitting iteration methods for solving sparse linear systems of equations (cf. [2]). The class of Nekrasov matrices (see [3] or Section 2) contains the class of strictly diagonally dominant matrices. Recent applications of Nekrasov matrices can be seen in [4–11]. Nekrasov matrices are H-matrices. Let us recall some related classes of matrices. A real matrix $A$ is a nonsingular M-matrix if its inverse is nonnegative and all its off-diagonal entries are nonpositive. M-matrices form a very important class of matrices with applications to Numerical Analysis, Optimization, Economics and dynamical systems (cf. [12]). Given a complex matrix $A=(a_{ij})_{1\le i,j\le n}$, its comparison matrix $\mathcal{M}(A)=(\tilde a_{ij})_{1\le i,j\le n}$ has entries $\tilde a_{ii} := |a_{ii}|$ and $\tilde a_{ij} := -|a_{ij}|$ for all $j\ne i$ and $i,j=1,\ldots,n$. We say that a complex matrix is an H-matrix if its comparison matrix is a nonsingular M-matrix. For a more general definition of H-matrix, see [13]. A matrix $A=(a_{ij})_{1\le i,j\le n}$ is SDD (strictly diagonally dominant by rows) if $|a_{ii}| > \sum_{j\ne i}|a_{ij}|$ for all $i=1,\ldots,n$. It is well-known that an SDD matrix is nonsingular and that a square matrix $A$ is an H-matrix if there exists a diagonal matrix $S$ with positive diagonal entries such that $AS$ is SDD. The role of the scaling matrix is crucial, for instance, for the problem mentioned above of the convergence of iteration methods and also for the problem of eigenvalue localization (see [11]). This paper deals with the search for such scaling matrices $S$ in the particular case when $A$ is a Nekrasov matrix. The scaling matrix $S$ is applied to obtain upper bounds for the infinity norm of the inverse of a Nekrasov matrix.



∗ Corresponding author. E-mail addresses: [email protected] (H. Orera), [email protected] (J.M. Peña).

https://doi.org/10.1016/j.amc.2019.04.027 — 0096-3003/© 2019 Elsevier Inc. All rights reserved.


The paper is organized as follows. Section 2 constructs scaling matrices $S$ for Nekrasov matrices $A$ such that $AS$ is SDD. Section 3 applies the scaling matrices of Section 2 to derive upper bounds for $\|A^{-1}\|_\infty$, including an algorithm to obtain the corresponding bound. Section 4 presents an improvement of the bound obtained in Section 3 and includes numerical examples, illustrating our bounds and comparing them with other previous bounds. We consider several test matrices previously considered in the literature, and we also consider some variants of these matrices. We also include a family of $3\times 3$ matrices showing that previous bounds can be arbitrarily large, in contrast to our bounds, which are always controlled. Finally, we derive bounds for other norms. Section 5 illustrates the use of our scaling matrices to derive new error bounds for linear complementarity problems when the involved matrix is a Nekrasov matrix. We avoid the restrictions of the bound in [1] and we present a family of matrices for which our bound is a small constant, in contrast to the bounds of [14–16], which can be arbitrarily large.

We finish this introduction with some basic notations. Let $N := \{1,\ldots,n\}$. Let $Q_{k,n}$ be the set of increasing sequences of $k$ positive integers in $N$. Given $\alpha,\beta\in Q_{k,n}$, we denote by $A[\alpha|\beta]$ the $k\times k$ submatrix of $A$ containing rows numbered by $\alpha$ and columns numbered by $\beta$. If $\alpha=\beta$, then we have the principal submatrix $A[\alpha] := A[\alpha|\alpha]$. Finally, the diagonal matrix with diagonal entries $d_i$, $1\le i\le n$, will be denoted by $\mathrm{diag}(d_i)_{i=1}^n$.

2. Scaling matrices

Let us start by defining the concept of a Nekrasov matrix (see [2–4]). For this purpose, let us define recursively, for a complex matrix $A=(a_{ij})_{1\le i,j\le n}$ with $a_{ii}\ne 0$ for all $i=1,\ldots,n$,

$$h_1(A) := \sum_{j=2}^{n}|a_{1j}|, \qquad h_i(A) := \sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}|, \quad i=2,\ldots,n. \tag{1}$$

We say that $A$ is a Nekrasov matrix if $|a_{ii}|>h_i(A)$ for all $i\in N$. It is well-known that a Nekrasov matrix is a nonsingular H-matrix [3]. So, there exists a positive diagonal matrix $S$ such that $AS$ is SDD. In particular, Nekrasov matrices can be characterized in terms of these scaling matrices (see Theorem 2.2 of [7]). Once we have found a scaling matrix, we can use it to derive infinity norm bounds for the inverse of Nekrasov matrices, which may be useful for many problems, as recalled in the Introduction. In fact, the problem of bounding the infinity norm of the inverse of a Nekrasov matrix has attracted great attention recently (see [2,5,6,8,9]). In this section we introduce two methods to build a scaling matrix for any given Nekrasov matrix.
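For concreteness, the recursion (1) and the Nekrasov condition are straightforward to evaluate; the following minimal Python sketch (our own code and names, not part of the paper) checks whether a matrix is Nekrasov.

```python
# Evaluate h_1(A), ..., h_n(A) by the recursion (1) and test |a_ii| > h_i(A).

def nekrasov_h(A):
    n = len(A)
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0] * (n - 1)
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def is_nekrasov(A):
    return all(abs(A[i][i]) > h for i, h in enumerate(nekrasov_h(A)))

# An SDD matrix is in particular Nekrasov:
A = [[4.0, 2.0, 1.0], [1.0, 5.0, 2.0], [1.0, 1.0, 3.0]]
print(is_nekrasov(A))  # True
```

The weights $h_j(A)/|a_{jj}|$ applied to the lower triangular part are what relax strict diagonal dominance to the Nekrasov condition.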

Theorem 2.1. Let $A=(a_{ij})_{1\le i,j\le n}$ be a Nekrasov matrix. Then the matrix

$$S = \mathrm{diag}\left(\frac{h_1(A)+\varepsilon_1}{|a_{11}|}, \ldots, \frac{h_n(A)+\varepsilon_n}{|a_{nn}|}\right),$$

with

$$\varepsilon_1>0, \qquad 0<\varepsilon_i\le |a_{ii}|-h_i(A), \qquad \varepsilon_i > \sum_{j=1}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|} \quad \text{for } i=2,\ldots,n,$$

is a positive diagonal matrix such that $AS$ is SDD.

Proof. Let us start by proving that there exist $\varepsilon_1,\ldots,\varepsilon_n$ satisfying the conditions stated above. We consider, as a first choice, $\varepsilon_i := |a_{ii}|-h_i(A)$ for $i=1,\ldots,n$. If $\varepsilon_i > \sum_{j=1}^{i-1}|a_{ij}|\varepsilon_j/|a_{jj}|$ for all $i=2,\ldots,n$, we have finished. Otherwise, let $i>1$ be the first index such that the inequality does not hold. Then we substitute $\varepsilon_j$ by $\varepsilon_j/\hat M_i$, with $j=1,\ldots,i-1$, where $\hat M_i$ is a positive number such that the inequality is satisfied. The inequalities checked at earlier steps remain true. We continue this process until the inequality holds for all $i=2,\ldots,n$.

The diagonal matrix $S$ is positive because $h_i(A)\ge 0$ and $\varepsilon_i>0$. The entry $(i,j)$ of $AS$ is $a_{ij}\frac{h_j(A)+\varepsilon_j}{|a_{jj}|}$. In order to prove that $AS$ is SDD we start by checking that the condition is true for the $n$th row:

$$\sum_{j=1}^{n-1}|a_{nj}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} = \underbrace{\sum_{j=1}^{n-1}|a_{nj}|\frac{h_j(A)}{|a_{jj}|}}_{h_n(A)} + \sum_{j=1}^{n-1}|a_{nj}|\frac{\varepsilon_j}{|a_{jj}|} < h_n(A)+\varepsilon_n = |(AS)[n]|.$$

The condition holds for the row $n-1$:

$$\sum_{j=1}^{n-2}|a_{n-1,j}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} + |a_{n-1,n}|\underbrace{\frac{h_n(A)+\varepsilon_n}{|a_{nn}|}}_{\le 1} \le h_{n-1}(A) + \sum_{j=1}^{n-2}|a_{n-1,j}|\frac{\varepsilon_j}{|a_{jj}|} < h_{n-1}(A)+\varepsilon_{n-1} = |(AS)[n-1]|.$$

The first inequality is due to the hypothesis $\varepsilon_n \le |a_{nn}|-h_n(A)$, which implies $\frac{h_n(A)+\varepsilon_n}{|a_{nn}|}\le 1$. In general, considering the $i$th row for $2\le i<n-1$:

$$\sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} \le h_i(A) + \sum_{j=1}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|} < h_i(A)+\varepsilon_i = |(AS)[i]|,$$

and, when $i=1$:

$$\sum_{j=2}^{n}|a_{1j}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} \le h_1(A) < h_1(A)+\varepsilon_1 = |(AS)[1]|.$$

The inequality for the $i$th row is proven using that $\varepsilon_j \le |a_{jj}|-h_j(A)$ for $j=i+1,\ldots,n$ and $\varepsilon_i > \sum_{j=1}^{i-1}|a_{ij}|\varepsilon_j/|a_{jj}|$. If $i=1$, the last inequality is reduced to $\varepsilon_1>0$. □
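The constructive argument in this proof can be sketched in a few lines of Python (our own code; the rescaling factor $\hat M_i = 2\sum_{j<i}|a_{ij}|\varepsilon_j/|a_{jj}|/\varepsilon_i$ is one admissible choice, not prescribed by the paper).

```python
# Build the scaling matrix S of Theorem 2.1: start from eps_i = |a_ii| - h_i(A)
# and divide eps_1, ..., eps_{i-1} by M_i whenever the strict inequality
# eps_i > sum_{j<i} |a_ij| eps_j / |a_jj| fails.

def nekrasov_h(A):
    n = len(A)
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0] * (n - 1)
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def scaling_theorem_2_1(A):
    n = len(A)
    h = nekrasov_h(A)
    eps = [abs(A[i][i]) - h[i] for i in range(n)]          # first choice
    for i in range(1, n):
        s = sum(abs(A[i][j]) * eps[j] / abs(A[j][j]) for j in range(i))
        if eps[i] <= s:                                    # rescale earlier eps
            M = 2.0 * s / eps[i]
            for j in range(i):
                eps[j] /= M
    return [(h[i] + eps[i]) / abs(A[i][i]) for i in range(n)]

def is_sdd(B):
    n = len(B)
    return all(abs(B[i][i]) > sum(abs(B[i][j]) for j in range(n) if j != i)
               for i in range(n))

# A Nekrasov matrix that is not SDD (row 2: |3| = 3 + 0):
A = [[3.0, 2.0, 0.5], [3.0, 3.0, 0.0], [1.0, 1.0, 2.0]]
s = scaling_theorem_2_1(A)
AS = [[A[i][j] * s[j] for j in range(3)] for i in range(3)]
print(is_sdd(A), is_sdd(AS))  # False True
```

Dividing all earlier parameters by the same factor preserves the inequalities already checked, exactly as in the proof.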

In Theorem 2.1 we introduced a diagonal matrix $S$ that transforms any Nekrasov matrix into an SDD matrix. Its construction implied the search of the parameters $\varepsilon_i$ for $i\in N$. Taking into account the existence of nonzero entries in the upper triangular part of a Nekrasov matrix, we can build a new scaling matrix $S$, simpler in many cases, whose product with a Nekrasov matrix is also SDD.

Theorem 2.2. Let $A=(a_{ij})_{1\le i,j\le n}$ be a Nekrasov matrix and let $k\in N$ be the first index such that there does not exist $j>k$ with $a_{kj}\ne 0$. Then the matrix $S=\mathrm{diag}(s_i)_{i=1}^n$ with $s_i := \frac{h_i(A)+\varepsilon_i}{|a_{ii}|}$ and with

$$\varepsilon_i = 0, \quad i=1,\ldots,k-1, \qquad 0<\varepsilon_i<|a_{ii}|-h_i(A), \qquad \varepsilon_i > \sum_{j=k}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|} \quad \text{for } i=k,\ldots,n,$$

is a positive diagonal matrix such that $AS$ is SDD.

Proof. Let us start by showing that there exist $\varepsilon_1,\ldots,\varepsilon_n$ satisfying the conditions stated above. Since $A$ is a Nekrasov matrix, we have that $|a_{ii}|>h_i(A)$ for $i=1,\ldots,n$. The existence of $\varepsilon_1=\cdots=\varepsilon_{k-1}=0$ is trivial and, following the constructive proof of the existence of these parameters given in Theorem 2.1, we can deduce the existence of $\varepsilon_k,\ldots,\varepsilon_n$. It remains to prove that $AS$ is an SDD matrix, which can be done analogously to the proof of Theorem 2.1. Let us first consider the $i$th row, when $i<k$. Since $i<k$, there exists an entry $a_{ij}\ne 0$ with $i<j$. Taking also into account that $\varepsilon_j=0$ for all $j<k$ and that $h_j(A)+\varepsilon_j<|a_{jj}|$ for all $j=i+1,\ldots,n$, we deduce that:

$$\sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} = \sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} < \sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}| = h_i(A) = |(AS)[i]|.$$

For the $k$th row we have that $a_{kj}=0$ for every $j>k$ and so:

$$\sum_{j=1}^{k-1}|a_{kj}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} + \sum_{j=k+1}^{n}|a_{kj}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} = \sum_{j=1}^{k-1}|a_{kj}|\frac{h_j(A)}{|a_{jj}|} = h_k(A) < h_k(A)+\varepsilon_k = |(AS)[k]|.$$

It just remains to check the $i$th rows, when $i>k$. Since $h_j(A)+\varepsilon_j<|a_{jj}|$ for all $j>i\,(>k)$, we have, by the choice of $\varepsilon_i$:

$$\sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} \le h_i(A) + \sum_{j=k}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|} < h_i(A)+\varepsilon_i = |(AS)[i]|. \;\square$$

The particular case $k=n$ corresponds to a diagonal matrix with $\varepsilon_1=\cdots=\varepsilon_{n-1}=0$ and $\varepsilon_n\in(0,|a_{nn}|-h_n(A))$. This scaling matrix was already introduced in [1] and it was used to derive an error bound for linear complementarity problems of Nekrasov matrices. In the following section, we shall apply the scaling matrices derived in this section to the problem of bounding the norm of the inverse of a Nekrasov matrix.

3. Bounding $\|A^{-1}\|_\infty$

With an adequate scaling matrix $S$ (given by Theorem 2.1 or 2.2) we can obtain the desired bound for the inverse of a Nekrasov matrix $A$ by considering the product $AS$. For this purpose, we are going to use the following result, introduced by Varah in [17]:

Table 1. Computational cost of (2).

Operations                General                               k = n          k = 1
additions/subtractions    (3n²+n+2)/2 + (n−k−1)(n−k)/2          (3n²+n+2)/2    2n² − n + 2
multiplications           (7n²+9n+4)/2 + (5k²−10kn−11k)/2       n(n−1)         (7n²−n−2)/2
quotients                 2n − 1 + 2(n−k)                       2n − 1         4n − 3

Table 2. Leading term T of the computational cost of (2).

      k = n        k = 1
T     (5/2)n²      (11/2)n²

Theorem 3.1. If $A$ is SDD and $\alpha := \min_k\big(|a_{kk}| - \sum_{j\ne k}|a_{kj}|\big)$, then

$$\|A^{-1}\|_\infty < 1/\alpha.$$
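Varah's bound is cheap to evaluate; the short sketch below (our own code, with numpy as an assumed convenience, not part of the paper) checks it numerically on a small SDD matrix.

```python
# Varah's bound (Theorem 3.1): for an SDD matrix A,
# ||A^{-1}||_inf < 1/alpha with alpha = min_k (|a_kk| - sum_{j!=k} |a_kj|).
import numpy as np

A = np.array([[5.0, 1.0, 1.0],
              [2.0, 6.0, 1.0],
              [1.0, 2.0, 4.0]])
n = len(A)
alpha = min(abs(A[k, k]) - sum(abs(A[k, j]) for j in range(n) if j != k)
            for k in range(n))
varah = 1.0 / alpha                 # alpha = 1 for this matrix
exact = np.linalg.norm(np.linalg.inv(A), np.inf)
print(exact < varah)  # True
```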

Theorem 3.1 gives a bound for the infinity norm of the inverse of an SDD matrix. This theorem, jointly with the scaling matrices introduced in Section 2, allows us to deduce Theorem 3.2.

Theorem 3.2. Let $A=(a_{ij})_{1\le i,j\le n}$ be a Nekrasov matrix. Then

$$\|A^{-1}\|_\infty \le \frac{\max_{i\in N}\frac{h_i(A)+\varepsilon_i}{|a_{ii}|}}{\min_{i\in N}(\varepsilon_i - w_i + p_i)}, \tag{2}$$

where $(\varepsilon_1,\ldots,\varepsilon_n)$ are given by Theorem 2.1 or 2.2, $w_i := \sum_{j=1}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|}$ and $p_i := \sum_{j=i+1}^{n}|a_{ij}|\frac{|a_{jj}|-h_j(A)-\varepsilon_j}{|a_{jj}|}$ for all $i\in N$.

Proof. We choose a diagonal matrix $S$ following either Theorem 2.1 or 2.2 and we deduce the following inequality:

$$\|A^{-1}\|_\infty = \|S(S^{-1}A^{-1})\|_\infty = \|S(AS)^{-1}\|_\infty \le \|S\|_\infty\,\|(AS)^{-1}\|_\infty. \tag{3}$$

The matrix $S$ is diagonal, so its infinity norm is given by $\max_{i\in N}\frac{h_i(A)+\varepsilon_i}{|a_{ii}|}$. Since $AS$ is SDD we can apply Theorem 3.1 to $\|(AS)^{-1}\|_\infty$. For this purpose, we need to compute for each $i=1,\ldots,n$:

$$h_i(A)+\varepsilon_i - \sum_{j\ne i}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{|a_{jj}|} = \varepsilon_i - \sum_{j=1}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|} + \sum_{j=i+1}^{n}|a_{ij}|\frac{|a_{jj}|-h_j(A)-\varepsilon_j}{|a_{jj}|} = \varepsilon_i - w_i + p_i,$$

where we have substituted $h_i(A)$ by the expression given by (1). □

Since the diagonal matrix $S$ satisfies $\|S\|_\infty \le 1$, we can substitute the numerator of the bound (2) by one and obtain the following result:

Corollary 3.3. Let $A=(a_{ij})_{1\le i,j\le n}$ be a Nekrasov matrix. Then

$$\|A^{-1}\|_\infty \le \frac{1}{\min_{i\in N}(\varepsilon_i - w_i + p_i)},$$

where $(\varepsilon_1,\ldots,\varepsilon_n)$ are given by Theorem 2.1 or 2.2, $w_i := \sum_{j=1}^{i-1}|a_{ij}|\frac{\varepsilon_j}{|a_{jj}|}$ and $p_i := \sum_{j=i+1}^{n}|a_{ij}|\frac{|a_{jj}|-h_j(A)-\varepsilon_j}{|a_{jj}|}$ for $i\in N$.

In Table 1 we present the computational cost of the bound (2) using the matrix $S$ given by Theorem 2.2. The cost depends on the index $k$. Two extreme cases are studied separately. The first one, $k=n$, corresponds to the simplest case, where $\varepsilon_i=0$ for $i=1,\ldots,n-1$. The second one corresponds to $k=1$ and it uses a diagonal matrix $S$ with $\varepsilon_i\ne 0$ for all $i\in N$. In fact, in this case the diagonal matrix $S$ also satisfies the definition given by Theorem 2.1. The particular cases $k=n$ and $k=1$ have the lowest and highest computational cost, respectively. Table 2 shows the leading term $T$ of the computational cost in these cases. Now we are going to introduce Algorithm 1, which allows us to compute the bound (2) choosing $\varepsilon_i$ with $i\in N$ following Theorem 2.2. It corresponds to Theorem 2.1 when $k=1$. In order to compute this bound, the algorithm needs to give some initial values to $\varepsilon_1,\ldots,\varepsilon_n$. These parameters are initialized with either $0$ or $t(|a_{ii}|-h_i(A))$, where $t\in(0,1)$. It could be useful to choose a different scalar $t$ for each $\varepsilon_i$. However, it is not clear how to choose their values, and for many matrices, such as those included in Section 4, we have that $\varepsilon_i=0$ for $i=1,\ldots,n-1$. In this case, we also consider in Section 4 the possibility of choosing $\varepsilon_n$ as the middle point of its interval, that is, $\varepsilon_n=\Delta_n/2$, where $\Delta_n=|a_{nn}|-h_n(A)$.


Algorithm 1 nektoSDD – Computing bound (2).

Input: A = (a_ij)_{1≤i,j≤n}, t ∈ (0,1)
J = 0
for i = 1:n do
    h_i = Σ_{j=1}^{i−1} |a_ij| k_j
    r = Σ_{j=i+1}^{n} |a_ij|
    if r == 0 and J == 0 then J = i end if        ▷ find the first row such that ε_i > 0
    h_i = h_i + r
    Δ_i = |a_ii| − h_i
    k_i = h_i / |a_ii|
end for
K = J
ε_1 = … = ε_{K−1} = 0; ε_K = t Δ_K
w_1 = … = w_K = 0                                 ▷ if i ≤ K, we have that w_i = 0
for i = K+1:n do
    ε_i = t Δ_i
    p_j = ε_j / |a_jj|, j = K, …, i−1
    w_i = Σ_{j=K}^{i−1} |a_ij| p_j
    if w_i − ε_i ≥ 0 then                         ▷ rescale ε_K, …, ε_{i−1}
        M = ε_i / (2 w_i)
        for j = K:i−1 do
            ε_j = ε_j M
            w_j = w_j M
        end for
        w_i = ε_i / 2
    end if
end for
for i = n:−1:2 do
    S_i = ε_i − w_i + Σ_{j=i+1}^{n} |a_ij| f_j
    f_i = (Δ_i − ε_i) / |a_ii|
end for
S_1 = ε_1 − w_1 + Σ_{j=2}^{n} |a_1j| f_j
Bound = max_{i∈N} {k_i + ε_i/|a_ii|} / min_{i∈N} {S_i}
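For reference, a direct Python transcription of bound (2) with the Theorem 2.2 parameters and a rescaling step as in Algorithm 1 (a sketch in our own names; the initialization ε_i = tΔ_i is the one discussed above):

```python
# Compute bound (2): eps_i = 0 for i < k and eps_i = t * Delta_i for i >= k,
# with eps_k, ..., eps_{i-1} rescaled whenever eps_i > w_i fails to hold.

def bound2(A, t=0.5):
    n = len(A)
    # h_i(A) by the recursion (1)
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0] * (n - 1)
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
    delta = [abs(A[i][i]) - h[i] for i in range(n)]
    # k: first row with no nonzero entry to the right of the diagonal
    k = next(i for i in range(n)
             if all(A[i][j] == 0 for j in range(i + 1, n)))
    eps = [0.0] * n
    for i in range(k, n):
        eps[i] = t * delta[i]
        w_i = sum(abs(A[i][j]) * eps[j] / abs(A[j][j]) for j in range(k, i))
        if w_i >= eps[i]:                 # rescaling step of Algorithm 1
            M = eps[i] / (2.0 * w_i)
            for j in range(k, i):
                eps[j] *= M
    w = [sum(abs(A[i][j]) * eps[j] / abs(A[j][j]) for j in range(i))
         for i in range(n)]
    p = [sum(abs(A[i][j]) * (delta[j] - eps[j]) / abs(A[j][j])
             for j in range(i + 1, n)) for i in range(n)]
    num = max((h[i] + eps[i]) / abs(A[i][i]) for i in range(n))
    return num / min(eps[i] - w[i] + p[i] for i in range(n))

A = [[3.0, 2.0, 0.5], [3.0, 3.0, 0.0], [1.0, 1.0, 2.0]]
print(bound2(A))  # 11.0 here, an upper bound; the exact norm is about 2.17
```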

4. Improvements, numerical tests and bounds for other norms

In the previous section, we derived the bound (2) for the infinity norm of the inverse of a Nekrasov matrix $A$. For this purpose, we first obtained an adequate scaling matrix $S$ and then we applied the well-known Varah's bound of Theorem 3.1 to the matrix $AS$. Nevertheless, any bound applicable to SDD matrices could be applied to $AS$, and a different choice would lead us to a different bound. In order to illustrate this fact, we are also going to use the bound introduced in [6] for Nekrasov matrices, which in particular improves Varah's bound for SDD matrices (as proven in Theorem 2.4 of [6]):

$$\|A^{-1}\|_\infty \le \max_{i\in N}\frac{z_i(A)}{|a_{ii}|-h_i(A)}, \tag{4}$$

$$z_1(A) := 1, \qquad z_i(A) := \sum_{j=1}^{i-1}|a_{ij}|\frac{z_j(A)}{|a_{jj}|} + 1, \quad i=2,\ldots,n.$$

As in (3), the new bound for $\|A^{-1}\|_\infty$ reduces to the product of $\|S\|_\infty$ and the bound for $\|(AS)^{-1}\|_\infty$ obtained by (4). In fact, taking into account that $z_i(AS)=z_i(A)$ for all $i\in N$, the explicit form of this new bound is:

$$\|A^{-1}\|_\infty \le \max_{i\in N}\frac{h_i(A)+\varepsilon_i}{|a_{ii}|}\;\max_{i\in N}\frac{z_i(A)}{h_i(A)+\varepsilon_i-h_i(AS)}. \tag{5}$$

As shown by the following numerical experiments, this change gives a better bound whenever $S$ follows Theorem 2.2. However, in general the substitution of Varah's bound is going to increase the computational cost of the bound, while the bound (2) using Theorem 2.1 is not significantly improved. Analogously to (5), if better bounds than (4) for SDD matrices are obtained, then they can also be combined with our bound of Theorem 2.2 to derive sharper bounds than (5), although the computational cost can increase again.
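The recursion for $z_i(A)$ in (4) costs only one extra lower-triangular sweep; a short Python sketch (our own code), checked against the value 0.2632 reported below for the test matrix $A_1$:

```python
# Kolotilina's bound (4): ||A^{-1}||_inf <= max_i z_i(A)/(|a_ii| - h_i(A)),
# with z_1 = 1 and z_i = sum_{j<i} |a_ij| z_j / |a_jj| + 1.

def bound4(A):
    n = len(A)
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0] * (n - 1)
    z = [1.0] * n
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
        z[i] = sum(abs(A[i][j]) * z[j] / abs(A[j][j]) for j in range(i)) + 1.0
    return max(z[i] / (abs(A[i][i]) - h[i]) for i in range(n))

A1 = [[-7, 1, -0.2, 2], [7, 88, 2, -3], [2, 0.5, 13, -2], [0.5, 3, 1, 6]]
print(round(bound4(A1), 4))  # 0.2632, as reported in Table 3
```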


Recent articles have studied the problem of finding bounds for the infinity norm of the inverse of a Nekrasov matrix. In [2], two bounds are introduced and tested with the following six matrices:

$$A_1=\begin{pmatrix} -7 & 1 & -0.2 & 2\\ 7 & 88 & 2 & -3\\ 2 & 0.5 & 13 & -2\\ 0.5 & 3 & 1 & 6 \end{pmatrix}, \qquad A_2=\begin{pmatrix} 8 & 1 & -0.2 & 3.3\\ 7 & 13 & 2 & -3\\ -1.3 & 6.7 & 13 & -2\\ 0.5 & 3 & 1 & 6 \end{pmatrix},$$

$$A_3=\begin{pmatrix} 21 & -9.1 & -4.2 & -2.1\\ -0.7 & 9.1 & -4.2 & -2.1\\ -0.7 & -0.7 & 4.9 & -2.1\\ -0.7 & -0.7 & -0.7 & 2.8 \end{pmatrix}, \qquad A_4=\begin{pmatrix} 5 & 1 & 0.2 & 2\\ 1 & 21 & 1 & -3\\ 2 & 0.5 & 6.4 & -2\\ 0.5 & -1 & 1 & 9 \end{pmatrix},$$

$$A_5=\begin{pmatrix} 6 & -3 & -2\\ -1 & 11 & -8\\ -7 & -3 & 10 \end{pmatrix}, \qquad A_6=\begin{pmatrix} 8 & -0.5 & -0.5 & -0.5\\ -9 & 16 & -5 & -5\\ -6 & -4 & 15 & -3\\ -4.9 & -0.9 & -0.9 & 6 \end{pmatrix}.$$
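As a quick sanity check (our own script, reusing the recursion (1) from Section 2), all six test matrices are indeed Nekrasov:

```python
# Verify that A_1, ..., A_6 satisfy the Nekrasov condition |a_ii| > h_i(A).

def is_nekrasov(A):
    n = len(A)
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0] * (n - 1)
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return all(abs(A[i][i]) > h[i] for i in range(n))

A1 = [[-7, 1, -0.2, 2], [7, 88, 2, -3], [2, 0.5, 13, -2], [0.5, 3, 1, 6]]
A2 = [[8, 1, -0.2, 3.3], [7, 13, 2, -3], [-1.3, 6.7, 13, -2], [0.5, 3, 1, 6]]
A3 = [[21, -9.1, -4.2, -2.1], [-0.7, 9.1, -4.2, -2.1],
      [-0.7, -0.7, 4.9, -2.1], [-0.7, -0.7, -0.7, 2.8]]
A4 = [[5, 1, 0.2, 2], [1, 21, 1, -3], [2, 0.5, 6.4, -2], [0.5, -1, 1, 9]]
A5 = [[6, -3, -2], [-1, 11, -8], [-7, -3, 10]]
A6 = [[8, -0.5, -0.5, -0.5], [-9, 16, -5, -5],
      [-6, -4, 15, -3], [-4.9, -0.9, -0.9, 6]]
print(all(is_nekrasov(A) for A in (A1, A2, A3, A4, A5, A6)))  # True
```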

In more recent works, such as [5,6,9], improvements of these bounds are developed and tested using also these matrices. Since the scaling matrices introduced in Section 2 allowed us to derive different bounds, we are going to compare them with the results obtained in some of the mentioned papers. We have included results from [5,6]. The bound (4) (which corresponds to the bound 2.4 of [6]) improves those obtained in [2] for Nekrasov matrices (as proven in Theorem 2.3 of [6]). Theorem 9 of [5] gives a sharper bound in some cases, and so we also include it in our comparison. Table 3 gathers the different bounds. The first row shows the exact infinity norm of the inverses of the matrices. The data included in the second (corresponding to bound (4)) and third rows are borrowed from the articles that achieved the sharpest bounds. The other rows contain our results, obtained with bounds (2) and (5). In the last case $S$ was given by Theorem 2.1, while in the other cases the diagonal matrix $S$ followed Theorem 2.2. Excluding the rows where $\varepsilon_n=\Delta_n/2$, our bounds used an appropriate choice of parameters. Looking at the rows corresponding to Theorem 2.2 we can observe that the obtained bounds are better for $A_4$ and $A_5$, but they are worse in the other cases. With the choice of a diagonal matrix $S$ following Theorem 2.1 and bound (2) we obtained a better bound for every matrix. This option seems superior to the other possibilities. However, it has an intrinsic problem: the choice of the parameters $\varepsilon_i$ for $i=1,\ldots,n$. Given the right parameters, the obtained bound is excellent, but a bad choice of these values may give a useless bound. In general, it is not clear how to find the optimal values using Theorem 2.1. Performing more numerical tests, we have seen that the bounds introduced in this paper may be particularly useful when the considered Nekrasov matrix is close to failing the condition $|a_{ii}|>h_i(A)$ for some $i\in N\setminus\{n\}$.
Looking at the bound for SDD matrices introduced by Varah (Theorem 3.1), we can observe that it depends on all the row sums of the comparison matrix. In particular, bounds for Nekrasov matrices based on Varah's bound seem to be inversely proportional to $|a_{ii}|-h_i(A)$ for some indices $i\in N$. In order to illustrate this fact, we have modified one entry of all previous examples and we present the bounds obtained for the inverses of these new matrices in Table 4.

$$\hat A_1=\begin{pmatrix} -7 & 1 & -3.9 & 2\\ 7 & 88 & 2 & -3\\ 2 & 0.5 & 13 & -2\\ 0.5 & 3 & 1 & 6 \end{pmatrix}, \qquad \hat A_2=\begin{pmatrix} 8 & 1 & -0.2 & 3.3\\ 7 & 13 & 2 & -3\\ -11 & 6.7 & 13 & -2\\ 0.5 & 3 & 1 & 6 \end{pmatrix},$$

$$\hat A_3=\begin{pmatrix} 21 & -9.1 & -4.2 & -2.1\\ -0.7 & 9.1 & -4.2 & -4.2\\ -0.7 & -0.7 & 4.9 & -2.1\\ -0.7 & -0.7 & -0.7 & 2.8 \end{pmatrix}, \qquad \hat A_4=\begin{pmatrix} 5 & 1 & 0.2 & 2\\ 1 & 21 & 1 & -3\\ 2 & 0.5 & 6.4 & -2\\ 0.5 & -1 & 15 & 9 \end{pmatrix},$$

$$\hat A_5=\begin{pmatrix} 6 & -3 & -2\\ -1 & 9 & -8\\ -7 & -3 & 10 \end{pmatrix}, \qquad \hat A_6=\begin{pmatrix} 8 & -0.5 & -0.5 & -0.5\\ -31.9 & 16 & -5 & -5\\ -6 & -4 & 15 & -3\\ -4.9 & -0.9 & -0.9 & 6 \end{pmatrix}.$$

In Table 4 we observe that bound (2) is lower than (4) and than the bound of [5], even with the choice of $\varepsilon_n$ as the middle point, for matrices $\hat A_2$, $\hat A_3$, $\hat A_5$ and $\hat A_6$. We also obtained tight bounds for the norm of the inverse of $\hat A_1$. The remaining case, $\hat A_4$, was built by increasing significantly an entry of the last row. As a consequence, all bounds compared in Table 4 obtained weaker results than in Table 3. For $\hat A_6$, the bounds of [5] and [6] (corresponding to bound (4)) are very high, while our bounds (using (2)) are all controlled. This phenomenon will also be illustrated with the following family of $3\times 3$ matrices.


Table 3. Upper bounds of $\|A^{-1}\|_\infty$.

Bound                     A1       A2       A3       A4       A5       A6
Exact norm                0.1921   0.2390   0.8759   0.2707   1.1519   0.4474
(4)                       0.2632   0.5365   0.9676   0.5556   1.4138   0.4928
Theorem 9 of [5]          0.2505   0.5365   0.9676   0.5038   1.4138   0.4928
(2), ε_n = Δ_n/2          0.6398   1.4406   1.5527   0.7264   1.2974   1.2893
(5), ε_n = Δ_n/2          0.4992   0.7422   1.0632   0.5596   1.2809   1.2893
(2), Theorem 2.2          0.3474   0.8894   1.3325   0.4484   1.1658   1.0796
(5), Theorem 2.2          0.3074   0.5684   0.9735   0.3817   1.1658   1.0436
(2), Theorem 2.1          0.2354   0.5260   0.9273   0.3168   1.1588   0.4527

Table 4. Upper bounds of $\|A^{-1}\|_\infty$.

Bound                     Â1       Â2       Â3       Â4       Â5       Â6
Exact norm                0.2385   0.9827   1.0997   0.2848   2.4545   0.9144
(4)                       10.0000  16.2005  5.5357   8.7889   7.0000   266.0000
Theorem 9 of [5]          0.3979   16.2005  5.5357   8.7889   7.0000   266.0000
(2), ε_n = Δ_n/2          1.2345   2.2098   2.3120   17.0569  5.5208   2.6020
(5), ε_n = Δ_n/2          0.6144   1.2071   1.6377   3.1074   5.5208   2.6020
(2), Theorem 2.2          0.8230   1.4732   2.1018   10.2316  4.1085   2.0316
(5), Theorem 2.2          0.5344   0.9923   1.5203   3.0603   3.4717   1.9119
(2), Theorem 2.1          0.3262   1.2642   1.1479   6.6456   2.6180   2.0316

Example 4.1. Let us consider the family of matrices

$$A=\begin{pmatrix} 4 & 2 & 1\\ \frac{4}{3}-\varepsilon & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}, \tag{6}$$

where $0<\varepsilon<\frac{1}{10}$. In this case, $\|A^{-1}\|_\infty<1.4167$, $h_1(A)=3$, $h_2(A)=2-\frac{3}{4}\varepsilon$ and $h_3(A)=\frac{7}{4}-\frac{3}{8}\varepsilon$. Then the bounds (2.4) of [6] and Theorem 9 of [5] coincide and are equal to $\frac{16}{9\varepsilon}-\frac{1}{3}$. We can observe that this bound is arbitrarily large when $\varepsilon\to 0$. However, our bounds remain controlled. In fact, (2) with $\varepsilon_3=\Delta_3/2$ (and $\varepsilon_1=0$, $\varepsilon_2=0$) gives the bound $16\,\frac{1-(3\varepsilon/8)}{1+(3\varepsilon/2)}$, (2) in Theorem 2.2 is equal to $12\,\frac{1-(3\varepsilon/8)}{1+(3\varepsilon/2)}$ and (2) in Theorem 2.1 can become equal to 12.
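The closed forms in Example 4.1 can be confirmed numerically; the sketch below (our own code) evaluates bound (2) with $\varepsilon_1=\varepsilon_2=0$ and $\varepsilon_3=\Delta_3/2$ and compares it with $16(1-3\varepsilon/8)/(1+3\varepsilon/2)$.

```python
# Bound (2) for the family (6), with eps_1 = eps_2 = 0 and eps_3 = Delta_3/2.

def example_bound(e):
    A = [[4.0, 2.0, 1.0], [4.0 / 3.0 - e, 2.0, 1.0], [1.0, 1.0, 2.0]]
    h = [3.0, 0.0, 0.0]                        # h_1(A) = 2 + 1 = 3
    for i in (1, 2):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, 3)))
    eps = [0.0, 0.0, (abs(A[2][2]) - h[2]) / 2.0]   # eps_3 = Delta_3 / 2
    w = [sum(abs(A[i][j]) * eps[j] / abs(A[j][j]) for j in range(i))
         for i in range(3)]
    p = [sum(abs(A[i][j]) * (abs(A[j][j]) - h[j] - eps[j]) / abs(A[j][j])
             for j in range(i + 1, 3)) for i in range(3)]
    num = max((h[i] + eps[i]) / abs(A[i][i]) for i in range(3))
    return num / min(eps[i] - w[i] + p[i] for i in range(3))

e = 0.01
closed = 16.0 * (1.0 - 3.0 * e / 8.0) / (1.0 + 3.0 * e / 2.0)
print(abs(example_bound(e) - closed) < 1e-9)  # True
```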

We finish this section by applying Theorem 3.2 to derive bounds for other norms. The first result is obtained by applying Theorem 3.2 to the transpose matrix.

Corollary 4.2. Let $A=(a_{ij})_{1\le i,j\le n}$ be a matrix with $A^T$ Nekrasov. Then

$$\|A^{-1}\|_1 \le \frac{\max_{i\in N}\frac{h_i(A^T)+\bar\varepsilon_i}{|a_{ii}|}}{\min_{i\in N}(\bar\varepsilon_i-\bar w_i+\bar p_i)},$$

where $\bar\varepsilon_i$, $\bar w_i$, $\bar p_i$ are the parameters $\varepsilon_i$, $w_i$, $p_i$ of Theorems 2.2 and 3.2 corresponding to $A^T$.

Corollary 4.3. Let $A=(a_{ij})_{1\le i,j\le n}$ be a matrix with $A$ and $A^T$ Nekrasov and let $\sigma_n(A)$ be its minimal singular value. Then

$$\sigma_n(A) = \|A^{-1}\|_2^{-1} \ge \sqrt{\frac{\min_{i\in N}(\varepsilon_i-w_i+p_i)\;\min_{i\in N}(\bar\varepsilon_i-\bar w_i+\bar p_i)}{\max_{i\in N}\frac{h_i(A)+\varepsilon_i}{|a_{ii}|}\;\max_{i\in N}\frac{h_i(A^T)+\bar\varepsilon_i}{|a_{ii}|}}}, \tag{7}$$

where $\varepsilon_i$, $w_i$, $p_i$, $\bar\varepsilon_i$, $\bar w_i$, $\bar p_i$ are given in Theorems 2.2 and 3.2 and Corollary 4.2.

Proof. It is a consequence of the well-known facts that, for a nonsingular matrix $M$, its minimal singular value coincides with $\|M^{-1}\|_2^{-1}$ and that $\|M\|_2^2 \le \|M\|_1\|M\|_\infty$. □

In the following example we apply Corollary 4.3 to the suitable matrices from the previous experiments, $A_3$ and $A_4$. The matrix $A_3$ is an SDD matrix whose transpose is Nekrasov, while $A_4$ is an SDD matrix whose transpose is also SDD.

Example 4.4. Corollary 4.3 gives a lower bound for the minimal singular value of a Nekrasov matrix whose transpose is also a Nekrasov matrix. We can apply this result to $A_3$ and $A_4$ with the choice $\varepsilon_1=\varepsilon_2=\varepsilon_3=0$, $\varepsilon_4=\Delta_4/2$, where $\Delta_4(A_3)/2=0.6572$ and $\Delta_4(A_4)/2=3.9646$. For these matrices, we have that $\sigma_n(A_3)=1.0943$ and $\sigma_n(A_4)=4.2327$. The bounds obtained applying (7) are $\sigma_n(A_3)>0.3357$ and $\sigma_n(A_4)>0.8680$.
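The quantity $\Delta_4(A_3)/2$ used in Example 4.4 is easy to reproduce with the recursion (1) (a sketch in our own names):

```python
# Recompute Delta_4(A_3)/2 = (|a_44| - h_4(A_3))/2 for the matrix A_3 above.

def h_values(A):
    n = len(A)
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0] * (n - 1)
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

A3 = [[21, -9.1, -4.2, -2.1], [-0.7, 9.1, -4.2, -2.1],
      [-0.7, -0.7, 4.9, -2.1], [-0.7, -0.7, -0.7, 2.8]]
delta4 = abs(A3[3][3]) - h_values(A3)[3]
print(round(delta4 / 2.0, 4))  # 0.6572
```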


5. Error bounds for LCP of Nekrasov matrices

Given an $n\times n$ real matrix $A$ and $q\in\mathbb{R}^n$, the linear complementarity problem looks for solutions $x^*\in\mathbb{R}^n$ of

$$Ax+q \ge 0, \qquad x\ge 0, \qquad x^T(Ax+q)=0. \tag{8}$$

This problem (8) is usually denoted by LCP(A, q). A real square matrix is called a P-matrix if all its principal minors are positive. Let us recall (see [18]) that A is a P-matrix if and only if the LCP(A, q) (8) has a unique solution x∗ for each q ∈ Rn . Let A be a real H-matrix with all its diagonal entries positive. Then A is a P-matrix and so we can apply the third inequality of Theorem 2.3 of [14] and obtain for any x ∈ Rn the inequality:

$$\|x-x^*\|_\infty \le \max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\,\|r(x)\|_\infty,$$

where we denote by $I$ the $n\times n$ identity matrix, by $D$ the diagonal matrix $D=\mathrm{diag}(d_i)_{i=1}^n$ with $0\le d_i\le 1$ for all $i=1,\ldots,n$, by $x^*$ the solution of the LCP$(A,q)$ and by $r(x) := \min(x, Ax+q)$, where the min operator denotes the componentwise minimum of two vectors. By (2.4) of [14], given in Theorem 2.1 of [14], when $A=(a_{ij})_{1\le i,j\le n}$ is a real H-matrix with all its diagonal entries positive, then we have

$$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty \le \|(\mathcal{M}(A))^{-1}\max(\Lambda, I)\|_\infty, \tag{9}$$

where we denote by $\mathcal{M}(A)$ the comparison matrix of $A$, by $\Lambda$ the diagonal part of $A$ ($\Lambda := \mathrm{diag}(a_{ii})_{i=1}^n$) and by $\max(\Lambda, I) := \mathrm{diag}(\max\{a_{ii},1\})_{i=1}^n$. The next theorem, corresponding to Theorem 2.1 of [19], shows how scaling matrices that transform an H-matrix into an SDD matrix can be applied to derive error bounds for LCP.

Theorem 5.1. Suppose that $A=(a_{ij})_{1\le i,j\le n}$ is an H-matrix with all its diagonal entries positive. Let $S=\mathrm{diag}(s_i)_{i=1}^n$, $s_i>0$ for all $i\in N$, be a diagonal matrix such that $AS$ is SDD. For any $i=1,\ldots,n$, let $\bar\beta_i := a_{ii}s_i - \sum_{j\ne i}|a_{ij}|s_j$. Then

$$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty \le \max\left\{\frac{\max_i\{s_i\}}{\min_i\{\bar\beta_i\}},\; \frac{\max_i\{s_i\}}{\min_i\{s_i\}}\right\}. \tag{10}$$

The following theorem provides an error bound for the particular LCP associated to a Nekrasov matrix.

Theorem 5.2. Let $A=(a_{ij})_{1\le i,j\le n}$ be a Nekrasov matrix with all its diagonal entries positive. Let $S=\mathrm{diag}(s_i)_{i=1}^n$ and $\varepsilon_i$ ($i\in N$) be the diagonal matrix and the parameters, respectively, defined in Theorem 2.2. Then

$$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty \le \max\left\{\frac{1}{\min_i\{\varepsilon_i-w_i+p_i\}},\; \frac{1}{\min_i\{s_i\}}\right\}, \tag{11}$$

where, for each $i\in N$, $p_i$ and $w_i$ are defined in Theorem 3.2.

Proof. Since $A$ is Nekrasov, $s_i<1$ for all $i\in N$ and $A$ is an H-matrix. So, we can apply (10) and then it is sufficient to prove that $\bar\beta_i = \varepsilon_i - w_i + p_i$ for all $i\in N$. For any $i\in N$, we have

$$\bar\beta_i = a_{ii}\frac{h_i(A)+\varepsilon_i}{a_{ii}} - \sum_{j\in N\setminus\{i\}}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{a_{jj}}$$

and by (1) we can write

$$\bar\beta_i = \sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)}{a_{jj}} + \sum_{j=i+1}^{n}|a_{ij}| + \varepsilon_i - \sum_{j\in N\setminus\{i\}}|a_{ij}|\frac{h_j(A)+\varepsilon_j}{a_{jj}} = \varepsilon_i - \sum_{j=1}^{i-1}|a_{ij}|\frac{\varepsilon_j}{a_{jj}} + \sum_{j=i+1}^{n}|a_{ij}|\left(1-\frac{h_j(A)+\varepsilon_j}{a_{jj}}\right) = \varepsilon_i - w_i + p_i. \;\square$$

As a choice of each parameter $\varepsilon_i$ ($i\in N$) in Theorem 5.2, we recommend (as we already did for $\varepsilon_n$) choosing the middle point of the interval where it lies (see Theorem 2.2). This choice is applied in the following example, where we present a family of matrices for which our bound (11) is a small constant, in contrast to the bounds of [14–16], which can be arbitrarily large. Observe also that these matrices do not satisfy the necessary hypotheses to apply the bound of [1].

Example 5.3. Let us consider the family of matrices

$$A=\begin{pmatrix} K & -K+2 & -1\\ -K & K & 0\\ -K & -1/K & K \end{pmatrix},$$

where $K>2$. In this case, $h_1(A)=K-1$, $h_2(A)=K-1$ and $h_3(A)=K-1+\frac{K-1}{K^2}$. Then the bound (9) (of [14]) is equal to $\frac{2K^3+2K}{K^2-1}$, and the bounds of [15,16] coincide and are equal to $\frac{2K^3+2K^2}{K^2-K+1}$. We can observe that these bounds are arbitrarily large when $K\to\infty$. However, our new bound remains controlled. In fact, (11) with $\varepsilon_1=0$, $\varepsilon_2=1/2$ and $\varepsilon_3=\frac{2K^2-2K+3}{4K^2}$ gives the bound $\frac{4K^3}{2K^3-2K^2-2K+1}$.
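The closed form of bound (11) in Example 5.3 can be reproduced numerically (our own sketch; note that for this family $k=2$ in Theorem 2.2, since row 2 has no nonzero entry to the right of the diagonal):

```python
# Evaluate bound (11) for the family of Example 5.3 with eps_1 = 0,
# eps_2 = 1/2 and eps_3 = (2K^2 - 2K + 3)/(4K^2), the midpoint choices.

def lcp_bound(K):
    A = [[K, -K + 2.0, -1.0], [-K, K, 0.0], [-K, -1.0 / K, K]]
    n = 3
    h = [sum(abs(A[0][j]) for j in range(1, n))] + [0.0, 0.0]
    for i in range(1, n):
        h[i] = (sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                + sum(abs(A[i][j]) for j in range(i + 1, n)))
    eps = [0.0, 0.5, (2.0 * K * K - 2.0 * K + 3.0) / (4.0 * K * K)]
    s = [(h[i] + eps[i]) / abs(A[i][i]) for i in range(n)]
    w = [sum(abs(A[i][j]) * eps[j] / abs(A[j][j]) for j in range(i))
         for i in range(n)]
    p = [sum(abs(A[i][j]) * (abs(A[j][j]) - h[j] - eps[j]) / abs(A[j][j])
             for j in range(i + 1, n)) for i in range(n)]
    return max(1.0 / min(e - a + b for e, a, b in zip(eps, w, p)),
               1.0 / min(s))

K = 10.0
closed = 4.0 * K**3 / (2.0 * K**3 - 2.0 * K**2 - 2.0 * K + 1.0)
print(abs(lcp_bound(K) - closed) < 1e-9)  # True
```

As $K$ grows, this bound tends to 2, while the bounds of [14–16] grow like $2K$.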

Acknowledgments

This research has been partially supported by MTM2015-65433-P (MINECO/FEDER) Spanish Research Grant and by Gobierno de Aragón.

References

[1] M. García-Esnaola, J.M. Peña, Error bounds for linear complementarity problems of Nekrasov matrices, Numer. Algorithms 67 (2014) 655–667.
[2] L. Cvetković, P.-F. Dai, K. Doroslovački, Y.T. Li, Infinity norm bounds for the inverse of Nekrasov matrices, Appl. Math. Comput. 219 (2013) 5020–5024.
[3] T. Szulc, Some remarks on a theorem of Gudkov, Linear Algebra Appl. 225 (1995) 221–235.
[4] L. Cvetković, V. Kostić, K. Doroslovački, Max-norm bounds for the inverse of S-Nekrasov matrices, Appl. Math. Comput. 218 (2012) 9498–9503.
[5] L. Gao, C. Li, Y. Li, A new upper bound on the infinity norm of the inverse of Nekrasov matrices, J. Appl. Math. 2014 (2014), Art. ID 708128, 8 pp.
[6] L.Y. Kolotilina, On bounding inverses to Nekrasov matrices in the infinity norm, J. Math. Sci. 199 (2014) 432–437.
[7] L.Y. Kolotilina, Some characterizations of Nekrasov and S-Nekrasov matrices, J. Math. Sci. 207 (2015) 767–775.
[8] L.Y. Kolotilina, Bounds for the inverses of generalized Nekrasov matrices, J. Math. Sci. 207 (2015) 786–794.
[9] C. Li, H. Pei, A. Gao, Y. Li, Improvements on the infinity norm bound for the inverse of Nekrasov matrices, Numer. Algorithms 71 (2016) 613–630.
[10] J. Liu, J. Zhang, L. Zhou, G. Tu, The Nekrasov diagonally dominant degree on the Schur complement of Nekrasov matrices and its applications, Appl. Math. Comput. 320 (2018) 251–263.
[11] T. Szulc, L. Cvetković, M. Nedović, Scaling technique for partition-Nekrasov matrices, Appl. Math. Comput. 271 (2015) 201–208.
[12] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics 9, SIAM, Philadelphia, 2000.
[13] R. Bru, I. Giménez, A. Hadjidimos, Is A ∈ C^{n×n} a general H-matrix?, Linear Algebra Appl. 436 (2012) 364–380.
[14] X. Chen, S. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program. Ser. A 106 (2006) 513–525.
[15] C. Li, P. Dai, Y. Li, New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices, Numer. Algorithms 74 (2017) 997–1009.
[16] L. Gao, C. Li, Y. Li, An improvement of the error bounds for linear complementarity problems of Nekrasov matrices, Linear Multilinear Algebra 66 (2018) 1505–1519.
[17] J.M. Varah, A lower bound for the smallest singular value of a matrix, Linear Algebra Appl. 11 (1975) 3–5.
[18] R.W. Cottle, J.S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, Boston, MA, 1992.
[19] M. García-Esnaola, J.M. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl. 433 (2010) 956–964.
Gao, C. Li, Y. Li, An improvement of the error bounds for linear complementarity problems of Nekrasov matrices, Linear Multilinear Algebra 66 (2018) 1505–1519. J.M. Varah, A lower bound for the smallest singular value of a matrix, Linear Algebra Appl. 11 (1975) 3–5. R.W. Cottle, J.S. Pang, R.E. Stone, The Linear Complementarity Problems, Academic Press, Boston MA, 1992. M. García-Esnaola, J.M. Peña, A comparison of error bounds for linear complementarity problems of h-matrices, Linear Algebra Appl. 433 (2010) 956–964.