Improved error bounds based on α(M) for the linear complementarity problem∗

Xi-ming Fang¹, Zhijun Qiao²†

¹ Department of Mathematics and Statistics, Zhaoqing University, China
² School of Mathematical and Statistical Sciences, The University of Texas Rio Grande Valley, Edinburg, TX 78539, USA
Abstract. In this paper, the error bounds of the approximate solution and the bounds of the true solution to the linear complementarity problem LCP(M, q) are presented, based on the function α(M) introduced by Mathias and Pang in [23]. To show the accuracy and effectiveness of our results, we perform numerical experiments which reveal that our bounds are sharper than other results in the literature.
Key words: Linear complementarity problem (LCP); solution; error bound.
Mathematics Subject Classifications (2010): 65G20, 65G50
1 Introduction
The linear complementarity problem (LCP) is to find a real vector z ∈ R^n such that

    z ≥ 0,  Mz + q ≥ 0,  z^T(Mz + q) = 0,    (1)

where M = (a_{ij}) ∈ R^{n×n} and q ∈ R^n is a column vector. This problem is usually abbreviated as the LCP(M, q), and many real-world problems can be cast into this form.
∗ This work was partially supported by Zhaoqing University Research Program # 611-612279 and by the NNSFC under grant number # 11971475. The author Qiao also thanks the UT President Endowed Professorship (Project # 450000123) for its partial support.
† Corresponding author. E-mail: [email protected]
Typical examples include the elastic contact problem, linear and quadratic programming problems, the free boundary problem for journal bearings, and market equilibrium problems (see [4–6, 11, 18, 21] and the references therein).

The research on the linear complementarity problem is two-fold: basic theory and numerical algorithms. On the theoretical side, the main topics are the existence and uniqueness of the solution, the stability and sensitivity of the solution, the relationship between the linear complementarity problem and other problems (such as the variational inequality problem), and extensions and generalizations of the linear complementarity problem [6, 11, 14, 16, 20, 21, 23, 24]. There is a clear statement on the existence of the solution: when the system matrix M is a P-matrix, the linear complementarity problem has a unique solution for any q ∈ R^n. Positive definite matrices, H+-matrices and M-matrices are three types of P-matrices, and they have been studied by many authors, both theoretically and numerically (see [1, 2, 4, 5, 11, 12, 14, 15, 17] and the references therein). In the study of the stability and sensitivity of the solution, Mathias and Pang introduced a positive quantity α(M) for a P-matrix M in [23]. Using the function α(M), they provided error bounds for the approximate solution and estimated the range of this function. Meanwhile, Cottle, Pang and Stone obtained some perturbation results for the solution through the function α(M) [6]. Since α(M) is not easy to calculate for a concrete P-matrix, many authors have looked for alternative functions to estimate the error bounds of the approximate solution, in particular for special classes of P-matrices such as MB-matrices, B-matrices and H+-matrices [9, 10, 13, 19, 24, 25]. Most of these alternative functions involve nonnegative diagonal matrices or nonnegative parameters, and proper choices of these quantities are very important for estimating the value of the functions (see [7–9, 19, 20, 22, 25] and the references therein). In recent decades, a series of numerical methods have been developed for the linear complementarity problem, including the Lemke method, projected methods, modulus-based matrix splitting iteration methods and parallel iteration methods [1–5, 12, 15, 26], together with convergence theory and numerical experiments for the cases where the system matrix is a positive definite matrix, an H+-matrix, an M-matrix, and so on.

In this paper, we study the error bound of the approximate solution to the LCP(M, q) based on the function α(M) and present some new error bounds, including absolute and relative error bounds of the approximate solution. Meanwhile, we discuss bounds of the true solution together with estimates of these new bounds. To demonstrate the accuracy and effectiveness of our results, we give numerical examples showing that our bounds are sharper than those in [23].
The paper is organized as follows. We introduce some definitions and lemmas in Section 2 and present our main results in Section 3. Numerical experiments are shown and discussed in Section 4. We conclude the paper with some remarks in Section 5.
2 Preliminaries
In this section, we introduce some definitions and preliminary results related to (1).

Definition 2.1 ([16]) A matrix M ∈ R^{n×n} is called a P-matrix if the inequality

    max_{1≤i≤n} x_i(Mx)_i > 0    (2)

holds for every nonzero real column vector x ∈ R^n.

Definition 2.2 ([23]) Let M ∈ R^{n×n} be a P-matrix. The quantity α(M) associated with M is defined as

    α(M) = min_{‖x‖∞=1} max_{1≤i≤n} { x_i(Mx)_i }.    (3)
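Computing α(M) exactly requires solving a nonconvex min-max problem over the boundary of the ∞-norm unit ball, so in practice one often settles for an estimate. The following sketch is our own illustration (not part of [23]): it produces a rough upper estimate of α(M) by Monte Carlo sampling; the helper name alpha_estimate and the sample size are arbitrary choices.

    import numpy as np

    def alpha_estimate(M, n_samples=200_000, seed=0):
        """Rough upper estimate of alpha(M) = min_{||x||_inf = 1} max_i x_i (M x)_i.

        Every sampled x with ||x||_inf = 1 gives max_i x_i (M x)_i >= alpha(M),
        so the smallest value seen over many samples over-estimates alpha(M)
        and tightens as n_samples grows.  Intended for small matrices only.
        """
        rng = np.random.default_rng(seed)
        n = M.shape[0]
        X = rng.uniform(-1.0, 1.0, size=(n_samples, n))
        X /= np.abs(X).max(axis=1, keepdims=True)   # project samples onto ||x||_inf = 1
        vals = np.max(X * (X @ M.T), axis=1)        # max_i x_i (M x)_i for each sample
        return float(vals.min())

    # For a positive diagonal matrix the estimate should approach min_i M_ii (= 2 here).
    print(alpha_estimate(np.diag([2.0, 3.0, 5.0])))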
Lemma 2.1 ([23]) Let M ∈ R^{n×n} be a P-matrix and let z* denote the unique solution of (1). Then

    α(M⁻¹)‖(−q)+‖∞ ≤ ‖z*‖∞ ≤ α(M)⁻¹‖(−q)+‖∞,    (4)

where (−q)+ = max{−q, 0}.

Lemma 2.2 ([23]) Let M ∈ R^{n×n} be a P-matrix, let z* denote the unique solution of (1), and let u be an arbitrary n-vector. Then

    ‖min(u, q + Mu)‖∞ / (1 + ‖M‖∞) ≤ ‖z* − u‖∞ ≤ (1 + ‖M‖∞)‖min(u, q + Mu)‖∞ / α(M),    (5)

where the operator min denotes the componentwise minimum of two vectors.

Lemma 2.3 ([23]) Let M ∈ R^{n×n} be a P-matrix, let z* denote the unique solution of the LCP(M, q), and let u be an arbitrary n-vector. Assume that (−q)+ ≠ 0. Then

    α(M)‖min(u, q + Mu)‖∞ / ((1 + ‖M‖∞)‖(−q)+‖∞) ≤ ‖z* − u‖∞/‖z*‖∞ ≤ (1 + ‖M‖∞)‖min(u, q + Mu)‖∞ / (α(M⁻¹)α(M)‖(−q)+‖∞).    (6)
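For a given α(M) (and, for (4) and (6), a given α(M⁻¹)), the quantities in Lemmas 2.1-2.3 are straightforward to evaluate. The following sketch is ours; the helper name mathias_pang_bounds is hypothetical, and the α-values are assumed to be known or estimated, e.g. with the alpha_estimate sketch above.

    import numpy as np

    def mathias_pang_bounds(M, q, u, alpha_M, alpha_Minv=None):
        """Evaluate the bounds (4)-(6) for given alpha(M) and (optionally) alpha(M^{-1})."""
        norm_M = np.linalg.norm(M, np.inf)
        v = np.minimum(u, M @ u + q)                       # natural residual min(u, Mu + q)
        r = np.linalg.norm(v, np.inf)
        qplus = np.linalg.norm(np.maximum(-q, 0.0), np.inf)
        abs_err = (r / (1.0 + norm_M),                     # lower bound in (5)
                   (1.0 + norm_M) * r / alpha_M)           # upper bound in (5)
        sol = rel_err = None
        if alpha_Minv is not None:
            sol = (alpha_Minv * qplus, qplus / alpha_M)    # solution bounds (4)
            rel_err = (alpha_M * r / ((1.0 + norm_M) * qplus),               # (6), lower
                       (1.0 + norm_M) * r / (alpha_M * alpha_Minv * qplus))  # (6), upper
        return abs_err, sol, rel_err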
3 Main results
In this section, we discuss the error bounds of the approximate solution and the bound estimation of the true solution for (1).

Theorem 3.1 Let M ∈ R^{n×n} be a P-matrix, let z* ∈ R^n denote the unique solution of (1), and let u be an arbitrary n-vector. Then

    [‖v‖∞(1 + ‖M‖∞) − √Δ] / (2α(M)) ≤ ‖z* − u‖∞ ≤ [‖v‖∞(1 + ‖M‖∞) + √Δ] / (2α(M)),    (7)

where v = min(u, q + Mu), Δ = ‖v‖∞²(1 + ‖M‖∞)² − 4α(M)v_{i*}² ≥ 0, and the index i* satisfies (u − z*)_{i*}(M(u − z*))_{i*} = max_{1≤i≤n} {(u − z*)_i(M(u − z*))_i} and v_{i*} ≠ 0. If we denote the lower and upper bounds in (7) by B_lower and B_upper, respectively, then the two bounds are located in the following two intervals: B_lower ∈ [A−, B−] and B_upper ∈ [A+, B+], where

    A− = [‖v‖∞(1 + ‖M‖∞) − √(‖v‖∞²(1 + ‖M‖∞)² − 4α(M) min_{v_i≠0} v_i²)] / (2α(M)),
    B− = [‖v‖∞(1 + ‖M‖∞) − √(‖v‖∞²(1 + ‖M‖∞)² − 4α(M)‖v‖∞²)] / (2α(M)),
    A+ = [‖v‖∞(1 + ‖M‖∞) + √(‖v‖∞²(1 + ‖M‖∞)² − 4α(M)‖v‖∞²)] / (2α(M)),
    B+ = [‖v‖∞(1 + ‖M‖∞) + √(‖v‖∞²(1 + ‖M‖∞)² − 4α(M) min_{v_i≠0} v_i²)] / (2α(M)).

Proof: Let v = min(u, Mu + q) and ω* = Mz* + q, where z* ∈ R^n is the unique solution of (1). Then the vector y = u − v satisfies the complementarity system

    y ≥ 0,  x = q + (M − I)v + My ≥ 0,  y^T x = 0.

Thus, for each i we obtain

    (y − z*)_i(x − ω*)_i = y_i x_i − y_i ω*_i − z*_i x_i + z*_i ω*_i = −y_i ω*_i − z*_i x_i ≤ 0,
    (y − z*)_i(x − ω*)_i = (u − v − z*)_i(−v + M(u − z*))_i = −(u − z*)_i v_i − v_i(M(u − z*))_i + (u − z*)_i(M(u − z*))_i + v_i².

Hence, for each i we have

    (u − z*)_i(M(u − z*))_i ≤ (u − z*)_i v_i + v_i(M(u − z*))_i − v_i².    (8)

Consider the particular index i* for which

    0 ≤ (u − z*)_{i*}(M(u − z*))_{i*} = max_{1≤i≤n} (u − z*)_i(M(u − z*))_i.    (9)

From (8), (9) and Definition 2.2 we can derive

    α(M)‖u − z*‖∞² ≤ (u − z*)_{i*}v_{i*} + v_{i*}(M(u − z*))_{i*} − v_{i*}² ≤ (1 + ‖M‖∞)‖v‖∞‖u − z*‖∞ − v_{i*}².    (10)
From (10) we see that if v_{i*} = 0, then u = z*, which means that u is already the true solution. Thus, if v_{i*} ≠ 0, the above inequality can be rewritten as the quadratic inequality

    α(M)‖u − z*‖∞² − (1 + ‖M‖∞)‖v‖∞‖u − z*‖∞ + v_{i*}² ≤ 0.

Solving this quadratic inequality in ‖u − z*‖∞, we obtain (7). The stated intervals for B_lower and B_upper follow easily from (7).

To describe the error between the approximate solution and the true solution more precisely, we would like to increase the error lower bound and decrease the error upper bound. Since the true solution is not known in advance, the index i* in (7) cannot be identified exactly. In practice we are mainly concerned with the error upper bound; the interval containing B_upper can be computed beforehand and gives some information about the error. For example, if the right-hand endpoint of the interval containing B_upper is very small, then the approximate solution is very close to the true solution. Hence the interval containing B_upper may be the more useful quantity in a certain sense.
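The bound (7) itself involves v_{i*}, which depends on the unknown solution z*, but the enclosing intervals [A−, B−] and [A+, B+] only involve v and can be computed beforehand. A minimal sketch of ours makes this concrete (theorem31_domains is a hypothetical helper name, and alpha_M may itself be an estimate):

    import numpy as np

    def theorem31_domains(M, q, u, alpha_M):
        """Compute [A-, B-] and [A+, B+] enclosing the lower/upper bound of (7).

        Assumes v = min(u, q + Mu) is not identically zero (otherwise u already
        solves the LCP) and that the discriminants are nonnegative, as in Theorem 3.1.
        """
        norm_M = np.linalg.norm(M, np.inf)
        v = np.minimum(u, M @ u + q)
        r = np.linalg.norm(v, np.inf)
        vmin2 = np.min(v[v != 0.0] ** 2)                   # smallest nonzero v_i^2
        a = r * (1.0 + norm_M)

        def sqrt_delta(t2):                                # sqrt(Delta) with v_{i*}^2 -> t2
            return np.sqrt(max(a**2 - 4.0 * alpha_M * t2, 0.0))

        A_minus = (a - sqrt_delta(vmin2)) / (2.0 * alpha_M)
        B_minus = (a - sqrt_delta(r**2)) / (2.0 * alpha_M)
        A_plus  = (a + sqrt_delta(r**2)) / (2.0 * alpha_M)
        B_plus  = (a + sqrt_delta(vmin2)) / (2.0 * alpha_M)
        return (A_minus, B_minus), (A_plus, B_plus)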
According to the proof of this theorem, we can deduce the following bound estimation of the true solution of the LCP(M, q).

Corollary 3.1 Let M ∈ R^{n×n} be a P-matrix and let z* ∈ R^n denote the unique solution of (1). Then

    |((−q)+)_{i*}|(1 + ‖M‖∞ − √Δ) / (2α(M)) ≤ ‖z*‖∞ ≤ |((−q)+)_{i*}|(1 + ‖M‖∞ + √Δ) / (2α(M)),    (11)

where Δ = (1 + ‖M‖∞)² − 4α(M) ≥ 0 and the index i* satisfies z*_{i*}(Mz*)_{i*} = max_{1≤i≤n} {z*_i(Mz*)_i}. If we denote the smallest nonzero element and the largest element of the vector |(−q)+| by |(−q)+|min and |(−q)+|max, respectively, and if ((−q)+)_{i*} ≠ 0, then |((−q)+)_{i*}| ∈ [|(−q)+|min, |(−q)+|max] with |(−q)+|min ≠ 0, and the lower and upper bounds of the true solution lie in the following intervals:

    B_lower ∈ [ |(−q)+|min(1 + ‖M‖∞ − √Δ) / (2α(M)),  |(−q)+|max(1 + ‖M‖∞ − √Δ) / (2α(M)) ],
    B_upper ∈ [ |(−q)+|min(1 + ‖M‖∞ + √Δ) / (2α(M)),  |(−q)+|max(1 + ‖M‖∞ + √Δ) / (2α(M)) ].
Proof: The proof is similar to that of Theorem 3.1. Setting u = 0 in (8) and using the relation v = min(0, M·0 + q) = min(0, q) = −(−q)+, we have

    z*_i(Mz*)_i = (−z*)_i(M(−z*))_i ≤ (−z*)_i(−(−q)+)_i + (−(−q)+)_i(M(−z*))_i − v_i²
                = z*_i((−q)+)_i + ((−q)+)_i(Mz*)_i − ((−q)+)_i²,  for i = 1, 2, ..., n.

Consider the particular index i* which satisfies

    z*_{i*}(Mz*)_{i*} = max_{1≤i≤n} z*_i(Mz*)_i.
Noting that z* ≥ 0 and using the definition of α(M), we have

    α(M)‖z*‖∞² ≤ z*_{i*}(Mz*)_{i*} ≤ z*_{i*}((−q)+)_{i*} + ((−q)+)_{i*}(Mz*)_{i*} − ((−q)+)_{i*}²
               ≤ ‖z*‖∞((−q)+)_{i*} + ((−q)+)_{i*}‖M‖∞‖z*‖∞ − ((−q)+)_{i*}².

Equivalently,

    α(M)‖z*‖∞² − ((−q)+)_{i*}(1 + ‖M‖∞)‖z*‖∞ + ((−q)+)_{i*}² ≤ 0.

Thus, if ((−q)+)_{i*} ≠ 0, we obtain (11) by solving this quadratic inequality.

Concerning Corollary 3.1, we remark that the inequality (11) cannot be obtained from the inequality (7) in Theorem 3.1 by simply setting u = 0; in addition, if ((−q)+)_{i*} = 0, then z* = 0 and we already have the true solution.

From Theorem 3.1 and Corollary 3.1, we can obtain the following relative error bounds of the approximate solution to the LCP(M, q).

Theorem 3.2 Let M ∈ R^{n×n} be a P-matrix, let z* ∈ R^n denote the unique solution of (1), and let u be an arbitrary n-vector. If ((−q)+)_{i2*} ≠ 0, then

    [‖v‖∞(1 + ‖M‖∞) − √Δ1] / [|((−q)+)_{i2*}|(1 + ‖M‖∞ + √Δ2)] ≤ ‖z* − u‖∞/‖z*‖∞ ≤ [‖v‖∞(1 + ‖M‖∞) + √Δ1] / [|((−q)+)_{i2*}|(1 + ‖M‖∞ − √Δ2)],    (12)

where v = min(u, q + Mu), Δ1 = ‖v‖∞²(1 + ‖M‖∞)² − 4α(M)v_{i1*}² ≥ 0, the index i1* satisfies (u − z*)_{i1*}(M(u − z*))_{i1*} = max_{1≤i≤n} {(u − z*)_i(M(u − z*))_i} and v_{i1*} ≠ 0; Δ2 = (1 + ‖M‖∞)² − 4α(M) ≥ 0, and the index i2* satisfies z*_{i2*}(Mz*)_{i2*} = max_{1≤i≤n} {z*_i(Mz*)_i} and ((−q)+)_{i2*} ≠ 0.

Proof: From Theorem 3.1, we have

    [‖v‖∞(1 + ‖M‖∞) − √Δ1] / (2α(M)) ≤ ‖z* − u‖∞ ≤ [‖v‖∞(1 + ‖M‖∞) + √Δ1] / (2α(M)),
where v = min(u, q + Mu), Δ1 = ‖v‖∞²(1 + ‖M‖∞)² − 4α(M)v_{i1*}² ≥ 0, and i1* satisfies (u − z*)_{i1*}(M(u − z*))_{i1*} = max_{1≤i≤n} {(u − z*)_i(M(u − z*))_i} and v_{i1*} ≠ 0. From Corollary 3.1, we have

    |((−q)+)_{i2*}|(1 + ‖M‖∞ − √Δ2) / (2α(M)) ≤ ‖z*‖∞ ≤ |((−q)+)_{i2*}|(1 + ‖M‖∞ + √Δ2) / (2α(M)),

where Δ2 = (1 + ‖M‖∞)² − 4α(M) ≥ 0 and i2* satisfies z*_{i2*}(Mz*)_{i2*} = max_{1≤i≤n} {z*_i(Mz*)_i} and ((−q)+)_{i2*} ≠ 0. Combining the above two inequalities yields the inequality (12).

For Theorem 3.2, the intervals containing the relative error bounds can be obtained from Theorem 3.1 and Corollary 3.1; the details are omitted here. In the following, we give another bound estimation of the true solution z* of the LCP(M, q).
Lemma 3.1 Let M ∈ R^{n×n} be a P-matrix and let z* denote the unique solution of (1). Then

    ‖(−q)+‖∞/‖M‖∞ ≤ ‖z*‖∞ ≤ ‖(−q)+‖∞/α(M),    (13)

where (−q)+ = max{−q, 0}.

Proof: Noting that the right-hand inequality in (13) is the same as the right-hand inequality in (4), we only need to prove the left-hand inequality. Since z* is the unique solution of (1), we have Mz* + q ≥ 0, and the following inequality relation holds:

    ‖M‖∞‖z*‖∞ ≥ ‖Mz*‖∞ = ‖|Mz*|‖∞ ≥ ‖(Mz*)+‖∞ ≥ ‖(−q)+‖∞.

Therefore, the left-hand inequality in (13) follows directly.

Though we cannot compare the bounds in (11) and (13) in general, it is advisable to take the larger of the two lower bounds and the smaller of the two upper bounds as the bounds of the true solution z*; these bounds can then be used to estimate the relative error of the approximate solution. Following the proof procedure of Theorem 3.2, from Theorem 3.1 and Lemma 3.1 we obtain another relative error bound of the approximate solution to (1) as follows.

Theorem 3.3 Let M ∈ R^{n×n} be a P-matrix, let z* ∈ R^n denote the unique solution of the LCP(M, q) (1), and let u be an arbitrary real n-vector. If (−q)+ ≠ 0, then

    [‖v‖∞(1 + ‖M‖∞) − √Δ] / (2‖(−q)+‖∞) ≤ ‖z* − u‖∞/‖z*‖∞ ≤ ‖M‖∞[‖v‖∞(1 + ‖M‖∞) + √Δ] / (2α(M)‖(−q)+‖∞),    (14)
where v = min(u, q + Mu), Δ = ‖v‖∞²(1 + ‖M‖∞)² − 4α(M)v_{i*}² ≥ 0, the index i* satisfies (u − z*)_{i*}(M(u − z*))_{i*} = max_{1≤i≤n} {(u − z*)_i(M(u − z*))_i} and v_{i*} ≠ 0, and (−q)+ = max{−q, 0}.
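Given a value (or candidate values) of v_{i*}², the relative bounds (14) are also direct to evaluate. A small sketch of ours (relative_bounds_thm33 is a hypothetical helper name); when v_{i*} is unknown it can be combined with the interval idea used for (7):

    import numpy as np

    def relative_bounds_thm33(M, q, u, alpha_M, v_istar_sq):
        """Evaluate the relative error bounds (14) for a given candidate value of v_{i*}^2."""
        norm_M = np.linalg.norm(M, np.inf)
        v = np.minimum(u, M @ u + q)
        r = np.linalg.norm(v, np.inf)
        qplus = np.linalg.norm(np.maximum(-q, 0.0), np.inf)
        sqrt_delta = np.sqrt(r**2 * (1.0 + norm_M)**2 - 4.0 * alpha_M * v_istar_sq)
        lower = (r * (1.0 + norm_M) - sqrt_delta) / (2.0 * qplus)
        upper = norm_M * (r * (1.0 + norm_M) + sqrt_delta) / (2.0 * alpha_M * qplus)
        return lower, upper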
In the following, we consider a particular class of P-matrices, namely positive diagonal matrices. The proofs are similar to those of Theorems 3.1-3.3 and are omitted here. We first give a property of the quantity α(M) for such matrices.

Lemma 3.2 Let M ∈ R^{n×n} be a positive diagonal matrix. Then

    α(M) = min_{1≤i≤n} {Mii},    (15)

where Mii denotes the i-th main diagonal element of M, i = 1, 2, ..., n.

Proof: Let x ∈ R^n be an arbitrary vector with ‖x‖∞ = 1. Since ‖x‖∞ = 1 implies x_j² = 1 for some index j, we have

    max_{1≤i≤n} {x_i(Mx)_i} = max{M11 x_1², M22 x_2², ..., Mnn x_n²} ≥ min_{1≤i≤n} {Mii} > 0.

Thus,

    α(M) = min_{‖x‖∞=1} max_{1≤i≤n} {x_i(Mx)_i} ≥ min_{1≤i≤n} {Mii}.

On the other hand, we always have α(M) ≤ min_{1≤i≤n} {Mii}. So α(M) = min_{1≤i≤n} {Mii}.

Concerning this lemma, we remark that for a general P-matrix M, although we always have

    α(M) ≤ min_{1≤i≤n} {Mii}   and   α(M) ≤ δ(M) = min{λ(M_ββ) : β ⊆ {1, 2, ..., n}},

where the minimum defining δ(M) is taken over the real eigenvalues λ(M_ββ) of the principal submatrices M_ββ of M (see [23]), the conclusions

    α(M) = min_{1≤i≤n} {Mii}   or   α(M) = δ(M)

do not hold in general, even when the P-matrix M is a symmetric positive definite (but not diagonal) matrix; an example is given in Section 4. Based on Lemma 3.2, we obtain the following corollary.
Corollary 3.2 Let M ∈ R^{n×n} be a positive diagonal matrix, let z* ∈ R^n denote the unique solution of the LCP(M, q) (1), and let u be an arbitrary n-vector. Then

    [‖v1‖∞(1 + max_{1≤i≤n}{Mii}) − √Δ1] / (2 min_{1≤i≤n}{Mii}) ≤ ‖z* − u‖∞ ≤ [‖v1‖∞(1 + max_{1≤i≤n}{Mii}) + √Δ1] / (2 min_{1≤i≤n}{Mii}),
    |((−q)+)_{i2*}|(1 + max_{1≤i≤n}{Mii} − √Δ2) / (2 min_{1≤i≤n}{Mii}) ≤ ‖z*‖∞ ≤ |((−q)+)_{i2*}|(1 + max_{1≤i≤n}{Mii} + √Δ2) / (2 min_{1≤i≤n}{Mii}),
    ‖(−q)+‖∞ / max_{1≤i≤n}{Mii} ≤ ‖z*‖∞ ≤ ‖(−q)+‖∞ / min_{1≤i≤n}{Mii},

where v1 = min(u, q + Mu), Δ1 = ‖v1‖∞²(1 + max_{1≤i≤n}{Mii})² − 4 min_{1≤i≤n}{Mii}·(v1)_{i1*}² ≥ 0, the index i1* satisfies (u − z*)_{i1*}(M(u − z*))_{i1*} = max_{1≤i≤n}{(u − z*)_i(M(u − z*))_i} and (v1)_{i1*} ≠ 0; Δ2 = (1 + max_{1≤i≤n}{Mii})² − 4 min_{1≤i≤n}{Mii} ≥ 0, and the index i2* satisfies z*_{i2*}(Mz*)_{i2*} = max_{1≤i≤n}{z*_i(Mz*)_i} and ((−q)+)_{i2*} ≠ 0.

From the three inequalities of Corollary 3.2, we can obtain the following Corollaries 3.3 and 3.4.

Corollary 3.3 Let M ∈ R^{n×n} be a positive diagonal matrix, let z* ∈ R^n denote the unique solution of (1), and let u be an arbitrary real n-vector. If ((−q)+)_{i2*} ≠ 0, then

    [‖v1‖∞(1 + max_{1≤i≤n}{Mii}) − √Δ1] / [|((−q)+)_{i2*}|(1 + max_{1≤i≤n}{Mii} + √Δ2)] ≤ ‖z* − u‖∞/‖z*‖∞ ≤ [‖v1‖∞(1 + max_{1≤i≤n}{Mii}) + √Δ1] / [|((−q)+)_{i2*}|(1 + max_{1≤i≤n}{Mii} − √Δ2)],

where v1, Δ1, Δ2 and i2* are the same as in Corollary 3.2.

Corollary 3.4 Let M ∈ R^{n×n} be a positive diagonal matrix, let z* ∈ R^n denote the unique solution of the LCP(M, q) (1), and let u be an arbitrary n-vector. If (−q)+ ≠ 0, then

    [‖v‖∞(1 + max_{1≤i≤n}{Mii}) − √Δ] / (2‖(−q)+‖∞) ≤ ‖z* − u‖∞/‖z*‖∞ ≤ max_{1≤i≤n}{Mii}·[‖v‖∞(1 + max_{1≤i≤n}{Mii}) + √Δ] / (2 min_{1≤i≤n}{Mii}·‖(−q)+‖∞),

where v = min(u, q + Mu), Δ = ‖v‖∞²(1 + max_{1≤i≤n}{Mii})² − 4 min_{1≤i≤n}{Mii}·v_{i*}² ≥ 0, and i* satisfies (u − z*)_{i*}(M(u − z*))_{i*} = max_{1≤i≤n}{(u − z*)_i(M(u − z*))_i} and v_{i*} ≠ 0.

At the end of this section, we remark that the closer Δ is to 0 in Theorems 3.1-3.3, the sharper the error bounds are. The error upper bound in Theorem 3.1 is more accurate than the upper bound in Lemma 2.2. For the lower bound, we can prove that when |v|min/‖v‖∞ ≥ √3/2, where |v|min denotes the smallest nonzero |v_i|, the error lower bound in Theorem 3.1 is better than the error lower bound in Lemma 2.2. We compare the absolute error bounds of Theorem 3.1 with those of Lemma 2.2 as follows.
(i) For the upper bounds, we have

    { [‖v‖∞(1 + ‖M‖∞) + √Δ] / (2α(M)) } / { (1 + ‖M‖∞)‖v‖∞ / α(M) }
        = [1 + ‖M‖∞ + √(Δ/‖v‖∞²)] / [2(1 + ‖M‖∞)]
        ≤ [1 + ‖M‖∞ + (1 + ‖M‖∞)] / [2(1 + ‖M‖∞)] = 1.

So the absolute error upper bound presented in Theorem 3.1 is better than the absolute error upper bound given in Lemma 2.2.

(ii) For the lower bounds, noting that 0 < v_{i*}² ≤ ‖v‖∞² in Theorem 3.1, we have

    { [‖v‖∞(1 + ‖M‖∞) − √Δ] / (2α(M)) } / { ‖v‖∞ / (1 + ‖M‖∞) }
        = (1 + ‖M‖∞)[1 + ‖M‖∞ − √(Δ/‖v‖∞²)] / (2α(M))
        = (1 + ‖M‖∞) · 4α(M)(v_{i*}²/‖v‖∞²) / ( 2α(M)[1 + ‖M‖∞ + √((1 + ‖M‖∞)² − 4α(M)v_{i*}²/‖v‖∞²)] )
        = 2(1 + ‖M‖∞)(v_{i*}²/‖v‖∞²) / [1 + ‖M‖∞ + √((1 + ‖M‖∞)² − 4α(M)v_{i*}²/‖v‖∞²)]
        ∈ [ 2(1 + ‖M‖∞)(|v|min²/‖v‖∞²) / (1 + ‖M‖∞ + √((1 + ‖M‖∞)² − 4α(M)|v|min²/‖v‖∞²)),
            2(1 + ‖M‖∞) / (1 + ‖M‖∞ + √((1 + ‖M‖∞)² − 4α(M))) ].

Solving the inequality

    2(1 + ‖M‖∞)(|v|min²/‖v‖∞²) / [1 + ‖M‖∞ + √((1 + ‖M‖∞)² − 4α(M)|v|min²/‖v‖∞²)] > 1,

we obtain |v|min/‖v‖∞ ≥ √3/2. So, when |v|min/‖v‖∞ ≥ √3/2, the ratio is greater than 1, and the absolute error lower bound in Theorem 3.1 is better than the absolute error lower bound in Lemma 2.2.
4 Numerical examples

In this section, we show some numerical experiments to examine the results presented in this paper, including the absolute error bounds and the relative error bounds of the approximate solution and the value estimation of the function α(M).
Example 4.1 In this example, we examine the error bounds of the approximate solution of (1) with a P-matrix. Set the system matrix M in (1) to be

    M = [  1      0      0.01
           0.01   1     −0.02
          −0.01   0.01   1    ],

and let

    q = (−2, −1.02, 0.01)^T,  z* = (2, 1, 0)^T,  u = (2.01, 0.9, −0.01)^T.

Then z* is the unique solution of the LCP(M, q), and u is a given approximate solution whose error is to be estimated. From Theorem 3.1 and Theorem 3.3, we obtain the absolute error bounds and the relative error bounds of the approximate solution u, respectively:

    0.0792 ≤ ‖z* − u‖∞ ≤ 0.1292,   0.0385 ≤ ‖z* − u‖∞/‖z*‖∞ ≤ 0.0665.
If we apply Lemma 2.2 and Lemma 2.3 to estimate the absolute error bounds and the relative error bounds of the approximate solution, respectively, then we have the following two inequalities:

    0.0491 ≤ ‖z* − u‖∞ ≤ 0.2084,   0.0238 ≤ ‖z* − u‖∞/‖z*‖∞ ≤ 0.2315.

The true errors are

    ‖z* − u‖∞ = 0.1,   ‖z* − u‖∞/‖z*‖∞ = 0.05.
We see that the absolute error bounds (7) in Theorem 3.1 and the relative error bounds (14) in Theorem 3.3 effectively reflect the true errors, and both the absolute and the relative error bound results are better than those obtained from Lemma 2.2 and Lemma 2.3, respectively. The intervals containing the error lower bound, denoted by DomainLB, and the error upper bound, denoted by DomainUB, are shown in Table 1. From Table 1, we see that the intervals containing the error upper bounds reflect the true errors to some extent, and the upper bounds are what we actually care about in practice.
Table 1: The domains of the error bounds

             absolute error bounds                     relative error bounds
             DomainLB           DomainUB               DomainLB           DomainUB
    z*, u    [0.0005, 0.0792]   [0.1292, 0.20799]      [0.0002, 0.0385]   [0.0665, 0.1071]
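As an illustration, the quantities of Example 4.1 can be reproduced approximately with a few lines of code. Since the paper does not report the value of α(M) for this matrix, the sketch below (ours) estimates it by the sampling idea from Section 2, so the printed intervals are only approximate.

    import numpy as np

    M = np.array([[ 1.0 , 0.0 ,  0.01],
                  [ 0.01, 1.0 , -0.02],
                  [-0.01, 0.01,  1.0 ]])
    q  = np.array([-2.0, -1.02,  0.01])
    zs = np.array([ 2.0,  1.0 ,  0.0 ])          # true solution z*
    u  = np.array([ 2.01, 0.9 , -0.01])          # approximate solution to be judged

    # rough sampling estimate of alpha(M) (its exact value is not reported in the paper)
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (200_000, 3))
    X /= np.abs(X).max(axis=1, keepdims=True)
    alpha = np.max(X * (X @ M.T), axis=1).min()

    norm_M = np.linalg.norm(M, np.inf)           # = 1.03
    v = np.minimum(u, M @ u + q)                 # natural residual
    r = np.linalg.norm(v, np.inf)

    # Lemma 2.2 interval for ||z* - u||_inf
    print(r / (1 + norm_M), (1 + norm_M) * r / alpha)

    # Theorem 3.1 interval (z* is known here, so i* and v_{i*} can be formed exactly)
    d = u - zs
    istar = int(np.argmax(d * (M @ d)))
    delta = max(r**2 * (1 + norm_M)**2 - 4 * alpha * v[istar]**2, 0.0)
    print((r * (1 + norm_M) - np.sqrt(delta)) / (2 * alpha),
          (r * (1 + norm_M) + np.sqrt(delta)) / (2 * alpha))

    print(np.linalg.norm(zs - u, np.inf))        # true error: 0.1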
Example 4.2 In this example, we estimate the bounds of the true solution by Lemma 2.1, Corollary 3.1 and Lemma 3.1, respectively. Let the matrix M and the vector q be the same as in Example 4.1, so that z* is the unique solution of the LCP(M, q). We then have the following three inequalities:

    0.9002 ≤ ‖z*‖∞ ≤ 2.0595,   1.5896 ≤ ‖z*‖∞ ≤ 2.5912,   1.9417 ≤ ‖z*‖∞ ≤ 2.0595.

The three inequalities come from Lemma 2.1, Corollary 3.1 and Lemma 3.1, respectively, and the true solution satisfies ‖z*‖∞ = 2. We see that the third result is the best one, and that the second result improves on the first in the lower bound but not in the upper bound. In addition, from Corollary 3.1, the intervals containing the lower and upper bounds of the solution z* are

    DomainLB = [0.8107, 1.5896],   DomainUB = [1.3215, 2.5912].

These two intervals also reflect the bounds of the true solution to some extent.
Example 4.3 In this example, we consider a higher-dimensional case. Set the matrix M in the LCP(M, q) to be the n × n matrix

    M = [  2   1   0   ···   0
          −1   2   0   ···   0
           0   0   1   ···   0
           ⋮   ⋮   ⋮   ⋱    ⋮
           0   0   0   ···   1 ]

with order n = 1000; then M is an H+-matrix. Based on Proposition 4 in [23] and the bound α(M) ≤ min_{1≤i≤n}{Mii}, we obtain α(M) = 1. We set the true solution of the LCP(M, q) to be z* = (1, 2, 1, ..., 1, 2)^T and q = −Mz*. We solve the LCP(M, q) by the modulus-based Gauss-Seidel iteration method (MGS) presented in [4] with the initial iteration vector x0 = (1, 0, 1, 0, ..., 1, 0)^T and the stopping criterion ‖min(z_k, Mz_k + q)‖ < 10⁻⁵. We estimate the absolute error of the numerical solution z_k, denoting the true error ‖z_k − z*‖∞ by True Bound, the estimated lower and upper error bounds generated from (7) by Lower Bound_T and Upper Bound_T, respectively, and the upper bound generated from Lemma 2.2 by Upper Bound_L. The endpoints of the interval containing the upper bound in Theorem 3.1 are denoted by Lower Bound_B− and Upper Bound_B+, respectively. The results are shown in Figure 1.
[Figure 1 consists of two panels, (a) and (b), plotting the error-bound estimates against the MGS iteration steps: panel (a) shows True Bound, Lower Bound_T, Upper Bound_T and Upper Bound_L; panel (b) shows True Bound, Lower Bound_B−, Upper Bound_B+ and Upper Bound_L.]
Figure 1: The estimated error bounds for the approximate solution z_k.

In Figure 1, the horizontal axis shows the iteration steps of the MGS iteration method. From (a), we can see that the true error of the approximate solution lies in the interval given in Theorem 3.1, and that the upper bound in Theorem 3.1 is better than the upper bound given in Lemma 2.2. From (b), we can see that the upper bound given in Theorem 3.1 coincides with the lower endpoint B− of the interval containing the upper bound in Theorem 3.1, and that the upper endpoint B+ of that interval is also better than the upper bound given in Lemma 2.2.
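The MGS method used above is taken from [4]; for completeness, here is a minimal sketch of ours of a modulus-based Gauss-Seidel iteration in that spirit. The splitting, the choice Ω = D (the diagonal of M) and γ = 2 are illustrative assumptions, not necessarily the settings used for Figure 1; convergence conditions are discussed in [4]. The alternating-entry z* below is our reading of (1, 2, 1, ..., 1, 2).

    import numpy as np

    def mgs_lcp(M, q, x0, tol=1e-5, max_iter=500, gamma=2.0):
        """Sketch of a modulus-based Gauss-Seidel iteration for LCP(M, q):
        write M = D - L - U (D diagonal, -L strictly lower, -U strictly upper),
        take Omega = D, and repeatedly solve the lower-triangular system
            (Omega + D - L) x_{k+1} = U x_k + (Omega - M) |x_k| - gamma * q,
        recovering z_{k+1} = (|x_{k+1}| + x_{k+1}) / gamma."""
        D = np.diag(np.diag(M))
        L = -np.tril(M, -1)                  # so that M = D - L - U
        U = -np.triu(M, 1)
        Omega = D.copy()
        A = Omega + D - L                    # lower-triangular iteration matrix
        x = x0.astype(float).copy()
        for k in range(max_iter):
            x = np.linalg.solve(A, U @ x + (Omega - M) @ np.abs(x) - gamma * q)
            z = (np.abs(x) + x) / gamma
            if np.linalg.norm(np.minimum(z, M @ z + q)) < tol:
                break
        return z, k + 1

    # Data in the spirit of Example 4.3
    n = 1000
    M = np.eye(n)
    M[0, 0], M[0, 1], M[1, 0], M[1, 1] = 2.0, 1.0, -1.0, 2.0
    z_star = np.ones(n); z_star[1::2] = 2.0
    q = -M @ z_star
    x0 = np.ones(n); x0[1::2] = 0.0
    z, its = mgs_lcp(M, q, x0)
    print(its, np.linalg.norm(z - z_star, np.inf))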
Example 4.4 In [23], the authors provided an example to show that the function α(M) can tend to 0 as t → ∞, since α(M) ≤ 1/t² for the P-matrix

    M = [ 1  t
          0  1 ],

which is not symmetric. In this example, we compare the value of α(M) with the smallest eigenvalue and the smallest principal diagonal element of a symmetric matrix. Let M be

    M = [ 1  t
          t  2 ].

The eigenvalues of M are λ_{1,2} = (3 ± √(1 + 4t²))/2, and if we take t ∈ (−√2, √2), then M is a symmetric positive definite matrix and hence also a P-matrix. Taking the particular vector x = (1, −2t/3)^T, which satisfies ‖x‖∞ = 1 for t ∈ (−√2, √2), we have

    α(M) = min_{‖x‖∞=1} max_{1≤i≤2} {x_i(Mx)_i} = min_{‖x‖∞=1} max{x_1² + tx_1x_2, 2x_2² + tx_1x_2}
         ≤ max{1 − (2/3)t², 2(2t/3)² − (2/3)t²}.

So, if we set t = 1, then α(M) ≤ 1/3 < λ1 = (3 − √5)/2 < 1. From this example, we see that the function α(M) can be strictly less than both the smallest principal diagonal element and the smallest eigenvalue of a symmetric positive definite matrix M.
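A quick numerical check of this comparison (our own sketch, reusing the sampling idea from Section 2) for the matrix with t = 1:

    import numpy as np

    M = np.array([[1.0, 1.0],
                  [1.0, 2.0]])                            # the symmetric matrix with t = 1
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (500_000, 2))
    X /= np.abs(X).max(axis=1, keepdims=True)             # sample points with ||x||_inf = 1
    alpha_upper = np.max(X * (X @ M.T), axis=1).min()     # sampled upper estimate of alpha(M)

    print(alpha_upper)                                    # should come out at most about 1/3
    print(np.diag(M).min())                               # smallest diagonal entry: 1
    print(np.linalg.eigvalsh(M).min())                    # smallest eigenvalue: (3 - sqrt(5))/2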
5 Conclusions and remarks

In this paper, we have considered the error bounds of the approximate solution to the LCP(M, q) with a P-matrix M. By utilizing the function α(M), we presented some new results, and the numerical experiments show the accuracy and effectiveness of these results. Nevertheless, some problems remain to be considered: i) Is there a good method to calculate the value of α(M) for some types of P-matrices? ii) Does α(M) equal min_{‖x‖∞=1, x≥0} max_{1≤i≤n} {x_i(Mx)_i} for some types of P-matrices? Meanwhile, methods for estimating the error bounds that go beyond α(M) will also be explored in the future.
Acknowledgements This work was partially supported by Zhaoqing University Research Program # 611-612279 and by the NNSFC under grant number # 11971475. The author Qiao also thanks the UT President Endowed Professorship (Project # 450000123) for its partial support. Both authors are also thankful to Professors Hengjun Zhao and Hongyin Shi for their fruitful discussions during Dr. Fang's visit to UTRGV, where he presented this work in the SMSS Signal and Data seminar.
References

[1] Ahn B H, Solution of nonsymmetric linear complementarity problems by iterative methods, Journal of Optimization Theory and Applications, 1981, 33(2): 175-185.
[2] Bai Z Z, Evans D J, Matrix multisplitting relaxation methods for linear complementarity problems, International Journal of Computer Mathematics, 1997, 63(3-4): 309-326.
[3] Bai Z Z, Evans D J, Chaotic iterative methods for the linear complementarity problems, Journal of Computational and Applied Mathematics, 1998, 96(2): 127-138.
[4] Bai Z Z, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numerical Linear Algebra with Applications, 2010, 17: 917-933.
[5] Cottle R W, Dantzig G B, Complementary pivot theory of mathematical programming, Linear Algebra and Its Applications, 1968, 1: 103-125.
[6] Cottle R W, Pang J S, Stone R E, The Linear Complementarity Problem, Academic Press, New York, 1992.
[7] Chen X, Xiang S, Computation of error bounds for P-matrix linear complementarity problems, Mathematical Programming, 2006, 106(3): 513-525.
[8] Chen X, Xiang S, Perturbation bounds of P-matrix linear complementarity problems, SIAM Journal on Optimization, 2007, 18(4): 1250-1265.
[9] Chen T, Li W, Wu X, Vong S, Error bounds for linear complementarity problems of MB-matrices, Numerical Algorithms, 2015, 70(2): 341-356.
[10] Dai P F, Lu C J, Li Y T, New error bounds for the linear complementarity problem with an SB-matrix, Numerical Algorithms, 2013, 64(4): 741-757.
[11] Ferris M C, Pang J S, Complementarity and variational problems: State of the art, SIAM, Philadelphia, Pennsylvania, 1997.
[12] Fang X, Wei C, The general modulus-based Jacobi iteration method for linear complementarity problems, Filomat, 2015, 29(8): 1821-1830.
[13] García-Esnaola M, Peña J M, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra and Its Applications, 2010, 433(5): 956-964.
[14] Harker P T, Pang J S, Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Mathematical Programming, 1990, 48(1-3): 161-220.
[15] Hadjidimos A, Lapidakis M, Tzoumas M, On iterative solution for linear complementarity problem with an H+-matrix, SIAM Journal on Matrix Analysis and Applications, 2012, 33(1): 97-110.
[16] Murty K G, On the number of solutions to the complementarity problem and spanning properties of complementary cones, Linear Algebra and Its Applications, 1972, 5: 65-108.
[17] Kojima M, Megiddo N, Ye Y, An interior point potential reduction algorithm for the linear complementarity problem, Mathematical Programming, 1992, 54(1-3): 267-279.
[18] Lemke C E, Bimatrix equilibrium points and mathematical programming, Management Science, 1965, 11: 681-689.
[19] Li W, Zheng H, Some new error bounds for linear complementarity problems of H-matrices, Numerical Algorithms, 2013, 67(2): 257-269.
[20] Mangasarian O L, Shiau T H, Error bounds for monotone linear complementarity problems, Springer-Verlag, New York, 1986.
[21] Murty K G, Linear Complementarity, Linear and Nonlinear Programming, Heldermann, 1988.
[22] Mangasarian O L, Ren J, New improved error bounds for the linear complementarity problem, Mathematical Programming, 1994, 66(1-3): 241-255.
[23] Mathias R, Pang J S, Error bounds for the linear complementarity problem with a P-matrix, Linear Algebra and Its Applications, 1990, 132: 123-136.
[24] Wang F, Sun D, New error bound for linear complementarity problems for B-matrices, Linear and Multilinear Algebra, 2017(4): 1-12.
[25] Wang Z, Yuan Y, et al., Componentwise error bounds for linear complementarity problems, IMA Journal of Numerical Analysis, 2018, 31(1): 348-357.
[26] Zhang L L, Two-stage multisplitting iteration methods using modulus-based matrix splitting as inner iteration for linear complementarity problems, Journal of Optimization Theory and Applications, 2014, 160(1): 189-203.