Linear Algebra and its Applications 584 (2020) 394–408
A two-level additive Schwarz method for a kind of tensor complementarity problem

Shui-Lian Xie*, Hong-Ru Xu

School of Mathematics, Jiaying University, Meizhou, 514015, China
Article history: Received 27 October 2016; Accepted 23 September 2019; Available online 26 September 2019. Submitted by E. Tyrtyshnikov.

MSC: 90C33; 65K15; 65F99

Abstract. In this paper, we present a two-level additive Schwarz method for a kind of tensor complementarity problem (TCP). The method is proved to converge monotonically and to reach the solution within finitely many steps. We report some preliminary numerical results to illustrate the efficiency of the proposed method. © 2019 Elsevier Inc. All rights reserved.

Keywords: Tensor complementarity problem; Z-tensor; Two-level; Convergence
1. Introduction

The tensor complementarity problem (TCP), first introduced by Song and Qi [21], is a direct and natural extension of the linear complementarity problem (LCP). The TCP has many applications, such as nonlinear compressed sensing, DNA microarrays, n-person noncooperative games, and so on; see for example [12,18] and the references therein.

* Corresponding author. E-mail address: [email protected] (S.-L. Xie).
https://doi.org/10.1016/j.laa.2019.09.025
0024-3795/© 2019 Elsevier Inc. All rights reserved.

We denote by T(m, n) the set of all tensors of order m and dimension n. A tensor A ∈ T(m, n) has the form

A = [a_{i1 i2 ··· im}],   a_{i1 i2 ··· im} ∈ R,   1 ≤ i1, i2, ..., im ≤ n.
In this paper, we consider the following tensor complementarity problem: find a point x ∈ R^n such that

f(x) = Ax^{m-1} - q ≥ 0,   x ≥ 0,   x^T f(x) = 0,    (1.1)

where q ∈ R^n and A ∈ T(m, n). Here Ax^{m-1} is an n-dimensional vector whose ith component is given by [20]

(Ax^{m-1})_i = Σ_{i2,...,im=1}^{n} a_{i i2 ··· im} x_{i2} x_{i3} ··· x_{im}.
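As a concrete illustration (not part of the original paper), the component formula above can be evaluated with a tensor contraction; the Python/NumPy sketch below computes Ax^{m-1} for a 4th-order tensor and checks it against the unit tensor I, for which (Ix^3)_i = x_i^3.

```python
import numpy as np

def apply_tensor(A, x):
    """Compute (A x^{m-1})_i = sum_{i2,...,im} a_{i i2...im} x_{i2}...x_{im}
    for a 4th-order tensor A (m = 4) via a tensor contraction."""
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

# Sanity check with the unit tensor I: (I x^3)_i = x_i^3.
n = 3
I = np.zeros((n, n, n, n))
for i in range(n):
    I[i, i, i, i] = 1.0
x = np.array([1.0, 2.0, 3.0])
y = apply_tensor(I, x)   # componentwise cubes of x
```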
Recently, the TCP has attracted considerable attention and has been studied in depth. Most of the work focuses on the qualitative behavior of solutions to the TCP, such as the structure of the solution set, global uniqueness and solvability, error bounds, and so on; we refer the reader to [2,5,6,10,13,21–24,26] for details. By contrast, studies of efficient and powerful numerical methods for the TCP are still scarce. In [27], a sequential mathematical programming method was proposed for finding the least solution to a TCP; under appropriate conditions, the generated sequence was proved to converge monotonically to the least solution of the problem. The work [28] is concerned with the TCP with a positive semi-definite Z-tensor: under the assumption that the problem has a solution at which strict complementarity holds, the problem was shown to be equivalent to a system of lower-dimensional tensor equations. In [11], a Kojima–Megiddo–Mizuno type continuation method was introduced for solving TCPs.

It is well known that numerical algorithms for the LCP are plentiful, including the domain decomposition method [1], the semi-smooth Newton method [9], the active set method [14], modulus-based matrix splitting methods [3,4], and so on. In [25], a variant of the Schwarz algorithm, called the two-level Schwarz algorithm, was proposed for the solution of a kind of linear obstacle problem; numerical results show that the method is very effective. The method has since been extended to other problems: for example, [16] and [29] extended it to certain variational inequalities and nonlinear complementarity problems. In this paper, motivated by this recent work, we extend the two-level Schwarz algorithm for the LCP to a kind of TCP and establish its convergence.

The rest of the paper is organized as follows. In Section 2, we present some preliminaries and the two-level additive Schwarz method. We establish the convergence theory in Section 3. Finally, we present some simple numerical results.
2. The two-level additive Schwarz method

First, we give some definitions. The diagonal of a tensor A consists of the entries a_{ii···i} with i = 1, 2, ..., n; the other entries are called off-diagonal. We call a tensor the unit tensor, denoted by I [7], if its diagonal entries are all 1 and its off-diagonal entries are all 0. A tensor is said to be nonnegative if all of its entries are nonnegative. We denote N = {1, 2, ..., n}. Let I ⊂ N with |I| = r. Denote by (Ax^{m-1})_I the r-dimensional subvector of Ax^{m-1} whose elements are (Ax^{m-1})_i, i ∈ I. Similarly, denote by q_I the r-dimensional subvector of q whose elements are q_i, i ∈ I.

Definition 2.1 ([31,7]). A tensor A ∈ T(m, n) is called a Z-tensor if all its off-diagonal entries are nonpositive; equivalently, A can be written as A = sI - B, where s > 0 and B is a nonnegative tensor (B ≥ 0).

Definition 2.2 ([7]). We call a Z-tensor A = sI - B (B ≥ 0) an M-tensor if s ≥ ρ(B), where ρ(B) = max{|λ| : λ is an eigenvalue of B}. An M-tensor is called nonsingular if s > ρ(B).

Definition 2.3 ([27]). A tensor A ∈ T(m, n) is called strongly monotone over R_+^n if, for any x, y ∈ R_+^n,

A(x^{m-1} - y^{m-1}) ≥ 0   =⇒   x ≥ y,
where the vector inequality x ≥ y is understood componentwise.

Remark 2.1. It is easy to verify that even-order diagonal tensors with positive diagonal entries are strongly monotone tensors. We now give a non-diagonal strongly monotone Z-tensor.

Example 2.1. Let A be the 4th-order 4-dimensional tensor with entries

a_{1111} = a_{4444} = 1,
a_{2222} = a_{3333} = 2,
a_{2114} = a_{2141} = a_{2411} = -1/3,
a_{3441} = a_{3144} = a_{3414} = -1/3,

and all other entries zero. It is easy to verify that A is a strongly monotone Z-tensor over R_+^4. In fact, one can also check that an even-order tensor A = I - εξ, with ξ the tensor of all ones and ε > 0 small enough, is strongly monotone.
Remark 2.2. A tensor A is called monotone if Ax^{m-1} ≥ 0 implies x ≥ 0. An even-order monotone Z-tensor is a nonsingular M-tensor [7], and hence a P-tensor [6]. The concept of a strongly monotone tensor is stronger than that of a monotone tensor; in the case m = 2, the two are equivalent. Hence, a strongly monotone Z-tensor is always a P-tensor.
In the remainder of this paper, unless otherwise specified, we always assume that the tensor A ∈ T(m, n) in the TCP (1.1) is a strongly monotone Z-tensor over R_+^n. We first prove the following interesting property of strongly monotone Z-tensors.

Lemma 2.1. Suppose A is a strongly monotone Z-tensor over R_+^n, and the sets I, J satisfy I ∪ J = N and I ∩ J = ∅. If the vectors y, z ∈ R_+^n satisfy y_J ≤ z_J and (Ay^{m-1})_I ≤ (Az^{m-1})_I, then y ≤ z.

Proof. Let J' = {i ∈ N : y_i ≤ z_i} and I' = N \ J'. Without loss of generality, assume I' = {1, 2, ..., k} and J' = {k+1, ..., n}. Suppose that I' is not empty. By the definition of J', we have J ⊂ J' and I' ⊂ I. More precisely,

(Ay^{m-1})_{I'} ≤ (Az^{m-1})_{I'},   y_{I'} > z_{I'},   y_{J'} ≤ z_{J'}.    (2.1)

Since A is a Z-tensor, by (2.1) we have

(A(y_{I'}, z_{J'})^{m-1})_{I'} ≤ (A(y_{I'}, y_{J'})^{m-1})_{I'} = (Ay^{m-1})_{I'} ≤ (Az^{m-1})_{I'} = (A(z_{I'}, z_{J'})^{m-1})_{I'}.    (2.2)

By y_{I'} > z_{I'}, we have

(A(y_{I'}, z_{J'})^{m-1})_{J'} ≤ (A(z_{I'}, z_{J'})^{m-1})_{J'}.    (2.3)

Combining (2.2) with (2.3), we obtain A(y_{I'}, z_{J'})^{m-1} ≤ A(z_{I'}, z_{J'})^{m-1}. By the strong monotonicity of A, this yields y_{I'} ≤ z_{I'}, which contradicts (2.1). Thereby I' = ∅ and J' = N, which completes the proof. □

By the above lemma, we can prove the following lemma.

Lemma 2.2. Let A be a strongly monotone Z-tensor. Then the solution of (1.1) is the least element of the feasible set of the problem, defined by D = {x : Ax^{m-1} - q ≥ 0, x ≥ 0}.

Proof. First, we show that for each x ≠ 0 in R_+^n,

max_{1≤i≤n} x_i (Ax^{m-1})_i > 0.

Assume, to the contrary, that there exists x ≠ 0 in R_+^n such that
max_{1≤i≤n} x_i (Ax^{m-1})_i ≤ 0.    (2.4)

Let I = {i ∈ N : x_i = 0} and J = {i ∈ N : x_i > 0}. By (2.4), we have (Ax^{m-1})_i ≤ 0 for i ∈ J. Let y = 0; then

y_I = x_I,   (Ay^{m-1})_J ≥ (Ax^{m-1})_J.

Therefore, by Lemma 2.1, we have y ≥ x, which is a contradiction. Hence, for each x ≠ 0 in R_+^n, max_{1≤i≤n} x_i (Ax^{m-1})_i > 0.
Then, by Corollary 3.7 in [19], there exists a solution to (1.1).

Let x̄ be the solution of (1.1). It is clear that x̄ ∈ D. Denote I = {i ∈ N : x̄_i = 0} and J = N \ I = {i ∈ N : x̄_i > 0}. By the definition of D, the following inequality holds for any x ∈ D:

x_i ≥ x̄_i = 0,   ∀i ∈ I.

We also have (Ax̄^{m-1})_J - q_J = 0 and

(Ax^{m-1})_J - q_J ≥ 0,   ∀x ∈ D.

The last two relations yield

(Ax^{m-1})_J ≥ (Ax̄^{m-1})_J,   ∀x ∈ D.

It then follows from Lemma 2.1 that x ≥ x̄. The proof is complete. □
Definition 2.4 ([15]). Let A ∈ T(m, n) and α ⊂ N with |α| = r. A principal subtensor A_α of the tensor A with index set α is the mth-order r-dimensional subtensor of A consisting of the r^m elements

A_α = [a_{i1 ··· im}],   i1, ..., im ∈ α.

Remark 2.3. It is easy to check that if A is a strongly monotone Z-tensor, then all principal subtensors of A are strongly monotone Z-tensors. Indeed, without loss of
generality, suppose A_α is a principal subtensor of A with α = {1, 2, ..., k}. Suppose the k-dimensional vectors x_α, y_α satisfy A_α(x_α^{m-1} - y_α^{m-1}) ≥ 0. Let x = (x_α, 0_{N\α}) and y = (y_α, 0_{N\α}). Noting that (Ax^{m-1})_α = A_α x_α^{m-1} and (Ay^{m-1})_α = A_α y_α^{m-1}, we have

(Ax^{m-1})_α - (Ay^{m-1})_α ≥ 0,   x_{N\α} = y_{N\α} = 0_{N\α}.

Hence, by Lemma 2.1, we have x ≥ y, and then x_α ≥ y_α. Hence A_α is strongly monotone.

For the sake of convenience, we introduce two operators. Let I, J be a nonoverlapping decomposition of N; that is, N = I ∪ J and I ∩ J = ∅. For any v ∈ R^n, we introduce the nonlinear problem of finding w ∈ R^n such that

w_I = v_I,   (Aw^{m-1})_J - q_J = 0.    (2.5)

We denote the nonlinear system (2.5) in operator form by

w = G_J(v).    (2.6)

Next, we introduce the problem of finding w ∈ R^n such that

w_I = v_I,   min{(Aw^{m-1})_J - q_J, w_J} = 0.    (2.7)

We denote the nonlinear problem (2.7) in operator form by

w = T_J(v).    (2.8)
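As a concrete illustration (not from the paper), the operator G_J can be realized numerically with Newton's method, as Remark 2.5 later suggests. The sketch below assumes a 4th-order tensor that is symmetric in its last three indices, so that the Jacobian of w ↦ Aw^3 is 3Aw^2; this symmetry assumption, the positive initial guess, and all names are choices of this sketch.

```python
import numpy as np

def G(A, q, v, J, iters=100, tol=1e-12):
    """w = G_J(v): w_I = v_I on I = N \\ J and (A w^3 - q)_J = 0 on J,
    solved by Newton's method. Assumes A is symmetric in its last three
    indices, so the Jacobian of w -> A w^3 is 3 * A w^2."""
    J = np.asarray(J)
    w = v.astype(float).copy()
    w[J] = 1.0                                        # positive initial guess
    for _ in range(iters):
        F = np.einsum('ijkl,j,k,l->i', A, w, w, w) - q
        if np.linalg.norm(F[J]) < tol:
            break
        Jac = 3.0 * np.einsum('ijkl,k,l->ij', A, w, w)
        w[J] -= np.linalg.solve(Jac[np.ix_(J, J)], F[J])
    return w

# With the unit tensor, (I w^3)_i = w_i^3, so G_N(0) returns the cube roots of q.
n = 3
I = np.zeros((n, n, n, n))
for i in range(n):
    I[i, i, i, i] = 1.0
q = np.array([1.0, 8.0, 27.0])
w = G(I, q, np.zeros(n), [0, 1, 2])
```

With J a proper subset, the components outside J stay fixed at v, matching (2.5).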
Before deriving the two-level additive Schwarz method for problem (1.1), we first present an additive Schwarz method with two subdomains, which is an extension of the corresponding method for the LCP; see for example [30].

Method 2.1 (additive Schwarz method). Let I and J be a decomposition of N, i.e., I ∪ J = N. For k = 0, 1, ..., do the following two steps until convergence.

Step 1: Solve the following two subproblems in parallel:

x^{k,1} = T_I(x^k),    (2.9)

x^{k,2} = T_J(x^k).    (2.10)

Step 2: Let x^{k+1} = min(x^{k,1}, x^{k,2}), where 'min' is taken componentwise.

The following theorem gives the convergence of Method 2.1.
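To make the two steps concrete, the following sketch (not part of the paper) instantiates Method 2.1 for the matrix case m = 2, where (1.1) is an LCP with an M-matrix; the subproblem operators T_I, T_J are realized by projected Gauss–Seidel, and the matrix, data, and iteration counts are illustrative choices.

```python
import numpy as np

def T(A, q, v, idx, sweeps=500):
    """w = T_idx(v): solve min{(A w - q)_idx, w_idx} = 0 with w = v off idx,
    by projected Gauss-Seidel (convergent here since A is an M-matrix)."""
    w = v.copy()
    for _ in range(sweeps):
        for i in idx:
            r = q[i] - A[i] @ w + A[i, i] * w[i]
            w[i] = max(0.0, r / A[i, i])
    return w

def additive_schwarz(A, q, x0, I, J, steps=60):
    """Method 2.1: two subdomain solves 'in parallel', then the
    componentwise min of Step 2."""
    x = x0.copy()
    for _ in range(steps):
        x1 = T(A, q, x, I)        # x^{k,1} = T_I(x^k), (2.9)
        x2 = T(A, q, x, J)        # x^{k,2} = T_J(x^k), (2.10)
        x = np.minimum(x1, x2)    # x^{k+1} = min(x^{k,1}, x^{k,2})
    return x

# A tridiagonal M-matrix (a strongly monotone Z-tensor with m = 2).
A = np.array([[ 2.0, -0.5,  0.0,  0.0],
              [-0.5,  2.0, -0.5,  0.0],
              [ 0.0, -0.5,  2.0, -0.5],
              [ 0.0,  0.0, -0.5,  2.0]])
q = np.array([1.0, -1.0, 2.0, -2.0])
x0 = 10.0 * np.ones(4)            # x0 is feasible: x0 >= 0, A x0 - q >= 0
x = additive_schwarz(A, q, x0, I=[0, 1, 2], J=[1, 2, 3])
```

Starting from the feasible x0, the iterates decrease monotonically within D, in line with Theorem 2.3.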
Theorem 2.3. Let the sequence {x^k} be generated by Method 2.1. If x^0 ∈ D, then for k = 0, 1, ..., we have
(a) x^{k,i} ≤ x^k, i = 1, 2, and then x^{k+1} ≤ x^k;
(b) x^{k,i} ∈ D, and then x^{k+1} ∈ D;
(c) lim_{k→∞} x^k = x, where x is the solution of problem (1.1).

Proof. We first prove that (a) and (b) hold for k = 0. By the assumption x^0 ∈ D, we have x^0 ≥ 0 and

A(x^0)^{m-1} - q ≥ 0.    (2.11)

The feasible set of subproblem (2.9) is

D_1 = {x : (Ax^{m-1} - q)_I ≥ 0,   x_I ≥ 0,   x_J = x^0_J}.

Hence it is obvious that x^0 belongs to D_1. On the other hand, since A_I is a strongly monotone Z-tensor, by Lemma 2.2, x^0 ≥ x^{0,1}. Since x^0 ∈ D and x^{0,1} is the solution of problem (2.9) with k = 0, we have

x^{0,1} ≥ 0,   (A(x^{0,1})^{m-1} - q)_I ≥ 0.    (2.12)

Noting that A is a Z-tensor, we have

(A(x^{0,1})^{m-1} - q)_J ≥ (A(x^0)^{m-1} - q)_J ≥ 0.    (2.13)

Combining (2.12) and (2.13), we have x^{0,1} ∈ D. Similarly, x^0 ≥ x^{0,2} and x^{0,2} ∈ D hold. Since x^1 = min(x^{0,1}, x^{0,2}), we immediately have x^1 ≤ x^0. We now verify x^1 ∈ D. For any j ∈ N, there exists i ∈ {1, 2} such that x^1_j = x^{0,i}_j, since x^1 = min{x^{0,1}, x^{0,2}}. Since A is a Z-tensor and x^{0,i} ∈ D, we have (A(x^1)^{m-1})_j - q_j ≥ (A(x^{0,i})^{m-1})_j - q_j ≥ 0. Hence A(x^1)^{m-1} - q ≥ 0. Together with x^1 ≥ 0, this shows x^1 ∈ D. By the principle of induction, (a) and (b) hold for all k.

Since 0 ≤ x^{k+1} ≤ x^k, the sequence {x^k} converges. Let x^k → x̃ as k → ∞. Taking limits in (2.9) and (2.10), we have, by N = I ∪ J, that

x̃ ≥ 0,   Ax̃^{m-1} - q ≥ 0,   x̃^T(Ax̃^{m-1} - q) = 0.

That is, x̃ is a solution of problem (1.1), which implies x̃ = x. This completes the proof. □

In what follows, we let N^0 = {j ∈ N : x_j = 0} and N^+ = {j ∈ N : x_j > 0}, where x is the solution of problem (1.1). If x^0 ∈ D, then the sequence {x^k} generated by Method 2.1 stays in D by Theorem 2.3. Moreover, if we define the coincidence set of x^k as

I^k = {j ∈ N : x^k_j = 0},    (2.14)
we have, by the monotonicity of {x^k},

I^k ⊆ I^{k+1} ⊆ N^0,   k = 0, 1, ...,

which gives inner approximations of the coincidence set N^0. Before deriving outer approximations of N^0, we prove the following simple lemmas.

Lemma 2.4. Let I ⊆ N^0, let x be the solution of problem (1.1), and let

x̂ = T_{N\I}(0).    (2.15)

Then x̂ = x.

Proof. Since I ⊆ N^0, it is easy to see that the solution x of problem (1.1) satisfies x = T_{N\I}(0). The solution x̂ of problem (2.15) satisfies x̂_I = 0 and min{(Ax̂^{m-1} - q)_{N\I}, x̂_{N\I}} = 0. Therefore, x̂_{N\I} satisfies

x̂_{N\I} ≥ 0,   (Ax̂^{m-1} - q)_{N\I} ≥ 0,   x̂_{N\I}^T (Ax̂^{m-1} - q)_{N\I} = 0,

where x̂_I = 0. Therefore, by Lemma 2.2, x̂_{N\I} = x_{N\I}, and then x̂ = x. The proof is then completed. □

Lemma 2.5. For v ∈ D, let

w = G_{N\I(v)}(0),    (2.16)

where I(v) denotes the coincidence set of v, i.e., I(v) = {i ∈ N : v_i = 0}. Then
(a) I(v) ⊆ N^0;
(b) if x is the solution of problem (1.1), then w ≤ x;
(c) if v, v̂ ∈ D with v ≥ v̂, and w, ŵ are the solutions of problem (2.16) for v and v̂, respectively, then w ≤ ŵ.

Proof. (a) is obvious, since v ∈ D and x is the least element of D. Problem (2.16) is equivalent to the system

w_{I(v)} = 0,   (Aw^{m-1} - q)_{N\I(v)} = 0,    (2.17)

which implies w_{I(v)} = 0 = v_{I(v)} = x_{I(v)}. Let J(v) = N \ I(v). Noting (2.17) and (Ax^{m-1} - q)_{J(v)} ≥ 0, we have (Aw^{m-1} - q)_{J(v)} = 0 ≤ (Ax^{m-1} - q)_{J(v)}. Therefore, (b) holds by Lemma 2.1.
For any v, v̂ ∈ D with v ≥ v̂, we have I(v) ⊆ I(v̂) ⊆ N^0. Let N_1 = N \ I(v̂), N_2 = I(v̂) \ I(v), N_3 = I(v). Then N = N_1 ∪ N_2 ∪ N_3 is a nonoverlapping decomposition of N. Thereby, w = G_{N\I(v)}(0) = G_{N_1∪N_2}(0), and then

(Aw^{m-1} - q)_{N_1∪N_2} = 0,   w_{N_3} = 0.    (2.18)

Similarly, since I(v̂) = N_2 ∪ N_3, we have ŵ = G_{N\I(v̂)}(0) = G_{N_1}(0) and

(Aŵ^{m-1} - q)_{N_1} = 0,   ŵ_{N_2∪N_3} = 0.    (2.19)

This together with (b) implies w_{N_2∪N_3} ≤ x_{N_2∪N_3} = 0 = ŵ_{N_2∪N_3}. On the other hand, by (2.18) and (2.19), (Aw^{m-1} - q)_{N_1} = (Aŵ^{m-1} - q)_{N_1}. Hence w ≤ ŵ by Lemma 2.1, which completes the proof. □

Lemma 2.5 leads to the following useful result.

Theorem 2.6. Let {x^k} be a sequence in D satisfying x^{k+1} ≤ x^k. Let w^k = G_{N\I^k}(0), k = 0, 1, ..., where I^k is the coincidence set of x^k defined by (2.14). Define

O^k = {j ∈ N : w^k_j ≤ 0},   L^k = N \ O^k,   k = 0, 1, ....    (2.20)

Then

I^k ⊆ I^{k+1} ⊆ N^0 ⊆ O^{k+1} ⊆ O^k,    (2.21)

(Ax^{m-1} - q)_{L^k} = 0,   k = 0, 1, ....    (2.22)

Proof. By Lemma 2.5, it is obvious that w^k ≤ w^{k+1} ≤ x. Thereby, (2.21) holds. By the definition of L^k, x_{L^k} ≥ w^k_{L^k} > 0. Therefore (2.22) holds, which completes the proof. □
By Theorem 2.3 and Theorem 2.6, we obtain an outer approximation sequence {O^k} and an inner approximation sequence {I^k}. Furthermore, we define C^k as

C^k = N \ (I^k ∪ L^k).    (2.23)
The set C^k may contain elements of both N^0 and N^+; it is therefore called the critical subset. Let

Ĉ^k = C^k ∪ H^k,    (2.24)

where H^k is a subset of N corresponding to an overlap of the subsets associated with L^k and C^k. Now we are ready to present the two-level additive Schwarz method for problem (1.1).

Method 2.2 (two-level additive Schwarz method).

1. Initialization. k := 0.
a) Choose an initial x^0 such that x^0 ∈ D. Define the coincidence set I^0 according to (2.14).
b) Solve the tensor equation

w^0 = G_{N\I^0}(0)    (2.25)

and define L^0, C^0 and Ĉ^0 according to (2.20), (2.23) and (2.24), respectively.

2. Iteration step.
a) Inner approximation (additive Schwarz method). Solve the following two subproblems in parallel:
(i) the subproblem defined by the nonlinear problem

x^{k,1} = T_{Ĉ^k}(x^k);    (2.26)

(ii) the subproblem defined by the tensor equation

x^{k,2} = G_{L^k}(x^k).    (2.27)

Let x^{k+1} = min(x^{k,1}, x^{k,2}) and define the coincidence set I^{k+1} according to (2.14).
b) Outer approximation. Solve the tensor equation

w^{k+1} = G_{N\I^{k+1}}(0).    (2.28)

If w^{k+1} ≥ 0, then stop; w^{k+1} is the solution. Otherwise, define L^{k+1} and Ĉ^{k+1} according to (2.20) and (2.24), respectively.
c) If C^{k+1} = ∅, then let x = w^{k+1} and stop; otherwise, set k := k + 1 and return to step 2.

Remark 2.4. Since all principal subtensors of a strongly monotone Z-tensor are strongly monotone Z-tensors, problem (2.26) always has a solution. There are very few results on the existence of a positive solution to a tensor equation. Ding and Wei [8] showed that when A is a nonsingular M-tensor and the right-hand side is positive, the tensor equation has a unique positive solution. Recently, Li, Guang and Wang [17] also proved the existence of a solution to a Z-tensor equation under certain conditions.
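To make the control flow of Method 2.2 concrete, the following sketch (not from the paper) instantiates it for the matrix case m = 2 with an M-matrix. It takes H^k = ∅, realizes T by projected Gauss–Seidel and G by a direct linear solve, and uses an illustrative threshold to detect the coincidence set; all of these are simplifying choices of this sketch.

```python
import numpy as np

def T(A, q, v, idx, sweeps=500):
    """w = T_idx(v): min{(A w - q)_idx, w_idx} = 0 with w = v off idx,
    via projected Gauss-Seidel (convergent for an M-matrix)."""
    w = v.copy()
    for _ in range(sweeps):
        for i in idx:
            r = q[i] - A[i] @ w + A[i, i] * w[i]
            w[i] = max(0.0, r / A[i, i])
    return w

def G(A, q, v, idx):
    """w = G_idx(v): solve (A w - q)_idx = 0 with w = v off idx."""
    w = v.copy()
    idx = list(idx)
    if not idx:
        return w
    rest = [i for i in range(len(v)) if i not in idx]
    rhs = q[idx] - A[np.ix_(idx, rest)] @ w[rest]
    w[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
    return w

def two_level(A, q, x0, max_outer=100):
    n = len(q)
    N = set(range(n))
    x = x0.copy()
    I = {j for j in range(n) if x[j] < 1e-12}          # I^0, (2.14)
    w = G(A, q, np.zeros(n), sorted(N - I))            # w^0, (2.25)
    for _ in range(max_outer):
        L = sorted(j for j in range(n) if w[j] > 0.0)  # L^k, (2.20)
        C = sorted(N - I - set(L))                     # C^k, (2.23)
        Chat = C                                       # H^k = 0, so Chat = C^k
        x1 = T(A, q, x, Chat)                          # (2.26)
        x2 = G(A, q, x, L)                             # (2.27)
        x = np.minimum(x1, x2)
        I = {j for j in range(n) if x[j] < 1e-12}      # I^{k+1}
        w = G(A, q, np.zeros(n), sorted(N - I))        # (2.28)
        if np.all(w >= -1e-10):
            return w      # step 2 b) stop (covers the C = empty case too)
    return x

# Same M-matrix example as before; x0 is feasible.
A = np.array([[ 2.0, -0.5,  0.0,  0.0],
              [-0.5,  2.0, -0.5,  0.0],
              [ 0.0, -0.5,  2.0, -0.5],
              [ 0.0,  0.0, -0.5,  2.0]])
q = np.array([1.0, -1.0, 2.0, -2.0])
sol = two_level(A, q, 10.0 * np.ones(4))
```

On this small example the coincidence set grows in finitely many outer iterations and the stopping test w ≥ 0 fires, as the finite-termination analysis of Section 3 predicts.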
Remark 2.5. Subproblem (2.26) can be solved by the PSOR method, a semi-smooth Newton method, or other known methods. Subproblems (2.25), (2.27) and (2.28) can be solved by Newton's method, which is very effective for nonlinear equations.

3. The convergence of Method 2.2

In this section, we analyze the convergence of Method 2.2. First, we introduce some lemmas.

Lemma 3.1. Let x^k ∈ D, and let the subsets L^k and Ĉ^k be defined by (2.20) and (2.24), respectively. Then

x^{k,1} = T_{Ĉ^k}(x^k) ∈ D,   x^{k,1} ≤ x^k,    (3.1)

x^{k,2} = G_{L^k}(x^k) ∈ D,   x^{k,2} ≤ x^k,    (3.2)

x^{k+1} = min(x^{k,1}, x^{k,2}) ∈ D,    (3.3)

x ≤ x^{k+1} ≤ x^k,   I^k ⊆ I^{k+1} ⊆ N^0.    (3.4)

Proof. (3.1) follows directly from Theorem 2.3. By (2.27), we have

(A(x^{k,2})^{m-1} - q)_{L^k} = 0,   x^{k,2}_{N\L^k} = x^k_{N\L^k}.    (3.5)

Since x^k ∈ D, we have (A(x^k)^{m-1} - q)_{L^k} ≥ 0. This, together with (3.5) and Lemma 2.1, yields

x^{k,2} ≤ x^k.    (3.6)

Noting that A is a Z-tensor, we have by (3.5) and (3.6) that

(A(x^{k,2})^{m-1} - q)_{N\L^k} ≥ (A(x^k)^{m-1} - q)_{N\L^k} ≥ 0.    (3.7)

It then follows from (3.5), x^k ∈ D and (2.22) that

x^{k,2}_{N\L^k} = x^k_{N\L^k} ≥ x_{N\L^k},   (A(x^{k,2})^{m-1} - q)_{L^k} = (Ax^{m-1} - q)_{L^k} = 0,    (3.8)

which gives x^{k,2} ≥ x ≥ 0 by Lemma 2.1. Together with (3.7) and (3.8), this implies x^{k,2} ∈ D. Therefore, (3.2) holds. Similarly to the proof of Theorem 2.3, we obtain (3.3) and (3.4). The proof is then completed. □
Lemma 3.2. Let x^k ∈ D. If C^k ≠ ∅, then one of the following holds:

I^{k,1} \ I^k ≠ ∅,    (3.9)

x^{k,1} = G_{Ĉ^k}(x^k),    (3.10)

where I^{k,1} is the coincidence set of x^{k,1}.

Proof. If there exists an index j ∈ C^k such that x^{k,1}_j = 0, then j ∈ I^{k,1} and j ∉ I^k by (2.23). Hence (3.9) holds. Otherwise, x^{k,1}_{C^k} > 0. By Theorem 2.6, we have x_{L^k} > 0. Noting (2.24) and x^{k,1} ≥ x, we obtain Ĉ^k ⊂ C^k ∪ L^k and then x^{k,1}_{Ĉ^k} > 0. This implies (3.10) by (2.26), and the proof is completed. □

Lemma 3.3. If there exists a k such that C^k = ∅, then w^k = x, where x is the solution of problem (1.1).

Proof. If C^k = ∅, then N = I^k ∪ L^k, and hence I^k = N^0 and L^k = N^+, since I^k ⊆ N^0, L^k ⊆ N^+ and N = N^0 ∪ N^+. Thereby, by (2.20), we have O^k = I^k = N^0, and then w^k_{N^+} = w^k_{L^k} > 0. Thus w^k = G_{N\I^k}(0) = G_{L^k}(0) = T_{N^+}(0) = T_{N\N^0}(0). Hence, by Lemma 2.4, we have w^k = x, which completes the proof. □

Now we are ready to give the main convergence theorem of this section.

Theorem 3.4. The sequence generated by the two-level additive Schwarz method (Method 2.2) converges to the solution x of problem (1.1) after a finite number of iterations.

Proof. If there exists some k such that C^k = ∅, then by Lemma 3.3 we have w^k = x. Otherwise, assume C^k ≠ ∅ for all k. By Lemma 3.2, either (3.9) or (3.10) holds. If (3.9) holds, then, noting that x^{k+1} = min(x^{k,1}, x^{k,2}), we have I^{k,1} ⊆ I^{k+1} and hence I^{k+1} \ I^k ≠ ∅. By (2.21) and the finiteness of the index set N, the case I^{k+1} \ I^k ≠ ∅ can occur only finitely many times. Without loss of generality, we may assume that I^{k+1} \ I^k = ∅ for all k, so that (3.10) holds by Lemma 3.2. In this case, I^{k+1} = I^k, which implies that the partition N = I^k ∪ L^k ∪ C^k does not change. Therefore, we may write Ĉ^k = Ĉ and L^k = L, and the inner approximation with the partitioning L^k ∪ Ĉ^k becomes a classical nonlinear Schwarz process:

x^{k,1} = T_{Ĉ}(x^k) = G_{Ĉ}(x^k),    (3.11)

x^{k,2} = G_{L}(x^k),    (3.12)

x^{k+1} = min(x^{k,1}, x^{k,2}),    (3.13)
which converges to w^0. Hence w^0 ≥ 0. This is a contradiction, since in that case Method 2.2 would already have stopped at step 2 b). Therefore, there exists some k such that w^k = x. This completes the proof. □

4. Numerical experiments

In this section, we test Method 2.2 on two problems. We implemented the method in Matlab R2014a and ran the codes on a PC with a 3.60 GHz CPU and 4.00 GB RAM.

Problem 4.1. Consider problem (1.1). We construct a 4th-order tensor A = sI - B in a way similar to Example 4.1 in [8]. Specifically, we generate a nonnegative tensor B ∈ R_+^{n×n×n×n} with entries drawn from the standard uniform distribution on (0, 1) and set the scalar

s = (1 + ε) · max_{i=1,2,...,n} (Be^3)_i + 300,   ε > 0,

where e = (1, 1, ..., 1)^T. In our experiments, we set ε = 0.01. We let q_i = -i/2 when i is odd and q_i = i/2 when i is even, i = 1, 2, ..., n. The tensor equations in Method 2.2 are solved by Newton's method, with the exit condition that the norm of the difference between two adjacent iterates is less than 10^{-4}. Subproblem (2.26) is solved by PSOR, with the exit condition that the norm of the difference between two adjacent iterates is less than 10^{-6}. The initial vector x^0 is chosen to satisfy A(x^0)^3 = e. We tested the method for different dimensions; the results are presented in Table 1. As the table shows, Method 2.2 terminated at the solution successfully in a finite number of steps.

Table 1
Different dimensions for Problem 4.1.

n     iter      n     iter      n     iter
10    1         11    2         12    2
13    3         14    3         15    5

Problem 4.2. In this problem, we let A be a tensor with seven diagonals:

a_{i,i,i,i} = 4,   i = 1, 2, ..., n,
a_{i,i-1,i,i} = a_{i,i,i-1,i} = a_{i,i,i,i-1} = -1/3,   i = 2, 3, ..., n - 1,
a_{i,i+1,i,i} = a_{i,i,i+1,i} = a_{i,i,i,i+1} = -1/3,   i = 2, 3, ..., n - 1.

The vector q, the initial vector and the subproblem solvers are the same as in Problem 4.1. We varied the dimension from 10 to 60 and found that, in all cases, the method needed only one iteration to reach the solution.
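The construction of Problem 4.1 can be sketched as follows (in Python/NumPy rather than the authors' Matlab; the dimension n and the random seed are arbitrary choices of this sketch).

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 6, 0.01

# Nonnegative random tensor B and the shift s = (1 + eps) * max_i (B e^3)_i + 300.
B = rng.random((n, n, n, n))
e = np.ones(n)
Be3 = np.einsum('ijkl,j,k,l->i', B, e, e, e)
s = (1 + eps) * Be3.max() + 300

# A = s*I - B is then a Z-tensor (indeed a nonsingular M-tensor,
# by the construction in [8]).
A = -B.copy()
idx = np.arange(n)
A[idx, idx, idx, idx] += s

# q_i = -i/2 for odd i and q_i = i/2 for even i (1-based indices).
i = np.arange(1, n + 1)
q = np.where(i % 2 == 1, -i / 2.0, i / 2.0)
```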
5. Concluding remarks

In this paper, we proposed a two-level additive Schwarz method for a kind of tensor complementarity problem with a strongly monotone Z-tensor. The method can be regarded as an extension of the corresponding method for the LCP. Using the nice properties of strongly monotone Z-tensors, we established the convergence of the method. Preliminary numerical results showed that the method reaches the solution effectively.

Declaration of competing interest

The authors declare that they have no competing interests.

Acknowledgements

The authors are grateful to the anonymous referees for their valuable comments, which helped to improve the presentation of the paper. S. Xie was supported by the Chinese NSF grants 11601188, 11371154 and 11526097. H. Xu was supported by the Chinese NSF grant 11601188 and by the training program for outstanding young teachers in Guangdong Province (Grant No. 20140202).

References

[1] L. Badea, X. Tai, J. Wang, Convergence rate analysis of a multiplicative Schwarz method for variational inequalities, SIAM J. Numer. Anal. 41 (2003) 1052–1073.
[2] X.L. Bai, Z.H. Huang, Y. Wang, Global uniqueness and solvability for tensor complementarity problems, J. Optim. Theory Appl. 170 (2016) 72–84.
[3] Z.-Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl. 17 (2010) 917–933.
[4] Z.-Z. Bai, L.-L. Zhang, Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems, Numer. Algorithms 62 (2013) 59–77.
[5] M. Che, L. Qi, Y. Wei, Positive definite tensors to nonlinear complementarity problems, J. Optim. Theory Appl. 168 (2016) 475–487.
[6] W. Ding, Z. Luo, L. Qi, P-tensors, P0-tensors, and tensor complementarity problem, preprint, arXiv:1507.06371, 2015.
[7] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl. 439 (2013) 3264–3278.
[8] W. Ding, Y. Wei, Solving multi-linear systems with M-tensors, J. Sci. Comput. 68 (2016) 689–715.
[9] F. Facchinei, J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Science and Business Media, New York, 2003.
[10] M.S. Gowda, Z. Luo, L. Qi, N. Xiu, Z-tensors and complementarity problems, preprint, arXiv:1510.07933, 2015.
[11] L. Han, A continuation method for tensor complementarity problems, J. Optim. Theory Appl. 180 (3) (2019) 949–963.
[12] Z. Huang, L. Qi, Formulating an n-person noncooperative game as a tensor complementarity problem, Comput. Optim. Appl. 66 (2017) 557–576.
[13] Z. Huang, Y. Suo, J. Wang, On Q-tensors, preprint, arXiv:1509.03088, 2015.
[14] M. Hintermüller, K. Ito, K. Kunisch, Primal-dual active set strategy as a semismooth Newton method, SIAM J. Optim. 13 (2003) 865–888.
[15] M.R. Kannan, N. Shaked-Monderer, A. Berman, Some properties of strong H-tensors and general H-tensors, Linear Algebra Appl. 476 (2015) 42–55.
[16] C.L. Li, J.P. Zeng, Two-level Schwarz method for solving variational inequality with nonlinear source terms, J. Comput. Appl. Math. 211 (2008) 67–75.
[17] D.H. Li, H. Guang, X. Wang, Finding a nonnegative solution to an M-tensor equation, preprint, arXiv:1811.11343, 2018.
[18] Z.Y. Luo, L. Qi, N.H. Xiu, The sparse solutions to Z-tensor complementarity problems, Optim. Lett. 11 (2017) 471–482.
[19] J.J. Moré, Coercivity conditions in nonlinear complementarity problems, SIAM Rev. 16 (1974) 1–16.
[20] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput. 40 (2005) 1302–1324.
[21] Y. Song, L. Qi, Properties of some classes of structured tensors, J. Optim. Theory Appl. 165 (2015) 854–873.
[22] Y. Song, L. Qi, Properties of tensor complementarity problem and some classes of structured tensors, Ann. Appl. Math. 3 (2017) 308–323.
[23] Y. Song, L. Qi, Error bound of P-tensor nonlinear complementarity problem, preprint, arXiv:1508.02005v2, 2015.
[24] Y. Song, G.H. Yu, Properties of solution set of tensor complementarity problem, J. Optim. Theory Appl. 170 (2016) 85–96.
[25] P. Tarvainen, Two-level Schwarz method for unilateral variational inequalities, IMA J. Numer. Anal. 19 (1999) 273–290.
[26] Y. Wang, Z.H. Huang, X.L. Bai, Exceptionally regular tensors and tensor complementarity problems, Optim. Methods Softw. 31 (2016) 815–828.
[27] S.L. Xie, D.H. Li, H.R. Xu, An iterative method for finding the least solution to the tensor complementarity problem, J. Optim. Theory Appl. 175 (2017) 119–136.
[28] H.R. Xu, D.H. Li, S.L. Xie, An equivalent tensor equation to the tensor complementarity problem with positive semi-definite Z-tensor, Optim. Lett. 13 (4) (2019) 685–694.
[29] H.R. Xu, J.P. Zeng, Z. Sun, Two-level additive Schwarz algorithms for nonlinear complementarity problem with an M-function, Numer. Linear Algebra Appl. 17 (2009) 599–613.
[30] J.P. Zeng, S.Z. Zhou, On monotone and geometric convergence of Schwarz methods for two-sided obstacle problems, SIAM J. Numer. Anal. 35 (1998) 600–616.
[31] L. Zhang, L. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl. 35 (2014) 437–452.