Two projection methods for the solution of Signorini problems


Applied Mathematics and Computation 326 (2018) 75–86


Shougui Zhang, School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, PR China

2000 MSC: 58E35; 65N38

Keywords: Signorini problem; Variational inequality; Projection method; Self-adaptive method; Robin boundary condition; Boundary element method

Abstract. Two iterative methods, based on projection and boundary element methods, are considered for Signorini problems. The regularized problem with the projection boundary condition is first deduced from the Signorini problem. By using the equivalence between the Signorini boundary condition and a projection fixed point problem, our methods reformulate the Signorini boundary condition as a sequence of Robin boundary conditions. The new boundary condition contains a penalty parameter ρ. The convergence speed of the first method is greatly influenced by the value of ρ, which is difficult to choose for individual problems. To improve the performance of this method, we present a self-adaptive projection method which adjusts the parameter ρ automatically in each iteration based on the iterative data. The main result of this work is a proof of convergence of the methods under mild assumptions. As the iteration process involves only the potential and its derivative on the boundary of the domain, the unknowns of the problem are computed explicitly by using the boundary element method. Both theoretical results and numerical experiments indicate the efficiency of the proposed methods. © 2018 Elsevier Inc. All rights reserved.

1. Introduction

It is well known that elliptic Signorini problems are an important and very useful class of nonlinear problems arising in physics, engineering, etc. Such problems are difficult to solve because the classical Dirichlet, Neumann and Robin boundary conditions are not known in advance on the Signorini boundary. The existence and uniqueness of the solution are established by formulating the problem as a variational inequality [5]. Since the Signorini condition is imposed on the boundary of the domain, boundary methods, such as the method of fundamental solutions (MFS) and the boundary element method (BEM), are suitable for the solution of Signorini problems. There are several efficient algorithms in conjunction with these boundary methods for the numerical solution. For example, the Signorini problem was formulated as a constrained minimization problem for the MFS [2,4,13,23,24]. The BEM seems even more suitable for the solution because the unknowns (the potential and its normal derivative) on the Signorini boundary are related directly by a linear system [1,6,14,15,18,19,28,29]. It is worth mentioning that iterative algorithms have become an efficient numerical technique for finding the approximate solution of Signorini problems [1,11,12,15,25–29,31]. Among these methods, semi-smooth Newton methods and augmented Lagrangian methods are extensively applied to variational inequalities and Signorini problems [11,12,25–27,29,31]. However, these methods have to solve a nonlinear problem in every iteration step, and the penalty parameter ρ of the methods must be sufficiently large. Using the BEM, we also developed several projection iterative methods [32–34] for the

E-mail address: [email protected]
https://doi.org/10.1016/j.amc.2018.01.004
0096-3003/© 2018 Elsevier Inc. All rights reserved.


Signorini problem, which formulate the Signorini boundary condition into a sequence of simple boundary conditions such as Dirichlet, Neumann and Robin boundary conditions. In the last two decades, a class of iterative methods, named implicit methods, has emerged for finite-dimensional variational inequalities. The basic idea is contained in [8,20–22], but the convergence depends significantly on the penalty parameter ρ. Moreover, the proper parameters differ from problem to problem and are difficult to choose. To overcome this disadvantage, some self-adaptive projection methods were used to adjust the parameter automatically [7,9,16,17]. Motivated by the above methods, we propose two projection methods for solving the Signorini problem in infinite-dimensional spaces. The methods convert the Signorini boundary condition into a sequence of Robin boundary conditions and are implemented in conjunction with the BEM [3,10,30]; thus our methods differ from other methods and only require the solution of a linear variational problem at each iteration.

The structure of this paper is organized as follows. In Section 2, we establish the equivalence between the Signorini boundary condition and a fixed point problem using the projection method, and we introduce a weak formulation for the Signorini problem. Sections 3 and 4 are devoted to the convergence analysis of the two new methods: the projection method and the self-adaptive projection method, respectively. Section 5 deals with the numerical implementation of the methods in combination with the BEM. Furthermore, the self-adaptive adjustment rule is presented for the second method. In Section 6, we present some numerical results to illustrate and compare the performance of the methods. Finally, some concluding remarks are given in Section 7.

2. Formulation of the Signorini problem

Let Ω ⊂ R² be an open and bounded domain with a smooth boundary Γ = ∂Ω which consists of three disjoint parts Γ_D, Γ_N and Γ_S, where Dirichlet, Neumann and Signorini boundary conditions are given, respectively. In this paper we consider the Signorini problem: given f ∈ L²(Ω), g ∈ H^{1/2}(Γ\Γ̄_N) and h ∈ H^{−1/2}(Γ\Γ̄_D), find u ∈ H¹(Ω) such that

−Δu = f in Ω,  (1)
u = g on Γ_D,  (2)
λ = h on Γ_N,  (3)
u ≥ g,  λ ≥ h,  (u − g)(λ − h) = 0 on Γ_S,  (4)

where λ := ∂u/∂n and n denotes the outward unit normal to the boundary. To give the weak formulation of the above problem, we introduce the following spaces of functions

H_g^1(Ω) := {v ∈ H¹(Ω) : v = g on Γ_D},
K := {v ∈ H_g^1(Ω) : v ≥ g on Γ_S}.

It is well known that the Signorini problem is then equivalent to a variational inequality

Find u ∈ K such that
∫_Ω ∇u · ∇(v − u) dx ≥ ∫_{Γ_N∪Γ_S} h(v − u) ds + ∫_Ω f(v − u) dx,  ∀v ∈ K,  (5)

or a minimization problem

Find u ∈ K such that
J(u) = min_{v∈K} J(v),  J(v) := (1/2)∫_Ω |∇v|² dx − ∫_{Γ_N∪Γ_S} hv ds − ∫_Ω fv dx.  (6)

We assume that the problem satisfies Γ_D ≠ ∅ or ∫_{Γ_N∪Γ_S} h ds < 0. Then the problem (5), or equivalently (6), admits a unique solution in K that depends continuously on the data f, g and h [5,6,12,26–28,35].

Remark 2.1. A large number of fluid mechanics and heat transfer problems can be modeled by Signorini problems. As a simplified version of a problem occurring in elasticity, the simplified Signorini problem [11,25,26] consists of finding u ∈ H¹(Ω) satisfying
−Δu + u = f in Ω,
u ≥ 0,  λ ≥ h,  u(λ − h) = 0 on Γ.


This problem has the same properties as the Signorini problem (1)–(4), so the numerical analysis of our methods in this paper extends straightforwardly to the simplified Signorini problem.
Since the main difficulty in (5) and (6) is due to the inequality constraint coming from the Signorini boundary condition (4), we transform it into a fixed point problem [11,12,29,31–34]. We introduce the projection notation [v]_+ := max(v, 0) for v ∈ R. Then the following result is obtained as in [34].
Lemma 2.1. For all ρ > 0, the boundary condition (4) is equivalent to:

u − g − [u − g − ρ(λ − h)]_+ = 0 on Γ_S.  (7)

If we define the residual function of Eq. (7) as
R_ρ(u, λ) = u − g − [u − g − ρ(λ − h)]_+ on Γ_S,  (8)

the nonsmooth projection equation (7) can be rewritten as R_ρ(u, λ) = 0. Then, we introduce a new equivalent formulation for the Signorini boundary condition (4). For a constant ω ≠ 0, R_ρ(u, λ) = 0 can also be equivalently expressed as
u + ρλ = u + ρλ − ωR_ρ(u, λ).  (9)
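For intuition, the equivalence in Lemma 2.1 can be checked pointwise: R_ρ(u, λ) vanishes exactly where the complementarity conditions in (4) hold. The following minimal sketch (in Python; the nodal arrays and sample values are illustrative assumptions, not data from the paper) evaluates the residual (8) at a few hypothetical Signorini nodes.

```python
import numpy as np

def projection_residual(u, lam, g, h, rho):
    """Pointwise residual R_rho(u, lambda) = u - g - [u - g - rho*(lambda - h)]_+ from (8)."""
    return u - g - np.maximum(u - g - rho * (lam - h), 0.0)

# Hypothetical nodal values on the Signorini boundary with g = h = 0.
g = h = 0.0
u_contact, lam_contact = np.array([0.0, 0.0]), np.array([0.3, 1.2])  # u = g, lambda >= h
u_free, lam_free       = np.array([0.5, 2.0]), np.array([0.0, 0.0])  # u > g, lambda = h

for rho in (0.1, 1.0, 10.0):
    # The Signorini conditions (4) hold at all four nodes, so the residual vanishes for every rho > 0.
    assert np.allclose(projection_residual(u_contact, lam_contact, g, h, rho), 0.0)
    assert np.allclose(projection_residual(u_free, lam_free, g, h, rho), 0.0)
```

Conversely, a node violating any of the three conditions in (4) produces a nonzero residual, which is what the iterations below drive to zero.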

On the other hand, we apply Green's formula to (1)–(3) and obtain the variational formulation as follows: Find u ∈ H_g^1(Ω) such that
a(u, v) − ⟨λ, v⟩_{Γ_S} = L(v),  ∀v ∈ H_g^1(Ω).  (10)

In (10) we have set

a(u, v) := ∫_Ω ∇u(x) · ∇v(x) dx,
⟨λ, v⟩_{Γ_S} := ∫_{Γ_S} λ(x)v(x) ds_x,
L(v) := ∫_{Γ_N} h(x)v(x) ds_x + ∫_Ω f(x)v(x) dx,

where the bilinear form a(u, v) on H_g^1(Ω) × H_g^1(Ω) satisfies coercivity and continuity, i.e.
a(u, u) ≥ α‖u‖²_{H¹(Ω)},  a(v, w) ≤ β‖v‖_{H¹(Ω)} ‖w‖_{H¹(Ω)},  (11)
for some α > 0 and β > 0, independently of u ∈ H_0^1(Ω) and v, w ∈ H¹(Ω), respectively. Now, we obtain a new variational formulation for the problem (1)–(4) as follows: Find u ∈ H_g^1(Ω) such that
a(u, v) − ⟨λ, v⟩_{Γ_S} = L(v),  ∀v ∈ H_g^1(Ω),
u + ρλ = u + ρλ − ωR_ρ(u, λ) on Γ_S.  (12)

Although the problem (12) still contains the nonlinear boundary equation (9), the new formulation has a variational form and no inequality constraint, which is useful from both the numerical and the theoretical point of view.

3. Projection method and the convergence analysis

Inspired by the methods developed in [8,20–22], we propose the projection method as follows.

Algorithm 1.
Step 0: Choose initial functions u^(0) ∈ H_g^1(Ω), λ^(0) ∈ L²(Γ_S), and the parameters ρ ∈ R_+, ω ∈ (0, 2). Set k := 0.
Step 1: Compute R_ρ(u^(k), λ^(k)) according to (8).
Step 2: Solve the problem
a(u^(k+1), v) − ⟨λ^(k+1), v⟩_{Γ_S} = L(v),  ∀v ∈ H_g^1(Ω),  (13)
with the Robin boundary condition
u^(k+1) + ρλ^(k+1) = u^(k) + ρλ^(k) − ωR_ρ(u^(k), λ^(k)) on Γ_S,  (14)
and obtain the boundary functions u^(k+1) and λ^(k+1) on Γ_S.
Step 3: Check some stopping criterion; if it is not satisfied, set k := k + 1 and return to Step 1.
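To make the structure of Algorithm 1 concrete, the following sketch shows one possible realization of the outer loop, working with nodal values on Γ_S. The routine solve_robin_problem stands for any solver of the linear problem (13) with the Robin data (14) (for instance the BEM system of Section 5); its name and signature are assumptions made for illustration only.

```python
import numpy as np

def algorithm_1(solve_robin_problem, u0, lam0, g, h, rho=1.0, omega=1.8,
                tol=1e-5, max_iter=1000):
    """Fixed-parameter projection iteration (Algorithm 1) on the Signorini nodes."""
    u, lam = np.asarray(u0, float).copy(), np.asarray(lam0, float).copy()
    for k in range(max_iter):
        # Step 1: projection residual (8) at the current iterate.
        R = u - g - np.maximum(u - g - rho * (lam - h), 0.0)
        if np.max(np.abs(R)) <= tol:              # Step 3: stopping criterion
            return u, lam, k
        # Step 2: solve the linear problem (13) with the Robin boundary data (14),
        # i.e. u^{k+1} + rho*lam^{k+1} = robin_rhs on the Signorini boundary.
        robin_rhs = u + rho * lam - omega * R
        u, lam = solve_robin_problem(robin_rhs, rho)
    return u, lam, max_iter
```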

Obviously, we obtain the sequences u^(k) ∈ H_g^1(Ω) and λ^(k) ∈ L²(Γ_S). Let u* and λ* denote the solution of the Signorini problem and the corresponding normal derivative on the boundary Γ, respectively. In order to analyze the convergence of Algorithm 1, we will need the following results [8,20–22].

Theorem 3.1. For all u, λ ∈ L²(Γ_S), it holds that
⟨R_ρ(u, λ) − ρ(λ − h), [u − g − ρ(λ − h)]_+ − u* + g⟩_{Γ_S} ≥ 0.  (15)


Proof. For a parameter ρ > 0, we split Γ_S into two subparts Γ_+ and Γ_−, where
u − g − ρ(λ − h) ≥ 0 in Γ_+,  u − g − ρ(λ − h) < 0 in Γ_−.
It follows that
[u − g − ρ(λ − h)]_+ − (u − g − ρ(λ − h)) = 0 in Γ_+,
[u − g − ρ(λ − h)]_+ − (u − g − ρ(λ − h)) > 0 in Γ_−,
u* − g − [u − g − ρ(λ − h)]_+ ≥ 0 in Γ_−.
Consequently,
⟨[u − g − ρ(λ − h)]_+ − (u − g − ρ(λ − h)), u* − g − [u − g − ρ(λ − h)]_+⟩_{Γ_S} ≥ 0.
Together with (8) this completes the proof. □



Theorem 3.2. For all u^(k) ∈ H_g^1(Ω) and λ^(k) ∈ L²(Γ_S) generated by Algorithm 1, we have
ρ⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S} ≤ ⟨u^(k) − u* + ρ(λ^(k) − λ*), R_ρ(u^(k), λ^(k))⟩_{Γ_S} − ‖R_ρ(u^(k), λ^(k))‖²_{Γ_S}.  (16)

Proof. For any v ∈ K, we have
⟨λ* − h, v − u*⟩_{Γ_S} = ⟨λ* − h, v − g⟩_{Γ_S} − ⟨λ* − h, u* − g⟩_{Γ_S}.
Note that ⟨λ* − h, v − g⟩_{Γ_S} ≥ 0 and ⟨λ* − h, u* − g⟩_{Γ_S} = 0; it follows that
⟨λ* − h, v − u*⟩_{Γ_S} ≥ 0.  (17)
We take v = g + [u^(k) − g − ρ(λ^(k) − h)]_+ in (17) and obtain
ρ⟨λ* − h, [u^(k) − g − ρ(λ^(k) − h)]_+ − u* + g⟩_{Γ_S} ≥ 0.  (18)
We set u = u^(k) and λ = λ^(k) in (15) and we have
⟨R_ρ(u^(k), λ^(k)) − ρ(λ^(k) − h), [u^(k) − g − ρ(λ^(k) − h)]_+ − u* + g⟩_{Γ_S} ≥ 0.  (19)
Finally, adding (18) and (19), and using (8), we find
⟨R_ρ(u^(k), λ^(k)) − ρ(λ^(k) − λ*), u^(k) − u* − R_ρ(u^(k), λ^(k))⟩_{Γ_S} ≥ 0.  (20)
Rearranging terms in (20), we obtain the result. □

Theorem 3.3. Let {(u^(k), λ^(k))} be the sequence generated by Algorithm 1, then the following estimate holds
2ρω⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S} ≤ ‖u^(k) − u* + ρ(λ^(k) − λ*)‖²_{Γ_S} − ‖u^(k+1) − u* + ρ(λ^(k+1) − λ*)‖²_{Γ_S}.  (21)

Proof. From Step 2 of Algorithm 1, it follows that
u^(k+1) − u* + ρ(λ^(k+1) − λ*) = u^(k) − u* + ρ(λ^(k) − λ*) − ωR_ρ(u^(k), λ^(k)).
Thus, thanks to (16), we have
‖u^(k+1) − u* + ρ(λ^(k+1) − λ*)‖²_{Γ_S} = ‖u^(k) − u* + ρ(λ^(k) − λ*) − ωR_ρ(u^(k), λ^(k))‖²_{Γ_S}
= ‖u^(k) − u* + ρ(λ^(k) − λ*)‖²_{Γ_S} + ω²‖R_ρ(u^(k), λ^(k))‖²_{Γ_S} − 2ω⟨u^(k) − u* + ρ(λ^(k) − λ*), R_ρ(u^(k), λ^(k))⟩_{Γ_S}
≤ ‖u^(k) − u* + ρ(λ^(k) − λ*)‖²_{Γ_S} + ω²‖R_ρ(u^(k), λ^(k))‖²_{Γ_S} − 2ω‖R_ρ(u^(k), λ^(k))‖²_{Γ_S} − 2ρω⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S}.
Since ω ∈ (0, 2), we have ω² − 2ω < 0, and this gives (21). □

Considering that u^(k) and u* satisfy the same boundary conditions on Γ_D and Γ_N, we apply Green's formula and get
⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S} = ⟨λ^(k) − λ*, u^(k) − u*⟩_Γ = ∫_Ω |∇(u^(k) − u*)|² dx ≥ 0.  (22)


Consequently, we can see from Theorem 3.3 that the sequence {‖u^(k) − u* + ρ(λ^(k) − λ*)‖_{Γ_S}} is bounded and decreasing. It follows that both sequences {u^(k)} and {λ^(k)} are bounded. From the above theorem, we can obtain the global convergence of Algorithm 1.

Theorem 3.4. Let {(u^(k), λ^(k))} be the sequence generated by Algorithm 1, then {(u^(k), λ^(k))} converges to the unique solution (u*, λ*) in H_g^1(Ω) × L²(Γ_S) as k → ∞.

Proof. Let us denote δu^(k) := u^(k) − u* ∈ H_0^1(Ω) and δλ^(k) := λ^(k) − λ* ∈ L²(Γ_S). From the fact that (u*, λ*) satisfies (10), we take v = u* and v = u^(k) and have

a(u*, u*) − ⟨λ*, u*⟩_{Γ_S} = L(u*),
and
a(u*, u^(k)) − ⟨λ*, u^(k)⟩_{Γ_S} = L(u^(k)).
Then, we obtain
a(u*, δu^(k)) − ⟨λ*, δu^(k)⟩_{Γ_S} = L(δu^(k)).  (23)

In a similar way, if we employ (13) at iteration k instead of k + 1, with v = u* and v = u^(k), we obtain for k ≥ 1
a(u^(k), δu^(k)) − ⟨λ^(k), δu^(k)⟩_{Γ_S} = L(δu^(k)).  (24)
Subtracting (23) from (24) and using the coercivity of a(·, ·), we deduce
α‖δu^(k)‖²_{H¹(Ω)} ≤ a(δu^(k), δu^(k)) = ⟨δλ^(k), δu^(k)⟩_{Γ_S}.  (25)

From (21), it follows that
⟨δλ^(k), δu^(k)⟩_{Γ_S} ≤ (2ρω)^{−1}(‖δu^(k) + ρδλ^(k)‖²_{Γ_S} − ‖δu^(k+1) + ρδλ^(k+1)‖²_{Γ_S}).  (26)
Substituting (25) in (26), we get
α‖δu^(k)‖²_{H¹(Ω)} ≤ (2ρω)^{−1}(‖δu^(k) + ρδλ^(k)‖²_{Γ_S} − ‖δu^(k+1) + ρδλ^(k+1)‖²_{Γ_S}).  (27)

Summing up Eq. (27) with respect to k, we obtain
Σ_{k=0}^{∞} α‖δu^(k)‖²_{H¹(Ω)} ≤ (2ρω)^{−1}‖δu^(0) + ρδλ^(0)‖²_{Γ_S} < ∞,  (28)
which yields
lim_{k→∞} ‖δu^(k)‖²_{H¹(Ω)} = 0.
Therefore, u^(k) converges to u* in H_g^1(Ω) and, from Step 2 of Algorithm 1, λ^(k) converges to λ* in L²(Γ_S) as k → ∞ [12]. □



In Algorithm 1, there are two parameters ω ∈ (0, 2) and ρ > 0 which may affect the convergence speed. We can see from the proofs of Theorems 3.3 and 3.4 that a good parameter ω should be less than and close to 2, while different problems have different optimal values of ρ. However, it is usually difficult to choose the proper parameter ρ. Thus, it is important to choose a suitable parameter ρ for fast convergence.

4. Self-adaptive projection method and the convergence analysis

In order to remedy this disadvantage of Algorithm 1, we consider a simple rule which replaces the fixed parameter ρ by a self-adaptive variable sequence {ρ_k}. Such a strategy has already been applied to finite-dimensional variational inequalities [7,9,16,17]. In the following we assume that the nonnegative sequence {s_k} satisfies
Σ_{k=0}^{+∞} s_k < +∞.

Now, we propose the self-adaptive projection method as follows.

Algorithm 2.
Step 0: Choose initial functions u^(0) ∈ H_g^1(Ω), λ^(0) ∈ L²(Γ_S), and the parameters ρ ∈ R_+ and ω ∈ (0, 2). Set ρ_0 = ρ and k := 0.
Step 1: Compute R_{ρ_k}(u^(k), λ^(k)) according to (8).
Step 2: Solve the problem
a(u^(k+1), v) − ⟨λ^(k+1), v⟩_{Γ_S} = L(v),  ∀v ∈ H_g^1(Ω),  (29)
with the Robin boundary condition
u^(k+1) + ρ_kλ^(k+1) = u^(k) + ρ_kλ^(k) − ωR_{ρ_k}(u^(k), λ^(k)) on Γ_S,  (30)
and obtain the boundary functions u^(k+1) and λ^(k+1) on Γ_S.
Step 3: Choose the next penalty parameter ρ_{k+1} such that
ρ_k/(1 + s_k) ≤ ρ_{k+1} ≤ (1 + s_k)ρ_k.  (31)
Step 4: Check some stopping criterion; if it is not satisfied, update k := k + 1 and return to Step 1.

For the convergence analysis of this algorithm, we will need the following simple result on the sequence {s_k}.
Lemma 4.1. If the sequence {s_k} satisfies s_k ≥ 0 and Σ_{k=0}^{+∞} s_k < +∞, then Π_{k=0}^{+∞}(1 + s_k) < +∞.

Let
C_s := Π_{k=0}^{+∞}(1 + s_k).
From (31), it follows that the parameter ρ_k ∈ [ρ_0/C_s, C_sρ_0] is bounded. We set
ρ_L := inf_{k≥0} ρ_k > 0,  ρ_U := sup_{k≥0} ρ_k > 0.

Theorem 4.1. Let {(u^(k), λ^(k))} be the approximate solution obtained from Algorithm 2, then the following estimate holds
‖u^(k+1) − u* + ρ_{k+1}(λ^(k+1) − λ*)‖²_{Γ_S} ≤ (1 + ξ_k)‖u^(k) − u* + ρ_k(λ^(k) − λ*)‖²_{Γ_S} − 2ρ_kω⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S},  (32)
where ξ_k := 2s_k + s_k².

Proof. Noting that u^(k+1) and λ^(k+1) satisfy (30), we have
u^(k+1) − u* + ρ_k(λ^(k+1) − λ*) = u^(k) − u* + ρ_k(λ^(k) − λ*) − ωR_{ρ_k}(u^(k), λ^(k)).
Following the proof of Theorem 3.3, we get
‖u^(k+1) − u* + ρ_k(λ^(k+1) − λ*)‖²_{Γ_S} ≤ ‖u^(k) − u* + ρ_k(λ^(k) − λ*)‖²_{Γ_S} − 2ρ_kω⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S}.  (33)
Since 0 < ρ_{k+1} ≤ (1 + s_k)ρ_k and (22) holds, it follows that
‖u^(k+1) − u* + ρ_{k+1}(λ^(k+1) − λ*)‖²_{Γ_S} ≤ (1 + s_k)²‖u^(k+1) − u* + ρ_k(λ^(k+1) − λ*)‖²_{Γ_S}.  (34)
By combining (33) and (34), we have
‖u^(k+1) − u* + ρ_{k+1}(λ^(k+1) − λ*)‖²_{Γ_S} ≤ (1 + s_k)²‖u^(k) − u* + ρ_k(λ^(k) − λ*)‖²_{Γ_S} − 2ρ_kω⟨λ^(k) − λ*, u^(k) − u*⟩_{Γ_S},  (35)
and this completes the proof. □

Theorem 4.2. Let {(u^(k), λ^(k))} be the sequence generated by Algorithm 2, then {(u^(k), λ^(k))} converges to the unique solution (u*, λ*) in H_g^1(Ω) × L²(Γ_S) as k → ∞.

Proof. It follows from Σ_{k=0}^{+∞} s_k < +∞ that Σ_{k=0}^{+∞} ξ_k < +∞ and Π_{k=0}^{+∞}(1 + ξ_k) < +∞. Let
C_0 := Σ_{k=0}^{+∞} ξ_k,  C_p := Π_{k=0}^{+∞}(1 + ξ_k).

Using (25), we have
⟨δλ^(k), δu^(k)⟩_{Γ_S} = a(δu^(k), δu^(k)) ≥ α‖δu^(k)‖²_{H¹(Ω)} ≥ 0.  (36)
Noting (32) and (36), we then have
‖δu^(k+1) + ρ_{k+1}δλ^(k+1)‖²_{Γ_S} ≤ Π_{i=0}^{k}(1 + ξ_i)‖δu^(0) + ρ_0δλ^(0)‖²_{Γ_S} ≤ C_p‖δu^(0) + ρ_0δλ^(0)‖²_{Γ_S}.  (37)


Therefore, there exists a constant C > 0 such that
‖δu^(k) + ρ_kδλ^(k)‖²_{Γ_S} ≤ C,  ∀k ≥ 0.  (38)
According to (36) and (38), the sequence {u^(k)} is bounded. Furthermore, from (32) we obtain
Σ_{k=0}^{+∞} ρ_k⟨δλ^(k), δu^(k)⟩_{Γ_S} ≤ (2ω)^{−1}(‖δu^(0) + ρ_0δλ^(0)‖²_{Γ_S} + Σ_{k=0}^{+∞} ξ_k‖δu^(k) + ρ_kδλ^(k)‖²_{Γ_S}).  (39)

It follows from (36), (38) and (39) that
Σ_{k=0}^{∞} αρ_L‖δu^(k)‖²_{H¹(Ω)} ≤ (2ω)^{−1}(‖δu^(0) + ρ_0δλ^(0)‖²_{Γ_S} + Σ_{k=0}^{+∞} ξ_k‖δu^(k) + ρ_kδλ^(k)‖²_{Γ_S})
≤ (2ω)^{−1}(C + C Σ_{k=0}^{+∞} ξ_k)
≤ (2ω)^{−1}(1 + C_0)C.
Consequently,
lim_{k→∞} ‖δu^(k)‖²_{H¹(Ω)} = 0.
This implies that u^(k) converges to u* in H_g^1(Ω) and, from Step 2 of the algorithm, λ^(k) converges to λ* in L²(Γ_S) as k → ∞ [12]. □

5. Numerical implementation

The main advantage of our methods is that each iteration only requires solving a linear problem. Using the boundary conditions (2) and (3) on Γ_D ∪ Γ_N and the Robin boundary condition (14) or (30) determined from the kth iterative step on Γ_S, we can obtain the unknown u^(k+1) and its normal derivative λ^(k+1) on Γ_S by solving a linear problem at the (k + 1)st step, so that we can update the boundary condition on Γ_S according to (14) or (30). Different from Algorithm 1, which is implemented with a fixed parameter ρ, Algorithm 2 generates a sequence of variable parameters ρ_k which improves the convergence.

5.1. Application of the algorithms with the BEM

Any numerical method, including the finite element method and the finite difference method, can be used in conjunction with the methods proposed in this paper. Since the iterative process is limited to the boundary through u^(k) and λ^(k), the methods are implemented in conjunction with the BEM. Here, we only give the boundary element approximation of Algorithm 2. By using the BEM, the solution u of the Signorini problem can also be represented as
u(y) = ∫_Γ λ(x)U*(x, y) ds_x − ∫_Γ u(x) ∂U*(x, y)/∂n_x ds_x + ∫_Ω f(x)U*(x, y) dx,  (40)

where U*(x, y) = −(1/2π) ln|x − y| is the fundamental solution of the two-dimensional Laplace equation. The relationship between the potential u and its normal derivative λ on the boundary Γ is given by the boundary integral equation
u(y)/2 = ∫_Γ λ(x)U*(x, y) ds_x − ∫_Γ u(x) ∂U*(x, y)/∂n_x ds_x + w(y),  y ∈ Γ,  (41)
where w(y) = ∫_Ω f(x)U*(x, y) dx. In particular, we use piecewise constant boundary elements: we divide the boundary Γ into N straight line elements and place one node at the midpoint of each element. Then, we can compute the projection residual function R_{ρ_k}(u^(k), λ^(k)) pointwise at the nodes, element by element along Γ_S. Considering the boundary conditions (2) and (3), we obtain the following discrete form of the boundary integral equation (41) at the (k + 1)st iterative step for nodes y_i ∈ Γ_D ∪ Γ_N:

u^(k+1)(y_i)/2 + Σ_{j=1}^{N} (∫_{Γ_j} ∂U*(x, y_i)/∂n_x ds_x) u^(k+1)(x_j) = Σ_{j=1}^{N} (∫_{Γ_j} U*(x, y_i) ds_x) λ^(k+1)(x_j) + w(y_i),  y_i ∈ Γ_D ∪ Γ_N.  (42)
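The element integrals in (42) (and in (44) below) involve only two elementary kernels, U* and ∂U*/∂n_x. The short sketch below, with illustrative function names and a simple one-point midpoint rule that is only an assumption for the non-singular integrals, shows how these element contributions can be approximated.

```python
import numpy as np

def U_star(x, y):
    """Fundamental solution of the 2D Laplace equation: U*(x, y) = -ln|x - y| / (2*pi)."""
    return -np.log(np.linalg.norm(np.asarray(x) - np.asarray(y))) / (2.0 * np.pi)

def dU_star_dn(x, y, n_x):
    """Normal derivative with respect to x: grad_x U*(x, y) . n_x = -(x - y).n_x / (2*pi*|x - y|^2)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return -float(np.dot(d, n_x)) / (2.0 * np.pi * float(np.dot(d, d)))

def element_integrals(x_mid, length, n_x, y_i):
    """Midpoint approximation of the single- and double-layer element integrals in (42) and (44)."""
    return U_star(x_mid, y_i) * length, dU_star_dn(x_mid, y_i, n_x) * length
```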

If the Robin boundary condition on y ∈ Γ_S for (30) is written as
λ^(k+1) = λ^(k) + (u^(k) − u^(k+1) − ωR_{ρ_k}(u^(k), λ^(k)))/ρ_k,  (43)

we substitute (43) into (41) to eliminate λ^(k+1) and obtain the corresponding discrete form at y_i ∈ Γ_S:
u^(k+1)(y_i)/2 + Σ_{j=1}^{N} (∫_{Γ_j} ∂U*(x, y_i)/∂n_x ds_x + (1/ρ_k)∫_{Γ_j} U*(x, y_i) ds_x) u^(k+1)(x_j)
= Σ_{j=1}^{N} (∫_{Γ_j} U*(x, y_i) ds_x)(λ^(k)(x_j) + (1/ρ_k)(u^(k)(x_j) − ωR_{ρ_k}(u^(k)(x_j), λ^(k)(x_j)))) + w(y_i),  y_i ∈ Γ_S.  (44)

For the Newton potential w(y_i), we can compute it by using the FEM; this computation is unrelated to the iteration. Using all boundary conditions, we obtain the unknown u^(k+1)(x_j) and its normal derivative λ^(k+1)(x_j) at the nodes on Γ_S by solving the linear equations (42) and (44), so that we can reset the Robin boundary condition on Γ_S for Algorithm 2 according to (30).

5.2. The self-adaptive adjusting rule

Now, we turn to Step 3 of Algorithm 2. We extend the simple rule for adjusting the parameter ρ_k for finite-dimensional variational inequalities in [7,9,16,17] to the Signorini problem. Considering that the sequence {(u^(k), λ^(k))} generated by Algorithm 2 satisfies (33) and (36), we have
‖u^(k+1) − u* + ρ_k(λ^(k+1) − λ*)‖²_{Γ_S} ≤ ‖u^(k) − u* + ρ_k(λ^(k) − λ*)‖²_{Γ_S}.

From a numerical point of view, we hope that
‖u^(k+1) − u^(k)‖_{Γ_S} ≈ ρ_k‖λ^(k+1) − λ^(k)‖_{Γ_S}.
This suggests the following way of choosing the penalty parameter ρ_k. For a given positive constant τ, if ρ_k‖λ^(k+1) − λ^(k)‖_{Γ_S} > (1 + τ)‖u^(k+1) − u^(k)‖_{Γ_S}, we decrease ρ_k for the next iteration. Otherwise, we increase ρ_k if ρ_k‖λ^(k+1) − λ^(k)‖_{Γ_S} < (1/(1 + τ))‖u^(k+1) − u^(k)‖_{Γ_S}. Let
w_k = ρ_k‖λ^(k+1) − λ^(k)‖_{Γ_S} / ‖u^(k+1) − u^(k)‖_{Γ_S};

we update the parameter ρ_{k+1} by
ρ_{k+1} = (1 + s_k)ρ_k if w_k < 1/(1 + τ),
ρ_{k+1} = ρ_k/(1 + s_k) if w_k > 1 + τ,
ρ_{k+1} = ρ_k otherwise.
In order to satisfy Σ_{k=0}^{+∞} s_k < +∞ for the nonnegative sequence {s_k}, we use c_k to count the number of changes of {ρ_k}, i.e.
c_0 = 0,  c_{k+1} = c_k if 1/(1 + τ) ≤ w_k ≤ 1 + τ,  c_{k+1} = c_k + 1 otherwise.

Then the sequence {s_k} is generated by
s_k = τ if c_k < c_{k+1} and c_{k+1} ≤ c_max,
s_k = 1/(c_{k+1} − c_max)² if c_k < c_{k+1} and c_{k+1} > c_max,
s_k = 0 otherwise.
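Read as pseudocode, Step 3 together with the two rules above amounts to the following sketch (Python; the function and variable names are illustrative assumptions, while τ and c_max are the constants introduced above).

```python
def update_rho(rho_k, w_k, c_k, tau, c_max):
    """One application of the self-adaptive rule: returns (rho_{k+1}, c_{k+1}).

    Here w_k = rho_k * ||lam^{k+1} - lam^k|| / ||u^{k+1} - u^k|| is the balance ratio defined above."""
    # A "change" is counted whenever w_k leaves the band [1/(1+tau), 1+tau].
    changed = not (1.0 / (1.0 + tau) <= w_k <= 1.0 + tau)
    c_next = c_k + 1 if changed else c_k

    # s_k equals tau for the first c_max changes and then decays like 1/(c_{k+1}-c_max)^2,
    # so the series sum_k s_k is finite, as required by Algorithm 2.
    if changed and c_next <= c_max:
        s_k = tau
    elif changed:
        s_k = 1.0 / (c_next - c_max) ** 2
    else:
        s_k = 0.0

    if w_k < 1.0 / (1.0 + tau):       # u changes faster than rho*lambda: enlarge rho
        rho_next = (1.0 + s_k) * rho_k
    elif w_k > 1.0 + tau:             # rho*lambda changes faster than u: reduce rho
        rho_next = rho_k / (1.0 + s_k)
    else:
        rho_next = rho_k
    return rho_next, c_next
```

With such a helper, Step 3 of Algorithm 2 reduces to one call per iteration, and the band parameter τ controls how aggressively ρ_k is adapted.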


Table 1. Number of iterations for each method ("–" means that the number of iterations is more than 1000).

         SSN                      Algorithm 1              Algorithm 2
ρ        N=80   N=160  N=320      N=80   N=160  N=320      N=80   N=160  N=320
1000     –      –      –          –      –      –          25     29     31
100      –      –      –          –      –      –          22     24     29
10       –      –      –          –      –      –          19     21     24
1        –      –      –          501    867    –          16     19     20
0.1      –      –      –          51     88     145        13     18     20
0.01     –      –      –          30     29     29         14     17     20
0.001    –      –      –          224    221    220        17     20     23

For a given constant integer c_max > 0, it can be seen that the sequence {s_k} is nonnegative and satisfies Σ_{k=0}^{+∞} s_k < +∞ automatically.

6. Numerical examples

To test the efficiency of the proposed methods, some numerical experiments are performed in this section. In all tests, we take ω = 1.8, c_max = 10 and choose ‖R_{ρ_k}(u^(k), λ^(k))‖_{Γ_S} ≤ 10^{−5} as the stopping criterion; all computations are performed in Matlab.

6.1. The Signorini problem of the Laplace equation

We first consider a classical example on the annular domain Ω = {(x, y) : a < √(x² + y²) < b}, with Γ_D = {(x, y) : √(x² + y²) = b} ∪ {(x, y) : √(x² + y²) = a, y ≥ 0}. The remaining boundary part of Γ is subject to the Signorini condition with g = 0 and h = 0. The exact solution in the domain Ω is given by the following complex function:
u(x, y) = Im ω(x + iy)³,
where
ω(x + iy) = [ (1/2)(((x² − y²)/r²)² + (1/4)(r²/a² − a²/r²)²)^{1/2} + (1/4)((x² − y²)/r²)(r²/a² + a²/r²) ]^{1/2} sign x
 + i [ (1/2)(((x² − y²)/r²)² + (1/4)(r²/a² − a²/r²)²)^{1/2} − (1/4)((x² − y²)/r²)(r²/a² + a²/r²) ]^{1/2} sign y,
and r = √(x² + y²) ≥ a. According to the above solution, we can easily obtain the Dirichlet boundary condition on Γ_D; the exact solution on the Signorini boundary Γ_S is
u(x, y) = [max(0, (x² − y²)/a²)]³ sign y.  (45)

This problem has been considered by different iterative methods combined with the BEM [15,28,32–34]. We introduce the parameterizations t → (a cos πt, −a sin πt) and t → (b cos πt, b sin πt) and consider the case a = 0.1 and b = 0.25. Firstly, we apply Algorithms 1 and 2 with ρ = 1 to this problem. The boundary is discretized by a uniform grid in t, with N = 320. In Fig. 1 we present the evolution of the relative errors for the potential u with respect to the iteration index k. It can be seen that our methods converge, and the convergence of Algorithm 2 is better than that of Algorithm 1. For the sake of comparison with the semi-smooth Newton (SSN) method in [12], we give the number of iterations with respect to the number of boundary elements N and the penalty parameter ρ in Table 1. We note that the convergence speed depends significantly on ρ for the SSN method and Algorithm 1, and these algorithms converge slowly. Conversely, Algorithm 2 converges quickly for all ρ. Also from this table, we can see that the iteration numbers of Algorithm 2 are not sensitive to N.


Fig. 1. Evolution of relative errors with k for N = 320 and ρ = 1.

Fig. 2. Approximate solutions for the potential u on  S .

6.2. The Signorini problem of the Poisson equation

Now, let us consider the Signorini problem for the Poisson equation in the square domain Ω = (−1, 1) × (−1, 1) with f = cos(π/2 + πx) + 1 [12]. The boundary conditions are given by
g = 0 on Γ_D = {(x, y) : x = 0, 0 ≤ y ≤ 1} ∪ {(x, y) : x = 1, 0 ≤ y ≤ 1},
h = −7x(1 − x) on Γ_N = {(x, y) : 0 ≤ x ≤ 1, y = 1},
and the complementarity condition
u ≤ g,  λ ≤ 0,  (u − g)λ = 0 on Γ_S = {(x, y) : 0 ≤ x ≤ 1, y = 0},
where g = 5x(1 − x)(0.5 − x) max(x, 1 − x). This problem was considered by projection methods with the FDM [12] and the BEM [32,33].
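For reproducibility, the data of this example can be written down directly; a small helper sketch (function names are illustrative) is:

```python
import numpy as np

# Data of the Poisson-equation example in Section 6.2.
def f(x, y):
    return np.cos(0.5 * np.pi + np.pi * x) + 1.0            # right-hand side

def h(x):
    return -7.0 * x * (1.0 - x)                              # Neumann data on Gamma_N (y = 1)

def g_obstacle(x):
    return 5.0 * x * (1.0 - x) * (0.5 - x) * np.maximum(x, 1.0 - x)  # obstacle g on Gamma_S (y = 0)
```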


Table 2. Number of iterations for each method.

         SSN                      Algorithm 1              Algorithm 2
ρ        N=80   N=160  N=320      N=80   N=160  N=320      N=80   N=160  N=320
1000     –      –      –          –      –      –          22     25     28
100      –      –      –          –      –      –          19     21     26
10       –      –      –          –      –      –          17     17     20
1        –      –      –          116    179    334        12     14     19
0.1      –      –      –          15     22     37         14     18     15
0.01     –      –      –          86     85     84         16     16     21
0.001    –      –      –          552    547    538        18     18     21

We solve the problem using our methods with N = 320 and ρ = 1. The numerical results are given in Fig. 2. Although there is no exact solution for this example, the graphs show clearly where the separation points are on Γ_S. It can be seen that our results are in good agreement with those in Refs. [12,32,33]. We also investigate the convergence behavior of our methods for this example. Table 2 displays the number of iterations of the different methods. Similarly, we find that the convergence speed depends heavily on the value of ρ for the SSN method and Algorithm 1. Algorithm 2 converges fast for all penalty parameters ρ, and ρ does not have a significant effect on its number of iterations. In addition, the iteration numbers of Algorithm 2 are not very sensitive to N.

7. Conclusion

In this paper, we have proposed two projection methods for the solution of Signorini problems and studied the relationship between them. The advantage of our methods is that each iteration only requires solving a simple elliptic boundary value problem, and the methods are easy to implement in conjunction with the BEM for Signorini problems. Algorithm 1 is efficient only for a proper parameter ρ, and it is difficult to find such a ρ. Algorithm 2 can be viewed as an improvement of Algorithm 1 with a self-adaptive variable parameter ρ. The two examples tested show that the self-adaptive adjustment rule is attractive and necessary in practice. The methods can also be applied to the three-dimensional case.

Acknowledgment

This work was supported by the China Scholarship Council, the National Natural Science Foundation of China (Grant no. 11471063), the Natural Science Foundation Project of CQ CSTC of China (Grant nos. cstc2015jcyjBX0083 and cstc2017jcyjAX0316), the Educational Commission Foundation of Chongqing of China (Grant no. KJ1600329) and the Fundamental Project of CQNU of China (Grant no. 16XLB006).

Supplementary material

Supplementary material associated with this article can be found, in the online version, at 10.1016/j.amc.2018.01.004.

References

[1] J.M. Aitchison, M.W. Poole, A numerical algorithm for the solution of Signorini problems, J. Comput. Appl. Math. 94 (1998) 55–67.
[2] A. Bogomolny, Fundamental solutions method for elliptic boundary value problems, SIAM J. Numer. Anal. 22 (1985) 644–669.
[3] C.A. Brebbia, The Boundary Element Method for Engineers, Pentech Press, London, 1978.
[4] G. Fairweather, A. Karageorghis, The method of fundamental solutions for elliptic boundary value problems, Adv. Comput. Math. (1998) 69–95.
[5] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, Berlin, 2008.
[6] H.D. Han, A direct boundary element method for Signorini problems, Math. Comp. 55 (1990) 115–128.
[7] D.R. Han, Solving linear variational inequality problems by a self-adaptive projection method, Appl. Math. Comput. 182 (2006) 1765–1771.
[8] B.S. He, A class of implicit methods for monotone variational inequalities, Math. Numer. Sin. 20 (1998) 337–344.
[9] B.S. He, L.Z. Liao, S.L. Wang, Self-adaptive operator splitting methods for monotone variational inequalities, Numer. Math. 94 (2003) 199–217.
[10] G.C. Hsiao, W.L. Wendland, Boundary Integral Equations, Springer, Berlin, 2008.
[11] K. Ito, K. Kunisch, Augmented Lagrangian methods for nonsmooth, convex optimization in Hilbert spaces, Nonlinear Anal. 41 (2000) 591–616.
[12] K. Ito, K. Kunisch, Semi-smooth Newton methods for the Signorini problem, Appl. Math. 53 (2008) 455–468.
[13] A. Karageorghis, D. Lesnic, L. Marin, The method of fundamental solutions for solving direct and inverse Signorini problems, Comput. Struct. 151 (2015) 11–19.
[14] A. Karageorghis, Numerical solution of a shallow dam problem by a boundary element method, Comput. Methods Appl. Mech. Eng. 61 (1987) 265–276.
[15] A. Leontiev, W. Huacasi, J. Herskovits, An optimization technique for the solution of the Signorini problem using the boundary element method, Struct. Multidisc. Optim. 24 (2002) 72–77.
[16] M. Li, A. Bnouhachem, A modified inexact operator splitting method for monotone variational inequalities, J. Glob. Optim. 41 (2008) 417–426.
[17] M. Li, A. Bnouhachem, M. Khalfaoui, Z.H. Sheng, A modified inexact implicit method for mixed variational inequalities, J. Comput. Appl. Math. 234 (2010) 3356–3365.
[18] M. Maischak, E.P. Stephan, Adaptive hp-versions of BEM for Signorini problems, Appl. Numer. Math. 54 (2005) 425–449.
[19] T.A. Mel'nyk, I.A. Nakvasiuk, W.L. Wendland, Homogenization of the Signorini boundary-value problem in a thick junction and boundary integral equations for the homogenized problem, Math. Methods Appl. Sci. 34 (2011) 758–775.
[20] M.A. Noor, Some algorithms for general monotone variational inequalities, Math. Comput. Modell. 29 (1999) 1–9.

[21] M.A. Noor, Splitting algorithms for general pseudomonotone mixed variational inequalities, J. Global Optim. 18 (2000) 75–89.
[22] M.A. Noor, Some developments in general variational inequalities, Appl. Math. Comput. 152 (2004) 199–277.
[23] A. Poullikkas, A. Karageorghis, G. Georgiou, The method of fundamental solutions for Signorini problems, IMA J. Numer. Anal. 18 (1998) 273–285.
[24] A. Poullikkas, A. Karageorghis, G. Georgiou, The numerical solution of three-dimensional Signorini problems with the method of fundamental solutions, Eng. Anal. Bound. Elem. 25 (2001) 221–227.
[25] C.S. Ryoo, Numerical verification of solutions for a simplified Signorini problem, Comput. Math. Appl. 40 (2000) 1003–1013.
[26] C.S. Ryoo, Numerical verification of solutions for Signorini problems using Newton-like method, Int. J. Numer. Methods Eng. 73 (2008) 1181–1196.
[27] C.S. Ryoo, An approach to the numerical verification of solutions for variational inequalities using Schauder fixed point theory, Bound. Value Probl. (2014), Article ID 235.
[28] P. Spann, On the boundary element method for the Signorini problem of the Laplacian, Numer. Math. 65 (1993) 337–356.
[29] O. Steinbach, Boundary element methods for variational inequalities, Numer. Math. 126 (2014) 173–197.
[30] O. Steinbach, Numerical Approximation Methods for Elliptic Boundary Value Problems, Springer, New York, 2008.
[31] Y.C. Wang, X.L. Li, A meshless algorithm with moving least square approximations for elliptic Signorini problems, Chin. Phys. B 23 (2014) 090202.
[32] S.G. Zhang, Ph.D. thesis, Chongqing University, 2012.
[33] S.G. Zhang, J.L. Zhu, A projection iterative algorithm boundary element method for the Signorini problem, Eng. Anal. Bound. Elem. 37 (2013) 176–181.
[34] S.G. Zhang, A projection iterative algorithm for the Signorini problem using the boundary element method, Eng. Anal. Bound. Elem. 50 (2015) 313–319.
[35] T. Zhang, Z. Li, An analysis of finite volume element method for solving the Signorini problem, Appl. Math. Comput. 270 (2015) 830–841.