Accepted manuscript. Journal of Computational and Applied Mathematics (2015). DOI: http://dx.doi.org/10.1016/j.cam.2015.08.003. PII: S0377-0427(15)00410-0. Reference: CAM 10256. Received 15 April 2015; revised 30 June 2015.

Please cite this article as: D. Mirzaei, Error bounds for GMLS derivatives approximations of Sobolev functions, Journal of Computational and Applied Mathematics (2015), http://dx.doi.org/10.1016/j.cam.2015.08.003.

Error bounds for GMLS derivatives approximations of Sobolev functions

Davoud Mirzaei
Department of Mathematics, University of Isfahan, 81745-163 Isfahan, Iran

Abstract

This paper provides error estimates in $L_p$ norms for the generalized moving least squares (GMLS) derivatives approximations of Sobolev functions, and extends them to the local weak forms of DMLPG methods. These approximations are sometimes called diffuse or uncertain derivatives, but they are in fact direct approximations of the exact derivatives and possess optimal rates of convergence. The GMLS derivatives approximations differ from the standard derivatives of the MLS approximation and are much easier to evaluate, at considerably lower cost; in this article they are shown to achieve the same orders of convergence as the standard derivatives.

Keywords: Meshless methods, MLS approximation, GMLS approximation, Diffuse derivatives, DMLPG methods, Error bounds.

1. Introduction

The moving least squares (MLS) approximation in its current form was introduced by Lancaster and Salkauskas [1] in 1981. Many works concern the error analysis of this approximation; see, for example, [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. In recent years many authors have tried to improve and extend the MLS approximation in various directions; among such developments we can mention the complex variable MLS approximation [13] and the interpolating MLS [14, 15]. A presentation of the generalized moving least squares (GMLS) approximation and a connection to Backus-Gilbert optimality were given in [4], and an application to numerical integration was then developed in [16]. In [17], the concept of GMLS was linked to the so-called diffuse derivatives [18, 19], and an error bound in the $L_\infty$ norm was derived. The authors of [17] suggested abandoning the phrase "diffuse derivatives" in favour of "GMLS derivatives approximations", because there is nothing diffuse or uncertain about them.

*Corresponding author. Email address: [email protected] (Davoud Mirzaei). Preprint submitted to Elsevier, June 30, 2015.

Afterward, in [20, 21] the concept of GMLS approximation was employed to accelerate the meshless local Petrov-Galerkin (MLPG) methods of Atluri and his collaborators [22, 23]. The new methods were called direct MLPG (DMLPG) methods, because GMLS directly approximates the local weak forms and boundary operators without any detour via classical MLS shape functions. The optimal rate of convergence of the GMLS derivatives approximations toward the exact derivatives was proved in [17] in $L_\infty(\Omega)$ for sufficiently smooth functions over $\Omega^*$, where $\Omega^*$ can be larger than the domain $\Omega$ under consideration. In this paper we estimate the errors in $L_p(\Omega)$, $p \in [1, \infty]$, relax the bounds to Sobolev functions over $\Omega$, and then extend them to the local weak forms of DMLPG. The results of this paper can be used for analyzing DMLPG and all methods based on diffuse derivatives. When GMLS is applied to recover the value of a functional, it suffices to evaluate the functional on a space of polynomials, not on a certain trial space spanned by MLS shape functions. This significantly speeds up numerical calculations if the functional is complicated, e.g. a high-order derivative or a numerical integration against a test function. This is the main advantage of the GMLS approximation compared with the MLS approximation; for more details see [17, 20, 24].

The rest of this paper is organized as follows. In Section 2 the concept of GMLS approximation is reviewed, and in Section 3 error bounds in $L_p$ are proved. In Section 4 the error estimates and convergence rates are confirmed by some numerical experiments.

2. GMLS derivative approximation

Let $\Omega \subset \mathbb{R}^d$, for a positive integer $d$, be a nonempty and bounded set. In the next section, more conditions on $\Omega$ will be imposed. Assume $X = \{x_1, x_2, \ldots, x_N\} \subset \Omega$ is a set of $N$ scattered points, called centers or data sites. The distribution of the points should be good enough to make the analysis possible.
Henceforth, we write $\mathbb{P}_m^d$, for $m \in \mathbb{N}_0 = \{n \in \mathbb{Z} : n \geq 0\}$, for the space of $d$-variable polynomials of degree at most $m$; its dimension is $Q = \binom{m+d}{d}$, and a basis for this space is denoted by $\{p_1, \ldots, p_Q\}$. The MLS, as a meshless approximation method, provides an approximation $\hat u(x)$ of $u(x)$ in terms of the values $u(x_j)$ at the centers $x_j$ by
$$u(x) \approx \hat u(x) = \sum_{j=1}^N a_j(x)\, u(x_j), \qquad x \in \Omega,$$
where the $a_j$ are the MLS shape functions. MLS finds the best approximation to $u$ out of $\mathbb{P}_m^d$ with respect to a discrete $\ell_2$ norm induced by a moving inner product, where the corresponding weight function $w$ depends not only on the points $x_j$ but also on the point $x$ to be approximated. Indeed, the influence of the centers is governed by $w(x, x_j)$, which vanishes for arguments $x, x_j \in \Omega$ with $\|x - x_j\|_2$ greater than a certain threshold, say $\delta$. Thus we can define $w(x, x_j) = \varphi(\|x - x_j\|_2/\delta)$, where $\varphi : \mathbb{R}_{\geq 0} \to \mathbb{R}$ is a compactly supported function on $[0, 1]$. Derivatives of $u$ are usually approximated by derivatives of $\hat u$,
$$D^\alpha u(x) \approx D^\alpha \hat u(x) = \sum_{j=1}^N D^\alpha a_j(x)\, u(x_j), \qquad x \in \Omega,$$
and these are called standard derivatives. Since derivatives of the complicated and non-closed-form shape functions $a_j$ must be taken, the standard derivatives are known to be time-consuming. This is the reason why some authors avoid them and take a bypass via diffuse derivatives [18, 19]. Another approach is a direct approximation of $D^\alpha u$ from the data, without any detour via derivatives of $\hat u$. In this case we have
$$D^\alpha u(x) \approx \widehat{D^\alpha u}(x) = \sum_{j=1}^N a_{j,\alpha}(x)\, u(x_j), \qquad x \in \Omega. \tag{2.1}$$
This is a GMLS approximation, where $D^\alpha u$ is recovered directly from the $u(x_j)$ as a linear functional. It should be noted that $D^\alpha a_j(x) \neq a_{j,\alpha}(x)$ in general; in vector form,
$$a_\alpha(x) = W P^T (P W P^T)^{-1} D^\alpha p(x),$$
where $W$ is the $N \times N$ diagonal matrix carrying the weights $w_j = w(x, x_j)$ on its diagonal, $P$ is the $Q \times N$ matrix of values $p_k(x_j)$, $1 \leq k \leq Q$, $1 \leq j \leq N$, and $p = (p_1, \ldots, p_Q)^T$. It is clear that the operator $D^\alpha$ acts only on the basis polynomials $p$, which significantly reduces the cost of computations. Details are in [17]. This approach provides the GMLS derivatives approximations; [17] shows their coincidence with diffuse derivatives and gives an $L_\infty$ error bound for them. Equation (2.1) can even be extended to a more general recovery problem: under some conditions on a linear functional $\lambda$, we can write
$$(\lambda u)(x) \approx \widehat{\lambda u}(x) = \sum_{j=1}^N a_{j,\lambda}(x)\, u(x_j), \tag{2.2}$$
where the functional can be, for instance, a point evaluation, a derivative, or an integral operator. Here the $a_{j,\lambda}$ are functions associated with $\lambda$, and in vector form they can be obtained by
$$a_\lambda(x) = W P^T (P W P^T)^{-1} \lambda(p).$$
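To make the vector formulas concrete, here is a minimal 1D sketch (illustrative only: the helper name, the Wendland-type weight and all parameter choices are our assumptions, not taken from the paper) that builds the weights $a_{j,\alpha}(x)$ and applies (2.1) directly to data values:

```python
import numpy as np
from math import factorial

def gmls_derivative_weights(x, centers, m, delta, alpha):
    # a_alpha(x) = W P^T (P W P^T)^{-1} D^alpha p(x), with the monomial basis
    # shifted to x, i.e. p_k(y) = (y - x)^k, for numerical stability
    r = np.abs(centers - x) / delta
    # compactly supported weight w(x, x_j) = phi(|x - x_j|/delta), a Wendland-type bump
    w = np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
    Q = m + 1                                  # dim of P_m for d = 1
    P = np.vstack([(centers - x) ** k for k in range(Q)])   # Q x N matrix of p_k(x_j)
    Dp = np.zeros(Q)                           # D^alpha p at x: for the shifted basis,
    Dp[alpha] = factorial(alpha)               # D^alpha (y - x)^k at y = x is alpha! * delta_{k,alpha}
    G = (P * w) @ P.T                          # Gram matrix P W P^T (Q x Q)
    return w * (P.T @ np.linalg.solve(G, Dp))  # N-vector of weights a_{j,alpha}(x)

X = np.linspace(0.0, 1.0, 41)                  # centers with spacing 0.025
x, m, delta = 0.42, 2, 0.15
a = gmls_derivative_weights(x, X, m, delta, alpha=1)
approx = a @ np.sin(X)                         # direct recovery (2.1) of u'(x) for u = sin
```

Note that no shape functions are differentiated: $D^\alpha$ acts only on the polynomial basis, which is exactly the cost advantage discussed above. The weights reproduce derivatives of polynomials of degree at most $m$ exactly, and for smooth data such as $u = \sin$ the value `approx` agrees with $\cos(0.42)$ up to the expected discretization error.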

Thus it suffices to evaluate $\lambda$ on the space $\mathbb{P}_m^d$, not on a certain trial space spanned by certain shape functions. This significantly speeds up numerical calculations if the functional $\lambda$ is complicated, e.g. a numerical integration against a test function. This generalized approximation is the building block of different variations of the DMLPG method [20, 21, 24].

3. Error estimation

First we introduce some notation. As usual, $B(x, r)$ stands for the ball of radius $r$ centered at $x$. For a set of points $X = \{x_1, x_2, \ldots, x_N\}$ in a bounded domain $\Omega \subset \mathbb{R}^d$, the fill distance is defined to be
$$h_{X,\Omega} = \sup_{x \in \Omega} \min_{1 \leq j \leq N} \|x - x_j\|_2,$$
and the separation distance is defined by
$$q_X = \frac{1}{2} \min_{i \neq j} \|x_i - x_j\|_2.$$
A set $X$ of data sites is said to be quasi-uniform with respect to a constant $c_{qu} > 0$ if
$$q_X \leq h_{X,\Omega} \leq c_{qu}\, q_X. \tag{3.1}$$
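For illustration, the fill and separation distances and the quasi-uniformity check (3.1) can be computed numerically; the sketch below is a 1D version in which the supremum is approximated on a dense sample of $\Omega$ (names and sampling choices are ours, not the paper's):

```python
import numpy as np

def fill_distance(sample, X):
    # h_{X,Omega} = sup_{x in Omega} min_j |x - x_j|, sup taken over a dense sample
    d = np.abs(sample[:, None] - X[None, :])
    return d.min(axis=1).max()

def separation_distance(X):
    # q_X = (1/2) min_{i != j} |x_i - x_j|
    d = np.abs(X[:, None] - X[None, :])
    np.fill_diagonal(d, np.inf)
    return 0.5 * d.min()

X = np.linspace(0.0, 1.0, 11)            # regular centers with spacing 0.1
sample = np.linspace(0.0, 1.0, 2001)     # dense grid standing in for Omega = [0, 1]
h = fill_distance(sample, X)             # 0.05: the farthest points are the cell midpoints
q = separation_distance(X)               # 0.05
quasi_uniform = q <= h <= 2.0 * q        # (3.1) holds here with c_qu = 2
```

For a regular grid both quantities coincide (up to the sampling resolution); for scattered points $h_{X,\Omega}/q_X$ measures how far the set is from being quasi-uniform.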

A set $X = \{x_1, \ldots, x_N\} \subset \mathbb{R}^d$ with $N \geq Q$ is called $\mathbb{P}_m^d$-unisolvent if the zero polynomial is the only polynomial from $\mathbb{P}_m^d$ that vanishes on $X$. Since the error estimates will be established using a variety of Sobolev spaces, we introduce them now. Let $\Omega \subset \mathbb{R}^d$ be a domain. For $k \in \mathbb{N}_0$ and $p \in [1, \infty)$, we define the Sobolev space $W_p^k(\Omega)$ to consist of all $u$ with distributional derivatives $D^\alpha u \in L_p(\Omega)$, $|\alpha| \leq k$. The (semi-)norms associated with these spaces are defined as
$$|u|_{W_p^k(\Omega)} := \Big( \sum_{|\alpha| = k} \|D^\alpha u\|_{L_p(\Omega)}^p \Big)^{1/p}, \qquad \|u\|_{W_p^k(\Omega)} := \Big( \sum_{|\alpha| \leq k} \|D^\alpha u\|_{L_p(\Omega)}^p \Big)^{1/p}.$$
The case $p = \infty$ is defined in the standard way:
$$|u|_{W_\infty^k(\Omega)} := \sup_{|\alpha| = k} \|D^\alpha u\|_{L_\infty(\Omega)}, \qquad \|u\|_{W_\infty^k(\Omega)} := \sup_{|\alpha| \leq k} \|D^\alpha u\|_{L_\infty(\Omega)}.$$
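As a quick numerical illustration of these norms (a sketch under our own assumptions: $k = 1$, $p = 2$, $\Omega = (0, 1)$, a finite-difference approximation of the weak derivative and trapezoidal quadrature; none of this is prescribed by the paper):

```python
import numpy as np

def norm_W12(u, a=0.0, b=1.0, n=20001):
    # || u ||_{W_2^1(a,b)} = ( ||u||_{L2}^2 + ||u'||_{L2}^2 )^{1/2},
    # with u' approximated by central differences and L2 by the trapezoidal rule
    x = np.linspace(a, b, n)
    ux = u(x)
    du = np.gradient(ux, x)
    l2sq = lambda f: np.sum(0.5 * (f[:-1] ** 2 + f[1:] ** 2) * np.diff(x))
    return np.sqrt(l2sq(ux) + l2sq(du))

# For u(x) = sin(pi x) on (0, 1): ||u||_{L2}^2 = 1/2 and |u|_{W_2^1}^2 = pi^2/2,
# so the norm should be close to sqrt((1 + pi^2)/2)
val = norm_W12(lambda x: np.sin(np.pi * x))
```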

The first step in deriving error estimates is to consider only local regions $D$ that are star-shaped with respect to a ball. A domain $D \subset \mathbb{R}^d$ is said to be star-shaped with respect to a ball $B = B(y, \rho) = \{x \in \mathbb{R}^d : \|x - y\| \leq \rho\}$ if for every $x \in D$ the closed convex hull of $\{x\} \cup B$ is contained in $D$. Let
$$\rho_{\max} = \sup\{\rho : D \text{ is star-shaped with respect to a ball of radius } \rho\};$$
then the chunkiness parameter of $D$ is defined by $\gamma = d_D/\rho_{\max}$, where $d_D$ is the diameter of $D$. From here on, the letter $p$ is used both as the Sobolev index and in the polynomial notation. In [25, Chapter 4], Brenner and Scott discuss approximating a function $u \in W_q^{m+1}(D)$ by averaged Taylor polynomials $Q^m u \in \mathbb{P}_m^d$. These polynomials are often used to bound, in Sobolev norms, the errors of approximations based on polynomials. The averaged Taylor polynomials are defined as follows. Let $B$ be a ball, with respect to which $D$ is star-shaped, having radius $\rho \geq \frac{1}{2}\rho_{\max}$. Then
$$Q^m u(x) := \sum_{|\alpha| \leq m} \frac{1}{\alpha!} \int_B D^\alpha u(y)\, (x - y)^\alpha\, \phi(y)\, dy,$$
where $\phi(y) \geq 0$ is a $C^\infty$ "bump" function supported on $B$ satisfying both $\int_B \phi(y)\, dy = 1$ and $\max \phi \leq C \rho^{-d}$. The following propositions provide bounds on $u - Q^m u$.

Proposition 3.1. Let $B$ be a ball in $D$ such that $D$ is star-shaped with respect to $B$ and such that its radius $\rho \geq (1/2)\rho_{\max}$. Let $Q^m u$ be the Taylor polynomial of order $m$ of $u$ averaged over $B$, where $u \in W_p^{m+1}(D)$ for $p \geq 1$. Let $m + 1 > d/p$ for $p > 1$ and $m + 1 \geq d$ for $p = 1$. Then for $q \geq 1$ the following estimate holds for $|\alpha| \leq m + 1$:
$$\|u - Q^m u\|_{W_q^{|\alpha|}(D)} \leq C\, d_D^{\,m+1-|\alpha|+d(1/q-1/p)}\, |u|_{W_p^{m+1}(D)}, \tag{3.2}$$
where $d_D$ is the diameter of $D$ and $C = C(m, d, p, q, \gamma)$.

The proof of this proposition can be found in [25, 26, 12]. We can adapt the result of Proposition 3.1 for $D^\alpha u$ instead of $u$. Replacing $m$ and $u$ by $m - |\alpha|$ and $D^\alpha u$, respectively, gives
$$\|D^\alpha u - Q^{m-|\alpha|} D^\alpha u\|_{L_q(D)} \leq C\, d_D^{\,m+1-|\alpha|+d(1/q-1/p)}\, |D^\alpha u|_{W_p^{m+1-|\alpha|}(D)}, \tag{3.3}$$
provided that $m + 1 > |\alpha| + d/p$ for $p > 1$ and $m + 1 \geq |\alpha| + d$ for $p = 1$. The following result allows us to interchange the Taylor and derivative operators [25].

Proposition 3.2. For any $\alpha$ with $|\alpha| \leq m$,
$$D^\alpha Q^m u = Q^{m-|\alpha|} D^\alpha u \qquad \text{for all } u \in W_1^{|\alpha|}(D).$$
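For instance, in the simplest nontrivial case $d = 1$, $m = 1$, $\alpha = 1$, Proposition 3.2 can be checked directly from the definition of the averaged Taylor polynomial:

```latex
Q^1 u(x) = \int_B \big( u(y) + u'(y)(x - y) \big)\, \phi(y)\, dy,
\qquad
\frac{d}{dx}\, Q^1 u(x) = \int_B u'(y)\, \phi(y)\, dy = Q^0 u'(x),
```

since the only $x$-dependence is through the factor $(x - y)$; that is, $D\, Q^1 u = Q^0\, D u$.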

To bound the errors of the GMLS derivatives approximations, we follow the technique of stable local polynomial reproduction adapted from [8, 9], where [8] considers only interpolation while [9] includes more general functionals (see also [17]). Let us start with the definition of a stable local polynomial reproduction.

Definition 3.3. Consider a process that defines, for every $\mathbb{P}_m^d$-unisolvent set $X = \{x_1, x_2, \ldots, x_N\} \subset \Omega$ and each multi-index $\alpha$ with $|\alpha| \leq m$, a family of functions $s_{j,\alpha} : \Omega \to \mathbb{R}$, $1 \leq j \leq N$, to approximate
$$D^\alpha u(x) \approx \sum_{j=1}^N s_{j,\alpha}(x)\, u(x_j), \qquad x \in \Omega.$$
Then we say that the process provides a stable local polynomial reproduction of degree $m$ on $\Omega$ if there exist constants $h_0, C_{1,\alpha}, C_{2,\alpha} > 0$ such that for every $x \in \Omega$
1. $\sum_{j=1}^N s_{j,\alpha}(x)\, p(x_j) = D^\alpha p(x)$ for all $p \in \mathbb{P}_m^d$,
2. $\sum_{j=1}^N |s_{j,\alpha}(x)| \leq C_{1,\alpha}\, h_{X,\Omega}^{-|\alpha|}$,
3. $s_{j,\alpha}(x) = 0$ if $\|x - x_j\|_2 > C_{2,\alpha}\, h_{X,\Omega}$,
hold for all $|\alpha| \leq m$ and all $X$ with $h_{X,\Omega} \leq h_0$.

If a function $u$ with certain smoothness is approximated by a stable local polynomial reproduction system, then the error of approximation can be estimated in terms of the fill distance $h_{X,\Omega}$. This was done in [8, 9] for the identity and in [17] for more general functionals in $L_\infty$. The smoothness requirement is $u \in C^{m+1}(\Omega^*)$, with seminorm
$$|u|_{C^{m+1}(\Omega^*)} := \max_{|\beta| = m+1} \|D^\beta u\|_{L_\infty(\Omega^*)},$$
where $\Omega^* = \cup_{x \in \Omega} B(x, C_{2,\alpha} h_0)$ can obviously be larger than the exact domain $\Omega$. This is too strong for the result to be relevant in many applications. Here we estimate the errors in $L_q(\Omega)$, $q \in [1, \infty]$, and relax the bounds to Sobolev functions over $\Omega$, i.e. functions $u \in W_p^{m+1}(\Omega)$. To do this, we first assume that $\Omega$ is a bounded domain with Lipschitz continuous boundary. This assumption allows us to extend the underlying Sobolev function on $\Omega$ to all of $\mathbb{R}^d$, and finally to use the equivalence of the norms of the extended function and the original function.

Theorem 3.4. Suppose that $\Omega \subset \mathbb{R}^d$ is a bounded set with a Lipschitz boundary. Let $u \in W_p^{m+1}(\Omega)$ for $p \geq 1$, and assume $m + 1 > |\alpha| + d/p$ for $p > 1$ and $m + 1 \geq |\alpha| + d$ for $p = 1$. For $X = \{x_1, \ldots, x_N\}$ define
$$\widehat{D^\alpha u}(x) := \sum_{j=1}^N s_{j,\alpha}(x)\, u(x_j), \qquad x \in \Omega,$$
where $\{s_{j,\alpha}\}$ is a stable local polynomial reproduction of degree $m$ on $\Omega$ for $|\alpha| \leq m$ in the sense of Definition 3.3. Then there exist constants $C > 0$ and $h_0 > 0$ such that for all $X$ with $h_{X,\Omega} \leq h_0$ which are quasi-uniform with the same $c_{qu}$ in (3.1), the estimate
$$\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(\Omega)} \leq C\, h_{X,\Omega}^{\,m+1-|\alpha|-d(1/p-1/q)_+}\, \|u\|_{W_p^{m+1}(\Omega)} \tag{3.4}$$
holds for all $q \geq 1$, where $(x)_+ := \max\{x, 0\}$.


Proof. Since $\Omega$ is bounded and has a Lipschitz boundary, we can use the continuous extension operator $E_\Omega : W_p^{m+1}(\Omega) \to W_p^{m+1}(\mathbb{R}^d)$, $1 \leq p \leq \infty$, constructed by Stein [27], to extend any $u \in W_p^{m+1}(\Omega)$ to a function $v := E_\Omega u \in W_p^{m+1}(\mathbb{R}^d)$ with $v|_\Omega = u$. Since the extension is continuous, we have
$$\|v\|_{W_p^{m+1}(\mathbb{R}^d)} \leq C \|u\|_{W_p^{m+1}(\Omega)}. \tag{3.5}$$

First we prove (3.4) for $1 \leq q < \infty$. We bound the error over the subdomains $B_k = B(x_k, C_{2,\alpha} h_{X,\Omega}) \cap \Omega$, $k = 1, \ldots, N$, for $h_{X,\Omega} \leq h_0$, where $h_0$ is the constant from Definition 3.3; at the end, we will extend the error bound to the entire $\Omega$. Let $D_k = B(x_k, 2 C_{2,\alpha} h_{X,\Omega})$, $k = 1, \ldots, N$. Clearly, $D_k \not\subset \Omega$ in general, but $D_k$ is star-shaped with respect to a ball $\widetilde B \subset D_k$, with chunkiness parameter $\gamma = 2$. Now let $p = Q^m v \in \mathbb{P}_m^d$ be the Taylor polynomial of degree $m$ of $v$ on $D_k$ averaged over $\widetilde B$. Using the properties of the stable local polynomial reproduction and the facts that $v|_\Omega = u$ and $B_k \subset D_k$, we can write, for $x \in B_k$,
$$D^\alpha u(x) - \widehat{D^\alpha u}(x) = D^\alpha u(x) - D^\alpha p(x) + \sum_{j=1}^N s_{j,\alpha}(x)\big(p(x_j) - u(x_j)\big),$$
and in the $L_q$ norm,
$$\begin{aligned}
\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(B_k)}
&\leq \|D^\alpha u - D^\alpha p\|_{L_q(B_k)} + \Big\|\sum_{j=1}^N s_{j,\alpha}(\cdot)\big(p(x_j) - u(x_j)\big)\Big\|_{L_q(B_k)} \\
&\leq \|D^\alpha u - D^\alpha p\|_{L_q(B_k)} + c\, d_{B_k}^{d/q}\, \|v - p\|_{L_\infty(D_k)} \max_{x \in B_k} \sum_{j=1}^N |s_{j,\alpha}(x)| \\
&\leq \|D^\alpha u - D^\alpha p\|_{L_q(B_k)} + c\, d_{B_k}^{d/q}\, C_{1,\alpha}\, h_{X,\Omega}^{-|\alpha|}\, \|v - p\|_{L_\infty(D_k)}.
\end{aligned} \tag{3.6}$$
Note that
$$\Big\|\sum_{j=1}^N s_{j,\alpha}(\cdot)\big(p(x_j) - u(x_j)\big)\Big\|_{L_q(B_k)}^q
\leq \int_{B_k} \Big( \sum_{j=1}^N |s_{j,\alpha}(x)|\, \big|p(x_j) - u(x_j)\big| \Big)^q dx
\leq \|v - p\|_{L_\infty(D_k)}^q\, \max_{x \in B_k} \Big( \sum_{j=1}^N |s_{j,\alpha}(x)| \Big)^q \int_{B_k} dx,$$
which together with $\int_{B_k} dx \leq c\, d_{B_k}^d$ gives the second inequality in (3.6). From Proposition 3.1 we have
$$\|v - p\|_{L_\infty(D_k)} \leq c\, d_{D_k}^{\,m+1-d/p}\, |v|_{W_p^{m+1}(D_k)}.$$
Moreover, since $B_k \subset D_k$ and $D^\alpha p = D^\alpha Q^m v = Q^{m-|\alpha|} D^\alpha v$, (3.3) leads to
$$\|D^\alpha u - D^\alpha p\|_{L_q(B_k)} \leq \|D^\alpha v - D^\alpha p\|_{L_q(D_k)} \leq C\, d_{D_k}^{\,m+1-|\alpha|+d(1/q-1/p)}\, |D^\alpha v|_{W_p^{m+1-|\alpha|}(D_k)}.$$
If we assemble everything up to this point and use the facts that $d_{B_k} \leq 2 C_{2,\alpha} h_{X,\Omega}$, $d_{D_k} = 4 C_{2,\alpha} h_{X,\Omega}$ and $|D^\alpha v|_{W_p^{m+1-|\alpha|}(D_k)} \leq |v|_{W_p^{m+1}(D_k)}$, we get from (3.6)
$$\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(B_k)} \leq C\, h_{X,\Omega}^{\,m+1-|\alpha|+d(1/q-1/p)}\, |v|_{W_p^{m+1}(D_k)}. \tag{3.7}$$

Now we extend this bound to the entire $\Omega$. If $C_{2,\alpha} \geq 1$, then for every $x \in \Omega$ there is a center $x_j \in B(x, C_{2,\alpha} h_{X,\Omega}) \cap \Omega$. This clearly shows $\Omega = \cup_{k=1}^N B_k \subset \cup_{k=1}^N D_k =: \Omega^*$. If $C_{2,\alpha} < 1$, just set $C_{2,\alpha} = 1$; this does not disturb the definition of the stable local polynomial reproduction. Now we can write
$$\begin{aligned}
\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(\Omega)}
&\leq \Big( \sum_{k=1}^N \big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(B_k)}^q \Big)^{1/q} \\
&\leq C\, h_{X,\Omega}^{\,m+1-|\alpha|+d(1/q-1/p)} \Big( \sum_{k=1}^N |v|_{W_p^{m+1}(D_k)}^q \Big)^{1/q} \\
&\leq C\, h_{X,\Omega}^{\,m+1-|\alpha|+d(1/q-1/p)}\, N^{(1/q-1/p)_+} \Big( \sum_{k=1}^N |v|_{W_p^{m+1}(D_k)}^p \Big)^{1/p}.
\end{aligned} \tag{3.8}$$
The second inequality follows from (3.7), and the last one follows from the standard inequalities relating $p$ and $q$ norms on finite dimensional spaces. To bound $N$ by the fill distance, we use the quasi-uniformity condition (3.1) as follows. Let $d_\Omega$ be the diameter of $\Omega$. Since $\Omega$ is bounded, it is contained in a cube of side length $d_\Omega$; subdividing this cube into subcubes of side length $q_X/\sqrt d$ (hence of diameter $q_X < 2 q_X$), each subcube contains at most one point of $X$, so
$$N \leq \Big( \frac{\sqrt d\, d_\Omega}{q_X} \Big)^d \leq \Big( \frac{\sqrt d\, c_{qu}\, d_\Omega}{h_{X,\Omega}} \Big)^d = c\, h_{X,\Omega}^{-d}. \tag{3.9}$$
On the other side, we have
$$\sum_{k=1}^N |v|_{W_p^{m+1}(D_k)}^p = \sum_{|\beta| = m+1} \int_{\Omega^*} \Big( \sum_{k=1}^N \chi_{D_k}(x) \Big) |D^\beta v(x)|^p\, dx,$$
where $\chi_A$ denotes the characteristic function of a set $A$. A point $x$ lies in $D_k$ only if $x_k \in B(x, 2 C_{2,\alpha} h_{X,\Omega})$, and since the centers are separated by at least $2 q_X \geq 2 h_{X,\Omega}/c_{qu}$, the number of such centers is bounded by a constant depending only on $d$, $c_{qu}$ and $C_{2,\alpha}$. Hence $\sum_k \chi_{D_k} \leq c$ on $\Omega^*$, and therefore
$$\sum_{k=1}^N |v|_{W_p^{m+1}(D_k)}^p \leq C\, |v|_{W_p^{m+1}(\Omega^*)}^p. \tag{3.10}$$
Inserting (3.9) and (3.10) into (3.8) and using the identity $d(1/q - 1/p) - d(1/q - 1/p)_+ = -d(1/p - 1/q)_+$, we obtain
$$\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(\Omega)} \leq C\, h_{X,\Omega}^{\,m+1-|\alpha|-d(1/p-1/q)_+}\, |v|_{W_p^{m+1}(\Omega^*)} \leq C\, h_{X,\Omega}^{\,m+1-|\alpha|-d(1/p-1/q)_+}\, \|v\|_{W_p^{m+1}(\mathbb{R}^d)}.$$
Finally, we use the norm equivalence property (3.5) to get the final bound (3.4) for $q \in [1, \infty)$. The case $q = \infty$ can be proved by a similar argument: it suffices to apply (3.3) with $q = \infty$ to bound the first term on the right-hand side of the first line of (3.6), and to use the inequality
$$\Big\| \sum_{j=1}^N s_{j,\alpha}(\cdot)\big(p(x_j) - u(x_j)\big) \Big\|_{L_\infty(B_k)} \leq C\, h_{X,\Omega}^{-|\alpha|}\, \|v - p\|_{L_\infty(D_k)}$$
to bound the second term. Details are left to the reader.

Now, using the same strategy as presented in [9], we can prove that the functions $a_{j,\alpha}$, $1 \leq j \leq N$, of the GMLS approximation (2.1) provide a stable local polynomial reproduction in the sense of Definition 3.3; here, however, the domain $\Omega$ should satisfy an interior cone condition.

Definition 3.5. A set $\Omega \subset \mathbb{R}^d$ is said to satisfy an interior cone condition if there exist an angle $\theta \in (0, \pi/2)$ and a radius $r > 0$ such that for every $x \in \Omega$ a unit vector $\xi(x)$ exists such that the cone
$$C(x, \xi, \theta, r) := \big\{ x + t y : y \in \mathbb{R}^d,\ \|y\|_2 = 1,\ y^T \xi \geq \cos\theta,\ t \in [0, r] \big\}$$

is contained in Ω.

Here we just restate Theorem 4.12 of [17], which follows [9, Theorem 4.7].

Theorem 3.6. Suppose that $\Omega \subset \mathbb{R}^d$ is compact and satisfies an interior cone condition with radius $r$ and angle $\theta \in (0, \pi/2)$. Fix $m \in \mathbb{N}_0$ and a multi-index $\alpha \in \mathbb{N}_0^d$. Let
$$C_{1,\alpha} = 2^{1-|\alpha|}(1 + \sin\theta)^{-|\alpha|}, \qquad C_{2,\alpha} = \frac{16(1 + \sin\theta)^2 m^2}{3 \sin^2\theta}, \qquad h_0 = \frac{r}{C_{2,\alpha}}.$$
Then for every set $X$ which satisfies the quasi-uniformity condition (3.1) with $h_{X,\Omega} \leq h_0$, the basis functions $a_{j,\alpha}$ with $\delta = 2 C_{2,\alpha} h_{X,\Omega}$ provide a stable local polynomial reproduction as in Definition 3.3, with constants $\widetilde C_{1,\alpha} = c_1 C_{1,\alpha}$ and $\widetilde C_{2,\alpha} = 2 C_{2,\alpha}$ in place of $C_{1,\alpha}$ and $C_{2,\alpha}$ themselves. Here $c_1$ is a constant that can be derived explicitly in terms of $c_{qu}$, $C_{2,\alpha}$, and the lower and upper bounds of the weight function $w$.

Since $\{a_{j,\alpha}\}$ provides a stable local polynomial reproduction, the error bound (3.4) immediately holds for the GMLS derivatives approximation $\widehat{D^\alpha u}$. We summarize everything in the following corollary, a direct consequence of Theorems 3.4 and 3.6. But first we note that a region with a Lipschitz boundary automatically satisfies an interior cone condition; more details can be found in [28].

Corollary 3.7. Suppose that $\Omega \subset \mathbb{R}^d$ is a bounded domain with a Lipschitz boundary. Fix $m \in \mathbb{N}_0$, $p \geq 1$ and $q \geq 1$. Let $u \in W_p^{m+1}(\Omega)$. For $\alpha$ with $|\alpha| \leq m$, assume $m + 1 > |\alpha| + d/p$ for $p > 1$ and $m + 1 \geq |\alpha| + d$ for $p = 1$. Then, in the situation of Theorem 3.6, there exist constants $C$ and $h_0$ such that for all $X$ with $h_{X,\Omega} \leq h_0$ which are quasi-uniform with the same $c_{qu}$ in (3.1), we have
$$\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(\Omega)} \leq C\, h_{X,\Omega}^{\,m+1-|\alpha|-d(1/p-1/q)_+}\, \|u\|_{W_p^{m+1}(\Omega)}, \tag{3.11}$$
where
$$\widehat{D^\alpha u}(x) = \sum_{j=1}^N a_{j,\alpha}(x)\, u(x_j).$$
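The predicted rate can be observed numerically. The following self-contained sketch (1D, a smooth $u$, maximum-norm errors, $m = 2$, $\alpha = 1$; all choices are ours for illustration) halves $h$ once and prints the observed order, which should be close to $m + 1 - |\alpha| = 2$:

```python
import numpy as np
from math import factorial, log2

def gmls_diff(x, X, u_vals, m, delta, alpha):
    # GMLS derivative value as in (2.1), with a = W P^T (P W P^T)^{-1} D^alpha p(x)
    r = np.abs(X - x) / delta
    w = np.where(r < 1.0, (1.0 - r) ** 6 * (35 * r**2 + 18 * r + 3), 0.0)
    P = np.vstack([(X - x) ** k for k in range(m + 1)])
    Dp = np.zeros(m + 1)
    Dp[alpha] = factorial(alpha)
    a = w * (P.T @ np.linalg.solve((P * w) @ P.T, Dp))
    return a @ u_vals

def max_error(h, m=2, alpha=1):
    X = np.arange(0.0, 1.0 + 1e-12, h)
    u_vals = np.sin(3 * X)
    xs = np.linspace(0.1, 0.9, 101)          # interior points, away from the boundary
    errs = [abs(gmls_diff(x, X, u_vals, m, 2 * m * h, alpha) - 3 * np.cos(3 * x))
            for x in xs]
    return max(errs)

e1, e2 = max_error(0.02), max_error(0.01)
order = log2(e1 / e2)                        # observed convergence order
print(e1, e2, order)
```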

Comparing with the results of [12, 6, 5, 2, 3, 7], one can see that the rates of convergence of these derivatives toward the exact derivatives are the same as those of the standard derivatives.

Remark 3.8. One can use interpolation arguments to extend the bound (3.11) to functions $u$ lying in the fractional-order Sobolev space $W_p^{m+s}(\Omega)$, $0 \leq s < 1$. The final bound is
$$\big\|D^\alpha u - \widehat{D^\alpha u}\big\|_{L_q(\Omega)} \leq C\, h_{X,\Omega}^{\,m+s-|\alpha|-d(1/p-1/q)_+}\, \|u\|_{W_p^{m+s}(\Omega)}, \tag{3.12}$$
provided that $m > |\alpha| + d/p$ for $p > 1$ and $m \geq |\alpha| + d$ for $p = 1$.

Remark 3.9. The functional $\lambda$ of Section 2 can be generalized to the local weak forms of the direct meshless local Petrov-Galerkin (DMLPG) methods [20]. For a special local subdomain, $\lambda$ has the form
$$(\lambda u)(x) := \frac{1}{\mathrm{vol}(B)} \int_B v\Big(\frac{x - y}{r_0}\Big)\, L u(y)\, dy, \qquad B = B(x, r_0) \subset \Omega, \tag{3.13}$$
where $L$ is a linear differential operator with continuous coefficients and $v$ is a regular and compactly supported test function on $B(0, 1)$. The order of $L$ should not exceed $m$, and of course $u$ should admit the action of this operator. The order of $L$ can usually be reduced by integration by parts. Local weak forms are fundamental elements of the MLPG and DMLPG methods. Applying the GMLS approximation of [17], or of this paper, to the functionals (3.13) leads to the DMLPG methods [20, 21]. The same analysis can be carried out to obtain the following result.

Theorem 3.10. Suppose that $\lambda$ is defined as in (3.13) and all conditions of Corollary 3.7 are satisfied. Define
$$\widehat{\lambda u}(x) := \sum_{j=1}^N a_{j,\lambda}(x)\, u(x_j),$$
where the $a_{j,\lambda}$ are the functions derived from the GMLS approximation of the functional $\lambda$. Then there exist positive constants $C$ and $h_0$ such that for $u \in W_p^{m+1}(\Omega)$ and all quasi-uniform sets $X \subset \Omega$ with $h_{X,\Omega} \leq h_0$ we have
$$\big\|\lambda u - \widehat{\lambda u}\big\|_{L_q(\Omega)} \leq C\, h_{X,\Omega}^{\,n+1-\ell-d(1/p-1/q)_+}\, \|u\|_{W_p^{m+1}(\Omega)}, \qquad q \geq 1. \tag{3.14}$$
Here $\ell$ is the maximal order of the derivatives involved in the linear operator $L$ (or in its weakened form, if integration by parts has been applied), and $n \leq m$ is determined such that $\lambda(x^\alpha) \neq 0$ for some $\alpha$ with $|\alpha| \leq n$ and $\lambda(x^\alpha) = 0$ for all $\alpha$ with $|\alpha| > n$.
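A sketch of how such a weak-form functional can be realized numerically (1D, $L = d^2/dy^2$, a polynomial bump test function, Gauss-Legendre quadrature; every concrete choice here is an assumption for illustration, not the paper's setup). The key point is that $\lambda$ only needs to be evaluated on the polynomial basis:

```python
import numpy as np

def lam(g, x, r0, nq=64):
    # (lambda u)(x) = (1/vol(B)) * int_B v((x - y)/r0) * Lu(y) dy on B = (x - r0, x + r0),
    # applied here to g = Lu directly, with v(t) = (1 - t^2)^2 supported on (-1, 1)
    t, wq = np.polynomial.legendre.leggauss(nq)   # nodes/weights on [-1, 1]
    y = x + r0 * t
    v = (1.0 - t**2) ** 2                         # v is even, so v((x - y)/r0) = v(t)
    return 0.5 * np.sum(wq * v * g(y))            # (1/(2 r0)) * r0 * int_{-1}^{1} ...

x, r0, m, h = 0.5, 0.1, 3, 0.05
X = np.arange(0.0, 1.0 + 1e-12, h)
delta = 0.3
r = np.abs(X - x) / delta
w = np.where(r < 1.0, (1.0 - r) ** 6 * (35 * r**2 + 18 * r + 3), 0.0)
P = np.vstack([(X - x) ** k for k in range(m + 1)])

# lambda on the shifted basis p_k(y) = (y - x)^k, with L p_k = k (k - 1) (y - x)^(k - 2)
lam_p = np.array([lam(lambda y, k=k: k * (k - 1) * (y - x) ** max(k - 2, 0), x, r0)
                  for k in range(m + 1)])
a = w * (P.T @ np.linalg.solve((P * w) @ P.T, lam_p))   # weights a_{j,lambda} of (2.2)

# Exactness on P_m: for u(y) = y^3 (so Lu(y) = 6y) the GMLS value equals lambda(u)
err = abs(a @ X**3 - lam(lambda y: 6.0 * y, x, r0))
```

Since the weights are built from $\lambda(p)$ alone, the recovery is exact on $\mathbb{P}_m^d$ by linearity; for general $u$ the error behaves as in (3.14).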

Note that if $\lambda(x^\alpha) = 0$ for $0 \leq |\alpha| \leq m$, i.e. $\lambda(p) = 0$, then the corresponding optimization problem (see [17, Section 3]) gives the trivial solution $a = 0$, which is of no interest. Besides, if $\lambda(x^\alpha) = 0$ for all $\alpha$ with $n < |\alpha| \leq m$, then we miss the contributions of the polynomials of degrees greater than $n$, and the order stays at $n + 1 - \ell$. As an example, see the experimental results of [20] for the Poisson equation, where DMLPG fails at $m = 1$ (because of the trivial solution) and the convergence rates do not increase when going from $m = 2k$ to $m = 2k + 1$, $k \geq 1$, because the contribution of the odd powers is missed.

4. Experimental results

First, we apply the procedure to the 1D functions
$$u_1(x) = |1 - 4x^2|\, \sin(x - 0.5) \in W_\infty^2[0, 1],$$
$$u_2(x) = \frac{1}{3/2 + \tau}\, |x - 0.5|^{3/2+\tau} \in W_2^{2+\tau}[0, 1], \qquad \tau > 0,$$

to recover the functions themselves and their first weak derivatives. Let $X$ consist of $N$ regular points with spacing $h$ in $[0, 1]$. Since $u_k \in W_p^2$, $p = 2, \infty$, we set $m = 1$. Moreover, $\delta = 2mh$ is chosen, and the compactly supported $C^4$ Wendland function $\varphi(r) = (1 - r)_+^6 (35 r^2 + 18 r + 3)$ is employed as the weight function in all examples. Computational orders of the errors for $u_1$ and $u_2$ are given in Tables 1 and 2, respectively. For the function $u_2$

we put $\tau = 0.001$. The $L_2$ norms are approximated using a Gauss-Legendre quadrature with many points, and the $L_\infty$ norms are computed on a very fine mesh. Since the mesh size $h$ is halved row by row, the orders are computed by
$$\log_2 \Big( \frac{e(h)}{e(h/2)} \Big),$$
where $e(h)$ is the error at level $h$. Now we consider the following example to examine the orders in a higher-dimensional case:
$$u(x) = \|x\|_2^\lambda, \qquad x \in \Omega \subset \mathbb{R}^d,$$

where $\lambda$ is a real parameter and $\Omega$ is a bounded region around the origin. The results of this part can be compared with the experimental results of [12] for the MLS approximation. It is well known that $u \in W_p^\tau(\Omega)$ iff $\lambda > \tau - d/p$. We let $p = 2$ and $\Omega = [-0.5, 0.5]^2 \subset \mathbb{R}^2$, and we assign the two values $1.5$ and $3$ to $\lambda$. According to the theory, in the first case we set $m = 2$ and examine (3.12), and in the second case we set $m = 3$ and examine (3.11). In both cases a regular mesh distribution with fill distance $h$ is used as the set of centers, the above Wendland function is employed as the weight function, and $\delta = 2mh$ is used as the support size. Results are presented in Tables 3 and 4 for $q = 2, \infty$ and derivatives $\alpha$ of different orders. The $L_2$ errors are computed using a $(200 \times 200)$-point Gauss-Legendre quadrature, and the $L_\infty$ errors are computed on a very fine regular mesh of size $h_s = 0.005$. As we can see, the experimental results confirm the theoretical bounds. Comparing with [12], both the theoretical and the numerical results show that the orders of convergence of the MLS and GMLS approximations are the same. However, GMLS is computationally more attractive because derivatives of the shape functions are not required. For example, the CPU times for approximating the second derivative by GMLS on the mentioned test points in the last example are 10.5, 12.0, 13.9 and 17.6 seconds for h = 0.1, 0.05, 0.025 and 0.0125, while they are 30.1, 32.5, 35.0 and 42.3 seconds with the MLS approximation. As discussed in the previous section, the GMLS can be applied to MLPG functionals to construct a new class of meshless methods for the numerical solution of PDEs, called DMLPG methods. Comparing the computational costs, the results of [20, 21, 24, 29] show the superiority of the new methods over the classical ones.

References

[1] P. Lancaster, K. Salkauskas, Surfaces generated by moving least squares methods, Mathematics of Computation 37 (1981) 141-158.
[2] T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, P. Krysl, Meshless methods: an overview and recent developments, Computer Methods in Applied Mechanics and Engineering 139 (1996) 3-47.
[3] W. K. Liu, S. Li, T. Belytschko, Moving least square reproducing kernel methods, (I) methodology and convergence, Computer Methods in Applied Mechanics and Engineering 143 (1997) 113-154.
[4] D. Levin, The approximation power of moving least-squares, Mathematics of Computation 67 (1998) 1517-1531.
[5] M. G. Armentano, R. G. Durán, Error estimates for moving least square approximations, Applied Numerical Mathematics 37 (2001) 397-416.
[6] M. G. Armentano, Error estimates in Sobolev spaces for moving least square approximations, SIAM Journal on Numerical Analysis 39 (2001) 38-51.
[7] W. Han, X. Meng, Error analysis of the reproducing kernel particle method, Computer Methods in Applied Mechanics and Engineering 190 (2001) 6157-6181.
[8] H. Wendland, Local polynomial reproduction and moving least squares approximation, IMA Journal of Numerical Analysis 21 (2001) 285-300.
[9] H. Wendland, Scattered Data Approximation, Cambridge University Press, 2005.
[10] R. J. Cheng, Y. M. Cheng, Error estimates for the finite point method, Applied Numerical Mathematics 58 (2008) 884-898.
[11] R. J. Cheng, Y. M. Cheng, Error estimates of element-free Galerkin method for potential problems, Acta Physica Sinica 57 (2008) 6037-6046.
[12] D. Mirzaei, Analysis of moving least squares approximation revisited, Journal of Computational and Applied Mathematics 282 (2015) 237-250.
[13] H. P. Ren, J. Cheng, A. Huang, The complex variable interpolating moving least squares method, Applied Mathematics and Computation 219 (2012) 1724-1736.
[14] T. Most, C. Bucher, New concepts for moving least squares: an interpolating non-singular weighting function and weighted nodal least squares, Engineering Analysis with Boundary Elements 32 (2008) 461-470.
[15] J. F. Wang, F. X. Sun, A. X. Huang, Error estimates for the interpolating moving least-squares method, Applied Mathematics and Computation 245 (2014) 321-342.
[16] D. Levin, Stable integration rules with scattered integration points, Journal of Computational and Applied Mathematics 112 (1999) 181-187.
[17] D. Mirzaei, R. Schaback, M. Dehghan, On generalized moving least squares and diffuse derivatives, IMA Journal of Numerical Analysis 32 (2012) 983-1000.
[18] B. Nayroles, G. Touzot, P. Villon, Generalizing the finite element method: Diffuse approximation and diffuse elements, Computational Mechanics 10 (1992) 307-318.
[19] D. W. Kim, Y. Kim, Point collocation methods using the fast moving least-square reproducing kernel approximation, International Journal for Numerical Methods in Engineering 56 (2003) 1445-1464.
[20] D. Mirzaei, R. Schaback, Direct Meshless Local Petrov-Galerkin (DMLPG) method: a generalized MLS approximation, Applied Numerical Mathematics 68 (2013) 73-82.
[21] D. Mirzaei, R. Schaback, Solving heat conduction problems by the Direct Meshless Local Petrov-Galerkin (DMLPG) method, Numerical Algorithms 65 (2014) 275-291.
[22] S. N. Atluri, S. Shen, The Meshless Local Petrov-Galerkin (MLPG) Method, Tech Science Press, Encino, CA, 2002.
[23] S. N. Atluri, T.-L. Zhu, A new meshless local Petrov-Galerkin (MLPG) approach in computational mechanics, Computational Mechanics 22 (1998) 117-127.
[24] D. Mirzaei, A new low-cost meshfree method for two and three dimensional problems in elasticity, Applied Mathematical Modelling, in press. doi:10.1016/j.apm.2015.02.050.
[25] S. C. Brenner, L. R. Scott, The Mathematical Theory of Finite Element Methods, 3rd edition, Springer, 2008.
[26] F. Narcowich, J. Ward, H. Wendland, Sobolev bounds on functions with scattered zeros, with applications to radial basis function surface fitting, Mathematics of Computation 74 (250) (2005) 743-763.
[27] E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, New Jersey, 1971.
[28] J. Wloka, Partial Differential Equations, Cambridge University Press, Cambridge, 1987.
[29] M. Ramezani, M. Mojtabaei, D. Mirzaei, DMLPG solution of the fractional advection-diffusion problem, Engineering Analysis with Boundary Elements 59 (2015) 36-42.

Table 1: Orders for $m = 1$, $u_1 \in W_\infty^2[0,1]$

              L2               L∞
  h         α=0     α=1      α=0     α=1
  0.1       --      --       --      --
  0.05      1.97    1.29     1.98    0.99
  0.025     1.99    1.20     1.98    0.99
  0.0125    1.99    1.13     1.98    1.00
  0.00625   2.00    1.13     1.99    1.00
  Theory    2       1        2       1

Table 2: Orders for $m = 1$, $u_2 \in W_2^{2+\tau}[0,1]$, $\tau = 0.001$

              L2               L∞
  h         α=0     α=1      α=0     α=1
  0.1       --      --       --      --
  0.05      1.89    0.98     1.50    0.51
  0.025     1.91    0.96     1.50    0.51
  0.0125    1.92    0.96     1.50    0.55
  0.00625   1.92    0.94     1.56    0.50
  Theory    2.001   1.001    1.501   0.501

Table 3: Orders for $\lambda = 1.5$ and $m = 2$

              L2                      L∞
  h         α=(0,0)  α=(1,0)      α=(0,0)  α=(1,0)
  0.1       --       --           --       --
  0.05      2.56     1.43         1.50     0.50
  0.025     2.52     1.47         1.50     0.50
  0.0125    2.56     1.48         1.50     0.50
  Theory    2.5      1.5          1.5      0.5

Table 4: Orders for $\lambda = 3$ and $m = 3$

              L2                               L∞
  h         α=(0,0)  α=(1,0)  α=(2,0)      α=(0,0)  α=(1,0)  α=(2,0)
  0.1       --       --       --           --       --       --
  0.05      3.75     3.11     2.76         3.00     2.02     1.00
  0.025     3.86     3.01     1.84         3.00     2.00     1.00
  0.0125    3.88     3.01     1.87         3.00     2.02     1.00
  Theory    4        3        2            3        2        1