Monte Carlo solution of the Neumann problem for the nonlinear Helmholtz equation


Mathematics and Computers in Simulation 117 (2015) 1–9

Abdujabar Rasulov a,∗, Gulnora Raimova b

a Mathematical Modeling and Informatics Department, University of World Economy and Diplomacy, Tashkent, Uzbekistan
b Academy of Public Administration under the President of the Republic of Uzbekistan, Uzbekistan

Received 13 September 2013; received in revised form 9 May 2015; accepted 15 May 2015 Available online 5 June 2015

Abstract

In this paper we consider the Neumann boundary-value problem for the Helmholtz equation with a polynomial nonlinearity on the right-hand side. We assume that a solution to our problem exists, which permits us to construct an unbiased Monte Carlo estimator using the trajectories of certain branching processes. To do so we utilize Green's formula and an elliptic mean-value theorem. This allows us to derive a special integral equation which equates the value of the function u(x) at the point x with its integral over the domain D and over the boundary ∂D = G of the domain. The solution of the problem is then given in the form of a mathematical expectation over certain random variables. According to this probabilistic representation, a branching stochastic process is constructed, and an unbiased estimator of the solution of the nonlinear problem is formed by taking the expectation over this branching process. The unbiased estimator which we derive has finite variance. In addition, the proposed branching process has a finite average number of branches, and hence is easily simulated. Finally, we provide numerical results based on experiments carried out with these algorithms to validate our approach.

© 2015 International Association for Mathematics and Computers in Simulation (IMACS). Published by Elsevier B.V. All rights reserved.

Keywords: Monte Carlo method; Branching random process; Martingale; Unbiased estimator

1. Introduction

In this work we consider the Neumann boundary-value problem for the Helmholtz equation:

\[
Mu \equiv -\Delta u(x) + a(x)\,u(x) = f(u), \quad x \in D; \qquad \left.\frac{\partial u}{\partial n}\right|_{G} = \varphi(x),
\]

with a polynomial nonlinearity on the right-hand side:

\[
f(u) = \sum_{n=0}^{\infty} b_n(x)\, u^n(x).
\]

∗ Corresponding author. E-mail addresses: [email protected] (A. Rasulov), [email protected] (G. Raimova).
http://dx.doi.org/10.1016/j.matcom.2015.05.002


Ermakov et al. considered the linear case of the equation −Δu(x) + a(x)u(x) = f(x). In their work [2], a sequence of unbiased estimators was built using trajectories of a random process, and the question of simulating the required probability densities was discussed. Furthermore, some of the results from [2] were used for other probabilistic representations and in the simulation of the corresponding densities. The Neumann problem for the equation −Δu + cu = au^n + g in three dimensions was considered more recently by Makarov [5]; there a and c are constants and g(x) is a smooth function. In that work, on the basis of a difference approximation to the boundary condition and using Green's formula over a ball, the differential equation was converted into an equivalent integral equation, which was then solved by standard Monte Carlo techniques. Under the conditions na/c < 1 and ‖g‖/c < 1, it was proved that the average number of branches is finite and that the estimator is unbiased with finite variance. We note that the conditions placed in [5] on the coefficients of the equation −Δu + cu = au^n + g are the same as those that hold for our equations.

A nonlinear problem with a finite polynomial nonlinearity and a zero boundary condition was considered by one of the authors (A.R.). In [7], computational Monte Carlo algorithms for the solution of the nonlinear Neumann problem were derived, and unbiased estimators based on trajectories of the corresponding random processes were constructed; the behavior of the branching processes and the variance of these estimators were also studied. The statistical algorithms considered in the current article differ from those in [7] in the choice of transition densities, the branching rules, and the corresponding branching processes. Moreover, the unbiasedness condition and the degeneration of the process allow us to obtain simple constraints on the coefficients of the equation that are independent of the diameter of the domain. The constructed algorithm could be applied to the theory of a wing in a compressible gas [3].

2. Description of the problem

Let D ⊂ R^m be a bounded convex domain with a sufficiently smooth boundary G. We consider the following Neumann boundary-value problem:

\[
Mu \equiv -\Delta u(x) + a(x)\,u(x) = \sum_{i=0}^{\infty} b_i(x)\,u^i(x), \quad x \in D, \qquad
\left.\frac{\partial u}{\partial n}\right|_{G} = \varphi(x). \tag{1}
\]

Here ϕ(x) ∈ C(G), b_i(x) ∈ C(D̄) (i = 0, 1, 2, ...), a(x) ∈ C(D), a(x) > 0, and n = n_x is the external normal to the surface G at the point x. Assume that the functions ϕ(x), b_i(x), a(x) are such that there exists [1] a unique classical continuous solution u(x) ∈ C²(D) ∩ C(D̄) of problem (1). The functions b_i(x) (i = 0, 1, 2, ...) are assumed to be bounded, sup_{x∈D} |b_i(x)| ≤ b̄_i, and the following series is assumed to converge:

\[
\sum_{k=0}^{\infty} k\,\bar b_k < \infty. \tag{2}
\]

The purpose of this article is to derive a Monte Carlo algorithm for solving problem (1) at an arbitrary point x ∈ D. To do this, we use some known facts from mathematical physics to obtain an integral representation of the solution to this problem. Then, under the above assumptions, we build unbiased Monte Carlo estimators with finite variance.

3. Integral representation of the solution

We now derive an integral equation for the solution of problem (1) at an arbitrary point x ∈ D. To do this we use Green's theorem and an elliptic mean-value theorem. Let the boundary G be sufficiently smooth so that Green's formula is valid:

\[
\int_D (v\,Mu - u\,Mv)\,dy = \int_G \left(u\,\frac{\partial v}{\partial n_y} - v\,\frac{\partial u}{\partial n_y}\right) d_y S
\]


for any u, v ∈ C²(D) ∩ C(D̄). Let 0 < a ≤ a(x) ≤ c², a, c = const > 0, and let

\[
v(x, y) = \frac{\exp(-c\,r)}{\sigma_m\,(m-2)\,r^{m-2}}, \qquad r = |x - y|,
\]

where σ_m is the surface area of the unit sphere in R^m. We notice that the function v(x, y) is a Levi function for the operator M, and that M is formally self-adjoint. It is well known [2] that the following mean-value theorem is valid:

\[
u(x) = \int_D \bigl[v(x,y)\,f(u(y)) - u(y)\,M_y v(x,y)\bigr]\,dy
+ \int_G \left[v(x,y)\,\frac{\partial u(y)}{\partial n_y} - u(y)\,\frac{\partial v(x,y)}{\partial n_y}\right] d_y S, \tag{3}
\]

where f(u) denotes f(u(y)) = Σ_{i=0}^∞ b_i(y) u^i(y).

Let us find M_y v and ∂v/∂n_y with the help of the Laplace operator in spherical coordinates [6]:

\[
\Delta = \frac{\partial^2}{\partial r^2} + \frac{m-1}{r}\,\frac{\partial}{\partial r} + \frac{1}{r^2}\,\delta,
\]

where δ is the Laplace–Beltrami operator on the unit sphere S₁(0) centered at zero, acting only on the angular variables. Thus we have:

\[
\frac{\partial v}{\partial r} = v\left(-c - \frac{m-2}{r}\right); \qquad
\frac{\partial^2 v}{\partial r^2} = v\left(c + \frac{m-2}{r}\right)^2 + v\,\frac{m-2}{r^2}; \qquad
\Delta_y v = \frac{\partial^2 v}{\partial r^2} + \frac{m-1}{r}\,\frac{\partial v}{\partial r}
= \frac{\exp(-c\,r)\,\bigl(c^2 r + c(m-3)\bigr)}{\sigma_m\,(m-2)\,r^{m-1}};
\]
\[
\frac{\partial v}{\partial n_y} = -\left(1 + \frac{c\,r}{m-2}\right)\frac{\exp(-c\,r)}{\sigma_m\,r^{m-1}}\,\cos\varphi_{xy}, \qquad
\cos\varphi_{xy} = \frac{(y - x,\ n_y)}{r}.
\]
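These radial computations are easy to get wrong, so a quick symbolic check (ours, not part of the paper) confirms the expression for Δ_y v:

```python
# Symbolic sanity check: for the Levi function
# v = exp(-c r) / (sigma_m (m-2) r^(m-2)), the radial Laplacian
# v'' + (m-1)/r * v' equals v * (c^2 + c (m-3)/r).
import sympy as sp

r, c, m, sigma = sp.symbols('r c m sigma', positive=True)
v = sp.exp(-c*r) / (sigma*(m - 2)*r**(m - 2))

radial_laplacian = sp.diff(v, r, 2) + (m - 1)/r*sp.diff(v, r)
print(sp.simplify(radial_laplacian - v*(c**2 + c*(m - 3)/r)))  # prints 0
```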

By substituting these results into expression (3) we have:

\[
u(x) = \int_D \frac{\exp(-c\,r)}{\sigma_m\,(m-2)\,r^{m-2}}\,f(u(y))\,dy
+ \int_D \frac{c^2 r + c(m-3)}{\sigma_m\,(m-2)\,r^{m-1}}\,\exp(-c\,r)\left(1 - \frac{a(y)\,r}{c^2 r + c(m-3)}\right) u(y)\,dy
\]
\[
\qquad + \int_G \left(1 + \frac{c\,r}{m-2}\right)\frac{\exp(-c\,r)}{\sigma_m\,r^{m-1}}\,\cos\varphi_{xy}\,u(y)\,d_y S
+ \int_G \frac{\exp(-c\,r)}{\sigma_m\,(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S. \tag{4}
\]

Let

\[
k_1(r) = \left(1 + \frac{c\,r}{m-2}\right)\exp(-c\,r), \qquad
k_2(r) = \frac{c^2 r + c(m-3)}{m-2}\,\exp(-c\,r).
\]

We notice that k₁′(r) = −k₂(r) and k₁(0) = 1, and thus ∫₀^ρ k₂(r) dr = 1 − k₁(ρ). With the new notation, expression (4) takes the form

\[
u(x) = \frac{1}{\sigma_m}\Biggl\{ \int_D \frac{k_2(r)}{r^{m-1}}\left[\frac{r\,f(u(y))}{c^2 r + c(m-3)} + \left(1 - \frac{a(y)\,r}{c^2 r + c(m-3)}\right)u(y)\right] dy
+ \int_G k_1(r)\,\frac{\cos\varphi_{xy}}{r^{m-1}}\,u(y)\,d_y S
+ \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S \Biggr\}.
\]

Note that G is a Lyapunov surface. By a theorem on the boundary values of a double-layer potential [6], we can derive a formula for x ∈ G as well. As a result, we obtain:

\[
u(x) = \frac{1 + I_G(x)}{\sigma_m}\Biggl\{ \int_D \frac{k_2(r)}{r^{m-1}}\left[\frac{r\,f(u(y))}{c^2 r + c(m-3)} + \left(1 - \frac{a(y)\,r}{c^2 r + c(m-3)}\right)u(y)\right] dy
+ \int_G k_1(r)\,\frac{\cos\varphi_{xy}}{r^{m-1}}\,u(y)\,d_y S
+ \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S \Biggr\}. \tag{5}
\]

Here I_G(x) is the indicator function of the boundary G.

4. Probabilistic representation of the solution

We use a convenient form of the last expression (5):

\[
u(x) = \int_{\bar D} p(x,y)\left[(1 - g(x,y))\,u(y) + g(x,y)\,\frac{f(u(y))}{a(y)}\right] d\mu(y) + F(x). \tag{6}
\]


Here

\[
p(x,y) = \frac{1 + I_G(x)}{\sigma_m}
\begin{cases}
k_1(r)\,\dfrac{\cos\varphi_{xy}}{r^{m-1}}, & y \in G,\\[6pt]
\dfrac{k_2(r)}{r^{m-1}}, & y \in D,
\end{cases}
\tag{7}
\]

\[
g(x,y) =
\begin{cases}
0, & y \in G,\\[4pt]
\dfrac{a(y)\,r}{c^2 r + c(m-3)}, & y \in D,
\end{cases}
\tag{8}
\]

\[
F(x) = \frac{1 + I_G(x)}{\sigma_m} \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S,
\]

where µ is a measure defined on the Borel σ-algebra of subsets of D̄ by the relation µ(A) = λ(A) + S(A ∩ G), λ being Lebesgue measure in R^m and S a surface measure. Since we are working on a convex domain, we have cos ϕ_{xy} ≥ 0 and accordingly p(x, y) ≥ 0. If a ≡ 0, ϕ ≡ 0, f ≡ 0, then u ≡ 1 is a solution to the Neumann boundary-value problem; substituting it into (6) shows that p(x, y) is a transition density. It is clear that 0 ≤ g(x, y) ≤ 1, since a(x) ≤ c² for x ∈ D̄.

When applying Monte Carlo methods to the solution of these equations, we require a convergent iterative method. The nonlinear integral operator ℜ(u) corresponding to the probabilistic representation (6) should therefore be a contraction operator; if it is, then it has a fixed point [4], and u(x) can be obtained as the limit of the iterative process {u_n(x)}, where u_{n+1}(x) = ℜu_n(x). We consider the nonlinear integral operator ℜ, acting on functions from C(D̄), given by the formula

\[
\Re u(x) = \int_{\bar D} R(x, y, u(y))\,d\mu(y) + F(x)
= \int_{\bar D} p(x,y)\left[(1 - g(x,y))\,u(y) + g(x,y)\,\frac{f(u(y))}{a(y)}\right] d\mu(y) + F(x),
\]

where p(x, y) and g(x, y) are defined via formulas (7)–(8). It is possible to prove (see [8]) the following theorem, in which the conditions providing convergence of the iterative method are given.

Theorem 1. Let B₀ ⊂ C²(D) ∩ C(D̄), B₀ = {v : ‖v‖ < 1}, and let M = Σ_{n=1}^∞ n b̄_n < a. Then the operator ℜ is a contraction, i.e. ‖ℜu − ℜv‖ ≤ q₀‖u − v‖ (u, v ∈ B₀) with q₀ < 1, and the iteration u_{n+1}(x) = ℜu_n(x), beginning with any element u₀ ∈ B₀, converges to the unique solution u(x) of the equation u(x) = ℜu(x).

This allows us to apply standard Monte Carlo techniques to find the solution. In the following section we construct a random process in the domain D̄ based on (6).

5. Construction of the branching random process

We now present a simulation method for the Markov chain x₀, x₁, x₂, ... with the transition density p(x, y) [2,10]. Let n ≥ 0 and let x_n be known. If x_n ∈ G, then Π_{x_n} denotes the half-space containing D, defined by the tangent plane to D at the point x_n. Let ω be an isotropic vector in Π_{x_n} if x_n ∈ G, and an isotropic vector in R^m if x_n ∈ D. Let ρ = ρ(x_n, ω) be the length of the ray that emanates from the point x_n in the direction ω and lies in D; by construction, the point x_n + ρω lies on the boundary. The Markov chain jumps from x_n to x_{n+1} = ξ = x_n + ρω with probability k₁(ρ); with probability 1 − k₁(ρ), ξ is distributed on the segment [x_n, x_n + ρω] with the probability density p₁(r) = k₂(r)/(1 − k₁(ρ)). This simulation method gives a vector ξ distributed according to p(x, y). It follows from (6) that unbiased estimators for u(x) can be obtained by using branching stochastic processes, as we describe below.
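As an illustration, here is a minimal sketch (ours, not the authors' code) of one such transition. It assumes a hypothetical helper `ray_length(x, w)` returning ρ(x, ω) for the particular domain, and that the caller supplies the direction ω (isotropic in R^m, or in the half-space at boundary points). The interior density p₁ is sampled by bisecting its explicit CDF (1 − k₁(r))/(1 − k₁(ρ)):

```python
# One transition of the Markov chain with density p(x, y) (Section 5).
import numpy as np

def k1(r, c, m):
    """k1(r) = (1 + c r/(m-2)) exp(-c r)."""
    return (1.0 + c*r/(m - 2.0))*np.exp(-c*r)

def sample_step(x, w, c, m, ray_length, rng):
    """Jump x -> xi along direction w: with probability k1(rho) go to
    the boundary point x + rho*w; otherwise sample an interior point on
    [x, x + rho*w] from the density p1(r) = k2(r)/(1 - k1(rho))."""
    rho = ray_length(x, w)              # length of the ray from x to G
    if rng.random() < k1(rho, c, m):
        return x + rho*w, True          # xi lands on the boundary G
    # Invert the CDF (1 - k1(r))/(1 - k1(rho)) by bisection,
    # using the identity int_0^r k2 = 1 - k1(r).
    u = rng.random()*(1.0 - k1(rho, c, m))
    lo, hi = 0.0, rho
    for _ in range(60):                 # bisection to ~1e-18 * rho accuracy
        mid = 0.5*(lo + hi)
        if 1.0 - k1(mid, c, m) < u:
            lo = mid
        else:
            hi = mid
    return x + 0.5*(lo + hi)*w, False   # xi is an interior point of D
```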


We define a branching random process in D̄ that corresponds to the probabilistic representation in the following way. Let M = Σ_{n=1}^∞ n b̄_n, and let α be a real number, 0 < α ≤ 1 (the condition on α is explained below). Initially there is a single particle at the point x₀ = x. Let n ≥ 0 and let x_n be known. The next point is defined using the simulation method for the transition density p(x, y) described above. Then the particle branches at x_{n+1} with probability g(x_n, x_{n+1}); clearly, no branching occurs with probability 1 − g(x_n, x_{n+1}). When branching occurs, the number n of newly generated particles is distributed according to π_n = (α/M) b̄_n (n = 1, 2, ...), and the probability of absorption at the state x_{n+1} equals π₀ = 1 − Σ_{n=1}^∞ π_n. New particles behave independently and identically to the original particle. The process degenerates if all particles in D die. The parameter α allows us to regulate the number of branches: the smaller the value of α, the less branching occurs (a sampling sketch is given after formula (10) below).

Let T₀ = (x, 1), T₁, T₂, ..., T_n, ... be a trajectory of our branching process, where T_i = (x_i¹, n_i¹; x_i², n_i²; ...; x_i^{l_i}, n_i^{l_i}) is the point distribution at time i. The case l_i = 0 corresponds to the absence of particles in D; such a point distribution is called zero and is denoted by θ. A point distribution T_i can be interpreted in the following way: after i steps (i = 1, 2, ...) there are n_i^j particles at the locations x_i^j (j = 1, ..., l_i). The following lemma is valid [8].

Lemma 1. With probability one the branching random process {T_n} degenerates in the domain D.

6. Construction of an unbiased estimator of the solution

The sequence of random variables {ξ_k(x)}_{k=0}^∞ is defined by a recurrence: let ξ₀(x) = u(x) and ξ_k(x) = Ψ(ξ_{k−1}(x)), where

\[
\Psi(\xi(x)) =
\begin{cases}
\xi(y), & \text{case 1},\\[4pt]
W_n(y)\,\displaystyle\prod_{i=1}^{n} \xi^{(i)}(y), & \text{case 2},\\[4pt]
\dfrac{1}{\pi_0}\left(\dfrac{b_0(y)}{a(y)} + \dfrac{\tilde F(y)}{g(x,y)\,(1 - k_1(\rho))}\right), & \text{case 3}.
\end{cases}
\tag{9}
\]

Case 1: x → y = y₁ ∈ G, or x → y = y₂ ∈ D and, with probability 1 − g(x, y), the particle continues without branching. Case 2: x → y = y₂ ∈ D and, with probability g(x, y), the particle branches into n particles. Case 3: x → y = y₂ ∈ D and the particle is absorbed.

Here the ξ^{(i)}(y) are independent realizations of the random variable ξ(y), and F̃(x) denotes the unbiased estimator for the integral F(x) = ((1 + I_G(x))/σ_m) ∫_G [exp(−cr)/((m−2) r^{m−2})] ϕ(y) d_y S. The so-called "weights", i.e. multipliers, are defined as follows:

\[
W_n(y) = \frac{b_n(y)\,M}{\bar b_n\,\alpha\,a(y)}. \tag{10}
\]
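For intuition, a sketch (ours) of a single branching decision under these probabilities, with the series truncated at some K for simulation; `b_bar[n-1]` stands for b̄_n:

```python
# Draw the offspring number at a branching event: n >= 1 with probability
# pi_n = (alpha/M) * b_bar_n, absorption (n = 0) with pi_0 = 1 - sum pi_n.
import numpy as np

def sample_offspring(b_bar, alpha, rng):
    """b_bar[n-1] bounds sup_x |b_n(x)| for n = 1..K (truncated series)."""
    b_bar = np.asarray(b_bar, dtype=float)
    n = np.arange(1, len(b_bar) + 1)
    M = float((n*b_bar).sum())            # M = sum_n n * b_bar_n
    pi = alpha*b_bar/M                    # pi_1, ..., pi_K
    # pi_0 >= 0 automatically, since sum_n b_bar_n <= M and alpha <= 1
    probs = np.concatenate(([1.0 - pi.sum()], pi))
    return int(rng.choice(len(probs), p=probs))  # 0 means absorption
```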

Let T₀, T₁, ..., T_k be the trajectory of the branching random process, where T_i = (x_i¹, n_i¹; x_i², n_i²; ...; x_i^{l_i}, n_i^{l_i}) is the point distribution at time i, and let ℑ_n be the σ-algebra generated by the process up to time n. Let N be the stopping time of the process, i.e. the time for which T_N satisfies l_N = 0. From Lemma 1 it follows that N < ∞ with probability 1.

Theorem 2. The following statements hold:

A. The sequence {ξ_k(x)}_{k=0}^∞ forms a martingale relative to the family {ℑ_k}_{k=0}^∞. If M < a, then ξ_k(x) is a uniformly integrable martingale.

B. The estimator ξ_N(x) is an unbiased estimator for u(x), and its variance is finite.

Proof. We first prove part A. From the definition of ℑ_k it is obvious that ξ_k(x) is ℑ_k-measurable. From the properties of conditional expectations and formulas (9)–(10) we have

\[
E(\xi_{n+1}(x)\mid\Im_n) = E(\Psi(\xi_n(x))\mid\Im_n)
= E\Biggl\{ k_1(\rho)\,\xi_n(y_1) + (1 - k_1(\rho))\Biggl[(1 - g(x, y_2))\,\xi_n(y_2)
+ g(x, y_2)\Biggl(\sum_{i=1}^{\infty} \pi_i W_i(y_2) \prod_{j=1}^{i} \xi_n^{(j)}(y_2)
+ \frac{b_0(y_2)}{a(y_2)} + \frac{\tilde F(y_2)}{g(x, y_2)\,(1 - k_1(\rho))}\Biggr)\Biggr]\Biggr\}
\]


\[
= \frac{1 + I_G(x)}{\sigma_m}\Biggl\{ \int_G k_1(r)\,\frac{\cos\varphi_{xy}}{r^{m-1}}\,\xi_n(y)\,d_y S
+ \int_D \frac{k_2(r)}{r^{m-1}}\Biggl[(1 - g(x,y))\,\xi_n(y) + g(x,y)\Biggl(\sum_{i=1}^{\infty} \pi_i W_i(y)\,\xi_n^i(y) + \frac{b_0(y)}{a(y)}\Biggr)\Biggr] dy
+ \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S \Biggr\}.
\]

Using expression (10) for W_i(y) and taking into account that M = Σ_{n=1}^∞ n b̄_n, we obtain

\[
\sum_{i=1}^{\infty} \pi_i W_i(y)\,\xi_n^i + \frac{b_0(y)}{a(y)}
= \frac{b_0(y)}{a(y)} + \sum_{i=1}^{\infty} \frac{b_i(y)\,\xi_n^i(y)}{a(y)}
= \frac{f(\xi_n(y))}{a(y)}.
\]

Substituting this result in the inner parentheses above, we have

\[
E(\xi_{n+1}(x)\mid\Im_n) = \frac{1 + I_G(x)}{\sigma_m}\Biggl\{ \int_G k_1(r)\,\frac{\cos\varphi_{xy}}{r^{m-1}}\,\xi_n(y)\,d_y S
+ \int_D \frac{k_2(r)}{r^{m-1}}\left[(1 - g(x,y))\,\xi_n(y) + g(x,y)\,\frac{f(\xi_n(y))}{a(y)}\right] dy
+ \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S \Biggr\}.
\]

Recalling the probabilistic representation (6), it follows that

\[
E(\xi_{n+1}(x)\mid\Im_n) = \int_{\bar D} p(x,y)\left[(1 - g(x,y))\,\xi_n(y) + g(x,y)\,\frac{f(\xi_n(y))}{a(y)}\right] d\mu(y) + F(x) = \xi_n(x).
\]

Thus, the process {ξ_n(x)}_{n=0}^∞ forms a martingale relative to {ℑ_n}_{n=0}^∞. To prove the uniform integrability of ξ_n(x), it is enough to show that |ξ_n(x)| ≤ C (C = const). Since u(x) ∈ C(D̄) ∩ C²(D), we have |u(x)| ≤ const for x ∈ D̄. Further, if the parameter α is chosen via the condition M/a ≤ α ≤ 1 and the assumption of the theorem holds, we have

\[
|W_n(y)| = \left|\frac{b_n(y)\,M}{\bar b_n\,\alpha\,a(y)}\right| \le 1,
\]

and accordingly |ξ_n(x)| ≤ C < ∞ (C = const). Thus, the process {ξ_n(x)} is uniformly integrable, which proves the first part of the theorem.

We now prove the second part. Since ξ_n(x) is a uniformly integrable martingale and N is a Markov time, according to Doob's optional sampling theorem [9], applied to the martingale {ξ_n(x)}_{n=0}^∞, we obtain E ξ_N(x) = E ξ₁(x). By the definition of ξ₁(x), from formulas (9)–(10) we have

\[
E\,\xi_1(x) = E\Biggl\{ k_1(\rho)\,u(y_1) + (1 - k_1(\rho))\Biggl[(1 - g(x, y_2))\,u(y_2)
+ g(x, y_2)\Biggl(\sum_{i=1}^{\infty} \pi_i W_i(y_2)\,u^i(y_2) + \frac{b_0(y_2)}{a(y_2)} + \frac{\tilde F(y_2)}{g(x, y_2)\,(1 - k_1(\rho))}\Biggr)\Biggr]\Biggr\}
\]
\[
= \frac{1 + I_G(x)}{\sigma_m}\Biggl\{ \int_G k_1(r)\,\frac{\cos\varphi_{xy}}{r^{m-1}}\,u(y)\,d_y S
+ \int_D \frac{k_2(r)}{r^{m-1}}\left[(1 - g(x,y))\,u(y) + g(x,y)\,\frac{f(u(y))}{a(y)}\right] dy
+ \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S \Biggr\}.
\]

If we recall the probabilistic representation (6), we see that

\[
E\,\xi_1(x) = \int_{\bar D} p(x,y)\left[(1 - g(x,y))\,u(y) + g(x,y)\,\frac{f(u(y))}{a(y)}\right] d\mu(y) + F(x) = u(x).
\]

By the assumptions of this theorem E(ξ_N(x))² < ∞, and so its variance is also finite. The theorem is proved.
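The quantities reported later in Table 1 (the sample mean ms and the 3σ half-width) follow directly from Theorem 2 by averaging independent realizations of ξ_N(x). A minimal sketch of this final averaging step, assuming a hypothetical sampler `sample_xi` for one trajectory:

```python
# Average independent realizations of the unbiased estimator xi_N(x)
# and report a 99.7% (3-sigma) confidence half-width.
import numpy as np

def monte_carlo_estimate(sample_xi, n_traj, rng):
    """sample_xi(rng) -> one realization of xi_N(x) (hypothetical helper)."""
    xs = np.array([sample_xi(rng) for _ in range(n_traj)])
    ms = xs.mean()                                  # estimates u(x)
    half_width = 3.0*xs.std(ddof=1)/np.sqrt(n_traj) # 3*sigma half-width
    return ms, half_width
```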


For the numerical simulation of our estimator, we should be able to construct an unbiased estimator F̃(x) for the integral

\[
F(x) = \frac{1 + I_G(x)}{\sigma_m} \int_G \frac{\exp(-c\,r)}{(m-2)\,r^{m-2}}\,\varphi(y)\,d_y S.
\]

A method for estimating the integral F(x) in this setting is presented elsewhere [2]. To apply it, we perform a change of variables and transform to polar coordinates. Let x₀ ∈ D, let ω be an isotropic vector in R^m, and let y(ω) be the point of the boundary of the form y(ω) = x₀ + ωt, t > 0; that is, y(ω) lies on the ray emanating from x₀ in the direction ω. Let ω₀ be the unit vector such that y(ω₀) = x. Transforming F(x), we derive

\[
F(x) = (1 + I_G(x))\,E\left[\frac{\exp(-c\,|x - y(\omega)|)}{(m-2)\,|x - y(\omega)|^{m-2}} \cdot \frac{|x_0 - y(\omega)|^{m-1}\,|\omega - \omega_0|^{m-2}}{\cos\bigl(y(\omega) - x_0,\ n_y\bigr)} \cdot \frac{\varphi(y(\omega))}{|\omega - \omega_0|^{m-2}}\right]
= E\left[\psi(x, \omega)\,\frac{\varphi(y(\omega))}{|\omega - \omega_0|^{m-2}}\right].
\]

Let R(x₀) = dist(x₀, G). Then R(x₀)|ω − ω₀| ≤ |x − y(ω)| and cos(y(ω) − x₀, n_y) > const > 0, so ψ(x, ω) is bounded. Accordingly, the singularity at the point ω₀ may be included in the density:

\[
F(x) = \frac{1}{\sigma_m} \int_{S_1(0)} \frac{\psi(x,\omega)\,\varphi(y(\omega))}{|\omega - \omega_0|^{m-2}}\,d_\omega S
= \frac{1}{\sigma_m} \int_{S_1(0)} \psi(x,\omega)\,\varphi(y(\omega))\,\frac{2\cos\varphi_{\omega_0\omega}}{|\omega - \omega_0|^{m-1}}\,d_\omega S,
\]

since 2 cos ϕ_{ω₀ω} = |ω − ω₀| for points ω, ω₀ on the unit sphere. Let ω₁ be an isotropic vector in the half-space Π_{ω₀}, where

\[
\Pi_{\omega_0} = \{x \in R^m : \cos(x - \omega_0,\ \omega_0) \le 0\}.
\]

After defining ω(ω₀, ω₁) as the point of the sphere S₁(0) which is seen from ω₀ in the direction ω₁, we derive an unbiased estimator F̃(x) for F(x): F̃(x) = ψ(x, ω(ω₀, ω₁)) ϕ(y(ω(ω₀, ω₁))).

In [8], the common cases f(u) = g exp(u), f(u) = g sin(u), f(u) = g cos(u), f(u) = g sinh(u), f(u) = g cosh(u) (g = const) were considered, and the corresponding branching probabilities of the appropriate Markov processes were obtained.

We now consider one particular case of problem (1)–(2). Let m = 3 and a(x) = c². In this case k₂(r) = c²r exp(−cr), and the second volume integral in expression (5) disappears. Formula (5) takes the form

\[
u(x) = \frac{1 + I_G(x)}{\sigma_3} \left\{ \int_D \frac{k_2(r)}{r^2}\,\frac{f(u(y))}{c^2}\,dy + \int_G k_1(r)\,\frac{\cos\varphi_{xy}}{r^2}\,u(y)\,d_y S + \int_G \frac{\exp(-c\,r)}{r}\,\varphi(y)\,d_y S \right\}.
\]

The probabilistic representation in this case differs from (6); using the earlier notation, the new probabilistic representation is

\[
u(x) = \int_{\bar D} p(x,y)\,\Phi(u(y))\,d\mu(y) + F(x), \qquad
\Phi(u) =
\begin{cases}
u(y), & y \in G;\\[4pt]
\dfrac{f(u(y))}{c^2}, & y \in D.
\end{cases}
\tag{11}
\]

In Theorem 1, the condition M = Σ_{n=1}^∞ n b̄_n < a provides the convergence of the iterative method; when it is replaced by M < c², we again obtain convergence. The proof of this statement does not essentially differ from the proof of Theorem 1.
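A one-line symbolic confirmation of this reduction (ours, not part of the paper): for m = 3 the kernel k₂ becomes c²r e^{−cr}, and with a(y) = c² the factor g(x, y) is identically 1 on D, so the term (1 − g)u drops out:

```python
# Check the m = 3, a = c^2 specialization used in representation (11).
import sympy as sp

r, c = sp.symbols('r c', positive=True)
m = 3
k2 = (c**2*r + c*(m - 3))/(m - 2)*sp.exp(-c*r)
g = c**2*r/(c**2*r + c*(m - 3))               # g(x, y) with a(y) = c^2
print(sp.simplify(k2 - c**2*r*sp.exp(-c*r)))  # 0: k2 = c^2 r e^{-c r}
print(sp.simplify(1 - g))                     # 0: no (1 - g) u term remains
```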


The sequence of random variables {η_k(x)}_{k=0}^∞ is in this case defined in the following way. Let η₀(x) = u(x), η_k(x) = Θ(η_{k−1}(x)), where

\[
\Theta(\eta(x)) =
\begin{cases}
\eta(y), & x \to y = y_1 \in G;\\[4pt]
W_n(y)\,\displaystyle\prod_{i=1}^{n} \eta^{(i)}(y), & x \to y = y_2 \in D,\ n \ne 0;\\[4pt]
\dfrac{1}{\pi_0}\left(\dfrac{b_0(y)}{c^2} + \dfrac{\tilde F(y)}{1 - k_1(\rho)}\right), & x \to y = y_2 \in D,\ n = 0.
\end{cases}
\]

Here the η^{(i)}(y) are independent realizations of the random variable η(y), and the "weights" are defined by the formula W_n(y) = b_n(y) M/(b̄_n α c²). The parameter α is chosen from the condition M/c² ≤ α ≤ 1. Let T₀, T₁, ..., T_n be a trajectory of the branching random process, let ℑ_n be the σ-algebra generated by the process up to time n, and let N be the stopping time of the process. The following theorem is then valid.

Theorem 3. A. The sequence {η_k(x)}_{k=0}^∞ forms a martingale relative to {ℑ_k}_{k=0}^∞. If M < c², then η_k(x) is a uniformly integrable martingale. B. The estimator η_N(x) is unbiased for u(x), and its variance is finite.

The proof of the theorem is similar to that of Theorem 2 and is therefore omitted.

7. Numerical experiments

The results of numerical experiments are given in this section to provide empirical evidence of the correctness of the derivation of these Monte Carlo methods. The following problems were considered:

I. The case a(x) = c²: (a) f(u) = g exp(u) + f₀ when the domain D is a ball (MP1) and when the domain D is a cube (MP2); (b) f(u) = g sin(u) + f₀ when the domain D is a ball (MP3); (c) f(u) = Σ_{n=0}^∞ b_n(x) u^n(x) when the domain D is a ball (MP4).

II. The case of variable coefficients: (a) f(u) = g exp(u) + f₀ when the domain D is a ball (MP5); (b) f(u) = g sin(u) + f₀ when the domain D is a ball (MP6).

Model problem 1 (MP1): D = {(x₁, x₂, x₃) : x₁² + x₂² + x₃² ≤ 1},

\[
-\Delta u(x) + 6.25\,u(x) = \exp(u) + f_0(x), \quad x \in D, \qquad \left.\frac{\partial u}{\partial n}\right|_G = 0.6\cos(1);
\]
\[
f_0(x) = 0.3\sin(x_1^2 + x_2^2 + x_3^2)\bigl(4(x_1^2 + x_2^2 + x_3^2) + 6.25\bigr) - 1.8\cos(x_1^2 + x_2^2 + x_3^2) - \exp\bigl(0.3\sin(x_1^2 + x_2^2 + x_3^2)\bigr).
\]

Model problem 2 (MP2): D = {(x₁, x₂, x₃) : −0.5 ≤ x₁, x₂, x₃ ≤ 0.5},

\[
-\Delta u(x) + 6.25\,u(x) = \exp(u) + f_0(x), \quad x \in D, \qquad \left.\frac{\partial u}{\partial n}\right|_G = x_1 x_2 x_3 \cos(x_1 x_2 x_3);
\]
\[
f_0(x) = 0.5\bigl((x_1 x_2)^2 + (x_1 x_3)^2 + (x_2 x_3)^2 + 6.25\bigr)\sin(x_1 x_2 x_3) - \exp\bigl(0.5\sin(x_1 x_2 x_3)\bigr).
\]

Model problem 3 (MP3): D = {(x₁, x₂, x₃) : x₁² + x₂² + x₃² ≤ 1},

\[
-\Delta u(x) + 2.25\,u(x) = \sin(u) + f_0(x), \quad x \in D, \qquad \left.\frac{\partial u}{\partial n}\right|_G = -0.9\,(x_1 x_2 x_3)\exp(-x_1 x_2 x_3);
\]
\[
f_0(x) = 0.3\bigl(2.25 - (x_1 x_2)^2 - (x_1 x_3)^2 - (x_2 x_3)^2\bigr)\exp(-x_1 x_2 x_3) - \sin\bigl(0.3\exp(-x_1 x_2 x_3)\bigr).
\]

Model problem 4 (MP4): D = {(x₁, x₂, x₃) : x₁² + x₂² + x₃² ≤ 1},

\[
-\Delta u(x) + 9\,u(x) = \sum_{n=0}^{\infty} b_n(x)\,u^n(x), \quad x \in D, \qquad \left.\frac{\partial u}{\partial n}\right|_G = 0.3\,(x_1 + x_2 + x_3)\exp(x_1 + x_2 + x_3);
\]
\[
b_0(x) = 1.8\exp(x_1 + x_2 + x_3) + \sin(x_1 + x_2 + x_3) - \sin\bigl(x_1 + x_2 + x_3 + 0.3\exp(x_1 + x_2 + x_3)\bigr);
\]
\[
b_{4n+1}(x) = \frac{\cos(x_1 + x_2 + x_3)}{(4n+1)!}; \quad
b_{4n+2}(x) = -\frac{\sin(x_1 + x_2 + x_3)}{(4n+2)!}; \quad
b_{4n+3}(x) = -\frac{\cos(x_1 + x_2 + x_3)}{(4n+3)!}; \quad
b_{4n+4}(x) = \frac{\sin(x_1 + x_2 + x_3)}{(4n+4)!} \quad (n = 0, 1, 2, \ldots).
\]

Model problem 5 (MP5): D = {(x₁, x₂, x₃) : x₁² + x₂² + x₃² ≤ 1},

\[
-\Delta u(x) + \bigl(4 + (x_1 x_2 x_3)^2\bigr)\,u(x) = \exp(u) + f_0(x), \quad x \in D, \qquad \left.\frac{\partial u}{\partial n}\right|_G = 0.3\,(x_1 + x_2 + x_3)\cos(x_1 + x_2 + x_3);
\]
\[
f_0(x) = 0.3\sin(x_1 + x_2 + x_3)\bigl(7 + (x_1 x_2 x_3)^2\bigr) - \exp\bigl(0.3\sin(x_1 + x_2 + x_3)\bigr).
\]

Model problem 6 (MP6): D = {(x₁, x₂, x₃) : x₁² + x₂² + x₃² ≤ 1},

\[
-\Delta u(x) + \exp\bigl(1 + x_1^2 + x_2^2 + x_3^2\bigr)\,u(x) = 0.8\exp(u) + f_0(x), \quad x \in D, \qquad \left.\frac{\partial u}{\partial n}\right|_G = 0.3\,(x_1 + x_2 + x_3)\exp\bigl(0.3(x_1 + x_2 + x_3)\bigr);
\]
\[
f_0(x) = -0.27\exp\bigl(0.3(x_1 + x_2 + x_3)\bigr) + \exp\bigl(1 + x_1^2 + x_2^2 + x_3^2\bigr)\exp\bigl(0.3(x_1 + x_2 + x_3)\bigr) - 0.8\exp\bigl(\exp(0.3(x_1 + x_2 + x_3))\bigr).
\]

The results of the computational experiments are given in Table 1.
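Before turning to the table, a sanity check (ours, not part of the paper): the exact solution underlying the Uex column for MP1 appears to be u(x) = 0.3 sin(x₁² + x₂² + x₃²), since, e.g., 0.3 sin(0.27) ≈ 0.0800 at (0.3, 0.3, 0.3). The following sympy snippet verifies that this u satisfies the MP1 equation with the f₀ given above:

```python
# Verify that u = 0.3 sin(|x|^2) solves -Delta u + 6.25 u = exp(u) + f0 (MP1).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
s = x1**2 + x2**2 + x3**2
u = sp.Rational(3, 10)*sp.sin(s)
f0 = (sp.Rational(3, 10)*sp.sin(s)*(4*s + sp.Rational(25, 4))
      - sp.Rational(9, 5)*sp.cos(s) - sp.exp(u))

lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3))
residual = -lap + sp.Rational(25, 4)*u - (sp.exp(u) + f0)
print(sp.simplify(residual))  # prints 0
```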

Table 1

MP   (x0)                 α    skv   skc    Uex       ms        3σ        err
1    (0.3; 0.3; 0.3)      0.6  1.54  3.41   0.0800    0.0821    0.0175    0.0021
1    (0.5; 0.5; 0.5)      0.6  1.55  3.77   0.2045    0.1972    0.0251    0.0073
1    (0.25; 0.25; 0.25)   0.7  1.86  4.32   0.0558    0.0585    0.0194    0.0026
2    (0.1; −0.3; 0.5)     0.7  1.88  6.73   −0.007    −0.0047   0.0028    0.0027
2    (−0.5; 0.0; 0.1)     0.7  1.89  6.61   0.0       0.0003    0.0057    0.0003
2    (0.25; 0.25; 0.25)   0.5  1.40  4.56   0.0078    0.0056    0.0034    0.0021
3    (0.25; 0.25; 0.25)   0.7  1.52  6.62   0.2953    0.2893    0.02818   0.0060
3    (−0.3; 0.2; 0.1)     0.7  1.55  6.79   0.3018    0.3058    0.0204    0.0040
3    (1.0; 0.0; 0.0)      0.7  1.57  6.91   0.3       0.2989    0.0178    0.0010
4    (−0.2; 0.6; −0.4)    0.9  4.06  10.2   0.3       0.2915    0.2547    0.0084
4    (−1.0; 0.0; 0.0)     0.9  4.09  10.2   0.1104    0.1105    0.0142    0.0001
4    (0.5; 0.6; 0.5)      0.8  2.53  6.15   1.4859    1.4777    0.1573    0.0082
5    (−0.7; 0.5; −0.3)    0.8  2.41  9.7    −0.144    −0.1474   0.0337    0.0036
5    (0.25; 0.25; 0.25)   0.7  1.82  6.27   0.2045    0.2026    0.0226    0.0019
5    (0.5; 0.5; 0.5)      0.8  2.52  10.1   0.2992    0.3043    0.0242    0.0051
6    (−0.2; 0.5; 0.0)     0.7  1.53  6.43   1.0942    1.0738    0.0547    0.0203
6    (0.25; 0.25; 0.25)   0.5  1.23  4.07   1.2523    1.2690    0.0492    0.0167
6    (0.0; 0.0; −1.0)     0.7  1.54  6.22   1.3498    1.3484    0.0837    0.0014

Explanations to the table: MP is the number of the model problem; (x₀) is the point where the problem is solved; α is the parameter used in π_n = (α/M) b̄_n to regulate the quantity of branches; skc is the average number of particles in the tree; skv is the average number of branches; Uex is the exact solution Uex = u(x₀); ms is the sample-mean estimate ms = ξ̄ = (1/N_t)(ξ₁ + ⋯ + ξ_{N_t}), where ξ = ξ_N(x₀) and the ξ_i are independent realizations of the random estimator of the solution; σ is the statistical estimate of the value √(Var(ξ)/N_t), where Var(ξ) is the variance of the estimator of the solution of the given problem; 3σ is the half-length of the confidence interval; err is the difference between the exact solution and the estimate, err = |u(x₀) − ms|.

8. Conclusions

The results of numerous computational experiments show that using the algorithms derived in this paper we can obtain easily realizable estimators for point solutions of these nonlinear PDEs. The parameter α, used in calculating the branching probabilities, gives us the possibility of controlling the average number of branches and the average number of particles in the branching process. We recall that this parameter was chosen using the condition M/a ≤ α ≤ 1. The numerical experiments reveal that the value of the exact solution lies within the 99.7% confidence interval in each case (see Table 1). In the future, we intend to apply the algorithms constructed above to the solution of nonlinear parabolic equations.

References

[1] R. Courant, D. Hilbert, Methods of Mathematical Physics, vol. 2, John Wiley and Sons, 2008.
[2] S.M. Ermakov, V.V. Nekrutkin, A.S. Sipin, Random Processes for Classical Equations of Mathematical Physics, Springer, 1989.
[3] F.I. Frankl, M.V. Keldish, The exterior Neumann problem for nonlinear elliptic differential equations with applications to the theory of a wing in a compressible gas, Izv. Akad. Nauk SSSR 12 (1934) 561–601 (in Russian).
[4] L.V. Kantorovich, G.P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982.
[5] R.N. Makarov, Study of estimations of the Monte Carlo method for solving the nonlinear Helmholtz equation, in: Proceedings of the Computing Centre of the Siberian Branch of the Russian Academy of Sciences, Computational Mathematics, Novosibirsk, vol. 4, 1996, pp. 74–82 (in Russian).
[6] S.G. Mikhlin, Linear Equations in Partial Derivatives, Vysshaya Shkola, Moscow, 1977 (in Russian).
[7] A.S. Rasulov, Monte Carlo Method for Solving Nonlinear Problems, Fan, Tashkent, 1992 (in Russian).
[8] A.S. Rasulov, M. Mascagni, G.M. Raimova, Monte Carlo Methods for Solution Linear and Nonlinear Boundary Value Problems, UWED Press, Tashkent, 2006.
[9] A.N. Shiryaev, Probability, Springer, 1996.
[10] A.S. Sipin, On the solution of the Neumann problem by the Monte Carlo method, in: Proceedings, Monte Carlo Methods in Computational Mathematics and Mathematical Physics, Novosibirsk, 1976, pp. 120–136 (in Russian).