Computers and Mathematics with Applications 67 (2014) 210–216
Numerical approximation of time evolution related to Ginzburg–Landau functionals using weighted Sobolev gradients

Nauman Raza a,b,∗, Sultan Sial c, Asma Rashid Butt d,e
a Department of Mathematics and Statistics, McMaster University, Hamilton, ON, Canada
b Department of Mathematics, University of the Punjab, Lahore, Pakistan
c Department of Mathematics, Lahore University of Management Sciences, Opposite Sector U, DHA, Lahore Cantt. 54792, Pakistan
d Department of Mathematics, University of Engineering and Technology, Lahore, Pakistan
e Department of Mathematics, Brock University, St. Catharines, ON, Canada
Article history: Received 17 June 2013; received in revised form 25 October 2013; accepted 8 November 2013.

Keywords: Sobolev gradient; Ginzburg–Landau functional; Finite-element setting; Steepest descent
Abstract

Sobolev gradients have been discussed in Sial et al. (2003) as a method for energy minimization related to Ginzburg–Landau functionals. In this article, a weighted Sobolev gradient approach to the time evolution of a Ginzburg–Landau functional is presented for different values of κ. A comparison is given between the weighted and unweighted Sobolev gradients in a finite-element setting. It is seen that for small values of κ, the weighted Sobolev gradient method becomes increasingly efficient compared with the unweighted Sobolev gradient. A comparison with Newton's method is also given, and the failure of Newton's method is demonstrated on a test problem.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction

Nonlinear phenomena are of fundamental importance in many fields of science and engineering, and nonlinear problems are difficult to solve either analytically or numerically. Many numerical algorithms have been developed for the solution of nonlinear boundary value problems. Weighted Sobolev gradients have been used to treat linear and nonlinear singular differential equations [1]; by careful consideration of the weighting, much improvement can be achieved. It has also been shown that Newton's method fails to converge in some situations where the Sobolev gradient approach works.

For the solution of partial differential equations (PDEs), Sobolev gradient methods [2] have been used in finite-difference [1] and finite-element settings [3]. Sobolev gradients have been applied successfully in physics [4–10], image processing [11,12], geometric modeling [13], materials science [14–19] and differential algebraic equations (DAEs) [20]. A detailed discussion of Sobolev gradients can be found in [2], which also contains existence results and sufficient conditions for convergence. For applications and open problems in this field we refer the reader to [21].

Computational work was done on an Intel(R) 3 GHz Core(TM)2 Duo machine with 1 GB RAM. We used the open-source FreeFem++ software [22] for the solution of the PDEs. All graphs were drawn with gnuplot.

2. Sobolev gradient approach

In this section we discuss Sobolev gradients and steepest descent. A detailed analysis of the construction of Sobolev gradients can be found in [2].
∗ Corresponding author at: Department of Mathematics and Statistics, McMaster University, Hamilton, ON, Canada.
E-mail addresses: [email protected] (N. Raza), [email protected] (S. Sial), [email protected] (A.R. Butt).
http://dx.doi.org/10.1016/j.camwa.2013.11.006
Suppose that n is a positive integer and F is a real-valued C^1 function on R^n. Then the gradient ∇F is defined by

lim_{t→0} (1/t)(F(x + th) − F(x)) = F′(x)h = ⟨h, ∇F(x)⟩_{R^n},  x, h ∈ R^n.  (1)
For F as above but with ⟨·, ·⟩_S an inner product on R^n different from the standard inner product ⟨·, ·⟩_{R^n}, there is a function ∇_S F : R^n → R^n so that

F′(x)h = ⟨h, ∇_S F(x)⟩_S,  x, h ∈ R^n.  (2)
The linear functional F′(x) can be represented using any inner product on R^n. We say that ∇_S F is the gradient of F with respect to the inner product ⟨·, ·⟩_S and note that ∇_S F has the same properties as ∇F. The two inner products can be related by a linear transformation A : R^n → R^n,

⟨x, y⟩_S = ⟨x, Ay⟩_{R^n},  x, y ∈ R^n,

and a reflection leads to

∇_S F(x) = A^{−1} ∇F(x),  x ∈ R^n.  (3)
Each x ∈ R^n may also be associated with its own inner product ⟨·, ·⟩_x on R^n. Thus for x ∈ R^n, define ∇_x F : R^n → R^n so that

F′(x)h = ⟨h, ∇_x F(x)⟩_x,  x, h ∈ R^n.  (4)
Depending upon the choice of metric, we have a variety of gradients for a function F, and these gradients can have vastly different numerical properties. A gradient of a function F is called a Sobolev gradient when it is defined in a finite- or infinite-dimensional Sobolev space. Readers unfamiliar with Sobolev spaces are referred to [23].

The method of steepest descent is an optimization technique that comes in two forms: discrete steepest descent and continuous steepest descent. Let F be a real-valued C^1 function on a Hilbert space H and ∇_S F its gradient with respect to the inner product ⟨·, ·⟩_S on H. Discrete steepest descent denotes the construction of a sequence {x_k} such that x_0 is given and

x_k = x_{k−1} − δ_k (∇_S F)(x_{k−1}),  k = 1, 2, …  (5)

where for each k, δ_k is chosen so that it minimizes, if possible,

F(x_{k−1} − δ_k (∇_S F)(x_{k−1})).  (6)
In continuous steepest descent, we construct a function z : [0, ∞) → H so that

dz/dt = −∇_S F(z(t)),  z(0) = z_initial;  (7)

under suitable conditions on F, z(t) → z_∞ where F(z_∞) is the minimum value of F. Continuous steepest descent can be interpreted as a limiting case of discrete steepest descent, so (5) can be considered a numerical method for approximating solutions of (7). Continuous steepest descent also provides a theoretical starting point for proving convergence of discrete steepest descent. Using (5), one seeks u = lim_{k→∞} x_k so that

F(u) = 0 or (∇_S F)(u) = 0.  (8)
Using (7), one seeks u = lim_{t→∞} z(t) so that (8) holds. To solve a partial differential equation (PDE), we construct F via a variational principle: that is, F is chosen so that a function u satisfies the differential equation if and only if u is a critical point of F. In such situations we use a steepest descent minimization process to find a zero of the gradient of F. In our case

F(u) = ∫_Ω δt u^4/4 + (1 − δt) u^2/2 − fu + δt (κ/2)|∇u|^2.  (9)
Note that other functionals are also possible; a prime example in this direction is the least-squares formulation. Such functionals are shown in [9,16]. In this paper we only show results in which F comes from a variational principle, as the results in this setting are optimal [16]. The existence and convergence of z(t) → z(∞) for different linear and nonlinear forms of F is discussed in [2].
In this paper only discrete Sobolev spaces are used, and for numerical computation finite-dimensional versions of the functional F are considered.

3. Model A time evolution

The minimization of the Model A Ginzburg–Landau free energy functional

F(u) = ∫_V u^4/4 − u^2/2 + (κ/2)|∇u|^2  (10)
has been considered in [14]. The static and dynamical properties of this model have been extensively studied, primarily in numerical work related to coarsening and growth of domains [24–26]. In the continuous case, the related Ginzburg–Landau time evolution is

u_t = −∇F(u),  (11)

which on the interior of the system is

u_t = u − u^3 + κ∇^2 u.  (12)
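For orientation, a naive explicit Euler discretization of (12) on a 1-D grid can be written as below. This is an illustrative sketch with assumed grid size, κ and time step, not the scheme used in this paper; the paper instead minimizes the semi-implicit functional (13) at each time step. An explicit scheme like this is subject to the stability restriction δt ≲ h^2/(2κ), which is the kind of bound discussed later in connection with the CFL condition.

```python
import numpy as np

# Explicit Euler for u_t = u - u^3 + kappa*u_xx on [0, 10] with u fixed at
# the endpoints; all parameters are illustrative choices, not the paper's.
n, L, kappa, dt = 101, 10.0, 0.1, 1e-3   # dt must satisfy dt < h^2/(2*kappa)
h = L / (n - 1)
u = np.zeros(n)
u[0], u[-1] = -1.0, 1.0                  # Dirichlet data

for step in range(5000):
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2      # interior Laplacian
    u[1:-1] += dt * (u[1:-1] - u[1:-1]**3 + kappa * lap)
```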
We form an associated functional

G(u) = ∫_Ω δt u^4/4 + (1 − δt) u^2/2 − fu + δt (κ/2)|∇u|^2  (13)
over some two- or three-dimensional region Ω, subject to Dirichlet boundary conditions. Here f is the solution at time t and u is the desired solution at time t + δt.

3.1. Gradients and minimization

A gradient, and hence a descent direction, is associated with each inner product. Gradients in a suitably chosen Sobolev space speed up the minimization process in steepest descent. The question of how to choose the inner product best suited to the problem at hand is still open and fundamental. Mahavier's idea for the construction of weighted Sobolev gradients leads us to define a new inner product suited to the energy functional (13). The Sobolev space H^1(Ω) is defined as

H^1(Ω) = {u ∈ L^2(Ω) : D^α u ∈ L^2(Ω), 0 ≤ α ≤ 1},  (14)

where D^α denotes the weak derivative of u of order α. H^1(Ω) is a Hilbert space with the norm defined by

∥u∥^2_{H^1} = ∫_Ω |∇u|^2 + |u|^2.  (15)
Now, inspired by Mahavier's idea of weighted gradients, we define a new norm that takes care of the weight w = κδt affecting the derivative term in (13). We define a weighted Sobolev space H^{1,w}(Ω) whose norm is

∥u∥^2_{H^{1,w}} = ∫_Ω |w∇u|^2 + |u|^2.  (16)
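On a uniform 1-D grid, the inner products behind (15) and (16) correspond to Gram matrices of the form I + DᵀD and I + w²DᵀD, where D is a difference operator. The sketch below is our illustration; the grid size is arbitrary and, as an assumption for the sketch, the weight is chosen so that w² = κδt, matching the operator that appears later in (29).

```python
import numpy as np
from scipy.sparse import diags, identity

def sobolev_gram(n, h, w=1.0):
    """Gram matrix of the (weighted) Sobolev inner product on a uniform 1-D
    grid: <u,v> = h*sum(u*v) + h*w^2*sum(Du*Dv), i.e. h*(I + w^2 * D^T D).
    w = 1 gives the H^1 inner product (15); general w gives (16)."""
    D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n)) / h   # forward difference
    return h * (identity(n) + w**2 * (D.T @ D))

A_h1 = sobolev_gram(101, 0.1)                           # H^1
A_h1w = sobolev_gram(101, 0.1, w=np.sqrt(0.01 * 0.5))   # w^2 = kappa*dt (assumed)
```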
It can easily be verified that the weighted Sobolev space with the norm defined by (16) is a Hilbert space. To incorporate the Dirichlet boundary conditions, we define a perturbation subspace L^2_0(Ω) of functions such that

L^2_0(Ω) = {v ∈ L^2(Ω) : v = 0 on Γ},  (17)

where Γ denotes the boundary of the domain Ω. The perturbation subspaces related to H^1 and H^{1,w} are then H^1_0 = L^2_0 ∩ H^1 and H^{1,w}_0 = L^2_0 ∩ H^{1,w}, respectively.

We are looking for a function u that minimizes the energy functional (13). The aim is to find the gradient ∇G(u) of the functional G(u) associated with the original problem and then to find a zero of that gradient. For this purpose we define the Fréchet derivative. Given an energy functional

G(u) = ∫_Ω R(u, ∇u),  (18)
the Fréchet derivative of G(u) is the bounded linear functional G′(u) defined by

G′(u)h = lim_{α→0} (G(u + αh) − G(u))/α  for h ∈ H^1_0(Ω).  (19)
In our case, recall the energy functional

G(u) = ∫_Ω δt u^4/4 + (1 − δt) u^2/2 − fu + δt (κ/2)|∇u|^2;  (20)
then according to (19) we have

G′(u)h = lim_{α→0} (1/α) ∫_Ω [δt (u + αh)^4/4 + (1 − δt)(u + αh)^2/2 − f(u + αh) + δt (κ/2)|∇(u + αh)|^2 − δt u^4/4 − (1 − δt) u^2/2 + fu − δt (κ/2)|∇u|^2].  (21)
Simplifying and applying the limit, we have

G′(u)h = ∫_Ω (δt u^3 + (1 − δt)u − f)h + ∫_Ω κδt ∇u · ∇h.  (22)
Let ∇G(u), ∇G_s(u) and ∇G_w(u) denote the gradients in L^2 (the vector space R^M equipped with the usual inner product ⟨u, v⟩ = Σ_i u(i)v(i)), H^1 and H^{1,w}, respectively. Then by using (2) we can write

G′(u)h = ⟨∇G(u), h⟩_{L^2} = ⟨∇G_s(u), h⟩_{H^1} = ⟨∇G_w(u), h⟩_{H^{1,w}}.  (23)

Thus the gradient in L^2 is

∇G(u) = δt u^3 + (1 − δt)u − f − κδt ∇^2 u.  (24)
Since we are using Dirichlet boundary conditions, the value of u is fixed on the boundaries of the system and we look for gradients that are zero on the boundary of Ω. So one wishes to use not ∇G(u) but π∇G(u), where π is a projection that sets the values of the test function h to zero on the boundary of the system. We use FreeFem++ [22], free software designed to solve partial differential equations using the finite element method; it provides the facility to set the gradient to zero on the boundary. So to find π∇G(u) we need to solve

π ∫_Ω (δt u^3 + (1 − δt)u − f)h + ∫_Ω κδt ∇u · ∇h = π ∫_Ω ∇G(u)h + ∫_Ω ∇∇G(u) · ∇h.  (25)
In other words, for u in L^2(Ω), find ∇G(u) ∈ L^2_0 such that

⟨∇G(u), h⟩_{L^2} = ⟨A(u) − f, h⟩_{L^2},  ∀h ∈ L^2_0(Ω),  (26)

⟨A(u), h⟩_{L^2} = ∫_Ω (δt u^3 + (1 − δt)u)h + ∫_Ω κδt ∇u · ∇h.  (27)
This gradient is inefficient because the CFL condition [27] applies. The CFL condition is a theoretical bound on how large a time step can be used in finite-difference methods; in our case the bound applies because we consider steepest descent as a time evolution from a higher energy state to a lower energy state. We therefore need to find gradients in H^1 and H^{1,w}. By using (15) and (23) we can relate the L^2 gradient and the unweighted Sobolev gradient in weak form as

⟨(1 − ∇^2)∇_s G(u), h⟩_{L^2} = ⟨A(u) − f, h⟩_{L^2};  (28)

similarly, using (16) and (23), one can relate the weighted Sobolev gradient to the L^2 gradient:

⟨(1 − w^2 ∇^2)∇_w G(u), h⟩_{L^2} = ⟨A(u) − f, h⟩_{L^2}.  (29)
For the numerical implementation of the method, we find the gradients in the unweighted and weighted Sobolev spaces by solving the following equations:

π ∫_Ω (δt u^3 + (1 − δt)u − f)h + ∫_Ω κδt ∇u · ∇h = π ∫_Ω ∇_s G(u)h + ∫_Ω ∇∇_s G(u) · ∇h,  (30)

π ∫_Ω (δt u^3 + (1 − δt)u − f)h + ∫_Ω κδt ∇u · ∇h = π ∫_Ω ∇_w G(u)h + ∫_Ω w^2 ∇∇_w G(u) · ∇h.  (31)
It is seen that when using the weighted Sobolev gradient, the step size λ does not have to be reduced as the numerical grid becomes finer, and the number of minimization steps remains reasonable. At the same time, the conceptual simplicity and elegance of the steepest descent algorithm are retained.
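Putting the pieces together, a 1-D analogue of the overall scheme — fifteen time steps, each minimizing (13) by steepest descent with the weighted Sobolev gradient — might be organized as follows. This is a sketch under the same assumptions as the previous snippets (uniform grid, w² = κδt, pinned boundary rows), not the FreeFem++ code used to produce the tables below.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import factorized

n, h, dt, kappa, lam = 101, 0.1, 0.5, 0.01, 1.0
w2 = kappa * dt                         # assumed weight, matching (29)

# Factor the weighted Sobolev operator 1 - w^2*Laplacian once: the mesh
# is fixed, so the same factorization serves every descent step.
D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n)) / h
A = (identity(n) + w2 * (D.T @ D)).tolil()
A[0, :] = 0.0; A[-1, :] = 0.0; A[0, 0] = A[-1, -1] = 1.0
solve = factorized(A.tocsc())

u = np.zeros(n); u[0], u[-1] = -1.0, 1.0        # Dirichlet data

for step in range(15):                  # fifteen time steps, as in Section 4
    f = u.copy()                        # f is the solution at time t
    for it in range(10000):             # minimize (13) for u at time t + dt
        g = dt * u**3 + (1 - dt) * u - f            # L2 gradient, cf. (24)
        g[1:-1] -= kappa * dt * (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        g[0] = g[-1] = 0.0                          # projection pi
        gw = solve(g)                               # weighted gradient, (29)
        if np.abs(gw).max() < 1e-6:
            break
        u -= lam * gw                               # descent step (5)
```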
Table 1. Numerical results of steepest descent in H^1 and H^{1,w} using δt = 0.5, κ = 0.01, for fifteen time steps in the two-dimensional case.

M    Triangles   λ (H^1)   λ (H^{1,w})   Iterations (H^1)   Iterations (H^{1,w})   CPU time, s (H^1)   CPU time, s (H^{1,w})
8    16          1.8       1.0           163                69                     0.516               0.06
16   54          1.8       1.0           553                94                     1.492               0.24
32   226         1.8       1.0           848                102                    9.344               1.035
48   490         1.8       1.0           1026               106                    24.51               2.309
64   906         1.8       1.0           1105               112                    48.586              4.408
Table 2. Numerical results of steepest descent in H^1 and H^{1,w} using δt = 0.5 for fifteen time steps in the three-dimensional case.

M    Triangles   λ (H^1)   λ (H^{1,w})   Iterations (H^1)   Iterations (H^{1,w})   CPU time, s (H^1)   CPU time, s (H^{1,w})
5    50          1.5       0.9           573                116                    14.77               3.13
10   200         1.5       0.9           843                119                    172.17              23.57
20   800         1.5       0.9           985                125                    1626.05             198.23
4. Numerical results

For the two-dimensional case, we let Ω be the circular disk of radius 10 centered at the origin, with an oval region removed. The oval region has border x(t) = 8 cos(t), y(t) = 2 sin(t) with t ∈ [0, 2π]. The initial state was u = 0.0, and the Dirichlet conditions were u = −1 on the outer boundary and u = 1 on the inner boundary. We let κ = 0.01 and the time step δt = 0.5.

FreeFem++ requires one to specify the borders of the region and the number of nodes required on each border; the software then creates a mesh. FreeFem++ solves equations of the same type as (30) and (31), which determine the H^1 and H^{1,w} gradients. We conducted numerical experiments with M = 8, 16, 32, 48 and 64 nodes on each border. The system was evolved over 15 time steps. For each time step δt, the functional defined by (13) was minimized using steepest descent steps in both H^1 and H^{1,w} until the infinity norm of the gradient was less than 10^{-6}. The step size, the number of steps taken for convergence by each method, and the CPU time are recorded in Table 1.

From Table 1 we see that as the mesh becomes finer with increasing M, more minimization steps are required for convergence of the unweighted Sobolev gradient, whereas in the weighted case the number of iterations does not increase substantially. By decreasing the value of κ, the weighted gradient becomes increasingly efficient compared with the traditional Sobolev gradient.

For the three-dimensional case, we let Ω be a cube centered at the origin with side length 10. The initial state was u = 0.0. We set u = 1 on the top and bottom faces and u = −1 on the front, back, left and right faces of the cube. We let κ = 0.1 and the time step δt = 0.5. The system was evolved over 15 time steps. For each time step δt, the functional defined by (13) was minimized using steepest descent steps in both H^1 and H^{1,w} until the infinity norm of the gradient was less than some set tolerance. Once again we used the finite-element software FreeFem++ [22]. We conducted numerical experiments with M = 5, 10 and 20 nodes on each axis. A record was kept of the total number of minimization steps, the largest value of λ that could be used, and the CPU time (Table 2).

We see that as the mesh becomes finer with increasing M, the weighted Sobolev gradient becomes more and more efficient relative to the unweighted gradient, and the number of iterations required does not increase substantially in the weighted case. For smaller values of κ, the advantage of the weighted gradient over the unweighted gradient is more pronounced.

Fig. 1 shows the results of using steepest descent with the weighted and unweighted gradients to minimize (13) in the two-dimensional case, with an initial iterate of u = 0. It plots the infinity norm of the gradient vector over the first eight iterations for both methods. From the graph we see that convergence is slow with the unweighted gradient.

5. Comparison with Newton's method

In this section we compare Newton's method and the weighted Sobolev gradient approach. Newton's method is an iterative method with quadratic convergence, and in many circumstances Newton's method and its variants are considered optimal provided a good initial guess is available. In our numerical experiments we therefore chose a good initial guess so that the two methods could be compared fairly.
Fig. 1. Graph of the first 8 iterations versus the infinity norm of the gradient vector, with gradients in H^1 and H^{1,w}.

Table 3. Comparison of Newton's method and steepest descent in H^{1,w} for different values of κ. NC denotes no convergence.

Error    Newton (κ = 1.0)   H^{1,w} (κ = 1.0)   Newton (κ = 0.1)   H^{1,w} (κ = 0.1)   Newton (κ = 0.01)   H^{1,w} (κ = 0.01)
10^{-5}  4                  7                   4                  7                   4                   7
10^{-7}  6                  11                  8                  13                  NC                  12
10^{-9}  6                  15                  NC                 19                  NC                  18
Consider the variational form of the nonlinear problem, given by

⟨G(u), h⟩ = ∫_Ω δt (u^4/4)h + (1 − δt)(u^2/2)h − fuh + δt (κ/2)|∇u|^2 h.  (32)
In order to apply Newton’s method we need to find the Gateaux derivative which is defined as: Let X and Y be two Banach spaces, then the operator φ : K ⊆ X → Y is said to be Gateaux differentiable at v0 if and only if there exist a linear operator L : X → Y such that lim
δ→0
1
δ
(φ(v0 + δ h) − φ(v0 )) = Lh,
∀ h ∈ X , ∥h∥ = 1
(33)
the linear operator L is called the Gateaux derivative of φ at v_0. Newton's method is defined by

⟨F′(u_n)c_n, v⟩ = ⟨F(u_n), v⟩,  ∀v ∈ H^1_0(Ω),  (34)

which can be solved using an appropriate linear solver. The Newton iteration scheme for this problem is then

u_{n+1} = u_n − c_n.  (35)
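For comparison, an undamped Newton iteration (34)–(35) for the same 1-D discretization can be sketched as below: at each step the Jacobian F′(u_n) = diag(3δt u_n^2 + (1 − δt)) − κδt∇^2 is assembled and a sparse linear system is solved for the correction c_n. The grid size and boundary data loosely follow the test problem of this section, but the code is our illustration only.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n, dt, kappa = 31, 0.5, 0.01
h = 10.0 / (n - 1)
u = np.zeros(n); u[0], u[-1] = 0.1, -0.1     # Dirichlet data, cf. Section 5
f = u.copy()                                 # previous time level

D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n)) / h
Lap = -(D.T @ D)                             # discrete Laplacian

for it in range(50):
    F = dt * u**3 + (1 - dt) * u - f - kappa * dt * (Lap @ u)
    F[0] = F[-1] = 0.0                       # residual vanishes on boundary
    if np.abs(F).max() < 1e-9:
        break
    J = (diags(3 * dt * u**2 + (1 - dt)) - kappa * dt * Lap).tolil()
    J[0, :] = 0.0; J[-1, :] = 0.0; J[0, 0] = J[-1, -1] = 1.0
    u -= spsolve(J.tocsr(), F)               # u_{n+1} = u_n - c_n, cf. (35)
```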
We work out an example in the two-dimensional case. We let Ω be a square centered at the origin with side length 10. The initial state was u = 0.0; we set u = 0.1 on the vertical edges and u = −0.1 on the horizontal edges of the square, and let δt = 0.5. The system was evolved over 2 time steps, iterating until the infinity norm of the gradient was less than some set tolerance. Results were obtained on 30 × 30 grid points. Table 3 shows the results for different values of κ; the entry NC in the table denotes no convergence.

The results show that, initially, Newton's method performs better than the weighted gradient in terms of minimization steps, but it fails to damp out low-frequency error modes and does not converge under strict stopping criteria. The weighted gradient takes more iterations but continues to converge even for very tight stopping criteria; overall there is not much difference in minimization steps between the two methods. As κ decreases, convergence of the weighted Sobolev gradient remains monotonic, whereas this is not the case for Newton's method.
6. Summary and conclusions

In this paper we have presented minimization schemes for the Ginzburg–Landau functional based on the weighted Sobolev gradient technique [2]. Descent in H^{1,w} outperforms descent in H^1 as the spacing of the numerical grid is made finer. The numerical work in this paper indicates a systematic way to choose the underlying space and demonstrates that an appropriate choice of weight function contributes to developing efficient code.

Newton's method converges only when the initial guess is taken sufficiently close to a local minimum. At each step, Newton's method requires the inverse of the Hessian matrix, which can be very expensive to evaluate, and the Hessian may fail to be positive definite. In [11] it was shown that for some problems Newton's method fails to converge near a singularity while the Sobolev gradient method does converge. In [9,16], failure of Newton's method was demonstrated for larger jump discontinuities and increased numbers of nodes; in those settings the Sobolev gradient method still functions well. The introduction of weight functions therefore allows the Sobolev gradient method to be applied efficiently to a broader range of problems. One advantage of steepest descent is that it converges even for a poor or random initial guess, which is not the case for Newton's method. This research could be further extended by comparing the performance of the weighted gradient with a nonlinear FAS multigrid method in a finite-element setting.

Acknowledgments

The authors are grateful to the referees for their careful reading, kind remarks and valuable suggestions for improving the paper. We acknowledge the Higher Education Commission (HEC) of Pakistan for providing a research grant through a post-doctoral fellowship scheme.

References
[1] W.T. Mahavier, A numerical method utilizing weighted Sobolev descent to solve singular differential equations, Nonlinear World 4 (1997) 435–455.
[2] J.W. Neuberger, Sobolev Gradients and Differential Equations, in: Springer Lecture Notes in Mathematics, vol. 1670, Springer-Verlag, New York, 1997.
[3] C. Beasley, Finite Element Solution to Nonlinear Partial Differential Equations, Ph.D. Thesis, University of North Texas, Denton, TX, 1981.
[4] J. Karatson, L. Loczi, Sobolev gradient preconditioning for the electrostatic potential equation, Comput. Math. Appl. 50 (2005) 1093–1104.
[5] J. Karatson, I. Farago, Preconditioning operators and Sobolev gradients for nonlinear elliptic problems, Comput. Math. Appl. 50 (2005) 1077–1092.
[6] J. Karatson, Constructive Sobolev gradient preconditioning for semilinear elliptic systems, Electron. J. Differential Equations 75 (2004) 1–26.
[7] J. Garcia-Ripoll, V. Konotop, B. Malomed, V. Perez-Garcia, A quasi-local Gross–Pitaevskii equation for attractive Bose–Einstein condensates, Math. Comput. Simul. 62 (2003) 21–30.
[8] A. Majid, S. Sial, Application of Sobolev gradient method to Poisson–Boltzman system, J. Comput. Phys. 229 (2010) 5742–5754.
[9] A. Majid, S. Sial, Approximate solutions to Poisson–Boltzman systems with Sobolev gradients, J. Comput. Phys. 230 (2011) 5732–5738.
[10] N. Raza, S. Sial, S. Siddiqi, T. Lookman, Energy minimization related to nonlinear Schrödinger equation, J. Comput. Phys. 228 (2009) 2572–2577.
[11] W.B. Richardson, Sobolev preconditioning for the Poisson–Boltzman equation, Comput. Methods Appl. Mech. Engrg. 181 (2000) 425–436.
[12] W.B. Richardson, Sobolev gradient preconditioning for image processing PDEs, Comm. Numer. Methods Engrg. 24 (2008) 493–504.
[13] R.J. Renka, Constructing fair curves and surfaces with a Sobolev gradient method, Comput. Aided Geom. Design 21 (2004) 137–149.
[14] S. Sial, J.W. Neuberger, T. Lookman, A. Saxena, Energy minimization using Sobolev gradients: application to phase separation and ordering, J. Comput. Phys. 189 (2003) 88–97.
[15] S. Sial, J. Neuberger, T. Lookman, A. Saxena, Energy minimization using Sobolev gradients in a finite-element setting, in: Proc. World Conference on 21st Century Mathematics, Lahore, Pakistan, 2005.
[16] N. Raza, S. Sial, S. Siddiqi, Sobolev gradient approach for the time evolution related to energy minimization of Ginzburg–Landau functionals, J. Comput. Phys. 228 (2009) 2566–2571.
[17] N. Raza, S. Sial, S.S. Siddiqi, Approximating time evolution related to Ginzburg–Landau functionals via Sobolev gradient methods in a finite-element setting, J. Comput. Phys. 229 (2010) 1621–1625.
[18] S. Sial, Sobolev gradient algorithm for minimum energy states of s-wave superconductors: finite-element setting, Supercond. Sci. Technol. 18 (2005) 675–677.
[19] B. Brown, M. Jais, I. Knowles, A variational approach to an elastic inverse problem, Inverse Problems 21 (2005) 1953–1973.
[20] R. Nittka, M. Sauter, Sobolev gradients for differential algebraic equations, Electron. J. Differential Equations 42 (2008) 1–31.
[21] R.J. Renka, J.W. Neuberger, Sobolev gradients: introduction, applications, problems, Contemp. Math. 257 (2004) 85–99.
[22] F. Hecht, O. Pironneau, K. Ohtsuka, FreeFem++ Manual, www.freefem.org.
[23] R.A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[24] P.C. Hohenberg, B.I. Halperin, Theory of dynamic critical phenomena, Rev. Modern Phys. 49 (1977) 435–479.
[25] T.M. Rogers, K.R. Elder, R.C. Desai, Numerical study of the late stages of spinodal decomposition, Phys. Rev. B 37 (1988) 9638–9649.
[26] J.W. Cahn, J.E. Hilliard, Free energy of a nonuniform system. I. Interfacial free energy, J. Chem. Phys. 28 (1958) 258–267.
[27] R. Courant, K. Friedrichs, H. Lewy, On the partial difference equations of mathematical physics, IBM J. Res. Dev. 11 (1967) 215–234.