A general approach to approximate solutions of nonlinear differential equations using particle swarm optimization

M. Babaei

Civil Engineering Department, Faculty of Engineering, Urmia University, Urmia, Iran
Article history: Received 29 May 2012; Received in revised form 7 December 2012; Accepted 4 February 2013; Available online xxx.

Keywords: Linear/nonlinear differential equation; Solution approximation; Fourier series; Weighted-residual functional; Particle swarm optimization (PSO); Penalty function
Abstract

A general algorithm is presented to approximately solve a great variety of linear and nonlinear ordinary differential equations (ODEs), independent of their form, order, and given conditions. The ODEs are formulated as an optimization problem. Basic tools from different areas of mathematics are coupled with one another to cope effectively with the propounded problem: the Fourier series expansion, the calculus of variations, and particle swarm optimization (PSO) are employed in the formulation. Both boundary value problems (BVPs) and initial value problems (IVPs) are treated in the same way. Boundary and initial conditions are both modeled as constraints of the optimization problem, imposed through the penalty function strategy. The penalty function, in cooperation with the weighted-residual functional, constitutes the fitness function, which is the central concept in evolutionary algorithms. The robust metaheuristic optimization technique of the PSO is employed to find the solution of the extended variational problem. Finally, illustrative examples demonstrate the practicality and efficiency of the presented algorithm as well as its wide operational domain. © 2013 Elsevier B.V. All rights reserved.
1. Introduction

It is known that an explicit solution of a great variety of differential equations is not achievable straightforwardly. Generally, there are many important problems in engineering and science, especially nonlinear ones, to which analytical methods are not applicable or at least are very complicated to use [1]. In these cases, the use of available numerical techniques such as the Euler, Runge–Kutta, and Adams methods proves useful and sometimes inevitable. However, each of these numerical techniques has its own operational restrictions which strictly confine its functioning domain. In addition, most of them are based on classical mathematical tools. Nowadays, artificial intelligence (AI) provides us with powerful evolutionary algorithms for solving the most sophisticated problems. With the advent of these AI algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO), neural networks (NN), and the ant colony (AC) method, applied scientists have strong tools to reinforce their problem-solving insight and expand their horizons.

An extensive number of papers have been published on numerical solutions for various types of differential equations [2] and, in particular, nonlinear ones [3]. Darania et al. [4–6] presented significant works on solving integro-differential equations. Liao and Tan [7] developed a general approach to obtain series solutions of nonlinear differential equations. Lee [8] approximated solutions of nonlinear differential equations using the method of bilaterally bounded (MBB). Mateescu [9] presented a simple and interesting formulation for solving the Cauchy problem using a genetic algorithm. Mastorakis [10] provided several new ideas on the formulation of ODEs as optimization problems; he utilized finite element concepts to develop the fitness functions. Many others have also applied evolutionary algorithms to differential equations [11–13]. Alternatively, Deb in his book discussed the application of genetic programming (GP) to solving ODEs [14]. However, while working on this idea, we encountered some difficulties which need more investigation. First, in this approach the composition of the approximation function varies from one problem to another, whereas, to have a systematic approach, we were looking for a common composition which covers various classes of ODEs in the same way. Second, to evaluate the residual functional of an Nth-order differential equation, N symbolic differentiation operations must be performed for each function generated in genetic programming; this is very costly and extremely time-consuming when handling thousands of symbolic functions. Third, it may happen that the functions stochastically generated in GP
do not have higher-order derivatives, or their derivatives vanish at lower orders. Based on these facts, we suggest using the Fourier partial sum together with the PSO algorithm for this purpose, because it has none of these deficiencies.

Although there are very strong theorems on the existence, uniqueness, and solving methods of various ordinary differential equations (ODEs), there is still a lack of a general approach which meets all engineering demands when dealing with the unconventional nonlinear ODEs encountered in applied science. Moreover, it would be unrealistic to seek analytical and exact solutions for all ODEs of various types. Therefore, in this paper, we put forward the challenging idea of finding a wide-ranging approach to approximate the solution of ODEs, independent of their form, order, and linearity. Such a method can be based on evolutionary algorithms. It should be devised so as to solve linear and nonlinear differential equations with arbitrary boundary and/or initial values in a similar way. Accordingly, in this study, a new algorithm is presented which is extensively flexible and simple, yet general enough to be applied to those nonlinear differential equations which do not fit in the frameworks of other conventional numerical methods. It should be noted that this paper does not aim to investigate the problem theoretically; rather, it provides an applicable tool for solving complicated nonlinear differential equations of higher orders and various forms. The presented algorithm should thus be viewed as an engineering approach for the solution approximation of unconventional ODEs, rather than a purely mathematical one which guarantees any desired level of accuracy. In addition, owing to the stochastic nature of the PSO algorithm, which plays the substantial role of finding the solution in the suggested algorithm, a satisfactorily accurate approximation is not always to be expected, at least in a first implementation.

2. Particle swarm optimization

The particle swarm optimization (PSO) method is an evolutionary global-search technique for large multidimensional solution spaces. It is an engineering model of swarm intelligence, adopted from flocks of birds, fish schooling, and swarms of insects [15]. The first mathematical formulation of the PSO was devised by Kennedy and Eberhart [16]. Recent research shows that the PSO algorithm gives comparably appropriate results in a faster and cheaper way than counterparts such as the genetic algorithm [17,18]. Like other population-based evolutionary algorithms, the PSO utilizes numerous randomly created searching individuals, so-called particles. These artificial individuals are the searching agents of the PSO. They explore an extensively large search space to find the food position, which is interpreted as the global optimum in scientific and engineering terms. The individuals in the flock memorize their earlier experience to use in their next movements. They also exchange information about their current positions. The data transmission in the PSO algorithm is a one-way information flow mechanism [19]: only the best-qualified particle informs the others. Consequently, particles track the food location using the information received from this particle together with their preceding personal experience saved in their memory.
In this mechanism, members of a swarm communicate their information and modify their positions and velocities according to the best position that has appeared in the current movement of the swarm. The particles of the swarm gradually get close to this position and finally find the optimal point through their interactive cooperation [15]. Now, consider a swarm of particles wandering around the solution space searching for the optimum solution. A particle is identified through two basic characteristics: a position vector Xi(t) and a velocity vector Vi(t). A mathematical model of the PSO algorithm for updating the particles' positions in each movement can be formulated as follows:

Vi(t + 1) = ω Vi(t) + c1 rand1 (pbesti(t) − Xi(t)) + c2 rand2 (gbest(t) − Xi(t))   (1)

Xi(t + 1) = Xi(t) + Vi(t + 1)   (2)
where Xi(t) and Xi(t + 1) are the positions of agent i at iterations t and t + 1, and Vi(t) and Vi(t + 1) are its velocities at iterations t and t + 1. Particle velocities in each direction are bounded by a maximum velocity Vmax. ω is the inertia weight that controls the exploration and exploitation of the search space; its value is set to 0.9 at the initial iteration and decreases linearly as the optimization process progresses [20]. pbesti is the best-so-far position of agent i, called the personal best, and gbest is the best position experienced so far by the rest of the swarm. The first term in Eq. (1) is the previous velocity of agent i. c1 and c2, the cognition and social components respectively, are the acceleration constants which change the velocity of a particle toward pbest and gbest; their values are set to 0.5 and 1.5, respectively [21]. rand is a vector of uniformly distributed random numbers between 0 and 1. The basic flowchart of the PSO is shown in Fig. 1. Compared to the GA, the PSO has few parameters and can be implemented more easily [15]. Therefore, in this study, the PSO technique is employed to find the solution of the problem under consideration. However, in our work the PSO is merely an optimization tool, which could be substituted by any newly developed one offering better results. Moreover, given the fast advancement in the area of evolutionary algorithms, this version of the PSO may not remain the latest standard, and it may be replaced by updated versions in future research (see Ref. [22]).
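For concreteness, the update of Eqs. (1) and (2) can be sketched in a few lines of Python. The array shapes and the Vmax value are illustrative assumptions of this sketch; ω, c1, and c2 follow the values quoted above.

```python
# A minimal sketch of one PSO movement, Eqs. (1)-(2).
import numpy as np

def pso_step(X, V, pbest, gbest, omega, c1=0.5, c2=1.5, v_max=1.0):
    """One iteration; X, V, pbest are (n_particles, n_dims) arrays."""
    r1 = np.random.rand(*X.shape)          # rand1 in Eq. (1)
    r2 = np.random.rand(*X.shape)          # rand2 in Eq. (1)
    V = omega * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    V = np.clip(V, -v_max, v_max)          # bound velocities by Vmax
    X = X + V                              # Eq. (2)
    return X, V
```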
3. Proposed methodology for solving ODEs

In this paper, we develop a simple, general algorithm which can be used to find an approximate solution of a great variety of linear and nonlinear ODEs using the PSO algorithm. In this approach, the problem is attacked from a novel point of view to simplify and strengthen the solving procedure of differential equations. Several essential concepts from different areas of mathematics, such as variational calculus, series expansion, and evolutionary optimization, are adopted in the formulation of the presented algorithm. Each of these components is briefly discussed below, and they are then appropriately combined to construct an efficient solving procedure. It is known that in conventional numerical methods for solving ODEs, the solution interval of the problem is divided into finer steps and the problem is then solved step by step. Most of these methods carry severe restrictions on the forms of differential equations that they can deal with. For example, most of them are devised to solve first- or second-order ODEs, initial value or boundary value problems, or require the explicit form of the ODE such as

y″ = f(x, y, y′)   (3)
Fig. 1. The particle swarm optimization flowchart.
However, in the new algorithm there are no such restrictions. It offers a completely general way of approximating solutions of linear and nonlinear ODEs of higher orders in their implicit form

f(x, y, y′, ..., y^(n)) = 0   (4)
Accordingly, for this family of differential equations, a distinctive approach is suggested in which a common form of approximation function is used for solving ODEs over the whole desired domain or its subdomains.

3.1. Linear and nonlinear differential equations

The main idea behind the algorithm is to use the terms of the nth partial sum of the Fourier series expansion, with undetermined coefficients, in company with the PSO algorithm for solving various forms of differential equations. The Fourier series is suggested because it enjoys some impressive and noteworthy qualities which exactly fit our current purpose. First of all, a powerful theory backs its convergence for a wide variety of continuous functions [23]. Secondly, it contains sine and cosine terms, which are differentiable to any order; this allows us to use a finite number of terms suited to the solution approximation of differential equations of higher orders, without worrying about derivatives vanishing in the differentiation process. Third, a unique form of approximation function can be utilized for different kinds of ODEs in the same manner; in other words, it is no longer required to construct a different function composition for each differential equation, as is conventional in methods like the Petrov–Galerkin and Ritz approaches [24]. The algorithm can be applied to boundary value problems (BVPs) as simply as to initial value problems (IVPs).

Consequently, in this algorithm, we choose appropriate terms of the Fourier partial sum with indeterminate coefficients. Obviously, since we do not have the solution function in advance, we cannot utilize the analytical formulas to determine the coefficients of the approximate solution. These coefficients are put aside to be determined in the evolutionary process; the PSO is employed to find them. This approach is a meshless one, which gives the solution function instead of its numerical values at several discrete points. To explain the procedure, consider the general implicit form of the following nonlinear ODE, which is to be solved on the interval x0 to xn
f(x, y, y′, ..., y^(n)) = 0   (5)

with boundary conditions

y(x0) = y0, y′(x0) = y′0, ..., y(xn) = yn, y′(xn) = y′n, ...   (6)

or initial values

y(x0) = y0, y′(x0) = y′0, ..., y^(n−1)(x0) = y0^(n−1)   (7)
For formulation purposes, the partial sum of the Fourier series centered at x0 is proposed as the approximation function

y(x) ≈ Ynat(x) = a0 + Σ_{m=1}^{nat} [ am cos(mπ(x − x0)/L) + bm sin(mπ(x − x0)/L) ]   (8)
This function, together with its derivatives

y′(x) ≈ Y′nat(x) = Σ_{m=1}^{nat} [ −(mπ/L) am sin(mπ(x − x0)/L) + (mπ/L) bm cos(mπ(x − x0)/L) ]

y″(x) ≈ Y″nat(x) = Σ_{m=1}^{nat} [ −(mπ/L)² am cos(mπ(x − x0)/L) − (mπ/L)² bm sin(mπ(x − x0)/L) ]

⋮

y^(n)(x) ≈ Y^(n)nat(x) = Σ_{m=1}^{nat} [ ... ]   (9)
will be used to estimate the solution of Eq. (5). In these equations, x0 is the start point of the solution interval and L is the length of the solution interval, i.e., L = xn − x0. nat is the number of sine and cosine terms, i.e., the number of approximation terms taken from the Fourier series. It is noted that higher accuracy could be achieved if we were able to handle a larger number of unknown variables, nat, through the available evolutionary algorithms. Theoretically, these algorithms can solve problems with any number of variables, but in practice they fail to find the global optimum when dealing with a large number of variables: they become trapped in local optima. In addition, large search spaces need more time and computational effort to be appropriately explored. Because of these deficiencies of the available metaheuristic optimization algorithms, we had to check several alternatives to find the maximum possible value of nat which neither noticeably affects the precision of the solution nor results in trapping in local optima. However, as more powerful metaheuristic algorithms come into existence, working with larger numbers of variables will become possible in future work.
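A minimal sketch of how Eq. (8) and the term-wise derivatives of Eq. (9) can be evaluated numerically is given below (Python with numpy). The coefficient packing [a0, a1, b1, a2, b2, ...] mirrors Fig. 2; the function name and interface are assumptions for illustration.

```python
import numpy as np

def fourier_approx(x, coeffs, x0, L, order=0):
    """Evaluate the nat-term partial sum of Eq. (8), or its order-th derivative."""
    a0 = coeffs[0]
    a = np.asarray(coeffs[1::2])               # a_1 ... a_nat
    b = np.asarray(coeffs[2::2])               # b_1 ... b_nat
    w = np.arange(1, len(a) + 1) * np.pi / L   # m*pi/L for m = 1..nat
    arg = np.outer(np.atleast_1d(x) - x0, w)   # m*pi*(x - x0)/L
    # each differentiation multiplies term m by (m*pi/L) and advances the
    # phase of cos/sin by pi/2, so the k-th derivative follows directly
    y = (np.cos(arg + order * np.pi / 2) * w**order) @ a \
        + (np.sin(arg + order * np.pi / 2) * w**order) @ b
    return y + (a0 if order == 0 else 0.0)
```

For instance, fourier_approx(x, c, x0, L, order=2) returns Y″nat(x) without any symbolic differentiation, which is what makes the residual of Eq. (5) cheap to evaluate.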
3.2. Systems of linear and nonlinear differential equations

Systems of nonlinear equations composed of a few coupled differential equations can also be effectively handled by the presented algorithm. As a representative example, consider the following system of two nonlinear differential equations on the solution interval t0 to tn

f1(t, x, y, x′, y′, ..., x^(n), y^(n)) = 0
f2(t, x, y, x′, y′, ..., x^(n), y^(n)) = 0   (10)

with given boundary conditions
x(t0) = x0, x′(t0) = x′0, ..., x(tn) = xn, x′(tn) = x′n, ...
y(t0) = y0, y′(t0) = y′0, ..., y(tn) = yn, y′(tn) = y′n, ...   (11)

or initial conditions

x(t0) = x0, x′(t0) = x′0, ..., x^(n−1)(t0) = x0^(n−1)
y(t0) = y0, y′(t0) = y′0, ..., y^(n−1)(t0) = y0^(n−1)   (12)
where x and y are both functions of the independent variable t. As already discussed, two partial sums of the Fourier series centered at t0 can be used for approximation purposes in this system of equations:

x(t) ≈ Xnat(t) = a0 + Σ_{m=1}^{nat} [ am cos(mπ(t − t0)/L) + bm sin(mπ(t − t0)/L) ]
y(t) ≈ Ynat(t) = c0 + Σ_{m=1}^{nat} [ cm cos(mπ(t − t0)/L) + dm sin(mπ(t − t0)/L) ]   (13)
Eq. (13), along with its derivatives with respect to t, is used in the formulation of the solving algorithm. In these equations, t0 is the start point of the solution interval, nat is the number of sine and cosine terms taken from the Fourier series, and L is the length of the solution interval, i.e., L = tn − t0. By numerically evaluating these functions and their corresponding derivatives, obtained from term-wise differentiation, at several discrete points, we can easily investigate the accuracy of the solution as well as the satisfaction of the boundary values (BVs) and/or initial values (IVs).

4. Weighted-residual functional as convergence criterion

While working with the implicit form of the differential equations, we need a criterion to judge whether the desired accuracy of the approximate solution has been obtained by the algorithm. To have such a numerical criterion for the acceptability of the approximate solution, we must have a numerical measure of the errors: if this measure falls into a satisfactory range, we can rely on the results obtained from the algorithm. Furthermore, an appropriate criterion is required as the objective function introduced to the PSO algorithm. For all these considerations, we turn to the concept of the weighted-residual functional from variational calculus.
The weighted-residual functional is an integral which tests the solution of the problem [25]. To be precise, the approximate solution Ynat is required to satisfy the ODE in the form of its residual integral, given as

WRF = ∫_D W(x) · |R(x)| dx   (14)

where W(x) is called the weight function and R(x) is the residual [24], obtained by substituting the approximate function Y(x) and its derivatives for y(x), y′(x), ..., y^(n)(x) in the implicit form of the differential equation (5):

R(x) ≡ f(x, Y(x), Y′(x), ..., Y^(n)(x))   (15)
As will be seen, this integral is employed as the objective function of the optimization problem and is numerically evaluated by the trapezoidal or Simpson method during the solving process. The value of WRF should approach zero; otherwise the algorithm does not converge properly. In conventional weighted-residual methods like the Galerkin and Ritz methods, the weight function W(x) is an arbitrary function used for generating the independent algebraic equations that determine the coefficients of the approximation function, as discussed in Ref. [24]. No such duty is given to this function in the present approach, since we do not need any algebraic equations for determining these coefficients here; that task is carried out by the PSO algorithm. Instead, in the current formulation, this function is given the fundamentally different and substantial task of putting pressure on those parts of the solution domain where a more precise approximation is needed. Wherever the absolute value of W(x) is large, the error arising in the approximate solution is proportionally magnified, to be strictly dealt with by the PSO algorithm. The exactness of the final solution is checked through the WRF value while the algorithm is in progress: the smaller this value, the higher the achieved accuracy. The process is stopped when the value of this functional becomes satisfactorily small, and the algorithm returns the solution function identified in the evolutionary process. If the desired accuracy is not achieved, one of the following modifications can be taken up before the algorithm is executed again:

I. Taking additional terms of the Fourier partial sum in order to increase the adaptability of the approximation function.
II. Using a smaller but appropriate number of terms of the Fourier partial sum to avoid trapping in local optima in the evolutionary algorithm.
III. In initial value problems, the accuracy of the algorithm can be noticeably improved provided that the solution interval is divided into finer subdomains and the solution is approximated part by part.

It should be noted that the first alteration does not contradict the second one, because each of them cures its own distinctive probable error source.

5. Formulation of boundary and initial conditions

When dealing with differential equation problems, we need to find a solution which satisfies the equation itself and simultaneously its boundary and initial conditions. Consequently, since in the presented formulation differential equations are solved by an evolutionary optimization algorithm, an appropriate strategy is required to take the BVs and IVs into account. Accordingly, BVs and IVs should both be formulated as constraints of the optimization problem. These conditions can be either homogeneous or nonhomogeneous. Homogeneous ones are treated in their original implicit form; e.g., at the start point of the solution interval we may have
y(x0) = 0 ⇒ g1(x0) = y(x0) ≈ Y(x0)   (16-a)
y′(x0) = 0 ⇒ g2(x0) = y′(x0) ≈ Y′(x0)   (16-b)
⋮
y^(n)(x0) = 0 ⇒ gn(x0) = y^(n)(x0) ≈ Y^(n)(x0)   (16-c)
The nonhomogeneous conditions, at this point, can be handled in their normalized forms as

y(x0) = y0 ⇒ g1(x0) = |y(x0)/y0 − 1| ≈ |Y(x0)/y0 − 1|   (17-a)
y′(x0) = y′0 ⇒ g2(x0) = |y′(x0)/y′0 − 1| ≈ |Y′(x0)/y′0 − 1|   (17-b)
⋮
y^(n)(x0) = y0^(n) ⇒ gn(x0) = |y^(n)(x0)/y0^(n) − 1| ≈ |Y^(n)(x0)/y0^(n) − 1|   (17-c)
In the above relations, g1, g2, ..., gn represent the constraints of the optimization problem. All conditions, of whatever type, can then be incorporated into a single penalty function and treated in the same way.
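In code, the constraint values of Eqs. (16) and (17) reduce to a small helper. Taking absolute values, so that g measures the magnitude of the violation fed into the penalty of Eq. (19) below, is an assumption of this sketch.

```python
def condition_violation(approx_value, target):
    """g for one boundary/initial condition on y, y', ..., per Eqs. (16)-(17)."""
    if target == 0.0:
        return abs(approx_value)               # homogeneous case, Eq. (16)
    return abs(approx_value / target - 1.0)    # normalized form, Eq. (17)
```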
a0 | a1 | b1 | a2 | … | bnat

Fig. 2. The arrangement of the variables in the PSO particles.
6. Penalty function and fitness function

It is well known that the PSO algorithm is an unconstrained optimization technique. Therefore, for our constrained problems, one of the known constraint-handling schemes needs to be employed. Accordingly, the most widely used method, the penalty function, is employed to impose the constraints discussed in the preceding section. The penalty function has the task of penalizing solutions that violate the given conditions. Consequently, by adding the value of the penalty function PFV to the weighted-residual integral WRF, we obtain the fitness function for the problem:

FFV = WRF + PFV   (18)
where the value of the penalty function PFV is calculated following the penalty scheme presented by Rajeev and Krishnamoorthy [26]:

PFV = WRF · Σ_{m=1}^{nBVs+nIVs} Km gm   (19)
in which gm is the normalized violation of the mth constraint, computed from Eqs. (16) and (17) according to the degree of violation of that condition; nBVs and nIVs are the numbers of boundary and initial conditions. Km is a penalty multiplier chosen according to the level of emphasis on the satisfaction of each condition. If Km is chosen to be a large number, the pressure on satisfying the corresponding condition is high, and the PSO algorithm tries to satisfy this constraint rather than the differential equation itself. On the other hand, if Km is given a small value, the corresponding condition is only weakly satisfied. Therefore, assigning appropriate values to these coefficients, and adjusting them during the evolutionary process, has its own importance, whose detailed discussion is beyond the scope of this paper. For the sake of simplicity, the coefficients Km are all kept constant throughout the examples investigated here.
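Eqs. (18) and (19) then combine into a short fitness evaluation. Using a single constant multiplier K for every condition follows the simplification stated above; the default value of K is an illustrative assumption.

```python
def fitness(wrf_value, violations, K=50.0):
    """FFV of Eq. (18); `violations` holds the g_m of Eqs. (16)-(17)."""
    pfv = wrf_value * sum(K * g for g in violations)   # Eq. (19)
    return wrf_value + pfv                             # Eq. (18)
```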
7. Proposed problem-solving algorithm

Considering all the concepts already discussed, one can formulate the ODE problem of Eqs. (5)–(7) as the minimization of a residual functional over the solution domain D:

minimize: WRF = ∫_D |W(x)| · |R(x)| dx   (14)
subject to the BV and/or IV constraints

g_i^(1) = 0,  i = 1, 2, ..., nBVs   (20-a)
g_j^(2) = 0,  j = 1, 2, ..., nIVs   (20-b)
As already mentioned, in the above relations W(x) is the weight function and R(x) is the residual function; g_i^(1) and g_j^(2) represent the constraints of the boundary and initial conditions, respectively. The stepwise procedure of the proposed algorithm is presented here so that it can easily be reproduced by other researchers. The presented algorithm can be executed step by step as follows:

(1) Prepare an appropriate implicit form of the given ODE, as in Eq. (15); the solution interval runs from x0 to xn.
(2) Write the nonhomogeneous boundary and/or initial conditions in their normalized form using Eqs. (16) and (17), for use in evaluating penalty values.
(3) Take an appropriate number of terms of the Fourier series with indeterminate coefficients, as in Eq. (8).
(4) Assign an optimization variable to each coefficient in the approximation function, to be introduced to the PSO algorithm as shown in Fig. 2.
(5) Call the PSO optimizer to find the indeterminate coefficients of the approximation function Ynat(x).
(6) Compute the xy-coordinates of points with step size Δx for the x components:

x_i = x0 + i·Δx   (21)
y_i ≈ Ynat(x_i)   (22)
It is noted that the step size Δx is usually an arbitrary value which does not noticeably affect the accuracy of the final result. This parameter is merely used for the numerical evaluation of the residual integral and, in contrast to conventional stepwise methods, a finer value of this parameter will not increase the accuracy of the solution, because the method is meshless.

(7) Calculate the approximate values of the derivatives at the points x_i, up to the order needed in the differential equation.
(8) Constitute the weighted-residual functional of Eq. (14) and numerically evaluate it using the approximation function Ynat(x) and the values of its derivatives:
WRF ≈ ∫_D W(x) · |f(x, Ynat(x), Y′nat(x), ..., Y^(n)nat(x))| dx   (23)
Fig. 3. The basic flowchart of the presented algorithm.
(9) Numerically evaluate this integral by means of the trapezoidal or Simpson integration method. For the sake of simplicity, the weight function can be assumed to be unity for the present.
(10) Control the satisfaction of the boundary and initial conditions, and then calculate the penalty value for violated solutions from Eq. (19).
(11) Obtain the fitness function value FFV for each particle, adding up the weighted-residual and penalty values as indicated in Eq. (18).
(12) Repeat 6–11 until the stopping criteria of the PSO evolutionary algorithm are reached; repeat 6–12 until the desired accuracy is achieved.
It is obvious that the solving procedure for systems of equations can be constructed in a similar way. A brief flowchart of the algorithm is presented in Fig. 3.

8. Numerical examples

Up to this point we have discussed the generality of an algorithm for finding approximate solutions of ODEs. Here we investigate several BV and IV problems with this algorithm to test its applicability as well as its accuracy. Furthermore, to show that the algorithm covers various classes of differential equations, it is coded and applied to different types of ODEs.
Fig. 4. Comparison of the exact and approximate solution in ex. 8.1.
Examples are selected from some of the most famous references in this field. In order to have an illustrative comparison between the approximate results of the presented algorithm and the exact solutions, even the nonlinear examples are selected from those for which exact solutions are available from the theory of ordinary differential equations. Moreover, the graphical comparison of the obtained approximations with the exact solutions reveals the efficiency of the algorithm and exhibits rather small deviations between the exact solutions and the algorithm outcomes. The cognition and social components are set to 0.5 and 1.5, respectively. Applying the proposed algorithm to the numerical examples with population sizes of 200–600, depending on the number of terms taken from the Fourier series, yields the results given in Eqs. (27), (31), (35), and (39). The maximum number of generations in these computations is set to 1000. As already mentioned, the weight function is assumed to be unity in all examples under consideration for the time being.

8.1. Test for integro-differential equation [27]

Consider the following integro-differential equation
u′(x) + 2u(x) + 5 ∫₀ˣ u(t) dt = { 1, x ≥ 0; 0, x < 0 }   (24)

u(0) = 0   (25)

for which the solution interval is considered to be from 0 to π. The exact solution is

u(x) = (1/2) e^(−x) sin(2x)   (26)
The best approximate solution of this problem is obtained for nat = 6, giving

y(x) ≈ 3.74e−02 + 4.67e−03 cos(x) + 5.31e−03 sin(x) − 3.87e−02 cos(2x) + 1.36e−01 sin(2x) − 1.49e−02 cos(3x) + 8.24e−02 sin(3x) − 1.11e−02 cos(4x) + 4.14e−02 sin(4x) + 1.21e−02 cos(5x) + 2.81e−02 sin(5x) + 1.06e−02 cos(6x) + 1.01e−03 sin(6x)   (27)
Fig. 4 schematically shows the diagram of this function in comparison with the exact solution.
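This comparison can be reproduced with a short script that evaluates the approximation against the exact solution of Eq. (26), assuming the coefficient-to-harmonic pairing printed in Eq. (27):

```python
import numpy as np

# a_0 ... a_6 and b_1 ... b_6 as printed in Eq. (27)
a = [3.74e-02, 4.67e-03, -3.87e-02, -1.49e-02, -1.11e-02, 1.21e-02, 1.06e-02]
b = [5.31e-03, 1.36e-01, 8.24e-02, 4.14e-02, 2.81e-02, 1.01e-03]

x = np.linspace(0.0, np.pi, 200)
y = a[0] + sum(a[m] * np.cos(m * x) + b[m - 1] * np.sin(m * x)
               for m in range(1, 7))
u_exact = 0.5 * np.exp(-x) * np.sin(2.0 * x)     # Eq. (26)
print("max |Y - u| on [0, pi]:", np.abs(y - u_exact).max())
```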
8.2. Test for system of linear ODEs

Since this work mainly concerns engineering applications, let us test the algorithm on a system of linear equations adopted from Ref. [1]. This system is obtained for a simple circuit:

dI/dt = −I − V
dV/dt = 2I − V   (28)

The initial conditions are given as

I(0) = 2,  V(0) = 2   (29)
Fig. 5. Comparison of the exact and approximate solution versus t in ex. 8.2.
The exact solution of this problem is

I(t) = 2 e^(−t) cos(√2 t) − √2 e^(−t) sin(√2 t)   (30-a)

V(t) = 2√2 e^(−t) sin(√2 t) + 2 e^(−t) cos(√2 t)   (30-b)
The approximate solution set is achieved for nat = 3 as follows:

I(t) ≈ 8.827e−01 + 1.316e+00 cos(πt/1.5) − 1.111e+00 sin(πt/1.5) − 1.282e−01 cos(2πt/1.5) − 3.998e−01 sin(2πt/1.5) − 7.071e−02 cos(3πt/1.5) + 2.241e−02 sin(3πt/1.5)   (31-a)

V(t) ≈ 8.784e−01 + 8.378e−01 cos(πt/1.5) + 9.736e−01 sin(πt/1.5) + 2.675e−01 cos(2πt/1.5) + 5.179e−02 sin(2πt/1.5) + 1.620e−02 cos(3πt/1.5) − 4.422e−02 sin(3πt/1.5)   (31-b)
The graphical presentation of these functions is given in Figs. 5 and 6.
Fig. 6. Comparison of the exact and approximate solution (V versus I) in ex. 8.2.
Fig. 7. Comparison of the exact and approximate solution in ex.8.3.
8.3. Test for Brachistochrone problem

The Brachistochrone problem is a well-known example of a nonlinear differential equation with boundary conditions [1]. It was one of the earliest problems posed in the calculus of variations. In brief, we are to find a curve y = f(x) along which a particle slides without friction from P(0, 0) to a lower point Q(x0, y0) in the least time. For the sake of convenience, the y-coordinate of this point is assumed positive in this example. The differential equation of this problem for x0 = 1 and y0 = 2 can be expressed as [1]
y [1 + (y′)²] = k²   (32)

y(0) = 0,  y(1) = 2   (33)
where k² is a certain positive constant to be determined. In our formulation, one additional optimization variable is assigned to this constant, to be found in the PSO optimization process. The approximate value of this parameter found by the PSO algorithm is 4.88, which has 1.66 percent error compared with its analytical value of 4.81. The exact solution of the problem is the parametric equation of a cycloid
x = (k²/2)(θ − sin θ)
y = (k²/2)(1 − cos θ)   (34)
and the approximate solution, obtained for nat = 5, is

y(x) ≈ 8.147e−01 − 1.037e+00 cos(πx) + 6.391e−01 sin(πx) + 1.336e−01 cos(2πx) + 3.361e−01 sin(2πx) + 1.381e−02 cos(3πx) + 5.033e−02 sin(3πx) + 5.167e−02 cos(4πx) + 5.763e−02 sin(4πx) + 2.386e−02 cos(5πx) − 1.433e−02 sin(5πx)   (35)
Fig. 7 graphically shows the degree of accuracy of the approximate solution.

8.4. Test for nonlinear Bernoulli equation

For the fourth example, consider the following Bernoulli equation, one of the most famous second-order nonlinear differential equations for which an exact solution is achievable [1]:

y″ + (y′)² − 2e^(−y) = 0   (36)

y(0) = 0,  y(1) = 0   (37)
This equation is considered as a boundary value problem, whose analytical solution is

y = ln[ (x − 1/2)² + 3/4 ]   (38)
and the best approximate solution, obtained for nat = 3, is

y(x) ≈ 2.40e−02 − 1.48e−05 cos(πx) − 3.32e−01 sin(πx) − 2.40e−02 cos(2πx) + 4.09e−05 sin(2πx) + 1.45e−05 cos(3πx) + 4.08e−03 sin(3πx)   (39)

Fig. 8. Comparison of the exact and approximate solution in ex. 8.4.

Table 1. Obtained results for the solved examples 1 to 4.

Item                                      | Example 1: Integro-differential equation (IVP) | Example 2: Linear system of ODEs (IVP) | Example 3: Brachistochrone problem, nonlinear ODE (BVP) | Example 4: Bernoulli equation, nonlinear ODE (BVP)
Number of variables                       | 13     | 14                | 12                | 7
Size of population                        | 200    | 600               | 300               | 200
Constraint values in the best run         | 5e−06  | −3.3e−05, −3e−06  | 3.6e−05, −4.5e−05 | 1.4e−06, 5.5e−04
Number of executions                      | 20     | 20                | 20                | 20
Average computational time (s)            | 55     | 120               | 35                | 5
Average value of objective function (WRF) | 0.0591 | 0.122             | 0.126             | 0.0204
Best value of objective function (WRF)    | 0.0324 | 0.0810            | 0.0755            | 0.0194
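A similar spot check for this example evaluates Eq. (39) against the analytical solution of Eq. (38):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
# coefficients of Eq. (39), with L = 1
y = (2.40e-02
     - 1.48e-05 * np.cos(np.pi * x)     - 3.32e-01 * np.sin(np.pi * x)
     - 2.40e-02 * np.cos(2 * np.pi * x) + 4.09e-05 * np.sin(2 * np.pi * x)
     + 1.45e-05 * np.cos(3 * np.pi * x) + 4.08e-03 * np.sin(3 * np.pi * x))
y_exact = np.log((x - 0.5) ** 2 + 0.75)          # Eq. (38)
print("max |Y - y_exact| on [0, 1]:", np.abs(y - y_exact).max())
```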
A graphical comparison between these two functions is shown in Fig. 8. A summary of the obtained results for all of the examples is given in Table 1.

9. Summary

In most cases the exact solution of differential equations is unachievable, or at least requires considerable computational effort, which is costly and time-consuming. Nevertheless, the advent of AI algorithms has offered numerical researchers new perspectives for solving extremely complicated problems that have not been solved so far. This study can be viewed as a preliminary step toward a numerical approach that solves all types of differential equations in a uniform way through evolutionary techniques. Accordingly, using the PSO, a general and flexible algorithm is presented which covers a wide range of ordinary differential equations; it places few operational requirements on the form of the differential equations and their given conditions. Although, owing to the stochastic nature of the PSO algorithm, the presented algorithm cannot yet be backed by a theoretical proof of convergence or accuracy level, the error measures obtained from the implemented residual functional give a reasonable criterion on which to base our judgment of the acceptability and accuracy of the approximations. The comparison between the exact solutions and the algorithm outcomes in the investigated examples showed that the algorithm yields satisfactorily precise approximations in its current preliminary formulation. It can, however, be enhanced and developed by other researchers for solving further types of differential equations, as well as partial differential equations, with a higher degree of accuracy. Some of the most important advantages of the presented algorithm are as follows.

(1) As shown in the numerical examples, the algorithm enjoys the generality of being applicable to a wide variety of linear, nonlinear, BV and IV differential equations of higher orders, as well as to systems of nonlinear differential equations.
(2) The adjustable parameters of the presented algorithm provide extensive flexibility, with possible constructive modifications to its structure, which enables the algorithm to be easily refined if a higher degree of accuracy is desired.
(3) The obtained accuracy of the solutions was satisfactory in all of the investigated examples.
(4) The treatment of boundary and initial conditions is very simple and general in the presented algorithm. Moreover, they are both modeled in the same way, and it makes no difference whether the problem is a boundary value or an initial value problem.
(5) Since the terms of the Fourier series are differentiable to any order, the limitation on the degree of differentiability of the adopted approximation function is no longer a concern. Therefore, the number of terms adopted from the Fourier series is not restricted by the maximum order of derivative in a given equation; it can be taken larger or smaller. This is a great source of flexibility in this algorithm.
(6) Unlike stepwise methods, the new methodology is a meshless approach in which the well-known problem of error accumulation is resolved.

To conclude, heuristic algorithms like the PSO and GA can be employed to solve a wide variety of differential equations which have not been solved up to now. They will provide us with robust tools for solving very complicated nonlinear differential equations, as well as partial differential equations, in an easy manner.

References

[1] W.E. Boyce, R.C. DiPrima, Elementary Differential Equations and Boundary Value Problems, 6th ed., John Wiley & Sons, New York, 1997.
[2] S. Yalcinbas, M. Sezer, The approximate solution of high-order linear Volterra–Fredholm integro-differential equations in terms of Taylor polynomials, Applied Mathematics and Computation 112 (2000) 291–308.
[3] P. Roul, P. Meyer, Numerical solutions of systems of nonlinear integro-differential equations by homotopy-perturbation method, Applied Mathematical Modelling 35 (2011) 4234–4242.
[4] P. Darania, E. Abadian, Development of the Taylor expansion approach for nonlinear integro-differential equations, International Journal of Contemporary Mathematical Sciences 1 (14) (2006) 651–664.
[5] P. Darania, A. Ebadian, A method for the numerical solution of the integro-differential equations, Applied Mathematics and Computation 188 (2007) 657–668.
[6] P. Darania, K. Ivaz, Numerical solution of nonlinear Volterra–Fredholm integro-differential equations, Computers & Mathematics with Applications 56 (2008) 2197–2209.
[7] S. Liao, Y. Tan, A general approach to obtain series solutions of nonlinear differential equations, Studies in Applied Mathematics 119 (2007) 297–354.
[8] Z.Y. Lee, Method of bilaterally bounded to solution Blasius equation using particle swarm optimization, Applied Mathematics and Computation 179 (2006) 779–786.
[9] G.D. Mateescu, On the application of genetic algorithms to differential equations, Romanian Journal of Economic Forecasting (2006).
[10] N.E. Mastorakis, Unstable ordinary differential equations: solution via genetic algorithms and the method of Nelder–Mead, in: Proceedings of the 6th WSEAS Int. Conf. on Systems Theory & Scientific Computation, Elounda, Greece, 2007, pp. 297–354.
[11] C.L. Karr, E. Wilson, A self-tuning evolutionary algorithm applied to an inverse partial differential equation, Applied Intelligence 19 (2003) 147–155.
[12] C. Reich, Simulation of imprecise ordinary differential equations using evolutionary algorithms, Proceedings of the 2000 ACM Symposium on Applied Computing 1 (2000) 428–432.
[13] H. Cao, L. Kang, Y. Chen, Evolutionary modeling of systems of ordinary differential equations with genetic programming, Genetic Programming and Evolvable Machines 1 (2000) 309–337.
[14] K. Deb, Multi-Objective Optimization by Evolutionary Algorithms, John Wiley & Sons, Chichester, 2001.
[15] S.N. Sivanandam, S.N. Deepa, Introduction to Genetic Algorithms, Springer-Verlag, Berlin, Heidelberg, 2008.
[16] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, 1995, pp. 1942–1948.
[17] L.J. Li, Z.B. Huang, F. Liu, A heuristic particle swarm optimization method for truss structures with discrete variables, Computers & Structures 87 (2009) 435–443.
[18] L.J. Li, Z.B. Huang, F. Liu, A heuristic particle swarm optimizer (HPSO) for optimization of pin connected structures, Computers & Structures 85 (2007) 340–349.
[19] S.S. Rao, Engineering Optimization: Theory and Practice, 4th ed., John Wiley & Sons, Hoboken, NJ, 2009.
[20] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, 1998, pp. 69–73.
[21] S. Chen, PSO MATLAB code, Version 20100125, MathWorks site (accessed 18.10.11).
[22] http://www.particleswarm.info
[23] E. Kreyszig, Advanced Engineering Mathematics, 9th ed., John Wiley & Sons, New York, 2009.
[24] J.N. Reddy, Introduction to the Finite Element Method, 2nd ed., McGraw-Hill, 1993.
[25] K.J. Bathe, Finite Element Procedures, 2nd ed., Prentice Hall, New Jersey, 1996.
[26] S. Rajeev, C.S. Krishnamoorthy, Discrete optimization of structures using genetic algorithms, Journal of Structural Engineering, ASCE 118 (1992) 1233–1250.
[27] Integro-differential equation, Wikipedia, the free encyclopedia (accessed 13.05.12).