Numerical Solution of Nonlinear Singular Initial Value Problems of Emden-Fowler type using Chebyshev Neural Network Method

Susmita Mall and S. Chakraverty*
Department of Mathematics, National Institute of Technology Rourkela-769008, Odisha, India
Tel: +91661-2462713, Fax: +91661-2462713-2701
E-mail*: [email protected] (Corresponding Author)

Abstract

In this investigation, a new algorithm is proposed to solve singular initial value problems of Emden-Fowler type equations. Approximate solutions of these equations are obtained, for the first time, by applying a Chebyshev Neural Network (ChNN) model. The Emden-Fowler type equations are singular in nature, and here we have considered a single-layer Chebyshev Neural Network model to overcome the difficulty posed by the singularity. The computation is efficient because the procedure does not need a hidden layer. A feed-forward neural network model with the error back-propagation principle is used to modify the network parameters and to minimize the computed error function. We have compared the analytical and numerical solutions of linear and nonlinear Emden-Fowler equations, respectively, with the approximate solutions obtained by the proposed ChNN method. Their good agreement, together with a smaller CPU time in computation than the traditional Artificial Neural Network (ANN), shows the efficiency of the present methodology.

Keywords: Singular initial value problem, Emden-Fowler equation, Chebyshev Neural Network, feed-forward model, error back-propagation.

1. Introduction

Singular second-order nonlinear initial value problems describe several phenomena in mathematical physics and astrophysics. Many problems in astrophysics may be modeled by second-order ordinary differential equations, as proposed by Lane [1]. The Emden-Fowler equation was studied in detail by Emden [2] and Fowler [3, 4]. The general form of the Emden-Fowler equation may be written as

$$\frac{d^2 y}{dx^2} + \frac{r}{x}\frac{dy}{dx} + a f(x)\, g(y) = 0, \quad r \ge 0 \tag{1}$$

subject to the initial conditions $y(0) = \alpha$, $y'(0) = 0$,

where $f(x)$ and $g(y)$ are functions of $x$ and $y$ respectively and $r$, $a$, $\alpha$ are constants. For $f(x) = 1$, $g(y) = y^n$ and $r = 2$, Eq. (1) reduces to the standard Lane-Emden equation. Lane-Emden type equations arise in the theory of stellar structure, the thermal behavior of a spherical cloud of gas, isothermal gas spheres, and the theory of thermionic currents [5-7]. Solving differential equations with singular behavior, as they appear in various linear and nonlinear initial value problems of astrophysics, is a challenge. In particular, the present Emden-Fowler equations, which have a singularity at $x = 0$, are important in practical applications. These equations are difficult to solve analytically, so various techniques based on series solutions, such as Adomian decomposition, differential transformation and perturbation methods (e.g., homotopy perturbation), have been employed to solve Emden-Fowler equations. Wazwaz [8-10] used the Adomian decomposition method and the modified decomposition method for solving Lane-Emden and Emden-Fowler type equations. Chowdhury et al. [11, 12] employed the homotopy-perturbation method to solve singular initial value problems of time-independent equations. Ramos [13] solved singular initial value problems of ordinary differential equations using linearization techniques. Liao [14] proposed a new analytic algorithm for Lane-Emden type equations. An approximate solution of a differential equation arising in astrophysics using the variational iteration method was developed by Dehghan and Shakeri [15]. The Emden-Fowler equation has also been solved by utilizing the techniques of Lie and Painlevé analysis in Govinder and Leach [16]. An efficient analytic algorithm based on a modified homotopy analysis method has been implemented by Singh et al. [17].

Muatjetjeja and Khalique [18] provided exact solutions of the generalized Lane-Emden equations of the first and second kind. Mellin et al. [19] solved numerically general Emden-Fowler equations with two symmetries. Vanani and Aminataei [20] implemented a Padé series solution of Lane-Emden equations. Demir and Sungu [21] obtained numerical solutions of nonlinear singular initial value problems of Emden-Fowler type using the Differential Transformation Method (DTM) and Maple 11. Kusano and Manojlovic [22] presented the asymptotic behavior of positive solutions of second-order nonlinear ordinary differential equations of Emden-Fowler type. Bhrawy and Alofi [23] proposed a shifted Jacobi-Gauss collocation spectral method for solving nonlinear Lane-Emden type equations. A homotopy analysis method for singular initial value problems of Emden-Fowler type was proposed by Bataineh et al. [24]. In another approach, Muatjetjeja and Khalique [25] developed conservation laws for a generalized coupled bi-dimensional Lane-Emden system.

In recent years, a lot of attention has been devoted to the study of Artificial Neural Networks (ANN) for investigating differential equations. It is known that neural networks have universal approximation capabilities [26, 27], so approximate solutions of initial/boundary value problems may be obtained efficiently. The ANN-based solution has many benefits compared with other traditional numerical methods. First of all, the approximate solution is continuous over the whole domain of integration. Second, the method is general and can be applied to solve linear and nonlinear singular initial value problems. Moreover, other numerical methods are usually iterative in nature, where the step size is fixed before the computation begins; after the solution is obtained, if we want to know the solution between steps, the procedure must be repeated from the initial stage. One may use interpolation techniques to get approximate intermediate results, but interpolation itself is sometimes error-prone. ANN offers a remedy: it avoids this repetition of iterations and interpolation, and the trained network may be used as a black box to get numerical results at any arbitrary point in the domain.

Meade and Fernandez [28, 29] applied neural networks and B1-splines to solve both linear and nonlinear differential equations. Lagaris et al. [30] used neural networks and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization technique to solve both ordinary and partial differential equations. Lagaris et al. [31] also solved boundary value problems with irregular boundaries using a multilayer perceptron network architecture. Malek and Beidokhti [32]

presented a hybrid method based on artificial neural networks and optimization techniques to solve lower as well as higher order ordinary differential equations. Yazdi et al. [33] implemented a new method based on an unsupervised version of the kernel least mean square algorithm for solving first and second order ordinary differential equations. A new algorithm for solving matrix Riccati differential equations was developed by Selvaraju and Samant [34]. Multilayer perceptron and radial basis function (RBF) neural networks with a new unsupervised training method were proposed in [35] for the numerical solution of the nonlinear Schrödinger equation. In [36], Aarts and van der Veer analyzed initial value problems using an evolutionary algorithm. Another method for solving mixed boundary value problems on irregular domains was proposed by Hoda and Nagla [37]. McFall and Mahan [38] presented an artificial neural network method for the solution of mixed boundary value problems with irregular domains. Manevitz et al. [39] studied a neural network model for the problem of mesh adaptation in the finite-element method for solving time-dependent partial differential equations. In another work, Mai-Duy and Tran-Cong [40] presented procedures to solve linear ordinary differential equations and elliptic partial differential equations using multiquadric radial basis function neural networks. Jianyu et al. [41] implemented the numerical solution of elliptic partial differential equations using radial basis function neural networks. Parisi et al. [42] solved an unsteady solid-gas reactor problem by using unsupervised neural networks. Kumar and Yadav [43] surveyed multilayer perceptron and radial basis function neural network methods for the solution of differential equations. Recently, Mall and Chakraverty [44] proposed a regression-based neural network model for solving ordinary differential equations.

In this paper, we present a Chebyshev Neural Network (ChNN) model to obtain approximate solutions of linear and nonlinear Emden-Fowler equations. A Functional Link Artificial Neural Network (FLANN), which is a fast-learning single-layer ANN, has been considered here. In a FLANN the hidden layer is replaced by a functional expansion block that enhances the input patterns using Chebyshev polynomials. The method is found to be more computationally efficient than the multilayer perceptron network [45, 46]. The Chebyshev Neural Network has been successfully applied to various problems, e.g., system identification [47, 48], function approximation [49], intelligent sensors [50], and a channel equalizer using feedback Chebyshev functional link neural networks [51].


We propose a single-layer neural network in which the dimension of the input pattern is increased using Chebyshev polynomials. To the best of our knowledge, this study may be the first in which a Chebyshev Neural Network, i.e., a Chebyshev-based Functional Link Artificial Neural Network (FLANN), is used to solve Emden-Fowler type differential equations. A feed-forward neural network with the error back-propagation algorithm is used here, and the initial weights of the single-layer network model are taken as random. The organization of the paper is as follows. Section 2 introduces the structure of the Chebyshev Neural Network (ChNN) and its learning algorithm. The ANN formulation for differential equations, the construction of the appropriate form of the ChNN trial solution and the error estimation are described in Section 3. In Section 4 we present the numerical examples, their solutions and a comparison of existing and ChNN results. Conclusions are drawn in Section 5.

In the next section, we describe the Chebyshev Neural Network and its learning algorithm.

2. Chebyshev Neural Network (ChNN) model

In this section, we first introduce the structure of the single-layer ChNN model and then describe its learning algorithm.

2.1 Structure of the Chebyshev Neural Network model

Fig. 1 shows the structure of the Chebyshev Neural Network (ChNN), which consists of a single input unit, one output unit and a functional expansion block based on Chebyshev polynomials. The ChNN is a single-layer neural model in which each input datum is expanded into several terms using Chebyshev polynomials. In this investigation, we have considered one input node only. The first two Chebyshev polynomials may be written as

$$T_0(x) = 1, \quad T_1(x) = x.$$

The higher-order Chebyshev polynomials may be generated by the well-known recurrence

$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x) \tag{2}$$

where $T_n(x)$ denotes the Chebyshev polynomial of order $n$. We consider the input data $x = (x_1, x_2, \ldots, x_h)^T$, that is, the single input node $x$ carries $h$ data points. The enhanced pattern is then obtained using the Chebyshev polynomials as

$$\bigl[T_0(x_1), T_1(x_1), T_2(x_1), \ldots;\; T_0(x_2), T_1(x_2), T_2(x_2), \ldots;\; \ldots;\; T_0(x_h), T_1(x_h), T_2(x_h), \ldots\bigr].$$

The advantage of the ChNN is that the result is obtained with a single-layer network, although this comes at the cost of increasing the dimension of the input pattern through the Chebyshev expansion; a small sketch of this expansion is given after Fig. 1. The architecture of the network with the first six Chebyshev polynomials, a single input and an output layer (with one node) is shown in Fig. 1.

Fig. 1 Structure of the single-layer Chebyshev Neural Network: the input x is expanded into T_0(x), ..., T_5(x), the terms are weighted by w_1, ..., w_6, and their sum is passed through a tanh unit to produce the output N(x, p).
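The functional expansion block above is straightforward to implement. The following is a minimal Python sketch (the paper reports no code, so all names here are illustrative): it builds the enhanced pattern $[T_0(x_i), \ldots, T_{m-1}(x_i)]$ for each of the $h$ input points using the recurrence (2).

```python
import numpy as np

def chebyshev_expand(x, m=6):
    """Expand each input point into the first m Chebyshev polynomials
    T_0(x), ..., T_{m-1}(x) using the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    x = np.asarray(x, dtype=float)
    T = np.empty((x.size, m))
    T[:, 0] = 1.0              # T_0(x) = 1
    if m > 1:
        T[:, 1] = x            # T_1(x) = x
    for n in range(1, m - 1):
        T[:, n + 1] = 2.0 * x * T[:, n] - T[:, n - 1]
    return T                   # shape (h, m): one enhanced pattern per point

# Ten equidistant points in [0, 1], as used for training in Section 4
print(chebyshev_expand(np.linspace(0, 1, 10)).shape)  # (10, 6)
```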

2.2 Learning algorithm of the Chebyshev Neural Network (ChNN)

The error back-propagation learning algorithm is used to update the network parameters (weights) and to minimize the error function of the ChNN. The hyperbolic tangent (tanh) function, viz.

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}},$$

is considered here as the activation function. The network output with input $x$ and parameters (weights) $p$ may be computed as

$$N(x, p) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \tag{3}$$

where $z$ is a weighted sum of the expanded input data, written as

$$z = \sum_{j=1}^{m} w_j\, T_{j-1}(x) \tag{4}$$

where $x$ is the input datum, and $T_{j-1}(x)$ and $w_j$ with $j = 1, 2, \ldots, m$ denote the expanded input data and the weight vector of the Chebyshev Neural Network, respectively.

Here the gradient descent algorithm is used for learning, and the weights are updated by taking a step along the negative gradient at each iteration:

$$w_j^{k+1} = w_j^{k} + \Delta w_j^{k} = w_j^{k} + \left( -\eta\, \frac{\partial E(x,p)}{\partial w_j^{k}} \right). \tag{5}$$
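As a small illustration of Eqs. (3)-(4), the forward pass of the ChNN can be written as below, reusing the `chebyshev_expand` sketch from Section 2.1 (again, an illustrative sketch rather than the authors' code); the update (5) is then one gradient-descent step on whatever error function $E(x,p)$ is chosen in Section 3.

```python
import numpy as np

def chnn_output(x, w):
    """Network output N(x, p) = tanh(z), z = sum_j w_j T_{j-1}(x); Eqs. (3)-(4)."""
    T = chebyshev_expand(np.array([x]), m=w.size)[0]  # enhanced pattern
    return np.tanh(w @ T)                             # single tanh output node
```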

In the next section, the procedure of applying an ANN to the solution of differential equations is discussed.

3. General formulation for differential equations using Artificial Neural Network (ANN)

In this section, we describe the general formulation of differential equations using a neural network. In particular, the formulation for ordinary differential equations (ODEs) is presented in detail, together with the computation of the gradient of the network output with respect to its inputs.


Let us consider the general form of a differential equation, representing both ordinary and partial differential equations, as follows:

$$G\bigl(x, y(x), \nabla y(x), \nabla^2 y(x), \ldots, \nabla^n y(x)\bigr) = 0, \quad x \in D \subseteq \mathbb{R}^n \tag{6}$$

where $G$ is the function which defines the structure of the differential equation, and $y(x)$ and $\nabla$ denote the solution and the differential operator respectively. It may be noted that for an ordinary differential equation $x \in D \subset \mathbb{R}$, while for a partial differential equation $x = (x_1, x_2, \ldots, x_n) \in D \subset \mathbb{R}^n$. Let $y_t(x, p)$ denote the trial solution with adjustable parameters $p$; the above general differential equation then takes the form

$$G\bigl(x, y_t(x,p), \nabla y_t(x,p), \nabla^2 y_t(x,p), \ldots, \nabla^n y_t(x,p)\bigr) = 0. \tag{7}$$

The problem is transformed into the following minimization problem [30]:

$$\min_{p} \sum_{x \in D} \frac{1}{2}\, G\bigl(x, y_t(x,p), \nabla y_t(x,p), \nabla^2 y_t(x,p), \ldots, \nabla^n y_t(x,p)\bigr)^2. \tag{8}$$

In the following paragraphs we discuss the ordinary differential equation formulation. The trial solution $y_t(x,p)$ of the feed-forward neural network with input $x$ and parameters $p$ may be written in the form

$$y_t(x, p) = A(x) + F\bigl(x, N(x, p)\bigr) \tag{9}$$

where the first part on the right-hand side of Eq. (9), $A(x)$, satisfies the initial/boundary conditions, whereas the second part $F(x, N(x,p))$ contains the single output $N(x,p)$ of the ChNN with input $x$ and adjustable parameters $p$. As mentioned above, a single-layer ChNN is considered with one input node $x$ (carrying $h$ data points) and a single output node

$$N(x, p) = \tanh(z). \tag{10}$$

The general form of the corresponding error function for an ODE of order $n$ may be formulated as

$$E(x, p) = \sum_{i=1}^{h} \frac{1}{2} \left\{ \frac{d^n y_t(x_i, p)}{dx^n} - f\!\left( x_i, y_t(x_i), \frac{dy_t(x_i, p)}{dx}, \ldots, \frac{d^{n-1} y_t(x_i, p)}{dx^{n-1}} \right) \right\}^2. \tag{11}$$

To minimize the error function $E(x,p)$ over every entry of $x$, we differentiate $E(x,p)$ with respect to the parameters. This requires the gradient of the network output with respect to its inputs, which is computed as below.

3.1 Computation of gradient for ChNN

The error computation involves both the output and the derivatives of the network output with respect to the corresponding inputs, so we require the gradient of the network derivatives with respect to the inputs. The derivative of $N(x,p)$ with respect to the input $x$ may be written as [32]

$$\frac{dN}{dx} = \sum_{j=1}^{m} \left[ \frac{\bigl(e^{z}+e^{-z}\bigr)^{2} - \bigl(e^{z}-e^{-z}\bigr)^{2}}{\bigl(e^{z}+e^{-z}\bigr)^{2}} \right] w_j\, T'_{j-1}(x). \tag{12}$$

Simplifying the above, we have

$$\frac{dN}{dx} = \sum_{j=1}^{m} \left[ 1 - \left( \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}} \right)^{2} \right] w_j\, T'_{j-1}(x) = \bigl(1 - \tanh^{2}(z)\bigr) \sum_{j=1}^{m} w_j\, T'_{j-1}(x). \tag{13}$$

It may be noted that the above differentiation is carried out for every entry of $x$, where $x$ has $h$ data points. Here

$$N(x,p) = \tanh(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}} \quad \text{and} \quad z = \sum_{j=1}^{m} w_j\, T_{j-1}(x).$$

Similarly, differentiating Eq. (13) once more, the second derivative of $N(x,p)$ after simplification is

$$\frac{d^{2}N}{dx^{2}} = -2\tanh(z)\bigl(1-\tanh^{2}(z)\bigr)\left(\sum_{j=1}^{m} w_j\, T'_{j-1}(x)\right)^{2} + \bigl(1-\tanh^{2}(z)\bigr)\sum_{j=1}^{m} w_j\, T''_{j-1}(x) \tag{14}$$

where $w_j$ denote the parameters of the network and $T'_{j-1}(x)$, $T''_{j-1}(x)$ denote the first and second derivatives of the Chebyshev polynomials. Next, for the sake of completeness, we include some detail of the above procedure for the traditional Multi Layer Perceptron (MLP).
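Equations (12)-(14) can be implemented directly by differentiating the recurrence (2) along with the network output. The sketch below (illustrative names; a sketch under the assumptions above, not the authors' implementation) returns $N$, $dN/dx$ and $d^2N/dx^2$ at a scalar point.

```python
import numpy as np

def cheb_with_derivs(x, m=6):
    """T_j(x), T'_j(x), T''_j(x) for j = 0..m-1 at a scalar x, obtained by
    differentiating the recurrence T_{n+1} = 2x T_n - T_{n-1} term by term."""
    T, dT, d2T = np.zeros(m), np.zeros(m), np.zeros(m)
    T[0] = 1.0
    if m > 1:
        T[1], dT[1] = x, 1.0
    for n in range(1, m - 1):
        T[n + 1]   = 2*x*T[n] - T[n - 1]
        dT[n + 1]  = 2*T[n] + 2*x*dT[n] - dT[n - 1]
        d2T[n + 1] = 4*dT[n] + 2*x*d2T[n] - d2T[n - 1]
    return T, dT, d2T

def network_outputs(x, w):
    """N(x, p), dN/dx and d^2N/dx^2 from Eqs. (3), (13) and (14)."""
    T, dT, d2T = cheb_with_derivs(x, w.size)
    N  = np.tanh(w @ T)
    s1 = w @ dT                    # sum_j w_j T'_{j-1}(x)
    s2 = w @ d2T                   # sum_j w_j T''_{j-1}(x)
    dN  = (1 - N**2) * s1                          # Eq. (13)
    d2N = -2*N*(1 - N**2)*s1**2 + (1 - N**2)*s2    # Eq. (14)
    return N, dN, d2N
```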

3.2 Computation of gradient for the traditional (MLP) neural network

Let us consider a multilayer perceptron with one input node $x$, a hidden layer with $u$ nodes and one output node. For the given input $x$, the output $N(x,p)$ may be formulated as [30]

$$N(x, p) = \sum_{t=1}^{u} v_t \tanh(r_t) \tag{15}$$

where $r_t = w_t x + u_t$, $w_t$ denotes the weight from the input unit to hidden unit $t$, $v_t$ denotes the weight from hidden unit $t$ to the output unit, $u_t$ is the bias, and $\tanh(r_t)$ is the hyperbolic tangent activation function. The derivative of $N(x,p)$ with respect to the input $x$ is [32]

$$\frac{dN}{dx} = \sum_{t=1}^{u} v_t\, w_t \bigl(\tanh(r_t)\bigr)'. \tag{16}$$

In a similar way we may use the back-propagation algorithm to update the network parameters (weights and biases) from the input layer to the hidden layer and from the hidden layer to the output layer:

$$w_t^{k+1} = w_t^{k} + \Delta w_t^{k} = w_t^{k} + \left( -\eta\, \frac{\partial E(x,p)}{\partial w_t^{k}} \right) \tag{17}$$

$$v_t^{k+1} = v_t^{k} + \Delta v_t^{k} = v_t^{k} + \left( -\eta\, \frac{\partial E(x,p)}{\partial v_t^{k}} \right). \tag{18}$$
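For comparison, a minimal sketch of the MLP output (15) and its input derivative (16) (illustrative names; `w`, `v`, `u` are length-$u$ arrays of input weights, output weights and biases):

```python
import numpy as np

def mlp_output_and_grad(x, w, v, u):
    """Single-hidden-layer MLP N(x, p) of Eq. (15) and dN/dx of Eq. (16)."""
    r = w * x + u                                   # r_t = w_t x + u_t
    N  = np.sum(v * np.tanh(r))                     # Eq. (15)
    dN = np.sum(v * w * (1.0 - np.tanh(r)**2))      # Eq. (16), tanh' = 1 - tanh^2
    return N, dN
```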

Our aim is to solve Emden-Fowler type differential equations. We therefore discuss below the formulation for a second-order initial value problem of the form

$$\frac{d^2 y}{dx^2} = f\!\left(x, y, \frac{dy}{dx}\right), \quad x \in [a, b]$$

with initial conditions $y(a) = A$, $y'(a) = A'$. The ChNN trial solution may be written as

$$y_t(x, p) = A + A'(x - a) + (x - a)^2 N(x, p) \tag{19}$$

where $N(x,p)$ is the output of the Chebyshev Neural Network with one input $x$ and parameters $p$. The trial solution $y_t(x,p)$ satisfies the initial conditions by construction. Differentiating (19), we have

$$\frac{dy_t(x,p)}{dx} = A' + 2(x - a)\, N(x, p) + (x - a)^2 \frac{dN}{dx} \tag{20}$$

and

$$\frac{d^2 y_t(x,p)}{dx^2} = 2 N(x, p) + 4(x - a)\frac{dN}{dx} + (x - a)^2 \frac{d^2 N}{dx^2}. \tag{21}$$

The error function $E(x,p)$ is written as

$$E(x, p) = \sum_{i=1}^{h} \frac{1}{2}\left( \frac{d^2 y_t(x_i, p)}{dx^2} - f\!\left[x_i, y_t(x_i, p), \frac{dy_t(x_i, p)}{dx}\right] \right)^{2}. \tag{22}$$

As discussed above, for the ChNN the $x_i$, $i = 1, 2, \ldots, h$, are the input data, and the weights $w_j$ from the input to the output layer are modified according to the back-propagation learning algorithm as

$$w_j^{k+1} = w_j^{k} + \Delta w_j^{k} = w_j^{k} + \left( -\eta\, \frac{\partial E(x,p)}{\partial w_j^{k}} \right) \tag{23}$$

where $\eta$ is the learning parameter, $k$ is the iteration step and $E(x,p)$ is the error function. One may note that the index $k$ is used for updating the weights, as usual in ANN. Here

$$\frac{\partial E(x,p)}{\partial w_j} = \frac{\partial}{\partial w_j} \left( \sum_{i=1}^{h} \frac{1}{2}\left( \frac{d^2 y_t(x_i, p)}{dx^2} - f\!\left[x_i, y_t(x_i, p), \frac{dy_t(x_i, p)}{dx}\right] \right)^{2} \right). \tag{24}$$

Finally, we may use the converged ChNN results in equation (19) to obtain the approximate solutions.
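To make the whole procedure concrete, the following minimal training sketch puts Eqs. (19)-(23) together. It reuses `network_outputs` from the Section 3.1 sketch; it works with a general residual $R(x, y, y', y'') = 0$ rather than the explicit form $y'' = f$ of Eq. (22), so that singular coefficients can be cleared by multiplying the equation through by $x$; and it approximates $\partial E/\partial w_j$ by central finite differences instead of the analytic expression (24). The learning rate, iteration count and all names are illustrative choices, not values from the paper.

```python
import numpy as np

def trial_solution(x, w, A, Aprime, a):
    """y_t, y_t' and y_t'' of Eqs. (19)-(21) at a scalar point x."""
    N, dN, d2N = network_outputs(x, w)
    yt   = A + Aprime*(x - a) + (x - a)**2 * N
    dyt  = Aprime + 2*(x - a)*N + (x - a)**2 * dN
    d2yt = 2*N + 4*(x - a)*dN + (x - a)**2 * d2N
    return yt, dyt, d2yt

def error(w, xs, residual, A, Aprime, a):
    """Sum-of-squares error over the training points, cf. Eq. (22)."""
    E = 0.0
    for x in xs:
        yt, dyt, d2yt = trial_solution(x, w, A, Aprime, a)
        E += 0.5 * residual(x, yt, dyt, d2yt)**2
    return E

def train(xs, residual, A, Aprime, a, m=6, eta=0.01, iters=5000, step=1e-6):
    """Gradient descent on the weights, Eq. (23), with a central
    finite-difference gradient standing in for Eq. (24)."""
    w = np.random.default_rng(0).normal(scale=0.1, size=m)  # random initial weights
    for _ in range(iters):
        g = np.zeros(m)
        for j in range(m):
            e = np.zeros(m); e[j] = step
            g[j] = (error(w + e, xs, residual, A, Aprime, a)
                    - error(w - e, xs, residual, A, Aprime, a)) / (2*step)
        w -= eta * g
    return w
```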


4. Numerical examples

In this section, we consider linear and nonlinear Emden-Fowler equations to demonstrate the effectiveness of the proposed method.

Example 1

A nonlinear singular initial value problem of Emden-Fowler type may be written as

$$y'' + \frac{8}{x}\, y' + x y^2 = x^4 + x^5, \quad x \ge 0$$

with initial conditions $y(0) = 1$, $y'(0) = 0$. As mentioned above, we have the ChNN trial solution

$$y_t(x, p) = 1 + x^2 N(x, p).$$

We train the network on ten equidistant points in the domain [0, 1] with the first six Chebyshev polynomials. Table 1 shows a comparison among the numerical solutions obtained by Maple 11, the Differential Transformation Method (DTM) with n = 10 [21], the Chebyshev neural network (ChNN) and the traditional (MLP) ANN. The Maple 11 and ChNN solutions are compared in Fig. 2, and Fig. 3 shows a semi-logarithmic plot of the error between them. From Table 1, one may see that the ChNN solutions agree well at all points with the Maple 11 and DTM numerical solutions. The converged ChNN is then used to obtain results at some testing points; Table 2 gives the corresponding results computed directly from the converged weights.


Table 1: Comparison among numerical solutions using Maple 11, DTM, ChNN and traditional ANN (Example 1)

Input data   Maple 11 [21]   DTM [21]     ChNN         Traditional ANN
0            1.00000000      1.00000000   1.00000000   1.00000000
0.1          0.99996668      0.99996668   0.99986667   0.99897927
0.2          0.99973433      0.99973433   1.00001550   1.00020585
0.3          0.99911219      0.99911219   0.99924179   0.99976618
0.4          0.99793933      0.99793933   0.99792438   0.99773922
0.5          0.99612622      0.99612622   0.99608398   0.99652763
0.6          0.99372097      0.99372096   0.99372989   0.99527655
0.7          0.99100463      0.99100452   0.99103146   0.99205860
0.8          0.98861928      0.98861874   0.98861829   0.98867279
0.9          0.98773192      0.98772971   0.98773142   0.98753290
1.0          0.99023588      0.99022826   0.99030418   0.99088174

Fig. 2 Plot of numerical solutions using Maple 11 and ChNN (Example 1).


Fig. 3 Semi-logarithmic plot of the error between Maple 11 and ChNN solutions (Example 1).

Table 2: ChNN solutions for testing points (Example 1)

Testing points   0.130        0.265        0.481        0.536        0.815
ChNN             0.99992036   0.99854752   0.99729365   0.99525350   0.98866955

It is worth mentioning that the CPU time of computation for the proposed ChNN model is 10,429.97 s, whereas the CPU time for the traditional neural network (MLP) is 15,647.58 s. Thus the ChNN takes less computation time than the traditional MLP.
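As an illustration only, Example 1 could be set up with the hypothetical helpers from the Section 3 sketches as follows. To avoid evaluating the $1/x$ coefficient at the training point $x = 0$, the residual is multiplied through by $x$; this is an implementation choice of the sketch, not a step described in the paper.

```python
import numpy as np

# x*y'' + 8*y' + x^2*y^2 - x^5 - x^6 = 0, equivalent to Example 1 for x > 0
residual1 = lambda x, y, dy, d2y: x*d2y + 8*dy + x**2 * y**2 - x**5 - x**6

xs = np.linspace(0.0, 1.0, 10)                    # ten equidistant points
w1 = train(xs, residual1, A=1.0, Aprime=0.0, a=0.0)

# Converged trial solution y_t(x) = 1 + x^2 N(x, p) at some testing points
for x in (0.130, 0.265, 0.481):
    print(x, trial_solution(x, w1, 1.0, 0.0, 0.0)[0])
```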

Example 2

Now let us consider a linear, non-homogeneous Emden-Fowler equation

$$y'' + \frac{8}{x}\, y' + x y = x^5 - x^4 + 44 x^2 - 30 x, \quad x \ge 0$$

with initial conditions $y(0) = 0$, $y'(0) = 0$. The exact solution of the above equation is [12]

$$y(x) = x^4 - x^3.$$

We can write the related ChNN trial solution as

$$y_t(x, p) = x^2 N(x, p).$$

Ten equidistant points in [0, 1] and six weights corresponding to the first six Chebyshev polynomials are considered. A comparison of the analytical and Chebyshev neural (ChNN) solutions is given in Table 3, and is also depicted in Fig. 4. A semi-logarithmic plot of the error between the analytical and ChNN solutions is shown in Fig. 5. Finally, results for some testing points are again shown in Table 4; this testing checks whether the converged ChNN can give results directly for points that were not used during training.

Table 3: Comparison between analytical and ChNN solutions (Example 2)

Input data   Analytical [12]   ChNN
0             0.00000000        0.00000000
0.1          -0.00090000       -0.00058976
0.2          -0.00640000       -0.00699845
0.3          -0.01890000       -0.01856358
0.4          -0.03840000       -0.03838897
0.5          -0.06250000       -0.06318680
0.6          -0.08640000       -0.08637497
0.7          -0.10290000       -0.10321710
0.8          -0.10240000       -0.10219490
0.9          -0.07290000       -0.07302518
1.0           0.00000000        0.00001103

Fig. 4 Plot of analytical and ChNN solutions (Example 2).

Fig. 5 Semi-logarithmic plot of the error between analytical and ChNN solutions (Example 2).

Table 4: ChNN solutions for testing points (Example 2)

Testing points   Analytical    ChNN
0.154            -0.00308981   -0.00299387
0.328            -0.02371323   -0.02348556
0.561            -0.07750917   -0.07760552
0.732            -0.10511580   -0.10620839
0.940            -0.04983504   -0.04883402
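Under the same sketch assumptions, Example 2 differs only in its residual and initial conditions ($A = A' = 0$, giving the trial solution $y_t = x^2 N(x, p)$), and the converged network can be checked against the exact solution:

```python
import numpy as np

# x*y'' + 8*y' + x^2*y - x*(x^5 - x^4 + 44*x^2 - 30*x) = 0 for x > 0
residual2 = lambda x, y, dy, d2y: (x*d2y + 8*dy + x**2 * y
                                   - x*(x**5 - x**4 + 44*x**2 - 30*x))

w2 = train(np.linspace(0, 1, 10), residual2, A=0.0, Aprime=0.0, a=0.0)
exact = lambda x: x**4 - x**3                     # analytical solution [12]
print(trial_solution(0.5, w2, 0.0, 0.0, 0.0)[0], exact(0.5))
```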

Example 3

In this example we take a nonlinear, homogeneous Emden-Fowler equation

$$y'' + \frac{6}{x}\, y' + 14 y = -4 y \ln y, \quad x \ge 0$$

subject to $y(0) = 1$, $y'(0) = 0$. The analytical solution is [13]

$$y(x) = e^{-x^2}.$$

Again we can write the ChNN trial solution as

$$y_t(x, p) = 1 + x^2 N(x, p).$$

The network is trained on ten equidistant points in the given domain. As in the previous cases, the analytical and Chebyshev neural solutions are given in Table 5. Comparisons among the analytical, Chebyshev neural and traditional (MLP) ANN solutions are depicted in Fig. 6. Fig. 7 shows a semi-logarithmic plot of the error (between the analytical and ChNN solutions). ChNN solutions for some testing points are given in Table 6.

Table 5: Comparison among analytical, ChNN and traditional ANN solutions (Example 3)

Input data   Analytical [13]   ChNN         Traditional ANN
0            1.00000000        1.00000000   1.00000000
0.1          0.99004983        0.99004883   0.99014274
0.2          0.96078943        0.96077941   0.96021042
0.3          0.91393118        0.91393017   0.91302963
0.4          0.85214378        0.85224279   0.85376495
0.5          0.77880078        0.77870077   0.77644671
0.6          0.69767632        0.69767719   0.69755681
0.7          0.61262639        0.61272838   0.61264315
0.8          0.52729242        0.52729340   0.52752822
0.9          0.44485806        0.44490806   0.44502071
1.0          0.36787944        0.36782729   0.36747724

Fig. 6 Plot of analytical and ChNN solutions (Example 3).

Fig. 7 Semi-logarithmic plot of the error between analytical and ChNN solutions (Example 3).

Table 6: ChNN solutions for testing points (Example 3)

Testing points   Analytical   ChNN
0.173            0.97051443   0.97049714
0.281            0.92407596   0.92427695
0.467            0.80405387   0.80379876
0.650            0.65540625   0.65580726
0.872            0.46748687   0.46729674

The CPU time of computation for the proposed ChNN model is 7,551.490 s, and for the traditional ANN (MLP) it is 9,102.269 s.

Example 4

Finally we consider a nonlinear Emden-Fowler equation

$$y'' + \frac{3}{x}\, y' + 2 x^2 y^2 = 0$$

with initial conditions $y(0) = 1$, $y'(0) = 0$. The ChNN trial solution in this case is represented as

$$y_t(x, p) = 1 + x^2 N(x, p).$$

Again the network is trained with ten equidistant points. Table 7 gives the comparison among the numerical solutions obtained by Maple 11, the Differential Transformation Method (DTM) with n = 10 [21], and the present ChNN. Fig. 8 shows a comparison between the Maple 11 and ChNN solutions, and the semi-logarithmic plot of the error (between the Maple 11 and ChNN solutions) is given in Fig. 9.

Table 7: Comparison among numerical solutions by Maple 11, DTM (n = 10) and ChNN (Example 4)

Input data   Maple 11 [21]   DTM [21]     ChNN
0            1.00000000      1.00000000   1.00000000
0.1          0.99999166      0.99999166   0.99989166
0.2          0.99986667      0.99986667   0.99896442
0.3          0.99932527      0.99932527   0.99982523
0.4          0.99786939      0.99786939   0.99785569
0.5          0.99480789      0.99480794   0.99422605
0.6          0.98926958      0.98926998   0.98931189
0.7          0.98022937      0.98023186   0.98078051
0.8          0.96655340      0.96656571   0.96611140
0.9          0.94706857      0.94711861   0.94708231
1.0          0.92065853      0.92083333   0.92071830

Fig. 8 Plot of Maple 11 and ChNN solutions (Example 4).

Fig. 9 Semi-logarithmic plot of the error between Maple 11 and ChNN solutions (Example 4).

5. Conclusions

In this paper, a Chebyshev Neural Network (ChNN) based model for singular initial value problems of second-order ordinary differential equations has been developed and applied to a variety of linear and nonlinear Emden-Fowler equations. These equations describe various phenomena in astrophysics and quantum mechanics. Here, a single-layer Chebyshev Neural Network (ChNN) model is considered to overcome the difficulty these equations pose due to the existence of a singular point at x = 0. We propose a new approach to solving Emden-Fowler equations using a single-layer Functional Link Artificial Neural Network (FLANN) architecture. In a FLANN, the hidden layer is replaced by a functional expansion block that enhances the input patterns with a set of linearly independent functions; thus the number of parameters of the Chebyshev Neural Network (ChNN) is smaller than that of a multilayer neural network. Here, the dimension of the input data is expanded using Chebyshev polynomials. A feed-forward neural model with the error back-propagation algorithm is used to minimize the error function. The computation time (CPU time) of our proposed ChNN model is also found to be less than that of the traditional (MLP) ANN model. Results of the existing methods and of the proposed ChNN model have been compared. It may be seen that the proposed ChNN model is easy to implement, computationally efficient and straightforward.

Acknowledgement

The first author would like to acknowledge the Department of Science and Technology (DST), Government of India, for financial support under the Women Scientist Scheme-A. The authors would also like to thank the Editor-in-Chief and the reviewers for their valuable suggestions to improve this work.

References

[1] J.H. Lane, On the theoretical temperature of the sun under the hypothesis of a gaseous mass maintaining its volume by its internal heat and depending on the laws of gases known to terrestrial experiment, The American Journal of Science and Arts, 2nd series 4 (1870) 57-74.

[2] R. Emden, Gaskugeln: Anwendungen der mechanischen Wärmetheorie auf kosmologische und meteorologische Probleme, Teubner, Leipzig, 1907.

[3] R.H. Fowler, The form near infinity of real, continuous solutions of a certain differential equation of the second order, Quart. J. Math. (Oxford) 45 (1914) 341-371.

[4] R.H. Fowler, Further studies of Emden's and similar differential equations, Quart. J. Math. (Oxford) 2 (1931) 259-288.

[5] H.T. Davis, Introduction to Nonlinear Differential and Integral Equations, New York: Dover Publications Inc. (1962).

[6] S. Chandrasekhar, An Introduction to the Study of Stellar Structure, New York: Dover Publications Inc. (1967).

[7] B.K. Datta, Analytic solution to the Lane-Emden equation, Nuovo Cim. 111B (1996) 1385-1388.

[8] A.M. Wazwaz, A new algorithm for solving differential equations of Lane-Emden type, Appl. Math. Comput. 118 (2001) 287-310.

[9] A.M. Wazwaz, Adomian decomposition method for a reliable treatment of the Emden-Fowler equation, Applied Mathematics and Computation 161 (2005) 543-560.

[10] A.M. Wazwaz, The modified decomposition method for analytical treatment of differential equations, Applied Mathematics and Computation 173 (2006) 165-176.

[11] M.S.H. Chowdhury, I. Hashim, Solutions of a class of singular second order initial value problems by homotopy-perturbation method, Phys. Lett. A 365 (2007) 439-447.

[12] M.S.H. Chowdhury, I. Hashim, Solutions of Emden-Fowler equations by homotopy-perturbation method, Nonlinear Analysis: Real World Applications 10 (2009) 104-115.

[13] J.I. Ramos, Linearization techniques for singular initial-value problems of ordinary differential equations, Applied Mathematics and Computation 161 (2005) 525-542.

[14] S.J. Liao, A new analytic algorithm of Lane–Emden type equations, Appl. Math. Comput 142 (2003) 1–16. [15] M. Dehghan, F. Shakeri, Approximate solution of a differential equation arising in astrophysics using the variational iteration method, New Astronomy 13 (2008) 53-59.


[16] K.S. Govinder, P.G.L. Leach, Integrability analysis of the Emden-Fowler equation, J. Nonlinear Math. Phys 14 (2007) 435-453.

[17] O.P. Singh, R.K. Pandey, V.K. Singh, Analytical algorithm of Lane-Emden type equation arising in astrophysics using modified homotopy analysis method, Computer Physics Communications 180 (2009) 1116-1124.

[18] B. Muatjetjeja, C.M. Khalique, Exact solutions of the generalized Lane-Emden equations of the first and second kind, Pramana 77 (2011) 545-554.

[19] C.M. Mellin, F.M. Mahomed, P.G.L. Leach, Solution of generalized Emden-Fowler equations with two symmetries, Int. J. Nonlinear Mech. 29 (1994) 529-538.

[20] S.K. Vanani, A. Aminataei, On the numerical solution of differential equations of LaneEmden type, Computers and Mathematics with applications 59 (2010) 2815-2820.

[21] H. Demir, I.C. Sungu, Numerical solution of a class of nonlinear Emden-Fowler equations by using differential transformation method, Journal of Arts and Science 12 (2009) 75-81.

[22] T. Kusano, J. Manojlovic, Asymptotic behavior of positive solutions of sub linear differential equations of Emden–Fowler type, Computers and Mathematics with Applications 62 (2011) 551–565.

[23] A.H. Bhrawy, A.S. Alofi, A Jacobi- Gauss collocation method for solving nonlinear LaneEmden type equations, Commun Nonlinear Sci Numer Simulat 17 (2012) 62-70.

[24] A.S. Bataineh, M.S.M. Noorani, I. Hashim, Homotopy analysis method for singular initial value problems of Emden-Fowler type, Commun Nonlinear Sci Numer Simulat 14 (2009) 1121-1131. [25] B. Muatjetjeja, C.M. Khalique, Conservation laws for a generalized coupled bidimensional Lane–Emden system, Commun Nonlinear Sci Numer Simulat 18 (2013) 851-857.


[26] J.M. Zurada, Introduction to Artificial Neural Network. West Publ. Co (1994).

[27] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall International, Inc. (1999).

[28] A.J. Meade Jr., A.A. Fernandez, The numerical solution of linear ordinary differential equations by feed forward neural networks, Mathematical and Computer Modeling 19 (1994) 1-25.

[29] A.J. Meade Jr., A.A. Fernandez, Solution of nonlinear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modeling 20 (1994) 19–44.

[30] I.E. Lagaris, A. Likas, D.I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (1998) 987–1000.

[31] I.E. Lagaris, A.C. Likas, Neural network methods for boundary value problems with irregular boundaries, IEEE Transactions on Neural Networks 11 (2000) 1041-1049.

[32] A. Malek, R. Shekari Beidokhti, Numerical solution for high order differential equations using a hybrid neural network-optimization method, Applied Mathematics and Computation 183 (2006) 260-271.

[33] H.S. Yazdi, M. Pakdaman, H. Modaghegh, Unsupervised kernel least mean square algorithm for solving ordinary differential equations, Neurocomputing 74 (2011) 2062-2071.

[34] N. Selvaraju, J. Abdul Samant, Solution of matrix Riccati differential equation for nonlinear singular systems using neural networks, International Journal of Computer Applications 29 (2010) 48-54.

[35] Y. Shirvany, M. Hayati, R. Moradian, Multilayer perceptron neural networks with novel unsupervised training method for numerical solution of the partial differential equations, Applied Soft Computing 9 (2009) 20-29.

[36] L.P. Aarts, P. van der Veer, Neural network method for solving partial differential equations, Neural Processing Letters 14 (2001) 261-271.

[37] S.A. Hoda, H.A. Nagla, Neural network methods for mixed boundary value problems, International Journal of Nonlinear Science 11 (2011) 312-316.

[38] K. McFall, J.R. Mahan, Artificial neural network for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions, IEEE Transactions on Neural Networks 20 (2009) 1221-1233.

[39] L. Manevitz, A. Bitar, D. Givoli, Neural network time series forecasting of finite-element mesh adaptation, Neurocomputing 63 (2005) 447–463.

[40] N. Mai-Duy, T. Tran-Cong, Numerical solution of differential equations using multiquadric radial basis function networks, Neural Networks 14 (2001) 185-199.

[41] L. Jianyu, L. Siwei, Q. Yingjian, H. Yaping, Numerical solution of elliptic partial differential equation using radial basis function neural networks, Neural Network 16 (2003) 729–734.

[42] D.R. Parisi, M.C. Mariani, M.A. Laborde, Solving differential equations with unsupervised neural networks, Chemical Engineering and Processing: Process Intensification 42 (2003) 715-721.

[43] M. Kumar, N. Yadav, Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: a survey, Computers and Mathematics with Applications 62 (2011) 3796-3811.

[44] S. Mall, S. Chakraverty, Regression based neural network training for the solution of ordinary differential equations, Int. J. of Mathematical Modelling and Numerical Optimisation 4 (2013) 136-149.

[45] A. Namatame, N. Ueda, Pattern classification with Chebyshev neural network, Int. J. Neural Network 3 (1992) 23-31.

[46] J.C. Patra, Chebyshev Neural Network-Based Model for Dual-Junction Solar Cells, IEEE Transactions on Energy Conversion 26 (2011) 132-140.

[47] S. Purwar, I.N. Kar, A.N. Jha, Online system identification of complex systems using Chebyshev neural network, Applied soft computing 7 (2007) 364-372.

[48] J.C. Patra, A.C. Kot, Nonlinear dynamic system identification using Chebyshev functional link artificial neural network, IEEE Trans. Syst., Man, Cybern., Part B: Cybern. 32 (2002) 505-511.

[49] T.T. Lee, J.T. Jeng, The Chebyshev-Polynomials-Based Unified Model Neural Networks for Function Approximation, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 28 (1998) 925-935.

[50] J.C. Patra, M. Juhola, P.K. Meher, Intelligent sensors using computationally efficient Chebyshev neural networks, IET Sci. Meas. Technol. 2 (2008) 68-75.

[51] W.D. Weng, C.S. Yang, R.C. Lin, A channel equalizer using reduced decision feedback Chebyshev functional link artificial neural networks, Information Sciences 177 (2007) 2642– 2654.

Dr. S. Chakraverty is working at the National Institute of Technology, Rourkela, Odisha, India, as a Professor of Applied Mathematics. Dr. Chakraverty received his Ph.D. from IIT Roorkee in 1992. Thereafter he did his post-doctoral research at the Institute of Sound and Vibration Research, University of Southampton, U.K., and at the Faculty of Engineering and Computer Science, Concordia University, Canada. He was also a visiting professor at Concordia and McGill University, Canada, during 1997-1999, and currently he is a visiting professor at the University of Johannesburg, South Africa. He has authored three books and published around 180 research papers in journals. He is a reviewer for many international journals of repute. Dr. Chakraverty is a recipient of many prestigious awards, viz. the CSIR Young Scientist Award, BOYSCAST, the UCOST Young Scientist Award, the Golden Jubilee Director's (CBRI) Award, the INSA International Bilateral Exchange Award, and the Roorkee University gold medal. He has undertaken a good number of research projects as Principal Investigator funded by international and national agencies. His present research areas include mathematical modeling, soft computing and machine intelligence, vibration and inverse vibration problems, and numerical analysis.

Susmita Mall received her M.Sc. degree in Mathematics (Operations Research, Numerical Analysis) from Ravenshaw University, Cuttack, Odisha, India in 2003. Currently she is pursuing her Ph.D. at the National Institute of Technology, Rourkela-769008, Odisha, India. Her current research interests include mathematical modeling, artificial neural networks, differential equations and numerical analysis.
