Numerical method for solving arbitrary linear differential equations using a set of orthogonal basis functions and operational matrix

Saeed Hatamzadeh-Varmazyar (a,*), Zahra Masouri (b), Esmail Babolian (c)

(a) Department of Electrical Engineering, Islamshahr Branch, Islamic Azad University, Tehran, Iran
(b) Department of Mathematics, Khorramabad Branch, Islamic Azad University, Khorramabad, Iran
(c) Department of Mathematics, Kharazmi University, Tehran, Iran

* Corresponding author. Email addresses: [email protected] (Saeed Hatamzadeh-Varmazyar), [email protected] (Zahra Masouri), [email protected] (Esmail Babolian)

Abstract

This article presents a numerical method for solving ordinary linear differential equations of arbitrary order and coefficients. For this purpose, block-pulse functions (BPFs), a set of piecewise constant orthogonal functions, are used. Using the BPFs vector forms and the operational matrix of integration, solving the differential equation is reduced to solving a linear system of algebraic equations. Some problems are numerically solved by the proposed method to illustrate its generality and computational efficiency for solving an arbitrary linear differential equation. For further evaluation, the mean-absolute errors and running times associated with the method are given to show that the method is convergent and runs in a reasonable time.

Keywords: Linear differential equations; Numerical solution; Orthogonal basis functions; Vector forms; Operational matrix of integration.

MSC2010 classification: 65L05; 41A30; 33F05; 33E99.

1. Introduction

In recent decades, BPFs have come to play an important role in numerical analysis. In particular, researchers have spent valuable effort on novel ideas for the numerical solution of various functional equations using the properties of these functions. A numerical direct method is presented in [1] for the solution of some electromagnetic scattering problems using BPFs. Reference [2] presents an expansion-iterative technique based on these functions for solving Fredholm integral equations of the first kind arising in one- and two-dimensional electromagnetic scattering. A similar concept is used in [3] to establish a numerical expansion-iterative method for solving Volterra integral equations of the first kind. BPFs are used in [4] and [5], respectively, for implementation of the collocation method for the analysis of some perfectly conducting and resistive scatterers. Reference [6] presents a direct method for the solution of Volterra integral equations of the first kind based on BPFs and their operational matrix of integration. Operational matrices of BPFs are also used in [7] to solve Volterra integral and integro-differential equations of convolution type by finding the numerical inversion of the Laplace transform. Further information on BPFs may be found in [8–13].

A great deal of interest has been focused on the solution of ordinary linear differential equations. A modification of the Adomian decomposition method has been introduced in [14] for solving second-order ordinary differential equations. Also, [15] uses the homotopy perturbation method to solve specific second-order ordinary differential equations. A numerical method for solving some differential equations is presented in [16], in which the operational matrix of integration of the Chebyshev wavelets basis and its product operational matrix are used. Reference [17] describes a method for the numerical solution of linear integral equations based on Lagrange interpolation, which is then extended to linear integro-differential and differential equations. A method is presented in [18] for obtaining the particular solution of ordinary differential equations with constant coefficients, in which an explicit formula for the particular solution is derived using an upper triangular Toeplitz matrix. [19] proposes a numerical scheme to solve delay differential equations of pantograph type; the method consists of expanding the required approximate solution in terms of shifted Chebyshev polynomials.

Some of the above-mentioned methods show reasonable computational efficiency; however, all of them have major limitations. For instance, they have been proposed for the solution of linear differential equations of specific orders or specific coefficients. Also, some of the methods have essentially been formulated for special types of differential equations. The main aim of this article is to present a method that has the potential to be free of such limitations while retaining reasonable computational efficiency.

This article proposes a numerical method for solving linear ordinary differential equations of arbitrary order and coefficients. For this purpose, BPFs as a set of piecewise constant orthogonal basis functions are used to approximate the solution, its derivatives, and the equation coefficients. By using the BPFs operational matrix of integration, the BPFs coefficients vector

of the solution and that of its various derivatives are expressed in terms of the BPFs coefficients vector of the highest-order derivative and the initial-conditions vectors, which results in a linear system of algebraic equations. Solving this system gives the BPFs coefficients vector of the highest-order derivative and, accordingly, an approximate solution of the differential equation is obtained. The main advantages of the proposed method are as follows:

• The formulation of the method is quite general, without limitation or restriction. Therefore, it can be used for numerically solving an extensive variety of linear ordinary differential equations of arbitrary order and coefficients.
• The accuracy of the method is satisfactory.
• The running times of its algorithm, even for high degrees of approximation, are in a reasonable range.
• The method is convergent. Its error decreases as the grid-spacing is reduced. Thus, one can increase the degree of approximation until the results settle down to a desired accuracy.
• The algorithm is simple and clear to use and can be implemented easily.

The organization of this article is as follows. A review of BPFs and their properties is provided in section 2. The representation error in approximation by BPFs is surveyed in section 3 to obtain an error bound and evaluate the order of convergence. In section 4, some other types of piecewise constant orthogonal functions, including Rademacher, Walsh, and Haar functions, are studied in conjunction with BPFs. Section 5 presents the numerical method for solving arbitrary linear ordinary differential equations by using the BPFs vector forms and operational matrix of integration. Some test problems are numerically solved in section 6 by the proposed method and the related numerical results are given; an extensive variety of orders, coefficients, types, and solutions is associated with the test problems, to illustrate the generality and computational efficiency of the proposed method. For further evaluation of the method, we present in section 7 the running times of its algorithm for solving the presented test problems and also the mean-absolute errors associated with the method. Finally, conclusions are given in section 8.

2. Block-pulse functions and their vector forms

2.1. Definition

An m-set of BPFs is defined over a real interval [0, H) as [1–3, 6]

\[
\varphi_i(t) =
\begin{cases}
1, & \frac{iH}{m} \le t < \frac{(i+1)H}{m}, \\
0, & \text{otherwise},
\end{cases}
\tag{1}
\]

where i = 0, 1, \dots, m − 1, with a positive integer value for m. Also, h = H/m, and \varphi_i is the ith BPF. Here, we assume that H = 1, so the BPFs are defined over [0, 1) and h = 1/m. A set of BPFs over [0, 1) for m = 4 is shown in Fig. 1. BPFs have several properties, the most important of which are disjointness, orthogonality, and completeness.

Let us consider the first m terms of the BPFs and write them concisely as the m-vector

\[
\Phi(t) = [\varphi_0(t)\ \varphi_1(t)\ \dots\ \varphi_{m-1}(t)]^T, \quad t \in [0, 1),
\tag{2}
\]

where superscript T indicates transposition. From the above representation and the disjointness property it follows that

\[
\Phi(t)\Phi^T(t) V = \tilde{V} \Phi(t),
\tag{3}
\]

where V is an m-vector and \tilde{V} = \mathrm{diag}(V). Moreover, it can clearly be concluded that for any m × m matrix B

\[
\Phi^T(t)\, B\, \Phi(t) = \hat{B}^T \Phi(t),
\tag{4}
\]

where \hat{B} is an m-vector with elements equal to the diagonal entries of the matrix B. Also,

\[
\int_0^1 \Phi(t)\, dt = [h\ h\ \dots\ h]^T = \bar{h},
\tag{5}
\]

and

\[
\int_0^1 \Phi(t)\Phi^T(t)\, dt = h I,
\tag{6}
\]

where I is the m × m identity matrix.

Figure 1: A set of four BPFs.

2.2. BPFs expansion

The expansion of a function f over [0, 1) with respect to \varphi_i, i = 0, 1, \dots, m − 1, may be compactly written as [1–3, 6]

\[
f(t) \simeq \sum_{i=0}^{m-1} f_i \varphi_i(t) = F^T \Phi(t) = \Phi^T(t) F,
\tag{7}
\]

where F = [f_0\ f_1\ \dots\ f_{m-1}]^T and the f_i are defined by

\[
f_i = \frac{1}{h} \int_0^1 f(t)\, \varphi_i(t)\, dt.
\tag{8}
\]
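As a concrete illustration (our own sketch, not code from the paper; the names `bpf_coeffs` and `bpf_eval` are hypothetical), the coefficients of Eq. (8) can be approximated by the midpoint rule, which reduces to sampling f at the block midpoints; the expansion of Eq. (7) is then a piecewise constant lookup:

```python
import numpy as np

def bpf_coeffs(f, m, alpha=0.0, beta=1.0):
    """BPF coefficients f_i = (1/h) * integral of f over block i, Eq. (8),
    approximated here by the midpoint rule: f_i ~ f(midpoint of block i)."""
    h = (beta - alpha) / m
    return f(alpha + (np.arange(m) + 0.5) * h)

def bpf_eval(F, t, alpha=0.0, beta=1.0):
    """Evaluate the expansion F^T Phi(t) of Eq. (7): a piecewise constant lookup."""
    m = len(F)
    h = (beta - alpha) / m
    i = np.clip(((np.asarray(t) - alpha) / h).astype(int), 0, m - 1)
    return F[i]

# Expand f(t) = t^2 with m = 32 blocks on [0, 1):
F = bpf_coeffs(lambda t: t ** 2, 32)
print(bpf_eval(F, np.array([0.1, 0.5, 0.9])))  # close to [0.01, 0.25, 0.81], within O(1/m)
```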

2.3. Operational matrix

Computing \int_0^t \varphi_i(\tau)\, d\tau gives [3, 6]

\[
\int_0^t \varphi_i(\tau)\, d\tau =
\begin{cases}
0, & t < ih, \\
t - ih, & ih \le t < (i+1)h, \\
h, & (i+1)h \le t < 1.
\end{cases}
\tag{9}
\]

Note that t − ih equals h/2 at the mid-point of [ih, (i+1)h]. So, we can approximate t − ih, for ih \le t < (i+1)h, by h/2. Now, expressing \int_0^t \varphi_i(\tau)\, d\tau in terms of the BPFs gives

\[
\int_0^t \varphi_i(\tau)\, d\tau \simeq \Big[\underbrace{0\ \dots\ 0}_{i}\ \ \tfrac{h}{2}\ \ h\ \dots\ h\Big] \Phi(t),
\tag{10}
\]

in which h/2 is the ith component. Therefore,

\[
\int_0^t \Phi(\tau)\, d\tau \simeq P \Phi(t),
\tag{11}
\]

where the m × m matrix P is called the operational matrix of integration and can be represented as

\[
P = \frac{h}{2}
\begin{pmatrix}
1 & 2 & 2 & \dots & 2 \\
0 & 1 & 2 & \dots & 2 \\
0 & 0 & 1 & \dots & 2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\tag{12}
\]
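The structure of P makes it trivial to assemble. The following sketch (ours, with hypothetical names) builds the matrix of Eq. (12) and checks the approximation (11) by integrating cos(t), whose running integral should come out close to sin(t):

```python
import numpy as np

def op_matrix(m, h):
    """Operational matrix of integration P of Eq. (12): h/2 on the diagonal,
    h on every entry above it, zero below."""
    return (h / 2.0) * (np.triu(2.0 * np.ones((m, m))) - np.eye(m))

m = 64
h = 1.0 / m
P = op_matrix(m, h)
mid = (np.arange(m) + 0.5) * h
F = np.cos(mid)                  # BPF coefficients of cos(t) (midpoint rule, Eq. (8))
int_coef = P.T @ F               # coefficients of the running integral, from Eq. (11)
print(np.max(np.abs(int_coef - np.sin(mid))))  # small residual, shrinking with m
```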

Figure 2: Representation error for BPF.

3. Representation error in approximation by BPFs

Here, we perform an analysis and derive an error bound to evaluate the order of convergence. Considering Fig. 2, the representation error (or residual error) when a differentiable function f(t) is represented in a series of BPFs over a subinterval [i/m, (i+1)/m) is [11]

\[
e_i(t) = f_i \varphi_i(t) - f(t), \quad t \in \Big[\frac{i}{m}, \frac{i+1}{m}\Big).
\tag{13}
\]

It can be shown that [11]

\[
\| e_i \|^2 = \frac{1}{12 m^3} \big[ f'(\xi_i) \big]^2, \quad \xi_i \in \Big[\frac{i}{m}, \frac{i+1}{m}\Big).
\tag{14}
\]

It is obvious that the error in the BPFs expansion of f(t) may be written as

\[
e(t) = \sum_{i=0}^{m-1} f_i \varphi_i(t) - f(t), \quad t \in [0, 1).
\tag{15}
\]

So

\[
\| e(t) \|^2 = \int_0^1 \Big( \sum_{i=0}^{m-1} f_i \varphi_i(t) - f(t) \Big)^2 dt.
\tag{16}
\]

Now, suppose that f(t) is decomposed into m functions f_i(t), defined on the subintervals, such that

\[
f_i(t) =
\begin{cases}
f(t), & t \in \big[\frac{i}{m}, \frac{i+1}{m}\big), \\
0, & \text{otherwise},
\end{cases}
\qquad i = 0, 1, \dots, m-1.
\tag{17}
\]

So we can write

\[
f(t) = f_0(t) + f_1(t) + \dots + f_{m-1}(t) = \sum_{i=0}^{m-1} f_i(t), \quad t \in [0, 1).
\tag{18}
\]

Therefore from (16) we obtain

\[
\| e(t) \|^2 = \int_0^1 \Big( \sum_{i=0}^{m-1} \big( f_i \varphi_i(t) - f_i(t) \big) \Big)^2 dt,
\tag{19}
\]

or

\[
\| e(t) \|^2 = \sum_{j=0}^{m-1} \int_{j/m}^{(j+1)/m} \Big( \sum_{i=0}^{m-1} \big( f_i \varphi_i(t) - f_i(t) \big) \Big)^2 dt.
\tag{20}
\]

Note that

\[
\varphi_j(t) = 0, \quad f_j(t) = 0, \qquad t \notin \Big[\frac{j}{m}, \frac{j+1}{m}\Big).
\tag{21}
\]

Hence, Eq. (20) simplifies to

\[
\| e(t) \|^2 = \sum_{j=0}^{m-1} \int_{j/m}^{(j+1)/m} \big( f_j \varphi_j(t) - f_j(t) \big)^2 dt
            = \sum_{j=0}^{m-1} \int_{j/m}^{(j+1)/m} \big( f_j \varphi_j(t) - f(t) \big)^2 dt.
\tag{22}
\]

So, according to (13) we obtain

\[
\| e(t) \|^2 = \sum_{j=0}^{m-1} \int_{j/m}^{(j+1)/m} e_j^2(t)\, dt,
\tag{23}
\]

or

\[
\| e(t) \|^2 = \sum_{j=0}^{m-1} \| e_j \|^2,
\tag{24}
\]

which results, according to (14), in

\[
\| e(t) \|^2 = \sum_{j=0}^{m-1} \frac{1}{12 m^3} \big[ f'(\xi_j) \big]^2, \quad \xi_j \in \Big[\frac{j}{m}, \frac{j+1}{m}\Big).
\tag{25}
\]

Now, we note that for all j = 0, 1, \dots, m − 1, we have

\[
\big| f'(\xi_j) \big| \le \sup \big| f'(t) \big|, \quad t \in [0, 1),
\tag{26}
\]

which gives

\[
\big[ f'(\xi_j) \big]^2 = \big| f'(\xi_j) \big|^2 \le \sup \big| f'(t) \big|^2, \quad t \in [0, 1).
\tag{27}
\]

From (25) and (27) it is concluded that

\[
\| e(t) \|^2 \le \sum_{j=0}^{m-1} \frac{1}{12 m^3} \sup \big| f'(t) \big|^2,
\tag{28}
\]

or

\[
\| e(t) \|^2 \le m \, \frac{1}{12 m^3} \sup \big| f'(t) \big|^2,
\tag{29}
\]

and consequently

\[
\| e(t) \|^2 \le \frac{1}{12 m^2} \sup \big| f'(t) \big|^2,
\tag{30}
\]

which results in

\[
\| e(t) \| \le \frac{1}{2 \sqrt{3}\, m} \sup \big| f'(t) \big|,
\tag{31}
\]

or

\[
\| e(t) \| \le M \frac{1}{m},
\tag{32}
\]

where M = \frac{1}{2\sqrt{3}} \sup | f'(t) |.

Eq. (32) clearly shows that the total error in approximation by an m-set of BPFs is O(1/m); in other words, the order of convergence may be considered as O(1/m). Moreover, we obtain from (32) that

\[
\lim_{m \to \infty} \| e(t) \| \le M \lim_{m \to \infty} \frac{1}{m}.
\tag{33}
\]

Therefore

\[
\lim_{m \to \infty} \| e(t) \| = 0,
\tag{34}
\]

which establishes that we obtain an exact representation of the function by BPFs if m is high enough.
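As a quick numerical check of the bound (32) (our own illustration, with hypothetical names), the L² error of the BPF expansion of a smooth function should roughly halve each time m is doubled:

```python
import numpy as np

def l2_error(f, m, n_fine=4096):
    """|| e(t) || of Eqs. (15)-(16) on [0, 1), with the f_i of Eq. (8) taken by
    the midpoint rule and the norm evaluated on a fine grid."""
    h = 1.0 / m
    F = f((np.arange(m) + 0.5) * h)              # BPF coefficients
    t = (np.arange(n_fine) + 0.5) / n_fine       # fine grid for the L2 norm
    approx = F[np.minimum((t / h).astype(int), m - 1)]
    return np.sqrt(np.mean((approx - f(t)) ** 2))

for m in (8, 16, 32, 64, 128):
    print(m, l2_error(np.sin, m))   # errors roughly halve as m doubles, O(1/m)
```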

Figure 3: A set of four Rademacher functions.

4. Some other types of piecewise constant orthogonal functions

4.1. Rademacher functions

Figure 3 shows a 4-set of Rademacher functions (RFs) {r_i(t)} on the unit interval [0, 1). In general, r_i(t), i \ge 1, is a train of unit pulses with 2^{i−1} cycles in [0, 1), taking alternately the values +1 and −1. An exception is r_0(t), which is the unit pulse over [0, 1). This system of square waves may be generated physically in many ways. For instance, a binary counter with a clock-pulse-train input gives RFs at its various stages up to that of the most significant digit. The system {r_i(t)} is orthogonal but not complete [11].

Consider the first m functions in the series of RFs defined above and write them concisely as the m-vector

\[
R(t) = [r_0(t)\ r_1(t)\ \dots\ r_{m-1}(t)]^T.
\tag{35}
\]

The RFs may be expressed as linear combinations of BPFs. It can be shown that [11]

\[
R(t) = T_{BR} \Phi(t).
\tag{36}
\]

The linear transformation matrix T_{BR}, in the case of m = 4, is as follows:

\[
T_{BR} =
\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1
\end{pmatrix}.
\tag{37}
\]

Note that T_{BR} is not a square matrix, and so it is not invertible. This suggests in simple terms that the sequence {r_i(t)} is not complete. Hence, we will not attempt any further discussion of function approximation with the Rademacher system; these functions are not appropriate for solving many functional equations.

4.2. Walsh functions

A 4-set of Walsh functions (WFs) is shown in Fig. 4. WFs may be generated from RFs using the relation [11]

\[
w_n(t) = \big[ r_q(t) \big]^{d_q} \big[ r_{q-1}(t) \big]^{d_{q-1}} \big[ r_{q-2}(t) \big]^{d_{q-2}} \cdots,
\tag{38}
\]

where w_n(t) is the (n+1)th member of {w_i(t)} ordered in a particular way, and

\[
q = [\log_2 n] + 1, \quad n \ge 1,
\tag{39}
\]

in which [\,\cdot\,] means the integer part. The digits d_j (zeros or ones) belong to the binary form of n; that is,

\[
n = d_q 2^{q-1} + d_{q-1} 2^{q-2} + \cdots.
\tag{40}
\]

Figure 4: A set of four Walsh functions.

There are many kinds of ordering of WFs; here we choose only the so-called Paley form [11]. In an m-set of WFs, m = 2^j, where j is a positive integer. Thus,

\[
w_0(t) = r_0(t), \quad
w_1(t) = r_1(t), \quad
w_2(t) = r_2(t), \quad
w_3(t) = r_2(t)\, r_1(t), \quad \dots
\tag{41}
\]

and so on. The system of WFs is orthogonal and complete [11]. The first m functions of the WFs can be written as the m-vector

\[
W(t) = [w_0(t)\ w_1(t)\ \dots\ w_{m-1}(t)]^T.
\tag{42}
\]

The WFs may be expressed as linear combinations of BPFs; it can be shown that

\[
W(t) = T_{BW} \Phi(t).
\tag{43}
\]

The linear transformation matrix T_{BW}, in the case of m = 4, is as follows [11]:

\[
T_{BW} =
\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}.
\tag{44}
\]

T_{BW} is a constant invertible matrix. The expansion of a function f over [0, 1) with respect to WFs may be compactly written as

\[
f(t) \simeq F_W^T W(t),
\tag{45}
\]

where the m-vector F_W represents the WFs spectrum. The spectrum may be transformed from the Walsh basis to the block-pulse basis by using the linear transformation connecting the vectors \Phi(t) and W(t). For instance,

\[
F_W^T W(t) = F_W^T T_{BW} \Phi(t) = F_B^T \Phi(t),
\tag{46}
\]

implying that

\[
F_W^T = F_B^T T_{BW}^{-1}.
\tag{47}
\]

(47) 7

h0(t)

1 0

−1

h1(t)

0

1/4

1/2

3/4

1

t

1/4

1/2

3/4

1

t

1/4

1/2

3/4

1

1 0

−1

h2(t)

0 1.4 0 −1.4 0 h3(t)

t 1.4 0 −1.4 0

1/4

1/2

3/4

1 t

Figure 5: A set of four Haar functions.

4.3. Haar functions

Another complete system of orthogonal piecewise constant basis functions is that of Haar functions (HFs), defined as an m-set as [11]

\[
\mathrm{har}(0, 0, t) = 1, \quad t \in [0, 1),
\]
\[
\mathrm{har}(j, n, t) =
\begin{cases}
2^{j/2}, & \frac{n-1}{2^j} \le t < \frac{n-1/2}{2^j}, \\
-2^{j/2}, & \frac{n-1/2}{2^j} \le t < \frac{n}{2^j}, \\
0, & \text{otherwise},
\end{cases}
\tag{48}
\]

where 0 \le j < \log_2 m, 1 \le n \le 2^j, m = 2^k, and k is a positive integer. Figure 5 shows a set of four HFs on [0, 1) for 1 \le n \le 2 and m = 4. The sequence {h_i(t)} is ordered as follows [11]:

\[
h_0(t) = \mathrm{har}(0, 0, t), \quad
h_1(t) = \mathrm{har}(0, 1, t), \quad
h_2(t) = \mathrm{har}(1, 1, t), \quad
h_3(t) = \mathrm{har}(1, 2, t).
\tag{49}
\]

Consider the first m functions of the HFs and write them concisely as the m-vector

\[
H(t) = [h_0(t)\ h_1(t)\ \dots\ h_{m-1}(t)]^T.
\tag{50}
\]

Similar to RFs and WFs, the HFs may be expressed as linear combinations of BPFs. It can be shown that [11]

\[
H(t) = T_{BH} \Phi(t).
\tag{51}
\]

The linear transformation matrix T_{BH}, in the case of m = 4, is as follows:

\[
T_{BH} =
\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
\sqrt{2} & -\sqrt{2} & 0 & 0 \\
0 & 0 & \sqrt{2} & -\sqrt{2}
\end{pmatrix}.
\tag{52}
\]

T_{BH} is a constant invertible matrix, which confirms that the system {h_i(t)} is complete. The expansion of a function f over [0, 1) with respect to HFs may be compactly written as

\[
f(t) \simeq F_H^T H(t),
\tag{53}
\]

where the m-vector F_H represents the HFs spectrum. Similarly, we can write

\[
F_H^T H(t) = F_H^T T_{BH} \Phi(t) = F_B^T \Phi(t),
\tag{54}
\]

so

\[
F_H^T = F_B^T T_{BH}^{-1}.
\tag{55}
\]
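In the same spirit as the Walsh sketch above, T_BH can be generated directly from Eq. (48) (again our own code with hypothetical names), since sampling at the block midpoints determines each piecewise constant Haar function exactly when m = 2^k:

```python
import numpy as np

def haar_to_bpf(m):
    """T_BH for m = 2^k: rows are har(0,0,t) and har(j,n,t) of Eq. (48),
    sampled at the block midpoints, in the ordering of Eq. (49)."""
    t = (np.arange(m) + 0.5) / m
    T = np.ones((m, m))
    row = 1
    for j in range(int(np.log2(m))):
        for n in range(1, 2 ** j + 1):
            lo, mid, hi = (n - 1) / 2 ** j, (n - 0.5) / 2 ** j, n / 2 ** j
            T[row] = np.where((t >= lo) & (t < mid), 2.0 ** (j / 2.0),
                              np.where((t >= mid) & (t < hi), -(2.0 ** (j / 2.0)), 0.0))
            row += 1
    return T

print(haar_to_bpf(4))   # reproduces the 4 x 4 matrix of Eq. (52)
```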

5. Numerical method for solving arbitrary linear ordinary differential equations using BPFs

Here, using the results obtained in section 2 for BPFs, we propose an effective numerical method for solving ordinary linear differential equations of arbitrary order and coefficients. Let us consider a general ordinary linear differential equation, with arbitrary coefficients, of arbitrary order n:

\[
x^{(n)}(t) + a_{n-1}(t) x^{(n-1)}(t) + a_{n-2}(t) x^{(n-2)}(t) + \dots + a_1(t) x'(t) + a_0(t) x(t) = b(t),
\tag{56}
\]

with the following initial conditions:

\[
x(t_0) = \alpha_0, \quad
x'(t_0) = \alpha_1, \quad \dots, \quad
x^{(n-1)}(t_0) = \alpha_{n-1},
\tag{57}
\]

where x is the unknown function of the variable t to be determined; x^{(k)}, k = 1, 2, \dots, n, is the kth derivative of x with respect to t; the coefficients a_k, k = 0, 1, \dots, n − 1, and b are functions of t; and each \alpha_k, k = 0, 1, \dots, n − 1, is a scalar. Also, without loss of generality, it is supposed that t_0 = 0.

Approximating the functions x; x^{(k)}, k = 1, 2, \dots, n; a_k, k = 0, 1, \dots, n − 1; and b with respect to BPFs, using Eq. (7), gives

\[
x(t) \simeq X_0^T \Phi(t) = \Phi^T(t) X_0, \qquad
x^{(k)}(t) \simeq X_k^T \Phi(t) = \Phi^T(t) X_k,
\]
\[
a_k(t) \simeq A_k^T \Phi(t) = \Phi^T(t) A_k, \qquad
b(t) \simeq B^T \Phi(t) = \Phi^T(t) B,
\tag{58}
\]

where the m-vectors X_0, X_k, A_k, and B are the BPFs coefficients of x, x^{(k)}, a_k, and b, respectively. Substituting Eqs. (58) into Eq. (56) gives

\[
X_n^T \Phi(t) + A_{n-1}^T \Phi(t) \Phi^T(t) X_{n-1} + A_{n-2}^T \Phi(t) \Phi^T(t) X_{n-2} + \dots + A_1^T \Phi(t) \Phi^T(t) X_1 + A_0^T \Phi(t) \Phi^T(t) X_0 \simeq B^T \Phi(t).
\tag{59}
\]

Using Eq. (3) it follows that

\[
X_n^T \Phi(t) + A_{n-1}^T \tilde{X}_{n-1} \Phi(t) + A_{n-2}^T \tilde{X}_{n-2} \Phi(t) + \dots + A_1^T \tilde{X}_1 \Phi(t) + A_0^T \tilde{X}_0 \Phi(t) \simeq B^T \Phi(t),
\tag{60}
\]

or

\[
X_n^T + A_{n-1}^T \tilde{X}_{n-1} + A_{n-2}^T \tilde{X}_{n-2} + \dots + A_1^T \tilde{X}_1 + A_0^T \tilde{X}_0 \simeq B^T.
\tag{61}
\]

Transposition of both sides of Eq. (61) yields

\[
X_n + \tilde{X}_{n-1} A_{n-1} + \tilde{X}_{n-2} A_{n-2} + \dots + \tilde{X}_1 A_1 + \tilde{X}_0 A_0 \simeq B,
\tag{62}
\]

because \tilde{X}_k^T = \tilde{X}_k, k = 0, 1, \dots, n − 1. Therefore, since \tilde{X}_k A_k = \tilde{A}_k X_k, k = 0, 1, \dots, n − 1, we get

\[
X_n + \tilde{A}_{n-1} X_{n-1} + \tilde{A}_{n-2} X_{n-2} + \dots + \tilde{A}_1 X_1 + \tilde{A}_0 X_0 \simeq B.
\tag{63}
\]

On the other hand, we have

\[
\int_0^t x^{(n)}(\tau)\, d\tau = x^{(n-1)}(t) - \alpha_{n-1}, \quad
\int_0^t x^{(n-1)}(\tau)\, d\tau = x^{(n-2)}(t) - \alpha_{n-2}, \quad \dots, \quad
\int_0^t x'(\tau)\, d\tau = x(t) - \alpha_0,
\tag{64}
\]

which results in

\[
P^T X_n \simeq X_{n-1} - \vec{\alpha}_{n-1}, \quad
P^T X_{n-1} \simeq X_{n-2} - \vec{\alpha}_{n-2}, \quad \dots, \quad
P^T X_1 \simeq X_0 - \vec{\alpha}_0,
\tag{65}
\]

or

\[
X_{n-1} \simeq P^T X_n + \vec{\alpha}_{n-1}, \quad
X_{n-2} \simeq P^T X_{n-1} + \vec{\alpha}_{n-2}, \quad \dots, \quad
X_0 \simeq P^T X_1 + \vec{\alpha}_0,
\tag{66}
\]

in which P is the operational matrix of integration of Eq. (11) and \vec{\alpha}_k, k = 0, 1, \dots, n − 1, is an m-vector with all elements equal to \alpha_k. Equations (66) may be rewritten as

\[
X_{n-1} \simeq P^T X_n + \vec{\alpha}_{n-1}, \quad
X_{n-2} \simeq (P^T)^2 X_n + P^T \vec{\alpha}_{n-1} + \vec{\alpha}_{n-2}, \quad
X_{n-3} \simeq (P^T)^2 X_{n-1} + P^T \vec{\alpha}_{n-2} + \vec{\alpha}_{n-3}, \quad \dots, \quad
X_0 \simeq (P^T)^2 X_2 + P^T \vec{\alpha}_1 + \vec{\alpha}_0.
\tag{67}
\]

After successive substitutions we finally obtain

\[
X_{n-1} \simeq P^T X_n + \vec{\alpha}_{n-1},
\]
\[
X_{n-2} \simeq (P^T)^2 X_n + P^T \vec{\alpha}_{n-1} + \vec{\alpha}_{n-2},
\]
\[
X_{n-3} \simeq (P^T)^3 X_n + (P^T)^2 \vec{\alpha}_{n-1} + P^T \vec{\alpha}_{n-2} + \vec{\alpha}_{n-3},
\]
\[
\vdots
\]
\[
X_0 \simeq (P^T)^n X_n + (P^T)^{n-1} \vec{\alpha}_{n-1} + (P^T)^{n-2} \vec{\alpha}_{n-2} + \dots + P^T \vec{\alpha}_1 + \vec{\alpha}_0.
\tag{68}
\]

Now, we substitute Eqs. (68) into Eq. (63) and get

\[
X_n + \tilde{A}_{n-1} \big( P^T X_n + \vec{\alpha}_{n-1} \big)
+ \tilde{A}_{n-2} \big( (P^T)^2 X_n + P^T \vec{\alpha}_{n-1} + \vec{\alpha}_{n-2} \big)
+ \tilde{A}_{n-3} \big( (P^T)^3 X_n + (P^T)^2 \vec{\alpha}_{n-1} + P^T \vec{\alpha}_{n-2} + \vec{\alpha}_{n-3} \big)
+ \dots
+ \tilde{A}_0 \big( (P^T)^n X_n + (P^T)^{n-1} \vec{\alpha}_{n-1} + (P^T)^{n-2} \vec{\alpha}_{n-2} + \dots + P^T \vec{\alpha}_1 + \vec{\alpha}_0 \big) \simeq B,
\tag{69}
\]

or

\[
X_n + \big[ \tilde{A}_{n-1} P^T \big] X_n + \tilde{A}_{n-1} \vec{\alpha}_{n-1}
+ \big[ \tilde{A}_{n-2} (P^T)^2 \big] X_n + \tilde{A}_{n-2} P^T \vec{\alpha}_{n-1} + \tilde{A}_{n-2} \vec{\alpha}_{n-2}
+ \big[ \tilde{A}_{n-3} (P^T)^3 \big] X_n + \tilde{A}_{n-3} (P^T)^2 \vec{\alpha}_{n-1} + \tilde{A}_{n-3} P^T \vec{\alpha}_{n-2} + \tilde{A}_{n-3} \vec{\alpha}_{n-3}
+ \dots
+ \big[ \tilde{A}_0 (P^T)^n \big] X_n + \tilde{A}_0 (P^T)^{n-1} \vec{\alpha}_{n-1} + \tilde{A}_0 (P^T)^{n-2} \vec{\alpha}_{n-2} + \dots + \tilde{A}_0 P^T \vec{\alpha}_1 + \tilde{A}_0 \vec{\alpha}_0 \simeq B.
\tag{70}
\]

Equation (70) may be rewritten as

\[
\big[ I + \tilde{A}_{n-1} P^T + \tilde{A}_{n-2} (P^T)^2 + \tilde{A}_{n-3} (P^T)^3 + \dots + \tilde{A}_0 (P^T)^n \big] X_n
\simeq B - \big[ \tilde{A}_{n-1} \vec{\alpha}_{n-1}
+ \tilde{A}_{n-2} P^T \vec{\alpha}_{n-1} + \tilde{A}_{n-2} \vec{\alpha}_{n-2}
+ \tilde{A}_{n-3} (P^T)^2 \vec{\alpha}_{n-1} + \tilde{A}_{n-3} P^T \vec{\alpha}_{n-2} + \tilde{A}_{n-3} \vec{\alpha}_{n-3}
+ \dots
+ \tilde{A}_0 (P^T)^{n-1} \vec{\alpha}_{n-1} + \tilde{A}_0 (P^T)^{n-2} \vec{\alpha}_{n-2} + \dots + \tilde{A}_0 P^T \vec{\alpha}_1 + \tilde{A}_0 \vec{\alpha}_0 \big].
\tag{71}
\]

Now, we replace \simeq with = and write Eq. (71) in the simpler form

\[
G X_n = W,
\tag{72}
\]

in which

\[
G = I + \sum_{r=0}^{n-1} \tilde{A}_r (P^T)^{n-r},
\tag{73}
\]

and

\[
W = B - \sum_{k=0}^{n-1} \sum_{r=k}^{n-1} \tilde{A}_k (P^T)^{r-k} \vec{\alpha}_r.
\tag{74}
\]

Equation (72) is a linear system of m algebraic equations in the m unknowns x_{n0}, x_{n1}, \dots, x_{n,m-1}, the components of X_n. Solution of this system gives the vector X_n. Then, from Eqs. (68) we have

\[
X_0 \simeq (P^T)^n X_n + (P^T)^{n-1} \vec{\alpha}_{n-1} + (P^T)^{n-2} \vec{\alpha}_{n-2} + \dots + P^T \vec{\alpha}_1 + \vec{\alpha}_0.
\tag{75}
\]

Substituting the determined X_n into Eq. (75) gives the unknown vector X_0. Hence, an approximate solution of differential equation (56) is obtained as

\[
x(t) \simeq X_0^T \Phi(t).
\tag{76}
\]
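To make the whole construction concrete, here is a compact NumPy sketch of the scheme (our own code; the name `solve_linear_ode` is hypothetical, and the coefficient vectors are formed by midpoint sampling rather than the exact integrals of Eq. (8)). It assembles G and W from Eqs. (73)–(74), solves Eq. (72), and recovers X_0 from Eq. (75):

```python
import numpy as np

def solve_linear_ode(coeffs, b, init, m, alpha=0.0, beta=1.0):
    """BPF scheme for x^(n) + a_{n-1} x^(n-1) + ... + a_0 x = b on [alpha, beta).
    coeffs = [a_0, ..., a_{n-1}] and b are callables; init = [alpha_0, ...,
    alpha_{n-1}] are the initial conditions at t = alpha.  Returns the block
    midpoints and the coefficient vector X_0 of the approximate solution."""
    n = len(coeffs)
    h = (beta - alpha) / m
    mid = alpha + (np.arange(m) + 0.5) * h
    PT = ((h / 2.0) * (np.triu(2.0 * np.ones((m, m))) - np.eye(m))).T  # P^T, Eq. (12)
    A = [np.diag(a(mid)) for a in coeffs]     # diagonal matrices A~_k
    ones = np.ones(m)

    G = np.eye(m)                             # G = I + sum A~_r (P^T)^{n-r}, Eq. (73)
    for r in range(n):
        G += A[r] @ np.linalg.matrix_power(PT, n - r)

    W = np.array(b(mid), dtype=float)         # W of Eq. (74)
    for k in range(n):
        for r in range(k, n):
            W -= A[k] @ np.linalg.matrix_power(PT, r - k) @ (init[r] * ones)

    Xn = np.linalg.solve(G, W)                # Eq. (72)

    X0 = np.linalg.matrix_power(PT, n) @ Xn   # Eq. (75)
    for r in range(n):
        X0 += np.linalg.matrix_power(PT, r) @ (init[r] * ones)
    return mid, X0                            # x(t) ~ X0^T Phi(t), Eq. (76)

# Usage (our own test): x'' + x = 0, x(0) = 1, x'(0) = 0, exact solution cos(t).
a0 = lambda t: np.ones_like(t)
a1 = lambda t: np.zeros_like(t)
rhs = lambda t: np.zeros_like(t)
t, X0 = solve_linear_ode([a0, a1], rhs, [1.0, 0.0], m=64)
print(np.max(np.abs(X0 - np.cos(t))))         # small, decreasing as m grows
```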

Remark 1. The method proposed in this section can be used to obtain the numerical solution of Eq. (56) on any arbitrary bounded real interval. For this purpose, we assume that the BPFs are defined over an arbitrary bounded interval [\alpha, \beta), where \alpha, \beta \in \mathbb{R}. It is clear that all the properties and relations mentioned in section 2 can easily be generalized to this interval, provided that \varphi_i, i = 0, 1, \dots, m − 1, is defined as

\[
\varphi_i(t) =
\begin{cases}
1, & \alpha + ih \le t < \alpha + (i+1)h, \\
0, & \text{otherwise},
\end{cases}
\tag{77}
\]

in which h = (\beta − \alpha)/m. With this generalization, the formulation proposed in the current section can be applied to the solution of the differential equation on any arbitrary bounded real interval [\alpha, \beta) without any modification to the formulas of the presented method.

Remark 2. As is common in the field of differential equations, we have considered the form of the differential equation as in Eq. (56). This form is sufficiently general. We can, however, consider a slightly more general form of the equation:

\[
a_n(t) x^{(n)}(t) + a_{n-1}(t) x^{(n-1)}(t) + a_{n-2}(t) x^{(n-2)}(t) + \dots + a_1(t) x'(t) + a_0(t) x(t) = b(t),
\tag{78}
\]

where the function a_n is a non-zero coefficient. Obviously, the form of Eq. (78) is easily converted to that of Eq. (56) by dividing both sides by a_n. However, if there exists a pole-type singularity in the coefficients of the differential equation, then the form (78) may be more appropriate. For instance, considering a bounded real interval [\alpha, \beta), a pole-type singularity in the coefficient a_k(t) = \frac{f(t)}{t - \xi}, where f is a function of t and \xi \in [\alpha, \beta), may be removed by multiplying both sides of the related differential equation by t − \xi. The method proposed in this article can be formulated based on form (78) too. One can easily show that for the numerical solution of Eq. (78), the following system of algebraic equations should be solved:

\[
G X_n = W,
\tag{79}
\]

in which

\[
G = \sum_{r=0}^{n} \tilde{A}_r (P^T)^{n-r},
\tag{80}
\]

and

\[
W = B - \sum_{k=0}^{n-1} \sum_{r=k}^{n-1} \tilde{A}_k (P^T)^{r-k} \vec{\alpha}_r.
\tag{81}
\]

Then, an approximate solution of Eq. (78) is obtained using Eqs. (75) and (76).
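In the sketch given after Eq. (76), the only change needed for form (78) is the assembly of G per Eq. (80): the identity term is replaced by the contribution of the leading coefficient, so the coefficient list now carries n + 1 callables [a_0, …, a_n]. A minimal helper (our own, hypothetical, matching the names used there):

```python
import numpy as np

def assemble_G_form78(A, PT):
    """G of Eq. (80) for form (78): G = sum_{r=0}^{n} A~_r (P^T)^{n-r},
    where A = [A~_0, ..., A~_n] includes the leading coefficient a_n."""
    n = len(A) - 1
    G = np.zeros_like(PT)
    for r in range(n + 1):
        G += A[r] @ np.linalg.matrix_power(PT, n - r)
    return G
```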

6. Test problems and numerical results

Some test problems are numerically solved here by the method proposed in this article and the related numerical results are given. An extensive variety of orders, coefficients, types, and solutions is associated with the test problems, to illustrate the generality and computational efficiency of the proposed method for the solution of arbitrary linear ordinary differential equations. The approximate results obtained by the method for each test problem are given at ten points t_i in the related interval [\alpha, \beta) such that t_i = \alpha + i h', where i = 0, 1, \dots, 9 and h' = (\beta − \alpha)/10. It should be mentioned that all the computations associated with the proposed method have been performed using MATLAB on a personal computer with an Intel 2.5 GHz processor.

Test problem 1. Numerical solution of Bessel's equation

The well-known Bessel's equation, a linear homogeneous second-order ordinary differential equation, is given by [20, 21]

\[
t^2 x''(t) + t x'(t) + (t^2 - \nu^2) x(t) = 0,
\tag{82}
\]

or

\[
x''(t) + \frac{1}{t} x'(t) + \Big( 1 - \frac{\nu^2}{t^2} \Big) x(t) = 0,
\tag{83}
\]

where \nu is a real constant. As a second-order differential equation, Bessel's equation has two independent solutions. If \nu = \ell is an integer, one solution defines J_\ell(t), the Bessel function of the first kind of order \ell, and the other defines Y_\ell(t), the Bessel function of the second kind of order \ell. The Bessel functions play an important role in physical and engineering problems; for instance, the Bessel function of the first kind appears in the solution of the electromagnetic wave equation inside circular waveguides [21].

We apply the proposed method to numerically solve Bessel's equation and obtain its approximate solution. The initial conditions are set such that the equation has the unique solution J_\ell(t). The exact values of J_\ell have been extracted by MATLAB. However, a series solution is available as [21]

\[
J_\ell(t) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\,(m+\ell)!} \Big( \frac{t}{2} \Big)^{2m+\ell}.
\tag{84}
\]

Moreover,

\[
J_{-\ell}(t) = (-1)^\ell J_\ell(t), \qquad
J_\ell(-t) = (-1)^\ell J_\ell(t).
\tag{85}
\]

The following relations may be used to obtain the derivatives of the Bessel functions with respect to t for the required initial conditions. Letting U_\nu(t) denote an arbitrary solution of Bessel's equation, we have [21]

\[
U_\nu'(t) = U_{\nu-1}(t) - \frac{\nu}{t} U_\nu(t), \qquad
U_\nu'(t) = -U_{\nu+1}(t) + \frac{\nu}{t} U_\nu(t).
\tag{86}
\]

The approximate results calculated by the proposed method together with the related exact results for the solution of Bessel's equation are given in Figs. 6(a)–6(d), respectively, for \ell = 0, 1, 2, and 3.

Test problem 2. Numerical solution of the Legendre differential equation

The Legendre differential equation, a linear homogeneous second-order ordinary differential equation, is given by [22]

\[
(1 - t^2) x''(t) - 2t x'(t) + \nu(\nu + 1) x(t) = 0,
\tag{87}
\]

or

\[
x''(t) - \frac{2t}{1 - t^2} x'(t) + \frac{\nu(\nu + 1)}{1 - t^2} x(t) = 0,
\tag{88}
\]

where \nu is a real constant. One solution of the Legendre differential equation defines the Legendre function of the first kind, P_\nu(t). If \nu = \ell is an integer, the solution of the Legendre differential equation results in the function P_\ell(t), known as the Legendre polynomial of degree \ell. Another solution of this equation gives Q_\ell(t), referred to as the Legendre function of the second kind. Like the Bessel functions, the Legendre functions have various applications in physical and engineering problems; e.g., associated Legendre functions appear in the solution of the Helmholtz equation in spherical coordinates [21]. The exact values of the first six polynomials P_\ell may be calculated through the following analytical relations [22]:

\[
P_0(t) = 1, \quad
P_1(t) = t, \quad
P_2(t) = \tfrac{1}{2}(3t^2 - 1), \quad
P_3(t) = \tfrac{1}{2}(5t^3 - 3t), \quad
P_4(t) = \tfrac{1}{8}(35t^4 - 30t^2 + 3), \quad
P_5(t) = \tfrac{1}{8}(63t^5 - 70t^3 + 15t).
\tag{89}
\]

We apply the proposed method to numerically solve the Legendre differential equation and set the initial conditions such that the equation has the unique solution P_\ell(t). The numerical results obtained by the proposed method together with the related exact results are given in Figs. 7(a)–7(d), respectively, for \ell = 0, 1, 2, and 5.

Test problem 3. Numerical solution of a third-order inhomogeneous linear differential equation with singular coefficients

In this problem we survey the flexibility of the proposed method in the solution of differential equations with singular coefficients. For this purpose, let us consider the following third-order inhomogeneous linear differential equation:

\[
x'''(t) - \frac{t}{t^2 - 0.64} \ln(t^2 + 0.64)\, x''(t) + t^2 \sin\Big( \frac{1}{t - 0.8} \Big) x'(t) + \cos(\pi t^2)\, x(t) = b(t),
\tag{90}
\]

with exact solution x(t) = t^3 + \sin(\pi t) and right side

\[
b(t) = 6 - \pi^3 \cos(\pi t) - \frac{t \ln(t^2 + 0.64)}{t^2 - 0.64} \big( 6t - \pi^2 \sin(\pi t) \big)
     + t^2 \big( 3t^2 + \pi \cos(\pi t) \big) \sin\Big( \frac{1}{t - 0.8} \Big)
     + \cos(\pi t^2) \big( t^3 + \sin(\pi t) \big).
\tag{91}
\]

The coefficient a_1 = t^2 \sin\big( \frac{1}{t - 0.8} \big) has an essential singularity, and the coefficient a_2 = -\frac{t}{t^2 - 0.64} \ln(t^2 + 0.64) has a pole-type singularity at t = 0.8 in the interval [0, 1). Hence, a_2 approaches \pm\infty as t approaches 0.8. The pole-type singularity of a_2 is removed by multiplying both sides of Eq. (90) by t^2 − 0.64 (or t − 0.8). However, the essential singularity of a_1 remains. On the other hand, the function t^2 (t^2 - 0.64) \sin\big( \frac{1}{t - 0.8} \big) (or even t^2 \sin\big( \frac{1}{t - 0.8} \big)) is bounded as t approaches 0.8. Therefore, a stable solution is accessible for the problem over the interval [0, 1). Of course, we have to use form (78) and, accordingly, Eqs. (79)–(81) in this case. Fig. 8(a) shows the numerical solution of this problem obtained by the proposed method together with the related exact results.

If we consider form (56) for solving the problem (as in Eq. (90)), then a sufficiently small region of radius \varepsilon around the singular point t = 0.8 must be omitted. The \varepsilon value depends mainly on the used software's structure and its default algorithms for implementing various mathematical operations; the related subroutines may be modified to be more flexible in this regard. For the current problem, according to the used software, the minimum valid value of \varepsilon is 5.5512 E−17; the implementation for lower values of \varepsilon gives no appropriate output. The numerical results obtained by the proposed method for \varepsilon = 5.5512 E−17 are given in Fig. 8(b), and they are in good agreement with those presented in Fig. 8(a).

Test problem 4. Numerical solution of a fourth-order inhomogeneous linear differential equation with singular solution

The purpose of this problem is to survey the computational efficiency of the proposed method near a singular point of the solution of the differential equation. We consider the following fourth-order inhomogeneous linear differential equation:

\[
x^{(4)}(t) + t^2 x'''(t) + \sin(\pi t)\, x''(t) + t^3 x'(t) + \sqrt{t^2 + 1}\, x(t) = b(t),
\tag{92}
\]

with singular exact solution x(t) = \frac{1}{t-1} and right side

\[
b(t) = -t - 2 + \frac{\sqrt{t^2 + 1}}{t - 1} - \frac{3t - 2}{(t - 1)^2} + \frac{2 \sin(\pi t)}{(t - 1)^3} - \frac{6 t^2}{(t - 1)^4} + \frac{24}{(t - 1)^5}.
\tag{93}
\]

The solution x(t) has a pole-type singularity at t = 1. So, if we consider an interval [\alpha, \beta) for the solution of Eq. (92) such that 1 \in [\alpha, \beta), then |x(t)| approaches infinity as t approaches 1. Obviously, in a real situation we generally have no certain knowledge about the solution before solving the equation and, therefore, we are unable to judge the existence and location of a probable singularity. However, for a given interval [\alpha, \beta), we can apply the method to the solution of the equation for some different values of m, obtain the approximate results at a great number of points, and then concentrate on the behavior of the solution. For instance, suppose that we have the initial conditions at t = 0 and would like to obtain the approximate solution in the interval [0, 1.2) (which includes the singular point). We have solved this problem by the proposed method for t \in [0, 1.2) and m = 8, 16, 32; the results are given in Figs. 9(a)–9(c). We see that the solution becomes unstable after t = 1 because, firstly, the scales of the solution on the right side of the point t = 1 are unusual (1 E+14, 1 E+11, 1 E+13) in comparison with the left side; secondly, the concavity (convexity) of the solution alternately changes as m increases (it is concave for m = 16 and convex for m = 8 and m = 32). Therefore, it is concluded that the solution may have a pole-type singularity at t = 1 and we must redefine the interval such that it does not include the singular point. We have solved the problem for t \in [0, 0.99) by the method; the results for m = 140, given in Fig. 9(d), illustrate the efficiency of the proposed method in giving a suitable and accurate approximate solution near the singularity.

Test problem 5. Numerical solution of a high-order inhomogeneous linear differential equation with both complex solution and complex coefficients

We show in this problem that the proposed method is applicable to differential equations with complex solution and/or complex coefficients. Let us consider an inhomogeneous linear differential equation of order 15:

\[
x^{(15)}(t) + (t^3 - j t^2 + 1)\, x^{(10)}(t) + (t + j) H_0^{(2)}(t)\, x^{(5)}(t) + j t \sin(t^2 + j t)\, x(t) = b(t),
\tag{94}
\]

where H_0^{(2)} is the Hankel function of the second kind of zero order and j is the imaginary unit, j^2 = −1. Assuming the complex exact solution x(t) = \exp(jt) for Eq. (94), the right side will be

\[
b(t) = -\exp(jt) \Big\{ t^3 - j t^2 - j \big[ H_0^{(2)}(t) + \sin(t^2 + j t) \big] t + H_0^{(2)}(t) + j + 1 \Big\}.
\tag{95}
\]

The proposed method is applied to Eq. (94) to obtain its approximate solution. The results are given in Figs. 10(a)–10(d), showing, respectively, the amplitude, real part, imaginary part, and angle of the solution x in the interval [3, 4).

Test problem 6. Numerical solution of a very high-order inhomogeneous linear differential equation

We survey here the efficiency of the proposed method for the numerical solution of very high-order differential equations. For this purpose, we consider the following inhomogeneous linear differential equation of order 35:

\[
x^{(35)}(t) + \tan\big( \sqrt{|t|} \big)\, x^{(20)}(t) + t^2 \sin(t^2)\, x^{(11)}(t) + \cos\big( \sqrt{t^4 + 1} \big)\, x(t) = b(t),
\tag{96}
\]

with exact solution x(t) = \exp(t) + \sin(t) and right side

\[
b(t) = \exp(t) \Big[ 1 + \tan\big( \sqrt{|t|} \big) + t^2 \sin(t^2) + \cos\big( \sqrt{t^4 + 1} \big) \Big]
     + \sin(t) \Big[ \tan\big( \sqrt{|t|} \big) + \cos\big( \sqrt{t^4 + 1} \big) \Big]
     - \cos(t) \Big[ 1 + t^2 \sin(t^2) \Big].
\tag{97}
\]

The numerical results calculated by the proposed method and the exact results for t \in [−5, −4) are given in Figs. 11(a) and 11(b), respectively, for m = 32 and m = 64.

Test problem 7. Numerical solution of a linear differential equation over a long time interval

To show the efficiency of the proposed method for solving problems over an arbitrary time interval, we revisit problem 6 here and obtain its numerical solution over a long time interval. For this purpose, we assume that t \in [0, 5). The numerical results obtained by the proposed method and the exact results for this interval are given in Figs. 12(a) and 12(b), respectively, for m = 64 and m = 128. The results show that the method has good efficiency in such cases too.

Test problem 8. Comparison with another method

To compare the method proposed in this article with another method, we solve here the following differential equation, which has previously been solved by a modified Adomian decomposition method in [14]:

\[
x''(t) + \frac{\cos(t)}{\sin(t)}\, x'(t) = -2 \cos(t),
\tag{98}
\]

with exact solution x(t) = \cos(t) and initial conditions x(0) = 1 and x'(0) = 0. The numerical results calculated by the proposed method and the exact results for t \in [0, 1) are given in Figs. 13(a) and 13(b), respectively, for m = 64 and m = 128. The modified Adomian decomposition method proposed in [14] gives the exact solution for this problem. However, the method of [14] has an important disadvantage: it has been proposed just for specific second-order ordinary differential equations, while the method proposed in this article is free of such an essential limitation. Moreover, another disadvantage of the Adomian decomposition method is that the solution's series may have a small convergence radius, so the truncated series solution may be inaccurate in some regions [23]. This greatly restricts the application area of the Adomian method.
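As an illustration of how this comparison could be reproduced with the sketch from section 5 (our own code, reusing the hypothetical `solve_linear_ode`; midpoint sampling keeps the coefficient cot(t) finite, since the first midpoint h/2 lies strictly inside (0, 1)):

```python
import numpy as np

# Test problem 8 fed to the solve_linear_ode sketch of section 5:
# x'' + (cos t / sin t) x' = -2 cos t, x(0) = 1, x'(0) = 0, exact x(t) = cos(t).
a0 = lambda t: np.zeros_like(t)          # a_0 = 0
a1 = lambda t: np.cos(t) / np.sin(t)     # a_1 = cot(t), finite at block midpoints
rhs = lambda t: -2.0 * np.cos(t)
t, X0 = solve_linear_ode([a0, a1], rhs, [1.0, 0.0], m=128)
print(np.max(np.abs(X0 - np.cos(t))))    # small over [0, 1)
```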

Figure 6: Numerical results for test problem 1 for m = 64. (a) Results for \ell = 0 and t ∈ [0, 1). (b) Results for \ell = 1 and t ∈ [1, 2). (c) Results for \ell = 2 and t ∈ [2, 3). (d) Results for \ell = 3 and t ∈ [3, 4).

Figure 7: Numerical results for test problem 2. (a) Results for \ell = 0, t ∈ [2, 3), and m = 1. (b) Results for \ell = 1, t ∈ [1, 2), and m = 64. (c) Results for \ell = 2, t ∈ [−1, 0), and m = 64. (d) Results for \ell = 5, t ∈ [0, 1), and m = 64.

Figure 8: Numerical results for test problem 3 for t ∈ [0, 1) and m = 64. (a) Results by considering form (78) and removing the pole-type singularity. (b) Results by considering form (56) and omitting the region (0.8 − ε, 0.8 + ε) for ε = 5.5512 E−17.

Figure 9: Numerical results for test problem 4. (a) Unstable convex approximate solution for t ∈ [0, 1.2) and m = 8. (b) Unstable concave approximate solution for t ∈ [0, 1.2) and m = 16. (c) Unstable convex approximate solution for t ∈ [0, 1.2) and m = 32. (d) A suitable approximate solution obtained by the proposed method for t ∈ [0, 0.99) and m = 140 together with the exact results.

Figure 10: Numerical results for test problem 5 for t ∈ [3, 4) and m = 64. (a) Amplitude. (b) Real part. (c) Imaginary part. (d) Angle.

Figure 11: Numerical results for test problem 6 for t ∈ [−5, −4). (a) Results for m = 32. (b) Results for m = 64.

Figure 12: Numerical results for test problem 7 for the long interval t ∈ [0, 5). (a) Results for m = 64. (b) Results for m = 128.

Figure 13: Numerical results for test problem 8 for t ∈ [0, 1). (a) Results for m = 64. (b) Results for m = 128.

Table 1: Running times of the proposed method's algorithm in seconds (software: MATLAB; processor: Intel, 2.5 GHz).

m      Test problem 1     Test problem 2     Test problem 3   Test problem 4   Test problem 5   Test problem 6
       (results for J3)   (results for P0)
2      0.0313             0.0313             0.0469           0.1250           0.0938           0.0781
4      0.0625             0.0469             0.0781           0.1250           0.0938           0.0781
8      0.0625             0.0469             0.0781           0.1250           0.0938           0.0938
16     0.0781             0.0469             0.0781           0.1250           0.1094           0.0938
32     0.0938             0.0469             0.1094           0.1406           0.1250           0.1719
64     0.1406             0.0781             0.1719           0.1719           0.2188           0.5469
128    0.2344             0.1094             0.2813           0.2500           0.5938           2.2656
256    0.4375             0.2344             0.6094           0.5313           3.2188           17.156
512    1.1094             0.7188             1.7188           1.8125           21.328           128.23

7. Running times and mean-absolute errors

For further evaluation of the proposed method, we present in this section the running times of its algorithm for solving the presented test problems and also the mean-absolute errors associated with the method. The results are given for m = 2^k, k = 1, 2, \dots, 9. The hardware configuration and the software used for this purpose are as mentioned in section 6.

7.1. Running times

Table 1 gives the running times (in seconds) of the proposed method's algorithm in solving most of the mentioned test problems. The results show that increasing the order n of the differential equation increases the required CPU time (with some exceptions). Of course, for a given order n, the CPU times may differ from equation to equation, e.g., test problems 1 and 2, both of order 2; this seems to be due to the different coefficients of the two differential equations. However, the CPU times are, in general, in a reasonable range, even for high values of m and n. Therefore, the associated programs may be run to achieve a desired accuracy (by increasing m) without worrying about long running times.

7.2. Mean-absolute errors, convergence rate

The mean-absolute errors associated with the proposed method are given here to illustrate its convergence rate. If, for a given m, we obtain the approximate solution by the method at N arbitrary points t_i, i = 1, 2, \dots, N, then we can consider the mean-absolute error

\[
E_{m,N} = \frac{1}{N} \sum_{i=1}^{N} \big| x(t_i) - x_m(t_i) \big|,
\tag{99}
\]

where E_{m,N} is the mean-absolute error, and x and x_m are, respectively, the exact and approximate solutions.
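In code terms, Eq. (99) is a one-line helper (our own, hypothetical name):

```python
import numpy as np

def mean_abs_error(exact_vals, approx_vals):
    """E_{m,N} of Eq. (99): the mean of |x(t_i) - x_m(t_i)| over the N points."""
    return float(np.mean(np.abs(np.asarray(exact_vals) - np.asarray(approx_vals))))
```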

We give the results for two groups of points. One group includes the mid-points of the subintervals [\alpha + ih, \alpha + (i+1)h), i = 0, 1, \dots, m − 1, in the main interval [\alpha, \beta); in this case, we set N = m. The other group consists of the ten points t_i = \alpha + i h', i = 0, 1, \dots, 9, in the interval [\alpha, \beta), where h' = (\beta − \alpha)/10; in this case, N = 10.

Table 2 shows the mean-absolute errors of the method in solving some of the presented test problems. The error decreases as m is increased, which quantitatively confirms that the proposed method is convergent. Hence, we can increase the degree of approximation until the results settle down to a desired accuracy. According to Table 2, the mean-absolute error is O(1/m) for the ten points. Moreover, the proposed method, independent of the value of m, gives the exact solution x(t) = P_0(t) = 1 for test problem 2. Due to the different behavior of the method near the singular point of the solution, the errors for test problem 4 are given separately in Table 3. The results show that a stable and convergent approximate solution is accessible near the singularity if m is high enough, and the mean-absolute error becomes smaller and smaller as m increases.

Finally, it should be mentioned that the proposed method is, in general, compatible with any arbitrary m; there is no essential limitation on m, and it just needs to be a positive integer. Choosing m = 2^k in this article is due to the fact that the error is O(1/m), and hence, when we repeatedly multiply m by 2, the related error is halved in each step. Such a procedure is very suitable for numerically showing the convergence rate of the proposed method and the rate of decrease of its error. However, to show the flexibility of the method, we chose m = 140 in the solution of test problem 4, to illustrate that the proposed method is accurate enough even for those values of m that are not of the form m = 2^k.

8. Conclusion

A numerical approach for solving arbitrary linear ordinary differential equations was proposed in this article by using the BPFs vector forms and operational matrix of integration. Some test problems were numerically solved by the method to illustrate its computational efficiency. The running times of the algorithm were in a reasonable range, and the mean-absolute errors quantitatively confirmed that the method is convergent. In general, the numerical results showed that the proposed method is efficient and applicable in solving various types of ordinary linear differential equations.

Table 2: Mean-absolute errors associated with the proposed method, considering the approximate results at the mid-points of the subintervals and at ten points in the main interval.

       Test problem 1 (J3)       Test problem 2 (P0)       Test problem 3            Test problem 5            Test problem 6
m      Mid-points   Ten points   Mid-points   Ten points   Mid-points   Ten points   Mid-points   Ten points   Mid-points   Ten points
2      5.2 E−3      1.6 E−2      0            0            8.9 E−2      1.8 E−1      3.2 E−2      1.3 E−1      2.5 E−2      3.8 E−2
4      1.3 E−3      8.3 E−3      0            0            5.5 E−2      1.0 E−1      8.3 E−3      6.5 E−2      6.2 E−3      1.8 E−2
8      3.3 E−4      4.2 E−3      0            0            1.6 E−2      4.8 E−2      2.1 E−3      3.2 E−2      1.6 E−3      9.3 E−3
16     8.3 E−5      2.1 E−3      0            0            3.8 E−3      2.6 E−2      5.2 E−4      1.6 E−2      3.9 E−4      4.1 E−3
32     2.1 E−5      1.1 E−3      0            0            9.4 E−4      1.3 E−2      1.3 E−4      8.1 E−3      9.8 E−5      2.1 E−3
64     5.2 E−6      5.4 E−4      0            0            2.4 E−4      6.5 E−3      3.2 E−5      4.1 E−3      2.4 E−5      1.0 E−3
128    1.3 E−6      2.4 E−4      0            0            6.4 E−5      3.1 E−3      8.2 E−6      2.0 E−3      6.1 E−6      5.4 E−4
256    3.2 E−7      1.4 E−4      0            0            1.5 E−5      1.6 E−3      2.0 E−6      1.0 E−3      1.5 E−6      2.5 E−4
512    8.1 E−8      6.8 E−5      0            0            3.7 E−6      7.9 E−4      5.1 E−7      5.1 E−4      3.8 E−7      1.3 E−4

Table 3: Errors of the proposed method in the solution of test problem 4 at the ten points, together with the related mean-absolute errors.

m     t = 0     t = 0.1   t = 0.2   t = 0.3   t = 0.4   t = 0.5   t = 0.6   t = 0.7   t = 0.8   t = 0.9   Mean-absolute error
16    3.3 E−2   7.2 E−3   3.0 E−2   3.8 E−2   1.6 E−2   1.3 E−1   3.7 E−2   2.6 E−1   3.1 E−1   5.0 E0    5.9 E−1
32    1.6 E−2   1.1 E−2   2.4 E−3   1.1 E−2   3.4 E−2   4.8 E−2   3.1 E−2   1.9 E−2   1.9 E−1   2.8 E0    3.1 E−1
64    7.9 E−3   7.8 E−4   1.0 E−2   3.7 E−3   1.5 E−2   1.2 E−2   2.5 E−2   5.0 E−2   6.1 E−2   7.3 E−1   9.2 E−2
128   3.9 E−3   4.1 E−3   4.3 E−3   4.5 E−3   4.5 E−5   4.2 E−3   3.0 E−3   1.1 E−3   1.8 E−2   1.5 E−1   2.0 E−2
256   1.9 E−3   1.7 E−3   1.3 E−3   5.8 E−4   7.4 E−4   3.3 E−3   8.6 E−3   2.2 E−2   3.4 E−2   7.6 E−2   1.5 E−2
512   1.0 E−3   5.2 E−4   2.0 E−4   1.4 E−3   2.0 E−3   6.5 E−4   2.4 E−3   1.0 E−2   1.1 E−2   1.1 E−2   4.1 E−3

References

[1] S. Hatamzadeh-Varmazyar and Z. Masouri, Numerical method for analysis of one- and two-dimensional electromagnetic scattering based on using linear Fredholm integral equation models, Mathematical and Computer Modelling, 54 (2011) 2199–2210.
[2] S. Hatamzadeh-Varmazyar and Z. Masouri, Numerical expansion-iterative method for analysis of integral equation models arising in one- and two-dimensional electromagnetic scattering, Engineering Analysis with Boundary Elements, 36 (2012) 416–422.
[3] Z. Masouri, E. Babolian, and S. Hatamzadeh-Varmazyar, An expansion-iterative method for numerically solving Volterra integral equation of the first kind, Computers & Mathematics with Applications, 59 (2010) 1491–1499.
[4] S. Hatamzadeh-Varmazyar, M. Naser-Moghadasi, and Z. Masouri, A moment method simulation of electromagnetic scattering from conducting bodies, Progress In Electromagnetics Research, PIER 81 (2008) 99–119.
[5] S. Hatamzadeh-Varmazyar, M. Naser-Moghadasi, E. Babolian, and Z. Masouri, Numerical approach to survey the problem of electromagnetic scattering from resistive strips based on using a set of orthogonal basis functions, Progress In Electromagnetics Research, PIER 81 (2008) 393–412.
[6] E. Babolian and Z. Masouri, Direct method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions, Journal of Computational and Applied Mathematics, 220 (2008) 51–57.
[7] E. Babolian and A. Salimi Shamloo, Numerical solution of Volterra integral and integro-differential equations of convolution type by using operational matrices of piecewise constant orthogonal functions, Journal of Computational and Applied Mathematics, 214 (2008) 495–508.
[8] A. Deb, G. Sarkar, M. Bhattacharjee, and S.K. Sen, All-integrator approach to linear SISO control system analysis using block pulse functions (BPF), Journal of the Franklin Institute, 334B(2) (1997) 319–335.
[9] A. Deb, G. Sarkar, and S.K. Sen, Block pulse functions, the most fundamental of all piecewise constant basis functions, International Journal of Systems Science, 25(2) (1994) 351–363.
[10] Z.H. Jiang and W. Schaufelberger, Block Pulse Functions and Their Applications in Control Systems, Lecture Notes in Control and Information Sciences, vol. 179, Springer-Verlag, Berlin, 1992.
[11] C.P. Rao, Piecewise Constant Orthogonal Functions and Their Application to Systems and Control, Springer, Berlin, 1983.
[12] P. Sannuti, Analysis and synthesis of dynamic systems via block-pulse functions, Proceedings of the IEE, 124(6) (1977) 569–571.
[13] C.-H. Wang, Generalized block-pulse operational matrices and their applications to operational calculus, International Journal of Control, 36 (1982) 67–76.
[14] M.M. Hosseini and H. Nasabzadeh, Modified Adomian decomposition method for specific second order ordinary differential equations, Applied Mathematics and Computation, 186 (2007) 117–123.
[15] A. Rafiq, M. Ahmed, and S. Hussain, A general approach to specific second order ordinary differential equations using homotopy perturbation method, Physics Letters A, 372 (2008) 4973–4976.
[16] E. Babolian and F. Fattahzadeh, Numerical solution of differential equations by using Chebyshev wavelet operational matrix of integration, Applied Mathematics and Computation, 188 (2007) 417–426.
[17] M.T. Rashed, Lagrange interpolation to compute the numerical solutions of differential, integral and integro-differential equations, Applied Mathematics and Computation, 151 (2004) 869–878.
[18] J. Jia and T. Sogabe, On particular solution of ordinary differential equations with constant coefficients, Applied Mathematics and Computation, 219 (2013) 6761–6767.
[19] S. Sedaghat, Y. Ordokhani, and M. Dehghan, Numerical solution of the delay differential equations of pantograph type via Chebyshev polynomials, Communications in Nonlinear Science and Numerical Simulation, 17 (2012) 4815–4830.
[20] C.A. Balanis, Antenna Theory: Analysis and Design, 2nd edition, Wiley, New York, 1996.
[21] R.F. Harrington, Time-Harmonic Electromagnetic Fields, IEEE Press Series on Electromagnetic Wave Theory, 2nd edition, Wiley-IEEE Press, New York, 2001.
[22] C.A. Balanis, Advanced Engineering Electromagnetics, Wiley, New York, 1989.
[23] R. Caponetto, L. Fortuna, and S. Fazzino, Simulating fractional order systems using multistage Adomian decomposition method, 18th International Federation of Automatic Control (IFAC) World Congress, 13990–13994, August 28–September 2, 2011, Milan, Italy.