Accepted Manuscript A semi-analytical approach to solve integro-differential equations S. Kheybari, M.T. Darvishi, A.M. Wazwaz PII: DOI: Reference:
S0377-0427(16)30544-1 http://dx.doi.org/10.1016/j.cam.2016.11.011 CAM 10884
To appear in:
Journal of Computational and Applied Mathematics
Received date: 26 July 2016 Revised date: 3 September 2016 Please cite this article as: S. Kheybari, M.T. Darvishi, A.M. Wazwaz, A semi-analytical approach to solve integro-differential equations, Journal of Computational and Applied Mathematics (2016), http://dx.doi.org/10.1016/j.cam.2016.11.011 This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Manuscript Click here to view linked References
A semi-analytical approach to solve integro-differential equations

S. Kheybari^{a,∗}, M. T. Darvishi^{a}, A. M. Wazwaz^{b}

^{a} Department of Mathematics, Razi University, Kermanshah 67149, Iran
^{b} Department of Mathematics, Saint Xavier University, Chicago, IL 60655, USA
Abstract

In this work, we present a semi-analytical method for solving integro-differential equations under multi-point or mixed boundary conditions. The proposed method solves linear and nonlinear Fredholm–Volterra integro-differential equations. A convergence analysis of the proposed method is given. Numerical examples are worked out to demonstrate the main results, and suitable graphs are provided to confirm the efficiency and accuracy of the proposed scheme. We show that a high level of accuracy is achieved with only a few approximating terms, and that increasing the number of approximating terms yields a significant decrease in the approximation error. The proposed method is useful, reliable, and flexible for solving different kinds of integro-differential equations.

Keywords: Integro-differential equation; Multi-point boundary condition; Residual function; Error function
1. Introduction

A combination of integral and differential equations is called an integro-differential equation (IDE). IDEs are important mathematical models of many real-life problems, and they arise in many applied areas including physics, mechanics, engineering, chemistry, economics, electronics, electrostatics, and potential theory (for more details, see [1, 2, 3, 4, 5, 6]). There are two original branches of these
∗ Corresponding author. Tel.: +98 914 140 6445; fax: +98 83 3427 4569.
Email address: [email protected] (S. Kheybari)
equations: Fredholm equations and Volterra equations. The solution of IDEs plays a major role in science and engineering. Generally speaking, however, analytical solutions of most IDEs are not easy to obtain, so numerical methods are needed. Many numerical methods have been used, such as the Legendre wavelets method [7], the Tau method [8, 9, 10, 11, 12], the Haar functions method [13, 14], the finite difference method [15], the CAS wavelet method [16], the homotopy perturbation method [17], the sine-cosine wavelet method [18, 19], the Chebyshev and Taylor collocation methods [6, 20], the hybrid function method [21], the Adomian decomposition method [22] and the Bessel collocation method [23]. We refer the reader to [24, 25, 26, 27, 28] for recent works on this subject.

This paper concerns two special types of IDEs, namely those with multi-point (m-point) and with mixed boundary conditions. We present a semi-analytical approach to solve m-point initial or boundary value problems for IDEs; the same scheme also applies to IDEs with mixed boundary conditions. The proposed method is based on finding an optimal linear combination of particular solutions of auxiliary differential equations defined for the original problem. To evaluate the unknown coefficients of this combination, we construct a residual function by substituting the approximate solution into the original problem, and then determine the coefficients so that weighted integrals of the residual function vanish on the given interval. The approximate solution obtained by our method requires no linearization or discretization, for both linear and nonlinear integro-differential equations. The numerical results of this method are compared with those of reference methods on several examples. The main attractive advantages of this computational method are its reliability and applicability.

This paper is organized as follows. In the next section, we explain the new algorithm for solving IDEs. Minimization of the residual function is presented in Section 3, and Section 4 concerns the error analysis. Applications of the new method to some test problems are presented in Section 5. Finally, the paper is concluded in Section 6.

2. Main algorithm

In this paper, we consider the following nonlinear integro-differential equation

u^{(n)}(x) = F(x, u(x), …, u^{(n−1)}(x), (κ_1 u)(x), (κ_2 u)(x)) + f(x),   x ∈ [a, b],   (1)
subject to the boundary conditions

∑_{j=1}^{n_k} a_{jk} u^{(r_{jk})}(ξ_{jk}) = d_k,   k = 1, …, n,   ξ_{jk} ∈ [a, b],   r_{jk} ∈ {0, 1, …, n−1},   (2)

where d_k and a_{jk} are real constants, and

(κ_1 u)(x) = ∫_a^b k_1(x, t) u(t) dt,    (κ_2 u)(x) = ∫_a^x k_2(x, t) u(t) dt,

where k_1 and k_2 are continuous functions on [a, b] × [a, b].
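For readers who wish to experiment numerically, the two integral operators are easy to approximate by standard quadrature. The following Python sketch is an illustration only: the interval, the kernels k_1, k_2 and the trial function u are hypothetical choices, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0                      # hypothetical interval [a, b]
k1 = lambda x, t: np.exp(x * t)      # hypothetical Fredholm kernel k1(x, t)
k2 = lambda x, t: x - t              # hypothetical Volterra kernel k2(x, t)
u  = lambda t: np.sin(t)             # hypothetical trial function u(t)

def kappa1(x):
    # (kappa_1 u)(x) = int_a^b k1(x, t) u(t) dt   (Fredholm part)
    val, _ = quad(lambda t: k1(x, t) * u(t), a, b)
    return val

def kappa2(x):
    # (kappa_2 u)(x) = int_a^x k2(x, t) u(t) dt   (Volterra part)
    val, _ = quad(lambda t: k2(x, t) * u(t), a, x)
    return val

print(kappa1(0.5), kappa2(0.5))
```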
To solve (1), we suppose that its approximate solution has the form

u_ℓ(x) = ũ(x) = ψ(x) + ∑_{m=0}^{ℓ} α_m φ_m(x),   (3)

where ψ(x) and {φ_m(x)}_{m=0}^{ℓ} must be obtained in such a way that ũ satisfies the boundary conditions (2); they are defined by (ℓ + 2) auxiliary differential equations. To ensure that (3) satisfies the boundary conditions (2), we must have

∑_{j=1}^{n_k} a_{jk} ψ^{(r_{jk})}(ξ_{jk}) + ∑_{m=0}^{ℓ} α_m ∑_{j=1}^{n_k} a_{jk} φ_m^{(r_{jk})}(ξ_{jk}) = d_k,   k = 1, …, n.
If for k = 1, …, n and m = 0, 1, …, ℓ we have the equalities

∑_{j=1}^{n_k} a_{jk} ψ^{(r_{jk})}(ξ_{jk}) = d_k,
∑_{j=1}^{n_k} a_{jk} φ_m^{(r_{jk})}(ξ_{jk}) = 0,

then ũ(x) satisfies the boundary conditions (2); in fact, these are sufficient conditions ensuring that ũ(x) satisfies (2). Suppose that V = {P_0(x), P_1(x), …, P_ℓ(x)} is a set of basis polynomials on [a, b], where P_k(x) is a polynomial of degree k, for k = 0, 1, …, ℓ. We require ψ(x) to satisfy the equations

ψ^{(n)}(x) = f(x),   a ≤ x ≤ b,   (4)
∑_{j=1}^{n_k} a_{jk} ψ^{(r_{jk})}(ξ_{jk}) = d_k,   a ≤ ξ_{jk} ≤ b,   k = 1, …, n,   (5)
hence, if we express ψ(x) as

ψ(x) = ∫_a^x ((x − t)^{n−1} / (n − 1)!) f(t) dt + ∑_{i=0}^{n−1} c_i x^i,   (6)

then ψ(x) satisfies (4). Taking the r-th derivative of (6), we have

ψ^{(r)}(x) = ∫_a^x ((x − t)^{n−r−1} / (n − r − 1)!) f(t) dt + ∑_{i=r}^{n−1} c_i (i! / (i − r)!) x^{i−r},   r = 0, 1, …, n − 1,
ψ^{(n)}(x) = f(x),   r = n.   (7)
Substituting (7) into the boundary conditions (5), for k = 1, …, n, gives

∑_{j=1}^{n_k} a_{jk} [ ∫_a^{ξ_{jk}} ((ξ_{jk} − t)^{n−r_{jk}−1} / (n − r_{jk} − 1)!) f(t) dt + ∑_{i=r_{jk}}^{n−1} c_i (i! / (i − r_{jk})!) ξ_{jk}^{i−r_{jk}} ] = d_k.

For k = 1, …, n and j = 1, …, n_k, set

L_{jk}(f) = a_{jk} ∫_a^{ξ_{jk}} ((ξ_{jk} − t)^{n−r_{jk}−1} / (n − r_{jk} − 1)!) f(t) dt,

then the following linear system with respect to {c_i}_{i=0}^{n−1} is obtained:

∑_{j=1}^{n_k} ∑_{i=r_{jk}}^{n−1} c_i a_{jk} (i! / (i − r_{jk})!) ξ_{jk}^{i−r_{jk}} = d_k − ∑_{j=1}^{n_k} L_{jk}(f),   k = 1, …, n.   (8)
Note that when L_{jk}(f) cannot be computed analytically, it can be evaluated by numerical quadrature for any value of ξ_{jk}. The equations in (8) ensure that the boundary conditions (5) are satisfied, and the unknown coefficients {c_i}_{i=0}^{n−1} are obtained by solving the linear system (8).
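As an illustration of how the system (8) can be assembled in practice, the following Python sketch computes ψ(x) for a hypothetical second-order case (n = 2, f(x) = e^x, boundary values u(0) = 1 and u(1) = 0, none of which come from the paper); the functionals L_{jk}(f) are evaluated by numerical quadrature, as suggested above.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# Hypothetical data for illustration: n = 2, interval [0, 1], f(x) = exp(x),
# and two boundary conditions u(0) = 1, u(1) = 0 (so a_jk = 1, r_jk = 0).
n, a, b = 2, 0.0, 1.0
f = np.exp
# each condition k is a list of terms (a_jk, r_jk, xi_jk) plus the value d_k
conditions = [([(1.0, 0, 0.0)], 1.0),
              ([(1.0, 0, 1.0)], 0.0)]

def L(f, a_jk, r_jk, xi_jk):
    # L_jk(f) = a_jk * int_a^{xi_jk} (xi_jk - t)^(n-r_jk-1)/(n-r_jk-1)! f(t) dt
    integrand = lambda t: (xi_jk - t)**(n - r_jk - 1) / factorial(n - r_jk - 1) * f(t)
    return a_jk * quad(integrand, a, xi_jk)[0]

# assemble and solve the linear system (8) for the coefficients c_0, ..., c_{n-1}
A = np.zeros((n, n)); rhs = np.zeros(n)
for k, (terms, d_k) in enumerate(conditions):
    rhs[k] = d_k - sum(L(f, a_jk, r, xi) for a_jk, r, xi in terms)
    for a_jk, r, xi in terms:
        for i in range(r, n):
            A[k, i] += a_jk * factorial(i) / factorial(i - r) * xi**(i - r)
c = np.linalg.solve(A, rhs)

def psi(x):
    # psi(x) = int_a^x (x - t)^(n-1)/(n-1)! f(t) dt + sum_i c_i x^i   (Eq. (6))
    integral = quad(lambda t: (x - t)**(n - 1) / factorial(n - 1) * f(t), a, x)[0]
    return integral + sum(c[i] * x**i for i in range(n))

print(psi(0.0), psi(1.0))   # should reproduce d_1 = 1 and d_2 = 0
```

The same assembly, with P_m in place of f, yields the systems (15) for the functions φ_m introduced below.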
Also, we require {φ_m(x)}_{m=0}^{ℓ} to satisfy

φ_m^{(n)}(x) = P_m(x),   a ≤ x ≤ b,   m = 0, 1, …, ℓ,   (9)
∑_{j=1}^{n_k} a_{jk} φ_m^{(r_{jk})}(ξ_{jk}) = 0,   a ≤ ξ_{jk} ≤ b,   k = 1, …, n,   (10)

hence, if we define φ_m(x) as

φ_m(x) = ∫_a^x ((x − t)^{n−1} / (n − 1)!) P_m(t) dt + ∑_{i=0}^{n−1} c_{m,i} x^i,   (11)
then, for m = 0, 1, …, ℓ, φ_m(x) satisfies (9). Taking the r-th derivative of (11), we have

φ_m^{(r)}(x) = ∫_a^x ((x − t)^{n−r−1} / (n − r − 1)!) P_m(t) dt + ∑_{i=r}^{n−1} c_{m,i} (i! / (i − r)!) x^{i−r},   r = 0, …, n − 1,
φ_m^{(n)}(x) = P_m(x),   r = n.   (12)
Substituting (12) into the homogeneous boundary conditions (10), for k = 1, …, n, yields

∑_{j=1}^{n_k} a_{jk} [ ∫_a^{ξ_{jk}} ((ξ_{jk} − t)^{n−r_{jk}−1} / (n − r_{jk} − 1)!) P_m(t) dt + ∑_{i=r_{jk}}^{n−1} c_{m,i} (i! / (i − r_{jk})!) ξ_{jk}^{i−r_{jk}} ] = 0.   (13)

For k = 1, …, n and j = 1, …, n_k, let

L_{jk}(P_m) = a_{jk} ∫_a^{ξ_{jk}} ((ξ_{jk} − t)^{n−r_{jk}−1} / (n − r_{jk} − 1)!) P_m(t) dt,   (14)

then we can rewrite (13) as the following linear system with respect to the unknowns {c_{m,i}}_{i=0}^{n−1}:

∑_{j=1}^{n_k} ∑_{i=r_{jk}}^{n−1} c_{m,i} a_{jk} (i! / (i − r_{jk})!) ξ_{jk}^{i−r_{jk}} = − ∑_{j=1}^{n_k} L_{jk}(P_m),   k = 1, …, n.   (15)
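Because P_m(x) is a polynomial, the particular integral in (11) and the quantities L_{jk}(P_m) in (14) can be obtained exactly with a computer algebra system. The following minimal sympy sketch does this for a hypothetical two-point second-order case (not the setting of the paper), with P_m taken as Chebyshev polynomials.

```python
import sympy as sp

x, t = sp.symbols('x t')
n, a, b = 2, 0, 1                    # hypothetical: n = 2 on [0, 1]
ell = 3                              # highest basis index

for m in range(ell + 1):
    Pm = sp.chebyshevt(m, t)         # P_m = Chebyshev polynomial T_m
    # particular part of (11): int_a^x (x - t)^(n-1)/(n-1)! P_m(t) dt  (exact)
    part = sp.integrate((x - t)**(n - 1) / sp.factorial(n - 1) * Pm, (t, a, x))
    c0, c1 = sp.symbols(f'c{m}_0 c{m}_1')
    phi = part + c0 + c1 * x
    # impose the homogeneous conditions (10): phi_m(0) = 0, phi_m(1) = 0
    sol = sp.solve([phi.subs(x, 0), phi.subs(x, 1)], [c0, c1])
    phi = sp.expand(phi.subs(sol))
    print(f'phi_{m}(x) =', phi)
```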
For each m = 0, 1, …, ℓ, the equations in (15) ensure that the homogeneous boundary conditions (10) hold, and their solution gives the unknown coefficients {c_{m,i}}_{i=0}^{n−1}. The integral in (14) can always be evaluated analytically, because P_m(x) is a polynomial. Substituting (6) and (11) into (3) yields

ũ(x) = ∫_a^x ((x − t)^{n−1} / (n − 1)!) [ f(t) + ∑_{m=0}^{ℓ} α_m P_m(t) ] dt + ∑_{i=0}^{n−1} [ c_i + ∑_{m=0}^{ℓ} c_{m,i} α_m ] x^i.   (16)

From (7) and (12), taking the r-th derivative of (16) gives

ũ^{(r)}(x) = ∫_a^x ((x − t)^{n−r−1} / (n − r − 1)!) [ f(t) + ∑_{m=0}^{ℓ} α_m P_m(t) ] dt + ∑_{i=r}^{n−1} [ c_i + ∑_{m=0}^{ℓ} c_{m,i} α_m ] (i! / (i − r)!) x^{i−r},   r = 0, …, n − 1,
ũ^{(n)}(x) = f(x) + ∑_{m=0}^{ℓ} α_m P_m(x),   r = n.   (17)

If ũ(x) were the exact solution of (1), we would have

ũ^{(n)}(x) = F(x, ũ(x), ũ′(x), …, ũ^{(n−1)}(x), (κ_1 ũ)(x), (κ_2 ũ)(x)) + f(x),   x ∈ [a, b],   (18)
hence, from (17), we can rewrite (18) equivalently as

∑_{m=0}^{ℓ} α_m P_m(x) − F(x, ũ(x), ũ′(x), …, ũ^{(n−1)}(x), (κ_1 ũ)(x), (κ_2 ũ)(x)) = 0,   x ∈ [a, b],   (19)
but since ũ(x) is only an approximate solution of (1), relation (19) does not, in general, hold exactly. After substituting into (16) the computed values of {c_i}_{i=0}^{n−1} and {c_{m,i}}_{i=0}^{n−1}, for m = 0, 1, …, ℓ, obtained by solving (8) and (15), we define the residual function

R_ℓ(x, α_0, …, α_ℓ) = ∑_{m=0}^{ℓ} α_m P_m(x) − F(x, ũ(x), ũ′(x), …, ũ^{(n−1)}(x), (κ_1 ũ)(x), (κ_2 ũ)(x)).   (20)
In the next section, we obtain {α_m}_{m=0}^{ℓ} in such a way that the residual function is minimized or forced to be zero in an average sense over the interval [a, b].

3. Minimization of the residual function

In this section, we present a minimization algorithm for the residual function (20). To do this, we obtain the unknown coefficients {α_m}_{m=0}^{ℓ} of (20) in such a manner that the following weighted integrals of the residual function are equal to zero:

E_j(α_0, …, α_ℓ) = ∫_a^b w_j R_ℓ(x, α_0, …, α_ℓ) dx,   j = 0, 1, …, ℓ,   (21)

where {w_j}_{j=0}^{ℓ} are weight functions.
In this algorithm, the weight functions are taken to be shifted Dirac delta functions,

w_j = δ(x − x_j) = { 1, x = x_j;  0, x ≠ x_j },

which, for j = 0, 1, …, ℓ, have the property

E_j(α_0, …, α_ℓ) = ∫_a^b w_j R_ℓ(x, α_0, …, α_ℓ) dx = R_ℓ(x_j, α_0, …, α_ℓ) = 0.   (22)

Hence the residual function is forced to be zero at ℓ + 1 specified collocation points {x_j}_{j=0}^{ℓ}. As ℓ increases, the approximate solutions improve, because the residual function R_ℓ(x, α_0, …, α_ℓ) vanishes at more points and presumably approaches zero over the whole interval [a, b]. There are different options for the collocation points; in this paper, we use the Chebyshev collocation points x_j = cos(jπ/ℓ) for j = 0, 1, …, ℓ. Solving the algebraic system (22) yields the unknown coefficients {α_m}_{m=0}^{ℓ}.
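To make the collocation step concrete, the sketch below sets up and solves the system (22) for a hypothetical first-order linear Volterra problem u′(x) = u(x) − ∫_0^x u(t) dt + f(x), u(0) = 1, with f(x) = e^x − 1 and exact solution u(x) = e^x; none of this data comes from the paper, and the Chebyshev points are simply mapped from [−1, 1] to [0, 1]. For a nonlinear F the same residual can be handed to a nonlinear solver such as scipy.optimize.fsolve, as done here.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.integrate import quad
from scipy.optimize import fsolve

# Hypothetical test problem (not taken from the paper):
#   u'(x) = u(x) - int_0^x u(t) dt + f(x),  u(0) = 1,  f(x) = exp(x) - 1.
# Here n = 1, psi' = f with psi(0) = 1, phi_m' = T_m with phi_m(0) = 0, so the
# residual (20) reads R_ell(x) = sum_m alpha_m T_m(x) - [u~(x) - int_0^x u~(t) dt].
f = lambda x: np.exp(x) - 1.0
ell = 4                                        # number of basis terms minus one

def T(m, x):                                   # Chebyshev polynomial T_m(x)
    return C.chebval(x, [0.0] * m + [1.0])

def psi(x):
    return quad(f, 0.0, x)[0] + 1.0

def phi(m, x):
    return quad(lambda t: T(m, t), 0.0, x)[0]

def u_tilde(x, alpha):                         # approximate solution (3)
    return psi(x) + sum(a * phi(m, x) for m, a in enumerate(alpha))

def residual(alpha):
    # enforce R_ell(x_j) = 0 at Chebyshev points mapped from [-1, 1] to [0, 1]
    xj = (np.cos(np.arange(ell + 1) * np.pi / ell) + 1.0) / 2.0
    out = []
    for x in xj:
        volterra = quad(lambda t: u_tilde(t, alpha), 0.0, x)[0]
        out.append(sum(a * T(m, x) for m, a in enumerate(alpha))
                   - (u_tilde(x, alpha) - volterra))
    return out

alpha = fsolve(residual, np.zeros(ell + 1))
errs = [abs(u_tilde(x, alpha) - np.exp(x)) for x in np.linspace(0.0, 1.0, 11)]
print('max pointwise error:', max(errs))
```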
4. Error analysis

In this section, the convergence of our method for solving IDEs is analyzed.

Theorem 1. Suppose that F(x, u(x), u′(x), …, u^{(n−1)}(x), (κ_1 u)(x), (κ_2 u)(x)) is a piecewise continuous function on [−1, 1] and that V is the space of orthogonal Chebyshev polynomials of the first kind. Then for any given positive ε there is an integer L such that ℓ ≥ L implies |R_ℓ(x)| < ε, where R_ℓ(x) is the residual function (20).

Proof. From (20) we have

R_ℓ(x, α_0, α_1, …, α_ℓ) = ∑_{m=0}^{ℓ} α_m T_m(x) − F(x, ũ(x), ũ′(x), …, ũ^{(n−1)}(x), (κ_1 ũ)(x), (κ_2 ũ)(x)),

where {α_m}_{m=0}^{ℓ} are unknown coefficients and T_m(x) = cos(m arccos(x)). Setting G(x, α_0, α_1, …, α_ℓ) = F(x, ũ(x), ũ′(x), …, ũ^{(n−1)}(x), (κ_1 ũ)(x), (κ_2 ũ)(x)) yields

R_ℓ(x, α_0, α_1, …, α_ℓ) = ∑_{m=0}^{ℓ} α_m T_m(x) − G(x, α_0, α_1, …, α_ℓ).   (23)
Since F is piecewise continuous and {α_m}_{m=0}^{ℓ} are constants, G is a piecewise continuous function of x on the interval [−1, 1]. Hence we can consider the Chebyshev expansion of G,

G(x, α_0, α_1, …, α_ℓ) = β_0/2 + ∑_{m=1}^{∞} β_m T_m(x),   (24)

where the β_m, for m = 0, 1, …, are functions of (α_0, α_1, …, α_ℓ) given by

β_m = (2/π) ∫_{−1}^{1} G(x, α_0, α_1, …, α_ℓ) T_m(x) / √(1 − x²) dx = (2/π) ∫_0^π G(cos(θ), α_0, α_1, …, α_ℓ) cos(mθ) dθ.   (25)

Therefore, from (23)–(25) we have

|R_ℓ(x, α_0, α_1, …, α_ℓ)| = | ∑_{m=0}^{ℓ} α_m T_m(x) − β_0/2 − ∑_{m=1}^{∞} β_m T_m(x) |.

Next, we write ∑_{m=0}^{ℓ} α_m T_m(x) = ∑_{m=0}^{∞} α_m T_m(x), where α_{ℓ+1} = α_{ℓ+2} = ⋯ = 0. By convergence of the Chebyshev expansion we have

|R_ℓ(x, α_0, α_1, …, α_ℓ)| = | (α_0 − β_0/2) + ∑_{m=1}^{ℓ} (α_m − β_m) T_m(x) + ∑_{m=ℓ+1}^{∞} (α_m − β_m) T_m(x) |.
If we set

α_0 = β_0/2,   and   α_m = β_m   for m = 1, …, ℓ,   (26)

then we can compute the values of {α_m}_{m=0}^{ℓ} by solving the system (26), and since α_m = 0 for m > ℓ, we obtain

|R_ℓ(x)| = | ∑_{m=ℓ+1}^{∞} (α_m − β_m) T_m(x) | = | ∑_{m=ℓ+1}^{∞} β_m T_m(x) | ≃ |β_{ℓ+1} T_{ℓ+1}(x)|.

Hence convergence of the Chebyshev expansion of G implies that for any positive ε there exists a large enough positive integer L such that, for any ℓ ≥ L, |β_{ℓ+1} T_{ℓ+1}(x)| < ε. This completes the proof.

Theorem 2. If G(x, ζ_0, …, ζ_ℓ) = F(x, ũ(x), ũ′(x), …, ũ^{(n−1)}(x), (κ_1 ũ)(x), (κ_2 ũ)(x)), where ũ(x) = ψ(x) + ∑_{j=0}^{ℓ} ζ_j φ_j(x), is analytic at all points inside and on a circle C of radius r with center at x_0 ∈ (0, T), and if x is any interior point of C, then we can obtain {ζ_j}_{j=0}^{ℓ} in such a way that |R_ℓ| → 0 as ℓ → ∞.
Proof. From (20) and the Taylor series expansion of the polynomials P_j(x) we have

R_ℓ(x, ζ_0, …, ζ_ℓ) = ∑_{j=0}^{ℓ} ζ_j P_j(x) − G(x, ζ_0, …, ζ_ℓ)
                    = ∑_{j=0}^{ℓ} ζ_j ∑_{k=0}^{j} ((x − x_0)^k / k!) P_j^{(k)}(x_0) − G(x, ζ_0, …, ζ_ℓ).   (27)
By Cauchy's integral formula, we have

G(x, ζ_0, …, ζ_ℓ) = (1 / 2πi) ∮_C G(z, ζ_0, …, ζ_ℓ) / (z − x) dz,   i² = −1.

We can write

G(z, ζ_0, …, ζ_ℓ) / (z − x) = (G(z, ζ_0, …, ζ_ℓ) / (z − x_0)) [ 1 + (x − x_0)/(z − x_0) + (x − x_0)²/(z − x_0)² + ⋯ + (x − x_0)^ℓ/(z − x_0)^ℓ + (x − x_0)^{ℓ+1} / ((z − x_0)^ℓ (z − x)) ].

Therefore
G(x, ζ_0, …, ζ_ℓ) = (1 / 2πi) ∑_{k=0}^{ℓ} (x − x_0)^k ∮_C G(z, ζ_0, …, ζ_ℓ) / (z − x_0)^{k+1} dz + ℛ_ℓ
                  = ∑_{k=0}^{ℓ} ((x − x_0)^k / k!) (d^k G / dx^k)(x_0, ζ_0, …, ζ_ℓ) + ℛ_ℓ,   (28)

where the remainder term is

ℛ_ℓ = ((x − x_0)^{ℓ+1} / 2πi) ∮_C G(z, ζ_0, …, ζ_ℓ) / ((z − x_0)^{ℓ+1} (z − x)) dz.

By substituting (28) into (27),

R_ℓ(x, ζ_0, …, ζ_ℓ) = ∑_{k=0}^{ℓ} ((x − x_0)^k / k!) [ ∑_{j=k}^{ℓ} ζ_j P_j^{(k)}(x_0) − (d^k G / dx^k)(x_0, ζ_0, …, ζ_ℓ) ] − ℛ_ℓ,

whence

|R_ℓ(x, ζ_0, …, ζ_ℓ)| ≤ | ∑_{k=0}^{ℓ} ((x − x_0)^k / k!) [ ∑_{j=k}^{ℓ} ζ_j P_j^{(k)}(x_0) − (d^k G / dx^k)(x_0, ζ_0, …, ζ_ℓ) ] | + |ℛ_ℓ|.   (29)
To make the first term on the right-hand side of (29) vanish, we set

∑_{j=k}^{ℓ} ζ_j P_j^{(k)}(x_0) = (d^k G / dx^k)(x_0, ζ_0, …, ζ_ℓ),   k = 0, 1, …, ℓ.   (30)

This reduces (29) to |R_ℓ(x, ζ_0, …, ζ_ℓ)| ≤ |ℛ_ℓ|. From the system of equations (30) we can obtain ζ_j = ζ̃_j for j = 0, 1, …, ℓ. Now define G̃(z) = G(z, ζ̃_0, …, ζ̃_ℓ). There is a constant M such that, for any z on C,

| G̃(z) / (z − x) | = | G(z, ζ̃_0, …, ζ̃_ℓ) / (z − x) | ≤ M,   |z − x_0| = r.

Therefore

|R_ℓ(x, ζ̃_0, …, ζ̃_ℓ)| ≤ |ℛ_ℓ| = (|x − x_0|^{ℓ+1} / 2π) | ∮_C G̃(z) / ((z − x_0)^{ℓ+1} (z − x)) dz |
                        ≤ (|x − x_0|^{ℓ+1} / 2π) ∮_C | G̃(z) / ((z − x_0)^{ℓ+1} (z − x)) | |dz|
                        ≤ (|x − x_0|^{ℓ+1} M / (2π r^{ℓ+1})) ∮_C |dz| = M |x − x_0|^{ℓ+1} / r^ℓ.   (31)

Now, since |x − x_0| / r < 1, we have |ℛ_ℓ| → 0 as ℓ → ∞. This completes the proof.
It is worth noting that we can control the accuracy of the obtained approximate solution by evaluating an upper bound of the mean value of the residual function R_ℓ(x) on the interval [a, b]. To estimate this upper bound, suppose that R_ℓ(x) is Riemann integrable with respect to x; then the Schwarz inequality gives

| ∫_a^b R_ℓ(x) dx | ≤ √(b − a) ‖R_ℓ‖_2,

where ‖R_ℓ‖_2 = ( ∫_a^b R_ℓ(x)² dx )^{1/2}. According to the mean value theorem for integrals, if R_ℓ(x) is continuous on [a, b], there exists a point c in (a, b) such that

∫_a^b R_ℓ(x) dx = (b − a) R_ℓ(c).

Thus,

|R_ℓ(c)| = | ∫_a^b R_ℓ(x) dx | / (b − a) ≤ √(b − a) ‖R_ℓ‖_2 / (b − a) = ‖R_ℓ‖_2 / √(b − a) =: R̄_ℓ.

If R̄_ℓ → 0 as ℓ becomes sufficiently large, then the error of the approximate solution is negligible.
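This bound is easy to monitor in practice: once R_ℓ(x) is available, ‖R_ℓ‖_2/√(b − a) can be estimated by numerical quadrature. A small Python sketch with a placeholder residual (purely for illustration, not a residual produced by the paper's method) is:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
R = lambda x: 1e-6 * np.sin(20 * x)      # placeholder residual function R_ell(x)

# ||R_ell||_2 = ( int_a^b R_ell(x)^2 dx )^(1/2)
norm2 = np.sqrt(quad(lambda x: R(x)**2, a, b)[0])

# mean-value bound: |R_ell(c)| <= ||R_ell||_2 / sqrt(b - a)
print('estimated mean residual bound:', norm2 / np.sqrt(b - a))
```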
In addition, for linear problems of the form

u^{(n)}(x) = ∑_{i=0}^{n−1} g_i(x) u^{(i)}(x) + λ_1 ∫_a^b k_1(x, t) u(t) dt + λ_2 ∫_a^x k_2(x, t) u(t) dt + f(x),
∑_{j=1}^{n_k} a_{jk} u^{(r_{jk})}(ξ_{jk}) = d_k,   k = 1, …, n,   ξ_{jk} ∈ [a, b],   r_{jk} ∈ {0, 1, …, n−1},   (32)
we define the error function as ε(x) = u(x) − ũ(x), where u(x) is the exact solution of problem (32). Substituting the approximate solution ũ(x) into (32) yields

ũ^{(n)}(x) = ∑_{i=0}^{n−1} g_i(x) ũ^{(i)}(x) + λ_1 ∫_a^b k_1(x, t) ũ(t) dt + λ_2 ∫_a^x k_2(x, t) ũ(t) dt + f(x) + R_ℓ(x),
∑_{j=1}^{n_k} a_{jk} ũ^{(r_{jk})}(ξ_{jk}) = d_k,   k = 1, …, n,   ξ_{jk} ∈ [a, b],   r_{jk} ∈ {0, 1, …, n−1}.

So we can write

u^{(n)}(x) − ũ^{(n)}(x) = ∑_{i=0}^{n−1} g_i(x) (u^{(i)}(x) − ũ^{(i)}(x)) + λ_1 ∫_a^b k_1(x, t) (u(t) − ũ(t)) dt + λ_2 ∫_a^x k_2(x, t) (u(t) − ũ(t)) dt − R_ℓ(x),
∑_{j=1}^{n_k} a_{jk} ( u^{(r_{jk})}(ξ_{jk}) − ũ^{(r_{jk})}(ξ_{jk}) ) = 0,   k = 1, …, n,   ξ_{jk} ∈ [a, b],   r_{jk} ∈ {0, 1, …, n−1}.
Then we obtain the following error problem, which is an integro-differential equation with homogeneous boundary conditions:

ε^{(n)}(x) = ∑_{i=0}^{n−1} g_i(x) ε^{(i)}(x) + λ_1 ∫_a^b k_1(x, t) ε(t) dt + λ_2 ∫_a^x k_2(x, t) ε(t) dt − R_ℓ(x),   (33)
∑_{j=1}^{n_k} a_{jk} ε^{(r_{jk})}(ξ_{jk}) = 0,   k = 1, …, n,   ξ_{jk} ∈ [a, b],   r_{jk} ∈ {0, 1, …, n−1}.
Solving the error problem (33) by the proposed method, we obtain an approximation ε̃(x) to ε(x). Consequently, we have the improved approximate solution ũ_imp(x) = ũ(x) + ε̃(x). Note that this process can be iterated to reach a satisfactory approximate solution.

Remark 1. If the exact solution of the problem is not available, then we can approximate the error function ε(x) by ε̃(x).

Remark 2. For sufficient conditions ensuring a unique solution of IDEs, one may refer to [31].

5. Application

In this section, we apply the present method to some test problems. In all problems, our approximation space is based on the Chebyshev polynomials of the first kind, and the numerical results are computed with Maple 17.0. The results obtained by the new algorithm are compared with the exact solution of each problem and are found to be in good agreement with it. As the numerical results show, the accuracy of the approximate solution increases with ℓ. Moreover, the new method is more accurate than the other numerical methods available in the literature.

Problem 1. Consider the following linear integro-differential equation (see [24]):

y^{(iv)}(x) = x(1 + e^x) + 3e^x + y(x) − ∫_0^x y(t) dt,   (34)
y(0) = 1,   y′(0) = 1,   y(1) = 1 + e,   y′(1) = 2e.
The exact solution is y(x) = 1 + x e^x. To solve (34) by the presented method, we define the following auxiliary differential equations:

ψ^{(iv)}(x) = x(1 + e^x) + 3e^x,   ψ(0) = 1,   ψ′(0) = 1,   ψ(1) = 1 + e,   ψ′(1) = 2e,   (35)

φ_m^{(iv)}(x) = T_m(x),   m = 0, …, ℓ,   φ_m(0) = 0,   φ_m′(0) = 0,   φ_m(1) = 0,   φ_m′(1) = 0,   (36)
where T_m(x) = cos(m arccos(x)) is the Chebyshev polynomial of degree m. From (6) and (35) it follows that

ψ(x) = (x − 1)e^x + x^5/120 + (4c_3 − 1/3)x^3 + (2c_2 − 1/2)x^2 + (c_1 − 3c_3)x + 1 + c_0 − c_2,

so c_0 = e − 149/120, c_1 = 557/160 − (3/4)e, c_2 = −269/120 + e and c_3 = 397/480 − (1/4)e are easily obtained from the boundary conditions in (35).
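These constants can be checked symbolically. The following sympy sketch (our own verification aid, not code from the paper) forms ψ(x) with the polynomial part written in the Chebyshev basis T_0, …, T_3, consistent with the parameterization above, and solves the boundary conditions of (35) for c_0, …, c_3.

```python
import sympy as sp

x, t = sp.symbols('x t')
c0, c1, c2, c3 = sp.symbols('c0 c1 c2 c3')

f = t * (1 + sp.exp(t)) + 3 * sp.exp(t)          # right-hand side of (35)
# psi(x) = int_0^x (x - t)^3/3! f(t) dt + c0*T0 + c1*T1 + c2*T2 + c3*T3
poly = c0 + c1 * x + c2 * (2 * x**2 - 1) + c3 * (4 * x**3 - 3 * x)
psi = sp.integrate((x - t)**3 / sp.factorial(3) * f, (t, 0, x)) + poly

# impose the boundary conditions of (35)
dpsi = sp.diff(psi, x)
eqs = [sp.Eq(psi.subs(x, 0), 1), sp.Eq(dpsi.subs(x, 0), 1),
       sp.Eq(psi.subs(x, 1), 1 + sp.E), sp.Eq(dpsi.subs(x, 1), 2 * sp.E)]
sol = sp.solve(eqs, [c0, c1, c2, c3])
print(sol)   # should reproduce c0 = e - 149/120, c1 = 557/160 - 3e/4, ...
```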
We can use the following recursive formula to evaluate the integral appearing in (11):

∫_0^x ((x − t)^{n−1} / (n − 1)!) T_m(t) dt = (x^{n−1} / (2(n − 1)!)) [ cos((m − 1)π/2)/(m − 1) − cos((m + 1)π/2)/(m + 1) ]
    + (1/2) [ (1/(m + 1)) ∫_0^x ((x − t)^{n−2} / (n − 2)!) T_{m+1}(t) dt − (1/(m − 1)) ∫_0^x ((x − t)^{n−2} / (n − 2)!) T_{m−1}(t) dt ].   (37)
Now, from (11) and (37), we obtain the following solutions of (36):

φ_0(x) = x⁴/24 + 4c_{0,3}x³ + 2c_{0,2}x² + (c_{0,1} − 3c_{0,3})x + (c_{0,0} − c_{0,2}),
φ_1(x) = x⁵/120 + 4c_{1,3}x³ + 2c_{1,2}x² + (c_{1,1} − 3c_{1,3})x + (c_{1,0} − c_{1,2}),
φ_2(x) = x⁶/180 − x⁴/24 + 4c_{2,3}x³ + 2c_{2,2}x² + (c_{2,1} − 3c_{2,3})x + (c_{2,0} − c_{2,2}),
φ_3(x) = x⁷/210 − x⁵/40 + 4c_{3,3}x³ + 2c_{3,2}x² + (c_{3,1} − 3c_{3,3})x + (c_{3,0} − c_{3,2}),
φ_4(x) = x⁸/210 − x⁶/45 + x⁴/24 + 4c_{4,3}x³ + 2c_{4,2}x² + (c_{4,1} − 3c_{4,3})x + (c_{4,0} − c_{4,2}),

and, for m > 4,

φ_m(x) = (1 / (48 ∏_{i=1}^{4}(m² − i²))) { T_{m−4}(x)[3m⁴ + 30m³ + 105m² + 150m + 72]
    + T_{m−2}(x)[−12m⁴ − 60m³ + 120m² + 960m + 1152] + T_m(x)[18m⁴ − 450m² + 2592]
    + T_{m+2}(x)[−12m⁴ + 60m³ + 120m² − 960m + 1152] + T_{m+4}(x)[3m⁴ − 30m³ + 105m² − 150m + 72]
    + cos(mπ/2)[−48m⁴ + 480m² − 3456x² + 24m⁶x² − 624m⁴x² + 4056m²x² − 432]
    + sin(mπ/2)[8m⁷x³ − 232m⁵x³ − 48m⁵x + 1952m³x³ + 960m³x − 4608mx³] }
    + 4c_{m,3}x³ + 2c_{m,2}x² + (c_{m,1} − 3c_{m,3})x + (c_{m,0} − c_{m,2}),
where m = 0, 1, …, ℓ. The unknown coefficients {c_{m,i}}_{i=0}^{3}, for m = 0, 1, …, ℓ, are easily obtained from the homogeneous boundary conditions in (36). Now we can form the expression for ũ(x) from (3), and we determine the unknown coefficients {α_m}_{m=0}^{ℓ} so that the values of E_j(α_0, …, α_ℓ) in (21) are minimized or forced to be zero; to do this we use our residual minimization scheme.
Table 1 presents the exact solution values and the absolute errors of the solutions obtained by the new method for different values of ℓ. Fig. 1 shows the plots of the error function u(x) − ũ(x) and of the residual function R_ℓ, for ℓ = 32.

x      Exact solution    Absolute error of our method
                         ℓ = 8        ℓ = 16       ℓ = 32       ℓ = 64
0.00   1.00000000000     0.00E−00     0.00E−00     0.00E−00     0.00E−00
0.20   1.24428055163     2.55E−10     1.00E−17     3.32E−31     1.04E−56
0.40   1.59672987906     1.09E−09     3.02E−17     1.25E−30     3.22E−56
0.60   2.09327128023     1.09E−09     3.02E−17     1.25E−30     3.22E−56
0.80   2.78043274279     2.55E−10     1.00E−17     3.33E−31     1.04E−56
1.00   3.71828182846     0.00E−00     0.00E−00     0.00E−00     0.00E−00

R̄_ℓ                      1.39E−05     1.46E−11     1.15E−23     5.16E−48

Table 1: Exact values, absolute errors obtained by our method, and the upper bound R̄_ℓ of the mean value of the residual error, for different values of ℓ, for Problem 1.
[Figure 1 about here: (a) plot of ε(x) for ℓ = 32; (b) plot of R_ℓ(x) for ℓ = 32.]

Figure 1: Plot of ε(x) = u(x) − ũ(x) and of the residual function R_ℓ(x) for ℓ = 32, where ũ(x) is obtained by our method.
Problem 2. Consider the following second-order, nonlinear, two-point boundary value integro-differential equation with interior points ξ_1 = 1/3 and ξ_2 = 2/3:

u″(x) − (exp(1)/sinh(1)) u(x) ∫_0^1 u²(t) dt = 0,   0 < x < 1,

subject to the boundary conditions

u(0) = ∑_{i=1}^{2} (1/(2 + i)) u(i/3) + 0.632802,   u(1) = exp(−1),
with exact solution u(x) = exp(−x). Applying the new method in the same way as for the first problem, approximate solutions can be obtained for different values of ℓ. In order to perform an error analysis of the obtained results, we use the following convergence indicators (see [29]), together with their rates of convergence:
• The consecutive error: D^ℓ = ‖u_{ℓ+1} − u_ℓ‖_2.
• The pointwise error: E^ℓ(x) = u_exact(x) − u_ℓ(x).
• The reference error: E^ℓ_ref = ‖E^ℓ(x)‖_2.
• The residual error: Res^ℓ_2 = ‖R_ℓ‖_2.
If ε^ℓ denotes one of the error terms above, its rate of convergence with respect to the order ℓ of the approximate solution is defined by (see [29])

r_{ε^ℓ} = ln( ε^{ℓ+1} / ε^ℓ ).

Tables 2 and 3 compare the values of the L_2-norm of the exact error E^ℓ_ref, the residual term Res^ℓ_2 and the consecutive error D^ℓ, together with their respective rates of convergence r_{E^ℓ_ref}, r_{Res^ℓ_2} and r_{D^ℓ}, as evaluated by the new method and by Temimi's method [29].
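As a side note, such rates are straightforward to compute from consecutive error norms; a minimal helper (our own illustration, not part of the paper) is shown below, applied to the first few reference errors of our method from Table 2.

```python
import numpy as np

def convergence_rates(errors):
    """Given error norms [eps_2, eps_3, ...] for successive values of ell,
    return r_eps(ell) = ln(eps_{ell+1} / eps_ell) for each consecutive pair."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[1:] / e[:-1])

# first few reference errors of our method in Table 2 (ell = 2, 3, 4, 5):
print(convergence_rates([4.32e-2, 1.45e-3, 7.53e-4, 4.25e-5]))
# roughly reproduces the rates -3.39, -0.66, -2.87 reported in Table 3
```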
ℓ      Our method                               Temimi's method [29]
       E^ℓ_ref     Res^ℓ_2     D^ℓ              E^ℓ_ref     Res^ℓ_2     D^ℓ
2      4.32E−02    3.87E−01    4.30E−02         1.36E−02    1.52E−02    1.21E−02
3      1.45E−03    5.80E−02    1.74E−03         7.21E−02    8.06E−03    6.37E−03
4      7.53E−04    1.18E−02    7.49E−04         3.84E−03    4.30E−03    3.37E−03
5      4.25E−05    2.24E−03    4.34E−05         2.05E−03    2.30E−03    1.79E−03
6      5.41E−06    4.17E−04    5.41E−06         1.10E−03    1.23E−03    9.56E−04
7      5.68E−07    7.67E−05    5.68E−07         5.86E−04    6.57E−04    5.10E−04
8      7.54E−08    1.40E−05    7.61E−08         3.13E−04    3.52E−04    2.73E−04
9      9.23E−09    2.53E−06    9.33E−09         1.68E−04    1.88E−04    1.46E−04
10     1.48E−09    4.56E−07    1.49E−09         8.96E−05    1.01E−04    7.80E−05
11     1.82E−10    8.19E−08    1.85E−10         4.79E−05    5.38E−05    4.17E−05
12     3.11E−11    1.46E−08    3.12E−11         2.56E−05    2.88E−05    2.23E−05
13     3.95E−12    2.61E−09    4.01E−12         1.37E−05    1.54E−05    1.19E−05
14     6.12E−13    4.64E−10    6.17E−13         7.33E−06    8.23E−06    6.38E−06
15     9.05E−14    8.23E−11    9.14E−14         3.92E−06    4.40E−06    3.41E−06

Table 2: Comparison of the obtained values of the L_2-norm of E^ℓ_ref, Res^ℓ_2 and D^ℓ, for Problem 2.
ℓ      Our method                                 Temimi's method [29]
       r_{E^ℓ_ref}  r_{Res^ℓ_2}  r_{D^ℓ}          r_{E^ℓ_ref}  r_{Res^ℓ_2}  r_{D^ℓ}
2      −3.3917      −1.8983      −3.2047          −0.6325      −0.6372      −0.6489
3      −0.6565      −1.5892      −0.8452          −0.6294      −0.6275      −0.6362
4      −2.8740      −1.6646      −2.8490          −0.6277      −0.6267      −0.6313
5      −2.0632      −1.6817      −2.0817          −0.6268      −0.6263      −0.6287
6      −2.2532      −1.6934      −2.2536          −0.6263      −0.6261      −0.6274
7      −2.0194      −1.7018      −2.0098          −0.6261      −0.6259      −0.6266
8      −2.1010      −1.7083      −2.0994          −0.6260      −0.6259      −0.6262
9      −1.8331      −1.7135      −1.8358          −0.6259      −0.6258      −0.6260
10     −2.0905      −1.7179      −2.0844          −0.6258      −0.6258      −0.6259
11     −1.7703      −1.7215      −1.7808          −0.6258      −0.6258      −0.6259
12     −2.0613      −1.7246      −2.0502          −0.6258      −0.6258      −0.6258
13     −1.8653      −1.7273      −1.8722          −0.6258      −0.6258      −0.6258
14     −1.9122      −1.7296      −1.9101          −0.6258      −0.6258      −0.6258
15     −1.8655      −1.7317      −1.8618          −0.6258      −0.6258      −−

Table 3: Comparison of the obtained values of the convergence rates r_{E^ℓ_ref}, r_{Res^ℓ_2} and r_{D^ℓ} of the error terms, for Problem 2.
In the following example we present a problem that the method of Temimi and Ansari [29] fails to solve, whereas our method succeeds.

Problem 3. Consider the following nonlinear second-order Volterra integro-differential equation:

u″(x) + ln(x + 3) u′(x) + e^{−x} u(h(x)) + 1/sin(u(x)) + ∫_0^x u(t)/(t + 3) dt = f(x),

subject to the boundary conditions

∑_{i=0}^{4} u(i/4) = ln²(3) + ln²(13/4) + ln²(7/2) + ln²(15/4) + ln²(4),
∑_{i=0}^{4} u′(i/4) = (6/5) ln(3) + (8/13) ln(13) − (2551/1365) ln(2) + (4/7) ln(7) + (8/15) ln(5),

where

f(x) = 2(1 − ln(x + 3) + (x + 3) ln²(x + 3)) / (x + 3)² + (1/3)(ln³(x + 3) − ln³(3)) + e^{−x} ln²(h(x) + 3) + 1/sin(ln²(x + 3)),

and h(x) = x/2, x², sin(x).
The exact solution of this problem is u(x) = ln²(x + 3). Tables 4, 5 and 6 show the obtained L_2-norms of the error terms with their respective rates of convergence for the different choices of h(x), as evaluated by the new method.

ℓ      L_2-norm of E^ℓ_ref                        r_{E^ℓ_ref}
       h(x) = x/2   h(x) = x²   h(x) = sin(x)    h(x) = x/2   h(x) = x²   h(x) = sin(x)
12     3.47E−13     3.47E−13    3.47E−13         −0.2435      −0.2354     −0.2431
13     2.72E−13     2.74E−13    2.72E−13         −3.4844      −3.4926     −3.4860
14     8.35E−15     8.35E−15    8.33E−15         −0.4037      −0.3957     −0.4018
15     5.58E−15     5.62E−15    5.57E−15         −3.5503      −3.5588     −3.5515
16     1.60E−16     1.60E−16    1.60E−16         −1.5589      −1.5529     −1.5562
17     3.37E−17     3.39E−17    3.37E−17         −2.3174      −2.3226     −2.3182
18     3.32E−17     3.32E−18    3.32E−18         −1.0232      −1.0175     −1.0252
19     1.19E−18     1.20E−18    1.19E−18         −2.6729      −2.6784     −2.6707
20     8.24E−18     8.24E−20    8.24E−20         −1.6037      −1.6013     −1.6040
21     1.66E−20     1.66E−20    1.66E−20         −2.0266      −2.0293     −2.0266
22     2.18E−21     2.18E−21    2.18E−21         −0.2906      −0.2987     −0.2895

Table 4: L_2-norm of the exact error E^ℓ_ref, together with its rate of convergence r_{E^ℓ_ref}, for different choices of h(x), for Problem 3.
ℓ      Residual error Res^ℓ_2                     r_{Res^ℓ_2}
       h(x) = x/2   h(x) = x²   h(x) = sin(x)    h(x) = x/2   h(x) = x²   h(x) = sin(x)
12     1.46E−10     1.46E−10    1.46E−10         −1.7255      −1.7258     −1.7254
13     2.59E−11     2.59E−11    2.59E−11         −1.7244      −1.7245     −1.7243
14     4.62E−12     4.62E−12    4.62E−12         −1.7229      −1.7228     −1.7228
15     8.25E−13     8.25E−13    8.25E−13         −1.7262      −1.7261     −1.7261
16     1.47E−13     1.47E−13    1.47E−13         −1.7371      −1.7370     −1.7371
17     2.59E−14     2.59E−14    2.59E−14         −1.7489      −1.7487     −1.7488
18     4.50E−15     4.50E−15    4.50E−15         −1.7454      −1.7453     −1.7454
19     7.85E−16     7.85E−16    7.85E−16         −1.7246      −1.7245     −1.7245
20     1.40E−16     1.40E−16    1.40E−16         −1.7201      −1.7201     −1.7201
21     2.51E−17     2.51E−17    2.51E−17         −1.7558      −1.7558     −1.7558
22     4.33E−18     4.33E−18    4.33E−18         −1.7816      −1.7817     −1.7816

Table 5: The residual error Res^ℓ_2, together with its rate of convergence r_{Res^ℓ_2}, for different choices of h(x), for Problem 3.
ℓ      Consecutive error D^ℓ                      r_{D^ℓ}
       h(x) = x/2   h(x) = x²   h(x) = sin(x)    h(x) = x/2   h(x) = x²   h(x) = sin(x)
12     4.09E−13     4.09E−13    4.10E−13         −0.4000      −0.3924     −0.4031
13     2.74E−13     2.76E−13    2.74E−13         −3.4200      −3.4275     −3.4159
14     8.97E−15     8.97E−15    9.00E−15         −0.4687      −0.4617     −0.4727
15     5.61E−15     5.66E−15    5.61E−15         −3.5689      −3.5771     −3.5685
16     1.58E−16     1.58E−16    1.58E−16         −1.5370      −1.5312     −1.5358
17     3.40E−17     3.42E−17    3.40E−17         −2.2495      −2.2539     −2.2510
18     3.59E−18     3.59E−18    3.58E−18         −1.0929      −1.0878     −1.0939
19     1.20E−18     1.21E−18    1.20E−18         −2.6527      −2.6579     −2.6504
20     8.47E−20     8.48E−20    8.48E−20         −1.6245      −1.6228     −1.6255
21     1.67E−20     1.67E−20    1.67E−20         −1.5232      −1.5182     −1.5201
22     3.64E−21     3.67E−21    3.65E−21         −0.2162      −0.2156     −0.2201

Table 6: The consecutive error D^ℓ, together with its rate of convergence r_{D^ℓ}, for different choices of h(x), for Problem 3.
Problem 4. Consider the following linear Volterra–Fredholm integro-differential equation (see [27, 30]):

y″(x) + x y′(x) − x y(x) = e^x − (1/2) sin(x) + x cos(x) + (1/2) sin(x) ∫_0^1 e^{−t} y(t) dt − cos(x) ∫_0^x e^{−t} y(t) dt,   0 ≤ x, t ≤ 1,

with the initial conditions y(0) = 1, y′(0) = 1. The exact solution of the problem is y(x) = exp(x).
Table 7 shows the obtained L_2-norms of the error terms with their respective rates of convergence, as evaluated by the new method. Table 8 compares the upper bounds of the mean errors and the maximum absolute errors obtained by our method, by the Chelyshkov collocation method [30] and by the collocation method based on the Bessel polynomials [27]. Fig. 2 shows the behavior of the absolute value of the residual function; as one can see from this figure, as the number of approximating terms ℓ increases, the residual function approaches zero rapidly. Also, Fig. 3 compares the exact and approximate solutions for Problem 4, plotting the behavior of the approximate solutions in 3D.

ℓ      Error terms                                Convergence rate
       E^ℓ_ref     Res^ℓ_2     D^ℓ               r_{E^ℓ_ref}  r_{Res^ℓ_2}  r_{D^ℓ}
2      2.96E−03    1.85E−02    2.93E−03          −4.3673      −3.2579      −4.4625
3      3.75E−05    7.10E−04    3.37E−05          −1.6601      −1.6393      −1.5614
4      7.13E−06    1.38E−04    7.08E−06          −4.2922      −3.7533      −4.2646
5      9.75E−08    3.23E−06    9.95E−08          −3.0525      −2.2069      −3.0723
6      4.61E−09    3.55E−07    4.61E−09          −4.6696      −4.1092      −4.6644
7      4.32E−11    5.84E−09    4.34E−11          −2.9073      −2.5878      −2.9128
8      2.36E−12    4.39E−10    2.36E−12          −4.7120      −4.3475      −4.7098
9      2.12E−14    5.68E−12    2.13E−14          −3.1062      −2.8657      −3.1082
10     9.49E−16    3.23E−13    9.50E−16          −4.7897      −4.5322      −4.7892
11     7.89E−18    3.48E−15    7.90E−18          −3.2772      −3.0843      −3.2775
12     2.98E−19    1.59E−16    2.98E−19          −4.8864      −4.6848      −4.8869
13     2.25E−21    1.46E−18    2.25E−21          −3.4243      −3.2644      −3.4238
14     7.32E−23    5.61E−20    7.32E−23          −4.9821      −4.8155      −4.9830
15     5.02E−25    4.55E−22    5.02E−25          −3.5540      −3.4174      −3.5531

Table 7: L_2-norm of the exact error E^ℓ_ref, the residual term Res^ℓ_2 and the consecutive error D^ℓ, together with their respective rates of convergence r_{E^ℓ_ref}, r_{Res^ℓ_2} and r_{D^ℓ}, for Problem 4.
Upper bound of mean errors R̄_ℓ        Our method    Method [30]
R̄_5                                    3.23E−06      2.60E−03
R̄_7                                    5.84E−09      1.15E−05

Maximum absolute errors ‖E^ℓ(x)‖_∞     Our method    Method [27]
‖E^3(x)‖_∞                             5.96E−05      2.37E−02
‖E^7(x)‖_∞                             6.87E−11      1.21E−06
‖E^10(x)‖_∞                            2.10E−15      8.22E−10

Table 8: Comparison of the upper bounds of the mean errors and the maximum absolute errors obtained by our method, by the method of [30] and by the method of [27], for Problem 4.
[Figure 2 about here: four panels (a)–(d) showing |R_ℓ(x)| versus x for increasing ℓ.]

Figure 2: The behavior of the absolute value of the residual function versus x for different values of the approximating terms ℓ, for Problem 4: (a) 2 ≤ ℓ ≤ 4, (b) 4 ≤ ℓ ≤ 8, (c) 8 ≤ ℓ ≤ 12, (d) 12 ≤ ℓ ≤ 16.
Figure 3: The behavior of the approximations of u(x) evaluated by our method versus x for different values of the approximating terms ℓ, together with the exact solution, for Problem 4.
6. Concluding remarks

In this work, we proposed an efficient algorithm for solving integro-differential equations with multi-point or mixed boundary conditions, and applied it to linear and nonlinear Fredholm–Volterra integro-differential equations. Four numerical examples were worked out, and the obtained numerical results demonstrate a clear improvement over existing techniques; the proposed algorithm gives better accuracy than the other numerical methods [27, 29, 30] available in the literature. Table 1 shows that the proposed method provides satisfactory solutions and that, as the number of approximating terms ℓ increases, the error of the solution decreases rapidly. The comparisons reported in Tables 2–8 justify the applicability, accuracy and efficiency of the proposed method. Figure 1 shows that the proposed method provides excellent solutions for integro-differential equations, since both the error and the residual error functions are small. The plots in Figure 2 show the rapid convergence of the present method, and Figure 3 shows the behavior of the approximate solutions obtained by the present method, in very good agreement with the exact solution.
It is worth noting that the proposed method has further advantages. The algorithm is easy and straightforward to use, and the solutions of the auxiliary equations can be tabulated so that they can be reused for any problem; after increasing ℓ, one can reuse all previously computed φ_i(x) and ψ(x). From these findings, we conclude that the proposed algorithm may become a promising tool for finding semi-analytical solutions of integro-differential equations.

References

[1] K. Maleknejad, Y. Mahmoudi, Numerical solution of linear Fredholm integral equation by using hybrid Taylor and block-pulse functions, Appl. Math. Comput., 149 (2004) 799–806.
[2] M.T. Rashed, Numerical solution of functional differential, integral and integro-differential equations, Appl. Numer. Math., 156 (2004) 485–492.
[3] Y. Ren, B. Zhang, H. Qiao, A simple Taylor-series expansion method for a class of second kind integral equations, J. Comp. Appl. Math., 110 (1999) 15–24.
[4] W. Wang, C. Lin, A new algorithm for integral of trigonometric functions with mechanization, Appl. Math. Comput., 164 (1) (2005) 71–82.
[5] W. Wang, An algorithm for solving the higher-order nonlinear Volterra-Fredholm integro-differential equation with mechanization, Appl. Math. Comput., 172 (2006) 1–23.
[6] S. Yalçinbaş, M. Sezer, The approximate solution of high-order linear Volterra-Fredholm integro-differential equations in terms of Taylor polynomials, Appl. Math. Comput., 112 (2000) 291–308.
[7] M. Razzaghi, S. Yousefi, Legendre wavelets method for the nonlinear Volterra-Fredholm integral equations, Math. Comput. Simulation, 70 (2005) 1–8.
[8] S.M. Hosseini, S. Shahmorad, A matrix formulation of the Tau method and Volterra linear integro-differential equations, Korean J. Comput. Appl. Math., 9 (2) (2002) 497–507.
[9] S.M. Hosseini, S. Shahmorad, Numerical solution of a class of integro-differential equations by the Tau method with an error estimation, Appl. Math. Comput., 136 (2003) 559–570.
[10] E.L. Ortiz, L. Samara, An operational approach to the Tau method for the numerical solution of nonlinear differential equations, Computing, 27 (1981) 15–25.
[11] E.L. Ortiz, On the numerical solution of nonlinear and functional differential equations with the Tau method, Numerical treatment of differential equations in applications, Lecture Notes in Math., 679 (1978) 127–139.
[12] J. Pour-Mahmoud, M.Y. Rahimi-Ardabili, S. Shahmorad, Numerical solution of the system of Fredholm integro-differential equations by the Tau method, Appl. Math. Comput., 168 (2005) 465–478.
[13] K. Maleknejad, F. Mirzaee, Numerical solution of integro-differential equations by using rationalized Haar functions method, Kybernetes Int. J. Syst. Math., 35 (2006) 1735–1744.
[14] M.H. Reihani, Z. Abadi, Rationalized Haar functions method for solving Fredholm and Volterra integral equations, J. Comput. Appl. Math., 200 (2007) 12–20.
[15] J. Zhao, R.M. Corless, Compact finite difference method for integro-differential equations, Appl. Math. Comput., 177 (2006) 271–288.
[16] H. Danfu, S. Xufeng, Numerical solution of integro-differential equations by using CAS wavelet operational matrix of integration, Appl. Math. Comput., 194 (2007) 460–466.
[17] E. Yusufoğlu, Improved homotopy perturbation method for solving Fredholm type integro-differential equations, Chaos Solitons and Fractals, 41 (2009) 28–37.
[18] M. Tavassoli Kajani, M. Ghasemi, E. Babolian, Comparison between the homotopy perturbation method and the sine-cosine wavelet method for solving linear integro-differential equations, Comput. Math. Appl., 54 (2007) 1162–1168.
[19] M. Tavassoli Kajani, M. Ghasemi, E. Babolian, Numerical solution of linear integro-differential equation by using sine-cosine wavelets, Appl. Math. Comput., 180 (2006) 569–574.
[20] A. Akyüz-Daşcıoğlu, M. Sezer, Chebyshev polynomial solutions of systems of higher-order linear Fredholm-Volterra integro-differential equations, J. Franklin Inst., 342 (2005) 688–701.
[21] C.H. Hsiao, Hybrid function method for solving Fredholm and Volterra integral equations of the second kind, J. Comput. Appl. Math., 230 (2009) 59–68.
[22] A.M. Wazwaz, The combined Laplace transform-Adomian decomposition method for handling nonlinear Volterra integro-differential equations, Appl. Math. Comput., 216 (2010) 1304–1309.
[23] Ş. Yüzbaşı, N. Şahin, M. Sezer, A Bessel collocation method for numerical solution of generalized pantograph equations, Numer. Meth. Part. D. E., 28 (4) (2012) 1105–1123.
[24] A. Arikoglu, I. Ozkol, Solution of boundary value problems for integro-differential equations by using differential transform method, Appl. Math. Comput., 168 (2005) 1145–1158.
[25] A. Borhanifar, Kh. Sadri, A new operational approach for numerical solution of generalized functional integro-differential equations, J. Comput. Appl. Math., 279 (2015) 80–96.
[26] K. Kim, B. Jang, A novel semi-analytical approach for solving nonlinear Volterra integro-differential equations, Appl. Math. Comput., 263 (2015) 25–35.
[27] Ş. Yüzbaşı, N. Şahin, A. Yildirim, A collocation approach for solving high-order linear Fredholm-Volterra integro-differential equations, Math. Comput. Model., 55 (2012) 547–563.
[28] S. Yu. Reutskiy, The backward substitution method for multipoint problems with linear Volterra-Fredholm integro-differential equations of the neutral type, J. Comput. Appl. Math., 296 (2016) 724–738.
[29] H. Temimi, A.R. Ansari, A new iterative technique for solving nonlinear second order multi-point boundary value problems, Appl. Math. Comput., 218 (2011) 1457–1466.
[30] C. Oğuz, M. Sezer, Chelyshkov collocation method for a class of mixed functional integro-differential equations, Appl. Math. Comput., 259 (2015) 943–954.
[31] X. Zhang, M. Feng, W. Ge, Existence of solutions of boundary value problems with integral boundary conditions for second-order impulsive integro-differential equations in Banach spaces, J. Comput. Appl. Math., 233 (8) (2010) 1915–1926.