Numerical solution of differential algebraic equations using a multiquadric approximation scheme


Mathematical and Computer Modelling 53 (2011) 659–666


S. Karimi Vanani, A. Aminataei*
Department of Mathematics, K.N. Toosi University of Technology, P.O. Box 16315-1618, Tehran, Iran
* Corresponding author. E-mail addresses: [email protected] (S. Karimi Vanani), [email protected] (A. Aminataei).

Article history: Received 30 May 2009; received in revised form 30 September 2010; accepted 4 October 2010.

Keywords: Multiquadric approximation scheme; Differential algebraic equations; Hessenberg forms of differential algebraic equations; Error estimation; Run time of the method; Scattered data points.

Abstract

The objective of this paper is to solve differential algebraic equations using a multiquadric approximation scheme. Therefore, we present the notation and basic definitions of the Hessenberg forms of differential algebraic equations. In addition, we present the properties of the proposed multiquadric approximation scheme and its advantages, which include using data points in arbitrary locations with arbitrary ordering. Moreover, error estimation and the run time of the method are also considered. Finally, some experiments are performed to illustrate the high accuracy and efficiency of the proposed method, even when the data points are scattered and some of them are very close to each other.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Differential algebraic equations (DAEs) frequently appear in various applications in mathematical modelling, physical problems, circuit analysis, computer-aided design, power systems, simulation of mechanical systems and more general optimal control problems; thus, they have attracted the attention of numerical analysts [1–5]. The index of a DAE is a measure of the degree of singularity of the system, and it is widely regarded as an indication of the difficulties to be expected in its numerical solution [1]. The following statements and discussion of DAEs have been described by Celik [6]. The most general form of a DAE is the following:

F(t, y, y′) = 0,    (1)

where ∂F/∂y′ may be singular. The rank and structure of this Jacobian matrix depend, in general, on the solution y(t), and for simplicity, we shall always assume that they are independent of t. The important case of a semi-explicit DAE, or an ODE with constraints, is given by the following equations:

x′ = f(t, x, z),    (2)

0 = g(t, x, z).    (3)

This is a special case of Eq. (1). The index is 1 if ∂g/∂z is non-singular, because then one differentiation of Eq. (3) yields z′ in principle. For the semi-explicit index-1 DAE, we can distinguish between the differential variables x(t) and the algebraic variables z(t). The algebraic variables may be less smooth than the differential variables by one derivative.
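As an added illustration (not part of the original text), one differentiation of the constraint (3) makes this explicit:

\[
0 = \frac{d}{dt}\,g(t,x,z) = g_t + g_x x' + g_z z' = g_t + g_x f(t,x,z) + g_z z',
\]

so that, whenever \(\partial g/\partial z\) is non-singular, \(z' = -g_z^{-1}\bigl(g_t + g_x f(t,x,z)\bigr)\). A single differentiation therefore turns (2) and (3) into an ODE for \((x, z)\), which is exactly the index-1 property.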




In the general case, each component of y may contain a combination of differential and algebraic components, which makes the numerical solution of such high-index problems much more difficult. The semi-explicit form is de-coupled in this sense. On the other hand, any DAE can be written in the semi-explicit forms (2) and (3), but with the index increased by 1, upon defining y′ = z, which gives the following equations:

y′ = z,    (4)

0 = F(t, y, z).    (5)

It is evident that re-writing by itself does not make the problem easier to solve. The converse transformation is also possible. Given a semi-explicit index-2 DAE system with z = w′, the system is easily represented as follows:

x′ = f(t, x, w′),    (6)

0 = g(t, x, w),    (7)

where this system is an index-1 DAE and yields exactly the same solution for x as (2) and (3) above. The classes of fully-implicit index-1 DAEs of the form (1) and semi-explicit index-2 DAEs of the forms (2) and (3) are therefore equivalent.

We define the index of a DAE. For general DAE systems (1), the index along a solution y(t) is the minimum number of differentiations of the system that are required to solve for y′ uniquely in terms of y and t (i.e., to define an ODE for y). Thus, the index is defined in terms of the following over-determined system:

F(t, y, y′) = 0,
∂F/∂t (t, y, y′, y′′) = 0,
. . .
∂^m F/∂t^m (t, y, y′, . . . , y^(m+1)) = 0,    (8)

where m is assumed to be the smallest integer such that y′ in (8) can be determined in terms of y and t. It should be noted that, in practice, differentiating the system as in (8) is rarely done in a computation. However, such a definition is useful in understanding the underlying mathematical structure of the DAE system and therefore in selecting an appropriate numerical method [5,6].

In recent years, considerable efforts have been made to solve systems of DAEs. There are some numerical methods that solve DAEs; the important ones use BDF [1,4,5,7] and implicit Runge–Kutta methods [1,7,8]. These methods are only applicable to low-index problems and some special problems. In this article, we are interested in solving DAEs with a multiquadric approximation scheme, because this solution method is easy to implement, yields the desired result in only a few terms and works efficiently for scattered data points.

The multiquadric (MQ) approximation scheme is a useful method for the numerical solution of ordinary and partial differential equations (ODEs and PDEs). It is a grid-free spatial approximation scheme that converges exponentially for the spatial terms of ODEs and PDEs. The MQ approximation scheme was first introduced by Hardy [9], who successfully applied this method to approximate surfaces and bodies from field data. Hardy [10] has written a detailed review article summarizing its explosive growth since it was first introduced. Franke [11] published a detailed comparison of 29 different scattered data schemes for analytic problems. Of all the techniques tested, he concluded that MQ performed the best in accuracy, visual appeal and ease of implementation, even against various finite element schemes. To the best of our knowledge, this is the first demonstration of the application of the MQ approximation scheme to DAEs.

The organization of this paper is as follows. Section 2 gives some notations and basic definitions of the DAE forms. In Section 3, basic concepts of the MQ approximation scheme and its application to DAEs are presented. An error estimate of the proposed method is given in Section 4. The results of two illustrative numerical experiments are described in Section 5. The discussion and conclusion are presented in Section 6.

2. Special differential algebraic equation forms

Many practical problems with higher indices that arise in the applied sciences can be considered to be a combination of more restrictive structures of ODEs coupled with constraints. In such systems, the algebraic and differential variables are explicitly identified for higher-index DAEs as well, and the algebraic variables may all be eliminated using the same number of differentiations. These are called Hessenberg forms of the DAE and are given as follows [1].

2.1. Hessenberg index-1

x′ = f(t, x, z),    (9)

0 = g(t, x, z).    (10)


In the above equations, the Jacobian matrix function ∂g/∂z is assumed to be non-singular for all t. This system is also often referred to as a semi-explicit index-1 system. Semi-explicit index-1 DAEs are closely related to implicit ODEs. Using the implicit function theorem [12, pp. 36–37], we can actually solve for the variable z in constraint (10). Substituting the result into differential equation (9) yields an ODE in the variable x.

2.2. Hessenberg index-2

x′ = f(t, x, z),    (11)

0 = g(t, x),    (12)

where the product of Jacobians (∂g/∂x)(∂f/∂z) is non-singular for all t. Note the absence of the algebraic variable z from constraint (12). This system is a pure index-2 DAE, and all algebraic variables play the role of index-2 variables.

2.3. Hessenberg index-3

x′ = f(t, x, y, z),    (13)

y′ = g(t, x, y),    (14)

0 = h(t, y),    (15)

where the product of the three matrix functions (∂h/∂y)(∂g/∂x)(∂f/∂z) is non-singular. The index of a Hessenberg DAE is found, as in the general case, by differentiation. However, only the algebraic constraints must be differentiated [8].

3. MQ approximation scheme

The basic MQ approximation scheme assumes that any function can be expanded as a finite series of upper hyperboloids, which is written as follows:

u(t) = Σ_{j=1}^{N} a_j h(t − t_j),    t ∈ R,    (16)

where N is the total number of data centers under consideration, and

h(t − t_j) = ((t − t_j)² + R²)^(1/2),    j = 1, 2, . . . , N,    (17)

(t − t_j)² is the square of the Euclidean distance in R, and R² > 0 is an input shape parameter. Note that the basis function h, which is a type of spline approximation, is continuously differentiable. The expansion coefficients a_j are found by solving the following set of full linear equations:

u(t_i) = Σ_{j=1}^{N} a_j h(t_i − t_j),    i = 1, 2, . . . , N.    (18)
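To make Eqs. (16)–(18) concrete, the following short Python sketch (our illustration, not part of the original paper) assembles the full N × N interpolation matrix for a constant shape parameter and solves it for the expansion coefficients; the interpolated function and the node locations are arbitrary choices for demonstration.

```python
import numpy as np

def mq_basis(t, centers, R):
    """Multiquadric basis h(t - t_j) = ((t - t_j)^2 + R^2)^(1/2), cf. Eq. (17)."""
    return np.sqrt((t - centers) ** 2 + R ** 2)

# data centers t_j and samples u(t_i) of a test function (arbitrary choice)
N = 16
t = np.linspace(0.0, 1.0, N)
u = np.exp(t)                      # function to be interpolated
R = 0.5                            # constant shape parameter

# full interpolation matrix H[i, j] = h(t_i - t_j), cf. Eq. (18)
H = mq_basis(t[:, None], t[None, :], R)
a = np.linalg.solve(H, u)          # expansion coefficients a_j

# evaluate the MQ interpolant u(t) = sum_j a_j h(t - t_j), Eq. (16), off the centers
t_new = np.linspace(0.0, 1.0, 101)
u_new = mq_basis(t_new[:, None], t[None, :], R) @ a
print(np.max(np.abs(u_new - np.exp(t_new))))   # interpolation error
```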

Zerroukat et al. [13] found that a constant shape parameter (R²) achieves better accuracy. Mai-Duy and Tran-Cong [14] have developed new methods based on radial basis function networks (RBFNs) to approximate both functions and their first and higher derivatives. The direct RBFN (DRBFN) and indirect RBFN (IRBFN) methods have been studied, and it has been found that the IRBFN methods yield consistently better results for both functions and their derivatives. Recently, Aminataei and Mazarei [15] showed that, in the numerical solution of elliptic PDEs using the DRBFN and IRBFN methods, the IRBFN method is the more accurate of the two, with very small errors; in particular, for one-dimensional equations the IRBFN method is more accurate than the DRBFN method. Furthermore, Aminataei and Mazarei [16] used the DRBFN and IRBFN methods in polar coordinates and achieved better accuracy.

Micchelli [17] proved that MQ schemes belong to a class of conditionally positive definite RBFNs. He showed that Eq. (16) is always solvable for distinct points. Madych and Nelson [18] proved that MQ interpolation always produces a minimal semi-norm error, and that the MQ interpolant and derivative estimates converge exponentially as the density of data centers increases. Moreover, the MQ interpolant is continuously differentiable over the entire domain of data centers, and the spatial derivative approximations have been found to be excellent, especially in very steep gradient regions where traditional methods fail. This ability to approximate spatial derivatives is due in large part to a slight modification of the original MQ scheme that permits the shape parameter to vary with the basis function.


Instead of using the constant shape parameter defined in Eq. (17), we have used variable shape parameters [19–21] as follows:

h(t − t_j) = ((t − t_j)² + R_j²)^(1/2),    j = 1, 2, . . . , N,    (19)

R_j² = R_min² (R_max²/R_min²)^((j−1)/(N−1)),    j = 1, 2, . . . , N,    (20)

and R_min² > 0. R_max² and R_min² are two input parameters chosen so that the following ratio is in the given range:

R_max²/R_min² ≅ 10 to 10⁶.    (21)
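The recipe of Eqs. (19)–(21) is easy to state in code. The following sketch (ours, not the authors') computes the geometrically increasing shape parameters R_j²; the particular R_min, R_max values are those reported for experiment 1 below.

```python
import numpy as np

def variable_shape_parameters(N, R_min, R_max):
    """R_j^2 = R_min^2 * (R_max^2 / R_min^2)**((j - 1)/(N - 1)), j = 1..N, cf. Eq. (20)."""
    j = np.arange(1, N + 1)
    return R_min**2 * (R_max**2 / R_min**2) ** ((j - 1) / (N - 1))

R2 = variable_shape_parameters(N=16, R_min=0.5, R_max=25.0)
print(R2[0], R2[-1])        # R_1^2 = R_min^2 and R_N^2 = R_max^2
print(R2[-1] / R2[0])       # ratio R_max^2 / R_min^2, cf. the range of Eq. (21)
```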

Madych [22] has proven that very large values of a shape parameter are desirable in certain circumstances. Eq. (19) is one way to have at least one very large value of the shape parameter without incurring the onset of severe ill-conditioning. Spatial partial derivatives of any function are formed by differentiating the spatial basis functions. Consider a one-dimensional problem. The first derivative is obtained along the same lines by simple differentiation:

u′(t_i) = Σ_{j=1}^{N} a_j (t_i − t_j)/h_ij,    h_ij = ((t_i − t_j)² + R_j²)^(1/2),    i = 1, 2, . . . , N.    (22)
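Since the derivative in Eq. (22) is obtained by differentiating the basis analytically, the same coefficients a_j give both the interpolant and its derivative. The sketch below (again an illustration under our own choice of test function) evaluates Eq. (22) with the variable shape parameters of Eqs. (19)–(21).

```python
import numpy as np

def mq_matrices(t, R2):
    """Return H[i, j] = h_ij of Eq. (19) and dH[i, j] = (t_i - t_j)/h_ij of Eq. (22)."""
    diff = t[:, None] - t[None, :]
    H = np.sqrt(diff**2 + R2[None, :])
    return H, diff / H

N = 16
t = np.linspace(0.0, 1.0, N)
j = np.arange(1, N + 1)
R2 = 0.5**2 * (25.0**2 / 0.5**2) ** ((j - 1) / (N - 1))   # Eq. (20)

H, dH = mq_matrices(t, R2)
a = np.linalg.solve(H, np.sin(t))     # interpolate u(t) = sin t (test function)
u_prime = dH @ a                      # u'(t_i) from Eq. (22)
print(np.max(np.abs(u_prime - np.cos(t))))   # derivative approximation error
```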

We are interested in solving DAEs using the MQ approximation scheme. Therefore, we consider one of the Hessenberg forms of DAEs, e.g., the Hessenberg index-3 form, as follows:

x′(t) = f(t, x(t), y(t), y′(t)),
y′(t) = g(t, x(t), y(t)),    t ∈ [a, b],    (23)
0 = h(t, y(t)).

Let us consider the following approximate solutions of x(t) and y(t):

x(t) ≃ x̂(t) = Σ_{j=1}^{N} a_j h^(1)(t − t_j),    (24)

y(t) ≃ ŷ(t) = Σ_{j=1}^{N} b_j h^(2)(t − t_j),    (25)

where h^(1) and h^(2) are defined in the same manner as in Eqs. (19) through (21). To apply the MQ approximation scheme, N collocation points are required. In most numerical methods, choosing the set of grid points is an important issue. When the MQ approximation scheme is used, the collocation points can be chosen freely: the scheme does not depend on the selection of the collocation points, which is the most important advantage of this method for interpolating scattered data [11]. Thus, suppose that t_1, t_2, . . . , t_N are N arbitrary collocation points in the interval [a, b]. Implementing the MQ approximation scheme for the aforesaid system at these points, we obtain:

x̂′(t_i) = f(t_i, x̂(t_i), ŷ(t_i), ŷ′(t_i)),
ŷ′(t_i) = g(t_i, x̂(t_i), ŷ(t_i)),    (26)
0 = h(t_i, ŷ(t_i)).

Solving the obtained system requires supplementary conditions. Imposing the given supplementary conditions on the problem yields the unknown coefficients a_i, b_i, i = 1, 2, . . . , N. We use the Gauss elimination method with total pivoting to solve the above system. In the same manner, this solution method can easily be applied to Hessenberg index-1 and index-2 DAE systems.
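As an illustration of how the collocation equations (26) and the supplementary conditions can be assembled and solved, the following Python sketch uses a hypothetical test system of the form (23) (with exact solution x = sin t, y = cos t, chosen by us) and a general least-squares solver. This is only one possible realization: the authors solve their (linear) systems by Gauss elimination with total pivoting, whereas here the collocation residuals plus the initial conditions are handed to a least-squares routine, since together they may slightly outnumber the 2N unknown coefficients.

```python
import numpy as np
from scipy.optimize import least_squares

def mq(t, centers, R2):
    """MQ basis values h_ij (Eq. (19)) and their t-derivatives (Eq. (22))."""
    d = t[:, None] - centers[None, :]
    H = np.sqrt(d**2 + R2[None, :])
    return H, d / H

N = 12
tc = np.linspace(0.0, 1.0, N)                      # collocation points (could be scattered)
j = np.arange(1, N + 1)
R2 = 0.5**2 * (25.0**2 / 0.5**2)**((j - 1) / (N - 1))   # variable shape parameters, Eq. (20)
H, dH = mq(tc, tc, R2)
H0, _ = mq(np.array([0.0]), tc, R2)                # basis evaluated at the initial point

# hypothetical test system of the form (23), with exact solution x = sin t, y = cos t:
#   x' = f = y,  y' = g = -x,  0 = h = y - cos t,  x(0) = 0,  y(0) = 1
def residual(c):
    ca, cb = c[:N], c[N:]                          # coefficients a_j, b_j of Eqs. (24)-(25)
    x, dx = H @ ca, dH @ ca
    y, dy = H @ cb, dH @ cb
    return np.concatenate([
        dx - y,                                    # x'(t_i) - f(t_i, ...)
        dy + x,                                    # y'(t_i) - g(t_i, ...)
        y - np.cos(tc),                            # h(t_i, y(t_i))
        [(H0 @ ca)[0] - 0.0, (H0 @ cb)[0] - 1.0],  # supplementary (initial) conditions
    ])

sol = least_squares(residual, np.zeros(2 * N))
ca, cb = sol.x[:N], sol.x[N:]
print(np.max(np.abs(H @ ca - np.sin(tc))),         # error in x at the collocation points
      np.max(np.abs(H @ cb - np.cos(tc))))         # error in y at the collocation points
```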

4. Error estimation

In this section, an error estimate for the approximate solution of Eq. (23) is obtained. Let us set e_x(t) = x(t) − x̂(t) and e_y(t) = y(t) − ŷ(t) as the error functions of the approximate solutions x̂(t) to x(t) and ŷ(t) to y(t), where x(t) and y(t) are the exact solutions of Eq. (23). Thus, x̂(t) and ŷ(t) satisfy the following equations:

x̂′(t) = f(t, x̂(t), ŷ(t), ŷ′(t)) + A(t),
ŷ′(t) = g(t, x̂(t), ŷ(t)) + B(t),    (27)
0 = h(t, ŷ(t)) + C(t).


The perturbation terms A(t), B(t) and C(t) are obtained as follows:

A(t) = x̂′(t) − f(t, x̂(t), ŷ(t), ŷ′(t)),
B(t) = ŷ′(t) − g(t, x̂(t), ŷ(t)),    (28)
C(t) = −h(t, ŷ(t)).

We proceed to find approximations ê_x(t) and ê_y(t) to the error functions e_x(t) and e_y(t), respectively, in the same way as we did before for the solution of Eq. (23). By subtracting Eq. (27) from Eq. (23), we have:

x′(t) − x̂′(t) = f(t, x(t) − x̂(t), y(t) − ŷ(t), y′(t) − ŷ′(t)) − A(t),
y′(t) − ŷ′(t) = g(t, x(t) − x̂(t), y(t) − ŷ(t)) − B(t),    (29)
0 = h(t, y(t) − ŷ(t)) − C(t).

Now, the error functions ê_x(t) and ê_y(t) satisfy the following equations:

ê_x′(t) = f(t, ê_x(t), ê_y(t), ê_y′(t)) − A(t),
ê_y′(t) = g(t, ê_x(t), ê_y(t)) − B(t),    (30)
0 = h(t, ê_y(t)) − C(t),

or

ê_x′(t) − f(t, ê_x(t), ê_y(t), ê_y′(t)) = −A(t),
ê_y′(t) − g(t, ê_x(t), ê_y(t)) = −B(t),    (31)
h(t, ê_y(t)) = C(t).

It should be noted that, in order to construct the approximations ê_x(t) and ê_y(t) to e_x(t) and e_y(t), only the related equations, like Eqs. (19) through (26), need to be recomputed; the structure of the method remains the same.
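For completeness, the perturbation terms of Eq. (28) are simply the residuals left by the computed approximations in the original system. A small sketch (ours, with hypothetical callables f, g, h and the same test system as before) is:

```python
import numpy as np

def perturbation_terms(t, x_hat, dx_hat, y_hat, dy_hat, f, g, h):
    """Perturbation terms of Eq. (28) at the points t, given callables f(t, x, y, dy),
    g(t, x, y), h(t, y) and the approximations x_hat, y_hat with their derivatives."""
    A = dx_hat - f(t, x_hat, y_hat, dy_hat)
    B = dy_hat - g(t, x_hat, y_hat)
    C = -h(t, y_hat)
    return A, B, C

t = np.linspace(0.0, 1.0, 50)
x_hat, dx_hat = np.sin(t), np.cos(t)        # pretend these came from Eqs. (24)-(25)
y_hat, dy_hat = np.cos(t), -np.sin(t)
A, B, C = perturbation_terms(t, x_hat, dx_hat, y_hat, dy_hat,
                             f=lambda t, x, y, dy: y,
                             g=lambda t, x, y: -x,
                             h=lambda t, y: y - np.cos(t))
print(np.max(np.abs(A)), np.max(np.abs(B)), np.max(np.abs(C)))   # all zero here
```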

5. Numerical experiments

In this section, two experiments are presented to demonstrate the capability of the proposed method. In the first experiment, the high accuracy, high efficiency, simplicity and low run time of the method are observable, as well as the fact that higher-order accuracy and convergence can be achieved by increasing N. In the second experiment, the convenience of choosing scattered data points as collocation points and the sensitivity of the method to very close data points are shown. In addition, the accuracy of an approximate solution is measured by means of the discrete relative L2 norm, which is defined as follows:

Ne x(t) = ( Σ_{i=1}^{N} [x̂(t_i) − x(t_i)]² / Σ_{i=1}^{N} x²(t_i) )^(1/2),    (32)

where x and x̂ are the exact and computed solutions, respectively, and N is the number of unknown nodal values of x. The computations associated with these experiments were performed in Maple 13 on a PC with a 2.4 GHz CPU.
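Eq. (32) translates directly into code; the following helper (an illustration, not from the paper) is applied in the same way to x_1, x_2 and y in the tables below.

```python
import numpy as np

def Ne(x_hat, x):
    """Discrete relative L2 norm of Eq. (32)."""
    return np.sqrt(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))

# synthetic check: a perturbation of size 1e-6 gives an Ne of that order
x = np.exp(np.linspace(0.0, 1.0, 11))
print(Ne(x + 1e-6, x))
```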

Experiment 1. Consider the following DAE:

x_1′(t) = t λ y′(t) + p_1(t),
x_2′(t) = (λ − 5) y′(t) + p_2(t),    0 < t ≤ 1,
0 = t² x_1(t) + sin(t) x_2(t) + s(t),

where

p_1(t) = e^t − t λ (e^t + e^(−t)),
p_2(t) = −e^(−t) − (λ − 5)(e^t + e^(−t)),
s(t) = −(t² e^t + sin(t) e^(−t)),

with the initial conditions x_1(0) = 1, x_2(0) = 1 and y(0) = 0. Here λ is an arbitrary parameter, and the exact solution is as follows:

x_1(t) = e^t,
x_2(t) = e^(−t),
y(t) = e^t − e^(−t).
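Before solving the problem numerically, one can check (a verification of ours, not part of the paper) that the stated exact solution satisfies this DAE for any λ; the residuals below vanish to round-off level.

```python
import numpy as np

lam = 15.0
t = np.linspace(1e-3, 1.0, 200)

x1, x2, y = np.exp(t), np.exp(-t), np.exp(t) - np.exp(-t)
dx1, dx2, dy = np.exp(t), -np.exp(-t), np.exp(t) + np.exp(-t)

p1 = np.exp(t) - t * lam * (np.exp(t) + np.exp(-t))
p2 = -np.exp(-t) - (lam - 5.0) * (np.exp(t) + np.exp(-t))
s = -(t**2 * np.exp(t) + np.sin(t) * np.exp(-t))

r1 = dx1 - (t * lam * dy + p1)              # first differential equation
r2 = dx2 - ((lam - 5.0) * dy + p2)          # second differential equation
r3 = t**2 * x1 + np.sin(t) * x2 + s         # algebraic constraint
print(np.max(np.abs(r1)), np.max(np.abs(r2)), np.max(np.abs(r3)))
```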


Table 1
Approximate solutions for N = 4, 8, 12, 16 unknown nodal values of x_1(t) and the exact solutions of x_1(t) of experiment 1.

t      N = 4          N = 8          N = 12         N = 16         Exact solution
0.0    1.000000000    1.000000000    1.000000000    1.000000000    1.000000000
0.1    1.100882761    1.105165340    1.105170917    1.105170918    1.105170918
0.2    1.215683691    1.221397414    1.221402757    1.221402758    1.221402758
0.3    1.344215954    1.349853693    1.349858806    1.349858807    1.349858807
0.4    1.486728788    1.491819774    1.491824696    1.491824697    1.491824697
0.5    1.644076058    1.648716648    1.648721270    1.648721270    1.648721270
0.6    1.817703124    1.822114450    1.822118799    1.822118800    1.822118800
0.7    2.009502352    2.013748665    2.013752706    2.013752707    2.013752707
0.8    2.221606488    2.225537184    2.225540927    2.225540928    2.225540928
0.9    2.456171364    2.459599642    2.459603110    2.459603111    2.459603111
1.0    2.715179783    2.718278782    2.718281828    2.718281828    2.718281828

Table 2
Approximate solutions for N = 4, 8, 12, 16 unknown nodal values of x_2(t) and the exact solutions of x_2(t) of experiment 1.

t      N = 4          N = 8          N = 12         N = 16         Exact solution
0.0    1.000000000    1.000000000    1.000000000    1.000000000    1.000000000
0.1    0.905723313    0.904838038    0.904837418    0.904837418    0.904837418
0.2    0.820032966    0.818731796    0.818730753    0.818730753    0.818730753
0.3    0.742445892    0.740819792    0.740818220    0.740818220    0.740818220
0.4    0.672317270    0.670322063    0.670320046    0.670320046    0.670320046
0.5    0.608952940    0.606533069    0.606530660    0.606530659    0.606530659
0.6    0.551670915    0.548814412    0.548811636    0.548811636    0.548811636
0.7    0.499830533    0.496588374    0.496585304    0.496585303    0.496585303
0.8    0.452851438    0.449332305    0.449328964    0.449328964    0.449328964
0.9    0.410231745    0.406573257    0.406569660    0.406569659    0.406569659
1.0    0.371565896    0.367883060    0.367879441    0.367879441    0.367879441

Table 3
Approximate solutions for N = 4, 8, 12, 16 unknown nodal values of y(t) and the exact solutions of y(t) of experiment 1.

t      N = 4          N = 8          N = 12         N = 16         Exact solution
0.0    0.000000000    0.000000000    0.000000000    0.000000000    0.000000000
0.1    0.195173604    0.200326484    0.200333498    0.200333500    0.200333500
0.2    0.395821359    0.402665190    0.402672003    0.402672005    0.402672005
0.3    0.602170032    0.609033862    0.609040585    0.609040586    0.609040586
0.4    0.815044690    0.821497925    0.821504650    0.821504651    0.821504651
0.5    1.035943636    1.042183962    1.042190609    1.042190610    1.042190610
0.6    1.266970168    1.273300521    1.273307163    1.273307164    1.273307164
0.7    1.510656850    1.517160813    1.517167402    1.517167403    1.517167403
0.8    1.769734375    1.776205393    1.776211963    1.776211964    1.776211964
0.9    2.046890045    2.053026858    2.053033450    2.053033451    2.053033451
1.0    2.344548474    2.350396156    2.350402386    2.350402387    2.350402387

We have solved this problem for λ = 15, with R_max = 25, R_min = 0.5 and different values of N. The results given in Table 1 demonstrate the convergence of the proposed method as N increases (see also Tables 2 and 3). A closer look at the results in these tables reveals that higher-order accuracy can be achieved by increasing N, which indicates that the method is convergent. The measured run times of the MQ approximation scheme for N = 4, 8, 12 and 16 unknown nodal values are 0.031 s, 0.109 s, 0.328 s and 0.655 s, respectively.

Zerroukat et al. [13] found that a constant shape parameter provides better accuracy, but the main problem is that the range of suitable constants has not been specified, and because the solutions are sensitive to the shape parameter, a good constant is difficult to find. Madych [22] proved that in certain circumstances very large values of a shape parameter are desirable; the range of large variable shape parameters is introduced in Eq. (19), whereas a large constant shape parameter is again difficult to find. Therefore, we prefer using variable shape parameters.

To illustrate that variable shape parameters outperform constant shape parameters, we consider each case. First, we chose some arbitrary constant shape parameters, solved experiment 1, and computed Ne for x_1(t), x_2(t) and y(t) with 16 terms (N = 16) of the approximate solutions. The results are given in Table 4; several constant shape parameters were tested, but we could not find a suitable constant, and large constants in particular performed poorly. In addition, we tested some variable shape parameters defined by Eqs. (19)–(21) (Table 5). In Table 5, the discrete relative L2 norm (Ne) given by Eq. (32) is computed; the results demonstrate that higher-order accuracy can be achieved by the MQ approximation scheme in the determined range of variable shape parameters.


Table 4
Some arbitrary constant shape parameters (R) and the discrete relative L2 norm Ne with 16 terms of the approximate solutions of experiment 1.

R           5          10         20         50         100        200        1000       10 000
Ne x_1(t)   2.2e−03    6.9e−01    1.0e−00    9.9e−01    1.0e−00    9.9e−01    1.0e−00    9.9e−01
Ne x_2(t)   5.3e−03    8.1e−02    9.9e−01    1.0e−00    9.9e−01    1.0e−00    9.9e−01    1.0e−00
Ne y(t)     3.5e−04    4.7e−03    6.5e−01    9.9e−01    9.9e−01    9.9e−00    9.9e−00    9.9e−01

Table 5
Some arbitrary variable shape parameters (R_max and R_min) and the discrete relative L2 norm Ne with 16 terms of the approximate solutions of experiment 1.

R_max       25        50        100       25        50        100       25        50        100
R_min       0.5       0.5       0.5       0.75      0.75      0.75      1.0       1.0       1.0
Ne x_1(t)   8.6e−09   8.4e−07   5.4e−08   8.8e−08   1.1e−07   1.2e−07   4.2e−07   9.5e−06   6.3e−07
Ne x_2(t)   1.2e−08   1.0e−06   5.4e−08   1.9e−07   1.7e−07   1.4e−07   7.4e−07   5.1e−06   9.4e−07
Ne y(t)     1.2e−09   1.9e−07   5.0e−08   1.1e−08   1.1e−07   1.9e−07   7.2e−08   4.3e−07   1.9e−07

Table 6
Some other variable shape parameters (R_max and R_min) and the discrete relative L2 norm Ne with 16 terms of the approximate solutions of experiment 1.

R_max       50        100       150       1000      1500      20 000    1000      1500      2000
R_min       50        50        50        0.5       0.5       0.5       0.25      0.25      0.25
Ne x_1(t)   9.9e−01   4.1e−01   1.0e−00   1.2e−03   3.0e−05   6.2e−05   3.8e−05   3.8e−05   6.1e−04
Ne x_2(t)   1.0e−00   1.2e−00   1.0e−00   2.2e−03   2.9e−05   4.4e−05   2.5e−05   2.5e−05   6.9e−04
Ne y(t)     9.9e−01   1.4e−01   1.9e−01   6.2e−05   1.2e−04   1.0e−03   3.2e−04   3.2e−04   2.4e−03

In addition, various variable shape parameters outside the determined range were tested; the results show that the method does not work well in these cases. The results in Table 6 demonstrate that variable shape parameters outside the defined range do not work as well as the ones inside it; nevertheless, they still work better than constant shape parameters. The above discussion demonstrates that the best constant shape parameters are difficult to find, whereas variable shape parameters work well in the determined range. Therefore, we prefer to use variable shape parameters to solve DAEs with the MQ approximation scheme.

Experiment 2. Consider the following DAE:

A(t) x′(t) + B(t) x(t) = 0,    0 ≤ t ≤ 1,

where

A(t) = [ 1   0   0   0
         0   1   0   0
         0   0   1   0
         0   0   0   0 ],

B(t) = [ 1     0    −t    1
        −1     1    −t²   t
        −t³    t²   −1    0
         t    −1     t   −1 ],

with the initial conditions x_1(0) = 1, x_2(0) = 0 and x_3(0) = 1. The exact solution is as follows:

x(t) = ( e^(−t), t e^(−t), e^t, t e^t )^T.
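Because this DAE is linear, the collocation equations (26) together with the three initial conditions form a linear system in the 4N MQ coefficients. The sketch below is our reconstruction of such a computation (the authors use Gauss elimination with total pivoting; here an ordinary least-squares solve is used because the assembled system is slightly over-determined), with the scattered points of Tables 7 and 8 and the shape parameters reported for this experiment.

```python
import numpy as np

# scattered collocation points and shape parameters reported for this experiment
tc = np.array([0.0, 0.001, 0.002, 0.003, 0.5, 0.997, 0.998, 0.999, 1.0])
N = tc.size
j = np.arange(1, N + 1)
R2 = 1.9**2 * (50.0**2 / 1.9**2) ** ((j - 1) / (N - 1))   # Eq. (20)

d = tc[:, None] - tc[None, :]
H = np.sqrt(d**2 + R2[None, :])          # basis values h_ij, Eq. (19)
dH = d / H                               # their t-derivatives, Eq. (22)

def A_mat(t):
    return np.diag([1.0, 1.0, 1.0, 0.0])

def B_mat(t):
    return np.array([[ 1.0,   0.0, -t,     1.0],
                     [-1.0,   1.0, -t**2,  t  ],
                     [-t**3,  t**2, -1.0,  0.0],
                     [ t,    -1.0,  t,    -1.0]])

# unknowns: 4 blocks of N MQ coefficients, one block per component of x
rows, rhs = [], []
for i, t in enumerate(tc):               # collocation of A x' + B x = 0 at each point
    Ai, Bi = A_mat(t), B_mat(t)
    for k in range(4):
        row = np.zeros(4 * N)
        for m in range(4):
            row[m * N:(m + 1) * N] = Ai[k, m] * dH[i] + Bi[k, m] * H[i]
        rows.append(row); rhs.append(0.0)
H0 = np.sqrt((0.0 - tc)**2 + R2)         # basis evaluated at t = 0
for m, val in [(0, 1.0), (1, 0.0), (2, 1.0)]:   # initial conditions
    row = np.zeros(4 * N); row[m * N:(m + 1) * N] = H0
    rows.append(row); rhs.append(val)

c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
x_num = H @ c.reshape(4, N).T            # approximate x_1..x_4 at the collocation points
x_exact = np.column_stack([np.exp(-tc), tc * np.exp(-tc), np.exp(tc), tc * np.exp(tc)])
print(np.max(np.abs(x_num - x_exact)))   # maximum error against the exact solution
```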





We have solved this experiment for R_max = 50, R_min = 1.9 and 9 scattered data points (N = 9). The numerical results given in Tables 7 and 8 demonstrate that the collocation points can be scattered and that the results do not depend on the choice of collocation points; in particular, the method remains accurate for the first four and the last four collocation points, which lie very close to each other. The numerical results show that the MQ approximation scheme works excellently for scattered data points as well. Even if some of the scattered data points are very close to each other, this method can still perform well, which is one of the most important advantages of the MQ approximation scheme. Moreover, the run time of the MQ approximation scheme for this experiment is 0.297 s.


Table 7
Approximate solutions and exact solutions of x_1(t) and x_2(t) of experiment 2.

t        x̂_1(t)      x_1(t)      x̂_2(t)      x_2(t)
0.000    0.9999999    1.0000000    0.0000000    0.0000000
0.001    0.9990004    0.9990005    0.0009990    0.0009990
0.002    0.9980020    0.9980019    0.0019960    0.0019960
0.003    0.9970044    0.9970044    0.0029910    0.0029910
0.500    0.6065306    0.6065306    0.3032657    0.3032653
0.997    0.3689848    0.3689847    0.3678779    0.3678777
0.998    0.3686160    0.3686159    0.3678788    0.3678787
0.999    0.3682476    0.3682475    0.3678794    0.3678792
1.000    0.3678795    0.3678794    0.3678796    0.3678794

Table 8
Approximate solutions and exact solutions of x_3(t) and x_4(t) of experiment 2.

t        x̂_3(t)      x_3(t)      x̂_4(t)      x_4(t)
0.000    1.0000000    1.0000000    0.0000000    0.0000000
0.001    1.0010005    1.0010005    0.0010010    0.0010010
0.002    1.0020020    1.0020020    0.0020040    0.0020040
0.003    1.0030045    1.0030045    0.0030090    0.0030090
0.500    1.6487215    1.6487212    0.8243603    0.8243606
0.997    2.7101393    2.7101392    2.7020088    2.7020087
0.998    2.7128508    2.7128506    2.7074251    2.7074249
0.999    2.7155650    2.7155649    2.7128494    2.7128493
1.000    2.7182820    2.7182818    2.7182819    2.7182818

6. Conclusion

In the present work, an MQ approximation scheme is proposed to solve DAEs. The results reveal that the technique introduced here is effective and convenient for solving DAEs, because the method is easy to implement and yields the desired accuracy with only a few terms. Other advantages of the present method are its low run time, the minimal number of data points required in the domain and its applicability to scattered collocation points that lie very close to each other. All of these advantages of the MQ approximation scheme suggest that the method is a fast and powerful tool that is convenient for computer implementation.

References

[1] K.E. Brenan, S.L. Campbell, L.R. Petzold, Numerical Solution of Initial Value Problems in Differential Algebraic Equations, Elsevier, New York, 1989.
[2] E. Hairer, S.P. Norsett, G. Wanner, Solving Ordinary Differential Equations II. Stiff and Differential–Algebraic Problems, Springer, New York, 1992.
[3] L.R. Petzold, Numerical solution of differential–algebraic equations, Adv. Numer. Anal. 4 (1995).
[4] C.W. Gear, L.R. Petzold, ODE systems for the solution of differential–algebraic systems, SIAM J. Numer. Anal. 21 (1984) 716–728.
[5] U.M. Ascher, L.R. Petzold, Computer Methods for Ordinary Differential Equations and Differential Algebraic Equations, Society for Industrial and Applied Mathematics, Philadelphia, 1998.
[6] E. Celik, On the numerical solution of differential algebraic equation with index-2, Appl. Math. Comput. 156 (2004) 541–550.
[7] U.M. Ascher, R.J. Spiteri, Collocation software for boundary-value differential algebraic equations, SIAM J. Sci. Comput. 15 (1994) 938–952.
[8] U.M. Ascher, L.R. Petzold, Projected implicit Runge–Kutta methods for differential algebraic equations, SIAM J. Numer. Anal. 28 (1991) 1097–1120.
[9] R.L. Hardy, Multiquadric equations of topography and other irregular surfaces, J. Geophys. Res. 76 (1971) 1905.
[10] R.L. Hardy, Theory and applications of the multiquadric bi-harmonic method: 20 years of discovery, Comput. Math. Appl. 19 (8–9) (1990) 163.
[11] R. Franke, Scattered data interpolation: tests of some methods, Math. Comp. 38 (1982) 181.
[12] A. Ambrosetti, G. Prodi, A Primer of Nonlinear Analysis, Cambridge University Press, 1993.
[13] M. Zerroukat, H. Power, C.S. Chen, A numerical method for heat transfer problems using collocation and radial basis functions, Internat. J. Numer. Methods Engrg. 42 (1998) 1263.
[14] N. Mai-Duy, T. Tran-Cong, Numerical solution of differential equations using multiquadric radial basis function networks, Neural Netw. 14 (2001) 185.
[15] A. Aminataei, M.M. Mazarei, Numerical solution of elliptic partial differential equations using direct and indirect radial basis function networks, Eur. J. Sci. Res. 2 (2) (2005) 5.
[16] A. Aminataei, M.M. Mazarei, Numerical solution of Poisson's equation using radial basis function networks on the polar coordinate, Comput. Math. Appl. 11 (56) (2008) 2887–2895.
[17] C.A. Micchelli, Interpolation of scattered data: distance matrices and conditionally positive definite functions, Constr. Approx. 2 (11) (1986).
[18] W.R. Madych, S.A. Nelson, Multivariable interpolation and conditionally positive definite functions II, Math. Comp. 54 (1990) 211.
[19] E.J. Kansa, Multiquadrics—a scattered data approximation scheme with applications to computational fluid dynamics—I. Surface approximations and partial derivative estimates, Comput. Math. Appl. 19 (8–9) (1990) 127.
[20] E.J. Kansa, Multiquadrics—a scattered data approximation scheme with applications to computational fluid dynamics—II. Solutions to hyperbolic, parabolic and elliptic partial differential equations, Comput. Math. Appl. 19 (8–9) (1990) 147.
[21] A. Aminataei, M. Sharan, Using multiquadric method in the numerical solution of ODEs with a singularity point and PDEs in one and two dimensions, Eur. J. Sci. Res. 10 (2) (2005) 19.
[22] W.R. Madych, Miscellaneous error bounds for multiquadric and related interpolants, Comput. Math. Appl. 24 (12) (1992) 121.