Error bounds estimate of nonlinear boundary value problems using method of weighted residuals with genetic algorithms


Applied Mathematics and Computation 144 (2003) 261–271. doi:10.1016/S0096-3003(02)00405-8. www.elsevier.com/locate/amc

Jin-Mu Lin a, Chao-Kuang Chen b,*

a Department of Mechanical Engineering, Far East College, Tainan County, Taiwan, ROC
b Department of Mechanical Engineering, National Cheng-Kung University, Tainan, Taiwan, ROC

* Corresponding author. E-mail address: [email protected] (C.-K. Chen).

Abstract

An error bounds estimate procedure for solving nonlinear boundary value problems of differential equations is presented in this paper. A good approximate solution and error bounds can be obtained by the proposed approach, which combines the method of weighted residuals with genetic algorithms. A nonlinear boundary value problem is studied as an example, and the efficiency, accuracy, and simplicity of the approach are illustrated. The results show that the proposed method can easily be extended to solve a wide range of physical engineering problems. © 2002 Elsevier Inc. All rights reserved.

Keywords: Boundary value problem; Method of weighted residuals; Genetic algorithms

1. Introduction

The method of weighted residuals (MWR), based on the governing differential equation, is a mathematical procedure for obtaining approximate solutions of physical engineering problems. Experience and intuition are sometimes required for a good first guess of the trial function, from which it is possible to proceed to successively improved approximations. The analytical form of the approximate solution is often more useful than one generated by numerical integration, and the approximate solution usually requires less computation time to generate [1]. In comparison with the finite element method and other current methods [2], the MWR does not rely on the existence of a variational principle for which stationarity would be sought. It has the advantages of program simplicity, shorter computer run time, and smaller computational error.

With the rapid development of engineering technology, a reliable and accurate solution to a physical problem must be guaranteed. Unfortunately, the conventional MWR procedure does not allow an accurate error analysis. For this problem, Appl and Hung [3] proposed a bounding principle (maximum principle) and a convergent procedure to improve error bounds. A mathematical programming approach for improving error bounds, based on the maximum principle of differential equations, was proposed by Finlayson and Scriven [4]. Appl's approach requires the solution of algebraic equations and is not applicable to nonlinear problems. Finlayson's approach is applicable to the nonlinear case, but it requires a nonlinear programming algorithm to solve the resulting mathematical programming problems.

Genetic algorithms (GAs), a class of probabilistic search algorithms for optimization problems, were proposed by John Holland and his coworkers in the 1970s [5–7]. GAs start with a population of randomly generated candidates and evolve toward better solutions by applying genetic operators modeled on the genetic processes occurring in nature. In the last decade, the GA has emerged as a practical and robust search method [8–11].

This paper attempts to estimate the error bounds of a nonlinear boundary value problem by combining the MWR with GAs; the task is formulated as a mathematical programming problem. It is shown that the error bounds are improved by the proposed methodology.

2. Problem formulation for error bounds estimate

2.1. The method of weighted residuals

Consider a physical engineering problem described by the following differential equation and boundary condition:

$$F[u(x)] + f(x) = 0 \quad \text{in the domain } \Omega \tag{1}$$

$$G[u(x)] - g(x) = 0 \quad \text{on the boundary } S \tag{2}$$

Let a trial function of the form

$$Z(x) = \sum_{i=0}^{n} c_i z_i(x) \tag{3}$$


be substituted into (1) and (2); then the residuals corresponding to the domain Ω and the boundary S can be written as

$$R_\Omega = F[Z(x)] + f(x) \tag{4}$$

$$R_S = G[Z(x)] - g(x) \tag{5}$$

In general, the residuals are not zero across the entire domain of the problem, and they can be used as an indicator in obtaining the best approximate solution. In other words, the pointwise values of the residuals can be minimized by adjusting the values of the undetermined parameters c_i. For this purpose, weighted residuals of the following form are used:

$$\int_\Omega R_\Omega W_\Omega \, d\Omega = 0 \tag{6}$$

$$\int_S R_S W_S \, dS = 0 \tag{7}$$
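As a concrete illustration of how (6) and (7) determine the undetermined parameters, the short Python sketch below applies the collocation criterion (a Dirac-delta weighting, one of the choices listed in Table 1 below) to the toy linear problem u'' − u = 0, u(0) = 0, u(1) = 1, whose exact solution is sinh(x)/sinh(1). The trial function and collocation points are illustrative choices of this sketch, not taken from the paper.

```python
import numpy as np

# Toy problem: u'' - u = 0 on (0, 1), u(0) = 0, u(1) = 1.
# Trial function Z(x) = x + c1*x*(1 - x) + c2*x^2*(1 - x) satisfies both boundary
# conditions exactly for any c1, c2, so the boundary residual (7) is identically zero.
def Z(x, c):
    return x + c[0]*x*(1 - x) + c[1]*x**2*(1 - x)

def Zxx(x, c):
    # Second derivatives of the trial terms: (x(1-x))'' = -2, (x^2(1-x))'' = 2 - 6x.
    return c[0]*(-2.0) + c[1]*(2.0 - 6.0*x)

def residual(x, c):
    # Domain residual R = Z'' - Z for the toy equation.
    return Zxx(x, c) - Z(x, c)

# Collocation: force the residual to vanish at as many points as there are unknowns.
xc = np.array([1.0/3.0, 2.0/3.0])
# The residual is linear in c, so the collocation equations form a linear system A c = b.
A = np.array([[residual(x, [1, 0]) - residual(x, [0, 0]),
               residual(x, [0, 1]) - residual(x, [0, 0])] for x in xc])
b = np.array([-residual(x, [0, 0]) for x in xc])
c = np.linalg.solve(A, b)

xs = np.linspace(0.0, 1.0, 5)
print("coefficients:", c)
print("max error on grid:", np.abs(Z(xs, c) - np.sinh(xs)/np.sinh(1.0)).max())
```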

The weighting functions W_Ω and W_S can be chosen in several ways, and each choice corresponds to a different criterion of the MWR. Five basic choices for the weighting functions are summarized in Table 1.

Table 1
The basic methods of the MWR

Method | Integral form | Weighting function
Least squares method | $\int_\Omega R\,(\partial R/\partial c_j)\, d\Omega = 0$ | $\partial R/\partial c_j$
Collocation method | $\int_\Omega R\,\delta(x - x_j)\, d\Omega = R(x_j) = 0$ | $\delta(x - x_j)$, j = 0, 1, ..., n
Subdomain method | $\int_{\Omega_j} R\, W_j\, d\Omega_j = 0$ | $W_j = 1$ in $\Omega_j$; $W_j = 0$ outside $\Omega_j$
Galerkin method | $\int_\Omega R\, z_j\, d\Omega = 0$ | $z_j$, j = 0, 1, ..., n
Method of moments | $\int_\Omega R\, x^j\, d\Omega = 0$ | $x^j$, j = 0, 1, ..., n

2.2. Formulation of error bounds

The maximum principle of differential equations provides information about the solution of a differential equation without any explicit knowledge of the solution itself. In particular, the maximum principle is a useful tool for determining the error bounds of approximate solutions to physical problems. Consider a nonlinear boundary value problem

$$u'' + H(x, u, u') = 0, \quad x \in (a, b) \tag{8}$$

with boundary conditions

$$\left.\begin{aligned} u'(a)\cos\theta + u(a)\sin\theta &= c_1 \\ u'(b)\cos\phi + u(b)\sin\phi &= c_2 \end{aligned}\right\} \tag{9}$$


where 0 ≤ θ ≤ π/2, 0 ≤ φ ≤ π/2, and θ and φ are not both zero. Let Z(x) be the trial function of the nonlinear boundary value problem. The residuals corresponding to this problem can be written as

$$R[Z] = Z'' + H(x, Z, Z'), \quad x \in (a, b) \tag{10}$$

$$\left.\begin{aligned} R_{S_1}[Z] &= Z'(a)\cos\theta + Z(a)\sin\theta - c_1 \\ R_{S_2}[Z] &= Z'(b)\cos\phi + Z(b)\sin\phi - c_2 \end{aligned}\right\} \tag{11}$$

Suppose the trial function Z(x) satisfies the conditions

$$\left.\begin{aligned} R[Z] &\ge 0 \\ R_{S_1}[Z] &\ge 0 \\ R_{S_2}[Z] &\ge 0 \end{aligned}\right\} \tag{12}$$

and

$$H,\ \frac{\partial H}{\partial u},\ \frac{\partial H}{\partial u'}\ \text{are continuous},\qquad \frac{\partial H}{\partial u} \le 0; \tag{13}$$

then it follows from the maximum principle, as detailed in [12], that

$$Z(x) \ge u(x), \quad x \in [a, b] \tag{14}$$

where Z(x) can be considered as an upper bound on u(x). The conditions (12) and (13) are sufficient to make Z(x) an upper bound of u(x). Consequently, if a function is found that satisfies the conditions (12) with the inequalities reversed, then that function is a lower bound of u(x). Suppose Z_u(x) and Z_l(x) are, respectively, the upper and lower bounds of the solution u(x); then the approximate solution Z(x) and the error bound E_b of the problem can be determined as

$$Z = (Z_u + Z_l)/2 \tag{15}$$

$$E_b = Z_u - Z_l \tag{16}$$

so that |u(x) − Z(x)| ≤ E_b(x)/2 at every point of [a, b].

2.3. Mathematical programming approach for error bounds estimate

The MWR provides an approach for constructing the error bounds mentioned in the preceding section. If the residuals of a trial function satisfy the conditions (12) at each point, then the trial function is an upper bound. On the other hand, if similar conditions hold with the inequalities reversed, then the trial function is a lower bound. The following approach, based on Finlayson and Scriven [4], ensures the satisfaction of the conditions (12). Suppose a trial function of the form

$$Z(x) = \sum_{i=0}^{n} c_i z_i(x) \tag{17}$$


is applied; then the residuals of the trial function can be written as in (10) and (11). In general, under the conventional MWR the residuals are functions of position that oscillate around zero. Therefore, the conditions (12) are not satisfied, and the approximate solution is not necessarily either an upper or a lower bound. To improve this situation, a mathematical programming problem for the upper bound Z_u(x), using the collocation method, can be stated as

$$\min \; Z(x_j) \tag{18}$$

such that

$$\left.\begin{aligned} R[Z(x_i)] &= Z'' + H(x_i, Z, Z') \ge \varepsilon, \quad i = 0, 1, \ldots, n \\ R_{S_1}[Z] &= Z'(a)\cos\theta + Z(a)\sin\theta - c_1 \ge \varepsilon \\ R_{S_2}[Z] &= Z'(b)\cos\phi + Z(b)\sin\phi - c_2 \ge \varepsilon \end{aligned}\right\} \tag{19}$$

where ε is a small positive value that is applied to shift the residuals and to improve the accuracy of the approximate solution obtained by minimizing the upper bound. The corresponding lower bound Z_l(x) can be determined from

$$\max \; Z(x_j) \tag{20}$$

such that

$$\left.\begin{aligned} R[Z(x_i)] &= Z'' + H(x_i, Z, Z') \le \varepsilon, \quad i = 0, 1, \ldots, n \\ R_{S_1}[Z] &= Z'(a)\cos\theta + Z(a)\sin\theta - c_1 \le \varepsilon \\ R_{S_2}[Z] &= Z'(b)\cos\phi + Z(b)\sin\phi - c_2 \le \varepsilon \end{aligned}\right\} \tag{21}$$

where the shift parameter now takes negative values (cf. Table 2).

Then the approximate solution Z(x) and the error bound E_b can be obtained from (15) and (16), respectively.
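The following Python sketch sets up the upper-bound programming problem (18) and (19) for a toy instance of (8) and (9) and hands it to an off-the-shelf constrained optimizer. The toy equation, the polynomial trial function, the collocation points, and the use of SciPy's SLSQP routine are all illustrative assumptions of this sketch; the paper itself solves the programming problem with the GA outlined in Section 3.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of (8)-(9): u'' + H = 0 with H(x, u, u') = -u, i.e. u'' - u = 0,
# boundary conditions u(0) = 0, u(1) = 1 (theta = phi = pi/2, c1 = 0, c2 = 1).
eps = 1e-3                       # shift parameter epsilon in (19)
xc = np.linspace(0.0, 1.0, 11)   # collocation points x_i

def Z(x, c):                     # polynomial trial function, Eq. (17)
    return sum(ci * x**i for i, ci in enumerate(c))

def Zxx(x, c):                   # its second derivative
    return sum(i * (i - 1) * ci * x**(i - 2) for i, ci in enumerate(c) if i >= 2)

def residuals(c):
    # Interior residual R[Z] = Z'' + H(x, Z, Z') = Z'' - Z at the collocation
    # points, followed by the two boundary residuals of (11).
    r_dom = Zxx(xc, c) - Z(xc, c)
    r_bc = np.array([Z(0.0, c) - 0.0, Z(1.0, c) - 1.0])
    return np.concatenate([r_dom, r_bc])

# Upper-bound programming problem (18)-(19): minimize Z(x_j) subject to all
# shifted residuals being nonnegative.  The lower bound (20)-(21) is obtained
# by maximizing instead and reversing the inequalities.
xj = 0.5
res = minimize(lambda c: Z(xj, c),
               x0=np.array([0.0, 1.0, 0.0, 0.0]),
               constraints=[{"type": "ineq", "fun": lambda c: residuals(c) - eps}],
               method="SLSQP")
print("upper-bound coefficients:", res.x)
print("Z_u(0.5) =", Z(xj, res.x))
```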

3. Outline of genetic algorithms

In general, a simple GA can be formulated as

$$\mathrm{GAs} = (P, St, \rho, (Se, Cr, Mu), \mathit{fit}, (P_c, P_m)) \tag{22}$$

where St is a string representation of points, called chromosomes, contained in a population P in the string space; ρ is a coding function that maps the search space into a space of strings; selection (Se), crossover (Cr), and mutation (Mu) are a set of operators for generating new populations; a fitness function (fit) is used to evaluate the search points; and the probabilities of crossover P_c and mutation P_m stochastically control the genetic operators. A simple GA operates according to the following steps (a sketch in code follows the list); the operation diagram is shown in Fig. 1.

1. Initialization: An initial population P of PS chromosomes St is randomly generated, biased toward the central region of the search window.


Fig. 1. Operation diagram of the simple GA.

2. Evaluation: The fitness fit of each chromosome in the population is evaluated. Typically, a decoding function maps the string into the real search point, and the fitness function then maps the search point into a real number.

3. Selection: Based on the fitness of the strings in the current population, pairs of parents are selected and undergo the subsequent genetic operations to produce pairs of child strings that form a new population (the next generation). Following "survival of the fittest", the probability of a chromosome being selected is proportional to its fitness.

4. Crossover: A simple crossover proceeds as follows. An integer position k along the string is selected at random between 1 and the chromosome length l minus 1. Two new strings are created by swapping all characters between positions k + 1 and l, inclusive. For example, consider strings St1 and St2 from a mating pool:

St1 = 01|101
St2 = 11|000

Two new strings are obtained through the crossover procedure as

St1' = 01000
St2' = 11101

The occurrence of the crossover operation is controlled by the parameter P_c.

5. Mutation: A background operator selects a gene at random on a given chromosome and mutates the allele of that gene. Mutation is used to recover genes that may have been lost from the population for purely stochastic reasons. The probability of occurrence of mutation is controlled by the parameter P_m.

6. Termination: If the termination criterion is satisfied, stop the algorithm; otherwise go to Step 2.
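The following Python sketch implements the simple GA of steps 1–6 above for maximizing a generic fitness function over real-valued coefficients encoded as binary strings. The population size, string length, probabilities, search window, and the placeholder fitness are illustrative values of this sketch, not the settings used in the paper (those are listed in Table 4); refinements such as sigma-truncation fitness scaling are omitted.

```python
import random

# --- illustrative settings (not the paper's Table 4 values) ---
POP_SIZE, N_GENES, STRING_LEN = 50, 2, 16       # 2 coefficients, 16 bits each
PC, PM, MAX_GEN = 0.8, 0.01, 200
LO, HI = -2.0, 2.0                              # search window per coefficient

def decode(chrom):
    """Map a binary string to real coefficients in [LO, HI] (the coding function rho)."""
    vals = []
    for g in range(N_GENES):
        bits = chrom[g*STRING_LEN:(g + 1)*STRING_LEN]
        frac = int(bits, 2) / (2**STRING_LEN - 1)
        vals.append(LO + frac*(HI - LO))
    return vals

def fitness(chrom):
    # Placeholder fitness; in Section 4 this is C_max - [C1*Z_u(0.1) + C2*sum(P_i)].
    a, b = decode(chrom)
    return -((a - 1.0)**2 + (b + 0.5)**2)       # maximum at a = 1, b = -0.5

def select(pop, fits):
    """Roulette-wheel selection; fitnesses are shifted so the weights are nonnegative."""
    shift = min(fits)
    weights = [f - shift + 1e-9 for f in fits]
    return random.choices(pop, weights=weights, k=2)

def crossover(p1, p2):
    if random.random() < PC:
        k = random.randint(1, len(p1) - 1)      # cut point between 1 and l - 1
        return p1[:k] + p2[k:], p2[:k] + p1[k:]
    return p1, p2

def mutate(chrom):
    return "".join(b if random.random() > PM else ("1" if b == "0" else "0")
                   for b in chrom)

pop = ["".join(random.choice("01") for _ in range(N_GENES*STRING_LEN))
       for _ in range(POP_SIZE)]
for gen in range(MAX_GEN):
    fits = [fitness(c) for c in pop]
    new_pop = []
    while len(new_pop) < POP_SIZE:
        c1, c2 = crossover(*select(pop, fits))
        new_pop.extend([mutate(c1), mutate(c2)])
    pop = new_pop[:POP_SIZE]

best = max(pop, key=fitness)
print("best coefficients:", decode(best), "fitness:", fitness(best))
```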

4. An alternative approach to the nonlinear boundary value problem

The procedure for estimating the error bounds of a nonlinear boundary value problem by combining the MWR with GAs is as follows.

(1) Construct the error bound conditions of the nonlinear boundary value problem based on the maximum principle.
(2) Formulate the error bounds estimate problem as a mathematical programming problem using the MWR.
(3) Find the optimal solution of the mathematical programming problem using a GA.

Example. Consider a nonlinear boundary value problem of the form

$$\left.\begin{aligned} &\theta'' + H(x, \theta, \theta') = \frac{d^2\theta}{dx^2} - \alpha(\theta - 1) - \beta\,\theta_T^3(\theta^4 - 1) = 0 \\[2pt] &-\left.\frac{d\theta}{dx}\right|_{x=0} = t\,(\theta_2 - \theta) \\[2pt] &\left.\frac{d\theta}{dx}\right|_{x=1} = 0 \end{aligned}\right\} \tag{23}$$

with the following parameters:

$$\alpha = 0.17652, \quad \beta = 5.00434 \times 10^{-10}, \quad t = 1.58172, \quad \theta_1 = 300, \quad \theta_2 = 2$$

where H, ∂H/∂u, and ∂H/∂u' are continuous in (0, 1) and ∂H/∂u < 0; hence the upper and lower bounds for this example can be estimated by following (18)–(21). Suppose the following trial function is applied:

$$Z_u(x) = a_0 + \sum_{i=1}^{3} a_i (x - 1)^2 x^i + a_4\left(\frac{x^3}{3} - \frac{x^2}{2}\right) \tag{24}$$

Then the residuals of the problem can be defined as

$$\left.\begin{aligned} R[Z_u(x)] &= Z_u'' + H(x, Z_u, Z_u') \\ R_S[Z_u(x)] &= -Z_u'(0) - t\,(\theta_2 - Z_u(0)) \end{aligned}\right\} \tag{25}$$

To estimate the upper bound of the solution θ(x), a mathematical programming problem can be obtained using (18) and (19) as follows.


$$\min \; Z_u(0.1) \tag{26}$$

such that

$$\left.\begin{aligned} R_i[Z_u(x_i)] &= Z_u'' + H(x_i, Z_u, Z_u') - \varepsilon \ge 0, \quad x_i = 0.1\,i, \quad i = 1, 2, \ldots, 9 \\ R_{10}[Z_u(0)] &= -Z_u'(0) - t\,(\theta_2 - Z_u(0)) - \varepsilon \ge 0 \end{aligned}\right\} \tag{27}$$
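In code, the constraint functions of (27) can be evaluated directly from the trial function (24). The Python sketch below does this with the parameters of (23); two assumptions of the sketch are that θ_T is identified with θ₁ = 300 and that the boundary-residual sign follows the reconstruction used in this text. The coefficient vector used at the end is the ε = 0.0006 upper-bound row of Table 2, read as a₀ through a₄ in order.

```python
import numpy as np

# Parameters of the example (Eq. (23)); theta_T is taken as theta_1 = 300 here,
# which is an assumption of this sketch.
alpha, beta, t = 0.17652, 5.00434e-10, 1.58172
theta_T, theta2 = 300.0, 2.0
eps = 0.0006
xi = 0.1 * np.arange(1, 10)          # interior collocation points x_i = 0.1 i

def Zu(x, a):
    """Trial function (24)."""
    return (a[0] + sum(a[i] * (x - 1.0)**2 * x**i for i in (1, 2, 3))
            + a[4] * (x**3/3.0 - x**2/2.0))

def dZu(x, a):
    return (sum(a[i] * (2.0*(x - 1.0)*x**i + i*(x - 1.0)**2 * x**(i - 1))
                for i in (1, 2, 3)) + a[4]*(x**2 - x))

def d2Zu(x, a):
    return (sum(a[i] * (2.0*x**i + 4.0*i*(x - 1.0)*x**(i - 1)
                        + i*(i - 1)*(x - 1.0)**2 * x**(i - 2)) for i in (1, 2, 3))
            + a[4]*(2.0*x - 1.0))

def H(theta):
    return -alpha*(theta - 1.0) - beta*theta_T**3*(theta**4 - 1.0)

def shifted_residuals(a):
    """Left-hand sides of the constraints (27); all must be >= 0 for an upper bound."""
    r_dom = d2Zu(xi, a) + H(Zu(xi, a)) - eps
    r_bc = -dZu(0.0, a) - t*(theta2 - Zu(0.0, a)) - eps   # boundary residual, as reconstructed
    return np.append(r_dom, r_bc)

# Coefficients reported in Table 2 for eps = 0.0006 (upper bound), assuming the
# columns are a0 through a4 in order:
a_star = np.array([1.84204, -0.250464, 0.005388, -0.000477, 0.723002])
print(shifted_residuals(a_star))
```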

Table 2
Comparison of the optimal coefficients

Parameter ε | a0 | a1 | a2 | a3 | a4

Optimal coefficients of the upper bound
0.0006 | 1.84204 | −0.250464 | 0.005388 | −0.000477 | 0.723002
0.006 | 1.84714 | −0.247782 | 0.005614 | −0.000873 | 0.715071
0.06 | 1.8973 | −0.222498 | 0.005154 | −0.000539 | 0.640547

Optimal coefficients of the lower bound
−0.0006 | 1.83733 | −0.228499 | 0.0108906 | −0.0100038 | 0.657393
−0.006 | 1.83241 | −0.230017 | 0.007766 | −0.0047259 | 0.663829
−0.06 | 1.77531 | −0.25638 | 0.0058517 | −0.002576 | 0.74417

Table 3
Comparison of minimum and maximum residuals

Upper bound | | Lower bound |
Parameter ε | Minimum residual | Parameter ε | Maximum residual
0.0006 | 0.00060396 | −0.0006 | 0.03319
0.006 | 0.0059958 | −0.006 | 0.02276
0.06 | 0.06 | −0.06 | −0.0344997

Fig. 2. The residuals of Z_u with different ε.


Then the GA can be applied to determine the upper bound of θ(x) with a fitness function. In this example, the fitness function is defined as

$$\mathit{fit}(a) = C_{\max} - \left[ C_1\, Z_u(0.1) + C_2 \sum_{i=1}^{10} P_i(a) \right] \tag{28}$$

where P_i(a) is a penalty function and

$$P_i(a) = \begin{cases} -R_i[Z_u], & \text{if } R_i[Z_u] < 0 \\ 0, & \text{if } R_i[Z_u] \ge 0 \end{cases} \tag{29}$$

Fig. 3. The residuals of Z_l with different ε.

Fig. 4. The error bounds of the approximate solution.
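A minimal sketch of the fitness evaluation (28) and (29), continuing the residual sketch given after Eq. (27) (it reuses Zu and shifted_residuals defined there). The weights C1, C2, and C_max are the values listed in Table 4.

```python
import numpy as np

# Continues the previous sketch: Zu and shifted_residuals are defined there.
C1, C2, C_MAX = 1.0, 50_000.0, 50_000.0   # weight of cost, weight of penalty, C_max (Table 4)

def fitness(a):
    """Fitness (28) with the penalty (29): only negative (violated) residuals are penalized."""
    R = shifted_residuals(a)               # constraint functions R_i[Z_u] of (27), i = 1, ..., 10
    penalty = np.sum(np.where(R < 0.0, -R, 0.0))
    return C_MAX - (C1 * Zu(0.1, a) + C2 * penalty)
```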


In a similar way, this procedure is easily extended to estimate the lower bound of the solution θ(x). Table 2 compares the optimal coefficients obtained with different ε, and Table 3 compares the corresponding minimum and maximum residuals. Figs. 2 and 3 show how the residuals converge with respect to the parameter ε. Fig. 4 shows the error bounds of the approximate solution with ε = 0.0006 and ε = 0.06, which satisfy the conditions of the error bounds formulation. Fig. 5 shows the approximate solution. All cases were obtained using the GA with the parameters listed in Table 4.
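For illustration, the short self-contained Python sketch below combines the ε = 0.0006 upper-bound and ε = −0.0006 lower-bound coefficient sets of Table 2 into the approximate solution and error bound of Eqs. (15) and (16); reading the Table 2 columns as a₀ through a₄ in order is an assumption of this sketch.

```python
import numpy as np

def Zu(x, a):   # trial function (24), repeated here so the sketch is self-contained
    return (a[0] + sum(a[i]*(x - 1.0)**2 * x**i for i in (1, 2, 3))
            + a[4]*(x**3/3.0 - x**2/2.0))

a_up = np.array([1.84204, -0.250464, 0.005388, -0.000477, 0.723002])   # Table 2, eps = 0.0006
a_lo = np.array([1.83733, -0.228499, 0.0108906, -0.0100038, 0.657393]) # Table 2, eps = -0.0006

x = np.linspace(0.0, 1.0, 11)
Z_approx = (Zu(x, a_up) + Zu(x, a_lo)) / 2.0   # Eq. (15)
E_b = Zu(x, a_up) - Zu(x, a_lo)                # Eq. (16)
print(np.column_stack([x, Z_approx, E_b]))
```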

Fig. 5. The approximate solution.

Table 4
The parameters of the GA for the numerical example

Parameter | Value/method
Population size | 200
Probability of crossover | 0.8
Probability of mutation | 0.01
Coding method | Binary coding
String length of a coefficient | 20
Method of fitness scaling | Sigma truncation
Maximum generation | 2000
Weight of cost (C1) | 1.0
Weight of penalty (C2) | 50,000
C_max | 50,000


5. Conclusions

1. The present example and its results indicate that the use of this procedure eliminates the need for a separate error analysis, which is usually difficult for nonlinear boundary value problems.
2. The results shown in Figs. 2 and 3 indicate that the conventional MWR cannot satisfy the conditions (12) over the whole domain of the problem, which decreases the accuracy of the approximate solution.
3. For a nonlinear mathematical programming problem, the GA provides good results and is easy to apply.

References

[1] B.A. Finlayson, L.E. Scriven, The method of weighted residuals: a review, Appl. Mech. Rev. 19 (9) (1966) 735–748.
[2] Y.C. Zhang, X. He, Analysis of free vibration and buckling problems of beams and plates by discrete least-squares method using B5-spline as trial functions, Comput. Struct. 31 (2) (1989) 115–119.
[3] F.C. Appl, H.M. Hung, A principle for convergent upper and lower bounds, Int. J. Mech. Sci. 6 (1964) 289–381.
[4] B.A. Finlayson, L.E. Scriven, Upper and lower bounds for solutions to the transport equations, AIChE J. 12 (6) (1966) 1151–1157.
[5] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[6] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
[7] J.H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, 1992.
[8] M. Srinivas, L.M. Patnaik, Genetic algorithms: a survey, Computer 27 (6) (1994) 17–26.
[9] W.M. Jenkins, Towards structural optimization via the genetic algorithm, Comput. Struct. 40 (5) (1991) 1321–1327.
[10] L.H. Christopher, E.B. Donald, A parallel heuristic for quadratic assignment problems, Comput. Oper. Res. 18 (3) (1991) 275–289.
[11] C.R. Reeves, Modern Heuristic Techniques for Combinatorial Problems, Halsted Press, New York, 1993.
[12] M.H. Protter, Maximum Principles in Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, 1967.