Error Bounds Estimate of Weighted Residuals Method Using Genetic Algorithms

Cha'o-Kuang Chen and Jin-Mu Lin
Department of Mechanical Engineering, National Cheng-Kung University, Tainan, Taiwan, Republic of China

and

Chieh-Li Chen
Institute of Aeronautics & Astronautics, National Cheng-Kung University, Tainan, Taiwan, Republic of China
ABSTRACT
An error bounds estimate procedure for solving boundary value problems of differential equations is presented in this paper. A good approximate solution and error bounds can be obtained by the proposed approach, which combines the method of weighted residuals (MWR) with genetic algorithms (GAs). A nonlinear boundary value problem is studied as an example, illustrating the efficiency, accuracy, and simplicity of this approach. It is shown that the proposed method can be easily extended to a wide range of physical engineering problems. © Elsevier Science Inc., 1997

APPLIED MATHEMATICS AND COMPUTATION 81:207-219 (1997)

1. INTRODUCTION

The method of weighted residuals (MWR), based on the governing differential equation, is a mathematical procedure for obtaining approximate solutions of physical engineering problems. Experience and intuition are sometimes required for a good first guess of the trial function, from which it is possible to proceed to successively improved approximations. The analytical form of the approximate solution is often more useful than solutions generated by numerical integration, and the approximate solution usually requires less computation time to generate [1]. In comparison with the finite element method and other current methods [2], the MWR does not rely on the existence of a variational principle for which stationarity would be sought. It has the advantages of program simplicity, shorter computer run time, and smaller computational error. With the rapid development of engineering and technology, a reliable and accurate solution to a physical problem must be guaranteed. Unfortunately, the conventional MWR procedure does not allow accurate error analysis. For this problem, Appl and Hung [3] proposed a bounding principle (maximum principle) and a convergent procedure to improve error bounds. A mathematical programming approach based on the maximum principle of differential equations for improving error bounds was proposed by Finlayson [4]. Appl's approach requires solving algebraic equations and is not applicable to nonlinear problems. Finlayson's approach is applicable to nonlinear cases, but it requires a nonlinear programming algorithm to solve the mathematical programming problems. Genetic algorithms (GAs), a class of probabilistic search algorithms for optimization problems, were proposed by John Holland and his coworkers in the 1970s [5-7]. GAs start with a population of randomly generated candidates and evolve toward better solutions by applying genetic operators modeled on the genetic processes occurring in nature. In the last decade, the GA has emerged as a practical and robust search method [8-11]. This paper attempts to estimate the error bounds of a nonlinear boundary value problem by combining the mathematical programming approach with GAs. It is shown that the MWR is improved by the proposed methodology.

2. PROBLEM FORMULATION FOR ERROR BOUNDS ESTIMATE
2.1. The Method of Weighted Residuals

Consider a physical engineering problem described by the following differential equation and boundary condition:

F[u(x)] - f(x) = 0   in the domain Ω                               (1)

G[u(x)] - g(x) = 0   on the boundary S.                            (2)

Let a trial function of the form

Z(x) = Σ_{i=0}^{n} c_i z_i(x)                                      (3)
be substituted into (1) and (2); the residuals corresponding to the domain Ω and the boundary S can be written as
R_Ω = -F[Z(x)] + f(x)                                              (4)

R_S = G[Z(x)] - g(x).                                              (5)
In general, the residuals are not zero across the entire domain of the problem, and they can be used as an indicator to obtain the best approximate solution. In other words, the pointwise values of the residual can be minimized by adjusting the values of the undetermined parameters c_i. For this purpose, weighted residuals of the following form are used:
∫_Ω R_Ω W_Ω dΩ = 0                                                 (6)

∫_S R_S W_S dS = 0                                                 (7)
The weighting functions W_Ω and W_S can be chosen in several ways, and each choice corresponds to a different criterion of the MWR. Five basic choices for the weighting functions are summarized in Table 1.
TABLE 1
THE BASIC METHODS OF THE METHOD OF WEIGHTED RESIDUALS

Method                 Integral form                            Weighting function
Collocation method     ∫_Ω R δ(x - x_j) dx = R(x_j) = 0         δ(x - x_j), j = 0, 1, ..., n
Subdomain method       ∫_{Ω_j} R W_j dΩ_j = 0                   W_j = 1 in Ω_j, 0 not in Ω_j
Least squares method   ∫_Ω R (∂R/∂c_j) dΩ = 0                   W_j = ∂R/∂c_j
Galerkin method        ∫_Ω R u_j dΩ = 0                         u_j, j = 0, 1, ..., n
Method of moments      ∫_Ω R x^j dΩ = 0                         x^j, j = 0, 1, ..., n
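To make the criteria of Table 1 concrete, the following sketch applies the collocation method to an illustrative linear BVP (not the paper's example): u'' + u + x = 0, u(0) = u(1) = 0, whose exact solution is u(x) = sin(x)/sin(1) - x. The two-term trial function and the collocation points are assumptions chosen for the demonstration.

```python
import numpy as np

# Collocation (Table 1) on the illustrative BVP u'' + u + x = 0, u(0)=u(1)=0.
# The trial terms satisfy the boundary conditions exactly, so only the
# domain residual R(x) = Z''(x) + Z(x) + x must be driven to zero.

def basis(x):
    # Trial terms z_i(x) and their second derivatives z_i''(x)
    z = np.array([x * (1 - x), x ** 2 * (1 - x)])
    z2 = np.array([-2.0, 2.0 - 6.0 * x])
    return z, z2

def collocate(points):
    # Enforce R(x_j) = 0 at each collocation point -> linear system A c = b
    A = np.zeros((len(points), 2))
    b = np.zeros(len(points))
    for j, x in enumerate(points):
        z, z2 = basis(x)
        A[j] = z2 + z      # coefficient of c_i in the residual at x_j
        b[j] = -x          # source term moved to the right-hand side
    return np.linalg.solve(A, b)

c = collocate([0.25, 0.5])
Z_half = c[0] * 0.25 + c[1] * 0.125          # Z(0.5)
exact = np.sin(0.5) / np.sin(1.0) - 0.5      # u(0.5)
```

With only two collocation points the approximation already agrees with the exact solution at x = 0.5 to about two decimal places, which is the kind of quick estimate the MWR is valued for.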
2.2. Formulation of Error Bounds

The maximum principle of differential equations provides information about the solution of a differential equation without any explicit knowledge of the solution itself. In particular, the maximum principle is a useful tool for determining the error bounds of approximate solutions of physical problems. Consider a boundary value problem

u'' + H(x, u, u') = 0,   x ∈ (a, b)                                (8)

with boundary conditions

-u'(a) cos θ + u(a) sin θ = γ_1
 u'(b) cos φ + u(b) sin φ = γ_2                                    (9)
where 0 ≤ θ ≤ π/2, 0 ≤ φ ≤ π/2, and θ and φ are not both zero. Let Z(x) be the trial function of the boundary value problem. The residuals corresponding to this problem can be written as

R[Z] = -Z'' - H(x, Z, Z'),   x ∈ (a, b)                            (10)

R_S1[Z] = -Z'(a) cos θ + Z(a) sin θ - γ_1
R_S2[Z] =  Z'(b) cos φ + Z(b) sin φ - γ_2                          (11)
Suppose the trial function Z(x) satisfies the following conditions:

R[Z] ≥ 0,   R_S1[Z] ≥ 0,   R_S2[Z] ≥ 0                             (12)

and

H, ∂H/∂u, ∂H/∂u' are continuous,   ∂H/∂u ≤ 0.                      (13)

Then, by the maximum principle, as detailed in [12],

Z(x) ≥ u(x),   x ∈ [a, b]                                          (14)
where Z(x) can be considered an upper bound on u(x). The conditions (12)-(13) are sufficient to make Z(x) an upper bound of u(x). Consequently, if a function is found that satisfies the conditions (12) with the inequalities reversed, then that function is a lower bound of u(x). Suppose Z_u(x) and Z_l(x) are the upper and lower bounds of the solution u(x), respectively; then the approximate solution Z(x) and the error bound E_b of the problem can be determined as
Z = (Z_u + Z_l)/2                                                  (15)

E_b = (Z_u - Z_l).                                                 (16)
2.3. Mathematical Programming Approach for Error Bounds Estimate

The MWR provides an approach to construct the error bounds mentioned in the preceding section. If the residuals of a trial function satisfy the conditions (12) at each position, then the trial function is an upper bound. On the other hand, if similar conditions hold with the inequalities reversed, then the trial function is a lower bound. The following approach, based on Finlayson [4], ensures the satisfaction of conditions (12). Suppose a trial function of the form
Z(x) = Σ_{i=0}^{n} c_i z_i(x)                                      (17)
is applied; then the residuals of the trial function can be written as (10) and (11). In general, under the conventional MWR the residuals are functions of position that oscillate around zero. Therefore, the conditions (12) are not satisfied, and the approximate solution is not necessarily either an upper or a lower bound. To improve this situation, a mathematical programming problem for the upper bound Z_u(x) can be stated as

min Z(x_j)                                                         (18)

such that

R[Z(x_i)] = -Z'' - H(x, Z, Z') ≥ ε,   i = 0, 1, ..., m
R_S1[Z] = -Z'(a) cos θ + Z(a) sin θ - γ_1 ≥ ε                      (19)
R_S2[Z] =  Z'(b) cos φ + Z(b) sin φ - γ_2 ≥ ε
where ε is a small positive value that is applied to shift the residuals and to improve the accuracy of the approximate solution by minimizing the upper bound. The corresponding lower bound Z_l(x) can be determined as
max Z(x_j)                                                         (20)

such that

R[Z(x_i)] = -Z'' - H(x, Z, Z') ≤ ε,   i = 0, 1, ..., m
R_S1[Z] = -Z'(a) cos θ + Z(a) sin θ - γ_1 ≤ ε                      (21)
R_S2[Z] =  Z'(b) cos φ + Z(b) sin φ - γ_2 ≤ ε
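Any candidate produced by a search method must satisfy these pointwise constraints before it can be accepted as a bound. A minimal numpy sketch of that feasibility check follows; the residual callable, the grid, and the sample values are illustrative assumptions.

```python
import numpy as np

# Grid check of the shifted-residual constraints used in (19) and (21).
# R is a callable returning the domain residual R[Z(x)] on an array of
# sample points; boundary residuals would be checked the same way.

def is_upper_bound(R, xs, eps):
    # Constraints (19): R[Z(x_i)] >= eps at every sample point x_i
    return bool(np.all(R(xs) >= eps))

def is_lower_bound(R, xs, eps):
    # Constraints (21): R[Z(x_i)] <= eps at every sample point x_i
    return bool(np.all(R(xs) <= eps))

xs = np.linspace(0.0, 1.0, 21)
# A constant residual of 0.001 clears the shift eps = 0.0005 everywhere
ok = is_upper_bound(lambda x: 0.001 + 0 * x, xs, 0.0005)
```

The same grid of sample points x_i plays the role of the index set i = 0, 1, ..., m in (19) and (21).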
Then, the approximate solution Z(x) and the error bound E_b can be obtained as (15) and (16), respectively.

3. OUTLINE OF GENETIC ALGORITHMS

In general, a simple genetic algorithm can be formulated as follows:
GAs = (P, St, ρ, (Se, Cr, Mu), fit, (P_c, P_m))                    (22)
where St is a string representation of points, contained in a population P, called chromosomes in the string space; ρ is a coding function that maps the search space into a space of strings; selection (Se), crossover (Cr), and mutation (Mu) are a set of operators for generating a new population; a fitness function fit is used to evaluate the search points; and the probabilities of crossover P_c and mutation P_m are stochastically assigned to control the genetic operators. A simple genetic algorithm operates in the following steps, and the operation diagram is shown in Fig. 1.
1. Initialization--An initial population P of P_s chromosomes St is randomly generated, biased at the central region of the search window.
2. Evaluation--The fitness function fit of each chromosome within the population is evaluated. Typically, a decoding structure maps the string into the real search point, and then the fitness function maps the search point into a real number.
3. Selection--Based on the fitness of strings in the current population, pairs of parents are selected and undergo subsequent genetic operations to produce pairs of child strings that form a new population (next generation). Following "survival of the fittest," the probability of a chromosome being selected is proportional to its fitness.
FIG. 1. Operation diagram of the simple genetic algorithms.
4. Crossover--A simple crossover proceeds in the following steps. An integer position k along the string is selected at random between 1 and the chromosome length l minus 1. Two new strings are created by swapping all characters between positions k + 1 and l, inclusively. For example, consider strings St_1 and St_2 from a mating pool:

St_1 = 01|101
St_2 = 11|000.

Two new strings can be obtained through a crossover procedure as

St_1' = 01000
St_2' = 11101.

The occurrence of the crossover operation is controlled by the parameter P_c.
5. Mutation--A background operator selects a gene at random on a given chromosome and mutates the allele for that gene. Mutation is used to recover genes that may have been lost from the population for purely stochastic reasons. The probability of occurrence is controlled by the parameter P_m.
6. Termination--If the termination criterion is satisfied, stop the algorithm; otherwise, go to step 2.

4. AN ALTERNATIVE APPROACH TO THE WEIGHTED RESIDUALS METHOD
The procedure for estimating the error bounds of a boundary value problem by combining the MWR and GAs is stated as follows.
(1) Construct the error bounds conditions of a boundary value problem based on the maximum principles.
(2) Formulate the error bounds estimate problem as a mathematical programming problem using the MWR.
(3) Find the optimal solution of the mathematical programming problem using GAs.
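The GA components outlined in Section 3 can be assembled into the search engine used in step (3). The sketch below follows the binary coding, string length, P_c, and P_m of Table 4; the decoding range [-1, 1] and the toy fitness function are illustrative assumptions, not taken from the paper.

```python
import random

# Minimal simple GA: binary coding, roulette selection, one-point
# crossover, bit-flip mutation. String length, population size, Pc and Pm
# follow Table 4; decoding range and test objective are assumptions.

BITS = 22  # string length of a coefficient (Table 4)

def decode(s, lo=-1.0, hi=1.0):
    # Map a binary string to a real coefficient in [lo, hi]
    return lo + int(s, 2) * (hi - lo) / (2 ** BITS - 1)

def select(pop, fits, rng):
    # Roulette-wheel selection: probability proportional to shifted fitness
    base = min(fits)
    weights = [f - base + 1e-12 for f in fits]
    return rng.choices(pop, weights=weights, k=2)

def step(pop, fit, pc=0.8, pm=0.01, rng=random):
    fits = [fit(decode(s)) for s in pop]
    flip = lambda s: "".join(
        b if rng.random() >= pm else "10"[int(b)] for b in s)
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = select(pop, fits, rng)
        if rng.random() < pc:                  # one-point crossover
            k = rng.randint(1, BITS - 1)
            p1, p2 = p1[:k] + p2[k:], p2[:k] + p1[k:]
        nxt += [flip(p1), flip(p2)]            # bit-flip mutation
    return nxt[:len(pop)]

rng = random.Random(0)
fit = lambda c: -(c - 0.3) ** 2                # toy objective, maximum at 0.3
pop = ["".join(rng.choice("01") for _ in range(BITS)) for _ in range(100)]
for _ in range(50):
    pop = step(pop, fit, rng=rng)
best = max(pop, key=lambda s: fit(decode(s)))  # best decoded value near 0.3
```

In the paper's setting, `fit` would be the penalized fitness (28)-(29) and each candidate would encode the three trial-function coefficients rather than a single value.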
EXAMPLE. Consider a nonlinear boundary value problem of the form

u'' + H(x, u, u') = 0,   x ∈ (0, 1)                                (23)
u(0) = u(1) = 0

where H, ∂H/∂u, ∂H/∂u' are continuous in (0, 1) and ∂H/∂u = 0; hence, the upper and lower bounds for this example can be estimated by following (18)-(21). Suppose the following trial function is applied:

Z_u(x) = (1 - x)(a_1 x + a_2 x^2 + a_3 x^3).                       (24)
Then the residuals of the problem can be defined as

R[Z_u(x)] = -Z_u'' - H(x, Z_u, Z_u').                              (25)

To estimate the upper bound of the solution u(x), a mathematical programming problem can be obtained using (18)-(19) as follows:

min Z_u(0.5)                                                       (26)
such that

R[Z_u(x_i)] = -Z_u'' - H(x, Z_u, Z_u') - ε ≥ 0,   i = 0, 1, ..., m.   (27)
Then, the GA can be applied to determine the upper bound of u(x) with a fitness function. In this example, the fitness function is defined as

fit(a) = C_max - C_1 × Z_u(0.5) - C_2 × Σ_{i=1}^{m} P_i(a)         (28)
where P_i(a) is a penalty function and

P_i(a) = { -1 × R[Z_u(x_i)],   if R[Z_u(x_i)] < 0
         {  0,                 if R[Z_u(x_i)] ≥ 0.                 (29)
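The evaluation of (28)-(29) for a candidate coefficient vector can be sketched directly from the formulas above, using the trial function (24). The specific H below (a nonlinear H with ∂H/∂u = 0) and the residual grid size are illustrative assumptions standing in for the example's equation (23); C_1, C_2, C_max, and ε follow Tables 2 and 4.

```python
import numpy as np

# Fitness (28) with penalty (29) for the upper-bound problem, trial (24).
# H below is an assumed stand-in for the example's H, chosen nonlinear in
# u' and independent of u so that dH/du = 0 as in the text.

def Z(a, x):
    # Trial function (24): (1-x)(a1 x + a2 x^2 + a3 x^3)
    return (1 - x) * (a[0] * x + a[1] * x ** 2 + a[2] * x ** 3)

def Zp(a, x):
    # First derivative of the expanded trial function
    return a[0] + 2 * (a[1] - a[0]) * x + 3 * (a[2] - a[1]) * x ** 2 - 4 * a[2] * x ** 3

def Zpp(a, x):
    # Second derivative of the expanded trial function
    return 2 * (a[1] - a[0]) + 6 * (a[2] - a[1]) * x - 12 * a[2] * x ** 2

def H(x, z, zp):
    # Assumed H for illustration (not the paper's equation (23))
    return x + zp ** 2

def fitness(a, eps=0.0005, C1=1.0, C2=5000.0, Cmax=50000.0, m=21):
    xs = np.linspace(0.0, 1.0, m)
    R = -Zpp(a, xs) - H(xs, Z(a, xs), Zp(a, xs))   # residual (25)
    P = np.where(R - eps < 0, -(R - eps), 0.0)     # penalty on violated (27)
    return Cmax - C1 * Z(a, 0.5) - C2 * P.sum()
```

Each penalty term vanishes once the shifted constraint (27) holds at the corresponding grid point, so a fully feasible candidate is scored purely by how small its upper bound Z_u(0.5) is.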
In a similar way, it is easy to extend this procedure to estimate the lower bound of the solution u(x). Table 2 shows the comparison of optimal coefficients with different ε. Table 3 shows the comparison of minimum and maximum residuals with different ε.
TABLE 2
COMPARISON OF THE OPTIMAL COEFFICIENTS (MAXIMUM GENERATION = 1000)

Optimal coefficients of the upper bound
Parameter ε    a_1          a_2          a_3
0.000005       0.0417754    0.0414764    -0.0414826
0.00005        0.0417606    0.0415806    -0.0415832
0.0005         0.0420011    0.0415288    -0.0415288
0.005          0.0442590    0.0415081    -0.0415000
0.05           0.0668107    0.0414702    -0.0414733

Optimal coefficients of the lower bound
Parameter ε    a_1          a_2          a_3
-0.000005      0.0416819    0.0416306    -0.0416304
-0.00005       0.0416538    0.0416292    -0.0416190
-0.0005        0.0414249    0.0416373    -0.0416333
-0.005         0.0391771    0.0416364    -0.0416588
-0.05          0.0166634    0.0416721    -0.0416645
TABLE 3
COMPARISON OF MINIMUM AND MAXIMUM RESIDUALS (MAXIMUM GENERATION = 1000)

Upper bound                          Lower bound
Parameter ε    Minimum residual      Parameter ε    Maximum residual
0.000005       0.0000148253          -0.000005      0.0001038
0.00005        0.0000447375          -0.00005       0.0001104
0.0005         0.000507964           -0.0005        -0.0004008
0.005          0.0049999             -0.005         -0.0049186
0.05           0.0499929             -0.05          -0.0499718
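Given a pair of optimal coefficient sets, the approximation (15) and the error bound (16) follow by direct evaluation. The sketch below uses the ε = ±0.0005 rows of Table 2, assuming the coefficient columns are ordered a_1, a_2, a_3 (an assumption, since the table layout was reconstructed from a damaged source).

```python
import numpy as np

# Combine upper- and lower-bound trial functions via (15)-(16).
# Coefficients: eps = +/-0.0005 rows of Table 2 (column order assumed).

def Z(a, x):
    # Trial function (24): (1-x)(a1 x + a2 x^2 + a3 x^3)
    return (1 - x) * (a[0] * x + a[1] * x ** 2 + a[2] * x ** 3)

a_up = np.array([0.0420011, 0.0415288, -0.0415288])   # upper bound Z_u
a_lo = np.array([0.0414249, 0.0416373, -0.0416333])   # lower bound Z_l

x = np.linspace(0.0, 1.0, 11)
Z_mid = (Z(a_up, x) + Z(a_lo, x)) / 2    # approximate solution, (15)
E_b = Z(a_up, x) - Z(a_lo, x)            # pointwise error bound, (16)
```

Under this column assumption, the pointwise error bound stays nonnegative across the interval and is on the order of 10^-4 near x = 0.5, consistent with the tight bounds reported in the paper's figures.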
Figure 2 and Figure 3 show how the residuals converge with respect to the parameter ε. Figure 4 shows the error bound of the approximate solution with ε = ±0.0005, which satisfies the conditions of the error bounds formulation. All the cases are obtained by using the GA with the parameters shown in Table 4.
FIG. 2. The residuals of Z_u with different ε.
FIG. 3. The residuals of Z_l with different ε.
FIG. 4. The error bounds of the approximate solution.
TABLE 4
THE PARAMETERS OF GA FOR THE NUMERICAL EXAMPLE

Parameter                        Value/Method
Population size                  100
Probability of crossover         0.8
Probability of mutation          0.01
Coding method                    Binary coding
String length of a coefficient   22
Method of fitness scaling        Sigma truncation
Maximum generation               1000
Weight of cost (C_1)             1.0
Weight of penalty (C_2)          5000
C_max                            50000

5. CONCLUSIONS
1. The present example and results indicate that the use of this procedure eliminates the need for error analysis, which is usually difficult for nonlinear boundary value problems.
2. The results shown in Figure 2 and Figure 3 indicate that the MWR cannot hold the conditions (12) over the whole domain of the problem, which decreases the accuracy of the approximate solution.
3. For a nonlinear mathematical programming problem, the GA provides a good result and is easy to apply.

REFERENCES
1  B. A. Finlayson and L. E. Scriven, The method of weighted residuals--a review, Applied Mechanics Reviews 19(9):735-748 (1966).
2  Y. C. Zhang and X. He, Analysis of free vibration and buckling problems of beams and plates by discrete least-squares method using B-spline as trial functions, Computers & Structures 31(2):115-119 (1989).
3  F. C. Appl and H. M. Hung, A principle for convergent upper and lower bounds, Int. J. Mech. Sci. 6:381-389 (1964).
4  B. A. Finlayson and L. E. Scriven, Upper and lower bounds for solutions to the transport equations, AIChE Journal 12(6):1151-1157 (1966).
5  D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass., 1989.
6  L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
7  J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass., 1992.
8  M. Srinivas and L. M. Patnaik, Genetic algorithms--a survey, Computer 27(6):17-26 (1994).
9  W. M. Jenkins, Towards structural optimization via the genetic algorithm, Computers & Structures 40(5):1321-1327 (1991).
10 C. L. Huntley and D. E. Brown, A parallel heuristic for quadratic assignment problems, Computers & Operations Research 18(3):275-289 (1991).
11 C. R. Reeves, Modern Heuristic Techniques for Combinatorial Problems, Halsted Press, New York, 1993.
12 M. H. Protter, Maximum Principles in Differential Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1967.