Agricultural Systems 60 (1999) 113–122
www.elsevier.com/locate/agsy
Survival of the fittest: genetic algorithms versus evolution strategies in the optimization of systems models

D.G. Mayer a,*, J.A. Belward b, H. Widell c, K. Burrage b

a Queensland Beef Industry Institute, Department of Primary Industries, Locked Mail Bag No. 4, Moorooka, Queensland 4105, Australia
b Centre for Industrial and Applied Mathematics and Parallel Computing, University of Queensland, Queensland 4072, Australia
c Hadsundvej 222, DK-9220 Aalborg Øst, Denmark

Received 3 September 1998; received in revised form 20 November 1998; accepted 26 February 1999
Abstract

The use of numerical optimization techniques on simulation models is a developing field. Many of the available algorithms are not well suited to the types of problems posed by models of agricultural systems. Coming from different historical and developmental backgrounds, both genetic algorithms and evolution strategies have proven to be thorough and efficient methods of identifying the global optimum of such systems. A challenging herd dynamics model is used to test and compare optimizations using binary and real-value genetic algorithms, as well as evolution strategies. All proved successful in identifying the global optimum of this model, but evolution strategies were notably slower in achieving this. As the more successful innovations of each of these methods are being commonly adopted by all, the boundaries between them are becoming less clear-cut. They are effectively merging into one general class of optimization methods now termed evolutionary algorithms. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Optimization; Model; Genetic algorithm; Evolution strategy
* Corresponding author. Tel.: +61-7-3362-9574; fax: +61-7-3362-9429. E-mail address: [email protected] (D.G. Mayer).

0308-521X/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved. PII: S0308-521X(99)00022-0

1. Introduction

As commercial enterprises and systems managers strive to become more competitive, numerical optimization techniques are increasingly being used to identify the best solution for modelled systems (Fu, 1994). These applications are appearing across a wide range of disciplines, including agricultural systems (Mayer et al., 1998a). This approach first requires a valid simulation model of the targeted system, which is then taken as a `black box' to be formally optimized. One necessary assumption here is that the set of management strategies which are identified as optimal for the model will similarly prove to be the best when applied to the real-world system.

For simpler models (e.g. only a few hundred or thousand possible combinations of the management options), the optimal strategy can be determined by complete enumeration. However, over 30 years ago Wilde (1964) identified the ``curse of dimensionality'', where this task becomes difficult and eventually intractable as the size of the system increases. To effectively solve multi-dimensional real-world problems, targeted and efficient optimization methods are required. Various types of numerical techniques have potential for solving these types of problems. These include hill-climbing
and other gradient-type methods (Fletcher, 1987), direct-search algorithms (Polyak, 1987) including the simplex method (Nelder and Mead, 1965), hybrid targeted methods such as random search with a learning component (Eisgruber and Lee, 1971), genetic algorithms (Goldberg, 1989), evolution strategies (Fogel, 1995), simulated annealing (Ingber and Rosen, 1992), and tabu search (Glover, 1990). Unfortunately, most (if not all) of these methods experience difficulties when faced with multi-dimensional models of real-world systems, which typically are non-smooth with cliffs and discontinuities in the response surface, have practical constraints on the available options, and feature multiple local optima which can trap these methods (Mayer et al., 1996, 1998a).

The more traditional hill-climbing and direct-search algorithms have previously been shown to perform comparatively poorly on these types of problems (Hendrickson et al., 1988; Mayer et al., 1991, 1996; Wang, 1991; Goffe et al., 1994; Horton, 1996; Parsons, 1998). Tabu search has identifiable methodological flaws which make it unsuitable for these types of problems (Mayer et al., 1998b). Simulated annealing has proven to be a thorough (but not necessarily efficient) algorithm across a range of problem areas, including models of agricultural systems (Bos, 1993; Lockwood and Moore, 1993; Mayer et al., 1996; Watson and Sumner, 1997). However, on larger problems this method is penalized by its lack of efficiency, often failing to converge to the global optimum within feasible computing time (Ingber and Rosen, 1992; Mayer et al., 1999). This leaves genetic algorithms and evolution strategies as general methods which appear both thorough and efficient, and these ``often yield excellent results when applied to complex optimization problems where other methods are either not applicable or turn out to be unsatisfactory'' (Bäck et al., 1997, p. 10).
In a research equivalent of parallel evolution, these two different forms of evolutionary algorithms (with `evolutionary algorithms' being the generic term) have been developed to practical fruition over the past few decades. Genetic algorithms were primarily researched in the USA and English-speaking countries, and evolution strategies were largely developed in Germany (Hinterding et al., 1995). Since practitioners of these methods began
communicating in the early 1990s (Bäck et al., 1997), they have swapped successful traits and strategies. Both these disciplines also have a considerable degree of overlap with the developing computational field of self-evolving programs, termed evolutionary computation or evolutionary programming (Fogel, 1995), or genetic programming (Koza, 1994). These latter methods target the automatic generation of computer functions or programs, using natural selection-type operators to optimize the degree of fit. In practice, their features which are applicable to more general optimization problems have largely found their way into evolution strategies (Fogel, 1995).

This paper contrasts the methodologies and features of genetic algorithms and evolution strategies, and outlines their applications to date in the agricultural systems field. Then, various forms of each are trialled on a difficult optimization problem, and their performance compared. The problem used is a whole-property model of a beef production enterprise, with 40 continuous, interacting management options to consider.

2. Genetic algorithms

As their name implies, genetic algorithms are based on the biological concept of genetic reproduction, and biological terminology has become standard in the description of these methods. In genetic algorithms, successive gains are made by parallelling the natural selection processes of evolution (Radcliffe and Wilson, 1990). This optimization process takes the independent variables of the model and converts them into a genetic representation (traditionally binary). A population of such individuals is initially obtained through random or targeted processes, with these forming the `parents' of the next generation. Frequency of parenthood is directly related to `fitness', or the resultant dependent variable to be optimized, so, as in evolution, the `successful' individuals pass on their genes (attributes) more frequently.
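The fitness-proportional (`roulette wheel') selection of parents described above can be sketched in a few lines of Python. This is a minimal illustrative version, not the implementation used in any particular package, and it assumes all fitness values are positive:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Pick one parent with probability proportional to fitness.

    Assumes all fitness values are positive; practical codes
    first rescale or rank the raw scores.
    """
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off
```

Fitter individuals are selected more often, so their genes spread through subsequent generations, but weaker individuals retain a non-zero chance of parenthood, which preserves diversity.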
The basic operation of genetic algorithms mimics sexual reproduction between two selected individuals, where sequences of genetic code are crossed and mixed to produce `children' (the next generation), which are likely to be different from the parents. Over generations, this process tends to combine `successful' traits and generally improve the fitness of the population. Low-level random mutation is also introduced to parallel nature, and this rediscovers `lost' genes which may prove beneficial, as well as assisting in searching across the problem space. Over time, population structures tend to congregate around either a single point or a number of near-optimal solutions. As with all optimization methods, there is no guarantee of finding the global optimum.

The method of converting the continuous independent variables to discrete binary genetic codings introduces both advantages and disadvantages for this method. On the positive side, bound constraints are implicit, as values are mapped onto a `lowest to highest' defined range. As a negative, genetic algorithms cannot converge exactly onto an optimum, only to the nearest defined combination of input parameters. This can be alleviated by allowing more levels in these variables; however, this increases the length of the genes and thus computational time. From a practical point of view, a safe approach is to allow as many levels as are realistically required in the model.

As with any optimization technique, researchers have trialled a range of operational parameters, adaptations and improvements. In general, optimal settings appear to be problem-dependent, but robust values tend to work well across a variety of problems (Davis, 1991). The population size needs to contain sufficient diversity for crossbreeding to be effective, without carrying excessive numbers. Large, complex problems require larger population numbers of up to 200 (Peck and Dhawan, 1995). Crossover is the dominant genetic operation, consistently having high probability of around 0.6–1 (Bäck et al., 1997), and is particularly effective in combining successful `building blocks' (Jones, 1995).
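The binary coding and one-point crossover mechanics described above can be sketched as follows. This is an illustrative Python fragment; the bit length and bounds are arbitrary examples, not values from this study:

```python
import random

def decode(bits, lo, hi):
    """Map a binary gene onto the interval [lo, hi].

    With n bits the variable can take only 2**n discrete levels,
    which is why a binary genetic algorithm converges to the
    nearest defined combination rather than the exact optimum.
    """
    levels = 2 ** len(bits) - 1
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / levels

def one_point_crossover(mum, dad, rng=random):
    """Cross two parent bit-strings at a single random point."""
    point = rng.randrange(1, len(mum))
    return mum[:point] + dad[point:], dad[:point] + mum[point:]
```

Adding bits per variable refines the resolution of `decode` but lengthens the genes, illustrating the trade-off between precision and computational time noted above.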
The background mutation operator is typically given a low probability, of the order of 0.001–0.05. The choice of replacing all population individuals (generation gap of one) or only a portion at each iteration appears open. Steady-state or sub-generational replacement makes new genetic material immediately available for use, but may be computationally inefficient if the whole population needs to be re-weighted and/or re-ranked at each inclusion (Michalewicz, 1996). In practice, this issue has been shown not to be critical (Peck and Dhawan, 1995). However, the safeguard of elitism (where, at least, the best individual is preserved) is frequently recommended (Michalewicz, 1996), and has proved advantageous over a range of problem types (Jones, 1995). The apparent variety of recommended selection criteria (roulette wheel using scores or ranks, tournament, or truncation selection) is not of great concern, as De Jong and Sarma (1995) and Blickle and Thiele (1997) have shown that different selection strategies can produce similar ranges of selection intensities, and the same expected numbers of offspring per parent.

Ranges of enhancements and improvements have been proposed. Some have demonstrated improved performance, but only on specific problem types. Goldberg (1989) suggests this is because the `basic' genetic algorithms work so well. Whilst Michalewicz (1996) suggests that a thorough evaluation of operational parameters is required to guarantee reasonable performance, Horton (1996) found that, in practice, genetic algorithms perform well across quite a wide range of operational parameters.

3. Evolution strategies

The basic methods now collectively termed evolution strategies were developed by two students in Berlin in the 1960s (Michalewicz, 1996). Evolution strategies use real-number representation of the input values, so there is one `gene' representing each option. Mutation is the key evolutionary operation, and was initially the only operation used, with a population of one. Each `offspring' is created by the random Gaussian mutation of each gene, according to a vector of mutation variances which itself evolves over time (Bäck and Schwefel, 1993). Hence, the number of genes is effectively doubled; for each input option to be optimized, one gene carries its value, and another its current mutation variance (Michalewicz, 1996).
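The self-adaptive Gaussian mutation just described can be sketched in Python. This is only an illustrative fragment: the log-normal step-size update and the learning rate tau are standard textbook choices, not details taken from this paper:

```python
import math
import random

def es_mutate(values, sigmas, bounds, rng=random):
    """One evolution-strategy mutation step.

    Each gene carries its own step size (mutation standard
    deviation), which is itself perturbed (here by the common
    log-normal rule) and then used to shift the gene's value.
    """
    n = len(values)
    tau = 1.0 / math.sqrt(n)  # a conventional learning rate, an assumption here
    new_sigmas = [s * math.exp(tau * rng.gauss(0.0, 1.0)) for s in sigmas]
    new_values = []
    for v, s, (lo, hi) in zip(values, new_sigmas, bounds):
        child = v + s * rng.gauss(0.0, 1.0)
        new_values.append(min(max(child, lo), hi))  # respect bound constraints
    return new_values, new_sigmas
```

Because the step sizes are inherited along with the values, combinations of mutation variances that produce fit offspring are `dragged along' into later generations, as the next paragraph describes.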
Fitness of the `offspring' is judged on the modelled economic result of the specified inputs, and this allows the better combinations of mutation variances to be `dragged along' into the later generations. These variances are initially larger to facilitate searching, and over time narrow down to near zero (effectively fine-tuning the solution) as the genes converge to their optimal values (Michalewicz, 1996). Some applications also carry a third vector of correlations between the respective mutation amounts, allowing directional searching which is optimized over time.

The original single-parent and single-offspring evolution strategy was adapted to include multiple parents (with the number of these usually represented by μ) and multiple offspring (λ) (Bäck and Schwefel, 1993). Selection of parents for reproduction is typically random, and selection pressure is introduced by the offspring being retained or discarded on a deterministic or stochastic assessment of fitness (Bäck and Schwefel, 1993). Two separate strategies are used: the (μ, λ) evolution strategy, where the parents are replaced by the best offspring at each generation, and the (μ+λ) evolution strategy, where the combined parents and offspring compete to be amongst the best μ individuals used as parents in the next generation. The former strategy has proven useful for noisy functions and problems where the optimum is non-stationary (Michalewicz, 1996). The (μ+λ) evolution strategy is approximately equivalent to the elitist strategy of genetic algorithms (Mühlenbein and Schlierkamp-Voosen, 1994), in that the best of the population members are preserved. In a range of practical studies, the (μ+λ) evolution strategy has been shown to perform better than the (μ, λ) strategy (Bäck et al., 1997). Fogel (1995) showed that the (1+λ) evolution strategy has a logarithmic increase in the rate of convergence over the (1+1) evolution strategy, and Mühlenbein and Schlierkamp-Voosen (1994) demonstrated a speed-up as λ was increased. Bäck and Schwefel (1993) found an optimal ratio of μ:λ of about 1:7.

4. Cross-breeding of genetic algorithms and evolution strategies

As with other competing methodologies, some early studies investigated the relative practical performance of genetic algorithms against evolution strategies. On test functions, Bäck and Schwefel (1993) showed that evolution strategies identified better optima, and at higher rates of convergence, than binary genetic algorithms. Conversely, Keane (1996) found that a sophisticated binary genetic algorithm performed better than evolution strategies on difficult test functions. In the practical optimization of laminate designs, Le Riche et al. (1995) found these two methods performed similarly. Hinterding et al. (1995) reported mixed results: genetic algorithms proved superior on discontinuous and multiple-optima test functions, whilst evolution strategies performed better with lower-dimensional and smooth-type functions.

Currently, though, genetic algorithms and evolution strategies may be viewed more as collaborators than competitors. The past few years have effectively seen these two methods merge into an even more powerful class of methods termed evolutionary algorithms. The major changes are the adoption of real-value rather than binary genes (to more closely match most problem definitions; Michalewicz, 1996), and the introduction into evolution strategies of recombination and crossover operations using multiple parents.

In moving to real-value representation, mutation is no longer a `binary bit-flip' operation. One option is to use evolution strategies-style mutation, where all genes are changed by a self-adapting vector of mutation variances for each generation. A second option is to use random mutation and change only one or a few genes per operation (as per genetic algorithms). Here, the type and amount of change for each needs to be specified. Usually, the mutation shift is selected from a Gaussian or delta distribution (these being practically similar), and randomly assigned in a positive or negative direction, subject (as usual) to the boundary constraints. An alternative is to use `boundary mutation', where the gene is shifted onto one of its boundaries.
This is an `expansive' operation which can be used to counter the `contractive' nature of some of the recombination operators (Michalewicz, 1996).

Recombination operators are also more complex with real-value genes. Given two parents, the simplest crossover (as used with binary genes) is to take a discrete value (or string of values) at random from either parent (Mühlenbein and Schlierkamp-Voosen, 1993). Here, one-point, multi-point, or uniform random crossover operators may be utilized. Fogel (1995) found the latter superior on test functions, but concluded that this result was likely to be problem-dependent. More sophisticated recombination operations (Michalewicz, 1996) include intermediate linear recombination (where a random value is selected between the two parents' genes), which tends to be contractive, and extended linear recombination (which can, randomly, go outside the existing parents' ranges, up to the defined boundaries).

In evolutionary algorithms, the role and importance of each of the major operators is yet to be fully explored. Each offers a balance between exploitation (using existing material to best benefit) and exploration (to adequately search the feasible space). Traditionally, genetic algorithms were driven by recombination, with mutation only a background operation (typically 0.01 or lower), whereas evolution strategies initially used only mutation. The optimal balance between recombination and mutation can vary, and is likely to be problem-specific (Michalewicz, 1996). Using test functions, Hinterding et al. (1995) found the best combination was lower crossover rates (0.1–0.4) with moderate mutation (up to 5/N, N being the number of genes). On a transportation problem, the optimal combinations were crossover rates of 0.05–0.25 with mutation probabilities around 0.2–0.4 (Michalewicz, 1996). Significantly, in this transportation study, crossover rates of zero (thus using only the mutation operator, in the style of the original evolution strategy) proved inferior, as was also found by Bäck and Schwefel (1993). A number of studies have shown mutation and recombination to be advantageous to each other (Fogel, 1995).

An early practical example of an evolutionary algorithm was the `breeder genetic algorithm' (Mühlenbein and Schlierkamp-Voosen, 1993, 1994).
This used real-value representation with discrete or extended linear recombination, truncation selection from the `best' (typically 10–50%) of the parents, elitism, and low-level background mutation. In an application to airfoil design (De Falco et al., 1996), it proved faster than a `classical' (binary) genetic algorithm, this being attributed to the accelerated pressure of truncation selection, and the advantages of real-value representation and recombination. Truncation selection can, however, also be ineffective in some instances, because it increases the loss of diversity in the population, which can then lead to premature convergence to a local optimum (Blickle and Thiele, 1997).

5. Agricultural systems applications

To date, only binary genetic algorithms have been reported in this field. In horticultural planning, Annevelink (1992) outlined a spatial/temporal allocation problem which was solved by a genetic algorithm, after initial linear and dynamic programming approaches proved inadequate. Horton (1996) showed genetic algorithms efficiently optimized a sheep genetics model, identifying optimal solutions which hill-climbing methods repeatedly failed to find. A dairy farm model was optimized via genetic algorithms, with these solutions (along with simulated annealing's) being superior to those identified by hill-climbing and direct-search algorithms (Mayer et al., 1996). The annual fertilizer usage and grazing management plan of a sheep farm was successfully optimized with a genetic algorithm (Barioni et al., 1997). Parsons (1998) showed a genetic algorithm to be marginally superior to the simplex method (Nelder and Mead, 1965) in optimizing silage harvesting plans. Hart et al. (1998) found a hybrid genetic algorithm/hill-climbing approach to be the best for optimizing a dairy farm model.

6. Case study: optimization of a herd dynamics model

The simulation model used to test the range of evolutionary algorithms was a commercial herd dynamics package, DYNAMA (Holmes, 1995). The system simulated was an average farm structure in the Brigalow region of Queensland, running 1300 branded beef cattle on 7200 ha (O'Rourke et al., 1992), as outlined in Mayer et al. (1999b). The production value to be optimized was the discounted before-tax net income, over a 10-year horizon. To allow for the effects of herd improvements, the discounted closing value of the herd was added to this, and the opening value subtracted. Typical sales and purchase prices for each cohort, and fixed (operating and animal husbandry) costs, were taken from Holmes (1995).

In its present form, this model can only be used to investigate trading decisions (buying or selling stock), as the biological feedbacks of other management options, such as stocking rate, supplementation, or pasture improvements, are yet to be included in the model. Also, modelled trading decisions are set to be constant over time, allowing only the identification of the best steady-state strategy. Dynamic situations, such as going into, or coming out of, a drought remain to be investigated. Even with these limitations, the optimization of this model still forms a challenging problem. The model has 40 separate management decisions covering buying and selling options, giving a potential search-space of the order of 10^100. In practice, these management decisions tend to be dependent on each other, resulting in a high degree of interaction in the results.

This herd dynamics model was integrated with the optimization routines GENESIS (a binary genetic algorithm; Grefenstette, 1995) and GENIAL (a real-value genetic algorithm and/or evolution strategy package; Widell, 1997), and run on networked Sun workstations. The majority of these optimizations used a default limit of 5 million model evaluations (each optimization taking a number of days), and converged well inside this limit. However, even these could be insufficient, in that better values may have been found if longer runs were used. Some longer optimizations were run as checks, using up to 50 million model evaluations and taking about a month.
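The production value being maximized, as described above, can be sketched as a simple discounted-cash-flow calculation. This is an illustrative fragment only: the 8% discount rate, end-of-year cash flows, and function name are assumptions, as the paper does not state these details:

```python
def discounted_objective(annual_net_incomes, opening_herd_value,
                         closing_herd_value, rate=0.08):
    """Discounted before-tax net income over the planning horizon,
    plus the discounted closing herd value, minus the opening value.

    The discount rate and end-of-year timing are illustrative
    assumptions, not values from the study.
    """
    horizon = len(annual_net_incomes)
    npv = sum(income / (1.0 + rate) ** (year + 1)
              for year, income in enumerate(annual_net_incomes))
    npv += closing_herd_value / (1.0 + rate) ** horizon  # herd improvement
    npv -= opening_herd_value
    return npv
```

The optimizer treats this as a black-box objective: each candidate set of 40 trading decisions is run through the herd model to produce the annual incomes and herd values, and the resulting scalar is the fitness.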
The best net income of US$0.91 million (over 10 years, and before farm fixed costs are deducted) was found by all methods, and has been adopted as the global optimum of the system. This value, of course, is specific to the farm and price structures used. In practical terms, an optimization is deemed successful if it achieves 99.9% (or better) of this value. It is interesting to note that the optimal steady-state management strategy for this property involved breeding only. With no trading (or buying in of animals), half of the input options (being numbers bought in each age class and sex) have a zero value at the optimum, this being the lower bound of these variables. Hence, the optimization methods have to find these boundary values, as well as the optimal selling proportions for breeders and steers, from the random starting values for each.

Initially, a standard binary genetic algorithm (GENESIS) was used on this problem. The best operational parameters from previous studies (Mayer et al., 1996, 1999) were used, along with some other exploratory combinations, as outlined in Table 1. These results extend the preliminary findings of Mayer et al. (1999), where the first four lines of Table 1 were used as a benchmark against which simulated annealing optimizations were compared (and found wanting). As expected, the binary genetic algorithm produced consistently good results across the range of operational parameters. Run-time prior to convergence averaged 2.4 million model evaluations per optimization.

GENIAL was then used to test the performance of a real-value genetic algorithm, where improved performance may be expected due to the direct one-to-one correspondence between the model's input parameters and the algorithm's coded genes (De Falco et al., 1996; Bäck et al., 1997). This then opens up a range of possible numerical operations, in terms of selection criteria, crossover operators and mutation operators, as previously outlined.

Table 1
Performance of individual binary genetic algorithm evaluations of herd dynamics model

Population size   Selection method (a)   Crossover probability   Mutation probability   Optimum (%)
50                Scores                 0.50                    0.001                  100
50                Scores                 0.50                    0.010                  100
50                Ranks                  0.50                    0.001                  99.99
50                Ranks                  0.50                    0.010                  99.98
100               Scores                 0.40                    0.010                  99.97
150               Ranks                  0.80                    0.005                  99.91
30                Scores                 0.80                    0.005                  99.99
100               Ranks                  0.50                    0.010                  99.85

(a) Of parents, according to Roulette-wheel probabilities based on either their scores or ranks.
Crossover can be one-point or multi-point (genetic algorithms style, where parents' vectors are crossed at one or more random points), uniform (each value of the offspring is taken from one parent, chosen at random), or arithmetical (evolution strategies style, using both parents' values in intermediate or extended linear recombination, with the latter being used in this study). Mutation operations include random (the gene is changed to a randomly selected value within the variable's bounds), simplex (uses the elite parent to produce offspring through mutation), dynamic (where the probability density function, controlling the relative size of mutations, changes over time), or Gaussian creep (the mutated value `creeps' by an amount randomly chosen from the Gaussian distribution; by allowing small changes, this parallels evolution strategy style mutation). Exploratory combinations of these options were trialled with the herd dynamics model, as reported in Table 2. GENIAL uses the crossover and mutation operators independently to generate individual offspring, rather than simultaneously. This has the advantage of allowing each to have varying contributions throughout the search, but results should be similar to the more standard implementation.

Table 2 shows that most combinations of this real-value genetic algorithm produced good results, including `lower' crossover rates (0.25). Notably, the `higher' mutation probabilities (up to 0.25) performed equally well, as also found by Bäck et al. (1997). On average, 3.5 million model evaluations were required to find these optima.

GENIAL can also be run as an evolution strategy (giving an 80-dimensional problem, where the first 40 dimensions define the management options, and the rest their mutation variances). The (μ+λ) strategy was implemented, with random selection of parents and deterministic replacement of population members based on fitness score. Again, a variety of operational parameters were trialled, as listed in Table 3. Except for the first optimization (using `crossover only'), most runs achieved or went close to the target of 99.9%. However, with the doubling of the number of parameters to 80, the rates of convergence were notably slower, with these optimizations taking, on average, 8.8 million model runs to find their optima. This performance
Table 2
Performance of individual real-value genetic algorithm evaluations of herd dynamics model

Population   No. replaced     Selection             Crossover              Mutation               Optimum
size         per generation   method (parents) (a)  probability / type     probability / type     (%)
50           49               Scores                0.50 / 1-point         0.001 / Random         99.78
50           49               Scores                0.50 / 1-point         0.010 / Random         99.92
50           49               Scores (b)            0.50 / 1-point         0.010 / Random         99.89
50           49               Scores (b)            0.50 / 1-point         0.010 / Random         99.89
50           49               Ranks                 0.50 / 1-point         0.010 / Random         100
50           49               Ranks                 0.50 / Uniform         0.010 / Random         99.98
50           49               Ranks                 0.25 / Arithmetical    0.010 / Random         99.96
50           49               Tournament (3)        0.50 / 1-point         0.010 / Random         99.97
100 (c)      50               Truncation (15)       0.50 / Uniform         0.025 / Random         99.97
50           49               Ranks                 0.50 / 1-point         0.150 / Simplex        99.91
50           49               Ranks                 0.50 / 1-point         0.100 / Gaussian (d)   100
50           49               Ranks                 0.50 / 1-point         0.125 / Gaussian (d)   100
50           49               Ranks                 0.50 / 1-point         0.250 / Gaussian (d)   99.97
100          50               Tournament (2)        0.50 / Uniform         0.055 / Double (e)     100
500          50               Tournament (2)        0.50 / Uniform         0.055 / Double (e)     100

(a) Scores-based or ranks-based Roulette-wheel, or tournament or truncation (sizes nominated).
(b) Using alternate window-scaling options to preceding line.
(c) Breeder genetic algorithm options, as per De Falco et al. (1996).
(d) Gaussian creep mutation.
(e) Random mutation (probability of 0.005) plus Gaussian creep (probability of 0.05).
Table 3
Performance of evolution strategies on herd dynamics model

No. of parents      No. of offspring   Crossover              Mutation (a)           Optimum
(population size)   per generation     probability / type     probability / type     (%)
20                  20                 0.333 / Arithmetical   –     / Nil            98.51
20                  20                 –     / Nil            0.200 / Gaussian       99.83
20                  20                 0.167 / Arithmetical   0.067 / Gaussian       99.91
20                  20                 0.167 / Arithmetical   0.133 / Gaussian       99.93
20                  20                 0.167 / Arithmetical   0.333 / Gaussian       99.82
20                  80                 0.033 / Arithmetical   0.600 / Gaussian       99.87
50                  50                 0.167 / Arithmetical   0.133 / Gaussian       99.87
50                  350                0.050 / Uniform        0.001 / Dynamic        100
50                  350                0.050 / Uniform        0.005 / Dynamic        100
50                  350                0.050 / Uniform        0.005 / Gaussian       99.35
50                  350                0.050 / 1-point        0.005 / Dynamic        99.51
50                  350                0.050 / Uniform        0.050 / Dynamic        100
50                  350                0.200 / Uniform        0.005 / Dynamic        100

(a) In addition to the evolution strategy's self-adapting vector of mutation variances.
(both in terms of solutions identified, and speed of convergence) may well improve with alternate parameter settings, or as more familiarity with the method is gained. As Bäck et al. (1997) concluded, more research is needed to understand the relative advantages and disadvantages of the self-adaptation mechanisms of evolution strategies.

Overall, each of these individual classes of evolutionary algorithms (binary genetic algorithm, real-value genetic algorithm and evolution strategy) produced acceptable results, with the former two proving more efficient than the evolution strategy approach. Within genetic algorithms, real-value representation was expected to be superior to binary coding (Bäck et al., 1997), but, in practice, we found no difference. The `simple' binary genetic algorithm, with relatively few operational parameters to decide on, proved very efficient on our herd dynamics model. With the real-value genetic algorithm, most combinations proved satisfactory. The computational simplicity of tournament selection has obvious appeal. No firm conclusions can be drawn from this study concerning crossover or mutation, other than that they appear to be synergistic. The double mutation strategy, comprising low-level background mutation with higher-level creep mutation in the style of evolution strategies, appears promising.
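The double mutation strategy just mentioned can be sketched as follows. The two probabilities match those given in the footnote to Table 2; the creep standard deviation (here a fraction of each variable's range) and the function name are illustrative assumptions:

```python
import random

def double_mutate(genes, bounds, p_random=0.005, p_creep=0.05,
                  creep_frac=0.1, rng=random):
    """'Double' mutation: a low-probability random reset within the
    variable's bounds, plus a higher-probability Gaussian creep.

    p_random and p_creep follow Table 2's footnote (e); creep_frac
    (creep standard deviation as a fraction of the variable's
    range) is an assumption for illustration.
    """
    out = []
    for g, (lo, hi) in zip(genes, bounds):
        if rng.random() < p_random:
            g = rng.uniform(lo, hi)  # occasional random reset, for exploration
        if rng.random() < p_creep:
            g += rng.gauss(0.0, creep_frac * (hi - lo))  # small Gaussian creep
        out.append(min(max(g, lo), hi))  # respect bound constraints
    return out
```

The rare resets keep exploration alive (paralleling background mutation in a genetic algorithm), while the more frequent small creeps fine-tune near-optimal genes in the style of evolution strategies.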
In generalizing these results to identify the likely best strategies for future, larger problems, the genetic algorithm style appears more efficient than the evolution strategies. This is probably largely due to the latter method's necessity of doubling the search-space by also optimizing the mutation variances.

7. Conclusions

Whilst practical applications of both genetic algorithms and evolution strategies continue, the boundaries between these methods have blurred over time. In a parallel of nature, they have effectively cross-bred and merged into a single broader class of optimization methods (Bäck and Schwefel, 1993). By adopting the best traits of each of their parent lines, these hybrid evolutionary algorithms offer the potential to be superior to both genetic algorithms and evolution strategies in the optimization of large, difficult modelling problems.

References

Annevelink, E., 1992. Operational planning in horticulture: optimal space allocation in pot-plant nurseries using heuristic techniques. Journal of Agricultural Engineering Research 51, 167–177.
Bäck, T., Schwefel, H.-P., 1993. An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation 1, 1–23.
Bäck, T., Hammel, U., Schwefel, H.-P., 1997. Evolutionary computation: comments on the history and current state. IEEE Transactions on Evolutionary Computation 1, 3–17.
Barioni, L.G., Dake, C.K.G., Parker, W.J., 1997. Optimising rotational grazing in sheep management systems. In: Proceedings of the 1997 International Congress on Modelling and Simulation, 8–11 December, pp. 1068–1073.
Blickle, T., Thiele, L., 1997. A comparison of selection schemes used in evolutionary algorithms. Evolutionary Computation 4, 361–394.
Bos, J., 1993. Zoning in forest management: a quadratic assignment problem solved by simulated annealing. Journal of Environmental Management 37, 127–145.
Davis, L., 1991. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York.
De Falco, I., Del Balio, R., Della Cioppa, A., Tarantino, E., 1996. Breeder genetic algorithms for airfoil design optimisation. In: Proceedings of the IEEE International Conference on Evolutionary Computation, 20–22 May, pp. 71–75.
De Jong, K., Sarma, J., 1995. On decentralizing selection algorithms. In: Proceedings of the Sixth International Conference on Genetic Algorithms, 15–19 July, pp. 17–23.
Eisgruber, L.M., Lee, G.E., 1971. A systems approach to studying the growth of the farm firm. In: Dent, J.B., Anderson, J.R. (Eds.), Systems Analysis in Agricultural Management. Wiley, Sydney, pp. 330–347.
Fletcher, R., 1987. Practical Methods of Optimization. Wiley, New York.
Fogel, D.B., 1995. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, New York.
Fu, M.C., 1994. Optimization via simulation: a review. Annals of Operations Research 53, 199–247.
Glover, F., 1990. Tabu search: a tutorial. Interfaces 20, 74–94.
Goffe, W.L., Ferrier, G.D., Rogers, J., 1994.
Global optimization of statistical functions with simulated annealing. Journal of Econometrics 60, 65–99.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
Grefenstette, J.J., 1995. A user's guide to the genetic search implementation system (GENESIS, Version 5.0). ftp://www.aic.nrl.navy.mil/pub/galist/src/genesis.tar.Z
Hart, R.P.S., Larcombe, M.T., Sherlock, R.A., Smith, L.A., 1998. Optimisation techniques for a computer simulation of a pastoral dairy farm. Computers and Electronics in Agriculture 19, 129–153.
Hendrickson, J.D., Sorooshian, S., Brazil, L.E., 1988. Comparison of Newton-type and direct search algorithms for calibration of conceptual rainfall–runoff models. Water Resources Research 24, 691–700.
Hinterding, R., Gielewski, H., Peachey, T.C., 1995. The nature of mutation in genetic algorithms. In: Proceedings of the
Sixth International Conference on Genetic Algorithms, 15–19 July, pp. 65–72.
Holmes, W.E., 1995. BREEDCOW and DYNAMA – Herd Budgeting Software Package. Queensland Department of Primary Industries, Townsville.
Horton, B., 1996. A method of using a genetic algorithm to examine the optimum structure of the Australian sheep breeding industry: open-breeding systems, MOET and AI. Australian Journal of Experimental Agriculture 36, 249–258.
Ingber, L., Rosen, B., 1992. Genetic algorithms and very fast simulated reannealing: a comparison. Mathematical and Computer Modelling 16, 87–100.
Jones, T., 1995. Crossover, macromutation, and population-based search. In: Proceedings of the Sixth International Conference on Genetic Algorithms, 15–19 July, pp. 73–80.
Keane, A.J., 1996. A brief comparison of some evolutionary optimization methods. In: Rayward-Smith, V., Osman, I., Reeves, C., Smith, G.D. (Eds.), Modern Heuristic Search Methods. Wiley, New York, pp. 255–272.
Koza, J.R., 1994. Genetic Programming II: Automatic Discovery of Reusable Programs. MIT Press, Cambridge, MA.
Le Riche, R.G., Knopf-Lenoir, C., Haftka, R.T., 1995. A segregated genetic algorithm for constrained structural optimization. In: Proceedings of the Sixth International Conference on Genetic Algorithms, 15–19 July, pp. 558–565.
Lockwood, C., Moore, T., 1993. Harvest scheduling with spatial constraints: a simulated annealing approach. Canadian Journal of Forestry Research 23, 468–478.
Mayer, D.G., Belward, J.A., Burrage, K., 1996. Use of advanced techniques to optimize a multi-dimensional dairy model. Agricultural Systems 50, 239–253.
Mayer, D.G., Belward, J.A., Burrage, K., 1998a. Optimizing simulation models of agricultural systems. Annals of Operations Research 82, 219–231.
Mayer, D.G., Belward, J.A., Burrage, K., 1998b. Tabu search not an optimal choice for models of agricultural systems. Agricultural Systems 58, 243–251.
Mayer, D.G., Belward, J.A., Burrage, K., 1999.
Performance of genetic algorithms and simulated annealing in the economic optimization of a herd dynamics model. Environment International (in press).
Mayer, D.G., Schoorl, D., Butler, D.G., Kelly, A.M., 1991. Efficiency and fractal behaviour of optimisation methods on multiple-optima surfaces. Agricultural Systems 36, 315–328.
Michalewicz, Z., 1996. Genetic Algorithms + Data Structures = Evolution Programs, 3rd Edition. Springer, Berlin.
Mühlenbein, H., Schlierkamp-Voosen, D., 1993. Predictive models for the breeder genetic algorithm. Evolutionary Computation 1, 25–49.
Mühlenbein, H., Schlierkamp-Voosen, D., 1994. The science of breeding and its application to the breeder genetic algorithm BGA. Evolutionary Computation 1, 335–360.
Nelder, J.A., Mead, R., 1965. A simplex method for function minimisation. The Computer Journal 7, 308–313.
O'Rourke, P.K., Winks, L., Kelly, A.M., 1992. North Australia Beef Producer Survey 1990. Queensland Department of Primary Industries, Brisbane.
Parsons, D.J., 1998. Optimising silage harvesting plans in a grass and grazing simulation using the revised simplex method and a genetic algorithm. Agricultural Systems 56, 29–44.
Peck, C.C., Dhawan, A.P., 1995. Genetic algorithms as global random search methods: an alternative perspective. Evolutionary Computation 3, 39–80.
Polyak, B.T., 1987. Introduction to Optimization. Optimization Software, New York.
Radcliffe, N., Wilson, G., 1990. Natural solutions give their best. New Scientist 126, 35–38.
Wang, Q.J., 1991. The genetic algorithm and its application to calibrating conceptual rainfall–runoff models. Water Resources Research 27, 2467–2471.
Watson, R.A., Sumner, N.R., 1997. Maximising the landed value from prawn fisheries using a variation on the simulated annealing algorithm. In: Proceedings of the International Congress on Modelling and Simulation, 8–11 December, pp. 864–868.
Widell, H., 1997. GENIAL 1.1: A function optimizer based on evolutionary algorithms – User manual. WWW: http://hjem.get2net.dk/widell/genial.htm
Wilde, D.J., 1964. Optimum Seeking Methods. Prentice-Hall, Englewood Cliffs, NJ.