Copyright © IFAC 12th Triennial World Congress, Sydney, Australia, 1993
GENETIC ALGORITHMS IN CONTROL SYSTEMS ENGINEERING

P.J. Fleming and C.M. Fonseca

Department of Automatic Control and Systems Engineering, University of Sheffield, P.O. Box 600, Mappin Street, Sheffield, S1 4DU, UK
Abstract. Recent research into the mechanisms of evolution and genetics has shown how biological systems have managed to develop some very powerful methods of optimizing and adapting themselves to meet new environmental challenges. For applications in control systems engineering, many of the characteristics exhibited by genetic algorithms are particularly appropriate. They can be used as an optimization tool or as the basis of adaptive systems. The versatile and robust qualities of these algorithms are reviewed and their relevance for control systems is highlighted. Applications are described and implementation issues are addressed, including parallelization. Prospective future directions are identified.

Key Words. Genetic algorithms; optimization; parallel processing; automatic control; adaptive control
1. INTRODUCTION
Nature has been a source of inspiration and metaphors for the application of the parallel processing paradigm in control systems engineering: systolic arrays (Irwin, 1992), artificial neural networks (Miller et al., 1990; Ruano et al., 1992) and, now, genetic algorithms. Based on Darwin's survival-of-the-fittest strategy, genetic algorithms (GAs) constitute a growing field of research. Together with Evolutionary Strategies (Bäck et al., 1991), they are members of the broader class of Evolutionary Algorithms.
2. WHAT ARE GENETIC ALGORITHMS?
Genetic algorithms are stochastic global search algorithms. They operate on a population of current approximations - the individuals - initially drawn at random, from which improvement is sought. Individuals are encoded as strings - the chromosomes - constructed over some particular alphabet, e.g., the binary alphabet {0, 1}, so that chromosome values, or genotypes, are uniquely mapped onto the decision variable (phenotypic) domain. As described below, all of the search process takes place at the coding level.
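The genotype-to-phenotype mapping can be sketched as follows. This is an illustrative example, not taken from the paper; the function name and the choice of a linear map over an interval are assumptions.

```python
def decode(chromosome, lo, hi):
    """Map a binary chromosome (genotype), e.g. '10110', onto a real
    decision variable (phenotype) in the interval [lo, hi]."""
    value = int(chromosome, 2)                 # genotype as an integer
    max_value = 2 ** len(chromosome) - 1       # largest representable code
    return lo + (hi - lo) * value / max_value  # linear map to the phenotype

# A 5-bit coding of [0, 31] reproduces the integers exactly:
# decode('00000', 0, 31) -> 0.0, decode('11111', 0, 31) -> 31.0
```

The length of the chromosome fixes the resolution of the phenotypic domain, as discussed for the SGA below.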
GAs are adaptive search techniques, based on the principles of natural genetics and natural selection, which, in control systems engineering, can be used as an optimization tool or as the basis of more general adaptive systems. Following an introduction to the simple GA, this paper describes some variations on the simple model, also exploring the parallel nature of the process. The important characteristics of GAs are identified and their relevance to control engineering applications is explained, together with a description of tools to support their study and evaluation. One of the early GA pioneers, Goldberg, applied this algorithmic approach to the control and optimization of gas pipelines (Goldberg, 1985). This paper surveys a range of recent control applications of GAs and speculates on promising developments.
Once the decision variable domain representation of the current population is calculated, individual performance is assessed according to the objective function which characterizes the problem to be solved. Whilst playing the role of the environment, the objective function establishes the basis for selection. At the reproduction stage, a fitness value is derived from the raw individual performance measure given by the objective function, and used to bias the selection process. Highly fit individuals will have a higher probability of being selected to take part in the next stage than the less fit and, therefore, the average performance of this intermediate generation of individuals is expected to increase.
random with a high probability that crossover will take place. In the affirmative case, a crossover point is selected at random and, say, the rightmost segments of each individual are exchanged to produce two offspring, as illustrated in Fig. 1.
The selected individuals are then modified through the application of genetic operators, in order to obtain the next generation. Genetic operators manipulate the characters (genes) that constitute the chromosomes directly, following the assumption that certain genes code, on average, for fitter individuals than other genes. Genetic operators can be divided into two main categories.
[Fig. 1 shows two parent bit strings exchanging their rightmost segments at a randomly chosen crossover point to produce two offspring.]
Recombination. Causes pairs, or larger groups, of individuals to exchange genetic information with one another.
Fig. 1. Single point crossover

In this SGA, the mutation operation consists of simply flipping each individual bit with low probability. This background operator is used to ensure that the probability of searching a particular subspace of the problem space is never zero, thereby tending to inhibit the possibility of ending the search at a local, rather than a global, optimum.
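The two SGA operators just described, single point crossover and bit-flip mutation, can be sketched as follows. The function names and the use of Python strings for chromosomes are illustrative choices.

```python
import random

def single_point_crossover(parent1, parent2, rng):
    """Exchange the rightmost segments of two equal-length bit strings
    at a randomly chosen crossover point (cf. Fig. 1)."""
    point = rng.randrange(1, len(parent1))  # never cut at either end
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(chromosome, rate, rng):
    """Flip each bit independently with a small probability `rate`."""
    return ''.join(bit if rng.random() >= rate else '10'[int(bit)]
                   for bit in chromosome)
```

In a full SGA, crossover itself is applied only with some high probability per pair, while the mutation rate is kept low, making mutation the background operator described above.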
Mutation. Causes individual genetic representations to be changed according to some probabilistic rule . After recombination and mutation, individual chromosomes are decoded, evaluated, and selected according to their fitness, and the process continues .
2.2. Other GA Variants
The simple genetic algorithm has been improved in many ways. Different selection methods have been proposed (Baker, 1987), which reduce the stochastic errors associated with roulette wheel selection. Ranking (Baker, 1985) has been introduced as an alternative to proportional fitness assignment, and shown to help avoid premature convergence and to speed up the search when the population approaches convergence (Whitley, 1989). Other recombination operators have been proposed, such as multiple point and reduced-surrogate crossover (Booker, 1987). The mutation operator has remained more or less unaltered, but the use of real-coded chromosomes requires alternative mutation operators (Davis, 1991). It has also motivated the development of non-conventional recombination operators, such as intermediate crossover (Bäck et al., 1991).
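Rank-based fitness assignment can be sketched as below. The linear formula and the selective pressure value are common illustrative choices, not a prescription from the paper.

```python
def linear_ranking(performances, selective_pressure=1.8):
    """Assign fitness from rank alone (worst rank 0, best rank n-1),
    ignoring the scale of the raw performances. Assumes n >= 2.
    Fitness runs linearly from 2 - sp (worst) to sp (best)."""
    n = len(performances)
    order = sorted(range(n), key=lambda i: performances[i])
    fitness = [0.0] * n
    for rank, i in enumerate(order):
        fitness[i] = (2 - selective_pressure
                      + 2 * (selective_pressure - 1) * rank / (n - 1))
    return fitness
```

Because only the ordering of the raw performances matters, the selective pressure stays constant as the population converges, which is the property exploited by the ranking methods cited above.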
2.1. The Simple Genetic Algorithm

A simple genetic algorithm (SGA) is described by Goldberg (1989). Individuals encode a set of decision variables by concatenating them in a bit string, according to the standard binary code, where the interval of interest and desired precision of the decision variables determine the length of the bit string. The objective function, f(x), is to be optimized and, in GA terminology, is a measure of fitness. The individual fitness, F(x_i), is computed as the individual performance, f(x_i), relative to that of the whole population, i.e.,
    F(x_i) = f(x_i) / sum_{i=1}^{N} f(x_i)                    (1)
where N is the population size and x_i represents the phenotypic value of individual i. A probabilistic component is often incorporated into the selection procedure; one such mechanism is known as roulette wheel selection. Here, each F(x_i) is used as the width of a slot of a biased roulette wheel. Selection is performed by spinning the roulette N times to obtain the N individuals of the next generation.
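Roulette wheel selection with the proportional fitness of equation (1) can be sketched as follows; the function and parameter names are illustrative.

```python
import random

def roulette_select(population, f, N, rng):
    """Spin a biased roulette wheel N times. Slot widths are the
    relative fitnesses F(x_i) = f(x_i) / sum_j f(x_j) of equation (1).
    Assumes the raw performances f(x) are non-negative."""
    performances = [f(x) for x in population]
    total = sum(performances)
    fitness = [p / total for p in performances]  # slot widths, sum to 1
    return rng.choices(population, weights=fitness, k=N)
```

Each spin is independent, which is the source of the stochastic selection errors that ranking and other selection methods (discussed below) aim to reduce.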
Other parameters have been introduced, namely the generation gap. The synchronous generational approach implemented by the SGA implies non-overlapping populations, which is known not to be the case in natural systems. The generation gap establishes how many offspring are produced at each generation. A GA which generates a single offspring per generation is called a steady-state GA, and could even be run asynchronously in parallel.
The recombination operator used in the SGA is single point crossover. Individuals are paired at
GAs have also been applied to ordering problems. In this case, representations other than binary are used, which require different genetic operators. Examples of operators for ordering GAs are inversion (which is, in fact, a mutation operator), partially matched crossover, order crossover, cycle crossover (Goldberg, 1989), edge-recombination crossover (Whitley et al., 1991) and analogous crossover (Davidor, 1989).
a strength of GAs is their lack of reliance on special domain-specific heuristics. A number of features commonly arising in control systems engineering problems, and the ways in which these are treated in the GA approach, are listed below.

Discrete decision variables. These can be handled directly through binary, or even n-ary, encoding. When functions can be expected to be locally monotonic with respect to such variables, the use of Gray coding is known to better exploit that monotonicity.
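The standard reflected binary Gray code mentioned here can be sketched in a few lines; the function names are illustrative.

```python
def binary_to_gray(n):
    """Reflected binary Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray coding by cascading the XOR over shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

The useful property for a GA is that adjacent integers differ in exactly one Gray-coded bit, so a single mutation can always move the search to a neighbouring phenotype; this is what allows local monotonicity to be exploited.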
Finally, several models of parallel GAs have been proposed. These are particularly interesting for the new metaphors they implement and for their performance, which has proved superior to that of the so-called panmictic GA, in which any individual may mate with any other. The simple genetic algorithm already offers near-linear speedup when run on a parallel architecture for computationally expensive objective functions. Other GA models, such as the "island" and several "neighbourhood" models (Gorges-Schleuter, 1992), are reported to perform even better due to implementing the concept of geographical isolation, which appears to be an important aspect of natural evolution.
Continuous decision variables. Real values can be approximated to the necessary degree by using a fixed point binary representation. However, in most control problems, the relative precision of the parameters is more important than their absolute precision. In this case, the logarithm of the parameter should be encoded instead. Alternatively, a floating point binary representation can also be used directly. The previous considerations about Gray coding apply to both fixed and floating point formats. In fact, a convenient floating point format can be shown to grow in the same fashion as standard binary, and can therefore be Gray encoded. Fig. 2 shows the subset of the search space (horizontal axis) covered by schema ****1, for both fixed and floating point codings. Note the log-like growth in the second graph.
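Encoding the logarithm of a parameter, as suggested above for relative precision, can be sketched as follows. The function name, interval, and bit count are illustrative assumptions.

```python
def decode_log(chromosome, lo, hi):
    """Decode a bit string into [lo, hi] on a logarithmic scale, so that
    each coding step scales the parameter by a constant factor and the
    relative (rather than absolute) precision is uniform. Requires lo > 0."""
    fraction = int(chromosome, 2) / (2 ** len(chromosome) - 1)
    return lo * (hi / lo) ** fraction
```

With, say, 10 bits over [1e-3, 1e3], consecutive codes differ by the same ratio everywhere in the range, whereas a linear fixed point coding would waste most of its resolution on the large values.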
3. GENETIC OPTIMIZERS FOR CONTROL SYSTEMS ENGINEERING

Since they search from a population of points, and are based on probabilistic transition rules, GAs are more likely to converge to global optima than conventional optimization techniques. The latter are usually based on deterministic hill-climbing methods, which, by definition, will only find local optima. Also, conventional methods often require the objective function to be well-behaved, which restricts their application.
The direct manipulation of real-valued chromosomes, as in the case of Evolutionary Strategies, has also been proposed (Davis, 1991). There is no consensus yet about which representation is better when continuous variables are to be addressed, as real-valued chromosomes are claimed to lead to faster rates of convergence, on the one hand, and accused of potentially leading to "blocking" (Goldberg, 1990), on the other.
On the other hand, GAs can tolerate discontinuities and noisy function evaluations. GAs assume no a priori information about a problem other than that necessary to define the decision variable space and the problem itself, i.e., the objective function. The objective function is only required to be defined over the search space; no assumptions about continuity, or the existence of gradient information, for example, need to be made. However, any knowledge about a function's behaviour should be incorporated in the formulation of a particular GA. This is often the case in control engineering problems.
Scale. Many optimization problems are characterized by a real-valued objective function. However, for these to be handled by a GA, objective values must be converted into non-negative fitness values. Initially, the use of offsetting and scaling was proposed (Goldberg, 1989). Scaling retains the relative individual performance and reflects it at the selection stage, but not in a unique way, as offsetting alters such a measure. On the other hand, by completely ignoring the scale on which problems are expressed, a ranking approach keeps the selective pressure constant and brings a number of advantages to the GA process (Whitley, 1989). Some theoretical results for the setting of optimal mutation rates are based on this assumption of a constant selective pressure.
Despite their shortcomings, the power of conventional optimization methods is recognized. GAs should be harnessed to address those problems which are not susceptible to efficient solution by conventional approaches. Further,
Multiple objectives. Lastly, control engineering problems very seldom require the optimization of a single objective function. Instead, there are often competing objectives which should be optimized simultaneously. GAs have the potential to become a powerful method for multiobjective optimization, enabling decision makers to progressively articulate their preferences while learning about their problems' trade-offs (Fonseca and Fleming, 1993).
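One simple way of turning competing objectives into a single selection measure is a Pareto dominance count, sketched below. This generic scheme is an illustration, not the specific multiobjective ranking proposed in the cited work.

```python
def dominates(a, b):
    """True if objective vector `a` is no worse than `b` in every
    objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_rank(objectives):
    """Rank each individual by how many others dominate it; rank 0
    identifies the current non-dominated (trade-off) set."""
    return [sum(dominates(other, vec) for other in objectives)
            for vec in objectives]
```

Selection can then favour low-ranked individuals, driving the population towards the trade-off surface rather than towards a single weighted-sum optimum.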
4. EVOLUTIONARY OPTIMIZATION TOOLS
The study and evaluation of GAs, since they are essentially non-analytic, largely depend on simulation. Since they are strongly application-independent, GA software potentially has a very broad domain of application. Evolutionary optimization packages developed so far include Genesis (Grefenstette, 1990), GENITOR (Whitley, 1989), Escapade (Hoffmeister, 1991), and Object-Oriented GA (Davis, 1991). However, the GA community has not yet found a standard genetic algorithm software package. Indeed, this subject was aired at the last Conference on Parallel Problem Solving from Nature (Männer and Manderick, 1992).
Fig. 2. A convenient floating point binary format grows in the same fashion as standard binary. (The two panels plot the subset of the search space covered by schema ****1 for a 5-bit fixed point coding and for a 2-bit exponent, 3-bit mantissa floating point coding; the second shows log-like growth.)
Constraints. Most engineering optimization problems are constrained in some way. For example, control loops are required to be stable and actuators have finite ranges of operation. GAs can handle constraints in two ways, the most efficient of which is to embed them in the coding of the chromosomes. When this is not possible, the performance of invalid individuals should be calculated according to a penalty function, which ensures that such individuals are indeed low performers. Appropriate penalty functions for a particular problem are not necessarily easy to design, since they may considerably affect the efficiency of the genetic search (Richardson et al., 1989).
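A minimal penalty-function evaluation might look as follows; the constraint convention and the penalty weight are illustrative assumptions, and choosing that weight well is exactly the difficulty noted above.

```python
def penalized_performance(x, objective, constraints, weight=1000.0):
    """Evaluate `objective(x)` (minimization) plus a penalty proportional
    to the total violation of constraints expressed as g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + weight * violation
```

With too small a weight, infeasible individuals may dominate the population; with too large a weight, the boundary of the feasible region is explored poorly.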
Certainly, within the control engineering community, the picture is somewhat different. MATLAB (MathWorks, 1991), for example, is regularly used for modelling, design and simulation, providing an interactive environment, graphical capabilities, and a very large set of tools, in the form of Toolboxes, which cover a very wide range of aspects of control engineering. A Genetic Algorithm Toolbox for MATLAB is being developed by the authors (Chipperfield et al., 1992), so as to provide the control engineer with GA software which is easy to use, practical and efficient, while being portable and based on a standard scientific computation environment. In this way, the genetic algorithm becomes simply another powerful tool to be added to all of those already available.
Multimodality. Multimodal functions are particularly difficult for conventional optimizers. GAs are more robust, but they can still miss the global optimum due to genetic drift. Genetic drift consists of a certain feature being selected over another of equivalent fitness, due to the stochastic errors inherent in selection operating on finite populations. Techniques proposed to limit the effect of genetic drift include fitness sharing (Goldberg and Richardson, 1987; Deb and Goldberg, 1989) and crowding. These force the GA to search for all of the optima simultaneously, and increase the probability of finding the global one.
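Fitness sharing can be sketched as below for one-dimensional phenotypes; the triangular sharing function and the niche radius are common illustrative choices.

```python
def shared_fitness(fitness, positions, sigma_share=0.1):
    """Degrade raw fitness by a niche count so that individuals crowded
    onto one peak must share its payoff (triangular sharing function)."""
    def sharing(d):
        return 1 - d / sigma_share if d < sigma_share else 0.0
    shared = []
    for i, fi in enumerate(fitness):
        niche_count = sum(sharing(abs(positions[i] - positions[j]))
                          for j in range(len(fitness)))
        shared.append(fi / niche_count)  # niche_count >= 1 (self-distance 0)
    return shared
```

Because the payoff of a peak is divided among the individuals occupying it, subpopulations stabilize around each optimum instead of drifting onto a single one.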
5. CONTROL APPLICATIONS OF GAs

Broadly, the application of GAs to control engineering problems may be classified into two main areas: off-line design and on-line adaptation, learning and optimization. In CACSD, the GA is run off-line as an optimizer, where optimal individuals evolve from a random population. In on-line applications, however, the direct evaluation (through experimentation) of weak individuals may have catastrophic consequences. Therefore, it is common practice for a GA to operate on a model of the system on-line and only to indirectly influence the control operation. Also, on-line applications will often require a faster rate of convergence than off-line applications, even if this is at the expense of a decrease in robustness.

5.3. System Identification
Kristinsson and Dumont (1992) have applied genetic algorithms to the on-line, as well as off-line, identification of both discrete and continuous systems. In their implementation, windowed input and output data is used to construct an objective (cost) function of the parameters to be identified, which can be either the system parameters in their natural form, poles and zeros, or any convenient set of transformed variables.
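The windowed-data cost function might take the least-squares form sketched below; this form, and the names used, are illustrative assumptions rather than the cited authors' exact formulation.

```python
def identification_cost(params, inputs, outputs, model):
    """Sum of squared errors between windowed plant output data and a
    candidate model's predictions; `model` maps (params, u) -> y_hat.
    The GA minimizes this over the identified parameters."""
    return sum((y - model(params, u)) ** 2
               for u, y in zip(inputs, outputs))
```

Each GA individual encodes a candidate parameter set, and its performance is simply this cost evaluated over the current data window.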
Some recent examples of off-line and on-line control applications are described below.
Since the real system parameters would be expected to minimize such an objective function for any set of input-output data, the GA sees an environment which, although varying in time due to the diversity of the data sets, maintains its optimum in the same region of the search space. In this way, only the best estimate found for a particular set of data is necessary to adaptively control the system. The evaluation of individuals other than the best in the GA population is of no consequence to the control loop.

5.1. Optimal Controller Tuning
The tuning of controllers is a classic optimization problem for control engineers and can readily be addressed using a GA. Hunt (1992) applied the approach to four classes of problems, namely LQG, H∞ minimum and mixed sensitivity problems, and frequency domain optimization. The generality and computational simplicity of the algorithm contrasts with the complexity of the theoretical solutions to such problems. Also, where conventional optimizers have been applied to problems of this type, difficulties have arisen from the restriction that parameter estimates must lead to stabilizing controllers. This ill-defined constraint is easily incorporated in the GA approach, and the multimodal nature of the objective functions is accommodated by the global nature of the search.
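A minimal tuning objective of the kind a GA would minimize can be sketched as follows. The first-order plant, PI structure, and simulation horizon are illustrative assumptions, not any of Hunt's problem classes.

```python
def step_response_cost(kp, ki, dt=0.01, steps=500):
    """Integral squared error of a PI loop around the illustrative
    first-order plant dy/dt = -y + u, following a unit step reference,
    simulated with a simple forward-Euler integration."""
    y, integral, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - y                 # unit step reference
        integral += error * dt
        u = kp * error + ki * integral  # PI control law
        y += dt * (-y + u)              # Euler step of the plant
        cost += error * error * dt
    return cost
```

A GA would evaluate this cost for a population of encoded (kp, ki) pairs; well-chosen gains should score noticeably better than a detuned controller.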
5.4. Classifier Systems
Classifier systems (CSs) are machine learning systems that encode production rules in string form (classifiers) and learn such rules on-line, during their interaction with an arbitrary environment. Each rule has the structure

    <condition> : <action>

which means that the action may be taken when the condition is satisfied (matched).
5.2. Robust Stability Analysis
They are massively parallel, message passing, rule-based systems which use GAs. The rule system is one of the three main components of a CS. It communicates with the outside world through a message list. Messages may cause an action to be taken, or another message to be produced, by matching one or more classifiers in the classifier store.

One approach to the stability analysis of systems in the presence of uncertain parameters involves searching a box in the parameter space, the so-called Q-box, for points which yield unstable systems. Murdock et al. (1991) proposed the genetic search of the Q-box, formulating it as the maximization of the maximum root of the system within given parametric uncertainties. This non-differentiable objective function presents significant problems for the conventional optimization approach.

The second component of a CS is the apportionment of credit system, which associates a strength with each classifier. The right a classifier has to respond to a particular message depends on its strength. The strength is reduced by a certain amount every time the classifier is activated, that amount being paid to whatever (classifier or environment) was responsible for its activation. Classifiers may also receive reward from the environment. In this way, rules which contribute towards good actions being taken tend to see their strengths increased.
The GA was applied to a number of benchmark examples in order to verify its performance. It proved able to handle large numbers of parameters while producing correct results and being computationally more efficient than other existing methods. Its application to problems which lack a necessary and sufficient condition test, or whose complexity makes other methods impractical, is suggested.
The third component of a CS is the GA, which is in charge of discovering new, possibly better, rules. While selection of good rules is provided through apportionment of credit, bad rules may be replaced by combinations of good ones, and their performance evaluated on-line.
A GA approach to this problem (Davidor, 1991) required the introduction of specialized genetic operators (Davidor, 1989) and was then improved through the use of Lamarckian learning, whereby acquired characteristics are also passed on to offspring, to bias the action of the genetic operators. Although Lamarckism is generally accepted not to take part in natural evolution, there is no reason why it should not be used in artificial evolution, when able to improve it.
While it is unsafe to evolve rules for complex systems on-line using a simple GA, its successful incorporation in a rule-based learning system has been reported in connection with the optimization of combustion in multiple burner installations (Fogarty, 1990). Other hybridized techniques involve fuzzy control.
Comparison with hill-climbing shows the ability of the GA to find good trajectories, balancing a possibly slower rate of convergence against much greater robustness.

5.5. Fuzzy Control
Linkens and Nyongesa (1992) identify four main aspects of fuzzy controller design:

• selection of scaling factors,
• derivation of optimal membership functions,
• elicitation of a rule-base, and
• the on-line modification of the rule-base.
6. FUTURE PERSPECTIVES

The dramatic expansion in the domain of application for optimization afforded by GAs has been mirrored by the diversity of control applications beginning to appear. This is complemented by the availability of parallel computing power, which GAs can effectively exploit. The parallel paradigm itself is a stimulant for further extension of the scope of the GA approach.
The optimization of membership functions is seen as an off-line task, while a fuzzy classifier system is used to acquire and modify good sets of rules on-line. Adaptive control is provided by the constant adaptation of the rule set to the varying dynamics of the problem, without the explicit identification of a plant model.
GAs are well suited to off-line implementation, being able to approach multiobjective optimization problems directly, and enabling the decision maker to be inserted in the optimization process and guide evolution while learning about the problems' trade-offs. This especially applies when objective functions are ill-behaved in some way.
On the other hand, Karr (1992) concentrates all of the adaptation effort on the on-line optimization of the membership functions . It is argued that the rules-of-thumb humans use tend to remain the same, even across a wide range of conditions. It is the definition of the linguistic terms, here the membership functions, which is adapted to the present situation.
The GA also allows combinatorial problems to be approached in an efficient way. In addition to the travelling salesman problem, scheduling and path planning, other problems such as process-processor mapping, sensor and actuator placement, and subset selection can also be addressed.
This concept has been implemented together with another GA-based system, an analysis (identification) element, to control a laboratory pH system of varying dynamics. While the analysis GA provided the parameters of a model of the process being controlled, membership functions were optimized using such a model as a basis for their evolution. The current best set of membership functions was then used to actually control the plant.
5.6. Robot Trajectory Planning
The combination of genetic search with hillclimbing approaches is also particularly attractive, especially when problems allow for computationally simple local optimization . In this way, the GA simply searches for good starting points while the local optimizer efficiently climbs the hill, the whole process resulting in faster, but still global, convergence.
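The division of labour just described can be sketched with a toy local optimizer; the step size, iteration count, and one-dimensional setting are illustrative assumptions.

```python
def hill_climb(x, f, step=0.1, iterations=100):
    """Greedy local refinement of a GA-supplied starting point
    (minimization of a one-dimensional function f)."""
    for _ in range(iterations):
        candidates = (x - step, x, x + step)
        x = min(candidates, key=f)  # move to the best neighbour, or stay
    return x
```

In the hybrid scheme, the GA maintains a diverse population of starting points and this cheap climber finishes each one, so that global exploration and efficient local convergence are handled by the component best suited to each.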
On-line applications, however, present significant challenges. When a system model is available, or can be identified, a conventional GA can
The automatic generation of robot trajectories (Solano, 1992) is an order-dependent, generally multimodal problem, where candidate solutions are typically of variable length.
be applied on-line with little alteration. If this is not the case, however, the nature of the GA search demands that the system possess a rare degree of robustness to sustain the level of exploration required. Methods may emerge which effectively limit the scope of the search while maintaining reasonable performance from the GA optimizer. One approach has been to use the GA in parallel with a conventional controller to provide a bounded contribution to the control action (Nordvik and Renders, 1991). Also, schemes in which GA steps are interleaved with control steps may prove a fruitful area for research. Alternatively, Classifier Systems provide an effective approach to the control of plants which cannot be modelled. Following their biological analogy, GAs can be used to build adaptive systems that are able to cope with a changing environment.
Davidor, Y. (1989). Analogous crossover. In: Proc. 3rd Int. Conf. on Genetic Algorithms, (J.D. Schaffer, Ed.), pp. 98-103. Morgan Kaufmann.

Davidor, Y. (1991). A genetic algorithm applied to robot trajectory generation. In: Handbook of Genetic Algorithms, (L. Davis, Ed.), Chap. 12, pp. 144-165. van Nostrand Reinhold, New York.

Davis, L., Ed. (1991). Handbook of Genetic Algorithms. van Nostrand Reinhold, New York.

Deb, K., and D.E. Goldberg (1989). An investigation of niche and species formation in genetic function optimization. In: Proc. 3rd Int. Conf. on Genetic Algorithms, (J.D. Schaffer, Ed.), pp. 42-50. Morgan Kaufmann.
The use of GAs with rule-based systems (classifier systems) and artificial neural networks (ANNs) is expected to receive more attention in the future. For example, GAs may assist ANN operation through the determination of suitable ANN structures, and ANNs, such as Kohonen networks, may be coupled with GAs to expedite learning. Learning systems in general, of course, represent an important growth area for control, and GAs are an important component of this development.
7. REFERENCES

Fogarty, T.C. (1990). Adaptive rule-based optimization of combustion in multiple burner installations. In: Expert Systems in Engineering: Principles and Applications, (G. Gottlob, and W. Nejdl, Eds.), pp. 241-248. Springer-Verlag.

Fonseca, C.M., and P.J. Fleming (1993). Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. Research report 466, Dept. Automatic Control and Systems Eng., University of Sheffield, Sheffield, U.K.
Bäck, T., F. Hoffmeister, and H.-P. Schwefel (1991). A survey of evolution strategies. In: Proc. 4th Int. Conf. on Genetic Algorithms, (R. Belew, Ed.), pp. 2-9. Morgan Kaufmann.

Baker, J.E. (1985). Adaptive selection methods for genetic algorithms. In: Proc. 1st Int. Conf. on Genetic Algorithms, (J.J. Grefenstette, Ed.), pp. 101-111. Lawrence Erlbaum Associates.

Baker, J.E. (1987). Reducing bias and inefficiency in the selection algorithm. In: Proc. 2nd Int. Conf. on Genetic Algorithms, (J.J. Grefenstette, Ed.), pp. 14-21. Lawrence Erlbaum Associates.

Booker, L. (1987). Improving search in genetic algorithms. In: Genetic Algorithms and Simulated Annealing, (L. Davis, Ed.), pp. 61-73. Morgan Kaufmann.

Goldberg, D.E. (1985). Dynamic system control using rule learning and genetic algorithms. In: Proc. 9th Int. Joint Conf. on Artificial Intelligence, pp. 588-592.

Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Publishing Company.

Goldberg, D.E. (1990). The theory of virtual alphabets. In: Parallel Problem Solving from Nature, 1st Workshop, Proc., (H.-P. Schwefel, and R. Männer, Eds.), pp. 13-22. Springer-Verlag.

Goldberg, D.E., and J. Richardson (1987). Genetic algorithms with sharing for multimodal function optimization. In: Proc. 2nd Int. Conf. on Genetic Algorithms, (J.J. Grefenstette, Ed.), pp. 41-49. Lawrence Erlbaum Associates.
Chipperfield, A.J., C.M. Fonseca, and P.J. Fleming (1992). Development of genetic optimization tools for multi-objective optimization problems in CACSD. In: IEE Colloquium on Genetic Algorithms for Control Systems Engineering, pp. 3/1-3/6. The Institution of Electrical Engineers. Digest No. 1992/106.
Gorges-Schleuter, M. (1992). Comparison of local mating strategies in massively parallel genetic algorithms. In: Parallel Problem Solving from Nature, 2, (R. Männer, and B. Manderick, Eds.), pp. 553-562. North-Holland, Amsterdam.

Grefenstette, J.J. (1990). A User's Guide to GENESIS v5.0. Naval Research Laboratory, Washington, D.C.

Hoffmeister, F. (1991). The User's Guide to Escapade 1.2. Dept. Computer Science, University of Dortmund, Dortmund, Germany.

Hunt, K.J. (1992). Controller synthesis with genetic algorithms: The evolutionary metaphor in the context of control system optimization. Report, Dept. Mechanical Engineering, University of Glasgow, Glasgow, U.K.

Irwin, G.W. (1992). Parallel algorithms for control. In: Proc. 1992 IFAC Workshop on Algorithms and Architectures for Real-Time Control, (P.J. Fleming, and W.H. Kwon, Eds.), pp. 15-22. Pergamon Press. Preprints.

Karr, C.L. (1992). An adaptive system for process control using genetic algorithms. In: Int. Symp. on Artificial Intelligence in Real-Time Control, pp. 585-590, Delft, the Netherlands. IFAC/IFIP/IMACS. Preprints.

Kristinsson, K., and G.A. Dumont (1992). System identification and control using genetic algorithms. IEEE Trans. on Sys., Man, and Cybernetics, 22, 1033-1046.

Linkens, D.A., and H.O. Nyongesa (1992). A real-time genetic algorithm for fuzzy control. In: IEE Colloq. on Genetic Algorithms for Control Systems Engineering. The Institution of Electrical Engineers. Digest No. 1992/106.

Männer, R., and B. Manderick, Eds. (1992). Parallel Problem Solving from Nature, 2. North-Holland, Amsterdam.

MathWorks (1991). MATLAB User's Guide. The MathWorks, Inc.

Miller, W.T., R.S. Sutton, and P.J. Werbos (1990). Neural Networks for Control. MIT Press, Cambridge.

Murdock, T.M., W.E. Schmitendorf, and S. Forrest (1991). Use of a genetic algorithm to analyze robust stability problems. In: Proc. 1991 American Control Conf., Vol. 1, pp. 886-889. American Automatic Control Council, Evanston, IL.

Nordvik, J.-P., and J.-M. Renders (1991). Genetic algorithms and their potential for use in process control: A case study. In: Proc. 4th Int. Conf. on Genetic Algorithms, (R. Belew, Ed.), pp. 480-486. Morgan Kaufmann.

Richardson, J.T., M.R. Palmer, G. Liepins, and M. Hilliard (1989). Some guidelines for genetic algorithms with penalty functions. In: Proc. 3rd Int. Conf. on Genetic Algorithms, (J.D. Schaffer, Ed.), pp. 191-197. Morgan Kaufmann.

Ruano, A.E.B., D.I. Jones, and P.J. Fleming (1992). A neural network controller. In: Proc. 1991 IFAC Workshop on Algorithms and Architectures for Real-Time Control, (P.J. Fleming, and D.I. Jones, Eds.), pp. 27-32. Pergamon Press. IFAC Workshop Series, Number 4.

Solano, J. (1992). Parallel Computation of Robot Motion Planning Algorithms. PhD thesis, University of Wales, Bangor, UK.

Whitley, D. (1989). The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In: Proc. 3rd Int. Conf. on Genetic Algorithms, (J.D. Schaffer, Ed.), pp. 116-121. Morgan Kaufmann.

Whitley, D., T. Starkweather, and D. Shaner (1991). The travelling salesman and sequence scheduling: Quality solutions using genetic edge recombination. In: Handbook of Genetic Algorithms, (L. Davis, Ed.), Chap. 22, pp. 350-372. van Nostrand Reinhold, New York.