Journal of Computational Science xxx (2017) xxx–xxx
Buffered local search for efficient memetic agent-based continuous optimization

Wojciech Korczynski ∗, Aleksander Byrski, Marek Kisiel-Dorohinicki

Department of Computer Science, Faculty of Computer Science, Electronics and Telecommunications, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow, Poland
Article info

Article history: Received 15 September 2016; Received in revised form 28 January 2017; Accepted 4 February 2017; Available online xxx

Keywords: Memetic algorithms; Agent-based computing; Continuous optimization; Meta-heuristics
Abstract

In this paper, memetic search in classic and agent-based evolutionary algorithms is discussed. A local search is applied in an innovative way, namely during an agent's life, as well as in the classic way, during reproduction. Moreover, in order to efficiently utilize the available computing power, an efficient mechanism based on caching parts of the fitness function during the local search is proposed. The experimental results obtained for selected high-dimensional benchmark functions (with 5000 dimensions) show the apparent advantage of the proposed mechanism. © 2017 Elsevier B.V. All rights reserved.
1. Introduction

Some problems are too difficult to be efficiently explored by common optimization methods because they are too complex or their search space is too big. Therefore, there is an ongoing need for the creation of meta-heuristics and other general-purpose algorithms that will provide adequate solutions for such ‘black-box’ problems [1,2] in a reasonable amount of time. The first successful experiments using meta-heuristics (conducted by Davis [3] and Moscato [4]) revealed the necessity of adjusting the solver to the problem characteristics, in accordance with the so-called ‘no-free-lunch’ theorem [5,6] (which proclaims the impossibility of discovering a meta-heuristic method that would be an ultimate solution for all problems, no matter how well it works on a particular one). Therefore, the search for novel meta-heuristics adjusted to the given problems is still indispensable. These meta-heuristics are often inspired by various domains of life, such as biology, evolution, or genetics. An example of such a method is the well-known evolutionary algorithm, which has been thoroughly researched in recent years. In 1996, Cetnarowicz [7,8] enriched the evolutionary algorithm by introducing the notion of agency and proposed the Evolutionary Multi-Agent System (EMAS),
∗ Corresponding author. E-mail addresses: [email protected] (W. Korczynski), [email protected] (A. Byrski), [email protected] (M. Kisiel-Dorohinicki).
in which the main task is decomposed into sub-tasks entrusted to agents—intelligent objects [9] that are able to interact with one another as well as with the environment and make decisions autonomously. It is worth noting that the ability of EMAS to serve as a universal optimization algorithm has been formally proven [10,11]. EMAS has also been successfully applied to a number of optimization and related tasks (see, e.g., [12,13]) and efficiently implemented using both imperative [14] and functional programming paradigms [15,16]. In this paper, the idea of hybridizing EMAS with local search algorithms is discussed. The inspiration for these local search algorithms is meme theory, which allows us to introduce memetic algorithms (MAs) [17–19]. Initially popularized by Radcliffe and Surry [20] and others, MAs join local search with population-based search engines to create new hybrids. Despite their remarkable successes [21], these hybrids turned out to be computationally demanding. Therefore, further improvements have been proposed, such as the one described in [22] (which we follow in order to efficiently realize a local search in the solution space). In our research, we focused on two variants of the memetic Evolutionary Multi-Agent System—one with a local search applied in the course of agent mutation and the other with a local search carried out at every evolutionary step. This hybridization of EMAS with MAs is discussed in detail in Section 3. First, in Section 2, the EMAS itself and its main assumptions are introduced. Our experiments and their results are presented in Section 4. Section 5 comprises the paper's conclusions and a discussion of future work.
http://dx.doi.org/10.1016/j.jocs.2017.02.001 1877-7503/© 2017 Elsevier B.V. All rights reserved.
Please cite this article in press as: W. Korczynski, et al., Buffered local search for efficient memetic agent-based continuous optimization, J. Comput. Sci. (2017), http://dx.doi.org/10.1016/j.jocs.2017.02.001
Fig. 1. Schematic presentation of Parallel Evolutionary Algorithm (PEA) and Evolutionary Multi-Agent System (EMAS).
This paper is an extended version of the work published in [23]. A new memetic algorithm has been proposed (memetization during an EMAS agent’s life), and two new benchmark problems have been solved. Moreover, the memetic algorithm applied in our work is thoroughly described, and its pseudo-code has been provided. In addition, the obtained results have been studied more precisely—statistical and time-consumption tests have been performed.
2. Evolutionary agent-based computing

Evolutionary algorithms [24] belong to the group of population-based meta-heuristics. In the most popular variant (called a genetic algorithm), solutions are encoded into genotypes owned by individuals that form populations (groups of potential solutions), which are evaluated based on a fitness function. Poor solutions are eliminated in the process of selection, and the remaining ones create a mating pool. A subsequent population is created from this mating pool by using variation operators such as crossover or mutation. The whole process continues until some stopping condition is reached (e.g., a predefined number of iterations or reaching an acceptable solution). To preserve the population's diversity during the search, several techniques can be applied, such as the distribution of individuals among evolutionary islands, which also allows for the parallelization of the algorithm [25]. Fig. 1(a) schematically illustrates the Parallel Evolutionary Algorithm (PEA) used as a reference in this paper. Agency brought decentralization to evolutionary meta-heuristics by providing autonomy to the individuals. EMAS [7] allows us to achieve effective results [26,27] and decrease the computation cost, measured as the number of fitness-evaluation events [28]. In EMAS, the phenomena of inheritance and selection are modeled by agent reproduction and death (as illustrated in Fig. 1(b)). An instance of the solution is encoded in an agent's genotype, which is inherited from its parent(s) with the use of the variation operators of mutation and recombination. Noteworthy is the fact that it is easy to add mechanisms of diversity enhancement to EMAS (such as allopatric speciation [cf. [25]]) by introducing population decomposition and an action of agent migration.

In numerous research projects, EMAS has proven to be an efficient method for solving different problems—classic continuous benchmark optimization [28], inverse problems [29], optimization of neural network architecture [30], multi-objective optimization [31], multimodal optimization [32], and financial optimization [33]. Moreover, a number of EMAS variants have been constructed and used for obtaining the above-mentioned results; e.g., immunological EMAS [34], co-evolutionary EMAS [32], and elitist EMAS [35]. EMAS has thus been proven to be a versatile optimization mechanism for practical situations. Moreover, research on the scalability and massive parallelization of EMAS has been performed [16,36]. A summary of EMAS-related reviews is given in [8]. Compared to the classic evolutionary algorithm, EMAS provides satisfactory results with less computation time, requiring fewer evaluations.

3. Efficient mechanism of local search in EMAS

Evolutionary algorithms can be enhanced by hybridization with local search memetic algorithms, which can be applied in the course of evaluation (according to the Baldwin effect [37]) or mutation (according to the Lamarckian model [38]). When handled with care, local search algorithms can gradually bring the individuals closer to (local) extrema, enhancing their genotypes. There are a few works that concern continuous optimization using evolutionary or memetic algorithms. Usually, the search space can undergo at least some analysis; thus, the process of a local search does not have to be completely blind (contrary to black-box problems [1]); e.g., the need to continue or stop a local search can be appropriately calculated [39], or the variation operator can be efficiently tuned (minding the convergence possibility) [40]. A good review of tackling continuous problems with memetic algorithms can be found in [19]. At the same time, there is a vast number of evolutionary-related algorithms tackling such optimization problems (solving them became natural after applying real-value encoding; see, e.g., [24]). A good recent review of these is presented in [41]. In our recent works, we have tried to apply memetization to EMAS (which brought promising results at the expense of efficiency [26]). A more profound study of the hybridization of EMAS with memetic algorithms is presented below. Agents are autonomous entities, and they can conduct searches for solutions on their own, freely using the means available. In such a way, a local search could be applied more often than usual (during evaluation [42] or reproduction [38]) when handled with care. One has to remember, however, that increasing local search capabilities can introduce a classic memetic handicap into the search; namely, an early loss of diversity [19].
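As background, the generational island-model evolutionary loop described in Section 2 can be sketched as follows. This is a minimal illustration under assumed parameter values (population size, tournament size, value ranges), not the PEA implementation used in the experiments:

```python
import random

def island_ga(fitness, n_islands=3, pop_size=20, genome_len=5,
              generations=50, migration_prob=0.05, seed=0):
    """Minimal island-model GA: tournament selection, single-point
    crossover, uniform mutation of one gene, occasional migration."""
    rng = random.Random(seed)
    islands = [[[rng.uniform(-5.12, 5.12) for _ in range(genome_len)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(generations):
        for pop in islands:
            offspring = []
            while len(offspring) < pop_size:
                # tournament selection of two parents
                p1 = max(rng.sample(pop, 3), key=fitness)
                p2 = max(rng.sample(pop, 3), key=fitness)
                cut = rng.randrange(1, genome_len)   # single-point crossover
                child = p1[:cut] + p2[cut:]
                gene = rng.randrange(genome_len)     # uniform mutation of one gene
                child[gene] = rng.uniform(-5.12, 5.12)
                offspring.append(child)
            pop[:] = offspring
        # migration between two random islands with a small probability
        if rng.random() < migration_prob:
            src, dst = rng.sample(range(n_islands), 2)
            islands[dst][rng.randrange(pop_size)] = rng.choice(islands[src]).copy()
    # return the best individual found across all islands
    return max((ind for pop in islands for ind in pop), key=fitness)

# maximize the negated De Jong (sphere) function as a toy objective
best = island_ga(lambda g: -sum(x * x for x in g))
```

The same skeleton underlies both PEA and EMAS; in EMAS, selection and reproduction are replaced by the energy-based agent actions described above.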
Metaheuristics are already treated as methods of last resort [24], so it is important to keep this in mind when constructing such hybrids. Thus, the agents could detect certain changes in the environment (for example, learning about a loss or increase of the relative diversity of an evolutionary island and reacting to this information,
autonomously deciding to apply or withdraw the memetization operator). Such a mechanism can certainly lead to the gradual improvement of the whole population, even between reproductions (one agent can run several such improvements at different moments of its life); but again, it cannot be treated as a universal approach equally applicable to all problems (mind the no-free-lunch theorem [6]). In this paper, an Evolutionary Multi-Agent System with memetic algorithms (MemEMAS) is analysed and compared with a memetic version of the Parallel Evolutionary Algorithm (MemPEA) and the classic variants of both algorithms. Moreover, the EMAS with a memetic algorithm applied during an evolutionary agent's life (MemEMAS step) is taken into consideration. A local search results in the creation of different solutions (each of which is evaluated), and the best one is selected to replace the individual's genotype. Algorithm 1 presents the pseudo-code of the memetic algorithm applied in our work. Each local search event consisted of 10 cycles (phases). In each cycle, one random gene was changed. First, some fixed value (expressed as the mutationStrength variable) was added to this gene. If this change resulted in an improvement of the individual's fitness, it was repeated as long as the fitness value kept improving (the process of mutation is presented by the function mutate in Algorithm 2). Otherwise, the same fixed value was subtracted, and this operation was repeated similarly until no improvement was reported. Moreover, the gene change was adapted to the increase in quality of the results obtained on an individual's evolutionary island—if the best fitness on an island does not sufficiently improve, the gene change is greater.

Algorithm 1. Pseudo-code of the memetic algorithm

 1: function memetize(genotype, mutationStrength)
 2:   bestFitness ← genotype.fitness
 3:   for i ← 1, phases do
 4:     index ← random(0, genotype.length)
 5:     oldValue ← genotype[index]
 6:     genotype[index] ← genotype[index] + mutationStrength
 7:     newFitness ← evaluate(genotype)
 8:     if newFitness > bestFitness then
 9:       genotype, bestFitness ← mutate(genotype, index, mutationStrength, newFitness, addition)
10:     else
11:       genotype[index] ← oldValue
12:       genotype[index] ← genotype[index] − mutationStrength
13:       newFitness ← evaluate(genotype)
14:       if newFitness > bestFitness then
15:         genotype, bestFitness ← mutate(genotype, index, mutationStrength, newFitness, subtraction)
16:       else
17:         genotype[index] ← oldValue
18:   genotype.fitness ← bestFitness
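A direct Python transcription of the memetize routine (Algorithm 1) together with its mutate helper (Algorithm 2) might look as follows. This is a sketch under assumptions: fitness is maximized, the evaluate placeholder stands in for a real benchmark, and the island-based adaptation of mutationStrength is omitted:

```python
import random

def evaluate(genotype):
    # Placeholder fitness (maximized); the paper evaluates benchmark
    # functions instead.
    return -sum(g * g for g in genotype)

def mutate(genotype, index, mutation_strength, new_fitness, operator):
    """Algorithm 2: keep applying `operator` to one gene while the
    fitness keeps improving; revert the last, unhelpful step."""
    best_fitness = new_fitness
    while True:
        old_value = genotype[index]
        genotype[index] = operator(genotype[index], mutation_strength)
        new_fitness = evaluate(genotype)
        if new_fitness < best_fitness:
            genotype[index] = old_value
            return genotype, best_fitness
        best_fitness = new_fitness

def memetize(genotype, mutation_strength, phases=10):
    """Algorithm 1: in each phase pick one random gene, try adding the
    fixed step; if that does not help, try subtracting it."""
    def addition(gene, step): return gene + step
    def subtraction(gene, step): return gene - step
    best_fitness = evaluate(genotype)
    for _ in range(phases):
        index = random.randrange(len(genotype))
        for operator in (addition, subtraction):
            old_value = genotype[index]
            genotype[index] = operator(genotype[index], mutation_strength)
            new_fitness = evaluate(genotype)
            if new_fitness > best_fitness:
                genotype, best_fitness = mutate(
                    genotype, index, mutation_strength, new_fitness, operator)
                break
            genotype[index] = old_value
    return genotype, best_fitness
```

With the quadratic placeholder fitness and a single gene, memetize([5.0], 1.0) walks the gene down to 0.0, the optimum of the placeholder.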
Algorithm 2. Pseudo-code of memetic mutation

 1: function mutate(genotype, index, mutationStrength, newFitness, operator)
 2:   improvement ← true
 3:   bestFitness ← newFitness
 4:   while improvement do
 5:     oldValue ← genotype[index]
 6:     genotype[index] ← operator(genotype[index], mutationStrength)
 7:     newFitness ← evaluate(genotype)
 8:     if newFitness < bestFitness then
 9:       improvement ← false
10:       genotype[index] ← oldValue
11:     else
12:       bestFitness ← newFitness
13:   return genotype, bestFitness

During each memetization event, one gene was changed and the fitness value had to be recalculated, making the local search algorithm computationally expensive. Thus, an efficient modification is needed so that the number of evolutionary steps would not be disproportionately lower than in the classic algorithms. In the presented research, the evaluation operator was implemented in a way similar to that presented in [22], where the fitness function was divided into separable parts, each of them corresponding to a particular gene:

f(x) = f_1(x_1) ♦_1 f_2(x_2) ♦_2 … ♦_{n−1} f_n(x_n)    (1)

where f(x) : R^n → R; f_i(x_i) : R → R, i ∈ [1, n], i ∈ N; and ♦_j, j ∈ [1, n − 1], is any mathematical operator, such as addition, subtraction, division, or multiplication. Assuming such a fitness function, one can reuse the previously computed n − 1 values of the partial functions f_j(x_j), j ∈ [1, n] ∧ j ≠ k, j, k ∈ N, leaving only the value of f_k(x_k) to be computed once per mutation when single-point mutation is considered. The gain from "caching" the values of the other partial fitness functions is thus evident. This technique can be easily applied in the case of separable functions; however, we plan to implement a similar mechanism for non-separable functions in our future work.

4. Experimental results

In this section, experiments comparing PEA, EMAS, and their memetic variants (MemPEA and MemEMAS, with memetization both in the course of mutation and during an agent's life) are discussed. These experiments were performed using the PyAgE computing platform [43] and run on the AGH Cyfronet Zeus supercomputer (HP BL2x220c, Intel Xeon, 23 TB RAM, 169 TFlops). In the course of the experiments, four hard multi-dimensional continuous benchmark problems were tackled: the Rastrigin, Ackley, Schwefel, and De Jong functions, described by Eqs. (2)–(5), respectively. The global optimum of each of them equals f(x) = 0.0.

f(x) = A·n + Σ_{i=1}^{n} [x_i² − A·cos(2π·x_i)]    (2)

f(x) = −a·exp(−b·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos(c·x_i)) + a + exp(1)    (3)

f(x) = 418.9828·n − Σ_{i=1}^{n} x_i·sin(√|x_i|)    (4)

f(x) = Σ_{i=1}^{n} x_i²    (5)
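Assuming separability as in Eq. (1), with addition as the combining operator, the buffered evaluation can be sketched as below. The class and function names are illustrative (not the PyAgE implementation); the four benchmarks of Eqs. (2)–(5) are included for completeness:

```python
import math

def rastrigin(x, A=10.0):                      # Eq. (2)
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):   # Eq. (3)
    n = len(x)
    return (-a * math.exp(-b * math.sqrt(sum(xi * xi for xi in x) / n))
            - math.exp(sum(math.cos(c * xi) for xi in x) / n) + a + math.e)

def schwefel(x):                               # Eq. (4)
    return 418.9828 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def dejong(x):                                 # Eq. (5)
    return sum(xi * xi for xi in x)

class BufferedSeparableFitness:
    """Caches the per-gene partial values f_i(x_i) of a separable
    fitness combined by addition; a single-point mutation recomputes
    only one partial term instead of all n of them."""
    def __init__(self, term, offset, genotype):
        self.term = term                       # per-gene partial f_i
        self.offset = offset                   # constant part (e.g. A*n)
        self.partials = [term(g) for g in genotype]
        self.total = sum(self.partials)

    def value(self):
        return self.offset + self.total

    def update_gene(self, index, new_value):   # O(1) instead of O(n)
        new_partial = self.term(new_value)
        self.total += new_partial - self.partials[index]
        self.partials[index] = new_partial
        return self.value()

# buffered Rastrigin in 4 dimensions, starting at the global optimum
A, n = 10.0, 4
cached = BufferedSeparableFitness(
    lambda xi: xi * xi - A * math.cos(2 * math.pi * xi), A * n, [0.0] * n)
```

Here cached.value() is 0.0 at the optimum, and cached.update_gene(0, 1.0) agrees with a full re-evaluation of rastrigin([1.0, 0.0, 0.0, 0.0]) while recomputing only one partial term.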
The tested functions were evaluated in 5000 dimensions over the following hypercubes: xi ∈ [−5.12, 5.12] for Rastrigin, xi ∈ [−32.768, 32.768] for Ackley, xi ∈ [−500.0, 500.0] for Schwefel, and xi ∈ [−5.12, 5.12] for De Jong. The constants were assigned the following values: Rastrigin: A = 10; Ackley: a = 20, b = 0.2, c = 2π. Other configuration parameters are included in Table 1. Memetization at every evolutionary step was performed with a probability of 0.1. All experiments lasted exactly 72,000 s and were repeated 11 times, and common statistical measures were computed. The obtained results are included in Table 2; the best ones for each benchmark function are marked in bold. As one can see, EMAS outperformed PEA for all of the functions (visibly reaching better solutions in the same time), which is consistent
Table 1
Experiment configuration parameters.

Parameter                   EMAS                                   PEA
Mutation                    Uniform, of one randomly chosen gene
Crossover                   Single point
Speciation                  Allopatric
Environment                 Torus-shaped, size 10 × 10
No. of islands              3, fully connected
Individuals on island       50
Migration probability       0.05
Initial energy              100                                    –
Energy transferred          5                                      –
Death energy level          0                                      –
Reproduction energy level   120                                    –
Migration energy level      130                                    –
Selection                   –                                      Tournament (tournament size: 30)
Table 2
Experiment results at the 72,000th second.

              Result        St. Dev.   Steps/Gens      Evaluations
Rastrigin
PEA           4513.99       38.98      287,060.08      43,059,161.54
MemPEA        3236.08       34.14      177,721.54      134,895,769.85
EMAS          11.01         1.85       6,527,920.00    23,554,647.15
MemEMAS       28.57         8.28       3,990,063.38    306,424,173.00
MemEMAS step  63.77         16.44      2,664,479.54    690,798,501.92
Ackley
PEA           17.19         0.77       402,680.64      60,402,245.45
MemPEA        0.66          0.00       150,125.82      113,398,065.45
EMAS          0.05          0.00       7,110,282.45    27,779,639.36
MemEMAS       0.03          0.00       3,189,123.18    246,871,911.27
MemEMAS step  0.05          0.00       1,773,259.36    470,297,477.00
Schwefel
PEA           958,724.26    6378.60    412,319.67      61,848,100.00
MemPEA        122,833.57    8552.98    216,056.50      229,910,030.58
EMAS          959,679.75    6496.24    6,324,276.83    25,705,898.75
MemEMAS       38,289.61     4248.00    4,004,475.09    484,356,449.55
MemEMAS step  667,137.66    9495.74    901,387.83      1,336,426,245.33
De Jong
PEA           48.91         0.47       796,200.17      119,430,175.00
MemPEA        49.14         0.63       298,092.75      225,126,564.92
EMAS          0.03          0.00       9,377,905.17    33,349,998.92
MemEMAS       0.06          0.01       5,249,807.14    394,534,467.71
MemEMAS step  0.15          0.02       3,250,636.46    817,883,956.92
with the conclusions of our preceding research (e.g., [26]). Regarding the influence of memetization on the results obtained by EMAS, different cases are noticeable:
• Rastrigin function: the classic EMAS outperformed its memetic variants due to the computational overhead caused by the vast number of fitness-evaluation events needed by the memetic algorithm (the exact data is provided in Table 2). Although initially promising, the results of the PEA algorithms proved to get stuck in a local minimum.
• Ackley function: the local search enabled both EMAS and PEA to further exploit the vicinity of the global optimum, while the classic algorithms clearly got stuck near the initial solution (however, the classic EMAS eventually managed to approach the global minimum). As one may observe, MemPEA was also unable to escape a local minimum after reaching an acceptable solution.
• Schwefel function: a local search provided clearly better results for both EMAS and PEA. Although a solution close to the global minimum was not reached, the results of MemEMAS and MemPEA look promising, as they do not seem to be stuck in any local optimum (contrary to the other configurations).
• De Jong function: similar to the Rastrigin function, EMAS outperformed PEA; however, the results of its classic version are worse
Table 3
Kruskal–Wallis test results.

Function    p-Value
Rastrigin   1.646 · 10^−12
Ackley      4.916 · 10^−10
Schwefel    5.069 · 10^−10
De Jong     5.358 · 10^−11
than the ones provided by the MemEMAS and MemEMAS step configurations. The best fitnesses obtained in all of the experiments can be seen in Table 2. It is easy to see that, in the case of the Rastrigin function, the final results provided by EMAS and its memetic variants are not far from the global optimum (considering the high dimensionality of the problem). In the case of the Ackley function, the optimum was located very accurately by MemEMAS and MemEMAS step. All algorithms failed to reach a good solution for the Schwefel function, which may be explained by the high complexity of this problem in high dimensionality. Nevertheless, the local search gives us hope of providing an acceptable solution in further steps. The De Jong function is a quite simple problem, and it was solved most efficiently by the memetic variants of EMAS; PEA and MemPEA failed to find a satisfactory result. The information given in Table 2 clearly shows that the introduction of a local search slows down the whole search process (e.g., in the case of PEA, running the same algorithm with the local search enabled makes it run over 100,000 fewer generations, although the result is clearly better). Therefore, one must remember to properly tune the number and frequency of local search events so as not to hamper the whole search process. In particular, the number of fitness-evaluation events has to be kept reasonable, as they are the most time-consuming part of the computations. Moreover, the information about the evaluation events clearly shows that the implemented efficient local search mechanism allows many more fitness evaluations to be conducted than in the classic cases. However, it has to be emphasized that these are local fitness evaluations—they may improve exploitation, but they can be fairly useless from the exploration point of view. Memetization during an agent's life did not improve the results of EMAS with a local search performed in the course of agent mutation.
This may be explained by the encumbrance of the search process by the increased number of memetic events (one should keep in mind that, in the case of the MemEMAS configuration, a local search is performed once for each agent, while in the case of MemEMAS step, the memetic algorithm is run multiple times). The too-frequent execution of the local search disturbed the balance between exploitation and exploration. The different outcomes of the experiments demonstrate how memetization may influence the results. They can also be explained by the aforementioned no-free-lunch theorem—we cannot provide an ultimate solution for all kinds of problems. Besides, as the standard deviation values are low, we may assume that all of the experiments are repeatable. Furthermore, we decided to perform the Kruskal–Wallis and Dunn tests in order to verify the results from a statistical point of view. Our aim was to show that the obtained results actually differ by rejecting the null hypothesis that the groups of results originate from the same distribution. The p-values calculated by the Kruskal–Wallis test are presented in Table 3. All of the p-values are very low (the null hypothesis is commonly rejected for p-values lower than 0.05), so we can assume that the groups of our results come from different distributions. A more profound verification of the results was done with Dunn's test,
Table 4
Dunn's test results.

Function    Highest p-value of pairwise comparisons   Pair
Rastrigin   9.4931 · 10^−2                            MemEMAS–MemEMAS step
Ackley      7.5955 · 10^−1                            EMAS–MemEMAS step
Schwefel    8.6265 · 10^−1                            PEA–EMAS
De Jong     7.0839 · 10^−1                            PEA–MemPEA
Table 5
Time of fitness function evaluation.

Function    Evaluation without buffering [ms]   Evaluation with buffering [ms]
Rastrigin   1.700 ± 0.011                       0.041 ± 0.003
Ackley      1.128 ± 0.009                       0.365 ± 0.039
Schwefel    1.434 ± 0.275                       0.190 ± 0.004
De Jong     0.805 ± 0.233                       0.191 ± 0.005
which compares individual samples and identifies those that actually differ. The highest p-values of the pairwise comparisons done by Dunn's test are included in Table 4. These highest p-values are greater than 0.05, so similarities between some of the algorithms exist. This outcome did not surprise us, as Table 2 clearly shows that the pairs mentioned in the third column reached similar solutions. Moreover, the other p-values were much lower and confirmed a statistical difference between the results of our experiments. Ultimately, we compared the execution time of the fitness evaluation with and without the local search buffering mechanism described in Section 3. Our test was performed on 5000-dimensional genotypes; evaluations were repeated 100 times, and the mean values were computed (along with the standard deviations). The results for each of the problems tackled in this work are included in Table 5. The figures in Table 5 clearly show that the gain from local search buffering is substantial: the efficient implementation of memetization allowed us to speed up the fitness function evaluation by a factor of 3 to more than 40.
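For illustration, the Kruskal–Wallis H statistic used above can be computed with a short self-contained routine (no tie correction; in practice a statistics library such as scipy.stats.kruskal would be used):

```python
def kruskal_wallis_h(*groups):
    """H statistic of the Kruskal-Wallis test: rank all observations
    jointly, then measure how far each group's mean rank deviates from
    the grand mean rank (N + 1) / 2. Assumes distinct values (no ties)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}
    n_total = len(pooled)
    h = 0.0
    for g in groups:
        mean_rank = sum(rank[x] for x in g) / len(g)
        h += len(g) * (mean_rank - (n_total + 1) / 2) ** 2
    return 12.0 / (n_total * (n_total + 1)) * h

# three clearly separated samples: H exceeds the chi-square critical
# value 5.991 (df = 2, alpha = 0.05), so the null hypothesis of a
# common distribution would be rejected
h = kruskal_wallis_h([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0])
```

For these samples the mean ranks are 2, 5, and 8, giving H = 7.2.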
5. Conclusions

The results presented in this paper show that a local search may provide significant improvements in a shorter amount of time and should be further developed. Moreover, the efficient implementation of the local search operator allowed us to increase the number of fitness-function evaluations more than tenfold compared with the classic versions of the tested algorithms. Thus, exploration and exploitation of high-dimensional search spaces become feasible; one cannot achieve much there without such dedicated mechanisms because of the so-called curse of dimensionality [44]. Memetization during an agent's life did not bring improvement over memetization in the course of evaluation, presumably due to the overhead caused by the increased number of fitness-evaluation events. The idea of the proposed mechanism benefits from the separability of the tackled problem; indeed, all of the benchmark functions optimized in this paper were separable. Solving similar problems should be equally beneficial when using the proposed mechanism. We intend to find more benchmark and practical problems in order to further examine the applicability of the buffered local search. Problems such as Low Autocorrelation Binary Sequences (already approached by Cotta et al. [22]) will be examined from the point of view of lifelong memetization, and other practical problems like the Optimal Golomb Ruler will be evaluated. We also plan to extend the mechanism's applicability to non-separable problems, keeping in mind that the most useful feature of the proposed solution is the ability to perform a local search at a lower cost than usual (thanks to buffering); therefore, problems with complex fitness functions will be the first chosen in further research. Regarding the shortcomings of the proposed mechanism, it is noteworthy that they are connected with the usual increase in computational cost and complexity of the search due to the number of fitness-function calls (although buffering can keep it within reasonable limits). Application of a local search can lead to premature convergence, and this always needs to be remembered when dealing with memetic algorithms.

Acknowledgments

The research presented in this paper received partial support from the AGH University of Science and Technology statutory project (no. 11.11.230.124). This research was supported by the PLGrid infrastructure.

References
[1] S. Droste, T. Jansen, I. Wegener, Upper and lower bounds for randomized search heuristics in black-box optimization, Theory Comput. Syst. 39 (2006) 525–544.
[2] C. Cotta, R. Schaefer, Complex metaheuristics, J. Comput. Sci. 17 (Part 1) (2016) 171–173, http://dx.doi.org/10.1016/j.jocs.2016.06.001.
[3] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold Computer Library, New York, 1991.
[4] P. Moscato, On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms, Tech. Rep. Caltech Concurrent Computation Program, Report 826, California Institute of Technology, Pasadena, CA, USA, 1989.
[5] W. Hart, R. Belew, Optimizing an arbitrary function is hard for the genetic algorithm, in: R. Belew, L. Booker (Eds.), Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, 1991, pp. 190–195.
[6] D. Wolpert, W. Macready, No free lunch theorems for search, Tech. Rep. SFI-TR-02-010, Santa Fe Institute, 1995.
[7] K. Cetnarowicz, M. Kisiel-Dorohinicki, E. Nawarecki, The application of evolution process in multi-agent world (MAW) to the prediction system, in: M. Tokoro (Ed.), Proc. of the 2nd Int. Conf. on Multi-Agent Systems (ICMAS'96), AAAI Press, 1996.
[8] A. Byrski, R. Dreżewski, L. Siwik, M. Kisiel-Dorohinicki, Evolutionary multi-agent systems, Knowl. Eng. Rev. 30 (2015) 171–186.
[9] M. Kisiel-Dorohinicki, G. Dobrowolski, E. Nawarecki, Agent Populations as Computational Intelligence, Physica-Verlag HD, Heidelberg, 2003, pp. 608–613, http://dx.doi.org/10.1007/978-3-7908-1902-1_93.
[10] A. Byrski, R. Schaefer, Formal model for agent-based asynchronous evolutionary computation, 2009 IEEE Congress on Evolutionary Computation (2009) 78–85, http://dx.doi.org/10.1109/CEC.2009.4982933.
[11] A. Byrski, R. Schaefer, M. Smołka, Asymptotic guarantee of success for multi-agent memetic systems, Bull. Pol. Acad. Sci. – Tech. Sci. 61 (1) (2013).
[12] D. Krzywicki, L. Faber, A. Byrski, M. Kisiel-Dorohinicki, Computing agents for decision support systems, Future Gener. Comput. Syst. 37 (2014) 390–400.
[13] K. Socha, M. Kisiel-Dorohinicki, Agent-based evolutionary multiobjective optimisation, Proceedings of the 2002 Congress on Evolutionary Computation, 2002, CEC’02, vol. 1 (2002) 109–114, http://dx.doi.org/10.1109/ CEC.2002.1006218. [14] D. Krzywicki, L. Faber, K. Pietak, A. Byrski, M. Kisiel-Dorohinicki, Lightweight distributed component-oriented multi-agent simulation platform, in: W. Rekdalsbakken, R.T. Bye, H. Zhang (Eds.), Proceedings of the 27th European ˚ Conference on Modelling and Simulation, ECMS 2013, Alesund, Norway, May 27–30, 2013, European Council for Modeling and Simulation, 2013, pp. 469–476, http://dx.doi.org/10.7148/2013-0469. [15] D. Krzywicki, J. Stypka, P. Anielski, Faber, W. Turek, A. Byrski, M. Kisiel-Dorohinicki, Generation-free agent-based evolutionary computing, 2014 International Conference on Computational Science, Procedia Comput. Sci. 29 (2014) 1068–1077, http://dx.doi.org/10.1016/j.procs.2014.05.096. [16] W. Turek, J. Stypka, D. Krzywicki, P. Anielski, K. Pietak, A. Byrski, M. Kisiel-Dorohinicki, Highly scalable Erlang framework for agent-based metaheuristic computing, J. Comput. Sci. 17 (Part 1) (2016) 234–248, http:// dx.doi.org/10.1016/j.jocs.2016.03.003. [17] P. Moscato, Memetic algorithms: a short introduction, in: D. Corne, M. Dorigo, F. Glover (Eds.), New Ideas in Optimization, McGraw-Hill, 1999, pp. 219–234. [18] N. Krasnogor, J. Smith, A tutorial for competent memetic algorithms: model, taxonomy, and design issues, IEEE Trans. Evol. Comput. 9 (5) (2005) 474–488. [19] P. Moscato, C. Cotta, A modern introduction to memetic algorithms, in: M. Gendrau, J.-Y. Potvin (Eds.), Handbook of Metaheuristics, vol. 146 of International Series in Operations Research and Management Science, 2nd Edition, Springer, 2010, pp. 141–183. [20] N. Radcliffe, P. Surry, Formal memetic algorithms, in: T. Fogarty (Ed.), Evolutionary Computing: AISB Workshop, vol. 865 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, 1994, pp. 1–16. 
[21] W. Hart, N. Krasnogor, J. Smith, Memetic evolutionary algorithms, in: Recent Advances in Memetic Algorithms, vol. 166 of Studies in Fuzziness and Soft Computing, Springer-Verlag, 2005, pp. 3–27.
[22] J.E. Gallardo, C. Cotta, A.J. Fernández, Finding low autocorrelation binary sequences with memetic algorithms, Appl. Soft Comput. 9 (4) (2009) 1252–1262.
[23] W. Korczynski, A. Byrski, M. Kisiel-Dorohinicki, Efficient memetic continuous optimization in agent-based computing, Procedia Comput. Sci. 80 (2016) 845–854.
[24] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1994.
[25] E. Cantú-Paz, A summary of research on parallel genetic algorithms, IlliGAL Report No. 95007, University of Illinois.
[26] A. Byrski, W. Korczynski, M. Kisiel-Dorohinicki, Memetic multi-agent computing in difficult continuous optimisation, KES-AMSTA (2013) 181–190.
[27] R. Dreżewski, L. Siwik, et al., Multi-objective optimization technique based on co-evolutionary interactions in multi-agent system, in: M. Giacobini (Ed.), Proc. of EvoWorkshops 2007, vol. 4448 of LNCS, Valencia, Spain, April 11–13, 2007, Springer-Verlag, Berlin, Heidelberg, 2007, pp. 179–188.
[28] A. Byrski, Tuning of agent-based computing, Comput. Sci. 14 (3) (2013) 491–512.
[29] K. Wróbel, P. Torba, M. Paszyński, A. Byrski, Evolutionary multi-agent computing in inverse problems, Comput. Sci. 14 (3) (2013) 367–384.
[30] A. Byrski, M. Kisiel-Dorohinicki, E. Nawarecki, Agent-based evolution of neural network architecture, in: M. Hamza (Ed.), Proc. of the IASTED Int. Symp. on Applied Informatics, IASTED/ACTA Press, 2002.
[31] L. Siwik, R. Dreżewski, Evolutionary multi-modal optimization with the use of multi-objective techniques, in: L.E.A. Rutkowski (Ed.), Artificial Intelligence and Soft Computing, vol. 8467 of Lecture Notes in Computer Science, Springer International Publishing, 2014, pp. 428–439.
[32] R. Dreżewski, Co-evolutionary multi-agent system with speciation and resource sharing mechanisms, Comput. Inform. 25 (4) (2006) 305–331.
[33] R. Dreżewski, J. Sepielak, L. Siwik, Classical and agent-based evolutionary algorithms for investment strategies generation, in: A. Brabazon, M. O'Neill (Eds.), Natural Computing in Computational Finance, vol. 185 of Studies in Computational Intelligence, Springer-Verlag, 2009, pp. 181–205.
[34] A. Byrski, M. Kisiel-Dorohinicki, Comparing energetic and immunological selection in agent-based evolutionary optimization, in: Intelligent Information Processing and Web Mining: Proceedings of the International IIS: IIPWM'06 Conference, Ustroń, Poland, June 19–22, 2006, Springer Verlag, 2006, pp. 3–10.
[35] L. Siwik, S. Natanek, Elitist evolutionary multi-agent system in solving noisy multi-objective optimization problems, 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence) (2008) 3319–3326.
[36] D. Krzywicki, W. Turek, A. Byrski, M. Kisiel-Dorohinicki, Massively concurrent agent-based evolutionary computing, J. Comput. Sci. 11 (2015) 153–162, http://dx.doi.org/10.1016/j.jocs.2015.07.003.
[37] G. Hinton, S. Nowlan, How learning can guide evolution, Complex Syst. 1 (1987) 495–502.
[38] N. Eldredge, S. Gould, Punctuated equilibria: an alternative to phyletic gradualism, in: T. Schopf (Ed.), Models in Paleobiology, Freeman, Cooper and Co., 1972.
[39] D. Molina, M. Lozano, C. García-Martínez, F. Herrera, Memetic algorithms for continuous optimisation based on local search chains, Evol. Comput. 18 (1) (2010) 27–63, http://dx.doi.org/10.1162/evco.2010.18.1.18102.
[40] M. Lozano, F. Herrera, N. Krasnogor, Real-coded memetic algorithms with crossover hill-climbing, Evol. Comput. 12 (3) (2004) 273–302.
[41] O. Kramer, A Brief Introduction to Continuous Evolutionary Optimization, Springer, 2014.
[42] G. Syswerda, A study of reproduction in generational and steady state genetic algorithms, Found. Genet. Algorithms 2 (1991) 94–101.
[43] M. Kaziród, W. Korczynski, A. Byrski, Agent-oriented computing platform in Python, in: 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), vol. 3, IEEE, 2014, pp. 365–372.
[44] E. Chávez, G. Navarro, R. Baeza-Yates, J.L. Marroquín, Searching in metric spaces, ACM Comput. Surv. 33 (3) (2001) 273–321.

Wojciech Korczynski obtained his M.Sc. in 2013 and is currently a Ph.D. student at AGH University of Science and Technology. His main scientific interests are multi-agent systems and natural language processing.
Aleksander Byrski obtained his Ph.D. in 2007 and D.Sc. (habilitation) in 2013 at AGH University of Science and Technology in Cracow. He works as an assistant professor at the Department of Computer Science of AGH-UST. His research focuses on multi-agent systems, biologically-inspired computing, and other soft computing methods.
Marek Kisiel-Dorohinicki obtained his Ph.D. in 2001 and D.Sc. (habilitation) in 2013 at AGH University of Science and Technology in Cracow. He works as an assistant professor at the Department of Computer Science of AGH-UST. His research focuses on intelligent software systems, particularly those utilizing agent technology and evolutionary algorithms, but also other soft computing techniques.
Please cite this article in press as: W. Korczynski, et al., Buffered local search for efficient memetic agent-based continuous optimization, J. Comput. Sci. (2017), http://dx.doi.org/10.1016/j.jocs.2017.02.001