JayaX: Jaya algorithm with xor operator for binary optimization


Applied Soft Computing Journal 82 (2019) 105576


Murat Aslan a, Mesut Gunduz b, Mustafa Servet Kiran b,*



a Department of Computer Engineering, Faculty of Engineering, Şırnak University, Şırnak, Turkey
b Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Konya Technical University, 42075 Konya, Turkey
* Corresponding author. E-mail address: [email protected] (M.S. Kiran).

Highlights

• A novel binary optimizer called JayaX is presented.

• JayaX is based on Jaya algorithm and ‘‘exclusive or’’ logic operator.

• Two different versions of JayaX have been proposed in this study.

• The performance of JayaX has been investigated on binary optimization problems.

• JayaX is compared with state-of-the-art population-based algorithms.

Article info

Article history: Received 18 September 2018; Received in revised form 3 April 2019; Accepted 7 June 2019; Available online 18 June 2019

Keywords: Jaya; Binary optimization; Logic operator; Exclusive or

Abstract

Jaya is a population-based heuristic optimization algorithm proposed for solving constrained and unconstrained optimization problems. The distinctive feature of Jaya compared with other population-based algorithms is that it updates the positions of the artificial agents in the population by considering both the best and the worst individuals. This is an important property for balancing exploration and exploitation on the solution space. However, the basic Jaya cannot be applied to binary optimization problems because the solution space of such problems is discrete and their decision variables can only take values from the set {0, 1}. In this study, we first focus on discretizing Jaya by using a logic operator, exclusive or (xor). The proposed idea is simple but effective: the solution update rule of Jaya is replaced with the xor operator, and when the obtained results are compared with state-of-the-art algorithms, the Jaya-based binary optimization algorithm, JayaX for short, produces better-quality results for the binary optimization problems dealt with in this study. The benchmark problems are uncapacitated facility location problems and the CEC2015 numeric functions, and the performance of the algorithms is compared on these problems. In order to improve the performance of the proposed algorithm, a local search module is also integrated with JayaX. The obtained results show that the proposed algorithm is better than the compared algorithms in terms of solution quality and robustness.

1. Introduction

In optimization problems, heuristic algorithms try to obtain optimum or near-optimum solutions by using experimental information [1]. Optimization problems can be classified as continuous or discrete according to the values of their decision variables [2]. Binary optimization is a part of the discrete optimization research field. Many discrete optimization problems, such as the graph coloring problem, can be modeled as binary optimization problems and solved by binary optimization algorithms [3]. Therefore,


binary optimization is a popular research area. While decision variables can take any value within their upper and lower limit


ranges in continuous optimization problems, the decision variables take integer values in discrete optimization problems [2]. In binary optimization problems, however, each decision variable can take only two values, '0' and '1'. Various real-world problems are studied in the binary optimization field, such as 0–1 knapsack problems [4–9], uncapacitated facility location problems [10–13], the unit commitment problem [14], the water pump switching problem [15], and the optimal phasor measurement unit (PMU) placement problem [16]. For the knapsack problem (KP), the {0,1} values determine which items will be added to the knapsack; for the uncapacitated facility location problem they decide whether a facility is open or not; and for the optimal PMU placement problem they decide at which buses PMUs will be located.

There are many approaches proposed in the literature for solving binary problems. Some of these algorithms are exact (deterministic) and some are approximate algorithms [17]. Lagrangian techniques [18], branch-and-bound methods [19,20] and dynamic programming [21,22] are examples of deterministic exact algorithms. The exact approaches usually achieve high-quality solutions for small and medium-scale problems. However, as the dimensionality of the problem grows, the success rate of these algorithms decreases dramatically [23,24], because the time complexity of the problem also increases with its dimension. It is therefore hard to reach the optimal solution in reasonable time, and problems that require exponential time demand powerful computation [25]; such problems are generally classified in the NP-hard category [26,27], and linear and dynamic programming based algorithms do not guarantee finding the optimal solution [28]. Therefore, more powerful algorithms such as metaheuristic approaches are needed in order to solve these problems in an acceptable amount of time [29]. Although metaheuristic algorithms do not guarantee the optimal solution, they usually provide optimal or near-optimal solutions, and they are problem-independent, easy to adapt and simple in structure. The genetic algorithm, which is one of the most popular metaheuristic algorithms, can be applied directly to binary optimization problems by its nature [8,9,30]. However, continuous optimization algorithms such as the particle swarm optimization algorithm [31], the tree-seed algorithm [32], and the artificial bee colony algorithm [33] should be modified for solving binary optimization problems.

1.1. Main contribution and motivation of this study

Jaya is a population-based algorithm proposed by Rao for solving constrained and unconstrained continuous optimization problems [34]. Hence it cannot be directly applied to binary optimization and needs to be modified for solving this type of problem. Variants of the Jaya algorithm used for solving single- and multi-objective real-life optimization problems are surveyed in [35]. Jaya's most notable feature is that, when an individual's position in the population is updated, both the individual with the best objective function value and the individual with the worst objective function value are used. Because Jaya uses both the global best and the global worst positions, it provides a wide search over the solution space, and this structure helps the solutions avoid local minima. The search space of binary optimization consists of '0' and '1' values.
When an algorithm such as Jaya is proposed for solving continuous optimization problems, it should be modified before it is applied to binary optimization problems. For this purpose, many different approaches have been proposed for converting continuous algorithms to binary optimization, such as logic functions [11,36], logistic transfer functions [23,37,38], and similarity-based approaches [13,33,36]. In the literature, a binary version of Jaya was developed in [16] using a tangent hyperbolic logistic

transfer function. However, this binary Jaya is not a general-purpose method for solving binary optimization problems, and its performance is relatively poor on the problems dealt with in this study. For this reason, we developed a new binary version of Jaya by integrating the 'xor' logic gate and a local search strategy into Jaya. The 'xor' operator provides a balance between exploitation and exploration, and the local search mechanism used in the proposed algorithm tries to find better solutions around the global best agent. To indicate the potency of the proposed algorithms (named JayaX and JayaX-LSM for short), experiments are conducted on uncapacitated facility location problems (UFLPs) taken from OR-Lib [39]. In a UFLP, the total cost is the sum of a fixed cost component for constructing a facility in a specific district and the transportation cost of satisfying the customer demands [40]. UFLP is a major problem in binary optimization because many real-world problems can be transformed into it, such as set covering problems, airline crew scheduling problems, set partitioning problems [12], optimal PMU placement problems [16], discrete lot sizing problems, a substitutable inventory problem, a location problem on a line, a tool selection problem, a capacity expansion problem, a stochastic demand problem [27], the optimal hub allocation problem [41], and many other problems of location analysis. Moreover, JayaX-LSM is applied to 15 numeric benchmark problems from the CEC2015 test suite of bound-constrained single-objective computationally expensive optimization problems [42].

The remainder of the paper is organized as follows. In Section 2, a literature review of binary optimization is presented. In Section 3, the basic Jaya and the binary Jaya (BJA) proposed in [16] are described. In Section 4, the proposed Jaya algorithms are elaborated. In Section 5, a brief description of the uncapacitated facility location problem is given. The experimental results of our methods are compared with state-of-the-art binary optimization algorithms in Section 6. Results and discussion are given in Section 7, and the conclusions and future works are presented in Section 8.

2. Literature review

There are many different approaches for binary optimization problems, such as exact, approximate, local search and metaheuristic algorithms. Barcelo et al. [18] developed a Lagrangian relaxation heuristic algorithm for the capacitated location problem. Holmberg [20] proposed a branch-and-bound exact algorithm based on a dual ascent and adjustment technique for uncapacitated facility location problems. Bertsimas and Demir [22] proposed an approximate dynamic programming (ADP) approach for the multidimensional knapsack problem (MKP). Exact methods ensure quality solutions for low-dimensional problems; as the number of dimensions increases, the computational cost may increase as well. Consequently, effective algorithms that find acceptable solutions in a reasonable time, such as metaheuristics, are needed for large-scale binary optimization problems. In this respect, Sultan and Fawzan [43] proposed a local search strategy based on the tabu search (TS) algorithm for UFLPs. In their algorithm, the initial solution is generated by a greedy heuristic named the net benefit heuristic (NBH), and then TS is applied to the current solution.
Ghosh [40] developed a neighborhood search heuristic based on tabu search in which the neighborhood of the solutions changes continuously depending on the search history, in order to find the best solution available in the neighborhood. In another study, Zhuang and Galiana [44] implemented a simulated annealing (SA) algorithm on thermal unit commitment problems with up to 100 units. The proposed algorithm uses a random feasible solution generator


and a Metropolis optimization procedure. Aydin and Fogarty [45] proposed an evolutionary simulated annealing (ESA) algorithm and its distributed version (dESA), and the performance of the algorithms was investigated on two common benchmark problems, namely the classical job shop scheduling problem and the UFLP. Falkenauer and Bouffouix [46] presented a genetic algorithm (GA) based study for solving the job shop scheduling problem, which consists of dedicating, over time, resources of finite capacity to operations while satisfying the constraints; they presented an encoding of the problem that overcomes the difficulty of job shop scheduling. Khuri et al. [47] proposed a GA-based algorithm named GENEsYs and applied it to the 0/1 multiple knapsack problem. Unlike many other GA-based algorithms that are enriched with domain-specific knowledge, GENEsYs uses a simple fitness function that penalizes infeasible solutions. In another paper, Jaramillo et al. [48] used fusion crossover and binary tournament selection to improve the performance of GA as an alternative method for finding optimal or near-optimal solutions for location problems. In another study, Topcuoglu et al. [49] developed a powerful GA-based search framework for the uncapacitated single allocation hub location problem (USAHLP); the proposed GA was applied to the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets and compared with the best solutions of other algorithms in the literature to prove its effectiveness. Lim et al. [50] proposed a monogamous pairs genetic algorithm (MopGA) for the solution of 0/1 knapsack problems. MopGA uses two important techniques taken from social monogamy: pair bonding and infidelity at a low probability. Unlike most genetic algorithms, the same pairs of parents, named monogamous parents, are reused over generations until their validity expires.

Kennedy and Eberhart [51] proposed a discrete binary method inspired by the basic particle swarm optimization (PSO). The velocity of a particle is an important component when its new position is assigned; therefore, Kennedy and Eberhart [51] used the sigmoid logistic function to assign the new position of a particle because of the particular nature of binary optimization. The discretization process is as follows: the sigmoid function returns a value S(v_i) between zero and one according to the velocity of the particle; then a random value is generated in the range [0,1], and if this random value is less than S(v_i), the new value of the dimension is '1', otherwise '0'. In another work, Sevkli and Guner [52] implemented a continuous PSO algorithm on the UFLP by using a modulus-based binarization mechanism. In order to improve the performance of PSO, they combined a local search algorithm with it. The proposed algorithm was applied to UFLP instances collected from the OR-Library and compared with metaheuristic algorithms such as GA and ESA; the experimental results demonstrated that the robustness of continuous PSO is better than that of GA and ESA. In another study, Guner and Sevkli [53] proposed a discrete particle swarm optimization (DPSO) algorithm and tested it on the UFLP. Bansal and Deep [38] developed a modified binary particle swarm optimization (MBPSO) particularly for solving 0–1 KP and MKP. While Kennedy and Eberhart [51] used the sigmoid function to assign the new position in PSO, a new logistic function is defined in MBPSO to update the particles.
Bansal and Deep [38] claim that the proposed transfer function gives a wider search space for finding optimum solutions compared with the sigmoid function. Mirjalili and Lewis [37] integrated six different transfer functions with binary PSO in order to compare their performance; the sigmoid-based transfer functions are named S-shaped and the others V-shaped. They evaluated binary PSO with these transfer functions on 25 benchmark problems provided by CEC2005, and the results demonstrated that the V-shaped transfer functions significantly improve the solution quality of the basic binary PSO.


ABC was proposed for continuous optimization problems in [54]; hence, the basic ABC cannot be used directly for binary-represented problems. Kashan et al. [13] developed a binary optimization algorithm called DisABC based on the artificial bee colony (ABC). DisABC uses a procedure based on a dissimilarity metric between binary vectors (the Jaccard coefficient) instead of the position update mechanism of basic ABC. In another study, Kiran and Gunduz [11] proposed a method, called binary ABC (binABC), that uses the 'xor' logic operator in place of the original update equation of the ABC algorithm in order to solve binary optimization problems. binABC was implemented on UFLPs and compared with BPSO, DisABC and improved BPSO (IBPSO) to demonstrate its solution quality, robustness and simplicity. Hancer et al. [33] proposed a binary ABC named MDisABC for feature selection problems, developed by integrating evolutionary-based similarity into an existing binary ABC variant. In another work, Kiran [10] proposed a method named ABCbin, in which the agents work on a continuous search space in the background of the algorithm, and the food source vector obtained by the artificial agents is converted to binary values by taking the modulus of the continuous values.

Differential evolution (DE), developed by Storn and Price [55], is a stochastic population-based algorithm proposed for solving continuously-structured optimization problems. Kashan et al. [12] proposed a new version of the DE algorithm based on a dissimilarity approach using the Jaccard coefficient, particularly for binary optimization, and applied the binary DE to UFLPs. In another study, Chen et al. [14] proposed a binary learning differential evolution (BLDE) that obtains optimal solutions by learning from the last population, and implemented it on the unit commitment problem (UCP) in power systems to show the effectiveness of BLDE. Banitalebi et al. [24] proposed a self-adaptive binary version of DE (SabDE) with a dissimilarity metric; SabDE uses an adaptive mechanism for creating trial solutions and a chaotic process for adapting parameter values. Banitalebi et al. applied SabDE to the large-scale CEC2015 benchmark problems and compared it with several algorithms: the simplified binary harmony search algorithm (SBHS) proposed in [56] for large-scale 0–1 KP, the binary learning differential evolution algorithm (BLDE) proposed in [14] for the unit commitment problem, the binary hybrid topology particle swarm optimization with quadratic interpolation (BHTPSO-QI) algorithm developed in [57] for discrete optimization problems such as nonlinear high-dimensional benchmarks and the 0–1 MKP, the binary quantum-inspired gravitational search algorithm (BQIGSA) proposed in [58], which combines the gravitational search algorithm (GSA) [59] with quantum computing, and the genetic-operator-based binary artificial bee colony algorithm (GBABC) presented in [60], which uses the crossover and swap operators of the genetic algorithm to solve binary optimization problems.

Zou et al. [61] proposed a novel global harmony search (NGHS) algorithm for solving 0–1 knapsack problems; NGHS uses two important mechanisms, position updating and a small-probability genetic mutation. In another study, Xiang et al. [6] proposed a novel discrete global harmony search (DGHS) in order to solve 0–1 knapsack problems; in DGHS the initial population is generated with a greedy technique to improve the effectiveness of the algorithm, and it uses a discrete genetic mutation and a two-phase repair operator. Feng et al.
[5] proposed a hybrid approach named cuckoo search with global harmony search (CSGHS) to address the shortcomings of the standard cuckoo search (CS) by exploiting the advantages of GHS; the efficiency and effectiveness of CSGHS depend on both components, with GHS improving exploration and CS improving exploitation. Azad et al. [62] presented a simplified binary version of the artificial fish swarm algorithm called S-bAFSA in order to solve the uncapacitated facility location problem. Trial points of S-bAFSA are generated by crossover and mutation operators, and a cyclic regeneration of the population is performed to find better solutions.


Fig. 1. The algorithmic framework of Jaya.

In addition, a local search mechanism is carried out in order to increase the quality of the solutions. In another study, a binary algorithm based on the Jaya optimization algorithm [16] was proposed in order to find optimum solutions for the PMU placement problem (OPPP), where the tangent hyperbolic logistic transfer function is used for the discretization of continuous values; the proposed binary Jaya algorithm was tested on different standard test systems to show its superiority and accuracy. In another study, Rao et al. [63] used the Jaya algorithm for surface grinding process optimization. Similarly, Rao and Waghmare [64] applied Jaya to complex constrained design optimization problems. Wang and Huang [65] proposed a novel elite opposition-based Jaya algorithm (EO-Jaya) for parameter estimation of photovoltaic cell models, and the computational results proved the superiority of EO-Jaya for this task. In [66], a new multi-objective optimization algorithm named MO-Jaya was proposed for the optimization of modern machining processes. In another study, Rao and More [67] developed a self-adaptive Jaya algorithm for the optimal design of thermal devices, and the proposed algorithm performed better than the other optimization techniques compared in their study. Huang et al. [68] proposed an algorithm based on Jaya for efficiently solving the maximum power point tracking (MPPT) problem of PV systems under partial shading conditions. Rao and Ankit [69] proposed an elitist Jaya algorithm for solving the economic optimization of shell-and-tube heat exchanger (STHE) design problems. Wang et al. [70] proposed a GPU-accelerated parallel Jaya algorithm (GPU-Jaya) for efficiently estimating Li-ion battery model parameters; GPU-Jaya accurately estimates the battery model parameters while tremendously reducing the execution time on both entry-level and professional GPUs. Wang et al. [71] applied the Jaya algorithm to parameter estimation of the soil water retention curve model. Cinar and Kiran [36] proposed various binary tree-seed algorithms for solving binary optimization problems by using logic gates (LogicTSA), similarity measurement techniques (SimTSA) and a hybrid variant (SimLogicTSA). LogicTSA uses just the 'xor' logic function, SimTSA uses similarity metrics, and SimLogicTSA uses

Fig. 2. Truth table of ‘xor’ logic gate [11].

both the 'xor' function and the similarity technique, selected according to a randomly generated value in the range [0,1]. In another study, Korkmaz et al. [72] proposed a novel solution update rule and adapted it to the artificial algae algorithm (AAA); their approach, called binAAA, was applied to UFLPs to demonstrate its effectiveness.

3. Jaya algorithms

3.1. Basic Jaya algorithm

Jaya is a population-based algorithm proposed by Rao [34] to solve constrained and unconstrained continuous optimization problems. The main concept of the Jaya algorithm is to move the candidate solutions of a given problem towards the best solution and away from the worst solution. Jaya is easily adapted to problems and does not contain any algorithm-specific parameters [16]. The position update mechanism used in Jaya is given below:

X'_{k,j} = X_{k,j} + r_1 (Best_j − |X_{k,j}|) − r_2 (Worst_j − |X_{k,j}|)    (1)

where k = 1, 2, ..., N indexes the candidate solution to be processed, j = 1, 2, ..., D indexes the dimensions of the problem, Best represents the solution with the best fitness value, Worst represents the solution with the worst fitness value, and r_1 and r_2 are random numbers in [0,1]. MaxFEs denotes the termination condition (the maximum number of function evaluations). The framework of the Jaya algorithm is given in Fig. 1.
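To make the update rule concrete, a minimal Python sketch of one pass of the Eq. (1) update is given below; the function and variable names are illustrative and not part of the original paper, and minimization is assumed.

```python
import numpy as np

def jaya_update(population, fitness, lower, upper, rng):
    """One pass of the basic Jaya position update of Eq. (1) (minimization)."""
    best = population[np.argmin(fitness)]    # solution with the best fitness
    worst = population[np.argmax(fitness)]   # solution with the worst fitness
    n, d = population.shape
    r1 = rng.random((n, d))
    r2 = rng.random((n, d))
    # Move towards the best solution and away from the worst one (Eq. (1)).
    candidates = (population
                  + r1 * (best - np.abs(population))
                  - r2 * (worst - np.abs(population)))
    # Keep the candidates inside the search bounds.
    return np.clip(candidates, lower, upper)
```

In the full algorithm, each candidate would replace its parent only if it improves the objective value, and the loop repeats until MaxFEs function evaluations are consumed, as outlined in Fig. 1.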


Fig. 3. Dimension selection procedure for the update process.

Fig. 4. Binary representation of candidate, Best and Worst solutions.

3.2. Binary Jaya algorithm (BJA)

The first binary version of Jaya (BJA) was developed by Prakash et al. [16] using a tangent hyperbolic logistic transfer function to find optimum solutions for the PMU placement problem. However, BJA is not a general-purpose method for binary optimization problems, and its performance is relatively poor on the problems dealt with in this study. BJA uses all the steps of basic Jaya given in Fig. 1; in addition, it uses a transfer function to convert continuous values into their binary counterparts. Eq. (1) is used in BJA exactly as in basic Jaya. Since basic Jaya is a continuous algorithm, some changes are needed to convert continuous values to binary values; therefore, BJA uses the transfer function given in Eq. (2) for this conversion:

tanh(|X_{i,j}|) = (e^{|2X_{i,j}|} − 1) / (e^{|2X_{i,j}|} + 1)    (2)

The value of X_{i,j} is converted into '0' or '1' according to the tangent hyperbolic transfer function, as given in Eq. (3):

X_{i,j} = 1 if rand() < tanh(|X_{i,j}|), and X_{i,j} = 0 otherwise    (3)
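For clarity, a short sketch of this binarization step, following Eqs. (2) and (3), is given below; the helper name is illustrative.

```python
import numpy as np

def bja_binarize(x, rng):
    """Convert a real-valued Jaya solution to a binary one via Eqs. (2)-(3)."""
    # Eq. (2): tanh(|x|) = (e^{2|x|} - 1) / (e^{2|x|} + 1), computed stably.
    t = np.tanh(np.abs(x))
    # Eq. (3): stochastic thresholding against the transfer value.
    return (rng.random(x.shape) < t).astype(int)
```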

4. Proposed binary versions of the Jaya algorithm

4.1. Binary version of the Jaya algorithm without a local search module

In this study we focus on developing a new binary version of Jaya by integrating the 'xor' logic gate and a local search strategy into Jaya. The 'xor' operator provides a balance between exploitation and exploration, and the local search module used in the proposed algorithm tries to find better solutions around the global best solution. Cinar and Kiran [36] used logic gates and similarity measurement techniques to transform the basic TSA into a binary TSA. In this study, inspired by their work, we integrate the 'xor' logic gate with Jaya, briefly JayaX. Logic gates make it easy to discretize basic algorithms [36]; however, the structure of the basic algorithm must also be appropriate for this process. The basic logic gates are 'and', 'or', 'not' and 'xor'. In binary optimization, each dimension of the solution can take the value '1' or '0'; therefore, the choice of the logic gate used in the update rule is quite important.

Table 1. Test problems used in the experiments.

Name     Dimension     Cost
Cap71    16 × 50       932615.750
Cap72    16 × 50       977799.400
Cap73    16 × 50       1010641.450
Cap74    16 × 50       1034976.975
Cap101   25 × 50       796648.438
Cap102   25 × 50       854704.200
Cap103   25 × 50       893782.113
Cap104   25 × 50       928941.750
Cap131   50 × 50       793439.563
Cap132   50 × 50       851495.325
Cap133   50 × 50       893076.713
Cap134   50 × 50       928941.750
CapA     100 × 1000    17156454.478
CapB     100 × 1000    12979071.58
CapC     100 × 1000    11505594.33

If the 'or' gate were used in the update process, the corresponding dimension of the new solution would be '0' only when both corresponding input dimensions are '0'; hence the new dimension would take the value '1' with 75% probability. If the 'and' gate were used, the probability of obtaining '1' would be only 25%. As the truth table of the 'xor' gate given in Fig. 2 shows, however, the selection probabilities of '0' and '1' are equal. The ⊕ symbol represents the 'xor' logic gate [11]. After these definitions, the solution update mechanism of basic Jaya is modified as follows:

X'_{k,j} = X_{k,j} ⊕ (Best_j ⊕ Worst_j)    (4)

The most important reason for modeling the solution update mechanism in this way is to preserve the idea behind the position update rule of basic Jaya: the candidate solution, the solution with the best fitness value and the solution with the worst fitness value are all used in Eq. (1), and Eq. (4) uses the same three solutions for binary optimization. First, the 'xor' operator is applied to Best_j and Worst_j; then 'xor' is applied to the result of this operation and the candidate solution X_{k,j}. In the basic Jaya algorithm, the update is performed on all dimensions of the candidate solution, which causes a large perturbation in the produced solutions. Therefore, a new mechanism is defined in this study to control the number of dimensions that will be updated; it aims to balance local and global search in the algorithm, and the procedure used for dimension selection is given in Fig. 3. The changes made to the basic Jaya algorithm can be explained with the following, completely randomly created example. The dimension of the problem (D) is 10 and P is taken as 0.40, so d = D × P = 4; as a result, 4 dimensions of the candidate solution will be updated. These dimensions are randomly selected; let us assume that they are 2, 3, 6 and 8. The candidate solution (X_k), Best and Worst solutions are given in Fig. 4.
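The worked example above (D = 10, P = 0.40, hence d = 4 updated dimensions) corresponds to the following minimal sketch of the JayaX candidate generation, combining Eq. (4) with the dimension-selection procedure of Fig. 3; function and variable names are illustrative.

```python
import numpy as np

def jayax_update(x, best, worst, p, rng):
    """Generate a binary candidate from x with the xor rule of Eq. (4).

    x, best, worst are 0/1 integer arrays of length D. Only d = D * p
    randomly selected dimensions are changed (e.g. D = 10, p = 0.40 -> d = 4).
    """
    d_total = len(x)
    d = max(1, int(round(d_total * p)))            # at least one dimension (assumption)
    dims = rng.choice(d_total, size=d, replace=False)
    candidate = x.copy()
    # Eq. (4): X'_{k,j} = X_{k,j} xor (Best_j xor Worst_j) on the chosen dimensions.
    candidate[dims] = x[dims] ^ (best[dims] ^ worst[dims])
    return candidate
```

As in the basic Jaya, the generated candidate would replace X_k only if it yields a better objective value.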


According to Eq. (4), 'xor' is first applied to the Best and Worst solutions; the resulting state after this operation is given in Fig. 5. Then 'xor' is applied to X_k and this intermediate solution on the selected dimensions. After the position update rule, the new position (X'_k) is generated, as given in Fig. 6.

Fig. 5. New state after the 'xor' gate.

Fig. 6. X'_k position after the position update rule.

4.2. Binary version of the Jaya algorithm with a local search module

In this version of the algorithm, the steps described in the previous section are used, but a local search algorithm is integrated with JayaX to improve its effectiveness. The performance of these two binary versions of the Jaya algorithm is evaluated in the experimental results section. The local search module is explained below.

4.2.1. Local search module (LSM)

The LSM used in the proposed algorithm is a simple neighborhood search approach. The LSM begins with the current solution and searches its neighbors to reach a better solution. Kashan et al. [13] used a simple and effective local search procedure hybridized with the DisABC algorithm for solving UFLPs; their local search depends on a parameter named Plocal, which prevents the algorithm from constantly using local search and thus from getting trapped in a local optimum [13]. Zhang et al. [23] used a greedy local search algorithm called EliteLocalSearch in their study to improve the performance of the binary artificial algae algorithm (BAAA). If a local search algorithm is used too frequently, it may bias the balance between exploration and exploitation towards exploitation. In this study, we use the local search algorithm of [23]; however, unlike EliteLocalSearch, the Plocal parameter used in the local search of Kashan et al. [13] is also employed. This prevents the proposed algorithm from using the local search in every iteration, reduces the number of FEs consumed, and increases the effectiveness of exploration and exploitation. The steps of the local search algorithm are given in Fig. 7.

Fig. 7. LSM in JayaX.
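Since the exact steps of Fig. 7 are not reproduced here, the following is only a plausible sketch of a Plocal-gated, greedy bit-flip neighborhood search around the best solution, consistent with the description above; all names are illustrative and the authoritative procedure is the one in Fig. 7.

```python
import numpy as np

def local_search(best_x, best_cost, objective, p_local, rng):
    """Plausible Plocal-gated bit-flip search around the best solution (minimization)."""
    if rng.random() >= p_local:          # enter the module only with probability Plocal
        return best_x, best_cost
    x, cost = best_x.copy(), best_cost
    for j in rng.permutation(len(x)):    # visit the dimensions in random order
        neighbor = x.copy()
        neighbor[j] ^= 1                 # flip a single bit
        c = objective(neighbor)
        if c < cost:                     # greedily keep improving flips
            x, cost = neighbor, c
    return x, cost
```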

4.2.2. The framework of the proposed algorithm

All the modifications made to the basic Jaya to effectively solve binary optimization problems are defined in Fig. 8.

5. A brief description of the problem dealt with in the study

In this study, uncapacitated facility location instances taken from the OR-Library [39] are used as the test suite. Uncapacitated facility location problems (UFLPs) include customers and facilities. Each facility is placed at a different location and has a fixed installation cost. Each customer is located at a different location and has transportation costs that are proportional to its distance from the facilities. All customer demands have to be met by the facilities with the lowest transportation cost. The total cost of the problem is the sum of the installation costs of the open facilities and the transportation costs between the customers and the facilities. The main objective of the UFLP is to determine the opened and closed facilities with a minimum total cost. There are 2^n different solutions depending on the number n of facilities that can be open or closed, but by the nature of the UFLP at least one facility should be opened, so the number of possible solutions is 2^n − 1. The complexity of the problem grows rapidly with the number of facilities; therefore, UFLPs are known to be NP-hard [73,74]. When the setup costs of the facilities are taken as '0', the problem can be solved in polynomial time. A simple illustration of a UFLP is given in Fig. 9, where green circles represent open facilities, red circles represent closed facilities and white circles represent customers. A dashed line between a customer and a facility means that the customer does not use this facility and is named a passive connection.


Fig. 8. Algorithmic framework of JayaX with LSM.

Table 2. The effects of the parameters on the performance of JayaX (success rates; rows: population size N, columns: parameter P).

N     P=0.10  0.15    0.20    0.25    0.30    0.35    0.40    0.45    0.50    0.55    0.60    0.65    0.70    0.75    0.80    0.85    0.90    0.95    1
10    0.8378  0.8444  0.8156  0.8200  0.7956  0.7667  0.7489  0.7311  0.7378  0.7533  0.6956  0.6600  0.6600  0.6356  0.5956  0.5222  0.4200  0.1933  0.0667
20    0.9089  0.9311  0.9356  0.9200  0.9044  0.9111  0.9000  0.8756  0.8756  0.8578  0.8467  0.8244  0.8133  0.7644  0.7578  0.7400  0.6711  0.4889  0.1044
30    0.9156  0.9267  0.9400  0.9400  0.9400  0.9489  0.9444  0.9444  0.9289  0.9089  0.8978  0.8911  0.8711  0.8378  0.8067  0.7844  0.7289  0.6244  0.1600
40    0.9111  0.9133  0.9200  0.9467  0.9511  0.9578  0.9622  0.9556  0.9533  0.9311  0.9400  0.9311  0.9133  0.8822  0.8667  0.8378  0.7578  0.6867  0.2156
50    0.9067  0.9133  0.9267  0.9267  0.9333  0.9378  0.9622  0.9444  0.9667  0.9444  0.9222  0.9400  0.9156  0.9044  0.9000  0.8644  0.8089  0.7444  0.2844
60    0.9000  0.9222  0.9022  0.9244  0.9244  0.9267  0.9311  0.9333  0.9533  0.9444  0.9489  0.9378  0.9356  0.9089  0.9111  0.8644  0.8556  0.7844  0.2933
70    0.9044  0.8978  0.9111  0.9133  0.9089  0.9178  0.9244  0.9200  0.9244  0.9378  0.9600  0.9578  0.9378  0.9311  0.9178  0.9067  0.8711  0.7711  0.3333
80    0.9222  0.9178  0.9000  0.9067  0.9044  0.9111  0.9044  0.9133  0.9133  0.9222  0.9289  0.9578  0.9578  0.9356  0.9222  0.9067  0.8756  0.8267  0.3467
90    0.9044  0.9022  0.9000  0.8933  0.8867  0.8933  0.8911  0.8778  0.8933  0.9178  0.9289  0.9311  0.9356  0.9356  0.9111  0.9244  0.8756  0.8289  0.3244
100   0.9089  0.9089  0.9067  0.8933  0.8844  0.8689  0.8511  0.8556  0.8511  0.8822  0.8933  0.9178  0.9311  0.9289  0.9356  0.9267  0.8978  0.8467  0.3622

The straight lines between a customer and a facility mean that the customer uses this facility; such a link is named an active connection. The cost function of the UFLP is calculated as follows [40]: I = {i_1, i_2, ..., i_m} is the set of potential facility locations, J = {j_1, j_2, ..., j_n} is the set of customers, F = (f_1, f_2, ..., f_m) is the vector of fixed costs of the facilities, and C = [c_{i,j}] (an m × n matrix) is the transshipment cost matrix. The objective is

arg min_S { δ(S) = Σ_{i∈S} f_i + Σ_{j∈J} min{ c_{i,j} | i ∈ S } }    (5)

where S is a nonempty subset of the set of potential facility locations I, c_{i,j} is the transshipment cost between the ith facility and the jth customer, and f_i is the fixed cost of the ith facility. We select this problem as the benchmark test suite in this study because it is a pure binary optimization problem, there are enough benchmark sets in the literature, most of the compared algorithms use these problems, many binary optimization problems can be modeled by using this mathematical description, and the optimum solutions of the problems are available in the literature.
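As an illustration, the objective in Eq. (5) can be evaluated for a binary facility vector (1 = open) as in the following sketch; function and variable names are illustrative.

```python
import numpy as np

def uflp_cost(open_mask, fixed_costs, transport_costs):
    """Evaluate the UFLP objective of Eq. (5) for a 0/1 facility vector."""
    open_idx = np.flatnonzero(open_mask)
    if open_idx.size == 0:               # at least one facility must be open
        return np.inf
    fixed = fixed_costs[open_idx].sum()
    # Each customer is served by its cheapest open facility.
    service = transport_costs[open_idx, :].min(axis=0).sum()
    return fixed + service
```

Returning an infinite cost for the all-zero vector reflects the requirement that at least one facility must be opened.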

Table 3. Comparison of JayaX and JayaX-LSM with the state-of-the-art algorithms on Cap71-74.

Method      Problem  Mean         Gap      Hit  Std. Dev.
GA-SP       Cap71    932615.750   0.00000  30   0.000
GA-SP       Cap72    977799.400   0.00000  30   0.000
GA-SP       Cap73    1011314.476  0.06659  19   899.650
GA-SP       Cap74    1034976.975  0.00000  30   0.000
GA-TP       Cap71    932615.750   0.00000  30   0.000
GA-TP       Cap72    977799.400   0.00000  30   0.000
GA-TP       Cap73    1011130.923  0.04843  22   825.576
GA-TP       Cap74    1034976.975  0.00000  30   0.000
GA-UP       Cap71    932615.750   0.00000  30   0.000
GA-UP       Cap72    977799.400   0.00000  30   0.000
GA-UP       Cap73    1011069.739  0.04238  23   789.612
GA-UP       Cap74    1034976.975  0.00000  30   0.000
BAAA-Tanh   Cap71    932615.750   0.00000  30   0.000
BAAA-Tanh   Cap72    977799.400   0.00000  30   0.000
BAAA-Tanh   Cap73    1010641.450  0.00000  30   0.000
BAAA-Tanh   Cap74    1034976.975  0.00000  30   0.000
BAAA-Sig    Cap71    932615.750   0.00000  30   0.000
BAAA-Sig    Cap72    977799.400   0.00000  30   0.000
BAAA-Sig    Cap73    1010641.450  0.00000  30   0.000
BAAA-Sig    Cap74    1034976.975  0.00000  30   0.000
binAAA      Cap71    932615.750   0.00000  30   0.000
binAAA      Cap72    977799.400   0.00000  30   0.000
binAAA      Cap73    1010641.450  0.00000  30   0.000
binAAA      Cap74    1034976.975  0.00000  30   0.000
BJA         Cap71    932615.750   0.00000  30   0.000
BJA         Cap72    977799.400   0.00000  30   0.000
BJA         Cap73    1010763.818  0.01211  28   465.688
BJA         Cap74    1035433.658  0.04412  25   1038.632
JayaX       Cap71    932615.750   0.00000  30   0.000
JayaX       Cap72    977799.400   0.00000  30   0.000
JayaX       Cap73    1010641.450  0.00000  30   0.000
JayaX       Cap74    1034976.975  0.00000  30   0.000
JayaX-LSM   Cap71    932615.750   0.00000  30   0.000
JayaX-LSM   Cap72    977799.400   0.00000  30   0.000
JayaX-LSM   Cap73    1010641.450  0.00000  30   0.000
JayaX-LSM   Cap74    1034976.975  0.00000  30   0.000

Fig. 9. The illustration of a UFLP.

6. Experiments

To analyze and validate the performance of JayaX, 15 UFLPs are used in the experiments. JayaX and the compared algorithms are run 30 times with random initialization in order to assess the robustness of the algorithms. The obtained results are reported as the mean of the 30 runs, the standard deviation, the gap and the hit count; the gap shows the optimization error and the hit count shows how many times the optimum solution of the problem is obtained. Before the comparison of the algorithms, a parameter analysis is conducted for the proposed algorithms. To indicate the potency of JayaX, a second comparison is conducted on the CEC2015 benchmarks, where the proposed algorithm is compared with SabDE, BQIGSA, GBABC, BHTPSO-QI, BLDE and SBHS.

6.1. Test suite

For the first experiments, the 15 UFLPs taken from OR-Lib [39] are used in the analysis. Table 1 gives the name, the optimum cost and the dimensionality (the number of decision variables) of each problem. The test problems can be divided into four groups. The first group consists of Cap71, Cap72, Cap73 and Cap74; these are small-sized problems with 16 decision variables. The Cap101, 102, 103 and 104 problems are medium-sized problems with 25 decision variables. The large-sized problems are Cap131, 132, 133 and 134, with 50 decision variables. The last group contains the extra-large-sized problems with 100 decision variables, named CapA, CapB and CapC. The hardness of these problems depends on their dimensionality, and an effective search is required to solve them optimally. In the second experiments, the CEC2015 [42] large-scale benchmark suite of bound-constrained single-objective computationally expensive optimization problems is considered. These problems are taken as real-valued 10-dimensional problems, and when they are converted to binary counterparts, 50 bits are used for each dimension, so a total of 500 bits has to be optimized. Therefore, these problems have higher dimensions than the CapA, CapB and CapC problems.

6.2. Parameter analysis of JayaX for UFLPs

Before setting the control parameters of the proposed algorithm for the comparisons, the control parameters

Table 4. Comparison of JayaX and JayaX-LSM with the state-of-the-art algorithms on Cap101-104.

Method      Problem  Mean        Gap      Hit  Std. Dev.
GA-SP       Cap101   797193.286  0.06839  11   421.655
GA-SP       Cap102   854704.200  0.00000  30   0.000
GA-SP       Cap103   894351.782  0.06374  6    505.036
GA-SP       Cap104   928941.750  0.00000  30   0.000
GA-TP       Cap101   797164.610  0.06479  12   428.658
GA-TP       Cap102   854704.200  0.00000  30   0.000
GA-TP       Cap103   894329.179  0.06121  10   540.160
GA-TP       Cap104   928941.750  0.00000  30   0.000
GA-UP       Cap101   797107.258  0.05759  14   436.524
GA-UP       Cap102   854704.200  0.00000  30   0.000
GA-UP       Cap103   894427.382  0.07220  9    522.784
GA-UP       Cap104   928941.750  0.00000  30   0.000
BAAA-Tanh   Cap101   796677.114  0.00360  29   157.066
BAAA-Tanh   Cap102   854704.200  0.00000  30   0.000
BAAA-Tanh   Cap103   893782.113  0.00000  30   0.000
BAAA-Tanh   Cap104   928941.750  0.00000  30   0.000
BAAA-Sig    Cap101   796648.438  0.00000  30   0.000
BAAA-Sig    Cap102   854704.200  0.00000  30   0.000
BAAA-Sig    Cap103   893782.113  0.00000  30   0.000
BAAA-Sig    Cap104   928941.750  0.00000  30   0.000
binAAA      Cap101   796648.438  0.00000  30   0.000
binAAA      Cap102   854704.200  0.00000  30   0.000
binAAA      Cap103   893782.113  0.00000  30   0.000
binAAA      Cap104   928941.750  0.00000  30   0.000
BJA         Cap101   796791.819  0.01800  25   326.091
BJA         Cap102   854833.133  0.01509  27   393.441
BJA         Cap103   893974.557  0.02153  22   382.124
BJA         Cap104   928941.750  0.00000  30   0.000
JayaX       Cap101   796648.438  0.00000  30   0.000
JayaX       Cap102   854704.200  0.00000  30   0.000
JayaX       Cap103   893782.113  0.00000  30   0.000
JayaX       Cap104   928941.750  0.00000  30   0.000
JayaX-LSM   Cap101   796648.438  0.00000  30   0.000
JayaX-LSM   Cap102   854704.200  0.00000  30   0.000
JayaX-LSM   Cap103   893782.113  0.00000  30   0.000
JayaX-LSM   Cap104   928941.750  0.00000  30   0.000

of JayaX were analyzed. For this analysis, the population size N is varied from 10 to 100, and the fraction of dimensions updated at one attempt (the P parameter) is varied from 0.1 to 1; P = 1 means that all dimensions of the solution are updated at one attempt, and P = 0.1 means that 10% of the dimensions are updated at one attempt. The LSM is also used in all analyses, and Plocal is set to 0.01 as in [13]. The termination condition is the maximum number of function evaluations, set to 8E+5 in all analyses. The obtained results are given in Table 2. The reported values are the success rates over all cases: there are 15 UFLPs and 30 runs for each test problem, giving 450 runs in total, and the optimum is expected to be reached in each run. The success rate of each parameter set is therefore calculated as in Eq. (6):

SR = NTH / (NP × NR)    (6)

where SR is the success rate, NTH is the total number of hits over the 450 runs, NP is the number of problems and NR is the number of runs. In this study NP is 15 and NR is 30. When P is set to 0.4–0.6 and the population size to 40–70, the better results are obtained by JayaX. Moreover, it is seen from Table 2 that when P is set to 1, as in the original version of Jaya, the worst results are produced by the proposed algorithm because the algorithm cannot intensify the search around the discovered solutions.
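As a small worked example of Eq. (6), a parameter pair that reaches the optimum in, say, 420 of the 450 runs would score SR = 420 / (15 × 30) ≈ 0.93; a minimal helper with illustrative names is shown below.

```python
def success_rate(total_hits, num_problems=15, num_runs=30):
    """Success rate of one parameter pair over all runs, as in Eq. (6)."""
    return total_hits / (num_problems * num_runs)
```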

In the comparisons, since the population size and the termination condition are set to 40 and 8E+5 in the compared algorithms, we select the same parameter setup for JayaX to provide a fair comparison. When N is set to 40, the best choice for P is 0.4, and therefore this parameter pair is used in the comparisons.

6.3. Comparison on UFLPs

The parameter analysis shows that JayaX produces the optimal solution for 13 of the 15 test problems at every run, and on the remaining 2 problems its performance is at an acceptable level. In this section, the performance of JayaX is compared with prominent swarm intelligence and evolutionary computation algorithms. The JayaX algorithm has two versions: JayaX-LSM has a local search module, while JayaX does not. These two algorithms have been compared with state-of-the-art algorithms such as genetic algorithms (GAs), artificial algae algorithms (BAAA-Tanh, BAAA-Sig, binAAA), binary particle swarm optimization (BPSO), binary artificial bee colony algorithms (binABC, DisABC and ABCbin), a binary tree-seed algorithm (SimLogicTSA), and binary differential evolution algorithms (DisDE and binDE). In the first comparison, three versions of GA with different crossover operators (single point, two point, uniform), the AAA versions and the binary Jaya (BJA) algorithm are compared with JayaX and JayaX-LSM. All the algorithms in the comparison are run 30 times, and the obtained results are reported as the mean, standard deviation (Std. Dev.), Hit and Gap values. Hit is the number of successes out of 30 runs.

Table 5. Comparison of JayaX and JayaX-LSM with the state-of-the-art algorithms on Cap131-134.

Method      Problem  Mean        Gap      Hit  Std. Dev.
GA-SP       Cap131   793980.104  0.06813  16   720.877
GA-SP       Cap132   851495.325  0.00000  30   0.000
GA-SP       Cap133   893891.911  0.09128  10   685.076
GA-SP       Cap134   928941.750  0.00000  30   0.000
GA-TP       Cap131   794012.905  0.07226  14   690.560
GA-TP       Cap132   851495.325  0.00000  30   0.000
GA-TP       Cap133   893740.954  0.07438  12   655.920
GA-TP       Cap134   928941.750  0.00000  30   0.000
GA-UP       Cap131   793865.023  0.05362  15   433.467
GA-UP       Cap132   851517.200  0.00257  29   119.817
GA-UP       Cap133   893808.891  0.08198  9    628.654
GA-UP       Cap134   928941.750  0.00000  30   0.000
BAAA-Tanh   Cap131   793525.591  0.01084  27   262.498
BAAA-Tanh   Cap132   851495.325  0.00000  30   0.000
BAAA-Tanh   Cap133   893333.515  0.02875  16   324.451
BAAA-Tanh   Cap134   928941.750  0.00000  30   0.000
BAAA-Sig    Cap131   793439.563  0.00000  30   0.000
BAAA-Sig    Cap132   851495.325  0.00000  30   0.000
BAAA-Sig    Cap133   893076.713  0.00000  30   0.000
BAAA-Sig    Cap134   928941.750  0.00000  30   0.000
binAAA      Cap131   793439.563  0.00000  30   0.000
binAAA      Cap132   851495.325  0.00000  30   0.000
binAAA      Cap133   893082.539  0.00065  29   31.914
binAAA      Cap134   928941.750  0.00000  30   0.000
BJA         Cap131   794573.409  0.14290  11   1266.076
BJA         Cap132   852450.297  0.11215  9    1159.249
BJA         Cap133   894293.350  0.13623  6    1689.417
BJA         Cap134   929170.160  0.02459  21   502.235
JayaX       Cap131   793439.563  0.00000  30   0.000
JayaX       Cap132   851495.325  0.00000  30   0.000
JayaX       Cap133   893076.713  0.00000  30   0.000
JayaX       Cap134   928941.750  0.00000  30   0.000
JayaX-LSM   Cap131   793439.563  0.00000  30   0.000
JayaX-LSM   Cap132   851495.325  0.00000  30   0.000
JayaX-LSM   Cap133   893076.713  0.00000  30   0.000
JayaX-LSM   Cap134   928941.750  0.00000  30   0.000

The Gap value is the difference between the optimal cost and the average cost of the 30 runs found by the algorithm, calculated as follows:

Gap = ((f(mean) − f(opt)) / f(opt)) × 100    (7)

where f(opt) refers to the optimum cost and f(mean) refers to the average cost of the solutions found by the algorithm over 30 runs. The results of the state-of-the-art algorithms given in Tables 3–6 are taken directly from [72] and compared with JayaX and JayaX-LSM. In Table 3, all the algorithms achieve the optimum solutions of the Cap71 and Cap72 problems; on the rest of the problems, JayaX-LSM shows the best performance among the compared algorithms. Another important comparison for this type of algorithm is based on convergence characteristics. As seen from Figs. 10–13, JayaX and JayaX-LSM obtain the optimal solutions of the small-sized problems in fewer than 100 iterations. On the medium-sized problems, the same performance is shown by these two algorithms in about 100 iterations, with JayaX being slightly better. On the large-sized problems, JayaX and JayaX-LSM reach the optimum solutions in fewer than 1000 iterations (about 800 iterations), and JayaX is faster than JayaX-LSM. On the last part of the test set (CapA, CapB and CapC), JayaX-LSM is better than the compared algorithms in terms of convergence characteristics, as seen from Fig. 13. In the second comparison, given in Table 7, JayaX and JayaX-LSM are compared with BPSO; the results of BPSO are taken directly from [11]. As seen from Table 7, JayaX and JayaX-LSM find better solutions than BPSO.

In Table 8, JayaX and JayaX-LSM are compared with binary versions of ABC; the results of the ABC variants are taken directly from [10,11]. The binary ABC algorithms and our algorithms show acceptable performance on the small, medium and large-sized problems, but when Table 8 is examined it can be seen that the proposed algorithms find more effective results on the UFLPs than all the compared binary ABC variants. In Table 9, JayaX and JayaX-LSM are compared with the binary TSA [36] and binary DE algorithms [12]; the results of SimLogicTSA, DisDE/rand and binDE are taken directly from [36]. SimLogicTSA, a hybrid algorithm based on logic operators and a similarity-based approach, shows comparable performance with the proposed algorithms on the small, medium and large-sized problems. On the extra-large-sized problems, JayaX-LSM is better than all the compared algorithms, as seen from Table 9.

6.4. Comparison on the CEC2015 test suite

The second comparison is conducted on the CEC2015 benchmarks to show the efficiency of JayaX. The proposed algorithm is compared with the state-of-the-art algorithms presented in the study of the self-adaptive binary differential evolution algorithm [24]; the experimental results of SabDE, BQIGSA, GBABC, BHTPSO-QI, BLDE and SBHS are taken directly from [24]. The control parameters of the proposed algorithm, such as the population size, the stopping criterion, the dimension of the problems and the bit size for each dimension, are selected as in [24] for a fair comparison. The common parameters of the CEC2015 problems are given in Table 10.

Table 6. Comparison of JayaX and JayaX-LSM with the state-of-the-art algorithms on CapA, CapB and CapC.

Method      Problem  Mean           Gap      Hit  Std. Dev.
GA-SP       CapA     17164354.456   0.04605  24   22451.206
GA-SP       CapB     13054858.045   0.58391  9    66658.649
GA-SP       CapC     11586692.969   0.70486  2    51848.248
GA-TP       CapA     17205089.145   0.28348  24   139690.216
GA-TP       CapB     13063527.186   0.65071  11   89122.485
GA-TP       CapC     11577797.524   0.62755  0    46346.052
GA-UP       CapA     17166811.915   0.06037  24   35181.974
GA-UP       CapB     13107633.077   0.99053  3    79714.021
GA-UP       CapC     11578600.532   0.63453  0    57031.219
BAAA-Tanh   CapA     17471223.794   1.83470  3    225123.921
BAAA-Tanh   CapB     13153617.764   1.34483  0    73978.543
BAAA-Tanh   CapC     11676427.752   1.48479  0    101438.607
BAAA-Sig    CapA     17210900.533   0.31735  16   90743.456
BAAA-Sig    CapB     13093705.559   0.88322  1    62168.803
BAAA-Sig    CapC     11583462.068   0.67678  1    45788.678
binAAA      CapA     17196684.161   0.23449  21   78259.572
binAAA      CapB     13066644.044   0.67472  3    57572.956
binAAA      CapC     11555763.878   0.43604  3    43963.880
BJA         CapA     17714111.072   3.25042  1    376679.686
BJA         CapB     13230727.167   1.93893  0    126103.466
BJA         CapC     11646233.385   1.22235  0    92452.923
JayaX       CapA     17246690.954   0.52596  19   151110.150
JayaX       CapB     13041453.028   0.48063  15   89306.162
JayaX       CapC     11519301.843   0.11914  8    30219.354
JayaX-LSM   CapA     17156454.478   0.00000  30   0.000
JayaX-LSM   CapB     12989379.443   0.07942  26   27033.019
JayaX-LSM   CapC     11508089.967   0.02169  17   5455.945

Table 7. Comparison of JayaX and JayaX-LSM with BPSO.

Problem:             Cap71  Cap72  Cap73  Cap74  Cap101 Cap102 Cap103 Cap104 Cap131 Cap132 Cap133 Cap134 CapA   CapB   CapC
BPSO Gap:            0.0000 0.0000 0.0242 0.0088 0.0462 0.0148 0.0422 0.0810 0.1317 0.0914 0.1115 0.1346 2.1785 1.9490 1.4870
BPSO Std. Dev.:      0.00 0.00 634.62 500.27 566.44 386.76 485.26 1951.81 1207.63 1196.19 821.28 2285.42 374302.81 176206.07 92977.85
JayaX Gap:           0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.5260 0.4806 0.1191
JayaX Std. Dev.:     0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 151110.15 89306.16 30219.35
JayaX Hit:           30 30 30 30 30 30 30 30 30 30 30 30 19 15 8
JayaX-LSM Gap:       0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0794 0.0217
JayaX-LSM Std. Dev.: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 27033.02 5455.94
JayaX-LSM Hit:       30 30 30 30 30 30 30 30 30 30 30 30 30 26 17

The other control parameters of JayaX are selected as Plocal = 0.01 and P = 0.50. JayaX-LSM and the other algorithms given in [24] are binary optimization algorithms; however, the decision variables of the benchmark problems dealt with in this study are continuous. Therefore, each dimension of a candidate solution is represented with 50 bits, and a total of 500 bits is used for each candidate solution. Because each candidate solution consists of binary values, it has to be converted to continuous values before its cost is calculated. This conversion is performed as follows:

NV_i = LB + ((UB − LB) × DV_i) / (2^m − 1)    (8)

where the maximum value that an m-bit group can represent is 2^m − 1, NV_i is the continuous value of the ith dimension of the numeric vector (NV), and DV_i is the decimal value (DV) of the m-dimensional binary group for the ith dimension. After these explanations, the obtained results on these problems are given in Table 11, and the rank results are given in Fig. 14.
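A sketch of this decoding step for a single candidate is given below, assuming D = 10 groups of m = 50 bits and a most-significant-bit-first ordering (the bit ordering is not specified in the paper); names are illustrative.

```python
import numpy as np

def decode_candidate(bits, dim=10, m=50, lb=-100.0, ub=100.0):
    """Decode a 0/1 candidate into continuous values with Eq. (8)."""
    groups = np.asarray(bits).reshape(dim, m)
    # Decimal value DV_i of each m-bit group (most significant bit first, assumed).
    weights = 2.0 ** np.arange(m - 1, -1, -1)
    dv = groups @ weights
    # Eq. (8): map DV_i from [0, 2^m - 1] onto [LB, UB].
    return lb + (ub - lb) * dv / (2.0 ** m - 1.0)
```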


Fig. 10. Convergence graphs of the algorithms on the Cap71-74.

Table 8. Comparison of JayaX and JayaX-LSM with ABC-based binary optimization algorithms.

Problem:             Cap71  Cap72  Cap73  Cap74  Cap101 Cap102 Cap103 Cap104 Cap131 Cap132 Cap133 Cap134 CapA   CapB   CapC
binABC Gap:          0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.1215 0.0000 2.9622 2.5081 2.5800
binABC Std. Dev.:    0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 200.24 0.00 236833.50 91430.13 82312.70
DisABC Gap:          0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.6196 0.0945 0.0309 0.0000 0.1522 3.3027 4.6968
DisABC Std. Dev.:    0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2337.64 813.37 359.03 0.00 74782.61 109738.50 95778.78
ABCbin Gap:          0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0051 0.0000 0.1967 0.0199 0.0747 0.0000 3.1723 2.8154 2.0374
ABCbin Std. Dev.:    0.00 0.00 0.00 0.00 0.00 0.00 85.67 0.00 1065.73 213.28 561.34 0.00 268685.20 88452.80 78162.20
JayaX Gap:           0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.5260 0.4806 0.1191
JayaX Std. Dev.:     0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 151110.15 89306.16 30219.35
JayaX Hit:           30 30 30 30 30 30 30 30 30 30 30 30 19 15 8
JayaX-LSM Gap:       0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0794 0.0217
JayaX-LSM Std. Dev.: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 27033.02 5455.94
JayaX-LSM Hit:       30 30 30 30 30 30 30 30 30 30 30 30 30 26 17

Based on Table 11 and Fig. 14, the proposed algorithm achieves better results than the state-of-the-art binary optimization algorithms. On these high-dimensional problems (500 bits), the results show that JayaX is an effective optimizer for binary optimization.

7. Results and discussion

JayaX and JayaX-LSM have been applied to solve 15 uncapacitated facility location problems, and the obtained results are


Fig. 11. Convergence graphs of the algorithms on the Cap101-104.

Table 9. Comparison of JayaX and JayaX-LSM with TSA- and DE-based binary optimization algorithms.

Problem:                 Cap71  Cap72  Cap73  Cap74  Cap101 Cap102 Cap103 Cap104 Cap131 Cap132 Cap133 Cap134 CapA   CapB   CapC
DisDE/rand Gap:          0.0000 0.0000 0.0000 0.0000 0.0036 0.0049 0.0055 0.0000 0.0036 0.0000 0.0138 0.0000 0.0370 0.1890 0.0909
DisDE/rand Hit:          30 30 30 30 29 29 27 30 29 30 25 30 29 18 8
binDE Gap:               0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0036 0.0050 0.0138 0.0000 1.3000 1.5200 1.5500
binDE Hit:               30 30 30 30 30 30 30 30 29 29 24 30 8 0 0
SimLogicTSA Gap:         0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.3176 0.4120
SimLogicTSA Std. Dev.:   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 47993.68 48379.53
SimLogicTSA Hit:         30 30 30 30 30 30 30 30 30 30 30 30 30 0 0
JayaX Gap:               0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.5260 0.4806 0.1191
JayaX Std. Dev.:         0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 151110.15 89306.16 30219.35
JayaX Hit:               30 30 30 30 30 30 30 30 30 30 30 30 19 15 8
JayaX-LSM Gap:           0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0794 0.0217
JayaX-LSM Std. Dev.:     0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 27033.02 5455.94
JayaX-LSM Hit:           30 30 30 30 30 30 30 30 30 30 30 30 30 26 17

compared with state-of-the-art swarm intelligence and evolutionary computation algorithms. The obtained results show that JayaX and JayaX-LSM perform better than or similarly to the compared algorithms. When the results of JayaX and JayaX-LSM on the test problems dealt with in this study are analyzed, it can be seen that the JayaX algorithm does not need local search on the

small, medium and large-sized problems, because it shows similar performance to JayaX-LSM in terms of solution quality and robustness and is better than JayaX-LSM in terms of convergence characteristics. However, JayaX shows worse performance than JayaX-LSM on the higher-dimensional test problems. This means that the solution update strategy of


Fig. 12. Convergence graphs of the algorithms on the Cap131-134.

Table 10. Parameters for CEC2015 benchmark problems.

Parameter   Definition                          Value
N           Population size                     50
D           Decision variables                  10
m           Bit size for each dimension         50
D∗m         Dimension of candidate solution     500
MaxFEs      Number of fitness evaluations       D × 10000
UB          Upper bound                         100
LB          Lower bound                         −100
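Table 10 implies that, for the CEC2015 problems, each candidate is a 500-bit string in which every group of m = 50 bits encodes one of the D = 10 continuous decision variables in [−100, 100]. The paper describes its own decoding scheme in the experimental section; the following Python sketch shows a standard binary-to-real mapping that is consistent with these parameters and is given only as an assumption, not as the authors' exact procedure.

    def decode(bits, d=10, m=50, lb=-100.0, ub=100.0):
        # Map a d*m-bit candidate to d real values in [lb, ub]: each group of
        # m bits is read as an unsigned integer and scaled into the interval.
        assert len(bits) == d * m
        values = []
        for i in range(d):
            group = bits[i * m:(i + 1) * m]
            integer = int("".join(str(b) for b in group), 2)
            values.append(lb + (ub - lb) * integer / (2 ** m - 1))
        return values

    # Example: an all-zero 500-bit candidate decodes to ten copies of the lower bound.
    print(decode([0] * 500))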

This means that the solution update strategy of JayaX is sufficient for lower-dimensional problems, whereas local search is needed on higher-dimensional problems. By including the worst individual's information in the solution update rule, the required perturbation and exploration are provided for JayaX; however, exploitation (local search) capability becomes more important as the solution space grows. Therefore, although JayaX presents good performance on the binary problems, it needs the local search module in order to obtain better results. When the JayaX update rule is compared with those of the other algorithms, there are two important differences. The first is that JayaX uses both the best and the worst information in the population to obtain a candidate solution. The second is the number of dimensions updated in one attempt, which depends on the dimensionality of the problem; the analysis in this study suggests a ratio of 0.4, i.e., updating 40% of the problem dimensions is appropriate for the JayaX update rule. These two modifications are behind the performance of JayaX, and the experimental results on the UFLPs and the CEC2015 problems show that the proposed algorithm is an alternative and competitive optimizer for binary optimization.
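To make these two points concrete, the following Python sketch shows one plausible form of such an xor-based update that uses the best and worst individuals and modifies roughly 40% of the bits. It is a schematic illustration written for this discussion and is not the authors' exact operator; the function name and the way the random bits gate the xor terms are assumptions.

    import random

    def xor_update(x, best, worst, ratio=0.4):
        # Schematic JayaX-style step (illustrative, not the paper's exact rule):
        # about `ratio` of the bits are recombined with information taken from
        # the best and the worst members of the population via xor.
        d = len(x)
        cand = list(x)
        for j in random.sample(range(d), max(1, round(ratio * d))):
            r1, r2 = random.randint(0, 1), random.randint(0, 1)
            cand[j] ^= r1 & (cand[j] ^ best[j])           # pull the bit toward the best solution
            cand[j] ^= r2 & (~(cand[j] ^ worst[j]) & 1)   # flip the bit if it still matches the worst solution
        return cand

    # Example with 0/1 lists of equal length:
    x, best, worst = [0, 1, 1, 0, 1], [1, 1, 0, 0, 1], [0, 0, 1, 1, 0]
    print(xor_update(x, best, worst))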

8. Conclusion and future works

In this study, a novel binary optimization algorithm based on the Jaya algorithm and the logical exclusive-or (xor) operator has been proposed to solve the uncapacitated facility location problem, which is a pure binary optimization problem. In order to improve the proposed JayaX algorithm, a local search module has been integrated with it (JayaX-LSM). The key points of the JayaX and JayaX-LSM algorithms are that the best and worst information in the population is used for creating candidate solutions, and that the number of decision variables updated at each candidate generation is controlled depending on the dimensionality of the problem. A parameter analysis for JayaX and JayaX-LSM has been performed, and the best parameter set has been used for the comparison of the algorithms with the state-of-the-art methods. JayaX and JayaX-LSM have been compared with binary particle swarm optimization, binary variants of the artificial bee colony algorithm, the binary tree-seed algorithm, genetic algorithms, and other methods. The best performance among these comparisons has been presented by the JayaX-LSM algorithm. In the near future, we will apply the proposed algorithms to other well-known binary optimization problems and to feature selection.


Fig. 13. Convergence graphs of the algorithms on the CapA, CapB and CapC.

Fig. 14. The rank results of compared algorithms on CEC2015 problems.

Declaration of competing interest

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.asoc.2019.105576.

References

[1] K.G. Murty, Optimization models for decision making, vol. 1, Chapter 1 (2003) 1–18.
[2] N. Gould, An Introduction To Algorithms for Continuous Optimization, Oxford University Computing Laboratory Notes, 2006.


Table 11. Comparison of JayaX-LSM with the state-of-the-art algorithms on CEC2015.

Problem  Criteria  SBHS       BLDE       BHTPSO-QI  GBABC      BQIGSA     SabDE      JayaX-LSM
f1       Mean      3.700E+08  1.087E+11  7.510E+08  2.729E+07  8.419E+07  3.093E+08  9.728E+07
         Std.      2.271E+08  2.351E+11  2.203E+09  2.267E+07  7.354E+07  1.168E+08  9.588E+07
         Rank      5          7          6          1          2          4          3
f2       Mean      4.493E+09  4.747E+09  5.132E+09  2.864E+09  7.834E+09  2.541E+09  1.968E+04
         Std.      1.790E+06  3.209E+08  4.685E+08  2.374E+09  6.527E+09  5.008E+09  1.075E+04
         Rank      4          5          6          3          7          2          1
f3       Mean      3.203E+02  3.201E+02  3.203E+02  3.202E+02  3.202E+02  3.200E+02  3.065E+02
         Std.      7.013E−02  1.093E−02  7.426E−02  2.641E+02  2.641E+02  2.044E−02  1.247E+00
         Rank      5          3          5          4          4          2          1
f4       Mean      4.478E+02  4.378E+02  4.478E+02  4.358E+02  4.389E+02  4.116E+02  6.696E+02
         Std.      5.171E+00  7.754E+00  1.122E+01  3.599E+02  3.621E+02  7.606E+00  1.481E+02
         Rank      5          3          5          2          4          1          6
f5       Mean      2.592E+03  1.934E+03  1.804E+03  1.108E+03  1.542E+03  9.330E+02  5.007E+02
         Std.      1.959E+02  3.088E+02  3.926E+02  9.736E+02  1.275E+03  9.464E+01  3.774E−01
         Rank      7          6          5          3          4          2          1
f6       Mean      5.312E+04  2.484E+06  6.799E+06  7.442E+06  5.582E+05  4.625E+04  6.006E+02
         Std.      1.512E+04  7.369E+06  1.186E+07  1.321E+07  6.055E+05  4.076E+04  2.362E−01
         Rank      3          5          6          7          4          2          1
f7       Mean      8.368E+02  8.472E+02  8.510E+02  7.589E+02  7.392E+02  7.752E+02  7.019E+02
         Std.      7.981E+00  1.289E+01  1.694E+01  4.668E+02  4.324E+02  4.155E+03  2.760E+00
         Rank      5          6          7          3          2          4          1
f8       Mean      3.344E+08  7.104E+09  5.407E+07  3.949E+07  2.948E+06  2.395E+07  8.075E+02
         Std.      1.810E+09  5.651E+09  8.312E+07  2.442E+08  2.469E+06  5.432E+07  7.075E+00
         Rank      6          7          5          4          2          3          1
f9       Mean      1.189E+03  1.190E+03  1.209E+03  1.017E+03  1.048E+03  1.177E+03  9.034E+02
         Std.      5.210E+00  1.736E+01  2.050E+01  8.397E+02  8.643E+02  4.102E+01  3.154E−01
         Rank      5          6          7          2          3          4          1
f10      Mean      9.038E+08  1.326E+10  1.320E+09  9.909E+05  4.426E+04  2.416E+07  1.489E+05
         Std.      6.596E+08  1.724E+10  2.073E+09  3.659E+06  4.477E+04  8.862E+07  4.007E+05
         Rank      5          7          6          3          1          4          2
f11      Mean      1.737E+03  1.599E+03  1.632E+03  1.159E+03  1.172E+03  1.114E+03  1.106E+03
         Std.      1.661E+01  3.313E+01  3.371E+01  9.557E+02  9.669E+02  1.131E+01  1.708E+00
         Rank      7          5          6          3          4          2          1
f12      Mean      1.445E+03  1.440E+03  1.437E+03  1.264E+03  1.255E+03  1.224E+03  1.293E+03
         Std.      5.683E−01  3.247E+00  7.552E+00  1.044E+03  1.035E+03  1.302E+00  6.664E+01
         Rank      7          6          5          3          2          1          4
f13      Mean      1.464E+10  2.882E+10  5.365E+08  1.446E+03  1.452E+03  2.815E+09  1.620E+03
         Std.      9.749E+09  2.073E+10  2.294E+09  1.193E+03  1.197E+03  4.158E+09  3.812E+00
         Rank      6          7          4          1          2          5          3
f14      Mean      2.736E+03  4.267E+03  5.005E+03  2.162E+03  3.356E+03  1.727E+03  1.602E+03
         Std.      2.220E+02  2.569E+03  2.703E+03  2.091E+03  2.869E+03  4.411E+02  4.683E+00
         Rank      4          6          7          3          5          2          1
f15      Mean      1.700E+03  1.700E+03  1.734E+03  2.012E+03  1.530E+03  1.700E+03  1.640E+03
         Std.      1.552E−04  2.622E−05  1.851E+02  1.659E+03  1.262E+03  2.177E−05  1.856E+02
         Rank      3          3          4          5          1          3          2

[3] H. Djelloul, S. Sabba, S. Chikhi, Binary bat algorithm for graph coloring problem, in: Complex Systems (WCCS), 2014 Second World Conference on, IEEE, 2014, pp. 481–486.
[4] Y. Zhou, X. Chen, G. Zhou, An improved monkey algorithm for a 0-1 knapsack problem, Appl. Soft Comput. 38 (2016) 817–830.
[5] Y. Feng, G.-G. Wang, X.-Z. Gao, A novel hybrid cuckoo search algorithm with global harmony search for 0-1 knapsack problems, Int. J. Comput. Intell. Syst. 9 (2016) 1174–1190.
[6] W.-l. Xiang, M.-q. An, Y.-z. Li, R.-c. He, J.-f. Zhang, A novel discrete global-best harmony search algorithm for solving 0-1 knapsack problems, Discrete Dyn. Nat. Soc. 2014 (2014) 1–12.
[7] X. Zhang, S. Huang, Y. Hu, Y. Zhang, S. Mahadevan, Y. Deng, Solving 0-1 knapsack problems based on amoeboid organism algorithm, Appl. Math. Comput. 219 (2013) 9959–9970.
[8] M. Hristakeva, D. Shrestha, Solving the 0-1 knapsack problem with genetic algorithms, in: Midwest Instruction and Computing Symposium, 2004.
[9] G.R. Raidl, An improved genetic algorithm for the multiconstrained 0-1 knapsack problem, in: Evolutionary Computation Proceedings, 1998 IEEE World Congress on Computational Intelligence, the 1998 IEEE International Conference on, IEEE, 1998, pp. 207–211.
[10] M.S. Kiran, The continuous artificial bee colony algorithm for binary optimization, Appl. Soft Comput. 33 (2015) 15–23.
[11] M.S. Kiran, M. Gunduz, XOR-based artificial bee colony algorithm for binary optimization, Turkish J. Elect. Eng. Comput. Sci. 21 (2013) 2307–2328.

[12] M.H. Kashan, A.H. Kashan, N. Nahavandi, A novel differential evolution algorithm for binary optimization, Comput. Optim. Appl. 55 (2013) 481–513.
[13] M.H. Kashan, N. Nahavandi, A.H. Kashan, DisABC: a new artificial bee colony algorithm for binary optimization, Appl. Soft Comput. (2011).
[14] Y. Chen, W. Xie, X. Zou, A binary differential evolution algorithm learning from explored solutions, Neurocomputing 149 (2015) 1038–1047.
[15] Z.W. Geem, Harmony search in water pump switching problem, in: International Conference on Natural Computation, Springer, 2005, pp. 751–760.
[16] T. Prakash, V. Singh, S. Singh, S. Mohanty, Binary jaya algorithm based optimal placement of phasor measurement units for power system observability, 2017.
[17] M.J. Varnamkhasti, Overview of the algorithms for solving the multidimensional knapsack problems, Adv. Stud. Biol. 4 (2012) 37–47.
[18] J. Barcelo, Å. Hallefjord, E. Fernandez, K. Jörnsten, Lagrangean relaxation and constraint generation procedures for capacitated plant location problems with single sourcing, OR Spectrum 12 (1990) 79–88.
[19] V.C. Li, Y.-C. Liang, H.-F. Chang, Solving the multidimensional knapsack problems with generalized upper bound constraints by the adaptive memory projection method, Comput. Oper. Res. 39 (2012) 2111–2121.
[20] K. Holmberg, Exact solution methods for uncapacitated location problems with convex transportation costs, European J. Oper. Res. 114 (1999) 127–140.

[21] S. Balev, N. Yanev, A. Fréville, R. Andonov, A dynamic programming based reduction procedure for the multidimensional 0-1 knapsack problem, European J. Oper. Res. 186 (2008) 63–76.
[22] D. Bertsimas, R. Demir, An approximate dynamic programming approach to multidimensional knapsack problems, Manage. Sci. 48 (2002) 550–565.
[23] X. Zhang, C. Wu, J. Li, X. Wang, Z. Yang, J.-M. Lee, K.-H. Jung, Binary artificial algae algorithm for multidimensional knapsack problems, Appl. Soft Comput. 43 (2016) 583–595.
[24] A. Banitalebi, M.I.A. Aziz, Z.A. Aziz, A self-adaptive binary differential evolution algorithm for large scale binary optimization problems, Inform. Sci. 367 (2016) 487–511.
[25] J. Luo, M.-R. Chen, Improved shuffled frog leaping algorithm and its multiphase model for multi-depot vehicle routing problem, Expert Syst. Appl. 41 (2014) 2535–2545.
[26] S. Mahmoudi, S. Lotfi, Modified cuckoo optimization algorithm (MCOA) to solve graph coloring problem, Appl. Soft Comput. 33 (2015) 48–64.
[27] P.C. Jones, T.J. Lowe, G. Muller, N. Xu, Y. Ye, J.L. Zydiak, Specially structured uncapacitated facility location problems, Oper. Res. 43 (1995) 661–669.
[28] S. Jaballah, K. Rouis, F.B. Abdallah, J.B.H. Tahar, An improved shuffled frog leaping algorithm with a fast search strategy for optimization problems, in: Intelligent Computer Communication and Processing (ICCP), 2014 IEEE International Conference on, IEEE, 2014, pp. 23–27.
[29] D. Karaboğa, Artificial Intelligence Optimization Algorithms, Nobel, 2011, p. 244.
[30] J. Holland, Adaption in Natural and Artificial Systems, Ann Arbor MI: The University of Michigan Press, 1975.
[31] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of 1995 IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, 1942–1948.
[32] M.S. Kiran, TSA: Tree-seed algorithm for continuous optimization, Expert Syst. Appl. 42 (2015) 6686–6698.
[33] E. Hancer, B. Xue, D. Karaboga, M. Zhang, A binary ABC algorithm based on advanced similarity scheme for feature selection, Appl. Soft Comput. 36 (2015) 334–348.
[34] R. Rao, Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems, Int. J. Indust. Eng. Comput. 7 (2016) 19–34.
[35] R.V. Rao, Jaya: An Advanced Optimization Algorithm and Its Engineering Applications, Springer International Publishing, Switzerland, 2019.
[36] A.C. Cinar, M.S. Kiran, Similarity and logic gate-based tree-seed algorithms for binary optimization, Comput. Ind. Eng. (2017).
[37] S. Mirjalili, A. Lewis, S-shaped versus V-shaped transfer functions for binary particle swarm optimization, Swarm Evol. Comput. 9 (2013) 1–14.
[38] J.C. Bansal, K. Deep, A modified binary particle swarm optimization for knapsack problems, Appl. Math. Comput. 218 (2012) 11042–11061.
[39] J.E. Beasley, OR-Library - Distributing test problems by electronic mail, J. Oper. Res. Soc. 41 (1990) 1069–1072.
[40] D. Ghosh, Neighborhood search heuristics for the uncapacitated facility location problem, European J. Oper. Res. 150 (2003) 150–162.
[41] G. Mitsuo, Y. Tsujimura, S. Ishizaki, Optimal design of a star-LAN using neural networks, Comput. Indust. Eng. 31 (1996) 855–859.
[42] Q. Chen, B. Liu, Q. Zhang, J. Liang, P. Suganthan, B. Qu, Problem definitions and evaluation criteria for CEC 2015 special session on bound constrained single-objective computationally expensive numerical optimization, Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Technical Report, Nanyang Technological University, 2014.
[43] K.S. Al-Sultan, M.A. Al-Fawzan, A tabu search approach to the uncapacitated facility location problem, Ann. Oper. Res. 86 (1999) 91–103.
[44] F. Zhuang, F. Galiana, Unit commitment by simulated annealing, IEEE Trans. Power Syst. 5 (1990) 311–318.
[45] M.E. Aydin, T.C. Fogarty, A distributed evolutionary simulated annealing algorithm for combinatorial optimisation problems, J. Heuristics 10 (2004) 269–292.
[46] E. Falkenauer, S. Bouffouix, A genetic algorithm for job shop, in: Robotics and Automation, 1991 IEEE International Conference on, IEEE, 1991, pp. 824–829.
[47] S. Khuri, T. Bäck, J. Heitkötter, The zero/one multiple knapsack problem and genetic algorithms, in: Proceedings of the 1994 ACM Symposium on Applied Computing, ACM, 1994, pp. 188–193.
[48] J.H. Jaramillo, J. Bhadury, R. Batta, On the use of genetic algorithms to solve location problems, Comput. Oper. Res. 29 (2002) 761–779.


[49] H. Topcuoglu, F. Corut, M. Ermis, G. Yilmaz, Solving the uncapacitated hub location problem using genetic algorithms, Comput. Oper. Res. 32 (2005) 967–984.
[50] T.Y. Lim, M.A. Al-Betar, A.T. Khader, Taming the 0/1 knapsack problem with monogamous pairs genetic algorithm, Expert Syst. Appl. 54 (2016) 241–250.
[51] J. Kennedy, R.C. Eberhart, A discrete binary version of the particle swarm algorithm, in: Systems, Man, and Cybernetics, 1997. Computational Cybernetics and Simulation, 1997 IEEE International Conference on, IEEE, 1997, pp. 4104–4108.
[52] M. Sevkli, A. Guner, A continuous particle swarm optimization algorithm for uncapacitated facility location problem, Ant Colony Opt. Swarm Intell. (2006) 316–323.
[53] A.R. Guner, M. Sevkli, A discrete particle swarm optimization algorithm for uncapacitated facility location problem, J. Artif. Evolu. Appl. 2008 (2008).
[54] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Opt. 39 (2007) 459–471.
[55] R. Storn, K. Price, Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, J. Global Opt. 11 (1997) 341–359.
[56] X. Kong, L. Gao, H. Ouyang, S. Li, A simplified binary harmony search algorithm for large scale 0-1 knapsack problems, Expert Syst. Appl. 42 (2015) 5337–5355.
[57] Z. Beheshti, S.M. Shamsuddin, S. Hasan, Memetic binary particle swarm optimization for discrete optimization problems, Inform. Sci. 299 (2015) 58–84.
[58] H. Nezamabadi-pour, A quantum-inspired gravitational search algorithm for binary encoded optimization problems, Eng. Appl. Artif. Intell. 40 (2015) 62–75.
[59] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, GSA: a gravitational search algorithm, Inform. Sci. 179 (2009) 2232–2248.
[60] C. Ozturk, E. Hancer, D. Karaboga, A novel binary artificial bee colony algorithm based on genetic operators, Inform. Sci. 297 (2015) 154–170.
[61] D. Zou, L. Gao, S. Li, J. Wu, Solving 0-1 knapsack problem by a novel global harmony search algorithm, Appl. Soft Comput. 11 (2011) 1556–1564.
[62] M. Azad, A. Kalam, A.M.A. Rocha, E.M.d.G. Fernandes, A simplified binary artificial fish swarm algorithm for uncapacitated facility location problems, in: World Congress on Engineering 2013, WCE 2013, Newswood Limited Publisher, 2013, pp. 31–36.
[63] R.V. Rao, D.P. Rai, J. Balic, Surface grinding process optimization using jaya algorithm, in: Computational Intelligence in Data Mining—Volume 2, Springer, 2016, pp. 487–495.
[64] R.V. Rao, G. Waghmare, A new optimization algorithm for solving complex constrained design optimization problems, Eng. Optim. 49 (2017) 60–83.
[65] L. Wang, C. Huang, A novel elite opposition-based jaya algorithm for parameter estimation of photovoltaic cell models, Opt. Int. J. Light Elect. Opt. 155 (2018) 351–356.
[66] R.V. Rao, D.P. Rai, J. Balic, A multi-objective algorithm for optimization of modern machining processes, Eng. Appl. Artif. Intell. 61 (2017) 103–125.
[67] R. Rao, K. More, Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm, Energy Convers. Manage. 140 (2017) 24–35.
[68] C. Huang, L. Wang, R.S.-C. Yeung, Z. Zhang, H.S.-H. Chung, A. Bensoussan, A prediction model-guided jaya algorithm for the PV system maximum power point tracking, IEEE Trans. Sust. Energy 9 (2018) 45–55.
[69] R.V. Rao, A. Saroj, Constrained economic optimization of shell-and-tube heat exchangers using elitist-Jaya algorithm, Energy 128 (2017) 785–800.
[70] L. Wang, Z. Zhang, C. Huang, K.L. Tsui, A GPU-accelerated parallel jaya algorithm for efficiently estimating Li-ion battery model parameters, Appl. Soft Comput. 65 (2018) 12–20.
[71] L. Wang, C. Huang, L. Huang, Parameter estimation of the soil water retention curve model with jaya algorithm, Comput. Electron. Agric. 151 (2018) 349–353.
[72] S. Korkmaz, A. Babalik, M.S. Kiran, An artificial algae algorithm for solving binary optimization problems, Int. J. Mach. Learn. Cybern. (2017) 1–15.
[73] E. Monabbati, H.T. Kakhki, On a class of subadditive duals for the uncapacitated facility location problem, Appl. Math. Comput. 251 (2015) 118–131.
[74] J. Krarup, P.M. Pruzan, The simple plant location problem-survey and synthesis, Eur. J. Oper. Res. 12 (1983) 36–81.
[70] L. Wang, Z. Zhang, C. Huang, K.L. Tsui, A GPU-accelerated parallel jaya algorithm for efficiently estimating Li-ion battery model parameters, Appl. Soft Comput. 65 (2018) 12–20. [71] L. Wang, C. Huang, L. Huang, Parameter estimation of the soil water retention curve model with jaya algorithm, Comput. Electron. Agric. 151 (2018) 349–353. [72] S. Korkmaz, A. Babalik, M.S. Kiran, An artificial algae algorithm for solving binary optimization problems, Int. J. Mach. Learn. Cybern. (2017) 1–15. [73] E. Monabbati, H.T. Kakhki, On a class of subadditive duals for the uncapacitated facility location problem, Appl. Math. Comput. 251 (2015) 118–131. [74] J. Krarup, P.M. Pruzan, The simple plant location problem-survey and synthesis, Eur. J. Oper. Res. 12 (1983) 36–81.