Knowledge-Based Systems 128 (2017) 1–19
Opposition-based learning for competitive hub location: A bi-objective biogeography-based optimization algorithm

Amir Hossein Niknamfar a,∗, Seyed Taghi Akhavan Niaki b, Seyed Armin Akhavan Niaki c

a Young Researchers and Elite Club, Qazvin Branch, Islamic Azad University, Qazvin, Iran
b Department of Industrial Engineering, Sharif University of Technology, P.O. Box 11155-9414, Azadi Ave., Tehran 1458889694, Iran
c Department of Statistics, West Virginia University, Morgantown, West Virginia, USA

∗ Corresponding author. E-mail addresses: [email protected] (A.H. Niknamfar), [email protected] (S.T.A. Niaki), [email protected] (S.A.A. Niaki).
Article history: Received 18 October 2016; Revised 22 April 2017; Accepted 29 April 2017; Available online 1 May 2017.

Keywords: Competitive hub location; Evolutionary computations; Binary opposition-based learning; Multi-objective biogeography-based optimization; Non-dominated sorting genetic algorithm.
Abstract

This paper introduces a new hub-and-center transportation network problem for a new company competing against an operating company. The new company intends to locate p hubs and assign the center nodes to the located hubs in order to form origin–destination pairs. It aims not only to maximize the total captured flow in the market but also to minimize the total transportation cost. Three competition rules that both companies must abide by are established. According to these rules, the new company captures the full traffic in an origin–destination pair if its transportation cost for the corresponding route is significantly lower than that of the competitor; if its transportation cost is not significantly lower, only a certain percentage of the traffic can be captured. A bi-objective optimization model is proposed for the hub location problem at hand under a competitive environment. As the problem is shown to be NP-hard, a novel meta-heuristic algorithm called multi-objective biogeography-based optimization is developed. As there is no benchmark in the literature, the popular non-dominated sorting genetic algorithm is utilized to validate the obtained results. Moreover, to enhance the performance of the proposed Pareto-based algorithms, this paper develops a binary opposition-based learning scheme as a diversity mechanism for both algorithms. The algorithms are tuned to solve the problem, based on which their performances are compared, ranked, and analyzed statistically. Finally, the applicability of the proposed approach and the solution methodologies are demonstrated in three steps. © 2017 Elsevier B.V. All rights reserved.
1. Introduction

Hub-and-spoke networks, otherwise known as hub-and-center networks, have been widely used in various industrial applications. These networks provide services through a specified set of hub nodes in telecommunications, postal delivery systems, emergency services, computer networks, transportation systems, etc. The hub-and-center network is a fully connected network in which the material/information flow between any two nodes is processed at a small number of hub nodes and moved through inter-hub links. In other words, instead of establishing direct links between two nodes, the hubs serve as transshipment or switching points for the flows between origin and destination center nodes (non-hub nodes). Flows departing from an origin node are collected in a hub or transferred between hubs if necessary, and then distributed to a destination node by combining flows [1]. As the hubs
consolidate and collect the flows, a significantly lower operating cost can be achieved due to the economies of scale concept. Additionally, the origins and destinations are connected with fewer links, and the consequential smaller transportation rates therefore reduce the total transportation cost [1]. In the real world, locating hubs and allocating center nodes (i.e., spokes) to hubs are among the most important issues in hub-and-center network problems. That is why the hub location problem (HLP) has received extensive attention from researchers and practitioners. For instance, Lin and Chen [2] studied the integration of Taiwanese and Chinese air networks for direct air cargo services in a pure hub-and-spoke network. Moreover, an instance of the hub airport location between 37 cities taken from the Iranian aviation dataset (IAD) is displayed in Fig. 1, in which Tehran and Kerman are active airports acting as the hubs [3]. The work of O'Kelly and Bryan [4] was pioneering in HLP. They applied the economies of scale on trunk lines in an uncapacitated network. Ebery et al. [5] studied several formulations and solution approaches, based on the shortest path, for a capacitated HLP. This study was extended by Ebery [6] for two and three hubs.
Fig. 1. An instance of a hub-and-center network in Iranian aviation dataset.
Furthermore, Pérez et al. [7] proposed a GRASP path re-linking method to solve an HLP. A bi-criteria approach to an HLP was described by Costa et al. [8]. They presented two models; the first minimizes the time for processing flows, whereas the second minimizes the maximum service time at the hubs. Camargo and Miranda [9] presented Benders decomposition algorithms to solve an HLP. On the other hand, two genetic algorithms were designed by Kratica et al. [1] to solve an uncapacitated HLP. Contreras et al. [10] developed a Lagrangean relaxation method to obtain tight upper and lower bounds on the total cost in an HLP. An HLP with stochastic time and service-level constraints was proposed by Sim et al. [11] using mutually independent normal distributions. Ge et al. [12] developed a tree pruning algorithm for a capacitated HLP on the air transportation system using data on passenger flows between the top 20 Chinese cities. Stanimirović [13] proposed a genetic algorithm to solve an HLP in order to minimize the total transportation cost. Yang et al. [14] studied an HLP with discrete random travel times. In addition, Yaman and Elloumi [15] proposed a star HLP in two-level star networks with regard to service quality considerations in order to minimize the total routing costs. Rabbani et al. [16] described an HLP with a combined cost including fixed, health, safety, environmental, energy, and personnel costs. Two mixed-integer programming problems for an HLP were introduced by Yang et al. [17]. They solved the problem using an improved hybrid particle-swarm optimization algorithm combining genetic operators and a local search. However, they did not account for the capacity restrictions of the hubs. Furthermore, Bashiri et al. [18] proposed a hybrid approach for an HLP using both qualitative and quantitative parameters to minimize the longest travel time. They utilized the fuzzy VIKOR combined with a genetic algorithm to
provide a hybrid solution. Interested readers are referred to Farahani et al. [19] for more details on the HLP.

1.1. Literature review on competitive HLP

The vast majority of existing research on HLP addresses the problem of optimally locating hubs for a single company that serves flows between origin and destination center nodes in a region. In many real-world instances, however, companies compete with each other in a specific region in order to capture more market share. In order to minimize transportation costs, companies have to decide on a set of locations at which to establish facilities for production, service provision, warehousing, and so on. In a passenger transportation network, several companies compete in a geographic region to capture the maximum customer traffic, freight shippers, or individual travelers. In this regard, the market share may be measured in terms of the percentage of the passengers, freight, or profit captured. This problem has not yet been investigated in depth. In other words, although it may seem logical to assume a competitive environment in the HLP [20,21], the existing studies on HLP have not taken this factor into account in their networks. The earliest work on the HLP in competitive environments goes back to Marianov et al. [22], who introduced a sequential competitive HLP. Their model located p hubs in order to maximize the flow capture for a company in a network with operating competitors in the market. Wagner [23] extended the hub location model presented by Marianov et al. [22] with optimal solutions for up to 50 nodes and 5 hubs. He assumed that the allocation of customers between the two companies is based on the transportation costs. In other words, if the transportation cost of a company is lower than that of the competitor for any path, then the company fully captures its flow. In addition, Wagner [23] showed that the implementation of the heuristic approach proposed by Marianov et al. [22] was not correct. Lüer-Villagra and Marianov [24] presented a
nonlinear programming formulation for a competitive HLP including the optimal pricing decision and discrete choices of the customers. They solved the problem using a genetic algorithm. More recently, Sasaki et al. [25] introduced a general discrete Stackelberg HLP using a multiple allocation hub arc location model under a competitive environment. While a variety of works on HLP have been performed in the last two decades, to the best of the authors' knowledge, the competitive HLP has not yet been investigated in depth, despite its importance. Furthermore, the above-mentioned works on HLP focus on either minimizing the total transportation cost or maximizing the captured market share, represented by a single-objective optimization problem; they ignore optimizing both objectives simultaneously. Although there is some research, such as Tavakkoli-Moghaddam et al. [26], on multi-objective optimization of HLP in the literature, simultaneous consideration of these objectives in a competitive environment is often overlooked. Tavakkoli-Moghaddam et al. [26] developed a multi-objective mathematical model for a capacitated single allocation hub location problem and solved it using a multi-objective imperialist competitive algorithm. Additionally, as the HLP belongs to the class of NP-hard problems, exact methods are not appropriate for solving large problems in a reasonable time. Thus, an effective meta-heuristic approach is needed to solve large competitive HLPs. In short, investigating the HLP in a competitive environment, developing a bi-objective optimization model, and proposing an effective meta-heuristic solution approach present research gaps that require more attention. The review of the works on HLP models presented by Farahani et al. [19] confirms these gaps.

1.2. Motivation and contribution

To fill the above-stated research gaps, this study presents a capacitated HLP in a competitive environment in which an existing company, called the competitor, operates in a hub-and-center transportation network. In this problem, the new company intends to locate p hubs and to assign the center nodes to the located hubs such that not only is the total captured traffic (flow) maximized, but the total transportation cost is also minimized simultaneously. It is assumed that three competition rules are preserved for both companies. Under these rules, the new company can only capture a certain percentage of the traffic in each origin–destination pair if its transportation cost for each route is lower than that of its competitor. This paper attempts to connect the hub location problem to a real-world competitive environment such as a passenger transportation network. To this aim, a bi-objective integer nonlinear programming model is developed that includes minimization of the total transportation cost and maximization of the total captured flow for the new company. Due to the complexity of the problem in large scales, a novel meta-heuristic algorithm called multi-objective biogeography-based optimization (MOBBO) is developed. As there is no benchmark available in the literature, the popular non-dominated sorting genetic algorithm (NSGA-II) with a multi-parent crossover is designed in order to validate the obtained results.
Moreover, to enhance the performance of the proposed algorithms and to enrich the Pareto fronts, this paper extends the opposition-based learning (OBL) scheme, originally proposed for continuous domains, so that it can be adapted as a diversity mechanism for these Pareto-based algorithms in a binary solution space. Instead of a purely random generation, the aim of the OBL implementation is to consider randomness and opposition simultaneously when generating solutions. In other words, after generating the population of solutions, a second chance is given to the population by checking the opposite solutions, referred to as the opposite population. Therefore, two novel
multi-objective algorithms, namely opposition MOBBO (OMOBBO) and opposition NSGA-II (ONSGA-II), are presented. Furthermore, to tune the parameters of the algorithms, the Taguchi method with the multi-objective coefficient of variation is utilized. The applicability of the proposed model and the solution methodologies are demonstrated in three steps. In the first step, the proposed model and its solution algorithm are validated; moreover, the competitive environment is investigated using the max-min approach in order to maximize the minimum (the worst) captured flow and to minimize the maximum (the worst) transportation cost. In the second step, the proposed OBL-based algorithms are compared to their original versions, where they are statistically compared using six multi-objective performance metrics to highlight the effects of the proposed binary OBL. In addition, the novel algorithms and their original versions are ranked by the TOPSIS method. Finally, in the third step, both OMOBBO and ONSGA-II are compared together, where their performances are statistically compared in terms of five performance metrics. In short, the contributions of this paper are as follows. First, a novel bi-objective optimization model for the HLP in a competitive environment is developed. Second, a novel multi-objective algorithm, namely OMOBBO, is presented. Third, a binary opposition-based learning mechanism is designed for Pareto-based optimization algorithms. Finally, the problem is solved using the two novel multi-objective algorithms, opposition MOBBO and opposition NSGA-II. To the best of the authors' knowledge, no previous work has addressed the competitive hub location problem with a bi-objective optimization model, nor has a Pareto-based optimization algorithm been explicitly enhanced using opposition-based learning. The application of this study is to generate additional opportunities and cost effectiveness for companies that operate hub-and-center transportation networks under a competitive environment, especially passenger transportation networks. The remainder of the paper is organized as follows. Section 2 contains the definition of the competitive hub location problem. Section 3 discusses opposition-based learning. Section 4 introduces the solution methodologies. In order to demonstrate the application of the proposed model and the solution methodologies, several problems are solved in Section 5. Finally, the conclusion is provided in Section 6.

2. Problem description

Hub location problems deal with locating hubs from a set of candidate hubs and assigning center nodes to the located hubs. If the number of hub nodes is fixed as p, then a p-hub location problem is at hand. Consider a real passenger transportation hub-and-spoke network including multiple center nodes and multiple candidate hubs. In the investigated network, H is the set of candidate hubs, where k, m ∈ H, each hub providing a given handling capacity, and flows travel from an origin center node i to a destination center node j, with i, j ∈ N. Collectively, this network contains a node set O = N ∪ H and a set of directed links that connect all of the origin center nodes to all of the hub nodes, all of the hub nodes to all of the other hub nodes, and all of the hub nodes to all of the destination center nodes, with no intermediate nodes. An origin center node i and a destination center node j form an origin–destination (OD) pair by considering two and only two hubs between them.
It is assumed that there is a discount factor on the hub-to-hub trunk transportation links, which reduces the transportation cost on arcs between hubs to reflect the economies of scale concept. In addition, this paper considers a capacity limitation on the volume of traffic entering a hub, with the following assumptions:
1. The number of hubs to be located is predetermined (p).
2. Each center node is assigned to only a single hub.
3. All of the hubs are connected to one another.
4. Direct transportation between center nodes is not allowed.
Fig. 2. An illustration of the problem setting.
For instance, a network of two hubs and four center nodes is shown in Fig. 2. The notation used to formulate the problem at hand is presented as follows:

Indices
i, j: Index of center nodes, i, j = 1, ..., N
k, m: Index of hubs, k, m = 1, ..., H

Input parameters
a_ij: Flow (traffic) between center node i and center node j (unit)
c_ik: Transportation cost for a commodity to travel from center node i to hub k ($)
c_km: Transportation cost for a commodity to travel from hub k to hub m ($)
c_jm: Transportation cost for a commodity to travel from hub m to center node j ($)
α: Discount factor for trips between two hub nodes
O_i: Total commodity to be transferred from center node i (unit)
M_k: Capacity of hub k (unit)
f_k: Fixed installation cost of hub k ($)

Decision variables
X_ik: A binary variable, set equal to 1 if center node i is allocated to hub k, and 0 otherwise
Y_k: A binary variable, set equal to 1 if a hub is established at node k, and 0 otherwise
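To make the notation concrete, the following Python sketch (all numerical values and the helper name route_cost are hypothetical, introduced here only for illustration) stores the parameters above as arrays and evaluates the discounted cost of sending one unit from an origin center node through two hubs to a destination center node.

import numpy as np

# Hypothetical instance: N = 4 center nodes, H = 2 candidate hubs, p = 2 hubs to locate
N, H, p = 4, 2, 2
rng = np.random.default_rng(seed=1)
a = rng.integers(5, 16, size=(N, N))        # flow a_ij between center nodes
c_in = rng.integers(30, 46, size=(N, H))    # c_ik: center node i -> hub k
c_hub = rng.integers(20, 31, size=(H, H))   # c_km: hub k -> hub m
c_out = rng.integers(10, 21, size=(H, N))   # c_jm: hub m -> center node j
alpha = 0.75                                # inter-hub discount factor
O = rng.integers(2, 5, size=N)              # outgoing commodity O_i
M = rng.integers(25, 51, size=H)            # hub capacity M_k
f = np.full(H, 100.0)                       # fixed installation cost f_k

def route_cost(i, k, m, j):
    """Cost of the route i -> k -> m -> j: c_ik + alpha * c_km + c_jm."""
    return c_in[i, k] + alpha * c_hub[k, m] + c_out[m, j]

print(route_cost(0, 0, 1, 3))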
Consider the sample network shown in Fig. 2. The flow from node i to node j must be routed through the two hubs. Therefore, the total transportation cost on the route i → k → m → j is c_ik + α·c_km + c_jm, where the discount factor 0 ≤ α ≤ 1 represents the economies of scale on the inter-hub linkage.

2.1. The competitive environment

From a methodological point of view, the main distinctive feature of competitive location models, in contrast to standard location models, is the set of arguments available from game theory [21]. Consider an existing company that utilizes a transportation network with a hub-and-spoke topology to serve its customers. This company has already established a hub-and-spoke network with q hubs in which its per-unit cost from origin node i to destination node j (i → j) is C_ij^competitor. Meanwhile, there is a new company alongside the existing company that intends to enter this network and the same market, using its own hub-and-spoke
topology. In this condition, the new company wants to locate p hubs from the set H and allocate the center nodes from the set N to them, not only to maximize the total captured traffic (flow) but also to minimize the total transportation cost in its own hub-and-spoke topology. Hence, the existing company is a competitor for the new company because they both share the same market. It is noticeable that the proposed problem is a capacitated single allocation p-hub median problem, which has a wide range of applications in clustering as well as in designing transportation and telecommunication networks [27], but with two objectives instead of a single one. Three competition rules are assumed between the companies. Under these rules, only a certain percentage of the traffic in each OD pair can be captured if the transportation cost of the new company is not significantly less than that of its competitor. This also implies that if the new company's transportation cost is significantly less than that of the competitor, the new company can capture the full percentage of the traffic in each OD pair. In this network, while the competitor has the same cost structure, it is assumed that the new company announces an expected marginal profit (θ%) for all routes by considering economic issues such as the inflation rate, capital depreciation, and the rate of return (ROR). In order to better grasp the competitive hub location problem, a capture set, inspired by [23], is defined as:
Rule 1:
Ω_ij^100 = { (k, m) ∈ K² | k < m and 0 < min{c_ik + α·c_km + c_mj, c_im + α·c_mk + c_kj} ≤ 0.5·C_ij^competitor·(1 − θ), or min{c_ik + α·c_km + c_mj, c_im + α·c_mk + c_kj} = 0 < C_ij^competitor }

In the first rule, Ω_ij^100 is the capture set of hubs for which 100% of the traffic on either route (i, k, m, j) or route (i, m, k, j) can be captured, provided the transportation cost of the new company for any (k, m) ∈ K² is lower than that of its competitor for carrying the same flow. In other words, the new company can capture 100% of the traffic on either route (i, k, m, j) or route (i, m, k, j) for any (k, m) ∈ K² if its transportation cost lies within [0, 0.5·C_ij^competitor·(1 − θ)], which is a preset agreement between the companies. In this regard, the binary variable U_ij^100 is defined to take a value of 1 if Rule 1 is applicable for the route (i, j). Note that this agreement range was originally introduced as [0, 0.5·C_ij^competitor] in Wagner [23]. However, the range is adjusted here because the expected marginal profit (θ) was not accounted for in the previous research; accounting for it is a logical assumption for any company operating in a competitive environment. In addition, that study considered only the cost aspect for capturing the flow in the competitive environment, whereas this paper extends that work. Note that the new company must preserve the lower and upper bounds of the original range. Therefore, the agreement range presented in Rule 1 not only maintains the original range but also shrinks it as θ increases. In Rule 1, it is evident that θ lies between 0 and 100%. If θ is 100%, then the strategy of the new company is to locate hubs whose connected routes have no transportation cost. Consequently, this strategy would result in a reduction of the captured flows, which would then be lost to the competitor. Hence, without loss of generality, it is essential that the new company announces this parameter appropriately. The second competition rule is:
Rule 2:
Ω_ij^75 = { (k, m) ∈ (K² \ Ω_ij^100) | k < m and 0.5·C_ij^competitor·(1 + θ) < min{c_ik + α·c_km + c_mj, c_im + α·c_mk + c_kj} ≤ 0.75·C_ij^competitor·(1 − θ) }

Under Rule 2, the new company can capture only 75% of the traffic for any (k, m) ∈ K², either on route (i, k, m, j) or on route (i, m, k, j), and such hub pairs are therefore placed in the capture set Ω_ij^75. As a result, the binary variable U_ij^75 takes a value of 1 if Rule 2 is applicable for the route (i, j). Note that in Rule 2 we have:
0.5·C_ij^competitor·(1 + θ) < 0.75·C_ij^competitor·(1 − θ) ⇒ 1.25·C_ij^competitor·θ < 0.25·C_ij^competitor ⇒ θ < 0.20

It is evident that, unlike the previous rule, the maximum expected marginal profit of the new company is about 20%; hence, this rule limits θ for the new company. Finally, the last competition rule is presented as follows:

Rule 3:
Ω_ij^50 = { (k, m) ∈ (K² \ (Ω_ij^100 ∪ Ω_ij^75)) | k < m and 0.75·C_ij^competitor·(1 + θ) < min{c_ik + α·c_km + c_mj, c_im + α·c_mk + c_kj} ≤ 1.1·C_ij^competitor·(1 − θ) }

In Rule 3, the new company can capture only 50% of the traffic for any (k, m) ∈ K², either on route (i, k, m, j) or on route (i, m, k, j). Consequently, the binary variable U_ij^50 takes a value of 1 if Rule 3 is applicable for the route (i, j). Note that in Rule 3, as in the previous rule, θ is also limited:

0.75·C_ij^competitor·(1 + θ) < 1.1·C_ij^competitor·(1 − θ) ⇒ 1.85·C_ij^competitor·θ < 0.35·C_ij^competitor ⇒ θ < 0.18

In addition, as the flow on any route (i, j) can be captured by one rule only, we have:

U_ij^50 + U_ij^75 + U_ij^100 ≤ 1,   ∀ i, j ∈ N

Therefore, the new company can capture route (i, j) only if the center nodes i and j are connected to hubs (k, m) belonging to the capture sets defined by the proposed competition rules. In this regard, both X_ik and X_jm take a value of 1 for k, m ∈ Ω_ij^D (D ∈ {50, 75, 100}); otherwise, the new company will not capture that route. As mentioned previously, under the three stated competition rules, only a certain percentage of the traffic in each OD pair can be captured by the new company. After capturing the routes, when both X_ik and X_jm take a value of 1 for k, m ∈ Ω_ij^D (D ∈ {50, 75, 100}), the decision variable U_ij^D (D ∈ {50, 75, 100}) can take a value of 1, indicating which rule applies to capture D% of the traffic of the route (i, j), according to:

U_ij^D ≤ Σ_{k,m ∈ Ω_ij^D} X_ik·X_jm,   ∀ i, j ∈ N and D ∈ {50, 75, 100}

Otherwise, the new company loses the traffic of that route to the competitor. Note that it is possible to change the rules, allowing different percentages of capture, by considering the relation between the transportation cost of the new company (the entering company) and the competitor(s). Hence, several alternative competition rules of capture are possible.

2.2. Mathematical modeling

In the hub-and-spoke topology of the new company, as compared to the competitor, the aim is to locate p hubs from the set H and to allocate the center nodes from the set N to them. This simultaneously ensures maximizing the total captured traffic (flow) and minimizing the total transportation cost. To do so, the following integer nonlinear programming model is proposed:

Max Z1 = Total captured flow = Σ_{i,j} [ (1 × a_ij × U_ij^100) + (0.75 × a_ij × U_ij^75) + (0.5 × a_ij × U_ij^50) ]   (1)

Min Z2 = Total cost = Σ_{i≠j, k, m} a_ij·X_ik·X_jm·(c_ik + α·c_km + c_jm) + Σ_{k} f_k·Y_k   (2)

Subject to:

Σ_{k∈H} X_ik = 1,   ∀ i ∈ N   (3)

Σ_{i∈N} X_ik ≥ Y_k,   ∀ k ∈ H   (4)

Y_k ≥ X_ik,   ∀ i ∈ N, k ∈ H   (5)

Σ_{k∈H} Y_k = p   (6)

Σ_{i∈N} O_i·X_ik ≤ M_k,   ∀ k ∈ H   (7)

U_ij^D ≤ Σ_{k,m ∈ Ω_ij^D} X_ik·X_jm,   ∀ i, j ∈ N and D ∈ {50, 75, 100}   (8)

U_ij^50 + U_ij^75 + U_ij^100 ≤ 1,   ∀ i, j ∈ N   (9)

X_ik, Y_k, U_ij^D ∈ {0, 1},   ∀ i, j ∈ N, k ∈ H, D ∈ {50, 75, 100}   (10)
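As an illustration of how the three competition rules partition the traffic, the following sketch (hypothetical inputs; the function names and the competitor cost matrix c_comp are not part of the model and are introduced only for this example) returns the capture level D ∈ {0, 50, 75, 100} that the new company would obtain on pair (i, j) through the hub pair (k, m).

def best_route_cost(i, j, k, m, c_in, c_hub, c_out, alpha):
    """Minimum cost over the two orientations i->k->m->j and i->m->k->j."""
    return min(c_in[i][k] + alpha * c_hub[k][m] + c_out[m][j],
               c_in[i][m] + alpha * c_hub[m][k] + c_out[k][j])

def capture_level(i, j, k, m, c_comp, theta, c_in, c_hub, c_out, alpha):
    """Capture percentage of OD pair (i, j) via hubs (k, m), k < m, under Rules 1-3."""
    cost = best_route_cost(i, j, k, m, c_in, c_hub, c_out, alpha)
    ref = c_comp[i][j]  # competitor's per-unit cost C_ij^competitor
    if (0 < cost <= 0.5 * ref * (1 - theta)) or (cost == 0 and ref > 0):
        return 100      # Rule 1: full capture
    if 0.5 * ref * (1 + theta) < cost <= 0.75 * ref * (1 - theta):
        return 75       # Rule 2
    if 0.75 * ref * (1 + theta) < cost <= 1.1 * ref * (1 - theta):
        return 50       # Rule 3
    return 0            # route lost to the competitor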
The objective function (Z1) presented in Eq. (1) maximizes the total captured flow from all transportation routes with respect
to the competition rules, whereas the second objective function (Z2) presented in Eq. (2) minimizes the total transportation cost arising from the origin–hub, hub–hub, and hub–destination links, along with the fixed costs. Constraint (3) ensures that every center node is assigned to one and only one hub. Constraints (4) and (5) ensure that the flow is sent only via open hubs; they prevent direct transmission between center nodes. Constraint (6) ensures that exactly p hubs are chosen. Constraint (7) guarantees that the total flow into hub k via its connections cannot exceed its maximum capacity. Note that Constraints (3)–(7) describe the creation of a hub-and-spoke topology under a capacitated single-allocation p-hub median problem. Constraint (8) allows the traffic between nodes i and j to be captured at level D only if hubs are located at k and m, which are in the capture set Ω_ij^D; consequently, both X_ik and X_jm take a value of 1. This constraint reflects the different capture rules. Constraint (9) ensures that at most one of the variables U_ij^50, U_ij^75, and U_ij^100 takes a value of 1, reflecting the competitive environment. Finally, Constraint (10) ensures binary values for the decision variables. It is clear that the above formulation for the new company is a bi-objective optimization problem with two conflicting objectives; i.e., capturing a higher flow results in an increase in the total transportation cost. Thus, in order to solve the problem, two enhanced Pareto-based meta-heuristic algorithms that take advantage of opposition-based learning are proposed in the next section.

3. Opposition-based learning

Opposition-based learning (OBL) was originally introduced by Tizhoosh [27] and was first proposed as a machine intelligence scheme for reinforcement learning. It has recently been further employed to improve soft computing methods such as fuzzy systems [28] and artificial neural networks [29,30]. In addition, Rahnamayan et al. [31] illustrated the capabilities of OBL by combining it with differential evolution to solve continuous optimization problems. OBL has been employed in a wide range of evolutionary algorithms such as biogeography-based optimization [32], particle swarm optimization [33], ant colony optimization [34], and simulated annealing [35]. Most evolutionary algorithms start with an initial random population without any preliminary knowledge about the solution space. Additionally, the computation time is directly related to the quality of the solutions in the initial population and their distance from the optimal solution. Here, two questions arise: how can the initial population and the population generated in each iteration be enriched? Is there an advantage in the simultaneous consideration of randomness and opposition over pure randomness? To answer these questions, this paper explores the simultaneous implementation of two approaches (randomness and opposition) in generating solutions. After generating the population of solutions, a second chance is given to this population by checking the opposite solutions (the opposite population) and selecting the best (fittest) solutions among them to start the algorithm. The aim of the OBL algorithm, as a diversity mechanism, is to enhance the performance of the proposed meta-heuristic algorithms and to enrich the Pareto fronts. However, as the solution space in this paper is binary, a new version called the binary opposition-based scheme is proposed. To do this, the concept of OBL in continuous spaces is presented first.
Then, it is modified to be utilized in a binary solution space.

3.1. Opposition in continuous space

The continuous space encompasses the variants of opposition, quasi-opposition, and quasi-reflection, defined below.
Fig. 3. Opposite points in continuous space.
Definition 1. The opposite of any real number x ∈ [a, b], denoted by x̆, is generated using x̆ = a + b − x. This definition can easily be extended to higher dimensions.

Definition 2. The quasi-opposite point x̆_qo of any real number x ∈ [a, b] is defined as x̆_qo = rand(c, x̆), where c is the center of the interval.

Definition 3. The quasi-reflected point x̆_qr of any real number x ∈ [a, b] is calculated using x̆_qr = rand(x, c).

In other words, quasi-opposition reflects a variable to a random point between the center of the range (c) and its opposite point, whereas quasi-reflection shifts the variable x to a random point between the center of the domain and x, as illustrated in Fig. 3 [32]. Among the above opposite points, it has been mathematically proved that the quasi-reflected point, with an expected probability of 11/16, has a higher chance of being closer to the solution than the quasi-opposite point, with an expected probability of 9/16. This means that the quasi-reflected point yields the highest probability of being closer to the solution of an optimization problem [32].
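The three continuous variants of Definitions 1–3 translate directly into code. The sketch below is a minimal illustration; the function names are introduced here, and rand(u, v) is taken to be a uniform draw between its two arguments, as suggested by Fig. 3.

import random

def opposite(x, a, b):
    """Definition 1: the opposite point of x in [a, b]."""
    return a + b - x

def quasi_opposite(x, a, b):
    """Definition 2: a random point between the interval center c and the opposite of x."""
    c = (a + b) / 2.0
    return random.uniform(*sorted((c, opposite(x, a, b))))

def quasi_reflected(x, a, b):
    """Definition 3: a random point between x and the interval center c."""
    c = (a + b) / 2.0
    return random.uniform(*sorted((x, c)))

x = 1.2
print(opposite(x, 0.0, 10.0), quasi_opposite(x, 0.0, 10.0), quasi_reflected(x, 0.0, 10.0))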
3.2. Opposition in binary space

To extend the use of opposite points to a binary space, this paper utilizes a binary OBL defined as follows:

Definition 4. The opposite point of X = (x_1, x_2, ..., x_d) in a d-dimensional binary space with x_i ∈ {0, 1}, i = 1, 2, ..., d, is obtained as X̆ = (x̆_1, x̆_2, ..., x̆_d), where x̆_i = 1 − x_i, i = 1, 2, ..., d.

Definition 5 [36]. The Hamming distance between two d-dimensional binary vectors x, y ∈ {0, 1}^d is calculated as HD(x, y) = Σ_{i=1}^{d} x(i) ⊕ y(i). An alternative way to obtain the Hamming distance is to use the absolute value of the arithmetic subtraction, |x(i) − y(i)|.

Proposition 1. There is a unique opposite point X̆ = (x̆_1, x̆_2, ..., x̆_d) for X = (x_1, x_2, ..., x_d).

Proof. If there are two opposite points X̆ = (x̆_1, x̆_2, ..., x̆_d) and X̆′ = (x̆′_1, x̆′_2, ..., x̆′_d), then, based on Definition 4, x̆_i = 1 − x_i and x̆′_i = 1 − x_i for i = 1, 2, ..., d. Therefore, X̆ = X̆′.

Proposition 2 [36]. Reconsider X = (x_1, x_2, ..., x_d) and its opposite point X̆ = (x̆_1, x̆_2, ..., x̆_d). Then, for each Y ∈ {0, 1}^d, we have HD(X, Y) = d − HD(X̆, Y).

Proposition 2, mathematically proved by Seif and Ahmadi [36], states that the distance between X and a randomly generated Y is equal to the dimension of the space minus the Hamming distance between the opposite point X̆ and Y, where HD denotes the Hamming distance of Definition 5. Thus, after generating a binary solution X, its opposite point can be created and evaluated in order to give another chance to explore a candidate solution closer to the global optimum. Recently, it has been claimed that an opposite point is more effective and beneficial than an independent random point, as stated by the following proposition:
Proposition 3 [36]. Consider X = (x_1, x_2, ..., x_d) as a binary solution of the objective function y = f(x), where d > 1, and let X̆ be its opposite point. Suppose X_1 = (x_11, x_12, ..., x_1d) and X_2 = (x_21, x_22, ..., x_2d) are two random solutions in the solution space. Then

Pr( ‖X̆, X‖ ≤ min{ ‖X_1, X‖, ‖X_2, X‖ } ) > Pr( ‖X_2, X‖ ≤ min{ ‖X_1, X‖, ‖X̆, X‖ } ),

where ‖·, ·‖ denotes the distance between two points. This proposition states that the probability of the distance between X̆ and X being less than or equal to the distances of {X_1, X_2} from X is greater than the probability of the distance between X_2 and X being less than or equal to the distances of {X_1, X̆} from X. Thus, X̆ is more likely than X_2 to be the closest to X among {X_1, X_2, X̆}. That is why an opposite solution is more effective than an independent random solution. Although Proposition 3 holds for an arbitrary objective function, this paper seeks to exhibit its beneficial application in multi-objective optimization problems, where more than one criterion is considered and the non-dominated solution concept is therefore employed. For instance, consider a multi-objective optimization problem involving P (P > 1) objective functions to be optimized simultaneously. In this problem, f_i(x), i = 1, 2, ..., P, is the i-th objective function and x is a feasible solution. Then, a typical multi-objective maximization problem is defined as:
Maximize { f_1(x), f_2(x), ..., f_P(x) }   (11)
As some objective functions in (11) may conflict with each other, a unique solution that maximizes all the objectives simultaneously does not exist. Hence, the non-dominated concept is used, in which solution x is said to dominate y (x ≺ y) if and only if [37]:

f_i(x) ≥ f_i(y) for all i = 1, 2, ..., P, and f_i(x) > f_i(y) for at least one i ∈ {1, 2, ..., P}.   (12)
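A short sketch ties together the binary opposition of Definition 4, the Hamming distance of Definition 5, and the dominance test of Eq. (12); it is only an illustration under the maximization convention used above, and the function names are not taken from the paper.

def binary_opposite(x):
    """Definition 4: flip every bit of a binary vector."""
    return [1 - xi for xi in x]

def hamming(x, y):
    """Definition 5: number of positions in which x and y differ."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def dominates(fx, fy):
    """Eq. (12), maximization: no worse in every objective, strictly better in at least one."""
    return all(a >= b for a, b in zip(fx, fy)) and any(a > b for a, b in zip(fx, fy))

x = [1, 0, 1, 1]
y = [0, 1, 1, 0]
x_opp = binary_opposite(x)
# Proposition 2: HD(x, y) + HD(x_opp, y) equals the dimension d
assert hamming(x, y) + hamming(x_opp, y) == len(x)
print(dominates([10.0, 4.0], [8.0, 4.0]))   # True under maximization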
This provides a set of optimal solutions called the Pareto-optimal set. These Pareto-optimal solutions create the Pareto front [38]. Under these considerations, this paper attempts to show that the binary OBL performs satisfactorily in multi-objective optimization schemes.

4. Solving methodologies

As the proposed problem is an INLP and belongs to the class of NP-hard problems, a novel algorithm called opposition multi-objective biogeography-based optimization (OMOBBO) is presented in this section to find a near-optimum solution. Moreover, this paper introduces the concept of an opposition non-dominated sorting genetic algorithm (ONSGA-II), due to the fact that there is no benchmark available in the literature to validate the obtained solution. Furthermore, the results of both algorithms are compared to the results of their original versions, called MOBBO and NSGA-II.

4.1. OMOBBO

In this section, a brief description of the biogeography-based optimization (BBO) algorithm is first given. Afterward, BBO is extended to multi-objective biogeography-based optimization (MOBBO) and then to opposition multi-objective biogeography-based optimization (OMOBBO). BBO is a type of evolutionary algorithm that was first introduced by Simon [39]. As its name implies, biogeography is the study of the migration, speciation, and extinction of species. Biogeography has often been considered as a process that enforces equilibrium in the number of species in habitats. Moreover, it has been inspired by the study of the distribution
Fig. 4. The immigration and emigration curves in BBO.
of animals and plants over time and space. BBO has demonstrated satisfactory performance when applied to various unconstrained and constrained benchmark functions [32,39]. In this algorithm, the habitat suitability index (HSI) is a measure of the goodness of the solution represented by the habitat, which is also called the fitness. Therefore, a high value of HSI is an indication of a large number of species, and a low value of HSI indicates a small number of species within the habitat. In BBO, each habitat contains several variables called suitability index variables (SIVs) that are similar to the genes of chromosomes in genetic algorithms. These SIVs emigrate from high-HSI habitats to low-HSI habitats over several iterations of the emigration process with an emigration rate (μi). In contrast, low-HSI habitats accept new SIVs from high-HSI habitats through an immigration process with an immigration rate (λi). Hence, BBO utilizes a migration operator that includes the two sub-operators of emigration and immigration. Fig. 4 illustrates the linear BBO immigration and emigration curves. In Fig. 4, the worst solution has the highest immigration rate; hence, it has a very high chance of borrowing features from other solutions to improve the next generation. Meanwhile, the best solution has a very low immigration rate, indicating that it is less likely to be altered by other solutions [39]. As the number of species increases, more species are able to leave the habitat and the emigration rate increases. In addition to the migration operator, BBO utilizes another operator called mutation, similar to GA.
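The linear curves of Fig. 4 reduce to two one-line functions. The sketch below is schematic only: the maximum immigration and emigration rates are set to 1 as an assumption, and a habitat's position is expressed through its rank (best = 1), which is also how the multi-objective variant described later computes the rates.

def immigration_rate(rank, n_pop):
    """lambda: the worst habitats (largest rank) immigrate the most."""
    return rank / n_pop

def emigration_rate(rank, n_pop):
    """mu: the best habitats (smallest rank) emigrate the most."""
    return 1.0 - rank / n_pop

n_pop = 10
for r in (1, 5, 10):
    print(r, immigration_rate(r, n_pop), emigration_rate(r, n_pop))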
4.1.1. Habitat representation

In this research, the habitats (representing solutions) are generated in a way that satisfies all constraints at all times; hence, all of their corresponding solutions are feasible. The proposed habitat is structured as an N × (H + N + N + N) matrix divided into four sections to satisfy all constraints at all times and hence avoid generating infeasible solutions. The four sections of a habitat are:
Section 1: An N × H matrix of the X_ik;
Section 2: An N × N matrix of the U_ij^100;
Section 3: An N × N matrix of the U_ij^75;
Section 4: An N × N matrix of the U_ij^50.
Hence, each section of the habitat is related to a decision variable. In Section 1, each row and column of the matrix represents a center node and a hub, respectively. Therefore, each cell represents a bit which refers to an arc between a hub and a center node; it takes the value 1 if center node i is allocated to hub k, and 0 otherwise. Note that all the bits except one in each row are zero in order to handle Constraint (3). Fig. 5 illustrates the general form of the proposed habitat for five geographically dispersed center nodes (N = 5) and three candidate hubs (H = 3), where p is equal to two. It can be seen in this figure that hubs 1 and 3 are located, center nodes 1–3 are
Fig. 5. The habitat representation.
Fig. 6. Illustration of the exchange mutation.
allocated to hub 3, and center nodes 4–5 are allocated to hub 1. In addition, hub 2 is not located due to the value of p.

4.1.2. Migration and mutation operators

The mutation operator creates a mutated habitat by randomly modifying its suitability index variables (SIVs). This operator helps to generate a reasonable level of diversity in the habitats; it also serves the search by jumping out of local optimal solutions. For this operator, a predetermined parameter (Pm), which represents the probability of the mutation operator, is applied. The operator randomly selects a habitat, then randomly selects one of its center nodes and reassigns it to another hub selected at random (an exchange mutation). Note that the exchange mutation has demonstrated satisfactory performance for binary-coded solutions [40]. More precisely, as shown in Fig. 6 for the initial habitat, the operator generates a mutated habitat such that it switches the assignment of a single node (center node 5) from one hub to another. In the migration operator, owing to the binary space, characteristics are swapped directly between the emigrating habitats, which share their information, and the immigrating habitats, which accept the shared characteristics. Thus, habitats with appropriate SIVs are scattered throughout the population.

4.1.3. Multi-objective BBO

Fei and Li [41] proposed a new algorithm based on the non-subsampled contourlet transform and multi-objective biogeography-based optimization to solve problems in which parameters cannot be adjusted adaptively in most multi-focus image fusion algorithms. In addition, Sarrafha et al. [42] utilized the multi-objective version of BBO for an integrated procurement, production, and distribution problem, without any diversity mechanism. While in a single-objective optimization algorithm the objective function value is used to rank the solutions according to their HSI values, in Pareto-based multi-objective algorithms the domination concept is utilized for ranking [43]. After calculating
the HSI value for the generated habitats, the fast non-dominated sorting (FNDS) operator presented in Deb et al. [43] is applied to realize the dominance concept by pursuing the first goal, convergence, in which lower FNDS values indicate better ranks. Afterward, diversity, the second goal, is introduced using the crowding distance (CD). This concept estimates the density of solutions surrounding each non-dominated habitat among habitats of the same rank. Unlike the FNDS, larger values of CD are an indication of better habitats. The two mentioned operators sort the generated habitats, realizing the non-dominated sorting concept. Now that each habitat has been characterized by its rank and crowding distance, in MOBBO, unlike the single-objective BBO, the ranks and CD are utilized to implement the migration operator, in which the immigration rate (λi) and the emigration rate (μi) for habitat i are calculated as follows:
λi = 1 × (Ri / nPop)   (13)

μi = 1 × (1 − Ri / nPop)   (14)
where Ri represents the rank of habitat i (or its front position) after sorting all habitats based on the multi-objective sorting strategy, and nPop represents the size of the population. Note that a habitat identified with the best (lowest) rank is a good habitat and therefore has the highest emigration rate while having the lowest immigration rate. Unlike the single-objective BBO, which uses roulette wheel selection on the emigration rate to select the habitat providing the immigrating SIV, MOBBO utilizes a binary tournament selection as the selection process. In the selection process, if a specific habitat needs to immigrate, two habitats are selected randomly. Then, if they have different ranks, the one with the lower rank is selected; otherwise, the one with the higher CD is selected as the habitat from which the immigrating SIV is taken. Then, an SIV is selected randomly from that habitat for immigration. Afterward, the mutation operator is applied according to the mutation probability of each habitat. Once the mutation and migration operators have been applied to the initial population, the next population is generated according to the ascending order of the habitat ranks, with the limitation that the capacity of the next population is not exceeded. It is worthwhile to note that the optimization process of MOBBO is similar to the NSGA-II optimization process; however, the main difference between them is their evolution operators. In other words, while the searching mechanism of NSGA-II is a GA [42], the searching mechanism of MOBBO is a BBO algorithm. Under these considerations, the pseudo-code of MOBBO is presented in Fig. 7.

4.1.4. Opposition MOBBO

In opposition MOBBO, the OBL algorithm is added to MOBBO as a diversity mechanism to enhance the performance and to enrich the Pareto fronts. The outline of the OMOBBO algorithm is
Fig. 7. The pseudo-code of the MOBBO.
Fig. 8. The pseudo-code of the opposition MOBBO.
presented as pseudo-code in Fig. 8. In OMOBBO, the opposite population is created based on the definitions of the binary OBL proposed in Section 3. From the pseudo-code of OMOBBO, it is evident that the proposed OMOBBO first generates an opposite population (OP0) of the initial population (P0) with the same population size (nPop). In other words, an opposite habitat is made for each habitat in the initial population. Then, P0 and OP0 are merged, and the habitats with the best fitness are preserved with respect to nPop. To do so, both the crowding distance and
the non-dominated sorting procedure mentioned in the previous section are utilized. Note that this procedure is also implemented during each iteration of OMOBBO. Nevertheless, to save computational time, the opposite population of the current population has a chance of being generated OJR% (usually 30%) of the time during each iteration. This probability is determined by the Opposition Jumping Rate ∈ [0, 1] parameter as specified in Fig. 8. Then, the generated population in iteration t, (Pt ), the new population (Qt ), and the opposite population of the new population (OQt )
Fig. 9. An example of the multi-parent crossover.
are merged and truncated according to nPop. Afterward, the non-dominated sorting procedure is carried out to recognize the fronts, similar to the MOBBO steps.

4.2. The ONSGA-II

As mentioned before, the aim of designing the NSGA-II (widely applied to INLPs) is to compare against the results obtained by OMOBBO. However, in order to have a proper comparison, the designed NSGA-II is modified to an opposition NSGA-II, denoted ONSGA-II. To do so, the original NSGA-II is first reviewed; then, the modified version is presented. NSGA-II was originally introduced by Deb et al. [43] as a class of multi-objective evolutionary algorithms in which a fast and capable sorting procedure is accompanied by an elitism operation. The searching mechanism of NSGA-II is a GA that implements the crossover and mutation operators on the chromosomes. To obtain different Pareto fronts, a ranking procedure is performed at each generation. Similar to OMOBBO, the NSGA-II utilizes the binary tournament selection and the non-dominated sorting procedure. The chromosome representation in NSGA-II is designed similarly to the habitat representation in OMOBBO. Moreover, the mutation structures of both algorithms are the same. However, the advantage of the proposed NSGA-II is that it utilizes the multi-parent crossover for binary space, as defined below.

4.2.1. Multi-parent crossover

The crossover operator exchanges some of the genes of the chromosomes through the breakage and reunion of selected parents in order to generate a number of children. Here, a predetermined parameter Pc is used to represent the probability of the crossover operator applied in a population. To explore the solution space, a multi-parent crossover is chosen in this research to generate offspring and establish a balance between exploration and exploitation. This crossover operator utilizes three parents instead of two, chosen based on roulette wheel selection. These parents are then ranked based on their fitness. For instance, if parent X1 has the first rank among the three parents while parent X2 has the second rank, then each bit of the first parent X1 is compared with the corresponding bit of the second parent X2. If both are the same, the bit is taken for the offspring; otherwise, the corresponding bit from the third parent X3 is taken for the offspring. Although the third parent does not have a good fitness, it can be used to promote diversification. Note that all three parents must be different from each other and that the described procedure is carried out on all sections of the chromosome. The structure of this operator is similar to the three-parent crossover operator that has shown good performance in binary space [44]. This procedure for center node 5 is illustrated in Fig. 9, where p is equal to two.
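The three-parent rule described above (keep a bit on which the two best parents agree, otherwise copy it from the third parent) reduces to a single line of code. The sketch below shows only this bitwise rule with hypothetical parent vectors; repairing the offspring so that each center-node row keeps exactly one assigned hub is omitted.

def multi_parent_crossover(p1, p2, p3):
    """p1, p2, p3 are binary lists ranked from best to worst by fitness."""
    return [b1 if b1 == b2 else b3 for b1, b2, b3 in zip(p1, p2, p3)]

parent1 = [1, 0, 0, 1, 0]
parent2 = [1, 0, 1, 0, 0]
parent3 = [0, 0, 1, 1, 1]
print(multi_parent_crossover(parent1, parent2, parent3))  # [1, 0, 1, 1, 0]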
4.2.2. Opposition NSGA-II

Similar to OMOBBO, the opposition NSGA-II (ONSGA-II) utilizes the binary OBL due to the characteristics of the binary solution space. ONSGA-II uses the binary OBL not only in the initial population but also in all iterations, with probability OJR% (Opposition Jumping Rate), at all times. While the main steps involved in ONSGA-II are similar to OMOBBO, the difference is in the evolution process from Pt to Qt. In OMOBBO, a single-objective BBO acts in the evolution process, whereas in NSGA-II the evolution process of a GA is used for the evolution from Pt to Qt. In other words, in ONSGA-II the binary OBL mechanism is applied to chromosomes generated using the multi-parent crossover and mutation operators, whereas in OMOBBO this mechanism is applied to habitats generated using the migration and mutation operators.

4.3. Performance metrics

Due to the existence of competing and conflicting objectives in a multi-objective optimization problem, two goals must be pursued: convergence to the Pareto-optimal set and maintaining diversity in the solutions of the Pareto-optimal set [45]. To this aim, different metrics are usually defined to evaluate the algorithms. In this paper, the following performance metrics are used for this purpose:
1. Number of Pareto solutions (NS): shows the number of Pareto-optimal solutions.
2. CPU time: represents the computational time required to run an algorithm. Note that the computation time measures how fast an algorithm generates a solution, not necessarily how fast it generates a Pareto solution.
3. Mean ideal distance (MID): measures the convergence rate of Pareto fronts to a certain point (0, 0) [46].
4. Spacing: shows the standard deviation of the distances among solutions of the Pareto front. This criterion decreases in better fronts [47].
5. Maximum Spread (MS): measures the diagonal length of a hyper-box in the Pareto curve. Higher values indicate a better diversity of the front [46].
6. The Coverage (C) metric: demonstrates the differences between two Pareto fronts. The value of C(A, B) states the percentage of the solutions in B dominated by at least one solution of A. It is calculated by the following equation:
C(A, B) = |{ b ∈ B | ∃ a ∈ A : a ≺ b }| / |B|   (15)
The value C(A, B) = 1 means that all solutions in B are dominated by solutions in A. In contrast, C(A, B) = 0 depicts the condition in which none of the solutions in B is dominated by A. Thus, a value of 1 for C(A, B) indicates that front A outperforms front B. In this metric, C(A, B) is not necessarily equal to 1 − C(B, A); hence, both C(A, B) and C(B, A) must be calculated, based on which A is better than B if C(A, B) > C(B, A).
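Following Eq. (15), the coverage metric counts how many members of front B are dominated by at least one member of front A. The sketch below is a direct transcription under the maximization convention of Eq. (12); fronts are given as lists of objective vectors, and the helper name dominates is an assumption.

def dominates(fx, fy):
    """Maximization: no worse in every objective, strictly better in at least one."""
    return all(a >= b for a, b in zip(fx, fy)) and any(a > b for a, b in zip(fx, fy))

def coverage(front_a, front_b):
    """C(A, B): fraction of solutions in B dominated by at least one solution in A."""
    dominated = sum(any(dominates(a, b) for a in front_a) for b in front_b)
    return dominated / len(front_b)

A = [(10.0, 4.0), (8.0, 6.0)]
B = [(9.0, 3.0), (7.0, 7.0)]
print(coverage(A, B), coverage(B, A))  # both directions must be reported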
Table 1. The levels of the algorithms' parameters.

Algorithm   Parameter   Range      Low (1)   Medium (2)   High (3)
ONSGA-II    MaxG        100–400    100       250          400
            nPop        50–250     50        150          250
            Pc          0.6–0.8    0.6       0.7          0.8
            Pm          0.1–0.25   0.1       0.15         0.25
            OJR         0.1–0.3    0.1       0.2          0.3
OMOBBO      MaxG        100–400    100       250          400
            nPop        50–250     50        150          250
            Pm          0.1–0.25   0.1       0.15         0.25
            OJR         0.1–0.3    0.1       0.2          0.3
4.4. Parameter tuning

To obtain better solutions, the parameters of the proposed algorithms are calibrated using the Taguchi method. This method is a fractional factorial experiment that is known as an efficient alternative to full factorial experiments [48]. It uses a special set of arrays called orthogonal arrays. These arrays stipulate how to conduct the minimal number of experiments that can give full information on all the factors affecting the performance parameter. There are two groups of factors, signal and noise, where maximizing the signal-to-noise (S/N) ratio is the aim. As a "smaller-is-better" objective is appropriate in this research, the S/N ratio is defined as:
S/N = −10 × log( Σ(Y²) / n )   (16)

Fig. 10. The S/N ratio plot for parameters of OMOBBO.
where Y represents the response and n denotes the number of orthogonal arrays. To employ the Taguchi method, the multi-objective coefficient of variation (MOCV) measure proposed by [49] is utilized as the response. In this response, MID and MS are used as in Eq. (17):
MOCV = MID / MS   (17)

Fig. 11. The S/N ratio plot for parameters of ONSGA-II.
where MID is a representation of the first goal (convergence) and MS is a representation of the second goal (diversity). Hence, the two mentioned goals of multi-objective optimization are considered simultaneously. In order to run the Taguchi method, the levels of the factors (algorithm parameters), such as the maximum number of generations (MaxG), the population size (nPop), and the opposition jumping rate (OJR), are determined in Table 1. The values mentioned in Table 1 are chosen based on some trial experiments and the literature. To set the parameters, 15 test problems with random parameters are generated. Then, the L29 design is used for ONSGA-II, while the L9 design is used for OMOBBO, using the Minitab software. Figs. 10 and 11 illustrate the S/N ratios obtained for OMOBBO and ONSGA-II, respectively. In these figures, the best level of each parameter is selected to be the one with the highest S/N ratio. As a result, the proper values of the parameters are shown in bold in Table 1.

5. Computational results

The following steps are taken in order to assess the applicability of the proposed model and the solution methodologies:
• Competitive model validation;
• OBL validation;
• The OBL-based algorithms comparison.
5.1. Competitive model validation

To initialize the validation process and demonstrate the validity of the proposed model, 10 test problems are solved using the GAMS/BARON 23.5 software. To do so, the max-min approach is applied in order to maximize the minimum (the worst) captured flow and to minimize the maximum (the worst) transportation cost. The general form of the max-min approach is as follows:
Max { min_{i=1,2,...,P} f_i(x) : x ∈ Q }   (18)
where f_i(x), i = 1, 2, ..., P, is the i-th objective function and x is a feasible solution. Note that the optimal solution set provided by the max-min approach always contains an efficient solution of the original multi-objective problem presented in Eqs. (1)–(2). Interested readers are referred to Pasandideh et al. [37] for more information on the max-min approach. The hubs for the competitor (q) are taken from the optimal solutions, where α is set to 0.75. It is assumed that the new company has set the value of θ to 10% for all routes. The values of the model parameters in the test problems are randomly generated according to the distributions shown in Table 2. The lower and upper bounds of some of these ranges are chosen based on the case study presented by Bashiri et al. [18] and Chou [50].
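For two objectives of different type (captured flow to be maximized, transportation cost to be minimized), the max-min approach of Eq. (18) is commonly applied to normalized achievement levels. The sketch below is only one possible interpretation of such an implementation; the normalization bounds are hypothetical, while the flow and cost values are taken from the Example 3 discussion below.

def max_min_value(flow, cost, flow_bounds, cost_bounds):
    """Return the smaller of the two normalized achievement levels; the max-min
    approach then maximizes this value over the feasible solutions."""
    f_min, f_max = flow_bounds
    c_min, c_max = cost_bounds
    flow_level = (flow - f_min) / (f_max - f_min)   # higher flow is better
    cost_level = (c_max - cost) / (c_max - c_min)   # lower cost is better
    return min(flow_level, cost_level)

print(max_min_value(305, 140711.553, (0, 400), (120000, 160000)))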
Table 2. The sources of random generation.

Parameter   Value
a_ij        ∼U(5, 15)
c_ik        ∼U(30, 45)
c_km        ∼U(20, 30)
c_jm        ∼U(10, 20)
O_i         ∼U(2, 4)
C_k         ∼U(25, 50)
f_k         100
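A test instance matching the distributions in Table 2 can be generated in a few lines. The sketch below is illustrative only; α = 0.75 and θ = 0.10 are taken from the text above, and the dictionary keys simply follow the notation of Section 2.

import numpy as np

def generate_instance(N, H, seed=0):
    """Draw one random test problem using the ranges of Table 2."""
    rng = np.random.default_rng(seed)
    return {
        "a":     rng.uniform(5, 15, size=(N, N)),    # flow a_ij
        "c_in":  rng.uniform(30, 45, size=(N, H)),   # c_ik
        "c_hub": rng.uniform(20, 30, size=(H, H)),   # c_km
        "c_out": rng.uniform(10, 20, size=(H, N)),   # c_jm
        "O":     rng.uniform(2, 4, size=N),          # O_i
        "C":     rng.uniform(25, 50, size=H),        # hub capacity C_k
        "f":     np.full(H, 100.0),                  # fixed cost f_k
        "alpha": 0.75,
        "theta": 0.10,
    }

instance = generate_instance(N=10, H=5)
print(instance["a"].shape, round(instance["c_in"].mean(), 2))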
The Branch-And-Reduce Optimization Navigator (BARON) algorithm presented by Tawarmalani and Sahinidis [51] is an algorithm for the global solution of nonlinear programming problems. They showed that the algorithm reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude. BARON implements algorithms of the branch-and-bound type enhanced with a variety of constraint propagation and duality techniques in order to reduce the ranges of the variables in the course of the algorithm [52]. This is the reason why BARON is utilized to validate the results. Further, ten randomly generated problems of different sizes are solved. The results are shown in Table 3. The second column of Table 3 shows the number of center nodes (N) and the number of potential hubs (H), whereas the third column presents the number of hubs to be selected. Note that the hubs for the competitor are taken from the optimal solutions of the capacitated single allocation q-hub median problem proposed in Section 2, where the total transportation cost is minimized. Afterward, the optimal solutions of the new company are presented, locating p hubs and assigning the center nodes to the located hubs in order to maximize the total captured flow in the market while minimizing the total transportation cost. Further, these solutions are compared with each other. The optimal hub locations for the new company are presented in the fourth column, whereas the fifth column shows the optimal hubs selected by the competitor. Although some of the optimal hub locations are similar, in other cases they are relatively different from each other. It is evident that increasing p increases both the total captured flow and the transportation cost of the new company. In this regard, the new company is able to compete with the competitor, as its total transportation cost is smaller than that of the competitor, except in Examples 2 and 3. Furthermore, it can be seen that the new company can capture the flow. In Example 2, the new company locates hubs 3, 4, 5, 10, 12, and 13 with a cost of $138,200.310, whereas the competitor selects hubs 3, 4, 9, 10, 11, and 12 with a cost of $135,110.323. It is clear that the new company located hubs 5 and 13 instead of hubs 9 and 11, which were selected by the competitor. This means that the new company could not capture those routes connected to hubs 9 and 11, primarily due to the good positioning of the competitor's hubs. In fact, for the origin–destination pairs involved, the flows are fully assigned to the competitor through hubs 9 and 11 and, as a result, the new company lost them. This condition is also evident in Example 3, where the new company located hubs 9 and 10 instead of hubs 5 and 7, which were selected by the competitor. Note that BARON is not able to find the optimal solution in a reasonable CPU time as the size of the problem increases. In order to illustrate the detailed solutions obtained using the max-min approach, consider Example 3. We want to clarify the origin–destination pairs captured by the new company, where the hubs optimally located by the competitor are 4, 5, 7, 8, 11, 12, 13, and 14. The origin–destination pairs captured by the new company are summarized in Table 4 as the solutions obtained for the decision variables U_ij^D. From Table 4, it is evident that both U_ij^50 and U_ij^100 are zero, and therefore a captured origin–destination pair through Rules 1 and 3
This indicates that the new company is not able to capture either 50% or 100% of the traffic. Instead, it can capture only 75% of the traffic (under Rule 2) from the origin nodes listed in Table 4 to destination nodes 5 and 8 through its optimal hub locations 4, 8, 9, 10, 11, 12, 13, and 14. Recall that each flow can be captured by only one rule, as stated in the problem definition section. Finally, the max-min approach indicates that the total flow captured by the new company is 305, while its total transportation cost is $140,711.553 in the worst state. (Note that the new company is not only able to compete well, but some of its optimal hub locations also coincide with those selected by the competitor.) Moreover, the new company's total transportation cost is smaller than that of its competitor in most of the examples. In other words, under the defined competitive environment, the new company can reach its objectives. In short, the results of the experiments performed in the first step indicate that the model is valid.

5.2. OBL validation

To highlight the effects of the proposed binary OBL, ONSGA-II and OMOBBO and their original versions are compared to each other in terms of all previously stated performance metrics. The algorithms are coded in MATLAB 7.8 (R2009a) and executed on an Intel(R) Core(TM) i7, 3.23 GHz laptop with 512 MB of RAM. Moreover, the Mann-Whitney U test is utilized to determine whether significant differences exist between the results obtained by the algorithms. The Mann-Whitney U test is a nonparametric test that allows two groups or treatments to be compared without assuming that the sample observations are normally distributed; its null hypothesis asserts that the medians of the two samples are identical (an illustrative sketch of this test is given at the end of this group of comparisons). To perform the comparison, nine problems are solved in 10 independent runs with the input parameters shown in Table 2 for N = 40, H = 30, and p = 10. Furthermore, the best value alongside the calculated mean of each metric is considered in the comparisons.

Table 5 reports the best value and the mean of the computation times of the competing algorithms used to solve the nine problems of the same scale. Comparing ONSGA-II with NSGA-II and OMOBBO with MOBBO in terms of the computation times in the best case, NSGA-II and MOBBO outperform ONSGA-II and OMOBBO, respectively, at approximately the 95% and 93% confidence levels according to the Mann-Whitney U test; the corresponding P-values are 0.0521 and 0.0774.

Table 6 contains the best value and the mean of the NS metric obtained by the algorithms. The results in this table, based on the Mann-Whitney U test, indicate that ONSGA-II and OMOBBO strongly outperform NSGA-II and MOBBO, respectively, with P-values of 0.003 and 0.004. In other words, although ONSGA-II and OMOBBO require relatively longer times to calculate the opposite population, they both perform better than their competitors in terms of the NS metric.

The pairwise comparison in terms of the MID metric is performed in Table 7. The results in this table indicate that, once again, ONSGA-II and OMOBBO outperform NSGA-II and MOBBO, respectively, with P-values of 0.0045 and 0.0040. Moreover, the results reported in Table 8 for the Spacing metric indicate that ONSGA-II and OMOBBO perform better than their competitors with P-values of 0.0027 and 0.0004, respectively. The algorithms are compared in terms of the MS metric in the following.
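As an illustration of the significance checks mentioned above, the following minimal Python sketch applies a Mann-Whitney U test to two samples of CPU times; the two lists reuse the best CPU times of NSGA-II and ONSGA-II from Table 5 purely for demonstration, so the sketch will not reproduce the run-level P-values reported in the paper (the authors carried out the original analysis in MATLAB). The variable names and the 0.05 threshold are illustrative choices.

# Illustrative Mann-Whitney U test comparing two algorithms' CPU times.
# The samples below are the best CPU times of Table 5, used only as a demo.
from scipy.stats import mannwhitneyu

nsga2_times  = [102, 119, 132, 198, 362, 509, 690, 951, 1205]
onsga2_times = [458, 548, 490, 589, 802, 1009, 1025, 1230, 1650]

# Two-sided test of the null hypothesis that both samples come from
# populations with identical medians.
stat, p_value = mannwhitneyu(nsga2_times, onsga2_times, alternative="two-sided")

print(f"U statistic = {stat:.1f}, P-value = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 95% confidence level.")
else:
    print("The null hypothesis cannot be rejected at the 95% confidence level.")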
The results in Table 9 demonstrate that, similar to the previously observed metrics, ONSGA-II and OMOBBO outperform NSGA-II and MOBBO, respectively, with P-values of 0.0047 and 0.0171. Finally, Tables 10-13 give the best and the average values of the C metric obtained over the solution sets.
Table 3
Computational results of model validation.
Example | N, H | p, q | Loc. New company | Loc. Competitor | Tot. transportation cost of the New company ($) | Tot. transportation cost of the Competitor ($) | Tot. captured flow
1 | 20, 15 | 4 | 4,8,9,10 | 4,8,9,11 | 130,534.159 | 131,740.220 | 100
2 | 20, 15 | 6 | 3,4,5,10,12,13 | 3,4,9,10,11,12 | 138,200.310* | 135,110.323* | 264
3 | 20, 15 | 8 | 4,8,9,10,11,12,13,14 | 4,5,7,8,11,12,13,14 | 140,711.553* | 137,300.551* | 305
4 | 30, 20 | 5 | 1,10,12,17,20 | 3,4,5,8,9 | 295,584.147 | 300,550.120 | 350
5 | 30, 20 | 7 | 1,10,12,16,17,18,20 | 5,6,8,10,11,18,20 | 354,562.996 | 394,133.125 | 420
6 | 30, 20 | 9 | 1,6,10,12,13,16,17,18,20 | 5,6,7,8,9,10,11,18,20 | 444,005.120 | 493,155.234 | 511
7 | 35, 25 | 4 | 2,7,22,23 | 1,3,15,20 | 413,751.101 | 422,118.540 | 366
8 | 35, 25 | 5 | 5,9,10,22,23 | 3,7,12,20,23 | 415,233.786 | 431,391.120 | 402
9 | 35, 25 | 6 | 5,6,8,9,22,23 | 9,11,12,13,19,20 | 439,833.562 | 462,988.552 | 415
10 | 35, 25 | 7 | 8,9,10,15,20,22,23 | 2,3,7,12,13,15,23 | 458,288.379 | 542,522.323 | 550
Table 4
Results of capturing flow.
Origin node | Rule 3 (U_ij^50): No | Rule 2 (U_ij^75): Yes (node 5) | Rule 2 (U_ij^75): Yes (node 8) | Rule 1 (U_ij^100): No
1 | 0 | 1 | 1 | 0
2 | 0 | 1 | 1 | 0
9 | 0 | 1 | 1 | 0
11 | 0 | 1 | 1 | 0
13 | 0 | 1 | 1 | 0
14 | 0 | 1 | 1 | 0
15 | 0 | 1 | 1 | 0
16 | 0 | 1 | 1 | 0
18 | 0 | 1 | 1 | 0
20 | 0 | 1 | 1 | 0
Table 6
Comparisons based on the best and the mean of the NS metric.
Problem | NSGA-II (Best / Mean) | ONSGA-II (Best / Mean) | MOBBO (Best / Mean) | OMOBBO (Best / Mean)
1 | 80 / 74 | 103 / 100 | 79 / 66 | 139 / 136
2 | 82 / 66 | 121 / 109 | 49 / 40 | 130 / 128
3 | 69 / 52 | 117 / 112 | 74 / 52 | 111 / 111
4 | 64 / 60 | 109 / 95 | 62 / 60 | 132 / 120
5 | 71 / 71 | 127 / 113 | 52 / 51 | 120 / 120
6 | 68 / 66 | 135 / 130 | 56 / 43 | 116 / 110
7 | 86 / 80 | 101 / 94 | 64 / 62 | 135 / 119
8 | 77 / 69 | 102 / 100 | 61 / 52 | 116 / 114
9 | 54 / 44 | 133 / 110 | 60 / 60 | 140 / 140
The best "mean" and the best "best" are shown in bold.
Table 5
Comparisons based on the best and the mean CPU times.
Problem | NSGA-II (Best / Mean) | ONSGA-II (Best / Mean) | MOBBO (Best / Mean) | OMOBBO (Best / Mean)
1 | 102 / 112 | 458 / 463 | 122 / 128 | 390 / 402
2 | 119 / 123 | 548 / 459 | 125 / 131 | 387 / 393
3 | 132 / 144 | 490 / 521 | 149 / 162 | 459 / 464
4 | 198 / 210 | 589 / 621 | 238 / 244 | 581 / 602
5 | 362 / 388 | 802 / 812 | 288 / 295 | 911 / 920
6 | 509 / 522 | 1009 / 1013 | 612 / 619 | 1015 / 1020
7 | 690 / 715 | 1025 / 1244 | 744 / 763 | 1215 / 1220
8 | 951 / 966 | 1230 / 1243 | 1001 / 1012 | 1505 / 1508
9 | 1205 / 1220 | 1650 / 1662 | 1415 / 1419 | 1912 / 1929
The best "mean" and the best "best" are shown in bold.
By referring to these tables and the results obtained using the Mann-Whitney U test, ONSGA-II and OMOBBO outperform NSGA-II and MOBBO, respectively, at a 99% confidence level. In addition, NRGA and ONRGA outperform NSGA-II and ONSGA-II, respectively, at the 99% and 98% confidence levels. The obtained results of all algorithms on the performance metrics and their trends are graphically illustrated in Fig. 12.
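The C metric reported in Tables 10-13 is not redefined in this section; assuming the standard set-coverage measure of Zitzler and Thiele [46], in which C(A, B) is the fraction of solutions in B that are weakly dominated by at least one solution in A, a minimal sketch of its computation is given below. The helper names (weakly_dominates, coverage) and the two small fronts are hypothetical, and both objectives are assumed to be expressed in minimization form (e.g., the captured flow negated).

# Set-coverage (C) metric sketch under the standard definition of [46].
# Both objectives are assumed minimized; the fronts below are made up.
from typing import List, Sequence

def weakly_dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a is no worse than b in every (minimized) objective."""
    return all(ai <= bi for ai, bi in zip(a, b))

def coverage(A: List[Sequence[float]], B: List[Sequence[float]]) -> float:
    """Fraction of solutions in B weakly dominated by at least one member of A."""
    covered = sum(1 for b in B if any(weakly_dominates(a, b) for a in A))
    return covered / len(B)

# Hypothetical bi-objective fronts (transportation cost, negated captured flow):
front_onsga2 = [(1.30e5, -300.0), (1.36e5, -320.0), (1.41e5, -330.0)]
front_nsga2  = [(1.32e5, -295.0), (1.38e5, -318.0), (1.44e5, -325.0)]

print("C(ONSGA-II, NSGA-II) =", coverage(front_onsga2, front_nsga2))
print("C(NSGA-II, ONSGA-II) =", coverage(front_nsga2, front_onsga2))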
Furthermore, in order to clarify the superior performance of the proposed Pareto-based algorithms compared to the other algorithms, the Pareto fronts obtained by all algorithms on Problem 1 are displayed in Fig. 13. As the algorithms provide different values in terms of the metrics utilized in this paper, the five metrics are simultaneously employed in a popular multi-criteria decision-making method named TOPSIS. This enables the determination of the best algorithm in terms of all utilized metrics simultaneously. In the TOPSIS method, the algorithms are considered as the alternatives, while an equal weight is assumed for all criteria (metrics). Initially proposed by Hwang and Yoon [53], TOPSIS is one of the most popular multiple-criteria decision-making methods. It is based on the idea that the best alternative should have the shortest distance from the positive ideal solution (V+) and the farthest distance from the negative ideal solution (V-). The positive ideal solution maximizes the benefit criteria or minimizes the cost criteria, whereas the negative ideal solution maximizes the cost criteria or minimizes the benefit criteria.
Table 7
Comparisons based on the best and the mean of the MID metric.
Problem | NSGA-II (Best / Mean) | ONSGA-II (Best / Mean) | MOBBO (Best / Mean) | OMOBBO (Best / Mean)
1 | 1.14E+10 / 1.60E+10 | 1.14E+10 / 1.16E+10 | 1.14E+10 / 1.16E+10 | 1.14E+10 / 1.14E+10
2 | 1.27E+10 / 1.65E+10 | 1.08E+10 / 1.10E+10 | 1.26E+10 / 1.28E+10 | 1.05E+10 / 1.05E+10
3 | 1.15E+10 / 1.37E+10 | 1.07E+10 / 1.08E+10 | 1.17E+10 / 1.19E+10 | 1.10E+10 / 1.10E+10
4 | 1.17E+10 / 1.29E+10 | 1.12E+10 / 1.14E+10 | 1.15E+10 / 1.16E+10 | 1.01E+10 / 1.01E+10
5 | 1.22E+10 / 1.59E+10 | 1.12E+10 / 1.13E+10 | 1.14E+10 / 1.16E+10 | 1.03E+10 / 1.03E+10
6 | 1.20E+10 / 1.47E+10 | 1.12E+10 / 1.14E+10 | 1.38E+10 / 1.38E+10 | 1.02E+10 / 1.02E+10
7 | 1.25E+10 / 1.42E+10 | 1.05E+10 / 1.08E+10 | 1.45E+10 / 1.46E+10 | 1.09E+10 / 1.09E+10
8 | 1.29E+10 / 1.52E+10 | 1.07E+10 / 1.09E+10 | 1.36E+10 / 1.36E+10 | 1.12E+10 / 1.12E+10
9 | 1.31E+10 / 1.53E+10 | 1.08E+10 / 1.10E+10 | 1.48E+10 / 1.48E+10 | 1.03E+10 / 1.03E+10
The best "mean" and the best "best" are shown in bold.
Table 8
Comparisons based on the best and the mean of the Spacing metric.
Problem | NSGA-II (Best / Mean) | ONSGA-II (Best / Mean) | MOBBO (Best / Mean) | OMOBBO (Best / Mean)
1 | 976,620.00 / 1,279,372.20 | 841,854.00 / 945,402.04 | 876,540.00 / 1,093,045.38 | 809,565.00 / 1,088,864.93
2 | 862,530.00 / 991,909.50 | 821,331.00 / 1,019,271.77 | 1,022,378.00 / 1,157,843.09 | 773,072.00 / 1,054,470.21
3 | 1,040,313.00 / 1,227,569.34 | 796,951.00 / 1,048,787.52 | 1,295,206.00 / 1,486,611.54 | 868,079.00 / 1,228,071.36
4 | 1,125,628.00 / 1,288,844.06 | 706,149.00 / 808,540.61 | 1,396,277.00 / 1,618,005.79 | 744,151.00 / 834,044.44
5 | 1,006,958.00 / 1,182,168.69 | 499,732.00 / 556,701.45 | 1,157,574.00 / 1,296,482.88 | 590,839.00 / 1,051,693.42
6 | 1,194,046.00 / 1,419,720.69 | 898,239.00 / 997,045.29 | 1,232,282.00 / 1,410,962.89 | 749,010.00 / 1,082,319.45
7 | 986,508.00 / 1,233,135.00 | 578,019.00 / 638,711.00 | 1,266,992.00 / 2,217,236.00 | 746,183.00 / 1,216,278.29
8 | 772,605.00 / 919,399.95 | 794,432.00 / 945,374.08 | 1,336,889.00 / 2,406,400.20 | 846,961.00 / 1,014,659.28
9 | 1,135,303.00 / 1,373,716.63 | 613,866.00 / 714,846.96 | 1,278,565.00 / 2,109,632.25 | 759,511.00 / 1,155,975.74
The best "mean" and the best "best" are shown in bold.

Table 9
Comparisons based on the best and the mean of the MS metric.
Problem | NSGA-II (Best / Mean) | ONSGA-II (Best / Mean) | MOBBO (Best / Mean) | OMOBBO (Best / Mean)
1 | 7,644,889.73 / 6,566,960.28 | 17,478,730.90 / 14,577,261.57 | 9,406,339.92 / 7,807,262.13 | 15,174,239.64 / 13,550,596.00
2 | 14,137,619.00 / 10,518,388.54 | 16,727,150.00 / 13,313,138.69 | 12,946,528.00 / 11,224,639.78 | 17,498,743.00 / 15,533,634.16
3 | 15,074,572.00 / 5,261,025.63 | 11,130,385.00 / 8,969,977.27 | 10,536,900.00 / 9,424,203.36 | 16,264,993.00 / 14,085,483.94
4 | 7,210,143.00 / 4,751,484.24 | 18,784,699.00 / 16,136,056.44 | 9,539,934.00 / 8,481,001.33 | 15,544,881.00 / 13,281,546.33
5 | 12,346,172.00 / 9,592,975.64 | 16,572,167.00 / 13,637,236.22 | 16,544,343.00 / 14,790,642.64 | 13,656,558.00 / 11,922,175.13
6 | 7,828,386.00 / 6,387,962.98 | 16,125,498.00 / 13,479,303.78 | 12,343,803.00 / 10,961,297.06 | 12,797,849.00 / 11,031,745.84
7 | 9,563,037.00 / 8,491,976.86 | 16,948,870.00 / 13,813,329.05 | 9,424,945.00 / 8,407,050.94 | 12,467,085.00 / 10,855,090.91
8 | 7,987,283.00 / 6,757,241.42 | 17,126,897.00 / 15,018,575.98 | 14,609,479.00 / 12,973,217.35 | 17,928,808.00 / 15,364,988.46
9 | 14,434,828.00 / 11,946,263.65 | 12,167,849.00 / 10,452,182.29 | 12,955,865.00 / 11,193,867.36 | 17,154,030.00 / 15,267,086.70
The best "mean" and the best "best" are shown in bold.
Table 10
Best and average values of the C metric obtained by NSGA-II and ONSGA-II.
Problem | NSGA-II (Best / Mean) | ONSGA-II (Best / Mean)
1 | 1 / 0.38 | 1 / 0.45
2 | 1 / 0.35 | 1 / 0.48
3 | 1 / 0.31 | 1 / 0.49
4 | 0.75 / 0.42 | 1 / 0.47
5 | 1 / 0.29 | 1 / 0.54
6 | 1 / 0.30 | 1 / 0.55
7 | 1 / 0.26 | 1 / 0.42
8 | 1 / 0.33 | 1 / 0.36
9 | 1 / 0.37 | 1 / 0.43
The better "mean" and the better "best" are shown in bold.

Table 11
Best and average values of the C metric obtained by MOBBO and OMOBBO.
Problem | MOBBO (Best / Mean) | OMOBBO (Best / Mean)
1 | 1 / 0.28 | 1 / 0.54
2 | 1 / 0.33 | 1 / 0.46
3 | 1 / 0.25 | 1 / 0.48
4 | 1 / 0.36 | 1 / 0.55
5 | 1 / 0.28 | 1 / 0.42
6 | 1 / 0.38 | 1 / 0.47
7 | 1 / 0.24 | 1 / 0.39
8 | 1 / 0.33 | 1 / 0.42
9 | 1 / 0.45 | 1 / 0.47
The better "mean" and the better "best" are shown in bold.

Table 12
Best and average values of the C metric obtained by NSGA-II and MOBBO.
Problem | NSGA-II (Best / Mean) | MOBBO (Best / Mean)
1 | 1 / 0.28 | 1 / 0.37
2 | 0.88 / 0.33 | 1 / 0.42
3 | 1 / 0.25 | 1 / 0.36
4 | 1 / 0.35 | 1 / 0.56
5 | 1 / 0.38 | 1 / 0.37
6 | 1 / 0.31 | 0.88 / 0.38
7 | 1 / 0.25 | 1 / 0.36
8 | 0.79 / 0.46 | 1 / 0.45
9 | 1 / 0.32 | 1 / 0.37
The better "mean" and the better "best" are shown in bold.

Table 13
Best and average values of the C metric obtained by OMOBBO and ONSGA-II.
Problem | OMOBBO (Best / Mean) | ONSGA-II (Best / Mean)
1 | 1 / 0.51 | 1 / 0.42
2 | 1 / 0.44 | 1 / 0.38
3 | 1 / 0.48 | 1 / 0.45
4 | 1 / 0.49 | 1 / 0.56
5 | 1 / 0.45 | 1 / 0.38
6 | 1 / 0.46 | 1 / 0.44
7 | 1 / 0.47 | 1 / 0.51
8 | 1 / 0.53 | 1 / 0.36
9 | 1 / 0.56 | 1 / 0.38
The better "mean" is shown in bold.
The distances of each alternative from the positive ideal solution, D*, and from the negative ideal solution, D-, are calculated as follows:

D_i^{*} = \sqrt{\sum_{j=1}^{n} (V_{ij} - V_j^{+})^{2}}, \quad i = 1, \ldots, m    (19)

D_i^{-} = \sqrt{\sum_{j=1}^{n} (V_{ij} - V_j^{-})^{2}}, \quad i = 1, \ldots, m    (20)
Table 14
The ranking results using TOPSIS.
Algorithm | D* | D- | CI* | Rank
NSGA-II | 0.074585 | 0.067124 | 0.473672 | 3
ONSGA-II | 0.055022 | 0.086120 | 0.610163 | 1
MOBBO | 0.089805 | 0.057107 | 0.388716 | 4
OMOBBO | 0.063002 | 0.089557 | 0.587030 | 2
Fig. 12. Summary of the performance of the algorithms in terms of the metrics.
Moreover, the closeness index (CI) for each alternative is computed using Eq. (21):

CI_i^{*} = \frac{D_i^{-}}{D_i^{-} + D_i^{*}}, \quad i = 1, \ldots, m    (21)
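A compact numerical sketch of Eqs. (19)-(21) is given below. The decision-matrix entries are illustrative placeholders rather than the metric averages actually used for Table 14, equal weights are used for the five criteria as stated above, and a common vector normalization of the decision matrix is assumed because the paper's normalization step is not reproduced in this section.

# TOPSIS sketch for Eqs. (19)-(21): equal weights over five criteria
# (CPU time, NS, MID, Spacing, MS). Matrix values are hypothetical.
import numpy as np

algorithms = ["NSGA-II", "ONSGA-II", "MOBBO", "OMOBBO"]
# Rows: alternatives; columns: CPU, NS, MID, Spacing, MS (illustrative values).
X = np.array([
    [512.0, 122.0, 4077.0, 1.28, 1.82e6],
    [433.0, 123.0, 3998.0, 0.89, 1.83e6],
    [560.0, 110.0, 4200.0, 1.40, 1.70e6],
    [450.0, 125.0, 3950.0, 1.05, 1.85e6],
])
benefit = np.array([False, True, False, False, True])  # NS and MS are benefit criteria
weights = np.full(X.shape[1], 1.0 / X.shape[1])        # equal weights for all criteria

# Vector normalization and weighting of the decision matrix (assumed).
V = weights * X / np.linalg.norm(X, axis=0)

# Positive (V+) and negative (V-) ideal solutions per criterion.
v_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
v_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))   # Eq. (19)
d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))   # Eq. (20)
ci = d_neg / (d_neg + d_pos)                      # Eq. (21)

for name, c in sorted(zip(algorithms, ci), key=lambda t: -t[1]):
    print(f"{name:9s}  CI* = {c:.4f}")

The alternative printed first (largest CI*) is the one ranked best, which is how the Rank column of Table 14 is obtained.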
While CI* can assume values between zero and one, the alternative with the highest CI* is selected as the best alternative. Note that larger values of the NS and MS metrics are desired, while smaller values of the CPU time, MID, and Spacing metrics are preferred. Note also that the metric average of each algorithm is considered in the decision matrix. A brief result of employing each of the above steps and the final ranking of the algorithms are presented in Table 14.

The results in Table 14 indicate that ONSGA-II is the best algorithm in terms of all five metrics, and OMOBBO is identified as the second-best opposition-based learning algorithm. The other two algorithms, which do not use the proposed OBL mechanism, are ranked next. Therefore, the experimental results and comparisons not only show that the proposed ONSGA-II and OMOBBO are compatible algorithms that outperform their original popular counterparts, but also highlight the importance of implementing the proposed OBL mechanism. In addition, as seen in Fig. 13 and Table 10, the performances of ONSGA-II and OMOBBO are relatively close to each other. In the next section, a formal comparison analysis between these two proposed OBL-based algorithms is performed.

5.3. The OBL-based algorithms comparison

After comparing the performance of the OBL-based algorithms with their original counterparts, the question that arises now is how ONSGA-II and OMOBBO differ from each other. To answer this question, the performances of these two algorithms in terms of all five metrics are calculated on 30 randomly generated test problems. For each problem, the proposed algorithms are implemented for 15 independent runs and the best value of each metric is reported in Table 15. To simplify the comparison in this table, (↑) marks metrics for which high values are desired, whereas (↓) marks metrics for which low values are desired.

According to the results in Table 15, while the CPU time required by ONSGA-II is always less than that of OMOBBO, the computation times of both algorithms are relatively close to each other up to Problem 12. However, as the numbers of center nodes and potential hubs increase, the difference becomes noticeable, and the computation time of OMOBBO is considerably higher than that of ONSGA-II. Moreover, both algorithms demonstrate similar performances in terms of the NS metric before Problem 8; the difference becomes tangible on Problems 9 to 20 and then decreases from Problems 21 to 30. Note that the minimum number of non-dominated solutions obtained by both algorithms is 100, whereas the maximum is 140. In terms of the other three metrics, namely MID, Spacing, and MS, both algorithms fluctuate and therefore cannot be compared intuitively; they are statistically compared in the next section.
5.3.1. Statistical comparison

In order to assess the performance of the proposed OMOBBO and compare it to the performance of ONSGA-II, the results of both algorithms are presented in Table 15, along with the results obtained on another 80 test problems (not shown here to save space). The results are statistically analyzed by paired-sample t-tests to examine the mean difference between the paired observations. The paired-sample t-test is a useful tool to compare the means of the five performance metrics of the two algorithms. These tests are carried out via Minitab 16 software. In order to make sure that the data follow a normal distribution, the Kolmogorov-Smirnov (K-S) test is utilized.
Fig. 13. Obtained Pareto-fronts of the algorithms on Problem 1.
Fig. 14. Interval plots for the CPU time and the Spacing metrics.
The K-S test is a nonparametric test to compare a sample with a reference probability distribution (one-sample K-S test) or to compare two samples (two-sample K-S test). It quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The K-S statistic is calculated under the null hypothesis that the sample is drawn from the reference distribution (in the one-sample case) or that the two samples are drawn from the same distribution (in the two-sample case). It is worthwhile to mention that the K-S test can be modified to serve as a goodness-of-fit test. In the special case of testing for normality, the sample is standardized and compared with the standard normal distribution, which is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates. The results of this test for the metrics obtained using OMOBBO and ONSGA-II at a 95% confidence level are reported in Tables 16 and 17, respectively. The statistical analysis of the results has been performed in Minitab 16 software. As can be seen, all the metrics obtained using both OMOBBO and ONSGA-II follow a normal distribution.
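The normality check just described (standardize the sample, then compare it with the standard normal distribution) can be sketched as follows; the authors report their output from statistical software, whereas this minimal Python version uses SciPy, and the ten sample values are hypothetical stand-ins for one metric column of Table 15 rather than the paper's data.

# One-sample Kolmogorov-Smirnov normality check: standardize the sample and
# compare it against N(0, 1). The data below are hypothetical.
import numpy as np
from scipy import stats

sample = np.array([1.42, 0.96, 1.10, 1.88, 0.73, 1.55, 1.21, 0.89, 1.34, 1.67])

z = (sample - sample.mean()) / sample.std(ddof=1)   # standardization
d_stat, p_value = stats.kstest(z, "norm")           # K-S test against the standard normal

print(f"K-S statistic = {d_stat:.3f}, P-value = {p_value:.3f}")
print("Normality is", "not rejected" if p_value > 0.05 else "rejected", "at the 95% level.")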
In what follows, the null hypothesis of the test for each metric at a 95% confidence level is as follows:
1- The mean CPU time obtained by OMOBBO is equal to that of ONSGA-II.
2- The mean of the NS metric obtained by OMOBBO is equal to that of ONSGA-II.
3- The mean of the MID metric obtained by OMOBBO is equal to that of ONSGA-II.
4- The mean of the Spacing metric obtained by OMOBBO is equal to that of ONSGA-II.
5- The mean of the MS metric obtained by OMOBBO is equal to that of ONSGA-II.
The summarized results of the tests are presented in Table 18. Based on these results, it can be concluded that ONSGA-II has a better CPU time performance with a P-value of 0.002, whereas OMOBBO has a better Spacing metric value with a P-value of 0.027. The interval plots in Fig. 14 graphically demonstrate these differences. Furthermore, the two algorithms do not have significant differences in terms of the other metrics.
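A minimal paired-sample t-test along the lines of the comparison summarized in Table 18 is sketched below. For illustration, the two lists reuse the CPU times of the first ten problems of Table 15 (OMOBBO versus ONSGA-II on the same problems); the paper's actual tests were carried out in Minitab over 110 problems, so this sketch is not expected to reproduce the reported t-values.

# Paired-sample t-test sketch for one metric (CPU time), pairing the two
# algorithms' results on the same problems (first ten rows of Table 15).
from scipy import stats

cpu_omobbo = [72.0, 102.0, 147.0, 168.0, 189.0, 222.0, 228.0, 244.5, 261.0, 297.0]
cpu_onsga2 = [58.5,  85.5, 118.5, 151.5, 163.5, 187.5, 198.0, 223.5, 237.0, 283.5]

t_stat, p_value = stats.ttest_rel(cpu_omobbo, cpu_onsga2)

print(f"t = {t_stat:.2f}, P-value = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis of equal mean CPU times at the 95% level.")
else:
    print("The null hypothesis of equal mean CPU times cannot be rejected.")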
Table 15
Multi-objective metrics obtained for each algorithm. In each cell, the two values are for OMOBBO / ONSGA-II, respectively.
Problem | N, H, p | CPU time (s) ↓ | NS ↑ | MID ↓ | Spacing ↓ | MS ↑
1 | 40,30,10 | 72 / 58.5 | 140 / 140 | 10,758,960 / 7,307,280 | 12,977,928 / 19,341,224 | 557,518,800 / 921,840,026
2 | 50,40,20 | 102 / 85.5 | 140 / 140 | 10,497,330 / 9,248,850 | 3,893,378 / 27,601,097 | 1,216,947,600 / 977,464,213
3 | 60,40,30 | 147 / 118.5 | 137 / 129 | 11,011,410 / 9,115,740 | 8,557,321 / 18,631,267 | 13,199,019,800 / 3,083,054,000
4 | 70,50,40 | 168 / 151.5 | 126 / 140 | 8,367,570 / 12,149,730 | 730,831 / 12,889,857 | 13,723,658,800 / 9,694,137,400
5 | 80,60,50 | 189 / 163.5 | 132 / 136 | 12,443,490 / 10,061,280 | 209,659 / 21,513,526 | 10,252,209,200 / 7,090,739,800
6 | 100,80,70 | 222 / 187.5 | 130 / 130 | 17,497,080 / 9,482,940 | 13,683,067 / 191,864 | 10,546,721,200 / 10,555,695,600
7 | 120,100,90 | 228 / 198 | 139 / 135 | 12,957,570 / 17,056,440 | 6,905,625 / 2,464,096 | 8,197,466,600 / 11,250,469,000
8 | 130,110,100 | 244.5 / 223.5 | 131 / 137 | 18,961,290 / 19,819,620 | 14,164,187 / 30,165,571 | 6,891,296,400 / 12,642,291,000
9 | 140,120,110 | 261 / 237 | 128 / 135 | 22,266,090 / 19,218,330 | 4,107,303 / 21,025,816 | 16,106,678,000 / 15,220,898,400
10 | 150,140,120 | 297 / 283.5 | 120 / 136 | 22,826,070 / 21,522,510 | 26,949,713 / 9,538,777 | 14,179,789,000 / 19,605,904,000
11 | 160,150,140 | 321 / 307.5 | 113 / 116 | 21,559,230 / 20,710,080 | 17,462,760 / 27,099,536 | 27,453,132,000 / 29,171,713,800
12 | 180,160,140 | 355.5 / 328.5 | 113 / 103 | 22,477,230 / 19,815,030 | 6,806,921 / 9,778,065 | 32,892,866,600 / 30,272,831,600
13 | 190,170,150 | 403.5 / 372 | 110 / 100 | 17,529,210 / 19,392,750 | 29,027,862 / 19,163,899 | 25,336,042,600 / 30,444,925,200
14 | 200,180,160 | 511.5 / 447 | 102 / 109 | 15,289,290 / 17,625,600 | 484,106 / 18,289,272 | 24,527,619,800 / 30,931,770,600
15 | 210,190,170 | 586.5 / 396 | 97 / 104 | 13,774,590 / 13,260,510 | 5,596,282 / 11,580,338 | 25,835,954,600 / 27,161,795,800
16 | 220,200,180 | 646.5 / 451.5 | 111 / 116 | 13,926,060 / 19,080,630 | 8,086,771 / 6,346,011 | 32,257,137,800 / 25,424,333,000
17 | 230,210,190 | 730.5 / 547.5 | 117 / 108 | 17,180,370 / 21,256,290 | 220,574 / 3,734,723 | 30,248,673,400 / 30,582,954,000
18 | 240,220,200 | 754.5 / 604.5 | 111 / 106 | 14,224,410 / 14,270,310 | 3,351,065 / 6,232,196 | 32,407,237,800 / 29,991,417,800
19 | 250,230,210 | 852 / 633 | 114 / 91 | 24,340,770 / 18,447,210 | 6,299,780 / 2,335,009 | 30,201,068,000 / 32,567,702,600
20 | 260,240,220 | 912 / 730.5 | 118 / 108 | 22,348,710 / 26,507,250 | 2,070,635 / 20,485,894 | 33,393,442,200 / 31,600,758,400
21 | 270,250,240 | 957 / 853.5 | 100 / 93 | 18,438,030 / 17,382,330 | 2,385,379 / 3,675,413 | 41,659,180,600 / 40,165,717,200
22 | 280,260,250 | 1036.5 / 916.5 | 100 / 95 | 22,954,590 / 19,383,570 | 6,070,599 / 7,904,654 | 47,174,944,800 / 40,953,363,000
23 | 290,270,260 | 1089 / 1032 | 91 / 94 | 24,001,110 / 23,863,410 | 1,648,612 / 29,449 | 44,458,134,800 / 47,561,871,000
24 | 300,280,260 | 1149 / 1056 | 93 / 98 | 26,126,280 / 25,185,330 | 6,432,061 / 5,858,014 | 47,489,870,400 / 52,051,994,000
25 | 310,290,270 | 1219.5 / 1168.5 | 98 / 92 | 22,541,490 / 26,149,230 | 2,496,335 / 5,043,216 | 52,030,711,400 / 51,253,778,000
26 | 320,300,280 | 1291.5 / 1189.5 | 99 / 97 | 22,991,310 / 25,056,810 | 5,363,918 / 5,733,629 | 46,887,843,000 / 52,041,518,600
27 | 330,310,300 | 1446 / 1407 | 93 / 99 | 20,912,040 / 18,401,310 | 7,797,954 / 7,737,267 | 53,669,029,200 / 45,993,215,400
28 | 350,320,310 | 2095.5 / 1476 | 95 / 98 | 21,563,820 / 22,876,560 | 3,689,320 / 18,006,875 | 43,061,572,800 / 48,626,506,600
29 | 360,330,320 | 2253 / 1674 | 92 / 96 | 24,423,390 / 22,890,330 | 5,900,143 / 2,546,148 | 47,520,759,400 / 52,999,267,200
30 | 380,350,340 | 2743.5 / 2200.5 | 98 / 90 | 27,223,290 / 24,047,010 | 2,847,373 / 628,778 | 51,416,897,200 / 43,927,855,200
Table 16
The results of the K-S test on the metrics obtained using OMOBBO.
K-S test | CPU time | NS | MID | Spacing | MS
N | 30 | 30 | 30 | 30 | 30
Normal parameters (mean; std. deviation) | 433.3000; 360.81404 | 122.3667; 18.13262 | 3998.4333; 1197.6556 | 1.4202; 1.10367 | 1.8244E6; 1.05036E6
Most extreme differences (absolute; positive; negative) | .176; .176; −.137 | .174; .174; −.161 | .140; .101; −.140 | .157; .157; −.132 | .108; .102; −.108
Kolmogorov–Smirnov Z | .966 | .951 | .767 | .858 | .593
Asymp. Sig. (2-tailed) | .518 | .726 | .598 | .553 | .873
Table 17
The results of the K-S test on the metrics obtained using ONSGA-II.
K-S test | CPU time | NS | MID | Spacing | MS
N | 30 | 30 | 30 | 30 | 30
Normal parameters (mean; std. deviation) | 517.4333; 449.07581 | 122.9333; 16.18414 | 4077.0667; 1146.17502 | .8885; .87570 | 1.8245E6; 1.03409E6
Most extreme differences (absolute; positive; negative) | .148; .143; −.148 | .155; .155; −.091 | .172; .103; −.172 | .225; .225; −.162 | .117; .115; −.117
Kolmogorov–Smirnov Z | .810 | .847 | .944 | 0.990 | .642
Asymp. Sig. (2-tailed) | .528 | .671 | .635 | .597 | .804
Table 18
The P-values of the paired-sample t-tests.
Metric | t-value | P-value | Test result | Chosen algorithm
CPU time | 4.14 | 0.002 | Null hypothesis is rejected | ONSGA-II
NS | 0.38 | 0.709 | Null hypothesis cannot be rejected | –
MID | 0.62 | 0.537 | Null hypothesis cannot be rejected | –
Spacing | 2.34 | 0.027 | Null hypothesis is rejected | OMOBBO
MS | 0.02 | 0.751 | Null hypothesis cannot be rejected | –
This can also be interpreted as a validation signal for the results obtained using OMOBBO.

6. Conclusion and future research

Hub networks are utilized in telecommunications, transportation systems, postal delivery systems, computer networks, etc. This paper introduces a bi-objective capacitated HLP under a competitive environment, in which a new company intends to locate p hubs and assign the center nodes to the located hubs in order to maximize the total captured flow and to minimize the total transportation cost simultaneously. A competitive environment is assumed in this problem, where another company is already operating in the market. It is assumed that three competition rules have been established between the companies, in which only a certain percentage of the traffic in each origin-destination pair can be captured if the transportation cost of the new company is not significantly less than that of its competitor. Due to the NP-hardness of the problem, two multi-objective meta-heuristic algorithms, MOBBO and NSGA-II, are proposed to solve the problem. Moreover, in order to enhance the performance of the proposed algorithms and to enrich the Pareto front, the binary OBL algorithm is utilized as a diversity mechanism to propose a novel multi-objective meta-heuristic algorithm, namely OMOBBO. As there was no benchmark available in the literature, a new parameter-tuned ONSGA-II with a multi-parent crossover operator was developed to validate the obtained results and to evaluate the performance of OMOBBO on randomly generated test instances. The parameters of both algorithms were calibrated using the Taguchi method with a new response defined as the multi-objective coefficient of variation.

To validate the proposed model, 110 numerical examples were randomly generated. The computational results not only indicated that the new company has the ability to compete with the existing company, but also demonstrated that a number of its optimal hub locations were similar to the ones selected by the existing company. Therefore, in the competitive environment, the new company is able to meet its objectives. Moreover, the performances of the proposed algorithms were compared to those of their original versions in terms of six performance metrics to highlight the effects of the proposed binary OBL. The results of the Mann-Whitney U test confirm that the OBL-based algorithms outperform their original versions at a 95% confidence level. In addition, all four algorithms were ranked using the TOPSIS method. The results of TOPSIS and the statistical comparison not only indicated that the proposed ONSGA-II and OMOBBO are compatible algorithms and
outperform their original versions, but also validated the effectiveness of the OBL mechanism. Finally, the OMOBBO and ONSGA-II algorithms were compared to each other (each solving 30 randomly generated problems), based on which their performances were statistically analyzed in terms of all performance metrics. The results of five paired-sample t-tests indicated that ONSGA-II performed better only in terms of the CPU time, whereas OMOBBO performed better in terms of Spacing. All of the above analyses confirm and validate the applicability of the proposed model and the methodologies carried out to solve the problem.

For future work, another objective such as reliability can be considered in modeling the problem. Utilizing the OBL mechanism in other meta-heuristic algorithms is also recommended. Developing a stochastic programming or a robust optimization model may also provide directions for future research.

Acknowledgments

The authors would like to acknowledge the efforts and the consideration of the editor and all reviewers for their valuable comments and suggestions to improve the quality of the paper.

References

[1] J. Kratica, Z. Stanimirovic, D. Tosic, V. Filipovic, Two genetic algorithms for solving the uncapacitated single allocation p-hub median problem, Eur. J. Operat. Res. 182 (2007) 15–28.
[2] C.-C. Lin, Y.-I. Chen, The integration of Taiwanese and Chinese air networks for direct air cargo services, Transport. Res. Part A 37 (2003) 629–647.
[3] H. Karimi, M. Bashiri, Hub covering location problems with different coverage types, Scientia Iranica 18 (2011) 1571–1578.
[4] M.E. O'Kelly, D.L. Bryan, Hub location with flow economies of scale, Transport. Res. Part B 32 (1998) 605–616.
[5] J. Ebery, M. Krishnamoorthy, A. Ernst, N. Boland, The capacitated multiple allocation hub location problem: formulations and algorithms, Eur. J. Operat. Res. 120 (2000) 614–631.
[6] J. Ebery, Solving large single allocation p-hub problems with two or three hubs, Eur. J. Operat. Res. 128 (2001) 447–458.
[7] M. Pérez, F. Almeida, J.M. Moreno-Vega, A hybrid GRASP-path relinking algorithm for the capacitated p-hub median problem, Lect. Notes Comput. Sci. 3636 (2005) 142–153.
[8] M. Costa, M.E. Captivo, J. Clímaco, Capacitated single allocation hub location problem - a bi-criteria approach, Comput. Operat. Res. 35 (2008) 3671–3695.
[9] R.S. de Camargo, G. Miranda Jr., H.P. Luna, Benders decomposition for the uncapacitated multiple allocation hub location problem, Comput. Operat. Res. 35 (2008) 1047–1064.
[10] I. Contreras, J.A. Díaz, E. Fernández, Lagrangean relaxation for the capacitated hub location problem with single assignment, OR Spectrum 31 (2009) 483–505.
[11] T. Sim, T.J. Lowe, B.W. Thomas, The stochastic p-hub center problem with service-level constraints, Comput. Operat. Res. 36 (2009) 3166–3177.
[12] W. Ge, J. Zhu, W. Wu, A tree pruning algorithm for the capacitated p-hub median problems, in: Information Engineering (ICIE), 2010 WASE International Conference on, 2010, pp. 324–327.
[13] Z. Stanimirović, A genetic algorithm approach for the capacitated single allocation p-hub median problem, Comput. Inform. 29 (2010) 117–132.
[14] K. Yang, Y.K. Liu, X. Zhang, Stochastic p-hub center problem with discrete time distributions, Lect. Notes Comput. Sci. 6676 (2011) 182–191.
[15] H. Yaman, S. Elloumi, Star p-hub center problem and star p-hub median problem with bounded path lengths, Comput. Operat. Res. 39 (2012) 2725–2732.
[16] M. Rabbani, F. Pakzad, S.M. Kazemi, A new modelling for p-hub median problem by considering flow-dependent costs, in: Modeling, Simulation and Applied Optimization (ICMSAO), 2013 5th International Conference on, 2013, pp. 1–4.
[17] K. Yang, Y.-K. Liu, G.-Q. Yang, Solving fuzzy p-hub center problem by genetic algorithm incorporating local search, Appl. Soft Comput. 13 (2013) 2624–2632.
[18] M. Bashiri, M. Mirzaei, M. Randall, Modeling fuzzy capacitated p-hub center problem and a genetic algorithm solution, Appl. Math. Modell. 37 (2013) 3513–3525.
[19] R.Z. Farahani, M. Hekmatfar, A.B. Arabani, E. Nikbakhsh, Hub location problems: a review of models, classification, solution techniques, and applications, Comput. Ind. Eng. 64 (2013) 1096–1109.
[20] H.A. Eiselt, G. Laporte, Sequential location problems, Eur. J. Operat. Res. 96 (1996) 217–231.
[21] H.A. Eiselt, G. Laporte, J.F. Thisse, Competitive location models: a framework and bibliography, Transport. Sci. 27 (1993) 44–54.
[22] V. Marianov, D. Serra, C. ReVelle, Location of hubs in a competitive environment, Eur. J. Operat. Res. 114 (1999) 363–371.
[23] B. Wagner, A note on "location of hubs in a competitive environment", Eur. J. Operat. Res. 184 (2008) 57–62.
[24] A. Lüer-Villagra, V. Marianov, A competitive hub location and pricing problem, Eur. J. Operat. Res. 231 (2013) 734–744.
[25] M. Sasaki, J.F. Campbell, M. Krishnamoorthy, A.T. Ernst, A Stackelberg hub arc location model for a competitive environment, Comput. Operat. Res. 47 (2014) 27–41.
[26] R. Tavakkoli-Moghaddam, Y. Gholipour-Kanani, M. Shahramifar, A multi-objective imperialist competitive algorithm for a capacitated single-allocation hub location problem, Int. J. Eng. Trans. C 26 (2013) 605–620.
[27] H.R. Tizhoosh, Opposition-based learning: a new scheme for machine intelligence, in: International Conference on Computational Intelligence for Modeling, Control and Automation, vol. 1, Vienna, Austria, 2005, pp. 695–701.
[28] H.R. Tizhoosh, Opposite fuzzy sets with applications in image processing, in: Proceedings of International Fuzzy Systems Association World Congress, Lisbon, Portugal, 2009, pp. 36–41.
[29] M. Ventresca, H.R. Tizhoosh, Improving the convergence of backpropagation by opposite transfer functions, in: IEEE International Joint Conference on Neural Networks, 2006, pp. 4777–4784.
[30] M. Shokri, H.R. Tizhoosh, M. Kamel, Opposition-based Q(λ) with non-Markovian update, in: Proceedings IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007), Hawaii, 2007, pp. 288–295.
[31] S. Rahnamayan, H.R. Tizhoosh, M. Salama, Opposition-based differential evolution, IEEE Trans. Evol. Comput. 12 (2008) 64–79.
[32] M. Ergezer, D. Simon, D. Du, Oppositional biogeography-based optimization, in: IEEE International Conference on Systems, Man and Cybernetics, 2009, pp. 1009–1014.
[33] H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, M. Ventresca, Enhancing particle swarm optimization using generalized opposition-based learning, Inform. Sci. 181 (2011) 4699–4714.
[34] A. Malisia, Improving the exploration ability of ant-based algorithms, in: Oppositional Concepts in Computational Intelligence, Springer, 2008, pp. 121–142.
[35] M. Ventresca, H.R. Tizhoosh, Simulated annealing with opposite neighbors, in: Foundations of Computational Intelligence (FOCI 2007), IEEE Symposium on, IEEE, 2007, pp. 186–192.
[36] Z. Seif, M.B. Ahmadi, Opposition versus randomness in binary spaces, Appl. Soft Comput. 27 (2015) 28–37.
[37] S.H.R. Pasandideh, S.T.A. Niaki, A.H. Niknamfar, Lexicographic max–min approach for an integrated vendor-managed inventory problem, Knowl.-Based Syst. 59 (2014) 58–65.
[38] H. Kubotani, K. Yoshimura, Performance evaluation of acceptance probability functions for multi-objective SA, Comput. Operat. Res. 30 (2003) 427–442.
[39] D. Simon, Biogeography-based optimization, IEEE Trans. Evol. Comput. 12 (2008) 702–713.
[40] R. Ramezanian, D. Rahmani, F. Barzinpour, An aggregate production planning model for two phase production systems: solving with genetic algorithm and tabu search, Expert Syst. Appl. 39 (2012) 1256–1263.
[41] C. Fei, J.-P. Li, Multi-focus image fusion based on nonsubsampled contourlet transform and multi-objective optimization, in: Wavelet Active Media Technology and Information Processing (ICWAMTIP), IEEE International Conference on, 2012, pp. 189–192.
[42] K. Sarrafha, S.H.A. Rahmati, S.T.A. Niaki, A. Zaretalab, A bi-objective integrated procurement, production, and distribution problem of a multi-echelon supply chain network design: a new tuned MOEA, Comput. Operat. Res. 54 (2015) 35–51.
[43] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2002) 182–197.
[44] S.N. Sivanandam, S.N. Deepa, Introduction to Genetic Algorithms, Springer, 2007.
[45] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley, Chichester, 2001.
[46] E. Zitzler, L. Thiele, Multiobjective optimization using evolutionary algorithms — a comparative case study, in: A. Eiben, T. Bäck, M. Schoenauer, H.-P. Schwefel (Eds.), Parallel Problem Solving from Nature — PPSN V, Springer, Berlin Heidelberg, 1998, pp. 292–301.
[47] J.R. Schott, Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1995.
[48] G.S. Peace, Taguchi Methods: A Hands-On Approach, Addison-Wesley, 1993.
[49] S.H.A. Rahmati, V. Hajipour, S.T.A. Niaki, A soft-computing Pareto-based meta-heuristic algorithm for a multi-objective multi-server facility location problem, Appl. Soft Comput. 13 (2013) 1728–1740.
[50] C.-C. Chou, Application of FMCDM model to selecting the hub location in the marine transportation: a case study in southeastern Asia, Math. Comput. Modell. 51 (2010) 791–801.
[51] M. Tawarmalani, N.V. Sahinidis, A polyhedral branch-and-cut approach to global optimization, Math. Program. 103 (2005) 225–249.
[52] N.V. Sahinidis, BARON 12.6.0: Global Optimization of Mixed-Integer Nonlinear Programs, User's Manual, 2013.
[53] C.L. Hwang, K. Yoon, Multiple Attributes Decision Making Methods and Applications, Springer, Berlin, 1981.