Chemometrics and Intelligent Laboratory Systems 146 (2015) 198–210
Self-adaptive multi-objective teaching-learning-based optimization and its application in ethylene cracking furnace operation optimization

Kunjie Yu a, Xin Wang b, Zhenlei Wang a,⁎
a Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
b Center of Electrical & Electronic Technology, Shanghai Jiao Tong University, Shanghai 200240, China
⁎ Corresponding author. Tel.: +86 21 64251250. E-mail addresses: [email protected], [email protected] (Z. Wang).
Article history: Received 30 January 2015; Received in revised form 5 May 2015; Accepted 15 May 2015; Available online 24 May 2015
Keywords: Multi-objective optimization; Teaching-learning-based optimization; Cracking furnace; Operation optimization
Abstract
A self-adaptive multi-objective teaching-learning-based optimization (SA-MTLBO) algorithm is proposed in this paper. In SA-MTLBO, the learners self-adaptively select their mode of learning according to their level of knowledge in the classroom. The excellent learners are more likely to choose the learner phase to enhance population diversity, while the common learners tend to choose the teacher phase to improve the convergence ability of the algorithm. Thus, learners at different levels choose appropriate modes of learning and carry out the corresponding search functions, which efficiently enhances the performance of the algorithm. To evaluate the effectiveness of the proposed algorithm, SA-MTLBO is first compared with other algorithms on twelve test problems. The results demonstrate that SA-MTLBO can generate Pareto optimal fronts with good convergence and distribution. Finally, SA-MTLBO is used to maximize the yields of ethylene, propylene, and butadiene in the naphtha pyrolysis process. The computational results indicate that the operation of the ethylene cracking furnace can be improved by increasing the yields of ethylene, propylene, and butadiene. © 2015 Elsevier B.V. All rights reserved.
1. Introduction

Ethylene is the most produced organic chemical in the world, and most of the ethylene volume comes from cracking furnaces. The ethylene cracking furnace is the key unit of the ethylene process, and naphtha is an important feedstock for pyrolysis. Naphtha pyrolysis also produces other important monomers, namely propylene and butadiene; however, their yield rates are smaller than that of ethylene. Because of the large price fluctuation of petroleum, the cost of ethylene, propylene and butadiene is so high that even a slight improvement in their yields can significantly increase the economic benefits [1]. Therefore, methods that increase the yields of ethylene, propylene and butadiene are attracting considerable attention. The structure of the cracking furnace is shown in Fig. 1 [2]. The feedstock is first fed into the convection section to be preheated and then mixed with the dilution stream (DS). In the convection section, the mixture of the feedstock and DS is heated to the target temperature for cracking. The vaporized mixture is subsequently fed into the radiant section. The cracking reaction mainly occurs in the radiant section at high temperature. The cracked gas is then quenched and separated in the downstream process. In the ethylene plant, the contribution of the cracking furnaces is essential, and 50–60% of the energy is consumed in the cracking furnaces. Thus, the operation of the cracking furnace can significantly impact the benefit of the plant. There are four main manipulated
variables (MVs) in cracking furnaces: coil output temperature (COT), feedstock flow rate (FLOW), steam hydrocarbon ratio (SHR) and coil output pressure (COP). The operation optimization of the pyrolysis process aims to maximize the yield rates of ethylene, propylene, and butadiene. However, the three yield rates cannot be increased at the same time because there are conflicts among them. Therefore, the problem is a multi-objective optimization problem in which a trade-off exists and a compromise must be made. Thus, a multi-objective optimization algorithm should be used to solve the problem quickly and provide a good spread of nondominated solutions from which the decision makers can choose according to their preferences. Nature-inspired population-based algorithms solve a wide range of problems by simulating various natural phenomena. Researchers have proposed a number of algorithms that originated from different natural phenomena, such as particle swarm optimization (PSO) [3], differential evolution (DE) [4], artificial bee colony (ABC) [5], and the cuckoo search algorithm (CS) [6,7]. These evolutionary algorithms have been improved and applied by researchers to engineering optimization problems [8–16]. In addition, evolutionary algorithms (EAs) are suitable for multi-objective optimization problems (MOPs) due to their population-based nature, which allows an approximation of the Pareto set to be obtained in a single run. Since the pioneering attempt of Schaffer [17] to solve MOPs, a large variety of multi-objective evolutionary algorithms (MOEAs) have been proposed and applied to various domains [18–23]. In 1999, Zitzler et al. [24] proposed the strength Pareto EA (SPEA), which employed an external archive to store the nondominated solutions and a clustering mechanism to enhance diversity.
Fig. 1. Schematic diagram of ethylene cracking furnace.
Two years later, they designed an improved version named SPEA2 by incorporating an enhanced fitness assignment strategy and a truncation mechanism [25]. Srinivas and Deb [26] proposed the nondominated sorting genetic algorithm (NSGA), which used a nondominated sorting strategy to select nondominated solutions and a niching strategy to maintain the population diversity. Later, Deb et al. [27] improved the nondominated sorting strategy, replaced the niching strategy with a crowding distance mechanism, and proposed the well-known NSGA-II. Nebro et al. [28] presented an archive-based hybrid scatter search (ABYSS) for MOPs, which followed the scatter search template and used the mutation and crossover operators of EAs. Computational results on benchmark instances of MOPs showed that ABYSS was very competitive with or superior to state-of-the-art MOEAs such as NSGA-II and SPEA2. Zhang and Li [29] proposed the multi-objective evolutionary algorithm based on decomposition (MOEA/D), which decomposes an MOP into a number of scalar optimization sub-problems and optimizes them simultaneously, each using information from its neighboring sub-problems. Recently, several research efforts have focused on the performance improvement of MOEAs. Li et al. [30] presented an algorithm for achieving a balance between proximity and diversity in MOEAs. Yildiz et al. [31] proposed an integrated and effective multi-objective optimization method based on particle swarm optimization (PSO) and the receptor editing property of the immune system. Chen et al. [32] proposed an adaptive multi-objective differential evolution algorithm called EPSMODE, in which an ensemble of parameter values and mutation strategies compete to produce offspring based on a Pareto domination mechanism. Wang et al. [33] improved the accuracy and diversity of MOEAs through the local application of evolutionary operators to selected sub-populations. Tang et al. [34] proposed a hybrid multi-objective evolutionary algorithm (HMOEA) for real-valued MOPs by incorporating the concepts of personal best and global best from PSO and multiple crossover operators to update the population. Chen et al. [35] extended the ranking-based mutation operator [36] to multi-objective differential evolution (MODE) and proposed MODE with ranking-based mutation (MODE-RMO). Teaching-learning-based optimization (TLBO) [37] is a recently developed optimization method that simulates the teaching-learning process in the classroom. TLBO is easy to understand and implement; it requires few parameters and has been shown to perform well on many optimization problems. In order to enhance the performance
of the basic TLBO, many modifications of the basic TLBO have been reported in the literature [38–44] and applied to many single-objective optimization problems. In addition, some research efforts have been devoted to designing multi-objective variants of the basic TLBO. Niknam et al. [45] proposed θ-multi-objective teaching-learning-based optimization (θ-MTLBO), in which the optimization process is based on phase angles allocated to the design variables rather than on the design variables themselves; the proposed θ-MTLBO was successfully implemented to solve the dynamic economic emission dispatch problem. Later, Niknam et al. [46] proposed a multi-objective optimization algorithm based on a modified teaching-learning-based optimization algorithm, which used an external repository to save the obtained Pareto optimal solutions during the search process and a fuzzy clustering method to control the size of the repository. Medina et al. [47] proposed a multi-objective teaching-learning algorithm based on decomposition, in which the multi-objective optimization problem is decomposed into different scalar optimization sub-problems that the algorithm then solves simultaneously. Zou et al. [48] proposed a multi-objective TLBO (MOTLBO) algorithm by incorporating the nondominated sorting and crowding distance sorting concepts of NSGA-II and applied it to solve two constrained real-world MOPs. In this paper, a self-adaptive multi-objective teaching-learning-based optimization (SA-MTLBO) is proposed to solve the operation optimization of the naphtha pyrolysis process. In SA-MTLBO, the learners adaptively select the learning phases according to their level in the classroom at each generation. This differs from the basic TLBO, where the learners undergo the teacher phase and the learner phase successively. In SA-MTLBO, the learners at different levels choose appropriate learning modes and carry out the corresponding search functions. The excellent learners tend to choose the learner phase to maintain population diversity, while the common learners are more likely to choose the teacher phase to enhance the convergence ability of the algorithm. Under the same number of function evaluations, SA-MTLBO, by integrating the self-adaptive choice of learning modes, can run more search generations than the basic TLBO, so that SA-MTLBO can efficiently enhance the optimization performance. Twelve benchmark problems are solved by the proposed algorithm and other state-of-the-art MOEAs, and the comparison results are given in the paper. Finally, SA-MTLBO is applied to solve the designed tri-objective problem of the naphtha pyrolysis process. The application results show that SA-MTLBO is an effective and efficient algorithm for MOPs. The rest of the paper is organized as follows. Section 2 briefly describes the basic TLBO. Section 3 presents the proposed algorithm in detail. In Section 4, the computational experiments for evaluating the performance of SA-MTLBO are described. Section 5 gives the multi-objective optimization model of the naphtha pyrolysis process and the simulation results. This paper is concluded in Section 6.

2. Teaching-learning-based optimization

Teaching-learning is an important process in which every individual learns something from other individuals. Rao et al. [37] first developed TLBO. There are two phases in TLBO: the teacher phase and the learner phase. In the teacher phase, learners learn from the teacher; in the learner phase, learners learn from each other.
The population in TLBO is composed of a group of learners, i.e., a class of learners. The different subjects offered to the learners are analogous to the different design variables of the optimization problem, and the learners' results after undergoing the teacher phase and the learner phase are analogous to the fitness value of the optimization problem. The teacher is considered the best solution obtained so far in the entire population. TLBO is described in detail in the following.

2.1. Teacher phase

The teacher phase is the initial part of the algorithm, in which students improve their knowledge with the help of the teacher, who is the most
knowledgeable person in the class. During this phase, the teacher gives knowledge to the learners and puts effort into increasing the mean result of the class. At any iteration g, suppose the number of subjects (i.e., design parameters) is D, the number of learners (i.e., the population size, i = 1, 2, …, NP) is NP, and mean_{j,g} is the mean result of the class in a particular subject j (j = 1, 2, …, D). Let X_{b,g} (b ∈ i) be the best learner in all subjects, who is identified as the teacher for that cycle. The teacher will try to raise the knowledge level of the whole class up to his own level of knowledge. However, learners will gain knowledge according to the quality of teaching delivered by the teacher and the quality of the learners present in the class. Considering this fact, the difference between the result of the teacher and the mean result of the learners in each subject is given by the following equation:

Difference_mean_{j,g} = rand (X_{b,j,g} − T_F Mean_{j,g})    (1)

where X_{b,j,g} is the result of the teacher in subject j, rand is a random number in the range [0, 1], and T_F is the teaching factor that determines the average to be changed. The value of T_F is either 1 or 2 and is randomly determined by the following equation:

T_F = round(1 + rand(0, 1))    (2)

The present solution is updated by Eq. (3):

X^new_{j,g,i} = X_{j,g,i} + Difference_mean_{j,g}    (3)

where X^new_{g,i} is the updated value of X_{g,i}; X^new_{g,i} is retained if it gives superior performance. All accepted solutions at the end of the teacher phase are maintained, and these values become the input to the learner phase.

Fig. 2. Each individual's p_i in a ranked population with NP = 100.

2.2. Learner phase

The learner phase simulates learning through interaction with other learners. A learner learns new information when other learners have more knowledge than him or her. The learning phenomenon is expressed by Eqs. (4) and (5). Two learners X_{g,p} and X_{g,q} are randomly selected, such that X_{g,p} ≠ X_{g,q}:

X^new_{j,g,p} = X_{j,g,p} + rand (X_{j,g,p} − X_{j,g,q}),  if f(X_{g,p}) ≤ f(X_{g,q})    (4)

X^new_{j,g,p} = X_{j,g,p} + rand (X_{j,g,q} − X_{j,g,p}),  if f(X_{g,q}) < f(X_{g,p})    (5)

(Eqs. (4) and (5) are written for a minimization problem; the reverse holds for a maximization problem.) Here X^new_{g,p} is the updated value of X_{g,p}; X^new_{g,p} is retained if it gives superior performance.
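To make the two phases concrete, the following is a minimal sketch of one generation of the basic TLBO for a single-objective minimization problem. It is an illustration rather than the authors' implementation; the helper names (evaluate, lb, ub) and the greedy acceptance details are assumptions.

```python
import numpy as np

def tlbo_generation(pop, fitness, evaluate, lb, ub):
    """One generation of basic TLBO for single-objective minimization.
    `evaluate`, `lb` and `ub` are assumed helpers: the objective function
    and the lower/upper bounds of the design variables."""
    NP, D = pop.shape
    # Teacher phase: shift every learner toward the best solution (the teacher).
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    for i in range(NP):
        TF = np.round(1.0 + np.random.rand())             # teaching factor, Eq. (2)
        diff = np.random.rand(D) * (teacher - TF * mean)  # Difference_mean, Eq. (1)
        trial = np.clip(pop[i] + diff, lb, ub)            # Eq. (3)
        f_trial = evaluate(trial)
        if f_trial < fitness[i]:                          # keep only improving solutions
            pop[i], fitness[i] = trial, f_trial
    # Learner phase: learn from a randomly chosen peer, Eqs. (4) and (5).
    for p in range(NP):
        q = np.random.choice([j for j in range(NP) if j != p])
        r = np.random.rand(D)
        if fitness[p] <= fitness[q]:
            trial = pop[p] + r * (pop[p] - pop[q])        # Eq. (4)
        else:
            trial = pop[p] + r * (pop[q] - pop[p])        # Eq. (5)
        trial = np.clip(trial, lb, ub)
        f_trial = evaluate(trial)
        if f_trial < fitness[p]:
            pop[p], fitness[p] = trial, f_trial
    return pop, fitness
```

In the multi-objective SA-MTLBO described in the next section, this greedy acceptance is replaced by the Pareto-dominance rules of Section 3.2.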
Fig. 3. Self-adaptively select the modes of learning.
Table 1
Comparison of the GD measure between SA-MTLBO and other MOEAs, mean (Std).

Problems | SA-MTLBO | MOTLBO | MODE-RMO | ABYSS | NSGA-II
ZDT1 | 2.462E-04(4.147E-05) | 2.343E-04(4.233E-05)= | 2.475E-03(3.521E-04)+ | 2.513E-04(1.540E-05)= | 2.913E-04(5.505E-05)=
ZDT2 | 9.647E-05(6.567E-06) | 2.379E-04(1.875E-04)+ | 3.876E-03(2.551E-04)+ | 1.151E-04(2.034E-05)+ | 1.513E-04(2.124E-05)+
ZDT3 | 1.614E-04(1.100E-05) | 1.748E-04(3.732E-05)= | 1.086E-02(3.156E-02)+ | 1.580E-04(1.403E-05)= | 1.853E-04(1.013E-05)+
ZDT4 | 3.196E-04(1.512E-05) | 2.092E+00(1.402E+00)+ | 1.454E-03(3.904E-03)+ | 5.718E-04(2.845E-04)+ | 3.731E-04(1.307E-04)=
ZDT6 | 6.892E-05(2.755E-06) | 7.863E-03(1.829E-02)= | 1.106E-02(8.754E-04)+ | 9.275E-05(6.043E-06)+ | 3.510E-04(4.353E-05)+
DTLZ1 | 3.610E-04(6.643E-05) | 3.119E+00(3.113E-01)+ | 2.614E-04(5.966E-06)- | 9.785E-03(2.868E-02)= | 5.846E-03(1.070E-02)+
DTLZ2 | 4.846E-03(4.255E-03) | 5.525E-03(5.216E-03)= | 7.263E-04(2.244E-05)- | 7.662E-04(2.944E-05)- | 1.523E-03(2.688E-04)-
DTLZ3 | 6.900E-01(1.044E+00) | 2.114E+01(1.655E+00)+ | 5.747E+00(3.931E+00)+ | 5.715E-01(5.360E-01)= | 8.073E-01(9.392E-01)=
DTLZ4 | 1.085E-03(1.252E-04) | 5.990E-03(8.013E-03)= | 6.926E-04(3.046E-05)- | 7.414E-04(2.521E-05)- | 1.340E-03(2.168E-04)+
DTLZ5 | 9.256E-05(4.033E-05) | 2.144E-05(2.200E-05)- | 9.157E-06(5.752E-07)- | 2.383E-05(8.272E-06)- | 2.209E-04(5.791E-05)+
DTLZ6 | 8.864E-06(3.307E-07) | 8.843E-06(5.026E-07)= | 5.542E-01(4.566E-02)+ | 4.290E-02(1.496E-02)+ | 8.714E-02(5.288E-03)+
DTLZ7 | 2.920E-03(7.720E-04) | 8.128E-04(3.540E-04)- | 1.103E-02(3.026E-03)+ | 1.673E-03(4.544E-04)- | 3.727E-03(5.710E-04)+
+/=/- |  | 4/6/2 | 8/0/4 | 4/4/4 | 8/3/1
3. Self-adaptive multi-objective TLBO (SA-MTLBO)

In this section, the proposed SA-MTLBO is presented in detail. In SA-MTLBO, the learners adaptively select the learning phases according to their levels in the classroom at each generation. A learner's level is determined by the learner's ranking in the current classroom, so a ranking assignment strategy is needed, and a selection probability is then assigned to each learner.

3.1. Selection probability assignment strategy

In multi-objective optimization, the population cannot be directly sorted from best to worst as in single-objective optimization, because there are many nondominated solutions. In order to generate an approximate Pareto set with both good convergence and good spread, fast nondominated sorting and the crowding distance are used to rank the population.
3.1.1. Fast nondominated sorting and crowding distance calculation

The fast nondominated sorting and the crowding distance were proposed by Deb et al. in NSGA-II [27]. In the nondominated sorting, for each solution i in a solution set, two entities are calculated: (1) the domination count n_i, the number of solutions which dominate solution i, and (2) s_i, the set of solutions that solution i dominates. All solutions in the first nondominated front F1 have a domination count n_i = 0. Then, for each solution i with n_i = 0, each member k of the set s_i is visited and its domination count is reduced by one. In doing so, if the domination count of any member becomes zero, it is put in a separate list Q. These members belong to the second nondominated front F2. The above procedure is then continued with each member of Q, and the third front F3 is identified. This process is repeated until all fronts are identified. The crowding distance is used to obtain an estimate of the density of solutions surrounding a particular individual i in the population. The crowding distance computation requires sorting the population according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary solutions (solutions with the smallest and largest function values) are assigned an infinite distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of the two adjacent solutions. This calculation is continued for the other objective functions. The overall crowding distance value is calculated as the sum of the individual distance values corresponding to each objective. Each objective function is normalized before calculating the crowding distance.
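The two procedures can be sketched as follows for a minimization problem; this is a compact illustration of the bookkeeping described above (domination counts n_i in the array `n`, dominated sets s_i in the list `S`), with F an array of objective vectors, not the authors' code.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def fast_nondominated_sort(F):
    """F: (N, M) array of objective values. Returns a list of fronts (lists of indices)."""
    N = len(F)
    S = [[] for _ in range(N)]              # s_i: solutions dominated by i
    n = np.zeros(N, dtype=int)              # n_i: number of solutions dominating i
    fronts = [[]]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                S[i].append(j)
            elif dominates(F[j], F[i]):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)             # first nondominated front F1
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]                      # drop the trailing empty front

def crowding_distance(F, front):
    """Crowding distance of the solutions in one front, normalized per objective."""
    dist = np.zeros(len(front))
    sub = F[front]
    for m in range(F.shape[1]):
        order = np.argsort(sub[:, m])
        fmin, fmax = sub[order[0], m], sub[order[-1], m]
        dist[order[0]] = dist[order[-1]] = np.inf     # boundary solutions
        if fmax == fmin:
            continue
        for k in range(1, len(front) - 1):
            dist[order[k]] += (sub[order[k + 1], m] - sub[order[k - 1], m]) / (fmax - fmin)
    return dist
```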
3.1.2. Ranking and selection probability assignment

After the fast nondominated sorting and the crowding distance calculation, the nondomination front number i_front and the crowding distance i_cd of each solution i are obtained. Solution i is better than solution j if either of the following conditions is satisfied: (1) i_front < j_front; (2) i_front = j_front and i_cd > j_cd. Then, the population can be sorted in ascending order based on these pairwise comparisons, and the ranking of an individual is assigned by Eq. (6):

R_i = NP − i,  i = 1, 2, …, NP    (6)

where NP is the population size. According to Eq. (6), a better individual obtains a higher ranking. After assigning the ranking to each individual, the selection probability p_i of the i-th individual is calculated by Eq. (7):

p_i = 0.5 [1 − cos(R_i π / NP)],  i = 1, 2, …, NP    (7)
The curve of p_i assigned for a ranked population of 100 is shown in Fig. 2. Each individual from 1 to 100 has a p_i value ranging from 1 to 0, and the higher-ranking individuals have the larger p_i. After calculating the selection probability of each individual, the learners select their mode of learning according to their own selection probability. The selection process of the learning phase is described in Fig. 3. The learners at different levels choose different modes of learning and execute the corresponding search function. In the teacher phase, the best individual (the teacher) makes great efforts to improve the average level of the class and enhance the convergence ability of the algorithm. In the learner phase, by contrast, the learners acquire knowledge through random interaction with others, which helps maintain the diversity of the population. Therefore, in the proposed selection method, the better learners are more likely to choose the learner phase to enhance the diversity of the population, while the remaining learners are more likely to choose the teacher phase to improve the convergence ability of the algorithm. Hence, under the same number of function evaluations, the strategy of self-adaptively choosing the mode of learning can efficiently enhance the performance of the algorithm compared with the serial learning mode (teacher phase followed by learner phase).
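Eqs. (6) and (7) and the selection step of Fig. 3 can be combined as in the sketch below; the population is assumed to be already sorted best-first by the nondominated sorting and crowding distance of Section 3.1.1, and the per-learner random draw is an assumed implementation detail.

```python
import numpy as np

def learning_phase_choice(NP, rng=np.random):
    """For a population already sorted best-first, decide which phase each learner
    enters in this generation: True -> learner phase, False -> teacher phase."""
    i = np.arange(1, NP + 1)                      # position after sorting, 1 = best learner
    R = NP - i                                    # ranking, Eq. (6): the best learner gets NP - 1
    p = 0.5 * (1.0 - np.cos(R * np.pi / NP))      # selection probability, Eq. (7)
    # High-p (excellent) learners tend to enter the learner phase to preserve diversity;
    # low-p (common) learners tend to enter the teacher phase to speed up convergence.
    return rng.rand(NP) < p
```

For NP = 100 this reproduces the curve of Fig. 2: the best learner receives a probability close to 1 and the worst learner receives 0.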
Table 2
Comparison of the IGD measure between SA-MTLBO and other MOEAs, mean (Std).

Problems | SA-MTLBO | MOTLBO | MODE-RMO | ABYSS | NSGA-II
ZDT1 | 2.712E-04(1.189E-05) | 2.925E-04(1.743E-05)+ | 1.038E-03(6.734E-05)+ | 1.967E-04(2.857E-06)- | 2.675E-04(1.611E-05)=
ZDT2 | 2.645E-04(1.387E-05) | 3.466E-04(5.758E-05)+ | 1.688E-03(9.182E-05)+ | 2.014E-04(2.492E-06)- | 2.684E-04(1.324E-05)=
ZDT3 | 3.196E-04(1.512E-05) | 2.352E-03(6.333E-03)+ | 2.330E-03(4.174E-03)+ | 2.956E-03(3.491E-03)+ | 3.038E-04(1.288E-05)-
ZDT4 | 2.658E-04(1.491E-05) | 5.375E-01(3.462E-01)+ | 7.862E-04(1.706E-03)+ | 3.157E-04(1.101E-04)= | 2.955E-04(4.000E-05)=
ZDT6 | 2.437E-04(2.067E-05) | 2.400E-04(1.781E-05)= | 4.896E-03(3.794E-04)+ | 1.578E-04(7.496E-07)- | 2.571E-04(1.846E-05)=
DTLZ1 | 4.143E-04(1.915E-05) | 2.037E-01(6.262E-02)+ | 4.054E-04(1.849E-05)= | 1.186E-03(1.838E-03)+ | 5.630E-04(2.605E-04)+
DTLZ2 | 1.046E-03(4.970E-05) | 1.133E-03(6.818E-05)+ | 1.026E-03(3.955E-05)= | 1.166E-03(7.841E-05)+ | 1.118E-03(4.695E-05)+
DTLZ3 | 3.785E-02(6.711E-02) | 2.504E+00(2.824E-01)+ | 1.053E-01(3.456E-02)+ | 6.217E-02(5.421E-02)+ | 5.064E-02(2.464E-02)+
DTLZ4 | 1.049E-03(3.981E-05) | 5.418E-03(2.745E-03)+ | 1.006E-03(3.291E-05)- | 1.030E-03(4.721E-05)= | 1.056E-03(3.555E-05)=
DTLZ5 | 9.682E-05(1.279E-05) | 1.080E-04(8.057E-06)+ | 9.843E-05(2.496E-06)= | 7.248E-05(8.892E-07)- | 1.014E-04(6.004E-06)=
DTLZ6 | 1.051E-04(5.453E-06) | 1.155E-04(1.783E-05)= | 7.072E-02(5.770E-03)+ | 5.055E-03(1.779E-03)+ | 9.912E-03(9.818E-04)+
DTLZ7 | 1.403E-03(1.321E-03) | 9.931E-03(1.978E-03)+ | 1.204E-03(9.269E-05)+ | 5.891E-03(2.929E-03)+ | 9.448E-04(5.909E-05)=
+/=/- |  | 10/2/0 | 8/3/1 | 6/2/4 | 4/7/1
Table 3
Comparison between SA-MTLBO and MTLBO, mean (Std).

Problems | GD: SA-MTLBO | GD: MTLBO | IGD: SA-MTLBO | IGD: MTLBO
ZDT1 | 2.462E-04(4.147E-05) | 2.578E-04(6.009E-05)= | 2.712E-04(1.189E-05) | 2.839E-04(2.216E-05)=
ZDT2 | 9.647E-05(6.567E-06) | 1.117E-04(1.985E-05)+ | 2.645E-04(1.387E-05) | 3.025E-04(3.174E-05)+
ZDT3 | 1.614E-04(1.100E-05) | 1.787E-04(1.078E-05)+ | 3.196E-04(1.512E-05) | 3.360E-04(1.711E-05)+
ZDT4 | 3.196E-04(1.512E-05) | 2.471E-02(4.413E-02)+ | 2.658E-04(1.491E-05) | 1.377E-02(2.109E-02)+
ZDT6 | 6.892E-05(2.755E-06) | 7.174E-05(3.498E-06)+ | 2.437E-04(2.067E-05) | 2.511E-04(1.238E-05)=
DTLZ1 | 3.610E-04(6.643E-05) | 4.349E-04(7.257E-05)+ | 4.143E-04(1.915E-05) | 4.058E-04(1.505E-05)=
DTLZ2 | 4.846E-03(4.255E-03) | 5.495E-03(5.065E-03)= | 1.046E-03(4.970E-05) | 1.051E-03(2.534E-05)=
DTLZ3 | 6.900E-01(1.044E+00) | 2.089E+00(3.628E+00)+ | 3.785E-02(6.711E-02) | 5.602E-02(4.748E-02)=
DTLZ4 | 1.085E-03(1.252E-04) | 1.174E-03(1.086E-04)= | 1.049E-03(3.981E-05) | 1.057E-03(4.388E-05)=
DTLZ5 | 9.256E-05(4.033E-05) | 1.036E-04(5.697E-05)= | 9.682E-05(1.279E-05) | 1.003E-04(7.196E-06)=
DTLZ6 | 8.864E-06(3.307E-07) | 8.842E-06(4.928E-07)= | 1.051E-04(5.453E-06) | 1.090E-04(1.391E-05)=
DTLZ7 | 2.920E-03(7.720E-04) | 2.602E-03(8.650E-04)= | 1.403E-03(1.321E-03) | 2.231E-03(2.031E-03)=
+/=/- |  | 6/6/0 |  | 3/9/0
3.2. SA-MTLBO

The selection of the teacher from among the learners affects the performance of TLBO. In the basic TLBO, the best individual in the current population is selected as the teacher. However, if the best individual is far from the global optimum or located in a local optimum, other learners may easily be attracted to the local optimum region, leading to premature convergence due to the resultant reduction in population diversity. For this reason, in SA-MTLBO one of the higher-ranking individuals is randomly selected as the teacher to improve the population diversity. In addition, a crossover strategy is randomly selected from the differential evolution operator [4] and the simulated binary crossover with polynomial mutation operator [27] to perform the learner phase when two randomly selected learners are nondominated. After the teacher or learner phase, the selection operation between the updated solution and the current solution is done as follows (a minimal code sketch is given after the list):

(1) If the updated solution dominates the current solution, replace the current solution by the updated solution.
(2) If the current solution dominates the updated solution, the updated solution is discarded.
(3) Otherwise, the updated solution is added to the population.
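A minimal sketch of this acceptance rule, reusing the `dominates` helper from the sketch in Section 3.1.1; the population and objective containers are assumed to be Python lists so that case (3) can grow them, which is why the population size can reach 2NP within a generation.

```python
def accept_or_append(population, objectives, idx, new_x, new_f):
    """Acceptance rule after a teacher- or learner-phase move for learner `idx`."""
    if dominates(new_f, objectives[idx]):          # (1) offspring dominates the current solution
        population[idx], objectives[idx] = new_x, new_f
    elif dominates(objectives[idx], new_f):        # (2) current solution dominates the offspring
        pass                                       #     discard the offspring
    else:                                          # (3) mutually nondominated
        population.append(new_x)                   #     keep both, so the class can grow to 2NP
        objectives.append(new_f)
```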
Fig. 4. Pareto fronts obtained by different algorithms on ZDT4.
Fig. 5. Pareto fronts obtained by different algorithms on ZDT6.
Fig. 6. Pareto fronts obtained by different algorithms on DTLZ3.
Therefore, the total size of the population is between NP and 2NP at the end of a loop. This population is truncated for the next generation. The truncation process consists of nondominated sorting and, for individuals in the same front, evaluation with the crowding distance. Finally, only the best NP individuals are retained in the population. The main steps of SA-MTLBO are described as follows:

Step 1: Set the generation number g = 0, and initialize the values of parameters such as the population size NP, the maximum number of generations gmax, the mutation probability, and the parameter values of the crossover operators. Generate a uniformly distributed random population P_g = {X_{g,1}, X_{g,2}, …, X_{g,NP}}.
Step 2: Evaluate the fitness value of each individual X_{g,i} (i = 1, …, NP).
Step 3: Perform the selection of the learning phases through the method described in Fig. 3. If the teacher phase is chosen, go to Step 4; otherwise go to Step 5.
Step 4: Perform the teacher phase according to Eq. (3). Then go to Step 6.
Step 5: Perform the learner phase. Identify the relationship between the two randomly selected individuals X_{g,p} and X_{g,q}:
(1) If X_{g,p} dominates X_{g,q}, carry out the learner phase by Eq. (4).
(2) If X_{g,q} dominates X_{g,p}, carry out the learner phase by Eq. (5).
(3) If X_{g,p} and X_{g,q} are nondominated with each other, randomly select a strategy from the differential evolution operation and the simulated binary crossover with polynomial mutation operation to perform the learner phase.
Step 6: Evaluate the updated solutions obtained from the teacher or learner phase, and perform the above-mentioned selection operation between the updated solution and the current solution.
Step 7: After Step 6, the size of the population ranges from NP to 2NP. Sort the population based on fast nondominated sorting and the crowding distance, and the best NP individuals survive into the next generation (a code sketch of this truncation follows the listing).
Step 8: Set g = g + 1 and return to Step 3 until the maximum generation gmax has been reached.
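A minimal sketch of the Step 7 truncation, reusing the `fast_nondominated_sort` and `crowding_distance` helpers sketched in Section 3.1.1; whole fronts are admitted while they fit, and the last admitted front is filled by descending crowding distance. Function names are illustrative, not the authors' code.

```python
import numpy as np

def environmental_selection(F, NP):
    """Step 7: indices of the best NP individuals, using fast nondominated sorting
    and crowding distance (helpers from Section 3.1.1)."""
    F = np.asarray(F)
    fronts = fast_nondominated_sort(F)
    keep = []
    for front in fronts:
        if len(keep) + len(front) <= NP:
            keep.extend(front)                     # whole front still fits
        else:
            cd = crowding_distance(F, front)
            order = np.argsort(-cd)                # most isolated individuals first
            keep.extend(front[k] for k in order[: NP - len(keep)])
            break
    return keep
```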
Fig. 7. Pareto fronts obtained by different algorithms on DTLZ5.
Table 4
COT to the yields of ethylene, propylene, and butadiene (FLOW = 40 t/h, SHR = 0.5, and COP = 1.6 bar).

COT (°C) | Yield of ethylene (%) | Yield of propylene (%) | Yield of butadiene (%)
816 | 27.8758 | 16.9439 | 4.9784
823 | 28.5718 | 16.6638 | 5.0499
830 | 29.1884 | 16.3029 | 5.0906
837 | 29.7373 | 15.8366 | 5.1002
844 | 30.2287 | 15.2398 | 5.0792
4. Numerical experiments

In this section, the twelve test problems are first described, and then the performance of SA-MTLBO is compared with that of NSGA-II [27], ABYSS [28], MODE-RMO [35], and MOTLBO [48]. Finally, the effectiveness of self-adaptively selecting the learning phases is demonstrated.

4.1. Test problems

The proposed algorithm is tested on twelve typical test problems in this area. The first five bi-objective problems, ZDT1–ZDT4 and ZDT6, were proposed by Zitzler et al. in [49]. The other seven problems, DTLZ1–DTLZ7, were presented by Deb et al. in [50] and have more than two objectives; in our experiments, their tri-objective versions are considered. The definitions of the chosen test problems are given in Appendix A.
4.2. Performance measures

Two performance measures are used to evaluate the performance of the algorithms on MOPs. The first is the generational distance (GD) [27], which is used to calculate the distance between the nondominated solution set P obtained by the algorithm and the Pareto optimal front P⁎. GD is defined as

GD = sqrt( Σ_{x∈P} mindis(x, P⁎)² ) / |P|    (8)

where mindis(x, P⁎) is the minimum Euclidean distance between solution x and the solutions in P⁎, and |P| is the number of solutions in P. GD measures how close the obtained Pareto front is to the true Pareto front; a small value of GD reveals better convergence to the true Pareto front.

The second measure is the inverted generational distance (IGD) [51], which is defined as

IGD = sqrt( Σ_{x∈P⁎} mindis(x, P)² ) / |P⁎|    (9)

where mindis(x, P) is the minimum Euclidean distance between solution x and the solutions in P, and |P⁎| is the number of solutions in P⁎. If |P⁎| is large enough, the value of IGD can measure both the convergence and the diversity of P to a certain extent. A smaller value of IGD means better convergence and diversity with respect to the true Pareto optimal front.
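A minimal sketch of Eqs. (8) and (9); note that the exact placement of the set-size normalization (inside or outside the square root) varies across the literature, and the version below follows the reconstruction given above.

```python
import numpy as np

def generational_distance(P, P_star):
    """GD, Eq. (8): closeness of the obtained set P to the reference front P* (arrays of objective vectors)."""
    d = np.sqrt(((P[:, None, :] - P_star[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
    return np.sqrt((d ** 2).sum()) / len(P)

def inverted_generational_distance(P, P_star):
    """IGD, Eq. (9): the same computation with the roles of P and P* exchanged."""
    return generational_distance(P_star, P)
```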
Fig. 8. COT relative to the yields of ethylene, propylene, and butadiene.
Table 5
FLOW to the yields of ethylene, propylene, and butadiene (COT = 820 °C, SHR = 0.5, and COP = 1.6 bar).

FLOW (t/h) | Yield of ethylene (%) | Yield of propylene (%) | Yield of butadiene (%)
34 | 28.6317 | 16.6521 | 5.0495
37 | 28.3749 | 16.6980 | 5.0648
40 | 28.1315 | 16.7537 | 5.0758
43 | 27.9087 | 16.8154 | 5.0822
46 | 27.6919 | 16.8768 | 5.0854

Table 6
SHR to the yields of ethylene, propylene, and butadiene (COT = 820 °C, FLOW = 40 t/h, and COP = 1.6 bar).

SHR | Yield of ethylene (%) | Yield of propylene (%) | Yield of butadiene (%)
0.4 | 28.0717 | 16.7385 | 5.0921
0.475 | 28.2372 | 16.7802 | 5.0411
0.55 | 28.3575 | 16.8156 | 4.9855
0.625 | 28.4269 | 16.8328 | 4.9287
0.7 | 28.4618 | 16.8262 | 4.8720
4.3. Parameter settings

For NSGA-II, ABYSS, MODE-RMO, and MOTLBO, the parameter settings are identical to those in their original studies. For SA-MTLBO, the differential evolution operator is employed with F = 0.5 and CR = 0.3 according to [35]. Simulated binary crossover and the polynomial mutation operator are employed with crossover index ηc = 20, mutation rate Pm = 1/D (D is the dimension of the decision space), and mutation index ηm = 20 according to [27]. The population size is NP = 100 for all the compared algorithms. The maximum number of function evaluations is set to 30,000 for each test problem. In addition, to account for randomness, each algorithm is run 20 times independently on each problem.

4.4. Comparison of SA-MTLBO with other MOEAs

The comparative results for each problem are shown in Tables 1-2. In addition, the nonparametric Wilcoxon rank sum test at a significance level of 5% is adopted to assess the significance of the differences between two algorithms.
In Tables 1-2, "+", "=" and "-" indicate that SA-MTLBO is significantly better than, statistically similar to, and significantly worse than its competitor in terms of the mean value over 20 runs, respectively. The Wilcoxon test results are summarized as "+/=/-" to denote the number of problems on which SA-MTLBO performs significantly better than, similar to, and significantly worse than the corresponding competitor, respectively. The results of the GD measure are presented in Table 1. It can be observed that SA-MTLBO performs significantly better than MOTLBO on 4 problems, similarly on 6 problems, and significantly worse on 2 problems. Moreover, SA-MTLBO obtains smaller standard deviation values than MOTLBO on all problems except DTLZ5 and DTLZ7, which means that the stability of SA-MTLBO is better than that of MOTLBO. SA-MTLBO and ABYSS achieve an almost similar performance in terms of convergence. When compared with MODE-RMO and NSGA-II, SA-MTLBO has an obvious superiority because it surpasses both MODE-RMO and NSGA-II on 8 out of the 12 problems. The IGD results of all compared algorithms are shown in Table 2. It can be seen that SA-MTLBO achieves significantly better IGD values than MOTLBO on all problems except ZDT6 and DTLZ6 (where the performance is similar). This indicates that SA-MTLBO can provide a better approximation of the Pareto front than MOTLBO when considering both convergence and diversity in most of the problems.
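A minimal sketch of how the per-problem "+ / = / -" entries could be produced from the 20 independent runs, using SciPy's rank-sum test; the tie-breaking by mean value (smaller GD or IGD is better) is an assumption consistent with how the tables are described.

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_flag(metric_a, metric_b, alpha=0.05):
    """'+', '=' or '-' from algorithm A's point of view, given the GD or IGD values
    of the 20 independent runs of A and B on one problem (smaller is better)."""
    _, p_value = ranksums(metric_a, metric_b)
    if p_value >= alpha:
        return '='                                    # no significant difference
    return '+' if np.mean(metric_a) < np.mean(metric_b) else '-'
```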
Fig. 9. FLOW relative to the yields of ethylene, propylene, and butadiene.
SA-MTLBO performs significantly better than MODE-RMO on 8 problems, similarly on 3 problems, and significantly worse only on DTLZ4. In addition, although SA-MTLBO and ABYSS achieve an almost similar performance in terms of convergence, SA-MTLBO achieves better performance when convergence and diversity are considered simultaneously: SA-MTLBO performs significantly better than ABYSS on 6 problems, similarly on 2 problems, and significantly worse on 4 problems. Furthermore, when compared with NSGA-II, SA-MTLBO wins on 4 problems, achieves the same performance on 7 problems, and loses on only 1 problem. The above comparisons and analyses lead to the conclusion that the proposed SA-MTLBO can generate approximate Pareto sets with superior convergence and diversity on most of the test problems compared with the other MOEAs.
4.5. Effectiveness of self-adaptively selecting the learning phase

In this section, the effectiveness of the proposed self-adaptive selection of the learning phase is investigated. For this purpose, SA-MTLBO is compared with MTLBO. The difference between SA-MTLBO and MTLBO is that in SA-MTLBO the learners self-adaptively select a mode of learning according to their level of knowledge in the classroom at each generation, while in MTLBO the learners undergo the teacher phase first and then the learner phase at each generation. The GD and IGD values are compared in Table 3. In addition, to give a graphical overview of the behavior of these two algorithms, the Pareto fronts obtained by each algorithm with the lowest IGD values for problems ZDT4, ZDT6, DTLZ3, and DTLZ5 are shown in Figs. 4-7.
Table 7
COP to the yields of ethylene, propylene, and butadiene (COT = 820 °C, FLOW = 40 t/h, and SHR = 0.5).

COP (bar) | Yield of ethylene (%) | Yield of propylene (%) | Yield of butadiene (%)
1.5 | 28.3110 | 16.8421 | 4.9807
1.6 | 28.2828 | 16.7924 | 5.0229
1.7 | 28.2353 | 16.7628 | 5.0611
1.8 | 28.1746 | 16.7086 | 5.0938
1.9 | 28.1028 | 16.6170 | 5.1211
It is clear from Table 3 that SA-MTLBO significantly outperforms MTLBO on 6 problems in terms of GD and on 3 problems in terms of IGD. On the remaining problems the two algorithms achieve similar performance. It can also be seen from Figs. 4-7 that SA-MTLBO has better convergence than MTLBO on problem ZDT4. On problem ZDT6, SA-MTLBO obtains a better distribution of the Pareto front. On problem DTLZ3, SA-MTLBO achieves good performance while MTLBO cannot reach the true Pareto front. On problem DTLZ5, the quality of the Pareto front obtained by SA-MTLBO is better. Therefore, the use of the self-adaptive learning-phase selection strategy can efficiently improve the performance of the algorithm.

5. Optimization of the naphtha pyrolysis process

In this section, the operation optimization of the naphtha pyrolysis process is investigated. In the practical process, the volume yields of some gaseous components in the cracked gas can be measured by an online chromatographic analyzer. However, it is impossible to obtain the mass yields of all components of the cracked gas online. Considering this fact, the yield model is built based on CoilSim1D [52,53].
Fig. 10. SHR relative to the yields of ethylene, propylene, and butadiene.
Fig. 11. COP relative to the yields of ethylene, propylene, and butadiene.
The input variables are the properties of the feedstock, COT, FLOW, SHR, and COP, and the output variables are the yields of ethylene, propylene, and butadiene. CoilSim1D, based on free-radical reaction kinetics, is used to simulate the cracking reaction. The error between CoilSim1D and the cracking furnace output can be corrected through the mass ratio of propylene to ethylene, which should be the same for the simulated and measured values. After correction, the mass yields of all components can be obtained from the software. Because CoilSim1D is very time-consuming when used in multi-objective operation optimization, an artificial neural network (ANN) has been adopted as a substitute to globally fit the corrected data generated by CoilSim1D. The corrected CoilSim1D is used to generate 1000 pairs of data; 700 pairs are used to train the ANN and the other 300 pairs are used to test the constructed ANN model. Hence, the relationships between the operating conditions (COT, FLOW, SHR, and COP) and the mass yields of ethylene, propylene, and butadiene are built based on the ANN.

5.1. Multi-objective operation optimization

The optimization task of the cracking furnaces is to determine the optimal values of the control variables with respect to three conflicting objectives: maximization of the yields of ethylene, propylene, and butadiene.
Accordingly, the optimization problem can be given as follows:

F(x) = [max Y_{C2H4}, max Y_{C3H6}, max Y_{C4H6}]    (10)

subject to:

816 °C ≤ COT ≤ 844 °C
34 t/h ≤ FLOW ≤ 46 t/h
0.4 ≤ SHR ≤ 0.7
1.5 bar ≤ COP ≤ 1.9 bar    (11)

where Y_{C2H4}, Y_{C3H6} and Y_{C4H6} are the yields of ethylene, propylene, and butadiene, respectively.
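A minimal sketch of how the problem (10)-(11) could be passed to a multi-objective optimizer; `ann_yield_model` is a placeholder name for the trained ANN surrogate described above, and the objectives are negated because the optimizer sketches given earlier assume minimization.

```python
import numpy as np

# Decision vector x = [COT (deg C), FLOW (t/h), SHR (-), COP (bar)], bounds from Eq. (11).
LB = np.array([816.0, 34.0, 0.4, 1.5])
UB = np.array([844.0, 46.0, 0.7, 1.9])

def objectives(x, ann_yield_model):
    """Eq. (10): the three yields predicted by the ANN surrogate for operating point x.
    `ann_yield_model` is a placeholder for the trained network; the values are negated
    so that a minimizing multi-objective solver maximizes the yields."""
    y_c2h4, y_c3h6, y_c4h6 = ann_yield_model(x)    # mass yields in %
    return np.array([-y_c2h4, -y_c3h6, -y_c4h6])
```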
5.2. Analysis of the control variables' effects

To show graphically the existence of nonlinear effects, an analysis is carried out for each control variable while keeping the other control variables constant [1,21,22]. The effects of the COT on the yields of ethylene, propylene, and butadiene are shown in Table 4 and Fig. 8.
Table 8
The results of the uniform design.

No. | COT | FLOW | SHR | COP | C2H4 (%) | C3H6 (%) | C4H6 (%)
1 | 816 (1) | 36.4 (2) | 0.52 (3) | 1.9 (6) | 27.9931 | 16.7322 | 5.1027
2 | 821.6 (2) | 41.2 (4) | 0.7 (6) | 1.82 (5) | 28.5800 | 16.7699 | 4.9876
3 | 827.2 (3) | 46 (6) | 0.46 (2) | 1.74 (4) | 28.4734 | 16.5181 | 5.1296
4 | 832.8 (4) | 34 (1) | 0.64 (5) | 1.66 (3) | 29.9805 | 15.9685 | 5.0559
5 | 838.4 (5) | 38.8 (3) | 0.4 (1) | 1.58 (2) | 29.5669 | 15.7784 | 5.0879
6 | 844 (6) | 43.6 (5) | 0.58 (4) | 1.5 (1) | 30.3983 | 15.4796 | 5.1051
Fig. 12. Parallel coordinates plot for three objectives.
Fig. 13. Pareto front of the naphtha pyrolysis process by SA-MTLBO.
From the results, it can be seen that, as COT increases from 816 to 844 °C, the yield of ethylene increases, the yield of propylene decreases, and the yield of butadiene first increases and then decreases. Table 5 and Fig. 9 show that, when FLOW is varied from 34 to 46 t/h, the yield of ethylene decreases while the yields of propylene and butadiene increase. Based on Table 6 and Fig. 10, it can be seen that, as SHR increases from 0.4 to 0.7, the yield of ethylene increases, the yield of butadiene decreases, and the yield of propylene first increases and then decreases. According to Table 7 and Fig. 11, when COP is varied from 1.5 to 1.9 bar, the yields of ethylene and propylene decrease while the yield of butadiene increases. Furthermore, in order to analyze the synergistic effects of these control variables, we adopt the uniform design (UD) [54–56], a kind of space-filling design, to select experimental points that are uniformly scattered over the experimental domain. The uniform design table U6*(6^4) is used to obtain the experimental points. Each control variable is equally divided into six levels within its own range, and the experimental points in each experiment are chosen according to the U6*(6^4) table. The corresponding experimental points of the control variables and the yields of ethylene, propylene, and butadiene in each experiment are shown in Table 8. In addition, the three objectives are plotted in a parallel coordinates plot in Fig. 12, and it is clear that there are conflicts among the three objectives under the synergistic effects of the control variables. Following the preceding analysis, the yields of ethylene, propylene and butadiene are determined by COT, FLOW, SHR, and COP, and there are conflicts among the three objectives. Therefore, the constructed multi-objective problem with four control variables and three objectives is reasonable.
5.3. Optimization results

The optimization results of SA-MTLBO, MOTLBO, and NSGA-II are given in Tables 9-10. Because the true Pareto front of the constructed problem is not known, SA-MTLBO, MOTLBO, and NSGA-II are each run for a maximum of 30,000 function evaluations, and the nondominated solutions from the union of their obtained results are taken as the reference Pareto front. From Table 9, it can be seen that SA-MTLBO significantly outperforms MOTLBO and NSGA-II on both performance measures. The ranges of each product's yield obtained by the three algorithms are shown in Table 10. From these results, it can be found that SA-MTLBO obtains the best distribution of all the yield rates. In the practical ethylene plant, the present yield rates of ethylene, propylene, and butadiene are 30.57%, 15.224%, and 5.035%, respectively. With SA-MTLBO, these yield rates can be simultaneously increased to 31.029%, 15.571%, and 5.054%, respectively. These improvements will be very important and beneficial to the ethylene plant. Finally, the Pareto front and its boundary points obtained by SA-MTLBO are presented in Fig. 13 and Table 11, respectively. Thus, with the help of SA-MTLBO, the decision makers can select from a set of nondominated solutions according to suitable decision-making techniques based on their preferences.

6. Conclusion

In this work, a multi-objective operation optimization problem of the naphtha pyrolysis process, maximizing the yields of ethylene, propylene, and butadiene, is investigated. SA-MTLBO has been proposed to deal with the constructed problem.
Table 9
Comparison of the operation optimization of naphtha pyrolysis, mean (Std).

Measure | SA-MTLBO | MOTLBO | NSGA-II
GD | 4.768E-03(5.363E-04) | 5.418E-03(7.447E-04)+ | 5.353E-03(8.766E-04)+
IGD | 4.066E-03(2.832E-04) | 4.442E-03(3.487E-04)+ | 4.683E-03(6.128E-04)+
Table 10
Optimization results of the three algorithms (min and max of each yield rate, %).

Yield rate | SA-MTLBO min | SA-MTLBO max | MOTLBO min | MOTLBO max | NSGA-II min | NSGA-II max
Ethylene | 27.2224 | 31.1095 | 27.2224 | 31.1095 | 27.2253 | 31.1087
Propylene | 15.4163 | 17.0599 | 15.4722 | 17.0599 | 15.4624 | 17.0599
Butadiene | 4.7617 | 5.1825 | 4.7665 | 5.1810 | 4.7654 | 5.1824
Table 11
Boundary optimization results of SA-MTLBO.

Metric | min ethylene | max ethylene | min propylene | max propylene | min butadiene | max butadiene
Yield rate (%)
  Ethylene | 27.2224 | 31.1095 | 31.0343 | 27.3929 | 27.8964 | 28.0702
  Propylene | 16.8907 | 15.4722 | 15.4163 | 17.0599 | 16.9763 | 16.3288
  Butadiene | 5.1398 | 5.0403 | 5.0441 | 4.9460 | 4.7617 | 5.1825
Decision variables
  COT (°C) | 816.00 | 844.00 | 843.94 | 816.00 | 816.08 | 826.50
  FLOW (t/h) | 46.00 | 34.00 | 34.57 | 46.00 | 41.13 | 46.00
  SHR | 0.400 | 0.700 | 0.700 | 0.419 | 0.667 | 0.400
  COP (bar) | 1.900 | 1.500 | 1.549 | 1.500 | 1.501 | 1.900
In SA-MTLBO, the learners can self-adaptively select their mode of learning according to their knowledge levels. Thus, learners at different levels choose appropriate learning modes and carry out the corresponding search functions, which efficiently improves the performance of the algorithm. The performance of SA-MTLBO is evaluated by comparison with other MOEAs on twelve benchmark problems. The experimental results show that the proposed SA-MTLBO can produce Pareto optimal fronts with good convergence and distribution. Furthermore, the effectiveness of the proposed self-adaptive learning-phase selection strategy is demonstrated. Finally, SA-MTLBO is applied to solve the operation optimization problem of cracking furnaces. The optimization results show that SA-MTLBO can obtain a good spread of nondominated solutions and provide more options for the decision makers to improve the benefit of the cracking furnaces. The source code of SA-MTLBO can be obtained from the first author upon request.
Conflict of interest The authors have declared that no conflict of interest exists.
Acknowledgements This research was supported by Major State Basic Research Development Program of China under Grant No. 2012CB720500, National Natural Science Foundation of China under Grant Nos. 21276078, 61422303, Shanghai Municipal Science and Technology Commission under Grant No. 13111103800, Fundamental Research Funds for the Central Universities, Shanghai Natural Science Foundation under Grant No. 14ZR1421800, and the State Key Laboratory of Synthetical Automation for Process Industries under Grant No. PAL-N201404.
Appendix A. Test problems for multi-objective optimization.
ZDT1: f1(x) = x1; f2(x) = g(x)[1 − sqrt(x1/g(x))]; g(x) = 1 + 9 Σ_{i=2}^{D} xi/(D−1). Search range [0, 1]^D, D = 30.

ZDT2: f1(x) = x1; f2(x) = g(x)[1 − (x1/g(x))²]; g(x) = 1 + 9 Σ_{i=2}^{D} xi/(D−1). Search range [0, 1]^D, D = 30.

ZDT3: f1(x) = x1; f2(x) = g(x)[1 − sqrt(x1/g(x)) − x1 sin(10πx1)/g(x)]; g(x) = 1 + 9 Σ_{i=2}^{D} xi/(D−1). Search range [0, 1]^D, D = 30.

ZDT4: f1(x) = x1; f2(x) = g(x)[1 − sqrt(x1/g(x))]; g(x) = 1 + 10(D−1) + Σ_{i=2}^{D} [xi² − 10 cos(4πxi)]. Search range [0, 1] for x1, [−5, 5] for x2,…,D; D = 10.

ZDT6: f1(x) = 1 − exp(−4x1) sin^6(6πx1); f2(x) = g(x)[1 − (x1/g(x))²]; g(x) = 1 + 9[Σ_{i=2}^{D} xi/(D−1)]^0.25. Search range [0, 1]^D, D = 10.

DTLZ1: f1(x) = 0.5 x1 x2 (1 + g(x)); f2(x) = 0.5 x1 (1 − x2)(1 + g(x)); f3(x) = 0.5(1 − x1)(1 + g(x)); g(x) = 100[5 + Σ_{i=3}^{D} ((xi − 0.5)² − cos(20π(xi − 0.5)))]. Search range [0, 1]^D, D = 7.

DTLZ2: f1(x) = (1 + g(x)) cos(0.5πx1) cos(0.5πx2); f2(x) = (1 + g(x)) cos(0.5πx1) sin(0.5πx2); f3(x) = (1 + g(x)) sin(0.5πx1); g(x) = Σ_{i=3}^{D} (xi − 0.5)². Search range [0, 1]^D, D = 12.

DTLZ3: f1, f2, f3 as in DTLZ2; g(x) = 100[10 + Σ_{i=3}^{D} ((xi − 0.5)² − cos(20π(xi − 0.5)))]. Search range [0, 1]^D, D = 12.

DTLZ4: f1(x) = (1 + g(x)) cos(0.5πx1^100) cos(0.5πx2^100); f2(x) = (1 + g(x)) cos(0.5πx1^100) sin(0.5πx2^100); f3(x) = (1 + g(x)) sin(0.5πx1^100); g(x) = Σ_{i=3}^{D} (xi − 0.5)². Search range [0, 1]^D, D = 12.

DTLZ5: f1(x) = (1 + g(x)) cos(0.5πθ1) cos(0.5πθ2); f2(x) = (1 + g(x)) cos(0.5πθ1) sin(0.5πθ2); f3(x) = (1 + g(x)) sin(0.5πθ1); g(x) = Σ_{i=3}^{D} (xi − 0.5)²; θi = π(1 + 2 xi g(x)) / (4(1 + g(x))). Search range [0, 1]^D, D = 12.

DTLZ6: f1, f2, f3 as in DTLZ5; g(x) = Σ_{i=3}^{D} xi^0.1; θi = π(1 + 2 xi g(x)) / (4(1 + g(x))). Search range [0, 1]^D, D = 12.

DTLZ7: f1(x) = x1; f2(x) = x2; f3(x) = (1 + g(x)) h(f1, f2, g(x)); g(x) = 1 + 9 Σ_{i=3}^{D} xi/20; h(f1, f2, g(x)) = 3 − Σ_{i=1}^{2} [fi (1 + sin(3πfi)) / (1 + g(x))]. Search range [0, 1]^D, D = 22.
References [1] A. Tarafder, B.C. Lee, A.K. Ray, G. Rangaiah, Multiobjective optimization of an industrial ethylene reactor using a nondominated sorting genetic algorithm, Ind. Eng. Chem. Res. 44 (2005) 124–141. [2] X. Nian, Z. Wang, F. Qian, A hybrid algorithm based on differential evolution and group search optimization and its application on ethylene cracking furnace, Chin. J. Chem. Eng. 21 (2013) 537–543. [3] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of IEEE international conference on neural networks, Perth, Australia, 1995. 1942–1948. [4] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Glob. Optim. 11 (1997) 341–359. [5] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Glob. Optim. 39 (2007) 459–471. [6] A.H. Gandomi, X.-S. Yang, A.H. Alavi, Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems, Eng. Comput. 29 (2013) 17–35. [7] A.R. Yildiz, Cuckoo search algorithm for the selection of optimal machining parameters in milling operations, Int. J. Adv. Manuf. Technol. 64 (2013) 55–61. [8] A.R. Yıldız, A novel particle swarm optimization approach for product design and manufacturing, Int. J. Adv. Manuf. Technol. 40 (2009) 617–628. [9] A.R. Yildiz, A new hybrid particle swarm optimization approach for structural design optimization in the automotive industry, Proc. Inst. Mech. Eng. D J. Automob. Eng. 226 (2012) 1340–1351. [10] A.R. Yildiz, A comparative study of population-based optimization algorithms for turning operations, Inf. Sci. 210 (2012) 81–88. [11] A.W. Mohamed, H.Z. Sabry, Constrained optimization based on modified differential evolution algorithm, Inf. Sci. 194 (2012) 171–208. [12] A.R. Yildiz, Comparison of evolutionary-based optimization algorithms for structural design optimization, Eng. Appl. Artif. Intell. 26 (2013) 327–333. [13] A.R. Yildiz, Hybrid Taguchi-differential evolution algorithm for optimization of multi-pass turning operations, Appl. Soft Comput. 13 (2013) 1433–1439. [14] A.R. Yildiz, A new hybrid artificial bee colony algorithm for robust optimal design and manufacturing, Appl. Soft Comput. 13 (2013) 2906–2912. [15] A.R. Yildiz, A new hybrid differential evolution algorithm for the selection of optimal machining parameters in milling operations, Appl. Soft Comput. 13 (2013) 1561–1566. [16] A.R. Yildiz, Optimization of cutting parameters in multi-pass turning using artificial bee colony-based approach, Inf. Sci. 220 (2013) 399–407. [17] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, Proceedings of the 1st international Conference on Genetic Algorithms, L. Erlbaum Associates Inc 1985, pp. 93–100. [18] K. Deb, Multi-objective optimization using evolutionary algorithms, John Wiley & Sons, 2001. [19] C.A.C. Coello, D.A. Van Veldhuizen, G.B. Lamont, Evolutionary algorithms for solving multi-objective problems, Springer, 2002. [20] B. Xu, R. Qi, W. Zhong, W. Du, F. Qian, Optimization of p-xylene oxidation reaction process based on self-adaptive multi-objective differential evolution, Chemom. Intell. Lab. Syst. 127 (2013) 55–62. [21] C. Li, Q. Zhu, Z. Geng, Multi-objective particle swarm optimization hybrid algorithm: An application on industrial cracking furnace, Ind. Eng. Chem. Res. 46 (2007) 3602–3609. [22] X. Wang, L. 
Tang, Multiobjective operation optimization of naphtha pyrolysis process using parallel differential evolution, Ind. Eng. Chem. Res. 52 (2013) 14415–14428. [23] A.R. Yildiz, K. Saitou, Topology synthesis of multicomponent structural assemblies in continuum domains, J. Mech. Des. 133 (2011) 011008. [24] E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Trans. Evol. Comput. 3 (1999) 257–271. [25] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: Improving the strength Pareto evolutionary algorithm, Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK), 2001. [26] N. Srinivas, K. Deb, Muiltiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput. 2 (1994) 221–248. [27] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2002) 182–197. [28] A.J. Nebro, F. Luna, E. Alba, B. Dorronsoro, J.J. Durillo, A. Beham, AbYSS: Adapting scatter search to multiobjective optimization, IEEE Trans. Evol. Comput. 12 (2008) 439–457. [29] Q. Zhang, H. Li, MOEA/D: A multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11 (2007) 712–731.
[30] K. Li, S. Kwong, J. Cao, M. Li, J. Zheng, R. Shen, Achieving balance between proximity and diversity in multi-objective evolutionary algorithm, Inf. Sci. 182 (2012) 220–242. [31] A.R. Yildiz, K.N. Solanki, Multi-objective optimization of vehicle crashworthiness using a new particle swarm based approach, Int. J. Adv. Manuf. Technol. 59 (2012) 367–376. [32] X. Chen, W. Du, F. Qian, An adaptive multi-objective differential evolution algorithm for solving chemical dynamic optimization problems, Comput. Aided Chem. Eng. 37 (2015) 821–826. [33] R. Wang, P.J. Fleming, R.C. Purshouse, General framework for localized multiobjective evolutionary algorithms, Inf. Sci. 258 (2014) 29–53. [34] L. Tang, X. Wang, A hybrid multiobjective evolutionary algorithm for multiobjective optimization problems, IEEE Trans. Evol. Comput. 17 (2013) 20–45. [35] X. Chen, W. Du, F. Qian, Multi-objective differential evolution with ranking-based mutation operator and its application in chemical process optimization, Chemom. Intell. Lab. Syst. 136 (2014) 85–96. [36] W. Gong, Z. Cai, Differential evolution with ranking-based mutation operators, IEEE Trans. Cybern. 43 (2013) 2066–2081. [37] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems, Comput. Aided Des. 43 (2011) 303–315. [38] R.V. Rao, V.D. Kalyankar, Parameter optimization of modern machining processes using teaching–learning-based optimization algorithm, Eng. Appl. Artif. Intell. 26 (2013) 524–531. [39] K. Yu, X. Wang, Z. Wang, An improved teaching-learning-based optimization algorithm for numerical and engineering optimization problems, J. Intell. Manuf. (2014) 1–13. [40] F. Zou, L. Wang, X. Hei, D. Chen, D. Yang, Teaching–learning-based optimization with dynamic group strategy for global optimization, Inf. Sci. 273 (2014) 112–131. [41] A. Baykasoğlu, A. Hamzadayi, S.Y. Köse, Testing the performance of teaching– learning based optimization (TLBO) algorithm on combinatorial problems: Flow shop and job shop scheduling cases, Inf. Sci. 276 (2014) 204–218. [42] R. Rao, V. Patel, Thermodynamic optimization of plate-fin heat exchanger using teaching-learning-based optimization (TLBO) algorithm, Int. J. Adv. Manuf. Technol. 2 (2011) 91–96. [43] P. Kumar Roy, A. Sur, D.K. Pradhan, Optimal short-term hydro-thermal scheduling using quasi-oppositional teaching learning based optimization, Eng. Appl. Artif. Intell. 26 (2013) 2516–2524. [44] A.R. Yildiz, Optimization of multi-pass turning operations using hybrid teaching learning-based approach, Int. J. Adv. Manuf. Technol. 66 (2013) 1319–1326. [45] T. Niknam, F. Golestaneh, M.S. Sadeghi, θ-multiobjective teaching–learning-based optimization for dynamic economic emission dispatch, IEEE Syst. J. 6 (2012) 341–352. [46] T. Niknam, R. Azizipanah-Abarghooee, M. Rasoul Narimani, A new multi objective optimization approach based on TLBO for location of automatic voltage regulators in distribution systems, Eng. Appl. Artif. Intell. 25 (2012) 1577–1588. [47] M.A. Medina, S. Das, C.A. Coello Coello, J.M. Ramírez, Decomposition-based modern metaheuristic algorithms for multi-objective optimal power flow–A comparative study, Eng. Appl. Artif. Intell. 32 (2014) 10–20. [48] F. Zou, L. Wang, X. Hei, D. Chen, B. Wang, Multi-objective optimization using teaching-learning-based optimization algorithm, Eng. Appl. Artif. Intell. 26 (2013) 1291–1300. [49] E. Zitzler, K. Deb, L. 
Thiele, Comparison of multiobjective evolutionary algorithms: Empirical results, Evol. Comput. 8 (2000) 173–195. [50] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable multi-objective optimization test problems, Proceedings of the Congress on Evolutionary Computation (CEC-2002), (Honolulu, USA) 2002, pp. 825–830. [51] E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, V.G. Da Fonseca, Performance assessment of multiobjective optimizers: An analysis and review, IEEE Trans. Evol. Comput. 7 (2003) 117–132. [52] K.M. Van Geem, D. Hudebine, M.F. Reyniers, F. Wahl, J.J. Verstraete, G.B. Marin, Molecular reconstruction of naphtha steam cracking feedstocks based on commercial indices, Comput. Chem. Eng. 31 (2007) 1020–1034. [53] S.P. Pyl, K.M. Van Geem, M.F. Reyniers, G.B. Marin, Molecular reconstruction of complex hydrocarbon mixtures: An application of principal component analysis, AIChE J 56 (2010) 3174–3188. [54] F. Sun, F. Pang, M. Liu, Construction of column-orthogonal designs for computer experiments, Sci. China Math. 54 (2011) 2683–2692. [55] K.-T. Fang, H. Qin, A note on construction of nearly uniform designs with large number of runs, Stat. Probab. Lett. 61 (2003) 215–224. [56] K.-T. Fang, W.-C. Shiu, J.-X. Pan, Uniform designs based on Latin squares, Stat. Sin. 9 (1999) 905–912.