A hybrid DNA based genetic algorithm for parameter estimation of dynamic systems


Chemical Engineering Research and Design 90 (2012) 2235–2246

journal homepage: www.elsevier.com/locate/cherd

Kan Dai, Ning Wang∗
National Laboratory of Industrial Control Technology, Institute of Cyber-systems and Control, Zhejiang University, Hangzhou 310027, PR China

Abstract: Inspired by evolutionary strategies and the biological DNA mechanism, a hybrid DNA based genetic algorithm (HDNA-GA) with a population update operation and an adaptive parameter scope operation is proposed for solving parameter estimation problems of dynamic systems. The HDNA-GA adopts nucleotide-based coding and several molecular operations. In the HDNA-GA, three new crossover operators, the replacement operator, the transposition operator and the reconstruction operator, are designed to improve population diversity, and a mutation operator with adaptive mutation probability is applied to guard against stalling at local peaks. Besides, a simulated annealing based selection operator is used to guide the evolution direction. In order to overcome the premature convergence drawbacks of GAs and enhance the global and local search abilities of the algorithm, the population update operator and the adaptive parameter scope operator are suggested. Numerous comparative experiments on benchmark functions and real-world parameter estimation problems in dynamic systems are conducted, and the results demonstrate the effectiveness and efficiency of the HDNA-GA. © 2012 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.

Keywords: Genetic algorithm (GA); DNA computing; Evolutionary strategies (ESs); Simulated annealing (SA); Parameter estimation; Chemical kinetics

1. Introduction

Mathematical models play an important role in predicting essential physical phenomena, and obtaining an accurate model is the basis of the design, optimization and control of dynamic systems. However, it is very difficult to obtain accurate models of real-world dynamic systems because of their complicated, nonlinear process behavior. Generally, the model of a dynamic system is obtained in two steps. First, the kinetic model is built through mechanism analysis of the whole process and interpretation of the experimental data. Second, a suitable optimization method is applied to obtain appropriate model parameters. Obtaining suitable parameters therefore plays a crucial role in the whole modeling procedure. Determining appropriate model parameters is a parameter estimation problem, which is essentially an optimization problem: the unknown parameters are obtained by minimizing a weighted distance measure between the observations and the predictions of the quantitative model.



In practice, accurate model parameters are extremely difficult to estimate. Many deterministic methods, such as Newton's method (Linga et al., 2006), trust region methods (Ardenghi et al., 2003) and SQP methods (Leineweber et al., 2003), tend to fail because of their strict application conditions. Therefore, researchers have paid more and more attention to intelligent optimization methods. The genetic algorithm (GA) developed by Holland requires little prior information and has excellent global search ability, so it can be employed well in parameter estimation problems (Holland, 1975; Tveito et al., 2010), and applications of GAs in various engineering disciplines have increased (Altinten et al., 2006; Iqbal and Guria, 2009; Izadifar and Baik, 2007; Nougues et al., 2002). Although the genetic algorithm has performed well in many real-world engineering problems, it has some limitations, such as premature convergence, poor local search capability and the binary Hamming cliff problem. In order to improve the search ability of genetic algorithms and achieve better performance, various hybrid

Corresponding author. Fax: +86 0571 87952279. E-mail addresses: [email protected], [email protected] (N. Wang). Received 26 July 2011; Received in revised form 20 March 2012; Accepted 25 May 2012 0263-8762/$ – see front matter © 2012 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.cherd.2012.05.018


methods have been studied in recent years (Moon et al., 2008; Katare et al., 2004; Mansoornejad et al., 2008; Wang and Wang, 2010). Most of these methods attempt to combine the global search ability of the GA with the local search advantage of deterministic methods so as to obtain better performance than either alone. However, such hybrid optimization methods are constrained if the GA fails to locate a promising initial condition near the global optimum. In order to overcome the drawbacks of the GA, some improved GAs based on the biological DNA mechanism have emerged. Tao and Wang (2007, 2008) proposed a DNA computing based RNA genetic algorithm (RNA-GA) to estimate the parameters of chemical engineering processes and a DNA double-helix based hybrid genetic algorithm to solve the gasoline blending recipe optimization problem. Chen and Wang (2009, 2010) proposed a DNA based genetic algorithm for parameter estimation in the hydrogenation reaction and a DNA based hybrid genetic algorithm for gasoline blending scheduling problems. Zhang and Wang (2012) presented a modified DNA genetic algorithm for parameter estimation of 2-chlorophenol oxidation in supercritical water. Wang and Wang (2011) presented a protein-inspired RNA genetic algorithm for parameter estimation in hydrocracking of heavy oil. Encouraged by their work and inspired by the RNA-GA and evolutionary algorithms, a hybrid DNA based GA (HDNA-GA) with an evolutionary strategies (ESs) based population update operation and an adaptive parameter scope operation is proposed in this work. The HDNA-GA adopts nucleotide coding and more advanced genetic operators, including the new crossover operators, the mutation operator and the simulated annealing based selection operator. In addition, the ESs based population update operator and the adaptive parameter scope operator are employed in the HDNA-GA to further improve the optimization performance.
Numerical studies on several benchmark functions show the superiority of the HDNA-GA over the SGA and RNA-GA. Moreover, some real-world chemical engineering and ecology problems involving chemical kinetic models are also well solved by the proposed algorithm. The rest of this paper is organized as follows. Section 2 describes the hybrid DNA based genetic algorithm. Section 3 presents numerical experiments comparing the proposed algorithm with other optimization algorithms on several benchmark functions. In Section 4, the HDNA-GA is applied to parameter estimation problems in four real-world applications so as to further demonstrate its efficiency and effectiveness. Finally, conclusions are drawn in Section 5.

2. The hybrid DNA based genetic algorithm (HDNA-GA)

2.1. Encoding method of DNA

According to biology, DNA is the major genetic material of life and encodes plentiful genetic information. A DNA sequence contains four types of nucleotides: adenine (A), guanine (G), cytosine (C) and thymine (T). Encoding is an effective way to generate representations of individuals and make them suitable for recombination and mutation. In the nucleotide-based encoding (Li et al., 2004), since computers cannot process the nucleotide bases directly, we adopt the quaternary encoding space E = {0, 1, 2, 3}^l to denote the encoding space E = {C, T, A, G}^l, where l is the length of the DNA sequences.
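The digit-to-base mapping and the Watson–Crick complement (A pairs with T, G pairs with C) can be sketched in a few lines of Python. This is an illustrative sketch of the encoding described above, not code from the paper; the function names are ours.

```python
# Quaternary DNA encoding used in the text: C=0, T=1, A=2, G=3.
# Under Watson-Crick pairing (A-T, C-G) the complement of digit d is 3 - d.
DIGIT_TO_BASE = {0: "C", 1: "T", 2: "A", 3: "G"}

def to_bases(seq):
    """Render a quaternary digit string as nucleotide letters."""
    return "".join(DIGIT_TO_BASE[int(d)] for d in seq)

def complement(seq):
    """Return the Watson-Crick complement of a quaternary digit string."""
    return "".join(str(3 - int(d)) for d in seq)

# The paper's example: TTCGC (11030) pairs with AAGCG (22303).
assert to_bases("11030") == "TTCGC"
assert complement("11030") == "22303"
assert to_bases("22303") == "AAGCG"
```

Because the complement of digit d is simply 3 − d, complementing a sequence twice returns the original sequence.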

According to the Watson–Crick complementarity principle, bonding occurs by the pairwise attraction of bases, A with T and G with C, which makes the nucleotide-based DNA encoding different from normal quaternary encoding. For example, if a sequence is TTCGC (11030), its complement is AAGCG (22303). The encoding and decoding procedures complete the mutual mapping between the problem space and the DNA operator space. The process can be described as follows. Firstly, the parameters are encoded as quaternary data sequences. Then the sequences change their forms through DNA molecular operators. Finally, through the decoding procedure, an inverse mapping from the DNA encoding space to the real-value space, we obtain the optimization results. Generally, an n-variable optimization problem can be written as follows:



min f(x1, x2, …, xn)
s.t. xmin,i ≤ xi ≤ xmax,i, i = 1, 2, …, n    (1)

where xi (i = 1, 2, …, n) are the decision or control variables, f(x) is the objective function and [xmin,i, xmax,i] are the parameter bounds. In the optimization problem, every variable xi is represented as an integer string of length l. Within the bound [xmin,i, xmax,i], the lower limit xmin,i is represented by the decoded integer 0 and the upper limit xmax,i by the decoded integer 4^l − 1. In the encoding procedure, the continuous range [xmin,i, xmax,i] is divided into 4^l segments, so the precision of variable xi is (xmax,i − xmin,i)/4^l and the length of one individual is L = n × l. Based on this DNA nucleotide encoding method, we propose a hybrid DNA based genetic algorithm (HDNA-GA).
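The decoding map above (all-zeros string to xmin,i, all-threes string, i.e. integer 4^l − 1, to xmax,i) can be sketched as follows. This is a minimal Python sketch of that mapping under our reading of the text; the function names are ours, and we divide by 4^l − 1 so that both endpoints are reached exactly.

```python
def decode(seq, xmin, xmax):
    """Map a quaternary digit string of length l to a real value in [xmin, xmax].

    The all-zeros string decodes to xmin; the all-threes string (the integer
    4**l - 1) decodes to xmax, as described in the text.
    """
    l = len(seq)
    k = int(seq, base=4)  # quaternary digits -> integer in [0, 4**l - 1]
    return xmin + k * (xmax - xmin) / (4**l - 1)

def encode(x, xmin, xmax, l):
    """Inverse mapping: quantize x onto the 4**l - 1 grid and emit base-4 digits."""
    k = round((x - xmin) / (xmax - xmin) * (4**l - 1))
    digits = []
    for _ in range(l):
        digits.append(str(k % 4))
        k //= 4
    return "".join(reversed(digits))

assert decode("0000", -1.0, 1.0) == -1.0
assert decode("3333", -1.0, 1.0) == 1.0
```

With l = 10 the round-trip error is below the quantization step (xmax − xmin)/4^l, consistent with the precision stated above.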

2.2. Population update operation

As the evolution generations increase, the differences among individuals in the population become small. In this case, the GA cannot effectively maintain population diversity and avoid premature convergence. Therefore, the (μ, λ)-ES is utilized to update the population in every generation. Evolutionary strategies (ESs) were proposed by Rechenberg in Berlin in the 1960s and further developed by Schwefel (Kursawe, 1991). The notations (1 + 1)-ES, (μ + 1)-ES, (μ + λ)-ES, (μ, λ)-ES, etc. characterize evolution strategies with an increasing level of imitation of biological evolution: μ denotes the total number of parents and λ the number of offspring. The (1 + 1)-ES was proposed first and gradually extended to the (μ + 1)-ES, in which only one child individual is produced from the μ parent individuals. Afterwards the theory was extended to the (μ + λ)-ES and the (μ, λ)-ES. As the nomenclature (μ + λ)-ES suggests, μ parents produce λ offspring, which are reduced again to the μ individuals of the next generation; thus, parents survive until they are superseded by better offspring. This feature gives rise to some deficiencies of the (μ + λ)-ES, so Schwefel investigated the properties of a (μ, λ)-ES in which only the offspring undergo selection. According to the (μ, λ)-ES theory, the lifetime of every individual is limited to one generation, which effectively eliminates the effects of improper internal parameter settings. The (μ, λ)-ES based population update operation works as follows. At the beginning of every generation, we decide whether the (μ, λ)-ES based population update operator is adopted. The parameter tav is defined as follows: if the difference between the best fitness function value of the current generation and that of the previous one is not more than a constant δ = 0.01, tav increases by 1; otherwise
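The difference between comma and plus selection described above can be shown with a toy sketch. This is an illustrative example with hypothetical scalar individuals and fitness, not part of the HDNA-GA itself.

```python
def es_select(parents, offspring, mu, fitness, plus=False):
    """One generational selection step of an evolution strategy (minimization).

    (mu, lambda)-ES: the next generation is the best mu of the offspring only,
    so every individual lives for exactly one generation.
    (mu + lambda)-ES (plus=True): parents compete with the offspring and can
    survive until superseded by better offspring.
    """
    pool = (parents + offspring) if plus else offspring
    return sorted(pool, key=fitness)[:mu]

# Toy illustration with scalar individuals and f(x) = x**2 (hypothetical data).
f = lambda x: x * x
parents, offspring = [0.1, 5.0], [2.0, -3.0, 0.5, 4.0]
assert es_select(parents, offspring, 2, f) == [0.5, 2.0]             # comma: parents die out
assert es_select(parents, offspring, 2, f, plus=True) == [0.1, 0.5]  # plus: best parent survives
```

Note how the comma variant discards the good parent 0.1, which is exactly the property the text exploits to escape improper internal parameter settings.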


Fig. 1 – Replacement operation process.

it is reset to zero. If tav is greater than a user-defined constant (in this work, tav ≥ tav0 = 5), the operator is adopted. Firstly, every individual is compared with the best individual of the previous generation; if the number of identical bases between the selected individual and the best individual is greater than idb (idb = ηL, η = 0.7), the selected individual is deleted and a new individual is generated at random. Next, λ offspring are produced from the μ parent individuals (λ = 6μ, μ = N). Two individuals are randomly selected from the μ parents, and if the number of identical bases between them is greater than idb, two individuals are reselected. For every base of a new individual there are four possibilities: it is chosen as the corresponding base of the first parent with probability pes1 = 0.1, as the corresponding base of the second parent with probability pes2 = 0.1, as the average of the two selected individuals' corresponding bases with probability pes3 = 0.4, or as a newly generated random base with probability pes4 = 0.4. At last, the λ new individuals are reduced again to μ ones. Every individual is selected with a certain probability, which can be described as follows: p(vj) = fav(tav) pdf(vj) + (1 − fav(tav)) pfh(vj)

(2)

where vj (j = 1, 2, …, λ) denote the λ individuals, pfh(vj) is the ratio of the corresponding individual's fitness function value to the sum of all individuals' fitness function values, and pdf(vj) is the ratio of the corresponding individual's alien base value to the sum of all individuals' alien base values. The individual's alien base value is defined as the number of disparate bases between the selected individual and the best individual at the corresponding positions, divided by the length of the individual (e.g., if the selected individual is 2011231012 and the best individual is 2221311012, the number of disparate bases is 4, so the alien base value is 4/10 = 0.4). The function fav(·) is set as fav(x) = exp(−8.5338/x²). Once the new μ individuals replace the old ones, the population update operation is completed.
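The alien base value and the mixed probability of Eq. (2) can be sketched as follows. This is a minimal Python sketch under the definitions above; the function names are ours, and the fitness and alien values in the example are hypothetical.

```python
import math

def alien_base_value(individual, best):
    """Fraction of positions where the individual differs from the best one.

    The paper's example: '2011231012' vs '2221311012' differs at 4 of 10
    positions, giving 0.4.
    """
    return sum(a != b for a, b in zip(individual, best)) / len(best)

def update_probabilities(fitnesses, alien_values, tav):
    """Eq. (2): p(v_j) = f_av(tav) * p_df(v_j) + (1 - f_av(tav)) * p_fh(v_j),
    with f_av(x) = exp(-8.5338 / x**2) as given in the text."""
    fav = math.exp(-8.5338 / tav**2)
    pfh = [f / sum(fitnesses) for f in fitnesses]
    pdf = [a / sum(alien_values) for a in alien_values]
    return [fav * d + (1 - fav) * h for d, h in zip(pdf, pfh)]

assert alien_base_value("2011231012", "2221311012") == 0.4
probs = update_probabilities([1.0, 3.0], [0.2, 0.6], tav=5)
assert abs(sum(probs) - 1.0) < 1e-12  # mixture of two distributions sums to 1
```

Since pdf and pfh each sum to one, any convex combination of them is again a probability distribution, which is why p(vj) can be used directly for selection.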

2.3. Crossover operators

As the most important operator in a GA, the crossover operator generates new individuals and improves population diversity. However, both the single-point and multi-point crossover used in the SGA are too simple to achieve the desired effect in many cases. In this work, we seek more efficient crossover operators based on DNA molecular operations. After analysis and comparison, three new crossover operators are designed: the replacement operator, the transposition operator and the reconstruction operator. The replacement and transposition operators are modeled on the traditional GA crossover operator but outperform it, while the reconstruction operator makes good use of the DNA encoding method and more closely simulates the behavior of biological DNA molecules. The three new crossover operators are described as follows:

(1) Replacement operator: this operator acts on two individuals. Firstly, two parent individuals are randomly selected and assumed to be R^1 = R^1_1 R^1_2 … R^1_L and R^2 = R^2_1 R^2_2 … R^2_L. Then a subsequence R_s^1 = R^1_i … R^1_{i+m1} is randomly selected from R^1, where R^1_i is a randomly chosen position in R^1 and m1 is a positive integer, with 1 < m1 + i ≤ L. Similarly, the subsequence selected from R^2 is R_s^2 = R^2_j … R^2_{j+m2} (1 < m2 + j ≤ L). Exchanging the subsequences R_s^1 and R_s^2, the two new (child) individuals become R'^1 = R^1_1 … R^2_j … R^2_{j+m2} … R^1_L and R'^2 = R^2_1 … R^1_i … R^1_{i+m1} … R^2_L (see Fig. 1).

(2) Transposition operator: this operator acts on a single individual. Firstly, a parent individual R = R_1 R_2 … R_L is randomly selected. Then a position R_i in R and a positive integer m are chosen at random, giving the subsequence R_s = R_i … R_{i+m} (1 < m + i < L). Another position R_j (1 ≤ j ≤ L, j ∉ [i, i + m]) is selected at random. Next, R_s is cut out of R and inserted immediately after position R_j. The new individual becomes R' = R_1 … R_j R_i … R_{i+m} R_{j+1} … R_{i−1} R_{i+m+1} … R_L (for j < i) or R' = R_1 … R_{i−1} R_{i+m+1} … R_j R_i … R_{i+m} R_{j+1} … R_L (for j > i + m) (see Fig. 2).

(3) Reconstruction operator: just like the replacement operator, two parent individuals R^1 = R^1_1 R^1_2 … R^1_L and R^2 = R^2_1 R^2_2 … R^2_L are randomly selected first. Then a position R^1_i is chosen at random in the latter half of R^1 (L/2 < i < L), and the subsequence after R^1_i is defined as R_S = R^1_{i+1} … R^1_L. R_S is cut out of R^1 and inserted ahead of R^2. Because the length of every individual is fixed, the extra bases at the back of R^2 are cut off, and a randomly generated sequence R_S' = R'_{i+1} … R'_L replaces R_S in R^1. Therefore, the two new (child) individuals become R'^1 = R^1_1 … R^1_i R'_{i+1} … R'_L and R'^2 = R^1_{i+1} … R^1_L R^2_1 … R^2_i (see Fig. 3).

In this study, the fitness function values of all individuals are sorted from small to large, and all individuals kept in the population are divided into two classes, a deleterious part and a neutral part, according to their fitness values (Neuhauser and Krone, 1997). The best ROUND(N/2) individuals in terms of fitness are the neutral part and the worst N − ROUND(N/2) are the deleterious part, where N is the size of the population and ROUND(·) is a rounding function. In order to increase the population diversity and make maximum use of the deleterious part, we design an operator for

Fig. 2 – Transposition operation process.


the deleterious part that generates two new parts, called the complementary part and the chaotic part. Inverting every base of the deleterious part (A ↔ T, C ↔ G) yields the complementary part, and randomly permuting the positions of the complementary part yields the chaotic part (e.g., inverting every base of '10123' gives the complementary sequence '23210', which after random repositioning may become the chaotic sequence '31022'). Accordingly, the replacement operator is employed twice in every generation: in the first operation the two parent individuals are both selected from the neutral part, with probability of occurrence pc1a; in the second operation the two parents are selected from the neutral part and the chaotic part respectively, with probability of occurrence pc1b. The transposition operator is also employed twice in every generation: in the first operation the individual is selected from the neutral part, with probability of occurrence pc2a; in the second operation it is selected from the chaotic part, with probability of occurrence pc2b. The reconstruction operator is employed only once, under the condition that neither the replacement operator nor the transposition operator has been adopted, with probability of occurrence pr0.
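The construction of the complementary and chaotic parts can be sketched as follows. This is a minimal Python sketch of the two steps described above (digit inversion, then random permutation); the function names are ours.

```python
import random

def complementary(seq):
    """Invert every base (A <-> T, C <-> G): digit d becomes 3 - d."""
    return "".join(str(3 - int(d)) for d in seq)

def chaotic(seq, rng=random):
    """Randomly permute the positions of the complementary sequence."""
    bases = list(complementary(seq))
    rng.shuffle(bases)
    return "".join(bases)

# The text's example: '10123' -> complementary '23210' -> e.g. chaotic '31022'.
assert complementary("10123") == "23210"
assert sorted(chaotic("10123")) == sorted("23210")  # same multiset of bases
```

Only the positions are shuffled, so the chaotic individual always has the same multiset of bases as the complementary one.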

2.4. Mutation operator

The mutation operator generates new genetic material and maintains population diversity; exploiting it lets the search process avoid falling into local optima and achieve better optimization performance. Therefore, the mutation operator plays an important part in evolutionary algorithms. In most GAs and some other evolutionary algorithms, the mutation operator is the normal mutation (NM) operator, in which every base of an individual mutates with a fixed probability. Although this approach is easy to implement and may obtain desirable results for some simple problems, the fixed mutation rate and the identical treatment of every base make late-stage evolution hard to converge to the optimal value, so it often cannot meet more demanding accuracy and speed requirements. In this study, an adaptive mutation operator is adopted to suit the changing state of the evolution. The adaptive mutation operator is described in detail as follows. According to Neuhauser and Krone (1997), there exist 'hot spots' and 'cold spots' in a DNA sequence, i.e., the nucleotide bases in the 'cold spots' mutate more slowly than those in the 'hot spots', which is in line with the fact that spots at different positions have different effects on the solutions of the problem. Hence, in order to explore a larger feasible region at the beginning of the evolution, we give the higher bit positions a larger mutation probability than the lower ones. When the global optimum region is found, to prevent better solutions from being disrupted, the mutation probabilities in the

higher bit positions are decreased. At the ending stage of evolution, contrary to the beginning stage, a smaller probability is needed in order to avoid losing the optimum and to make the algorithm converge faster. We define the left half of the nucleotide bases of a DNA sequence as the high bit positions and the right half as the low bit positions. Accordingly, there are two mutation probabilities, pmh and pml, described as follows:

pmh = a1 + b1/(1 + exp(aa(g − g0)))
pml = a1 + b1/(1 + exp(−aa(g − g0)))    (3)

where a1 denotes the initial mutation probability of pml, b1 denotes the range of transmutability, g is the evolution generation, g0 denotes the generation at which the mutation probability changes greatly, and aa is the speed of change. In this study, the range of the mutation probability is limited to [0.02, 0.22], and the great-change point of mutation is g0 = Gmax/2 (Gmax is the maximal generation number). Among these parameters, the speed of change aa is the most difficult to set. On one hand, if the value of aa is too large, the change of mutation probability occurs only near g0; on the other hand, if it is too small, the change rate is similar over the whole evolution procedure, in which case the mutation operator is useless. In this work, we set the values as a1 = 0.02, b1 = 0.2, g0 = Gmax/2 and aa = 20/Gmax. Over the whole evolution, the change direction of pml is exactly opposite to that of pmh, so we locate g0 in the middle of the evolution. aa is inversely proportional to the number of evolution generations and proportional to the individual length. a1, the initial mutation probability of pml, should be small because the higher bit positions must have a larger probability so as to determine the optimal range quickly in the early stage of evolution. The range of transmutability b1 cannot be of the same order of magnitude as a1, which would leave no obvious difference between pmh and pml and make the adaptive mutation operation fail; of course, b1 cannot be too large either. So we set b1 ten times as large as a1. The change curves of the mutation probabilities pmh and pml are shown in Fig. 4. After the mutation operation, the new individuals are generated in the current generation. Since new individuals replace old ones, the mutation operator does not increase the number of individuals in the population.
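The crossover of the two probability curves of Eq. (3) can be sketched as follows. This is an illustrative Python sketch; we read the second 'g0' in the garbled parameter list as aa = 20/Gmax, which is an assumption on our part.

```python
import math

def mutation_probs(g, g_max, a1=0.02, b1=0.2):
    """Eq. (3): adaptive mutation probabilities for the high and low bit halves.

    Assumes g0 = g_max / 2 and aa = 20 / g_max, following our reading of the
    parameter settings in the text.
    """
    aa, g0 = 20.0 / g_max, g_max / 2.0
    pmh = a1 + b1 / (1.0 + math.exp(aa * (g - g0)))
    pml = a1 + b1 / (1.0 + math.exp(-aa * (g - g0)))
    return pmh, pml

# Early in the run the high bits mutate more; late in the run the roles swap.
early = mutation_probs(g=0, g_max=1000)
late = mutation_probs(g=1000, g_max=1000)
assert early[0] > early[1]   # pmh > pml at the start
assert late[0] < late[1]     # pmh < pml at the end
assert all(0.02 <= p <= 0.22 for p in early + late)  # within [a1, a1 + b1]
```

At g = g0 both probabilities equal a1 + b1/2, which is the crossover point visible in Fig. 4.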

2.5. Selection strategy

Selection operation determines which individuals are kept to next generation and how many individuals are replicated

Fig. 3 – Reconstruction operation process.

chemical engineering research and design 9 0 ( 2 0 1 2 ) 2235–2246

Fig. 4 – The change curves of pmh and pml.

to offspring. Therefore, the population evolution direction depends to a large extent on selection. In this work, the simulated annealing (SA) method, which has been widely used in global optimization (Dekkers and Aarts, 1991), is employed in the selection operation. In this case, the probability that an individual is selected for the next generation can be described as follows:

p(xi) = exp(f(xi)/Tk) / Σ_{i=1}^{c0} exp(f(xi)/Tk)    (4)

where p(xi) is the selection probability of the individual xi and f(xi) is the fitness function of xi. A certain number of individuals are randomly selected from the whole population; we denote the number of selected individuals by c0. In other words, if the total number of individuals in the population is C and a positive real number γ ∈ (0, 1) is randomly generated, then c0 = γC. Define {Tk} as a temperature sequence that converges to zero; its expression is Tk = ln(T0/k + 1) with T0 = 100, like the boiling point of water, where k is the number of evolution generations. In the traditional selection strategy, each selection depends on an individual's fitness value and a random value, so it reflects only the randomness of the whole population. In the SA based selection strategy, a small child population containing a random number of individuals is first determined, and the evolutionary individuals are then randomly selected from this child population. Therefore, the SA based selection strategy reflects not only the randomness of the whole population, but also the randomness of a part of the population. SA is a random optimization algorithm with strong convergence; its characteristics give the selection strategy a saturating behavior that effectively improves the search efficiency of the algorithm. The SA based selection strategy has the characteristic of double choice, which guarantees both the global and the local quality of the selected individuals. Besides the simulated annealing based selection operation, we directly keep the best individual for the next generation, which ensures the desired evolution direction.
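The double choice described above (a random sub-population, then Boltzmann-weighted sampling per Eq. (4)) can be sketched as follows. This is an illustrative Python sketch under our reading of the text, not the paper's implementation; it treats larger fitness as better.

```python
import math
import random

def temperature(k, T0=100.0):
    """T_k = ln(T0/k + 1): decreases toward zero as the generation k grows."""
    return math.log(T0 / k + 1.0)

def sa_select(population, fitness, T, rng=random):
    """SA-based selection sketch (Eq. (4)): draw a random sub-population of
    size c0 = gamma * C with gamma in (0, 1), then sample one individual from
    it with weights exp(f(x_i)/T_k)."""
    gamma = rng.random()
    c0 = max(1, int(gamma * len(population)))
    pool = rng.sample(population, c0)
    weights = [math.exp(fitness(x) / T) for x in pool]
    r, acc = rng.random() * sum(weights), 0.0
    for x, w in zip(pool, weights):
        acc += w
        if acc >= r:
            return x
    return pool[-1]

assert temperature(1) > temperature(100) > 0  # the schedule cools toward zero
chosen = sa_select(list(range(10)), fitness=float, T=temperature(1))
assert chosen in range(10)
```

As Tk shrinks, the Boltzmann weights concentrate on the fittest members of the sub-population, which is the saturating behavior mentioned above.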

2.6. Adaptive parameter scope operation

Generally, the bounds of every parameter are fixed, which makes the algorithm spend more time converging to the optimum solution. In order to enhance the local optimization ability, the adaptive parameter scope operation is designed.


In this work, we combine the (μ, λ)-ES, which can essentially increase the diversity of the population, with the DNA based GA in order to greatly enhance the global search ability. However, the local search ability is thereby weakened, which makes the algorithm cost more time to find the optimum solution. The adaptive parameter scope operator is used to enhance the local search ability: the parameter bounds are no longer fixed values, and both the upper and lower limits change adaptively as the evolution proceeds. The adaptive parameter scope operation is described in detail as follows. Firstly, we recode the best individual of every generation and decode it from a quaternary number to a real number; the recoded sequences become bx1^t, bx2^t, …, bxn^t (t is the evolution generation), just as every individual can be decoded as x1, x2, …, xn. A counter delxi is associated with bxi^t (i = 1, 2, …, n), with initial value zero. Secondly, a judgment is made: if |bxi^t − bxi^{t−1}| ≤ ε (starting from t ≥ 2, with ε = 0.001), delxi is increased by one; otherwise delxi is reset to zero. Then, if delxi is greater than a user-defined constant (delxi ≥ del0), the bounds of xi are changed. At last, we change the scope of the parameters. The bounds [x^m_{min,i}, x^m_{max,i}] are defined as the result of the mth bounds change (m = 0 means the bounds take their original values). The rule can be described as follows:

x^m_{max,i} = min(x^{m−1}_{max,i}, bxi^t + Δbx)
x^m_{min,i} = max(x^{m−1}_{min,i}, bxi^t − Δbx)    (5)
Δbx = (x^{m−1}_{max,i} − x^{m−1}_{min,i})/10

Adopting the (μ, λ)-ES based population update operator improves the global search ability, and the adaptive parameter scope operator improves the local search ability, so the combination of the two operators can greatly enhance the search ability of the proposed algorithm.
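One step of the bounds-shrinking rule of Eq. (5) can be sketched as follows. This is a minimal Python sketch; reading Δbx as one tenth of the current interval width is our reconstruction of the garbled formula and should be treated as an assumption.

```python
def shrink_bounds(xmin, xmax, bx):
    """One application of the adaptive parameter scope rule (Eq. (5)):
    re-center the interval on the current best decoded value bx and shrink it,
    clipping against the previous bounds so the interval never grows."""
    delta = (xmax - xmin) / 10.0  # assumed reading of Delta_bx
    new_max = min(xmax, bx + delta)
    new_min = max(xmin, bx - delta)
    return new_min, new_max

lo, hi = -10.0, 10.0
for _ in range(3):  # repeated triggers keep tightening the box around bx
    lo, hi = shrink_bounds(lo, hi, bx=1.0)
assert lo <= 1.0 <= hi           # the best value always stays inside
assert hi - lo < 20.0 / 10       # the interval has shrunk substantially
```

Because the new limits are clipped by min/max against the previous ones, the interval can only shrink, concentrating the quaternary encoding's 4^l levels on an ever smaller region around the best solution.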

2.7. Procedure of the HDNA-GA

When the HDNA-GA is used to solve a complicated optimization problem, on the basis of the above description, the procedure can be summarized as follows (see Fig. 5). Step 1: Initialize a population with N individuals, which stand for N candidate solutions. Step 2: Calculate the fitness value of each individual. Step 3: Decide whether to adopt the (μ, λ)-ES based population update operator: if tav ≥ tav0 the operator is adopted; otherwise skip this step. Step 4: Divide all the individuals into two classes, the deleterious part and the neutral part, and generate the complementary part and the chaotic part from the deleterious part. Step 5: Employ the crossover operators: the replacement operator is carried out twice with probabilities pc1a and pc1b, the transposition operator is carried out twice with probabilities pc2a and pc2b, and the reconstruction operator is carried out only once with probability pr0. Repeat this step until N/2 new individuals are generated; at last insert all new individuals into the population without deleting any individuals. Step 6: Employ the adaptive mutation operator. Step 7: Before selection, increase the population diversity by adding some individuals. These individuals


come from the complementary part and the chaotic part, and they must meet the condition that their fitness function values are greater (less) than the average fitness function value of the neutral part multiplied (divided) by 1.5 (when the best fitness function value corresponds to a minimum). Step 8: Adopt the simulated annealing based selection operator to select (N − 1) individuals for the next generation, and keep the best individual as the last member of the next population. Step 9: Decide whether to adopt the adaptive parameter scope operator: if delxi ≥ del0 the bound values are updated; otherwise the bounds are unchanged. Step 10: Repeat Steps 2–9 until the termination conditions are met and the final solution is found.

Fig. 5 – Procedure of HDNA-GA.

3. Computational experiments

Generally, the performance of the HDNA-GA should be investigated by computational experiments. In this study, several benchmark functions are selected to show the superior performance of the HDNA-GA compared with the RNA-GA and the SGA. The details of the benchmark functions are shown in Table 1. The six selected benchmark functions all have typical characteristics of optimization problems. f1 is the well-known Rosenbrock function, whose optimum is located in a very narrow valley with a flat bottom; in addition, it is nonlinear and non-separable, so finding good solutions is a complicated task. The Griewank function f2 has plentiful, widely distributed local minima; it tests the global search ability and the ability to avoid being trapped in local optima. f3 is a non-separable nonlinear function whose optimum is located in the center of the search region; however, the difference between the optimum solution and the suboptimal solutions in the adjacent region is very small, which is deceptive for algorithms solving this problem. Schwefel's function f4 is symmetric, separable and multimodal; its global optimum lies near the boundary of the search region and far from the suboptima, so search algorithms are prone to converge to a suboptimal solution. f5 is the Goldstein and Price function, a multimodal function with many local minima. The Ackley function f6 has many local minima distributed widely and uniformly over the search region, but only one global optimum solution, at the center of the search region. For classic methods, obtaining a desirable optimum solution of the six benchmark functions is a hard task. In the experiments, three evolutionary algorithms, HDNA-GA, RNA-GA and SGA, are used to solve these problems, and the proposed HDNA-GA shows evident superiority. The parameters of the HDNA-GA are set as follows.
The population size is 60 (N = 60). The probabilities of the three crossover operators are set as pc1a = 0.7, pc1b = 0.3 + 0.08evg, pc2a = 0.54, pc2b = 0.2 + 0.08evg and pr0 = 0.2, where evg = (1/n) Σ_{i=1}^{n} delx_i. The parameter del0 is set to del0 = 25 or del0 = G/10: setting del0 = G/10 mainly aims at higher accuracy, while setting del0 = 25 mainly aims at faster convergence. Hence, del0 = G/10 is used with the first termination criterion and del0 = 25 with the second. For every test problem, each of the three algorithms (HDNA-GA, RNA-GA and SGA) is run 100 independent times. The termination criteria are set as follows: first, the number of evolution generations reaches 1000; second, the distance between the objective function value of the best-so-far individual and the known theoretical global optimum is smaller than ε = 0.0001. The experimental results are shown in Tables 2 and 3; clearly, the HDNA-GA is more reliable than SGA and RNA-GA. Both convergence speed and search accuracy are important aspects of an algorithm. Table 2 lists the optimum solution values, which characterize the search accuracy; these data are obtained under the first termination criterion. Table 3 lists the required evolution generations and the success rates, which characterize the convergence speed and efficiency; here the second termination criterion is used. Furthermore, comparison data between the HDNA-GA and other improved algorithms from the published literature are listed in Tables 4 and 5.
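The interplay of the two termination criteria can be illustrated with a deliberately naive random search on the shifted Griewank function f2. This sketches only the stopping logic (a budget of 1000 generations of 60 evaluations, or a best objective within ε = 0.0001 of the known optimum 0), not the HDNA-GA operators themselves.

```python
import math
import random

def griewank_shifted(x):
    # Shifted 2-D Griewank (f2 in Table 1): global minimum 0 at (100, 100).
    s = sum((xi - 100.0) ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos((xi - 100.0) / math.sqrt(i + 1))
                  for i, xi in enumerate(x))
    return 1.0 + s - p

def run_once(eps=1e-4, max_gen=1000, pop=60, seed=0):
    # Pure random search, used only to demonstrate the two stopping rules.
    rng = random.Random(seed)
    best = float("inf")
    for gen in range(1, max_gen + 1):
        for _ in range(pop):
            x = [rng.uniform(-600.0, 600.0) for _ in range(2)]
            best = min(best, griewank_shifted(x))
        if best < eps:                 # criterion 2: near the known optimum
            return gen, best, True
    return max_gen, best, False        # criterion 1: generation budget spent
```

Random search essentially never satisfies the second criterion on this landscape, which is exactly the gap the guided HDNA-GA operators are meant to close.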


chemical engineering research and design 9 0 ( 2 0 1 2 ) 2235–2246

Table 1 – Benchmark functions.

| Test function | Search region | Optimal solution | Optimal value |
|---|---|---|---|
| f1 (Rosenbrock): min f1(x) = 100(x2 − x1²)² + (1 − x1)² | x1, x2 ∈ [−5.12, 5.12] | (1, 1) | 0 |
| f2 (shifted Griewank): min f2(x) = 1 + Σ_{i=1}^{2} (xi − 100)²/4000 − Π_{i=1}^{2} cos((xi − 100)/√i) | x1, x2 ∈ [−600, 600] | (100, 100) | 0 |
| f3 (Schaffer): min f3(x) = 0.5 + (sin²√(x1² + x2²) − 0.5)/(1 + a(x1² + x2²))², a = 0.001 | x1, x2 ∈ [−10, 10] | (0, 0) | 0 |
| f4 (Schwefel): min f4(x) = −Σ_{i=1}^{2} xi sin(√\|xi\|) | x1, x2 ∈ [−500, 500] | (420.9687, 420.9687) | −837.9658 |
| f5 (Goldstein and Price): min f5(x) = [1 + (x1 + x2 + 1)²(19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)] × [30 + (2x1 − 3x2)²(18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)] | x1, x2 ∈ [−2, 2] | (0, −1) | 3 |
| f6 (Ackley): min f6(x) = c1 + exp(1) − c1 exp(−c2 √((1/D) Σ_{i=1}^{D} xi²)) − exp((1/D) Σ_{i=1}^{D} cos(c3 xi)), D = 2, c1 = 20, c2 = 0.2, c3 = 2π | xi ∈ [−32.768, 32.768] | (0, 0) | 0 |

Table 2 – Accuracy of optimization.

| Test function | HDNA-GA (Fmax / Fmin / Fave) | RNA-GA* (Fmax / Fmin / Fave) | SGA* (Fmax / Fmin / Fave) |
|---|---|---|---|
| f1 | 2.343E−9 / 1.866E−22 / 3.548E−10 | 3.479E−7 / 8.336E−9 / 8.57E−7 | 4.065E−5 / 3.254E−9 / 4.598E−6 |
| f2 | 1.338E−9 / 6.661E−16 / 2.16E−10 | 0.0074 / 4.156E−8 / 0.002 | 0.0271 / 5.849E−10 / 0.0102 |
| f3 | 1.108E−9 / 0 / 2.273E−10 | 0.0097 / 8.39E−2 / 0.0045 | 0.0097 / 1.42E−6 / 0.0049 |
| f4 | −837.9658 / −837.9658 / −837.9658 | −837.966 / −837.966 / −837.966 | −719.527 / −837.966 / −833.228 |
| f5 | 3 / 3 / 3 | 3 / 3 / 3 | 3 / 3 / 3 |
| f6 | 7.684E−5 / 8.882E−16 / 1.207E−5 | 1.98E−4 / 1.22E−5 / 7.24E−5 | 1.60E−4 / 2.94E−6 / 6.85E−5 |

*Data from Tao and Wang (2007), Chen and Wang (2009) and Chen (2010).

Table 3 – Generations of optimization.

| Test function | HDNA-GA (Gmax / Gmin / Gave / Suc) | RNA-GA* (Gmax / Gmin / Gave / Suc) | SGA* (Gmax / Gmin / Gave / Suc) |
|---|---|---|---|
| f1 | 44 / 10 / 29.42 / 100% | 1000 / 109 / 614.45 / 100% | 1000 / 77 / 311.3 / 100% |
| f2 | 86 / 7 / 37.36 / 100% | 1000 / 5 / 497.2 / 76% | 1000 / 89 / 859.3 / 21% |
| f3 | 48 / 19 / 34.62 / 100% | 1000 / 20 / 559.48 / 62% | 1000 / 11 / 623.34 / 46% |
| f4 | 70 / 22 / 45.04 / 100% | 603 / 117 / 323.5 / 100% | 1000 / 201 / 240.95 / 96% |
| f5 | 55 / 19 / 35.2 / 100% | 1000 / 25 / 159 / 100% | 1000 / 1000 / 1000 / 100% |
| f6 | 113 / 36 / 74.02 / 100% | 1000 / 36 / 685 / 100% | 1000 / 143 / 466 / 100% |

*Data from Tao and Wang (2007), Chen and Wang (2009) and Chen (2010).

Table 4 – Comparison data for the Rosenbrock function.

| Method | Fbest | Fave | Gmax/Gmin/Gave | SD |
|---|---|---|---|---|
| SSGA (Wang and Okazaki, 2007) | 1.5759E−10 | 2.5781E−3 | – | 7.6389E−3 |
| PfGA (Wang and Okazaki, 2007) | 3.7708E−11 | 7.6235E−4 | – | 1.7286E−3 |
| Improved GA (Wang and Okazaki, 2007) | 6.3056E−12 | 1.2780E−4 | – | 2.2475E−4 |
| DNA-GA (Chen and Wang, 2009) | 1.72E−13 | 1.97E−5 | 135/9/47.12 | – |
| HDNA-GA | 1.8663E−22 | 3.5479E−10 | 44/10/29.42 | 2.5201E−10 |

Table 5 – Comparison data for the 10- and 100-dimension Griewank function.

| Dimension | Metric | Moon and Linninger (2009) | SGA | BIAMC (Zhao and Wang, 2011) | HDNA-GA |
|---|---|---|---|---|---|
| 10D | Fbest/Fworst | 0/0.472 | 0.0615/0.677 | 0/0.221 | 6.10E−12/0.0032 |
| 10D | Fave | 0.091 | 3.462 | 9.88E−3 | 1.28E−4 |
| 10D | Suc | 90% | 0% | 72% | 96% |
| 100D | Fbest/Fworst | 0/4.5E−5 | 40.0890/85.7141 | 0/0 | 3.17E−22/1.1E−9 |
| 100D | Fave | 2.3E−5 | 59.3169 | 0 | 7.65E−10 |
| 100D | Suc | 100% | 0% | 100% | 100% |


The results indicate that the HDNA-GA has better performance than the other improved optimization algorithms in many respects.

4. Case study: parameter estimation of chemical kinetic models

Real-world dynamic systems, especially in chemical engineering and commercial run control, pose many challenges that differ from those of the test functions. In this section, several realistic examples are introduced to test the optimization ability of the proposed algorithm in a multidimensional environment. As in most parameter estimation problems, the error-in-variables formulation is used: the objective is to minimize the weighted squared error between the observed values and those predicted by the model. Such problems can be written in the following standard form:

min_{θ, ẑ} Σ_{μ=1}^{r} Σ_{m∈M} (ẑ_{μ,m} − z̄_{μ,m})²    (6)

subject to

ż_j = g(z, θ, t), j ∈ J
z_j(t_0) = z_0, j ∈ J
t ∈ [t_0, t_f]

where θ is the vector of parameters to be estimated; ẑ_μ is the vector (length |M|) of fitted data variables and z̄_μ is the vector (length |M|) of measured values at the μth data point (r points in total); t_μ is the time associated with the μth data point; z_0 is the vector of constant initial conditions; and ż_j = g(z, θ, t) defines the differential equations of the system. In all the following experiments, Eq. (6) is used as the fitness function of the proposed algorithm.

4.1. First-order irreversible series reaction

This model represents a first-order irreversible chain reaction, presented by Tjoa and Biegler (1991):

A →(k1) B →(k2) C

Only the concentrations of components A and B can be measured; component C does not appear in the model used for estimation. The differential equation model has the form

dz1/dt = −θ1 z1
dz2/dt = θ1 z1 − θ2 z2    (7)
z0 = [1, 0], t ∈ [0, 1]

where the state vector z is defined as [A, B] and the parameter vector θ as [k1, k2]. Following Esposito and Floudas (2000), the data used in this work are generated with parameter values θ = [5, 1] and no added error. The bounds of the estimated variables are set to θ ∈ [0, 10].

The simulation results of the different methods are compared in Table 6, and more specific information on how the parameter values and the objective function value of the proposed method change over the evolution generations is shown in Fig. 6. These results indicate that the HDNA-GA solves this problem very efficiently. As Table 6 shows, compared with the three methods of Esposito and Floudas (2000) and Lin and Stadtherr (2006), the optimum value reached by the proposed method, about 2E−12, greatly improves on their values of about 1E−6. This problem is not strongly deceptive and the local optimal regions differ markedly, so the HDNA-GA achieves a very apparent optimization effect. To simulate real-world data, a random error is added in the next three problems.

4.2. Catalytic cracking of gas oil

This model represents the catalytic cracking of gas oil (A) to gasoline (Q) and other side products (S). Its reaction scheme can be depicted as

A →(k1) Q →(k2) S, together with the direct route A →(k3) S

Only the concentrations of components A and Q can be measured; component S does not appear in the model used for estimation. This reaction scheme is more complex than the simple first-order kinetics of the previous example and involves nonlinear reaction kinetics. The differential equations take the form

dz1/dt = −(θ1 + θ3) z1²
dz2/dt = θ1 z1² − θ2 z2    (8)
z0 = [1, 0], t ∈ [0, 0.95]

where the state vector z is defined as [A, Q] and the parameter vector θ as [k1, k2, k3]. As Esposito and Floudas suggest, the data used in this work are generated with parameter values θ = [12, 8, 2] and a small amount of random error added. The initial bounds of the variables are set to θ ∈ [0, 20].

Table 7 compares the simulation results of the different methods, and more specific information on the proposed method is shown in Fig. 7. With no random error added, the optimum value of the HDNA-GA reaches 3.0947E−10 with corresponding parameters θ = [11.9999, 7.9997, 2.0004]. With a small amount of random error added, Table 7 shows that the accuracy of the optimum value of the HDNA-GA decreases to 5.1363E−5, which accords with practice but is still better than the other methods quoted in this study. In this problem, the global optimal solution clearly dominates the local optimal solutions, so the (μ, λ)-ES based population update operation can easily locate the global optimal region, and the adaptive parameter scope operation then completes the optimization quickly and accurately. Hence the HDNA-GA achieves a more apparent optimization effect than the other methods.
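As an illustration of how the error-in-variables objective of Eq. (6) is evaluated for models such as Eq. (7), the following self-contained Python sketch simulates the first-order series reaction with a classical fixed-step RK4 integrator and scores candidate parameters against data generated at the true values θ = [5, 1]. The function names, sampling times and step size are illustrative choices, not taken from the paper.

```python
def rk4_step(f, z, t, h, theta):
    # One classical Runge-Kutta 4 step for dz/dt = f(z, t, theta).
    k1 = f(z, t, theta)
    k2 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k1)], t + 0.5 * h, theta)
    k3 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k2)], t + 0.5 * h, theta)
    k4 = f([zi + h * ki for zi, ki in zip(z, k3)], t + h, theta)
    return [zi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

def series_rhs(z, t, theta):
    # Eq. (7): A -> B -> C with first-order kinetics; z = [A, B].
    return [-theta[0] * z[0], theta[0] * z[0] - theta[1] * z[1]]

def simulate(theta, times, z0=(1.0, 0.0), h=1e-3):
    out, z, t = [], list(z0), 0.0
    for tm in times:
        while t < tm - 1e-12:
            step = min(h, tm - t)
            z = rk4_step(series_rhs, z, t, step, theta)
            t += step
        out.append(list(z))
    return out

def objective(theta, times, data):
    # Eq. (6) with unit weights: squared residuals over points and states.
    pred = simulate(theta, times)
    return sum((p - d) ** 2
               for pr, dr in zip(pred, data) for p, d in zip(pr, dr))

times = [0.1 * k for k in range(1, 11)]   # 10 samples on (0, 1]
data = simulate([5.0, 1.0], times)        # noise-free "measurements"
```

At the true parameters the objective is essentially zero, while even a modest perturbation (e.g. θ1 = 4) produces a clearly larger residual, which is the signal the genetic search exploits.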


Table 6 – First-order irreversible series reaction.

| Method | θ1 | θ2 | Objective value | CPU time (s) |
|---|---|---|---|---|
| Collocation approach (Esposito and Floudas, 2000) | 5.0035 | 1.0000 | 1.1850E−6 | 25.16 |
| Integration approach (Esposito and Floudas, 2000) | 5.0035 | 1.0000 | 1.1858E−6 | 248.2 |
| Lin and Stadtherr (2006) | 5.0035 | 1.0000 | 1.1858E−6 | 3.34 |
| HDNA-GA | 5.0000 | 1.0000 | 2.9184E−12 | 3.22 |

Fig. 6 – Parameter values and objective function value versus evolution generation for problem 1.

Table 7 – Catalytic cracking of gas oil.

| Method | θ1 | θ2 | θ3 | Objective value | CPU time (s) |
|---|---|---|---|---|---|
| Collocation approach (Esposito and Floudas, 2000) | 12.2120 | 7.9800 | 2.2220 | 2.6384E−3 | 796.3 |
| Integration approach (Esposito and Floudas, 2000) | 12.2140 | 7.9800 | 2.2220 | 2.6557E−3 | >3600 |
| Lin and Stadtherr (2006) | 12.2139 | 7.9798 | 2.2217 | 2.6557E−3 | 9.2 |
| HDNA-GA | 12.0496 | 8.0182 | 1.9635 | 5.1361E−5 | 13.2 |

Fig. 7 – Parameter values and objective function value versus evolution generation for problem 2.

Table 8 – Bellman's problem.

| Method | p1 | p2 | Objective value | CPU time (s) |
|---|---|---|---|---|
| Collocation approach (Esposito and Floudas, 2000) | 4.5930E−6 | 2.8285E−4 | 22.568 | 499.3 |
| Integration approach (Esposito and Floudas, 2000) | 4.5704E−6 | 2.7845E−4 | 22.031 | >3600 |
| Katare et al. (2004) | 4.5815E−6 | 2.7899E−4 | 22.286 | 39.56 |
| HDNA-GA | 4.5450E−6 | 2.6214E−4 | 0.1065 | 41.2 |


Fig. 8 – Parameter values and objective function value versus evolution generation for problem 3.

Fig. 9 – Parameter values and objective function value versus evolution generation for problem 4.

4.3. Bellman's problem

This problem describes a reversible homogeneous gas-phase reaction of the form

2NO + O2 ⇌ 2NO2

The model was first presented by Bellman et al. (1967) and has since been studied by many scholars, such as Tjoa and Biegler (1991) and Esposito and Floudas (2000). Its differential equation is nonlinear and takes the form

dz/dt = p1 (c1 − z)(c2 − z)² − p2 z²    (9)
z0 = 0, t ∈ [0, 39.0]

where the constants c = [126.2, 91.9]. The data used in this work are generated with parameter values p = [1 × 10−6, 1 × 10−4] and a small amount of random error added. This is a very difficult problem to solve, even from a local standpoint; compared with the previous two problems and the next one, a desirable solution is harder to obtain, and fewer methods have been proposed for it. The initial bounds of the variables are set to p ∈ [0, 0.1]. The computational results are shown in Table 8. Since the algorithm proposed by Lin and Stadtherr has not been applied to this problem, the algorithm proposed by Katare et al. (2004) is quoted instead. More specific information demonstrating the effectiveness and efficiency of the proposed algorithm can be seen in Fig. 8.
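To give a feel for the dynamics of Eq. (9), the sketch below integrates the NO2 balance with a fixed-step RK4 scheme. The parameter values are roughly the estimates reported in Table 8, and the step size is an illustrative choice rather than one taken from the paper.

```python
def bellman_rhs(z, p1, p2, c1=126.2, c2=91.9):
    # Eq. (9): dz/dt = p1 (c1 - z)(c2 - z)^2 - p2 z^2
    return p1 * (c1 - z) * (c2 - z) ** 2 - p2 * z ** 2

def integrate(p1, p2, t_end=39.0, h=0.01, z=0.0):
    # Classical RK4 with a fixed step; adequate for this mildly stiff ODE.
    n = int(round(t_end / h))
    for _ in range(n):
        k1 = bellman_rhs(z, p1, p2)
        k2 = bellman_rhs(z + 0.5 * h * k1, p1, p2)
        k3 = bellman_rhs(z + 0.5 * h * k2, p1, p2)
        k4 = bellman_rhs(z + h * k3, p1, p2)
        z += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

z_final = integrate(4.545e-6, 2.62e-4)  # roughly the HDNA-GA estimates
```

The trajectory rises monotonically from z0 = 0 toward the chemical equilibrium where the forward and reverse terms of Eq. (9) balance, which is why the residual surface is so flat in p and the problem is hard even locally.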

Table 9 – Lotka–Volterra problem.

| Method | θ1 | θ2 | Objective value | CPU time (s) |
|---|---|---|---|---|
| Collocation approach (Esposito and Floudas, 2000) | 3.2521 | 0.9183 | 1.3194E−3 | 3853.3 |
| Integration approach (Esposito and Floudas, 2000) | 3.2434 | 0.9209 | 1.2493E−3 | >3600 |
| Lin and Stadtherr (2006) | 3.2434 | 0.9209 | 1.2492E−3 | 53.2 |
| HDNA-GA | 3.0798 | 0.9727 | 1.1570E−3 | 45.0 |


4.4. Lotka–Volterra problem

This model is a representation of the predator-prey model used in ecology, which has been studied by Luus (1998) and Esposito and Floudas (2000). The differential equations of the system take the form

dz1/dt = θ1 z1 (1 − z2)
dz2/dt = θ2 z2 (z1 − 1)    (10)
z0 = [1.2, 1.1], t ∈ [0, 10]

where z1 represents the population of the prey and z2 the population of the predator. The data used in this work are generated with parameter values θ = [3, 1], with a small amount of normally distributed random error (zero mean, σ = 0.01) added. The bounds of the estimated variables are set to θ ∈ [0, 10]. The simulation results of the different methods are shown in Table 9, and more specific information demonstrating the rapidity and accuracy of the HDNA-GA is shown in Fig. 9.

In the above experiments, every problem is run 50 times, and all 50 runs find the global optimum. The data filled into Tables 6–9 are randomly selected from the 50 runs rather than being the best ones; hence the search precision and the success rate of the proposed method are assured. The CPU times in Tables 6–9 are the sums over all 50 runs. The computations were performed on a 2-CPU, 2.2 GHz machine running Windows 7; the solution speed for a particular problem depends not only on the processor speed but also on the memory architecture and the compiler. Comparing the CPU times in Tables 6–9 shows that the methods reported by Esposito and Floudas (2000) need far more time, while the differences between the new algorithm and the other recent algorithms are slight.

5. Conclusions

In this work, a hybrid DNA based genetic algorithm (HDNA-GA) is proposed for solving optimization problems. The HDNA-GA makes use of the DNA nucleotide encoding method and several new genetic operations, including crossover, mutation and selection operations. The three new crossover operators, the replacement, transposition and reconstruction operators, help keep the population diverse and guide the evolution direction. Inspired by the idea of 'hot spots' and 'cold spots', an adaptive mutation probability is adopted in the mutation operator to suit the changing state of the evolution; this operator generates suitable and plentiful new genetic material and guards against stalling at local peaks. The selection operator based on SA effectively leads the evolution direction. Together, these operations evidently improve the optimization ability of the proposed algorithm, especially its global search ability. To remedy the still insufficient local search ability, the adaptive parameter scope operation is employed; it strengthens the local search and improves the optimization efficiency, but at the same time weakens the global search. Adopting the (μ, λ)-ES based population update operation offsets this weakness, and its own disadvantage of weakening the local search is in turn offset by the adaptive parameter scope operator.

Through the appropriate combination of the adaptive parameter scope operation and the (μ, λ)-ES based population update operation, the HDNA-GA achieves the best optimization results. Comparative experiments on benchmark functions demonstrate the power of the HDNA-GA in terms of optimization accuracy and convergence speed compared with SGA, RNA-GA, DNA-GA and other improved algorithms. Moreover, the experimental results on four real-world parameter estimation problems of chemical kinetic models validate the accuracy and efficiency of the proposed approach.

Acknowledgments

This paper has been supported by the National Natural Science Foundation of China under Grants Nos. 60874072 and 60721062.

Appendix A. Nomenclature and abbreviations

xi: the variables in the global optimization problem (i = 1, 2, ..., n)
L: the length of an individual
N: population size
E: encoding space
Ri: the elements of the individual sequence
pc1, pc2, pr: the probabilities of the replacement, transposition and reconstruction operators
pmh, pml: the probabilities of mutation in high and low bit positions
Gmax: the maximum evolution generation of the algorithm
p(xi): the probability of selecting the individual xi
tav0: the threshold value for adopting the (μ, λ)-ES based population update operator
μ, λ: the numbers of parent and child individuals
vj: the λ individuals (j = 1, 2, ..., λ)
pfh(vj): the ratio of each individual's fitness value to the sum of all individuals' values
pdf(vj): the ratio of each individual's alien-base value to the sum of all individuals' values
pμλ(vj): the selection probability of each individual in the (μ, λ)-ES
del0: the threshold value for adopting the adaptive parameter scope operator
bxit: the bounds of xi at the tth generation
SGA: standard genetic algorithm
RNA-GA: RNA genetic algorithm
DNA: deoxyribonucleic acid
SA: simulated annealing
ESs: evolutionary strategies
HDNA-GA: hybrid DNA based genetic algorithm with the population update operator and the adaptive parameter scope operator
A: adenine
T: thymine
C: cytosine
G: guanine

References

Altinten, A., Erdogan, S., Hapoglu, H., Aliev, F., Alpbaz, M., 2006. Application of fuzzy control method with genetic algorithm to a polymerization reactor at constant set point. Chem. Eng. Res. Des. 84, 1012–1018.
Ardenghi, J.I., Maciel, M.C., Verdiell, A.B., 2003. A trust-region-approach for solving a parameter estimation problem from the biotechnology area. Appl. Numer. Math. 47, 281–294.
Bellman, R., Jacquez, J., Kalaba, R., Schwimmer, S., 1967. Quasilinearization and the estimation of chemical rate constants from raw kinetic data. Math. Biosci. 1, 71–76.
Chen, X., 2010. Research on DNA genetic algorithms and applications (Ph.D. Dissertation). Zhejiang University (in Chinese).
Chen, X., Wang, N., 2009. A DNA based genetic algorithm for parameter estimation in the hydrogenation reaction. Chem. Eng. J. 150, 527–535.
Chen, X., Wang, N., 2010. Optimization of short-time gasoline blending scheduling problem with a DNA based hybrid genetic algorithm. Chem. Eng. Process. 49, 1076–1083.
Dekkers, A., Aarts, E., 1991. Global optimization and simulated annealing. Math. Program. 50, 367–393.
Esposito, W.R., Floudas, C.A., 2000. Global optimization for the parameter estimation of differential-algebraic systems. Ind. Eng. Chem. Res. 39, 1291–1310.
Holland, J.H., 1975. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor.
Iqbal, J., Guria, C., 2009. Optimization of an operating domestic wastewater treatment plant using elitist non-dominated sorting genetic algorithm. Chem. Eng. Res. Des. 87, 1481–1496.
Izadifar, M., Baik, O.D., 2007. Application of genetic algorithm for determination of the effective diffusivity of andrographolide in plant materials. Biochem. Eng. J. 33, 178–187.
Katare, S., Bhan, A., Caruthers, J.M., Delgass, W.N., Venkatasubramanian, V., 2004. A hybrid genetic algorithm for efficient parameter estimation of large kinetic models. Comput. Chem. Eng. 28, 2569–2581.
Kursawe, F., 1991. A variant of evolution strategies for vector optimization. In: Schwefel, H.-P., Männer, R. (Eds.), Parallel Problem Solving from Nature. Springer, Berlin/Heidelberg, pp. 193–197.
Leineweber, D.B., Bauer, I., Bock, H.G., Schloder, J.P., 2003. An efficient multiple shooting based reduced SQP strategy for large-scale dynamic process optimization. Part 1: theoretical aspects. Comput. Chem. Eng. 27, 157–166.
Li, Y., Fang, C., Qi, O.Y., 2004. Genetic algorithm in DNA computing: a solution to the maximal clique problem. Chin. Sci. Bull. 49, 967–971.
Lin, Y., Stadtherr, M.A., 2006. Deterministic global optimization for parameter estimation of dynamic systems. Ind. Eng. Chem. Res. 45, 8438–8448.
Linga, P., Al-Saifi, N., Englezos, P., 2006. Comparison of the Luus-Jaakola optimization and Gauss-Newton methods for parameter estimation in ordinary differential equation models. Ind. Eng. Chem. Res. 45, 4716–4725.
Luus, R., 1998. Parameter estimation of Lotka–Volterra problem by direct search optimization. Hung. J. Ind. Chem. 26, 287–292.
Mansoornejad, B., Mostoufi, N., Jalali-Farahani, F., 2008. A hybrid GA-SQP optimization technique for determination of kinetic parameters of hydrogenation reactions. Comput. Chem. Eng. 32, 1447–1455.
Moon, J., Linninger, A.A., 2009. A hybrid sequential niche algorithm for optimal engineering design with solution multiplicity. Comput. Chem. Eng. 33, 1261–1271.
Moon, J., Kulkarni, K., Zhang, L., Linninger, A.A., 2008. Parallel hybrid algorithm for process flexibility analysis. Ind. Eng. Chem. Res. 47, 8324–8336.
Neuhauser, C., Krone, S.M., 1997. The genealogy of samples in models with selection. Genetics 145, 519–534.
Nougues, J.M., Grau, M.D., Puigjaner, L., 2002. Parameter estimation with genetic algorithm in control of fed-batch reactors. Chem. Eng. Process. 41, 303–309.
Tao, J.L., Wang, N., 2007. DNA computing based RNA genetic algorithm with applications in parameter estimation of chemical engineering processes. Comput. Chem. Eng. 31, 1602–1618.
Tao, J.L., Wang, N., 2008. DNA double helix based hybrid genetic algorithm for the gasoline blending recipe optimization problem. Chem. Eng. Technol. 31, 440–451.
Tjoa, I.B., Biegler, L.T., 1991. Simultaneous solution and optimization strategies for parameter estimation of differential-algebraic equation systems. Ind. Eng. Chem. Res. 30, 376–385.
Tveito, A., Langtangen, H.P., Nielsen, B.F., Cai, X., 2010. Parameter estimation and inverse problems. Elem. Sci. Comput. 7, 411–421.
Wang, R.-L., Okazaki, K., 2007. An improved genetic algorithm with conditional genetic operators and its application to set-covering problem. Soft Comput. 11, 687–694.
Wang, K., Wang, N., 2010. A novel RNA genetic algorithm for parameter estimation of dynamic systems. Chem. Eng. Res. Des. 88, 1485–1493.
Wang, K., Wang, N., 2011. A protein inspired RNA genetic algorithm for parameter estimation in hydrocracking of heavy oil. Chem. Eng. J. 167, 228–239.
Zhang, L., Wang, N., 2012. A modified DNA genetic algorithm for parameter estimation of the 2-Chlorophenol oxidation in supercritical water. Appl. Math. Modell. http://dx.doi.org/10.1016/j.apm.2012.03.046.
Zhao, J., Wang, N., 2011. A bio-inspired algorithm based on membrane computing and its application to gasoline blending scheduling. Comput. Chem. Eng. 35, 272–283.