Hybrid enhanced continuous tabu search and genetic algorithm for parameter estimation in colored noise environments


Expert Systems with Applications 38 (2011) 3909–3917


Barathram Ramkumar (a), Marco P. Schoen (b,*), Feng Lin (c)

(a) Idaho State University, Measurement and Controls Engineering Research Center (MCERC), College of Engineering, Campus Box 8060, Pocatello, Idaho 83209, USA
(b) Idaho State University, Department of Mechanical Engineering, College of Engineering, 921 South 8th Ave., Stop 8060, Pocatello, Idaho 83209, USA
(c) Institute of Engineering Thermo-Physics, Chinese Academy of Sciences, Beijing 100190, PR China

Keywords: Tabu search; Genetic algorithm; Parameter estimation

Abstract

Parameter estimation is an important concept in engineering, where a mathematical model of a system is identified with the help of input and output signals. Classical parameter estimation algorithms such as Least Squares (LS), Recursive Least Squares (RLS), Least Mean Squares (LMS), and Generalized Least Squares (GLS) give an unbiased estimate of the parameters when the system noise is white. This property is lost when the system noise is colored, which is generally the case in many practical situations. In order to overcome the bias problem associated with a colored noise environment, one can use a whitening filter. The cost function of the estimation problem in the case of a colored noise environment becomes multimodal when the signal to noise ratio is high, hence the motivation to use an intelligent optimization technique for finding the global minimum of the parameter estimation problem. A new hybrid algorithm combining intelligent optimization techniques, i.e. enhanced continuous tabu search (ECTS) and an elitism-based genetic algorithm (GA), is proposed and applied to the parameter estimation problem. In this work, the ECTS is used to define smaller search spaces, which are investigated in a second stage by a GA to find the respective local minima. Simulation results show that the parameters estimated using the proposed algorithm are unbiased in the presence of colored noise. In addition, the hybrid algorithm is also tested with known multimodal benchmark problems. © 2010 Elsevier Ltd. All rights reserved.

* Corresponding author. E-mail addresses: [email protected] (B. Ramkumar), [email protected] (M.P. Schoen), [email protected] (F. Lin). doi:10.1016/j.eswa.2010.09.052

1. Introduction

Parameter estimation is an important concept in Aerospace Engineering and deals with determining the mathematical model parameters of an assumed model structure that simulates the actual system. The computation of the parameters is based on a given set of input and output signals. The most common parameter estimation algorithms used for identifying linear systems are Least Squares (LS), Recursive Least Squares (RLS), Generalized Least Squares (GLS), and Least Mean Squares (LMS) (Juan, 1994). All of the above algorithms estimate the system parameters by minimizing a cost function, which is the square of the error between the actual output and the estimated output. These classical algorithms yield an unbiased estimate of the parameters when the system noise is white. This property is lost when the system noise is not white, which is commonly the case (Hisa, 1977). If the system noise is colored, one can use a filter called a whitening filter (Hisa, 1977). The unbiased parameters are estimated by minimizing the cost function, which is the square of the error between the actual output

and the estimated output. The cost function in the case of a colored noise process becomes multimodal if the signal to noise ratio is high (Söderström, 1974); hence some effective intelligent optimization technique is required to locate the global minimum. In this paper a hybrid intelligent optimization algorithm is proposed, which is a combination of a genetic algorithm (GA) and tabu search (TS). Tabu search, developed by Glover, is basically a set of concepts (Glover, 1989, 1990) used to solve combinatorial optimization problems. These concepts were extended to continuous optimization problems as Continuous tabu search (CTS), introduced by Siarry and Berthiau (1997). Enhanced continuous tabu search (ECTS) is an algorithm that uses advanced concepts of tabu search, such as diversification and intensification, for optimizing functions of continuous variables. ECTS was introduced by Chelouah and Siarry (2000). Another intelligent-systems-based approach to optimization is the genetic algorithm (GA), originally proposed to solve combinatorial optimization problems. The concepts of GAs were extended by Chelouah and Siarry (2000) to continuous optimization problems as Continuous genetic algorithms (CGA). TS and GA have some inherent advantages and disadvantages. TS is capable of searching wide regions of the search space but is not


guaranteed to give optimum solutions; however, experimental results show that it does find good near-optimum solutions (Goldberg, 1989). GAs are inherently slow but give better convergence to the optimum solution. The performance of a GA can be enhanced by providing some additional information about the search space to the algorithm (Zdansky & Pozivil, 2002). The disadvantages of TS and GA can be mitigated by combining them; such attempts have been made for combinatorial optimization problems (Guo, Chen, Wang, & Zhao, 2003; Paladugu, Schoen, & Williams, 2003). The novel hybrid algorithm presented in this paper combines ECTS and CGA for solving continuous optimization problems. The hybrid algorithm is applied to the parameter estimation problem in the presence of colored noise to address the bias issues discussed above. The hybrid algorithm is also tested with standard benchmark problems taken from Siarry and Berthiau (1997) and with the parameter estimation problem in the presence of a white noise process. The paper is organized as follows: Section 2 presents the parameter estimation problem and the bias problem associated with the classical LS method in the presence of colored noise. Section 3 presents the proposed hybrid algorithm, followed by Section 4, which presents the simulation results. Finally, Section 5 provides the conclusion and recommendations for future work.

2. Parameter estimation problem

Parameter estimation is an important concept in engineering and deals with determining the mathematical model parameters of a system given a set of input and output signals. In this section, the mathematical formulation of the parameter estimation problem in the discrete domain is first discussed, followed by a review of the bias problem associated with the standard Least Squares (LS) algorithm. Techniques to overcome the bias problem are shown, which is the motivation for applying and developing a synergy of intelligent optimization techniques.

2.1. System representation in the presence of noise

The stochastic system considered in this section is shown in Fig. 2.1 in block diagram form, where u(k) is the input sequence, w(k) is the system output, v(k) is the system noise, and B(z^{-1}) and A(z^{-1}) are the numerator and denominator polynomials of the system transfer function, respectively. From Fig. 2.1, the system is given as

A(z^{-1}) w(k) = B(z^{-1}) u(k),   (2.1.1)

y(k) = w(k) + v(k),   (2.1.2)

where z^{-1} is the backward shift operator and

A(z^{-1}) = 1 + a_1 z^{-1} + ... + a_n z^{-n},
B(z^{-1}) = b_0 + b_1 z^{-1} + ... + b_n z^{-n}.

In addition, the system is assumed to be linear and stable, with a known order n, and the input is persistently exciting of order 2n + 1. From Eqs. (2.1.1) and (2.1.2), we have

A(z^{-1}) y(k) = B(z^{-1}) u(k) + A(z^{-1}) v(k).   (2.1.3)

Fig. 2.1. Stochastic system model.

Defining the residuals as

e(k) = A(z^{-1}) v(k),   (2.1.4)

one can write

A(z^{-1}) y(k) = B(z^{-1}) u(k) + e(k).   (2.1.5)

For estimating the parameters of the polynomials A and B using Least Squares (LS) techniques, e(k) is assumed to be a white noise process over a sufficiently large spectrum. This assumption is rarely met in practical applications. In the next subsection the bias problem associated with Least Squares techniques is discussed, i.e. when e(k) is not a white noise process.

2.2. Bias problem associated with Least Squares (LS) techniques

The LS solution of the system represented by Eq. (2.1.5) is

\hat{\theta} = (X^T X)^{-1} X^T y,   (2.2.1)

where the information matrix is composed as follows:

X = [ -y(n)       ...  -y(1)    u(n+1)  ...  u(1)
      -y(n+1)     ...  -y(2)    u(n+2)  ...  u(2)
       ...               ...     ...          ...
      -y(n+N-1)   ...  -y(N)    u(n+N)  ...  u(N) ],

and the output vector is given by y = [y(n+1), y(n+2), ..., y(n+N)]^T. The parameter vector is defined as \theta = [a_1, ..., a_n, b_0, ..., b_n]^T. One can express the expectation of the estimate as E[\hat{\theta}] = \theta + E[(X^T X)^{-1} X^T e], where e = [e(n+1), e(n+2), ..., e(n+N)]^T. The estimate \hat{\theta} is biased if

E[X^T e] ≠ 0.   (2.2.2)

The above condition fails to hold, i.e. the estimate is unbiased, only if the noise process v(k) has its characteristics defined by the following difference equation:

v(k) + \sum_{i=1}^{n} a_i v(k-i) = e(k),   (2.2.3)

or A(z^{-1}) v(k) = e(k), where e(k) is a white process. From Eqs. (2.2.3) and (2.1.4) it follows that LS gives an unbiased estimate of the parameters only under this special condition, i.e. when e(k) is a white process (Juan, 1994).

2.3. Parameter estimation when e(k) is not white

In order to overcome the bias issue for a nonwhite process, one can use a whitening filter, given as follows:

C(q^{-1}) e(k) = ε(k),   (2.3.1)

where C(q^{-1}) = 1 + c_1 q^{-1} + c_2 q^{-2} + ... + c_n q^{-n}. This implies that e(k) is a filtered white noise sequence with a transfer function defined by

e(k)/ε(k) = 1/C(z^{-1}).   (2.3.2)

From Eqs. (2.3.2) and (2.1.4) one can formulate the transfer function between the white noise process and the system noise process as

v(z)/ε(z) = 1/(A(z^{-1}) C(z^{-1})).   (2.3.3)

The system along with the whitening filter is depicted in Fig. 2.2 and is described by the following relationship:

C(z^{-1}) A(z^{-1}) y(k) = C(z^{-1}) B(z^{-1}) u(k) + ε(k).   (2.3.4)
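The bias mechanism above can be checked numerically. The sketch below is our illustration, not code from the paper: it simulates a hypothetical second-order system, once with a white equation error and once with a colored one obtained through 1/C(z^{-1}) using C(z^{-1}) = 1 + 0.8 z^{-1} (the noise-shaping polynomial used later in the paper's simulations), and forms the LS estimate of Eq. (2.2.1) in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40000
TRUE = np.array([0.5, -0.5, 1.0])   # hypothetical [a1, a2, b0] for the example

def simulate(e):
    # y(k) = 0.5 y(k-1) - 0.5 y(k-2) + u(k) + e(k)
    u = rng.standard_normal(N)
    y = np.zeros(N)
    for k in range(2, N):
        y[k] = 0.5 * y[k - 1] - 0.5 * y[k - 2] + u[k] + e[k]
    return u, y

def ls_estimate(u, y):
    # information matrix and output vector as in Eq. (2.2.1)
    X = np.column_stack([y[1:-1], y[:-2], u[2:]])
    theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return theta

eps = 0.5 * rng.standard_normal(N)          # white noise sequence
theta_white = ls_estimate(*simulate(eps))   # white e(k): LS is consistent

e = np.zeros(N)                             # colored: C(z^-1) e(k) = eps(k)
for k in range(1, N):
    e[k] = eps[k] - 0.8 * e[k - 1]
theta_colored = ls_estimate(*simulate(e))   # colored e(k): LS is biased

print("white  :", theta_white)
print("colored:", theta_colored)
```

With the white equation error the estimate lands close to the true parameters; with the colored error the a-coefficients drift away noticeably, which is exactly the bias of Eq. (2.2.2).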


Fig. 2.2. System with whitening filter.

Since the nonwhite noise process in the above equation is filtered, consistent parameters can be estimated by minimizing the cost function

J = \sum_k ε^2(k) = \sum_k [C(z^{-1}) A(z^{-1}) y(k) - C(z^{-1}) B(z^{-1}) u(k)]^2.   (2.3.5)

2.4. Need for intelligent optimization techniques

One needs to minimize the cost function J as described by Eq. (2.3.5) if the error residual is not a white noise sequence. It is shown in Hisa (1977) that the cost function J becomes multimodal if the signal to noise ratio is high, and hence one needs some optimization technique to find the global minimum. In this paper we use a novel hybrid optimization technique that combines a continuous-number genetic algorithm and a Tabu search method.

3. Hybrid optimization algorithm

In this section the proposed optimization algorithm is explained. We first discuss an optimization algorithm called Tabu search, then an advanced version of TS called enhanced continuous tabu search (ECTS). We then elaborate on a classical intelligent optimization algorithm, the genetic algorithm (GA). The disadvantages of GA and TS are outlined, and the proposed hybrid algorithm combining ECTS and GA is introduced with the goal of overcoming the individual disadvantages of each separate algorithm.

3.1. Tabu search

Definitions and concepts used in Tabu search are explained first before introducing the ECTS algorithm. The set of all possible solutions that can be visited during the search operation is called the search space. For functions of continuous variables, the dimension of the search space is equal to the number of variables in the cost function. One denotes the search space by X and any element in the search space as x^i. For functions of n variables, the search space is R^n and any element x^i = (x_1, x_2, ..., x_n). A cost function or objective function is denoted as C(x); for a function of n variables, C(x) is a mapping from R^n to R^1. The objective for the proposed algorithm is min C(x) : x ∈ X. For each point in the search space, one can create a set of neighbors, which form a subspace of the search space, the neighborhood space, denoted as S, S ⊂ X. Each element in S is denoted as s^i = (s_1, s_2, ..., s_n). A move is defined within the algorithm as "jumping" from one neighbor to another in the search space. The key ingredient in Tabu search is the Tabu list. In order to prevent the repetition of moves in the search space (cycling), one uses an array called the Tabu list (TL). Here, one records the most recent moves in the TL so that they can be avoided in the future. The moves recorded in the TL are called 'Tabu', and the number of iterations for which a particular move remains Tabu is known as the Tabu Tenure. The Tabu Tenure depends on the length of the Tabu list (N1); the larger the number N1, the longer the Tabu Tenure. N1 plays an important role in the performance of the algorithm. Each element in the TL is denoted by t^i = (t_1, t_2, ..., t_n).

3.2. Enhanced continuous tabu search

As mentioned above, ECTS uses advanced concepts of tabu search, namely diversification and intensification. The ECTS algorithm consists of four stages: (1) initialization of parameters, (2) diversification, (3) selection of the most promising area, and (4) intensification. In the following, each stage is explained in more detail.

3.3. Initialization of parameters

The parameters to be initialized for ECTS are: the length of the Tabu list (N1), the length of the promising list (N2), the radius of the neighborhood (r1), the radius of the Tabu balls (r2), the radius of the promising balls (r3), and a random point in the search space. The characterization of the search space is shown in Fig. 3.1, where x is the current solution and n1, n2, and n3 are the neighbors of the current solution, chosen from concentric circles around x. Promising balls are the circles centered at the elements of the promising list, i.e. p1, p2, and p3. Similarly, Tabu balls are the circles centered at the elements of the Tabu list, i.e. t1, t2, and t3.

3.4. Diversification

In this stage the algorithm looks for the most promising areas in the search space. The step-by-step procedure for this stage is given as follows:

Step 1: Generation of homogeneous neighbors. For any point x^i in the search space, one generates N neighbors s^j such that

(j - 1) r_1 ≤ ||x^i - s^j|| ≤ j r_1,

where r_1 is the initial radius of the neighborhood and

||x^i - s^j|| = sqrt( (x^i_1 - s^j_1)^2 + (x^i_2 - s^j_2)^2 + ... + (x^i_n - s^j_n)^2 ).

The above method partitions the search space into concentric spheres.
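The shell-generation rule of Step 1, together with the distance tests against the Tabu and promising balls used in the following steps, can be sketched as below. These helpers are our own illustration (names and signatures are not from the paper):

```python
import numpy as np

def generate_neighbors(x, n_neighbors, r1, rng):
    """Draw one neighbor s^j in each shell (j-1)*r1 <= ||x - s^j|| <= j*r1."""
    x = np.asarray(x, dtype=float)
    neighbors = []
    for j in range(1, n_neighbors + 1):
        direction = rng.standard_normal(x.size)
        direction /= np.linalg.norm(direction)      # random unit direction
        radius = rng.uniform((j - 1) * r1, j * r1)  # distance inside shell j
        neighbors.append(x + radius * direction)
    return neighbors

def outside_balls(candidates, centers, radius):
    """Keep only candidates lying outside every ball of the given radius
    (the Tabu-ball and promising-ball rejection tests)."""
    kept = []
    for s in candidates:
        if all(np.linalg.norm(np.asarray(c) - s) > radius for c in centers):
            kept.append(s)
    return kept
```

A diversification move would then chain the two: generate the neighbors of the current point, discard those inside Tabu balls (radius r2) or promising balls (radius r3), and evaluate the cost of the survivors.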

Fig. 3.1. Characterization of search space for ECTS.


Step 2: Comparison with the Tabu list. Each neighbor s^j (j = 1, 2, ..., N) generated in the previous step is compared with the elements of the Tabu list t^i (i = 1, 2, ..., N1), and if ||t^i - s^j|| ≤ r_2 for some i = 1, 2, ..., N1, where

||t^i - s^j|| = sqrt( (t^i_1 - s^j_1)^2 + (t^i_2 - s^j_2)^2 + ... + (t^i_n - s^j_n)^2 ),

then the corresponding s^j is rejected as Tabu.

Step 3: Comparison with the promising list. Let N' be the number of neighbors which are not in the Tabu list, denoted h^i (i = 1, 2, ..., N'). Each of these elements is compared with the elements of the promising list, denoted p^j (j = 1, 2, ..., N2), and if ||p^i - h^j|| ≤ r_3 for some i = 1, ..., N2, where

||p^i - h^j|| = sqrt( (p^i_1 - h^j_1)^2 + (p^i_2 - h^j_2)^2 + ... + (p^i_n - h^j_n)^2 ),

then the corresponding h^j is rejected.

Step 4: Finding the best neighbor. From the neighbors which are not in the Tabu list, one finds the best neighbor, i.e. the one which minimizes the cost function.

Step 5: Updating the Tabu list. The best neighbor obtained is inserted into the Tabu list in a First-In-First-Out (FIFO) fashion.

Step 6: Updating the promising list. If the best neighbor obtained is the overall best, it is inserted into the promising list in a FIFO fashion.

Step 7: Convergence determination. If no improvement occurs for a certain number of moves, the diversification stage is terminated.

3.5. Intensification

The most promising area from the promising list is selected, and the above steps are repeated in order to intensify the search in the most promising area.

3.6. Genetic algorithm

GAs are evolutionary algorithms that simulate Darwin's survival-of-the-fittest principle. The initial population of candidate solutions is randomly generated and represented as chromosomes in the form of genes. These chromosomes are evaluated based on an objective function and ranked in terms of their fitness. A subset of the next generation of candidate solutions is selected based on their performance with the objective function. The remaining set of the new generation is generated by a mating process, where the best performing candidate solutions comprise the subset of the parents. In addition to the mating process, a mutation rate is also embedded in the generation of the new population. The mutation rate enables the search for the optimum solution to overcome local minima and, provided enough randomness is included in the elitism-based GA, to locate the global minimum. This process of selection, mating, and mutation is repeated a number of times until the best performing candidate solution converges to some stationary value. The continuous genetic algorithm (CGA) (Glover, 1990) is one that is used to optimize continuous functions.
A CGA is similar to a GA except that the parameters are coded as continuous numbers, whereas in a GA the parameters are coded in binary format. The employed CGA uses a linear ranking model for chromosome selection, where the probability density function is generated based on the cost of the individual candidate chromosomes. In addition, elitism and a steady-state population size are utilized for the proposed algorithm. The step-by-step procedure for the CGA used is explained below:

Step 1: Initialization of parameters. Define the CGA parameters: the initial population (Ipop), the population at the end of the first generation (pop), the number of chromosomes kept for mating (keep), and the mutation rate (Mut).

Step 2: Generation of a homogeneous population. Generate N elements from the most promising area, where N is the initial population (Ipop). The N elements in the promising area are represented as p^j (j = 1, 2, ..., N), and for a cost function with n variables p^j = (p^j_1, p^j_2, ..., p^j_n). In order to have a homogeneous initial population, the elements are chosen such that ||p^j - p^i|| ≥ η for i ≠ j, where η is a parameter whose value is chosen based on the volume of the search space.

Step 3: Evaluation of chromosomes. Each element p^i chosen from the search space is called a chromosome. All chromosomes are evaluated and ranked according to their contribution to the cost function. The chromosomes are then arranged in decreasing order with respect to their ranks.

Step 4: Genetic operation selection. The chromosomes chosen for mating are selected probabilistically, where the probability density function is generated based on the cost of the individual chromosomes. Chromosomes with a lower contribution to the cost function are more likely to be selected. The number of crossing operations φ is obtained using the formula

φ = round((pop - keep)/2).

The probability density function used for this work is defined by

P_N(n) = n / φ^2.   (3.1.1)

Crossing: For a cost function of n variables, each chromosome p^i contains n elements, i.e. (p^i_1, p^i_2, ..., p^i_n). The element to be crossed between two chromosomes p^i and p^j is chosen randomly (uniformly). Once the element c is chosen, the crossover is performed according to the formulas

p^i_c ← p^i_c - λ (p^i_c - p^j_c),   (3.1.2)

p^j_c ← p^j_c + λ (p^i_c - p^j_c),   (3.1.3)

where p^i_c and p^j_c are the elements chosen for crossover in order to generate new chromosomes and λ is a random number between 0 and 1. The chromosomes performing less well based on the cost are then replaced by the newly generated chromosomes, such that the steady-state population size is maintained.

Mutation: This operation is used to explore isolated feasibility areas within the search space. In mutation, a chromosome is chosen randomly and one of its elements is changed randomly. This is a rare operation; the probability of mutation depends on the mutation rate, which is kept constant for the entire duration of the GA.

Step 5: Termination. The above steps are repeated until the termination criterion is met. A maximum iteration number can be set to limit the number of iterations, in addition to an accuracy criterion for termination.

The described hybrid algorithm is schematically given as a flow chart in Fig. 3.2.

4. Simulation results

The proposed hybrid algorithm is tested using standard benchmark problems. In addition, the proposed algorithm is used for


parameter estimation problems operating in various noise environments.

Fig. 3.2. Flow diagram of proposed hybrid algorithm.

4.1. Benchmark problems

The benchmark problems considered for the simulations are taken from Chelouah and Siarry (2000) and are the Rosenbrock two-variables problem, the Goldstein–Price two-variables problem, and the De Joung three-variables problem.

Rosenbrock two variables (RC):

R(x) = 100 (x_1^2 - x_2)^2 + (x_1 - 1)^2.

The search domain for this problem is given by -5 ≤ x_j ≤ 10 for j = 1, 2, with the lowest minimum at x* = (1, 1), R(x*) = 0. The function is depicted in Fig. 4.1.

Fig. 4.1. Rosenbrock two variables cost surface.

Goldstein–Price two variables (GP):

G(x) = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)]
       × [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)].

The search domain for this problem is confined to -2 ≤ x_j ≤ 2 for j = 1, 2. The GP problem has four local minima, with the lowest minimum at x* = (0, -1), G(x*) = 3. The Goldstein–Price cost surface is depicted in Fig. 4.2.

Fig. 4.2. Goldstein–Price two variables cost surface.

De Joung three variables (DJ):

S(x) = x_1^2 + x_2^2 + x_3^2.
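As an illustration of the CGA described in Section 3.6, the following sketch applies a small elitist, steady-state CGA with the crossover rule of Eqs. (3.1.2) and (3.1.3) to the DJ function. This is our own example; the parameter values below are chosen for the demonstration and are not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def dj(x):                           # De Joung sphere, minimum S(x*) = 0 at the origin
    return float(np.sum(x * x))

def cga(pop_size=30, keep=10, mut_rate=0.04, gens=100, lo=-5.0, hi=5.0, dim=3):
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(gens):
        pop = pop[np.argsort([dj(p) for p in pop])]   # rank; row 0 is the elite
        phi = (pop_size - keep) // 2                  # number of crossing operations
        ranks = np.arange(keep, 0, -1, dtype=float)   # linear ranking of parents
        prob = ranks / ranks.sum()
        children = []
        for _ in range(phi):
            i, j = rng.choice(keep, size=2, replace=False, p=prob)
            lam, c = rng.uniform(), rng.integers(dim) # element c is crossed
            a, b = pop[i].copy(), pop[j].copy()
            a[c] = pop[i][c] - lam * (pop[i][c] - pop[j][c])   # Eq. (3.1.2)
            b[c] = pop[j][c] + lam * (pop[i][c] - pop[j][c])   # Eq. (3.1.3)
            children += [a, b]
        pop[keep:keep + len(children)] = children     # steady-state population size
        for m in range(1, pop_size):                  # rare mutation, elite excluded
            if rng.uniform() < mut_rate:
                pop[m, rng.integers(dim)] = rng.uniform(lo, hi)
    return min(pop, key=dj)

best = cga()
print(best, dj(best))
```

In the full hybrid algorithm this GA stage would be seeded from the most promising area found by ECTS rather than from the whole domain; here the uniform initialization stands in for that step.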


The search domain is defined by -5 ≤ x_j ≤ 5 for j = 1, 2, 3, with the lowest minimum at x* = (0, 0, 0), S(x*) = 0. The parameters chosen for the TS are N1 = 5, N2 = 5, r1 = 0.25, r2 = 0.125, and r3 = 0.06; they were chosen based on the size of the search domain and the number of variables in the cost function. The parameters chosen for the GA are as follows: initial population = 30, population at the end of the first generation = 20, number of chromosomes kept for mating = 10, and mutation rate = 0.02. The number of iterations for TS and GA are 100 and 100, respectively. The experiment is repeated twenty times, and the mean and standard deviation of the estimated parameters are summarized in Table 4.1. From Table 4.1 it can be seen that the hybrid algorithm attains better accuracy at the optimum solution than ECTS and GA alone. The simulation results show that the hybrid algorithm overcomes the disadvantages of GA and ECTS. The search pattern and convergence of the algorithm are shown in Figs. 4.3 and 4.4. In Fig. 4.3 the promising areas are represented by circles, from which the most promising area is found. Once the most promising area is found, a GA is employed for searching the promising area. The convergence of the GA to the optimum solution is shown in Fig. 4.4.

Fig. 4.3. Identifying the most promising area in the search space.

Fig. 4.4. Convergence to the optimum solution.

4.2. Parameter estimation problem

To show the effectiveness of the proposed algorithm, the parameter estimation problem is chosen, which has rather broad applications in a number of disciplines, in particular aerospace systems. Two systems are considered for the simulations:

y(k) = 0.5 y(k-1) - 0.5 y(k-2) + u(k) + e(k)   (4.1)

and

y(k) = 1.5 y(k-1) - 0.7 y(k-2) + 1.0 u(k-1) + 0.5 u(k-2) + e(k).   (4.2)

The parameters of Eq. (4.1) are [a_1, a_2, b_0] = [0.5, -0.5, 1] and the parameters of Eq. (4.2) are [a_1, a_2, b_0, b_1] = [1.5, -0.7, 1.0, 0.5]. The algorithm is first tested with white noise processes of different variances. The experiment is repeated twenty times

Table 4.1. Benchmark problems using the hybrid algorithm (mean and standard deviation over twenty runs).

Function  Variable  True value  ECTS mean (σ)        GA mean (σ)            Hybrid mean (σ)
RC        x1        1           0.89 (0.082)         0.933 (0.041)          1.0016 (0.02)
RC        x2        1           0.802 (0.149)        0.9254 (0.023)         0.9998 (0.051)
GP        x1        0           0.0085 (0.0235)      0.0054 (0.0031)        6.64e-004 (8.851e-004)
GP        x2        -1          -1.006 (0.0712)      -1.0032 (8.4e-004)     -0.9995 (6.13e-004)
DJ        x1        0           9.2e-004 (0.0052)    8.4e-004 (0.0032)      1.021e-005 (4.402e-010)
DJ        x2        0           0.0561 (0.0055)      2.4e-004 (0.0074)      1.787e-005 (3.18e-009)
DJ        x3        0           0.018 (0.0073)       4.2e-004 (0.0033)      8.3e-006 (2.87e-010)


and the mean and standard deviation of the estimated parameters are summarized in Tables 4.2 and 4.3. The parameters chosen for the tabu search are N1 = 5, N2 = 10, r1 = 0.25, r2 = 0.125, and r3 = 0.06. The parameters chosen for the GA are: initial population = 96, population at the end of the first generation = 48, number of chromosomes kept for mating = 24, and mutation rate = 0.04. The number of iterations for the TS and GA portions are 100 and 500, respectively. The system was excited with a random noise signal of standard deviation σ = 0.03. Tables 4.2 and 4.3 indicate that the hybrid algorithm gives a good estimate of the parameters when the system noise is white and limited to a standard deviation of 0.1.

Table 4.2. Parameters estimated using the hybrid algorithm for system (4.1) in the presence of white noise.

Noise level   â1 (a1 = 0.5)         â2 (a2 = -0.5)         b̂0 (b0 = 1.0)
              mean     variance     mean      variance     mean     variance
σ = 0.001     0.5001   6.8e-004     -0.5012   7.2e-004     1.0002   2.52e-004
σ = 0.01      0.5021   0.0012       -0.5014   3.2e-004     1.007    3.54e-004
σ = 0.05      0.482    3.6e-004     -0.487    0.0043       0.996    0.0024
σ = 0.08      0.4532   0.0035       -0.4498   0.0072       0.9945   0.0197
σ = 0.1       0.4322   0.0023       -0.4378   0.0141       0.984    0.0324

The parameters of Eqs. (4.1) and (4.2) are also estimated in the presence of colored noise, i.e. when e(k) is not white. Colored noise of different variances is generated by passing white noise of different variances through a filter with transfer function 1/C(z^{-1}). The polynomial C(z^{-1}) considered for the simulations is C(z^{-1}) = 1 + 0.8 z^{-1}. The spectra of the white noise used and of the colored noise generated by passing the white noise through the filter are shown in Figs. 4.5 and 4.6, respectively. It can be seen from these figures that the white noise has a flat spectrum over all frequencies, whereas the colored noise does not have this property. The parameters for TS and GA are kept the same as for the previous simulation. The system was excited with a random noise signal having a standard deviation of σ = 0.03. The experiment is repeated twenty times and the mean and standard deviation of the estimated parameters are summarized in Tables 4.4 and 4.5, which show that the hybrid algorithm yields a good estimate of the parameters for colored noise up to a standard deviation of 0.05. In order to make a comparison, the parameters of Eqs. (4.1) and (4.2) are also estimated using the Recursive Least Squares (RLS) algorithm. The number of iterations for the RLS was chosen to be 1000, and the system was excited with a noise signal possessing a standard deviation of σ = 0.3. Colored noise of different variances is again generated by passing white noise through the filter 1/C(z^{-1}), where C(z^{-1}) = 1 + 0.8 z^{-1}. The experiment is repeated twenty times and the mean values of the estimated parameters are summarized in Tables 4.6 and 4.7. From the simulation results it can be seen that under similar conditions the hybrid algorithm yields an unbiased and better estimate of the parameters compared to the standard Least Squares techniques.
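The remedy of Section 2.3 can also be checked directly in the special case where C(z^{-1}) is known: prefiltering both u(k) and y(k) with C(z^{-1}) = 1 + 0.8 z^{-1} restores consistency of plain LS, per Eq. (2.3.4). The sketch below is our illustration for system (4.1), not the paper's code; the paper's approach instead minimizes Eq. (2.3.5), which lets the optimizer handle an unknown C.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 40000
TRUE = np.array([0.5, -0.5, 1.0])           # [a1, a2, b0] of system (4.1)

u = rng.standard_normal(N)
eps = 0.5 * rng.standard_normal(N)
e = np.zeros(N)
for k in range(1, N):
    e[k] = eps[k] - 0.8 * e[k - 1]          # C(z^-1) e(k) = eps(k), C = 1 + 0.8 z^-1

y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.5 * y[k - 1] - 0.5 * y[k - 2] + u[k] + e[k]

def ls(u, y):
    X = np.column_stack([y[1:-1], y[:-2], u[2:]])
    theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return theta

# prefilter input and output with C(z^-1): x_f(k) = x(k) + 0.8 x(k-1)
yf = y + 0.8 * np.concatenate(([0.0], y[:-1]))
uf = u + 0.8 * np.concatenate(([0.0], u[:-1]))

theta_biased = ls(u, y)     # colored residual: biased estimate
theta_white = ls(uf, yf)    # whitened residual: consistent estimate
print("biased  :", theta_biased)
print("whitened:", theta_white)
```

Because A(z^{-1}) y_f(k) = B(z^{-1}) u_f(k) + ε(k) holds for the filtered signals with white ε(k), LS on (u_f, y_f) recovers the parameters, whereas LS on the raw signals does not.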

Table 4.3. Parameters estimated using the hybrid algorithm for system (4.2) in the presence of white noise.

Noise level   â1 (a1 = 1.5)        â2 (a2 = -0.7)         b̂0 (b0 = 1.0)       b̂1 (b1 = 0.5)
              mean     variance    mean      variance     mean     variance    mean     variance
σ = 0.001     1.5022   0.0327      -0.702    0.0310       1.0018   0.0057      0.4908   0.096
σ = 0.01      1.5042   0.026       -0.7018   0.0235       1.0072   0.0778      0.477    0.1068
σ = 0.05      1.496    0.0329      -0.6971   0.0326       1.0037   0.048       0.4918   0.063
σ = 0.08      1.457    0.0091      -0.659    0.0085       0.9744   0.049       0.5499   0.0298
σ = 0.1       1.399    0.0329      -0.604    0.0141       0.9744   0.0181      0.6495   0.0528

Fig. 4.5. White noise process of standard deviation 0.1.


Fig. 4.6. Filtered white noise (colored noise).
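The flat-versus-shaped spectrum contrast between the two noise processes can be verified numerically; in this sketch of ours, the lag-1 sample autocorrelation serves as a simple stand-in for inspecting the full spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
white = rng.standard_normal(n)

colored = np.zeros(n)                 # pass white noise through 1/C(z^-1),
for k in range(1, n):                 # with C(z^-1) = 1 + 0.8 z^-1
    colored[k] = white[k] - 0.8 * colored[k - 1]

def lag1(x):
    x = x - x.mean()
    return float(x[1:] @ x[:-1] / (x @ x))

print("white lag-1 autocorr  :", lag1(white))    # near 0: flat spectrum
print("colored lag-1 autocorr:", lag1(colored))  # near -0.8: shaped spectrum
```

A white sequence shows negligible correlation between successive samples, while the filtered sequence inherits the filter pole and is strongly correlated, which is exactly the colored character visible in Fig. 4.6.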

Table 4.4. Parameters estimated using the hybrid algorithm for system (4.1) in the presence of colored noise.

Noise level   â1 (a1 = 0.5)          â2 (a2 = -0.5)          b̂0 (b0 = 1.0)
              mean     variance      mean      variance      mean     variance
σ = 0.001     0.4988   7.8e-004      -0.4994   6.50e-004     1.0012   3.12e-004
σ = 0.01      0.4818   0.0045        -0.4816   0.0036        1.0123   0.0022
σ = 0.02      0.4488   0.0011        -0.4502   8.3e-004      1.0449   6.244e-004
σ = 0.05      0.3238   4.2e-004      -0.3210   4.409e-004    1.1303   3.065e-004

Table 4.5. Parameters estimated using the hybrid algorithm for system (4.2) in the presence of colored noise.

Noise level   â1 (a1 = 1.5)       â2 (a2 = -0.7)        b̂0 (b0 = 1.0)       b̂1 (b1 = 0.5)
              mean     variance   mean      variance    mean     variance    mean     variance
σ = 0.001     1.498    0.0136     -0.699    0.0131      1.0014   0.0104      0.5083   0.0194
σ = 0.01      1.4787   0.0187     -0.6813   0.0017      0.9734   0.0077      0.54     0.035
σ = 0.02      1.422    0.00152    -0.6314   0.0014      0.9333   0.0063      0.654    0.028
σ = 0.05      1.33     0.014      -0.61     0.012       0.8732   0.02        0.66     0.06

Table 4.6. Parameters estimated using LS for Eq. (4.1) in the presence of colored noise (mean values).

Noise level   â1       â2        b̂0
No noise      0.5      -0.5      1.0
σ = 0.001     0.4962   -0.4910   0.9870
σ = 0.01      0.4155   -0.4216   1.0062
σ = 0.02      0.2029   -0.2377   0.9386
σ = 0.05      0.0625   -0.0882   0.9549

Table 4.7. Parameters estimated using LS for Eq. (4.2) in the presence of colored noise (mean values).

Noise level   â1       â2        b̂0       b̂1
No noise      1.5      -0.7      1.0      0.5
σ = 0.001     1.4895   -0.6898   0.9987   0.5031
σ = 0.01      1.3931   -0.5970   1.0550   0.5791
σ = 0.02      0.9961   -0.2213   1.0070   0.9961
σ = 0.05      0.5735   -0.1511   1.0593   1.6593

5. Conclusion

In this work, the bias problem associated with parameter estimation was identified. It was shown that standard algorithms such as Least Squares (LS) fail in the presence of colored noise at high signal to noise ratios. A method to overcome this bias problem was presented, which motivates the need for an intelligent optimization algorithm. The hybrid optimization algorithm proposed in this work employs a tabu search (TS) and a genetic algorithm (GA). The hybrid algorithm was tested with benchmark problems as well as a parameter estimation problem. Simulation results show that the parameters estimated with the proposed method were more accurate and consistent than the parameters computed with the LS algorithm.

References

Chelouah, R., & Siarry, P. (2000). Tabu search applied to global optimization. European Journal of Operational Research, 123, 256–270.

Chelouah, R., & Siarry, P. (2000). A continuous genetic algorithm design for global optimization of multimodal functions. Journal of Heuristics, 6, 191–213.
Glover, F. (1989). Tabu search, part I. ORSA Journal on Computing, 1(3), 190–205.
Glover, F. (1990). Tabu search, part II. ORSA Journal on Computing, 2(1), 4–32.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Addison Wesley Longman, Inc.
Guo, Y., Chen, L.-P., Wang, S., & Zhao, J. (2003). A new simulation optimization system for the parameters of a machine cell simulation model. International Journal of Advanced Manufacturing Technology, 21, 620–626.
Hisa, T. C. (1977). System identification. Lexington Books.
Juan, J.-N. (1994). Applied system identification. Prentice-Hall, Inc.
Paladugu, L., Schoen, M. P., & Williams, G. B. (2003). Intelligent techniques for star-pattern recognition. In IMECE 2003, Washington DC, November 16–21.
Siarry, P., & Berthiau, G. (1997). Fitting of tabu search to optimize functions of continuous variables. International Journal for Numerical Methods in Engineering, 40, 2449–2459.
Söderström, T. (1974). Convergence properties of the generalized least squares identification method. Automatica, 10, 617–626.
Zdansky, M., & Pozivil, J. (2002). Combination genetic/tabu search algorithm for hybrid flow shops optimization. In ALGORITMY (pp. 230–236).