A novel identification method for Wiener systems with limited information

Mathematical and Computer Modelling
Contents lists available at SciVerse ScienceDirect
journal homepage: www.elsevier.com/locate/mcm
Qibing Jin a, Jun Dou a,∗, Feng Ding b, Liting Cao a

a Institute of Automation, Beijing University of Chemical Technology, Beijing 100029, PR China
b School of Communication and Control Engineering, Jiangnan University, Wuxi 214122, PR China

Highlights
• We propose a novel search algorithm to solve the optimization problem.
• The linear part of the Wiener system is identified based on limited information.
• The structure and the parameters of the nonlinearity are identified simultaneously.

Article info

Article history:
Received 2 November 2011
Received in revised form 27 May 2013
Accepted 4 June 2013

Keywords: Identification; Nonlinear systems; Wiener system; PSO–Rosenbrock; Limited information

Abstract: To address the difficulty caused by the unknown structure of the nonlinearity in Wiener system identification, we propose a new approach that identifies the linear part and the nonlinear part of a Wiener system in sequence by using limited information on the nonlinearity. The linear part of the system is identified first, based on limited information about the nonlinearity such as sign information or monotonicity; then we construct the internal signals to identify the structure and the parameters of the nonlinearity. A novel Particle Swarm Optimization–Rosenbrock (PSO–Rosenbrock) algorithm is developed, and numerical simulation shows that the identification procedure proposed in this paper is effective. © 2013 Elsevier Ltd. All rights reserved.

∗ Corresponding author. Tel.: +86 1871 627 9467, +86 1581 108 4844. E-mail addresses: [email protected] (Q. Jin), [email protected] (J. Dou).
0895-7177/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.mcm.2013.06.005

1. Introduction

Many real-life systems and phenomena are nonlinear; their behavior is often approximated by linear models, which are easy to express. Unfortunately, in many cases the linear approximation is valid only over a limited input range. The Wiener model has been used in various applications in nonlinear systems and a lot of work exists [1–7]. In nonlinear system identification, Paduart et al. proposed a method to model nonlinear systems by using polynomial nonlinear state space equations [8], and the method was successfully applied to two physical systems. If the structure of the nonlinearity is known in advance, the identification problem can be converted into a one-dimensional optimization problem and is relatively easy [9]. If the structure of the nonlinearity is unknown, it is difficult to identify the system. Generally, it is assumed that the nonlinearity is expressed by some invertible basis functions [10,11]. In recent years, an identification method based on a monotonicity assumption for Wiener systems was proposed [12]. Another approach for Wiener system identification based on a small amount of prior information was developed [13,14]. Based on the prior information, the linear part of the system can be obtained from the input–output data sets. To get a satisfactory result, not only a suitable approach but also an effective search algorithm is needed to solve the identification problem. Particle swarm optimization (PSO) is an evolutionary computation technique developed by Eberhart and Kennedy [15]. This algorithm is easy to implement and has been successfully used in many fields [16–18]. But in practice, PSO suffers from stagnation and poor convergence, and easily falls into local optima. This paper makes full use of the global search ability of PSO and the local search ability of the Rosenbrock method to present a PSO–Rosenbrock algorithm; the algorithm has the advantages of improving the identification precision and reducing the dependence on the initial values of the parameters.

In Wiener system identification, the input–output data sets do not directly reveal the relationship between the inputs u(k) and the internal signals x(k), or between the internal signals x(k) and the outputs y(k), because the internal signals x(k) are not available. The goal of this paper is to identify the nonlinear system based entirely on the input–output data sets and some necessary information on the nonlinearity. In this paper, three types of information are considered: origin point information, sign information and monotonic information. Combined with the PSO–Rosenbrock algorithm, the linear part of the system can be identified, and then the structure and the parameters of the nonlinearity can be identified simultaneously.

The layout of this paper is as follows. The identification problem and the PSO–Rosenbrock algorithm are introduced in Section 2. Section 3 presents a method to identify the linear part of the system based on origin point information, sign information and monotonic information of the nonlinearity. Section 4 presents the approach to identify the structure and the parameters of the nonlinear part. Finally, some concluding remarks are provided in Section 5.

2. Problem statement and the optimization algorithm

2.1. The identification problem

The Wiener system [13] considered in this paper is shown in Fig. 1, where the unknown linear and nonlinear parts are represented by

G(z) = Σ_{i=0}^∞ h(i) z^{−i} and f(·).  (1)

The linear part of the system is assumed to be stable, so that |h(i)| ≤ M λ^i for some M < ∞ and 0 < λ < 1. The input, internal signal and output at time k = 0, 1, 2, . . . , N are represented by u(k), x(k) and y(k), respectively. The internal signal x(k) and the structure of the nonlinear part are unknown, but the input and the output are bounded. The system must be normalized to guarantee identifiability. It is assumed in this paper that ∥h∥² = Σ_{i=0}^∞ h(i)² = 1, where h = (h(0), h(1), . . .)^T is an infinite-dimensional vector. It is also assumed that the input u(k) is a bounded independent identically distributed random sequence. Obviously, all of the signals of the system are bounded due to the stability property and the bounded input. For a given positive integer n, we define

hn = (h(0), h(1), . . . , h(n − 1))^T.  (2)

Because of the stability assumption and Σ_{i=0}^∞ h(i)² = 1, we have Σ_{i=n}^∞ h(i)² → 0 as n → ∞, which is equivalent to ∥hn∥ → 1 as n → ∞. As a result, only the first n parameters need to be identified, so the system can be represented in the following form:

x(k) = (u(k), u(k − 1), . . . , u(k − n + 1)) hn = φ^T(k) hn,
y(k) = f(x(k)),  k = 1, 2, . . . , N.  (3)

The goal is to identify the parameter vector hn based on the data sets {φ(k), y(k)}, k = 1, 2, . . . , N, and then to identify the nonlinear part f(·). The identification cannot be carried out directly because the structure of the nonlinearity is unknown and the internal signal x(k) is not available. The problem can be solved if some limited information on the nonlinearity is known. From the above analysis we can see that all the information of the system is

(x(1), x(2), . . . , x(N))^T = (φ(1), φ(2), . . . , φ(N))^T h,
(y(1), y(2), . . . , y(N))^T = (f(x(1)), f(x(2)), . . . , f(x(N)))^T,

with unknown internal signal x(k) and f(·).

2.2. PSO–Rosenbrock optimization algorithm

A search algorithm is often used to optimize parameters in the identification procedure, and it has a great impact on the identification result. To improve the identification precision, this paper proposes a PSO–Rosenbrock optimization algorithm. PSO has been applied successfully to many fields as a global search optimization algorithm, while the Rosenbrock method has great local search ability. This paper combines the global search ability of PSO with the local search ability of the Rosenbrock method to present the PSO–Rosenbrock algorithm.


Fig. 1. Wiener system.

2.2.1. Basic particle swarm optimization (BPSO)

The mathematical description of BPSO is as follows. In a D-dimensional search space, each particle is considered a point of the space and has a position and a velocity in each generation. Each particle obtains a fitness value according to the predetermined fitness function F, and the global optimum and individual optimum are obtained by comparing the fitness values. It is assumed that the ith particle's position and velocity in the jth generation are represented by x_{i,j} and v_{i,j} respectively, and p_{g,j} and p_{i,j} represent the global optimum and the individual optimum respectively. In each iteration, the velocity and position are updated according to the following formulas:

v_{i,j+1} = ω · v_{i,j} + c1 · rand1 · (p_{i,j} − x_{i,j}) + c2 · rand2 · (p_{g,j} − x_{i,j})  (4)
x_{i,j+1} = x_{i,j} + v_{i,j+1}  (5)

where ω is the inertia weight, c1 and c2 are acceleration factors, and rand1 and rand2 are independent uniform random numbers in [0, 1]. It reduces to the basic PSO algorithm when ω = 1.

2.2.2. PSO–Rosenbrock optimization algorithm

The hybrid PSO–Rosenbrock algorithm not only offers an effective way to solve combinatorial and parameter optimization problems, but also greatly improves the identification accuracy. The algorithm includes the following steps.





Step 1: Find the global optimal value x̂_{k,l} and the corresponding optimal fitness function value F(x̂_{k,l}) of each group by the PSO algorithm, where k is the iteration number and l is the serial number of the group;
Step 2: Take the x̂ achieving min{F(x̂_{k,1}), F(x̂_{k,2}), . . . , F(x̂_{k,l})} as the next initial value;
Step 3: Check the iteration count. If it reaches the prescribed iteration number, take x̂ as the initial value of the Rosenbrock algorithm;
Step 4: Obtain the optimal variable x̂_r and its fitness function value F(x̂_r). If F(x̂_r) < F(x̂), then set x̂ = x̂_r;
Step 5: Move axially from x̂: if F(x̂ + υ1 d_j) ≤ F(x̂), then set x̂ = x̂ + υ1 d_j;
Step 6: Check the condition ∥x_{k+1} − x_k∥ ≤ ε (ε is a small positive number); if the condition is met then stop, otherwise go to Step 5.

3. Identification of linear part

If only the input–output data sets are available, the identification of Wiener systems is impossible because the internal signal x(k) and the structure of the nonlinearity are unknown. To solve the identification problem, we need some prior information. According to some limited information on the unknown nonlinearity and the input–output data sets, the linear part of the system can be identified.

3.1. Origin point information

In this section, we consider the identification of the linear part with origin point information on the nonlinearity.

Assumption 3.1. It is assumed that f(·) is continuous in the neighborhood of the origin. Moreover, f(x) = 0 ⇔ x = 0.

Though the assumption is based on the local information f(0) = 0, it provides some global information on the nonlinearity, since no value x(k) ≠ 0 can lead to f(x) = 0. We should recognize that if only the local information f(0) = 0 is used, the outputs y(k) ≠ 0 together with the corresponding inputs make no contribution to revealing information on the internal signal x(k) and f(·), because no other information on f(·) is available. In other words, only the outputs y(k) = 0 and the corresponding inputs are useful for identification. However, the condition y(k) = 0 cannot hold exactly and is not robust in the presence of noise. The hope is that, due to the continuity of f(·) in the neighborhood of the origin, we can choose a small threshold ε > 0 and set x(k) = y(k) = 0 when |y(k)| ≤ ε. It can be proved that the estimated parameter vector ĥn converges to hn as ε → 0.

For each n, the linear part of the system is identifiable based on Assumption 3.1 if and only if there exist some 1 ≤ k1 < k2 < · · · < km ≤ N such that x(k1) = x(k2) = · · · = x(km) = 0 and the corresponding matrix Φ(k1, k2, . . . , km) = (φ(k1), φ(k2), . . . , φ(km))^T satisfies rank Φ(k1, k2, . . . , km) = n − 1.

Fig. 2. Impulse response of the linear part.



Lemma 3.1. Consider the Wiener system shown in Fig. 1 under Assumption 3.1. For any given n and ε(n) satisfying nε(n) → 0 as n → ∞, with probability one as N → ∞, there exists a sequence φ(i_j), j = 1, 2, . . . , n, such that |y(i_j)| ≤ ε and rank Φ(i1, i2, . . . , in) = rank(φ(i1), φ(i2), . . . , φ(in))^T ≥ n − 1.

This guarantees that the system is identifiable. Combined with the PSO–Rosenbrock algorithm proposed in this paper, we can collect all the outputs with |y(k)| ≤ ε and the corresponding inputs to identify the linear part. The steps of the identification approach based on the origin point information are as follows.

Step 1: Collect data u(k) and y(k), k = 1, 2, . . . , N;
Step 2: For each n, construct the matrix Φ(1, 2, . . . , N) and obtain its submatrix Φ(i1, i2, . . . , il) by deleting row k if |y(k)| > ε, where nε(n) → 0 as n → ∞;
Step 3: The identification problem is then equivalent to the following optimization problem:

ĥn = arg min Σ_j (φ^T(i_j) ĥn − x(i_j))² = arg min Σ_j (φ^T(i_j) ĥn)²;

Step 4: Solve the optimization problem in Step 3 by the PSO–Rosenbrock algorithm; then we obtain the estimate Ĝ(z) = Σ_{i=0}^{n−1} ĥn(i) z^{−i}.

In the identification process, the choice of ε is not unique. A small ε throws away all the data larger than ε, which means that a longer time is needed to collect enough data to construct the identification matrix. On the other hand, for a large ε, the collected data y(k) are not small enough to guarantee that the corresponding x(k) is in the neighborhood of the origin, which increases the identification bias. In order to obtain a valid identification result, any choice of ε needs to be tested.

We provide a numerical simulation example with the linear part being a fourth-order subsystem:

G(z) = (0.75z² + 0.60) / (z⁴ + 0.1z³ + 0.25z² + 0.40).  (6)
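To make Steps 1–4 concrete, the sketch below applies the origin-point method to a toy two-tap FIR Wiener system rather than the fourth-order example (6). The nonlinearity f(x) = x³ is an illustrative choice satisfying Assumption 3.1, and the constrained minimization of Step 3 is solved in closed form by an SVD as a stand-in for the PSO–Rosenbrock search.

```python
import numpy as np

def estimate_hn(u, y, n, eps):
    """Origin-point method: keep the samples with |y(k)| <= eps (so x(k) is
    near 0 under Assumption 3.1) and find the unit-norm h_n minimizing
    the sum over those samples of (phi(k)^T h_n)^2."""
    rows = [u[k - n + 1:k + 1][::-1]          # phi(k) = (u(k), ..., u(k-n+1))
            for k in range(n - 1, len(y)) if abs(y[k]) <= eps]
    Phi = np.array(rows)
    # With ||h|| = 1, the minimizer of ||Phi h||^2 is the right singular
    # vector of Phi for its smallest singular value; the SVD is used here
    # as a closed-form stand-in for the PSO-Rosenbrock search of Step 4.
    _, _, Vt = np.linalg.svd(Phi)
    h = Vt[-1]
    return h if h[0] >= 0 else -h             # resolve the sign ambiguity

# toy FIR Wiener system; f(x) = x^3 satisfies f(x) = 0 <=> x = 0
rng = np.random.default_rng(0)
h_true = np.array([0.8, 0.6])                 # ||h_true|| = 1
u = rng.uniform(-1.0, 1.0, 5000)
x = np.convolve(u, h_true)[:len(u)]
y = x ** 3
h_hat = estimate_hn(u, y, n=2, eps=1e-3)
print(np.round(h_hat, 3))                     # close to h_true = (0.8, 0.6)
```

The rows kept by the threshold lie almost in the hyperplane orthogonal to h_n, which is exactly the rank-(n − 1) condition stated above.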

The impulse response of the linear part is shown in Fig. 2, where the black dots represent the values at the sampling instants. The nonlinear part of the system is represented by

y = f(x) = 0.15x + 1.3(e^{0.6x} − 1)  (7)

and Fig. 3 shows its characteristic. The input u(k) is independent identically distributed, uniform in [−1, 1], and Gaussian noise is added to the output. Fig. 4 shows the estimates ĥn when ε = 0.1, the data length N = 2000, the order of the linear part n = 20, and the noise level is 5%. The parameter error is

∥hn − ĥn∥ = ( Σ_{i=1}^n (hn(i) − ĥn(i))² )^{1/2} = 0.0079.

The estimated results verify that the proposed method is effective.

To demonstrate the performance of the proposed identification method, we identify the linear part of the system for different combinations of data length and noise level. Table 1 shows the parameter errors for various N and noise levels when ε = 0.1 and n = 20. All the results are the averages of 100 Monte Carlo simulations.

Fig. 3. The unknown nonlinearity.

Fig. 4. hn and ĥn (origin information).

Table 1
The parameter errors (ε = 0.1, n = 20).

N      Noise level
       5%       10%      20%
1000   0.0119   0.0219   0.0522
2000   0.0063   0.0119   0.0231
3000   0.0044   0.0079   0.0187

3.2. Sign information

In this section, we introduce the way to identify the linear part in the case where the sign information of the nonlinearity is available.

Assumption 3.2. Let sign(y(k)) denote the sign of y(k). It is assumed that sign(x(k)) = sign(y(k)), where k = 1, 2, . . . , N.

Fig. 5. hn and ĥn (sign information).

Clearly, the unknown nonlinearity lies strictly in the first and third quadrants and no other information is available. The nonlinearity may be discontinuous. Similarly, the results obtained in this section can be applied to sign(x(k)) = −sign(y(k)) with minimal modification. We standardize the sign information in the following form:

sign(x(k)) = 1 if x(k) > 0; 0 if x(k) = 0; −1 if x(k) < 0.  (8)

The linear part of the system is equivalent to the form G(z) = (α1 z^{m−1} + α2 z^{m−2} + · · · + αm) / (z^m + β1 z^{m−1} + · · · + βm), where m is the order. Because ∥h∥² = 1, the impulse response of G(z) is the vector h = (h(0), h(1), . . .)^T. Let θ = (α1, α2, . . . , αm, β1, β2, . . . , βm) denote the parameter vector of the system, and let its estimate be θ̂ = (α̂1, α̂2, . . . , α̂m, β̂1, β̂2, . . . , β̂m). An estimate is found by the following minimization:

θ̂ = arg min_{α̂i, β̂i} Σ_{k=1}^N (sign(y(k)) − sign(ŷ(k)))² = arg min_{α̂i, β̂i} Σ_{k=1}^N (sign(y(k)) − sign(x̂(k)))².  (9)

It is clear that Σ_{k=1}^N (sign(y(k)) − sign(x̂(k)))² = 0 if θ̂ = θ, that is, (α̂1, . . . , α̂m, β̂1, . . . , β̂m) = (α1, . . . , αm, β1, . . . , βm). In other words, Σ_{k=1}^N (sign(y(k)) − sign(x̂(k)))² ≥ 4 if (α̂1, . . . , α̂m, β̂1, . . . , β̂m) ≠ (α1, . . . , αm, β1, . . . , βm): there exists at least one k for which sign(y(k)) = −sign(x̂(k)). This guarantees that the minimization of (9) has one and only one solution. Let the estimate of the linear part G(z) be denoted by

Ĝ(z) = (α̂1 z^{m−1} + α̂2 z^{m−2} + · · · + α̂m) / (z^m + β̂1 z^{m−1} + · · · + β̂m).

If (α̂1, . . . , α̂m, β̂1, . . . , β̂m) ≠ (α1, . . . , αm, β1, . . . , βm), then with probability one as N → ∞,

Σ_{k=1}^N (sign(y(k)) − sign(x̂(k)))² ≥ 4.
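The criterion (9) is easy to evaluate once a candidate linear part is fixed. The sketch below uses a toy two-tap FIR linear part in place of the rational G(z), and f(x) = x + x³ as an illustrative nonlinearity satisfying Assumption 3.2; in the full method, this criterion would be minimized over the candidate parameters by PSO–Rosenbrock.

```python
import numpy as np

def sign_criterion(h, u, y):
    """Criterion (9): sum over k of (sign(y(k)) - sign(xhat(k)))^2, where
    xhat is the output of the candidate linear part driven by the input."""
    xhat = np.convolve(u, h)[:len(u)]
    return np.sum((np.sign(y) - np.sign(xhat)) ** 2)

# toy FIR linear part and a sign-preserving nonlinearity (Assumption 3.2)
rng = np.random.default_rng(1)
h_true = np.array([0.8, 0.6])
u = rng.uniform(-1.0, 1.0, 1000)
x = np.convolve(u, h_true)[:len(u)]
y = x + x ** 3                                # sign(y(k)) = sign(x(k))

print(sign_criterion(h_true, u, y))                       # evaluates to 0 at the true parameters
print(sign_criterion(np.array([0.6, -0.8]), u, y) >= 4)   # a wrong candidate scores at least 4
```

Each sign mismatch contributes (1 − (−1))² = 4, which is why a wrong parameter vector is separated from the true one by at least 4, as stated above.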

We test the algorithm on the same example (6) and (7) as in Section 3.1, under the same input and noise disturbance but under Assumption 3.2. Fig. 5 shows the estimates ĥn when ε = 0.1, N = 2000, n = 20, and the noise level is 5%. The parameter error is

∥hn − ĥn∥ = ( Σ_{i=1}^n (hn(i) − ĥn(i))² )^{1/2} = 0.0023.

Table 2
The parameter errors (n = 20).

N      Noise level
       5%       10%      20%
1000   0.0271   0.0294   0.0268
2000   0.0170   0.0038   0.0037
3000   0.0019   0.0019   0.0021

Table 2 shows the parameter errors for different combinations of data length and noise level. All the results are averages over 100 Monte Carlo simulations. We can see from Table 2 that the parameter errors under a 5% noise level are almost the same as those under a 20% noise level with the same data length. This makes clear that the identification approach based on the sign information has a strong resistance to noise disturbance.

3.3. Monotonic information

In the previous section, we obtained an estimate of the linear part based on the sign information of the nonlinearity. In this section, we extend the idea to the case where no sign information is available but the nonlinearity is assumed to be monotonic on some interval.

Assumption 3.3. There exists an interval −∞ < ymin < ymax < ∞ such that when f(x) ∈ [ymin, ymax], the nonlinearity f(·) is continuous and f(x1) = f(x2) ⇔ x1 = x2.

Let f(xmin) = ymin and f(xmax) = ymax. The assumption ensures that x will not take any value outside the range (xmin, xmax) when f(x) is between ymin and ymax. Define

Δy(i, j) = y(i) − y(j),
Δx(i, j) = x(i) − x(j).  (10)

Clearly, Δx(i, j) = (φ^T(i) − φ^T(j)) hn = ϕ^T(i, j) hn, where ϕ^T(i, j) = φ^T(i) − φ^T(j). We consider the case where the nonlinearity is monotonically increasing, that is, f(xi) > f(xj) if xi > xj. In (10), if x(i) > x(j), i.e. Δx(i, j) > 0, we have y(i) > y(j) and Δy(i, j) > 0, so sign(Δx(i, j)) = sign(Δy(i, j)) = 1. Similarly, we have y(i) < y(j) and Δy(i, j) < 0 if x(i) < x(j), and clearly sign(Δx(i, j)) = sign(Δy(i, j)) = −1. As a result, when the nonlinearity is monotonically increasing, we have

sign(Δy(i, j)) = sign(y(i) − y(j)) = sign(x(i) − x(j)) = sign(ϕ^T(i, j) hn).  (11)

Therefore, by treating Δy(i, j) as y(k) and Δx(i, j) as x(k), the results developed for sign information in Section 3.2 carry over here. Let us sum up the steps of the identification algorithm when the nonlinearity is monotonically increasing.

Step 1: Collect data u(k) and y(k), k = 1, 2, . . . , N;
Step 2: Select data φ(ki) and y(ki), i = 1, 2, . . . , l + 1, with y(ki) ∈ [ymin, ymax];
Step 3: Define Δy(i, i + 1) = y(ki) − y(ki+1) and ϕ(i, i + 1) = φ(ki) − φ(ki+1);
Step 4: Solve the following minimization problem by the PSO–Rosenbrock algorithm to find the estimate ĥn:

ĥn = arg min_{ĥn} Σ_{i=1}^l (sign(Δy(i, i + 1)) − sign(ϕ^T(i, i + 1) ĥn))².  (12)
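As an illustration of Steps 1–4, the criterion (12) can be evaluated for a candidate ĥn as follows. The sketch uses a toy two-tap FIR linear part and the monotonically increasing nonlinearity (7); in the full method, PSO–Rosenbrock would minimize this criterion over ĥn.

```python
import numpy as np

def monotone_criterion(h, u, y, ymin, ymax):
    """Criterion (12): sign agreement between consecutive output increments
    and the corresponding increments of the candidate internal signal,
    restricted to samples with y(k) in [ymin, ymax] (Step 2)."""
    n = len(h)
    keep = [k for k in range(n - 1, len(y)) if ymin <= y[k] <= ymax]
    J = 0.0
    for a, b in zip(keep, keep[1:]):
        dy = y[a] - y[b]                      # Delta y(i, i+1), Step 3
        phi_a = u[a - n + 1:a + 1][::-1]
        phi_b = u[b - n + 1:b + 1][::-1]
        dx = np.dot(phi_a - phi_b, h)         # phi^T(i, i+1) h_n
        J += (np.sign(dy) - np.sign(dx)) ** 2
    return J

# toy FIR linear part with the increasing nonlinearity (7)
rng = np.random.default_rng(2)
h_true = np.array([0.8, 0.6])
u = rng.uniform(-1.0, 1.0, 1000)
x = np.convolve(u, h_true)[:len(u)]
y = 0.15 * x + 1.3 * (np.exp(0.6 * x) - 1)

print(monotone_criterion(h_true, u, y, -1.0, 1.0))        # evaluates to 0 at the true h_n
print(monotone_criterion(np.array([0.6, -0.8]), u, y, -1.0, 1.0) >= 4)
```

Since monotonicity makes sign(Δy) = sign(Δx), the criterion vanishes at the true parameters and penalizes wrong candidates exactly as in Section 3.2.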

Similarly, the algorithm proposed above can also be used when the nonlinearity is monotonically decreasing, with minimal modification. We test the algorithm on the same example (6) and (7) as in Section 3.1, under the same input and noise disturbance but under Assumption 3.3. Fig. 6 shows the estimates ĥn when N = 2000, n = 20, and the noise level is 5%. The results show that the algorithm is an effective approach to finding estimates of the linear part.

4. Identification of nonlinear part

In Section 3, the linear part was identified based on the limited information on the nonlinearity. Then we can obtain the internal signal x̂(k) from the input sequence. That is to say, the input and output data sets (x̂(k), y(k)) of the nonlinear part




Fig. 6. hn and ĥn (monotonic information).

Fig. 7. The actual output and estimated output of nonlinearity.

are available. But the unknown structure of the nonlinearity is still a difficult problem. Fortunately, the structure can be approximated by the following polynomial if the nonlinearity is continuous:

y = f(x) = Σ_{i=0}^{n−1} γi x^i.  (13)

Using the PSO–Rosenbrock optimization algorithm, the nonlinearity of the system can be identified properly based on the estimated x̂(k) and the true output y(k). We take the example of Section 3.1 to test the algorithm. The true nonlinearity is y = f(x) = 0.15x + 1.3(e^{0.6x} − 1); of course, the true structure and its best polynomial approximation are unknown in the identification process. First, the linear part is identified to find the estimate vector ĥn under Assumption 3.1; then the internal signal x̂(k) can be obtained with the same input u(k) as in the example of Section 3.1. In the simulation, the nonlinearity is modeled by a second-order polynomial, that is, n = 3. The nonlinearity can be identified properly by using the PSO–Rosenbrock algorithm. Fig. 7 shows the true output of the nonlinearity as a solid line and its estimate given by the proposed algorithm as circles. The estimated polynomial approximation is y = 0.0007 + 0.9720x + 0.2352x². It is clear that a satisfactory result is obtained.
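The polynomial fit of this section can be reproduced in outline as follows. This is a sketch under simplifying assumptions: the internal signal is taken as exactly known rather than estimated, and ordinary least squares (np.polyfit) stands in for the PSO–Rosenbrock search used in the paper.

```python
import numpy as np

# Fit the polynomial model (13) to (xhat, y) pairs. The internal signal is
# taken as exactly known here, and np.polyfit (ordinary least squares)
# replaces the PSO-Rosenbrock search; the nonlinearity is the example (7).
rng = np.random.default_rng(3)
xhat = rng.uniform(-1.0, 1.0, 500)                  # stand-in for the estimated x(k)
y = 0.15 * xhat + 1.3 * (np.exp(0.6 * xhat) - 1)

coeffs = np.polyfit(xhat, y, deg=2)                 # second-order polynomial, n = 3
yfit = np.polyval(coeffs, xhat)
print(np.round(coeffs[::-1], 3))                    # gamma_0, gamma_1, gamma_2
print(np.max(np.abs(yfit - y)) < 0.05)              # a close fit on [-1, 1]
```

Note that np.polyfit returns the highest-degree coefficient first, so the array is reversed to match the γ0, γ1, γ2 ordering of (13).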


Note that the order of the polynomial is usually unknown, so how to choose the order is a problem. Generally, a higher polynomial order gives a better fit. But once the order n is high enough to describe the nonlinearity, any further increase of the order produces only a small reduction of the output error, while the computational complexity grows with the order. So the choice of the order n is a trade-off between the output error and the computational complexity. Define the output error

e(n) = (1/N) Σ_{i=1}^N (ŷ(i) − y(i))².  (14)
We take the output error as the judgment standard. If Δe(k) = e(k + 1) − e(k) is small enough, the order k should be chosen; otherwise, a higher order is preferred. If the purpose of the identification is control, the order is usually not high.

5. Conclusion

In this paper, an approach is proposed to identify the linear and nonlinear parts of Wiener systems in sequence. Based on the limited information on the nonlinearity, the difficult problem caused by the lack of the nonlinear structure and the internal signal is solved, and the linear part is estimated in the numerical simulation. The structure presented in this paper can obtain a good approximation of the system if the degree of nonlinearity is not extremely high, and the novel PSO–Rosenbrock optimization algorithm provides an effective way to solve the optimization problem.

Acknowledgments

The authors would like to acknowledge the financial support of the National High-Tech Research and Development Plan of China (Grant 2008AA042131) and the National Grand Fundamental Research 973 Program of China (Grant 2007CB714300). The authors are grateful to the anonymous reviewers for their valuable recommendations.

References

[1] J.L. Figueroa, S.I. Biagiola, O.E. Agamennoni, An approach for identification of uncertain Wiener systems, Mathematical and Computer Modelling 48 (1–2) (2008) 305–315.
[2] Y. Xiao, N. Yue, Parameter estimation for nonlinear dynamical adjustment models, Mathematical and Computer Modelling 54 (5–6) (2011) 1561–1568.
[3] S.A. Billings, S.Y. Fakhouri, Identification of a class of nonlinear systems using correlation analysis, Proceedings of the IEEE 125 (7) (1978) 691–697.
[4] V. Sundarapandian, General observers for discrete-time nonlinear systems, Mathematical and Computer Modelling 40 (1–2) (2004) 227–232.
[5] T. Wigren, Recursive prediction error identification using the nonlinear Wiener model, Automatica 29 (1993) 1011–1025.
[6] E.W. Bai, A blind approach to the Hammerstein–Wiener model identification, Automatica 38 (2002) 967–979.
[7] W. Greblicki, Nonparametric identification of Wiener systems, IEEE Transactions on Information Theory 38 (1992) 1487–1493.
[8] J. Paduart, L. Lauwers, J. Swevers, K. Smolders, J. Schoukens, R. Pintelon, Identification of nonlinear systems using polynomial nonlinear state space models, Automatica 46 (2010) 647–656.
[9] E.W. Bai, Identification of linear systems with hard input nonlinearities of known structure, Automatica 38 (2002) 853–860.
[10] A. Papoulis, S.U. Pillai, Probability, Random Variables and Stochastic Processes, fourth ed., McGraw Hill, Boston, 2002.
[11] J. Voros, Parameter identification of Wiener systems with discontinuous nonlinearities, Systems and Control Letters 44 (5) (2001) 363–372.
[12] Q. Zhang, A. Iouditski, L. Ljung, Identification of Wiener system with monotonous nonlinearity, in: IFAC Symp. on System Identification, 2006, pp. 166–171.
[13] E.W. Bai, J. Reyland, Towards identification of Wiener systems with the least amount of a priori information on the nonlinearity, Automatica 44 (2008) 910–919.
[14] E.W. Bai, J. Reyland, Towards identification of Wiener systems with the least amount of a priori information: IIR cases, Automatica 45 (2009) 956–964.
[15] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.
[16] F. Pan, J. Chen, M.G. Gan, T. Cai, Y.T. Tu, Model analysis of particle swarm optimization, Automation Technology 32 (3) (2006) 368–377.
[17] H. Shayeshi, H.A. Shayanfar, S. Jalilzadeh, A. Safari, Design of output feedback UPFC controller for damping of electromechanical oscillations using PSO, Energy Conversion and Management 50 (10) (2009) 2554–2561.
[18] S.A. Taher, A. Karimian, M. Hasani, A new method for optimal location and sizing of capacitors in distorted distribution networks using PSO algorithm, Simulation Modelling Practice and Theory 19 (2) (2011) 662–672.