A modified HO-based model of hysteresis in piezoelectric actuators




Sensors and Actuators A 220 (2014) 316–322



Lianwei Ma (a), Yu Shen (b,*), Jinrong Li (a), Xinlong Zhao (c)

(a) School of Automation and Electrical Engineering, Zhejiang University of Science & Technology, Hangzhou 310023, China
(b) Department of Applied Physics, Zhejiang University of Science and Technology, Hangzhou 310023, China
(c) College of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou 310018, China

Article info

Article history: Received 3 April 2014; received in revised form 22 October 2014; accepted 22 October 2014; available online 1 November 2014.

Keywords: Expanded space method; Hysteretic operator (HO); Hysteresis; Least square method

Abstract

In this paper, a new hysteretic operator (HO) for hysteresis in piezoelectric actuators is proposed. Based on the constructed HO, the input space of the neural network is expanded from one dimension to two dimensions using the expanded space method, so that the multi-valued mapping of hysteresis is transformed into a continuous mapping comprised of one-to-one mapping and multiple-to-one mapping. Based on the expanded input space, a neural network is used to identify the hysteresis nonlinearity. The approximation performance in an experimental example suggests that the proposed approach is effective.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Piezoelectric materials have been extensively used in sensors and actuators in the past 20 years. However, the hysteresis nonlinearities existing in these materials hinder their application in many fields [1]. It is well known that hysteresis is a non-differentiable nonlinearity with multi-valued mapping, which often leads to undesirable oscillations and poor control performance in control systems [2]. Thus, the existence of hysteresis poses a challenge for the control of systems using piezoelectric sensors or actuators [3]. Existing compensation methods for hysteresis usually depend on models of hysteresis; therefore, it is necessary to construct accurate hysteresis models. In past decades, a number of hysteresis models were proposed, such as the Preisach model [4], the Bouc–Wen model [5], the KP model [6] and the PI model [7]. Among these, the most frequently used is the Preisach model, which is based on the Preisach operator; however, it is difficult to determine the parameters of the Preisach model, which limits its use. Neural networks (NNs), especially three-layer feed-forward neural networks, can implement all kinds of nonlinear mappings by varying their weight values, and are regarded as one of the best ways to identify nonlinear systems. Researchers attached great importance to

∗ Corresponding author. Tel.: +86 57185070702. E-mail addresses: shenyu [email protected], [email protected] (Y. Shen).
http://dx.doi.org/10.1016/j.sna.2014.10.025
0924-4247/© 2014 Elsevier B.V. All rights reserved.

neural networks in constructing hysteresis models because hysteresis is a representative type of nonlinearity. Neural networks have been successfully applied to modeling hysteresis with a single loop or first-order reversal curves [8–10]. However, Wei [11] found that the traditional neural-network approach can approximate continuous one-to-one and multiple-to-one mappings, but cannot identify the multi-valued mapping of hysteresis. Despite this limitation, researchers did not give up constructing NN-based hysteresis models. Tong [12] discovered that adding an input to the neural network could improve the approximation performance of hysteresis models. Ma [13] improved this approach, presented the hysteretic operator (HO), considered that the multi-valued mapping of hysteresis is thereby transformed into a continuous one-to-one mapping, and named the approach the expanded space method. Thereafter, Dong [14] and Zhang [15] respectively proposed new HOs so that the expanded space method can be used to model rate-dependent hysteresis. In this paper, it is shown that in the expanded space method the multi-valued mapping of hysteresis is transformed into a continuous mapping consisting of one-to-one mapping and multiple-to-one mapping, not the single one-to-one mapping assumed in Ref. [13]. Besides, since the similarity between the HO and the branches of the hysteretic loop is a reliable indicator of model precision, a new approach to constructing the HO is proposed in this paper. Then, based on this HO, a modified hysteresis model is obtained. Finally, an experimental example is used to validate the model of hysteresis.


Because the curve of the HO passes through the origin of every minor coordinate system, the constant term is always 0. Therefore, the HO in the ith minor coordinate system is defined as

φ(x_i) = a_1 x_i + a_2 x_i^2 + · · · + a_m x_i^m    (1)

where x_i is the mapping value of any input x in the ith minor coordinate system and φ is the corresponding output of the HO in that system. In the main coordinate system, the HO is defined as

O(x) = O(x_ei) + x_i sin θ + φ(x_i) cos θ
x_i = (x − x_ei) cos θ + [y − O(x_ei)] sin θ    (2)
y_i = [y − O(x_ei)] cos θ − (x − x_ei) sin θ

where [x_ei, O(x_ei)] are the coordinates, in the main coordinate system, of the origin of the ith minor coordinate system, (x, y) are the main-system coordinates of the point (x_i, y_i), and θ is the counterclockwise rotation angle of the ith minor coordinate system.

Fig. 1. Coordinate transformation.
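The transformation of Eqs. (1)-(2) can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper: the function names are invented here, and the coefficient vector `a`, the minor-system origin `(x_e, O_xe)` and the angle `theta` are assumed to be given.

```python
import math

def ho_minor(xi, a):
    """Eq. (1): HO output in a minor coordinate system.
    a = [a1, ..., am]; the constant term is omitted, so phi(0) = 0."""
    return sum(aj * xi ** (j + 1) for j, aj in enumerate(a))

def ho_main(x, y, x_e, O_xe, theta, a):
    """Eq. (2): map a main-coordinate point (x, y) into the i-th minor
    coordinate system (origin (x_e, O(x_e)), rotated counterclockwise by
    theta) and return the HO output O(x) together with (x_i, y_i)."""
    xi = (x - x_e) * math.cos(theta) + (y - O_xe) * math.sin(theta)
    yi = (y - O_xe) * math.cos(theta) - (x - x_e) * math.sin(theta)
    O_x = O_xe + xi * math.sin(theta) + ho_minor(xi, a) * math.cos(theta)
    return O_x, xi, yi
```

For theta = 0 and a minor system at the origin, the HO output reduces to the polynomial of Eq. (1) evaluated directly at the input.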

2. The expanded space method

The expanded space method is based on the coordinate transformation shown in Fig. 1 and supposes that the hysteretic curve is composed of the movement loci of a motion point in a series of minor coordinate systems. First, a minor coordinate system centered on the starting point is created. The motion point moves along a regular path from the starting point to the first extremum point, producing a branch of a major or minor loop. Next, another minor coordinate system is created at this extremum point, and the motion point continues along a regular path to the next extremum point, which again produces a branch of the hysteretic loop. Then a third minor coordinate system is created at the second extremum point, and so on; all hysteretic loops are obtained in the same way. The movement loci of the motion point are similar in all coordinate systems. Therefore, a monotone curve that is similar in shape to the movement locus, such as a segment of a polynomial function, can be used to replace the regular path and to predict approximately the outputs of hysteresis. This approximating function is called the HO; its curve exhibits the same features as that of hysteresis, such as ascending, turning and descending. The output of the HO and the input of hysteresis are fed together into a neural network, so that the input space of the neural network is expanded from one dimension to two dimensions. Based on the expanded input space, a neural network is employed to identify the hysteresis nonlinearity.

3. Construction of hysteretic operator (HO)

3.1. Definition of HO

3.2. Computation of the parameters

The number of parameters of the HO is m, so m equations are needed to compute them. The least square method is one of the best ways to fit polynomial functions. Suppose the number of samples used to train the neural network is n, and let x_i and y_i be the mapping values of the input and output of any sample, respectively, in a certain minor coordinate system. Then the residual δ_i is

δ_i = y_i − φ(x_i)    (3)

Thus, the sum of squared residuals is

S = Σ_{i=1}^{n} δ_i^2 = Σ_{i=1}^{n} [y_i − φ(x_i)]^2 = Σ_{i=1}^{n} [y_i − Σ_{j=1}^{m} a_j x_i^j]^2    (4)

In accordance with the least square method, the sum of squares should be minimized; the minimum is found by setting the partial derivatives of S with respect to a_1, a_2, …, a_m to zero. Since the HO contains m parameters, there are m partial-derivative equations:

∂S/∂a_k = −2 Σ_{i=1}^{n} x_i^k [y_i − Σ_{j=1}^{m} a_j x_i^j] = 0,  k = 1, …, m    (5)

Rearranging Eq. (5) gives

Σ_{j=1}^{m} a_j Σ_{i=1}^{n} x_i^{j+k} = Σ_{i=1}^{n} y_i x_i^k,  k = 1, …, m    (6)

In Ref. [13], a quadratic function whose linear coefficient and constant term are set to 0, i.e. f(x) = ax^2, was used to construct the HO. This HO has some advantages, such as few parameters and simple parameter computation. However, the few adjustable parameters limit the adaptability of the model and reduce the accuracy of the hysteresis model. Increasing the degree within a certain range can improve the accuracy of the hysteresis model; although the complexity of the parameter computation grows with the degree of the polynomial, this is not a problem given the computing capacity of today's computers. Thus, the standard form of a polynomial function is used to construct the HO in this paper; because its curve passes through the origin in every minor coordinate system, the constant term is omitted.

Eq. (6) is equivalent to the following system of equations:

a_1 Σ_{i=1}^{n} x_i^2 + a_2 Σ_{i=1}^{n} x_i^3 + · · · + a_m Σ_{i=1}^{n} x_i^{m+1} = Σ_{i=1}^{n} y_i x_i
a_1 Σ_{i=1}^{n} x_i^3 + a_2 Σ_{i=1}^{n} x_i^4 + · · · + a_m Σ_{i=1}^{n} x_i^{m+2} = Σ_{i=1}^{n} y_i x_i^2
  ⋮
a_1 Σ_{i=1}^{n} x_i^{m+1} + a_2 Σ_{i=1}^{n} x_i^{m+2} + · · · + a_m Σ_{i=1}^{n} x_i^{2m} = Σ_{i=1}^{n} y_i x_i^m    (7)


Eq. (7) can be written as the following matrix equation

⎡ Σ x_i^2      Σ x_i^3      · · ·  Σ x_i^{m+1} ⎤ ⎡ a_1 ⎤   ⎡ Σ y_i x_i   ⎤
⎢ Σ x_i^3      Σ x_i^4      · · ·  Σ x_i^{m+2} ⎥ ⎢ a_2 ⎥ = ⎢ Σ y_i x_i^2 ⎥    (8)
⎢    ⋮            ⋮         · · ·      ⋮       ⎥ ⎢  ⋮  ⎥   ⎢     ⋮       ⎥
⎣ Σ x_i^{m+1}  Σ x_i^{m+2}  · · ·  Σ x_i^{2m}  ⎦ ⎣ a_m ⎦   ⎣ Σ y_i x_i^m ⎦

where every sum runs over i = 1, …, n; i.e.

XA = Y    (9)

Next, it will be proved that the matrix X has full rank, i.e. rank(X) = m, provided that m is smaller than or equal to n.

Proof.



Let

M = ⎡ x_1   x_1^2   · · ·   x_1^m ⎤
    ⎢ x_2   x_2^2   · · ·   x_2^m ⎥
    ⎢  ⋮      ⋮     · · ·    ⋮    ⎥
    ⎣ x_n   x_n^2   · · ·   x_n^m ⎦

Since x_1 ≠ x_2 ≠ · · · ≠ x_n and m ≤ n, the m columns of the matrix M are linearly independent, i.e. the matrix M has full column rank and rank(M) = m. Since the row rank of a matrix equals the column rank of its transpose, M^T has full row rank and rank(M^T) = m. The matrix X can be written as

X = M^T M    (10)

i.e. the product of a full-row-rank matrix and a full-column-rank matrix. Therefore,

rank(X) = rank(M^T M) = rank(M) = m    (11)

i.e. X is a full-rank matrix. In sum, Eq. (9) has a unique solution, provided that the degree of the function (1) is smaller than or equal to the number of samples, i.e. m ≤ n. Solving Eq. (9), the model parameters are given as

A = X^{-1} Y    (12)
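The normal-equation route of Eqs. (7)-(12) can be sketched as follows; a minimal NumPy sketch, where the function name `fit_ho` is illustrative and not from the paper.

```python
import numpy as np

def fit_ho(x, y, m):
    """Least-squares fit of the HO polynomial a1*x + ... + am*x^m
    (no constant term) by solving the normal equations XA = Y of Eq. (9).
    x, y: sample coordinates in one minor coordinate system, with m <= n."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # M: n-by-m matrix with rows [x_i, x_i^2, ..., x_i^m] (no constant column)
    M = np.vander(x, m + 1, increasing=True)[:, 1:]
    X = M.T @ M        # Eq. (10): X = M^T M, full rank for distinct x_i, m <= n
    Y = M.T @ y        # right-hand side of Eq. (8)
    return np.linalg.solve(X, Y)   # Eq. (12): A = X^{-1} Y
```

In practice, `np.linalg.lstsq` applied to M directly is numerically preferable to forming M^T M explicitly; the normal-equation form above simply mirrors the paper's derivation.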

Fig. 2. Multi-value mapping of hysteresis.

3.3. Determination of rotation angle θ

Undoubtedly, if the minor coordinate systems are rotated by some angle θ so that the movement loci of the motion point become more similar to the curve of a certain polynomial function, the accuracy of the model will be improved. In this case, the output of the HO in the ith minor coordinate system is

φ(x_i) = a_1 x_i + a_2 x_i^2 + · · · + a_m x_i^m

However, according to formula (2), the variable x_i is related to the original coordinates (x, y) in the main coordinate system:

x_i = (x − x_ei) cos θ + [y − O(x_ei)] sin θ

When the samples are used to train the neural network, the variable x_i can be computed because the coordinates of any sample can be known. But the y-coordinate is unknown for any input x when the model is used to predict the output of the system, i.e. the variable x_i cannot be computed. Therefore, the minor coordinate systems can only be rotated by the following angles:

θ = { 0,   x_e(i+1) > x_ei
    { π,   x_e(i+1) < x_ei    (13)

where x_ei and x_e(i+1) are the ith and (i+1)th input extremums, respectively. In this case, the variable x_i depends only on the input x, that is,

x_i = { x − x_ei,   x_e(i+1) > x_ei
     { x_ei − x,   x_e(i+1) < x_ei    (14)
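With θ restricted to 0 (ascending branch) or π (descending branch), the HO can be evaluated over a whole input sequence by starting a new minor coordinate system at each input extremum. The following is a hedged sketch of that reading of Eqs. (13)-(14); the extremum-detection logic and the function names are illustrative assumptions, not code from the paper.

```python
def phi(t, a):
    """Eq. (1): HO polynomial in the local coordinate, phi(0) = 0."""
    return sum(aj * t ** (j + 1) for j, aj in enumerate(a))

def ho_branch(x, x_e, O_e, ascending, a):
    """Eqs. (13)-(14): theta = 0 on ascending branches, theta = pi on
    descending ones, so x_i depends on the input x alone."""
    if ascending:                     # theta = 0: x_i = x - x_e
        return O_e + phi(x - x_e, a)
    return O_e - phi(x_e - x, a)      # theta = pi: x_i = x_e - x

def ho_sequence(u, a):
    """Run the HO over an input sequence u, starting a new minor coordinate
    system at every input extremum (sign change of the input increment)."""
    x_e, O_e = u[0], 0.0              # minor system at the starting point
    out, prev, direction = [], u[0], 0
    for x in u:
        d = x - prev
        if direction != 0 and d != 0 and (d > 0) != (direction > 0):
            x_e, O_e = prev, out[-1]  # new extremum reached: reset origin
        if d != 0:
            direction = 1 if d > 0 else -1
        out.append(ho_branch(x, x_e, O_e, direction >= 0, a))
        prev = x
    return out
```

With a quadratic HO, a = [0, 1], the triangular input [0, 1, 2, 1, 0] yields [0, 1, 4, 3, 0]: the input value 1 maps to two different HO outputs (1 ascending, 3 descending), which is exactly the branch information the expanded input space supplies to the network.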

4. Mapping between input and output spaces

The multi-valued mapping of hysteresis is comprised of multiple-to-one mapping and one-to-multiple mapping. Research [11] suggested that only one-to-one and multiple-to-one mappings can be approximated by the traditional neural-network approach, while one-to-multiple mapping cannot be identified. That is why the traditional neural-network approach fails to identify hysteresis nonlinearity. In the expanded space method, the input space of the neural network is expanded from one dimension to two dimensions. Next, it will be proved that the multi-valued mapping of hysteresis is thereby transformed into a continuous mapping comprised of one-to-one mapping and multiple-to-one mapping between the expanded input space and the output space.

Lemma 1. Let x(t) ∈ C(R), where R = {t | −∞ < t < ∞} and C(R) is the set of continuous functions on R. For two different time instants t1 and t2 (t1 ≠ t2) with x(t1) = x(t2), it can hold that O[x(t1)] ≠ O[x(t2)].

Proof. Fig. 2(a) and (b) show an input curve and the corresponding input-output curve of hysteresis, respectively. As shown in Fig. 2(a), an input extremum x(t_e1) = x_e1 occurs at t = t_e1, and an adjacent input extremum x(t_e2) = x_e2 occurs at t = t_e2. For t = t1 and t = t2, there exist two equal inputs x(t1) = x(t2) = x_1. Because the input x(t1) lies in the minor coordinate system with origin [x_e1, O(x_e1)] and x_e2 < x_e1, the corresponding output of the HO is

O[x(t1)] = O(x_e1) + x_i(t1) sin π + φ[x_i(t1)] cos π = O(x_e1) − φ[x_i(t1)]    (15)


Thus, for two particular time instants t1 and t2 (t1 ≠ t2), even though x(t1) = x(t2), O[x(t1)] ≠ O[x(t2)], since their dominant extrema are different. □

Lemma 2. If there exist two different time instants t1 and t2 such that O[x(t1)] − O[x(t2)] → 0, then x(t1) − x(t2) → 0.

Proof.

In any minor coordinate system, consider

(O[x(t1)] − O[x(t2)]) / (x(t1) − x(t2)) = k,  k ∈ (0, +∞)    (24)

Then

x(t1) − x(t2) = (O[x(t1)] − O[x(t2)]) / k    (25)

Fig. 3. The curve of φ(x).

It is clear that if O[x(t1)] − O[x(t2)] → 0, then x(t1) − x(t2) → 0. □

In accordance with formula (2),

x_i(t1) = [x(t1) − x_e1] cos π + [y(t1) − O(x_e1)] sin π = x_e1 − x_1    (16)

Theorem 1. For any hysteresis, there exists a continuous mapping Γ : R^2 → R comprised of one-to-one mapping and multiple-to-one mapping, such that y(t) = Γ(x(t), O[x(t)]).

Proof. First, it is proved that Γ includes one-to-one mapping. In terms of Lemma 1, if there exist two different time instants t1 and t2 with x(t1) = x(t2), then

According to formula (1),

φ[x_i(t1)] = a_1(x_e1 − x_1) + a_2(x_e1 − x_1)^2 + · · · + a_m(x_e1 − x_1)^m    (17)

Substituting the expression (17) into the expression (15) gives

(x(t1), O[x(t1)]) ≠ (x(t2), O[x(t2)])    (26)

O[x(t1)] = O(x_e1) − a_1(x_e1 − x_1) − a_2(x_e1 − x_1)^2 − · · · − a_m(x_e1 − x_1)^m    (18)

Likewise, O(x_e2) can be obtained:

O(x_e2) = O(x_e1) − a_1(x_e1 − x_e2) − a_2(x_e1 − x_e2)^2 − · · · − a_m(x_e1 − x_e2)^m    (19)

Because the input x(t2) lies in the minor coordinate system with origin [x_e2, O(x_e2)], the corresponding output of the HO is

O[x(t2)] = O(x_e2) + x_i(t2) sin 0 + φ[x_i(t2)] cos 0 = O(x_e2) + φ[x_i(t2)]
         = O(x_e2) + a_1(x_1 − x_e2) + a_2(x_1 − x_e2)^2 + · · · + a_m(x_1 − x_e2)^m    (20)

Substituting the expression (19) into the expression (20) gives

O[x(t2)] = O(x_e1) − a_1[(x_e1 − x_e2) − (x_1 − x_e2)] − a_2[(x_e1 − x_e2)^2 − (x_1 − x_e2)^2] − · · · − a_m[(x_e1 − x_e2)^m − (x_1 − x_e2)^m]    (21)

Subtracting the expression (21) from the expression (18) gives

O[x(t1)] − O[x(t2)] = φ(x_e1 − x_e2) − φ(x_e1 − x_1) − φ(x_1 − x_e2)    (22)

Let q = x_1 − x_e2 and r = x_e1 − x_1; then q + r = x_e1 − x_e2, and expression (22) becomes

O[x(t1)] − O[x(t2)] = φ(q + r) − φ(q) − φ(r)    (23)

Since the locus of the motion point is similar in shape to a monotone ascending parabola in every minor coordinate system, the curve of the fitting polynomial function is a parabola which opens upwards and passes through the origin, as shown in Fig. 3. Therefore, φ(q + r) ≠ φ(q) + φ(r), i.e. O[x(t1)] ≠ O[x(t2)].

Therefore, although the mapping comprised of x(t1) → y(t1) and x(t2) → y(t2) is one-to-multiple, the mapping consisting of (x(t1), O[x(t1)]) → y(t1) and (x(t2), O[x(t2)]) → y(t2) is one-to-one, because O[x(t1)] ≠ O[x(t2)] and y(t1) ≠ y(t2). That is to say, Γ is a one-to-one mapping in this case.

Next, it will be proved that Γ includes multiple-to-one mapping. As shown in Fig. 2(a), for t = t3 and t = t4 there exist two unequal inputs x(t3) ≠ x(t4). But, as shown in Fig. 2(b), for x = x(t3) the output of the system is y(t3) = y_3, and for x = x(t4) the output is y(t4) = y_3; that is, y(t3) = y(t4). This is the so-called multiple-to-one mapping. Since x(t3) ≠ x(t4) and y(t3) = y(t4), the mapping comprised of (x(t3), O[x(t3)]) → y(t3) and (x(t4), O[x(t4)]) → y(t4) is a multiple-to-one mapping, regardless of whether O[x(t3)] = O[x(t4)] or O[x(t3)] ≠ O[x(t4)]. That is to say, Γ is a multiple-to-one mapping in this case.

Finally, it will be proved that Γ is a continuous mapping. According to Ref. [16],

x(t1) − x(t2) → 0 ⇒ O[x(t1)] − O[x(t2)] → 0    (27)

Then, in terms of Lemma 2,

O[x(t1)] − O[x(t2)] → 0 ⇒ x(t1) − x(t2) → 0 ⇒ y(t1) − y(t2) → 0    (28)

Thus, Γ is a continuous mapping. In conclusion, there exists a continuous mapping Γ : R^2 → R comprised of one-to-one mapping and multiple-to-one mapping, such that y(t) = Γ(x(t), O[x(t)]). □

Remark 1. Theorem 1 indicates that the proposed HO can transform the multi-valued mapping of hysteresis into a continuous mapping comprised of one-to-one mapping and multiple-to-one mapping. It is known that a neural network with a sufficient number of hidden neurons can identify any continuous one-to-one or multiple-to-one mapping [17].

5. Experimental verification

In the following, an experimental example is presented. In this example, the conjugate gradient algorithm with the Powell–Beale restart method is employed to train the neural network, so as


Fig. 4. The experimental setup.

to improve the convergence rate and the performance of the neural model. A piezoceramic actuator (PZT-753.21 C from PI Corp., shown in Fig. 4(a)) is adopted to validate the proposed model. Actuated by an input voltage in the 0–120 V range, the actuator has a nominal displacement ranging from 0 to 20 µm. The experimental platform (shown in Fig. 4(b)) consists of a personal computer, a voltage amplifier (E505.00), a non-contact capacitive sensor (E-509.CxA from PI Corp.) and a multifunction board (PC-7483 from Advantech Corp.) with a 16-bit A/D converter and a 16-bit D/A converter. A three-layer feed-forward neural network is used to identify the measured data. The sigmoid function and the linear function are used as the activation functions of the hidden layer and the output layer, respectively. One thousand two hundred and thirty pairs of measured data are used; the data are separated into two parts, one for training the neural networks and the other for model validation. Fig. 5 shows the variation of the MSE with the degree of the HO. The model achieves the best result when the degree is 5; thus the degree of the HO is set to 5, i.e., m = 5. So far, the optimal number of hidden neurons can only be determined experimentally. Since too many hidden neurons reduce the fault-tolerance capability of a neural network, numbers of hidden neurons from 1 to 100 were tried in this paper. For brevity, only the best three performances are listed in Table 1. It is shown that

Table 1
The performances of NN with different number of hidden neurons in the proposed model.

No. of hidden neurons    MSE
2                        0.0041
21                       0.0055
19                       0.0067

Table 2
The performances of NN with different number of hidden neurons in the model of Ref. [2].

No. of hidden neurons    MSE
12                       0.0100
17                       0.0108
3                        0.0113

the neural network derives the best result when the number of hidden neurons is 2. Therefore, a neural network containing 2 input neurons, 2 hidden neurons and 1 output neuron is employed to identify the hysteresis nonlinearity in the piezoceramic actuator. After 82 epochs, the training procedure is finished. Figs. 6 and 7 show the validation result of the proposed model and the absolute errors, respectively. The MSE is 0.0041. In addition, the models proposed in Refs. [2,13] are also used to approximate the measured data. Likewise, the number of hidden neurons was tried from 1 to 100, and the best three performances

Fig. 5. The variation of MSE along with the degree.

Fig. 6. Comparison between the prediction of the proposed model and the real data.


Fig. 7. The model error of the proposed model.

were respectively listed in Tables 2 and 3. For the approach presented in Ref. [2], the neural network derives the best result when the number of hidden neurons is 12. Therefore, a neural network consisting of 2 input neurons, 12 hidden neurons and 1 output neuron is used to identify the hysteresis nonlinearity. After 849 epochs, the training procedure is finished. Figs. 8 and 9 show the validation result of the model and the absolute errors, respectively. The MSE is 0.0100. For the approach proposed in Ref. [13], the neural network derives the best result when the number of hidden neurons is 94. Therefore, a neural network with 94 hidden neurons is used to identify the measured data. After 941 epochs, the training procedure is finished. Figs. 10 and 11 show the validation result of the model and the absolute errors, respectively. The MSE is 0.0105.

Table 3
The performances of NN with different number of hidden neurons in the model of Ref. [13].

No. of hidden neurons    MSE
94                       0.0105
80                       0.0108
28                       0.0113

Fig. 8. Comparison between the prediction of the proposed model in Ref. [2] and the real data.

Fig. 9. The model error of the proposed model in Ref. [2].

Fig. 10. Comparison between the prediction of the proposed model in Ref. [13] and the real data.

Fig. 11. The model error of the proposed model in Ref. [13].
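The identification step described above can be approximated with a small self-contained network. The following is a hedged sketch in NumPy: it keeps the paper's architecture (2 inputs, sigmoid hidden layer, linear output fed the expanded input (x(t), O[x(t)])), but substitutes plain batch gradient descent for the Powell–Beale restarted conjugate gradient used in the paper, and all function names and hyperparameters are illustrative.

```python
import numpy as np

def train_hysteresis_nn(x, o, y, hidden=2, epochs=3000, lr=0.1, seed=0):
    """Three-layer feed-forward network: sigmoid hidden layer, linear output,
    fed the expanded 2-D input (x(t), O[x(t)]).  Plain batch gradient descent
    stands in for the conjugate-gradient training used in the paper."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([x, o])                  # expanded input space
    Y = np.asarray(y, dtype=float).reshape(-1, 1)
    n = len(Y)
    W1 = rng.normal(0.0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # sigmoid hidden layer
        P = H @ W2 + b2                           # linear output layer
        E = P - Y
        dH = (E @ W2.T) * H * (1.0 - H)           # backpropagated error
        W2 -= lr * (H.T @ E) / n; b2 -= lr * E.mean(0)
        W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(0)
    def predict(xs, os):
        Hp = 1.0 / (1.0 + np.exp(-(np.column_stack([xs, os]) @ W1 + b1)))
        return (Hp @ W2 + b2).ravel()
    mse = float(((predict(X[:, 0], X[:, 1]) - Y.ravel()) ** 2).mean())
    return predict, mse
```

Because the second input column o carries the HO output, samples with equal x but different branches remain distinguishable to the network, which is the point of the expanded space method.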


Comparing the prediction results of the proposed model with those of the models given in Refs. [2,13] shows that the proposed model identifies the measured data better than the models presented in Refs. [2,13].

6. Conclusions

A new approach to constructing the HO is proposed in this paper. Based on the constructed HO, the input space of the neural network is expanded from one dimension to two dimensions. Between the expanded input space and the output space, the multi-valued mapping of hysteresis is transformed into a continuous mapping comprised of one-to-one mapping and multiple-to-one mapping. Based on the expanded input space, a neural model of hysteresis is established. The identification performance in the experimental example suggests that the proposed approach is effective.

Acknowledgements

This work is partially supported by the Zhejiang Provincial Natural Science Foundation (Grant nos. Y1110508 and LQ14F050002), the Interdisciplinary Pre-research Project of Zhejiang University of Science and Technology (Grant no. 2011JC03Y), the Science Technology Department of Zhejiang Province (Grant no. 2014C31020) and the National Natural Science Foundation of China (Grant nos. 11304282 and 61273184).

References

[1] T. Zhang, H.G. Li, G.P. Cai, Hysteresis identification and adaptive vibration control for a smart cantilever beam by a piezoelectric actuator, Sens. Actuators A: Phys. 203 (2013) 168–175.
[2] X. Zhao, Y. Tan, Modeling hysteresis and its inverse model using neural networks based on expanded input space method, IEEE Trans. Control Syst. Technol. 16 (3) (2008) 484–490.
[3] G.Y. Gu, L.M. Zhu, Motion control of piezoceramic actuators with creep, hysteresis and vibration compensation, Sens. Actuators A: Phys. 197 (2013) 76–87.
[4] K.K. Ahn, N.B. Kha, Internal model control for shape memory alloy actuators using fuzzy based Preisach model, Sens. Actuators A: Phys. 136 (2) (2007) 730–741.
[5] W. Zhu, D. Wang, Non-symmetrical Bouc–Wen model for piezoelectric ceramic actuators, Sens. Actuators A: Phys. 181 (2012) 51–60.
[6] M.A. Krasnosel'skii, A.V. Pokrovskii, Systems with Hysteresis, Springer-Verlag, New York, 1989.
[7] M. Yang, G. Gu, L. Zhu, Parameter identification of the generalized Prandtl–Ishlinskii model for piezoelectric actuators using modified particle swarm optimization, Sens. Actuators A: Phys. 189 (2013) 254–265.
[8] G. Taga, Y. Yamaguchi, H. Shimizu, Self-organized control of bipedal locomotion by neural oscillators in unpredictable environment, Biol. Cybern. 65 (1991) 147–159.
[9] A. Nafalski, B.G. Hoskins, A. Kundu, T. Doan, The use of neural networks in describing magnetization phenomena, J. Magn. Magn. Mater. 160 (1996) 84–86.
[10] C.L. Hwang, C. Jan, Y.H. Chen, Piezomechanic using intelligent variable-structure control, IEEE Trans. Ind. Electron. 48 (1) (2001) 47–59.
[11] J.D. Wei, C.T. Sun, Constructing hysteresis memory in neural networks, IEEE Trans. Syst. Man Cybern. 30 (4) (2000) 601–609.
[12] Z. Tong, Y. Tan, X. Zeng, Modeling hysteresis using hybrid method of continuous transformation and neural networks, Sens. Actuators A: Phys. 119 (1) (2005) 254–262.

[13] L. Ma, Y. Tan, Y. Chu, Improved EHM-based NN hysteresis model, Sens. Actuators A: Phys. 141 (1) (2008) 6–12. [14] R. Dong, Y. Tan, A neural networks based model for rate-dependent hysteresis for piezoceramic actuators, Sens. Actuators A: Phys. 143 (2) (2008) 370–376. [15] X. Zhang, Y. Tan, A hybrid model for rate-dependent hysteresis in piezoelectric actuators, Sens. Actuators A: Phys. 157 (1) (2010) 54–60. [16] R.B. Gorbert, Control of Hysteretic System with Preisach Representation (Ph.D. thesis), University of Waterloo, Ontario, 1997. [17] K. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Netw. 2 (3) (1989) 183–192.

Biographies

Lianwei Ma received his PhD degree in control theory and control engineering from Shanghai Jiaotong University, Shanghai, China, in 2008. He has been working at Zhejiang University of Science and Technology since 2008, where he has been an associate professor since 2010. His research interests include modeling and control of nonlinear systems, industrial process control and solar thermal technology.

Yu Shen received her PhD degree in physics from Zhejiang University, Hangzhou, China, in 2008. She has been working at Zhejiang University of Science and Technology since 2008, where she has been an associate professor since 2010. Her research interests include polymer simulation and optical manipulation methods, smart materials and solar power systems.

Jinrong Li received her PhD degree in physics from Zhejiang University, Hangzhou, China, in 2013. She has been working at Zhejiang University of Science and Technology since 2003. Her research interests include spectrum analysis and intelligent algorithms.

Xinlong Zhao received his PhD degree in control theory and control engineering from Shanghai Jiaotong University, Shanghai, China, in 2007. He has been working at Zhejiang Sci-Tech University since 2007, where he has been an associate professor since 2009. His research interests include intelligent control of nonlinear systems, neural network modeling and control.