PARAMETER ESTIMATION FOR HYSTERETIC SYSTEMS

M. YAR and J. K. HAMMOND

Institute of Sound and Vibration Research, University of Southampton, Southampton SO9 5NH, England

(Received 9 August 1986, and in revised form 13 November 1986)

Journal of Sound and Vibration (1987) 117(1), 161-172. © 1987 Academic Press Limited
The differential system characterization of hysteretic systems is well known. The problem of estimating the parameters of this system on the basis of input-output data, possibly noise corrupted, is considered. It turns out that the estimation problem is a non-linear optimization problem. The Gauss-Newton method is used in setting up a two-stage iterative least squares algorithm. The usefulness of the algorithm is validated through its application to various simulated time histories from the hysteretic model.

1. INTRODUCTION

In the commonly used methods of structural identification, such as modal analysis, the linearity of the structure is invariably assumed. This is a good practical approximation as long as the amplitude of the input remains small. As the amplitude of the input is increased, many structures behave remarkably differently compared with their behaviour at small input amplitudes, an indication of possible non-linearity in the structure. This difference in behaviour could well be attributed to non-linearity in damping, stiffness, or both. The characterization and identification of this non-linearity is a problem of considerable theoretical and practical interest.

The class of hysteretic systems is typical of non-linear systems which involve non-linearity in both damping and stiffness. The study of this class of systems goes back to Caughey [1], who applied averaging techniques due to Bogoliubov and Mitropolsky to bilinear hysteretic systems, the simplest kind of hysteretic system. However, progress on hysteretic systems remained extremely slow, possibly because of the lack of a proper model for simulating these systems. A very effective method of simulating these systems was proposed by Bouc [2] in the form of a differential system. This differential system model was further developed and generalized by Wen and his colleagues [3-5].
The model is indeed versatile and can produce a variety of hysteresis loops, except the bilinear loop originally studied by Caughey [1]. A modified form of this hysteretic model was suggested by Yar and Hammond [6] to include bilinear hysteresis systems as well. Here the identification of this model in terms of its parameters is considered. The survey paper by Young [7] contains various approaches for estimating the parameters of differential equations on the basis of input-output data; unfortunately, it contains no work on the estimation of hysteretic systems. The assumption underlying most parameter estimation techniques is the analyticity of the regression function, while for hysteretic systems the regression function is multiple valued and non-analytic. Andronikou and Bekey [8] considered the identifiability of the parameters of the bilinear hysteretic system in terms of an analytic approximation. Later, Masri et al. [9] suggested a method of fitting an analytical model to hysteretic systems as well. In this paper a method of directly estimating the parameters of hysteretic models on the basis of possibly noise corrupted input-output data is suggested. The method is based


upon Gauss-Newton iterations. A similar approach has been used by Mottershead and Stanway [10] in the identification of the nth power velocity damping model due to Jacobsen [11]. In section 2 the two-stage least squares algorithm for the estimation of the hysteretic model is set up; in particular, it is noted that the estimation problem is a non-linear one. The sampling properties of the estimates are described in section 3, without going into mathematical details. To demonstrate the usefulness of the algorithm, detailed simulation results are presented in section 4. Finally, in section 5, the scope of the method and its possible utility in the detection of hysteresis phenomena in general are discussed.

2. PARAMETER ESTIMATION

The differential system model used by Bouc [2] to describe hysteretic systems is

 ẍ(t) + aẋ(t) + bx(t) + ηz₁(t) = f(t),
 ż₁(t) = A₁ẋ(t) − γ₁|ẋ(t)||z₁(t)|^(n−1)z₁(t) − β₁ẋ(t)|z₁(t)|^n,
 ẋ(0) = ẋ₀,   x(0) = x₀,   z₁(0) = z₀.   (1)
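As a concrete reading of the hysteretic evolution law in equations (1), the right-hand side of the ż₁ equation can be written as a one-line function (a sketch; the function name is mine, and n > 0 is assumed):

```python
def bouc_wen_rate(xdot, z, A, gamma, beta, n):
    """z' = A x' - gamma |x'| |z|^(n-1) z - beta x' |z|^n  (n > 0)."""
    az = abs(z)
    return A*xdot - gamma*abs(xdot)*az**(n - 1)*z - beta*xdot*az**n
```

Note that the rate is odd under (ẋ, z) → (−ẋ, −z), which is what makes the resulting hysteresis loops symmetric about the origin.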

Here (˙), and subsequently ( )′, denotes the derivative with respect to time, f(t) is the input and z₁(t) is the hysteretic force. Wen [3] restricted n to be a positive integer; in fact n need not be an integer and can be any positive real number. This differential system can be thought of as a suitable mathematical model for a single degree of freedom system in which the mass is attached to two springs, one linear along with a dashpot (viscous damper) and the other a non-linear spring along with a Coulomb slider, as shown in Figure 1(a). The restoring force characteristic of the non-linear spring in the absence of the slider is of the form shown in Figure 1(b). The presence of the slider changes the restoring force characteristic of the non-linear spring to that shown in Figure 1(c). This kind of characteristic is referred to as hysteretic because the restoring force depends not only on the instantaneous displacement but also on the entire history of the displacement, through its dependence on the velocity.

An inspection of equations (1) shows immediately that the model involves some redundant parameters. By introducing the transformation

 z(t) = ηz₁(t),   A = A₁η,   γ = γ₁/η^(n−1),   β = β₁/η^(n−1),

the differential equations of system (1) can be written as

 ẍ(t) + aẋ(t) + bx(t) + z(t) = f(t),
 ż(t) = Aẋ(t) − γ|ẋ(t)||z(t)|^(n−1)z(t) − βẋ(t)|z(t)|^n,   (2a)

which reduces the number of parameters to be identified by one. Equations (2a) will be referred to as the system equations. Since, in practice, measurements would invariably be noise corrupted (with the noise usually assumed to be additive), the observations can be assumed to be of the form

 y(t) = ẍ(t) + ε(t).   (2b)
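A minimal sketch of how the system equations (2a) and the observation equation (2b) can be simulated, anticipating the fourth-order Runge-Kutta scheme used in section 4 (the function name and the illustrative input are my own; a smooth input is assumed here, since classical RK4 is not suited to white-noise forcing):

```python
import numpy as np

def simulate(f, th, dt=1e-3, T=5.0, noise_var=25.0, seed=0):
    """Integrate (2a) with classical fourth-order Runge-Kutta and return
    noisy acceleration observations y(t) = x''(t) + eps(t) as in (2b)."""
    a, b, A, g, be, n = th

    def rhs(t, s):
        x, v, z = s
        return np.array([v,
                         f(t) - a*v - b*x - z,
                         A*v - g*abs(v)*abs(z)**(n - 1)*z - be*v*abs(z)**n])

    rng = np.random.default_rng(seed)
    steps = int(round(T/dt))
    s = np.zeros(3)                              # zero initial conditions
    y = np.empty(steps)
    for k in range(steps):
        t = k*dt
        y[k] = f(t) - a*s[1] - b*s[0] - s[2]     # model acceleration x''(t)
        k1 = rhs(t, s)
        k2 = rhs(t + dt/2, s + dt/2*k1)
        k3 = rhs(t + dt/2, s + dt/2*k2)
        k4 = rhs(t + dt, s + dt*k3)
        s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return y + rng.normal(0.0, noise_var**0.5, steps)
```

Setting noise_var to zero recovers the noise-free acceleration record.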

Equation (2b) henceforth will be referred to as the observation equation. It is further assumed that (i) ε(t) is zero-mean Gaussian white noise with variance σε², (ii) the input f(t) is zero-mean Gaussian white noise that can be measured noise-free and is independent of the observation noise ε(t), and (iii) the initial conditions x(0), ẋ(0), z(0) are known. For the sake of simplicity it is assumed that x(0) = ẋ(0) = z(0) = 0.

Figure 1. (a) Physical realization of the hysteretic system (linear spring and dashpot in parallel with a non-linear spring and Coulomb slider); (b) non-hysteretic force against displacement; (c) hysteretic force against displacement.

The estimation problem consists of estimating the parameters a, b, A, γ, β, n, σε² on the basis of the data {f(t), y(t): t ∈ [0, T]}, where T is the sampling time. Note that z is a complicated non-linear function of x(t), ẋ(t), A, γ, β, n. The simplest possible case is n = 0, and the solution for equations (2a) is

 z = (A₂ − γ)x + c₁   for z ≥ 0, ẋ ≥ 0,
   = (A₂ + γ)x + c₂   for z ≥ 0, ẋ ≤ 0,
   = (A₂ + γ)x + c₃   for z ≤ 0, ẋ ≥ 0,
   = (A₂ − γ)x + c₄   for z ≤ 0, ẋ ≤ 0,   (3)

with the constants c₁, …, c₄ fixed by continuity of z at the velocity reversal points,

where A₂ = A − β. Thus one can see that z is a non-linear function of A, γ, n and the parameter β is not identifiable (for details see reference [6]). The output ẍ(t) is hence a non-linear function of the parameters A, γ, β, n. As in any estimation problem, an important step is the choice of the cost functional for minimization. Since the noise process ε(t) is assumed to be Gaussian, the least squares estimates coincide with the maximum likelihood estimates. Hence the integrated mean squared error

is selected for minimization: i.e.,

 Q = (1/T) ∫₀ᵀ [y(t) − ẍ(t)]² dt,   (4)

with respect to the parameters a, b, A, γ, β, n. Here two cases must be distinguished: Case 1, the acceleration ẍ(t) can be measured noise-free; Case 2, the acceleration ẍ(t) cannot be measured noise-free. In Case 1, ẋ(t), x(t) can be obtained by integration, and the output y(t) is then a linear function of the parameters a, b and a non-linear function of the parameters A, γ, β, n. The linear parameters can easily be eliminated according to a procedure suggested by Lawton and Sylvestre [12], so that the minimization of Q reduces to minimization with respect to the four parameters A, γ, β, n. Case 2, which is more realistic, is far more complicated. As is well known (see, for example, reference [7]), integration or differentiation of noisy data is a very serious problem; one can therefore no longer assume that x(t), ẋ(t), which depend in a non-linear fashion on a, b, are available. Hence the output y(t) is a non-linear function of all the parameters a, b, A, γ, β, n. For the sake of simplicity the following notation is used: θ₁ = a, θ₂ = b, θ₃ = A, θ₄ = γ, θ₅ = β, θ₆ = n; θ = (θ₁ θ₂ θ₃ θ₄ θ₅ θ₆)ᵀ. Differentiating Q with respect to θᵢ, i = 1, …, 6, gives the system of normal equations

 (1/T) ∫₀ᵀ [y(t) − ẍ(t)] (∂ẍ/∂θᵢ) dt = 0,   i = 1, 2, …, 6.   (5)

Since ẍ(t) is a non-linear function of all the parameters θᵢ, the system of equations (5) can only be solved iteratively. There are a number of methods for solving the system (5), but the simplest is the Gauss-Newton method. The Gauss-Newton iterative procedure for solving equations (5) is

 θ̂ₘ₊₁ = θ̂ₘ + Sₘ⁻¹ Yₘ,   (6)

where

 S = [sᵢⱼ],   sᵢⱼ = (1/T) ∫₀ᵀ (∂ẍ/∂θᵢ)(∂ẍ/∂θⱼ) dt,   i, j = 1, 2, …, 6,

 Y = (Y₁ Y₂ Y₃ Y₄ Y₅ Y₆)ᵀ,   Yᵢ = (1/T) ∫₀ᵀ [y(t) − ẍ(t)] (∂ẍ/∂θᵢ) dt,   i = 1, 2, …, 6.
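In matrix terms, with the residuals y − ẍ and the sensitivities ∂ẍ/∂θᵢ stacked over the time samples, one Gauss-Newton update of the form (6) can be sketched as follows (the helper signatures are hypothetical, not from the paper):

```python
import numpy as np

def gauss_newton_step(theta, residual, jacobian):
    # One iteration of (6): theta <- theta + S^{-1} Y, with
    # S = (1/T) J^T J and Y = (1/T) J^T r, where r_t = y(t) - x''(t; theta)
    # and J[t, i] = d x''(t; theta) / d theta_i.
    r = residual(theta)
    J = jacobian(theta)
    T = len(r)
    S = J.T @ J / T
    Y = J.T @ r / T
    return theta + np.linalg.solve(S, Y)
```

When the model is linear in the parameters, a single step reaches the least squares solution exactly; for the hysteretic model the step must be iterated.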

Here the subscript m denotes computation of the matrix S or the vector Y with the parameter vector θ replaced by its estimate at the mth iteration. To carry out the iterations one needs ẍ(t) and ∂ẍ/∂θᵢ, i = 1, 2, …, 6. A system of coupled differential equations which enables these quantities to be computed, once an estimate of θ is available, is set up in the Appendix. To start the iteration (6) a good initial estimate of the vector θ is needed, and this is not easy to obtain because of the large dimension of the problem. Therefore the following two-stage procedure is suggested for carrying out the iteration (6). This technique is common in regression analysis (see, for example, reference [13]).

2.1. FIRST STAGE

In the first stage the parameter n is kept fixed and Q is minimized over the five parameters a, b, A, γ, β. Noting that n is a positive real number, one can choose a suitable positive value n₀ as an initial estimate of n and keep it fixed during this stage. The iterative equations (6) in this case become

 θ̃ₘ₊₁ = θ̃ₘ + S̃ₘ⁻¹ Ỹₘ,   (7)

where θ̃ = (a b A γ β)ᵀ, S̃ = [s̃ᵢⱼ], i, j = 1, 2, …, 5, and Ỹ = (Ỹ₁ Ỹ₂ … Ỹ₅)ᵀ. The matrix S̃ and the vector Ỹ at each iteration are computed by solving the system of equations (A2) set up in the Appendix. To start the iteration (7) one needs a good initial estimate of θ̃. If initial estimates of a, b, say a₀, b₀, are available, then initial estimates of A, γ, β can be obtained from the following system of equations:

 ⟨ż ẋ⟩ = A⟨ẋ²⟩ − γ⟨|ẋ||z|^(n₀−1)z ẋ⟩ − β⟨ẋ²|z|^(n₀)⟩,
 ⟨ż |ẋ||z|^(n₀−1)z⟩ = A⟨ẋ|ẋ||z|^(n₀−1)z⟩ − γ⟨(|ẋ||z|^(n₀−1)z)²⟩ − β⟨|ẋ||z|^(n₀−1)z ẋ|z|^(n₀)⟩,
 ⟨ż ẋ|z|^(n₀)⟩ = A⟨ẋ²|z|^(n₀)⟩ − γ⟨|ẋ||z|^(n₀−1)z ẋ|z|^(n₀)⟩ − β⟨(ẋ|z|^(n₀))²⟩.   (8)

In equations (8) z, ż and ẋ are respectively replaced by the estimates ẑ, ẑ′ and x̂′ defined by

 ẑ(t) = f(t) − y(t) − a₀x̂′(t) − b₀x̂(t),
 x̂′(t) = ∫₀ᵗ y(u) du,   x̂(t) = ∫₀ᵗ ∫₀^(u₂) y(u₁) du₁ du₂,
 ẑ′(t) = [ẑ(t + Δ) − ẑ(t − Δ)]/(2Δ),   Δ = 1/(sampling rate).
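Numerically, with sampled data, the estimates x̂′, x̂, ẑ and ẑ′ can be formed with the trapezoidal rule and central differences; a sketch (function names are my own):

```python
import numpy as np

def cumtrapz0(g, dt):
    # cumulative trapezoidal integral of the samples g, zero at t = 0
    out = np.zeros_like(g)
    out[1:] = np.cumsum((g[1:] + g[:-1])*dt/2.0)
    return out

def reconstruct(y, f, a0, b0, dt):
    v_hat = cumtrapz0(y, dt)               # x-hat'(t): estimate of velocity
    x_hat = cumtrapz0(v_hat, dt)           # x-hat(t): estimate of displacement
    z_hat = f - y - a0*v_hat - b0*x_hat    # z-hat(t): hysteretic-force estimate
    zdot_hat = np.gradient(z_hat, dt)      # z-hat'(t): central differences
    return v_hat, x_hat, z_hat, zdot_hat
```

The central-difference endpoints fall back to one-sided differences, so in practice the first and last samples are usually discarded.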

Here ⟨·⟩ denotes the time average: for example, ⟨ẋ²⟩ = (1/T) ∫₀ᵀ ẋ²(t) dt. Thus, if one has initial estimates of a, b, initial estimates of A, γ, β can be obtained. For the choice of initial estimates of a, b one must search over the parameter space for a, b. In practice it was found sufficient to search on the plane (0, â_eq) × (0, b̂_eq), where â_eq, b̂_eq are the estimates of a, b obtained by fitting a linear model to the data. The linear model can be fitted by solving the normal equations

 (1/T) ∫₀ᵀ [f(t) − ẍ(t)]ẋ(t) dt = â_eq (1/T) ∫₀ᵀ ẋ²(t) dt + b̂_eq (1/T) ∫₀ᵀ x(t)ẋ(t) dt,
 (1/T) ∫₀ᵀ [f(t) − ẍ(t)]x(t) dt = â_eq (1/T) ∫₀ᵀ ẋ(t)x(t) dt + b̂_eq (1/T) ∫₀ᵀ x²(t) dt.   (9)

Here ẍ(t), ẋ(t), x(t) are respectively replaced by y(t), x̂′(t), x̂(t) defined by equation (8). It is known that if the system is linear and the observations are noise-free, this leads to efficient estimates of a, b. If hysteresis is present, this will lead to biased estimates of a, b; the presence of noise further increases this bias.

The effect of noise on integration needs particular attention. Consider the estimate of velocity obtained by integrating y(t): namely,

 ∫₀ᵗ y(u) du = ẋ(t) + ε₁(t),   ε₁(t) = ∫₀ᵗ ε(u) du.

Note that E[ε₁(t)] = 0 and E[ε₁(t₁)ε₁(t₂)] = σε² min(t₁, t₂). Thus the estimate of velocity obtained through integration is noise corrupted, and this noise process ε₁(t) is zero-mean Gaussian with variance increasing linearly with time (more specifically, a Wiener process). Hence the estimate of velocity deteriorates as time increases. The situation is even worse for the estimate of displacement obtained through double integration of y(t). Therefore, to obtain reliable initial estimates, T in equations (8) and (9) should be chosen as small as possible. If T is chosen too small, the


averages involved in the systems of equations (8) and (9) would themselves be very poor estimates, leading to poor initial estimates. In practice, T equal to one second was found to be a reasonable compromise between the accuracy of the integration and the quality of the displacement and velocity estimates. To carry out the iterations (7) one selects a suitable initial estimate of a, b on the plane (0, â_eq) × (0, b̂_eq), computes the initial estimates of A, γ, β from equations (8), and then carries out the iteration (7). If the convergence fails, one changes the initial estimates of a, b, and hence of A, γ, β, and repeats the iteration (7). This is continued until convergence takes place, after which one proceeds to the second stage.

2.2. SECOND STAGE

In the second stage Q is minimized over the six parameters a, b, A, γ, β, n. After convergence in the first stage one uses n₀ and the final estimate of θ̃ as an initial estimate of θ = (θ̃ᵀ n)ᵀ and then carries out the iterations (6). If convergence takes place, the converged value of θ is taken as the final estimate of θ. If convergence fails at this stage, it is usually because of a poor initial estimate of n; one then changes the initial estimate of n and goes back to the first stage. This procedure is continued until convergence occurs in both stages. From simulations it was found that an initial estimate of n on the higher side of the true value of n works better than one on the lower side. Based on the above considerations, one has the following algorithm for the minimization of Q.

2.3. ALGORITHM

(i) Integrate y(t) over the first second to obtain estimates of ẋ(t), x(t) and solve equations (9) to obtain â_eq, b̂_eq.
(ii) Select a suitable positive real number n₀ as an initial estimate of n, preferably on the higher side of the true value of n.
(iii) Select suitable initial estimates of a, b on the plane (0, â_eq) × (0, b̂_eq) and compute the initial estimates of A, γ, β by solving the system of equations (8).
(iv) Carry out the iteration (7). If convergence fails, go to step (iii) and change the initial estimates of a, b.
(v) When convergence in (iv) has taken place, take the converged value of θ̃ and n₀ as an initial estimate of θ and carry out the iteration (6). If convergence fails, go back to (ii) and choose a new initial estimate of n. Continue until convergence occurs in both stages.
Finally one estimates the noise variance through σ̂ε² = (1/T) ∫₀ᵀ [y(t) − ẍ(t)]² dt, where ẍ(t) is obtained by solving the system of equations (A2) for the converged value of θ.
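The Wiener-process behaviour of the integration noise ε₁(t) noted in section 2.1, which motivates keeping T small in step (i), is easy to verify by Monte Carlo (the numerical values here are illustrative, not from the paper):

```python
import numpy as np

# eps1(t) = integral of white noise eps of intensity sigma2 up to time t:
# its variance grows linearly in t, so doubling t roughly doubles the
# variance across independent realisations.
rng = np.random.default_rng(1)
dt, steps, paths, sigma2 = 2e-3, 500, 8000, 25.0
eps = rng.normal(0.0, np.sqrt(sigma2/dt), (paths, steps))
eps1 = np.cumsum(eps, axis=1) * dt          # integrated noise paths
var_half = eps1[:, steps//2 - 1].var()      # sample variance at t = 0.5
var_full = eps1[:, -1].var()                # sample variance at t = 1.0
```

With sigma2 = 25 the variance at t = 1 is close to 25, i.e. the velocity estimate is already as noisy as the acceleration record itself after one second.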

3. SAMPLING PROPERTIES OF THE ESTIMATES

The sampling properties of the estimates obtained in the preceding section are stated here without going into mathematical details. Jennrich [14] has studied the sampling properties of non-linear least squares estimators in discrete time, and arguments similar to his can be applied in continuous time. Lee and Kozin [15] have studied the properties of estimates of the parameters of stochastic differential equations. The following theorem can be proved by using the arguments in references [14] and [15], but its formal proof would involve heavy mathematics.

Theorem. Let θ̂(T) be the estimate of the vector θ which minimizes the integrated mean squared error Q = (1/T) ∫₀ᵀ [y(t) − ẍ(t)]² dt. Then, under the regularity conditions (i) ε(t) is zero-mean Gaussian white noise with variance σε², (ii) Q has a unique minimum


at the true value of the parameter vector θ, (iii) ẍ(t), ẋ(t), x(t), z(t) exist and are continuous functions of the parameter vector θ, so that the moments (1/T) ∫₀ᵀ ẍ²(t) dt and (1/T) ∫₀ᵀ (∂ẍ/∂θᵢ)(∂ẍ/∂θⱼ) dt, i, j = 1, 2, …, 6, exist and are finite, and (iv) the matrix S(T) = [sᵢⱼ], where sᵢⱼ = (1/T) ∫₀ᵀ (∂ẍ/∂θᵢ)(∂ẍ/∂θⱼ) dt, is positive definite, the estimate θ̂(T) has the following properties: (a) lim T→∞ θ̂(T) = θ with probability 1; (b) T^(1/2)(θ̂(T) − θ) converges in distribution to N₆(0, σε² Σ⁻¹), where Σ = lim T→∞ S(T). Here the symbol N₆ stands for the multivariate normal distribution of six random variables.

In passing, the following observations can be made on the identifiability of the parameters. In section 2 it was pointed out that for n = 0 the parameter β is unidentifiable. Similarly, if γ = β = 0 then the solution of equations (2a) is z = Ax and equation (2a) becomes ẍ + aẋ + (b + A)x = f(t), so that the parameters A and b become unidentifiable; one can only identify (A + b). Thus the problem of identifiability of the parameters is also very important. However, by using an argument similar to that of Rothenberg [16] it can be shown that, under the assumptions of the previous theorem, the parameters a, b, A, γ, β, n are uniquely identifiable.

It is to be noted that the conditions imposed by the theorem are quite severe. Condition (iii) in particular requires the existence of x(t), ẋ(t), z(t), ∂ẍ/∂θᵢ, i = 1, 2, …, 6, and the finiteness of the moments (1/T) ∫₀ᵀ ẍ²(t) dt and (1/T) ∫₀ᵀ (∂ẍ/∂θᵢ)(∂ẍ/∂θⱼ) dt, i, j = 1, 2, …, 6. This means that the system of equations (A2) given in the Appendix should be stable. Similarly, condition (iv) requires the matrix S to be non-singular. Using these two conditions one can at least numerically check whether a certain point in parameter space is identifiable or not.
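Part (b) of the theorem suggests a practical covariance estimate for the converged parameters, σ̂ε² S(T)⁻¹/T; a sketch (the helper name is mine, and the residuals and sensitivities are assumed to be available as stacked samples):

```python
import numpy as np

def asymptotic_cov(J, r, T):
    # Covariance estimate in the spirit of part (b) of the theorem:
    # sigma2_eps * S^{-1} / T, with S built from the stacked sensitivities
    # J[t, i] = d x''(t)/d theta_i, and sigma2_eps from the residuals r.
    m = len(r)
    sigma2 = np.mean(r**2)
    S = J.T @ J / m
    return sigma2 * np.linalg.inv(S) / T
```

A nearly singular S here is the numerical symptom of the identifiability failures discussed above.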

The algorithm set up in section differential equations 2(t)+&+bx+z=f(t),

2 was applied

RESULTS

to many sets of simulated

i = Af - +lzIzI”-’ y(t)=qt)+&(t).

-/3i[zln,

data from the (24 (2b)
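The input used in these simulations is δ-correlated Gaussian white noise (described next). One common way of approximating such an input on the simulation grid is by independent Gaussian samples whose variance scales inversely with the step; the paper does not state the discrete variance it used, so this is an assumption:

```python
import numpy as np

def white_noise_input(c, dt, steps, seed=0):
    # E[f(t1) f(t2)] = c * delta(t1 - t2) is approximated on a grid of
    # spacing dt by independent Gaussian samples of variance c / dt.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(c/dt), steps)
```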

The input process f(t) was taken to be Gaussian white noise with the structure E[f(t)] = 0, E[f(t₁)f(t₂)] = 1000² δ(t₁ − t₂). Equation (2a) was solved numerically by using the fourth-order Runge-Kutta method. The observation process y(t) was obtained by adding zero-mean Gaussian white noise with variance 25.0 to the acceleration ẍ(t) obtained from equation (2a). The observation noise ε(t) was taken to be independent of the input process f(t). The sampling rate was taken to be 1000 and the sampling time T = 5 seconds. The algorithm of section 2 was programmed on a VAX 11/750 computer, and its details are available on request from the authors. At each iteration one needs to solve equations (A2) of the Appendix and compute the matrix S and the vector Y. The differential equations (A2) were always solved by the fourth-order Runge-Kutta method; integration during the computation of S and Y was performed by the simple trapezoidal rule. The algorithm was applied successfully to many simulated data sets, and the results presented below are typical of many such simulations.

4.1. EXAMPLE 1

For this example the true values of the parameters are a = 4.0, b = 100, A = 100, γ = 1.0, β = 0.5, n = 2 and σε² = 25. The time history y(t) was formally integrated over the first second to obtain estimates of x(t) and ẋ(t). The solution of equations (9) leads to the following estimates of a and b: â_eq = 4.58726, b̂_eq = 103.95094. We chose an initial estimate n₀ = 1.0 and scanned over the plane (0.0, 4.58726) × (0.0, 103.95094) for initial estimates of a, b. The convergence


did take place quite a few times in Stage 1 but failed in Stage 2; either there was computer overflow or the matrix S became singular. We changed the initial estimate of n to 1.5 and 1.8, but this never resulted in convergence in the second stage. For the initial estimate n₀ = 2.0, convergence did take place in both stages for a number of initial estimates of a, b. The details of the iterations for Stage 1 with initial estimates a₀ = 1.0, b₀ = 80.0 and n₀ = 2.0 are given in Table 1, and the results for Stage 2 are listed in Table 2. The estimate of the variance-covariance matrix of the estimate θ̂ is given in Table 3. In order to check

TABLE 1
First stage estimation for example 1

Iteration    â          b̂           Â          γ̂          β̂          n̂         σ̂ε²
    1      1.00000    80.00000   115.53519   0.14575   -0.13408   2.00000   2785.58057
    2      2.92985   107.26022    35.26022   0.59673   -0.57549   2.00000   1149.56104
    3      3.64756    92.04124    28.99928   1.35052   -1.31912   2.00000    173.64186
    4      3.97420   101.68089    15.78903   3.80281   -3.73550   2.00000    112.20616
    5      4.01373    99.63998    20.40802   5.48554   -5.32198   2.00000     40.47458
    6      3.96962    99.97878    33.24634   3.28910   -2.92282   2.00000     29.83219
    7      3.92437    99.92129    56.23552   1.88939   -1.21511   2.00000     25.94943
    8      3.91659    99.89250    80.55399   1.48817   -0.50820   2.00000     24.94776
    9      3.91544    99.85846    92.10300   1.32348   -0.20554   2.00000     24.85061
   10      3.91562    99.85551    93.72224   1.30693   -0.16937   2.00000     24.84914

TABLE 2
Second stage estimation for example 1

Iteration    â          b̂           Â          γ̂          β̂          n̂         σ̂ε²
    1      3.91567    99.85568    93.79686   1.30571   -0.16705   2.00000     24.84909
    2      3.91481    99.86632    90.28995   0.38598   -0.24705   2.38084    115.06146
    3      3.91515    99.97556    80.01845   0.83385   -0.51891   2.23099     45.14017
    4      3.91403    99.88279    83.72616   1.14366   -0.58237   2.15223     27.93111
    5      3.91373    99.87125    87.99901   0.64010   -0.17375   2.35046     24.97046
    6      3.91428    99.86546    89.81642   0.51642   -0.16104   2.49013     24.89123
    7      3.91412    99.86702    89.84675   0.45716   -0.14634   2.56330     24.84509
    8      3.91419    99.86764    89.89396   0.44453   -0.14260   2.58112     24.84178
    9      3.91422    99.86794    89.91419   0.44112   -0.14154   2.58505     24.84175
   10      3.91423    99.86798    89.91367   0.44016   -0.14132   2.58618     24.84172

TABLE 3
Estimate of the variance-covariance matrix of (â − a, b̂ − b, Â − A, γ̂ − γ, β̂ − β, n̂ − n) [the matrix entries are not reliably recoverable from the source]

the uniqueness of the convergence, the initial estimates a₀ = 1.0, b₀ = 80.0 and n₀ = 2.5 were also tried. In this case the convergence was more rapid than for the previous initial estimates, suggesting that an initial estimate of n on the higher side of the true value of n works better.

4.2. EXAMPLE 2

For this example the true values of the parameters are a = 4.0, b = 100.0, A = 100.0, γ = 1.0, β = 0.5 and n = 1.0. The solution of equations (9) yields the following estimates of a and b: â_eq = 5.89497, b̂_eq = 141.62836. Encouraged by the previous example, we tried the initial estimates a₀ = 1.0, b₀ = 80.0 and n₀ = 2.0. Convergence did take place in the first stage but failed during the second stage. We then tried the initial estimates a₀ = 1.0, b₀ = 80.0 and n₀ = 1.5; in this case convergence took place in both stages. The details for Stages 1 and 2 are respectively given in Tables 4 and 5, while the estimate of the variance-covariance matrix of the estimates is given in Table 6.

In order to examine the effect of the noise level σε² on the estimates, examples 1 and 2 were repeated with noise level σε² equal to 100. In example 1 the convergence always failed in Stage 2, while in example 2 the convergence did take place but the estimates were poor as compared with those for the noise level σε² equal to 25.0. Due to lack of space the simulations in this case cannot be presented here.

TABLE 4
First stage estimation for example 2

Iteration    â          b̂           Â          γ̂          β̂          n̂         σ̂ε²
    1      1.00000    80.00000   204.17546   0.23915   -0.11982   1.50000   2292.27881
    2      2.66899    98.34393   104.52644   0.50427   -0.36170   1.50000    147.44064
    3      3.92516   104.21635    79.93697   0.20708   -0.03923   1.50000     29.71565
    4      3.90737   102.70135    87.08479   0.19285   -0.00949   1.50000     26.50901
    5      3.90411   102.80966    87.41026   0.19391   -0.00958   1.50000     26.47370
    6      3.90432   102.82671    87.42284   0.19389   -0.00939   1.50000     26.47364
    7      3.90431   102.82934    87.42188   0.19391   -0.00939   1.50000     26.47362
    8      3.90431   102.82964    87.42176   0.19391   -0.00939   1.50000     26.47359
    9      3.90431   102.82968    87.42175   0.19391   -0.00939   1.50000     26.47361
   10      3.90431   102.82969    87.42175   0.19391   -0.00939   1.50000     26.47361

TABLE 5
Second stage estimation for example 2

Iteration    â          b̂           Â          γ̂          β̂          n̂         σ̂ε²
    1      3.90431   102.82968    87.42175   0.19391   -0.00939   1.50000     26.47360
    2      3.89058    99.24342    97.60777   0.55895    0.04036   0.94313    966.95813
    3      3.78787    58.81675   136.34276   1.56890    0.27876   0.71176    494.79639
    4      3.83324    81.11351   111.83883   1.21403    0.45288   0.90641     30.68360
    5      3.91114   100.59373    99.14720   1.25476    0.52886   0.94860     26.13667
    6      3.88775    98.39251   100.52387   0.95902    0.31340   1.01862     28.30124
    7      3.88829    98.52274   100.31978   0.97946    0.33681   1.02258     24.72784
    8      3.88658    98.48316   100.51664   0.98702    0.34591   1.01993     24.72690
    9      3.88661    98.48165   100.52201   0.98741    0.34630   1.01982     24.72689
   10      3.88661    98.48159   100.52232   0.98743    0.34632   1.01981     24.72689

TABLE 6
Estimate of the variance-covariance matrix of (â − a, b̂ − b, Â − A, γ̂ − γ, β̂ − β, n̂ − n) [the matrix entries are not reliably recoverable from the source]

5. CONCLUSIONS

A method of estimating the parameters of a hysteretic model on the basis of input-output data has been suggested. The output is assumed to be noise corrupted acceleration, which is the usual case in structural vibration. However, the method is general enough to be applicable even when possibly noise corrupted data are given on displacement, velocity, or both; in such cases it may be necessary to select an appropriate cost functional for minimization, but the rest of the procedure is exactly the same as outlined for the case of acceleration.

For the simulation results presented, the excitation process f(t) was taken to be Gaussian white noise. There is no basis for preferring Gaussian white noise over other excitation signals such as a sinewave or swept sinewave. We carried out some simulations with f(t) taken to be a sinewave and obtained results exactly similar to those for Gaussian white noise. The only condition found was that the system should be properly excited for the hysteresis non-linearity to manifest itself; for example, if the level of the input signal is too small, one may have serious convergence problems.

The theorem stated in section 3 shows that the estimates are asymptotically efficient but gives no indication of the level of noise which is tolerable. A certain level of noise may be tolerable for one particular parameter value but not for another. However, the effect of noise can be reduced by taking more data: i.e., at more computational cost.

Throughout the discussion it has been assumed that the initial conditions x(0), ẋ(0), z(0) are zero: i.e., they are known. If they are unknown they must be estimated from the data, which increases the number of parameters from seven to ten. However, one can augment the system of equations (A2) to compute the quantities required in the Gauss-Newton iterations (6) or (7). The choice of initial estimates in this case may be more crucial and complicated than in the case of known initial conditions.
In both the theory and the computation, the noise process ε(t) was assumed to be Gaussian white noise. However, the algorithm set up here is still robust even if ε(t) is non-Gaussian, as long as it is uncorrelated; the asymptotic sampling properties stated in the theorem of section 3 still hold.

REFERENCES

1. T. K. CAUGHEY 1960 Transactions of the American Society of Mechanical Engineers, Journal of Applied Mechanics 27, 640-648. Sinusoidal excitation of a system with bilinear hysteresis.
2. R. BOUC 1967 Proceedings of the Fourth Conference on Nonlinear Oscillations, Prague, 315. Forced vibration of mechanical systems with hysteresis.

3. Y. K. WEN 1976 American Society of Civil Engineers, Journal of the Engineering Mechanics Division 102, 249-263. Method for random vibration of hysteretic systems.
4. Y. K. WEN 1980 Transactions of the American Society of Mechanical Engineers, Journal of Applied Mechanics 47, 150-154. Equivalent linearization for hysteretic systems under random excitation.
5. T. T. BABER and Y. K. WEN 1981 American Society of Civil Engineers, Journal of the Engineering Mechanics Division 107, 1069-1089. Random vibration of hysteretic degrading systems.
6. M. YAR and J. K. HAMMOND 1986 (submitted to) American Society of Civil Engineers, Journal of the Engineering Mechanics Division. Modelling and response of bilinear hysteretic systems.
7. P. C. YOUNG 1981 Automatica 17, 23-39. Parameter estimation for continuous-time models - a survey.
8. A. M. ANDRONIKOU and G. A. BEKEY 1979 Proceedings of the 18th IEEE Conference on Decision and Control, 1072-1073. Identification of hysteretic systems.
9. S. F. MASRI, H. SASSI and T. K. CAUGHEY 1982 Transactions of the American Society of Mechanical Engineers, Journal of Applied Mechanics 49, 619-628. Nonparametric identification of nearly arbitrary nonlinear systems.
10. J. E. MOTTERSHEAD and R. STANWAY 1986 Journal of Sound and Vibration 105, 309-319. Identification of nth power velocity damping.
11. L. S. JACOBSEN 1930 Transactions of the American Society of Mechanical Engineers 52, 169-181. Steady forced vibration as influenced by damping.
12. W. H. LAWTON and A. E. SYLVESTRE 1971 Technometrics 13, 461-467. Elimination of linear parameters in nonlinear regressions.
13. N. R. DRAPER and H. SMITH 1981 Applied Regression Analysis (second edition). New York: John Wiley & Sons.
14. R. I. JENNRICH 1969 Annals of Mathematical Statistics 40, 633-643. Asymptotic properties of non-linear least squares estimators.
15. T. S. LEE and F. KOZIN 1977 Journal of Applied Probability 14, 527-537. Almost sure asymptotic likelihood theory for diffusion processes.
16. T. J. ROTHENBERG 1971 Econometrica 39, 577-591. Identification in parametric models.

APPENDIX

To carry out the iteration

(6) or (7) one needs to compute

 ẍ(t),   ∂ẍ/∂a,   ∂ẍ/∂b,   ∂ẍ/∂A,   ∂ẍ/∂γ,   ∂ẍ/∂β,   ∂ẍ/∂n,   (A1)

at each iteration. Given the estimates of a, b, A, γ, β, n, these quantities may be computed by solving the following system of coupled differential equations:

 ẍ + aẋ + bx + z = f(t),
 ż = Aẋ − γ|ẋ||z|^(n−1)z − βẋ|z|^n,

 (∂x/∂a)″ + a(∂x/∂a)′ + b(∂x/∂a) + (∂z/∂a) + ẋ = 0,
 (∂x/∂b)″ + a(∂x/∂b)′ + b(∂x/∂b) + (∂z/∂b) + x = 0,
 (∂x/∂θ)″ + a(∂x/∂θ)′ + b(∂x/∂θ) + (∂z/∂θ) = 0,   θ = A, γ, β, n,

 (∂z/∂θ)′ = [A − γ sgn(ẋ)|z|^(n−1)z − β|z|^n](∂x/∂θ)′ − n[γ|ẋ| + βẋ sgn(z)]|z|^(n−1)(∂z/∂θ) + h_θ,   θ = a, b, A, γ, β, n,

with

 h_a = h_b = 0,   h_A = ẋ,   h_γ = −|ẋ||z|^(n−1)z,   h_β = −ẋ|z|^n,
 h_n = −(γ|ẋ||z|^(n−1)z + βẋ|z|^n) log |z|.   (A2)

At each iteration the system of equations (A2) is solved under zero initial conditions with a, b, A, γ, β and n replaced by their estimates. This yields the quantities required to realize the algorithm.
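The coupled system (A2) can be integrated alongside the state to supply the sensitivities needed by iterations (6) and (7). A sketch, assuming n ≥ 1 to avoid a singularity of |z|^(n−1) at z = 0 (function names and the RK4 driver are my own):

```python
import numpy as np

def aug_rhs(t, s, f, th):
    # State: s = [x, x', z] followed by (dx/dth_i, dx'/dth_i, dz/dth_i)
    # for th = (a, b, A, gamma, beta, n), as in equations (A2).
    a, b, A, g, be, n = th
    x, v, z = s[:3]
    az = abs(z)
    acc = f(t) - a*v - b*x - z
    zdot = A*v - g*abs(v)*az**(n - 1)*z - be*v*az**n
    dg_dv = A - g*np.sign(v)*az**(n - 1)*z - be*az**n    # d(z')/dx'
    dg_dz = -n*(g*abs(v) + be*v*np.sign(z))*az**(n - 1)  # d(z')/dz
    logz = np.log(az) if az > 0 else 0.0
    # explicit parameter derivatives h_theta of the z-equation
    h = [0.0, 0.0, v, -abs(v)*az**(n - 1)*z, -v*az**n,
         -(g*abs(v)*az**(n - 1)*z + be*v*az**n)*logz]
    # extra forcing in the x-sensitivity equations (x' for a, x for b)
    q = [v, x, 0.0, 0.0, 0.0, 0.0]
    ds = np.empty(21)
    ds[:3] = v, acc, zdot
    for i in range(6):
        xi, vi, zi = s[3 + 3*i: 6 + 3*i]
        ds[3 + 3*i] = vi
        ds[4 + 3*i] = -a*vi - b*xi - zi - q[i]
        ds[5 + 3*i] = dg_dv*vi + dg_dz*zi + h[i]
    return ds

def integrate(f, th, dt=1e-3, T=1.0):
    # Classical fourth-order Runge-Kutta under zero initial conditions.
    s = np.zeros(21)
    for k in range(int(round(T/dt))):
        t = k*dt
        k1 = aug_rhs(t, s, f, th)
        k2 = aug_rhs(t + dt/2, s + dt/2*k1, f, th)
        k3 = aug_rhs(t + dt/2, s + dt/2*k2, f, th)
        k4 = aug_rhs(t + dt, s + dt*k3, f, th)
        s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return s
```

A useful consistency check: with γ = β = 0 the hysteretic law degenerates to ż = Aẋ, so z = Ax and ∂z/∂A = x + A ∂x/∂A hold along the whole trajectory.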