STATE–SPACE IDENTIFICATION OF VIBRATING SYSTEMS FROM MULTI-OUTPUT MEASUREMENTS


Mechanical Systems and Signal Processing (1998) 12(4), 543–558 Article No. pg980155

J. Lardies
University of Franche-Comté, L.M.A.R.C., Rue de L'Épitaphe, 25030 Besançon, France

(Received July 1997, accepted after revisions January 1998)

A time series model procedure for the synthesis of multivariate stationary random vibrations is presented. The vibrations are assumed to be the outputs of a regularly sampled, random-noise-excited, differential equation model of a vibrating system. The block Hankel matrix stochastic realisation method is used to determine the system matrices and modal parameters. The problem of the minimal representation is discussed and a new test for determining the number of states in the model is derived. This test is based on the estimation of the coefficients in a particular column of the observability matrix and has a \chi^2 distribution. A numerical example illustrates the procedure for identifying the order, parameters and spectrum of a noisy process.

© 1998 Academic Press

1. INTRODUCTION

Modal parameter identification is used to estimate those parameters of a model which describe the dynamic properties of a vibrating system. It is assumed that the vibrating system may be represented by a linear differential equation, characteristic of a lumped mass–spring–damper system excited by a random forcing function. For example, in the random wind excitation of a tall building, the wind is assumed to be an unobserved white noise vector. Similar models are used for wave-excited offshore platforms, in the surveillance of civil engineering structures and nuclear reactors, and in flight flutter testing [1–6].

Time domain approaches are available in which modal parameters are estimated from output signals only, under assumptions on the nature of the input. These methods generally assume the excitation to be a white noise process, stationary or non-stationary. They lead to estimates of the output covariance matrix and of the power spectral density matrix of the output process. Thus, natural frequencies and damping parameters of randomly excited systems are obtained from output-only measurements.

In the time domain identification of parameters of vibrating structures, vector autoregressive moving average (VARMA) or vector autoregressive (VAR) models [7–9] are often used, in spite of the considerable computational effort needed to obtain these models. Usually the VARMA model is derived from the discrete time state–space model, which is itself derived from the equations of motion of the vibrating system. VARMA models have primarily been developed by control engineers and by applied mathematicians in econometrics. However, in recent years, the application of VARMA models to the description of structural systems subjected to ambient excitation has become more common [4, 7]. Because of the increased complexity of multivariate VARMA models, it is very appealing to represent such systems in state–space form: the complexity is then reduced to the manipulation of a few matrices. If the external input is unknown, a stochastic realisation method can be used to determine the system matrices. This technique is useful for the determination of modal parameters and spectral densities of vibrating systems excited by unmeasured ambient loading.

This paper presents a procedure for estimating the structure (the system matrices or coefficients) and the order of a state–space representation of a multivariable stochastic process, from measured output data only. It is assumed that the observed vector time series is a realisation of a process with rational spectrum, or equivalently the output of a stable, time-invariant, linear system driven by white noise. An algorithm is proposed which selects the number of states (the order) and the coefficients of a discrete time state–space model. The modal parameters and spectrum of the vibrating system are then derived.

This paper is organised as follows. A model of a vibrating system and its innovations state–space formulation is given in Section 2. The parameter or coefficient estimators are derived in Section 3. A method for determining the order of the state–space model is described in Section 4. A numerical example is presented in Section 5, and the paper is briefly concluded in Section 6.

2. MODELING A VIBRATING SYSTEM AND STATE–SPACE INNOVATIONS REPRESENTATION

In this section it will be shown briefly how a continuous time structural system, described by a second-order multivariate differential equation, can be converted to an equivalent state–space realisation. Consider a multivariate continuous time structural dynamic system described by a positive definite diagonal mass matrix M_0, a symmetric semi-definite viscous damping matrix C_0 and a symmetric positive definite stiffness matrix K_0. The system is represented by the linear differential equation

M_0 \ddot{\xi}(t) + C_0 \dot{\xi}(t) + K_0 \xi(t) = f(t)    (1)

and the observation equation

y(t) = L \xi(t) + v(t)    (2)

where \xi(t) is the vector of displacements of the degrees of freedom; f(t) is the unmeasured excitation vector; y(t) is the output observation vector; L specifies which points of the system are observed, namely where the sensors are located; and v(t) is an additive noise disturbance, a vector of measurement errors. Sampling with constant period \Delta t and a transformation of the second-order differential equation (1) into a first-order system lead to the discrete time state equation [2, 5, 9]

x_{t+1} = A x_t + u_t    (3)

and the discrete time observation equation

y_t = C x_t + v_t    (4)

where x_t is an unobserved state vector of finite dimension n and mean zero; A is the n \times n discrete time state matrix, or transition matrix, which is a function of M_0, C_0, K_0 and \Delta t [1, 2]; u_t and v_t are mean zero, serially and mutually uncorrelated noise processes. The matrix C is m \times n and is called the observation matrix; y_t is an m \times 1 vector of centred observations. A discrete time stochastic system representation of the m-dimensional discrete time series {y_t} can be written in the state–space innovations form [10, 11]

z_{t+1} = A z_t + B e_t    (5)

y_t = C z_t + e_t    (6)

where the input e_t is an m \times 1 vector of stochastic innovations: a zero-mean, white Gaussian process with covariance matrix Q = E[e_t e_t']; z_t is an n \times 1 vector of unobservable states that are minimal sufficient statistics (optimal carriers of information) for the history of the process y_t. By definition, the state vector z_t contains all relevant information on the past history of the series y_t up to time t − 1. To ensure that the next-period state vector z_{t+1} is a set of sufficient statistics for y_{t+1}, the new information contained in e_t (the innovation vector) must be incorporated into the states. The n \times m matrix B, called the Kalman filter gain matrix, accomplishes this task in a manner that preserves the minimal sufficiency of the states. The state–space innovations model of equations (5) and (6) is a suitable representation for the synthesis of samples of a regularly sampled random vibration system. Given a sequence of observations {y_t}, this paper indicates how to determine the coefficient matrices A, B and C, the process order n (the number of states in the model), the natural frequencies and damping ratios of the vibrating system, and the spectrum.

3. PARAMETER ESTIMATION

It is assumed in this section that the model specification, or the order n, of the multivariate stochastic time series is given. Define the theoretical m \times m covariance matrices of the observations R_i = E[y_{t+i} y_t'], where i = 0, 1, ..., 2k represents the number of lags to be modelled. This gives

R_i = C A^{i-1} G    and    R_{-i} = R_i'    (7)

with i > 0 and G = E[z_{t+1} y_t']. R_0 is determined separately. The superscript ' is used to denote the transpose operation. It is convenient to stack the observations temporally via the mk \times 1 future and past vectors defined as y_t^+ = [y_t', y_{t+1}', ..., y_{t+k-1}']' and y_t^- = [y_t', y_{t-1}', ..., y_{t-k+1}']'. There is no need to have the same length for y_t^+ and y_t^-; this is assumed purely for expositional convenience. It is well known that

H = E(y_t^+ y_{t-1}^{-'}) = [ R_1    R_2      ...  R_k
                              R_2    R_3      ...  R_{k+1}
                              ...    ...      ...  ...
                              R_k    R_{k+1}  ...  R_{2k-1} ]    (8)

is an mk \times mk block Hankel matrix (a block band counter-diagonal matrix). This block matrix may also be represented in terms of the model parameters by substituting the theoretical covariance matrices R_i of relation (7) into the block Hankel matrix (8):

H = [ CG           CAG        ...  CA^{k-1}G
      CAG          CA^2G      ...  CA^kG
      ...          ...        ...  ...
      CA^{k-1}G    CA^kG      ...  CA^{2k-2}G ]    (9)

. 

546

This block Hankel matrix can be expressed as the outer product of two block vectors

H = [ C
      CA
      ...
      CA^{k-1} ] [G  AG  ...  A^{k-1}G] = OK    (10)

with O the observability matrix and K the controllability matrix associated with the triplet (A, G, C). The sample covariance matrices calculated from the data y_t, t = 1, 2, ..., T, are estimated by the maximum likelihood relation

\hat{R}_i = T^{-1} \sum_{t=1}^{T-i} y_{t+i} y_t',    i = 0, 1, ..., 2k    (11)

and are used to construct a sample block Hankel matrix \hat{H}, which is factored into \hat{O} and \hat{K}, where \hat{} indicates sample versions:

\hat{H} = \hat{O} \hat{K}    (12)
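As a concrete illustration of equations (8), (11) and (12), the following minimal numpy sketch builds the sample covariance sequence and the sample block Hankel matrix. The data array Y (one row per sensor) and all helper names are illustrative, not from the paper.

```python
import numpy as np

def sample_covariances(Y, max_lag):
    # Equation (11): R_hat[i] = T^{-1} * sum_{t=1}^{T-i} y_{t+i} y_t'
    m, T = Y.shape
    return [Y[:, i:] @ Y[:, :T - i].T / T for i in range(max_lag + 1)]

def block_hankel(R, k, shift=1):
    # Equation (8): block (i, j) of H is R_{i+j+shift} (0-indexed blocks);
    # shift=1 gives H, shift=2 gives the left-shifted matrix of equation (24).
    m = R[0].shape[0]
    H = np.zeros((m * k, m * k))
    for i in range(k):
        for j in range(k):
            H[i * m:(i + 1) * m, j * m:(j + 1) * m] = R[i + j + shift]
    return H
```

With k block rows, building the covariance list up to lag 2k covers both the Hankel matrix and the left-shifted version used later in equation (24).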

A second factorisation of the estimated block Hankel matrix is obtained using the singular value decomposition of \hat{H}:

\hat{H} = \hat{U} \hat{S} \hat{V}'    (13)

In the above equation, \hat{U} is a matrix of orthonormal left singular vectors of \hat{H} and \hat{V} is a matrix of orthonormal right singular vectors of \hat{H}, such that \hat{U}'\hat{U} and \hat{V}'\hat{V} are identity matrices. The singular values of \hat{H} are contained in descending order in the diagonal matrix \hat{S}. The number of non-zero singular values of \hat{H} is equal to its rank, but since \hat{H} is a statistic, its rank has to be estimated, which can be done in a variety of ways based on the singular values in \hat{S}. Any statistic that detects the number of singular values significantly different from zero is a candidate for model specification. In other words, when estimating a model with n states, only the n largest singular values are used. Setting the smallest singular values to zero constitutes the set of exclusion restrictions used to identify the model. Denote the approximation to the block Hankel matrix created by enforcing these exclusion restrictions by \hat{H}_n, and the singular value decomposition of \hat{H}_n by

\hat{H}_n = \hat{U}_n \hat{S}_n \hat{V}_n'    (14)

The subscripts indicate that only the n largest singular values and associated singular vectors are included in the approximation: the model specification problem has been turned into an approximation problem. The two factorisations of the block matrix \hat{H}_n are

\hat{H}_n = \hat{U}_n \hat{S}_n \hat{V}_n' = (\hat{U}_n \hat{S}_n^{1/2})(\hat{S}_n^{1/2} \hat{V}_n')    (15)

\hat{H}_n = \hat{O}_n \hat{K}_n    (16)

Thus, two factors have been formed:

\hat{O}_n = \hat{U}_n \hat{S}_n^{1/2}    (17)

and

\hat{K}_n = \hat{S}_n^{1/2} \hat{V}_n'    (18)

with generalised inverses \hat{O}_n^† = \hat{S}_n^{-1/2} \hat{U}_n' and \hat{K}_n^† = \hat{V}_n \hat{S}_n^{-1/2}. Other factorisations are possible, such as \hat{O}_n = \hat{U}_n and \hat{K}_n = \hat{S}_n \hat{V}_n', or \hat{O}_n = \hat{U}_n \hat{S}_n^{1/4} and \hat{K}_n = \hat{S}_n^{3/4} \hat{V}_n'. These alternative forms amount to alternative choices of the co-ordinate system, or parameterisation, of the state–space. The representation corresponding to the choice of equations (17) and (18) is called balanced, since the gramians are equal [15]:

\hat{O}_n' \hat{O}_n = \hat{K}_n \hat{K}_n' = \hat{S}_n    (19)
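A sketch of this truncation step, equations (13)–(18), under the balanced parameterisation (all names are illustrative):

```python
import numpy as np

def balanced_factors(H, n):
    # Equation (13): full SVD of the sample Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    # Keep only the n largest singular values (the exclusion restrictions)
    Un, sn, Vtn = U[:, :n], s[:n], Vt[:n, :]
    On = Un * np.sqrt(sn)               # equation (17): O_n = U_n S_n^(1/2)
    Kn = np.sqrt(sn)[:, None] * Vtn     # equation (18): K_n = S_n^(1/2) V_n'
    return On, Kn, Un, sn, Vtn
```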

The unknown parameters of the model (A, G, C) may now be found in terms of the estimated covariance matrices \hat{R}_i, i = 0, 1, ..., 2k, and the singular value decomposition of \hat{H}_n. The procedure is described below. Let H_{r1} and H_{c1} represent the first m rows (first block row) and the first m columns (first block column) of H, and note the following relationships from the first block row and block column of equations (8) and (10) above:

H_{r1} = [R_1  R_2  ...  R_k] = CK    (20)

H_{c1} = [ R_1
           R_2
           ...
           R_k ] = OG    (21)

Therefore, the estimates of C and G are given by the resolution of these two equations, using the generalised inverses:

\hat{C} = \hat{H}_{r1} \hat{K}_n^† = \hat{H}_{r1} \hat{V}_n \hat{S}_n^{-1/2}    (22)

\hat{G} = \hat{O}_n^† \hat{H}_{c1} = \hat{S}_n^{-1/2} \hat{U}_n' \hat{H}_{c1}    (23)
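In numpy form, equations (22) and (23) read as below; this is a sketch, with Hr1 and Hc1 sliced from the sample Hankel matrix and all names illustrative.

```python
import numpy as np

def estimate_C_G(H, Un, sn, Vtn, m):
    Hr1 = H[:m, :]    # first block row of H, equation (20)
    Hc1 = H[:, :m]    # first block column of H, equation (21)
    C = (Hr1 @ Vtn.T) / np.sqrt(sn)           # equation (22): H_r1 V_n S_n^(-1/2)
    G = (Un.T @ Hc1) / np.sqrt(sn)[:, None]   # equation (23): S_n^(-1/2) U_n' H_c1
    return C, G
```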

To estimate A it is necessary to introduce a left-shifted version of the block Hankel matrix, defined as

\vec{H} = [ R_2      R_3      ...  R_{k+1}
            R_3      R_4      ...  R_{k+2}
            ...      ...      ...  ...
            R_{k+1}  R_{k+2}  ...  R_{2k} ]
        = [ CAG     CA^2G      ...  CA^kG
            CA^2G   CA^3G      ...  CA^{k+1}G
            ...     ...        ...  ...
            CA^kG   CA^{k+1}G  ...  CA^{2k-1}G ]    (24)

The same left-shifted matrix can be obtained by multiplying O by A and K:

\vec{H} = [ C
            CA
            ...
            CA^{k-1} ] A [G  AG  ...  A^{k-1}G] = OAK    (25)

The estimate of A is obtained by applying the generalised inverses of \hat{K}_n and \hat{O}_n again, yielding

\hat{A} = \hat{O}_n^† \vec{\hat{H}} \hat{K}_n^† = \hat{S}_n^{-1/2} \hat{U}_n' \vec{\hat{H}} \hat{V}_n \hat{S}_n^{-1/2}    (26)

The estimates \hat{A}, \hat{G} and \hat{C}, calculated from the largest singular values in \hat{S}_n and the orthogonal singular vectors of \hat{U}_n and \hat{V}_n, subject to the balanced representation, have the property called 'strict nesting' [10]. The estimates are called 'strictly nested' because the leading principal submatrices of \hat{A}, \hat{G} and \hat{C} contain the exact numerical values that would be produced if the model were estimated with the correspondingly smaller number of singular values. This result occurs because these estimates are composed of parts of scaled singular vectors, which are orthogonal. Also, since the singular values are included in decreasing order, any states omitted are less important than each of the included states. Thus, if additional states, associated with additional singular values, are incorrectly included in the model, all coefficients of the true model are estimated consistently and, in fact, are numerically identical to the estimates that would be obtained from the correct specification. Alternatively, if states of the true model are excluded from the estimated model, the coefficients that are included are again consistently estimated, and the included states are the most important ones in the determination of the coefficients. Setting singular values to zero in the model approximation stage controls how many singular vectors enter into the coefficient estimation process.

With estimates of A, C and G in hand, an estimate of the Kalman filter gain matrix B can be developed. This is done by solving a system of three matrix equations involving four covariance matrices: the state covariance matrix P = E[z_{t+1} z_{t+1}'], the innovations covariance matrix Q = E[e_t e_t'], the covariance of the states with the observations G = E[z_{t+1} y_t'] and the data covariance matrix R_0 = E[y_t y_t']. The first two equations are derived by taking the expectations of equations (5) and (6) postmultiplied by their transposes. Because z_t and e_t are uncorrelated and E[z_{t+1} z_{t+1}'] = E[z_t z_t'],

P = E[z_{t+1} z_{t+1}'] = E[(A z_t + B e_t)(A z_t + B e_t)'] = APA' + BQB'    (27)

R_0 = E[y_t y_t'] = E[(C z_t + e_t)(C z_t + e_t)'] = CPC' + Q    (28)

Taking the expectation of the state equation (5) postmultiplied by the transpose of the observation equation (6) gives the third matrix equation

G = E[z_{t+1} y_t'] = E[(A z_t + B e_t)(C z_t + e_t)'] = APC' + BQ    (29)
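Gathering equations (22), (23) and (26), the transition matrix step can be sketched as follows; H_shift is the left-shifted sample Hankel matrix (built with shift=2 in the earlier helper), and all names are illustrative.

```python
import numpy as np

def estimate_A(H_shift, Un, sn, Vtn):
    # Equation (26): A = S_n^(-1/2) U_n' H_shift V_n S_n^(-1/2);
    # dividing entry (i, j) by sqrt(s_i * s_j) applies both diagonal scalings.
    return (Un.T @ H_shift @ Vtn.T) / np.sqrt(np.outer(sn, sn))
```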

Based on the previously developed estimators of A, C, G and R_0, estimates of P, Q and B can be obtained from equations (27) to (29). One method of solving equations (27) to (29) is presented here. These three equations can be combined into a matrix Riccati equation very similar to the one encountered in Kalman filtering. The Riccati equation is created by rewriting equation (27) in a slightly modified manner and then substituting in relations from equations (28) and (29) to eliminate B and Q:

P = APA' + BQQ^{-1}QB'
P = APA' + (G − APC')(R_0 − CPC')^{-1}(G − APC')'    (30)

The state covariance matrix P may be iteratively calculated by generating [10]

P_0 = 0
P_{i+1} = A P_i A' + (G − A P_i C')(R_0 − C P_i C')^{-1}(G − A P_i C')'
P = \lim_{i \to \infty} P_i    (31)
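A sketch of the fixed-point iteration (31), followed by equations (36) and (37) for Q and the Kalman gain B; the tolerance and iteration cap are illustrative choices, not from the paper.

```python
import numpy as np

def solve_riccati(A, C, G, R0, tol=1e-10, max_iter=10000):
    n = A.shape[0]
    P = np.zeros((n, n))                     # P_0 = 0 guarantees minimality
    for _ in range(max_iter):
        M = G - A @ P @ C.T                  # G - A P_i C'
        P_new = A @ P @ A.T + M @ np.linalg.solve(R0 - C @ P @ C.T, M.T)
        done = np.max(np.abs(P_new - P)) < tol
        P = P_new
        if done:
            break
    Q = R0 - C @ P @ C.T                     # equation (36)
    B = np.linalg.solve(Q.T, (G - A @ P @ C.T).T).T   # equation (37): M Q^(-1)
    return P, Q, B
```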

   


where the minimality of P is guaranteed by starting the iterative procedure at P_0 = 0, as in the sketch above. An alternative solution of the Riccati equation can be obtained non-iteratively by forming the 2n \times 2n symplectic matrix \Phi [10, 11]:

\Phi = [ c' − xc^{-1}z    xc^{-1}
         −c^{-1}z         c^{-1}  ]    (32)

where c = A − GR_0^{-1}C, x = C'R_0^{-1}C and z = GR_0^{-1}G'. It can be shown that the eigenvalues of \Phi are not only symmetric with respect to the real axis (as \Phi is a real matrix) but also with respect to the unit circle (as \Phi is a symplectic matrix). Next, implicitly define

n = [ n_{11}  n_{12}
      n_{21}  n_{22} ]    (33)

as the 2n \times 2n matrix that transforms \Phi into real Schur decomposition form:

n' \Phi n = [ \Phi_{11}  \Phi_{12}
              0          \Phi_{22} ]    (34)

where the submatrices \Phi_{11} and \Phi_{22} are block upper triangular. The solution of the Riccati equation (30) is given by

P = n_{21} n_{11}^{-1}    (35)

To construct the estimate of \Phi needed to solve the Riccati equation (30), the estimates of A, C, R_0 and G from equations (26), (22), (11) and (23) are substituted into equation (32); an estimate \hat{P} of the solution of the Riccati equation can then be obtained directly by the above-mentioned procedure. This solution for P allows a simple estimation of the remaining parameters of the innovations model. From equation (28), an estimate of the innovations covariance matrix is

\hat{Q} = \hat{R}_0 − \hat{C} \hat{P} \hat{C}'    (36)

and from equation (29), an estimate of the Kalman filter gain matrix is

\hat{B} = (\hat{G} − \hat{A} \hat{P} \hat{C}')(\hat{R}_0 − \hat{C} \hat{P} \hat{C}')^{-1}    (37)

Note that Aoki and Dorfman [12] propose an algorithm employing an approximate solution of the Riccati equation (exact only for vector autoregressive processes). This method simplifies the procedure by avoiding the explicit solution of the Riccati equation, and produces results which are close to those produced using the exact solution. It avoids the need to perform a Schur decomposition or iterations. It uses the mk \times mk block Toeplitz covariance matrix of the stacked data vector y_{t-1}^-,

R = E[y_{t-1}^- y_{t-1}^{-'}]    (38)

and the approximate covariance matrix of the states takes the form

P = K R^{-1} K'    (39)
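A sketch of this shortcut: the block Toeplitz matrix R of equation (38) assembled from the sample covariances (using R_{-i} = R_i'), followed by the approximation (39). Names are illustrative, and the block orientation assumes the stacking convention of y_t^- given in Section 3.

```python
import numpy as np

def approximate_P(R_list, Kn, k):
    # Equation (38): block (i, j) of R is R_{j-i}, with R_{-i} = R_i'
    blocks = [[R_list[j - i] if j >= i else R_list[i - j].T
               for j in range(k)] for i in range(k)]
    R = np.block(blocks)
    return Kn @ np.linalg.solve(R, Kn.T), R   # equation (39): K R^(-1) K'
```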

The nature of this approximation has been examined numerically by Aoki and Dorfman [12]. Given this expression for the covariance of the states, a solution for B is obtained by substitution into equation (37). Before the coefficients of a model can be estimated, the issue of model specification must be addressed: the number of states in the model (the order) is required. The selection of the lag parameter k is not addressed here. The next section proposes a new method for order selection, based on the estimation of the coefficients in a particular column of the observability matrix.

4. MODEL ORDER ESTIMATION

Since the number of excited modes is unknown, the model order is not known a priori and must be determined. Assuming that mk exceeds the model order n, the rank of the block Hankel matrix H equals the model order. Therefore, one can use a convenient property of the singular value decomposition of a matrix: the number of non-zero singular values of a matrix is equal to its rank. However, if an estimate of the block Hankel matrix is used, singular values exactly equal to zero cannot be obtained, and it is often difficult to choose the largest singular values. Because the singular values of \hat{H} are ordered from largest to smallest, if a break occurs in the singular value sequence of \hat{S}, in the sense that

\hat{\sigma}_1 \geq \hat{\sigma}_2 \geq ... \geq \hat{\sigma}_n \gg \hat{\sigma}_{n+1} \geq ...

then this n is the order of the model. In practice, however, this break is not apparent; the number of singular values contained in \hat{S} and considered to be significantly different from zero is often a subjective decision. A new method for model order selection, based on the columns of the observability matrix and using a test which has a \chi^2 distribution, is presented. The first step in the development of the new procedure for model order estimation is to write the stacked vector of observations as

y_t^+ = O z_t + F e_t^+    (40)

where O is the observability matrix of equation (10), y_t^+ was defined at the beginning of the paper, z_t is the state vector, e_t^+ = [e_t', e_{t+1}', ..., e_{t+k-1}']' is a stacked vector of innovations, and F is the mk \times mk lower block triangular matrix given by

F = [ I_m         0    ...  0
      CB          I_m  ...  0
      ...         ...  ...  ...
      CA^{k-2}B   ...  CB   I_m ]    (41)

Because the vector e_t^+ is an innovation vector, its covariance matrix is the block diagonal matrix

Q^+ = E[e_t^+ e_t^{+'}] = I_k \otimes Q    (42)

where I_k is the k \times k identity matrix, Q is the covariance matrix of e_t, and \otimes denotes the Kronecker product (see Appendix A). The estimated observability matrix is given by

\hat{O} = \hat{H} \hat{K}_n^† = \hat{H} \hat{V}_n \hat{S}_n^{-1/2}    (43)

Choosing this generalised inverse causes the estimated observability matrix to be a linear function of the singular vectors; because the singular vectors are orthogonal, each column of \hat{O} is orthogonal to the others.

If an unnecessary state, or several unnecessary states, are included in the state–space model, the coefficients (the columns) of the observability matrix \hat{O} which are associated with these unnecessary states will tend to zero as the sample size goes to infinity. The proof is simple: the singular values associated with those coefficients have a probability limit of zero. Thus, a test for the joint significance of a particular column of the observability matrix can provide the significance of the state associated with that column [13]. Define \hat{O}_p as the estimated observability matrix for the largest possible model: one with p states. Since the chosen p is larger than the true order n of the system, the rightmost columns of the \hat{O}_p matrix should have elements which tend to zero. Represent the vectorisation operator by vec( ): it creates a vector from a matrix by successively stacking each column of the matrix below the column to its left. In terms of vec(\hat{O}_p), a p that is one larger than the true n of the system implies that the last mk elements of the column vector of estimates should all tend to zero. A test for the model order can therefore be carried out on the estimate of the observability matrix.

Before the steps of the testing procedure can be described, the covariances of the elements of the observability matrix need to be specified. To account for all possible interactions, the covariance matrix of the vectorisation of the observability matrix O is required. To estimate vec(O) it is necessary to introduce Y^+, Z and E^+, the matrices of observations, states and innovations, formed by arranging the data vectors from time period 1 through period T into successive columns:

Y^+ = [y_1^+  y_2^+  ...  y_T^+];    Z = [z_1  z_2  ...  z_T];    E^+ = [e_1^+  e_2^+  ...  e_T^+]    (44)

and then obtain

Y^+ = OZ + FE^+    (45)

The vectorisation of Y^+ is [14]

vec(Y^+) = (Z' \otimes I_{mk}) vec(O) + (I_T \otimes F) vec(E^+)    (46)

and the estimate of vec(O) is given by

vec(\hat{O}) = [(Z' \otimes I_{mk})'(Z' \otimes I_{mk})]^{-1}(Z' \otimes I_{mk})' vec(Y^+)
             = [(ZZ')^{-1} Z \otimes I_{mk}] vec(Y^+)
             = vec(O) + [(ZZ')^{-1} Z \otimes I_{mk}](I_T \otimes F) vec(E^+)    (47)

The conditional covariance matrix of the estimated observability vector is

cov[vec(\hat{O})] = E[(vec(\hat{O}) − vec(O))(vec(\hat{O}) − vec(O))']
  = E{[(ZZ')^{-1}Z \otimes I_{mk}](I_T \otimes F) vec(E^+) vec(E^+)' (I_T \otimes F)'[(ZZ')^{-1}Z \otimes I_{mk}]'}    (48)

It is shown in Appendix A that

cov[vec(\hat{O})] = E[(ZZ')^{-1} \otimes FQ^+F']    (49)

and in Appendix B that the matrix of states Z takes the form

Z = K R^{-1} Y_{-1}^-    (50)

where Y_{-1}^- is the matrix formed using the set of y_{t-1}^-, t = 1, 2, ..., T, as columns. One has

cov[vec(\hat{O})] = E[(K R^{-1} Y_{-1}^- Y_{-1}^{-'} R^{-1} K')^{-1} \otimes FQ^+F']
                  = T^{-1}[(K R^{-1} K')^{-1} \otimes FQ^+F']    (51)


The matrix R can be estimated using the \hat{R}_i matrices defined earlier, and the controllability matrix K is estimated by

\hat{K} = \hat{O}^† \hat{H} = \hat{S}^{-1/2} \hat{U}' \hat{H}    (52)

Matrices Q and F are estimated from equations (36) and (41). The elements of vec(\hat{O}) have an asymptotically normal distribution, so an asymptotically \chi^2 test can be performed on the elements of a column of the observability matrix: for a column \hat{o}_j, the quadratic form \hat{o}_j'[cov(\hat{o}_j)]^{-1}\hat{o}_j has a \chi^2 distribution with mk degrees of freedom [16], the number of sensors (the number of time series) times the number of block rows in the observability matrix. Of course, the test, denoted M for model order, is not truly \chi^2 for small samples. The test M is easily calculated given the estimate of the observability matrix \hat{O}_p from equation (43) (for a model with p states, larger than the true model order n) and its covariance matrix from equation (51). Let (KR^{-1}K')_{jj} denote the jth diagonal element of KR^{-1}K', and let \hat{o}_j be the estimate of the jth column of the observability matrix \hat{O}_p. The test of the null hypothesis that all the coefficients in the jth column of \hat{O}_p tend to zero (j > n) can be written as

M_j = \hat{o}_j' [T (\hat{K} \hat{R}^{-1} \hat{K}')_{jj} (\hat{F} \hat{Q}^+ \hat{F}')^{-1}] \hat{o}_j    (53)

The test statistic M_j is asymptotically distributed as a \chi^2 random variable with mk degrees of freedom, allowing an asymptotic test of the significance of the jth state. Finite sample tests could be carried out by employing Chebyshev's inequality. The model order estimation procedure utilising the M_j test involves several steps (a sketch follows at the end of this section).

(1) A state–space model with a maximum number of states, a model with p states, is estimated.
(2) The matrices \hat{O}_p and cov[vec(\hat{O}_p)] are calculated according to equations (43) and (51).
(3) Beginning with j = p, the test M_j is calculated according to equation (53). If the null hypothesis that all the coefficients in that column of the observability matrix tend to zero is not rejected, set j = j − 1 and calculate a new value of M_j.
(4) Repeat the process until the null hypothesis of an unnecessary state is rejected. When the test M_j is large enough to reject the null hypothesis (a \chi^2 test can be performed), choose j = n as the model order.

Note that the entire model specification procedure is performed with the same estimates of the matrices \hat{O}_p and cov[vec(\hat{O}_p)]: the model is not re-estimated after each test. The property of strict nesting implies that the estimate of each column \hat{o}_j of \hat{O}_p remains unchanged as additional columns are added or deleted. In other words, the M_j values are numerically unchanged by the addition or deletion of columns of \hat{O}_p, so they cannot become biased by the procedure. Because the singular values are ordered from largest to smallest, there can be no gaps in the model order of a state–space model such as can exist when using VARMA models; for example, the fifth-order model cannot be significant if the fourth-order model is not. The M_j test cannot reject the null hypothesis for a column of \hat{O}_p and then fail to reject the null hypothesis for a column farther to the left, associated with a larger singular value. A numerical example illustrating the procedure is shown in the next section.
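A sketch of steps (1)–(4), assuming the helpers above have produced O_p (the mk \times p estimated observability matrix), K_p, the block Toeplitz matrix R, and the estimates A, C, B, Q for the largest candidate model; the 5% threshold uses scipy, and all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def select_order(O_p, K_p, R, A, C, B, Q, T, m, k, alpha=0.05):
    mk = m * k
    # Equation (41): lower block triangular F built from C, A and B
    F = np.eye(mk)
    for i in range(1, k):
        for j in range(i):
            F[i*m:(i+1)*m, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - j - 1) @ B
    FQF = F @ np.kron(np.eye(k), Q) @ F.T      # F Q+ F', with Q+ = I_k (x) Q, equation (42)
    W = K_p @ np.linalg.solve(R, K_p.T)        # K R^(-1) K' of equation (51)
    threshold = chi2.ppf(1 - alpha, df=mk)     # critical value, chi^2 with mk d.o.f.
    n = 0
    for j in range(O_p.shape[1] - 1, -1, -1):  # start at j = p, move left
        o_j = O_p[:, j]
        M_j = T * W[j, j] * o_j @ np.linalg.solve(FQF, o_j)   # equation (53)
        if M_j > threshold:                    # null rejected: state j+1 is significant
            n = j + 1
            break
    return n
```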

5. SPECTRAL ESTIMATION AND NUMERICAL EXAMPLE

Let {R_i} be the matrix covariance sequence given by equation (7). The Fourier transform of this covariance sequence is, by definition, the power spectral density matrix of the process:

S(e^{j\omega}) = \sum_{i=-\infty}^{+\infty} R_i e^{-j\omega i}    (54)

It is shown in Appendix C that this power spectral density matrix takes the form

S(e^{j\omega}) = R_0 + C(I_n e^{j\omega} − A)^{-1} G + G'(I_n e^{-j\omega} − A')^{-1} C'    (55)

A power spectral density estimator of the y_t process is obtained by substituting R_0, C, A and G by their estimates.
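A sketch of this estimator: equation (55) evaluated on a grid of normalised angular frequencies, with the second term obtained as the conjugate transpose of the first (names are illustrative).

```python
import numpy as np

def spectrum(R0, C, A, G, omegas):
    # Equation (55); omega = 2*pi*f*dt is the normalised angular frequency.
    n = A.shape[0]
    S = []
    for w in omegas:
        T1 = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, G)
        # G'(I e^{-jw} - A')^{-1} C' is the conjugate transpose of T1 for real A, C, G
        S.append(R0 + T1 + T1.conj().T)
    return np.array(S)
```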

To illustrate the numerical procedure for estimating the order, the spectrum, the eigenfrequencies f_i and the damping ratios \epsilon_i, an elementary three degree of freedom linear system excited by Gaussian white noise is considered. Only the responses are used; this corresponds to the situation where the inputs cannot be measured but can be approximated as a white noise sequence. The modal parameters of the vibrating system are given by

det(M_0 \mu^2 + C_0 \mu + K_0) = 0    (56)

(M_0 \mu^2 + C_0 \mu + K_0) \Psi = 0    (57)

where the \mu_i are the vibration modes or eigenfrequencies (the frequencies at which responses may occur) and the \Psi_i are the modal shapes or eigenvectors of the system. The natural frequencies f_i and damping ratios \epsilon_i of the system can be determined from

\mu_i = -2\pi f_i \epsilon_i + j 2\pi f_i \sqrt{1 - \epsilon_i^2},    \mu_i^* = -2\pi f_i \epsilon_i - j 2\pi f_i \sqrt{1 - \epsilon_i^2}    (58)

In the discrete time domain, the signal is sampled with a constant sampling interval \Delta t, and the eigenvalues \lambda_i of the transition matrix A are related to the modal characteristics by \lambda_i = e^{\mu_i \Delta t}. Therefore, the natural frequency f_i and damping ratio \epsilon_i of the ith mode can be determined from

f_i = \frac{1}{2\pi\Delta t} \sqrt{ \frac{[\ln(\lambda_i \lambda_i^*)]^2}{4} + \left[ \cos^{-1}\left( \frac{\lambda_i + \lambda_i^*}{2\sqrt{\lambda_i \lambda_i^*}} \right) \right]^2 }    (59)

\epsilon_i = \sqrt{ \frac{[\ln(\lambda_i \lambda_i^*)]^2}{[\ln(\lambda_i \lambda_i^*)]^2 + 4\left[ \cos^{-1}\left( \frac{\lambda_i + \lambda_i^*}{2\sqrt{\lambda_i \lambda_i^*}} \right) \right]^2} }    (60)
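Equations (59) and (60) in numpy form: a sketch that keeps one eigenvalue of each complex conjugate pair of the estimated transition matrix (dt is the sampling period \Delta t; names are illustrative).

```python
import numpy as np

def modal_parameters(A, dt):
    lam = np.linalg.eigvals(A)
    lam = lam[lam.imag > 0]                   # one eigenvalue per conjugate pair
    r = np.log(np.abs(lam) ** 2)              # ln(lambda_i * conj(lambda_i))
    phi = np.arccos(lam.real / np.abs(lam))   # cos^{-1}((lam + conj(lam)) / (2 sqrt(lam conj(lam))))
    f = np.sqrt(r ** 2 / 4 + phi ** 2) / (2 * np.pi * dt)   # equation (59)
    eps = np.sqrt(r ** 2 / (r ** 2 + 4 * phi ** 2))         # equation (60)
    order = np.argsort(f)
    return f[order], eps[order]
```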

The natural frequencies and damping ratios of the simulated system are f_i = {0.9576; 1.5635; 2.5326} and \epsilon_i = {10.1 × 10^{-4}; 11 × 10^{-4}; 11.3 × 10^{-4}}. The order is estimated using the M_j test, with three time series (three sensors, m = 3) and three block columns of the estimated observability matrix (k = 3). The number of data points per channel is T = 1000. Following the four-step procedure for order determination introduced in Section 4, the M_j test for nine candidate orders (p = 9) is shown in Table 1.

Table 1. M_j test for order selection and singular values of \hat{H}

  j                1           2           3           4           5           6           7        8        9
  M_j              3.1 × 10^5  3.6 × 10^4  1.7 × 10^5  2.9 × 10^5  3.4 × 10^4  7.9 × 10^4  6.1      1.3      0.9
  \hat{\sigma}_j   19.8        18.16       15.77       12.64       4.54        1.43        0.004    0.002    0.00003

. 

554

Power spectral density (dB)

40 30 20 10 0 –10 –20 –30 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Normalised frequency Figure 1. Power spectral density of x1 (t).

For comparison, the singular values \hat{\sigma}_j of the block Hankel matrix are also given. It is clear that the null hypothesis that all the coefficients in the jth column of \hat{O}_p tend to zero is rejected for j = 6, which is the true order of the state–space model. The quantity M_j is a \chi^2 random variable with mk = 9 degrees of freedom, and the test can be carried out at a chosen per cent significance level. Using the 5% significance level, a \chi^2 table gives M_6 = 7.9 × 10^4 > \chi_9^2 = 16.9 and M_7 = 6.1 < \chi_9^2. The order of the system obtained from this \chi^2 test is n = 6. A break in the singular values of \hat{H} occurs for j = 6 and for j = 8, and the decision on the order of the model is then subjective. The singular values do not follow a \chi^2 distribution, and a statistical decision on the true order of the model cannot be obtained directly from the singular values. The estimated natural frequencies and damping ratios of the vibrating system excited by a random force are \hat{f}_i = {0.9577; 1.5635; 2.5326} and \hat{\epsilon}_i = {7.5 × 10^{-4}; 7.8 × 10^{-4}; 12.1 × 10^{-4}}. Note that these results are obtained from only one simulation run; improved results can be obtained using the ensemble average over several independent realisations.

[Figure 2. Power spectral density of x_2(t): PSD (dB) against normalised frequency.]

   

[Figure 3. Power spectral density of x_3(t): PSD (dB) against normalised frequency.]

The autospectra of the three channels, as functions of the normalised frequency, are shown in Figs 1–3. Three peaks, corresponding to the three natural frequencies of the vibrating system, are evident in these figures. Note that the frequency axis is normalised so that the scale corresponds to the dimensionless range 0 \leq 2\Delta t f \leq 1.

6. CONCLUSION

A method using a discrete time state–space approach for the identification of a vibrating system driven by white noise has been presented. The proposed method uses only multi-output data. A technique for estimating the order of a state–space model is developed and justified. Its performance appears to be extremely satisfactory: when applied to simulated examples, it systematically yielded the correct model order without any ambiguity. This method is based on the estimation of the coefficients in a particular column of the observability matrix and the use of a \chi^2 distribution with a per cent significance level. The procedure for identifying a vibrating system from multi-output data is divided into three parts: (a) select the order n of the state–space model using the M_j test; (b) estimate the transition matrix A using the generalised inverses of the estimated observability and controllability matrices and the shifted block Hankel matrix; (c) estimate the natural frequencies, damping ratios and spectrum of the process.

REFERENCES

1. W. Gersch and R. S.-Z. Liu 1976 Journal of Applied Mechanics 43, 159–165. Time series methods for the synthesis of random vibration systems.
2. M. Shinozuka, C. B. Yun and H. Imai 1982 Journal of Engineering Mechanics 108, 1371–1390. Identification of linear structural dynamic systems.
3. S. M. Batill and J. J. Hollkamp 1989 AIAA Journal 27, 1636–1643. Parameter identification of discrete-time series models for structural response prediction.
4. M. Basseville, A. Benveniste, B. Gach-Devauchelle, M. Goursat, D. Bonnecase, P. Dorey, M. Prevosto and M. Olagnon 1993 Mechanical Systems and Signal Processing 7, 401–423. Damage monitoring in vibration mechanics: issues in diagnostics and predictive maintenance.


5. P. H. Kirkegaard and P. Andersen 1997 Proceedings of the 15th IMAC, 889–895. State–space identification of civil engineering structures from output measurements.
6. J. E. Cooper 1990 Mechanical Systems and Signal Processing 4, 157–172. Comparison of modal parameter estimation techniques on aircraft structural data.
7. Y. L. Pi and N. C. Mickleborough 1989 Journal of Engineering Mechanics 115, 2232–2250. Modal identification of vibrating structures using ARMA model.
8. Z. N. Wang and T. Fang 1986 Journal of Applied Mechanics 53, 28–32. A time domain method for identifying modal parameters.
9. J. Lardies 1996 Mechanical Systems and Signal Processing 10, 747–761. Analysis of a multivariate autoregressive process.
10. M. Aoki 1990 State–Space Modeling of Time Series. Berlin: Springer-Verlag.
11. M. Aoki and A. Havenner 1997 Lecture Notes in Statistics 119: Applications of Computer Aided Time Series Modeling. New York: Springer-Verlag.
12. M. Aoki and J. H. Dorfman 1992 Recent Advances in Mathematical Theory of Systems, Control, Networks and Signal Processing, 619–624. Statistical analysis of some identification procedures for state–space innovations models.
13. J. H. Dorfman 1991 Econometric Reviews 10, 67–73. Comment: a model specification test for state–space models.
14. J. R. Magnus and H. Neudecker 1994 Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley.
15. B. C. Moore 1981 IEEE Transactions on Automatic Control 26, 17–32. Principal component analysis in linear systems: controllability, observability and model reduction.
16. T. W. Anderson 1984 An Introduction to Multivariate Statistical Analysis. New York: John Wiley.

APPENDIX A

First recall some definitions for the sake of clarity.

Definition 1: the vec( ) operator. Let A be an m \times n matrix and a_j its jth column; then vec(A) is the mn \times 1 column vector

vec(A) = [a_1'  a_2'  ...  a_n']'

Thus, the vec operator transforms a matrix into a vector by stacking the columns of the matrix one underneath the other.

Definition 2: Kronecker product. The Kronecker product of two matrices A and B, of dimensions m \times n and p \times q respectively, is the mp \times nq block matrix denoted A \otimes B and defined by the generic p \times q block A_{ij} B.

Properties of the Kronecker product:

(A \otimes B)' = A' \otimes B'
(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}
(A \otimes B)(C \otimes D) = AC \otimes BD

To derive cov[vec(\hat{O})] conditioned on Z, consider the definition

cov[vec(\hat{O})] = E[(vec(\hat{O}) − vec(O))(vec(\hat{O}) − vec(O))']
  = E{[(ZZ')^{-1}Z \otimes I_{mk}](I_T \otimes F) vec(E^+) vec(E^+)' (I_T \otimes F)'[(ZZ')^{-1}Z \otimes I_{mk}]'}
  = E{[(ZZ')^{-1}Z \otimes I_{mk}](I_T \otimes F)[E(vec(E^+) vec(E^+)' | Z)](I_T \otimes F)'[(ZZ')^{-1}Z \otimes I_{mk}]'}

    +

557

+

+

Since Z and E are independent, the distribution of E [or vec (E )] conditional on Z is the same as its marginal distribution; specifically, conditional on Z, vec (E+) 0 N(0, IT &Q+), where Q+ = E[e+ e+' ] = Ik &Q and Q = E[et e't ], One then t t obtains cov [vec (O )] = E{[(ZZ')−1Z&Imk ](IT &F)(IT &Q+)(IT &F}'[(ZZ')−1Z&Imk ]} = E{[(ZZ')−1Z&Imk ](IT &FQ + F')[(ZZ')−1Z&Imk ]'} = E{[(ZZ')−1Z&FQ+F'][(ZZ')−1Z&Imk ]'} = E{[(ZZ')−1Z&FQ+F'][(ZZ')−1Z&Imk ]'} = E[(ZZ')−1ZZ'(ZZ')−1&FQ+F'] = E[(ZZ')−1&FQ+F']

APPENDIX B

To derive the matrix of states Z, one uses the orthogonal projection of Y^+ on the linear manifold spanned by Y_{-1}^-:

E[Y^+ | Y_{-1}^-] = E[Y^+ Y_{-1}^{-'}] (E[Y_{-1}^- Y_{-1}^{-'}])^{-1} Y_{-1}^-

Writing the block Hankel matrix H and the block Toeplitz covariance matrix R in terms of all the observations,

H = T^{-1} E[Y^+ Y_{-1}^{-'}]    and    R = T^{-1} E[Y_{-1}^- Y_{-1}^{-'}]

leads to

E[Y^+ | Y_{-1}^-] = H R^{-1} Y_{-1}^- = OK R^{-1} Y_{-1}^-

Since Y^+ = OZ + FE^+, one obtains

E[Y^+ | Y_{-1}^-] = O E[Z | Y_{-1}^-] = OZ

because E^+ is orthogonal to Y_{-1}^-. Therefore,

Z = K R^{-1} Y_{-1}^-

APPENDIX C

S(e^{j\omega}) = \sum_{i=-\infty}^{+\infty} R_i e^{-j\omega i}
  = \sum_{i=-\infty}^{-1} R_i e^{-j\omega i} + R_0 + \sum_{i=1}^{+\infty} R_i e^{-j\omega i}
  = R_0 + \sum_{i=1}^{+\infty} R_{-i} e^{j\omega i} + \sum_{i=1}^{+\infty} R_i e^{-j\omega i}
  = R_0 + \sum_{i=1}^{+\infty} R_i' e^{j\omega i} + \sum_{i=1}^{+\infty} R_i e^{-j\omega i}

since R_{-i} = R_i'. By substituting equation (7) into the above, one obtains

S(e^{j\omega}) = R_0 + \sum_{i=1}^{+\infty} C A^{i-1} G e^{-j\omega i} + \sum_{i=1}^{+\infty} (C A^{i-1} G)' e^{j\omega i}
  = R_0 + \sum_{i=0}^{+\infty} C A^i G e^{-j\omega(i+1)} + \sum_{i=0}^{+\infty} G' (A^i)' C' e^{j\omega(i+1)}
  = R_0 + C [ \sum_{i=0}^{+\infty} (A e^{-j\omega})^i ] G e^{-j\omega} + G' [ \sum_{i=0}^{+\infty} (A' e^{j\omega})^i ] C' e^{j\omega}
  = R_0 + C (I_n − A e^{-j\omega})^{-1} G e^{-j\omega} + G' (I_n − A' e^{j\omega})^{-1} C' e^{j\omega}
  = R_0 + C [(I_n − A e^{-j\omega}) e^{j\omega}]^{-1} G + G' [(I_n − A' e^{j\omega}) e^{-j\omega}]^{-1} C'
  = R_0 + C (I_n e^{j\omega} − A)^{-1} G + G' (I_n e^{-j\omega} − A')^{-1} C'