Matrix linear variational inequality approach for finite element model updating


Quan Yuan
College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
E-mail address: [email protected]

Supported by the National Natural Science Foundation of China under Grant No. 11071118 and the Jiangsu Province Natural Science Foundation under Grant BK2009364.

Article history: Received 23 July 2009; received in revised form 17 September 2011; accepted 19 September 2011; available online 10 October 2011.

Abstract

The problem of finding the least change adjustment to the analytical stiffness matrix modeled by the finite element method is considered in this paper. Desired matrix properties, including satisfaction of the dynamic equation, symmetry, positive semidefiniteness and a prescribed sparsity pattern, are imposed as side constraints of the nonlinear optimization problem. To the best of the author's knowledge, the matrix updating problem containing all these constraints simultaneously has not been considered in the literature before. By the partial Lagrangian multiplier technique, the optimization problem is first reformulated as an equivalent matrix linear variational inequality (MLVI) and then solved by an extended projection and contraction method. The results of numerical examples show that the proposed method works well even for incomplete measured data. © 2011 Elsevier Ltd. All rights reserved.

Keywords: Model updating; Partial Lagrangian multipliers; Matrix linear variational inequality; Projection and contraction method

1. Introduction

Building highly accurate mathematical models is necessary for analyzing and predicting the dynamic performance of complex structures such as bridges, automobiles, satellites and aircraft. The finite element method is an efficient model building tool widely used by engineers. It is well known that the analytical model of a real-life structure, obtained by the finite element technique, may be represented by the following dynamic equation (characteristic equation) or generalized eigenvalue problem

$$K_a x = \lambda M_a x, \tag{1}$$

where $M_a, K_a \in \mathbb{R}^{n\times n}$ denote the analytical mass and stiffness matrices, respectively, $n$ is the number of degrees of freedom of the finite element model, and $\lambda$ and $x$ are an eigenvalue and corresponding eigenvector of the matrix pencil $(M_a, K_a)$. In general, $M_a$ is a symmetric positive definite sparse matrix, denoted by $M_a \succ 0$, and $K_a$ is a symmetric positive semidefinite matrix (denoted by $K_a \succeq 0$) with a special zero/nonzero pattern. Owing to the complexity of a real-life structure, however, the finite element model depends on hypotheses about the geometry, boundary conditions and connectivity conditions of the structure. As a result, the finite element model may fail to reproduce the dynamic behavior of the actual structure accurately. On the other hand, with the development of vibration testing, signal processing and experimental modal survey techniques, a set of lower order frequencies (eigenvalues) $\lambda_i^{(e)} \ge 0$ $(i = 1, \ldots, m,\ m < n)$ and corresponding mode shapes (eigenvectors) $x_i^{(e)}$ can be measured accurately from a vibration test of the real structure. The goal of model updating is to find the smallest adjustment to the modeled property matrices based on the measured


data of the dynamic behavior of the structure, which narrows the gap between the finite element model and the real structure while maintaining the topological structure of the original model. The updated model may then be considered a better dynamic representation of the structure and used with greater confidence for the analysis and control of the structure.

Over the past years, the finite element model updating problem has received considerable attention, and a significant number of methods have been proposed for updating finite element models [1–5]. The earliest finite element model updating techniques are matrix-type modification methods, which focus on adjusting the assembled matrices of a structure. In order to update the mass matrix or the stiffness matrix of the finite element model so that it reproduces the measured eigenvalues and corresponding eigenvectors, a correction is added to the property matrix of the analytical model; the correction is obtained by substituting the perturbed matrix into the characteristic equation or into the orthogonality conditions that the measured data should satisfy. Assuming that the analytical mass matrix $M_a$ is exact, Baruch and Bar-Itzhack [6], Baruch [7] and Wei [8] obtained closed-form solutions of the updated stiffness matrix by using Lagrangian multipliers, imposing the characteristic equation and the symmetry of the stiffness matrix as side constraints. However, these methods do not guarantee that the updated stiffness matrix is positive semidefinite. Applying the QR-factorization of the measured eigenvectors and best approximation theory, Dai [9] proposed a numerical method for correcting the stiffness matrix; the updated stiffness matrix is symmetric positive semidefinite and satisfies the characteristic equation as well. None of the methods mentioned above takes the structural connectivity into consideration, that is, the updated stiffness matrix does not preserve the sparsity or the zero/nonzero pattern of the original analytical stiffness matrix $K_a$. To overcome this shortcoming, Kabe [10] developed an algorithm preserving the connectivity of the structural model, and Caesar and Peter [11], Kammer [12], Smith and Beattie [13,14], Zhang and Zerva [15], Halevi and Bucher [16], and Sako and Kabe [17] presented improvements of Kabe's method. But Kabe's method and its improvements fail to guarantee that the updated stiffness matrix is positive semidefinite.

In this paper, on the assumption that the analytical mass matrix $M_a$ is exact, the stiffness matrix is updated so as to satisfy the dynamic equation, symmetry, positive semidefiniteness and sparsity simultaneously. Such a problem is formulated as the following constrained least-squares matrix problem:

$$\begin{aligned} \min\quad & \tfrac{1}{2}\|K - K_a\|_F^2,\\ \text{s.t.}\quad & K X_e = M_a X_e \Lambda_e,\\ & K = K^T,\\ & K \succeq 0,\\ & \mathrm{sparse}(K) = \mathrm{sparse}(K_0), \end{aligned} \tag{2}$$

where $\Lambda_e = \mathrm{diag}(\lambda_1, \ldots, \lambda_m) \in \mathbb{R}^{m\times m}$ is the diagonal matrix of the $m$ eigenvalues measured by vibration tests, $X_e = [x_1^{(e)}, \ldots, x_m^{(e)}] \in \mathbb{R}^{n\times m}$ is the mode matrix of the $m$ corresponding measured eigenvectors, $\mathrm{sparse}(K) = \mathrm{sparse}(K_0)$ denotes the sparsity requirement on the stiffness matrix $K$ to be updated, $K_0$ is usually taken as $K_a$ in the finite element model, and $\|\cdot\|_F$ stands for the Frobenius norm of a matrix.

Problems similar to problem (2) have arisen in finance and statistics. Higham [18,19] found the nearest symmetric positive semidefinite matrix to a given correlation matrix. Stephen and Lin [20], D'Aspremont et al. [21] and Lu [22] discussed the sparse covariance matrix selection problem for a given matrix in the set of symmetric positive semidefinite matrices. In all these cases, however, at least one constraint of problem (2) is not taken into account. Notice that problem (2) is the minimization of a strictly convex quadratic function over the intersection of a finite collection of closed convex sets in $\mathbb{R}^{n\times n}$. Escalante and Raydan [23] solved a matrix least-squares problem subject to symmetric positive definiteness, box and given-pattern constraints by the alternating projection method, which was proposed by Dykstra [24] and extended by Boyle and Dykstra [25]. Using the same method, Moreno et al. [26] solved the symmetry-preserving matrix model updating problem. In this work, a different approach is presented to solve problem (2). By attaching partial Lagrangian multipliers, the Karush–Kuhn–Tucker (KKT) point of problem (2) can be interpreted as the solution of a matrix linear variational inequality. For solving medium scale linear monotone variational inequalities, projection and contraction methods of Levenberg–Marquardt type [27] are effective tools, while for large scale problems methods of that kind are not suitable. For the monotone linear variational inequality, He [28] presented a new projection and contraction method with only relatively modest computation per iteration. More methods and applications of variational inequalities can be found in Refs. [29,30].

Throughout this paper, the following notation will be used. For $A \in \mathbb{R}^{n\times m}$, $A^{+}$ denotes the Moore–Penrose generalized inverse of $A$. $P_S(A)$ represents the projection of $A$ onto a closed convex set $S$. $\mathbb{R}^{n\times n}_0$ stands for the set of all $n\times n$ nonnegative matrices and $A \ge B$ means $(A - B) \in \mathbb{R}^{n\times n}_0$. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable real valued function; $\nabla f(x)$ denotes the gradient of $f$ at $x$. $I_n$ is the $n\times n$ identity matrix.

This paper is organized as follows. In Section 2, we discuss conditions ensuring that the feasible region of problem (2) is nonempty. In Section 3, we describe the equivalent matrix linear variational inequality formulation of the KKT points of problem (2) and present the projection and contraction method. In Section 4, two numerical experiments are performed to illustrate the efficiency of the proposed method. Based on the results and the equivalent formulation, some conclusions are drawn in Section 5.


2. The feasible region

Denote the feasible region of problem (2) as $D$. For convenience, define

$$S_1 = \{K \in \mathbb{R}^{n\times n} \mid K X_e = M_a X_e \Lambda_e,\ K^T = K,\ K \succeq 0\}, \qquad S_2 = \{K \in \mathbb{R}^{n\times n} \mid \mathrm{sparse}(K) = \mathrm{sparse}(K_0)\}.$$

It is obvious that $D = S_1 \cap S_2$. We now deduce some conditions under which the feasible region $D$ is nonempty.

Lemma 2.1 (Albert [31]). Let $E \in \mathbb{R}^{n\times n}$, $F \in \mathbb{R}^{n\times k}$, $G \in \mathbb{R}^{k\times k}$ and $H = \begin{pmatrix} E & F\\ F^T & G \end{pmatrix}$. If $H^T = H$, then $H \succeq 0$ if and only if

$$E \succeq 0, \qquad G - F^T E^{+} F \succeq 0, \qquad \mathrm{rank}(E, F) = \mathrm{rank}(E).$$

Lemma 2.2. Let $A, B \in \mathbb{R}^{n\times m}$ $(m \le n)$. Assume that the singular value decomposition (SVD) of the matrix $A$ is

$$A = U \begin{pmatrix} \Sigma & 0\\ 0 & 0 \end{pmatrix} V^T, \tag{3}$$

where $U = [U_1, U_2] \in \mathbb{R}^{n\times n}$ and $V = [V_1, V_2] \in \mathbb{R}^{m\times m}$ are orthogonal matrices, $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r) > 0$, $r = \mathrm{rank}(A)$, $U_1 \in \mathbb{R}^{n\times r}$, $V_1 \in \mathbb{R}^{m\times r}$. Then the matrix equation

$$XA = B \tag{4}$$

has a symmetric positive semidefinite solution if and only if

$$B V_2 = 0, \qquad U_1^T B V_1 \Sigma^{-1} = \Sigma^{-1} V_1^T B^T U_1 \succeq 0, \tag{5}$$

$$\mathrm{rank}(U_1^T B V_1 \Sigma^{-1},\ \Sigma^{-1} V_1^T B^T U_2) = \mathrm{rank}(U_1^T B V_1 \Sigma^{-1}), \tag{6}$$

in which case the solution $X \in \mathbb{R}^{n\times n}$ is

$$X = U \begin{pmatrix} U_1^T B V_1 \Sigma^{-1} & \Sigma^{-1} V_1^T B^T U_2\\ U_2^T B V_1 \Sigma^{-1} & G + U_2^T B V_1 \Sigma^{-1} (U_1^T B V_1 \Sigma^{-1})^{+} \Sigma^{-1} V_1^T B^T U_2 \end{pmatrix} U^T, \tag{7}$$

where $G \in \mathbb{R}^{(n-r)\times(n-r)}$ is an arbitrary symmetric positive semidefinite matrix.

Proof. If Eq. (4) has a symmetric positive semidefinite solution $X$, left-multiplying both sides of Eq. (4) by $A^T$ yields $A^T X A = A^T B$. Since $A^T X A$ is symmetric positive semidefinite, so is $A^T B$. From Eqs. (3) and (4), one gets $X U_1 \Sigma V_1^T = B$, and it follows from $V_1^T V_2 = 0$ that $B V_2 = 0$. $A^T B \succeq 0$ and $B V_2 = 0$ imply that

$$V^T A^T B V = \begin{pmatrix} \Sigma U_1^T B V_1 & 0\\ 0 & 0 \end{pmatrix} \succeq 0.$$

It is easy to verify that $U_1^T B V_1 \Sigma^{-1} = \Sigma^{-1} V_1^T B^T U_1 \succeq 0$. By Eq. (3), Eq. (4) is equivalent to

$$U^T X U \begin{pmatrix} \Sigma & 0\\ 0 & 0 \end{pmatrix} = U^T B V. \tag{8}$$

Let

$$U^T X U = \begin{pmatrix} X_{11} & X_{12}\\ X_{21} & X_{22} \end{pmatrix},$$

where $X_{11} \in \mathbb{R}^{r\times r}$. From Eq. (8), one has

$$\begin{pmatrix} X_{11}\Sigma & 0\\ X_{21}\Sigma & 0 \end{pmatrix} = \begin{pmatrix} U_1^T B V_1 & U_1^T B V_2\\ U_2^T B V_1 & U_2^T B V_2 \end{pmatrix}.$$

From $B V_2 = 0$ and the symmetry of $X$, one can obtain

$$X_{11} = U_1^T B V_1 \Sigma^{-1}, \qquad X_{21} = U_2^T B V_1 \Sigma^{-1} = X_{12}^T. \tag{9}$$

It then follows from the positive semidefiniteness of $U^T X U$ and Lemma 2.1 that Eq. (6) holds.

Conversely, observe from $B V_2 = 0$ that Eq. (8) is equivalent to

$$\begin{pmatrix} X_{11}\Sigma & 0\\ X_{21}\Sigma & 0 \end{pmatrix} = \begin{pmatrix} U_1^T B V_1 & 0\\ U_2^T B V_1 & 0 \end{pmatrix}. \tag{10}$$


It is then easy to obtain the general symmetric solution $X$ of Eq. (4) as

$$X = U \begin{pmatrix} U_1^T B V_1 \Sigma^{-1} & \Sigma^{-1} V_1^T B^T U_2\\ U_2^T B V_1 \Sigma^{-1} & X_{22} \end{pmatrix} U^T, \tag{11}$$

where $X_{22} \in \mathbb{R}^{(n-r)\times(n-r)}$ is an arbitrary symmetric matrix. Let

$$X_{22} - U_2^T B V_1 \Sigma^{-1} (U_1^T B V_1 \Sigma^{-1})^{+} \Sigma^{-1} V_1^T B^T U_2 = G, \tag{12}$$

where $G \in \mathbb{R}^{(n-r)\times(n-r)}$ is an arbitrary symmetric positive semidefinite matrix. From Eqs. (5), (6), (11), (12) and Lemma 2.1, the general symmetric positive semidefinite solution of Eq. (4) is expressed as Eq. (7). This completes the proof. □

Lemma 2.3. Let $A, B \in \mathbb{R}^{n\times m}$ $(m \le n)$. Assume that $\mathrm{rank}(A) = m$ and the SVD of the matrix $A$ is

$$A = U \begin{pmatrix} \Sigma\\ 0 \end{pmatrix} V^T, \tag{13}$$

where $U = [U_1, U_2] \in \mathbb{R}^{n\times n}$ and $V \in \mathbb{R}^{m\times m}$ are orthogonal matrices, $U_1 \in \mathbb{R}^{n\times m}$, $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_m) > 0$. Then the matrix equation $XA = B$ has a symmetric positive semidefinite solution if and only if

$$U_1^T B V \Sigma^{-1} = \Sigma^{-1} V^T B^T U_1 \succeq 0, \tag{14}$$

$$\mathrm{rank}(U_1^T B V \Sigma^{-1},\ \Sigma^{-1} V^T B^T U_2) = \mathrm{rank}(U_1^T B V \Sigma^{-1}), \tag{15}$$

in which case the solution $X \in \mathbb{R}^{n\times n}$ is

$$X = U \begin{pmatrix} U_1^T B V \Sigma^{-1} & \Sigma^{-1} V^T B^T U_2\\ U_2^T B V \Sigma^{-1} & G + U_2^T B V \Sigma^{-1} (U_1^T B V \Sigma^{-1})^{+} \Sigma^{-1} V^T B^T U_2 \end{pmatrix} U^T, \tag{16}$$

where $G \in \mathbb{R}^{(n-m)\times(n-m)}$ is an arbitrary symmetric positive semidefinite matrix.

Proof. The proof is similar to that of Lemma 2.2. □
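Lemmas 2.2 and 2.3 are constructive, so they translate directly into a short numerical routine. The following is a minimal NumPy sketch of the full-column-rank case of Lemma 2.3; the function name, the default choice $G = 0$ and the tolerance handling are illustrative assumptions, not part of the original text.

```python
import numpy as np

def psd_solution_of_XA_eq_B(A, B, G=None, tol=1e-10):
    """Symmetric positive semidefinite solution X of X A = B via the SVD of A,
    following the explicit formula (16) of Lemma 2.3 (rank(A) = m assumed)."""
    n, m = A.shape
    U, s, Vt = np.linalg.svd(A)            # A = U [Sigma; 0] V^T, Sigma = diag(s) > 0
    U1, U2, V = U[:, :m], U[:, m:], Vt.T
    Sigma_inv = np.diag(1.0 / s)

    C = U1.T @ B @ V @ Sigma_inv           # block U_1^T B V Sigma^{-1}
    D = U2.T @ B @ V @ Sigma_inv           # block U_2^T B V Sigma^{-1}
    # solvability conditions (14)-(15)
    if not np.allclose(C, C.T, atol=tol) or np.linalg.eigvalsh((C + C.T) / 2).min() < -tol:
        raise ValueError("condition (14) fails: no symmetric PSD solution")
    if np.linalg.matrix_rank(np.hstack([C, D.T]), tol) != np.linalg.matrix_rank(C, tol):
        raise ValueError("condition (15) fails: no symmetric PSD solution")

    if G is None:                          # G is the arbitrary PSD block in (16)
        G = np.zeros((n - m, n - m))
    X = U @ np.block([[C, D.T], [D, G + D @ np.linalg.pinv(C) @ D.T]]) @ U.T
    return X
```

With $A = X_e$ and $B = M_a X_e \Lambda_e$, the same construction yields an element of $S_1$ whenever the hypotheses of Theorem 2.2 below are met.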

The following result follows directly from Lemma 2.2.

Theorem 2.1. Let $M_a \in \mathbb{R}^{n\times n}$ and $\Lambda_e \in \mathbb{R}^{m\times m}$ be symmetric positive definite and diagonal positive definite, respectively, and let $X_e \in \mathbb{R}^{n\times m}$ $(m \le n)$. Assume that $\mathrm{rank}(X_e) = r < m$ and the SVD of $X_e$ is

$$X_e = U \begin{pmatrix} \Sigma & 0\\ 0 & 0 \end{pmatrix} V^T,$$

where $U = [U_1, U_2] \in \mathbb{R}^{n\times n}$ and $V = [V_1, V_2] \in \mathbb{R}^{m\times m}$ are orthogonal matrices, $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r) > 0$, $U_1 \in \mathbb{R}^{n\times r}$, $V_1 \in \mathbb{R}^{m\times r}$. Then $S_1$ is nonempty if and only if

$$M_a X_e \Lambda_e V_2 = 0, \qquad U_1^T M_a X_e \Lambda_e V_1 \Sigma^{-1} = \Sigma^{-1} V_1^T \Lambda_e X_e^T M_a U_1 \succeq 0, \tag{17}$$

$$\mathrm{rank}(U_1^T M_a X_e \Lambda_e V_1 \Sigma^{-1},\ \Sigma^{-1} V_1^T \Lambda_e X_e^T M_a U_2) = \mathrm{rank}(U_1^T M_a X_e \Lambda_e V_1 \Sigma^{-1}), \tag{18}$$

in which case an element of $S_1$ can be written as

$$K = U \begin{pmatrix} U_1^T M_a X_e \Lambda_e V_1 \Sigma^{-1} & \Sigma^{-1} V_1^T \Lambda_e X_e^T M_a U_2\\ U_2^T M_a X_e \Lambda_e V_1 \Sigma^{-1} & T \end{pmatrix} U^T, \tag{19}$$

where $T = G + U_2^T M_a X_e \Lambda_e V_1 \Sigma^{-1} (U_1^T M_a X_e \Lambda_e V_1 \Sigma^{-1})^{+} \Sigma^{-1} V_1^T \Lambda_e X_e^T M_a U_2$ and $G \in \mathbb{R}^{(n-r)\times(n-r)}$ is an arbitrary symmetric positive semidefinite matrix.

If the mode matrix $X_e$ obtained via the vibration test is of full column rank, that is, the measured eigenvectors are linearly independent, the following result is straightforward from Lemma 2.3.

Theorem 2.2. Let $M_a \in \mathbb{R}^{n\times n}$ and $\Lambda_e \in \mathbb{R}^{m\times m}$ be symmetric positive definite and diagonal positive definite, respectively, and let $X_e \in \mathbb{R}^{n\times m}$ $(m \le n)$. Assume that $\mathrm{rank}(X_e) = m$ and the SVD of $X_e$ is

$$X_e = U \begin{pmatrix} \Sigma\\ 0 \end{pmatrix} V^T, \tag{20}$$

where $U = [U_1, U_2] \in \mathbb{R}^{n\times n}$ and $V \in \mathbb{R}^{m\times m}$ are orthogonal matrices, $U_1 \in \mathbb{R}^{n\times m}$, $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_m) > 0$. Then $S_1$ is nonempty if and only if

$$U_1^T M_a X_e \Lambda_e V \Sigma^{-1} = \Sigma^{-1} V^T \Lambda_e X_e^T M_a U_1 \succeq 0, \tag{21}$$

$$\mathrm{rank}(U_1^T M_a X_e \Lambda_e V \Sigma^{-1},\ \Sigma^{-1} V^T \Lambda_e X_e^T M_a U_2) = \mathrm{rank}(U_1^T M_a X_e \Lambda_e V \Sigma^{-1}), \tag{22}$$


in which case an element of $S_1$ can be written as

$$K = U \begin{pmatrix} U_1^T M_a X_e \Lambda_e V \Sigma^{-1} & \Sigma^{-1} V^T \Lambda_e X_e^T M_a U_2\\ U_2^T M_a X_e \Lambda_e V \Sigma^{-1} & T \end{pmatrix} U^T, \tag{23}$$

where $T = G + U_2^T M_a X_e \Lambda_e V \Sigma^{-1} (U_1^T M_a X_e \Lambda_e V \Sigma^{-1})^{+} \Sigma^{-1} V^T \Lambda_e X_e^T M_a U_2$ and $G \in \mathbb{R}^{(n-m)\times(n-m)}$ is an arbitrary symmetric positive semidefinite matrix.

The following conclusions are direct consequences of the definitions of $S_1$ and $S_2$.

Theorem 2.3. Let the matrices $M_a \in \mathbb{R}^{n\times n}$, $\Lambda_e \in \mathbb{R}^{m\times m}$, $X_e \in \mathbb{R}^{n\times m}$ $(m \le n)$ be as defined in Theorem 2.1. If $\mathrm{rank}(X_e) = r < m$, conditions (17) and (18) hold, and there exists a symmetric positive semidefinite matrix $G \in \mathbb{R}^{(n-r)\times(n-r)}$ such that the stiffness matrix $K$ described in Eq. (19) satisfies the sparsity requirement, then the feasible region of problem (2) is nonempty.

Theorem 2.4. Let the matrices $M_a \in \mathbb{R}^{n\times n}$, $\Lambda_e \in \mathbb{R}^{m\times m}$, $X_e \in \mathbb{R}^{n\times m}$ $(m \le n)$ be as defined in Theorem 2.2. If $\mathrm{rank}(X_e) = m$, conditions (21) and (22) hold, and there exists a symmetric positive semidefinite matrix $G \in \mathbb{R}^{(n-m)\times(n-m)}$ such that the stiffness matrix $K$ described in Eq. (23) satisfies the sparsity requirement, then the feasible region of problem (2) is nonempty.

Since the feasible region $D$ is a closed convex set, it follows from best approximation theory [32] that problem (2) has a unique solution if $D$ is nonempty.

3. Equivalent variational inequality form and the algorithm

To describe the sparsity requirement on $K$, we define the index set

$$I_{\mathrm{spar}} = \{(i,j) \mid k_{ij} = 0,\ K_0 = (k_{ij}) \in \mathbb{R}^{n\times n}\},$$

and two symmetric matrices $H_1, H_2 \in \mathbb{R}^{n\times n}$ by

$$(H_1)_{ij} = \begin{cases} -N, & (i,j) \notin I_{\mathrm{spar}},\\ 0, & (i,j) \in I_{\mathrm{spar}}, \end{cases} \qquad (H_2)_{ij} = \begin{cases} N, & (i,j) \notin I_{\mathrm{spar}},\\ 0, & (i,j) \in I_{\mathrm{spar}}, \end{cases} \tag{24}$$

where $N$ is a pre-given sufficiently large positive number. By the definitions of $H_1$ and $H_2$, the sparsity requirement on $K$ (still denoted as $S_2$) can be rewritten as

$$S_2 = \{K \in \mathbb{R}^{n\times n} \mid H_1 \le K \le H_2\}.$$

By the definitions of $S_1$ and $S_2$, problem (2) can be equivalently written as

$$\begin{aligned} \min\quad & \tfrac{1}{2}\|K - K_a\|_F^2,\\ \text{s.t.}\quad & K \in S_1,\\ & K - H_1 \ge 0,\quad H_2 - K \ge 0. \end{aligned} \tag{25}$$
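The bound matrices of Eq. (24) are straightforward to build from the zero/nonzero pattern of $K_0$. A small sketch follows; the function name and the default value of $N$ are illustrative assumptions.

```python
import numpy as np

def sparsity_bounds(K0, N=1e8):
    """H1, H2 of Eq. (24): entries in I_spar (where K0 is zero) are pinned to 0,
    all other entries may range over [-N, N]."""
    in_Ispar = (K0 == 0)
    H1 = np.where(in_Ispar, 0.0, -N)
    H2 = np.where(in_Ispar, 0.0, N)
    return H1, H2
```

The box constraint $H_1 \le K \le H_2$ then enforces $\mathrm{sparse}(K) = \mathrm{sparse}(K_0)$ on $I_{\mathrm{spar}}$ and is inactive elsewhere provided $N$ exceeds any admissible stiffness entry.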

The following two conclusions are well known in optimization theory and applications and are helpful in our derivation [27].

Lemma 3.1. Consider the convex programming problem

$$\min\ f(x) \quad \text{s.t.}\quad x \in \Omega,$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is a convex continuously differentiable function and $\Omega \subseteq \mathbb{R}^n$ is a closed convex set. Then $x^*$ is a minimizer if and only if

$$x^* \in \Omega, \qquad \langle x - x^*, \nabla f(x^*) \rangle \ge 0, \quad \forall x \in \Omega.$$

Lemma 3.2. Consider the constrained minimization problem

$$\min\ f(x) \quad \text{s.t.}\quad g_i(x) \ge 0,\ i = 1, \ldots, m, \qquad h_j(x) = 0,\ j = 1, \ldots, l,$$

where $f(x)$, $g_i(x)$, $i = 1, \ldots, m$, and $h_j(x)$, $j = 1, \ldots, l$, are continuous functions from $\mathbb{R}^n$ to $\mathbb{R}$, and denote the feasible region of the problem by $D$. Define $g(x) = (g_1(x), \ldots, g_m(x))^T$ and $h(x) = (h_1(x), \ldots, h_l(x))^T$, and let

$$L(x, u, v) = f(x) - u^T g(x) + v^T h(x)$$

be the Lagrangian function of the minimization problem with Lagrangian multipliers $u \in \mathbb{R}^m$, $u \ge 0$, and $v \in \mathbb{R}^l$. Assume that $f(x)$ and $g(x)$ are convex functions, $h(x)$ is a linear function, and the minimization problem satisfies the Slater constraint qualification.


Then $x^* \in D$ and $u^* \ge 0$, $v^*$ satisfy the Karush–Kuhn–Tucker conditions if and only if

$$L(x^*, u, v) \le L(x^*, u^*, v^*) \le L(x, u^*, v^*)$$

for all $x \in D$ and all $u \ge 0$, $v$.

Attaching Lagrangian multiplier matrices $Y, Z \ge 0$ to the constraints $K - H_1 \ge 0$ and $H_2 - K \ge 0$, respectively, we obtain the following partial Lagrangian function of problem (25):

$$L(K, Y, Z) = \tfrac{1}{2}\|K - K_a\|_F^2 - \langle Y, K - H_1\rangle - \langle Z, H_2 - K\rangle, \tag{26}$$

which is defined on $\Omega = S_1 \times \mathbb{R}^{n\times n}_0 \times \mathbb{R}^{n\times n}_0$. It is easy to verify that the gradients of $L(K, Y, Z)$ with respect to $K$, $Y$ and $Z$ are

$$\frac{\partial L}{\partial K} = K - K_a - Y + Z, \qquad \frac{\partial L}{\partial Y} = -(K - H_1), \qquad \frac{\partial L}{\partial Z} = -(H_2 - K),$$

respectively. Let $(K^*, Y^*, Z^*) \in \Omega$ be a KKT point of problem (25). Using Lemma 3.1 and noticing that $K^*$ is a minimizer of $L(K, Y, Z)$ with respect to $K$, while $Y^*$ and $Z^*$ are maximizers of $L(K, Y, Z)$ with respect to $Y$ and $Z$, respectively, one gets

$$\begin{cases} \langle K - K^*,\ K^* - K_a - Y^* + Z^*\rangle \ge 0,\\ \langle Y - Y^*,\ K^* - H_1\rangle \ge 0,\\ \langle Z - Z^*,\ H_2 - K^*\rangle \ge 0, \end{cases} \qquad \forall (K, Y, Z) \in \Omega. \tag{27}$$

Denote

$$u = \begin{pmatrix} K\\ Y\\ Z \end{pmatrix}, \qquad u^* = \begin{pmatrix} K^*\\ Y^*\\ Z^* \end{pmatrix}, \qquad M = \begin{pmatrix} I_n & -I_n & I_n\\ I_n & 0 & 0\\ -I_n & 0 & 0 \end{pmatrix} \qquad \text{and} \qquad q = \begin{pmatrix} -K_a\\ -H_1\\ H_2 \end{pmatrix};$$

then (27) is equivalent to the following matrix linear variational inequality:

$$u^* \in \Omega, \qquad \langle u - u^*,\ M u^* + q\rangle \ge 0, \quad \forall u \in \Omega. \tag{28}$$
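It is worth noting (a one-line check, stated here for completeness) that the mapping $u \mapsto Mu + q$ in (28) is monotone, since the symmetric part of $M$ is positive semidefinite:

$$\tfrac{1}{2}(M + M^T) = \begin{pmatrix} I_n & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} \succeq 0;$$

hence (28) is a monotone linear variational inequality of the type for which the projection and contraction method of He [28] was developed.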

Remark 3.1. It should be pointed out that problem (28) is called a matrix linear variational inequality (MLVI); it was introduced by He [33] to solve the problem of finding the projection of a given matrix onto the intersection of two closed convex sets.

To describe the projection and contraction algorithm [33] for solving the matrix linear variational inequality (28), denote

$$e(u) = \begin{pmatrix} e_K(u)\\ e_Y(u)\\ e_Z(u) \end{pmatrix} = \begin{pmatrix} K - P_{S_1}(Y - Z + K_a)\\ Y - P_{\mathbb{R}^{n\times n}_0}(Y - K + H_1)\\ Z - P_{\mathbb{R}^{n\times n}_0}(Z + K - H_2) \end{pmatrix}.$$

Algorithm 3.1 (Projection and contraction method for the MLVI). Given $M_a, K_a \in \mathbb{R}^{n\times n}$, $X_e \in \mathbb{R}^{n\times m}$, $\Lambda_e \in \mathbb{R}^{m\times m}$, $u^{(0)} = (K^0; Y^0; Z^0) \in \Omega$, $N > 0$, $k = 0$ and $0 < \gamma < 2$; $\varepsilon > 0$ is the pre-established tolerance.

(1) Use $N$ to construct $H_1$ and $H_2$.
(2) Compute $(I_{3n} + M)^{-1}$.
(3) Do $u^{(k+1)} = u^{(k)} - \gamma (I + M)^{-1} e(u^{(k)})$, $k = k + 1$, until $\max(|e(u^{(k)})|)/\max(|e(u^{(0)})|) \le \varepsilon$.
(4) Set the solution $u^* = u^{(k)}$.

Theorem 3.1. If the solution set $D^*$ of problem (2) is nonempty, then the sequence $\{u^{(k)}\}$ generated by Algorithm 3.1 is well defined and converges to a solution of problem (2).

Proof. The conclusion follows directly from the convergence analysis of the projection and contraction method by He [33]. □
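A compact sketch of the iteration in Algorithm 3.1 is given below. It assumes that a routine `proj_S1` computing the projection $P_{S_1}$ is supplied by the user (that projection is the expensive part of the method and is not spelled out here); the blockwise form of $(I_{3n}+M)^{-1}$ used in the sketch simply exploits the fact that every block of $I + M$ is a scalar multiple of $I_n$.

```python
import numpy as np

# scalar weights of (I_{3n} + M)^{-1}: inverse of the 3x3 matrix [[2,-1,1],[1,1,0],[-1,0,1]]
INV_I_PLUS_M = np.array([[ 1.0, 1.0, -1.0],
                         [-1.0, 3.0,  1.0],
                         [ 1.0, 1.0,  3.0]]) / 4.0

def pc_mlvi(Ka, H1, H2, proj_S1, gamma=1.8, eps=1e-7, max_iter=10_000):
    """Projection and contraction iteration for the MLVI (28) (sketch)."""
    n = Ka.shape[0]
    K, Y, Z = proj_S1(Ka), np.zeros((n, n)), np.zeros((n, n))   # u^(0) in Omega

    def e(K, Y, Z):                       # residual e(u) of Remark 3.1
        eK = K - proj_S1(Y - Z + Ka)
        eY = Y - np.maximum(Y - K + H1, 0.0)   # projection onto the nonnegative orthant
        eZ = Z - np.maximum(Z + K - H2, 0.0)
        return eK, eY, eZ

    ref = max(np.abs(b).max() for b in e(K, Y, Z))
    for _ in range(max_iter):
        r = e(K, Y, Z)
        if max(np.abs(b).max() for b in r) <= eps * ref:
            break
        # blockwise application of (I + M)^{-1} to the residual
        dK = sum(INV_I_PLUS_M[0, j] * r[j] for j in range(3))
        dY = sum(INV_I_PLUS_M[1, j] * r[j] for j in range(3))
        dZ = sum(INV_I_PLUS_M[2, j] * r[j] for j in range(3))
        K, Y, Z = K - gamma * dK, Y - gamma * dY, Z - gamma * dZ
    return K
```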

4. Numerical tests

In this section, we present two numerical examples to show the application of Algorithm 3.1 for solving problem (2). All codes are run in MATLAB on a Pentium IV with a CPU clock frequency of 2.93 GHz.


Example 4.1. A linear, undamped spring–mass system, including the spring stiffness and mass values, is shown in Fig. 1. The parameters are $m_i = 2$ kg, $i = 1, 2, \ldots, 5$, and $k_i = 1$ N/m, $i = 1, 2, \ldots, 6$. The exact stiffness matrix and mass matrix are denoted as $K^*$ and $M^*$, respectively:

$$K^* = \begin{pmatrix} 2 & -1 & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & 0\\ 0 & -1 & 2 & -1 & 0\\ 0 & 0 & -1 & 2 & -1\\ 0 & 0 & 0 & -1 & 2 \end{pmatrix}, \qquad M^* = \begin{pmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 2 & 0 & 0 & 0\\ 0 & 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}.$$

To illustrate the proposed method, a noised initial stiffness matrix, denoted $K_a$, is considered:

$$K_a = \begin{pmatrix} 2 & -1 & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & 0\\ 0 & -1 & 2 & -1 & 0\\ 0 & 0 & -1 & 2 & -1.2\\ 0 & 0 & 0 & -1.2 & 2 \end{pmatrix}.$$

The first three exact eigenvalues and corresponding eigenvectors are used to update $K_a$; they are collected in $\Lambda_e$ and $X_e$, respectively:

$$\Lambda_e = \begin{pmatrix} 0.0995 & 0 & 0\\ 0 & 0.4532 & 0\\ 0 & 0 & 1.0000 \end{pmatrix}, \qquad X_e = \begin{pmatrix} 0.1739 & 0.3563 & 0.4308\\ 0.3133 & 0.3897 & 0\\ 0.3903 & 0.0699 & 0.4308\\ 0.3897 & 0.3133 & 0\\ 0.2596 & 0.3437 & 0.3590 \end{pmatrix}.$$

With the pre-established tolerance $\varepsilon = 10^{-7}$, using Algorithm 3.1 to solve problem (2), we update the stiffness matrix in 0.047 s (CPU time) and 92 iteration steps. The final $K$ is

$$K = \begin{pmatrix} 2.0000 & -1.0000 & 0.0000 & 0.0000 & 0.0000\\ -1.0000 & 2.0000 & -1.0000 & 0.0000 & 0.0000\\ 0.0000 & -1.0000 & 2.0000 & -1.0000 & 0.0000\\ 0.0000 & 0.0000 & -1.0000 & 2.0000 & -1.0000\\ 0.0000 & 0.0000 & 0.0000 & -1.0000 & 2.0000 \end{pmatrix}.$$

The result shows that, for this low order model, Algorithm 3.1 reproduces the exact stiffness matrix with little CPU time.
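For reference, the following short sketch shows how eigendata for a fixed–fixed spring–mass chain like the one in Fig. 1 can be generated; the helper name is an assumption, and `scipy.linalg.eigh` returns mass-normalized eigenvectors of the pencil, which is the usual convention for such data.

```python
import numpy as np
from scipy.linalg import eigh

def chain_matrices(k, m):
    """Stiffness and mass matrices of a fixed-fixed spring-mass chain with
    n masses m[0..n-1] and n+1 springs k[0..n]."""
    n = len(m)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = k[i] + k[i + 1]
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return K, np.diag(m)

K_star, M_star = chain_matrices([1.0] * 6, [2.0] * 5)   # parameters of Example 4.1
lam, X = eigh(K_star, M_star)                            # eigenpairs of the pencil (K*, M*)
Lambda_e, X_e = np.diag(lam[:3]), X[:, :3]               # keep the first three modes
```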

Example 4.2. The deflection of a plate is considered. The plate is fixed at two sides. Excitation is applied at point A by the hammering method and sensors are placed at 70 different nodes (for example, point B) to measure the acceleration response, as shown in Fig. 2. An HP35670 signal analyzer is used to collect the experimental data. The sampling frequency is 1000 Hz and the frequency range is 0–400 Hz.

Fig. 1. Spring–mass system.

Fig. 2. Two-side fixed plate.


The properties of the plate are the following: $h = 0.01$ m, $L = 1.2$ m, $w = 0.6$ m, modulus of elasticity $= 7.0 \times 10^{10}$ N/m$^2$, Poisson's ratio $= 0.33$ and density $= 2800$ kg/m$^3$. The finite element model is built with Patran 2008 r2 and analyzed with Nastran. There are 150 rectangular elements and 182 nodes with 6 degrees of freedom per node. The analytical mass matrix $M_a$ of size $1008 \times 1008$ and the stiffness matrix $K_a$ of the same size are calculated. The first 10 frequencies are measured, and the corresponding mode shapes are measured at only 70 nodes, so the measured data are incomplete. Since the analytical matrices do not coincide with the measured data, the stiffness matrix should be modified to match the experimental data better. By Kidder's dynamic expansion method [34], the noisy and incomplete measured mode shapes are expanded and then used to update the stiffness matrix. The updated stiffness matrix $K$ should be symmetric, positive semidefinite and possess the same sparse pattern as the analytical one; Fig. 3 illustrates the sparsity requirement on $K$. There are 35,368 nonzero entries in $K_a$. Since the purpose of the paper is to update the analytical stiffness matrix using measured frequencies and modes, the FRF and the MAC numbers for the first frequencies before updating are shown in Fig. 4 and Table 1, respectively. The updating scheme changes the entries of the stiffness matrix, so the FRF and MAC numbers cannot be calculated after updating.
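Kidder's dynamic expansion [34], used here to fill in the unmeasured degrees of freedom, can be sketched as follows: partition $(K_a - \lambda M_a)x = 0$ into measured (master) and unmeasured (slave) degrees of freedom and solve the slave block for the missing components. This is a minimal sketch under that standard formulation; the function name and index handling are assumptions.

```python
import numpy as np

def kidder_expand(Ka, Ma, x_m, lam, measured):
    """Expand a mode x_m, known only at the master DOFs `measured`, to the full
    DOF set: the slave partition of (Ka - lam*Ma) x = 0 is solved for the rest."""
    n = Ka.shape[0]
    slave = np.setdiff1d(np.arange(n), measured)
    D = Ka - lam * Ma
    x = np.zeros(n)
    x[measured] = x_m
    x[slave] = -np.linalg.solve(D[np.ix_(slave, slave)], D[np.ix_(slave, measured)] @ x_m)
    return x
```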

Fig. 3. Sparsity requirements on the updated stiffness matrix (nz = 17 946).

Fig. 4. FRF before updating.


Table 1
MAC number before updating.

Frequency number    MAC number
1                   3.7252
2                   0
3                   0
4                   0
5                   0.1299
6                   1.6335
7                   0
8                   0
9                   0
10                  0.1221

Table 2
Relative errors between measured frequencies and reproduced data.

Frequency number    Measured (Hz)    Reproduced (Hz)    Relative error (%)
1                   37.089           36.9056            0.9873
2                   57.901           59.4401            5.387
3                   102.216          102.6707           0.8917
4                   132.7572         134.8919           3.2418
5                   167.7277         164.5537           3.7489
6                   200.7584         199.0014           1.7427
7                   233.4645         232.8360           0.5376
8                   239.2103         238.2915           0.7667
9                   332.1588         329.6599           1.4989
10                  339.7893         337.9005           1.1087

With the pre-established tolerance $\varepsilon = 10^{-5}$, using Algorithm 3.1 to solve problem (2), we update $K_a$ in $3.4099 \times 10^{3}$ s (CPU time) and 197 iteration steps. Let $\lambda_1^{(e)}, \ldots, \lambda_{10}^{(e)}$ and $\lambda_1, \ldots, \lambda_{10}$ be the 10 lower order measured eigenvalues and the reproduced data, respectively. The relative errors between the reproduced eigenvalues and the measured data are calculated as

$$\text{Relative error between eigenvalues} = \frac{|\lambda_i - \lambda_i^{(e)}|}{|\lambda_i^{(e)}|} \times 100\%, \qquad i = 1, \ldots, 10.$$
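Table 2 reports frequencies in Hz, while the error formula above is stated for eigenvalues. Assuming the standard relation $\lambda = (2\pi f)^2$ between an eigenvalue of the pencil and the corresponding natural frequency $f$, the percentage column of Table 2 can be recomputed as follows (the values agree with the table up to rounding):

```python
import numpy as np

# measured and reproduced frequencies (Hz) from Table 2
f_meas = np.array([37.089, 57.901, 102.216, 132.7572, 167.7277,
                   200.7584, 233.4645, 239.2103, 332.1588, 339.7893])
f_repr = np.array([36.9056, 59.4401, 102.6707, 134.8919, 164.5537,
                   199.0014, 232.8360, 238.2915, 329.6599, 337.9005])

# eigenvalues relate to natural frequencies by lambda = (2*pi*f)**2
lam_meas, lam_repr = (2 * np.pi * f_meas) ** 2, (2 * np.pi * f_repr) ** 2
rel_err = np.abs(lam_repr - lam_meas) / np.abs(lam_meas) * 100
print(np.round(rel_err, 4))   # approx. [0.99, 5.39, 0.89, 3.24, 3.75, 1.74, 0.54, 0.77, 1.50, 1.11]
```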

The relative errors between the reproduced eigenvalues and the measured data are listed in Table 2. By Kidder's dynamic expansion method, the tested mode shapes are expanded from only 70 of the 182 nodes. The updating result reproduces the frequencies within an acceptable range, and the updated matrix is symmetric positive semidefinite and satisfies the sparsity requirement. So, for noisy and incomplete measured data, the result of this example provides insight into the robustness of Algorithm 3.1.

5. Conclusions

In this paper, a new model updating method is proposed using a matrix linear variational inequality. On the assumption that the mass matrix is exact, the stiffness matrix is updated to satisfy the desired properties, namely the dynamic equation, symmetry, positive semidefiniteness and the sparsity requirement. By using partial Lagrangian multipliers, an equivalent matrix linear variational inequality (MLVI) is derived and solved by an extended projection and contraction method. Numerical examples show that the proposed method works well: for a low order model the proposed algorithm provides sufficiently accurate results, and for a large scale model it still performs well even with noisy and incomplete measured data, which indicates that the proposed method is robust.

Acknowledgments

The author is grateful to the anonymous referees for valuable comments and suggestions which helped to improve the exposition of this paper.

References

[1] M. Imregun, W.J. Visser, A review of model updating techniques, Shock Vib. Dig. 23 (1991) 9–20.
[2] J.E. Mottershead, M.I. Friswell, Model updating in structural dynamics: a survey, J. Sound Vib. 167 (1993) 347–375.


[3] M.I. Friswell, J.E. Mottershead, Finite Element Model Updating in Structural Dynamics, Kluwer Academic Publishers, The Netherlands, 1995.
[4] D.W. Zhang, F.X. Wei, Model Updating and Damage Detection, Science Press, China, 1999 (in Chinese).
[5] A.W. Zhu, G.J. Qu, Y.N. Gao, Z.S. Wei, A survey of the modifying techniques of structure dynamic models, Adv. Mech. 32 (2002) 337–348 (in Chinese).
[6] M. Baruch, I.Y. Bar-Itzhack, Optimal weighted orthogonalization of measured modes, AIAA J. 16 (1978) 346–351.
[7] M. Baruch, Optimization procedure to correct stiffness and flexibility matrices using vibration tests, AIAA J. 16 (1978) 1208–1210.
[8] F.S. Wei, Stiffness matrix correction from incomplete test data, AIAA J. 18 (1980) 1274–1275.
[9] H. Dai, Stiffness matrix correction using test data, Acta Aeronaut. Astronaut. Sin. 9 (1994) 1091–1094.
[10] A.M. Kabe, Stiffness matrix adjustment using mode data, AIAA J. 23 (1985) 1431–1436.
[11] B. Caesar, J. Peter, Direct update of dynamic mathematical models from modal test data, AIAA J. 25 (1987) 1494–1499.
[12] D.C. Kammer, Optimum approximation for residual stiffness in linear system identification, AIAA J. 26 (1988) 104–112.
[13] S.W. Smith, C.A. Beattie, Secant-method adjustment for structural models, AIAA J. 29 (1991) 119–126.
[14] C.A. Beattie, S.W. Smith, Optimal matrix approximants in structural identification, J. Optim. Theory Appl. 74 (1992) 23–56.
[15] O. Zhang, A. Zerva, Stiffness matrix adjustment using incomplete measured modes, AIAA J. 35 (1997) 917–919.
[16] Y. Halevi, I. Bucher, Model updating via weighted reference basis with connectivity constraints, J. Sound Vib. 265 (2003) 561–581.
[17] B.H. Sako, A.M. Kabe, Direct least-squares formulation of a stiffness adjustment method, AIAA J. 43 (2005) 413–419.
[18] N. Higham, Computing a nearest symmetric positive semidefinite matrix, Linear Algebra Appl. 103 (1988) 103–118.
[19] N. Higham, Computing the nearest correlation matrix: a problem from finance, IMA J. Numer. Anal. 22 (2002) 329–343.
[20] B. Stephen, X. Lin, Least-square covariance matrix adjustment, SIAM J. Matrix Anal. Appl. 27 (2005) 532–546.
[21] A. D'Aspremont, O. Banerjee, L. Ghaoui, First-order methods for sparse covariance selection, SIAM J. Matrix Anal. Appl. 30 (2005) 56–66.
[22] Z.S. Lu, Smooth optimization approach for sparse covariance selection, SIAM J. Matrix Anal. Appl. 19 (2009) 1807–1827.
[23] R. Escalante, M. Raydan, Dykstra's algorithm for a constrained least-squares matrix problem, Numer. Linear Algebra Appl. 3 (6) (1996) 459–471.
[24] R.L. Dykstra, An algorithm for restricted least-squares regression, J. Am. Stat. Assoc. 78 (1983) 837–842.
[25] J.P. Boyle, R.L. Dykstra, A method for finding projections onto the intersection of convex sets in Hilbert spaces, Lecture Notes in Statistics, vol. 37, 1986, pp. 28–47.
[26] J. Moreno, B. Datta, M. Raydan, A symmetric preserving alternating projection method for matrix model updating, Mech. Syst. Signal Process. 23 (2009) 1784–1791 (Special Issue on Inverse Problems).
[27] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, Berlin, 1999.
[28] B.S. He, A new method for a class of linear variational inequalities, Math. Program. 66 (1994) 137–144.
[29] J.S. Pang, D. Chan, Iterative methods for variational and complementarity problems, Math. Program. 24 (1982) 284–313.
[30] P.T. Harker, J.S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms, and applications, Math. Program. 48 (1990) 161–220.
[31] A. Albert, Conditions for positive and nonnegative definiteness in terms of pseudo inverse, SIAM J. Appl. Math. 17 (1969) 430–440.
[32] H. Dai, The Theory of Matrices, Science Press, China, 2001 (in Chinese).
[33] B.S. He, Solving a class of matrix minimization problems by linear variational inequality approaches, in: The Second International Conference on Numerical Algebra and Scientific Computing (NASC08), Nanjing Normal University, Nanjing, China.
[34] R.L. Kidder, Reduction of structural frequencies equations, AIAA J. 11 (6) (1973) 892.