Mathematical and Computer Modelling 44 (2006) 692–700 www.elsevier.com/locate/mcm
Computing eigenpairs of quadratic eigensystems

Mohammedi R. Abdel-Aziz ∗,1
Department of Mathematics and Computer Science, Kuwait University, Kuwait

Received 20 October 2004; received in revised form 15 July 2005; accepted 4 February 2006
Abstract

The main objective of this paper is to introduce an algorithm for solving dependent eigenproblems. The global matrices of these eigenproblems are formulated by using a mixed finite element approach, which uses both polynomial and frequency dependent displacement fields. The algorithm is based on solving a sequence of linear eigenproblems by an appropriate eigensolver and updating the parameter by a safeguarded zerofinder. The eigensolution technique that has been developed is designed to evaluate a specific set of parameterized eigenvalues. Numerical experiments show that this eigentechnique accurately evaluates the first ten eigenvalues of quadratic eigenproblems.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: Implicitly restarted Arnoldi method; Dependent eigensystems; Safeguarded zerofinder
1. Introduction

Eigensystems arise through the discretization of a differential operator to approximate some of its spectral properties. However, there are many additional sources for these systems [1,2]. Independent and dependent eigenproblems can arise in damped oscillatory systems governed by the system of differential equations

A d²u/dt² + B du/dt + C u = f,
in which u and f are the displacement and force vectors, and the matrices A, B and C are the system mass, damping and stiffness matrices, respectively. Using the conventional finite element method, the system matrices are frequency independent and they result in a quadratic eigenproblem of size n of the form

[λ² A + λ B + C] y = 0,    (1.1)

where (λ, y) are the eigenpairs [3].

∗ Tel.: +965 4811188; fax: +965 481 7201.
E-mail address: [email protected].
1 On leave from the Department of Mathematics, Faculty of Science, Alexandria University, Egypt.
doi:10.1016/j.mcm.2006.02.005
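For frozen matrices, the quadratic eigenproblem (1.1) can be solved through a linear generalized eigenproblem. The sketch below uses the standard first companion linearization (not the paper's own formulation, which comes later) with small made-up matrices standing in for A, B and C.

```python
import numpy as np

# Hypothetical small test matrices standing in for mass, damping, stiffness.
rng = np.random.default_rng(0)
n = 4
A = np.eye(n)                              # mass (identity for simplicity)
B = 0.1 * rng.standard_normal((n, n))      # light damping
C = rng.standard_normal((n, n))
C = C @ C.T + n * np.eye(n)                # symmetric positive definite stiffness

# First companion linearization of lam^2 A + lam B + C = 0:
# [[-B, -C], [I, 0]] [lam*y; y] = lam [[A, 0], [0, I]] [lam*y; y]
Z, I = np.zeros((n, n)), np.eye(n)
L1 = np.block([[-B, -C], [I, Z]])
L2 = np.block([[A, Z], [Z, I]])
lam = np.linalg.eigvals(np.linalg.solve(L2, L1))   # 2n eigenvalues

# Each computed eigenvalue should make lam^2 A + lam B + C (nearly) singular.
for l in lam:
    P = l**2 * A + l * B + C
    assert np.linalg.svd(P, compute_uv=False).min() < 1e-6
```

The block structure encodes the substitution x = [λy; y], so the first block row reproduces λ²Ay + λBy + Cy = 0 and the second is the identity λy = λ·y.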
Some techniques for studying problem (1.1) have been introduced. A second-order Krylov method was proposed in [4]; it is based on transforming problem (1.1) into a generalized eigenproblem and constructing a subspace from a sequence of vectors defined by a second-order linear homogeneous recurrence relation involving the coefficient matrices of the generalized eigenproblem and an initial vector. A review of the existing methods for solving problem (1.1) and a perturbation theory are given in [5,6].

In the mixed finite element method, both the exact and polynomial displacement fields are used in formulating the vibration eigenproblem. This method computes inertia forces exactly using the frequency dependent shape functions, and an equivalent load theory is used to represent the exact element inertia forces at the element nodes [7]. The resulting eigensystem is

[K − λ M(λ)] x = 0,    (1.2)
where K and M(λ) are the system stiffness and mass matrices derived from the potential and the kinetic energy expressions, respectively. These matrices are symmetric positive definite and the entries of the mass matrix are functions of the real eigenvalue λ = ν². The monotonicity and convexity properties of the eigenvalue functions of problem (1.2) are investigated in [8,9].

Iterative algorithms to determine a few eigenvalues and their associated eigenvectors require repeated solutions of linear eigenvalue problems involving one of the matrices and matrix–vector products with the other matrix. These computations are arranged such that the solution of the linear eigenproblem involves only the frequency independent stiffness matrix, which is factored only once throughout the entire eigenvalue computation, while the matrix–vector product computations involve the frequency dependent mass matrix. The important advantage of these types of algorithms is that they do not require any determinant evaluation or derivatives. For a discussion of the mixed formulation, a comparison between the finite element formulations, and an iterative algorithm to compute the smallest ten eigenvalues of problem (1.2), we refer the reader to [9–11], respectively. The mixed finite element method can be used for damped free vibration, and problem (1.1) becomes

[λ² A(ν) + λ B(ν) + C] y = 0.    (1.3)
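The linearize-solve-update iteration described above for problem (1.2) can be sketched on a toy instance. The frequency dependence M(λ) = M0 + λM1 below is invented purely for illustration; the paper's M(λ) comes from the mixed finite element shape functions.

```python
import numpy as np

# Toy instance of [K - lam*M(lam)] x = 0 with a made-up dependence
# M(lam) = M0 + lam*M1 (NOT the paper's mass matrix, just a stand-in).
rng = np.random.default_rng(1)
n = 6
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)    # SPD stiffness
M0 = rng.standard_normal((n, n)); M0 = M0 @ M0.T + n * np.eye(n)
M1 = 0.01 * np.eye(n)

def M(lam):
    return M0 + lam * M1

# Linearize-solve-update: freeze lam inside M, solve the linear generalized
# eigenproblem K x = mu M(lam) x for the smallest mu, set lam <- mu, repeat.
lam = 0.0
for _ in range(50):
    mu = np.linalg.eigvals(np.linalg.solve(M(lam), K))
    new = min(mu.real)                 # smallest eigenvalue of the frozen pencil
    if abs(new - lam) <= 1e-12 * max(1.0, abs(new)):
        lam = new
        break
    lam = new

# At the fixed point, K - lam*M(lam) is (numerically) singular.
res = np.linalg.svd(K - lam * M(lam), compute_uv=False).min()
assert res < 1e-8
```

Because M1 is small, the map λ ↦ µ_min(λ) is a strong contraction here and the iteration settles in a few steps; the paper's monotonicity results [8,9] play the analogous role for the real problem.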
The matrices A(ν) and B(ν) are nonsymmetric and their entries are functions of the natural frequency ν, and the independent matrix C is symmetric positive definite [12]. Of course, in formula (1.3), λ = λr + i λg, i = √−1, and ν = √(λr² + λg²). In [13], some algorithms have been introduced for solving problems of type (1.3); they are based on either factorizing a matrix or evaluating a derivative.

In this study, we present and discuss an iterative algorithm for computing a subset of eigenvalues (natural frequencies) and their corresponding eigenvectors (modes) of eigenproblem (1.3). The algorithm does not require any determinant evaluation or derivatives [21]. It involves two general parts: solving a sequence of linear eigenproblems by a suitable eigensolver (the implicitly restarted Arnoldi method, IRAM) and updating the parameter by a safeguarded zerofinder.

This paper is organized as follows. In Section 2, we reformulate problem (1.3) so that we can use a linear eigensolver to extract the required eigenvalues; a full description of the Arnoldi factorization and the IRAM is given in Section 2.1. Section 3 describes our algorithm, which is based on the idea of linearizing, solving and then updating: it uses IRAM to solve the linear eigenproblems that arise during the course of solving problem (1.3), and a zerofinder based on linear interpolation to update the parameter. Some computational results and concluding remarks are given in the last two sections.

2. Formulation of the problem

The dependent eigenproblem (1.3) has a tremendous amount of structure which must be exploited to design an efficient algorithm. In this section, we reformulate this problem so that we can use an eigensolver easily. The dependent problem (1.3) is equivalent to the eigenproblem

[A(ν) + µ B(ν) + µ² C] y = 0,    (2.1)

where µ = 1/λ (with this problem, λ is a complex number). Introducing the transformation

A(ν) y = µ z    (2.2)
into Eq. (2.1), we obtain µ[z + B(ν)y + µC y] = 0. But µ ≠ 0. Hence,

−B(ν) y − z = µ(ν) C y.    (2.3)

Combining formulas (2.2) and (2.3), we can write

[ −B(ν)  −I ] [ y ]          [ C  O ] [ y ]
[  A(ν)   O ] [ z ]  = µ(ν)  [ O  I ] [ z ],    (2.4)

where I and O are the n × n identity and zero matrices, respectively. It is clear that, for certain values of ν, eigenproblem (2.4) is equivalent to

D(ν) x = µ(ν) x,    (2.5)
where

D(ν) = [ L  O ]⁻¹ [ −B(ν)  −I ] [ L^t  O ]⁻¹
       [ O  I ]   [  A(ν)   O ] [ O    I ]

and

x = [ L^t  O ] [ y ]
    [ O    I ] [ z ],
and L is a lower triangular matrix that comes from factorizing the matrix C into L L^t. The independent quadratic eigenproblem (1.1) can be reformulated by using an independent transformation; the resulting eigenproblem is a nonsymmetric independent eigenproblem. Finding solutions of problem (2.5) and of the reformulated problem (1.1) depends on solving a sequence of linear eigenproblems. Thus, the following section presents an important linear eigentechnique for solving nonsymmetric eigenproblems.

2.1. The IRAM

The Arnoldi method was originally proposed by Arnoldi [14] as a method for the recursive computation of minimal polynomials of nonsymmetric matrices. Soon it became viewed as an efficient means to reduce a general matrix to upper Hessenberg form, from which the eigenvalues can be determined. More recently, the same algorithm has become popular as a method to compute a subset of eigenvalues of large sparse matrices. Arnoldi's method may be viewed as an orthogonal projection technique onto a Krylov space

F_j(A, v1) ≡ span{v1, A v1, . . . , A^{j−1} v1}    (2.6)
for a matrix A ∈ R^{n×n} generated by the starting vector v1. When A is a symmetric matrix, the procedure is equivalent to the Lanczos method [15]. After j steps, the Arnoldi process computes the factorization

A V = V H + r e_j^t    (2.7)
of A, where V^t V = I_j, H ∈ R^{j×j} is an upper Hessenberg matrix and e_j^t = (0, 0, . . . , 1) ∈ R^j. The residual vector r ∈ R^n is orthogonal to the columns of V (the Arnoldi vectors). The matrix V H V^t is the orthogonal projection of A onto the Krylov space (2.6). Ritz values and vectors are defined by a Galerkin condition: a vector x_r ∈ F_j(A, v1) is called a Ritz vector with corresponding Ritz value θ if

⟨y, A x_r − θ x_r⟩ = 0    for all y ∈ F_j(A, v1)

is satisfied. To check that (θ, x_r) is a Ritz pair, let H q = θ q, where ‖q‖ = 1. Define the vector x_r = V q to be the Ritz vector and the scalar θ to be the Ritz value. Then

‖A V q − V H q‖ = ‖A x_r − θ x_r‖ = ‖r‖ |e_j^t q|.    (2.8)
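A minimal Arnoldi process makes the factorization (2.7) and the Ritz estimate (2.8) concrete. The sketch below uses modified Gram-Schmidt and ignores breakdown and reorthogonalization; the function and variable names are ours, not from any library.

```python
import numpy as np

def arnoldi(Amat, v1, j):
    """j-step Arnoldi factorization  A V = V H + r e_j^t  (Eq. (2.7))."""
    n = Amat.shape[0]
    V = np.zeros((n, j + 1))
    H = np.zeros((j + 1, j))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for k in range(j):
        w = Amat @ V[:, k]
        for i in range(k + 1):                 # modified Gram-Schmidt
            H[i, k] = V[:, i] @ w
            w -= H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(w)        # assumes no breakdown
        V[:, k + 1] = w / H[k + 1, k]
    r = H[j, j - 1] * V[:, j]                  # residual vector
    return V[:, :j], H[:j, :j], r

rng = np.random.default_rng(3)
n, j = 50, 8
Amat = rng.standard_normal((n, n))
V, H, r = arnoldi(Amat, rng.standard_normal(n), j)

ej = np.zeros(j); ej[-1] = 1.0
assert np.allclose(Amat @ V, V @ H + np.outer(r, ej))   # Eq. (2.7)
assert np.allclose(V.T @ r, 0)                          # r orthogonal to V

# Ritz estimate (2.8): ||A x_r - theta x_r|| = ||r|| |e_j^t q|.
theta, Q = np.linalg.eig(H)
q = Q[:, 0] / np.linalg.norm(Q[:, 0])
xr = V @ q
lhs = np.linalg.norm(Amat @ xr - theta[0] * xr)
assert np.isclose(lhs, np.linalg.norm(r) * abs(q[-1]))
```

The last assertion verifies exactly the cheap accuracy check that (2.8) provides: the residual norm of a Ritz pair can be read off from ‖r‖ and the last component of q without forming A x_r.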
We are interested in a small subset of eigenvalues of the matrix A. A criterion that allows termination of the Arnoldi factorization for j << n is needed. As kr k decreases, the eigenvalues of H become better approximations to those of A. The Ritz estimate (2.8) indicates that, if the last component of an eigenvector of H is small or the norm of r is small, then the Ritz pair (θ, xr ) is an approximation to an eigenpair of A. The advantage of using (2.8) is to avoid
explicit formation of the quantity A V q − θ V q when assessing the numerical accuracy of an approximate eigenpair of A.

Unfortunately, the Arnoldi process has some numerical difficulties, such as loss of orthogonality among the Arnoldi vectors, the undetermined storage requirement, the appearance of spurious eigenvalues in the spectrum of H, and the error due to finite precision arithmetic. These issues have been the subject of considerable research [16,1]. Implicit restarting is a technique for updating a sequence of j-step Arnoldi factorizations in a way that forces desired eigenvalues to appear in the spectrum of the matrix H. The IRAM attempts to restart the Arnoldi factorization by implicitly updating the starting vector and decreasing the size of the residual. The starting vector is updated through a polynomial filter technique which is used to force convergence of the residual to zero [16]. The Arnoldi factorization (2.7) can be used to describe a complete iteration of the IRAM as follows:

Algorithm 1.
function [H_j, V_j] = IRA(A, H, V, r)
Input: (A, H, V, r) with A V_l = V_l H_l + r_l e_l^t, an l-step Arnoldi factorization;
Output: (H_j, V_j) such that A V_j = V_j H_j, where V_j^t V_j = I and H_j is an upper Hessenberg matrix.
1. For k = 1, 2, . . . until convergence
   1.a. Compute σ(H_l) and select a set of p shifts ρ_1, ρ_2, . . . , ρ_p based upon σ(H_l) or possibly other information;
   1.b. Q = I_l;
   1.c. For i = 1, 2, . . . , p
        1.c.1. Factor [Q_i, R_i] = qr(H_l − ρ_i I);
        1.c.2. H_l ← Q_i^h H_l Q_i; Q ← Q Q_i;
        end
   1.d. ξ_j = H_l(j + 1, j); η_j = Q(l, j);
   1.e. r_j ← ξ_j v_{j+1} + η_j r_l;
   1.f. V_j ← V_l Q(:, 1 : j); H_j ← H_l(1 : j, 1 : j);
   1.g. Beginning with the j-step Arnoldi factorization A V_j = V_j H_j + r_j e_j^t, apply p additional steps of the Arnoldi process to obtain a new l-step Arnoldi factorization A V_l = V_l H_l + r_l e_l^t.
end

In this algorithm, the leading j columns V_j of the Arnoldi basis vectors and the leading j × j Hessenberg matrix H_j are the same quantities that would appear in the leading j columns of V and the leading j × j submatrix of the Hessenberg matrix H if the same shifts were selected at each step and applied to update the relation A V = V H with steps of the fully implicitly shifted QR iteration. In step (1.c.1), the shifts occur in complex conjugate pairs; thus, the implicit double shift can be implemented to avoid complex arithmetic [17]. During the implicit shift application, it may happen that a subdiagonal element ξ_k becomes small. Deflation strategies are important in the practical implementation of the QR iteration. However, in a large-scale setting, the usual QR deflation techniques are not always appropriate. There are situations that call for additional deflation schemes specific to implicit restarting. For more details about deflation strategies and the choice of shifts, see Ref. [18].

3. The nonlinear eigensolution technique

The IRAM has been used with considerable success to solve different algebraic eigenproblems [16]. The nonlinear eigenvalue problem (2.5) is nonsymmetric for any specific value of ν. Its solution is based on initializing the parameter ν to linearize the problem, using a nonsymmetric eigensolver to evaluate the eigenpairs and a root finder to update ν. The advantages of the IRAM and the structure of the nonlinear eigenproblem (2.5) indicate that using IRAM to solve the linear eigenproblems that arise during the iterations gives accurate and efficient results. In the formulation (2.5), the Arnoldi factorization (2.7) becomes

D(ν) V(ν) = V(ν) H(ν) + r(ν) e_j^t,    (3.1)
where V(ν) ∈ R^{2n×j} has orthonormal columns, H(ν) ∈ R^{j×j} is an upper Hessenberg matrix, r(ν) ∈ R^{2n} is the residual vector, and D(ν) is as defined before. The complex Ritz pairs of H(ν) are the approximations of the complex eigenpairs of the matrix D(ν); this approximation is exact when the residual vector r(ν) = 0. Let µ1(ν) be the Ritz value of H(ν) with the largest magnitude and let q be the corresponding complex Ritz vector at a specific value ν. Then the process is terminated and an approximate solution to Eq. (2.5) is obtained when

|µ1(ν)| = 1/ν

under the condition

‖r(ν)‖ |e_j^t q| ≤ ε |µ1|,    (3.2)

where ε is a specified tolerance. In formulation (3.2), the smaller the value of the residual ‖r‖ |e_j^t q|, the better the eigenpair approximation. The quantities θ, q and x_r are, in general, complex-valued. As noted in the previous section, the residual vector r is a function of the starting vector v1; its norm is forced to zero by iteratively updating the starting vector v1. Polynomial filters are applied implicitly, through a truncated version of the implicitly shifted QR iteration, to cast out unwanted parts of the spectrum of D(ν).

IRAM is based on repeated formation of the matrix–vector product w = D(ν)v. This product can be computed without additional cost by factorizing the matrix C into L L^t once. The following algorithm is used for evaluating the product D(ν)v.

Algorithm 2.
function [w] = Vector(L, A(ν), B(ν), v)
(1) Solve [ L^t  O ; O  I ] u = v and write v1 ← u(1 : n), v2 ← u(n + 1 : 2n);
(2) Compute z1 ← −B(ν) v1 − v2 and z2 ← A(ν) v1;
(3) Solve [ L  O ; O  I ] [w1 ; w2] = [z1 ; z2];
(4) Form w ← [w1 ; w2].

The Arnoldi factorization (3.1) is determined such that the step that requires matrix–vector multiplications is replaced by Algorithm 2. The IRAM (Algorithm 1) becomes

function [H, V] = IRA(A(ν), B(ν), L, j, p, tol)    (3.3)
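Algorithm 2 can be sketched directly in Python. The helper below applies D(ν) to a vector using only triangular solves with the Cholesky factor L and products with A(ν) and B(ν), and cross-checks the result against an explicitly assembled D(ν); the matrices are random stand-ins for a frozen value of ν.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def D_matvec(L, Anu, Bnu, v):
    """Algorithm 2: w = D(nu) v using the factor L of C = L L^t."""
    n = L.shape[0]
    # (1) solve [[L^t, 0], [0, I]] u = v
    v1 = solve_triangular(L.T, v[:n], lower=False)
    v2 = v[n:]
    # (2) z1 = -B(nu) v1 - v2,  z2 = A(nu) v1
    z1 = -Bnu @ v1 - v2
    z2 = Anu @ v1
    # (3) solve [[L, 0], [0, I]] w = z;  (4) stack the blocks
    w1 = solve_triangular(L, z1, lower=True)
    return np.concatenate([w1, z2])

rng = np.random.default_rng(4)
n = 5
Anu = rng.standard_normal((n, n))          # stand-in for A(nu)
Bnu = rng.standard_normal((n, n))          # stand-in for B(nu)
C = rng.standard_normal((n, n)); C = C @ C.T + n * np.eye(n)
L = cholesky(C, lower=True)                # factored once, reused per matvec

# Cross-check against the explicitly assembled D(nu) of Eq. (2.5).
Z, I = np.zeros((n, n)), np.eye(n)
D = (np.linalg.solve(np.block([[L, Z], [Z, I]]),
                     np.block([[-Bnu, -I], [Anu, Z]]))
     @ np.linalg.inv(np.block([[L.T, Z], [Z, I]])))
v = rng.standard_normal(2 * n)
assert np.allclose(D @ v, D_matvec(L, Anu, Bnu, v))
```

The point of Algorithm 2 is visible here: each matvec costs two triangular solves and two dense products, and D(ν) itself is never formed.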
with the same steps, but it calls on the Arnoldi function, now fully determined, to solve the linear eigenproblems that arise during the solution of problem (2.5).

The eigenvalue function |µ(ν)| is piecewise linear. Thus, we need a zerofinder based on linear interpolation (the secant method) to solve the scalar equation |µ(ν)| = 1/ν. This equation involves the natural frequency ν, which is the magnitude of the complex eigenvalue λ, and the frequency dependent eigenvalue functions µ(ν), which may be evaluated at any given value of ν using the IRAM as outlined above. The solutions of the nonlinear eigenproblem (2.5) are the intersection points between the two curves µ(ν) and 1/ν; this is illustrated for a symmetric problem in Fig. 2.

The secant method that we use may be described as follows. First, the linear eigenproblem at ν0 = 0 is solved and ν1 = 1/|µ1(0)| is set. The matrices A(ν1) and B(ν1) are formed and a new linear eigenproblem at ν1 is solved. A line between the two points (0, |µ1(0)|) and (ν1, |µ1(ν1)|) will intersect the curve 1/ν at a point ν2. Again, the matrices A(ν) and B(ν) are evaluated at ν2, a linear eigenproblem is solved to evaluate |µ1(ν2)|, and another linear interpolation is made between the two points (ν1, |µ1(ν1)|) and (ν2, |µ1(ν2)|). Once this pattern has been set, we continue to form these secant approximations using the latest two ν points. One step of this method is described in Fig. 1. The advantage of this scheme is that the secant iteration may be applied to a real-valued function of a single real variable ν instead of trying to solve the complex-valued equations arising with µ(ν).

Some difficulties in updating ν may arise: (1) the absolute value |µ(ν)| is not a differentiable function and has a cusp, which may cause serious problems; (2) at specific values of ν, the parameterized curves of |µ(ν)| may bifurcate near the solution. In these situations, safeguards must be applied to solve this scalar problem. The idea is to combine a guaranteed reliable method (bisection) with a rapidly convergent method (secant) to yield a technique that will converge at the
Fig. 1. The upper and lower bounds of µ2. The intersection points of the secant line with the µ(λ) and 1/λ curves are, from left to right, (λ1, µ1), (λ2, 1/λ2) and (λ3, µ3).

Fig. 2. Graphical depiction of the smallest ten eigenvalues of the nonlinear symmetric eigenproblem as the intersection points between the µ-curves and the 1/λ curve.
rate of at least the safe, guaranteed method. Each iteration of the safeguarded zerofinder requires a determination of the lth eigenpair (µ_l(ν), y_l) such that ‖r‖ |e_j^t y_l| < ε µ_l through j steps of IRAM. The following introduces a complete iteration for evaluating the first eigenvalue of largest magnitude of problem (2.5).

Algorithm 3.
function [ν2, V] = IRA(C, A(ν), B(ν), j, p, tol, ν0)
(1) Factor C into L L^t;
(2) [H, V] ← IRA(A(ν0), B(ν0), L, j, p, tol);
(3) µ1(ν0) ← v_{j+p}^t H v_{j+p}; ν1 ← 1/|µ1(ν0)|;
(4) Compute A(ν1), B(ν1);
(5) [H, V] ← IRA(A(ν1), B(ν1), L, j, p, tol);
(6) µ1(ν1) ← v_{j+p}^t H v_{j+p}; ν2 ← 1/|µ1(ν1)|;
(7) Compute A(ν2), B(ν2);
(8) [H, V] ← IRA(A(ν2), B(ν2), L, j, p, tol);
(9) µ2 ← v_{j+p}^t H v_{j+p};
(10) Check the uncertainty interval: if (ν1|µ1| − 1)(ν2|µ2| − 1) > 0, then ν2 ← (2/3) ν2 and go to (7);
(11) Update ν: [ν1, ν2] ← safsecant(ν1, ν2, |µ1|, |µ2|);
(12) Check convergence: if |ν2 − ν1| ≤ ε · 0.5(ν1 + ν2), then stop;
(13) Compute A(ν2), B(ν2);
(14) [H, V] ← IRA(A(ν2), B(ν2), L, j, p, tol);
(15) µ2 ← v_{j+p}^t H v_{j+p}; ν1 ← ν2; |µ1| ← |µ2|; go to (11).
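The safeguarded secant update at the heart of Algorithm 3 can be sketched as a standalone zerofinder for f(ν) = ν|µ(ν)| − 1. The eigenvalue function µ(ν) below is a made-up smooth function, and the simplified signature recomputes f internally instead of passing |µ1| and |µ2| as Algorithm 3's safsecant does.

```python
import numpy as np

# Made-up smooth stand-in for the dominant eigenvalue function |mu(nu)|.
def mu(nu):
    return 2.0 + 0.3 * np.cos(nu)

# The scalar equation of Algorithm 3: nu*|mu(nu)| = 1.
def f(nu):
    return nu * abs(mu(nu)) - 1.0

def safsecant(nu1, nu2, tol=1e-12, maxit=100):
    """Secant steps safeguarded by bisection on a bracketing interval."""
    f1, f2 = f(nu1), f(nu2)
    assert f1 * f2 < 0, "need a bracketing interval"
    for _ in range(maxit):
        # secant proposal from the current bracket endpoints
        nu = nu2 - f2 * (nu2 - nu1) / (f2 - f1)
        if not (min(nu1, nu2) < nu < max(nu1, nu2)):
            nu = 0.5 * (nu1 + nu2)      # safeguard: fall back to bisection
        fn = f(nu)
        # keep the sign change inside the bracket
        if f1 * fn < 0:
            nu2, f2 = nu, fn
        else:
            nu1, f1 = nu, fn
        if abs(nu2 - nu1) <= tol * 0.5 * abs(nu1 + nu2):
            break
    return 0.5 * (nu1 + nu2)

root = safsecant(0.1, 2.0)
assert abs(f(root)) < 1e-8
```

The bracket maintenance mirrors step (10) of Algorithm 3 (which enlarges or shrinks ν2 until the interval contains a sign change), and the relative stopping test mirrors step (12).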
Remark. In this algorithm, we drop the argument of µ from step (7) until the end of the algorithm.

4. Numerical results

The main Algorithm 3 is tested by solving two examples. The first one is the damped free vibration of a three-bay, eight-story portal frame. A varying number of finite elements is used to study the solution for various formulations. The material and geometric properties for an element are: length L = 24 in., cross-sectional area A = 0.5 in.², Young's modulus E = 3 × 10⁷ lb/in.², moment of inertia I = 0.260417 × 10⁻² in.⁴, and mass density ρ = 0.724637 × 10⁻³ lb/in.². This problem was originally investigated by Hooper et al. using the exact finite element formulation for undamped free vibration in [19]. For more details about the properties and the formulation of this problem, we refer the reader to Refs. [10,9].

Table 1 contains the first ten complex eigenvalues for the mixed formulation (mix.f.), the h-formulation (h-f.) and the conventional formulation (conv.f.). For the mixed and conventional formulations, the minimum number of finite elements is used (one element per member), leading to a 96 degrees-of-freedom model. In the h-formulation, two elements per structural member are used, leading to a 264 degrees-of-freedom model, and three elements per structural member are used, leading to a 432 degrees-of-freedom model. Experiments show that the mixed finite element formulation gives a lower bound to the results of the conventional formulation and to the exact results obtained from the h-formulation.

In the second example, the first ten complex eigenvalues are computed for a four-story plane frame with 36 unconstrained degrees of freedom. Results for experimental and finite element analyses have been presented in [20]. In the present contribution, nonproportional damping has been added such that, at the element level, the Rayleigh damping coefficients for beams are twice those for columns.
In Table 2, the first ten complex eigenvalues are obtained using the mixed formulation. Two levels of discretization from the conventional formulation (one element and two elements per member) are used. The conventional finite element model shows large errors for the higher modes. The conventional formulation results approach those obtained by the mixed formulation as the number of elements per member increases.
Table 1
The first ten complex eigenvalues for the mixed (N = 96), the exact (N = 264, 432) and the conventional (N = 96) formulations of the three-bay, eight-story portal frame problem

Eig.  Mix.f., N = 96       h-f., N = 264        Conv., N = 96        h-f., N = 432
1     −0.203 + 11.204i     −0.203 + 11.204i     −0.203 + 11.204i     −0.203 + 11.204i
2     −1.053 + 34.775i     −1.053 + 34.777i     −1.053 + 34.784i     −1.053 + 34.776i
3     −2.958 + 61.684i     −2.959 + 61.693i     −2.963 + 61.735i     −2.958 + 61.688i
4     −6.072 + 92.264i     −6.078 + 92.295i     −6.095 + 92.435i     −6.077 + 92.276i
5     −5.634 + 104.238i    −6.042 + 104.489i    −6.058 + 108.121i    −6.039 + 104.303i
6     −7.490 + 117.841i    −8.068 + 118.180i    −8.150 + 122.757i    −8.061 + 117.929i
7     −10.592 + 126.836i   −10.622 + 126.922i   −10.667 + 127.305i   −10.618 + 126.869i
8     −11.286 + 141.659i   −12.156 + 142.176i   −12.189 + 147.421i   −12.144 + 141.814i
9     −15.757 + 159.604i   −15.904 + 160.081i   −15.910 + 160.709i   −15.889 + 159.721i
10    −16.497 + 164.103i   −16.643 + 164.337i   −16.715 + 165.184i   −16.622 + 164.185i
Table 2
The first ten complex eigenvalues for the mixed (N = 48) and conventional (N = 48, 132) formulations of the four-story plane frame problem

Eig.  Mix.f., N = 48      Conv., N = 48        Conv., N = 132
1     −0.46 + 49.40i      −0.46 + 49.42i       −0.46 + 49.40i
2     −4.13 + 155.89i     −4.15 + 156.12i      −4.14 + 155.91i
3     −11.66 + 276.66i    −11.75 + 277.51i     −11.68 + 276.73i
4     −20.68 + 392.25i    −20.77 + 393.32i     −20.71 + 392.46i
5     −96.31 + 717.25i    −119.44 + 797.69i    −97.12 + 720.14i
6     −120.81 + 805.33i   −163.34 + 920.93i    −122.13 + 809.37i
7     −124.13 + 825.50i   −161.15 + 953.59i    −125.38 + 827.89i
8     −157.09 + 926.78i   −277.94 + 1111.95i   −159.19 + 932.85i
9     −174.81 + 964.38i   −296.25 + 1171.96i   −177.72 + 971.21i
10    −176.94 + 973.26i   −243.44 + 1199.61i   −179.29 + 980.36i
5. Concluding remarks

In this paper, we have established a numerical algorithm for solving a class of nonlinear eigenproblems. The eigensolver (IRAM) has been reformulated to be well suited to solving the linear eigenvalue subproblems that arise. For updating the parameter ν, we used the secant method, which is based on linear interpolation. This algorithm is capable of producing accurate results for symmetric and nonsymmetric large-scale problems. Furthermore, we have illustrated its effectiveness in this context through numerical experiments.

Acknowledgments

The author would like to thank Professor T.E. Simos and the referees for their support and the interest they have shown in this work.

References

[1] Y. Saad, Large Scale Nonsymmetric Eigenvalue Problems, John Wiley and Sons, New York, 1992.
[2] S. Singer, S. Singer, A skew-symmetric differential QD algorithm, Appl. Numer. Anal. Comput. Math. 2 (1) (2005) 134–151.
[3] M. Radjabalipour, A. Salemi, On eigenvalues of quadratic matrix polynomials and their perturbations, SIAM J. Matrix Anal. Appl. 17 (3) (1996) 563–569.
[4] Z. Bai, Y. Su, SOAR: A second-order Arnoldi method for the solution of the quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl. 26 (3) (2005) 640–659.
[5] F. Tisseur, K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev. 43 (2001) 235–286.
[6] K.V. Ngo, An approach of eigenvalue perturbation theory, Appl. Numer. Anal. Comput. Math. 2 (1) (2005) 108–125.
[7] R.J. Melosh, H.A. Smith, New formulation for vibration analysis, J. Eng. Mech. 115 (3) (1989) 543–554.
[8] M.R. Abdel-Aziz, Convexity analysis of the largest dependent eigenvalue functions of eigensystems, J. Appl. Math. Comput. 109 (2000) 93–100.
[9] M.R. Abdel-Aziz, A monotonicity analysis theory for the dependent eigenvalues of vibrating systems, J. Math. Comput. Modelling 21 (7) (1995) 99–113.
[10] R.K. Singh, H.A. Smith, Comparison of computational effectiveness of the finite element formulation in free vibration analysis, J. Comput. Struct. 51 (4) (1994) 381–391.
[11] M.R. Abdel-Aziz, Safeguarded use of the implicitly restarted Lanczos technique for solving non-linear structural eigensystems, Int. J. Numer. Methods Engrg. 37 (1994) 3117–3133.
[12] P. Lancaster, λ-Matrices and Vibrating Systems, Pergamon Press, Oxford, England, 1966.
[13] A. Ruhe, Algorithms for the nonlinear eigenvalue problem, SIAM J. Numer. Anal. 10 (1973) 674–689.
[14] W.E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math. 9 (1951) 17–29.
[15] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Standards 45 (1950) 255–282.
[16] D.C. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl. 13 (1992) 357–385.
[17] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd edition, The Johns Hopkins University Press, Baltimore, 1996.
[18] R.B. Lehoucq, D.C. Sorensen, Deflation techniques for an implicitly restarted Arnoldi iteration, SIAM J. Matrix Anal. Appl. 17 (4) (1996) 789–821.
[19] C.T. Hooper, A. Simpson, F.W. Williams, A study of the bounds on eigenvalues of a transcendental dynamic stiffness matrix provided by a simply derived linear matrix pencil, J. Struct. Mech. 8 (4) (1980) 365–422.
[20] A. Simpson, The Kron methodology and practical algorithms for eigenvalue, sensitivity and response analyses of large scale structural systems, Aeronaut. J. (1980) 417–433.
[21] F.W. Williams, D. Kennedy, Reliable use of determinants to solve non-linear structural eigenvalue problems efficiently, Internat. J. Numer. Methods Engrg. 26 (1988) 1825–1841.