Computer Methods in Applied Mechanics and Engineering 103 (1993) 161-173
North-Holland

CMA 384

Applications of the singular value decomposition in dynamics

J.P. Meijaard
Delft University of Technology, Laboratory for Engineering Mechanics, Mekelweg 2, NL-2628 CD Delft, The Netherlands

Received 1 October 1992

© 1993 Elsevier Science Publishers B.V. All rights reserved
The singular value decomposition of a general rectangular matrix can serve as a means for determining its numerical rank and for solving underdetermined and overdetermined linear systems in a least-squares sense. This decomposition is used in all kinds of applications, e.g. in control theory and statistics. This article presents some applications of this readily available tool for determining solutions of underdetermined systems which arise in the study of dynamical mechanical systems. First, the theory of the singular value decomposition is exposed briefly. Then, it is shown how periodic solutions of Hamiltonian, conservative systems can be determined in a direct way. Finally, a path following method for the continuation of stationary and periodic solutions and bifurcation points is described. The usefulness of these procedures is shown in examples of a rotor with non-linear support stiffness and a system which consists of an array of ten non-linearly coupled oscillators.
1. Introduction
In the commonly used direct methods for solving systems of linear equations, the first step consists of the decomposition of the matrix of coefficients in a product of some matrices which have a special structure. After that, the equations can easily be solved. The three major decompositions are the LU decomposition, in which the matrix is given as a product of a lower triangular and an upper triangular matrix, the QR decomposition, in which the matrix is given as a product of an orthogonal and an upper triangular matrix, and the singular value decomposition, in which the matrix is given as a product of two orthogonal matrices and a diagonal matrix. Although the singular value decomposition is not treated in most introductory textbooks on linear algebra (an exception is [1]), it is applied to an increasing extent to all kinds of problems, such as those which arise in control theory and statistics. With its aid it is possible to obtain least-squares solutions of systems of equations and to determine the numerical rank of matrices in a stable way. In control theory, for instance, criteria for the controllability and observability of linear systems can be stated in terms of the rank of certain matrices [2], so these properties and their robustness with respect to perturbations of the system can be investigated. In statistics and identification theory, the singular value decomposition can be used for model fitting and for data and model reduction. In the analysis of the dynamics of constrained multibody systems, this decomposition can be used for the selection of generalized coordinates [3]. Here we shall show applications of this tool in the calculation of stationary and periodic solutions and bifurcations for mechanical dynamical systems. The structure of this article is as follows. After this introduction, the singular value decomposition is defined and its existence is established. In the next section, it is used in a method for the direct determination of periodic solutions of Hamiltonian systems. The fourth section describes a continuation method for the continuation of stationary and periodic solutions and of bifurcation points. In the fifth section applications of the described methods to models of mechanical engineering problems are shown: the stationary and periodic solutions of a rotor resting on bearings with a non-linear stiffness and also bifurcations are calculated; periodic solutions of an array of ten non-linearly coupled oscillators are determined. We end with some concluding remarks.
2. The singular value decomposition

A basic and important tool of modern numerical linear algebra is the singular value decomposition. The singular value decomposition theorem states that any rectangular m × n matrix A can be factorized in the product of three matrices as [1, 4]

A = U L V^t .  (1)

Here U is an orthogonal m × m matrix, V is an orthogonal n × n matrix and L is an m × n diagonal matrix with non-negative elements on its principal diagonal, which are called the singular values of A, and with all its other elements equal to zero. The singular values are usually sorted in a descending order on the main diagonal of L, which can always be achieved by column permutations of U and V. Given the decomposition (1), the rank r of A is easily seen to be the number of non-zero singular values; for the orthogonal matrices can be interpreted as changes of bases of the domain and range of the mapping defined by A, which diagonalize the matrix. If the elements of A are not exactly known or if numerical errors occur in the computation of the singular value decomposition, singular values are almost never zero. In that case the numerical rank of the matrix, which is also denoted by r, is defined as the number of singular values which are larger than, or equal to, a certain positive threshold value ε. The required value of ε depends on the problem at hand and the accuracy of the numerical computations. A reasonable choice appears to be ε = σ₁(macheps)^{1/2}, where σ₁ is the largest singular value and macheps is the relative machine precision. For subsequent calculations, the singular values which are smaller than ε are replaced by zeros. The range of A is spanned by the first r columns of U and its null space (kernel) is spanned by the last n − r columns of V. Analogously, the range of A^t is spanned by the first r columns of V and its null space is spanned by the last m − r columns of U. Hence we see that the singular value decomposition can be used to determine the numerical rank and bases for the range and null space of a matrix and its transpose. Following [4], we show that a decomposition of the form (1) exists for a general matrix A by considering
σ₁ = max over ‖x‖ = 1 of ‖Ax‖ ,  (2)

and Av = σ₁u, ‖u‖ = ‖v‖ = 1, for some u and v. With U = [u U₁], V = [v V₁], where U₁ and V₁ are complements of u and v such that U and V are orthogonal, U^t A V has the structure

U^t A V = [ σ₁  w^t ]
          [ 0   A₁  ] .  (3)

The vector w must be zero, since otherwise we would have

‖ U^t A V (σ₁, w^t)^t ‖² / (σ₁² + ‖w‖²) = σ₁² + ‖w‖² + ‖A₁w‖² / (σ₁² + ‖w‖²) > σ₁² ,  (4)

and σ₁ would not be the maximal value according to (2). The proof of the existence of a singular value decomposition now follows from an induction argument. For a square regular matrix A, the singular values can be shown to be the principal semi-axes of the hyperellipsoid {Ax | ‖x‖ = 1}. For example, if A represents the matrix of Cartesian deformation gradients in continuum mechanics, its singular values are the principal stretches [5]. U and V can in that case be chosen as proper rotation matrices. Algorithms for the numerical computation of the singular value decomposition are described in for instance [4, 6]. These algorithms are similar to the QR algorithm for the computation of the eigenvalues and eigenvectors of a square matrix [7]. Ready-to-use subroutines are available in many numerical libraries. The Moore-Penrose inverse [4, 8-10], or pseudo-inverse, of an m × n diagonal matrix L is defined as the n × m diagonal matrix L⁺ with

L⁺_ii = 1/L_ii  if |L_ii| > ε ,
L⁺_ii = 0       if |L_ii| ≤ ε ,  (5)

and ε = 0. In inexact numerical computations we use a suitable small ε > 0. In the general case, the pseudo-inverse of A = ULV^t is defined as

A⁺ = VL⁺U^t .  (6)
With the aid of the pseudo-inverse any linear system of equations can be solved in a least-squares sense; for, if we have the system of equations Ax = b, this solution is given by

x = A⁺b = VL⁺U^t b  (7)

in all cases. This solution has a minimal Euclidean norm of all x that minimize the sum of the squares of the residuals Ax − b.
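As an illustration of these definitions (added here, not part of the original exposition), the numerical rank, the null space and the minimum-norm least-squares solution can all be obtained from one SVD call; the 3 × 4 matrix below is an arbitrary rank-2 example:

```python
import numpy as np

# A 3x4 matrix of rank 2: the third row is the sum of the first two.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.],
              [1., 3., 1., 3.]])

U, s, Vt = np.linalg.svd(A)                     # A = U L V^t, s holds the singular values
eps = s[0] * np.sqrt(np.finfo(float).eps)       # threshold eps = sigma_1 * macheps^(1/2)
r = int(np.sum(s >= eps))                       # numerical rank

# Pseudo-inverse following (5)-(6): invert singular values above eps, zero the rest.
s_inv = np.where(s >= eps, 1.0 / s, 0.0)
L_plus = np.zeros((4, 3))
L_plus[:3, :3] = np.diag(s_inv)
A_plus = Vt.T @ L_plus @ U.T                    # A^+ = V L^+ U^t

b = np.array([1., 0., 1.])
x = A_plus @ b                                  # minimum-norm least-squares solution (7)
```

The last n − r rows of Vt (i.e. the last columns of V) span the null space of A, which can be checked by multiplying them with A.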
3. Direct determination of periodic solutions
Before we give a method for the calculation of periodic solutions of Hamiltonian systems, we discuss, for later reference, the general problem of calculating periodic solutions of periodically forced and autonomous dissipative systems with a shooting method. A periodically forced system is a system whose behaviour can be described by a system of n first-order differential equations of the form

ẋ = f(x, t, λ) ,  f(x, t + T, λ) = f(x, t, λ) ,  (8)

where T is the minimal period of the forcing and λ represents some parameter of the system. A periodic solution can have the same period as the forcing, its period is T, or it can have a period that is equal to an integral multiple of T, say mT. If we denote variations of quantities by a prefixed δ, the linearized equations, or variational equations, that can be derived from (8) are

δẋ = [∂f(x, t, λ)/∂x] δx .  (9)

The problem of determining a periodic solution can now be stated as a boundary value problem (8) with the periodic boundary conditions

x(mT; x₀) − x₀ = 0 ,  (10)

where x₀ are the initial conditions for x at t = 0 and m ≥ 1. In a shooting method [11, 12] for this boundary value problem, we solve the boundary conditions (10) for the initial values x₀ by a Newton-Raphson iteration process. For this purpose we make a guess for the initial values of a periodic solution at t = 0, x(0; x₀) = x₀, and integrate the equations (8) over a time interval of length mT. We concurrently integrate the linearized equations (9) n times with as initial conditions the n columns of the unit matrix. As a result we obtain the solution itself, x(mT; x₀), and an amplification matrix, or monodromy matrix, M, which has the solutions of the linearized equations as its columns. The influence of a small perturbation in the initial conditions on the solution at t = mT is measured by this matrix. With the results obtained from the numerical integration we can make an expansion of (10) as

x(mT; x₀) − x₀ + (M − I) Δx₀ = 0 .  (11)

With the residual r = x(mT; x₀) − x₀, the corrections to the initial values become

Δx₀ = −(M − I)⁻¹ r .  (12)

With the corrected initial values, further iteration steps can be performed until the corrections have become sufficiently small. The iteration converges if M − I is regular at the periodic solution and the initial guess is sufficiently close to the solution. Although solutions obtained in this way have a period of mT, this need not be the minimal period if m > 1.
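The shooting iteration (8)-(12) can be sketched for a single forced oscillator; the Duffing-type equation and its parameters below are illustrative choices added here, not taken from the article:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative forced oscillator: x'' + 0.2 x' + x + x^3 = 0.1 cos(1.2 t), with m = 1.
OMEGA = 1.2
T = 2.0 * np.pi / OMEGA

def f(t, y):
    x, v = y
    return np.array([v, -0.2 * v - x - x**3 + 0.1 * np.cos(OMEGA * t)])

def dfdx(t, y):
    x, v = y
    return np.array([[0.0, 1.0], [-1.0 - 3.0 * x * x, -0.2]])

def rhs(t, z):
    # First two entries: the state; remaining four: the variational equations (9),
    # integrated column-wise so that Phi(T) is the monodromy matrix M.
    y, Phi = z[:2], z[2:].reshape(2, 2)
    return np.concatenate([f(t, y), (dfdx(t, y) @ Phi).ravel()])

x0 = np.zeros(2)                        # guess for the initial values of the periodic solution
for _ in range(8):
    z0 = np.concatenate([x0, np.eye(2).ravel()])
    zT = solve_ivp(rhs, (0.0, T), z0, rtol=1e-10, atol=1e-12).y[:, -1]
    r = zT[:2] - x0                     # residual of the boundary condition (10)
    M = zT[2:].reshape(2, 2)            # monodromy matrix
    x0 = x0 + np.linalg.solve(M - np.eye(2), -r)   # Newton correction (12)

multipliers = np.linalg.eigvals(M)      # Floquet multipliers
```

For this weakly forced case the converged multipliers lie inside the unit circle, so the period-1 solution found is stable.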
In a typical case, the stability of a periodic solution is determined by the eigenvalues of M, which are called characteristic multipliers or Floquet multipliers [13]. If all eigenvalues have a modulus smaller than one, the solution is stable; if some eigenvalue has a modulus larger than one, the solution is unstable; if some eigenvalue has a modulus equal to one, we have a critical case and we have encountered a bifurcation, where the qualitative structure of the solutions changes if the system is perturbed. In the case of an autonomous, time independent system whose behaviour is described by differential equations of the form

ẋ = f(x, λ) ,  (13)

with the linearized equations

δẋ = [∂f(x, λ)/∂x] δx ,  (14)

we have to adapt the shooting method. We still have to solve (10) with m = 1, but the period T of the solution is not known beforehand, so we have an additional unknown. This is due to the fact that the phase of the solution is undetermined: if x(t) is a solution of (13), x(t + τ) is also a solution for arbitrary τ. In order to fix this phase, we force one component of the initial values to have some prescribed value, say x_i = x₀ᵢ. Again, we guess initial values in this hyperplane and integrate the differential equations (13) numerically until the solution has returned to a point on the hyperplane which is sufficiently close to the guess of the initial values, along with n times the linearized equations. With these results we can expand (10) as

(M − I) Δx₀ + f(x(T; x₀), λ) ΔT = −r = x₀ − x(T; x₀) ,  (15)

where one component of Δx₀, Δx₀ᵢ, is zero. By removing the column of M − I which pertains to Δx₀ᵢ and adding f as an additional column, (15) can be solved by a standard solution procedure for systems of linear equations. Besides a correction for the initial values, we have a corrected period of the solution, namely T + ΔT. The iteration process can be repeated until the corrections have become sufficiently small. As in the case of periodically forced systems, the stability is determined by the eigenvalues of M, with the difference that one eigenvalue of M is always equal to one, which corresponds to an infinitesimal phase shift in the solution and which has no influence on the stability.

3.2. Hamiltonian systems

A stable linear time-independent conservative mechanical dynamical system has a number of eigenmodes into which the total motion can be decomposed. In each eigenmode the motion of each coordinate is harmonic with a common period and phase, while only the ratios of amplitudes are fixed, but the overall amplitude is arbitrary. In the case of a non-linear system, it is interesting to know whether families of periodic solutions remain in existence. Indeed, in [14] some theorems are reviewed which show the existence of families of periodic solutions near a stable equilibrium point under quite general conditions. Time-independent conservative systems are called Hamiltonian systems and their dynamics
are governed by systems of differential equations which can be rewritten in the form of Hamilton's equations of motion,

ṗ = −∂H(p, q)/∂q ,  q̇ = ∂H(p, q)/∂p ,  (16)

where H(p, q) is the Hamiltonian function of the system, which represents the total energy of the system, and p and q are the generalized momenta and coordinates. These systems have H as a first integral, because Ḣ = 0, and furthermore, the vector field given by (16) is divergence free. For the calculation of periodic solutions, we put the generalized momenta and coordinates together in one vector as x^t = (p^t, q^t) and treat the system as an autonomous system as given by (13). The linearized equations are now given by

δṗ = −[∂²H(p, q)/∂q²] δq − [∂²H(p, q)/∂q∂p] δp ,
δq̇ = [∂²H(p, q)/∂p∂q] δq + [∂²H(p, q)/∂p²] δp .  (17)
Because periodic solutions are not isolated for Hamiltonian systems but come in one-parameter families, the equations (15) do not have a unique solution at a periodic solution. This difficulty is circumvented by first determining the singular value decomposition of the matrix of coefficients, then putting the smallest singular value equal to zero and finally solving the equations with the aid of the pseudo-inverse. After that, the procedure can be repeated with adjusted initial values until convergence is obtained. It has been shown in [10] that this iteration process converges to a solution if all calculations are done in an exact way. However, in the presence of rounding errors, the numerical solution will come close to the branch of periodic solutions and can then have a small drift along this branch. Hence we have to stop the iteration process if the corrections have become small or after a fixed, small number of iterations have been performed. The linear stability of the solutions is determined by the eigenvalues of M: if some eigenvalue of M lies outside the unit circle, the found periodic solution is unstable, and otherwise it is linearly stable. We note that linear stability need not imply stability for the non-linear equations. As an example, we apply the method to a system of ten non-linearly coupled oscillators without damping (Fig. 1). Each mass is equal to m, the spring stiffness of each oscillator is k₀,
Fig. 1. Array of ten non-linearly coupled oscillators without damping and forcing.
and neighbouring masses are coupled by non-linear springs with coefficient k₁. The tenth mass is also non-linearly coupled to the fixed wall. For small oscillations, the circular eigenfrequencies ω_k, k = 1, … , 10, follow from the linearized equations of motion. It is easily shown that the corresponding eigenmodes are

q_j^{(k)} = a cos((2k − 1)(2j − 1)π/42) ,  j = 1, … , 10 ,  (20)

with arbitrary amplitude a. We shall determine two periodic solutions for the case that m = 1, k₀ = 0.1 and k₁ = 2. If we take the first eigenmode given by (20) with k = 1 as a first guess for the initial conditions of a periodic solution for the non-linear equations, and take as a hyperplane of section for the initial values p₁ = 0, we obtain for q₁ the value of the initial guess 0.997204. The value after the first iteration is 0.999232, and the value after the second iteration is 0.999042, which does not change at subsequent iterations. The period of this solution is 17.947574, whereas the period for small oscillations is 17.363815, and the solution is linearly stable. If we take as an initial estimate the second eigenmode of the linearized equations (k = 2), we have for q₁ the value 0.974928 as an initial guess, and its values at the next three iterations are 1.232574, 1.221849 and 1.222297. The period of this solution is 120.762882, whereas the period for small oscillations is 11.508703, and the solution is unstable.
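The pseudo-inverse iteration just described can be sketched for a one-degree-of-freedom analogue (an undamped oscillator with a cubic spring; the system and its parameters are illustrative choices added here, not the ten-mass array itself). The unknowns are the initial state and the period, and the rank-deficient 2 × 3 system [M − I | f] is solved with a truncated pseudo-inverse:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Hamiltonian system: H = p^2/2 + q^2/2 + q^4/4, i.e. q'' = -q - q^3.
def f(y):
    q, p = y
    return np.array([p, -q - q**3])

def dfdx(y):
    q, p = y
    return np.array([[0.0, 1.0], [-1.0 - 3.0 * q * q, 0.0]])

def rhs(t, z):
    # State plus variational equations, integrated column-wise.
    y, Phi = z[:2], z[2:].reshape(2, 2)
    return np.concatenate([f(y), (dfdx(y) @ Phi).ravel()])

def flow(y0, T):
    z0 = np.concatenate([y0, np.eye(2).ravel()])
    zT = solve_ivp(rhs, (0.0, T), z0, rtol=1e-11, atol=1e-12).y[:, -1]
    return zT[:2], zT[2:].reshape(2, 2)

y0, T = np.array([1.0, 0.0]), 5.0     # guess for the initial state and the period
for _ in range(8):                    # fixed, small number of iterations
    yT, M = flow(y0, T)
    r = yT - y0                       # residual of the periodicity condition
    J = np.hstack([M - np.eye(2), f(yT).reshape(2, 1)])
    # Truncated pseudo-inverse: small singular values are zeroed, so the correction
    # has minimal norm despite the one-parameter family of periodic solutions.
    dz = -np.linalg.pinv(J, rcond=1e-4) @ r
    y0, T = y0 + dz[:2], T + dz[2]
```

The iteration settles on a nearby member of the family of periodic solutions; because of rounding errors the iterates may drift slowly along the family, which is why the number of iterations is fixed.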
4. A continuation method

Stationary solutions, that is, solutions which are independent of time, can be obtained by a Newton-Raphson iteration on the right-hand sides of the differential equations, and periodic solutions can be calculated by a shooting method. With these methods one solution for a specific set of parameters of the system is obtained. It is, however, often desired to know how solutions change if some parameter, which we call here λ, is changed. This parameter can represent a design parameter, such as a spring stiffness or a damping constant, or the intensity of an external agent, such as a frequency or amplitude of a periodic force. A simple way to handle this problem is stepping through a range of equidistant parameter values, λᵢ = λ₀ + iΔλ, i = 0, 1, … , and to calculate a solution for each fixed value of the parameter. This simple approach works well for a number of problems, but it does not make good use of information from previously obtained solutions and it may fail if the branch of solutions has a turning point (also called limit point or fold point). Procedures which make use of knowledge obtained from previous solutions and which can deal with turning points are called path following methods or continuation methods. Here we shall describe a method which is a combination of the methods proposed by Haselgrove [15] and Fried [16]. The equations for which a branch of solutions has to be calculated can be stated as

f(x, λ) = 0 .  (21)
For instance, if a branch of stationary solutions for an autonomous system is desired, these are the right-hand sides of the system of describing differential equations (13), or if periodic solutions are sought, they represent the boundary conditions as in (10), or they may be the equations for some other problem, such as the equations which define a bifurcation in the problem of the direct determination of bifurcations. If the Jacobian matrix of f has full rank at all solutions and if the solution set is not empty, (21) defines a number of smooth branches of solutions. We should draw attention to the fact that the state variables x and the parameter λ may be of a totally different nature: the space {x^t, λ} is an affine space [17] and has no natural measure of length. Nevertheless, if the quantities are properly scaled, for instance by the introduction of dimensionless variables, the standard Euclidean metric can be used in the space {x^t, λ}. The continuation procedure that we propose consists of two basic steps, a predictor step and a corrector step. In the predictor step we estimate a next point on the solution branch of (21) on the basis of a number of previously obtained solutions. If we have more than one previous solution, a polynomial extrapolation can be used. This can be a linear extrapolation if two previous solutions {x_k^t, λ_k} and {x_{k−1}^t, λ_{k−1}} are known,

{x_{k+1}^t, λ_{k+1}} = {x_k^t, λ_k} + (s/s_a)({x_k^t, λ_k} − {x_{k−1}^t, λ_{k−1}}) .  (22)

Here {x_{k+1}^t, λ_{k+1}} is the predictor, s can be interpreted as the step length along the solution curve and s_a = ‖{x_k^t − x_{k−1}^t, λ_k − λ_{k−1}}‖. A quadratic extrapolation can be made if a third previous solution, {x_{k−2}^t, λ_{k−2}}, is known. The extrapolation can be done in several ways, from which we choose

{x_{k+1}^t, λ_{k+1}} = {x_k^t, λ_k} + (s/s_a)({x_k^t, λ_k} − {x_{k−1}^t, λ_{k−1}})
  + [s(s + s_a)/(s_a + s_b)] [({x_k^t, λ_k} − {x_{k−1}^t, λ_{k−1}})/s_a − ({x_{k−1}^t, λ_{k−1}} − {x_{k−2}^t, λ_{k−2}})/s_b] ,  (23)

where s_b = ‖{x_{k−1}^t − x_{k−2}^t, λ_{k−1} − λ_{k−2}}‖. Higher-order extrapolations can be used, but little is gained by doing so because the number of correction iterations is hardly reduced. We have preferred polynomial extrapolation over the commonly used tangent predictors, because they are simpler and more accurate if quadratic extrapolation is used. At the start of the continuation method, we have only one previous solution and cannot apply linear extrapolation. Instead, a searching direction has to be supplied in the form of a fictitious previous solution {x_{−1}^t, λ_{−1}}, which may be equal to {x₀^t, λ₀ − s}. In subsequent steps we use quadratic extrapolation. The predictor step will generally not yield a solution to (21), so we have to make a correction step. This could be done by adding an equation N(x, λ) = 0 to the system (21) and then solving the resulting augmented system by a Newton-Raphson iteration process or some other method. Several choices for the augmenting equation have been proposed in the literature [18-20]. A different approach is pursued in the methods proposed by Haselgrove [15] and
Fried [16], in which no explicit augmenting equation is used, but the underdetermined system (21) is solved in a direct way. In this method we assume that we have some approximation of a solution, {x_a^t, λ_a}, with the residual r_a = f(x_a, λ_a), and we define

K_a = [∂f/∂x](x_a, λ_a) ,  p_a = [∂f/∂λ](x_a, λ_a) .  (24)

If we expand (21) in a Taylor series and drop terms of higher than first order, we obtain

K_a Δx_a + p_a Δλ_a = −r_a  (25)

as the equation for the corrections {Δx_a^t, Δλ_a}. The solution of the system (25) is made unique by imposing the additional condition that the correction vector is perpendicular to the tangent of the curve f(x, λ) = r_a at {x_a^t, λ_a}. This condition is equivalent to the requirement that the corrections have a minimal norm, so we can obtain this solution by calculating the singular value decomposition of the rectangular matrix [K_a p_a] and then solving (25) in a least-squares sense. With the correction the approximate solution is updated and the correction process is iterated until the corrections have become sufficiently small. The convergence of the process is deduced from the results given in [10] in the case of exact computations. Again, due to numerical errors, a small drift of solutions along a region near the branch of solutions may occur, which, however, is slow. Some stepsize control algorithm can be applied to s, but in the calculations reported here a constant stepsize was used. After having explained the general framework of the continuation method, we shall make explicit the form of the matrix K_a and vector p_a for the cases of the continuation of stationary solutions, continuation of periodic solutions for periodically forced systems, continuation of periodic solutions of dissipative autonomous systems and the continuation of bifurcations. Some applications are given in the next section. In the case of the continuation of stationary solutions of autonomous dynamical systems, the function f of (21) represents the right-hand side of the system of first-order describing differential equations (13), and K_a and p_a are given by (24). In the case of a periodically forced system with driving period T as in (8), the matrix K_a is given by M − I. The vector p_a is obtained by integrating the linearized equations, where the parameter is considered as an additional variable, that is, by integrating the equation

δẋ = [∂f(x, t, λ)/∂x] δx + ∂f(x, t, λ)/∂λ  (26)
with the initial values δx = 0. Autonomous dissipative systems are treated in the same way, where the column of K_a pertaining to variations of x_i has to be removed and a column pertaining to variations of the period has to be added. In the case of Hamiltonian systems, no parameter has to be varied in order to obtain a branch of periodic solutions. We can trace a branch of periodic solutions by making use of the predictor step of the continuation method and then, in the corrector step, solving for a periodic solution as in the previous section. The elementary bifurcations for stationary solutions are characterized by the fact that some eigenvalue of the matrix of the linearized equations has a real part which is equal to zero. If we consider systems which depend on two parameters, λ₁ and λ₂, a branch of bifurcation points is characterized by the set of n + 1 equations
f(x, λ₁, λ₂) = 0 ,
Re μ_c = 0 .  (27)
Here μ_c is the critical eigenvalue, which is the solution of the characteristic equation |∂f/∂x − μI| = 0 that passes the imaginary axis at the bifurcation. We assume that only one eigenvalue becomes zero, which typically gives rise to a fold bifurcation, or one pair of complex conjugate eigenvalues crosses the imaginary axis, which typically gives rise to a Hopf bifurcation [21]. We can apply the continuation method to the system (27), where the only new object to be calculated is the derivative of the real part of the critical eigenvalue with respect to the state variables and the parameters. With the standard perturbation theory for eigenvalues [7], we obtain
∂(Re μ_c)/∂x_i = Re( y_c^{*t} [∂(∂f/∂x)/∂x_i] y_c / (y_c^{*t} y_c) ) ,  i = 1, 2, … , n ,
∂(Re μ_c)/∂λ_α = Re( y_c^{*t} [∂(∂f/∂x)/∂λ_α] y_c / (y_c^{*t} y_c) ) ,  α = 1, 2 ,  (28)

where y_c^{*t} and y_c are the left and right critical eigenvectors corresponding to μ_c. For bifurcations of periodic solutions, the first set of equations of (27) is replaced by the periodic boundary conditions and the condition on the critical eigenvalue is that it has a modulus equal to one. For a method for calculating the derivatives and examples of applications we refer to [12].
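A minimal sketch of the predictor-corrector scheme (added here, not from the article): the test function f(x, λ) = x² + λ, which has a fold at the origin, is an illustrative choice; the predictor is the linear extrapolation (22) and the corrector solves the underdetermined linearized system (25) with the pseudo-inverse, i.e. in the minimum-norm least-squares sense:

```python
import numpy as np

# Illustrative scalar problem f(x, lam) = x^2 + lam = 0: the branch x = +-sqrt(-lam)
# has a fold (turning point) at x = 0, lam = 0.
def f(x, lam):
    return x * x + lam

def correct(x, lam):
    # Corrector: minimum-norm Newton corrections of the 1 x 2 system [K_a  p_a] d = -r.
    for _ in range(20):
        J = np.array([[2.0 * x, 1.0]])
        d = -np.linalg.pinv(J) @ np.array([f(x, lam)])
        x, lam = x + d[0], lam + d[1]
        if abs(f(x, lam)) < 1e-12:
            break
    return np.array([x, lam])

s = 0.1                                 # constant step length along the branch
cur = np.array([1.0, -1.0])             # known solution on the branch
prev = np.array([1.0, -1.0 - s])        # fictitious previous solution {x0, lam0 - s}

branch = [cur]
for _ in range(60):
    sa = np.linalg.norm(cur - prev)
    pred = cur + (s / sa) * (cur - prev)   # linear extrapolation predictor (22)
    new = correct(pred[0], pred[1])        # corrector step
    prev, cur = cur, new
    branch.append(cur)
branch = np.array(branch)
```

The stepping passes smoothly through the fold: λ first increases towards zero and then decreases again while x changes sign, which naive stepping in λ cannot do.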
5. Applications of the continuation method to mechanical systems

5.1. Rotor with non-linear support stiffness

The problem is considered of the symmetric motions of a rigid rotor which is driven at a constant angular velocity and which is supported by non-linear rotationally symmetric bearings. The restoring force of the bearings is a cubic function of the radial displacement and the damping in the bearings is modelled by a so-called internal damping which is proportional to the relative velocity of the centre of the rotor in a co-rotating coordinate system and a so-called external damping which is proportional to the absolute velocity of the centre of the rotor. The two dimensionless equations of motion for the dimensionless displacements u and v in a co-rotating coordinate system are given by

[ü]   [2(ζ_i + ζ_e)  −2ω          ] [u̇]   [1 + r² − ω²  −2ζ_e ω     ] [u]   [eω²]
[v̈] + [2ω            2(ζ_i + ζ_e) ] [v̇] + [2ζ_e ω       1 + r² − ω² ] [v] = [ 0 ] .  (29)

Here, r² = u² + v², e is a measure for the mass eccentricity of the rotor, ζ_i and ζ_e are the relative linear internal and external damping, and ω is the dimensionless angular velocity of the rotor. The system (29) is an autonomous system and due to its simplicity it is amenable to analytic calculations. We first consider the stationary solutions and their stability. For e = 0, the trivial solution u = v = 0 is stable for ω < ω_cr = 1 + ζ_e/ζ_i and unstable for ω > ω_cr. At ω = ω_cr a Hopf
bifurcation occurs and a periodic solution with an initial circular frequency of ζ_e/ζ_i originates. In general, the deflection r₀ at the stationary solutions follows from

r₀⁶ + 2(1 − ω²) r₀⁴ + [(1 − ω²)² + 4ζ_e²ω²] r₀² − e²ω⁴ = 0 .  (30)
For e² ≤ e_c², with e_c² = −8(1 − ω_c²)³/(27ω_c⁴) and ω_c² = 1 + 6ζ_e² + (12ζ_e² + 36ζ_e⁴)^{1/2}, there is a unique solution for each value of ω, but for e² > e_c², there is an interval of values of ω where three solutions exist and the jump phenomenon occurs if the angular frequency is changed quasi-statically. For e > 2ζ_e, the amplitude of one stable stationary solution increases without bound as ω is increased. In that case the linear external damping is not strong enough for limiting the deflection if the rotational speed of the rotor is increased slowly from rest. Figure 2 presents some results of the numerical calculation and continuation of the stationary and periodic solutions. The results agree, as it must be, with those of the analytical calculations. Figure 3 shows curves of bifurcations in the e-ω parameter plane where fold bifurcations and Hopf bifurcations of the stationary solutions occur. The cusp point of the fold bifurcations is at ω = ω_c = 1.0903455, e = e_c = 0.037576989.

5.2. Array of non-linearly coupled oscillators

The system shown in Fig. 4 is a modification of the Hamiltonian system of Fig. 1: linear dampers with a damping constant c₀ are set in parallel to the non-linear springs in order to introduce dissipation in the system and the first body is excited by a harmonic force F = f cos ωt. Figure 5 shows the resonance curves for the case that k₀ = 1, k₁ = 0.1, c₀ = 0.2 and four values of f. Figure 6 is an enlargement of the first resonance peak in Fig. 5. It is seen that especially near the first eigenfrequency a large peak in the amplitude occurs, while the peaks at the higher eigenfrequencies are much lower. This is due to the fact that the higher eigenfrequencies have a larger relative damping than the lower ones. Again we see that for the larger values of f a hysteresis occurs and that the jump phenomenon can be observed.
Fig. 2. Deflection of solutions with ζ_i = ζ_e = 0.05: stationary solutions for (a) e = 0.02, (b) e = 0.0375, (c) e = 0.06, (d) e = 0.1; (e) periodic solutions for e = 0. Drawn lines represent stable solutions and dashed lines represent unstable solutions.
Fig. 3. Parameter values at fold and Hopf bifurcations for ζ_i = ζ_e = 0.05.
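The root structure of (30) and the cusp point can be verified numerically; a check added here, with ζ_e = 0.05 as in the figures:

```python
import numpy as np

ze = 0.05                                    # external damping zeta_e, as in Figs. 2 and 3

def deflections(e, w):
    """Positive deflections r0 satisfying (30), via the cubic in R = r0^2."""
    a = 1.0 - w * w
    R = np.roots([1.0, 2.0 * a, a * a + 4.0 * ze**2 * w**2, -(e * w * w)**2])
    R = R[np.abs(R.imag) < 1e-9].real        # keep the real roots
    return np.sqrt(R[R > 0.0])

# Cusp point of the fold bifurcations from the closed-form expressions
wc2 = 1.0 + 6.0 * ze**2 + np.sqrt(12.0 * ze**2 + 36.0 * ze**4)
wc = np.sqrt(wc2)
ec = np.sqrt(-8.0 * (1.0 - wc2)**3 / (27.0 * wc2**2))

# Below the cusp eccentricity a single deflection exists for every w;
# above it there is a frequency interval with three coexisting solutions.
n_small = max(len(deflections(0.02, w)) for w in np.linspace(0.5, 2.0, 61))
n_large = max(len(deflections(0.06, w)) for w in np.linspace(0.5, 2.0, 61))
```

The computed cusp values reproduce ω_c = 1.0903455 and e_c = 0.037576989 quoted in the text.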
Fig. 4. Array of ten non-linearly coupled oscillators with damping and forcing.
Fig. 5. Amplitude of the periodic response of the first body for k₁ = 0.1, k₀ = 1, c₀ = 0.2 and f = 0.01, 0.02, 0.03, 0.04. Drawn lines represent stable solutions and dashed lines represent unstable solutions.
Fig. 6. Enlargement of a part of Fig. 5 near the first resonance peak.
6. Concluding remarks

This article has shown some applications of the singular value decomposition to problems in mechanics. This decomposition has the properties that it is on the one hand numerically robust and on the other hand expensive in memory use and computation time. These conflicting characteristics make it specially useful in cases where the solution of equations constitutes only a small portion of the total computations. This is mostly the case in the determination and continuation of periodic solutions by shooting methods, where the integration of the differential equations is the larger part of the computations, and for stationary solutions if the evaluation of the function values and the linearized equations is costly. In problems with large sparse systems the use of the singular value decomposition is not recommended.
References

[1] G. Strang, Linear Algebra and its Applications, Third Edition (Harcourt Brace Jovanovich, San Diego, 1988).
[2] C.T. Chen, Linear System Theory and Design (Holt, Rinehart and Winston, New York, 1984).
[3] R.P. Singh and P.W. Likins, Singular value decomposition for constrained dynamical systems, ASME J. Appl. Mech. 52 (1985) 943-948.
[4] G.H. Golub and C.F. Van Loan, Matrix Computations, 2nd Edition (Johns Hopkins Univ. Press, Baltimore, MD, 1989).
[5] C. Truesdell and R.A. Toupin, The classical field theories, in: S. Flügge, ed., Encyclopedia of Physics, Volume III/1: Principles of Classical Mechanics and Field Theory (Springer, Berlin, 1960) 226-793.
[6] G.H. Golub and C. Reinsch, Singular value decomposition and least squares solutions, Numer. Math. 14 (1970) 403-420.
[7] J.H. Wilkinson, The Algebraic Eigenvalue Problem (Clarendon Press, Oxford, 1965).
[8] E.H. Moore, On the reciprocal of the general algebraic matrix, Bull. Amer. Math. Soc. 26 (1920) 394-395.
[9] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51 (1955) 406-413.
[10] A. Ben-Israel and T.N.E. Greville, Generalized Inverses, Theory and Applications (Wiley, New York, 1974).
[11] R. Seydel, From Equilibrium to Chaos, Practical Bifurcation and Stability Analysis (Elsevier, New York, 1988).
[12] J.P. Meijaard, Dynamics of Mechanical Systems, Algorithms for a Numerical Investigation of the Behaviour of Non-Linear Discrete Models (Meijaard, Delft, 1991).
[13] E.A. Coddington and N. Levinson, Theory of Ordinary Differential Equations (McGraw-Hill, New York, 1955).
[14] F. Verhulst, Asymptotic analysis of Hamiltonian systems, in: F. Verhulst, ed., Asymptotic Analysis II, Surveys and New Trends (Springer, Berlin, 1983) 137-183.
[15] C.B. Haselgrove, The solution of non-linear equations and of differential equations with two-point boundary conditions, Comput. J. 4 (1961) 255-259.
[16] I. Fried, Orthogonal trajectory accession to the nonlinear equilibrium curve, Comput. Methods Appl. Mech. Engrg. 47 (1984) 283-297.
[17] H. Weyl, Raum, Zeit, Materie, Vorlesungen über allgemeine Relativitätstheorie, 7. Auflage (Springer, Berlin, 1988).
[18] W.C. Rheinboldt, Numerical Analysis of Parametrized Nonlinear Equations (Wiley-Interscience, New York, 1986).
[19] E. Riks, The application of Newton's method to the problem of elastic stability, ASME J. Appl. Mech. 39 (1972) 1060-1065.
[20] M.A. Crisfield, A fast incremental/iterative solution procedure that handles "snap-through", Comput. & Structures 13 (1981) 55-62.
[21] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Springer, New York, 1983).