Saddle-point principles and numerical integration methods for second-order hyperbolic equations


Comput. Methods Appl. Mech. Engrg. 190 (2000) 1663–1678

www.elsevier.com/locate/cma

Saddle-point principles and numerical integration methods for second-order hyperbolic equations

Angelo Carini, Francesco Genna *

Department of Civil Engineering, University of Brescia, Via Branze, 38 - 25123 Brescia, Italy

Received in revised form 15 December 1999

* Corresponding author. Fax: +39-030-3715503.

Abstract

This work describes a family of functionals whose stationarity – often a saddle-point condition – leads to well-known so-called "variational" formulations for structural dynamics (such as the weak Hamilton/Ritz formulation and the continuous/discontinuous Galerkin formulation) and, in turn, to methods for the numerical integration of the equations of motion. It is shown that all the time integration methods based on "variational" formulations do descend from such functionals. Moreover, starting from the described family of functionals it is possible to construct new families of time integration methods, which might exhibit computational advantages over the corresponding ones derived from "variational" formulations only. © 2000 Elsevier Science B.V. All rights reserved.

1. Introduction

The use of temporal finite elements to solve initial-value dynamics problems has been somewhat limited, in comparison with the use of finite elements to solve quasi-static problems, by the conceptual difficulty constituted by the lack of suitable extremum principles to be adopted as a starting point. As is well known, while most numerical solution techniques for static problems are based upon the existence of some functional, which allows one to understand several properties of the corresponding algorithm, as well as to establish estimates of the error, time integration techniques never are.

From the practical viewpoint the most widely used algorithms for numerical time integration can be categorized into three main classes:
1. finite difference methods (Newmark family, Hilber–Hughes–Taylor, just to name the most popular);
2. finite element methods based on weighted residual formulations;
3. finite element methods based on weak "variational" formulations.

The features of these three classes are very well known and well documented in the literature (see, for reviews, Refs. [1–5]). It has been proved that several methods belonging to the first class can be seen as special cases of time finite element methods; moreover, it has been shown that, taking the necessary steps and suitably choosing the shape and trial functions, the methods belonging to the last two classes are equivalent to each other [6]. In sum, it seems possible to say that, at the present state of development, most of the numerical methods for the integration of initial-value problems in structural dynamics can be seen as some time finite element technique and, as such, as deriving from some "variational" statement.

Most of the papers dealing with time finite elements do actually quote their basic equations as "variational", either in the case of a Galerkin approach or in the case of a Hamilton/Ritz approach. There has been a significant amount of work in this field, as well as a corresponding amount of controversy about the




proper way of writing the correct starting equations [7]. Of course, the problem is that all this "variational" framework lacks the starting extremum (at least variational) principle: all the equations defined as "variational" are always written as the variation of something which is never defined. For this reason, we always put between inverted commas the word "variational" in this context.

Initial-value structural dynamics problems do not admit a "true" variational formulation because of the lack of symmetry of the governing operator with respect to the "classical" bilinear form used in continuum mechanics, i.e., the integral, over the time–space domain, of some scalar product. Only a problem reformulated as having conditions at the beginning and at the end of the motion admits a "classical" associated bilinear form. An example is Hamilton's functional (the one associated to the so-called Hamilton's principle, with no initial and final terms, which requires the a priori satisfaction of conditions on displacement at the beginning and at the end of the motion).

The lack of "classical" functionals has led researchers to look for "nonclassical", alternative formulations. For the case of initial-value structural dynamics, nonclassical functionals have been proposed by Gurtin [8], Rafalski [9] and Reiss and Haug [10]. All these results, however, are either limited to the linear case or quite complex to be used as a starting point for numerical methods (for instance, Gurtin's functional is of the convolutive type, while the other two require the integration up to an infinite time). Tonti [11] developed a general theoretical framework to construct functionals associated to every nonlinear/non-symmetric operator. These functionals are classified as "extended", since they refer to extended problems, constructed in such a way as to symmetrize the original one with respect to a suitably chosen bilinear form. In the framework set up by Tonti, the construction of these functionals is rather involved from the practical viewpoint; moreover, no criterion is given about the choice of the bilinear form to be used and about the kernel functions introduced in [11] to construct the extended problem. In any case, it is often possible to construct, by means of this technique, classical extended functionals associated to nonlinear/non-symmetric operators. This method has been extended and refined in subsequent work [12–15], and it was first applied to the initial-value nonlinear dynamics problem in [16], without, however, trying to use the obtained results in a systematic way as a starting point for numerical applications. Only in [17] was a first experiment made to understand how the availability of a governing functional reflected on approximate time integration methods. Finally, in [18] the theory of Tonti [11] was first rewritten in a simple, engineering-oriented way, and then extended to obtain a class of very simple functionals whose stationarity corresponded to any starting nonlinear/non-symmetric problem, without the need of choosing specific bilinear forms and specific kernel functions enjoying particular symmetry characteristics.
Starting from the general theoretical framework established in [18], here we will describe a class of functionals whose stationarity conditions (often characterized as saddle-point conditions) yield precisely both the time continuous/time discontinuous Galerkin equations and the weak Hamilton/Ritz equations used by many authors as a starting point for time integration methods. This class of functionals is written by using the part of the theory developed in [18] which cannot be seen as a particular case of Tonti's theory. Next, again starting from [18], we will illustrate a different class of functionals for initial-value structural dynamics, all belonging to the general framework set up in [11,12], and therefore all characterized by the need of choosing a suitably defined bilinear form and a symmetric kernel operator, but rewritten in a more engineering-oriented way. Starting from this different class of functionals it is possible to construct new time integration methods, which should benefit from the freedom of choice of both kernel operators and bilinear forms, absent in the previously defined class of functionals. This freedom of choice may lead both to a better conditioning of the resulting discrete numerical problem and to superior algorithmic characteristics; the inclusion of a suitably defined symmetric linear kernel operator can in fact be seen as a preconditioning technique for the numerical algorithm [13].

2. A family of functionals which governs several established time integration methods

In this section, after summarizing the theoretical results described in [18], we will make reference to one of the results there presented, i.e., that enabling one to write down an extremely simple functional, whose


stationarity corresponds to several forms of "variational" equations used to construct numerical time integration methods.

The method discussed in [18], which starts from the theory developed in [11,12], considers a generic nonlinear/non-symmetric problem written in the following form:

$$ N(x) - P = 0, \tag{2.1} $$

where x indicates an unknown function, possibly vector- or tensor-valued, N(·) indicates a generic operator and P indicates a known term, again possibly vector- or tensor-valued. Refs. [12,18] discuss a general way to construct variational formulations of problem (2.1), by splitting the operator N(·) into a linear, symmetric, positive definite part and a residual part:

$$ N(\cdot) = S(\cdot) + R(\cdot). \tag{2.2} $$

The operator S(·) is linear, symmetric and positive definite. As shown in [12,18], problem (2.1) always admits an extended variational formulation, governed by the following functional:

$$ F[x, y] = \langle (x - y), (N(x) - P) \rangle - \tfrac{1}{2} \langle (x - y), S(x - y) \rangle, \tag{2.3} $$

where the symbol ⟨·,·⟩ indicates a suitably defined non-degenerate bilinear form, and y is an auxiliary unknown function which, in the solution of problem (2.1), in correspondence to a stationarity point of functional (2.3), coincides with the main unknown function x [18]. An interesting case arises from the choice S(·) = 0, which is not admissible according to the general theory of [11,12,18]. Nevertheless, in [18] it has been shown that such a choice does indeed lead to the construction of a meaningful functional associated to problem (2.1); such a functional is the following:

$$ G[x, y] = \langle (x - y), (N(x) - P) \rangle. \tag{2.4} $$
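As a purely finite-dimensional illustration of the mechanism behind Eq. (2.4) (a minimal sketch added here for clarity, not part of the original development; the matrix A, the data P and all numerical values are arbitrary), take N(x) = Ax with A nonsymmetric. The discrete analogue of G[x, y] is G(x, y) = (x − y)ᵀ(Ax − P); its stationarity with respect to y recovers Ax = P, and stationarity with respect to x then forces y = x:

```python
import numpy as np

# Hypothetical small nonsymmetric "operator" A and data P (illustrative only).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # nonsymmetric, nonsingular
P = np.array([1.0, 2.0])

# Discrete analogue of functional (2.4): G(x, y) = (x - y) . (A x - P).
# Exact gradients:
#   dG/dx = A^T (x - y) + (A x - P) = 0
#   dG/dy = -(A x - P)              = 0
n = len(P)
K = np.block([[A + A.T, -A.T],
              [-A,      np.zeros((n, n))]])
rhs = np.concatenate([P, -P])

z = np.linalg.solve(K, rhs)
x, y = z[:n], z[n:]

print("x       =", x)
print("y       =", y)              # coincides with x at the stationary point
print("A x - P =", A @ x - P)      # ~ 0: the original equation is recovered
```

The same two-step structure (the auxiliary variation enforcing the original equation, the main variation then enforcing the coincidence of main and auxiliary unknowns) is what the functionals of this section reproduce in the time-continuous setting.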

In Sections 2 and 3 we will make reference to this last functional only, whereas the use of the more general one (2.3) will be discussed in Section 4. Note that, while functional (2.3) is essentially an alternative form of Tonti's extended functional [11], functional (2.4) cannot be seen as a special case of Tonti's theory, because that theory is based on the definition of a linear, symmetric, positive definite kernel operator S(·). Therefore, as shown in [18], functional (2.4) represents a new formulation.

For the sake of simplicity we will here make reference only to the equations governing the motion of an undamped single-degree of freedom elastic oscillator, defined by a mass m, which we will assume of unit value (i.e., m = 1) for simplicity, a stiffness k and a forcing function f(t), t being the time. The displacement of the mass is indicated by the unknown function u(t) and its velocity by v(t). Depending on the adopted formulation, the velocity can be assumed either as an independent variable or as the time derivative of the displacement; the acceleration is then computed accordingly. The restriction to this simple model has only the scope of simplicity; the same formalism illustrated in this and in the following sections could equally be applied to a general multi-degree of freedom system, as well as to a continuum problem. The equations of motion of the single-degree of freedom system described above can be written as follows:

$$ \dot v(t) + k u(t) - f(t) = 0, \tag{2.5} $$

$$ v(t_0) - v_0 = 0, \tag{2.6} $$

$$ -\dot u(t) + v(t) = 0, \tag{2.7} $$

$$ -u(t_0) + u_0 = 0, \tag{2.8} $$

$t_0$ being the initial time of the considered motion, $u_0$ the initial displacement and $v_0$ the initial velocity. Here and in the rest of the paper, a superimposed dot means a derivative with respect to time.
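For the numerical checks that follow it is convenient to have the model problem and its closed-form free-vibration solution at hand; the following minimal Python sketch (the values of k, u0 and v0 are illustrative choices, not data from the paper) will be reused in later examples:

```python
import numpy as np

# Model single-degree of freedom oscillator of Eqs. (2.5)-(2.8), with m = 1.
# Illustrative data (not from the paper): k, u0, v0 are free choices.
k, u0, v0 = 100.0, 1.0, 0.0
omega = np.sqrt(k)                    # natural circular frequency
T = 2.0 * np.pi / omega               # period of the motion

def exact(t, u0=u0, v0=v0, omega=omega):
    """Closed-form solution of  u'' + k u = 0,  u(0) = u0, u'(0) = v0."""
    u = u0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)
    v = -u0 * omega * np.sin(omega * t) + v0 * np.cos(omega * t)
    return u, v

print("period T =", T)
print("u(T/4), v(T/4) =", exact(T / 4.0))
```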


Note the order of the initial conditions, essential to the application of the formalism implied by Eq. (2.4): the condition on the velocity is associated to the first equation, differential in the velocity, whereas the condition on the displacement is associated to the second equation, differential in the displacement.

The structure of problem (2.5)–(2.8) suggests to use, as the basic bilinear form, the following symbolic expression:

$$ \langle x, y \rangle = \int_{t_0}^{t_1} x\, y \, \mathrm{d}t + x(t_0)\, y(t_0), \tag{2.9} $$

where $t_1$ indicates the ending time of the considered motion. In the case the unknown functions x and y are vector-valued, considerable care has to be taken in coupling components of one function with components of the other in the boundary terms, in order to avoid defining a degenerate bilinear form. More about this problem will be said later.

We can now write functional (2.4) for problem (2.5)–(2.8), whose operator, owing to its initial conditions, is not symmetric with respect to the bilinear form (2.9). The formalism of Eq. (2.4) allows one to write several equivalent forms of a functional associated to problem (2.5)–(2.8); we write many of them because in the next section we will show how the different "variational" equations which can be found in the literature as starting points for time integration methods can all be derived from one of the different possible forms taken by functional (2.4) when applied to problem (2.5)–(2.8).

Form 1: Functionals $G_1^a[u(t), u_s(t), v(t), v_s(t)]$ and $G_1^b[u(t), u_s(t), v(t), v_s(t)]$. A first form of functional G[x, y] of Eq. (2.4) derives from the use of the bilinear form (2.9), by maintaining the total generality implicit in Eqs. (2.5)–(2.8), i.e., by assuming the displacement and the velocity as two independent unknown functions and by prescribing no initial condition a priori. Having made these choices, an extended functional governing problem (2.5)–(2.8) is the following:

$$ G_1^a[u, u_s, v, v_s] = \int_{t_0}^{t_1} [u - u_s][\dot v + k u - f] \, \mathrm{d}t + \int_{t_0}^{t_1} [v - v_s][v - \dot u] \, \mathrm{d}t + [u(t_0) - u_s(t_0)][v(t_0) - v_0] - [v(t_0) - v_s(t_0)][u(t_0) - u_0], \tag{2.10} $$

in which $u_s(t)$ and $v_s(t)$ are auxiliary unknown functions. The theory developed in [18] guarantees that the solution of problem (2.5)–(2.8) can be found by computing and solving the stationarity conditions of functional (2.10) with respect to u(t), v(t), $u_s(t)$ and $v_s(t)$; in correspondence to the stationarity point of functional (2.10) the auxiliary unknowns $u_s(t)$ and $v_s(t)$ become coincident with the main ones u(t) and v(t), respectively. In the sequel of this work, we will often omit the indication of dependence on time t for the sake of brevity.

An equivalent form of functional $G_1^a$ of Eq. (2.10) can be obtained by means of an integration by parts of the term containing the acceleration $\dot v(t)$. After performing the relevant simplifications, we are left with the following expression:

$$ G_1^b[u, u_s, v, v_s] = \int_{t_0}^{t_1} (u - u_s)(k u - f) \, \mathrm{d}t + \int_{t_0}^{t_1} (v - v_s)(v - \dot u) \, \mathrm{d}t - \int_{t_0}^{t_1} (\dot u - \dot u_s)\, v \, \mathrm{d}t + [u(t_1) - u_s(t_1)]\, v(t_1) - [u(t_0) - u_s(t_0)]\, v_0 - [v(t_0) - v_s(t_0)][u(t_0) - u_0]. \tag{2.11} $$

Form 2: Functionals $G_2^a[u, u_s, v, v_s]$ and $G_2^b[u, u_s, v, v_s]$. A functional equivalent to that defined in Eq. (2.10) can be constructed by exchanging the position of the unknown functions u and v in the vector x of the general theory (Eq. (2.4)). By doing so we can in fact write the following functional:

$$ G_2^a[u, u_s, v, v_s] = \int_{t_0}^{t_1} (v - v_s)(\dot v + k u - f) \, \mathrm{d}t + \int_{t_0}^{t_1} (u - u_s)(v - \dot u) \, \mathrm{d}t + [v(t_0) - v_s(t_0)][v(t_0) - v_0] - [u(t_0) - u_s(t_0)][u(t_0) - u_0]. \tag{2.12} $$

As before, by integrating by parts the term containing the acceleration in Eq. (2.12) one arrives at the following equivalent functional:

$$ G_2^b[u, u_s, v, v_s] = \int_{t_0}^{t_1} (v - v_s)(k u - f) \, \mathrm{d}t + \int_{t_0}^{t_1} (u - u_s)(v - \dot u) \, \mathrm{d}t - \int_{t_0}^{t_1} (\dot v - \dot v_s)\, v \, \mathrm{d}t + [v(t_1) - v_s(t_1)]\, v(t_1) - [v(t_0) - v_s(t_0)]\, v_0 - [u(t_0) - u_s(t_0)][u(t_0) - u_0]. \tag{2.13} $$

Form 3: Functionals $G_3^a[u, u_s]$ and $G_3^b[u, u_s]$. Functionals $G_1^a$ and $G_1^b$ of Eqs. (2.10) and (2.11) can be specialized to the case in which the velocity is considered as the derivative of the displacement instead of an independent unknown. On the contrary, functionals (2.12) and (2.13) cannot, since they would become degenerate. By prescribing a priori the satisfaction of Eq. (2.7), functional (2.10) reduces to the following one:

$$ G_3^a[u, u_s] = \int_{t_0}^{t_1} (u - u_s)(\ddot u + k u - f) \, \mathrm{d}t + [u(t_0) - u_s(t_0)][\dot u(t_0) - v_0] - [\dot u(t_0) - \dot u_s(t_0)][u(t_0) - u_0], \tag{2.14} $$

whereas functional (2.11) becomes

$$ G_3^b[u, u_s] = \int_{t_0}^{t_1} (u - u_s)(k u - f) \, \mathrm{d}t - \int_{t_0}^{t_1} (\dot u - \dot u_s)\, \dot u \, \mathrm{d}t + [u(t_1) - u_s(t_1)]\, \dot u(t_1) - [u(t_0) - u_s(t_0)]\, v_0 - [\dot u(t_0) - \dot u_s(t_0)][u(t_0) - u_0]. \tag{2.15} $$

Here the order and the position of equations and unknowns are essential to avoid constructing a degenerate functional. In particular, the boundary terms, defined in Eqs. (2.14) and (2.15) as the product of velocity times displacement, evaluated at either the initial or the final time, cannot be taken, as it might seem possible when the displacement is the only unknown variable, as the product of displacement times displacement. Such a choice is in fact incorrect, since it would lead to a degenerate bilinear form¹. This is also the reason why functionals (2.12) and (2.13) cannot be particularized to the case in which $v(t) = \dot u(t)$.

Form 4: Constrained functionals. Further different types of functionals can be written in the case the unknown functions u and v satisfy a priori either one or all of the initial conditions. There are three possible cases (all the initial conditions satisfied a priori, only that on the displacement, only that on the velocity), which can be written for each of the three preceding forms (the first two for the case in which the velocity is an independent variable, the third for the case in which the velocity is defined as the time derivative of the displacement), for both versions of each form (index a or index b), respectively before or after the integration by parts of the acceleration term. In all these cases it is easy to write the corresponding functionals, starting from any of Eqs. (2.10)–(2.15), by dropping the relevant initial condition terms. We omit the full writing of all these equations for the sake of brevity.
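All of the forms above are ordinary time integrals plus boundary terms, so they can be evaluated directly once candidate functions are chosen. As a hedged illustration (the quadrature rule, the parameter values and the function names below are our own choices), the following sketch evaluates functional (2.10) with a composite trapezoidal rule and confirms that, at the exact solution of (2.5)–(2.8), both integrands and boundary terms vanish, so the value is zero for any choice of the auxiliary functions:

```python
import numpy as np

def trap(y, t):
    """Composite trapezoidal rule on a uniform grid."""
    dt = t[1] - t[0]
    return dt * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def G_a1(u, us, v, vs, dudt, dvdt, f, k, u0, v0, t0, t1, npts=2001):
    """Numerical evaluation of functional (2.10) for given callables."""
    t = np.linspace(t0, t1, npts)
    integral = trap((u(t) - us(t)) * (dvdt(t) + k * u(t) - f(t)), t) \
             + trap((v(t) - vs(t)) * (v(t) - dudt(t)), t)
    boundary = (u(t0) - us(t0)) * (v(t0) - v0) - (v(t0) - vs(t0)) * (u(t0) - u0)
    return integral + boundary

# Illustrative data: exact free-vibration solution of the model oscillator.
k, u0, v0 = 100.0, 1.0, 0.0
w = np.sqrt(k)
u_ex  = lambda t: u0 * np.cos(w * t) + (v0 / w) * np.sin(w * t)
v_ex  = lambda t: -w * u0 * np.sin(w * t) + v0 * np.cos(w * t)
du_ex = v_ex
dv_ex = lambda t: -w**2 * u0 * np.cos(w * t) - w * v0 * np.sin(w * t)
zero  = lambda t: 0.0 * t

# At the exact solution both integrands vanish identically, so the value of
# G^a_1 is zero for ANY choice of the auxiliary functions us, vs.
print(G_a1(u_ex, zero, v_ex, zero, du_ex, dv_ex, zero, k, u0, v0, 0.0, 0.5))
```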

3. Correspondence between the functionals of Section 2 and known "variational" formulations

We are now in the position of showing that the stationarity conditions of functionals (2.10)–(2.15) correspond to several "variational" formulations which can be found in the literature as starting points for time integration algorithms.

Weak Hamilton/Ritz approach: Let us start with the simplest possible case, i.e., that arising from formulation (2.15) with all the initial conditions prescribed a priori on the unknown function u(t). We write the particularization of functional (2.15) to this case, as an example of what was said before with reference to Form 4 of functional G[x, y]:

¹ A non-degenerate bilinear form can be defined, for our purposes, as a bilinear form ⟨x, y⟩ such that if ⟨x, y⟩ = 0 for all x ≠ 0 then y = 0, and if ⟨x, y⟩ = 0 for all y ≠ 0 then x = 0 (see, for instance, [11]). An example of an incorrect choice of bilinear form, with reference to problem (2.5)–(2.8), is the following: $\langle y, Lx \rangle = \int_{t_0}^{t_1} y (\ddot x + k x) \, \mathrm{d}t + y(t_0)\, \dot x(t_0) - y(t_0)\, x(t_0)$. This bilinear form is degenerate, since the choice $x(t) = (C/\sqrt{k}) \sin(\sqrt{k}\, t) + C \cos(\sqrt{k}\, t) \ne 0$, where C is an arbitrary nonzero constant, makes the bilinear form equal to zero without implying that y is zero as well. Obviously, the same choice does not, in general, make the bilinear form corresponding to functional (2.14) vanish. The same choice would not invalidate functional (2.13) either, since there the velocity is independent of the displacement.


$$ G_4^b[u, u_s] = \int_{t_0}^{t_1} (u - u_s)(k u - f) \, \mathrm{d}t - \int_{t_0}^{t_1} (\dot u - \dot u_s)\, \dot u \, \mathrm{d}t + [u(t_1) - u_s(t_1)]\, \dot u(t_1) + u_s(t_0)\, v_0. \tag{3.1} $$

Here no conditions are enforced on the auxiliary unknown $u_s(t)$. We now explore the stationarity conditions of functional $G_4^b[u, u_s]$, by first writing the stationarity with respect to $u_s$:

$$ \delta_{u_s} G_4^b[u, u_s] = -\int_{t_0}^{t_1} (k u - f)\, \delta u_s \, \mathrm{d}t + \int_{t_0}^{t_1} \dot u \, \delta \dot u_s \, \mathrm{d}t - \delta u_s(t_1)\, \dot u(t_1) + \delta u_s(t_0)\, v_0 = 0. \tag{3.2} $$

By choosing, as a special case of the free variation $\delta u_s$, the variation $\delta u$, Eq. (3.2) assumes a quite familiar aspect. Define in fact the kinetic energy of the system

$$ T = \tfrac{1}{2} \dot u^2(t) \tag{3.3} $$

and the total potential energy of the system

$$ V = \tfrac{1}{2} k u^2(t) - f(t)\, u(t). \tag{3.4} $$

Then it is immediate to recognize that Eq. (3.2) can be rewritten in the familiar Hamilton form

$$ \delta_{u_s} G_4^b[u, u_s] = \delta \int_{t_0}^{t_1} (T - V) \, \mathrm{d}t - \left[ \frac{\partial T}{\partial \dot u}\, \delta u \right]_{t_0}^{t_1} = 0. \tag{3.5} $$

The stationarity of functional $G_4^b[u, u_s]$ with respect to u yields

$$ \delta_u G_4^b[u, u_s] = \int_{t_0}^{t_1} [k u - f + k(u - u_s)]\, \delta u \, \mathrm{d}t + \int_{t_0}^{t_1} (\dot u_s - 2\dot u)\, \delta \dot u \, \mathrm{d}t + [u(t_1) - u_s(t_1)]\, \delta \dot u(t_1) + \dot u(t_1)\, \delta u(t_1) = 0. \tag{3.6} $$

By integrating by parts the second integral and rearranging all the boundary terms one arrives at the following result:

$$ \delta_u G_4^b[u, u_s] = \int_{t_0}^{t_1} [\ddot u + k u - f + k(u - u_s) + (\ddot u - \ddot u_s)]\, \delta u \, \mathrm{d}t + [u(t_1) - u_s(t_1)]\, \delta \dot u(t_1) + [\dot u_s(t_1) - \dot u(t_1)]\, \delta u(t_1) = 0. \tag{3.7} $$

Recalling now that Eq. (3.5) implies the validity of the equations of motion (2.5)–(2.8), it is immediate to verify that Eq. (3.7) implies the equality u(t) = u_s(t) in the whole integration interval. This result defines the auxiliary equation associated to the auxiliary unknown function u_s(t) in the extended problem governed by functionals (2.3) and (2.4). It has general validity within the theoretical framework developed in [18], and therefore will be assumed to hold, without explicitly rederiving it every time, throughout the sequel of this paper. We can thus conclude that functional $G_4^b[u, u_s]$ of Eq. (3.1) can be considered as the starting point of all the time integration algorithms based on the so-called "weak Hamilton/Ritz" approach, such as used, for instance, in [19–23] (and several other related papers), with various modifications.

Time continuous Galerkin approach – single field formulation: We now start from Eq. (2.14), modifying it again in order to enforce a priori all the initial conditions on the unknown function u(t), as done before. By rewriting Eq. (2.14) dropping all the boundary terms (initial conditions enforced a priori) and computing the stationarity with respect to $u_s$ one arrives at the following result:

$$ \delta_{u_s} G_3^a[u, u_s] = -\int_{t_0}^{t_1} (\ddot u + k u - f)\, \delta u_s \, \mathrm{d}t = 0, \tag{3.8} $$


which, interpreting the variation $\delta u_s$ as an arbitrary weight function, is precisely the time continuous Galerkin equation first used, in the context of structural dynamics, in [24], and also used in [25] and several other works.

Time continuous Galerkin approach – two field formulation: A different time continuous Galerkin approach can be derived by starting from Eq. (2.10), a functional in two unknown main functions – displacement u and velocity v – and two unknown auxiliary functions, and by rewriting it prescribing the satisfaction of the initial conditions for the unknowns u and v. This simply corresponds to dropping the two boundary terms in Eq. (2.10); computing the variations with respect to the auxiliary unknown functions $u_s$ and $v_s$ one gets the following results:

$$ \delta_{u_s} G_1^a[u, u_s, v, v_s] = -\int_{t_0}^{t_1} (\dot v + k u - f)\, \delta u_s \, \mathrm{d}t = 0, \tag{3.9} $$

$$ \delta_{v_s} G_1^a[u, u_s, v, v_s] = -\int_{t_0}^{t_1} (v - \dot u)\, \delta v_s \, \mathrm{d}t = 0. \tag{3.10} $$

The sum of the two results (3.9) and (3.10) leads to a time continuous Galerkin equation analogous to that used, for instance, in [3, Eq. (8)], where two different weight functions are used for the equation of motion, written as in Eq. (2.5) of the present paper, and for the equation relating $\dot u(t)$ to v(t) as in Eq. (2.7) of the present paper.

Time discontinuous Galerkin approach – single and two field formulations: We now arrive at those approaches which try to leave the maximum flexibility in the choice of the shape and weight functions – a strategy which has already proven to be the most effective in terms of algorithmic behavior [2]. As a starting point for these approaches any of Eqs. (2.10), (2.12) and (2.14) would do; here we examine first the results given by Eq. (2.12), which coincide with equations used in both [3,26]. Taking the variations of functional $G_2^a[u, u_s, v, v_s]$ of Eq. (2.12) with respect to both $u_s(t)$ and $v_s(t)$ we obtain

$$ \delta_{u_s} G_2^a[u, u_s, v, v_s] = -\int_{t_0}^{t_1} (v - \dot u)\, \delta u_s \, \mathrm{d}t + [u(t_0) - u_0]\, \delta u_s(t_0) = 0, \tag{3.11} $$

$$ \delta_{v_s} G_2^a[u, u_s, v, v_s] = -\int_{t_0}^{t_1} (\dot v + k u - f)\, \delta v_s \, \mathrm{d}t - [v(t_0) - v_0]\, \delta v_s(t_0) = 0. \tag{3.12} $$

By adding the two results (3.11) and (3.12) we obtain a time discontinuous Galerkin equation analogous to Ref. [3, Eq. (13)]. In order to obtain a single field formulation, such as that used in [26], it is now sufficient to prescribe Eq. (2.7) as a constraint in the two results (3.11) and (3.12), and to add them up. In this way one obtains a formulation formally equivalent to that implied by Ref. [26, Eqs. (8)–(12)]. We are not aware of anybody having used the corresponding formulations of the Galerkin type arising from functionals (2.10) and (2.14), which we do not write here for the sake of brevity.

Time discontinuous Hamilton approach: Probably the most general approach to this field is that presented in [2], with reference to continuum elastodynamics. Here we show that both the Hamilton and the Galerkin formulations described in [2] can be derived from functionals of the type (2.4). In particular, the most general case of a Hamilton formulation (Ref. [2, Eq. (9)], rewritten in terms of the single degree of freedom system we are considering here) can be obtained by computing the stationarity conditions of functional (2.11) with respect to the auxiliary functions $u_s(t)$ and $v_s(t)$:

$$ \delta_{u_s} G_1^b[u, u_s, v, v_s] = -\int_{t_0}^{t_1} (k u - f)\, \delta u_s \, \mathrm{d}t + \int_{t_0}^{t_1} v\, \delta \dot u_s \, \mathrm{d}t - v(t_1)\, \delta u_s(t_1) + v_0\, \delta u_s(t_0) = 0, \tag{3.13} $$

$$ \delta_{v_s} G_1^b[u, u_s, v, v_s] = -\int_{t_0}^{t_1} (v - \dot u)\, \delta v_s \, \mathrm{d}t + [u(t_0) - u_0]\, \delta v_s(t_0) = 0. \tag{3.14} $$


To obtain the counterpart of Ref. [2, Eq. (9)], one needs to consider the definition (3.4) of the total potential energy, and the Legendre transform of the kinetic energy

$$ K_c = v(t)\, \dot u(t) - \tfrac{1}{2} v^2(t). \tag{3.15} $$

Then the sum of Eqs. (3.13) and (3.14) can be written in the following way:

$$ \delta_{u_s} G_1^b[u, u_s, v, v_s] + \delta_{v_s} G_1^b[u, u_s, v, v_s] = \delta \int_{t_0}^{t_1} (K_c - V) \, \mathrm{d}t + v_0\, \delta u_s(t_0) - v(t_1)\, \delta u_s(t_1) + [u(t_0) - u_0]\, \delta v_s(t_0) = 0, \tag{3.16} $$

which is the specialization of Ref. [2, Eq. (9)] to our model system. Starting from this result, in [2] an alternative formulation, of the time discontinuous Galerkin type, is obtained by means of an integration by parts; in the framework of the present paper it is possible to obtain this alternative formulation directly by considering the stationarity conditions of functional (2.10).

Ref. [2] gives a whole set of "variational" equations, obtained by prescribing all the possible combinations of a priori constraints on the two unknown functions. All the results given in [2] could be obtained by starting from the appropriate functional among those given in the previous section. For instance, the time discontinuous Galerkin formulation of [2] with no initial conditions prescribed a priori, but with the constraint $\dot u(t) = v(t)$, is obtained from the stationarity conditions of functional (2.15). If one starts from either Eq. (2.11) or Eq. (2.15) and prescribes the a priori satisfaction of either one or both of the initial conditions, one recovers all the other formulations presented in [2]. We are not aware of analogous Hamilton type formulations obtained by starting from Eq. (2.13). On the other side, it may be noted that, in the continuum case, a further variety of different approaches can be obtained, as done in [2,27], by modifying the set of unknowns and constraints in the definition of the static part of the functional. It is not worth illustrating this point in our example concerning the simple case of a single-degree of freedom system; the application of the theory described in [18] to the continuum dynamics case has been carried out in [16,18].

It is easy to observe that functionals of the type (2.10), (2.12) and (2.14), with possible extensions to constrained cases, lead to Galerkin type equations, whereas functionals of the type (2.11), (2.13) and (2.15) lead to equations of the Hamilton/Ritz type. From the numerical viewpoint, as observed in [6], there is no difference between the two approaches, as long as the interpolating functions are the same in the two cases. It can thus be concluded that a general functional, of any of the types (2.10)–(2.13), can be seen as the origin of all the numerical integration algorithms described in the literature as derived from a "variational" formulation. All the particular cases illustrated in this section can in fact be obtained by specializing some form of the general Eqs. (2.10)–(2.13). We do not show here any numerical results obtained from the application of this theory, because they have already been studied elsewhere [2], even if they were not derived there from the stationarity of a functional.

A huge amount of work has been done, and is still in progress, on the choice of the best interpolation functions to be used in conjunction with "variational" equations of one of the types illustrated in this section (see, for instance, [4,5,26,28,29]). This is mainly due to the fact that simple truncated polynomial expansions tend to behave poorly in terms of both density and conditioning of the resulting system of linear equations, in the case of a multiple degree of freedom system (see, for instance, [21,23]). Optimal algorithmic features are also sought, such as good asymptotic annihilation properties, unconditional stability and at least second-order accuracy. Here we give no contribution to this particular field.
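To make the mechanics of an equation such as (3.8) concrete, the following deliberately crude sketch (our own illustration under simplifying assumptions, not one of the published schemes: a quadratic trial function satisfying the initial conditions, a single constant weight, f = 0) advances the model oscillator by one step by zeroing the weighted residual of the equation of motion:

```python
import numpy as np

def galerkin_step(u0, v0, k, h):
    """One step of a weighted-residual update in the spirit of Eq. (3.8):
    trial u(t) = u0 + v0*t + a*t**2 on [0, h] (initial conditions enforced
    a priori), constant weight, f = 0.  The weighted residual of u'' + k*u,
    integrated over the step, gives
        2*a*h + k*(u0*h + v0*h**2/2 + a*h**3/3) = 0,
    which determines the single free parameter a."""
    a = -k * (u0 * h + v0 * h**2 / 2.0) / (2.0 * h + k * h**3 / 3.0)
    u1 = u0 + v0 * h + a * h**2
    v1 = v0 + 2.0 * a * h          # velocity of the quadratic trial function
    return u1, v1

# Illustrative use with the data of the earlier sketch (k = 100, u0 = 1, v0 = 0).
k, u, v = 100.0, 1.0, 0.0
h = 0.01
for _ in range(10):
    u, v = galerkin_step(u, v, k, h)
print("u(0.1), v(0.1) ~", u, v)
```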
However, we observe that, seemingly, all the algorithms employed so far, and described in the literature, can be seen as originating from functionals of the type (2.4), illustrated in Section 2 with reference to a simple single-degree of freedom system, i.e., functionals in which there is no choice about symmetry or about accompanying kernel functions as defined in the basic theory of Tonti [11]. Therefore, we feel it is worth illustrating the use, within our context, of the more general form (2.3) of an extended functional, which requires the choice of a kernel function S(·), symmetric with respect to a suitably chosen bilinear form. There is in fact the possibility that, by carefully


choosing the kernel function and the related bilinear form, the algorithmic features of time integration schemes derived by interpolation of the unknown functions might improve. These aspects will be illustrated in the next section, by means of two examples.

4. Application of the general functional (2.3) to the equations of structural dynamics and relationship with time integration methods

Here we wish to illustrate the use of functionals of the type (2.3) within the context of initial-value structural dynamics, always making reference, for the sake of simplicity, to the simple single-degree of freedom system used in the previous sections. In particular, since there is no space here to cover this topic in detail, we will comment on the algorithmic features of some time integration schemes, comparing results obtained by starting either from functional (2.4), in one of the forms described in Section 2, or from a different functional, based on the more general expression (2.3).

To use functional (2.3) we first need to introduce a linear kernel S(·) with the property of being symmetric with respect to at least one non-degenerate bilinear form. With reference to our model problem of Eqs. (2.5)–(2.8), we choose here, as kernel S(·), the following operator:

$$ S x = \begin{cases} a \dfrac{\mathrm{d}^4 x}{\mathrm{d}t^4} + b \dfrac{\mathrm{d}^2 x}{\mathrm{d}t^2} + c\, k\, x \\ (a + b)\, x(t_0) \\ a\, \dot x(t_0) \\ (a + b)\, x(t_1) \\ a\, \dot x(t_1) \end{cases} \tag{4.1} $$

where k is the stiffness of the system and a, b and c are free coefficients. This choice – just one of the many possible – allows one to explore a combination of several independent symmetric kernels. If b = 0, any type of discretization of the main unknown function can be adopted, both time continuous and time discontinuous. If a = 0 and b ≠ 0, then the discretization used for the unknown function u(t) must enforce the a priori satisfaction of the initial condition (2.6) on the velocity (see [16] for an example of a functional of the type (2.3), based on a kernel analogous to that of Eq. (4.1) with a = 0).

It is possible to verify that the operator S(·) of Eq. (4.1) is symmetric with respect to a bilinear form analogous to the basic one of Eq. (2.9) which, owing to the structure of Eq. (4.1), must be rewritten as follows:

$$ \langle y, S x \rangle = \int_{t_0}^{t_1} y \left( a \frac{\mathrm{d}^4 x}{\mathrm{d}t^4} + b \frac{\mathrm{d}^2 x}{\mathrm{d}t^2} + c\, k\, x \right) \mathrm{d}t + a \left[ - y(t_1)\, \dot x(t_1) + y(t_0)\, \dot x(t_0) + \left. \frac{\mathrm{d}^3 y}{\mathrm{d}t^3} \right|_{t_1} x(t_1) - \left. \frac{\mathrm{d}^3 y}{\mathrm{d}t^3} \right|_{t_0} x(t_0) \right] + b \left[ \dot y(t_1)\, x(t_1) - \dot y(t_0)\, x(t_0) \right], \tag{4.2} $$

i.e., that ⟨y, Sx⟩ = ⟨Sy, x⟩. It would be possible to further manipulate Eq. (4.2) by performing two integrations by parts of the integral term, in order to reduce the maximum order of the time derivative, but there is no space here to discuss this aspect in detail; it is not essential anyway. Note also that here we will restrict ourselves, again for the sake of simplicity, to the case in which the time derivatives of the function x(t) are not considered as independent unknowns; by using a kernel equivalent to that of Eq. (4.1), but written in terms of four first-order differential equations in four independent variables, and rewriting the boundary conditions accordingly, one could obtain a more general formulation, which would allow the independent discretization of displacement and velocity.

If one wants to write functional (2.3), with the choices (4.1) and (4.2), and for the problem defined by Eqs. (2.5)–(2.8), one treats the functions u(t) and u_s(t) as the only independent unknowns, with no a priori satisfied initial conditions, and therefore one can start from an expression analogous to Eq. (2.14), but with the inclusion of the kernel function term as shown by Eq. (2.3). Here we also adopt the strategy proposed in


[30] to improve the dissipation properties of the Newmark family of algorithms, i.e., we evaluate the terms of the equation of motion (2.5) at different times, as follows. If we consider a generic time integration step, starting from time $t_0$ and ending at time $t_1 = t_0 + \Delta t$, we enforce the dynamic equilibrium (2.5) in the following way:

$$ \ddot u(t_1) + (1 + g)\, k\, u(t_1) - g\, k\, u(t_0) - f(t_1 + g \Delta t) = 0, \tag{4.3} $$

where g is an algorithmic parameter; for g = 0, Eq. (4.3) reduces to (2.5) evaluated at time t = t_1. Having made these assumptions, the final expression of functional (2.3), associated to the choices (4.1) and (4.2), is the following:

$$ \begin{aligned} F_1[u, u_s] = {} & \int_{t_0}^{t_1} (u - u_s)\left[\ddot u + (1 + g)\, k u - g\, k u(t_0) - f(t_1 + g \Delta t)\right] \mathrm{d}t + [u(t_0) - u_s(t_0)][\dot u(t_0) - v_0] \\ & - [\dot u(t_0) - \dot u_s(t_0)][u(t_0) - u_0] - \frac{1}{2} \int_{t_0}^{t_1} (u - u_s) \left[ a \left( \frac{\mathrm{d}^4 u}{\mathrm{d}t^4} - \frac{\mathrm{d}^4 u_s}{\mathrm{d}t^4} \right) + b \left( \frac{\mathrm{d}^2 u}{\mathrm{d}t^2} - \frac{\mathrm{d}^2 u_s}{\mathrm{d}t^2} \right) + c k (u - u_s) \right] \mathrm{d}t \\ & + a \left\{ \frac{1}{2} [u(t_1) - u_s(t_1)][\dot u(t_1) - \dot u_s(t_1)] - \frac{1}{2} [u(t_0) - u_s(t_0)][\dot u(t_0) - \dot u_s(t_0)] \right. \\ & \left. \quad - \frac{1}{2} [u(t_1) - u_s(t_1)] \left. \left( \frac{\mathrm{d}^3 u}{\mathrm{d}t^3} - \frac{\mathrm{d}^3 u_s}{\mathrm{d}t^3} \right) \right|_{t_1} + \frac{1}{2} [u(t_0) - u_s(t_0)] \left. \left( \frac{\mathrm{d}^3 u}{\mathrm{d}t^3} - \frac{\mathrm{d}^3 u_s}{\mathrm{d}t^3} \right) \right|_{t_0} \right\} \\ & + b \left\{ -\frac{1}{2} [\dot u(t_1) - \dot u_s(t_1)][u(t_1) - u_s(t_1)] + \frac{1}{2} [\dot u(t_0) - \dot u_s(t_0)][u(t_0) - u_s(t_0)] \right\}. \end{aligned} \tag{4.4} $$

We can start by examining the performance of a family of algorithms, derived from functional (4.4), adopting the following discretization for the two unknown functions u(t) and u_s(t):

$$ u(t) = a t^2 + v_0 t + u_0, \qquad u_s(t) = b, \tag{4.5} $$

where a and b are coefficients to be determined. This type of interpolation enforces a priori the satisfaction of all the initial conditions on the main unknown u, and allows jumps in both the auxiliary displacement and the auxiliary velocity. All these choices are permitted by the almost completely general type of functional defined by Eq. (4.4), whose only restriction implies that $v(t) = \dot u(t)$. It is important to note that here the functions u(t) and u_s(t), as approximated in Eq. (4.5), have the same number of unknown parameters. This is essential to the application of functional (2.4) (here functional (4.4) with a = b = c = 0), since in this case the equations deriving from the stationarity with respect to the auxiliary unknown are uncoupled from those deriving from the stationarity with respect to the main unknown. Only in the case one uses the full general expression (4.4) (i.e., functional (2.3)) does one have total freedom in the choice of the number of unknown parameters. Note that we have chosen interpolation functions that are not continuous with their third derivatives, as would appear to be necessary considering the highest order of derivatives present in the functional. As long as the functional is correctly defined, any order of polynomial compatible with the adopted assumptions would in fact suffice to yield a numerical integration scheme (see, for instance, [19]).

The choice (4.5), coupled with the use of functional (4.4) with a = b = c = g = 0, leads to the Newmark time integration method with parameters $\beta_N = 1/6$, $\gamma_N = 1/2$ ($\beta_N$ and $\gamma_N$ being the standard algorithmic parameters of the Newmark family – see for instance [31]). Such an algorithm is second-order accurate and conditionally stable, with stability limit $\Delta t_{cr} = 2\sqrt{3}/\omega = (\sqrt{3}/\pi)\, T$, T being the time period of the actual motion. Replacing the approximations (4.5) into functional (4.4), setting $t_0 = 0$ and $t_1 = \Delta t$, $\Delta t$ being the time step of the integration scheme, and computing the stationarity conditions of the functional with respect to the unknown parameters a and b, one obtains a family of time stepping integration schemes, where $u_0$ and $v_0$ are the values of displacement and velocity at the end of the previous time step, respectively. Recall that setting a = b = c = 0 corresponds to using functional (2.4) (no kernel function), whereas the use of a nonzero value of any of these coefficients corresponds to switching to the use of the more general functional (2.3).
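For reference, the textbook Newmark update with $\beta_N = 1/6$, $\gamma_N = 1/2$ – the scheme that, according to the text above, is recovered from discretization (4.5) with a = b = c = g = 0 – can be sketched as follows for the unit-mass undamped oscillator. This is the standard algorithm written directly, not a rederivation from functional (4.4); all numerical values are illustrative:

```python
import numpy as np

def newmark_step(u, v, a, k, h, f1=0.0, beta=1.0/6.0, gamma=0.5):
    """One Newmark step for  u'' + k u = f,  unit mass, no damping.
    Default parameters beta = 1/6, gamma = 1/2 (linear acceleration method)."""
    # Predictors containing only known quantities
    u_pred = u + h * v + h**2 * (0.5 - beta) * a
    v_pred = v + h * (1.0 - gamma) * a
    # Solve the (scalar) dynamic equilibrium at the end of the step
    a1 = (f1 - k * u_pred) / (1.0 + k * beta * h**2)
    u1 = u_pred + beta * h**2 * a1
    v1 = v_pred + gamma * h * a1
    return u1, v1, a1

# Illustrative run (free vibration, data as in the earlier sketches).
k, u, v = 100.0, 1.0, 0.0
a = -k * u                       # initial acceleration from equilibrium
h = 0.01
for _ in range(10):
    u, v, a = newmark_step(u, v, a, k, h)
print("u(0.1), v(0.1) ~", u, v)
```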


The accuracy properties of this family of algorithms can be evaluated by means of standard techniques [31]. It turns out that, to obtain first-order accuracy, the following condition needs to be enforced:

$$ a\, c = 0 \tag{4.6} $$

and, to obtain second-order accuracy, the following one:

$$ g = -b\, c. \tag{4.7} $$

No choice of parameters yields third-order accuracy. Thus it is seen that if one tries to modify the standard Newmark algorithm (a = b = c = 0 in (4.4), i.e., $\beta_N = 1/6$, $\gamma_N = 1/2$) by applying a nonzero coefficient g, one immediately loses second-order accuracy. On the contrary, by choosing a = 0 and using nonzero b and c coefficients (i.e., switching to the use of functional (2.3)), if one applies the Hilber–Hughes–Taylor g modification while complying with constraint (4.7), one can maintain second-order accuracy while at the same time modifying the algorithmic features.

To understand how this choice reflects on the algorithm behavior it can be useful to look at the spectral radii of the algorithm, shown in Fig. 1 for several choices of the algorithmic parameters. The basic Newmark algorithm (a = b = c = g = 0) is conditionally stable, and has no numerical damping in the stability regime. As said, it is not worth comparing such an algorithm with others defined by introducing a nonzero value of the g coefficient only, since then Eq. (4.7) implies the loss of second-order accuracy. All the algorithms considered in Fig. 1 are second-order accurate. It is easy to see that the introduction of the kernel function S(·) of Eq. (4.1) improves both the numerical damping and the stability limits of the algorithm. These features, for the algorithm with a = 0, b = 0.5, c = 0.5, g = -0.25, are compared in Fig. 2 with the corresponding ones of the basic Newmark algorithm, in terms of phase shift and numerical damping, respectively. The superiority of the modified algorithm is apparent, even if the form of the numerical damping is not the most desirable. We must remember that this family of algorithms is in any case strictly based on the discretization (4.5), which limits the accuracy to second order and probably also conditions other numerical characteristics.
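The spectral-radius behavior discussed above can be reproduced, at least for the basic a = b = c = g = 0 member of the family, by assembling the one-step amplification matrix of the Newmark (1/6, 1/2) scheme numerically and taking the largest eigenvalue modulus; a minimal sketch (our own check, with an arbitrary test frequency) shows a unit spectral radius up to $\omega \Delta t = 2\sqrt{3}$ and growth beyond it, consistent with the conditional stability and lack of numerical damping mentioned above:

```python
import numpy as np

def newmark_amplification(k, h, beta=1.0/6.0, gamma=0.5):
    """One-step amplification matrix of the Newmark scheme for u'' + k u = 0
    (unit mass, no forcing), assembled column by column from unit states."""
    A = np.zeros((2, 2))
    for j, (u0, v0) in enumerate([(1.0, 0.0), (0.0, 1.0)]):
        a0 = -k * u0
        a1 = -k * (u0 + h * v0 + h**2 * (0.5 - beta) * a0) / (1.0 + k * beta * h**2)
        u1 = u0 + h * v0 + h**2 * ((0.5 - beta) * a0 + beta * a1)
        v1 = v0 + h * ((1.0 - gamma) * a0 + gamma * a1)
        A[:, j] = [u1, v1]
    return A

omega = 10.0                              # illustrative frequency, k = omega**2
for wdt in [1.0, 2.0, 3.0, 2.0 * np.sqrt(3.0), 3.6, 4.0]:
    A = newmark_amplification(omega**2, wdt / omega)
    rho = max(abs(np.linalg.eigvals(A)))
    print(f"omega*dt = {wdt:5.3f}   spectral radius = {rho:.4f}")
```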

Fig. 1. Spectral radii of several algorithms based on functional (4.4). All algorithms are made second-order accurate by enforcing constraint (4.7).


Fig. 2. (a) Phase shift and (b) numerical damping for two algorithms as in Fig. 1.

The analysis of the spectral radius of this family of algorithms has not been attempted in analytical form. Therefore, we cannot say whether some special choice of parameters might lead to unconditional stability, maybe at the price of losing second-order accuracy, even if we strongly doubt it. Numerical computations have shown that choices of b > 1 or bc ≥ 0.5 lead to unconditionally unstable algorithms. We cannot comment on the restriction (4.6) either; all these analyses would require a significant effort. For instance, it would be interesting to understand whether the mutual exclusion (Eq. (4.6)) of the fourth-order derivative and the constant operator, as possible kernels, in order to obtain at least first-order accurate methods, is restricted to the particular discretization (4.5) or is congenital to the general theoretical framework associated to functional (4.4). Work is in progress to understand this and other related issues.

It is now worth commenting on a few other results, derived from a different discretization of the unknown functions, in order to illustrate the effects of the use of functional (2.3), as opposed to the use of (2.4) (i.e., of "standard" methods), on the conditioning of the resulting discrete problem. As we have already recalled, the discretization in time of the unknown functions in terms of simple truncated polynomials leads to ill-conditioned systems of linear equations, in the case of multi-degree of freedom problems, when the time stepping schemes are derived by "standard" methods, such as the Galerkin or the weak Hamilton/Ritz approach (the functionals of the type (2.4) illustrated in Section 3). To verify this effect, and to show how it is modified by the use of the more general functional (2.3), we again make reference to the special choices related to functional (4.4), but consider a different discretization of both unknowns (always restricting ourselves to a case in which an equal number of unknown parameters is used for both the actual and the auxiliary unknowns, in order to be able to compare results given by both of the approaches (2.3) and (2.4)). We consider the following choice:

$$ u(t) = a t^2 + v_0 t + b, \qquad u_s(t) = c t + d. \tag{4.8} $$

Here we have two unknown parameters for each unknown function, and can thus comment, at a minimal level, on the structure of the linear system of two equations which yields the values of the parameters a and b governing the actual unknown u(t). We have also prescribed the continuity in time of the velocity $\dot u(t)$, so that we can use any combination of the coefficients a, b and c in the kernel (4.1). By inserting the approximations (4.8) into functional (4.4) and proceeding as before, one obtains a presumably new family of time integration methods. Now the "basic" algorithm, defined by the choices a = b = c = g = 0 (i.e., by the use of functional (2.4)), does not belong to the Newmark family, which requires a parabolic expansion for u(t) with continuity in time of both displacement and velocity.

The examination of the accuracy properties of this family of algorithms shows, in the first place, that the adoption of a nonzero Hilber–Hughes–Taylor parameter g limits the accuracy to first order only. Therefore we proceed by discussing only the case in which, for this discretization, g = 0. The accuracy analysis then leads again to constraint (4.6) for first-order accuracy; in order to obtain simultaneously second- and third-order accuracy one must prescribe either b = 0 for any a, or a = 1/(2k) for any b. This last option is clearly


possible only when dealing with single-degree of freedom systems. Fourth-order accuracy cannot be achieved by plugging discretization (4.8) into functional (4.4).

Here we only compare the results of two choices of algorithmic parameters in terms of conditioning of the problem. To this purpose we consider the "basic" algorithm, a = b = c = 0, and a new one obtained by using a = 1, b = c = 0, i.e., by adopting, as kernel function in functional (4.4), the fourth-order derivative operator only. These two choices lead to exactly the same formulae for the unknowns a and b of Eq. (4.8). It is possible to get an idea of the resulting conditioning by computing the four derivatives of functional (4.4) with respect to the four unknown parameters a, b, c and d, then solving the first two equations to get expressions of c and d as functions of a and b. Replacing these results into functional (4.4) one obtains a discrete function F*[a, b], which governs the discretized problem in terms of the unknown function u(t) only. Fig. 3 shows the aspect of such a function for the two described choices of parameters; Fig. 3(a) refers to the "basic" method and Fig. 3(b) to the case with a = 1. Both figures are obtained by replacing the same values of the relevant parameters; in this case we have adopted f(t) = 0, u_0 = 1, v_0 = 0, Δt = 0.001, k = 100 000. The figures are zooms of the function F*[a, b] around its stationarity point.

A first observation is that, at the same scale, in the "basic" case (Fig. 3(a)) the function F*[a, b] appears to be almost linear in a around its stationarity point, whereas the introduction of the kernel causes the function F*[a, b] to become quasi-linear in b around the stationarity point. In the first case the stationarity is a saddle point, whereas in the second one it is a global minimum. It is very interesting also to observe that in the first case the function F*[a, b] is stationary at an almost zero value, whereas in the second the stationarity occurs at an extremely large value. The theory implies that in both cases the function F*[a, b] should be stationary at the value F* = 0 [18]. The accuracy of the two resulting methods in terms of the main unknown u(t), however, is the same. The difference lies in the quality of the solution of the auxiliary equation u(t) = u_s(t). An examination of the two different algorithms shows that in the first case the function u_s(t) almost coincides with the function u(t), whereas in the second case the function u_s(t) is totally "wrong". It seems that the ability of finding the correct solution in terms of u(t), even in the presence of a totally wrong result in terms of u_s(t), is a key ingredient for the conditioning of the problem. In fact, even if the two solutions for u(t) are coincident, the condition number of the system of linear equations for a and b in the "basic" case is of the order of 10^8, with one positive and one negative eigenvalue, whereas, by adding the kernel function, the condition number of the same system reduces to 5 × 10^4, with two positive eigenvalues. Such a difference is doomed to become much wider as the number of degrees of freedom is increased. In this sense, therefore, the method obtained by means of the kernel function of Eq. (4.1) with a = 1, b = c = 0 is clearly superior to the method obtained by using the same discretization – a simple truncated polynomial – but with no kernel, i.e., by means of a standard Galerkin approach. In the example discussed here we have been able to compute analytical expressions for a and b in both cases, which

Fig. 3. Functional (4.4), with discretization (4.8), expressed as a function of a and b only, around the stationarity point. (a) "Basic" algorithm (no kernel, a = b = c = g = 0) and (b) algorithm based on functional (2.3) (with kernel, a = 1, b = c = g = 0).


has avoided us any numerical trouble associated with the ill-conditioning of the "basic" method, but of course such an advantage cannot be exploited in general.

A qualitative explanation of the reason why the addition of a suitably chosen kernel S(·) improves the conditioning can be given starting from the original form of functional (2.3) as proposed in [11]. With reference to the generic nonlinear problem (2.1), such a functional is the following:

$$ T[u] = \tfrac{1}{2} \langle N(u), K N(u) \rangle - \langle P, K N(u) \rangle. \tag{4.9} $$

Here K(·) indicates Tonti's original kernel function, linear, symmetric and positive definite by assumption. Eq. (4.9) is related to (2.3) via the splitting (2.2), in which one must assume

$$ K(\cdot) = S^{-1}(\cdot). \tag{4.10} $$

Using the splitting (2.2) and the assumption (4.10), the functional T[u] of Eq. (4.9) becomes

$$ T[u] = \tfrac{1}{2} \left\langle N(u), \left[u + S^{-1} R(u)\right] \right\rangle - \left\langle P, \left[u + S^{-1} R(u)\right] \right\rangle. \tag{4.11} $$

The relation between Eqs. (4.11) and (2.3), which is not so apparent, is explained in [15,16]. At this point one can see that the closer the operator S(·) is to the original nonlinear operator N(·), i.e., the more the residual operator R(·) tends to vanish, the closer functional (4.11) is to the customary, classical convex functional governing all symmetric linear problems. This suggests a criterion for the choice of the kernel S(·) with reference to the conditioning aspects. Little can be said, unfortunately, with respect to other issues, such as accuracy, stability or numerical damping in the context of second-order hyperbolic equations. Further work is needed, and is in progress, to better understand the consequences of using functionals of the type (2.3), and of the related choices of (i) kernel functions, (ii) associated bilinear forms and (iii) associated interpolation functions. It is expected that a careful study of the effects of all these choices on the resulting time integration algorithms should provide useful indications for the development of interesting methods, which all possess a well established variational basis.

We only wish to add some general comments, in order to clarify some of the many issues related to the use of these methods. First of all, it is worth pointing out that when using functionals of the type (2.3) there is total independence in the choice of the interpolating functions for the two unknown fields (actual and auxiliary). This is not the case for the methods based on functional (2.4), where the structure of the functional leads to an uncoupling of the equations governing the actual unknowns. Therefore, in that case one is forced to have as many parameters governing the real unknowns as those governing the auxiliary unknowns, thus losing some generality. When using the more general formulation (2.3) one can recover the results given by (2.4), for the same choice of interpolation for the unknowns, by prescribing, in the stationarity equations, the equality of the real and the auxiliary unknown parameters. By doing so, one loses the effect of the presence of the kernel function, and therefore recovers the simpler approach (2.4), of course saving computing time. In general, in all cases when in the approximate solution u(t) = u_s(t), the results given by functionals (2.3) and (2.4) coincide. Therefore, in order to fully exploit the possible benefits deriving from the accurate choice of kernels, functionals and interpolation functions in the general approach based on Eq. (2.3), one has (i) to discretize in such a way that u(t) cannot be equal to u_s(t) in the numerical solution and (ii) to solve the full system in the real and auxiliary unknowns, thus increasing the computational effort. Unlike what happens using Eq. (2.4), in fact, the stationarity conditions of functional (2.3) are in general coupled equations in the real and auxiliary unknowns. The seemingly counterintuitive fact that better solutions may arise by avoiding to enforce in strong form an exact result (such as u(t) = u_s(t)), and that better results might be obtained when, in the approximate solution, u_s(t) strongly differs from u(t), agrees with the general conclusions drawn in [2], according to which the best algorithms derive from prescribing the smallest possible number of equations and constraints in strong form.
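The conditioning remarks of this section can be experimented with on the finite-dimensional analogue already sketched after Eq. (2.4) (again a hedged illustration with arbitrary data, not the actual time-discretized system): adding a symmetric "kernel" S to the discrete counterpart of functional (2.3) changes the coefficient matrix of the stationarity system, and the condition numbers of the two systems can be compared directly. Whether the conditioning actually improves depends on the choice of S; the sketch simply makes both discrete systems available for experimentation:

```python
import numpy as np

# Finite-dimensional analogue: N(x) = A x with A nonsymmetric, data P.
# F(x, y) = (x - y).(A x - P) - 0.5 (x - y).S (x - y)   [analogue of (2.3)]
# S = 0 reproduces the analogue of functional (2.4).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([1.0, 2.0])
n = len(P)

def stationarity_system(S):
    """Coefficient matrix and right-hand side of grad F = 0 (exact gradients):
       dF/dx = (A + A^T - S) x + (S - A^T) y - P = 0
       dF/dy = (S - A) x - S y + P              = 0"""
    K = np.block([[A + A.T - S, S - A.T],
                  [S - A,       -S]])
    rhs = np.concatenate([P, -P])
    return K, rhs

for label, S in [("no kernel (2.4)", np.zeros((n, n))),
                 ("kernel S = sym(A) (2.3)", 0.5 * (A + A.T))]:
    K, rhs = stationarity_system(S)
    z = np.linalg.solve(K, rhs)
    x, y = z[:n], z[n:]
    print(f"{label:28s} cond(K) = {np.linalg.cond(K):9.2f}   "
          f"x = {x}   max|x - y| = {np.max(np.abs(x - y)):.1e}")
```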


5. Conclusions

This work has been aimed at two main objectives. The first one has been to recognize that all the commonly used time integration methods – finite difference based, or Galerkin based, or weak Hamilton/Ritz based – can be derived from the stationarity conditions of a precisely defined functional. The theory developed in [18] has allowed us to construct such a functional, and to examine its different aspects in the various possible cases arising from the study of the simplest possible single-degree of freedom system. The second has been to illustrate an extension of the previous theory, which allows one to construct more general functionals associated to the initial-value structural dynamics problem, whose use, as a basis of time integration methods, may lead to algorithms with superior behavior. Work is in progress in this respect, with the purpose of better understanding the influence of the various possible choices of ingredients on the final behavior of time integration algorithms. The theory developed in [13] gives some very general guidelines about the choice of the kernel functions, but the actual understanding of precise criteria for such a choice appears to be a rather complex problem.

The experience we have so far acquired by the use both of functionals of type (2.3) and of functionals of type (2.4) fully confirms what was already said in [2] about the best choices of basic formulation. We can definitely conclude that the smaller the number of equations and constraints enforced in strong form, the better the numerical algorithm behavior. In practice, approximations which (i) allow u(t) to be different from u_s(t) in the numerical solution (i.e., not of the rigorous Galerkin type), (ii) have v(t) as an independent variable (two field formulations) and (iii) prescribe no a priori initial conditions (time discontinuous approaches), and which therefore enforce all these equations in weak form, tend to have superior behavior both in terms of accuracy and in terms of stability. It is much more difficult to make analogous statements about the other desirable features of time integration schemes (controllable asymptotic annihilation in particular), especially when using functionals of the type (2.3).

Even if this work has been limited to writing the functionals and the equations for the simplest possible example of an initial-value dynamics problem, the theory developed in [18] applies to general problems – in particular, to continuum problems and to nonlinear problems. The construction of functionals for the nonlinear continuum dynamics problem has been illustrated in [16,18]; no attempt has been made, so far, to start from these continuum formulations and to investigate the corresponding discretization, both in space and in time. A very attractive development is constituted by the application of these techniques to problems exhibiting material and geometrical nonlinearities; the availability of functionals governing these problems might allow one to obtain interesting theoretical and numerical results.

Acknowledgements

Work done within a research project financed by the Italian Ministry of University and Scientific and Technologic Research (MURST).

References

[1] O.C. Zienkiewicz, W.L. Wood, N.W. Hine, R.L. Taylor, A unified set of single step algorithms. Part I: general formulation and application, Int. J. Numer. Methods Engrg. 20 (1984) 1529–1552.
[2] M. Cannarozzi, M. Mancuso, Formulation and analysis of variational methods for time integration of linear elastodynamics, Comput. Methods Appl. Mech. Engrg. 127 (1995) 241–257.
[3] O.A. Bauchau, T. Joo, Computational schemes for non-linear elasto-dynamics, Int. J. Numer. Methods Engrg. 45 (1999) 693–719.
[4] T.C. Fung, Unconditionally stable higher-order accurate Hermitian time finite elements, Int. J. Numer. Methods Engrg. 39 (1996) 3475–3495.
[5] T.C. Fung, Weighting parameters for unconditionally stable higher-order accurate time step integration algorithms. Part 2 – Second-order equations, Int. J. Numer. Methods Engrg. 45 (1999) 971–1006.
[6] H.H.E. Leipholz, The Galerkin formulation and the Hamilton–Ritz formulation: a critical review, Acta Mechanica 47 (1983) 283–290.
[7] C.D. Bailey, Dynamics and the calculus of variations, Comput. Methods Appl. Mech. Engrg. 60 (1987) 275–287.
[8] M.E. Gurtin, Variational principles in linear initial value problems, Quart. Appl. Math. 12 (1964) 252–256.


[9] P. Rafalski, Orthogonal projection method. II: Thermoelastic problems, Bull. Acad. Polon. Sci., Ser. Sci. Techn. 17 (1969) 69–74.
[10] R. Reiss, E.J. Haug, Extremum principles for linear initial value problems of mathematical physics, Int. J. Engrg. Sci. 16 (1978) 231–251.
[11] E. Tonti, Variational formulations for every nonlinear problem, Int. J. Engrg. Sci. 22 (11–12) (1984) 1343–1371.
[12] G. Auchmuty, Variational principles for operator equations and initial value problems, Nonlinear Anal. Theory Methods Appl. 12 (5) (1988) 531–564.
[13] A. Carini, P. Gelfi, E. Marchina, An energetic formulation for the linear viscoelastic problem. Part I: theoretical results and first calculations, Int. J. Numer. Methods Engrg. 38 (1995) 37–62.
[14] A. Carini, O. De Donato, A comprehensive energy formulation for general nonlinear material continua, ASME J. Appl. Mech. 64 (1997) 353–360.
[15] A. Carini, Saddle-point principles for general nonlinear material continua, ASME J. Appl. Mech. 64 (1997) 1010–1014.
[16] A. Carini, F. Genna, Some variational formulations for continuum nonlinear dynamics, J. Mech. Phys. Solids 46 (7) (1998) 1253–1277.
[17] A. Carini, L. Castiglioni, F. Genna, Extremal formulations in nonlinear dynamics, in: Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Atlanta, Georgia, 17–22 November 1996.
[18] M. Brun, A. Carini, F. Genna, On the construction of extended problems and related functionals for general nonlinear equations, J. Mech. Phys. Solids (submitted).
[19] M. Borri, G.L. Ghiringhelli, M. Lanz, P. Mantegazza, T. Merlini, Dynamic response of mechanical systems by a weak Hamiltonian formulation, Comput. Struct. 20 (1–3) (1985) 495–508.
[20] J.M. Pitarresi, G.D. Manolis, The temporal finite element method in structural dynamics, Comput. Struct. 41 (4) (1991) 647–655.
[21] O.P. Agrawal, S. Saigal, A novel, computationally efficient, approach for Hamilton's law of varying action, Int. J. Mech. Sci. 29 (4) (1987) 285–292.
[22] C.D. Bailey, The method of Ritz applied to the equations of Hamilton, Comput. Methods Appl. Mech. Engrg. 7 (2) (1976) 235–247.
[23] T.E. Simkins, Finite elements for initial value problems in dynamics, AIAA J. 19 (10) (1981) 1357–1362.
[24] O.C. Zienkiewicz, C.J. Parekh, Transient field problems – two and three dimensional analysis by isoparametric finite elements, Int. J. Numer. Methods Engrg. 2 (1970) 61–71.
[25] K.K. Tamma, X. Zhou, R. Valasuetan, Computational algorithms for transient analysis: the burden of weight and consequences towards formalizing discrete numerically assigned [DNA] algorithmic markers: Wp-family, Comput. Methods Appl. Mech. Engrg. 149 (1997) 153–188.
[26] G.M. Hulbert, Time finite element methods for structural dynamics, Int. J. Numer. Methods Engrg. 33 (1992) 307–331.
[27] B. Tabarrok, Complementary variational principles in elastodynamics, Comput. Struct. 19 (1–2) (1984) 239–246.
[28] S.C. Fan, T.C. Fung, G. Sheng, A comprehensive unified set of single-step algorithms with controllable dissipation for dynamics. Part I. Formulation; Part II. Algorithms and analysis, Comput. Methods Appl. Mech. Engrg. 145 (1997) 87–107.
[29] N.J. Salamon, J. Xiang, Weighting functions in single-step algorithms for a non-linear equation, Commun. Appl. Numer. Methods 6 (1990) 447–455.
[30] H.M. Hilber, T.J.R. Hughes, R.L. Taylor, Improved numerical dissipation for time integration algorithms in structural dynamics, Earthquake Engrg. Struct. Dynamics 5 (1977) 283–292.
[31] T.J.R. Hughes, An overview of semidiscretization and time integration procedures, in: T. Belytschko, T.J.R. Hughes (Eds.), Computational Methods for Transient Analysis, North-Holland, Amsterdam, 1983, pp. 1–65.