
Computers & Structures Vol. 42, No. 2, pp. 271-279, 1992. Printed in Great Britain. Pergamon Press plc.

ON COMPUTATIONAL METHODS FOR CRASHWORTHINESS

T. BELYTSCHKO

Department of Civil Engineering, Robert R. McCormick School of Engineering and Applied Science, The Technological Institute, Northwestern University, Evanston, IL 60208-3109, U.S.A.

Abstract-The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computational methodologies. The latter include more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

1. INTRODUCTION

Computer analysis of crashworthiness is on the threshold of emerging as a powerful technology which can substantially reduce the cost and time required for the development and certification of new designs. The increasingly rapid evolution of this field is a consequence of two factors: 1. the development of more powerful, theoretically sound, and efficient tools for the simulation of nonlinear structural dynamics problems with severe deformations and other nonlinearities; 2. the rapid growth in the speed of computers and the consequent decrease in the cost of computational resources. To understand future trends in computational crashworthiness analysis, it is worthwhile to look at the history of other fields of computational mechanics. One of the earliest fields of application of computational mechanics was the development of ballistics tables, which until the mid-1940s were almost completely developed from testing. In fact, the development of the first computers, such as the ENIAC, was primarily motivated by this application. By the 1970s, computer simulation had almost completely supplanted testing, and ballistics tables are now routinely prepared with recourse only to computer solutions. A similar transformation has occurred in the field of linear stress analysis, where, until the mid-1960s, design evaluation for stresses was performed entirely by means of the analysis of simplified models, photoelastic studies, and strain gauge measurements of models and prototypes. With the advent of general-purpose finite element programs in the late 1960s, linear stress analysis became almost exclusively computer-based by the early 1980s. Much of this success is due to the rigorous foundations of linear elasticity and finite element theory for this class of problems. Testing remains common only in certain types of vibration analysis, where the effects of poorly understood phenomena, such as the behavior of connections and structural damping, often introduce small but critical errors in the frequencies.

Computer simulation has also grown in dominance in nonlinear structural analysis, such as nuclear weapons effects and the prediction of the behavior of nuclear reactor components and systems under extreme loads. It is interesting to observe the changing roles of tests in these fields with the growth of computational simulation capabilities. Whereas, in the 1960s, tests were often conducted on models or prototypes which reproduced the structures of interest as closely as possible within the limitations of cost, since the 1970s the focus has been on simpler but highly instrumented tests or 'experiments' which clarify the mechanics and physical behavior and can serve to validate simulation techniques and guide their future development. For design purposes, computer simulations are almost routinely used, particularly in nuclear weapons effects, where above-ground tests have been banned since the early 1970s. In the related field of aerodynamics, the last seventeen years have seen a similar emergence of computer simulation. In 1976, when computational aerodynamicists were experiencing their first euphoric tastes of success in simulating flow over complicated airfoils, R. W. MacCormack predicted at an ASME meeting that within ten years wind tunnels would have little function but to store computer printouts. These remarks illustrate the perils of prophecy: ten years later, printouts were obsolete because of the emergence of large-scale disk storage, computer graphics, and CRT devices as means of communication with computers, whereas even the largest computers, such as the NAS (National Aeronautics Simulator), still do not have the power to simulate the three-dimensional air flow over commercial aircraft with complex details such as nacelles, engine intakes, and landing gear, in a time period consistent with the time frame of the engineering design and decision process.


A major roadblock is the lack of a first-principles model for turbulence. However, the field of computational aerodynamics has closely paralleled nonlinear structural dynamics in weapons effects and nuclear reactor safety in some important respects: increasingly, rough tests of prototypes are being replaced by 'experiments' with heavily instrumented, cleaner models which can be used to validate computer programs and guide their future development, and certain types of aerodynamics computations, such as laminar flows, have become widely used in the engineering design process. Two lessons can be learned from the developments in other fields: 1. the viability of computational simulation in a field depends critically on the cost and computer time required for a reasonable simulation; 2. a thorough understanding of the physical phenomena involved in a field is essential for the success of computational simulation in that field. It should be stressed that excessive computer time is not only detrimental because of the large costs involved; if a simulation requires 40-100 h on an available computer, it may take weeks to obtain the results of a single simulation. Combined with the debugging process inherent in most simulations, which often entails three to five computer runs, the simulation task can then stretch to months and become totally incompatible with industrial time schedules. The availability of experiments with levels of detail and rigor which are seldom observed in typical testing is also essential. Simulations in fields such as crashworthiness require knowledge of mechanical processes which were seldom of interest until the advent of computers. Phenomena such as friction, severe wrinkling of sheet metal, and failure of connections were never studied in detail until the early 1970s, and therefore they do not share the robust theoretical foundations that characterize fields such as linear elasticity. In order to provide reliable computational tools based on first principles, extensive programs of theoretical and experimental research must be undertaken to provide these foundations for computer simulation. Finally, in view of the importance of computational speed, simulation methods must be geared to the problem at hand: general-purpose programs which target very large classes of nonlinear mechanics problems are hopelessly inefficient in the context of a specific problem class such as crashworthiness. In the next section, the development of computer programs for crashworthiness will be outlined, and it will be indicated how their power has been closely linked to the development of faster computers and better computational methods. Subsequently, some recent developments in related fields of structural dynamics will be described which hold considerable promise for further improving crashworthiness simulation. In conclusion, some predictions for the future will be made, although, based on the experience of computational aerodynamicists, they are unlikely to prove very accurate.

2. A SHORT HISTORY OF CRASHWORTHINESS SIMULATION

Computer programs for crashworthiness analysis were first developed in the late 1960s. These programs reflected the fact that mainframe computers of that era, which cost several million dollars and required large staffs, did not even equal the microcomputers of today. Generally, a vehicle was modelled with five to fifty nodes, and very simple elements were used. A noteworthy development in this direction is the work of Kamal [1], who used a hybrid experimental-simulation technique. The properties of the elements were obtained by crush testing large components, and the computer model was then used to predict the performance of the complete vehicle for various speeds of impact. This approach, in fact, proved quite successful in achieving its limited goals, which is undoubtedly a testimony to the engineering skill and insight of its developers. However, models of this type cannot be considered first-principles models, for they required considerable testing of prototype components and were, furthermore, suitable only for the specific crash environment for which they were designed. The next generation of crashworthiness codes tended to be first-principles codes, in that their elements embodied the mechanics of large-deformation processes, though these were usually limited to beams; see, for example, the works of Thompson [2], Shieh [3], Young [4], and Melosh [5] and the program KRASH [6], which was developed for helicopter crashworthiness. One of their major shortcomings was their reliance on implicit time integration, which was undoubtedly adopted because of the prevalence of these techniques in the general-purpose finite element programs of the time. The use of implicit time integration made these computer programs very time-consuming, even for comparatively small models. Furthermore, these implicit techniques have a notorious lack of robustness for highly nonlinear processes, in that they often fail to converge to a solution during a given time step. As a consequence, the completion of a solution often required tender ministrations by someone closely familiar with the code and was extremely time-consuming in the computer environments of that time, because long runs generally had to be made overnight. In 1973, the National Highway Traffic Safety Administration (NHTSA) began an ambitious program for the development of computer methods for crash simulation which would enable occupant safety to be established by means of simulation. The program team consisted of three contractors: the University of Michigan, which was responsible for models of the automobile frame; IIT Research Institute (IITRI), which was responsible for the models of the sheet metal; and the Naval Civil Engineering Laboratories, which were assigned the models of the occupant, with particular emphasis on head injuries, and the required contact-impact models.

Fig. 1. Simulation of a front-end crash with WRECKER, showing the initial and final configurations; from Welch, Bruce and Belytschko [9].

The IITRI effort focused mainly on the adaptation of the program WHAMS, which had been developed by Belytschko and Marchertas [7] for reactor safety studies. This computer program had originated in the field of weapons effects, where severe nonlinearities are commonplace, and consequently it used explicit time integration. This entailed the use of very small time steps in order to maintain numerical stability, but provided the program with a robustness which made the completion of simulations a simple task. The program which evolved from these efforts was called WRECKER [8, 9], and its successors are still used at the Ford Motor Company today. An example of a simulation with this computer program of a front-end crash of a 1968 Plymouth is shown in Fig. 1. The model involved approximately 500 degrees of freedom. As can be seen from the comparison with test results in Fig. 2, it was quite successful in estimating the levels of acceleration at the location of the driver. It is interesting to observe that these excellent predictions were achieved in spite of the fact that the model for the sheet metal was quite coarse and could therefore only crudely replicate the energy-dissipation mechanism associated with crushing. In spite of the success of this computer program in predicting the acceleration level in a frontal crash, the research program was discontinued by NHTSA during the mid-1970s. Undoubtedly, one of the major factors in this decision was the high cost of a simulation. A simulation of this type required 5-20 h of computer time on the largest mainframe available at that time, the CDC 7600, which cost about $1000 per hour. This high cost, coupled with the limited credibility of the results of such first-principles models at such an early stage of development, led NHTSA to reallocate most of its funds to testing. A parallel effort was the development of DYCAST at Grumman Aerospace Corporation [10]. This program contained both explicit and implicit integrators, although the emphasis was on the latter. The Grumman team developed considerable experience in the simulation of frontal impact, and it simulated frontal crashes for a prototype of the de Lorean, the 1984 Chevrolet Corvette [11], and the 1984 Dodge Caravan [12].
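To make the contrast between implicit and explicit integration concrete, the following is a minimal sketch of an explicit, central-difference style update for a linear spring-mass chain. It is illustrative only, with assumed masses, stiffness, and time step; it is not code from WHAMS or WRECKER.

```python
import numpy as np

# Minimal sketch of explicit time integration for a chain of masses
# connected by linear springs (illustrative data, not a crash model).
n = 10                      # number of nodes
m = np.full(n, 1.0)         # lumped nodal masses [kg]
k = 1.0e4                   # spring stiffness [N/m]
dt = 1.0e-4                 # time step, chosen below the stability limit
u = np.zeros(n)             # displacements
v = np.zeros(n)             # velocities
v[0] = 10.0                 # impact-like initial velocity on the first node

def internal_force(u):
    """Assemble restoring nodal forces for the spring chain."""
    f = np.zeros_like(u)
    stretch = u[1:] - u[:-1]          # spring elongations
    f_spring = k * stretch            # spring forces
    f[:-1] += f_spring                # reaction on left node of each spring
    f[1:] -= f_spring                 # reaction on right node of each spring
    return f

for step in range(2000):
    a = internal_force(u) / m         # accelerations from the current configuration
    v += dt * a                       # explicit velocity update (no equation solving)
    u += dt * v                       # explicit displacement update
```

Each step requires only force evaluations and vector updates, with no solution of equations; this is the source of both the robustness noted above and the small-time-step requirement discussed further in Section 3.1.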

Fig. 2. Time history of acceleration of center of gravity; comparison of WRECKER calculation and test, from Welch, Bruce and Belytschko [9].

Fig. 3. Side view of DYCAST model of a rear-engine automobile, from Pifko and Winter [10].

Two models used by Grumman are shown in Figs 3 and 4. Although these are not entirely first-principles models, in that they could not replicate sheet-metal crushing with fidelity, comparisons with tests show surprisingly good accuracy, with errors in crush distances of only five percent [12]. The model in Fig. 3 employed 663 unknowns. The advancement of computational techniques suitable for crashworthiness analysis during the next ten years was also pursued in nuclear reactor safety analysis and the defense community. Northwestern University, in conjunction with Argonne National Laboratory, continued the development of the WHAMS-3D code [13], using strictly explicit time integration. The principal development during this time was a new four-node quadrilateral element for thin shells which required only one quadrature point per element, reported by Belytschko, Lin and Tsay [14].

This element, in conjunction with other developments, provided the program with a tenfold increase in speed. At the same time, at Lawrence Livermore National Laboratory, John Hallquist [15] developed the DYNA-3D program, which uses explicit time integration and is completely vectorized to take advantage of the Cray computer architecture. It includes both the Hughes-Liu [16] shell element and the Belytschko-Tsay-Lin quadrilateral element, with the latter being significantly faster. Benson et al. [17] report that DYNA-3D calculations have been made with 20,000 shell elements and 120,000 degrees of freedom, which required only two hours on a Cray/XMP, whereas an implicit calculation with NIKE3D with only 2000 elements required over eight hours of CPU time. The refinement of models which can be achieved with such tools is illustrated in Fig. 5, which shows a 3400-element DYNA-3D model [18].

Fig. 4. DYCAST model of a Dodge minivan, from Regan and Winter [12].

Fig. 5. A 3400-element DYNA-3D model of a car, from Hallquist and Benson [18].

Such models are much closer to the goal of achieving first-principles simulation of a crash than the models of the 1970s, yet important factors such as the suspension, engine block, and transmission are absent. In addition, Hallquist developed an extremely robust and efficient contact-impact algorithm which made possible simulations of extreme wrinkling of sheet metal. These enhanced capabilities alerted the automotive industry in the U.S., Japan and Europe to the potential of computer simulation of crashworthiness. Both WHAMS and DYNA-3D are available in the public domain, and several vendors have developed commercial programs, such as PAMCRASH and RADIUS, by adopting the concepts and Fortran coding of early versions of DYNA-3D and WHAMS. These programs have been sold to several automotive companies, but they do not reflect the latest developments in nonlinear structural dynamics finite element technology.

3. RECENT DEVELOPMENTS

3.1. Subcycling

Because of the lack of robustness of implicit time integration procedures for the complex phenomena which occur in car crashes, most of the currently used programs for car-crash simulation use explicit time integration. The drawback of this method, as already mentioned, is that a small time step must be used in order to satisfy the conditional stability of these integrators. For stability, the time step must be less than the time required for an elastic wave to traverse the smallest element. Thus, the presence of a few small elements in a mesh requires the entire mesh to be integrated with a very small time step. In multi-time step integration methods, or subcycling methods, different time steps can be used for different parts of the mesh [19, 20]. In these methods, only the subdomains which contain the smallest elements are integrated with the smallest time step, and a much larger time step can be used for the remainder of the mesh; a sketch of the basic idea is given below.
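As a rough illustration, and not the algorithm of refs [19, 20], the fragment below estimates an element-wise stable time step from the element size and the elastic wave speed, dt_i <= L_i / c with c = sqrt(E/rho), and then assigns each element an integer, power-of-two subcycle ratio so that small elements are updated several times for every update of the large ones. The element sizes and material constants are made-up values.

```python
import numpy as np

# Illustrative subcycling setup (assumed data, not from refs [19, 20]).
E, rho = 200.0e9, 7800.0          # steel-like modulus [Pa] and density [kg/m^3]
c = np.sqrt(E / rho)              # elastic (bar) wave speed [m/s]

lengths = np.array([0.002, 0.05, 0.08, 0.004, 0.10])   # characteristic element sizes [m]
dt_elem = lengths / c             # element-wise stable time steps
dt_min = dt_elem.min()            # step dictated by the smallest element

# Assign each element a power-of-two multiple of dt_min, rounded down so
# that no element exceeds its own stability limit.
ratios = 2 ** np.floor(np.log2(dt_elem / dt_min)).astype(int)

for i, (L, r) in enumerate(zip(lengths, ratios)):
    print(f"element {i}: size {L*1e3:5.1f} mm, subcycle ratio {r}, "
          f"local step {r * dt_min:.2e} s (limit {dt_elem[i]:.2e} s)")
```

Elements or nodes sharing the same ratio can then be advanced together in long vector loops, which is how the procedures mentioned below avoid compromising vectorization.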

Procedures have been developed for automatic selection of the time steps so that the advantages of vectorization are not compromised and the overall stability of the process is ensured. These procedures have been implemented in the program WHAMS-3D, and on typical models of automobile front ends they can speed up the solution by factors of three to ten. For example, in the model shown in Fig. 6, the speedup achieved by subcycling is approximately a factor of five. This subcycling procedure is now being incorporated in an MSC version of DYNA-3D.

3.2. New element formulations

The original Belytschko-Tsay-Lin and Hughes-Liu plate elements were developed with a focus on computational efficiency. It has been known for some time that they suffer certain deficiencies: both elements fail to pass the standard patch test for plates, and the former does not work well in twisted configurations such as the standard twisted beam test. The WRECKER triangular shell element also fails the standard patch test except for certain meshes. The significance of the failure to pass the standard patch test is not clear, since a mathematical link between failing this patch test and a lack of convergence has not been established; furthermore, techniques such as the Hallquist hourglass control method, which seem well behaved, also fail this patch test. Recently, Belytschko and Lasry [21] developed a fractal patch test which is more directly tied to the notion of convergence for distorted meshes of elements. The test is illustrated in Fig. 7. The essential feature of this test is that it uses a repeated structure of deformed elements at different scales; hence the term 'fractal'. The finite element solutions are not required to be exact, but they are required to converge monotonically to the exact solution for the patch as the mesh is refined. Surprisingly, we found that both the Hughes-Liu element and the Belytschko-Tsay-Lin element fail the fractal patch test.

Fig. 6. Front end modelled by WHAMS-3D; subcycling increases speed by a factor of five.

The Hallquist hourglass control procedure, on the other hand, which fails the standard patch test, passes the fractal patch test. Based on some recent theoretical developments, we have now been able to modify the Belytschko-Tsay-Lin element so that it passes the fractal patch test and the standard twisted beam problem. This increases the robustness and reliability of the element, and it has been achieved without a decrease in computational speed.


3.3. Adaptive meshes

The crushing and wrinkling of sheet metal are associated with severe localized deformation in areas of wrinkling and hinging. A reasonably accurate representation of these phenomena requires much finer meshes than are currently feasible. A remedy for this difficulty, which has been the focus of considerable research in computational mechanics over the last seven years, is the use of adaptive techniques; see Noor and Babuska [22] and Oden and Demkowicz [23]. One form of adaptivity, known as h-adaptivity, is particularly promising in crashworthiness analysis.

Fig. 7. Fractal patch test.

In this type of adaptivity, the mesh is refined during the course of the computation where it is needed in order to achieve the requisite accuracy. An h-adaptive procedure has been implemented in the WHAMS-3D code by Belytschko [24] for nonlinear analysis of shells. The adaptivity consists of two processes: fission, in which elements are subdivided or refined, and fusion, in which they are recombined when a fine mesh is no longer needed; a simplified sketch of this marking logic is given below. An example of a computation with an h-adaptive mesh is shown in Fig. 8. As can be seen from Fig. 8, the fine elements migrate with the areas of maximum deformation. Initially, the fine elements are in the crown of the cylinder, where the most bending occurs. As the crown of the cylinder begins to deform in a plateau-like mode, the fine elements migrate to the edge of the plateau where the hinging occurs, and the elements in the center fuse. Finally, when the center begins to deform again, these elements are again fissioned to provide a more refined mesh. A comparison of these solutions with a uniform fixed mesh in Fig. 9 shows that comparable accuracy can be achieved with about half the number of elements. Furthermore, when this technique is combined with subcycling, fourfold increases in speed can be achieved, since the adaptive mesh contains many larger elements which can be integrated with a larger time step.
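The following fragment is a schematic of how such fission/fusion decisions might be driven by an element error indicator. The thresholds, the data layout, and the function name are invented for illustration and are not taken from the WHAMS-3D implementation of ref. [24].

```python
# Schematic h-adaptivity marking pass (illustrative thresholds and data,
# not the WHAMS-3D algorithm of ref. [24]).
REFINE_ABOVE = 0.10   # fission an element whose indicator exceeds this
COARSEN_BELOW = 0.02  # fuse siblings when all of their indicators drop below this

def mark_elements(elements):
    """elements: list of dicts with an 'error' indicator, a refinement 'level',
    and, for refined elements, a common 'parent' id (None for coarse elements)."""
    fission, fusion_candidates = [], {}
    for i, e in enumerate(elements):
        if e["error"] > REFINE_ABOVE:
            fission.append(i)                       # locally refine: split into children
        elif e["parent"] is not None and e["error"] < COARSEN_BELOW:
            fusion_candidates.setdefault(e["parent"], []).append(i)
    # Fuse only when every child of a parent is simultaneously below the threshold.
    fusion = [ids for ids in fusion_candidates.values() if len(ids) == 4]
    return fission, fusion

# Toy usage: three coarse elements and one refined family of four children.
mesh = ([{"error": 0.15, "level": 0, "parent": None},
         {"error": 0.01, "level": 0, "parent": None},
         {"error": 0.05, "level": 0, "parent": None}] +
        [{"error": 0.005, "level": 1, "parent": "A"} for _ in range(4)])
print(mark_elements(mesh))
```

A real implementation would, of course, also transfer nodal and quadrature-point data between old and new elements and enforce compatibility across refinement boundaries.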

These techniques can revolutionize crash simulation. For example, in a frontal crash, the regions of severe deformation differ significantly from those in a crash at an angle. Although an experienced analyst could select those portions of a mesh which are likely to undergo the most extensive damage and refine them prior to the run, this would be an extremely time-consuming approach. It is preferable to start the various simulations with the same mesh and let the response dictate any refinement. In an adaptive program, the parts of the mesh which need refinement are selected automatically by the program, which permits the analyst to start with a coarser mesh and yet achieve better accuracy.

3.4. Constitutive formulations

The last seven years have seen dramatic improvements in the accuracy and efficiency of the constitutive models which are used for nonlinear response; see Hughes [25]. These include more accurate elastic-plastic models and algorithms, viscoplastic laws, and elastic-plastic laws with corners which are more capable of simulating failure. These developments have decreased the computational cost of simulations because, in many cases, simulations are dominated by the cost of the constitutive evaluation.
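To indicate why the constitutive evaluation dominates the cost, the fragment below sketches a classic rate-independent return-mapping update for one-dimensional plasticity with linear isotropic hardening; an update of this kind is performed at every quadrature point at every time step. It is a generic textbook scheme with assumed material constants, not one of the specific models surveyed in ref. [25].

```python
# One-dimensional rate-independent plasticity with linear isotropic hardening;
# a generic stress-update sketch (assumed constants, not a model from ref. [25]).
E = 200.0e9         # Young's modulus [Pa]
H = 2.0e9           # hardening modulus [Pa]
sigma_y0 = 250.0e6  # initial yield stress [Pa]

def stress_update(eps, eps_p, alpha):
    """Return (sigma, eps_p, alpha) for total strain eps at one quadrature point."""
    sigma_trial = E * (eps - eps_p)            # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha)
    if f_trial <= 0.0:                         # elastic step: no correction needed
        return sigma_trial, eps_p, alpha
    d_gamma = f_trial / (E + H)                # plastic corrector from consistency
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * d_gamma * sign   # return to the updated yield surface
    return sigma, eps_p + d_gamma * sign, alpha + d_gamma

# Toy strain history: load one point well past yield, then unload.
eps_p, alpha = 0.0, 0.0
for eps in [0.0005, 0.0015, 0.0030, 0.0010]:
    sigma, eps_p, alpha = stress_update(eps, eps_p, alpha)
    print(f"eps = {eps:.4f}  sigma = {sigma/1e6:8.1f} MPa  eps_p = {eps_p:.5f}")
```

Since this kind of update, and its far more expensive multi-axial counterparts, runs once per quadrature point per step, faster return-mapping algorithms translate directly into faster simulations.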

Fig. 8. Nonlinear response of a shell solved by an adaptive procedure in WHAMS-3D; configurations shown at 0.0125, 0.025, 0.050, 0.200, 0.250 and 0.300 msec.

Fig. 9. Comparison of a 218-element adaptive solution with fixed mesh solutions (512, 200 and 128 elements).

4. FUTURE DEVELOPMENTS AND CONCLUSIONS

There are many aspects of crashworthiness simulation which require substantial further development, and changing technologies will introduce still others. For example, the trend towards the use of plastics and perhaps even composites will require the characterization of the behavior of these materials under the large deformations of a crash event. Many of these materials are less ductile than metals, so the modelling of the failure of components will become unavoidable. Even in metals, many components fail in a crash, and such failures cannot be modelled reliably today. The computational methods used in crashworthiness will also evolve as the size of the models increases further. For example, in computational fluid dynamics, the trend today is from explicit to implicit methods, which are becoming increasingly robust. Fluid problems do not have the discontinuous character that arises from transitions between elastic and plastic response and from contact-impact in crash simulations, but the attractiveness of the unconditional stability of implicit integrators cannot be overlooked. They may prove useful for the later stages of a crash event or on parts of the mesh with explicit-implicit partitions; see [26]. Contact-impact algorithms also still require further refinement; they tend to be quite noisy and often require a large part of the computational effort. An important current trend is the emergence of commercial parallel computers, such as the Alliant FX/8 and the new Cray/YMP, both of which are eight-processor machines with a shared memory. These new computer architectures are a reflection of the consensus that single processor-single data stream machines, which are often called von Neumann computers, cannot be speeded up much more, because they are running into limits associated with the speed of light. The trend to parallelism will continue, as evidenced by the announcement by Cray of a 64-processor machine for the mid-1990s.

These new computer architectures will have significant implications for the structuring of efficient algorithms and will probably require considerable redesign of current computer programs. This redesign will not be easy, since supercomputers of this class, in addition to concurrency, employ vectorization, so a very fine-grained parallel algorithm is not efficient. Although computer scientists have designed compilers which automatically enable a program developed for a von Neumann computer to exploit concurrency and vectorization, there is considerable evidence that, without careful redesign, these compilers are quite ineffective. For example, on the Alliant FX/8, the WHAMS program in an unvectorized, nonconcurrent form runs at only about one-fifth the speed of a new version we have written which is specifically designed for concurrency and vectorization. On a 64-processor machine, the handicaps of any algorithm which is not designed for concurrency will be even more severe. From the perspective of some of the other fields of structural dynamics, crashworthiness analysis still seems to be in a rather nascent state. The community has only recently designed standard problems by which computations can be compared to experiments, and most of these are very simple when compared to the full-scale crashworthiness problem. From experience in other fields, such as weapons effects and reactor safety, it is clear that a substantial learning process, which involves careful experiments and computational simulations of these experiments, is needed. This is expensive and time-consuming. However, the tremendous increases in computational speed which have been brought about by the development of new computational algorithms and by increases in computer speed now make the analysis of realistic models quite feasible. When the people involved in crashworthiness learn how to use these techniques and validate them with appropriate experiments, they will possess a tool of awesome power and usefulness which can eliminate considerable testing.

Acknowledgement-The support of the NASA-Langley Research Center under Grant NAG-1-650 to Northwestern University is gratefully acknowledged.

REFERENCES

1. M. M. Kamal, Analysis and simulation of vehicle to barrier impact. SAE Paper 700414, Detroit, MI, May (1970).
2. J. E. Thompson, Control of structural collapse in automotive side impact collisions. Ph.D. dissertation, University of Detroit, MI (1972).
3. R.-C. Shieh, Basic research in crashworthiness II-precollapse dynamic analysis of plane, ideal elastoplastic frame structures including the case of collision into a narrow rigid pole obstacle. CAL Report BV2987-V-3, Cornell Aeronautical Laboratory, Ithaca, NY (1972).
4. J. W. Young, Crash: a computer simulation of nonlinear transient response of structures. Report DOT-HS-091-1-125-B, Philco-Ford Corporation, Palo Alto, CA, March (1972).
5. R. J. Melosh, Car-barrier impact response of a computer simulated Mustang. Report DOT-HS-091-1-125-A, Philco-Ford Corporation, Palo Alto, CA, March (1972).
6. G. Wittlin and M. A. Gamon, Experimental program for the development of improved helicopter structural crashworthiness analytical and design techniques. USAAMRDL Technical Report 72-72, May (1973).
7. T. Belytschko and A. H. Marchertas, Nonlinear finite-element method for plates and its application to dynamic response of reactor fuel subassemblies. Trans. ASME, J. Pressure Vessel Technol. 96, 251-257 (1974).
8. T. Belytschko, R. E. Welch and R. W. Bruce, Sheet-metal behavior in crash. In Aircraft Crashworthiness (Edited by K. Saczalski et al.), pp. 549-560. University of Virginia Press (1975).
9. R. E. Welch, R. W. Bruce and T. Belytschko, Finite element analysis of automotive structures under crash loadings. Report DOT-HS-105-3-697, IIT Research Institute, Chicago, IL, May (1975).
10. A. B. Pifko and R. Winter, Theory and application of finite element analysis to structural crash simulation. Comput. Struct. 13, 277-285 (1981).
11. R. Winter, J. Crouzet-Pascal and A. B. Pifko, Front crash analysis of a steel frame auto using a finite element computer code. SAE Paper 840278, VSM Conference, Detroit, MI, April (1984).
12. A. J. Regan and R. Winter, Nonlinear finite element crash analysis of a minivan. In Computational Mechanics '86, Proceedings of the International Conference on Computational Mechanics, VI/223-231. Springer, Tokyo (1986).
13. T. Belytschko and J. M. Kennedy, WHAMS-3D, an explicit 3D finite element program. KBS2 Inc., P.O. Box 453, Willow Springs, IL 60480 (1986).
14. T. Belytschko, J. I. Lin and C.-S. Tsay, Explicit algorithms for the nonlinear dynamics of shells. Comput. Meth. appl. Mech. Engng 42, 225-251 (1984).
15. J. O. Hallquist, Theoretical manual for DYNA3D. Report UCID-19401, University of California, Lawrence Livermore National Laboratory, Livermore, CA (1983).
16. T. J. R. Hughes and W. K. Liu, Nonlinear finite element analysis of shells: Part I-three-dimensional shells. Comput. Meth. appl. Mech. Engng 26, 331-362 (1981).
17. D. J. Benson, J. O. Hallquist, M. Igaraski, K. Shimomaki and M. Mizuno, The application of DYNA3D in large scale crashworthiness calculations. Report UCRL-94028, University of California, Lawrence Livermore National Laboratory, Livermore, CA, April (1986).
18. J. O. Hallquist and D. J. Benson, DYNA3D, a computer code for crashworthiness engineering. Report UCRL-95152, University of California, Lawrence Livermore National Laboratory, Livermore, CA, September (1986).
19. T. Belytschko, Partitioned and adaptive algorithms for explicit time integration. In Nonlinear Finite Element Analysis in Structural Mechanics (Edited by W. Wunderlich, E. Stein and K. J. Bathe), pp. 572-584. Springer, Berlin (1980).
20. T. Belytschko, P. Smolinski and W. K. Liu, Stability of multi-time step partitioned integrators for first-order finite element systems. Comput. Meth. appl. Mech. Engng 49, 281-297 (1985).
21. T. Belytschko and D. Lasry, A fractal patch test. Int. J. numer. Meth. Engng 26, 2199-2210 (1988).
22. A. K. Noor and I. Babuska, Quality assessment and control of finite element solutions. Finite Elements in Analysis and Design 3, 1-26 (1987).
23. J. T. Oden and L. Demkowicz, Advances in adaptive improvements: a survey of adaptive finite element methods in computational mechanics. In State-of-the-Art Surveys on Computational Mechanics (Edited by A. K. Noor and J. T. Oden), Chap. 13, pp. 441-467. ASME, New York (1989).
24. T. Belytschko, Adaptive finite element methods for shells. Report to the Air Force Office of Scientific Research on Grant F49620-85-C0128, Northwestern University, Evanston, IL, January (1988).
25. T. J. R. Hughes, Numerical implementations of constitutive models: rate-independent deviatoric plasticity. In Theoretical Foundation for Large-Scale Computations of Nonlinear Material Behavior (Edited by S. Nemat-Nasser et al.), pp. 29-57. Martinus Nijhoff, Dordrecht, The Netherlands (1984).
26. T. J. R. Hughes and T. Belytschko, A précis of developments in computational methods for transient analysis. J. appl. Mech. 50, 1033-1041 (1983).