Computer Methods in Applied Mechanics and Engineering 75 (1989) 227-240
North-Holland
INTERNAL DESIGN OF THE E3D INTER-DISCIPLINE ENVIRONMENT

J.P. LA HARGUE
CISI INGENIERIE, Rungis, France

J.P. MASCARELL
National Society for the Study and Construction of Aero Engines (SNECMA), Moissy Cramayel, France
Requirements and techniques are presented for the internal design of an inter-discipline environment, i.e., an environment oriented towards the linkage of various structural analysis, heat transfer and other numerical codes. The choices made in the E3D software for the study of structures in aircraft engines are discussed.
1. Introduction

The technological evolution of turbine engines in the last two decades has been essentially governed by a search for still higher performance. Once the general decisions have been taken, the development of each component relies upon a multi-discipline approach, leading to a final compromise. In order to elaborate such a compromise, one must master the design criteria of each scientific discipline. Figure 1 shows the interaction between the disciplines involved in the design of a cooled turbine blade. Most of these disciplines have nowadays been equipped with computer tools, some of which represent a big investment into program development and validation and cannot realistically be rewritten. On the other hand, those disciplines deal with quite different aspects of the phenomenon; therefore they use quite different physical modelizations (e.g. 1D, 2D, 3D approximations) and different numerical discretizations.
Fig. 1. Design criteria for a cooled turbine blade. [Figure showing the interacting criteria: air parameters, aerodynamic efficiency, geometry, blade cooling (cooling efficiency, thermal gradients, cooling system), section law, material, injection holes location, damping system, shrouds; trade-off = optimization.]
But they must exchange information in order to reach the final design compromise together. The more automatic this exchange is, the higher the synergy between the disciplines, and the higher the overall productivity. The mechanization of such an inter-discipline exchange is a logical problem, due to the discrepancy between the logical data models. It should not be confused with the physical data exchange problem, which can be solved, for example, by sharing all data on a single large computer. We present here a real-size attempt to solve this problem at the research department of SNECMA, by means of the so-called E3D system (3D editor). We shall not give a tedious enumeration of the external facilities of this system, but rather discuss the requirements and feasibility constraints which have led to its actual design; finally, we shall try to indicate some trends in software engineering for such systems.
2. Objectives

We start with the hypothesis that the disciplines to be linked together are already equipped with validated batch programs (here the SAMCEF analysis code, a polyhedral thermal analysis code, and so on). On the other hand, numerical data are available from the CAD systems (here COMPUTERVISION and CATIA). As this environment is not likely to be modified, it is clear that the grain of inter-discipline communication is an exchange file, whose syntax and semantics heavily depend upon the corresponding discipline. Therefore, the first objective of an inter-discipline system is to provide a computer-aided translation of one file format into another. In more detail:
- the system should accept the drawing files (e.g. IGES files) from CAD systems as input;
- it should provide computer aid to transform those files into load-case input files for existing analysis codes;
- it should accept the output files of those numerical codes as input;
- it should provide computer aid to transform the output file from one code (e.g. thermal results of a heat transfer code) into an input file for another code (e.g. stress analysis).
As this file-to-file translation objective cannot be met in a mechanical way under the current state of the art, but rather implies some computer-aided intervention of a human operator, meeting it will require a non-trivial investment in interactive graphics, which should be recovered by wide use. Thus, in order to reuse this investment, a second, derived objective is to provide interactive graphics for disciplines which were previously equipped only with batch programs or weakly interactive interfaces. In more detail:
- the system should provide facilities to enhance an existing batch code into an interactive one;
- it should provide an interactive graphic display of the manipulated entities and an echo for any modification;
- it should ensure early detection of errors and inconsistencies, and avoid the generation of incorrect load-case files;
- it should allow the user to master complex objects through usual decomposition techniques: parts and subparts, repetitions, symmetries, superelements, and mixed 3D, 2D, 2D-axisymmetric and 1D modelizations.
3. Examples of computer-aided translation

Before starting the analysis of the internal system design, it seems appropriate to give the reader some indications about the external behavior of a computer-aided translation system. Two examples are selected in the area of aircraft structure analysis.

3.1. Converting a 2D-IGES file into a 3D mesh file
In this first example of computer-aided translation, the CAD office is assumed to provide a 2D-IGES file (Fig. 2). The operator has to display the file and to set up appropriate windows through which several parts of the 2D-IGES drawing will be used as a background for 3D synthesis. He then extracts the content of the IGES file into 3D curves; if necessary, he transforms those curves into surfaces by a parametric generation (e.g. translation, rotation). Drawing details that are too small are eliminated during this extraction. At this point the analytical information may be used in different ways, according to the circumstances:
- taking curves and surfaces as constraints, the operator may draw the mesh lines, which are corrected by the computer into exact coordinates;
- he may ask the system to generate equally spaced polygonal 1-blocks or polyhedral 2-blocks fitting the analytical curves or surfaces;
- if high precision is not required, he simply draws mesh lines fitting the analytical lines approximately.
The mesh is then completed using several functions: block generation, block topological report, mesh refinement, block intersection.

Fig. 2. Example of a 3D mesh.

3.2. Interpolating thermal results to prepare a stress analysis

In this example (Fig. 3), the user simultaneously works on two different modelizations coming from two different disciplines.
Stress computation will be performed on a finite element mesh (continuous lines) where thermal data are localized at the nodes. Those thermal data are to be interpolated from a previous thermal computation on polygonal elements (dashed lines) where thermal data are localized at the centers of the 1-blocks and 2-blocks. As can be seen in Fig. 3, not only do the two thermal discretizations differ, but the mesh contours may also differ, because there is a small difference in the component design or in the cellular decomposition. Thus, the computer aids the user in dragging the outer thermal unknowns back into interpolable regions. These two examples show that, in order to set up the whole system, one has to link data manipulation primitives (e.g. adding nodes and cells) together with interactive graphics primitives (e.g. operator requests, display, echo, menus, ...). These primitives and their linkage are discussed in the next sections.
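The numerical heart of this second example can be pictured with a deliberately simplified sketch. The function names, the inverse-distance weighting scheme and the search radius below are illustrative assumptions, not the actual (operator-aided) E3D algorithm; the sketch merely carries cell-centered thermal values onto the nodes of a different mesh and flags the "outer" nodes that the drag-back facility described above is designed to handle.

```python
# Sketch only: interpolate cell-centred thermal results onto FE nodes.
import math

def interpolate_to_nodes(cell_centers, cell_temps, fe_nodes, radius):
    """For each FE node, average the temperatures of thermal cells whose
    centers lie within `radius`, weighted by inverse distance."""
    node_temps = []
    for nx, ny in fe_nodes:
        weights, acc = 0.0, 0.0
        for (cx, cy), t in zip(cell_centers, cell_temps):
            d = math.hypot(nx - cx, ny - cy)
            if d < 1e-12:            # node coincides with a cell centre
                weights, acc = 1.0, t
                break
            if d <= radius:          # only nearby cells contribute
                w = 1.0 / d
                weights += w
                acc += w * t
        # A node beyond every cell's reach corresponds to the paper's
        # "outer" thermal unknowns: here it is flagged, not dragged back.
        node_temps.append(acc / weights if weights else None)
    return node_temps

centers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
temps   = [900.0, 850.0, 870.0]
nodes   = [(0.2, 0.1), (5.0, 5.0)]           # second node is out of reach
print(interpolate_to_nodes(centers, temps, nodes, radius=2.0))
```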
4. Data model

In this section, we discuss the requirements and constraints for the data model and its associated manipulation primitives. This data model must be powerful enough to support translation towards the various target numerical codes. Obviously a solution with several coexisting data models will be more complex than a solution with only one. Apart from implementation problems, the major objection to a multi-model design is that the (possibly irreversible) data translation steps from one model to another may prevent the user from operating freely upon the data, because they may force him to obey some order in the data manipulation. So let us, if possible, solve the problem with a single data model.
4.1. Cellular decomposition requirements
At the microscopic level, we must deal with finite differences and finite elements. But we must also accept arbitrary polygonal and polyhedral elements, since they are used by some disciplines (thermal analysis, volume and mass checking). Another reason to accept arbitrary polygons and polyhedra is that, even if the ultimate translation step leads to well-formed finite elements or finite differences, the intermediate steps must support intermediate shapes. For instance, in order to fill a many-sided polygon with triangular elements, one has to identify the polygon, at least at some point. In other words, the microscopic model has to support the continuous evolution of objects during a translation process.

These considerations rule out a model strictly limited to finite elements, and argue for a more general, classical B-rep (boundary representation) model [1, 2]. That is to say, 0-blocks are defined as 2D or 3D points; then k-blocks (1 ≤ k ≤ 3) are defined by the enumeration of their boundary, made of (k-1)-blocks. In this recursive definition it is convenient to split each intermediate k-level into two sublevels, allowing the aggregation of single blocks into more manageable compound blocks of the same dimension. In the finite element case, the classical node enumeration can be computed by successive boundary derivations inside the B-rep model.

Compared with the finite element method (FEM) model, the B-rep model has a better mathematical basis, which allows a uniform implementation of elements of any dimension or complexity, while the FEM model always refers to a discontinuous catalog of authorized elements. As a first example, relations between blocks are easily added as an upper layer to the B-rep model to modelize inter-element connections (e.g. radiative exchanges). As a second example, numerical and logical attributes (e.g. temperatures, pressures, stresses) are easily appended to each k-block (0 ≤ k ≤ 3), while in a FEM model there is no natural place to attach attributes except at the 0-blocks (i.e., the nodes) and at the n-blocks (i.e., the elements themselves).
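The recursive definition can be made concrete in a few lines. The following sketch uses invented class and method names (the paper gives no implementation): a k-block is defined by its boundary of (k-1)-blocks, the classical node enumeration is obtained by successive boundary derivations, and attributes can be attached at any dimension.

```python
# Sketch of the B-rep microscopic model (names are illustrative).
class Block:
    def __init__(self, dim, boundary=(), point=None):
        self.dim = dim                    # 0..3
        self.boundary = list(boundary)    # the (dim-1)-blocks
        self.point = point                # coordinates, for 0-blocks only
        self.attrs = {}                   # e.g. temperature, at ANY dimension

    def nodes(self):
        """Derive the 0-blocks by walking the boundary recursively."""
        if self.dim == 0:
            return {self}
        found = set()
        for b in self.boundary:
            found |= b.nodes()
        return found

# A unit square as a 2-block: 4 vertices, 4 edges, 1 face.
v = [Block(0, point=p) for p in [(0, 0), (1, 0), (1, 1), (0, 1)]]
edges = [Block(1, boundary=(v[i], v[(i + 1) % 4])) for i in range(4)]
face = Block(2, boundary=edges)
print(sorted(b.point for b in face.nodes()))   # the classical node list
```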
Fig. 4. Microscopic data model (relations; 3-blocks, 2-blocks, 1-blocks, 0-blocks; curves and surfaces).
Compared with the FEM model, the major disadvantage of the B-rep model is its lack of efficiency, both in storage and in computer time, for classical finite element computations. However, it can be noted that for many global actions over a mesh (e.g. isovalue plotting), FEM-based software has to set up temporary connectivity tables which are already included in the B-rep model. Thus, we have adopted the B-rep model as the basis for cellular decomposition in the E3D software.
4.2. Geometrical requirements

A translation system must also be able to accommodate the output from CAD systems, i.e., analytical curves and surfaces. The equations of those curves and surfaces might be internalized into a limited catalog. Since we have to develop a procedure interpreter anyway (see Section 6.3), we prefer to externalize the analytical information into an unlimited library of generic interpreted procedures (circles, splines, tori, ...) which are instantiated into actual curves and surfaces.

At this point in the discussion, we have two logical descriptions of a line: (a) as a compound 1-block; (b) as an analytical expression. Obviously we have to make a choice and decide what will happen, for example, when computing the intersection of two lines. If we choose the analytical description, we will have to solve the difficult problem of arbitrary surface intersection. If we merely choose the broken-line description, the computed intersection will be incorrect. We may escape this dilemma by first computing the intersection of the broken lines, and then using the analytical information to correct the previous result. That is to say, curves and surfaces are merely used as an analytical background for correcting a polyhedral description. In that way, the investment in a general solid modeler is avoided. The major drawback of this approach is that the link between the mesh and the background geometry is a rather loose one: if the geometry is modified, the mesh will not follow automatically. It seems difficult to escape this drawback in any system which is not fully integrated inside a solid modeler.
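To make the "analytical background" idea concrete, the sketch below first intersects two broken lines and then corrects the approximate point by projecting it onto the exact curve. All names are invented and the circle stands in for an arbitrary generic curve procedure; this is an illustration of the principle, not the E3D code.

```python
# Sketch: polygonal intersection first, analytical correction second.
import math

def seg_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1p2 and p3p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None                              # parallel segments
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def correct_onto_circle(pt, center, radius):
    """Project the approximate point back onto the exact curve."""
    dx, dy = pt[0] - center[0], pt[1] - center[1]
    d = math.hypot(dx, dy) or 1.0
    return (center[0] + radius * dx / d, center[1] + radius * dy / d)

approx = seg_intersection((0.0, 0.7), (1.0, 0.7), (0.6, 0.0), (0.6, 1.4))
print(approx, '->', correct_onto_circle(approx, (0.0, 0.0), 1.0))
```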
4.3. Macroscopic decomposition requirements
The technical objectives stated in Section 2 call for a decomposition of a complex object into more manageable components. The simplest approach is to allow the management of subsets of the set of elements of the microscopic model. If we want to express symmetries or repetitions, or if we want to edit some subsets in various coordinate systems (e.g., polar coordinates, or the parametric coordinates of a parameterized surface), a more complex decomposition model is needed. It seems convenient to distinguish decomposition nodes, each of them bearing its own 2D or 3D coordinate system. Nodes are joined by decomposition arcs, each of which bears a coordinate transformation (Fig. 5). Such a complex decomposition is not easily mastered by the end-users, and for most cases a flat decomposition, with only one level, seems to be sufficient.
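The node-and-arc model can be sketched as follows; the class names and the use of a rotation to express a repeated blade sector are illustrative assumptions, not E3D's data structures.

```python
# Sketch of the macroscopic model: nodes carry coordinate systems,
# arcs carry the transformation into the parent's coordinates.
import math

class DecompositionNode:
    def __init__(self, name):
        self.name = name
        self.children = []            # (child, transform) pairs = the arcs

    def add(self, child, transform):
        self.children.append((child, transform))

def rotation(theta):
    """A 2D rotation, standing in for any arc transformation."""
    c, s = math.cos(theta), math.sin(theta)
    return lambda p: (c * p[0] - s * p[1], s * p[0] + c * p[1])

# One blade sector modelled once, repeated by rotation around the disc:
root = DecompositionNode('disc')
sector = DecompositionNode('blade sector')
for k in range(3):                    # 3 symmetric repetitions
    root.add(sector, rotation(2 * math.pi * k / 3))

local_point = (1.0, 0.0)              # a mesh point in sector coordinates
for child, T in root.children:
    print(child.name, '->', T(local_point))
```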
Fig. 5. Macroscopic data model (3D nodes and 2D nodes).
4.4. Data manipulation primitives

Clearly the manipulation primitives for the macroscopic data model are the usual graph manipulations. At the microscopic level, because of the interactive use of the system, the most frequently used primitive turns out to be the inquiry, which resembles the inquiry primitive of a database management system. An inquiry is a predicate expression P(X) over the blocks X, using logical and geometrical selection criteria combined with the relational operators 'and, or, not' and the quantifiers 'some, every'. For example:

    delete the 2-blocks X which belong to part A and every 0-block Y of which is contained in rectangle R
As shown in this example, the integration of the inquiry within another primitive (here the delete primitive) allows the operator to apply that primitive to a whole set of blocks at once. This is how the amount of interactive exchange at the man-machine interface is drastically reduced. The primitives are the block-level primitives of the microscopic data model: defining, updating and deleting the blocks, relations, curves and surfaces, schemes, and the logical and numerical attributes over the blocks, plus some non-trivial computations like integral calculus over blocks. When the system must act as a mesh generator, constructive primitives like mesh refinement and mesh duplication are added. As the system must prepare load-cases for numerical codes, isoline and cut-line facilities must be added to help the user prepare and verify his data. In view of inter-discipline communication, an important primitive is the computer-aided numerical interpolation of the output from one discipline over the mesh of another, as shown in the example of Section 3.2 (Fig. 3).
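The inquiry mechanism can be sketched in a few lines. The block representation (dictionaries) and helper names below are illustrative assumptions; the point is only that a predicate built from geometric and logical criteria is passed to another primitive, so one interaction acts on a whole set of blocks.

```python
# Sketch: an inquiry predicate embedded inside the delete primitive.
def in_rectangle(rect):
    xmin, ymin, xmax, ymax = rect
    return lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

def inquiry(blocks, predicate):
    """The inquiry primitive: select the blocks satisfying the predicate."""
    return [b for b in blocks if predicate(b)]

def delete(blocks, predicate):
    """A primitive with an embedded inquiry, as in the paper's example."""
    doomed = {id(b) for b in inquiry(blocks, predicate)}
    blocks[:] = [b for b in blocks if id(b) not in doomed]

# "delete the 2-blocks X which belong to part A and every 0-block Y of
#  which is contained in rectangle R":
mesh = [
    {'part': 'A', 'vertices': [(0, 0), (1, 0), (1, 1)]},
    {'part': 'A', 'vertices': [(0, 0), (9, 0), (9, 9)]},
    {'part': 'B', 'vertices': [(0, 0), (1, 0), (0, 1)]},
]
R = in_rectangle((0, 0, 2, 2))
delete(mesh, lambda b: b['part'] == 'A' and all(R(v) for v in b['vertices']))
print(mesh)    # only the large A-triangle and the B-triangle remain
```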
4.5. Compilation vs. interpretation
In view of continuous adaptation to various computer-aided translation problems, continuous evolution of the data model should be possible. Unfortunately, severe performance constraints appear when designing the implementation. For instance, a cube has 6 square sides; a square has 4 segment edges; a segment has 2 point vertices. An average-size mesh of n = 3000 cubic elements contains more than 3n squares and 3n segments. For such a mesh, the number of downward links in Fig. 4 is greater than 6n + 4·3n + 2·3n = 24n. There is exactly the same number of reciprocal upward links, leading to a total of 24n + 24n = 48n ≈ 150,000 links. If the links are implemented using a 4-byte word, they alone need some 600 Kbytes. The same considerations apply to other attributes and to auxiliary data such as the quad-trees and oct-trees used to accelerate the geometrical sorting of blocks, easily leading to a size of several Mbytes for a single geometry.

Reducing this amount of information and accelerating the data accesses clearly implies some tricky data structures and optimized programs. This optimization is only possible with compiled algorithms. Thus the block microscopic data model must be fixed and internalized into the programs: we can say that the data model has been compiled at compilation time, and cannot be modified at run-time. On the other hand, some less critical parts of the data model may be deferred until run-time. This is the case, for example, for curve and surface definitions, attribute descriptions and finite element schemes. This part of the data model may be considered as interpreted in the E3D system.

The distinction between compilation and interpretation is merely a state-of-the-art paradigm in software engineering. Theoretically, run-time requests may be dynamically transformed into object code of the computer, and indeed this is the case in some systems such as databases. But in practice, this cannot be achieved by standard development methods. Static compilation is the main reason for the lack of software flexibility in this area, and one may hope that advances in software engineering, e.g. OOP (object-oriented programming), will make it less mandatory.
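The storage estimate at the start of this subsection can be checked mechanically; the few lines below simply reproduce the arithmetic for n = 3000 cubes (the counts are lower bounds, which is why the text says "greater than"; 48n = 144,000 is rounded to about 150,000 links and about 600 Kbytes).

```python
# The Section 4.5 estimate, checked mechanically.
n = 3000
down = 6 * n + 4 * (3 * n) + 2 * (3 * n)   # 24n downward links
links = 2 * down                            # reciprocal upward links: 48n
print(links)                                # 144000, i.e. about 150,000
print(links * 4, 'bytes')                   # 576,000 bytes, roughly 600 Kbytes
```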
5. Interactive graphics
Early error detection and correction needs an immediate echo of any modification of the displayed objects, even for small modifications like moving a single mesh node. If a segmented display list is used, each block picture (i.e. point, line, area) may be captured inside a separate graphical segment. A local modification of the object then requires updating only a few segments, which provides a good response time on a so-called intelligent device. However, experience shows that such a large number of graphical segments imposes an overhead on the size of the display list, which may exceed the device capability: the same picture generates a display list several times bigger in the segmented case than in the flat case. By grouping a small number n = 2, 3, 4, ... of block pictures into each graphical segment, a trade-off may be found between the display list size and the interaction cost (Fig. 6).
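The trade-off can be illustrated with a toy model; the packing scheme and names are invented for illustration only.

```python
# Toy model of the Fig. 6 trade-off: n block pictures per graphical segment.
def build_segments(block_pictures, n):
    return [block_pictures[i:i + n] for i in range(0, len(block_pictures), n)]

pictures = ['block-%d' % i for i in range(12)]
for n in (1, 3, 12):
    segments = build_segments(pictures, n)
    # one local edit invalidates one segment, hence redraws n pictures
    print('n=%d: %d segments in the display list, %d pictures redrawn per edit'
          % (n, len(segments), n))
```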
Fig. 6. Graphical segment blocking trade-off.

Fig. 7. Abstract terminal.
Such graphical devices can be driven through specific interfaces or through graphics standards like GKS-3D and PHIGS. However, those basic interfaces offer an impressive number of different functions (roughly 200). On the other hand, they do not solve some elementary problems in this graphical area, e.g., label management. In order to program the system extensively, it seems more appropriate to redefine an abstract terminal with fewer (roughly 30) and higher-level functions. The need for abstraction also appears when defining menus and alphanumeric interactions, because here the basic standards only provide the lower-level functions. For instance, on a string request they only return the characters typed by the operator; lexical analysis and lexical error handling are not performed by the basic standards. It therefore seems necessary for the abstract terminal to contain a menu constructor and a menu interaction handler (Fig. 7).
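The flavor of such an abstract terminal can be sketched as follows. The function names are invented (the paper does not enumerate E3D's thirty-odd operations); the sketch only shows the principle of few high-level operations wrapping a low-level device, with menu handling and lexical checking done once, inside the layer.

```python
# Sketch of an abstract terminal layered over a low-level device.
class ConsoleDevice:
    """Stands in for a GKS-3D/PHIGS-level or vendor-specific driver."""
    def draw_polyline(self, pts):  print('polyline', pts)
    def draw_label(self, txt, at): print('label', txt, 'at', at)
    def show(self, msg):           print(msg)
    def read_string(self):         return input('> ')

class AbstractTerminal:
    def __init__(self, device):
        self.device = device

    def display_block(self, block_id, points):
        # one abstract call hides several low-level ones
        self.device.draw_polyline(points)
        self.device.draw_label(block_id, points[0])

    def menu(self, title, choices):
        """Menu constructor and interaction handler in one operation;
        lexical error handling is done here, not in every procedure."""
        self.device.show('%s: %s' % (title, '/'.join(choices)))
        while True:
            answer = self.device.read_string()
            if answer in choices:
                return answer
            self.device.show('unknown choice %r, please retry' % answer)

term = AbstractTerminal(ConsoleDevice())
term.display_block('2-block 17', [(0, 0), (1, 0), (1, 1)])
# term.menu('action', ['refine', 'delete', 'quit'])   # interactive use
```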
6. Further requirements

Starting from the examples of Section 3, we have detailed the data model problems in Section 4 and the interactive graphics techniques in Section 5. We are now able to study how to link data manipulation primitives and interactive graphics primitives together into a complete system. It is time to introduce some further requirements which have deeply influenced the design of the actual E3D system.
6.1. Repetition requirement

This first requirement may be stated as follows: whenever a repetitive sequence involving numerous man-to-computer interactions has been identified, it should be possible to encapsulate it within a more automatic procedure prompting the operator only for non-repetitive data. When the number of interactions reaches zero, the system runs in batch mode, under the control of a fully automatic procedure. Such a requirement is common practice in the operating system area, where each user defines his own interactive procedures through a command control language. Such procedures are written off-line, i.e., the user cannot simultaneously write a procedure and run it. So the repetition requirement should not be confused with a replay requirement allowing each interactive run to be replayed with some parametric changes. Replay facilities may be quite useful, but since they are more or less language-based, they may conflict with some immediate interactions which have no meaningful language-based equivalent: how does one replay, on a different mesh, the action 'and now, I destroy some awkward elements near the right corner'?
6.2. Distribution requirement

Repetitive sequences cannot generally be foreseen, and they cannot be encapsulated in advance into automatic procedures, because they emerge from the particular use of a discipline with particular data, which vary from one office to another. Conversely, procedures designed for a given office will not be useful for another one. To meet this local need for procedures, a distribution requirement may be stated as follows: it should be possible to customize the system locally into simultaneous versions, according to local needs; in order to avoid long delays in the modification of the system, the local adaptations should be locally designed and implemented without calling upon a central maintenance team. The distribution requirement permits a kind of biological adaptation of the system to the user needs.
6.3. Separation requirement

To meet the distribution requirement, classical compilation and link-editing does not seem to be the best solution:
- allowing distributed users to access a full programming language would make the system unsafe, because it would be impossible in practice to prevent them from making uncontrolled accesses to the other subsystems' data;
- in particular, distributed evolution of the system will be unsafe if the procedural language allows subterraneous sharing of data between procedures (e.g., Fortran commons);
- interactive procedures contain many backward jumps (repeat a question after an operator's error) and many exit jumps (the operator wishes to cancel an option); programming those numerous jumps will be unsafe if 'goto' statements are allowed;
- classical link-editing of large programs (e.g. several Mbytes of code) is a poorly dynamic way of replacing a small user procedure;
- classical link-editing leads to a lack of object-code unicity among several user versions, which complicates and possibly paralyzes the evolution of the system.
These considerations argue for a separation requirement, which may be stated as follows: interactive procedures should be programmed using a limited and safe language, with restricted access authorization to vital parts of the system; and interactive procedures should not be statically link-edited but rather dynamically loaded from libraries. If dynamic loading of procedures is employed, classical overloading of libraries may be used to customize the system according to local needs without losing unicity. From a theoretical viewpoint, dynamic loading has no consequence for the software engineering techniques applied to the procedures. However, dynamic loading of classical object code from compilers implies a significant machine-dependent investment, and this investment must be duplicated for each new host computer. In the E3D system, we have found it more economical to define a machine-independent object code for procedures and to run it through a portable interpreter. The drawback of interpretation versus compilation is its lack of efficiency; but man-to-computer dialogs are limited by the possible number of human answers to the computer prompts, and there are no severe efficiency constraints upon them.
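The combination of a restricted procedure language, a portable interpreter and library overloading can be sketched as follows. The instruction set, the library scheme and all names are invented for illustration; E3D's actual machine-independent object code is not described in the paper.

```python
# Sketch of the separation requirement: safe interpreted procedures,
# dynamically loaded, with local libraries overloading the central one.
SAFE_OPS = {'prompt', 'call', 'end'}          # no goto, no raw data access

def run(procedure, primitives):
    """The portable interpreter: only vetted operations are executed."""
    for op, arg in procedure:
        if op not in SAFE_OPS:
            raise ValueError('forbidden operation: %s' % op)
        if op == 'prompt':
            print('?', arg)                    # ask the operator
        elif op == 'call':
            primitives[arg]()                  # only exported primitives
        elif op == 'end':
            return

central_lib = {'refine': [('prompt', 'refinement level'), ('end', None)]}
local_lib   = {'refine': [('prompt', 'level (forge variant)'), ('end', None)]}

def load(name):
    # overloading: the local library is searched first, so a site can
    # replace one procedure without a new link-edit of the whole system
    return {**central_lib, **local_lib}[name]

run(load('refine'), primitives={})
```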
7. Architecture design

Now we are ready to deduce an architecture from the previous techniques and requirements.
7.1. Identifying the subsystems

From the discussion of interactive graphics (Section 5), the abstract terminal has emerged as a first subsystem. On the other hand, while discussing data model flexibility (Section 4.5), we found that performance constraints forced us to internalize a significant part of the data model and primitives into an efficient black box. We denote this second subsystem the compiled model subsystem. The separation requirement (Section 6.3) has led to the definition of a third subsystem, the interactive procedures subsystem, in which data primitives and interactive graphics primitives are assembled into a man-with-computer dialog. This means that links must be established between this third subsystem and the other two; between the first two subsystems no link is needed. We now turn to the specification of these links (Fig. 8).

Fig. 8. The three parallel subsystems.
7.2. Call link

The most usual way to link two subsystems is the call link, in which one subsystem acts as a caller and the other one is called. For large subsystems, the call link exhibits several defects:
- it is an asymmetric link: the call link induces no constraint on the calling subsystem, but forces an explosion of the called subsystem into a library of called functions, making the analysis and coding much more complicated; for instance, temporary variables must be explicitly saved in some shared area before leaving each called function, so the usual temporary allocation built into the programming language cannot be used;
- a more philosophical defect of the call link is that the caller/called asymmetry induces a static master/slave programming tendency: in truly interactive input mode the computer should be the slave of the operator, while in output mode it should be the master; it is unnatural (but not impossible, e.g., syntax parsers in compilers) to make a caller/called link work in the opposite slave/master direction;
- the enumeration of the entries of a called subsystem does not have enough expressive power to specify a well-defined interface between the subsystems; information must be added to indicate side effects between two entries, restrictions upon the ordering of calls, etc.; this extra description does not follow naturally from the establishment of the link and is seldom complete, and forbidden calls can only be detected at run-time, at some checking overhead.

7.3. Parallel rendez-vous link

The natural symmetry between the subsystems may be recovered by considering them as parallel processes, connected by classical rendez-vous links. At each rendez-vous, a token (i.e. an integer, a real, a boolean or a string) is exchanged between a sender process and a receiver process. The actual token fluxes between processes at execution time are like the words of a language, which may be specified using a context-free formal grammar. There are two grammars, one for each inter-subsystem link. As soon as these grammars are defined, each subsystem can be programmed in an ordinary sequential style, each with its own main program. Each subsystem may be tested separately by recording and playing back the token fluxes with the missing processes. Thus, the parallel link gives a better interface specification than the call link. This kind of parallelism is a logical one, allowing the programmer of each subsystem to design and code in a classical sequential way without bothering about the other subsystems.
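The rendez-vous mechanism can be sketched in modern terms; here Python generators stand in for the paper's logical processes, and the token format is invented. Note how the driver can substitute a recorded reply for the missing terminal process, which is exactly the separate-testing property claimed above.

```python
# Sketch of a rendez-vous link: typed tokens exchanged between processes.
def interactive_procedure():
    """Driver process: sends requests, receives the operator's answer."""
    answer = yield ('terminal', 'ask', 'delete part A? (y/n)')
    if answer == 'y':
        yield ('model', 'delete', 'part A')

def run(procedure, terminal_reply='y'):
    # The token flux, i.e. the sequence of (receiver, op, arg) tuples,
    # is one word of the inter-subsystem grammar.
    proc = procedure()
    token = next(proc)
    while True:
        receiver, op, arg = token
        print('rendez-vous with %s: %s(%r)' % (receiver, op, arg))
        # a recorded reply replaces the missing terminal process
        reply = terminal_reply if receiver == 'terminal' else None
        try:
            token = proc.send(reply)
        except StopIteration:
            break

run(interactive_procedure)
```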
Fig. 9. Logical and physical parallelism. [Figure showing the three logical processors (abstract terminal, interactive procedures, compiled model) interleaved along the trip of the single physical processor.]
At the physical level, the parallelism vanishes: Fig. 9 shows the physical implementation of instructions (vertical lines) and token exchanges (horizontal lines) through classical (Fortran) calls. This paradox disappears once one observes that the interactive procedures, in particular the main procedure, are interpreted by an interpreter subroutine. In other words, the interpretation technique makes it easy to implement a parallel link.
8. Conclusion

We have tried to give an analytical view of an inter-discipline environment, namely the E3D system used at SNECMA. The 2D version of the system has been running since 1984, and the 3D version was made operational in 1986. The programming effort was about 150,000 source lines, justifying some methodological preparation. The system has been customized by three teams into local versions oriented towards stress analysis, thermal analysis and forging, respectively. These versions share the same load-module and several common procedure libraries, but each version also has its own procedure libraries. The parallel architecture works very well and significantly lowers the effort of maintenance and customization. The conjunction of a B-rep model with a dynamic attribute scheme has allowed a uniform programming of attribute manipulation. The main points of design uncertainty, due to the continuous evolution of the state of the art, seem to be the graphical output techniques to be employed and the connection between a mesh and the underlying geometry.

To conclude, we indicate what are, in our opinion, the software engineering trends for the development and maintenance of such systems:
- large systems are more easily mastered by identifying internal subsystems at various abstraction levels; we still need more automated tools in this area;
- in particular, the design, coding and debugging of man-machine interfaces should be more mechanized;
- there is a trend towards more and more sophisticated data models. For performance
considerations, those data models are nowadays merely implemented using the constructors of programming languages: arrays and records. There is a strong need for efficient languages for in-core data manipulation, with more powerful data constructors, for instance network data models or relational data models [4, 5].
References

[1] J. Encarnacao and E.G. Schlechtendahl, Computer Aided Design (Springer, Berlin, 1983).
[2] I.C. Braid, Notes on a geometric modeller, CAD Group Document No. 101, University of Cambridge, 1979.
[3] J.P. La Hargue and J. Raguideau, Quatrix: un système orienté vers le développement et la maintenance de bibliothèques scientifiques, Congrès AFCET Informatique, November 1981.
[4] R.L. Haskin and R.A. Lorie, On extending the functions of a relational data base system, ACM, New York, 1982.
[5] M. Lacroix and A. Pirotte, Comparison of data base interfaces for application programming, Inform. Systems 8 (3) (1983).