Computers & Industrial Engineering, Vol. 1, pp. 207-216. Pergamon Press, 1977. Printed in Great Britain
SIMULATION OF CONTINUOUS SYSTEM MODELS IN INDUSTRIAL ENGINEERING

BARNEY L. CAPEHART
Department of Industrial and Systems Engineering, University of Florida, FL, U.S.A.
(Received 7 January 1977)

Abstract--Many industrial engineering and operations research studies result in the creation of continuous dynamic models: that is, models consisting of a system of ordinary differential equations. Such models arise frequently in the analysis of economic, inventory and production control systems. The area of Industrial Dynamics (more recently named Systems Dynamics) is specifically oriented toward modeling and simulation of these dynamic systems. The purpose of this paper is to discuss briefly some of the current techniques for modeling and simulation of continuous systems and then to devote major attention to describing a number of difficulties encountered in the actual digital simulation. Finally, a number of specific suggestions will be offered to prevent the occurrence of most of these problems.

INTRODUCTION

The Industrial Dynamics technique created by Jay W. Forrester of MIT has been used extensively to model and simulate a wide variety of systems in Industrial Engineering [1]. This technique has been so widely used that Professor Forrester now calls his technique Systems Dynamics, which is certainly justified based on its applications in recent years to problems of the environment and energy [2]. Systems Dynamics and its programming language DYNAMO have been the initial exposure of many Industrial Engineers to the world of digital simulation of continuous systems. Such an introduction is usually effortless, since modeling in Systems Dynamics is a skill that can be acquired with a little practice, and the DYNAMO language handles the rather sticky problem of numerical integration in a way that does not frighten the beginner. The only problems that really face the beginner are the selection of the step size to be used in the "hidden" Euler integration scheme, and the selection of the total simulation time. Eventually, the problems attendant to the selection of the step size and simulation time become apparent, and this marks the transition time when a simulation practitioner either becomes a blind follower of the technique, afraid to ask what is really happening, or realizes that simulation is actually a rather complex subject requiring a thorough understanding of basic principles in order to guarantee successful results. These basic principles are primarily related to the modeling philosophy and are mainly associated with the theory of numerical integration of differential equations.

The purpose of this paper is to discuss briefly three common modeling techniques: the Systems Dynamics approach of Jay W. Forrester, the Energy Circuit Language approach of Howard T. Odum, and the State Variable approach. The main topic to be covered will then be a discussion of three simulation languages--DYNAMO, CSMP and the State Variable Method--followed by a detailed discussion of specific problems occurring in digital simulations. These problems, which involve accuracy and computer CPU time, are selection of the numerical integration method, selection of step size and selection of total simulation time. Results of comparative studies will be used to provide operational guidelines for making these selections.
Modeling philosophy
The starting place for a digital simulation is the modeling philosophy used to describe the system of interest. Currently, system modeling is much more of an art than a science. Although this art must be learned by experience, a set of guidelines or procedures is quite useful. Several "packaged philosophy" techniques are available to help in this respect. Three of these--Systems Dynamics, Energy Circuit Language, and State Variables--are described briefly below. Recently, Forrester presented a paper "Business structure, economic cycles, and national policy" in which he described the System Dynamics philosophy as follows [3].
A method new to economic analysis is now being applied to examining social and economic change at the national level. The "system dynamics" approach had previously been developed as a way to relate corporate policies to their resulting behavior, such as growth, employment stability, and changes in market share. A system dynamics model is very different from the more common econometric models by being drawn from a much broader information base, by representing more generally the nonlinear character of real life, by containing a deeper internal substructure of policies, by including social and psychological variables as well as the strictly economic variables, and by having the objective of choosing between alternative policies for achieving long-term improvement of the system rather than the objective of short-term forecasting as a basis for current decisions. In constructing a system dynamics model, one draws heavily on the knowledge of structure and policies already being used by managers, political leaders, and the public. From the available wealth of information, some already available in written and numerical form but much drawn from experiences and observations residing in people's heads, a computer simulation model is constructed. The computer model plays the roles of the separate parts of a social system according to knowledge about corresponding parts of the real system. A good system dynamics model can be related at every point in its structure and policies to corresponding knowledge about parts of the actual system it represents. In operation, the model should reproduce the same modes of behavior seen in the actual system and should exhibit the same kinds of successes, failures, and problems. From the model, which is a captive replica of the actual system, new insights emerge about causes of behavior and the effectiveness of alternative policies.
This description, together with several recent applications such as Limits to Growth, shows why the Systems Dynamics approach is one of the most popular philosophies in use [4]. A more recent entry into this field is the Energy Circuit Language, or Energese, approach of H. T. Odum of the University of Florida [5]. Odum has applied his technique to a wide variety of problems in the economic, environmental, and energy areas. In a recent paper, "The energy circuit language of H. T. Odum--a tutorial exposition", the method was described as follows [6].

With origins in the biological sciences, Energy Circuit Language (hereinafter referred to as Energese) has evolved into a complete modeling language for the description and delineation of systems. The attractiveness of Energese as a modeling tool arises from its ability to transform the myriad of type and form of real-world flows and storages found in natural and social systems into a singular set of energy components that can be analyzed, utilizing the laws and principles of thermodynamics. Such a methodology enhances model credibility by eliminating the need for arbitrary functions to link dissimilar flows and storages and provides a means to isolate erroneous model formulations that violate thermodynamic foundations. In addition, the pictorial mode of Energese facilitates model formulation and subsequent evaluation of static and dynamic system behavior. Static evaluation, useful in net energy analysis, can be accomplished by direct enumeration of model pathways on an Energese system diagram while dynamic response can be accomplished by translating the Energese diagram into a set of differential equations for subsequent analog or digital computer simulation. Although other modeling techniques and languages are capable of duplicating the results of Energese models, the forte of Energese is that it provides the modeler with a macro-perspective of systems that reveals fundamental arrangements of structure across and within the spectrum of system scale. From such a framework, the modeling task proceeds from a "top-down" rather than a "bottom-up" approach, thus reducing the expense and complications of including intricate and unnecessary detail.
The Energy Circuit Language approach is just beginning to gain acceptance in the scientific community, and it should see very extensive application in the future [7]. The last method, State Variables, is basically a mathematical structure approach and is used mainly by people with some knowledge of mathematical systems theory [8]. However, due to the extensive theoretical results built up over recent years by systems engineers, many problems of economics, environment and energy are cast in this manner. An excellent example of this approach is in large scale ecosystem modeling as described by Professor Bernard C. Patten from the University of Georgia [9]. This method will be described in more detail in the paper.

DIGITAL SIMULATION OF CONTINUOUS SYSTEMS

Once a continuous system model in the form of one or more differential equations has been obtained for a system, the next problem is how to solve the equations on a digital computer. This is not a trivial matter since the digital computer is a discrete device and has no numerical way to handle a derivative with its implied limit. Thus, the problem becomes one of replacing the original continuous system model with a discrete model (one or more difference equations) having equivalent behavior. In all but a few instances, the continuous model is replaced by an approximate discrete model. All of the standard numerical integration techniques, such as Euler, Modified Euler, Runge-Kutta, etc., produce approximate difference equations from the original differential equations. The accuracy of these discrete models is then dependent on their order of approximation to the original system, and the step size used. Thus, there will generally have to be a tradeoff between the selection of the numerical integration method and the step size. This tradeoff and other problems in the actual simulation are discussed later.
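As a minimal illustration, not taken from the paper, the short FORTRAN sketch below replaces the continuous model dx/dt = -x, x(0) = 1, with the Euler difference equation x(n+1) = x(n) + h(-x(n)) and prints it alongside the exact solution e^{-t}. The test model, step size and number of steps are arbitrary choices made here for illustration.

C     ILLUSTRATIVE SKETCH (NOT FROM THE PAPER): THE CONTINUOUS MODEL
C     DX/DT = -X, X(0) = 1, REPLACED BY THE EULER DIFFERENCE EQUATION
C     X(N+1) = X(N) + H*(-X(N)), COMPARED WITH THE EXACT EXP(-T).
      H = 0.1
      X = 1.0
      T = 0.0
      DO 10 N = 1, 20
      X = X + H*(-X)
      T = T + H
   10 WRITE(6,100) T, X, EXP(-T)
  100 FORMAT(1X, 'T =', F6.2, '  EULER =', F10.6, '  EXACT =', F10.6)
      STOP
      END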
Before leaving this section, it should be pointed out that if the simulation language DYNAMO is used with the Systems Dynamics approach, the problem of selecting the numerical integration technique does not occur. DYNAMO uses Euler integration as the basis for its statement structure; therefore, if DYNAMO is used, Euler integration will be used automatically. As mentioned earlier, this results in an easy introduction to simulation, but it will be adequately documented later in the paper that Euler integration has many problems associated with it.

Finally, it should be mentioned that the three simulation methods discussed in this section do not constitute an exhaustive set of all simulation languages and methodologies. Many other languages such as CSSL, GASP IV, etc. exist and have been described elsewhere [18]. The three that have been chosen represent the most frequently encountered techniques based on the author's realm of experience.

DYNAMO, CSMP, AND STATE VARIABLES--A COMPARISON
After a continuous system model has been obtained, the task of translating that system model into a computer model is facilitated by the use of a simulation language. Although, in general, the modeling philosophy and use of a given simulation language are not tied together, the use of Systems Dynamics and DYNAMO as a pair is almost taken for granted. However, it is easy to show that a more general simulation language such as CSMP has many superior features. At this point, it should at least be noted that selection of the simulation language is a variable under the control of the person accomplishing the simulation. With this consideration, a short discussion will be given of the three languages, DYNAMO, CSMP and State Variables.

DYNAMO
DYNAMO is a simulation language created by Phyllis Fox and Alexander Pugh at the Massachusetts Institute of Technology for the purpose of simulating certain types of dynamic information-feedback systems that can be described in terms of a set of finite difference equations [10]. DYNAMO is a general purpose simulation language, but its use has been primarily associated with Systems Dynamics models of large scale business and economic systems. The basic DYNAMO operational procedure is to use simple first-order integration of system flow rates into level variables.

The DYNAMO language is a rather simple programming language. The only subscripts allowed are the time related subscripts J, K and L. The time J implies the last time-step, K implies the current time-step calculation, and L implies the next time-step. The intervals between these times are called JK and KL. The length of these intervals is called DT. The DYNAMO model is no more than a set of difference equations that relate the variables to one another.

There are only four types of variables in the DYNAMO model. Constant variables have fixed numerical values that remain constant during a particular run. Level variables are quantities whose values at a given time are computed from their values at a previous time and their rates of change during the time interval between calculations. Auxiliary variables are algebraic combinations of levels and are calculated in a specified sequence. They are usually introduced to simplify the algebraic complexity of rate equations, and could actually be eliminated by substitution into the rate equations. Rate variables indicate a rate of flow or rate of change and have meaningful values over time intervals, whereas constants, levels, and auxiliaries are meaningful at specific instants of time.

The DYNAMO compiler determines the hierarchy of variables and the sequence of calculations. All variables are assigned beginning numerical values that depend either directly or indirectly on initial values specified by the user. The equation type must be specified, i.e. rate, level, etc. The specification of the type of equation automatically dictates the calculation sequence. At the beginning of the run, the constants are evaluated. Then, during each cycle of the run, the level equations are evaluated first, the auxiliary equations second, and the rate equations calculated last. All calculations are made for time K or interval KL using values at time J and interval JK. Once the calculations are complete, the time-step is incremented and the time L becomes K and the time K becomes J. Hence the DYNAMO model depends on the previous state of the system model.

The DYNAMO compiler provides the programmer with several functions that may not be
provided with many other compilers. The functions include:
1. Common functions
   (a) Exponential
   (b) Natural logarithm
   (c) Square root
   (d) Trigonometric
2. Random number functions
   (a) Uniformly distributed
   (b) Normally distributed
3. Delay functions
4. Pulse and step function
5. Ramp function
6. Sampler
7. Maximum or minimum function
8. Clipping or limiting function
9. Switch function
10. Table functions
11. Smoothing function

Another feature of DYNAMO is the boxcar train, which may or may not be cyclic and is commonly used for seasonal adjustment in the model. One advantage of using DYNAMO is the flexibility of the output package. The programmer can specify not only the DT interval but also the intervals of printout and plotted data. The plot routine does the scaling if upper and lower limits are not specified. There is no need to have any peripheral equipment other than the regular output device, because the print will include the requested plots.

CSMP
CSMP is a general purpose simulation language developed by IBM and is available on the IBM 1130, System/360 and System/370 computers [11]. It is a problem oriented language that allows simulation problems to be prepared directly from either block diagram representations or sets of ordinary differential equations. System/360 CSMP is an extremely powerful simulation tool in that it accepts FORTRAN statements, thereby allowing the user to provide special purpose functions or tasks related to his particular need. Users experienced in analog simulation techniques can adopt the CSMP language with very little effort.

CSMP uses centralized integration, which insures that all integrator outputs are computed simultaneously at the end of each cycle. Six different types of integration methods are available for selection, and it is even possible to specify a user provided integration method. The integration methods provided are rectangular (Euler), trapezoidal, Simpson, second order Adams, fourth order Runge-Kutta, and a fifth order Milne predictor-corrector. The Runge-Kutta and Milne methods are variable step size methods; the others are fixed step size methods.

The CSMP formulation of a dynamic system model is divided into three main parts that describe the computations to be performed before, during, and after each simulation run. These parts are called INITIAL, DYNAMIC, and TERMINAL. The INITIAL portion is optional and is used for computing initial conditions and parameters. The DYNAMIC portion is the heart of the simulation, and it specifies the system dynamics together with any other computations desired during the run. The TERMINAL portion is also optional and is used for computations desired after completion of each run. This allows a sequence of re-runs to be performed automatically.

To program a problem for solution using CSMP, the following three types of statements are prepared: structure statements, which describe the functional relations between the variables of the model; data statements, which assign numerical values to the parameters and initial conditions; and control statements, which specify program execution steps and data output. CSMP language statements are similar to FORTRAN statements in that they contain constants, variable names, operators, and functions. The standard FORTRAN operators are provided, as are a collection of functions such as integrators, zero order holds, lead-lag networks, and
limiters. The CSMP library contains all the standard functions found in analog computers; in addition, all of the FORTRAN library functions of the user can be utilized. Input and output are simplified by means of a free format for data entry and user-oriented input and output control statements. Output options include printing of variables in standard tabular form and print-plots. Convenient means are available for terminating a simulation run with a sequence of computations and logical tests. This allows iterative computations for problems such as search procedures for system parameter optimization.
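Since CSMP statements are compiled into FORTRAN, the idea of centralized integration can be sketched in plain FORTRAN. The fragment below is an illustrative analogue, not CSMP code and not from the paper: every derivative is evaluated from the current state before any state variable is advanced, which is what centralized integration guarantees on each cycle. The example system (an undamped oscillator), step size and step count are assumptions chosen only for illustration.

C     A ROUGH FORTRAN ANALOGUE (NOT CSMP CODE, NOT FROM THE PAPER) OF
C     CENTRALIZED INTEGRATION: ALL DERIVATIVES ARE EVALUATED FROM THE
C     CURRENT STATE BEFORE ANY STATE IS UPDATED.  THE EXAMPLE SYSTEM,
C     AN UNDAMPED OSCILLATOR, AND THE STEP SIZE ARE ARBITRARY CHOICES.
      DIMENSION X(2), XDOT(2)
      X(1) = 1.0
      X(2) = 0.0
      H = 0.01
      DO 20 N = 1, 1000
C     PHASE 1 - EVALUATE EVERY RATE FROM THE CURRENT LEVELS
      XDOT(1) = X(2)
      XDOT(2) = -X(1)
C     PHASE 2 - ADVANCE EVERY LEVEL TOGETHER (RECTANGULAR RULE)
      DO 10 I = 1, 2
   10 X(I) = X(I) + H*XDOT(I)
   20 CONTINUE
      WRITE(6,100) X(1), X(2)
  100 FORMAT(1X, 'X1 =', F10.6, '   X2 =', F10.6)
      STOP
      END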
The state variable method
The state variable method is a powerful technique for describing the behavior of continuous systems regardless of their origin [12]. Control engineers have used state variable models extensively for analysis and simulation of large scale linear and non-linear systems. Once systems have been described in terms of state variables, a general computer program can be used for simulation purposes. The heart of this program is a subroutine which evaluates the state transition matrix using a truncated Taylor series expansion. To simplify the simulation program, all inputs are described by state variables. This allows simulation of a higher order, unforced system of equations. This section presents an introduction to state variable models and discusses the details of computer simulation using state variables. The technique discussed is quite general and is very appealing because of its speed of computation and ease of use.

It is well known that an Nth order differential equation can be written as N first order equations. A set of linear differential equations in vector-matrix form

Ẋ(t) = A(t)X(t) + B(t)U(t)

is said to be in state variable form. By restricting the initial discussion to a linear, time invariant, unforced system described by Ẋ(t) = AX(t), the solution for X(t) can be expressed as X(t) = Φ(t, 0)X(0), where Φ(t, 0) is called the state transition matrix, or matrix exponential. The state transition matrix can be calculated using the Taylor series

Φ(t, 0) = e^{At} = I + At + (1/2!)(At)^2 + ...
Since all systems are represented as discrete data systems considered constant over each sample period, the state transition matrix can be represented as
Φ(T, 0) = e^{AT} = I + AT + (1/2!)(AT)^2 + ... + (1/N!)(AT)^N.

In most simulation cases, ten terms in this series provide satisfactory accuracy. At the core of a state variable simulation program is a subroutine which employs the truncated Taylor series expansion to evaluate the state transition matrix. A FORTRAN computer program for computing e^{AT} is given in Appendix I. Once e^{AT} has been calculated, the solution can be stepped out as follows:

X(T) = e^{AT} X(0)
X(2T) = e^{AT} X(T)
...
X[(N + 1)T] = e^{AT} X(NT).

Note that in this linear, time invariant case e^{AT} is only required to be computed one time. This fact accounts for the tremendous reduction in run time requirements for time invariant systems using the state variable method.
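To make the stepping procedure concrete, the following sketch, which is not part of the original paper, shows how the MXP subroutine listed in Appendix I might be driven for a hypothetical two-state system. The system matrix, sample time, number of series terms and number of steps are arbitrary illustrative choices, and the program is meant to be compiled together with the Appendix I subroutine.

C     ILLUSTRATIVE DRIVER (NOT FROM THE PAPER) FOR THE MXP SUBROUTINE
C     OF APPENDIX I.  EXAMPLE SYSTEM (ARBITRARY CHOICE):
C     XDOT1 = X2,  XDOT2 = -2*X1 - 3*X2  (EIGENVALUES -1 AND -2)
      DIMENSION A(40,40), PHI(40,40), X(40), XNEW(40)
      N = 2
      T = 0.05
      NTERM = 12
      DO 5 I = 1,N
      DO 5 J = 1,N
    5 A(I,J) = 0.0
      A(1,2) = 1.0
      A(2,1) = -2.0
      A(2,2) = -3.0
      X(1) = 1.0
      X(2) = 0.0
C     THE STATE TRANSITION MATRIX IS COMPUTED ONLY ONCE
      CALL MXP(A, N, T, PHI, NTERM)
C     STEP THE SOLUTION OUT:  X((K+1)T) = PHI * X(KT)
      DO 20 K = 1,100
      DO 10 I = 1,N
      XNEW(I) = 0.0
      DO 10 J = 1,N
   10 XNEW(I) = XNEW(I) + PHI(I,J)*X(J)
      DO 15 I = 1,N
   15 X(I) = XNEW(I)
      TIME = K*T
   20 WRITE(6,100) TIME, X(1), X(2)
  100 FORMAT(1X, 'TIME =', F8.3, '  X1 =', F10.6, '  X2 =', F10.6)
      STOP
      END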
These same results can be obtained for linear, time invariant systems with external inputs in most cases. The state equations have the form

Ẋ(t) = AX(t) + BU(t).
The solution is given by

X(t) = e^{At} X(0) + ∫_0^t e^{A(t-τ)} B U(τ) dτ.
The convolution integral can be bypassed by describing U(t) with additional state variables, thereby increasing the dimension of the original system. This is possible whenever the input is a function or sum of functions occurring in the solutions of linear differential equations. Once the input has been described by state variables, this becomes a higher dimensional, unforced system of the form
Ẋ(t) = FX(t).

This solution now follows from previous work as X(t) = e^{Ft} X(0), and the previous results are applicable.
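As a brief worked illustration, not taken from the original paper, consider the simplest case of a constant step input U(t) = U0. Adjoining the input to the state vector as Z(t) = [X(t), U(t)]' with dU(t)/dt = 0 gives the unforced system

Ż(t) = FZ(t),   F = [ A  B ]
                    [ 0  0 ],

so that Z(t) = e^{Ft} Z(0) with Z(0) = [X(0), U0]', and the matrix exponential machinery above applies unchanged. A decaying exponential input e^{-at} would similarly contribute one additional state governed by dU(t)/dt = -aU(t).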
Conclusions regarding the use of these three simulation languages will be given later, since the results are based on an understanding of the simulation problems that can occur and the steps required to prevent difficulties.

Simulation difficulties
The most difficult problems encountered in performing a digital simulation are associated with selection of the numerical integration method, selection of the step size, and determination of the total time for the simulation. Each of these problem areas will be discussed in detail, and specific recommendations will be made, in some cases as a result of case study comparison.
Integration methods
The selection of any integration method is determined by considering the trade-off between accuracy, stability and speed. The accuracy is dependent on the amount of truncation and round-off errors present when using a particular method. Truncation error results from the fact that numerical integration is a discrete process using a finite number of equally spaced points rather than a series with an infinite number of terms. Accuracy can be improved by using more points as a past history or by making the integration step interval smaller. Either method achieves increased accuracy at the cost of speed. Round-off error occurs when a sequence of numbers representing the integrated function is generated. Each number is limited by the size of the computer word being used. There are methods for determining an upper bound on the magnitude of round-off errors expected.

The stability of an integration method is defined as the ability to control propagated errors so that they do not increase as time passes. Propagated errors are the sum of both truncation and round-off errors over all integration steps calculated since the problem started. The amount of propagated error is usually the most important criterion in the choice of any integration method for simulation. Fortunately, there is a comparatively simple procedure for determining whether a particular integration method can be used with a particular simulation problem. This procedure will be explained in the section on numerical instability.

Speed depends on the complexity of the method used and the size of the integration interval. Real-time simulation requires fitting all computations for one cycle within the integration interval while still keeping the desired accuracy. Off-line or batch simulation problems usually do not have hard and fast speed requirements. In most batch simulation runs some subjective criterion of "reasonable accuracy" with "reasonable speed" is applied.

One criterion for selecting a method is the order of the method. The order of a numerical integration technique is defined as the highest order term in the Taylor series expansion of the true solution that the approximate solution matches. Thus, the Fourth Order Runge-Kutta method would produce an approximate solution matching the true solution out through the fourth order term. For single step methods of the Runge-Kutta family, greatly increased accuracy can be obtained for a fixed step size by increasing the order of the method up to a limit of fifth order. Based on case studies involving the solution of smooth models, it is easily concluded that for a fixed step size, an increased order method (up to fifth) produces the most efficient solution [13]. For multistep predictor-corrector techniques, a similar conclusion can be reached. Limited theoretical studies and actual practice have shown two iterations of the
corrector in a predictor-corrector pair to provide the optimum error reduction. Thus, with the limitation of fixed step size and a limit of two iterations of the corrector, the conclusion is that the higher order methods provide more efficient solutions. One step (Runge-Kutta) and multistep (predictor-corrector) methods each have advantages and disadvantages. These are summarized briefly as follows:
Runge-Kutta methods 1. Since they do not use information from previous calculated points, Runge-Kutta methods are self-starting. 2. For the same reason, however, they require several function evaluations and are therefore time-consuming. 3. Being self-starting, they permit an easy change in the step size. 4. They provide no easily obtainable information about truncation error.
Predictor-corrector methods
The characteristics are complementary to those of Runge-Kutta methods.
1. Since they do use information about prior points, they are not self-starting.
2. They substitute information about prior points for repeated function evaluation and are therefore more efficient.
3. Except in special circumstances that are usually not of practical usefulness, a change in step size requires a temporary reversion to a Runge-Kutta method.
4. A good estimate of the truncation error flows naturally out of the computation.

The complementary nature of the two types of methods suggests immediately that a combination of the two will prove useful. The following course of action is recommended; a sketch of the combined strategy is given after the list.
1. Start the solution with a Runge-Kutta method to find the first few solution points.
2. Use a predictor-corrector pair to compute succeeding solution points.
3. If more than two iterations of the corrector are needed to obtain the desired accuracy or if the truncation error is too large, decrease the step size (see 4 below). If, on the other hand, the truncation error is exceedingly small, the step size can be increased.
4. To change the step size, consider the last solution point that was sufficiently accurate to be an initial point. Restart the solution from that point using a Runge-Kutta method as in 1 above.
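A minimal sketch of this combined strategy, not taken from the paper, is given below for the single test equation dy/dt = -y with y(0) = 1. One fourth order Runge-Kutta step starts the solution, after which a second order Adams-Bashforth predictor and trapezoidal (Adams-Moulton) corrector advance it, with two corrector iterations per step as recommended above. The test equation, step size and number of steps are illustrative assumptions, and the sketch omits the step size adjustment logic of items 3 and 4.

C     SKETCH (NOT FROM THE PAPER) OF THE RECOMMENDED COMBINATION:
C     ONE RUNGE-KUTTA STEP TO START, THEN A SECOND ORDER ADAMS
C     PREDICTOR-CORRECTOR WITH TWO CORRECTOR ITERATIONS PER STEP.
C     TEST PROBLEM (ARBITRARY): DY/DT = -Y, Y(0) = 1, EXACT EXP(-T).
      F(T, Y) = -Y
      H = 0.1
      T0 = 0.0
      Y0 = 1.0
C     ONE FOURTH ORDER RUNGE-KUTTA STEP PROVIDES THE STARTING POINT
      RK1 = F(T0, Y0)
      RK2 = F(T0 + 0.5*H, Y0 + 0.5*H*RK1)
      RK3 = F(T0 + 0.5*H, Y0 + 0.5*H*RK2)
      RK4 = F(T0 + H, Y0 + H*RK3)
      Y1 = Y0 + H*(RK1 + 2.0*RK2 + 2.0*RK3 + RK4)/6.0
      T1 = T0 + H
      FOLD = F(T0, Y0)
      FNEW = F(T1, Y1)
C     ADAMS-BASHFORTH PREDICTOR, TRAPEZOIDAL CORRECTOR (TWO PASSES)
      DO 10 N = 1, 49
      T2 = T1 + H
      YP = Y1 + 0.5*H*(3.0*FNEW - FOLD)
      DO 5 ITER = 1, 2
    5 YP = Y1 + 0.5*H*(FNEW + F(T2, YP))
      WRITE(6,100) T2, YP, EXP(-T2)
      FOLD = FNEW
      FNEW = F(T2, YP)
      Y1 = YP
   10 T1 = T2
  100 FORMAT(1X, F8.3, 2F12.6)
      STOP
      END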
Step size
Assuming that the method has already been selected, the next problem is the selection of the step size. Harvard Lomax of NASA has suggested that the error in any numerical integration method, as it is given by the lowest order, non-vanishing truncation error terms, loses its significance when these terms exceed about one tenth [14]. Thus, Lomax says to choose the step size h such that |hλ_L| < 0.1, where λ_L is the largest eigenvalue of the system model. If the largest eigenvalue of a system, or an eigenvalue bound for a non-linear system, is known, the step size is easily selected. However, even for linear systems of order greater than three, the determination of the eigenvalues is not an easy task. It has been proposed that a technique from analog computer simulation theory be used to help select the integration step size [15]. This technique requires that a time scaled analog computer diagram be drawn. This diagram then produces an estimate for the speed of response or time constant of the fastest integrator. The reciprocal of this number gives an estimate of the largest eigenvalue and thus the required step size.

The largest value that the step size can be for a given numerical integration technique is controlled by the occurrence of numerical instability [16]. Roundoff and truncation errors are commonly recognized problems in digital simulation, but numerical instability of the integration method used is another problem which is not so commonly understood. The truncated Taylor series of the state variable method provides an introduction to the problem of numerical instability and allows a unified approach to analyzing and preventing the occurrence of numerical instability. Consider the Euler method in state variable form as

X_{n+1} = (I + Ah)X_n.

For simplicity, consider the scalar case first, with the results being generalized to the vector case. The Euler method represents a first order difference equation in the scalar case, X_{n+1} = (1 + ah)X_n, and for
the solution to remain bounded, |1 + ah| ≤ 1. That is, the single root of the characteristic equation must lie within the unit circle in what we could call the "eigenvalue plane". In order to extend the results back to the vector case, let the coefficient a be complex, i.e. a = σ + jω. The result is

|1 + (σ + jω)h| ≤ 1
|(1 + σh) + j(ωh)| ≤ 1
(1 + σh)^2 + (ωh)^2 ≤ 1.

This is the equation of a circle in the eigenvalue plane, and in order to preserve the numerical stability of the solution of the Euler method, the value of the increment, or step size h, must be kept low enough so that the root lies within this circle. For a real value of a, h must clearly be selected so that |ah| ≤ 2, or

h ≤ 2/|a|.

For systems with large eigenvalues, this is an extremely severe restriction. In a similar manner the stability regions of many other standard numerical integration techniques can be analyzed and bounds on the value of the increment or sample time h can be determined. It should be noted that selecting h using the Lomax rule will guarantee numerical stability of any method, since the Euler value of 2/|λ_L| is the lowest bound.
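As a small illustration, not from the paper, the program below applies the Euler method to dx/dt = ax with a = -10, for which the stability bound is h ≤ 2/|a| = 0.2 and the Lomax accuracy rule would suggest h ≤ 0.1/|a| = 0.01. A step size inside the bound decays toward zero, while one outside it oscillates with growing amplitude; the problem and the two step sizes are arbitrary choices for illustration.

C     ILLUSTRATIVE SKETCH (NOT FROM THE PAPER): EULER STABILITY BOUND
C     FOR DX/DT = A*X WITH A = -10, SO THAT H MUST NOT EXCEED 0.2.
C     H = 0.05 DECAYS TOWARD ZERO; H = 0.25 OSCILLATES AND GROWS.
      A = -10.0
      DO 20 J = 1, 2
      H = 0.05
      IF (J .EQ. 2) H = 0.25
      X = 1.0
      DO 10 N = 1, 40
   10 X = (1.0 + A*H)*X
   20 WRITE(6,100) H, X
  100 FORMAT(1X, 'H =', F6.2, '   X AFTER 40 STEPS =', E14.5)
      STOP
      END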
Total time
Once the step size has been determined, the total simulation time must be selected. The consideration is again based on the speed of the system response. However, unlike the condition for step size selection based on the largest eigenvalue (or smallest time constant), in this case the smallest eigenvalue (or largest time constant) must be examined. A long standing rule of thumb regarding exponentials is that an exponential decays to approximately zero in a time period equal to five time constants. Thus, for systems involving decaying exponentials, a total simulation time of five (or more if severe accuracy demands are made) times the longest time constant will suffice. For example, a system whose slowest eigenvalue is -0.1, and whose longest time constant is therefore 10 time units, calls for a total simulation time of at least 50 time units. The question of computer CPU time has been addressed under the topic of speed and efficiency of a given method. The recommendation made for combining Runge-Kutta and Predictor-Corrector techniques results in the desired accuracy with minimum computer CPU time.

SUMMARY OF RESULTS
Previous sections of this paper have discussed three particular simulation language approaches and have enumerated a number of problems that one can normally encounter during a digital simulation. Specific suggestions were made to alleviate the problems described. Combining these recommendations with the use of the three simulation languages leads to the following conclusions:

DYNAMO
DYNAMO represents a very simple and appealing method of modeling and simulation. The modeling aspect of DYNAMO is worthy of praise, but the first-order (Euler) integration technique has numerous disadvantages. First, the Euler integration method requires extremely small increments or step sizes for numerical stability and accuracy. The requirement for accuracy may be obviated, but numerical stability is imperative. Second, the Euler method is extremely inefficient and can result in a tremendous amount of wasted computer time. The classic example here is a stiff system; i.e. a system with one very short time constant and one very long time constant. The short time constant dictates a very small increment or sample time for accuracy and numerical stability, and the long time constant dictates a lengthy time span for the simulation. For example, time constants of 0.01 and 10 time units force a step size no larger than about 0.02 over a run of roughly 50 time units, or on the order of 2500 Euler steps. In this situation, the Euler method is worthless. The DT selection can be a tremendous problem since poor accuracy may provide a useless solution even though it may
not be "obviously" incorrect. If DYNAMO is to be used alone as a simulation tool, it is necessary to determine the validity of the numerical results by making a series of runs with reduced values of DT. CSMP
CSMP is an extremely desirable simulation tool due to its ease of use, widespread availability, and flexibility in selection of integration methods and production of iterative simulation runs. CSMP does not aid in the modeling problem as DYNAMO does, but it is significantly better than DYNAMO computationally. For systems which have already been described in terms of differential equations, the CSMP language is almost always preferred. DYNAMO has some special features which are highly desirable for certain types of industrial models, such as boxcar functions, but in general CSMP has a much broader collection of special functions, and additionally any FORTRAN-written functions can be used.

Computationally, the speed of a DYNAMO run can be equalled with CSMP if the rectangular (Euler) integration method is selected. In general, the best accuracy and run time combinations are achieved with the variable step methods RKS (fourth order Runge-Kutta with Simpson's rule used for error estimation) and Milne (fifth order Milne predictor-corrector). Unless some information on system time constants or eigenvalues is available, there is no way to conclude that a specific fixed step size method should be used. For a given accuracy requirement, the Milne integration method is usually the most efficient, and therefore it is recommended for general use. For linear time invariant systems, it is suggested that a State Variable Method integration program be added to CSMP.

The state variable method
The state variable method as a simulation tool has appeal only in the case of linear, time invariant systems. However, this is frequently the case, and it is recommended that a matrix exponential subroutine be made available on every computing system used for digital simulation. For the linear, time invariant case, the state variable method surpasses almost all other numerical integration techniques, and it is clearly superior to the Euler method of DYNAMO and any of the six standard integration methods of CSMP. For linear, time variable or non-linear systems, the power of the state variable method disappears.

DYNAMO models which are linear and time invariant can easily be integrated using the state variable method. In this case, DYNAMO would be used as the vehicle for creating the simulation model, and the state variable method would be the vehicle for the digital computation involved. Regarding CSMP simulation of linear, time invariant systems, a state variable integration routine has been added to CSMP [17]. With this addition, the state variable method becomes an internal part of CSMP which is only used in the appropriate case.

CONCLUSION
Simulation of continuous system models is becoming a widely used tool for the analysis and design of systems commonly encountered in Industrial Engineering. In fact, a recent survey showed that simulation in general was the most widely used operations research method by the 1000 largest U.S. corporations [19]. In spite of the apparent simplicity of simulation methodology, there are many areas for significant difficulties to appear. This paper has attempted to describe a number of these difficulties and has discussed the circumstances under which they may occur. These warnings are then followed by specific advice and recommendations to prevent future occurrence of some of the problems. It is hoped that this advice, based on the author's experience, will be helpful to other simulation practitioners.

REFERENCES
1. J. W. Forrester, Industrial Dynamics. MIT Press, Cambridge, Mass. (1961).
2. J. W. Forrester, World Dynamics. Wright-Allen Press (1971).
3. J. W. Forrester, Business structure, economic cycles, and national policy. Presented at the 17th Annual Meeting of the National Association of Business Economists (1975).
4. D. L. Meadows et al., The Limits to Growth. Potomac Associates Book (1972).
5. H. T. Odum and E. C. Odum, Energy Basis for Man and Nature. McGraw-Hill, New York (1976).
6. B. R. Scdlick and B. L. Capehart, The energy circuit language of H. T. Odum--a tutorial introduction. Proc. 7th Annual Pittsburgh Modeling and Simulation Conference (1976).
7. H. T. Odum et al., Net energy analysis of alternatives for the United States. In Middle and Long Term Energy Policies and Alternatives, Proceedings of the Hearings before the House Subcommittee on Energy and Power, March 25 and 26, 1976. Serial No. 94-63, U.S. Government Printing Office.
8. B. L. Capehart, Computer analysis of linear systems. IEEE Transactions on Education (August 1970).
9. B. C. Patten (Editor), Systems Analysis and Simulation in Ecology, Vol. I. Academic Press, New York (1971).
10. A. L. Pugh, DYNAMO User's Manual. MIT Press, Cambridge, Mass. (1963).
11. System/360 Continuous System Modeling Program User's Manual. IBM Publication GH20-0367-4 (1972).
12. P. De Russo and C. Close, State Variables for Engineers. Wiley, New York (1965).
13. B. L. Capehart and R. M. Strong, Case study in digital simulation. Proc. 7th Space Congress, Cocoa Beach, FL (1970).
14. H. Lomax, An operational unification of finite difference methods for the numerical integration of ordinary differential equations. NASA Report R-262 (1967).
15. B. L. Capehart and F. O. Simons, An analog computer technique for estimating sample times for digital simulation. Proc. IEEE Region III Conference (1972).
16. B. L. Capehart and D. R. Miller, Numerical instability in the computer solution of state equations. Proc. 2nd Southeastern Symposium on System Theory (1970).
17. B. L. Capehart and D. R. Schneider, A state variable integration routine for IBM System/360 CSMP. Simulation (April 1972).
18. R. E. Shannon, Systems Simulation--The Art and Science. Prentice-Hall, Englewood Cliffs, N.J. (1975).
19. F. C. Weston, O.R. techniques relevant to corporate planning function practices, an investigative look. Presented at 39th National Meeting, Operations Research Society of America, Operations Research Bulletin, Vol. 19, Suppl. 2 (Spring 1971).

APPENDIX I
THE MATRIX EXPONENTIAL SUBROUTINE

The subroutine presented here is called MXP, and is used to evaluate the function Φ(T, 0) = e^{AT} using a Taylor series approximation. The number of terms used in the series
evaluation is specified as an input parameter. The calling statement for MXP is

CALL MXP (A, N, T, PHI, NTERM),

where A is the system matrix from the equation Ẋ = AX, N is the order of A up to a maximum of 40, T is the sample time, PHI is the resultant state transition matrix, and NTERM is the number of terms used in the series evaluation of PHI. Sufficient accuracy is obtained with values for NTERM of 10 to 15 in most cases.
      SUBROUTINE MXP(A,N,T,PHI,NTERM)
      DIMENSION A(40,40), PHI(40,40)
      DIMENSION WORK(40,40), TOIL(40)
C     SCALE A BY THE SAMPLE TIME T (A IS OVERWRITTEN BY A*T) AND
C     INITIALIZE PHI AND WORK TO THE IDENTITY MATRIX, TOIL TO ZERO
      DO 1 I = 1,N
      DO 2 J = 1,N
      PHI(I,J) = 0.0
      WORK(I,J) = 0.0
    2 A(I,J) = A(I,J)*T
      PHI(I,I) = 1.0
      TOIL(I) = 0.0
    1 WORK(I,I) = 1.0
C     ACCUMULATE NTERM TERMS OF THE TAYLOR SERIES FOR EXP(A*T).
C     WORK HOLDS (A*T)**ITERM DIVIDED BY ITERM FACTORIAL, AND PHI
C     HOLDS THE PARTIAL SUM OF THE SERIES
      DO 30 ITERM = 1,NTERM
      FITERM = ITERM
      DO 21 I = 1,N
C     FORM ROW I OF WORK*(A*T) IN TOIL
      DO 20 J = 1,N
      DO 20 K = 1,N
   20 TOIL(J) = TOIL(J) + WORK(I,K)*A(K,J)
      DO 21 J = 1,N
      WORK(I,J) = TOIL(J)/FITERM
      TOIL(J) = 0.0
      PHI(I,J) = PHI(I,J) + WORK(I,J)
   21 CONTINUE
   30 CONTINUE
      RETURN
      END