
Chapter 11

VISTAS IN SIMULATION

Intelligence . . . is the faculty of manufacturing artificial objects, especially tools to make tools.

-HENRI BERGSON

11.1. Perspectives

Chapter 1 of this book presented a number of classification schemes for models. It was noted that the dynamic, stochastic simulation model constitutes the most general symbolic model category. In particular, it was argued that the desire to achieve extensive detail in a model usually requires that a time-dependent system be represented not only as a dynamic, but also as a stochastic, simulation model. This Uncertainty Principle of Modeling implies that the dynamic, stochastic simulation becomes a symbolic mimicry of a time series whose statistical properties shall not generally be known in advance. In order that an investigator, or systems analyst, may attend to the incertitudes extant in nature and may find expression for these in a symbolic model, the lengthy Chapter 2 and the somewhat condensed Chapter 3 have included discussions of the conditions under which a host of random variables and stochastic processes may arise. Thus, concepts such as the transformation of random variables, the Central Limit Theorem, and the Poisson process may each be invoked not only to provide a framework for specifying random behavior in a stochastic simulation, but also to indicate efficient methods by which random variables may be readily generated from virtually any applicable distribution function or stochastic process.


The brief discussion of the Monte Carlo method in Chapter 4 noted the value of statistical sampling experiments in ascertaining the distributional properties of random variables which arise from experimentation with random or pseudorandom numbers. From the subsequent chapter on modeling and systems analysis, it should be apparent that the dynamic, stochastic simulation model is a generalized Monte Carlo experiment designed to reproduce (in a statistical sense) a time series of essentially unknown (or, at best, partially known) behavior. The borderline between the more classical sampling experiments and the more recently developed simulation models becomes difficult to define precisely. Indeed, a PERT (Program Evaluation and Review Technique) model, of the network of interdependent activities that must be completed in order for the overall project of interest to be considered finished, provides an example of a model which can be manipulated either as a rather static Monte Carlo sampling experiment or as a quite dynamic, stochastic simulation model. In the first instance, random activity times can be generated for every activity, then the standard techniques of the Critical Path Analysis (CPA) method invoked in order to ascertain the project completion time (cf. Lockyer, 1967); alternatively, a simular clockworks may be constructed, and the entire project may be simulated, from project initiation to completion. It is this difference in orientation, however, that usually distinguishes between the Monte Carlo sampling experiment and the dynamic, stochastic simulation model.
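To make the first of these two orientations concrete, the following sketch (in Python, with a wholly hypothetical four-activity network and invented triangular duration estimates) treats a PERT model as a static Monte Carlo sampling experiment: every activity time is sampled at random, the forward pass of the Critical Path Analysis method is applied, and the project completion time is recorded for each trial.

    import random

    # Hypothetical four-activity project network.  Each entry lists the
    # activity's predecessors and a (low, mode, high) triangular guess at
    # its duration; names and figures are illustrative only.  Entries are
    # given in topological order, which Python dicts preserve.
    ACTIVITIES = {
        "design":  ([], (4.0, 6.0, 10.0)),
        "build_a": (["design"], (3.0, 5.0, 9.0)),
        "build_b": (["design"], (2.0, 4.0, 7.0)),
        "test":    (["build_a", "build_b"], (1.0, 2.0, 4.0)),
    }

    def completion_time(rng):
        # One Monte Carlo trial: sample every activity time, then apply
        # the forward pass of the Critical Path Analysis method.
        finish = {}
        for name, (preds, (low, mode, high)) in ACTIVITIES.items():
            start = max((finish[p] for p in preds), default=0.0)
            finish[name] = start + rng.triangular(low, high, mode)
        return max(finish.values())

    rng = random.Random(1970)            # the seed of the sampling experiment
    trials = [completion_time(rng) for _ in range(10000)]
    print("estimated mean completion time:", sum(trials) / len(trials))

No simular clock appears here: the trials are independent static samples, which is precisely what distinguishes this treatment from a dynamic simulation of the same project.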

Since the dynamic, stochastic simulation model is so general, modeling will probably become increasingly concerned with randomness, and model responses will need to be analyzed accordingly. Indeed, as this book has indicated, statistical techniques are directly applicable to the responses arising from a stochastic simulation model: the importance of statistical comparisons, factorial experimental designs, multiple regression analyses, sequential estimation and experimentation, multivariate analyses, response surface methodology, and time series analysis has been discussed in the preceding five chapters.

In this chapter, note will be taken of several activities currently related to simulation. Indications of the applicability of the simulation methodology to the "solution" of certain biological, environmental, and societal problems will be presented in the next section. The relationship of simulation to gaming will be explored in Section 11.3.

The dynamic, stochastic simulation model can also be viewed from another important perspective. The analog computer, it has been realized for some time, constitutes a parallel processor, in that several physical (or electrical) parameters can be traced simultaneously, with instantaneous feedback available for the purpose of influencing other parameters as time evolves. On the other hand, the electronic digital computer has appeared constrained in this regard, being essentially a sequential processor that handles one matter (i.e., one variable) at a time. However, the software-defined simular clock appears to be a compromise allowing for pseudoparallel processing by the digital computer; although the digital computer proceeds sequentially from task to task in actual time, the clocking mechanism of the simulation model's executive routine holds simular time constant until all necessary feedback effects have been accounted for (as the sketch below illustrates). Thus, the dynamic simulation model, when implemented on a digital computer, provides a computational capability not otherwise readily available on such machines.

The use of a simulation model as a pseudoparallel processor has repercussions in a number of intellectual endeavors. Especially pertinent are the current efforts in heuristics, self-organizing systems, and artificial intelligence. These efforts will be discussed in Section 11.4. In conclusion, some indication of the probable evolutions that ad hoc simulation languages will undergo will be presented in the closing section.
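A minimal sketch may help picture this clocking mechanism. In the fragment below (hypothetical throughout; the routine names and timings are invented), the digital computer services events strictly one at a time, yet simular time is held constant until every event booked for the current instant, including feedback events scheduled by those events themselves, has been serviced.

    import heapq

    event_list = []          # heap of (simular_time, sequence, routine)
    sequence = 0

    def schedule(t, routine):
        # A sequence number breaks ties, keeping same-time events in
        # first-scheduled order.
        global sequence
        heapq.heappush(event_list, (t, sequence, routine))
        sequence += 1

    def run():
        while event_list:
            now = event_list[0][0]
            # Hold simular time constant at 'now' until feedback settles.
            while event_list and event_list[0][0] == now:
                _, _, routine = heapq.heappop(event_list)
                routine(now)

    def valve_opens(now):
        print("t =", now, ": valve opens")
        schedule(now, pressure_adjusts)    # instantaneous feedback, same t

    def pressure_adjusts(now):
        print("t =", now, ": pressure adjusts before the clock advances")
        if now < 3.0:
            schedule(now + 1.0, valve_opens)

    schedule(0.0, valve_opens)
    run()

The inner loop is the pseudoparallelism of the text: in actual time the machine is sequential, but in simular time the valve and the pressure change "simultaneously."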

11.2. The View Toward Applications

The major theses of this book have been expressed in terms of an Uncertainty Principle of Modeling and of the consequent generality of models of the dynamic, stochastic class. A number of our currently pressing biological, environmental, and societal problems are being recognized in their proper perspective as systemic problems; that simulation will play an immensely important role in their dissolution seems readily apparent. However, these applications are not without their attendant difficulties. In particular, ecological modeling seems fraught with impediments concerning the appropriate time span over which relevant models should mime biological development. One must select between models that are capable of predicting the immediate effects of a pollution discharge into a stream on the aquatic life therein and those that indicate the long-term, essentially evolutionary, effects of continuous resource development on the diversity of species. Whether models of biological ecosystems require stochasticity is perhaps, among many, a moot question.

Watt (1970) observes that the ecologist will need to become more attuned to the role of probability in our descriptive models of complicated ecological systems. Nonetheless, any desire to incorporate greater detail in such models will inevitably lead to the requirement for stochasticity in these modeling efforts as well. Effective ecological models will, in all probability, take cognizance of the Uncertainty Principle of Modeling. [See Mihram (1972b).] (A minimal sketch of such stochastic effects appears at the end of this section.)

Another prime conceptual difficulty in the application of simulation methodology to the modeling of biological, environmental, and societal systems is an apparent lack of agreement regarding the goals these systems are, or should be, striving to achieve. One may recall that a model is constructed in an effort to evaluate the modeled system on the basis of its achievement of certain well-defined goals under alternative environmental conditions. Should an ecosystem be maintained forever in its status quo, or does there exist an appropriate, requisite diversity of species that will allow the system to revive itself, even in the face of geological, meteorological, or certain astronomical disasters? Similarly, models of urban and societal systems require carefully defined goals. Forrester (1969) has indicated some of the essential elements in the life cycle of a city, yet this modeling activity apparently was not undertaken in sufficient detail for stochastic effects to be incorporated into the resultant model of urban development.

The success with which business, industrial, and military organizations have been simulated may probably be attributed to our ability to discern the goals of these systems and to make them explicit before undertaking the modeling effort. Stochasticity could often be readily incorporated in these models, because data could frequently be accumulated with ease from managers at the appropriate hierarchical levels within the pertinent organization. The construction of models of societal and ecological organizations will therefore require not only well-defined system goals, but also methods by which statistical data can be efficiently accumulated for use within the symbolic simulation models.

Of interest in this regard is the recent proposal of Simulation Councils, Inc. (see McLeod, 1969). A "World Simulation" model is suggested, to consist of a hierarchy of models that can be used to predict rapidly the effects that social, economic, political, and military forces have in their interactions throughout the world. Whether such an effort will be undertaken remains an important vista in simulation. Once begun, the effort will likely need to incorporate models of biological ecosystems as well. In addition, considerable emphasis will need to be placed on establishing the credibility of the component modules via proper verification and validation procedures.
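As a sketch of the kind of stochasticity involved, consider the deliberately tiny birth-death model below; its rates and initial population are invented, and its only purpose is to show that replicated encounters, distinguished solely by their seeds, scatter about any single deterministic trend, exactly the situation the Uncertainty Principle of Modeling anticipates.

    import random

    def population_after(rng, n0=50, birth_rate=0.10, death_rate=0.08, steps=100):
        # Each animal independently reproduces or dies within a time step,
        # so every encounter is one realization of a stochastic process.
        n = n0
        for _ in range(steps):
            births = sum(rng.random() < birth_rate for _ in range(n))
            deaths = sum(rng.random() < death_rate for _ in range(n))
            n = max(n + births - deaths, 0)
        return n

    # Ten independently seeded encounters of the same model.
    print([population_after(random.Random(seed)) for seed in range(10)])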


11.3. A Glance at Gaming Models

Another simulation-related activity that is likely to become increasingly important is gaming. A gaming model can be either static or dynamic, either physical or symbolic, either deterministic or stochastic. Though gaming models may be subjected to the same categorizations as other models, the more usual conception is that of a dynamic simulation model with the additional feature that the model provides decision points at which one or more human beings interject parametrized decisions. The model then continues as a simulation model, and responses from it can often be analyzed to infer the effects of alternative decision choices on the part of the participants. Thus, a gaming model is a man-machine simulation.

The earliest known gaming models were used as military training devices. Officers could select alternative troop dispositions in accordance with a book of rules; judges or umpires could ensure that the rules were observed, and could evaluate the officers' performance. In passing, one might note that chess is a simplified gaming model of this genre.

Evans et al. (1967) note that the more modern symbolic gaming model consists of three phases: (a) a situation, or map, phase, during which the book of rules and a "fact book" (which may include information regarding previous decisions and their effects) are made available to each player; (b) a decision phase, during which parametrized strategies and tactical decisions are specified; and (c) a computation phase, during which the competitive situation is simulated, including the possibility of the inclusion of randomly occurring phenomena.
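A skeleton of this three-phase cycle can be sketched in a few lines; the players, their pricing decisions, and the toy market computation below are all invented, only the situation-decision-computation structure being taken from the description above.

    import random

    class Player:
        # A trivially simple participant: undercut price after losing a round.
        def __init__(self, name, price):
            self.name, self.price = name, price
        def decide(self, fact_book):
            history = fact_book["history"]
            if history and history[-1][self.name] < max(history[-1].values()):
                self.price *= 0.95
            return {"price": self.price}

    def play(players, rounds, seed=11):
        rng = random.Random(seed)
        fact_book = {"history": []}
        for _ in range(rounds):
            # (a) situation phase: the fact book is made available to each
            #     player; (b) decision phase: parametrized decisions collected
            decisions = {p.name: p.decide(fact_book) for p in players}
            # (c) computation phase: the competitive situation is simulated,
            #     with randomly occurring phenomena included
            sales = {name: d["price"] * rng.uniform(0.8, 1.2)
                     for name, d in decisions.items()}
            fact_book["history"].append(sales)
        return fact_book

    print(play([Player("firm_a", 10.0), Player("firm_b", 10.0)], rounds=5))

An umpire routine, discussed below, would naturally be interposed between phases (b) and (c).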

These symbolic gaming models have become especially useful in training programs for young executives. Implicit in the construction of such models is the development of a simulation model of the competitive situation. Though not necessarily so, these simulation models are usually of the dynamic, stochastic variety; usually, the detailed structure of the model is not made available to the participants, though some information regarding the gaming situation is supplied by means of a "fact book." At the decision stage, participants are permitted to change their competitive stance by altering prices, reallocating advertising funds, selecting alternative personnel policies, etc. Thus, gaming models have more recently been of a symbolic variety.


Indeed, Shephard (1965) defines a gaming model as "a simulation of competition or conflict in which the opposing players decide which course(s) of action to follow on the basis of their knowledge about their own situation and intentions, and on their (usually incomplete) information about their opponents." According to this definition, one does not refer to every model that permits human interaction as a game, because the simulation analyst who alters environmental specifications in successive encounters of a simulation model would not normally be considered engaged in a competitive situation with one or more opposing players. Hence, two important distinguishing characteristics of gaming models are: (a) a model, usually of the simular variety, of a competitive situation; and (b) human interaction with the model.

A third ingredient of most gaming models is an umpire, who judges the quality of the participants' decisions and who ensures that the rules of the game are maintained (see also Meier et al., 1969). Such models are of considerable pedagogical value. Indeed, simulation models of urban environments are often readily converted to the gaming format; industrialists may participate in the model by specifying the number and location of factories, citizen representatives may specify the number and location of exhaust-plumed freeways required by the public, and medical-pollution teams may endeavor to impose constraints on these activities by setting public health criteria. In this manner, the competitive nature of daily urban living can be more readily exposed, indicating to all citizenry the need for containment of excessive greed on the part of any element of the city. Of course, the credibility of such gaming models will depend on the established validity of the game's kernel: its simulation model.

11.4. Insights in Adaptive Models

One of the most exciting developments in simulation is the effort to simulate the human mind. The literature regarding the subject is now quite extensive, as an examination of the bibliography concluding the excellent collection of papers edited by Feigenbaum and Feldman (1963) will reveal. The present inability to accomplish the ultimate goal, that of a thinking automaton, has been due to a number of factors, among which may be noted:


(1) our currently inadequate knowledge of the mechanism by which the mind, considered as a physical net of cells (neurons), operates;
(2) an incomplete understanding of efficient methods by which large numbers of possible contributors to a mental decision are disregarded as immaterial to the final decision;
(3) an uncertainty regarding the learning mechanism of the human (as well as many another) animal;
(4) a speculation that the human mind is a parallel, not a sequential, processor of the information made available to it; and
(5) a reticence to admit that the human mind may indeed make some decisions on an essentially random basis.

In the first instance, developments in neurophysiology are rapidly overcoming the "knowledge gap" that appears to exist in our understanding of the mechanistic principles by which the network of neural elements produces the phenomenon known as mind. The interested reader is referred to the expositions of Young (1957) and Rosenblueth (1970) for further details regarding these developments; the latter is of special importance for its optimism regarding the eventual development of a mechanistic, physical theory of the mind. Nonetheless, the immensity of the human neural net will probably preclude its simulation on a neuron-by-neuron basis (cf. von Neumann, 1958). Even with continually improved computational speeds and costs, the foreseeable future holds little promise for a successful artificially intelligent machine constructed on this basis.

Instead, much progress has already been made by viewing the human mind as a transform, an unknown function that maps sensory information into decisions or actions. The distinction between actions that are the result of instinctive, reflexive, or cognitive mental processes is not always apparent, though in many instances brain "centers," each important to certain categories of human decision-action, have been isolated. Unfortunately, the hope of parsing the brain into mutually exclusive centers does not appear too promising either (cf. Young, 1957).

One approach to the reduction of the "dimensionality" of the simulation of the human mind has been the use of heuristics: principles or devices which contribute, on the average, to the reduction of search in problem-solving activities whose combinatorial possibilities are barely finite.

Heuristics can be contrasted with algorithms, which provide a straightforward procedure and which guarantee that the solution to a given problem will be isolated. A typical example of an algorithm is that providing the optimal solution to a linear objective function whose terms are subjected to linear constraints (inequalities and/or equalities); i.e., linear programming. In this instance, the examination of all feasible solutions to the optimization problem can be reduced to a combinatorial problem of (usually) immense size, yet the linear programming algorithm provides a straightforward method by which the particular location (i.e., the value of the terms of the objective function) of the optimum can be isolated rather readily. On the other hand, an exemplary heuristic is a suggested procedure which, if followed, would seemingly lead one to an improved situation, yet with no guarantee that the procedure shall lead to the optimum. (A toy illustration of this contrast appears at the close of the present discussion.) Typical of a heuristic is the procedure suggested in Section 8.5 for the isolation of important contributors to the simular response. The proposed use of heuristic procedures in the development of artificially intelligent machines implies the need for describing a sequential decision-making rule to the automaton; indeed, with such heuristics, the machine should be capable of adaptation, or learning, since the sequential decision-making rules shall likely employ records or statistical summaries of the past experiences of the automaton. Some of the typical applications in this regard are concerned with chess-playing machines and with pattern-recognizing devices such as those capable of deciphering handwriting (see, e.g., Sayre, 1965).

The third important difficulty to be overcome in successfully simulating the human mind concerns our understanding of the mechanism by which the human mind learns. The importance that culture obtains in this process cannot be overlooked, since the human animal has become highly dependent upon the passage of acquired experiences from generation to generation. In this regard, studies of neural characteristics and of child development seem to reveal the importance of knowledge reinforcement. Thus, the individual (especially the child) who elects certain courses of action and who finds himself punished tends not to elect the same course of action again, unless conditions are discernibly or strikingly different. Indeed, much of the human experience in pedagogy indicates that repetition of material (either in actual verbal repetition, or in providing multiple sources, or in reviews or examination procedures) contributes the kind of reinforcement necessary for transmitting culture between (among) generations. Seen in this light, the distinction between prejudice and insight is not always readily discernible.
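The algorithm-heuristic contrast promised above can be made concrete with a toy selection problem (a sketch with invented values): an exhaustive algorithm guarantees the optimum at a cost growing exponentially with the number of items, while a greedy heuristic is fast and usually good, yet, as the printed results show, carries no guarantee of optimality.

    import itertools

    # Toy 0/1 selection problem: choose items, maximizing total value
    # subject to a weight cap.  All (value, weight) pairs are invented.
    ITEMS = [(8, 4), (10, 5), (15, 8), (4, 3), (7, 4)]
    CAP = 11

    def exhaustive(items, cap):
        # Algorithm: every subset is examined, so the optimum is
        # guaranteed, but the work grows as 2**n.
        best = max((subset
                    for r in range(len(items) + 1)
                    for subset in itertools.combinations(items, r)
                    if sum(w for _, w in subset) <= cap),
                   key=lambda subset: sum(v for v, _ in subset))
        return sum(v for v, _ in best)

    def greedy(items, cap):
        # Heuristic: take items in order of value density; fast and
        # usually good, yet with no guarantee of reaching the optimum.
        value = weight = 0
        for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if weight + w <= cap:
                value, weight = value + v, weight + w
        return value

    print("algorithm: ", exhaustive(ITEMS, CAP))   # 19, the true optimum
    print("heuristic: ", greedy(ITEMS, CAP))       # 18, good but not optimal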


In the first chapter, the intelligent use of very elementary machines was observed to be of fundamental importance to scientific discovery (i.e., to scientific modeling). The lens, the microscope, the accurate pendulum clock: each permitted the scientist to observe and record phenomena beyond the scope of man's naturally endowed sensory capabilities. Similarly, our modern machines, with all their relative sophistication, prove even more valuable in extending man's knowledge of the world about him. Nonetheless, it has become apparent that certain of our machines have provided a mechanism for superimposing knowledge reinforcement on large segments of the human populace. The collective state of Nazi Germany's "mind" was a direct consequence of Propaganda Minister (Dr. Josef) Goebbels' repetitive incantations to the populace of his nation (see Grunberger, 1964). That this concern continues to exist in other highly civilized, sophisticated societies has been noted by several observers in recent years. Notably, McLeod (1969) writes:

I believe that much of the strife and unrest in the world today is the result of the improper handling of information. . . . Unfortunately, violence and sensationalism dominate the information that gets through. . . . I believe that the trend in world affairs is now such that our very survival might well depend upon our correcting the bias and making proper use of the information available to us [p. vii].

More to the point, Churchman (1968) notes that

Certain individuals are in control of the mass media made available to the average citizen. These persons are the politicians themselves and the owners of various television, newspaper, and other types of communication companies. Since they control the mass media, these individuals are capable of directing the way in which the public pays attention to issues. The question therefore is whether those who control what is presented to the citizen are themselves well informed and whether they act on the basis of information or rather a desire to be leaders and controllers of public opinion.

The answer to the question posed by Churchman is one for society at large to provide. Its importance in artificial intelligence lies in the establishment and specification of the input conditions for a simulation of the human mind. The validity of such a model, once developed, will, like that of all simulation models, rest on its ability to reproduce human decision-making and inferences; i.e., on its ability to mime the responses of a culturally preconditioned mind. The cultural aspects of the human mind cannot be neglected in its successful modeling.


The fourth of the aforementioned difficulties in simulating the human mind deals with the apparent need to mime a parallel processor. That the mind is apparently such a system is possibly revealed best by self-reflection, but the reports of two leading mathematicians, Poincaré (1913) and Hadamard (1945), each describing the seemingly unusual circumstances often surrounding mathematical innovation and invention, serve to corroborate the conclusion. Though we, as human beings, transmit our mathematical findings to one another in a logical, sequential fashion, it is not at all certain that such a mode of operation is employed in the innovative mind. Thus, the simulation of such a mind (or of this aspect of the mind, generally referred to as the search for general, theorem-proving machines) may require a parallel processor. Fortunately, the concept of a software-defined simulation clockworks may make possible a better mimicry of the mind via the digital computer.

Another aspect of the search for the "thinking machine" is that concerned with the nature of the human decision and inference processes. The importance of the conditioned mind cannot be denied in this regard, especially when one notes that the restoration of sight to an adult with congenital blindness entails an immensely difficult and time-consuming period during which the affected individual essentially learns how to see (Young, 1957). Pasteur, in a statement which provides the epigraph for this book, noted a century ago that not only is a prepared mind of value to research, but fortuitous circumstances are also of prime importance in discovery and innovation. A primary difficulty with the modeling of the mind would seem to center about methods by which these conditional probabilities may be entered meaningfully into such a model's structure.

The eventual modeling of the thought process remains, nonetheless, one of the important vistas on the horizon of simulation methodology. The ability of the dynamic, stochastic simulation model to act as a pseudoparallel processor, to incorporate random phenomena with facility, and to advance mental states by means of quite general algorithms (and/or heuristics) will improve the likelihood of its use as a fundamental mechanism in the artificially intelligent machine.

11.5. Language Developments

A number of the preceding vistas in simulation methodology will probably have repercussions in ad hoc simulation language developments.

Noteworthy in this regard will be the need to provide facilities for incorporating heuristics, rather than the more direct algorithms, as event routines in simulation models. Many simulation languages permit a user to define fixed event routines, which are considered "stationary" operating rules applicable at any point in simular time; perhaps greater flexibility, permitting these operating rules, or event routines, to be altered significantly with the passage of simular time, will be in order (see the sketch below). The "process" concept (cf. Blunden and Krasnow, 1967) in simulation languages may prove especially valuable in this regard. Possibly interpretive languages, especially in association with gaming models, will also prove of value here.

Other possible developments include hybrid computation and associated languages for the implementation of simulation models on analog-digital (hybrid) machines. As was noted in the preceding section, such simulation capabilities will be especially useful in the modeling of the human mind; that they should also be of value in environmental and ecological modeling is hardly a moot point.

In this last regard, the validity and acceptability of the responses emanating from these quite scientific models will also rest on the acceptability of the models' implicit assumptions and their builders' use of established scientific facts and knowledge. Thus, every ad hoc simulation language should be organized so as to encourage the systems analyst/modeler to provide explicit statements of the assumptions underlying each source language statement (or, at the least, each source language event routine); the means by which computational algorithms and event routines are derived should be made explicit by incorporating scientific references among the source statements of the simulation model. In this way, documentation of a model's structure becomes more automatic and thereby subject to closer scrutiny by the scientific community at large. Though ad hoc simulation languages could hardly be structured so as to enforce meaningful documentation and referencing, perhaps more emphasis should be placed on this aspect of a model's credibility.

Other possible language developments and improvements would concern methods for obtaining more pertinent statistical analyses of the response data arising from a single simulation encounter. Many ad hoc languages are deficient, for example, in their capability for performing time series analyses. In addition, more languages will need to provide methods by which experimental designs may be readily defined in advance by the analyst; the resulting responses should then be analyzed automatically in accordance with the specified experimental design.
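One hypothetical realization of such non-stationary operating rules is sketched below; it is not the facility of any existing simulation language. Because the executive looks up its event routines by name at every invocation, a routine may be re-bound while simular time passes, so that the model's operating rule changes in mid-run.

    import heapq

    class Executive:
        # A minimal event-list executive whose routines are looked up by
        # name at call time, so a rule may be re-bound during the run.
        def __init__(self):
            self.clock, self.queue, self.routines, self.seq = 0.0, [], {}, 0
        def schedule(self, t, name):
            heapq.heappush(self.queue, (t, self.seq, name))
            self.seq += 1
        def run(self, until):
            while self.queue and self.queue[0][0] <= until:
                self.clock, _, name = heapq.heappop(self.queue)
                self.routines[name](self)     # looked up afresh each event

    def arrival_rule_1(exe):
        print("t =", exe.clock, ": arrival under rule 1")
        exe.schedule(exe.clock + 2.0, "arrival")

    def arrival_rule_2(exe):
        print("t =", exe.clock, ": arrival under rule 2")
        exe.schedule(exe.clock + 5.0, "arrival")

    def switch(exe):
        exe.routines["arrival"] = arrival_rule_2   # the rule changes mid-run

    exe = Executive()
    exe.routines = {"arrival": arrival_rule_1, "switch": switch}
    exe.schedule(0.0, "arrival")
    exe.schedule(7.0, "switch")
    exe.run(until=20.0)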

Furthermore, methods by which the response surface methodology can be applied in order to isolate the optimal simular response "automatically" would be of immense value to the simulation analyst. The use of heuristics for reducing the dimensionality of the "search space" would also be of eventual concern.

One might note, as an aside, the suggested use of antithetic variates, a Monte Carlo technique deemed occasionally appropriate to simular experimentation. [See, e.g., Page (1965).] The simular response at simular time t, arising as a result of the environmental specification x̄, can be represented generally as

Y(t) = μ_t(x̄) + ε_t(s; x̄),

where μ_t(x̄) = E[Y(t)] and where ε_t(S; x̄) is an "error" random variable of mean zero, yet one which is an unknown transformation of the randomly selected value of the juxtaposed seed S. Thus, the essential statistical properties of Y(t) become those of a random process {Y(t)} whose distributional properties depend not only on the simular time t of observation but also on the environmental specification x̄. The essence of the antithetic-variate technique is the definition of a second random number generator, one to be used in the second of a pair of model encounters so as to yield a response Y2(t) which would be negatively correlated with the first response (the one recorded from the model when employing the original random number generator). The arithmetic mean of the two responses, Y1(t) and Y2(t), would then have lower variance than that pertaining to the arithmetic average of the responses arising from a pair of independently seeded encounters, each of which employed the same generator. However, a general procedure for defining random number generators that will guarantee the desired negative correlations in responses from general simulation models remains undisclosed. Indeed, it would appear unlikely that any single such technique would be generally applicable to all stochastic simulation model encounters.
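The variance reduction at issue can be sketched as follows; the classical transformation u → 1 − u of the uniform deviates stands in for the "second generator," and the monotone response function is invented for the illustration.

    import random

    def response(u_stream):
        # Stand-in for a simular response Y(t): any monotone
        # transformation of the uniform deviates suffices here.
        return sum(u * u for u in u_stream) / len(u_stream)

    def antithetic_mean(rng, n=100):
        us = [rng.random() for _ in range(n)]
        y1 = response(us)
        y2 = response([1.0 - u for u in us])   # the "second generator"
        return (y1 + y2) / 2.0

    def independent_mean(rng, n=100):
        y1 = response([rng.random() for _ in range(n)])
        y2 = response([rng.random() for _ in range(n)])
        return (y1 + y2) / 2.0

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rng = random.Random(1972)
    print("antithetic pairs: ",
          variance([antithetic_mean(rng) for _ in range(2000)]))
    print("independent pairs:",
          variance([independent_mean(rng) for _ in range(2000)]))

Because the response is monotone in each deviate, the paired responses are negatively correlated and the first printed variance is markedly the smaller; for a general simulation model, as the text observes, no such guarantee exists.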

11.6. Conclusion

This chapter has hopefully served not only as a review of the book, but also as a vantage point for viewing the applicability of the relatively new scientific modeling capability: the simulation model. The primary theme of this text is that our efforts to model conscientiously, and in adequate detail, any complex, dynamic system will provide a confrontation with the Uncertainty Principle of Modeling and, consequently, with the need for simulation models of the dynamic and stochastic varieties.


Such models are of the most general type currently available to the scientific investigator, so that their acceptability depends heavily upon their builders' ability to represent reality as objectively as possible. The modeler is thus charged with the responsibility of including, in his simulation, appropriate and accurate random effects, so that the resulting simular time series may indeed constitute approximate realizations from the stochastic process which would be observed in nature.

The implications of the need for stochasticity in modeling permeate the entire modeling activity. The systems analysis stage requires a constant vigil on the part of the analyst in order that truly stochastic effects not be represented deterministically (and vice versa). The subsequent system synthesis stage requires that appropriate random variables, in consonance with known probability laws or observational data, be properly embedded in the model's structure. The verification stage may curtail or preclude randomness while model (or submodel) debugging proceeds, but should also include comparisons of one or more simular responses with theoretically applicable random variables. Finally, the Uncertainty Principle of Modeling leads to important considerations in the experimental stages (the validation and model analysis stages) of model development. Primary among these are the Principium of Seeding and the concomitant need to analyze and compare the independent responses via existing statistical methodology: statistical comparisons and tests of hypotheses, factorial experimental designs, multivariate analyses, sequential experimentation, regression analyses, response surface methodology, and time series analysis.