TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE 10, 239-258 (1977)
PAF—A New Probabilistic, Computer-Based Technique for Technology Forecasting

JOHN H. VANSTON, JR.*, STEVEN P. NICHOLS and RICHARD M. SOLAND

ABSTRACT

The ultimate goal of all technology forecasting is to assist managers and planners in the decision-making process. The employment of computer techniques may increase the utility of a forecast because it permits the rapid incorporation of vast amounts of data into the projection process. On the other hand, the value of computer-based forecasts depends on the compatibility of output information with other elements of the planning system and on the confidence of the users in both the input data employed and the manner in which those data are processed. For the last three years researchers at The University of Texas at Austin have been developing a new computer-based forecasting technique called PAF (Partitive Analytical Forecasting). This technique is based on the development of logic networks which simulate the evolution and maturation of advanced technology. PAF utilizes a specially developed interview technique for gathering input data and a special time-sequenced computer simulation for data processing. The final results of a PAF analysis are a series of probability-associated forecasts of the time necessary for the development of a given technology and of the related costs of development. Forecasts are made for specified sets of assumptions and management strategies, and alterations of either input data or program structure are easily made. Early indications are that PAF will prove to be a valuable management tool, particularly for long-term, complex, high-risk technical programs. In this article the basic methodology of the PAF technique is discussed along with the major PAF project to date, a projection of fusion power development. Recent improvements, including the addition of optimization routines, are also briefly discussed.

*The PAF concept was originated by John H. Vanston in his 1973 doctoral thesis at The University of Texas at Austin. The work was funded by a General Electric Company grant to the Graduate School of Business Administration for the support of technology forecasting under the direction of Professor James R. Bright. All later research on PAF was sponsored by the U.S. Energy Research and Development Administration.

JOHN H. VANSTON, JR. is the Deputy Director of the Center for Energy Studies at The University of Texas at Austin and Acting Director of the Nuclear Engineering Program of the university's Mechanical Engineering department. His research interests are computer modeling of advanced energy systems, economics of nuclear power, technology forecasting, and energy-related policy alternative studies. He has taught courses, workshops, and seminars in these areas and has authored many reports and articles on energy-related subjects. Dr. Vanston has consulted for several companies and the Australian National Government and served on a special panel set up by the National Research Council to study the application of technology forecasting techniques to the coal industry. He has been the principal investigator on a number of energy-related projects, including a study of state nuclear policies for the National Science Foundation, an assessment of portable energy sources for the National Aeronautics and Space Administration, an analysis of the institutional barriers to the development of geothermal energy for the Energy Research and Development Administration, and an evaluation of the national energy plan for the Office of Technology Assessment.

STEVEN NICHOLS received his Ph.D. degree from The University of Texas at Austin and is currently teaching in the Department of Mechanical Engineering at that university. Dr. Nichols has worked on technology forecasting with the staffs of the U.S. Energy Research and Development Administration, the Electric Power Research Institute, and Oak Ridge National Laboratory.

RICHARD M. SOLAND is Visiting Professor in the Department of Industrial Engineering of Ecole Polytechnique de Montreal.

© Elsevier North-Holland, Inc., 1977

I. Introduction

The ultimate goal of all technology forecasting is to assist technical personnel, managers, and planners in making decisions. To the extent that the forecast assists in the decision process, it is successful; to the extent that it fails to do this, it fails to justify its effort [1]. The utility of the technology forecast is in large measure determined by its credibility in the eyes of the user and by the facility with which it can be integrated into the overall decision-making mechanism. In many cases, the employment of computer techniques may increase the utility of a forecast because it allows the forecaster to rapidly incorporate vast amounts of data into his projection processes. On the other hand, this advantage may be negated if the user is suspicious either of the input data or of the process by which these data are treated. In an attempt to improve the quality and usefulness of technology forecasts, a group of researchers at The University of Texas at Austin has been working during the last three years on a new forecasting technique called Partitive Analytical Forecasting (PAF). This technique is based on the development of logic networks which simulate the evolution and maturation of advanced technology. PAF utilizes a specially developed interview technique for gathering input data and a special time-sequenced computer simulation for data processing. As the name would suggest, the technique involves the selective partition of a complex research and development program into its simpler elements, the separate analysis of these elements, and the rejoining of the elements to generate a forecast of how the program will develop under different management strategies or sets of assumptions. Conscious efforts have been made to develop procedures that will increase confidence in the projections and to produce forecasts in formats which will enhance their usefulness to those persons who will employ the results [2].
Although several projects are now underway to test the effectiveness of the PAF technique as an aid to management decision making, the greatest effort to date has been directed toward forecasts about the development of nuclear fusion power. This research, sponsored in part by the Controlled Thermonuclear Research (CTR) Division of the Energy Research and Development Administration (ERDA) and the General Electric Company, has produced forecasts of the probability of achieving commercial application within specified time frames, under various funding strategies. This project will be discussed in more detail following a brief explanation of the PAF technique.

II. The PAF Technique

A. GENERAL COMMENTS

The basic concept of the PAF technique is that the various activities and events included in a technological development program can be simulated by computer models. To assist in this simulation, a time network is developed in which the "nodes" of the network represent the accomplishment of certain tasks, and the "branches" or "legs" connecting these nodes represent the activities necessary to accomplish those tasks. Similar types of simulation were employed by the U.S. Navy in the development of the Polaris missile system starting in 1958, and by the E.I. DuPont Company for various


projects at about the same time [3]. These techniques were called PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method), respectively. Although each of these methods proved valuable in the management of large-scale projects, their value as forecasting tools was severely limited by the fact that they were both deterministic. The programs assumed that all included events could and would take place; thus, the programs did not take into account the very real possibility that certain approaches may fail, or be only partially successful, and that new approaches may be substituted to provide alternate paths to success. To increase the flexibility of the PERT and CPM programs, the GERT (Graphical Evaluation and Review Technique) computer program series was developed by Dr. A. Alan Pritsker et al. [4]. One of these programs, GERTS IIIZ, is used as the basis of the PAF technique.¹ This program provides for alternate paths to success (instead of one critical path); for the possibility of partial success in activities; for the repetition of previously unsuccessful activities; and, most important, for the dynamic alteration of one part of a network based upon events in other parts. Application of the GERTS IIIZ program is simple, and it can be easily used by analysts having only limited knowledge of computer programming techniques. Moreover, the program is quite versatile, allowing easy modification for investigating changes in assumptions, parameters, or network design patterns. The output information provided by the code includes: (1) the overall likelihood of accomplishing the final goal of the research program or any designated intermediate goals; (2) the probable time required to accomplish these goals; and (3) the costs associated with goal accomplishment.

B. THE PROJECT DEVELOPMENT NETWORK

The first step in designing the time network is to determine the activities that must be accomplished for successful completion of the program, the possible methods of accomplishing these activities, and the relationships between the various requirements and means of meeting them. These relationships are then reduced to a graphical network form. This network can be considered from the point of view of the technological development it is designed to represent or, alternatively, from the point of view of a logic network that guides the computer's operations during any simulation run. In other words, the overall network represents the research and development process that it is intended to simulate, and also the computer model that structures the logic of each simulation. In the project development network, nodes and branches represent the events that may be accomplished and the physical activities required to achieve them. Obviously, the logic network must correspond closely with the projected development plans of project managers. Breaking down the network into steps, or branches, must be done in a way that clarifies understanding in order to facilitate good estimates of time and probability. To illustrate the general notion of a GERTS network and the symbolic notation used in the PAF application, a simple network is shown in Fig. 1. There are five types of node, each with a characteristic symbol. The two main types differ according to the number of branches emanating from the node. A "deterministic"

¹The theoretical basis for computer simulation, together with details of the GERT program structures and associated assumptions, is discussed in detail in Systems Analysis and Design Using Network Techniques by Gary E. Whitehouse.


Fig. 1. Symbolic notations in a PAF subnetwork.

node (Nos. 3, 5, 11, and 12 in Fig. 1), when realized, activates all branches leaving it. A "probabilistic" node (No. 4), when realized, activates only one of the branches leaving it. The specific branch activated depends upon a random selection process, governed by relative probabilities specified in the input data. Besides these two general node types, there are special nodes for beginning and terminating overall program simulation. One or more "start" nodes (No. 2) initiate activity at "zero" time, the beginning of a simulation run. "Sink" nodes (No. 99) terminate a simulation run, either upon successfully reaching the final milestone node that completes the whole network or upon failing to do so within a given time period. A node from which no branches emanate can be considered a "dump" node (No. 13); that is, one which terminates a failed subnetwork but does not in itself constitute a program failure. There are two kinds of branches (or legs): "activity" branches and zero-time "information transfer" branches. Legs of the first type (legs 2-3, 3-4, and 11-12) are labeled with the activities they represent, followed by the number (in parentheses) of a set of associated time parameters. Legs of the second type (legs 4-5 and 4-13) are used for the relay of information and are unlabeled phantom activities representing no "real world" time and no physical activity. (The second type of leg is actually a special case of the first with a constant zero activity time.) The dotted line (3-11) is a symbol for a "network alteration", indicating a substitution of one node for another. This alteration can be activated by the completion of a designated activity elsewhere in the network. The subnetwork shown in Fig. 1 represents a typical starting task, or initial experiment, within a larger overall network of development.
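The node-and-branch vocabulary above can be made concrete with a small data-structure sketch of the Fig. 1 subnetwork. This is illustrative Python only (the actual PAF input consisted of GERTS IIIZ data cards), and the branch probabilities shown are invented:

```python
# Hypothetical encoding of the Fig. 1 subnetwork; node numbers follow the
# figure, but the structure and values are illustrative, not GERTS IIIZ input.

# Node types: a "deterministic" node activates every branch leaving it;
# a "probabilistic" node activates exactly one, chosen at random.
nodes = {
    2:  "start",           # initiates activity at "zero" time
    3:  "deterministic",
    4:  "probabilistic",
    5:  "deterministic",
    11: "deterministic",   # substituted for node 3 by alteration "A"
    12: "deterministic",
    13: "dump",            # terminates a failed subnetwork only
}

# Activity branches carry a time-parameter set number; branches out of a
# probabilistic node carry relative probabilities. Information-transfer
# legs (4-5, 4-13) are the zero-time special case.
branches = [
    {"from": 2,  "to": 3,  "label": "subtask 1",         "time_params": 1},
    {"from": 3,  "to": 4,  "label": "subtask 2",         "time_params": 2},
    {"from": 4,  "to": 5,  "prob": 0.75},   # subtask 2 successful
    {"from": 4,  "to": 13, "prob": 0.25},   # subtask 2 unsuccessful
    {"from": 11, "to": 12, "label": "alternate subtask", "time_params": 3},
]

# Network alteration "A": substitute node 11 for node 3 when a designated
# activity elsewhere in the network completes.
alterations = {"A": {"replace": 3, "with": 11}}
```

In such an encoding the dotted-line alteration of Fig. 1 becomes a data entry rather than a drawing convention, which is what makes redesign of the network quick.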
When the first subtask (leg 2-3) of this experiment has been completed, node 3 initiates the second one (leg 3-4) unless a network-alteration signal (A) has caused node 11 to be substituted for node 3. In that


case, the next activity will be initiated from node 11 instead. With the completion of subtask 2, node 4 relays an information signal to either node 5 or node 13, depending on whether the subtask proved successful (leg 4-5) or unsuccessful (leg 4-13). If the path 4-5 were chosen, node 5 would then initiate new activities in the overall network; if the network alteration had been activated, completion of activity 11-12 would likewise initiate new activities.

C. COLLECTION AND CORRELATION OF INPUT DATA

Once the project-development network has been designed, the next step is to develop data concerning the time that each task will require and the likelihood that it will be completed successfully. Although any of the standard forecasting methods can be used to develop these data, one method that has proven effective in PAF projects to date is the use of structured interviews with appropriate researchers and administrators. This systematic sampling of informed judgments falls generally within the Delphi family of techniques, with the interviewer serving as the feedback mechanism while preserving each expert's anonymity. This procedure avoids the psychological inhibitions to open discussion often present in committee or group exchanges. The main modification of Delphi in the PAF approach is the substitution of direct interviews for written questionnaires. These interviews allow the interviewer to carefully define the information he desires and the participant to qualify his answers in any manner he sees fit. Although direct interviews are expensive in time, money, and effort, their use appears justified for the complex, long-term projects for which the PAF technique is most appropriate. As each interview begins, the interviewer presents a copy of the overall network and of the specific subnetwork on which the participant will be requested to supply information. The general nature of the interview is explained, together with an outline of how the information will be used. An agreement is made on what attribution can be made of the individual answers, and assurances of anonymity are formally stated. For each task for which he will be asked to make estimates, the participant is requested to rate his experience on a one-to-three scale. This self-rating is later used to give a bias in favor of qualified experience.
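The later reduction of these estimates to a weighted average, biased by the one-to-three self-rating, can be sketched as follows. The article states only that the average is biased toward qualified experience and that multi-option probabilities are adjusted to sum to 1.0; the linear weighting and proportional scaling below are assumptions:

```python
def weighted_probability(estimates):
    """Reduce individual probability estimates to a weighted average,
    weighting each by the participant's 1-to-3 self-rating of experience.
    (Linear weighting is an assumption; the article does not give the
    exact bias formula.)"""
    total_w = sum(w for _, w in estimates)
    return sum(p * w for p, w in estimates) / total_w

def normalize(probs):
    """Adjust estimates for a multi-option probabilistic node so that the
    event probabilities sum to 1.0 (proportional scaling, also assumed)."""
    s = sum(probs)
    return [p / s for p in probs]

# Three participants estimate success of subtask 2 (leg 4-5 in Fig. 1):
est = [(0.7, 3), (0.8, 2), (0.6, 1)]   # (probability, self-rating)
p_success = weighted_probability(est)
# For a binary probabilistic node, failure is the complement.
p_failure = 1.0 - p_success
```

Here the rating-3 participant pulls the average toward 0.7, which is exactly the "bias in favor of qualified experience" the self-rating is meant to provide.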
This ability to qualify their experience in each task area serves to reduce the reluctance of participants to make estimates in areas where they feel their competence is not complete. A major advantage of the structured interview technique is that it provides for individualized challenges; that is, the interviewer can match the participant's subjective responses against previous estimates by himself and by others. The interviewer must, of course, be thoroughly familiar with the network interrelationships and be alert to any inconsistencies in a participant's estimates. If a participant's estimate differs significantly from those of others, this fact is brought to his attention, and he is asked to explain possible reasons for the difference. If he wishes, he may change his estimate, but original estimates are also recorded for possible future analysis. During an interview each participant is asked to give three types of estimates: (1) the likelihood that each activity will be completed; (2) the probable time that will be required to complete each activity; and (3) the costs associated with such completion. For each probability node the participant is asked either to estimate the likelihood of an event's occurrence as a numerical probability or to choose the most appropriate of seven adjectival statements, such as "very probably will occur". For example, for node 4 (Fig. 1), the participant would estimate the likelihood of successful completion of subtask 2 (leg 4-5). Because node 4 is a binary probabilistic node, the probability of


failure (leg 4-13) would be the estimated probability of success subtracted from 1.0. When the probabilistic node has more than two options, estimates must be mathematically adjusted to ensure that the sum of all event probabilities is 1.0. For each activity the participant is asked to give a minimum and a maximum practical time for completion together with an indication of when within that time span the event is most likely. This is done initially for the whole subnetwork under the assumption of a certain funding level. Later, he is asked to make similar estimates based on different funding levels. The interviewer then compares estimates for each activity for the different funding levels and brings apparent discrepancies to the attention of the participant for discussion. This technique adds a new element of self-challenge to the Delphi-type procedures. When gathering time estimates, the interviewer may also gather cost estimates in a similar manner. When all estimates have been gathered, probability estimates are reduced to a weighted average, and time and cost estimates are averaged using the mathematical "beta" function. For each activity the time-parameter set going into computer input includes the smallest minimum time individually estimated among the participants, the largest maximum time estimated, an average mean time, and the standard deviation of the time estimates. Similar cost-parameter sets can also be input, if desired.

D. COMPUTER SIMULATION
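The construction of an activity's time-parameter set from the interview estimates might look like the sketch below. The article says only that estimates are averaged using the "beta" function; the PERT-style beta mean (a + 4m + b)/6 used here is a common convention and is an assumption:

```python
import statistics

def time_parameter_set(estimates):
    """Build an activity's time-parameter set from participants'
    (min, most-likely, max) time estimates.

    Returns the four values the article names: the smallest minimum
    estimated by any participant, the largest maximum, the average mean
    time, and the standard deviation of the individual mean times.
    The PERT-style beta mean (a + 4m + b) / 6 is an assumed convention.
    """
    means = [(a + 4 * m + b) / 6 for a, m, b in estimates]
    return (min(a for a, _, _ in estimates),   # smallest minimum time
            max(b for _, _, b in estimates),   # largest maximum time
            statistics.mean(means),            # average mean time
            statistics.pstdev(means))          # std deviation of the means

# Two participants' (min, most-likely, max) estimates, in years:
params = time_parameter_set([(2.0, 3.0, 6.0), (1.5, 4.0, 7.0)])
```

With these example inputs the parameter set is (1.5, 7.0, ~3.71, 0.375); the same routine would serve for cost-parameter sets.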

After relevant network data have been collected and correlated, they are entered into the computer by means of seven types of data cards. These cards describe the project network, list node and activity-leg parameters, and specify administrative instructions for program operation, including network modification. The cards also specify the number of simulations that will be run and provide an initial, arbitrary number needed to initiate the random-choice processes of the GERTS IIIZ program. With the exception of this input information, the program can be treated as a “black box” by the investigator; that is, understanding of the internal operation of the GERTS IIIZ program is not necessary for the program’s utilization. Each simulation run involves a progressive movement through the network by the computer. As the program proceeds from node to node, it chooses by random processes from the input data a time for each activity and the path to be taken at each decision node. For each node it determines the time that has elapsed since the beginning of the simulation. A pseudo random number generator is used to generate branch choices and times according to the probability values and time distributions specified by the user in his input data. All choices are made independently (in a probabilistic sense), and all branch times are independent random variables. Each run terminates upon activation of a sink node. Because of statistical deviations in the time parameters and different paths that may be taken through the network, the shortest-time path normally differs for each simulation. At the end of all simulations, results are correlated and printed out in three formats. The first of these, a complete listing of all node realizations by time for the path actually followed on a given run, is optional. This step-by-step record is valuable for complete program analysis and for troubleshooting, but provides more detailed information than is normally necessary or desirable. 
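The run loop just described can be miniaturized for the Fig. 1 subnetwork. This Python sketch only imitates what GERTS IIIZ does internally: a seeded pseudo-random number generator draws activity times and the branch choice at the probabilistic node, each run walks the network, and realizations are tallied. The triangular distributions, the 0.75 success probability, and the seed are all invented for illustration:

```python
import random
from collections import Counter

def one_run(rng):
    """One simulation pass through the Fig. 1 subnetwork (illustrative)."""
    t = 0.0
    t += rng.triangular(2.0, 6.0, 3.0)   # subtask 1 (leg 2-3): min, max, mode
    t += rng.triangular(1.0, 5.0, 2.0)   # subtask 2 (leg 3-4)
    # Probabilistic node 4: one outgoing branch chosen at random.
    success = rng.random() < 0.75        # leg 4-5 (success) vs leg 4-13
    return success, t

rng = random.Random(42)                  # the "initial, arbitrary number"
runs = [one_run(rng) for _ in range(1000)]

# Statistical output: fraction of runs in which node 5 was realized.
p_node5 = sum(ok for ok, _ in runs) / len(runs)

# Histogram output: node-5 realizations grouped into unit time periods.
hist = Counter(int(t) for ok, t in runs if ok)
```

Dividing each histogram count by the number of runs gives the likelihood of event occurrence within that time period, exactly the form of the article's histogram tables.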
The two remaining output formats are statistical and histogram tables as described below. The statistical output format lists relevant data for any node designated for data recording. The fraction of total runs in which a node was realized is a measure of the


confidence of success of the activity described by that node. For example, if node 5 in Fig. 1 were realized 745 times in 1000 runs, the likelihood of successful completion of subtask 2 would be .745. Cumulative time data for designated nodes are also given; for example, the maximum, minimum, and mean times to event occurrence, together with the costs associated with node realization. The histogram data summarize the number of times that each statistic node was realized within various designated time periods. Dividing that number by the total number of runs in the simulation indicates the likelihood of event occurrence within the given time period.

E. PRESENTING THE OUTPUT DATA TO MANAGERS

The utility of the PAF technique, of course, lies in the assistance it can provide decision makers in the effective allocation of resources for a project: the funds, the facilities, and the manpower. In presenting the simulation data to managers and planners, one needs to explain the logic behind the technique and the nature of the input data in order to facilitate proper interpretation of results. A user of the PAF forecasts must see the network for what it is: a complex mass of judgments structured into a formal arrangement. The network obviously does not transcend the human, subjective speculations that shaped it. But the user should also appreciate the advantage of incorporating the knowledge and judgment of experts in many areas into coordinated, well-structured, formal projections. The value of the participants' estimates has been further enhanced by the data collection process, in which the participants have challenged and clarified their own judgments while taking into account the judgment of others. Each user of the simulation data should understand the network's technical details well enough to appreciate how the computer's processing of individual, specialized judgments can reveal a pattern of information that would not be apparent in a less structured analysis. Moreover, he should be aware of the fact that the networks can be easily and quickly redesigned and internally modified to meet any new analysis requirements that he might have. In presenting his data, the PAF forecaster needs to step beyond the detailed complexities of both network and computer and to use the language of managerial decision. He must sharply focus upon the comparative effects of the options considered (the alternative choices of project development) and he may well wish to develop simple graphic aids to increase the clarity and comprehensibility of the simulation results. From the statistical data output, he may prepare one or more concise but complete summary tables.

He will normally emphasize likelihood, projected costs, and expected times to completion for those milestone events which mark the progress of overall development. He may compare projected progress rates for each postulated level of support or for any particular strategy that the manager wishes to have examined. He should also make clear those actions that will, based on his analysis, result in particularly valuable payoffs in time, money, and knowledge. From the histogram data output, he can plot the increasing likelihood of developmental success of milestone events on a year-by-year basis to assist in making effective comparisons. The four steps of forecasting just described (network design, information collection and correlation, computer simulation, and data presentation) can be better seen in a specific application of the PAF procedure. In the following section an analysis of nuclear fusion power development using the PAF technique is described in detail.


III. The Tokamak Approach to Fusion Power: A PAF Application

Partitive Analytical Forecasting as a technique can be illustrated by describing its first major application: the development of fusion power utilizing the tokamak approach. The following pages will provide a brief survey of nuclear power development, describe the tokamak method, and review the application of PAF to tokamak development.

A. A BRIEF HISTORY OF NUCLEAR POWER DEVELOPMENT

Fusion reactors, when and if developed, will represent the third generation of nuclear power reactors. The first two generations encompass fission reactors, which produce heat by splitting large atoms into pairs of lighter atoms. This fission of large atoms is achieved by bombarding them with neutrons, the basic electrically neutral particles of atomic nuclei. All present U.S. nuclear plants are powered by first-generation, or "thermal", reactors. Fuel for these reactors is being used up at an increasingly rapid rate. The second-generation reactor, still in a developmental stage, is the fast-breeder reactor, which produces more fuel atoms than it burns. Because they increase the recoverable energy from natural uranium by about sixty times, fast-breeder reactors are a prime hope for extending the world's finite fuel supplies. The third generation of reactors will utilize the energy released when light atoms fuse to form heavier ones. Since the early 1950s, when the uncontrolled, massive release of fusion energy in hydrogen bombs became a reality, scientists and engineers have sought to develop controlled fusion as a source of power. Although fusion power is still in an early stage of development, research has made steady progress, and many researchers in the field hope to see fusion become a commercial reality by the end of the century. While eventual success is not certain, the nature of both the obstacles and the potential solutions has become increasingly clear. The primary fuels for fusion are deuterium (D) and tritium (T), both hydrogen isotopes, that is, forms of this element with higher atomic weights than ordinary hydrogen. Deuterium can be extracted from sea water, while the reactor itself can produce tritium from the metal lithium. Controlled fusion promises not only a potentially "infinite" energy source but also fewer problems related to reactor operation and radioactive waste disposal.
The principal obstacle to the development of fusion power is that the interacting nuclei are positively charged and strongly repel each other. Thus, to achieve fusion, the D and T nuclei must be made to collide at velocities high enough to overcome this electrostatic repulsion. To attain this velocity the nuclei must be raised to temperatures in the vicinity of 100 million degrees centigrade. At this temperature the atoms are fully ionized and exist in the form of a "plasma" of positive and negative ions. Furthermore, in order to release a net output of energy, the fuel isotopes must be confined long enough to allow a large number of fusion reactions to occur. To demonstrate that fusion power is scientifically feasible, researchers must be able to meet temperature and containment requirements simultaneously. Researchers are now working to produce the strong magnetic fields necessary to confine plasma while heating it to the high "ignition temperature" that will start the fusion process. As an alternative to magnetic confinement, laser beams have been used to heat solid hydrogen pellets so rapidly that inertial forces provide adequate containment for the

nuclei. Most research to date, however, has focused on the magnetic confinement method.

B. THE TOKAMAK

The tokamak approach is one of several potential methods for achieving magnetic confinement in a fusion reactor. Since Russian scientists announced in 1969 the exciting results of experiments with a device called the tokamak (an acronym for the Russian words for torus, chamber, and magnet), other laboratories throughout the world have built and tested tokamak devices, varying the design to suit their specialized experimental objectives. In these tokamak devices, the plasma is contained by magnetic pressure inside a doughnut-shaped vacuum chamber. This tokamak configuration is shown in Fig. 2. The plasma is magnetically confined and remains within a vacuum sheath, well removed from the chamber wall. (If its particles were permitted to collide with the wall of the vacuum chamber, the plasma would be cooled and the walls quickly destroyed.) Outside the wall is a neutron-moderating fluid blanket for heat removal and tritium breeding. Surrounding the torus are the confining magnetic field coils.

C. COMPUTER SIMULATION OF THE TOKAMAK APPROACH

1. Scope of Project

In 1973, when the PAF technique was first applied to a full development program, the decision was made to restrict the analysis, for simplicity's sake, to the most promising fusion technique, namely, the tokamak approach. Although researchers differed on the exact sequence of events that might lead to a commercially successful tokamak reactor, there was enough consensus to project and refine one probable scenario.

Fig. 2. Tokamak configuration.


Fig. 3. Overall network.

For this PAF application, the tokamak program was examined from the then-present time to the completion of the first commercial fusion reactor.

2. Nature of Network

A simplified overall logic network for tokamak development is shown in Fig. 3. Only those nodes to which reference is made below are numbered. The overall network can be mentally divided into three major stages of development, each ending with a milestone event. These milestones are represented by nodes 200 (demonstration of scientific feasibility), 282 (demonstration of engineering feasibility), and 300 (completion of the first commercial reactor). The demonstration of scientific feasibility indicates that plasma temperature and confinement requirements have been achieved simultaneously in an experimental device. The demonstration of engineering feasibility reflects the development of the necessary hardware for a continuously operating reactor: the first (or inner) vacuum wall, the blanket system, the magnet system, the ash-removal system, and the systems for processing tritium gas. The network is completed with the construction of the first demonstration plant designed to test the economic, or commercial, feasibility of fusion power. Thus, the development of the tokamak approach can be viewed as occurring in three successive stages (I, II, III) of a single process. The development could also be viewed in terms of the devices used to carry out the experiments, since each stage of development requires the construction and operation of a major test facility.


The overall network for simulation of tokamak development is divided into six subnetworks, delineated in Fig. 3 by light lines and identified by the letters A-F. In five of the subnetworks certain activities begin with a start node (No. 2) at zero time. In Stage I, two separate research paths are pursued simultaneously in an attempt to speed the demonstration of scientific feasibility. Specifically, Stage I assumes continued experiments with nonturbulent heating methods (subnetwork A) and with turbulent heating (subnetwork B). Stage II, the demonstration of engineering feasibility, also includes parallel approaches, the Deuterium-Tritium (DT) and Deuterium-Deuterium (DD) processes. The first of these envisions a fusion process using both tritium and deuterium atoms, while the second envisions the use of deuterium atoms only. The former involves less difficult physics problems, but more difficult engineering problems, than the latter. Both of these alternatives are included in subnetwork C. For each approach, individual supporting technologies (such as the first vacuum wall and the blanket system) are elaborated in sub-subnetworks and, when collectively successful, lead to a decision to build and test a prototype reactor (node 273 or 281). The successful operation of such a reactor for either approach would be necessary for a decision (node 282) to embark on the construction of a pilot plant. This decision starts Stage III. The schematic simplicity of Stage III reflects the less complicated nature of subnetwork D. There are two reasons for the simplicity of this final stage: first, it is difficult to predict precise details of development in the distant future; and, second, commercial plant designs in general will emerge only when the competition of different approaches has been resolved into a relatively firm design.
Two supporting experiment subnetworks, while not directly a part of the three-stage development, significantly affect the four subnetworks of the major development sequence. The Magnet-Development Experiments Subnetwork (E) has direct input into subnetworks A, B, C, and D, while the Plasma Heating Subnetwork (F) has a direct effect on the nonturbulent heating approach to scientific feasibility. The overall network has approximately 300 nodes and 600 connecting legs. To describe the nature of the entire network would require an examination of each of the six subnetworks, which does not appear justified in this paper; however, each is described in detail in Ref. [2]. To illustrate the nature of individual subnetworks, one part of subnetwork A is shown in Fig. 4. The use of deterministic nodes (4, 5, 9, 10, etc.), probabilistic nodes (6, 12), start nodes (2), sink nodes (99), network alterations (A, B, C), alternate paths to an objective (6-8 versus 6-9-10-11-12-8), task activities (2-4, 4-5, 5-6, etc.), and information transfer activities (6-200, 6-7, 6-8, etc.) is all indicated in this figure.

One useful concept in PAF methodology is the use of a "summing circuit" to incorporate nontechnical and judgmental considerations into the simulation process and, hence, to increase the realism of the model. This circuit is a special network logic block, or subsystem, that receives and weighs data about the results of experiments and about the technical and administrative climate in which decisions are to be made. The mechanism used in the summing circuit is the triggering of a network alteration.
In the fusion project, the summing circuit accounts for these factors by considering three items that will affect the decision of whether or not to begin work on a larger fusion device: (1) attainment of minimal success in each of the major areas of preliminary experimental development, (2) a balanced appraisal of overall state of the art in major relevant technologies, and (3) consideration of the need for the device in view of advances in competing technologies.
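As a concrete illustration, the gating-and-weighting logic of such a summing circuit might be sketched as follows. The area names, weights, and competitor discount below are invented for illustration and are not those used in the fusion study.

```python
# Illustrative sketch of a PAF-style "summing circuit."  The circuit
# fires a network alteration -- here, a go/no-go decision on a larger
# device -- only when minimal successes are in hand, and it scales the
# decision probability by a weighted state-of-the-art score and by a
# discount for progress in competing technologies.

def summing_circuit(minimal_successes, art_scores, weights, competitor_penalty):
    """Return the probability of a favorable build decision.
    minimal_successes: dict area -> bool (minimal success achieved?)
    art_scores: dict area -> state-of-the-art score in [0, 1]
    weights: dict area -> relative weight of that area
    competitor_penalty: reduction in [0, 1] for competing technologies."""
    # (1) every major area must show at least minimal success
    if not all(minimal_successes.values()):
        return 0.0
    # (2) weighted appraisal of the overall state of the art
    total_w = sum(weights.values())
    art = sum(weights[a] * art_scores[a] for a in weights) / total_w
    # (3) discount for advances in competing technologies
    return max(0.0, art * (1.0 - competitor_penalty))

p_decide = summing_circuit(
    minimal_successes={"confinement": True, "heating": True, "magnets": True},
    art_scores={"confinement": 0.7, "heating": 0.6, "magnets": 0.8},
    weights={"confinement": 3, "heating": 2, "magnets": 1},
    competitor_penalty=0.2,
)
print(round(p_decide, 3))
```

If the decision falls unfavorably, the simulated program simply waits, the state-of-the-art scores improve as further experiments complete, and the circuit is evaluated again with a higher resulting probability, as described below.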


Fig. 4. Scientific feasibility demonstration (nonturbulent heating).

For example, before serious consideration will be given to the construction of a large scientific demonstration device, a reasonable likelihood of success is essential in each of the three major requirements for scientific feasibility: plasma confinement, plasma heating, and magnet systems. The second consideration is an appraisal of the overall state of the art in each of the several supporting technical areas. These areas include the degree to which the selected magnet and heating systems create the plasma conditions needed; the degree to which optimal plasma conditions can be met without complicating the toroidal design; and the degree of success in developing necessary auxiliary equipment, such as plasma pumps, power generators for supplying the magnetic field, and superconducting magnets. The overall appraisal of success is measured by a weighted score that represents all supporting technical areas. The last of the three considerations governing the decision whether or not to build a larger device is an evaluation of the success of competing energy production technologies. For example, a decision to fund the construction of a nontokamak scientific feasibility device would decrease the likelihood of a decision to build a tokamak-based device.

When the minimum requirements for a decision are met, the program will make a decision on construction of the new device. The probability of a favorable decision is based on all of the considerations listed above. If a decision is made not to build the new device at a given time, the program will wait until the overall state of the art has improved, at which time a new decision will be made based on a new, increased probability.

D. CONSIDERATION OF DIFFERENT FUNDING LEVELS

In the 1973 study, the overall network previously described was used to compare likelihoods and times for tokamak development at different levels of funding. Nineteen postulated funding strategies were simulated: three base strategies and sixteen mixed strategies. The three base strategies, in increasing order of funding rate, were Continued Present Funding (CPF), Moderately Increasing Funding (MIF), and Maximum Effective Funding (MEF). For the mixed strategies the funding of various subnetworks was increased or decreased relative to one of the overall base strategies. Since the Moderately Increasing Funding strategy was felt to be the one most likely to be followed, it was taken as the norm and was examined in more detail than the MEF and CPF options.

The MIF strategy assumed gradually increasing funding of the overall network at a compounded rate of 15 to 20 percent each year for 10 to 15 years, with increases beyond that point depending upon experimental results. In addition, the network was programmed so that a successful demonstration of scientific feasibility, that is, completion of Stage I, would automatically increase funding to the highest (MEF) level for the supporting technologies, such as the first vacuum wall and the blanket system, that would be required for the demonstration of engineering feasibility (Stage II).

The Maximum Effective Funding level assumed the highest level of funding that could be effectively utilized in the event that fusion power development should become a major national goal. No uncontrolled or frivolous project funding was assumed, but this option did assume that higher-risk strategies would be considered. For example, it was assumed that large-scale work on the confinement magnets for advanced reactors would no longer be strictly contingent upon the completion of earlier reactor experiments.

The Continued Present Funding strategy assumed that funding would continue at the 1973 level, with adjustments only for inflationary factors, until scientific feasibility was demonstrated. At that point, funding for the whole program was assumed to move to the middle (MIF) level. For all funding strategies it was assumed that additional funds would be made available for high-cost projects as they became clearly essential to fusion power development.
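The three base profiles can be sketched numerically. The MIF growth rate below is the midpoint of the 15 to 20 percent range quoted above; the flat CPF profile follows the text (constant 1973 level, inflation-adjusted); the MEF ramp of 35 percent is purely a placeholder, since the paper gives no figure for that strategy.

```python
# Sketch of the three base funding profiles, expressed as multiples of
# the 1973 budget.  MIF's 17.5 percent is the midpoint of the quoted
# 15-20 percent range; MEF's 35 percent ramp is an invented placeholder.

def funding_profile(strategy, base=1.0, years=15):
    """Yearly funding levels under a base strategy, compounded annually."""
    growth = {"CPF": 1.0, "MIF": 1.175, "MEF": 1.35}[strategy]
    return [base * growth ** year for year in range(years)]

for s in ("CPF", "MIF", "MEF"):
    print(s, [f"{x:.2f}" for x in funding_profile(s)[:5]])
```

Compounding at even a moderate rate more than quintuples the annual budget over 10 to 15 years, which is why the choice among these profiles dominates the time-to-milestone results reported below.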
Figure 5 compares the likelihood of demonstrating scientific feasibility under the MEF strategy with that under the MIF strategy as a function of time. While this overall funding increase appears to have little effect on the likelihood of eventual success, it does

Fig. 5. Likelihood of demonstration of scientific feasibility under moderately increasing and maximum effective funding.


move the most probable time to success forward by about half a decade, that is, from 13.2 to 9.0 years. The early plateau in each curve reflects the possibility that scientific feasibility might be demonstrated by a relatively small, early device, making further work on a larger device unnecessary. The sharp rise in each curve correspondingly represents the period during which experiments involving the larger tokamak device would be conducted.

Figure 6 compares the likelihood, at the two funding levels, of reaching the final milestone event, the completion of the first operating commercial reactor. Here success, if achieved, was projected about two decades earlier under the MEF strategy than under the MIF strategy, that is, about 30 years for the former against about 49 years for the latter. The likelihood of eventual overall success for a commercial tokamak at the two funding levels is indicated by the leveling off of the two curves in Fig. 6: at 30.3 percent for MIF, and at 33.7 percent for the higher, MEF.

These two likelihood percentages for the overall network deserve comment, lest they produce misleadingly pessimistic interpretations about the development of fusion power as a whole. In the first place, only one of several possible approaches to fusion power, the tokamak approach, was considered. Mathematically, if the other approaches are considered to be about as promising as the tokamak, the likelihood of success by one of the alternatives would increase to about 75 percent. This figure corresponds closely with the results obtained from subsequent analyses taking other approaches into account. In the second place, the logic networks for this project considered not only the likelihood of technical success, but also the possibility that fusion program funding might be discontinued, the possible effects of social value shifts, and other nontechnical factors that might affect the fusion power development program. The reader should also note that these

Fig. 6. Likelihood of first commercial reactor under moderately increasing and maximum effective funding.

Fig. 7. Likelihood of first commercial reactor under moderately increasing funding with and without maximum effective funding of first wall research.

results were based on a study completed in 1973. More recent research, discussed in section IV, indicates a higher likelihood of success.

Figure 7 illustrates the results from a mixed-strategy simulation. In this case the funding for first-wall research was raised to the MEF level while all other projects were funded at the MIF level. Research on the first vacuum wall is one of the most crucial areas of plasma engineering, since the refractory metal wall must be strong enough, even at very high temperatures and after intensive neutron bombardment, to provide a long-term vacuum seal. Because the problems are complex, and because the engineering stage of research in this area will be especially time-consuming before any prototype reactor can be built, first-wall development could well act as a constraining factor in the overall fusion program. Comparing the curves for the norm strategy (MIF) and the indicated mixed strategy (first-wall research only at the MEF level) confirms that early funding of first-wall research is one way to reduce the time required for attaining a successful commercial reactor. Keeping in mind that the norm strategy calls for an automatic shift of first-wall funding to the MEF level after scientific feasibility has been demonstrated, one notes the significance of high funding in this engineering research well before that milestone is reached. In short, the mixed-strategy curve indicates the desirability of initiating active first-wall research during Stage I.

A summary of the expected times required to reach key objectives, together with the associated likelihoods of success under each funding strategy, is shown in Table 1.
Although close examination of these data will reveal a number of relationships of interest to fusion planners, the most striking are the observations that sharply increased funding promises to advance the date of the first commercial reactor markedly; that research in the first-wall and blanket areas should be increased in the near future; and that research in superconducting magnets should be increased, particularly after scientific feasibility has been demonstrated.

TABLE 1
Expected Times Required to Reach Key Objectives, with Likelihoods of Success

                                                          Demonstration of        Completion of prototype  Completion of first
                                                          scientific feasibility  (plasma test) reactor    commercial reactor
Funding strategy                                          Mean (yr)   Prob (%)    Mean (yr)   Prob (%)     Mean (yr)   Prob (%)
MIF                                                          13.2       72.7         26.9       58.0          48.8       30.3
MEF                                                           9.0       75.8         17.5       66.1          30.0       33.7
CPF                                                          18.6       63.8         32.9       50.2          54.4       25.9
MIF with MEF of first wall R&D                               13.2       71.6         24.0       59.0          45.8       29.0
MIF with MEF of blanket R&D                                  13.3       71.4         26.7       68.7          49.0       29.0
MIF with MEF of CMS R&D                                      12.6       74.2         26.7       66.6          46.5       31.4
MIF with MEF of CMS R&D after DSF                            13.2       73.2         26.9       65.2          46.9       32.5
MIF with MEF of first wall & blanket R&D                     13.1       76.7         23.8       62.9          45.8       28.6
MIF with MEF of all supporting technology R&D                12.6       73.3         23.0       63.8          42.5       30.3
MIF with MEF of all supporting technology R&D after DSF      13.1       74.3         23.2       65.1          35.7       30.5
MIF with CPF of first wall R&D                               13.3       72.8         29.3       59.1          51.0       28.6
MIF with CPF of blanket R&D                                  13.2       74.1         27.0       60.4          48.9       28.6
MIF with CPF of CMS R&D                                      13.7       72.0         27.4       59.5          49.7       29.7
MEF with MIF of first wall R&D                                9.0       75.9         22.2       66.1          35.0       31.9
MEF with MIF of blanket R&D                                   8.9       76.8         18.8       66.3          31.6       31.1
MEF with MIF of CMS R&D                                       9.7       76.2         18.0       58.4          34.6       29.0
MEF with MIF of CMS R&D prior to DSF                          9.7       77.0         18.0       66.1          30.6       32.7
MEF with MIF of first wall & blanket R&D                      8.9       73.9         22.3       62.7          35.0       32.8
MEF with MIF of all supporting technology R&D                 9.8       75.7         22.7       59.2          39.5       29.1

Legend: MIF = Moderately Increasing Funding; MEF = Maximum Effective Funding; CPF = Continued Present Funding; R&D = Research and Design; CMS = Containment Magnet System; DSF = Demonstration of Scientific Feasibility.

IV. Recent PAF Applications

A. REVIEW OF PAF APPLICATIONS
The PAF technique has been used in several studies, both fusion-related and nonfusion-related, since the study described in section III was completed. In the fusion area, Lowther has conducted a PAF analysis of the laser fusion concept [5]. Logic networks were developed to describe the steps necessary for the completion of a commercial laser-based fusion power plant. One of Lowther's major conclusions was that the laser program may not have been sufficiently well defined for the PAF technique to be applied effectively. This conclusion pointed to an important limitation of the PAF technique. Miller [6] and Chiu [7] have developed networks on the theta-pinch and mirror confinement schemes, respectively, in much the same manner as presented for the

Fig. 8. Sample embedded program results.


tokamak in section III. Preliminary computer simulations using the GERTS simulation code were performed to study the sensitivity of the networks to management decisions. Miller also investigated various methods of time-parameter data reduction, which resulted in simpler correlation procedures.

Two studies by Nichols have been completed which utilize the PAF technique for fusion power analysis. The first investigated the effect of superconducting magnet technology development on the completion of an Experimental Power Reactor [8]. This study was the first utilization of the cost analysis capability of the GERTS code for the PAF technique. The second study took advantage of the work of Miller and Chiu in order to study all three confinement schemes currently receiving emphasis from the Division of Controlled Thermonuclear Research, namely, the mirror, tokamak, and theta-pinch confinement schemes [9].

The PAF technique has also been used for studies outside the area of controlled fusion. Lo investigated the production of substitute natural gas using the HYGAS process [10]. In addition, the PAF technique is being used to assist in the day-to-day management of a geothermal project being conducted at the Center for Energy Studies at The University of Texas at Austin. Oak Ridge National Laboratory has also initiated PAF studies concerning liquid metal fast breeder reactor reprocessing capabilities.

B. CURRENT AND PLANNED AREAS FOR PAF RESEARCH

In order to expand the potential of the PAF technique, research has been initiated into its use for the optimization of funding strategies. Lawler and Bell's 0-1 implicit enumeration scheme [11] is being combined with the GERTS IIIZ code to form an embedded simulation package [12]. Given certain measures of effectiveness, and objective and constraint functions defined in terms of them, the combined code will enable the user to find optimal solutions involving the choice of funding strategies. A test case has been run with a dummy logic network in order to demonstrate the possibilities of the package. Figure 8 shows the results of the simulations. The dashed lines on the figure represent the "efficient frontier" of funding schemes. This particular curve is concerned only with costs and time, but the package is also capable of dealing with parameters concerning the likelihood of success. The GERTS program is also being modified to provide graphical output, to facilitate and speed up interpretation of results.

V. Overall Evaluation of the PAF Technique

The primary measure of the value of the PAF technique is, of course, the extent to which it aids in the decision process. As mentioned at the beginning of this paper, that extent depends on the credibility of the projections and the facility with which they can be incorporated into the overall planning process. In both regards the PAF technique appears to have a number of desirable features.

The most important factor in the credibility of any technology forecasting tool is that it be based on believable data treated in a logical manner. In the case of PAF, the input data are collected from people with established credentials in the field in which they are providing information.
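The experience-weighted pooling applied to such expert inputs can be sketched as follows. The three-point estimate format, the weighting scheme, and the numbers below are invented for illustration and are not taken from the PAF interview protocol.

```python
# Hypothetical experience-weighted pooling of expert interval estimates,
# in the spirit of PAF's weighting of interview data.  Format, weights,
# and values are all invented for illustration.

def aggregate(estimates):
    """estimates: list of (low, likely, high, weight) tuples, one per
    expert, each giving a duration estimate in years plus an expertise
    weight.  Returns the pooled (low, likely, high)."""
    total_w = sum(e[3] for e in estimates)
    pooled = []
    for i in range(3):                       # low, likely, high in turn
        pooled.append(sum(e[i] * e[3] for e in estimates) / total_w)
    return tuple(round(v, 1) for v in pooled)

experts = [
    (4.0, 7.0, 12.0, 3),   # senior experimentalist, weight 3
    (5.0, 8.0, 15.0, 2),   # reactor engineer, weight 2
    (3.0, 6.0, 10.0, 1),   # newer researcher, weight 1
]
print(aggregate(experts))
```

The pooled triple can then serve directly as the parameters of an activity-duration distribution in the logic network.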
Both the gathering and the correlation of these data are conducted in a carefully planned and structured manner, tailored (a) to induce carefully considered estimates and (b) to give proper weight to differences in experience and background among the participants. The logic by which these data are used to develop projections can be easily demonstrated to the manager or planner by means of the graphic network. The decision maker can and should be encouraged to take part in the development of the


network and the choice of participants; this participation should increase his confidence in the credibility of the results as well as improve the overall results.

Another factor in the credibility of the PAF technique is the specific expression of likelihood in the PAF projections. Since forecasting and certainty are of necessity exclusive terms, the decision maker knows that there is no guarantee that any given projection will come about. But logically developed estimates of the relative likelihoods of different possible outcomes can be useful to the decision maker.

The GERTS IIIZ program permits easy consideration of the cross impacts of various events or trends on project development. Since programs are always subject to changes resulting from unexpected successes or failures or from exogenous events, this feature makes it possible to model real-life situations and should increase the credence of the user in PAF projections.

Finally, the fact that the PAF technique can be used to make projections of intermediate milestones as well as final project goals allows the user to spot-check the general accuracy of the overall PAF projections for a program. For example, the completion of initial heating tests on the "Texas Turbulent Torus", the completion of initial magnetic confinement tests on the MIT "Alcator" device, and the completion date of the Princeton Plasma Physics Laboratory's Large Torus (PLT) device all matched PAF projections very closely. These results would seem to reflect a general accuracy of early projections and should strengthen confidence in later projections.

In regard to utility, the format of the final projections in commonly used terms (time, money, and likelihood of success) makes it easy for the decision maker to understand and appreciate the information embodied in the output.
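The "efficient frontier" of funding schemes shown in Fig. 8 (section IV.B) amounts to a dominance screen over exactly these terms: expected total cost and expected completion time. A minimal sketch, using invented strategy data rather than actual GERTS output:

```python
# Dominance screen over (expected cost, expected time).  A strategy is
# kept only if no other strategy is at least as good on both measures
# and strictly better on one.  All figures below are dummy values.

def efficient_frontier(strategies):
    """strategies: dict name -> (expected_cost, expected_time).
    Returns the sorted names of non-dominated strategies."""
    frontier = []
    for name, (cost, time) in strategies.items():
        dominated = any(
            (c <= cost and t <= time) and (c < cost or t < time)
            for other, (c, t) in strategies.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

candidates = {
    "CPF":     (300.0, 54.4),
    "MIF":     (350.0, 48.8),
    "MEF":     (400.0, 30.0),
    "mixed A": (360.0, 49.5),   # dominated by MIF: costlier and slower
    "mixed B": (370.0, 40.0),
}
print(efficient_frontier(candidates))
```

Only the frontier strategies need be presented to the decision maker; every dominated scheme is costlier and slower than some alternative.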
Moreover, the ability to determine the probable paths that might be followed to the completion of the final goal is actually of more value to the user than the projection of the completion date itself. The most useful characteristic of the PAF technique to the manager, however, is the ease with which both the input data and the network itself can be modified to test the impact of different strategies and assumptions. The ability to evaluate easily the effects of different proposed actions allows key issues and uncertainties to be pinpointed and analyzed.

Although the influence to date of PAF analyses on decision making in the CTR Division has been limited, several decisions have coincided with PAF recommendations: for example, that overall funding be sharply increased; that first-wall, blanket, and magnet research be accelerated; and that the use of normal conducting magnets for the Experimental Prototype Reactor I be favored over the use of superconducting magnets. As greater experience and confidence are gained in PAF techniques, and as PAF methodologies are more carefully tailored to meet CTR Division needs, it is felt that PAF may play an increasing role in division management procedures. In the meantime, the division is sponsoring continued research in methods to increase PAF accuracy and utility.

Overall, it appears that PAF does represent a valuable tool for the decision maker. The ability of the technique to consider simultaneously a myriad of relevant factors, to evaluate quickly the implications of strategy changes, and to model real-life situations to almost any desired degree of accuracy means that PAF can provide insights into complex programs that are not attainable by traditional managerial means.
Possible applications of the PAF technique are many and varied: large governmental programs such as the development of synthetic fuels, the breeder reactor, solar energy, weapons systems, mass transit systems; improved agricultural techniques; large private programs such as the development of new automotive systems, new mining methods, new production techniques, new technical processes; international research and


development programs such as off-shore drilling, underseas mineral gathering, and improved shipping techniques. Although the PAF methodology appears to be most useful for larger, more complex technical programs, the basic principles are also applicable to smaller projects and to nontechnical projects as well. Again, possible examples abound: decisions as to which products are most likely to be successful and profitable; evaluation of methods to increase productivity; development of techniques to improve reliability; analysis of possible trouble spots in new product development. Obviously, the effort involved in using the PAF technique must be tailored to suit the purposes sought, the resources available, and the complexity of the program being analyzed.

Personal experience and judgment, consultation with knowledgeable associates and consultants, and other traditional management considerations will continue to dominate the decision-making process for the foreseeable future. However, PAF-type analyses can serve not only as a measure against which the manager can check opinions and conclusions drawn by other means, but also as a means of both illuminating and analyzing new approaches to program conduct. In short, the use of PAF will not diminish the importance of the manager, but rather will provide another useful tool to assist him in his decision-making role.

References
1. Martino, Joseph P., Evaluating Forecast Validity, in A Guide to Practical Technological Forecasting, Prentice-Hall, Englewood Cliffs, New Jersey, 1973.
2. Vanston, John H., Jr., Use of the Partitive Analytical Forecasting (PAF) Technique for Analysis of the Effects of Various Funding and Administrative Strategies on Nuclear Fusion Power Plant Development, Technical Report ESL-15, Energy Systems Laboratory, College of Engineering, The University of Texas at Austin, 1973.
3. Moder, Joseph J., and Phillips, Cecil R., Project Management with CPM and PERT, Van Nostrand Reinhold Company, New York, 1970.
4. Pritsker, A. Alan B., and Hurst, N. R., GASP IV: A Combined Continuous/Discrete Fortran-Based Simulation Language, Simulation 21 (September 1973).
5. Lowther, John A., Use of the Partitive Analytical Forecasting Technique for Analysis of Laser Fusion Power Plant Development, Master's thesis, College of Engineering, The University of Texas at Austin (1975).
6. Miller, Michael L., An Analysis of the Theta-Pinch Thermonuclear Power Project Using the Partitive Analytical Forecasting Technique, Master's thesis, College of Engineering, The University of Texas at Austin (1976).
7. Chiu, Debra, Use of the Partitive Analytical Forecasting Technique for the Analysis of Mirror Fusion Power Plant Development, Master's thesis, College of Engineering, The University of Texas at Austin (1976).
8. Nichols, Steven P., and Vanston, John H., Jr., A Study of Various Approaches to Magnet Development for CTR Using Partitive Analytical Forecasting, Center for Energy Studies Research Report No. 5, The University of Texas at Austin (February 1976).
9. Nichols, Steven P., and Vanston, John H., Jr., Evaluation of the Long Term Program Plans of the U.S. Division of Controlled Thermonuclear Research, Center for Energy Studies Research Report No. 7, The University of Texas at Austin (February 1976).
10. Lo, Wen-Hsien, Partitive Analytical Forecasting (PAF) for Various Funding Strategies in Developing High-Btu Coal Gasification Plants, Center for Energy Studies Research Report No. 4, The University of Texas at Austin (May 1975).
11. Lawler, E. L., and Bell, M. D., A Method of Solving Discrete Optimization Problems, Operations Research 14(6) (1966).
12. Soland, R. M., Vanston, John H., Jr., and Nichols, S. P., Optimal Resource Allocation in the Nuclear Fusion Development Program: An Optimization-Simulation Approach, paper presented at the Operations Research Society of America/The Institute of Management Sciences Joint National Meeting, Las Vegas, Nevada (November 1975).
Received 30 July 1976; revised 22 October 1976