Socio-Econ. Plan. Sci. Vol. 4, pp. 67-95 (1970). Pergamon Press. Printed in Great Britain

A COMPUTER-BASED METHOD FOR INCORPORATING RISK/UNCERTAINTY INTO COST/UTILITY ANALYSIS
GARY J. KOPFF

Office of Deputy Under Secretary for Policy Analysis and Program Evaluation, U.S. Department of Housing and Urban Development, Washington, D.C. 20410
This paper presents a computer-based method designed explicitly to incorporate risk and risk/uncertainty into a planning or budget model and provides an extensive theoretical examination of the risk/optimization relationship. Background material on cost-effectiveness and benefit-cost analysis is reviewed; examples of the programs are included.
CONFRONTED with a plethora of increasingly massive and complex urban problems, administrators of local governmental units and relevant Federal departments and agencies have turned with growing frequency to the modern informational and decision-making technologies that have been developed in the defense-aerospace complex, in some progressive private industries and in the nation's "think tanks" and universities. In addition, economic analysis-both macro-analysis and micro-analysis-has had an increasingly important role in shaping fiscal and budgetary policy. There has been, consequently, a gradual convergence of the overall planning process and the budgetary process in an attempt to attain methods for deriving an optimal allocation of public funds amongst competing uses. One such method or set of methods which is receiving widespread acceptance at present is cost-utility analysis-especially benefit-cost analysis and cost-effectiveness analysis. The increasingly quantitative orientation of many public administrators in assessing the merits of new programs and projects frequently occurs concomitantly with the implementation of a planning-programming-budgeting system (PPBS)-such as now officially exists within all Federal agencies and departments-or the utilization of a related system based upon program budgeting and quantitative systems planning. Unfortunately, either because of inadequately functioning management information systems (MIS) or because of inherently risky and uncertain requirement and cost parameters, much cost-utility analysis must be undertaken under conditions of risk and/or uncertainty. Administrators who are attempting to utilize more systematic and quantitatively-oriented forms of cost-utility analysis would be wise, indeed, to avail themselves of the long and arduous years of experience with cost-utility analysis possessed by the water resource agencies and by the defense-aerospace complex. Seasoned administrators as well as neophytes ought to recall the familiar acronym known to those who perform computer operations: "GI-GO" means "garbage in-garbage out." The most sophisticated and well-wrought analysis can be brought to naught if the quantitative inputs are beset with uncertainties which unavoidably and unpredictably infect the output with uncertainty.
The lesson suggested by the water resource and defense-aerospace experiences is that if risk and uncertainty do exist, to a significant degree, in the input data, then the cost-utility analysis must explicitly incorporate them! This paper addresses itself to the task of incorporating risk and/or uncertainty into a formal cost-utility analysis. There are four major divisions to this paper. Part I is a background sketch including the following: an examination of the context of budgetary reform, a review of benefit-cost studies as utilized in the water resources area, a review of cost-effectiveness studies as employed in weapons systems analysis, and a summary of the features of PPB systems. Part II is a theoretical examination of the impact of risk and uncertainty upon utility theory and optimization theory. This part includes the following: definitions of the realms of decision-making under conditions of certainty, risk, uncertainty and risk/uncertainty; a presentation of optimization theory and the index selection problem; an analysis of various risk postures assumed by decision-makers based upon utility theory; and a review of various approaches to incorporating risk and uncertainty into an analysis. Part III presents abstractly a computer-based method designed to explicitly incorporate risk and risk/uncertainty into a cost-utility analysis. It includes the following: an input-output uncertainty system model, the selection of subjective probability estimated distributions, and the use of Monte Carlo simulation to determine the net effect of the sub-system risks and uncertainties. Part IV is an example of the method as applied to a simplified cost-utility analysis.

I. COST-UTILITY ANALYSIS: BACKGROUND SKETCH
Process of budgetary reform
Every budget system comprises three processes which, although they are often not distinguishable operationally, may be distinguished for the sake of clarity. They are (1) the planning process, (2) the management process, and (3) the control process. Based upon an analytic framework proposed by Robert N. Anthony, the processes may be conceived as follows: Planning involves the determination of objectives, the evaluation of alternative courses of action, and the authorization of select programs. Management involves the programming of approved goals into specific projects and activities, the design of organizational units to carry out approved programs, and the staffing of these units and the procurement of their necessary resources. Control refers to the process of binding operating officials to the policies and plans set forth by their superiors [1]. Ideally a budgetary system would contain all three processes, balanced and interrelated; in practice, however, they have tended to exist as competing processes, with a clearly dominant process being evidenced at a particular time for a given budget system. Allen Schick, a consultant for the Federal Bureau of the Budget, has utilized the aforementioned framework in his longitudinal study of budgetary reform in the Federal government [2]. Mr. Schick identifies three stages of reform corresponding to the three processes: in the first stage-the control stage-dating roughly from 1920 to 1935, the dominant concern was to develop an adequate system of expenditure control; in the second stage-the management stage-performance budgeting, growing out of the efforts to manage New Deal programs and defense programs, was dominant; and in the third and present stage-the planning stage-the linkage of budgeting and planning has been typified by the introduction of PPBS within the Federal government [2, pp. 33-44].
The planning stage has been influenced by three trends:
1. economic analysis-both macro-analysis and micro-analysis-has had an increasing role in the shaping of overall fiscal policy and more detailed budgetary policy;
2. new informational and decision-making technologies have enlarged the applicability of objective analysis to policy-making;
3. a convergence has gradually occurred between the overall planning process and the budgetary process [2, pp. 7-8].
The first trend, especially micro-economic analysis, dating to the early efforts of welfare economists to construct a science of public finance, has been the major trend responsible for cost-utility analysis. Such a science of public finance, predicated on the principle of marginal utility, would furnish objective criteria for determining the optimal allocation of public funds amongst competing uses by appraising the marginal costs and the marginal benefits that would accrue from alternatives, thereby determining the combination which maximized utility. This paper does not propose that cost-utility analysis is, in fact, the basis of such a science, nor that it can in and of itself "furnish objective criteria for determining the optimal allocation of public funds". Cost-utility analysis, however, does play an increasing role in particular situations where quantitative analysis amongst alternatives is employed. In 1958 Roland N. McKean concluded his Efficiency in Government through Systems Analysis with a listing of opportunities for analysis [5]. To update that list and to specify those situations where cost-utility analysis is being undertaken would surely impress one with the widespread applicability of such analysis in the planning stage of the budgetary process [6]. The successive dominance of each of the three processes of budgetary systems implies that different skills were required during the different stages of budgetary reform. For instance, in the staffing of budget offices during the early control stage, there was a preponderance of accountants. With the transition to the management stage, the role of administrators was enhanced. The current transition into the planning stage, as suggested by early experience in the water resource and defense-aerospace fields, will undoubtedly necessitate a transition to economists, systems analysts and administrators who are at ease with quantitative analysis [6].

Benefit-cost in water resource program analysis

Cost-utility analysis had its inception in the field of water resource development, with the major initial contributions coming from the personnel of the Bureau of Reclamation of the United States Department of the Interior, the United States Army Corps of Engineers, the Bureau of the Budget, and the former Bureau of Agricultural Economics. Early academic contributions were made by Ciriacy-Wantrup at the University of California and by other agricultural economists at land grant colleges. Subsequent academic contributions have come both from economists-such as Otto Eckstein, John V. Krutilla and Roland N. McKean-and from political scientists-such as Arthur Maass. The framework of decision rules, however, evolved less from the academicians' theories than from the response to the exigencies of "in-the-field" operations. The initial mandate for benefit-cost analysis was found in the Flood Control Act of 1936, which required that benefits must exceed costs "to whomsoever they may accrue" for projects to be authorized [7]. Otto Eckstein has commented upon this mandate:
. . . it should be recognized that the legal requirement which the Act of 1936 imposes on the agencies is, in fact, impossible to meet . . . Measuring all benefits and costs "to whomsoever they may accrue" is not only beyond the present ability of economic science, but presents conceptual difficulties which by their very nature can never be overcome except by making very specific assumptions about which the Act does not prescribe. . . . First, "benefit" requires definition. . . . Second, a method of comparing and aggregating the benefits that accrue to different people must be defined. . . . Third, the same problems exist with regard to costs [8].
Thus, the Act of 1936 authorized new surveys by the Corps of Engineers and confronted them with considerable definitional and conceptual difficulties. Part of the Corps' project report was to be, and continues to be, an economic analysis applying the following uniform project study criteria: all contributions to national output and welfare that the Corps recognizes are estimated and valued to derive an estimate of national benefit, and all costs are estimated. The resulting benefit-cost ratio must exceed unity or the project does not possess economic feasibility [9]. The project report for the Bureau of Reclamation-whose activities are similar, though confined to the seventeen Western states-is analogous to the Corps' analytic form, yet it differs significantly in concept [10].
As a result of the legislation and numerous subsequent departmental directives, the Corps of Engineers, the Department of Agriculture, and the Department of the Interior's Bureau of Reclamation were all required to use benefit-cost evaluation in some situations, and they elected to do so in others. . . . The advisability of coordinating their procedures soon became evident, and led to a "tripartite" agreement in 1939 and a quadripartite agreement, which included the Federal Power Commission, in 1943. This led in turn to the Federal Inter-Agency River Basin Committee, whose membership included the Department of Commerce and, later, the Public Health Service of the Department of Health, Education and Welfare [11].
In 1946, the Federal Inter-Agency River Basin Committee (FIARBC), composed of all the water resource agencies, appointed a subcommittee on benefits and costs which was instructed to formulate principles of project evaluation that would be acceptable to all of the agencies. This subcommittee has issued a number of reports on various aspects of benefit-cost analysis, and in May of 1950 set forth a complete set of principles for project evaluation in a codified form which has since been designated as the "Green Book" [12]. It should be noted, however, that the subcommittee has no enforcement powers, and compliance with its recommendations has been most uneven. Two years after the "Green Book", the Bureau of the Budget altered the criterion from its broad-gauged basis-benefits "to whomsoever they may accrue"-to benefits that increase national product [13]. This reformulation placed this form of cost-utility analysis in the realm of efficiency determination as known by classical economists. Throughout the 1950's, this field was discovered by other economists and the theoretical foundations received considerable attention.
. . . since the inception of benefit-cost computations, the techniques applied have been continually modified and refined to more accurately identify and measure both benefits and costs. Genuine progress toward the goal of increasing the accuracy of such computations can be seen in the manuals on the preparation of survey reports issued by the Corps, the standards and procedures used in evaluation by the Bureau of the Budget, and the intermittent reports of the other Federal agencies and study commissions. Through these studies relevant concepts such as secondary benefits,
associated costs, price levels, interest rates, periods of analysis, and cost allocation to multi-purpose projects have become more clearly defined, and the problems of empirical measurement have been pointed out and solved [14].
In summation, it should be noted that the water resource development field has been involved with the "planning stage" of budgetary reform for a considerably longer period of time than have other governmental fields. Nonetheless, statements by Congressmen concerning the use of benefit-cost calculations in this field still run the gamut from complete disdain of the benefit-cost technique to a sincere desire to increase the accuracy of the computations and to adopt the technique as a sole criterion [15].

Cost-effectiveness in defense and aerospace systems analysis
Systems analysis as utilized in the defense-aerospace complex is nearly equivalent to "systems engineering", "operations research", "multi-disciplinary problem-solving teams" and "applied scientific method". Systems analysis has been described as an art which gives bad answers to problems which otherwise would receive even worse answers [16]. A more operational definition is the one given by Thomas C. Rowan, Vice President of the Advanced Systems Division of Systems Development Corporation. Mr. Rowan feels that the process as used in his aerospace-oriented corporation basically implies a sequence of activities such as the following:
1. Definition and detailed description of the boundaries of the system.
2. Functional description of the system in terms of the component subsystems and their operational interaction.
3. Determination of objectives and criteria of optimal system performance.
4. Examination of reasonable alternative configurations of system elements that approximate optimal system performance, and determination of the consequences of each configuration in terms of feasibility, acceptability, and cost-effectiveness.
5. Finally, objective presentation of these alternatives and the supporting evidence to the responsible decision-makers so that they may make appropriate decisions with respect to selecting one of the alternatives for design and implementation, keeping in mind, of course, that a legitimate outcome may be to make no changes [17].
In the introduction to Analysis for Military Decisions, E. S. Quade wrote in 1964 that, in the light of its origins and its present uses, systems analysis might be defined as
inquiry to aid the decision-maker choose a course of action by systematically investigating his proper objectives, comparing quantitatively where possible the costs, effectiveness and risks associated with the alternative policies and strategies for achieving them and formulating additional alternatives if those examined are found wanting [18].
Thus, one ascertains in systems analysis the antecedents for and generalized environment of cost-effectiveness analysis applied to complex problems of choice under conditions of high uncertainty. Systems analysis reduces to the narrower cost-effectiveness analysis once agreement has been reached on the following: (1) the objectives, and (2) the measure of effectiveness of the system. Historically, one traces the development of systems analysis and cost-effectiveness analysis to the Second World War. The invention, development, fabrication and deployment of a limited number of radar sets in the United Kingdom just prior to World War II provided
a problem area for the development of a discipline that came to be known, at that time, as operations analysis or systems analysis. The notion of a total system and its optimization with respect to some parameter or measure of effectiveness grew very rapidly under war pressures, with wartime expenditures receiving highest priorities. At the completion of the war there were approximately 400 scientists in the United Kingdom and an equal number with the American military services who regarded this new concept of systems analysis as a great area of development for the future. Since the end of the war, consequently, systems analysis has enjoyed a very rapid growth both in territory long converted to its use-the military establishment-and in the emerging aerospace industries, where priority programs, high expenditure levels and aggregations of technically-oriented personnel coalesced. Only in very recent years has there been a growth in the application of systems analysis to civilian agencies of the Federal government. Considerably less progress has been made applying systems analysis to State and local governments [19]. It is anticipated that the emergence of the planning stage of budgetary reform will usher the systems analysts into the urban field.

Planning-programming-budgeting systems (PPBS)

PPBS has been described as a system aimed at helping management make better decisions on the allocation of resources among alternative ways to attain government objectives. Its essence is the development and presentation of relevant information as to the full implications-the costs and benefits-of the major alternative courses of action. PPBS hopes to minimize the amount of piecemeal, fragmented, and last-minute program evaluation which tends to occur under present planning and budgeting practices. PPBS is the quintessence of the planning stage when it is fully operative. Cost-utility analysis is at the heart of sophisticated analyses in PPBS [20]. The primary distinctive features of a PPB system are the following: it focuses on identifying the fundamental objectives of the government and then relating all activities to these (regardless of organizational placement); future year implications are explicitly identified; all pertinent costs are considered; and systematic analysis of alternatives is performed. The last feature is the crux of PPBS. The systematic analysis involves (1) identification of the government objectives, (2) explicit, systematic identification of the government's alternative ways of carrying out the objectives, (3) estimation of the total cost implication of each alternative, and (4) estimation of the expected results of each alternative. The correspondence between PPBS and systems analysis is far from coincidental, as a review of the history of PPBS will make clear. PPBS as distinct from program budgeting-i.e. a budgetary form emphasizing categorization by programs, functions, or activities rather than by objects of expenditure-is a recent innovation dating to 1961, when Robert S. McNamara, with considerable assistance from his new assistant from RAND, Dr. Charles J. Hitch, instituted PPBS for the management of all United States military programs. During the time PPBS has been employed by the Defense Department, the defense-aerospace complex has developed considerable sophistication in performing systems analyses generally and cost-effectiveness studies in particular. The accumulated experience with cost-effectiveness analysis parallels the accumulated experience with benefit-cost analysis possessed by the water resource agencies. Charles Hitch was Secretary McNamara's Assistant Secretary for Systems Analysis
and his predilections are carried on by Alain Enthoven, his successor. The utilization of cost-effectiveness analysis in the Defense Department has-like benefit-cost analysis in the water resource area-not been without its detractors. Prime amongst them is Admiral Rickover, who feels that the Office of Systems Analysis, which began as a "tool of management to help review programs for the Defense Department's controller, has now turned into the ultimate decision-making organization" [21]. The most significant step in the development of cost-utility analysis was Bulletin 66-3, issued by the Executive Office of the President in October of 1965 through the Bureau of the Budget, stating that "the President has directed the introduction of an integrated planning-programming-budgeting system in the executive branch". Essential to the system are:
(2) Analyses of possible alternative objectives of the agency and of alternative programs for meeting these objectives. Many different techniques of analysis will be appropriate, but central should be the carrying out of broad systems analysis in which alternative programs will be compared with respect to both their costs and their benefits [22].
The overall system is designed to enable each agency of the Federal government to "evaluate thoroughly and compare the benefits and costs of programs" in the same fashion-with cost-utility analysis-as had been done by the water resource agencies and the Defense Department. On July 18, 1967, Bulletin 68-2 was issued, containing current guidelines for the continued development of PPBS within additional agencies of the Executive Department. It applies after January 1, 1968 to thirty-two major executive agencies and departments, from the mammoth Defense Department to the minuscule Small Business Administration. Following the institution of PPBS throughout the Federal government, comparable systems appeared at the state, county and local levels. One notable set of experiments is the "5-5-5 program", wherein five States, five counties and five cities have banded together under a grant from the Ford Foundation to participate in an experiment with PPBS coordinated by George Washington University. The 5-5-5 program and the Federal PPBS have further encouraged numerous localities to experiment with PPBS, until today there exists a very healthy market for administrative analysts at all levels of government across all programs. In 1968 Kenneth Mulligan of the United States Civil Service Commission estimated that the nation was short over 10,000 people in the administrative-analysis area in the Federal government alone [23]. As of mid-1968, the emergence of the planning orientation to budgeting, personified in the PPB systems, is unquestioned. The increasing significance of cost-utility analysis is profound. Such analysis is applied to multitudinous program areas-many of which are doubtfully amenable to such an approach, given the competence of the practitioners-and the incidence of conditions of risk and/or uncertainty is very great indeed. One hopes that in the awakening to cost-utility analysis's possibilities, the two-pronged background of this analysis will not be overlooked and the lessons embedded in the accumulated experience will not be ignored.
II. ANALYSIS UNDER CONDITIONS OF RISK/UNCERTAINTY: THE THEORY

Definitions of certainty, risk, uncertainty and risk/uncertainty
Professor Brinser of the School of Natural Resources at the University of Michigan has emphasized that a primary objective of criteria for guiding investment in natural resource development should be to establish a set of rules for the treatment of uncertainty. The "uncertainty" to which he alludes includes not only ambiguity about the relationship between social and personal time preferences, but also the differences in the values that are assigned both to that which is to be produced and to those processes by which it is to be produced [24]. A more refined grouping of types of uncertainty might distinguish amongst uncertainty as applied to (1) specified "costs" and "gains", (2) "given" components of the system or of its subsystems, (3) the responses of other decision-makers, or (4) technological change [25]. In the Brookings Institution's publication Measuring Benefits of Government Investments, Frederick M. Scherer, an economist at Princeton and co-author of The Weapons Acquisition Process: An Economic Analysis, underscores the two major points of this paper: (1) the importance of uncertainty, and (2) risk adjustments to account for uncertainty:
Uncertainty is, of course, a companion to all human endeavors. . . . In my focus on these topics the uncertainty element will be slighted, entering into the analysis only as an incidental (and usually a disturbing) variable. This is clearly a significant limitation, for I believe that uncertainty is in many ways the most important single feature of research and development programs. Yet, to have the courage to attempt any analysis of the problem at all, one must assume that cost and benefit estimates can be and are made, despite all of the uncertainties, and that decisions are made, with or without risk aversion adjustments, on the basis of these estimates [26].
This paper proposes a method to allow those risk aversion adjustments to be made. The significance of uncertainty-suggested by Brinser and Scherer-is clearly pervasive throughout governmental budgetary planning. Moreover, it is essential to realize that the problems posed by uncertainty are not mere curiosities or conundrums which provide intellectual exercise. They are burdensome problems which confront men who make decisions. To approach the problem of decision-making under uncertainty in a more rigorous fashion than suggested by the previous general remarks, it is helpful to recognize that the field of decision-making is commonly partitioned according to whether a decision is made by (1) an individual, or (2) a group, and according to whether it is made under conditions of (1) certainty, (2) risk, or (3) uncertainty. In Games and Decisions, Luce and Raiffa have added a fourth category to the second partition-the condition of risk/uncertainty [27]. This condition is also referred to as the province of statistical inference and is the crucial province for decision-making as examined in this paper. The first partition will be ignored in this paper, with all decisions assumed to be made by a single person or a unanimous group. A decision will be considered in the realm of:
certainty if each action is known to lead invariably to a specified outcome;
risk if each action leads to one of a set of possible outcomes, each occurring with a "known" probability;
uncertainty if actions have as consequences a set of possible outcomes, but the probabilities of these outcomes are completely unknown or are not even meaningful by definition; and
risk/uncertainty if each action leads to one of a set of possible outcomes with probabilities derived from subjective probability estimates [27].
Frank H. Knight in 1921 made the distinction between "measurable uncertainty"-i.e. risk, in this paper-which can be measured and represented by numerical probabilities, and "unmeasurable uncertainty"-i.e. uncertainty, in this paper-wherein (1) the decision-maker was ignorant of the statistical frequencies of events relevant to his decisions, (2) a priori calculations were impossible, or (3) the events were in some sense unique [28]. What then is the theoretical justification for the "risk/uncertainty" category and for the method of this paper based upon subjective probability estimates? Daniel Ellsberg of the RAND Corporation has pointed out that a number of axiom systems have been proposed which prescribe constraints on decision-making under the realm of uncertainty. These systems are more or less equivalent and possess the common implication that-for a "rational" man whose behavior fits their constraints-all uncertainties can be reduced to risks [29]. The basis for this notion is suggested in F. P. Ramsey's early observation that "the degree of belief . . . is the extent to which we are prepared to act upon it", and "the probability of 1/3 is clearly related to the kind of belief which would lead to a bet of 2 to 1" [30]. To put it more simply, as did Damon Runyon in a different context, "The race is not always to the swift or the battle to the strong, but that's the way to bet" [31]. Ellsberg explains:
Starting from the notion that gambling choices are influenced by, or "reflect", differing degrees of belief, this approach (the axiomatic approach) sets out to infer those beliefs from actual choices. . . . If one picks the right choices to observe, and if the . . . postulates . . . are found to be satisfied, this distinction can be made unambiguously, and either qualitative, or ideally, numerical probabilities can be determined [32].
The propounders of these axioms tend to be hopeful that the rules will be commonly satisfied, at least roughly and most of the time, because they regard the postulates as normative axioms, widely accepted principles of rational behavior. At this point in the argument, Ellsberg has obviated-under prescribed conditions-the "uncertainty" realm and swelled the "risk" realm. Yet the argument swings back to Knight's distinctions as Ellsberg asks whether, when considering only deliberate decisions, the realm of "uncertainty" does in any case exist, or whether all uncertainties are reducible to risks. In his paper he proposes to indicate a realm of decision-making in which many otherwise reasonable people neither wish nor tend to conform to the axiom sets. It follows that in that realm-the realm of uncertainty-there would be simply no way to infer meaningful probabilities for those events from their choices and, consequently, theories and methods-such as the one proposed in Part III of this paper-which purport to describe the uncertainties in terms of probabilities would be quite inapplicable. In addition, decision-makers in such a realm could not be described as maximizing the mathematical expectation of utility on the basis of probabilities.
Hence, it would be impossible to derive numerical von Neumann-Morgenstern utility indices from their choices amongst gambles to ascertain their risk posture. In assessing the Knight distinction and the Ellsberg argument, this paper assumes validity in distinguishing between risk and uncertainty, yet it views the realm of uncertainty, as Knight describes it, as being too global and capable of partitioning into two realms-uncertainty and risk/uncertainty. The latter realm refers to decisions where the axiomatic approaches and behavioral studies indicate that "uncertainty" reduces to "risk". The realms of risk and risk/uncertainty will be the focal areas for the application of the method of this paper [33].
Optimization theory and the choice of index

Decision theory includes far more complexities than can reasonably be related to cost-utility analysis in this short paper. Moreover, the thrust of the paper is aimed at the difficulties imposed by operating in the realm of risk and uncertainty; it is not directly concerned with the intricacies even of cost-utility decision theory. Nonetheless, it is deemed necessary to explain briefly the essence of optimization decision-making theory for the light it may shed on cost-utility analysis. Conceptually, optimization theory is not difficult to grasp and, in fact, appears quite intuitively obvious: given a set of possible acts, choose one (or all) of those which maximize (or minimize) some given index. Symbolically, let X be a generic act in a given set F of feasible acts and let f(X) be an index associated with (or appraising) X; then find those X° in F which yield the maximum (or minimum) index-i.e. f(X°) ≥ f(X) for all X in F [34]. Optimization theory assumes (1) that alternative actions exist, (2) that the index is quantifiable-hence, both cost and utility are quantifiable, although not necessarily in commensurable units-and (3) that the optimal index value indicates the preferred alternative action. For the sake of clarity in explaining the method of this paper, adherence to optimization theory will be assumed in cost-utility analysis. One must be aware, however, of the departure from reality that this assumption makes-i.e. seldom does one find in cost-utility analysis that decisions are made purely on the basis of optimizing the chosen index, be it B/C, B - C, C/E or even rate of return. Nor does this paper advocate such a puristic policy. As an example of the discrepancy between optimization theory and real-world applications of cost-utility analysis, one might consider the study undertaken by Haveman in the early 1960's. Haveman sought to test the hypothesis that Congress-a group decision-maker-has sought to maximize a cost-utility index, specifically the benefit-cost ratio defined as the present value of the benefit stream divided by the sum of the present value of the annual stream of maintenance, operation and repair costs added to the immediate value of the investment. He examined the potential projects in the 1960 Authorization Bill compared to the actual projects in the 1962 Public Works Appropriation Bill. His conclusions were (1) that Congress does not seek to maximize the benefit-cost ratio for new project selection, (2) that the efficiency concept is, nonetheless, used as an underlying criterion, and (3) that there exists a hurdle in that the B/C ratio must exceed unity for a project to be approved, but there does not exist a continuous ranking utilizing a monotonic array of B/C indices [35]. Hence, although projects with high benefit-cost ratios tend to be chosen before projects with low ratios, no consistent or persistent pattern is manifest to support adherence to optimization theory. That the ratio is not regarded as the sole piece of information upon which to base an
investment decision is recognized by the agencies which prepare and use the ratios and by congressmen who use them in their decisions.
The establishment of any fixed figure of a ratio with a minimum below which a project would be assured of rejection and a second level above which a project would be assured of acceptance appears to place an unwarranted emphasis on the validity and accuracy of such an index. The benefit-cost ratio . . . can be accepted as only one measure of the national and public worth of a project [36].
Nonetheless, it is the opinion of this author that so long as the index derived from a cost-utility analysis is calculated, it must strive for a true reflection of risks and uncertainty based upon the decision-maker's risk posture and the subjective probability estimates of the component uncertainties. A second caveat is in order concerning the face-value acceptance of optimization theory. It is theoretically not possible to simultaneously maximize utility and minimize the costs of attaining this utility, thereby achieving the optimal value for a cost-utility index. Utilizing the optimization theory notation, cost-utility analysis assumes either of two general formats:
1. For a specified level of utility the decision-maker seeks to choose from the set F of possible acts or alternatives that one act or "system configuration" which optimizes the given index f-i.e. which minimizes the system's cost for the specified level of utility.
2. For a specified level of costs the decision-maker seeks to choose from the set F of possible acts that one act or alternative system configuration which optimizes the given index f-i.e. which maximizes the system's utility for the specified level of costs.
Thus, for example, if the index is the B/C ratio then the first format for analysis would be in terms of "dollars/unit of benefits" and the second format would be in terms of "benefits/unit dollar". In either case the optimal act X° would maximize the index f(X) = B/C. If the index were the cost-effectiveness ratio, then the optimal act X° would minimize the index f(X) = C/E. To move from the abstract format of optimization theory to specific formats for cost-utility analysis requires the specification of the index f(X). The specific definition of this index and of its components is clearly crucial for the success of the analysis. This paper will not delve into the quagmire of definitional and conceptual problems such as the selection of the discount rate for discounting time streams; the definitions of benefits, initial investment, and yearly operating and maintenance costs; or problems of secondary benefits, intangible benefits, and benefits as part of a multi-objective as opposed to single-objective social welfare function. Nor will difficulties such as price level adjustments and other problems which cause dollar units to be incommensurable be treated. Nonetheless, it would be too general an argument to by-pass the index selection problem completely.
In the water resource development field, the index specified by the "Green Book" is the benefit-cost ratio: "the ratio of benefits and costs reflects both benefit and cost values and is the recommended basis for comparison of projects" [37]. The Corps of Engineers translates this dictum into the following formula:

B/C = [ΣB/(1 + i)^t] / [K + ΣO/(1 + i)^t]

where B represents the expected annual benefits which are additions to national income
from the project, i represents the rate of interest used to discount the future streams of benefits and costs, t represents the estimated life of the project, K represents the fixed initial investment cost, and O represents the estimated annual operating, maintenance and repair costs [38]. Otto Eckstein supports the selection of the B/C ratio as the appropriate index. He analyzes both the economic theory behind B/C analysis and the application of the B/C concept by the Bureau of Reclamation and the Corps of Engineers. He discards all factors except economic efficiency and in his analysis "treats B/C analysis as a means . . . of selecting the most desirable projects from the point of view of economic efficiency". He feels that only by ranking projects by the B/C ratio, and allocating funds to projects with the highest ratios first and moving to projects with successively lower ratios until the budget is exhausted, can the efficiency concept be satisfactorily treated [39]. Roland McKean disagrees with Eckstein and with those who support the B/C index.
To maximize the ratio B/C may be plausible at first glance, but it allows the absolute magnitude of the achievement or of the costs to roam at will. Surely it would be a mistake to tempt the decision-maker to ignore the absolute amounts. . . . In fact the only way to know what such a ratio means is to tighten the constraint until either a single budget or a particular degree of effectiveness is specified. At that point the ratio reduces to the test of maximum effectiveness for a given budget or else for a specified level of effectiveness at a minimum cost [40].
McKean feels, moreover, that the ratio is tantamount to the ratio of total receipts to total costs in the private sphere. This is a bad index, biased towards low-turnover investments. He then proceeds to develop an efficiency criterion which, he suggests, evaluates the relationship between a project's benefits and costs more appropriately. McKean's criterion, however, can only serve as a partial criterion, thereby failing to satisfy the requirements of formal optimization theory:
. . . our partial criterion can only tell which projects are "efficient". By itself, this test cannot tell us which position is best in any ultimate sense. . . . Hence, it is deemed appropriate here to have these cost-benefit measurements shed light on efficiency in the limited sense, and to have further exhibits shed light on redistributional effects [41].
McKean analyzes the rate of return index in considerable depth, yet this falls outside of the ken of this paper [41]. In summary, McKean's caveat well summarizes this treacherous area of cost-utility analysis: "Guard against particularly treacherous tests (i.e. indices). One ubiquitous and untrustworthy candidate is the maximization of the ratio of gain to cost" [42].
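To make the index-selection discussion concrete, the following minimal sketch (Python is used here purely as illustration; all project figures are invented) computes the Corps' B/C formula given above for a small hypothetical feasible set F and then applies the pure optimization rule of selecting the act X° for which f(X°) ≥ f(X) for all X in F:

# Illustrative sketch: the Corps of Engineers index
#   B/C = [sum_t B/(1+i)^t] / [K + sum_t O/(1+i)^t]
# applied to a hypothetical feasible set F, followed by the pure
# optimization rule of taking the act with the maximum index.

def benefit_cost_ratio(B, O, K, i, t):
    """B: annual benefits; O: annual operating, maintenance and repair
    costs; K: fixed initial investment; i: discount rate; t: project life."""
    pv_benefits = sum(B / (1 + i) ** year for year in range(1, t + 1))
    pv_operating = sum(O / (1 + i) ** year for year in range(1, t + 1))
    return pv_benefits / (K + pv_operating)

# Hypothetical set F of project configurations (all figures invented).
F = {
    "X_a": dict(B=120_000, O=15_000, K=900_000, i=0.04, t=50),
    "X_b": dict(B=150_000, O=40_000, K=1_100_000, i=0.04, t=50),
}

ratios = {name: benefit_cost_ratio(**p) for name, p in F.items()}
for name, r in ratios.items():
    print(f"{name}: B/C = {r:.2f}")
# The act X° with f(X°) >= f(X) for all X in F:
print("Optimal act under the B/C index:", max(ratios, key=ratios.get))

As McKean's caveat warns, the printed ratios say nothing about the absolute magnitudes of the benefits and costs involved; the maximization step is meaningful only once a budget level or a required level of effectiveness has been fixed.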
Risk postures of decision-makers and utility theory
The method proposed in Part III of this paper provides the decision-maker with information about alternative configurations beyond the single-valued data calculated as an average or expected value. Why and under what conditions is such additional data necessary? An approach to answering this question lies in applying the theoretical work in utility theory and game theory of John von Neumann and Oskar Morgenstern. Building upon the utility index, three alternative risk postures will be demonstrated. In The Economics of Defense in the Nuclear Age, Charles Hitch notes that one can sympathize with the general who shouts at his analyst: "Give me a number, not a range!" Unique numbers are easier to work with and to think about than ranges or probability
distributions, yet the general is probably asking his analyst to falsify the real world in a manner that will make it impossible for him, as a commander, to make good decisions [43].
The practice of military intelligence agencies of presenting numbers ("best guesses") rather than ranges in their estimates obscures but does nothing to remedy the incompleteness and unreliability of most of the information on which the estimates are based. . . . What can the analyst do . . . ? The most important advice is: Don't ignore them. To base an analysis and decision on some single set of best guesses could be disastrous [44]. (Emphasis added.)
Roland McKean, co-author with Charles Hitch, writes in his book based on the water resource development field the following:
While differences in the pattern of uncertainty of an outcome-or, more properly, in the "frequency distribution of outcomes"-are matters of great concern, it is ordinarily impossible to attach a value or price to them. A price, that is, which would have any general validity. . . . Hence, the suggested method of treating uncertainty resembles the recommended way to handle other intangibles: to avoid concealment and to present some quantitative indicators. . . . The point here is that data about results other than the average outcome do properly influence one's decisions [45].
This author proposes to go beyond McKean's request for "some quantitative indicators" and attempt to attach a value to risk, based upon the supposition that a decision-maker really desires to maximize utility and not merely the expected value of an appropriate index. To do the latter, one need only have expected values; to do the former requires additional theory and logic. It was mentioned above that the axiomatic approach sets out to infer beliefs about betting on a chance occurrence from actual choices. Ideally, it was stated, one can derive numerical probabilities from a proper set of postulates. Two men pioneered the field of game theory with just such a set of postulates. In their extraordinary trail-blazing work Theory of Games and Economic Behavior [46], John von Neumann and Oskar Morgenstern argue that maximizing the expected value of utility is the decision-maker's strategy under conditions of uncertainty. In the realm of certainty or risk (or risk/uncertainty, especially) an individual makes decisions in accordance with a system of preferences with the following traits:
1. The system is complete and consistent; that is, an individual can tell which of two objects he prefers or whether he is indifferent to them, and if he does not prefer C to B and does not prefer B to A, then he does not prefer C to A. (Note: "object" includes combinations of objects with stated probabilities.)
2. Any object which is a combination of other objects with stated probabilities is never preferred to every one of these objects, nor is every one of them preferred to the combination.
3. If the object A is preferred to the object B and B to the object C, there will be some probability combination of A and C such that the individual is indifferent between it and B [47].
Based upon these axioms, the von Neumann-Morgenstern "cardinal" utility index permits predictions to be made-predictions as to which of two lottery tickets (that is, which of two risky investments) a person will prefer. Information is available on the individual's ranking of the alternative prizes offered by the lottery tickets and the odds on each prize.
From this it is calculated, by inference and without actually asking the person, which lottery ticket he will choose. In the von Neumann-Morgenstern index, consequently, assignments are made to lottery ticket prizes [re: f(X) values in optimization theory notation] which are themselves utility numbers and which, when processed arithmetically in accord with the linguistic convention, will assign a utility number to the lottery ticket itself. What, then, is the value of the utility index or function which matches each lottery prize with its utility [48]? To explain the risk postures-i.e. risk-neutral, risk-averter, and risk-taker-it is essential to know the shape of the individual's utility curve in the region of the "lottery prizes". For a simplified explanation of the three postures, only two lottery prizes, values of the cost-utility index, together with their probabilities (derived from the output of the method presented in Part III) and their utilities (derived from the von Neumann-Morgenstern "cardinal" utility index) will be used. Before that explanation, an intuitively simple observation might help. Hitch and McKean propose the following:
While rational men do act in the face of uncertainty, they act differently. There are audacious commanders like Napoleon and conservative ones like Grant-both successful in the right circumstances. Different people simply take different views of risks-in their own lives, as decision-makers do for the nation. Some "insure" and others "gamble". It is therefore most important, whether the analyst calculates a general solution or not, that he present responsible decision-makers with the kind of information that allows them to use their own risk preferences in making the final choice [49].
To demonstrate the risk postures, two alternative system configurations, Xa and Xb, may be considered. The selected cost-utility index for the two alternatives assumes the values f(Xa) and f(Xb), which will be equated, for simplicity, to P1 and P2. Hence, f(Xa) = P1 and f(Xb) = P2. The probability of the index assuming those two values is derived from the output of the input-output uncertainty system model to be explained in Part III. Let the probability of P1 be denoted Q1 and that of P2 by Q2. Hence, Probability(P1) = Q1 and Probability(P2) = Q2. Since this simplified explanation assumes only two alternatives, Q1 + Q2 = 1, by definition. Hence, Q2 = (1 - Q1). Using this notation, each of the three risk postures will be developed.

Fig. 1. Risk postures. [Three panels plot utility U against the index P = f(X): risk-neutral (linear utility), risk-averter (concave utility), and risk-taker (convex utility), each marking U(P1), U(P2), U(E(P)) and the certainty-equivalent value P*.]

Risk-neutrality. In the simplest case, the utility function is assumed to be linear (see Fig. 1). In this case, the strategy of maximizing the single-valued expected value would give the same results as maximizing the expected utility. That is,

E(U) = U(E(P))     (1)

meaning the expected value in terms of utility, E(U), is equal to the utility of the expected value of P, that is, of f(X). Since the utility function is assumed to be linear,

U(P) = a + bP     (2)

with a and b some constants. Hence, the right side of equation (1) is

U(E(P)) = Q1(a + bP1) + (1 - Q1)(a + bP2).     (3)

The left side of equation (1), the expected utility, is

E(U(P)) = Q1(U(P1)) + (1 - Q1)(U(P2)).     (4)

Substituting equation (2) into equation (4),

E(U(P)) = Q1(a + bP1) + (1 - Q1)(a + bP2)     (5)

which proves equation (1) and illustrates that if the utility function for the values of the
cost-utility index is linear, and only if it is linear (not specifically proved here), then the expected utility of the risky alternatives P1 and P2 is equal to the utility of the expected value, and risk has no effect on the decision-maker's choice. Utility maximization and index maximization are equivalent, and indifference exists between a risk/uncertainty-free alternative and a risky one, so long as the single-valued expected values are equal.
Risk-aversion. Let us assume that the utility curve connecting the points (P1, Q1) and (P2, 1 - Q1) is not linear; hence risk-neutrality does not exist, by definition. Let us furthermore assume that the utility function is concave. This situation will be defined as risk-aversion, with the case of a convex utility curve defined as risk-taking. Note that equation (1) no longer holds. Instead, an inequality exists:

E(U) < U(E(P))     (6)
The left side of equation (6) is the same as in equation (4). Moreover, consider the equation
for the chord connecting (P1, Q1) and (P2, Q2). The equation for the straight line containing this chord is the left side of equation (6) with Q1 unspecified. The expected value E(P) lies somewhere between the two points and is defined as

E(P) = Q1(P1) + (1 - Q1)P2.     (7)

Since the utility function was assumed to be concave, the utility of the expected value, on the utility function, must lie above the chord. Hence, the inequality in (6) holds. What does inequality (6) mean? It says that the expected utility is less than the utility of the expected value; hence a risk-averse decision-maker would be indifferent between the risky alternatives with the expected value E(P) and a riskless alternative with the return P*. Note that if no data on risk/uncertainty were available except the expected values, then the decision-maker would certainly not recognize that the above relationship holds true.
Risk-taking. Let us assume that the utility function connecting the two points is convex. Then, by a proof paralleling that for risk-aversion, it is clear that

E(U) > U(E(P))     (8)

meaning the expected utility exceeds the utility of the expected value. This means that diminishing marginal utility does not hold. This point causes considerable unrest in theoretical circles, but clearly has empirical support from behavioristic studies [50]. Note that the decision-maker is indifferent between the expected value of the risky alternatives and a riskless alternative with an index rating of P*. Again, such information would not come from an examination only of expected values.

Approaches to incorporating risk/uncertainty into analysis

A major thesis of this paper is that the lesson learned from the water resource development field and the weapons system analyses concerning risk and uncertainty is that it must be explicitly incorporated into the analysis. Moreover, the use of expected values and other Bayesian techniques has limitations. Beyond this point, this paper proposes a computerized method to incorporate risk and uncertainty based upon a systems approach. This paper will not attempt to critique other approaches; indeed, creative ventures and innovative approaches are urgently needed and should be encouraged. There are, however, some traditional approaches which have been utilized and which deserve mention. Haveman, writing in the water resource development area, refers to five tacks often taken:
1. a limit to the period of analysis, as suggested by the Green Book [51] and Circular A-47 [52], and disposed of by Otto Eckstein [53];
2. the inclusion of direct and specific safety allowances, proposed by the Green Book and concurred in by Eckstein (in the case of risks which are not a function of time);
3. the inclusion of a risk premium in the interest rate, recommended by Eckstein (to allow for uncertainties related to time) [54];
4. a general case-by-case evaluation involving a description of contingencies, a schedule showing the possible range of outcomes, and an analysis of the public attitude towards the disutility (utility) of uncertainty, as recommended by McKean [55];
5. an approach which makes allowance for the fundamental differences in the nature of the expectations concerning costs and benefits through a dual rate of interest [56].
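Before turning to the method itself, a short numerical sketch may help fix the three risk postures just developed: using hypothetical values P1 and P2 with probability Q1, and one assumed utility function of each shape, it compares the expected utility of equation (4) with the utility of the expected value of equation (7).

# Compare E(U) with U(E(P)) for the three risk postures of Fig. 1.
# P1 and P2 are two possible values of the index f(X); Q1 is the
# probability of P1. All numbers are hypothetical.
import math

P1, P2, Q1 = 1.2, 2.0, 0.5
E_P = Q1 * P1 + (1 - Q1) * P2                       # equation (7)

postures = {
    "risk-neutral (linear)":  lambda p: 3 + 2 * p,  # U(P) = a + bP, eq. (2)
    "risk-averter (concave)": lambda p: math.log(p),
    "risk-taker (convex)":    lambda p: p ** 2,
}

for name, U in postures.items():
    E_U = Q1 * U(P1) + (1 - Q1) * U(P2)             # equation (4)
    rel = "=" if math.isclose(E_U, U(E_P)) else ("<" if E_U < U(E_P) else ">")
    print(f"{name}: E(U) {rel} U(E(P))  ({E_U:.3f} vs {U(E_P):.3f})")

The linear case reproduces equality (1), the concave case inequality (6), and the convex case inequality (8).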
III. A COMPUTERIZED METHOD FOR INCORPORATING RISK/UNCERTAINTY INTO COST-UTILITY ANALYSIS
Input-output uncertainty system model

An input-output model will be utilized to represent the incorporation of risk and/or risk/uncertainty conditions into cost-utility analysis. The output will be represented (Fig. 2) by probability distributions of the appropriate index for various alternative system configurations. The input factors will also be represented by probability distributions and will be divided into three categories: (1) cost factors, (2) requirement factors, and (3) utility factors. The "black box" connecting the inputs and outputs contains two major elements: (1) the cost analysis model, and (2) the systems analysis model, with the former being an input into the latter, as indicated in the chart [57].
Fig. 2. Input-output system uncertainty model [58]. [Input factors-cost, requirement and utility-enter as probability distributions; the cost analysis model feeds the systems analysis model, which outputs probability distributions of the index.]
Cost factor risk/uncertainty refers to variations in estimated system costs when the configuration of the system is invariant. Decision-makers have very limited control over the risk/uncertainty factors associated with costs if these risks and uncertainties arise from differences amongst individual cost analysts, errors in the data base used in the cost analysis, or errors in the assumed relationships within the cost analysis model within the "black box". This is not to deny the invaluable diagnostic feature of performing sensitivity analyses on the overall system risk/uncertainty to detect the system's sensitivity to cost factors. Extreme sensitivity may justify added expense in reducing the risk/uncertainty associated with a given cost factor.
Requirement factor risk/uncertainty refers to variations in estimated system costs due to alteration of the configuration of the system. The "configuration of the system" refers to the original specifications or assumptions with respect to hardware and software characteristics and the levels of requirement parameters set by system designers. Decision-makers can usually directly control system configuration parameters; however, risk and
uncertainty exists in the attempt to implement a system, which may change from the initial specifications to the resulting end-product.
Utility factor risk/uncertainty refers to variations in estimating the utility derived from a particular system configuration.
The output of this system can be understood by using the notation of optimization theory and the formats for cost-utility analysis introduced in Part II. The output for the set F of feasible acts under analysis consists of a probability distribution for the value of the index, f(X), for each of the feasible acts. If the analysis follows format one, a specified level of utility, then the set F includes the set of system configurations, each of which purports to satisfy the specified level of utility. By comparing the distribution of values of the index for each of the configurations, the decision-maker can ascertain the optimal configuration given the output and his own risk posture. This paper will not attempt to derive a formalistic set of decision rules for ascertaining the optimal configuration from the output of the system. Nonetheless, some attempt to justify further the necessity of output presentations beyond single-valued expectations will be made, to suggest the value of probability distributions of the index as output.
Assume a decision is to be made between only two system configurations, under either format one or format two. Letting Xa and Xb represent the two system configurations, the abscissa represents f(X) and the ordinate represents the probability Pr(f(X)) attached to corresponding values of f(X). Figure 3 presents four situations where the output of the two system configurations is shown as probability distributions.
[Figure 3 shows four panels, Cases I-IV, each plotting the probability distributions of f(Xa) and f(Xb) against f(X); in Case III the expected values f(Xa) and f(Xb) coincide.]
FIG. 3.
In case I, the decision as to the optimal configuration is clear, whether based upon the single-valued expectations, f(Xa) and f(Xb), or upon a comparison of the probability distributions. Regardless of the decision-maker's risk posture, Xa is preferable to Xb.
In case II, the decision is more complex due to the low probability that the actual value of f(Xb) will be less than the actual value of f(Xa). If the overlapping area of the distributions is small, then the decision is not difficult for most decision-makers. However, if the overlap is considerable and if the decision-maker's risk posture is not risk-neutrality, then the decision is quite complicated and may or may not coincide with a decision based solely upon comparing the expected values.
In case III, regardless of the risk posture of the decision-maker, it is impossible to make a decision based upon the single-valued expectations alone. Xb has a greater variance than does Xa, and the risk-taker may wish to risk the possibility of the lowest f(X) in order possibly to attain the highest. The risk-avoider, on the other hand, should select Xa. In case III it would have sufficed to have as output the mean (i.e. the expected value of f(X)) and the standard deviations of the distributions.
In case IV, there exists a most complicated situation which, unfortunately, is not particularly uncommon. In this situation an unambiguous decision could be based neither upon the expected values alone nor upon the expected values together with the standard deviations. This situation also suggests the trap awaiting the risk-neutral person who is content to base his decisions solely upon the expected value. The output of this method, consequently, does not stop with the expected values and standard deviations of the indices associated with alternative system configurations. In the interest of presenting the decision-maker with the maximum of useful information, the probability distribution is also presented.
The subjective probability distributions for risk/uncertainty
All input factors (cost, requirement and utility) are to be in the form of probability distributions. If the factor is in the realm of risk, then, by definition, the probability distribution is "known" from statistical calculations. On the other hand, if decisions as to factors are in the realm of risk/uncertainty, then subjective probability distributions must be constructed. The method consists of obtaining three values for each of the input factors:
1. the lowest possible value, L;
2. the highest possible value, H; and
3. the most likely value, m.
If the condition is one of risk, then L and H may be the 90 per cent or 95 per cent confidence limits, or they may be derived as f(X) ± k SD's, where f(X) is the mean and SD designates a standard deviation. The mean value is used for m. If the realm is risk/uncertainty, then judgments are made concerning the values of H, L and m. The probability distribution is next selected from nine prototypes arranged on the basis of their variance (high, medium or low) and their skewness (skewed left, symmetric or skewed right). All input factors are assumed to be beta-distributed with the following form:

P(f(X)) = K(X - L)^a (H - X)^b

where P(f(X)) represents the probability of the index assuming a particular value f(X); L, H and m are as indicated; and a and b are the beta parameters.
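To make the beta form concrete, the short sketch below (modern Fortran, not one of the original listings) evaluates the unnormalized density (X - L)^a (H - X)^b for the skewed-left prototype used in the appendix (a = 1.5, b = 0.5 in FUNCTION SSSSS) and confirms that its mode falls at L + (H - L)a/(a + b), i.e. at the third quartile of the range. The values of L and H are illustrative assumptions; the constant K is omitted since only the shape matters here.

      program beta_sketch
      ! Sketch of the beta input form; L and H are illustrative.
      implicit none
      real :: alow, ahigh, a, b, m
      alow  = 10.0                  ! analyst's lowest possible value, L
      ahigh = 20.0                  ! analyst's highest possible value, H
      a = 1.5                       ! skewed-left prototype (cf. SSSSS)
      b = 0.5
      m = alow + (ahigh - alow)*a/(a + b)
      print *, 'mode m =', m        ! 17.5: third quartile of [L, H]
      print *, 'density at m   :', shape_fn(m)
      print *, 'density at m-1 :', shape_fn(m - 1.0)   ! lower than at m
      contains
        real function shape_fn(x)   ! unnormalized (X-L)**a * (H-X)**b
          real, intent(in) :: x
          shape_fn = (x - alow)**a * (ahigh - x)**b
        end function shape_fn
      end program beta_sketch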
Several observations and explanations are helpful at this point in explaining this method, proposed by Paul Dienemann of the RAND Corporation [59]. First, many distributions beyond the nine shown in Fig. 4 are possible, of course. More sophisticated arrangements could be conceptually formulated, yet the ability of analysts to make judgments circumscribing admittedly risky parameters is limited. Therefore, the nine distributions represent a moderate level of refinement which allows three values to be assumed for
[Figure 4 shows the nine prototype beta density curves in a three-by-three array: columns for the skewed-left, symmetric and skewed-right forms; rows running from high variance (Types 1-3) through medium variance (Types 4-6) to low variance (Types 7-9). The parameter pairs (a, b) legible in the figure, and confirmed by FUNCTION SSSSS in the Appendix, are: Type 1 (1.5, 0.5), Type 2 (1.35, 1.35), Type 3 (0.5, 1.5), Type 4 (3.0, 1.0), Type 5 (2.75, 2.75), Type 6 (1.0, 3.0), Type 7 (4.5, 1.5), Type 8 (4.0, 4.0), Type 9 (1.5, 4.5).]
FIG. 4. Input-output uncertainty probability distributions.
the skewness component and the variance component. Second, the beta distributions were chosen because of their finite range, continuity and unimodality. Dienemann found precedent for the use of beta functions in the assumptions underlying RAND's PERT studies [60]. Third, the most likely value, which becomes the mode of the beta distribution, is given (with the range normalized to the unit interval) by

m = a/(a + b)

This definition is a result of the fact that the beta distribution is uniquely determined by a, b, L and H. The values of a and b are predetermined once one of the nine distributions has been selected; L and H are arrived at subjectively by the analyst. To add a fifth parameter, m, would be to overdetermine the beta distribution. Nonetheless, since m probably represents a more accurate estimate than L or H, it should be incorporated into the distribution. To facilitate this incorporation, the beta distributions were defined using m, L, H, a and b in such a way that m will always fall at the first quartile, midpoint or third quartile of the range. Furthermore, the calculated values for L and H will usually
differ slightly from the values specified by the analyst. The discrepancy between the suggested and the resultant values for L and H is not usually critical [61].
This method is not without precedent in the writings of Roland McKean, although his en passant suggestions have not resulted in a precise method for incorporating risk and uncertainty. In his book, he suggested that "it is often possible to use quantitative indicators other than the expected value to describe the uncertainty, e.g. the range of outcomes." Thus, this method uses both m and the range limits, L and H. McKean also states that "in connection with the chance element in recurring events, it may be desirable to show a probability distribution of outcomes" [62]. This paper, as previously argued, proposes to extend the applicability of probability distributions from recurring events to all events falling in the risk/uncertainty domain.
Monte Carlo simulation of the system
McKean extends his comments almost to the point of proposing a method such as the one just presented, yet he balks at the crucial step:
If the sequence of events in the system is highly complicated a "Monte Carlo" technique may be useful. . . . With respect to cost-benefit analyses, such procedures are used to some extent in calculating expected results, particularly in the estimation of flood damage. They might be used more extensively, and used to indicate the variability of results, but it would require a great deal of time and effort. On balance, the Monte Carlo method, and the probability distribution of outcomes that this technique produces, may not be worthwhile in cost-benefit analysis except in the exhaustive examination of very expensive proposals [63].
This point is refuted, conceptually, not only by the RAND developments but also by one of the most widely reprinted articles to appear in the Harvard Business Review: "Risk Analysis in Capital Investment", by David B. Hertz of McKinsey and Company, consultants [64]. As Dienemann explains, the mechanics of proceeding from input uncertainty estimates to output system estimates of uncertainty are relatively simple. The flow diagram (Fig. 5) summarizes the necessary steps. A listing of the FORTRAN subroutines for steps 1, 3 and 6 (called BTABLE, SAMPLE and HISTO, respectively) in the program used in this paper, and originally included in Dienemann's paper, is given as an Appendix.
Step 1. In the first step cumulative beta tables for the nine functional forms are generated. By using 128 increments for each table, a maximum of only 6 binary search steps is required to "look up" beta values for randomly generated decimals.
Step 2. Next, the inputs to the costing model are read and stored. Each input data card contains the low, modal and high value and the functional form for two system input parameters.
Step 3. Using a random number generator, subroutine SAMPLE develops a Monte Carlo sample value for each input parameter based on its mode, range and form. A different random number is used for each Monte Carlo simulation.
Step 4. After a complete set of sample inputs has been generated, the existing cost model is used to estimate the system's index. The result is stored in the output table.
Step 5. Here the computer program tests whether the specified number of system cost estimates has been calculated. If not, a new set of inputs is generated, and the procedure is repeated. When the last iteration is made (1000 repetitions were used in Dienemann's program), the program proceeds to the last step.
Step 6. From the tabulation of estimated system values of the index, subroutine HISTO calculates the mean value, the standard deviation and a probability distribution (for eleven class intervals). These parameters provide the description of the uncertainty of the cost-utility estimate [65].
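The six steps translate directly into code. The sketch below (modern Fortran, not Dienemann's listing) runs the whole loop for a deliberately trivial case: one input factor, the symmetric prototype a = b = 2.75, a stand-in "cost model" that merely scales the sample onto an assumed range [10, 20], and the mean and standard deviation in place of the full eleven-interval histogram. Every numerical choice here is an illustrative assumption; the appendix listings give the actual subroutines.

      program mc_sketch
      ! Sketch of steps 1-6 for a single input parameter.
      implicit none
      integer, parameter :: ninc = 128, niter = 1000
      real :: cumb(ninc), xtab(ninc), est(niter)
      real :: dx, r, samplx, xmean
      integer :: i, k
      ! Step 1: cumulative beta table by crude midpoint integration
      !         of the symmetric prototype x**2.75 * (1-x)**2.75.
      dx = 1.0/real(ninc)
      cumb(1) = dens(0.5*dx)*dx
      xtab(1) = dx
      do i = 2, ninc
        cumb(i) = cumb(i-1) + dens((real(i)-0.5)*dx)*dx
        xtab(i) = real(i)*dx
      end do
      cumb = cumb/cumb(ninc)          ! normalize so the table ends at 1
      ! Steps 2-5 (step 2, reading input cards, is replaced by constants)
      do k = 1, niter
        call random_number(r)         ! step 3: draw and invert the CDF
        i = 1                         ! linear scan here; the original
        do while (cumb(i) < r .and. i < ninc)   ! uses a binary search
          i = i + 1
        end do
        samplx = xtab(i)
        est(k) = 10.0 + 10.0*samplx   ! step 4: stand-in "cost model"
      end do                          ! step 5: loop over 1000 repetitions
      ! Step 6: summary statistics (HISTO also prints a histogram)
      xmean = sum(est)/real(niter)
      print *, 'mean =', xmean
      print *, 'sd   =', sqrt(sum((est - xmean)**2)/real(niter - 1))
      contains
        real function dens(x)         ! unnormalized symmetric prototype
          real, intent(in) :: x
          dens = x**2.75 * (1.0 - x)**2.75
        end function dens
      end program mc_sketch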
FIG. 5. Estimating system cost uncertainty [67].
To use the Monte Carlo techniques, one must assume that all input parameters are mutually independent. If functional relationships exist amongst the input factors, then the problem requires incorporating the relationships into the "black box" model, which complicates matters considerably. Alternatively, one could explore more sophisticated techniques for sampling from joint frequency tables [66]. A major advantage of the systems approach with Monte Carlo simulation is that the method allows a decision-maker to ascertain the sensitivity of the cost-utility index, or of its components, to each or all of the input factors. Simply by re-running the program with changes in the distribution of an input factor, it is possible to determine the effect of added or changed information (or of the lack of information). It may turn out that fairly large changes in some factors do not significantly affect the outcomes. In addition, the method offers more information than simple contingency analysis. The three-valued approach (L, H and m) is similar to a standard type of contingency analysis; however, the Monte Carlo simulation allows the entire system to respond as an entity to the gross risk of contingent conditions in each of its subsystems. To run the system consistently at the high or at the low values would be excessively optimistic or pessimistic and would deny the effects of pooled risks as distinguished from the sum of sub-system risks.
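The pooled-risk point is easy to verify. In the sketch below (modern Fortran; the uniform inputs and their ranges are illustrative assumptions, not from the paper), two independent input factors are sampled and summed. The standard deviation of the total comes out near the square root of 2 times the per-factor standard deviation, about 30 per cent below the sum of the two standard deviations, which is what consistently running both factors at their high or low values would implicitly assume.

      program pooled_risk
      ! Sketch: SD of a sum of independent inputs vs. the summed SDs.
      implicit none
      integer, parameter :: n = 100000
      real :: a(n), b(n), t(n)
      call random_number(a)           ! input factor 1, uniform on [0,1]
      call random_number(b)           ! input factor 2, uniform on [0,1]
      t = a + b                       ! the "system" total
      print *, 'SD of factor 1 :', sd(a)           ! about 0.289
      print *, 'sum of SDs     :', sd(a) + sd(b)   ! about 0.577
      print *, 'SD of total    :', sd(t)           ! about 0.408
      contains
        real function sd(v)           ! sample standard deviation
          real, intent(in) :: v(:)
          real :: m
          m = sum(v)/real(size(v))
          sd = sqrt(sum((v - m)**2)/real(size(v) - 1))
        end function sd
      end program pooled_risk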
APPENDIX 1

      SUBROUTINE BTABLE
      COMMON /BTAB/ NTABLE, CUMB(9,128), XTABLE(128)
C     GENERATES CUMULATIVE BETA TABLES FOR NINE BETA EQUATIONS
      DO 10 M = 1,9
C     NTABLE TELLS FUNCTION SSSSS WHICH OF THE NINE EQUATIONS TO USE
      NTABLE = M
C     DELTA = 1/128, SO B SWEEPS THE UNIT INTERVAL IN 128 INCREMENTS
      DELTA = 0.0078125
      A = 0.0
      B = DELTA
      DO 10 J = 1,128
C     PKLEQ, A LIBRARY ROUTINE, IS USED TO INTEGRATE THE BETA
C     EQUATIONS; A IS HELD AT ZERO SO THAT PKLEQ(A,B) RETURNS THE
C     CUMULATIVE INTEGRAL FROM 0 TO B
      CUMB(M,J) = PKLEQ(A,B)
      B = B + DELTA
   10 CONTINUE
      RETURN
      END
      FUNCTION SSSSS(X)
      COMMON /BTAB/ NTABLE, CUMB(9,128), XTABLE(128)
C     DEFINES THE NINE BETA EQUATIONS; THIS FUNCTION IS USED BY PKLEQ
      IF (NTABLE .EQ. 1) GO TO 1
      IF (NTABLE .EQ. 2) GO TO 2
      IF (NTABLE .EQ. 3) GO TO 3
      IF (NTABLE .EQ. 4) GO TO 4
      IF (NTABLE .EQ. 5) GO TO 5
      IF (NTABLE .EQ. 6) GO TO 6
      IF (NTABLE .EQ. 7) GO TO 7
      IF (NTABLE .EQ. 8) GO TO 8
      IF (NTABLE .EQ. 9) GO TO 9
    1 ALPHA = 1.5
      BETA = 0.5
      CONST = 5.1
      GO TO 10
    2 ALPHA = 1.35
      BETA = 1.35
      CONST = 10.66
      GO TO 10
    3 ALPHA = 0.5
      BETA = 1.5
      CONST = 5.1
      GO TO 10
    4 ALPHA = 3.0
      BETA = 1.0
      CONST = 20.0
      GO TO 10
    5 ALPHA = 2.75
      BETA = 2.75
      CONST = 95.5
      GO TO 10
    6 ALPHA = 1.0
      BETA = 3.0
      CONST = 20.0
      GO TO 10
    7 ALPHA = 4.5
      BETA = 1.5
      CONST = 72.5
      GO TO 10
    8 ALPHA = 4.0
      BETA = 4.0
      CONST = 630.
      GO TO 10
    9 ALPHA = 1.5
      BETA = 4.5
      CONST = 72.5
C     CONST NORMALIZES EACH EQUATION TO A UNIT-AREA DENSITY
   10 SSSSS = CONST * (X**ALPHA) * ((1.0 - X)**BETA)
      RETURN
      END
      SUBROUTINE SAMPLE
C     GENERATES A MONTE CARLO VALUE FOR EACH INPUT PARAMETER
      COMMON /NUM/ NUM1, NUM2, NUM3, ITER, NI
      COMMON /BTAB/ NTABLE, CUMB(9,128), XTABLE(128)
      COMMON /DATA/ IDATA(420), GLOW(420), GMODE(420), GHIGH(420),
     C  ITYPE(420), FLOW(420), FMODE(420), FHIGH(420), NTYPE(420),
     C  OUT(1000,13)
C     BLANK COMMON: THE COST-MODEL VARIABLES (LIST ABRIDGED)
      COMMON AA, AB, AC, AD, AE, AF, AG, AH, AI, AJ, AK, AL, AM,
     C  ... , P110, P111, P112, TITLE(12)
      DIMENSION FDATA(420)
C     FDATA OVERLAYS THE COST-MODEL VARIABLES, SO SAMPLED VALUES
C     FEED THE COST MODEL DIRECTLY
      EQUIVALENCE (AA,FDATA(1))
      DO 99 N = 1,196
      IF (NTYPE(N) .EQ. 1) GO TO 1
      IF (NTYPE(N) .EQ. 2) GO TO 2
      IF (NTYPE(N) .EQ. 3) GO TO 3
      IF (NTYPE(N) .EQ. 4) GO TO 4
      IF (NTYPE(N) .EQ. 5) GO TO 5
      IF (NTYPE(N) .EQ. 6) GO TO 6
      IF (NTYPE(N) .EQ. 7) GO TO 7
      IF (NTYPE(N) .EQ. 8) GO TO 8
      IF (NTYPE(N) .EQ. 9) GO TO 9
C     SINGLE VALUED INPUT
      FDATA(N) = FMODE(N)
      GO TO 99
C     DISTRIBUTION SKEWED LEFT
    1 SMODE = 0.75
      M = 1
      GO TO 10
    4 SMODE = 0.75
      M = 4
      GO TO 10
    7 SMODE = 0.75
      M = 7
      GO TO 10
C     DISTRIBUTION SYMMETRIC
    2 SMODE = 0.50
      M = 2
      GO TO 10
    5 SMODE = 0.50
      M = 5
      GO TO 10
    8 SMODE = 0.50
      M = 8
      GO TO 10
C     DISTRIBUTION SKEWED RIGHT
    3 SMODE = 0.25
      M = 3
      GO TO 10
    6 SMODE = 0.25
      M = 6
      GO TO 10
    9 SMODE = 0.25
      M = 9
   10 CALL RANDOM(R)
C     BINARY SEARCH FOR CUM BETA
      J = 64
      DO 13 K = 1,6
      L = 6 - K
      IF (CUMB(M,J) .LT. R) GO TO 11
      IF (CUMB(M,J) .GT. R) GO TO 12
      GO TO 14
   11 J = J + 2**L
      GO TO 13
   12 J = J - 2**L
   13 CONTINUE
   14 SAMPLX = XTABLE(J)
C     SCALE THE UNIT-INTERVAL SAMPLE SO THAT THE MODE SMODE MAPS
C     ONTO THE ANALYST'S MOST LIKELY VALUE FMODE(N)
      FDATA(N) = FMODE(N) + (FHIGH(N) - FLOW(N)) * (SAMPLX - SMODE)
   99 CONTINUE
      RETURN
      END
      SUBROUTINE HISTO
      COMMON /NUM/ NUM1, NUM2, NUM3, ITER, NI
      COMMON /DATA/ IDATA(420), GLOW(420), GMODE(420), GHIGH(420),
     C  ITYPE(420), FLOW(420), FMODE(420), FHIGH(420), NTYPE(420),
     C  OUT(1000,13)
      DIMENSION XVAL(25,13), NFREQ(25,13)
C     CALCULATES MEAN, STANDARD DEVIATION, AND PREPARES HISTOGRAM
      DO 99 J = 1,13
C     CALCULATE INTERVAL RANGE (RINT) FOR NI INTERVALS
      XMIN = OUT(1,J)
      XMAX = OUT(1,J)
      DO 1 K = 1,ITER
      IF (OUT(K,J) .LT. XMIN) XMIN = OUT(K,J)
      IF (OUT(K,J) .GT. XMAX) XMAX = OUT(K,J)
    1 CONTINUE
      RINT = (XMAX - XMIN) / FLOAT(NI)
C     CALCULATE MEAN VALUE (XVAL) FOR EACH INTERVAL
      XVAL(1,J) = XMIN + RINT/2.0
      DO 10 N = 2,NI
   10 XVAL(N,J) = XVAL(N-1,J) + RINT
C     DETERMINE DATA FREQUENCY FOR EACH INTERVAL
      DO 20 N = 1,NI
   20 NFREQ(N,J) = 0
      DO 22 K = 1,ITER
      DO 21 N = 1,NI
      XLIM = XVAL(N,J) + RINT/2.0
      IF (XLIM .GE. OUT(K,J)) GO TO 22
   21 CONTINUE
      N = NI
   22 NFREQ(N,J) = NFREQ(N,J) + 1
C     CALCULATE MEAN VALUE (XMEAN)
      SUMX = 0.0
      DO 30 K = 1,ITER
   30 SUMX = SUMX + OUT(K,J)
      XMEAN = SUMX/FLOAT(ITER)
C     CALCULATE STANDARD DEVIATION (XSDEV)
      SUMSQ = 0.0
      DO 40 K = 1,ITER
   40 SUMSQ = SUMSQ + (OUT(K,J) - XMEAN)**2
      XVAR = SUMSQ / (FLOAT(ITER) - 1.0)
      XSDEV = SQRT(XVAR)
C     PRINT OUTPUT
      PRINT 50, XMIN, XMAX, XMEAN, XSDEV
   50 FORMAT (16H1MINIMUM VALUE =, F10.1 / 16H0MAXIMUM VALUE =, F10.1 /
     C  16H0   MEAN VALUE =, F10.1 / 16H0STANDARD DEV. =, F10.1 //)
      DO 51 N = 1,NI
   51 PRINT 52, XVAL(N,J), NFREQ(N,J)
   52 FORMAT (14X,F10.1,I10)
   99 CONTINUE
      RETURN
      END

REFERENCES
1. ROBERT N. ANTHONY, Planning and Control Systems: A Framework for Analysis. Graduate School of Business Administration, Harvard University, Boston, 1965.
2. ALLEN SCHICK, The Road to PPB: The Stages of Budget Reform.
3. Ibid., pp. 33-34.
4. Ibid., pp. 7-8.
5. ROLAND N. MCKEAN, Efficiency in Government through Systems Analysis: with Emphasis on Water Resource Development. A RAND Corporation Publication for Operations Research Society of America, No. 3. John Wiley, New York, 1958. Re: Ch. 14, "Analytical Aids to Governmental Economy: A Survey of Opportunities."
6. SCHICK, op. cit., pp. 44-47.
7. 49 Stat. 1570.
8. OTTO ECKSTEIN, Water-Resource Development: The Economics of Project Evaluation, pp. 48-49. Harvard University Press, Cambridge, 1961.
9. Ibid., p. 1.
10. Ibid., p. 8.
11. MCKEAN, op. cit., p. 19.
12. United States Federal Inter-Agency River Basin Committee, Subcommittee on Benefits and Costs, Proposed Practices for Economic Analysis of River Basin Projects, 1950.
13. United States Bureau of the Budget, Circular A-47, December 31, 1952.
14. ROBERT HAVEMAN, Water Resource Investment and the Public Interest: An Analysis of Federal Expenditures in Ten Southern States, pp. 22-23. Vanderbilt University Press, Nashville, 1965.
15. Ibid., re: pp. 11, 23, 24 for Congressional comments.
16. Operations Research: Proceedings of a Conference for Washington Area Government Agencies, April 20, 1966, p. 18. United States Department of Commerce, National Bureau of Standards, Miscellaneous Publication No. 294.
17. Ibid., p. 23.
18. E. S. QUADE, Analysis for Military Decisions. The RAND Corporation, Santa Monica, RM-387-PR, 1964.
19. Program Review, Technical Analysis Division. In-house document, Technical Analysis Division, National Bureau of Standards, US Department of Commerce, p. 1.
20. The Vice President's Handbook for Local Officials: A Guide to Federal Assistance for Local Governments, p. 269. Office of the Vice President, Washington, November 1, 1967.
21. Rickover Charges Lag in Submarines, The New York Times, p. 7, July 5, 1968.
22. Executive Office of the President, Bureau of the Budget, Bulletin No. 66-3, October 12, 1965.
23. Planning-Programming-Budgeting. Hearings before the Subcommittee on National Security and International Operations, Committee on Government Operations, US Senate, 90th Congress, Second Session, March 26, 1968, p. 212.
24. BRINSER, Standards and Techniques of Evaluating Economic Choices in Environmental Resource Development. Reprinted in: F. FRASER DARLING and JOHN P. MILTON (Eds.), Future Environments of North America, 1966.
25. MCKEAN, op. cit., Ch. 4, "Intangibles, Uncertainty and Criteria".
26. ROBERT DORFMAN (Ed.), Measuring Benefits of Government Investments, pp. 12-14. Studies of Government Finance, The Brookings Institution, Washington, 1963.
27. R. DUNCAN LUCE and HOWARD RAIFFA, Games and Decisions: Introduction and a Critical Survey, p. 13. A Study of the Behavioral Models Project, Bureau of Applied Social Research, Columbia University. John Wiley, New York, 1957.
28. FRANK H. KNIGHT, Risk, Uncertainty and Profit. Houghton Mifflin, Boston, 1921.
29. DANIEL ELLSBERG, Risk, ambiguity and the Savage axioms, Quarterly Journal of Economics LXXV, p. 645 (November 1961).
30. F. P. RAMSEY, Truth and Probability (1926). In: R. B. BRAITHWAITE (Ed.), The Foundations of Mathematics and Other Logical Essays. Harcourt Brace, New York, 1931.
31. Ibid., p. 644.
32. ELLSBERG, loc. cit.
33. For an interesting combination of the work of F. H. KNIGHT and the later work of NICHOLAS GEORGESCU-ROEGEN, re: HAVEMAN, op. cit., Appendix B. Note, however, that this paper refutes by its assumptions the statement that "As a consequence of the pair of contributions [Knight and Georgescu-Roegen] the generally accepted theory of risk and uncertainty has abandoned its reliance on the concept of the probability distribution of expectations concerning a given event." (pp. 158-159).
34. LUCE and RAIFFA, op. cit., p. 15.
35. HAVEMAN, op. cit. For a detailed description of the actual decision-making process of which B/C calculations are only a small part, re: Ch. 2. For a test of the ranking procedure, re: Ch. 3.
36. Economic Evaluation of Federal Water Resource Development Projects, p. 54. Quoted in MCKEAN, loc. cit.
37. Proposed Practices for Economic Analysis . . ., op. cit., p. 14.
38. HAVEMAN, op. cit., pp. 96-97.
39. ECKSTEIN, op. cit., p. 42.
40. MCKEAN, op. cit., pp. 36-37.
41. Ibid., pp. 132-133.
42. Ibid., p. 97.
43. CHARLES J. HITCH and ROLAND N. MCKEAN, The Economics of Defense in the Nuclear Age, p. 188. Harvard University Press, Cambridge, 1960.
44. Ibid.
45. MCKEAN, op. cit., pp. 64-65.
46. JOHN VON NEUMANN and OSKAR MORGENSTERN, Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1947.
47. MILTON FRIEDMAN and L. J. SAVAGE, The Utility Analysis of Choices Involving Risk, The Journal of Political Economy LVI, No. 4, p. 288 (August 1948).
48. WILLIAM J. BAUMOL, Economic Theory and Operations Analysis, pp. 515-516. Prentice-Hall, New Jersey, 1965.
49. HITCH and MCKEAN, op. cit., p. 197.
50. For an interesting extension of von Neumann and Morgenstern's work, re: FRIEDMAN and SAVAGE, loc. cit.
51. Proposed Practices . . ., op. cit., pp. 22-25.
52. Circular A-47, loc. cit.
53. ECKSTEIN, op. cit., pp. 81-86.
54. Ibid. See also: IRVING FISHER and GEORGE HALL, Risk and the Aerospace Rate of Return. The RAND Corporation, RM-5440-PR, Santa Monica, December 1967.
55. MCKEAN, op. cit., pp. 58-103.
56. HAVEMAN, op. cit., appendix.
57. Adapted from: G. H. FISHER, A Discussion of Uncertainty in Cost Analysis. The RAND Corporation, RM-3071-PR, Santa Monica, April 1962.
58. Adapted from: PAUL F. DIENEMANN, Estimating Cost Uncertainty Using Monte Carlo Techniques. The RAND Corporation, RM-4854-PR, Santa Monica, January 1966.
59. PAUL F. DIENEMANN, Estimating Cost Uncertainty Using Monte Carlo Techniques. The RAND Corporation, RM-4854-PR, Santa Monica, January 1966.
60. K. R. MACCRIMMON, An Analytical Study of the PERT Assumptions. The RAND Corporation, RM-3408-PR, Santa Monica, December 1962.
61. DIENEMANN, op. cit., p. 15.
62. MCKEAN, op. cit., p. 69.
63. MCKEAN, op. cit., pp. 69-70.
64. DAVID B. HERTZ, Risk Analysis in Capital Investment, Harvard Business Review, January-February 1964, pp. 95-106.
65. DIENEMANN, op. cit., p. 16.
66. For instance, re: D. J. FINNEY, Frequency Distributions of Deviation from Means and Regression Lines in Samples from a Multi-variate Normal Population, Ann. Math. Statist. 17, 1946.
67. DIENEMANN, op. cit., p. 17.