Reliability Engineering and System Safety 91 (2006) 84–99 www.elsevier.com/locate/ress
A method for the efficient prioritization of infrastructure renewal projects

D.M. Karydas a,*, J.F. Gifun b

a Department of Technology Management, Quality and Reliability Engineering, Technische Universiteit Eindhoven, Den Dolech 2, Paviljoen C14, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
b Department of Facilities, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, USA

Received 5 October 2004; accepted 30 November 2004. Available online 2 February 2005.

* Corresponding author. Fax: +31 40 2467497. E-mail address: [email protected] (D.M. Karydas).
doi:10.1016/j.ress.2004.11.016
Abstract

The infrastructure renewal program at MIT consists of a large number of projects with an estimated budget that could approach $1 billion. Infrastructure renewal at the Massachusetts Institute of Technology (MIT) is the process of evaluating and investing in the maintenance of facility systems and basic structure to preserve existing campus buildings. The selection and prioritization of projects must be addressed with a systematic method for the optimal allocation of funds and other resources. This paper presents a case study of a prioritization method utilizing multi-attribute utility theory. This method was developed at MIT's Department of Nuclear Engineering and was deployed by the Department of Facilities after appropriate modifications were implemented to address the idiosyncrasies of infrastructure renewal projects and the competing criteria and constraints that influence the judgment of the decision-makers. Such criteria include minimization of risk, optimization of economic impact, and coordination with academic policies, programs, and operations of the Institute. A brief overview of the method is presented, as well as the results of its application to the prioritization of infrastructure renewal projects. Results of workshops held at MIT with the participation of stakeholders demonstrate the feasibility of the prioritization method and the usefulness of this approach.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Project prioritization; Facilities; Infrastructure renewal; Analytic hierarchy process; Deliberation; Consensus; Analytic-deliberative decision making; Multi-attribute utility theory
1. Introduction

The buildings and grounds that form the campus of the Massachusetts Institute of Technology (MIT) represent its largest single monetary asset, second only to its investments, and exist for the ultimate purpose of enabling MIT's core mission of academic learning and research [1]. MIT's Department of Facilities (Facilities) supports MIT's mission by operating, maintaining, protecting, and enhancing its physical assets to provide the environment where faculty, students, and administrative staff do world-class work [2]. As lofty as Facilities' goal may be, it is tempered by the reality of limited resources available to the Department of Facilities and a backlog of deficiencies in the physical infrastructure that will require many years to mitigate. Whether this limitation on resources will continue far into
the future is not known; however, it is reasonable to anticipate that it will. Therefore, to capitalize on available resources for infrastructure renewal projects and do so effectively, Facilities adapted a prioritization method developed by MIT's Department of Nuclear Engineering [3]. The method is applied to projects that are not otherwise excluded from the project prioritization process for a particular reason or by a particular criterion. This paper explains the process and results of Facilities' adaptation and application of this work.
2. Background

2.1. Infrastructure renewal at MIT

In a special report from the President of MIT, Charles M. Vest writes about the development of the MIT campus in Cambridge, Massachusetts from 1913 to 2000. He states, 'A second major spurt of construction activity on the MIT
campus began during the 1950s, when the Institute's faculty began to expand on the extraordinary advances in science and technology during and just after World War II'. Since 1950, the total floor area of MIT campus buildings has more than tripled, but the resources needed to maintain it have not kept pace [4]. MIT is not alone in this regard. According to other research, 80% of the buildings on college campuses in the United States today were constructed prior to 1980, a period when most available funds were dedicated to new construction rather than to upgrades of existing buildings [5]. The physical fabric of the American university is deteriorating. Leadership, money, and engineering, architectural, and preservation skills should be focused on renewing campus infrastructures so that buildings and grounds are ready and able to provide the physical environment for learning and research in the future.

The Department of Facilities defines infrastructure renewal as 'A process of systematically evaluating and investing in maintenance of facility systems and basic structure' [6]. Typical infrastructure renewal projects include the refurbishment and replacement of roofs, heating, ventilating, air conditioning, plumbing, masonry, and fire alarm and electrical systems. The current estimate of the total cost of MIT's infrastructure renewal backlog could approach $1 billion in the next decade and includes the repair and upgrading of building systems and structures and select functional improvements. A goal of the Department of Facilities is to eliminate the accumulated backlog and prevent it from building up once again. To achieve this goal, Facilities sought out tools and processes to help Facilities' infrastructure renewal team, the infrastructure renewal decision-makers, make effective use of available funds through the prioritization of projects. Infrastructure renewal team members were looking for a method that would allow them to apply limited resources to the most important needs first, support consistent, repeatable, and defendable prioritization decisions, consider the impact of risk on these decisions, be flexible and easy to use, and enable careful and thoughtful consideration of alternative choices while supporting consensus. The method was designed to discriminate on the basis of risk, as defined by the members of Facilities' infrastructure renewal team, and does not provide the means to equitably balance funding across the Department of Facilities or MIT. However, the deliberation phase of the method is sufficiently flexible to accommodate concerns related to equity should they be raised by the infrastructure renewal team. After a diligent examination of various alternatives, the aforementioned method, developed originally at MIT for the prioritization of safety and operational experience in nuclear power plants, was adopted and modified by Facilities for its use.

2.2. The need for prioritization

Prioritization methodologies that employ tools such as multi-attribute utility theory, the analytic hierarchy process
(AHP), or both, are not solutions to problems unique to educational facility managers. For example, they are used to plan, prepare, evaluate, and prioritize capital investments in the United States Department of Veterans Affairs (VA) and operating experience in nuclear power plants.

The method used by the Department of Veterans Affairs provides a framework for developing and evaluating capital investment and spending proposals to ensure that they are consistent with the VA's strategic plan and based on well-established business investment practices. Prior to the incorporation of this method, most planning did not include analysis of the risks, costs, and benefits of proposals, justification as part of strategic planning, or assessment of alternatives external to the VA and the United States Federal Government. The VA's decision-makers evaluate proposals against a set of weighted criteria and sub-criteria and use a data evaluation form to verify that the necessary data have been provided. AHP is used to score each proposal. Scores are compared against pre-established minimum levels of acceptability, and those with scores above the minimum level are incorporated in the VA's capital plan [7].

The adopted method was designed to enable nuclear power plants to prioritize safety-related operating experience transmitted to each plant by way of the Nuclear Network and from organizations such as the Nuclear Regulatory Commission and the Institute of Nuclear Power Operations (INPO). Through the Nuclear Network, nuclear power plants report departures in equipment operating parameters and equipment failures to each other, so that the plants not experiencing the problem can assess whether intervention is necessary, and if so, determine a course of action before a potential problem escalates. In the course of a calendar year at a specific power plant, decision-makers typically review between 800 and 1000 operating experience recommendations. The decision-makers' course of action is apparent in many cases. For example, all Significant Event Notifications from INPO are ranked as first priority, items to be evaluated expeditiously until complete, while operating experience items relating to equipment not present at the plant or otherwise not applicable are ranked as third priority. Information notices about random equipment failures or areas where formal evaluation is not practicable are also transmitted to each plant as priority three operating experience items. The majority of operating experience items are either not applicable to a particular plant or are information notices; however, between 70 and 80 items fall into the second priority and require more detailed evaluations within six months. Due to staffing constraints at the subject plant, 140 days are required for a typical operating experience item to be reviewed and evaluated. With between 70 and 80 items to process at 140 days per item, the backlog can expand quickly. Therefore, it is necessary to prioritize operating experience items so that the most important ones are processed before those of lesser importance [3].
2.3. Theoretical basis

The method developed in [3] and subsequently adapted by the authors is an analytic-deliberative process in which the decision-makers use the concepts of risk-informed decision making and analysis and other quantitative means to determine the priorities of projects. The method employs multi-attribute utility theory, technical input from local experts, and focused deliberation to determine all weights and make all final decisions. The analytic hierarchy process was used to elicit initial pairwise comparisons only. Final weights were determined by deliberations among the members of Facilities' infrastructure renewal team [8].

The analytic-deliberative process is composed of two discrete and linked processes, analysis and deliberation. Analysis is a systematic application of specific theories and methods, including those from natural science, social science, engineering, decision science, logic, mathematics, and law, for the purpose of collecting and interpreting data and drawing conclusions about phenomena. It may be qualitative or quantitative. Its competence is typically judged by criteria developed from the fields of expertise from which the theories and methods come. Analysis uses replicable methods developed by experts to answer factual questions and bring new information to the process. Deliberation, by contrast, uses processes such as discussion, reflection, and persuasion to communicate, raise and collectively consider issues, increase understanding, arrive at substantive decisions, and bring new insights, questions, and problem formulations to the process. Deliberation involves the participation, or at least the representation, of the relevant range of interests and values, as well as scientific and technical expertise [9].

Multi-attribute utility theory enables decision-makers to compare alternatives by way of a numerical index that represents the decision-makers' degree of preference of one alternative over another. The concept of multi-attribute utility theory suggests that one establish a weighted set of independent and additive attributes that represent all dimensions of the decision as defined by the decision-maker [10,11]. The notion of utility enables the decision-maker to express the degree of preference of non-monetary attributes such as impact on safety, the environment, or the external image of the organization, as shown in the present paper. The priority or degree of preference of each alternative is determined by the decision-makers' rating of each of these attributes, with the result being a numerical index representing preference, i.e. priority. The performance index is the summation of the weight of each attribute multiplied by the utility of each attribute. The higher the performance index, the higher the priority of the alternative.

\[ PI_j = \sum_{i=1}^{K_{pm}} w_i u_{ij} \qquad (1) \]
In Eq. (1) above, PI_j is the performance index for item j, w_i is the weight of performance measure i, u_ij is the utility of
performance measure i for item j, and K_pm is the number of performance measures [3]. Facilities' infrastructure renewal team used disutility in lieu of utility so that the performance index would reflect the decision-makers' aversion toward risk. Therefore, a project with a performance index of higher value represents a project that could mitigate more potential risk than a project with a performance index of lower value, i.e. less potential risk.

The analytic hierarchy process (AHP) is a method in which the objectives, attributes, or elements of a decision are arranged in a hierarchy and weighted according to the degree of preference the decision-makers assign to each element. The degree of preference is determined by way of the scale shown in Fig. 1 [12]. The AHP supported the development of the weights and disutilities in Eq. (1); however, AHP was not used as the decision-making method. The numerals 1 and 9 indicate the extremes of the scale, i.e. from equal preference of the two components of the pairwise comparison to absolute preference of one component over the other. Intermediate levels of preference are identified as 3 (moderate preference), 5 (strong preference), and 7 (very strong preference) of one component. The even numbers implied in the scale but not shown (2, 4, 6, and 8) are used when compromise is necessary. The results of the pairwise comparisons are placed in a square matrix, which is squared repeatedly until the difference between the normalized row sums of sequential iterations equals or closely approximates zero. At this point, the values of the normalized row sums represent the matrix's eigenvector and the weight of each attribute relative to the others [12].

Fig. 1. AHP preference scale.
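To make these two calculations concrete, the following minimal Python sketch (not part of the published method; it assumes NumPy is available) derives weights from a pairwise-comparison matrix by the matrix-squaring procedure described above and then evaluates Eq. (1) for a single candidate project. The pairwise judgments are those shown later in Table 3 for the primary impact categories; the disutility ratings are invented purely for illustration.

```python
import numpy as np

def ahp_weights(pairwise, tol=1e-6, max_iter=20):
    """Approximate the principal eigenvector of a pairwise-comparison matrix
    by repeatedly squaring it and normalizing the row sums, as described in
    the text for the AHP step."""
    A = np.array(pairwise, dtype=float)
    prev = None
    for _ in range(max_iter):
        A = A @ A                    # square the matrix
        A = A / A.max()              # rescale to avoid numerical overflow
        w = A.sum(axis=1)
        w = w / w.sum()              # normalized row sums
        if prev is not None and np.abs(w - prev).max() < tol:
            break
        prev = w
    return w

def performance_index(weights, disutilities):
    """Eq. (1): PI_j = sum_i w_i * u_ij (disutilities used in lieu of
    utilities, so a higher PI_j means more potential risk mitigated)."""
    return float(np.dot(weights, disutilities))

# Pairwise judgments for the three primary impact categories (Table 3).
pairwise = [[1.0,   2.5, 1.5],
            [1/2.5, 1.0, 1.0],
            [1/1.5, 1.0, 1.0]]
w = ahp_weights(pairwise)
print(np.round(w, 3))            # approximately [0.491 0.233 0.276]

# Hypothetical disutility ratings for one candidate project, rolled up to the
# primary impact categories purely for illustration.
u_j = [0.30, 0.05, 0.19]
print(round(performance_index(w, u_j), 3))   # 0.211
```

Note that the resulting weights reproduce the values adopted for the primary impact categories in Section 3.4, which suggests the squaring approximation is adequate for comparison matrices of this size.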
3. Method

3.1. Overview

The authors led Facilities' infrastructure renewal team through a series of workshops which took place over several months. The infrastructure renewal team was multidisciplinary and consisted of Facilities' decision-makers with expertise in finance and accounting, utility operations, electrical, structural, mechanical, and civil engineering, architecture, facility operations, and space planning. The workshops provided team members the venue to learn about the analytic-deliberative process and its tools and to apply what was learned to achieve the results presented in this paper. The first workshop focused on learning the definitions and theoretical basis of the concepts and the applicability, strengths, and limitations of the tools,
identifying the objectives to be used to prioritize infrastructure renewal projects, and eliciting each team member's expectations of the prioritization method. Subsequent workshops focused on applying the freshly learned concepts and tools and developing the method.

During the first workshop, tutorials in the fundamentals of probabilistic risk assessment, prioritization, the analytic hierarchy process, multi-attribute utility theory, decision analysis, and their potential application to infrastructure renewal projects were conducted. Following the tutorial portion of the first workshop, members of the infrastructure renewal team shared their expectations regarding the process they would develop, the tools they would employ, and the desired benefits of a prioritization method. This dialogue revealed that Facilities' infrastructure renewal team wanted a prioritization method that:

• Would enable the infrastructure renewal team to identify the most important and most risky projects before the confounding impacts of emotion, cost, and internal politics are brought, whether intentionally or unintentionally, into the prioritization process.
• Would support risk-informed decisions.
• Would provide consistent, repeatable, and defendable project prioritization decisions.
• Would be easy to learn and use. That is, a method that did not require considerable knowledge in probabilistic risk assessment theory and methods.
• Would provide the means to rank projects relative to each other and allow one to do so within and between project selection cycles.
• Would capitalize on the tacit and explicit knowledge of each team member.
• Would encourage a team consisting of members with diverse professional backgrounds to reach consensus through the process of deliberation.
• Would incorporate the wisdom of others wherever possible, as it did not make sense to build something that was already available either commercially or privately.

The method was developed during subsequent workshops following the steps listed below:

Step one: Develop the project selection process.
Step two: Define impact categories and performance measures.
Step three: Weight impact categories and performance measures.
Step four: Define and weight constructed scales.
Step five: Check for consistency.
Step six: Check for validity and reliability.
Fig. 2. Process map.
3.2. Step one: develop the project selection process

The authors worked together and then with members of the infrastructure renewal team to develop the process shown in Fig. 2 and explained in greater detail below. To facilitate efficient and effective meetings and to stimulate substantive and focused discussion with the infrastructure renewal team, the authors adopted the practice of preparing and presenting draft versions of whatever was to be discussed and developed during meetings. This practice minimized the time that would have been required to create materials from scratch, as the draft versions gave team members something to react to and modify as they saw fit.

A responsibility of the infrastructure renewal team is to produce a list of projects, ranked according to relative importance and risk, for funding consideration and, if selected, subsequent implementation. To achieve this end, the project prioritization process should allow team members the ability to assess and rank projects efficiently and incorporate the means to include and exclude projects at any time with little effort. In Fig. 2, the process block labeled 'Potential Projects' represents the many sources of potential infrastructure renewal projects, i.e. projects identified by a facility condition audit of the campus completed in 1998 by an external consultant, MIT's property insurer, operations personnel, regulatory agencies, and members of the campus community. To make effective and efficient use of the infrastructure renewal team and to minimize the delay of the implementation of critical projects, a pre-screening process was developed so that only projects whose end state or rank was not already known would be assessed and ranked according to the prioritization method. Pre-screening is an activity represented by the block labeled 'Initial Sorting', where projects were divided into groups of projects that must be done, projects that must not be done, projects that were small enough in dollar value that they could be addressed directly by maintenance personnel in the course of day-to-day operational work, and projects that were to be ranked using the prioritization method. There is no benefit to spending time sorting and prioritizing projects that have already been selected out of a prioritization method
because they must be done for some reason, e.g. a major safety problem, regulatory edict, or a senior management decision to support a particular programmatic need. Similarly, there is no benefit to prioritizing work that does not need to be done. For example, a deficiency could exist in a building slated for major renovation or demolition in the near future, in which case interim repairs could be made instead of a full system replacement. The remaining projects were assessed and ordered according to the prioritization method. If infrastructure renewal team members questioned the inclusion of a project in the 'Must Do' category, they could determine its rank by way of the method, compare it to other projects ranked with regard to importance and risk, and challenge the person who proposed it. The outcome of the process was an ordered list of projects. The infrastructure renewal team then reviewed and validated the entire list to verify that a particular project and its order relative to other projects made sense. The prioritization method contains a level of subjectivity and does not incorporate every nuance of a decision; therefore, it is not productive to focus on the false sense of accuracy inherent in the mathematical process.
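To make the pre-screening step concrete, here is a minimal sketch (illustrative only; the class and field names are hypothetical and not part of Facilities' actual tooling) of the 'Initial Sorting' block of Fig. 2, in which only projects whose end state or rank is not already known proceed to the prioritization method.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    must_do: bool = False       # e.g. major safety problem or regulatory edict
    must_not_do: bool = False   # e.g. building slated for demolition
    minor_work: bool = False    # small enough for day-to-day maintenance
    ratings: dict = field(default_factory=dict)   # levels per performance measure

def initial_sort(projects):
    """'Initial Sorting' block of Fig. 2: only projects whose end state or rank
    is not already known go on to the prioritization method."""
    groups = {"must do": [], "must not do": [], "minor work": [], "prioritize": []}
    for p in projects:
        if p.must_do:
            groups["must do"].append(p)
        elif p.must_not_do:
            groups["must not do"].append(p)
        elif p.minor_work:
            groups["minor work"].append(p)
        else:
            groups["prioritize"].append(p)
    return groups
```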
3.3. Step two: define impact categories and performance measures

The value tree, Fig. 3, was constructed in several stages with members of the infrastructure renewal team under the guidance of the authors. Generally, the elements of the value tree were constructed in sequence from top to bottom; however, there was so much deliberation about the definitions of each element that several iterations and revisions were required before consensus was reached and the tree was completed. Definitions of each impact category and performance measure are shown in Table 1 below. The following description portrays the process in sequence and necessarily fails to capture the dynamics and value of the iterations.

Fig. 3. Value tree.

The overall goal of Facilities' infrastructure renewal team is to create a prioritized list of projects. Under the overall goal are impact categories, which are classifications of the projects based on the expected impact if they are not addressed or completed. In the first stage of step two, team members decided that the three primary impact categories, Impact on Health, Safety, and the Environment; Economic Impact of the Project; and Coordination with Policies,
Table 1
Definitions, impact categories and performance measures

Complexity of contingencies: Minimize the cost of contingency arrangements necessary for the continuity of academic activities and operations of the Institute. Performance measure: the monetary cost of contingency arrangements necessary to ascertain the continuity of academic activities and operations for the restoration period.

Coordination with policies, programs, and operations: Consider the degree of association of the proposed project with the academic (teaching and research) and business objectives of the Institute and minimize the impact that its delayed completion may have in terms of public image, academic program budgets, and the number of students involved and employed in the associated program.

Economic impact of the project: Evaluate the proposed project considering the economic impact that a delayed completion may have in terms of the physical damage of real and intellectual property, disruption of continuity of Institute operations, and wasted moneys representing added costs of condition-induced deterioration and lack of modernization-induced efficiencies.

External public image: Minimize the impact of (a) the delayed completion of the project and (b) events associated with this delay on the image of the Institute held by parents of prospective students, prospective students, granting agencies, donors, and regulatory agencies. Performance measure: degree of the negative image held by parents of prospective students, prospective students, granting agencies, donors, and regulatory agencies.

Impact on the environment: Minimize the impact on the environment from hazards associated with deficiencies that will be corrected with the proposed project. Performance measure: the severity of environmental damage caused by events associated with delayed completion of the proposed project. Impact on the environment applies to the environment outside of campus buildings and to impacts that could occur in the utility systems beyond the projection of the exterior façade of any buildings.

Impact on health, safety, and the environment: Evaluate the proposed project considering risk reduction opportunities introduced by the project's completion. Minimize risk to people and the environment by correcting deficiencies associated with the proposed project.

Impact on people: Minimize the impact on students, faculty, and the public from perils associated with deficiencies that will be corrected with the proposed project. Performance measure: death, injury, and illness of individuals affected by the delayed completion of the proposed project.

Impact on property, academic, and institute operations: Minimize the impact on property (buildings, equipment, and intellectual property) from damage associated with the delayed completion of the proposed project. Minimize business interruption caused by events associated with the delayed completion of the proposed project, i.e. ascertain the continuity of academic activities (teaching and research) by making appropriate contingency arrangements.

Impact on public image: Minimize the impact on the positive image that the Institute strives to maintain toward the community, parents, business partners, sponsors, regulatory agencies, and local government.

Intellectual property damage: Minimize the impact on intellectual and intangible property from damage associated with the delayed completion of the proposed project. Performance measure: degree of 'replaceability' of affected property associated with the delayed completion of the proposed project.

Internal public image: Minimize the impact of (a) the delayed completion of the project and (b) events associated with this delay on the image of the Institute held by parents of existing students, students, faculty, staff, and other members of the MIT community. Performance measure: degree of the negative image held by parents of existing students, students, faculty, staff, and other members of the MIT community.

Interruption of academic activities and operations: Minimize the impact on the continuity of academic activities (teaching, research, and other supporting activities, such as the work environment or living accommodations) where appropriate contingency arrangements are necessary for the period needed to restore normal operations.

Interruption time: Minimize the length of interruption time of academic activities and other Institute operations. Performance measure: the length of time needed to restore academic activities and operations.

Loss of cost savings: Minimize the loss of cost savings associated with the delayed completion of the project until unacceptable deterioration or damage occurs or excessive additional cost is involved. Also consider possible lost cost savings which otherwise might be obtained with the introduction of new technologies, higher efficiency, and innovative design associated with the proposed project. Performance measure: the amount of savings, as the difference between the current cost and the cost associated with the delayed completion of the proposed project (when irreversible damage may occur); also the additional amount of savings associated with the implementation of new technologies or efficient design.

Physical property damage: Minimize the impact on property (land, buildings, and equipment) from damage associated with the delayed completion of the proposed project. Performance measure: cost of restoration of affected property associated with the delayed completion of the proposed project.

Programs affected by the project: Minimize the impact on academic program budgets, the number of students involved and employed in the associated program, and the Institute's business objectives associated with the delayed completion of the proposed project. Performance measures: budget amount of the academic program or operation, the number of affected students, or both.
Programs, and Operations would encompass the type of projects the team would face. The primary consideration as to whether and to what degree the impact categories should be subdivided is determined by how the impact categories
can be measured. For example, the impact category Impact on Health, Safety, and the Environment could not be measured directly, as there is no way to provide a single score that represents health, safety, and the environment, but
can be measured through the proxies Impact on People, i.e. health and safety, and Impact on the Environment. For a more complex example, consider Coordination with Policies, Programs, and Operations, where the impact category is subdivided into a sub-impact category, Impact on Public Image, and the performance measure Programs Affected by the Project. Since Impact on Public Image consists of two components, public image within and external to the MIT community, a performance measure for each was employed.

According to previous research, subdividing is appropriate only if the decision maker exhibits utility independence among the performance measures. For example, Impact on People is independent or mutually exclusive of Impact on the Environment, and External Public Image is independent or mutually exclusive of Internal Public Image. The objective is to subdivide impact categories into as many levels as necessary to elicit a rating for a proposed project. Subdividing is not appropriate in some situations. For example, consider a decision maker who is concerned about the environment and has identified two performance measures with equal weights, damage to flora and damage to fauna. If flora and fauna were subdivided from environment and each contributes equally to environment, neither could contribute more than 50% of the weight of environment. Therefore, there would be no way to fulfill the decision maker's desire to assign full weight to environment when either flora or fauna is maximally damaged. Since the decision maker desires that the full weight be assigned to environment when either flora or fauna is maximally damaged, subdividing is not appropriate. Instead, the decision maker should measure the environment with a single performance measure [3].

Each impact category and performance measure was given a preliminary definition by the authors so that all team members would consider each one independently in the context of a fictitious potential project. The authors challenged infrastructure renewal team members to validate every potential performance measure and impact category against the principles of multi-attribute utility theory, where each criterion must be additive and independent and the set of criteria, taken together, must be mutually exclusive and exhaustive. This caused an expected realization of the inadequacy of the preliminary definitions and encouraged the infrastructure renewal team to focus on the value of each criterion and its definition. Moreover, the infrastructure renewal team decided that one would use the method by asking questions about candidate projects in the negative sense, such as: if this project is not funded and implemented, could there be an adverse impact on Health, Safety, and the Environment?

3.4. Step three: weight impact categories and performance measures

The relative weighting of each impact category and performance measure was elicited from members of the infrastructure renewal team through pairwise comparisons as dictated by the analytic hierarchy process (AHP) and
through extensive deliberation of the AHP-derived results to determine the relative local and global weight of importance, risk, or favorability of each criterion. Local weights quantify the relative weight of sibling criteria: for example, Impact on Health, Safety, and the Environment (local weight 0.491), Economic Impact of the Project (local weight 0.233), and Coordination with Policies, Programs, and Operations (local weight 0.276). Because they are siblings, the sum of the local weights equals 1. Since they are primary impact categories, their local weights equal their global weights. Similarly, Impact on People and Impact on the Environment are sibling performance measures with relative local weights of 0.600 and 0.400 and relative global weights of 0.295 and 0.196, respectively. The relative global weight for each impact category and performance measure is shown in Fig. 3, and the relative global and local weights of each are shown in Table 2.

Table 2
Global (G) and local (L) weights

I. Impact on health, safety, and the environment (L: 0.491, G: 0.491)
   A. Impact on people (L: 0.600, G: 0.295)
   B. Impact on the environment (L: 0.400, G: 0.196)
II. Economic impact of the project (L: 0.233, G: 0.233)
   A. Impact on property, academic, and institute operations (L: 0.600, G: 0.140)
      a. Physical property damage (L: 0.210, G: 0.029)
      b. Intellectual property damage (L: 0.550, G: 0.077)
      c. Interruption of academic activities and operations (L: 0.240, G: 0.034)
         (a) Interruption time (L: 0.500, G: 0.017)
         (b) Complexity of contingencies (L: 0.500, G: 0.017)
   B. Loss of cost savings (L: 0.400, G: 0.093)
III. Coordination with policies, programs, and operations (L: 0.276, G: 0.276)
   A. Impact on public image (L: 0.500, G: 0.138)
      a. Internal public image (L: 0.400, G: 0.055)
      b. External public image (L: 0.600, G: 0.083)
   B. Programs affected by the project (L: 0.500, G: 0.138)

The rating process designed by the authors required that the infrastructure renewal team use the AHP preference scale in Fig. 1 to make pairwise comparisons individually and then, through deliberation, agree upon one rating for each impact category and performance measure. The post-deliberation ratings were then to be entered into a Microsoft Excel-based AHP weighting program developed at MIT (Microsoft is a registered trademark of Microsoft Corporation in the USA and other countries). However, prior to the first workshop on February 9, 2001, the program developed at MIT was replaced by a commercially available application, Expert Choice 2000 (Expert Choice, Inc.), to take advantage of its input and display functionality. For example, Table 3 shows Facilities' decision-makers' post-deliberation assessment of the degree of preference of each primary impact category over the others using the AHP preference scale.
Table 3
Degree of preference via AHP (rows compared against columns)

                                                           I        II       III
I.   Impact on health, safety, and the environment        1        2.5      1.5
II.  Economic impact of the project                       1/2.5    1        1
III. Coordination with policies, programs, and operations 1/1.5    1        1
The impact category Impact on Health, Safety, and the Environment is moderately preferred over the impact category Economic Impact of the Project, while the impact categories Economic Impact of the Project and Coordination with Policies, Programs, and Operations are equally preferable. Weights are then calculated by the analytic hierarchy process.

3.5. Step four: define and weight constructed scales

The performance measures are the entry point into the method and are represented by the labeled boxes shown in Fig. 3 below the dashed line near the bottom of the value tree. The constructed scale for each performance measure depicts several levels that describe a progression of weighted disutilities over a range of 0–1.00. The levels were defined through deliberative methods and weighted using AHP in a manner similar to that which was used to determine the weights of impact categories and performance measures, but normalized. The method was designed so that infrastructure renewal team members would select a level, and hence its corresponding disutility, from the descriptions for each constructed scale. Using the constructed scale for impact on people as an example, each team member would respond to the question: if we do not implement the project being considered for selection, could the impact on people be level 0, 1, 2, or 3? This constructed scale captures the prediction as to impact on people, specifically gradations of injury or death, should the project under evaluation not be selected for implementation. Once levels were determined individually, team members deliberated to determine a level representative of the entire team. In situations where team members prefer a measure of disutility between the ones shown in tabular format, the graphical format or its mathematical function can be used. Fig. 4a–j show the constructed scales in both tabular and graphical formats for the ten performance measures. The disutility curves, by nature of their concave shapes, depict Facilities' aversion toward risk.

While the range of the descriptions and the associated text appear straightforward and easy to grasp, their development was not. As Impact on People carries the highest weight of all constructed scales, the infrastructure renewal team invested much time deliberating over the number of levels that would be needed, the descriptions of each level, and the disutility values. While there is always a probability, albeit remote, that a person could be injured or killed if any project was not implemented, the team focused
on realistic outcomes by way of scenarios. For example, a project to repair a cracked limestone molding at a building parapet was presented to the team for consideration. Team members asked questions about the location of the cracked molding, i.e. is it over an entrance or over an area people cannot get to, and the severity of the crack, i.e. does the crack completely separate a section of the molding from its substrate, is the crack small and stable and not showing signs of propagation, or has part of the cracked molding already fallen. If the cracked limestone molding were over an entrance to a building or in a location where a person could pass, Facilities would divert pedestrian traffic and mitigate the danger immediately and not waste time prioritizing. However, once the potential for danger was eliminated, the repair of the limestone molding would be prioritized according to the method described in this paper.

The intent of the constructed scale for Impact on the Environment is to elicit Facilities' rating of an infrastructure renewal project in terms of impact on the environment. Much time was invested in discussing the conditions under which an impact could take place and the descriptions for each level. The infrastructure renewal team determined that a project could impact the environment as long as the effect of not implementing the project progressed beyond the building's roof, foundation, and the boundaries defined by its exterior surfaces and their extension into the soil. In a manner similar to the deliberation on impact on people, a scenario was most helpful. Consider the hypothetical case of a project to replace a transformer containing PCBs that is located in the basement of a building. If there were a floor drain nearby and the transformer developed a leak, fluids containing PCBs could flow into the building drain system, the municipal drain system, and ultimately into the environment; however, if a floor drain were not present, the performance measure Impact on the Environment would receive low scores. The primary reason for the separation of the environment from any impact occurring within a building was to prevent double counting, and thus unfair weighting of the impact, as it would already be assessed by other performance measures such as Physical Property Damage [12]. Please note that Impact on the Environment is related only to impact should the project not be implemented and is not related to any potential impact caused during the actual implementation. While three members of the infrastructure renewal team had a fundamental knowledge of the environmental issues associated with buildings and building systems, none were
credentialed experts in environmental engineering or any of the environmental sciences. The team, using the established definition for Impact on the Environment, identified only a few infrastructure projects that could be potentially rated as
impacting the environment in a minor way. Therefore, the team decided that the levels and descriptions would be used as shown herein with the understanding that they would be validated by an environmental expert in the future.
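To illustrate how a constructed scale is applied, the following sketch pairs discrete levels with disutilities and interpolates between levels, mirroring the graphical form of the scales mentioned in step four; the level and disutility values shown are hypothetical and do not reproduce the actual scales of Fig. 4a–j.

```python
import numpy as np

# Hypothetical constructed scale: the level descriptions and disutility values
# here are illustrative only; disutility rises with severity level.
levels = [0, 1, 2, 3]
disutility = [0.00, 0.06, 0.19, 1.00]

def rate(level):
    """Disutility for a selected level; fractional levels are interpolated,
    mirroring the graphical form of the constructed scales."""
    return float(np.interp(level, levels, disutility))

# A team member who judges the impact to lie between levels 1 and 2 can use
# the curve (interpolation) rather than the table.
print(rate(2))     # 0.19
print(rate(1.5))   # 0.125
```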
Fig. 4. Performance measures: constructed scales and disutility curves. (a) Impact on people. (b) Impact on Environment. (c) Loss of cost savings. (d) Intellectual property damage. (e) Physical property damage. (f) Interruption of operations: interruption time. (g) Interruption of operations: complexity of contingencies. (h) External public image. (i) Internal public image. (j) Programs affected by the project.
There are two aspects to the performance measure Loss of Cost Savings: (1) the loss of cost savings resulting from not implementing a project in a timely manner and (2) the loss of potential gains by not implementing less costly or more efficient alternatives. For example, this performance measure is intended to encourage the infrastructure renewal team to consider the value of lost energy savings as a result of wet insulation from a roof leak that is not repaired, and the losses resulting, or financial gains not realized, from not replacing steel-framed single-pane windows with more energy-efficient designs.

Important and highly valued products of an institution like MIT are its research, both current and completed, and its archives and special collections, as they represent the largest share of MIT's income and its value to society. Since the duration of some research can extend for many years and, if interrupted, the research must be started over, and since both research and certain artifacts require consistent physical conditions, Facilities' infrastructure renewal team should be aware that the ranking of a project, if it is not selected for implementation, could have a negative impact on the outcome of the research or its delivery and on the condition of the artifact. This performance measure encourages the team to look beyond the physical damage caused by an infrastructure deficiency that is not mitigated and consider
the potential impact on MIT's income stream and the intangible value and degree of replaceability of treasured artifacts. Most intellectual property damage, if it were to occur, is expected to be minor, as buildings and systems are sound, there is an extensive fire alarm system on campus, and there are skilled people who would respond to a problem immediately and have the ability to call in reinforcements, if necessary. Also, in certain sensitive areas there are electronic systems that monitor particular building system or climatic parameters and report alarms to a central operations center if predefined thresholds are crossed.

The objective of the Physical Property Damage performance measure is to identify the level of property damage, i.e. damage to land, buildings, and equipment, one could expect if the project under consideration was not implemented.

The constructed scale for Interruption of Operations: Interruption Time captures the infrastructure renewal team's assessment of the length of time academic or administrative activities or living accommodations could be diminished or curtailed due to impact caused by a project that was not implemented. The infrastructure renewal team indicated that, generally, short-term interruptions can be tolerated, but in an academic environment, where the length of a class is 3.5 months, an interruption of several weeks can be
devastating to the student and to the organizations that support the student. Also, research may be adversely impacted, as data collection could be interrupted and reporting milestones established by granting entities could be missed. While MIT would undertake substantial efforts to relocate a class, office, or research, the length of the interruption could result in deleterious effects anyway.

The infrastructure renewal team was also concerned about the difficulty and cost of initiating some contingency measures, even if only for a short time. That is to say, it is fairly easy and inexpensive to locate a room on campus for a class where the method of instruction is by overhead projector or computer-generated slides, even if the class has to be scheduled at a different time for a few days, whereas it could be difficult and costly to locate a high-level wet lab either on or off campus. If the causative event impacts a much larger area, for example, many classrooms, offices, labs, or perhaps an entire building, the complexity and cost of initiating contingencies is expected to increase dramatically.

MIT is aware of how it is perceived in the media, as negative attention could impact a prospective student's decision to study at MIT, the level of encouragement the parents of a prospective student could give to their child to engage in academic studies at MIT, a granting agency's decision to fund research, a donor's decision to support MIT financially, and the good will the Institute has developed over time with regulatory agencies. The External Public Image performance measure provides a value as to how far the adverse publicity could spread, i.e. from locally to internationally, and whether it could adversely affect enrollment or faculty recruitment.

Internal Public Image provides a measure for the impact internal to MIT that could result from not implementing a project under consideration. This constructed scale helps the infrastructure renewal team quantify the negative image held by parents of existing students, existing students, faculty, staff, and other members of the MIT community. For example, the infrastructure renewal team could ask whether the decision not to implement a project could result in a student sending a letter critical of Facilities to the President of MIT, or whether the impact could be so egregious that a protest would take place on campus that draws external negative attention and thus increases the impact on external public image.

By way of the constructed scale Programs Affected by the Project, the infrastructure renewal team would provide an assessment as to the impact on academic programs and budgets and MIT's business opportunities. That is, would a delay in the implementation of a project prevent an academic department from expanding or from providing research opportunities to a number of students greater than it currently does?

3.6. Step five: check for consistency

The Expert Choice application calculates a consistency ratio every time weights, as a result of consensus achieved
through deliberation, are calculated for impact categories, performance measures, and constructed scales. The consistency ratio represents the degree to which a judgment follows the transitive property, i.e. if A is more important than B and B is more important than C, then A is more important than C. A consistency ratio equal to or less than 0.1 suggests that the comparison is consistent. Where the consistency ratio was initially greater than 0.1, the decision-makers, under the guidance of the authors, reviewed the ratings resulting from the pairwise comparisons and deliberation and adjusted the ratings by way of additional iterations of deliberation and pairwise comparisons until the final consistency ratio was calculated to be less than or equal to 0.1 or was deemed acceptable [12]. The final consistency ratio for each performance measure is shown in Fig. 4a–j.

In addition to the consistency ratio as a means to ascertain the level of consistency, sensitivity analyses were conducted by the infrastructure renewal team. The team checked for consistency within each constructed scale to ensure that the disutility values were equivalent to the descriptions of each level and to make certain that, in terms of the overarching objective of the method, the potential impact of each constructed scale was relatively aligned with the potential impact of other constructed scales and that transitive relationships were preserved. For example, consider the constructed scale for Interruption of Operations: Interruption Time, where level 2 represents a potential interruption of operations that could extend from 1 to 4 weeks and level 1 represents a potential interruption of operations that would not exceed 1 week in duration. Although the disutility of level 2 (0.19) is 3.17 times the disutility of level 1 (0.06), while the text describing level 2 states a range of four times the outermost range of level 1, i.e. moderate interruption (1–4 weeks) versus minor interruption (less than 1 week), Facilities' infrastructure renewal team members were satisfied that the range in disutility by number was equivalent to the range in disutility by description.

The team also checked for consistency across performance measures and impact categories in a manner similar to the check within a constructed scale. Consider the constructed scale for Impact on People versus Internal Public Image; specifically, the global effect, i.e. (global weight)(disutility), of minor injury, (0.295)(0.05) = 0.015, vs. the global effect of minor internal publicity, (0.055)(0.04) = 0.002. This 7.5 times difference in global effect is consistent with Facilities' overarching desire to protect people from injury. The consistency check between minor injury and major property damage caused much deliberation in the infrastructure renewal team. The global effect of minor injury is (0.295)(0.05) = 0.015 and the global effect of major property damage is (0.029)(0.27) = 0.008. In this instance, Facilities is more averse to the potential of a minor personal injury than to major property damage by a factor of 2, even though the cost of a broken arm or laceration is much
less than $1M–$10M in property damages. After much deliberation, Facilities' decision-makers decided that avoiding a minor injury, a possible liability claim by an injured party, and the adverse external publicity such a claim could attract was more important than the potential cost of property damage. Therefore, the team considers the constructed scales to be consistent.

3.7. Step six: benchmarking

Benchmarking was used to determine whether the project prioritization method was valid and reliable in the context of the infrastructure renewal team's expectations; particularly, whether the result, a prioritized list of projects, made sense to team members. To do so, several projects with priorities previously determined by discussion alone, Facilities' former means of prioritizing projects, were re-prioritized using the method described herein. The results were compared to determine whether the relative priority calculated by way of the method matched the relative priority previously ascertained by discussion. Facilities' infrastructure renewal team felt that the prioritization method reflected the team's feelings about the relative importance of one project to another and the relative weight of one criterion to another.
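The checks described in this step can be reproduced numerically. The sketch below recomputes the global weights and global-effect comparisons quoted above from the local weights of Table 2 and, as an assumed supplement that is not described in the paper, Saaty's standard consistency ratio for the pairwise matrix of Table 3.

```python
import numpy as np

# Global weight = product of local weights along the value-tree branch (Table 2):
w_people       = 0.491 * 0.600            # Impact on people          -> 0.295
w_internal_pub = 0.276 * 0.500 * 0.400    # Internal public image     -> 0.055
w_property_dmg = 0.233 * 0.600 * 0.210    # Physical property damage  -> 0.029

# Global effect = (global weight)(disutility), as in the cross-scale checks:
print(round(w_people * 0.05, 3))          # minor injury              -> 0.015
print(round(w_internal_pub * 0.04, 3))    # minor internal publicity  -> 0.002
print(round(w_property_dmg * 0.27, 3))    # major property damage     -> 0.008

# Saaty's consistency ratio for the Table 3 comparison matrix (standard AHP
# formula, not given in the paper): CR = CI / RI, CI = (lambda_max - n)/(n - 1),
# with random index RI = 0.58 for n = 3.
A = np.array([[1.0,   2.5, 1.5],
              [1/2.5, 1.0, 1.0],
              [1/1.5, 1.0, 1.0]])
n = A.shape[0]
lambda_max = max(np.linalg.eigvals(A).real)
CR = (lambda_max - n) / (n - 1) / 0.58
print(round(CR, 3))                       # about 0.025, below the 0.1 threshold
```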
4. Results

4.1. The method

The final project selection process is shown graphically in Fig. 2, along with the value tree showing the global weights of the impact categories and performance measures in Fig. 3. The constructed scales are shown in Fig. 4a–j, the definitions of impact categories and performance measures in Table 1, and the global and local weights of the impact categories and the performance measures in Table 2.

4.2. Clear and consistently applied definitions

Through several iterations of deliberation, objectives, impact categories, performance measures, and level descriptions were defined by infrastructure renewal team members to the point that they were clear and specific enough to mitigate the possibility of incorrectly assessing a project. As Facilities did not employ people with high-level expertise in the environment or in public and media relations, it sought help from people within MIT with such expertise. MIT public and media relations experts participated in the creation of definitions and the establishment of levels related to internal and external public image. The performance measure Impact on the Environment, and the definitions, levels, and constructed scales pertaining thereto, were the product of the infrastructure renewal team. While Impact on the Environment, as used in this paper, has served
Facilities’ purposes well, a detailed review by persons with high levels of environmental knowledge could be beneficial. During the deliberations pertaining to Impact on the Environment it became clear to the team that unintentionally double counting potential impacts was easy to do and could skew the results. Therefore, the team decided to limit the application of the word, environment, in the constructed scale Impact on the Environment, to the environment outside of campus buildings and utility systems beyond the projection of the exterior fac¸ade of any buildings. Therefore, if projects that if not implemented could cause contamination of interior spaces, they would be assessed under property damage and the impact on operations, but not the environment. While the authors’ original intent was to provide sufficient time for the emergence of concepts, definitions, and processes from the infrastructure renewal team the pace appeared to be sluggish. Therefore, to increase the pace without diminishing quality the authors employed a three stage approach whenever a new task, new and revised material and new concepts were to be introduced and discussed. The goal was to provide the team with something they could react to, as opposed to creating it from scratch, therefore, saving time. In the first stage, the authors created a draft of the task, e.g. the identification of impact categories and their definitions before it was brought to the team for consideration. The second stage consisted of a meeting where the draft was presented to the team for the purpose of eliciting comments and causing deliberation and subsequently, revisions to the draft. In the third stage the decision-makers reviewed the entire method, selection process, definitions, weights, and utilities to make certain that the work done to that date made sense to the decisionmakers. In this study, several minor revisions were made to definitions in the third stage. 4.3. The value of deliberation Deliberation was used extensively during every stage of the development of the method to help infrastructure renewal team members achieve consensus on the value tree, the selection and weighting of the criteria and their relationship to each other, constructed scale levels, and definitions. Deliberation was also a necessary component of the prioritization process where the team deliberated upon and finally selected a single rating for a performance measure from the individual ratings provided by team members. When the differences between the ratings offered by each member were minor, deliberation provided little additional value, as the method is intended to show the relative position of projects based upon subjective expert opinion informed by acquired data and previous knowledge, but not sophisticated and highly accurate probabilistic assessments. However, a high value in deliberation was experienced in the cases where most of the ratings were fairly similar with the exception of one or two ratings that
lay outside the group. The authors suggested three potential reasons for the outlying ratings: (1) the team member misinterpreted the information provided for the project under assessment, (2) the team member misinterpreted the definitions used in the constructed scales, or (3) the team member had information about the project that the other team members did not have. For example, in one case the utility engineer had experience with a particular regulatory agency from previous employment, whereas the others did not. The utility engineer therefore brought a level of expertise to the decision and instructed the other team members, so that when revisions to the ratings were made, they were made by consensus and based on the same data. As the method does not capture every nuance of a decision, Facilities' infrastructure renewal team reviewed the list of projects, ordered according to their performance indices, to validate that the list made sense. If a project was determined to be out of place, deliberation was undertaken, a new performance index and consistency ratio were calculated, and the list was revised accordingly.

4.4. The role of the project advocate

Initially, there were no specific expectations for the project advocate, the person who brings a project to the infrastructure renewal team for consideration, as the role was not specifically defined; anyone could introduce a project by indicating the need for it to be undertaken. While any team member could introduce a project for consideration, those brought to the team with higher levels of technical and financial information proceeded through the process more quickly. This led to a practice whereby the project advocate, the team member with the highest level of technical affinity to the project, would collect information related to it in the context of the constructed scales and submit it to the infrastructure renewal team's leader. This information was compiled with information submitted by other project advocates and distributed to the team before the next project selection meeting. The practice resulted in more efficient project selection meetings, as team members who were not as technically familiar with a project as its advocate had time to prepare. If a potential project was identified by a team member who did not have the technical expertise related to it, or if the team learned of a potential project from a person external to the infrastructure renewal team, the team leader would request that a team member with the appropriate expertise collect and present the background information. Project advocates provide background information related to as many of the constructed scales as possible, e.g. an estimate of the cost of property damage if the project were not selected for implementation. The following is an example of background information provided to the infrastructure renewal team by one project advocate. The project calls for the upgrade of a facility control system, the computer and control system that monitors, adjusts, and
operates a building's heating, ventilation, and air conditioning systems from a central location.

† Estimated cost: $20,000,000 (Phase I, $4 million; Phase II, $4.8 million; Phase III, $4.5 million; Phase IV, $5.2 million; and Phase V, $1.5 million).
† Project scope: replace the oldest existing facility control systems. The project includes replacement of all devices, wiring, and front-end processing equipment.
† Justification: several different facility control systems are currently in operation. The most recent systems were installed in 1994 and 1995. These two systems will not be replaced at this time; however, the systems installed in the 1980s will be. Existing systems are working, but replacement parts and service are difficult to acquire, and multiple front ends are difficult for FCS operators to learn and use.
† Additional information to be used to determine rating levels:
  † System failures may cause indoor air quality problems, an uncomfortable working environment, or both.
  † Unpredictable system shutdowns may disrupt academic, research, and business operations, and repairs and maintenance are expensive.
  † System failure may cause loss of research and of an investigator's ability to attract benefactors and grants in the future.
  † Due to system inefficiencies, 10–15% energy cost savings are not attainable.
  † Some members of the MIT community have already complained.
  † The system monitors hazardous processes in some laboratories.

In summary, the performance measures of the value tree of Fig. 3 may be converted to a checklist of considerations which the project advocate can use to enhance the baseline information of the proposed project and thus facilitate the project prioritization and justification process.
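Such a checklist is easy to represent in software. The sketch below is a minimal illustration only, not the application used by Facilities; the measure names are hypothetical placeholders that merely mirror the kinds of performance measures discussed in this paper (impact on people, property damage, impact on operations, impact on the environment, public image, energy cost).

```python
# Minimal illustrative sketch (not Facilities' actual software): a project
# advocate's checklist keyed to hypothetical performance-measure names that
# mirror those discussed in this paper.
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical checklist of performance measures; the real list comes from
# the value tree (Fig. 3) and Table 1.
PERFORMANCE_MEASURES: List[str] = [
    "impact_on_people",
    "property_damage",
    "impact_on_operations",
    "impact_on_environment",
    "public_image",
    "energy_cost",
]

@dataclass
class ProjectBackground:
    """Background information a project advocate collects before a meeting."""
    name: str
    estimated_cost: float
    notes: Dict[str, str] = field(default_factory=dict)  # measure -> evidence

    def missing_measures(self) -> List[str]:
        """Measures for which the advocate has not yet supplied information."""
        return [m for m in PERFORMANCE_MEASURES if m not in self.notes]

# Example usage with the facility control system project described above.
fcs_upgrade = ProjectBackground(
    name="Facility control system upgrade",
    estimated_cost=20_000_000,
    notes={
        "impact_on_operations": "Unpredictable shutdowns may disrupt operations.",
        "energy_cost": "10-15% energy cost savings currently unattainable.",
    },
)
print(fcs_upgrade.missing_measures())
```

A structure of this kind simply makes explicit which constructed scales still lack supporting information before a project reaches the selection meeting.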
5. Discussion

5.1. Using the method

The value of this method is evident in its ease of use by Facilities' infrastructure renewal team members, as it provides a rational structure for decisions that can otherwise be emotionally and politically charged. While members of the infrastructure renewal team found the concepts and format challenging at first, their difficulties lessened as they gained experience with the process. Nevertheless, one person not associated with the team objected to its use as a valid way to make project selection decisions, arguing that decisions are too important to be left up to a computer and that the method does not guarantee that easy-to-implement projects will be
done first. One author attempted on several occasions to explain the method to the objector, but was not successful. The same author explained that the computer simply performed the mathematics to produce an initial ranking based entirely upon the criteria and weights established by the team, that the computer output provided the foundation for the deliberations, and that projects would be selected based upon their importance as determined by those same criteria. Contrary to the objector's comments, the intent of the method was to identify potential risks and assign resources to mitigate them according to priority, not to expend limited resources ineffectively on whatever was merely easiest.

During implementation, the method was modified periodically following deliberation on feedback received from members of the infrastructure renewal team and observations by the authors. The method, as it was used, evolved to the following:

1. The project's advocate provided written baseline information about the project. This information was distributed to team members.
2. Team members individually selected a level, i.e. a rating, for each constructed scale by asking a question such as: what could be the impact on people if the project were not done?
3. The team deliberated the ratings offered by each team member. If the ratings were close and deemed acceptable, the mean of the ratings was used. If there were outliers in the ratings, the deliberation focused on the reasons behind them and the potential project was re-rated. Rating and deliberating were an iterative process, so that consensus would eventually be achieved.
4. The performance index for each project was computed using a computer software application. Projects were then listed by performance index in descending order, so that the highest-ranked projects appeared first (an illustrative sketch of this computation follows the list).
5. Infrastructure renewal team members conducted a sensitivity analysis of the initial ordered list of projects and made adjustments to the order, by way of deliberation, as necessary.
6. Members of the infrastructure renewal team determined at the outset that, post-deliberation and unless there was a compelling reason to do otherwise, the project with the highest performance index should be done first.
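To make steps 3 and 4 concrete, the following sketch shows one way the arithmetic could be carried out. It is an illustration only, not the software application used by Facilities: the weights, utilities, ratings, and project names are invented placeholders, the outlier test (a rating more than one level from the median triggers deliberation) is an assumption of ours, and the performance index is computed as a simple weighted sum of single-attribute utilities, in the spirit of the multi-attribute utility approach described earlier.

```python
# Illustrative sketch only (not Facilities' application): combine individual
# ratings into a consensus level, then compute a performance index as a
# weighted sum of single-attribute utilities. All numbers are placeholders.
from statistics import mean, median
from typing import Dict, List

# Hypothetical global weights for a few performance measures (sum to 1.0);
# the actual values are given in Table 2.
GLOBAL_WEIGHTS: Dict[str, float] = {
    "impact_on_people": 0.35,
    "property_damage": 0.25,
    "impact_on_operations": 0.25,
    "impact_on_environment": 0.15,
}

# Hypothetical utilities for constructed-scale levels 0..4 (worst impact = 1.0).
LEVEL_UTILITY: List[float] = [0.0, 0.25, 0.5, 0.75, 1.0]


def consensus_level(ratings: List[int]) -> int:
    """Step 3: flag outliers for deliberation, otherwise use the rounded mean."""
    med = median(ratings)
    outliers = [r for r in ratings if abs(r - med) > 1]  # assumed outlier rule
    if outliers:
        raise ValueError(f"Outlying ratings {outliers}: deliberate and re-rate.")
    return round(mean(ratings))


def performance_index(consensus: Dict[str, int]) -> float:
    """Step 4: weighted sum of the utilities of the consensus levels."""
    return sum(GLOBAL_WEIGHTS[m] * LEVEL_UTILITY[lvl] for m, lvl in consensus.items())


# Example: two hypothetical projects rated by three team members per measure.
projects = {
    "Roof replacement": {"impact_on_people": [2, 2, 3], "property_damage": [4, 4, 3],
                         "impact_on_operations": [2, 2, 2], "impact_on_environment": [1, 1, 1]},
    "FCS upgrade":      {"impact_on_people": [3, 3, 3], "property_damage": [2, 2, 2],
                         "impact_on_operations": [4, 3, 4], "impact_on_environment": [0, 1, 0]},
}

indices = {name: performance_index({m: consensus_level(r) for m, r in ratings.items()})
           for name, ratings in projects.items()}

# Steps 5 and 6 then start from this list: highest performance index first.
for name, pi in sorted(indices.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {pi:.3f}")
```

The point of the sketch is only that the computer's contribution is mechanical; the criteria, weights, and consensus ratings that drive the ranking remain entirely the team's.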
5.2. Lessons learned

The following bullets illustrate observations made by the authors during the design and implementation phases of the method.

† One benefit of the method is that decision-makers and others see the risk of not doing a project, no matter where the cut-off line may be, as projects are ranked in order of potential risk.
† The ordered list can be used to identify projects for funding with the intent that projects at the top of the list would be funded first. In cases where the cut-off line, as determined by the level of available funds, bisects a project, decision-makers could substitute the bisected project with one close in rank but with a cost that would not cause the list to exceed available funds, or phase the project in such a way that part of it could be done with current funds and the next part with future funds.
† Since performance indices are normalized, based upon one set of criteria, and all projects are assessed the same way, a project brought forward at any time can be inserted into the prioritized list according to its performance index. The opposite is also true, in that any project can be removed without affecting the relative positions of the others (see the sketch following this list).
† Deliberation was most valuable in helping infrastructure renewal team members realize that they, not the computer software application, make the final prioritization decision.
† Instituting the role of the project advocate streamlined the project selection process.
† The need for a more accurate performance scale for environmental criteria was highlighted in the evolution of this method.
† Experts and other stakeholders should be engaged at the inception of the design process.
† Once the method was established, project selection decisions could be made with fewer members of the infrastructure renewal team present, as long as the project fit well within the criteria and was not contentious or did not otherwise require much deliberation.
† An unexpected benefit was that the mere discussion of probabilistic risk assessment, priority, criteria, and decision support systems has initiated a change in the culture within the Department of Facilities. Although not quantified, one of the authors has witnessed groups of employees not associated with the infrastructure renewal process discussing the need to agree on criteria before making a decision, the relative risk of each alternative, and the value of deliberation.
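The point about inserting a newly assessed project without re-assessing the rest is easy to see in code. The sketch below is a minimal illustration under our own assumptions (hypothetical project names and index values); it simply places a new project into a list kept in descending order of performance index, leaving the relative order of the existing projects untouched.

```python
# Minimal illustration (hypothetical data): insert a newly assessed project
# into a list kept in descending order of performance index. Existing
# projects keep their relative order; nothing is re-assessed.
from typing import List, Tuple

Project = Tuple[str, float]  # (name, performance index)

def insert_by_index(ranked: List[Project], new: Project) -> List[Project]:
    """Return a new list with `new` placed according to its performance index."""
    position = len(ranked)
    for i, (_, pi) in enumerate(ranked):
        if new[1] > pi:          # first project the newcomer outranks
            position = i
            break
    return ranked[:position] + [new] + ranked[position:]

ranked = [("Roof replacement", 0.71), ("FCS upgrade", 0.58), ("Elevator rebuild", 0.40)]
print(insert_by_index(ranked, ("Chiller replacement", 0.63)))
# [('Roof replacement', 0.71), ('Chiller replacement', 0.63),
#  ('FCS upgrade', 0.58), ('Elevator rebuild', 0.40)]
```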
5.3. Current and future uses of the project prioritization method

The method described herein, and the principles upon which it is built, can be used in many ways, particularly in applications where projects or alternatives must be prioritized or assessed against a set of weighted criteria and where a defendable and repeatable process is desirable. For example:

† The method, and particularly the value tree and weighted attributes developed by Facilities, was adapted by a master's degree student at MIT to define end states
for the vulnerability of infrastructures to the threat of terrorism [13].
† The project prioritization method is proposed in two research grant applications: (1) to assess vulnerabilities due to multiple hazards and to prioritize mitigation projects, and (2) to define end states in research on the protection of critical infrastructures.
† The method could be used to select capital investments for implementation [7], but also, although the criteria and weights may need to be restructured, as a tool to assess the value of an existing building against established criteria.
6. Conclusion

Facilities used the method presented herein to prioritize projects within a select group, i.e. projects to repair, replace, or upgrade existing infrastructures. However, since the method was easily modified to identify critical locations in infrastructures on the MIT campus [13] while maintaining consistent use of the criteria originally established by Facilities, the authors conclude that it can be used to satisfy other prioritization or assessment needs in Facilities, as long as the criteria are used in a consistent fashion. For example, the method could be used to determine the relative vulnerability of campus buildings and building systems to external threats (research currently underway by the authors). Moreover, the prioritization method could be adapted by other facility operators to make prioritization decisions on other campuses, in a city's public works department, or in a chemical plant.

An initially unforeseen outcome of this work was the need for an advocate to make certain that each project was well presented to the infrastructure renewal team for rating and deliberation. To this end, project advocates used the constructed scales as a checklist prior to project rating and deliberation sessions.

The authors expect that the method could require updating if, in the future, experience reveals that existing criteria and weights require minor adjustment or are no longer applicable, or that new criteria and weights better represent the views of future stakeholders. From the experience gained by the authors and members of the infrastructure renewal team during the initial development of the prioritization method, it is the authors' opinion that such updates could be achieved easily. The performance measure for the criterion impact on the environment will be updated to reflect input from environmental experts when they are able to participate.
The objective of this paper was to introduce the reader to the authors' efforts to develop and provide Facilities' decision-makers with a systematic approach to prioritizing infrastructure renewal projects. The authors believe that the objective has been met and that further experience with the method will reinforce this belief.
Acknowledgements

The authors gratefully acknowledge Professor George E. Apostolakis and Dr Richard Weil of MIT's Department of Nuclear Engineering for sharing their research and for their patient tutelage; Vicky Sirianni, MIT's former Chief Facilities Officer, for her unflagging support and encouragement; and the members of the infrastructure renewal team, Peter Cooper, John Dunbar, David McCormick, Bernard Richard, Stephen Miscowski, David Myers, Jim Wallace, and Anne Whealan, for their hard work and attention to detail.
References

[1] Bufferd AS. Report of the treasurer for the year ended June 30, 2003. Cambridge, MA: Massachusetts Institute of Technology; 2003.
[2] Sirianni VV. About us: an interview with Victoria Sirianni, chief facilities officer. Cambridge, MA: Massachusetts Institute of Technology; 2004. http://web.mit.edu/facilities/about/index.html [retrieved May 25 from the World Wide Web].
[3] Weil R, Apostolakis GE. A methodology for the prioritization of operating experience in nuclear power plants. Reliab Eng Syst Saf 2001;74:23–42.
[4] Vest CM. Building program update: a special report from the office of MIT President Charles M. Vest. Cambridge, MA: Massachusetts Institute of Technology; 2000.
[5] Kaiser HH, cited in: Rose R. Charting a new course for campus renewal. Alexandria, VA: The Association of Higher Education Facilities Officers; 1999. p. 12.
[6] Sirianni VV. Infrastructure renewal at MIT: planning, persistence, and improved communication. Cambridge, MA: Massachusetts Institute of Technology; 2001.
[7] U.S. Department of Veterans Affairs. VA capital investment methodology guide. http://www.va.gov/budget/capital; FY 2002.
[8] Apostolakis GE, Pickett SE. Deliberation: integrating analytical results into environmental decisions involving multiple stakeholders. Risk Anal 1998;18(5):621–34.
[9] National Research Council. Understanding risk: informing decisions in a democratic society. Washington, DC: National Academy Press; 1996. p. 20–214.
[10] Goodwin P, Wright G. Decision analysis for management judgment. 2nd ed. Chichester: Wiley; 2000.
[11] Keeney RL, Raiffa H. Decisions with multiple objectives: preferences and value tradeoffs. New York: Wiley; 1976.
[12] Saaty TL. The analytic hierarchy process: planning, priority setting, resource allocation. New York: McGraw-Hill; 1980.
[13] Apostolakis GE, Lemon DM. A screening methodology for the identification and ranking of infrastructure vulnerabilities due to terrorism. Risk Anal [in press].