International Journal of Project Management 29 (2011) 155–164
A case study approach for developing a project performance evaluation system

Qing Cao, James J. Hoffman

Area of Information Systems and Quantitative Sciences, Rawls College of Business, Texas Tech University, Lubbock, TX 79409, United States

Received 27 October 2009; received in revised form 11 February 2010; accepted 18 February 2010
Abstract

Prior project management research has identified a wide variety of measures that describe the outcomes of a project and the input characteristics that impact those outcomes. In practice, however, project schedules are still used as the sole project performance measure in some firms. Although the use of project schedules is still a good practice for some companies, for others the use of project schedules as the sole project performance measure can result in industrial projects falling behind schedule and coming in over-budget. In order to examine how the evaluation of project performance can be improved, a two-step approach is documented that was used to design a new project performance evaluation system at Honeywell Federal Manufacturing & Technologies (FM&T), one that enables managers to audit a project and determine where improvements can be made. Lessons learned from the development of the project performance evaluation system at FM&T are then discussed. © 2010 Elsevier Ltd and IPMA. All rights reserved.

Keywords: Project management; Performance measures; Case study; Data envelopment analysis
1. Introduction

We often hear or read about projects that are late, not completed correctly, and/or over-budget. Amazingly, different groups of people still claim that these projects have been successful. Prior project management research has identified a wide variety of measures that describe the outcomes of a project and the input characteristics that impact outcomes (Banker et al., 1984; Ling et al., 2009; Prabhakar, 2008; Thomas and Fernandez, 2008). The most commonly used project outcome measures include cost, schedule, technical performance outcomes, and client satisfaction. Although in general terms project performance is recognized as a multidimensional parameter (Baccarini, 1999; Bannerman, 2008; Shenhar et al., 2001), several organizations still evaluate project performance primarily through
cost and schedule performance measures (Might and Fischer, 1985). One possible outcome of the use of project schedules as the sole project performance measure is that industrial projects can fall behind schedule and come in over-budget. For example, in the case of Honeywell Federal Manufacturing & Technologies (FM&T), project schedules were used as the sole project performance measure. This method centered on measuring ongoing and final project performance against project goals. While this approach provided some basis for evaluating the extent of success across projects, it did not explicitly take into account differences in project characteristics which may have impacted cost and schedule performance, nor did it take into consideration the appropriateness of project goals (Freeman and Beale, 1992). Over the years, several studies have examined approaches to improve project management practices (Fortune and White, 2006; Lewis, 2000; Sullivan and Beach, 2009; Yu et al., 2005). One of these approaches is productivity-based cross-project learning, which has been identified as being vital for any organization seeking to continuously improve
its project management practices (Lewis, 2000). The first step in cross-project learning is to identify outstanding projects that can serve as role models. A minimum prerequisite for identifying these best practice projects is the ability to measure productivity-based performance. Measuring project performance also allows for the creation of incentives that are likely to yield higher performance. In order to improve the evaluation of project performance at FM&T, the company decided to participate in a research project focused on designing a new evaluation system that would enable managers to audit a project and determine where improvements could be made. Specifically, a two-step approach, with cross-project learning serving as part of its theoretical foundation, was used to design the new project performance evaluation system. In addition to designing the new system, the following research questions are examined as part of the research project at FM&T:

Research Question 1: Does the use of project schedules as the sole project performance measure result in the majority of projects at FM&T being inefficient?

Research Question 2: Will the development and implementation of a new performance management system provide both tangible and intangible benefits for FM&T?

Research Question 3: Will engaging in cross-project learning provide benefits to FM&T?

The purpose of the current paper is to examine these research questions and to illustrate how a case study approach can be used to develop a new project performance evaluation system at FM&T. Lessons that can be learned during the implementation of such a system are also presented. In the next section of the paper the project management literature is reviewed, specifically the literature regarding performance measurement of project-based activities. Next, the two-step approach that we utilized for designing the new performance evaluation system for FM&T is discussed. Results from the case study are then presented, along with the answers to the research questions posed above. The paper concludes with a discussion of the lessons learned from the development of the new project performance evaluation system.

2. Literature review

Project management is different from manufacturing-type operations in that project management is the business process of producing a unique product, service, or result over a finite period of time (Project Management Institute, 2004). The primary challenge of project management is to
achieve all of the project goals and objectives while adhering to project constraints (Harrison and Lock, 2004). Extant studies have documented various measurements that describe the outputs of a project and the input factors that impact those outputs (Dumaine, 1989; Morris and Hough, 1987; Shenhar and Dvir, 2007; Turner, 2009). According to Belassi and Tukel (1996), project success factors are multidimensional and include factors related to the project (e.g., size, urgency); factors related to the project managers and team members (e.g., competence, leadership); and factors related to the external environment (e.g., customer, market). Although there is no universally agreed definition of project output measures, the most cited project output variables comprise cost, schedule, technical performance outputs, and customer satisfaction (Kerzner, 2004; Pinto and Slevin, 1988). In spite of the multidimensional nature of project performance, cost and schedule performance measures remain the most widely used methods of project performance evaluation by organizations in the real world (Project Management Institute, 2004). Moreover, most of the project performance evaluation methods used by organizations do not explicitly consider key input variables that add value for the client (Farris et al., 2006). Because of this, the design and use of performance measurement systems has received considerable attention in recent years (Kennerley and Neely, 2003). Neely et al. (1997) note that inadequately designed performance measures can result in dysfunctional behavior, often because the method of calculation encourages individuals to pursue inappropriate courses of action. Although the importance of performance measurement has long been recognized by practitioners and academics from a variety of functional disciplines (Neely et al., 2005), and even though many organizations have redesigned their measurement systems to take into consideration their current environment and strategies, few organizations appear to have systematic processes in place to ensure that their performance measurement systems continue to reflect their environment and strategies (Kennerley and Neely, 2003; Neely, 1999). Recent research pertaining to the implementation and use of performance management systems has identified the most severe problems organizations encounter as being lack of top management commitment; performance management getting a low priority or its use being abandoned after a change in management; not having a performance management culture; management putting a low priority on implementation; and people not seeing enough benefit from performance management (de Waal and Counet, 2009). The literature reviewed above indicates that there is considerable agreement regarding what constitutes project success (i.e., delivering value to the client), and that while achieving time, cost, and quality targets contributes to value to the client, they are not the primary success criteria. As mentioned above, in this study we examine a company (FM&T) which still uses project schedules as a key performance measure.
In order to improve the evaluation of project performance at FM&T, we design a performance management system that enables managers to audit a project and determine where improvements can be made. Project management research going as far back as the 1980s, which argues that time, cost, and quality are not per se the most important measures (e.g., Morris and Hough, 1987; Shenhar and Dvir, 2007; Turner, 2009), serves as the basis for our design approach. Specifically, we propose a project performance evaluation approach that allows managers to explicitly consider differences in input variables across projects when evaluating project outputs. Moreover, we also suggest that project managers employ a data envelopment analysis (DEA) method, which does not require managers to specify variables a priori, in their project performance evaluations (Charnes et al., 1978).

3. Scope of research project and methodology

3.1. Company overview

We were invited by Mr. Joe Vance to help develop a systematic approach to enhance engineering project performance at FM&T. Mr. Vance is the Director of Engineering at the Honeywell Federal Manufacturing & Technologies plant in Kansas City, Missouri. FM&T provides high-tech production services to government agencies including the National Nuclear Security Administration (NNSA). For more than half a century, Honeywell and its predecessors have manufactured some of the NNSA's most intricate and technically demanding products at the Kansas City Plant. As one of the nation's most diverse low-volume, high-reliability production facilities, the Kansas City Plant is at the heart of the NNSA nuclear weapons complex. Traditionally, the plant has taken product requirements from the NNSA and designs from the national laboratories, procured supplies as needed, and produced quality components and systems for other nuclear weapons complex sites and the military. These capabilities have also formed the basis for FM&T's work-for-others program, which provides services, products, and systems for homeland security, the Department of Defense, and other government agencies (source: Honeywell Federal Manufacturing & Technologies webpage: http://www.honeywell.com/sites/kcp/).

3.2. Case study approach

In the first step of designing the project performance evaluation system we employed a case study approach to establish a viable set of productivity metrics (engineering project inputs and outputs). A key feature of this research project was the close collaboration between the researchers and the case study site (i.e., we interviewed the technical manager and more than 10 engineers from various departments at FM&T), in order to ensure that the performance model and results were readily understood by organizational
personnel, as well as reflective of actual practices within the organization. The grounded theory approach (Glaser and Strauss, 1967; Strauss and Corbin, 1998) was adopted in this step because it provides a set of procedures to inductively develop a framework from data, and it allowed us to focus on contextual elements as well as the consequences of productivity measurement in the organization (Orlikowski, 1993). A case study is an "empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between the phenomenon and context are not clearly evident" (Yin, 1994). Additionally, Benbasat et al. (1987) have stressed that a case study is a powerful methodology that allows researchers as well as practitioners to study information systems in natural settings, learn about the state of the art, and generate theories from practice. A case study also allows researchers and practitioners alike to understand the nature and complexity of the process that is taking place and to gain an in-depth understanding of the phenomenon under study. In addition, a case study is suited to studying new phenomena where quantitative research methodologies are not possible or appropriate (Benbasat et al., 1987; Yin, 1994). The use of a case study approach is appropriate for the current research project since engineering project performance evaluation is a new managerial endeavor that has great implications for FM&T, but has not yet been adopted and used.

3.3. Case study – productivity metrics

3.3.1. Data collection

We interviewed eleven employees from the organization who have been involved in various engineering projects at FM&T. The eleven participants were from different functional areas; therefore, they offered different perspectives on how to measure engineering project performance in the organization. Face-to-face interviews were conducted by the researcher. Structured and semi-structured questions were asked during the interviews. Interview questions included general questions asking subjects to describe their involvement and experience in the engineering projects, as well as sub-questions such as why an engineering project performance evaluation system is important, what the current system at FM&T is, and what they think about the effectiveness of the current system. The researcher further asked the interviewees to brainstorm on project-based productivity measures (inputs and outputs) deemed to be viable and fair. The interview protocol is attached in Appendix I. Each interview lasted approximately 1 h. The interviews were audio-recorded and notes were taken by the researcher during each interview. The interviews were transcribed and analyzed. Table 1 shows the data collection checklist that summarizes the data resources used in this study.
Table 1
Data collection checklist.

Data type       Resources                         Number
Interviews      Technical manager                 1
                Mechanical engineers              4
                Electrical engineers              3
                Purchasing engineers              3
Documentation   Current performance measurement   Yes
                Organization website              Yes
3.4. Case analysis

According to Strauss and Corbin (1998), data analysis in grounded theory research starts with open coding, in which "data are broken down into discrete parts, closely examined, and compared for similarities and differences" (p. 102). In this initial stage of data analysis, the researchers reviewed the interview transcripts, looked for "discrete incidents, ideas, events, and acts," and then gave names (or "conceptual labels") to those concepts (Strauss and Corbin, 1998). Throughout the data analysis, the researchers performed constant "comparative analysis": when objects, events, acts, or happenings shared some common characteristics, they were grouped together to form a category that captured their shared characteristics. Categories, as defined by Strauss and Corbin (1998), "are concepts, derived from data that stand for phenomena" (p. 114). By doing so, the researchers were able to reduce the vast amount of raw interview data into smaller, more manageable pieces.

4. Research results

4.1. Categories and concepts

Following the principles of open coding (Strauss and Corbin, 1998), two researchers went through all of the interview transcripts, the notes taken by the researchers during the interviews, and the relevant documentation, and identified a list of concepts from the raw data. The researchers compared the emerging concepts across different participants and multiple data resources for validation. For example, concepts that emerged from the interview transcript of participant A were then corroborated with the interview transcript of participant B, or checked against the documentation. Triangulation across data resources helps to strengthen the emerging concepts. Additionally, prior literature was used for "supplemental validation"; that is, references from prior literature gave validation for the accuracy of the findings and helped in naming the concepts (Chatzoglou and Soteriou, 1999; Farris et al., 2006; Herrero and Salmeron, 2005; Pinto and Slevin, 1988). Through discussions, the two researchers reached consensus on the concepts identified from the data, as well as on the naming and phrasing of the concepts. The two researchers then reviewed all concepts and categorized
the concepts into categories based on the characteristics shared among them. Results from the open coding include a list of categories and concepts (inputs and outputs, shown in Table 2) related to project-based productivity measures at FM&T. FM&T engineers agreed that project duration was the key output of interest for the case study application, and that the driving force behind the business process improvement was the need to reduce project duration. In the project management literature, "time" represents a key category of project performance measures, and minimizing project duration is one objective that a project-based organization can pursue (Chatzoglou and Soteriou, 1999). Other potentially relevant output measures, such as quality and customer satisfaction, were not being tracked by FM&T. As such, project duration was utilized as the output variable in this study. After identifying the output variable of interest, the next step was to identify the input variables necessary to capture important differences between projects. Input variables were identified through consultation with engineers and the technical manager at FM&T (to accurately describe their practices) and through a review of the project management literature. Effort (Variable 2) describes the total amount of person-days consumed by the project. This variable is under the influence of the project manager, but is fixed beyond a certain minimum point. While inefficient project management practices can increase effort through rework, there is a certain minimum amount of work that must be completed to meet the objectives of the project; that is, there is a minimum level of effort. Therefore, effort can be viewed as a cost measure, and also as a measure related to project scope or size. Project staffing (Variable 3) describes the concentration of labor resources on the project. Specifically, project staffing describes the average number of people scheduled to work on a project each project day, thus capturing resource assignment decisions within FM&T. Obtaining and scheduling labor resources is a significant portion of any project manager's job, and is also a concern of top management. Priority (Variable 4) indicates the importance (urgency) assigned to a project by top management. Project priority is rated on a nine-point scale, with "1" representing the lowest level of priority and "9" representing the highest. Thus, while priority is actually an interval variable, the relatively large number of intervals suggests that it can be treated like a continuous variable. All else being equal, a higher-urgency project would be expected to achieve shorter project duration than a lower-urgency project, because higher-urgency projects receive more attention and experience shorter turnaround time in resource requests and other administrative tasks. Number of engineers (Variable 5) indicates the number of engineers available at FM&T to support a project, not the actual number of engineers directly assigned to a project.
Table 2
Productivity metrics.

No.  Type    Variable              Definition                                                      Units
1    Output  Project duration      Work days to complete project                                   Days
2    Input   Effort                Work content of the project                                     Person-days
3    Input   Project staffing      Number of people on project/effort                              People/day
4    Input   Priority              Urgency of project                                              Interval (1 = lowest priority; 9 = highest priority)
5    Input   Number of engineers   Number of engineers at the functional area during the project   People
6    Input   Technical complexity  Technical difficulty and uncertainty of project                 Categorical (1 = most complex; 3 = least complex)
All else being equal, increasing the number of engineers should allow them to give more attention to individual projects, thereby reducing the turnaround time for administrative tasks and, ultimately, reducing project duration. Increasing the number of engineers could also allow them to specialize in a particular type of project, thereby increasing the efficiency of project oversight. The last variable, technical complexity (Variable 6), describes the technical difficulty and uncertainty of a project. Although related to effort (i.e., more technically complex projects tend to involve more work content), technical complexity captures additional elements that affect project duration, such as the extent of risk, the need for testing, the need for increased coordination between functions, and the degree of technological uncertainty. Engineering projects at FM&T can be categorized according to general level of technical complexity, with "1" representing the most technically complex projects, "2" representing projects of medium technical complexity, and "3" representing the least technically complex projects.
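To make the metrics in Table 2 concrete, the sketch below shows one way a project record could be represented in code. This is an illustration only; the field names and the ProjectRecord type are our own, not part of FM&T's actual data schema.

```python
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    """One engineering project, described by the Table 2 metrics (illustrative names)."""
    effort: float      # input: work content of the project, in person-days
    staffing: float    # input: people on project / effort, in people per day
    priority: int      # input: urgency on a 1 (lowest) to 9 (highest) scale
    engineers: int     # input: engineers available in the functional area
    complexity: int    # input: 1 = most complex ... 3 = least complex
    duration: float    # output: work days to complete the project

# Example: project 1 from Table 3.
p1 = ProjectRecord(effort=924, staffing=0.05, priority=8,
                   engineers=2, complexity=3, duration=1759)
```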
4.2. Data envelopment analysis (DEA) approach

In the second stage of the research project, data envelopment analysis (DEA), a non-parametric linear programming method, was used to assess project productivity. DEA was introduced by Charnes et al. (1978); it is a fractional programming model that estimates the relative efficiencies of a homogeneous set of units by considering multiple sets of inputs and outputs. According to Charnes et al. (1978), DEA has the following advantages in assessing project productivity:

- It does not require functional relationships between inputs and outputs.
- Multiple inputs and outputs can be considered concurrently.
- It has the ability to identify inefficient projects.
- Using DEA sensitivity analysis, the sources and amounts of inefficiency for each inefficient project can be found.

DEA is also a widely accepted benchmarking approach for exploring project productivity efficiency (Stensrud and Myrtveit, 2005). Several prior research efforts have used DEA to analyze software efficiency, including Banker et al. (1991), who used total labor hours as the single input measure for investigating the productivity of software maintenance projects. Additionally, in a Communications of the ACM article, Herrero and Salmeron (2005) explained how a DEA model could be used in systems analysis and design to rank software project technical efficiency. They found DEA to be a better means of measurement than other traditional methods. DEA has two basic models associated with it: the CRS and VRS models (see Appendix II for a discussion of basic DEA models). The CRS model developed by Charnes et al. (1978) assumes constant returns to scale (CRS), while the VRS model created by Banker et al. (1984) assumes variable returns to scale (VRS). CRS models provide the most conservative measure of efficiency (i.e., the most aggressive DEA project duration targets). Under CRS, all units are compared against a frontier defined by units operating at the most productive scale size. Units operating under any diseconomies of scale, therefore, cannot be 100% efficient. On the other hand, VRS models allow units operating under diseconomies of scale to form part of the frontier, as long as they perform better than their most similar peers (e.g., those operating under similar diseconomies of scale). Choosing which model to use depends on both the characteristics of the data set and the question being analyzed. Stensrud and Myrtveit (2005) assume that software engineering projects exhibit variable returns to scale (VRS) with nonlinear relationships between input and output, while Farris et al. (2006) presume that engineering projects show constant returns to scale (CRS). In this research project we assume that diseconomies of scale might exist for many input variables. For instance, increasing project staffing beyond a certain level may yield diminishing returns in project duration, due to congestion. Similarly, projects with large amounts of effort could also experience diminishing returns to scale. For the current research project, FM&T did not want units operating under diseconomies of scale (e.g., overstaffed projects or projects with increased effort due to rework or other inefficient practices) to be considered 100% efficient. Instead, FM&T wanted to draw aggressive comparisons based on the performance of best practice units. Thus, the CRS model was deemed most appropriate, since it identifies inefficiency due to diseconomies of scale and benchmarks performance against units operating at the most productive scale size.
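For readers who want to experiment with the approach, the following is a minimal sketch of the standard input-oriented CRS (CCR) envelopment model, solved with scipy.optimize.linprog. It is a textbook formulation under our own naming, not the DEA Excel Solver implementation used in the study, and it leaves aside the modeling choices (e.g., variable codings and the orientation of the duration output) that the study itself had to make.

```python
import numpy as np
from scipy.optimize import linprog

def crs_efficiency(X, Y, k):
    """Input-oriented CRS (CCR) DEA efficiency of unit k.

    X is an (n_units, n_inputs) matrix, Y an (n_units, n_outputs) matrix.
    The decision vector is [theta, lambda_1, ..., lambda_n]; we minimize theta.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0  # objective: minimize the radial contraction factor theta
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_ik
    A_inputs = np.hstack([-X[k].reshape(m, 1), X.T])
    # Output constraints: sum_j lambda_j * y_rj >= y_rk, rewritten as <=
    A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_inputs, A_outputs]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # 1.0 means unit k lies on the efficient frontier
```

Adding the convexity constraint (an A_eq row of ones over the lambdas with b_eq = [1]) turns this into the VRS model discussed in Appendix II.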
4.3. Data gathering

Using the project productivity metrics that were derived during the first stage of the research project, we gathered input and output data from 20 engineering projects at FM&T (see Table 3). We then used DEA to identify the most efficient projects. Next, a sensitivity analysis was performed to seek the causes of inefficiency and to identify factors of efficiency that could be targeted for improvement.

5. Results and discussion

DEA Excel Solver was used with the engineering project productivity metrics to generate the relative efficiency of each of the 20 engineering projects in various functional areas at Honeywell FM&T. Projects were considered efficient if their relative efficiency ratio equaled one and inefficient if their ratio was less than one. The results indicate that five engineering projects were deemed efficient while fifteen projects were rated as inefficient (see Table 4, column 2). In this way, DEA clearly identified the most efficient projects. A sensitivity analysis was then performed in order to determine the causes of the inefficient projects. These inefficiencies are caused by input and/or output slacks. An input slack means the project could reduce that input by the slack amount without reducing its output(s), while an output slack means the project would have to increase its output(s) by the slack amount to become efficient. Table 4 presents the input slacks (i.e., excess input) for the fifteen inefficient projects. In Table 4, column 2 the efficiency ratios are presented
in descending order. Columns 3–7 show the slack on each input variable (effort, project staffing, priority, number of engineers, and complexity). The results reveal that projects 2, 4, 6, 8, and 12 were efficient, and all other projects were inefficient. Among the inefficient projects, the efficiency ratios ranged from 0.26973 to 0.89755. The slacks in columns 3 through 7 reveal, from an input standpoint, how and to what degree the inefficient project teams could make their projects efficient. For example, if the Project 19 team could reduce its effort by approximately 63 person-days, and if managers reduced the number of engineers assigned and the complexity rating of the project by the corresponding slack amounts, the project would become efficient without changing its current output values. It is interesting to note that the sensitivity analysis reveals that efficiency is controlled not only by the project team but also by other factors (management and the nature of the projects).

6. Research questions answered and lessons learned during the development of the new project performance evaluation system at FM&T

During the development of the new project performance evaluation system at FM&T, the research questions posed in this study were answered. Specifically, we found the answer to Research Question 1 to be that the use of project schedules as the sole project performance measure did result in the majority of projects at FM&T being inefficient. In terms of Research Question 2, we found that the development and implementation of the new performance management system provided both tangible and intangible benefits for FM&T. Regarding Research Question 3, we found that engaging in cross-project learning did provide benefits to FM&T. In addition to answering these research questions, we learned several lessons during the development of the new system. These lessons are discussed below.
Table 3
Project input and output data.

Project   Effort   Project staffing   Priority   Number of engineers   Complexity   Project duration
1         924      0.05               8          2                     3            1759
2         558      0.06               6          3                     1            2826
3         730      0.07               7          1                     3            1088
4         677      0.13               5          2                     1            2566
5         561      0.05               8          4                     3            1576
6         732      0.03               7          4                     3            1421
7         323      0.17               4          3                     2            788
8         129      0.12               7          2                     2            1286
9         482      0.06               9          4                     3            921
10        528      0.04               6          5                     3            744
11        146      0.08               8          2                     1            646
12        143      0.04               5          2                     1            982
13        479      0.02               8          2                     2            825
14        123      0.21               9          2                     1            598
15        252      0.08               7          4                     3            450
16        86       0.14               6          3                     3            736
17        310      0.05               5          2                     1            888
18        185      0.06               9          2                     3            567
19        682      0.07               5          3                     1            920
20        256      0.09               8          2                     2            599
Table 4
Sensitivity analysis (input slacks, i.e., excess input).

Project   Input-oriented CRS efficiency   Effort      Project staffing   Priority   Number of engineers   Complexity
2         1.00000                         0.00000     0.00000            0.00000    0.00000               0.00000
4         1.00000                         0.00000     0.00000            0.00000    0.00000               0.00000
6         1.00000                         0.00000     0.00000            0.00000    0.00000               0.00000
8         1.00000                         0.00000     0.00000            0.00000    0.00000               0.00000
12        1.00000                         0.00000     0.00000            0.00000    0.00000               0.00000
1         0.89755                         465.03101   0.00000            3.49048    0.00000               2.06104
13        0.87283                         97.58252    0.00000            3.84046    0.00000               0.58189
16        0.85848                         0.00000     0.05151            1.14463    1.43079               1.43079
3         0.84801                         331.99688   0.00424            3.81606    0.00000               2.12003
5         0.66873                         0.00000     0.00000            1.43933    0.64880               1.09463
14        0.66314                         0.00000     0.09947            3.45073    0.50592               0.00000
11        0.64374                         0.00000     0.01287            2.63104    0.43111               0.00000
17        0.52710                         0.00000     0.00000            0.25665    0.00000               0.06359
7         0.46483                         0.00000     0.05825            0.00000    0.53029               0.58342
18        0.45829                         0.00000     0.00000            1.69506    0.00000               0.83235
10        0.39433                         0.00000     0.00000            0.24548    0.84350               0.58061
19        0.38210                         62.66431    0.00000            0.00000    0.23881               0.04776
20        0.37472                         0.00000     0.00443            0.96224    0.00000               0.26112
9         0.36451                         0.00000     0.00000            0.97257    0.36451               0.68870
15        0.26973                         0.00000     0.00000            0.00000    0.36487               0.38744
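As a companion to the crs_efficiency sketch in Section 4.2, the snippet below shows one way input slacks like those reported in Table 4 could be recovered from the envelopment solution. Two caveats: DEA Excel Solver computes maximal slacks with a standard two-phase procedure, whereas this one-phase residual can understate them, and because the exact variable codings used in the study are not fully specified, this sketch should not be expected to reproduce Table 4 digit for digit. The function and variable names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def crs_input_slacks(X, Y, k):
    """Input slack residuals for unit k at the CRS optimum: theta* x_k - X^T lambda*."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta, as in crs_efficiency
    A_ub = np.vstack([np.hstack([-X[k].reshape(m, 1), X.T]),   # input rows
                      np.hstack([np.zeros((s, 1)), -Y.T])])    # output rows
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    theta, lam = res.x[0], res.x[1:]
    return theta * X[k] - X.T @ lam  # nonnegative excess on each input

# Illustrative run on four rows of Table 3 (projects 2, 8, 12, and 19);
# columns: effort, staffing, priority, engineers, complexity; output: duration.
X = np.array([[558, 0.06, 6, 3, 1],
              [129, 0.12, 7, 2, 2],
              [143, 0.04, 5, 2, 1],
              [682, 0.07, 5, 3, 1]])
Y = np.array([[2826.0], [1286.0], [982.0], [920.0]])
print(crs_input_slacks(X, Y, 3))  # excess inputs for project 19 within this subset
```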
6.1. Lesson #1 – the drawbacks of using project schedules as the sole project performance measure
One outcome of FM&T's use of project schedules as the sole project performance measure was a trend at the company for some industrial projects to be inefficient (i.e., behind schedule and/or over-budget). As mentioned above, DEA Excel Solver was used with the engineering project productivity metrics to generate the relative efficiency of each of the 20 engineering projects in various functional areas at Honeywell FM&T, and the results indicate that only five of the engineering projects were efficient while the remaining fifteen were rated as inefficient.
6.2. Lesson #2 – why previous project productivity improvement methods failed

Over the past decade, FM&T tried numerous times to better manage its engineering projects using various productivity improvement methods such as total quality management, six sigma, and earned value project management. FM&T identified three possible reasons why most of the previous efforts failed. The first reason was that most previous attempts rendered little implementable value, either due to the complexity of the system (e.g., the earned value system) or to disagreement among different functional areas (e.g., design engineering vs. manufacturing engineering). The second reason was that, in the past, the projects were done mostly in a sequential way, meaning there was virtually no communication among different engineering functions. The third reason was that there was a great need for a systematic (generalizable) approach across all functional areas.
6.3. Lesson #3 – the benefits of cross-project learning

Engaging in cross-project learning provided several benefits for FM&T. One benefit is that identifying and studying best practice projects turned out to be an invaluable source of learning for all members of the company. Additionally, once FM&T identified its best projects, these projects served as role models for guiding the company in terms of how to improve. Identifying best practice projects also allowed FM&T's stakeholders to benchmark its projects. This is important since customers of engineering services companies are increasingly demanding that performance benchmarks based on past project performance be included in bidding proposals (Luu et al., 2008). Therefore, firms like FM&T must identify and provide benchmarks in the bidding process in order to stay competitive. Benchmarks also provided FM&T with a basis for setting compensation schemes, determining whom to promote, and identifying the company's best performers.

6.4. Lesson #4 – intangible benefits from the new project performance evaluation system

Currently, two functional areas at FM&T (the design and manufacturing departments) are using the new project performance evaluation system, and although the procurement and sales departments have not yet adopted the new system, they have requested that the system be revised to better suit their needs. Overall, there has been much positive feedback from the departments and project teams who have used the system. Specifically, the departments
and project teams have reported several intangible benefits associated with the use of the new system. For example, they have found the new system to be relatively easy to implement. They have also found that the system promotes cross-functional efforts, and that these efforts pay off in fewer disputes and more consensus through interaction. According to the design engineering department head, "the new system provides a viable tool for design engineers to change their mindset and effectively carry out the concurrent engineering (CE) design concept in our department." Additionally, they found that the new evaluation system promotes cross-project learning, which is very helpful for project managers in dealing with resource allocation, personnel management, and budget control.

6.5. Lesson #5 – tangible benefits from the new project performance evaluation system

FM&T also derived several tangible benefits from the new project performance evaluation system. A recent analysis of 20 projects completed after the new system had been implemented showed that the average efficiency rate of the projects rose to 0.854, up from an average of 0.684 for 20 comparable projects completed before the new system was implemented. The average time to complete the 20 projects was also reduced by 25% compared to the average time it took to complete similar projects before the new system was implemented. Additionally, FM&T reported a visible decrease in sick time taken among engineers in the 10 months since the system was implemented. Results from the second stage of the research project also highlighted tangible benefits. Specifically, the results indicated that the proposed engineering project productivity metrics and DEA approach not only can detect inefficient engineering projects, but also can provide information and guidance for managers at FM&T in terms of how to improve the engineering projects.

7. Conclusions

In this research project we utilized a two-step approach to design a system that would improve the engineering process at FM&T. Our first step was to use a case study approach to derive the engineering project performance metrics. In step two we employed data envelopment analysis (DEA) to show the ability of the engineering project productivity metrics (i.e., the input and output variables identified in the case study stage of the research project) to measure the performance of the engineering projects at FM&T and to explore the efficiency of those projects. DEA was also used to perform a sensitivity analysis to identify factors of efficiency to target for improvement. Through multiple case studies with engineers and a manager at FM&T, viable engineering project productivity metrics (input and output variables) were developed to evaluate
project performance. Because the metrics were developed based on the input of FM&T engineers from various functional areas, they are more realistic, reliable, and generalizable. Based on the initial results from using the new system, FM&T plans on using the system in other functional areas within the company. Additionally, although the new system is currently being used at the project or team level, in the future FM&T plans on using it at the individual level, which will ultimately tie project performance to the company's reward system. It is important to note that the results obtained through the case study method are often new hypotheses or theories, explanations of change or development processes, or even normative instructions. Although the material and its processing are empirical, the material is usually formed of a small number of cases (in this case, 20 different projects within a single company). Thus, there is a potential problem of generalizing the results obtained by the case study method, since the extent to which results obtained from a limited number of cases can be applied to a larger group (i.e., to companies other than Honeywell) is not precisely known. This means that the results must be regarded as more or less probable hypotheses. This said, however, the concepts we applied to design the project performance evaluation system for FM&T and the lessons we learned should still provide a good reference point for companies whose goal is to improve their engineering project performance and to reduce cost and schedule overruns. This is important given that, as competitive pressures increase, companies find themselves under more and more pressure to cut costs while maintaining productivity.

Appendix I – Interview protocol

Demographic information:
Your gender: A. Female  B. Male
Your age: A. 18–25  B. 25–38  C. 38–45  D. >45
The highest educational degree you have received: A. High school  B. Bachelor's degree  C. Graduate degree
What is your position at the organization?
Your business unit name
How many employees are in your business unit?
How many employees report to you?
How long have you been working in this organization?

Questions:
Is it important for your department to have a project productivity measurement system? Why, why not?
What is the purpose of having such a system in general?
What do you think about the system (i.e., is it viable or fair)? Why?
Is the current productivity measurement system capable of enabling comparisons of performance across projects?
Do you think we need to revamp the current system?
Can you describe in general what the current project productivity measures are?
What are the performance measures (a.k.a. the output variables) of a project? Is project duration one of the major output variables? Do you think the output needs to include cost, scope, and customer satisfaction? What other factors do you think we need to incorporate in the output measures?
Input variables are very important for capturing differences between projects. What are the input variables for projects in the current system? What input variables do you think are the most important in the current system or for a new system? Do you think input variables should include effort, project staffing, priority, technical complexity, etc.? What other input variables do you think should be included in the system?
Appendix II – Basic DEA models

Charnes et al. (1978) initially introduced the DEA model to measure the relative efficiency of decision making units (DMUs) using multiple inputs to produce multiple outputs. They addressed constant returns to scale (CRS). CRS assumes that there is a proportional change between inputs and outputs. The CRS efficiency represents technical efficiency (TE), which measures inefficiencies due to the input/output configuration as well as due to the size of the operation. Banker et al. (1984) presented a DEA model to determine whether there are any inefficiencies attributable to disadvantageous conditions under which a DMU is operating, which are not directly related to the inputs and outputs, and to allow a larger peer group to be considered. They addressed variable returns to scale (VRS). VRS assumes that a proportional change in inputs does not necessarily result in a proportional change in outputs. The DEA VRS model can be obtained through the addition of a convexity constraint, $\sum_{j=1}^{n} \lambda_j = 1$, to the DEA CRS model. The VRS efficiency represents pure technical efficiency (PTE), that is, a measure of efficiency without scale efficiency (SE). It is thus possible to decompose TE (assuming CRS) into PTE and SE; scale efficiency can be estimated as SE = TE/PTE.
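For reference, a standard statement of the input-oriented envelopment models described above (textbook notation, not taken verbatim from the cited papers) is:

```latex
\min_{\theta,\lambda}\ \theta
\quad\text{s.t.}\quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{ik}, \quad i = 1,\dots,m, \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{rk}, \quad r = 1,\dots,s, \qquad
\lambda_j \ge 0,
```

with the VRS model adding the convexity constraint $\sum_{j=1}^{n} \lambda_j = 1$, so that $\mathrm{SE} = \mathrm{TE}_{\mathrm{CRS}} / \mathrm{PTE}_{\mathrm{VRS}}$.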
References

Baccarini, D., 1999. The logical framework method for defining project success. Project Manage. J. 30 (4), 25–32.
Bannerman, P.L., 2008. Defining project success: a multilevel framework. In: Proc. PMI Research Conference, Warsaw, Poland.
Banker, R.D., Datar, S.M., Kemerer, C.F., 1991. A model to evaluate variables impacting the productivity of software maintenance projects. Manage. Sci. 37 (1), 1–18.
Banker, R.D., Charnes, A., Cooper, W.W., 1984. Some models for estimating technical and scale inefficiencies in data envelopment analysis. Manage. Sci. 30 (9), 1078–1092.
Belassi, W., Tukel, O., 1996. A new framework for determining critical success/failure factors in projects. Int. J. Project Manage. 14 (3), 141–151.
Benbasat, I., Goldstein, D.K., Mead, M., 1987. The case research strategy in studies of information systems. MIS Quart. 11 (3), 369–386.
Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. Eur. J. Oper. Res. 2 (6), 429–444.
Chatzoglou, P.D., Soteriou, A.C., 1999. A DEA framework to assess the efficiency of the software requirements capture and analysis process. Decision Sci. 30 (2), 503–531.
de Waal, A., Counet, H., 2009. Lessons learned from performance management system implementations. Int. J. Prod. Perform. Manage. 58 (4), 367–390.
Dumaine, B., 1989. How managers can succeed through speed. Fortune 19 (4), 54–59.
Farris, J., Groesbeck, R.L., Van Aken, E.M., Letens, G., 2006. Evaluating the relative performance of engineering design projects: a case study using data envelopment analysis. IEEE Trans. Eng. Manage. 53 (3), 471–482.
Fortune, J., White, D., 2006. Framing of project critical success factors by a systems model. Int. J. Project Manage. 24 (1), 53–65.
Freeman, M., Beale, P., 1992. Measuring project success. Project Manage. J. 23 (1), 8–17.
Glaser, B., Strauss, A., 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine de Gruyter, New York.
Harrison, F., Lock, D., 2004. Advanced Project Management: A Structured Approach. Gower Publishing Ltd., p. 34.
Herrero, I., Salmeron, J., 2005. Using the DEA methodology to rank software technical efficiency. Commun. ACM 48 (1), 101–105.
Kennerley, M., Neely, A., 2003. Measuring performance in a changing business environment. Int. J. Oper. Prod. Manage. 23 (2), 213–229.
Kerzner, H., 2004. Advanced Project Management: Best Practices on Implementation. John Wiley & Sons.
Lewis, J.P., 2000. The Project Manager's Desk Reference. McGraw-Hill, New York.
Ling, F., Low, S., Wang, S., Lim, H., 2009. Key project management practices affecting Singaporean firms' project performance in China. Int. J. Project Manage. 27 (1), 59–71.
Luu, V., Kim, S., Huynh, T., 2008. Improving project management performance of large contractors using benchmarking approach. Int. J. Project Manage. 26 (7), 758–769.
Might, R.J., Fischer, W.A., 1985. The role of structural factors in determining project management success. IEEE Trans. Eng. Manage. 32 (2), 71–77.
Morris, P., Hough, G., 1987. The Anatomy of Major Projects: A Study of the Reality of Project Management, vol. 1. John Wiley & Sons, Chichester, UK.
Neely, A., 1999. The performance measurement revolution: why now and what next. Int. J. Oper. Prod. Manage. 19 (2), 205–228.
Neely, A., Gregory, M., Platts, K., 2005. Performance measurement system design: a literature review and research agenda. Int. J. Oper. Prod. Manage. 25 (12), 1228–1263.
Neely, A., Richards, H., Mills, J., Platts, K., Bourne, M., 1997. Designing performance measures: a structured approach. Int. J. Oper. Prod. Manage. 17 (11), 1131–1153.
Orlikowski, W.J., 1993. CASE tools as organizational change: investigating incremental and radical changes in systems development. MIS Quart. 17 (3), 309–340.
Pinto, J., Slevin, D., 1988. Project success: definitions and measurement techniques. Project Manage. J. 19 (1), 67–73.
Prabhakar, G.P., 2008. What is project success: a literature review. Int. J. Bus. Manage. 3 (9), 3–10.
Project Management Institute, 2004. A Guide to the Project Management Body of Knowledge, ANSI/PMI 99-001-2004, 3rd ed. Newtown Square, PA.
Shenhar, A., Dvir, D., 2007. Reinventing Project Management: The Diamond Approach to Successful Growth & Innovation. Harvard Business School Publishing.
Shenhar, A.J., Dvir, D., Levy, O., Maltz, A.C., 2001. Project success: a multidimensional strategic concept. Long Range Plann. 34 (4), 699–725.
Stensrud, E., Myrtveit, I., 2005. Identifying high performance ERP projects. IEEE Trans. Software Eng. 29 (5), 398–416.
Strauss, A., Corbin, J., 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. SAGE Publications, Thousand Oaks, California.
Sullivan, J., Beach, R., 2009. Improving project outcomes through operational reliability: a conceptual model. Int. J. Project Manage. 27 (8), 765–775.
Thomas, G., Fernandez, W., 2008. Success in IT projects: a matter of definition? Int. J. Project Manage. 26 (7), 733–742.
Turner, J.R., 2009. The Handbook of Project-based Management: Leading Strategic Change in Organizations. McGraw-Hill.
Yin, R.K., 1994. Case Study Research: Design and Methods. Sage Publications, Thousand Oaks, California.
Yu, A., Flett, P., Bowers, J., 2005. Developing a value-centred proposal for assessing project success. Int. J. Project Manage. 23 (6), 428–436.