International Journal of Project Management 38 (2020) 99–111
It is about time: Bias and its mitigation in time-saving decisions in software development projects

Lior Fink*, Barak Pinchovski

Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, 1 Ben-Gurion Blvd., Beer-Sheva 84105, Israel
Keywords: Decision making; Time-saving bias; Software development projects; Experiments

Abstract
Estimates of completion times in software development projects are frequently inaccurate, potentially resulting in failure to meet project objectives. The present work empirically investigates whether the time-saving bias, which describes the human failure to correctly estimate the relationship between speed increase and time saving, can inform our understanding of the decades-long problem of time estimation in software development. In particular, this work examines whether a decision to save time in a software development project by increasing development speed is biased, whether this bias is observed when the decision is framed using plan-based and agile terminology, and whether the availability of relevant information mitigates this bias. These objectives are addressed in three experimental studies, in which senior information systems students (Study 1) and professional software project managers (Studies 2 and 3) are asked to make time-saving decisions about two similar scenarios, with and without relevant information. The findings confirm the existence of the bias and show that it is more likely to occur under an agile framing than under a plan-based framing, although students are highly biased in both cases. The findings also show that the bias is mitigated, but not eliminated, when relevant information is included in the scenario, and that this effect dissipates once the information is no longer included in the following scenario. The accumulated evidence reported here contributes to research on the consequences of cognitive biases for project management decisions.
1. Introduction A large body of research in the areas of project management and software engineering has shown that estimates of completion times in software development projects are largely inaccurate (e.g., Brooks, 1975; Connolly & Dean, 1997; Halkjelsvik & Jørgensen, 2012; Heemstra & Kusters, 1991; Hill, Thomas, & Allen, 2000; Lopez-Martin & Abran, 2015; Morgenshtern, Raz, & Dvir, 2007). This inaccuracy has been identified as a major reason for project failure in software development (Charette, 2005; Shmueli, Pliskin, & Fink, 2016), with average time overruns increasing from 63% to 82% between 2000 and 2007 (Nelson, 2007). Against this background, several attempts have been made in recent years to identify the cognitive mechanisms responsible for biased time estimates in software development. The majority of these attempts have focused on the “planning fallacy”, which describes the tendency of people to underestimate the time needed for them to complete a task (Buehler, Griffin, & Ross, 1994; Kahneman & Tversky, 1979), as a cognitive bias that possibly explains schedule overruns in software development (Buehler, Griffin, & Peetz, 2010; Halkjelsvik & Jørgensen,
2012; Shmueli et al., 2016). Research in the areas of project management and software engineering, however, has overlooked another potentially important cognitive bias – the "time-saving bias" – which in recent years has also been repeatedly shown to bias time-related decisions. This bias describes the human failure to correctly estimate the relationship between speed increase and time saving (Fuller et al., 2009; Peer, 2010; Svenson, 2008). The speed-time function is a nonlinear relationship of the type y = 1/x, and studies have shown that people are poor at correctly estimating changes in y based on changes in x for such functions (De Langhe & Puntoni, 2016). People tend to use either the proportion heuristic (Svenson, 1970) or the absolute heuristic (Larrick & Soll, 2008). The proportion heuristic (e.g., an increase of 20% in speed saves more time than a 10% increase) is inaccurate because it fails to take into account the base time, and the absolute heuristic (e.g., an increase of 20 units of speed saves more time than an increase of 10 units) is even more inaccurate because it also fails to take into account the nonlinearity of the speed-time function. This bias, originally observed in the context of driving situations, has been found to characterize the relationship between productivity increase and resource saving in production situations
* Corresponding author. E-mail addresses: [email protected] (L. Fink), [email protected] (B. Pinchovski).
https://doi.org/10.1016/j.ijproman.2020.01.001
Received 18 June 2019; Received in revised form 14 November 2019; Accepted 6 January 2020
0263-7863/© 2020 Elsevier Ltd, APM and IPMA. All rights reserved.
(Svenson, 2011; Svenson, Gonzalez, & Eriksson, 2014) and in consumption situations (De Langhe & Puntoni, 2016; Larrick & Soll, 2008). Given that time overruns and schedule pressures are common in the software industry, software development projects are characterized by a frequent need to save time by increasing development speed (Fairley & Willshire, 2003; Nan & Harter, 2009). Although the time-saving bias has been shown to occur particularly when people are asked to make decisions about speed increase to save time, there has been no research, to the best of our knowledge, on the consequences of this bias for project management, in general, and for software development projects, in particular. To address this gap, the research question guiding this work is: How are time-saving decisions in software development projects influenced by the time-saving bias? We seek to answer this research question by addressing three objectives. The first objective is, naturally, to confirm that the time-saving bias can indeed be observed in the context of software development projects. As software development projects employ a variety of methodologies, the second objective is to examine whether the time-saving bias is likely to be observed irrespective of the “methodological framing” of time-saving decisions as either plan-based or agile. Finally, as observation of cognitive bias is typically followed by research on its mitigation, the third objective is to explore whether the time-saving bias can be mitigated by making relevant information available to decision makers. We draw on the literature about information availability (Perfetto, Bransford, & Franks, 1983; Thompson, Gentner, & Loewenstein, 2000; Weisberg, DiCamillo, & Phillips, 1978) and about information relevance (Cosijn & Ingwersen, 2000; Saracevic, 1975; Xu & Chen, 2006) to explore the effects of availability and relevance of information on the prevalence of the time-saving bias. 
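The arithmetic underlying the bias is simple to state: over a fixed distance d (or, analogously, a fixed project size), duration is d/v, so raising speed from v1 to v2 saves d/v1 - d/v2, a quantity that shrinks rapidly as the initial speed grows. A minimal sketch (illustrative numbers only) contrasts the proportion heuristic with the correct calculation:

```python
def time_saved(d, v1, v2):
    """Time saved over a fixed distance (or size) d when speed rises from v1 to v2."""
    return d / v1 - d / v2

def proportion_heuristic(v1, v2):
    """Speed increase as a proportion of the initial speed (ignores the base time)."""
    return (v2 - v1) / v1

d = 100  # fixed distance in km (or project size in work units)

# Project A: 30 -> 40; Project B: 70 -> 110
# The proportion heuristic favors B (0.57 > 0.33)...
assert proportion_heuristic(70, 110) > proportion_heuristic(30, 40)
# ...but A actually saves more time (about 0.83 h vs. 0.52 h over 100 km).
assert time_saved(d, 30, 40) > time_saved(d, 70, 110)
```

The comparison holds for any d, since d factors out of the difference, which is why the correct choice is independent of the common distance or project size.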
By addressing these three objectives, this work provides a comprehensive portrayal of the consequences of a cognitive bias, shown to impair the accuracy of time-saving decisions, in a context in which such decisions are frequent and may lead to unsatisfactory performance. We address these objectives in a series of three experimental studies that employ the same time-saving problem of having to decide on the software development project to which the addition of resources would save more time in terms of project duration, analogous to time-saving problems in the literature (e.g., Svenson, 2008, 2011). The decision makers in these studies are senior information systems students (Study 1) and professional software project managers (Studies 2 and 3). The findings emerging from these multiple studies provide a threefold contribution. First, we demonstrate that the time-saving lens offers unique insights into the decades-long problem of time estimation in software development. Specifically, we show that the failure to accurately assess time savings may be a contributing factor to inaccurate estimates of completion times in software development. Second, we provide evidence that time-saving decisions are more susceptible to bias under an agile framing than under a plan-based framing, possibly because the former is more likely to facilitate heuristic thinking. Finally, we show that attempts to mitigate bias by making relevant information available to decision makers may be hindered by the failure to reuse the information. While information that is more relevant is shown to be more effective in reducing bias, this effect quickly dissipates when the information is no longer presented. Our findings, replicated in three different studies, are a strong indication of the existence and durability of the time-saving bias in software development projects.
2. Background and hypotheses

2.1. Time-saving bias

Svenson (1970) asked drivers to estimate the time they would save by increasing their average speed over a given distance. He found that time-saving estimates deviated systematically from the correct answers. In more recent experiments, Svenson (2008) showed participants pairs of alternative road construction projects, in which road distances and construction costs were about the same (and therefore irrelevant), while the initial (before the project) and increased (after the project) average speeds were different. For example, participants read that Project A would increase the average speed from 30 km/h to 40 km/h, whereas Project B would increase the average speed from 70 km/h to 110 km/h. For each pair of alternatives, participants were asked to choose the one that would lead to a greater gain in driving time (or to determine that both would save the same time). The results of these experiments showed that people seem to arrive at a decision by employing the proportion heuristic, which considers the speed difference as a proportion of the initial (or increased) speed, without taking into account the initial time, thereby failing to recognize that absolute time savings decrease with increasing speed. For the example presented above, the proportion heuristic suggests that the correct answer is Project B (the proportion of speed difference to initial speed is 0.33 for Project A and 0.57 for Project B), whereas in actuality Project A would save more driving time than Project B for any given distance. The time-saving bias was evident in the low proportion of participants, about 17% (Svenson, 2008), who made the correct decision. This bias has been observed in various driving-related judgments (Fuller et al., 2009; Peer, 2010), among both non-professional and professional (taxi) drivers (Peer & Solomon, 2012), and in situations other than driving. In a healthcare setting (Svenson, 2008), participants were asked to choose between pairs of reorganization plans for healthcare centers, with similar numbers of patients per year (equivalent to distance) and different numbers of patients that could be treated by a single physician before and after reorganization (equivalent to initial and increased speeds). When asked to choose the alternative in each pair that would save a larger number of physicians (equivalent to time), participants tended to prefer the alternative for which the proportional improvement was higher. Similar biases were observed when participants were asked to choose between pairs of productivity improvement plans for production lines, with the initial and increased speeds represented as the numbers of units produced per hour and the objective being to maximize time savings (Svenson, 2011). Recently, the bias was observed in consumers' choices among different speeds of Internet connections, food processors, and printers, confirming that consumption decisions are also plagued by misunderstandings of the relationship between productivity increases and time savings (De Langhe & Puntoni, 2016).

2.2. Methodological framing

The framing of a decision problem influences decision making, as people react differently to a specific choice depending on how it is presented (Levin, Schneider, & Gaeth, 1998). This effect has been observed most frequently in the context of whether a choice is framed as a loss or as a gain (Kühberger, 1998; Tversky & Kahneman, 1981). However, other studies have used a more general definition of framing than that reflected in opposing courses of action assessed by probabilities of gains and losses, such as deterministic product attribute framing (Levin & Gaeth, 1988). The concept of framing has been widely employed in sociology and communication studies, where it is viewed as a means of understanding how people construct meaning and make sense of the everyday world (Cacciatore, Scheufele, & Iyengar, 2016). In this work, we employ a broad definition of framing as related to the methodological terminology used to describe the decision problem.

Our interest in the methodological context is motivated by the dominance of methodology in software development, by the opposing methodological approaches that have evolved to guide software development, and by the importance attributed to these approaches in enhancing software development performance. In particular, our interest is in whether the disposition of decision makers to exhibit the time-saving bias is independent of the methodological context that serves to frame the decision problem. Plan-based and agile are two dominant classes of software development methodologies, which represent opposite approaches to the organization of software development (Batra, Xia, VanderMeer, & Dutta,
2010; Boehm, 2002; Nerur, Mahapatra, & Mangalaraj, 2005). Whereas the development of software sometimes follows methodologies that are neither plan-based nor agile, these two approaches are most representative of software development practice in the past two decades. The plan-based approach includes methodologies, such as the waterfall model, in which the development process progresses in a structured, controlled, and deterministic manner through the sequential stages of initiation, analysis, design, construction, testing, and implementation (Boehm, 1988; Petersen & Wohlin, 2010). Agile methodologies have evolved as a reaction to plan-based methodologies, offering an approach to software development that accommodates uncertainty and change during software development in a more effective way. In terms of the organization of software development, the agile approach is characterized by small teams that are engaged in short iterative cycles of development, driven by product features (Dingsøyr, Nerur, Balijepally, & Moe, 2012; Dyba & Dingsøyr, 2008; Serrador & Pinto, 2015). A project is broken down into sub-projects, called iterations, that each involve planning, development, integration, testing, and delivery (Nerur et al., 2005). Plan-based and agile projects differ in the terminology used in schedule estimation.1 In both methodologies, unless the project is schedule-driven, project size is typically estimated first and the schedule is estimated as a function of size. Project size is frequently estimated by such measures as source lines of code (SLOC) or function points (FP) in plan-based projects (Boehm et al., 2000; Laird & Brennan, 2006), and by such measures as story points in agile projects (Cohn, 2006; Trendowicz & Jeffery, 2014). 
When the need arises to compress the schedule and save time, which is often the case in software development projects (Fairley & Willshire, 2003; Nan & Harter, 2009), time is saved in plan-based projects by increasing delivery speed, measured as functionality delivered per time unit or per phase, and in agile projects by increasing “velocity”, which is a measure of the number of story points that the team is able to complete in a single iteration (Coelho & Basu, 2012; Cohn, 2006). Given these differences in terminology, it is important to demonstrate that the time-saving bias can be observed in software development irrespective of the terminology used to frame the time-saving decision.
¹ There is an extensive literature on effort and schedule estimation in software development projects, showing that effort and schedule can be estimated by numerous approaches that vary in the extent to which estimation relies on judgments by experts or on formal models (Laird & Brennan, 2006), including the Constructive Cost Model (COCOMO) (Boehm et al., 2000), the Software Lifecycle Management (SLIM) model (Putnam, 1978), and the System Evaluation and Estimation of Resources Software Estimation Model (SEER-SEM) (Galorath & Evans, 2006). The discussion of these approaches here is far from exhaustive; it is only intended to demonstrate the conceptual differences between schedule estimation in plan-based and agile projects.

2.3. Information availability and relevance

The availability of relevant information is generally considered to have a positive effect on the accuracy of decisions. As obvious as this notion sounds, people frequently fail to use such information effectively (Brehmer, 1980; Thompson et al., 2000). This failure is largely the consequence of the inability of people to spontaneously retrieve information from memory when the information is needed and to perceive the information as relevant to the decision at hand. These complementary cognitive mechanisms of information retrieval and relevance perception have commonly been investigated by first presenting study participants with potentially relevant information and then exploring the conditions necessary for them to utilize this information in problem-solving tasks (Perfetto et al., 1983). Weisberg et al. (1978) found that participants who learned a list of verbal paired associates and were explicitly told that one of the pairs could help them solve a problem performed better than participants who learned the same list but received no guidance, implying that information availability is insufficient for its utilization in problem solving. Perfetto et al. (1983) later replicated these findings in situations in which the supporting information was highly related to the problems, confirming that the explanation is not that people spontaneously retrieve the information but fail to perceive it as relevant, but rather that they fail to retrieve the information in the absence of explicit instructions to do so.

Experimental work has demonstrated the value of making relevant information available to decision makers in mitigating time-related cognitive biases. For instance, in the context of the planning fallacy, Buehler et al. (1994) showed that participants who were asked to recall similar experiences and to answer questions that highlighted the relevance of those experiences provided time estimates that were less optimistically biased than those of participants in the recall-only and control (no manipulation) groups. Several studies have provided evidence that relevant information can be effective in ameliorating the time-saving bias. For example, De Langhe and Puntoni (2016) showed that consumers' assessments of time savings were more accurate when they received additional information through metrics that were linearly related to time savings and when they estimated time savings rather than ranked them. Svenson et al. (2014) found that encouraging participants to estimate actual time savings when making time-saving decisions slightly mitigated the bias. On the basis of these findings, we hypothesize that the availability of relevant information mitigates bias in time-saving decisions in software development. Consistent with studies about bias mitigation in time estimation (Peer & Gamliel, 2013; Shmueli et al., 2016; Svenson et al., 2014), we predict that the time-saving bias is mitigated, yet not eliminated, by the availability of relevant information.

We further hypothesize that the bias-mitigating effect of relevant information diminishes when the information is no longer available. This hypothesis is founded on the failure of people to spontaneously retrieve information unless they are explicitly instructed to do so (Perfetto et al., 1983; Weisberg et al., 1978), and on their strong tendency to consider problems as unique and to diminish the relevance of past experiences (Buehler, Griffin, & Ross, 1995; Kahneman & Lovallo, 1993; Kahneman & Tversky, 1979). The consequence may be a failure to draw a parallel even between two cases that are presented on the same page (Thompson et al., 2000).

Hypothesis 1a-b (H1a-b). (a) The time-saving bias is less likely to be observed (the accuracy of time-saving decisions is higher) when relevant information is available; (b) this bias-mitigating effect is weaker in subsequent time-saving decisions in which the information is not available.

The hypothesis formulated above compares a situation in which information is both available and relevant to a situation in which no information is available. In so doing, it confounds the effects of availability and relevance, ignoring situations in which less relevant information may be available. Arguably, relevance is a more multifaceted construct than availability, as information can be more or less relevant to a decision problem along a number of dimensions. The literature acknowledges that relevance is an elusive notion (Saracevic, 1975); relevance is intuitively understood, but very difficult to define (Cosijn & Ingwersen, 2000). Studies have employed different definitions of relevance, depending on whether their motivation has been to study decision making, information retrieval, or information search. Information is considered relevant in a decision-making task when it has functional value, emphasizing that relevance is situational (Xu, 2007). Situational relevance refers to the pragmatic utility of the information in decision making (Cosijn & Ingwersen, 2000). Xu and Chen (2006) identified topicality, novelty, reliability, understandability, and scope as the five key dimensions of situational relevance based on Grice's (1989) communication theory, but they found no support for scope as a key dimension. Among the four remaining dimensions, we consider topicality and novelty to be less germane to time-saving decisions because the information available in such decisions rarely varies in terms of its relation to the decision maker's current interests (topicality) and fundamental knowledge (novelty). By contrast, the reliability and understandability of information can vary considerably in time-saving decisions. Xu and Chen (2006) measured reliability as the extent to which the information was accurate, consistent with facts, reliable, and true, similar to concepts of accuracy, validity, and agreement in the literature. Consistent with the view that effective communication should reduce the recipient's cognitive load, they measured understandability as the extent to which the information was easy to understand, required little effort, and was easy to read. These operational definitions were found to have high convergent and discriminant validity (Xu & Chen, 2006). Consequently, we hypothesize that information reliability and understandability, reflecting information relevance, have a mitigating effect on the likelihood of observing the time-saving bias. Consistent with the above definitions, we define information reliability as the degree to which the information is perceived to be factual and believable. Information in time-saving decisions is likely to be perceived as having high reliability when it is presented as based on objective statistical information about similar cases, and as having low reliability when it is presented as based on a subjective opinion. We define information understandability as the degree to which processing the information requires cognitive effort. Understandability is likely to be perceived as high when the information can be utilized without considerable cognitive effort, and as low when considerable cognitive effort needs to be expended for the information to be utilized in making time-saving decisions.

Hypothesis 2a-b (H2a-b). The time-saving bias is less likely to be observed (the accuracy of time-saving decisions is higher) when the available information is highly (a) reliable and (b) understandable.

3. Study 1

In accordance with the prevalent methodological approach in descriptive (rather than normative) research on decision making, an experimental approach was employed to test the research hypotheses. The main objective of Study 1 was to confirm the existence of the time-saving bias and to test H1a-b with students prior to committing resources to testing the hypotheses with professional software project managers. The use of student participants in this study is consistent with the frequent reliance on this population in experiments designed to explore decision making in the context of software development (e.g., Andres & Zmud, 2001; Keil et al., 2000; Keil, Im, & Mähring, 2007; Khatri, Vessey, Ramesh, Clay, & Park, 2006; Shmueli et al., 2016, 2015; Umapathy, Purao, & Barton, 2008).

3.1. Method

3.1.1. Participants and design

One hundred and five senior undergraduate students (out of a potential 140) in the Department of Industrial Engineering and Management at an Israeli university participated in the experiment in return for extra credit. The students all majored in information systems and were in the last semester of their studies, with an educational background that included various courses in programming, database management, systems analysis and design, project management, and IT management. Four participants were removed from the sample because of a high likelihood of duplicate responses (identified by repeated IP addresses and adjacent login times), and 22 participants were removed because they completed the experiment in an unreasonable time (less than two minutes for the entire experiment or less than 10 s for a specific scenario). Application of these criteria resulted in a final sample of 79 participants. Seventy of the participants (88.6%) were in the 25–29 age group, six (7.6%) were in the 20–24 age group, and three (3.8%) were in the 30–34 age group. Forty-nine of the participants (62.0%) were female.

The experiment used a mixed design with a single between-subjects independent variable (IV – methodological framing) and a single within-subjects IV (information availability). Methodological framing was either plan-based or agile, and information availability indicated whether or not the "information" was presented to the participant in a scenario. Participants were randomly assigned to methodological framing groups, and information availability was counterbalanced across the two scenarios.

3.1.2. Procedure

Participants performed the experiment online by accessing a website we developed specifically for this work. They were asked to address two scenarios (i.e., decision problems), each presented on a separate webpage comprising four parts: definitions, description, data, and decision (see Appendix). Participants were initially presented with a list of key terms and their definitions to make sure they had a common baseline of knowledge in the relevant domain. The two lists of terms for plan-based and agile framings were structured and phrased in a manner that minimized differences between them to the extent possible, maintaining only those that were mandatory as a consequence of methodological differences. Following the list of terms, each scenario included a description of a time-saving problem. Participants read that they were the manager of two similar plan-based (or agile) projects with a similar and predetermined size (the actual size was noted for the scenario to be as realistic as possible), but with different levels of resources. Participants were then advised that the company's management had decided to shorten the duration of projects and that they were accordingly asked to add resources to one of the projects they managed in order to shorten its duration. With the cost of additional resources being similar for the two projects, participants were asked to choose the project to which resources would be added, so that the addition of resources would result in a greater reduction in overall project duration. Following the description of the time-saving problem, participants were presented with data on the "speeds" of the two projects without (initial) and with (increased) additional resources. The speeds were expressed in terms of team size (employees per phase) in the plan-based framing, and in terms of velocity (points per iteration) in the agile framing.

The two scenarios presented to each participant had the same framing (either plan-based or agile) but different value sets as the data for the decision. Each value set included two pairs of initial and increased speeds for the two projects. The value sets were selected as those that produced the greatest bias in the experiments performed by Svenson (2008), but those values were divided by 10 to be more consistent with the reality of software development. Consequently, the first value set included the speeds of 3 (initial) and 4 (increased) for one project and the speeds of 7 (initial) and 11 (increased) for the other project. The second value set included the speeds of 3 and 5 for one project and the speeds of 6 and 13 for the other project. Each participant was randomly assigned to a particular order of value sets and to a particular order of value pairs within each scenario. In each scenario, following the presentation of the value set (i.e., a value pair for Project A and a value pair for Project B), the participant was asked to choose the project for which the addition of resources would result in a greater reduction in overall project duration: Project A, Project B, or no preference between A and B because the duration reduction would be similar for both. The participant was notified that once a choice was made, the following page would appear automatically, to disallow the adjustment of decisions during the experiment. Overall, the descriptions, data, and decisions were structured to be as consistent as possible with those used in experiments on the time-saving bias (e.g., Svenson, 2008).

The two scenarios presented to each participant were distinguished not only by the use of different value sets but also by the manipulation of information availability, as only one of the two scenarios included the "information" that, based on statistical information collected in the company about similar past projects, it was found that adding resources to a project with a smaller number of employees (lower
velocity in the agile framing) is on average more effective than adding resources to a project with a larger number of employees (higher velocity in the agile framing). The scenario in which this information was included, either the first or the second, was randomly determined for each participant. This manipulation implied that participants made decisions on scenarios “without information” either after seeing the information in the previous scenario or before seeing it in the following scenario. To confirm the effectiveness of our manipulations, we asked a sample of 22 students from the same student population to read the scenario and to answer a set of questions about their understanding of the problem. Consistent with the experimental design, half of the students were randomly assigned to a plan-based scenario and half to an agile scenario. The data collected in this manipulation check confirmed that after reading the scenario, the majority of students correctly perceived the two projects as equal in size (68.2%, with 9.1% perceiving Project A as larger and 22.7% perceiving Project B as larger), responded that there was not enough information in the scenario to determine which project was more important (95.5%, with 4.5% responding that Project A was more important), and identified that the purpose of adding resources to one of the projects was to “reduce the overall duration of the project” (95.5%, with 4.5% identifying the purpose as to “increase the overall size of the project”).
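As a sanity check on these stimuli, the correct answer for each value set follows directly from the duration function: for a common project size S, duration is S/v, so adding resources reduces duration by S(1/v1 - 1/v2). A short sketch (the size 100 below is arbitrary, since it cancels out of the comparison):

```python
def duration_reduction(v1, v2, size=100):
    """Reduction in project duration when speed rises from v1 to v2,
    for a project of the given common size."""
    return size / v1 - size / v2

# Value set 1: speeds 3 -> 4 for one project, 7 -> 11 for the other.
assert duration_reduction(3, 4) > duration_reduction(7, 11)
# Value set 2: speeds 3 -> 5 versus 6 -> 13.
assert duration_reduction(3, 5) > duration_reduction(6, 13)
# In both sets the slower project is the correct choice, even though the
# faster project shows the larger proportional and absolute speed gains.
```

Both assertions hold for any common project size, which is why the scenarios have a single normatively correct answer.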
Table 1
Decision accuracy in experimental conditions (Study 1).

                                        Methodological framing
Scenario  Information availability   Plan-based   Agile       Total
First     Without                    0.26 (23)    0.29 (21)   0.27 (44)
          With                       0.63 (19)    0.63 (16)   0.63 (35)
          Total                      0.43 (42)    0.43 (37)   0.43 (79)
Second    Without                    0.42 (19)    0.13 (16)   0.29 (35)
          With                       0.52 (23)    0.71 (21)   0.61 (44)
          Total                      0.48 (42)    0.46 (37)   0.47 (79)
Total     Without                    0.33 (42)    0.22 (37)   0.28 (79)
          With                       0.57 (42)    0.68 (37)   0.62 (79)
          Total                      0.45 (84)    0.45 (74)   0.45 (158)

Note. Means of decision accuracy ('0' – incorrect, '1' – correct), reflecting the ratio of correct decisions to total decisions, are presented, with the number of observations in parentheses.
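For readers who want to verify the scenario arithmetic, a minimal sketch follows. It assumes, as the scenarios imply, that project duration equals project size divided by development speed; the project size of 840 is a hypothetical round number we chose for illustration, and because both projects are equal in size, any common value yields the same ranking.

```python
def duration(size, speed):
    """Assumed relationship in the scenarios: duration = size / speed."""
    return size / speed

def time_saved(size, old_speed, new_speed):
    """Reduction in duration when speed increases from old_speed to new_speed."""
    return duration(size, old_speed) - duration(size, new_speed)

SIZE = 840  # hypothetical common project size (both projects are equal in size)

# First value set: small project speeds up 3 -> 4, large project 7 -> 11
small_1, large_1 = time_saved(SIZE, 3, 4), time_saved(SIZE, 7, 11)   # 70.0 vs ~43.6
# Second value set: small project 3 -> 5, large project 6 -> 13
small_2, large_2 = time_saved(SIZE, 3, 5), time_saved(SIZE, 6, 13)   # 112.0 vs ~75.4

# The biased heuristic compares raw speed increases (1 < 4, and 2 < 7),
# which points to the larger project -- the wrong answer in both value sets.
assert small_1 > large_1 and small_2 > large_2
```

Because duration is a convex function of speed, a given speed increase saves more time at low speeds than at high speeds, which is exactly the nonlinearity the time-saving bias obscures.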
3.2. Results and discussion

The decision made by a participant in a scenario was the unit of analysis, and its accuracy was operationally defined as a binary variable reflecting whether the participant provided the correct answer ('1') or not ('0'). In all scenarios, the correct answer was the smaller project (counterbalanced as A or B), and participants who answered differently were considered incorrect. Specifically, the correct answer was that increasing speed from 3 to 4 (from 3 to 5 in the second value set) would result in a greater reduction in project duration than increasing speed from 7 to 11 (from 6 to 13 in the second value set). Table 1 presents the ratio of correct decisions to total decisions, i.e., the mean of decision accuracy, in the various experimental conditions. Because it mattered whether the participant was deciding on the scenario "without information" first, before seeing the information, or second, after seeing the information in the previous scenario, the analysis was not collapsed across the two scenarios.

The means of decision accuracy, presented in Table 1, suggested that the time-saving bias was observed in our context, under both plan-based and agile framings, and provided preliminary evidence in support of H1a-b. Accuracy in the first scenario without information was 0.27 (i.e., only about one of every four decisions was correct). The "exact" (Clopper-Pearson) 95% confidence interval for a binomial distribution (0.15, 0.43) showed that this accuracy was similar to that obtained by chance alone (see Footnote 2). When information was available as part of the scenario, accuracy was considerably higher: 0.63 when the information was included in the first scenario and 0.61 when it was included in the second scenario. However, the same participants who seemingly utilized the information included in the first scenario to reach an accuracy of 0.63 reached an accuracy of only 0.29 in the second scenario, when the information was no longer included. Although the scenarios were similar except for the difference in value sets, participants who saw the information in the preceding scenario performed as poorly as participants who had not seen the information before (accuracy of 0.29 versus 0.27, respectively). These values suggested that information availability had a bias-mitigating effect (H1a), which dissipated quickly when the information was no longer available (H1b). Accuracy values in plan-based and agile scenarios suggested that the time-saving bias was prevalent irrespective of the methodological framing.

To test these conclusions more rigorously, a mixed-effects logistic regression model was used with methodological framing, information availability, and scenario as fixed effects, participant as a random effect, and decision accuracy as the dependent variable (DV). The model also included an effect for the interaction of information availability with scenario to test H1b (i.e., whether the effect of information availability dissipated in the following scenario). A logistic regression model was used because decision accuracy was a binary variable (whether the participant answered correctly or not). A mixed-effects model was preferred over statistical methods typically used for categorical data (e.g., the Chi-square test) because of the need to include a random effect to control for the non-independence of the data (observations were paired), given the repeated measures of the DV (two scenarios for each participant). The results obtained for this model, presented in Table 2, were consistent with the conclusions derived above from the accuracy means. They showed no statistically significant effect for methodological framing, leading to the conclusion that the time-saving bias was equally prevalent under plan-based and agile framings. By contrast, a significant positive effect was found for information availability (p < 0.01), indicating that the inclusion of information in a scenario increased decision accuracy, consistent with H1a. If information availability had any enduring effect, the difference between decision accuracy with and without information in the second scenario (both groups saw the information, in either the first or second scenario) would be smaller than that in the first scenario (only one group saw the information), and the effects for scenario and for its interaction with information availability would be statistically significant. The significant result only for information availability led to the conclusion that the effect of information was limited to the scenario in which it was included, consistent with H1b.
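The "exact" (Clopper-Pearson) binomial interval reported for the first scenario without information can be reproduced with a short stdlib-only sketch. Bisection over the binomial tail probabilities stands in for the usual beta-quantile formulation, and the counts (12 correct of 44) are our inference from the reported ratio of 0.27; the function names are ours.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for a binomial proportion, via bisection."""
    def solve(condition):
        # condition(p) is True for small p and False for large p;
        # bisection converges to the boundary where it flips.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if condition(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) < alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

low, high = clopper_pearson(12, 44)  # accuracy 0.27 in the first scenario without information
# The interval (~0.15, ~0.43) contains the chance level of 1/3,
# which is why the observed accuracy is indistinguishable from guessing.
```

In practice one would use a statistical library (e.g., SciPy's binomial test utilities), but the bisection makes the logic of the exact interval explicit.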
Footnote 2. As there were three possible answers for each time-saving problem (Project A, Project B, or no preference), decision accuracy by chance alone was 0.33.

Table 2
Results for mixed-effects logistic regression (Study 1).

Fixed effect                        Estimate  Std. error  z      p
(Intercept)                         1.343     0.631       2.127  0.033
Plan-based framing                  0.036     0.524       0.069  0.945
With information                    2.004     0.755       2.653  0.008
First scenario                      0.079     0.686       0.116  0.908
First scenario × with information   0.174     1.055       0.165  0.869

Note. The DV was decision accuracy ('0' – incorrect, '1' – correct); the model included a random effect for participant; N = 158; log likelihood = −96.3.

4. Study 2

Although the findings of Study 1 were consistent with our expectations, they were obtained with undergraduate students as the target population. Although these students majored in information systems and were in the last semester of their studies, they had little real-world experience with plan-based or agile methodologies. It was,
therefore, necessary to replicate Study 1 with a sample of software project managers.

4.1. Method

The design and procedure of this study were identical to those of Study 1, with a different target population. This study targeted software project managers from all over the world. Potential participants were contacted directly through one of two channels. The first channel was a popular website for project managers (http://www.projectmanagement.com), supported by the Project Management Institute (PMI). Potential participants were identified using the following search terms: software development project manager, software development manager, software project manager, software manager, and IT project manager. The second channel was the social network LinkedIn, where potential participants were identified in relevant interest groups (software development management professionals group, software project management group, and IT and software project management group) using the same search terms. After a direct link was established with potential participants through these channels, we invited them to participate in the study through a personal message. Given that these channels provided lists that overlapped to some extent and given the unstructured nature of the process, we estimated that about 300 personal invitations were sent. These invitations were viewed by 156 potential participants, of whom 82 completed the experiment. One participant was removed from the sample because of a high likelihood of a duplicate response (repeated IP address and adjacent login times), resulting in a final sample of 81 participants. The characteristics of this sample are presented in Table 3.

Table 3
Sample characteristics (Study 2).

Characteristic (valid N)              Frequency  Valid percent
Gender (69)
  Female                              9          13.0%
  Male                                60         87.0%
Age (69)
  Below 30                            1          1.4%
  30-34                               5          7.2%
  35-39                               14         20.3%
  40-44                               20         29.0%
  45-49                               10         14.5%
  50-54                               7          10.1%
  55-59                               9          13.0%
  Above 60                            3          4.3%
Country (72)
  Brazil                              6          8.3%
  Canada                              8          11.1%
  India                               3          4.2%
  UK                                  4          5.6%
  US                                  22         30.6%
  Other (less than 3 participants)    29         40.3%
Position (50)
  Top management                      5          10.0%
  Middle management                   29         58.0%
  Leader                              12         24.0%
  Development                         4          8.0%

4.2. Results and discussion

Decision accuracy means, presented in Table 4, suggested that professionals behaved somewhat differently than students. First, participants in this study achieved an accuracy of 0.42 in the condition in which they had not seen the information (first scenario, without information). The equivalent accuracy among students in Study 1 was 0.27. Nevertheless, the "exact" (Clopper-Pearson) 95% confidence interval for a binomial distribution (0.27, 0.58) showed that the accuracy in Study 2, as in Study 1, was similar to that obtained by chance alone (0.33). Second, a comparison of accuracy levels in plan-based and agile scenarios suggested that participants were more prone to the time-saving bias and less accurate in agile scenarios, in contrast to the lack of effect for methodological framing in Study 1. Third, the pattern whereby information had a positive effect on decision accuracy only when it was included in the scenario, observed in Study 1, characterized the agile scenarios but not the plan-based ones. The same participants in agile scenarios who seemingly used the information included in the first scenario to achieve an accuracy of 0.50 achieved an accuracy of 0.10 in the second scenario, when the information was not included. The equivalent drop in accuracy among participants in plan-based scenarios was from 0.50 to 0.44.

Similar to Study 1, a mixed-effects logistic regression model was used to rigorously test these conclusions, with methodological framing, information availability, scenario, and the interaction of the last two variables as fixed effects, participant as a random effect, and decision accuracy as the DV. The results obtained for this model, presented in Table 5, showed that the plan-based framing (p < 0.05) and information availability (p < 0.01) had significant positive effects on decision accuracy. Consistent with the findings of Study 1, the effects of scenario and of its interaction with information availability were nonsignificant, confirming that decision accuracy in scenarios without information was independent of whether the participant had seen the information before or not. These findings led to the conclusion that H1a-b was supported in Study 2.

Table 4
Decision accuracy in experimental conditions (Study 2).

                                        Methodological framing
Scenario  Information availability   Plan-based   Agile       Total
First     Without                    0.55 (22)    0.29 (21)   0.42 (43)
          With                       0.50 (18)    0.50 (20)   0.50 (38)
          Total                      0.53 (40)    0.39 (41)   0.46 (81)
Second    Without                    0.44 (18)    0.10 (20)   0.26 (38)
          With                       0.77 (22)    0.43 (21)   0.60 (43)
          Total                      0.63 (40)    0.27 (41)   0.44 (81)
Total     Without                    0.50 (40)    0.20 (41)   0.35 (81)
          With                       0.65 (40)    0.46 (41)   0.56 (81)
          Total                      0.58 (80)    0.33 (82)   0.45 (162)

Note. Means of decision accuracy ('0' – incorrect, '1' – correct), reflecting the ratio of correct decisions to total decisions, are presented, with the number of observations in parentheses.

Table 5
Results for mixed-effects logistic regression (Study 2).

Fixed effect                        Estimate  Std. error  z      p
(Intercept)                         2.493     0.780       3.198  0.001
Plan-based framing                  1.690     0.678       2.493  0.013
With information                    2.354     0.851       2.764  0.006
First scenario                      1.054     0.771       1.368  0.171
First scenario × with information   1.712     1.232       1.390  0.164

Note. The DV was decision accuracy ('0' – incorrect, '1' – correct); the model included a random effect for participant; N = 162; log likelihood = −96.6.

5. Study 3

As Study 2 provided support for H1a-b, the primary objective of Study 3 was to test H2a-b about the effects of information reliability and understandability (reflecting information relevance) on decision accuracy. To avoid the unnecessary collection of difficult-to-obtain data, information availability was not manipulated independent of scenario in Study 3, as it was in the two previous studies. Whereas information was available in either the first or second scenario in Studies 1 and 2, information was available only and always in the first scenario in Study 3. Absent scenarios in which information had not been available, Study 3 could not test H1a about the positive effect of information availability. It could, however, test H1b about the temporary effect of information, as well as test H2a-b.
5.1. Method
Study 3 used the same target population of software project managers from all over the world and the same sampling approach as those used in Study 2. Given the larger number of IVs in Study 3, the sample size had to be doubled. Importantly, the samples in Studies 2 and 3 were mutually exclusive, as those invited to participate in Study 2 were not invited to participate in Study 3. Overall, we sent 921 personal invitations, which were viewed by 328 potential participants, of whom 160 completed the experiment. No participants were removed from the sample, whose characteristics are presented in Table 6.

The design of Study 3 included three between-subjects IVs and one within-subjects IV. Participants were randomly assigned to either plan-based or agile framing and to one of four (2 × 2) versions of information, where information reliability (high or low) was manipulated in the first part of the sentence and information understandability (high or low) in the second part. Consistent with the conceptual definitions presented earlier, information reliability was considered high when the available information was based on objective statistical information about similar cases ("Based on statistical information collected in the company about similar past projects, it was found…") and low when the available information was based on a subjective opinion ("A colleague project manager in your company argues…"). Information understandability was considered high when the information could be utilized without considerable cognitive effort ("…that adding resources to a project with a smaller number of employees is on average more time saving than adding resources to a project with a larger number of employees") (Footnote 3) and low when considerable cognitive effort had to be expended for the information to be utilized ("…that the decision which project saves more time requires to divide the project size by its resources before and after the addition of resources to calculate the difference in project duration for each project"). Previous research aimed at mitigating the time-saving bias by providing both information that alluded to the accurate decision (De Langhe & Puntoni, 2016) and information that described the mathematical procedure for reaching the accurate decision (Svenson et al., 2014). Importantly, the information available in the two previous studies was always the version with high reliability and high understandability, implying that the information available in these studies was of high relevance. In contrast to the two previous studies, in which the within-subjects IV of information availability was counterbalanced across the two scenarios, information was always available in the first scenario in Study 3. Accordingly, data analysis included the information availability variable but not the scenario variable. All other aspects of the design and procedure of Study 3 were similar to those of the two previous studies.

Similar to the manipulation checks in Study 1, to confirm the effectiveness of the manipulations of information reliability and understandability, we asked an additional sample of 20 software project managers from the same target population to read the scenario with information and to answer a set of questions about that information. Consistent with the experimental design, the managers were randomly assigned to either plan-based or agile framing and to one of four versions of information (high/low reliability and high/low understandability). After reading the complete scenario, the managers were asked to assess (on 7-point scales) the extent to which they perceived that specific information as accurate, based on facts, reliable, true, having a credible source, easy to understand, requiring little effort, and easy to read. Xu and Chen (2006) studied the dimensions of information relevance by using the first four items to measure information reliability (we added source credibility) and the last three items to measure information understandability. We used a principal component analysis (PCA) with Varimax rotation to identify the factors underlying the managers' assessments. This analysis yielded two factors (with eigenvalues above 1), and the rotated component matrix showed that the first five items (accurate, factual, reliable, true, and credible source) had high loadings on one factor (loadings above 0.774, cross-loadings below 0.198) and the last three items (easy to understand, little effort, and easy to read) had high loadings on the other factor (loadings above 0.742, cross-loadings below 0.192). Cronbach's Alpha values were 0.938 for the first five items and 0.792 for the last three items. We then generated PCA-derived factor scores for each manager and tested whether the two factor scores could be predicted by the manipulations of information reliability and understandability. We found that the first factor score, measuring information reliability, was significantly affected by the manipulation of information reliability (p < 0.05) but not by that of information understandability (p > 0.10). We also found that the second factor score, measuring information understandability, was significantly affected by the manipulation of information understandability (p < 0.05) but not by that of information reliability (p > 0.10). The interaction terms in both models were not statistically significant. These results confirmed that despite our composite manipulations (two variables manipulated in different parts of a single sentence), they were effective in manipulating information reliability and understandability.

Table 6
Sample characteristics (Study 3).

Characteristic (valid N)              Frequency  Valid percent
Gender (149)
  Female                              37         24.8%
  Male                                112        75.2%
Age (147)
  Below 30                            5          3.4%
  30-34                               18         12.2%
  35-39                               19         12.9%
  40-44                               27         18.4%
  45-49                               21         14.3%
  50-54                               27         18.4%
  55-59                               12         8.2%
  Above 60                            18         12.2%
Country (152)
  Canada                              8          5.3%
  Germany                             4          2.6%
  Israel                              3          2.0%
  Italy                               3          2.0%
  UK                                  7          4.6%
  US                                  99         65.1%
  Other (less than 3 participants)    28         18.4%
Position (97)
  Top management                      3          3.1%
  Middle management                   54         55.7%
  Leader                              33         34.0%
  Development                         7          7.2%

5.2. Results and discussion

According to the decision accuracy means, presented in Table 7, the accuracy in plan-based scenarios seemed to be higher than that in agile scenarios across all combinations of the other variables. For the first scenario, in which information was included, the accuracy with high reliability was higher than that with low reliability (consistent with H2a), and the accuracy with high understandability was higher than that with low understandability (consistent with H2b). These effects seemed to be additive, resulting in the highest accuracy (0.80) being observed in plan-based scenarios with high information reliability and understandability. Relative to the two previous studies, in which the available information was most relevant (high reliability and understandability), the drop in accuracy in this study once the information was no longer available in the second scenario seemed to be less dramatic, arguably because the information was less relevant and, thus, less effective in most conditions of the first scenario.

A mixed-effects logistic regression model was used to test the hypotheses, with methodological framing, information availability,
information reliability, and information understandability as fixed effects, participant as a random effect, and decision accuracy as the DV. The model also included two effects for the interactions of information availability with information reliability and with information understandability, to account for the difference in accuracy between the first scenario, in which information of a specific reliability and understandability was available, and the second scenario, in which that information was no longer available (Footnote 4). The results obtained for this model, presented in Table 8, showed that plan-based framing (p < 0.001), high information reliability (p < 0.01), and high information understandability (p < 0.05) had significant positive effects on decision accuracy, providing support for H2a-b. Given that the information included in the first scenario was highly relevant in only one (high reliability and understandability) of four conditions, its exclusion in the second scenario had no significant effect on decision accuracy. However, the results for the two interaction effects showed that the negative effect of the exclusion of information on decision accuracy was statistically significant in the case of high reliability (p < 0.05) but not in the case of high understandability. Evidently, information with high reliability was more effective than information with low reliability in the first scenario, but most of this effect dissipated in the second scenario when the information was no longer included. The difference in accuracy between information with high and low understandability in the first scenario was smaller, but it better endured the exclusion of the information in the second scenario.

Footnote 3. The words smaller/larger number of employees in plan-based scenarios were replaced with the words lower/higher velocity in agile scenarios.

Footnote 4. We considered information in the second scenario to have different levels of reliability and understandability, despite the fact that it was not available in that scenario, because the information available in the first scenario (with high/low reliability and high/low understandability) was assumed to have a residual effect in the second scenario.

Table 7
Decision accuracy in experimental conditions (Study 3).

                                                                 Methodological framing
Information availability   Reliability  Understandability  Plan-based  Agile  Total
With (first scenario)      Low          Low                0.30        0.10   0.20
                                        High               0.50        0.15   0.32
                                        Total              0.40        0.13   0.26
                           High         Low                0.55        0.25   0.40
                                        High               0.80        0.40   0.60
                                        Total              0.67        0.33   0.50
                           Total        Low                0.43        0.18   0.30
                                        High               0.65        0.28   0.46
                                        Total              0.54        0.23   0.38
Without (second scenario)  Low          Low                0.30        0.25   0.28
                                        High               0.50        0.20   0.35
                                        Total              0.40        0.22   0.31
                           High         Low                0.40        0.20   0.30
                                        High               0.70        0.20   0.45
                                        Total              0.55        0.20   0.38
                           Total        Low                0.35        0.23   0.29
                                        High               0.60        0.20   0.40
                                        Total              0.48        0.21   0.34
Total                      Low          Low                0.30        0.18   0.24
                                        High               0.50        0.18   0.34
                                        Total              0.40        0.18   0.29
                           High         Low                0.47        0.23   0.35
                                        High               0.75        0.30   0.53
                                        Total              0.61        0.26   0.44
                           Total        Low                0.39        0.20   0.29
                                        High               0.62        0.24   0.43
                                        Total              0.51        0.22   0.36

Note. Means of decision accuracy ('0' – incorrect, '1' – correct), reflecting the ratio of correct to total decisions, are presented; there were exactly 20 observations in each experimental condition; N = 320.

Table 8
Results for mixed-effects logistic regression (Study 3).

Fixed effect                                    Estimate  Std. error  z      p
(Intercept)                                     3.946     0.866       4.555  0.000
Plan-based framing                              2.419     0.626       3.867  0.000
Without information                             0.691     0.605       1.143  0.253
High reliability                                1.986     0.657       3.021  0.003
High understandability                          1.458     0.645       2.262  0.024
Without information × high reliability          1.485     0.686       2.163  0.031
Without information × high understandability    0.449     0.662       0.678  0.498

Note. The DV was decision accuracy ('0' – incorrect, '1' – correct); the model included a random effect for participant; N = 320; log likelihood = −174.5.

6. General discussion

While the findings of the three studies consistently show that time-saving decisions in software development projects are biased, we find that the bias is more likely to be observed (lower decision accuracy) under an agile framing than under a plan-based framing, but only in Studies 2 and 3, with samples of software project managers. We find no such differences between the two framings in Study 1, with a sample of senior information systems students. The differences between students and managers are better understood by looking at the actual means of decision accuracy across studies. When no information is available, decision accuracy means under an agile framing in Studies 1, 2, and 3 are 0.22, 0.20, and 0.21, respectively. The corresponding means under a plan-based framing are 0.33, 0.50, and 0.48, respectively. Given that these means are obtained from three independent samples, they are remarkably consistent. They suggest that students are similarly inaccurate under the two framings and that managers are as accurate as students under an agile framing. Managers, however, are more accurate under a plan-based framing.

Attribute substitution theory (Kahneman & Frederick, 2002), which suggests that people may "use the answer to an easy question to solve a related problem" (Shah & Oppenheimer, 2008, p. 208), offers a possible explanation for the higher likelihood of the bias under an agile framing than under a plan-based framing among managers. Kahneman and Frederick (2002) defined attribute substitution as cognitively replacing a target attribute with a heuristic attribute, leading to heuristic thinking and, possibly, to biased decisions. The theory suggests that attribute substitution controls judgment when a heuristic attribute comes readily to mind. It is possible that the attribute of speed comes more readily to mind in decisions about velocity than in decisions about team size. Consequently, the simpler question of which speed increase saves more time, which invokes the time-saving bias, is more likely to be answered when decision makers are asked which velocity increase saves more time (agile framing) than which team size increase saves more time (plan-based framing). The differences between students and managers may be observed because managers have a deeper understanding of software development, possibly making them more sensitive to whether time is saved by increasing velocity in an agile context or by increasing team size in a plan-based context. This explanation is consistent with the argument of Keil et al. (2007) that the use of student participants to study software development should be assessed not absolutely and universally, but rather on a case-by-case basis. We consider these differences between plan-based and agile framings, as well as between managers and students, promising avenues for future research on cognitive biases in software development.

By contrast, the studies are consistent in supporting the hypotheses about the short-term effect of available relevant information (H1a-b). Studies 1 and 2 confirm that the inclusion of relevant information in a scenario increases decision accuracy, although the time-saving bias is mitigated but not eliminated. Decision accuracy means for scenarios with information in Studies 1, 2, and 3 are 0.62, 0.56, and 0.60, respectively. These consistent means demonstrate that even when relevant information is available, about 40% of time-saving decisions remain inaccurate. These relatively high levels of inaccuracy may be specific to our experimental setting (e.g., lack of sufficient detail), and we are unable to explain them in the absence of additional manipulations of the available information. However, more than confirming that the bias-mitigating effect of information is weaker in a subsequent scenario without the information, as we hypothesize, Studies 1 and 2 show that the effect completely disappears and that participants exhibit the same level of performance (accuracy of less than 0.30) as those who did not see the information. Given the similarity between the two scenarios presented to each participant, the absence of any residual effect of information availability cannot be fully explained either by the tendency of people to perceive a specific decision under consideration as unique (Buehler et al., 1994; Kahneman & Lovallo, 1993; Kahneman & Tversky, 1979) or by their tendency to access previous knowledge on the basis of surface, rather than structural, similarity to the decision at hand (Gentner & Medina, 1998; Thompson et al., 2000). As the three studies were not designed to investigate the cognitive mechanisms underlying this finding, we cannot ascertain whether participants failed to retrieve the information or consciously dismissed it. We leave it to future research to explain this interesting finding.

The information presented to participants in Studies 1 and 2 is considered relevant because of its high reliability and understandability. We manipulate information reliability and understandability in Study 3 by including three additional versions of information (reliability and understandability are high-low, low-high, or low-low). The results of Study 3 provide support for the hypotheses that the bias-mitigating effect of information is independently contingent on its reliability and understandability (H2a-b). Consistent with the specific context of professionals making decisions that require a certain level of cognitive effort, we find that the effect of information reliability is larger than that of information understandability. However, the drop in performance in the following scenario without information is also larger for information reliability, culminating in a significant interaction effect of information availability with information reliability but not with information understandability. These results suggest that information relevance and bias mitigation are not dichotomous, and that variance in the former leads to variance in the latter. Accordingly, a more nuanced approach to the relationship between information relevance and bias may be warranted, at least in the case of the time-saving bias.

Our findings contribute to research in three ways, which open avenues for future work. The first contribution is highlighting the time-saving bias as a concept of interest in the nomological network of biased time estimates in software development projects. Time estimation has repeatedly been shown to be a challenging task, and "the relation of time to distance and speed" has been regarded as one of "the major problems of time" (Levin & Zakay, 1989, p. 1). The studies reported here show that the time-saving bias is observed in software development and that relevant information can mitigate the bias but not eliminate it. While we certainly do not suggest that this bias accounts for considerable variance in software development performance, we do argue that it deserves at least the same attention as other cognitive biases, such as the planning fallacy (Buehler et al., 2010; Halkjelsvik & Jørgensen, 2012; Shmueli et al., 2016), that have been shown to affect software development performance.

The second contribution is proposing that the organization of software development, specifically the methodology underlying the development, may have nuanced effects on the quality of project management decisions. The differences found in the likelihood of the time-saving bias as a consequence of the methodological framing suggest that the advantages of simplicity and intuitiveness associated with the agile approach may have a downside: they may increase the likelihood of intuitive thinking, which may lead to decisions that are more biased. Accordingly, the methodology of software development should be investigated not only as an antecedent of project performance (Serrador & Pinto, 2015) but also as a context that moderates the quality of project management decisions (Stingl & Geraldi, 2017).

Finally, the third contribution is showing that information availability has a positive, yet momentary, effect. While the presentation of information to decision makers improves the accuracy of their decisions, this information has no residual effect once it is no longer presented, even if the following situation is very similar. With respect to the decision-making literature, which includes several demonstrations of the failure of people to transfer knowledge across tasks (e.g., Brehmer, 1980; Perfetto et al., 1983; Thompson et al., 2000), our findings represent an extreme case of such failure. Another contribution to this literature is the typology of information relevance as comprising the dimensions of information reliability and understandability, both of which are found to affect decision accuracy. The contribution to the literature on software development is the strong empirical evidence, replicated in three studies with different samples and designs, of the tendency to ignore information that can improve project management decisions unless this information is immediately available and highly relevant to the decision at hand. Whereas this literature has gone to great lengths to prescribe ways to effectively collect and use information in support of decision making (e.g., Feris, Zwikael, & Gregor, 2017), our findings suggest that the benefits realized from information use are extremely limited in time if the information becomes unavailable in subsequent decisions.

Each of the research contributions above has a practical implication. First, people find tasks of time estimation to be challenging (Levin & Zakay, 1989), and the reality of software development projects significantly increases the challenge of accurately estimating time. Managers need to be aware that the nonlinearity of the speed-time function implies that the time-saving bias may be at play when they have to make time-saving decisions. Although "cognitive biases do not disappear just because people know about them" (Stacy & MacMillan, 1995, p. 62), awareness of a bias is certainly a crucial step toward its mitigation (Shmueli, Pliskin, & Fink, 2015). Second, managers need to consider the possibility that the increasing use of more intuitive software development methodologies in recent years to address project performance problems may induce the side effect of more intuitive decision making, which is prone to cognitive biases. This work suggests that the greater simplicity of agile methodologies may come at the cost of a higher likelihood of biased time-saving decisions. Therefore, managers should insist on using formal methods of time estimation even when relevant heuristics come readily to mind. Persistent use of formal methods, even when they seem superfluous, can be an effective approach to mitigating cognitive bias in decision making. Third, this work shows that the positive effect of making relevant information available to decision makers dissipates quickly once this information is no longer presented. This finding is important to those engaged in efforts to develop methods and solutions in support of decision making in project management because it highlights the importance of making relevant information available
This work suggests that the human tendency to erroneously employ proportion or absolute heuristics in time-saving decisions should be taken into account in seeking to explain biased time estimates in software development, especially as software development frequently places decision makers in a position of having to save time by increasing development speed (Fairley & Willshire, 2003; Nan & Harter, 2009). Across three studies, with three different samples, we consistently observe that the accuracy of time-saving decisions is comparable to that achieved by chance alone (33%). Under a plan-based framing or with highly relevant information, decision accuracy increases to around 50%. These results confirm that
5 The mean in Study 3 is for the condition of high information reliability and high information understandability in which the information presented is identical to that presented in Studies 1 and 2.
This finding suggests that it is not enough to provide information for a specific decision and assume that this information will be utilized in subsequent decisions, even in those that are similar and immediately follow. Instead, relevant information has to be made available repeatedly for it to have a sustained effect.

By taking an experimental approach, which is the dominant approach in empirical decision-making research, this work trades external validity for internal validity. The primary advantage of this approach in our context is the controlled setting and the ability to attribute causality to manipulations. Its primary disadvantage is the gap between the simple scenarios presented to participants and the complexity of real-world software development projects. This gap implies that caution should be exercised in transferring our findings to the real world. It should be noted, however, that our approach is not to favor internal over external validity in all aspects of experimental design. Specifically, in manipulating methodological framing, we cannot limit the differences between the plan-based and agile scenarios to those related to the distinction between team size and velocity as the basis for increasing development speed. Although such an approach would provide better control for potential confounding, it is practically impossible to manipulate only the terminology of speed increase while all other aspects of the project remain constant, especially with professional software project managers as participants. We therefore have to compromise on internal validity to some extent for the scenarios to have real-world credibility. On the one hand, as the concept of velocity is unlikely to be applied in real software development environments without related agile concepts, the practical implications of our findings are unaffected by this limitation. On the other hand, as agile methodologies involve distinct processes of decision making (Cao, Mohan, Ramesh, & Sarkar, 2013; Drury, Conboy, & Power, 2012), there is a need to factor in additional characteristics of these methodologies to fully understand how the time-saving bias affects software development practice.

7. Conclusions

Against the background of frequent inaccuracies in time estimation in software development projects, and the need to take into account cognitive biases as factors contributing to such inaccuracies, this work addresses the research question of how time-saving decisions in software development projects are influenced by the time-saving bias. We demonstrate that this bias deserves at least the same attention that a more recognized bias in time estimation, the planning fallacy, has received in the project management and software engineering literature. Taking a rigorous empirical approach, we conduct three experiments in which both students and managers are faced with time-saving decisions that are framed in either a plan-based context or an agile context. The findings confirm that the human failure to correctly estimate the relationship between speed increase and time saving is also observed in time-saving decisions about software development. The findings show that the bias is more common under an agile framing, but not among students, who similarly fail to correctly estimate the relationship regardless of the framing. Finally, the findings demonstrate that while the bias is mitigated, but not eliminated, by the availability of reliable and understandable information, this effect dissipates once the information is no longer available. Future research can follow the approach employed in this work to further our understanding of how cognitive biases may lead to inaccurate project management decisions.

Conflict of interest

The authors declare no conflict of interest.
Appendix. Experimental scenarios (plan-based and agile, with information)

Plan-based scenario, with information, high information reliability, high information understandability

The scenarios you are about to read describe projects developed by a Plan-Based methodology.

Terms:
Plan-Based Project - A project that is developed using sequential procedures, such as the Waterfall methodology.
Effort-Driven Project - The scheduling of the project is effort-driven, i.e., it keeps the total work at its current value, regardless of how many resources are assigned to the project.
Person-month - The amount of work performed by an average person during one month.
Phase - A period of time in the project in which a certain activity (such as analysis or programming) is performed for the entire project scope.
Project Team - The average number of employees in each of the project phases.
Project Duration - The total elapsed time from project start to finish.
A Plan-Based Project consists of consecutive phases whose completion allows the project to be completed.

Scenario: You are the manager of two effort-driven Plan-Based software development projects with a similar and predetermined size that requires an effort of 57 person-months each. The characteristics (e.g., business value) of these projects are similar as well, but they are allocated different levels of resources at initiation. Your company's management has decided to shorten the durations of its projects. Accordingly, it has been decided to allow you to add resources to one of the projects you manage in order to shorten its duration, where the cost of additional resources is similar for both projects. The additional resources will lead to an increase in the number of employees in the different project phases, thereby shortening the duration of specific phases and the duration of the entire project to which resources were added (assuming that the number of employees has no influence on the difficulty of project management).
Based on statistical information collected in the company about similar past projects, it was found that adding resources to a project with a smaller number of employees is on average more time saving than adding resources to a project with a larger number of employees. You are required to choose the project to which resources will be added - the one for which adding resources will result in a greater reduction in overall project duration. To assist you in this decision, the following data were collected for the two projects you manage:
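Under the effort-driven assumption, the normatively correct choice follows from dividing effort by team size before and after the addition of resources. The following sketch is ours, not part of the experimental materials; it plugs in the first value set listed under "Value sets" later in this Appendix (team sizes of 3 rising to 4, and 7 rising to 11):

```python
def duration_months(effort_pm, team_size):
    # Effort-driven scheduling: duration = total effort / resources.
    return effort_pm / team_size

effort = 57  # person-months, as stated in the scenario

for before, after in [(3, 4), (7, 11)]:
    saved = duration_months(effort, before) - duration_months(effort, after)
    print(f"team {before} -> {after}: saves {saved:.2f} months")
# team 3 -> 4: saves 4.75 months
# team 7 -> 11: saves 2.96 months
```

Adding one employee to the smaller (slower) project saves more time than adding four to the larger one, which is exactly the relationship the scenario's statistical information describes.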
Agile scenario, with information, high information reliability, high information understandability

The scenarios you are about to read describe projects developed by an Agile methodology.

Terms:
Agile Project - A project that is developed using iterative methods, such as the Scrum methodology.
Feature-Driven Project - The scheduling of the project is feature-driven, i.e., it keeps the total work at its current value, regardless of how many resources are assigned to the project.
Story Points - A unit of measure for the size of project features. The sum of all points for all features represents the project's size.
Iteration - An acceptable and predetermined period of time in which a given number of story points are completed.
Velocity - The number of story points that can be completed in a single iteration.
Project Duration - The total elapsed time from project start to finish.
An Agile Project consists of consecutive iterations whose completion allows the project to be completed.

Scenario: You are the manager of two feature-driven Agile software development projects with a similar and predetermined size of 57 story points each. The characteristics (e.g., business value) of these projects are similar as well, but they are allocated different levels of resources at initiation. Your company's management has decided to shorten the durations of its projects. Accordingly, it has been decided to allow you to add resources to one of the projects you manage in order to shorten its duration, where the cost of additional resources is similar for both projects. The additional resources will lead to an increase in the velocity in the different project iterations, thereby requiring fewer iterations and shortening the duration of the entire project to which resources were added (assuming that the velocity has no influence on the difficulty of project management).
Based on statistical information collected in the company about similar past projects, it was found that adding resources to a project with a lower velocity is on average more time saving than adding resources to a project with a higher velocity. You are required to choose the project to which resources will be added - the one for which adding resources will result in a greater reduction in overall project duration. To assist you in this decision, the following data were collected for the two projects you manage:
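The analogous computation under the agile framing counts iterations rather than months. The scenario does not state whether a final partial iteration counts as a whole one; assuming whole iterations (our assumption, a common Scrum convention), the ranking of the two projects in the first value set matches the continuous calculation:

```python
import math

def iterations_needed(story_points, velocity):
    # A feature-driven project runs whole iterations, so the number
    # required is the ceiling of size / velocity (our assumption).
    return math.ceil(story_points / velocity)

size = 57  # story points, as stated in the scenario

saved_low = iterations_needed(size, 3) - iterations_needed(size, 4)    # 19 - 15 = 4
saved_high = iterations_needed(size, 7) - iterations_needed(size, 11)  # 9 - 6 = 3
print(saved_low, saved_high)  # 4 3
```

Raising velocity on the lower-velocity project again yields the larger reduction, here four iterations versus three.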
Value sets

The two scenarios presented to each participant included two different value sets:
The speeds of 3 (initial) and 4 (increased) for one project and the speeds of 7 (initial) and 11 (increased) for the other project.
The speeds of 3 and 5 for one project and the speeds of 6 and 13 for the other project.
The value sets were counterbalanced across the two scenarios presented to each participant, and the value pairs were counterbalanced across the two projects in a single scenario (i.e., each participant was randomly assigned to a particular order of value sets and to a particular order of value pairs within each scenario). Both examples above include the first value set, with a different order of value pairs across the two examples.

Available information - Studies 1 and 2

Only one of the two scenarios presented to each participant included the following information: Based on statistical information collected in the company about similar past projects, it was found that adding resources to a project with a smaller number of employees/a lower velocity is on average more time saving than adding resources to a project with a larger number of employees/a higher velocity. The scenario in which this information was included, either the first or the second, was randomly determined for each participant.

Available information - Study 3

Participants were randomly assigned to one of four (2 × 2) versions of information, where information reliability (high or low) was manipulated in the first part of the sentence and information understandability (high or low) in the second part.
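A quick check (a sketch of ours, using the continuous duration = size / speed model) confirms that in both value sets the initially slower project is the normatively correct choice, so the counterbalancing does not alter the correct answer:

```python
def time_saved(size, v_before, v_after):
    # duration = size / speed, so the saving is the drop in duration.
    return size / v_before - size / v_after

size = 57
for low, high in [((3, 4), (7, 11)), ((3, 5), (6, 13))]:
    s_low, s_high = time_saved(size, *low), time_saved(size, *high)
    print(f"{low}: {s_low:.2f} vs {high}: {s_high:.2f}")
# (3, 4): 4.75 vs (7, 11): 2.96
# (3, 5): 7.60 vs (6, 13): 5.12
```

In both sets the absolute speed increase is larger for the faster project, yet the slower project saves more time, which is what makes the scenarios diagnostic of the time-saving bias.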
High information reliability, high information understandability (the information included in Studies 1 and 2): Based on statistical information collected in the company about similar past projects, it was found that adding resources to a project with a smaller number of employees/a lower velocity is on average more time saving than adding resources to a project with a larger number of employees/a higher velocity.

High information reliability, low information understandability: Based on statistical information collected in the company about similar past projects, it was found that deciding which project saves more time requires dividing the project size by its resources before and after the addition of resources to calculate the difference in project duration for each project.

Low information reliability, high information understandability: A colleague project manager in your company argues that adding resources to a project with a smaller number of employees/a lower velocity is on average more time saving than adding resources to a project with a higher velocity/a larger number of employees.

Low information reliability, low information understandability: A colleague project manager in your company argues that deciding which project saves more time requires dividing the project size by its resources before and after the addition of resources to calculate the difference in project duration for each project.

Information was always available in the first scenario in Study 3.
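The "low understandability" versions verbally encode the normative rule. Rendered as code (our sketch, with the first value set plugged in), the rule amounts to:

```python
def saving(size, resources_before, resources_after):
    # Divide the project size by its resources before and after the
    # addition of resources, and take the difference in duration.
    return size / resources_before - size / resources_after

# Value pairs from the first value set; pick the project with the larger saving.
projects = {"A": (3, 4), "B": (7, 11)}
best = max(projects, key=lambda p: saving(57, *projects[p]))
print(best)  # A
```

Applying the rule selects the initially slower project (A), which is the answer the "high understandability" versions state directly.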
References

Andres, H., & Zmud, R. W. (2001). A contingency approach to software project coordination. Journal of Management Information Systems, 18(3), 41–70.
Batra, D., Xia, W., VanderMeer, D., & Dutta, K. (2010). Balancing agile and structured development approaches to successfully manage large distributed software projects: A case study from the cruise line industry. Communications of the Association for Information Systems, 27(1), Article 21.
Boehm, B. W. (1988). A spiral model of software development and enhancement. Computer, 21(5), 61–72.
Boehm, B. (2002). Get ready for agile methods, with care. Computer, 35(1), 64–69.
Boehm, B. W., Abts, C., Brown, A. W., Chulani, S., Clark, B. K., Horowitz, E., et al. (2000). Software cost estimation with COCOMO II. Upper Saddle River, NJ: Prentice Hall.
Brehmer, B. (1980). In one word: Not from experience. Acta Psychologica, 45, 223–241.
Brooks, F. P. (1975). The mythical man-month: Essays on software engineering. Reading, MA: Addison-Wesley.
Buehler, R., Griffin, D., & Peetz, J. (2010). The planning fallacy: Cognitive, motivational, and social origins. In M. P. Zanna & J. M. Olson (Eds.), Advances in experimental social psychology (Vol. 43, pp. 1–62). Burlington, MA: Academic Press.
Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the "planning fallacy": Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366–381.
Buehler, R., Griffin, D., & Ross, M. (1995). It's about time: Optimistic predictions in work and love. European Review of Social Psychology, 6(1), 1–32.
Cacciatore, M. A., Scheufele, D. A., & Iyengar, S. (2016). The end of framing as we know it … and the future of media effects. Mass Communication & Society, 19(1), 7–23.
Cao, L., Mohan, K., Ramesh, B., & Sarkar, S. (2013). Adapting funding processes for agile IT projects: An empirical investigation. European Journal of Information Systems, 22(2), 191–205.
Charette, R. N. (2005). Why software fails. IEEE Spectrum, 42(9), 42–49.
Coelho, E., & Basu, A. (2012). Effort estimation in agile software development using story points. International Journal of Applied Information Systems, 3(7), 7–10.
Cohn, M. (2006). Agile estimating and planning. Upper Saddle River, NJ: Pearson Education.
Connolly, T., & Dean, D. (1997). Decomposed versus holistic estimates of effort required for software writing tasks. Management Science, 43(7), 1029–1045.
Cosijn, E., & Ingwersen, P. (2000). Dimensions of relevance. Information Processing & Management, 36(4), 533–550.
De Langhe, B., & Puntoni, S. (2016). Productivity metrics and consumers' misunderstanding of time savings. Journal of Marketing Research, 53(3), 396–406.
Dingsøyr, T., Nerur, S., Balijepally, V., & Moe, N. B. (2012). A decade of agile methodologies: Towards explaining agile software development. Journal of Systems and Software, 85(6), 1213–1221.
Drury, M., Conboy, K., & Power, K. (2012). Obstacles to decision making in agile software development teams. Journal of Systems and Software, 85(6), 1239–1254.
Dybå, T., & Dingsøyr, T. (2008). Empirical studies of agile software development: A systematic review. Information and Software Technology, 50(9–10), 833–859.
Fairley, R. E., & Willshire, M. J. (2003). Why the Vasa sank: 10 problems and some antidotes for software projects. IEEE Software, 20(2), 18–25.
Feris, M., Zwikael, O., & Gregor, S. (2017). QPLAN: Decision support for evaluating planning quality in software development projects. Decision Support Systems, 96, 92–102.
Fuller, R., Gormley, M., Stradling, S., Broughton, P., Kinnear, N., O'Dolan, C., et al. (2009). Impact of speed change on estimated journey time: Failure of drivers to appreciate relevance of initial speed. Accident Analysis & Prevention, 41(1), 10–14.
Galorath, D. D., & Evans, M. W. (2006). Software sizing, estimation, and risk management. Boca Raton, FL: Auerbach Publications.
Gentner, D., & Medina, J. (1998). Similarity and the development of rules. Cognition, 65(2–3), 263–297.
Grice, H. P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Halkjelsvik, T., & Jørgensen, M. (2012). From origami to software development: A review of studies on judgment-based predictions of performance time. Psychological Bulletin, 138(2), 238–271.
Heemstra, F. J., & Kusters, R. J. (1991). Function point analysis: Evaluation of a software cost estimation model. European Journal of Information Systems, 1(4), 229–237.
Hill, J., Thomas, L. C., & Allen, D. E. (2000). Experts' estimates of task durations in software development projects. International Journal of Project Management, 18(1), 13–21.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). Cambridge, UK: Cambridge University Press.
Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39(1), 17–31.
Kahneman, D., & Tversky, A. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313–327.
Keil, M., Im, G. P., & Mähring, M. (2007). Reporting bad news on software projects: The effects of culturally constituted views of face-saving. Information Systems Journal, 17(1), 59–87.
Keil, M., Tan, B. C. Y., Wei, K. K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2), 299–325.
Khatri, V., Vessey, I., Ramesh, V., Clay, P., & Park, S.-J. (2006). Understanding conceptual schemas: Exploring the role of application and IS domain knowledge. Information Systems Research, 17(1), 81–99.
Kühberger, A. (1998). The influence of framing on risky decisions: A meta-analysis. Organizational Behavior and Human Decision Processes, 75(1), 23–55.
Laird, L. M., & Brennan, M. C. (2006). Software measurement and estimation: A practical approach. Hoboken, NJ: John Wiley & Sons.
Larrick, R. P., & Soll, J. B. (2008). The MPG illusion. Science, 320(5883), 1593–1594.
Levin, I. P., & Gaeth, G. J. (1988). How consumers are affected by the framing of attribute information before and after consuming the product. Journal of Consumer Research, 15(3), 374–378.
Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76(2), 149–188.
Levin, I., & Zakay, D. (1989). Introduction. In I. Levin & D. Zakay (Eds.), Time and human cognition: A life-span perspective (pp. 1–7). Amsterdam, The Netherlands: Elsevier Science Publishers.
Lopez-Martin, C., & Abran, A. (2015). Neural networks for predicting the duration of new software projects. Journal of Systems and Software, 101, 127–135.
Morgenshtern, O., Raz, T., & Dvir, D. (2007). Factors affecting duration and effort estimation errors in software development projects. Information and Software Technology, 49(8), 827–837.
Nan, N., & Harter, D. E. (2009). Impact of budget and schedule pressure on software development cycle time and effort. IEEE Transactions on Software Engineering, 35(5), 624–637.
Nelson, R. R. (2007). IT project management: Infamous failures, classic mistakes, and best practices. MIS Quarterly Executive, 6(2), 67–78.
Nerur, S., Mahapatra, R., & Mangalaraj, G. (2005). Challenges of migrating to agile methodologies. Communications of the ACM, 48(5), 72–78.
Peer, E. (2010). Exploring the time-saving bias: How drivers misestimate time saved when increasing speed. Judgment and Decision Making, 5(7), 477–488.
Peer, E., & Gamliel, E. (2013). Pace yourself: Improving time-saving judgments when increasing activity speed. Judgment and Decision Making, 8(2), 106–115.
Peer, E., & Solomon, L. (2012). Professionally biased: Misestimations of driving speed, journey time and time-savings among taxi and car drivers. Judgment and Decision Making, 7(2), 165–172.
Perfetto, G. A., Bransford, J. D., & Franks, J. J. (1983). Constraints on access in a problem solving context. Memory & Cognition, 11(1), 24–31.
Petersen, K., & Wohlin, C. (2010). The effect of moving from a plan-driven to an incremental software development approach with agile practices. Empirical Software Engineering, 15(6), 654–693.
Putnam, L. H. (1978). A general empirical solution to the macro software sizing and estimating problem. IEEE Transactions on Software Engineering, 4(4), 345–360.
Saracevic, T. (1975). Relevance: A review of and a framework for the thinking on the notion in information science. Journal of the American Society for Information Science, 26(6), 321–343.
Serrador, P., & Pinto, J. K. (2015). Does agile work?—A quantitative analysis of agile project success. International Journal of Project Management, 33(5), 1040–1051.
Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction framework. Psychological Bulletin, 134(2), 207–222.
Shmueli, O., Pliskin, N., & Fink, L. (2015). Explaining over-requirement in software development projects: An experimental investigation of behavioral effects. International Journal of Project Management, 33(2), 380–394.
Shmueli, O., Pliskin, N., & Fink, L. (2016). Can the outside-view approach improve planning decisions in software development projects? Information Systems Journal, 26(4), 395–418.
Stacy, W., & MacMillan, J. (1995). Cognitive bias in software engineering. Communications of the ACM, 38(6), 57–63.
Stingl, V., & Geraldi, J. (2017). Errors, lies and misunderstandings: Systematic review on behavioural decision making in projects. International Journal of Project Management, 35(2), 121–135.
Svenson, O. (1970). A functional measurement approach to intuitive estimation as exemplified by estimated time savings. Journal of Experimental Psychology, 86(2), 204–210.
Svenson, O. (2008). Decisions among time saving options: When intuition is strong and wrong. Acta Psychologica, 127(2), 501–509.
Svenson, O. (2011). Biased decisions concerning productivity increase options. Journal of Economic Psychology, 32(3), 440–445.
Svenson, O., Gonzalez, N., & Eriksson, G. (2014). Modeling and debiasing resource saving judgments. Judgment and Decision Making, 9(5), 465–478.
Thompson, L., Gentner, D., & Loewenstein, J. (2000). Avoiding missed opportunities in managerial life: Analogical training more powerful than individual case training. Organizational Behavior and Human Decision Processes, 82(1), 60–75.
Trendowicz, A., & Jeffery, R. (2014). Software project effort estimation: Foundations and best practice guidelines for success. Cham, Switzerland: Springer International Publishing.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.
Umapathy, K., Purao, S., & Barton, R. R. (2008). Designing enterprise integration solutions: Effectively. European Journal of Information Systems, 17(5), 518–527.
Weisberg, R., DiCamillo, M., & Phillips, D. (1978). Transferring old associations to new situations: A nonautomatic process. Journal of Verbal Learning and Verbal Behavior, 17(2), 219–228.
Xu, Y. (2007). Relevance judgment in epistemic and hedonic information searches. Journal of the American Society for Information Science and Technology, 58(2), 179–189.
Xu, Y., & Chen, Z. (2006). Relevance judgment: What do information users consider beyond topicality? Journal of the American Society for Information Science and Technology, 57(7), 961–973.