Response times in economics: Looking through the lens of sequential sampling models

Journal of Economic Psychology 69 (2018) 61–86

Contents lists available at ScienceDirect

Journal of Economic Psychology journal homepage: www.elsevier.com/locate/joep

Review

Response times in economics: Looking through the lens of sequential sampling models☆


John A. Clithero Lundquist College of Business, University of Oregon, Eugene, OR 97403, USA

ARTICLE INFO

ABSTRACT

JEL classification: C9, D03, D87

Economics is increasingly using process data to make novel inferences about preferences and predictions of choices. The measurement of response time (RT), the amount of time it takes to make a decision, offers a cost-effective and direct way to study the choice process. Yet, relatively little theory exists to guide the integration of RT into economic analysis. This article presents a canonical process model from psychology and neuroscience, the Drift-Diffusion Model (DDM), and shows that many RT phenomena in the economics literature are consistent with the predictions of the DDM. Additionally, use of the class of sequential sampling models facilitates a more principled consideration of findings from cognitive science and neuroeconomics. Application of the DDM demonstrates the rich inference made possible when using models that can jointly model choice and process, highlighting the need for more work in this area.

Keywords: Drift-diffusion model, Experiments, Process, Response times

1. Introduction

Behind every choice is a process that led to that choice. Using a wide array of methodologies, a growing number of economists are using process data to better understand how choices are made (Camerer, 2013; Caplin & Dean, 2015; Fehr & Rangel, 2011). One of the simplest measures of process is response time (RT), the amount of time it takes a decision-maker to make a decision. RT is typically measured as the time elapsed between when the options are presented and when the choice is stated. As more experiments are run on computers and more day-to-day transactions are recorded electronically, the collection of RT data is costless, and economists are increasingly interested in its applications.1 Unlike most other measures of choice process, RT has the benefit of being unobtrusive to collect. RT also reflects the usage of a scarce resource: time. All of these considerations provide broad justification for the study of RT in an economic framework.

Although there were early indications that RT data might be useful for economists (Mosteller & Nogee, 1951), the expansive history of experimental economics contains comparatively few mentions of RT.2 One explanation for this dearth of studies has been the lack of a straightforward way to link RT to choice data. Yet, models linking these data are abundant in the cognitive sciences.

☆ JAC acknowledges Pomona College for funding, and Maya Kaul for excellent research assistance. All errors are the sole responsibility of the author. E-mail address: [email protected].
1 There is an excellent review article by Spiliopoulos and Ortmann (2018) that emphasizes strategic decision-making and choices made under time pressure.
2 Specifically, the classic paper by Mosteller and Nogee (1951) observed the importance of RT: "It is of interest, however, to know that these decision times have some relation to the notion of utility" (pages 395–396).

https://doi.org/10.1016/j.joep.2018.09.008 Received 30 October 2016; Received in revised form 3 July 2018; Accepted 29 September 2018 Available online 05 October 2018 0167-4870/ © 2018 Elsevier B.V. All rights reserved.


Psychology in particular has devoted tremendous resources to measuring and understanding the cognitive processes behind RT.3 One of the centerpieces of this literature is a class of models, frequently referred to as sequential sampling models, that can jointly account for both distributions of choice probabilities and RT in a wide variety of simple perceptual tasks (Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006; Townsend & Ashby, 1983).4 One of the most popular sequential sampling models is the Drift-Diffusion Model (DDM) (Ratcliff, 1978; Ratcliff & McKoon, 2008). More detail on the DDM is presented in Section 2. The core goal of this paper is to present a formal method for considering RT data in economics. I propose looking through the lens of a structural choice model, the DDM. As a model of both choice and RT, the DDM has many testable implications. A consideration of common empirical findings regarding RT data in the existing economics literature shows that many of them are consistent with predictions of sequential sampling models like the DDM. The majority of the literature under examination in this paper involves individual decisions in non-strategic environments.5 This spotlight is beneficial for at least three reasons. First, the RT literature in experimental economics is already rich enough to merit a careful discussion. Even in tightly controlled choice environments, there are usually multiple contributors to variability in RT, and models like the DDM that can jointly account for the entire distribution of choice and RT can leverage that variability to draw novel inferences. Second, it allows the discussion to be framed by predictions of the DDM. As a corollary, using the DDM as a guide provides a principled way to incorporate relevant neuroeconomics literature into the discussion.6 Some segments of the experimental literature are difficult to reconcile with the traditional DDM framework (and other sequential sampling models), though. 
Thus, a third goal is to provide a list of open questions that merit further empirical and theoretical investigation. Although the RT literature in economics is expanding rapidly, several domains remain unexplored and offer exciting research opportunities for economics.

The paper is organized as follows. Section 2 presents a structural model of choice, the DDM, and summarizes existing evidence supporting the DDM as a biologically plausible model for the choice process in many domains. Section 3 surveys the existing experimental literature that employs RT, providing a taxonomy of results constructed through the lens of sequential sampling models. Section 4 proposes directions for future empirical work and Section 5 concludes.

2. The Drift-Diffusion Model

2.1. The model

The DDM was developed to jointly account for both the choice and RT in binary choice tasks (Ratcliff, 1978; Ratcliff & McKoon, 2008; Ratcliff & Smith, 2004). Early applications involved perceptual decision-making tasks, such as determining whether a collection of dots is moving to the left or right (Britten, Shadlen, Newsome, & Anthony Movshon, 1992).7 The model makes assumptions about the underlying cognitive process guiding decision-making, employing parameters with intuitive and direct links to specific psychological concepts. As a reflection of its strength, the DDM has been successfully applied to a wide range of tasks, including perceptual decisions, linguistics, and memory (a few examples are Heekeren, Marrett, & Ungerleider (2008), Palmer, Huk, & Shadlen (2005), and Ratcliff, Thapar, & McKoon (2004)).

The DDM assumes that a decision-maker, when presented with a choice, samples information continuously from the available options. The information sampled is assumed to be noisy. As information accumulates, evidence accrues in favor of one of the two options.
The sampling process continues until a pre-determined threshold (sometimes called a barrier), or an amount of relative evidence in favor of one option or the other, is reached. At this instant, the sampling process is terminated and a choice is made for the favored option. Clearly, the decision-maker would like to accumulate as much evidence as possible to increase the probability of making the correct choice, but sampling is costly in terms of time. In other words, increasing accuracy requires increasing RT. The DDM is able to capture this speed-accuracy tradeoff, and the model implements the optimal solution to noisy accumulation of evidence.8 As the DDM belongs to a more general class of sequential sampling models (Bogacz et al., 2006; Ratcliff & Smith, 2004), focusing on it here does not limit the application of the discussion, as many inferences for the DDM are valid for other sequential sampling models.

3 There are entire books devoted to core results and possible models explaining RT, including Laming (1968), Townsend and Ashby (1983), Luce (1986), and Jensen (2006).
4 The marketing literature has also explored jointly modeling choice and RT. For an example, see Otter, Allenby, and van Zandt (2008).
5 Although this paper does include some examples of RT in strategic environments, a more detailed treatment can be found in Spiliopoulos and Ortmann (2018).
6 It is worth emphasizing that it would be impossible to comprehensively survey the relevant literature in psychology, neuroscience, and neuroeconomics. Numerous reviews are noted in several places for the interested reader.
7 This is a widely-used paradigm in both psychology and neuroscience, and is often referred to as a random-dot motion (RDM) task (Gold & Shadlen, 2007).
8 More precisely, the DDM minimizes the RT needed to obtain a desired level of choice accuracy. An elegant discussion of this property of the DDM can be found in Gold and Shadlen (2002), with additional formal details in Bogacz et al. (2006). This optimality of the DDM, however, only holds under several important assumptions, including zero or constant costs associated with the evidence accumulation process, and knowledge of the task difficulty. Other papers and models explore optimal decision-making under various assumptions and identify optimal stopping rules (Fudenberg et al., 2015; Rapoport & Burkheimer, 1971; Tajima, Drugowitsch, & Pouget, 2016; Woodford, 2016). Additional discussion of alternative models that account for the dynamics of the choice process is summarized in the Appendix.



The DDM can be applied to preferential choice as follows. The decision-maker is asked to make a choice between two options, x and y. Assuming the decision-maker has a value function v(·), the options will have true underlying values of v(x) = v_x and v(y) = v_y. The decision-maker does not know these true underlying values, but instead samples from distributions that have means v_x and v_y. Further, assume that the samples for x and y are drawn from normal and independent distributions. The decision-maker computes a relative value signal V(t) that tracks the relative evidence in favor of x or y. The relative value signal V(t) is assumed to evolve in continuous time dt according to the following:

dV(t) = μ dt + σ dW(t).   (1)

In this stochastic differential equation, μ = k(v_x − v_y) is the drift rate with k > 0, and W(t) is a Wiener (Brownian motion) process with increments dW(t) of standard deviation σ.9 The process continues until a threshold is crossed: x is chosen if the upper threshold at a > 0 is crossed first, and y is chosen if the lower threshold at 0 is crossed first. Finally, V(t) must have a starting point at time t = 0, which is assumed to be fixed and equidistant from both barriers in the simplest specification of a DDM: an "unbiased" V(t) will start at V(0) = a/2. Consequently, the term bias is often used to indicate whether or not the decision-maker has an a priori probability of choosing x or y that is greater than 50%. A richer specification of the DDM, in which z ∈ [0, a] becomes a free parameter (Ratcliff, 1985; Ratcliff & Rouder, 1998), can capture this phenomenon.10 A schematic for the DDM is provided in Fig. 1.11

In addition to the comparison process, the model also assumes a fraction of the individual's RT corresponds to non-decision related processing. This non-decision time, T, accounts for basic perceptual processing, such as recognizing that a choice has been presented. The parameter T also includes any time to initiate a response, such as pressing a button or touching a screen. Importantly, T is assumed to vary across individuals, but not with the attributes of the options shown on each trial. By definition, the DDM views response time as the sum of decision time and non-decision time. This distinction is sometimes not made in the economics literature. Assuming the amount of non-decision time is fixed within subject means that all variability in RT is a consequence of variability in decision time. However, the assumption of constant visual processing and motor reaction time can be relaxed. Another way to enrich the DDM is to assume T varies randomly across trials, with this across-trial variability typically assumed to have a uniform distribution (Ratcliff & Tuerlinckx, 2002).
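The accumulation process in Eq. (1) is straightforward to simulate. The following is a minimal sketch (my own illustration, not code from the paper) that discretizes the diffusion with an Euler-Maruyama step; the function name and all parameter values are illustrative assumptions:

```python
import math
import random

def simulate_ddm(mu, sigma=1.0, a=2.0, z=None, T=0.3, dt=0.001, rng=random):
    """Simulate one DDM trial by discretizing dV = mu*dt + sigma*dW.

    Returns ('x', rt) if the upper threshold a is crossed first, or
    ('y', rt) if the lower threshold 0 is crossed first; rt includes
    the non-decision time T.
    """
    v = a / 2.0 if z is None else z      # unbiased start: V(0) = a/2
    t = 0.0
    step_sd = sigma * math.sqrt(dt)      # s.d. of each Wiener increment
    while 0.0 < v < a:
        v += mu * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ('x' if v >= a else 'y'), T + t
```

With μ = k(v_x − v_y) > 0, repeated calls choose x on a majority of trials, at a rate approaching the closed-form choice probability in Eq. (2) below as dt shrinks.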

Fig. 1. The Drift-Diffusion Model (DDM). (a) A schematic for the DDM and its key parameters. The model accounts for the choice between options x and y and the amount of time (RT) it takes to make that decision. (b) The predicted choice curve for binary choices. The probability P(x) of selecting x is a logistic function of the difference in value between x and y, v_x − v_y. (c) The expected RT curve for binary choices. The inverse-U shape reflects longer RT for decisions involving closer values.

An elegant implication of the DDM is its choice function. If z = a/2, then the probability of choosing x, or first passing the threshold at a, is the familiar logistic choice function:

P(x) = 1 / (1 + e^(−aμ/σ²)).   (2)

9 For estimation purposes it is typical to fix σ (e.g., σ = 1), as all decision-related parameters in the DDM are identified up to a positive ratio (Ratcliff, Van Zandt, & McKoon, 1999). Note that dW(t) ∼ N(0, σ²) implies that the noise in the comparison process is independent of the options being compared. The normality assumption for the difference between the draws for v_x and v_y is also standard. To ensure the role of all parameters in the model is clear, the notation for the remainder of the paper will include σ.
10 A common practice is to reparameterize z to z/a, making z ∈ [0, 1], as in Turner, van Maanen, and Forstmann (2015). This way, unbiased corresponds to z = 0.5 and biased corresponds to z ≠ 0.5.
11 It can help intuition to alternatively consider the discrete time version of Eq. (1): V(t) = V(t − 1) + μ + ε(t), where ε(t) ∼ N(0, σ²). This is a random walk model (see Luce, 1986 for an in-depth discussion and history). Please note all derivations included here will assume the continuous time version of V(t).

The probability of choosing x is increasing in the drift rate μ and increasing in the threshold a if μ > 0. The relationship between P(x) and a is intuitive: the higher the decision threshold, the lower the probability that noise will lead to the lower-valued option being selected. The DDM parameters also provide interpretable and concrete RT predictions. If z = a/2, expected RT has the following form:12

E[RT] = T + (a/(2μ)) tanh(aμ/(2σ²)).   (3)

The formalization of the speed-accuracy tradeoff in the DDM is clear. Without loss of generality, suppose x is the "correct" response, with a more highly-valued option: v_x > v_y. The probability of choosing x is increasing in a, but so is expected RT. Hence, the decision-maker must sacrifice accuracy to reduce RT, or sacrifice time to increase accuracy. A second implication from Eq. (3) is that expected RT increases as μ approaches 0, or as option discriminability decreases. Additional predictions can be made with a richer specification of the DDM, when z is not assumed to be half the distance between 0 and a. In this case, the probability of the decision-maker choosing x is:

P(x) = (e^(−2zμ/σ²) − 1) / (e^(−2aμ/σ²) − 1),   (4)
which is increasing in z.13 From Eq. (4) it is apparent that three different parameters affect the probability of choosing x. Each of these parameters has a distinct cognitive interpretation, potentially pertaining to different components of the choice process. For example, an increase in P(x) could be the result of an increase in μ or an increase in z. Given a change in P(x), how could a change in μ be distinguished from a change in z? This would not be possible with only choice data. RT data provide a clean test, as shown in Fig. 2.14 The separation does not come readily from considering only expected RT, which decreases in both cases, but rather comes in considering the full distribution of RT. To illustrate, consider two possible changes in DDM parameters that will lead to changes in the probability of ultimately choosing x. First, there could be a change in expectation, where there is a change in the probability of choosing x before sampling begins (e.g., there is no longer an a priori belief that x and y are equally likely to be preferred). This is captured by a change in z. Second, there could be a change in evaluation, where there are changes in the comparison process between x and y (e.g., the weighting assigned to various attributes of x and y changes). This is captured by a change in μ. An increase in z, or a change in expectation, will increase the positive skew of the RT distribution (cyan), with a greater number of fast RT choices. A change in evaluation, captured by an increase in μ, does not generate a significant change in the skewness of the RT distribution (purple). In this way, via joint consideration of choice and RT, the DDM provides testable explanations for observed changes in choice probabilities.15

Two other important extensions of the DDM are also frequently employed. Both of them involve additional sources of across-trial variability, or stochasticity in model parameters. First, consider the drift rate μ.
Intuitively, it is plausible that x and y are not processed identically every time they are presented as options. This kind of noise in the decision-making process is distinct from the within-trial noise introduced earlier. A common assumption is to assume the drift rate varies from trial to trial, following a normal distribution with mean μ and standard deviation σ_μ. For most economics experiments, this kind of variability is natural: across an experiment, there is almost always going to be variability in v_x and v_y (e.g., two monetary gambles or two food items) and thus variability in μ. Second, variability in the starting point z is often assumed. Intuitively, variability in trial history, trial structure, or participant beliefs could lead to changes in z. The common assumption is that z varies uniformly.16 Note that this is mathematically equivalent to adding variability to the thresholds. One of the motivating factors for these extensions to the DDM was to accommodate the common finding that correct and incorrect responses in perceptual tasks frequently have different RT distributions (Ratcliff & McKoon, 2008; Ratcliff & Rouder, 1998; Ratcliff, Smith, Brown, & McKoon, 2016). Section 3 will show, however, that these extensions have several applications for economic choices, too.

Although there are only a few papers in experimental economics that explicitly use sequential sampling models like the DDM, many studies include RT data that are consistent with at least one of the predictions from the DDM. In addition to the dissociable effects that can change P(x) (summarized in Fig. 2), there are also distinct reasons to predict longer or shorter RT, given changes in specific DDM parameters. Section 3 presents these robust RT phenomena in the experimental economics literature, mapping them to one or more of the DDM parameters presented here.
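These across-trial extensions are simple to add to a simulated DDM. A hedged sketch (all names and parameter values are my own illustrative assumptions): drift is drawn from a normal distribution with standard deviation sigma_mu, and the starting point is drawn uniformly from a window of width s_z centered on a/2.

```python
import math
import random

def diffuse(mu, sigma, a, z, T, dt, rng):
    """One discretized pass of Eq. (1) between thresholds 0 and a."""
    v, t = z, 0.0
    step_sd = sigma * math.sqrt(dt)
    while 0.0 < v < a:
        v += mu * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ('x' if v >= a else 'y'), T + t

def simulate_extended_ddm(n_trials, mu_mean, sigma_mu, s_z, a=2.0,
                          sigma=1.0, T=0.3, dt=0.001, seed=0):
    """DDM with across-trial variability in drift (normal) and
    starting point (uniform), as in the extensions described above."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_trials):
        mu = rng.gauss(mu_mean, sigma_mu)                  # trial-specific drift
        z = a / 2.0 + rng.uniform(-s_z / 2.0, s_z / 2.0)   # trial-specific start
        out.append(diffuse(mu, sigma, a, z, T, dt, rng))
    return out
```

With positive mean drift, x is still chosen on most trials, but the trial-to-trial parameter variability spreads the RT distribution, which is the mechanism these extensions use to capture differing correct and error RT distributions.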

12 The hyperbolic tangent function used here is tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)). In the case of μ = 0, the result for Eq. (3) is lim(μ→0) E[RT] = T + a²/(4σ²).
13 Full derivations are available in Bogacz et al. (2006), Palmer et al. (2005), or Smith (2000).
14 The choice and RT plots in Figs. 2–4 are from simulated data (Wiecki, Sofer, & Frank, 2013), assuming a DDM process.
15 Equations analogous to Eq. (3) when z ≠ a/2 are provided in the Appendix.
16 Ratcliff (2013) explores the effects of different distributional assumptions for variability in μ and z.
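The closed-form expressions in Eqs. (2)–(4) can be cross-checked numerically. A small sketch (function names are mine, not the paper's; σ is fixed to 1 by default, as is standard for estimation):

```python
import math

def p_choose_x(mu, a, z, sigma=1.0):
    """Eq. (4): probability of reaching the upper threshold a before 0,
    starting from z; reduces to z/a in the drift-free limit."""
    if abs(mu) < 1e-12:
        return z / a
    c = 2.0 * mu / sigma ** 2
    return (math.exp(-c * z) - 1.0) / (math.exp(-c * a) - 1.0)

def expected_rt(mu, a, T, sigma=1.0):
    """Eq. (3): expected RT for the unbiased starting point z = a/2;
    the mu -> 0 limit is T + a**2 / (4 * sigma**2), as in footnote 12."""
    if abs(mu) < 1e-12:
        return T + a ** 2 / (4.0 * sigma ** 2)
    return T + (a / (2.0 * mu)) * math.tanh(a * mu / (2.0 * sigma ** 2))
```

Setting z = a/2 in Eq. (4) recovers the logistic form of Eq. (2), and P(x) is increasing in z, matching the text.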


Fig. 2. The DDM has multiple latent parameters to account for changes in choice probabilities. The DDM framework provides different explanations for changes in choice probabilities that are identifiable with RT data. (a) From a given expected probability of choosing x over y, an increase in the expected choice probability could result from a change in expectation (change in z, cyan) or a change in evaluation (change in μ, purple). (b) The latent changes in the choice process would not be identifiable from choice data alone. (c) The resulting RT distributions would clearly delineate a change in z (cyan) from a change in μ (purple), as a change in expectation would increase the positive skew of the RT distribution. Choice and RT data plotted are simulated using expected choice probabilities of 55% and 75%. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
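The identification argument in Fig. 2 can be illustrated by simulation. The sketch below is my own illustration, not the article's code; parameter values are chosen (via Eqs. (2) and (4)) so that the baseline choice probability is roughly 55% and each manipulation raises it to roughly 75%:

```python
import math
import random

def ddm_sample(mu, z, a=2.0, sigma=1.0, dt=0.001, rng=random):
    """Draw one (choice, decision-time) pair from a simulated DDM."""
    v, t = z, 0.0
    step_sd = sigma * math.sqrt(dt)
    while 0.0 < v < a:
        v += mu * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ('x' if v >= a else 'y'), t

rng = random.Random(2)
a = 2.0
mu0 = math.log(0.55 / 0.45) / a   # baseline: P(x) ~ 0.55 via Eq. (2)
mu1 = math.log(0.75 / 0.25) / a   # evaluation change: P(x) ~ 0.75 via Eq. (2)
z1 = 1.419                        # expectation change: P(x) ~ 0.75 via Eq. (4)

base = [ddm_sample(mu0, z=a / 2.0, rng=rng) for _ in range(800)]
eval_change = [ddm_sample(mu1, z=a / 2.0, rng=rng) for _ in range(800)]
expect_change = [ddm_sample(mu0, z=z1, rng=rng) for _ in range(800)]

def frac_x(sample):
    return sum(1 for c, _ in sample if c == 'x') / len(sample)

def mean_dt(sample):
    return sum(t for _, t in sample) / len(sample)
```

Both manipulations raise the fraction of x choices and lower mean decision time; the point of the figure is that only the full RT distribution (the skew difference in panel c) separates the two explanations.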

Now that the DDM has been outlined, it is worth taking a moment to briefly discuss its place in the larger sequential sampling model literature. While a detailed discussion and exploration of those models is beyond the scope of this paper, and has been carried out in detail elsewhere (Bogacz et al., 2006; Teodorescu & Usher, 2013), a few points are worth noting. Very broadly, sequential sampling models can be grouped into "relative evidence" accumulators and "absolute evidence" accumulators (Ratcliff et al., 2016). The DDM accumulates relative evidence (as captured by μ), modeling a single accumulation process. Other models assume a separate accumulation process for each possible option (e.g., Usher & McClelland (2001) and Brown & Heathcote (2008)). Another possible model feature is to assume there is "decay" or "leakage" in the evidence accumulation process (e.g., Busemeyer & Townsend (1993)). Finally, it is possible to consider models where options "compete" with each other and can "inhibit" evidence accumulation in favor of other options (e.g., Usher & McClelland (2001)). In addition to the extensive literature in cognitive and mathematical psychology, theoretical work in economics is also beginning to explore the predictions and implications of different process models, as discussed in the Appendix.

2.2. Neural evidence

A considerable amount of effort has been put forth within neuroscience to identify the underlying computations and neurobiological mechanisms that contribute to the decision-making process. While the goal here is not to review the neuroscience literature, a brief overview of some central findings will prove useful. Most importantly, there is strong evidence that, when presented with a decision, the brain accumulates evidence over time in a stochastic fashion, terminating the decision-making process when a threshold is reached.
This provides plausibility to the latent parameters that are being estimated by the DDM with choice and RT data, making the class of accumulate-to-threshold models an appropriate foundation for formalizing the connection between RT and choice in an economic framework. Although the evidence for accumulate-to-threshold models originated from the study of perceptual decisions, there is now a growing number of neuroeconomic studies that also provide evidence for the dynamic accumulation of evidence towards an available option.17 Neuroeconomic studies that put forth neural evidence for computations like those predicted by the DDM have employed a variety of incentivized choice tasks to demonstrate the accumulation of preference-related information, including mixed-outcome rewards (Basten, Biele, Heekeren, & Fiebach, 2010), different juices (Hare, Schultz, Camerer, O'Doherty, & Rangel, 2011), learned probabilistic rewards (Cavanagh et al., 2011), different stock purchases (Gluth, Rieskamp, & Büchel, 2012), monetary gambles (Hunt et al., 2012), snack foods (Pisauro, Fouragnan, Retzler, & Philiastides, 2017; Polanía et al., 2014), foraging options (Shenhav, Straccia, Cohen, & Botvinick, 2014), and a modified dictator game (Hutcherson, Bushong, & Rangel, 2015). There is also evidence for decision-making context affecting the decision threshold, including the incentive structure of the trial (Gluth et al., 2012; Green, Biele, & Heekeren, 2012) or varying emphasis on the speed-accuracy tradeoff (Bogacz, Wagenmakers, Forstmann, & Nieuwenhuis, 2010; Domenech & Dreher, 2010; Forstmann et al., 2008). In other words, the neuroscience literature, via a diverse set of experimental paradigms, provides robust empirical evidence for both the accumulation of evidence (drift rate) and a stopping rule (threshold), two key ingredients in the DDM.

Another reason to highlight some neuroeconomic research involving the DDM is because these studies typically include RT analyses. Several RT phenomena identified in neuroeconomic studies are likely to be present in experimental economics datasets, but have thus far been overlooked. As an example, the effect of a choice set's overall value on RT, as discussed in Section 3.7, is currently being explored in the neuroeconomic literature. Consideration of the literature also provides possible mechanistic explanations for this effect.18

Understanding the precise neural implementation of accumulate-to-threshold processes is an active area of research (Forstmann, Ratcliff, & Wagenmakers, 2016; Shadlen & Kiani, 2013).

17 Economic or preference-based decisions are often called "value-based" decisions in neuroeconomics. Many in-depth reviews already exist, both for perceptual (Gold & Shadlen, 2007; Heekeren et al., 2008) and value-based decisions (Fehr & Rangel, 2011; Glimcher, 2011; Rangel & Clithero, 2013). For a discussion on possible links between the literatures, see Summerfield and Tsetsos (2012). A recent test comparing neural data from perceptual and value-based choices can be found in Polanía, Krajbich, Grueschow, and Ruff (2014).
Some key unanswered questions include the amount of overlap between perceptual and economic decision-making mechanisms in the brain (Summerfield & Tsetsos, 2012), the absolute or relative nature of the accumulation (Bollimunta & Ditterich, 2012; Churchland, Kiani, & Shadlen, 2008), and how accumulate-to-threshold processes are implemented for larger choice sets (Churchland & Ditterich, 2012). As these aspects of the decision-making process are studied and the underlying neural computations become understood, the neuroeconomic models used to describe and predict choices can also advance. RT data provide a convenient level of analysis between choice and more complex process data, such as eye-tracking, fMRI, or neurophysiology. This conceptual layer may provide an effective bridge between neuroeconomic work and more traditional applied work in economics. An objective of this paper is to contribute to the construction of that link.

3. Observations

This section presents a series of observations that are consistently found in RT data in economics and other incentivized experiments.19 This list is not meant to be exhaustive, but it is detailed enough to demonstrate the complexity of RT, and to show the multitude of computations occurring within what is usually a relatively short amount of time. Impressively, the short list of phenomena is also sufficient to cover the overwhelming majority of RT papers in experimental economics. Tables that present all of the relevant studies for each observation can be found in the Appendix. Many of the phenomena are also predicted by sequential sampling models, like the DDM, and there are clear connections with specific DDM parameters.

3.1. Sequential effects and RT

In many experiments, subjects are asked to answer multiple questions of a similar nature. A simple example would be a food choice or gamble choice experiment, where subjects make choices between different pairs of food items or different pairs of monetary gambles.
While the food items or gambles vary from trial to trial, the structure of the choice environment is constant. A natural question is whether and how the choice process changes over the course of the experiment. In other words, the process for choosing between x and y may or may not be independent of the history of options and responses, even if the question posed and the outcomes available are independent. The dependence of the choice process on option and response history, above and beyond the immediate choice options and overall experimental context, is often observable in both choice and RT as sequential effects (Luce, 1986).20 Many sequential effects in the decision-making process could potentially be captured by RT. For example, if subjects were to become familiar with the task, one would expect, all else equal, for RT to become faster. Several economic experiments explicitly demonstrate this effect (e.g., Anderhub, Güth, Müller, & Strobel (2000), Kocher & Sutter (2006), Fischbacher, Hertwig, & Bruhin (2013), and Achtziger & Alós-Ferrer (2014)). Similarly, if there is a "correct" way to solve a problem, involving several steps (e.g., backwards induction), and that problem is presented several times, subjects who determine the correct procedure are likely to complete the task in less time (Gneezy, Rustichini, & Vostroknutov, 2010). A broad explanation for these changes in average RT is "learning" or "experience." However, the precise cognitive explanation for these RT dynamics in economic settings is likely to be highly context sensitive. A detailed account would benefit from a link with the existing literature in the cognitive sciences. The consideration of sequential effects centers around two closely-related empirical findings in perceptual decision-making:

18 For example, see the supplementary materials of Hunt et al. (2012).
19 Additional evidence from psychology and neuroeconomic experiments are included at times, to provide further corroborating evidence. It is important to note that neuroeconomic experiments are incentivized, typically using payment procedures similar to those in experimental economics papers. A related but different list of "stylized facts" for RT is in Appendix A of a working paper version of Spiliopoulos and Ortmann (2018).
20 Some parallels could be drawn to what are normally referred to as "order effects" in the experimental economics literature.


automatic facilitation and strategic expectation (Luce, 1986). Many perceptual tasks that are used to study these effects have a similar structure. One common paradigm is a task in which individuals are shown a single stimulus on each trial, either a circle (C) or a square (S). Participants are then asked to respond whether the image is a circle or a square (Frydman & Nave, 2017). The sequence of C or S is often randomized, although the percentages (e.g., 50/50 or 60/40 distributions of C and S) can be manipulated. As the individual moves through the large number of trials, trial history will generate sequences such as CCCS, meaning the current trial is a square and the three previous trials were circles. Automatic facilitation predicts that RT for a given trial decreases as the consistency of the pattern of previous trials increases (Kirby, 1976). This effect is present regardless of the content of the current trial. Following the example, the effect of automatic facilitation grows the longer the previous sequence of circles (e.g., CCCS would elicit a stronger effect than CCS). Importantly, the psychology and neuroscience literature typically associate automatic facilitation with sequential effects that arise from trials presented in rapid succession (intervals between trials of less than 800 ms). For longer intertrial intervals (ITIs), which are more common in economic and neuroeconomic experiments, sequential effects typically follow the pattern predicted by strategic expectation: RT decreases (increases) if the current trial's stimulus is consistent (inconsistent) with the stimulus from the previous trial (or several previous trials).21 So, following a trial history of CCC, strategic expectation should decrease RT if the current image is a circle, but increase RT if it is a square.
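One hedged illustration of how strategic expectation can be folded into the DDM is to shift the starting point z toward the threshold coding the expected stimulus. Everything below (parameter values, the size of the shift) is an assumption for illustration, not an estimate from the studies discussed:

```python
import math
import random

def ddm_trial(mu, z, a=2.0, sigma=1.0, T=0.3, dt=0.001, rng=random):
    """One simulated trial; the upper threshold codes 'circle' (C)."""
    v, t = z, 0.0
    step_sd = sigma * math.sqrt(dt)
    while 0.0 < v < a:
        v += mu * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ('C' if v >= a else 'S'), T + t

def mean_rt(sample):
    return sum(rt for _, rt in sample) / len(sample)

# Current stimulus is a circle (mu > 0). A run of previous circles is
# modeled as a starting point shifted toward the circle threshold
# (z = 0.7a instead of the neutral z = 0.5a).
rng = random.Random(3)
a, mu = 2.0, 1.0
neutral = [ddm_trial(mu, z=0.5 * a, rng=rng) for _ in range(400)]
consistent = [ddm_trial(mu, z=0.7 * a, rng=rng) for _ in range(400)]
```

Consistent trials are faster on average; reversing the drift sign (an inconsistent square trial) under the same shift slows responses, reproducing the strategic-expectation pattern.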
In either case, sequential effects present an important consideration when interpreting RT, and the direction of their influence on RT is at least partly attributable to experimental design.22 Two recent studies demonstrate that, although often considered an experimental phenomenon that must be controlled for as a potential confound, sequential effects can also be leveraged to delineate between different choice processes.

Agranov and Ortoleva (2017) construct experiments to study the source(s) of and motivation(s) for stochastic choice behavior. Agranov and Ortoleva (2017) present participants with ten different monetary gambles, asking them to make binary choices between two of them at a time. The study has two different treatments built around studying repeated choices (i.e., choices between the same two gambles): “distant” and “consecutive.” The “distant” repeats are spread out over the full set of trials, as is common in many economics experiments. The “consecutive” repeats are literally one trial after another, and participants are told the repeats will occur. Part of the analysis in Agranov and Ortoleva (2017) focuses on RT and tests the plausibility of the DDM as an explanation for the observed choice behavior. The decrease in RT for repeated choices is far greater for consecutive choices, perhaps indicating either a significant shift in the starting point of the comparison process (parameter z, as outlined in Fig. 2) or the implementation of a different choice algorithm that does not accumulate additional information. Within the DDM framework, it is plausible for a task to motivate subjects to adjust z such that it is optimal to not sample any more on the new choice (Simen et al., 2009). Agranov and Ortoleva (2017) demonstrate that some individuals deliberately change their answers on consecutive choices. A large change in z away from the previous response could accommodate this behavior.
It is also plausible that familiarity with the task and/or specific choice set reduces non-decision time T. Regardless of the interpretation of the change in choice behavior, the observed effect on RT is clear: consecutive repeats of a choice set significantly reduce RT.

A second recent paper leverages sequential effects to identify subjective expectations for individuals in choice environments. Using a simple binary perceptual discrimination task involving sequences of images (circles or squares, as described earlier), Frydman and Nave (2017) employ the DDM to infer subjective beliefs about upcoming perceptual stimuli. The RT data from the perceptual task are consistent with strategic expectation. However, there is individual variability in the size of this effect. Frydman and Nave (2017) fit a DDM to the perceptual task data and are able to capture this individual variability in the DDM parameters. In fact, the DDM parameter estimates can be used to construct an index for how far a participant’s behavior deviates from optimality. The experiment also includes a financial choice task, where each trial asks participants to state their willingness to pay for a stock based on its recent price history. This task allows Frydman and Nave (2017) to identify individual variability in extrapolative beliefs with respect to the stock prices. Strikingly, heterogeneity across subjects in the inferred perceptual beliefs correlates with heterogeneity in extrapolative beliefs in the financial choice task. In other words, Frydman and Nave (2017) employ the computational properties of the DDM to analyze sequential effects and provide evidence on a belief formation process that spans multiple choice domains.23

There are other important reasons to consider how RT might evolve over time, within an experiment and within a subject.
For example, if RT are likely to diminish either over the course of an experiment, within a sequence of similar trials, or both, an increase in trial numbers that increases statistical power might also diminish the ability to identify any treatment effects on RT. Similarly, if the goal of the researcher is to use RT to better predict choices, earlier RT may be more predictive than later ones (Lotito, Migheli, & Ortona, 2013; Petrusic & Jamieson, 1978).

3.2. Option discriminability and RT

When a decision-maker is choosing between options x and y as in Fig. 1, the difference between vx and vy affects the probability that the higher-valued item is chosen. This has long been understood in both psychophysics (Thurstone, 1927) and in economics (McFadden, 2001). There is an analogous result for RT: in the perceptual decision-making literature, there is an inverse relationship between

21 Details regarding plausible neurobiological explanations for these effects can be found in Gao, Wong-Lin, Holmes, Simen, and Cohen (2009).
22 There is also strong evidence that the magnitude of these effects varies significantly across individual participants. Gökaydin, Navarro, MaWyatt, and Perfors (2016) present an in-depth analysis of individual differences in both sequential effects.
23 Accounting for subjective beliefs in the DDM framework is discussed further in Section 3.4.


stimulus discriminability and RT. When asked to make a perceptual decision between two stimuli, that decision will take longer if the two stimuli are more similar.24 Some early studies looking at both preferential choice and RT noted the relationship between increasing choice probability and decreasing RT (Jamieson & Petrusic, 1977), sometimes labeling the effect a “distance effect” (Birnbaum & Jou, 1990). Busemeyer and Townsend (1993) discuss “preference strength” as motivation for developing sequential sampling models of risky choice. More recently, “conflict” (Cavanagh, Wiecki, Kochar, & Frank, 2014) has also been used to describe the same phenomenon.

The effect of option discriminability, or the distance in value between options, is also captured by the DDM. For a given level of evidence in favor of one option or the other needed to initiate a choice, more samples must be drawn in order to reach that threshold if the options are very similar. This result can be seen immediately from the DDM expression for RT in Eq. (3) or the inverse-U in Fig. 1. Many of the studies in economics that look at RT demonstrate this inverse relationship, although it is not always discussed directly.25

Importantly, the range of incentivized choice domains in which the joint relationship between option discriminability and RT follows the predicted pattern is already quite extensive. For example, there exist several tasks of binary choice between different snack food items (Clithero, 2018; Krajbich, Armel, & Rangel, 2010; Milosavljevic, Malmaud, Huth, Koch, & Rangel, 2010). Similarly, tasks with binary choice between different lotteries of one to three outcomes (Gerhardt, Biele, Heekeren, & Uhlig, 2016; Moffatt, 2005; Soltani, De Martino, & Camerer, 2012) or money received after different time delays (Dai & Busemeyer, 2014; Krajbich et al., 2015a; Rodriguez, Turner, & McClure, 2014) have also been found to show an inverse relationship between option discriminability and RT.
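The intuition that less discriminable options require more sampling can be illustrated with a minimal simulation of the DDM. This is a sketch for intuition only: the drift is set proportional to the value difference, and the threshold, noise, and non-decision time values are illustrative assumptions, not estimates from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(mu, a=1.0, sigma=1.0, dt=1e-3, t_nd=0.3):
    """Simulate one DDM trial.

    mu:    drift rate, here standing in for the value difference vx - vy
    a:     symmetric decision threshold (+a chooses x, -a chooses y)
    sigma: standard deviation of the accumulation noise
    t_nd:  non-decision time (encoding and motor execution)
    Returns (choice, rt), where choice is 1 if x is chosen and 0 otherwise.
    """
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t + t_nd

# Mean RT falls as the value difference grows: easy choices are fast choices.
for diff in (0.2, 1.0, 3.0):
    trials = [simulate_ddm(mu=diff) for _ in range(500)]
    mean_rt = np.mean([rt for _, rt in trials])
    p_x = np.mean([c for c, _ in trials])
    print(f"vx - vy = {diff:.1f}: mean RT = {mean_rt:.2f}s, P(choose x) = {p_x:.2f}")
```

Raising the value difference simultaneously raises the probability that x is chosen and lowers mean RT, which is exactly the joint pattern described above.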
The methods for quantifying option discriminability in economic choice experiments follow many of the methods for eliciting preferences in the experimental economics literature. If a certain utility function is assumed and there are enough observed choices, a fitted utility estimate can be assigned to each choice option (e.g., Moffatt (2005) for risk preferences). Then, option discriminability between two options is determined by the closeness of the difference in their utility estimates to zero. Many experiments will also obtain an independent measure of utility, such as an incentivized measure of willingness to pay (Becker, DeGroot, & Marschak, 1964) or a Likert scale rating. Then, in a choice task, the difference in these independent measurements can be used to determine option discriminability (e.g., Alós-Ferrer, Granić, Kern, & Wagner (2016), Milosavljevic et al. (2010) and Philiastides & Ratcliff (2013)).

Another option, which does not require a separate task or any utility estimation, is to collapse choice data across individuals and look at the collective choice percentages (e.g., 80% of participants chose x over y, but only 50% of participants chose w over x). Oftentimes, choices with extreme choice percentages will have faster average RT across participants (e.g., Dai & Busemeyer (2014)). A similar option is to group decisions by common features or attributes of the choice options. For example, in an ultimatum game, the decisions of the receiver can be binned according to the size of the monetary offer (Krajbich, Hare, Bartling, Morishima, & Fehr, 2015b). If the choice feature is meaningful to the decision-maker, RT should vary along the range of possible values (e.g., small ultimatum offers to large ultimatum offers).

There is also mounting evidence that option discriminability applies to certain choices in the social and strategic domains.
For responders in ultimatum games, the decision to accept or reject ultimatum offers follows this trend (Ferguson, Maltby, Bibby, & Lawrence, 2014; Krajbich et al., 2015b). Similar RT patterns can be seen in binary dictator games, with dictators choosing between two different monetary allocations (Chen & Fischbacher, 2015; Krajbich et al., 2015a). It is also plausible to test option discriminability in other social environments, such as how much to contribute to a public good. Looking at public goods games where subjects choose how much of an initial endowment to contribute towards a public good, Krajbich et al. (2015a) also found that differences in option discriminability can explain correlations between RT and pro-social behavior. Evans, Dillon, and Rand (2015) found that more extreme (i.e., nearer free-riding or full cooperation) responses had faster RT in public goods games. More recently, Merkel and Lohse (2018) use predictions from the DDM to show that option discriminability can provide a unifying explanation for why time pressure does not consistently increase pro-social behavior in binary dictator and prisoner’s dilemma games.26

Although the original DDM (Ratcliff, 1978) and the overwhelming majority of its applications involve binary choice, several studies with larger choice sets demonstrate analogous effects. For example, Krajbich and Rangel (2011) look at trinary food choice and find decreasing RT with increasing difference between the highest-valued item and the average value of the other two items. Teodorescu and Usher (2013) also report decreasing RT with increasing option discriminability in choice sets of four, in a brightness discrimination task. The implications of this phenomenon are vast and remain relatively unexplored. A few illustrative examples are worth highlighting.
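A flavor of the multi-alternative result can be captured with a simple independent-accumulator race: each option's accumulator drifts at that option's value, and the first to reach a common threshold determines the choice. This is an illustrative stand-in, not the specific model estimated in the studies above, and all parameter values are assumptions. The two trinary choice sets below share the same mean value, so the RT difference reflects the gap between the best option and the average of the others.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_race(values, a=2.0, sigma=0.8, dt=2e-3):
    """One trial of an independent-accumulator race over any number of options.
    Each accumulator drifts at its option's value; the first accumulator to
    reach the threshold a determines the choice.
    Returns (chosen_index, decision_time)."""
    values = np.asarray(values, dtype=float)
    x = np.zeros(len(values))
    t = 0.0
    while x.max() < a:
        x += values * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(values))
        t += dt
    return int(x.argmax()), t

# Two trinary choice sets with the same mean value (1.0) but different gaps
# between the best item and the average of the other two.
for values in ([1.1, 1.0, 0.9], [1.8, 0.7, 0.5]):
    trials = [simulate_race(values) for _ in range(300)]
    mean_dt = np.mean([t for _, t in trials])
    p_best = np.mean([i == 0 for i, _ in trials])
    gap = values[0] - np.mean(values[1:])
    print(f"values = {values} (gap = {gap:.2f}): "
          f"mean decision time = {mean_dt:.2f}s, P(best chosen) = {p_best:.2f}")
```

As the best option pulls away from the rest, the race ends sooner and the best option is chosen more reliably, mirroring the trinary pattern reported by Krajbich and Rangel (2011).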

24 This is an effect that has been long established in psychology across sensory modalities (a few early examples are Dashiell (1937) and Shipley, Coffin, & Hadsell (1945)), and it goes beyond the scope of this paper to document its robustness in perceptual decisions.
25 See Krajbich, Bartling, Hare, and Fehr (2015a) for an excellent and in-depth discussion of this feature and a relevant “reverse inference” problem that corresponds to the study of RT. A similar discussion that extends to neuroeconomics and problems that arise from not accounting for option discriminability and RT can be found in Shenhav et al. (2014).
26 Much of the discussion surrounding social preferences and RT involves whether or not “fairness is intuitive,” an idea inspired by the results in Rand, Greene, and Nowak (2012). The follow-up literature is vast. Krajbich et al. (2015a) and Merkel and Lohse (2018) use option discriminability as an alternative explanation. Additional discussion on RT and social preferences can be found in Spiliopoulos and Ortmann (2018), a commentary on Rand et al. (2012) can be found in Myrseth and Wollbrant (2017), and a registered replication report finding no causal effect of time pressure on cooperation can be found in Bouwmeester et al. (2017).


First, consider the well-documented preference reversal pattern for monetary gambles: differences in RT among choices can help explain preference reversals (Alós-Ferrer et al., 2016). If preference reversals are more likely to occur after longer RT, other “changes of mind” are also likely to follow longer RT.27 A related prediction would be that RT can be used to infer indifference between two options, as relatively longer RT reflect weaker option discriminability. Indeed, if RT data can be used to identify indifference, this information could be leveraged to pin down individual estimates for preference parameters such as delay discount factors or loss aversion (Konovalov & Krajbich, 2016b). A third example considers the opportunity costs of spending more time making a decision. If individuals can be encouraged to “choose now” on decisions where the options are roughly equivalent, those individuals might increase their overall welfare more by allocating time to completing other choices (Oud et al., 2016).

Perhaps the most important aspect of the connection between option discriminability and RT is its ubiquity. In other words, it merits consideration in almost any decision context. Choice itself does not necessarily reveal differences in strength of preference, but RT can help identify such variation. So, while differences in RT may be indicative of a different process, it is likely at least some RT variability across choices is attributable to option discriminability (Krajbich et al., 2015a).

3.3. Decision thresholds and RT

Although the prediction from accumulator models that option discriminability contributes to the duration of RT is robust across many choice environments, these models have another latent parameter that independently affects RT: the threshold, or the amount of relative evidence that must be accumulated in order for a decision to be made. As can be seen in Figs.
1 and 3, increasing the decision threshold will increase the expected RT, even if option discriminability remains unchanged.

In perceptual decision-making, there are at least two established ways to exogenously influence a decision-maker’s decision threshold. Both have relevance for economics. The first is an exogenously determined time limit for a response (e.g., two seconds), effectively creating time pressure. A variant of imposing a hard time limit is instructing subjects to decide as quickly as possible, emphasizing speed over accuracy. These experimental manipulations are widely known to lower decision thresholds in perceptual tasks (e.g., Ratcliff & Smith (2004) and Palmer et al. (2005)). A second method is to vary the incentives associated with correct and incorrect responses. Often, subjects are paid a positive amount for a correct response, but incur a loss for an incorrect response. The ratio of the correct/incorrect incentives will, along with the error rate, determine the reward rate, or payment per unit of time, of a task. Given task conditions, the DDM provides an optimal threshold to maximize the reward rate in a binary response environment (Bogacz et al., 2006; Simen et al., 2009). As relative punishment increases, the reward rate falls, causing an increase in the decision threshold in an attempt to lower the error rate (Green et al., 2012). Note that the first manipulation, by introducing time pressure, also affects the reward rate of the task, thereby changing incentives. In both cases, then, there is reason to believe a change in task incentives will cause the subject to adjust their decision threshold (Gold & Shadlen, 2002).

This result has several implications for experimental economics. The first involves the payment structure used to incentivize subjects. A typical payment structure for subjects in many experiments is to use the choice from a randomly chosen trial.
Wilcox (1993a) employs a binary choice task involving gambles and shows an increase in RT on trials more likely to be chosen for payment. This RT difference could be interpreted as an asymmetry in how much evidence subjects require to make a decision (i.e., the height of their threshold). Second, experimental evidence shows that instructions encouraging subjects to make choices as fast as possible also lower the decision threshold in a binary food choice experiment (Milosavljevic et al., 2010). As a result of this lower threshold, there is more noise in the choice data.28 Milosavljevic et al. (2010) also estimate a DDM with their choice and RT data. The DDM parameter estimates present a clear dissociation in their data: the drift rate estimates do not change across treatments, but the threshold estimates do change. In other words, time pressure did not influence option discriminability.

Viewed in the light of sequential sampling, this is an important consideration for designing experiments: aspects of an experiment that, explicitly or not, induce time pressure could change subject choices without changing the true underlying value of options. This type of scenario is summarized in Fig. 3, which shows different choice and RT data assuming a constant difference between two options x and y. In other words, a decision-maker may have a higher value for x than y (i.e., vx > vy), but constrained decision-making time limits the ability to accumulate relative evidence, even if each sample is on average (given no change in the drift rate µ) as informative as it would be without time pressure.29 Without RT data or a model of the choice process like the DDM, observing choice data alone could lead to the inference that the difference in value between options differs across contexts. Instead, the different contexts lead to a change in a different component of the choice process: the amount of relative evidence needed in favor of one option over another.
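The threshold's independent effect on RT and errors can be reproduced with a small simulation, in the spirit of Fig. 3: the value difference (drift) is held fixed while only the threshold varies. All parameter values are illustrative assumptions, not estimates from any cited study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(mu, a, sigma=1.0, dt=1e-3):
    """One DDM trial with drift mu and symmetric thresholds at +a and -a.
    Returns (correct, decision_time); 'correct' means the +a barrier was hit."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

# Hold the value difference (drift) fixed and vary only the threshold:
# a higher threshold buys accuracy at the cost of time.
for label, a in (("low", 0.5), ("medium", 1.0), ("high", 2.0)):
    trials = [simulate_ddm(mu=1.0, a=a) for _ in range(500)]
    err = 1 - np.mean([c for c, _ in trials])
    mean_dt = np.mean([t for _, t in trials])
    print(f"{label:>6} threshold (a={a}): "
          f"error rate = {err:.2f}, mean decision time = {mean_dt:.2f}s")
```

Lowering the threshold produces faster but noisier choices even though the options' values never change, which is the confound discussed above: choice data alone cannot distinguish a threshold shift from a change in underlying values.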

27 This is a straightforward prediction from the DDM, as longer RT will occur when options are more difficult to separate. Consider the observed choices of x over y and of x over w. Suppose the corresponding RT for these choices are RTxy and RTxw, respectively. If RTxy < RTxw, the DDM would assign a higher future probability to w being chosen over x than to y being chosen over x. This prediction will also play a role in discussing choice errors in Section 3.6.
28 In Milosavljevic et al. (2010), all food items were separately rated, so mistakes can be inferred by the number of times a lower-ranked item was selected.
29 It is worth noting that Milosavljevic et al. (2010) also find weak evidence that time pressure increases noise in the accumulation process. More work is needed to understand the complex relationship between time pressure, attention, and changes in choice behavior.


Fig. 3. Changes in the DDM threshold affect RT distributions and the probability of choice errors. The DDM framework provides an explanation for how the probability of choice errors can be independent of the value of available options. As before, there are two options, x and y, with vx > vy. For fixed values of x and y, the implications of three different decision thresholds are considered. (a) For a given vx and vy, the probability of a decision-maker making a mistake (choosing y) changes with the decision threshold. A low (L, blue) threshold requires the least noisy evidence accumulation, so is the most likely to lead to an error. This probability decreases with an increase to a medium (M, cyan) or high (H, purple) threshold. (b) The resulting RT distributions clearly reveal different thresholds for a given vx and vy. The greatest positive skew is seen for low (blue) thresholds. (c) The exercise is repeated with the same three thresholds, but now there are trials with a high (H) difference between x and y or a low (L) difference between x and y. (d) The RT distributions again clearly differentiate the different decision thresholds. The RT on trials with a lower difference between x and y (dashed lines) all have distributions with greater density on longer RT. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Sometimes longer RT are thought to come from more “considerate” or “deliberate” decision processes (Rand et al., 2012; Rubinstein, 2007, 2013). If choices are the outcome of a process with relatively longer RT, where the longer RT is the result of a higher decision threshold and not lower option discriminability, are those choices of greater use when trying to identify preferences, compared to choices resulting from faster RT? There is some evidence for this.30 For example, RT for a decision can be longer for individuals who devote more time to reading instructions prior to seeing the options (Nielsen, Tyran, & Wengström, 2014). There is also some evidence from online experiments attempting to identify willingness to pay for various natural resources: choices coming from longer RT lead to tighter confidence intervals on estimates of willingness to pay (Börger, 2016). The sequential sampling framework provides a simple explanation for what might constitute the “carefulness” of subjects in their decision-making (Nielsen et al., 2014): an endogenous increase of a decision threshold.31 This point will be raised again in Sections 3.5 and 3.6.

The majority of economics experiments provide RT evidence consistent with the predictions of decision thresholds in a standard sequential sampling framework. In perceptual tasks, however, there is also some evidence that subjects will occasionally not accumulate any evidence and respond exceptionally quickly (Simen et al., 2009). This “nonintegrative” behavior can be optimal if a priori (before accumulation) beliefs are strong enough (Simen et al., 2009), and it creates an effective decision threshold of zero; no additional information is collected. In other words, in most cases, subjects will decide that accumulating evidence is worth the cost, but they will occasionally opt out of the accumulate-to-threshold process.
Caplin and Martin (2016) recently demonstrated similar behavior in a trinary choice experiment, employing a “dual process” framework that allows the decision-maker to first choose which process to employ to make a decision: a DDM or an “automatic” process. A study mentioned earlier looking at immediately repeated choices also found evidence in line with this type of process (Agranov & Ortoleva, 2017).

30 A related point is made in Recalde, Riedl, and Vesterlund (2018), although it is expressed in terms of understanding the relationship between RT and choice errors.
31 It is worth emphasizing that in the perceptual decision-making literature, it is often the case that across individuals there is no correlation between mean RT and accuracy. However, accuracy is often correlated with drift rate, and mean RT correlated with decision threshold (Ratcliff & McKoon, 2008). Within individuals, though, there is often strong correlation between RT and accuracy (Ratcliff & McKoon, 2008).


The complexity of economic choices also likely adds additional channels for incentives to influence decision thresholds. For example, while the cognitive sciences have emphasized the speed-accuracy tradeoff (i.e., the difference in quality of decisions made in the absence or presence of time pressure), there is evidence that time-dependent incentives can reduce RT without reducing decision accuracy. Kocher and Sutter (2006) demonstrate this in a beauty-contest game, where payoffs were inversely related to RT. More experimental designs that explicitly make RT costly, such as a price per sample of information (e.g., Busemeyer (1985)), will provide additional insights.32

3.4. Biases and RT

Reference-dependent preferences have fast become a cornerstone of behavioral economics, in part because there is growing evidence they are driven by expectations about possible outcomes (Kőszegi & Rabin, 2006; Sprenger, 2015). In perceptual decision-making, there exists a relevant concept, although the terminology may appear distinct. In perceptual discrimination tasks, biased choices occur when one option is chosen more frequently than another. Importantly, this bias is distinct from option discriminability. A closer look at what drives bias in the DDM framework will reveal an intriguing link to concepts more commonly considered in economics.

There are several known experimental manipulations that can reliably produce biased choices. For example, the discriminability between two images x and y might be constant across trials, but the a priori probability that x is correct can be exogenously manipulated. The DDM allows a decision-maker’s prior beliefs to enter into the decision-making process through variation in the starting point z (Ratcliff, 1985; Ratcliff & Rouder, 1998).
In addition to exogenous manipulations of prior probability (e.g., the correct response will be x with 60% probability), asymmetric potential payoffs (e.g., the payment is $0.50 if x is correct and $0.25 if y is correct) have also been shown to generate biased choices and shorter RT towards the biased option in binary perceptual choice environments (Leite & Ratcliff, 2011; Mulder, Wagenmakers, Ratcliff, Boekel, & Forstmann, 2012; Simen et al., 2009; Summerfield & Koechlin, 2010).33 There also exists evidence that both of these types of prior knowledge share a common neural mechanism in the decision-making process (Mulder et al., 2012).34

The observed behavioral responses to exogenous changes in prior information are in line with the optimality of the DDM, as discussed in Section 2. Recall that the DDM implements the optimal response to the noisy accumulation of evidence in certain binary choice scenarios (Gold & Shadlen, 2002). In the simplest specification of the DDM, the two options x and y are assumed to be equally likely to be preferred. With that restriction removed, it becomes optimal for the DDM to adjust the parameter z towards the more likely option (Simen et al., 2009). This optimal response to expectations will produce a higher choice probability for the more likely option, as well as shorter RT towards the closer threshold and longer RT towards the opposite threshold (see Fig. 2). In this sense, RT aids in the identification of expectations and reference points, a common empirical challenge in economics (Ericson & Fuster, 2015).

A study mentioned earlier when considering sequential effects (Frydman & Nave, 2017) is a compelling example of the potential link between perceptual and economic expectations. Frydman and Nave (2017) identify variation in the DDM parameter z across subjects in a perceptual task, and that variation maps onto heterogeneity in belief formation in a stock-purchasing task.
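The starting-point mechanism can be made concrete with a short simulation. The drift is set to zero so that the two options provide equal evidence; any asymmetry in choices and RT then comes from the prior alone. Parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(mu=0.0, a=1.0, z=0.0, sigma=1.0, dt=1e-3):
    """One DDM trial starting at z (strictly between -a and +a).
    Returns (chose_x, decision_time); the +a barrier corresponds to x."""
    x, t = z, 0.0
    while abs(x) < a:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

# With mu = 0 the evidence favors neither option, but a prior favoring x
# shifts the starting point toward the x barrier: x is chosen more often
# *and* more quickly, while the rarer choices of y become slower.
for label, z in (("neutral prior", 0.0), ("prior favors x", 0.5)):
    trials = [simulate_ddm(z=z) for _ in range(800)]
    p_x = np.mean([c for c, _ in trials])
    rt_x = np.mean([t for c, t in trials if c])
    rt_y = np.mean([t for c, t in trials if not c])
    print(f"{label}: P(x) = {p_x:.2f}, "
          f"mean RT | x = {rt_x:.2f}s, mean RT | y = {rt_y:.2f}s")
```

The asymmetry in conditional RT is the signature discussed in the text: aggregate choice probabilities alone could also be produced by a drift difference, but the combination of faster choices toward the favored option and slower choices away from it points to the starting point.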
Achtziger and Alós-Ferrer (2014) also leverage RT data in a probabilistic environment to infer whether or not subjects are using Bayesian updating. While Achtziger and Alós-Ferrer (2014) favor a “dual process” model to account for their data, both processes in the model are built on the foundation of a diffusion process (Alós-Ferrer, 2018). Dutilh and Rieskamp (2016) use a DDM to show that subjects possess a slight a priori bias for the gamble presented on screen in each trial over a fixed reference gamble, an effect that was not observable from the choice data alone. Enax, Krajbich, and Weber (2016) look at how changes in nutritional labeling affect food choice, leveraging the DDM to identify how the change in information presentation affects the choice process: changes in food labels do not change any biases in food choice, but rather change attribute-weighting via the evidence accumulation process.

A common thread is that RT data reveal a bias by being either shorter or longer conditional on the ultimate choice. This difference facilitates identification even when aggregate choice data have the same average choice probabilities (as illustrated in Fig. 2). The idea that beliefs can have an effect on RT, and that RT can then be used jointly with choices to infer those beliefs, is a powerful concept that economists should continue to utilize. A similar point can be made for the broad notion of attention, which is the focus of the next section. In both of these cases, parameters in sequential sampling models such as the DDM have direct connections to phenomena of growing interest in economics.

3.5. Attention and RT

The consideration of how individuals make decisions given limited attention has gained prominence in recent years in economics (Caplin, 2016; Masatlioglu, Nakajima, & Ozbay, 2012). Attention has also been studied using sequential sampling models in psychology and neuroscience.
Although the canonical DDM does not have an attentional parameter, other sequential sampling models were developed with attention in mind. Busemeyer and Townsend (1993) considered risky choice, and assumed the decision-maker would re-allocate attention from one attribute (e.g., probability) to another (e.g., outcome) as preference for one option or another accumulated. An attentional version of the DDM (Krajbich et al., 2010) employs similar intuition, adding a parameter to allow eye

A more extensive discussion of the benefits of studying decisions under time pressure can be found in Spiliopoulos and Ortmann (2018). Leite and Ratcliff (2011) found a larger effect size for prior stimulus probability than asymmetric payoffs. 34 Additional discussion of expectations in perceptual decision-making can be found in Summerfield and de Lange (2014). 33


fixations to affect the DDM drift rate µ in the direction of the attended option.35 Within the experimental economics community, RT is sometimes considered to reflect the amount of attention being given to the task at hand (Caplin & Daniel, 2013, 2016; Geng, 2016; Martin, 2016). In some tasks, distributions of RT are often bi-modal, indicating a set of trials on which individuals clearly spend more time than on others. For example, Martin (2016) looks at a simple buyer-seller game where the buyer must incur some cognitive costs to validate the value of a product, which is selling for either a high or a low price. The experiment reveals subjects spend significantly more time checking the value of the product when the price is high. This interpretation of RT has proven helpful in identifying choice mistakes (Caplin & Daniel, 2013), as is discussed in more detail in the next section. Wilcox (1993b) finds that individuals more likely to use a simple pricing rule for monetary lotteries have shorter RT. While attention is not directly discussed, this result would be in line with the idea that individuals who spend less time considering (i.e., pay less attention to) the full support of probabilities or outcomes associated with lotteries are more likely to employ a simple pricing rule.

One way to refine the broad notion of attention is to use a model that might dissociate multiple effects on the decision-making process. Within the DDM, one interpretation of slower decisions resulting from increased attention to a decision (Caplin & Daniel, 2013) would be an increase in the decision threshold. All else equal, this increased caution would reduce the likelihood of a mistake, as shown in Fig. 3. Attention could also mean that certain aspects of a choice have become more salient or important.
In this case, the DDM would predict an increase in the drift rate towards the attribute or option that has increased attention (Cavanagh et al., 2014; Krajbich et al., 2010; Summerfield & de Lange, 2014). A recent eye-tracking study confirmed the existence of both types of attention, using two different eye measurements: visual gaze influenced the drift rate, but pupil dilation was found to be positively correlated with decision threshold (Cavanagh et al., 2014). Pupil dilation is often observed to increase with cognitive load or cognitive effort, just as an increased decision threshold reflects increased caution and an increase in the amount of evidence required to execute a choice. This type of multi-layered data, used within a model like the DDM, is extremely useful for identifying distinct attentional influences on choice (Caplin, 2016; Krajbich & Smith, 2015). Building on the discussion started in the last section, this section provides additional evidence that sequential sampling models can provide novel insights in the study of expectations and attention (Summerfield & de Lange, 2014), two concepts that as of now are building fairly distinct literatures in economics. Specifically, the DDM provides a plausible computational separation of the two, with expectations accounted for in the starting point and attention in the drift rate or decision threshold (Cavanagh et al., 2014; Summerfield & de Lange, 2014).36 Another route for considering attention is to consider the costs of information acquisition, meaning it may be optimal for individuals to decide without complete information (i.e., limited attention). Woodford (2016) presents one such model for optimal evidence accumulation given a processing constraint.37 3.6. 
3.6. Choice errors and RT

Although the presence of errors in choice data has long been understood in both economics and psychology (McFadden, 2001; Thurstone, 1927), the approaches of these disciplines to understanding and identifying those errors stand in stark contrast. This section considers how RT can be used to bridge the two approaches. The study of perceptual decisions has at least one clear advantage in examining errors: the identification of an "error" is straightforward. For example, a response that a picture of a face is a "house" is clearly incorrect. This allows researchers to unambiguously label choices as "correct" or "incorrect," in turn facilitating the characterization of the corresponding RT distributions. The perceptual decision-making literature has consequently identified several choice and RT regularities with respect to errors (Luce, 1986).

Consider a standard perceptual decision-making experiment with two treatments: one where speed is emphasized (or enforced, with only a short amount of time allowed to respond), and one where accuracy is emphasized (e.g., no time limit with an incentive for each correct response). As discussed earlier, the speed (accuracy) treatment is commonly found to decrease (increase) the decision threshold (Palmer et al., 2005; Ratcliff & Smith, 2004), thus increasing (decreasing) the chances that the wrong barrier is reached first (Fig. 3). Changes in decision thresholds will, of course, also affect mean RT. Another prominent feature of most perceptual data, however, requires looking beyond the decision threshold in the DDM. It is often the case that errors are, on average, either faster or slower than correct responses. The across-trial variability parameters outlined in Section 2 now become crucial (Ratcliff & Rouder, 1998). Simple examples are presented in Fig. 4, using choices of x (correct) and y (incorrect).
Consider a DDM with two possible drift rates, µ1 < µ2 (extending to a continuum of drift rates is also feasible). All else equal in the model, errors are more likely to occur with µ1. Additionally, average RT for µ1 are longer. If the frequencies of µ1 and µ2 are equal, average RT for choices of y will be longer than average RT for choices of x. This is exactly what is shown in Fig. 4c. Next, consider variability in the starting point z. The simplest case would be two possible starting points, z2 > z1. All else equal in the model, errors are more likely to occur with z1. Now, average RT for z1 are shorter, which averages out to shorter RT for choices of y. This scenario is shown in Fig. 4d. When might each of these scenarios occur? Variable drift rate can readily be interpreted as variable difficulty (option discriminability). Variable starting points can result from recent trial history (sequential effects), changes in incentives (biases), or perhaps even changes in attention. In this way, the DDM can account for both fast and slow errors through changes in different latent parameters that map onto concrete cognitive concepts (Ratcliff & McKoon, 2008; Ratcliff et al., 1999).38

35 More precisely, an attentional "discount factor" θ ∈ [0, 1] affects evidence accumulation. Following the notation in Section 2.1, µ varies with eye fixations as follows: the drift rate is µ = k(vx − θvy) when a decision-maker is fixated on x and µ = k(θvx − vy) when a decision-maker is fixated on y. Krajbich and Smith (2015) provide a discussion of the model and additional evidence supporting its predictions.
36 Although time pressure is typically assumed to affect the decision threshold for perceptual decisions, there is some evidence that time pressure also affects the accumulation process. An exogenous constraint on time effectively limits attention, which could lead the decision-maker to allocate attention differently than in a case without a time constraint. For example, in purchase decisions for risky assets, subjects showed increasing variance aversion under greater time pressure (i.e., less time to make a decision) (Nursimulu & Bossaerts, 2014). Another study, looking at binary choices between gambles, found different effects of time pressure on pure prospects (all gains or all losses) than on mixed prospects (Kocher, Pahlke, & Trautmann, 2013).
37 Additional discussion of relevant theory work in economics and psychology can be found in the Appendix.
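The logic behind these two scenarios can be checked with a few lines of simulation. The sketch below is illustrative (the specific parameter values are assumptions, not estimates from any dataset): mixing two drift rates produces slow errors, while mixing two starting points produces fast errors.

```python
import math
import random

def ddm_trial(mu, z, a=2.0, sigma=1.0, dt=0.002, rng=random):
    """Simulate one DDM trial; barriers at 0 (choose y) and a (choose x)."""
    x, t = z, 0.0
    while 0.0 < x < a:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= a, t  # (chose x?, decision time)

def mean_rts(mus, zs, n=4000, seed=11):
    """Mean RT for correct (x) vs. error (y) choices, mixing trial types."""
    rng = random.Random(seed)
    correct, errors = [], []
    for i in range(n):
        chose_x, t = ddm_trial(mus[i % len(mus)], zs[i % len(zs)], rng=rng)
        (correct if chose_x else errors).append(t)
    return sum(correct) / len(correct), sum(errors) / len(errors)

# Drift-rate variability with an unbiased start: errors are slower.
rt_x, rt_y = mean_rts(mus=[0.5, 2.0], zs=[1.0])
print(f"drift variability: correct {rt_x:.2f}s, error {rt_y:.2f}s")

# Starting-point variability with a single drift: errors are faster.
rt_x, rt_y = mean_rts(mus=[1.0], zs=[0.5, 1.5])
print(f"start-point variability: correct {rt_x:.2f}s, error {rt_y:.2f}s")
```

With the drift mixture, error responses are noticeably slower than correct ones, as in Fig. 4c; with the starting-point mixture the pattern reverses, as in Fig. 4d.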

Fig. 4. Changes in the DDM drift rate or starting point affect RT distributions of choice errors. The simplest DDM framework assumes z = a/2 and a single drift rate µ. (a) Simple example of introducing variability in z or µ, using two possible values of each. Drift rate could randomly vary from trial to trial, where (µ1 + µ2)/2 = µ (purple), or starting point could vary from trial to trial, where (z1 + z2)/2 = z (cyan). It is assumed vx > vy since µ > 0. All RT distribution plots have solid lines for choices of x and dashed lines for choices of y. A similar diagram can be found in Ratcliff and Rouder (1998). (b) Example RT distributions when z = a/2 is assumed. Regardless of µ and a, the predicted conditional RT distributions for choices of x and y are equivalent. (c) Example RT distributions where z = a/2 is assumed, but there are two possible drift rates, µ1 and µ2. Now, the average RT for choices of y is greater than the average RT for choices of x. (d) Example RT distributions where there is a constant drift rate, but two possible starting points, z1 and z2. Now, the average RT for choices of x is greater than the average RT for choices of y. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
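The speed-accuracy role of the decision threshold can also be made concrete analytically. For an unbiased starting point (z = a/2), the two-boundary diffusion has standard closed forms for accuracy and mean decision time (e.g., Palmer et al., 2005). The sketch below uses illustrative parameter values, with the noise σ normalized to 1:

```python
import math

def ddm_accuracy(a, mu, sigma=1.0):
    """P(correct) for a DDM with threshold separation a, drift mu,
    noise sigma, and unbiased starting point z = a/2."""
    return 1.0 / (1.0 + math.exp(-a * mu / sigma**2))

def ddm_mean_dt(a, mu, sigma=1.0):
    """Mean decision time for the same unbiased DDM."""
    return (a / (2.0 * mu)) * math.tanh(a * mu / (2.0 * sigma**2))

# Raising the threshold ("caution") buys accuracy at the cost of time.
for a in (1.0, 2.0, 3.0):
    print(f"a={a}: accuracy={ddm_accuracy(a, 1.0):.3f}, "
          f"mean DT={ddm_mean_dt(a, 1.0):.3f}s")
```

Raising a from 1 to 3 (with µ = 1) moves accuracy from about 0.73 to about 0.95 while mean decision time grows roughly sixfold, which is the trade-off illustrated in Fig. 3.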

Another robust finding in the cognitive sciences relating RT and choice errors is "post-error slowing" (PES), where individuals tend to slow down after making a judgment error in a perceptual task (Laming, 1979; Rabbitt & Rogers, 1977). Several different cognitive explanations of PES have been proposed. The most prominent hypothesis is increased caution: the realization that a mistake was made on the previous choice prompts the decision-maker to increase the amount of information required (an increase in the decision threshold) to make the next choice (Rabbitt & Rogers, 1977). A second explanation involves an a priori expectation bias (similar to that presented in Fig. 2) against the option that led to the mistake.39 This bias would be instantiated in the starting point z in a DDM. As these two explanations involve different DDM parameters, joint consideration of choices and RT is required to differentiate between them. A decomposition of PES in a lexical discrimination task found that a change in decision threshold (response caution) was the primary source of PES, with secondary evidence for expectation bias (Dutilh et al., 2012).

Thus far, economics has taken at least two approaches to studying the potential relationship between errors and RT. First, as the definition of a mistake in an economic task is not as unequivocal as in a perceptual task, experimental innovations have been required to ensure identification. Several studies have used the choice of a dominated strategy to indicate a mistake. While some of these choice environments are more complex than those typically studied with the DDM or other sequential sampling models, they present interesting trends. In general, the data hint at a negative relationship between RT and mistakes (Rubinstein, 2007, 2013). For example, when asked to choose between two different roulette games, subjects choosing the dominated lottery are typically much faster than those who choose the "correct" lottery (Rubinstein, 2013). Recalde et al. (2018) use specially constructed public goods games, where some options are strictly dominated, to ease the identification of mistakes. In this setting, Recalde et al. (2018) also find mistakes to be negatively correlated with RT. Agranov, Caplin, and Tergiman (2015) show that there appears to be a relationship between RT and "naive" (i.e., weakly dominated) guesses in a 2/3 guessing game. A recent study using the cognitive reflection test also showed that incorrect answers are often faster than correct answers (Jimenez, Rodriguez-Lara, Tyran, & Wengström, 2018). In contexts such as these, where a choice set appears to contain something close to an objectively correct answer, longer RT may indeed reflect attention (as discussed in the previous section) and thus be predictive of a correct choice. If this negative relationship between errors and RT is valid and robust to various choice environments, it has an intriguing application: longer RT, or trials in which more time is spent, might better reflect preferences (Recalde et al., 2018) and hence support better out-of-sample prediction.

38 Other models of choice and RT have also been developed to account for both fast and slow errors (e.g., Usher & McClelland (2001) and Alós-Ferrer (2018)).
39 For perceptual discrimination, if "face" was guessed when the correct response was "house," this hypothesis predicts an a priori decrease in the probability of responding "face" on the next trial.
Note that this implication also connects to the earlier discussion of sequential effects, where RT earlier in an experiment tend to be longer than RT later in an experiment.40 One possible interpretation of the negative relationship between errors and RT is that RT can be used as a measure of attention, as discussed in the previous section. If the decision-maker is not paying sufficient attention to a choice, an error is more likely to be made. Using this assumption, Caplin and Martin (2013) show that providing individuals with default choices leads to less time spent evaluating the options (i.e., shorter RT), and thus to more mistakes. Geng (2016) pushes the connection between attention and RT a step further, measuring consideration time, or the amount of time spent considering each option, in addition to the total amount of time needed to make a choice. Leveraging these additional RT measurements, Geng (2016) shows that subjects spent significantly more time evaluating default choices compared to other choices. Once provided with a default, two different types of mistakes are possible: "fail-to-improve" errors, when a better option is available but not chosen, and "change-to-worse" errors, when the default is given up for a lower-valued option. Geng (2016) finds an asymmetry in choice errors, with more "fail-to-improve" errors than "change-to-worse" errors. Presumably, this asymmetry reflects the allocation of greater consideration time to the default, at the cost of consideration time for the alternatives. A follow-up experiment strengthens the causal claim that asymmetric attention drives the asymmetric choice behavior: an imposed time constraint on consideration time erased the error asymmetry. Not all of the evidence in economics points to fast errors, however.
Slow errors, frequently defined as occurring when a lower-valued option is selected in a binary choice task, have been observed in several recent economics experiments (e.g., Krajbich, Lu, Camerer, & Rangel (2012), Woodford (2016), and Philiastides and Ratcliff (2013)).41 As Fig. 4 demonstrates, these slow errors are predicted by the DDM when there is variability in the drift rate, that is, variability in option discriminability. Intuitively, this should be expected in most economics experiments, where the difference in value between two options (e.g., two gambles or two foods) is unlikely to be fixed across all trials. Importantly, this parallels the contexts in perceptual decision-making where slow errors frequently occur: accuracy is emphasized and option discriminability is low (Ratcliff & McKoon, 2008).

Although the origins differ, both economics and psychology possess growing literatures connecting RT and errors. There are several ways to consider linking these literatures. For example, PES has not been explored in the experimental economics literature, but there are numerous potential applications. Consider cooperation levels in a public goods game. On a given round, an individual might contribute "too much," as compared to others in the group. If the subject felt as if this were an "error," then their RT on the following trial should be longer compared to RT on trials in which they make "adequate" contributions (Lotito et al., 2013). In other words, RT might help identify a preference to not make high contributions beyond "conditional cooperation" (Fischbacher, Gächter, & Fehr, 2001). More broadly, improved understanding of choice mistakes is of use for all empirical work (Hey, 2005; McFadden, 2001). Leveraging the relationship between RT and errors provides a unique avenue to advance that knowledge.
3.7. Overall value and RT

The difference in value, or the discriminability, between options clearly has a robust effect on RT across a wide variety of paradigms, as shown in Section 3.2. However, in addition to the difference in value (DV) between options, the choice set value, or overall value (OV), of a trial might also influence the decision-making process. Consider the choice between x and y, where vx = 4 and vy = 6. Here, DVxy = |vx − vy| = 2 and OVxy = vx + vy = 10. Compare this to a choice between w and x, where vw = 2: DVwx = DVxy, but OVwx ≠ OVxy. A small number of papers have begun to study how OV, distinctly from DV, affects both RT and choice. Despite the relatively small number of studies, the evidence converges on the same effect: as the OV of the choice set increases, RT decreases.

The DDM, in its canonical form, cannot account for this negative effect of OV, a point that has been noted previously (Hunt et al., 2012). Alternative DDM specifications, however, can account for the effects of OV. These alternative models have similar structure but approach the phenomenon from different perspectives. First, consider a discretization of OV into win-win, win-lose, and lose-lose choices (Ratcliff & Frank, 2012; Shenhav & Buckner, 2014). In order to account for RT differences across those groups, separate decision thresholds for each treatment must be identified (Cavanagh et al., 2014; Ratcliff & Frank, 2012). In this specification, higher barriers are found for the lose-lose choices (lowest OV) compared to win-win choices (highest OV). So, while DV might be considered a first-order determinant of decision conflict that reveals itself in RT differences, OV can also affect the dynamics of a decision process, generating asymmetric "negative" ("avoid") and "positive" ("approach") conflict (Busemeyer & Townsend, 1993; Ratcliff & Frank, 2012). The relative importance of a choice set is often not considered in experiments, but even simple binary choice environments can reveal its importance (e.g., Cavanagh et al. (2014) and Polanía et al. (2014)).42 Second, in an environment in which an individual is going to make many choices, the individual will benefit from implementing a "speed-value trade-off" (Pirrone, Stafford, & Marshall, 2014). In other words, as the individual accumulates evidence about two options and realizes they are both high value, it could be more efficient to quickly choose one option and move on to the next choice.43 This could be particularly powerful if an individual is given a block of time and provided the opportunity to earn or consume as much as possible in that amount of time. Similarly, the OV of a choice set is a reason for decision thresholds to change within a single decision. Tajima et al. (2016) show that the higher the prior belief about OV going into a trial, the faster barriers should collapse within a DDM, so as to maximize the reward rate across multiple trials. There also exist alternative sequential sampling models that can account for this effect.

40 Of course, within the framework of the DDM, this conclusion could be problematic. Given a fixed task, the DDM predicts slower (faster) decisions for options with lower (higher) option discriminability; leveraging the differences in RT might actually increase out-of-sample predictive power (Clithero, 2018). However, the DDM also predicts different amounts of mistakes from choices made under different treatments that affect the decision threshold (Milosavljevic et al., 2010). In this case, ceteris paribus, the DDM would predict that choices made under a higher decision threshold (and hence longer RT) would better reflect the underlying preferences and hence have greater out-of-sample predictive power.
41 Krajbich and Smith (2015) provide a brief discussion of these slow errors.
One such model is a “race” or “absolute evidence” model, where instead of a single drift rate (that reflects the difference in utilities of the two options), separate drift processes accumulate for each option towards a single decision threshold (Brown & Heathcote, 2008). A second possibility is to suppose high (low) OV leads to an acceleration (deceleration) of the comparison process in a sequential sampling model (for example, see Busemeyer & Townsend (1993) or Diederich (2003)). An alternative biophysical model has also been proposed and tested, accounting for both the DV and OV effects on RT (Hunt et al., 2012).44 Despite this common finding of a slowing of the decision-making process as OV decreases, existing models do not provide a unified account of why there is a slowing, nor do they address other potential effects of OV. As an example, consider the possibility of non-linearities in the comparison process affecting the rate of comparison (Diederich, 2003). In this account, it is entirely plausible that “more” information (e.g., more attributes of the choices under consideration, more memories of previous consumption) is considered for high OV decisions, even if those decisions are completed in less time. If there is increased consideration of counterfactual outcomes (which are greater when OV is higher), this would offer an explanation for a seemingly paradoxical finding: while low OV trials do lead to slower decisions, there is evidence that high OV trials, although faster, are also associated with greater anxiety (Shenhav & Buckner, 2014). This would also be consistent with the notion of OV reflecting the importance of a particular choice set. An additional consideration with respect to OV is the impact of diminishing marginal utility. 
At least when comparing two high-value options, the intuition here is similar to the "speed-value trade-off": a decision-maker may eventually realize the options are of high enough value that there is little to be gained from sampling more evidence, especially if the evidence accumulation process is costly (Fudenberg et al., 2015). This effect should grow as OV increases. So, diminishing marginal utility might provide a parsimonious explanation for the negative relationship between OV and RT. A methodological consideration for the experimenter is also relevant here. As options increase in objective properties (e.g., the expected value of two different gambles), diminishing marginal utility is likely to make the subjective difference between pairs of options shrink. In other words, across all choice sets in an experiment, it is important to ensure consistent control of both OV and option discriminability, as they appear to have opposing effects on RT.45 Process data like RT seem particularly useful for exploring the effects of OV, as choice probabilities alone appear to mask the decision process's sensitivity (Polanía et al., 2014). Further research can reveal the mechanism through which OV affects the choice process, such as salience (Towal, Mormann, & Koch, 2013) or anticipated regret (Shenhav & Buckner, 2014). In this way, OV might prove a powerful manipulation, or an important control, for experiments.

3.8. Choice set size and RT

The majority of the existing RT literature, in both psychology and economics, focuses on choice environments involving two options. Of course, many decisions are made with far more than two options, motivating the extensive development in the empirical literature of multinomial discrete choice models (McFadden, 2001). There are many reasons to expect the size of the choice set to affect the decision-making process.
For example, as the choice set increases, cognitive constraints such as working memory and attention are likely to play a larger role (Reutskaja, Nagel, Camerer, & Rangel, 2011). These cognitive constraints are likely to matter in both perceptual and economic decision-making.

42 Note that an additional possibility, explored in Ratcliff and Frank (2012), is that the decision process could be delayed in lose-lose trials. This would be reflected in longer non-decision times, as opposed to a higher barrier.
43 Teodorescu, Moran, and Usher (2016) also find evidence for OV effects on RT in perceptual choices between stimuli of varying brightness levels.
44 An exciting finding from this neuroeconomics paper is that the dynamics of when DV and OV affect the decision-making process appear to be distinct (Hunt et al., 2012). Specifically, it might be the case that OV enters earlier, signaling the relative importance of a trial, and DV enters later, helping discriminate between the available options.
45 Thanks to a reviewer for pointing out this consideration. This point also speaks to the general importance of effective measurement and estimation of the subjective value of all options when an experiment aims to study – or control for – option discriminability (Alós-Ferrer, 2018).


Within the perceptual decision-making literature, there is at least one fairly robust relationship between RT and increasing choice set size: mean response time (MRT) increases linearly with the logarithm of the number of choice alternatives N,

MRT(N) = k + b log(N),    (5)

where k, b > 0. This result is typically referred to as Hick's Law (Hick, 1952).46 The DDM, in its canonical form, cannot speak to how RT may change as set size increases.47 However, there exist alternative sequential sampling models that are readily adaptable to N > 2 alternatives, typically involving a separate accumulation process for each option (Brown & Heathcote, 2008; McMillen & Holmes, 2006; Roe, Busemeyer, & Townsend, 2001; Usher & McClelland, 2001). A common assumption, which allows a sequential sampling model to obey Hick's Law, is that the decision threshold increases with set size (Usher & McClelland, 2001).

While the experimental economics literature does not yet include a rigorous exploration of the relationship between RT and the number of available options, there is evidence for a positive relationship between choice set size and RT. One example comes from comparing the binary food choice task of Krajbich et al. (2010) to the trinary food choice task of Krajbich and Rangel (2011): although not a within-subject comparison, the two tasks are the same, save for the number of food options. A comparison of RT by rating difference across the two datasets shows an increase of approximately 0.5 s in average RT. Reutskaja et al. (2011) also find increasing average RT when the set size increases from four to nine and from nine to sixteen, although the increase is discernibly smaller between nine and sixteen. There is also good reason to think the relationship between RT and choice set size is far more complicated. Reutskaja et al. (2011) additionally have eye-tracking data, and find the choice process adapts to the size of the choice set, with eye fixations per option shrinking in duration as the choice set grows.
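The role of the threshold assumption in generating Hick's Law can be checked with a small race-model simulation. This is an illustrative sketch, not a calibrated model: the drift, noise, trial counts, and threshold schedule below are all assumptions. With a fixed threshold, adding independent racers actually speeds the first passage; letting the threshold grow with log(N) restores an increasing, Hick-like pattern.

```python
import math
import random

def race_trial(n_options, barrier, mu=1.0, sigma=0.3, dt=0.005, rng=random):
    """Independent racing accumulators; RT is the first passage of the max."""
    x = [0.0] * n_options
    t = 0.0
    while max(x) < barrier:
        for i in range(n_options):
            x[i] += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t

def mean_rt(n_options, barrier, trials=200, seed=3):
    rng = random.Random(seed)
    return sum(race_trial(n_options, barrier, rng=rng)
               for _ in range(trials)) / trials

for n in (2, 4, 8):
    fixed = mean_rt(n, barrier=1.7)                # threshold fixed in N
    hicks = mean_rt(n, barrier=1.0 + math.log(n))  # threshold grows with log N
    print(f"N={n}: fixed threshold {fixed:.2f}s, "
          f"log-scaled threshold {hicks:.2f}s")
```

With a fixed threshold, more racers reach the barrier sooner, so mean RT falls with N; letting the threshold grow with log(N) recovers the increasing, roughly log-linear pattern of Eq. (5).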
This observation, that fixation patterns adapt to set size, brings in two other active areas of research, both falling under the realm of attentional effects on choice (Caplin, 2016).48 First, there is the "consideration set" literature (Eliaz & Spiegler, 2011; Masatlioglu et al., 2012), which builds on the reasonable assumption that in a large choice set, only a subset of all possible options is likely to be considered by the decision-maker. Second, the "sequential search" literature (Caplin, Dean, & Martin, 2011) is relevant, as a decision-maker must employ some sort of search process to navigate large choice sets. In sufficiently large choice sets that require significant cognitive effort in searching for and comparing options, the effect on RT of an increase in choice set size cannot be predicted without more knowledge of the choice process. For example, a decision-maker may employ a process that completes the same number of searches regardless of the total number of available options. In this case, RT would not necessarily be expected to increase with choice set size (Geng, 2016).

3.9. Summary

This section covered eight different empirical observations relating RT to choice behavior and cognitive processes. Four of the eight (option discriminability, decision threshold, biases, and errors) are integral parts of the DDM. The effects are quite robust in economics, as demonstrated by the range of experimental tasks discussed. Sequential effects have also been analyzed using the DDM and are a common feature of almost any RT dataset. The final three (attention, overall value, and choice set size) have received varying levels of scrutiny in economics, but all are crucial to understanding the choice process. The DDM presents a set of computations, captured by the parameters of the model, that helps separate and explain many of these RT phenomena. It is worth noting that the interpretation of RT data through the lens of the DDM, as has been done in this section, does not require fitting a DDM to the data.
However, while the connections are clear in some instances (e.g., option discriminability), they are less clear in others (e.g., attention). Given the existing evidence, there is good reason to believe that refinements of the existing class of sequential sampling models, tailored to better integrate fundamental economic concepts, will complement the unifying concepts of models like the DDM. Section 4 will return to this belief and discuss some frontiers in examining the strong connection between RT and choice.

4. Looking forward

Unpacking the RT literature in economics reveals an impressive array of applications, yet also shows empirical consistencies across a variety of choice environments. As non-choice data, however, RT currently resides outside the mainstream analytic framework in economics. This section briefly highlights how that is starting to change, looking at recent developments in the measurement of RT and one of the more promising applications of RT: prediction.49

46 There exist extensive explorations of the exact functional form of this relationship, with log(N − 1), log(N), and log(N + 1) corresponding to different potential accumulate-to-threshold models (McMillen & Holmes, 2006). Also, it is worth noting that even within perceptual decision-making, Hick's Law is sometimes violated, with RT sometimes being independent of N (Luce, 1986).
47 There has been work to extend the DDM to three options (Krajbich & Rangel, 2011), and similar models have been used in other choice environments with four-option perceptual choices (Churchland et al., 2008).
48 It is worth noting, though, a clear connection between Hick's Law and information theory (Luce, 1986). As stated in Hick (1952), the RT necessary to respond to a stimulus is proportional to the amount of information contained in the stimulus.
49 A more detailed discussion of recent advances in theory can be found in the Appendix.


4.1. Enhanced measures of process

As process models of choice, like the DDM, advance in economics, the demand for RT data will grow, as those models will have predictions that cannot be tested with only choice data. The question then becomes what kinds of RT data are needed. The bulk of the existing literature employs a measure of RT with two traits: first, RT is measured as the amount of time from the beginning of a trial until a (final) choice response is collected, and second, RT is endogenously determined by the decision-maker. There are, however, more innovative measures available. Depending upon the goals of the researcher, constrained or unconstrained measurements of RT might be more appropriate, as might decomposing the single RT measure into several RT measures per choice.50 This section briefly discusses enhanced measures of RT and other measures of process.

There are several reasons to collect enhanced RT measures pertaining to a single decision. Consider the effect of exogenous time constraints on a choice process. A corollary of endogenous RT is that time allocated to the consideration of components (e.g., different options, different attributes) of the decision is also endogenous. An exogenous manipulation, such as time pressure, is likely to constrain time allocated to the consideration of those components, presumably affecting how they are weighed. There is some evidence for this with various components of financial choices, such as potential losses (Kocher et al., 2013) or skewness of potential outcomes (Nursimulu & Bossaerts, 2014). It is also interesting to consider how RT might reflect different levels of thinking in a strategic environment. For example, conditional on having demonstrated that subjects have learned the correct strategy, being positioned at different points in a sequential game might require different amounts of backwards induction.
If it is assumed that more backwards induction requires more time, joint consideration of RT and choices at different positions in a sequential game could reveal the extent of backwards induction being used by subjects (Gneezy et al., 2010). Collection of "multiple RT" (Spiliopoulos & Ortmann, 2018) also provides an opportunity to measure the time course of initial and revised choices, possibly a more accurate reflection of how choices unfold in everyday life. Indeed, preferences or understanding of a choice environment may evolve over a relatively short amount of time (Agranov et al., 2015; Caplin et al., 2011). An experimental design could incentivize an initial fast response (and hence identify effects of time pressure), but then also collect a "revised" choice after individuals have had more time to consider their options. A recent application of this "double response" method indicated that extra deliberation reduced inequity aversion in two-person distribution games (Dyrkacz & Krawczyk, 2018). When first presented with possible distributions, subjects were less likely to choose unequal payments that were disadvantageous to themselves, resulting in lower payoffs to others. This aversion to unfavorable inequity was weaker in the second choice.

Although RT is a rich and complementary piece of data for better understanding the choice process, there are limits to what can be inferred from RT data alone (Krajbich et al., 2015a; Spiliopoulos & Ortmann, 2018). A natural question, then, is whether other pieces of process information might better reflect the initiation or termination of a decision process, such as eye movements (Arieli, Ben-Ami, & Rubinstein, 2011; Krajbich et al., 2010; Towal et al., 2013), mouse clicks on a computer (Brocas, Carrillo, Wang, & Camerer, 2014; Chen & Fischbacher, 2016; Payne, Bettman, & Johnson, 1988), or brain imaging (Smith, Douglas Bernheim, Camerer, & Rangel, 2014).
Section 3 provided several useful connections between RT and other measures of process. Recall the earlier discussion of errors and RT. Although the DDM provides a clean mechanistic explanation for why mistakes occur in choice (Fehr & Rangel, 2011), the ability to identify choice errors in an economic setting places constraints on experimental design. Looking at altruistic decision-making in a binary choice dictator game, Hutcherson et al. (2015) fit a DDM to the choice and RT data. A clever experimental twist reverses the stated choices with some probability. This manipulation is helpful in testing whether choices were mistakes. If a subject continues to accumulate evidence after responding, they may realize a choice was a mistake. If such a choice is then reversed, they will experience relief, as the reversed outcome is rewarding. Hutcherson et al. (2015) find neural evidence for such a "relief signal," corroborating the DDM error predictions derived from the choice and RT data.

4.2. Additional metrics for prediction

A corollary of expanded theory and improved measurement, for any phenomenon, is enhanced prediction. Given its extensive history of studying RT, psychology has, not surprisingly, long recognized the ability of RT to aid in the prediction of future choices (Jamieson & Petrusic, 1977). There are several ways for the predictive power of RT to manifest itself in economics. An essential test for any model claiming predictive power is its ability to predict out-of-sample choices, or choices not yet observed when the model was identified. A growing literature in economics aims to identify process data that are helpful towards this aim (Camerer, 2013; Smith et al., 2014). Recently, when paired with the DDM, RT have been shown to improve out-of-sample predictions of binary choices (Clithero, 2018).
Importantly, it was not the presence of RT alone that improved predictions, but the ability to use RT data jointly with choice data to estimate a structural model of choice. RT on its own might also allow for predictions of choice in certain contexts, such as revealing a subject’s threshold strategy in a global game experiment (Schotter & Trevino, 2015). There is also evidence that RT provides an additional dimension to construct “typologies” for decision-makers, leading to predictions about the behavior of different “types” of individuals across choice domains, as opposed to predictions about individual decisions alone (Rubinstein, 2016; Schotter & Trevino, 2015). Improved prediction of choices may be of first-order importance for economics, but there are benefits to process prediction.

50 Spiliopoulos and Ortmann (2018) have an extensive discussion of RT measurement options, and emphasize the benefits of studying time pressure.

In terms


of model selection, having another dimension along which to make predictions provides a more stringent test. For example, as noted several times in this paper, a common claim in the literature is that longer RT is indicative of more cognitive effort and a greater chance of being correct (Rubinstein, 2007, 2013). Another hypothesis is that RT might reflect attention to a task (Caplin, 2016; Caplin & Daniel, 2013), or whether all provided attributes of an option are being considered (Börger, 2016). Appropriate process models can either lend support to or refute these claims (Krajbich et al., 2015a): if a model makes specific process predictions, such as RT, those predictions can be tested. Thus, process predictions may help differentiate between models making identical choice predictions. Further, understanding the choice process a decision-maker deploys is likely to be useful for predicting how future choices will be approached. In other words, process predictions can help economists know which model is best to use for predicting future choices (Rustichini, 2009). There is some evidence that such exercises are feasible. Chabris, Morris, Taubinsky, Laibson, and Schuldt (2009) successfully test a model for predicting RT from binary intertemporal choice decisions. More recently, Clithero (2018) shows that it is possible to use the DDM to make out-of-sample RT predictions for binary choices not seen before by the decision-maker.

5. Conclusion

This paper has characterized the RT literature in economics, demonstrating the usefulness of studying RT through the lens of an established structural model of the choice process. Key predictions of the model are found in RT data across the literature, underscoring the importance of sequential sampling models for economics. The upshot is that RT, as a process measure, can be of interest in and of itself to the economist.
The reasoning is simple: understanding the choice process leads to better understanding and prediction of choices, and RT provide a straightforward and (usually) costless means to measure the choice process. Choice data are often insufficient to separate the predictions of competing choice models; RT data provide an additional dimension of heterogeneity (Fischbacher et al., 2013; Rustichini, 2009; Spiliopoulos & Ortmann, 2018) and are thus a boon for both theory and empirical work. A careful reading of the literature reveals that the most compelling use of RT occurs when it is introduced in a principled way, in conjunction with a process model. In many such studies, the DDM or a similar accumulate-to-threshold model is used (Fehr & Rangel, 2011). This class of models has proved quite powerful across a wide variety of tasks in the cognitive sciences, particularly in non-strategic choice environments. One of the more appealing features of the DDM is that it is a structural model of choice that can jointly account for the distributions of both choice and RT data. The richness of the DDM is sufficient to provide interpretable accounts for multiple sources of changes in choice behavior. The surveyed experimental literature also shows great promise for applied work. The DDM and its general class of models have been used to more efficiently estimate other preference parameters of interest, such as risk aversion or loss aversion (Bhui, 2015; Webb, 2018), which could improve the external validity of such parameters. A better understanding of RT, and thus decision time, could also help with institutional design. Consider an implication of sequential sampling models for dynamic auctions: the frequency of bids should decrease as the standing bid approaches bidders’ reservation prices (Chipty, Cosslett, & Dunn, 2015).
For financial markets, the amount of time a floor trader has to complete a stock purchase will likely affect how stock attributes are compared (Nursimulu & Bossaerts, 2014). Similar effects have been observed in car auctions, where auctioneers can control the speed at which bidders must make decisions (Lacetera, Larsen, Pope, & Sydnor, 2016). Some important limitations for the DDM and other sequential sampling models currently exist as well. As a recent example, Rubinstein (2016) highlights the potential for RT to help differentiate between instinctive and contemplative decision-making processes in a variety of strategic settings. Although the DDM could be used to reinterpret and account for a significant portion of variation in RT (Krajbich et al., 2015a), a conceptual gap still exists between these applications of RT. New work in theory (e.g., Fudenberg et al. (2015), Woodford (2016), and Alós-Ferrer (2018)) demonstrates tremendous room for innovation and explanatory power above and beyond the standard DDM framework. There are abundant opportunities, as discussed in Section 4, to advance the literature in these areas. Clearly, RT is in the early stages of being incorporated into economics. Although several RT-related trends are apparent in the experimental literature, they have thus far received minimal attention in economics. One likely reason for this delay has been the absence of a clean way to integrate RT into choice models. Cognitive science provides a firm foundation: sequential sampling models hold significant promise for interpreting process data, and predict many phenomena observed in economics experiments. The integration of economic and cognitive concepts will motivate novel and layered models of behavior not obtainable with only choice data. The study of response times provides economics both a new objective and a new constraint, and the inclusion of both will prove fruitful for the field.

Appendix A.
Theories of process and choice

The emergence of considerations of process in theories of choice in economics stems primarily from the goal of understanding the source of noise in choice. However, the increasing use of process data like RT in experimental economics is now beginning to engender new attempts to formalize the relationship between choice and process. By jointly considering choice and process, economics can build on the existing work in cognitive science. As is routinely demonstrated in their applications, sequential sampling models provide an explanation for both stochasticity in choice and the full distribution of RT, so it is not surprising to see sequential sampling models (often the DDM) as the motivation behind several recent theory papers in economics looking to explore both noise and process. Given the various motivations for considering process in economics, there are several relevant strands of literature. A brief tour of this burgeoning area reveals important cross-fertilization between economics and the cognitive sciences, and in distinct branches of literature,


there is significant theoretical overlap. There are also some unexplored domains which might prove particularly fruitful for economics.51 One early goal of choice-process models in the cognitive sciences was to explain context effects (Busemeyer & Townsend, 1993). Several applications of sequential sampling models to preferential choice provided accounts for multiple context effects, such as similarity effects or decoy effects. The initial domain of application was risky choice, where different attributes of risky monetary gambles (probability and outcome magnitude) were allowed to have different weights in the noisy comparison process (Busemeyer & Townsend, 1993). A relatively early discussion of cognitive models in economics can be found in Rieskamp et al. (2006), comparing the predictive ability of static models to dynamic models. Natenzon (2017) is a more recent economics paper that develops a dynamic model of discrete choice, a Bayesian learning process similar to the comparison process in the DDM. Allowing values of options to be correlated, Natenzon (2017) can account for similarity between options, making it possible to explain attraction and decoy effects. Another recent model is also able to explain many context effects by combining a sequential sampling process with a process for selecting attributes of options that are ultimately sampled and compared (Bhatia, 2013).52 An elegant property of these models is that they make predictions about how choice probabilities – and hence the strength of context effects – evolve from the beginning of the decision-making process until its termination. An assumption made in the models of Natenzon (2017) and Bhatia (2013), as well as in the canonical DDM, is an exogenous threshold, or stopping rule. Typically, this will mean that the threshold, or barrier, is fixed and independent of time. 
This assumption can be relaxed or replaced in a variety of ways, and several papers have explored the implications of such a modification. An early paper in economics implemented “cognitive costs” in a dynamic programming framework (Gabaix & Laibson, 2005), allowing the decision-maker to optimize the amount of time spent comparing options. If there are constant costs and known possible payoffs from sampling, agents will allocate more time to deciding between options of similar value (Gabaix & Laibson, 2005), accounting for the inverse relationship between RT and option discriminability. A related result can be found if instead a decision-maker must make a decision in some finite amount of time and discounts the future (i.e., after every sample). In this scenario, the decision-maker endogenously decreases the decision threshold with time, according to the severity of their discount factor (Brocas, 2012). Collapsing barriers are also optimal for a decision-maker with time-dependent costs or an exogenously enforced time limit for a decision (Drugowitsch, Moreno-Bote, Churchland, Shadlen, & Pouget, 2012; Frazier & Yu, 2008). Similarly, if the gains from sampling are not known and must be learned via Bayesian updating, the optimal barrier is again decreasing in time (Fudenberg et al., 2015; Tajima et al., 2016).53 More work is needed to connect the data to this particular theoretical prediction, however. Despite the many claims that collapsing barriers are optimal in a binary choice environment, experimental evidence from perceptual decision-making tasks finds better fits for models with fixed barriers than for models with various forms of collapsing barriers (Hawkins, Forstmann, Wagenmakers, Ratcliff, & Brown, 2015). An important extension will be testing whether or not such a gap between optimal and observed barriers exists in more complex choice environments.
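A small Monte Carlo sketch can make the contrast between fixed and collapsing barriers tangible. This is an illustration only: the linear collapse schedule, the deadline, and all parameter values are assumptions for exposition, not a specification from the papers above.

```python
import math
import random

def ddm_rt(mu=0.2, a=2.0, sigma=1.0, dt=0.001, collapse_at=None, rng=random):
    """One simulated trial with barriers at 0 and a (start at a/2).

    If collapse_at is set, both barriers close linearly onto the midpoint
    at time collapse_at, forcing a decision by that deadline.
    """
    mid = a / 2.0
    x, t = mid, 0.0
    while True:
        gap = mid if collapse_at is None else mid * max(0.0, 1.0 - t / collapse_at)
        if x >= mid + gap:
            return 1, t   # chose x
        if x <= mid - gap:
            return 0, t   # chose y
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
```

Averaging over many trials, the collapsing-barrier version never exceeds the deadline and has a shorter mean RT, at the cost of lower accuracy on hard trials, which is precisely the trade-off the optimality results above formalize.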
A complementary strand of literature is also emerging that uses a rational inattention framework to explain the dynamics of the decision-making process. Woodford (2016) identifies the optimal accumulation process for a decision-maker given sampling history and a time-invariant threshold, subject to the decision-maker’s information-processing constraint. The model provides straightforward RT predictions, which prove comparable to those from an attentional variant of the DDM designed to integrate eye-tracking data (Krajbich et al., 2010). Another dynamic model using the rational inattention framework considers how different levels of information capacity per period (i.e., per sample) affect choice probabilities and RT (Steiner, Stewart, & Matějka, 2017). Assuming a constant cost of sampling, both relatively low and relatively high processing capacity will result in faster decisions. The intuition is that at both extremes of capacity, there is little incentive to sample over many periods. With low capacity, the cost of sampling more is large relative to the value of the additional information that can be acquired per period, and with high capacity, a lot of information can be processed per sample, leading to quick decisions.54 Another avenue to pursue would be to assume the DDM is one of several possible decision “strategies,” or that there are several different evidence accumulation processes with different properties. Caplin and Martin (2016) propose one such “dual process” framework: a DDM is used if it is not too “costly” and a simpler, faster process is used otherwise. Alós-Ferrer (2018) presents a dual process model that treats both “valuation by calculation” (utility) and “valuation by feeling” (heuristic) as diffusion processes. Both processes accumulate evidence for a given choice environment, but a probability determines whether the ultimate choice follows the utility or heuristic process.
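The mixture logic of such dual-process accounts can be sketched in a few lines of simulation. The following is a stylized illustration, not the Alós-Ferrer (2018) specification itself: the drift rates, threshold, and mixing probability are arbitrary assumptions.

```python
import math
import random

def ddm_trial(mu, a=2.0, sigma=1.0, dt=0.001, rng=random):
    """One accumulate-to-threshold trial (start z = a/2); returns (choice, rt)."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= a else 0), t

def dual_process_trial(mu_utility, mu_heuristic, p_heuristic, rng=random):
    """With probability p_heuristic the stated choice follows the (possibly
    biased) heuristic accumulator; otherwise it follows the utility one."""
    if rng.random() < p_heuristic:
        return ddm_trial(mu_heuristic, rng=rng)
    return ddm_trial(mu_utility, rng=rng)
```

When the heuristic drift points toward the wrong option, the mixture generates errors whose RT distribution differs from that of the utility process, which is what lets choice and RT data jointly identify the mixing probability.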
The Alós-Ferrer (2018) model also has the nice feature of providing a parsimonious explanation of both fast and slow errors. Diederich and Trueblood (2018) also present a dual process framework, applying it to risky choice. One open question for many dual process models is whether the “two systems” operate in a serial or parallel manner, although serial is more frequently assumed. Diederich and Trueblood (2018) compare serial and parallel versions of dual process sequential sampling models for risky choice data and find stronger evidence for serial processing (i.e., “intuitive” followed by “deliberate”). There are also several unexplored domains. One concerns an assumption of a common component in sequential sampling models that is often not discussed: the notion of a single comparison process. Indeed, more investigation is needed, across all of economics, psychology, and neuroscience, as to how many accumulating neurons in the brain map onto a single accumulation process like the

51 Additional discussion of relevant work is provided in Rieskamp, Busemeyer, and Mellers (2006), Woodford (2016), Fudenberg et al. (2015), and Alós-Ferrer (2018).
52 This is an important extension, as most models of multi-attribute choice simply assume all available attributes are integrated into a decision. Bhatia (2013) adds another layer, in effect another choice process, that decides which available attributes are compared. A detailed background on earlier models in the psychology literature is also provided in Bhatia (2013).
53 The framework of Fudenberg et al. (2015) specifies how a decision-maker’s cost function (assuming consideration of available options is costly) can be identified. Tajima et al. (2016) formalize how a priori beliefs about rewards across choice trials influence the optimal threshold levels.
54 This result has several powerful implications for RT.
First, the level of capacity could affect the severity of the inverse-U shaped RT distributions within subjects that are often found in binary choice data. Second, if capacity varies across individuals, it could affect the interpretability of any relationship between choice probability and RT across subjects (Steiner et al., 2017).


DDM (Zandbelt, Purcell, Palmeri, Logan, & Schall, 2014). On the other end of the spectrum, the limits of sequential sampling models as an algorithmic account of the choice process merit further consideration. For larger choice sets, additional assumptions are likely necessary to ensure choices are made in a reasonable amount of time (Reutskaja et al., 2011). Finally, sequential sampling models like the DDM were not developed to consider strategic interactions, although the current RT literature in economics demonstrates that many strategic environments generate data with properties similar to those predicted by a DDM (e.g., Rand, Fudenberg, & Dreber (2015)). The existing literature also highlights RT as a pragmatic means for identifying the strategy employed by decision-makers.55 As process models expand in economics, careful consideration will be required to understand how model spaces and model predictions overlap. Importantly, some work along these lines already exists in psychology (Bogacz et al., 2006; Teodorescu & Usher, 2013) and economics (Webb, 2018). Axiomatic approaches to RT and other process measures will also be essential to fully understand the implications of models (Echenique & Saito, 2017). A further hurdle for process models is clear specification of what non-choice data are necessary to test their predictions, an empirical challenge currently being undertaken in the study of attention (Caplin, 2016; Caplin & Dean, 2015; Masatlioglu et al., 2012).

Appendix B. Additional equations

This section contains the equations for expected RT in the DDM in the event that $z \neq \frac{1}{2}a$. Continuing with the notation from Section 2, a decision-maker is choosing between options $x$ and $y$. With the assumption $\mu > 0$, the expected RT (dropping the constant $T$) for a choice of $x$ is:

$$E[RT_x] = \frac{a}{\mu}\coth\left(\frac{a\mu}{\sigma^2}\right) - \frac{z}{\mu}\coth\left(\frac{z\mu}{\sigma^2}\right). \tag{A1}$$

The expected RT for a choice of $y$ is:

$$E[RT_y] = \frac{a}{\mu}\coth\left(\frac{a\mu}{\sigma^2}\right) - \frac{a-z}{\mu}\coth\left(\frac{(a-z)\mu}{\sigma^2}\right). \tag{A2}$$

The hyperbolic cotangent function used here is $\coth(x) = \frac{e^x + e^{-x}}{e^x - e^{-x}}$. Note that $\coth(x) = \frac{1}{\tanh(x)}$. These RT expressions have been previously derived in several different ways (e.g., Palmer et al. (2005), Smith (1990), Luce (1986), and Bogacz et al. (2006)).56

By assuming $z \geq \frac{1}{2}a$, it holds that $E[RT_x] \leq E[RT_y]$. In order for $E[RT_x] < E[RT_y]$, it must be that

$$\frac{z}{\mu}\coth\left(\frac{z\mu}{\sigma^2}\right) > \frac{a-z}{\mu}\coth\left(\frac{(a-z)\mu}{\sigma^2}\right),$$

which holds so long as $z > \frac{1}{2}a$.

In order to show that Eq. (A1) equals Eq. (3) when $z = \frac{1}{2}a$, the following property of the tanh function is helpful:

$$\tanh(a + b) = \frac{\tanh(a) + \tanh(b)}{1 + \tanh(a)\tanh(b)}.$$

Now, if $z = \frac{1}{2}a$, then Eq. (A1) (the same substitution could be done with Eq. (A2)) becomes:

$$E[RT_x] = \frac{a}{\mu}\coth\left(\frac{a\mu}{\sigma^2}\right) - \frac{a}{2\mu}\coth\left(\frac{a\mu}{2\sigma^2}\right).$$

Let $\alpha = \frac{a\mu}{2\sigma^2}$, meaning $\frac{a\mu}{\sigma^2} = 2\alpha$. Then,

$$
\begin{aligned}
E[RT_x] &= \frac{a}{\mu}\coth(2\alpha) - \frac{1}{2}\frac{a}{\mu}\coth(\alpha) \\
&= \frac{a}{\mu}\frac{1}{\tanh(2\alpha)} - \frac{1}{2}\frac{a}{\mu}\frac{1}{\tanh(\alpha)} \\
&= \frac{a}{\mu}\frac{1 + \tanh(\alpha)\tanh(\alpha)}{\tanh(\alpha) + \tanh(\alpha)} - \frac{1}{2}\frac{a}{\mu}\frac{1}{\tanh(\alpha)} \\
&= \frac{a}{2\mu\tanh(\alpha)}\left(1 + \tanh(\alpha)\tanh(\alpha)\right) - \frac{a}{2\mu\tanh(\alpha)} \\
&= \frac{a}{2\mu\tanh(\alpha)}\tanh(\alpha)\tanh(\alpha) \\
&= \frac{a}{2\mu}\tanh(\alpha) \\
&= \frac{a}{2\mu}\tanh\left(\frac{a\mu}{2\sigma^2}\right).
\end{aligned}
$$

55 See Schotter and Trevino (2015) and Spiliopoulos and Ortmann (2018) for additional discussion. There have not been too many attempts yet to apply sequential sampling models to strategic environments, but for exceptions see Spiliopoulos (2013) and Konovalov and Krajbich (2016a).
56 In the event that $\mu = 0$, $E[RT_x] = \frac{(a-z)^2 + 2(a-z)z}{3\sigma^2}$ and $E[RT_y] = \frac{z^2 + 2(a-z)z}{3\sigma^2}$.
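The closed-form expressions above are easy to check numerically. The following snippet verifies, for one arbitrary set of parameter values, that Eqs. (A1) and (A2) coincide with Eq. (3) at z = a/2 and that starting closer to x's barrier makes x-choices faster:

```python
import math

def coth(w):
    return 1.0 / math.tanh(w)

def exp_rt_x(a, z, mu, s2):
    # Eq. (A1): expected RT for a choice of x (upper barrier at a)
    return (a / mu) * coth(a * mu / s2) - (z / mu) * coth(z * mu / s2)

def exp_rt_y(a, z, mu, s2):
    # Eq. (A2): expected RT for a choice of y (lower barrier at 0)
    return (a / mu) * coth(a * mu / s2) - ((a - z) / mu) * coth((a - z) * mu / s2)

a, mu, s2 = 2.0, 0.5, 1.0
eq3 = (a / (2.0 * mu)) * math.tanh(a * mu / (2.0 * s2))   # Eq. (3)
assert abs(exp_rt_x(a, a / 2.0, mu, s2) - eq3) < 1e-12
assert abs(exp_rt_y(a, a / 2.0, mu, s2) - eq3) < 1e-12
assert exp_rt_x(a, 1.5, mu, s2) < exp_rt_y(a, 1.5, mu, s2)  # z > a/2
```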


This last expression is Eq. (3).

Appendix C. Table summaries of studies

This section contains tables corresponding to the observations discussed in Section 3 of the main text. The observations each have a separate table and are presented in the same order here as they are in Section 3 (see Tables A.1–A.8).

Table A1. Experiments showing RT consistent with sequential effects.

Authors | Choice environment
Anderhub et al. (2000) | Intertemporal savings allocation
Rustichini et al. (2005) | Binary risky choice
Kocher and Sutter (2006) | Beauty-contest games with time pressure
Brown et al. (2008) | Public, private, monetary goods
Piovesan and Wengström (2009) | Modified dictator game
Gneezy et al. (2010) | Sequential decision game
Fischbacher et al. (2013) | Binary choice in mini-ultimatum games
Lotito et al. (2013) | Public goods games
McKinney and Van Huyck (2013) | Perfect information game
Achtziger and Alós-Ferrer (2014) | Posterior probability task
Agranov and Ortoleva (2017) | Binary risky choice
Frydman and Nave (2017) | Perceptual discrimination task

Table A2. Experiments consistent with RT increasing as option discriminability decreases.

Authors | Choice environment
Busemeyer and Townsend (1993) | Binary risky choice
Diederich (2003) | Mixtures of money and aversive noises
Moffatt (2005) | Binary risky choice
Rustichini et al. (2005) | Binary risky choice
Brown et al. (2008) | Public, private, monetary goods
Chabris et al. (2008) | Intertemporal choice
Piovesan and Wengström (2009) | Modified dictator game
Krajbich et al. (2010) | Binary food choice
Milosavljevic et al. (2010) | Binary food choice
Krajbich and Rangel (2011) | Food choice sets of 3
Krajbich et al. (2012) | Purchasing decisions
Rand et al. (2012) | Public goods games
Soltani et al. (2012) | Binary risky choice
Philiastides and Ratcliff (2013) | Branding and clothing items
Achtziger and Alós-Ferrer (2014) | Posterior probability task
Conte et al. (2014) | Binary risky choice
Crockett et al. (2014) | Mixtures of money and electric shocks
Dai and Busemeyer (2014) | Intertemporal choice
Krajbich et al. (2014) | Binary food choice
Navarro-Martinez et al. (2014) | Binary risky choice
Rodriguez et al. (2014) | Intertemporal choice
Shenhav et al. (2014) | Foraging task
Ferguson et al. (2014) | Modified ultimatum game
Börjesson and Mogens (2015) | Transportation choices
Cappelen et al. (2016) | Dictator game
Alós-Ferrer et al. (2016) | Binary risky choice
Brañas-Garza et al. (2015) | Bargaining games
Chen and Fischbacher (2015) | Binary three-person dictator games
Evans et al. (2015) | Public goods games
Krajbich et al. (2015a) | Public goods games and intertemporal choice
Krajbich et al. (2015b) | Dictator and ultimatum games
Rand et al. (2015) | Repeated prisoner's dilemma
Schotter and Trevino (2015) | Global games
Chen and Fischbacher (2016) | Social value orientation task
Dutilh and Rieskamp (2016) | Binary risky choice
Gerhardt et al. (2016) | Binary risky choice
Konovalov and Krajbich (2016b) | Binary dictator game, intertemporal choice, risky choice
Stewart et al. (2016) | Binary risky choice
Agranov and Ortoleva (2017) | Binary risky choice
Clithero (2018) | Binary food choice
Merkel and Lohse (2018) | Binary dictator and prisoner's dilemma games


Table A3. Experiments consistent with a decision threshold or threshold manipulation.

Authors | Choice environment
Busemeyer (1985) | Binary choice between gambles
Wilcox (1993a) | Binary choice between gambles
Kocher and Sutter (2006) | Beauty-contest games with time pressure
Simen et al. (2009) | Random dot motion discrimination
Milosavljevic et al. (2010) | Binary food choice
Green et al. (2012) | Random dot motion discrimination
Rand et al. (2012) | Public goods games
Lindner (2014) | Market entry games
Nielsen et al. (2014) | Public goods games
Caplin and Martin (2016) | Choice between arithmetic expressions of money
Börger (2016) | Online choice survey
Agranov and Ortoleva (2017) | Binary risky choice

Table A4. Experiments looking at the relationship between RT and various choice biases.

Authors | Choice environment
Simen et al. (2009) | Random-dot motion discrimination
Summerfield and Koechlin (2010) | Varying incentives in perceptual discrimination
Leite and Ratcliff (2011) | Binary numerosity discrimination
Mulder et al. (2012) | Random-dot motion discrimination
Achtziger and Alós-Ferrer (2014) | Posterior probability task
Dutilh and Rieskamp (2016) | Binary risky choice
Enax et al. (2016) | Binary food choice with nutrition labels
Frydman and Nave (2017) | Binary perceptual discrimination

Table A5. Papers looking at the relationship between RT and attention.

Authors | Choice environment
Wilcox (1993b) | Lottery pricing task
Krajbich et al. (2010) | Binary food choice
Krajbich and Rangel (2011) | Food choice sets of 3
Arad and Rubinstein (2012) | Colonel Blotto game
Krajbich et al. (2012) | Purchasing decisions
Caplin and Daniel (2013) | Choice between arithmetic expressions of money
Cavanagh et al. (2014) | Probabilistic reinforcement, binary choice
Caplin and Martin (2016) | Trinary choice between arithmetic expressions of money
Geng (2016) | Choice between arithmetic expressions of money
Martin (2016) | Buyer-seller game

Table A6. Experiments looking at the relationship between RT and choice errors.

Authors | Choice environment
Kocher and Sutter (2006) | Beauty-contest games with time pressure
Rubinstein (2007) | Various economic games
Milosavljevic et al. (2010) | Binary food choice
Krajbich et al. (2012) | Purchasing decisions
Caplin and Daniel (2013) | Choice between arithmetic expressions of money
Rubinstein (2013) | Various perceptual and economic choices
Philiastides and Ratcliff (2013) | Branding and clothing items
Woodford (2014) | Binary food choice
Agranov et al. (2015) | Guessing games
Geng (2016) | Choice between arithmetic expressions of money
Recalde et al. (2018) | Public goods games
Jimenez et al. (2018) | Cognitive reflection test


Table A7. Studies looking at the effect of overall value of the choice set on RT.

Authors | Choice environment
Hunt et al. (2012) | Binary risky choice
Ratcliff and Frank (2012) | Probabilistic reinforcement, binary choice
Cavanagh et al. (2014) | Probabilistic reinforcement, binary choice
Polanía et al. (2014) | Binary food choice
Shenhav and Buckner (2014) | Binary choices between real products

Table A8. Experiments looking at RT for choice sets larger than two.

Authors | Choice environment
Caplin et al. (2011) | Arithmetic expressions of money, sets of 10, 20, 40
Krajbich and Rangel (2011) | Food choice sets of 3
Reutskaja et al. (2011) | Food choice sets of 4, 9, 16
Caplin and Daniel (2013) | Arithmetic expressions of money, sets of 3
Towal et al. (2013) | Food choice sets of 4
Geng (2016) | Arithmetic expressions of money, sets of 20, 40

References

Achtziger, A., & Alós-Ferrer, C. (2014). Fast or rational? A response-times study of Bayesian updating. Management Science, 60(4), 923–938.
Agranov, M., Caplin, A., & Tergiman, C. (2015). Naive play and the process of choice in guessing games. Journal of the Economic Science Association, 1(2), 146–157.
Agranov, M., & Ortoleva, P. (2017). Stochastic choice and preferences for randomization. Journal of Political Economy, 125(1), 40–68.
Alós-Ferrer, C. (2018). A dual-process diffusion model. Journal of Behavioral Decision Making, 31(2), 203–218.
Alós-Ferrer, C., Granić, D.-G., Kern, J., & Wagner, A. K. (2016). Preference reversals: Time and again. Journal of Risk and Uncertainty, 52(1), 65–97.
Anderhub, V., Güth, W., Müller, W., & Strobel, M. (2000). An experimental analysis of intertemporal allocation behavior. Experimental Economics, 3(2), 137–152.
Arad, A., & Rubinstein, A. (2012). Multi-dimensional iterative reasoning in action: The case of the Colonel Blotto game. Journal of Economic Behavior & Organization, 84(2), 571–585.
Arieli, A., Ben-Ami, Y., & Rubinstein, A. (2011). Tracking decision makers under uncertainty. American Economic Journal: Microeconomics, 3(4), 68–76.
Basten, U., Biele, G., Heekeren, H. R., & Fiebach, C. J. (2010). How the brain integrates costs and benefits during decision making. Proceedings of the National Academy of Sciences, 107(50), 21767–21772.
Becker, G. M., DeGroot, M. H., & Marschak, J. (1964). Measuring utility by a single-response sequential method. Behavioral Science, 9(3), 226–232.
Bhatia, S. (2013). Associations and the accumulation of preference. Psychological Review, 120(3), 522–543.
Bhui, R. (2015). Falling behind: Time and expectations. Working Paper.
Birnbaum, M. H., & Jou, J.-W. (1990). A theory of comparative response times and “difference” judgments. Cognitive Psychology, 22(2), 184–210.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006).
The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4), 700–765.
Bogacz, R., Wagenmakers, E.-J., Forstmann, B. U., & Nieuwenhuis, S. (2010). The neural basis of the speed-accuracy tradeoff. Trends in Neurosciences, 33(1), 10–16.
Bollimunta, A., & Ditterich, J. (2012). Local computation of decision-relevant net sensory evidence in parietal cortex. Cerebral Cortex, 22(4), 903–917.
Börger, T. (2016). Are fast responses more random? Testing the effect of response time on scale in an online choice experiment. Environmental and Resource Economics, 65(2), 389–413.
Börjesson, M., & Mogens, F. (2015). Response time patterns in a stated choice experiment. MPRA Working Paper No. 62002.
Bouwmeester, S., Verkoeijen, P. P. J. L., Aczel, B., Barbosa, F., Bègue, L., Brañas-Garza, P., ... Wollbrant, C. E. (2017). Registered replication report: Rand, Greene, and Nowak (2012). Perspectives on Psychological Science, 12(3), 527–542.
Brañas-Garza, P., Meloso, D., & Miller, L. (2015). Strategic risk and response time across games. Working Paper.
Britten, K. H., Shadlen, M. N., Newsome, W. T., & Movshon, J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12(12), 4745–4765.
Brocas, I. (2012). Information processing and decision-making: Evidence from the brain sciences and implications for economics. Journal of Economic Behavior & Organization, 83(3), 292–310.
Brocas, I., Carrillo, J. D., Wang, S. W., & Camerer, C. F. (2014). Imperfect choice or imperfect attention? Understanding strategic thinking in private information games. Review of Economic Studies, 81(3), 944–970.
Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57(3), 153–178.
Brown, T. C., Kingsley, D., Peterson, G. L., Flores, N. E., Clarke, A., & Birjulin, A. (2008).
Reliability of individual valuations of public and private goods: Choice consistency, response time, and preference refinement. Journal of Public Economics, 92(7), 1595–1606.
Busemeyer, J. R. (1985). Decision making under uncertainty: A comparison of simple scalability, fixed-sample, and sequential-sampling models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(3), 538–564.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic cognitive approach to decision-making in an uncertain environment. Psychological Review, 100(3), 432–459.
Camerer, C. F. (2013). Goals, methods, and progress in neuroeconomics. Annual Review of Economics, 5, 425–455.
Caplin, A. (2016). Measuring and modeling attention. Annual Review of Economics, 8, 379–403.
Caplin, A., & Daniel, M. (2013). Defaults and attention: The drop out effect. Working Paper.
Caplin, A., & Dean, M. (2015). Enhanced choice experiments. In G. R. Fréchette & A. Schotter (Eds.), Handbook of experimental economics methodology. Oxford University Press.
Caplin, A., Dean, M., & Martin, D. (2011). Search and satisficing. American Economic Review, 101(7), 2899–2922.
Caplin, A., & Martin, D. (2016). The dual-process drift diffusion model: Evidence from response times. Economic Inquiry, 54(2), 1274–1282.
Cappelen, A. W., Nielsen, U. H., Tungodden, B., Tyran, J.-R., & Wengström, E. (2016). Fairness is intuitive. Experimental Economics, 19(4), 727–740.
Cavanagh, J. F., Wiecki, T. V., Cohen, M. X., Figueroa, C. M., Samanta, J., Sherman, S. J., & Frank, M. J. (2011). Subthalamic nucleus stimulation reverses mediofrontal influence over decision threshold. Nature Neuroscience, 14(11), 1462–1467.
Cavanagh, J. F., Wiecki, T. V., Kochar, A., & Frank, M. J. (2014). Eye tracking and pupillometry are indicators of dissociable latent decision processes. Journal of

83

Journal of Economic Psychology 69 (2018) 61–86

J.A. Clithero

Experimental Psychology: General, 143(4), 1476–1488. Chabris, C. F., Laibson, D., Morris, C. L., Schuldt, J. P., & Taubinsky, D. (2008). Measuring intertemporal preferences using response times. NBER Working Paper 13453. Chabris, C. F., Morris, C. L., Taubinsky, D., Laibson, D., & Schuldt, J. P. (2009). The allocation of time in decision-making. Journal of the European Economic Association, 7(2-3), 628–637. Chen, F., & Fischbacher, U. (2015). Cognitive processes of distributional preferences: A response time study. Working Paper. Chen, F., & Fischbacher, U. (2016). Response time and click position: Cheap indicators of preferences. Journal of the Economic Science Association, 2(2), 109–126. Chipty, S., Cosslett, R., & Dunn, L. F. (2015). A race against the clock: Auctioneer strategies and selling mechanisms in live outcry auctions. SSRN Working Paper. Churchland, A. K., & Ditterich, J. (2012). New advances in understanding decisions among multiple alternatives. Current Opinion in Neurobiology, 22(6), 920–926. Churchland, A. K., Kiani, R., & Shadlen, M. N. (2008). Decision-making with multiple alternatives. Nature Neuroscience, 11(6), 693–702. Clithero, J. A. (2018). Improving out-of-sample predictions using response times and a model of the decision process. Journal of Economic Behavior & Organization, 148, 344–375. Conte, A., Hey, J. D., Soraperra, I. (2014). The determinants of decision time. Jena Economic Research Papers. Crockett, M. J., Kurth-Nelson, Z., Siegel, J. Z., Dayan, P., & Dolan, R. J. (2014). Harm to others outweighs harm to self in moral decision making. Proceedings of the National Academy of Sciences, 111(48), 17320–17325. Dai, J., & Busemeyer, J. R. (2014). A probabilistic, dynamic, and attribute-wise model of intertemporal choice. Journal of Experimental Psychology: General, 143(4), 1489–1514. Dashiell, J. F. (1937). Affective value-distances as a determinant of esthetic judgment-times. American Journal of Psychology, 50(1), 57–67. Diederich, A. 
(2003). Decision making under conflict: Decision time as a measure of conflict strength. Psychonomic Bulletin & Review, 10(1), 167–176. Diederich, A., & Trueblood, J. S. (2018). A dynamic dual process model of risky decision making. Psychological Review, 125(2), 270–292. Domenech, P., & Dreher, J.-C. (2010). Decision threshold modulation in the human brain. Journal of Neuroscience, 30(43), 14305–14317. Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N., & Pouget, A. (2012). The cost of accumulating evidence in perceptual decision making. Journal of Neuroscience, 32(11), 3612–3628. Dutilh, G., & Rieskamp, J. (2016). Comparing perceptual and preferential decision making. Psychonomic Bulletin & Review, 23(3), 723–737. Dutilh, G., Vandekerckhove, J., Forstmann, B. U., Keuleers, E., Brysbaert, M., & Wagenmakers, E.-J. (2012). Testing theories of post-error slowing. Attention, Perception, & Psychophysics, 74(2), 454–465. Dyrkacz, M., & Krawcyzk, M. (2018). Exploring the role of deliberation time in non-selfish behavior: The double response method. Journal of Behavioral and Experimental Economics, 72, 121–134. Echenique, F., & Saito, K. (2017). Response time and utility. Journal of Economic Behavior & Organization, 139, 49–59. Eliaz, K., & Spiegler, R. (2011). Consideration sets and competitive marketing. Review of Economic Studies, 78(1), 235–262. Enax, L., Krajbich, I., & Weber, B. (2016). Salient nutrition labels increase the integration of health attributes in food decision-making. Judgment and Decision Making, 11(5), 460–471. Ericson, K. M. M., & Fuster, A. (2015). The endowment effect. Annual Review of Economics, 6, 555–579. Evans, A. M., Dillon, K. D., & Rand, D. G. (2015). Fast but not intuitive, slow but not reflective: Decision conflict drives reaction times in social dilemmas. Journal of Experimental Psychology: General, 144(5), 951–966. Fehr, E., & Rangel, A. (2011). Neuroeconomic foundations of economic choice–recent advances. 
Journal of Economic Perspectives, 25(4), 3–30. Ferguson, E., Maltby, J., Bibby, P. A., & Lawrence, C. (2014). Fast to forgive, slow to retaliate: Intuitive responses in the ultimatum game depend on the degree of unfairness. PLoS:One, e96344. Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Economic Letters, 71(3), 397–404. Fischbacher, U., Hertwig, R., & Bruhin, A. (2013). How to model heterogeneity in costly punishment: Insights from responders’ response times. Journal of Behavioral Decision Making, 26(5), 462–476. Forstmann, B. U., Dutilh, G., Brown, S., Neumann, J., Yves von Cramon, D., Richard Ridderinkhof, K., & Wagenmakers, E.-J. (2008). Striatum and pre-SMA facilitate decision-making under time pressure. Proceedings of the National Academy of Sciences, 105(45), 17538–17542. Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J. (2016). Sequential sampling models in cognitive neuroscience: Advantages, applications, and extensions. Annual Review of Psychology, 67, 641–666. Frazier, P., & Yu, A. J. (2008). Sequential hypothesis testing under stochastic deadlines. Advances in Neural Information Processing Systems, 20, 465–472. Frydman, C., & Nave, G. (2017). Extrapolative beliefs in perceptual and economic decisions: Evidence of a common mechanism. Management Science, 63(7), 2340–2352. Fudenberg, D., Philipp S., & Strzalecki, T. (2015). Stochastic choice and optimal sequential sampling. Working Paper. Gabaix, X., & Laibson, D. (2005). Bounded rationality and directed cognition. Unpublished Working Paper. Gao, J., Wong-Lin, K. F., Holmes, P., Simen, P., & Cohen, J. D. (2009). Sequential effects in two-choice reaction time tasks: Decomposition and synthesis of mechanisms. Neural Computation, 21(9), 2407–2436. Geng, S. (2016). Decision time, consideration time, and status quo bias. Economic Inquiry, 54(1), 433–449. Gerhardt, H., Biele, G. P., Heekeren, H. R., & Uhlig, H. (2016). 
Cognitive load increases risk aversion. SFB Discussion Paper. Glimcher, P. W. (2011). Foundations of neuroeconomic analysis. Oxford University Press. Gluth, S., Rieskamp, J., & Büchel, C. (2012). Deciding when to decide: Time-variant sequential sampling models explain the emergence of value-based decisions in the human brain. Journal of Neuroscience, 32(31), 10686–10698. Gneezy, U., Rustichini, A., & Vostroknutov, A. (2010). Experience and insight in the race game. Journal of Economic Behavior & Organization, 75(2), 144–155. Gökaydin, D., Navarro, D. J., Ma-Wyatt, A., & Perfors, A. (2016). The structure of sequential effects. Journal of Experimental Psychology: General, 145(1), 110–123. Gold, J. I., & Shadlen, M. N. (2002). Banburismus and the brain: Decoding the relationship between sensory. Stimuli, Decisions, and Reward, Neuron, Neuron, 36(2), 299–308. Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574. Green, N., Biele, G. P., & Heekeren, H. R. (2012). Changes in neural connectivity underlie decision threshold modulation for reward maximization. Journal of Neuroscience, 32(43), 14942–14950. Hare, T. A., Schultz, W., Camerer, C. F., O’Doherty, J. P., & Rangel, A. (2011). Transformation of stimulus value signals into motor commands during simple choice. Proceedings of the National Academy of Sciences, 108(44), 18120–18125. Hawkins, G. E., Forstmann, B. U., Wagenmakers, E.-J., Ratcliff, R., & Brown, S. D. (2015). Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. Journal of Neuroscience, 35(6), 2476–2484. Heekeren, H. R., Marrett, S., & Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9(6), 467–479. Hey, J. D. (2005). Why we should not be silent about noise. Experimental Economics, 8(4), 325–345. Hick, W. E. (1952). On the rate of gain of information. 
Quarterly Journal of Experimental Psychology, 4(1), 11–26. Hunt, L. T., Kolling, N., Soltani, A., Woolrich, M. W., Rushworth, M. F. S., & Behrens, T. E. J. (2012). Mechanisms underlying cortical activity during value-guided choice. Nature Neuroscience, 15(3), 470–476. Hutcherson, C. A., Bushong, B., & Rangel, A. (2015). A neurocomputational model of altruistic choice and its implications. Neuron, 87(2), 451–462. Jamieson, D. G., & Petrusic, W. M. (1977). Preference and the time to choose. Organizational Behavior and Human Performance, 19(1), 56–67. Jensen, A. R. (2006). Clocking the mind: Mental chronometry and individual differences. Elsevier. Jimenez, N., Rodriguez-Lara, I., Tyran, J.-R., & Wengström, E. (2018). Thinking fast, thinking badly. Economics Letters, 162, 41–44. Kőszegi, B., & Rabin, M. (2006). A model of reference-dependent preferences. Quarterly Journal of Economics, 121(4), 1133–1165.
Kirby, N. H. (1976). Sequential effects in two-choice reaction time: Automatic facilitation or subjective expectancy? Journal of Experimental Psychology: Human Perception and Performance, 2(4), 567–577.
Kocher, M. G., Pahlke, J., & Trautmann, S. T. (2013). Tempus fugit: Time pressure in risky decisions. Management Science, 59(10), 2380–2391.
Kocher, M. G., & Sutter, M. (2006). Time is money – Time pressure, incentives, and the quality of decision-making. Journal of Economic Behavior & Organization, 61(3), 375–392.
Konovalov, A., & Krajbich, I. (2016a). On the strategic use of response times. Working Paper.
Konovalov, A., & Krajbich, I. (2016b). Revealed indifference: Using response times to infer preferences. Working Paper.
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual fixations and the computation and comparison of value in goal-directed choice. Nature Neuroscience, 13(10), 1292–1298.
Krajbich, I., Bartling, B., Hare, T., & Fehr, E. (2015a). Rethinking fast and slow based on a critique of reaction-time reverse inference. Nature Communications, 6, 7455.
Krajbich, I., Hare, T., Bartling, B., Morishima, Y., & Fehr, E. (2015b). A common mechanism underlying food choice and social decisions. PLoS Computational Biology, 11(10), e1004371.
Krajbich, I., Lu, D., Camerer, C., & Rangel, A. (2012). The attentional drift-diffusion model extends to simple purchasing decisions. Frontiers in Psychology, 3, 193.
Krajbich, I., Oud, B., & Fehr, E. (2014). Benefits of neuroeconomic modeling: New policy interventions and predictors of preference. American Economic Review: Papers & Proceedings, 104(5), 501–506.
Krajbich, I., & Rangel, A. (2011). Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proceedings of the National Academy of Sciences, 108(33), 13852–13857.
Krajbich, I., & Smith, S. M. (2015). Modeling eye movements and response times in consumer choice. Journal of Agricultural & Food Industrial Organization, 13(1), 55–72.
Lacetera, N., Larsen, B. J., Pope, D. G., & Sydnor, J. R. (2016). Bid takers or market makers? The effect of auctioneers on auction outcomes. American Economic Journal: Microeconomics, 8(4), 195–229.
Laming, D. R. J. (1968). Information theory of choice-reaction times. Academic Press.
Laming, D. (1979). Choice reaction performance following an error. Acta Psychologica, 43(3), 199–224.
Leite, F. P., & Ratcliff, R. (2011). What cognitive processes drive response biases? A diffusion model analysis. Judgment and Decision Making, 6(7), 651–687.
Lindner, F. (2014). Decision time and steps of reasoning in a competitive market entry game. Economics Letters, 122(1), 7–11.
Lotito, G., Migheli, M., & Ortona, G. (2013). Is cooperation instinctive? Evidence from the response times in a public goods game. Journal of Bioeconomics, 15(2), 123–133.
Luce, R. D. (1986). Response times: Their role in inferring elementary mental organization. Oxford University Press.
Martin, D. (2016). Rational inattention in games: Experimental evidence. Working Paper.
Masatlioglu, Y., Nakajima, D., & Ozbay, E. Y. (2012). Revealed attention. American Economic Review, 102(5), 2183–2205.
McFadden, D. (2001). Economic choices. American Economic Review, 91(3), 351–378.
McMillen, T., & Holmes, P. (2006). The dynamics of choice among multiple alternatives. Journal of Mathematical Psychology, 50(1), 30–57.
Merkel, A. L., & Lohse, J. (2018). Is fairness intuitive? An experiment accounting for subjective utility differences under time pressure. Experimental Economics.
Milosavljevic, M., Malmaud, J., Huth, A., Koch, C., & Rangel, A. (2010). The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure. Judgment and Decision Making, 5(6), 437–449.
Moffatt, P. G. (2005). Stochastic choice and the allocation of cognitive effort. Experimental Economics, 8(4), 369–388.
Mosteller, F., & Nogee, P. (1951). An experimental measure of utility. Journal of Political Economy, 59(5), 371–404.
Mulder, M. J., Wagenmakers, E.-J., Ratcliff, R., Boekel, W., & Forstmann, B. U. (2012). Bias in the brain: A diffusion model analysis of prior probability and potential payoff. Journal of Neuroscience, 32(7), 2335–2343.
Myrseth, K. O. R., & Wollbrant, C. E. (2017). Cognitive foundations of cooperation revisited: Commentary on Rand et al. (2012, 2014). Journal of Behavioral and Experimental Economics, 69, 133–138.
Natenzon, P. (2017). Random choice and learning. Journal of Political Economy.
Navarro-Martinez, D., Loomes, G., Isoni, A., & Butler, D. (2014). Sequential expected utility: Sequential sampling in economic decision making under risk. Working Paper.
Nicholas, M. C., & Van Huyck, J. B. (2013). Eureka learning: Heuristics and response time in perfect information games. Games and Economic Behavior, 79, 223–232.
Nielsen, U. H., Tyran, J.-R., & Wengström, E. (2014). Second thoughts on free riding. Economics Letters, 122(2), 136–139.
Nursimulu, A. D., & Bossaerts, P. (2014). Risk and reward preferences under time pressure. Review of Finance, 18(3), 999–1022.
Otter, T., Allenby, G. M., & van Zandt, T. (2008). An integrated model of discrete choice and response time. Journal of Marketing Research, 45(5), 593–607.
Oud, B., Krajbich, I., Miller, K., Cheong, J. H., Botvinick, M., & Fehr, E. (2016). Irrational time allocation in decision-making. Proceedings of the Royal Society B, 283(1822), 20151439.
Palmer, J., Huk, A. C., & Shadlen, M. N. (2005). The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5(5), 376–404.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 534–552.
Petrusic, W. M., & Jamieson, D. G. (1978). Relation between probability of preferential choice and time to choose changes with practice. Journal of Experimental Psychology: Human Perception and Performance, 4(3), 471–482.
Philiastides, M. G., & Ratcliff, R. (2013). Influence of branding on preference-based decision making. Psychological Science, 24(7), 1208–1215.
Piovesan, M., & Wengström, E. (2009). Fast or fair? A study of response times. Economics Letters, 105(2), 193–196.
Pirrone, A., Stafford, T., & Marshall, J. A. R. (2014). When natural selection should optimize speed-accuracy trade-offs. Frontiers in Neuroscience, 8, 73.
Pisauro, M. A., Fouragnan, E., Retzler, C., & Philiastides, M. G. (2017). Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI. Nature Communications, 8, 15808.
Polanía, R., Krajbich, I., Grueschow, M., & Ruff, C. C. (2014). Neural oscillations and synchronization differentially support evidence accumulation in perceptual and value-based decision making. Neuron, 82(3), 709–720.
Rabbitt, P., & Rodgers, B. (1977). What does a man do after he makes an error? An analysis of response programming. Quarterly Journal of Experimental Psychology, 29(4), 727–743.
Rand, D. G., Fudenberg, D., & Dreber, A. (2015). It's the thought that counts: The role of intentions in noisy repeated games. Journal of Economic Behavior & Organization, 116, 481–499.
Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427–430.
Rangel, A., & Clithero, J. A. (2013). The computation of stimulus values in simple choice. In P. W. Glimcher & E. Fehr (Eds.), Neuroeconomics: Decision making and the brain (2nd ed.). Elsevier.
Rapoport, A., & Burkheimer, G. J. (1971). Models for deferred decision making. Journal of Mathematical Psychology, 8(4), 508–538.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59–108.
Ratcliff, R. (1985). Theoretical interpretations of the speed and accuracy of positive and negative responses. Psychological Review, 92(2), 212–225.
Ratcliff, R. (2013). Parameter variability and distributional assumptions in the diffusion model. Psychological Review, 120(1), 281–292.
Ratcliff, R., & Frank, M. J. (2012). Reinforcement-based decision making in corticostriatal circuits: Mutual constraints by neurocomputational and diffusion models. Neural Computation, 24(5), 1186–1229.
Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922.
Ratcliff, R., & Rouder, J. N. (1998). Modeling response times for two-choice decisions. Psychological Science, 9(5), 347–356.
Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111(2), 333–367.
Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model: Current issues and history. Trends in Cognitive Sciences, 20(4), 260–281.
Ratcliff, R., Thapar, A., & McKoon, G. (2004). A diffusion model analysis of the effects of aging on recognition memory. Journal of Memory and Language, 50(4), 408–424.
Ratcliff, R., & Tuerlinckx, F. (2002). Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review, 9(3), 438–481.
Ratcliff, R., Van Zandt, T., & McKoon, G. (1999). Connectionist and diffusion models of reaction time. Psychological Review, 106(2), 261–300.
Recalde, M. P., Riedl, A., & Vesterlund, L. (2018). Error-prone inference from response time: The case of intuitive generosity in public-good games. Journal of Public Economics, 160, 132–147.
Reutskaja, E., Nagel, R., Camerer, C. F., & Rangel, A. (2011). Search dynamics in consumer choice under time pressure: An eye-tracking study. American Economic Review, 101(2), 900–926.
Rieskamp, J., Busemeyer, J. R., & Mellers, B. A. (2006). Extending the bounds of rationality: Evidence and theories of preferential choice. Journal of Economic Literature, 44(3), 631–661.
Rodriguez, C. A., Turner, B. M., & McClure, S. M. (2014). Intertemporal choice as discounted value accumulation. PLoS ONE, 9(2), e90138.
Roe, R. M., Busemeyer, J. R., & Townsend, J. T. (2001). Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review, 108(2), 370–392.
Rubinstein, A. (2007). Instinctive and cognitive reasoning: A study of response times. Economic Journal, 117(523), 1243–1259.
Rubinstein, A. (2013). Response time and decision making: An experimental study. Judgment and Decision Making, 8(5), 540–551.
Rubinstein, A. (2016). A typology of players: Between instinctive and contemplative. Quarterly Journal of Economics, 131(2), 859–890.
Rustichini, A. (2009). Is there a method of neuroeconomics? American Economic Journal: Microeconomics, 1(2), 48–59.
Rustichini, A., Dickhaut, J., Ghirardato, P., Smith, K., & Pardo, J. V. (2005). A brain imaging study of the choice procedure. Games and Economic Behavior, 52(2), 257–282.
Schotter, A., & Trevino, I. (2015). Is response time predictive of choice? An experimental study of threshold strategies. Working Paper.
Shadlen, M. N., & Kiani, R. (2013). Decision making as a window on cognition. Neuron, 80(3), 791–806.
Shenhav, A., & Buckner, R. L. (2014). Neural correlates of dueling affective reactions to win-win choices. Proceedings of the National Academy of Sciences, 111(30), 10978–10983.
Shenhav, A., Straccia, M. A., Cohen, J. D., & Botvinick, M. M. (2014). Anterior cingulate engagement in a foraging context reflects choice difficulty, not foraging value. Nature Neuroscience, 17(9), 1249–1254.
Shipley, W. C., Coffin, J. I., & Hadsell, K. C. (1945). Affective distance and other factors determining reaction time in judgments of color preference. Journal of Experimental Psychology, 35(3), 206–215.
Simen, P., Contreras, D., Buck, C., Hu, P., Holmes, P., & Cohen, J. D. (2009). Reward rate optimization in two-alternative decision making: Empirical tests of theoretical predictions. Journal of Experimental Psychology: Human Perception and Performance, 35(6), 1865–1897.
Smith, P. L. (1990). A note on the distribution of response times for a random walk with Gaussian increments. Journal of Mathematical Psychology, 34(4), 445–459.
Smith, P. L. (2000). Stochastic dynamic models of response time and accuracy: A foundational primer. Journal of Mathematical Psychology, 44(3), 408–463.
Smith, A., Bernheim, B. D., Camerer, C. F., & Rangel, A. (2014). Neural activity reveals preferences without choices. American Economic Journal: Microeconomics, 6(2), 1–36.
Soltani, A., De Martino, B., & Camerer, C. (2012). A range-normalization model of context-dependent choice: A new model and evidence. PLoS Computational Biology, 8(7), e1002607.
Spiliopoulos, L. (2013). Strategic adaptation of humans playing computer algorithms in a repeated constant-sum game. Autonomous Agents and Multi-Agent Systems, 27(1), 131–160.
Spiliopoulos, L., & Ortmann, A. (2018). The BCD of response time analysis in experimental economics. Experimental Economics, 21(2), 383–433.
Sprenger, C. (2015). An endowment effect for risk: Experimental tests of stochastic reference points. Journal of Political Economy, 123(6), 1456–1499.
Steiner, J., Stewart, C., & Matějka, F. (2017). Rational inattention dynamics: Inertia and delay in decision-making. Econometrica, 85(2), 521–553.
Stewart, N., Hermens, F., & Matthews, W. J. (2016). Eye movements in risky choice. Journal of Behavioral Decision Making, 29(2–3), 116–136.
Summerfield, C., & de Lange, F. P. (2014). Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews Neuroscience, 15(11), 745–756.
Summerfield, C., & Koechlin, E. (2010). Economic value biases uncertain perceptual choices in the parietal and prefrontal cortices. Frontiers in Human Neuroscience, 4, 208.
Summerfield, C., & Tsetsos, K. (2012). Building bridges between perceptual and economic decision-making: Neural and computational mechanisms. Frontiers in Neuroscience, 6, 70.
Tajima, S., Drugowitsch, J., & Pouget, A. (2016). Optimal policy for value-based decision-making. Nature Communications, 7, 12400.
Teodorescu, A. R., Moran, R., & Usher, M. (2016). Absolutely relative or relatively absolute: Violations of value invariance in human decision making. Psychonomic Bulletin & Review, 23(1), 22–38.
Teodorescu, A. R., & Usher, M. (2013). Disentangling decision models: From independence to competition. Psychological Review, 120(1), 1–38.
Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34(4), 273–286.
Towal, R. B., Mormann, M., & Koch, C. (2013). Simultaneous modeling of visual saliency and value computation improves predictions of economic choice. Proceedings of the National Academy of Sciences, 110(40), E3858–E3867.
Townsend, J. T., & Ashby, F. G. (1983). The stochastic modeling of elementary psychological processes. Cambridge University Press.
Turner, B. M., van Maanen, L., & Forstmann, B. U. (2015). Informing cognitive abstractions through neuroimaging: The neural drift diffusion model. Psychological Review, 122(2), 312–336.
Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review, 108(3), 550–592.
Webb, R. (2018). The (neural) dynamics of stochastic choice. Management Science.
Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of the drift-diffusion model in Python. Frontiers in Neuroinformatics, 7, 14.
Wilcox, N. T. (1993a). Lottery choice: Incentives, complexity and decision time. Economic Journal, 103(421), 1397–1417.
Wilcox, N. T. (1993b). On a lottery pricing anomaly: Time tells the tale. Journal of Risk and Uncertainty, 7(3), 311–324.
Woodford, M. (2014). Stochastic choice: An optimizing neuroeconomic model. American Economic Review: Papers & Proceedings, 104(5), 495–500.
Woodford, M. (2016). Optimal evidence accumulation and stochastic choice. Working Paper.
Zandbelt, B., Purcell, B. A., Palmeri, T. J., Logan, G. D., & Schall, J. D. (2014). Response times from ensembles of accumulators. Proceedings of the National Academy of Sciences, 111(7), 2848–2853.