Decisions beyond boundaries: When more information is processed faster than less


Acta Psychologica 139 (2012) 532–542


Andreas Glöckner a,⁎, Tilmann Betsch b

a Max Planck Institute for Research on Collective Goods, Bonn, Germany
b University of Erfurt, Germany

Article history:
Received 19 September 2011
Received in revised form 20 January 2012
Accepted 24 January 2012
Available online 28 February 2012

PsycINFO codes:
2340 Cognitive Processes
4160 Neural Networks

Abstract

Bounded rationality models usually converge in claiming that decision time and the amount of computational steps needed to come to a decision are positively correlated. The empirical evidence for this claim is, however, equivocal. We conducted a study that tests this claim by adding and omitting information. We demonstrate that even an increase in information amount can yield a decrease in decision time if the added information increases coherence in the information set. Rather than being influenced by amount of information, decision time systematically increased with decreasing coherence. The results are discussed with reference to a parallel constraint satisfaction approach to decision making, which assumes that information integration is operated in an automatic, holistic manner.

© 2012 Elsevier B.V. All rights reserved.

Keywords: Bounded rationality; Parallel constraint satisfaction; Holistic processing; Adaptive decision making; Heuristics; Response time

1. Introduction

In western religion and philosophy, decision making is generally considered the supreme discipline of conscious thought. Free will is exclusively attributed to human beings, manifesting itself in the capability of making choices upon anticipating and weighing the consequences of the alternatives. Common language reflects this denotation by defining a decision as a "determination arrived at after consideration" (Merriam-Webster, online). This notion is maintained in the maximization principle of expected utility theory. Accordingly, a rational decision maker should identify the entire set of eligible options, calculate each option's expected utility (EU) and select the one with the highest EU (Savage, 1954; von Neumann & Morgenstern, 1944). Expected utility models do not claim that people indeed calculate weighted sums, but only that their choices can be predicted by such a model (Luce, 2000; Luce & Raiffa, 1957).

Simon (1955; see also Veblen, 1898) drew attention to the decision process. He questioned the assumption that people rely on deliberate calculations of weighted sums in decisions, because the limitations of cognitive capacity and the multitude of decision options do not allow them to do so. Essentially two alternative process model approaches have been suggested. The first approach is based on the idea of adaptive strategy

selection. People might use effortful weighted sum calculations only in some situations (Beach & Mitchell, 1978; Gigerenzer, Todd, and the ABC Research Group, 1999; Payne, Bettman, & Johnson, 1988, 1993). In other situations, for instance under time pressure, they might rely on short-cut strategies, which consist of stepwise cognitive operations that are (usually) carried out deliberately (cf. Payne et al., 1988; although they also consider a possible implementation as production rules). The considered strategies are usually well specified on a process level; their cognitive costs are measured as the number of necessary calculations for applying the respective strategy, called elementary information processes (EIPs; Newell & Simon, 1972; Payne et al., 1988). Consequently, all adaptive strategy selection approaches converge in assuming that the cognitive effort and the necessary time for a decision increase with the number of processing steps (i.e., EIPs) required by the decision strategy (see also Brandstätter, Gigerenzer, & Hertwig, 2006; Bröder & Gaissmaier, 2007, for examples related to the adaptive toolbox approach).1 Hence, response times should increase with an increasing amount of information to be processed.

The second approach suggests that people utilize strategies that partially rely on automatic processes (for overviews see Evans, 2008; Gilovich, Griffin, & Kahneman, 2002; Glöckner & Witteman, 2010), thereby using the huge computational and storage power of the brain to overcome the obvious limitations of conscious cognitive capacity.

⁎ Corresponding author at: Max Planck Institute for Research on Collective Goods, Kurt-Schumacher-Str. 10, D-53113 Bonn, Germany. Tel.: +49 2 28 9 14 16 857; fax: +49 2 28 9 14 16 858. E-mail address: [email protected] (A. Glöckner).

1 Note that the approach by Payne et al. (1988) highlights the EIP perspective more strongly than both the contingency model (Beach & Mitchell, 1978) and the adaptive toolbox model (Gigerenzer et al., 1999), but the prediction concerning time that we investigate in this paper can still be derived from all of them.




Automatic information structuring processes, for instance, are activated in visual perception (McClelland & Rumelhart, 1981) and social perception (Bruner & Goodman, 1947; Read & Miller, 1998) to quickly form reasonable interpretations (i.e., Gestalten; Wertheimer, 1938), which can constitute a basis for judgments and decisions (Betsch & Glöckner, 2010; Glöckner & Betsch, 2008b). Findings indicate that people seem to rely at least partially on such automatic processes in probabilistic inference decisions (e.g., Glöckner & Betsch, 2008c; Glöckner, Betsch, & Schindler, 2010; Hilbig, Scholl, & Pohl, 2010; Horstmann, Ahlgrimm, & Glöckner, 2009; Simon, Pham, Le, & Holyoak, 2001; Simon, Snow, & Read, 2004) and risky choices (e.g., DeKay, Patino-Echeverri, & Fischbeck, 2009a, 2009b; Glöckner & Betsch, 2008a; Glöckner & Herbold, 2011; Hilbig & Glöckner, 2011).

Several models exist that aim to describe the underlying cognitive processes. According to Glöckner and Witteman (2010), these processes can be categorized into mainly reflex-like associative mechanisms (e.g., Betsch, Haberstroh, Molter, & Glöckner, 2004; Finucane, Alhakami, Slovic, & Johnson, 2000), more complex pattern matching mechanisms involving memory prompting (e.g., Dougherty, Gettys, & Ogden, 1999; Fiedler, 1996; Juslin & Persson, 2002; Thomas, Dougherty, Sprenger, & Harbison, 2008), automaticity based evidence-accumulation mechanisms (e.g., Busemeyer & Johnson, 2004; Busemeyer & Townsend, 1993; Diederich, 2003), and constructivist mechanisms based on holistic evaluations of the evidence (e.g., Glöckner & Betsch, 2008b; Holyoak & Simon, 1999; Monroe & Read, 2008; Read, Vanman, & Miller, 1997; Thagard & Millgram, 1995). In the current work, we mainly focus on models for constructivist mechanisms. The other mechanisms are, however, briefly discussed in the final part of this paper. As we will explain in more detail below, these constructivist-automatic processes operate in a holistic fashion and can profit from more information. In contrast to the adaptive strategy selection approaches, it can be predicted that, under certain conditions, more information can be processed more quickly than less. In the study reported in this paper, we test this claim empirically.

In the remainder of the introduction, we will first explain the EIP-based perspective underlying the adaptive strategy selection approach in more detail. Then we will discuss the parallel constraint satisfaction models that can be used to computationally implement holistic processes, and we will conclude with reviewing previous evidence concerning the relation between decision time and processing steps.

1.1. Elementary Information Processes (EIPs) perspective: more information = slower decision

According to the adaptive strategy selection approach, cognitive effort is measured to quantify the costs of thinking. In this way, decision making processes are decomposed into elementary information processes (EIPs; Newell & Simon, 1972). Table 1 shows some EIPs involved in solving decisions from description (Bettman, Johnson, & Payne, 1990). For example, consider the following simple decision problem consisting of a choice between two options based on the four cues presented in Table 2. For illustration purposes, let the options be consumer products and let the cues be consumer testing institutes.

Table 1
EIPs used in decision strategies. Source: Bettman et al. (1990, p. 114).

READ        Read an alternative's value into STM [short-term memory]
COMPARE     Compare two alternatives on an attribute
DIFFERENCE  Calculate the size of the difference of two alternatives for an attribute
ADD         Add the values of an attribute in STM
PRODUCT     Weight one value by another (multiply)
ELIMINATE   Remove an alternative or attribute from consideration
MOVE        Go to next element of external environment
CHOOSE      Announce preferred alternative and stop process


Table 2
Example of a binary decision task.

Cue   Validity   Option A   Option B
w     0.55       −          +
x     0.70       +          −
y     0.80       +          −
z     0.60       −          +

The entries in the matrix are evaluations of the product on a relevant criterion dimension, say, its predicted durability, for more (+) or less (−) than 2 years. The cues differ with respect to their cue validity v, each representing the probability that a tester's prediction is correct. The decision task is to select the product with the higher durability.

Adaptive strategy selection models assume that the cognitive effort spent on a decision depends primarily on the chosen strategy. A person applying a lexicographic strategy (LEX; Fishburn, 1974; see also Gigerenzer & Goldstein, 1999), for instance, will look up cues in the order of their validity and choose the option that is better on the first differentiating cue. In the example task (Table 2), LEX would require at least 5 EIPs (i.e., 2 READ, 1 MOVE, 1 COMPARE, 1 CHOOSE) to reach a decision. Now consider a compensatory strategy that processes and integrates all information. The weighted additive (WADD) strategy, underlying utility theory, requires the individual to read all information given, weight (PRODUCT) the outcome values ([+] = +1; [−] = −1) with validities and sum up the products for each option (ADD). Finally, the individual must COMPARE the two aggregate values of the two options in order to choose the dominant one. Thus, application of a WADD strategy to the example task requires at least 47 EIPs (16 READ [i.e., 2 × (4 cues and 4 cue validities)], 15 MOVE, 8 PRODUCT, 6 ADD, 1 COMPARE, 1 CHOOSE). Hence, the LEX and WADD strategies differ with regard to the number of EIPs; the cognitive effort involved in application should be smaller for LEX compared to WADD.

Of course, there are some confinements. Research on the metric of cognitive effort has shown that: (i) different EIPs consume different amounts of cognitive effort (Lohse & Johnson, 1996); (ii) cost differences between strategies are less pronounced in binary decision tasks compared to those involving more options; and (iii) learning reduces the relative costs of a strategy (e.g., Abelson & Levi, 1985, for an overview).2 However, all other things being equal, one can surely assume that an increase in the number of EIPs used by a strategy (i.e., adding further processing steps) should never result in a decrease of cognitive effort.3 One measurable correlate of cognitive effort is decision time (cf. Lohse & Johnson, 1996). Accordingly, processing time should be equal or greater as EIPs increase. The more information we have to consider, compare and integrate, the longer it should take us to arrive at a decision.

At first glance, this claim might be considered a truism. In this paper, however, we question its general validity. Whereas the claim is most likely valid with respect to deliberative processes, it is less likely that it applies to all kinds of automatic processes. Deliberation involves slow, step-by-step consideration of information. It requires conscious control and substantially consumes cognitive resources. Automatic processes, in contrast, operate rapidly and can include a huge amount of information (e.g., Glöckner & Betsch, 2008c; Hammond, Hamm, Grassia, & Pearson, 1987; Hilbig et al., 2010; Hogarth, 2001).

2 Recently, it has furthermore been argued that cognitive effort might be reduced by using internal short-cuts within compensatory strategies in that information integration for the second option might be aborted if the remaining cues cannot compensate for the advantage of the first option (Bergert & Nosofsky, 2007).

3 Note that the adaptive toolbox (cf. also the comment on production rules used by the adaptive decision maker above) could potentially contain tools that are based on automatic-intuitive processes (Gigerenzer et al., 1999). Additional tools have, however, not been clearly specified yet, except for the assumption that automatic-intuitive tools might rely on the same stepwise processes assumed for deliberate heuristics (Gigerenzer, 2007). It is important to note that under this seriality assumption the core argument of this paper equally applies, since for any kind of serial processing (including automatic processing) decision time should not decrease with an increasing amount of information to be processed.
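To make the EIP arithmetic above concrete, the following minimal sketch tallies EIP counts for LEX and WADD on the Table 2 task. It is an illustration only: the counting conventions (e.g., when a MOVE is charged) are our assumptions, chosen so that the totals reproduce the 5 and 47 EIPs given in the text, not code from the original study.

from collections import Counter

# The Table 2 task: (validity, value for option A, value for option B), values coded +1/-1.
cues = [
    (0.80, +1, -1),
    (0.70, +1, -1),
    (0.60, +1, -1),
    (0.55, -1, +1),
]

def lex_eips(task):
    """LEX/take-the-best: inspect cues in order of validity and stop at the first
    discriminating cue; charge 2 READ, 1 MOVE and 1 COMPARE per inspected cue,
    plus a final CHOOSE (our assumed convention)."""
    eips = Counter()
    for v, a, b in sorted(task, key=lambda c: c[0], reverse=True):
        eips["READ"] += 2
        eips["MOVE"] += 1
        eips["COMPARE"] += 1
        if a != b:          # first discriminating cue found -> decide
            break
    eips["CHOOSE"] += 1
    return eips

def wadd_eips(task):
    """WADD: read each cue value and its validity for both options, weight each
    value by its validity (PRODUCT), sum the products per option (ADD), compare
    the two weighted sums and choose."""
    n = len(task)
    eips = Counter()
    eips["READ"] = 2 * 2 * n        # per option: n cue values + n validities
    eips["MOVE"] = 2 * 2 * n - 1    # one MOVE between successive reads
    eips["PRODUCT"] = 2 * n         # one weighting per cue value
    eips["ADD"] = 2 * (n - 1)       # n - 1 additions per option
    eips["COMPARE"] = 1
    eips["CHOOSE"] = 1
    return eips

for name, counts in [("LEX", lex_eips(cues)), ("WADD", wadd_eips(cues))]:
    print(name, dict(counts), "total =", sum(counts.values()))
# Prints a total of 5 EIPs for LEX and 47 EIPs for WADD, matching the example in the text.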



They can work without explicit control and require only a minimum of cognitive resources. Some of these automatic processes implement holistic mechanisms in that information is automatically structured to form Gestalten in a perception-like process (i.e., constructivist mechanisms; see above and Glöckner & Witteman, 2010), which we will henceforth refer to as holistic processing.

1.2. Parallel constraint satisfaction as computational implementation of holistic processing: decision time depends on coherence

Holistic processing has been modeled using parallel constraint satisfaction (PCS) networks (for overviews see Holyoak & Spellman, 1993; Read et al., 1997). PCS networks consist of nodes that represent hypotheses or elements, as well as bidirectional links between these elements representing their relations (Thagard, 1989). Through spreading activation in the network, the best possible interpretation under parallel consideration of all constraints (i.e., links) is constructed. In this process, the activation of nodes changes, which means that the hypothesis represented by a node is perceived as more or less likely (high vs. low activation).

PCS can be applied to a decision task as described in Table 2 (Glöckner & Betsch, 2008b). Cues and options form the elements (nodes) in a working network. Connections between the elements represent their relations (e.g., cues speaking for or against an option). Relevant information encoded from the environment and related information in (long-term) memory is automatically activated and fed into the working network. Information gained by active search can also be added. The working network represents a subset of knowledge from long-term memory and the environment. It is possible but not necessary that parts of the working network enter conscious awareness. PCS operates on a subconscious level and is assumed to capitalize on the high computational capacity of automatic processing.

According to PCS, decision time should mainly depend on initial coherence in the network (Glöckner & Betsch, 2008b). Coherence is high if all pieces of information in the network fit together well. Coherence is low if there is conflict or inconsistency between elements in the network (cf. Festinger, 1957; Heider, 1958). Consider a network containing two options (cf. Table 2). The more strongly and frequently one option is positively linked to cues compared to the competitor, the clearer the evidence is in favor of this option and the less inconsistency has to be resolved. In such cases coherence is high from the beginning. In contrast, a high degree of conflict in the network (i.e., if cues are equally strong for both options) makes it more difficult to find a coherent solution and, therefore, leads to an increase in decision time. PCS mechanisms are implemented as iterative updating of nodes. Decision time is predicted by the number of iterations until node activations reach an asymptotic level (Freeman & Ambady, 2011; Glöckner, 2009; Glöckner & Betsch, 2008b). Furthermore, according to PCS the option with the higher activation after settling is chosen, and the difference in activation between option nodes predicts confidence.

In summary, the PCS approach predicts that decision time will be a function of coherence in the network. In contrast to the EIP perspective, decision time should be rather independent of the amount of encoded information. Specifically, we predict that (i) decision time will increase if information is removed so that coherence decreases; and (ii) decision time will decrease if information is removed so that coherence increases.
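The direction of these predictions can be illustrated with a toy conflict index of our own (it is not the PCS measure itself, which is the number of iterations to convergence; see Appendix B): the share of chance-corrected cue weight that speaks against the favored option. Using cue pattern 1 from Table 3 and its two reduced versions, assumed here only for illustration:

def conflict(task):
    """task: list of (validity, value_A, value_B) with values +1/-1.
    Returns the share of chance-corrected cue weight opposing the favored option."""
    w = [(v - 0.5, a, b) for v, a, b in task]
    support_a = sum(wi for wi, a, b in w if a > b)
    support_b = sum(wi for wi, a, b in w if b > a)
    favored, rest = max(support_a, support_b), min(support_a, support_b)
    return rest / (favored + rest) if (favored + rest) else 0.0

complete = [(0.80, +1, -1), (0.70, +1, -1), (0.60, +1, -1), (0.55, -1, +1)]
dec_co   = [(0.80, +1, -1), (0.70, +1, -1), (0.55, -1, +1)]   # a supporting cue removed
inc_co   = [(0.80, +1, -1), (0.70, +1, -1), (0.60, +1, -1)]   # the conflicting cue removed

for name, task in [("complete", complete), ("decreased coherence", dec_co),
                   ("increased coherence", inc_co)]:
    print(f"{name:22s} conflict = {conflict(task):.2f}")
# Removing the conflicting cue eliminates the conflict share entirely, whereas
# removing a supporting cue increases it, even though fewer cue values remain.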
1.3. Previous evidence on the relation between decision time and processing steps

Several results support the hypothesis derived from the EIP perspective. Payne et al. (1988) showed that individuals under time pressure tend to switch to less effortful strategies in situations in which information has to be actively searched using the mouse. Bröder and Gaissmaier (2007) showed that response time increases with the number of computational steps necessary to implement a lexicographic strategy for memory-based probabilistic inference tasks.

Similarly, Bergert and Nosofsky (2007) found response times which were in line with lexicographic strategies. Finally, in the domain of risky choices, Brandstätter et al. (2006) report that decision times increase with the steps necessary to differentiate between gambles using a (semi-)lexicographic strategy (i.e., the priority heuristic).

However, there is also evidence showing that this assumed positive relation does not always hold. For probabilistic inference tasks in which information is openly displayed, Glöckner and Betsch (2008c) found a decrease of decision times when comparing tasks for which a lexicographic strategy predicted the opposite. Glöckner and Hodges (2011) qualified the findings by Bröder and Gaissmaier (2007) on memory-based decisions by showing that decision times for a substantial portion of participants can be better explained by PCS than by serial heuristics. Ayal and Hochman (2009) attempted to replicate the decision time findings for risky choices by Brandstätter et al. (2006) and found a significant effect in the opposite direction. Similarly, further investigations of risky choices provided more support for the decision time predictions of PCS than for the predictions of the suggested semi-lexicographic strategy (Glöckner & Betsch, 2008a; Glöckner & Herbold, 2011; Glöckner & Pachur, 2012; see also Hilbig, 2008). Investigations of decision times in probabilistic inferences involving recognition information have also shown data more in line with PCS than with strategies assuming stepwise processing such as the recognition heuristic (Glöckner & Bröder, 2011; Hilbig & Pohl, 2009; Hochman, Ayal, & Glöckner, 2010). Hence, overall, the evidence is equivocal and calls for further investigation.

A closer look at papers challenging the EIP perspective reveals one potential weakness in their argument. Specifically, it could be argued that persons might have used another strategy for which predictions were not considered in the analysis (cf. Bröder & Schiffer, 2003a). Since the number of heuristics is huge and still growing, it is often hard or even impossible to include all of them in a single model comparison test. We use an improved design to rule out this argument. The basic idea is to manipulate tasks so that all established EIP-based strategies predict a reduction in (or equal) decision time, whereas PCS predicts an increase in half of them and a decrease in the other half. Note, however, that we of course cannot rule out that an EIP-based strategy might be developed in the future that can account for our findings. As we will discuss in more detail in Section 4.2, our investigation by necessity has to be limited to the specified parts of adaptive-decision-making approaches.

In the current study we rely on the standard paradigm for investigating probabilistic inference tasks in which persons make decisions based on probabilistic cues. The new idea is to reduce the complexity of tasks by selectively dropping less valid information. For all strategies considering information from all cues (e.g., WADD) this should lead to a reduction in decision time because less information has to be processed. For all lexicographic or elimination strategies (e.g., take-the-best; elimination by aspects; minimalist) dropping cues with low validity should have no influence on decision times if all cues make differentiating predictions.4 The same should be the case for guessing strategies. Hence, for all strategies that we are aware of, dropping less valid cues should lead to a reduction of decision time (or should have no influence). As will be discussed below in more detail, dropping can be done so that coherence is increased or decreased, which allows realizing the aspired predictions of PCS. A set of prototypical strategies, which were also used in the model comparison reported later, is described in Appendix A.

One important factor influencing decision time is constraints in information acquisition (Glöckner & Betsch, 2008c).

4 As can be seen below, the requirement of all differentiating cues held only in three out of four cue patterns. All results, however, also hold when considering these three cue patterns only.


If information acquisition is very time-consuming (e.g., each piece of information has to be looked up for 1 min), the prediction of increasing decision time with an increasing amount of information would be trivial. We are, however, interested in the time needed for information integration and therefore use an open matrix paradigm to minimize constraints on information search.

2. Method

In repeated decision trials, participants were instructed to select the better of two products (options). They were given information from four testers (cues) with different predictive validity (cue validity), which provided dichotomous quality ratings (good vs. bad) for each product. Following the procedure used in previous studies (e.g., Glöckner & Betsch, 2008c; Exp. 3), information was presented in an "open" matrix (no covered information). The order of cues and options was randomized to avoid effects of pattern learning and recognition. The amount of information was manipulated by omitting information from one of the less valid testers. We either removed information that supported the dominating alternative (decrease in coherence) or information that conflicted with the dominating alternative (increase in coherence). Information on the most valid cue was always available and always discriminated between options. Therefore, application of simple, lexicographic strategies should be unaffected by our manipulation. Besides our main dependent variables, response time and choice, we also assessed confidence ratings after choice, which provide a useful additional measure to investigate individuals' decision strategies (Glöckner, 2009, 2010; see also Jekel, Fiedler, & Glöckner, 2011; Jekel, Nicklisch, & Glöckner, 2010).

2.1. Participants and design

There were 112 participants from the MPI Decision Lab subject pool.5 They were mainly students from the University of Bonn (mean age: 22.9 years; 60 female). The experiment lasted approximately 20 min and was part of a 1-hour experimental battery. Participants were compensated with 12 Euro (approx. USD 16.80 at that time).

Decision tasks varied as a within-participants factor, resulting in a 4 (CUE PATTERN) × 3 (VERSION: Complete, Decreased Coherence, Increased Coherence) design. The factor CUE PATTERN represents four different basic decision tasks. Participants worked on these tasks in a regular (i.e., complete) version and in two variants in which information of one cue was removed, respectively (Table 3). Our central manipulation was contained in the factor VERSION. For all cue patterns, information was removed that increased vs. decreased coherence to test our hypotheses. The cue validities v (here: probabilities of correct predictions) were 0.80, 0.70, 0.60, and 0.55. Options A and B represent the eligible options. Cue values are represented by the symbols "+" (good) and "−" (bad). Each of the twelve decision tasks was presented five times. For each of these decision tasks, PCS predictions for decision time and confidence were calculated using standard parameters as described in Appendix B. Thirty additional tasks were used as distracters, resulting in a total of 90 decisions.

2.2. Materials and procedure

A computer program written in Visual Basic 6.0 was used to run the experiment. Participants were instructed to repeatedly select the better of two options. They were informed about the testers' cue validities. To facilitate participants' understanding of the provided cue validity information, they were informed that a validity of 0.50 represents chance and a validity of 1 represents a cue with perfect predictions. Moreover, participants were asked to make good decisions and to be as fast as possible in deciding (Fazio, 1990).

5 Participants signed up online using the subject-pool management software ORSEE (Greiner, 2004).


Table 3
Decision tasks used in the experiment.

Complete decision tasks
       Cue pattern 1   Cue pattern 2   Cue pattern 3   Cue pattern 4
v      A      B        A      B        A      B        A      B
0.80   +      −        +      −        +      −        +      −
0.70   +      −        +      −        −      +        −      −
0.60   +      −        −      +        +      −        +      −
0.55   −      +        −      +        −      +        −      +

Decreased coherence (cue pattern 1; one cue supporting A removed)
v      A      B
0.80   +      −
0.70   +      −
0.55   −      +

Increased coherence (cue pattern 1; the cue conflicting with A removed)
v      A      B
0.80   +      −
0.70   +      −
0.60   +      −

[The decreased- and increased-coherence versions of cue patterns 2–4 are not reproduced here; in each version, the information of one of the less valid cues was removed as described in the text.]

Note. v indicates the validity of the respective cue (0.5 = chance; 1 = perfect prediction).
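The design logic can be checked for cue pattern 1, which is shown in all three versions above: the most valid cue always discriminates, so a lexicographic strategy is unaffected by the manipulation, while any strategy that processes all displayed values faces fewer values in both reduced versions. A minimal sketch (our own illustration, not the authors' analysis code):

# Cue pattern 1 in its three versions; rows are (validity, value_A, value_B), values +1/-1.
complete = [(0.80, +1, -1), (0.70, +1, -1), (0.60, +1, -1), (0.55, -1, +1)]
dec_co   = [(0.80, +1, -1), (0.70, +1, -1), (0.55, -1, +1)]   # supporting 0.60 cue removed
inc_co   = [(0.80, +1, -1), (0.70, +1, -1), (0.60, +1, -1)]   # conflicting 0.55 cue removed

def lex_steps(task):
    """Number of cues a lexicographic strategy inspects (stop at first discriminating cue)."""
    for step, (v, a, b) in enumerate(sorted(task, reverse=True), start=1):
        if a != b:
            return step
    return len(task)  # no discriminating cue -> guess after inspecting all cues

for name, task in [("complete", complete), ("decCo", dec_co), ("incCo", inc_co)]:
    print(name, "LEX steps:", lex_steps(task), "| displayed cue values:", 2 * len(task))
# LEX needs a single step in every version, while the number of values to integrate
# drops from 8 to 6 in both reduced versions: EIP-based accounts therefore predict
# equal or shorter decision times for the reduced versions, never longer ones.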

All pieces of cue value information (6 to 8) were presented simultaneously in an information matrix with cues displayed in rows and options in columns (see format used in Table 2). The presentation order of cues and options in the matrix was randomized. Participants chose one option by mouse click. Choices and decision times were recorded.6 Afterwards, they rated their confidence in choice on a scale from very uncertain (−100) to very certain (100) using a horizontal scroll bar. Participants clicked a button centered on an empty screen to start the next trial. A warm-up decision trial was used to familiarize participants with the material and the procedure. It was followed by 90 trials including targets and distracters presented in randomized order. A 1-minute break was embedded after half of the tasks to minimize the effects of decreasing concentration.

3. Results

3.1. Choices

All participants were able to complete the tasks. No missing values were encountered. The observed proportions of choices for option A are summarized in Table 4. Participants' choices were highly consistent. In all twelve tasks the majority of choices were in favor of option A. In 97% of the repeated choices, participants chose the same option in all five repetitions of the respective variant of a cue pattern, indicating a high choice reliability.

3.2. Decision times

Participants followed the instruction to make quick decisions and showed a mean decision time of M = 4128 ms (SD = 2922 ms; Skew = 3.53; Kurt = 23.5; MD = 3295 ms) over all 90 trials. To reduce skewness and the influence of outliers, all decision time analyses were computed for ln-transformed data. We predict that decision time will be a function of coherence instead of merely depending on the amount of information. Specifically, we hypothesize that, in comparison to the Complete condition with 8 cue values, (i) decision time will increase if information is removed so that coherence decreases (Decreased Coherence), and (ii) decision time will decrease if information is removed so that coherence increases (Increased Coherence).

6 Using the computer mouse (instead of hitting keys) might increase error variance in response time, which could work against our hypothesis. Nevertheless, we decided to use the mouse because otherwise no fine-grained confidence measurement would have been possible.



Table 4
Choices for option A (proportions).

Version                Cue pattern 1   Cue pattern 2   Cue pattern 3   Cue pattern 4
Complete               1               0.98            0.98            0.99
Decreased coherence    0.99            0.82a           0.65a           0.99
Increased coherence    0.99            0.99            1               0.99
Total                  0.99            0.93            0.87            0.99

Note. N = 112 for each combination of cue pattern and version.
a Significantly different from the version Complete at p < 0.001.

Descriptively, decision times are mainly in line with both hypotheses in that decision time was high for the Decreased Coherence tasks, intermediate for the Complete tasks, and low for the Increased Coherence tasks (Fig. 1). To test the hypotheses statistically, we regressed decision time on our factor VERSION, controlling for differences in cue patterns (both dummy coded) and the order of trial presentation (Table 5; column 1). The dummy variables Decreased and Increased Coherence directly test our hypotheses in that they express the difference of each version from the complete cue patterns (i.e., control). Both variables turned out to be significant, supporting our hypotheses. Decision time significantly increased when information was removed that decreased coherence. Decision times significantly decreased, however, if reduction of information resulted in an increase in coherence. Estimated means (standard error of predictions in parentheses) are MComp = 8.06 (0.024), MDecCo = 8.12 (0.025), and MIncCo = 7.92 (0.026).7 Decision times also differed significantly between cue patterns and decreased over trials, indicating learning effects.

To further explore the effects of our coherence manipulation on decision time, we ran regressions separately for each cue pattern. Both effects were also found for individual cue patterns (all p < 0.01) with the exception of cue pattern 4. In cue pattern 4, decision time in the decreased coherence tasks did not differ from decision time in the complete tasks (b = −0.007, t = −0.31, p = 0.76). Note, however, that this null effect does not contradict our general findings and might be due to random fluctuation.

3.3. Confidence

Participants were rather confident in their choices. Ratings show the inverse pattern compared to that observed for decision times (Fig. 2). We regressed confidence on the same variables used in the decision time regression (Table 5, column 2). All coefficients, except for order, were significant in the opposite direction to that observed for decision times. Confidence was decreased for the reduced coherence task, in which information consistent with the favored option was removed, and increased when contrary information was removed (i.e., increased coherence tasks).

The above analyses support the qualitative predictions of PCS. We furthermore investigated whether the data were also in line with quantitative predictions of the model. Specifically, we investigated the fit of PCS predictions for decision times and confidence with the observed data. Predictions for decision time are derived from the number of iterations needed to stabilize, and confidence predictions are calculated as the advantage in activation of the chosen option over the non-chosen option. Fig. 3 shows that PCS predicts time and confidence aggregated for the 12 decision tasks very well (both p < 0.001). The same holds in regression analyses that take into account individual-level data.

7 Lexicographic models (searching cues in order of validity) predict equal decision time for all tasks since the most valid cue always differentiates between options. All tallying models predict reduced decision time for the two reduced versions compared to the complete versions. Our findings allow rejecting both hypotheses.

We regressed ln-decision time on PCS time predictions and order and found that PCS time predictions had a significant effect, b = 0.0066, t = 17.16, p < 0.0001 (R² = 0.15). In an equivalent regression with confidence as criterion we found that PCS confidence predictions had a significant effect as well, b = 478.09, t = 19.19, p < 0.0001 (R² = 0.25). Note that for both dependent variables the explained variance for the model with PCS predictions was essentially the same as for the full models including cue pattern dummies and dummies for our reduction manipulation (cf. Table 5). Hence, PCS can account for the systematic variance in the data.

In a final step we jointly analyzed choices, time, and confidence using Multiple Measure Maximum Likelihood estimation (Glöckner, 2009, 2010; Jekel et al., 2010), which allows investigating individual differences in decision strategies. The analysis shows that implementations of PCS account best for the overall behavior of the large majority of participants (i.e., 74%; for details see Appendix C). This provides further evidence for the importance of PCS mechanisms in probabilistic inference tasks.

4. Discussion

One cornerstone assumption of the bounded rationality approach states that cognitive capacity is constrained. Building on this assumption, adaptive-decision-making models converge in assuming that humans use a variety of simple decision strategies that, in certain situations, allow them to reduce cognitive costs (Beach & Mitchell, 1978; Gigerenzer et al., 1999; Payne et al., 1988). According to these models, cognitive costs and decision time are predicted to increase, ceteris paribus, the more elementary information processes (EIPs) are necessary to make a decision. Consequently, less information should be processed faster than more. We argued that – at least in environments allowing for quick information acquisition – this notion is valid for serial processing but not for the holistic processes that are involved in decision making as well. Based on a Parallel Constraint Satisfaction (PCS) approach to decision making, we predicted that the reduction of information in a decision task can yield either an increase or a decrease in decision time, depending on whether it increases or decreases coherence in the entire set of information.

These PCS predictions were strongly corroborated in the current study. Decision time was not generally reduced by removing information from decision tasks. It was, however, systematically affected by changes in coherence. When information was removed, participants needed more (vs. less) time to arrive at a decision if information reduction resulted in a decrease (vs. increase) in overall coherence. Hence, we have shown that – under certain circumstances – providing more information leads to quicker decisions than providing less information. The major new contribution of the current study is that tasks were constructed in a way that allows testing a critical prediction that holds for all EIP-based strategies suggested in the literature, in contrast to previous tests, which focused on one or a few strategies only. The findings challenge the general validity of the cognitive-cost assumption underlying the bounded rationality approach and important adaptive-decision-making models (see also Hilbig, 2010; Newell & Bröder, 2008) and corroborate the holistic processing perspective.

The fact that coherence drives decision times supports PCS models but is also in line with many other models (see below). From a more general perspective, the findings support theoretical approaches assuming a) that automatic processes play a crucial role in decision making and b) that these processes have very specific properties that are markedly different from stepwise computations (e.g., Beach & Mitchell, 1996; Betsch, 2005; Busemeyer & Townsend, 1993; Dougherty et al., 1999; Glöckner & Betsch, 2008b; Hogarth, 2001; Kahneman & Frederick, 2002).

4.1. Related findings

Our findings concerning decision time are in line with the classic distance effect in choices, which states that decision time increases with



Fig. 1. Decision time by cue pattern and version. 'comp' refers to the complete regular cue pattern. 'decCo' and 'incCo' refer to the cue pattern in which coherence was decreased vs. increased by removing two pieces of information from the regular cue pattern. On a millisecond (ms) scale the y-axis ranges from 2441 ms (e^7.8) to 4447 ms (e^8.4).

decreasing distance of the options on the criterion value (for an overview see Birnbaum & Jou, 1990; for a recent investigation see also Brown & Tan, 2011). Several paramorphic models contain specific assumptions to account for the distance effect without providing well specified process-based explanations for its emergence (e.g., Birnbaum & Jou, 1990; Cartwright & Festinger, 1943). In PCS models, the distance effect follows from the general spreading activation mechanisms. The current findings elaborate previous work on probabilistic inference tasks, which shows that decision time increases with decreasing coherence while holding the amount of information in the task constant (Glöckner & Betsch, 2008c). As mentioned above, similar findings were observed for probabilistic inferences including recognition information (Glöckner & Bröder, 2011; Hilbig & Pohl, 2009; Hochman et al., 2010) and for risky choices (Glöckner & Betsch, 2008a; Glöckner & Herbold, 2011; Hilbig, 2008). Decision times more in line with an adaptive strategy selection perspective were observed in tasks with more effortful information acquisition, which we explicitly did not address in our study.

Table 5
Regression analyses of decision time and confidence.

                                 (1) ln(time)           (2) Confidence
Decreased coherence (1 = yes)    0.0625⁎⁎⁎ (4.91)       −21.27⁎⁎⁎ (−15.52)
Increased coherence (1 = yes)    −0.145⁎⁎⁎ (−10.89)     11.36⁎⁎⁎ (11.05)
Cue pattern 2 (1 = yes)          0.211⁎⁎⁎ (13.60)       −30.12⁎⁎⁎ (−17.96)
Cue pattern 3 (1 = yes)          0.301⁎⁎⁎ (17.75)       −41.17⁎⁎⁎ (−19.15)
Cue pattern 4 (1 = yes)          0.142⁎⁎⁎ (7.67)        −24.25⁎⁎⁎ (−11.13)
Order                            −0.00544⁎⁎⁎ (−14.75)   −0.0171 (−0.73)
Constant                         8.201⁎⁎⁎ (291.00)      84.25⁎⁎⁎ (44.61)
Observations                     6720                   6720
R²                               0.163                  0.263

Note. Coefficients for decreased and increased coherence are comparisons against complete cue patterns (i.e., control); likewise, coefficients for cue patterns 2 to 4 are comparisons against cue pattern 1. t statistics in parentheses; standard errors were adjusted for 112 clusters in observations due to repeated measurement (Gould, Pitblado, & Sribney, 2006; Rogers, 1993). ⁎⁎⁎ p < 0.001.
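For readers who want to reproduce this style of analysis, a minimal sketch using ordinary least squares with participant-clustered standard errors is given below. The column names and the simulated data are our own stand-ins; the original data set is not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 112, 60          # echoes the design (112 participants, 60 target trials)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_trials),
    "order":       np.tile(np.arange(n_trials), n_subj),
    "version":     rng.choice(["complete", "decCo", "incCo"], n_subj * n_trials),
    "pattern":     rng.choice([1, 2, 3, 4], n_subj * n_trials),
})
# Simulated ln(decision time); real trial-level data would be loaded instead.
df["ln_time"] = (8.0 + 0.06 * (df["version"] == "decCo")
                 - 0.15 * (df["version"] == "incCo")
                 + rng.normal(0, 0.4, len(df)))

# ln(time) regressed on version dummies (complete = reference category),
# cue-pattern dummies and trial order; SEs clustered by participant.
model = smf.ols(
    "ln_time ~ C(version, Treatment(reference='complete')) + C(pattern) + order",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["participant"]})
print(model.summary())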

A generally higher prevalence of lexicographic strategies (Bröder & Schiffer, 2003b) and decision times more in line with their predictions were particularly observed in memory-based probabilistic inferences (Bröder & Gaissmaier, 2007) as well as in tasks in which cue validities had to be acquired in a previous learning phase (Bergert & Nosofsky, 2007).

4.2. Potential caveats and alternative accounts

4.2.1. Testing specified parts of adaptive-decision-making models

The most prominent models for adaptive decision making – the Adaptive Decision Maker and the Adaptive Toolbox – are formulated as general frameworks. They basically do not limit the possible number of heuristics and strategies. From the vantage point of hindsight, these frameworks can be tuned to "account" for any type of finding by postulating a new strategy (for a critical discussion see also Glöckner & Betsch, 2011, 2010). Our results can only rule out the parts of these frameworks that have been specified so far. For proponents of adaptive decision making models, however, our findings might provide guidance for efficiently extending their models.8

4.2.2. Ignoring information and lexicographic strategies

Of course, we cannot rule out that some individuals may have ignored some pieces of information sometimes. Our Multiple Measure Maximum Likelihood analysis provides convergent evidence, however, that the majority of individuals systematically processed all pieces of the given information in a holistic fashion. Removing less valid information influenced choices, decision times and confidence ratings considerably, indicating that cue information even on these less valid cues was taken into account. Furthermore, note that we never removed information on the high validity cues. Hence, if participants had employed, for example, a lexicographic strategy, our manipulation would have had no effect on decision times and confidence, because the options always differed on the most valid cue and in all but one task each cue differentiated.

8 Please note that many further possible heuristics can be directly ruled out by our data. Consider, for example, a set of strategies one could call lexicographic race models (or take-two heuristic, take-three heuristic, etc.), which go through cues (ordered by validity) and choose the option for which they first find two (three, four, …) positive cue values (we thank an anonymous reviewer for suggesting this alternative). Considering cue pattern 1, a take-two heuristic would not predict any time differences between versions and can directly be ruled out; a take-three heuristic could account for the increased decision time in the decCo version but it cannot explain the reduced decision time for the incCo version (both compared to the complete version). This example should illustrate that it is quite hard to think of simple serial heuristics that could account for the current findings.



Fig. 2. Confidence by cue pattern and version. ‘comp’ refers to the complete regular cue pattern. ‘decCo’ and ‘incCo’ refer to the cue pattern in which coherence was decreased vs. increased by removing two pieces of information from the complete cue pattern.

4.2.3. Matching processes

One might argue that participants encoded constellations of information and compared them with prototypes or exemplars using automaticity-based matching processes (cf. Dougherty et al., 1999; Fiedler, 1996; Juslin & Persson, 2002). Recall, however, that the arrangement of cue patterns changed over trials and within each condition, as the order of cues was randomized. Furthermore, increasing decision times after removing information could not easily be explained by these approaches.

4.2.4. Tallying

It is also not possible to account for our findings by postulating that participants simply counted pluses and minuses and selected the alternative with the higher sum (i.e., an equal-weight or tallying strategy). An application of this strategy should have resulted in a decrease in decision time when information was removed. We found evidence for the very opposite, and in the Multiple Measure Maximum Likelihood analysis no participant was classified as a user of EQW (see Appendix C).

4.2.5. Accumulative processes

Evidence accumulation models (Busemeyer & Townsend, 1993) can well account for our decision time findings, because the likelihood of reaching the decision threshold quickly increases with increasing superiority of one option over the other. Due to the fact that evidence is always accumulated until a certain evidence strength is reached, these kinds of models cannot, however, easily account for our confidence findings.9 It would be a premature conclusion to interpret this finding as evidence against evidence accumulation models in general. Further assumptions would, however, have to be incorporated in these models to account for the very systematic findings on confidence ratings.

9 This is the case under the assumption that confidence ratings are based on the total amount of accumulated evidence for the favored option. Given a constant threshold, there should be no differences in accumulated evidence between tasks. Consequently, no differences in confidence should be observed either.

It is left to further research to investigate whether recently developed two-stage models (Pleskac & Busemeyer, 2010) can account for the current findings.

4.2.6. Short-cut weighted compensatory strategy

Bergert and Nosofsky (2007) argued that people might use internal short-cuts within compensatory strategies (see footnote 2). Specifically, the strategy predicts that information integration for the second option is aborted as soon as the remaining cues cannot compensate for the advantage of the first option. Our data speak against this explanation. First, the observed decision time of about 4 s makes it hard to take into account 5 pieces of information in a weighted compensatory manner. Second, for cue pattern 1 in the decreased and increased coherence versions, calculation should be aborted after considering the first cue of option 2. Hence, the strategy would predict equal decision time for both versions and cannot account for the observed significant differences. Third, for the complete versions of cue patterns 3 and 4 the strategy would predict that integration is aborted after considering the first cue of the second option. Note that in both cases exactly the same information would be processed. Therefore, the significant differences between both decision tasks concerning time and confidence cannot be explained.

4.2.7. Confusion

Removing information might have surprised or confused participants. Consequently, they might have begun to ponder longer on the decision, thus yielding an increase in decision time. Such an alternative explanation, however, only accounts for parts of the result. Recall that removing information led to a decrease in decision time if coherence increased. Therefore, variations in decision time cannot be attributed to surprise or confusion.

4.2.8. Modeling information acquisition

One of the differences between the implementation of PCS used in this paper and heuristics is that PCS does not model information acquisition. Consequently, possible differences in decision time between cue patterns resulting from information acquisition are not taken into account. It is, however, possible to extend PCS in this respect. Under the assumptions that a) individuals look up all pieces of cue information and b) need a fixed amount of time for each cue, information acquisition should be quicker by a fixed amount of time for both reduced versions of cue patterns compared to the complete version.



Fig. 3. Fit of PCS predictions for decision times and confidence with data collapsed for 4 cue patterns with 3 versions each (12 observations).

Note that this main effect of acquisition time works against the more-information-is-processed-faster-than-less hypothesis. That is, in the decreased coherence tasks the increase in processing time must overcome the main effect of decreased time for information uptake. Inspection of Table 5 shows that there is support for this additional main effect of information acquisition: the effect for increased coherence on time is twice as large as the effect for decreased coherence (although the effect for confidence is larger for the decreased coherence case). Therefore, the reported analysis of decision time is a conservative test of the more-information-is-processed-faster-than-less hypothesis. The effect of coherence on time for information integration might even be somewhat underestimated because differences in information acquisition are not taken into account.

4.3. The efficient interaction of deliberate and automatic processes

As in several previous studies (Glöckner & Betsch, 2008c; Glöckner & Bröder, 2011) we observed that participants are able to make decisions rather quickly while taking into account many cues and their validities. Considering the very quick responses in the current study (i.e., MD = 3.2 s), automatic processes of information integration seem to play an important role. However, it is also clear that parts of the process are under deliberate control. Hence, automatic and deliberate processes seem to jointly drive decisions. Recently, we postulated that deliberate and automatic processes serve different functions (Betsch & Glöckner, 2010; Glöckner & Betsch, 2008b). Deliberate processes are necessarily involved if the decision maker actively searches information in the environment, as was the case in our study. Moreover, deliberation is essential to modify mental representations of the decision problem by changing relations among elements in the working network or by generating new information via inferential processes. These processes are performed step-by-step, require conscious control and consume cognitive resources. We assume that they are supplemented by automatic processes that work in a holistic manner below the level of consciousness and that consume only a minimum of cognitive resources. We proposed that these (PCS) processes are responsible for integrating information in many decision situations. Such a notion is largely in line with recent default-interventionist (e.g., Evans, 2006; Kahneman & Frederick, 2002) and parallel-activation (Sloman, 2002) dual-process models (for an overview see Evans, 2008).

We claim that automatic processes are always activated and perform automatically in the mental background irrespective of the amount of deliberation. These processes can be nicely accounted for by the PCS approach. Different strategies come into play at the level of information search and construction of the problem space. The two scissors, described by Herbert Simon (1955), bound strategies of search and construction but not the process of information integration itself.

Appendix A. Description of strategies

Table A1
Strategies.

Lexicographic strategies/take-the-best (LEX/TTB)
  Description: Search cues in order of validity and choose the option according to the first cue that discriminates between options.
  Time prediction⁎⁎: Number of cues to be searched.
  Confidence prediction⁎⁎: Validity of the discriminating cue.

Equal Weight Strategy (EQW)
  Description: Add up all cue values and choose the option with the higher sum.
  Time prediction⁎⁎: Number of cues to be integrated.
  Confidence prediction⁎⁎: Difference between sums of the alternatives.

Weighted Additive Strategy (WADD)
  Description: Weight all cue values by their validity, add them up and choose the option with the higher weighted sum.⁎
  Time prediction⁎⁎: Number of cues to be integrated.
  Confidence prediction⁎⁎: Difference between weighted sums of the alternatives.

Random Choice Strategy (RAND)
  Description: Choose one option randomly.
  Time prediction⁎⁎: All equal.
  Confidence prediction⁎⁎: All equal.

Parallel Constraint Satisfaction (PCS)⁎⁎⁎
  Description: Construction of coherent interpretations by spreading activation in an iterative process until convergence. The option with the higher activation is chosen.
  Time prediction⁎⁎: Number of iterations to converge.
  Confidence prediction⁎⁎: Difference in activation between options.

⁎ Cue validities are corrected for chance level by subtracting 0.5 before weighting. ⁎⁎ In the model comparison described in Appendix C, predictions are generated as contrast weights comparing between tasks but within each strategy. ⁎⁎⁎ Network parameters for PCS are described in Appendix B. In the model comparison PCS is used in two implementations as described in Appendix C.
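As a hedged illustration of how the non-PCS competitors in Table A1 generate predictions, the sketch below returns a choice together with simple time and confidence predictors for a task. The exact scaling used as contrast weights in the model comparison is our simplification, not the authors' analysis code.

def ttb(task):
    """task: list of (validity, value_A, value_B) with values +1/-1."""
    for i, (v, a, b) in enumerate(sorted(task, reverse=True), start=1):
        if a != b:
            return ("A" if a > b else "B", i, v)   # choice, cues searched, validity of deciding cue
    return ("guess", len(task), 0.5)

def eqw(task):
    s = sum(a - b for _, a, b in task)             # difference between the two sums
    return ("A" if s > 0 else "B" if s < 0 else "guess", len(task), abs(s))

def wadd(task):
    s = sum((v - 0.5) * (a - b) for v, a, b in task)   # chance-corrected weights
    return ("A" if s > 0 else "B" if s < 0 else "guess", len(task), abs(s))

pattern1_complete = [(0.80, +1, -1), (0.70, +1, -1), (0.60, +1, -1), (0.55, -1, +1)]
for strategy in (ttb, eqw, wadd):
    print(strategy.__name__, strategy(pattern1_complete))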



Appendix B. Specification of PCS

PCS was simulated using the network model proposed by Glöckner and Betsch (2008b) with two layers. The first layer consisted of cue nodes which were activated by a general validity node. The second layer consisted of option nodes with mutual inhibition. Both layers are connected by bidirectional links representing cue predictions. Connections between the general validity node and the cue nodes represent the weight given to each cue. Spreading activation in the network is simulated by an iterative updating algorithm which uses a sigmoid activation function proposed by McClelland and Rumelhart (1981; see also Read and Miller, 1998):

a_i(t+1) = a_i(t)\,(1 - \mathrm{decay}) + \begin{cases} \mathrm{input}_i(t)\,\bigl(a_i(t) - \mathrm{floor}\bigr) & \text{if } \mathrm{input}_i(t) < 0 \\ \mathrm{input}_i(t)\,\bigl(\mathrm{ceiling} - a_i(t)\bigr) & \text{if } \mathrm{input}_i(t) \geq 0 \end{cases}

with

\mathrm{input}_i(t) = \sum_{j=1}^{n} w_{ij}\, a_j(t).

Table B1
Model parameters for PCS simulations.

decay (0.10): Decay parameter for node activation; influences the overall activation level of the nodes; the higher the value, the lower the final activation level.
w_o1–o2 (−0.20): Mutual inhibitory connection between options.
w_c–o (0.01 / −0.01): Connection between cues and options representing positive or negative cue predictions.
w_v (w_v = (v − 0.5)^p): Links between the general validity node and cues representing a priori cue validity. v is the objective cue validity, which is corrected for chance level (by subtracting 0.5). For the standard implementation of PCS, p was set to 1 (but see Appendix C).
ceiling / floor (1 / −1): Upper and lower limit for cue activations.
Stability criterion (10^−6): The network was considered to have reached a stable solution if there was no energy change in the network exceeding 10^−6 for 10 iterations.
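A compact re-implementation sketch of the updating scheme and parameters above is given below. It follows our reading of the specification; details such as clamping the general validity node at 1 and the exact energy bookkeeping are our assumptions and may differ from the authors' simulation code.

import numpy as np

def pcs_simulate(cue_values, validities, decay=0.10, w_inhib=-0.20,
                 w_cue=0.01, p=1.0, floor=-1.0, ceiling=1.0,
                 eps=1e-6, stable_iters=10, max_iters=100_000):
    """Two-layer PCS network for a two-option task (cf. Glöckner & Betsch, 2008b).

    cue_values: sequence of [value_A, value_B] rows with entries +1/-1.
    validities: objective cue validities, one per cue.
    Returns (choice, iterations, activation_difference).
    """
    n_cues = len(validities)
    n = 1 + n_cues + 2                                    # validity node, cues, two options
    cue_idx = np.arange(1, 1 + n_cues)
    opt_idx = np.array([1 + n_cues, 2 + n_cues])

    w = np.zeros((n, n))
    w[0, cue_idx] = (np.asarray(validities) - 0.5) ** p   # validity node -> cues
    w[np.ix_(cue_idx, opt_idx)] = w_cue * np.asarray(cue_values, dtype=float)
    w[opt_idx[0], opt_idx[1]] = w_inhib                   # mutual inhibition between options
    w = w + w.T                                           # make all links bidirectional

    a = np.zeros(n)
    a[0] = 1.0                                            # source node clamped at 1 (assumption)
    energy_prev, stable, t = np.inf, 0, 0
    while stable < stable_iters and t < max_iters:
        inp = w @ a
        growth = np.where(inp < 0, a - floor, ceiling - a)
        a = a * (1 - decay) + inp * growth                # update rule from Appendix B
        a[0] = 1.0
        energy = -a @ w @ a                               # simple consistency ("energy") measure
        stable = stable + 1 if abs(energy - energy_prev) < eps else 0
        energy_prev = energy
        t += 1
    diff = a[opt_idx[0]] - a[opt_idx[1]]
    return ("A" if diff > 0 else "B"), t, abs(diff)

# Cue pattern 1 from Table 3 in its three versions.
print("complete:", pcs_simulate([[+1, -1], [+1, -1], [+1, -1], [-1, +1]],
                                 [0.80, 0.70, 0.60, 0.55]))
print("decCo:   ", pcs_simulate([[+1, -1], [+1, -1], [-1, +1]], [0.80, 0.70, 0.55]))
print("incCo:   ", pcs_simulate([[+1, -1], [+1, -1], [+1, -1]], [0.80, 0.70, 0.60]))
# Per the model's predictions (cf. Fig. B1), the decreased-coherence version should
# need the most iterations and the increased-coherence version the fewest.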

a_i(t) represents the activation of the node i at iteration t. The parameters floor and ceiling stand for the minimum and maximum possible activation. input_i(t) is the activation node i receives at iteration t, which is computed by summing up all products of activations and connection weights w_ij for node i. Decay is a constant decay parameter.

The model was applied without formal parameter fitting. We used the parameters presented in Table B1, which are based on parameters used in previous simulations (Glöckner & Bröder, 2011; Glöckner & Hodges, 2011; Glöckner et al., 2010). We thereby had to adapt the function for determining cue weights w_v because we used objective cue validities instead of subjective cue usage ratings as input. Cue values for options A and B (i.e., w_c–o) were transformed into weights of −0.01 (negative prediction) or 0.01 (positive prediction). Cue weights (i.e., w_v) were computed from objective validities by correcting for chance validity. The option with the highest final activation is predicted to be chosen. The number of iterations to find the solution is used as predictor for decision time; the absolute difference in activation between the two options is used as predictor for confidence (Glöckner, 2010; Glöckner & Betsch, 2008b). Predictions are shown in Fig. B1.

Appendix C. Strategy classification

For a more in-depth investigation of interindividual differences, choices, decision time and confidence were simultaneously analyzed by conducting a Multiple Measure Maximum Likelihood strategy classification (Glöckner, 2009; Jekel et al., 2010). We thereby included the already described strategies PCS (with parametrization from Table B1), LEX/TTB, and WADD (with chance-corrected validities) in the analysis. As further competitors the comparison included an equal weight strategy (EQW), assuming that persons add up cue values without weighting them by validity; a random choice strategy (RAND); and a second implementation of PCS (PCS2). PCS2 uses the parameters listed in Table B1 except for applying a different transformation function for cue validities, which has been introduced in previous research (Glöckner & Bröder, 2011).

Fig. B1. PCS predictions for decision times and confidence.


Specifically, it uses w_v = (v − 0.5)² to transform cue validities into network weights (note the change in the exponent p). PCS2 has been shown to approximate a rational Bayes solution for probabilistic inferences best among all the competitors considered here (Jekel et al., under review). Following recent suggestions (Moshagen & Hilbig, 2011), the analysis was complemented by an additional global fit test for choices with p < 0.05 to avoid misclassification due to the fact that the real strategy was not included in the set of competitors. Time and confidence predictions for TTB, WADD, EQW and RAND were derived using standard conventions (for a detailed description see Glöckner, 2010).

This analysis revealed that 83 of the participants (74%) were best described by one of the two versions of PCS (PCS: 38; PCS2: 45); 23 participants (21%) were classified as users of WADD; 5 participants (5%) were classified as users of LEX/TTB; and one person was not classified because this person failed the global fit test for choices. The average prediction errors in choices of the different strategies (i.e., deviations of behavior from model prediction), taking into account all participants, were PCS: ε = 0.08; PCS2: ε = 0.05; TTB: ε = 0.05; EQW: ε = 0.14; WADD: ε = 0.05. The average individual correlations of model predictions and observed choice time were PCS: r = 0.33; PCS2: r = 0.34; EQW: r = 0.05; WADD: r = 0.05 (times were ln-transformed and order effects were partialled out before correlating; correlations were conducted per participant over 60 observations and averaged across all participants). The respective average correlations for confidence were PCS: r = 0.59; PCS2: r = 0.57; EQW: r = 0.33; WADD: r = 0.61 (TTB predicted no between-tasks variation in time and confidence and correlations cannot be computed).

References

Abelson, R. P., & Levi, A. (1985). Decision making and decision theory. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (3rd ed., Vol. 1, pp. 231–309). New York: Random House.
Ayal, S., & Hochman, G. (2009). Ignorance or integration: The cognitive processes underlying choice behavior. Journal of Behavioral Decision Making, 22, 455–474.
Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.
Beach, L. R., & Mitchell, T. R. (1996). Image theory, the unifying perspective. In L. R. Beach (Ed.), Decision making in the workplace: A unified perspective (pp. 1–20). Hillsdale, NJ: Lawrence Erlbaum.
Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.
Betsch, T. (2005). Preference theory: An affect-based approach to recurrent decision making. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 39–65). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Betsch, T., & Glöckner, A. (2010). Intuition in judgment and decision making: Extensive thinking without effort. Psychological Inquiry, 21, 279–294.
Betsch, T., Haberstroh, S., Molter, B., & Glöckner, A. (2004). Oops, I did it again—Relapse errors in routinized decision making. Organizational Behavior and Human Decision Processes, 93, 62–74.
Bettman, J., Johnson, E., & Payne, J. (1990). A componential analysis of cognitive effort in choice. Organizational Behavior and Human Decision Processes, 45, 111–139.
References

Abelson, R. P., & Levi, A. (1985). Decision making and decision theory. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (3rd ed., Vol. 1, pp. 231–309). New York: Random House.
Ayal, S., & Hochman, G. (2009). Ignorance or integration: The cognitive processes underlying choice behavior. Journal of Behavioral Decision Making, 22, 455–474.
Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.
Beach, L. R., & Mitchell, T. R. (1996). Image theory, the unifying perspective. In L. R. Beach (Ed.), Decision making in the workplace: A unified perspective (pp. 1–20). Hillsdale, NJ: Lawrence Erlbaum.
Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.
Betsch, T. (2005). Preference theory: An affect-based approach to recurrent decision making. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 39–65). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Betsch, T., & Glöckner, A. (2010). Intuition in judgment and decision making: Extensive thinking without effort. Psychological Inquiry, 21, 279–294.
Betsch, T., Haberstroh, S., Molter, B., & Glöckner, A. (2004). Oops, I did it again—Relapse errors in routinized decision making. Organizational Behavior and Human Decision Processes, 93, 62–74.
Bettman, J., Johnson, E., & Payne, J. (1990). A componential analysis of cognitive effort in choice. Organizational Behavior and Human Decision Processes, 45, 111–139.
Birnbaum, M. H., & Jou, J.-W. (1990). A theory of comparative response times and “difference” judgments. Cognitive Psychology, 22, 184–210.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113, 409–432.
Bröder, A., & Gaissmaier, W. (2007). Sequential processing of cues in memory-based multiattribute decisions. Psychonomic Bulletin & Review, 14, 895–900.
Bröder, A., & Schiffer, S. (2003). Bayesian strategy assessment in multi-attribute decision making. Journal of Behavioral Decision Making, 16, 193–213.
Bröder, A., & Schiffer, S. (2003). Take The Best versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132, 277–293.
Brown, N. R., & Tan, S. (2011). Magnitude comparison revisited: An alternative approach to binary choice under uncertainty. Psychonomic Bulletin & Review, 18, 392–398.
Bruner, J. S., & Goodman, C. C. (1947). Value and need as organizing factors in perception. Journal of Abnormal and Social Psychology, 42, 33–44.
Busemeyer, J. R., & Johnson, J. G. (2004). Computational models of decision making. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 133–154). Malden, MA: Blackwell Publishing.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459.

Cartwright, D., & Festinger, L. (1943). A quantitative theory of decision. Psychological Review, 50, 595–621.
DeKay, M. L., Patino-Echeverri, D., & Fischbeck, P. S. (2009). Better safe than sorry: Precautionary reasoning and implied dominance in risky decisions. Journal of Behavioral Decision Making, 22, 338–361.
DeKay, M. L., Patino-Echeverri, D., & Fischbeck, P. S. (2009). Distortion of probability and outcome information in risky decisions. Organizational Behavior and Human Decision Processes, 109, 79–92.
Diederich, A. (2003). Decision making under conflict: Decision time as a measure of conflict strength. Psychonomic Bulletin & Review, 10, 167–176.
Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review, 106, 180–209.
Evans, J. S. B. T. (2006). The heuristic–analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin & Review, 13, 378–395.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
Fazio, R. H. (1990). A practical guide to the use of response latency in social psychological research. In C. Hendrick & M. S. Clark (Eds.), Research methods in personality and social psychology (pp. 74–97). Thousand Oaks, CA: Sage Publications.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Fiedler, K. (1996). Explaining and simulating judgment biases as an aggregation phenomenon in probabilistic, multiple-cue environments. Psychological Review, 103, 193–214.
Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 1–17.
Fishburn, P. C. (1974). Lexicographic orders, utilities, and decision rules: A survey. Management Science, 20, 1442–1472.
Freeman, J. B., & Ambady, N. (2011). A dynamic interactive theory of person construal. Psychological Review, 118, 247–279.
Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York: Viking Press.
Gigerenzer, G., & Goldstein, D. G. (1999). Betting on one good reason: The take the best heuristic. In G. Gigerenzer, P. M. Todd, & the ABC Research Group, Simple heuristics that make us smart (pp. 75–95). New York, NY: Oxford University Press.
Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. New York, NY: Oxford University Press.
Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. New York, NY: Cambridge University Press.
Glöckner, A. (2009). Investigating intuitive and deliberate processes statistically: The Multiple-Measure Maximum Likelihood strategy classification method. Judgment and Decision Making, 4, 186–199.
Glöckner, A. (2010). Multiple measure strategy classification: Outcomes, decision times and confidence ratings. In A. Glöckner & C. L. M. Witteman (Eds.), Foundations for tracing intuition: Challenges and methods (pp. 83–105). London: Psychology Press & Routledge.
Glöckner, A., & Betsch, T. (2008). Do people make decisions under risk based on ignorance? An empirical test of the Priority Heuristic against Cumulative Prospect Theory. Organizational Behavior and Human Decision Processes, 107, 75–95.
Glöckner, A., & Betsch, T. (2008). Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making. Judgment and Decision Making, 3, 215–228.
Glöckner, A., & Betsch, T. (2008). Multiple-reason decision making based on automatic processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1055–1075.
Glöckner, A., & Betsch, T. (2010). Accounting for critical evidence while being precise and avoiding the strategy selection problem in a parallel constraint satisfaction approach — A reply to Marewski. Journal of Behavioral Decision Making, 23, 468–472.
Glöckner, A., & Betsch, T. (2011). The empirical content of theories in judgment and decision making: Shortcomings and remedies. Judgment and Decision Making, 6, 711–721.
Glöckner, A., Betsch, T., & Schindler, N. (2010). Coherence shifts in probabilistic inference tasks. Journal of Behavioral Decision Making, 23, 439–462.
Glöckner, A., & Bröder, A. (2011). Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time. Judgment and Decision Making, 6, 23–42.
Glöckner, A., & Herbold, A.-K. (2011). An eye-tracking study on information processing in risky decisions: Evidence for compensatory strategies based on automatic processes. Journal of Behavioral Decision Making, 24, 71–98.
Glöckner, A., & Hodges, S. D. (2011). Parallel constraint satisfaction in memory-based decisions. Experimental Psychology, 58, 180–195.
Glöckner, A., & Pachur, T. (2012). Cognitive models of risky choice: Parameter stability and predictive accuracy of Prospect Theory. Cognition, 123, 21–32.
Glöckner, A., & Witteman, C. L. M. (2010). Beyond dual-process models: A categorization of processes underlying intuitive judgment and decision making. Thinking and Reasoning, 16, 1–25.
Gould, W., Pitblado, J., & Sribney, W. (2006). Maximum likelihood estimation with Stata (3rd ed.). College Station, TX: Stata Press.
Greiner, B. (2004). An online recruitment system for economic experiments. In K. Kremer & V. Macho (Eds.), Forschung und wissenschaftliches Rechnen 2003. GWDG Bericht 63 (pp. 79–93). Göttingen: Ges. für Wiss. Datenverarbeitung.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on Systems, Man, and Cybernetics, 17, 753–770.
Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.

Hilbig, B. E. (2008). One-reason decision making in risky choice? A closer look at the priority heuristic. Judgment and Decision Making, 3, 457–462.
Hilbig, B. E. (2010). Reconsidering ‘evidence’ for fast and frugal heuristics. Psychonomic Bulletin & Review, 17, 923–930.
Hilbig, B. E., & Glöckner, A. (2011). Yes, they can! Appropriate weighting of small probabilities as a function of information acquisition. Acta Psychologica, 138, 390–396.
Hilbig, B. E., & Pohl, R. F. (2009). Ignorance- versus evidence-based decision making: A decision time analysis of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1296–1305.
Hilbig, B. E., Scholl, S. G., & Pohl, R. F. (2010). Think or blink—Is the recognition heuristic an “intuitive” strategy? Judgment and Decision Making, 5, 300–309.
Hochman, G., Ayal, S., & Glöckner, A. (2010). Physiological arousal in processing recognition information: Ignoring or integrating cognitive cues? Judgment and Decision Making, 5, 285–299.
Hogarth, R. M. (2001). Educating intuition. Chicago, IL: University of Chicago Press.
Holyoak, K. J., & Simon, D. (1999). Bidirectional reasoning in decision making by constraint satisfaction. Journal of Experimental Psychology: General, 128, 3–31.
Holyoak, K. J., & Spellman, B. A. (1993). Thinking. Annual Review of Psychology, 44, 265–315.
Horstmann, N., Ahlgrimm, A., & Glöckner, A. (2009). How distinct are intuition and deliberation? An eye-tracking analysis of instruction-induced decision modes. Judgment and Decision Making, 4, 335–354.
Jekel, M., Fiedler, S., & Glöckner, A. (2011). Diagnostic task selection for strategy classification in judgment and decision making. Judgment and Decision Making, 6, 782–799.
Jekel, M., Nicklisch, A., & Glöckner, A. (2010). Implementation of the Multiple-Measure Maximum Likelihood strategy classification method in R: Addendum to Glöckner (2009) and practical guide for application. Judgment and Decision Making, 5, 54–63.
Jekel, M., Glöckner, A., Fiedler, S., & Bröder, A. (under review). The rationality of different kinds of intuitive processes.
Juslin, P., & Persson, M. (2002). PROBabilities from EXemplars (PROBEX): A “lazy” algorithm for probabilistic inference from generic knowledge. Cognitive Science: A Multidisciplinary Journal, 26, 563–607.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York, NY: Cambridge University Press.
Lohse, G. L., & Johnson, E. J. (1996). A comparison of two process tracing methods for choice tasks. Organizational Behavior and Human Decision Processes, 68, 28–43.
Luce, R. D. (2000). Utility of gains and losses: Measurement-theoretical and experimental approaches. Mahwah, NJ: Erlbaum.
Luce, R. D., & Raiffa, H. (1957). Games and decisions: Introduction and critical survey. New York: Wiley.
McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88, 375–407.
Monroe, B. M., & Read, S. J. (2008). A general connectionist model of attitude structure and change: The ACS (Attitudes as Constraint Satisfaction) model. Psychological Review, 115, 733–759.

Moshagen, M., & Hilbig, B. E. (2011). Methodological notes on model comparisons and strategy classification: A falsificationist proposition. Judgment and Decision Making, 6, 814–820.
Newell, B. R., & Bröder, A. (2008). Cognitive processes, models and metaphors in decision research. Judgment and Decision Making, 3, 195–204.
Newell, A., & Simon, H. A. (1972). Human problem solving. Oxford, England: Prentice-Hall.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York, NY: Cambridge University Press.
Pleskac, T. J., & Busemeyer, J. R. (2010). Two-stage dynamic signal detection: A theory of choice, decision time, and confidence. Psychological Review, 117, 864–901.
Read, S. J., & Miller, L. C. (1998). On the dynamic construction of meaning: An interactive activation and competition model of social perception. In S. J. Read & L. C. Miller (Eds.), Connectionist models of social reasoning and social behavior (pp. 27–68). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Read, S. J., Vanman, E. J., & Miller, L. C. (1997). Connectionism, parallel constraint satisfaction processes, and Gestalt principles: (Re)introducing cognitive dynamics to social psychology. Personality and Social Psychology Review, 1, 26–53.
Rogers, W. H. (1993). Regression standard errors in clustered samples. Stata Technical Bulletin, 13, 19–23.
Savage, L. J. (1954). The foundations of statistics (2nd ed.). New York: Dover.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99–118.
Simon, D., Pham, L. B., Le, Q. A., & Holyoak, K. J. (2001). The emergence of coherence over the course of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 1250–1260.
Simon, D., Snow, C. J., & Read, S. J. (2004). The redux of cognitive consistency theories: Evidence judgments by constraint satisfaction. Journal of Personality and Social Psychology, 86, 814–837.
Sloman, S. A. (2002). Two systems of reasoning. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 379–396). New York: Cambridge University Press.
Thagard, P. (1989). Explanatory coherence. The Behavioral and Brain Sciences, 12, 435–502.
Thagard, P., & Millgram, E. (1995). Inference to the best plan: A coherence theory of decision. In A. Ram & D. B. Leake (Eds.), Goal-driven learning (pp. 439–454). Cambridge, MA: MIT Press.
Thomas, R. P., Dougherty, M. R., Sprenger, A. M., & Harbison, J. I. (2008). Diagnostic hypothesis generation and human judgment. Psychological Review, 115, 155–185.
Veblen, T. (1898). Why is economics not an evolutionary science? Quarterly Journal of Economics, 12, 373–397.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior (1st ed.). Princeton, NJ: Princeton University Press.
Wertheimer, M. (1938). Gestalt theory. In W. D. Ellis (Ed.), A source book of Gestalt psychology (pp. 1–11). London, England: Kegan Paul, Trench, Trubner & Company.