Highlights
From stock market to corporate economics, decision-making biases often arise from insufficient modelling of salient information
We hypothesized high cognitive ability to enable the expression of a model-based strategy
Both intelligence and working memory are linked with model-based strategy use
Intelligence shows higher predictive capability than working memory
Intelligence Predicts Choice in Decision-Making Strategies
Thomas Maran 1, Theo Ravet-Brown 2, Martin Angerer 3, Marco Furtner 1, Stefan E. Huber 2,4
1 Institute for Entrepreneurship, University of Liechtenstein, Liechtenstein
2 Department of Psychology, Leopold Franzens University Innsbruck, Austria
3 Institute for Finance, University of Liechtenstein, Liechtenstein
4 Institute for Ion Physics and Applied Physics, Leopold Franzens University Innsbruck, Austria
Correspondence: Ass.-Prof. Dr. Martin Angerer, University of Liechtenstein, Fürst-Franz-Josef-Strasse, 9490 Vaduz, Liechtenstein. Email: [email protected]
Funding information: none. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
All information is available at: https://osf.io/6n8z4/
Abstract
We analyze how cognitive sophistication shapes the individual balance between two economic decision strategies: model-based, which utilizes a mental map of the environment to calculate possible outcomes, and model-free, which habitually learns outcomes through trial and error. To address this question, we investigate how individual differences in cognitive ability, proxied by intelligence and working memory, shape the balance of these strategies. We provide the first evidence that these constructs are directly related to an individual's tendency to model task structures when facing economic choice scenarios. However, structural equation modeling shows intelligence to be the sole predictor of model-based behavior, without any effect of working memory. These findings offer new insights into the factors that shape person-to-person variance in economic decisions, and inform classical economic models of human behavior.
Keywords: Decision-making; Psychology; Economics; Cognitive ability; Intelligence
JEL classification: A12, D01, D81, D91
1 Introduction
Dealing with complex, dynamically changing decision environments by utilizing past experience is an essential component of economic decision-making. When making a decision in such situations of ambiguity, we use different strategies to utilize accrued information and calculate the values of the options at hand (Hirshleifer, 2015; Kahneman, 2011; Prokosheva, 2016). Choices are based either on habit, relying on the outcomes of past choices, or on the use of a model of outcome probabilities, whose construction entails cognitive cost (Benito-Ostolaza, Hernández & Sanchis-Llopis, 2016; Brocas & Carrillo, 2014; De Neys, 2006). Concordantly, Kahneman (2011) references two systems with broadly different abilities; the first system is automatic, fast and unconscious, while the second is deliberative, calculating, and thereby effortful (Kahneman, 2011; Kahneman & Frederick, 2002). One's behavior at the moment of reaching a decision results from the dominance of one of the two systems, which varies from person to person. This gives rise to the question of which factors shape the balance between choice strategies that are computationally less and more expensive, the latter being inherently aversive (Kool et al., 2010). This simple circumstance, that cognitive effort is aversive, may bias many people, depending on their cognitive sophistication, to over-rely on simpler, model-free choice strategies. Indeed, Kahneman in his seminal work (2011) recommended the examination of "effortful mental abilities that demand it, including complex computations" (Kahneman, 2011, p. 15) as crucial factors in shaping choice. This suggests that model-based economic behavior is easier to adopt for those higher in cognitive ability than for others. This leads to inter-individual variance in the way people make decisions, which remains a known unknown in classical economic models of human behavior (Brañas-Garza & Smith, 2016; Brocas, 2012). Our study aims to address this known unknown: how cognitive ability shapes the individual balance between model-free and model-based strategies in proximate decision-making. More specifically, we investigate whether the ability to perform complex computations predicts an individual's tendency to adopt an effortful strategy when faced with an economic decision scenario. Our findings show that higher cognitive ability shifts an individual's balance between the two decision strategies towards a dominance of the model-based system (e.g., Del Missier, Mäntylä & De Bruin, 2011; Toplak, West & Stanovich, 2011).
Theoretical background
Various dual-system interpretations of decision-making have been put forward, each with considerable theoretical and practical grounding and many finding applications in the field of economics (e.g., Benito-Ostolaza et al., 2016; Brocas & Carrillo, 2014). Some
differentiate rule-based and associative reasoning (Sloman, 1996), while others distinguish between intuition and reflection (Frederick, 2005), goal-directed and habitual behavior (Balleine & O'Doherty, 2010), or simply between thinking and doing (Schwabe & Wolf, 2013). It must be noted that, while sharing a dualistic approach, the conceptualizations mentioned above characterize and distinguish their two components in significantly different ways. What they have in common with most noteworthy dual-system approaches to human decision-making, however, is the ascription of deliberation and cognitive effort to one system, while assigning a faster, more automatic and less complex way of thinking to the other. Indeed, considerable evidence supports a common foundation underlying the different conceptualizations of the two systems of decision-making (e.g., Don et al., 2016; Dolan & Dayan, 2013; Evans & Stanovich, 2013). A more recent computational approach to this dual-process conception distinguishes a model-based and a model-free strategy for deciding between different options (Daw, Niv & Dayan, 2005). Model-based decision-making utilizes a mental map of the environment, which is used to calculate the possible outcomes of candidate choices online and is therefore cognitively effortful. In contrast, habitual choice is model-free, in the sense that choice outcomes are learned through trial and error, with no internal model of the relevant contingencies adduced when outcome expectations are retrieved at the moment of choice (Daw et al., 2005; Dayan & Berridge, 2014). Hence, whereas one system determines values based solely on previous choice outcomes, the other utilizes a mental map that integrates environmental and goal-related information to estimate the values of a choice option online. This dual-system approach, focusing on a mental model as the mechanism guiding choice, shares a common basis with previous dual-system conceptualizations, with links between model-based behavior and reflective choices already established (Don et al., 2016). Model-based decisions require effortful computations, while model-free decisions, by contrast, are driven merely by habit. This operationalization allows an assessment of how individuals address gaining access to reward in a highly dynamic environment with a probabilistic structure (Daw et al., 2011). Indeed, by stimulating the creation of a model of the decision environment, it permits determining the tendency towards either choice strategy. Identifying decision markers for each strategy gives a more differentiated assessment of strategy selection, with both strategies assessed according to distinct markers, and the measurement of one (e.g., model-free) not resting merely on a failure to employ the other (e.g., model-based). This approach thus acknowledges that, when faced with decisions that mirror economic reality (e.g., Corgnet, DeSantis & Porter, 2018; Grinblatt, Keloharju & Linnainmaa, 2011), the former strategy may also prove adaptive.
An inability to perform the operations necessary to qualify reward histories with further relevant information, gleaned from the decision environment and placed in an integrated model, leads to over-extrapolative behavior and thus to erroneous decision-making (Frydman & Camerer, 2016). Inter-individual differences in cognitive ability therefore make the effortful modeling of choice option values more taxing for some people than for others (e.g., Christopoulos, Liu & Hong, 2017; Evans, 2003; Kahneman & Frederick, 2002; Otto et al., 2013). This influences the balance between a model-free strategy, built on retrospectively molded expectations of rewards, and a model-based strategy, fueled by contextual information, thereby begetting inter-individual variance in the way people make economic decisions, which remains a known unknown for classical economic models of choice behavior. Focusing on mental abilities therefore makes sense. Cognitive ability not only explains variation in a range of economic decision phenomena (Frydman & Camerer, 2016; Rustichini et al., 2016) but also predicts the outcomes of adaptive economic decision-making in the real world (Hanushek & Woessmann, 2008). For example, cognitive abilities, and especially intelligence, exceed financial literacy, gender and income as predictors of stock market participation and higher returns (e.g., Cole & Shastry, 2009; Grinblatt et al., 2011). General cognitive ability is best assessed by proven and psychometrically well-designed performance tests for general intelligence and working memory (e.g., Deary, 2012). While the correlation between intelligence and working memory is high (Kyllonen & Christal, 1990; Colom, Abad, Quiroga, Shih & Flores-Mendoza, 2008), there are clear differences between these cognitive functions (Shipstead, Harrison & Engle, 2016). Whereas working memory describes the maintenance, retrieval, manipulation, and control of information needed for further computation (Conway, Kane & Engle, 2003), intelligence describes the construction of a mental model of complex task environments, specifically when solving novel problems (e.g., Duncan, Schramm, Thompson & Dumontheil, 2012; Nisbett et al., 2012). Working memory can be determined by testing the amount of information concurrently held pliant for computation (Conway et al., 2003); in contrast, general intelligence is best predicted by extracting rules to build mental models, thereby predicting the correct piece of a puzzle in complex patterns (Deary, 2012; Duncan et al., 2012).
Research questions
Although cognition is implicated by the computational expense of the model-based system (De Neys, 2006; Stanovich, 2011; Stanovich & West, 2000; Evans & Stanovich, 2013), there are few previous findings concerning its cognitive antecedents (e.g., Otto et al., 2014), and these previous studies offer but a fragmentary picture. We aim to shed light on how individual variation in cognitive ability shapes one's decision to either
rely on a model-based or a model-free system when faced with a dynamic economic task environment. To this end, we combine a broad battery of established psychometric tests with an incentivized, probabilistic decision paradigm designed to elicit model-based choice behavior (Markov two-stage paradigm; Daw et al., 2011; Eppinger et al., 2013).
To underline the applicability of the paradigm, we take a closer look at the Markov two-stage task employed in this study. It creates a task environment that incorporates both dynamic reward opportunities and a probabilistic transition structure (see Fig. 1). Two options are presented at a first stage; each of these options leads to one of two possible pairs of reward opportunities in a second stage, forming an overall decision tree. For either first-stage choice, one of the two second-stage pairs is reached with a far higher probability than the other, and each pair is associated with one of the two first-stage choices (see Fig. 1). When a reward is gained after accessing the more probable reward opportunity, it makes sense to exploit the corresponding first-stage option. Model-free behavior is reflected in an over-reliance on a first-stage option's reward outcome, which ignores transition probabilities, leading to a change of choice based purely on an absence of reward, even in the case of an unrewarded but improbable second-stage option. Model-based decision-makers, in contrast, persevere with the previously rewarded first-stage option, even when this choice leads to an unrewarded but improbable opportunity at the second stage. Hence, they are more likely to persevere with a profitable option in the face of an improbable non-profitable result, but also to explore new opportunities when deemed propitious. Such choice behavior indicates an active representation of the task environment's decision tree, incorporating the transition probabilities of each branch and thus the effective probability of reaching a second-stage option by making an appropriate first-stage choice.
We expect the adoption of a model-based strategy in economic decision-making to require expensive computations that load heavily on cognitive resources, hence requiring cognitive capacity of the type that is indexed by tests of intelligence and working memory. Therefore, individual differences in cognitive sophistication should predict one's individual balance between the strategies that underlie one's economic decisions in the task at hand. Both holding a reward history in active memory, which relies on working memory (e.g., Conway et al., 2003), and predicting the probabilistic occurrence of reward opportunities based on a complex mental model, supported by general intelligence (e.g., Deary, 2012), might lie at the heart of effectively employing a model-based strategy (e.g., Toplak et al., 2011). If lacking in cognitive ability, however, individuals might fail to build a model of the task at hand and to effectively align their decisions with it, hence rashly switching away from a promising option.
Hypothesis 1: Higher working memory supports the expression of model-based decision behavior.
Hypothesis 2: Higher general intelligence supports the expression of model-based decision behavior.
However, the question arises whether it is the operations common to intelligence and working memory, or rather their distinct features, that account for the adoption of model-based choice behavior. We argue that intelligence may be central to the construction of a mental model of a task environment and the adaptation of goal-oriented behavior to fit this model (e.g., Duncan et al., 2012). Therefore, we hypothesize that general intelligence outweighs the influence of working memory on model-based decision-making.
Hypothesis 3: General intelligence exceeds working memory as a predictor of model-based decision behavior.
2 Design and methods
Participants completed a Markov two-stage decision task designed to measure the balance of model-free to model-based choices, in which performance was incentivized, and performed tests of intelligence (analogy completion, number series and Raven's Progressive Matrices) and working memory (automated operation span, rotation span and symmetry span) in a separate session.
2.1 Participants
A sample of 141 young adult participants (98 females, 43 males; mean age = 21.36 years, SD = 4.00) was included in this study. All parts of the study were carried out in accordance with the recommendations of the Ethics Committee of the University of Innsbruck.
2.2 Markov two-stage decision task
The Markov decision task presented participants with a simple, two-stage simulation with the goal of selecting the most profitable of four possible second-stage choice options. The first stage offered a choice between two transition opportunities, represented by pictures of airplanes (labeled plane A and plane B) that appeared on the left and right sides of the screen, each of which transferred participants to one of two second-stage choice opportunities. The second-stage choices were represented as islands, with plane A leading to island A and plane B leading to island B. However, participants had a 70% chance of reaching their chosen destination and a 30% chance that the transition would go awry and lead to the other island. These transition probabilities stimulated the expression of model-based behavior by requiring participants to integrate them into a mental model, which is necessary to maximize the probability of choosing the most rewarding second-stage option. After selecting the first-stage choice option, one of the two second-stage pairs of options was displayed, represented by two images of GoGo figures on the left and right sides of the screen. These GoGo figures symbolized the two populations that could be traded with on each of the two islands. Different images were used for each of the four second-stage options. Selection of one of these populations led to a possible reward and ended the period, with the probability of receiving a reward changing with each repetition, predetermined by an independent and slowly drifting Gaussian random walk for each population. These random walks served to motivate participants to continuously adjust their choice behavior, thereby providing a consistent impetus to the task environment. Once the choice had been made, the two images were replaced by a message that informed participants whether a reward had been received. After a brief inter-period pause, a new period was initiated. Participants had to indicate their choice at each stage within two seconds by pressing either the "f" key or the "j" key on the keyboard in front of them; periods without a response within the designated time limit were aborted, and the next period started. The eight predetermined Gaussian random walks informing reward probabilities were created by adding Gaussian noise at each period (200 periods overall), with a standard deviation of 0.025 and a mean of 0. When averaged over the whole task, the random walks of each option varied by no more than ten percent from the mean of 50 percent, drifting slowly and independently within their reflecting boundaries. Each participant was randomly assigned one of four sets of random walks. After each rewarded trial, the screen displayed "+10 Cents", while an unrewarded trial yielded "0 Cents". The use of random walks to determine reward probability served a dual function. First, the random walks consistently invigorated dynamic shifts in choice behavior. Second, due to the constantly and slowly changing reward probabilities, neither a model-based nor a model-free strategy was optimal with a view to maximizing earnings, so neither decision strategy was reinforced. It is this crucial characteristic of the paradigm that allows an assessment of an individual's preference towards either a model-based or a model-free strategy without a bias inherent in the task caused by rewarding an optimal choice strategy.
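To make the task structure concrete, the following sketch simulates the environment under the parameters reported above (70%/30% transitions, 200 periods, Gaussian increments with SD = 0.025). It is an illustrative reconstruction, not the authors' implementation; the reflecting bounds, seed and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # illustrative seed

N_PERIODS = 200          # periods per participant, as reported
COMMON_P = 0.70          # probability of reaching the chosen island
WALK_SD = 0.025          # SD of the Gaussian increments
BOUNDS = (0.40, 0.60)    # assumed reflecting bounds (+/- 10% around 50%)

def reward_walks(n_options=4, n_periods=N_PERIODS):
    """Slowly drifting reward probabilities, one walk per second-stage option."""
    walks = np.empty((n_periods, n_options))
    p = rng.uniform(*BOUNDS, size=n_options)          # assumed starting values
    for t in range(n_periods):
        p = p + rng.normal(0.0, WALK_SD, size=n_options)
        # reflect at the boundaries so probabilities stay inside the band
        p = np.where(p < BOUNDS[0], 2 * BOUNDS[0] - p, p)
        p = np.where(p > BOUNDS[1], 2 * BOUNDS[1] - p, p)
        walks[t] = p
    return walks

def play_period(first_stage_choice, second_stage_policy, reward_p):
    """One period: plane choice -> probabilistic transition -> island choice -> reward.
    first_stage_choice is 0 or 1; reward_p holds the four current reward probabilities."""
    common = rng.random() < COMMON_P
    island = first_stage_choice if common else 1 - first_stage_choice
    option = second_stage_policy(island)              # 0 or 1 within the reached pair
    rewarded = rng.random() < reward_p[2 * island + option]
    return island, common, rewarded                   # a reward pays 10 cents
```

Because the walks drift slowly inside a narrow band, expected earnings are similar whichever strategy is pursued, which is the property that keeps the task from reinforcing either strategy.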
Fig. 1 (A) Schematic depiction of the Markov two-stage task structure, designed to elicit model-free and model-based behavior and to facilitate differentiation between them. (B) Slowly drifting Gaussian random walks, displaying the changing probability of winning on each period for each given choice of population.
Participants were instructed to attempt to maximize their outcomes throughout the task. They then completed a test session composed of a series of progressively more complex decision scenarios that built up to the employed two-stage paradigm, which involved both probabilistic transitions and reward probabilities subject to slowly drifting random walks (Eppinger et al., 2013; "Material and Methods" section, p. 4). Throughout the test session, participants were instructed verbally alongside detailed PowerPoint slides. More specifically, during the training session, participants first had to choose between 10 pairs of options in sequence that bore fixed reward probabilities of 60%. This was to familiarize them with the nature of probabilistic reward contingencies. Afterwards, 20 further trials were carried out in which participants had to choose the one option that carried the highest reward probability from four possible options. Once this aim had been achieved, the participants were informed of the slowly changing reward probabilities they would encounter during the task. Next, the participants were introduced to the probabilistic first-stage transition structure. Using a graphical representation, common (70%) and rare (30%) transitions were explained, and 20 trials incorporating stage-one to stage-two probabilistic transitions were played out in practice. Finally, 30 trials were completed as training for the task, incorporating all elements of the paradigm: the probabilistic transitions, the resulting shifts in second-stage choice options and the random reward probabilities for second-stage choices. Following this, participants were asked whether they understood how to proceed during the task. Performance in the Markov two-stage task was explicitly incentivized. Participants' rewards at each period were displayed on the screen. Notably, due to the random walks determining win probability, all participants showed a similar, i.e., narrow, range of earnings, irrespective of strategy (M = 99.18 points, SD = 6.14). More specifically, the incentivization followed a simple, linear structure in which 10 cents were received for every point achieved in the game. Thus, the mean payout amounted to 9 euros and 92 cents.
2.3 Measures of general intelligence 2.3.1. Analogies
Participants were presented with an incomplete analogy (e.g., "dog is to cat as cat is to …") and then asked to select one of five words that best completed the analogy (Papageorgiou et al., 2016). The test comprised 20 items, and the dependent variable was the percentage of correct answers.
2.3.2. Number series
Participants were given a series of numbers and asked to discern the rule governing the sequence's progression. They were then given five possible answers, from which to choose the correct number to complete the sequence (Thurstone, 1940; Unsworth et al., 2009). The test comprised 20 items, and the dependent variable was the percentage of correct answers.
2.3.3. Raven's Progressive Matrices
Presented in ascending order of difficulty, these three-by-three matrices each consisted of eight abstract, geometric patterns, with the bottom-right figure missing. Participants were asked to complete the overall series by selecting the correct choice from a set of alternatives displayed below. The test comprised 36 sets of matrices (Johnson & Bouchard, 2005; Raven, Raven & Court, 1998), and the dependent variable was the percentage of correct answers.
2.4. Measures of working memory
To assess working memory, we employed tasks measuring operation span, symmetry span and rotation span. The operation span task consisted of two parts, a processing component and a storage component. Processing ability was assessed by presenting a mathematical equation to solve; the storage component consisted of a letter, presented after the equation, to be remembered. After a number of equation-letter pairs, participants were required to recall the letters in the order in which they had been presented. At the end of the task, the number of correctly remembered letters was determined, and a score was calculated (Kane et al., 2004; Unsworth et al., 2005). The symmetry span task was identical, except that participants were asked to judge a series of figures for symmetry, thereafter having to remember the location of a red square in a matrix (Foster et al., 2014; Kane et al., 2004; Unsworth et al., 2005). The rotation span task was also identical but required participants to judge a rotated letter as either normal or mirrored, thereafter remembering the length and orientation of an arrow (Kane et al., 2004). On all three tasks, there were between three and five repetitions, and absolute scores were used as the variables for the statistical analyses.
3 Data analysis strategy
3.1 Model-based and model-free decision-making
To quantify individual differences in model-free and model-based expression, we calculated difference measures for each participant. Following previous work (Daw et al., 2011; Eppinger et al., 2013), the probability of staying with the first-stage decision of the previous period was used as the base measure for the calculation of model-free and model-based behavior ratios. The primary analysis conducted on the participant data from the Markov two-stage decision task was a calculation of "one-period-back" stay indexes under different conditions, which were simple functions of the transition category (common, rare) and the reward category (rewarded, unrewarded). We estimated the stay probabilities for each participant for the following four conditions: a common transition and a reward, a rare transition and a reward, a common transition and no reward, and a rare transition and no reward. A strictly model-free strategy can be assumed to be reflected in a main effect of reward on the stay probability. In contrast, a pure model-based strategy would result in an interaction of the reward and the transition type (Daw et al., 2011; Eppinger et al., 2013). Model-based decision-making results in participants choosing to switch after a rare transition is rewarded. A model-free strategy, on the other hand, disregards transition type, as it employs no cognitive map of the task contingencies. The model-free difference measure was calculated as follows: [common rewarded + rare rewarded] - [common unrewarded + rare unrewarded]. The model-based difference measure was calculated as follows: [common rewarded + rare unrewarded] - [common unrewarded + rare rewarded]. The higher the model-free difference measure is, the more participants stayed with their previous choice option irrespective of transition type. The model-based difference measure, on the other hand, adds the stay probabilities for profitable options, such as staying after a rewarded common transition and after an unrewarded rare transition, and subtracts the unprofitable probabilities, such as staying after a rewarded rare transition (in which case switching would be ideal) and after an unrewarded common transition.
3.2 Testing predictions
First, to test the relationships between cognitive abilities and model-free and model-based decision behaviors, we calculated Pearson product-moment correlation coefficients between the difference measures for model-free and model-based behaviors and the component scores of intelligence and working memory. Correlations are reported as r (± .10 = small effect; ± .30 = medium effect; ± .50 = large effect). In addition, to ensure robust statistical testing of the underlying relationships beyond frequentist correlational analyses, we calculated Bayes factors according to the guidelines of Wagenmakers et al. (2018). Bayes factors are reported as BF10 (1 to 3 = anecdotal evidence; 3 to 10 = moderate evidence; 10 to 30 = strong evidence; 30 to 100 = very strong evidence; >100 = extreme evidence; Lee & Wagenmakers, 2014). Data analyses were conducted using SPSS and JASP.
To illustrate the relationships between model-free and model-based expression and the two cognitive constructs of intelligence and working memory, we compiled a structural equation model in which the three measures used to assess each of intelligence and working memory loaded onto two latent variables (see Fig. 4). Maximum likelihood estimates were calculated using SPSS AMOS (Version 24.0.0). As descriptive measures of overall model fit, we used χ²/d.f. (threshold ≤ 3), the Root Mean Square Error of Approximation (RMSEA, threshold ≤ 0.06), and the Standardized Root Mean Square Residual (SRMR, threshold ≤ 0.08; Hu & Bentler, 1999). The Comparative Fit Index (CFI, threshold ≥ 0.95; Hu & Bentler, 1999) was used as a comparative measure of increased model fit between the proposed and the independence model, chosen because of its lower dependence on sample size. We report standardized coefficients for the structural equation model. All of our data have been uploaded to: https://osf.io/6n8z4/
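To make the conditioning described in section 3.1 concrete, the sketch below computes the four conditional stay probabilities and the two difference measures from a per-period record of first-stage choices. The data-frame layout and column names are assumptions for illustration, not the authors' code.

```python
import pandas as pd

def difference_measures(trials: pd.DataFrame) -> dict:
    """trials: one row per period, in order, with columns
    'choice1' (0/1 first-stage option), 'common' (bool), 'rewarded' (bool)."""
    df = trials.copy()
    # stay[t] is True if the first-stage choice at t repeats the choice at t-1
    df["stay"] = df["choice1"].eq(df["choice1"].shift(1))
    # condition each stay on the transition and reward of the PREVIOUS period
    df["prev_common"] = df["common"].shift(1)
    df["prev_rewarded"] = df["rewarded"].shift(1)
    df = df.dropna(subset=["prev_common", "prev_rewarded"])  # first period has no predecessor

    def p_stay(common, rewarded):
        mask = (df["prev_common"] == common) & (df["prev_rewarded"] == rewarded)
        return df.loc[mask, "stay"].mean()

    cr, rr = p_stay(True, True), p_stay(False, True)      # rewarded conditions
    cu, ru = p_stay(True, False), p_stay(False, False)    # unrewarded conditions

    return {
        "model_free": (cr + rr) - (cu + ru),      # main effect of reward
        "model_based": (cr + ru) - (cu + rr),     # reward-by-transition interaction
    }
```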
4 Results
Our results revealed a clear connection between the model-based and model-free difference measures and cognitive performance. The overall model-based and model-free difference measures were negatively correlated (r = -.248, p = .003, BF10 = 8.043). The model-based difference measure was significantly associated with all component scores of intelligence, revealing relationships with number series (r = .315, p < .001, BF10 = 137.150), Raven's Progressive Matrices (r = .290, p < .001, BF10 = 44.232) and analogies (r = .168, p = .046, BF10 = .754). These results provide evidence in support of our second hypothesis, that measures of intelligence support model-based choice behavior (see Table 1; Fig. 2). Of the employed measures of working memory, only rotation span (r = .260, p = .002, BF10 = 12.691) showed an association with model-based behavior, while operation span (r = .165, p = .050, BF10 = .707) and symmetry span (r = .059, p = .484, BF10 = .134) did not. Hence, only one measure of working memory is linked with model-based decision-making, permitting no clear conclusion regarding our first hypothesis (see Table 1; Fig. 2). Model-free behavior was negatively correlated with scores on the operation span measure (r = -.191, p = .023, BF10 = 1.335), which is in line with previous findings showing that high working memory protects against a regression to a model-free strategy (Otto et al., 2013). However, no significant associations were found between the model-free difference measure and analogies (r = -.125, p = .139, BF10 = .311), number series (r = .026, p = .760, BF10 = .110) or Raven's Progressive Matrices (r = .081, p = .338, BF10 = .166). Neither rotation span (r = -.110, p = .196, BF10 = .241) nor symmetry span (r = -.111, p = .191, BF10 = .246) was correlated with the model-free difference measure (see Table 1; Fig. 3).
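A minimal sketch of the frequentist part of this correlational screen follows (the reported Bayes factors were computed in JASP and are not reproduced here); the column names are illustrative assumptions matching the sketches above.

```python
from itertools import product

import pandas as pd
from scipy.stats import pearsonr

# one row per participant; column names are illustrative
ABILITY = ["analogies", "number_series", "raven", "ospan", "symspan", "rotspan"]
MEASURES = ["DiMB", "DiMF"]   # model-based and model-free difference measures

def correlation_screen(data: pd.DataFrame) -> pd.DataFrame:
    """Pearson r and p for every difference-measure x ability-score pair."""
    rows = []
    for dv, iv in product(MEASURES, ABILITY):
        r, p = pearsonr(data[dv], data[iv])
        rows.append({"measure": dv, "predictor": iv, "r": round(r, 3), "p": round(p, 3)})
    return pd.DataFrame(rows)
```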
Fig. 2 Correlations of the model-based difference measure and the test scores for intelligence: (A) Raven's Progressive Matrices, (B) analogies, (C) number series, and tests for working memory: (D) operation span, (E) symmetry span and (F) rotation span. The unstandardized values (N = 141) are displayed with linear regressions and a 95% confidence interval. Histograms on either side of the graphs denote frequency distributions.
Fig. 3 Correlations of the model-free difference measure and the test scores for intelligence: (A) Raven's Progressive Matrices, (B) analogies, (C) number series, and tests for working memory: (D) operation span, (E) symmetry span and (F) rotation span. The unstandardized values (N = 141) are displayed with linear regressions and a 95% confidence interval. Histograms on either side of the graphs denote frequency distributions.
Next, to test hypothesis three, we specified a structural equation model. To meet stringent methodological standards and ensure transparency, we calculated bootstrap estimates of robust standard errors for all regression weights (Antonakis, 2017), using 500 bootstrap samples (Arbuckle, 2016; Nevitt & Hancock, 2001; Yung & Bentler, 1996). The model showed that intelligence predicts the model-based difference measure (β = .454, SE = .166, p = .004). The standardized estimate for the path from working memory to the model-based difference measure failed to show a significant effect once the latent variable intelligence was controlled for (β = -.064, SE = .169, p = .649), confirming our third hypothesis. Neither intelligence (β = .048, SE = .158, p = .735) nor working memory (β = -.213, SE = .159, p = .136) showed any predictive effect on model-free expression. Intelligence and working memory showed a strong relationship when both were modeled as latent factors (β = .591, SE = .103, p < .001) (see Fig. 4). The observed data showed a good fit with the proposed structural model: χ²/d.f. = 1.37, p = .144, RMSEA = 0.05, SRMR = 0.05, CFI = 0.97.
Fig. 4 Structural equation model of the latent variables intelligence and working memory and their predictive effect on the model-free (DiMF) and model-based (DiMB) difference measures, as well as the relationship between the difference measures. Standardized coefficient estimates are displayed. N = 141.
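The structural model in Fig. 4 can be written compactly in lavaan-style syntax. Below is a minimal sketch assuming the Python semopy package (the authors used SPSS AMOS); variable names are the illustrative ones from the sketches above, and the bootstrap standard errors reported in the text are not included.

```python
import pandas as pd
import semopy

# two latent factors, each measured by three observed test scores,
# predicting the two difference measures (DiMB, DiMF), which also covary
MODEL_DESC = """
intelligence =~ analogies + number_series + raven
working_memory =~ ospan + symspan + rotspan
DiMB ~ intelligence + working_memory
DiMF ~ intelligence + working_memory
DiMB ~~ DiMF
"""

def fit_sem(data: pd.DataFrame):
    """data: one row per participant, one column per observed variable above."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                        # maximum-likelihood estimation
    estimates = model.inspect()            # parameter estimates with p-values
    fit_stats = semopy.calc_stats(model)   # chi-square, RMSEA, CFI, among others
    return estimates, fit_stats
```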
5 Discussion
Every day, people face choices. While some of these choices are easily solvable with a fast and easy model-free strategy, other choices, where a wealth of information needs consideration, require a more active, taxing approach: the creation of a mental model. Our results provide the first clear evidence that such model-based decision-making relies strongly on cognitive ability. Measured performance in both intelligence (hypothesis two) and working memory (hypothesis one) supported model-based choice behavior in a highly dynamic decision environment. Furthermore, intelligence was shown to be the dominant predictor (hypothesis three) of model-based decision behavior. The connection to working memory is well in line with previous findings; however, structural equation modeling revealed working memory's effect to be largely due to the overlapping contribution of intelligence. Intelligence is operationalized in tests that share a core requirement for subjects to extract rules, create a mental model thereof, and apply it to the problem at hand (Duncan et al., 2012; Shokri-Kojori & Krawczyk, 2018). Estimating the values of candidate choices in the current Markov two-stage paradigm requires the creation of precisely such a prospective model, built from incoming reward information and the probability associated with accessing an opportunity for gain. Working memory appears to protect against model-free regression (Otto et al., 2013); however, this finding should be approached cautiously, since it relies on one specific facet of working memory performance. Together, these results clearly demonstrate that cognitive abilities bias an individual's balance of arbitration between decision strategies, an effect not caused by reinforcement biasing participants towards either strategy. More specifically, highly intelligent individuals adopt a decision strategy based on a sophisticated mental map of the economic decision environment in defiance of the cognitive cost entailed. In contrast, individuals with lower cognitive ability might struggle to mobilize the required computational effort and therefore fail to align their choices with an accurate model of the task at hand, consequently applying a cognitively less demanding, model-free strategy. Hence, differences in cognitive ability from person to person shape their ability to employ a model-based decision strategy when faced with both uncertainty and a highly dynamic environmental structure. We thereby add to a line of research that aims to shed light on the roots of variation in real-world economic behavior, helping to explain how cognitive sophistication impacts real-world decision outcomes.
There is good reason for attempting to illuminate the mechanisms involved. Differences in cognitive ability have been shown to exceed financial literacy, gender and income as predictors of adaptive financial decision-making, with intelligence aiding in the avoidance of cognitive biases and enabling superior market timing and trade execution (e.g., Grinblatt et al., 2011, 2012). Crucially, cognitive ability has been linked to a range of distal phenomena affecting decision-making, from groups (Al-Ubaydli, Jones & Weel, 2016) to the economy as a whole (Frydman & Camerer, 2016; Hanushek & Woessmann, 2008; Moritz, Hill & Donohue, 2013; Corgnet et al., 2018).
Although our study provides clear evidence for the link between cognitive ability and decision-making in the face of uncertainty, some limitations must be considered. First, we find only a small negative relationship between model-based and model-free behaviors, suggesting that, while there are inter-individual differences in the preponderance towards one strategy, people may employ both in dynamic decision environments. Even so, intelligent people might be more adept at deciding when to effectively employ model-based strategies to supplement less effortful heuristics (e.g., Michalkiewicz, Arden & Erdfelder, 2018). Second, we chose to focus on individual variation in cognitive ability as a predictor of decision-making behavior. This does not account for all of the variance in decision-making, with some areas of competence influenced more by affective or experiential skills (e.g., Otto et al., 2013; Del Missier, Mäntylä & De Bruin, 2011). For example, the emotional and bodily state of participants might well have a marked effect on model-based decision-making. Evidence has indeed shown that even peripheral bodily signals have the power to shape choice by giving anticipatory cues (see, e.g., Bechara, Damasio, Tranel, & Damasio, 1997; Damasio, 1996; Hinson, Jameson, & Whitney, 2002; Poppa & Bechara, 2018). Moreover, an individual's attitude towards risk, which is also linked to cognitive ability (e.g., Dohmen, Falk, Huffman, & Sunde, 2010; Prokosheva, 2016; Taylor, 2016), might lead to a greater readiness to explore a variety of strategies. Third, although we simulate a highly dynamic decision environment, which corresponds well to many real-world decision situations, we test decision-making in the laboratory, thus providing a high degree of internal validity; this may, however, raise concerns about external, ecological validity (but see Dasgupta et al., 2019). Finally, and importantly, previous results demonstrate that general intelligence aids distal decision-making (e.g., Corgnet et al., 2018); our findings offer a proximal explanation, showing how model-based decision behavior rests on the contribution of intelligence. Nevertheless, more research is needed to connect the current dual-system model of choice behavior, reliant on either a model-based or a model-free strategy, with real-world outcomes (e.g., Moritz et al., 2013).
6 Conclusions
How people resolve choices between several different outcomes is an unanswered question, but one whose resolution has a myriad of possible applications. This study is the first to directly elucidate the connection between cognitive ability and model-free and model-based decision-making by combining the latter's measurement with an in-depth analysis of inter-individual differences in cognitive skill. Our results demonstrate that differences in intelligence from person to person predict variations in the balance of arbitration between the types of decision-making that guide their choices (Evans & Stanovich, 2013). Resolving the role of general intelligence in proximal decision-making reflects previous real-world findings, which show that intelligence aids adaptive financial decision-making and economic behavior (Corgnet et al., 2018; Grinblatt et al., 2011, 2012). Recent research highlights that the combination of methodologies from economics and cognitive psychology permits an unrivaled elucidation of the black box of human behavior in economic decision environments (e.g., Cueva et al., 2016; Rustichini et al., 2016), from financial investments (Corgnet et al., 2018) to operations management (Moritz et al., 2013). Therefore, a useful goal is to strive for a "richer characterization of economic agents via a better understanding of human cognition" (Thaler, 2000, p. 137).
7 Bibliography
Antonakis, John, 2017. "On doing better science: From thrill of discovery to policy implications." The Leadership Quarterly, 28(1), 5-21. https://doi.org/10.1016/j.leaqua.2017.01.006
Arbuckle, J. L., 2016. IBM SPSS Amos 24 user's guide. Chicago, IL: Amos Development Corporation.
Al-Ubaydli, Omar, Jones, Garett, and Weel, Jaap, 2016. "Average player traits as predictors of cooperation in a repeated prisoner's dilemma." Journal of Behavioral and Experimental Economics, 64, 50-60. https://doi.org/10.1016/j.socec.2015.10.005
Balleine, Bernard W. and O'Doherty, John P., 2010. "Human and rodent homologies in action control: corticostriatal determinants of goal-directed and habitual action." Neuropsychopharmacology, 35(1), 48. https://doi.org/10.1038/npp.2009.131
Bechara, Antoine, Damasio, Hanna, Tranel, Daniel, and Damasio, Antonio R., 1997. "Deciding advantageously before knowing the advantageous strategy." Science, 275(5304), 1293-1295. https://doi.org/10.1126/science.275.5304.1293
Benito-Ostolaza, Juan, Hernández, Penélope, and Sanchis-Llopis, Juan A., 2016. "Do individuals with higher cognitive ability play more strategically?" Journal of Behavioral and Experimental Economics, 64(C), 5-11. https://doi.org/10.1016/j.socec.2016.01.005
Brañas-Garza, Pablo and Smith, John, 2016. "Cognitive abilities and economic behavior." Journal of Behavioral and Experimental Economics, 64, 1-4. https://doi.org/10.1016/j.socec.2016.06.005
Brocas, Isabelle, 2012. "Information processing and decision-making: evidence from the brain sciences and implications for economics." Journal of Economic Behavior & Organization, 83(3), 292-310. https://doi.org/10.1016/j.jebo.2012.06.004
Brocas, Isabelle and Carrillo, Juan D., 2014. "Dual-process theories of decision-making: A selective survey." Journal of Economic Psychology, 41, 45-54. https://doi.org/10.1016/j.joep.2013.01.004
Christopoulos, George I., Liu, Xiao-Xiao, and Hong, Ying-yi, 2017. "Toward an Understanding of Dynamic Moral Decision Making: Model-Free and Model-Based Learning." Journal of Business Ethics, 144(4), 699-715. https://doi.org/10.1007/s10551-016-3058-1
Cole, Shawn and Shastry, Gauri Kartini, 2009. Smart money: The effect of education, cognitive ability, and financial literacy on financial market participation, Working Paper 09-071. Boston: Harvard Business School.
Colom, Roberto, Abad, Francisco J., Quiroga, M. Ángeles, Shih, Pei C., and Flores-Mendoza, Carmen, 2008. "Working memory and intelligence are highly related constructs, but why?" Intelligence, 36(6), 584-606. https://doi.org/10.1016/j.intell.2008.01.002
Conway, Andrew R. A., Kane, Michael J., and Engle, Randall W., 2003. "Working memory capacity and its relation to general intelligence." Trends in Cognitive Sciences, 7(12), 547-552. https://doi.org/10.1016/j.tics.2003.10.005
Corgnet, Brice, DeSantis, Mark, and Porter, David, 2018. "What makes a good trader? On the role of intuition and reflection on trader performance." The Journal of Finance, 73(3), 1113-1137. https://doi.org/10.1111/jofi.12619
Cueva, Carlos, Iturbe-Ormaetxe, Iñigo, Mata-Pérez, Esther, Ponti, Giovanni, Sartarelli, Marcello, Yu, Haihan, and Zhukova, Vita, 2016. "Cognitive (ir)reflection: New experimental evidence." Journal of Behavioral and Experimental Economics, 64, 81-93. https://doi.org/10.1016/j.socec.2015.09.002
Damasio, Antonio R., 1996. "The somatic marker hypothesis and the possible functions of the prefrontal cortex." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 351(1346), 1413-1420. https://doi.org/10.1098/rstb.1996.0125
Dasgupta, Utteeyo, Mani, Subha, Sharma, Smriti, and Singhal, Saurabh, 2019. "Internal and external validity: Comparing two simple risk elicitation tasks." Journal of Behavioral and Experimental Economics, 81, 39-46. https://doi.org/10.1016/j.socec.2019.05.005
Daw, Nathaniel D., Gershman, Samuel J., Seymour, Ben, Dayan, Peter, and Dolan, Raymond J., 2011. "Model-based influences on humans' choices and striatal prediction errors." Neuron, 69(6), 1204-1215. https://doi.org/10.1016/j.neuron.2011.02.027
Daw, Nathaniel D., Niv, Yael, and Dayan, Peter, 2005. "Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control." Nature Neuroscience, 8(12), 1704-1711. https://doi.org/10.1038/nn1560
Dayan, Peter and Berridge, Kent C., 2014. "Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation." Cognitive, Affective, & Behavioral Neuroscience, 14(2), 473-492. https://doi.org/10.3758/s13415-014-0277-8
De Neys, Wim, 2006. "Dual Processing in Reasoning: Two Systems but One Reasoner." Psychological Science, 17(5), 428-433. https://doi.org/10.1111/j.1467-9280.2006.01723.x
Deary, Ian J., 2012. "Intelligence." Annual Review of Psychology, 63(1), 453-482. https://doi.org/10.1146/annurev-psych-120710-100353
Del Missier, Fabio, Mäntylä, Timo, and De Bruin, Wändi Bruine, 2011. "Decision-making competence, executive functioning, and general cognitive abilities." Journal of Behavioral Decision Making, 25(4), 331-351. https://doi.org/10.1002/bdm.731
Dohmen, Thomas, Falk, Armin, Huffman, David, and Sunde, Uwe, 2010. "Are Risk Aversion and Impatience Related to Cognitive Ability?" American Economic Review, 100(3), 1238-1260. https://doi.org/10.1257/aer.100.3.1238
Dolan, Ray J. and Dayan, Peter, 2013. "Goals and habits in the brain." Neuron, 80(2), 312-325. https://doi.org/10.1016/j.neuron.2013.09.007
Don, Hilary J., Goldwater, Micah B., Otto, A. Ross, and Livesey, Evan J., 2016. "Rule abstraction, model-based choice, and cognitive reflection." Psychonomic Bulletin & Review, 23(5), 1615-1623. https://doi.org/10.3758/s13423-016-1012-y
Duncan, John, Schramm, Moritz, Thompson, Russell, and Dumontheil, Iroise, 2012. "Task rules, working memory, and fluid intelligence." Psychonomic Bulletin & Review, 19(5), 864-870. https://doi.org/10.3758/s13423-012-0225-y
Eppinger, Ben, Walter, Maik, Heekeren, Hauke R., and Li, Shu-Chen, 2013. "Of goals and habits: Age-related and individual differences in goal-directed decision-making." Frontiers in Neuroscience, 7, 253. https://doi.org/10.3389/fnins.2013.00253
Evans, Jonathan St. B. T., 2003. "In two minds: dual-process accounts of reasoning." Trends in Cognitive Sciences, 7(10), 454-459. https://doi.org/10.1016/j.tics.2003.08.012
Evans, Jonathan St. B. T. and Stanovich, Keith E., 2013. "Dual-process theories of higher cognition: Advancing the debate." Perspectives on Psychological Science, 8(3), 223-241. https://doi.org/10.1177/1745691612460685
Foster, Jeffrey L., Shipstead, Zach, Harrison, Tyler L., Hicks, Kenny L., Redick, Thomas S., and Engle, Randall W., 2014. "Shortened complex span tasks can reliably measure working memory capacity." Memory & Cognition, 43(2), 226-236. https://doi.org/10.3758/s13421-014-0461-7
Frederick, Shane, 2005. "Cognitive reflection and decision making." Journal of Economic Perspectives, 19(4), 25-42. https://doi.org/10.1257/089533005775196732
Frydman, Cary and Camerer, Colin F., 2016. "The psychology and neuroscience of financial decision making." Trends in Cognitive Sciences, 20(9), 661-675. https://doi.org/10.1016/j.tics.2016.07.003
Grinblatt, Mark, Keloharju, Matti, and Linnainmaa, Juhani, 2011. "IQ and Stock Market Participation." Journal of Finance, 66(6), 2121-2164. https://doi.org/10.1111/j.1540-6261.2011.01701.x
Grinblatt, Mark, Keloharju, Matti, and Linnainmaa, Juhani, 2012. "IQ, trading behavior, and performance." Journal of Financial Economics, 104(2), 339-362. https://doi.org/10.1016/j.jfineco.2011.05.016
Hanushek, Eric A. and Woessmann, Ludger, 2008. "The Role of Cognitive Skills in Economic Development." Journal of Economic Literature, 46(3), 607-668. https://doi.org/10.1257/jel.46.3.607
Hinson, John M., Jameson, Tina L., and Whitney, Paul, 2002. "Somatic markers, working memory, and decision making." Cognitive, Affective, & Behavioral Neuroscience, 2(4), 341-353. https://doi.org/10.3758/CABN.2.4.341
Hirshleifer, David, 2015. "Behavioral finance." Annual Review of Financial Economics, 7, 133-159. https://doi.org/10.1146/annurev-financial-092214-043752
Hu, Li-tze and Bentler, Peter M., 1999. "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives." Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118
Johnson, Wendy and Bouchard, Thomas J., 2005. "The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized." Intelligence, 33(4), 393-416. https://doi.org/10.1016/j.intell.2004.12.002
Kahneman, Daniel, 2011. Thinking, Fast and Slow. London: Allen Lane.
Kahneman, Daniel and Frederick, Shane, 2002. "Representativeness revisited: Attribute substitution in intuitive judgment." In Gilovich, Thomas, Griffin, Dale, and Kahneman, Daniel (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, 49-81. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511808098.004
Kane, Michael J., Hambrick, David Z., Tuholski, Stephen W., Wilhelm, Oliver, Payne, Tabitha W., and Engle, Randall W., 2004. "The generality of working memory capacity: a latent-variable approach to verbal and visuospatial memory span and reasoning." Journal of Experimental Psychology: General, 133(2), 189. https://doi.org/10.1037/0096-3445.133.2.189
Kool, Wouter, McGuire, Joseph T., Rosen, Zev B., and Botvinick, Matthew M., 2010. "Decision Making and the Avoidance of Cognitive Demand." Journal of Experimental Psychology: General, 139(4), 665-682. https://doi.org/10.1037/a0020198
Kyllonen, Patrick C. and Christal, Raymond E., 1990. "Reasoning ability is (little more than) working-memory capacity?!" Intelligence, 14(4), 389-433. https://doi.org/10.1016/S0160-2896(05)80012-1
Lee, Michael D. and Wagenmakers, Eric-Jan, 2014. Bayesian Cognitive Modeling: A Practical Course. Cambridge: Cambridge University Press.
Michalkiewicz, Martha, Arden, Katja, and Erdfelder, Edgar, 2018. "Do smarter people employ better decision strategies? The influence of intelligence on adaptive use of the recognition heuristic." Journal of Behavioral Decision Making, 31(1), 3-11. https://doi.org/10.1002/bdm.2040
Moritz, Brent B., Hill, Arthur V., and Donohue, Karen L., 2013. "Individual differences in the newsvendor problem: Behavior and cognitive reflection." Journal of Operations Management, 31(1-2), 72-85. https://doi.org/10.1016/j.jom.2012.11.006
Nevitt, Jonathan and Hancock, Gregory R., 2001. "Performance of bootstrapping approaches to model test statistics and parameter standard error estimation in structural equation modeling." Structural Equation Modeling, 8(3), 353-377. https://doi.org/10.1207/S15328007SEM0803_2
Nisbett, Richard E., Aronson, Joshua, Blair, Clancy, Dickens, William, Flynn, James, Halpern, Diane F., and Turkheimer, Eric, 2012. "Intelligence: new findings and theoretical developments." American Psychologist, 67(2), 130. https://doi.org/10.1037/a0026699
Otto, A. Ross, Raio, Candace M., Chiang, Alice, Phelps, Elizabeth A., and Daw, Nathaniel D., 2013. "Working-memory capacity protects model-based learning from stress." Proceedings of the National Academy of Sciences, 110(52), 20941-20946. https://doi.org/10.1073/pnas.1312011110
Otto, A. Ross, Skatova, Anya, Madlon-Kay, Seth, and Daw, Nathaniel D., 2014. "Cognitive control predicts use of model-based reinforcement learning." Journal of Cognitive Neuroscience, 27(2), 319-333. https://doi.org/10.1162/jocn_a_00709
Papageorgiou, Eleni, Christou, Constantinos, Spanoudis, George, and Demetriou, Andreas, 2016. "Augmenting intelligence: Developmental limits to learning-based cognitive change." Intelligence, 56, 16-27. https://doi.org/10.1016/j.intell.2016.02.005
Poppa, Tasha and Bechara, Antoine, 2018. "The somatic marker hypothesis: Revisiting the role of the 'body-loop' in decision-making." Current Opinion in Behavioral Sciences, 19, 61-66. https://doi.org/10.1016/j.cobeha.2017.10.007
Prokosheva, Sasha, 2016. "Comparing decisions under compound risk and ambiguity: The importance of cognitive skills." Journal of Behavioral and Experimental Economics, 64, 94-105. https://doi.org/10.1016/j.socec.2016.01.007
Raven, John, Raven, John C., and Court, John H., 1998. Advanced Progressive Matrices. Oxford: Oxford Psychologists Press.
Rustichini, Aldo, DeYoung, Colin G., Anderson, Jon E., and Burks, Stephen V., 2016. "Toward the integration of personality theory and decision theory in explaining economic behavior: An experimental investigation." Journal of Behavioral and Experimental Economics, 64, 122-137. https://doi.org/10.1016/j.socec.2016.04.019
Schwabe, Lars and Wolf, Oliver T., 2013. "Stress and multiple memory systems: from 'thinking' to 'doing'." Trends in Cognitive Sciences, 17(2), 60-68. https://doi.org/10.1016/j.tics.2012.12.001
Shipstead, Zach, Harrison, Tyler L., and Engle, Randall W., 2016. "Working Memory Capacity and Fluid Intelligence." Perspectives on Psychological Science, 11(6), 771-799. https://doi.org/10.1177/1745691616650647
Shokri-Kojori, Ehsan and Krawczyk, Daniel C., 2018. "Signatures of multiple processes contributing to fluid reasoning performance." Intelligence, 68, 87-99. https://doi.org/10.1016/j.intell.2018.03.004
Sloman, Steven A., 1996. "The empirical case for two systems of reasoning." Psychological Bulletin, 119(1), 3. https://doi.org/10.1037/0033-2909.119.1.3
Stanovich, Keith, 2011. Rationality and the Reflective Mind. Oxford: Oxford University Press.
Stanovich, Keith E. and West, Richard F., 2000. "Individual differences in reasoning: Implications for the rationality debate?" Behavioral and Brain Sciences, 23(5), 645-665. https://doi.org/10.1017/S0140525X00003435
Taylor, Matthew P., 2016. "Are high-ability individuals really more tolerant of risk? A test of the relationship between risk aversion and cognitive ability." Journal of Behavioral and Experimental Economics, 63, 136-147. https://doi.org/10.1016/j.socec.2016.06.001
Thaler, Richard H., 2000. "From Homo Economicus to Homo Sapiens." Journal of Economic Perspectives, 14(1), 133-141. https://doi.org/10.1257/jep.14.1.133
Thurstone, Louis L., 1940. "Current issues in factor analysis." Psychological Bulletin, 37(4), 189. https://doi.org/10.1037/h0059402
Toplak, Maggie E., West, Richard F., and Stanovich, Keith E., 2011. "The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks." Memory & Cognition, 39(7), 1275. https://doi.org/10.3758/s13421-011-0104-1
Unsworth, Nash, Heitz, Richard P., Schrock, Josef C., and Engle, Randall W., 2005. "An automated version of the operation span task." Behavior Research Methods, 37(3), 498-505. https://doi.org/10.3758/BF03192720
Unsworth, Nash, Redick, Thomas S., Heitz, Richard P., Broadway, James M., and Engle, Randall W., 2009. "Complex working memory span tasks and higher-order cognition: A latent-variable analysis of the relationship between processing and storage." Memory, 17(6), 635-654. https://doi.org/10.1080/09658210902998047
Wagenmakers, Eric-Jan, Marsman, Maarten, Jamil, Tahira, Ly, Alexander, Verhagen, Josine, Love, Jonathon, ... and Morey, Richard D., 2018. "Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications." Psychonomic Bulletin & Review, 25(1), 35-57. https://doi.org/10.3758/s13423-017-1343-3
Yung, Yiu-Fai and Bentler, Peter M., 1996. "Bootstrapping techniques in analysis of mean and covariance structures." In Marcoulides, George A. and Schumacker, Randall E. (Eds.), Advanced structural equation modeling: Techniques and issues, 195-226. Hillsdale, NJ: Lawrence Erlbaum.
Appendix Model-based and model-free determination: Markov two-stage decision task instructions Before commencing the assessment of their choice strategy during their first of two sessions, participants were carefully instructed as to the nature and procedure surrounding the task:
“In this study, you will play a simple economic game that repeatedly offers you a choice scenario. You are a travelling salesman, who is tasked with turning a profit from the trade with four island populations. Each turn, you will board a plane and attempt to reach one of the two islands, to engage with the most profitable of the populations. These four populations are represented by figures such as these [participants are shown the four GoGos on the screen], and are split two and two between the two islands. You will notice that each population offers you a different return. Furthermore, the rates of return for each population vary over time. It is your task to assess which is the most profitable and trade with it, until its profitability decreases. To complicate matters, you will be flying with an airline of uncertain competence. While you may choose an island as your destination in each turn, the chances of reaching that island are only 7 in 10. Approximately 3 times out of every 10, the plane will divert and take you to the island you did not choose. However, once on the island, you are free to choose whichever population you consider best.”
Then participants were instructed as to the precise nature of the probabilities involved in the task, using a PowerPoint presentation (see Design and Methods, section 2.2 Markov two-stage decision task). Furthermore, several detailed screens were used to acquaint the participants with the investment and reward structure employed to motivate them, with several example scenarios being illustrated and explained.