Cognitive Systems Research 24 (2013) 9–17 www.elsevier.com/locate/cogsys
Data acquisition dynamics and hypothesis generation

Nicholas D. Lange a,b,*, Eddy J. Davelaar a, Rick P. Thomas b

a Department of Psychological Sciences, Birkbeck, University of London, London, UK
b Department of Psychology, University of Oklahoma, Norman, OK, USA

Action editor: Hedderik van Rijn
Available online 22 January 2013
Abstract

When formulating explanations for the events we witness in the world, temporal dynamics govern the hypotheses we generate. In our view, temporal dynamics influence beliefs over three stages: data acquisition, hypothesis generation, and hypothesis maintenance and updating. This paper presents experimental and computational evidence for the influence of temporal dynamics on hypothesis generation through dynamic working memory processes during data acquisition in a medical diagnosis task. We show that increasing the presentation rate of a sequence of symptoms leads to a primacy effect, which is predicted by the dynamic competition in working memory that dictates the weights allocated to individual data in the generation process. Individual differences observed in hypothesis generation are explained by differences in working memory capacity. Finally, in a simulation experiment we show that activation dynamics at data acquisition also account for results from a related task previously held to support capacity-unlimited diagnostic reasoning.

© 2013 Elsevier B.V. All rights reserved.

Keywords: Hypothesis generation; Temporal dynamics; Working memory; Abduction; Diagnostic reasoning
* Corresponding author. Address: University of London, Birkbeck College, Department of Psychological Sciences, Malet Street, London WC1E 7HX, United Kingdom. Tel.: +44 (0) 20 7631 6535; fax: +44 (0) 20 7631 6312. E-mail address: [email protected] (N.D. Lange).
1389-0417/$ - see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.cogsys.2012.12.006

1. Introduction

Imagine you are a medical physician and an ailing patient has come to you in need of diagnosis. You observe that the patient has skin irritation, and she tells you that she is also suffering from a fever and severe cough. To determine the best course of action, you begin to consider likely diagnoses explaining her symptoms based on your past experience. We refer to this memory retrieval process as hypothesis generation and view it as an underlying determinant of many decisions we make on an everyday basis. Unlike many laboratory decision making tasks in which the options are provided, everyday decision making requires us to generate our own options (e.g., hypotheses,
explanations, beliefs) from long-term memory based on the information (i.e., data) available in the environment. In such instances, hypothesis generation is a pre-decisional process through which decision makers bring structure to the ill-structured situations they encounter. We view this process as ubiquitous, underlying much of our day-to-day decision making as well as decision making in many professional domains. Auditors, for instance, must generate hypotheses to explain abnormal financial patterns, and mechanics generate hypotheses concerning car failure. Given these examples and the case of medical diagnosis above, it is clear that impoverished or biased hypothesis generation can have injurious consequences. It is important to understand this process more fully so that we may circumvent such cognitive shortcomings and avoid the costs they induce. Previous research has demonstrated the memorial underpinnings of hypothesis generation and its consequences for understanding judgment and decision making. Sprenger and Dougherty (2006) demonstrated the influence of proactive interference in determining the hypothesis sets
people retrieve and consequently bring to bear on probability judgments. Sprenger et al. (2011) demonstrated that introducing a dual task during hypothesis generation causes judgments to be greater (i.e., more subadditive) than would otherwise be the case. Relatedly, the results of Dougherty and Hunter (2003a) suggest that time pressure truncates the number of hypotheses that people generate. These authors additionally found that the degree of miscalibration in people's judgments was negatively correlated with their working memory capacity (WMC). Similar results implicating WMC as a constraint on the number of generated hypotheses that can be used in service of judgment have also been found (Dougherty & Hunter, 2003b; Sprenger & Dougherty, 2006). As in our previous example of the patient with skin irritation, fever, and severe cough, when engaged in hypothesis generation tasks, information acquisition most often occurs serially. As a consequence of serial data acquisition, each piece of data is obtained in some relative ordering to other data. The constraints of individual data acquisition over time and the relative ordering of data are likely to have significant consequences for hypothesis generation and highlight the fact that understanding data acquisition dynamics is important for understanding hypothesis generation as a whole. In our view, temporal dynamics influence the beliefs we entertain over three stages: data acquisition, hypothesis generation, and hypothesis maintenance and updating (as further data are acquired or judgments and decisions rendered). This paper concerns the temporal dynamics unfolding over the initial data acquisition phase, which until now has remained relatively unaddressed. At present there exist limited empirical data concerning the temporal dynamics of hypothesis generation tasks. Therefore, we know little about how the constraints operating over these processes influence the beliefs we entertain.
Interest in understanding these underlying temporal dynamics is increasing, however. For instance, Sprenger and Dougherty (2012) and Lange, Thomas, and Davelaar (2012, Experiment 1) found a general recency bias in hypothesis generation, whereby people tended to generate hypotheses more consistent with data received later than with data received earlier. Additionally, Mehlhorn, Taatgen, Lebiere, and Krems (2011) tracked the activation of hypotheses during data acquisition and found that the activation of a hypothesis increased with the number of supporting data. These authors also considered a model of activation dynamics at the level of hypotheses. Importantly, their work concluded that the best-fitting model was one without a buffer component, which is at odds with the empirical literature associated with WMC. We take the position that Mehlhorn et al. (2011) did not test the ability of buffer models that maintain the data, instead of (or in combination with) the hypotheses, to account for their findings. In previous work we forwarded a model with activation dynamics at the level of data acquisition (Lange et al.,
2012). Here, we present results predicted by our model that strengthen the evidence for the view that data is acquired and maintained in a limited-capacity dynamic buffer before hypotheses are generated. We first introduce our model and a novel prediction stemming from it. This is followed by an experiment testing this prediction. We then fit our model to these data, uncovering the role of individual differences in WMC in the utilization of data in the generation of hypotheses. Finally, we return to the conclusion of Mehlhorn et al. (2011) that no buffer is involved in data acquisition/maintenance by capturing their critical data pattern in a model that maintains the observed data in a dynamic buffer.

2. A dynamic model of data acquisition and hypothesis generation

HyGene (Dougherty, Thomas, & Lange, 2010; Thomas, Dougherty, Sprenger, & Harbison, 2008), short for hypothesis generation, is a computational architecture addressing hypothesis generation, evaluation, and testing. The model has been useful in elucidating hypothesis generation as a general case of cued recall and in demonstrating its importance in judgment and decision making tasks. The model architecture is shown in Fig. 1. Processing begins when the model is presented with data from the environment (Dobs), which is first matched against episodic memory, from which an unidentified probe is extracted. This unidentified probe is essentially a prototype representation of all the highly activated memories in the episodic store; it carries no specific semantic information. To identify the hypotheses suggested by this probe, it is matched against semantic memory. This match gives rise to activation values for each hypothesis in semantic memory, which are then used as the basis for sampling hypotheses into working memory via Luce's choice rule. These memory retrieval functions appear as steps one through four in Fig. 1.
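The sampling-with-retrieval-failures scheme above can be sketched in a few lines of Python. This is a minimal illustration, not the HyGene implementation: the activation values, the activation criterion ActMinH, and the disease names are hypothetical placeholders, and only the Luce-sampling loop with the K/Kmax failure counters from Fig. 1 is reproduced.

```python
import random

def luce_sample(acts, rng):
    """Sample one hypothesis with probability proportional to its
    semantic activation (Luce's choice rule)."""
    r = rng.random() * sum(acts.values())
    for hypothesis, activation in acts.items():
        r -= activation
        if r <= 0.0:
            return hypothesis
    return hypothesis  # guard against floating-point slack

def generate_soc(acts, act_min_h=0.1, k_max=8, seed=0):
    """Fill the Set of Leading Contenders by repeated Luce sampling.

    A sample counts as a retrieval failure when the hypothesis is already
    in the SOC or falls below the activation criterion; generation stops
    after k_max failures (cf. the K and Kmax counters in Fig. 1).
    """
    rng = random.Random(seed)
    soc, failures = [], 0
    while failures < k_max:
        h = luce_sample(acts, rng)
        if h in soc or acts[h] < act_min_h:
            failures += 1
        else:
            soc.append(h)
    return soc

# Hypothetical semantic activations for three candidate diseases.
activations = {"flu": 0.55, "measles": 0.30, "pox": 0.15}
print(generate_soc(activations))
```

Because sampling is proportional to activation, strongly activated hypotheses tend to enter the SOC early, while weakly activated ones are increasingly likely to be crowded out by retrieval failures.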
The working memory construct in HyGene is called the set of leading contenders (SOC). Once retrieval has terminated, the hypotheses in the SOC can serve as input to further tasks. Step five of Fig. 1 shows that the SOC then provides the basis for probability judgments and hypothesis testing (i.e., information search). Until recently this model has been static with regard to data acquisition processes. We have recently forwarded a dynamic version of HyGene (Lange et al., 2012) in which the weights allocated to each piece of data are determined by the competitive working memory dynamics of a recent model of list recall (Davelaar, Goshen-Gottstein, Ashkenazi, Haarmann, & Usher, 2005). These working memory processes form the dynamic data acquisition mechanisms in the dynamic HyGene model. As each piece of data is presented to the model, its activation value is determined by the competitive processes described by Eq. (1), which expresses the activation of each individual item, x_i, as being determined
Fig. 1. Flow diagram of processing in HyGene. As = semantic activation of retrieved hypothesis, ActMinH = minimum semantic activation criterion for placement of hypothesis in SOC, K = total number of retrieval failures, and Kmax = number of retrieval failures allowed before terminating hypothesis generation.
by three main factors. Each item receives sensory input I_i while it is presented to the model, and receives a further boost in activation from the self-recurrent excitation that each item recycles onto itself (the α parameter). Additionally, as there is competition for memory activation, each item receives global lateral inhibition from the other active items; the amount of inhibition received by each item is controlled by the β parameter. Finally, on each time step of data presentation, zero-mean Gaussian noise N with standard deviation σ is applied to active items, and λ is the Euler integration constant that discretizes the underlying differential equation. Eq. (1), the activation rule of the context-activation model:

x_i(t+1) = \lambda x_i(t) + (1 - \lambda)\Big[\alpha F(x_i(t)) + I_i(t) - \beta \sum_{j \neq i} F(x_j(t)) + N(0, \sigma)\Big]   (1)
As will be seen below, these competitive dynamics dictate that the memory activation possessed by each piece of data acquired from the environment has a dynamic trajectory which ebbs and flows over time. At the end of the data acquisition phase, the activation value of each piece of data determines its contribution to the retrieval processes in hypothesis generation. That is, the activation values for each piece of data, as determined by Eq. (1), serve as the weights on the retrieval processes in HyGene, specifically on the initial match between the Dobs and episodic memory (Step 1 in Fig. 1).¹

¹ For the computational details of this model, please refer to Lange et al. (2012).
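Eq. (1) can be simulated directly. The sketch below uses the buffer parameters reported in the footnotes (alpha = 2.0, lambda = 0.98) but makes two assumptions for illustration: a threshold-linear output function F saturating at 1, and a sensory input strength of 0.33; neither value is taken from the paper, and the runs are noiseless.

```python
def F(x):
    """Threshold-linear output, saturating at 1 (assumed form of F)."""
    return max(0.0, min(1.0, x))

def present_sequence(n_items, iters_per_item, alpha=2.0, beta=0.2,
                     lam=0.98, input_strength=0.33):
    """Noiseless Euler integration of Eq. (1) over a data sequence.

    Item i receives bottom-up input only during its own presentation
    window; all items receive self-excitation and lateral inhibition
    on every step. Returns the final activation of each item.
    """
    x = [0.0] * n_items
    for current in range(n_items):
        for _ in range(iters_per_item):
            out = [F(v) for v in x]
            total = sum(out)
            x = [lam * x[i] + (1.0 - lam) * (alpha * out[i]
                 + (input_strength if i == current else 0.0)
                 - beta * (total - out[i]))
                 for i in range(n_items)]
    return x

# One item with no competition: activation climbs well above the 0.2 threshold.
print(present_sequence(1, 500))
# Five items at a fast vs. a slow rate, as in Fig. 2.
fast = present_sequence(5, 100)
slow = present_sequence(5, 1500)
```

With noise added (the reported simulations use sd = 1.0), the trajectories ebb and flow as in Fig. 2; which items remain above the 0.2 threshold at the end of the sequence depends on the presentation rate and on beta.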
3. A novel prediction of the dynamic HyGene model

Fig. 2 illustrates the competitive data acquisition dynamics described above in two noiseless runs in which five pieces of data are presented to the model at a slow rate of 1500 iterations per datum (top panel) and a fast rate of 100 iterations per datum (bottom panel). The activation of the internal representation of each datum rises as it is presented to the model and its bottom-up sensory input contributes to its activation. These activations are then dampened in the absence of bottom-up input, as inhibition from the other items drives activation down. Self-recurrency can keep an item in the buffer in the absence of bottom-up input, but this ability diminishes with the amount of competition from other items in the buffer. As can be seen, the model predicts very different patterns of data activation values under the slow and fast presentation rates. Whereas an "early in, early out" pattern is predicted under the slow presentation rate, the inverse is predicted under the fast rate. This "early in, stay in" pattern arises from the truncated duration of bottom-up activation for each item: under the fast rate there is not enough time for the later items to overcome the inhibition already present in the buffer from the earlier items, as occurs at the slow presentation rate. Taken together, the model predicts a shift from recency to primacy with increases in the presentation rate of the data, a shift that has been confirmed in a cued recall study (Davelaar et al., 2005). We now present an experiment investigating this prediction using a simulated medical diagnosis task. In this task, participants first learn the contingencies between symptoms and (fictitious) diseases and are then presented with a new patient whose five symptoms are presented sequentially. The task is to choose the most likely disease given the
Fig. 2. Activation trajectories for 5 sequentially received data at slow presentation rate (top) and fast presentation rate (bottom). F(x) = WM activation and 0.2 = WM threshold.
symptoms provided. In this experiment, we examined how processing time (i.e., presentation duration) per datum influences the contributions of the individual data to the generation process. The dynamic HyGene model discussed above is then used to simulate the findings from this experiment. Following this, we apply a related model (assuming data acquisition dynamics) to results from a related task which has thus far not been considered from this perspective.

4. Experiment

4.1. Participants

One hundred and twenty-four participants took part in this experiment for course credit.

4.2. Design

The design was a one-way between-subjects design with the presentation rate of the data as the independent variable. The ecology for this experiment appears in Table 1, in which the values represent the probability of a piece of data (i.e., D1–D5) being present given a particular hypothesis (i.e., H1 or H2). The important aspect of this ecology is that the early data (D1 and D2) are diagnostic in favor of H1, whereas the later data (D4 and D5) favor H2. Note that once all data have been presented, the posterior probabilities of the two diseases are equal.
Table 1
Disease (hypothesis) × symptom (data) ecology. H = hypothesis and D = data.

                 D1     D2     D3     D4     D5
H1: Metalytis    0.8    0.7    0.5    0.3    0.3
H2: Zymosis      0.3    0.3    0.5    0.7    0.8
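A quick Bayesian check confirms both claims about the ecology. This is a sketch that assumes the symptoms are conditionally independent given the disease, that base rates are equal (as enforced in the training phase), and that all presented symptoms are present.

```python
from math import prod

# P(symptom present | disease) for D1..D5, from Table 1.
p_given_h1 = [0.8, 0.7, 0.5, 0.3, 0.3]
p_given_h2 = [0.3, 0.3, 0.5, 0.7, 0.8]

def posterior_h1(n_symptoms):
    """P(H1 | first n symptoms all present), with equal priors and
    conditional independence of symptoms assumed."""
    like1 = prod(p_given_h1[:n_symptoms])
    like2 = prod(p_given_h2[:n_symptoms])
    return like1 / (like1 + like2)

print(posterior_h1(2))  # after D1 and D2: strongly favors H1
print(posterior_h1(5))  # after all five data: 0.5
```

The early posterior favoring H1 and the final posterior of exactly 0.5 follow directly from the mirror symmetry of the two likelihood rows.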
4.3. Procedure

The procedure comprised three main stages. The first stage was an exemplar training task in which a series of hypothetical pre-diagnosed patients was presented so that participants could learn the contingencies between the hypotheses and data through repeated experience. Each of these patients was represented by a diagnosis at the top of the screen (H1 or H2) and test results (i.e., symptoms) in columns corresponding to D1 through D5. Over the course of the training phase, the specific test results precisely respected the disease-symptom contingencies appearing in Table 1, and the same number of patients was presented with each disease, thereby holding the disease base rates equal. Stage 2 of the procedure was a learning test to discriminate participants who had learned the disease-symptom contingencies well from those who had not. For this test, participants were provided with an individual piece of data and asked to indicate the most likely disease. Their total learning score was the number of correct
responses in this task. Responses to the D3 datum were automatically counted as correct, as both hypotheses were equally likely given that symptom. Following an arithmetic distraction task, the third stage of the procedure commenced. This was an elicitation phase in which we implemented our manipulation of data presentation rate and assessed hypothesis generation performance. Participants were told that they would now see an individual patient's symptoms one at a time and would then be asked to report the most likely diagnosis for the patient. Participants were familiarized with each of the possible presentation rates so that those in the fast condition were not caught off guard by the quick speed of presentation. The participant triggered the onset of the patient's data stream at their readiness. Each datum was presented individually at a fast rate (144 ms per datum) or a slow rate (1504 ms per datum) and always appeared in the order displayed in Table 1 (from D1 to D5). Directly following the last piece of data, the participant entered the disease they thought was most likely given the patient's symptoms, as displayed in Fig. 3. As displayed in Fig. 2, the context-activation model predicts that the fast presentation rate leads to the earlier data residing more strongly in working memory following D5, whereas the model predicts the opposite for the slower presentation rate. We therefore predicted that the fast presentation rate would lead to greater relative activations of early data, and thereby to greater generation of H1, whereas slow presentation would favor the later data, leading to a preference for H2 and accordingly a lower rate of H1 generation relative to the fast condition.
4.4. Results

Although the rate of H1 selection was slightly higher in the fast presentation rate condition, this difference did not reach significance, z = 1.27, one-tailed p = .102. A further analysis was performed within groups of high-learning and low-learning participants based on their performance on the learning test. Those scoring higher than 60% were counted as high learners and those scoring lower as low learners. Conditional analyses within each learning group revealed the predicted effect of presentation rate for the low learners, z = 1.6, p = .054, and no effect for the high learners, z = 0.34, p = .367. This result reflects the fact that the trend witnessed in the overall data was, somewhat counter-intuitively, due to those who had not learned the task contingencies as fully. These results are plotted as the bars in Fig. 4. We explain this effect below with our model.

5. Simulating individual differences in hypothesis generation via emergent competitive working memory dynamics

In line with the empirical result, we are not solely concerned with capturing differences resulting from presentation rate; we are additionally interested in capturing the difference manifesting between the high- and low-learning groups. We posit this difference to be attributable to working memory capacity (WMC). It is likely that high-capacity participants were better able to learn the contingencies, as each exemplar provided several bits of information for encoding. Furthermore, successful learning likely included some form of hypothesis testing carried
Fig. 3. Schematic of the elicitation trial sequence in the experiment.
Fig. 4. Human and model results for the experiment, plotting the probability of generating H1 and H2 as most likely, by presentation rate and learning/capacity group. Error bars represent standard errors.
out over successive exemplars, which would be cognitively taxing but beneficial to learning. Moreover, general cognitive ability has been found to be related to hypothesis testing (Stanovich & West, 1998), and links with WMC have been suggested previously (Dougherty et al., 2010). We therefore suggest that the high-learning group contained a greater proportion of high-capacity participants, with the opposite being the case for the low-learning group.² In the present analysis, we ask whether differences in a parameter governing the emergent capacity of the buffer can explain the presence of a presentation rate effect amongst low learners and its absence amongst high learners. As the beta parameter governs the strength of the global lateral inhibition applied to each item, it can be used to impose capacity constraints (Davelaar, 2007). As beta increases, competition between items increases and fewer items can co-occupy the buffer. To highlight the role of beta in governing the capacity of the buffer, we ran a simulation assessing the capacity of the working memory buffer at the end of a data presentation sequence under various levels of beta and different presentation rates. In this simulation, the buffer was provided with five pieces of data, as in the above experiment. Fig. 5 displays the results as a contour plot. Buffer capacity was clearly sensitive to beta: lower values of beta resulted in higher capacity, and higher values in lower capacity. In addition, the effect of beta interacted with presentation duration, strongly so up to about 1000 iterations per datum. Between presentation rates of 100 and 1000 iterations, capacity decreased as presentation slowed. This can be seen by comparing the stars on the left, at a presentation rate of 100 iterations (and beta levels of 0.05 and 0.1), with their counterparts on the right at 1500 iterations: moving rightward, capacity decreases.
In sum, this simulation demonstrates the highly interactive nature of task variables (i.e., data presentation rate) and internal dynamics (i.e., the amount of lateral inhibition) in determining the emergent capacity of the buffer. For the present experiment, however, the focus is on beta as governing capacity: higher beta leads to lower capacity and lower beta to higher capacity. To simulate the empirical results above, the model was endowed with experience in the ecology displayed in Table 1. The manipulation of presentation rate was implemented in the model by varying the number of iterations for each piece of data: in the fast condition each piece of data received bottom-up input for 100 iterations, whereas in the slow condition each received bottom-up input for 1500 iterations. To simulate the high- and low-learning groups from the experiment, we manipulated beta at two levels to capture low learning/low capacity (beta = 0.1) and high learning/high capacity (beta = 0.05), and used presentation rates of 100 iterations (fast) and 1500 iterations (slow). This resulted in an average buffer capacity of 2.73 in the slow rate and 4.14 in the fast rate under beta = 0.05, and of 2.07 in the slow rate and 3.37 in the fast rate under beta = 0.1. Therefore, on average more items were active in working memory when beta = 0.05 than when beta = 0.1. The entire simulation was run for 700 model runs per condition.³ The specific parameter values used here were based on a grid search of the model's behavior across a range of parameter values. Importantly, the model predicts the same qualitative pattern demonstrated in Fig. 4 across a large swath of the parameter space, although the quantitative fit suffers with deviation from the parameters utilized here. As demonstrated in Fig. 4, the model captures the patterns in the data arising from differences in presentation rate and from differences in learning between groups.

² We note that we did not collect measures of working memory capacity in this experiment. However, using the dynamic working memory model, we can test the validity of this hypothesis.
³ The parameters used for this simulation were the following. HyGene: L = 0.85, Ac = 0.075, Phi = 4, Kmax = 8. Data acquisition buffer: alpha = 2.0, beta = {0.05, 0.1}, lambda = 0.98, noise = 1.0.
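The capacity-counting analysis behind Fig. 5 can be approximated with the same buffer equation. The sketch below is an illustration under assumed settings (threshold-linear F saturating at 1, input strength 0.33, noise applied inside the integration bracket, 30 runs per cell); exact capacity values will differ from those reported, but the direction of the beta effect (lower beta, higher emergent capacity) comes out of the counting.

```python
import random

def F(x):
    """Threshold-linear output, saturating at 1 (assumed form of F)."""
    return max(0.0, min(1.0, x))

def buffer_capacity(beta, iters_per_item, n_items=5, alpha=2.0, lam=0.98,
                    input_strength=0.33, sigma=1.0, rng=None):
    """Run Eq. (1) with noise over one five-item sequence and count the
    items whose output exceeds the 0.2 working-memory threshold at the end."""
    rng = rng or random.Random(0)
    x = [0.0] * n_items
    for current in range(n_items):
        for _ in range(iters_per_item):
            out = [F(v) for v in x]
            total = sum(out)
            x = [lam * x[i] + (1.0 - lam) * (alpha * out[i]
                 + (input_strength if i == current else 0.0)
                 - beta * (total - out[i])
                 + rng.gauss(0.0, sigma))
                 for i in range(n_items)]
    return sum(F(v) > 0.2 for v in x)

def mean_capacity(beta, rate, runs=30):
    """Average end-of-sequence capacity across stochastic runs."""
    rng = random.Random(42)
    return sum(buffer_capacity(beta, rate, rng=rng) for _ in range(runs)) / runs

for beta in (0.05, 0.1):
    for rate in (100, 1500):
        print(beta, rate, mean_capacity(beta, rate))
```

Sweeping beta and rate over finer grids in this way reproduces the kind of contour map shown in Fig. 5, with capacity falling as inhibition strengthens.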
Fig. 5. Contour plot displaying the capacity of the buffer at the end of the data presentation sequence plotted by beta level and presentation rate. Black and white stars represent the presentation rates and levels of beta used in the simulation below.
On the left-hand side of the plot, the low-learning/low-capacity group demonstrates the predicted effects, whereas the high-learning/high-capacity group demonstrates relative insensitivity to the presentation rate of the data.

6. Data acquisition dynamics and hypothesis activations

The above results indicate the influence of working memory dynamics operating during data acquisition on the contributions of data in the hypothesis generation process. Mehlhorn et al. (2011) have recently suggested that data from a diagnostic reasoning task was best described by a model with complete access to all data throughout the task. In this way, their model implicitly ignores data activation dynamics occurring during data acquisition. We utilize a model incorporating the dynamic buffer processes from Davelaar et al. (2005) to assess whether dynamic data acquisition processes can account for their results as well. Mehlhorn et al. (2011) conducted a diagnostic reasoning experiment in which participants were shown a sequence of four symptoms from a larger pool of nine (cough, skin irritation, diarrhea, shortness of breath, vomiting, redness, headache, eye inflammation, and itching). The cover story was that a person had come into contact with a chemical causing various symptoms to be expressed, and it was the participant's task to determine the correct chemical diagnosis. Mehlhorn et al. (2011) were interested in measuring the level of activation of the hypotheses entertained by the participant during the presentation of the symptom sequences. As a measure of memory activation, they interleaved the symptom presentation sequence with a classification task: at certain points in the sequence, participants were presented with a probe which they had to classify as a chemical or not.
There were three types of probes. Compatible probes were consistent with all symptoms presented thus far. Incompatible probes were inconsistent with all presented symptoms. Foils were probes that were not associated with any of the symptoms used in the experiment.⁴ The results were twofold. First, response times to compatible probes were faster than to incompatible probes. Second, the size of this compatibility effect increased with the number of symptoms observed before the probe. Mehlhorn et al. conducted a simulation study to understand how memory was utilized in the task. Their best-fitting model assumed that all symptoms were held active and did not decay; in other words, there was no short-term buffer. This contrasts with our findings in the above experiment, in which a short-term buffer is needed not only to obtain the dynamics related to presentation rate but also to explain the individual differences observed. We therefore tested whether a model of diagnostic reasoning with a short-term buffer at the symptom level is able to capture the pattern observed by Mehlhorn et al. (2011). The model, displayed in Fig. 6, contained two layers. Layer one contained nine nodes representing the symptoms presented in the experiment. Layer two contained ten nodes representing the nine chemicals (i.e., hypotheses) and a single foil node. The connectivity matrix was taken from the experiment and contained values of 1 or 0 representing the presence or absence of a connection. We used the activation dynamics for only the symptom layer.⁵ The
⁴ Mehlhorn et al. included conditions of coherent and incoherent trials, where the latter had as the second symptom one that was not consistent with the target diagnosis. This manipulation had no effect on the results.
⁵ The parameters from Eq. (1) governing the symptom-layer dynamics were alpha = 2, beta = 0.2, lambda = 0.98, noise = 1.
Fig. 7. Results from the simulation model with dynamic maintenance of symptoms and using the hypothesis layer as an output layer that passively accumulates the propagated activations. Compatible probes were hypothesis letters that were consistent with all the symptoms presented thus far. Incompatible probes were hypothesis letters that were inconsistent with all the symptoms presented thus far.
7. Discussion
Fig. 6. Buffer model used to simulate the results of Mehlhorn et al. The letters in the symptom layer refer to the symptoms used in the experiment. The letters in the hypothesis layer refer to the actual letters used. Only a single foil letter is presented, which is not connected to any symptoms. All connections are shown. All activations were propagated forward as indicated by the arrows. Only the symptom layer was endowed with buffer dynamics.
second layer was used as an output layer to assess the ability of a symptom-layer buffer model to capture the pattern in the empirical results in the absence of further processes operating on the activation levels of the hypotheses. This allows a direct test of the sufficiency of a model with buffer dynamics operating only on the sequence of symptoms. The model was run 1000 times. On each run, each symptom was presented for 500 iterations, and the activation level of each hypothesis at the end of each symptom presentation was taken as a proxy for the response time. To obtain response times, we entered the activation level into a sigmoid function with slope 12 and threshold 0.7; the resulting values were scaled to bring the response times into the range reported by Mehlhorn et al. (2011). Our model results are displayed in Fig. 7. Although better fits could be obtained with additional free parameters and an actual response execution module, the figure shows that the buffer model is able to capture the qualitative results, with a compatibility effect that increases with the number of symptoms before the probe. In and of itself, this simulation shows the viability of our approach to hypothesis generation in accounting for results from related tasks.
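The activation-to-RT mapping just described can be made concrete. The slope (12) and threshold (0.7) below are the values reported in the text; the base and scale constants are hypothetical, standing in for the unreported scaling onto milliseconds.

```python
import math

def rt_proxy(activation, slope=12.0, threshold=0.7, base=600.0, scale=300.0):
    """Map a hypothesis node's accumulated activation onto a response-time
    proxy: a sigmoid squashes the activation, and higher activation yields
    a faster (smaller) simulated RT. base and scale are illustrative
    constants, not values from the paper."""
    p = 1.0 / (1.0 + math.exp(-slope * (activation - threshold)))
    return base + scale * (1.0 - p)

# A compatible probe, supported by every symptom so far, accumulates more
# propagated activation than an incompatible one and is classified faster.
print(rt_proxy(1.0) < rt_proxy(0.3))  # True
```

Because the sigmoid is steep around the threshold, small differences in accumulated activation near 0.7 translate into large RT differences, which is what lets the compatibility effect grow with the number of observed symptoms.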
We presented two models investigating the notion that data acquisition dynamics are important for understanding hypothesis generation with sequentially presented data. The first was an extension of the HyGene model endowed with a dynamic working memory buffer, which previous research has shown to account for recency biases in generation (Lange et al., 2012). We set out to test a prediction from this model concerning the temporal dynamics of data acquisition: the model predicts a shift from using more recently acquired data in the hypothesis generation process to using earlier data as the data presentation rate increases. The experimental results revealed that this prediction held for participants who did not demonstrate good learning in the initial disease learning phase of our simulated medical diagnosis task. We account for this counterintuitive result by suggesting that these participants were likely of lower WMC than the high-learning group and that their lower capacity gave rise to the greater temporal bias in the data used to generate hypotheses from long-term memory. We captured this effect by modulating the beta parameter governing lateral inhibition, which has been shown to modulate the emergent capacity of the working memory buffer (Davelaar, 2007). The ability of our model to capture the results from this experiment lends credence to our specific buffer implementation, as borrowed from the context-activation model (Davelaar et al., 2005). Although many instantiations of a working memory buffer may predict the results from previous experiments demonstrating recency effects on hypothesis generation (Lange et al., 2012; Sprenger & Dougherty, 2012), the present results provide support for our specific buffer instantiation as underlying data acquisition in hypothesis generation tasks.
With a separate model, we additionally demonstrated that a two-layer model with competitive working memory dynamics operating at only the data level is able to account for data from a related diagnostic reasoning task. This is intriguing, as the original account of these data concluded that working memory dynamics at the data level were unnecessary, while the sufficiency of such processes went untested. In and of itself, this simulation enhances the viability of our approach to hypothesis generation as influenced by data acquisition dynamics. The models presented here could be extended in future work such that the activations of the hypotheses themselves would be subject to the competitive buffer dynamics demonstrated here. Such models could address the hypothesis maintenance and updating components of temporally dynamic hypothesis generation and utilization in judgment and decision making following retrieval from long-term memory. We entertain the position that the activation dynamics presently observed in the buffer are present for both data and hypotheses as they compete for limited working memory resources throughout diagnostic tasks. We further anticipate that the activations of symptoms and hypotheses interact to create more subtle data patterns than witnessed here, such as the purging of inconsistent hypotheses from working memory (Lange et al., 2012) and the search for diagnostic data in the external environment (Cooper, Yule, & Fox, 2003; Dougherty et al., 2010).

Acknowledgments

This research was supported by a grant from the US National Science Foundation (#SES-1024650) awarded to Rick Thomas, Nicholas Lange, and Eddy Davelaar.

References

Cooper, R. P., Yule, P., & Fox, J. (2003). Cue selection and category learning: A systematic comparison of three theories. Cognitive Science Quarterly, 3, 143–182.
Davelaar, E. J. (2007). Sequential retrieval and inhibition of parallel (re)activated representations: A neurocomputational comparison of competitive cueing and resampling models. Adaptive Behavior, 15(1), 51–71.
Davelaar, E. J., Goshen-Gottstein, Y., Ashkenazi, A., Haarmann, H. J., & Usher, M. (2005). The demise of short-term memory revisited: Empirical and computational investigations of recency effects. Psychological Review, 112(1), 3–42.
Dougherty, M. R. P., & Hunter, J. E. (2003a). Probability judgment and subadditivity: The role of WMC and constraining retrieval. Memory & Cognition, 31(6), 968–982.
Dougherty, M. R. P., & Hunter, J. E. (2003b). Hypothesis generation, probability judgment, and working memory capacity. Acta Psychologica, 113(3), 263–282.
Dougherty, M. R., Thomas, R. P., & Lange, N. (2010). Toward an integrative theory of hypothesis generation, probability judgment, and information search. In B. H. Ross (Ed.), The psychology of learning and motivation (Vol. 52). San Diego: Elsevier.
Lange, N. D., Thomas, R. P., & Davelaar, E. J. (2012). Temporal dynamics of hypothesis generation: The influences of data serial order, data consistency, and elicitation timing. Frontiers in Cognitive Science, 3, 215. http://dx.doi.org/10.3389/fpsyg.2012.00215.
Mehlhorn, K., Taatgen, N. A., Lebiere, C., & Krems, J. F. (2011). Memory activation and the availability of explanations in sequential diagnostic reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1391–1411.
Sprenger, A., & Dougherty, M. R. (2006). Differences between probability and frequency judgments: The role of individual differences in working memory capacity. Organizational Behavior and Human Decision Processes, 99(2), 202–211.
Sprenger, A., & Dougherty, M. R. (2012). Generating and evaluating options for decision making: The impact of sequentially presented evidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(3), 550–575. http://dx.doi.org/10.1037/a0026036.
Sprenger, A. M., Dougherty, M. R., Atkins, S. M., Franco-Watkins, A. M., Thomas, R. P., Lange, N., et al. (2011). Implications of cognitive load for hypothesis generation and probability judgment. Frontiers in Cognitive Science, 2, 1–15.
Stanovich, K. E., & West, R. F. (1998). Who uses base rates and P(D/~H)? An analysis of individual differences. Memory & Cognition, 26, 161–179.
Thomas, R. P., Dougherty, M. R., Sprenger, A. M., & Harbison, J. I. (2008). Diagnostic hypothesis generation and human judgment. Psychological Review, 115(1), 155–185.