Cognition 197 (2020) 104160
Source information is inherently linked to working memory representation for auditory but not for visual stimuli☆
Mengjiao Xu¹, Yingtao Fu¹, Jiahan Yu, Ping Zhu, Mowei Shen, Hui Chen⁎

Department of Psychology and Behavioral Sciences, Zhejiang University, China
ARTICLE INFO

Keywords: Short-term source amnesia; Working memory; Attention; Expectation

ABSTRACT
Failing to remember the source of retrievable information is known as source amnesia. This phenomenon has been extensively investigated in long-term memory but rarely in working memory, as we share the intuition that the source information of an item encountered in the immediate past is always available. However, a recent study (Chen, Carlson, & Wyble, 2018) challenged this intuition by showing source amnesia for simple visual stimuli (e.g., colored squares) in the context of working memory when participants did not expect to report source information, indicating that the source information of visual stimuli is not automatically encoded into working memory. The current study sought to further examine this newly discovered phenomenon by testing whether it persists with complex and meaningful stimuli within the visual modality (Experiments 1, 4a & 4b), across the visual and auditory modalities (Experiments 2a & 2b), and within the auditory modality (Experiment 3). Interestingly, the results revealed that short-term source amnesia was a robust effect in the visual modality even for complex and meaningful stimuli, whereas it was absent in the cross-visual-and-auditory and within-auditory conditions, regardless of reporting expectation. This indicates a difference between the working memory representations of visual and auditory stimuli: representations of auditory stimuli were stored together with their original sources, whereas representations of visual stimuli were stored independently of their source information. These findings have crucial implications for further clarifying the longstanding debate over whether there is a modality-independent working memory storage system for different modalities.
1. Introduction We all have some experience of remembering the content of a particular event but forgetting where and when the event occurred, or how we acquired the knowledge of the event. This is a well-known phenomenon, namely source amnesia (also known as source misattribution, source forgetting, source-monitoring error, and source error; Mitchell & Johnson, 2009). This phenomenon is associated with two different kinds of memory: source memory and item memory. Source memory is broadly defined as remembered information that specifies how an event was experienced, such as perceptual (e.g., color, format of a stimulus), spatial-temporal, and affective details (Glisky, Polster, & Routhieaux, 1995; Johnson, Hashtroudi, & Lindsay, 1993). A source memory task usually requires participants to retrieve associated information, such as which list a word was encountered on (e.g., Wilding & Rugg, 1996), whether a learned stimulus was presented in auditory or visual modality during study (e.g., Bornstein & LeCompte, 1995), or whether a
stored semantic representation was acquired from a word or a picture (e.g., Mitchell, Johnson, Raye, & Greene, 2004; Mitchell, Raye, Johnson, & Greene, 2006). In the current study, as well as in Chen, Carlson, and Wyble (2018), the source memory refers to the memory of source format information of a semantic representation of a stimulus (e.g., whether a specific color representation was extracted from the color of a square or the identity of a color word). Source memory is usually contrasted with item memory, which refers to the memory of previous exposure to a specific item and relies on less differentiated information, such as familiarity or recency (e.g., Glisky et al., 1995; Johnson et al., 1993; Slotnick, Moo, Segal, & Hart, 2003). Both scientists and lay folk have traditionally believed that source amnesia could only be observed in the context of long-term memory, but not in working memory (WM) or short-term memory. This is because we often intuitively believe that the source information of an item that we have encountered in the immediate past is always available and some previous studies seem to support this intuition (e.g., Mitchell
☆ Data and methods were posted on the Open Science Framework: https://osf.io/vfk72/.
⁎ Corresponding author at: Department of Psychology and Behavioral Sciences, Zhejiang University, Xixi Campus, 148 Tianmushan Road, Hangzhou, 310007, China.
E-mail addresses: [email protected] (M. Shen), [email protected] (H. Chen).
¹ Mengjiao Xu and Yingtao Fu contributed equally to this work.
https://doi.org/10.1016/j.cognition.2019.104160 Received 22 April 2019; Received in revised form 11 December 2019; Accepted 16 December 2019 0010-0277/ © 2019 Published by Elsevier B.V.
et al., 2004, 2006). However, a recent study challenged this commonsense view with a series of striking demonstrations of source amnesia even in the context of WM (Chen et al., 2018). Take Experiment 1 of Chen et al. (2018) as an example: participants were repeatedly asked to judge whether or not the physical ink color of a color word (e.g., the word "RED" displayed in green) was congruent with the word's identity in pre-surprise trials, and then, on a surprise trial, they were unexpectedly asked to report the physical ink color of the word (by selecting a colored line) they had just seen on that trial. Many participants failed to correctly report the ink color on the surprise test, even though they had just used and stored that color representation for a task (i.e., the congruency judgment) a moment before. Crucially, the majority of these incorrect-report participants selected the color lines that matched the word's identity when asked about the word's ink color, providing the first evidence of source amnesia in the context of WM with young, healthy participants. This short-term amnesia was replicated in another experiment (Experiment 3 in Chen et al., 2018) using a more typical source amnesia task, in which participants were directly asked to report the source information (ink color of a square or word identity) of a probed color representation. Note that in Chen et al. (2018) two semantic representations and their original source formats were activated on each trial; the source amnesia was thus mainly driven by a lack of automatic binding/association between the representations and their source origins in WM in the absence of expectation, consistent with previous studies (Chalfonte & Johnson, 1996; Shimamura & Squire, 1991).
More specifically, the surprise source memory test is an incidental binding task: participants do not know that they are going to be tested on the source origin, so the task assesses whether the semantic representation is bound to the stimulus format incidentally. In contrast, the source memory test in the subsequent control trials is an intentional binding task, because participants expect the source memory task and therefore intentionally bind the semantic representation to its source format. The short-term source amnesia paradigm, which enables a direct comparison between the incidental and intentional binding tasks, could therefore help us examine whether binding in working memory is a passive/automatic process (Allen, Baddeley, & Hitch, 2006; Allen, Hitch, & Baddeley, 2009; Allen, Hitch, Mate, & Baddeley, 2012; Baddeley, Allen, & Hitch, 2011) or an active/resource-demanding one (Brown & Brockmole, 2010; Elsley & Parmentier, 2009; Wheeler & Treisman, 2002; Zokaei, Heider, & Husain, 2014). Although short-term source amnesia in the study by Chen et al. (2018) was a robust phenomenon with visual stimuli, it remains unclear whether an analogous effect exists in the auditory domain. One might expect such an amnesia phenomenon to be eliminated or reduced with auditory stimuli, as many previous studies have shown a memory advantage for auditory over visual stimuli (Bray & Batchelder, 1972; Eberman & Mckelvie, 2002; Kahan, 1996; Murdock & Walker, 1969; Penney, 1975, 1989; Rummer, Schweppe, & Martin, 2013). One classical example is the well-known modality effect: superior memory for auditory items compared with visual items in short-term memory tasks (e.g., Grenfell-Essam, Ward, & Tan, 2017; Murdock & Walker, 1969; Penney, 1975).
For instance, in Murdock and Walker (1969, Experiment 1), participants were asked to learn 20-word auditory or visual lists presented sequentially and then to recall them when each list ended. The probability of correct recall for auditory presentation was higher than for visual presentation in the recency portion of the lists, indicating a stronger recency effect for auditory than for visual lists. Another line of evidence favoring an advantage of auditory over visual memory comes from studies of long-term source memory, which demonstrated that performance on long-term source memory tests was better with auditory stimuli than with visual ones (Bray & Batchelder, 1972; Eberman &
Mckelvie, 2002; Kahan, 1996). For instance, in a study by Bray and Batchelder (1972), participants studied a list of 32 words presented in mixed modalities (visual and auditory) and were then asked to recall the studied words. After the recall task, participants were given a list of 64 words, comprising all 32 old words from the studied list and 32 new words they had never seen before, and were asked to identify the input modality (visual, auditory, or not presented) of each word on the test list. Performance on this modality identification task (i.e., a source memory task) was significantly better for auditory words than for visual words when the task was performed immediately after the recall test. On the basis of these findings, the short-term source amnesia observed in Chen et al. (2018) might not be expected with auditory stimuli. However, although the aforementioned studies showed an advantage of auditory memory over visual memory, other studies have reported the opposite pattern (i.e., better memory for visual than for auditory stimuli). For example, in terms of item memory, some studies showed that item memory for auditory stimuli was inferior to that for visual stimuli in long-term memory tasks (Cohen, Horowitz, & Wolfe, 2009; Olszewska, Reuter-Lorenz, Munier, & Bendler, 2015; Thelen, Talsma, & Murray, 2015), whereas the pattern was reversed in short-term memory tasks, as shown by studies of the modality effect (Kirsner & Craik, 1971; Murdock, 1967, 1968; Murdock & Walker, 1969; Nairne, 1990; Penney, 1975, 1989; Rummer et al., 2013). Likewise, the relationship between source memory for auditory and visual stimuli might also differ across memory systems (long-term vs. short-term memory). If this is true, then it is possible that a similar short-term source amnesia would be expected for auditory stimuli.
It thus remains to be established whether the short-term source amnesia observed for visual stimuli persists in the auditory domain, and the current study was conducted to address this issue directly. Doing so has crucial theoretical implications, because it could further clarify the nature of WM representations for different modalities, one of the longstanding debates in the field of working memory: whether information is maintained in a modality-independent working memory storage system (Cowan, 1995, 2006; Saults & Cowan, 2007) or in modality-specific stores for different modalities (e.g., Allen, Havelka, Falcon, Evans, & Darling, 2015; Baddeley, 1986; Baddeley, Hitch, & Allen, 2019; Baddeley & Logie, 1999; Fougnie & Marois, 2011; Fougnie, Zughni, Godwin, & Marois, 2015).

2. Experiment 1

Experiment 1 was designed for two purposes. First, we aimed to investigate whether short-term source amnesia is a robust phenomenon for visual stimuli that can be replicated and extended to different contexts. Second, and more importantly, this experiment served as a baseline against which to compare source memory for auditory stimuli in the following experiments. The experiment was based on Experiment 3 of Chen et al. (2018), with two major modifications. First, instead of simple stimuli such as colored squares and words, we adopted complex, meaningful stimuli (facial expression pictures and English emotional words), because some previous studies have provided evidence that complex, meaningful stimuli are better remembered in WM (Bankó, Gál, & Vidnyánszky, 2009; Brady, Störmer, & Alvarez, 2016), and the short-term source amnesia might therefore be eliminated or reduced with such stimuli. Second, the two stimuli were presented simultaneously, rather than sequentially as in Chen et al.
(2018), so as to explore whether the short-term source amnesia could be observed in different experimental paradigms.
2.1. Method

2.1.1. Participants
On the basis of the results of Experiment 3 in Chen et al. (2018), which is very similar to the current experiment, we predicted a medium effect size (φ = 0.40) for our experimental design. To ensure adequate power, we performed a power calculation in G*Power 3 (Faul, Erdfelder, Buchner, & Lang, 2009), which determined that, with a significance level (α) of 0.05, the sample size needed to achieve a high power of (1 − β) = 0.95 was approximately 40 individuals. Forty Zhejiang University undergraduates completed Experiment 1 for course credits or 5 Chinese yuan rewards. One additional participant was excluded because only one of their four control trials was correct. All participants received instructions in Chinese and had normal or corrected-to-normal visual acuity. This study was approved by the institutional review board of the Department of Psychology and Behavioral Sciences, Zhejiang University.

2.1.2. Apparatus
The stimuli were presented on a 17-inch computer monitor with a resolution of 1024 × 768 pixels at a 60 Hz refresh rate. The experiment was programmed in MATLAB (MathWorks, Natick, MA) with the Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997). Participants sat at a viewing distance of about 50 cm and made their responses on a computer keyboard in all experiments.

2.1.3. Stimuli
The fixation display consisted of a black central fixation cross (radius = 0.68° [degrees of visual angle]). Each stimulus display contained a facial expression picture and an English emotional word, presented on the right and left sides of the screen. The distance from the center of each stimulus to the fixation subtended a visual angle of approximately 5.72° horizontally. There were eight facial expression pictures, divided equally into four types of expression: angry, fear, happy, and sad, with each expression enacted by one male and one female. The pictures were selected from the NimStim Set of Facial Expressions (Tottenham et al., 2009); however, the external contour of the faces was removed (i.e., an approximately oval shape without hair was retained) to prevent interference from unrelated information. These pictures subtended visual angles of approximately 7.74° horizontally and 11.42° vertically. In addition, four English emotional words (angry, fear, happy, and sad) were used. All English words were presented in black Arial font, with a height of 2.29° and a width varying between 3.88° and 6.86° depending on the length of the word. All stimuli were displayed on a gray background (RGB value = 150, 150, 150).

2.1.4. Procedure and design
Participants completed 8 practice trials and 40 experimental trials. As shown in Fig. 1, each trial began with a fixation display for a duration that varied between 1000 and 2000 ms. The stimulus display, containing one facial expression picture and one English emotional word, was then shown for 500 ms. After the offset of the stimulus display, two black words, "Congruent" and "Incongruent", with two corresponding numbers (1 and 2), appeared and remained on the screen until participants made a response on the first 35 trials (i.e., the pre-surprise trials). Participants were asked to report whether or not the emotion conveyed by the facial expression picture was congruent with that conveyed by the word by pressing one of the two number keys (1 or 2). Feedback was given at the end of each trial informing participants of the correct answer. Then, on the 36th trial (i.e., the surprise trial), immediately after the offset of the stimulus display, participants were unexpectedly given a source memory test requiring them to judge whether a given probed emotion representation came from the picture or the word they had just seen on that trial by pressing one of two number keys (3 or 4). As shown in Fig. 1, we used Chinese words to indicate the probed emotion, instead of English words, so as to minimize a template-based strategy. Both the presentation order of the answer choices (i.e., press 3 if picture or 4 if word, or vice versa) and the probed emotion representation (from a picture or a word) on the surprise test were counterbalanced across participants. After this surprise memory task, participants completed the congruency judgment task as in previous trials. Following the surprise trial, participants received four more control trials that had the same format as the surprise trial, except that in these control trials participants expected that they might need to remember and report the source format information. All responses in this and the following experiments were self-paced, with no time pressure.

2.2. Results
The results are shown in Table 1. Accuracy on the congruency judgment task in the pre-surprise trials was 95.2%, indicating that participants were able to perform the task with high accuracy. However, only 26 of 40 (65.0% correct; chance is 50.0%) participants were correct in the source memory test on the surprise trial. Crucially, participants exhibited a substantial improvement on the same source memory test in the control trials (80.0%, 92.5%, 100.0%, and 95.0% correct, respectively), with the improvement reaching significance by the second control trial (65.0% vs. 92.5%), χ2(1, N = 80) = 9.038, p = .003, φ = 0.336, replicating the short-term source amnesia effect. With regard to the congruency judgment task, participants exhibited a large decline in accuracy on the surprise trial (65.0% correct) compared with the pre-surprise (95.2% correct) and control trials (77.5%, 90.0%, 97.5%, and 100.0% correct), which might be caused by the surprise source judgment task preceding it. This is consistent with previous studies (Chen et al., 2018; Chen, Swan, & Wyble, 2016; Chen & Wyble, 2015, 2016). As expected, the results of Experiment 1 replicated those of Chen et al. (2018), suggesting that source amnesia in the context of WM is a robust effect observable across different types of visual stimuli (simple vs. complex, meaningful stimuli) and experimental paradigms (sequential vs. simultaneous presentation). These findings suggest that, although many previous studies have shown an aptitude for remembering complex, meaningful stimuli (e.g., Bankó et al., 2009; Brady et al., 2016; Chen et al., 2019), participants were still unable to report the source information of such stimuli, even when it had just been in the focus of attention. This is crucial for understanding the constraints on the memory advantage for complex, meaningful stimuli.

3. Experiments 2a & 2b
As mentioned before, the main purpose of the current study was to examine whether short-term source amnesia persists in the auditory modality. Experiments 2a and 2b were conducted to address this issue directly by replicating Experiment 1, except that one of the two stimuli was replaced with an auditory stimulus. Experiment 2a presented a visual emotional word and an auditory emotional word simultaneously, while Experiment 2b presented a visual facial expression picture and an auditory emotional word simultaneously. According to the well-known three-component model of WM (Baddeley & Hitch, 1974), the stimuli in Experiment 2a were stored in the same module (the phonological loop)², whereas the stimuli in Experiment 2b were stored in different modules (the phonological loop and the visuospatial sketchpad). This manipulation
² In Experiment 2a, it is possible that the concurrent auditory word occupied the phonological rehearsal mechanism and prevented phonological recoding of the visual word. If so, the stimuli in Experiment 2a were also stored in different modules (the phonological loop and the visuospatial sketchpad). This point will be expounded in the General discussion.
Fig. 1. Sample trial sequence in Experiment 1. The pre-surprise question reads "Is the emotion conveyed by the facial expression picture congruent with that conveyed by the word on this trial?" The surprise question reads "Surprise test! What's the presentation form of the emotion 'happiness' that appeared in this trial? Press 3 if in picture form, press 4 if in word form".

Table 1
Accuracy in Experiment 1 (N = 40).

                       Congruency judgment task    Source memory task
Pre-surprise trials    95.2%                       N/A
Surprise trial         65.0%                       65.0%
Control trial 1        77.5%                       80.0%
Control trial 2        90.0%                       92.5%
Control trial 3        97.5%                       100.0%
Control trial 4        100.0%                      95.0%

Note. N/A = not applicable.
allowed us to examine the effect of crossing modules on short-term source amnesia, so as to further explore the nature of WM representations.
3.1. Method

3.1.1. Participants
Another sample of 80 undergraduate students completed Experiments 2a and 2b (40 participants in each experiment) for course credits or 5 Chinese yuan rewards, in accordance with the local institutional review board. No participants were excluded. All participants received instructions in Chinese and had normal or corrected-to-normal visual acuity.

3.1.2. Design
Experiments 2a and 2b were identical to Experiment 1 with the following exceptions. In Experiment 2a, an English emotional word was displayed in the center of the screen for 500 ms while, simultaneously, an auditory English emotional word (read by a male voice) was presented through earphones for 500 ms. The auditory materials were generated on a website (http://www.peiyinge.com/) and then processed with the Adobe Audition software. The auditory stimuli were recorded with 32-bit resolution at a sampling rate of 16 kHz. In the pre-surprise trials, participants were asked to report whether or not the emotional word they heard was congruent with the one they saw on the screen by pressing one of two corresponding number keys (1 or 2). Subsequently, in the surprise trial, participants were unexpectedly asked to judge whether a probed emotion representation came from the auditory or the visual word by pressing one of two number keys (3 or 4). The probed representation in the surprise trial came from the auditory word for half of the participants and from the visual word for the other half. Experiment 2b was identical to Experiment 2a except that the visual word stimuli were replaced with
Table 2
Accuracy in Experiment 2a (N = 40).

                       Congruency judgment task    Source memory test
Pre-surprise trials    97.3%                       N/A
Surprise trial         80.0%                       90.0%
Control trial 1        97.5%                       95.0%
Control trial 2        97.5%                       95.0%
Control trial 3        100.0%                      97.5%
Control trial 4        100.0%                      95.0%

Note. N/A = not applicable.

Table 3
Accuracy in Experiment 2b (N = 40).

                       Congruency judgment task    Source memory test
Pre-surprise trials    96.8%                       N/A
Surprise trial         92.5%                       85.0%
Control trial 1        100.0%                      97.5%
Control trial 2        95.0%                       100.0%
Control trial 3        100.0%                      97.5%
Control trial 4        97.5%                       92.5%

Note. N/A = not applicable.
four male facial expression pictures, and participants were asked to judge whether a probed emotion representation came from an auditory word or a visual picture.

3.2. Results

The results are shown in Tables 2 and 3. In contrast to Experiment 1, short-term source amnesia was absent in Experiments 2a and 2b. Participants in these two experiments performed well on the source memory task in the surprise trial (Experiment 2a: 36/40 = 90.0% correct; Experiment 2b: 34/40 = 85.0% correct), which was not significantly different from performance on the first control trial (Experiment 2a: 90.0% vs. 95.0%, χ2(1, N = 80) = 0.180,³ p = .671, φ = 0.047, B10 = 0.203⁴; Experiment 2b: 85.0% vs. 97.5%, χ2(1, N = 80) = 2.505, p = .113, φ = 0.177, B10 = 0.997), or on the other control trials (Experiment 2a: 95.0%, 97.5%, and 95.0% correct, all ps > .356, all B10 < 0.317; Experiment 2b: 100.0%, 97.5%, and 92.5% correct, all ps > .113, all B10 < 0.997, except the second control trial, 85.0% vs. 100.0%, χ2(1, N = 80) = 4.505, p = .034, φ = 0.237).

Cross-study comparison. Between-experiment comparisons showed that accuracy in the source memory task on the surprise trial was significantly higher in Experiments 2a and 2b than in Experiment 1 (Experiment 2a vs. Experiment 1: 90.0% vs. 65.0%, χ2(1, N = 80) = 7.168, p = .007, φ = 0.299; Experiment 2b vs. Experiment 1: 85.0% vs. 65.0%, χ2(1, N = 80) = 4.267, p = .039, φ = 0.231), and there was no significant difference between Experiments 2a and 2b (90.0% vs. 85.0%, χ2(1, N = 80) = 0.457, p = .499, φ = 0.076, B10 = 0.226).

Consistent with our prediction, short-term source amnesia, a robust phenomenon for visual stimuli, was nearly absent when one of the two stimuli was an auditory word in both Experiments 2a and 2b. Note that although the results of Experiment 2b showed a slight amount of short-term source amnesia, which might be caused by the disruption of memory arising from the use of the surprise memory test (Swan, Wyble, & Chen, 2017), the effect was not comparable to that for visual stimuli in Experiment 1. These results suggest that the source information of auditory stimuli was automatically encoded into WM even when participants did not expect to report it, which differs from visual stimuli. In other words, the nature of WM representation differs between visual and auditory stimuli. Furthermore, the results of Experiments 2a and 2b were essentially similar, indicating that there was no clear effect of crossing modules on short-term source amnesia.

4. Experiment 3

Experiment 3 was designed to further investigate whether the different results of the previous experiments (i.e., short-term source amnesia was observed in Experiment 1 and in all experiments of Chen et al. (2018), but not in Experiments 2a or 2b) were truly due to differences in source memory between visual and auditory stimuli, or could instead be driven by other factors arising from the different methodologies of these experiments. Specifically, in Experiment 1 both items were visual stimuli and thus belonged to the same modality, whereas in Experiments 2a and 2b the two items came from different modalities (visual and auditory). It is therefore possible that it is easier for participants to distinguish two sources of different modalities than two sources of the same modality. Experiment 3 aimed to rule out this possibility by using two auditory stimuli.

4.1. Method

4.1.1. Participants
Another 40 undergraduate students completed this experiment for course credits or 5 Chinese yuan rewards, in accordance with the local institutional review board. No participants were excluded. All participants received instructions in Chinese and had normal or corrected-to-normal visual acuity.

4.1.2. Design
This experiment was identical to Experiment 1 with the following exceptions. After the offset of the fixation cross, one emotional word read by a male voice and another emotional word read by a female voice were presented sequentially through the earphones. Each of these two auditory words lasted 500 ms, with no interval between them. In the pre-surprise trials, participants were asked to report whether or not the two sequential auditory words were congruent by pressing one of two number keys (1 or 2). Subsequently, in the surprise trial, participants were unexpectedly asked to judge whether a given probed emotion representation came from the male or the female voice by pressing one of two number keys (3 or 4). The probed emotion representation always came from the first stimulus, so as to ensure that the probed representation had already been stored in WM.
³ Chi-square tests with Yates' correction were performed whenever a cell count was less than 5, in all experiments.
⁴ Because traditional statistical tests do not allow statements of evidence for the null hypothesis, we calculated Bayes factors to evaluate whether the results supported the null hypothesis whenever chi-square tests did not show significant effects. Following the convention proposed by Kass and Raftery (1995), we set a ratio of 1:3 as support for the alternative or the null hypothesis. Thus, in evaluating the null hypothesis, a B10 value smaller than 1/3 supports the null hypothesis, whereas a B10 value between 1/3 and 3 suggests that the data are not diagnostic.
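The "cell less than 5" rule in the footnote above is commonly operationalized via expected counts (Cochran's criterion); the sketch below is our own illustration of that decision rule, under the assumption that expected rather than observed counts are meant, and is not the authors' analysis code.

```python
def expected_counts(a, b, c, d):
    """Expected cell counts of a 2x2 contingency table [[a, b], [c, d]]
    under independence: row_total * column_total / grand_total."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    return [[r * c_ / n for c_ in cols] for r in rows]

def needs_yates(a, b, c, d, threshold=5):
    """True if the smallest expected count falls below the threshold,
    the usual trigger for Yates' continuity correction."""
    return min(v for row in expected_counts(a, b, c, d) for v in row) < threshold

# Experiment 2a, surprise trial (36/40 correct) vs. control trial 1 (38/40):
# only 6 of 80 responses are errors, so the smallest expected count is
# 40 * 6 / 80 = 3 (below 5), and the correction applies.
print(needs_yates(36, 4, 38, 2))   # True
# Experiment 1, surprise trial (26/40) vs. control trial 2 (37/40):
# smallest expected count is 40 * 17 / 80 = 8.5, so no correction is needed.
print(needs_yates(26, 14, 37, 3))  # False
```

For the Experiment 2a comparison, the observed cells (4 and 2) are also below 5, so both readings of the footnote agree in that case.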
4.2. Results

The results are shown in Table 4. Similar to Experiments 2a and 2b, source amnesia was still absent in Experiment 3. Participants performed
Table 4
Accuracy in Experiment 3 (N = 40).

                       Congruency judgment task    Source memory test
Pre-surprise trials    98.4%                       N/A
Surprise trial         100.0%                      87.5%
Control trial 1        100.0%                      95.0%
Control trial 2        100.0%                      90.0%
Control trial 3        100.0%                      92.5%
Control trial 4        100.0%                      95.0%

Note. N/A = not applicable.
well in the source judgment task on the surprise trial, with 35 of 40 (87.5% correct; chance is 50.0%) participants being correct, which was not significantly different from performance on the first control trial (87.5% vs. 95.0%, χ2(1, N = 80) = 0.626, p = .429, φ = 0.088, B10 = 0.298) or on any other control trial (90.0%, 92.5%, and 95.0% correct; all ps > .429, all B10 < 0.298).

Cross-study comparison. Between-experiment comparisons showed that accuracy in the source memory task on the surprise trial in Experiment 3 was significantly better than in Experiment 1 (Experiment 3 vs. Experiment 1: 87.5% vs. 65.0%, χ2(1, N = 80) = 5.591, p = .018, φ = 0.264) and not significantly different from that in Experiments 2a or 2b (Experiment 3 vs. Experiment 2a: 87.5% vs. 90.0%, χ2(1, N = 80) = 0, p = 1.000, φ = 0, B10 = 0.186; Experiment 3 vs. Experiment 2b: 87.5% vs. 85.0%, χ2(1, N = 80) = 0.105, p = .745, φ = 0.036, B10 = 0.200).

The results of Experiment 3 indicate that the different findings of the previous experiments resulted from differences in source memory between visual and auditory stimuli rather than from methodological differences between the experiments. These findings further extend prior research on source amnesia with simple visual stimuli (Chen et al., 2018) by indicating that the source information of auditory stimuli can be automatically encoded into WM even though participants did not expect to report such information.
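As a concrete check, the between-experiment chi-square values reported here can be reproduced with a short, dependency-free Pearson chi-square routine. The sketch below is our own illustration (not the authors' analysis code); it recovers the Experiment 3 vs. Experiment 1 comparison (35/40 vs. 26/40 correct on the surprise trial), using the df = 1 identity p = erfc(sqrt(chi2 / 2)).

```python
import math

def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-square test for a 2x2 table [[a, b], [c, d]].
    Returns (chi2, p, phi); for df = 1, p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    table = [[a, b], [c, d]]
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            dev = abs(table[i][j] - expected)
            if yates:  # continuity correction for small expected counts
                dev = max(0.0, dev - 0.5)
            chi2 += dev * dev / expected
    p = math.erfc(math.sqrt(chi2 / 2))
    phi = math.sqrt(chi2 / n)
    return chi2, p, phi

# Experiment 3 vs. Experiment 1, surprise-trial source memory accuracy:
# 35/40 vs. 26/40 correct.
chi2, p, phi = chi2_2x2(35, 5, 26, 14)
print(round(chi2, 3), round(p, 3), round(phi, 3))  # 5.591 0.018 0.264
```

The same routine with yates=True covers the corrected tests mentioned in the footnotes.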
5. Experiments 4a and 4b

Experiments 4a and 4b were two control experiments conducted to rule out two alternative explanations for the previous findings.5 One anonymous reviewer raised the concern that the previous findings might have been affected by the difference in stimulus presentation style (simultaneous vs. sequential). For instance, in Experiment 1 the two visual stimuli were presented simultaneously, whereas in Experiment 3 the two auditory stimuli were presented sequentially; thus, the difference in source amnesia between the visual and auditory stimuli in these two experiments might be due to the different presentation styles. The goal of Experiment 4a was to eliminate this possibility by replicating Experiment 1 with the two visual stimuli presented sequentially, allowing a direct comparison with Experiment 3, wherein the two auditory stimuli were also presented sequentially. Another anonymous reviewer argued that the source memory task was not comparable between Experiments 1 and 3. To allow an interpretable comparison between modalities, following this reviewer's suggestion, in Experiment 4b we adopted a new source memory task for visual stimuli that is more comparable to the source memory test for auditory stimuli presented in female versus male voices in Experiment 3. That is, we presented participants with two visual emotional words that differed greatly in font size and asked them, by surprise, whether a given probed word had been presented in small or large font. The results of the surprise source memory task in this experiment were directly compared with those of Experiment 3.

5.1. Method

5.1.1. Participants
80 undergraduate students completed Experiments 4a and 4b (40 in each) for course credit or a reward of 5 Chinese yuan, in accordance with the local institutional review board. No participants were excluded. All participants received instructions in Chinese and had normal or corrected-to-normal visual acuity.

Experiment 4a was identical to Experiment 1 except that the two stimuli were presented sequentially rather than simultaneously, with each stimulus shown for 500 ms and no interval between them. Experiment 4b was also similar to Experiment 1 except as follows. On each trial, two emotional words with different font sizes (font size 40 vs. 75) were presented simultaneously. The distance from the center of each stimulus to fixation subtended a visual angle of approximately 5.15° horizontally. The small-font word was presented with a height of 1.03° and a width varying between 2.06° and 3.55° depending on word length. The large-font word was presented with a height of 1.83° and a width varying between 3.89° and 6.87° depending on word length. In the pre-surprise trials, participants were asked to judge whether or not the two words were identical, irrespective of size; on the surprise test they were unexpectedly asked to report whether a given probed word had been presented in small or large font.

5.2. Results
The results of Experiments 4a and 4b are depicted in Tables 5 and 6.

Experiment 4a. Consistent with Experiment 1, only 23 of 40 participants (57.5% correct; chance is 50.0%) were correct in the source memory test on the surprise trial. Crucially, performance improved substantially in the control trials (85.0%, 87.5%, 92.5%, and 90.0% correct, respectively), with the improvement reaching significance by the first control trial (57.5% vs. 85.0%), χ2(1, N = 80) = 7.384, p = .007, φ = 0.304, replicating the short-term source amnesia effect.

Experiment 4b. Similar to Experiment 1, only 22 of 40 participants (55.0% correct; chance is 50.0%) were correct in the source memory test on the surprise trial, and performance improved substantially in the control trials (75.0%, 90.0%, 97.5%, and 97.5% correct, respectively), with the improvement reaching significance by the second control trial (55.0% vs. 90.0%), χ2(1, N = 80) = 12.288, p < .001, φ = 0.392, also replicating the short-term source amnesia effect.

Cross-study comparison. Accuracy in the source memory task on the surprise trial in Experiments 4a and 4b was significantly worse than in Experiment 3 (Experiment 4a vs. Experiment 3: 57.5% vs. 87.5%, χ2(1, N = 80) = 9.028, p = .003, φ = 0.336; Experiment 4b vs. Experiment 3: 55.0% vs. 87.5%, χ2(1, N = 80) = 10.313, p = .001, φ = 0.359). In addition, accuracy in the source memory task on the surprise trial in Experiment 4a was not significantly different from that in Experiment 1 (57.5% vs. 65.0%, χ2(1, N = 80) = 0.474, p = .491, φ = 0.077, B10 = 0.335).
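The between-experiment comparisons reported above are Pearson chi-square tests on 2 × 2 contingency tables of correct/incorrect counts, with the effect size φ = √(χ2/N). As a minimal sketch of this computation (pure Python, no continuity correction; the function name is ours, and the counts are reconstructed from the reported accuracies, e.g., 23 of 40 correct in Experiment 4a vs. 35 of 40 in Experiment 3):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], plus the effect size phi = sqrt(chi2 / N)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2, math.sqrt(chi2 / n)

# Experiment 4a (23 of 40 correct) vs. Experiment 3 (35 of 40 correct)
chi2, phi = chi_square_2x2(23, 17, 35, 5)
print(round(chi2, 3), round(phi, 3))  # 9.028 0.336
```

The same function reproduces the other comparisons, e.g., Experiment 3 (35/40) vs. Experiment 1 (26/40) gives χ2 = 5.591, φ = 0.264, matching the values reported in the text.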
5 We thank two anonymous reviewers for proposing these two possibilities.
Table 5
The accuracy of Experiment 4a (N = 40).

                      Congruency judgment task    Source memory test
Pre-surprise trials   95.2%                       N/A
Surprise trial        37.5%                       57.5%
Control trial 1       72.5%                       85.0%
Control trial 2       95.0%                       87.5%
Control trial 3       95.0%                       92.5%
Control trial 4       100.0%                      90.0%

Note. N/A = not applicable.

Table 6
The accuracy of Experiment 4b (N = 40).

                      Congruency judgment task    Source memory test
Pre-surprise trials   97.2%                       N/A
Surprise trial        85.0%                       55.0%
Control trial 1       100.0%                      75.0%
Control trial 2       97.5%                       90.0%
Control trial 3       97.5%                       97.5%
Control trial 4       100.0%                      97.5%

Note. N/A = not applicable.
The results of Experiments 4a and 4b suggest that the difference in source amnesia between visual and auditory stimuli observed in the previous experiments was not driven by either of the two aforementioned possibilities; instead, it reflects a difference in the working memory representations of visual and auditory stimuli.

6. Discussion

To explore whether the newly discovered phenomenon of short-term source amnesia is robust across modalities, we used three different combinations of stimulus sources: (1) both stimuli presented in the visual modality (Experiments 1 and 4a: text & picture; Experiment 4b: small-font word & large-font word); (2) one stimulus presented in the auditory modality and the other in the visual modality (Experiment 2a: voice & text; Experiment 2b: voice & picture); and (3) both stimuli presented in the auditory modality (Experiment 3: male voice & female voice). Source amnesia in the context of short-term memory/WM was observed in Experiments 1, 4a, and 4b, but not in Experiments 2a, 2b, or 3. These results suggest that although short-term source amnesia is a robust phenomenon in the visual modality, it is absent in the auditory and cross-visual-and-auditory modalities, even though participants did not expect to remember the source information. These findings indicate a difference in the nature of the WM representations of visual and auditory stimuli: the semantic representation of auditory stimuli in WM was automatically bound to the corresponding original source, whereas the representation of visual stimuli was stored independently of its source information.

6.1. Did the auditory advantage effect reflect a response bias?

Previous studies suggested that the auditory advantage in source recognition tasks in long-term memory might be caused by a response bias (Bray & Batchelder, 1972; Eberman & Mckelvie, 2002). That is, when participants need to judge whether an item was presented in the auditory or visual modality, they tend to choose the auditory modality regardless of the actual stimulus modality. Nonetheless, such a response bias cannot be the factor driving the absence of short-term source amnesia for auditory stimuli in the current study. First, in the data analyses of Experiments 2a and 2b we averaged the results from the visual-probed and auditory-probed groups. This eliminates any influence of response bias, as such a bias would affect the two groups in opposite directions (i.e., reducing accuracy in the surprise source memory test in the visual-probed group while improving it in the auditory-probed group), so any influences would cancel out when averaging the two groups. Second, in Experiment 3 both stimuli were presented in the auditory modality, so the above response bias could not affect the results at all, yet short-term source amnesia was still absent. Consequently, the results of the current study indicate that, compared with visual stimuli, the source information of auditory stimuli was automatically encoded into WM, and the advantage of auditory source memory cannot be attributed to a response bias.

6.2. Extending the classical modality effect/auditory advantage effect

The results of the current study have important implications for better understanding the well-known "modality effect," which refers to an advantage of auditory over visual presentation in the short-term retention of verbal information. This memory advantage is typically limited to the last one or two items of a list of auditory stimuli such as digits, characters, pseudo-words, or words (e.g., Conrad & Hull, 1968; Crottaz-Herbette, Anagnoson, & Menon, 2004; Crowder, 1967; Rummer & Schweppe, 2005). The current study extends this classical modality effect in several important ways. First, most previous studies relied on verbal materials (e.g., letters and words), whereas the current study demonstrated that the modality effect extends to visual information (i.e., pictures). Second, Penney (1989) suggested that the modality effect might arise because the acoustic sensory code persists longer than the visual sensory code. However, the current findings indicate that the modality effect cannot be entirely (if at all) driven by the relatively longer duration of auditory sensory memory, as the average response time in the surprise source memory task in the present study was around 10 s (Experiment 2a: 10.91 s; Experiment 2b: 11.81 s; Experiment 3: 8.10 s), far longer than the typical duration of auditory sensory memory (Darwin, Turvey, & Crowder, 1972; Fougnie et al., 2015). Last but not least, the modality effect in previous studies mainly concerned item memory. In contrast, the current study concentrated on the short-term memory of source information and still found a memory advantage for the source information of auditory stimuli relative to visual stimuli. In other words, these new findings demonstrate that the classical modality effect occurs not only for item memory but also extends to source memory.

This finding has an important implication for another account of the modality effect, which holds that the effect occurs because more modality-dependent (physical) features are encoded with auditory than with visual input, according to Nairne's feature model (Nairne, 1990, 2002). More specifically, although this physical/perceptual-difference account can well explain the modality effect for item memory in many previous studies, it can hardly be applied to the modality effect for source memory shown in the current study. This is because the source memory task in the current study requires binding between the semantic representation and its original format; merely encoding more associated perceptual features should not help in performing such a task.

6.3. Implications for the relationship between auditory and visual memory

It has long been debated whether WM storage is mediated by distinct subsystems for auditory and visual stimuli (the modality-specific storage hypothesis; Allen et al., 2015; Baddeley, 1986; Baddeley et al., 2019; Baddeley & Logie, 1999; Fougnie & Marois, 2011; Fougnie et al., 2015) or constrained by a single, central capacity-limited system (the modality-independent storage hypothesis; Cowan, 1995, 2006; Saults & Cowan, 2007). On the one hand, Saults and Cowan (2007) combined a visual-auditory dual task with homologous visual or auditory single tasks in a series of experiments and found that combined dual-task capacity was no greater than the higher (visual) of the two single-task capacities, implying a single limit on the number of WM object representations shared across modalities, and thus supporting the modality-independent storage theory. On the other hand, Fougnie et al. (2015) assessed the dual-task cost of concurrently maintaining visuospatial and auditory arrays while minimizing non-mnemonic sources of dual-task interference, such as task preparation and coordination, overlap in representational content, and cognitive strategies. They found that auditory and spatial arrays can be held concurrently in WM with no discernible interference, supporting the modality-specific storage hypothesis. In contrast to previous studies focusing on whether the WM capacity limit is shared across modalities, the current study approached this debate from a different perspective: it provides novel evidence for the modality-specific hypothesis by showing that the nature of the WM representation of visual and auditory stimuli differs, i.e., the WM representation of auditory stimuli includes the original source information, whereas the WM representation of visual stimuli is stored independently of its source information.

We suggest that this difference in the nature of WM representations across modalities might help reduce representational overlap, which is one of the key sources of interference (Fougnie et al., 2015). In addition, the current findings also have important implications for understanding the relationship between auditory and visual memory in different contexts. In the domain of long-term memory, auditory item memory is inferior to visual item memory (e.g., Cohen et al., 2009; Olszewska et al., 2015; Thelen et al., 2015), whereas in the context of short-term memory/WM the pattern is reversed (auditory item memory is better than visual item memory, e.g., the modality effect; Kirsner & Craik, 1971; Murdock, 1967, 1968; Murdock & Walker, 1969; Nairne, 1990; Penney, 1975, 1989; Rummer et al., 2013). However, unlike item memory, the current study showed an advantage of auditory over visual source memory in the domain of WM, the same pattern as observed in long-term memory (Bray & Batchelder, 1972; Eberman & Mckelvie, 2002). As depicted in Table 7, the current findings provide an important complement to the existing literature on the relationship between auditory and visual memory.

Table 7
Relationship between auditory and visual memory.

                    Item memory                                                  Source memory
Long-term memory    Auditory < visual (Cohen et al., 2009; Thelen et al., 2015)  Auditory > visual (Bray & Batchelder, 1972; Eberman & Mckelvie, 2002)
Short-term memory   Auditory > visual (Penney, 1975, 1989; Rummer et al., 2013)  Auditory > visual (current study)

6.4. Implications for the relationship between attention, expectation, and binding

Source amnesia essentially reflects a lack of binding between the semantic item representation and its corresponding source format (Mammarella & Fairfield, 2008; Raj & Bell, 2010). Previous studies mainly focused on the critical role of attention in forming correct bindings, as in the well-known feature-integration theory (FIT) proposed by Treisman and Gelade (1980). When attention is diverted or overloaded, features may be wrongly recombined, giving rise to "illusory conjunctions" (Treisman & Schmidt, 1982). Beyond visual perception, Wheeler and Treisman (2002) employed change-detection tasks to investigate the effect of focused attention on binding in visual WM and found that binding performance was impaired in the whole-display test but not in the single-display test. The authors attributed this binding deficit to the greater attentional resources consumed by the whole-display test and concluded that focused attention is required to maintain an explicit representation of the binding of visual features in WM. Moreover, Gao et al. (2017) used a dual-task paradigm to investigate whether retaining bindings in WM demands more object-based attention than retaining constituent features. They demonstrated that a secondary object-based attention task impaired bindings significantly more than constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

Although the critical role of focused attention in forming and maintaining correct bindings has been extensively investigated (Gao et al., 2017; Treisman & Schmidt, 1982; Wheeler & Treisman, 2002), the contribution of expectation was largely neglected until recently (Chen et al., 2018; Eitam, Yeshurun, & Hassan, 2013). Eitam et al. (2013) showed that participants' performance in reporting a task-irrelevant feature was worse than in reporting a relevant feature of the same object, even when the object was in the focus of attention (i.e., irrelevance-induced blindness). Furthermore, Experiments 1, 4a, and 4b of the current study, together with Chen et al. (2018), showed that the semantic representations of visual stimuli were not automatically bound to their original source formats when participants did not expect to report them, even though they had to attend to and use both representations to perform the task, or had to store the representations in WM. These results suggest that attention, although necessary, is not sufficient for correct binding between features within an object, or between an item and its source, in the visual modality; the expectation to report plays a determining role in binding. This is consistent with the expectation-based binding account (Chen et al., 2016), which holds that information participants expect to be useful later is bound to the object representation in WM, whereas information that will not be useful later is stored as an activated trace in long-term memory (Oberauer, 2002) or a stimulated state (Eitam & Higgins, 2010), regardless of whether it has been relevant to the task. However, expectation did not appear to play a decisive role in binding in the auditory domain, according to Experiments 2a, 2b, and 3: short-term source amnesia was absent in the auditory and cross-visual-and-auditory modalities even when participants did not expect to report the source, indicating that the source information of auditory stimuli was automatically bound to the corresponding mental representations in WM.

These results have important implications for the concept of the episodic buffer, proposed by Baddeley (2000) to address the binding issue in WM. Such a buffer was initially assumed to be subject to executive processes typically manipulated via attention; however, this remains controversial, as many studies have reported conflicting findings regarding whether encoding and maintaining visual feature bindings requires attention (e.g., Allen et al., 2006; Wheeler & Treisman, 2002). The current findings suggest that there are different types of bindings: binding an item to its source information in the visual modality requires the participation of executive processes to enter the episodic buffer, whereas such binding in the auditory modality might take place automatically and be stored within the phonological subsystem of working memory or fed directly into the episodic buffer. It is also worth noting that the role of executive processes has mostly been investigated through attentional manipulations (e.g., backward counting tasks); we suggest that expectation is also an important aspect of executive processes.

6.5. Implications for the three-component model of WM

The current results also have important implications for the well-known three-component model of WM (Baddeley & Hitch, 1974), which aims to characterize the nature of WM representation. According to this model, WM storage can be divided into two sub-components (the phonological loop and the visuospatial sketchpad) according to different coding modes. The phonological loop is assumed to hold verbal and acoustic information using a temporary store and an articulatory rehearsal system. The sketchpad is assumed to hold visuospatial information, which can be split into separate visual, spatial, and possibly kinesthetic components. This separation into modules had a profound influence on subsequent WM studies, as research was often conducted within a single module: either verbal WM or visual WM (for a review see Baddeley, 2012). However, the current study implies that information acquired from different source modalities may have different representational forms in WM despite belonging to the same module under the three-component model. The results of Experiments 2a, 2b, and 3 revealed that information in the auditory modality was automatically bound to its original source format in WM, whereas Experiments 1, 4a, and 4b, as well as the experiments in Chen et al. (2018), showed that the mental representations of visual stimuli were stored in WM independently of their original sources. More importantly, the English words presented in the visual and auditory modalities should both be stored in the phonological loop as verbal WM according to Baddeley's model; interestingly, our results showed that their WM representations differed, with the auditory words bound to their source format in WM and the visual words stored without retaining their original source format.

One might argue that in Experiment 2a, where the visual and auditory words were presented simultaneously, the concurrent auditory word could have occupied the phonological rehearsal mechanism and prevented phonological recoding of the visual word. If so, it would not be surprising that the visual and auditory words have representations of a different nature, because they would have been encoded into different WM modules. However, in Experiments 1 and 4b this possibility does not apply (the visual words were not accompanied by any auditory stimuli), yet we still observed short-term source amnesia for visual words, in contrast to auditory words stored in the same WM module (i.e., the phonological loop).

6.6. Why is source information inherently linked to the auditory but not the visual WM representation?

It is important to discuss the origins or possible mechanisms behind the observed auditory advantage effect in short-term source memory. As mentioned before, the first possibility that comes to mind might be that auditory stimuli possess more perceptual features than visual stimuli (Nairne, 1990, 2002) and that processing auditory stimuli (e.g., words and sentences) usually involves more complex processes than processing visual stimuli. We are not sure whether this is necessarily the case in the current study, wherein the auditory stimuli were simple computer-generated words whereas the visual stimuli were complex pictures of emotional human faces, which are natural social stimuli. To the extent that such a perceptual encoding difference still existed in our study, it might also partially contribute to the observed auditory advantage effect in short-term source memory, and this contribution is hard to exclude entirely, because many previous studies showed an extremely large overlap between perception and working memory at both behavioral and neural levels (Gao, Gao, Li, Sun, & Shen, 2011; Harrison & Tong, 2009; Magnussen & Greenlee, 1992; Nemes, Parry, Whitaker, & McKeefry, 2012; Teng & Kravitz, 2019). Nonetheless, we believe that this perceptual encoding difference is unlikely to be the primary cause (if a cause at all) of the observed auditory advantage effect, particularly given the surprise memory test paradigm used in the current study. According to previous studies (Chen & Wyble, 2015, 2016; Swan et al., 2017), if perceived information is not consolidated into working memory to produce robust memory traces, it can hardly be explicitly reported in a surprise memory test, even if the information has just been in the focus of attention and reached full awareness, or has been processed/encoded to a level sufficient to produce an inter-trial priming effect (e.g., Jiang, Shupe, Swallow, & Tan, 2016). This is because answering a surprise memory test requires participants to perform several complex tasks, including reading the surprise question and planning a new response using a different set of keys, which would disrupt many fragile memory traces (e.g., perceptual encoding levels) but not working memory traces (Swan et al., 2017).

Instead, we suggest that the observed auditory advantage effect arises because source information is automatically stored with the working memory representation of auditory but not visual stimuli. This makes sense when considering the different contributions of source information to understanding the semantic meaning of auditory and visual stimuli. For auditory stimuli, the source format information (e.g., intonation, stress, and variation in timbre) is usually important for semantic understanding. For example, the same word or sentence can convey different meanings when it comes from different speakers, and the modulation of stress and timbre can sometimes even reverse the meaning of a word or sentence. In contrast, this is much less the case in the visual modality, wherein the source formats (e.g., font type, font size, and font color) are usually useless for extracting the meaning of words or sentences. From this perspective, it is reasonable that participants stored the semantic meaning (i.e., item memory) together with its source information for auditory but not for visual stimuli.

Author contributions

The study concept was developed by author H. Chen. Authors M.J. Xu and Y.T. Fu performed the programming, data collection, and analysis, and all authors were responsible for the writing and editing. All authors approved the manuscript for submission.

Acknowledgements

This work was supported by grants from the National Natural Science Foundation of China (No. 31771201), the National Science Foundation for Distinguished Young Scholars of Zhejiang Province, China (No. LR19C090002), and the Humanities and Social Science Fund of the Ministry of Education of China (No. 17YJA190001), awarded to author Hui Chen.
References

Allen, R. J., Baddeley, A. D., & Hitch, G. J. (2006). Is the binding of visual features in working memory resource-demanding? Journal of Experimental Psychology: General, 135(2), 298–313.
Allen, R. J., Havelka, J., Falcon, T., Evans, S., & Darling, S. (2015). Modality specificity and integration in working memory: Insights from visuospatial bootstrapping. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(3), 820–830.
Allen, R. J., Hitch, G. J., & Baddeley, A. D. (2009). Cross-modal binding and working memory. Visual Cognition, 17(1–2), 83–102.
Allen, R. J., Hitch, G. J., Mate, J., & Baddeley, A. D. (2012). Feature binding and attention in working memory: A resolution of previous contradictory findings. The Quarterly Journal of Experimental Psychology, 65(12), 2369–2383.
Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417–423.
Baddeley, A. D. (1986). Working memory. New York, NY: Oxford University Press.
Baddeley, A. D. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63, 1–29.
Baddeley, A. D., Allen, R. J., & Hitch, G. J. (2011). Binding in visual working memory: The role of the episodic buffer. Neuropsychologia, 49(6), 1393–1400.
Baddeley, A. D., & Hitch, G. (1974). Working memory. In Psychology of learning and motivation (Vol. 8, pp. 47–89). Academic Press.
Baddeley, A. D., Hitch, G. J., & Allen, R. J. (2019). From short-term store to multicomponent working memory: The role of the modal model. Memory & Cognition, 47(4), 575–588.
Baddeley, A. D., & Logie, R. (1999). Working memory: The multiple component model. In A. Miyake, & P. Shah (Eds.), Models of working memory: Mechanisms of active maintenance and executive control (pp. 28–61). New York, NY: Cambridge University Press.
Bankó, E. M., Gál, V., & Vidnyánszky, Z. (2009). Flawless visual short-term memory for facial emotional expressions. Journal of Vision, 9(1), 1–13.
Bornstein, B. H., & LeCompte, D. C. (1995). A comparison of item and source forgetting. Psychonomic Bulletin & Review, 2, 254–259.
Brady, T. F., Störmer, V. S., & Alvarez, G. A. (2016). Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli. Proceedings of the National Academy of Sciences of the United States of America, 113(27), 7459–7464.
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10(4), 433–436.
Bray, N. W., & Batchelder, W. H. (1972). Effects of instructions and retention interval on memory of presentation mode. Journal of Memory and Language, 11(3), 367–374.
Brown, L. A., & Brockmole, J. R. (2010). The role of attention in binding visual features in working memory: Evidence from cognitive ageing. Quarterly Journal of Experimental Psychology, 63(10), 2067–2079.
Chalfonte, B. L., & Johnson, M. K. (1996). Feature memory and binding in young and older adults. Memory & Cognition, 24(4), 403–416.
Chen, H., Carlson, R. A., & Wyble, B. (2018). Is source information automatically available in working memory? Psychological Science, 29(4), 645–655.
Chen, H., Swan, G., & Wyble, B. (2016). Prolonged focal attention without binding: Tracking a ball for half a minute without remembering its color. Cognition, 147, 144–148.
Chen, H., & Wyble, B. (2015). Amnesia for object attributes: Failure to report attended information that had just reached conscious awareness. Psychological Science, 26, 203–210.
Chen, H., & Wyble, B. (2016). Attribute amnesia reflects a lack of memory consolidation for attended information. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 225–234.
Chen, H., Yu, J., Fu, Y., Zhu, P., Li, W., Zhou, J., & Shen, M. (2019). Does attribute amnesia occur with the presentation of complex, meaningful stimuli? The answer is, "it depends". Memory & Cognition, 1–12.
Cohen, M. A., Horowitz, T. S., & Wolfe, J. M. (2009). Auditory recognition memory is inferior to visual recognition memory. Proceedings of the National Academy of Sciences, 106(14), 6008–6010.
Conrad, R., & Hull, A. J. (1968). Input modality and the serial position curve in short-term memory. Psychonomic Science, 10, 135–136.
Cowan, N. (1995). Attention and memory. New York, NY: Oxford University Press.
Cowan, N. (2006). Working memory capacity. New York, NY: Psychology Press.
Crottaz-Herbette, S., Anagnoson, R. T., & Menon, V. (2004). Modality effects in verbal working memory: Differential prefrontal and parietal responses to auditory and visual stimuli. Neuroimage, 21(1), 340–351.
Crowder, R. G. (1967). Prefix effect in immediate memory. Canadian Journal of Psychology, 21, 450–461.
Darwin, C. J., Turvey, M. T., & Crowder, R. G. (1972). An auditory analogue of the Sperling partial report procedure: Evidence for brief auditory storage. Cognitive Psychology, 3(2), 255–267.
Eberman, C., & Mckelvie, S. J. (2002). Vividness of visual imagery and source memory for audio and text. Applied Cognitive Psychology, 16(1), 87–95.
Eitam, B., & Higgins, E. T. (2010). Motivation in mental accessibility: Relevance of a representation (ROAR) as a new framework. Social and Personality Psychology Compass, 4, 951–967.
Eitam, B., Yeshurun, Y., & Hassan, K. (2013). Blinded by irrelevance: Pure irrelevance induced "blindness". Journal of Experimental Psychology: Human Perception and Performance, 39, 611–615.
Elsley, J. V., & Parmentier, F. B. (2009). Short article: Is verbal–spatial binding in working memory impaired by a concurrent memory load? Quarterly Journal of Experimental Psychology, 62(9), 1696–1705.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160.
Fougnie, D., & Marois, R. (2011). What limits working memory capacity? Evidence for modality-specific sources to the simultaneous storage of visual and auditory arrays. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1329–1341.
Fougnie, D., Zughni, S., Godwin, D., & Marois, R. (2015). Working memory storage is intrinsically domain specific. Journal of Experimental Psychology: General, 144(1), 30–47.
Gao, T., Gao, Z., Li, J., Sun, Z., & Shen, M. (2011). The perceptual root of object-based storage: An interactive model of perception and visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 37(6), 1803–1823.
Gao, Z., Wu, F., Qiu, F., He, K., Yang, Y., & Shen, M. (2017). Bindings in working memory: The role of object-based attention. Attention, Perception, & Psychophysics, 79(2), 533–552.
Glisky, E. L., Polster, M. R., & Routhieaux, B. C. (1995). Double dissociation between item and source memory. Neuropsychology, 9(2), 229–235.
Grenfell-Essam, R., Ward, G., & Tan, L. (2017). Common modality effects in immediate free recall and immediate serial recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(12), 1909–1933.
Harrison, S. A., & Tong, F. (2009). Decoding reveals the contents of visual working memory in early visual areas. Nature, 458(7238), 632–635.
Jiang, Y. V., Shupe, J. M., Swallow, K. M., & Tan, D. H. (2016). Memory for recently accessed visual attributes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1331–1337.
Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114(1), 3–28.
Kahan, T. L. (1996). Memory source confusions: Effects of character rotation and sensory modality. American Journal of Psychology, 109(3), 431–449.
Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90, 773–795.
Kirsner, K., & Craik, F. I. (1971). Naming and decision processes in short-term recognition memory. Journal of Experimental Psychology, 88(2), 149–157.
Magnussen, S., & Greenlee, M. W. (1992). Retention and disruption of motion information in visual short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(1), 151.
Mammarella, N., & Fairfield, B. (2008). Source monitoring: The importance of feature binding at encoding. European Journal of Cognitive Psychology, 20(1), 91–122.
Mitchell, K. J., & Johnson, M. K. (2009). Source monitoring 15 years later: What have we learned from fMRI about the neural mechanisms of source memory? Psychological Bulletin, 135, 638–677.
Mitchell, K. J., Johnson, M. K., Raye, C. L., & Greene, E. J. (2004). Prefrontal cortex activity associated with source monitoring in a working memory task. Journal of Cognitive Neuroscience, 16, 921–934.
Mitchell, K. J., Raye, C. L., Johnson, M. K., & Greene, E. J. (2006). An fMRI investigation of short-term source memory in young and older adults. NeuroImage, 30, 627–633.
Murdock, B. B., Jr. (1967). Recent developments in short-term memory. British Journal of Psychology, 58(3–4), 421–433.
Murdock, B. B., Jr. (1968). Modality effects in short-term memory: Storage or retrieval? Journal of Experimental Psychology, 77(1), 79.
Murdock, B. B., Jr., & Walker, K. D. (1969). Modality effects in free recall. Journal of Verbal Learning and Verbal Behavior, 8(5), 665–676.
Nairne, J. S. (1990). A feature model of immediate memory. Memory & Cognition, 18(3), 251–269.
Nairne, J. S. (2002). Remembering over the short-term: The case against the standard model. Annual Review of Psychology, 53(1), 53–81.
Nemes, V. A., Parry, N. R., Whitaker, D., & McKeefry, D. J. (2012). The retention and disruption of color information in human short-term visual memory. Journal of Vision, 12(1), 26, 1–14.
Oberauer, K. (2002). Access to information in working memory: Exploring the focus of attention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 411–421.
Olszewska, J. M., Reuter-Lorenz, P. A., Munier, E., & Bendler, S. A. (2015). Misremembering what you see or hear: Dissociable effects of modality on short- and long-term false recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(5), 1316–1325.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Penney, C. G. (1975). Modality effects in short-term verbal memory. Psychological Bulletin, 82(1), 68–84.
Penney, C. G. (1989). Modality effects in the structure of short-term verbal memory. Memory and Cognition, 17, 398–422.
Raj, V., & Bell, M. A. (2010). Cognitive processes supporting episodic memory formation in childhood: The role of source memory, binding, and executive functioning.
Developmental Review, 30(4), 384–402. Rummer, R., & Schweppe, J. (2005). Evidence for a modality effect in sentence retention. Psychonomic Bulletin & Review, 12(6), 1094–1099. Rummer, R., Schweppe, J., & Martin, R. C. (2013). Two modality effects in verbal shortterm memory: Evidence from sentence recall. Journal of Cognitive Psychology, 25(3), 231–247. Saults, J. S., & Cowan, N. (2007). A central capacity limit to the simultaneous storage of visual and auditory arrays in working memory. Journal of Experimental Psychology: General, 136(4), 663–684. Shimamura, A. P., & Squire, L. R. (1991). The relationship between fact and source memory: Findings from amnesic patients and normal subjects. Psychobiology, 19(1), 1–10. Slotnick, S. D., Moo, L. R., Segal, J. B., & Hart, J., Jr. (2003). Distinct prefrontal cortex activity associated with item memory and source memory for visual shapes. Cognitive Brain Research, 17(1), 75–82. Swan, G., Wyble, B., & Chen, H. (2017). Working memory representations persist in the face of unexpected task alterations. Attention, Perception, & Psychophysics, 79(5), 1408–1414. Teng, C., & Kravitz, D. J. (2019). Visual working memory directly alters perception. Nature Human Behaviour, 3(8), 827–836. Thelen, A., Talsma, D., & Murray, M. M. (2015). Single-trial multisensory memories affect later auditory and visual object discrimination. Cognition, 138, 148–160. Tottenham, N., Tanaka, J. W., Leon, A. C., Mccarry, T., Nurse, M., Hare, T. A., ... Nelson, C. (2009). The nimstim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168(3), 242–249. Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive
10
Cognition 197 (2020) 104160
M. Xu, et al. Psychology, 12(1), 97–136. Treisman, A., & Schmidt, H. (1982). Illusory conjunctions in the perception of objects. Cognitive Psychology, 14, 107–141. Wheeler, M. E., & Treisman, A. M. (2002). Binding in short-term visual memory. Journal of Experimental Psychology: General, 131(1), 48–64.
Wilding, E. L., & Rugg, M. D. (1996). An event-related potential study of recognition memory with and without retrieval of source. Brain, 119, 889–905. Zokaei, N., Heider, M., & Husain, M. (2014). Attention is required for maintenance of feature binding in visual working memory. The Quarterly Journal of Experimental Psychology, 67(6), 1191–1213.