Neuropsychologia 136 (2020) 107283
No intermodal interference effects of threatening information during concurrent audiovisual stimulation
Kierstin M. Riels*, Harold A. Rocha, Andreas Keil
Center for the Study of Emotion & Attention, University of Florida, USA
Keywords: Emotion; Attention; Steady-state visual evoked potential; Intermodal interactions

A B S T R A C T
Changes in attention can result in sensory processing trade-off effects, in which sensory cortical responses to attended stimuli are heightened and responses to competing distractors are attenuated. However, it is unclear if competition or facilitation effects will be observed at the level of sensory cortex when attending to competing stimuli in two modalities. The present study used electroencephalogram (EEG) and frequency-tagging to quantitatively assess auditory-visual interactions during sustained multimodal sensory stimulation. The emotional content of a 6.66 Hz rapid serial visual presentation (RSVP) was manipulated to elicit well-established emotional attention effects, while a constant 63 dB tone with a 40.8 Hz modulation served as a concurrent auditory stimulus in two experiments. As a directed attention manipulation, participants were instructed to detect transient sound level events in the auditory stream in Experiment 1. To manipulate attention through threat anticipation, participants were instructed to expect an aversive noise burst after a higher-pitched 40.8 Hz modulated tone in Experiment 2. Each stimulus evoked reliable steady-state sensory cortical responses in all participants (n = 30) in both experiments. The visual cortical responses were modulated by the auditory detection task, but not by threat anticipation: Visual responses were smaller during auditory streams with a transient target as compared to uninterrupted auditory streams. Conversely, visual stimulus condition had no significant effects on auditory sensory cortical responses in either experiment. These results indicate that there is neither a competition nor a facilitation effect of visual content on concurrent auditory sensory cortical processing. They further indicate that competition effects of auditory stream content on sustained visuocortical responses are limited to auditory target processing.
1. Introduction

An inherent part of the human experience is the constant exposure to vast amounts of simultaneous information conveyed through different sensory channels. One long-standing question in this regard has been how these different channels interact, and to what extent they share limited capacity. Although there is clear evidence of multi-leveled sensory processing interactions (Caruso et al., 2016; Kreifelts et al., 2007), at present, there is a gap in our knowledge regarding the specific role of primary cortical areas in intermodal interactions. This gap is especially evident under naturalistic, sustained stimulus conditions, in which simultaneous but unrelated stimulus streams compete for limited capacity, e.g., when working on a paper while listening to a podcast, or when listening to an ambulance while trying to stay on the bike lane. Establishing the extent to which sensory cortices are affected by multimodal exposure to competing, biologically relevant, sensory
information is a prerequisite for developing and testing models of intermodal learning, attentional biases, and maladaptive intermodal functioning. The present report examines interaction effects between early cortical processing evoked by sustained and simultaneous, but semantically unrelated, visual and auditory stimulus streams, with a focus on how limited capacity is managed across the senses. Specifically, we are interested in stimuli that vary in intrinsic motivational relevance to the observer/listener, rather than being defined by task instructions alone.
Cognitive neuroscience studies examining the effects of stimulating multiple sensory modalities have resulted in mixed evidence, which may be organized into two broad, competing notions. First, the facilitation model (e.g., Kreifelts et al., 2007) is based on the notion that concurrent stimulation across two modalities results in augmented neural responses. Empirically, evidence consistent with multimodal facilitation, i.e., that responses in a given modality are amplified in tandem with
* Corresponding author. Center for the Study of Emotion & Attention, University of Florida, PO BOX 112766, Gainesville, FL 32611, USA. E-mail address: [email protected] (K.M. Riels).
https://doi.org/10.1016/j.neuropsychologia.2019.107283
Received 3 September 2019; Received in revised form 5 November 2019; Accepted 24 November 2019; Available online 27 November 2019.
0028-3932/© 2019 Elsevier Ltd. All rights reserved.
concurrent responses in another modality, has been observed in studies of sensory integration (Huang et al., 2018) and intermodal attention (Shrem and Deouell, 2016). By contrast, the competition model, arising primarily from research with attention capture and competition tasks (e.g., Max et al., 2015; Kreifelts et al., 2007), suggests that attentive processing in one modality is negatively affected by the presence of salient stimuli in another modality. Highlighting the task-dependency of these divergent findings, several studies have failed to provide clear support for either a facilitation or competition effect (e.g., Keitel et al., 2013). For example, during simultaneous presentation of information in separate modalities during spatial attention tasks, dependent measures do not tend to reflect modality-specific effects of attention, but often vary based on attended location, across modalities (Duncan et al., 1997; Keitel et al., 2013). Furthermore, the studies that found small or absent competition effects tended to employ tasks that relied on participants to direct attention to neutral stimuli (Duncan et al., 1997; Keitel et al., 2013). These differing approaches to operationalizing attention during multi-modal manipulations may account for the mixed support for each of the extant models.
1.1. Saliency of stimuli in sensory processing

It is well established that sensory processing, indexed by macroscopic neurophysiological measures such as EEG or MEG, is amplified when a stimulus is actively and intentionally attended to, compared to when it is ignored (Hillyard et al., 1973; Saupe et al., 2009a, 2009b). This effect is evident at the cortical level, with neural responses increasing when attention is directed towards a particular source of stimulation and decreasing when attention is actively suppressed (Talsma and Woldorff, 2005; Saupe et al., 2009a, 2009b).
In addition to task-relevant information, emotionally or motivationally engaging information is also prioritized in sensory cortical processing (Wieser et al., 2016a, 2016b). Such prioritization is demonstrated by so-called emotional interference effects, in which the presence of an emotionally engaging distractor stimulus is associated with decreased performance and attentive processing of concurrent cognitive task stimuli (Bradley et al., 2012). It should be noted that when task stimuli themselves are emotionally salient, augmented (instead of diminished) neural responses and facilitated (instead of disrupted) behavioral responses can be observed, even under conditions where the emotional feature is not pertinent to the task (e.g., Kanske and Kotz, 2010). In paradigms with multiple stimuli, however, the robustness of emotional interference exerted by concurrent emotional distractors has been demonstrated consistently (Bradley et al., 2012). There is also evidence that emotionally engaging task-irrelevant images act as distractors to neutral task-relevant stimuli in studies measuring steady-state visual evoked potentials (ssVEPs) evoked by the task (Muller et al., 2008). Thus, these findings converge with a literature in which the amount of emotional interference on concurrent sensory processing is used as an index of attentional engagement with emotionally salient stimuli (MacLeod and MacDonald, 2000). One such index is the emotional Stroop effect, where naming response times are slower for emotionally arousing targets compared to emotionally neutral targets (Kindt et al., 1997; MacLeod and MacDonald, 2000; Schimmack, 2005). This effect is exaggerated in participants with high trait and state anxiety (Egloff and Hock, 2001).
In both inter- and intra-modal presentation, emotionally salient stimuli can distract from cortical responses to concurrently presented engaging stimuli (Keil et al., 2007). However, when a highly salient stimulus acts as a cue for an upcoming neutral target, interference effects can be reduced when the cue is in a separate modality from the target (Zeelenberg and Bocanegra, 2010). Together, these studies indicate that task-irrelevant emotionally salient stimuli tend to interfere with the processing of concurrent task-relevant stimuli. Here we address the question to what extent these interference effects are present during sustained attention, in situations where task and distractor items are presented in different modalities.
1.2. Using steady-state potentials to study intermodal interactions

An effective method for measuring cortical engagement with high temporal accuracy is the electroencephalogram (EEG). Among the many indices that can be derived from EEG signals, the steady-state evoked potential has many desirable properties for studies interested in sensory processing of concurrent sensory events. Steady-state potentials index neural population responses to rapidly presented stimuli, which prompt neural oscillatory activity at the same rate as stimulus presentation, often including higher harmonics (Müller and Hillyard, 2000). Steady-state evoked responses can occur in both visual (steady-state visual evoked potential; ssVEP) and auditory (auditory steady-state response; ASSR) domains (Wieser et al., 2016a, 2016b). Importantly, both signals allow for the study of multiple concurrent stimuli through so-called frequency tagging (Hillyard et al., 1997), because distinct steady-state evoked potentials are elicited by simultaneous stimuli presented at differing driving frequencies (Toffanin et al., 2009). This frequency tagging technique has been used to measure neural resource allocation in both intramodal and intermodal EEG studies of selective attention (Saupe et al., 2009a, 2009b). Concurrent transient, salient, task-irrelevant stimuli have been shown to create interference consistent with limited capacity within the modality of another frequency-tagged stimulus, but not across different modalities (Porcu et al., 2014). Here, we examine the impact of task-relevant and motivationally/emotionally relevant stimuli during sustained processing on intermodal interactions.
During rapid serial visual presentations (RSVPs) of complex scenes, ssVEP amplitudes evoked by flickering the same emotionally engaging scene tend to be greater than when flickering a neutral scene (e.g., Keil et al., 2003). However, when changing scene content with each flicker cycle, RSVP rates in the 6 Hz range lead to markedly attenuated ssVEP amplitudes during emotionally salient content and relatively potentiated amplitudes during neutral content (Bekhtereva et al., 2018). The ssVEP effect seen in Bekhtereva et al. (2018) was supported by larger Bayes factors than the effects seen at lower and higher RSVP rates. This frequency-specific emotional interference effect in the ssVEP amplitude is thought to be a function of destructive interference resulting from waveform impositions of event-related potentials (ERPs) of opposite polarity, elicited by current and previous stimuli (Bekhtereva et al., 2018). ERP studies have shown that from about 130 to 300 ms after image onset, emotionally salient complex scenes elicit greater potentials in temporo-occipital sites than neutral scenes (Schupp et al., 2003). Peyk et al. (2009) found that the early posterior negativity (EPN) response, which occurs between 120 and 150 ms post stimulus onset, can co-occur with other ERPs at varying RSVP rates. This parallel processing leads to interference effects between the EPN response of one image and early event-related responses to a subsequent image in a constant stream (Peyk et al., 2009). The present study uses the very large effect size (Cohen's d > 1.2) of the 6-Hz ssVEP emotion interference effect established across multiple laboratories (Bekhtereva et al., 2018) to examine intermodal effects related to sustained stimulation with emotionally engaging stimuli.
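Because the two driving frequencies used here (6.667 Hz for the visual stream and 40.8 Hz for the tone) are well separated, the logic of frequency tagging can be illustrated with a minimal, self-contained sketch: two tagged responses embedded in the same noisy signal remain separable in the amplitude spectrum. The synthetic signal, amplitudes, and sampling rate below are assumptions for demonstration only and are not the authors' analysis pipeline.

```python
import numpy as np

fs = 500.0                        # sampling rate in Hz (matches the EEG recording rate)
t = np.arange(0, 4.8, 1 / fs)     # one epoch of RSVP length (4.8 s)

# Toy "EEG": two frequency-tagged responses plus broadband noise
ssvep = 1.0 * np.sin(2 * np.pi * 6.667 * t)    # visual tag
assr = 0.3 * np.sin(2 * np.pi * 40.8 * t)      # auditory tag
eeg = ssvep + assr + np.random.randn(t.size)

# Amplitude spectrum of the epoch
spec = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Read out the amplitude at (the bin nearest to) each tagged frequency
for f_tag in (6.667, 40.8):
    idx = int(np.argmin(np.abs(freqs - f_tag)))
    print(f"amplitude near {f_tag:.3f} Hz: {spec[idx]:.2f}")
```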
1.3. Emotion-attention interactions and limited capacity

To compare sustained attention states with sustained states of aversive/defensive motivation, the present study used a threat-of-noise manipulation, in which participants were instructed to expect a noxious noise under certain conditions. In both visual and auditory domains, threat-of-shock and threat-of-noise paradigms are commonly used to induce sustained aversive motivational states in participants (Grillon, 2008). Most variations of these paradigms include instructing participants to expect the possibility of a noxious stimulus under certain experimental conditions. This manipulation prompts increased anticipatory anxiety without temporal contingencies to one specific
conditioned stimulus, which would be the case if using a fear conditioning paradigm (Miskovic and Keil, 2012). Studies investigating interference effects in neural responses as functions of anticipatory anxiety in the auditory domain have shown mixed results. For example, Cornwell et al. (2007) found greater auditory cortical activation to transient tone targets in threatening as compared to neutral auditory contexts. By contrast, Fucci and colleagues found this difference specifically among highly anxious participants in a comparable experimental design (Fucci et al., 2019). In an auditory fear cue and context conditioning paradigm, Armony and Dolan (2001) found that auditory cortex activation during conditioned cue presentation was greater under a visual safety context as compared to a threat context. This body of research leaves questions unanswered regarding how anticipatory anxiety affects ongoing rather than discrete auditory processes, as well as how emotional saliency in a separate sensory domain interacts with these processes.
1.4. The current study

To address the overarching experimental question regarding the multi-sensory intermodal interactions modulated by concurrent sustained selective attention and emotional engagement, this study aimed to replicate the emotional interference effect in the 6 Hz ssVEP time course as demonstrated by Bekhtereva et al. (2018). This visual manipulation was crossed with manipulations of an auditory stream which contained either occasional targets (sustained auditory selective attention), or represented a threat versus safety cue (sustained motivational engagement).
The present study utilized two experiments to address two alternative hypotheses regarding intermodal capacity limitations during motivated attention; specifically, whether intermodal processing in sensory cortex is facilitative or competitive in nature. Experiment 1 consisted of a neutral auditory stimulus associated with a selective attention task that was concurrently presented with an RSVP of emotional or neutral scenes. Experiment 2 consisted of both a neutral and a threat-cue auditory stimulus that also co-occurred with an RSVP of emotional or neutral scenes. If intermodal interactions mirror effects seen within the visual modality, presenting motivationally relevant stimuli in one modality is expected to interfere with sensory cortical processing in the other. By contrast, if the two sensory systems do not share capacity limits, there will be no intermodal interference effects.
A facilitation effect would be reflected in greater cortical responses during conditions with greater motivational relevance in the concurrent modality. For example, if greater ASSRs occur during an RSVP of unpleasant images as compared to ASSRs during neutral images, this would reflect facilitatory intermodal processing. A competition effect would be reflected through smaller cortical responses during conditions with greater motivational relevance in the opposite modality. For example, if smaller ASSRs occur during an RSVP of unpleasant images as compared to ASSRs during neutral images, this would reflect competitive intermodal processing. The same logic would follow for these hypothesized interactions between the visual response signals and motivational relevance of the auditory stimuli. Hypotheses and experimental design were pre-registered for Experiments 1 and 2 separately on the Open Science Framework at https://osf.io/2dyj4 and https://osf.io/wjptd, respectively.

2. Methods

2.1. Participants

Of the 30 participants recruited for this study, 23 were female, all were students from the University of Florida (Mage = 19), 21 identified as White, 5 identified as Black, and 4 identified as Asian. Among all participants, 8 also identified as Hispanic. All participants were compensated with either course credit or a $20 gift card and signed informed consents prior to study participation. Final exclusion criteria included retaining less than 50% of EEG trials after artifact rejection and correction, recording errors, and participant withdrawals. Recording errors were defined as not recording during 50% or more of the trials, capping errors that resulted in significant data loss or lack of data collection, presentation monitor display errors, and other software and hardware errors that occurred during the study run time. This study was approved by the local Institutional Review Board.
Using a Monte Carlo simulation, a 2 (before vs. after image content switch) × 2 (unpleasant vs. neutral images) × 2 (auditory vs. visual modality) ANOVA was conducted to determine the number of participants needed to reach statistical significance at p < .02 from EEG neural time-series data. Based on previous literature on the auditory and visual modality resource interaction effects (Keil et al., 2007; partial eta squared (ηp²) = 0.336) and the visual steady-state amplitude switch by content in the 6 Hz range (Bekhtereva et al., 2018; ηp² = 0.59) being tested here, a conservative estimate of ηp² = 0.2 was used. This analysis indicated that 19 participants would be required to detect a significant 3-way interaction with an ηp² of 0.2 at p < .02. Additional participants were recruited to account for potential data loss, and to allow for explorative analyses, labeled as such in the results discussed below.
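The exact simulation code underlying the sample-size estimate is not part of the paper, but the general logic of such a Monte Carlo power analysis can be sketched as follows: repeatedly simulate the critical within-subject effect at the assumed effect size and count how often it reaches the chosen alpha. The sketch below deliberately simplifies the 2 × 2 × 2 design to a one-sample t-test on a single interaction contrast, and the conversion from ηp² to a standardized contrast effect is a rough assumption, so the resulting power values are illustrative rather than a reconstruction of the authors' figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulated_power(n_subjects, d, alpha=0.02, n_sims=5000):
    """Rejection rate for a one-sample t-test on a within-subject
    interaction contrast with standardized effect size d."""
    hits = 0
    for _ in range(n_sims):
        contrast = rng.normal(loc=d, scale=1.0, size=n_subjects)
        t, p = stats.ttest_1samp(contrast, 0.0)
        hits += p < alpha
    return hits / n_sims

eta_p2 = 0.2
d = np.sqrt(eta_p2 / (1.0 - eta_p2))   # rough 1-df conversion from eta_p^2 (assumption)

for n in (19, 24, 30):
    print(n, round(simulated_power(n, d), 2))
```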
2.2. Experiment 1: Stimuli and task

Stimuli consisted of 120 trials of neutral and unpleasant RSVPs with scenes from the International Affective Picture System (IAPS; Lang et al., 2008) and other various sources (Bekhtereva et al., 2018) presented at a rate of 6.667 Hz, and a 63 dB sine wave tone with a 600 Hz carrier frequency and a 40.8 Hz modulation presented through speakers positioned behind the participants. Images consisted of 120 neutral and 80 mutilation scenes (see Appendix A for image numbers). Image selection and presentation procedure were identical to Experiment 3 of Bekhtereva et al. (2018) with the exclusion of the erotic image category. Results from the RSVPs consisting of erotic images replicated those containing mutilation images (Bekhtereva et al., 2018); therefore this category was excluded.
The tone lasted 6 s, beginning 600 ms before image stream onset and ending 600 ms after image stream offset. There were intermittent transient amplitude reductions in the sine wave modulation of the tone. These events reduced the amplitude of the sine wave to 0% of the full amplitude for 44 ms. Individual trials could contain either 0, 1, or 2 amplitude reductions. There were 32 images in each trial, with each image presented for 150 ms, resulting in a 4.8-s-long RSVP. The image stream content would either remain constant throughout the trial or switch from unpleasant to neutral (or vice versa) at variable time points within 2100–2700 ms post-RSVP onset. Participants were prompted after the auditory stimulus offset to indicate with a button press if they detected a sound level dip in the auditory stream at any point in the trial. Each trial was followed by an inter-trial interval with durations from 4000 to 4600 ms (randomly drawn from a rectangular distribution).
Participants sat in a sound-dampened room with dimmed lights. A 23" LED presentation monitor with a 120 Hz refresh rate was positioned 120 cm from participants' eyes, and each image was presented against a black background with pixel dimensions of 1024 × 768. This created a vertical visual angle of 4.055° and a horizontal visual angle of 5.913°. Before the task, participants were presented with exemplars from the auditory task to ensure that they were able to successfully detect the transient sound level changes that would act as auditory targets (see Fig. 1).

Fig. 1. Illustration of a partial trial in which images in the 6.66 Hz RSVP change from neutral to unpleasant valence. Image onset occurs 600 ms after onset of the auditory stimulus and image offset occurs 600 ms before offset of the auditory stimulus.
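The auditory stimulus can be summarized as a 600 Hz carrier whose amplitude is modulated at 40.8 Hz, with occasional 44-ms windows in which the modulated waveform is reduced to zero. A sketch of how such a stimulus could be synthesized is given below; the audio sampling rate, modulation depth, and dip placement are illustrative assumptions, and the final playback level (63 dB) would be set at the sound card/speaker stage rather than in the waveform itself.

```python
import numpy as np

fs = 44100                      # audio sampling rate (assumption)
dur = 6.0                       # tone duration: 600 ms pad + 4.8 s RSVP + 600 ms pad
t = np.arange(int(fs * dur)) / fs

carrier = np.sin(2 * np.pi * 600.0 * t)           # 600 Hz carrier
mod = 0.5 * (1 + np.sin(2 * np.pi * 40.8 * t))    # 40.8 Hz amplitude modulation (0..1)
tone = carrier * mod

# One transient "target": the waveform is silenced (0% amplitude) for 44 ms
dip_onset = 3.2                                    # seconds; placement is illustrative
i0 = int(dip_onset * fs)
tone[i0:i0 + int(0.044 * fs)] = 0.0
```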
2.3. Experiment 2: Stimuli and task
Experiments 1 and 2 did not differ in visual stimuli or room conditions, and were conducted with the same participants, in counterbalanced order. Experiment 2 was a passive viewing and listening task in which participants anticipated an aversive white noise burst to follow
one of the auditory tones. The safety-cue tone was a 63 dB sine wave tone with a 600 Hz carrier frequency and a 40.8 Hz sine wave modulation presented through speakers positioned behind the participants. The threat-cue tone was a 63 dB sine wave tone with a 650 Hz carrier frequency and a 40.8 Hz sine wave modulation. Conditions were defined in both experiments by whether the contents of the RSVP were continuously neutral, continuously unpleasant, switched from neutral to unpleasant, or switched from unpleasant to neutral, as well as by the presence of an auditory target or threat. Thus, the four RSVP streams and the two auditory conditions entered within-modality analyses for the visual and auditory stimuli, respectively, and the same conditions entered inter-modality analyses, described below. Participants rated 6 images from the study (3 neutral, 3 unpleasant) using the Self-Assessment Manikin (SAM) scale (Bradley and Lang, 1994).

2.4. EEG recording

EEG data were continuously recorded through a 129-channel sensor net with a 500 Hz sampling rate. Data were collected using an Electrical Geodesics amplifier with an input impedance of 200 MΩ. Electrode impedances were kept below 60 kΩ. Online elliptical filters of 1 Hz high-pass and 40 Hz low-pass were applied. All data were average-referenced after recording.
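For readers who want to approximate these recording parameters in an open-source pipeline, the same basic steps (1–40 Hz filtering and an average reference) map onto standard MNE-Python calls. This is only an illustrative equivalent, not the software actually used in the study (data were recorded with EGI hardware and processed in EMEGS/EEGLAB), and the file name below is a placeholder.

```python
import mne

# Placeholder file name; any 129-channel EGI recording sampled at 500 Hz would do
raw = mne.io.read_raw_egi("subject01.raw", preload=True)

raw.filter(l_freq=1.0, h_freq=40.0)     # match the 1 Hz high-pass / 40 Hz low-pass
raw.set_eeg_reference("average")        # average reference, as in the recording setup
```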
2.5. Auditory and visual steady-state analysis

For the auditory steady-state data, all channels were filtered with a low-pass cut-off at 48 Hz (Butterworth, 22nd order) and a high-pass 20 Hz cut-off (Butterworth, 7th order). Epochs were extracted 200 ms before and 6000 ms after auditory stimulus onset. To analyze the visual steady state, all channels were filtered with a low-pass at 30 Hz (Butterworth, 10th order) and a high-pass filter with a 1 Hz cut-off (Butterworth, 1st order). All cut-offs were defined as 3 dB points. Epochs were extracted 2600 ms before and 2600 ms after the change in RSVP emotional content. Artifacts were rejected automatically within these epochs, based on absolute value, standard deviation, and maximum of differences across time points and for every channel, as implemented in the ElectroMagnetic EncephaloGraphy Software (EMEGS; Junghofer et al., 2000; Peyk et al., 2011). Artifact-free or corrected epochs were averaged by condition and across participants. Data were averaged across trials within each condition according to the respective event markers for each stimulus, thus aligning the phase of the steady-state signals. Past research has shown that amplitude/power measures derived from this "evoked" steady-state activity are statistically very similar to measures that directly quantify inter-trial phase locking (Eidelman-Rothman et al., 2019).
To increase scalp topography specificity for both sensory cortical responses, all average-referenced data were converted to current source density (CSD) estimates. The CSD estimates were calculated using a spline interpolation method with a regularization parameter (lambda) of .02 (Junghöfer et al., 1997). This method of data transformation estimates cortical potentials in a manner that is independent of reference values and reduces effects of volume conduction (Carvalhaes and de Barros, 2015).
The ssVEP time course was extracted using a 10th-order filter Hilbert transformation of the 6.667 Hz frequency after initial filtering and averaging. Mean ssVEP amplitudes were calculated over 12 clustered occipital sensors during time segments from 940 to 140 ms before and 780–1580 ms after a change in RSVP content. The amplitude for the ASSR was calculated across periods of 2099 to 66 ms before and 200–2232 ms after any potential change in RSVP content using a fast Fourier transform (FFT). This resulted in a frequency resolution of 0.4921 Hz. Signal-to-noise ratios (SNRs) were then calculated using a range of 10 frequencies below and 10 above the 40.8 Hz ASSR driving frequency. The immediate 2 frequencies above and below the target signal were excluded from the SNR calculation. Averages over a cluster of 12 central and frontal sensors were entered into statistical analyses. See Fig. 2 for representations of both the ssVEP and ASSR spectra.

Fig. 2. Signal-to-noise ratios of spectral peaks at the target frequencies for both visual and auditory steady-state responses in Experiments 1 and 2 are clearly visible. The peaks of ssVEPs in both experiments occur topographically over occipital sensors, as expected. Peaks of ASSRs in both experiments also occur as expected, over frontoparietal sensors.
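The two dependent measures can thus be summarized as (a) the 6.667 Hz Hilbert envelope of the occipital ssVEP and (b) the 40.8 Hz ASSR expressed as a signal-to-noise ratio against neighboring FFT bins. A schematic re-implementation with SciPy is shown below; the filter order and bandwidth, the exact interpretation of the ±10-bin noise estimate (skipping the 2 bins adjacent to the target), and the synthetic test epoch are assumptions for illustration, as the published analysis was carried out in EMEGS.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0  # EEG sampling rate in Hz

def ssvep_envelope(avg_epoch, f_drive=6.667, half_width=0.5):
    """Hilbert amplitude envelope of the ssVEP at the driving frequency.
    avg_epoch: 1-D condition average from an occipital sensor cluster."""
    b, a = butter(4, [(f_drive - half_width) / (fs / 2),
                      (f_drive + half_width) / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, avg_epoch)
    return np.abs(hilbert(narrow))

def assr_snr(avg_epoch, f_drive=40.8, n_neighbors=10, n_skip=2):
    """Spectral amplitude at f_drive divided by the mean of the neighboring
    bins (n_neighbors below and above), excluding the n_skip closest bins."""
    spec = np.abs(np.fft.rfft(avg_epoch))
    freqs = np.fft.rfftfreq(avg_epoch.size, 1 / fs)
    k = int(np.argmin(np.abs(freqs - f_drive)))
    lo = list(range(k - n_neighbors, k - n_skip))
    hi = list(range(k + n_skip + 1, k + n_neighbors + 1))
    noise = spec[lo + hi].mean()
    return spec[k] / noise

# Example with a synthetic 2-s "condition average" containing both tagged responses
t = np.arange(0, 2.0, 1 / fs)
epoch = (np.sin(2 * np.pi * 6.667 * t)
         + 0.2 * np.sin(2 * np.pi * 40.8 * t)
         + 0.5 * np.random.randn(t.size))
print(ssvep_envelope(epoch).mean(), assr_snr(epoch))
```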
2.6. Heart rate analyses

Electrocardiogram (ECG) data were derived from the ongoing dense-array data on a trial-by-trial basis, using independent component analysis (SOBI algorithm; Tran et al., 2009), implemented in EEGLAB software (Delorme and Makeig, 2004). One component with clear ECG QRS complexes could be identified in 27 subjects, and that component's projection to the data was used as an ECG channel and subjected to quality control as described in extant guidelines papers (Jennings et al., 1981). The inter-beat intervals were then estimated from the integrated ECG data using a threshold filter and transformed into instantaneous heart rate (HR; i.e., the inverse of the inter-beat interval). To this end, the time range from 0.5 s pre-stimulus to 6 s post-stimulus was divided into 0.5-s bins, and each instantaneous HR was weighted proportionally to the fraction of the half-second bin it occupied (Gatchel and Lang, 1973; Graham, 1978). The resulting single-trial stimulus-locked HR change time series were then averaged across all trials to quantify how the presentation of threat and safety cues affected stimulus-locked HR changes. In line with the extant literature (Bradley et al., 2012), we expected and therefore quantified an early relative HR deceleration (0.5–2 s after cue onset), followed by a subsequent acceleration (3–6 s after cue onset). The resulting values (one HR deceleration score and one HR acceleration score per person and per threat/safety condition) were compared using paired t-tests.

2.7. Statistical analyses

2.7.1. SAM ratings and behavioral data
Participants provided behavioral data in the form of hedonic valence and intensity SAM ratings of 6 representative images (3 neutral, 3 unpleasant), and auditory target detection accuracy converted to percent correct scores in Experiment 1. Dependent variables were created by averaging across individual pictures, within each participant, separately
for the valence and arousal dimensions. Mean valence and intensity ratings were analyzed using paired-sample t-tests to compare unpleasant and neutral images.
To test for an effect of RSVP condition on auditory task accuracy, percent correct scores were entered into a one-way repeated measures ANOVA with RSVP content (neutral, unpleasant, neutral-to-unpleasant, unpleasant-to-neutral) acting as the within-subject factor. Percent correct scores were used instead of d' because performance was overall near ceiling, with very few or no false alarms. Furthermore, the amount of information contained in response time was limited because participants responded at the end of the trial, prompted by an imperative signal.
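The choice of percent correct over d' follows directly from the ceiling-level performance: with no false alarms, the z-transform required by d' is undefined, as the short check below illustrates (the hit and false-alarm rates are hypothetical values chosen for demonstration).

```python
from scipy.stats import norm

hit_rate, fa_rate = 0.93, 0.0           # hypothetical near-ceiling performance
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(d_prime)                          # -> inf: undefined without a correction
```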
2.7.2. Steady-state evoked potentials
Statistical analyses were conducted using JASP (v. 0.9.2; JASP Team, 2018). Partial eta-squared and Cohen's d are reported as measures of effect size. Greenhouse-Geisser corrected p-values and uncorrected F-values are reported for repeated measures ANOVAs where Mauchly's test of sphericity is significant. Additional Bayesian analyses were conducted to establish convergence or divergence with frequentist approaches, and to quantify the strength of null effects. Bayes factors are reported with estimation errors. Interpretation of Bayes factors follows the guidelines proposed by Jeffreys (1961).

2.7.2.1. Within modality analyses. To examine the effect of RSVP content on the visuocortical response, mean ssVEP Hilbert amplitudes entered a 2 × 4 repeated measures ANOVA with trial segment (before potential switch, after potential switch) and RSVP content (neutral, unpleasant, neutral-to-unpleasant, unpleasant-to-neutral) as within-subject factors. To investigate the effects of auditory condition (target or no target and threat cue or safety cue, for Experiments 1 and 2, respectively) on auditory cortical responses, mean ASSR amplitudes during the second half of the trial were analyzed using paired samples t-tests.

2.7.2.2. Between modality analyses. To determine the effect of RSVP content on the auditory signal, mean ASSR amplitudes entered a 2 × 4 repeated measures ANOVA with trial segment (before potential switch, after potential switch) and RSVP content (neutral, unpleasant, neutral-to-unpleasant, unpleasant-to-neutral) as within-subject factors. To investigate the effects of auditory condition on visuocortical responses, we analyzed average ssVEP amplitudes during a trial segment following a possible content switch using paired-samples t-tests, given the target vs. no-target conditions in Experiment 1 and the threat vs. safety cue conditions in Experiment 2.
Follow-up analyses of significant effects were conducted using additional ANOVAs and t-tests. Bayes factors are reported with estimation errors and follow the interpretation guidelines proposed by Jeffreys (1961).
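The frequentist part of this analysis scheme can be expressed compactly in Python; the sketch below is only an illustrative equivalent of the JASP analyses, using a synthetic long-format table that stands in for the per-subject, per-condition steady-state amplitudes (all variable names and the simulated values are assumptions).

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = np.arange(30)
segments = ["before", "after"]
contents = ["neutral", "unpleasant", "neutral_to_unpleasant", "unpleasant_to_neutral"]

# Synthetic long-format table standing in for the per-condition ssVEP amplitudes
rows = [(s, seg, c, rng.normal(2.0, 1.0))
        for s in subjects for seg in segments for c in contents]
df = pd.DataFrame(rows, columns=["subject", "segment", "content", "amplitude"])

# 2 x 4 repeated-measures ANOVA (trial segment x RSVP content)
print(AnovaRM(data=df, depvar="amplitude", subject="subject",
              within=["segment", "content"]).fit())

# Paired t-test, e.g. second-half ssVEP amplitude in no-target vs. target trials
no_target = rng.normal(2.14, 1.3, size=30)
target = rng.normal(2.02, 1.2, size=30)
print(stats.ttest_rel(no_target, target))
```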
3. Results

3.1. SAM ratings

Comparing the mean ratings across 6 selected pictures from each category (neutral, unpleasant), unpleasant images were rated significantly lower in pleasure on the SAM 9-point scale (Mdifference = −2.92, CI = −3.32 to −2.52; t(86) = 14.62, p < .001), and higher in arousal (Mdifference = 6.62, CI = 1.46–2.71; t(86) = 6.62, p < .001), than neutral images.
3.2. Experiment 1: Auditory target detection

3.2.1. Task accuracy
Participants were able to report the presence or absence of changes in the auditory tone with a mean accuracy of 93%, with scores ranging from 72.5% to 100% accuracy. There was no main effect of image trial type on response accuracy, F(3, 87) = 0.507, p = .679.
3.2.2. Within modality effects
In the visual cortical response, the emotional interference effect seen in Bekhtereva et al. (2018) was replicated in this study, shown visually in Fig. 3. There was no main effect of trial segment, F(1, 29) = 0.388, p = .538. There was a main effect of image content, F(3, 87) = 15.725, p < .001, ηp² = 0.352, as well as a significant interaction effect between image content and trial segment, F(3, 87) = 18.729, p < .001, ηp² = 0.392. Follow-up tests indicated that neutral images elicited significantly greater ssVEP amplitudes than unpleasant images both before (Mneutral = 2.418, SDneutral = 1.561; Munpleasant = 1.878, SDunpleasant = 1.157; F(3, 87) = 17.56, p < .001, ηp² = 0.377) and after (Mneutral = 2.434, SDneutral = 1.501; Munpleasant = 1.902, SDunpleasant = 1.064; F(3, 87)
= 16.81, p < .001, ηp² = 0.367) a change in the RSVP content. The Bayesian repeated measures ANOVA suggested decisive evidence in favor of a main effect of image content over the null, BF10 = 532206.888 ±0.807%.
During the second half of the RSVP, ASSRs in trials without an auditory target (M = 4.636, SD = 3.248) were not significantly different from trials with an auditory target (M = 4.257, SD = 3.046), t(29) = 1.786, p = .085. A Bayesian paired samples t-test indicated anecdotal evidence in support of the null over the hypothesis that the ASSRs in each auditory condition are different, BF01 = 1.26 ±0.57 × 10⁻⁵%.

Fig. 3. Effect of RSVP content is visible in both the raw (left) and Hilbert-transformed (right) time courses of the ssVEPs in both experiments. Red lines indicate a valence change from neutral to unpleasant images while blue lines indicate a valence change from unpleasant to neutral. Shaded areas indicate within-subject error. The solid vertical line is the change in RSVP valence. Topographies of Hilbert-transformed data continue to indicate greatest activity over occipital sensors. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

3.2.3. Between modality effects
During the second half of the RSVP, ssVEPs in trials without an auditory target (M = 2.136, SD = 1.32) were significantly greater than in trials with an auditory target (M = 2.017, SD = 1.228), t(29) = 2.591, p = .015, d = 0.473; see Fig. 4. A Bayesian paired samples t-test indicated substantial evidence in support of the hypothesis of a difference in ssVEP responses by auditory condition over the null, BF10 = 3.232 ±0.32 × 10⁻⁵%.
There were no significant effects of trial segment (F(1, 29) = 1.324, p = .259), image content (F(3, 87) = 0.498, p = .685), nor interaction (F(3, 87) = 0.463, p = .709) on the ASSR. The Bayesian repeated measures ANOVA indicated substantial evidence in support of the null over the alternative hypothesis for any effect of trial segment, BF01 = 3.49 ±1.263%, strong evidence in support of the null over the alternative for an effect of image content, BF01 = 22.711 ±0.529%, and strong evidence in support of the null over the alternative for any interaction effect, BF01 = 79.674 ±1.119%.
Fig. 4. Bar graph and topographies of means of ssVEP amplitude during second half of trials with and without an auditory target. Mean ssVEP amplitude is lower during trials with transient auditory disruption than during trials without any auditory disruptions. Error bars represent within-subject error.
3.3. Experiment 2: Auditory threat of noise

3.3.1. Within modality effects
The emotional interference effect seen in Bekhtereva et al. (2018) was replicated again in Experiment 2 (Fig. 3). There was no main effect of trial segment, F(1, 29) = 0.849, p = .364. There was a significant main effect of image content, F(3, 87) = 18.773, p < .001, ηp² = 0.393, and a significant interaction effect between image content and trial segment, F(3, 87) = 22.979, p < .001, ηp² = 0.442. Follow-up tests indicated that neutral images elicited significantly greater ssVEP amplitudes than unpleasant images both before (Mneutral = 2.51, SDneutral = 1.565; Munpleasant = 1.909, SDunpleasant = 1.102; F(3, 87) = 17.16, p < .001, ηp² = 0.372) and after (Mneutral = 2.585, SDneutral = 1.532; Munpleasant = 1.898, SDunpleasant = 1.075; F(3, 87) = 23.83, p < .001, ηp² = 0.451) a change in the RSVP content. The Bayesian repeated measures ANOVA again suggested
decisive evidence in favor of a main effect of image content over the null, BF10 = 3.441 × 10⁸ ±0.587%.
During the second half of the RSVP, ASSRs in auditory threat cue trials (M = 3.727, SD = 2.643) were not significantly different from auditory safety cue trials (M = 4.480, SD = 2.995), t(29) = 1.865, p = .072. A Bayesian paired samples t-test indicated anecdotal evidence in support of the null over the hypothesis that the ASSRs in each auditory condition are different, BF01 = 1.117 ±0.55 × 10⁻⁵%.

3.3.2. Between modality effects
During the second half of the RSVP, ssVEPs in auditory safety cue
trials (M = 2.143, SD = 1.299) were not significantly different from auditory threat cue trials (M = 2.178, SD = 1.272), t(29) = 0.546, p = .589. A Bayesian paired samples t-test indicated substantial evidence in support of the null over the hypothesis of a difference in ssVEP responses by auditory condition, BF01 = 4.482 ±0.006%.
There were no significant main effects of trial segment (F(1, 29) = 1.025, p = .32) nor image content (F(3, 87) = 0.667, p = .574) on the ASSR. The interaction effect was also non-significant, F(3, 87) = 0.011, p = .993. The Bayesian repeated measures ANOVA indicated substantial evidence in support of the null over the alternative for any effect of trial segment, BF01 = 4.984 ±0.929%, strong evidence in support of the null over the alternative for an effect of image content, BF01 = 17.835 ±0.543%, and strong evidence in support of the null over the alternative for any interaction effect, BF01 = 88.322 ±0.932%.

3.3.3. Heart rate analyses
Paired samples t-tests indicated no significant differences in initial heart rate deceleration between threat (M = 0.68, SD = 0.71) and safety (M = 0.56, SD = 0.69) cue conditions, t(26) = 0.59, p = .56. However, heart rate acceleration in the latter half of the trial was significantly greater in the threat (M = 0.77, SD = 1.05) versus safety (M = 0.44, SD = 0.79) cue conditions, t(26) = 4.36, p < .001 (see Fig. 5).

Fig. 5. Mean heart rate change from baseline across participants in threat and safety cue trials. Heart rate acceleration was significantly greater in the threat versus safety conditions between 3 and 6 s post-stimulus onset.

3.4. Between experiment analyses

After averaging across trial segment and RSVP condition, a paired samples t-test showed that amplitudes of ssVEP Hilbert segments were not significantly different between Experiment 1 (M = 2.158, SD = 1.287) and Experiment 2 (M = 2.225, SD = 1.278), t(29) = 0.630, p = .534. After averaging across trial segment and auditory condition, a paired samples t-test showed that ASSR signal-to-noise ratios were not significantly different between Experiment 1 (M = 4.499, SD = 3.058) and Experiment 2 (M = 4.161, SD = 2.613), t(29) = 1.131, p = .267.¹

¹ Control analyses conducted on average-reference EEG data: All effects in support of the alternative hypotheses over the null were also observed in Bayesian analyses conducted on average-reference data with BF10 ≥ 3. Conversely, all effects in support of the null hypotheses over the alternative were also observed in analyses conducted on average-reference data with BF10 ≤ 1. Thus, the pattern of findings reported in this manuscript does not depend on the CSD representation of the data.
4. Discussion

The present study aimed to address the question whether sustained processing of motivationally and task-relevant stimulus streams results in facilitative (H1) or competitive (H2) intermodal effects in the respective sensory cortices. Both auditory and visual steady-state responses were analyzed as a function of motivational relevance in both modalities, in two experiments. In both experiments, motivational relevance in the visual domain was manipulated by changing the emotional content of an RSVP stream. In Experiment 1, stimulus relevance in the auditory domain was manipulated with a selective attention task, and by presenting auditory threat versus safety cues in Experiment 2.
The emotional interference effect on the visual response signal reported in Bekhtereva et al. (2018) was replicated twice in the current study. In both experiments, the ssVEP amplitudes during serial neutral image presentation were greater than during serial unpleasant image presentation. There was also no difference between early and late trial segments within one valence category. There were no effects of the auditory task or threat manipulation on the auditory signal. Participant heart rate acceleration during auditory threat cues in Experiment 2 was greater than in safety cue conditions, suggesting the threat cue was perceived as a valid predictor of the noxious noise, distinguishable from the safety cue, and selectively prompting defensive mobilization (Lang and Bradley, 2013).
Across experiments and modalities, only one intermodal interaction effect was observed: interference of the auditory task on visuocortical processing was present in Experiment 1. Here, weaker ssVEP amplitudes were observed in auditory target trials, compared to auditory non-target trials. By contrast, auditory manipulations in Experiment 2 had no effect on ssVEP amplitudes. There was also a lack of support for both alternative hypotheses regarding intermodal interactions with respect to the auditory cortical sensory responses. The content of the visual stream had no effect on the auditory response in either experiment. As such, these findings are broadly consistent with a body of electrophysiology studies in humans and monkeys observing stronger interference by auditory tasks on visual processing, compared to visual-to-auditory interference effects (Bendixen et al., 2010; Mehta et al., 2000). The present findings also mirror the report by Porcu et al. (2014), who, using steady-state potential frequency tagging, found no evidence of intermodal interference exerted by salient transient events, whereas within-modality cost effects were pronounced. There is also the possibility that the null visual-to-auditory intermodal interference effects may have been driven by the lack of shifting attention away from the auditory modality, i.e., that sensory capacity limitations were not reached (Roth et al., 2013).
The within-modality emotional interference effect in the ssVEP was robust, occurring during both auditory task and threat contexts. Despite the difference in ssVEP amplitude during trials with and without a transient disturbance in the auditory stimulus, neutral RSVP content continued to elicit greater primary visual cortex responses than unpleasant RSVP content. This indicates that the present visual RSVP paradigm is a consistent and robust method of eliciting visuocortical responses to emotional content under varying experimental contexts and during different task instructions.
This effect’s strength illustrates its potential as an index of emotional attention to visual scenes. This is a valuable measurement for studies with high demands regarding effect
size, such as studies of interindividual differences, or in noisy measurement environments.
The absence of threat-of-noise effects in the present study was unexpected; however, several lines of empirical and conceptual work may account for the strong null effects observed across both experiments. First, this study focused on sustained processing, which may have obscured short and transient responses reflective of selective facilitation or interference. Such transient responses would be too brief to be captured by the present analysis technique (Cornwell et al., 2007). Secondly, several studies have shown that higher-order cortical, but not sensory, responses were affected by sustained threat context manipulations (e.g., Andreatta et al., 2015). By contrast, work that involves systematic associative fear conditioning (Miskovic and Keil, 2013), or that focuses on transient responses in the 100s of milliseconds range, also indicative of higher-order cognitive processing (Cornwell et al., 2007), demonstrates threat effects in sensory areas. Further, many of the published threat context effects are demonstrated within visual, rather than auditory, responses (Wieser et al., 2016a, 2016b) or are associated with extra-sensory brain regions, such as the middle frontal gyrus, or with behavioral outcome measures (Petro et al., 2017; Robinson et al., 2011). Dolan et al. (2006) found that while visual ERP components showed early modulation to conditioned stimuli, simultaneously evoked auditory ERP components only showed modulation to unconditioned stimuli in later time windows and in frontal sensors. Late auditory ERP modulations would indicate higher-order cortical rather than primary response influences (Dolan et al., 2006). Such effects would not be captured by the measures used here (ASSRs, ssVEPs), originating predominantly in primary sensory cortices. Previously seen intermodal interactions have also been predominantly found in paradigms where the competing stimuli are dependent upon each other (Armony and Dolan, 2001; Garrido-Vásquez et al., 2018). Thus, one limitation of the present study is the exclusion of the implementation and analysis of transient auditory and visual stimulation. The use of sustained concurrent sensory stimulation was intended to index naturalistic, sustained sensory processing. The absence of strong intermodal effects of emotional content in the present study, together with the literature discussed above, suggests that this interference may be transient in nature, and potentially more prominent during brief, task-relevant intermodal distraction.
The lack of intermodal interactions in sensory cortices during sustained attentional and motivational engagement may also reflect conflicting effects at levels of the auditory and visual streams preceding sensory cortex. Research in the auditory periphery has established inter- and intramodal interactions that serve to increase stimulus processing efficiency before the signal arrives in auditory cortex (Gruters et al., 2018). One intramodal periphery regulation system is based on the suppressive role of the medial olivocochlear efferent neurons on the cochlear outer hair cells (Smith and Keil, 2015). Contrary to higher-order cortical and behavioral responses in selective attention research, this suppression in the peripheral auditory response is more pronounced when participants selectively attend auditory over visual stimuli (Smith and Keil, 2015).
This process is thought to increase the signal-to-noise ratio within the auditory modality by filtering auditory signals that are not relevant to the behavioral task or motivational goal (Smith and Keil, 2015). Applied to the present situation, the primary auditory cortex may be receiving top-down and bottom-up modulatory signals that together change the neural population gain in ways that cannot readily be assessed with extracranial scalp recordings.

4.1. Conclusions and future directions

The present study set out to define the nature of intermodal interactions between sustained sensory cortical responses when processing concurrent stimulus streams varying in task relevance and motivational/emotional relevance. Replicating a body of previous research examining intermodal selective attention effects, we found limited evidence for intermodal trade-off, which was restricted to
auditory-to-visual interference exerted by the presence of task-relevant and transient targets. By contrast, manipulations of temporally sus tained changes in stream relevance did not result in measurable inter ference effects. Interference effects were most pronounced within the visual modality where a previously established destructive interference effect was robustly observed, irrespective of concurrent task type and condition. These findings suggest that models of intermodal selective attention may apply to more naturalistic types of attentive selection, due to motivational/emotional content. In terms of emotional content, future studies may include both pleasant and unpleasant stimuli to test for possible directional effects of relevance. Future work may also extend the present experimental approach to examine the effects of motivationally salient, transient intermodal events on indices of limited capacity, including in participant populations with known difficulties in terms of emotion-attention interactions. Funding This work was supported by grant R01MH112558 from the National Institute of Mental Health and grant N00014-18-1-2306 from the Office of Naval Research to Andreas Keil. CRediT authorship contribution statement Kierstin M. Riels: Software, Formal analysis, Writing - original draft, Writing - review & editing, Visualization, Data curation, Investi gation. Harold A. Rocha: Investigation, Resources, Data curation, Writing - review & editing. Andreas Keil: Conceptualization, Method ology, Writing - original draft, Writing - review & editing, Supervision, Funding acquisition. Acknowledgments The authors wish to thank Anika Hossain for her assistance in recruitment of participants, and data collection. Appendix A. Supplementary data Supplementary data to this article can be found online at https://doi. org/10.1016/j.neuropsychologia.2019.107283. References Andreatta, M., Glotzbach-Schoon, E., Mühlberger, A., Schulz, S.M., Wiemer, J., Pauli, P., 2015. Initial and sustained brain responses to contextual conditioned anxiety in humans. Cortex 63, 352–363. https://doi.org/10.1016/j.cortex.2014.09.014. Armony, J.L., Dolan, R.J., 2001. Modulation of auditory neural responses by a visual context in human fear conditioning. Neuroreport 12 (15), 3407–3411. https://doi. org/10.1097/00001756-200110290-00051. Bekhtereva, V., Pritschmann, R., Keil, A., Müller, M.M., 2018. The neural signature of extracting emotional content from rapid visual streams at multiple presentation rates: a cross-laboratory study. Psychophysiology, e13222. https://doi.org/10.1111/ psyp.13222. Bendixen, A., Grimm, S., Deouell, L.Y., Wetzel, N., Madebach, A., Schroger, E., 2010. The time-course of auditory and visual distraction effects in a new crossmodal paradigm. Neuropsychologia 48 (7), 2130–2139. https://doi.org/10.1016/j. neuropsychologia.2010.04.004. Bradley, M.M., Keil, A., Lang, P.J., 2012. Orienting and emotional perception: facilitation, attenuation, and interference. Front. Psychol. 3, 493. https://doi.org/ 10.3389/fpsyg.2012.00493. Bradley, M.M., Lang, P.J., 1994. Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25 (1), 49–59. https://doi.org/10.1016/0005-7916(94)90063-9. Caruso, V.C., Pages, D.S., Sommer, M.A., Groh, J.M., 2016. Similar prevalence and magnitude of auditory-evoked and visually evoked activity in the frontal eye fields: implications for multisensory motor control. J. Neurophysiol. 115 (6), 3162–3173. 
https://doi.org/10.1152/jn.00935.2015. Carvalhaes, C., de Barros, J.A., 2015. The surface Laplacian technique in EEG: theory and methods. Int. J. Psychophysiol. 97 (3), 174–188. https://doi.org/10.1016/j. ijpsycho.2015.04.023. Cornwell, B.R., Baas, J.M.P., Johnson, L., Holroyd, T., Carver, F.W., Lissek, S., Grillon, C., 2007. Neural responses to auditory stimulus deviance under threat of electric shock
revealed by spatially-filtered magnetoencephalography. Neuroimage 37 (1), 282–289. https://doi.org/10.1016/j.neuroimage.2007.04.055.
Delorme, A., Makeig, S., 2004. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134 (1), 9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009.
Dolan, R.J., Heinze, H.J., Hurlemann, R., Hinrichs, H., 2006. Magnetoencephalography (MEG) determined temporal modulation of visual and auditory sensory processing in the context of classical conditioning to faces. Neuroimage 32 (2), 778–789. https://doi.org/10.1016/j.neuroimage.2006.04.206.
Duncan, J., Martens, S., Ward, R., 1997. Restricted attentional capacity within but not between sensory modalities. Nature 387 (6635), 808–810. https://doi.org/10.1038/42947.
Egloff, B., Hock, M., 2001. Interactive effects of state anxiety and trait anxiety on emotional Stroop interference. Pers. Individ. Differ. 31 (6), 875–882. https://doi.org/10.1016/S0191-8869(00)00188-4.
Eidelman-Rothman, M., Ben-Simon, E., Freche, D., Keil, A., Hendler, T., Levit-Binnun, N., 2019. Sleepless and desynchronized: impaired inter-trial phase coherence of steady-state potentials following sleep deprivation. bioRxiv. https://doi.org/10.1101/471730.
Fucci, E., Abdoun, O., Lutz, A., et al., 2019. Auditory perceptual learning is not affected by anticipatory anxiety in the healthy population except for highly anxious individuals: EEG evidence. Clin. Neurophysiol. 130 (7), 1135–1143. https://doi.org/10.1016/j.clinph.2019.04.010.
Garrido-Vásquez, P., Pell, M.D., Paulmann, S., Kotz, S.A., 2018. Dynamic facial expressions prime the processing of emotional prosody. Front. Hum. Neurosci. 12. https://doi.org/10.3389/fnhum.2018.00244.
Gatchel, R.J., Lang, P.J., 1973. Accuracy of psychophysical judgement and physiological response amplitude. J. Exp. Psychol. 98 (1), 175–183. https://doi.org/10.1037/h0034312.
Graham, F.K., 1978. Constraints on measuring heart rate and period sequentially through real and cardiac time. Psychophysiology 15 (5), 492–495. https://doi.org/10.1111/j.1469-8986.1978.tb01422.x.
Grillon, C., 2008. Models and mechanisms of anxiety: evidence from startle studies. Psychopharmacology 199 (3), 421–437. https://doi.org/10.1007/s00213-007-1019-1.
Gruters, K.G., Murphy, D.L.K., Jenson, C.D., Smith, D.W., Shera, C.A., Groh, J.M., 2018. The eardrums move when the eyes move: a multisensory effect on the mechanics of hearing. Proc. Natl. Acad. Sci. 115 (6). https://doi.org/10.1073/pnas.1717948115.
Hillyard, S.A., Hink, R.F., Schwent, V.L., Picton, T.W., 1973. Electrical signs of selective attention in the human brain. Science 182 (4108), 177–180. https://doi.org/10.1126/science.182.4108.177.
Hillyard, S.A., Hinrichs, H., Tempelmann, C., Morgan, S.T., Hansen, J.C., Scheich, H., Heinze, H.-J., 1997. Combining steady-state visual evoked potentials and fMRI to localize brain activity during selective attention. Hum. Brain Mapp. 5 (4), 287–292. https://doi.org/10.1002/(SICI)1097-0193(1997)5:4<287::AID-HBM14>3.0.CO;2-B.
Huang, J., Reinders, A.A.T.S., Wang, Y., Xu, T., Zeng, Y., Li, K., et al., 2018. Neural correlates of audiovisual sensory integration. Neuropsychology 32 (3), 329–336. https://doi.org/10.1037/neu0000393.
Jeffreys, H., 1961. Theory of Probability, third ed. Oxford University Press, Oxford, UK.
Jennings, J.R., Berg, W.K., Hutcheson, J.S., Obrist, P., Porges, S., Turpin, G., 1981. Publication guidelines for heart rate studies in man. Psychophysiology 18 (3), 226–231. https://doi.org/10.1111/j.1469-8986.1981.tb03023.x.
Junghöfer, M., Elbert, T., Leiderer, P., Berg, P., Rockstroh, B., 1997. Mapping EEG-potentials on the surface of the brain: a strategy for uncovering cortical sources. Brain Topogr. 9 (3), 203–217.
Junghöfer, M., Elbert, T., Tucker, D.M., Rockstroh, B., 2000. Statistical control of artifacts in dense array EEG/MEG studies. Psychophysiology 37 (4), 523–532. https://doi.org/10.1111/1469-8986.3740523.
Kanske, P., Kotz, S.A., 2010. Modulation of early conflict processing: N200 responses to emotional words in a flanker task. Neuropsychologia 48 (12), 3661–3664. https://doi.org/10.1016/j.neuropsychologia.2010.07.021.
Keil, A., Bradley, M.M., Junghöfer, M., Russmann, T., Lowenthal, W., Lang, P.J., 2007. Cross-modal attention capture by affective stimuli: evidence from event-related potentials. Cogn. Affect. Behav. Neurosci. 7 (1), 18–24. https://doi.org/10.3758/CABN.7.1.18.
Keil, A., Gruber, T., Müller, M.M., Moratti, S., Stolarova, M., Bradley, M.M., Lang, P.J., 2003. Early modulation of visual perception by emotional arousal: evidence from steady-state visual evoked brain potentials. Cogn. Affect. Behav. Neurosci. 3 (3), 195–206. https://doi.org/10.3758/CABN.3.3.195.
Keitel, C., Maess, B., Schröger, E., Müller, M.M., 2013. Early visual and auditory processing rely on modality-specific attentional resources. Neuroimage 70, 240–249. https://doi.org/10.1016/j.neuroimage.2012.12.046.
Kindt, M., Bierman, D., Brosschot, J.F., 1997. Cognitive bias in spider fear and control children: assessment of emotional interference by a card format and a single-trial format of the Stroop task. J. Exp. Child Psychol. 66 (2), 163–179. https://doi.org/10.1006/jecp.1997.2376.
Kreifelts, B., Ethofer, T., Grodd, W., Erb, M., Wildgruber, D., 2007. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study. Neuroimage 37 (4), 1445–1456. https://doi.org/10.1016/j.neuroimage.2007.06.020.
Lang, P.J., Bradley, M.M., 2013. Appetitive and defensive motivation: goal-directed or goal-determined? Emotion Rev. 5 (3), 230–234. https://doi.org/10.1177/1754073913477511.
Lang, P.J., Bradley, M.M., Cuthbert, B.N., 2008. International affective picture system (IAPS): affective ratings of pictures and instruction manual. Technical Report A-8. University of Florida, Gainesville, FL.
MacLeod, C.M., MacDonald, P.A., 2000. Interdimensional interference in the Stroop effect: uncovering the cognitive and neural anatomy of attention. Trends Cogn. Sci. 4 (10), 383–391. https://doi.org/10.1016/S1364-6613(00)01530-8.
Max, C., Widmann, A., Kotz, S.A., Schröger, E., Wetzel, N., 2015. Distraction by emotional sounds: disentangling arousal benefits and orienting costs. Emotion 15 (4), 428–437. https://doi.org/10.1037/a0039041.
Mehta, A.D., Ulbert, I., Schroeder, C.E., 2000. Intermodal selective attention in monkeys. I: distribution and timing of effects across visual areas. Cerebr. Cortex 10 (4), 343–358. https://doi.org/10.1093/cercor/10.4.343.
Miskovic, V., Keil, A., 2012. Acquired fears reflected in cortical sensory processing: a review of electrophysiological studies of human classical conditioning. Psychophysiology 49 (9), 1230–1241. https://doi.org/10.1111/j.1469-8986.2012.01398.x.
Miskovic, V., Keil, A., 2013. Perceiving threat in the face of safety: excitation and inhibition of conditioned fear in human visual cortex. J. Neurosci. 33 (1), 72–78. https://doi.org/10.1523/JNEUROSCI.3692-12.2013.
Müller, M.M., Andersen, S.K., Keil, A., 2008. Time course of competition for visual processing resources between emotional pictures and foreground task. Cerebr. Cortex 18 (8), 1892–1899. https://doi.org/10.1093/cercor/bhm215.
Müller, M.M., Hillyard, S., 2000. Concurrent recording of steady-state and transient event-related potentials as indices of visual-spatial selective attention. Clin. Neurophysiol. 111 (9), 1544–1552. https://doi.org/10.1016/S1388-2457(00)00371-0.
Petro, N.M., Gruss, L.F., Yin, S., Huang, H., Miskovic, V., Ding, M., Keil, A., 2017. Multimodal imaging evidence for a frontoparietal modulation of visual cortex during the selective processing of conditioned threat. J. Cogn. Neurosci. 29 (6), 953–967. https://doi.org/10.1162/jocn_a_01114.
Peyk, P., De Cesarei, A., Junghöfer, M., 2011. ElectroMagnetoEncephalography software: overview and integration with other EEG/MEG toolboxes. Comput. Intell. Neurosci. 1–10. https://doi.org/10.1155/2011/861705.
Peyk, P., Schupp, H.T., Keil, A., Elbert, T., Junghöfer, M., 2009. Parallel processing of affective visual stimuli. Psychophysiology 46 (1), 200–208. https://doi.org/10.1111/j.1469-8986.2008.00755.x.
Porcu, E., Keitel, C., Müller, M.M., 2014. Visual, auditory and tactile stimuli compete for early sensory processing capacities within but not between senses. Neuroimage 97, 224–235. https://doi.org/10.1016/j.neuroimage.2014.04.024.
Robinson, O.J., Letkiewicz, A.M., Overstreet, C., Ernst, M., Grillon, C., 2011. The effect of induced anxiety on cognition: threat of shock enhances aversive processing in healthy individuals. Cogn. Affect. Behav. Neurosci. 11 (2), 217–227. https://doi.org/10.3758/s13415-011-0030-5.
Roth, C., Gupta, C.N., Plis, S.M., Damaraju, E., Khullar, S., Calhoun, V.D., Bridwell, D.A., 2013. The influence of visuospatial attention on unattended auditory 40 Hz responses. Front. Hum. Neurosci. 7. https://doi.org/10.3389/fnhum.2013.00370.
Saupe, K., Schröger, E., Andersen, S.K., Müller, M.M., 2009. Neural mechanisms of intermodal sustained selective attention with concurrently presented auditory and visual stimuli. Front. Hum. Neurosci. 3. https://doi.org/10.3389/neuro.09.058.2009.
Saupe, K., Widmann, A., Bendixen, A., Müller, M.M., Schröger, E., 2009. Effects of intermodal attention on the auditory steady-state response and the event-related potential. Psychophysiology 46 (2), 321–327. https://doi.org/10.1111/j.1469-8986.2008.00765.x.
Schimmack, U., 2005. Attentional interference effects of emotional pictures: threat, negativity, or arousal? Emotion 5 (1), 55–66. https://doi.org/10.1037/1528-3542.5.1.55.
Schupp, H.T., Junghöfer, M., Weike, A.I., Hamm, A.O., 2003. Emotional facilitation of sensory processing in the visual cortex. Psychol. Sci. 14 (1), 7–13. https://doi.org/10.1111/1467-9280.01411.
Shrem, T., Deouell, L.Y., 2016. Hierarchies of attention and experimental designs: effects of spatial and intermodal attention revisited. J. Cogn. Neurosci. 29 (1), 203–219. https://doi.org/10.1162/jocn_a_01030.
Smith, D.W., Keil, A., 2015. The biological role of the medial olivocochlear efferents in hearing: separating evolved function from exaptation. Front. Syst. Neurosci. 9. https://doi.org/10.3389/fnsys.2015.00012.
Talsma, D., Woldorff, M.G., 2005. Selective attention and multisensory integration: multiple phases of effects on the evoked brain activity. J. Cogn. Neurosci. 17 (7), 1098–1114. https://doi.org/10.1162/0898929054475172.
Toffanin, P., de Jong, R., Johnson, A., Martens, S., 2009. Using frequency tagging to quantify attentional deployment in a visual divided attention task. Int. J. Psychophysiol. 72 (3), 289–298. https://doi.org/10.1016/j.ijpsycho.2009.01.006.
Wieser, M.J., Miskovic, V., Keil, A., 2016. Steady-state visual evoked potentials as a research tool in social affective neuroscience. Psychophysiology 53 (12), 1763–1775. https://doi.org/10.1111/psyp.12768.
Wieser, M.J., Reicherts, P., Juravle, G., von Leupoldt, A., 2016. Attention mechanisms during predictable and unpredictable threat – a steady-state visual evoked potential approach. Neuroimage 139, 167–175. https://doi.org/10.1016/j.neuroimage.2016.06.026.
Zeelenberg, R., Bocanegra, B.R., 2010. Auditory emotional cues enhance visual perception. Cognition 115 (1), 202–206. https://doi.org/10.1016/j.cognition.2009.12.004.
Tran, Y., Thuraisingham, R.A., Craig, A., Nguyen, H., 2009. Evaluating the efficacy of an automated procedure for EEG artifact removal. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2009, 376–379. https://doi.org/10.1109/IEMBS.2009.5334554.
JASP Team, 2018. JASP (Version 0.9.2) [Computer software].