NeuroImage 57 (2011) 1542–1551
Emotion modulates the effects of endogenous attention on retinotopic visual processing

Ana Gomez a, Marcus Rothkirch a, Christian Kaul b,c, Martin Weygandt d, John-Dylan Haynes d, Geraint Rees b, Philipp Sterzer a,d,e,⁎

a Department of Psychiatry, Campus Charité Mitte, Charité – Universitätsmedizin Berlin, Germany
b Institute of Cognitive Neuroscience & Wellcome Trust Centre for Neuroimaging, University College London, UK
c Department of Psychology and Center for Neural Science, New York University, USA
d Bernstein Center for Computational Neuroscience, Berlin, Germany
e Berlin School of Mind and Brain, Berlin, Germany

⁎ Corresponding author at: Department of Psychiatry, Campus Charité Mitte, Charitéplatz 1, D-10117 Berlin, Germany. Fax: +49 30 450517944. E-mail address: [email protected] (P. Sterzer).
Article history: Received 5 January 2011; Revised 15 April 2011; Accepted 25 May 2011; Available online 2 June 2011

Keywords: Visual perception; Emotion; Attention; Functional magnetic resonance imaging; Retinotopic mapping
Abstract

A fundamental challenge for organisms is how to focus on perceptual information relevant to current goals while remaining able to respond to goal-irrelevant stimuli that signal potential threat. Here, we studied how visual threat signals influence the effects of goal-directed spatial attention on the retinotopic distribution of processing resources in early visual cortex. We used a combined blocked and event-related functional magnetic resonance imaging paradigm with target displays comprising diagonal pairs of intact and scrambled faces presented simultaneously in the four visual field quadrants. Faces were male or female and had fearful or neutral emotional expressions. Participants attended covertly to a pair of diagonally opposite stimuli and performed a gender-discrimination task on the attended intact face. In contrast to the fusiform face area, where attention and fearful emotional expression had additive effects, neural responses to attended and unattended fearful faces were indistinguishable in early retinotopic visual areas: when attended, a fearful expression did not further enhance responses, whereas when unattended, a fearful expression increased responses to the level of attended face stimuli. Remarkably, the presence of fearful stimuli augmented the enhancing effect of attention on retinotopic responses to neutral faces in remote visual field locations. We conclude that this redistribution of neural activity in retinotopic visual cortex may serve the purpose of allocating processing resources to task-irrelevant threat-signaling stimuli while at the same time increasing resources for task-relevant stimuli as required for the maintenance of goal-directed behavior.
Introduction

Human perception is characterized by a striking ability to discern relevant from irrelevant information. Relevance depends on the present goals of an individual, but also on the potential significance of goal-unrelated stimuli, such as unattended events that signal threat. A crucial task in perception is therefore to balance focusing on the current task with remaining able to respond to potential harm. The allocation of processing resources in accordance with current task requirements has been conceptualized as endogenous attention (Corbetta and Shulman, 2002). Endogenous visual attention enhances neural responses in retinotopic cortex and functionally specialized extrastriate visual areas (Kastner et al., 2009; Reynolds and Chelazzi, 2004). Behaviorally, directing endogenous attention covertly to a particular location improves stimulus
detection and discrimination, and enhances spatial resolution (Carrasco, 2006). It is also well established that threat-signaling visual stimuli are processed preferentially. For example, fearful faces are detected more easily (Frischen et al., 2008) and can enhance spatial attention effects on perception (Phelps et al., 2006). Converging evidence from different behavioral and neuroimaging methodologies suggests that emotional stimuli are processed, at least partially, in the absence of attention and even of awareness (Pessoa, 2005; Vuilleumier, 2005). While most previous studies focused on the role of the amygdalae in the detection of threat signals, emotional information processing in visual cortex has been explored less systematically. Functional neuroimaging studies have repeatedly shown that emotional information enhances responses in visual cortex (Peelen et al., 2007; Pessoa et al., 2002; Sabatinelli et al., 2005; Vuilleumier et al., 2001), but less is known about the interaction of emotional information with endogenous attention in visual cortex. Vuilleumier et al. (2001) used functional magnetic resonance imaging (fMRI) to investigate the effects of endogenous spatial attention and emotion on responses in
high-level extrastriate visual cortex by modulating these factors independently. Attending to faces versus houses enhanced responses to faces in the face-responsive fusiform cortex regardless of expression, but responses to fearful compared to neutral faces were stronger, irrespective of attention. In extrastriate visual cortex, emotional information thus seems to modulate activity independently of attention. Here, we asked how retinotopically localized emotional visual information influences the effects of endogenous spatial attention in early retinotopic cortex. During fMRI, neutral or fearful faces were presented simultaneously in two visual field quadrants and scrambled versions of these faces in the remaining two quadrants. Attention was directed to one intact/scrambled face pair as indicated by a central cue. We could thus assess how emotion and endogenous spatial attention interacted in retinotopic visual cortical processing. We reasoned that if preferential processing of visual threat-related cues serves selection of appropriate actions, such as reorienting, then emotional information anywhere in the display should evoke retinotopically specific enhancement of neural activity corresponding to the location of a stimulus. Moreover, if emotion and attention exert their effects on visual processing independently, as previously suggested, emotional information should enhance retinotopic visual processing over and above the well-known enhancing effects of endogenous attention on retinotopic cortex.

Materials and methods

Participants

Fifteen healthy right-handed participants (nine females, aged 21–39) with normal or corrected-to-normal vision participated in the study after giving written informed consent as approved by the local ethics committee. Three participants (two females) were excluded from the study due to inability to maintain central fixation, as revealed by eye-tracking recordings that showed systematic saccadic eye movements towards the face stimulus locations throughout the experiment.

Stimuli and design

Stimuli were generated using MATLAB (Mathworks Inc.) and the COGENT 2000 toolbox (www.vislab.ucl.ac.uk/Cogent/index.html) and projected with an LCD projector (ProExtra Multiverse Projector, Sanyo Electric Co. Ltd; refresh rate 60 Hz) onto a screen at the head end of the scanner that was viewed via a mirror attached to the head coil directly above the participants' eyes (viewing distance 59 cm). The size of the screen was 24.9° × 18.6° of visual angle. We used a combined blocked and event-related fMRI paradigm subdivided into six experimental runs of 7 min duration. Each run comprised 12 blocks of 35 s duration separated by resting periods of 10 s during which only a diagonal gray fixation cross (1.8° of visual angle) and gray placeholders in the four possible stimulus locations (see below and Fig. 1) were displayed. At the beginning of each block, one of the two diagonal bars of the fixation cross darkened to indicate which diagonal stimulus pair the participant should attend to while keeping central fixation. That is, throughout each 35 s block participants covertly attended either to the stimulus location pair in the upper left and lower right visual field quadrant, or to the pair in the upper right and the lower left quadrant (Fig. 1). The fixation cross remained on the screen throughout the experiment to ensure central fixation, which was essential for the covert attention task. The block order was fully randomized.
Each block contained 4 trials with target displays that were presented for 250 ms with a randomly jittered interstimulus interval of 3–9 s duration. The target displays consisted of four stimuli presented simultaneously in the four visual field quadrants (4.7° eccentricity). Stimuli consisted of a selection of eight face
images, 4 female and 4 male, taken from the standardized series developed by Ekman and Friesen (Pictures of Facial Affect, 1976, Palo Alto, CA: Consulting Psychologists Press). There was an intact and a scrambled version of each face image. Faces were fitted into an elliptical shape (visual angle 4.1° × 6.2°) that eliminated background and hair. In each target display, two intact face images and two scrambled face images were shown, and an intact face was always paired with a scrambled face on one diagonal of the display. That is, there were always pairs of an intact and a scrambled face on the attended and unattended diagonals of the display, respectively. Faces could be either male or female. The occurrence of male or female faces was completely randomized, that is, the two intact faces in a given display could be both male, both female, or one male and one female. Participants performed a speeded gender-discrimination task on the intact face stimulus that appeared in the attended diagonal using right-hand index or middle finger key presses on a custom-made MRI-compatible button box. The face images could have either fearful or neutral emotional expressions. There were three main conditions: in one third of all trials, a fearful face appeared in an attended location while a neutral face appeared in an unattended location (emotion attended). In another third, a neutral face appeared in an attended location and another neutral face appeared in an unattended location (no emotion). Finally, in the remaining third of trials, a neutral face appeared in an attended location and a fearful face in an unattended location (emotion unattended). Except for the diagonal pairing of intact and scrambled faces (see above), stimulus locations across trials and the order of trial types were fully randomized. This design enabled us to analyze the data according to two principles: First, behavioral data and fMRI responses in non-retinotopically organized brain regions (fusiform gyrus, amygdalae) could be analyzed according to the three above-mentioned main conditions emotion attended, no emotion, and emotion unattended. Second, in retinotopic cortex the stimulus geometry with one stimulus in each quadrant of the visual field allowed us to separately extract the fMRI signal evoked by each of the four stimuli in their respective retinotopic cortical representations, resulting in six possible conditions for each retinotopic stimulus representation. This is illustrated in Fig. 2, which shows these six possible conditions in relation to the retinotopic representation of the stimulus in the right upper quadrant. The difference between non-retinotopic and retinotopic analyses is best illustrated with an example: for non-retinotopic areas such as the FFA, panels (A) and (F) in Fig. 2 belong to the same condition, as in both stimulus configurations a fearful face is attended and a neutral face is unattended. For the retinotopic representations of the upper right stimulus location in areas V1–V3, in contrast, panels (A) and (F) obviously represent different conditions: in (A), an attended fearful face is present in the right upper quadrant, while there is an unattended neutral face present elsewhere in the display; in (F), an unattended neutral face is present in the same visual field location, while there is an attended fearful face present elsewhere in the display.
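For concreteness, the assignment of trials to these six conditions for a given stimulus location can be written down explicitly. The following is a minimal sketch in Python; it is not part of the original analysis pipeline, and the quadrant labels, function name, and condition strings are our own illustrative choices.

# Illustrative sketch (not the authors' code): label the intact face at a given
# quadrant on a given trial according to the six-condition scheme of Fig. 2.
# Quadrants: 'UL', 'UR', 'LL', 'LR'; diagonals pair UL/LR and UR/LL.

DIAGONAL = {"UL": "UL-LR", "LR": "UL-LR", "UR": "UR-LL", "LL": "UR-LL"}

def label_condition(quadrant, attended_diagonal, fearful_quadrant):
    """Return the condition label for the intact face shown at `quadrant`.

    attended_diagonal : 'UL-LR' or 'UR-LL' (the diagonal cued by the fixation cross)
    fearful_quadrant  : quadrant containing the fearful face, or None if both
                        intact faces in the display are neutral
    """
    attended = DIAGONAL[quadrant] == attended_diagonal
    if fearful_quadrant == quadrant:  # this face itself is fearful
        return "attended fearful" if attended else "unattended fearful"
    if fearful_quadrant is None:      # only neutral faces in the display
        return ("attended neutral (neutral elsewhere)" if attended
                else "unattended neutral (neutral elsewhere)")
    # this face is neutral, but a fearful face is present elsewhere in the display
    return ("attended neutral (fearful elsewhere)" if attended
            else "unattended neutral (fearful elsewhere)")

# Panels (A) and (F) of Fig. 2, evaluated for the upper-right stimulus location:
print(label_condition("UR", "UR-LL", fearful_quadrant="UR"))  # -> attended fearful
print(label_condition("UR", "UL-LR", fearful_quadrant="LR"))  # -> unattended neutral (fearful elsewhere)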
Thus, we could determine the responses to face stimuli in each retinotopic stimulus representation separately as a function of whether the stimulus was attended or not, and whether it was fearful or neutral. Moreover, retinotopic responses to neutral face stimuli could be analyzed according to whether the other face in the display was neutral or fearful. This led to six conditions for the analysis of the retinotopic fMRI responses to face stimuli (see Fig. 2): (1) attended emotional face; (2) unattended emotional face; (3) attended neutral face with another neutral face in the display; (4) unattended neutral face with another neutral face in the display; (5) attended neutral face with an emotional face in the display; and (6) unattended neutral face with an emotional face in the display. While the focus of our study was on the effects of emotion and attention on retinotopic visual processing of faces, the simultaneous presentation of scrambled faces allowed us to verify the effectiveness
Fig. 1. Experimental design. The direction of spatial attention was blocked over four trials and randomized over twelve blocks. Prior to each block, one of the two diagonal bars of the fixation cross darkened to indicate which diagonal stimulus pair the participant should attend to while keeping central fixation. Thus, throughout blocks participants covertly attended either to the stimulus location pair in the upper left and lower right visual field quadrant, or to the pair in the upper right and the lower left quadrant. Each block contained 4 trials with target displays that were presented for 250 ms with a randomly jittered interstimulus interval of 3–9 s duration. The target displays consisted of four stimuli presented simultaneously in the four visual quadrants (see Fig. 2). Upon appearance of the target display, participants performed a gender-discrimination task on the intact face of the attended diagonal stimulus pair. After each block, a 10 s resting period followed with the diagonal gray fixation cross and the four gray placeholders on display.
of our manipulation of endogenous spatial attention with stimuli involving no emotional or configural facial information (i.e., features that were expected to interact with attention effects); and also to assess possible effects of emotional information on stimuli that were task-irrelevant. It should be noted, however, that the direct comparison of emotion effects on task-relevant (face) and task-irrelevant (scrambled face) stimuli was not critical to our research question.

fMRI data acquisition

Images were acquired on a TRIO 3T scanner (Siemens, Erlangen, Germany) equipped with a 12-channel head coil. Functional images were obtained with a gradient echo-planar imaging sequence (repetition time = 2.26 s; echo time = 25 ms). Whole-brain coverage was obtained with 38 contiguous slices (voxel size = 3 × 3 × 3 mm). The main experiment consisted of 6 runs of 195 volumes each. Additionally, we acquired a T1-weighted structural image (MPRAGE, voxel size 1 × 1 × 1 mm) and several functional localizer scans. For retinotopic meridian mapping, we performed two scans of 244 volumes each, during which participants viewed contrast-reversing (4 Hz) checkerboard stimuli that covered either the horizontal or the vertical meridian and were presented in 22.6 s blocks interleaved with 11.3 s rest periods. To functionally localize the retinotopic representations of the face stimuli in the main experiment, we performed two retinotopic region-of-interest (ROI) localizer scans of 165 volumes each, during which a contrast-reversing (4 Hz) black-and-white oval checkerboard (visual angle 4.1° × 6.2°) was presented in alternating 11.3 s blocks in the four visual quadrants (4.7° eccentricity). Finally, to identify the fusiform face area (FFA) in the mid-fusiform gyrus, we performed a
standard localizer scan of 204 volumes during which black-and-white face and house stimuli were presented foveally in alternating 13.6 s blocks, interleaved with 9.0 s rest periods. It should be noted that the existence of the FFA as a brain region specialized in face processing is controversial (e.g., Bukach et al., 2006). We use the term FFA pragmatically as referring to a region in the mid-fusiform gyrus functionally defined by greater fMRI responses to faces compared to other objects (Kanwisher and Yovel, 2006). Eye movements were monitored online during the main experiment using an infrared video eye tracker with a sampling rate of 60 Hz (SMI IVIEW X™ MRI-LR, SensoMotoric Instruments, Teltow, Germany), custom-adapted for use in the MRI scanner environment, to ensure that participants were able to hold fixation.

Analysis

Imaging data

Data were analyzed using SPM5 (www.fil.ion.ucl.ac.uk/spm). Preprocessing was performed following standard methods implemented in SPM5 (Ashburner and Friston, 1997). After discarding the first four images of each run to allow for magnetic saturation effects, the remaining images were slice-time-corrected and realigned to the first image. The structural T1 image was co-registered to the functional scans, and all images were normalized into standard MNI space. Data from the main experiment and from the FFA localizer were spatially smoothed with a 5 mm full-width at half maximum (FWHM) Gaussian kernel for analyses of FFA and amygdala responses. For the analysis of responses in retinotopic areas V1–V3, unsmoothed data were used to retain the fine-grained retinotopic information in these areas and to avoid ‘cross-talk’ from patches of cortex in opposite locations in the walls of sulci.
Fig. 2. Experimental conditions. In each target display, two intact face images and two scrambled face images were shown, and an intact face was always paired with a scrambled face on one diagonal of the display. Participants were cued by the black arrow at fixation to attend to one diagonal stimulus pair. There were six possible conditions for the analysis of responses to face stimuli in retinotopic cortex: (A) attended emotional face; (B) unattended emotional face; (C) attended neutral face with another neutral face in the display; (D) unattended neutral face with another neutral face in the display; (E) attended neutral face with an emotional face in the display; (F) unattended neutral face with an emotional face in the display. The dotted gray line in each panel A–F illustrates which stimulus pair was attended. Here the six possible conditions for the stimulus in the right-upper quadrant (surrounded by a black line for illustration) of the visual field are shown. Note that these six conditions could occur in all four stimulus locations, and fMRI responses were collapsed accordingly across the four corresponding retinotopic stimulus representations in each visual area V1, V2 and V3 for statistical analysis.
Data from retinotopic meridian mapping and from the retinotopic ROI localizer were smoothed using a small (3 mm) FWHM Gaussian kernel in order to facilitate the delineation, on cortical flatmaps, of the borders between areas V1–V3 and of the ROIs in these areas, respectively.

We analyzed the data from the main experiment in a two-stage procedure. In a first step, each participant's data were analyzed voxelwise using the general linear model (GLM). The model included separate regressors for each possible trial type in each quadrant of the display to enable separate analysis of activations in retinotopic cortical representations of each stimulus location. For each of the four stimulus locations, the six possible trial types were modeled separately (see Stimuli and design section and Fig. 2). In addition, the respective version of each condition with scrambled images was modeled separately for each stimulus location. Regressors for each experimental condition were modeled as stick functions convolved with the canonical hemodynamic response function implemented in SPM5. Motion parameters defined by the realignment procedure were added to the model as six separate regressors. Parameter estimates for each regressor at every voxel were determined using multiple linear regression and scaled to the global mean signal of each run across conditions and voxels. The parameter estimates therefore represent signal change relative to the global brain signal (% global brain signal). We removed low-frequency fluctuations with a high-pass filter with a cutoff at 128 s and used an autoregressive model of order one (AR(1) + white noise) to correct for temporal autocorrelation in the data. Data from the retinotopic meridian mapping scans, the retinotopic ROI localizer, and the FFA localizer were analyzed analogously using the GLM approach implemented in SPM5. The conditions of interest were modeled as boxcar regressors convolved with the canonical hemodynamic response function.
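As a rough illustration of this first-level modeling step, the sketch below builds one event regressor as a stick function at the trial onsets convolved with a canonical double-gamma hemodynamic response function. This is a simplified Python stand-in for the SPM5 implementation, not the original code; the double-gamma parameters and helper names are our own assumptions.

import numpy as np
from scipy.stats import gamma

TR = 2.26       # repetition time in seconds, as in the main experiment
N_SCANS = 191   # 195 volumes per run minus the four discarded at the start

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (standard SPM-like parameters, assumed here)."""
    t = np.arange(0.0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive response peaking around 5 s
    undershoot = gamma.pdf(t, 16)    # late undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()

def event_regressor(onsets_s, tr=TR, n_scans=N_SCANS):
    """Stick function at the given onsets (in seconds) convolved with the canonical HRF."""
    sticks = np.zeros(n_scans)
    sticks[np.round(np.asarray(onsets_s) / tr).astype(int)] = 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

# e.g. one regressor for one trial type in one quadrant, with onsets taken from the run's log file
regressor = event_regressor([12.4, 48.9, 101.3, 230.0])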
In a second step, parameter estimates for each experimental condition were extracted from the ROIs defined on the basis of the independent functional localizer scans in each participant individually. ROIs in visual areas V1, V2, and V3 were defined using data from retinotopic meridian mapping and the retinotopic ROI localizer. First, meridian data were used to delineate the ventral and dorsal portions of areas V1, V2, and V3 of each hemisphere after segmentation and cortical flattening following standard methods using Freesurfer (http://surfer.nmr.mgh.harvard.edu). Within each of these retinotopic areas, the representations of each of the four stimulus locations in the main experiment were delineated in V1, V2 and V3 on the basis of activations from the independent retinotopic ROI localizer thresholded at p < .01, uncorrected for multiple comparisons. FFA ROIs were defined as contiguous voxels in the fusiform gyrus that responded significantly more to faces than to houses in the FFA localizer scan at a threshold of p < .001, uncorrected, and were delineated manually using MRIcron (http://www.sph.sc.edu/comd/rorden/MRicron/). In view of its role in emotional face processing (Phelps and LeDoux, 2005), and even though it was not our primary focus, we also evaluated fMRI responses in the amygdalae, using standard bilateral amygdala ROIs derived from the WFU Pick atlas (http://www.fmri.wfubmc.edu/cms/software). Finally, parameter estimates from the main experiment were extracted and averaged from those voxels within the retinotopic, FFA, and amygdala ROIs that were generally responsive to our stimulus paradigm, as determined with a t-test for the main effect of all stimulus presentations at a liberal threshold of p < .05, uncorrected. For the assessment of activations in the retinotopic cortical stimulus representations, parameter estimates were sorted according to the 6 possible conditions (Fig. 2) for intact faces and the corresponding conditions for scrambled images, for each of the four stimulus locations. Note that the six conditions occurred at all four stimulus locations. For the analysis of each of the six conditions, parameter estimates were therefore pooled across the four corresponding retinotopic stimulus representations in each visual area V1, V2 and V3. These pooled parameter estimates thus provided for each condition a composite measure of the fMRI signal irrespective of visual field quadrant, yet preserved the retinotopic specificity of responses to a given stimulus relative to the remaining three stimuli in each display. This procedure resulted in one average parameter estimate per condition and participant for each visual area V1, V2, and V3. In contrast, for the analysis of activations in non-retinotopic regions, i.e., FFA and amygdalae, parameter estimates could not be separated according to retinotopic stimulus location and were thus collapsed into the three main conditions emotion attended, no emotion, and emotion unattended (see Stimuli and design section). For statistical inference at the group level, parameter estimates from the ROIs were subjected to one-, two-, or three-way repeated-measures ANOVA as appropriate. Effects were considered significant at p < .05. Greenhouse–Geisser correction was applied in cases of significant (p < .05) sphericity violation as evidenced by Mauchly's sphericity test. For further exploration of significant results from the ANOVAs, planned post-hoc t-tests were used, with the resulting p values Bonferroni-corrected for the number of post-hoc comparisons performed subsequent to each ANOVA, i.e., 6 comparisons for the ANOVA testing for effects in the FFA and 9 comparisons for the ANOVA testing for effects in retinotopic cortex.
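To illustrate this second, group-level step, the sketch below shows one way the pooled parameter estimates could be organized and tested. It uses Python with pandas, statsmodels and scipy rather than the software used in the study; the file and column names are purely illustrative, and note that AnovaRM does not apply the Greenhouse–Geisser correction, which would need to be computed separately.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# One row per participant x area x emotion condition x attention condition, with the
# parameter estimate ('beta') already averaged over the four retinotopic stimulus
# representations. The file name and column names are hypothetical.
df = pd.read_csv("pooled_parameter_estimates.csv")

# 3 x 3 x 2 repeated-measures ANOVA: region x emotion x attention
anova = AnovaRM(df, depvar="beta", subject="subject",
                within=["area", "emotion", "attention"]).fit()
print(anova.anova_table)

# Planned post-hoc paired t-tests (attended vs. unattended) per area and emotion
# condition, Bonferroni-corrected for the 9 comparisons in retinotopic cortex
N_COMPARISONS = 9
for (area, emotion), grp in df.groupby(["area", "emotion"]):
    att = grp[grp.attention == "attended"].sort_values("subject")["beta"].to_numpy()
    una = grp[grp.attention == "unattended"].sort_values("subject")["beta"].to_numpy()
    t, p = stats.ttest_rel(att, una)
    print(f"{area} / {emotion}: t = {t:.2f}, Bonferroni-corrected p = {min(p * N_COMPARISONS, 1.0):.3f}")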
Behavioral data

Performance was measured in terms of reaction time and accuracy and was analyzed for the three main conditions: emotion attended, no emotion, and emotion unattended. Statistical inference was performed using one-way repeated-measures ANOVA followed by planned post-hoc t-tests with Bonferroni correction for 3 comparisons.

Results

Behavioral performance

Overall mean accuracy (expressed as percentage of correct responses) on the gender-discrimination task was 83% ± 3% (SEM).
Overall mean reaction time was 1071 ms ± 75 ms (SEM) (Fig. 3). Trials were sorted according to the three main conditions emotion attended, no emotion, and emotion unattended. A one-way repeated-measures ANOVA revealed differences between these three main conditions in accuracy (F(1,11) = 7.58, p < .005, η2 = 0.41) but not in reaction time (F(1,11) < 1). Planned post-hoc t-tests verified that compared to the no emotion condition (86% ± 2% SEM), accuracy was significantly lower in the emotion attended condition (81% ± 3% SEM; t(11) = 3.95, p < .01, Bonferroni-corrected) and reduced at trend level in the emotion unattended condition (82% ± 3% SEM; t(11) = 2.59, p = .075, Bonferroni-corrected).

Fusiform face area

fMRI responses in the FFA were analyzed using a 2 × 3 repeated-measures ANOVA with the factors hemisphere (right and left FFA) and emotion (according to the three main conditions emotion attended, no emotion, and emotion unattended; see Figs. 4A and B). There was a significant main effect of emotion (F(2,22) = 3.4, p < .05, η2 = 0.25) but no main effect of hemisphere (F(1,11) < 1) and no significant hemisphere-by-emotion interaction (F(2,22) = 1.2, p > .1). As emotion-related responses were previously reported for the right fusiform gyrus only (Vuilleumier et al., 2001), we performed separate one-way ANOVAs for the right and left FFA and indeed found that responses in the right FFA were robustly modulated by emotion (F(1,11) = 7.1, p < .005, η2 = 0.39). No significant effect was found in the left FFA alone (F(1,11) < 1). Post-hoc t-tests revealed a significant difference in evoked responses for emotion attended versus no emotion (t(11) = 3.27, p < .05, Bonferroni-corrected) but no significant effect for emotion attended versus emotion unattended (t(11) = 2.13, p > .1, Bonferroni-corrected) or emotion unattended versus no emotion (t(11) = 1.98, p > .1, Bonferroni-corrected).
Fig. 3. Behavioral results. Trials were sorted according to whether they contained an emotional face or not, and whether the face was in an attended location of the visual field or not. This resulted in three main groups of trials: 1) emotional face in an attended location (emo attended), 2) neutral faces only (no emo) and 3) emotional face in an unattended location (emo unattended). Significant differences between these three conditions were revealed for accuracy (A) but not for reaction time (B). Accuracy was significantly lower for both the emotion attended and emotion unattended conditions. *p < .05, (*)p < .1, planned post-hoc t-tests, Bonferroni-corrected. Error bars denote standard errors corrected for between-subject variability (Cousineau, 2007).
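The within-subject error bars referred to in this and the following figure legends (Cousineau, 2007) remove between-subject variability before the standard error is computed, by centering each participant's data on the grand mean. A minimal sketch, with the data layout (participants × conditions) assumed for illustration:

import numpy as np

def cousineau_sem(data):
    """Within-subject SEM (Cousineau, 2007): subtract each participant's mean,
    add back the grand mean, then compute the SEM per condition.

    data : array of shape (n_participants, n_conditions)
    """
    data = np.asarray(data, dtype=float)
    normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
    return normalized.std(axis=0, ddof=1) / np.sqrt(data.shape[0])

# e.g. accuracy per participant (rows) in the three main conditions (columns):
# error_bars = cousineau_sem(accuracy_matrix)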
Fig. 4. Average parameter estimates of brain activity during the main experiment extracted from the right FFA, the left FFA, and retinotopic cortex. fMRI signals were analyzed according to the three main conditions (emotion attended, no emotion, and emotion unattended), i.e., according to whether trials contained an emotional face or not and whether that face was in an attended location of the visual field. There were significant differences between the three main conditions in the right FFA (p < 0.01, one-way repeated-measures ANOVA) but not in the left FFA. There were no significant general response differences between these three conditions in retinotopic visual cortex (V1–V3) when analyzed in analogy to the FFA. *p < .05, planned post-hoc t-tests, Bonferroni-corrected. Error bars denote standard errors corrected for between-subject variability (Cousineau, 2007).
Amygdalae

fMRI signals in the amygdalae were generally weak and insufficient for our ROI analyses (i.e., no significant voxels within the amygdala ROI at p < .05, uncorrected, using the contrast all trials vs. baseline) in three out of 12 participants in the left amygdala and another three participants in the right amygdala. Analysis of the fMRI responses in the remaining nine participants for the left and right amygdala, respectively, revealed no significant differences between the three main conditions emotion attended, no emotion, and emotion unattended (F(1,8) < 1, one-way repeated-measures ANOVAs).

Retinotopic visual cortex

The stimulus geometry with one stimulus in each quadrant of the visual field allowed us to separately analyze the fMRI responses to each of the four stimuli in their respective retinotopic cortical representations according to the six conditions shown in Fig. 2. The retinotopic stimulus representations in visual areas V1, V2, and V3 were determined using standard retinotopic meridian mapping in combination with a functional localizer scan mapping the four stimulus locations. All results for retinotopic visual cortex reported henceforth refer to these mapped stimulus representations.

General responses in areas V1 to V3

In a first step, we aimed to investigate any general effects of attention and emotion on retinotopic visual processing as, for example, caused by arousal. To this end, we assessed the general responses in retinotopic stimulus representations to the presence of attended and unattended emotional stimuli in analogy with the FFA analysis (see above). Thus, average responses collapsed across all four stimulus representations in areas V1 to V3 were analyzed according to the three main conditions emotion attended, no emotion and emotion unattended (Fig. 4C). A 3 × 3 repeated-measures ANOVA with the factors region (V1, V2, V3) and emotion (emotion attended, no emotion and emotion unattended) showed a main effect of region (F(2,22) = 4.2, p < .05, η2 = 0.28) but no main effect of emotion (F(2,22) = 2.2, p > .1) and no region-by-emotion interaction (F(2,22) = 1.5, p > .1). This indicates that the presence of an emotional stimulus did not have a general effect on early visual processing, e.g., due to general arousal. It is noteworthy that this response pattern is clearly different from that in the FFA, where significant differences between the three main conditions were observed.

Retinotopically specific responses in areas V1 to V3

Next, we determined the responses to intact faces in each retinotopic stimulus representation separately as a function of whether the face was attended or not, and whether it was fearful or neutral. In addition, responses to neutral faces were analyzed separately for trials
that contained a fearful face in another location and those that contained only neutral faces. In other words, our design allowed us to analyze how the effect of directing endogenous spatial attention to a stimulus was affected by the emotional valence of that stimulus itself, and by the emotional valence of the other face stimulus appearing in the same display. This resulted in the six possible conditions shown in Fig. 2. Of note, these six conditions could occur in all four stimulus locations, and fMRI responses were pooled accordingly across the four corresponding retinotopic stimulus representations in each visual area V1, V2, and V3 for statistical analysis. Statistical analysis was performed using a 3 × 3 × 2 factorial repeated-measures ANOVA with the factors region (V1, V2, V3), emotion (fearful face, neutral face, and neutral face with a fearful face in the display), and attention (attended vs. unattended), and planned post-hoc t-tests to assess the effect of attention in the three emotion conditions separately. The results are summarized in Fig. 5. As expected, we found a significant main effect of attention (F(1,11) = 12.1, p < .005, η2 = 0.52). There was also a significant main effect of region (F(2,22) = 4.2, p < .05, η2 = 0.28), but no main effect of emotion (F(2,22) < 1). Importantly, however, there was a significant attention-by-emotion interaction (F(2,22) = 4.6, p < .05, η2 = 0.29). We also found significant region-by-attention (F(2,22) = 6.3, p < .01, η2 = 0.36) and region-by-emotion interactions (F(4,44) = 3.2, p < .05, η2 = 0.23), but no significant three-way region-by-attention-by-emotion interaction (F(4,44) = 2.0, p > .1). As the observed attention-by-emotion interaction was central to our research question, we further explored this effect with planned post-hoc t-tests for attended vs. unattended faces in all three emotion conditions, separately for areas V1, V2, and V3. There were no significant differences between attended and unattended emotional faces in any of the visual areas V1 to V3: V1 (t(11) = 0.08, p > .1), V2 (t(11) = −0.27, p > .1), and V3 (t(11) = 0.89, p > .1). For neutral faces in the presence of another neutral face, there was no significant effect of attention in V1 (t(11) = −0.37, p > .1). While the analogous comparison revealed only a trend towards an attention effect in V2 that did not survive correction for multiple comparisons (t(11) = 2.79, p = .15, Bonferroni-corrected), a significant attention effect was observed in V3 (t(11) = 3.79, p < .05, Bonferroni-corrected). Finally, and in stark contrast with the complete absence of an attention effect on the processing of emotional faces, there was a robust effect of attention on retinotopic processing of neutral faces when an emotional face was present in the display. This effect was significant throughout all three retinotopic visual areas analyzed (V1: t(11) = 3.98, p < .05; V2: t(11) = 3.75, p < .05; V3: t(11) = 6.95, p < .01; all Bonferroni-corrected). Taken together, the pattern of responses was similar in all visual areas V1 to V3 (Figs. 5A–C), with the smallest or no effect of attention for emotional faces and the largest effect of attention for neutral faces in the presence of an emotional face in the display. In other words, the enhancing effect of endogenous spatial attention, which was task-relevant in our paradigm, was abolished in retinotopic representations of fearful emotional faces.
In contrast, the mere presence of a fearful face in the display augmented the effect of attention in representations of neutral faces. It is particularly noteworthy that even in V1, where responses to attended and unattended stimuli in the absence of emotional stimuli were statistically indistinguishable, a robust attentional enhancement of neutral face processing emerged in the presence of an emotional face.

Processing of scrambled face stimuli

Having established an interaction of task-irrelevant emotional information with the effects of attention on retinotopic visual processing of face stimuli, we next asked whether this effect was confined to processing of intact face stimuli. In our paradigm, each intact face was paired with a scrambled face on the attended and unattended diagonals, respectively. We therefore assessed the effect of emotion in attended and unattended visual field locations on the
Fig. 5. Parameter estimates of brain activity for each condition during the main experiment, extracted from each stimulus representation as determined in separate localizer scans, averaged across 12 participants in V1 (A), V2 (B), and V3 (C). Here responses to intact faces in each retinotopic stimulus representation were analyzed separately as a function of the six possible conditions depicted in Fig. 2. The six bars plotted in each panel represent these six possible conditions. (D) Parameter estimates pooled across areas V1–V3 for intact faces and (E) for scrambled faces. Responses to scrambled faces were qualitatively similar in areas V1, V2, and V3, and are therefore not shown in separate plots. *p < .05, **p < .01, planned post-hoc t-tests, Bonferroni-corrected. Error bars denote standard errors corrected for between-subject variability (Cousineau, 2007).
processing of scrambled faces, analogous to the analysis of intact face stimuli. For example, corresponding to the analysis of retinotopic responses to intact emotional faces, we analyzed fMRI responses in retinotopic representations of scrambled faces that appeared either in the same attended diagonal as an emotional face, or in the unattended diagonal together with an emotional face. In the example given in Fig. 2, the scrambled picture that corresponds to each of the six possible conditions for intact faces in the upper right quadrant would be the one in the lower left quadrant. We report the results of a 3 × 3 × 2 (region × emotion × attention) repeated-measures ANOVA (as above for responses to intact faces) for responses to scrambled face stimuli. There were clear main effects of region (F(2,22) = 17.1, p < .001, η2 = 0.61) and attention (F(1,11) = 10.8, p < .01, η2 = 0.49), but no main effect of emotion (F(2,22) < 1). There was a significant region-by-attention interaction (F(2,22) = 7.0, p < .01, η2 = 0.39), but no region-by-emotion interaction (F(4,44) < 1) and, crucially, no emotion-by-attention interaction (F(2,22) < 1). That is, while the significant main effect of attention confirmed that our manipulation of endogenous spatial attention had a general effect on retinotopic visual processing, the interaction of emotion with the effect of endogenous spatial attention in retinotopic visual cortex was limited to the processing of intact face stimuli. As for intact faces, there was no
significant region-by-attention-by-emotion interaction for responses to scrambled faces (F(4,44) < 1).

Intact vs. scrambled faces

While our research question was not primarily concerned with differences in the processing of face and non-face stimuli in retinotopic visual cortex, we nevertheless performed a tentative 2 × 3 × 3 × 2 factorial repeated-measures ANOVA with the factors stimulus type (intact vs. scrambled), region (V1, V2, V3), emotion (fearful face, neutral face, and neutral face with a fearful face in the display), and attention (attended vs. unattended) to directly compare responses to intact and scrambled faces. There was a significant main effect of stimulus type (F(1,11) = 8.8, p < .013, η2 = 0.44), which was due to overall stronger responses to face compared to scrambled stimuli (see Figs. 5D and E). There were again main effects of region (F(2,22) = 11.0, p < .001, η2 = 0.50) and attention (F(1,11) = 12.1, p < .01, η2 = 0.53), but no main effect of emotion (F(2,22) < 1). There was again a significant region-by-attention interaction (F(2,22) = 9.8, p < .001, η2 = 0.47), and trends towards significant interaction effects for region × emotion (F(4,44) = 2.0, p = .11, η2 = 0.16), stimulus type × attention (F(1,11) = 3.8, p = .08, η2 = 0.26), emotion × attention (F(2,22) = 3.3, p = .06, η2 = 0.23), stimulus type × emotion × attention (F(2,22) = 2.5, p = .11,
η2 = 0.18), and stimulus type × region × emotion × attention (F(4,44) = 2.4, p = .07, η2 = 0.18). All other interactions were non-significant (F < 1). We should emphasize, however, that our experiment was primarily designed to assess the effects and interactions of attended task-relevant and unattended task-irrelevant intact face stimuli. The results of this latter analysis should therefore be interpreted with caution, as the direct comparison of intact and scrambled faces is hampered by the fact that scrambled face stimuli were either attended or unattended but were never task-relevant.

Attentional modulation in areas V1 to V3

It is well established that the degree of attentional modulation is rather small in V1 but increases at higher cortical levels of retinotopic processing (Kastner et al., 1999; Luck et al., 1997; O'Connor et al., 2002). To formally assess attentional modulation in areas V1, V2 and V3 and to what extent it was influenced by emotion, we calculated the attentional modulation index (AMI = (attended − unattended) / (attended + unattended); see Kastner et al., 1999) for each visual area and for each emotion condition separately (Fig. 6). Greater AMI values denote stronger modulation by attention. A 3 × 3 repeated-measures ANOVA with the factors region (V1, V2, and V3) and emotion (fearful face, neutral face, and neutral face with a fearful face in the display) showed significant main effects of region (F(2,22) = 9.75, p < .005, η2 = 0.47) and emotion (F(2,22) = 4.57, p < .05, η2 = 0.29) on the AMI, but no significant interaction (F(4,44) = 2.10, p > .1). Thus, the degree of attentional modulation increased from V1 to V3 in line with previous findings (Kastner et al., 1999; Luck et al., 1997; O'Connor et al., 2002), while the effect of emotion on endogenous spatial attentional modulation was similar at the three levels of cortical processing analyzed.

Discussion

We found differential effects of emotional information on attentional modulation in low-level retinotopic visual cortex and functionally specialized higher-level visual cortex. The latter, exemplified by the FFA, showed additive effects of spatial attention and fearful emotional expression. In contrast, fearful expression did not further enhance the effect of attention in retinotopic cortex, while responses to unattended fearful stimuli were at the level of those to attended stimuli. Thus, there was no difference between local retinotopic responses to attended and unattended fearful faces. Strikingly, however, fearful faces exerted a strong remote effect on retinotopic processing of neutral faces: the presence of a fearful stimulus in the display augmented the effect of spatial attention on processing of neutral face stimuli.
Fig. 6. Attentional modulation index (AMI = (attended − unattended) / (attended + unattended); greater AMI values denote stronger modulation by attention, see Kastner et al., 1999) for each visual area and for each emotion condition separately. The degree of attentional modulation increased from V1 to V3, while the effect of emotion on endogenous spatial attentional modulation was similar at the three levels of cortical processing analyzed.
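For reference, the AMI is a one-line computation on the extracted parameter estimates. A minimal sketch (our own illustrative code, applied separately per participant, visual area, and emotion condition):

import numpy as np

def attentional_modulation_index(attended, unattended):
    """AMI = (attended - unattended) / (attended + unattended); larger values
    indicate stronger modulation by attention (Kastner et al., 1999)."""
    attended = np.asarray(attended, dtype=float)
    unattended = np.asarray(unattended, dtype=float)
    return (attended - unattended) / (attended + unattended)

# e.g. ami_v1_neutral = attentional_modulation_index(beta_attended, beta_unattended)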
Even in V1, where responses to attended and unattended stimuli in the absence of emotional stimuli were statistically indistinguishable, a robust attentional enhancement of neutral face processing emerged in the presence of an emotional face elsewhere in the display. The additive emotion–attention effect observed in the right FFA replicates previous findings (Vuilleumier et al., 2001) that suggested independent mechanisms underlying these phenomena in high-level visual cortex. The lack of an overall effect of fearful expression on fMRI responses in retinotopic cortex is in accord with previous work that failed to find an enhancing effect of fearful expression on early visual cortex activity (Pourtois et al., 2006), which was, however, not analyzed in retinotopic detail in this earlier study. A recent study used retinotopic mapping to show that responses to fear-conditioned gratings were increased in V1–V4 (Padmala and Pessoa, 2008). Similar effects are also observed with conditioned faces (Damaraju et al., 2009). This latter study also reported greater retinotopic cortical responses to task-irrelevant fearful faces than to neutral faces independent of conditioning, in line with our result that activity in retinotopic representations of unattended fearful faces was elevated to the level of attended stimuli. Our current study goes beyond these previous reports in at least two respects: First, by explicitly modulating endogenous spatial attention, we could show that enhancement of local retinotopic processing by fearful expression is limited to task-irrelevant, unattended fearful faces, but is not detectable for attended fearful faces. Second, our stimulus design allowed us to assess remote effects of fearful faces on the retinotopic processing of neutral faces, showing that the modulatory effect of spatial attention to neutral faces is augmented by the mere presence of a fearful face at an unattended location elsewhere in the display. Several previous studies focused on the effects of emotional information as an exogenous attention cue (i.e., attentional capture), contrasting with our investigation of its interactions with endogenous spatial attention. Behaviorally, reaction times and contrast sensitivity are improved for target stimuli that are preceded by threat-signaling stimuli serving as exogenous cues in corresponding visual field locations (Bradley and Lang, 2000; Mogg et al., 1994; Phelps et al., 2006). Accordingly, fMRI and electroencephalography show that fear cues enhance processing of subsequently presented neutral targets (Pourtois et al., 2004, 2006); and that such emotional cueing of spatial attention involves parietal regions previously implicated in endogenous spatial attention (Pourtois et al., 2006). Moreover, a recent magnetoencephalographic study showed that task-irrelevant fearful faces elicit an N2pc component, a signal that is known to reflect attentional focusing in visual search (Fenker et al., 2010). Thus, the ability of threat-signaling stimuli to capture and direct spatial attention is a possible mechanism underlying the local enhancement of retinotopic cortex activity by unattended fearful faces observed in our study and in previous work (Damaraju et al., 2009). In our study, attention to a specific location had no direct local effect on early visual processing of emotional information at that location. However, local processing of unattended emotional information provoked remote effects on the processing of attended neutral stimuli in different regions of the visual field.
This finding that the enhancing effect of attention on the processing of neutral stimuli was augmented by the presence of remote emotional information can be conceptualized as a compensatory mechanism. Exogenous attention through attentional capture by salient stimuli interferes with endogenous attention (Jonides and Irwin, 1981). Such interference is even more pronounced in high-anxiety individuals (Moriya and Tanno, 2009), suggesting that reactivity to task-unrelated salient stimuli may support an individual's readiness to react to potential threat. In our study, reduced accuracy on gender-identification of neutral faces in the presence of a fearful face suggests such interference through attentional capture. The augmented attention effect in the retinotopic representation of the task-relevant neutral face could thus reflect the compensatory allocation of processing resources in order to maintain task performance despite interference
from the attention-capturing threat signal. In addition, a decrease in the allocation of processing resources to unattended neutral faces in the presence of emotional faces could also have contributed to the observed effect. Importantly, our finding that remote enhancement of retinotopic processing was detectable only for intact but not for scrambled stimuli in corresponding attended locations supports this interpretation. Although a scrambled face stimulus was always presented along with an intact face in the attended diagonal, only the intact face was relevant for the gender-identification task. The notion that retinotopic processing of task-relevant stimuli is enhanced in situations of additional perceptual challenge is also in line with the previous finding that responses to task-relevant stimuli in retinotopic cortex are increased by the concurrent performance of a demanding task (Pinsk et al., 2004). Several mechanisms could underlie the remote enhancing effect of emotional stimuli on attentional modulation of responses to neutral stimuli. It could be directly mediated by feedback signals from structures involved in fearful face processing (Amaral et al., 2003; Vuilleumier, 2005). Obvious candidate regions to mediate this effect are the amygdalae and the FFA, but it is difficult to explain how feedback signals from non-retinotopically organized structures could selectively target processing of attended neutral stimuli. Furthermore, contrary to retinotopic cortex, the highest activity level in the FFA was evoked by attended emotional stimuli. Thus, more likely, processes that mediate the effects of endogenous attention on visual processing and that involve higher-order topographic regions in frontal and parietal cortex (Silver and Kastner, 2009) also mediate the effects of emotion on attentional modulation in retinotopic cortex. Nevertheless, while our analyses of attentional modulation replicated the well-established progressive increase of attentional effects from lower- to higher-level retinotopic areas (Kastner et al., 1999; Luck et al., 1997; O'Connor et al., 2002), the effects of emotion on attentional modulation were similar in V1 through V3. The mechanisms underlying the emotion effects on attentional modulation in retinotopic visual cortex are thus probably not identical to the mechanisms of endogenous spatial attention. Rather, the latter may be under the modulatory influence of brain regions involved in threat-signal detection such as the amygdalae (Phelps and LeDoux, 2005), and regulate the redistribution of processing resources across the visual field according to the challenge posed by task-irrelevant emotional information. Because we did not find significant activity differences in the amygdalae, we are not able to decide whether the observed retinotopic effects of emotional information might originate from the amygdalae, either directly or by modulation of attentional mechanisms. Most likely, this relates to suboptimal signal in the medial temporal lobe, as our scanning sequence focused on visual cortex, and to the generally poor signal-to-noise ratio in the amygdalae (LaBar et al., 2001). Moreover, the relatively attention-demanding task used (mean correct responses ~85%) and peripheral stimulus presentation may have reduced amygdala activation (Eimer et al., 2003; Holmes et al., 2003; Wright and Liu, 2006).
However, the significant effects of emotional expression observed in visual cortex argue against the lack of significant activity modulation in the amygdalae being task- or stimulus-related in this case. While emotional faces are useful for the investigation of emotion processing because of their ecological validity, one might be concerned that they might not optimally drive responses in retinotopic cortex. In fact, we found robust effects using these stimuli, with regard to both attentional and emotional modulation, which speaks against the concern that our stimulus paradigm may lack sensitivity for effects in retinotopic cortex. Another concern could be related to possible systematic low-level stimulus differences between fearful and neutral faces (Whalen et al., 2004). However, the absence of any main effect of fearful vs. neutral faces in retinotopic areas, together with inspection of the effect sizes, e.g., in V1 (Fig. 5A), renders the possibility
that any of the effects observed may be related to low-level stimulus differences highly unlikely. In conclusion, we propose that the differential effects of emotional information on attentional modulation in low-level retinotopic and high-level functionally specialized visual cortex subserve different adaptive functions. Additive effects of emotion and attention in FFA may be adaptive in that emotional expression could signal the requirement of additional face-related processing resources over and above those allocated by endogenous spatial attention. In contrast to such a feature- or object-related tuning of information processing in higher-order visual cortex, the distribution of processing resources in retinotopic cortex seems to be governed primarily by the spatial relationship of stimuli potentially relevant for, or interfering with, the current task. The redistribution processes observed in our study may be adaptive in that they serve the purpose of allocating processing resources to locations of task-irrelevant threat-signaling stimuli while at the same time increasing resources for task-relevant stimuli as required for the maintenance of goal-directed behavior.

Acknowledgments

AG, MR, and PS are supported by the Deutsche Forschungsgemeinschaft (Emmy Noether Program, STE1430/2-1). GR is supported by the Wellcome Trust. CK holds a Feodor Lynen Fellowship from the Alexander von Humboldt Foundation.

References

Amaral, D.G., Behniea, H., Kelly, J.L., 2003. Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience 118 (4), 1099–1120. Ashburner, J., Friston, K., 1997. Multimodal image coregistration and partitioning—a unified framework. Neuroimage 6 (3), 209–217. Bradley, M.M., Lang, P.J., 2000. Measuring emotion: behavior, feeling, and physiology. Cogn. Neurosci. Emotion 25, 49–59. Bukach, C.M., Gauthier, I., Tarr, M.J., 2006. Beyond faces and modularity: the power of an expertise framework. Trends Cogn. Sci. 10, 159–166. Carrasco, M., 2006. Covert attention increases contrast sensitivity: psychophysical, neurophysiological and neuroimaging studies. Fundam. Vis. 33. Corbetta, M., Shulman, G.L., 2002. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3 (3), 201–215. Cousineau, D., 2007. Confidence intervals in within-subject designs: a simpler solution to Loftus and Masson's method. Tutorials in Quantitative Methods for Psychology 1, 42–45. Damaraju, E., Huang, Y.M., Barrett, L.F., Pessoa, L., 2009. Affective learning enhances activity and functional connectivity in early visual cortex. Neuropsychologia 47 (12), 2480–2487. Eimer, M., Holmes, A., McGlone, F.P., 2003. The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions. Cogn. Affect. Behav. Neurosci. 3 (2), 97–110. Fenker, D.B., Heipertz, D., Boehler, C.N., Schoenfeld, M.A., Noesselt, T., Heinze, H.J., et al., 2010. Mandatory processing of irrelevant fearful face features in visual search. J. Cogn. Neurosci. 22 (12), 2926–2938. Frischen, A., Eastwood, J.D., Smilek, D., 2008. Visual search for faces with emotional expressions. Psychol. Bull. 134 (5), 662–676. Holmes, A., Vuilleumier, P., Eimer, M., 2003. The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials. Brain Res. Cogn. Brain Res. 16 (2), 174–184. Jonides, J., Irwin, D.E., 1981. Capturing attention. Cognition 10 (1–3), 145–150.
Kanwisher, N., Yovel, G., 2006. The fusiform face area: a cortical region specialized for the perception of faces. Philos. Trans. R. Soc. Lond. B Biol. Sci. 361, 2109–2128. Kastner, S., Pinsk, M.A., De Weerd, P., Desimone, R., Ungerleider, L.G., 1999. Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron 22 (4), 751–761. Kastner, S., McMains, S.A., Beck, D.M., 2009. Mechanisms of selective attention in the human visual system: evidence from neuroimaging. In: Gazzaniga, M.S. (Ed.), The Cognitive Neurosciences. MIT Press. LaBar, K.S., Gitelman, D.R., Mesulam, M.M., Parrish, T.B., 2001. Impact of signal-to-noise on functional MRI of the human amygdala. Neuroreport 12 (16), 3461–3464. Luck, S.J., Chelazzi, L., Hillyard, S.A., Desimone, R., 1997. Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J. Neurophysiol. 77 (1), 24–42. Mogg, K., Bradley, B.P., Hallowell, N., 1994. Attentional bias to threat: roles of trait anxiety, stressful events, and awareness. Q. J. Exp. Psychol. A 47 (4), 841–864. Moriya, J., Tanno, Y., 2009. Competition between endogenous and exogenous attention to nonemotional stimuli in social anxiety. Emotion 9 (5), 739–743.
O'Connor, D.H., Fukui, M.M., Pinsk, M.A., Kastner, S., 2002. Attention modulates responses in the human lateral geniculate nucleus. Nat. Neurosci. 5 (11), 1203–1209. Padmala, S., Pessoa, L., 2008. Affective learning enhances visual detection and responses in primary visual cortex. J. Neurosci. 28 (24), 6202–6210. Peelen, M.V., Atkinson, A.P., Andersson, F., Vuilleumier, P., 2007. Emotional modulation of body-selective visual areas. Soc. Cogn. Affect. Neurosci. 2 (4), 274. Pessoa, L., 2005. To what extent are emotional visual stimuli processed without attention and awareness? Curr. Opin. Neurobiol. 15 (2), 188–196. Pessoa, L., McKenna, M., Gutierrez, E., Ungerleider, L.G., 2002. Neural processing of emotional faces requires attention. Proc. Natl. Acad. Sci. 99 (17), 11458. Phelps, E.A., LeDoux, J.E., 2005. Contributions of the amygdala to emotion processing: from animal models to human behavior. Neuron 48 (2), 175–187. Phelps, E.A., Ling, S., Carrasco, M., 2006. Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychol. Sci. 17 (4), 292–299. Pinsk, M.A., Doniger, G.M., Kastner, S., 2004. Push–pull mechanism of selective attention in human extrastriate cortex. J. Neurophysiol. 92 (1), 622–629. Pourtois, G., Grandjean, D., Sander, D., Vuilleumier, P., 2004. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb. Cortex 14 (6), 619–633.
Pourtois, G., Schwartz, S., Seghier, M.L., Lazeyras, F., Vuilleumier, P., 2006. Neural systems for orienting attention to the location of threat signals: an event-related fMRI study. Neuroimage 31 (2), 920–933. Reynolds, J.H., Chelazzi, L., 2004. Attentional modulation of visual processing. Annu. Rev. Neurosci. 27, 611–647. Sabatinelli, D., Bradley, M.M., Fitzsimmons, J.R., Lang, P.J., 2005. Parallel amygdala and inferotemporal activation reflect emotional intensity and fear relevance. Neuroimage 24 (4), 1265–1270. Silver, M.A., Kastner, S., 2009. Topographic maps in human frontal and parietal cortex. Trends Cogn. Sci. 13 (11), 488–495. Vuilleumier, P., 2005. How brains beware: neural mechanisms of emotional attention. Trends Cogn. Sci. 9 (12), 585–594. Vuilleumier, P., Armony, J.L., Driver, J., Dolan, R.J., 2001. Effects of attention and emotion on face processing in the human brain: an event-related fMRI study. Neuron 30 (3), 829–841. Whalen, P.J., Kagan, J., Cook, R.G., Davis, F.C., Kim, H., Polis, S., et al., 2004. Human amygdala responsivity to masked fearful eye whites. Science 306 (5704), 2061. Wright, P., Liu, Y., 2006. Neutral faces activate the amygdala during identity matching. Neuroimage 29 (2), 628–636.