Brain Research Bulletin 77 (2008) 264–273
Research report
Emotional object and scene stimuli modulate subsequent face processing: An event-related potential study
Masahiro Hirai a,b,∗, Shoko Watanabe a, Yukiko Honda a, Kensaku Miki a, Ryusuke Kakigi a,c
a Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
b Japan Society for the Promotion of Science, Japan
c Department of Physiological Sciences, School of Life Sciences, The Graduate University for Advanced Studies, Hayama, Kanagawa, Japan
Article info
Article history: Received 22 May 2008; Received in revised form 16 August 2008; Accepted 18 August 2008; Available online 13 September 2008
Keywords: Contextual effect; Face processing; Event-related potential; Emotional information processing
Abstract
To understand the processing of facial expressions in the context of social communication, it is important to clarify how it is influenced by environmental stimuli such as natural scenes or objects. We investigated how and when neural responses to facial expressions are modulated by natural scenes and objects containing emotional information. A facial expression stimulus (fearful/neutral) was presented after a scene or object stimulus (fearful/neutral), and event-related potentials (ERPs) were recorded from the onset of both the scene/object and the facial expression presentations. As in previous studies, during the presentation of the scenes and objects, positive-going waves at around 200–500 ms were observed for unpleasant visual stimuli at the Pz and Cz electrodes when the stimuli were intact; no such response was observed when the stimuli were scrambled. During the subsequent facial expression presentation, although we could not identify a significant interaction between contextual information and facial expression in the N170 component, we observed a significant interaction in the P2 component: the P2 amplitude in the fearful-cued condition was significantly larger than that in the neutral-cued condition when the face was fearful, and the P2 amplitude for the neutral face was significantly larger than that for the fearful face when the preceding stimulus was neutral. These findings show that an adjacent, non-face stimulus containing emotional information influences the subsequent processing of facial expressions up to around 260 ms, even when the two stimulus categories differ.
© 2008 Elsevier Inc. All rights reserved.
1. Introduction
Both perceiving and expressing facial expressions are vital for communication with others. In our daily lives, facial expressions do not occur in isolation but are embedded in social communication. This implies that the processing of facial expressions is modulated by non-facial environmental information, such as that contained in emotional scenes. In support of this view, several neuroimaging studies have revealed a role of emotional context in the encoding of faces. Sterpenich et al. showed that an emotional scene context enhances memory for neutral faces compared with a neutral context [45], and found that the interaction between emotion and memory produced significant responses in the amygdala and parahippocampal gyrus.
∗ Corresponding author at: Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan. Tel.: +81 564 55 7811; fax: +81 564 52 7913. E-mail address: [email protected] (M. Hirai).
doi:10.1016/j.brainresbull.2008.08.011
Moreover, amygdala responses were more closely related to responses of the locus ceruleus when remembering faces that had been encoded in an emotional rather than a neutral context. Further, a functional magnetic resonance imaging (fMRI) study reported that responses in the left fusiform gyrus to surprised faces were larger when cued by negative sentences than by positive sentences [35]. Righart and de Gelder [39] also investigated how spatial contextual information influences the neural dynamics of face processing by measuring event-related potentials (ERPs). They embedded faces (fearful or neutral) in photographs of scenes (fearful or neutral) and used the N170 component as an index. They found that the N170 amplitude was larger for fearful faces in fearful scenes than for fearful faces in neutral scenes. It is well known that the P1–N1(N170)–P2 complex of ERP components is observed when face stimuli are presented. Notably, the N170 component is considered to be associated with the structural encoding of faces [4,12,48] and with facial expression [3,44]. In addition to the N170, the P2 component is also sensitive to facial expressions [44]. Further, the N170 component has been observed to be modulated by visual attention tasks [11,12,25], although other studies have reported that the N170 was not modulated by visual attention [9,16]. For the P1 component, it has been suggested that
the component is modulated by attentional, task-dependent factors [46]. Interestingly, the study by Righart and de Gelder [39] provided electrophysiological evidence that simultaneously presented spatial-context information modulates the processing of facial expressions. In addition to the effect of spatial context, a recent behavioral study revealed a bidirectional priming effect for cross-domain emotional information between word or picture stimuli and facial expressions [8]. However, it is not well understood how such primed cross-domain emotional information modulates subsequent facial processing. One available method for investigating the interference of fearful facial processing with fearful contextual information processing in the temporal domain is a repetitive stimulus presentation paradigm, such as the 'double-pulse' or priming paradigm. The 'double-pulse paradigm' is a method [22,30] for probing the neural representation of targets [20,24]: two visual stimuli are presented in succession, and one investigates how the neural response to the second stimulus is modulated relative to the first. If two stimuli that vary in a single dimension produce the same amount of adaptation as the repetition of a single stimulus, the neural representation is assumed to be invariant for that dimension [22]. In an electrophysiological study, a decrement of neural activity following the presentation of two successive stimuli has also been reported [30]: Jeffreys found a reduction in the amplitude of the vertex positive potential (VPP) when a face stimulus was presented repeatedly with a short exposure (200 ms). The attenuation was face-selective; that is, the reduction of the component was prominent when the adjacent stimulus was a face, not an object. In an MEG study, Harris and Nakayama [22] likewise observed that attenuation of the face-selective M170 component occurred only when stimuli were presented with a relatively short stimulus onset asynchrony, and that the effect was larger for faces preceded by faces than by houses. A recent priming ERP study of face processing revealed that the N170 component was enhanced by a preceding visual stimulus: Jemel et al. presented the same person's face (within-domain self-priming) or name (cross-domain self-priming) prior to the presentation of a Mooney face [32]. They reported that the N170 amplitude was larger when famous face targets were preceded by the same face or the corresponding name than when preceded by a neutral prime. However, a priming effect in the cross-domain condition was not observed for unfamiliar faces. This finding suggests that other-domain information contributes to face priming effects when the face is familiar; the authors attributed the enhancement of the N170 component to top-down influences across domains. In addition to cross-domain priming effects, many within-domain priming experiments have been performed in facial processing [2,5,13,19,31,41,42]. These studies have revealed that ERP repetition effects reflect the modulation of at least two distinct components: the attenuation of a negative-going component peaking around 400 ms (N400) and an increase in the amplitude of a late positive component (LPC). They have also shown that ERP waveforms elicited by repeated items are usually more positive than those elicited by the initial presentation.
In particular, an ERP repetition effect has been observed at a relatively early latency (around 200–250 ms) and was defined as the N250r component [40,42]. This component appears to depend on the degree of perceptual overlap between the first and second presentations of faces, regardless of whether the faces are familiar. This early portion of the ERP repetition effect, the N250r, is abolished for long-lag repetitions [42].
Repetitive presentation of faces also reduces the amplitude of early and long-latency components [23,28]. Heisz et al. found a progressive decrease in N170 amplitude with multiple repetitions of upright faces presented at unattended locations [23]. Itier and Taylor revealed reduced N170 amplitude and latency for repeated faces, and also found a repetition effect on the N250 component [28]. These double-pulse and priming studies revealed that the repetition effect occurs within 170–400 ms after the onset of the second visual stimulus, and that the effect is modulated by the stimulus category of the preceding visual stimulus. However, these studies focused mainly on repetition effects for object category, not for emotional information. To clarify the interference between the processing of fearful facial expressions and that of fearful scene stimuli in the temporal domain, following the above-mentioned behavioral study [8], we adopted a modified version of the double-pulse [22,30] and priming paradigms [8] that separates context stimuli from face stimuli, as shown in Fig. 1. The procedure of the present study is basically similar to that of previous repetition and priming paradigm studies, with a modified stimulus onset asynchrony. This method enabled us to evaluate the effect of context on the same facial stimulus; for this reason, we were able to evaluate the effects of both fearful and neutral scenes on the processing of identical facial expressions. In the experiment, we focused mainly on the N170 and P2 components because the priming [32,42] and adaptation studies [22,30] cited above reported effects within 200–400 ms. Furthermore, as many ERP studies have reported that the N170 [3,44] and P2 [44] components are sensitive to facial expressions, we examined how the N170 and P2 components were modulated by an adjacent stimulus carrying priming information. In addition to the ERP responses to S2, we also examined the ERP responses to S1, in order to confirm the validity of the S1 experimental stimuli: unpleasant pictures have been observed to prompt a positive-going shift relative to neutral images at around 400–700 ms [10,34]. After evaluating the validity of the ERP responses to S1, we analyzed the ERP responses to S2. Our hypothesis was that the ERP components would be reduced in intensity when the emotional categories of the preceding non-facial stimulus and the facial expression were identical, and that the effect might be prominent in the fearful condition (e.g. [39]), because the emotional attributes of faces and of scenes are thought to be processed in part by common mechanisms (e.g. [21]).

2. Methods

2.1. Participants

Eleven participants (6 males and 5 females) with normal or corrected-to-normal vision volunteered (M = 28.4, S.D. = 5.1 years); all were confirmed to be neurologically healthy by a medical examination. All participants provided written informed consent for the experimental protocol, which was approved by the Ethics Committee of the National Institute for Physiological Sciences, Okazaki, Japan.

2.2. Experimental setup and procedure

Facial stimuli were taken from an ATR (Kyoto, Japan) commercial database. Six sets of stimuli (3 females and 3 males, each with a fearful and a neutral facial expression) were extracted from the database and used in the experiment.
For the facial expressions, 27 undergraduate students at ATR rated each facial stimulus on 7 emotions (happy, sad, surprised, anger, disgust, fear, contempt) using a 7-point scale (1 = none to 7 = strongly expressed). The mean scores for the neutral faces were: happy = 1.9; sad = 1.9; surprised = 1.2; anger = 1.6; disgust = 1.6; fear = 1.4; contempt = 2.0. The fearful face scores were: happy = 1.2; sad = 3.6; surprised = 4.7; anger = 2.6; disgust = 3.2; fear = 4.1; contempt = 2.1. These results indicate that the fearful faces were rated as substantially more fearful than the neutral faces. Scene stimuli were selected from a commercial database found on the World Wide Web.

Fig. 1. Experimental procedure. (A) S1 intact condition: (a) FF condition; (b) FN condition; (c) NF condition; (d) NN condition. (B) S1 scrambled condition: (e) sFF condition; (f) sFN condition; (g) sNF condition; (h) sNN condition (F, N and s refer to "fear", "neutral", and "scrambled", respectively). For (a)–(d), the S1 stimulus was intact, while for (e)–(h), the S1 stimulus was scrambled. In each trial, a white fixation point (1760 ms, 0.3° × 0.3°) preceded S1 (1000 ms). For S2, the stimulus duration was 500 ms.

For these scene stimuli, nine of the participants first evaluated 75 scene and object stimuli with respect to arousal (9-point scale: 1 = calm; 5 = neutral; 9 = extremely arousing) and valence (1 = very unpleasant; 5 = neutral; 9 = very pleasant), as in a previous study [39]. We then selected 12 stimuli of each type (12 fear-related stimuli, e.g., kitchen knife, snake, wasp, centipede, car-crash scene; 12 neutral stimuli, e.g., crayon, cup, sofa, dish) based on these evaluations. Emotional arousal ratings of the selected fearful pictures were reliably higher than those of the selected neutral pictures (fearful M = 7.6, S.D. = 0.8; neutral M = 3.1, S.D. = 1.2; t(8) = 7.3, p < 0.01). Fearful pictures were also rated significantly lower in emotional valence than neutral pictures (fearful M = 2.2, S.D. = 0.7; neutral M = 6.2, S.D. = 0.6; t(8) = 10.2, p < 0.01). Every fearful picture was rated as more unpleasant and more arousing than any of the neutral pictures. For the scene/object stimuli used as first stimuli (S1), we used 8-bit color (256 colors) images and two types of stimuli (intact and scrambled). For the scrambled stimuli, each image was divided into 20 × 20 pieces, which were then shuffled into a random arrangement. The subsequent face stimuli (S2) were all converted to grayscale. For the S1 images, we used color in order to aid the participants' comprehension; in a preliminary experiment, we found that participants took a long time to understand the content of grayscale S1 images.
For the S2 images, however, in order to exclude effects of low-level physical properties of the facial images on the ERP components, we converted the images to grayscale and adjusted them to identical luminance. To present the visual images within the fovea, the height and width of the scene images were 3° × 3°, as in previous ERP studies [7,29]; the facial images were also 3° × 3°. The scene and face images were both 8 cm × 8 cm, and participants sat at a distance of 150 cm from the monitor. For the scrambled images, the pieces of each photograph were distributed within the 3° × 3° space, each piece measuring 0.15° × 0.15° (Fig. 1). The scene and face images were smaller than those used in other studies [18,39,45] because, in a preliminary experiment, (1) larger visual stimuli elicited eye movements in certain participants, and (2) some participants reported that larger visual stimuli were difficult to recognize at the short presentation duration. To suppress eye movements and improve visibility, therefore, the visual stimuli were presented within the fovea in the present study.
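The 20 × 20 scrambling described above is straightforward to reproduce. Below is a minimal Python/numpy sketch of such a tile shuffle, assuming images whose sides are divisible by the grid size; the file names are hypothetical, not those of the actual stimulus set.

```python
import numpy as np
from PIL import Image

def scramble_image(img: Image.Image, grid: int = 20, seed: int = 0) -> Image.Image:
    """Cut an image into grid x grid tiles and shuffle them randomly."""
    rng = np.random.default_rng(seed)
    arr = np.asarray(img)
    th, tw = arr.shape[0] // grid, arr.shape[1] // grid  # tile size (assumes divisibility)
    tiles = [arr[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    order = rng.permutation(len(tiles))                  # random arrangement of the pieces
    rows = [np.concatenate([tiles[i] for i in order[r * grid:(r + 1) * grid]], axis=1)
            for r in range(grid)]
    return Image.fromarray(np.concatenate(rows, axis=0))

# Hypothetical usage: scramble one fear-related S1 picture
scrambled = scramble_image(Image.open("s1_fear_01.png"))
scrambled.save("s1_fear_01_scrambled.png")
```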
Fig. 1 illustrates the experimental procedure. The experiment consisted of eight conditions (FF, FN, NF, NN, sFF, sFN, sNF, and sNN; F, N and s refer to "fear", "neutral", and "scrambled", respectively). We ran the two experimental conditions separately: the S1 intact condition (Fig. 1A) and the S1 scrambled condition (Fig. 1B). In each experimental condition, eight blocks of trials were conducted with a 1-min inter-block interval; the order was counterbalanced across participants. Each condition was presented eight times per block, and thus 32 stimulus sets (S1–S2 presentations) were presented per block. In each trial, a white fixation point (1760 ms, 0.3° × 0.3°) preceded S1 (1000 ms); for S2, the stimulus duration was 500 ms. In each trial, the content of S2 was randomly paired with S1 content different from it. We presented S1 and S2 sequentially, without any interruption or pause in between, because a previous MEG study of face adaptation revealed that a shorter ISI elicits a stronger adaptation effect [22]; thus, to provoke a strong effect and exclude ISI factors, we did not insert any blank between the stimuli. To ensure that participants maintained their gaze at the center of the monitor throughout the experiment, they performed a continuous performance task: they pressed a button whenever a target (a cartoon character) appeared on the screen. The target was presented eight times per block, for 500 ms, in place of the face stimulus.
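As an illustration of the block structure just described (four conditions per session, eight repetitions each, plus eight randomly interleaved target trials), a minimal sketch follows; the function and field names are hypothetical and do not reproduce the authors' stimulus-control code.

```python
import random

CONDITIONS = ["FF", "FN", "NF", "NN"]  # S1 emotion x S2 emotion (intact session shown)

def build_block(reps: int = 8, n_targets: int = 8, seed: int = 0) -> list[dict]:
    """Assemble one block: 4 conditions x 8 repetitions = 32 S1-S2 trials,
    plus 8 catch trials where the cartoon-character target replaces the face."""
    rng = random.Random(seed)
    trials = [{"condition": c, "target": False}
              for c in CONDITIONS for _ in range(reps)]
    trials += [{"condition": rng.choice(CONDITIONS), "target": True}
               for _ in range(n_targets)]
    rng.shuffle(trials)
    return trials

# Each trial then unfolds as: fixation 1760 ms -> S1 1000 ms -> S2 (or target) 500 ms,
# with no blank inserted between S1 and S2.
block = build_block()
print(len(block), "trials,", sum(t["target"] for t in block), "targets")
```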
2.3. EEG recording

Electroencephalograms (EEGs) were recorded with Ag/AgCl disk electrodes placed on the scalp at 18 locations: nose, O1, O2, P3, P4, T3, T4, C3, C4, Cz, F3, F4, Fz, FCz, T5, T6, T5′, and T6′, according to the International 10–20 System. T5′ and T6′ were located 2 cm below T5 and T6, as in previous studies [48,49]. Electrode impedance was maintained below 5 kΩ. All EEG signals were collected on a signal processor (EEG-1100, Nihon Kohden, Tokyo, Japan). The bandpass filter was set at 0.1–100 Hz. All recordings were initially referenced to C3 and C4 (a constraint of the system settings) and later re-referenced to the tip of the nose. The electrical potential was digitized at a 1000-Hz sampling rate, and data were stored on disk for offline analysis. Vertical and horizontal electro-oculograms (EOGs) were recorded to monitor eye movements.
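The offline re-referencing step (from the C3/C4 recording reference to the nose tip) amounts to a simple channel subtraction. Below is a minimal numpy sketch under that assumption; the array, labels, and data are fabricated for illustration and are not taken from the study's recording files.

```python
import numpy as np

def rereference_to_nose(data: np.ndarray, ch_names: list[str]) -> np.ndarray:
    """Re-reference EEG offline to the nose-tip electrode.

    data     : channels x samples array, all channels recorded against the
               same (arbitrary) recording reference.
    ch_names : channel labels, one of which is "Nose".

    Subtracting the nose channel from every channel makes the nose the new
    zero-potential reference, regardless of the original reference.
    """
    nose = data[ch_names.index("Nose")]
    return data - nose  # broadcasting subtracts the nose trace from each channel

# Hypothetical usage: 18 channels, 1 s of data at 1000 Hz
rng = np.random.default_rng(0)
raw = rng.standard_normal((18, 1000))
labels = ["Nose", "O1", "O2", "P3", "P4", "T3", "T4", "C3", "C4", "Cz",
          "F3", "F4", "Fz", "FCz", "T5", "T6", "T5p", "T6p"]  # T5p/T6p stand in for T5'/T6'
reref = rereference_to_nose(raw, labels)
```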
2.4. Data analysis

In the offline analysis of the EEG recordings, a 0.1–30 Hz bandpass filter (24 dB/octave) was applied to the data. Trials in which the EEG or EOG signal variation exceeded ±50 µV were discarded. More than 50 trials remained for averaging in every condition (FF: 52.5 ± 7.3 trials; FN: 50.8 ± 8.5; NF: 51.5 ± 6.8; NN: 50.4 ± 7.5; sFF: 53.7 ± 8.2; sFN: 50.1 ± 8.1; sNF: 52.9 ± 8.2; sNN: 50.7 ± 8.2; mean ± S.D.). For S1, the analysis window extended 1000 ms from stimulus onset; for S2, it extended 500 ms. For both S1 and S2, the mean amplitude over the 100 ms preceding the stimulus was used as the baseline; the baseline period for S2 was thus the 100 ms window before the offset of S1 (Fig. 1). The grand mean waveform was then calculated for each condition. Target trials were excluded from further analysis. The analysis proceeded as follows. For S1, to investigate the effect of emotional stimuli, the mean amplitude was calculated over the 200–500 ms time window at the Pz and Cz electrodes, as in previous studies [10,34]. Further, to verify that differential amplitudes in the S2 period were not due to deviations of the S2 baseline, we calculated the mean amplitude over 900–1000 ms at the T5, T6, T5′ and T6′ electrodes, using the prestimulus period as baseline, to confirm that no statistically significant difference was present in the baseline itself. For S2, we focused mainly on the three components (P1, N170, P2) observed at the T5, T6, T5′, and T6′ electrodes [48,49]. Each component was identified within a predefined time window: 40–160 ms for P1, 160–240 ms for N170, and 240–340 ms for P2, based on previous ERP studies [3,44]. The peak latencies and amplitudes (from baseline) of the P1, N170 and P2 components observed in the face presentation period (S2) were then calculated.
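The rejection, baseline, and peak-measurement steps above are easy to express in code. The following is a minimal numpy sketch under the stated parameters (±50 µV rejection, 100 ms baseline, 1000 Hz sampling, and the component windows given above); the epoch array is fabricated for illustration, and the 0.1–30 Hz filtering step is omitted.

```python
import numpy as np

BASE = 100  # 100 ms prestimulus baseline = 100 samples at 1000 Hz
WINDOWS = {"P1": (40, 160), "N170": (160, 240), "P2": (240, 340)}  # ms after onset

def average_epochs(epochs: np.ndarray, reject_uv: float = 50.0) -> np.ndarray:
    """Baseline-correct, artifact-reject, and average single-channel epochs.

    epochs : trials x samples array (microvolts); samples 0..BASE-1 are the
             100 ms baseline, and index BASE corresponds to stimulus onset.
    """
    corrected = epochs - epochs[:, :BASE].mean(axis=1, keepdims=True)
    keep = np.abs(corrected).max(axis=1) <= reject_uv  # +/-50 uV rejection
    return corrected[keep].mean(axis=0)

def peak_measures(erp: np.ndarray) -> dict[str, tuple[int, float]]:
    """Peak latency (ms) and amplitude (uV) within each component window."""
    out = {}
    for name, (t0, t1) in WINDOWS.items():
        seg = erp[BASE + t0: BASE + t1]
        i = seg.argmin() if name == "N170" else seg.argmax()  # N170 is a negativity
        out[name] = (t0 + int(i), float(seg[i]))
    return out

# Hypothetical usage with fabricated data: 60 trials, 100 ms baseline + 500 ms epoch
rng = np.random.default_rng(0)
fake_epochs = rng.normal(0.0, 5.0, size=(60, BASE + 500))
print(peak_measures(average_epochs(fake_epochs)))
```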
Fig. 2. (A) Grand average ERP waveforms for the S1 stimuli. (B) Average amplitudes in the 200–500 ms time window at the Cz and Pz electrodes for all stimulus conditions (*p < 0.05, **p < 0.01). Error bars indicate the standard error (S.E.). Blue: intact fear-related stimulus; red: intact neutral stimulus; aqua: scrambled fear-related stimulus; pink: scrambled neutral stimulus.
2.5. Statistical analysis

For S1, a three-way ANOVA with S1 condition (intact or scrambled), type of S1 stimulus (fearful or neutral object), and electrode site (Pz or Cz) as factors was applied to confirm whether fear-related stimuli elicited a late positive shift, as observed in previous studies [10]. For S2, all facial expression stimuli elicited a typical visually evoked P1–N1(N170)–P2 complex that was maximal over the bilateral occipitotemporal regions, as shown in Figs. 3 and 4. Therefore, further analysis focused on the three ERP components observed at the T5′ and T6′ electrodes, as in our previous studies [48,49]. First, to check for baseline deflection of S2, a three-way ANOVA was applied to the mean amplitude (900–1000 ms), with laterality (left or right hemisphere, T5′/T6′ electrodes), S1 condition (intact or scrambled), and type of S1 stimulus (fearful or neutral object) as factors. For each component during the S2 period (N170 and P2), a four-way ANOVA was carried out with laterality (left or right hemisphere, T5′/T6′ electrodes), S1 condition (intact or scrambled), type of S1 stimulus (fearful or neutral object/scene), and facial expression (fearful or neutral) as factors. If the sphericity assumption was violated according to Mauchly's test, the Greenhouse–Geisser epsilon was used to correct the degrees of freedom, and the F- and p-values were recalculated. Statistical significance was set at p < 0.05.
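As an illustration of this repeated-measures design, the following sketch runs a four-way within-subject ANOVA with statsmodels' AnovaRM on a fabricated long-format table (11 subjects × 16 cells). This is not the authors' analysis code; note also that with only two levels per factor the sphericity assumption is trivially satisfied, so the Greenhouse–Geisser correction matters only for factors with three or more levels.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
levels = {
    "laterality": ["left", "right"],        # T5' vs T6'
    "s1_condition": ["intact", "scrambled"],
    "s1_type": ["fearful", "neutral"],
    "expression": ["fearful", "neutral"],
}

# Fabricated long-format table: one P2 peak amplitude per subject x cell.
rows = [
    {"subject": s, **dict(zip(levels, combo)),
     "p2_amplitude": rng.normal(2.0, 0.8)}
    for s in range(1, 12)                    # 11 participants
    for combo in itertools.product(*levels.values())
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="p2_amplitude", subject="subject",
              within=list(levels)).fit()
print(res.anova_table)  # F and p for all main effects and interactions
```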
3. Results

3.1. Behavioral data

The accuracy of target detection was 96.4 ± 5.0% (mean ± S.D.). At the end of the experiment, participants were asked to rate the arousal and valence of each presented stimulus. Emotional arousal ratings of the fearful pictures were significantly higher than those of the neutral pictures (fearful M = 7.4, S.D. = 0.4; neutral M = 3.0, S.D. = 0.4; t(10) = 6.9, p < 0.01). Emotional valence also differed significantly between fearful and neutral stimuli, the latter being rated higher (fearful M = 2.7, S.D. = 0.7; neutral M = 6.6, S.D. = 0.5; t(10) = 6.2, p < 0.01). These scores are similar to those obtained in the psychophysical study conducted before the experiment (see Section 2.2).

3.2. ERP results

For S1, intact fear-related stimuli elicited a late positive shift, as in previous studies [10,34]; no such shift was observed for the scrambled versions, as shown in Fig. 2. For both the intact and scrambled conditions of S2, all facial expression stimuli elicited a typical visually evoked P1–N1(N170)–P2 complex that was maximal over the bilateral occipitotemporal regions, as shown in Figs. 3 and 4. The peak amplitudes and latencies of the N170 and P2 components are shown in Figs. 5 and 6.

3.2.1. ERP results for S1

For the S1 averaged amplitude, the three-way interaction (S1 condition, S1 type, and electrode site) was significant [F(1,20) = 6.7, p = 0.03]. We then carried out separate two-way ANOVAs for each electrode. At the Cz electrode, the interaction between S1 condition and S1 type was significant [F(1,10) = 8.9, p = 0.01]. Subsequent analysis revealed that the amplitude in the intact condition was significantly larger than that in the scrambled condition when the S1 type was fearful (1.9 ± 0.5 vs. 0.5 ± 0.7 µV; p < 0.01, mean ± S.E.), and that the amplitude induced by the fearful stimulus was significantly larger than that induced by the neutral stimulus when the S1 stimulus was intact (1.9 ± 0.5 vs. 0.9 ± 0.6 µV; p = 0.03). At the Pz electrode, the interaction between S1 condition and S1 type was also significant [F(1,10) = 8.9, p = 0.01]. Subsequent analysis revealed that the amplitude in the intact condition was significantly larger than that in the scrambled condition when the S1 type was fearful (6.3 ± 0.8 vs. 3.6 ± 0.8 µV; p < 0.01), and that the amplitude induced by the fearful stimulus was significantly larger than that induced by the neutral stimulus when the S1 stimulus was intact (6.3 ± 0.8 vs. 4.8 ± 0.8 µV; p = 0.03). For the mean amplitude of the 900–1000 ms period, no significant effect was observed [Fs < 2.0, ps > 0.18].
Fig. 3. Grand averaged ERP waveforms of S2 (FF, FN, NF, and NN conditions) for the S1 intact condition, displayed at the O1, O2, T5, T6, T5′, T6′, and Cz electrodes. P1–N1(N170)–P2 components were clearly observed. Blue line: FF condition; red line: FN condition; aqua line: NF condition; pink line: NN condition.
Fig. 4. Grand averaged ERP waveforms of S2 (sFF, sFN, sNF, and sNN conditions) for the S1 scrambled condition, displayed at the O1, O2, T5, T6, T5′, T6′, and Cz electrodes. P1–N1(N170)–P2 components were clearly observed. Blue line: sFF condition; red line: sFN condition; aqua line: sNF condition; pink line: sNN condition.
3.2.2. ERP results for S2

For the P1 amplitude, the main effect of S1 condition was significant [F(1,10) = 50.2, p < 0.01]: the amplitude in the S1 intact condition was significantly larger than that in the S1 scrambled condition (4.5 ± 0.2 vs. 3.2 ± 0.2 µV; mean ± S.E.). For the P1 latency, the interaction between S1 condition and S1 type was significant [F(1,10) = 7.9, p = 0.02]: the latency for the S1 neutral stimulus was significantly longer than that for the S1 fearful stimulus when S1 was scrambled (143.5 ± 4.0 vs. 131.8 ± 2.6 ms, p = 0.01).
For the N170 amplitude (Table 1), the two-way interaction between S1 condition and S1 stimulus type was significant [F(1,10) = 11.2, p = 0.01]. Subsequent analysis revealed that the negative amplitude induced by the S1 scrambled stimulus was significantly larger than that induced by the S1 intact stimulus when the S1 stimulus was fearful (−8.0 ± 0.8 vs. −6.5 ± 0.8 µV, p = 0.02). For the N170 latency (Table 2), the main effect of facial expression [F(1,10) = 7.0, p = 0.02] and the interaction between S1 condition and S1 type [F(1,10) = 8.0, p = 0.02] were significant: the latency for fearful facial expressions was significantly longer than that for neutral facial expressions (191.3 ± 1.6 vs. 189.2 ± 1.6 ms), and the latency for the S1 scrambled stimulus was significantly longer than that for the S1 intact stimulus when S1 was neutral (194.9 ± 3.8 vs. 186.4 ± 5.0 ms, p = 0.01). For the P2 amplitude (Table 1), the main effect of S1 condition was significant [F(1,10) = 20.1, p < 0.01], as was the two-way interaction between type of S1 and facial expression [F(1,10) = 7.4, p = 0.02]. The amplitude in the S1 intact condition was significantly larger than that in the S1 scrambled condition (2.6 ± 0.2 vs. 1.9 ± 0.3 µV). Moreover, the amplitude in the fearful-cued condition was significantly larger than that in the neutral-cued condition when the S2 face was fearful (2.3 ± 0.5 vs. 1.5 ± 0.5 µV, p = 0.01; mean ± S.E.), and the amplitude induced by the neutral face was larger than that induced by the fearful face in the neutral-cued condition (2.9 ± 0.8 vs. 1.5 ± 0.5 µV, p = 0.01). For the P2 latency (Table 2), the main effect of S1 condition was significant [F(1,10) = 9.9, p = 0.01]: the latency in the S1 scrambled condition was significantly longer than that in the S1 intact condition (260.4 ± 2.0 vs. 255.0 ± 2.3 ms). These results indicate that the modulation effect was more prominent in the S1 intact than in the S1 scrambled condition.
Table 1
Peak amplitude of the N170 and P2 components at the T5′/T6′ electrodes for the S1 intact and S1 scrambled conditions (mean ± S.E., µV)

                           T5′                                                  T6′
                           FF           FN           NF           NN            FF           FN           NF           NN
S1 intact      N170   −5.9 ± 1.0   −5.9 ± 1.0   −7.2 ± 1.1   −6.3 ± 1.1    −7.1 ± 0.9   −7.2 ± 0.8   −8.2 ± 0.9   −7.0 ± 0.8
               P2      2.9 ± 0.5    3.2 ± 0.6    2.2 ± 0.6    2.9 ± 0.7     2.5 ± 0.6    2.9 ± 0.9    1.5 ± 0.6    2.8 ± 0.9
S1 scrambled   N170   −7.6 ± 0.9   −7.3 ± 1.1   −7.2 ± 1.1   −6.1 ± 0.6    −8.3 ± 1.0   −8.8 ± 1.0   −8.1 ± 1.0   −7.1 ± 0.7
               P2      2.0 ± 0.5    2.1 ± 0.7    1.4 ± 0.7    3.1 ± 0.7     1.6 ± 0.9    1.7 ± 0.6    1.0 ± 0.7    2.8 ± 1.0

F and N: fear and neutral, respectively.
Table 2
Peak latency of the N170 and P2 components at the T5′/T6′ electrodes for the S1 intact and S1 scrambled conditions (mean ± S.E., ms)

                           T5′                                                          T6′
                           FF            FN            NF            NN              FF            FN            NF            NN
S1 intact      N170   189.1 ± 5.6   188.2 ± 4.6   187.6 ± 5.4   187.4 ± 5.0     188.7 ± 5.7   184.8 ± 4.7   188.5 ± 5.5   182.4 ± 4.7
               P2     255.7 ± 6.8   249.4 ± 5.2   263.1 ± 8.2   249.6 ± 4.5     256.8 ± 7.9   253.2 ± 6.9   263.4 ± 6.7   248.3 ± 5.8
S1 scrambled   N170   192.6 ± 2.9   192.6 ± 3.9   196.8 ± 3.2   193.6 ± 3.7     191.6 ± 4.4   190.9 ± 4.4   195.6 ± 3.9   193.5 ± 4.3
               P2     262.5 ± 4.9   262.3 ± 6.3   261.6 ± 4.7   260.1 ± 5.1     261.1 ± 6.3   256.2 ± 7.9   259.9 ± 5.7   259.3 ± 6.1

F and N: fear and neutral, respectively.
Fig. 5. (A) Peak amplitude of the N170 component for the S1 intact and S1 scrambled conditions. (B) Peak latency of the N170 component for the S1 intact and S1 scrambled conditions (*p < 0.05). Error bars indicate the standard error (S.E.).
Fig. 6. (A) Peak amplitude of the P2 component for the S1 intact and S1 scrambled conditions. (B) Peak latency of the P2 component for the S1 intact and S1 scrambled conditions (*p < 0.05, **p < 0.01). Error bars indicate the standard error (S.E.).
4. Discussion

In this study, in order to clarify how the face-sensitive components are modulated by an adjacent non-face stimulus containing emotional information (a neutral or fearful scene/object stimulus), we adopted a modified version of the double-pulse and priming paradigms, using the N170 and P2 components as indices. Prior to the experiment, we hypothesized that if the ERP response to an S2 stimulus whose emotional category was identical to that of the S1 stimulus were significantly smaller following the S1 intact than the S1 scrambled condition, the attenuation of the S2 response would be attributable to a 'cross-domain emotional' adaptation effect. Contrary to our hypothesis, neither a significant three-way interaction (S1 condition, type of S1 stimulus, and facial expression) nor a two-way interaction (type of S1 stimulus and facial expression) was identified for the N170 component. However, for the P2 component, we observed a main effect of S1 condition and a significant two-way interaction between type of S1 and facial expression: the amplitude in the fearful-cued condition was significantly larger than that in the neutral-cued condition when the presented face was fearful, and the amplitude for the neutral face was significantly larger than that for the fearful face when the preceding stimulus was neutral. Additionally, the P2 amplitude in the S1 intact condition was significantly larger than that in the S1 scrambled condition. These results suggest that fearful face processing can be relatively shifted toward neutral face processing when the preceding cue is fearful (see below). These findings indicate that an adjacent, non-face stimulus containing emotional information influences the subsequent processing of facial expressions at around 250–260 ms, even when the two stimulus categories are different.

4.1. Contextual modulation in the N170 and P2 components

A significant interaction between contextual information and facial expression was not observed for the N170 component, but was observed for the P2 component. Although a previous ERP study reported that a fearful face in a fearful context enhances the N170 amplitude relative to a face in a neutral context at left occipito-temporal sites [39], we could not observe a contextual effect on the N170 component in the present experiment. Rather, we observed significant effects of the type of S1 and of the S1 condition on the N170 component. This suggests that, in the present experimental paradigm, the N170 component does not directly reflect the coding of facial expression but rather reflects the contextual information presented in the preceding period. Consistent with this, a recent MEG study reported that the face-selective M170 component was modulated by the preceding visual stimulus, and not by the facial expression [17]. Contrary to our expectation, the N170 amplitude was modulated by both the S1 condition and the type of S1. One possibility is that the N170 amplitude was affected by the earlier, meaningless picture. Another is that the scrambling process differentially altered some low-level visual characteristics of the stimuli. Alternatively, scrambling may have induced different levels of ambiguity in the neutral versus fearful scrambled images, or some vague identifiability of segments of the fearful scrambled images relative to the neutral ones. This modulation might also result from overlearning of the small (24-item) S1 set. It has also been reported that the lateral occipital complex responds more strongly to intact objects than to scrambled stimuli [43]; thus, differential neural activation of this region between intact and scrambled stimuli might affect subsequent face processing. This point needs to be addressed in future studies. As for the N170 latency, the latency for the S1 scrambled stimulus was significantly longer than that for the S1 intact stimulus when the S1 stimulus was neutral. Moreover, the N170 latency was modulated by facial expression: the latency for the fearful face was significantly longer than that for the neutral face. This result seems consistent with previous ERP studies of facial expressions [3,39].
These studies reported that the N170 latency for fearful faces was significantly longer than that for neutral faces, suggesting that the N170 latency, but not its amplitude, is sensitive to facial expression. As for the P2 component, in contrast to the N170, we identified a main effect of S1 condition and a two-way interaction between type of S1 and facial expression: (1) the P2 amplitude in the S1 intact condition was significantly larger than that in the S1 scrambled condition; (2) the P2 amplitude in the fearful-cued condition was significantly larger than
that in the neutral-cued condition when the face stimulus was fearful; and (3) the P2 amplitude for the neutral face was significantly larger than that for the fearful face when the preceding stimulus was neutral. Consistent with the results of the neutral-cued conditions (NF and NN), previous ERP studies reported that emotional information affects the P2 amplitude: neutral faces elicited a more positive P2 than fearful faces [44]. In the present results, despite the fact that an identical fearful face was presented in the NF and FF conditions, the P2 amplitude in the fearful-cued condition was significantly larger than that in the neutral-cued condition. This implies that the neural activity related to fearful face processing in the FF condition can be relatively shifted toward neutral face processing compared with the NF condition. It should be noted that, contrary to our expectation, we could not observe a significant three-way interaction (S1 condition, type of S1, and facial expression) for this P2 component; we found only the main effect of S1 condition and the two-way interaction mentioned above. This suggests that scrambling the S1 stimulus did not completely remove its influence on the P2 modulation by type of S1 and facial expression. Thus, while we regard the scrambling operation as adequate for eliminating the emotional information during the S1 period, it might not have been effective for the subsequent facial processing. One might surmise two possibilities: (1) the P2 amplitude in the NF condition was significantly attenuated compared with the FF or NN conditions, or (2) the P2 amplitude in the FF or NN condition was significantly enhanced compared with the NF condition. Unfortunately, as we could not identify a significant three-way interaction for this component, we cannot directly conclude whether the modulation was due to 'attenuation' or 'enhancement' relative to the control condition. From the present results, we can conclude that fearful face processing can be relatively shifted toward neutral face processing when the preceding cue stimulus is fearful. In previous MEG studies, neural responses related to facial expression were modulated by a preceding cue stimulus in this time period [17,26]. Furl et al. reported sustained field activity during 300–400 ms after stimulus onset, with the current density in the mid-STS parametrically enhanced by the difference between adapted and morphed expressions [17]. Another MEG study reported that the peak MEG response to a fearful face at around 220–250 ms was more pronounced when the preceding facial stimulus was fearful rather than neutral [26]. It has been pointed out that repeated presentation of single fearful and neutral faces induces habituation of blood-oxygenation-level-dependent (BOLD) signals in the right amygdala and hippocampus, as well as bilaterally in the medial/inferior temporal cortex, irrespective of facial expression [14]. Moreover, repetition suppression has been observed in the fusiform gyrus (FG), superior temporal sulcus (STS), inferior occipital gyrus (IOG), and insula. Ishai et al. showed that repetition suppression occurred for both fearful and neutral faces, but was stronger for the former [27].
An intracranial event-related potential study also reported that activity in the amygdala habituated when the same stimulus was presented repeatedly [36]. The same study reported that amygdala responses to a fearful face were smaller in the second half of the experiment, indicating habituation to aversive stimuli. These studies imply that repetitive presentation of emotional faces reduces the neural activity related to face processing. Interestingly, recent studies indicate that the regions activated by facial stimuli are also activated, to some degree, by scene stimuli. Using both face and International Affective Picture
System (IAPS) stimuli, an fMRI study suggested that both fearful faces and fearful scenes activate the amygdala [21]. In addition to the amygdala, activation of the superior temporal gyrus (STG) for fearful relative to neutral faces was significantly larger than the corresponding activation elicited by fearful relative to neutral scenes [6]. Because the stimulus category differed between S1 and S2 in our study, we cannot directly combine these results to derive overall conclusions; however, the repeated presentation of identical emotional stimuli might modulate the activity of neural circuits commonly involved in face and scene processing. What neural mechanisms might be involved in the present effect? Because we introduced a modified version of the double-pulse and priming paradigms, the effect may partly involve novelty-sensitive mechanisms related to emotional information, as proposed in a recent MEG study [17]: responses could be elicited when the emotional category of the current stimulus deviates from recent experience. We presume that the deviation of the S2 (facial expression) stimulus from the S1 (object/scene) stimulus can also modulate the late component irrespective of stimulus category. Because we observed a modulatory response in a later component (the P2), backward connections from higher cortical regions are also likely to be involved (see below). The present experiment was partly inspired by Righart and de Gelder's report [39] of contextual effects on the N170. Initially, we expected that contextual modulation would be observed in the N170 component; however, a contextual effect was observed only at the P2 component in the present experimental paradigm. We therefore postulate that early (N170) contextual modulation is prominent when facial and contextual information are presented simultaneously, whereas contextual modulation is delayed when the face stimulus is presented after the contextual stimulus. It is possible that the processing of facial expression is affected differently depending on when the contextual information is presented. The differential timing of modulation might help reveal how emotional information of different categories is coded in the neural system.

4.2. Timing of contextual modulation

The timing of modulation also seems consistent with previous electrophysiological and MEG findings. According to intracranial recordings, fearful stimuli enhance activity not only in the amygdala [15,36] but also in the orbitofrontal cortex [33]. Regarding the timing of activation in the amygdala and occipitotemporal cortex, an intracranial event-related potential study found that fear-specific potentials beginning at 200 ms post-stimulus occurred in the amygdala, and that the occipitotemporal region was activated after 300 ms [36]. The authors speculated that this latency delay was due to retrograde neuromodulation of the ventral visual system by the amygdala [37,38] or to subsequent feedback signals from the amygdala and interconnected limbic regions [1]. Other studies support the notion that responses in the fusiform and other brain regions may critically depend on inputs received from the amygdala [47]. These results seem consistent with our finding that modulation by the processing of emotional information occurred at around 200–300 ms.
Using a facial adaptation paradigm, a recent MEG study reported that the current density in the mid-STS was parametrically enhanced during 300–400 ms by the difference between adapted and morphed expressions [17]. Another MEG study revealed that repeated presentation of a fearful face modulates the MEG response at around 220–250 ms, suggesting that this response reflects feedback from other brain regions [26]. Consistent with these findings, we found a significant interaction between type of S1 stimulus and facial expression in the P2 amplitude.
As mentioned above, we speculate that the adjacent-stimulus effect continues to around 250–260 ms, and that this 'late' modulation might result from feedback from other brain regions, as suggested in previous studies (e.g. [17]).

4.3. Neural response to S1

The neural response to S1 replicated previous ERP findings. The present results showed that positive-going waves in the 200–500 ms range were evoked by fearful stimuli at the Pz and Cz electrodes when the stimuli were intact; this differential neural response to the emotional stimulus was absent when the stimulus was scrambled. This indicates that the late positive potential is associated with the processing of emotional information, as reported in previous studies [10,34]. Cuthbert et al. reported midline cortical response differences among picture contents at 400–700 ms from stimulus onset, with unpleasant pictures prompting a positive-going shift compared with neutral images. This suggests that the present scene and object stimuli were sufficient to elicit emotional processing, and that the scrambling operation eliminated the emotional information in the S1 period. One might think that the differential neural response to S2 was due to differences in the baseline. To examine this possibility, we calculated the mean amplitude over 900–1000 ms for all S1 stimuli and performed a statistical analysis. We found no significant effects when S1 was intact, and only a significant effect of laterality when the S1 stimulus was scrambled. These results suggest that baseline fluctuation does not contribute to the differential neural response to S2 when S1 is intact, and that fearful scenes do not have an overall effect on S2-elicited components.

4.4. P1 component

The present results indicate that the P1 amplitude was modulated by the S1 condition, and that the P1 latency was modulated by the type of S1 (neutral or fearful) when the adjacent stimulus (S1) was scrambled, but not when S1 was intact. That is, P1 latency was prolonged in the neutral-cued condition compared with the fearful-cued condition. This was an interesting but unexpected finding that is difficult to explain. As with the N170 component, one possibility is that P1 latency was affected by the earlier, meaningless picture, or that scrambling differentially altered some low-level visual characteristics of the stimuli, as mentioned above. Another possibility is that scrambling induced different levels of ambiguity in the neutral versus fearful scrambled images, or some vague identifiability of segments of the fearful scrambled images relative to the neutral ones. Further studies are needed to address how the scrambling of an emotional stimulus affects subsequent visual processing.

5. Conclusions

In conclusion, we have shown that the processing of facial expressions is rapidly modulated by contextual (non-facial) stimuli containing emotional information. Specifically, the P2 amplitude in the fearful-cued condition was significantly larger than that in the neutral-cued condition when the face was fearful, and the P2 amplitude for the neutral face was significantly larger than that for the fearful face when the preceding stimulus was neutral.
The present findings should prove useful for elucidating the neural mechanisms of cross-domain emotional processing.
Conflict of interest

The authors declare that they have no competing financial interests.

Acknowledgements

We thank Mr. O. Nagata, Mr. Y. Takeshima, and Ms. M. Teruya for their technical support. We are also grateful to Dr. Koji Inui for comments and suggestions on the data analysis. This study was supported by the Kakigi group of RISTEX, the Japan Science and Technology Agency. M. Hirai was supported by a Grant-in-Aid for JSPS Fellows No. 18-11826 from the Ministry of Education, Science, Sports, and Culture, Japan.

References

[1] D.G. Amaral, H. Behniea, J.L. Kelly, Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey, Neuroscience 118 (2003) 1099–1120.
[2] S.E. Barrett, M.D. Rugg, D.I. Perrett, Event-related potentials and the matching of familiar and unfamiliar faces, Neuropsychologia 26 (1988) 105–117.
[3] M. Batty, M.J. Taylor, Early processing of the six basic facial emotional expressions, Brain Res. Cogn. Brain Res. 17 (2003) 613–620.
[4] S. Bentin, T. Allison, A. Puce, E. Perez, G. McCarthy, Electrophysiological studies of face perception in humans, J. Cogn. Neurosci. 8 (1996) 551–565.
[5] S. Bentin, G. McCarthy, The effects of immediate stimulus repetition on reaction time and event-related potentials in tasks of different complexity, J. Exp. Psychol. Learn. Mem. Cogn. 20 (1994) 130–149.
[6] J.C. Britton, S.F. Taylor, K.D. Sudheimer, I. Liberzon, Facial expressions and complex IAPS pictures: common and differential networks, Neuroimage 31 (2006) 906–919.
[7] S. Campanella, P. Quinet, R. Bruyer, M. Crommelinck, J.M. Guerit, Categorical perception of happiness and fear facial expressions: an ERP study, J. Cogn. Neurosci. 14 (2002) 210–227.
[8] N.C. Carroll, A.W. Young, Priming of emotion recognition, Q. J. Exp. Psychol. A 58 (2005) 1173–1197.
[9] A.S. Cauquil, G.E. Edmonds, M.J. Taylor, Is the face-sensitive N170 the only ERP not affected by selective attention? Neuroreport 11 (2000) 2167–2171.
[10] B.N. Cuthbert, H.T. Schupp, M.M. Bradley, N. Birbaumer, P.J. Lang, Brain potentials in affective picture processing: covariation with autonomic arousal and affective report, Biol. Psychol. 52 (2000) 95–111.
[11] P. Downing, J. Liu, N. Kanwisher, Testing cognitive models of visual attention with fMRI and MEG, Neuropsychologia 39 (2001) 1329–1342.
[12] M. Eimer, Event-related brain potentials distinguish processing stages involved in face perception and recognition, Clin. Neurophysiol. 111 (2000) 694–705.
[13] M. Eimer, The face-specific N170 component reflects late stages in the structural encoding of faces, Neuroreport 11 (2000) 2319–2324.
[14] H. Fischer, C.I. Wright, P.J. Whalen, S.C. McInerney, L.M. Shin, S.L. Rauch, Brain habituation during repeated exposure to fearful and neutral faces: a functional MRI study, Brain Res. Bull. 59 (2003) 387–392.
[15] I. Fried, K.A. MacDonald, C.L. Wilson, Single neuron activity in human hippocampus and amygdala during recognition of faces and objects, Neuron 18 (1997) 753–765.
[16] M.L. Furey, T. Tanskanen, M.S. Beauchamp, S. Avikainen, K. Uutela, R. Hari, J.V. Haxby, Dissociation of face-selective cortical responses by attention, Proc. Natl. Acad. Sci. USA 103 (2006) 1065–1070.
[17] N. Furl, N.J. van Rijsbergen, A. Treves, K.J. Friston, R.J. Dolan, Experience-dependent coding of facial expression in superior temporal sulcus, Proc. Natl. Acad. Sci. USA 104 (2007) 13485–13489.
[18] G. Ganis, M. Kutas, An electrophysiological study of scene effects on object identification, Brain Res. Cogn. Brain Res. 16 (2003) 123–144.
[19] N. George, B. Jemel, N. Fiori, R. Renault, Localisation of face and shape repetition effects in humans, Neuroreport 8 (1997) 1417–1423.
[20] K. Grill-Spector, R. Malach, fMR-adaptation: a tool for studying the functional properties of human cortical neurons, Acta Psychol. (Amst.) 107 (2001) 293–321.
[21] A.R. Hariri, A. Tessitore, V.S. Mattay, F. Fera, D.R. Weinberger, The amygdala response to emotional stimuli: a comparison of faces and scenes, Neuroimage 17 (2002) 317–323.
[22] A. Harris, K. Nakayama, Rapid face-selective adaptation of an early extrastriate component in MEG, Cereb. Cortex 17 (2007) 63–70.
[23] J.J. Heisz, S. Watter, J.M. Shedden, Automatic face identity encoding at the N170, Vision Res. 46 (2006) 4604–4614.
[24] R.N. Henson, Neuroimaging studies of priming, Prog. Neurobiol. 70 (2003) 53–81.
[25] A. Holmes, P. Vuilleumier, M. Eimer, The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials, Brain Res. Cogn. Brain Res. 16 (2003) 174–184.
[26] A. Ishai, P.C. Bikle, L.G. Ungerleider, Temporal dynamics of face repetition suppression, Brain Res. Bull. 70 (2006) 289–295.
[27] A. Ishai, L. Pessoa, P.C. Bikle, L.G. Ungerleider, Repetition suppression of faces is modulated by emotion, Proc. Natl. Acad. Sci. USA 101 (2004) 9827–9832.
[28] R.J. Itier, M.J. Taylor, Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs, Neuroimage 15 (2002) 353–372.
[29] C. Jacques, B. Rossion, Electrophysiological evidence for temporal dissociation between spatial attention and sensory competition during human face processing, Cereb. Cortex 17 (2007) 1055–1065.
[30] D.A. Jeffreys, Evoked potential studies of face and object processing, Vis. Cogn. 3 (1996) 1–38.
[31] B. Jemel, M. Calabria, J.F. Delvenne, M. Crommelinck, R. Bruyer, Differential involvement of episodic and face representations in ERP repetition effects, Neuroreport 14 (2003) 525–530.
[32] B. Jemel, M. Pisani, L. Rousselle, M. Crommelinck, R. Bruyer, Exploring the functional architecture of person recognition system with event-related potentials in a within- and cross-domain self-priming of faces, Neuropsychologia 43 (2005) 2024–2040.
[33] H. Kawasaki, O. Kaufman, H. Damasio, A.R. Damasio, M. Granner, H. Bakken, T. Hori, M.A. Howard 3rd, R. Adolphs, Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex, Nat. Neurosci. 4 (2001) 15–16.
[34] A. Keil, M.M. Bradley, O. Hauk, B. Rockstroh, T. Elbert, P.J. Lang, Large-scale neural correlates of affective picture processing, Psychophysiology 39 (2002) 641–649.
[35] H. Kim, L.H. Somerville, T. Johnstone, S. Polis, A.L. Alexander, L.M. Shin, P.J. Whalen, Contextual modulation of amygdala responsivity to surprised faces, J. Cogn. Neurosci. 16 (2004) 1730–1745.
[36] P. Krolak-Salmon, M.A. Henaff, A. Vighetto, O. Bertrand, F. Mauguiere, Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: a depth electrode ERP study in human, Neuron 42 (2004) 665–676.
[37] J.S. Morris, C.D. Frith, D.I. Perrett, D. Rowland, A.W. Young, A.J. Calder, R.J. Dolan, A differential neural response in the human amygdala to fearful and happy facial expressions, Nature 383 (1996) 812–815.
[38] L. Pessoa, M. McKenna, E. Gutierrez, L.G. Ungerleider, Neural processing of emotional faces requires attention, Proc. Natl. Acad. Sci. USA 99 (2002) 11458–11463.
[39] R. Righart, B. de Gelder, Context influences early perceptual analysis of faces – an electrophysiological study, Cereb. Cortex 16 (2006) 1249–1257.
[40] S.R. Schweinberger, V. Huddy, A.M. Burton, N250r: a face-selective brain response to stimulus repetitions, Neuroreport 15 (2004) 1501–1505.
[41] S.R. Schweinberger, E.-M. Pfutze, W. Sommer, Repetition priming and associative priming of face recognition: evidence from event-related potentials, J. Exp. Psychol. Learn. Mem. Cogn. 21 (1995) 722–736.
[42] S.R. Schweinberger, E.C. Pickering, A.M. Burton, J.M. Kaufmann, Human brain potential correlates of repetition priming in face and name recognition, Neuropsychologia 40 (2002) 2057–2073.
[43] M. Spiridon, B. Fischl, N. Kanwisher, Location and spatial profile of category-specific regions in human extrastriate cortex, Hum. Brain Mapp. 27 (2006) 77–89.
[44] J.J. Stekelenburg, B. de Gelder, The neural correlates of perceiving human bodies: an ERP study on the body-inversion effect, Neuroreport 15 (2004) 777–780.
[45] V. Sterpenich, A. D'Argembeau, M. Desseilles, E. Balteau, G. Albouy, G. Vandewalle, C. Degueldre, A. Luxen, F. Collette, P. Maquet, The locus ceruleus is involved in the successful retrieval of emotional memories in humans, J. Neurosci. 26 (2006) 7416–7423.
[46] M.J. Taylor, Non-spatial attentional effects on P1, Clin. Neurophysiol. 113 (2002) 1903–1908.
[47] P. Vuilleumier, M.P. Richardson, J.L. Armony, J. Driver, R.J. Dolan, Distant influences of amygdala lesion on visual cortical activation during emotional face processing, Nat. Neurosci. 7 (2004) 1271–1278.
[48] S. Watanabe, R. Kakigi, A. Puce, The spatiotemporal dynamics of the face inversion effect: a magneto- and electro-encephalographic study, Neuroscience 116 (2003) 879–895.
[49] S. Watanabe, K. Miki, R. Kakigi, Gaze direction affects face perception in humans, Neurosci. Lett. 325 (2002) 163–166.