BRAIN RESEARCH 1254 (2009) 84–98
available at www.sciencedirect.com
www.elsevier.com/locate/brainres
Research Report
EEG-MEG evidence for early differential repetition effects for fearful, happy and neutral faces

Shasha Morel a,b,⁎, Aurélie Ponz c, Manuel Mercier d, Patrik Vuilleumier c, Nathalie George a,b,e

a CNRS, UPR 640 LENA, Cognitive Neuroscience and Brain Imaging Laboratory, Paris, France
b UPMC Univ Paris 06, Paris, France
c Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, Geneva, Switzerland
d Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
e Centre MEG-EEG, Hôpital de la Salpêtrière, Paris, France
ARTICLE INFO

Article history:
Accepted 18 November 2008
Available online 3 December 2008

Keywords:
EEG
MEG
N170/M170
Face
Emotion
Repetition

ABSTRACT

To determine how emotional information modulates subsequent traces for repeated stimuli, we combined simultaneous electro-encephalography (EEG) and magneto-encephalography (MEG) measures during long-lag incidental repetition of fearful, happy, and neutral faces. Repetition effects were modulated by facial expression in three different time windows, starting as early as 40–50 ms in both EEG and MEG, then arising at the time of the N170/M170, and finally between 280 and 320 ms in MEG only. The very early repetition effect, observed at 40–50 ms over occipito-temporo-parietal regions, showed a different MEG topography according to the facial expression. This differential response to fearful, happy and neutral faces suggests the existence of very early discriminative visual processing of expressive faces, possibly based on the low-level physical features typical of different emotions. The N170 and M170 face-selective components both showed repetition enhancement selective to neutral faces, with greater amplitude for emotional than neutral faces on the first but not the second presentation. These differential repetition effects may reflect valence acquisition for the neutral faces due to repetition, and suggest a combined influence of emotion- and experience-related factors on the early stage of face encoding. Finally, later repetition effects consisted of an enhanced M300 (MEG) between 280 and 320 ms for fearful relative to happy and neutral faces that occurred on the first presentation but levelled out on the second presentation. This effect may correspond to the higher arousing value of fearful stimuli, which might habituate with repetition. Our results reveal that multiple stages of face processing are affected by the repetition of emotional information.

© 2008 Elsevier B.V. All rights reserved.
1. Introduction
Faces convey multiple types of information that are essential for interindividual interactions. Among these, emotional expressions play a central role because they are crucial for inferring the observed person's state of mind, feelings and intentions. This has led to considerable interest in the neural substrates of emotional face perception. Important advances have been made in elucidating these substrates, although the neural networks implicated in processing each specific expression are
⁎ Corresponding author. Laboratoire de Neurosciences Cognitives & Imagerie Cérébrale, CNRS UPR 640 LENA, 47 boulevard de l'Hôpital, 75651 Paris Cedex 13, France. Fax: +33 1 45 86 25 37.
E-mail address: [email protected] (S. Morel).
0006-8993/$ – see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.brainres.2008.11.079
not yet fully specified (for reviews, see LaBar and Cabeza, 2006; Vuilleumier and Pourtois, 2007). Moreover, the temporal dynamics of the brain responses to emotional faces remain relatively unclear (for a recent review, see Vuilleumier and Pourtois, 2007). In particular, it is still unclear which stage(s) of face processing may be influenced by the emotional expression of a face. Considering the saliency of emotional expressions and their importance in social perception, it could be expected that these may influence how the brain encodes and memorises an individual face, and that such influence may operate at early stages of face perception. Here, we investigated this issue by using an incidental repetition paradigm with fearful, happy, and neutral unknown faces in a combined electro- and magneto-encephalography (EEG-MEG) study. Traditional models of face processing, as well as more recent anatomo-functional versions of these models, have generally assumed independence between the initial perceptual analysis of individual faces and the processing of facial emotion (Bruce and Young, 1986; Gobbini and Haxby, 2007; Haxby et al., 2000). According to these models, emotional expression should influence the brain responses to faces only beyond the early time ranges of face perceptual analysis. In agreement with this view, several early studies using time-resolved techniques in healthy humans reported only late effects (>200 ms) associated with the perception of emotional facial expressions, in particular in the form of an enhanced Late Positive Component (LPC) elicited by emotional vs. neutral stimuli (Krolak-Salmon et al., 2001, 2004; Munte et al., 1998; Sato et al., 2001; for review, see Vuilleumier and Pourtois, 2007). However, an increasing number of findings point to much earlier influences of emotional faces, encompassing the well-known N170 response to faces (Batty and Taylor, 2003; Campanella et al., 2002; Eger et al., 2003; Miyoshi et al., 2004).
These findings contradict a number of previous studies that did not find any influence of emotional facial expression on the N170 (e.g. Eimer and Holmes, 2007; Krolak-Salmon et al., 2001; Munte et al., 1998). However, factors such as stimulus repetition, the number of faces and/or facial expressions used, as well as the type of paradigm, may have contributed to these negative results. Thus, further investigations are needed before firm conclusions can be drawn about the sensitivity of the N170 component to emotional facial expression. Moreover, some studies reported an influence of emotional expression on very early brain responses, even prior to the N170 (≤100 ms) (Batty and Taylor, 2003; Eger et al., 2003; Eimer and Holmes, 2002; Pourtois et al., 2004; see also Pizzagalli et al., 2002). Indirect evidence for early processing of emotional expressions also comes from studies suggesting an implication of subcortical pathways during the processing of emotional stimuli, including fearful expressions (Johnson, 2005; Morris et al., 1998; Pegna et al., 2005; Vuilleumier et al., 2003). These findings suggest that emotional information might exert very early influences on the visual processes underlying face encoding and memory, which could be unveiled by repetition effects. Repetition effects are particularly interesting in order to pinpoint the perceptual stages at which the processing of emotional and neutral faces differs. In essence, these effects refer to the difference observed between brain activity in response to the first and subsequent presentations of a stimulus, and they can thus disclose information retained
from a stimulus between the first and the repeated occurrence. Repetition effects depend on multiple parameters, such as the time lag between the first and the second presentations (Henson et al., 2004), the number of intervening items (Henson et al., 2003), the incidental vs. explicit nature of the task (Boehm and Sommer, 2005), as well as the type of stimuli (familiar vs. unfamiliar). However, although these paradigms have been used in many fMRI and electrophysiological studies of neutral face perception (for review, see Henson, 2003), they have rarely been used to investigate the temporal dynamics of emotional face encoding. With neutral unknown faces, early repetition effects have been observed in EEG as early as ∼50 ms post stimulus onset, after both short and long lags, during incidental repetition as well as explicit repetition detection tasks (George et al., 1997; Seeck et al., 1997). MEG studies are much scarcer, but very early face repetition effects (between 30 and 60 ms) have also been observed in a few recent studies (Bailey et al., 2005; Braeutigam et al., 2001). Although the interpretation of these effects remains unclear, it was proposed that they may reflect fast visual pathways, involving either rapid feed-forward flow of cortical information or subcortical-to-extrastriate-cortical pathways. In addition, the face-selective N170, traditionally associated with the initial stage of face perceptual analysis (e.g. Bentin et al., 1996; Eimer, 2000; George et al., 1996; Itier et al., 2006), has also been shown to be affected by face repetition, at least after short lags (Campanella et al., 2000; Itier and Taylor, 2002, 2004; Jemel et al., 2003). This effect can take the form of a decreased or increased N170 for repeated stimuli. The decrease of N170 activity has been related to more efficient processing of repeated faces, whereas the N170 increase has been attributed to a qualitative change in perceptual analysis for the repeated faces.
A later repetition effect, the N250r, has also been reported over occipito-temporal regions between 200 and 320 ms. However, this effect is mostly restricted to familiar faces and observed only for immediate or short repetition lags (Schweinberger et al., 2002). Finally, other reported repetition effects for unknown faces include modulations of Event-Related Potentials (ERPs) during later time ranges (>300 ms), such as a greater Late Positive Component (LPC) and/or a reduced N400-like wave for repeated compared to new faces over centro-parieto-occipital regions (Guillem et al., 2001; Kida et al., 2007; Munte et al., 1997; for review, see Paller et al., 2007). These late effects have been associated with episodic and semantic memory for faces. However, these effects can sometimes be reversed, or absent, in particular in the case of highly arousing or affective stimuli, which themselves give rise to an increased LPC (Codispoti et al., 2006a,b). Moreover, MEG studies investigating the immediate repetition of line drawings of objects (Penney et al., 2003) or longer-lag repetition of words (∼1 min; Dale et al., 2000; Dhond et al., 2001; Marinkovic et al., 2003) reported reduced electromagnetic activity for repeated items between ∼300 and 500 ms over fronto-parietal and fronto-temporal sites. Such a decrease of electrophysiological activity has been proposed to reflect the reduced computation required for the analysis of repeated items, and it may also relate to habituation processes (Grill-Spector et al., 2006). Regarding the repetition of emotional faces, to our knowledge, only one electrophysiological study has directly addressed this issue (Ishai et al., 2006). These authors used
Fig. 1 – Early repetition effect between 40 and 50 ms in EEG. (a) Grand average scalp topography of the mean ERP difference (2nd–1st presentation) between 40 and 50 ms. The evoked potential difference map averaged across all 3 facial expressions is represented over left and right head views. The black dots represent the electrodes of measurement. (b) Grand average EEG waveforms showing the early repetition effect. The ERPs for first (in black) and second (in red) stimulus presentations are represented on two electrodes measured respectively in the left (PO9) and right (PO10) hemispheres, for each facial expression. The arrows indicate the early repetition effect that was observed for neutral faces only. (c) Plot of the mean ERP amplitude between 40 and 50 ms. The plot compares the mean ERP amplitude (averaged across the hemispheres) for each facial expression (F: fearful, H: happy, N: neutral), under 1st (in blue) and 2nd (in red) presentations. The star indicates the significant repetition effect found for neutral faces only. Vertical bars represent standard errors of the mean.
MEG to compare the short-lag repetition effects for fearful and neutral faces that were either task relevant (targets) or task irrelevant (distracters). They observed reduced activity with the repetition of both types of faces, i.e., unaffected by emotional expression, in the time range of the M170 (160–185 ms), the magnetic component concomitant with the N170, and between 220 and 250 ms. Additionally, an interaction between emotion and repetition was observed in the late time window, but only when the faces were task irrelevant. This suggests that some modulation of face repetition by the facial expression might be more likely when repetition is incidental. In addition, some EEG studies examined the influence of a change in emotional expression between two consecutive presentations of the same or different individual face stimuli (Campanella et al., 2002; Guillaume and Tiberghien, 2001; Miyoshi et al., 2004). These studies found somewhat inconsistent results, since two of them reported a modulation of the N170 by changes in emotional expression between two presentations of the same individual face (Campanella et al., 2002; Miyoshi et al., 2004), whereas the other did not (Guillaume and Tiberghien, 2001). Moreover, short- and long-lag repetition effects are likely to reflect different neural processes. While the former may be mediated by transient neural activity associated with iconic memory traces, the latter is held to reflect long-lasting synaptic changes which may be associated with longer-term perceptual memory (Henson et al., 2004). Therefore, here, we sought to examine how emotional expression (fearful, happy and neutral) could influence
incidental repetition effects for faces at long lags. Our aim was to determine the exact time ranges during which differential repetition effects for these faces may be observed with EEG-MEG. Thus we focused on repetition effects and systematically tested whether these effects differed according to the facial expression. More specifically, we asked whether, in the conditions of our paradigm, differential repetition effects (i.e. differences in the electromagnetic responses to the 2nd vs. the 1st presentation of the faces) may be observed in the early time range corresponding to face perceptual analysis. According to the recent findings on the N170 and/or M170 sensitivity to emotion and repetition respectively, we expected that the early N170 (in EEG) and M170 (in MEG) responses to faces may be differentially modulated by the repetition of faces with distinct emotional expressions. We were also interested in examining whether other repetition effects at early (∼50 ms) or later latencies (>250 ms) could be observed in our paradigm. Furthermore, the comparison of repetition effects for fearful, happy, and neutral faces allowed us to determine whether such effects were selective to fearful stimuli, as opposed to happy and neutral stimuli, or more globally sensitive to any emotional vs. neutral faces. EEG and MEG provide unique insights into the temporal course of information processing by the human brain. Although these two techniques have a similar temporal resolution, they are highly complementary in revealing the activity of different neuronal assemblies. Indeed, MEG is best tuned to record neural assemblies whose resulting dipole is tangential to the cortical surface, while EEG is sensitive to both
Fig. 2 – Early repetition effect between 40 and 50 ms in MEG. (a) Grand average scalp topographies of the ERF difference (2nd–1st presentation) for each facial expression. The evoked magnetic field difference maps, averaged between 40 and 50 ms, are represented over left and right head views. The black dots represent the measured sensors chosen to cover the repetition effect for each facial expression, in each hemisphere. (b) Grand average MEG waveforms showing the early repetition effect. The ERFs for first (in black) and second (in red) stimulus presentations are displayed on two representative sensors respectively in the left (top row) and right (bottom row) hemispheres for each facial expression. The arrows indicate the early repetition effect. (c) Plots of the mean ERF amplitude between 40 and 50 ms. For each facial expression, the plots compare the mean ERF amplitude in each region of measurement (P: posterior; M: middle; A: anterior; averaged across the hemispheres), for the 1st (in blue) and the 2nd (in red) presentations. The stars indicate significant repetition effects. Vertical bars represent standard errors of the mean.
tangential and radial dipoles, with the best fit for radial dipoles. Furthermore, MEG has a more focal spatial resolution but is less sensitive to deep structures than EEG (Cohen and Cuffin, 1983). Therefore, the combination of these two techniques should allow us to identify more subtle effects than either method used in isolation, and should also help in understanding the observed effects.
2. Results

2.1. Behavioural results

Subjects responded accurately to immediate face repetitions, with a mean rate of correct responses of 91.4 ± 2.4% and a mean reaction time of 653 ± 21 ms.

2.2. Electrophysiological results

Several components classically elicited in visual paradigms could be identified in both ERFs and ERPs, namely the M100, M170, and M300 in MEG, and the P100, N170, and LPC in EEG. Here we will focus on brain responses that showed repetition effects. Such effects were observed on the N/M170 in EEG/MEG, on the M300 in MEG, and additionally in a very early time window between 40 and 50 ms over temporo-parieto-occipital sensors in both EEG and MEG. Because of its early latency, this effect will be described first.

2.2.1. Very early repetition effect between 40 and 50 ms
Visual examination of the grand average ERP data revealed a surprisingly early repetition effect, which consisted of a bilateral posterior negativity for the second relative to the first presentation of the faces between 40 and 50 ms (Fig. 1a). The statistical analysis on the mean amplitudes measured over 7 occipito-temporal electrodes confirmed a main difference between the first and second presentations of the same face (F(1,13) = 7.2, p < .02). Although the interaction between Repetition and Facial Expression was not significant (F(2,26) = 1.1, p > .10, εGG = .79), planned comparisons performed on each expression showed that this repetition effect reached significance only for neutral faces (F(1,13) = 15.0, p < .002). It was significant neither for fearful nor for happy faces (both F < 1) (Figs. 1b, c).
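The εGG values reported here and below are Greenhouse-Geisser epsilon estimates, used to correct the degrees of freedom of the repeated-measures ANOVAs for violations of sphericity. As a hedged illustration (this is not the authors' analysis code; the function and variable names are our own), the standard estimate can be computed from the double-centered covariance matrix of the within-subject levels:

```python
import numpy as np

def greenhouse_geisser_epsilon(scores):
    """Greenhouse-Geisser epsilon for one within-subject factor.

    scores : array of shape (n_subjects, k_levels), one column per level
    (e.g. fearful / happy / neutral). Returns epsilon in [1/(k-1), 1];
    epsilon = 1 under perfect sphericity.
    """
    k = scores.shape[1]
    S = np.cov(scores, rowvar=False)        # k x k covariance of the levels
    J = np.eye(k) - np.ones((k, k)) / k     # centering matrix
    C = J @ S @ J                           # double-centered covariance
    return np.trace(C) ** 2 / ((k - 1) * np.trace(C @ C))
```

The corrected test then evaluates F against (k − 1)·ε and (k − 1)(n − 1)·ε degrees of freedom, which is how a value such as εGG = .79 shrinks the nominal df of the uncorrected test.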
Fig. 3 – (a) Scalp topography of the N170. Evoked potential maps averaged across all conditions and all subjects are represented over left and right views at 160 ms. (b) Grand average EEG waveforms on electrodes on which the N170 peaked. The ERPs on two electrodes in the left (P7, PO9) and right (P8, PO10) hemispheres are displayed for the 1st and the 2nd presentations of fearful (red), happy (purple) and neutral (black) faces. The arrow indicates the N170 enhancement found on 2nd relative to 1st presentation of neutral stimuli only.
As the latency of this effect was very early, we checked whether any statistical difference between the first and second presentations could be observed in the baseline interval prior to or at stimulus onset. We performed statistical analyses in successive 10 ms time windows from −50 ms to +70 ms. No effect of Repetition was observed in any 10 ms time window before 40 ms or after 60 ms (all p > .10). The only time windows revealing a significant or marginally significant effect of Repetition were the 40–50 ms (F(1,13) = 7.2, p < .02) and the 50–60 ms (F(1,13) = 4.4, p = .055) windows. This confirmed our choice of the 40–50 ms time window for the analysis, which was initially selected based on examination of the grand average data.
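The successive-window check described above can be sketched as follows. This is a hedged illustration rather than the authors' pipeline: the data shapes, names, and the use of a simple paired t statistic in place of the repeated-measures ANOVA are our assumptions.

```python
import numpy as np

def windowed_repetition_tests(erp_first, erp_second, times_ms,
                              win_ms=10, t_start=-50, t_stop=70):
    """Paired t statistic (2nd vs. 1st presentation) in successive windows.

    erp_first, erp_second : (n_subjects, n_channels, n_samples) arrays of
    per-subject average ERPs; times_ms : (n_samples,) latencies in ms.
    Returns a list of (window_start, window_end, t) tuples.
    """
    results = []
    for start in range(t_start, t_stop, win_ms):
        mask = (times_ms >= start) & (times_ms < start + win_ms)
        # one mean amplitude per subject (averaged over channels and samples)
        a = erp_first[:, :, mask].mean(axis=(1, 2))
        b = erp_second[:, :, mask].mean(axis=(1, 2))
        d = b - a
        # paired t statistic across subjects
        t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
        results.append((start, start + win_ms, t))
    return results
```

Applied to the present design, only windows whose |t| exceeds the critical value would count as repetition effects, while a significant pre-stimulus window would instead flag a baseline confound.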
When we then examined whether MEG revealed a corresponding early repetition effect, we found that a posterior repetition-related pattern was indeed observed in the form of a bipolar response to the 2nd vs. 1st face presentation, with maximal amplitude between 40 and 50 ms. This effect was present for all three facial expressions but with a shift of topography from more posterior and superior sensors for fearful faces, to more anterior and inferior sensors for neutral faces, with happy faces eliciting a repetition effect on intermediate sensors between these two groups (Fig. 2a). Again, we first checked for any statistical difference between the first and second presentations in the baseline interval prior to or at stimulus onset, by performing statistical
Fig. 4 – (a) Scalp topography of the M170. Evoked magnetic field maps averaged across all conditions and all subjects are represented over left and right views at 160 ms. (b) Grand average MEG waveforms at sensors where the M170 peaked. The ERFs on two MEG sensors in the left (MLT33, MLT42) and right (MRT33, MRT42) hemispheres are displayed for the 1st and the 2nd presentations of fearful (red), happy (purple) and neutral (black) faces. The arrow indicates the M170 enhancement found on 2nd relative to 1st presentation of neutral stimuli only.
analyses in successive 10 ms time windows from −50 ms to +70 ms. There was no effect of Repetition in any time window before 30 ms or after 60 ms (all p > .10). There was a small trend towards a main effect of Repetition between 30 and 40 ms (F(1,13) = 3.2, p < .10), followed by a highly significant effect in the 40–50 ms and the 50–60 ms time windows (F(1,13) = 11.7, p < .005 and F(1,13) = 9.1, p < .01, respectively), which was maximal between 40 and 50 ms. We therefore focused on the 40–50 ms time window, as in EEG. Quantitative measurements were performed on three sets of sensors covering the repetition effect for all emotion conditions in both hemispheres. The statistical analysis on amplitudes with Facial Expression, Repetition, Region of measurement, and Hemisphere as within-subjects factors showed a significant main effect of Repetition (F(1,13) = 11.7, p < .005). Moreover, the 3-way interaction between Region, Facial Expression and Repetition was significant (F(4,52) = 3.4, p < .03, εGG = .85).
Subsequent planned comparisons confirmed that this early MEG repetition effect (i.e., the difference between first and second presentations) was significant for fearful faces in the posterior/superior sensor region only (F(1,13) = 9.0, p < .01), while it was significant for happy faces in the middle region only (F(1,13) = 4.8, p < .05), and for neutral faces in the anterior region only (F(1,13) = 11.4, p < .005) (Figs. 2b, c). Thus, MEG data revealed a clear repetition effect for all types of faces, but peaking at different sites according to the facial expression, and with a similar early latency corresponding to the EEG effects.
2.2.2. N170 and M170
In EEG, the statistical analysis on N170 peak amplitude showed no main effect of Repetition or Facial Expression, but a significant interaction between Facial Expression, Repetition and Hemisphere (F(2,12) = 5.2, p < .03; sphericity test: Mauchly's W = .58, p < .04). This interaction
Fig. 5 – (a) Scalp topography of the M300. Evoked magnetic field maps averaged across all conditions and all subjects and between 280 and 320 ms are represented over left and right views. (b) Grand average MEG waveforms on anterior temporal sensors where the M300 was maximal. The ERFs on two sensors in the left (MLT12, MLT22) and right (MRT12, MRT22) hemispheres are displayed for the 1st and the 2nd presentations of fearful (red), happy (purple) and neutral (black) faces. The arrow indicates the M300 amplitude reduction that was found on 2nd relative to 1st presentation of fearful stimuli only.
reflected, first, that there was a significant increase of the N170 with the repetition of neutral faces only (F(1,13) = 7.2, p < .02), which was slightly more marked over the left (mean difference = −1.4 ± 0.5 µV; p < .03) than the right hemisphere (mean difference = −1.0 ± 0.4 µV; p < .03) (Fig. 3). By contrast, no significant effect of Repetition was found for either happy or fearful faces (all F < 1). Second, there was a significant effect of Facial Expression on the first presentation only (F(2,26) = 4.8, p < .02, εGG = .94), with a smaller N170 for neutral (−9.7 ± 1.3 µV) than emotional faces (−10.1 ± 1.1 µV for fearful; −11.3 ± 1.2 µV for happy). This effect was again more evident in the left (p < .02) than in the right hemisphere (p < .09). No difference was seen between the three types of faces on the second presentation (all F < 1).
The statistical analysis on the N170 peak latency did not reveal any significant effect. Turning then to the MEG data, there was no main effect of Facial Expression or Repetition on the peak amplitude of the M170. However, the interaction between Facial Expression and Repetition was significant (F(2,26) = 4.0, p < .04, εGG = .89). Consistent with the EEG results, planned comparisons showed that the M170 amplitude was enhanced on the second as compared to the first presentation for neutral faces only (F(1,13) = 8.4, p < .02), while there was no such repetition effect for fearful and happy faces (all F < 2.3, all p > .15). Furthermore, on the first presentation, there was a significant effect of Facial Expression (F(2,26) = 4.3, p = .03, εGG = .85), with neutral faces eliciting a smaller M170 (183.9 ± 14.0 fT) than happy (205.4 ± 14.3 fT) and
fearful faces (200.1 ± 12.2 fT). On the second presentation, no significant difference was found among stimulus categories (all F < 1.2, p > .30) (Fig. 4). The differential pattern of repetition effects across expressions was thus highly consistent for both the N170 in EEG and the M170 in MEG. No effect of facial expression or repetition was found on the peak latency of the M170.
2.2.3. M300
Two components were observed in the late time range, namely the M300 in MEG and the LPC in EEG. The statistical analysis performed on the mean amplitude of the M300 revealed main effects of Repetition (F(1,13) = 13.6, p < .003) and of Facial Expression (F(2,26) = 4.0, p < .04, εGG = .89). The M300 was overall larger on the first than on the second presentation. Although the interaction between Facial Expression and Repetition was not significant (F < 1), this effect on the M300 was significant for the fearful faces only (F(1,13) = 5.2, p < .04), not for happy and neutral faces (both p > .30; Fig. 5). Moreover, the effect of Facial Expression was significant only for the first presentation (F(2,26) = 4.7, p < .03, εGG = .91 on the 1st presentation; F < 1 on the 2nd presentation). This was due to fearful faces yielding a greater M300 than both happy and neutral faces on their initial occurrence (both F(1,13) > 7.0, p < .02) but not on their subsequent presentation (both F < 1.4, p > .30; Fig. 5). There was no corresponding repetition effect in the EEG data, irrespective of facial expression. In particular, although there was no identifiable N250r in our data (Schweinberger et al., 2002, 2004), we specifically tested for the existence of any repetition effect on temporo-occipital electrodes between 260 and 320 ms. This analysis showed no significant result (F < 1). Finally, the LPC measured over fronto-centro-parietal electrodes between 300 and 600 ms only showed an effect of emotion, with a greater mean amplitude for fearful (5.33 ± 0.54 µV) than both happy (4.47 ± 0.44 µV) and neutral (4.33 ± 0.63 µV) faces (F(2,12) = 4.5, p < .04; sphericity test: Mauchly's W = .60, p < .05). However, there was neither an effect of repetition (F(1,13) = 2.3, p > .10) nor an interaction between repetition and emotion on this late component (F < 1).
3. Discussion
This study reveals several modulations of brain activity with distinctive time-courses during long-lag incidental repetition of unknown faces, varying as a function of their emotional or neutral expression. Furthermore, by combining EEG and MEG simultaneously, we were able to identify both common and specific effects with each type of measure. Our main goal was to test whether experience-related effects (i.e. the 2nd vs. 1st presentation difference) might differ for fearful, happy, or neutral faces, and whether these might affect neural responses at early perceptual processing stages, such as during the N170 or M170. Our data revealed repetition effects at three distinct processing stages. The first repetition effects started as early as 40–50 ms in both MEG and EEG, followed by modulations in the time range of the M/N170, and then between 280 and 320 ms in MEG. Only the latter effect was selective for fearful faces, whereas the earlier effects showed
stronger repetition-related modulations for neutral faces. We will discuss these results in their temporal order. First, our data revealed a very early repetition effect around 40–50 ms over posterior regions, which was reliably found in both EEG and MEG. In EEG, this effect reached significance only for neutral faces, while MEG showed a more global repetition effect, but with peaks at different sites depending on the facial expression. Because MEG is more sensitive to small changes in dipole orientation or location, and thus has a more focal spatial resolution than EEG, it is likely that MEG allowed a better distinction between neural networks than EEG could (Yoneda et al., 1995). Here, the shift of topography observed in MEG suggests that the repetition of fearful, happy and neutral faces recruited at least partially different networks, or that a common network was differentially activated in the very early (40–50 ms) time range. These neural assemblies may have been more optimally oriented for recording an EEG repetition effect for neutral faces (relative to fearful and happy faces), which would account for MEG revealing effects for all expressions. What might be the functional significance of such an early effect? A cautious interpretation is required, since such a latency raises the question of its neural substrates, which may involve the earliest stage of cortical visual processing in V1 (Foxe and Simpson, 2002; Pourtois et al., 2008), or even a contribution of subcortical structures, given that it is still controversial whether this latency is sufficient for visual information to reach the primary visual cortex (Poghosyan et al., 2005; Vanni et al., 2004; Yoneda et al., 1995).
Nevertheless, a growing number of studies have reported similar evidence for early modulations of electromagnetic brain responses beginning as early as 50 ms (George et al., 1997; Mouchetant-Rostaing et al., 2000b; Seeck et al., 1997) or even earlier (Bailey et al., 2005; Braeutigam et al., 2001; Inui et al., 2006; Luo et al., 2007; Mouchetant-Rostaing et al., 2000a; Poghosyan et al., 2005; Yoneda et al., 1995). In particular, some studies have observed face repetition effects arising from 50 ms in EEG (George et al., 1997) and between 30 and 60 ms in MEG (Bailey et al., 2005; Braeutigam et al., 2001). These authors have proposed that such very early repetition effects might reflect fast visual pathways, involving either rapid feed-forward flow of cortical information or subcortical-to-extrastriate-cortical pathways bypassing V1 (Bullier, 2001; Hamm et al., 2003). In line with this, several fMRI studies have provided evidence for the activation of subcortical routes during the processing of emotional faces (Morris et al., 1998; Pegna et al., 2005; Vuilleumier et al., 2003). Consistent with these studies, our result reinforces the view that some modulations of face perception may be observed at the earliest stages of visual processing, and may be influenced by experience-related factors (i.e. stimulus repetition) as well as by emotional cues. The very early latency of this effect suggests that it is likely to be related to relatively low-level cues. However, this result cannot be merely attributed to the repetition of very low-level pixelwise information, since the stimuli were presented at different sizes on their first and second presentations (Vuilleumier et al., 2002). It is also unlikely to be explained by the RMS contrast, which differed between happy and both fearful and neutral faces, as the biggest difference in our repetition effect was observed
between fearful and neutral faces. Instead, we propose that this effect may reflect the processing of low-level visual cues intrinsically related to the emotional expression (e.g. local variations in contrast produced by the smile for happy faces or wide sclera size of the fearful faces). Hence, our results suggest the existence of a very early differential visual processing of expressive faces which could be based on coarse physical features typical of the different emotions. A second important finding concerns the N170 and M170 that were differentially sensitive to the repetition of neutral and emotional faces. Both components showed a similar effect, with increased amplitude for neutral faces when repeated (but not for emotional faces). These components have been traditionally associated with the initial stage of face-selective perceptual analysis, immune to the influence of higher level cognitive processes (e.g. Bentin et al., 1996; Bentin and Golland, 2002; Eimer, 2000; George et al., 1996; Itier et al., 2006; Liu et al., 2000). However, a growing number of studies have found N170 and M170 modulations by various parameters, particularly related to individual face recognition and familiarity (Caharel et al., 2005; Heisz et al., 2006a,b; Itier and Taylor, 2002, 2004; Jacques and Rossion, 2006; Jemel et al., 2003; Liu et al., 2002). Recent studies have also shown N/M170 modulation by short-lag face repetition (Campanella et al., 2000; Ishai et al., 2006; Itier and Taylor, 2002, 2004; Jemel et al., 2003). Finally, although this finding remains disputed, some recent studies reported N170 sensitivity to facial emotion (Batty and Taylor, 2003; Campanella et al., 2002; Eger et al., 2003; Miyoshi et al., 2004). 
Thus, our results further reinforce the idea of a contextual penetrability of the N170 and M170 by showing that both these brain responses can be differentially affected by long-lag repetitions of faces with distinctive influences of emotional and neutral expressions. Moreover, repetition enhancements have been observed in neuroimaging studies with meaningless or ambiguous visual stimuli (Bentin and Golland, 2002; Dolan et al., 1997; George et al., 1999; Penney et al., 2001, 2003). They have been proposed to reflect the building of a representation for a previously unknown object or the activation of additional processes following the repetition of ambiguous stimuli (for review Henson, 2003). In our study, the N170 and M170 repetition enhancement for neutral faces might thus reflect some change in the representation of these faces elicited by repetition. More precisely, two processes might have come into play. Firstly, it is possible that neutral faces seen for the first time in a stream of emotional stimuli may have appeared ambiguous to the participants, and that repetition then contributed to their disambiguation. Secondly, it is possible that happy and fearful faces seen for the first time were more arousing than neutral faces, eliciting deeper encoding or more attentional capture, and leading to greater N/M170 (Holmes et al., 2003). Repetition might have increased the subjective relevance or affective value of initially neutral stimuli, consistent with the view that novelty and familiarity are major dimensions of emotional appraisal (Sander et al., 2005; see also Campanella et al., 2002; Miyoshi et al., 2004). In any case, our results reinforce and extend the recent findings on the N170 sensitivity to experience- and emotion-related factors, while providing new and highly converging data on M170. This convergence of EEG and MEG data obtained
simultaneously from the same observers is, to the best of our knowledge, unique in the literature and constitutes an internal replication of our finding. Thus, here we show a combined influence of long-lag incidental repetition and facial expressions on both these face-selective responses, with increased response to neutral faces after their repetition and to emotional faces during their first presentation. In line with recent models of face processing (Calder and Young, 2005; Vuilleumier, 2007), these results point to an early integration of emotional expressions and familiarity in the encoding of individual faces. Note that such integration might involve top-down modulatory influences that reach the visual ventral stream in the time range of the N/M170 (see Foxe and Simpson, 2002). Such influences are consistent with fMRI studies showing enhanced face processing in the fusiform cortex for emotional relative to neutral faces (Vuilleumier et al., 2001), as well as with intracranial recordings in humans (Barbeau et al., 2008) and single-cell data in monkeys (Sugase et al., 1999). Our results may appear to contradict those of Ishai et al. (2006), who reported greater M170 to neutral than fearful faces, together with a decrease of this component following the repetition of both types of faces, i.e. irrespective of emotional information. However, several important differences between our protocol and that of Ishai et al. may explain these apparent discrepancies. Ishai et al. (2006) used an explicit task where the repetition of some stimuli had to be detected while that of others had to be ignored, among unrepeated neutral distractors. These task constraints may have induced different encoding strategies as compared with our paradigm, in which the main repetition factor was incidental.
In addition, in Ishai et al., each stimulus was presented 3 to 4 times (including one supplementary presentation for the targets), for a much longer duration (2 s), and with much shorter repetition lag and only a few intervening stimuli. As mentioned in the Introduction, repetition effects are highly dependent on such parameters, and it is likely that short- and long-lag repetition paradigms tap into different memory processes. Thus, our data add new support to the notion that M170 may be modulated by incidental, long-lag repetition and that these effects may interact with the affective value of stimuli. Interestingly, none of the early effects observed here was selective to fearful faces. Although such threat-related stimuli may appear as the most important ones to decode rapidly for survival-related adaptive behaviour, there is still some debate in the literature as to whether the early modulation of brain responses by emotional faces may be selective to the expression of fear (Brosch et al., 2008). Thus, in line with other studies (e.g. Ashley et al., 2004; Lewis et al., 2003; Miyoshi et al., 2004), our data rather suggest a more general effect of emotional facial expression in the early time-range, irrespective of emotional valence. Finally, there was a late repetition effect that was modulated by emotion, in MEG only. This effect was functionally very different from those observed in the early time range. It was in the form of reduced magnetic response to the 2nd (as compared to the 1st) presentation of fearful faces, observed over temporo-frontal sensors between 280 and 320 ms. More precisely, on first presentation, the M300 mean amplitude was
greater for fearful compared to happy and neutral faces, but its amplitude decreased selectively with the repetition of fearful faces, so that no significant difference between the three facial expressions was found on the second stimulus presentation. Such an effect may reflect the higher arousing or threatening value of fearful relative to happy and neutral stimuli (e.g. Lang et al., 1998), which was attenuated during the second presentation of the faces. Several MEG studies have observed a decrease of late magnetic responses with repetition (Dhond et al., 2001, 2005; Ishai et al., 2006; Marinkovic et al., 2003; Penney et al., 2003; Sekiguchi et al., 2001). This was attributed to the reduced computation required for the analysis of repeated items, and compared to the EEG N400 that reflects semantic and contextual processing and is known to be reduced for repeated (old) faces compared to unrepeated (new) faces (for review see Boehm and Paller, 2006). However, there was no evidence for an N400-like effect in our EEG data, and the LPC did not show any repetition effect. This is consistent with the finding that several parameters (including emotionality) may influence late EEG responses in different directions, possibly leading to an absence of repetition effect on the LPC in our study (Olofsson and Polich, 2007). The reduced M300 response for the second presentation of fearful faces is therefore rather suggestive of fast habituation of the M300 response to these stimuli (Grill-Spector et al., 2006). Moreover, the anterior temporal topography of the M300 suggests that such an effect may involve direct or indirect modulatory influences from brain regions encompassing the amygdala, which is known to be highly sensitive to fearful stimuli and to show fast response habituation (e.g. Adolphs et al., 1995; Breiter et al., 1996). 
However, it is important to keep in mind that at such latencies, large-scale networks are activated in response to faces, involving distributed areas within the ventral visual pathway, including medial temporal structures, and extending into frontal regions (Barbeau et al., 2008). Source reconstruction was beyond the scope of the present paper and will be required to further delineate the cerebral sources of the M300 effect. In the previous MEG study on repetition by Ishai et al. (2006), enhanced ERFs were also reported for fearful as compared to neutral faces between 220 and 250 ms, together with a decrease of this activity with the repetition of both fearful and neutral target faces. Such “repetition suppression” was also obtained for distractor faces, but only in the case of neutral stimuli. It is unclear how these effects relate to ours, as the two protocols are very different. In particular, the explicit nature of the task and the shorter repetition lag used by Ishai et al. may have favoured the observation of an earlier “repetition suppression” effect. In conclusion, our study demonstrates that the EEG and MEG repetition effects for faces can be influenced by emotional facial expression at three different latency ranges. We first show a very early repetition effect (40–50 ms) that was sensitive to facial expression, suggesting the existence of an early discriminative visual processing for different facial expressions. Moreover, we report that the N170/M170 early responses to faces were differentially modulated by the repetition of emotional and neutral faces, supporting the idea of an early integration of emotional expressions in the encoding of individual faces. In addition, a late repetition
effect (M300) was selective to fearful faces, possibly related to higher arousal with this emotion that habituated rapidly. These results show that early visual processes related to face encoding can be modulated by stimulus repetition, even after several minutes, and that these processing stages are also sensitive to the emotional content of face stimuli.
4. Experimental procedure
4.1. Participants
Fourteen graduate or post-graduate students (6 females and 8 males, mean age 26.4 ± 1.1 years) participated in this study. All subjects were healthy, right-handed, and had normal or corrected-to-normal vision. They gave written informed consent and were paid for their participation. The study was approved by the French Ethical Committee on Human Research (CCPPRB n° 02002, Hôpital Pitié-Salpêtrière).
4.2. Stimuli
A total of 210 grey-scale photographs of different individuals, each posing one of three facial expressions (70 fearful, 70 happy and 70 neutral; half male and half female faces in each condition), were selected from a larger set of 409 faces. These faces were gathered from three different sources: the Karolinska Directed Emotional Faces set (Lundqvist et al., 1998), the AR Face Database of happy and neutral faces (Martinez and Benavente, 1998), and home-made photographs of emotional and neutral faces produced by actors. Because convincing expressions of fear were particularly difficult to obtain, we conducted two pre-tests to select our fearful stimuli. In the first pre-test, the 409 face photographs were presented to a total of 76 subjects who had to categorise each photograph according to its emotional expression (fearful, happy, neutral, or other). In the second pre-test, only fearful and neutral faces were presented to 29 other subjects, who were asked to rate each face according to the degree of fear expressed (0: not fearful at all, 1: moderately fearful, 2: fearful, 3: very fearful, 4: extremely fearful). Fearful stimuli were selected for our EEG/MEG study only if they were categorised as fearful by at least 70% of the subjects in the first pre-test and/or if they received one of the three highest ratings (2, 3 or 4) from at least 80% of the participants in the second pre-test. Hence, the mean degree of fear expressed by our final subset of 70 fearful stimuli was 2.8 ± 0.1. Note that although these selection criteria may seem somewhat lenient, they correspond to the same level of fear ratings obtained with a large set of different stimuli (Johansson et al., 2004). Our final set encompassed 55 KDEF faces, which have proven efficient in activating emotion processes (e.g. Sergerie et al., 2006; Vuilleumier et al., 2003), and 15 home-made faces. All faces were presented on a black background.
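The two-pre-test inclusion rule above can be written down compactly. The sketch below is only an illustration of the decision logic; the argument names and the per-face proportions are hypothetical summaries, not data from the paper:

```python
def keep_fearful(prop_categorised_fearful: float, prop_rated_high: float) -> bool:
    """Inclusion rule for fearful stimuli as described in the text.

    prop_categorised_fearful: proportion of pre-test-1 subjects (out of 76)
        who categorised the face as fearful.
    prop_rated_high: proportion of pre-test-2 subjects (out of 29) who gave
        the face one of the three highest fear ratings (2, 3 or 4).
    A face is kept if it meets either criterion ("and/or" in the text).
    """
    return prop_categorised_fearful >= 0.70 or prop_rated_high >= 0.80
```

For example, a face categorised as fearful by 75% of pre-test-1 subjects is kept even if its pre-test-2 ratings fall short, and vice versa.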
They were cropped similarly around the outline and centred so that the eyes and nose fell at the same positions across all pictures. In happy faces, the teeth were slightly shaded in grey in order to minimize the bright white area often produced by smiles. The photographs were then normalised in grey levels so that they did not differ significantly in global luminance (mean grey level: 94.8 ± 1.2 for fearful, 93.5 ± 1.2 for happy and 92.2 ± 1.0 for neutral pictures; F(2,204) = 1.1, p > .30). There remained, however, a small but significant effect of Facial Expression on RMS contrast¹ (F(2,204) = 3.8, p < .03). This was only due to the global contrast being slightly smaller for happy (0.17 ± 0.3) than for both fearful (0.19 ± 0.4) and neutral faces (0.18 ± 0.3). This bias was considered in the interpretation of our effects. Each stimulus was finally resized into two different formats: small (180 pixels in height, 5.4° of visual angle; width 138 ± 0.6 pixels, ∼4° of visual angle) and large (234 pixels in height, 6.9° of visual angle; width 179 ± 0.8 pixels, ∼5.4° of visual angle). Neutral, fearful and happy stimuli did not differ significantly in width (F(2,204) = 1.3, p > .20).

Fig. 6 – Schematic view of the experiment. Fearful, happy and neutral faces were repeated across blocks, with 0, 1 or 2 intervening blocks between the 1st and the 2nd presentations of the faces. A few immediately repeated target faces were added within each block for the purpose of the task and were not taken into account in further analyses.
4.3. Procedure
Participants were comfortably seated in a dimly lit, electromagnetically shielded MEG room in front of a white screen (viewing distance 85 cm). Stimuli were projected on the screen through a system including a video-projector placed outside the room and two mirrors inside the MEG room. Participants completed 10 experimental blocks. The 210 face stimuli were distributed over 5 blocks, each containing 42 identities presented in a random order (14 fearful/14 happy/14 neutral; half male and half female in each case). In each trial, stimuli were presented for 200 ms, followed by an interstimulus interval randomised between 1500 and 2500 ms.
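The reported stimulus geometry follows from the standard visual-angle formula, θ = 2·atan(h / 2d), for a stimulus of physical height h viewed at distance d (85 cm here). A minimal check (the physical heights in cm are our back-computed assumptions, not values given in the text):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle (in degrees) subtended by a stimulus of the given
    physical size at the given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# At an 85 cm viewing distance, a face ~8.0 cm tall subtends roughly the
# reported 5.4 deg (small format), and ~10.3 cm subtends roughly 6.9 deg
# (large format).
small_deg = visual_angle_deg(8.0, 85.0)
large_deg = visual_angle_deg(10.3, 85.0)
```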
¹ The Root Mean Square (RMS) contrast was defined as $\mathrm{RMS}_c = \sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2 / (N-1)}$, with $0 \le x_i \le 1$ corresponding to normalized grey levels.
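This footnote definition is simply the sample standard deviation of the normalized grey levels, which can be computed as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def rms_contrast(image) -> float:
    """RMS contrast of an image with grey levels normalized to [0, 1].

    Implements the footnote definition: the square root of the sum of
    squared deviations from the mean grey level, divided by N - 1.
    """
    x = np.asarray(image, dtype=float).ravel()
    return float(np.sqrt(np.sum((x - x.mean()) ** 2) / (x.size - 1)))
```

Because of the N − 1 denominator, this is equivalent to `np.std(x, ddof=1)` on the flattened image.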
Each block was repeated once during the experiment, after either 0, 1, or 2 other intervening blocks (Fig. 6). In repeated blocks, stimuli were presented in a different random order than the first time. Thus, the time interval between the first and the second presentation of a given face ranged from 1 to 7 min, corresponding to 21 to 105 intervening items between the two presentations. Furthermore, the size of the stimuli varied between the first and second presentations, in order to avoid mere pixelwise repetition (see Vuilleumier et al., 2002). Half the subjects saw the first face presentation at the smaller size (180 pixels high) and the second one at the bigger size (234 pixels high), while the order was reversed for the other half of the subjects. In addition, 4 to 8 target faces (different from the 210-face set) were added in each block for the purpose of the task. These faces were immediately repeated, and participants were instructed to signal these immediate repetitions (1-back task) by a button press. This task ensured that subjects paid attention to each face. These target faces were not included in any further analysis.
4.4. Simultaneous EEG and MEG acquisition
The experiment was conducted at the MEG Centre of the Hôpital de la Salpêtrière, Paris, France. Magnetic fields were measured with a 151-DC-SQUID whole-head MEG system (Omega 151, CTF Systems, Port Coquitlam, B.C., Canada), including 17 external reference gradiometers and magnetometers used to apply a synthetic third-order gradient to all MEG signals for ambient field correction. Three coils were attached to reference landmarks on the participant (left and right preauricular points, plus nasion) in order to control the head position at the beginning of each block and reposition it if
needed. Electrical activity was recorded simultaneously with a 64 Ag/AgCl unipolar electrode cap and processed by the MEG system. Electrode placement followed the Extended International 10–20 system, including low fronto-occipito-temporal electrodes (FT9/10, F9/10, TP9/10, P9/10, PO9/10, O9/10 and Iz). The EEG signal was referenced on-line against the nose potential. The recording also included the signal of a photodiode that detected the actual appearance of the stimuli on the screen within the MEG room. This made it possible to correct for the delay introduced by the video-projector (24 ms) and to average event-related potentials (ERPs) and magnetic fields (ERFs) precisely time-locked to the signal of the photodiode, and hence to the actual onset of the stimulus. Note, however, that this may result in slightly earlier latencies than typically reported in the EEG literature, where the display delay on the computer screen, related to the monitor's refresh rate and response time, is usually not corrected for. Subjects' responses and MEG/EEG signals were recorded online at a digitization rate of 625 Hz. During acquisition, EEG signals were band-pass filtered between 0.16 and 100 Hz, and MEG signals between 0 and 100 Hz. Raw EEG and MEG data were segmented into epochs from 200 ms before to 1400 ms after stimulus onset. Data were baseline corrected to the first 200 ms of the epoch. Participants were instructed to keep their gaze fixed on the centre of the screen and to avoid blinking during the trials. Horizontal eye movements were assessed by subtracting the signals from the F9 and F10 electrodes, and vertical eye movements were recorded by two electrodes placed above and below the right eye. After removal of EEG artefacts, vertical eye movements and blinks were corrected by an automatic eye-movement correction program (Gratton et al., 1983).
Event-Related Potentials (ERPs) and Event-Related Magnetic Fields (ERFs) were averaged for each condition between 200 ms before and 900 ms after stimulus onset (mean number of averaged trials: 46.9 ± 2.3 for fearful, 47.1 ± 2.2 for happy, and 47.3 ± 2.2 for neutral faces; F < 1). Data were digitally low-pass filtered at 30 Hz.
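The epoching, baseline correction and per-condition averaging described above can be sketched in a few lines. This is a simplified illustration, not the authors' pipeline; sampling rate (625 Hz) and window bounds match the text, and the function name is ours:

```python
import numpy as np

def epoch_and_average(signal, onsets, sfreq=625.0, tmin=-0.2, tmax=0.9):
    """Average event-locked epochs from a continuous recording.

    signal: (n_channels x n_samples) continuous data.
    onsets: sample indices of stimulus onsets for one condition.
    Each epoch spans tmin..tmax s around onset; the pre-stimulus interval
    (tmin..0) serves as baseline and its mean is subtracted per channel.
    Returns the (n_channels x n_times) average over epochs (ERP/ERF).
    """
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([signal[:, o - pre:o + post] for o in onsets])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)
```

In practice a dedicated toolbox would handle artefact rejection and filtering as well; the sketch only shows the time-locking and baseline arithmetic.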
4.5. Measures and statistical analyses
Our analyses focused on the ERP and ERF activities in three distinct time windows: a very early activity between 40 and 50 ms (in both EEG and MEG), followed by the N170/M170 (in EEG and MEG) and the M300 (in MEG); the N250r and LPC were additionally examined in EEG. For EEG, the mean amplitude of the early repetition-sensitive activity was measured between 40 and 50 ms on 7 posterior electrodes in each hemisphere for every subject (O1/2, O9/10, PO7/8, PO9/10, P7/8, P9/10, and TP9/10). These parameters of measurement were chosen to cover the peak of the repetition effect (the ERP difference between the first and second presentations) as observed on the grand average data, where this effect was first revealed. Then, the sensors and time interval for the N170 measurement were set individually for each subject. Thus, the N170 peak latency and amplitude were measured from 2 temporo-occipital electrodes out of 4 in each hemisphere (PO7/8, PO9/10, P7/8 and P9/10), which were fitted to the peak activity of the N170 for each subject, for each hemisphere and for each condition. Moreover, we checked
that there was not any N250r-like repetition effect by measuring the mean ERP amplitude in every subject in the time window (260–320 ms) and on the subset of sensors (PO7/8, PO9/10, P5/P6, P7/8, P9/10, TP7/8, and TP9/10) where this component has been reported (see Schweinberger et al., 2004). Finally, we measured the Late Positive Component (LPC). As this late component was widespread and well averaged across subjects, we measured its mean amplitude in a time window (300–600 ms) and on a subset of sensors (FC3, C3, CP3, P3, FCz, Cz, CPz, Pz, FC4, C4, CP4, and P4) centred on the grand mean peak of the component. For MEG, the mean amplitude of the early repetition-sensitive activity was measured between 40 and 50 ms from three sets of 4 MEG sensors in each hemisphere, selected to encompass the repetition effect observed on the grand average data for every facial expression. The posterior and superior set consisted of MLT15, MLT16, MLT26, and MLO22 in the left hemisphere and MRT15, MRT16, MRP32, and MRP33 in the right hemisphere. The middle set consisted of MLT13, MLT14, MLT24, and MLT25 in the left hemisphere and MRO11, MRO12, MRO21, and MRO22 in the right hemisphere. The anterior set of sensors consisted of MLT22, MLT23, MLT32, and MLT33 in the left hemisphere and MRT33, MRT34, MRT42, and MRT43 in the right hemisphere. Then, the sensors and time interval for the M170 measurement were set individually for each subject. Namely, the M170 measures were taken from 4 occipito-temporal sensors that were fitted to the peak of the activity in each hemisphere and for each subject. We picked the amplitude and latency of the maximum of this wave on each selected sensor for each condition.
Finally, the mean amplitude of the late component, peaking around 300 ms (M300), was measured in a time window (280–320 ms) and on a subset of right and left anterior fronto-temporal sensors (ML/RT11, ML/RT12, ML/RT13, ML/RT22, ML/RT23, ML/RT32, ML/RF34, and ML/RF45) centred on the grand mean peak of this component. For each of these responses, the parameters measured (peak latency, peak amplitude or mean amplitude) were analysed using either univariate (ANOVA) or multivariate (MANOVA) analyses of variance for repeated measures. Namely, for each comparison with more than one degree of freedom, we systematically tested the sphericity assumption (Mauchly's test), since this assumption is critical for the validity of the univariate approach. When Mauchly's test was non-significant, the univariate approach (ANOVA with Greenhouse–Geisser correction) was retained, and εGG is reported. When Mauchly's test was significant (i.e. the sphericity assumption was violated), MANOVA was preferred in order to ensure statistical validity. In practice, Mauchly's test was found to be significant (i.e. sphericity assumption violated) for the three-way interaction between Repetition, Facial Expression and Hemisphere on N170 amplitude and for the effect of Facial Expression on LPC mean amplitude (see Results; all other Mauchly's p > .10). Moreover, as the gender of the participants has been shown to modulate ERPs recorded in response to emotions (e.g. Kesler-West et al., 2001), this factor was included as a between-subject factor in a prior analysis, in order to test for any effect of emotion that could be due to gender. No main effect of Gender or any interaction between Gender and Facial
Expression or between Gender, Facial Expression and Repetition was observed on any of the components measured in either EEG or MEG (all p > .10). The Gender factor was thus dropped from the analyses reported here. Thus, statistical analyses (ANOVA/MANOVA) were performed with Facial Expression (fearful, happy or neutral), Repetition (1st or 2nd presentation) and, when appropriate, Hemisphere (left or right) as within-subject factors. For the MEG early repetition effect, there was also a Region factor (posterior, middle or anterior). Additional planned comparisons were run according to our working hypotheses and significant effects or interactions. In particular, we systematically checked whether any main effect of Repetition differed according to the Facial Expression (see Ishai et al., 2006 for a similar approach). All statistical analyses were performed using Statistica 7 (StatSoft, Inc.) software. Prior to the statistical analyses, MEG data were multiplied by −1 in one or the other hemisphere. This was done because MEG topographies were systematically bipolar, with an outgoing field in one hemisphere corresponding to an ingoing field in the opposite hemisphere.
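The ANOVA-versus-MANOVA decision rule described above hinges on Mauchly's test of sphericity. A self-contained sketch of the standard test statistic is given below (our own implementation of the textbook formula, not the authors' Statistica code; the 0.05 threshold is an assumption):

```python
import numpy as np
from scipy import stats

def mauchly_test(data):
    """Mauchly's test of sphericity for an (n_subjects x k_conditions) array.

    Returns (W, chi2, df, p). Following the decision rule in the text:
    if p is non-significant, sphericity is retained and a repeated-measures
    ANOVA with Greenhouse-Geisser correction is used; if significant,
    MANOVA is preferred.
    """
    n, k = data.shape
    # Orthonormal contrast matrix spanning the (k-1)-dim space of condition
    # differences: columns of e_i - e_k span the zero-sum subspace, and QR
    # orthonormalizes them.
    C = np.linalg.qr(np.vstack([np.eye(k - 1), -np.ones((1, k - 1))]))[0]
    S = C.T @ np.cov(data, rowvar=False) @ C      # covariance of the contrasts
    p_ = k - 1
    W = np.linalg.det(S) / (np.trace(S) / p_) ** p_   # Mauchly's W in (0, 1]
    f = 1.0 - (2 * p_**2 + p_ + 2) / (6.0 * p_ * (n - 1))
    chi2 = -(n - 1) * f * np.log(W)                   # chi-square approximation
    df = p_ * (p_ + 1) // 2 - 1
    return W, chi2, df, stats.chi2.sf(chi2, df)
```

By the AM-GM inequality W never exceeds 1, and W = 1 exactly when the contrast covariance is spherical (proportional to the identity).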
Acknowledgments We thank Antoine Ducorps, Denis Schwartz, Florence Bouchet and Dr. Pascale Pradat-Diehl for assistance with data acquisition and analysis, Bernard Renault for discussions at preliminary stages of data analysis and Catherine Tallon-Baudry for helpful discussion and advice. We also thank the actors who posed the emotional expressions for our home-made database. This work was supported by grants from the French Ministère de la Recherche (ACI “Neurosciences Intégratives et Computationnelles”-NIC0081) and from the Agence Nationale de la Recherche (project no P005336).
REFERENCES
Adolphs, R., Tranel, D., Damasio, H., Damasio, A.R., 1995. Fear and the human amygdala. J. Neurosci. 15, 5879–5891. Ashley, V., Vuilleumier, P., Swick, D., 2004. Time course and specificity of event-related potentials to emotional expressions. NeuroReport 15, 211–216. Bailey, A.J., Braeutigam, S., Jousmaki, V., Swithenby, S.J., 2005. Abnormal activation of face processing systems at early and intermediate latency in individuals with autism spectrum disorder: a magnetoencephalographic study. Eur. J. Neurosci. 21, 2575–2585. Barbeau, E.J., Taylor, M.J., Regis, J., Marquis, P., Chauvel, P., Liegeois-Chauvel, C., 2008. Spatio temporal dynamics of face recognition. Cereb. Cortex 18, 997–1009. Batty, M., Taylor, M.J., 2003. Early processing of the six basic facial emotional expressions. Brain Res. Cogn. Brain Res. 17, 613–620. Bentin, S., Golland, Y., 2002. Meaningful processing of meaningless stimuli: the influence of perceptual experience on early visual processing of faces. Cognition 86, B1–14. Bentin, S., Allison, T., Puce, A., Perez, E., McCarthy, G., 1996. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 8, 551–565. Boehm, S.G., Sommer, W., 2005. Neural correlates of intentional and incidental recognition of famous faces. Brain Res. Cogn. Brain Res. 23, 153–163.
Boehm, S.G., Paller, K.A., 2006. Do I know you? Insights into memory for faces from brain potentials. Clin. EEG Neurosci. 37, 322–329. Braeutigam, S., Bailey, A.J., Swithenby, S.J., 2001. Task-dependent early latency (30–60 ms) visual processing of human faces and other objects. NeuroReport 12, 1531–1536. Breiter, H.C., Etcoff, N.L., Whalen, P.J., Kennedy, W.A., Rauch, S.L., Buckner, R.L., Strauss, M.M., Hyman, S.E., Rosen, B.R., 1996. Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17, 875–887. Brosch, T., Sander, D., Pourtois, G., Scherer, K.R., 2008. Beyond fear: rapid spatial orienting toward positive emotional stimuli. Psychol. Sci. 19, 362–370. Bruce, V., Young, A., 1986. Understanding face recognition. Br. J. Psychol. 77 (t 3), 305–327. Bullier, J., 2001. Integrated model of visual processing. Brain Res. Brain Res. Rev. 36, 96–107. Caharel, S., Courtay, N., Bernard, C., Lalonde, R., Rebai, M., 2005. Familiarity and emotional expression influence an early stage of face processing: an electrophysiological study. Brain Cogn. 59, 96–100. Calder, A.J., Young, A.W., 2005. Understanding the recognition of facial identity and facial expression. Nat. Rev., Neurosci. 6, 641–651. Campanella, S., Hanoteau, C., Depy, D., Rossion, B., Bruyer, R., Crommelinck, M., Guerit, J.M., 2000. Right N170 modulation in a face discrimination task: an account for categorical perception of familiar faces. Psychophysiology 37, 796–806. Campanella, S., Quinet, P., Bruyer, R., Crommelinck, M., Guerit, J.M., 2002. Categorical perception of happiness and fear facial expressions: an ERP study. J. Cogn. Neurosci. 14, 210–227. Codispoti, M., Ferrari, V., Bradley, M.M., 2006a. Repetitive picture processing: autonomic and cortical correlates. Brain Res. 1068, 213–220. Codispoti, M., Ferrari, V., De Cesarei, A., Cardinale, R., 2006b. Implicit and explicit categorization of natural scenes. Prog. Brain Res. 156, 53–65. 
Cohen, D., Cuffin, B.N., 1983. Demonstration of useful differences between magnetoencephalogram and electroencephalogram. Electroencephalogr. Clin. Neurophysiol. 56, 38–51. Dale, A.M., Liu, A.K., Fischl, B.R., Buckner, R.L., Belliveau, J.W., Lewine, J.D., Halgren, E., 2000. Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron 26, 55–67. Dhond, R.P., Buckner, R.L., Dale, A.M., Marinkovic, K., Halgren, E., 2001. Spatiotemporal maps of brain activity underlying word generation and their modification during repetition priming. J. Neurosci. 21, 3564–3571. Dhond, R.P., Witzel, T., Dale, A.M., Halgren, E., 2005. Spatiotemporal brain maps of delayed word repetition and recognition. NeuroImage 28, 293–304. Dolan, R.J., Fink, G.R., Rolls, E., Booth, M., Holmes, A., Frackowiak, R.S., Friston, K.J., 1997. How the brain learns to see objects and faces in an impoverished context. Nature 389, 596–599. Eger, E., Jedynak, A., Iwaki, T., Skrandies, W., 2003. Rapid extraction of emotional expression: evidence from evoked potential fields during brief presentation of face stimuli. Neuropsychologia 41, 808–817. Eimer, M., 2000. Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clin. Neurophysiol. 111, 694–705. Eimer, M., Holmes, A., 2002. An ERP study on the time course of emotional face processing. NeuroReport 13, 427–431. Eimer, M., Holmes, A., 2007. Event-related brain potential correlates of emotional face processing. Neuropsychologia 45, 15–31. Foxe, J.J., Simpson, G.V., 2002. Flow of activation from V1 to frontal cortex in humans. A framework for defining “early” visual processing. Exp. Brain Res. 142, 139–150.