Brain and Cognition 106 (2016) 13–22
Timing of emotion representation in right and left occipital region: Evidence from combined TMS-EEG

Giulia Mattavelli a,b,*, Mario Rosanova c,d, Adenauer G. Casali c, Costanza Papagno a,b, Leonor J. Romero Lauro a,b

a Department of Psychology, University of Milano-Bicocca, Piazza Ateneo Nuovo, 1, Milano, Italy
b NeuroMi-Milan Center for Neuroscience, Milan, Italy
c Department of Biomedical and Clinical Sciences "L. Sacco", University of Milan, Via GB Grassi 74, Milano, Italy
d Fondazione Europea di Ricerca Biomedica FERB Onlus, Milano, Italy

* Corresponding author at: Department of Psychology, University of Milano-Bicocca, Piazza Ateneo Nuovo, 1, 20126 Milano, Italy. E-mail address: [email protected] (G. Mattavelli).
Article history: Received 22 May 2015 Revised 26 February 2016 Accepted 17 April 2016
Keywords: TMS-EEG Emotion processing Facial expressions
Abstract

Neuroimaging and electrophysiological studies provide evidence of hemispheric differences in processing faces and, in particular, emotional expressions. However, the timing of emotion representation in the right and left hemisphere is still unclear. Transcranial magnetic stimulation combined with electroencephalography (TMS-EEG) was used to explore cortical responsiveness during behavioural tasks requiring processing of either the identity or the expression of faces. Single-pulse TMS was delivered 100 ms after face onset over the medial prefrontal cortex (mPFC) while continuous EEG was recorded using a 60-channel TMS-compatible amplifier; the right premotor cortex (rPMC) was also stimulated as a control site. The same face stimuli with neutral, happy and fearful expressions were presented in separate blocks and participants were asked to complete either a facial identity or a facial emotion matching task. Analyses performed on posterior face-specific EEG components revealed that mPFC-TMS reduced the P1–N1 component. In particular, only when explicit expression processing was required did mPFC-TMS interact with emotion type and hemispheric side, with different timing: the earlier P1–N1 component was affected in the right hemisphere, whereas the later N1–P2 component was modulated in the left hemisphere. These findings support the hypothesis that the frontal cortex exerts an early influence on the occipital cortex during face processing and suggest a different timing of right and left hemisphere involvement in emotion discrimination.

© 2016 Elsevier Inc. All rights reserved.
1. Introduction

Emotion recognition is a crucial skill for human interaction; indeed, fast and efficient discrimination of facial expressions is necessary to interpret others' intentions and guide adaptive behaviour. Converging evidence from the psychological and neuroscience literature shows that the human sensory system is highly efficient in discriminating facial emotions both when they are the focus of attention and when they are implicitly processed (Vuilleumier, 2005). Moreover, neuroimaging research has identified a complex network of cortical and subcortical structures devoted to facial expression processing, including posterior regions such as the posterior occipital cortex, the fusiform gyrus and the posterior superior temporal gyrus, specifically involved in the
perceptual analyses of faces, and, for emotion processing, the amygdala, cingulate cortex, insula and prefrontal cortex (Adolphs, 2002; Haxby, Hoffman, & Gobbini, 2000). The first neural model, proposed by Haxby et al. (2000), suggested the existence of two partially independent pathways, processing respectively invariant aspects of faces, which lead to person identification, and changeable aspects, such as expression or eye gaze; however, recent findings showed that early stages of face processing are also modulated by emotions. These modulatory effects are probably mediated by interconnections among the amygdala, the prefrontal cortex and posterior regions, which allow the simultaneous activation of bottom-up and top-down mechanisms depending on the cognitive demand of behavioural tasks (Ishai, 2008; Vuilleumier & Pourtois, 2007). The modulation of face processing by emotions is documented by electrophysiological studies investigating the time-course of face processing. These studies revealed that emotional expressions can modulate event-related potentials (ERPs) at different stages.
Traditionally, early ERP components recorded from temporo-occipital electrodes at 100–120 ms (P1) and 150–190 ms (N170) from stimulus onset have been considered the electrophysiological correlate of the structural encoding of faces (Bentin, Allison, Puce, Perez, & McCarthy, 1996), and thus unaffected by emotions. Conversely, expression processing has been associated with more anterior and later components (Eimer & Holmes, 2002; Krolak-Salmon, Fischer, Vighetto, & Mauguière, 2001; Streit, Wölwer, Brinkmeyer, Ihl, & Gaebel, 2000). However, recent studies showed an amplitude modulation of the P1 and N170 components when ERPs for different emotional expressions were compared. These results have been interpreted as supporting either a quick feedback from the amygdala to cortical areas or a fast activation of the distributed emotion brain network occurring in parallel with sensory processing (Batty & Taylor, 2003; Pourtois, Schettino, & Vuilleumier, 2013; Rotshtein et al., 2010; Wronka & Walentowska, 2011).

Previous studies also compared ERPs when participants were engaged in different tasks requiring discrimination of the expression, identity or gender of faces. Differences in timing and scalp distribution related to the behavioural task have been reported, supporting independent analyses of the structural features or emotion of faces (Münte et al., 1998; Streit et al., 2000); however, these studies used different stimuli in the separate tasks. In the studies presenting the same stimuli in both tasks (Caharel, Courtay, Bernard, Lalonde, & Rebaï, 2005; Krolak-Salmon et al., 2001; Wronka & Walentowska, 2011), differences between tasks for ERP components were not reported, although results on the modulatory effects of emotion are partially inconsistent: Caharel et al. (2005) found a modulation of the N170 component related to emotion type both in tasks requiring expression and familiarity processing, whereas other studies showed differences among emotional faces only during explicit expression recognition (Krolak-Salmon et al., 2001; Wronka & Walentowska, 2011).

Besides providing information about timing, ERP studies explored laterality effects in brain responses when emotional faces were presented. Several studies showed higher activity for emotional faces and greater differences between expressions in the right hemisphere, thus supporting its dominant role in processing emotional faces (Batty & Taylor, 2003; Pizzagalli et al., 2002; Streit et al., 2000). Nonetheless, the asymmetrical representation of emotion in the two hemispheres is still a matter of debate, and recent findings suggest that face recognition relies on a bilateral network characterised by strong inter-hemispheric functional connectivity (Barbeau et al., 2008; Davies-Thompson & Andrews, 2012). Indeed, using lateralized hemifield presentation of liked and disliked faces, Pizzagalli, Regard, and Lehmann (1999) found valence-dependent activity in both hemispheres, but in the right regions it appeared earlier (at 80 ms from face onset) than in the left regions. To date, two hypotheses have been proposed for the interpretation of hemispheric asymmetries in emotion representation: the right hemisphere hypothesis assumes that the right hemisphere is dominant in the perception and expression of emotions (Borod, 1993), whereas the valence hypothesis assigns a specific role to each hemisphere in processing either positive (left hemisphere) or negative (right hemisphere) emotions (Davidson, 1992).
To unveil the role of fronto-occipital connections and the temporal dynamics of the two hemispheres' involvement in emotion processing, we reanalysed the data from our prior study (Mattavelli, Rosanova, Casali, Papagno, & Romero Lauro, 2013), exploring whether activity in the face processing network is modulated by the presentation of different emotions in behavioural tasks requiring explicit discrimination of either the expression or the identity of faces. In the previous study, using transcranial magnetic stimulation combined with electroencephalography (TMS-EEG), we demonstrated an early modulation of cortical excitability in a fronto-temporo-occipital circuit during expression
discrimination (Mattavelli et al., 2013). TMS-EEG is a recently developed, non-invasive technique which allows a direct measurement of the excitability and effective connectivity of the human cerebral cortex by directly perturbing cortical activity and recording the response to this perturbation with high temporal resolution (Taylor, Walsh, & Eimer, 2008). The analysis of TMS-evoked potentials (TEPs) can provide information about the speed of interaction between different regions involved in a behavioural task and about the way the neural signal is distributed and modulated during perceptual and cognitive processing (Miniussi & Thut, 2010).

In our study, TMS was applied 100 ms after face onset over the medial prefrontal cortex (mPFC), while participants were asked to discriminate the expression or identity of faces in different one-back matching tasks. In separate sessions, the right premotor cortex (rPMC) was also targeted with TMS as a control site. EEG was continuously recorded with a 60-channel TMS-compatible amplifier. Differently from studies with ERPs alone, the combined TMS-EEG technique can be used to gauge the connectivity between different regions within a cortical network, probing the causal link between TMS perturbation of a target site and the modulation of the brain response recorded in a different region. Since the aim of the present study was to explore whether fronto-occipital connections are relevant to differentiate emotions, we focused on early face-related ERP components recorded from occipital electrodes, with the hypothesis that TMS applied over mPFC could differently modulate posterior activity depending on the type of facial expression and on the behavioural task.

2. Experimental procedures

2.1. Participants

Twelve healthy right-handed (Oldfield, 1971) volunteers (6 male, mean age 31.4, SD 8.4 years) participated in the study, which took place in the TMS-EEG laboratory of the University of Milano-Bicocca with the approval of the local Ethics Committee. All participants gave written informed consent prior to their participation according to the Declaration of Helsinki.

2.2. Stimuli and procedure

Black and white photographs of three female individuals from the Ekman series (Ekman & Friesen, 1976) were used as stimuli. Images were presented for 700 ms at the centre of a computer screen, covering a visual angle of 8° × 11°, interleaved with a fixation cross which remained on the screen for a period jittering between 1200 and 1400 ms. In the expression task, participants were instructed to respond by pressing a button with the right-hand index finger when the same expression was repeated in two consecutive trials. In the identity task, instead, participants were asked to respond when the same individual was presented in two consecutive trials. Each block consisted of 180 stimuli; repetitions of the same expression or same identity occurred in 15% of trials and the order of the stimuli was controlled to avoid repetition of identical faces (same expression and identity). The same facial stimuli with happy, fearful and neutral expressions were used in both tasks. Experiments were run using E-prime software (Psychology Software Tools, Pittsburgh, PA); accuracy and reaction times (RTs) were recorded.
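To make the one-back trial structure concrete, the following Python sketch generates a hypothetical 180-trial sequence with roughly 15% repetitions on the task-relevant dimension while never repeating the identical face on consecutive trials. The stimulus labels and the 15% proportion follow the description above, but the sampling scheme itself is an illustrative assumption, not the randomisation procedure actually used in the experiment.

```python
import random

IDENTITIES = ["id1", "id2", "id3"]            # three female individuals
EXPRESSIONS = ["happy", "fearful", "neutral"]

def make_sequence(n_trials=180, p_repeat=0.15, target="expression", seed=0):
    """Generate a one-back sequence of (identity, expression) pairs.

    About p_repeat of the transitions repeat the previous trial's value on the
    target dimension (expression or identity); the identical face (same
    identity AND same expression) is never shown twice in a row.
    """
    rng = random.Random(seed)
    seq = [(rng.choice(IDENTITIES), rng.choice(EXPRESSIONS))]
    for _ in range(n_trials - 1):
        prev_id, prev_expr = seq[-1]
        if rng.random() < p_repeat:
            # Target trial: repeat the relevant feature, change the other one
            # so that the identical face never recurs on consecutive trials.
            if target == "expression":
                seq.append((rng.choice([i for i in IDENTITIES if i != prev_id]), prev_expr))
            else:
                seq.append((prev_id, rng.choice([e for e in EXPRESSIONS if e != prev_expr])))
        else:
            # Non-target trial: the relevant feature must change.
            if target == "expression":
                seq.append((rng.choice(IDENTITIES),
                            rng.choice([e for e in EXPRESSIONS if e != prev_expr])))
            else:
                seq.append((rng.choice([i for i in IDENTITIES if i != prev_id]),
                            rng.choice(EXPRESSIONS)))
    return seq

if __name__ == "__main__":
    trials = make_sequence(target="expression")
    repeats = sum(a[1] == b[1] for a, b in zip(trials, trials[1:]))
    print(f"{repeats} expression repetitions out of {len(trials) - 1} transitions")
```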
2.3. TMS stimulation

TMS was delivered with an Eximia TMS stimulator (Nexstim, Helsinki, Finland) using a focal bi-pulse, figure-of-eight 70-mm coil. TMS targets were identified in each subject by means of a Navigated Brain Stimulation (NBS) system (Nexstim, Helsinki, Finland), which uses infrared-based frameless stereotaxy to map the position of the coil and of the subject's head within the reference space of the individual's MRI. In the case of mPFC stimulation, the coil was placed between electrodes AFZ and FZ, targeting the medial prefrontal gyrus in the right hemisphere: mean MNI coordinates were X = 12 (SD 7), Y = 34 (SD 13), Z = 60 (SD 8). A single TMS pulse was delivered over mPFC 100 ms from face onset; to ensure a sufficient number of good trials, blocks with TMS, both during the expression task and the identity task, were repeated twice with a different stimulus randomisation. In addition, each participant completed the two tasks without TMS (no-TMS condition) and a block of TMS stimulation during passive point fixation (no-task condition), with TMS pulses separated by 1900–2100 ms in order to maintain the same pulse interval as in the face task conditions. The order of the TMS task, no-TMS task and TMS no-task conditions and the order of the two face tasks within each condition were counterbalanced across subjects.

In order to test the specificity of the effects of mPFC stimulation, seven participants also completed a separate session with TMS applied over rPMC, identified on the individual MRI; mean MNI coordinates were X = 19 (SD 7.3), Y = 5 (SD 11.4), Z = 72 (SD 4.2). This session was carried out approximately two months after the mPFC one and consisted of 5 blocks: the facial expression and identity tasks with only ERP recording (no-TMS condition), the facial expression and identity tasks with TMS applied over the rPMC, and one block of TMS during point fixation (no-task condition). The order of stimulation conditions and the order of the two face tasks within each condition were counterbalanced across subjects and were different from the first session for each subject.

The NBS system allows estimating the electric field induced by TMS, taking into account head shape, distance from scalp, coil position and orientation. The estimated mean intensity was 101 ± 6 V/m (62 ± 3% of the stimulator output) for mPFC sessions and 105 ± 16 V/m (59 ± 2% of the stimulator output) for rPMC sessions. For both target sites, coil orientation was adjusted for each subject in order to direct the electric field perpendicular to the shape of the gyrus. Moreover, a masking noise reproducing the time-varying frequency components of the TMS sound was continuously played through earplugs worn by participants during the experimental sessions in order to avoid auditory EEG responses evoked by the TMS coil discharge (Massimini et al., 2005; Rosanova et al., 2009).

2.4. EEG recording and analyses

As already reported, EEG was recorded with a 60-channel TMS-compatible amplifier (Nexstim, Helsinki, Finland), which uses a proprietary sample-and-hold circuit to hold the amplifier output constant from 100 μs pre- to 2 ms post-TMS pulse, avoiding amplifier saturation (Virtanen, Ruohonen, Naatanen, & Ilmoniemi, 1999). The signal was recorded with a sampling rate of 1450 Hz. Electrode impedance was kept below 5 kΩ; two electrodes placed over the forehead served as reference and ground, and two additional electrodes near the eyes were used to record eye movements.
Data pre-processing was carried out with Matlab R2011b (Mathworks, Natick, MA, USA) and consisted of the following steps: downsampling to 725 Hz, splitting into trials from 800 ms before to 800 ms after the TMS pulse, artefact rejection with a semi-automatic procedure (Casali, Casarotto, Rosanova, Mariotti, & Massimini, 2010) and band-pass filtering of the signal between 2 and 80 Hz. One participant was excluded from the analyses because of a high number of trials rejected due to signal noise in the EEG recording.
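As a rough illustration of this pipeline, the following Python/NumPy sketch (the published analysis used Matlab) downsamples a continuous recording from 1450 to 725 Hz, cuts epochs from 800 ms before to 800 ms after each TMS pulse and applies a 2–80 Hz band-pass. The array shapes, trigger handling and the Butterworth filter order are illustrative assumptions, and the semi-automatic artefact rejection step is omitted.

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt

FS_RAW, FS_DOWN = 1450.0, 725.0      # original and downsampled rates (Hz)
PRE_S, POST_S = 0.8, 0.8             # epoch window around the TMS pulse (s)

def preprocess(raw, pulse_samples):
    """raw: (n_channels, n_samples) continuous EEG at FS_RAW.
    pulse_samples: TMS pulse onsets, in samples of the raw recording.
    Returns an array of epochs (n_trials, n_channels, n_times) at FS_DOWN."""
    # 1) Downsample by a factor of 2 (1450 Hz -> 725 Hz, with anti-aliasing).
    data = decimate(raw, 2, axis=1, zero_phase=True)
    pulses = (np.asarray(pulse_samples) / 2).astype(int)

    # 2) Cut epochs from -800 ms to +800 ms around each TMS pulse.
    pre, post = int(PRE_S * FS_DOWN), int(POST_S * FS_DOWN)
    epochs = np.stack([data[:, p - pre:p + post] for p in pulses
                       if p - pre >= 0 and p + post <= data.shape[1]])

    # (Semi-automatic rejection of noisy trials would happen here.)

    # 3) Band-pass filter each epoch between 2 and 80 Hz (4th-order Butterworth).
    b, a = butter(4, [2.0, 80.0], btype="bandpass", fs=FS_DOWN)
    return filtfilt(b, a, epochs, axis=-1)

if __name__ == "__main__":
    fake = np.random.randn(60, int(FS_RAW * 60))   # 60 channels, 60 s of noise
    pulses = np.arange(5, 55, 2) * FS_RAW          # one pulse every 2 s
    print(preprocess(fake, pulses).shape)          # (n_trials, 60, n_times)
```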
Following artefact rejection, the mean number of trials analysed for each participant in the mPFC session was 281.8 (SD 55.2; 78% of total trials) in the TMS expression task, 291.9 (SD 26.4; 81% of total trials) in the TMS identity task, 158.8 (SD 16.1; 88% of total trials) in the no-TMS condition of the expression task, 162.3 (SD 16.5; 90% of total trials) in the no-TMS condition of the identity task and 133.2 (SD 22.4; 74% of total trials) in the TMS no-task condition. In the rPMC session the mean number of valid trials was 168.3 (SD 5; 93% of total trials) in the TMS expression task, 172.1 (SD 3.3; 95% of total trials) in the TMS identity task, 165.9 (SD 11; 92% of total trials) in the no-TMS condition of the expression task, 164.4 (SD 7.8; 91% of total trials) in the no-TMS condition of the identity task and 156.6 (SD 9.9; 87% of total trials) in the TMS no-task condition.

EEG traces for good trials in each block were average referenced and baseline corrected between 300 and 80 ms before the TMS pulse; this baseline window was selected because it preceded the onset of signal changes related both to face stimulus presentation and to the TMS pulse. The complete analyses of TMS effects on brain excitability during the expression and identity tasks can be found elsewhere (Mattavelli et al., 2013). Here, we focused on the effects of the TMS perturbation on early posterior face-related components in relation to the emotional expression of the stimuli used in the two tasks. Face presentation produced a first positive peak at 100 ms from face onset (P1) in occipital electrodes, followed by a negative peak at 150 ms (N1) and a second positive peak at 200 ms (P2). In order to compare the amplitude of these early components in the TMS and no-TMS conditions of the face tasks while eliminating unspecific TMS effects not related to cortical responses, trials from the no-task condition were subtracted from the TMS task conditions for each session, and the epochs obtained from this subtraction were then filtered, average referenced and baseline corrected with the same parameters as above (Reichenbach, Whittingstall, & Thielscher, 2011; Thut, Ives, Kampmann, Pastor, & Pascual-Leone, 2005).

The analyses considered the averaged signal from contiguous electrodes in the left (PO3, O1), midline (POZ, OZ) and right (PO2, O2) occipital regions. The three peaks of interest were identified by visual inspection in each subject and for each condition as the maximum value between 50 and 130 ms (P1), the minimum value between 100 and 180 ms (N1) and the maximum value between 120 and 270 ms (P2). The peak-to-peak amplitude of the P1–N1 and N1–P2 components was measured in each subject for the expression and identity tasks, with and without TMS, separately for trials in which happy, fearful or neutral expressions were presented. Then, repeated measures ANOVAs with factors TMS (yes, no), task (expression, identity), emotion (happy, fear, neutral) and side (left, midline, right) were carried out to test for significant effects on the P1–N1 and N1–P2 amplitudes. Greenhouse–Geisser corrections to degrees of freedom were applied when appropriate and only the corrected probability values are reported. Post hoc analyses were Bonferroni corrected. The same analyses were performed for TMS applied over mPFC and rPMC.
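A minimal sketch of the peak-scoring step described above is given below: for one subject-averaged occipital waveform it picks P1, N1 and P2 in the stated search windows (50–130, 100–180 and 120–270 ms after face onset) and returns the P1–N1 and N1–P2 peak-to-peak amplitudes. The function names and the automatic window-extremum rule are assumptions for illustration; in the actual study the peaks were identified by visual inspection.

```python
import numpy as np

FS = 725.0   # sampling rate of the pre-processed data (Hz)

def window_mask(times_ms, lo_ms, hi_ms):
    """Boolean mask for samples whose latency (ms from face onset) lies in [lo, hi]."""
    return (times_ms >= lo_ms) & (times_ms <= hi_ms)

def peak_to_peak(avg_wave, times_ms):
    """avg_wave: subject-averaged signal from a group of occipital electrodes.
    times_ms: latency of each sample relative to face onset, in ms.
    Returns the (P1-N1, N1-P2) peak-to-peak amplitudes."""
    p1 = avg_wave[window_mask(times_ms, 50, 130)].max()    # max between 50 and 130 ms
    n1 = avg_wave[window_mask(times_ms, 100, 180)].min()   # min between 100 and 180 ms
    p2 = avg_wave[window_mask(times_ms, 120, 270)].max()   # max between 120 and 270 ms
    return p1 - n1, p2 - n1

if __name__ == "__main__":
    times = np.arange(-300, 500, 1000.0 / FS)              # latencies in ms
    # Toy waveform with a P1/N1/P2-like morphology, for demonstration only.
    wave = (2 * np.exp(-((times - 100) / 20) ** 2)
            - 3 * np.exp(-((times - 150) / 20) ** 2)
            + 2 * np.exp(-((times - 200) / 25) ** 2))
    print(peak_to_peak(wave, times))
```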
3. Results

3.1. Behavioural results

In the mPFC sessions without TMS the detection rate was 35.5% (SD 16.5%) in the facial expression task (mean RT 580.4 ms, SD 40.6 ms) and 52.3% (SD 19.6%) in the identity task (mean RT 555.5 ms, SD 43.8 ms). In TMS blocks, mean accuracy was 39.1% (SD 16.9%) in the facial expression task (mean RT 578 ms, SD 38.9 ms) and 49.9% (SD 20.3%) in the identity task (mean RT 552.1 ms, SD 49.8 ms). Mean performance for each emotion separately is reported in Table 1. The overall mean false alarm rate was 0.21% (SD 0.38%) in the mPFC session. Responses provided after stimulus offset were not recorded for the analyses, which explains the low mean accuracy in task performance.
Table 1
Mean accuracy (%; first value in each cell) and reaction times (ms; second value in each cell) for happy, fearful and neutral expressions, separately for the expression and identity tasks. Standard deviations are reported in brackets.

mPFC session, no-TMS
  Expression task: Happy 34.2 (22), 578.9 (43.3); Fear 35.2 (22.8), 554.7 (64.5); Neutral 36.2 (24), 579.5 (63.3)
  Identity task: Happy 45.7 (25.1), 545.7 (59.1); Fear 50.6 (21.8), 537.1 (51.6); Neutral 62.1 (21.3), 562.8 (59.1)
mPFC session, TMS
  Expression task: Happy 40.8 (19.7), 561.5 (55.7); Fear 36.2 (20), 583 (50.6); Neutral 39.8 (21.1), 583.8 (37.9)
  Identity task: Happy 45.1 (22.1), 541.5 (54.5); Fear 52.7 (23.6), 544.4 (54.5); Neutral 52.1 (19.7), 545.7 (60.7)
rPMC session, no-TMS
  Expression task: Happy 47.4 (29.3), 528.3 (82.4); Fear 31.6 (28.3), 495.9 (67.6); Neutral 50.7 (26.4), 550.9 (54.1)
  Identity task: Happy 64.4 (27.4), 503.7 (70.1); Fear 62 (23), 537.6 (74.7); Neutral 59.1 (35.5), 529.1 (70.5)
rPMC session, TMS
  Expression task: Happy 52.4 (25.6), 520.9 (47.7); Fear 42.7 (26.9), 546.6 (66.2); Neutral 55.6 (30.3), 567.2 (64.3)
  Identity task: Happy 70.6 (20.1), 523.8 (82); Fear 65.1 (14.2), 497.4 (58.2); Neutral 57 (32.5), 526.3 (89.9)
Repeated measures ANOVAs with TMS, task and facial emotion as within-subjects factors revealed a significant main effect of task both for accuracy [F(1, 10) = 7.54, p = .021, partial η² = .43] and RTs [F(1, 8) = 19.86, p = .002, partial η² = .71], with higher accuracy and faster RTs in the identity task, whereas the effects of TMS (ACC: F(1, 10) = .12, p = .92, partial η² = .001; RT: F(1, 8) = .001, p = .97, partial η² < .001), facial emotion (ACC: F(2, 20) = 2.68, p = .09, partial η² = .21; RT: F(2, 16) = 1.67, p = .22, partial η² = .17) and their interactions were not significant (ACC: TMS × task [F(1, 10) = .95, p = .35, partial η² = .09], TMS × emotion [F(2, 20) = .64, p = .53, partial η² = .06], task × emotion [F(2, 20) = 1.05, p = .37, partial η² = .09], TMS × task × emotion [F(2, 20) = .72, p = .5, partial η² = .07]; RT: TMS × task [F(1, 8) = .6, p = .46, partial η² = .07], TMS × emotion [F(2, 16) = 1.73, p = .21, partial η² = .18], task × emotion [F(2, 16) = .01, p = .99, partial η² = .001], TMS × task × emotion [F(2, 16) = 1.59, p = .23, partial η² = .17]).

In the rPMC sessions without TMS, the detection rate was 43.6% (SD 25.2%) in the facial expression task (mean RT 544.4 ms, SD 59.6 ms) and 62% (SD 22.6%) in the identity task (mean RT 526.9 ms, SD 65.2 ms). In rPMC-TMS blocks, mean accuracy was 50.4% (SD 25.7%) in the facial expression task (mean RT 555.3 ms, SD 63.2 ms) and 65.1% (SD 17.9%) in the identity task (mean RT 525.1 ms, SD 70.1 ms). The overall mean false alarm rate was 0.42% (SD 0.8%) in the rPMC session. Repeated measures ANOVAs with TMS, task and facial emotion as within-subjects factors did not show any significant effect for accuracy (TMS [F(1, 6) = 1.58, p = .25, partial η² = .21], task [F(1, 6) = 4.99, p = .07, partial η² = .45], emotion [F(2, 12) = 1.31, p = .3, partial η² = .18], TMS × task [F(1, 6) = .7, p = .43, partial η² = .1], TMS × emotion [F(2, 12) = .22, p = .8, partial η² = .04], task × emotion [F(2, 12) = 3.69, p = .06, partial η² = .38], TMS × task × emotion [F(2, 12) = .22, p = .8, partial η² = .04]) or RTs (TMS [F(1, 4) = .42, p = .55, partial η² = .09], task [F(1, 4) = 1.44, p = .3, partial η² = .26], emotion [F(2, 8) = 2.03, p = .19, partial η² = .34], TMS × task [F(1, 4) = 4.44, p = .1, partial η² = .53], TMS × emotion [F(2, 8) = .001, p = .99, partial η² < .001], task × emotion [F(2, 8) = 1.52, p = .28, partial η² = .27], TMS × task × emotion [F(2, 8) = 3.5, p = .08, partial η² = .47]).

Moreover, for those participants who completed both experimental sessions, a repeated measures ANOVA with session (mPFC or rPMC), TMS and task as within-subjects factors did not reveal any significant effect on accuracy (session [F(1, 6) = 2.64, p = .15, partial η² = .31], TMS [F(1, 6) = 2.68, p = .15, partial η² = .31], task [F(1, 6) = 5.15, p = .06, partial η² = .46], session × TMS [F(1, 6) = .13, p = .73, partial η² = .02], session × task [F(1, 6) = .25, p = .63, partial η² = .04], TMS × task [F(1, 6) = .01, p = .92, partial η² = .002], session × TMS × task [F(1, 6) = .32, p = .59, partial η² = .05]). When the same analysis was run on RTs, we found a significant main effect of task [F(1, 6) = 8.26, p = .028, partial η² = .58], participants being faster in detecting identity than expression repetitions, but no significant effects of session [F(1, 6) = 1.27, p = .3, partial η² = .17], TMS [F(1, 6) = .41, p = .55, partial η² = .06] or interactions (session × TMS [F(1, 6) = 1.28, p = .3, partial η² = .18], session × task [F(1, 6) = .05, p = .82, partial η² = .009], TMS × task [F(1, 6) = 1.04, p = .35, partial η² = .15], session × TMS × task [F(1, 6) = .07, p = .8, partial η² = .01]).
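For readers who want to reproduce this kind of analysis on their own data, the fully within-subjects design reported above (shown here for accuracy in one session, with TMS, task and emotion as within-subjects factors) can be run in Python with statsmodels. The data frame layout, column names and placeholder values are assumptions for illustration, and, unlike the analysis in the paper, AnovaRM does not apply Greenhouse–Geisser corrections.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# One accuracy value per subject and per cell of the TMS x task x emotion design.
rows = [
    {"subject": s, "tms": tms, "task": task, "emotion": emo,
     "accuracy": rng.normal(50, 15)}              # placeholder data
    for s in range(1, 12)                         # 11 analysed participants
    for tms in ("TMS", "noTMS")
    for task in ("expression", "identity")
    for emo in ("happy", "fear", "neutral")
]
df = pd.DataFrame(rows)

# Fully within-subjects ANOVA: every subject contributes every cell exactly once.
res = AnovaRM(data=df, depvar="accuracy", subject="subject",
              within=["tms", "task", "emotion"]).fit()
print(res)
```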
3.2. Electrophysiological results

Signals from occipital electrodes for each experimental condition are depicted in Figs. 1 and 2. When TMS was applied over mPFC, a repeated measures ANOVA TMS × task × emotion × side on the P1–N1 amplitude revealed significant main effects of TMS [F(1, 10) = 40.94, p < .001, partial η² = .8], emotion [F(2, 20) = 4.00, p = .035, partial η² = .29] and side [F(2, 20) = 16.33, p < .001, partial η² = .62]. The P1–N1 amplitude significantly decreased after mPFC-TMS. Moreover, the P1–N1 amplitude was overall larger for happy than fearful faces (p = .045) and larger in the left and right electrodes than in the midline ones (p < .001 and p = .001, respectively). The main effect of task was not significant [F(1, 10) = .6, p = .81, partial η² = .01], nor were the two-way interactions TMS × task [F(1, 10) = .42, p = .53, partial η² = .04], TMS × emotion [F(2, 20) = .3, p = .74, partial η² = .03], task × emotion [F(2, 20) = .11, p = .9, partial η² = .01], TMS × side [F(2, 20) = .09, p = .91, partial η² = .01], task × side [F(1.301, 13.015) = 3.5, p = .08, ε = .651, partial η² = .26] and emotion × side [F(1.705, 17.048) = .92, p = .4, ε = .426, partial η² = .08], nor the three-way interactions TMS × task × emotion [F(2, 20) = 1.64, p = .22, partial η² = .14], TMS × task × side [F(1.325, 13.253) = .38, p = .61, ε = .663, partial η² = .04], TMS × emotion × side [F(4, 40) = .96, p = .44, partial η² = .09] and task × emotion × side [F(4, 40) = .17, p = .95, partial η² = .02]. Crucially, the four-way interaction TMS × task × emotion × side was significant [F(4, 40) = 4.66, p = .003, partial η² = .32]. This effect was further analysed by means of two ANOVAs TMS × emotion × side carried out separately for the expression and identity tasks. In the expression task the interaction TMS × emotion × side was significant [F(4, 40) = 4.51, p = .004, partial η² = .31], the TMS × emotion interaction being significant only in the right electrodes [F(2, 20) = 3.57, p = .047, partial η² = .26]; this interaction indicated that the signal was differentially affected by TMS depending on emotion type (Fig. 3): more specifically, TMS significantly reduced the P1–N1 amplitude in the right electrodes when happy (p = .004) or neutral (p = .002) faces were presented, whereas trials with fearful faces were not affected (p = .47). The TMS × emotion interaction was not significant in the left [F(2, 20) = 2.19, p = .14, partial η² = .18] or midline [F(1.237, 12.367) = 1.42, p = .26, ε = .618, partial η² = .12] electrodes. As shown in Fig. 3, in the no-TMS condition the mean activity for fearful faces was lower than that for happy and neutral faces; however, a one-way ANOVA run to check the effect of emotion was not significant [F(2, 20) = 2.31, p = .12, partial η² = .19]. In the identity task the ANOVA TMS × emotion × side showed a non-significant three-way interaction [F(4, 40) = .77, p = .55, partial η² = .07].
Fig. 1. Average scalp potentials recorded at occipital electrodes for each emotion in TMS and no-TMS conditions during the expression task. In the TMS condition the depicted signal was obtained following the subtraction of the TMS no-task condition from the TMS task condition for each session. Shadowed areas represent the components of interest with significant interaction between TMS and emotional expression, for which the peak-to-peak amplitudes are represented in Figs. 3 and 4.
A repeated measures ANOVA TMS × task × emotion × side carried out on the N1–P2 amplitude revealed a significant main effect of side [F(2, 20) = 13.22, p < .001, partial η² = .57], due to larger components in the left and right electrodes than in the midline ones (p < .001 and p = .006, respectively). The main effects of TMS [F(1, 10) = .79, p = .4, partial η² = .07], task [F(1, 10) = .74, p = .41, partial η² = .07] and emotion [F(2, 20) = 3.12, p = .07, partial η² = .24] were not significant, nor were the two-way and three-way interactions: TMS × task [F(1, 10) = .18, p = .68, partial η² = .02], TMS × emotion [F(2, 20) = 1.67, p = .21, partial η² = .14], task × emotion [F(2, 20) = .72, p = .5, partial η² = .07], TMS × task × emotion [F(2, 20) = 2.82, p = .08, partial η² = .22], TMS × side [F(1.339, 13.392) = .88, p = .4, ε = .67, partial η² = .08], task × side [F(1.287, 12.875) = 2.57, p = .13, ε = .644, partial η² = .2], TMS × task × side [F(2, 20) = 1.28, p = .3, partial η² = .11], emotion × side [F(2.079, 20.792) = 3.00, p = .07, ε = .52, partial η² = .23], TMS × emotion × side [F(4, 40) = .33, p = .86, partial η² = .03], task × emotion × side [F(2.488, 24.882) = .48, p = .46, ε = .662, partial η² = .05]. As for the first component, the four-way interaction was significant [F(4, 40) = 4.9, p = .003, partial η² = .33]. This effect was further analysed by means of two separate ANOVAs TMS × emotion × side for the expression and identity tasks, respectively. In the expression task the interaction TMS × emotion × side was significant [F(4, 40) = 2.85, p = .036, partial η² = .22] and the ANOVA revealed a significant TMS × emotion interaction only in the left electrodes [F(2, 20) = 3.87, p = .038, partial η² = .28], where TMS specifically reduced the N1–P2 amplitude in trials with neutral faces (p = .024), but not in trials with happy (p = .51) or fearful (p = .18) faces (Fig. 4). The TMS × emotion interaction was not significant in the midline [F(2, 20) = 0.95, p = .40, partial η² = .09] or right [F(2, 20) = 2.94, p = .08, partial η² = .23] electrodes. Similarly to the P1–N1 amplitude, also for the N1–P2 amplitude the three-way interaction TMS × emotion × side was not significant in the identity task [F(2.805, 20.051) = 2.3, p = .1, ε = .701, partial η² = .19].

Stimulation applied over rPMC did not affect the P1–N1 and N1–P2 components: indeed, an ANOVA TMS × task × emotion × side showed no significant main effect of TMS, nor was the interaction of TMS with the other factors significant (all ps > .05; see Appendix A for detailed results).
Fig. 2. Average scalp potentials recorded at occipital electrodes for each emotion in TMS and no-TMS conditions during the identity task. In the TMS condition the depicted signal was obtained following the subtraction of the TMS no-task condition from the TMS task condition for each session.
Fig. 3. Peak-to-peak amplitude of the P1–N1 component in occipital electrodes during the expression task. Asterisks highlight significant effects (p < .05). Error bars represent means’ standard error.
Fig. 4. Peak-to-peak amplitude of the N1–P2 component in occipital electrodes during the expression task. The asterisk highlights a significant effect (p < .05). Error bars represent means’ standard error.
In summary, the critical result was that the interference of mPFC-TMS with face emotion differed between the two hemispheres, modulating the occipital activity for different emotions first in the right hemisphere (P1–N1 component) and later in the left hemisphere (N1–P2 component). This modulatory effect was specific to the expression task, whereas TMS did not affect the identity task, in which emotion processing was implicit.

4. Discussion

In the present study we examined whether fronto-occipital excitability and long-range top-down modulation are affected by different behavioural tasks and by specific emotional expressions during face processing. Single-pulse TMS was applied over mPFC 100 ms after face onset, while participants performed two different tasks requiring explicit processing of either the expression or the identity of faces, in which the same happy, fearful or neutral stimuli were presented. ERPs simultaneously recorded from occipital electrodes showed that perturbation of mPFC activity by means of TMS produced a specific modulation of the P1–N1 component in the right electrodes and of the N1–P2 component in the left electrodes, depending on the type of emotional expression. Crucially, the effect proved to be selective for the site of stimulation and the cognitive task, since posterior components were affected by mPFC stimulation, but not by rPMC stimulation, and only during the explicit processing of facial expressions; in contrast, the identity task, in which emotion processing could occur implicitly but was not required of participants, was not affected.

Previous studies reported a modulation of the electrophysiological signal for different facial expressions already at the latency of the P1 and N170 components (Batty & Taylor, 2003; Pourtois, Grandjean, Sander, & Vuilleumier, 2004), suggesting a fast discrimination of emotional stimuli in visual perceptual analysis; moreover, results on cortical excitability (Mattavelli et al., 2013) highlighted the role of top-down regulation at an early stage of face processing, showing that the functional coupling among different regions in the face network was modulated by the behavioural task. The present results add new evidence on the neural mechanisms supporting fast emotion discrimination by confirming a causal link between the activity of the prefrontal cortex and the electrophysiological signal recorded from the occipital region, and by showing that this link is affected by specific facial expressions, when emotions are explicitly processed, at different times in the two hemispheres.

In particular, our results revealed that, in the right hemisphere, TMS applied over mPFC reduced the posterior activity in the case of happy and neutral expressions, while the P1–N1 amplitude evoked by fearful faces was unaffected. This lack of TMS modulation for fearful faces in the right hemisphere seems to contradict the valence hypothesis, which states a dominant role of the right hemisphere in processing negative emotions (Davidson, 1992). Indeed, we could speculate that TMS did not affect the activity recorded
from the right electrodes when fearful expressions were presented because processing of negative emotions in the right hemisphere does not depend on fronto-occipital connections and, therefore, is not affected by perturbation of the medial part of the right superior prefrontal gyrus. However, an alternative and more plausible hypothesis is that fearful expressions, being relevant signals of potential threat, rely on a direct sensory route in the right hemisphere that is not influenced by top-down cortical connections. This would explain why processing of fearful expressions, in the right hemisphere, was not affected by mPFC-TMS (Calder, Lawrence, & Young, 2001; Tamietto & de Gelder, 2010). Moreover, the task used in this study tapped emotion perception and was not intended to elicit emotion experience in participants, and recent revisions of the valence hypothesis showed anterior asymmetries during reactions to emotional stimuli and suggested a relationship between individual differences in prefrontal activation asymmetries and dispositional mood, that is, a positive or negative affective style (Davidson, 2003; Johnstone, van Reekum, Urry, Kalin, & Davidson, 2007).

Interestingly, emotion modulation appeared at different time points in the right and left electrodes, namely first in the right P1–N1 component and then in the left N1–P2 component. This different timing supports the hypothesis that both hemispheres distinguish emotions, but the right hemisphere seems to be faster in organising different neural representations for specific emotions (Pizzagalli et al., 1999).

In the left hemisphere, TMS specifically reduced the amplitude of the N1–P2 component when neutral faces were presented in the expression task. The specific effect for neutral expressions suggests a different representation of emotion type in the right and left hemispheres. Indeed, neutral faces can be considered ambiguous stimuli from an emotional point of view as compared with happy and fearful expressions, which are interpreted as clearly positive or negative (Surguladze et al., 2003; Young et al., 1997). Previous neuroimaging findings highlighted different responses in the right and left hemispheres in relation to the ambiguity and level of arousal of emotional and neutral stimuli. In particular, the left amygdala showed a higher activation than the right one in processing neutral faces; moreover, it has been suggested that the left amygdala is more involved than the right one in the cognitive evaluation of the arousal produced by emotional stimuli (Cooney, Atlas, Joormann, Eugène, & Gotlib, 2006; Gläscher & Adolphs, 2003; Wright & Liu, 2006). In light of these previous studies, we could speculate that neutral faces require additional cognitive processing compared with other emotions, especially in an expression matching task in which a "naming strategy" is often used (Wright & Liu, 2006). As a consequence of this cognitive demand, the neural pathway underpinning neutral expression discrimination is more affected by top-down
modulation in the left hemisphere. However, caution is recommended, since the status of neutral faces as affectively neutral, ambiguous or slightly negative stimuli is still an issue requiring further investigation.

As reported in the method section, this study included only right-handed participants. Previous studies have shown that subjects' handedness can affect the degree of hemispheric lateralization in face processing (Bourne, 2008; Harris, Almerigi, Carbary, & Fogel, 2001). Future research should investigate whether right-handed and left-handed subjects show similar laterality effects in the cortical network for expression processing.

Another relevant result of the present study is the relation between emotion modulation and the behavioural task. We did not find any significant main effect of task on early posterior ERP components during identity and expression discrimination, confirming that different tasks produce comparable cortical responses when participants are presented with the same stimuli (Caharel et al., 2005). Differently from previous ERP studies, we used combined TMS-EEG to probe the relation between activity in the prefrontal cortex and the response to face stimuli in posterior regions. Interestingly, TMS interacted with emotion type only when expression recognition was explicitly required, demonstrating that the role of top-down modulation is crucial when emotion discrimination is relevant for the task and confirming previous results reporting that the unbalanced activity for different emotions was affected by spatial and voluntary attention (Eimer & Holmes, 2007; Wronka & Walentowska, 2011, 2014).

Previous studies computing analyses both at the EEG sensor level and with source modelling have demonstrated that, even in areas far from the TMS site, TEPs are produced by the transmission of the neural perturbation along structural and functional connections, rather than through volume conduction of the electric field induced by the TMS pulse (Casarotto et al., 2010; Romero Lauro et al., 2014, 2016). This has been further confirmed in a recent study on vegetative state patients, which showed an absence of EEG responses to TMS when TMS was targeted over a non-functional cortical site (Gosseries et al., 2015). In the present study we found that posterior EEG components were differently modulated by
mPFC-TMS depending on both task and stimuli; moreover, the same posterior components were not affected by rPMC stimulation, confirming that our results were not due to generic physical effects of the TMS pulses. Finally, behavioural performance was not affected by TMS; this result was expected, since the aim of the stimulation in TMS-EEG experiments is to detect cortical excitability, not to interfere with behaviour as in the classical virtual lesion approach (Akaishi, Morishima, Rajeswaren, Aoki, & Sakai, 2010; Johnson, Kundu, Casali, & Postle, 2012). In the mPFC session, the only significant result was the main effect of task, participants being more accurate and faster in detecting identity repetitions than expression repetitions, as reported in previous studies (Campbell, Brooks, de Haan, & Roberts, 1996; Münte et al., 1998). A similar trend appeared in the sessions with rPMC stimulation, but the effect did not reach statistical significance, one possible explanation being the low number of participants.

5. Conclusion

This study demonstrated a role of the fronto-occipital network in emotion discrimination, showing that top-down mechanisms were modulated by emotion processing depending on the cognitive task. These effects occurred at different time points and for different expressions in the right and left hemisphere. The finding that emotions were modulated in both hemispheres partially contradicts the hypothesis of a dominant role of the right hemisphere in emotion processing; however, the specific effects related to facial expression occurred early in the right P1–N1 component and later in the left N1–P2 component, suggesting a time advantage for the right posterior region in creating different neural representations for specific emotions.

Acknowledgment

This study was supported by a research grant (FAR) from the University of Milano-Bicocca to CP.
Appendix A
Results of the ANOVAs TMS × task × emotion × side on the P1–N1 and N1–P2 components in the rPMC session. Asterisks highlight significant effects (p < .05).

P1–N1 component
Effect                          df                            F        p        Partial η²
TMS                             1, 6                          .33      .585     .053
task                            1, 6                          5.68     .055     .486
emotion                         2, 12                         3.25     .075     .351
side                            2, 12                         15.01    .001*    .714
TMS × task                      1, 6                          .57      .480     .086
TMS × emotion                   2, 12                         2.31     .141     .278
task × emotion                  2, 12                         .38      .693     .059
TMS × task × emotion            2, 12                         .76      .491     .112
TMS × side                      2, 12                         1.33     .302     .181
task × side                     2, 12                         .98      .402     .141
TMS × task × side               2, 12                         .26      .778     .041
emotion × side                  4, 24                         .34      .849     .053
TMS × emotion × side            4, 24                         .41      .796     .065
task × emotion × side           4, 24                         2.79     .049*    .317
TMS × task × emotion × side     4, 24                         1.43     .256     .192

N1–P2 component
Effect                          df                            F        p        Partial η²
TMS                             1, 6                          .84      .394     .123
task                            1, 6                          .33      .585     .052
emotion                         2, 12                         3.85     .051     .391
side                            2, 12                         10.82    .002*    .642
TMS × task                      1, 6                          .78      .41      .115
TMS × emotion                   2, 12                         .65      .54      .098
task × emotion                  2, 12                         7.68     .007*    .561
TMS × task × emotion            2, 12                         .47      .638     .072
TMS × side                      2, 12                         .84      .457     .122
task × side                     2, 12                         .49      .625     .075
TMS × task × side               1.088, 6.529 (ε = .544)       .89      .389     .129
emotion × side                  4, 24                         1.34     .285     .182
TMS × emotion × side            4, 24                         1.84     .154     .235
task × emotion × side           1.732, 10.394 (ε = .433)      .76      .561     .113
TMS × task × emotion × side     4, 24                         .7       .598     .105
References

Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.
Akaishi, R., Morishima, Y., Rajeswaren, V. P., Aoki, S., & Sakai, K. (2010). Stimulation of the frontal eye field reveals persistent effective connectivity after controlled behavior. Journal of Neuroscience, 30, 4295–4305.
Barbeau, E. J., Taylor, M. J., Regis, J., Marquis, P., Chauvel, P., & Liégeois-Chauvel, C. (2008). Spatio-temporal dynamics of face recognition. Cerebral Cortex, 18, 997–1009.
Batty, M., & Taylor, M. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17, 613–620.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.
Borod, J. C. (1993). Cerebral mechanisms underlying facial, prosodic, and lexical emotional expression: A review of neuropsychological studies and methodological issues. Neuropsychology, 7, 445–463.
Bourne, V. J. (2008). Examining the relationship between degree of handedness and degree of cerebral lateralization for processing facial emotion. Neuropsychology, 3, 350–356.
Caharel, S., Courtay, N., Bernard, C., Lalonde, R., & Rebaï, M. (2005). Familiarity and emotional expression influence an early stage of face processing: An electrophysiological study. Brain and Cognition, 59, 96–100.
Calder, A. J., Lawrence, A. D., & Young, A. W. (2001). Neuropsychology of fear and loathing. Nature Reviews Neuroscience, 2, 352–362.
Campbell, R., Brooks, B., de Haan, E., & Roberts, T. (1996). Dissociating face processing skills: Decisions about lip-read speech, expression, and identity. The Quarterly Journal of Experimental Psychology, 49, 259–314.
Casali, A. G., Casarotto, S., Rosanova, M., Mariotti, M., & Massimini, M. (2010). General indices to characterize the electrical response of the cerebral cortex to TMS. NeuroImage, 49, 1459–1468.
Casarotto, S., Romero Lauro, L. J., Bellina, V., Casali, A. G., Rosanova, M., Pigorini, A., ... Massimini, M. (2010). EEG responses to TMS are sensitive to changes in the perturbation parameters and repeatable over time. PLoS One, 5, e10281.
Cooney, R. E., Atlas, L. Y., Joormann, J., Eugène, F., & Gotlib, I. H. (2006). Amygdala activation in the processing of neutral faces in social anxiety disorder: Is neutral really neutral? Psychiatry Research, 148, 55–59.
Davidson, R. J. (2003). Darwin and the neural bases of emotion and affective style. Annals of the New York Academy of Sciences, 1000, 316–336.
Davidson, R. J. (1992). Anterior cerebral asymmetry and the nature of emotion. Brain and Cognition, 20, 125–151.
Davies-Thompson, J., & Andrews, T. J. (2012). Intra- and interhemispheric connectivity between face-selective regions in the human brain. Journal of Neurophysiology, 108, 3087–3095.
Eimer, M., & Holmes, A. (2007). Event-related brain potential correlates of emotional face processing. Neuropsychologia, 45, 15–31.
Eimer, M., & Holmes, A. (2002). An ERP study on the time course of emotional face processing. NeuroReport, 13, 427–431.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, California: Consulting Psychologists Press.
Gläscher, J., & Adolphs, R. (2003). Processing of the arousal of subliminal and supraliminal emotional stimuli by the human amygdala. Journal of Neuroscience, 23, 10274–10282.
Gosseries, O., Sarasso, S., Casarotto, S., Boly, M., Schnakers, C., Napolitani, M., ... Rosanova, M. (2015). On the cerebral origin of EEG responses to TMS: Insights from severe cortical lesions. Brain Stimulation, 8, 142–149.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–232.
Harris, L. J., Almerigi, J. B., Carbary, T. J., & Fogel, T. G. (2001). Left-side infant holding: A test of the hemispheric arousal-attentional hypothesis. Brain and Cognition, 46, 159–165.
Ishai, A. (2008). Let's face it: It's a cortical network. NeuroImage, 40, 415–419.
Johnson, J. S., Kundu, B., Casali, A. G., & Postle, B. R. (2012). Task-dependent changes in cortical excitability and effective connectivity: A combined TMS-EEG study. Journal of Neurophysiology, 107, 2383–2392.
Johnstone, T., van Reekum, C. M., Urry, H. L., Kalin, N. H., & Davidson, R. J. (2007). Failure to regulate: Counterproductive recruitment of top-down prefrontal-subcortical circuitry in major depression. Journal of Neuroscience, 27, 8877–8884.
Krolak-Salmon, P., Fischer, C., Vighetto, A., & Mauguière, F. (2001). Processing of facial emotional expression: Spatio-temporal data as assessed by scalp event-related potentials. European Journal of Neuroscience, 13, 987–994.
Massimini, M., Ferrarelli, F., Huber, R., Esser, S. K., Singh, H., & Tononi, G. (2005). Breakdown of cortical effective connectivity during sleep. Science, 309, 2228–2232.
Mattavelli, G., Rosanova, M., Casali, A. G., Papagno, C., & Romero Lauro, L. J. (2013). Top-down interference and cortical responsiveness in face processing: A TMS-EEG study. NeuroImage, 76, 24–32.
Miniussi, C., & Thut, G. (2010). Combining TMS and EEG offers new prospects in cognitive neuroscience. Brain Topography, 22, 249–256.
Münte, T. F., Brack, M., Grootheer, O., Wieringa, B. M., Matzke, M., & Johannes, S. (1998). Brain potentials reveal the timing of face identity and face expression judgments. Neuroscience Research, 30, 25–34.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.
Pizzagalli, D. A., Lehmann, D., Hendrick, A. M., Regard, M., Pascual-Marqui, R. D., & Davidson, R. J. (2002). Affective judgments of faces modulate early activity (160 ms) within the fusiform gyri. NeuroImage, 16, 663–667.
Pizzagalli, D. A., Regard, M., & Lehmann, D. (1999). Rapid emotional face processing in the human right and left brain hemispheres: An ERP study. NeuroReport, 10, 2691–2698.
Pourtois, G., Schettino, A., & Vuilleumier, P. (2013). Brain mechanisms for emotional influences on perception and attention: What is magic and what is not. Biological Psychology, 92, 492–512.
Pourtois, G., Grandjean, D., Sander, D., & Vuilleumier, P. (2004). Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex, 14, 619–633.
Reichenbach, A., Whittingstall, K., & Thielscher, A. (2011). Effects of transcranial magnetic stimulation on visual evoked potentials in a visual suppression task. NeuroImage, 54, 1375–1384.
Romero Lauro, L. J., Pisoni, A., Rosanova, M., Casarotto, S., Mattavelli, G., Bolognini, N., & Vallar, G. (2016). Localizing the effects of anodal tDCS at the level of cortical sources: A reply to Bailey et al., 2015. Cortex, 74, 323–328.
Romero Lauro, L. J., Rosanova, M., Mattavelli, G., Convento, S., Pisoni, A., Opitz, A., ... Vallar, G. (2014). TDCS increases cortical excitability: Direct evidence from TMS-EEG. Cortex, 58, 99–111.
Rosanova, M., Casali, A., Bellina, V., Retsa, F., Mariotti, M., & Massimini, M. (2009). Natural frequencies of the human corticothalamic circuits. Journal of Neuroscience, 29, 7679–7685.
Rotshtein, P., Richardson, M. P., Winston, J. S., Kiebel, S. J., Vuilleumier, P., Eimer, M., ... Dolan, R. J. (2010). Amygdala damage affects event-related potentials for fearful faces at specific time windows. Human Brain Mapping, 31, 1089–1105.
Streit, M., Wölwer, W., Brinkmeyer, J., Ihl, R., & Gaebel, W. (2000). Electrophysiological correlates of emotional and structural face processing in humans. Neuroscience Letters, 278, 13–16.
Surguladze, S. A., Brammer, M. J., Young, A. W., Andrew, C., Travis, M. J., Williams, S. C. R., & Phillips, M. L. (2003). A preferential increase in the extrastriate response to signals of danger. NeuroImage, 19, 1317–1328.
Tamietto, M., & de Gelder, B. (2010). Neural bases of the non-conscious perception of emotional signals. Nature Reviews Neuroscience, 11, 697–709.
Taylor, P. J., Walsh, V., & Eimer, M. (2008). Combining TMS and EEG to study cognitive function and cortico-cortical interactions. Behavioural Brain Research, 191, 141–147.
Thut, G., Ives, J. R., Kampmann, F., Pastor, M. A., & Pascual-Leone, A. (2005). A new device and protocol for combining TMS and online recordings of EEG and evoked potentials. Journal of Neuroscience Methods, 141, 207–217.
Virtanen, J., Ruohonen, J., Naatanen, R., & Ilmoniemi, R. (1999). Instrumentation for the measurement of electrical brain responses to transcranial magnetic stimulation. Medical & Biological Engineering & Computing, 37, 322–326.
Vuilleumier, P. (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9, 585–594.
Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194.
Wright, P., & Liu, Y. (2006). Neutral faces activate the amygdala during identity matching. NeuroImage, 29, 628–636.
Wronka, E., & Walentowska, W. (2014). Attentional modulation of the emotional expression processing studied with ERPs and sLORETA. Journal of Psychophysiology, 28, 32–46.
Wronka, E., & Walentowska, W. (2011). Attention modulates emotional expression processing. Psychophysiology, 48, 1047–1056.
Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997). Facial expression megamix: Tests of dimensional and category accounts of emotion recognition. Cognition, 63, 271–313.