NeuroImage 36 (2007) 480–487 (www.elsevier.com/locate/ynimg)
Amygdala responses to nonlinguistic emotional vocalizations

Shirley Fecteau a,b,⁎, Pascal Belin a,c, Yves Joanette a and Jorge L. Armony d

a Faculté de médecine, Université de Montréal, and Institut Universitaire de Gériatrie de Montréal, Canada
b Center for NonInvasive Brain Stimulation, Beth Israel Deaconess Medical Center, Harvard Medical School, MA 02215, USA
c Centre for Cognitive Neuroimaging, University of Glasgow, UK; Laboratory for Brain, Music and Sound (BRAMS), Université de Montréal and McGill University, Canada
d Douglas Hospital Research Centre, McGill University, Canada
Received 1 August 2006; revised 20 February 2007; accepted 23 February 2007. Available online 13 March 2007.

Abstract

Whereas there is ample evidence for a role of the amygdala in the processing of visual emotional stimuli, particularly those with negative value, discrepant results have been reported regarding amygdala responses to emotional auditory stimuli. The present study used event-related functional magnetic resonance imaging to investigate cerebral activity underlying processing of emotional nonlinguistic vocalizations, with a particular focus on neural changes in the amygdala. Fourteen healthy volunteers were scanned while performing a gender identification task. Stimuli, previously validated on emotional valence, consisted of positive (happiness and sexual pleasure) and negative (sadness and fear) vocalizations, as well as emotionally neutral sounds (e.g., coughs). Results revealed bilateral amygdala activation in response to all emotional vocalizations when compared to neutral stimuli. These findings suggest that the generally accepted involvement of the amygdala in the perception of emotional visual stimuli, such as facial expressions, also applies to stimuli within the auditory modality. Importantly, this amygdala response was observed for both positive and negative emotional vocalizations.
⁎ Corresponding author. Center for NonInvasive Brain Stimulation, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline, Boston, MA 02215, USA. E-mail address: [email protected] (S. Fecteau).
doi:10.1016/j.neuroimage.2007.02.043

Introduction

It is generally accepted that the amygdala plays an important role in processing emotional visual stimuli, such as facial expressions and affective pictures, as evidenced by lesion (e.g., Adolphs et al., 1994; Broks et al., 1998) and neuroimaging (e.g., Breiter et al., 1996; Morris et al., 1996; Reinders et al., 2005; Wang et al., 2005; Whalen et al., 1998) studies (see also the review by Phelps and LeDoux, 2005). Yet the amygdala is a multimodal structure, receiving information from several sensory systems. Indeed, the human amygdala has been shown to respond to emotional stimuli in modalities other than vision, including auditory (Büchel et al., 1998; Pallesen et al., 2005; Phillips et al., 1998), gustatory (O'Doherty et al., 2001, 2002) and olfactory (Gottfried et al., 2002a,b) stimuli. These findings are consistent with proposed models of amygdala function that emphasize its role in the detection of biologically relevant stimuli in the environment, regardless of their modality (e.g., Armony and LeDoux, 2000; Glascher and Adolphs, 2003; Sander et al., 2003). In particular, a wealth of anatomical and physiological data from rodents (LeDoux et al., 1990), as well as some evidence from nonhuman primates (Aggleton et al., 1980; Brothers et al., 1990; Turner et al., 1980; Yukie, 2002), supports the notion that the amygdala plays an important role in the detection of affect-laden auditory stimuli.

However, the involvement of the amygdala in processing emotional auditory information in humans is less clear, as inconsistent findings exist in the literature (Phan et al., 2002). For example, in terms of emotional prosody (the modulation of the acoustic properties of speech to express an emotion), some studies have reported right amygdala responses to anger (Sander et al., 2005), while others have failed to find any significant amygdala involvement in emotional prosody (Imaizumi et al., 1997; Pourtois et al., 2005). Similarly, lesion studies disagree on whether patients with amygdala damage recognize negative prosody normally (Adolphs et al., 2001; Adolphs and Tranel, 1999; Anderson and Phelps, 1998) or not (Scott et al., 1997).

One class of auditory stimuli that is of particular relevance for emotion is that of nonlinguistic vocalizations. Emotional nonlinguistic vocalizations, such as laughs, gasps or screams, are part of innate behaviors to communicate emotional states (Barr et al., 2000; Kreiman, 1997) and can thus be thought of as the auditory equivalent of facial expressions (Belin et al., 2004). Although prosody and nonlinguistic vocalizations are both means of expressing emotions vocally, they differ in the emotional information they can convey (Barr et al., 2000; Ross et al., 1986; Scherer, 1981). In particular, some emotions are rarely expressed naturally through prosody, but instead consist of brief nonlinguistic vocalizations, such as moans, groans or bursts of laughter (Scherer, 1981). Despite their importance in social communication, little is known about the neural structures underlying the processing of
nonlinguistic emotional vocalizations. In particular, conflicting findings have been reported concerning the possible involvement of the amygdala. Some studies found impaired recognition of negative vocalizations in patients with amygdala lesions (Scott et al., 1997), while others reported no deficits (Anderson and Phelps, 1998). Functional neuroimaging studies have also yielded inconsistent results. For instance, in an fMRI study combining vocal and facial expressions, Phillips et al. (1998) found that, consistent with the multimodal hypothesis, fearful expressions, both visual and auditory, activated the amygdala bilaterally when compared to mildly happy stimuli. In contrast, in a PET experiment, Morris et al. (1999) observed decreased activity in the right amygdala for fearful vocalizations compared to the combination of happy, sad and neutral sounds. More recently, Sander and Scheich (2001) used a noise-reduced fMRI protocol to scan two groups of subjects while they performed a gender or emotion discrimination task on samples of laughs and cries. Analyses revealed bilateral activation of the amygdala while subjects listened to both crying and laughing, compared to a silent baseline. Activation in the amygdala was stronger in the right hemisphere, but did not differ between laughing and crying, and did not seem to depend on the task performed. Finally, Meyer et al. (2005) reported no significant changes in amygdala activity when contrasting vocalizations of happiness (laughs) with speech or nonvocal sounds in an event-related fMRI study. Unfortunately, large methodological differences among these studies make it difficult to compare them and thus to draw any definitive conclusions on the role of the amygdala in the emotional processing of nonlinguistic vocalizations.

Here, we sought to shed further light on this question by conducting an event-related fMRI study in which we presented previously validated (Fecteau et al., 2005) vocalizations representing positive (happiness and sexual pleasure) or negative (fear and sadness) emotions, together with emotionally neutral vocalizations (e.g., yawning, coughing). Critically, and unlike previous studies, vocalizations were produced by different speakers, thus minimizing the chances of speaker-specific habituation effects (Belin and Zatorre, 2003), and were presented in a pseudo-random order to reduce potential emotional priming, habituation or expectation effects (Breiter et al., 1996; Büchel et al., 1998, 1999; Fischer et al., 2000). In addition, we used neutral vocalizations as control stimuli to specifically identify the emotional contributions to any observed activations.

Materials and methods

Subjects

Fourteen healthy right-handed adults (six women; mean age 23.4 ± 3.0 years) were scanned while performing a gender identification task on the vocalizations (another participant was excluded for excessive motion). Participants were included if they reported normal hearing and no history of hearing impairment. They were not explicitly informed of the emotional experimental variable. Written informed consent was obtained, and the ethics committee of the Centre Hospitalier de l'Université de Montréal approved the study.

Stimuli

Positive stimuli consisted of 12 vocalizations of happiness (laughs) and 12 vocalizations of sexual pleasure. Negative stimuli
were 12 vocalizations of sadness (cries) and 12 vocalizations of fear (screams). In addition, 24 emotionally neutral vocalizations (e.g., cough, throat clearing) were included. Each vocalization was produced by a different adult speaker (72 male and 72 female speakers). Stimuli had been previously validated (Fecteau et al., 2005) by a group of 60 individuals who rated them on emotional valence and intensity. The main acoustic parameters of the stimuli for the different categories were obtained using Praat (www.praat.org) and are shown in Table 1A.

Each stimulus was presented twice to increase statistical power, delivered in a pseudo-randomized order with a mean stimulus onset asynchrony (SOA) of 4.3 s. To avoid habituation and/or stimulus expectation effects, we ensured that stimuli from the different emotions were distributed equally throughout the entire duration of the experiment and that no more than four vocalizations from the same emotion were presented in a row. Longer SOAs (so-called null events) were also included to provide an estimate of baseline activity and to reduce the participants' expectation of hearing a sound (an illustrative ordering sketch follows the Scanning procedure section below). Stimuli were presented using MCF (DigiVox, Montreal) at a sound-pressure level of 85–90 dB. They were delivered binaurally through pneumatic headphones, which act as a low-pass filter with a cutoff frequency around 2 kHz, sealed by foam ear inserts and further shielded by plastic ear defenders that attenuated the fMRI scanning noise by about 30 dB. The subjects' task was to perform a gender decision on the stimuli (i.e., to decide whether the speaker was male or female). After the scanning session, participants were asked to rate the vocalizations on emotional valence and intensity using visual analog scales (valence range: −50 to 50, very negative to very positive; intensity range: 0 to 100, not at all intense to extremely intense). The order of stimulus presentation during rating was counterbalanced across participants.

Table 1
Stimulus properties

A. Acoustic parameters
            Duration (s)   RMS (a.u.)   Median f0 (Hz)   UNV (%)    HNR (dB)   Range f0 (Hz)
Neutral     1.25           0.022        437.00           50.98      7.45       727.28
Sadness     1.40           0.019        296.77           32.21⁎     10.35      295.45⁎
Fear        1.57           0.092⁎⁎      866.97⁎⁎         0.74⁎⁎     9.91       602.80
Happiness   1.32           0.039        489.93           30.23⁎     9.20       855.45
Pleasure    1.39           0.033        393.73           42.63      7.44       235.74⁎

B. Subjective ratings
            Valence (−50 to 50)   Intensity (0–100)
Neutral     −3.86                 39.90
Sadness     −33.03⁎⁎              52.87⁎⁎
Fear        −37.80⁎⁎              85.58⁎⁎
Happiness   31.58⁎⁎               61.14⁎⁎
Pleasure    30.65⁎⁎               68.81⁎⁎

Abbreviations: RMS, root mean square; a.u., arbitrary units (relative to peak amplitude); f0, fundamental frequency; UNV, unvoiced segments; HNR, harmonics-to-noise ratio.
⁎ p < 0.05, compared to neutral stimuli. ⁎⁎ p < 0.005, compared to neutral stimuli.
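For illustration, the acoustic measures in Table 1A could be computed along the following lines. This is a minimal sketch assuming the Parselmouth library, a Python interface to Praat (the measures reported here were computed with Praat itself); the file name is a hypothetical placeholder, not one of the actual stimuli.

```python
# Minimal sketch of how the acoustic measures in Table 1A could be computed.
# Assumes the Parselmouth library (a Python interface to Praat); the file
# name is a hypothetical placeholder, not one of the actual stimuli.
import numpy as np
import parselmouth

def acoustic_parameters(wav_path):
    snd = parselmouth.Sound(wav_path)
    duration = snd.duration  # seconds

    # RMS amplitude relative to peak (arbitrary units, as in Table 1A)
    samples = snd.values.flatten()
    rms = np.sqrt(np.mean(samples ** 2)) / np.max(np.abs(samples))

    # Fundamental frequency (f0): median and range over voiced frames
    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']
    voiced = f0[f0 > 0]
    median_f0 = float(np.median(voiced))
    range_f0 = float(voiced.max() - voiced.min())

    # Percentage of unvoiced frames (UNV)
    unv = 100.0 * float(np.mean(f0 == 0))

    # Harmonics-to-noise ratio (HNR); Praat codes undefined frames as -200 dB
    hnr_values = snd.to_harmonicity().values.flatten()
    hnr = float(hnr_values[hnr_values > -200].mean())

    return dict(duration=duration, rms=rms, median_f0=median_f0,
                range_f0=range_f0, unv=unv, hnr=hnr)

print(acoustic_parameters("laugh_01.wav"))  # hypothetical file name
```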
Scanning procedure

Scanning was performed on a 1.5-T MRI system (Magnetom Vision, Siemens Electric, Erlangen, Germany). Functional scans
were acquired with a single-shot echo-planar gradient-echo (EPI) pulse sequence (TR = 2.6 s, TE = 40 ms, flip angle = 90°, FOV = 215 mm, matrix = 128 × 128). The 28 axial slices (resolution 3.75 × 3.75 mm in plane, 5 mm between planes) in each volume were aligned with the AC–PC line, covering the whole brain. A total of 320 volumes were acquired (the first four volumes of each series were later discarded to allow for T1 saturation). After functional scanning, T1-weighted anatomical images were obtained for each participant (1 mm × 1 mm × 1 mm resolution). Scanner noise was continuous throughout the experiment, providing a constant auditory background. Image processing and statistical analysis were performed using SPM2b (Wellcome Department of Cognitive Neurology).
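To make the presentation constraints described in the Stimuli section concrete, a small sketch follows. It is purely illustrative and not the authors' code; in particular, the number of null events is an assumption, as it is not reported in the text.

```python
# Illustrative sketch (not the authors' code) of a pseudo-random trial order
# satisfying the constraints described in the Stimuli section: 144
# vocalizations (72 stimuli, each presented twice), interspersed null events,
# and no more than four consecutive trials from the same category.
import random

TRIALS = (["neutral"] * 48 + ["happiness"] * 24 + ["pleasure"] * 24
          + ["sadness"] * 24 + ["fear"] * 24)
N_NULLS = 30  # assumed for illustration; not reported in the text

def make_order(max_run=4, seed=0):
    rng = random.Random(seed)
    while True:  # rejection sampling: reshuffle until the constraint holds
        order = TRIALS + ["null"] * N_NULLS
        rng.shuffle(order)
        run_length, previous, valid = 0, None, True
        for trial in order:
            run_length = run_length + 1 if trial == previous else 1
            previous = trial
            if trial != "null" and run_length > max_run:
                valid = False
                break
        if valid:
            return order

print(make_order()[:12])
```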
Statistical analysis

Data were analyzed in a two-stage mixed-effects procedure (equivalent to a random-effects analysis): BOLD responses for each participant for the five categories were first modeled using a synthetic hemodynamic response function in the context of the fixed-effects general linear model. Subject-specific linear contrasts on the parameter estimates were then entered into a second-level analysis to perform between-subjects comparisons, resulting in a t-statistic for each voxel. These t-statistics (transformed to Z-statistics) constitute a statistical parametric map (SPM). SPMs were thresholded at p = 0.001 (uncorrected) at the voxel level and p = 0.05 (uncorrected) at the cluster level (corresponding to a minimum cluster size of 20 voxels), except for the amygdala, where p < 0.05 (corrected for multiple comparisons within an anatomically defined region of interest; Tzourio-Mazoyer et al., 2002) was applied, based on our a priori hypothesis of the involvement of this structure in the processing of emotional vocalizations. Statistical comparisons involved contrasts between different experimental conditions (e.g., emotional vs. neutral vocalizations), as well as contrasts between experimental conditions and baseline.
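For a concrete picture of the second-level step, a rough sketch in modern tooling follows. The analyses here were carried out in SPM2, so this nilearn version is only an analogue under stated assumptions, and the per-subject contrast-image paths are hypothetical.

```python
# Rough modern analogue (nilearn) of the second-level, random-effects step
# described above; the study itself used SPM2, so this is illustrative only.
# The per-subject first-level contrast images are hypothetical file paths.
import pandas as pd
from nilearn.glm import threshold_stats_img
from nilearn.glm.second_level import SecondLevelModel

contrast_imgs = [f"sub-{i:02d}_emotional_minus_neutral.nii.gz"
                 for i in range(1, 15)]  # 14 subjects (hypothetical paths)

# One-sample t-test across subjects on the first-level contrast estimates
design = pd.DataFrame({"intercept": [1] * len(contrast_imgs)})
model = SecondLevelModel().fit(contrast_imgs, design_matrix=design)
z_map = model.compute_contrast("intercept", output_type="z_score")

# Voxel-level p < 0.001 (uncorrected) with a 20-voxel cluster extent,
# mirroring the whole-brain threshold used here
thresholded_map, threshold = threshold_stats_img(
    z_map, alpha=0.001, height_control="fpr", cluster_threshold=20)
```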
Results

Behavioral results

Performance on gender identification was 75.8% (±7.9%). Ratings of valence and intensity for the different stimulus categories are shown in Table 1B. Our a priori categorization of the stimuli as positive, negative or neutral was confirmed: valence ratings differed significantly between these categories (all p < 0.0001), with no significant differences within the positive (happiness vs. sexual pleasure) or negative (fear vs. sadness) categories, consistent with our previous findings in a different, larger sample (Fecteau et al., 2005). All emotional categories were rated as more intense than neutral sounds. In addition, fearful screams were rated as more intense than all other categories, and vocalizations of pleasure were rated as more intense than cries (all other pairwise comparisons were nonsignificant).

Neuroimaging results

Contrasting the emotional vocalizations (happiness, sexual pleasure, sadness, and fear) with the neutral ones revealed significant bilateral amygdala activation (Fig. 1 and Table 2). These activations remained significant when positive and negative stimuli were individually compared with the neutral vocalizations, although at a lower threshold for negative stimuli in the left amygdala (p < 0.005, uncorrected). Further post hoc comparisons revealed that the activation in the right amygdala was significantly larger for positive than for negative stimuli (t(13) = 3.29, p = 0.006), whereas no significant difference between these two categories was observed in the left amygdala (t(13) < 1).
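The post hoc comparisons above amount to paired t-tests across the 14 subjects on the peak-voxel parameter estimates. A minimal sketch, with simulated values standing in for the actual estimates:

```python
# Sketch of the post hoc comparison above: a paired t-test across the 14
# subjects on peak-voxel parameter estimates. Values are simulated stand-ins,
# not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
positive = rng.normal(0.6, 0.4, size=14)  # right-amygdala estimates, positive
negative = rng.normal(0.3, 0.4, size=14)  # same subjects, negative

t, p = stats.ttest_rel(positive, negative)  # paired t-test, df = n - 1 = 13
print(f"t(13) = {t:.2f}, p = {p:.3f}")
```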
Fig. 1. (A) Statistical parametric map, thresholded at p = 0.01 for visualization purposes, showing the bilateral amygdala activations for the contrast of emotional vs. neutral vocalizations. (B) Parameter estimates for each stimulus category are shown for the peaks in the left (top) and right (bottom) amygdala. Symbols indicate the level of significance associated with the paired t-test between each emotional category and the neutral vocalizations: **p < 0.005 and *p < 0.05. N: neutral vocalizations; H: vocalizations of happiness; P: vocalizations of sexual pleasure; S: vocalizations of sadness; F: fearful screams.
Table 2
Significant activations associated with the various contrasts of interest

Anatomical location                      Size   x     y     z     Z-score (peak voxel)   BA

(A) Emotional > Neutral
R primary auditory cortex/middle STS     249    54    1     8     5.07                   42/21/22
L middle STG                             83     −38   −32   8     4.17                   22/42
R anterior STG                           29     56    8     −6    3.57                   22/21
L primary auditory cortex                41     −54   −20   0     3.55                   22/42
L amygdala                               19     −22   −4    −22   3.51⁎                  –
R amygdala                               30     26    0     −26   4.24⁎                  –

(B) Positive > Neutral
R STG                                    77     46    −28   0     4.41                   21/22
R primary auditory cortex/middle STS     25     54    −16   8     4.23                   22/42
L amygdala                               21     −22   −4    −22   4.37⁎                  –
R amygdala                               39     26    0     −24   4.48⁎                  –

(C) Negative > Neutral
R middle STS                             212    50    −14   8     4.56                   22/42
L middle STG                             96     −38   −30   8     4.40                   22/42
L primary auditory cortex/STS            65     −52   −18   4     4.33                   22/42
L superior frontal gyrus                 30     −18   14    68    4.32                   6
L precentral gyrus                       24     −30   −22   48    4.05                   4/6
R anterior STG                           26     56    8     −6    3.89                   22
R superior frontal gyrus                 40     4     32    46    3.80                   8/32
R amygdala                               7      26    0     −28   3.63⁎                  –

(D) Negative > Positive
L middle STS                             46     −46   −18   −2    4.26                   22/21

(E) Positive > Negative
L occipital cortex                       35     −4    −92   18    4.00                   18

Coordinates are in MNI space. BA: Brodmann areas. Significance was assessed using a threshold of p < 0.001 (uncorrected) together with a cluster threshold of p < 0.05 (uncorrected), except for the amygdala (⁎), where a value of p < 0.05, corrected for multiple comparisons within an anatomically defined region of interest, was used.
The emotional vocalizations also induced greater activity than the neutral ones within the superior temporal plane, in primary auditory cortex bilaterally, extending ventrally to the superior temporal sulcus (STS) in the right hemisphere. This contrast also elicited increased activity in the middle part of the left superior temporal gyrus (STG), the anterior part of the right STG, and the left primary auditory cortex (Table 2A).

The positive vocalizations, compared to the neutral ones, elicited greater activity in the right STG, as well as in the right primary auditory cortex extending to the middle part of the STS (Table 2B). Increases in neural activity associated with the negative vocalizations, contrasted with the neutral ones, were observed in the right middle STS, left middle STG, left primary auditory cortex/STS, bilateral superior frontal gyri, left precentral gyrus, and the anterior part of the right STG (Table 2C). None of the opposite contrasts (neutral vs. emotional; neutral vs. negative; neutral vs. positive) yielded significant activations. Finally, the negative vs. positive contrast elicited activation within the left middle STS (Table 2D), whereas the positive vs. negative contrast yielded a significant cluster in the left occipital cortex (Table 2E).

Discussion

The aim of the present study was to investigate neural changes in response to emotional nonlinguistic vocalizations, with a particular
emphasis on the amygdala. Unlike most previous studies, we used an event-related design in which each vocalization was produced by a different speaker, and we included emotionally neutral vocalizations as a control condition. Overall, the emotional vocalizations, compared to the neutral ones, elicited greater activity in the amygdala, as well as in other temporal regions. Critically, we observed enhanced activity in the bilateral amygdala in response to vocalizations of both positive and negative emotions.

Amygdala

Consistent with our hypothesis, we found increased activity in the amygdala, especially in the right hemisphere, in response to negative emotions, namely vocalizations of sadness and fear. These findings agree with previous studies showing right amygdala activation in response to vocalizations of fear (Phillips et al., 1998) and bilateral amygdala responses to vocalizations of sadness (Sander and Scheich, 2001). Our results appear to be inconsistent with those of Morris et al. (1999), who found deactivations in the right amygdala for vocalizations of fear. However, their comparison involved contrasting vocalizations of fear with the combination of sad, happy and neutral vocalizations. Therefore, the reported deactivation could instead be thought of as an activation associated with the opposite contrast (Gusnard and Raichle, 2001). This interpretation would be consistent with our finding of significant right amygdala activation for sad and happy vocalizations. Thus, our findings provide further support for an involvement of the amygdala in processing negative nonlinguistic vocalizations, consistent with the wealth of imaging studies showing the involvement of the amygdala in processing negative visual stimuli, particularly fearful facial expressions (see reviews by Davidson and Irwin, 1999; Phan et al., 2004).

We also found greater bilateral amygdala activity in response to the positive vocalizations (happiness and sexual pleasure) compared to the neutral ones (e.g., coughs). This is in line with the bilateral amygdala activation associated with vocalizations of happiness reported by Sander and Scheich (2001), but see Morris et al. (1999) and Meyer et al. (2005). These findings are of particular interest because the involvement of the amygdala in processing positive stimuli is still under debate (Canli et al., 1998; Garavan et al., 2000). Nonetheless, a growing literature (see Baxter and Murray, 2002; Sander et al., 2003; Zald, 2003 for reviews) supports its role in processing positive information, across a variety of stimuli, such as static facial expressions (Breiter et al., 1996; Canli et al., 2002; Garavan et al., 2001; Gorno-Tempini et al., 2001; Killgore et al., 2001; Gur et al., 2002; Hamann et al., 2002; Pessoa et al., 2002a,b; Wright et al., 2002; Yang et al., 2002; Winston et al., 2003; Somerville et al., 2004; Fitzgerald et al., 2005), dynamic facial expressions (Sato et al., 2004), linguistic non-auditory stimuli (Hamann and Mao, 2002), olfactory stimuli (Gottfried et al., 2002a,b), taste (O'Doherty et al., 2001, 2002), erotic film excerpts (Beauregard et al., 2001; Karama et al., 2002), rewards (Zalla et al., 2000; Knutson et al., 2001a,b), and food stimuli (LaBar et al., 2001). Likewise, deficits in processing positive stimuli have been observed in patients with amygdala damage (Bechara et al., 1999) and in one patient with bilateral amygdala damage in addition to extensive damage in both temporal lobes (Broks et al., 1998; see also Fine and Blair, 2000).
More recently, it has been shown that electrical stimulation of the left amygdala in patients with refractory epilepsy evoked positive emotions, such as happiness and joy (Lanteaume et al., 2006).
Taken together, our results are consistent with a growing literature suggesting that, as is the case in nonhuman animals (LeDoux, 2000), the amygdala is involved in the processing of emotional information across modalities. Furthermore, our results show that, at least within the auditory domain, the amygdala responds to both positive and negative stimuli. These findings are consistent with the notion that the amygdala responds to the "emotionality" or hedonic value of a stimulus (Cunningham et al., 2004; Garavan et al., 2001; Kensinger and Schacter, 2006; Winston et al., 2003, 2005), rather than being specific to either positive or negative valence. In that sense, our findings provide further support for the model put forward by Sander et al. (2003), which proposes that the amygdala is a supramodal detector of biologically relevant stimuli, positive and negative, in the environment.

Post hoc analyses (see Fig. 1B) confirmed that the amygdala responded to each emotional category (except for fear in the left amygdala, where there was a trend that did not reach statistical significance). However, there were significant differences in the magnitude of the response between categories. In particular, the right amygdala responded more strongly to positive than to negative stimuli, an effect mainly driven by the vocalizations of sexual pleasure, a positive, high-intensity emotion. This activation is consistent with previous studies (Hamann et al., 2004) showing significant amygdala activation to visual erotic stimuli and could reflect a non-linear interaction effect of valence and intensity on amygdala responses (Winston et al., 2003). As discussed below, future studies using a wider range of emotional categories, permitting the dissociation of valence from intensity, should shed further light on this issue.

Temporal regions

Other brain regions besides the amygdala showed neural changes in response to the emotional vocalizations. The largest activation cluster associated with the emotional vocalizations was observed in the primary auditory cortex bilaterally, extending ventrally to the middle STS in the right hemisphere. The right middle STS is thought to be involved in high-level analysis of complex acoustic information in human voices (Belin et al., 2004; von Kriegstein et al., 2003). Previous studies have reported anterior and middle STS activity as being selective for human vocalizations relative to non-vocal sounds (Belin et al., 2000) and nonhuman animal vocalizations (Fecteau et al., 2004), with its more anterior part being sensitive to speaker adaptation (Belin and Zatorre, 2003). In a recent study (Grandjean et al., 2005), angry prosody elicited activations in the right middle STS. Thus, the right middle STS appears to be involved in processing the human voice, with or without linguistic information, and is sensitive to its emotional content.

Another large set of activations in response to emotional vocalizations was found in the anterior aspects of the right STG. These findings, consistent with previous studies using emotional vocalizations (Phillips et al., 1998; Meyer et al., 2005) and prosody (Buchanan et al., 2000; Mitchell et al., 2003; Wildgruber et al., 2002), provide further support for the hypothesis that the STG plays a role in the extraction of general emotional information (Phillips et al., 1998).
The present results are also in line with Schirmer and Kotz's (2006) model, which proposes that emotional acoustic information is integrated in regions within the STS and STG, especially in the right hemisphere, to form an emotional "gestalt" that is then made accessible to higher order cognitive processes, possibly taking place in the prefrontal cortex.
With regard to the primary auditory cortices, increased signal was associated with the emotional vocalizations. These activations are consistent with the proposed interacting network of brain regions engaged in processing vocal emotions. There are anatomical connections between the primary auditory cortex (as well as the STS and STG) and other regions such as the prefrontal cortex (Barbas, 2000; see also Cavada et al., 2000 for a review) and the amygdala (Amaral et al., 1992), regions known to be involved in emotional processing (e.g., Adolphs, 2002; Phan et al., 2004; Phillips et al., 2003; Zald, 2003). Thus, one possible interpretation is that the increased activity observed in the primary auditory cortex for the emotional vocalizations might result from feedback from regions specialized in emotional processing. However, the present experiment cannot directly test this hypothesis. Alternatively, the observed activations in the primary auditory cortex could reflect acoustic differences between the stimulus categories (see Table 1A). Another possible explanation is that early cortical levels may already be sensitive to the emotional content of stimuli. This is in line with findings from classical conditioning studies showing enhanced auditory cortex responses to auditory stimuli (e.g., tones) with an acquired aversive value (Armony and Dolan, 2001). Further research is necessary to assess the exact nature of the involvement of the primary auditory cortex in emotional processing, in particular in relation to the various acoustic parameters that differ between emotional and neutral vocalizations. In addition, connectivity studies between the amygdala and auditory cortex could shed further light on the functional interaction between these structures during the detection of emotional stimuli, and thus contribute to a better understanding of the neural underpinnings of emotional processing, not only in terms of structures but, more critically, at the network level.

Frontal regions

Of particular interest, vocalizations of negative emotions (sadness and fear) induced activations in bilateral superior frontal regions. Previous studies have reported enhanced activity in the superior frontal gyrus and ventral prefrontal cortex (Morris et al., 1999) and in the medial frontal cortex (Phillips et al., 1998) during the processing of vocalizations of fear, as well as in the orbitofrontal cortex during the processing of prosody expressing anger (Sander et al., 2005). It has been shown that emotional stimuli engage attentional resources in an automatic fashion (Armony and Dolan, 2001); thus, it is possible that the frontal activations observed here reflect increased attention specific to the negative emotional vocalizations. Schirmer and Kotz (2006) proposed that frontal regions, such as the orbitofrontal cortex and the inferior frontal gyri, are involved in higher order cognitive processing of emotional prosody (e.g., judging the emotions expressed). The lack of increased signal in these areas in the present study might be due to the use of a task unrelated to emotion (i.e., gender identification).

Methodological considerations

It has been suggested (Anderson et al., 2003; Lewis et al., 2006; Small et al., 2003) that the amygdala codes for the emotional intensity of stimuli rather than their valence.
Unfortunately, in our study, valence strength and intensity were significantly correlated (r = 0.77), as is usually the case with other types of emotional stimuli, such as words (Lewis et al., 2006) and facial expressions (Sergerie et al., 2006).
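To make the quantity concrete: valence strength here means the absolute value of the signed valence rating, and the r = 0.77 figure is its correlation with the intensity ratings. A toy sketch with simulated ratings illustrates the computation:

```python
# Toy illustration of the correlation discussed here: "valence strength" is
# the absolute value of the signed valence rating, correlated with intensity.
# Ratings are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
valence = rng.uniform(-50, 50, size=144)  # one signed rating per stimulus
intensity = 40 + 0.9 * np.abs(valence) + rng.normal(0, 8, size=144)

r = np.corrcoef(np.abs(valence), intensity)[0, 1]
print(f"r = {r:.2f}")
```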
Therefore, we cannot unequivocally determine whether the observed amygdala activations associated with emotional vocalizations were due to the valence or the intensity of the stimuli used. However, our results provide indirect evidence against a purely intensity-based amygdala involvement in emotional processing: although fearful screams were rated as significantly more intense than cries of sadness (see Table 1B), no significant differences in amygdala activation were observed between these two types of stimuli. Still, sad vocalizations were rated as more intense than neutral ones, leaving open the possibility of a non-linear influence of stimulus intensity on amygdala activation. Future studies employing stimuli specifically chosen to dissociate valence from intensity should shed further light on this question. In addition, concurrent physiological measurements, such as electrodermal responses, should provide a more direct measure of stimulus-induced arousal.

Another important concern when studying responses to emotional stimuli, in any modality, is the potential effect of differences in physical characteristics between the emotional and neutral stimuli used. While it is probably impossible to completely circumvent this problem, a novel aspect of this study was our use of different types of negative (screams of fear and crying) and positive (sexual pleasure and laughter) vocalizations in the same experiment. The fact that we observed significant amygdala activation for all types of emotional vocalizations, compared to neutral ones, suggests that these activations were unlikely to be solely due to differences in simple acoustic parameters between stimuli (see Table 1A). However, we cannot rule out that other, yet to be determined, physical differences between the emotional and neutral stimuli could underlie the observed amygdala activation. Further studies using a larger set of stimuli, designed to control for a wide range of acoustic parameters, are necessary to differentiate brain activity associated with emotional processing from that due to acoustic differences between emotional and neutral stimuli.

Finally, the issue of task is also worth mentioning. In our study, subjects performed an incidental gender discrimination task. It has been suggested that amygdala, as well as STS, activity may be modulated by the task performed during scanning. For instance, in the visual domain, greater amygdala responses have been observed during an emotion discrimination task compared to an age discrimination task (Gur et al., 2002), as well as during passive viewing compared to gender or emotion identification (Lange et al., 2003). However, Sander and Scheich (2001) found no difference in the magnitude of amygdala responses between self-induced emotion and pitch-shift detection tasks. Moreover, a recent meta-analysis (Baas et al., 2004) suggests that amygdala activations are not significantly related to task instructions. The STS has also been reported to be task-dependent: STS activations were observed during a direct emotional task, but not during a non-emotional one (Winston et al., 2003). Further studies should explore whether the responses to emotional vocalizations obtained here would be modulated by the task performed by the subjects or by attentional manipulations, as has been explored for facial expressions (e.g., see reviews by Dolan and Vuilleumier, 2003; Pessoa et al., 2002a,b) and prosody (Mitchell et al., 2003; Grandjean et al., 2005; Sander et al., 2005).
Conclusions

The findings reported here provide strong, direct support for a role of the amygdala in the processing of nonverbal emotional stimuli
in the auditory domain. Importantly, amygdala responses were observed for a variety of emotional expressions, of both negative and positive valence. These results are consistent with the hypothesis of a general involvement of the human amygdala in affective processing, spanning different emotions and sensory modalities.

Acknowledgments

This study was supported by the Canadian Institutes of Health Research (J.L.A. and P.B.), the Natural Sciences and Engineering Research Council, the Fonds Québécois de Recherche sur la Nature et les Technologies, the Université de Montréal (P.B.), and the Fonds de la Recherche en Santé du Québec (S.F.). J.L.A. is supported by the Canada Research Chairs Program.

References

Adolphs, R., Tranel, D., 1999. Intact recognition of emotional prosody following amygdala damage. Neuropsychologia 37, 1285–1292.
Adolphs, R., Tranel, D., Damasio, H., Damasio, A., 1994. Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature 372, 669–672.
Adolphs, R., 2002. Neural systems for recognizing emotion. Curr. Opin. Neurobiol. 12, 169–177.
Adolphs, R., Jansari, A., Tranel, D., 2001. Hemispheric perception of emotional valence from facial expressions. Neuropsychology 15, 516–524.
Aggleton, J.P., Burton, M.J., Passingham, R.E., 1980. Cortical and subcortical afferents to the amygdala of the rhesus monkey (Macaca mulatta). Brain Res. 190 (2), 347–368.
Amaral, D.G., Price, J.L., Pitkänen, A., Carmichael, S.T., 1992. Anatomical organization of the primate amygdaloid complex. In: Aggleton, J.P. (Ed.), The Amygdala: Neurobiological Aspects of Emotion, Memory and Mental Dysfunction. Wiley-Liss, New York, pp. 1–66.
Anderson, A.K., Phelps, E.A., 1998. Intact recognition of vocal expressions of fear following bilateral lesions of the human amygdala. NeuroReport 9, 3607–3613.
Anderson, A.K., Christoff, K., Stappen, I., Panitz, D., Ghahremani, D.G., Glover, G., Gabrieli, J.D., Sobel, N., 2003. Dissociated neural representations of intensity and valence in human olfaction. Nat. Neurosci. 6 (2), 196–202.
Armony, J.L., LeDoux, J.E., 2000. How danger is encoded: towards a systems, cellular, and computational understanding of cognitive-emotional interactions. In: Gazzaniga, M.S. (Ed.), The New Cognitive Neurosciences, 2nd ed. MIT Press, Cambridge, pp. 1067–1079.
Armony, J.L., Dolan, R.J., 2001. Modulation of auditory neural responses by a visual context in human fear conditioning. NeuroReport 12, 3407–3411.
Baas, D., Aleman, A., Kahn, R.S., 2004. Lateralization of amygdala activation: a systematic review of functional neuroimaging studies. Brain Res. Rev. 45, 96–103.
Barbas, H., 2000. Connections underlying the synthesis of cognition, memory, and emotion in primate prefrontal cortices. Brain Res. Bull. 52, 319–330.
Barr, R.G., Hopkins, B., Greene, J.A., 2000. Crying as a Sign, a Symptom, and a Signal. Cambridge University Press, New York.
Baxter, M.G., Murray, E.A., 2002. The amygdala and reward. Nat. Rev. Neurosci. 3, 563–573.
Beauregard, M., Levesque, J., Bourgouin, P., 2001. Neural correlates of conscious self-regulation of emotion. J. Neurosci. 21, RC165.
Bechara, A., Damasio, H., Damasio, A.R., Lee, G.P., 1999. Different contributions of the human amygdala and ventromedial prefrontal cortex to decision-making. J. Neurosci. 19 (13), 5473–5481.
Belin, P., Zatorre, R.J., 2003. Adaptation to speaker's voice in right anterior temporal lobe. NeuroReport 14, 2105–2109.
Belin, P., Zatorre, R.J., Lafaille, P., Ahad, P., Pike, B., 2000. Voice-selective areas in human auditory cortex. Nature 403, 309–312.
Belin, P., Fecteau, S., Bédard, C., 2004. Thinking the voice: neural correlates of voice perception. Trends Cogn. Sci. 8, 129–135.
Breiter, H.C., Etcoff, N.L., Whalen, P.J., Kennedy, W.A., Rauch, S.L., Buckner, R.L., Strauss, M.M., Hyman, S.E., Rosen, B.R., 1996. Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17, 875–887.
Broks, P., Young, A.W., Maratos, E.J., Coffey, P.J., Calder, A.J., Isaac, C.L., Mayes, A.R., Hodges, J.R., Montaldi, D., Cezayirli, E., Roberts, N., Hadley, D., 1998. Face processing impairments after encephalitis: amygdala damage and recognition of fear. Neuropsychologia 36, 59–70.
Brothers, L., Ring, B., Kling, A., 1990. Response of neurons in the macaque amygdala to complex social stimuli. Behav. Brain Res. 41 (3), 199–213.
Buchanan, T.W., Lutz, K., Mirzazade, S., Specht, K., Shah, N.J., Zilles, K., Jancke, L., 2000. Recognition of emotional prosody and verbal components of spoken language: an fMRI study. Brain Res. Cogn. Brain Res. 9, 227–238.
Büchel, C., Morris, J., Dolan, R.J., Friston, K.J., 1998. Brain systems mediating aversive conditioning: an event-related fMRI study. Neuron 20, 947–957.
Büchel, C., Dolan, R.J., Armony, J.L., Friston, K.J., 1999. Amygdala-hippocampal involvement in human aversive trace conditioning revealed through event-related functional magnetic resonance imaging. J. Neurosci. 19, 10869–10876.
Canli, T., Desmond, J.E., Zhao, Z., Glover, G., Gabrieli, J.D., 1998. Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport 9, 3233–3239.
Canli, T., Sivers, H., Whitfield, S.L., Gotlib, I.H., Gabrieli, J.D., 2002. Amygdala response to happy faces as a function of extraversion. Science 296, 2191.
Cavada, C., Company, T., Tejedor, J., Cruz-Rizzolo, R.J., Reinoso-Suarez, F., 2000. The anatomical connections of the macaque monkey orbitofrontal cortex. A review. Cereb. Cortex 10, 220–242.
Cunningham, W.A., Raye, C.L., Johnson, M.K., 2004. Implicit and explicit evaluation: fMRI correlates of valence, emotional intensity, and control in the processing of attitudes. J. Cogn. Neurosci. 16 (10), 1717–1729.
Davidson, R.J., Irwin, W., 1999. The functional neuroanatomy of emotion and affective style. Trends Cogn. Sci. 3, 11–21.
Dolan, R.J., Vuilleumier, P., 2003. Amygdala automaticity in emotional processing. Ann. N. Y. Acad. Sci. 985, 348–355.
Fecteau, S., Armony, J.L., Joanette, Y., Belin, P., 2004. Is voice processing species-specific in the human brain? An fMRI study. NeuroImage 23, 840–848.
Fecteau, S., Armony, J.L., Joanette, Y., Belin, P., 2005. Judgment of emotional nonlinguistic vocalizations: age-related differences. Appl. Neuropsychol. 12, 40–48.
Fine, C., Blair, R.J.R., 2000. Mini review: the cognitive and emotional effects of amygdala damage. Neurocase 435–450.
Fischer, H., Furmark, T., Wik, G., Fredrikson, M., 2000. Brain representation of habituation to repeated complex visual stimulation studied with PET. NeuroReport 11, 123–126.
Fitzgerald, D.A., Angstadt, M., Jelsome, L.M., Nathan, P.J., Phan, K.L., 2005. Beyond threat: amygdala reactivity across multiple expressions of facial affect. NeuroImage 30, 1441–1448.
Garavan, H., Pankiewicz, J., Bloom, A., Cho, J.K., Sperry, L., Ross, T.J., Salmeron, B.J., Risinger, R., Kelley, D., Stein, E.A., 2000. Cue-induced cocaine craving: neuroanatomical specificity for drug users and drug stimuli. Am. J. Psychiatry 157, 1789–1798.
Garavan, H., Pendergrass, J.C., Ross, T.J., Stein, E.A., Risinger, R.C., 2001. Amygdala response to both positively and negatively valenced stimuli. NeuroReport 12, 2779–2783.
Glascher, J., Adolphs, R., 2003. Processing of the arousal of subliminal and supraliminal emotional stimuli by the human amygdala. J. Neurosci. 23 (32), 10274–10282.
Gorno-Tempini, M.L., Pradelli, S., Serafini, M., Pagnoni, G., Baraldi, P., Porro, C., Nicoletti, R., Umita, C., Nichelli, P., 2001. Explicit and incidental facial expression processing: an fMRI study. NeuroImage 14, 465–473.
Gottfried, J.A., Deichmann, R., Winston, J.S., Dolan, R.J., 2002a. Functional heterogeneity in human olfactory cortex: an event-related functional magnetic resonance imaging study. J. Neurosci. 22, 10819–10828.
Gottfried, J.A., O'Doherty, J., Dolan, R.J., 2002b. Appetitive and aversive olfactory learning in humans studied using event-related functional magnetic resonance imaging. J. Neurosci. 22, 10829–10837.
Grandjean, D., Sander, D., Pourtois, G., Schwartz, S., Seghier, M.L., Scherer, K.R., Vuilleumier, P., 2005. The voices of wrath: brain responses to angry prosody in meaningless speech. Nat. Neurosci. 8, 145–146.
Gur, R.C., Schroeder, L., Turner, T., McGrath, C., Chan, R.M., Turetsky, B.I., Alsop, D., Maldjian, J., Gur, R.E., 2002. Brain activation during facial emotion processing. NeuroImage 16, 651–662.
Gusnard, D.A., Raichle, M.E., 2001. Searching for a baseline: functional imaging and the resting human brain. Nat. Rev. Neurosci. 2 (10), 685–694.
Hamann, S., Mao, H., 2002. Positive and negative emotional verbal stimuli elicit activity in the left amygdala. NeuroReport 13, 15–19.
Hamann, S.B., Ely, T.D., Hoffman, J.M., Kilts, C.D., 2002. Ecstasy and agony: activation of the human amygdala in positive and negative emotion. Psychol. Sci. 13 (2), 135–141.
Hamann, S., Herman, R.A., Nolan, C.L., Wallen, K., 2004. Men and women differ in amygdala response to visual sexual stimuli. Nat. Neurosci. 7 (4), 411–416.
Imaizumi, S., Mori, K., Kiritani, S., Yumoto, M., 1997. Observation of neural processes of auditory scene analysis by magnetoencephalography. Acta Oto-Laryngol., Suppl. 532, 106–108.
Karama, S., Lecours, A.R., Leroux, J.M., Bourgouin, P., Beaudoin, G., Joubert, S., Beauregard, M., 2002. Areas of brain activation in males and females during viewing of erotic film excerpts. Hum. Brain Mapp. 16, 1–13.
Kensinger, E.A., Schacter, D.L., 2006. Processing emotional pictures and words: effects of valence and arousal. Cogn. Affect. Behav. Neurosci. 6 (2), 110–126.
Killgore, W.D., Oki, M., Yurgelun-Todd, D.A., 2001. Sex-specific developmental changes in amygdala responses to affective faces. NeuroReport 12, 427–433.
Knutson, B., Adams, C.M., Fong, G.W., Hommer, D., 2001a. Anticipation of increasing monetary reward selectively recruits nucleus accumbens. J. Neurosci. 21, RC159.
Knutson, B., Fong, G.W., Adams, C.M., Varner, J.L., Hommer, D., 2001b. Dissociation of reward anticipation and outcome with event-related fMRI. NeuroReport 12, 3683–3687.
Kreiman, J., 1997. Listening to voices: theory and practice in voice perception research. In: Johnson, K., Mullenix, J. (Eds.), Talker Variability in Speech Research. Academic Press, New York, pp. 85–108.
LaBar, K.S., Gitelman, D.R., Parrish, T.B., Kim, Y.H., Nobre, A.C., Mesulam, M.M., 2001. Hunger selectively modulates corticolimbic activation to food stimuli in humans. Behav. Neurosci. 115, 493–500.
Lange, K., Williams, L.M., Young, A.W., Bullmore, E.T., Brammer, M.J., Williams, S.C., Gray, J.A., Phillips, M.L., 2003. Task instructions modulate neural responses to fearful facial expressions. Biol. Psychiatry 53, 226–232.
Lanteaume, L., Khalfa, S., Regis, J., Marquis, P., Chauvel, P., Bartolomei, F., 2006. Emotion induction after direct intracerebral stimulations of human amygdala. Cereb. Cortex (Jul 31; Epub ahead of print).
LeDoux, J.E., 2000. Emotion circuits in the brain. Annu. Rev. Neurosci. 23, 155–184.
LeDoux, J.E., Cicchetti, P., Xagoraris, A., Romanski, L.M., 1990. The lateral amygdaloid nucleus: sensory interface of the amygdala in fear conditioning. J. Neurosci. 10 (4), 1062–1069.
Lewis, P.A., Critchley, H.D., Rotshtein, P., Dolan, R.J., 2006. Neural correlates of processing valence and arousal in affective words. Cereb. Cortex (May 22; Epub ahead of print).
Meyer, M., Zysset, S., von Cramon, D.Y., Alter, K., 2005. Distinct fMRI responses to laughter, speech, and sounds along the human peri-sylvian cortex. Brain Res. Cogn. Brain Res. 24, 291–306.
Mitchell, R.L.C., Elliott, R., Barry, M., Cruttenden, A., Woodruff, P.W., 2003. The neural response to emotional prosody, as revealed by functional magnetic resonance imaging. Neuropsychologia 41, 1410–1421.
Morris, J.S., Frith, C.D., Perrett, D.I., Rowland, D., Young, A.W., Calder, A.J., Dolan, R.J., 1996. A differential neural response in the human amygdala to fearful and happy facial expressions. Nature 383, 812–815.
Morris, J.S., Scott, S.K., Dolan, R.J., 1999. Saying it with feeling: neural responses to emotional vocalizations. Neuropsychologia 37, 1155–1163.
O'Doherty, J., Rolls, E.T., Francis, S., Bowtell, R., McGlone, F., 2001. Representation of pleasant and aversive taste in the human brain. J. Neurophysiol. 85, 1315–1321.
O'Doherty, J.P., Deichmann, R., Critchley, H.D., Dolan, R.J., 2002. Neural responses during anticipation of a primary taste reward. Neuron 33, 815–826.
Pallesen, K.J., Brattico, E., Bailey, C., Korvenoja, A., Koivisto, J., Gjedde, A., Carlson, S., 2005. Emotion processing of major, minor, and dissonant chords: a functional magnetic resonance imaging study. Ann. N. Y. Acad. Sci. 1060, 450–453.
Pessoa, L., Kastner, S., Ungerleider, L.G., 2002a. Attentional control of the processing of neutral and emotional stimuli. Brain Res. Cogn. Brain Res. 15 (1), 31–45.
Pessoa, L., McKenna, M., Gutierrez, E., Ungerleider, L.G., 2002b. Neural processing of emotional faces requires attention. Proc. Natl. Acad. Sci. U. S. A. 99, 11458–11463.
Phan, K.L., Wager, T., Taylor, S.F., Liberzon, I., 2002. Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. NeuroImage 16, 331–348.
Phan, K.L., Wager, T.D., Taylor, S.F., Liberzon, I., 2004. Functional neuroimaging studies of human emotions. CNS Spectr. 9, 258–266.
Phelps, E.A., LeDoux, J.E., 2005. Contributions of the amygdala to emotion processing: from animal models to human behavior. Neuron 48, 175–187.
Phillips, M.L., Young, A.W., Scott, S.K., Calder, A.J., Andrew, C., Giampietro, V., Williams, S.C., Bullmore, E.T., Brammer, M., Gray, J.A., 1998. Neural responses to facial and vocal expressions of fear and disgust. Proc. Biol. Sci. 265, 1809–1817.
Phillips, M.L., Drevets, W.C., Rauch, S.L., Lane, R., 2003. Neurobiology of emotion perception: I. The neural basis of normal emotion perception. Biol. Psychiatry 54, 504–514.
Pourtois, G., de Gelder, B., Bol, A., Crommelinck, M., 2005. Perception of facial expressions and voices and of their combination in the human brain. Cortex 41, 49–59.
Ross, E.D., Edmondson, J.A., Seibert, G.B., 1986. The effect of affect on various acoustic measures of prosody in tone and non-tone languages: a comparison based on computer analysis. J. Phon. 14, 283–302.
Reinders, A.A., den Boer, J.A., Buchel, C., 2005. The robustness of perception. Eur. J. Neurosci. 22, 524–530.
Sander, K., Scheich, H., 2001. Auditory perception of laughing and crying activates human amygdala regardless of attentional state. Brain Res. Cogn. Brain Res. 12, 181–198.
Sander, D., Grafman, J., Zalla, T., 2003. The human amygdala: an evolved system for relevance detection. Rev. Neurosci. 14, 303–316.
Sander, D., Grandjean, D., Pourtois, G., Schwartz, S., Seghier, M.L., Scherer, K.R., Vuilleumier, P., 2005. Emotion and attention interactions in social cognition: brain regions involved in processing anger prosody. NeuroImage 28, 848–858.
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., Matsumura, M., 2004.
Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study. Brain Res. Cogn. Brain Res. 20, 81–91.
Scherer, K.R., 1981. Speech and emotional states. In: Darby, J. (Ed.), Speech Evaluation in Psychiatry. Grune and Stratton, New York, pp. 189–220.
Schirmer, A., Kotz, S.A., 2006. Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing. Trends Cogn. Sci. 10 (1), 24–30.
Scott, S.K., Young, A.W., Calder, A.J., Hellawell, D.J., Aggleton, J.P., Johnson, M., 1997. Impaired auditory recognition of fear and anger following bilateral amygdala lesions. Nature 385, 254–257.
Sergerie, K., Lepage, M., Armony, J.L., 2006. A process-specific functional dissociation of the amygdala in emotional memory. J. Cogn. Neurosci. 18 (8), 1359–1367.
Small, D.M., Gregory, M.D., Mak, Y.E., Gitelman, D., Mesulam, M.M., Parrish, T., 2003. Dissociation of neural representation of intensity and affective valuation in human gustation. Neuron 39 (4), 701–711.
Somerville, L.H., Kim, H., Johnstone, T., Alexander, A.L., Whalen, P.J., 2004. Human amygdala responses during presentation of happy and neutral faces: correlations with state anxiety. Biol. Psychiatry 55, 897–903.
Turner, B.H., Mishkin, M., Knapp, M., 1980. Organization of the amygdalopetal projections from modality-specific cortical association areas in the monkey. J. Comp. Neurol. 191 (4), 515–543.
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., Mazoyer, B., Joliot, M., 2002. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 15 (1), 273–289.
von Kriegstein, K., Eger, E., Kleinschmidt, A., Giraud, A.L., 2003. Modulation of neural responses to speech by directing attention to voices or verbal content. Brain Res. Cogn. Brain Res. 17, 48–55.
Wang, L., McCarthy, G., Song, A.W., LaBar, K.S., 2005. Amygdala activation to sad pictures during high-field (4 Tesla) functional magnetic resonance imaging. Emotion 5, 12–22.
Whalen, P.J., Rauch, S.L., Etcoff, N.L., McInerney, S.C., Lee, M.B., Jenike, M.A., 1998. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. J. Neurosci. 18, 411–418.
Wildgruber, D., Pihan, H., Ackermann, H., Erb, M., Grodd, W., 2002. Dynamic brain activation during processing of emotional intonation: influence of acoustic parameters, emotional valence, and sex. NeuroImage 15, 856–869.
Winston, J.S., O'Doherty, J., Dolan, R.J., 2003. Common and distinct neural responses during direct and incidental processing of multiple facial emotions. NeuroImage 20, 84–97.
Winston, J.S., Gottfried, J.A., Kilner, J.M., Dolan, R.J., 2005. Integrated neural representations of odor intensity and affective valence in human amygdala. J. Neurosci. 25 (39), 8903–8907.
Wright, C.I., Martis, B., Shin, L.M., Fischer, H., Rauch, S.L., 2002. Enhanced amygdala responses to emotional versus neutral schematic facial expressions. NeuroReport 13, 785–790.
Yang, T.T., Menon, V., Eliez, S., Blasey, C., White, C.D., Reid, A.J., Gotlib, I.H., Reiss, A.L., 2002. Amygdalar activation associated with positive and negative facial expressions. NeuroReport 13, 1737–1741.
Yukie, M., 2002. Connections between the amygdala and auditory cortical areas in the macaque monkey. Neurosci. Res. 42 (3), 219–229.
Zald, D.H., 2003. The human amygdala and the emotional evaluation of sensory stimuli. Brain Res. Brain Res. Rev. 41, 88–123.
Zalla, T., Koechlin, E., Pietrini, P., Basso, G., Aquino, P., Sirigu, A., Grafman, J., 2000. Differential amygdala responses to winning and losing: a functional magnetic resonance imaging study in humans. Eur. J. Neurosci. 12, 1764–1770.