Neuropsychologia
journal homepage: www.elsevier.com/locate/neuropsychologia
doi:10.1016/j.neuropsychologia.2012.04.006
Brain correlates of musical and facial emotion recognition: Evidence from the dementias
S. Hsieh a,b, M. Hornberger a,b, O. Piguet a,b, J.R. Hodges a,b,*

a Neuroscience Research Australia, Sydney, NSW 2031, Australia
b School of Medical Sciences, University of New South Wales, Sydney, NSW 2052, Australia
Article history: Received 7 November 2011; received in revised form 23 February 2012; accepted 8 April 2012.

Keywords: Music; Emotions; Language; Semantic dementia; Alzheimer's disease; Voxel-based morphometry

Abstract

The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from the classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), patients with Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities, unfamiliar musical tunes and unknown faces, as well as volumetric MRI. Patients with SD were the most impaired at recognizing facial and musical emotions, particularly negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions in facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language.

© 2012 Published by Elsevier Ltd.
1. Introduction

The degradation of knowledge in semantic dementia (SD), a subtype of frontotemporal dementia (FTD), affects a broad range of domains, including words, animals, objects and people, and is evident across modalities (Bozeat, Lambon Ralph, Patterson, Garrard, & Hodges, 2000; Hodges & Patterson, 2007). Atrophy of the anterior and inferior regions of the temporal lobe appears critical to the genesis of the syndrome (Levy, Bayley, & Squire, 2004; Mion et al., 2010). The understanding of emotions, particularly those that are negative in valence (e.g., anger, fear, disgust and sadness), is also affected, with recognition deficits observed for photographs of facial expressions, human vocalisations and film excerpts (Kumfor et al., 2011; Omar, Rohrer, Hailstone, & Warren, 2010b; Perry et al., 2001; Werner et al., 2007). Emotion comprehension impairments in SD have been shown to correlate with right-sided atrophy of the amygdala and orbitofrontal
* Corresponding author at: Neuroscience Research Australia, PO Box 1165, Randwick, NSW 2031, Australia. Tel.: +61 2 9399 1134; fax: +61 2 9399 1047.
E-mail addresses: [email protected] (S. Hsieh), [email protected] (M. Hornberger), [email protected] (O. Piguet), [email protected] (J.R. Hodges).
regions (e.g., Rosen et al., 2002). Carers of patients with SD also report consistent blunting of emotions and loss of empathy (Calabria, Cotelli, Adenzato, Zanetti, & Miniussi, 2009; Rankin, Kramer, & Miller, 2005; Seeley et al., 2005).

Music is capable of communicating basic emotions (for a review, see Peretz, 2010), which are recognized effortlessly by adults (Bigand, Vieillard, Madurell, Marozeau, & Dacquet, 2005; Peretz, Gagnon, & Bouchard, 1998), regardless of musical training (Vieillard et al., 2008) and universally across different cultures (Balkwill, Thompson, & Matsunaga, 2004; Balkwill & Thompson, 1999; Fritz et al., 2009). The emotional experience that music offers is one of the primary reasons why people listen to music (Juslin & Laukka, 2004). Studies using music as an alternative emotional medium have shown that the labelling of emotions conveyed by melodies, particularly negative emotions, is impaired in SD (Omar, Hailstone, Warren, Crutch, & Warren, 2010a; Omar et al., 2011) and appears to be associated with disturbance of a distributed network involving temporal and frontal regions bilaterally (Omar et al., 2011). Whether these findings are specific to the processing of emotions in music is, however, unclear, as existing studies have only used pieces of classical (e.g., Pachelbel's Canon) or film (e.g., Jaws) music. The recognition of known melodies is generally
impaired in SD, and the degree of this deficit correlates with the degree of right anterior temporal polar atrophy (Hsieh, Hornberger, Piguet, & Hodges, 2011).

The neural pathways for the processing of musical emotions seem to rely on structures separable from those required for the perception of musical sounds and for memory for known tunes. For example, the well-known patient IR (Peretz & Gagnon, 1999), who suffered bilateral lesions to the auditory cortex, had longstanding difficulties discriminating the non-emotional content of music (e.g., variations in pitch and rhythm) and recognizing famous melodies (e.g., Happy Birthday), but was able to classify a repertoire of music as 'happy' or 'sad'. In contrast, Griffiths, Warren, Dean, and Howard (2004) described a radio announcer who, following a left-sided stroke affecting the insula, amygdala and frontal areas, became unable to obtain the same enjoyment from music despite intact peripheral auditory processing and music perception abilities. Functional neuroimaging studies of healthy individuals show that listening to emotionally expressive music recruits limbic structures, notably the amygdala, as well as surrounding structures such as the hippocampus, parahippocampal gyrus, orbitofrontal and temporal cortices and anterior cingulate (for a review, see Koelsch, 2010). In lesion studies, post-surgical patients who have had anteromedial temporal resections for medically intractable epilepsy are impaired in the recognition of musical emotions (Gosselin et al., 2005; Khalfa et al., 2008).

The extent to which impairment in the recognition of emotions is seen in other dementia syndromes, such as Alzheimer's disease (AD), is an additional area of interest. Compared to SD, AD is much more common and is frequently encountered in memory clinics and dementia services, yet little is known about emotion processing in AD relative to other aspects of cognition. Recognition impairments for facial expressions of emotion are seen in AD, particularly with disease progression (Bediou et al., 2009; Hargrave, Maddock, & Stone, 2002; Weiss et al., 2008). The ability of AD patients to identify musical emotions, in contrast, has been investigated rather little but appears to be preserved in the mild stages of the disease (Drapeau, Gosselin, Gagnon, Peretz, & Lorrain, 2009; Gagnon, Peretz, & Fülöp, 2009). In addition, evidence on how performance in AD differs from that of SD is limited to one study of two musicians (Omar et al., 2010a): while the recognition of musical emotions was spared in the patient with AD, the individual with SD was markedly impaired. The generalisability of this dissociation is currently unknown.

The aims of the study were: (1) to investigate the recognition of emotions conveyed by unfamiliar pieces of music in a group of patients with SD in comparison to patients with AD and healthy control participants; and (2) to investigate the neural correlates of the recognition of musical emotions using voxel-based morphometry in SD, AD and healthy controls combined. We predicted that the recognition of musical emotions would be impaired in SD, particularly for emotions negative in valence, relative to AD patients and age- and education-matched controls. In contrast, we predicted that patients with AD would not have difficulty on this task. With regard to the neuroimaging, it was anticipated that the recognition of emotions in music would be associated with atrophy of regions known to be involved in the processing of emotions, particularly the amygdala, in the participant groups combined.
2. Method

2.1. Participants

A total of 43 participants were included: SD (n = 11) and AD (n = 12) patients were recruited from the Frontier Dementia Clinic, Sydney, Australia, where they had been diagnosed by a senior neurologist (JRH). Control participants (n = 20) were selected from a healthy volunteer panel or were spouses/caregivers of patients. Patients with SD met current consensus clinical diagnostic criteria (Gorno-Tempini et al., 2011) and were characterized by a progressive language disorder with impaired single-word comprehension in the context of relative preservation of other language skills (phonology, syntax, word repetition, speech fluency). AD patients met NINCDS–ADRDA diagnostic criteria for probable AD (McKhann et al., 1984). Patient groups did not differ on measures of mood [t(20) = 1.53, p > .10], as indicated by ratings provided by a caregiver on the Cambridge Behavioural Inventory (Bozeat, Gregory, Ralph, & Hodges, 2000). All participants provided informed consent for the study in accordance with the Declaration of Helsinki; dual consent was obtained from the caregiver for some patients. All participants volunteered their time but were reimbursed for travel costs. This study was approved by the Southern Sydney and Illawarra Area Health Service and the University of New South Wales ethics committees.

2.2. General cognitive screening and neuropsychological tests

Participants were given the Addenbrooke's Cognitive Examination—Revised (ACE-R; Mioshi, Dawson, Mitchell, Arnold, & Hodges, 2006) as a general measure of cognitive impairment. The ACE-R is a screening test that incorporates the Mini Mental State Examination (MMSE) and assesses the domains of orientation, attention, memory, verbal fluency, language and visuospatial abilities. To exclude gross deficits in music perception, the Scale subtest from the Montreal Battery of Evaluation of Amusia (MBEA; Peretz, Champod, & Hyde, 2003) was given. On this task, participants discriminated between pairs of novel short melodic phrases that may differ by a non-diatonic manipulation of one note within the melody; that is, the pitch of this note is selected to be out of scale with the original melody, although the same melodic contour is retained. Verbal fluency for the category 'animals' and the 15-item Boston Naming Test (BNT-15; Goodglass & Kaplan, 2000) were used as standardized measures of verbal semantic impairment. The copying component of the Rey–Osterrieth Complex Figure Test (ROCF; Meyers & Meyers, 1995) was administered to obtain a measure of visuospatial drawing ability.
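To make the Scale subtest's non-diatonic manipulation concrete, the sketch below shifts one note of a melody off-key while preserving the up/down (contour) pattern. This is our illustration only, not the MBEA's actual stimulus-generation code; pitches are MIDI note numbers and the key is assumed to be C major.

```python
# Illustrative sketch of an MBEA-style scale violation: alter one note of a
# melody so it falls outside the key while the melodic contour is retained.
# Hypothetical example, not the battery's actual stimulus-generation code.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # diatonic pitch classes of C major

def contour(melody):
    """Sign of each successive interval: +1 up, -1 down, 0 repeat."""
    return [(b > a) - (b < a) for a, b in zip(melody, melody[1:])]

def scale_violation(melody, index):
    """Return a copy of `melody` whose note at `index` is non-diatonic
    but whose contour matches the original."""
    original = contour(melody)
    for shift in (1, -1, 2, -2):  # try small displacements first
        candidate = melody[:]
        candidate[index] += shift
        out_of_scale = candidate[index] % 12 not in C_MAJOR
        if out_of_scale and contour(candidate) == original:
            return candidate
    raise ValueError("no contour-preserving violation found at this index")

melody = [60, 62, 64, 67, 65, 64, 62, 60]  # C D E G F E D C
print(scale_violation(melody, 3))          # G (67) raised to G# (68)
```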
2.3. Tests of emotion recognition
Musical emotion stimuli consisted of 40 musical excerpts that have been experimentally validated (copyright Bernard Bouchard, 1998; Vieillard et al., 2008) and that each represent one of four emotional categories (happy, peaceful, sad or scary). All excerpts were composed according to the Western tonal system and contain a melody with an accompaniment. Stimuli were computer-generated and presented in piano timbre. In a forced-choice paradigm, participants were required to select one of four emotion labels (happy, peaceful, sad or scary) for each musical excerpt.

Participants were also given the Ekman 60 Faces Test from the Facial Expressions of Emotion: Stimuli and Tests (FEEST; Young, Perrett, Calder, Sprengelmeyer, & Ekman, 2002). This is a computerized task consisting of 60 black-and-white photographs of six basic facial expressions (happiness, surprise, fear, disgust, sadness and anger) from the Ekman and Friesen (1976) series. Each picture is presented for 5 s, and participants were required to select one of six emotion labels (happiness, surprise, fear, disgust, sadness and anger) for each facial expression.
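As an illustration of how responses on these forced-choice tasks reduce to the accuracy measures analysed below, the hypothetical sketch scores (target, response) pairs into an overall per cent correct and per-emotion accuracies; the trial data shown are invented.

```python
# Illustrative scoring of a forced-choice emotion recognition task:
# each trial is a (target emotion, chosen label) pair. Invented data.
from collections import defaultdict

def score(trials):
    """Return overall per cent correct and per-emotion accuracy."""
    hits, counts = defaultdict(int), defaultdict(int)
    for target, response in trials:
        counts[target] += 1
        hits[target] += (response == target)
    overall = 100.0 * sum(hits.values()) / len(trials)
    per_emotion = {e: 100.0 * hits[e] / counts[e] for e in counts}
    return overall, per_emotion

trials = [("happy", "happy"), ("sad", "peaceful"),
          ("scary", "scary"), ("peaceful", "peaceful")]
print(score(trials))  # (75.0, {'happy': 100.0, 'sad': 0.0, ...})
```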
2.4. Statistical analyses of behavioural data
Data were analyzed using PASW Statistics Version 18 (IBM, Chicago, IL). Variables were checked for normality of distribution using the Kolmogorov–Smirnov test. Normally distributed data were compared across the three groups (SD, AD and controls) using one-way analysis of variance (ANOVA), with post-hoc comparisons using the Tukey HSD test. The Kruskal–Wallis test was used to compare groups where data were not normally distributed; post-hoc group comparisons were then made using the Mann–Whitney U test. For the emotion recognition tests, group differences on the total percentage correct scores were analysed first. Then, scores on the individual emotions were averaged to form two new variables: positive and negative. For musical emotions, happy and peaceful tunes formed the positive composite score, and scary and sad tunes formed the negative composite score. The division of musical emotions into positive and negative stimuli was based on pre-existing ratings of valence (Vieillard et al., 2008). For the Ekman 60 Faces Test, the composites consisted of happiness and surprise for positive emotions, and anger, disgust, fear and sadness for negative emotions.
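A minimal sketch of this decision logic with SciPy, assuming scores are held in one array per group (variable names and data here are ours, not the study's): a Kolmogorov–Smirnov check selects between the one-way ANOVA and the Kruskal–Wallis test, with Mann–Whitney U post-hocs in the non-parametric branch.

```python
# Sketch of the behavioural analysis pipeline described above, assuming
# one score array per group (names and values are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sd, ad, ctl = rng.normal(60, 12, 11), rng.normal(75, 14, 12), rng.normal(95, 4, 20)

def looks_normal(x, alpha=0.05):
    """Kolmogorov-Smirnov test against a standard normal after z-scoring."""
    z = (x - x.mean()) / x.std(ddof=1)
    return stats.kstest(z, "norm").pvalue > alpha

if all(looks_normal(g) for g in (sd, ad, ctl)):
    f, p = stats.f_oneway(sd, ad, ctl)            # parametric branch
    print(f"ANOVA: F = {f:.1f}, p = {p:.3g}")     # Tukey HSD would follow
else:
    h, p = stats.kruskal(sd, ad, ctl)             # non-parametric branch
    print(f"Kruskal-Wallis: H = {h:.1f}, p = {p:.3g}")
    u, p_sd_ad = stats.mannwhitneyu(sd, ad)       # post-hoc pairwise test
    print(f"SD vs AD: U = {u:.1f}, p = {p_sd_ad:.3g}")

# Composite valence scores would be simple means of per-emotion accuracies,
# e.g. positive_music = (acc["happy"] + acc["peaceful"]) / 2.
```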
2.5. Voxel-based morphometry
Voxel-based morphometry (VBM) is a technique which identifies changes in grey matter density on a voxel-by-voxel basis from structural MRI data. It was used to investigate the regions of grey matter atrophy in the patient groups and the neuroanatomical correlates of performance on behavioural measures of interest.
Structural MRIs were available for 9 SD patients (7 males; mean age = 62.6 ± 5.4 years; mean years of education = 13.8 ± 3.3) and 12 AD patients (11 males; mean age = 62.9 ± 8.2; mean years of education = 13.6 ± 3.8) within six months of experimental testing. Scans were available for 15 control participants (8 males; mean age = 64.2 ± 6.4; years of education = 13.9 ± 2.3). Groups were matched for age [H(2) = 1.37] and years of education [H(2) = 0.18]. Patient groups were matched for disease severity on the Clinical Dementia Rating (CDR; Morris, 1993) (U = 34.5, z = 1.62, p > .10), the Functional Rating Scale (FRS; Mioshi, Hsieh, Savage, Hornberger, & Hodges, 2010) [t(17) = 0.43, p > .10] and the Mini Mental State Examination [t(19) = 0.18, p > .10]. Two patients with SD had cardiac pacemakers and could not be scanned. The SD group in the VBM analysis did not differ from the entire SD cohort on the measures listed in Table 1 (p > .10 for all comparisons).

MRI images were obtained using a 3-Tesla Philips scanner with a standard quadrature head coil. Whole-brain T1-weighted images were obtained using the following sequence: coronal orientation, matrix 256 × 256, 200 slices, 1 × 1 mm² in-plane resolution, slice thickness 1 mm, TE/TR = 2.6/5.8 ms, flip angle α = 19°. MRI data were analysed with FSL-VBM, a voxel-based morphometry style analysis (Ashburner & Friston, 2000; Mechelli, Price, Friston, & Ashburner, 2005), carried out with the FSL-VBM toolbox (http://www.fmrib.ox.ac.uk/fsl/; Smith et al., 2004). Structural brain images were first extracted using BET (Smith, 2002). Tissue-type segmentation was carried out using FAST4 (Zhang, Brady, & Smith, 2001). The resulting grey matter partial volume images were then aligned to MNI152 standard space using non-linear registration with FNIRT (Andersson, Jenkinson, & Smith, 2007a, 2007b), which uses a B-spline representation of the registration warp field (Rueckert et al., 1999). The resulting images were averaged to create a study-specific template, to which the native grey matter images were then non-linearly re-registered. The registered partial volume images were then modulated by dividing by the Jacobian of the warp field. The modulated segmented images were then smoothed with an isotropic Gaussian kernel with a sigma of 3 mm.

Grey matter density differences were investigated via permutation-based non-parametric statistics (Nichols & Holmes, 2002) with 5000 permutations per contrast. First, differences in grey matter density between patients and control participants were assessed using an unpaired t-test to establish the overall atrophy pattern in the SD and AD groups. Next, a covariate-only statistical model with a [1] t-contrast was used to investigate the correlation between behavioural variables and regions of grey matter atrophy, where lower scores on a test were associated with decreasing grey matter volumes. Separate correlation analyses were conducted for each variable. In the first set of analyses, the neural basis of musical and facial emotion recognition was investigated. Next, analyses of regions of atrophy that correlated with a test of semantic memory (i.e., BNT-15) and with visuospatial skills (i.e., ROCF-copy) were conducted as control measures. The SD, AD and healthy control participants were entered into the correlation analyses as a single group to increase variance in the test scores, for statistical power (Chow, Brambati, Gorno-Tempini, Miller, & Johnson, 2010; Sollberger et al., 2009).
Measures of disease severity, as indexed by the Functional Rating Scale (FRS) and the Mini Mental State Examination (MMSE), which differed between control and patient groups, were included as nuisance variables. Thresholding of the resulting statistical maps was carried out using threshold-free cluster enhancement (TFCE), a method for finding significant clusters in MRI data without setting arbitrary cluster-forming thresholds (Smith & Nichols, 2009). A significance threshold of p < .001, uncorrected for multiple comparisons, was adopted for the correlation analyses, except for the analysis with copy scores of the ROCF, where p < .005 was used. An additional cluster-extent threshold was applied whereby only contiguous clusters greater than 140 voxels are reported. Maximum coordinates are provided in MNI stereotaxic space. Anatomical labels were determined with reference to the Harvard–Oxford probabilistic cortical atlas.
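The FSL-VBM pipeline described above is driven by command-line tools; as a rough orientation, the sketch below strings the standard FSL commands together from Python. File names and directory layout are assumptions (the fslvbm_* scripts bundle the BET/FAST4/FNIRT/modulation steps, and randomise performs the permutation-based TFCE inference); design.mat/design.con are assumed to encode the behavioural covariate and nuisance terms.

```python
# Sketch of the FSL-VBM / randomise steps described above, driven from
# Python. Paths and design files are hypothetical; the FSL commands are
# the standard ones (http://www.fmrib.ox.ac.uk/fsl/).
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Brain extraction of all structural images in the working directory.
run(["fslvbm_1_bet", "-b"])
# 2) Build the study-specific grey matter template (FAST4 segmentation,
#    non-linear FNIRT registration to MNI152, averaging).
run(["fslvbm_2_template", "-n"])
# 3) Re-register native grey matter images to the template, modulate by
#    the Jacobian of the warp field, and smooth (sigma = 3 mm yields the
#    GM_mod_merg_s3 4D file used below).
run(["fslvbm_3_proc"])

# Permutation-based inference: 5000 permutations, TFCE thresholding,
# covariate-only design with a [1] t-contrast on the behavioural score.
run(["randomise", "-i", "stats/GM_mod_merg_s3", "-m", "stats/GM_mask",
     "-d", "design.mat", "-t", "design.con", "-n", "5000", "-T",
     "-o", "stats/musical_emotions"])
```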
Table 1
Demographic characteristics and scores on cognitive screening and semantic tests (mean ± standard deviation).

                   SD (n = 11)    AD (n = 12)    Controls (n = 20)
Sex (M/F)          9/2            9/3            13/7
Age (yrs)          63.3 ± 5.2     62.9 ± 8.2     66.5 ± 7.2
Education (yrs)    13.1 ± 3.4     13.6 ± 3.7     14.4 ± 2.8
MMSE (/30)         24.4 ± 3.47    24.8 ± 3.41    29.2 ± 0.88
ACE-R (/100)       57.8 ± 12.1    75.1 ± 14.5    94.6 ± 3.59
MBEA-Scale         82.4 ± 11.3    86.9 ± 9.7     82.7 ± 14.2
Animal fluency     5.7 ± 3.9      11.8 ± 5.5     21.0 ± 4.9
BNT-15 (/15)       2.4 ± 1.7      11.4 ± 3.8     14.5 ± 0.9
ROCF-copy (/36)    31.6 ± 4.6     27.0 ± 7.0     33.1 ± 2.7

Note: MMSE = Mini Mental State Examination; ACE-R = Addenbrooke's Cognitive Examination—Revised; MBEA-Scale = Montreal Battery of Evaluation of Amusia, Scale subtest; BNT-15 = Boston Naming Test, 15-item version; ROCF-copy = Rey–Osterrieth Complex Figure, copy component.
3. Results
3.1. Demographic information and music screening measure
Groups were matched for age [H(2) = 3.67, p > .10] and years of education [H(2) = 1.39, p > .10]. Patient groups were also matched for disease severity on the Clinical Dementia Rating scale (CDR; Morris, 1993) (U = 39.5, z = 1.93, p > .05) and the Functional Rating Scale (FRS; Mioshi et al., 2010) (U = 50.0, z = 0.35, p > .10). No group differences were observed on the melodic discrimination task [H(2) = 0.99, n.s.] (Table 1).
3.2. General cognitive screening and neuropsychological tests
On general screening and formal cognitive measures, significant group differences were found on the MMSE [H(2) = 24.1, p < .001] and the ACE-R [H(2) = 32.1, p < .001] (Table 1). As expected, post-hoc comparisons indicated that each patient group scored below the control group on the MMSE and ACE-R (p < .001 for all pairwise comparisons). Patient groups were matched on the MMSE (p > .10). On the ACE-R, SD patients scored significantly lower than AD patients (U = 24.0, z = 2.59, p = .01, r = 0.54), which reflects the large language component of this measure.

Not surprisingly, the SD group was impaired on standard tests of verbal semantic knowledge. Overall group differences were present for the BNT-15 [H(2) = 28.9, p < .001] and animal fluency [F(2, 41) = 37.9, p < .001]. Post-hoc tests further showed that both dementia groups scored worse than controls (SD: p < .001 and AD: p < .005 on both tests) and that SD patients were more anomic (U = 0, z = 4.0, p < .001, r = 0.85) and generated fewer animals than AD patients (p < .05). Similarly, group differences were present on the copy scores of the ROCF test [H(2) = 8.06, p < .05]. Post-hoc comparisons revealed that the AD group scored significantly lower than the control (U = 27.0, z = 2.79, p = .005, r = 0.54) and SD (U = 23.5, z = 1.99, p < .05, r = 0.44) groups on this measure of visuospatial drawing ability, whereas the SD group did not differ from controls (U = 84.5, p > .10).
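The effect sizes reported for the Mann–Whitney comparisons are consistent with the common convention of converting the z statistic as r = z/√N (Rosenthal); the paper does not state the formula, so this is an inference, but it reproduces the reported values, as the short check below shows.

```python
# Effect size for a Mann-Whitney comparison via the common r = z / sqrt(N)
# conversion; checked against the reported ACE-R contrast (an inference,
# not a formula stated in the paper).
import math

def mann_whitney_r(z, n_total):
    """r = z / sqrt(N), where N is the combined sample size."""
    return abs(z) / math.sqrt(n_total)

# SD (n = 11) vs AD (n = 12) on the ACE-R: U = 24.0, z = 2.59.
print(round(mann_whitney_r(2.59, 11 + 12), 2))  # -> 0.54, as reported
```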
3.3. Recognition of musical and facial emotions

Recognition accuracy for musical and facial emotions is displayed in Fig. 1. Both patient groups showed deficits in the recognition of musical emotions, with performance worst in the SD group. A one-way ANOVA confirmed a significant main effect of group [F(2, 42) = 49.1, p < .001]. Post-hoc comparisons showed that the SD group was significantly impaired compared to both the control and AD groups (p < .001 in both tests). Unlike on the face emotion task, a significant difference was also observed between the AD and control groups (p < .001). Significant group effects were also present for positive [H(2) = 20.0, p < .001] and negative [F(2, 42) = 39.8, p < .001] musical emotions. Post-hoc comparisons again revealed that the SD group was significantly worse at recognizing positive and negative musical emotions compared to controls (p < .001 in both comparisons). On this task, the AD group also differed significantly from controls for both positive and negative musical emotions (p < .05). Comparing the two dementia groups, the SD group differed significantly from the AD group only in the recognition of negative musical emotions (p < .001).

A one-way ANOVA revealed a significant main effect of group for the total per cent correct recognition score on the Ekman 60 Faces Test [F(2, 41) = 13.3, p < .001]. Post-hoc comparisons showed that SD patients scored significantly lower than the AD (p < .05) and control (p < .001) groups, with no difference between AD patients and controls (p > .05).
Fig. 1. Recognition of musical (top) and facial (bottom) emotions in the SD, AD and control groups. Total recognition scores (left) and composite negative and positive emotion scores (right) are displayed. Error bars represent SEM. * = p < .05; ** = p < .01; *** = p < .001.
Fig. 2. Voxel-based morphometry analyses showing regions of atrophy in the SD (top) and AD (bottom) groups compared to healthy controls. Coloured voxels show regions that were significant at p < .05 corrected for multiple comparisons (FWE, t > 1.71). MNI coordinates for the panels are: left (x = 28, y = 14, z = 20), middle (x = 6, y = 62, z = 2) and right (x = 34, y = 48, z = 54).
Analyses of composite scores for the recognition of positive and negative facial emotions separately showed a group effect for positive [H(2) = 6.59, p < .05] and negative [F(2, 41) = 12.5, p < .001] emotions. Post-hoc comparisons revealed that the SD group was significantly worse at recognizing positive (p < .05) and negative (p < .001) facial emotions compared to controls. The
AD group did not differ from controls (p > .05). The SD group was also significantly worse than the AD group at recognizing negative (p < .05), but not positive (p = .06), facial emotions.
3.4. Voxel-based morphometry
Overall patterns of brain atrophy in the SD and AD groups in comparison to matched control participants are displayed in Fig. 2. In the SD group, asymmetrical focal atrophy of the anterior temporal lobes was present, more extensive in the left than the right hemisphere. Atrophy involved the temporal poles and inferolateral temporal regions, with relative preservation of the superior temporal gyrus. In the AD group, atrophy of the bilateral hippocampi was seen, as well as a more diffuse pattern of atrophy involving the frontal and temporoparietal association cortices (Fig. 2). Overall, the patterns of atrophy in the patient groups replicate findings previously reported (e.g., Chan et al., 2001; Whitwell et al., 2011). Results of the correlation analyses combining all participants (SD, AD and controls) are displayed in Figs. 3 and 4 and Table 2.
The recognition of musical emotions was associated with the degree of anterior temporal lobe atrophy bilaterally. On the left, this involved regions including the anterior fusiform gyrus and extended to the amygdala and insula. On the right, the temporal pole and parts of the amygdala and insula were involved. The recognition of facial emotions was associated with grey matter loss involving the right anterior temporal pole, including the insula and amygdala, as well as areas in the right posterior fusiform gyrus. Object naming was, as expected, associated with atrophy of the anterior temporal lobe, involving the left anterior fusiform gyrus and temporal pole. The ability to copy a complex geometric figure, a non-semantic control measure, was correlated with volume loss in the right superior parietal lobule.

Notably, the areas found to correlate with the recognition of musical emotions overlapped considerably with the regions found to correlate with object naming and facial emotion recognition (Fig. 4). This association was further investigated in post-hoc correlation analyses of the behavioural data in the patient groups separately. The ability to label emotions in music correlated with tests of semantic memory in AD (animal fluency: rp = 0.61, p < .05, one-tailed; BNT-15: rp = 0.54, p < .05, one-tailed). In contrast, no significant correlations were observed between the recognition of musical emotions and the Ekman 60 Faces Test (p > .05). Correlations were not significant in the SD group, most likely due to the severity of the semantic memory impairment and the reduced variance in test performance.
Fig. 3. Voxel-based morphometry analyses showing brain regions that correlate with recognition of musical emotions (top left: MNI, x = 46, y = 12, z = 46; top right: MNI, x = 40, y = 6, z = 48), recognition of facial expressions of emotion (middle coronal and sagittal sections: MNI, x = 34, y = 8, z = 22), naming performance (bottom left: MNI, x = 38, y = 8, z = 48) and copying of a complex figure (bottom right: MNI, x = 42, y = 54, z = 64). Coloured voxels are significant at p < .001 uncorrected, except for the ROCF, where p < .01 uncorrected is displayed.
Fig. 4. Areas of overlap between the recognition of musical emotions and the naming of objects (top: MNI, x = 42, y = 8, z = 48) and facial emotions (bottom: MNI, x = 34, y = 8, z = 24). Coloured voxels are significant at p < .001 uncorrected.
Table 2
Voxel-based morphometry results showing regions of significant grey matter density covarying with the emotion recognition tasks and with the cognitive control tasks.

Behavioural measure / Region                           Side   Cluster size   x    y    z
Musical emotions: posterior inferior temporal gyrus    L      2759           46   12   46
Musical emotions: temporal pole                        R      260            40   6    48
Ekman 60 Faces Test: temporal pole                     R      193            34   8    22
Ekman 60 Faces Test: posterior fusiform gyrus          R      155            18   8    42
Boston Naming Test: anterior inferior temporal gyrus   L      4815           38   8    48
Rey–Osterrieth Complex Figure Test—copy¹: superior parietal lobule   R   144   42   54   64

Note: All results are significant at p < .001 uncorrected for multiple comparisons, except ¹ where p < .005 was used.
4. Discussion
This study is the first to investigate the recognition of emotions in two modalities, music and faces, in groups of patients with SD and AD, combining behavioural measures and neuroimaging data. The novel findings arising from this investigation are that the identification of emotions, from both photographs of unfamiliar faces and unknown musical tunes, was more impaired in SD than in AD, particularly for negatively valenced stimuli. Performance on both types of emotional stimuli was associated with the degree of atrophy of the right anterior temporal pole, insula and amygdala. Important differences, however, were observed: the recognition of musical (but not facial) emotions was associated with atrophy within the left anterior temporal lobe.

In our SD cohort, patients were impaired in the recognition of emotions from two differing modalities, faces and music, which is consistent with findings in the literature (Omar et al., 2011). These impairments are not easily explained by the verbal comprehension deficits in SD: patients showed greater deficits in the recognition of musical emotions, which required the use of fewer emotion word labels than the Ekman 60 Faces Test (four rather than six, respectively). Our results highlight the well-established finding in SD of a pervasive and amodal degradation of semantic knowledge, encompassing animals, objects, tools and people, as well as emotion concepts (Hodges & Patterson, 2007; Hsieh et al., 2011).

The disproportionate impairment in the recognition of musical emotions in SD compared to AD patients is consistent with the previous comparative study of two individuals (Omar et al., 2010a). Unlike prior reports, however, impairment in the recognition of musical emotions was present in our AD cohort (Drapeau et al., 2009; Gagnon et al., 2009). The larger sample size in the current study and the use of different, possibly more sensitive, tasks probably underlie these differences with previous studies.

Both patient groups showed greater impairment in the recognition of musical than facial emotions. These behavioural findings may be explained by differences in the relative ease with which emotions are conveyed in these modalities. While certain facial and musical cues are invariant signatures of basic emotions (Juslin & Laukka, 2003; Smith, Cottrell, Gosselin, & Schyns, 2005), it may be argued that the relationship between these cues and the intended underlying emotion is more variable in music than in faces. For example, while tunes that sound happy are generally faster in tempo than those that sound sad, happy melodies can be composed according to different tonal systems and can have widely varying rhythmic and melodic patterns. In contrast, the upward curvature of the lips is characteristic of happy
faces. From phylogenetic and ontogenetic viewpoints, the relevance of facial and musical emotions also differs: the recognition of facial emotions represents an important feature for survival, whereas that of musical emotions appears less critical.

Impaired recognition of facial and musical emotions was found to be associated with atrophy of the right temporal pole, amygdala and insula. These results are consistent with previous neuroimaging studies of facial emotion recognition in healthy and pathological populations (e.g., Adolphs, Tranel, & Damasio, 2001; Costafreda, Brammer, David, & Fu, 2008; Meletti et al., 2003; Philippi, Mehta, Grabowski, Adolphs, & Rudrauf, 2009). Similarly, these regions have also been implicated in the recognition of musical emotions in patients with unilateral temporal resections for epilepsy (Gosselin et al., 2005; Khalfa, Guye, et al., 2008) and in neuroimaging studies of healthy individuals (Blood & Zatorre, 2001; Koelsch, Fritz, von Cramon, Müller, & Friederici, 2006). From a theoretical viewpoint, the right temporal pole and surrounding structures have been argued to be a conduit for integrating visceral information, sensory representations and memories for emotional or socially relevant concepts (Olson, Plotzker, & Ezzyat, 2007). Atrophy of this region, therefore, results in the decoupling of this information and may help to explain the impaired emotion recognition across both stimulus modalities in the patient groups.

This investigation further uncovered an important finding: the severity of impairment in the recognition of emotions in melodies (but not faces) was selectively associated with atrophy of the left anterior and inferior temporal lobe, regions that are primarily devoted to the processing of language and verbal semantics. The extraction of the emotional content of melodies appears to draw upon cognitive language-based resources that are not required for facial emotions.

Although seemingly disparate at a superficial level, the structures of music and language are similar. Both involve sequences of sounds in which basic elements with specific rhythmic and melodic patterns (e.g., phonemes/morphemes and notes/chords) evolve over time according to specific rules (e.g., syntax/grammar and key/harmonic structures) to convey higher-order meaningful structures (e.g., sentences or melodies). A growing body of research shows that music and language processing may share a degree of overlapping cognitive resources (Koelsch et al., 2004; Patel, 2003). Left-sided cortical activity (e.g., in Broca's area) and physiological indices (e.g., the early right anterior negativity measured on EEG) that were once thought to be specific to the extraction of information such as syntax from language are also elicited in response to music (Maess, Koelsch, Gunter, & Friederici, 2001; Sammler, Koelsch, & Friederici, 2011). In addition, evidence from developmental studies demonstrates the bidirectional relation between music and language. Musical training in children enhances the development of language skills such as reading and phonological awareness (Degé & Schwarzer, 2011; Moreno et al., 2009) and modulates the neurophysiological mechanisms underlying language syntax processing (Jentschke & Koelsch, 2009). Conversely, children with deficits in the processing of linguistic syntax also tend to experience difficulties in processing musical irregularities in chord sequences (Jentschke, Koelsch, Sallat, & Friederici, 2008).
Musical expertise facilitates the discrimination and learning of tonal languages (Marie, Delogu, Lampis, Belardinelli, & Besson, 2011; Wong & Perrachione, 2007) and enhances higher-order language pragmatics, such as the perception of emotional speech prosody (Lima & Castro, 2011), whereas adults with congenital amusia exhibit deficits in speech intonation perception (Liu, Patel, Fourcin, & Stewart, 2010; Tillmann et al., 2011). Importantly, the specificity of the associations between facial emotion recognition and the right posterior fusiform area, and between visuospatial abilities and right parietal regions, provides strong support that the association between musical emotion recognition and left anterior
atrophy is not artefactual, that is, not simply a by-product of the severe left-sided atrophy in SD. The significant correlations between the recognition of musical emotions and verbal semantic measures in the AD group also argue against this finding being specific to SD alone.

At a practical level, although most impaired in the recognition of musical emotions, individuals with SD were generally better at identifying positive emotions such as happiness (e.g., tunes in a major key of the Western tonal system, fast in tempo and with a relatively simple rhythmic pattern) than negative emotions in music. This finding is consistent with a case report of an individual with SD who showed a newfound interest in polka music after disease onset (Boeve & Geda, 2001). These findings may be relevant to music therapy, which is used widely in the setting of AD (Guétin et al., 2009; Koger, Chapin, & Brotons, 1999; Ledger & Baker, 2007) but has not yet been investigated in patients with SD. The relationship between music and language suggests that linguistic skills may be an important outcome variable to consider, in addition to mood and behaviour, in music therapies for individuals with dementia (Brotons & Koger, 2000).

One limitation of our study lies in the division of emotion concepts into those that are positive and negative. The emotion of surprise was considered to be a positive emotion in the current study (see also, e.g., Kipps, Nestor, Acosta-Cabronero, Arnold, & Hodges, 2009); however, the valence of surprise remains controversial (Neta, Davis, & Whalen, 2011; Toivonen et al., 2012). Similarly, although sadness in music is distinct from the expression of happiness and calm/peace, sad melodies are often perceived as relatively pleasant (Khalfa, Roy, Rainville, Dalla Bella, & Peretz, 2008; Vieillard et al., 2008) and, therefore, may not necessarily carry the same meaning as sadness conveyed by faces. One avenue for future research may be to examine the recognition, and also the neural correlates, of individual musical and facial emotions separately in FTD, particularly with regard to the extent to which recognition of specific emotions, such as fearful faces and scary music, is impaired and correlates with regions such as the amygdala that are known to be involved in the processing of stimuli signalling threat or uncertainty (Adolphs, 2008; Whalen, 2007).

To conclude, the recognition of emotions conveyed by music draws upon additional cognitive resources that are not required by facial expressions of emotion. Extraction of the emotional content from melodies involves neural resources that overlap in part with those known to be involved in the processing of information from language. This study of patients with SD adds to the growing body of literature highlighting the relationship between the domains of musical and language semantics.
Acknowledgements

The authors thank the participants for their support of our research. We also thank Professor Isabelle Peretz for providing the Montreal Battery of Evaluation of Amusia (MBEA) and Bernard Bouchard for composing the set of musical emotion stimuli. This work was supported in part by a National Health and Medical Research Council (NHMRC) of Australia Project Grant (#510106) and by the Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders. SH is supported by an Australian Postgraduate Award (APA). MH is supported by an ARC Research Fellowship (DP110104202). OP is supported by an NHMRC Clinical Career Development Fellowship (APP1022684). JRH is supported by an ARC Federation Fellowship (#FF0776229).

References

Adolphs, R. (2008). Fear, faces, and the human amygdala. Current Opinion in Neurobiology, 18, 166–172.
Adolphs, R., Tranel, D., & Damasio, H. (2001). Emotion recognition from faces and prosody following temporal lobectomy. Neuropsychology, 15, 396–404.
Andersson, J. L. R., Jenkinson, M., & Smith, S. (2007a). Non-linear optimisation (FMRIB Technical Report TR07JA1). Retrieved from the University of Oxford FMRIB Centre: http://www.fmrib.ox.ac.uk/analysis/techrep/tr07ja1/tr07ja1.pdf.
Andersson, J. L. R., Jenkinson, M., & Smith, S. (2007b). Non-linear registration, aka spatial normalisation (FMRIB Technical Report TR07JA2). Retrieved from the University of Oxford FMRIB Centre: http://www.fmrib.ox.ac.uk/analysis/techrep/tr07ja2/tr07ja2.pdf.
Ashburner, J., & Friston, K. J. (2000). Voxel-based morphometry—The methods. NeuroImage, 11, 805–821.
Balkwill, L. L., Thompson, J. C., & Matsunaga, R. (2004). Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners. Japanese Psychological Research, 46, 337–349.
Balkwill, L. L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception, 17, 43–64.
Bediou, B., Ryff, I., Mercier, B., Milliery, M., Henaff, M. A., D'Amato, T., et al. (2009). Impaired social cognition in mild Alzheimer disease. Journal of Geriatric Psychiatry and Neurology, 22, 130–140.
Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition & Emotion, 19, 1113–1139.
Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated with reward and emotion. Proceedings of the National Academy of Sciences, 98, 11818–11823.
Boeve, B. F., & Geda, Y. E. (2001). Polka music and semantic dementia. Neurology, 57, 1485.
Bozeat, S., Gregory, C. A., Ralph, M. A., & Hodges, J. R. (2000). Which neuropsychiatric and behavioural features distinguish frontal and temporal variants of frontotemporal dementia from Alzheimer's disease? Journal of Neurology, Neurosurgery and Psychiatry, 69, 178–186.
Bozeat, S., Lambon Ralph, M. A., Patterson, K., Garrard, P., & Hodges, J. R. (2000). Non-verbal semantic impairment in semantic dementia. Neuropsychologia, 38, 1207–1215.
Brotons, M., & Koger, S. M. (2000). The impact of music therapy on language functioning in dementia. Journal of Music Therapy, 37, 183–195.
Calabria, M., Cotelli, M., Adenzato, M., Zanetti, O., & Miniussi, C. (2009). Empathy and emotion recognition in semantic dementia: A case report. Brain and Cognition, 70, 247–252.
Chan, D., Fox, N. C., Scahill, R. I., Crum, W. R., Whitwell, J. L., Leschziner, G., et al. (2001). Patterns of temporal lobe atrophy in semantic dementia and Alzheimer's disease. Annals of Neurology, 49, 433–442.
Chow, M. L., Brambati, S. M., Gorno-Tempini, M. L., Miller, B. L., & Johnson, J. K. (2010). Sound naming in neurodegenerative disease. Brain and Cognition, 72, 423–429.
Costafreda, S. G., Brammer, M. J., David, A. S., & Fu, C. H. Y. (2008). Predictors of amygdala activation during the processing of emotional stimuli: A meta-analysis of 385 PET and fMRI studies. Brain Research Reviews, 58, 57–70.
Degé, F., & Schwarzer, G. (2011). The effect of a music program on phonological awareness in preschoolers. Frontiers in Psychology, 2, 124.
Drapeau, J., Gosselin, N., Gagnon, L., Peretz, I., & Lorrain, D. (2009). Emotional recognition from face, voice, and music in dementia of the Alzheimer type.
Annals of the New York Academy of Sciences, 1169, 342–345.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., et al. (2009). Universal recognition of three basic emotions in music. Current Biology, 19, 1–4.
Gagnon, L., Peretz, I., & Fülöp, T. (2009). Musical structural determinants of emotional judgments in dementia of the Alzheimer type. Neuropsychology, 23, 90–97.
Goodglass, H., & Kaplan, E. (2000). Boston Naming Test (2nd edn.). Philadelphia: Lippincott Williams & Wilkins.
Gorno-Tempini, M. L., Hillis, A. E., Weintraub, S., Kertesz, A., Mendez, M., Cappa, S. F., et al. (2011). Classification of primary progressive aphasia and its variants. Neurology, 76, 1006–1014.
Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., Baulac, M., et al. (2005). Impaired recognition of scary music following unilateral temporal lobe excision. Brain, 128, 628–640.
Griffiths, T. D., Warren, J. D., Dean, J. L., & Howard, D. (2004). "When the feeling's gone": A selective loss of musical emotion. Journal of Neurology, Neurosurgery and Psychiatry, 75, 344–345.
Guétin, S., Portet, F., Picot, M. C., Pommie, C., Messaoudi, M., Djabelkir, L., et al. (2009). Effect of music therapy on anxiety and depression in patients with Alzheimer's type dementia: Randomised, controlled study. Dementia and Geriatric Cognitive Disorders, 28, 36–46.
Hargrave, R., Maddock, R. J., & Stone, V. (2002). Impaired recognition of facial expressions of emotion in Alzheimer's disease. The Journal of Neuropsychiatry and Clinical Neurosciences, 14, 64–71.
Hodges, J. R., & Patterson, K. (2007). Semantic dementia: A unique clinicopathological syndrome. Lancet Neurology, 6, 1004–1014.
Hsieh, S., Hornberger, M., Piguet, O., & Hodges, J. R. (2011). Neural basis of music knowledge: Evidence from the dementias. Brain, 134, 2523–2534.
Jentschke, S., & Koelsch, S. (2009). Musical training modulates the development of syntax processing in children. NeuroImage, 47, 735–744.
Jentschke, S., Koelsch, S., Sallat, S., & Friederici, A. D. (2008). Children with specific language impairment also show impairment of music-syntactic processing. Journal of Cognitive Neuroscience, 20, 1940–1951.
Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129, 770–814.
Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33, 217–238.
Khalfa, S., Guye, M., Peretz, I., Chapon, F., Girard, N., Chauvel, P., et al. (2008). Evidence of lateralized anteromedial temporal structures involvement in musical emotion processing. Neuropsychologia, 46, 2485–2493.
Khalfa, S., Roy, M., Rainville, P., Dalla Bella, S., & Peretz, I. (2008). Role of tempo entrainment in psychophysiological differentiation of happy and sad music? International Journal of Psychophysiology, 68, 17–26.
Kipps, C. M., Nestor, P. J., Acosta-Cabronero, J., Arnold, R., & Hodges, J. R. (2009). Understanding social dysfunction in the behavioural variant of frontotemporal dementia: The role of emotion and sarcasm processing. Brain, 132, 592–603.
Koelsch, S. (2010). Towards a neural basis of music-evoked emotions. Trends in Cognitive Sciences, 14, 131–137.
Koelsch, S., Fritz, T., von Cramon, D. Y., Müller, K., & Friederici, A. D. (2006). Investigating emotion with music: An fMRI study. Human Brain Mapping, 27, 239–250.
Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. (2004). Music, language and meaning: Brain signatures of semantic processing. Nature Neuroscience, 7, 302–307.
Koger, S. M., Chapin, K., & Brotons, M. (1999). Is music therapy an effective intervention for dementia? A meta-analytic review of literature. Journal of Music Therapy, 36, 2–15.
Kumfor, F., Miller, L., Lah, S., Hsieh, S., Savage, S., Hodges, J. R., et al. (2011). Are you really angry? The effect of intensity on facial emotion recognition in frontotemporal dementia. Social Neuroscience, 6, 502–514.
Ledger, A. J., & Baker, F. A. (2007). An investigation of long-term effects of group music therapy on agitation levels of people with Alzheimer's disease. Aging & Mental Health, 11, 330–338.
Levy, D. A., Bayley, P. J., & Squire, L. R. (2004). The anatomy of semantic knowledge: Medial vs. lateral temporal lobe. Proceedings of the National Academy of Sciences, 101, 6710–6715.
Lima, C. F., & Castro, S. L. (2011). Speaking to the trained ear: Musical expertise enhances the recognition of emotions in speech prosody. Emotion, 11, 1021–1031.
Liu, F., Patel, A. D., Fourcin, A., & Stewart, L. (2010). Intonation processing in congenital amusia: Discrimination, identification and imitation. Brain, 133, 1682–1693.
Maess, B., Koelsch, S., Gunter, T. C., & Friederici, A. D. (2001). Musical syntax is processed in Broca's area: An MEG study. Nature Neuroscience, 4, 540–545.
Marie, C., Delogu, F., Lampis, G., Belardinelli, M. O., & Besson, M. (2011). Influence of musical expertise on segmental and tonal processing in Mandarin Chinese. Journal of Cognitive Neuroscience, 23, 2701–2715.
McKhann, G., Drachman, D., Folstein, M., Katzman, R., Price, D., & Stadlan, E. M. (1984). Clinical diagnosis of Alzheimer's disease: Report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's Disease. Neurology, 34, 939–944.
Mechelli, A., Price, C. J., Friston, K. J., & Ashburner, J. (2005). Voxel-based morphometry of the human brain: Methods and applications. Current Medical Imaging Reviews, 1, 1–9.
Meletti, S., Benuzzi, F., Rubboli, G., Cantalupo, G., Stanzani Maserati, M., Nichelli, P., et al. (2003). Impaired facial emotion recognition in early-onset right mesial temporal lobe epilepsy. Neurology, 60, 426–431.
Meyers, J., & Meyers, K. (1995). The Meyers Scoring System for the Rey Complex Figure and the Recognition Trial: Professional manual. Odessa, FL: Psychological Assessment Resources.
Mion, M., Patterson, K., Acosta-Cabronero, J., Pengas, G., Izquierdo-Garcia, D., Hong, Y. T., et al. (2010). What the left and right anterior fusiform gyri tell us about semantic memory. Brain, 133, 3256–3268.
Mioshi, E., Dawson, K., Mitchell, J., Arnold, R., & Hodges, J. R. (2006). The Addenbrooke's Cognitive Examination—Revised (ACE-R): A brief cognitive test battery for dementia screening. International Journal of Geriatric Psychiatry, 21, 1078–1085.
Mioshi, E., Hsieh, S., Savage, S., Hornberger, M., & Hodges, J. R. (2010). Clinical staging and disease progression in frontotemporal dementia. Neurology, 74, 1591–1597.
Moreno, S., Marques, C., Santos, A., Santos, M., Castro, S. L., & Besson, M. (2009). Musical training influences linguistic abilities in 8-year-old children: More evidence for brain plasticity. Cerebral Cortex, 19, 712–723.
Morris, J. C. (1993). The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology, 43, 2412–2414.
Neta, M., Davis, F. C., & Whalen, P. J. (2011). Valence resolution of ambiguous facial expressions using an emotional oddball task. Emotion, 11, 1425–1433.
Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: A primer with examples. Human Brain Mapping, 15, 1–25.
Olson, I. R., Plotzker, A., & Ezzyat, Y. (2007). The enigmatic temporal pole: A review of findings on social and emotional processing. Brain, 130, 1718–1731.
Omar, R., Hailstone, J. C., Warren, J. E., Crutch, S. J., & Warren, J. D. (2010a). The cognitive organization of music knowledge: A clinical analysis. Brain, 133, 1200–1213.
Omar, R., Henley, S. M., Bartlett, J. W., Hailstone, J. C., Gordon, E., Sauter, D. A., et al. (2011). The structural neuroanatomy of music emotion recognition: Evidence from frontotemporal lobar degeneration. NeuroImage, 56, 1814–1821.
Omar, R., Rohrer, J. D., Hailstone, J. C., & Warren, J. D. (2010b). Structural neuroanatomy of face processing in frontotemporal lobar degeneration. Journal of Neurology, Neurosurgery and Psychiatry, 82, 1341–1343.
Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674–681.
Peretz, I. (2010). Towards a neurobiology of musical emotions. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 99–126). New York, NY: Oxford University Press.
Peretz, I., Champod, A. S., & Hyde, K. (2003). Varieties of musical disorders: The Montreal Battery of Evaluation of Amusia. Annals of the New York Academy of Sciences, 999, 58–75.
Peretz, I., & Gagnon, L. (1999). Dissociation between recognition and emotional judgements for melodies. Neurocase, 5, 21–30.
Peretz, I., Gagnon, L., & Bouchard, B. (1998). Music and emotion: Perceptual determinants, immediacy, and isolation after brain damage. Cognition, 68, 111–141.
Perry, R. J., Rosen, H. R., Kramer, J. H., Beer, J. S., Levenson, R. L., & Miller, B. L. (2001). Hemispheric dominance for emotions, empathy and social behaviour: Evidence from right and left handers with frontotemporal dementia. Neurocase, 7, 145–160.
Philippi, C. L., Mehta, S., Grabowski, T., Adolphs, R., & Rudrauf, D. (2009). Damage to association fiber tracts impairs recognition of the facial expression of emotion. The Journal of Neuroscience, 29, 15089–15099.
Rankin, K. P., Kramer, J. H., & Miller, B. L. (2005). Patterns of cognitive and emotional empathy in frontotemporal lobar degeneration. Cognitive and Behavioral Neurology, 18, 28–36.
Rosen, H. J., Perry, R. J., Murphy, J., Kramer, J. H., Mychack, P., Schuff, N., et al. (2002). Emotion comprehension in the temporal variant of frontotemporal dementia. Brain, 125, 2286–2295.
Rueckert, D., Sonoda, L. I., Hayes, C., Hill, D. L., Leach, M. O., & Hawkes, D. J. (1999). Nonrigid registration using free-form deformations: Application to breast MR images. IEEE Transactions on Medical Imaging, 18, 712–721.
Sammler, D., Koelsch, S., & Friederici, A. D. (2011). Are left fronto-temporal brain areas a prerequisite for normal music-syntactic processing? Cortex, 47, 659–673.
Seeley, W. W., Bauer, A. M., Miller, B. L., Gorno-Tempini, M. L., Kramer, J. H., Weiner, M., et al. (2005). The natural history of temporal variant frontotemporal dementia. Neurology, 64, 1384–1390.
Smith, M. L., Cottrell, G. W., Gosselin, F., & Schyns, P. G. (2005). Transmitting and decoding facial expressions. Psychological Science, 16, 184–189.
Smith, S. M. (2002). Fast robust automated brain extraction. Human Brain Mapping, 17, 143–155.
Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E., Johansen-Berg, H., et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage, 23, S208–S219.
Smith, S. M., & Nichols, T. E. (2009). Threshold-free cluster enhancement: Addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage, 44, 83–98.
Sollberger, M., Stanley, C. M., Wilson, S. M., Gyurak, A., Beckman, V., Growdon, M., et al. (2009). Neural basis of interpersonal traits in neurodegenerative diseases. Neuropsychologia, 47, 2812–2827.
Tillmann, B., Burnham, D., Nguyen, S., Grimault, N., Gosselin, N., & Peretz, I. (2011). Congenital amusia (or tone-deafness) interferes with pitch processing in tone languages. Frontiers in Psychology, 2, 120.
Toivonen, R., Kivelä, M., Saramäki, J., Viinikainen, M., Vanhatalo, M., & Sams, M. (2012). Networks of emotion concepts. PLoS One, 7, e28883.
Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion, 22, 720–752.
Weiss, E. M., Kohler, C. G., Vonbank, J., Stadelmann, E., Kemmler, G., Hinterhuber, H., et al. (2008). Impairment in emotion recognition abilities in patients with mild cognitive impairment, early and moderate Alzheimer disease compared with healthy comparison subjects. American Journal of Geriatric Psychiatry, 16, 974–980.
Werner, K. H., Roberts, N. A., Rosen, H. J., Dean, D. L., Kramer, J. H., Weiner, M. W., et al. (2007). Emotional reactivity and emotion recognition in frontotemporal lobar degeneration. Neurology, 69, 148–155.
Whalen, P. J. (2007). The uncertainty of it all. Trends in Cognitive Sciences, 11, 499–500.
Whitwell, J. L., Jack, C. R., Jr., Przybelski, S. A., Parisi, J. E., Senjem, M. L., Boeve, B. F., et al. (2011). Temporoparietal atrophy: A marker of AD pathology independent of clinical diagnosis. Neurobiology of Aging, 32, 1531–1541.
Wong, P. C. M., & Perrachione, T. K. (2007). Learning pitch patterns in lexical identification by native English-speaking adults. Applied Psycholinguistics, 28, 565–585.
Young, A. W., Perrett, D. I., Calder, A. J., Sprengelmeyer, R., & Ekman, P. (2002). Facial Expressions of Emotion: Stimuli and Tests (FEEST). Thurstone, UK: Thames Valley Test Company.
Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging, 20, 45–57.