International Congress Series 1270 (2004) 173–176
Spatiotemporal neuromagnetic activities during pitch processing in musicians

M. Yumoto a,*, K. Itoh b, A. Uno c, M. Matsuda d, S. Karino e, O. Saitoh f, Y. Kaneko g, K. Nakahara a, K. Kaga e

a Department of Clinical Laboratory, Faculty of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8655, Japan
b Department of Speech and Cognitive Science, Faculty of Medicine, University of Tokyo, Tokyo, Japan
c National Institute of Mental Health, National Center of Neurology and Psychiatry, Chiba, Japan
d Department of Musicology, Tokyo National University of Fine Arts and Music, Tokyo, Japan
e Department of Otolaryngology, Faculty of Medicine, University of Tokyo, Tokyo, Japan
f Department of Psychiatry, National Center of Neurology and Psychiatry, Tokyo, Japan
g Department of Neurosurgery, National Center of Neurology and Psychiatry, Tokyo, Japan
Abstract. Although the spatial distribution of the neural subsystems involved in music processing has been elucidated by recent neuroimaging studies, little is known about the temporal profile of these neural activities. In this study, spatiotemporal neuromagnetic activities during pitch processing in a musical context were investigated using a cross-modal pitch-matching task, which required subjects to detect infrequent pitch errors embedded in a heard performance while sight-reading its musical score. Eight right-handed musicians and eight right-handed non-musicians participated in this study. Neuromagnetic responses to each tone onset were recorded using a VectorView™ system (Neuromag, Helsinki, Finland). Source localization was estimated by the minimum current estimates (MCE) algorithm and verified by a multiple-current-dipole model. Although every subject showed the magnetic components M50, M100 and M200 in both the erroneous and the correct condition, a significant amplitude difference between the two conditions was detected only in musicians. In musicians, the incongruent condition activated spatiotemporally distributed brain regions beyond the superior temporal gyrus, including the middle temporal, inferior temporal and inferior frontal cortices, the dorsolateral prefrontal cortex, the inferior parietal lobule and the sensorimotor cortex. In contrast, in non-musicians the activated areas were limited to the vicinity of the auditory cortex in both conditions. Our findings suggest that musicians employ multiple strategies for pitch processing. © 2004 Elsevier B.V. All rights reserved.

Keywords: Magnetoencephalography (MEG); Minimum current estimate (MCE); Music; Pitch; Score sight-reading
* Corresponding author. Tel.: +81-3-3815-5411; fax: +81-3-5689-0495. E-mail address: [email protected] (M. Yumoto).

0531-5131/ © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.ics.2004.05.046
1. Introduction

As letters of the alphabet have auditory (phonemic) and visual (graphemic) modalities [1], a musical score also has both modalities for trained musicians [2]. Although such multimodal integration in music processing has been studied [3], little is known about its temporal profile [4]. In this study, a spatiotemporal aspect of the functional neuroanatomy of music processing was investigated by applying a cross-modal tonal violation technique [5] to neuromagnetic measurement.

2. Materials and methods

2.1. Subjects

Eight right-handed musicians (aged 22–28 years, 4 females and 4 males) and eight right-handed non-musicians (aged 21–35 years, 5 females and 3 males) participated in this study. All subjects gave written informed consent prior to the experiments, and the procedure used in this study had been approved by the Ethics Committee of the University of Tokyo.

2.2. Stimulation and task

Tone series were presented both visually and auditorily to the subjects. To exclude contamination by long-term memory and emotion, we used unfamiliar atonal melodies of computer-generated oboe tones between musical C4 (262 Hz) and B4 (494 Hz) in semitone steps. Auditory stimuli were played on an Apple personal computer via a MOTU 828 audio interface (Mark of the Unicorn, Massachusetts, USA) and were presented binaurally through ER-3A foam insert earphones (Etymotic Research, Illinois, USA) at a comfortable listening level. Subjects were instructed to sight-read the score projected onto a screen in front of them while listening to the melody, and to detect infrequent (probability = 13.2%) performance errors without overt response. All errors implanted in each trial were melodic: a pitch error perturbed a written note by raising or lowering it a diatonic interval, with equal probability.
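The error-injection scheme just described can be made concrete with a short sketch. This is not the authors' stimulus code, which the paper does not reproduce; the melody length and the rendering of "a diatonic interval" as a whole tone are assumptions made purely for illustration.

```python
# Illustrative sketch of the Section 2.2 stimulus logic: atonal melodies drawn
# from C4-B4 in semitone steps, with infrequent pitch errors (p = 0.132) that
# shift a written note up or down by a diatonic interval with equal probability.
import random

PITCH_RANGE = range(60, 72)   # MIDI 60 (C4, 262 Hz) .. 71 (B4, 494 Hz), semitone steps
ERROR_PROB = 0.132            # 13.2% of tones carry an implanted error
DIATONIC_STEP = 2             # one diatonic interval, approximated here as a whole tone

def midi_to_hz(note: int) -> float:
    """Equal-tempered frequency of a MIDI note (A4 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12)

def make_trial(n_tones: int = 8) -> tuple[list[int], list[int]]:
    """Return (score, performance): the notated melody, and the heard melody
    in which some notes are randomly raised or lowered by one diatonic step."""
    score = [random.choice(PITCH_RANGE) for _ in range(n_tones)]
    performance = [
        note + random.choice([-DIATONIC_STEP, DIATONIC_STEP])
        if random.random() < ERROR_PROB else note
        for note in score
    ]
    return score, performance
```

Tones synthesized at midi_to_hz(note) with an oboe timbre would then be presented auditorily while the notated score is displayed.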
2.3. Measurement

Neuromagnetic signals were recorded using a VectorView™ system (Neuromag, Helsinki, Finland), which has 102 magnetometers and 204 planar first-order gradiometers at 102 measurement sites on a helmet-shaped surface covering the entire scalp. In this study, all magnetometers were inactivated. The passband of the MEG recordings was 1.0–200 Hz, and the data were digitized at 600 Hz. Horizontal and vertical electro-oculograms (EOG; passband 0.03–100 Hz) and electroencephalograms (EEG; passband 0.3–100 Hz) were recorded simultaneously to monitor eye position and scalp potential distribution, respectively.

2.4. Data analysis

Evoked magnetic fields to correct (congruent) and erroneous (incongruent) tones were selectively averaged off-line. All trials with MEG gradients greater than 3000 fT/cm were excluded, and approximately 1–4 noisy channels were also excluded from further analysis. The averaged signals were low-pass filtered at 45 Hz prior to analyses. Amplitudes of the activity in the two conditions were compared by repeated-measures ANOVA. Source localization was analyzed by the minimum current estimates (MCE) algorithm [6] and a multiple-current-dipole model; the results were superimposed onto 3D-reconstructed MR images and evaluated neuroanatomically.
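The selective averaging, artifact rejection and filtering steps above can be reproduced in outline with MNE-Python, an open-source MEG analysis package that postdates this study; the recording file name, trigger codes and epoch window below are assumptions for illustration only, not the authors' pipeline.

```python
# Sketch of the Section 2.4 preprocessing with MNE-Python (illustrative only).
import mne

raw = mne.io.read_raw_fif("pitch_task_raw.fif", preload=True)  # hypothetical recording
raw.pick_types(meg="grad", eog=True)  # magnetometers were inactivated in the study

events = mne.find_events(raw)
event_id = {"congruent": 1, "incongruent": 2}  # assumed trigger codes per tone onset

# Selective averaging with artifact rejection: drop trials whose gradiometer
# signals exceed 3000 fT/cm (MNE expresses gradients in T/m, hence 3000e-13).
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.4,
                    baseline=(None, 0), reject=dict(grad=3000e-13))

# Average per condition, then low-pass filter at 45 Hz prior to analysis.
evoked = {cond: epochs[cond].average().filter(l_freq=None, h_freq=45)
          for cond in event_id}
```

The 1.0–200 Hz acquisition passband is assumed to have been applied at recording time, so only the 45 Hz low-pass is repeated here.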
3. Results

Every subject showed three prominent magnetic components, M50, M100 and M200, evoked at latencies of approximately 50, 100 and 200 ms after each tone onset in both hemispheres.

Fig. 1. Evoked magnetic fields in a representative musician (A) and a representative non-musician (B). LH and RH denote selected channels over the left and right hemispheres, respectively.
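The amplitude comparison behind the group contrast visible in Fig. 1 (repeated-measures ANOVA, Section 2.4) can be sketched as follows; statsmodels is a stand-in for whatever software the authors used, and the values are placeholders for per-subject component peaks (e.g., M100).

```python
# Hedged sketch of the repeated-measures amplitude comparison (Section 2.4).
# Placeholder data stand in for per-subject M100 peak amplitudes (fT/cm).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

n_subjects = 8  # per group, as in the study
rng = np.random.default_rng(0)

df = pd.DataFrame({
    "subject":   np.repeat(np.arange(n_subjects), 2),
    "condition": ["congruent", "incongruent"] * n_subjects,
    "amplitude": rng.normal(loc=100.0, scale=10.0, size=2 * n_subjects),  # placeholders
})

# Within-subject factor: condition. With two levels this is equivalent to a
# paired t-test on the per-subject amplitudes.
print(AnovaRM(df, depvar="amplitude", subject="subject",
              within=["condition"]).fit())
```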
Table 1
Activated brain areas detected by both MCE and multidipole modelling

Group           Condition     Left hemisphere                   Right hemisphere
Musicians       Congruent     AC, mT                            AC, mT, FG
                Incongruent   AC, mT, FG, dlPFC, iPL, SM, FO    AC, mT, FG, dlPFC, iPL, FO
Non-musicians   Congruent     AC                                AC
                Incongruent   AC                                AC

Abbreviations: AC, auditory cortex; mT, middle temporal cortex; FG, fusiform gyrus; dlPFC, dorsolateral prefrontal cortex; iPL, inferior parietal lobule; SM, sensorimotor cortex; FO, frontal operculum.
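The localizations in Table 1 were obtained with MCE [6] and cross-checked by multidipole modelling. A present-day analogue of MCE's L1-norm estimate is the sparse inverse solver in MNE-Python; the sketch below is an assumption-laden illustration of that analogue, not the authors' pipeline, and the forward-model and covariance file names are hypothetical.

```python
# Sparse (L1-norm) source estimation in the spirit of MCE [6], sketched with
# MNE-Python. All input files are hypothetical placeholders.
import mne
from mne.inverse_sparse import mixed_norm

evoked = mne.read_evokeds("incongruent-ave.fif", condition=0)  # averaged response
fwd = mne.read_forward_solution("subject-fwd.fif")             # forward model
noise_cov = mne.read_cov("prestim-cov.fif")                    # noise covariance

# alpha sets the sparsity penalty: larger values keep fewer, more focal sources.
stc = mixed_norm(evoked, fwd, noise_cov, alpha=50.0)

# stc can then be rendered on the subject's reconstructed cortical surface,
# analogous to the paper's superposition onto 3D MR images.
```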
In musicians, the waveforms of the evoked responses in the congruent and incongruent conditions differed significantly (p < 0.01). In contrast, the differences between the two conditions in non-musicians were not significant (Fig. 1). In musicians, activity was confined to the temporal lobe in the congruent condition, whereas the incongruent condition revealed spatiotemporally distributed brain regions beyond the superior temporal gyrus: the middle temporal, inferior temporal and inferior frontal cortices, the dorsolateral prefrontal cortex, the inferior parietal lobule and the sensorimotor cortex were activated. In non-musicians, activated areas were limited to the vicinity of the auditory cortex in both conditions (Table 1).

4. Discussion

Both subject groups showed consistent auditory evoked magnetic fields time-locked to each tone onset; however, only musicians showed additional activity in the incongruent condition. Considering that reading music induces auditory imagery in trained musicians [2], violation of the imagery expected from the notation may modulate the perception of incongruent tones. This interpretation, based on sensory encoding of pitch information, explains only part of the incongruent activity detected in this study. Musicians may have other strategies for encoding pitch information, such as verbal encoding (naming), visual encoding (notation) and motor encoding (playing instruments) [7]. The activated brain areas detected in this study are consistent with this postulate.

Acknowledgements

This work was supported in part by JSPS Grants-in-Aid for Scientific Research 12610163, 13680926, 13877400, 15300209 and 15500312.

References

[1] T. Raij, K. Uutela, R. Hari, Audiovisual integration of letters in the human brain, Neuron 28 (2) (2000) 617–625.
[2] W. Brodsky, et al., Auditory imagery from musical notation in expert musicians, Percept. Psychophys. 65 (4) (2003) 602–612.
[3] J. Sergent, et al., Distributed neural network underlying musical sight-reading and keyboard performance, Science 257 (5066) (1992) 106–109.
[4] T.C. Gunter, B.H. Schmidt, M. Besson, Let's face the music: a behavioral and electrophysiological exploration of score reading, Psychophysiology 40 (5) (2003) 742–751.
[5] L.M. Parsons, Exploring the functional neuroanatomy of music performance, perception, and comprehension, Ann. N.Y. Acad. Sci. 930 (2001) 211–231.
[6] K. Uutela, M. Hämäläinen, E. Somersalo, Visualization of magnetoencephalographic data using minimum current estimates, NeuroImage 10 (2) (1999) 173–180.
[7] M. Mikumo, Motor encoding strategy for pitches of melodies, Music Percept. 12 (2) (1994) 175–197.