Cognitive Brain Research 7 (1999) 511–518
Short communication
Hemispheric lateralization of the neural encoding of temporal speech features: a whole-head magnetencephalography study

Hermann Ackermann a,*, Werner Lutzenberger b, Ingo Hertrich a

a Department of Neurology, University of Tübingen, Hoppe-Seyler-Straße 3, D-72076 Tübingen, Germany
b MEG-Center, University of Tübingen, Otfried-Müller-Str. 47, D-72076 Tübingen, Germany

Accepted 1 December 1998
Abstract

Using a passive oddball design (randomized series of standard [frequent] and deviant [rare] stimuli), the present study investigated the neural encoding of syllables differing in a duration parameter (/da/ = short-lag voice onset time [VOT], /ta/ = long-lag VOT) by means of whole-head magnetencephalography (MEG). Dipolar activities at the level of the supratemporal planes explained the evoked magnetic fields. The N1m/P2m-complex (the magnetic equivalent of the N1/P2-wave of the electroencephalogram) in response to standard stimuli showed a bilaterally symmetric distribution. Furthermore, the latency of P2m significantly depended on VOT. Finally, the mismatch response to the deviant /da/-syllables, which in German represent a very frequent word (English: 'here' or 'there'), evolved significantly earlier in the left hemisphere than on the right side. In conclusion, processing speed may be an important aspect of the hemispheric specialization of language. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Magnetencephalography (MEG); Speech perception; Hemispheric lateralization; Acoustic cortex; Supratemporal plane; Temporal processing
* Corresponding author. Fax: +49-7071-296507; E-mail: [email protected]

Within some limits, the various classes of speech sounds (phonemes) of any language system can be characterized in terms of rather specific features of the acoustic signal [9]. Among others, duration parameters contribute to phoneme contrasts. For example, long and short vowels represent distinct linguistic categories in some languages such as German [5]. Furthermore, syllables with an initial voiced versus unvoiced stop consonant, e.g., /da/ versus /ta/, differ in voice onset time (VOT), i.e., the time interval between the initial burst and vowel onset in the acoustic signal (see Fig. 1 for an example) [2]. The temporal speech cue VOT exhibits specific neural encoding at the level of the primary acoustic cortex (A1): multilaminar recordings in awake monkeys revealed time-locked intracortical and surface responses to consonant release and voicing onset, i.e., the two markers of VOT, during auditory presentation of the syllables /da/ and /ta/ [22]. As a rule, the posterior part of the left inferior frontal gyrus (Broca's area) and the left superior temporal gyrus (Wernicke's area), sparing the auditory cortex, are considered the anatomic substratum of hemispheric language dominance. Besides lateralization of higher (cognitive) aspects of speech communication at the level of Broca's and Wernicke's areas, recent clinical findings and dichotic listening experiments indicate a specialization of the left sylvian area for the temporal processing of auditory stimuli in humans [23]. Furthermore, this basic, non-linguistic function is assumed to be of critical importance for speech perception. Since the acoustic cortex seems to be specialized for the decoding of temporal aspects of auditory signals, and since this area shows a distinct neural representation of VOT, language dominance, in terms of faster processing of durational speech parameters such as VOT, might be expected to extend beyond Broca's and Wernicke's regions to the supratemporal plane of the left hemisphere. Both electroencephalography (EEG) and the more recent technology of magnetencephalography (MEG) allow the spatiotemporal distribution of cortical auditory processing, including laterality effects, to be studied within the domain of milliseconds. These procedures revealed two components of speech sound encoding at the level of the supratemporal plane. (a) Provided that the interval between successive stimuli exceeds some very short minimal duration, any discrete
0926-6410/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0926-6410(98)00054-8
H. Ackermann et al. / Cognitive Brain Research 7 (1999) 511–518
Fig. 1. The two upper panels display the oscillograms of the synthetic syllables /da/ and /ta/. Bottom panel: formants of the respective stimuli.
auditory event reliably evokes long-latency responses comprising a negative EEG deflection peaking at about 100 ms after stimulus onset (N1) and a subsequent positive potential with a latency of 180–200 ms (P2) [13]. This event-related potential (ERP) seems to reflect the mere detection of abrupt stimulus changes with respect to, e.g., sound pressure or spectral features. MEG studies reported N1m-responses (the magnetic equivalent of the EEG N1-component) to speech sounds, time-locked, e.g., to the onset of consonant–vowel sequences [8] or to acoustic events within syllables such as the transition from a consonantal to the following vocalic segment [6]. Evoked magnetic fields, therefore, should reflect durational acoustic speech parameters such as VOT in terms of a sequence of N1m-responses. (b) Besides N1/P2 (N1m/P2m), randomly interspersed deviant stimuli within a sequence of homogeneous acoustic events evoke an additional EEG response, the mismatch negativity (MMN), characterized in humans by a latency of roughly 200 ms [12]. Dipole source analyses indicate that MMN is generated at the level of the supratemporal plane [20]. In contrast to the N1/P2-complex, MMN seems to be related to early cognitive processes and to reflect the evaluation of the deviants against stored traces of the standard stimuli. The latter ERP-component arises even in the absence of attention directed to the auditory channel and, thus, appears to reflect automatic (pre-attentive) stimulus discrimination [13]. Both EEG and MEG studies revealed MMN and MMNm (the magnetic equivalent of MMN), respectively, to be sensitive to acoustic speech parameters such as onset frequency [3,10,18], transition dynamics [11], and steady state [1,15] of the first three formants.

The available data on the temporospatial distribution of the late ERP to speech sounds, i.e., the N1/P2- (N1m/P2m-) complexes and MMN- (MMNm-) responses, do not yet provide a coherent picture with respect to hemispheric side-differences in processing. Furthermore, only sparse data are available on the duration parameter VOT [7]. These latter studies, in addition, do not provide information on laterality effects because recordings were restricted to the midsagittal plane. Compared to EEG, MEG technology exhibits a selective sensitivity to superficial, tangentially oriented neural currents and, therefore, allows for a rather focused study of the activity of the auditory cortex [4]. Furthermore, event-related magnetic fields in response to discrete acoustic stimuli are less distorted by the skull and the skin than the EEG. MEG, thus, can be expected to represent a feasible tool for the investigation of speech perception at the level of the supratemporal plane. In particular, whole-head devices are capable of detecting small but significant lateralization effects of cerebral functions at high spatial resolution [16]. The present study, therefore, used this technology to test the hypotheses (a) that the N1m/P2m-components of the evoked magnetic fields in response to speech stimuli track VOT and (b) that the left supratemporal plane is superior with respect to the detection of this durational speech sound feature, as reflected in the N1/P2- (N1m/P2m-) complex, and/or its early cognitive processing, as indicated by the MMN- (MMNm-) responses.
1. Materials and methods

Eleven paid university students and laboratory employees (age: 24–42 years; 2 females) participated in the present study. Informed consent was obtained from all volunteers. At clinical examination, the subjects showed unimpaired hearing sensitivity. None of them had a history of any relevant audiological or neurological disease. The local Ethics Committee had approved the experimental design.

For the sake of comparability with the animal data on cortical VOT processing obtained by Steinschneider et al. [22], the present study used similar resynthesized speech material as auditory stimuli. Using commercially available software (LPC-parameter manipulation and synthesis option of the Computerized Speech Lab CSL 4300; Kay Elemetrics, USA), the two consonant–vowel syllables /da/ (= initial voiced stop) and /ta/ (= unvoiced cognate) were created (frame length = 10 ms, sampling rate = 12,500 Hz, pulse excitation for voiced portions). The two stimulus categories differed in VOT (VOT /da/ = 10 ms, /ta/ = 60 ms) but comprised identical linear transitions, extending across 30 ms, of the first three formants
(onset = 548, 1834, and 3441 Hz, respectively; steady states = 816, 1182, and 2631 Hz; Fig. 1). In order to obtain natural-sounding stimuli, two further stationary formants at 4300 and 4900 Hz were added and the formant bandwidths manually adjusted. The monotonous fundamental frequency of the vocalic portion of the syllables amounted to 128 Hz. For technical reasons, i.e., adaptation to the PC used for syllable presentation, the stimuli had to be resampled at 11,127 Hz.

The magnetic fields were recorded by means of a 143-channel whole-head magnetometer (CTF; Vancouver, Canada) in an electromagnetically shielded room using a sampling rate of 250 Hz and an anti-aliasing filter at 80 Hz. Subjects were seated during the MEG recordings and instructed to ignore the stimuli. Two randomized series of the two syllables /da/ and /ta/ were applied, using
binaural air-conducting plastic tubes and ear inserts, in balanced order across subjects: (a) /ta/ = standard stimulus (80% of events), /da/ = deviant (20%); (b) /da/ = standard (80%), /ta/ = deviant (20%). Each block comprised 450 sweeps of a duration of 500 ms each, including a pre-stimulus interval of 48 ms (stimulus length = 190 ms). The onset-to-onset inter-stimulus interval amounted to 805 ms. An artifact criterion of 1.3 pT relative to the pre-stimulus baseline resulted in the elimination of about 10% of the trials, mainly due to blinks visible in the frontal sensors. After off-line filtering (1 to 40 Hz), the recordings were averaged separately for the two stimulus categories considered, i.e., /da/ and /ta/, within subjects.

Fig. 2. Evoked magnetic fields in response to the frequent (standard) /da/- (left column) and /ta/-stimuli (right column). A: the two panels at the top of the figure show the superimposed MEG responses averaged across subjects. Both stimulus categories elicited two prominent initial peaks (0.0 on the time axis = stimulus onset). B: the four maps in the middle and bottom parts of the figure display the distribution of the evoked magnetic fields at the times of the two prominent peaks referred to (left side = /da/-stimuli: 70 and 112 ms, respectively; right side = /ta/-stimuli: 70 and 190 ms).

The magnetic responses to the frequent /da/- and /ta/-syllables as well as two kinds of response differences between standards and deviants (MMNm) were analyzed: (a) within blocks (rare /da/ minus frequent /ta/, rare /ta/ minus frequent /da/); (b) between blocks (rare /da/ minus frequent /da/, rare /ta/ minus frequent /ta/; see below). Two reasons motivated this second approach to signal analysis. First, the initial consonants of the two syllables /da/ and /ta/ pertain to separate sound categories (phonemes). This linguistic factor might contribute to or interfere with the effects of acoustic features [17]. Second, /da/ and /ta/ differ in the duration of the relevant temporal cue, i.e., VOT. In order to account for these confounding variables, switching the standard and deviant events has been recommended [7,19].

Further analysis relied on two complementary procedures. First, the averages of the subjects were collapsed to group data sets. This approach allowed the calculation of
Fig. 3. Magnetic mismatch response (MMNm = magnetic analogue of the mismatch negativity) in terms of the difference wave between the magnetic fields elicited by the deviant and the corresponding standard stimuli (left column: deviant /da/ minus standard /da/; right column: deviant /ta/ minus standard /ta/). A: superimposed MEG responses. B: maps of the responses 156 ms and 190 ms post stimulus onset; the /da/-responses show asynchronous onset. C: statistical mapping for the time interval 160–190 ms; only effect sites with p < 0.001 are displayed.
group means of the fields as well as statistical mapping of the effects. As a second step of analysis, dipole localization techniques were used in order to evaluate the dynamics of the origin and strength of the sources. There is evidence that a single dipole located near the primary auditory cortices may account for neuromagnetic fields in response to speech sounds [21]. Symmetrically linked dipoles, thus, were fitted to the data within a time domain which showed a clear-cut pattern of activity at the supratemporal plane. The dipoles were then fixed and the time course of source strength computed for an interval extending from 48 ms prior to 250 ms post stimulus onset. Latencies were determined by means of an interactive graphical computer program. Statistical analysis relied on within-group ANOVAs.
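The analysis pipeline described above can be sketched in a few lines. No code accompanies the paper, so the function names, array shapes, and the least-squares estimate of dipole strength below are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

def average_epochs(epochs, reject=1.3e-12):
    """Average baseline-corrected epochs (trials x channels x samples),
    discarding any trial whose peak amplitude exceeds the artifact
    criterion (here 1.3 pT, as stated in the paper)."""
    keep = np.abs(epochs).max(axis=(1, 2)) < reject
    return epochs[keep].mean(axis=0)

def mmnm_between_blocks(evoked):
    """Identity-matched mismatch responses: for each syllable, subtract
    its response as an 80% standard (one block) from its response as a
    20% deviant (the other block), cancelling inherent acoustic
    differences between /da/ and /ta/.

    evoked: dict mapping (syllable, role) -> (channels x samples) array."""
    return {s: evoked[(s, 'deviant')] - evoked[(s, 'standard')]
            for s in ('da', 'ta')}

def dipole_strength(leadfield, fields):
    """Least-squares time course of source strength for dipoles with
    fixed position and orientation (one per hemisphere): the q(t) that
    minimizes ||B(t) - L q(t)||^2 is pinv(L) B(t).

    leadfield: (channels x dipoles), fields: (channels x samples)."""
    return np.linalg.pinv(leadfield) @ fields
```

With the dipoles fixed, the pseudoinverse yields the moment time course minimizing the residual field variance at every sample; the fraction of variance explained by such a fit is what the explained-variance figures in Section 2 refer to.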
2. Results

Fig. 2A separately displays the MEG responses to the standard, i.e., frequent, /da/- and /ta/-syllables. Each trace represents the overlay of the recordings of all channels. For the /da/-stimulus, two prominent deflections emerged with latencies of about 70 and 115 ms, respectively. The left column of Fig. 2B shows the distribution of the evoked magnetic field in response to the /da/-syllables at the times of these two initial peaks. Single dipole sources at the level of the left and right supratemporal planes each explained more than 95% of the variance of these two patterns. Compared to the /da/-syllable, the responses to the standard /ta/-stimuli exhibited a quite similar first deflection (Fig. 2A, right side). In contrast, the second one, peaking at about 185 ms, presented with a considerably broadened and delayed morphology. The two maps in the right column of Fig. 2B display the distribution of the evoked magnetic fields in response to /ta/. Obviously, both stimulus categories, i.e., /da/ and /ta/, elicited quite similar spatiotemporal magnetic patterns.

The left part of Fig. 3 shows the mismatch activity in terms of the computed difference waves between the responses to deviant and frequent /da/-stimuli. A left-sided peak emerged with a latency of about 160 ms. At about 190 ms after stimulus onset, this unilateral pattern of the evoked magnetic field had developed into a rather symmetrical distribution across both hemispheres. In contrast, the mismatch response to /ta/-stimuli presented with a nearly synchronous onset on both sides. The maps of Fig. 3C (bottom panels) display the results of a z-statistic comprising the distribution of the values with a probability of less than 0.001. Within a time interval extending from 160 to 190 ms post stimulus onset, the /da/-mismatch achieved significance on the left side only, whereas its stimulus cognate elicited bilaterally significant effects.

Fig. 4. Time course of the mean dipole strength in response to standard stimuli (/da/ and /ta/) recorded from the left (solid lines) and the right hemisphere (dashed lines). The upward and subsequent downward deflections represent the N1m- (= magnetic analogue of the EEG N1-wave) and P2m-complexes (= magnetic counterpart of the EEG P2-potential), respectively. Note the delay of the P2m-component elicited by the /ta/-stimuli (long-lag VOT) as compared to the /da/-cognates (short-lag VOT).

Fig. 5. Mismatch negativity of the evoked magnetic fields (MMNm) obtained by subtraction of the responses to the deviants from the waves elicited by the standard stimuli within the experimental blocks. The traces again show the time course of the mean dipole strengths (solid lines: left hemisphere; dashed lines: right hemisphere).

As a further step of analysis, the dipole sources of the evoked magnetic fields were computed on the basis of the
recordings obtained from all channels. Fig. 4 shows the time course of the mean dipole strength in response to standard stimuli, separately for the two hemispheres and the two syllables considered (/ta/ versus /da/). Each stimulus category yielded rather identical N1m/P2m-complexes on both sides of the brain, which differed significantly neither in latency nor in amplitude. However, the /ta/-stimuli elicited a significantly delayed P2m-wave (185 ± 5 ms) as compared to the respective /da/-response (118 ± 13 ms; F[1,10] = 78.6; p < 0.0001).

The difference waves obtained by simple subtraction of the responses to rare stimuli from the ones elicited by the frequent events within each block are displayed in Fig. 5. Most probably, the discrepancies between the MMNm elicited under these two conditions reflect inherent contrasts of the stimuli considered (see Section 1 and Fig. 4). As a second step of MMNm analysis, therefore, subtraction of the ERP in response to standards (e.g., /da/ in the series comprising 80% /da/- and 20% /ta/-events) from the waves elicited by the respective deviants (e.g., /da/ in the series with 80% /ta/- and 20% /da/-events) was performed separately for the two phoneme categories (Fig. 6). An ANOVA of the latencies showed a significant interaction of syllable category and hemisphere (F[1,10] = 17.7, p < 0.002). As concerns the /da/-stimuli, the latency of the MMNm obtained from the left hemisphere (157 ms)
Fig. 6. MMNm obtained by subtraction of the responses to the deviants from the waves elicited by the standard stimuli between the experimental blocks, in order to correct for inherent stimulus differences. For further explanation see the legend to Fig. 5.
turned out to be 8 ms shorter than on the contralateral side (165 ms; t[10] = 3.32, p < 0.01 for the difference). The /ta/-syllables failed to exhibit a significant latency difference (left = 163 ms, right = 160 ms). Furthermore, no significant effects were found for the MMNm-amplitudes (F < 1).
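The 8-ms hemisphere difference reported above is a within-subject contrast across eleven subjects, i.e., a paired comparison with 10 degrees of freedom. A minimal sketch of such a paired t statistic, computed on hypothetical per-subject latencies (the paper reports only group values), might look as follows.

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for within-subject comparisons, e.g. left-
    versus right-hemisphere MMNm peak latencies; returns (t, df) with
    df = n - 1."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

With left-hemisphere latencies systematically shorter than right-hemisphere latencies, the statistic comes out negative, and its magnitude grows with the consistency of the per-subject differences.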
3. Discussion

Multilaminar recordings in awake monkeys demonstrated neural encoding of VOT at the level of A1 in terms of time-locked intracortical and surface responses to consonant release and vowel onset [22]. Accordingly, Kaukoranta et al. [6] noted two successive N1m-components in response to the Finnish /hei/-syllable, the first bound to stimulus onset and the second elicited by the transition from the fricative noise (/h/) to the following vocalic segment (/ei/). At first glance, the present study, in contrast, found long-lag VOT of unvoiced stops to yield a delayed P2m-component rather than a second N1m-response, whereas /da/-events elicited a single well-established N1m/P2m-complex. However, the fricative noise of the /hei/-stimuli lasted 100 ms, whereas the long-lag VOT of the /ta/-syllables amounted to only 60 ms. Conceivably, the shorter time interval between stimulus onset and vowel initiation gives rise to a partial overlap between two successive N1m/P2m-complexes. Visual inspection of the evoked magnetic fields, indeed, shows an intermediate plateau which might reflect truncation of the P2m-potential of a first and the N1m-wave of a second response.

The N1m- and P2m-components of the evoked magnetic fields did not exhibit any significant latency or amplitude laterality effects in response to either the standard /da/- or /ta/-stimuli. Since the N1m/P2m-complex most probably reflects automatic detection of abrupt transitions in the acoustic signal [13], both supratemporal planes seem to represent the inherent temporal structure of the considered consonant–vowel syllables synchronously. Animal data indicate some encoding of VOT already at the level of the thalamus [22]. Conceivably, the N1m/P2m-complex fails to show hemispheric side-differences because it just reflects the outcome of subcortical signal processing conveyed to A1.
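The truncation account proposed above (a second N1m riding on the first P2m at a 60-ms lag) can be illustrated with a toy simulation: summing two hypothetical N1m/P2m waveforms whose onsets are separated by the VOT. The waveform shape, latencies, and amplitudes below are purely illustrative assumptions, not a model fitted to the data.

```python
import numpy as np

def n1m_p2m(t, onset):
    """Toy N1m/P2m complex: a negative Gaussian lobe near onset + 100 ms
    followed by a smaller positive lobe near onset + 180 ms
    (shape and timing are illustrative only)."""
    g = lambda mu, s: np.exp(-0.5 * ((t - onset - mu) / s) ** 2)
    return -g(100, 25) + 0.7 * g(180, 35)

t = np.arange(0, 400)           # time axis in ms
single = n1m_p2m(t, 0)          # response evoked by stimulus onset alone
# Long-lag VOT (60 ms): the responses to burst and voicing onset overlap,
# so the second N1m partially cancels the first P2m, leaving a broadened,
# delayed positive deflection rather than two separate complexes.
overlapped = n1m_p2m(t, 0) + n1m_p2m(t, 60)
```

In this toy sum, the positive peak of the overlapped waveform shifts well beyond the single-response P2m latency, and the region between the two complexes flattens into the kind of intermediate plateau described above.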
Comparing standards and deviants of the same syllable category, /da/-stimuli yielded a significantly shorter MMNm-latency above the left as compared to the right hemisphere, whereas the /ta/-cognates did not show this effect. Functional imaging data point to a significantly left-lateralized hemodynamic response at the level of the primary motor cortex during connected speech [24]. In consideration of the findings of the present study, functional lateralization appears to represent a common organizational principle of both the central auditory and the speech motor system, presumably establishing an interface
between the bilateral sensory/muscular periphery and the unilateral perisylvian language zones. Conceivably, linguistic representations such as word forms may account for the observed differences between /da/- and /ta/-stimuli. In contrast to the latter event, the monosyllable 'da' represents a lexical item in German (English: 'here'). There is evidence that, e.g., visually presented letters are recognized faster within the context of a word than in isolation. In some analogy, the lexical item 'da' may support the temporal processing of linguistically relevant acoustic features within the left auditory cortex. Furthermore, previous studies rather consistently reported larger and slightly earlier MMN or MMNm, respectively, above the right hemisphere during the processing of frequency, intensity and/or duration of nonspeech auditory stimuli [14]. The finding of a selective superiority of the left hemisphere with respect to syllable processing further corroborates the suggestion of specific linguistic influences in this regard. A recent positron emission tomography (PET) study did not find lateralization effects at the level of the temporal lobe during the application of consonant–vowel sequences [25]. Obviously, the restricted temporal resolution of that technology does not allow the detection of subtle hemispheric differences in processing time.
4. Conclusion

(a) The present study indicates that both supratemporal planes synchronously encode the inherent acoustic structure of the two syllables considered, i.e., /da/ and /ta/. (b) The N1m/P2m-complex of the evoked magnetic fields seems to track the durational speech parameter VOT. (c) Early cognitive aspects of auditory processing, as reflected by MMNm, showed differential laterality effects. Conceivably, lexical effects such as the influence of word forms account for these observations.
Acknowledgements

This study was supported by a grant from the German Research Foundation (SFB 307). The authors thank D. Mooshammer and B. Ableiter for technical assistance, and J. Dichgans and N. Birbaumer for helpful comments.
References

[1] O. Aaltonen, P. Niemi, T. Nyrke, M. Tuhkanen, Event-related brain potentials and the perception of a phonetic continuum, Biol. Psychol. 24 (1987) 197–207.
[2] H. Ackermann, I. Hertrich, Voice onset time in ataxic dysarthria, Brain Lang. 56 (1997) 321–333.
[3] G. Dehaene-Lambertz, Electrophysiological correlates of categorical phoneme perception in adults, NeuroReport 8 (1997) 919–924.
[4] R. Hari, O.V. Lounasmaa, Recording and interpretation of cerebral magnetic fields, Science 244 (1989) 432–436.
[5] I. Hertrich, H. Ackermann, Articulatory control of phonological vowel length contrasts: kinematic analysis of labial gestures, J. Acoust. Soc. Am. 102 (1997) 523–536.
[6] E. Kaukoranta, R. Hari, O.V. Lounasmaa, Responses of the human auditory cortex to vowel onset after fricative consonants, Exp. Brain Res. 69 (1987) 19–23.
[7] N. Kraus, T. McGee, T.D. Carrell, A. Sharma, Neurophysiologic bases of speech discrimination, Ear Hear. 16 (1995) 19–37.
[8] S. Kuriki, M. Murase, Neuromagnetic study of the auditory responses in right and left hemispheres of the human brain evoked by pure tones and speech sounds, Exp. Brain Res. 77 (1989) 127–134.
[9] P. Ladefoged, I. Maddieson, The Sounds of the World's Languages, Blackwell, Oxford, 1997.
[10] A.C. Maiste, A.S. Wiens, M.J. Hunt, M. Scherg, T.W. Picton, Event-related potentials and the categorical perception of speech sounds, Ear Hear. 16 (1995) 68–90.
[11] T. McGee, N. Kraus, T. Nicol, Is it really a mismatch negativity? An assessment of methods for determining response validity in individual subjects, Electroencephalogr. Clin. Neurophysiol. 104 (1997) 359–368.
[12] R. Näätänen, A.W.K. Gaillard, S. Mäntysalo, Early selective attention effect on evoked potential reinterpreted, Acta Psychol. (Amst.) 42 (1978) 313–329.
[13] R. Näätänen, Attention and Brain Function, Erlbaum, Hillsdale, NJ, 1992.
[14] R. Näätänen, K. Alho, Generators of electrical and magnetic mismatch responses in humans, Brain Topogr. 7 (1995) 315–320.
[15] R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Ilvonen, M. Vainio, P. Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen, K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses, Nature 385 (1997) 432–434.
[16] N. Nakasato, S. Fujita, K. Seki, T. Kawamura, A. Matani, I. Tamura, S. Fujiwara, T. Yoshimoto, Functional localization of bilateral auditory cortices using an MRI-linked whole head magnetencephalography (MEG) system, Electroencephalogr. Clin. Neurophysiol. 94 (1995) 183–190.
[17] C. Phillips, A. Marantz, M. McGinnis, D. Pesetsky, K. Wexler, E. Yellin, D. Poeppel, T. Roberts, H. Rowley, Brain mechanisms of speech perception: a preliminary report, in: C. Schütze (Ed.), Papers on Language Processing and Acquisition, MIT Working Papers in Linguistics, Vol. 26, MIT Press, Cambridge, MA, 1995, pp. 153–191.
[18] M. Sams, R. Aulanko, O. Aaltonen, R. Näätänen, Event-related potentials to infrequent changes in synthesized phonetic stimuli, J. Cogn. Neurosci. 2 (1990) 344–357.
[19] S.A. Sandridge, A. Boothroyd, Using naturally produced speech to elicit the mismatch negativity, J. Am. Acad. Audiol. 7 (1996) 105–112.
[20] M. Scherg, J. Vajsar, T.W. Picton, A source analysis of the late human auditory evoked potentials, J. Cogn. Neurosci. 1 (1989) 336–355.
[21] K. Sekihara, D. Poeppel, A. Marantz, H. Koizumi, Y. Miyashita, Noise covariance incorporated MEG-MUSIC algorithm: a method for multiple-dipole estimation tolerant of the influence of background brain activity, IEEE Trans. Biomed. Eng. 44 (1997) 839–847.
[22] M. Steinschneider, C.E. Schroeder, J.C. Arezzo, H.G. Vaughan Jr., Speech-evoked activity in primary auditory cortex: effects of voice onset time, Electroencephalogr. Clin. Neurophysiol. 92 (1994) 30–43.
[23] P. Tallal, S. Miller, R.H. Fitch, Neurobiological basis of speech: a case for the preeminence of temporal processing, in: P. Tallal, A.M. Galaburda, R.R. Llinás, C. von Euler (Eds.), Temporal Information Processing in the Nervous System: Special Reference to Dyslexia and Dysphasia, New York Academy of Sciences, New York, NY, 1993, pp. 27–47.
[24] D. Wildgruber, H. Ackermann, U. Klose, B. Kardatzki, W. Grodd, Functional lateralization of speech production at primary motor cortex: a fMRI study, NeuroReport 4 (1996) 2791–2795.
[25] J.A. Fiez, P. Tallal, M.E. Raichle, F.M. Miezin, W.F. Katz, S.E. Petersen, PET studies of auditory and phonological processing: effects of stimulus characteristics and task demands, J. Cogn. Neurosci. 7 (1995) 357–375.