Review
An interactive model of auditory-motor speech perception
Einat Liebenthal a,⁎, Riikka Möttönen b,c

a Department of Psychiatry, Brigham & Women’s Hospital, Harvard Medical School, Boston, USA
b Department of Experimental Psychology, University of Oxford, Oxford, UK
c School of Psychology, University of Nottingham, Nottingham, UK

⁎ Corresponding author at: Brigham & Women’s Hospital, 221 Longwood Avenue, Boston, MA 02115, USA. E-mail address: [email protected] (E. Liebenthal).

https://doi.org/10.1016/j.bandl.2017.12.004
Keywords: Sensorimotor; Speech; EEG; MEG; ECoG; Intracranial; TMS

Abstract
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting temporal auditory and frontal-parietal articulatory areas. However, the activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions, and propose directions for future research.
1. Dorsal auditory stream for speech processing

Functional neuroimaging research has led to significant advances in our understanding of the functional organization of neural networks underlying speech perception and production in humans. The classic Wernicke-Lichtheim lesion-based neuroanatomical model of speech processing postulated a route for speech repetition from a posterior auditory center to an anterior articulatory center (Lichtheim, 1885; Wernicke, 1974). Modern neurophysiological research in primates led to the formulation of dual models of auditory perception (Rauschecker & Tian, 2000), postulating a dorsal stream important for sound localization, running from posterior superior temporal auditory to inferior parietal somatosensory and frontal motor areas, and a ventral stream important for auditory object recognition. In the context of speech processing, the dorsal auditory route has been suggested to provide essential auditory feedback for articulation (Houde & Jordan, 2002; Tourville, Reilly, & Guenther, 2008). More recent research on the human dorsal stream, especially using transcranial magnetic stimulation (TMS) to disrupt neural processing temporarily, has also uncovered an important modulatory role of articulatory motor areas in speech perception (D’Ausilio et al., 2009; Meister et al., 2007; Möttönen & Watkins, 2009; Smalle, Rogers, & Möttönen, 2015). Such motor influences on speech perception have generally been integrated in neuroanatomical dual-stream models of speech perception as either direct or
indirect recurrent feedback from fronto-parietal to superior temporal cortex (Hickok, Houde, & Rong, 2011; Rauschecker & Scott, 2009).

Despite the mounting evidence indicating a role for a dorsal auditory stream in speech processing, the issue of somatomotor influences on speech perception and their neural mediation remains at the center of significant controversy. Some authors have questioned whether somatomotor systems participate in speech perception under ecologically valid conditions, in light of research suggesting that motor influences are observed specifically in conditions involving perception of sublexical, noisy, distorted or unfamiliar speech signals (Hickok & Poeppel, 2007). Under these circumstances, decoding speech involves phonological processes that rely on auditory-memory and somatomotor systems. On the other hand, other authors have suggested that somatomotor systems are universally involved in speech perception (Pulvermuller & Fadiga, 2010; Skipper, Devlin, & Lametti, 2017).

We contend that considering the neural dynamics of the auditory dorsal stream, i.e., the temporal course of activation of auditory and motor areas under various tasks, is critical for resolving the neural mechanisms underlying sensorimotor speech perception. However, temporal information, although essential to constructing computational models (for example, of speech production; Guenther, Ghosh, & Tourville, 2006; Houde & Nagarajan, 2011), is seldom incorporated in neuroanatomical models of speech perception. In this review, we first present three possible types of neurodynamic
models of auditory-motor speech processing in the dorsal stream: a parallel, a hierarchical, and an interactive model. We review studies that have used magnetoencephalography (MEG), electroencephalography (EEG) and electrocorticography (ECoG) to investigate the time course of speech processing in auditory and motor areas and the interactions between these areas. Then, we review TMS studies of how speech is represented in the motor areas and how motor areas contribute to performance in speech perception tasks. We also review studies that have used TMS in combination with EEG/MEG to investigate how disruptions in the motor areas affect the time course of speech processing in the auditory areas. Finally, we evaluate the neurodynamic models in light of the reviewed studies and propose an updated version of the interactive model that is consistent with current neurophysiological evidence.
Fig. 1. Three schematic models of auditory-motor speech processing. Parallel, hierarchical, and interactive models are depicted that vary in the timing of activity and functional connectivity of the auditory and motor systems. The approximate progression of speech processing is indicated on the time line at the bottom. In the parallel model, the auditory cortex is responsible for acoustic-phonetic feature extraction and mapping to phonemic categories, and the motor cortex is responsible for response selection. Auditory-motor functional connections are active during the late response selection phase. In the hierarchical model, input from auditory areas flows forward into motor areas where it is transformed from an auditory to an articulatory form, and is subsequently fed back into auditory areas. Motor influences on speech perception occur during the phonemic categorization and response selection phases. In the interactive model, auditory and motor processing are intimately related and motor influences can occur at all phases of speech perception. Auditory and motor bi-directional interactions can occur early and with several iterations. Importantly, auditory-motor interactions in speech perception are hypothesized to vary with attention and stimulus predictability, such that context-dependent or mixed models of auditory-motor relations are possible. See text for further details.
2. Neurodynamic models of auditory-motor speech processing

Fig. 1 presents three schematized neurodynamic models of auditory-motor speech processing: a parallel, a hierarchical and an interactive model. The neurodynamic models make different predictions regarding the time course and influence of the motor system on early (acoustic-phonetic), intermediate (phonemic) and late (response selection) phases of speech perception. The neurodynamic models also predict different levels of involvement of dorsal and ventral auditory streams in mediating motor influences on speech perception, and they are linked to several prominent neuroanatomical models.

In a parallel model of auditory-motor speech processing, the auditory and motor systems are active independently and have different functions. In this type of model, the auditory system is solely responsible for speech perception and the motor system is responsible for response selection. Therefore, it is predicted that there are no significant influences of motor areas on perceptual processing. Auditory-motor functional connections are active during the late response selection phase but not during prior phases of acoustic-phonetic feature extraction and phonemic category mapping. This model is consistent with the notion that the motor system is not involved in speech perception per se; rather, activation of motor areas during speech perception tasks reflects processes strictly related to response selection and bias (Hickok, 2010; Venezia et al., 2012).
In a hierarchical model of auditory-motor speech processing, input from auditory areas flows forward into motor areas where it is transformed from an auditory to an articulatory form, and is subsequently fed back into auditory areas. In this type of model, the motor feedback is suggested to serve as an error prediction signal that influences the selection of auditory categories for decision making and action. The neuroanatomical model of speech processing proposed by Rauschecker and Scott (2009), consisting of forward-mapping and inverse-mapping loops between auditory and motor areas, can be seen as consistent with a hierarchical architecture. In the Rauschecker and Scott (2009) model, auditory information is mapped onto articulatory representations via an antero-ventral stream connecting auditory areas in the middle superior temporal cortex (STC) with motor areas in the posterior inferior frontal cortex (IFC), and an efference copy is transmitted to somatosensory areas in the supramarginal gyrus (SMG) of the inferior parietal cortex (IPC). The IPC also receives information from auditory areas in the posterior STC via a dorsal stream that is primarily responsible for articulation, such that the IPC serves as a pivot where auditory and articulatory information can be compared and can influence the selection of phonemic categories. This version of the hierarchical model postulates a dominant role in phonemic categorization for motor areas targeted by the ventral auditory stream, with feedback routed to auditory areas via the dorsal auditory stream. In a hierarchical model of auditory-motor speech processing, the onset of activation of motor areas is predicted to follow the onset of activation in auditory areas, and to coincide with the phases of phonemic category mapping and response selection.

In an interactive model of auditory-motor speech processing, auditory and motor areas are activated in concert and there can be several phases of activation and interaction between the areas. This type of model emphasizes an intimate relationship between auditory, somatosensory, and motor processing, whereby auditory areas support speech production and somatomotor areas modulate speech (and auditory) perception. In an interactive model, the activation of articulatory somatosensory and motor areas is predicted to occur proximally in time with that of auditory areas during speech perception tasks. The neuroanatomical model of speech processing proposed by Hickok and colleagues (Hickok & Poeppel, 2007; Hickok et al., 2011) can be seen as
consistent with an interactive framework in that it postulates direct motor influences on perception via the dorsal auditory stream. The boundary area in the left Sylvian fissure between the temporal and parietal lobes in the dorsal auditory stream (termed Spt) is suggested to play a bi-directional role in auditory-motor transformations, such that auditory-motor interactions can be mediated by short feed-forward and feed-back loops within the dorsal auditory stream. Nevertheless, a point of distinction is that Hickok’s model postulates that somatomotor influences on speech perception are limited to phonological tasks (e.g., identification/discrimination of sublexical or unfamiliar segments of speech), whereas the interactive auditory-motor model of speech perception proposed here is open to the possibility of broader somatomotor influences on speech perception. Importantly, as elaborated upon in the final section of the paper, we hypothesize that auditory-motor interactions in speech perception vary with attention and linguistic context, and also between individuals, such that context-dependent or mixed models of auditory-motor relationships are possible.
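To make the contrasting timing predictions of the three models concrete, the toy simulation below (our illustrative sketch, not a model taken from the reviewed literature) implements each architecture as a pair of leaky-integrator units; all delays, gains, and gating times are arbitrary assumptions chosen only to separate the three cases.

```python
import numpy as np

DT = 1.0     # time step (ms)
T = 400      # simulated duration (ms)
LEAK = 0.05  # leak rate of each unit per ms

def simulate(aud_to_mot_delay, mot_to_aud_gain, mot_gate):
    """Leaky-integrator auditory (a) and motor (m) units.

    aud_to_mot_delay: conduction delay (ms) from the auditory to the motor unit.
    mot_to_aud_gain: strength of motor-to-auditory feedback (0 = none).
    mot_gate: time (ms) before which the motor unit receives no input
              (models engagement restricted to late response selection).
    """
    n = int(T / DT)
    a, m, stim = np.zeros(n), np.zeros(n), np.zeros(n)
    stim[50:150] = 1.0  # speech input arrives at 50 ms
    d = int(aud_to_mot_delay / DT)
    for t in range(1, n):
        a[t] = a[t-1] + DT * (-LEAK * a[t-1] + stim[t] + mot_to_aud_gain * m[t-1])
        drive = a[t-d] if (t >= d and t * DT >= mot_gate) else 0.0
        m[t] = m[t-1] + DT * (-LEAK * m[t-1] + 0.1 * drive)
    return a, m

def onset_ms(x, frac=0.1):
    """Latency at which activity first exceeds frac of its own peak."""
    return float(np.argmax(x > frac * x.max())) * DT

for name, kwargs in [
    ("parallel",     dict(aud_to_mot_delay=10, mot_to_aud_gain=0.00, mot_gate=250)),
    ("hierarchical", dict(aud_to_mot_delay=50, mot_to_aud_gain=0.02, mot_gate=150)),
    ("interactive",  dict(aud_to_mot_delay=10, mot_to_aud_gain=0.02, mot_gate=0)),
]:
    a, m = simulate(**kwargs)
    print(f"{name:13s} motor onset ~ {onset_ms(m):5.1f} ms")
```

Under these assumptions, the motor unit turns on earliest in the interactive variant and progressively later in the hierarchical and parallel variants, and only the interactive and hierarchical variants feed motor activity back onto the auditory unit, mirroring the qualitative predictions in Fig. 1.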
3. EEG, MEG and ECoG studies on neural dynamics of speech processing in the auditory cortex

Speech is a complex, highly variable acoustic signal. For example, context, speech rate and the shape of the speaker’s vocal tract influence the acoustic features of speech sounds. During speech perception, the acoustic-phonetic features of acoustically variable speech sounds are extracted and mapped onto phonemic categories. Consequently, speech sounds (especially consonants) are perceived categorically (Liberman et al., 1957). Categorical speech perception can be investigated using an acoustic continuum between two phonemes. Fig. 2A presents typical results from a single subject in a 2-alternative forced-choice identification task along an 8-step continuum between ‘ba’ and ‘da’, created by gradually changing the onset frequencies of the first and second formants. The results demonstrate a sharp category boundary between the syllables, indicating that perception does not change gradually along the acoustic continuum. The discrimination performance (Fig. 2B) is predicted from the identification performance: Sounds that belong to different phonemic categories are discriminated more accurately than sounds that belong to the same phonemic category, even when the acoustic distance between the sound pairs within and across category is equal. The ability to extract acoustic-phonetic features and to categorize speech sounds into phonemic categories is essential for speech perception and comprehension, because phonemes are the smallest linguistically meaningful units. For example, the words ‘bay’ and ‘day’ have different meanings in English.
Fig. 2. Categorical perception of speech sounds. (A) Typical performance of a single participant in an identification task. Eight equidistant tokens from a ‘ba’-‘da’ acoustic continuum were presented and the participant pressed one of two buttons to indicate whether she heard ‘ba’ or ‘da’. The data demonstrate a steep change in the perceived category between tokens 3 and 5. (B) Typical performance of a single participant in a discrimination task. Pairs of stimuli were presented and the participant responded whether the sounds were ‘same’ or ‘different’. The proportion of ‘different’ responses was highest to the across-category pair with prototypical tokens (3&5) and lower to across-category pairs with a non-prototypical token (2&4 and 4&6). This participant gave no ‘different’ responses to within-category pairs (1&3 and 6&8). (C) TMS-induced disruption of the lip motor cortex reduced the slope of the category boundary in a ba-da identification task (N = 20, p < .05), suggesting that the perception of phonemic categories was affected. (D) TMS-induced disruption of the lip motor cortex reduced the ability to discriminate between ba and da categories (N = 18, p < .01). See Smalle et al. (2015) for further details.
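Identification curves like those in Fig. 2A and C are typically quantified by fitting a sigmoid whose midpoint estimates the category boundary and whose slope indexes its sharpness (the quantity reduced by TMS in Fig. 2C). The following minimal sketch uses made-up response proportions rather than the data shown, and the exact fitting procedure used in the cited studies may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of 'ba' responses along the continuum (step 1 = clear 'ba')."""
    return 1.0 / (1.0 + np.exp(slope * (x - boundary)))

steps = np.arange(1, 9)  # 8-step 'ba'-'da' continuum
p_ba = np.array([0.98, 0.97, 0.92, 0.55, 0.10, 0.04, 0.02, 0.01])  # hypothetical

(boundary, slope), _ = curve_fit(logistic, steps, p_ba, p0=[4.0, 1.0])
print(f"category boundary at step {boundary:.2f}, slope {slope:.2f}")
# A shallower post-TMS identification curve would yield a smaller fitted slope.
```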
It is clear that speech sounds are processed partly by the same auditory mechanisms as other sounds and partly by mechanisms that are specialized for speech. Functional brain imaging studies have consistently shown that the middle STC in the vicinity of primary auditory cortex plays a special role in acoustic-phonetic and phonemic processing of speech (DeWitt & Rauschecker, 2012; Binder et al., 2000; Liebenthal et al., 2005, 2014; Möttönen et al., 2006; Humphries et al., 2014). MEG and EEG studies have provided evidence that the time window from 50 to 230 ms is critical for (pre-lexical) speech processing in the auditory areas in the middle STC. The neural responses to speech sounds differ from those to non-speech sounds as early as 50–100 ms after speech sound onset (Kuriki & Murase, 1989; Parviainen, Helenius, & Salmelin, 2005; Gootjes et al., 1999). The neural responses in this early time window were found to represent the acoustic-phonetic features of speech sounds (Obleser, Lahiri, & Eulitz, 2003, 2004; Obleser, Scott, & Eulitz, 2006). A recent ECoG study found evidence that phonetic features of speech sounds are encoded in the STC at 150 ms after
the stimulus onset, but found no evidence for neural representations of single phonemes (Mesgarani et al., 2014). Numerous studies have investigated phonemic processing by comparing implicit mismatch responses to phonemic and non-phonemic changes in speech sound sequences, and found that responses to changes in phonemic category typically peak around 150–200 ms after sound onset (Vihla, Lounasmaa, & Salmelin, 2000; Sharma & Dorman, 1999; Naatanen et al., 1997; Phillips et al., 2000). However, differences between phonemic and non-phonemic MMN responses could reflect differences not only in phonemic, but also in acoustic properties of the stimuli (Maiste et al., 1995; Sharma et al., 1993), and in familiarity and ability to categorize the stimuli (Aaltonen et al., 1997; Naatanen et al., 1997; Winkler et al., 1999). In an EEG study comparing responses to acoustically-matched phonemic and non-phonemic continua during an explicit identification task, differences were observed at 180–230 ms in temporal cortex when subjects were unfamiliar with the stimuli, but there were no differences in the responses after subjects were trained to classify the stimuli into two discrete categories (Liebenthal et al., 2010). These results support a role for neural processing in the 200 ms time range in phonemic (and more generally auditory) categorization.
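The mismatch responses discussed above are computed as a difference wave: the average response to rare (deviant) sounds minus the average response to frequent (standard) sounds. A minimal sketch on simulated epochs (all numbers are arbitrary; real data would come from an oddball EEG/MEG recording):

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 500                                # sampling rate (Hz)
n_times = int(0.4 * sfreq)                 # 400 ms epochs
times = np.arange(n_times) / sfreq * 1000  # ms from sound onset

# Simulated single-trial epochs (trials x samples) of background activity.
standards = rng.normal(0.0, 1.0, (300, n_times))
deviants = rng.normal(0.0, 1.0, (60, n_times))
# Inject a deviance-related negativity peaking near 180 ms, for illustration.
deviants -= 0.8 * np.exp(-((times - 180.0) / 40.0) ** 2)

mmn = deviants.mean(axis=0) - standards.mean(axis=0)  # the difference wave
print(f"MMN peak latency ~ {times[np.argmin(mmn)]:.0f} ms")
```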
In sum, according to the electrophysiological studies, the extraction of acoustic-phonetic features of speech sounds starts at 50–100 ms, whereas phonemic processing starts later, approximately 150 ms from the sound onset (see also Salmelin, 2007).

4. EEG, MEG and ECoG studies on neural dynamics of auditory-motor speech processing

Liebenthal et al. (2013) combined fMRI with EEG to study phonological processing with high spatiotemporal resolution. Ambiguous duplex syllables were presented dichotically at varying interaural synchronies to participants, who performed syllable and chirp identification tasks. This paradigm allowed separation of phonological processing from auditory processing. The authors found that the dorsal auditory stream was engaged early during phonemic processing. Early activations were found in the posterior STC bilaterally (80–90 ms) and in the IPC and ventral central sulcus (95–230 ms) in the left hemisphere. These areas were re-activated in the left hemisphere after 300 ms, possibly reflecting maintenance of categorical representations for response selection. The early activation of somatomotor areas concurrently with auditory areas, and the earlier left lateralization of the somatomotor areas, was suggested to be consistent with the existence of direct functional feedback loops in the auditory dorsal stream mediating somatomotor influences on phonemic perception.

Alho et al. (2012) used MEG to track neural activity during passive listening to clear and noisy syllables (‘pa’ and ‘ta’) and found that they activate the left premotor cortex (PMC) as early as 100 ms after syllable onset. Interestingly, the strength of this early activity in the left PMC was positively correlated with participants’ ability to categorize syllables in noise, in agreement with the idea that the left PMC contributes to phonemic categorization. A similar positive correlation was found in the left posterior IFC and STC later, 200 ms after sound onset. The authors also found that the neuronal oscillations in the left STC and the left PMC were synchronized 60–80 ms after syllable onset and that this neural synchrony was enhanced during listening to noisy syllables relative to clear syllables (Alho et al., 2014). Furthermore, the strength of the STC-to-PMC synchrony at 90–110 ms correlated with participants’ ability to categorize noisy syllables. Similar positive correlations with behavior were also found for directional synchrony from PMC to IPC (90–120 ms) and from PMC to the primary motor cortex (120–140 ms).
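Synchrony between two source time series, such as the STC-PMC coupling reported by Alho et al. (2014), is often quantified with a phase-locking value (PLV) across trials; whether Alho et al. used this exact measure is not implied here. A minimal sketch on simulated trials:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value across trials at each time point.

    x, y: (n_trials, n_times) arrays, assumed already band-pass filtered.
    """
    dphi = np.angle(hilbert(x, axis=1)) - np.angle(hilbert(y, axis=1))
    return np.abs(np.exp(1j * dphi).mean(axis=0))

rng = np.random.default_rng(1)
stc = rng.standard_normal((100, 300))                    # toy STC trials
pmc = 0.6 * stc + 0.4 * rng.standard_normal((100, 300))  # partially coupled PMC
print(f"mean PLV for coupled signals: {plv(stc, pmc).mean():.2f}")
```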
In a recent MEG study, Alho et al. (2016) measured neuronal adaptation to investigate phoneme category representations. They found such representations in the left posterior IFC, activated 115–140 ms after syllable onset. The left posterior IFC was functionally connected with posterior STC areas in this time range, and the degree of phoneme category selectivity in posterior IFC correlated positively with the behavioral ability to categorize speech stimuli, implicating the auditory dorsal stream in phonemic categorization. Interestingly, the categorical representations in posterior IFC were found when participants discriminated the auditory syllables and were absent when participants watched a silent film and ignored the auditory syllables. In the ignore condition, the neuronal adaptation in the left middle and posterior STC was sensitive to acoustic-phonetic features of the speech sounds at a longer latency (> 300 ms), consistent with the notion that attention modulates auditory-motor interactions in speech perception.

The findings of ECoG studies are in line with the early motor activations found in MEG/EEG studies. Cheung et al. (2016) found that the average onset of activation in the somatomotor cortex was approximately 100 ms after the onset of auditory syllables, shortly after the onset of activation in the STC. The activity peaked simultaneously in the superior part of the ventral somatomotor cortex and in the STC, whereas the peak of activity in the inferior part of the ventral somatomotor cortex was slightly later. The majority of the electrodes in the somatomotor cortex showed simultaneous or lagging activation compared to the STC, but some somatomotor electrodes showed leading activation. Intriguingly, the responses to auditory speech in the sensorimotor regions were organized along their acoustic features, as in the STC. These findings are in disagreement with a large number of studies that have demonstrated mapping of speech sounds onto the motor representations of articulators (e.g., lips and tongue) based on their articulatory features (Möttönen & Watkins, 2009; Fadiga, 2002; Pulvermuller et al., 2006; D’Ausilio et al., 2009; see Section 5 below). Future research is needed on how speech is represented in the motor system. Cogan et al. (2014) measured ECoG activity bilaterally during speech articulation tasks. They found activity related to articulation bilaterally in electrodes over SMG, PMC, IFC and somatomotor areas (i.e., along the dorsal stream). Ninety-two percent of the electrodes activated during articulation were also active during passive listening to speech, with the same latency as the auditory sites, providing evidence of early auditory-motor transformations.
5. The role of motor areas in speech perception: evidence from TMS studies

What is the role of the motor areas in speech perception? According to the central claim of the Motor Theory of Speech Perception, the listener simulates the speaker’s intended articulatory movements during speech perception (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967; Liberman & Mattingly, 1985). Although many other claims of this theory have proven to be too radical, the idea of inverse modeling has survived and the motor system is presently assumed to represent speech signals as the movements of the articulators. In order to test this idea experimentally, it is crucial to determine (1) whether the articulatory motor system is activated during speech perception, (2) whether these activations are articulator-specific, and (3) whether motor areas contribute to how speech is perceived. TMS provides efficient tools to address all of these questions (for reviews, see Murakami, Ugawa, & Ziemann, 2013; Möttönen & Watkins, 2012; Möttönen et al., 2014a).

The pioneering TMS studies measured amplitudes of motor-evoked potentials (MEPs) from lip and tongue muscles and showed that they are enhanced during listening to speech (Fadiga, 2002; Watkins, Strafella, & Paus, 2003). These studies provided evidence that the excitability of the articulatory motor cortex is enhanced during passive listening to clear continuous speech without background noise, showing that the motor cortex is involved in speech processing in ecologically valid conditions (see also Murakami, Restle, & Ziemann, 2011; Murakami et al., 2015). Importantly, some MEP studies have also demonstrated articulator-specificity, i.e., that listening to lip- and tongue-articulated speech sounds enhances motor excitability of lip and tongue representations, respectively (Fadiga, 2002; Nuttall et al., 2016).
In these MEP studies, TMS pulses were delivered 100 ms after the syllable/phoneme onset, providing evidence that motor excitability is enhanced during early processing of speech sounds (see also Roy et al., 2008), in agreement with the MEG, EEG and ECoG studies reviewed above.
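MEP amplitude in such studies is commonly scored as the peak-to-peak deflection of the electromyogram in a short window after the TMS pulse. A minimal sketch (the window bounds and the toy trace are illustrative assumptions, not parameters from the cited studies):

```python
import numpy as np

def mep_amplitude(emg, sfreq, pulse_idx, win_ms=(10, 40)):
    """Peak-to-peak MEP amplitude in a post-pulse search window (ms)."""
    start = pulse_idx + int(win_ms[0] * sfreq / 1000)
    stop = pulse_idx + int(win_ms[1] * sfreq / 1000)
    seg = emg[start:stop]
    return seg.max() - seg.min()

# Toy lip-EMG trace: baseline noise plus a biphasic deflection ~20 ms post-pulse.
sfreq, pulse_idx, n = 5000, 1000, 3000
rng = np.random.default_rng(2)
emg = 0.01 * rng.standard_normal(n)
ms = (np.arange(n) - pulse_idx) / sfreq * 1000  # time relative to pulse (ms)
emg += 0.5 * np.sin(2 * np.pi * (ms - 15) / 10) * ((ms > 15) & (ms < 25))
print(f"MEP amplitude ~ {mep_amplitude(emg, sfreq, pulse_idx):.2f} (arbitrary units)")
```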
The MEP studies do not, however, address the fundamental question of whether the motor activations are epiphenomenal or whether the motor cortex contributes to speech perception. Several studies have used repetitive TMS to disrupt regions in the primary motor or premotor cortex that are involved in speech production and have shown that these disruptions temporarily impair participants’ ability to categorize and discriminate speech sounds (Möttönen & Watkins, 2009; Sato, Tremblay, & Gracco, 2009; Meister et al., 2007; Krieger-Redwood et al., 2013; Rogers et al., 2014; Smalle et al., 2015). For example, TMS-induced disruption of the lip representation in the left primary motor cortex made the category boundary between ‘ba’ and ‘da’ shallower and impaired discrimination of syllables from different phonological categories (Möttönen & Watkins, 2009). This effect was also articulator-specific, since TMS-induced disruption of the lip representation had no effect on categorical perception of speech sounds that do not involve the lips in their articulation. In addition to off-line repetitive TMS that disrupts functioning of the motor cortex, many studies have used dual-pulse on-line TMS to prime the articulator representations in the motor cortex. These studies have consistently shown that priming of the lip and tongue representations facilitates identification of lip- and tongue-articulated syllables, respectively (D’Ausilio et al., 2009; Bartoli et al., 2015). These findings have recently been replicated by measuring reaction times in a word-to-picture-matching task (instead of syllable identification), which suggests that the articulatory motor system contributes to comprehension of speech (Schomers et al., 2015).

The early TMS studies demonstrating changes in task performance (e.g., Möttönen & Watkins, 2009; Meister et al., 2007; D’Ausilio et al., 2009) did not, however, provide conclusive evidence of whether TMS-induced motor modulation affected perception of speech sounds or post-perceptual processes (e.g., response bias), as pointed out by Hickok (2010) and Venezia et al. (2012). This is a critical question, which dissociates the interactive and hierarchical models from the parallel model (see Section 2). In order to address this question, Smalle et al. (2015) used a modified experimental paradigm that allowed calculation of sensitivity (i.e., d′) and response bias. The results showed that TMS-induced disruption of the lip motor cortex reduced the slope of the ‘ba’-‘da’ category boundary in the identification task, replicating the earlier finding by Möttönen and Watkins (2009). Moreover, the TMS-induced disruption of the lip motor cortex reduced d′ in the discrimination task (Fig. 2C and D), but had no effect on the response bias. These findings support the idea that the articulatory motor cortex contributes to perception of speech sounds. Rogers et al. (2014) used an AXB discrimination task that is not sensitive to response bias and showed that performance in this task dropped to chance level 20–35 min after continuous theta-burst TMS over the lip motor cortex. Importantly, this impairment was articulator-specific, i.e., discrimination performance was impaired only for across-category pairs that included lip-articulated syllables, as in the study by Möttönen and Watkins (2009). Furthermore, discrimination of within-category pairs was unaffected in all studies (Möttönen & Watkins, 2009; Rogers et al., 2014; Smalle et al., 2015). Together, these TMS studies provide evidence that the articulatory motor cortex contributes to the ability to perceive subtle differences between speech sounds that belong to different phonemic categories. These findings are in agreement with the interactive and hierarchical models, but in disagreement with the parallel model.
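Sensitivity and bias in such designs are standard signal-detection quantities: with ‘different’ responses to across-category pairs counted as hits and ‘different’ responses to within-category pairs as false alarms, d′ = z(hit rate) − z(false-alarm rate) and the criterion c = −[z(hit rate) + z(false-alarm rate)]/2. The sketch below uses the basic yes/no formulas with a log-linear correction and hypothetical counts; Smalle et al. (2015) may have used a different correction or a same-different model, so this is only an illustration:

```python
from scipy.stats import norm

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c), with a log-linear correction
    that avoids infinite z-scores when a rate is exactly 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f), -0.5 * (norm.ppf(h) + norm.ppf(f))

# Hypothetical counts: a TMS-induced drop in d' with an unchanged criterion.
print(dprime_and_bias(45, 5, 8, 42))   # baseline
print(dprime_and_bias(32, 18, 9, 41))  # after motor-cortex disruption
```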
The combination of TMS with EEG and MEG provides a tool to investigate causal interactions between motor and auditory systems with high temporal accuracy. Möttönen, Dutton, and Watkins (2013) and Möttönen, van de Ven, and Watkins (2014b) have used this technique to measure how TMS-induced disruptions in the articulatory motor cortex modulate the time course of speech processing in the auditory cortex. The results from a combined TMS and EEG study showed that automatic auditory discrimination of speech sounds (i.e., mismatch negativity responses) at a latency of 170–210 ms was impaired when the articulatory motor cortex was disrupted (Möttönen et al., 2013). This indicates that auditory and motor cortices interact during speech processing even when the speech sounds are outside the focus of attention. Interestingly, the modulations were not specific to the articulatory features of the speech sounds. A later combined TMS and MEG study showed that focusing attention on the speech sounds modulated auditory-motor interactions (Möttönen et al., 2013, 2014b). TMS-induced disruption of the left lip motor cortex specifically modulated the processing of lip-articulated ‘ba’ syllables in the left STC at 60–100 ms after syllable onset when the sounds were attended. The later modulations in the STC (> 170 ms) were bilateral, not modulated by attention, and not specific to the articulatory features of the speech sounds (Möttönen et al., 2014b). These studies demonstrate that the articulatory motor cortex modulates early stages of speech processing in the auditory system, providing support for the interactive model. The studies also suggest that although auditory-motor interactions are automatic, attention can modulate their timing and specificity.

In summary, TMS studies have shown that the motor cortex is activated in an articulator-specific manner and that the contributions of the motor cortex to speech perception tasks are articulator-specific (Fadiga, 2002; D’Ausilio et al., 2009; Möttönen & Watkins, 2009). These findings are in line with the idea that speech signals are transformed into articulatory movements during speech perception. They further imply that the motor system processes the articulatory features of speech sounds, complementing the processing of acoustic features in the auditory system. Although many neuroimaging studies also support this idea (e.g., Pulvermuller et al., 2006), this may not necessarily be the only way the motor system processes speech signals. Cheung et al. (2016) have provided evidence that the early activations of the motor system during passive listening to speech are organized along acoustic features of speech sounds, similar to the organization in the auditory system. This suggests that the role of the motor cortex may not be limited to processing based on articulatory features; rather, it may also contribute to processing based on acoustic features together with the auditory system. Furthermore, there is evidence that auditory-motor interactions are not strictly articulator-specific when speech signals are outside the focus of attention (Möttönen et al., 2013, 2014b). Further studies are needed to investigate the representation of speech signals in the motor system and the factors that modulate it. We propose that the motor system can play multiple roles in speech processing, and that attention importantly modulates motor contributions and auditory-motor interactions in speech perception.
6. An updated interactive model of auditory-motor speech processing

In the strong form of a parallel model of auditory-somatomotor speech processing, it is assumed that the somatomotor cortex is activated well after the auditory cortex and that it contributes strictly to response selection and maintenance. This model fails to account for activations of somatomotor areas during the early and intermediate (< 200 ms) phases of speech processing (e.g., Fadiga, 2002; Alho et al., 2012; Cheung et al., 2016; Cogan et al., 2014; Liebenthal et al., 2014) and for the contributions of the articulatory motor areas to speech perception and to speech processing in the auditory areas (e.g., Möttönen et al., 2013, 2014b; Smalle et al., 2015).

In hierarchical and interactive models of auditory-motor speech processing, it is assumed that neural representations of both graded and categorical properties of speech sounds exist within the same general auditory cortex region, but are activated at different time phases, consistent with the existence of feedforward and feedback processes in this region. In the earlier time phase (< 200 ms), feedback from discrete somatomotor representations of speech may serve to narrow the
range of possible sound inputs and activate categorical phonemic representations in the auditory cortex. In the later time phase (> 200 ms), feedback from somatomotor cortex may reflect processes related to response selection and maintenance. A key difference between the two models is that somatomotor influences are expected earlier (< 100 ms) and with multiple iterations in the interactive model. That is, the interactive model allows for somatomotor influences not only on phonemic perception, but also on acoustic-phonetic processing. The electrophysiological and TMS results reviewed here, showing activation of somatomotor areas, and interactions between somatomotor and auditory areas, well within 100 ms of speech sound onset in the time period associated primarily with auditory processes such as acoustic-phonetic feature extraction, are most consistent with an interactive auditory-somatomotor model of speech processing. Based on the current evidence, we propose a new interactive model in which the auditory and motor systems interact with each other at both acoustic-phonetic and phonemic stages of speech processing. In this model, the motor cortex modulates speech processing in the auditory system, which plays the primary role in speech perception. Attention and other factors can affect the timing and strength of auditory-motor interactions (see Section 7).

In an interactive model of auditory-motor processing, somatomotor influences on speech perception must be mediated via relatively direct and bidirectional connections between somatosensory, motor, and auditory areas. In the monkey (Petrides & Pandya, 1984, 2009; Seltzer & Pandya, 1978) and more recently in the human (Frey et al., 2008; Makris et al., 2009), the anterior part of the IPC encompassing somatosensory cortex was demonstrated to have strong reciprocal connections with ventral premotor (and ventrolateral prefrontal) areas controlling orofacial musculature via the superior longitudinal fasciculus, and with the auditory posterior STC via the middle longitudinal fasciculus. These anatomical connections could form the basis for a functional phonological loop mediating sensorimotor influences on speech perception.

7. Factors affecting auditory-motor interactions and directions for future studies

The interactions between auditory and motor systems are likely to be sensitive to attention, context and other factors. More research is needed to advance our knowledge of how these factors affect the timing and strength of auditory-motor interactions.

Based on the current literature, the early (< 150 ms) auditory-motor interactions can be modulated by the direction of attention. In all studies that found early motor activations, attention was directed to the speech input (e.g., Alho et al., 2012, 2014; Cheung et al., 2016; Liebenthal et al., 2014). In studies in which attention was directed away from the speech sounds, the auditory-motor interactions started later (> 150 ms) (Möttönen et al., 2013, 2014b). These results indicate that auditory-motor interactions are not dependent on attention, but attention can modulate the timing (and possibly the specificity and strength) of auditory-motor interactions (Möttönen et al., 2014b). We have proposed above (see Section 5) that attention may also influence how speech is represented in the motor system.
According to a popular view, the brain generates predictions about what is going to happen in the sensory environment (Friston, 2005). This type of predictive coding is also likely to be important for speech processing, since speech signals are often highly predictable. The motor system plays an essential role in predictive coding during speech production (Guenther et al., 2006; Hickok et al., 2011; Rauschecker & Scott, 2009). It is plausible that during processing of predictable speech signals the articulatory motor cortex is involved in generating predictions, which modulate speech processing in the auditory cortex (Daikoku & Möttönen, submitted for publication; Davis & Johnsrude, 2007). In addition to predictive coding, the articulatory motor system may also be involved in generating temporal predictions, i.e., predictions of when the speech signal is going to contain salient information. It has been proposed that this type of predictive timing relies on the coupling of low-frequency oscillations to speech signals (for a review, see Giraud & Poeppel, 2012). Recent evidence suggests that the articulatory motor cortex is involved in controlling the coupling of low-frequency oscillations with speech signals in superior temporal cortex (Park et al., 2015). More research is needed on the role of auditory-motor interactions in predictive coding and timing during speech processing. It is plausible that during processing of predictable speech signals the influence of the articulatory motor system on auditory cortex may even precede the incoming speech signal.
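Coupling between slow cortical oscillations and the speech signal is commonly assessed by comparing the amplitude envelope of speech with band-limited neural activity, for example via spectral coherence in the syllabic (roughly 2–8 Hz) range. A minimal sketch on synthetic signals (the frequency band, coupling strength and analysis parameters are illustrative assumptions, not those of the cited studies):

```python
import numpy as np
from scipy.signal import butter, filtfilt, coherence

rng = np.random.default_rng(3)
sfreq = 200.0
n = 60 * int(sfreq)  # one minute of signal

# Toy "speech envelope" dominated by the 2-8 Hz syllabic rhythm...
b, a = butter(4, [2.0, 8.0], btype="bandpass", fs=sfreq)
envelope = filtfilt(b, a, rng.standard_normal(n))
# ...and a toy auditory-cortex trace that partially tracks it.
neural = 0.5 * envelope + rng.standard_normal(n)

f, coh = coherence(envelope, neural, fs=sfreq, nperseg=512)
band = (f >= 2.0) & (f <= 8.0)
print(f"mean 2-8 Hz envelope-brain coherence ~ {coh[band].mean():.2f}")
```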
The articulatory motor system has been proposed to play a compensatory role, whereby the level of background noise increases the recruitment of motor areas in speech processing (Wilson, 2009). There is some evidence that the excitability of the articulatory motor cortex increases when there is background noise or the speech signal is distorted (Murakami et al., 2011; Nuttall et al., 2016). There is also some evidence that the level of background noise increases the strength of auditory-motor interactions (Alho et al., 2014). It should be noted, however, that the articulatory motor cortex is activated and interacts with the auditory cortex during speech processing even in the absence of background noise (Fadiga, 2002; Watkins et al., 2003; Möttönen et al., 2013, 2014b). Moreover, there is recent evidence that the activity of the articulatory motor system during listening to spoken sentences is not increased in noisy conditions, suggesting that its recruitment is not necessarily compensatory (Panouilleres, Boyles, Chesters, Watkins, & Möttönen, 2017). Whether the timing of auditory-motor interactions is affected by noise should be investigated in future studies.

There are also likely to be individual differences in the strength of auditory-motor interactions during speech processing. For example, speech perception and comprehension have been shown to be relatively unimpaired in some patients with motor speech production problems (e.g., Bishop, Brown, & Robson, 1990; Rogalsky et al., 2011), suggesting that in these patients speech perception does not rely critically on the integrity of the articulatory motor system. The ability of the auditory system to categorize speech sounds without support from the articulatory motor system has also been demonstrated in studies of nonhuman animals (e.g., Kuhl & Miller, 1975, 1978). These findings, while showing that the articulatory motor system is not as critical for speech perception as the auditory system, do not negate the position offered in this paper. In our interactive model, the articulatory motor system modulates speech processing in the healthy human brain. The auditory system can, however, process speech signals independently in the absence of modulatory input from the articulatory motor system. An interesting line for future research would be to investigate how auditory-motor networks adapt to malfunction of the motor system. It is also important to investigate how aging affects auditory-motor speech processing, since these mechanisms have been studied almost exclusively in young adults.
8. Conclusion

We reviewed neurophysiological evidence on the time course of speech processing in auditory and somatomotor areas in order to contrast three neurodynamic models of auditory-motor speech processing: parallel, hierarchical and interactive. EEG, MEG and ECoG studies have provided evidence that somatomotor areas are activated during the early stages of speech processing (before 100 ms), simultaneously with the auditory areas. Moreover, TMS studies have shown that disruptions in the articulatory motor areas impair speech perception and modulate early and intermediate processing of speech sounds in the auditory areas. These findings support the interactive model, in which the auditory and motor systems interact at both acoustic and phonemic stages of speech processing. Our interactive model of auditory-motor speech perception expands the role of the motor system in speech perception beyond strictly phonological tasks, as postulated in earlier models
(Hickok et al., 2011). Attention and context can affect the strength and timing of auditory-motor interactions. While the auditory areas can function independently of the motor areas, for example when the articulatory motor system is damaged, the reviewed evidence suggests that in healthy individuals speech perception relies on both auditory and motor areas, and on interactions between them, across multiple speech perception tasks. Nevertheless, unlike models that emphasize a ubiquitous role for the motor areas (Pulvermuller & Fadiga, 2010; Skipper et al., 2017), our interactive model emphasizes the primary role of auditory areas and postulates a strictly modulatory role for the motor areas in speech perception.
Statement of significance

We review neurophysiological evidence on the time course of speech processing in auditory and somatomotor areas in order to contrast three neurodynamic models of auditory-motor speech processing: parallel, hierarchical and interactive. We conclude that the findings support the interactive model. Our interactive model of auditory-motor speech perception expands the role of the motor system in speech perception beyond strictly phonological tasks, as postulated in earlier models (Hickok et al., 2011). Nevertheless, unlike models that emphasize a ubiquitous role for the motor areas (Pulvermuller & Fadiga, 2010; Skipper et al., 2017), our interactive model emphasizes the primary role of auditory areas and postulates a strictly modulatory role for the motor areas in speech perception.

Acknowledgements

EL was supported by the National Institutes of Health (DC006287) and the Brain & Behavior Research Foundation (22249). RM was supported by the Medical Research Council, UK (G1000566). We thank Dr. Daniel Lametti for help in preparing Fig. 2C.

References

Aaltonen, O., et al. (1997). Perceptual magnet effect in the light of behavioral and psychophysiological data. Journal of the Acoustical Society of America, 101(2), 1090–1105.
Alho, J., et al. (2012). Enhanced early-latency electromagnetic activity in the left premotor cortex is associated with successful phonetic categorization. Neuroimage, 60(4), 1937–1946.
Alho, J., et al. (2014). Enhanced neural synchrony between left auditory and premotor cortex is associated with successful phonetic categorization. Frontiers in Psychology, 5, 394.
Alho, J., et al. (2016). Early-latency categorical speech sound representations in the left inferior frontal gyrus. Neuroimage, 129, 214–223.
Bartoli, E., et al. (2015). Listener-speaker perceived distance predicts the degree of motor contribution to speech perception. Cerebral Cortex, 25(2), 281–288.
Binder, J. R., et al. (2000). Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex, 10(5), 512–528.
Bishop, D. V., Brown, B. B., & Robson, J. (1990). The relationship between phoneme discrimination, speech production, and language comprehension in cerebral-palsied individuals. Journal of Speech and Hearing Research, 33(2), 210–219.
Cheung, C., et al. (2016). The auditory representation of speech sounds in human motor cortex. Elife, 5.
Cogan, G. B., et al. (2014). Sensory-motor transformations for speech occur bilaterally. Nature, 507(7490), 94–98.
D'Ausilio, A., et al. (2009). The motor somatotopy of speech perception. Current Biology, 19(5), 381–385.
Davis, M. H., & Johnsrude, I. S. (2007). Hearing speech sounds: Top-down influences on the interface between audition and speech perception. Hearing Research, 229(1–2), 132–147.
DeWitt, I., & Rauschecker, J. P. (2012). Phoneme and word recognition in the auditory ventral stream. Proceedings of the National Academy of Sciences of the United States of America, 109(8), E505–E514.
Fadiga, L. (2002). Speech listening specifically modulates the excitability of tongue muscles: A TMS study. European Journal of Neuroscience, 15, 399–402.
Frey, S., et al. (2008). Dissociating the human language pathways with high angular resolution diffusion fiber tractography. Journal of Neuroscience, 28(45), 11435–11444.
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 360(1456), 815–836.
Giraud, A. L., & Poeppel, D. (2012). Cortical oscillations and speech processing: Emerging computational principles and operations. Nature Neuroscience, 15(4), 511–517.
Gootjes, L., et al. (1999). Left-hemisphere dominance for processing of vowels: A whole-scalp neuromagnetic study. NeuroReport, 10(14), 2987–2991.
Guenther, F. H., Ghosh, S. S., & Tourville, J. A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96(3), 280–301.
Hickok, G. (2010). The role of mirror neurons in speech perception and action word semantics. Language and Cognitive Processes, 25, 749–776.
Hickok, G., Houde, J., & Rong, F. (2011). Sensorimotor integration in speech processing: Computational basis and neural organization. Neuron, 69(3), 407–422.
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402.
Houde, J. F., & Jordan, M. I. (2002). Sensorimotor adaptation of speech I: Compensation and adaptation. Journal of Speech, Language, and Hearing Research, 45(2), 295–310.
Houde, J. F., & Nagarajan, S. S. (2011). Speech production as state feedback control. Frontiers in Human Neuroscience, 5, 82.
Humphries, C., et al. (2014). Hierarchical organization of speech perception in human auditory cortex. Frontiers in Neuroscience, 8, 406.
Krieger-Redwood, K., et al. (2013). The selective role of premotor cortex in speech perception: A contribution to phoneme judgements but not speech comprehension. Journal of Cognitive Neuroscience, 25(12), 2179–2188.
Kuhl, P. K., & Miller, J. D. (1975). Speech perception by the chinchilla: Voiced-voiceless distinction in alveolar plosive consonants. Science, 190(4209), 69–72.
Kuhl, P. K., & Miller, J. D. (1978). Speech perception by the chinchilla: Identification function for synthetic VOT stimuli. Journal of the Acoustical Society of America, 63(3), 905–917.
Kuriki, S., & Murase, M. (1989). Neuromagnetic study of the auditory responses in right and left hemispheres of the human brain evoked by pure tones and speech sounds. Experimental Brain Research, 77(1), 127–134.
Liberman, A. M., et al. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6), 431–461.
Liberman, A. M., & Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21(1), 1–36.
Lichtheim, L. (1885). On aphasia. Brain, 7(4), 433–484.
Liebenthal, E., et al. (2005). Neural substrates of phonemic perception. Cerebral Cortex, 15(10), 1621–1631.
Liebenthal, E., et al. (2010). Specialization along the left superior temporal sulcus for auditory categorization. Cerebral Cortex, 20(12), 2958–2970.
Liebenthal, E., et al. (2013). Neural dynamics of phonological processing in the dorsal auditory stream. Journal of Neuroscience, 33(39), 15414–15424.
Liebenthal, E., et al. (2014). The functional organization of the left STS: A large scale meta-analysis of PET and fMRI studies of healthy adults. Frontiers in Neuroscience, 8, 289.
Maiste, A. C., et al. (1995). Event-related potentials and the categorical perception of speech sounds. Ear and Hearing, 16(1), 68–90.
Makris, N., et al. (2009). Delineation of the middle longitudinal fascicle in humans: A quantitative, in vivo, DT-MRI study. Cerebral Cortex, 19(4), 777–785.
Meister, I. G., et al. (2007). The essential role of premotor cortex in speech perception. Current Biology, 17(19), 1692–1696.
Mesgarani, N., et al. (2014). Phonetic feature encoding in human superior temporal gyrus. Science, 343(6174), 1006–1010.
Möttönen, R., et al. (2006). Perceiving identical sounds as speech or non-speech modulates activity in the left posterior superior temporal sulcus. Neuroimage, 30(2), 563–569.
Möttönen, R., Dutton, R., & Watkins, K. E. (2013). Auditory-motor processing of speech sounds. Cerebral Cortex, 23(5), 1190–1197.
Möttönen, R., Rogers, J., & Watkins, K. E. (2014a). Stimulating the lip motor cortex with transcranial magnetic stimulation. Journal of Visualized Experiments, (88).
Möttönen, R., van de Ven, G. M., & Watkins, K. E. (2014b). Attention fine-tunes auditory-motor processing of speech sounds. Journal of Neuroscience, 34(11), 4064–4069.
Möttönen, R., & Watkins, K. E. (2009). Motor representations of articulators contribute to categorical perception of speech sounds. Journal of Neuroscience, 29(31), 9819–9825.
Möttönen, R., & Watkins, K. E. (2012). Using TMS to study the role of the articulatory motor system in speech perception. Aphasiology, 26(9), 1103–1118.
Murakami, T., et al. (2015). Left dorsal speech stream components and their contribution to phonological processing. Journal of Neuroscience, 35(4), 1411–1422.
Murakami, T., Restle, J., & Ziemann, U. (2011). Observation-execution matching and action inhibition in human primary motor cortex during viewing of speech-related lip movements or listening to speech. Neuropsychologia, 49(7), 2045–2054.
Murakami, T., Ugawa, Y., & Ziemann, U. (2013). Utility of TMS to understand the neurobiology of speech. Frontiers in Psychology, 4, 446.
Naatanen, R., et al. (1997). Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385(6615), 432–434.
Nuttall, H. E., et al. (2016). The effect of speech distortion on the excitability of articulatory motor cortex. Neuroimage, 128, 218–226.
Obleser, J., Lahiri, A., & Eulitz, C. (2003). Auditory-evoked magnetic field codes place of articulation in timing and topography around 100 milliseconds post syllable onset. Neuroimage, 20(3), 1839–1847.
Obleser, J., Lahiri, A., & Eulitz, C. (2004). Magnetic brain response mirrors extraction of phonological features from spoken vowels. Journal of Cognitive Neuroscience, 16(1), 31–39.
Obleser, J., Scott, S. K., & Eulitz, C. (2006). Now you hear it, now you don't: Transient traces of consonants and their nonspeech analogues in the human brain. Cerebral Cortex, 16(8), 1069–1076.
Panouilleres, M. T. N., Boyles, R., Chesters, J., Watkins, K. E., & Möttönen, R. (2017). Facilitation of motor excitability during listening to spoken sentences is not modulated by noise or semantic coherence. bioRxiv. https://doi.org/10.1101/168864.
Park, H., et al. (2015). Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners. Current Biology, 25(12), 1649–1653.
Parviainen, T., Helenius, P., & Salmelin, R. (2005). Cortical differentiation of speech and nonspeech sounds at 100 ms: Implications for dyslexia. Cerebral Cortex, 15(7), 1054–1063.
Petrides, M., & Pandya, D. N. (1984). Projections to the frontal cortex from the posterior parietal region in the rhesus monkey. Journal of Comparative Neurology, 228(1), 105–116.
Petrides, M., & Pandya, D. N. (2009). Distinct parietal and temporal pathways to the homologues of Broca's area in the monkey. PLoS Biology, 7(8), e1000170.
Phillips, C., et al. (2000). Auditory cortex accesses phonological categories: An MEG mismatch study. Journal of Cognitive Neuroscience, 12(6), 1038–1055.
Pulvermuller, F., et al. (2006). Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences of the United States of America, 103(20), 7865–7870.
Pulvermuller, F., & Fadiga, L. (2010). Active perception: Sensorimotor circuits as a cortical basis for language. Nature Reviews Neuroscience, 11(5), 351–360.
Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience, 12(6), 718–724.
Rauschecker, J. P., & Tian, B. (2000). Mechanisms and streams for processing of “what” and “where” in auditory cortex. Proceedings of the National Academy of Sciences of the United States of America, 97(22), 11800–11806.
Rogalsky, C., et al. (2011). Are mirror neurons the basis of speech perception? Evidence from five cases with damage to the purported human mirror system. Neurocase, 17(2), 178–187.
Rogers, J. C., et al. (2014). Discrimination of speech and non-speech sounds following theta-burst stimulation of the motor cortex. Frontiers in Psychology, 5, 754.
Roy, A. C., et al. (2008). Phonological and lexical motor facilitation during speech listening: A transcranial magnetic stimulation study. Journal of Physiology – Paris, 102(1–3), 101–105.
Salmelin, R. (2007). Clinical neurophysiology of language: The MEG approach. Clinical Neurophysiology, 118(2), 237–254.
Sato, M., Tremblay, P., & Gracco, V. L. (2009). A mediating role of the premotor cortex in phoneme segmentation. Brain and Language, 111(1), 1–7.
Schomers, M. R., et al. (2015). Causal influence of articulatory motor cortex on comprehending single spoken words: TMS evidence. Cerebral Cortex, 25(10), 3894–3902.
Seltzer, B., & Pandya, D. N. (1978). Afferent cortical connections and architectonics of the superior temporal sulcus and surrounding cortex in the rhesus monkey. Brain Research, 149(1), 1–24.
Sharma, A., et al. (1993). Acoustic versus phonetic representation of speech as reflected by the mismatch negativity event-related potential. Electroencephalography and Clinical Neurophysiology, 88(1), 64–71.
Sharma, A., & Dorman, M. F. (1999). Cortical auditory evoked potential correlates of categorical perception of voice-onset time. Journal of the Acoustical Society of America, 106(2), 1078–1083.
Skipper, J. I., Devlin, J. T., & Lametti, D. R. (2017). The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain and Language, 164, 77–105.
Smalle, E. H., Rogers, J., & Möttönen, R. (2015). Dissociating contributions of the motor cortex to speech perception and response bias by using transcranial magnetic stimulation. Cerebral Cortex, 25(10), 3690–3698.
Tourville, J. A., Reilly, K. J., & Guenther, F. H. (2008). Neural mechanisms underlying auditory feedback control of speech. Neuroimage, 39(3), 1429–1443.
Venezia, J. H., et al. (2012). Response bias modulates the speech motor system during syllable discrimination. Frontiers in Psychology, 3, 157.
Vihla, M., Lounasmaa, O. V., & Salmelin, R. (2000). Cortical processing of change detection: Dissociation between natural vowels and two-frequency complex tones. Proceedings of the National Academy of Sciences of the United States of America, 97(19), 10590–10594.
Watkins, K. E., Strafella, A. P., & Paus, T. (2003). Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia, 41(8), 989–994.
Wernicke, C. (1974). Der aphasische Symptomenkomplex. Berlin: Springer.
Wilson, S. M. (2009). Speech perception when the motor system is compromised. Trends in Cognitive Sciences, 13(8), 329–331.
Winkler, I., et al. (1999). Brain responses reveal the learning of foreign language phonemes. Psychophysiology, 36(5), 638–642.