Abstracts / Neuroscience Research 71S (2011) e46–e107

O4-D-2-1 Extrasylvian temporal language area defined by electrical cortical stimulation: A comparison with non-invasive studies in the standard space
Riki Matsumoto 1, Hisaji Imamura 1, Akihiro Shimotake 1, Tomoyuki Fumuro 1, Takeharu Kunieda 2, Susumu Miyamoto 2, Hidenao Fukuyama 3, Ryosuke Takahashi 1, Nobuhiro Mikuni 4, Akio Ikeda 1
1 Dept. of Neurology, Grad. Sch. of Med., Kyoto Univ., Kyoto, Japan 2 Dept. of Neurosurgery, Grad. Sch. of Med., Kyoto Univ., Kyoto, Japan 3 HBRC, Grad. Sch. of Med., Kyoto Univ., Kyoto, Japan 4 Dept. of Neurosurgery, Sapporo Med. Univ., Sapporo, Japan

Objective: Recent neuroimaging and lesion studies of semantic dementia (SD) have highlighted the importance of the anterior temporal cortices in language comprehension. By means of direct cortical stimulation, we attempted to identify the function and anatomy of the anterior temporal language area by comparing the stimulation findings with non-invasive studies in the standard space. Methods: Subjects were 7 patients with intractable partial epilepsy who underwent subdural electrode implantation in the language-dominant left hemisphere for invasive presurgical evaluation. The patients gave written informed consent (no. 79). Performance on language tasks was evaluated during high-frequency electrical cortical stimulation (50 Hz, 5 s, 10–15 mA). Stimulus sites were coregistered to the presurgical 3D-MRI, and then to MNI standard space for anatomical localization. Results: Electrical cortical stimulation revealed well-restricted language areas in (1) the anterior part of the superior temporal sulcus and gyrus (aSTS/STG) in 2 patients and (2) the anterior basal temporal area (aBTA) in all 7. Stimulation of the aSTS/STG produced selective impairment of speech comprehension. The aSTS/STG (mean coordinate: −64, 0, −6) corresponded well with the coordinates of speech perception reported in neuroimaging studies. Stimulation of the aBTA impaired object naming most frequently, followed by paragraph reading, kanji word reading, kana word reading, and auditory comprehension. The distribution of functional impairment corresponded well with that of atrophy in SD, with the fusiform gyrus being most involved (mean coordinate: −44, −19, −33).
The aBTA was located anterior to the visual word form area and the region for reading kanji words reported in neuroimaging studies. Conclusion: Taken together with non-invasive studies, the aSTS/STG is engaged in speech comprehension, while the aBTA most likely plays a role in semantic processing. Research fund: KAKENHI (C) 20591022.

doi:10.1016/j.neures.2011.07.389

O4-D-2-2 Expanding activation of the left frontal cortex depending on lexical, syntactic, and contextual processes of Japanese Sign Language: An fMRI study
Tomoo Inubushi 1,2, Kuniyoshi L. Sakai 1,2
1 Dept. of Basic Sci., Univ. of Tokyo, Tokyo, Japan 2 CREST, JST, Tokyo, Japan

The commonality of lexical, syntactic, and contextual features between spoken and sign languages provides us with a unique opportunity to reveal the modality-independent, and thus universal, features of language processes. Although previous neuroimaging studies have shown that these features in spoken languages elicit some overlapping activation in the left language areas, the cortical regions critical for these linguistic features in sign language have not been fully understood. Here, we conducted an fMRI experiment with three linguistic tasks (lexical, syntactic, and contextual decisions) and one nonlinguistic task (repetition detection). In the linguistic tasks, deaf participants (N = 28) judged whether there was an error in dialogue sentences, which were videotaped images of signs in Japanese Sign Language (JSL). In the nonlinguistic task, the participants detected the repetition of reversed videotaped images used in the linguistic tasks. Compared with the repetition detection task, the lexical decision task elicited significant (corrected p < 0.05) activation in the left lateral premotor cortex (LPMC) and dorsal inferior frontal gyrus (IFG). In the syntactic and contextual decision tasks, this activation extended more ventrally. Moreover, the contextual decision task also elicited bilateral but left-dominant activation in the LPMC/IFG, middle frontal gyrus, middle temporal gyrus, and angular gyrus. These activation patterns suggest that more cortical regions are recruited for syntactic and contextual processes. Furthermore, we establish for the first time that the left LPMC/IFG subserves syntax in signs, demonstrating the existence of common mechanisms for both spoken and sign languages. Research fund: CREST, KAKENHI (20220005).

doi:10.1016/j.neures.2011.07.390

O4-D-2-3 Neural dynamics underlying subliminal priming for syntactic judgment: An MEG study
Kazuki Iijima 1,2, Kuniyoshi L. Sakai 1,3
1 Dept. of Basic Sci., Univ. of Tokyo, Tokyo 2 Japan Society for the Promotion of Science 3 CREST, Japan Science and Technology Agency, Tokyo

Automatic and predictive features of syntactic processing are crucial to our real-time language comprehension. To clarify the neural basis of such rapid features, we devised a new paradigm of subliminal priming for syntactic judgment. Native Japanese speakers (N = 15) judged the grammaticality of two-word sentences, each of which consisted of a noun phrase (NP) with a case marker (-o: accusative, or -ga: nominative) and a verb (transitive (vt) or intransitive (vi)). Depending on the case of the NP, sentences had either an object–verb (OV) or a subject–verb (SV) sentence structure. After presentation of an NP, a semantically related verb was presented as a subliminal prime, followed by a target verb that was either congruent or incongruent with the verb prime in terms of verb type (vt or vi). Because a preceding NP with an accusative case provides sufficient syntactic information about the vt, verb primes for OV sentences would facilitate syntactic processing of a target verb with the same verb type, i.e., under the congruent condition. Under the congruent condition, RTs were significantly shorter for OV than for SV sentences, confirming the priming effect. We measured cortical responses to the target verb using magnetoencephalography (MEG) and adopted a cluster analysis with permutation tests. Consistent with the behavioral priming effect, we found significantly enhanced cortical responses to OV sentences under the congruent condition (corrected P < 0.05), observed at 150–170 ms after the verb onset in the left inferior frontal cortex (Brodmann's areas 44 and 6). Moreover, the cortical responses to OV sentences were significantly larger under the congruent than the incongruent condition. This novel finding suggests that even unconscious stimuli can facilitate ongoing predictive processing of syntactic information in a structure-dependent way, further supporting the autonomous and domain-specific nature of syntactic processing in the left frontal region. Research fund: Grant-in-Aid for JSPS Fellows (22.10126), KAKENHI (20220005), CREST.

doi:10.1016/j.neures.2011.07.391

O4-D-2-4 Spatial and temporal dynamics of language-related and face recognition brain functions by electrocorticogram and MEG
Kyousuke Kamada 1, Naoto Kunii 2, Kensuke Kawai 2, Nobuhito Saito 2
1 Dept. of Neurosurgery, School of Medicine, Asahikawa Medical University 2 Dept. of Neurosurgery, The University of Tokyo

We validated ECoG results obtained with semantic tasks by electrical cortical stimulation (ECS). Thirty-two patients underwent bilateral implantation of subdural electrodes for diagnostic purposes. Semantic ECoG was recorded during word, figure, and face recognition and memory tasks, and the raw ECoG data were processed by averaging and time-frequency analysis. ECS was applied to identify the eloquent areas of language- and memory-related functions. ECoG and MEG were recorded simultaneously in the spontaneous state and during semantic tasks to compare source localizations. The basal temporal–occipital cortex was activated within 250 ms after visual object presentation. Face stimuli evoked significantly higher ECoG amplitudes than other stimuli. During word recognition, the superior temporal and inferior frontal regions were activated alternately until 800 ms. Profiles of semantic MEG were similar to those of ECoG. Time-frequency analysis showed three major spots of increased gamma-band activity in the frontal, posterior temporal, and basal temporal regions. Semantic MEG demonstrated a decrease in the lower frequency bands between 20 and 40 Hz. ECS applied to the gamma-band ECoG spots induced impairment of specific cognitive functions. Semantic ECoG is a powerful technique for detecting and decoding human brain functions. Research fund: Japan Epilepsy Research Foundation, KAKENHI 21390406, KAKENHI 21119508, and a research grant ("Decoding and controlling brain information") from the Japan Science and Technology Agency.

doi:10.1016/j.neures.2011.07.392