Neuroscience Letters 366 (2004) 254–258

Electrophysiological correlates of grapheme-phoneme conversion

Koongliang Huang, Kosuke Itoh, Shugo Suwazono, Tsutomu Nakada∗

Center for Integrated Human Brain Science, Brain Research Institute, University of Niigata, 1-757 Asahimachi-dori, Niigata 951-8585, Japan

Received 17 March 2004; received in revised form 9 May 2004; accepted 19 May 2004

Abstract

The cortical processes underlying grapheme-phoneme conversion were investigated with event-related potentials (ERPs). The task consisted of silent reading or vowel-matching of three Japanese hiragana characters, each representing a consonant-vowel syllable. At earlier latencies, typical components of the visual ERP, namely P1 (110 ms), N1 (170 ms) and P2 (300 ms), were elicited over the temporo-occipital area in both tasks as well as in a control task (observing the orthographic shapes of three Korean characters). Following these earlier components, two sustained negativities were identified. The earlier sustained negativity, referred to here as SN1, was found in both the silent-reading and vowel-matching tasks but not in the control task. The scalp distribution of SN1 was over the left occipito-temporal area, with maximum amplitude over O1. The amplitude of SN1 was larger in the vowel-matching task than in the silent-reading task, consistent with previous reports that ERP amplitude correlates with task difficulty. SN2, the later sustained negativity, was observed only in the vowel-matching task. Its scalp distribution was over the midsagittal centro-parietal area, with maximum amplitude over Cz. Elicitation of SN2 in the vowel-matching task suggests that this task requires a wider range of neural activities, extending beyond the conventionally established language-processing areas. © 2004 Elsevier Ireland Ltd. All rights reserved.

Keywords: Language; ERP; Literacy; Hiragana

Phonological processing plays a crucial role in reading, even in the silent form of reading [3,4,6,7,14,15,24,27]. Characters are first converted to phonemes and syllables [7,9,24], which are then stored and manipulated in working memory to form phonetic representations of words and phrases [4,6,15]. The importance of phonological processing, namely, grapheme-phoneme conversion and phonological manipulation in working memory, becomes explicit, for example, when reading unfamiliar words or appreciating rhymes in verses. Underdeveloped skills in either of the two fundamental stages of phonological processing impair literacy acquisition in children [6,7,24,25]. Although many studies [3,4,6,7,9,10,14–16,20,21,24–27], including those using event-related potentials (ERPs) [10,20,21,26], have investigated the phonological processing of grapheme-phoneme conversion and phonological manipulation, methodological difficulties have hindered clear and separate identification of the electrophysiological indices associated with these two stages of processing.



∗ Corresponding author. Tel.: +81 25 227 0677; fax: +81 25 227 0821. E-mail address: [email protected] (T. Nakada).

Determining whether or not two English words rhyme, a type of vowel-matching task, has been used effectively to study phonological matching [20,21,26]. However, orthographic priming is a potential confounder in these studies, because word pairs that rhyme often share common graphemes (e.g. “a” and “l” in “pale” and “nail”). Semantic processing of the words may also contribute to and confound the observed ERPs. Using single letters would alleviate these effects [10], but the ERP components are then difficult to separate effectively because of the short processing time associated with converting single letters. The Japanese writing system presents especially interesting opportunities for examining the cognitive components of reading because of its two co-existing orthographic systems, kana and kanji. Simply stated, this dual system comprises a linearly combined phonetic character set, kana, and a “semantic” character set, kanji. Furthermore, there are two varieties of kana, hiragana and katakana, each of which constitutes a complete set of phonetic characters. A single hiragana character represents the equivalent of a single syllable consisting of a consonant and a vowel. Hiragana characters can rhyme with each other without sharing a common grapheme.

0304-3940/$ – see front matter © 2004 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.neulet.2004.05.099

K. Huang et al. / Neuroscience Letters 366 (2004) 254–258

Sets of three hiragana characters can be selected that do not form any meaningful word, thereby obviating the potential confounding effect of semantic processing. Accordingly, ERP components of grapheme-phoneme conversion and phonological matching were separated in time using three hiragana characters. The present ERP study aimed to delineate the electrophysiological correlates of grapheme-phoneme conversion and phonological matching (i.e. rhyming) by taking advantage of these unique characteristics of hiragana.

Eleven healthy right-handed subjects (19–24 years old; four males, seven females) with normal or corrected-to-normal visual acuity participated in the study. All were native speakers of Japanese and had more than 12 years of formal education. Informed consent was obtained according to the human research guidelines of the Institutional Review Board of the University of Niigata.

Each subject performed three tasks: silent-reading, vowel-matching, and viewing without reading (control). The stimuli for the silent-reading and vowel-matching tasks consisted of a set of three Japanese hiragana characters that do not form any interpretable word (Fig. 1). In the silent-reading task, subjects read the characters silently in the order of left, right, and bottom. In the vowel-matching task, subjects read the characters as in the silent-reading task and then decided whether either of the first two hiragana characters shared the same vowel as the bottom character. (The first two characters never shared the same vowel.) In the control task, subjects observed the orthographic shapes of three Korean characters, which were meaningless symbols to all subjects (Fig. 1). Characters were presented simultaneously for 90 ms. Each character subtended 1.1° × 1.1°, and the most distant character was located 1.3° from the fixation point.
The fixation point was denoted by ‘+’ in the control and silent-reading tasks and ‘×’ in the vowel-matching task, to indicate the assigned task. Reaction time (RT) was measured according to three behavioral responses. First, subjects pushed a button when they read the first character of the stimulus (RT1). Second, they indicated the time when they read the last character (RT2). Third, they pushed the button when they arrived at their decision (RT3). Measurements of RT1, RT2, and RT3 were made in separate but continuous blocks in the order “ABCABC,” where A, B, and C corresponded to the measurement of RT1, RT2, and RT3, respectively. Each block (32 s) began with a 5-s interval indicating the measurement condition, followed by a series of nine trials (27 s) with a stimulus onset asynchrony of 3 s. The whole sequence was repeated twice for each subject.

Fig. 1. Stimuli and tasks. Three Japanese hiragana characters were presented in the silent-reading (left) and vowel-matching (middle) tasks. In the silent-reading task, subjects read the characters silently in the order of left, right, and bottom. In the vowel-matching task, subjects read the characters as in the silent-reading task and then decided whether either of the first two characters shared a common vowel with the bottom character. The fixation mark indicated the specific task: ‘+’ for silent-reading and ‘×’ for vowel-matching. In the control task (right), subjects observed the orthographic shapes of Korean characters, meaningless to the subjects, presented in the same spatial configuration as in the other two tasks.

RT1, RT2, and RT3 were 673 ± 169 ms (mean ± S.D.), 1376 ± 237 ms, and 2075 ± 280 ms, respectively (Fig. 2). One-factor repeated-measures ANOVA showed a significant main effect [F(2,20) = 297.64, P < 0.0001, ε = 0.873], and Tukey’s Honestly Significant Difference (HSD) post hoc analysis revealed significant differences in all pair-wise comparisons (P < 0.0001). The percentage of correct responses in the vowel-matching task was 92 ± 7%.

In the ERP experiment, the three tasks were arranged in a continuous cyclic sequence “XYZXYZ,” where X, Y, and Z corresponded to the control, silent-reading, and vowel-matching tasks, respectively. Subjects made no overt responses in any of the tasks. At the beginning of each block (32 s), a 5-s interval indicated the task condition, followed by a series of nine trials (27 s) with a stimulus onset asynchrony of 3 s. The whole sequence was repeated four to six times while electroencephalographic (EEG) recordings were made.

EEG recordings were conducted in a dark, temperature-controlled (25 °C), electrically shielded, sound-attenuated room. Twenty-one silver electrodes were applied according to the international 10-20 system [8], positioned at Fpz, Fp1, Fp2, Fz, F3, F4, F7, F8, Cz, C3, C4, T3, T4, T5, T6, Pz, P3, P4, Oz, O1, and O2. Horizontal and vertical electro-oculograms (EOG) were also recorded.
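The one-factor repeated-measures ANOVA applied to the RTs above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis code; the function name and the synthetic subject data (loosely modeled on the reported means) are assumptions for demonstration, and the Tukey HSD post hoc test and sphericity correction (ε) are omitted.

```python
import numpy as np

def rm_anova_1way(data):
    """One-factor repeated-measures ANOVA.
    data: (n_subjects, n_conditions) array; returns (F, df_cond, df_err)."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

# Hypothetical RTs (ms) for 11 subjects x 3 measurements (RT1, RT2, RT3);
# not the actual data from the study.
rng = np.random.default_rng(0)
rts = np.array([673.0, 1376.0, 2075.0]) + rng.normal(0.0, 150.0, (11, 3))
F, df1, df2 = rm_anova_1way(rts)
print(f"F({df1},{df2}) = {F:.2f}")  # degrees of freedom match the reported F(2,20)
```

Removing the between-subject sum of squares from the error term is what distinguishes the repeated-measures design from an independent-groups ANOVA and gives the (n − 1)(k − 1) error degrees of freedom reported in the paper.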
All channels were recorded against the nose electrode. Impedance was kept below 5 kΩ throughout the experiment. EEG and EOG data were amplified using a SynAmp amplifier (Neuroscan Labs) with a bandpass of 0.05–100 Hz and sampled at 1 kHz. All data were segmented into 3000-ms epochs, including a 100-ms pre-stimulus period. After baseline correction by the pre-stimulus average, epochs with amplitudes exceeding ±70 µV were rejected as artifacts. ERPs were obtained separately for each task by averaging responses time-locked to stimulus onset. The averages were then low-pass filtered at 30 Hz. Finally, the data were averaged across subjects to obtain grand-average ERPs for each task.

Fig. 2 shows the grand-average ERPs. At earlier latencies, typical components of the visual ERP, namely P1 (110 ms), N1 (170 ms) and P2 (300 ms), were elicited over the temporo-occipital area in all tasks, signifying the initial visual processing of orthographic forms [11,19]. Following these earlier components, two sustained negativities were identified. The earlier sustained negativity, which we refer to here as SN1 (peak ∼590 ms), was found in both the silent-reading and vowel-matching tasks but not in the control task. Its scalp distribution was centered over the left occipito-temporal region, maximal over O1 (Figs. 2 and 3). The amplitude of the grand-average ERPs at O1 was analyzed by repeated-measures ANOVA with the factor task (silent-reading/vowel-matching/control). At the peak latency of SN1, the main effect of task was significant [F(2, 20) = 23.65, P < 0.0001, ε = 0.82], and post hoc analyses (Tukey’s HSD) revealed that the amplitude was significantly larger in the silent-reading (P < 0.01) and vowel-matching (P < 0.0001) tasks than in the control task. It was also significantly larger in the vowel-matching task than in the silent-reading task (P < 0.05).

The later of the two sustained negativities, termed SN2 (peak ∼1574 ms), was elicited only in the vowel-matching task. Its scalp distribution was over the midsagittal centro-parietal area, maximal over Cz, with no apparent hemispheric asymmetry (Figs. 2 and 3). Its amplitude at Cz (averaged within a 40-ms window centered at its peak) was analyzed by one-factor repeated-measures ANOVA (silent-reading/vowel-matching/control). The main effect of task was significant [F(2, 20) = 19.12, P < 0.0001, ε = 0.735], and post hoc analyses (Tukey’s HSD) revealed that the amplitude was significantly greater in the vowel-matching condition than in either the silent-reading (P < 0.0001) or control (P < 0.0001) condition. There was no significant difference in amplitude between silent-reading and control (P > 0.5).

Fig. 2. RTs and grand-average ERPs. RT1, RT2, and RT3 mark the times at which the first character was read, the last character was read, and a decision was reached whether the stimuli rhymed, respectively. (Bars indicate S.D.) The ERPs were significantly affected by the tasks from about the latency of N1 onward. Two sustained negativities, SN1 and SN2, were identified, clearly distinguishable in their temporal course and spatial distribution (see Fig. 3).

Fig. 3. Topography. SN1 was distributed over the left temporo-occipital area, maximal over O1. SN1 in the vowel-matching task was significantly greater in amplitude (P < 0.05) and longer in duration (see Fig. 2) than in the silent-reading task. SN2 was distributed over the midsagittal centro-parietal area, maximal over Cz. These topographic characteristics clearly distinguish SN2 from SN1.

Two clearly distinguishable sustained negative ERP components, SN1 and SN2, were identified in this study. These components were distinct in task-specificity, temporal course, and spatial distribution. SN1 was found in the silent-reading and vowel-matching tasks, while SN2 was found only in the vowel-matching task.
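The epoching, baseline-correction, artifact-rejection, and averaging steps described in the methods can be sketched for a single subject as follows. This is a minimal sketch under assumed array shapes (channels × samples, in µV, 1 kHz sampling), not the authors' actual Neuroscan pipeline; the subsequent 30-Hz low-pass filtering of the averages is omitted.

```python
import numpy as np

def average_erp(eeg, onsets, sfreq=1000, pre_s=0.1, total_s=3.0, reject_uv=70.0):
    """Cut 3000-ms epochs (including a 100-ms pre-stimulus period) around
    stimulus onsets, subtract the pre-stimulus mean per channel, reject epochs
    whose absolute amplitude exceeds reject_uv, and average the rest.
    eeg: (n_channels, n_samples) array in µV; onsets: sample indices."""
    pre = int(pre_s * sfreq)        # pre-stimulus samples (baseline window)
    length = int(total_s * sfreq)   # total epoch length in samples
    kept = []
    for onset in onsets:
        ep = eeg[:, onset - pre: onset - pre + length].astype(float)
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        if np.abs(ep).max() <= reject_uv:              # artifact rejection
            kept.append(ep)
    return np.mean(kept, axis=0)    # ERP: (n_channels, length)
```

Averaging time-locked epochs attenuates activity that is not phase-locked to stimulus onset, which is why slow components such as the sustained negativities only become visible after averaging many trials.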
Elicitation of SN1 by the silent-reading task indicates that grapheme-phoneme conversion alone, without the process of phonological matching, is capable of generating SN1. The amplitude of SN1 was greater in the vowel-matching task, consistent with previous reports that the amplitudes of sustained negativities correlate with task difficulty [17,18]. The prolonged duration of SN1 in the vowel-matching task likely reflects the more complex neural processing required for phonological matching, such as the recurrence of grapheme-phoneme conversion during the matching process. A subcomponent of SN1 reflecting purely phonological matching could not be convincingly isolated in this study. The left-dominant distribution of SN1 is consistent with the cortical lateralization of the processing of Japanese [12,13,22,23].

The midsagittal centro-parietal distribution of SN2 clearly contrasted with that of SN1. This difference in scalp topography strongly indicates distinct underlying neural processes. Considering its central distribution and late latency, SN2 likely reflects cortical functions related to the maintenance and manipulation of transformed phonological information [1,2,5], which require a wider range of neural activities extending beyond the conventionally established language-processing areas. Other general processes such as attention or expectancy may also contribute significantly to the elicitation of SN2.

In conclusion, using three-character stimuli based on Japanese hiragana, this study resolved two clearly distinguishable sustained negativities, SN1 and SN2, in space and time. Sustained negativities similar in distribution to SN1 and SN2 have been reported in a vowel-matching task using alphabetic characters [10]. These sustained negativities therefore likely represent ERP indices of language processing in general and are not specific to Japanese. Further analysis of the neural generators of SN1 and SN2 in the phonological processing of reading is warranted.

Acknowledgements The authors thank Dr. Ingrid L. Kwee, University of California, Davis for her critical review of the manuscript. The work was supported by grants from the Ministry of Education, Culture, Sports, Science, and Technology.

References

[1] A.D. Baddeley, Exploring the central executive, Q. J. Exp. Psychol. A 49 (1996) 5–28.
[2] A.D. Baddeley, G.J. Hitch, Working memory, in: G. Bower (Ed.), The Psychology of Learning and Motivation, vol. 8, Academic Press, New York, 1974, pp. 47–90.
[3] L. Bradley, P.E. Bryant, Categorizing sounds and learning to read: a causal connection, Nature 301 (1983) 419–421.
[4] S.E. Gathercole, Cognitive approaches to the development of short-term memory, Trends Cogn. Sci. 3 (1999) 410–419.
[5] S.E. Gathercole, A.D. Baddeley (Eds.), Working Memory and Language, Erlbaum, Hove, UK, 1993.
[6] S. Gathercole, S. Pickering, Working memory deficits in children with special educational needs, Br. J. Spec. Educ. 28 (2001) 89–97.
[7] M. Golden, R. Zenhausern, Grapheme to phoneme conversion: the basis of reading disability? Int. J. Neurosci. 20 (1983) 229–239.
[8] H.H. Jasper, Report of the committee on methods of clinical examination in electroencephalography, Electroencephalogr. Clin. Neurophysiol. 10 (1958) 370–375.
[9] I.Y. Liberman, D. Shankweiler, F.W. Fischer, B. Carter, Explicit syllable and phoneme segmentation in the young child, J. Exp. Child Psychol. 18 (1974) 201–212.
[10] D. Lovrich, R. Simson, H.G. Vaughan Jr., W. Ritter, Topography of visual event-related potentials during geometric and phonetic discriminations, Electroencephalogr. Clin. Neurophysiol. 65 (1986) 1–12.
[11] B.D. McCandliss, M.I. Posner, T. Givón, Brain plasticity in learning visual words, Cogn. Psychol. 33 (1997) 88–110.
[12] T. Nakada, Y. Fujii, Y. Yoneoka, I.L. Kwee, Planum temporale: where spoken and written language meet, Eur. J. Neurol. 46 (2001) 121–125.
[13] T. Nakada, Y. Fujii, I.L. Kwee, Brain strategies for reading in the second language are determined by the first language, Neurosci. Res. 40 (2001) 351–358.
[14] M. Niznikiewicz, N.K. Squires, Phonological processing and the role of strategy in silent reading: behavioral and electrophysiological evidence, Brain Lang. 52 (1996) 342–364.
[15] J. Oakhill, F. Kyle, The relation between phonological awareness and working memory, J. Exp. Child Psychol. 75 (2000) 152–164.


[16] K.R. Pugh, B.A. Shaywitz, S.E. Shaywitz, R.T. Constable, P. Skudlarski, R.K. Fulbright, R.A. Bronen, D.P. Shankweiler, L. Katz, J.M. Fletcher, J.C. Gore, Cerebral organization of component processes in reading, Brain 119 (1996) 1221–1238.
[17] B. Rolke, M. Heil, E. Hennighausen, C. Häussler, F. Rösler, Topography of brain electrical activity dissociates the sequential order transformation of verbal versus spatial information in humans, Neurosci. Lett. 282 (2000) 81–84.
[18] F. Rösler, M. Heil, B. Röder, Slow negative brain potentials as reflections of specific modular resources of cognition, Biol. Psychol. 45 (1997) 109–141.
[19] B. Rossion, C.A. Joyce, G.W. Cottrell, M.J. Tarr, Early lateralization and orientation tuning for face, word, and object processing in the visual cortex, Neuroimage 20 (2003) 1609–1624.
[20] M.D. Rugg, Event-related potentials in phonological matching tasks, Brain Lang. 23 (1984) 225–240.
[21] M.D. Rugg, S.E. Barrett, Event-related potentials and the interaction between orthographic and phonological information in a rhyme-judgment task, Brain Lang. 32 (1987) 336–361.

[22] Y. Sakurai, Y. Ichikawa, T. Mannen, Pure alexia from a posterior occipital lesion, Neurology 56 (2001) 778–781.
[23] Y. Sakurai, T. Momose, M. Iwata, Y. Sudo, K. Ohtomo, I. Kanazawa, Cortical activity associated with vocalization and reading proper, Cogn. Brain Res. 12 (2001) 161–165.
[24] L.S. Siegel, Phonological processing deficits and reading disability, in: J.L. Metsala, L.C. Ehri (Eds.), Word Recognition in Beginning Literacy, Erlbaum, Mahwah, NJ, 1998, pp. 141–160.
[25] H.L. Swanson, Short-term memory and working memory: do both contribute to our understanding of academic achievement in children and adults with learning disabilities? J. Learn. Disabil. 27 (1994) 34–50.
[26] M. Valdes-Sosa, A. Gonzalez, L. Xiang, Z. Xiao-Lei, H. Yi, M.A. Bobes, Brain potentials in a phonological matching task using Chinese characters, Neuropsychologia 31 (1993) 853–864.
[27] R.K. Wagner, J.K. Torgesen, The nature of phonological processing and its causal role in the acquisition of reading skills, Psychol. Bull. 101 (1987) 192–212.