International Congress Series 1300 (2007) 199 – 202
Cortical representation of phonemic and phonetic contrasts in Japanese vowel

Seiya Funatsu a,⁎, Satoshi Imaizumi b, Akira Hashizume c, Kaoru Kurisu c

a The Prefectural University of Hiroshima, 1-1-71 Ujinahigashi Minami-ku, Hiroshima, 734-8558, Japan
b The Prefectural University of Hiroshima at Mihara, Mihara, Japan
c Faculty of Medicine, Hiroshima University, Hiroshima, Japan
Abstract. Phonemic changes in spoken words alter linguistic meaning, whereas allophonic phonetic variations do not. The Japanese vowel /u/ is produced as devoiced [u̥] when surrounded by voiceless consonants, and otherwise as voiced [u]. Due to this allophonic variation, the single Japanese vowel /u/ shows a phonetic contrast between [u] and [u̥] without any phonemic change. The present study investigated how Japanese speakers process phonemic and phonetic contrasts, using the voiced and devoiced vowels [u] and [u̥]. Two oddball experiments were carried out. Under the phonemic condition, a frequent stimulus [tʃi̥ta] was contrasted with a deviant [tsu̥ta], and a frequent [tsu̥ta] with a deviant [tʃi̥ta]. Under the phonetic condition, a frequent [tsuta] was contrasted with a deviant [tsu̥ta], and a frequent [tsu̥ta] with a deviant [tsuta]. The subjects were 14 native speakers of Japanese. The equivalent current dipole moment (ECDM) was estimated from the mismatch magnetic field (MMF). The ECDM in the left hemisphere was significantly larger than that in the right hemisphere under the phonemic condition (p < 0.01), whereas the ECDM did not differ significantly between the hemispheres under the phonetic condition (p = 0.4870). Moreover, under the phonetic condition, the ECDM elicited by the voiced deviant was significantly larger than that elicited by the devoiced deviant in both hemispheres (p < 0.01), while there were no significant deviant-related differences in ECDM under the phonemic condition in either hemisphere. Although a significant MMF was elicited by the phonetic contrast between the voiced and devoiced vowels, these mismatch responses differed significantly from those observed under the phonemic contrast: the phonemic contrasts elicited significantly larger responses in the left than in the right hemisphere, whereas the phonetic contrasts did not. © 2007 Elsevier B.V. All rights reserved.

Keywords: Vowel devoicing; MMF; Japanese language
⁎ Corresponding author. Tel.: +81 82 251 9728; fax: +81 82 251 9405. E-mail address:
[email protected] (S. Funatsu). 0531-5131/ © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.ics.2007.01.046
1. Introduction

In Japanese, the vowels /i/ and /u/, when they occur between voiceless consonants, are pronounced as the devoiced vowels [i̥] and [u̥]. This effect is called vowel devoicing. In Japanese phonology, the devoiced vowels [i̥] and [u̥] are allophones of the vowels /i/ and /u/, respectively. There are many studies of vowel devoicing in phonetics, phonology and physiology, but few in the brain sciences. The present study investigated how Japanese speakers discriminate these allophones, using the MEG technique.

2. Methods

2.1. Subjects

The subjects were 14 native speakers of Japanese (3 males, 11 females) aged 20–46 years. All subjects were right-handed and had no hearing loss. They were instructed to attend passively to the stimulus sequence. Stimuli were presented binaurally through insert earphones.

2.2. Stimuli

The stimuli for voiced–devoiced discrimination, [tsuta] and [tsu̥ta], were spoken by a male Japanese speaker and sampled at 11.025 kHz. They were cut into [tsu] + [ta] and [tsu̥] + [ta]. The two [ta] tokens were then discarded, and the remaining segments were aligned to [ts]: 65 ms and [u]: 75 ms (i.e., [tsu]: 140 ms), and [tsu̥]: 140 ms. Next, a new [ta] was uttered in isolation by the same speaker and aligned to 200 ms, and [tsu] and [tsu̥] were each rejoined to this [ta], so that the total duration of each stimulus was 420 ms (including an 80-ms closure section). The stimulus for phonemic discrimination, [tʃi̥ta], was produced in the same way ([tʃi̥]: 140 ms, closure section: 80 ms, [ta]: 200 ms). A sketch of this splicing arithmetic is given at the end of Section 2.

2.2.1. Phonetic condition (allophone discrimination)

An oddball paradigm was adopted, in which a standard stimulus was presented at a high frequency of 85% and a deviant at a low frequency of 15% (see the sequence sketch at the end of Section 2). Two sessions, with the standard–deviant pairs set as [tsuta] vs. [tsu̥ta] and [tsu̥ta] vs. [tsuta], respectively, were conducted in a counter-balanced order.

2.2.2. Phonemic condition (phonemic discrimination)

As in the phonetic condition, stimuli were presented using the oddball paradigm. Two sessions, with the standard–deviant pairs set as [tsu̥ta] vs. [tʃi̥ta] and [tʃi̥ta] vs. [tsu̥ta], respectively, were conducted in a counter-balanced order.

2.3. MEG recordings and analyses

The recordings were performed in a magnetically shielded room using a 204-channel whole-head gradiometer (Neuromag Ltd., Finland). The MEG epochs, starting 100 ms before and ending 600 ms after each stimulus onset, were averaged separately for standard and deviant stimuli and filtered with a 1.2–26 Hz digital band-pass filter. The MMF was determined from the deviant-minus-standard subtraction waveforms, between 160 ms and 310 ms in the phonetic condition and between 100 ms and 250 ms in the phonemic condition (a sketch of this computation is also given below). For each subject and condition, equivalent current dipoles (ECDs) were determined for the MMF.
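To make the duration arithmetic in Section 2.2 concrete, the following is a minimal sketch of the splicing procedure, assuming each recorded segment is available as a waveform array at the paper's 11.025 kHz sampling rate. The helper function and the zero-filled placeholder arrays are illustrative assumptions; the paper does not name the editing tools used.

```python
# Sketch of the stimulus splicing in Section 2.2 (illustrative, not the
# authors' actual script). Durations follow the paper: [ts] 65 ms + [u] 75 ms
# (voiced [tsu], 140 ms), devoiced [tsu̥] 140 ms, closure 80 ms, [ta] 200 ms,
# giving a 420-ms stimulus at 11.025 kHz.
import numpy as np

FS = 11025  # sampling rate (Hz)

def ms_to_samples(ms: float) -> int:
    """Convert milliseconds to a sample count at FS (truncating)."""
    return int(ms * FS / 1000)

# Placeholder arrays standing in for the recorded, duration-aligned segments.
tsu_voiced   = np.zeros(ms_to_samples(140))  # [ts] 65 ms + [u] 75 ms
tsu_devoiced = np.zeros(ms_to_samples(140))  # [tsu̥], vowel devoiced
closure      = np.zeros(ms_to_samples(80))   # silent closure before [t]
ta           = np.zeros(ms_to_samples(200))  # newly recorded [ta]

stim_voiced   = np.concatenate([tsu_voiced,   closure, ta])  # [tsuta]
stim_devoiced = np.concatenate([tsu_devoiced, closure, ta])  # [tsu̥ta]
assert len(stim_voiced) == ms_to_samples(420)  # 420 ms total
```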
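Similarly, the 85%/15% oddball sequences of Sections 2.2.1 and 2.2.2 could be generated as below. The trial count and the constraint that two deviants never occur back to back are common mismatch-paradigm conventions assumed here, not details reported in the paper.

```python
# Sketch of an oddball sequence generator (assumed parameters; the paper
# does not report the number of trials per session).
import random

def oddball_sequence(n_trials: int = 400, p_deviant: float = 0.15,
                     seed: int = 0) -> list[str]:
    """Return a shuffled trial list with no two adjacent deviants."""
    rng = random.Random(seed)
    n_dev = round(n_trials * p_deviant)
    seq = ["standard"] * (n_trials - n_dev) + ["deviant"] * n_dev
    while True:
        rng.shuffle(seq)
        # Re-shuffle until no two deviants are adjacent.
        if all(not (a == b == "deviant") for a, b in zip(seq, seq[1:])):
            return seq

seq = oddball_sequence()
print(seq.count("deviant") / len(seq))  # -> 0.15
```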
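Finally, the epoch–average–subtract pipeline of Section 2.3 can be sketched as follows, with NumPy/SciPy standing in for the Neuromag analysis software; the MEG sampling rate and all variable names are assumptions. Following the paper, averaging precedes filtering.

```python
# Sketch of the MMF computation in Section 2.3: epoch from -100 to +600 ms
# around each onset, average standards and deviants separately, band-pass
# the averages at 1.2-26 Hz (as in the paper), and subtract.
import numpy as np
from scipy.signal import butter, filtfilt

FS_MEG = 600  # assumed MEG sampling rate (Hz); not stated in the paper

def epochs(data, onsets, fs=FS_MEG, tmin=-0.1, tmax=0.6):
    """Cut (channels x samples) data into (trials x channels x samples)."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    return np.stack([data[:, s + i0 : s + i1] for s in onsets])

def bandpass(x, fs=FS_MEG, lo=1.2, hi=26.0, order=4):
    """Zero-phase Butterworth band-pass, 1.2-26 Hz as in the paper."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def mmf(data, std_onsets, dev_onsets):
    """Mismatch field: filtered deviant average minus filtered standard average."""
    std = bandpass(epochs(data, std_onsets).mean(axis=0))
    dev = bandpass(epochs(data, dev_onsets).mean(axis=0))
    return dev - std  # inspect 160-310 ms (phonetic) / 100-250 ms (phonemic)
```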
3. Results

Fig. 1(a) shows the ECD moment in all conditions. In the phonemic condition, the ECD moment in the left hemisphere is larger than that in the right hemisphere, whereas in the phonetic condition the ECD moment in the right hemisphere is larger than that in the left hemisphere. A two-way ANOVA (condition × hemisphere) showed a significant main effect of condition (p < 0.01) and a significant interaction (p < 0.05). That is, phonemic discrimination was performed mainly in the left hemisphere, whereas voiced–devoiced vowel discrimination was performed mainly in the right hemisphere. A post hoc test showed that, in the phonemic condition, the ECD moment in the left hemisphere was significantly larger than that in the right hemisphere (p < 0.01); no other differences were significant.

Fig. 1(b) shows the ECD moment for the individual deviant stimuli. In the phonemic condition, the ECD moments for the two deviant stimuli [tsu̥ta] and [tʃi̥ta] were almost the same, both in the left hemisphere and in the right hemisphere. In the phonetic condition, however, the ECD moments for [tsuta] were larger than those for [tsu̥ta] in both hemispheres. A two-way ANOVA (deviant stimulus × hemisphere) showed a significant main effect of deviant stimulus (p < 0.01). A post hoc test showed that the ECD moment for [tsuta] was significantly larger than that for [tsu̥ta] in the left hemisphere (p < 0.01) and, likewise, in the right hemisphere (p < 0.01), whereas there was no significant difference between [tsu̥ta] and [tʃi̥ta] in either hemisphere. (A sketch of such a two-way ANOVA follows the caption of Fig. 1.)

In general, the larger the acoustic difference between the standard and the deviant stimulus, the larger the ECD moment. In the present phonetic condition, however, the acoustic difference between standard and deviant was the same in both sessions, because the same stimulus pair was merely reversed. Accordingly, the ECD moment for the voiced deviant should have been almost equal to that for the devoiced deviant, but this was not the case.
Fig. 1. (a) ECD moment. Dv: phonetic condition, Ph: phonemic condition. (b) ECD moment for the individual deviant stimuli. [tʃi̥]: [tʃi̥ta], [tsu̥]: [tsu̥ta], [tsu]: [tsuta], [tsu̥]: [tsu̥ta].
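As an illustration of the condition × hemisphere analysis reported above, the following sketch runs a two-way ANOVA on simulated ECD moments for 14 subjects. The values are fabricated to mimic the reported pattern, and a plain OLS ANOVA is used as a simplification; the paper does not state whether a repeated-measures model was fitted.

```python
# Sketch of the two-way ANOVA (condition x hemisphere) on ECD moments in
# Section 3, using simulated values (per-subject data are not published).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for subj in range(14):  # 14 subjects, as in the paper
    for cond in ("phonemic", "phonetic"):
        for hemi in ("left", "right"):
            # Larger left-hemisphere response under the phonemic condition,
            # mimicking the reported pattern (values in nAm are invented).
            base = 10.0 if (cond == "phonemic" and hemi == "left") else 7.0
            rows.append({"subject": subj, "condition": cond,
                         "hemisphere": hemi, "ecd": base + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

model = ols("ecd ~ C(condition) * C(hemisphere)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```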
4. Discussion

In the phonetic condition, the ECD moment changed depending on the deviant stimulus, while in the phonemic condition the ECD moment did not change with the deviant stimulus. According to Shtyrov and Pulvermüller, the mismatch response to a word deviant, which has a trace in long-term memory, is larger than that to a pseudoword deviant, which does not [1]. Moreover, Näätänen and his colleagues showed that the mismatch response to a deviant that is a native-language phoneme prototype is larger than that to a non-prototype deviant [2]. In Japanese, it is the devoiced vowel, not the voiced one, that is uttered in the standard pronunciation. Therefore, devoiced vowels should be represented in the long-term memory of Japanese subjects. On this view, our result implies that the mismatch response to a word form that does not exist in long-term memory is larger than that to one that does. An alternative interpretation is that, irrespective of the standard pronunciation, voiced vowels might also be represented in the long-term memory of Japanese subjects; in that case, our results are consistent with the findings of both Shtyrov et al. and Näätänen et al. In any case, further studies are needed on this problem.

Acknowledgement

This study was partially supported by a Grant-in-Aid for Scientific Research (No. 18520327) from the Japan Society for the Promotion of Science.

References

[1] Y. Shtyrov, F. Pulvermüller, Neurophysiological evidence of memory traces for words in the human brain, NeuroReport 13 (2002) 521–525.
[2] R. Näätänen, et al., Language-specific phoneme representations revealed by electric and magnetic brain responses, Nature 385 (1997) 432–434.