Symposia Abstracts / International Journal of Psychophysiology 85 (2012) 291–360
Institute of Linguistics of the Hungarian Academy of Sciences, Budapest, Hungary; Department of Linguistics, University of Debrecen, Hungary. In the present study we investigated the influence of prosody on the processing of sentences with embedded phrase structure. This structure is characterized by a well-defined intonation contour, allowing us to study how modifications of the expected prosody affect sentence processing. In an event-related brain potential (ERP) experiment we examined the processing of meaningful and meaningless embedded sentences with normal and incongruent prosodic structures. We obtained the CPS component at the boundaries of intonation phrases in both sentence types, and the CPS was similar regardless of the congruency of the prosodic and syntactic structures. Moreover, we found evidence that the incongruent prosody was detected, as shown by the appearance of the RAN component, and that it induced neural reintegration processes manifested in the P600 component, despite the syntactic structure of the sentences being intact; this occurred only in meaningful sentences. These results suggest that prosody has an abstract, recursive representation, independent of other linguistic information, and that prosodic information is always taken into account during the processing of phrasal structures.
doi:10.1016/j.ijpsycho.2012.06.168
Different roles of prosody and repetition in infant word recognition: ERP studies in 6-, 9- and 12-month-old German infants C. Männel, A.D. Friederici Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany Previous studies have shown that infants start to detect unknown words in sentences between 7 and 10 months of age [1, 2]. In this context, the role of phonological and statistical cues in the speech input has received a lot of attention [3, 4], while speech-style characteristics have been widely neglected. However, parents naturally use accentuation within context and high numbers of repetitions when teaching infants new words [5, 6]. Here, we systematically investigated the impact of these conversational cues on word recognition in a familiarization-test paradigm across an age range of 6 to 12 months. When infants were repeatedly familiarized with words with or without accentuation in sentences, event-related brain potentials to word processing revealed clear developmental differences. Younger infants' processing was driven by prosodic cues, whereas older infants relied on repetition cues. In subsequent test phases, brain responses to familiarized versus new words confirmed the age-dependent reliance on different input cues for word recognition. Six-month-olds only recognized previously accentuated familiarized words. Both 9- and 12-month-olds showed recognition independent of previous accentuation, with 9-month-olds displaying an additional response to accentuated words. In summary, for prosodically salient input, infants show word segmentation and recognition at an earlier age than previously reported, emphasizing the crucial role of prosody in early language acquisition. After an initial reliance on prosodic information, repetition of a given item in the input becomes the relevant cue, indicating specific input-sensitive periods in speech segmentation within developmental steps of only three months.
doi:10.1016/j.ijpsycho.2012.06.169
Prosody meets syntax: Localization and connectivity D. Sammler Institute of Neuroscience and Psychology, University of Glasgow, UK Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany Contemporary models of auditory language comprehension assign segmental processing of syntactic and lexical-semantic information predominantly to the left hemisphere, whereas the right hemisphere is thought to have primacy for the processing of suprasegmental prosodic information. A dynamic interplay between the hemispheres is assumed to allow the timely coordination of both information types. The present talk will focus on EEG, lesion, and fMRI data on the localization and connectivity of the prosody and syntax networks. Specifically, it will be shown that the posterior corpus callosum provides the crucial brain basis for the online interaction of syntactic and prosodic information. Patients with lesions in the posterior third of the corpus callosum, connecting temporal, parietal and occipital areas, as well as patients with lesions in the anterior two-thirds of the corpus callosum, connecting orbital and frontal structures, and matched healthy participants, were tested in a paradigm that crossed syntactic and prosodic manipulations. An anterior negativity elicited by a mismatch between syntactically predicted phrase structure and prosodic intonation was analyzed as a marker for syntax–prosody interaction. Healthy controls and patients with lesions in the anterior corpus callosum showed this anterior negativity, demonstrating an intact interplay between syntax and prosody (Fig. 1A). No such effect was found in patients with lesions in the posterior corpus callosum, although they exhibited intact, prosody-independent syntactic processing comparable with healthy controls and patients with lesions in the anterior corpus callosum (Fig. 1B).
These data support the interplay between the speech processing streams in the left and right hemispheres via the posterior portion of the corpus callosum, building the brain basis for the coordination and integration of local syntactic and prosodic features during auditory speech comprehension.
Fig. 1. (A) Words carrying a syntax–prosody mismatch (red dotted line) compared to a match (red solid line) elicited an anterior negativity in healthy controls and the anterior corpus callosum (CC) group (see black arrows and topography maps), but not in posterior CC patients, indicating a deficient interhemispheric interaction between syntax and prosody in this group. (B) Syntactically incorrect words (green dotted line) compared to correct words (green solid line) elicited an ELAN and a P600 in all three experimental groups, indicating normal syntactic processing irrespective of whether participants had brain lesions or not.
doi:10.1016/j.ijpsycho.2012.06.170
Symposium C: Prestimulus EEG effects on the ERP Symposium Chair: Robert J. Barry (Australia) The aim of this symposium is to advance our understanding of the brain dynamics involved in the genesis of the event-related potential
(ERP), by examining the influence of pre-stimulus EEG on auditory ERPs. The first paper (De Blasio & Barry) examines the effect of pre-stimulus delta and theta amplitudes on ERP component amplitudes and latencies in an equiprobable Go/NoGo task. The second (Steiner, De Blasio & Barry) examines pre-stimulus delta, theta, alpha, and beta amplitudes and their effects on single-trial LPC amplitudes in a long-ISI dishabituation task. The third paper (De Blasio & Barry) returns to the equiprobable Go/NoGo task to examine the ERP effects of pre-stimulus alpha and beta amplitude. The last paper (Barry) changes the focus from pre-stimulus EEG amplitudes in traditional bands to pre-stimulus EEG phase effects, utilising narrow EEG bands. The symposium presents a range of new methodologies and data, and reinforces the importance of the ongoing EEG activity in perceptual and cognitive functioning.
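The High/Low sorting logic shared by these papers — compute the pre-stimulus band amplitude of each trial, rank the trials, and derive ERPs from the extreme thirds — can be sketched as follows. This is a minimal illustration with synthetic data; the sampling rate, trial count, and epoch lengths are assumptions for the sketch, not the authors' actual recording parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                       # sampling rate in Hz (assumed)
n_trials = 90                                  # number of artefact-free trials (assumed)

# Synthetic single-channel epochs spanning -500 to +500 ms (500 samples at 500 Hz)
epochs = rng.standard_normal((n_trials, fs))
pre = epochs[:, : fs // 2]                     # pre-stimulus window: -500 to 0 ms

# Per-trial FFT amplitude spectrum of the pre-stimulus window
freqs = np.fft.rfftfreq(pre.shape[1], d=1 / fs)
amps = np.abs(np.fft.rfft(pre, axis=1))

# Mean amplitude in the delta band (1-3 Hz) for each trial
delta = amps[:, (freqs >= 1) & (freqs <= 3)].mean(axis=1)

# Order trials by ascending pre-stimulus delta; average the lower and upper thirds
order = np.argsort(delta)
third = n_trials // 3
erp_low = epochs[order[:third]].mean(axis=0)   # ERP for Low pre-stimulus delta
erp_high = epochs[order[-third:]].mean(axis=0) # ERP for High pre-stimulus delta
```

The same ranking would be repeated per band (e.g. theta, 4–7 Hz) and the resulting High/Low ERPs compared component by component.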
Pre-stimulus EEG amplitude and ERPs in a Go/NoGo task: I. Slow wave effects F.M. De Blasio, R.J. Barry Centre for Psychophysics, Psychophysiology, and Psychopharmacology, University of Wollongong, Wollongong, Australia Brain & Behaviour Research Institute, University of Wollongong, Wollongong, Australia School of Psychology, University of Wollongong, Wollongong, Australia Despite event-related potentials (ERPs) being a commonly utilised measure within psychophysiological research, our understanding of ERP genesis remains limited. Ongoing electroencephalographic (EEG) activity has been implicated as a contributing factor. Considering the known associations between post-stimulus delta and theta band activity and ERP component outcomes, surprisingly few investigations have assessed the nature and extent of the pre-stimulus EEG–ERP relationships in these slow wave bands, and those that have are not free of confounds. The present investigation employed methodology allowing separate within-subjects analyses of these bands, and assessed the effect of their pre-stimulus activity on each ERP component separately, for two stimulus conditions. Twenty participants completed an equiprobable auditory Go/NoGo task in which a button press response was required. Only artefact-free trials with correct responses were included for analysis. Pre-stimulus epochs (−500 to 0 ms) were derived for Cz, a Fast Fourier Transform was applied, and the activity within the delta (1–3 Hz) and theta (4–7 Hz) bands was computed separately for each individual trial. These pre-stimulus EEG levels were then used to order the pre–post stimulus epochs (±500 ms), at nine central sites (F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4), according to the ascending level of band activity. ERPs were then derived from the upper and lower thirds of the sorted trials. The amplitudes and latencies of five components (P1, N1, P2, N2, and P3) were assessed for ERPs from the High/Low pre-stimulus EEG levels, as was Go reaction time. Across the components assessed, pre-stimulus EEG level in the delta and theta bands had no effect on ERP latencies. Pre-stimulus delta level had a significant effect on the amplitudes of all five components: positive component amplitudes were increased, and negative component amplitudes were reduced, for High compared to Low pre-stimulus delta. There were no interactions involving both pre-stimulus delta level and stimulus condition. Pre-stimulus theta level produced no main effects in any of the components, although Low activity was associated with a relative parietal enhancement in P2 and P3, a relative increase in right-central NoGo N1, and increases in NoGo N2 and Go P3. RT showed no effect of pre-stimulus EEG level for either band. The pre-stimulus level of slow wave EEG differentially modulated ERP amplitudes, yet failed to contribute to component latencies or behavioural response performance. Given its influence across the stimulus conditions for all components assessed, pre-stimulus delta activity might be considered a global determinant of the positivity of ERP amplitudes. In contrast, pre-stimulus theta level was predominantly associated with stimulus-specific modulations of the endogenous components, suggesting that pre-stimulus theta may to some extent determine processing-related outcomes. These findings are important as they not only provide insight into the mechanisms underlying ERP genesis, but might also facilitate the identification of EEG mechanisms that are deficient in clinical samples.
doi:10.1016/j.ijpsycho.2012.06.171
Pre-stimulus EEG amplitude modulation of the LPC in a dishabituation paradigm G.Z. Steiner, F.M. De Blasio, R.J. Barry Centre for Psychophysics, Psychophysiology, and Psychopharmacology, University of Wollongong, Wollongong, Australia Brain & Behaviour Research Institute, University of Wollongong, Wollongong, Australia School of Psychology, University of Wollongong, Wollongong, Australia It is established that pre-stimulus EEG activity contributes to ERP component measures; however, the strength of this relationship and its associated mechanisms remain unmapped. To date, research in this area has utilised paradigms in which the presentation of stimuli has been contingent upon the concurrent level of ongoing EEG activity, or has assessed average ERPs derived from trials with High/Low levels of pre-stimulus EEG band activity. The findings reported by this previous research have been largely conflicting, perhaps partly due to the wide range of paradigms employed. Our recent investigations have focused on systematically clarifying the pre-stimulus EEG/ERP relationship within an equiprobable auditory Go/NoGo paradigm for each EEG band and ERP component. In the present study, we attempt to clarify some of the inconsistencies in this previous research by assessing the consistency of one of the most widely explored ERP components, the Late Positive Complex (LPC), in a task that differs in terms of the following: stimulus onset asynchrony (SOA; long vs. short, and random vs. fixed); response (silent count vs. button press); and analysis (single-trial vs. average ERPs, and midline vs. 9 sites). Twenty university students completed two counterbalanced blocks of an auditory dishabituation paradigm. In one block (Count), participants were instructed to silently count the stimuli for later recall, and in the other block (No Count), participants were informed that there was no task in relation to the stimuli.
The stimulus sequence contained 10 homogeneous tones (standards), followed by a deviant tone of a different frequency, and succeeded by 2–4 standards, each with a random SOA (5–7 s). The raw EEG data were EOG corrected and band-pass filtered (0.1–30 Hz, 24 dB/oct). Only the midline data (Fz, Cz, and Pz) from the first 13 trials of each block were retained for analysis. A time–frequency analysis was conducted for each site, and for each trial the sum of the mean activity within the pre-stimulus (−500 to 0 ms) period was computed for the delta (0.5–3.5 Hz), theta (4–7.5 Hz), alpha (8–13 Hz), and beta (13.5–23.5 Hz) bands. The LPC component amplitudes and latencies were derived from the single-trial epochs (−100 to 700 ms). Across all subjects and sites, the within-subjects pre-stimulus EEG/LPC amplitude correlations revealed significant relationships in each of the EEG bands assessed, and in all instances these relationships were direct. The most variance in LPC amplitude was accounted for by alpha and theta, while delta accounted for the least. For LPC latency, inverse correlations were found for pre-stimulus EEG in delta, theta, and