The processing of prosody: Evidence of interhemispheric specialization at the age of four

NeuroImage 34 (2007) 416–425

Isabell Wartenburger a,b,d,*, Jens Steinbrink a, Silke Telkemeyer a, Manuela Friedrich c, Angela D. Friederici c, and Hellmuth Obrig a

a Berlin NeuroImaging Center and Department of Neurology, Charité University Medicine Berlin, Campus Mitte, Berlin, Germany
b Department of Neurology II, Otto-von-Guericke University, Magdeburg, Germany
c Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
d Neuroscience Research Center Charité, Berlin, Germany

Received 19 May 2006; revised 22 August 2006; accepted 8 September 2006. Available online 20 October 2006.

Beyond their multiple functions in language comprehension and emotional shaping, prosodic cues play a pivotal role in the infant's amazingly rapid acquisition of language. However, the cortical correlates of prosodic processing are largely controversial, even in adults, and functional imaging data in children are sparse. We here use an approach that allows brain activations correlated with the perception and processing of sentence prosody to be determined experimentally during childhood. In 4-year-olds, we measured focal brain activation using near-infrared spectroscopy and demonstrate that processing prosody in isolation elicits a larger right fronto-temporal activation, whereas a larger left hemispheric activation is elicited by the perception of normal language with full linguistic content. As hypothesized by the dual-pathway model, the present data provide experimental evidence that in children specific language processes rely on interhemispheric specialization, with a left hemispheric dominance for processing segmental (i.e. phonological) and a right hemispheric dominance for processing suprasegmental (i.e. prosodic) information. Generally in accordance with the imaging data reported in adults, our finding underlines the notion that interhemispheric specialization is a continuous process during the development of language.

© 2006 Elsevier Inc. All rights reserved.

Abbreviations: BOLD, blood-oxygen-level-dependent; [deoxy-Hb], deoxygenated hemoglobin; df, degrees of freedom; EEG, electroencephalography; ERP, event-related potential; fMRI, functional magnetic resonance imaging; NIRS, near-infrared spectroscopy; [oxy-Hb], oxygenated hemoglobin.

* Corresponding author. Berlin NeuroImaging Center and Department of Neurology, Charité University Medicine Berlin, Campus Mitte, Charitéplatz 1, 10117 Berlin, Germany. Fax: +49 30 450560952. E-mail address: [email protected] (I. Wartenburger).

doi:10.1016/j.neuroimage.2006.09.009

Introduction

The lateralization of cortical systems serving language function is the best established and clinically most widely applied neuropsychological asymmetry of the human cortex. While in the left hemisphere the processing of syntactic, semantic, and phonological information converges in specific areas, the relevance of the right hemisphere for the prosodic processing of spoken language has been postulated, and a number of studies have provided evidence for a dynamic interaction between the hemispheres in response to regular speech of full segmental and suprasegmental content (Ross et al., 1997; Baum and Pell, 1999; Grimshaw et al., 2003; Friederici and Alter, 2004).

Beyond phonological, syntactic, and semantic content, prosodic suprasegmental information is required to determine the full meaning of a spoken sentence or phrase. Prosody is characterized by a specific intensity, pitch contour, and duration of spoken words, phrases, or sentences. Notably, prosodic cues can carry both linguistic and emotional information. As an example, pitch contour as a prosodic parameter can determine whether someone is asking a question or making a statement, but can also convey whether the speaker approves or disapproves of the statement uttered. The present study focuses on the cerebral basis of linguistic prosody, which may be considered more relevant for the acquisition of language due to its potential to chunk the auditory stream. For a review on emotional prosody, see e.g. Baum and Pell (1999). Prosodic cues, semantic information, and syntactic structures can interact and influence the interpretation of a sentence (e.g. Cutler et al., 1997; Steinhauer et al., 1999; Astesano et al., 2004; Friederici and Alter, 2004; Eckstein and Friederici, 2005; Magne et al., 2005; Frazier et al., 2006). Thus, the understanding of language requires decoding of both segmental (phonological) and suprasegmental (prosodic) information. As compared to the much more devastating loss of semantic and syntactic processing in the classical aphasias, a loss of prosodic abilities may be of lesser importance after a cerebral lesion.


However, it has been hypothesized that prosodic, suprasegmental cues may be of supreme relevance during language acquisition in infants (e.g. Gleitman and Wanner, 1982; Jusczyk, 1997). Infant language acquisition is thought to greatly rely on prosodic cues to segment the incoming auditory stream into linguistic units, thereby also facilitating the acquisition of syntactic structures and the recognition of words in sentences (e.g. Jusczyk et al., 1992; Jusczyk, 1999; Gout et al., 2004). In most languages, infant-directed speech contains an accentuated prosody such as exaggerated intonation, stretching of vowels, or longer pauses, all improving the distinctiveness of speech categories. This may serve to tune the perceptual mechanisms in specialized brain areas to language-specific requirements, aiding the remarkable speed with which language is acquired (Fernald et al., 1989; Jusczyk, 1997; Kuhl et al., 1997; Kuhl, 2004). The importance of prosodic cues for language acquisition is mirrored in the infant's early sensitivity to prosodic information, as was shown in behavioral as well as in electrophysiological studies (e.g. Mehler et al., 1988; Jusczyk et al., 1993, 1999; Friederici et al., 2002; Thiessen and Saffran, 2003; Weber et al., 2004; for an overview, see Weissenborn and Höhle, 2001; Kuhl, 2004; Friederici, 2005).

Despite its relevance, the functional–anatomical correlates of prosodic processing are controversial. The dynamic dual-pathway model of auditory language comprehension posits that syntax and semantics are predominantly processed by the left hemisphere, while prosody recruits a more dynamic network: in isolation, prosody is processed by the right hemisphere; however, the more linguistic the task or the stimulus content, the more the left hemisphere becomes engaged (Friederici and Alter, 2004). Based on lesion and functional imaging studies, the model suggests that the right hemisphere has a pivotal role in prosodic processing in adults; however, findings are inconsistent with respect to a concise functional–anatomical localization (Baum and Pell, 1999; Friederici and Alter, 2004). A widespread neural network processing linguistic prosody has been demonstrated by functional imaging studies in adults. Meyer and colleagues found a right hemispheric dominance when only suprasegmental information of a sentence is available in the input (Meyer et al., 2002, 2003). In another study, active processing of low-pass filtered speech material containing only the prosodic contour elicited greater right frontal brain activation in adult subjects when compared to normal speech (Plante et al., 2002). Likewise, the processing of high vs. low degrees of prosody in connected speech resulted in greater right hemispheric involvement (Hesling et al., 2005). Furthermore, studies in adults have demonstrated that the lateralization of prosodic processing depends on the functional relevance of prosody in the respective language studied. A left hemispheric dominance could thus be shown for native speakers of languages in which lexical information highly depends on prosodic processing, such as Mandarin Chinese, when compared to the same stimuli presented to English-speaking subjects (e.g. Gandour et al., 2004; Tong et al., 2005).

In children, functional imaging data on prosodic processing are sparse. Some studies, however, investigated basic issues concerning the left–right lateralization of language. In sleeping newborns, greater left hemispheric activation for infant-directed speech was shown by Pena et al. (2003).
Using optical imaging, a larger increase in cerebral blood volume was demonstrated over left temporal brain regions during forward compared to backward speech.


While the authors conclude that at this very early age the left hemisphere is already specialized for language processing, differences in acoustic properties and potentially also an attentional difference between the stimulus types must be considered. In fact, a functional magnetic resonance imaging (fMRI) study in 2- to 3-month-old infants demonstrated a robust left hemispheric activation for both types of stimuli, which may stress the importance of, e.g., the presence of fast transitions in the auditory material as an additional decisive feature beyond linguistic properties (Dehaene-Lambertz et al., 2002). The latter study furthermore reported a greater right frontal brain activation for forward as compared to backward speech in awake but not in sleeping infants. Both of these pioneering studies used a control stimulus that contained irregular phonemes and irregular prosody. Thus the effect of basic acoustic properties and of differential attention towards the natural as compared to the unnatural stimulus type must be taken into account (see e.g. Zatorre and Belin, 2001; Zatorre et al., 2002; Poeppel, 2003; Boemio et al., 2005; Schonwiesner et al., 2005). It thus remains unclear whether the results reflect lateralization due to linguistic or acoustic processing, since discrimination between acoustic properties of different languages has even been demonstrated in animals (e.g. Ramus et al., 2000). Another recent optical imaging study, in sleeping 3-month-old infants, demonstrated a larger right temporo-parietal response for normal when compared to flattened, non-prosodic sentences (Homae et al., 2006). These findings can be interpreted as first evidence of a specialization of the right hemisphere for the processing of prosody. However, the study did not investigate stimulus material that contained prosodic, i.e. suprasegmental, information in isolation, because segmental information was always present in the stimulus material used.

To further elucidate the development of interhemispheric language specialization, it is thus essential to find out whether prosody in isolation activates right hemispheric fronto-temporal language areas, as has been shown in adults (see above). Therefore, we here investigate how prosodic information is processed in children. Since Plante et al. (2006a) found a bilateral activation network with a right frontal dominance for prosody using fMRI during a sentence-prosody-matching task in children from 5 to 18 years of age, we studied the processing of pure prosodic information in 4-year-olds. This research is motivated by the hypothesis that suprasegmental information guides language acquisition, which makes it necessary to disentangle the neural basis of the specific processing of segmental and suprasegmental information. Electrophysiological (EEG) studies indicate that a number of phonological processes can indeed be differentiated during early language development: using odd-ball designs, an effective discrimination of phoneme duration and stress patterns by the age of 2–5 months has been shown (Friederici et al., 2002, 2004; Weber et al., 2004; Friederici, 2005). Aberrant patterns in the event-related potentials (ERPs) of infants at that age are moreover related to the risk for language impairments (Friedrich et al., 2004; Weber et al., 2005; see also Benasich et al., 2006).
Concerning the issue of suprasegmental processing, a slow positive shift (Closure Positive Shift) in response to the end of an intonational phrase, reflecting the processing of an intonational phrase boundary (Steinhauer et al., 1999), is present by the age of 8–9 months (Friederici, 2005; Pannekamp et al., 2006). Despite the potential of ERPs to separate different processing stages, source localization is limited, even with respect to assigning the underlying dipole to the left or right hemisphere. Therefore, we here use an optical imaging approach similar to the study by Homae et al. (2006) to examine the contribution of left and right hemispheric pathways to the perception of segmental and suprasegmental information during sentence processing.


Based on results of previous functional imaging studies in adults and children, we hypothesized a right hemispheric dominance for isolated prosodic processing and a left hemispheric dominance for the processing of normal speech. The study also intends to further validate the versatility of functional optical imaging, since its undemanding setup lends itself to studies in infants, thus allowing for longitudinal studies and the comparison with functional imaging data in adults.

Materials and methods

Subjects

Fifty-one healthy children were investigated (age 4.06 ± 0.07 years [mean ± SD], 30 boys; all were native and monolingual German speakers with normal hearing, no neurological disorders, and normal language development). We acquired informed consent from both parents, and the study protocol was approved by the local ethics committee (Charité, Berlin).

Material

Three kinds of sentences were presented. Normal sentences (child-directed speech) contained phonological, syntactic, semantic (segmental), and prosodic (suprasegmental) information. Sentences were spoken by a trained female speaker as described elsewhere (Pannekamp et al., 2005). Hummed stimuli conveyed only prosodic (suprasegmental) information. Their intonational contour equaled that of the Normal sentences, but they lacked phonological, semantic, and syntactic information as well as phonological variation. These stimuli without segmental information were literally hummed by the same trained female speaker to preserve the sounds' naturalness (Pannekamp, 2005; Pannekamp et al., 2005). Normal stimuli (mean pitch 231 ± 6 Hz, maximal intensity 85.8 ± 1.2 dB) and Hummed stimuli (mean pitch 232 ± 6 Hz, maximal intensity 85.9 ± 0.7 dB) differed neither in pitch (t = −1.0, p = 0.3) nor in maximal intensity (t = −0.7, p = 0.5). A slight difference in duration is due to the procedure of literally humming the sentences while conserving the 'naturalness' of the stimuli, which we consider essential especially in the young subjects (mean duration Normal 4.0 ± 0.3 s; mean duration Hummed 3.8 ± 0.2 s). Half of the sentences contained one, the other half two intonational phrase boundaries (compare Fig. 3). Intonational phrase boundaries are marked by a boundary tone with a changed pitch contour, a lengthening of the last syllable, and a following pause, and are therefore present in both the Normal and the Hummed condition (Pannekamp et al., 2005).

In addition, so-called Flat stimuli were presented, which contained neither segmental nor suprasegmental information but only the rhythmic features of the sentences. Duration and intensity were conserved, while no varying intonational, phonological, semantic, or syntactic information was available. To flatten the Hummed stimuli, a script was written and applied using the PRAAT software (www.fon.hum.uva.nl). In accordance with the speaker's individual fundamental frequency range, the intonation contour of all sentences was set to 200 Hz. Following the automatic manipulation of the fundamental frequency, a visual inspection of its contours was conducted. Whenever outliers of more than 10 Hz were obtained, these were manually corrected and adapted. Since the phonological content of the Hummed stimuli did not particularly vary, problems with fast transitions between voiced and unvoiced segments did not occur.
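For illustration, the automatic flattening step described above (setting the fundamental frequency contour of the hummed sentences to a constant 200 Hz) might look roughly as follows when scripted through the parselmouth Python interface to Praat. This is only a sketch under that assumption; the authors used a native Praat script, and the file names here are hypothetical. The manual correction of residual outliers is not included.

```python
# Sketch of the pitch-flattening step, assuming the parselmouth interface to Praat.
# File names and all parameters except the 200 Hz target are illustrative.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("hummed_sentence.wav")      # hypothetical input file
dur = call(snd, "Get total duration")

# Build a Praat Manipulation object (time step, pitch floor, pitch ceiling in Hz)
manipulation = call(snd, "To Manipulation", 0.01, 75, 400)

# Replace the extracted pitch tier by a single flat point at 200 Hz
pitch_tier = call(manipulation, "Extract pitch tier")
call(pitch_tier, "Remove points between", 0, dur)
call(pitch_tier, "Add point", dur / 2, 200.0)
call([pitch_tier, manipulation], "Replace pitch tier")

# Resynthesize the flattened stimulus; durations and the intensity envelope are preserved
flat = call(manipulation, "Get resynthesis (overlap-add)")
flat.save("flat_sentence.wav", "WAV")
```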

Hummed (68.6 ± 2.9 dB) and Flat (68.2 ± 2.6 dB) stimuli did not differ in mean intensity (t = 0.5, p = 0.6), but maximal intensity values differed between conditions because of the differential amounts of suprasegmental variation in the stimuli (maximal intensity Hummed: 85.9 ± 0.7 dB; Flat: 84.7 ± 0.7 dB; t = 8.3, p < 0.05). Because the pitch contour of the Flat stimuli does not vary, intonational phrase boundaries are no longer marked by movements of the fundamental frequency or by boundary tones. However, durational properties (segment length, pauses) were preserved.

The three conditions thus varied with respect to their linguistic content. To study the influence of combined segmental and suprasegmental vs. isolated suprasegmental information, brain activation in response to Normal vs. Hummed sentences (and vice versa) is compared. To assess the influence of isolated prosodic (suprasegmental) information, Hummed and Flat stimuli (and vice versa) are compared. All stimuli were normalized and adjusted in amplitude and were presented with the same loudness to all subjects. The sentences (duration 3.8 ± 0.26 s) were separated by variable inter-stimulus intervals ranging from 1 s to 12 s (mean 3.3 ± 2.1 s) and presented in pseudo-randomized order. Children sat on their parent's lap and listened to the sentences while watching a silent, slow video (unrelated to the sentences). The parent listened to music via headphones to avoid stimulus-related physiologic or behavioral reactions. Forty-eight stimuli were presented for each of the three conditions. The total duration of the experiment was 16 min, and the experiment was well tolerated by the participants.

Data acquisition and analysis

To investigate and compare cortical activation patterns in response to the three different stimuli, we measured cortical oxygenation changes by near-infrared spectroscopy (NIRS). This method relies on the spectroscopic determination of changes in hemoglobin concentrations in the cerebral cortex which result from increased regional cerebral blood flow (Obrig and Villringer, 2003). In brief, the relative transparency of biological tissue in the spectral region of 600–900 nm allows for spectroscopic assessment of the cerebral cortex through skull and scalp. Changes in light attenuation at different wavelengths largely depend on the concentration changes in oxygenated and deoxygenated hemoglobin ([oxy-Hb] and [deoxy-Hb]) in a sampling volume reaching down to the cerebral cortex. Physiologically, an increase in metabolic demand caused by neuronal signaling results in a disproportionately large increase in blood flow (Fox and Raichle, 1986; Logothetis and Wandell, 2004). Thus focal oxygenation and focal blood volume increase. It must be noted that, besides the increase in oxygenated hemoglobin ([oxy-Hb]↑), we therefore expect a decrease in deoxygenated hemoglobin ([deoxy-Hb]↓) in an activated area, since the latter is washed out faster due to the increased blood flow velocity. This washout of deoxy-Hb is also the major constituent of blood-oxygen-level-dependent (BOLD) contrast increases in fMRI measurements (Kleinschmidt et al., 1996).
While some NIRS studies solely report the changes in [oxy-Hb], we consider it mandatory to report on both hemoglobins to allow for direct comparison with related findings in BOLD-contrast fMRI.
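As a rough illustration of the conversion referred to further below, attenuation changes at the two wavelengths are turned into the two hemoglobin concentration changes by solving a small linear system per time point (the modified Beer–Lambert law). The extinction coefficients and the differential pathlength factor in this sketch are placeholder values chosen for illustration, not those used in the study.

```python
# Minimal sketch of a modified Beer-Lambert law conversion: attenuation changes at two
# wavelengths -> changes in [oxy-Hb] and [deoxy-Hb]. Extinction coefficients
# (1/(mM*cm)) and the differential pathlength factor (DPF) are illustrative placeholders.
import numpy as np

# rows: 690 nm, 830 nm; columns: oxy-Hb, deoxy-Hb
extinction = np.array([[0.35, 2.10],
                       [1.00, 0.78]])
source_detector_distance_cm = 3.0
dpf = 5.0  # assumed wavelength-independent here for simplicity

def mbll(delta_od_690, delta_od_830):
    """Convert attenuation changes (delta optical density time series) into
    concentration changes of oxy- and deoxy-hemoglobin (mM)."""
    delta_od = np.vstack([delta_od_690, delta_od_830])        # 2 x n_samples
    pathlength = source_detector_distance_cm * dpf
    # Solve  extinction @ [d_oxy, d_deoxy] * pathlength = delta_od  per sample
    return np.linalg.solve(extinction, delta_od) / pathlength  # 2 x n_samples
```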


The NIRS system used here (Omniat Tissue Oxymeter; ISS Inc., Champaign, IL, USA) consists of 4 light detectors and 8 emitters separated by an inter-probe distance of 3 cm. Due to scattering and light propagation in turbid media, NIRS samples from a banana-shaped volume connecting each emitter–detector pair. As illustrated in the bottom sketch of Fig. 2A, five separate channels were recorded over each hemisphere covering fronto-temporal regions (the exact probe positions are given by the following 10–20 system coordinates, the first position denoting the emitter: left hemisphere: (1) F7–FT7; (2) FC5–FT7; (3) T7–FT7; (4) T7–TP7; (5) P7–TP7; right hemisphere: (1) F8–FT8; (2) FC6–FT8; (3) T8–FT8; (4) T8–TP8; (5) P8–TP8). Attenuation changes measured at 690 nm and 830 nm are converted into changes in [oxy-Hb] and [deoxy-Hb] based on a modified Beer–Lambert law (Cope and Delpy, 1988). To guarantee optimal safety and comfort for the subjects, the fiber-optic bundles (emitter: 1 mm and detector: 3 mm in diameter) were integrated into a commercially available EEG cap (www.easycap.de/easycap/). NIRS data were continuously sampled at 10 Hz.

For analysis, we first band-pass filtered the data (0.04–0.5 Hz) to attenuate slow drifts and high-frequency noise, the latter mainly caused by physiological sources such as the heart beat. Movement artifacts, a major source of noise in children, were corrected by a semi-automated procedure which allows artifacts to be marked in single channels and the contaminated data segments to be bridged by linear interpolation. This procedure has proven advantageous since an artifact in a single channel can be attenuated without losing the complete trial. Next we calculated the beta-values for each probe position and condition. As illustrated in Fig. 1, this was achieved by correlating the data with a predictor generated by convolving the boxcar function of the stimulus design with the hemodynamic response function. The contrast between conditions and post hoc statistical testing were performed in analogy to ‘Statistical Parametric Mapping’, the standard tool of BOLD-contrast fMRI analysis (e.g. SPM2, www.fil.ion.ucl.ac.uk/spm/). Since time courses over the activated area are also of interest, we next de-convolved the time courses assessed over those channels in which a statistically significant change was seen (an example is given in Fig. 3). This de-convolution procedure is necessary since the hemodynamic response is sluggish and will not return to baseline until some 20 s after stimulus cessation. Thus our design, with inter-trial intervals ranging from 1 to 12 s, is a tradeoff between limiting the duration of the experiment and the requirements of the de-convolution procedure, which assumes linear superposition of the hemodynamic responses to rapidly successive stimuli (Wobst et al., 2001). It has been demonstrated that variable inter-stimulus intervals increase statistical power (Birn et al., 2002). The design also takes advantage of the fact that the stimulus material is necessarily an intermediate between a ‘single trial’ and a ‘block design’; each sentence is thus treated as a short ‘block’.
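The preprocessing described above, band-pass filtering between 0.04 and 0.5 Hz at the 10 Hz sampling rate and bridging marked movement artifacts by linear interpolation, might be sketched as follows; the boolean artifact mask stands in for the authors' semi-automated marking procedure.

```python
# Sketch of the preprocessing described above. The artifact mask replaces the
# semi-automated marking used by the authors.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # Hz, NIRS sampling rate

def bandpass(signal, low=0.04, high=0.5, fs=FS, order=3):
    """Zero-phase band-pass filter to suppress slow drifts and pulse-related noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, signal)

def interpolate_artifacts(signal, artifact_mask):
    """Replace samples flagged in artifact_mask (boolean array) by linear
    interpolation between the surrounding clean samples."""
    cleaned = signal.copy()
    t = np.arange(len(signal))
    cleaned[artifact_mask] = np.interp(t[artifact_mask],
                                       t[~artifact_mask],
                                       signal[~artifact_mask])
    return cleaned
```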
Since we performed three comparisons (Normal vs. Hummed, Hummed vs. Flat, and the between-hemisphere comparison), a correction for multiple comparisons (Bonferroni) would require a significance level of p < 0.017. Such a conservative correction does not change the major finding of the paper, i.e. the lateralization of Hummed vs. Normal stimuli and vice versa (see Table 2); we therefore report uncorrected values and indicate in Tables 1–4 which effects survive the correction. For the between-hemisphere comparison (Table 4), only the left hemispheric dominance for the processing of Normal speech survives such a conservative correction, while the difference between Flat and Hummed stimuli (Table 3) is lost.


Fig. 1. The flow chart illustrates the different steps of data processing and analysis: based on the stimulus design consisting of a pseudo-randomized succession of Normal, Hummed, and Flat stimuli, the design matrix (boxcar) was convolved with the hemodynamic response function. Thus predictors were generated for the different conditions; as examples, the predictors for all stimuli vs. rest (Pall), for Hummed (PHum), and for Normal sentences (PNorm) are shown. In a general linear model, these predictors were compared to the filtered data to yield beta-values and statistical significance (t-values as shown in Tables 1–4). Next, the time courses of the hemodynamic response were analyzed by de-convolution of the signals in those channels in which a significant response was seen (for an example see Fig. 3). The sketch at the bottom illustrates the underlying assumption of a linear superposition of the hemodynamic responses for rapid stimulus succession.
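The processing chain summarized in Fig. 1 (a boxcar design convolved with a hemodynamic response function, then a general linear model yielding beta- and t-values per channel) could be sketched along the following lines. The two-gamma response function and the helper names are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the predictor construction and GLM fit illustrated in Fig. 1.
# The canonical two-gamma HRF is a common approximation, not necessarily the exact
# response function used by the authors.
import numpy as np
from scipy.stats import gamma

FS = 10.0  # Hz

def hrf(duration_s=30.0, fs=FS):
    """Simple two-gamma hemodynamic response function."""
    t = np.arange(0, duration_s, 1.0 / fs)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def make_predictor(onsets_s, durations_s, n_samples, fs=FS):
    """Boxcar for one condition (each sentence treated as a short 'block'),
    convolved with the HRF."""
    boxcar = np.zeros(n_samples)
    for onset, dur in zip(onsets_s, durations_s):
        boxcar[int(onset * fs):int((onset + dur) * fs)] = 1.0
    return np.convolve(boxcar, hrf(fs=fs))[:n_samples]

def glm_tstat(y, predictors):
    """Fit one channel's filtered [oxy-Hb] or [deoxy-Hb] time course against the
    predictors plus a constant; return betas and their t-values."""
    X = np.column_stack(predictors + [np.ones(len(y))])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se
```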

The main effect of condition survives a conservative correction at left frontal and temporal regions as well as at right temporal and temporo-occipital regions, while the interaction of hemisphere by condition survives this correction at the temporal position only (see Table 1).

We simultaneously acquired 21-channel standard EEG to determine evoked potentials related to the intonational phrase boundaries (closure positive shift, as reported by Steinhauer et al., 1999; Steinhauer and Friederici, 2001). This ERP component has been shown to be present already in 8-month-old infants in response to prosodic phrase boundaries (Pannekamp et al., 2006). The evoked potential can be taken as additional evidence for the processing of prosodic information. In the present paper, we do not report on the electrophysiological findings, since the design of our study does not allow for a direct comparison between the electrophysiological and the vascular response, and such a comparison would unduly complicate the presentation of the major result of the study, which concerns the lateralization of prosody processing.


Table 1
Main effects of hemisphere, condition, and interaction of hemisphere by condition

Effect                    Hemisphere  Position  [oxy-Hb] F  [oxy-Hb] p  [deoxy-Hb] F  [deoxy-Hb] p  df
Main effect hemisphere    -           1          1.86       0.18        0.00          0.96          1
(left, right)             -           2          0.23       0.63        0.59          0.45          1
                          -           3          3.71       0.06        1.82          0.18          1
                          -           4          0.33       0.57        0.28          0.60          1
                          -           5          0.03       0.86        1.48          0.23          1
Main effect condition     left        1          1.13       0.33        1.64          0.20          2
(N, H, F)                 left        2          4.27       0.02*       0.41          0.67          2
                          left        3          4.40       0.01*       3.05          0.05*         2
                          left        4          8.26       0.00*       3.80          0.03*         2
                          left        5          1.01       0.37        0.69          0.50          2
                          right       1          4.17       0.02*       0.85          0.43          2
                          right       2          3.61       0.03*       4.08          0.02*         2
                          right       3          2.29       0.11        1.06          0.35          2
                          right       4          3.35       0.04*       4.79          0.01*         2
                          right       5          7.50       0.00*       0.30          0.74          2
Interaction hemisphere    -           1          3.53       0.03*       2.04          0.14          2
× condition               -           2          0.07       0.93        0.67          0.51          2
                          -           3          0.02       0.98        0.99          0.38          2
                          -           4          15.80      0.00*       7.25          0.00*         2
                          -           5          1.70       0.19        0.05          0.95          2

MANOVA; *p ≤ 0.05; df: degrees of freedom; Position: refers to (1) frontal, (2) medial–frontal, (3) fronto-temporal, (4) temporal, (5) temporo-occipital within the left and right hemisphere (compare sketch in Fig. 2A); N: Normal; H: Hummed; F: Flat. Note that for the main effect of condition only the left hemispheric positions 3 and 4 as well as the right hemispheric positions 4 and 5 survive a conservative correction for multiple comparisons (Bonferroni, p ≤ 0.017, required for 3 comparisons). Note that for the interaction of hemisphere by condition only the temporal position 4 survives the conservative correction.
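The hemisphere-by-condition statistics summarized in Table 1 could, for a single probe position, be approximated by a repeated-measures ANOVA on the per-subject beta values. This univariate sketch stands in for the MANOVA reported in the table, and the column and file names are hypothetical.

```python
# Sketch of a hemisphere-by-condition analysis on per-subject beta values for one
# probe position; a univariate repeated-measures ANOVA approximating the MANOVA
# reported in Table 1. The long-format columns are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# betas: one row per subject x hemisphere x condition, e.g.
#   subject  hemisphere  condition  beta
#   1        left        Normal     0.12
betas = pd.read_csv("betas_position4.csv")  # hypothetical file

model = AnovaRM(betas, depvar="beta", subject="subject",
                within=["hemisphere", "condition"]).fit()
print(model)  # F- and p-values for the main effects and the interaction
```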

Results

Generally, the experiment was well tolerated by the young subjects. We did not have to exclude any of the children in whom the experiment was completed.

The artifact rejection procedure yielded a minimum of 80% uncontaminated trials (rejection of 17.4 ± 3.0% across subjects). The rejected trials were balanced across conditions, thus not introducing a bias in signal-to-noise ratio in one specific condition.

Analysis of variance indicates no statistically significant main effect of hemisphere across all conditions; there is only a trend at the fronto-temporal position 3 (see Table 1). In line with our expectation, there is a statistically significant main effect of condition over left (positions 2, 3, 4) and right (positions 1, 2, 4, 5) fronto-temporal brain regions. Additionally, there is a statistically significant interaction of hemisphere by condition at frontal (position 1) and temporal (position 4) brain regions (see Table 1).

In line with our hypothesis, we found a significantly larger response to Normal speech compared to Hummed stimuli in three positions (positions 1, 3, 4) over the left fronto-temporal areas. In contrast, Hummed compared to Normal sentences elicited a larger response over the right hemisphere (positions 1, 2, 4) and in a left medial–frontal region (position 2) (Fig. 2A, Table 2). Moreover, we found greater activation during Hummed relative to Flat stimuli in the right medial–frontal cortex (position 2), but not in the left hemisphere (Fig. 2B, Table 3). This lateralization pattern was confirmed by an asymmetry in left frontal and temporal regions during Normal speech (left > right at positions 1 and 4) and in right fronto-temporal regions during processing of Hummed and Flat stimuli (right > left at positions 3 and 4) (compare Table 4).

An example of the time courses in [oxy-Hb] and [deoxy-Hb] indicating brain activation is given in Fig. 3. Similar to the response pattern reported in a number of previous studies, we find an increase in [oxy-Hb] and a decrease in [deoxy-Hb], which peak some 7 s after the stimulus and return to baseline over the following 10–15 s. The result is thus in line with the dynamics of the expected increase in blood flow and corresponds well to the well-documented dynamics of BOLD-contrast changes in an activated cortical area.

Table 2
Comparison of conditions: Normal vs. Hummed and vice versa

Hemisphere  Position  [oxy-Hb] t  [oxy-Hb] p  [deoxy-Hb] t  [deoxy-Hb] p  df  Significant difference**
Left        1         -0.34       0.74        -1.98         0.05*         50  N > H
Left        2         -2.73       0.01*        1.08         0.29          50  H > N
Left        3          2.57       0.01*       -1.53         0.13          50  N > H
Left        4          3.84       0.00*       -1.82         0.08          50  N > H
Left        5         -0.45       0.66         0.61         0.54          50
Right       1         -2.71       0.01*        0.49         0.63          50  H > N
Right       2         -2.1        0.04*        2.57         0.01*         50  H > N
Right       3          1.65       0.11         0.47         0.64          50
Right       4         -1.97       0.05*        3.57         0.00*         50  H > N
Right       5         -1.8        0.08         0.46         0.65          50

Two-tailed paired t-tests; *p ≤ 0.05; **statistically significant greater increase in [oxy-Hb] or statistically significant greater decrease in [deoxy-Hb] in the respective comparison of conditions; df: degrees of freedom; Position: refers to (1) frontal, (2) medial–frontal, (3) fronto-temporal, (4) temporal, (5) temporo-occipital within the left and right hemisphere (compare sketch in Fig. 2A); N: Normal; H: Hummed. Note that for a conservative correction for multiple comparisons a p < 0.017 is required. This will not change the general result but will result in 3 positions no longer reaching statistical significance.
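The condition contrasts in Tables 2 and 3 amount to channel-wise paired t-tests on the individual beta values across the 51 children; a minimal sketch follows (the array names are assumptions).

```python
# Channel-wise paired t-tests between conditions (as in Tables 2 and 3), with the
# Bonferroni threshold discussed in the Methods. Arrays are subjects x channels
# matrices of beta values; their names are assumptions.
import numpy as np
from scipy.stats import ttest_rel

def compare_conditions(betas_a, betas_b, alpha=0.05, n_comparisons=3):
    """Two-tailed paired t-test per channel; betas_* have shape (n_subjects, n_channels)."""
    t, p = ttest_rel(betas_a, betas_b, axis=0)
    bonferroni = alpha / n_comparisons  # p < 0.017 for the three comparisons
    return t, p, p < bonferroni

# Example: left-hemisphere [oxy-Hb] betas for Normal vs. Hummed
# t, p, survives = compare_conditions(betas_normal_left, betas_hummed_left)
```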


Fig. 2. Lateralization of language processes in 4-year-olds: statistical differences between conditions. (A) Normal vs. Hummed: the upper part gives color-coded locations where Normal shows a significantly larger response than Hummed, while the lower part indicates color-coded positions in which the relation is reversed. (B) The comparison between Hummed and Flat stimuli yielded only one statistically significant result, over the right hemisphere. Absolute t-values are displayed; red colors indicate statistically significant changes in [oxy-Hb] (upper row), blue those in [deoxy-Hb] (lower row). Absolute t-values ≥ 2 indicate a statistical significance of p ≤ 0.05 for the respective comparison of conditions (two-tailed t-test; all t-values are given in Tables 2 and 3). See text for a detailed description and the discussion of the Bonferroni correction. The small sketch shows the approximate five measurement positions over the right hemisphere, defined by all possible emitter–detector pairs (4 emitters, 2 detectors). The left hemisphere was sampled correspondingly. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

In an additional analysis, Normal and Hummed sentences with two intonational phrase boundaries were compared to the respective sentences with only one intonational phrase boundary. In Normal sentences, one-tailed paired t-statistics revealed greater activation for the processing of two as opposed to one phrase boundary at left fronto-temporal (position 3, [deoxy-Hb]: p = 0.05, t = −1.7, df (degrees of freedom) = 50) and right temporal (position 4, [oxy-Hb]: p = 0.04, t = 1.8, df = 50; [deoxy-Hb]: p = 0.04, t = −1.8, df = 50) brain regions. In Hummed sentences with two as compared to one phrase boundary, there was a trend towards greater activation of the right fronto-temporal cortex (position 3, [oxy-Hb]: p = 0.058, t = 1.6, df = 50). This may indicate that brain activation in these regions is modulated by the 'amount' of prosodic information available. Note, however, that these results only reach significance at uncorrected thresholds.

Boys and girls showed similar patterns of brain activation in response to the three sentence conditions; thus there were no gender differences. This finding is in line with a recent fMRI study showing no significant sex differences in the lateralization of different language tasks (including prosody processing) (Plante et al., 2006b).

In conclusion, our data provide strong evidence that suprasegmental prosodic information is predominantly processed in right fronto-temporal brain regions. Conversely, normal sentences with full linguistic information elicit a stronger lateralization of brain activation to left cortical areas.

Discussion

Our results in children are consistent with recent results in adults showing a right hemispheric dominance for processing prosodic information (e.g. Zatorre et al., 1992; Meyer et al., 2002; Gandour et al., 2004; Tong et al., 2005). For instance, Meyer and colleagues found greater left hemispheric responses to normal and syntactic speech as compared to prosodic speech devoid of syntactic and lexical–semantic information. Processing prosody in isolation resulted in greater right temporal and bilateral fronto-opercular activation (Meyer et al., 2002).


Table 3
Comparison of conditions: Hummed vs. Flat and vice versa

Hemisphere  Position  [oxy-Hb] t  [oxy-Hb] p  [deoxy-Hb] t  [deoxy-Hb] p  df  Significant difference**
Left        1          1.31       0.19         0.74         0.46          50
Left        2         -0.15       0.88        -0.33         0.74          50
Left        3          0.24       0.81        -1.03         0.31          50
Left        4         -0.35       0.73        -1.31         0.2           50
Left        5         -0.96       0.34         0.6          0.55          50
Right       1          0.54       0.59        -1.36         0.18          50
Right       2         -0.52       0.61        -2.27         0.03*         50  H > F
Right       3         -0.12       0.91        -1.46         0.15          50
Right       4         -0.27       0.78        -1.1          0.28          50
Right       5         -1.82       0.08         0.29         0.77          50

Two-tailed paired t-tests; *p ≤ 0.05; **statistically significant greater increase in [oxy-Hb] or statistically significant greater decrease in [deoxy-Hb] in the respective comparison of conditions; df: degrees of freedom; Position: refers to (1) frontal, (2) medial–frontal, (3) fronto-temporal, (4) temporal, (5) temporo-occipital within the left and right hemisphere (compare sketch in Fig. 2A); H: Hummed; F: Flat. Note that the comparison does not reach statistical significance if a conservative correction according to Bonferroni is applied (p ≤ 0.017, required for 3 comparisons).

The effect of language experience could be shown by a differential lateralization of prosodic processes in native Mandarin Chinese speakers as compared to native English speakers: while a stronger left hemispheric activation was seen in the former, the English subjects showed a larger activation in the right hemisphere while processing the exact same tasks, namely judgments on the tone and intonation of syllables (Gandour et al., 2004).

In infants and children, the still small number of studies and the differences between control conditions and methodologies may limit direct comparisons between the published functional imaging studies. Nonetheless, such studies are starting to delineate pivotal areas in a network dedicated to processing specific aspects of language and serving its acquisition. In neonates, greater left hemispheric activation when processing forward as compared to backward speech was demonstrated in a NIRS study (Pena et al., 2003), while normal versus flattened, non-prosodic speech resulted in a lateralization to right inferior parietal areas in a recent study in 3-month-old infants (Homae et al., 2006). Building on the superior spatial resolution of fMRI, a comparison between forward and backward speech yielded an asymmetry in favor of the left planum temporale for either stimulus compared to rest, while the direct comparison between forward and backward speech showed a difference in the left angular gyrus (Dehaene-Lambertz et al., 2002).

Besides potentially language-specific differences, attentional aspects must be considered, as that study reports a clearly differential activation over the right dorsolateral prefrontal cortex depending on whether the infant was asleep or not. This aspect is also stressed by Plante and co-workers, who additionally highlight the relevance of task demands. Besides bilateral activations, however, a greater right hemispheric activation was also seen during sentence-prosody-matching tasks in 5- to 18-year-old children (Plante et al., 2006a). Concerning the network engaged in language comprehension in infants, basic auditory processing and attentional factors must thus be taken into account beyond linguistic processing.

Lateralization during the processing of spoken language has been discussed from the broader perspective of lateralization in response to complex auditory stimuli (e.g. Zatorre and Belin, 2001; Zatorre et al., 2002; Poeppel, 2003; Boemio et al., 2005; Schonwiesner et al., 2005). Most notably, the analogy to music processing has been addressed due to its similarly specific role in uniquely human cognition. Processing prosody can be considered as processing the 'melody' of a sentence and may thus rely on similar neuronal correlates (e.g. Patel, 2003a; Trehub, 2003). Such a hypothesis is supported by findings of a positive transfer of musical training to both music and language processing in adults and in children (Chan et al., 1998; Ho et al., 2003; Magne et al., 2003, 2006; Schön et al., 2004; Jentschke et al., 2005a).

Table 4
Hemispheric differences within conditions

Normal
Position  df  [oxy-Hb] t  [oxy-Hb] p  [deoxy-Hb] t  [deoxy-Hb] p  Significant asymmetry**
1         50   2.65       0.01*       -0.76         0.23          L > R
2         50  -0.3        0.62        -1.04         0.15
3         50  -1.31       0.9          0.33         0.63
4         50   2.47       0.01*       -2.39         0.01*         L > R
5         50   1.08       0.14        -0.93         0.18

Hummed
Position  df  [oxy-Hb] t  [oxy-Hb] p  [deoxy-Hb] t  [deoxy-Hb] p  Significant asymmetry**
1         50   0.78       0.78         1.28         0.1
2         50  -0.21       0.42        -0.25         0.6
3         50  -1.7        0.05*        1.76         0.04*         R > L
4         50  -2.11       0.02*        0.65         0.26          R > L
5         50   0.13       0.55        -0.87         0.81

Flat
Position  df  [oxy-Hb] t  [oxy-Hb] p  [deoxy-Hb] t  [deoxy-Hb] p  Significant asymmetry**
1         50   0.13       0.55        -0.55         0.71
2         50  -0.71       0.24        -0.87         0.81
3         50  -1.87       0.03*        1.4          0.08          R > L
4         50  -2.05       0.02*        0.81         0.21          R > L
5         50  -0.76       0.22        -1.39         0.91

One-tailed paired t-tests; *p ≤ 0.05; **statistically significant greater increase in [oxy-Hb] or statistically significant greater decrease in [deoxy-Hb] in the respective comparison of hemispheres (within conditions); df: degrees of freedom; Position: refers to (1) frontal, (2) medial–frontal, (3) fronto-temporal, (4) temporal, (5) temporo-occipital within the left and right hemisphere (compare sketch in Fig. 2A); t: t-value; p: p-value; L: left hemisphere; R: right hemisphere. Note that only the left hemispheric dominance for Normal speech survives a conservative correction for multiple comparisons (Bonferroni, p ≤ 0.017, required for 3 comparisons).


Fig. 3. Time courses of the increase in [oxy-Hb] and decrease in [deoxy-Hb] indicating brain activation. The upper part shows the response to Normal sentences, averaged across all subjects and the most statistically significant positions over the left hemisphere and the corresponding smaller response over the right hemisphere. The lower graphs provide the same comparison for the Hummed stimuli over the right hemisphere and the respective left hemisphere. [oxy-Hb] is given in orange, [deoxy-Hb] in blue, error bars indicate standard errors of the mean. The middle part of the graph shows examples of the stimulus material. The example sentence (* ‘Lena promises | to help mom | and to buy drinks’) contains two phrase boundaries (indicated by |). (An example for a sentence with one phrase boundary: ‘Lena promises mom to run | and to buy drinks’.) The audio traces show the Normal sentence (dark red) and the corresponding Hummed sentence (dark blue). Note that the phrase boundaries are well conserved in the hummed version. For statistical comparisons of conditions see text, Tables 1–4, and Fig. 2.

Music seems to be processed by a bilateral neural network; however, many studies found stronger responses of right hemispheric brain regions (see e.g. Tervaniemi et al., 2000; Maess et al., 2001; Koelsch and Friederici, 2003; Tervaniemi and Hugdahl, 2003; Zatorre, 2003; Jentschke et al., 2005b; Koelsch and Siebel, 2005; Koelsch et al., 2005). Thus, processing music and processing the prosody of a language might in part be subserved by similar neural networks (see Koelsch et al., 2002; Patel, 2003a,b; Schmithorst, 2005; Limb, 2006).

In the broader perspective, it has been argued that the auditory cortices of the left and right hemisphere are specialized for temporal and spectral resolution, respectively (e.g. Zatorre et al., 2002; Schonwiesner et al., 2005). Other authors assume that left hemispheric regions are sensitive to faster temporal transitions whereas right hemispheric regions process slower temporal modulations. This more general specialization for different acoustic properties can be considered a basis for the lateralization of specific language processes, since suprasegmental prosodic modulations in language stimuli rely on slower transitions compared to the faster temporal transitions relevant for segmental cues (e.g. Jancke et al., 2002; Boemio et al., 2005).


Although in line with the general concept of a gradual but continuous specialization of neuronal networks subserving language competence, this basic hemispheric asymmetry has as yet not been demonstrated in children. Given that both Normal and Hummed sentences contain prosodic information, the differential activation of left and right fronto-temporal areas indicates that by the age of 4 years prosody is already processed in a dynamic fashion: the more linguistic the stimulus content, the more left hemispheric language-related brain regions are engaged. Isolated prosodic information is dominantly processed by the right hemisphere, as proposed by the dual-pathway model (Friederici and Alter, 2004). Here we have demonstrated that a functional specialization of the right hemisphere for prosodic processes is present by the age of 4 years, similar to the hemispheric specialization in adults. Certainly, further research will have to show how early the brain basis for the lateralization of prosodic processes is established.

Acknowledgments

Financial support of the BMBF (Berlin NeuroImaging Center and Bernstein Center for Computational Neuroscience Berlin), the EU (NEST 012778, EFRE 20002006 2/6), and the International Leibniz Program is gratefully acknowledged. We thank H. Benav, J. Haselow, P. Koch, and C. Ruegen for their help with data acquisition and analysis and H.R. Heekeren, A. Pannekamp, and U. Toepel for their comments on the manuscript.

References

Astesano, C., Besson, M., Alter, K., 2004. Brain potentials during semantic and prosodic processing in French. Brain Res. Cogn. Brain Res. 18, 172–184.
Baum, S.R., Pell, M.D., 1999. The neural bases of prosody: insights from lesion studies and neuroimaging. Aphasiology 13, 581–608.
Benasich, A.A., Choudhury, N., Friedman, J.T., Realpe-Bonilla, T., Chojnowska, C., Gou, Z., 2006. The infant as a prelinguistic model for language learning impairments: predicting from event-related potentials to behavior. Neuropsychologia 44, 396–411.
Birn, R.M., Cox, R.W., Bandettini, P.A., 2002. Detection versus estimation in event-related fMRI: choosing the optimal stimulus timing. NeuroImage 15, 252–264.
Boemio, A., Fromm, S., Braun, A., Poeppel, D., 2005. Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nat. Neurosci. 8, 389–395.
Chan, A.S., Ho, Y.C., Cheung, M.C., 1998. Music training improves verbal memory. Nature 396, 128.
Cope, M., Delpy, D.T., 1988. System for long-term measurement of cerebral blood and tissue oxygenation on newborn infants by near infra-red transillumination. Med. Biol. Eng. Comput. 26, 289–294.
Cutler, A., Dahan, D., van Donselaar, W., 1997. Prosody in the comprehension of spoken language: a literature review. Lang. Speech 40 (Pt 2), 141–201.
Dehaene-Lambertz, G., Dehaene, S., Hertz-Pannier, L., 2002. Functional neuroimaging of speech perception in infants. Science 298, 2013–2015.
Eckstein, K., Friederici, A.D., 2005. Late interaction of syntactic and prosodic processes in sentence comprehension as revealed by ERPs. Brain Res. Cogn. Brain Res. 25, 130–143.
Fernald, A., Taeschner, T., Dunn, J., Papousek, M., Boysson-Bardies, B., Fukui, I., 1989. A cross-language study of prosodic modifications in mothers’ and fathers’ speech to preverbal infants. J. Child Lang. 16, 477–501.

Fox, P.T., Raichle, M.E., 1986. Focal physiological uncoupling of cerebral blood flow and oxidative metabolism during somatosensory stimulation in human subjects. Proc. Natl. Acad. Sci. U. S. A. 83, 1140–1144.
Frazier, L., Carlson, K., Clifton Jr., C., 2006. Prosodic phrasing is central to language comprehension. Trends Cogn. Sci.
Friederici, A.D., 2005. Neurophysiological markers of early language acquisition: from syllables to sentences. Trends Cogn. Sci. 9, 481–488.
Friederici, A.D., Alter, K., 2004. Lateralization of auditory language functions: a dynamic dual pathway model. Brain Lang. 89, 267–276.
Friederici, A.D., Friedrich, M., Weber, C., 2002. Neural manifestation of cognitive and precognitive mismatch detection in early infancy. NeuroReport 13, 1251–1254.
Friedrich, M., Weber, C., Friederici, A.D., 2004. Electrophysiological evidence for delayed mismatch response in infants at-risk for specific language impairment. Psychophysiology 41, 772–782.
Gandour, J., Tong, Y., Wong, D., Talavage, T., Dzemidzic, M., Xu, Y., Li, X., Lowe, M., 2004. Hemispheric roles in the perception of speech prosody. NeuroImage 23, 344–357.
Gleitman, L.R., Wanner, E., 1982. Language acquisition: the state of the state of the art. In: Wanner, E., Gleitmann, L.R. (Eds.), Language Acquisition: The State of The Art. Cambridge Univ. Press, New York, pp. 3–48.
Gout, A., Christophe, A., Morgan, J., 2004. Phonological phrase boundaries constrain lexical access: II. Infant data. J. Mem. Cogn. 51, 548–567.
Grimshaw, G.M., Kwasny, K.M., Covell, E., Johnson, R.A., 2003. The dynamic nature of language lateralization: effects of lexical and prosodic factors. Neuropsychologia 41, 1008–1019.
Hesling, I., Clement, S., Bordessoules, M., Allard, M., 2005. Cerebral mechanisms of prosodic integration: evidence from connected speech. NeuroImage 24, 937–947.
Ho, Y.C., Cheung, M.C., Chan, A.S., 2003. Music training improves verbal but not visual memory: cross-sectional and longitudinal explorations in children. Neuropsychology 17, 439–450.
Homae, F., Watanabe, H., Nakano, T., Asakawa, K., Taga, G., 2006. The right hemisphere of sleeping infant perceives sentential prosody. Neurosci. Res. 54, 276–280.
Jancke, L., Wustenberg, T., Scheich, H., Heinze, H.J., 2002. Phonetic perception and the temporal cortex. NeuroImage 15, 733–746.
Jentschke, S., Koelsch, S., Friederici, A.D., 2005a. Investigating the relationship of music and language in children: influences of musical training and language impairment. Ann. N. Y. Acad. Sci. 1060, 231–242.
Jentschke, S., Koelsch, S., Friederici, A.D., 2005b. Investigating the relationship of music and language in children: influences of musical training and language impairment. Ann. N. Y. Acad. Sci. 1060, 231–242.
Jusczyk, P.W., 1997. The Discovery of Spoken Language. MIT Press, Cambridge, MA.
Jusczyk, P.W., 1999. How infants begin to extract words from speech. Trends Cogn. Sci. 3, 323–328.
Jusczyk, P.W., Pisoni, D.B., Mullennix, J., 1992. Some consequences of stimulus variability on speech processing by 2-month-old infants. Cognition 43, 253–291.
Jusczyk, P.W., Cutler, A., Redanz, N.J., 1993. Infants’ preference for the predominant stress patterns of English words. Child Dev. 64, 675–687.
Jusczyk, P.W., Houston, D.M., Newsome, M., 1999. The beginnings of word segmentation in English-learning infants. Cognit. Psychol. 39, 159–207.
Kleinschmidt, A., Obrig, H., Requardt, M., Merboldt, K.D., Dirnagl, U., Villringer, A., Frahm, J., 1996. Simultaneous recording of cerebral blood oxygenation changes during human brain activation by magnetic resonance imaging and near-infrared spectroscopy. J. Cereb. Blood Flow Metab. 16, 817–826.
Koelsch, S., Friederici, A.D., 2003. Toward the neural basis of processing structure in music. Comparative results of different neurophysiological investigation methods. Ann. N. Y. Acad. Sci. 999, 15–28.

Koelsch, S., Siebel, W.A., 2005. Towards a neural basis of music perception. Trends Cogn. Sci. 9, 578–584.
Koelsch, S., Gunter, T.C., Cramon, D.Y., Zysset, S., Lohmann, G., Friederici, A.D., 2002. Bach speaks: a cortical “language-network” serves the processing of music. NeuroImage 17, 956–966.
Koelsch, S., Gunter, T.C., Wittfoth, M., Sammler, D., 2005. Interaction between syntax processing in language and in music: an ERP study. J. Cogn. Neurosci. 17, 1565–1577.
Kuhl, P.K., 2004. Early language acquisition: cracking the speech code. Nat. Rev., Neurosci. 5, 831–843.
Kuhl, P.K., Andruski, J.E., Chistovich, I.A., Chistovich, L.A., Kozhevnikova, E.V., Ryskina, V.L., Stolyarova, E.I., Sundberg, U., Lacerda, F., 1997. Cross-language analysis of phonetic units in language addressed to infants. Science 277, 684–686.
Limb, C.J., 2006. Structural and functional neural correlates of music perception. Anat. Rec. A Discov. Mol. Cell Evol. Biol. 288, 435–446.
Logothetis, N.K., Wandell, B.A., 2004. Interpreting the BOLD signal. Annu. Rev. Physiol. 66, 735–769.
Maess, B., Koelsch, S., Gunter, T.C., Friederici, A.D., 2001. Musical syntax is processed in Broca’s area: an MEG study. Nat. Neurosci. 4, 540–545.
Magne, C., Schön, D., Besson, M., 2003. Prosodic and melodic processing in adults and children. Behavioral and electrophysiologic approaches. Ann. N. Y. Acad. Sci. 999, 461–476.
Magne, C., Astesano, C., Lacheret-Dujour, A., Morel, M., Alter, K., Besson, M., 2005. On-line processing of “pop-out” words in spoken French dialogues. J. Cogn. Neurosci. 17, 740–756.
Magne, C., Schön, D., Besson, M., 2006. Musician children detect pitch violations in both music and language better than nonmusician children: behavioral and electrophysiological approaches. J. Cogn. Neurosci. 18, 199–211.
Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J., Amiel-Tison, C., 1988. A precursor of language acquisition in young infants. Cognition 29, 143–178.
Meyer, M., Alter, K., Friederici, A.D., Lohmann, G., von Cramon, D.Y., 2002. FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences. Hum. Brain Mapp. 17, 73–88.
Meyer, M., Alter, K., Friederici, A.D., 2003. Functional MR imaging exposes differential brain responses to syntax and prosody during auditory sentence comprehension. J. Neurolinguist. 16, 277–300.
Obrig, H., Villringer, A., 2003. Beyond the visible-imaging the human brain with light. J. Cereb. Blood Flow Metab. 23, 1–18.
Pannekamp, A., 2005. Prosodische Informationsverarbeitung bei normalsprachlichem und deviantem Satzmaterial: Untersuchungen mit ereigniskorrelierten Hirnpotentialen. Max-Planck-Institut für Kognitions- und Neurowissenschaften, Leipzig.
Pannekamp, A., Toepel, U., Alter, K., Hahne, A., Friederici, A.D., 2005. Prosody-driven sentence processing: an event-related brain potential study. J. Cogn. Neurosci. 17, 407–421.
Pannekamp, A., Weber, C., Friederici, A.D., 2006. Prosodic processing at sentence level in infants. NeuroReport 17, 675–678.
Patel, A.D., 2003a. Language, music, syntax and the brain. Nat. Neurosci. 6, 674–681.
Patel, A.D., 2003b. Rhythm in language and music: parallels and differences. Ann. N. Y. Acad. Sci. 999, 140–143.
Pena, M., Maki, A., Kovacic, D., Dehaene-Lambertz, G., Koizumi, H., Bouquet, F., Mehler, J., 2003. Sounds and silence: an optical topography study of language recognition at birth. Proc. Natl. Acad. Sci. U. S. A. 100, 11702–11705.
Plante, E., Creusere, M., Sabin, C., 2002. Dissociating sentential prosody from sentence processing: activation interacts with task demands. NeuroImage 17, 401–410.
Plante, E., Holland, S.K., Schmithorst, V.J., 2006a. Prosodic processing by children: an fMRI study. Brain Lang. 97, 332–342.
Plante, E., Schmithorst, V.J., Holland, S.K., Byars, A.W., 2006b. Sex differences in the activation of language cortex during childhood. Neuropsychologia 44, 1210–1221.


Poeppel, D., 2003. The analysis of speech in different temporal integration windows: cerebral lateralization as ‘asymmetric sampling in time’. Speech Commun. 41, 245–255.
Ramus, F., Hauser, M.D., Miller, C., Morris, D., Mehler, J., 2000. Language discrimination by human newborns and by cotton-top tamarin monkeys. Science 288, 349–351.
Ross, E.D., Thompson, R.D., Yenkosky, J., 1997. Lateralization of affective prosody in brain and the callosal integration of hemispheric language functions. Brain Lang. 56, 27–54.
Schmithorst, V.J., 2005. Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing. NeuroImage 25, 444–451.
Schön, D., Magne, C., Besson, M., 2004. The music of speech: music training facilitates pitch processing in both music and language. Psychophysiology 41, 341–349.
Schonwiesner, M., Rubsamen, R., von Cramon, D.Y., 2005. Hemispheric asymmetry for spectral and temporal processing in the human anterolateral auditory belt cortex. Eur. J. Neurosci. 22, 1521–1528.
Steinhauer, K., Friederici, A.D., 2001. Prosodic boundaries, comma rules, and brain responses: the closure positive shift in ERPs as a universal marker for prosodic phrasing in listeners and readers. J. Psycholinguist. Res. 30, 267–295.
Steinhauer, K., Alter, K., Friederici, A.D., 1999. Brain potentials indicate immediate use of prosodic cues in natural speech processing. Nat. Neurosci. 2, 191–196.
Tervaniemi, M., Hugdahl, K., 2003. Lateralization of auditory-cortex functions. Brain Res. Brain Res. Rev. 43, 231–246.
Tervaniemi, M., Medvedev, S.V., Alho, K., Pakhomov, S.V., Roudas, M.S., Van Zuijen, T.L., Naatanen, R., 2000. Lateralized automatic auditory processing of phonetic versus musical information: a PET study. Hum. Brain Mapp. 10, 74–79.
Thiessen, E.D., Saffran, J.R., 2003. When cues collide: use of stress and statistical cues to word boundaries by 7- to 9-month-old infants. Dev. Psychol. 39, 706–716.
Tong, Y., Gandour, J., Talavage, T., Wong, D., Dzemidzic, M., Xu, Y., Li, X., Lowe, M., 2005. Neural circuitry underlying sentence-level linguistic prosody. NeuroImage 28, 417–428.
Trehub, S.E., 2003. The developmental origins of musicality. Nat. Neurosci. 6, 669–673.
Weber, C., Hahne, A., Friedrich, M., Friederici, A.D., 2004. Discrimination of word stress in early infant perception: electrophysiological evidence. Brain Res. Cogn. Brain Res. 18, 149–161.
Weber, C., Hahne, A., Friedrich, M., Friederici, A.D., 2005. Reduced stress pattern discrimination in 5-month-olds as a marker of risk for later language impairment: neurophysiological evidence. Brain Res. Cogn. Brain Res. 25, 180–187.
Weissenborn, J., Höhle, B., 2001. Approaches to Bootstrapping: Phonological, Lexical, Syntactic and Neurophysiological Aspects of Early Language Acquisition. John Benjamins Publishing Company, Amsterdam.
Wobst, P., Wenzel, R., Kohl, M., Obrig, H., Villringer, A., 2001. Linear aspects of changes in deoxygenated hemoglobin concentration and cytochrome oxidase oxidation during brain activation. NeuroImage 13, 520–530.
Zatorre, R.J., 2003. Music and the brain. Ann. N. Y. Acad. Sci. 999, 4–14.
Zatorre, R.J., Belin, P., 2001. Spectral and temporal processing in human auditory cortex. Cereb. Cortex 11, 946–953.
Zatorre, R.J., Evans, A.C., Meyer, E., Gjedde, A., 1992. Lateralization of phonetic and pitch discrimination in speech processing. Science 256, 846–849.
Zatorre, R.J., Belin, P., Penhune, V.B., 2002. Structure and function of auditory cortex: music and speech. Trends Cogn. Sci. 6, 37–46.