Dichotic Listening and Language: Overview

Kenneth Hugdahl, University of Bergen, Bergen, Norway; and Haukeland University Hospital, Bergen, Norway
© 2015 Elsevier Ltd. All rights reserved.
International Encyclopedia of the Social & Behavioral Sciences, 2nd edition, Volume 6. http://dx.doi.org/10.1016/B978-0-08-097086-8.54030-6

Abstract

In this article, I provide a selective review of the history of dichotic listening and its use as an application in neuropsychology, both with regard to research and clinical work. The review traces the concept of presenting simple speech sounds, such as syllables or numbers, dichotically, from its original use by Broadbent in studies of attention in the early 1950s, to the pioneering work by Kimura a decade later in her studies of hemispheric asymmetry, and to the present-day use of the dichotic listening procedure in research and clinical practice. The article reviews studies on both healthy individuals and psychiatric and neurological populations. In addition to behavioral studies, dichotic listening has also been used in functional neuroimaging studies to reveal the underlying neuronal circuitry of language at the brain systems level.

Brief History of Dichotic Listening Research

The British psychologist Donald Broadbent is credited as having been the first to systematically use a dichotic setup, in his studies of how air traffic controllers managed to maintain attentional focus when they had to process flight bearings from several airplanes at the same time (Broadbent, 1956; see also Bryden, 1988; Kimura, 2011; Hugdahl, 2011 for historical overviews of dichotic listening research). Broadbent used a dichotic procedure with different auditory messages being sent to the right and left ear, requiring the subject to focus attention on either the left- or right-ear message, as a way to simulate the real-life situation facing the air traffic controllers. It should be noted, however, that although Broadbent used a dichotic setup, he did not systematically investigate whether there was an asymmetry in how subjects were able to process the messages in the right or left ear. In fact, to the extent that there was any interest in dichotic listening outside of an attentional framework, it concerned not asymmetry of the brain but rather implications for the understanding of short-term memory. Bryden (1988) writes that Donald Hebb had suggested that Broadbent's finding that subjects tend to first report the stimuli from one ear and then proceed to the other ear must involve some kind of short-term memory mechanism. The discovery that dichotic presentations of verbal stimuli produce an increased number of correct reports for items presented in the right ear is credited to Doreen Kimura, who in a seminal paper in the Canadian Journal of Psychology (1961a) found that patients with temporal-lobe pathology reported more items from the right ear than from the left ear. She followed up the 1961a patient findings with a study of right-handed, neurologically intact individuals published the same year (Kimura, 1961b; in fact the two papers appeared back-to-back in the same issue). Again the results showed more correctly reported items from the right ear than from the left ear. In a personal recollection of the pioneering 1961 studies, Kimura (2011) writes: "I knew from animal research that the crossed pathways from the ear to auditory cortex predominate over the uncrossed, and in fact might occlude the uncrossed input where there was overlap (Rosenzweig, 1951; Tunturi, 1946). I concluded that the right-ear effect was due to the fact that in people also, the crossed auditory pathways were more effective than the uncrossed.
This gave the right-ear input an advantage in accessing areas in the left hemisphere critical for speech perception" (p. 214). Figure 1 shows a drawing of the crossing of the auditory pathways and the underlying neuroanatomical basis for the right-ear effect (adapted from Kimura, 2011; originally published in Kimura, 1967). The original electrophysiology work by Rosenzweig (1951) and the anatomical model proposed by Kimura in 1967 to explain the right-ear effect were later corroborated in lesion studies (e.g., Sparks and Geschwind, 1968; Pollmann et al., 2002), with advanced electrophysiology (e.g., Brancucci et al., 2005), and finally with neuroimaging techniques (Hugdahl et al., 1999).

Figure 1 Schematic presentation of the auditory pathways and the neuroanatomy of the dichotic stimulus presentation as originally published in Kimura (1967). Reproduced from Kimura, D., 2011. From ear to brain. Brain and Cognition 76, 214–217, with permission from the publisher.

The study by Sparks and Geschwind (1968) was the first to also show the importance of the corpus callosum for the REA. They tested a patient with commissurotomy, in which the two hemispheres had been surgically separated by sectioning the corpus callosum. This patient showed an almost total REA and a corresponding left-ear extinction effect, which demonstrated the direct pathways for the right-ear stimulus and the indirect (callosal) pathways for the left-ear stimulus in the dichotic situation. This was later corroborated by Westerhausen and Hugdahl (2008) using the diffusion tensor imaging (DTI) technique. Others have, however, claimed that a neuroanatomical model does not take into account cognitive factors, e.g., attention, that may also contribute to the REA (e.g., Hiscock and Kinsbourne, 2011; see also below under top-down modulation of the REA). The right-ear effect that Kimura described was termed a 'right-ear advantage' by Shankweiler and Studdert-Kennedy (1967), later abbreviated REA (e.g., Bryden et al., 1983; Hugdahl and Andersson, 1984). The next researcher to take up the thread of the right-ear effect was M.P. Bryden, who suggested that the right-ear effect observed by Kimura could have been caused by an 'order effect', since subjects had been instructed to report the right-ear items first. He therefore counterbalanced the order of report, but the REA still prevailed despite controlling for any order effects. Bryden (1963) concluded that "The findings suggest that the auditory system is better organized for the perception of verbal material presented to the right ear" (Abstract, p. 103).

Figure 2 Results for the effect of working memory load on the right-ear advantage, as a function of increasing the number of letter pairs to be held in the working memory buffer. From Penner, I., Schlaefli, K., Opwis, K., Hugdahl, K., 2009. The role of working memory in dichotic-listening studies of auditory laterality. Journal of Clinical and Experimental Neuropsychology 31, 956–966, reproduced with permission from the publisher.

Asymmetry for Working Memory

It should be noted, however, that the original studies by Kimura and Bryden were not exclusively about laterality for speech perception, but also contained elements of working memory. This was evident from the stimulus paradigms used in these studies. In both the Kimura and Bryden original studies, series of 3, 4, or 5 pairs of digits were presented dichotically, i.e., one digit in the right ear and another digit simultaneously in the left ear. The instruction to the subject was to later recall which digits had been presented. Thus, the task was to first listen to 6, 8, or 10 digits in sequence and immediately afterward report as many digits as possible, while the experimenter noted how many of the reported digits had been presented in the right ear and how many had been presented in the left ear. This situation, however, requires the subject to keep the recalled but not yet reported items in the working memory buffer until their turn to be reported. Thus, the original studies also pointed to the fact that working memory for verbal stimuli interacted with the perception of the same stimuli to produce a laterality effect. Much later, Penner et al. (2009) explicitly studied the interaction of laterality for speech sound perception and working memory by having subjects listen to three, four, or five letter pairs, i.e., using the same working memory load as in the original studies, but with different stimuli (spoken letters rather than spoken digits). The results are shown in Figure 2, which reveals that as the working memory load increased from three to four stimulus pairs the REA also increased, while it leveled off when the working memory load increased further to five letter pairs. The results of the Penner et al. (2009) study thus followed the original working memory effect in dichotic listening reported by Bryden (1963) when controlling for the order of report from either the right or left ear.
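
As a concrete illustration of the scoring procedure described above, the sketch below tallies free-recall reports by ear of presentation for one trial of the digit paradigm. The function and variable names are hypothetical, and the sketch assumes, as in the typical design, that a given digit is not presented to both ears on the same trial.

```python
def score_trial_by_ear(right_ear, left_ear, reported):
    """Count how many reported digits had been presented to each ear.

    right_ear, left_ear: digits presented on this trial (e.g., 3-5 per ear).
    reported: digits the subject recalled, in any order.
    Assumes a digit does not occur in both ears on the same trial.
    """
    right_correct = sum(1 for d in reported if d in right_ear)
    left_correct = sum(1 for d in reported if d in left_ear)
    return right_correct, left_correct

# Example: a three-pair trial (6 digits in total), as in the original studies.
right_ear = [2, 7, 5]
left_ear = [9, 4, 1]
reported = [2, 9, 5, 4]   # the subject recalls four of the six digits
print(score_trial_by_ear(right_ear, left_ear, reported))  # -> (2, 2)
```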

The Consonant-Vowel Syllables Paradigm

In 1967, Donald Shankweiler and Michael Studdert-Kennedy at the Haskins Laboratories (see also Studdert-Kennedy and Shankweiler, 1970) asked at which level of speech processing the right-ear effect occurred, and whether the REA was modified by the phonetic features of the stimuli. To be able to study these questions empirically they introduced a new paradigm, the consonant-vowel (CV) syllables paradigm. This paradigm consisted of pairwise presentations of CV-syllables based on the six stop-consonants /b/, /d/, /g/, /p/, /t/, /k/ that were paired with the vowels /a/, /e/, /i/, /æ/, and /u/. With this paradigm, it was possible to study the relative contribution of the consonants and vowels separately, by analyzing the different phonetic contrasts made up of combinations of the consonants and vowels. The instruction to the subject was to verbally report which syllable he/she had perceived after each trial. Shankweiler and Studdert-Kennedy (1967) tested 15 right-handed members of the laboratory staff, and also included a test in which only the steady-state vowels were presented dichotically, a different vowel in each ear. A significant REA was observed only for the consonants, not for the vowels (later replicated by Hugdahl and Andersson, 1984), indicating that the REA may be more of a perceptual effect for the identification of the initial phonological information in a syllable. Shankweiler and Studdert-Kennedy (1967) summarized the strong laterality effect they observed for the right-ear items as follows: "This strongly suggests that left hemisphere dominance in speech perception operates at the level of speech sound structure, [and that] the effect can be demonstrated when only a single pair of
syllables is presented on each trial indicating that it pertains to the registration of the stimuli and not only their retention." (p. 62). Thus, the study by Shankweiler and Studdert-Kennedy (1967) established that the REA was not a memory-retention artifact, but a direct consequence of the phonetic features of the speech sound. Moreover, the effect was driven by the energy release of the initial consonant segment of the CV-syllable and was relatively insensitive to manipulation of the following vowel sound. The CV-syllables paradigm soon became very popular in research on hemispheric asymmetry, and a search through the PubMed and ISI Web of Knowledge databases shows that more than 2000 articles have used a CV approach in studies of auditory laterality. A paradigm that has been as popular as the CV-syllables paradigm is the fused rhymed words paradigm (Wexler and Halwes, 1983), which is similar to the CV-syllables paradigm except for the important distinction that the two sounds presented to the ears fuse into a coherent perceptual unit. A characteristic of the fused rhymed words paradigm is that although subjects subjectively report that they hear 'only one sound,' they more often report this to be the right-ear item. Before proceeding with other studies that followed in the wake of the pioneering studies by Kimura, Bryden, and Shankweiler and Studdert-Kennedy, two factors that have been related to hemispheric asymmetry and dichotic listening should be mentioned, namely effects of handedness and sex differences.

The Effect of Handedness

The CV distinction reported by Shankweiler and Studdert-Kennedy (1967; Studdert-Kennedy and Shankweiler, 1970) was later corroborated by Hugdahl and Andersson (1984), who contrasted the six stop-consonants with the vowels /a/, /i/, /u/. When analyzing the errors made for the different consonants and vowels, respectively, they found significant differences for the consonants, depending on the ear of presentation, but not for the vowels. Interestingly, Hugdahl and Andersson (1984) also included a group of left-handers, and found a reduction of the REA in the left-handed group, measured as relatively fewer errors for the left-ear input compared to the right-ear input. Hugdahl and Andersson (1984) concluded that the CV-syllables dichotic listening paradigm "allows for the separation of left- and right-hemisphere dominance for right- and left-handed subjects" (p. 141). Later research has, however, concluded that the CV-syllables dichotic listening paradigm does not discriminate right- from left-handers (see Sommer, 2010 for a recent review), which is to be expected considering that about 70% of left-handers have language in the left hemisphere, as do right-handers (Satz, 1973; Springer and Deutsch, 1989).

The Effect of Sex

Studdert-Kennedy and Shankweiler did not report on the sex of their subjects, only that they were right-handed, and Hugdahl and Andersson (1984) only included males. Thus, it was not clear from the early studies whether sex had an effect on performance on the CV-syllables paradigm and on hemispheric asymmetry in general. Although several early studies on hemispheric asymmetry and laterality, using different tasks and paradigms, both auditory and visual, had shown that males were superior on visuospatial tasks, while females were superior on verbal tasks (e.g., Harris, 1978; Levy, 1971), there were also studies failing to report significant sex differences (e.g., Hannay and Boyer, 1978; see also McGlone, 1980 for a review). The first systematic review of sex differences on tasks measuring hemispheric asymmetry (Voyer, 1996) showed modest overall effects of sex, accounting for a small proportion of the variance. When it comes to sex differences in performance on a dichotic listening task, the data essentially show the same picture: the sex difference is either marginal or absent. Two meta-analyses (Voyer, 2011; Sommer et al., 2008; see also Hiscock and MacKay, 1985) both showed that sex differences in dichotic listening performance accounted for only about 1% of the total variance, with no significant difference between the sexes for the REA. These findings were recently followed up by Hirnstein et al. (2012) in the largest sample of males and females so far (N = 1782), where they analyzed the ear advantage scores separately for males and females, including children, young adults, and older adults. The results showed that sex differences in dichotic listening are age-dependent, such that young male adults had a greater REA than young female adults. The results from the Hirnstein et al. study are shown in Figure 3.

Figure 3 Effects of sex on the right-ear advantage (REA) in dichotic listening based on a large sample of healthy individuals (N > 1780). The figure shows the frequency of males and females split for different age groups and for magnitude of the laterality index (LI). An LI of 0 indicates no difference between the right- and left-ear scores, an LI of +100 indicates a 100% REA, and an LI of −100 indicates a 100% left-ear advantage. From Hirnstein, M., Westerhausen, R., Korsnes, M.S., Hugdahl, K., 2012. Sex differences in language asymmetry are age-dependent and small: a large-scale, consonant-vowel dichotic listening study with behavioral and fMRI data. Cortex, Epub ahead of print, with permission from the publisher.
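
The laterality index (LI) plotted in Figure 3 expresses the ear asymmetry on a single scale from −100 to +100. The article does not spell out the formula, but a formulation consistent with the caption (0 = no ear difference, +100 = only right-ear reports, −100 = only left-ear reports), and commonly used in this literature, is LI = (RE − LE)/(RE + LE) × 100, where RE and LE are the numbers of correctly reported right- and left-ear items. A minimal sketch, assuming that definition:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Laterality index: +100 = only right-ear reports (maximal REA),
    -100 = only left-ear reports (maximal LEA), 0 = no ear difference.
    Assumes LI = (RE - LE) / (RE + LE) * 100; the exact formula is not
    given in the article."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports; LI is undefined")
    return 100.0 * (right_correct - left_correct) / total

print(laterality_index(18, 12))  # -> 20.0, a moderate right-ear advantage
```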

Handbook of Dichotic Listening, 1988

After the initial studies, which showed that the REA was a robust empirical phenomenon, replicable in many contexts and situations, there was a surge of studies applying dichotic listening to almost every phenomenon thinkable in research on hemispheric asymmetry. This produced a wealth of interesting findings, summarized in the volume Handbook of Dichotic Listening, edited by Kenneth Hugdahl and published by Wiley and Sons, UK, in 1988. These studies also created an optimism that dichotic listening could provide answers to outstanding questions regarding how the two hemispheres of the brain function, not least in the clinical domain. For example, it was believed that dichotic listening could replace invasive methods, like the Sodium Amytal test (Wada and Rasmussen, 1960), for the identification of the side of language in patients with epilepsy undergoing surgical treatment.

Clinical and Developmental Applications

The different chapters in the Handbook of Dichotic Listening (Hugdahl, 1988) showed that dichotic listening at that time had diversified into understanding not only the functioning of the two cerebral hemispheres with regard to laterality for speech and speech perception, but also applications to a wide range of disorders in which these functions would be expected to be compromised. This was particularly true for studies of children with various forms of language disorders, like dyslexia (Bakker and Kappers, 1988), learning and reading disability (Obrzut and Boliek, 1988), and cognitive development in children (Hiscock and Decter, 1988). There were also chapters on the application of dichotic listening to psychiatric and neurological disorders, like depression (Bruder, 1988) and schizophrenia (Nachshon, 1988), on Sodium Amytal studies of patients with epilepsy (Strauss, 1988), on split-brain patients (Sidtis, 1988), and on correlates with brain lesion findings (Eslinger and Damasio, 1988). A general conclusion from reading these applications was that dichotic listening was a valuable noninvasive measure of abnormalities in the laterality of speech. However, these early clinical findings also prompted a surge of interest in both hemispheric asymmetry in general and dichotic listening in particular, with very high expectations of what a noninvasive measure could achieve. When dichotic listening results did not completely match invasive data, e.g., when comparing the size of the REA with aphasia data, people started losing interest in the field, not considering that no neuropsychological measure or test ever matches lesion data at that level of precision.

Bottom-Up Modulation of the REA

An important question after the initial studies was the degree to which the REA could be, and was, modulated by factors other than laterality, and how hemispheric asymmetry interacted with those factors to modulate the REA. The modulation of the REA can be studied from two perspectives. One is modulation as a consequence of the characteristics of the subject being tested, e.g., whether children show different degrees of the REA than adults, or whether certain disorders affect the REA in a specific way, e.g., whether dyslexic individuals perform differently on a dichotic listening task than normal-reading individuals. The other perspective is modulation as a consequence of the stimulus characteristics, e.g., the voicing and timing of the syllable sounds that are presented to the subject. With regard to the first kind of modulation, Hugdahl et al. (2001) showed that the REA is present at an early age, actually before reading age, and is as such a feature of hemispheric asymmetry for language in a very basic sense. When it comes to studies of dyslexia and reading disorders, Obrzut and colleagues (see Obrzut and Boliek, 1988; Obrzut and Mahoney, 2011) used a CV-syllable dichotic listening paradigm to study the phonetic aspects of reading impairment in children with dyslexia and other language-related problems. This is in line with the reduction of the REA, or the tendency toward a left-ear advantage (LEA), in learning- and reading-disabled children that Obrzut and Boliek had already reported in 1986; see also Satz and Sparrow (1970) for an even earlier study of reading-disabled children. Such findings would support the notion that reading disability is related to impaired phonological processing and phonological awareness (Høien and Lundberg, 2000), or to a reduced ability for low-level auditory discrimination (cf. Tallal and Piercy, 1973). Another finding came from studies conducted by Bakker and colleagues in the Netherlands (e.g., Bakker, 1970; Bakker and Kappers, 1988), which showed that the development of the ear advantage and the REA was delayed in dyslexic children compared to normal-reading children. In still another series of studies, Cohen et al. (1992) found that the REA was modulated not only by reading impairment and dyslexia in general, but also that children with auditory, as compared to visual, dyslexia showed a greater reduction of the REA. A later study by Helland et al. (2007) found that the REA in dyslexic children was also modulated by the severity of the disorder, such that children
diagnosed with a reading disorder requiring special education training showed a greater reduction in the REA than children following regular classes. See Figure 4 for an example of how the need for special education training affects the REA. As can be seen in Figure 4, children with a dyslexia diagnosis who were in need of special education training showed a significant reduction of the REA compared to children for whom such training was not considered necessary. The REA may therefore have clinical value as a complementary diagnostic tool. All in all, studies of dyslexia and reading impairment have shown that children with reading impairment and dyslexia show an aberrant laterality, displayed as reduced phonetic discrimination and an abnormal REA.

Figure 4 Difference in direction and magnitude of the ear advantage in dichotic listening between a normal-reading control group (control), a dyslexia group with no further assessment or training (dyslexia 1), and a dyslexia group which was further referred by the local psychological school services to a specialized regional resource center (dyslexia 2). Small bars indicate SD. From Helland, T., Asbjørnsen, A., Hushovd, A.E., Hugdahl, K., 2007. Dichotic listening and school performance in dyslexia. Dyslexia 14, 42–53, reproduced with permission from the publisher.

With regard to modulations of the REA as a consequence of the stimulus characteristics, Berlin and colleagues reported a series of intriguing studies in the 1970s (summarized in Berlin, 1977) on the effects of amplitude (intensity) and phase (time) shifts of the dichotic signal in the right versus the left ear. Berlin manipulated the two signals of the dichotic pair by increasing the intensity of, e.g., the right-ear syllable relative to the left, and vice versa, and noted how performance on the task varied. He also varied the relative timing of the onset of the two signals, such that either the right or the left syllable trailed the other by a few ms, and noted how the time differences affected the REA. The results were that when the right-ear signal lagged behind the left-ear signal by 15–30 ms, the REA was increased, and vice versa when the left-ear signal lagged behind the right-ear signal, thus showing a kind of feed-forward effect such that the signal arriving later had a greater effect on the ear advantage. From these studies, it is clear that the REA is modified and modulated by both amplitude and phase shifts of the two syllables in the dichotic pair. The early studies by Berlin and colleagues were later followed up by Hugdahl and Westerhausen (e.g., Hugdahl et al., 2008; Westerhausen et al., 2010; Falkenberg et al., 2011), who gradually increased the intensity of the syllable in either the right or left ear in steps of 3 dB at a time (Hugdahl et al., 2008). The results, shown in Figure 5, revealed a linear increase in the magnitude of the REA when the right-ear signal intensity was increased, and a corresponding linear decrease when the left-ear signal intensity was increased. What is interesting from Figure 5 is that the REA shifted to an LEA when the intensity of the left-ear signal was increased 6 dB above the level of the right-ear signal. The REA could therefore be said to withstand a sound-pressure intensity difference of 6 dB, making it possible to express a cognitive concept (the REA) in physical terms (dB). In later experiments, the same research group has found that the REA withstands up to a 9 dB difference favoring the left-ear signal, depending on the context in which the test is conducted.

Figure 5 Effects of systematic variation of the intensity in one ear relative to the other ear in steps of 3 dB at a time. 0 on the x-axis indicates equal intensity for the right- and left-ear signals; +21 means an increase of 21 dB of the signal in the right ear, −21 means an increase of 21 dB of the signal in the left ear. From Hugdahl, K., Westerhausen, R., Alho, K., Medvedev, S., Hämäläinen, H., 2008. The effect of stimulus intensity on the right ear advantage in dichotic listening. Neuroscience Letters 431, 90–94, reproduced with permission from the publisher.

A third kind of stimulus manipulation is voice-onset-time (VOT), also called voicing, which means comparing syllable pairs with voiced (/ba/, /da/, /ga/) versus unvoiced (/pa/, /ta/, /ka/) syllables, corresponding to short (20 ms) versus long (70 ms) VOTs. The REA is sensitive to the VOT features of the dichotic pair, such that having an unvoiced syllable in the right ear and a voiced syllable in the left ear produces a stronger REA than having equally voiced syllables in the two ears. Interestingly, a voiced syllable in the right ear and an unvoiced syllable in the left ear produces an LEA in adults (Rimol et al., 2006). Thus, it is obvious that the REA is modulated by both amplitude and phase shifts of the signals making up the dichotic pair. Özgören et al. (2012) put the two features together to investigate the relative contribution of amplitude and phase shifts when the two features are in conflict, such as when increasing the intensity of the right-ear signal but having the left-ear signal presented 10 ms before the right-ear signal. Overall, the results of the Özgören et al. (2012) study showed that amplitude shifts favoring the right ear had a greater effect on the REA than corresponding phase shifts, while phase shifts favoring the left ear were found to have a greater effect on the LEA than corresponding amplitude shifts. In addition, it was found that phase shifts favoring the left ear had a greater effect on the LEA than corresponding phase shifts favoring the right ear had on the REA. The results of simultaneously manipulating the amplitude and phase of the syllables in the dichotic pair are shown in Figure 6. In contrast to the findings by Berlin and colleagues, Özgören et al. (2012) found that the REA increased for syllable pairs where the right-ear syllable had an earlier onset, and vice versa for the LEA for syllable pairs where the left-ear syllable had an earlier onset. Recall that Berlin et al. (reviewed in Berlin, 1977) found that the REA was increased for syllable pairs where the right-ear syllable trailed the left-ear syllable, and vice versa. The difference in results could be due to the fact that the subjects in the earlier studies were instructed to report both syllables in the dichotic pair, while Özgören et al. (2012) instructed their subjects to report only one syllable of the pair (the one perceived most clearly).

Figure 6 Effects on the right-ear advantage (REA) of simultaneously manipulating the intensity (amplitude) and time (phase) of signal arrival at the ear. L,L = phase favoring left ear, intensity favoring left ear; N,L = no intensity difference, phase favoring left ear; R,L = phase favoring right ear, intensity favoring left ear; etc. From Özgören, M., Bayazit, O., Oniz, A., Hugdahl, K., 2012. Amplitude and phase-shift effects on dichotic listening performance. International Journal of Audiology 51, 591–596, with permission from the publisher.
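
To make the stimulus-level manipulations concrete, the sketch below enumerates a parametric condition grid of the kind used in the intensity and timing studies reviewed above: interaural intensity offsets in 3 dB steps up to ±21 dB (as in Hugdahl et al., 2008) crossed with onset asynchronies in which one ear leads by a fixed lag (10 ms is used here, following the Özgören et al. description). The full factorial combination and the parameter names are illustrative assumptions, not the published designs.

```python
from itertools import product

# Interaural intensity offsets in dB: positive favors the right ear,
# negative favors the left ear, 0 = equal intensity (cf. Figure 5).
intensity_offsets_db = list(range(-21, 22, 3))

# Onset asynchrony in ms: positive = right-ear syllable leads,
# negative = left-ear syllable leads, 0 = simultaneous onset.
# The 10 ms lag follows the Özgören et al. (2012) description;
# treating it as a symmetric factor is an assumption.
onset_leads_ms = [-10, 0, 10]

# Hypothetical factorial grid of amplitude x phase conditions.
conditions = [
    {"intensity_offset_db": db, "right_ear_lead_ms": lag}
    for db, lag in product(intensity_offsets_db, onset_leads_ms)
]

print(len(conditions))   # 15 intensity levels x 3 lags = 45 conditions
print(conditions[0])     # {'intensity_offset_db': -21, 'right_ear_lead_ms': -10}
```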

Top-Down Modulation of the REA

A completely different kind of modulation of the REA comes from a top-down, or cognitive, perspective, e.g., when instructing the subject to focus attention on either the right- or left-ear stimulus of the dichotic pair (Bryden et al., 1983; Hugdahl and Andersson, 1986), or when a cue stimulus is presented in either the right or left ear a few hundred ms before the CV-syllables and the subject is instructed to report the syllable in the same ear as the cue was presented in (Mondor and Bryden, 1991). In both of these situations, the aim is to induce a top-down cognitive strategy to focus attention on one of the syllables, and then for the experimenter to record the effect this has on the REA. The two procedures differ, however, in that in the first example attention is shifted to one side for a block of trials, until the subject is instructed to shift attention to the other side. This was labeled the 'forced-attention' paradigm by Hugdahl and Andersson (1986). In the second example, attention is shifted to either the right or left side on a trial-by-trial basis, since the subject cannot know which side he/she is expected to shift attention to on a given trial until the cue stimulus (typically a brief tone) is given; see, e.g., Mondor and Bryden (1991). Top-down procedures go back to the use of shadowing approaches, where the subject is instructed to shadow a continuous message in one ear while another message is simultaneously presented in the other ear (Cherry, 1953).

The 'Forced-attention' Paradigm

In the following, I will describe studies using the 'forced-attention' paradigm as it was developed by Hugdahl and Andersson (1986), since this is one of the most widely used variants of top-down REA modulation. A detailed description of the 'forced-attention' paradigm is given in the Appendix at the end of this article for anyone interested in using the paradigm in empirical research, including instructions on how to obtain a copy of the paradigm. Top-down manipulations were introduced into asymmetry research as a result of laterality studies by Marcel Kinsbourne (e.g., 1970; see also Hiscock and Kinsbourne, 2011 for an updated account of the attentional model in research on auditory laterality) suggesting that the processing differences for language seen between the right and left hemispheres could be due to a default attentional bias to process events happening in one half-field of sensory space. Kinsbourne suggested that the REA may be caused by a kind of preactivation of the left, language-dominant, hemisphere in a language-processing situation, caused by an attentional bias to that side. Part of this suggestion came from studies of hemispace neglect, where patients with localized lesions to the right parietal lobe revealed a deficit in processing events occurring in the left hemispace (see Heilman and Valenstein, 1972; Heilman and Van den Abell, 1980; see also Heilman, 1995 for a review). It may be interesting to note that Oxbury et al., already in 1967 in a paper in Nature, suggested that the ear advantage in dichotic listening may be caused by an attentional bias toward the right ear, and that the effect is not due to hemisphere dominance. Satz (1968) wrote a response letter to Nature in which he presented new data and argued that Oxbury et al. (1967) were mistaken, and added: "… this conclusion is misleading, that it ignores central nervous system mechanisms which are probably associated with the asymmetry, and that the conclusion is inconsistent with experiments which have controlled for order-effects and have found a right ear superiority" (p. 277). Others have later (e.g., Pollmann, 2010) also pointed out that attention could not explain the findings in the 'forced-attention' paradigm introduced by Hugdahl and Andersson (1986), since in this situation the subject is also asked to pay attention to the nondominant left ear, which would not follow from an in-built bias toward one side of hemispace.

Instructing the subject to pay attention to either the right or the left syllable, and to report only from that side, nevertheless has profound effects on the ear advantage. As expected, reporting only from the right ear increases the REA, while reporting only from the left ear produces an LEA in healthy adult individuals. In the initial studies (Bryden et al., 1983; Hugdahl and Andersson, 1986) it was believed that instructing the subject to report from the right ear half of the time and from the left ear half of the time would control for any unwanted effects of attention. Thus, attention was seen as an artifact, confounding a true laterality effect. Asbjørnsen and Hugdahl (1995) and Hiscock et al. (1999) found, however, that the different results for right- and left-ear focus could be due to a more general cognitive factor, i.e., that attention was an integrated aspect of auditory laterality. At this time it was, however, still believed that the active factor in modulating the REA as a consequence of instructions about the side of report was attention. Later, both Hugdahl (e.g., Hugdahl et al., 2009) and Hiscock and Kinsbourne (e.g., 2011) suggested a two-stage model with bottom-up and top-down processing. Hiscock and Kinsbourne (2011) argued that top-down manipulations only help in providing information about the localization of signals, but not about the processing of the signal. Hugdahl, on the other hand, argued that the top-down manipulation, depending on which side is attended to, provides information about two fundamentally different modes of cognitive processing. Hugdahl et al. (2009) reviewed the dichotic listening literature on clinical patients, particularly patients with schizophrenia, and found it puzzling that these patients could not perform the condition with the instruction to focus attention on the left-ear syllable (labeled the 'forced-left' condition), while they performed normally when instructed to focus attention on the right-ear syllable (labeled the 'forced-right' condition). It was moreover clear that healthy individuals also struggled more with the 'forced-left' instruction condition than with the 'forced-right' instruction condition, although they could switch to an LEA, which the patients could not. Performance data for healthy individuals and patients with schizophrenia are shown in Figure 7.

Figure 7 Mean correct reports for the right- and left-ear signal in the three 'forced-attention' instruction conditions, nonforced (NF), forced-right (FR), and forced-left (FL), split for healthy individuals and schizophrenia patients.

From the schizophrenia and other clinical data, Hugdahl et al. (2009) describe the process that led them to the conclusion that the 'forced-right' and 'forced-left' instruction conditions reflect different cognitive processes, or different degrees of different cognitive processes:

"We now believe that instructions to focus attention on the right or left ear stimulus induces different degrees of cognitive conflict and a corresponding need for cognitive control strategies. We have below described the process that brings us to this conclusion. The puzzle occurred when interpreting the results in two previous studies from our laboratory (Hugdahl et al., 2003; Løberg et al., 1999) with the 'forced-attention' dichotic listening paradigm (Hugdahl and Andersson, 1986) where it was found that patients with schizophrenia could not modulate the REA when instructed to focus attention on and report only the left ear stimulus, while they were able to modulate the REA when instructed to focus on the right ear stimulus. Healthy control subjects could easily perform both tasks, that is, they increased the REA when instructed to focus attention on and report the right ear stimulus, and reduced the REA, shifting to a LEA when instructed to focus attention on the left ear stimulus. A clue to an explanation of the selective FL effect seen in the patients as an example of cognitive control may be the fMRI study by Thomsen et al. (2004; see also Hugdahl et al., 2000; Jäncke et al., 2001) where it was shown that healthy subjects uniquely showed activation in the left prefrontal and anterior cingulate cortex areas when the BOLD activation images acquired during the FL task were contrasted with images obtained during the FR task, while the reversed contrast showed no remaining significant activations. Considering that the prefrontal and anterior cingulate cortex have been linked to increased cognitive load, as seen in executive and other cognitively demanding tasks (Duncan and Owen, 2000), it could be argued that the FR and FL attention situations differ in the degree to which they reflect cognitive control" (pp. 3–4).

Brain Correlates of the REA

The underlying neuronal circuitry for the REA, which would also validate any theoretical model of the REA (e.g., Kimura, 1967; Pollmann et al., 2002), can be studied with functional neuroimaging methods like positron emission tomography (PET, using radioactive 15O) or functional magnetic resonance imaging (fMRI). The first PET imaging study with CV-syllables was published by Hugdahl et al. (1999), using the 15O-PET method. Subjects listened to the standard CV-syllables (see Appendix) while in the PET scanner and had to press a button placed on the chest whenever they detected a predetermined target syllable. The paradigm in the PET scanner thus deviated somewhat from the standard paradigm in that a target-detection procedure was used (see Clark et al., 1988 for a description of dichotic target-detection variants). As control stimuli, musical chords from three different instruments were presented, also dichotically. The results, shown in Figure 8, revealed significant bilateral activations in the superior temporal gyrus, with a significant leftward asymmetry when directly comparing the left and right sides. The corresponding activation for the musical chords showed a rightward asymmetry in the same region of the temporal lobe. The results thus confirmed previous behavioral results, and also the theoretical assumptions regarding the underlying neuronal pathways for the REA.

The findings by Hugdahl et al. (1999) were replicated by Van den Noort et al. (2008), who used fMRI, with superior spatial resolution, and in whose study the subject was instructed to give a verbal response after each syllable-pair presentation. The Van den Noort et al. (2008) study is thus so far the neuroimaging study that best resembles the standard behavioral paradigm, using verbal responses. The results replicated the basic asymmetry in the superior temporal gyrus reported in the Hugdahl et al. (1999) study, again confirming the specialization for processing a phonetic stimulus (the CV-syllable) in the left hemisphere. Van den Noort et al. (2008) also applied a conservative statistical procedure in that they used probability mapping statistics, showing only areas where at least 80% (or 60%) of all subjects were significantly activated in the same region of interest in the superior posterior region of the left temporal lobe. Although the study by Hugdahl et al. (1999) was the first functional neuroimaging study using a dynamic imaging method and CV-syllables, Coffey et al. (1989) had earlier used a blood flow technique with inhalation of a radioactively tagged noble gas, 133Xenon, which is distributed with the blood flow in the brain. The stimuli in the Coffey et al. (1989) study were, however, part of a pitch-discrimination task in which the subject had to discriminate between the high and low pitch of tones presented dichotically, e.g., a high-pitch tone in the right ear and a low-pitch tone in the left ear. Thus, the task in the Coffey et al. study was not really a study of dichotic listening and hemispheric asymmetry for language, but of auditory and pitch processing. The results showed an asymmetry effect in that activation was higher in the temporal lobe contralateral to the target stimulus of the dichotic pair.

Figure 8 Changes in blood flow in the left and right superior temporal gyrus in healthy individuals, measured with 15O-PET. Data from Hugdahl, K., Brønnick, K., Kyllingsbæk, S., Law, I., Gade, A., Paulson, O.B., 1999. Brain activation during dichotic presentations of consonant-vowel and musical instruments stimuli: a 15O-PET study. Neuropsychologia 37, 431–440, with permission from the publisher.

Summary and Conclusions

In this article, I have summarized some trends in the history of dichotic listening research, from the original studies by Kimura (1961a,b) to more recent studies using PET (Hugdahl et al., 1999) and fMRI (Van den Noort et al., 2008) to reveal the underlying neuronal circuitry. I have also shown how the REA is modulated and influenced by both bottom-up perceptual and top-down cognitive factors, and that this makes the dichotic listening paradigm a much more dynamic and wide-ranging measure for the study of complex perceptual-cognitive asymmetry interactions, with consequences for the understanding of cognitive deficits and impairments in clinical groups, like schizophrenia and brain lesion patients. I have also shown how dichotic listening may be an important vehicle for understanding language development and language-related disorders, like dyslexia and other forms of learning disability. It may be relevant in this context to recall the conclusion of the introductory paper in the 50-years anniversary special issue of the journal Brain and Cognition (Hugdahl, 2011), that "auditory laterality and dichotic listening will be an important field of study and method in neuropsychology and related fields of research and in the clinic … also in the next 50 years" (p. 213). Thus, despite unwarranted criticism that dichotic listening has outplayed its role in neuropsychology, recent advances in applications to the clinical domain, and in the introduction of top-down manipulations to study ever more complex brain-behavior interactions, show that dichotic listening is "still going and going and …" (Hugdahl, 2011, p. 211).

Appendix

In the following, the 'forced-attention' paradigm is described as it is used in our laboratory, for the reader who wants to implement the paradigm for his or her own research. The paradigm runs as an application on the E-prime programming platform (Psychology Software Tools, Inc., http://www.pstnet.com/eprime.cfm). A copy of the E-prime dichotic listening application and a manual describing how to use the paradigm and score the data can be obtained by sending an e-mail to Kenneth Hugdahl ([email protected]). The manual also contains normative data for comparisons, based on 1800 healthy subjects, aged 7–89 years, males and females, right- and left-handers.

The 'Forced-Attention' Dichotic Listening Paradigm

The paradigm consists of series of presentations of CV-syllables via headphones to the subject. The stimuli are the six stop-consonants /b/, /d/, /g/, /p/, /t/, /k/, which are paired with the vowel /a/ to form the six CV-syllables. The syllables are then paired with each other in all possible combinations, yielding 36 dichotic pairs of the type /ba/-/ga/, /ta/-/ka/, etc., including the six homonymic pairs. The stimuli are natural speech, spoken by a male voice. The homonymic pairs are not included in the statistical analysis, but serve as catch trials to ensure that the subject has understood the instructions. The maximum number of correct reports is thus 30 for each ear and instruction condition: nonforced (NF), with no attention instruction; forced-right (FR), with instruction to focus on the right ear; and forced-left (FL), with instruction to focus on the left ear. Mean stimulus duration is 350-400 ms and the intertrial interval is 4 s, during which the subject orally reports which syllable he/she perceived on the previous trial. The syllables are read through a microphone and digitized for later computer editing on a standard PC using state-of-the-art audio editing software (SWELL, Goldwave, CoolEdit). The syllables are recorded with a sampling rate of 44 000 Hz and an amplitude resolution of 16 bit. After digitization, each CV-pair is displayed on the PC screen and synchronized for simultaneous onset at the
first identifiable energy release in the consonant segment between the right and left channels. The stimuli are played to the subject using digital play-back equipment connected to high-performance headphones, with an intensity between 70 and 75 dB. The subject is told that he/she will hear repeated presentations of the six CV-syllables (ba, da, ga, pa, ta, ka), and that he/she should report after each trial which one of the six possible syllables he/she heard.
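
A minimal sketch of the stimulus set and scoring logic described above: it builds the 36 dichotic CV-pairs from the six syllables, marks the six homonymic catch pairs that are excluded from analysis, and tallies correct reports per ear for one instruction condition (NF, FR, or FL). Function and variable names are illustrative and are not part of the distributed E-prime application.

```python
from itertools import product

SYLLABLES = ["ba", "da", "ga", "pa", "ta", "ka"]

# All ordered combinations: 36 dichotic pairs, including 6 homonymic pairs.
pairs = [(right, left) for right, left in product(SYLLABLES, repeat=2)]
test_pairs = [(r, l) for r, l in pairs if r != l]   # 30 pairs scored per condition
catch_pairs = [(r, l) for r, l in pairs if r == l]  # 6 homonymic catch trials

def score_condition(trials, responses):
    """Tally correct right- and left-ear reports for one instruction
    condition (NF, FR, or FL). `trials` is a list of (right, left) pairs,
    `responses` the single syllable reported on each trial. Homonymic
    pairs are skipped, so the maximum score per ear and condition is 30."""
    right_correct = left_correct = 0
    for (right, left), response in zip(trials, responses):
        if right == left:          # catch trial, not analyzed
            continue
        if response == right:
            right_correct += 1
        elif response == left:
            left_correct += 1
    return right_correct, left_correct

print(len(pairs), len(test_pairs), len(catch_pairs))  # 36 30 6
```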

Acknowledgment

The present research was funded by an advanced grant from the European Research Council (ERC) to Kenneth Hugdahl.

See also: Functional Brain Imaging of Language Processes; Lateralization of Language as Demonstrated by Brain Imaging Procedures.

Bibliography

Asbjørnsen, A., Hugdahl, K., 1995. Attentional effects in dichotic listening. Brain and Language 49, 189–201.
Bakker, D.J., 1970. Ear-asymmetry with monaural stimulation: relations to lateral dominance and lateral awareness. Neuropsychologia 8, 103–117.
Bakker, D.J., Kappers, E.J., 1988. Dichotic listening and reading (dis)ability. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 513–526.
Berlin, C.I., 1977. Hemispheric asymmetry in auditory tasks. In: Harnad, S., Doty, R.W., Goldstein, L., Jaynes, J., Krauthamer, G. (Eds.), Lateralization in the Nervous System. Academic Press, New York, pp. 303–324.
Brancucci, A., Babiloni, C., Vecchio, F., Galderisi, S., Mucci, A., Tecchio, F., Romani, G.L., Rossini, P.M., 2005. Decrease of functional coupling between left and right auditory cortices during dichotic listening: an electroencephalography study. Neuroscience 136, 323–332.
Broadbent, D.E., 1956. Successive responses to simultaneous stimuli. Quarterly Journal of Experimental Psychology 8, 145–162.
Bruder, G.E., 1988. Dichotic listening in psychiatric patients. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 527–564.
Bryden, M.P., 1988. An overview of the dichotic listening procedure and its relation to cerebral organization. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 1–44.
Bryden, M.P., Munhall, K., Allard, F., 1983. Attentional biases and the right-ear effect in dichotic listening. Brain and Language 18, 236–248.
Bryden, M.P., 1963. Ear preference in auditory perception. Journal of Experimental Psychology 65, 103–105.
Cherry, E.C., 1953. Some experiments on the recognition of speech with one and two ears. Journal of the Acoustical Society of America 25, 975–979.
Clark, C.R., Geffen, L.B., Geffen, G., 1988. Invariant properties of auditory perceptual asymmetry assessed by dichotic listening. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 71–84.
Coffey, C.E., Bryden, M.P., Schroering, E.S., Wilson, W.H., Matthew, R.J., 1989. Regional cerebral blood flow correlates of a dichotic listening task. Neuropsychiatry 1, 46–52.
Cohen, M., Hynd, G., Hugdahl, K., 1992. Dichotic listening performance in subtypes of developmental dyslexia and a left temporal lobe brain tumour contrast group. Brain and Language 42, 187–202.
Duncan, J., Owen, A.M., 2000. Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences 10, 475–483.
Eslinger, P.J., Damasio, H., 1988. Anatomical correlates of paradoxic ear extinction. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 139–160.

Falkenberg, L.E., Specht, K., Westerhausen, R., 2011. Attention and cognitive control networks assessed in a dichotic listening fMRI study. Brain and Cognition 76, 276–285.
Hannay, H.J., Boyer, C.L., 1978. Sex differences in hemispheric asymmetry revisited. Perceptual and Motor Skills, 315–321.
Harris, L.J., 1978. Sex differences in spatial ability: possible environmental, genetic, and neurological factors. In: Kinsbourne, M. (Ed.), Asymmetrical Functions of the Brain. Cambridge University Press, Cambridge, UK, pp. 405–522.
Heilman, K.M., Valenstein, E., 1972. Auditory neglect in man. Archives of Neurology 22, 32–35.
Heilman, K.M., Van den Abell, T., 1980. Right hemisphere dominance for attention: the mechanism underlying hemispheric asymmetries of inattention (neglect). Neurology 30, 327–330.
Heilman, K.M., 1995. Attentional asymmetries. In: Davidson, R.J., Hugdahl, K. (Eds.), Brain Asymmetry. MIT Press, Cambridge, MA, pp. 217–234.
Helland, T., Asbjørnsen, A., Hushovd, A.E., Hugdahl, K., 2007. Dichotic listening and school performance in dyslexia. Dyslexia 14, 42–53.
Hirnstein, M., Westerhausen, R., Korsnes, M.S., Hugdahl, K., 2012. Sex differences in language asymmetry are age-dependent and small: a large-scale, consonant-vowel dichotic listening study with behavioral and fMRI data. Cortex. Epub ahead of print.
Hiscock, M., Kinsbourne, M., 2011. Attention and the right-ear advantage: what is the connection? Brain and Cognition 76, 263–275.
Hiscock, M., Decter, M.H., 1988. Dichotic listening in children. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. John Wiley and Sons, New York, NY, pp. 431–474.
Hiscock, M., Inch, R., Kinsbourne, M., 1999. Allocation of attention in dichotic listening: differential effects on detection and localization of signals. Neuropsychology 13, 404–414.
Hiscock, M., MacKay, M., 1985. The sex difference in dichotic listening: multiple negative findings. Neuropsychologia 23, 441–444.
Høien, T., Lundberg, I., 2000. Dyslexia: From Theory to Intervention. Kluwer Academic Publishers, Dordrecht, NL.
Hugdahl, K., 2011. Fifty years of dichotic listening research – still going and going and going… Brain and Cognition 76, 211–214.
Hugdahl, K., Andersson, B., 1984. A dichotic listening study of differences in cerebral organization in dextral and sinistral subjects. Cortex 20, 135–141.
Hugdahl, K., Andersson, L., 1986. The "forced-attention paradigm" in dichotic listening to CV-syllables: a comparison between adults and children. Cortex 22, 417–432.
Hugdahl, K., Brønnick, K., Kyllingsbæk, S., Law, I., Gade, A., Paulson, O.B., 1999. Brain activation during dichotic presentations of consonant-vowel and musical instruments stimuli: a 15O-PET study. Neuropsychologia 37, 431–440.
Hugdahl, K., Carlsson, G., Eichele, T., 2001. Age effects in dichotic listening to consonant-vowel syllables: interactions with attention. Developmental Neuropsychology 20, 449–457.
Hugdahl, K., Law, I., Kyllingsbæk, S., Brønnick, K., Gade, A., Paulson, O.B., 2000. Effects of attention on dichotic listening: an 15O-PET study. Human Brain Mapping 10, 87–97.
Hugdahl, K., Rund, B.R., Lund, A., Asbjørnsen, A., Egeland, J., Landrø, N.I., Roness, A., Stordal, K., Sundet, K., 2003. Attentional and executive dysfunctions in schizophrenia and depression: evidence from dichotic listening performance. Biological Psychiatry 53, 609–616.
Hugdahl, K., Westerhausen, R., Alho, K., Medvedev, S., Hämäläinen, H., 2008. The effect of stimulus intensity on the right ear advantage in dichotic listening. Neuroscience Letters 431, 90–94.
Hugdahl, K., Westerhausen, R., Alho, K., Medvedev, S., Laine, M., Hämäläinen, H., 2009. Attention and cognitive control: unfolding the dichotic listening story. Scandinavian Journal of Psychology 50, 11–22.
Hugdahl, K. (Ed.), 1988. Handbook of Dichotic Listening: Theory, Methods and Research. Wiley and Sons, Chichester, UK.
Jäncke, L., Buchanan, T.W., Lutz, K., Shah, N.J., 2001. Focused and nonfocused attention in verbal and emotional dichotic listening: an fMRI study. Brain and Language 78, 349–363.
Kimura, D., 1961a. Some effects of temporal-lobe damage on auditory perception. Canadian Journal of Psychology 15, 156–165.
Kimura, D., 1961b. Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology 15 (3), 166–171.
Kimura, D., 1967. Functional asymmetry of the brain in dichotic listening. Cortex 3, 163–178.
Kimura, D., 2011. From ear to brain. Brain and Cognition 76, 214–217.
Levy, J., 1971. Lateral specialization of the human brain: behavioral manifestation and possible evolutionary basis. In: Kirger, J.A. (Ed.), The Biology of Behavior. Oregon State University Press, Corvallis, pp. 159–180.
Løberg, E.M., Hugdahl, K., Green, M.F., 1999. Hemispheric asymmetry in schizophrenia: a "dual deficits" model. Biological Psychiatry 45, 76–81.
McGlone, J., 1980. Sex differences in human brain asymmetry: a critical survey. Behavioral and Brain Sciences 3, 215–227.
Mondor, T.A., Bryden, M.P., 1991. The influence of attention on the dichotic REA. Neuropsychologia 29, 1179–1190.
Nachshon, I., 1988. Dichotic listening models of cerebral deficit in schizophrenia. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 565–598.
Obrzut, J., Boliek, C., 1986. Lateralization characteristics in learning disabled children. Journal of Learning Disabilities 19, 308–314.
Obrzut, J., Mahoney, E.B., 2011. Use of dichotic listening techniques with learning disabilities. Brain and Cognition 76, 323–331.
Obrzut, J.E., Boliek, C.A., 1988. Dichotic listening: evidence from learning and reading disabled children. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 475–512.
Oxbury, S., Oxbury, J., Gardiner, J., 1967. Laterality effects in dichotic listening. Nature 214, 742–743.
Özgören, M., Bayazit, O., Oniz, A., Hugdahl, K., 2012. Amplitude and phase-shift effects on dichotic listening performance. International Journal of Audiology 51, 591–596.
Penner, I., Schlaefli, K., Opwis, K., Hugdahl, K., 2009. The role of working memory in dichotic-listening studies of auditory laterality. Journal of Clinical and Experimental Neuropsychology 31, 956–966.
Pollmann, S., 2010. A unified structural-attentional framework for dichotic listening. In: Hugdahl, K., Westerhausen, R. (Eds.), The Two Halves of the Brain. MIT Press, Cambridge, MA.
Pollmann, S., Maertens, M., von Cramon, D.Y., Lepsien, J., Hugdahl, K., 2002. Dichotic listening in patients with splenial and nonsplenial callosal lesions. Neuropsychology 16, 56–64.
Rimol, L.M., Eichele, T., Hugdahl, K., 2006. The effect of voice-onset-time on dichotic listening with consonant-vowel syllables. Neuropsychologia 44, 191–196.
Rosenzweig, M.R., 1951. Representations of the two ears at the auditory cortex. American Journal of Physiology 167, 147–214.
Satz, P., 1968. Laterality effects in dichotic listening. Nature 218, 277–278.
Satz, P., 1973. Left-handedness and early brain insult. Neuropsychologia 11, 115–117.
Satz, P., Sparrow, S.S., 1970. Specific developmental dyslexia: a theoretical formulation. In: Bakker, D.J., Satz, P. (Eds.), Specific Reading Disability: Advances in Theory and Method. Rotterdam University Press, Rotterdam, pp. 95–110.
Shankweiler, D., Studdert-Kennedy, M., 1967. Identification of consonants and vowels presented to left and right ears. Quarterly Journal of Experimental Psychology 19, 59–63.
Sidtis, J.J., 1988. Dichotic listening after commissurotomy. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 161–184.
Sommer, I.E., Aleman, A., Sommers, M., Boks, M., Kahn, S., 2008. Sex differences in handedness, asymmetry of the Planum Temporale and functional language lateralization. Brain Research 1206, 76–88.
Sommer, I.E.C., 2010. Sex differences in handedness, brain asymmetry, and language lateralization. In: Hugdahl, K., Westerhausen, R. (Eds.), The Two Halves of the Brain. MIT Press, Cambridge, MA, pp. 287–312.
Sparks, R., Geschwind, N., 1968. Dichotic listening in man after section of neocortical commissures. Cortex 4, 3–16.
Springer, S., Deutsch, G., 1989. Left Brain, Right Brain. Freeman Publishers, San Francisco, CA.
Strauss, E., 1988. Dichotic listening and sodium-amytal: functional and morphological aspects of hemispheric asymmetry. In: Hugdahl, K. (Ed.), Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley and Sons, Chichester, UK, pp. 117–139.
Studdert-Kennedy, M., Shankweiler, D., 1970. Hemispheric specialization for speech perception. Journal of the Acoustical Society of America 48, 579–594.
Tallal, P., Piercy, M., 1973. Developmental aphasia: impaired rate of non-verbal processing as a function of sensory modality. Neuropsychologia 11, 389–398.
Thomsen, T., Specht, K., Rimol, L.M., Hammar, Å., Nyttingnes, J., Ersland, L., Hugdahl, K., 2004. Brain localization of attentional control in different age groups by combining functional and structural MRI. Neuroimage 22, 912–919.
Tunturi, A.R., 1946. A study on the pathway from the medial geniculate body to the acoustic cortex in the dog. American Journal of Physiology 147, 311–319.
Van den Noort, M., Specht, K., Rimol, L.M., Ersland, L., Hugdahl, K., 2008. A new verbal reports fMRI dichotic listening paradigm for studies of hemispheric asymmetry. Neuroimage 40, 902–911.
Voyer, D., 1996. On the magnitude of laterality effects and sex differences in functional lateralities. Laterality 1, 51–83.
Voyer, D., 2011. Sex differences in dichotic listening. Brain and Cognition 76, 245–255.
Wada, J., Rasmussen, T., 1960. Intracarotid injection of sodium amytal for the lateralization of cerebral speech dominance – experimental and clinical observations. Journal of Neurosurgery 17, 266–282.
Westerhausen, R., Hugdahl, K., 2008. The corpus callosum in dichotic listening studies of hemispheric asymmetry: a review of clinical and experimental evidence. Neuroscience and Biobehavioral Reviews 32, 1044–1054.
Westerhausen, R., Moosmann, M., Alho, K., Belsby, S.O., Hämäläinen, H., Medvedev, S., Specht, K., Hugdahl, K., 2010. Identification of attention and cognitive control networks in a parametric auditory fMRI study. Neuropsychologia 48, 2075–2081.
Wexler, B.E., Halwes, T., 1983. Increasing the power of dichotic methods: the fused rhymed words test. Neuropsychologia 21, 59–66.