How the brain laughs

Behavioural Brain Research 182 (2007) 245–260

Research report

How the brain laughs: Comparative evidence from behavioral, electrophysiological and neuroimaging studies in human and monkey

Martin Meyer a,b,∗, Simon Baumann c,d, Dirk Wildgruber e, Kai Alter c

a Institute of Neuroradiology, Department of Medical Radiology, University Hospital of Zurich, Frauenklinikstrasse 10, CH-8091 Zurich, Switzerland
b Department of Neuropsychology, University of Zurich, Switzerland
c School of Neurology, Neurobiology and Psychiatry, Newcastle University, UK
d School of Psychology, Brain & Behavior, Newcastle University, UK
e Department of Psychiatry, University of Tuebingen, Germany

Received 30 January 2007; received in revised form 26 April 2007; accepted 30 April 2007. Available online 5 May 2007.

Abstract

Laughter is an affective nonspeech vocalization that is not restricted to humans, but can also be observed in other mammals, in particular monkeys and great apes. This observation makes laughter an interesting subject for brain research, as it allows us to learn more about parallels and differences between human and animal communication by studying the neural underpinnings of expressive and perceptive laughter. In the first part of this review we briefly sketch the acoustic structure of a bout of laughter and relate this to the differential anatomy of the larynx and the vocal tract in human and monkey. The subsequent part of the article introduces the present knowledge on behavioral and brain mechanisms of “laughter-like responses” and other affective vocalizations in monkeys and apes, before we describe the scant evidence on the cerebral organization of laughter provided by neuroimaging studies. Our review indicates that a densely intertwined network of auditory and (pre-) motor functions subserves perceptive and expressive aspects of human laughter. Even though there is a tendency in the present literature to suggest a rightward asymmetry of the cortical representation of laughter, there is no doubt that left cortical areas are also involved. In addition, subcortical areas, namely the amygdala, have also been identified as part of this network. Furthermore, we can conclude from our overview that research on the brain mechanisms of affective vocalizations in monkeys and great apes reports the recruitment of cortical and subcortical areas similar to those attributed to laughter in humans. Therefore, we propose the existence of equivalent brain representations of emotional tone in humans and great apes. This reasoning receives support from neuroethological models that describe laughter as a primal behavioral tool used by individuals – be they human or ape – to prompt other individuals of a peer group and to create a mirthful context for social interaction and communication.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Neuroimaging; Auditory cortex; Amygdala; Perisylvian region; Premotor cortex; Vocalization; Laughter

“And he began to laugh again, and that so heartily, that, though I did not see the joke as he did, I was again obliged to join him in his mirth.” Robert Louis Stevenson, 1883



Corresponding author. Tel.: +41 44 255 4965; fax: +41 44 255 4504. E-mail address: [email protected] (M. Meyer).

0166-4328/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.bbr.2007.04.023

1. Introduction

During the last two decades there has been a marked increase in the amount of neuroimaging work, and its advent has revolutionized the field of cognitive neuroscience. In combination with neuropsychological observation, these research approaches have significantly advanced our understanding of auditory functions and their organization in the human brain. In particular, a cornucopia of studies has provided abundant evidence on the cerebral circuits supporting the perception of propositional speech [29,31,40,60,61,80,89,105,107,127,135], the perception

of emotional tone in speech [10,23,73,150,151], the production of propositional speech [16,64,148], the perception of music [9,17,18,21,22,52,65,92,94], and nonspeech auditory signal perception, namely the processing of spectral and temporal cues available in all acoustic signals [14,19,20,46,51,67,81,100,104,118,126,128,139,140,145,157,158]. From these studies it is compelling to conclude that brain regions encompassing the entire length of the Sylvian fissure mediate the perception of acoustic and phonetic information available in all kinds of spoken language and music. However, there is increasing inconsistency pertaining to the function–structure relationship of the emotional sensations and experiences assumed to be elicited by music or by speech with an affective tone. While neuroimaging studies explicitly investigating affective music perception observed salient responses in cerebral structures well known to be involved in emotional processing (including, for example, the amygdala, insula and ventral striatum) [9,17,18], studies that sought the neural substrates of emotional speech melody (affective prosody) often failed to find significant involvement of areas that subserve emotional functions [73,74]. The assumption that participants do not experience strong emotions when presented with emotional speech in the context of an experimental setting may account for this finding. By contrast, it has been observed that listening to music, even in an adverse scanning environment, elicits responses in brain regions surmised to be recruited in reward/motivation, emotion and arousal [17]. This finding comes as no surprise, since pleasant or melancholic music is a powerful stimulus for evoking strong emotional experiences, reflected by changes in physiological parameters, namely the skin conductance response. One may now wonder whether this observation also holds true for other nonlinguistic emotional vocalizations, such as crying and laughing.
This review concentrates on the latter and introduces recent evidence provided by functional neuroimaging. Laughter also deserves particular attention because it has been considered a link between animal vocalization and human speech. Even though human speech and vocal communication in monkeys do not seem to originate from a common articulatory system [112], it is widely agreed that some vocalizations expressed by humans and monkeys share common qualities. This holds particularly true for laughter, which has recently been described as a powerful affective device in acoustic communication in monkey and man. In the following sections of this article we comprehensively describe different neurobiological facets of laughter in humans and monkeys and address the question of to what extent laughter should be considered a phenomenon that is indeed unique to humans, or whether it could be considered partly equivalent to affective vocalizations expressed by other mammals.

1.1. What makes laughter an interesting issue to study?

Laughter is not a phenomenon that is only observed in humans, since other mammals too exhibit vocalizations and sounds that are similar to human laughter [109,110]. For instance, Panksepp and co-workers reported that young rats emit 50-kHz chirps when they play and when they are “tickled”1 [72,97,98]. These chirps appear to correspond to joyful experience, even though it should be emphasized that they are not necessarily considered equivalent to human laughter. However, the recent evidence of “laughing rats” and other “laughing animals” [97] may indicate that the evolution of laughter preceded the evolution of spoken language, and that rat “laughter-type” responses and infantile human laughter share some common underlying features which may be of significance in research on emotions in the mammalian brain [99]. But even human laughter appears to be an ancestral, instinctual trait, as it is probably a congenital function [113]: four-month-old children begin to laugh, smile and chuckle, even if they were born blind and deaf. But does this mean that young children and other mammals have a sense of humor? We obviously have to distinguish between the cheerful and spontaneous vocalizations that are expressed by young children and animals during social interactions, and the structured iterative concatenation of monotonously sounding laugh pulses that accompanies adult conversation. Given these differences, it may be an interesting and insightful undertaking to learn more about the brain areas that serve the different forms of expressive and perceptive laughter, as it may help us better understand to what extent human laughter in its complexity resembles mammalian “laughter” as introduced above.

1 It should be mentioned here that the understanding of “tickling” in Panksepp’s studies describes manipulations mimicking the sequence of events in the rough-and-tumble play of juvenile rats, and not tickling in the human sense.

Human laughter is undoubtedly a multi-faceted phenomenon that has at least three components (motor, affect, and cognition) [39] and can be described at several different levels that are associated with cortical and subcortical circuits. Amongst these different aspects of human (and monkey) “laughter”, the relationship between the acoustic and motor loops that subserve the perception and production of laughter is so far relatively well understood; Sections 2.2 and 2.3 describe the relevant research more comprehensively. Other behavioral facets of human laughter, namely the feeling of joyfulness, derision, embarrassment, and the sense of humor, are evidently more difficult to examine. The latter is an internal concept that includes the detection, appreciation and interpretation of what one might find funny or amusing. This concept is related to emotive experience, and to date only a few neuroimaging studies have looked at this aspect of human cognition. Not surprisingly, a number of widely distributed, phylogenetically ancestral and recent brain regions, including the left temporo-occipito-parietal junction, the left prefrontal cortex, the left superior and middle temporal gyrus, the anterior cingulate cortex, the frontoinsular cortex, the right inferior frontal cortex, mesolimbic structures (nucleus accumbens) and the amygdala, were involved when participants were presented with funny cartoons [7,86,143,146]. Spoken jokes (puns) activated the medial ventral prefrontal cortex, which has been formerly attributed to reward processing


[49]. While the recruitment of the nucleus accumbens and amygdala implies that participants experienced humorous and mirthful emotions, the finding of “higher” cognitive circuits in association cortices does not allow for a specific conclusion. It is thus most plausible to assume that the detection and interpretation of humor requires the interplay of a network of brain regions, with the association cortices playing an important role when it comes to the evaluation and integration of socially relevant information, and to the preparation of socially appropriate behavioral reactions. However, we prefer to abstain from expansive speculation about the “humorous” brain, as this article is meant to summarize what is so far known about the brain mechanisms associated with expressive and perceptive laughter, regardless of whether it is triggered by a humorous or by a socially or physically contagious event. With respect to the latter, there is no doubt that laughter occurs uncoupled from humorous experience. Small children are not assumed to have any sense of humor, but they giggle and laugh, in particular as a reaction to tickling and in the context of social interactions. This does not hold true to the same extent for human adults, as they are better able to suppress and inhibit the impulse to laugh in situations where this is undesirable or socially inappropriate. Furthermore, adult humans (and monkeys) also consciously make use of laughter as a social strategy in peer interactions and communication, so as to influence and manipulate a perceiver’s mood, attitude, and arousal state [95], even (or especially) in serious situations that are not amusing. But before this aspect of laughter is explored more comprehensively in the review, we will take a moment to provide the reader with detailed information about the acoustic structure of laughter, specifically in contrast to speech.
We think it is beneficial to introduce this knowledge as it may help the reader better understand Sections 2.2 and 2.3, which then proceed with a description of the present understanding of the relationship between laughter and the mammalian brain.

1.2. Acoustic structure of laughter

Laughter can be described as a vocal act with special laryngeal functions [28] resulting in stereotyped acoustic features [111], and it relies on much the same anatomical structures as speech production. Fig. 1A shows its relatively simple acoustic structure, which is reflected by a strong temporal symmetry and a simple rhythmic laugh note structure. From a phonetic point of view, human laughter and spoken language share common features. Due to their common vocal tract filter characteristics, the acoustic properties of speech and laughter are comparable, and both originate from the same source of vocalization [108]. Akin to spoken language, laughter involves the respiratory system and the vocal apparatus, and is determined by changes in the vocal and sub-glottal respiratory tracts. Changes in the musculoskeletal system form the staccato-like syllabic vocalizations [28]. A sequence of laughter syllables can be combined into laugh bouts that extend over several seconds. These bursts of laughter start with a vocalized inhalation and consist of a series of short distinct laugh pulses with almost isochronous time intervals [87,111].

Fig. 1. (A) Acoustic waveform of human laughter. Equal distances between distinct laugh pulses represent a temporal structure that is rhythmically different from spoken natural language. (B) Wide-band spectrogram (analysis width (s) = 0.005) generated from human laughter shows the presence of (at least the first three) formants in the vocalic parts of voiced laughter and their dynamic transitions, i.e. rapid changes in frequency due to articulatory movements of the tongue, the jaw and other articulators.

The central sound features of laughter are an aspiration /h/ followed by a vowel, i.e. /a/. Only rarely is the vowel quality changed to /o/ or /i/ within a laugh utterance. The reiteration of syllables with their special segmental characteristics and the decreasing amplitude envelope of laugh pulses enable us to identify these acoustic signals as laughter. As in speech, laughter can vary with regard to its acoustic qualities, such as variance of fundamental frequency (F0) and F0-bandwidth [4,111]. Laughter has a rich harmonic structure [109], and spectral qualities such as formant frequencies can be evaluated [4]. Fig. 1B demonstrates the resonance characteristics of the human vocal tract, that is, frequency bands with higher amplitudes in periodic parts. Differences between the two types of vocalization, namely speech and laughter, can be observed with regard to the organization of syllables contained in speech and laughing utterances. The isochronous reiteration of stereotyped syllables in laughter is untypical of normal speech. Furthermore, syllable boundaries in speech are modified by co-articulation, whereas in laughter syllables end abruptly. Although the larynx is involved in both speech and laughter, laughter has a different F0-contour from speech. Furthermore, due to the differential temporal organization of laughter, which consists of repeated syllable-like laugh pulses, i.e. /ha/ or /he/, the harmonic dynamics in laughter are limited to syllabic nuclei, whereas in speech they are distributed over all voiced parts of


the signal. Thus, while laughter can easily be identified as a nonspeech vocalization due to its lack of interpretable linguistic content, propositional language is considerably more complex with regard to acoustics.

The developmental changes in the preverbal vocalization of infants occur at five preverbal stages [116]. Laughter normally first occurs at the age of 2–3 months, a period characterized by an increasing number of vocalizations with differing modulations. According to Scheiner et al. [116], this step is followed by three stages in which the capacity of the vocal tract is explored (4–6 months), the infant starts babbling (7–10 months) and finally experiments with complex vocal patterns (11–12 months). Laughter in infants has been described as a rhythmic tonal vocalization with alternating noisy and tonal parts separated by short pauses, consisting of two or more vocal elements. Although an infant possesses a repertoire of various vocalizations indicating positive or negative affect, their fundamental frequency and duration, including those of laughter, do not change during the first 12 months of an infant’s development. By the age of three, a child’s laughter has gained acoustic complexity and starts to approximate adult laughter, with a mean duration of 200–220 ms. The fundamental frequency of a child’s laughter is, however, much higher than that of adult laughter, ranging from 270 Hz for a dull comment to 2900 Hz for a squeal [90]. In terms of expressive laughter it should be emphasized that, unlike speech, laughter is the product of a single effector system producing changes in the vocal and sub-glottal respiratory tracts as well as in the musculoskeletal system [28]. Pitch alterations are not determined by changes in the rhythmic glottal valving, which appears relatively constant; instead, each individual raises and lowers the entire larynx in order to control the pitch of laughter, while the vocal folds undergo identical rhythmic abduction and adduction.
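The acoustic regularities described in this section — isochronous /ha/-like pulses with a decreasing amplitude envelope, and a fundamental frequency recoverable from the periodicity of the voiced parts — can be illustrated with a small signal-processing sketch. The following Python code is not from the studies reviewed here; all parameter values (pulse rate, F0, number of harmonics, decay factor) are illustrative assumptions, not measured data:

```python
import numpy as np

def synth_laugh_bout(n_pulses=5, f0=280.0, pulse_dur=0.12,
                     interval=0.22, decay=0.8, sr=16000):
    """Stylized bout of voiced laughter: isochronous /ha/-like
    pulses whose amplitude envelope decreases across the bout."""
    t = np.arange(int(pulse_dur * sr)) / sr
    # crude voiced source: fundamental plus a few weaker harmonics
    pulse = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 5))
    pulse *= np.hanning(len(pulse))              # smooth onset/offset
    bout = np.zeros(int(n_pulses * interval * sr))
    for i in range(n_pulses):
        start = int(i * interval * sr)           # equal spacing = isochrony
        bout[start:start + len(pulse)] += (decay ** i) * pulse
    return bout / np.abs(bout).max()

def estimate_f0(signal, sr, fmin=75.0, fmax=3000.0):
    """Textbook autocorrelation pitch estimate (not a production tracker)."""
    signal = signal - signal.mean()
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)      # plausible lag range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
bout = synth_laugh_bout(sr=sr)
f0_hat = estimate_f0(bout[:int(0.12 * sr)], sr)  # analyze the first pulse
```

Applying `estimate_f0` to real recordings would additionally require voicing detection and short-time windowing; the sketch merely demonstrates, on clean synthetic pulses, the kind of F0 measurement that underlies the figures cited above.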
With respect to monkey vocalizations that are equivalent to human laughter, one should note that the different shape of the vocal tract in humans and monkeys accounts for the differential acoustic quality of laughter. Only human voiced laughter has rich spectral qualities and harmonics. However, modulations of affective content in both monkeys and humans appear to be determined by the source, which makes it possible to compare such acoustic signals across species. In particular, some acoustic parameters, such as pitch and harmonic-to-noise ratio, encode affective information in both human and animal vocalization [95]. It should be mentioned, however, that the acoustic structure of “laughter” in animals and humans differs in frequency (50 kHz in rats), especially owing to the smaller size of the animal vocal tract.

2. Laughter and the brain

As mentioned above, human laughter is a multi-faceted vocalization that can be described at various levels. Beyond its complex social and communicative functions, research has also sought to reveal the cerebral mechanisms of human laughter. The meager evidence provided by neuropsychological and neuroimaging data draws an incomplete picture, which we will now describe in turn.

2.1. Human laughter: neuropsychological evidence

Not surprisingly, the major insight gained from studies of human laughter is the observation that no “laughing centre” exists in the brain. Rather, expressive and perceptive laughter appears to be mediated by a network of widely distributed cortical and subcortical circuits. A number of clinical reports have described a bizarre behavior that is rarely shown by neurological patients. This mood-independent and uncontrollable phenomenon is called “pathological laughter”; it is typically emitted by patients regardless of any internal or external emotional trigger and often occurs involuntarily and in socially inappropriate situations. Thus, Poeck and Hartje’s conclusion that pathological laughter should be considered not an affective disorder but a pathological disinhibition of the motor pathways that support human laughter appears most plausible [54,102]. Pathological laughter has also been termed “pseudobulbar affect” and has been described as occasionally accompanying multiple sclerosis and amyotrophic lateral sclerosis as well as extrapyramidal movement disorders [38,54]. To date, the specific cause triggering pathological laughter is not known, but it is assumed that this syndrome emerges from bilateral disruptions of the corticobulbar pathway, including the cerebral peduncles, or from damage to the tracts running from the primary motor cortex to the internal capsule [93]. Another interesting observation was made perisurgically by Fried et al., who incidentally discovered that electrocortical stimulation of a patient’s supplementary motor area instantaneously elicited involuntary and fierce bouts of laughter [39]. The latter case study also implies that motor areas are a substantial part of the complex and widely branched human brain system that governs laughter. The tight and evident relationship between expressive human laughter and motor systems has also been outlined in a recent review paper by Wild et al.
who propose that two pathways are involved in the expression of laughter [147]. According to their suggestion, expressive laughter is associated with an involuntary, emotionally driven system involving the amygdala as well as thalamic, hypothalamic and subthalamic areas, and with a voluntary system originating from the (pre-) motor/frontal opercular areas and leading through the motor cortex to the ventral brainstem. Notably, the data obtained from the neuroimaging studies we address in the following subsection indicate that parts of both the involuntary and the voluntary system also support the perception of human laughter.

2.2. Human laughter: neuroimaging evidence

Only a few brain imaging studies have so far sought the neural substrates of perceptive human laughter. Owing to the well-known susceptibility of functional magnetic resonance imaging to signal artifacts caused by head movements, expressive laughter has not yet become a topic of particular interest in imaging research. A pioneering study on human vocalization by Belin et al. presented participants with numerous different vocal speech and nonspeech stimuli, among them human laughter [13]. Hearing vocal sounds, as compared to energy-matched nonvocal sounds, preferentially activated bilateral portions of the nonprimary auditory cortex, with the entire right superior temporal sulcus (STS) exhibiting the strongest responses. Even though Belin et al. did not analyze selective brain responses to laughter, this study is of great interest as it demonstrated the existence of cortical regions in the bilateral, but predominantly right, STS that preferentially responded to frequency structure in human vocalizations. The neural circuits that respond to the perception of human laughter were explicitly investigated in three brain imaging studies [85,114,115] that, however, had different foci. Sander and Scheich used functional MR imaging to investigate the extent to which cortical and subcortical brain regions respond to emotional vocalizations, namely laughing and crying. In a first study they presented these socially and emotionally relevant stimuli to healthy individuals who had to perform different tasks (passive listening, self-induction of emotional states, emotionally irrelevant pitch-shift discrimination) [114]. Based on the present evidence about the neural representation of nonverbal affective vocalization [101] and emotional prosody in spoken language, the authors proposed an involvement of the amygdala and a rightward functional asymmetry in the auditory cortex. Regardless of the task, the authors reported a bilateral involvement of the amygdala in the perception of laughter. The study also revealed a bilateral but strongly right-dominant recruitment of primary and surrounding auditory association cortices, irrespective of the subjects’ behavioral performance. A deviant stimulus-dependent pattern of activation was found for the insula, which responded more strongly to crying. As for the auditory cortex and amygdala, the authors did not discover a significant task-dependent fMRI signal change in the insula.
In a follow-up fMRI study, Sander and Scheich presented healthy individuals with human laughing and crying in either a natural or a time-reversed version [115]. Akin to the preceding study, participants had to perform a pitch-shift discrimination to direct their attention to stimuli without being involved in an explicit emotional task. Also in analogy with the preceding study is the finding that the perception of laughter involved the amygdala, the insula and auditory regions residing in the supratemporal plane. More specifically, the amygdala was activated when hemodynamic responses collected during the perception of laughter were compared to those recorded during a silent control condition. When laughter played “forward” was compared to time-reversed laughter, no significant engagement of the amygdala was found. A similar pattern is reported for the insula with the right insular cortex responding more strongly when the “laughter” condition was contrasted with silent control. The analysis of “natural laughter” versus “time-reversed laughter” yielded only a few activated voxels in the insula. Even though the authors emphasize the important role of the insula in emotional processing, it might be worth noting that other reports have related the insula to audio-motor functions, e.g. articulation [1,154], or sound–action associations [88]. We will take up the issue of a potential intertwining between auditory and motor functions more elaborately at the end of this section.


With respect to all sections of the auditory cortex (TA, T1, T2, T3)2, the authors report robust, bilaterally equal responses, with the posterior compartments showing the strongest recruitment. For the contrast “natural laughter” versus “time-reversed laughter”, a strongly leftward asymmetric pattern of functional activation is reported. Starting from the outcome of their studies, the authors propose that human laughter is first and foremost supported by an ensemble consisting of the amygdala, the insula, and several segments of the supratemporal plane harboring the auditory cortex. These regions cooperate in handling the acoustic and emotional aspects of human laughter, a view additionally supported by the finding of anatomical connections tightly linking these nodes of the network [3,5,156]. Another fMRI study put particular emphasis on the phonetic traits of human laughter and did not seek to investigate the emotional aspects of laughing [85]. This study instead compared short laugh bouts recorded by male and female actors to brief spoken sentences and nonvocal sounds. More specifically, it scrutinized the extent to which the neural circuits that are preferentially driven by human laughter are distinct from, or overlap with, other neural networks that are more strongly involved in the perception of human spoken sentences or nonvocal sounds. Participants had to respond explicitly each time they heard an auditory target (“lalala”), which was randomly interspersed among the other stimuli, which required no behavioral response. Primarily, the study found that an ensemble of peri-auditory, (pre-) motor and primary somatosensory regions responded to human laughter, with regions in the right hemisphere more substantially engaged. Fig. 2 shows a selection of areas in the perisylvian, (pre-) motor and somatosensory regions which showed stronger hemodynamic responses either to spoken sentences or to human laughter.
The latter evoked more salient activation in the right caudomedial portion of the primary auditory cortex, in the right posterior superior temporal gyrus (STG), namely the planum temporale, in the right subcentral gyrus, in the right ventral postcentral gyrus and in the bilateral Rolandic operculum. When compared to the perception of nonvocal sounds, hearing laughter also produced a stronger signal increase in the right primary auditory cortex and in auditory association areas that directly abut the auditory core region. Additionally, this contrast revealed stronger contributions of the right STS and the right fusiform gyrus. With respect to the findings of Sander and Scheich [114,115], these observations need to be outlined more comprehensively. First, it is remarkable that Meyer et al. [85] did not find activity in regions that have so far been attributed to emotional perception. In particular, the amygdala did not light up when participants heard a laughing voice. Interestingly, Sander and Scheich [114,115] presented their participants with continuous blocks of laughter that lasted 60 s, while the laughing trials utilized by

2 For a detailed description of this macroanatomical parcellation of the auditory cortex and its corresponding nomenclature, please refer to the publication by Brechmann et al. [20].


Fig. 2. Averaged hemodynamic responses (n = 12) obtained from statistical parametric contrast maps between human speech (bluish colors) and human laughter (reddish colors) [85]. The upper row shows transversal sections in neurological convention. Functional activation is superimposed onto a standard anatomical normalized template. Y- and Z-values correspond to the stereotactic Talairach space. The lower row illustrates coronal sections in neurological convention. Abbreviations of anatomical labels are as follows: STG, superior temporal gyrus; ROP, Rolandic operculum; SCG, subcentral gyrus; HeG, Heschl’s gyrus; PoCG, postcentral gyrus.

Meyer et al. [85] lasted only approximately 6 s. As outlined by Panksepp and Burgdorf, “laughter certainly is infectious” [99, p. 540], but it is also plausible to assume that passively hearing laughter for 6 s is not a sufficient trigger to evoke a robust emotional experience in participants. In the studies of Sander and Scheich [114,115], the subjects were presented with periods of continuous human laughter that were ten times longer and undoubtedly a more contagious and suitable stimulus for inducing emotional sensations in other human individuals [111]. Alternatively, one may reason that the sequences of laughter used by Meyer et al. [85] were not sufficiently “emotional” to evoke a solid response in the limbic regions that are tied to emotional processes. A common feature of all three studies is that they observed robust brain responses in the auditory cortices of both the right and the left hemisphere. While both the first study of Sander and

Scheich [114] and the investigation by Meyer et al. reported a rightward functional asymmetry in the auditory cortex [85,114], the second study of Sander and Scheich [115] located a left-dominant activation pattern in the supratemporal plane. The different baselines applied in the statistical contrasts most likely account for this finding. In the second study, Sander and Scheich contrasted “natural laughter” with “time-reversed laughter” that differed in “short-term spectro-temporal dynamics”, but was equal in the “overall spectral contents” and the “sequential structure of sound elements” [115, p. 520]. An increasing number of recent imaging studies have provided compelling evidence that the posterior auditory cortex, including secondary and auditory association fields, mediates the rapidly changing spectro-temporal cues that are available in speech and nonspeech sounds [67,84,104,157]. Thus, it comes as no surprise that Sander and Scheich [115] report more vigorous responses of the left posterior auditory cortex to rapidly changing acoustic cues, e.g. formant transitions, that occur in natural laughter but not in its time-reversed version. It should be mentioned that the authors provide an alternative explanation: according to their reasoning, the participants of their study had to “bind similar sounds disregarding the immediate sequence of sounds” to accomplish the task [115, p. 1527]. Thus, the stronger involvement of the left auditory cortex is considered to reflect “the analysis of sequences of sounds”, since this region “has been implicated in the specialized processing of temporal features of sounds and particularly in the high temporal resolution of fast changes” [115, p. 1527]. The silent control condition used in the first study of Sander and Scheich [114] may account for the functional rightward lateralization of brain responses to laughter in the auditory cortex. As outlined in Section 1.2, voiced human laughter is rich in spectral components and harmonics, which appear to preferentially drive the auditory cortices of the right supratemporal plane when human laughter is contrasted with a silent control [51,158]. The study by Meyer et al. [85] also discovered a clear rightward asymmetry of the hemodynamic response pattern in the caudomedial portion of the primary auditory cortex and the planum temporale for laughter relative to human speech. In terms of acoustic structure, speech and laughter differ primarily in a way that predicts a distinct involvement of the left superior temporal and left inferior frontal regions in the perception of spoken language relative to laughter. Fig. 2 illustrates that our finding is in keeping with this prediction. Regarding the stronger engagement of the right primary and auditory association cortex, Meyer et al.
invoke the fact that “the right hemisphere is more proficient in processing acoustic cues such as changes in pitch, amplitude, and frequency modulation, if they load affective information” [85,p.302]. As laughter should be considered an acoustic signal that transmits acoustic cues of emotional prosody, it is plausible to reason that the decoding, extraction and recognition of the acoustic parameters that encode emotionally relevant features of human laughter preferentially drive the right posterior auditory cortices. Alternatively, one should consider the option that the right posterior supratemporal responses to laughter may be primarily elicited by vocal cues regardless of valence. At least two recent imaging studies demonstrated that the posterior STG, namely the planum temporale and the planum parietale, displays a sensitivity to elemental vocal cues such as voice spectral information [75] and the source of voices [142]. With respect to the former finding, it has been emphasized that great apes show the same structural rightward asymmetry of the planum parietale [42] as humans [66], so that this evidence should be considered a suitable starting point for a comparative inter-species behavioral approach to more deeply comprehend the common nature of primate and human vocalizations [42]. The human planum temporale has generally been termed a “computational hub” [50] that serves a variety of spectrotemporal functions when it comes to the decoding of auditory signals [83,140,157]. However, it should be mentioned that other imaging studies that have researched voice perception in the
human brain consistently report that not the STG, but rather regions stretching along the entire STS accommodate cortices that appear to be most responsive to the selective aspects of voice processing [12,37,133,134]. Interestingly, the study of Meyer et al. [85] also showed an involvement of (pre-) motor and primary somatosensory regions when participants heard laughter. More specifically, a high sensitivity to human laughter was revealed in the bilateral Rolandic operculum, the bilateral subcentral gyrus, and the right postcentral gyrus. According to the authors, at least the regions in the subcentral gyrus and the Rolandic operculum can be considered part of the “voluntary” system described by Wild et al. [147], which partly supports the expression of laughter. Evidently, these (pre-) motor regions, abutting on the ventral-most part of the central sulcus and the precentral gyrus, respectively, have been associated with the motor control of the functioning of the larynx and the articulators (tongue, lips, jaws, palate) [59,78]. This interpretation seems the most perspicuous by virtue of the notion of the “infectious power” of human laughter [77,99]. Even though the individuals partaking in this study were not asked to overtly laugh, it cannot be ruled out that hearing laughter may induce activation in cortical regions which “belong to the neural circuit subserving expressive laughter” [85,p.302], or – with regard to the finding of responses in the insula for the perception of laughter – mediating articulation [1,154]. Similarly, cortices that subserve the expression of spoken language are also part of the speech comprehension network [33,64,152,153]. One may raise the question of why the reports of Sander and Scheich [114,115] did not mention the finding of (pre-) motor activation. To reconcile their findings with the results of Meyer et al.
[85] one has to note that Sander and Scheich only analyzed a priori defined regions of interest (amygdala, insula, auditory cortex) and thus spared the (pre-) motor regions situated at the upper bank of the Rolandic and central opercula. The findings of Meyer et al. [85] are of further interest as they point to a potential involvement of extra-auditory and extra-motor regions in the neural circuit that serves the perception of laughter. In particular, the contrast between human laughter and nonvocal, nonbiological sounds uncovered prominent activation of the right fusiform gyrus and the right posterior STS. With respect to the former, Meyer et al. argue that “cross-modal mechanisms may account for this intriguing result” [85,p.303]. Responses of the visually driven fusiform area to speech have also been reported by a few other neuroimaging studies, in particular in cochlear-implant patients, who are assumed to translate auditory speech into facial expressions [47,48]. Thus, it seems plausible that participants who partook in the study of Meyer et al. [85] envisaged a laughing face while presented with bouts of laughter. The fact that laughter should be considered a biological signal of outstanding social relevance lends additional support to the observation that associative cortices in widely distributed regions of the brain turn out to be sensitive to this stimulus. This statement also holds true for the finding of right posterior STS involvement in the perception of laughter relative to nonvocal sounds. The human STS has been described as a heteromodal area that corresponds to the superior temporal polysensory area in the macaque cortex. Due to its connections
to the auditory cortex [91], this associative cortex is presumed to bind information coming from unimodal sensory areas, and thus may help form crossmodal associations [36,125]. Wright et al. [155] localized stronger responses to paired audiovisual stimuli (movies of an animated character moving her mouth), relative to the isolated presentation of visual and auditory stimuli, in a portion of the right posterior STS, which is close to the location at the fundus of the posterior STS reported by Meyer et al. [85]. By virtue of this confluent evidence, we conclude that processing laughter does not only elicit activation in perisylvian regions that are devoted to auditory and motor functions. The findings also strongly suggest that hearing laughter is capable of inducing multisensory operations that may associate visual mental images with auditory sensations to better enable a semantic interpretation of the stimuli at issue. The three neuroimaging studies introduced here draw a relatively consistent picture of the cerebral representation of laughter perception. Besides the primary auditory cortex and auditory association cortex, the studies report an involvement of cortical regions that have been attributed to the control of the larynx and the other articulators (tongue, lips, jaws, palate) [59,78], and of expressive co-articulatory functions, which appear to be sensitive even during the mere perception of laughter. The finding of activation in the amygdala emphasizes the affective components of laughter, as this region is part of the ancestral subcortical system that modifies and regulates the emotional responses of a mammalian organism. In sum, the human brain appears to process laughter as an affectively loaded acoustic stimulus, which instantaneously also elicits activation in cortical regions that are related to specific motor and multisensory functions.
Since non-human primates apparently share the faculty of laughter, the following section will review recent reports from monkey studies which help to better understand the “riddle of laughter” [96].

2.3. Evolutionary origin of laughter: evidence from non-human primates

As outlined above, the comparison of evolutionarily well-preserved affective vocalizations, such as “laughter”, across different species is likely to provide invaluable information on the evolution of vocalizations in general. In this section we sketch the evidence in support of the view that the repetitive, stereotypical vocalization of non-human primates that is considered a “laughter-like” response is indeed a homologue of the same vocalization in humans. Apart from the similarities of the acoustic signal we mentioned above, there is additional evidence available from the social and situational contexts in which monkeys display laughter-like vocalizations. Furthermore, we outline parallels between monkeys and humans pertaining to the ensemble of anatomical structures constituting the physiological mechanisms of laughter. As in human neuroscience, the understanding of the neuronal mechanisms that mediate laughter-like vocalizations in non-human primates is still very sparse and mainly limited to the expression of laughter, but some data on perceptive aspects of affective vocalizations in monkeys are available and will be introduced in turn.

Darwin already described laughter-like vocalizations in chimpanzees, considered them similar in form and context to human laughter, and interpreted these expressions as a positive and friendly signal [30]. Comparable vocalizations were also observed in the social play of bonobos [32] and Barbary macaques [71]. Weinstein and Bender were the first to evoke laughter-like vocalizations in macaques (Macaca mulatta) by brain stimulation of the diencephalon, midbrain, pons and medulla. Furthermore, these authors postulated a rubro-reticulo-olivary system as the integrator of the facial movement patterns also involved in laughter-like responses [144]. van Hooff [129] reasoned about the relationship between facial and vocal expression and hypothesized that the observed laughter-like responses in non-human primates and the “play face” displayed in similar contexts were the evolutionary origin of human laughter. Conversely, he suggested that human smiling was homologous to the “grin face” which monkeys display as an appeasing response to aggression. The proposed social functions of the “play face” and the “grin face” have been confirmed by Waller and Dunbar [137], and recent dissections of the chimpanzee facial musculature demonstrate its striking similarity with the human anatomy, and thus the comparability of human and chimpanzee facial expressions in principle [24]. Jürgens explicitly investigated the relationship between various emotional states and the accompanying vocalizations emitted by squirrel monkeys, among them expressions of enjoyment evoked by brain stimulation [70]. Besides mirthful expression, aversive and hedonic behavior also corresponded to stimulation of several brain areas, namely the septum, nucleus accumbens, preoptic area, hypothalamus, midline thalamus, amygdala, and stria terminalis.
The close correlation between call type and associated emotional state, together with the fact that the threshold for the elicitation of vocalization was always higher than that for the elicitation of the emotional effects, was taken as evidence that the vocalizations represent expressions of stimulation-induced motivational effects rather than primary motor responses. Furthermore, the anterior cingulum was interpreted as being responsible for the voluntary production of the vocalizations. Jürgens suggested that the reticular formation and the medulla are primarily involved in providing motor coordination in the production of the vocalizations [70]. Finally, the periaqueductal grey seems to play a central role in coordinating the different subsystems required to produce emotional vocalizations. In general, the reported areas involved in the production of monkey laughter-like vocalizations correspond well to the areas implicated in expressive human laughter. In particular, the periaqueductal grey and the reticular formation are supposed to be closely related to the pattern generation of human laughter [147]. Furthermore, subcortical ancestral brain areas that are thought to determine emotional states (amygdala, hypothalamus) appear to be involved in both human laughter and monkey laughter-like vocalizations. In contrast to the neuronal network involved in the expression of monkey laughter-like vocalizations, the understanding of the perception of laughter in non-human primates is scarce. Berntson et al. investigated cardiac and behavioral responses
of infant chimpanzees to screams and laughter [15]. In contrast to a deceleratory cardiac response to screams, which was associated with orienting behavior, infant chimpanzees reacted to laughter with a cardio-acceleratory response, which likely resulted from sympathetic activation. At a more general level, recent studies provided evidence for a particular sensitivity of the monkey auditory cortex to conspecific vocalizations [138], with the left temporal pole apparently playing a cardinal role [106]. Another recent imaging study in three macaques reported greater responses in the monkeys' right STG to both conspecific and nonbiological sounds [45]. Interestingly, the authors noticed that hearing conspecific sounds selectively produced increased activation in bilateral anterior and posterior perisylvian areas which can be considered homologs of the human core speech area. Even though no imaging studies investigating monkey brain responses to laughter-like vocalizations have been published so far, neurophysiological data from other monkey vocalizations might give an indication of what we would expect to find for monkey laughter. Section 2.2 on human imaging studies mentioned that, in contrast to speech, there is no clear evidence for a leftward lateralization in the perception of laughter and affectively loaded tone. It even appears that the right auditory cortex is preferentially driven by the acoustic features of laughter. In the case of monkeys, it is far more contentious whether or not a precursor of a leftward lateralization for the perception of vocalizations exists. Nevertheless, the auditory cortex of monkeys has been shown to be sensitive to rapid acoustic transients [76,123], which are key acoustic cues helping humans to extract and decipher phonemes available in spoken language [119]. Some studies propose a dominance of the monkeys' left auditory cortex when it comes to the processing of species-specific vocalizations.
For example, Heffner and Heffner [56] reported that monkeys were impaired in discriminating monkey vocalizations after ablations of the left auditory cortex. Hauser and Andersson [55] demonstrated that macaques showed a right ear, and thus left hemisphere, preference when hearing vocalizations from their own repertoire. We already alluded to the reported stronger activation of the left temporal pole in one imaging study in macaques when the animals heard conspecific calls [106]. However, all other areas of the STG showed a rightward lateralization for the processing of conspecific calls and a variety of sounds. Supporting evidence for a right cortical dominance for conspecific vocal cues comes from another recent monkey study which showed that vervet monkeys preferred to listen to species-specific calls with the left ear, indicating right hemisphere dominant processing [44]. These results suggest a right hemisphere bias for processing conspecific vocalizations regardless of whether the calls were recorded from vervets who were familiar or unknown to the monkeys who partook in the study. The occurrence of laughter-like vocalizations in chimpanzees has been thoroughly investigated by Matsusaka [79]. The expression of these laughter-like sounds was most frequently observed during playful chasing and tickling, situations that are also typical for human laughter. Interestingly, it was observed that laughter was more often expressed in “somewhat fragile interactions, which may contain the risk of confusing “defensive”
actions by the target of “aggressive” actions with real efforts to escape the situation” [79,p. 221]. This observation suggests that “laughter” not only serves as an expression of one's own present joyful and positive emotional state but is additionally meant to directly affect and manipulate the behavior of a partner or predator. Such an interpretation would support the affect-induction approach introduced by Owren and Bachorowski [95], who state that laughter in human and non-human primates supports the manipulation of the affective state of a receiver and in consequence directly affects their behavior. Thus, it may also be sensible to reason that laughter could serve as a tool to “reward” social contact. Human infants and children essentially need social interaction with older siblings, parents, nursery teachers, etc. to train relevant communicative behaviors. In terms of evolutionary thinking, one may assume that “laughing” is an important building block in the vulnerable formation of a family, a collective, or a species, binding it together emotionally and behaviorally. In keeping with this view, Sroufe and Waters propose the hypothesis that laughter is a means of releasing tension such as that which builds up during the child's interactions with its social environment [121]. In summary, we can conclude that there is compelling evidence that non-human primate laughter-like vocalizations and human laughter occur in similar social contexts and thus may serve a similar purpose. The recent emergence of PET and fMRI studies of monkeys is an encouraging development for the understanding of the evolution of vocalizations and animal laughter. Imaging in monkeys allows the observation of vocalization-specific activity in large cortical areas and facilitates the comparison with the increasing amount of data from human brain mapping.
A pioneering PET study in monkeys [43], for example, uncovered widely distributed cortical and subcortical regions in the brain of macaques, including the STG, that respond to a variety of sounds, including species-specific calls. Although this study does not provide further insight into the question of lateralized vocal perception in the monkey brain (only one out of the three participating monkeys showed activity mainly in the left hemisphere, a second in the right hemisphere, and the third in both hemispheres), it is of outstanding interest in the context of comparative neuroscience. The reported areas indicate that, in macaques, an ensemble of cortical and subcortical brain sites supports both the perception and the production of vocalizations, an ensemble very similar to the human brain network supporting the perception of voices and laughter [85]. Interestingly, the study by Gil-da-Costa et al. [43] also reported the involvement of non-auditory areas previously reported in neuroimaging studies on the perception of human laughter, such as the amygdala [115] and visual association areas, namely the right fusiform gyrus [85]. The sum of these studies does not, however, draw a coherent picture that prompts us to nominate either the right or the left auditory cortex of the monkey brain as the candidate region most sensitive to affective vocalizations and other signal calls. Nevertheless, we consider the striking similarity of the imaging data on vocalization processing obtained from monkeys [106] and the data on laughter perception in humans as important evidence for the involvement of analogous underlying neural circuits.


The consistent findings of a similar structural and functional hierarchical architecture of the auditory system in man and monkey [136,153] make the assumption of similar ensembles serving laughter in the human brain and laughter-like responses in the monkey brain even more likely. It is evident that the present lack of evidence on brain areas involved in the perception of laughter in non-human primates does not yet allow for a direct comparison with human imaging studies on the perception of laughter. Yet, for a better understanding of the evolution of primate vocalizations, imaging data of monkeys processing laughter-like vocalizations would be immensely helpful and desirable.

3. Conclusions

The reviewed studies taken from human and animal research provide compelling evidence for the view that “laughter” can be considered an affective vocalization which exists not only in humans but also in other mammalians, such as monkeys. While “laughing rats” emit laughter-like joyful responses when “tickled” or in anticipation of play [96], the nature of laughter in monkey and man appears to be more complex and requires more thorough consideration. We have outlined, in Section 1.1, that laughter is a natural human response to humor, jokes, and gags. However, based on current knowledge, we assume that monkeys do not have a particularly pronounced sense of humor, and we conclude that laughter also serves other functions in the social world of humans and monkeys. Adopting the proposal of Owren and Bachorowski [95], we suggest that laughter may not only display the mirthful state or experience of one individual (be it monkey or human), but rather serves to actively influence the affective state of listeners, and their behavior accordingly. Thus, one essential social function of laughter might be to enable the “laughing speaker” to take up a socially advantageous stance by manipulating the listener's attention, arousal, and emotions.
Owren and Bachorowski argue that “laughter is evidently a species-wide, ‘hard-wired’ behavior” [95,p. 187] that is part of an individual's repertoire, serving not only to signal happiness but to prompt other individuals of a peer group and to induce an association of pleasant emotions with the speaker. In more detail, the authors conjecture that laughter serves to transmit the speaker's displayed positive emotional state to a listener or recipient. This effect aims at inducing the same or a similar mirthful state in the recipient and at configuring a positive situational context for social interaction, which would be a reasonable explanation for the amygdala involvement observed in the participants of Sander and Scheich [114,115]. Analogous to the idea of mirror neurons, which are considered the neural foundation of dialogues and nonverbal social interaction [112], one might ask whether such a “mirror device” might account for the finding of (pre-) motor activity in the brains of individuals who only perceive short sequences of laughter. Other authors agree with Owren and Bachorowski's view and state that laughter “facilitates social interactions and take them (the listeners) in positive directions in ways that promote bonding and cooperative activities” [99,p. 540] and that “laughter is infectious and may transmit moods of positive
social solidarity, thereby prompting cooperative forms of social engagement” [99,p. 543]. The laughter of their offspring, for example, might serve as a social “reward” for playing behavior in parents and other members of the peer group. The evolutionary “goal” of this faculty in animals and humans, thus, might be to encourage the training of skills that are highly relevant for the survival of individuals, the peer group and the species. The finding of multiple cortical and subcortical brain regions that have been reported in association with human laughter or laughter-like responses observed in monkeys supports the statements quoted above. Of course, we are fully aware of the present paucity of knowledge about the neural substrates of human laughter. In particular, we emphasize again that the studies which investigated brain responses to the detection and interpretation of humor discovered areas in associative cortices other than those tied to auditory and motor representations. Hence, the ensemble of brain regions obtained from studies on expressive and perceptive laughter that have been published so far does not include the cortical territories devoted to ‘higher cognition’. Rather, these auditory and (pre-) motor areas are assumed to be indispensable parts of the basic network that constitutes laughter. Certainly, a comparative review such as this raises the ineluctable question of whether the brain regions that are likely to support laughter overlap across species or whether they are different. In essence, there is evidence that humans and monkeys share a common neural substrate for representing socially relevant objects [43]. However, when it comes to the description of the cerebral representation of speech or conspecific calls, inconsistent or even conflicting findings arise. On the one hand, there is no doubt that (in the majority of human individuals) the left hemisphere is dominant for language functions [69].
More specifically, it is the perisylvian region that accommodates the core language system in humans.3 A recent brain imaging study in macaques comes to the conclusion that the homologous perisylvian areas in the monkey brain subserve the processing of conspecific calls [44]. However, there is still considerable uncertainty about the different roles that the right and the left hemisphere play in the perception of speech and vocal cues. In terms of human speech, recent evidence from neuroanatomical and functional research suggests that the specific aptitude of the left posterior auditory cortex to preferentially process rapidly changing acoustic cues, i.e. formant transitions, may be considered the basic foundation underlying speech perception [2,63,84,103,104,120,157,158]. The right auditory cortex is thought to be more strongly involved in processing slowly changing acoustic cues, i.e. intonation contour and speech rhythm [57,58,83], the perceptive aspects of general vocal information [11,25,75,85], the perception of pitch and timbre available in vocal and nonvocal acoustic signals [53,82,100,141], as well as the processing of emotionally tuned spoken utterances [35,149,150,151]. Interestingly, a preference of left auditory regions for rapidly changing acoustic cues has also been observed in monkey studies [76,123] even though the

3 For most recent review articles on the cerebral organization of speech and language see [34,124,132].


anatomical leftward asymmetries in anterior and posterior perisylvian compartments that are present in man [66,68,117,122] and great apes [26,27,41] remain contentious in the monkey brain. By virtue of this observation, it is problematic to directly compare
functional imaging results obtained from humans and monkeys without taking this knowledge into account. As outlined in Section 2.3, there is still uncertainty as to whether the right or the left perisylvian region is more adept at processing species-specific

Fig. 3. Schematic illustration of the neural circuits mediating expressive and perceptive laughter in the human brain (top image) and emotional vocalizations in the monkey brain (bottom image). We depict the brain of a macaque to superimpose evidence collected from a variety of monkeys (macaque, squirrel monkey, vervet monkey). The anatomical labels underlined in blue mark brain areas that are known to support the perception of laughter, while the labels underlined in orange index brain regions which have been attributed to expressive laughter. We would like to mention that this schematic illustration is based upon present knowledge and should be considered incomplete. Abbreviations of anatomical labels: ACC, anterior cingulate cortex; Am, amygdala; Ht, hypothalamus; Md/Pns, medulla oblongata/pons; PAG/Tg, periaqueductal gray/tegmentum; STG, superior temporal gyrus; ROP, Rolandic operculum; HeG, Heschl's gyrus; STS, superior temporal sulcus; PT, planum temporale; Ins, insula; SCG, subcentral gyrus; PoCG, postcentral gyrus; SMA, supplementary motor area. MRIs were recorded at the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig (human) and the Primate Imaging Lab, Newcastle University (macaque).


calls. However, when looking for common neural substrates of human laughter and laughter-like responses, we should be aware of the fact that only a fraction of monkeys' conspecific vocalizations are affectively loaded. Thus, when reasoning about a potential correspondence between human and monkey “laughter”, we should focus on monkey studies that distinguished between brain responses to affective and nonaffective vocalizations. Unfortunately, to the best of our knowledge, such a study has not yet been published. With respect to the scarce human neuroimaging studies on laughter perception, we noted in Section 2.2 that a functional rightward asymmetry of activation in auditory cortices has been observed. Several explanations may account for this finding. According to outdated theories on the lateralization of functions, the right hemisphere has been considered the harbour of emotional experience in general, or at least the residence of emotional speech [130]. Meanwhile, numerous studies have evidenced a role of the left hemisphere in emotionally tuned speech perception [73,74]. One recent proposition holds that the processing of acoustically encoded emotional information does not occur globally in either one of the hemispheres but is organized by both the left and the right hemisphere [131]. More specifically, which hemisphere is more strongly engaged depends on the particular acoustic cue. An approach to delineate the acoustic devices which encode emotional tone has been published by Banse and Scherer [6], who identified changes in respiration, phonation, and articulation, reflected in alterations of various acoustic parameters (F0, mean energy, speech rate, average spectrum), that correspond to distinct emotional experiences in listeners.
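Two of the parameters listed by Banse and Scherer, F0 and mean (RMS) energy, can be illustrated with a minimal sketch. The snippet below is our own illustrative simplification, not the procedure of Banse and Scherer: it applies a crude zero-crossing F0 estimator and an RMS measure to a synthetic 220 Hz tone standing in for one voiced laughter syllable; real analyses of recorded laughter would use autocorrelation- or cepstrum-based pitch tracking.

```python
import math

def rms_energy(signal):
    """Root-mean-square (mean energy) of a signal window."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def estimate_f0(signal, sample_rate):
    """Rough F0 estimate from the zero-crossing rate.

    Assumes a voiced, roughly periodic signal with two zero
    crossings per period; recorded laughter would require a more
    robust method (autocorrelation or cepstral pitch tracking).
    """
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0)
    )
    duration_s = len(signal) / sample_rate
    return crossings / (2.0 * duration_s)

# Synthetic stand-in for one voiced laughter syllable:
# a 220 Hz tone, 100 ms long, sampled at 16 kHz.
sr = 16000
syllable = [math.sin(2 * math.pi * 220 * n / sr) for n in range(1600)]

print("F0  ~", round(estimate_f0(syllable, sr)), "Hz")   # near 220 Hz
print("RMS ~", round(rms_energy(syllable), 3))           # ~0.707 for a unit sine
```

Tracking how such parameters change across the bouts of a laughter sequence, rather than a single window, is what distinguishes the emotion-specific acoustic profiles described by Banse and Scherer.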
Since laughter is an affectively loaded auditory stimulus with several distinct phonetic aspects (syllable rate, voiced and unvoiced portions of the acoustic signal), it comes as no surprise that areas of both the left and the right auditory cortex respond to laughter. Since the right auditory cortex has been revealed as being more adept at processing stereotyped vocalizations or particular aspects of voice perception [11,75], it is plausible to find auditory cortices in the right supratemporal areas more strongly involved. Even though former views attributed emotional experience to the right hemisphere, we do not believe that emotional experience accounts for the aforementioned observations of stronger right auditory cortex responses to the presentation of human laughter [85,114]. We rather think that emotional experience elicited by the perception of laughter is reflected by the activation of subcortical, “limbic” regions, among others the amygdala [114,115]. This interpretation is in line with other brain imaging studies that attribute emotional experience to the amygdala [9,101] and with monkey studies that observed mirthful expression after stimulation of the amygdala [70]. Thus, the amygdala and other subcortical regions should be considered the major “loci of control” for emotional and affective processes that have been evolutionarily conserved in the mammalian brain [99]. Given the paucity, or outright lack, of studies that addressed the issue of laughter in the mammalian brain, it is a problematic endeavour to sketch a potential map of brain regions that might (at least partly) constitute the neural circuit of laughter. However, Fig. 3 provides such a schematic blueprint

that shows regions in the human and in the monkey brain that have been reported by the aforementioned studies in Sections 2.2 and 2.3 in the context of perceptive and expressive laughter and affective vocalizations. As apparent from Fig. 3 auditory, (pre-) motor and primary somatosensory regions are most evident in humans, in particular when it comes to the perception of laughter. This is a plausible finding as it has been shown recently that a densely intertwined network of auditory and (pre-) motor regions subserves a variety of perceptive and expressive functions in the context of speech [61,62,83,152] and music [8,60]. This reciprocal interaction between perception and expression holds particularly for human laughter as hearing laughter has been described as “infectious” and sufficient to induce expressive laughter which is reflected by the recruitment of somatosensory and (pre-) motor loops in the subcentral gyrus, the postcentral gyrus and the Rolandic operculum [99]. Furthermore this reasoning provides support for the affect-induction approach that proposes that the perception of laughter induces the same positive emotional state in a recipient since it signals a mirthful state of the emitter [95]. This would also imply that brain regions which support expressive laughter, namely (pre-) motor areas, including the Rolandic operculum and the subcentral gyrus coactively activate as it has been outlined above. In the line of this reasoning the finding of somatosensory activation in recipients of laughter makes also sense as it is conceivable that hearing laughter elicits the same internal sensations of chest movements as overt laughing does. This “mirror-like” transmission of emotive states that covaries with activation of audio-motor circuits in humans is also likely to occur in monkey. However, presently there are no studies at hand that provide suitable data. Fig. 
3 further shows that there appears to be some overlap, in particular in subcortical ancestral brain regions that seem to support expressive aspects of affective vocalization in both monkey and man. Furthermore, one part of premotor cortex (Rolandic operculum) turns out to be involved when the mammalian brain produces affective calls. Unfortunately, the lack of relevant studies that particularly address the issue of affective vocalizations in man and monkey does not allow us to draw presently a more precise picture and does not bring us in a comfortable position to do insightful cross-species comparative considerations. But we are confident that induced by the advent of in vivo imaging techniques in the field of monkey research the present knowledge on auditory cognition in mammalians will be rapidly advancing. Acknowledgements We would like to thank Schweizerischer National Fonds (Swiss National Foundation, SNF 3200B0-105877 awarded to Martin Meyer) and the Deutsche Forschungsgemeinschaft (German Science Foundation, DFG AL 357/1-2 awarded to Kai Alter, DFG WI 2101/2-1 awarded to Dirk Wildgruber) for financial support during the preparation of the manuscript. Furthermore we are grateful to Elsa S¨agesser for her helpful comments on the manuscript.


References

[1] Ackermann H, Riecker A. The contribution of the insula to motor aspects of speech production: a review and a hypothesis. Brain Lang 2004;89:280–9. [2] Anderson B, Southern BD, Powers RE. Anatomic asymmetries of the posterior superior temporal lobes: a postmortem study. Neuropsychiatry Neuropsychol Behav Neurol 1999;12:247–54. [3] Augustine JR. Circuitry and functional aspects of the insular lobe in primates including humans. Brain Res Rev 1996;39:172–84. [4] Bachorowski JA, Smoski MJ, Owren MJ. The acoustic features of human laughter. J Acoust Soc Am 2001;110:1581–97. [5] Bamiou DE, Musiek FE, Luxon LM. The insula (Island of Reil) and its role in auditory processing. Literature review. Brain Res Rev 2003;42:143–54. [6] Banse R, Scherer KR. Acoustic profiles in vocal emotion expression. J Pers Soc Psychol 1996;70:614–36. [7] Bartolo A, Benuzzi F, Nocetti L, Baraldi P, Nichelli P. Humour comprehension and appreciation: an fMRI study. J Cog Neurosci 2006;18:1789–98. [8] Baumann S, Koeneke S, Meyer M, Lutz K, Jancke L. A network for sensory-motor integration. What happens in the auditory cortex during piano playing without acoustic feedback? Ann NY Acad Sci 2005;1060:186–8. [9] Baumgartner T, Lutz K, Schmidt CF, Jancke L. The emotional power of music: how music enhances the feeling of affective pictures. Brain Res 2006;1075:151–64. [10] Beaucousin V, Lacheret A, Turbelin M, Morel M, Mazoyer B, Tzourio-Mazoyer N. FMRI study of emotional speech comprehension. Cereb Cortex 2007;17:339–52. [11] Belin P, Fecteau S, Bédard C. Thinking the voice: neural correlates of voice perception. Trends Cog Sci 2004;8:129–35. [12] Belin P, Zatorre RJ, Ahad P. Human temporal-lobe response to vocal sounds. Brain Res Cogn Brain Res 2002;13:17–26. [13] Belin P, Zatorre RJ, Lafaille P, Ahad P, Pike B. Voice-selective areas in human auditory cortex. Nature 2000;403:309–12. [14] Benson RR, Whalen DH, Richardson M, Swainson B, Clark VP, Lai S, et al.
Parametrically dissociating speech and nonspeech perception in the brain using fMRI. Brain Lang 2001;78:364–96. [15] Berntson GG, Boysen ST, Bauer HR, Torello MS. Conspecific screams and laughter: cardiac and behavioral reactions of infant chimpanzees. Dev Psychobiol 1989;22:771–87. [16] Blank SC, Bird H, Turkheimer F, Wise RJS. Speech production after stroke: the role of the right pars opercularis. Ann Neurol 2003;54:310–20. [17] Blood A, Zatorre RJ. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc Natl Acad Sci USA 2001;98:11818–23. [18] Blood A, Zatorre RJ, Bermudez P, Evans AC. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neurosci 1999;2:382–7. [19] Boemio A, Fromm A, Braun A, Poeppel D. Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nature Neurosci 2005;8:389–95. [20] Brechmann A, Baumgart F, Scheich H. Sound-level-dependent representation of frequency modulations in human auditory cortex: a low-noise fMRI study. J Neurophysiol 2002;87:423–33. [21] Brown S, Martinez MJ, Parsons LM. Passive music listening spontaneously engages limbic and paralimbic systems. NeuroReport 2004;15:2033–7. [22] Brown S, Martinez MJ, Parsons LM. Music and language side by side in the brain: a PET study of the generation of melodies and sentences. Eur J Neurosci 2006;23:2791–803. [23] Buchanan TW, Lutz K, Mirzazade S, Specht K, Shah NJ, Zilles K, et al. Recognition of emotional prosody and verbal components of spoken language: an fMRI study. Brain Res Cogn Brain Res 2000;9:227–38. [24] Burrows AM, Waller BM, Parr LA, Bonar CJ. Muscles of facial expression in the chimpanzee (Pan troglodytes): descriptive, ecological and phylogenetic contexts. J Anat 2006;208:153–67.


[25] Bélizaire G, Fillion-Bilodeau S, Chartrand JP, Bertrand-Gauvin C, Belin P. Cerebral response to ‘voiceness’: a functional magnetic resonance imaging study. NeuroReport 2007;18:29–33. [26] Cantalupo C, Hopkins WD. Asymmetric Broca’s area in great apes. Nature 2001;414:505. [27] Cantalupo C, Pilcher DL, Hopkins WD. Are planum temporale and sylvian fissure asymmetries directly related? A MRI study in great apes. Neuropsychologia 2003;41:1975–81. [28] Citardi MJ, Yanagisawa E, Estill J. Videoendoscopic analysis of laryngeal function during laughter. Ann Otol Rhinol Laryngol 1996;105:545–9. [29] Crinion JT, Lambon-Ralph MA, Warburton E, Howard D, Wise RJS. Temporal lobe regions engaged during normal speech comprehension. Brain 2003;126:1193–201. [30] Darwin C. The expression of the emotions in man and animals. New York: Oxford University; 1872/1998. [31] Davis MH, Johnsrude IS. Hierarchical processing in spoken language comprehension. J Neurosci 2003;23:3423–31. [32] de Waal F. The communicative repertoire of captive bonobos (Pan paniscus), compared to that of chimpanzees. Behaviour 1988;106:183–251. [33] Dogil G, Ackermann H, Grodd W, Haider H, Kamp H, Mayer J, et al. The speaking brain: a tutorial introduction to fMRI experiments in the production of speech, prosody, and syntax. J Neuroling 2002;15:59–90. [34] Démonet JF, Thierry G, Cardebat D. Renewal of the neurophysiology of language: functional imaging. Physiol Rev 2005;85:49–95. [35] Ethofer T, Anders S, Wiethoff AS, Erb M, Herbert C, Saur R, et al. Effects of prosodic emotional intensity on activation of associative auditory cortex. NeuroReport 2006;17:249–53. [36] Ethofer T, Pourtois G, Wildgruber D. Investigating audiovisual integration of emotional signals in the human brain. In: Anders S, Ende G, Junghofer M, Kissler J, Wildgruber D, editors. Progress in brain research, Vol. 156. Amsterdam: Elsevier; 2006. p. 345–61. [37] Fecteau S, Armony JL, Joanette Y, Belin P.
Is voice processing species-specific in human auditory cortex? An fMRI study. NeuroImage 2004;23:840–8. [38] Feinstein A, Feinstein K, Gray T, O’Connor P. Prevalence and neurobehavioral correlates of pathological laughing and crying in multiple sclerosis. Arch Neurol 1997;54:1116–21. [39] Fried I, Wilson CL, MacDonald KA, Behnke EJ. Electric current stimulates laughter. Nature 1998;391:650. [40] Friederici AD, Rüschemeyer S, Hahne A, Fiebach C. The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cereb Cortex 2003;13:170–7. [41] Gannon PJ, Holloway RL, Broadfield DC, Braun AR. Asymmetry of the planum temporale: humanlike pattern of Wernicke’s brain language area homolog. Science 1998;279:220–2. [42] Gannon PJ, Kheck NM, Braun AR, Holloway RL. Planum parietale of chimpanzees and orangutans: a comparative resonance of human-like planum temporale. Anat Rec A 2005;278:1128–41. [43] Gil-da Costa R, Braun A, Lopes M, Carson R, Herscovitch P, Martin A. Toward an evolutionary perspective on conceptual representation: species-specific calls activate visual and affective processing systems in the macaque. Proc Natl Acad Sci USA 2004;101:17516–21. [44] Gil-da Costa R, Hauser MD. Vervet monkeys and humans show brain asymmetries for processing conspecific vocalizations, but with opposite patterns of laterality. Proc Biol Sci 2006;273:2313–8. [45] Gil-da Costa R, Martin A, Lopes MA, Muñoz M, Fritz JB, Braun A. Species-specific calls activate homologs of Broca’s and Wernicke’s area in the macaque. Nat Neurosci 2006;9:1064–70. [46] Giraud AL, Lorenzi C, Ashburner J, Wable J, Johnsrude I, Frackowiak R, et al. Representation of the temporal envelope of sounds in the human brain. J Neurophysiol 2000;84:1588–98. [47] Giraud AL, Price CJ, Graham JM, Truy E, Frackowiak RSJ. Cross-modal plasticity underpins language recovery after cochlear implantation. Neuron 2001;30:657–63. [48] Giraud AL, Truy E.
The contribution of visual areas to speech comprehension: a PET study in cochlear implant patients and normal-hearing subjects. Neuropsychologia 2002;40:1562–9.



[49] Goel V, Dolan RJ. The functional anatomy of humour: segregating cognitive and affective components. Nat Neurosci 2001;4:237–8. [50] Griffiths TD, Warren JD. The planum temporale as a computational hub. Trends Neurosci 2002;25:348–53. [51] Hall DA, Johnsrude IS, Haggard MP, Palmer AR, Akeroyd MA, Summerfield AQ. Spectral and temporal processing in human auditory cortex. Cereb Cortex 2002;12:140–9. [52] Halpern AR, Zatorre RJ. When the tune runs through your head: a PET investigation of auditory imagery for familiar melodies. Cereb Cortex 1999;9:697–704. [53] Halpern AR, Zatorre RJ, Bouffard M, Johnson JA. Behavioral and neural correlates of perceived and imagined musical timbre. Neuropsychologia 2004;42:1281–92. [54] Hartje W, Poeck K. Klinische Neuropsychologie. 6th ed. Stuttgart: Thieme; 2006. [55] Hauser MD, Andersson K. Left hemisphere dominance for processing vocalizations in adult, but not infant, rhesus monkeys: field experiments. Proc Natl Acad Sci USA 1994;91:3946–8. [56] Heffner HE, Heffner RS. Temporal lobe lesions and perception of species-specific vocalizations by macaques. Science 1984;226:75–6. [57] Hesling I, Clément S, Bordessoules M, Allard M. Cerebral mechanisms of prosodic integration: evidence from connected speech. NeuroImage 2005;24:937–47. [58] Hesling I, Dilharreguy B, Clément S, Bordessoules M, Allard M. Cerebral mechanisms of prosodic sensory integration using low-frequency bands of connected speech. Hum Brain Mapp 2005;26:157–69. [59] Hesselmann V, Sorger B, Lasek K, Guntinas-Lichius O, Krug B, Sturm V, et al. Discriminating the cortical representation sites of tongue and lip movement by functional MRI. Brain Topogr 2005;16:159–67. [60] Hickok G, Buchsbaum B, Humphries C, Muftuler T. Auditory–motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J Cog Neurosci 2003;15:673–82. [61] Hickok G, Poeppel D. Towards a functional neuroanatomy of speech perception. Trends Cog Sci 2000;4:131–8.
[62] Hickok G, Poeppel D. Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 2004;92:67–99. [63] Hutsler JJ. Hemispheric asymmetries in cerebral cortical networks. Trends Neurosci 2003;28:429–35. [64] Indefrey P, Hellwig F, Herzog H, Seitz RJ, Hagoort P. Neural responses to the production and comprehension of syntax in identical utterances. Brain Lang 2004;89:312–9. [65] Janata P, Birk JL, Van Horn JD, Leman M, Tillmann B, Bharucha JJ. The cortical topography of tonal structures underlying Western music. Science 2002;298:2167–70. [66] Jancke L, Steinmetz H. Auditory lateralization and planum temporale asymmetry. NeuroReport 1993;5:169–72. [67] Jancke L, Wüstenberg T, Scheich H, Heinze H. Phonetic perception and the temporal cortex. NeuroImage 2002;15:733–46. [68] Josse G, Mazoyer B, Crivello F, Tzourio-Mazoyer N. Left planum temporale: an anatomical marker of left hemispheric specialization for language comprehension. Brain Res Cogn Brain Res 2003;18:1–14. [69] Josse G, Tzourio-Mazoyer N. Hemispheric specialization for language. Brain Res Rev 2004;44:1–12. [70] Jürgens U. Neuronal control of mammalian vocalization with special reference to the squirrel monkey. Naturwissenschaften 1998;85:376–88. [71] Kipper S, Todt D. The use of vocal signals in the social play of Barbary macaques. Primates 2002;43:3–17. [72] Knutson B, Burgdorf J, Panksepp J. Anticipation of play elicits high-frequency ultrasonic vocalizations in young rats. J Comp Psychol 1998;112:65–73. [73] Kotz SA, Meyer M, Alter K, Besson M, von Cramon DY, Friederici AD. On the lateralization of emotional prosody: an event-related functional MR investigation. Brain Lang 2003;86:366–76. [74] Kotz SA, Meyer M, Paulmann S. Lateralization of emotional prosody in the brain: an overview and synopsis on the impact of study design. In: Anders S, Ende G, Junghofer M, Kissler J, Wildgruber D, editors. Progress in brain research, Vol. 156. Amsterdam: Elsevier; 2006. p. 285–94. Chapter 15. [75] Lattner S, Meyer M, Friederici A. Voice perception: sex, pitch, and the right hemisphere. Hum Brain Mapp 2005;24:11–20. [76] Lu T, Liang L, Wang X. Neural representations of temporally asymmetric stimuli in the auditory cortex of awake primates. J Neurophysiol 2001;85:2364–80. [77] Martin GN, Gray CD. The effects of audience laughter on men’s and women’s responses to humor. J Soc Psychol 1996;136:221–31. [78] Martin RE, MacIntosh BJ, Smith RC, Barr AM, Stevens TK, Gati JS, et al. Cerebral areas processing swallowing and tongue movement are overlapping but distinct: a functional magnetic resonance imaging study. J Neurophysiol 2004;92:2428–93. [79] Matsusaka T. When does play panting occur during social play in wild chimpanzees? Primates 2004;45:221–9. [80] Mazoyer BM, Tzourio N, Frak V, Syrota A, Murayama N, Levrier O, et al. The cortical representation of speech. J Cog Neurosci 1993;5:467–79. [81] Menon V, Levitin DJ, Smith BK, Lembke A, Krasnow BD, Glazer D, et al. Neural correlates of timbre change in harmonic sounds. NeuroImage 2002;17:1742–54. [82] Meyer M, Baumann S, Jancke L. Electrical brain imaging reveals spatio-temporal dynamics of timbre perception in humans. NeuroImage 2006;32:1510–32. [83] Meyer M, Steinhauer K, Alter K, Friederici AD, von Cramon DY. Brain activity varies with modulation of dynamic pitch variance in sentence melody. Brain Lang 2004;89:277–89. [84] Meyer M, Zaehle T, Gountouna EV, Barron A, Jancke L, Turk A. Spectrotemporal integration during speech perception involves left posterior auditory cortex. NeuroReport 2005;16:1985–9. [85] Meyer M, Zysset S, von Cramon DY, Alter K. Distinct fMRI responses to laughter, speech, and sounds along the human perisylvian cortex. Brain Res Cogn Brain Res 2005;24:291–306. [86] Mobbs D, Greicius MD, Abdel-Azim E, Menon V, Reiss AL. Humour modulates the mesolimbic reward centers. Neuron 2003;40:1041–8. [87] Moore P, van Leden H. Dynamic variations of the vibratory pattern in the normal larynx. Folia Phoniatr 1958;10:205–38. [88] Mutschler I, Schulze-Bonhage A, Glauche V, Demandt E, Speck O, Ball T. A rapid sound-action association effect in human insular cortex. PLoS One 2007;2:e259. [89] Narain C, Scott SK, Wise RJS, Rosen S, Leff A, Iversen SD, et al. Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex 2003;13:1362–8. [90] Nwokah EE, Davies P, Islam A, Hsu HC, Fogel A. Vocal affect in three-year-olds: a quantitative acoustic analysis of child laughter. J Acoust Soc Am 1993;94:3076–90. [91] Ochiai T, Grimault S, Scavarda D, Roch G, Hori T, Rivière D, et al. Sulcal pattern and morphology of the superior temporal sulcus. NeuroImage 2004;22:706–19. [92] Ohnishi T, Matsuda H, Asada T, Aruga M, Hirakata M, Nishikawa M, et al. Functional anatomy of musical perception in musicians. Cereb Cortex 2001;11:754–60. [93] Okuda DT, Chyung ASC, Chin CT, Waubant E. Acute pathological laughter. Mov Disord 2005;20:1389–90. [94] Overy K, Norton AC, Cronin KT, Gaab N, Alsop DC, Winner E, et al. Imaging melody and rhythm processing in young children. NeuroReport 2004;15:1723–6. [95] Owren MJ, Bachorowski JA. Reconsidering the evolution of nonlinguistic communication: the case of laughter. J Nonverbal Behav 2003;27:183–97. [96] Panksepp J. The riddle of laughter: neural and psychoevolutionary underpinnings of joy. Curr Dir Psychol Sci 2000;9:183–6. [97] Panksepp J. Beyond a joke: from animal laughter to human joy. Science 2005;308:62–3. [98] Panksepp J. Neuroevolutionary sources of laughter and social joy: modeling primal human laughter in laboratory rats. Behav Brain Res 2007;182:231–44. [99] Panksepp J, Burgdorf J. “Laughing rats” and the evolutionary antecedents of joy? Physiol Behav 2003;79:533–47.

[100] Patterson RD, Uppenkamp S, Johnsrude IS, Griffiths TD. The processing of temporal pitch and melody information in auditory cortex. Neuron 2002;36:767–76. [101] Phillips ML, Young AW, Scott SK, Calder AJ, Andrew C, Giampietro V, et al. Neural responses to facial and vocal expression of fear. Proc Biol Sci 1998;265:1809–17. [102] Poeck K. Pathological laughter and crying. In: Frederik S, editor. Handbook of clinical neurology. Amsterdam: Elsevier; 1985. p. 219–25. [103] Poeppel D. The analysis of speech in different temporal integration windows: cerebral lateralization as ‘asymmetric sampling in time’. Speech Commun 2003;41:245–55. [104] Poeppel D, Guillemin A, Thompson J, Fritz J, Bavelier D, Braun AR. Auditory lexical decision, categorical perception, and FM direction discrimination differentially engage left and right auditory cortex. Neuropsychologia 2004;42:183–200. [105] Poldrack RA, Temple E, Protopapas A, Nagarajan S, Tallal P, Merzenich M, et al. Relations between the neural bases of dynamic auditory processing and phonological processing: evidence from fMRI. J Cog Neurosci 2001;13:687–97. [106] Poremba A, Malloy M, Saunders RS, Carson RE, Herscovitch P, Mishkin M. Species-specific calls evoke asymmetric activity in the monkey’s temporal pole. Nature 2004;427:448–51. [107] Price C, Thierry G, Griffiths T. Speech-specific auditory processing: where is it? Trends Cog Sci 2005;9:271–6. [108] Provine RR. Laughter punctuates speech: linguistic, social and gender contexts of laughter. Ethology 1993;95:291–8. [109] Provine RR. Laughter. Am Sci 1996;84:38–45. [110] Provine RR. Laughter: a scientific investigation. New York: Viking; 2000. [111] Provine RR, Young LY. Laughter: a stereotyped human vocalization. Ethology 1991;89:115–24. [112] Rizzolatti G, Arbib MA. Language within our grasp. Trends Neurosci 1998;21:188–94. [113] Ruch W, Ekman P. The expressive pattern of laughter.
In: Kaszniak AW, editor. Emotion, qualia, and consciousness. Tokyo: World Scientific Publisher; 2001. p. 426–43. [114] Sander K, Scheich H. Auditory perception of laughing and crying activates human amygdala regardless of attentional state. Brain Res Cogn Brain Res 2001;12:181–98. [115] Sander K, Scheich H. Left auditory cortex and amygdala, but right insula dominance for human laughing and crying. J Cog Neurosci 2005;17:1519–31. [116] Scheiner E, Hammerschmidt K, Jürgens U, Zwirner P. Acoustic analysis of developmental changes and emotional expression in the preverbal vocalization of infants. J Voice 2002;16:509–29. [117] Schlaug G, Jancke L, Huang Y, Steinmetz H. In vivo evidence of structural brain asymmetry in musicians. Science 1995;267:699–701. [118] Schönwiesner M, Rübsamen R, von Cramon DY. Hemispheric asymmetry for spectral and temporal processing in the human antero-lateral auditory belt cortex. Eur J Neurosci 2005;22:1521–8. [119] Schwartz J, Tallal P. Rate of acoustic change may underlie hemispheric specialization for speech perception. Science 1980;207:1380–1. [120] Sigalovsky IS, Fischl B, Melcher JR. Mapping an intrinsic MR property of gray matter in auditory cortex of living humans: a possible marker for primary cortex and hemispheric differences. NeuroImage 2006;32:1524–37. [121] Sroufe AL, Waters E. The ontogenesis of smiling and laughter: a perspective on the organization of development in infancy. Psychol Rev 1976;83:173–89. [122] Steinmetz H, Volkmann J, Jancke L, Freund H. Anatomical left–right asymmetry of language-related temporal cortex is different in left- and right-handers. Ann Neurol 1991;29:315–9. [123] Steinschneider M, Volkov IO, Fishman YI, Oya H, Arezzo JC, Howard MA. Intracortical responses in human and monkey primary auditory cortex support a temporal processing mechanism for encoding of the voice onset time phonetic parameter. Cereb Cortex 2005;15:170–86.


[124] Stowe LA, Haverkort M, Zwarts F. Rethinking the neurological basis of language. Lingua 2005;115:997–1045. [125] Tanabe HC, Honda M, Sadato N. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning. J Neurosci 2005;25:6409–18. [126] Tervaniemi M, Szameitat AJ, Kruck S, Schröger E, Alter K, De Baene W, et al. From air oscillations to music and speech: functional magnetic resonance imaging evidence for fine-tuned neural networks in audition. J Neurosci 2006;26:8647–52. [127] Tzourio-Mazoyer N, Josse G, Crivello F, Mazoyer B. Interindividual variability in the hemispheric organization for speech. NeuroImage 2004;21:422–35. [128] Uppenkamp S, Johnsrude IS, Norris D, Marslen-Wilson W, Patterson RD. Locating the initial stages of speech-sound processing in human temporal cortex. NeuroImage 2006;31:1284–96. [129] van Hooff JARAM. A comparative approach to the phylogeny of laughter and smiling. In: Hinde RA, editor. Non-verbal communication. Cambridge: Cambridge University; 1972. p. 209–41. [130] Van Lancker D. Cerebral lateralization of pitch cues in the linguistic signal. Papers Ling 1980;13:201–77. [131] Van Lancker D, Sidtis JJ. The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: all errors are not created equal. J Speech Hear Res 1992;35:963–70. [132] Vigneau M, Beaucousin V, Hervé PY, Duffau H, Crivello F, Houdé O, et al. Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. NeuroImage 2006;30:1414–32. [133] von Kriegstein K, Eger E, Kleinschmidt A, Giraud AL. Modulation of neural responses to speech by directing attention to voices or verbal content. Brain Res Cogn Brain Res 2003;17:48–55. [134] von Kriegstein K, Giraud AL. Distinct functional substrates along the right superior temporal sulcus for the processing of voices. NeuroImage 2004;22:948–55. [135] Vouloumanos A, Kiehl K, Werker JF, Liddle PF.
Detection of sounds in the auditory stream: event-related fMRI evidence for differential activation to speech and nonspeech. J Cog Neurosci 2001;13:994–1005. [136] Wallace MN, Johnston PW, Palmer AR. Histochemical identification of cortical areas in the auditory region of the human brain. Exp Brain Res 2002;143:499–508. [137] Waller B, Dunbar RIM. Differential behavioural effects of silent bared teeth display and relaxed open mouth display in chimpanzees (Pan troglodytes). Ethology 2005;111:129–42. [138] Wang X, Kadia SC. Differential representation of species-specific primate vocalizations in the auditory cortices of marmoset and cat. J Neurophysiol 2001;86:2616–20. [139] Warren J, Griffiths T. Distinct mechanisms for processing spatial sequences and pitch sequences in the human auditory brain. J Neurosci 2003;23:5799–804. [140] Warren J, Jennings AR, Griffiths T. Analysis of the spectral envelope of sounds by the human brain. NeuroImage 2005;24:1052–7. [141] Warren J, Uppenkamp S, Patterson R, Griffiths T. Separating pitch chroma and pitch height in the human brain. Proc Natl Acad Sci USA 2003;100:10038–43. [142] Warren JD, Scott SK, Price CJ, Griffiths T. Human brain mechanisms for the early analysis of voices. NeuroImage 2006;31:1389–97. [143] Watson KK, Matthews BJ, Allman JM. Brain activation during sight gags and language-dependent humour. Cereb Cortex 2007;17:314–24. [144] Weinstein EA, Bender MB. Integrated facial patterns elicited by stimulation of the brainstem. Arch Psychiat 1943;50:34–42. [145] Wessinger CM, VanMeter J, Tian B, Pekar J, Rauschecker JP. Hierarchical organization of the human auditory cortex revealed by functional magnetic resonance imaging. J Cog Neurosci 2001;13:1–7. [146] Wild B, Rodden FA, Rapp A, Erb M, Grodd W, Ruch W. Humour and smiling. Cortical regions selective for cognitive, affective, and volitional components. Neurology 2006;66:887–93. [147] Wild B, Rodden FA, Grodd W, Ruch W.
Neural correlates of laughter and humour. Brain 2003;126:2121–38.



[148] Wildgruber D, Ackermann H, Klose U, Kardatzki B, Grodd W. Functional lateralization of speech production at primary motor cortex: a fMRI study. NeuroReport 1996;7:2791–5. [149] Wildgruber D, Ackermann H, Kreifelts B, Ethofer T. Cerebral processing of affective and linguistic prosody: fMRI studies. In: Anders S, Ende G, Junghofer M, Kissler J, Wildgruber D, editors. Progress in brain research, Vol. 156. Amsterdam: Elsevier; 2006. p. 249–68. [150] Wildgruber D, Pihan H, Erb M, Ackermann H, Grodd W. Dynamic brain activation during processing of emotional intonation: influence of acoustic parameters, emotional valence, and sex. NeuroImage 2002;15:856–69. [151] Wildgruber D, Riecker A, Hertrich I, Erb M, Grodd W, Ethofer T, et al. Identification of emotional intonation evaluated by fMRI. NeuroImage 2005;24:1233–41. [152] Wilson SM, Saygin AP, Sereno MI, Iacoboni M. Listening to speech activates motor areas involved in speech production. Nat Neurosci 2004;7:701–2.

[153] Wise RJS. Language systems in normal and aphasic human subjects: functional imaging studies and inferences from animal studies. Br Med Bull 2003;65:95–119. [154] Wise RJS, Greene J, Buchel C, Scott SK. Brain regions involved in articulation. Lancet 1999;353:1057–61. [155] Wright TM, Pelphrey KA, Allsion T, McKeown MJ, McCarthy G. Polysensory interactions along lateral temporal regions evoked by audiovisual speech. Cereb Cortex 2003;13:1034–43. [156] Yukie M. Connections between the amygdala and auditory cortical areas in the macaque monkey. Neurosci Res 2002;42:219–29. [157] Zaehle T, W¨ustenberg T, Meyer M, Jancke L. Evidence for rapid auditory perception as the foundation of speech processing – a sparse temporal sampling fMRI study. Eur J Neurosci 2004;20:2447–56. [158] Zatorre RJ, Belin P. Spectral and temporal processing in human auditory cortex. Cereb Cortex 2001;11:946–53.