The neurophysiological correlates of face processing in adults and children with Asperger’s syndrome


Brain and Cognition 59 (2005) 82–95 www.elsevier.com/locate/b&c

Kate O'Connor *, Jeff P. Hamm, Ian J. Kirk

Research Centre for Cognitive Neuroscience, Department of Psychology, University of Auckland, Auckland, New Zealand

Accepted 16 May 2005. Available online 11 July 2005.

Abstract

Past research has found evidence for face and emotional expression processing differences between individuals with Asperger's syndrome (AS) and neurotypical (NT) controls at both the neurological and behavioural levels. The aim of the present study was to examine the neurophysiological basis of emotional expression processing in children and adults with AS relative to age- and gender-matched NT controls. High-density event-related potentials were recorded during explicit processing of happy, sad, angry, scared, and neutral faces. Adults with AS were found to exhibit delayed P1 and N170 latencies and smaller N170 amplitudes in comparison to control subjects for all expressions. This may reflect impaired holistic and configural processing of faces in AS adults. However, these differences were not observed between AS and control children. This may result from incomplete development of the neuronal generators of these ERP components and/or early intervention.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Autism; Asperger's syndrome; Face processing; ERPs; EEG

1. Introduction

For many of us, the ability to empathize and interact with others is an intuitive process and requires limited effort. However, for individuals with Asperger's syndrome (AS), relating to and understanding other human beings is often difficult. Asperger's syndrome is a neurodevelopmental disorder, mainly affecting non-verbal communication and sensory processing. People with AS also have restricted interests, exhibit repetitive and stereotyped behavioural responses, and enjoy routine. The symptomatology of AS is similar to autism, but without the associated language or cognitive delay (Attwood, 1998). For this reason, autism and AS are often classified as autistic spectrum disorders (ASDs), with AS at the higher end of the spectrum (Macintosh & Dissanayake, 2004).

* Corresponding author. Fax: +11 649 373 7450. E-mail address: [email protected] (K. O'Connor).

0278-2626/$ - see front matter © 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.bandc.2005.05.004

However, although individuals with AS often want to interact with others, they experience great difficulty (Attwood, 1998; Birch, 2003; Miller, 2003). Two main theories for these interaction deficits are apparent in the autism literature. The first theory proposes that social deficits in ASD result from a general impairment in "theory of mind" (ToM), the ability to attribute thoughts and intentions to others. Past research has found evidence for impairment on first- and second-order ToM tasks in children with autism, and on more complex ToM tasks in both children and adults with AS (Baron-Cohen, 1989; Baron-Cohen, Leslie, & Frith, 1985; Baron-Cohen, O'Riordan, Stone, Jones, & Plaisted, 1999; Happe, 1994). Similarly, deficits in joint attention and imitation (potential precursors of ToM development) have been documented in individuals with ASD (Charman, 2003; Williams, Whiten, Suddendorf, & Perrett, 2001). The second theory is that social interaction difficulties in ASD may arise from a general deficit in central


coherence, the ability to integrate local details into a coherent or 'global' whole (see Frith & Happe, 1994 for a review). This theory explains the processing style common to ASD, which is biased towards processing details over general meaning (Vermeulen, 2001). Furthermore, the use of local processing strategies to comprehend social interactions, which involve the simultaneous integration of visual, auditory, tactile, and even olfactory information, would leave an individual with AS at a serious disadvantage.

Past research has found evidence for face processing differences in individuals with AS relative to NTs. For example, whereas faces are mainly processed holistically (as perceptual wholes) in NTs, individuals on the autistic spectrum appear to favour a more feature-based, "analytical" approach. Evidence for this has been found using inverted faces, which are thought to be processed using predominantly analytical strategies (Tanaka & Farah, 1993; Yin, 1969). Inverting faces impairs face recognition in NTs through disruption of both configural processing (the ability to process spatial relationships between facial features) and holistic processing (Itier & Taylor, 2002). However, this procedure does not always impair face recognition in individuals with ASD, who tend to show similar performance for recognition of upright and inverted faces (Hobson, Ouston, & Lee, 1988; Langdell, 1978). These findings suggest that the ability to process faces holistically and/or process facial configurations may be impaired in ASD, or that individuals with ASD prefer to use other face processing strategies (i.e., an analytical strategy). The latter explanation probably has more empirical support, as recent research suggests that configural processing can occur in AS when face recognition is dependent on the mouth (Joseph & Tanaka, 2003). Other studies have found evidence for abnormal processing of features in ASD. For example, past research has shown that NTs fixate more on the eye than the mouth region of faces, while individuals with AS focus less on the eyes and instead devote greater attention to the mouth (Klin, Jones, Schultz, Volkmar, & Cohen, 2002; Klin, Jones, Schultz, & Volkmar, 2003; Joseph & Tanaka, 2003). Furthermore, adults with AS have difficulty identifying complex emotional expressions from the eye region (Baron-Cohen, Wheelwright, & Jolliffe, 1997).

Evidence for face processing differences between individuals with ASD and NT controls is also found at a neurological level. Several studies have shown individuals with ASD to exhibit hypoactivation of the right fusiform gyrus during face processing, a region activated by extremely familiar stimuli (Hubl et al., 2003; Pierce, Muller, Ambrose, Allen, & Courchesne, 2001; Schultz et al., 2000). Decreased activity has also been observed in the superior temporal sulcus (STS), involved in the detection of biological motion such as eye gaze (Pierce


et al., 2001; Puce, Allison, Bentin, Gore, & McCarthy, 1998). Furthermore, increased activity in the inferior temporal and lateral occipital regions has been observed in individuals with ASD in response to faces. More importantly, these regions exhibit greater activation to objects in NT control subjects. Together, these findings provide further evidence that analytical processing strategies may be used to process faces in ASD (Hubl et al., 2003; Schultz et al., 2000). Differences are also observed during emotional face processing in individuals with ASD relative to NT controls. For example, adults with ASD have been found to exhibit decreased activation of the left inferior frontal gyrus (IFG) and insula during identification of complex emotions from the eyes alone, and in the left middle frontal gyrus during recognition of fearful faces (Baron-Cohen, Ring, et al., 1999; Ogai et al., 2003). Furthermore, decreased amygdala activation has been observed in response to both emotional and neutral faces in ASD adults (Critchley et al., 2000; Pierce et al., 2001). Interestingly, Carr, Iacoboni, Dubeau, Mazziotta, and Lenzi (2003) postulate that the STS, IFG, insula, and amygdala may form part of a circuit involved in empathy, which may explain the ToM difficulties common to ASD.

A large number of studies have investigated the neurophysiological basis of face processing in NTs using event-related potentials (ERPs). The initial categorization of a stimulus as a face has been shown to occur as early as 100 ms (the P1 component). A few (but not all) studies have found evidence that P1 may also reflect an early face processing stage. For example, some studies have observed P1 to be smaller and/or earlier to upright relative to inverted faces and objects (Taylor, Edmonds, McCarthy, & Allison, 2001; Itier & Taylor, 2002, 2004). However, in contrast to P1, a negative deflection occurring around 170 ms (between approximately 140 and 200 ms) is consistently elicited by faces, and is largest in amplitude over posterior temporal electrodes (Taylor, Batty, & Itier, 2004). Termed the N170, this component has been shown to be larger to human faces than to objects (furniture, flowers, etc.), animal faces, and human hands (Bentin, Allison, Puce, Perez, & McCarthy, 1996). Several research groups have implicated the N170 component in configural and/or holistic processing. For example, the N170 is delayed and/or larger in amplitude in response to inverted relative to upright faces (Itier & Taylor, 2004; McPartland, Dawson, Webb, Panagiotides, & Carver, 2004). Finally, some studies have observed that the P2 component is also sensitive to inverted faces, although this component has been examined less extensively than the N170 (Itier & Taylor, 2002; Rebai, Poiroux, Bernard, & Lalonde, 2001). Developmental studies have found that the N290 and P400 components in 12-month-old infants are


modulated by inverted faces in a similar manner to the N170. This has led de Haan, Johnson, and Halit (2003) to propose that either one or both of these components may function as the precursor(s) of the N170 observed in older children and adults. Furthermore, the P1 and N170 responses to faces do not reach full maturity until adulthood, decreasing in amplitude and occurring at earlier latencies throughout the child and adolescent years (Taylor, McCarthy, Saliba, & Degiovanni, 1999; Taylor et al., 2001; Taylor et al., 2004). Another interesting finding is that, whereas P1 is modulated by inverted faces from the age of 4 years onwards, the N170 component does not appear to distinguish between upright and inverted faces until late childhood/adolescence (Taylor et al., 2004). In accordance with Taylor et al. (2004), who propose that the P1 and N170 components reflect holistic and configural face processing, respectively, these findings suggest that processing of facial configurations takes longer to develop than the ability to process faces holistically.

Emotional modulation of ERP and magnetoencephalography (MEG) components in response to facial expression appears to occur predominantly at later processing stages. For example, increased amplitudes relative to neutral expressions and different ERP profiles among various expressions have been detected between 220–550 and 500–750 ms over frontal-central electrodes (Batty & Taylor, 2003; Krolak-Salmon, Fischer, Vighetto, & Mauguiere, 2001; Munte et al., 1998; Orozco & Ehlers, 1998; Sato, Kochiyama, Yoshikawa, & Matsumura, 2001; Schupp et al., 2004). Interestingly, recent ERP and MEG studies have found evidence for emotional modulation as early as 120 ms (Holmes, Vuilleumier, & Eimer, 2003; Streit et al., 2003). However, evidence for early modulation of neurophysiological activity by emotions is not common.
Moreover, relatively few studies have provided evidence for modulation of the N170 component by emotional expression (Batty & Taylor, 2003; Eimer & Holmes, 2003; Holmes et al., 2003). Together, these findings suggest that later components are involved in processing emotional expression.

Few studies have examined the neurophysiological correlates of face processing in ASD. Dawson et al. (2002) found that typically developing 3- to 4-year-old children exhibit ERP differences to novel and familiar faces at around 400 ms. This difference was not observed in children with ASD, however. A more recent study by Dawson, Webb, Carver, Panagiotides, and McPartland (2004) found that, unlike age-matched NT controls, 3- to 4-year-old children with ASD failed to show increased amplitudes to fearful relative to neutral expressions over posterior electrodes between 270–300 and 810–1170 ms. Two recent studies found evidence for differences in the N170 response to faces between AS and NT adults. McPartland et al. (2004) showed that N170 latency to faces over bilateral

posterior temporal regions was delayed by approximately 17 ms in a mixed sample of six adults with AS and three with Autistic Disorder in comparison to NT controls. Further, in contrast to NTs, subjects with ASD did not exhibit a delay in N170 latency to inverted relative to upright faces. Similar findings were observed by Grice et al. (2001) in eight adults with ASD over bilateral temporal-occipital regions. In addition, although a direct statistical comparison was not made, their ERP data suggest that adults with ASD exhibit decreased amplitudes to both inverted and upright faces relative to NT controls. Another interesting finding by this research group was the discovery of larger bursts of gamma (γ) activity to upright relative to inverted faces in NT, but not ASD, subjects. Interestingly, past studies have found an association between γ-band bursts and the integration of separate stimulus parts into a coherent whole (Herrmann, Mecklinger, & Pfeifer, 1999; Tallon-Baudry, Bertrand, & Delpuech, 1996). Together, these findings provide further evidence for impaired holistic and/or configural processing of faces in ASD.

In the present experiment, subjects were presented with (and asked to recognise) happy, sad, angry, scared, and neutral facial expressions while ERPs were collected. An explicit task was used to compare accuracy between NT and AS subjects. The principal objective was to determine whether differences in ERP component amplitudes and latencies can be observed in AS adults compared to age- and gender-matched NT control adults, and whether similar differences (if any) would be observed between AS and NT children. Clear hypotheses were difficult to formulate, given the lack of developmental data for ERPs to facial expression in ASD. However, based on the research discussed above, it was predicted that differences between emotional and neutral faces might be observed between NT and AS subjects at later ERP components. In addition, an abnormality in the N170 response to neutral and emotional expressions was predicted to occur in AS subjects relative to age-matched controls, in accordance with past ERP face processing data in ASD.

2. Methods

2.1. Subjects

The initial sample included 17 control (NT) adults, 19 AS adults, 19 control (NT) children, and 21 AS children. Of these subjects, 2 control adults, 4 AS adults, 4 control children, and 6 AS children provided too few artifact-free trials and were removed from the sample. The final sample in both the adult and child studies therefore consisted of 15 control and 15 AS subjects, all of whom had normal or corrected-to-normal vision.


Control subjects were selected from the Auckland community, while subjects with AS were recruited through the Auckland Autistic Association. AS subjects had been diagnosed by a registered medical professional experienced with autistic spectrum disorders and satisfied the DSM-IV (APA, 1994) criteria for AS. In addition, all children with AS met the criteria for AS as defined by the Australian Scale for Asperger's Syndrome (Attwood, 1998). Seven children and four adults with AS were on medication for the treatment of depression and/or anxiety. We did not test IQ, since individuals with AS often show an uneven profile of abilities on IQ tests which does not reflect their academic attainment (Attwood, 1998). However, within both age groups, control and AS subjects had received a similar number of years of education and were not behind their peers in terms of academic achievement. Additionally, individuals with AS exhibited difficulties with social interaction according to parental and/or personal report and as evidenced by their behaviour (abnormal eye contact, limited facial expression, literal speech with flat affect) compared to controls. Subjects with an existing neurological condition (epilepsy, head injury, significant sensorimotor impairment, schizophrenia, or dyslexia) were excluded.

Control and AS subjects were matched with respect to age and gender. In the adult study, control and AS subjects had a mean age of 24.8 years (SD = 8.7 years) and 24.6 years (SD = 8.8 years), respectively, each group consisting of 14 males and 2 females. The age range in each adult group was from 18 to 45 years, with the majority of subjects under the age of 30. All children were male, with a mean age of 11.2 years (SD = 1.8 years) in control and 11.6 years (SD = 1.9 years) in AS subjects. Seventy-three and sixty percent of children were aged between 9 and 10 years in control and AS subjects, respectively, while the remainder in each group were between 11 and 15 years. The experimental protocol was approved by the University of Auckland Human Subjects Ethics Committee and written consent was obtained from all subjects prior to participation.

2.2. Facial stimuli

Colour photographs of unfamiliar actors expressing the basic expressions happy, sad, angry, and scared were


selected from QuickTime files in the Mind Reading Emotions Library (Baron-Cohen, Golan, Wheelwright, & Hill, 2003). A neutral (expressionless) face for each actor was also selected. All photographs were cropped to remove the ears, shoulders, and part of the hair so that the face was the central focus. Greyscale images were obtained using Adobe Photoshop software so that skin colour and tone did not detract from the emotional expression. Each facial expression was rated by 48 neurotypical adults. Expressions recognised with a high degree of accuracy (>85%) were selected for the present task (Fig. 1). The final stimulus set contained photographs of 10 adults and 4 children (seven males and seven females) expressing each of the five emotions (a total of 70 stimuli). The experiment was programmed and run using E-Prime (Psychology Software Tools, version 1.0 Beta 5) software on a Pentium II/200 ('display') computer. The stimuli were centred on a 15-in. flat-screen SVGA monitor with a 640 × 480 pixel resolution. All stimuli subtended vertical and horizontal visual angles of 4.3° and 5.3°, respectively, when viewed at a distance of 70 cm, and were presented against a light grey background.

2.3. Procedure

Subjects were seated approximately 70 cm from the SVGA monitor in an electrically shielded, sound-attenuated room. Three blocks of 70 stimuli were presented, separated by two 5-min rest intervals. Each trial commenced with a fixation cross (1000 ms), followed by a randomly presented facial expression (1000 ms). Following an inter-stimulus interval (1000 ms) during which the screen was left blank, a response screen was presented. This consisted of a vertical list with the words scared, angry, neutral, sad, and happy (in black) overlaid on coloured boxes. Subjects were required to verbalize the word which described how the person in the photograph was feeling. The experimenter was seated at right angles to the subject during the experiment and keyed in the response (coded numerically).
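As a side note on the stimulus geometry reported above, the visual angles follow from simple trigonometry: angle = 2·arctan(size / (2·distance)). The paper does not state the physical stimulus sizes, so the sketch below (with hypothetical helper names) merely recovers the sizes implied by the reported 4.3° × 5.3° at 70 cm.

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by a stimulus of the given size at the given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def implied_size_cm(angle_deg, distance_cm):
    """Invert the relation: physical size implied by a reported visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

# The paper reports 4.3 deg (vertical) and 5.3 deg (horizontal) at 70 cm,
# implying images roughly 5.3 cm tall and 6.5 cm wide on screen.
height_cm = implied_size_cm(4.3, 70)
width_cm = implied_size_cm(5.3, 70)
```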
Once the response was entered, the next trial began. Subjects were instructed to refrain from moving their eyes, blinking, twitching, and clenching their jaw during stimulus presentation to help avoid artifacts. All subjects completed a brief practice session, consisting of 10 faces that were not presented in the experiment, to ensure each subject understood the task.

Fig. 1. Example of the five expressions presented in this study. Each expression was presented by 14 different people.

Fig. 2. (A) Electrical Geodesics sensor net with 10–20 positions marked. Electrodes used for amplitude and latency measurements for each component in (B) adults and (C) children. Time windows are in brackets.

2.4. Electroencephalogram acquisition

Subjects were fitted with an appropriate-sized, high-density 128-channel Ag/AgCl electrode net (Electrical Geodesics Inc.; Tucker, 1993; Fig. 2). The Geodesic sensor net distributes electrodes from nasion to inion and from left to right mastoids at uniform intervals. However, due to the geometry of the sensor net and differences in scalp anatomy between subjects, electrode positions may vary 1–2 cm from their standard location for any given subject. The impedance of each electrode was set below 50 kΩ (range 40–50 kΩ) by ensuring each sensor was hydrated and in contact with the scalp. This range is acceptable given the high input impedance amplifiers of this system (Tucker, 1993). Electrodes located at the outer canthi and above and below the left and right eyes were used to record horizontal and vertical electrooculogram (EOG), respectively. Electroencephalogram (EEG) was recorded continuously from all electrodes and the signal amplified (1000×) and filtered through a 0.1–100 Hz analogue bandpass filter using the Electrical Geodesics, Inc., preamplifier system (200 MΩ input impedance). A Macintosh G4 acquisition computer with a 16-bit analogue-to-digital conversion card (National Instruments PCI-1200) was used to multiplex and digitize the signal at 250 Hz and to store the data on the computer's hard disk. To ensure stimulus presentation was synchronized with the EEG record, triggers corresponding to stimulus category were sent as TTL pulses at stimulus onset from the 'display' to the EEG acquisition computer (via the parallel port). EEG data were collected using the vertex (Cz) electrode reference and were then re-referenced offline to the average. Using the average reference includes Cz as an active electrode, resulting in 129 data channels.

2.5. EEG processing

EEG data were segmented as a function of trigger type (scared, angry, neutral, sad, or happy) and included correct and incorrect behavioural responses to increase the signal-to-noise ratio.¹ Segments (epochs) of 800 ms (100 ms pre-, 700 ms post-stimulus) were obtained for each stimulus category, each epoch containing 201 data points. Trials in which any of the electrooculogram (EOG) channels exceeded 100 µV were discarded, and the remaining trials were corrected for residual eye movement artifacts using procedures from Jervis, Nichols, Allen, Hudson, and Johnson (1985). These trials were averaged for each subject to produce a total of five ERPs (one for each expression). ERPs from individual subjects were then grand-averaged as a function of expression, and each grand-averaged waveform was digitally filtered using a 30 Hz low-pass Butterworth filter. DC offsets were calculated from the pre-stimulus baseline and removed from all waveforms. On average, 36 ± 4 (SD) trials were accepted for each expression in control adults, 33 ± 7 (SD) in AS adults, 29 ± 7 (SD) in control children, and 26 ± 8 (SD) in AS children.

¹ The results did not change significantly when only correct trials were examined.

2.6. ERP analysis

Visualization of grand-averaged waveforms in adults and children (both control and AS) for each expression


revealed the presence of three main peaks.² These peaks had latency and scalp distributions typical of the P1, N170, and P2 components and were thus named accordingly. Topographical plots were visualized for each component to determine regions of maximal amplitude in each hemisphere. Electrode groups were then selected within these regions.

2.6.1. Adults

For P1 these groups were located at occipital sites and consisted of Electrical Geodesics (Electrical Geodesics, Eugene, OR) sensor numbers 65, 66, 70, and 71 (O1) on the left and 84 (O2), 85, 90, and 91 on the right. The N170 was measured over occipital-temporal regions and included sensor numbers 58 (T5), 59, 64, 65, 69, and 70 on the left and 90, 91, 92, 95, 96, and 97 (T6) on the right. P2 included sensor numbers 65, 66, 70, and 71 (O1) on the left and 84 (O2), 85, 90, and 91 on the right, located over occipital regions. These electrode groups are similar to those used to measure the P1, N170, and P2 components in other studies that have used high-density electrode arrays (Halit, de Haan, & Johnson, 2000; McPartland et al., 2004; Schupp et al., 2004).

2.6.2. Children

Maximal amplitudes for the P1 component in control and AS subjects occurred over the same electrodes as adults (sensor numbers 65, 66, 70, and 71 (O1) on the left and 84 (O2), 85, 90, and 91 on the right). However, in comparison to adults, the N170 was largest over occipital-parietal regions in control and AS children and was measured over sensor numbers 59, 60 (P3), 61, 65, 66, and 67 on the left and 78, 79, 85, 86 (P4), 91, and 92 on the right in both groups. The P2 was measured over temporal-parietal regions and included sensor numbers 58 (T5), 59, 60 (P3), 64, 65, and 66 on the left and 85, 86 (P4), 91, 92, 96, and 97 (T6) on the right.

The selection of time windows was based on visual inspection of ERPs within the selected electrode groups across hemispheres for each expression. The data from individual participants were also inspected to ensure the selected time windows incorporated the component of interest from all participants. The overall time windows across all expressions and groups ranged from 124 to 152 ms (P1), 164–200 ms (N170), and 236–272 ms (P2) in adults, and 124–152 ms (P1), 204–240 ms (N170), and 320–352 ms (P2) in children. For each participant: (a) peak amplitudes were extracted across the selected electrode groups within the designated time windows and (b) the latency to peak was measured for every electrode within the specified electrode groups for each component. These values were averaged across electrode groups in the right and left hemispheres for the components of interest (P1, N170, and P2) in every participant. Amplitude and latency data for each component were analysed using repeated measures analyses of variance (ANOVA).

² We did not analyse ERPs at later latencies due to the high degree of individual variation within groups and increased noise levels over electrodes in both occipito-temporal and fronto-central regions to all expressions. This resulted in difficulty identifying components of interest and defining appropriate time windows.

2.7. Comparison of scalp distributions

Two-way between-groups ANOVAs with the factors expression and electrode were performed on amplitude data within each time window to identify the presence of scalp distribution differences both within and between groups. However, prior to this analysis the data were first normalized to remove variations in amplitude. This process ensures that differences in scalp distribution are the main focus of the analysis (McCarthy & Wood, 1985). Greenhouse-Geisser corrected degrees of freedom were used for all ANOVAs to control for Type I errors associated with violation of the assumptions of ANOVA. Scalp distribution differences between groups were identified when a significant electrode by expression by group or electrode by group interaction was present. A significant electrode by expression interaction was indicative of a within-groups difference. Significant scalp distribution differences between groups indicate they may have been produced by different underlying neuronal generators (Picton et al., 2000).

3. Results

3.1. Behavioural results

Accuracy scores from each subject were converted to percentages and analysed using a repeated measures ANOVA with the factors expression (happy, sad, angry, scared, and neutral) and group (control versus AS). In adults, main effects were observed for expression (F(4, 28) = 36.68, p < .001) and group (F(1, 28) = 18.01, p < .001). The interaction between these factors was also significant (F(4, 28) = 5.20, p = .001). Further analysis of this interaction revealed significant differences between groups for neutral, sad, and angry expressions (t(28) = 3.01, p < .05; t(28) = 2.68, p < .05; and t(28) = 2.57, p < .05, respectively), with NT controls obtaining greater accuracy scores than AS adults (Fig. 3A). Significant differences between groups were not present for happy and scared expressions (t(28) = 1.71, p = .098 and t(28) = 1.40, p = .173). However, in children, a significant main effect occurred for expression (F(4, 27) = 41.93, p < .001), but not for group (F(1, 30) = 1.50, p > .05). The expression by group interaction was not significant (F(4, 27) = 1.96,


Fig. 3. Mean accuracy (percentage correct) for angry, happy, neutral, sad, and scared expressions between (A) control versus AS subjects in both age groups and (B) adult versus child comparisons in control and AS subjects. An asterisk indicates a significant difference between groups. Error bars represent standard errors.

p > .05). This shows that performance was not statistically different between NT control and AS children for any expression (Fig. 3A). To investigate developmental differences in expression recognition, an ANOVA was performed on accuracy scores from children and adults in the control and AS groups, respectively. In control subjects, a significant main effect occurred for emotion (F(4, 27) = 53.57, p < .01) and group (F(1, 30) = 25.97, p < .01). The interaction between these factors was also significant (F(4, 27) = 3.73, p < .01). Further analysis of this interaction revealed significant differences between groups for neutral (t(30) = 2.73, p < .05) and sad (t(30) = 3.33, p < .05) expressions. However, in AS subjects a significant main effect was present for emotion (F(4, 27) = 37.60, p < .001), but not for group (F(1, 30) = .20, p > .05). The emotion by group interaction was not significant (F(4, 27) = .29, p > .05). These findings suggest an improvement with age for sad and neutral expressions in control, but not AS, subjects (see Fig. 3B).

3.2. ERP data

A repeated measures ANOVA with the factors group (control versus AS), expression (happy, sad, angry, scared, and neutral), and hemisphere (left versus right) was performed on amplitude and latency data within each time window. Bonferroni-corrected post hoc tests were used to interpret significant interaction effects. Comparisons were made between control and AS subjects in both adult and child groups for each component

of interest. Average waveforms are shown in Fig. 4 and average amplitude and latency are listed in Table 1 for each component.

3.2.1. Adult group

The P1 component: analysis of P1 amplitude revealed no significant main effect of expression, hemisphere, or group, and no significant interactions among any of these variables (all p ≥ .115). Similarly, for P1 latency the effects of expression and hemisphere were not significant (all p ≥ .225); however, there was a significant effect of group (F(1, 28) = 5.25, p < .05), with AS subjects exhibiting delayed latencies to all expressions relative to controls (t(28) = 2.30, p < .05), as shown in Fig. 5A.

The N170 component: ANOVA performed on N170 amplitude revealed a significant effect of group (F(1, 28) = 11.09, p < .05), with controls eliciting larger amplitudes to expressions in comparison to subjects with AS (t(28) = 3.33, p < .05), as in Fig. 5B. In addition, a main effect of hemisphere was observed (F(1, 112) = 11.34, p < .05), with larger amplitudes in the right than the left hemisphere (t(28) = 3.37, p < .05). There was no significant effect of expression and no significant interaction between any of the variables (all p ≥ .114). Furthermore, a significant difference between groups was also apparent for N170 latency (F(1, 28) = 17.70, p < .05), with AS subjects exhibiting longer latencies to expressions than controls (t(28) = 4.21, p < .05), as depicted in Fig. 5C. However, all other main effects and interactions were non-significant (all p ≥ .151).

The P2 component: for P2 amplitude and latency there were no significant effects of group, expression, or

K. O'Connor et al. / Brain and Cognition 59 (2005) 82–95

89

Fig. 4. Waveforms averaged over all expressions and electrodes of interest for control and AS subjects within adult (A) and child (B) groups for each hemisphere. All ERP waveforms have a 100 ms pre- and a 700 ms post-stimulus period. Positive amplitudes are plotted upwards and time windows indicated with vertical arrows. Table 1 Mean amplitude (V) and latency (ms) for control and AS subjects in adults and children for each hemisphere and component (P1, N170, and P2) Age Group:

Adult  (SD)

Hemisphere:

Left

P1 Voltage (V) Control AS Latency (ms) Control AS N170 Voltage (V) Control AS Latency (ms) Control AS

Left

Right

2.40 (3.23) 3.80 (2.68)

1.90 (2.76) 3.70 (2.66)

5.97 (3.71) 5.75 (3.25)

124.47 (11.68) 130.33 (12.44)

123.33 (10.81) 132.77 (12.42)

141.43 (9.12) 140.29 (10.63)

143.07 (9.98) 144.31 (9.71)

¡4.19 (1.77) ¡1.87 (2.25)

¡4.69 (1.79) ¡3.29 (1.78)

¡4.14 (3.65) ¡3.93 (4.21)

¡4.49 (3.53) ¡4.43 (3.88)

173.47 (10.78) 198.00 (20.51)

175.10 (9.77) 196.21 (19.10)

217.71 (14.48) 213.72 (12.16)

217.43 (14.82) 213.14 (13.13)

2.43 (3.13) 2.26 (3.07)

3.35 (3.12) 2.48 (3.25)

2.22 (2.29) 2.84 (2.42)

3.75 (2.65) 3.44 (1.62)

238.79 (15.10) 249.03 (21.77)

239.32 (14.07) 249.13 (24.30)

283.11 (42.52) 292.62 (41.56)

283.21 (43.82) 282.14 (44.21)

P2 Voltage (V) Control AS Latency (ms) Control AS

Child  (SD) Right

5.21 (3.46) 5.52 (3.39)

Standard deviations are in brackets.

interaction between any of the variables (all p 7 1.0). There was, however, an eVect of hemisphere (F (1, 28) D 8.12, p < .05), with larger amplitudes in the right hemisphere (t (28) D 2.84, p < .05).

dren found no signiWcant eVects of expression, Group or interaction between any variable (all p 7 .110, see Fig. 5). However, a main eVect of hemisphere was observed for P2 amplitude, with larger amplitudes in the right hemisphere.

3.2.2. Child group Repeated measures ANOVA conducted on amplitudes and latencies for the P1, N170, and P2 components in chil-
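The component measures analysed above (peak amplitude and latency within a fixed post-stimulus window) can be illustrated with a short sketch. This is not the authors' analysis code: the 250 Hz sampling rate, the Gaussian toy waveform, and the exact window bounds are illustrative assumptions, although the 100 ms pre-stimulus baseline matches the epochs described for Fig. 4.

```python
import numpy as np

def peak_measure(erp, srate, prestim_ms, window_ms, polarity):
    """Return (amplitude, latency_ms) of the peak inside a time window.

    erp        : 1-D averaged waveform (uV)
    srate      : sampling rate in Hz (assumed value, not stated in the paper)
    prestim_ms : pre-stimulus baseline length in ms
    window_ms  : (start, end) of the search window, in ms post-stimulus
    polarity   : +1 for positive components (P1, P2), -1 for negative (N170)
    """
    start = int((prestim_ms + window_ms[0]) * srate / 1000)
    end = int((prestim_ms + window_ms[1]) * srate / 1000)
    segment = polarity * erp[start:end]      # flip sign so the peak is a maximum
    i = int(np.argmax(segment))
    amplitude = erp[start + i]               # report amplitude with original sign
    latency_ms = (start + i) * 1000 / srate - prestim_ms
    return amplitude, latency_ms

# Synthetic waveform: 100 ms baseline, a P1-like positivity near 120 ms,
# and an N170-like negativity near 175 ms (all values hypothetical).
srate, prestim = 250, 100
t = np.arange(int(0.8 * srate)) * 1000 / srate - prestim   # -100 .. ~700 ms
erp = (3.0 * np.exp(-((t - 120) / 20) ** 2)
       - 4.0 * np.exp(-((t - 175) / 25) ** 2))

p1 = peak_measure(erp, srate, prestim, (80, 160), +1)
n170 = peak_measure(erp, srate, prestim, (140, 220), -1)
```

In a real analysis the same routine would be run per subject, electrode, and condition, and the resulting amplitude/latency values would feed the repeated measures ANOVAs reported above.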

Fig. 5. Mean P1 latency, N170 amplitude, and N170 latency to all expressions in each hemisphere between control and AS subjects for adults and children. Error bars represent standard errors. Significant differences are denoted by an asterisk.

3.2.3. Adult versus child
Given that there was no significant effect of expression in either age group for control or AS subjects, the data were first averaged across expression within each group for adults and children, respectively. Repeated measures ANOVA was then used to compare amplitude and latency data for each component with the factors Group (control adult, AS adult, control child, and AS child) and Hemisphere (left versus right). Significant interactions were investigated further using Bonferroni-corrected post hoc tests.

The P1 component: analysis of P1 amplitude and latency found a significant effect of Group (F(3,56) = 13.14, p < .05 and F(3,56) = 10.35, p < .05, respectively), with children in both groups eliciting larger amplitudes that were delayed in latency relative to control and AS adults (all p ≤ .05). The effect of hemisphere and the hemisphere by Group interaction were not significant (all p ≥ .608).

The N170 component: for N170 amplitude a significant main effect of hemisphere was observed (F(1,56) = 4.87, p < .05), with larger amplitudes elicited in the right hemisphere to faces (t(56) = 2.21, p < .05).

However, the Group effect and the Group by hemisphere interaction were not significant (all p ≥ .247). In contrast, an effect of Group was present for N170 latency (F(3,56) = 29.01, p < .05), with control and AS children exhibiting delayed latencies in comparison to both adult groups (all p ≤ .05).

The P2 component: in a similar manner to the N170, a significant effect of hemisphere was observed for P2 amplitude (F(1,56) = 15.43, p < .05), with larger amplitudes to faces occurring over the right hemisphere (t(56) = 3.93, p < .05). There was no significant effect of Group or Group by hemisphere interaction (all p ≥ .230). Analysis of P2 latency identified a significant effect of Group (F(3,56) = 9.85, p < .05), with AS children showing delayed latencies to faces relative to control and AS adults (t(56) = 4.86, p < .05 and t(56) = 3.97, p < .05, respectively). Control children, however, were found to exhibit significant delays in P2 latency to faces relative to control adults only (t(56) = 3.29, p < .05). The hemisphere and interaction effects were not significant (all p ≥ .925).

Fig. 6. The spatial distribution of voltages within each time window for control and AS adults and children. Positive and negative voltages are marked as black dotted and black contour lines, respectively. The black and grey asterisks indicate significant N170 scalp distribution differences between adult and child control and AS subjects, respectively (see text for detail). Scale bars were chosen to best show each component. Electrodes that differed significantly in N170 topography between adults and children in (A) control and (B) AS groups are shown at right.

3.3. Topographical comparisons
Surface maps were plotted for control and AS adults and children within each time window using EMSE 4.2 analysis software (Source Signal Imaging, San Diego, USA; see Fig. 6). Comparison of topographical distributions within groups (condition by electrode interaction) in adults found that scalp topographies were not significantly different between expressions for any component (all p ≥ .229). More importantly, between-group comparisons (interaction of Group with electrode and/or condition) revealed that scalp distributions were not significantly different between control and AS subjects for any expression for the P1, N170, or P2 components (all p ≥ .094). Similarly, scalp distributions did not differ significantly within or between control and AS children for any component (all p ≥ .617). Together, these findings indicate that the underlying neuronal generators for each component were the same across all stimulus conditions, both within and between groups, for adults and children.

To investigate developmental changes in scalp distribution during face processing, topographical analyses were performed on averaged expression data from control adults versus children and AS adults versus children within each time window. This revealed significant differences between adults and children within the N170 time

window for control (F(3.08, 83.01) = 10.60, p < .05) and AS (F(3.30, 92.33) = 7.55, p < .05) subjects on 31 and 36 electrodes, respectively. There were no significant differences between adults and children within the P1 or P2 time windows for control or AS groups (all p ≥ .147).
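Topographic (Group × electrode) comparisons of this kind are conventionally preceded by the vector-scaling normalization of McCarthy and Wood (1985), a work the paper's reference list includes, so that overall amplitude differences between groups are not mistaken for differences in scalp distribution. Whether this exact step was applied here is not stated; the sketch below shows the standard procedure with a hypothetical four-electrode map.

```python
import numpy as np

def vector_scale(topography):
    """McCarthy & Wood (1985)-style normalization: divide an electrode
    vector by its Euclidean norm, so that only the *shape* of the scalp
    distribution, not its overall amplitude, enters the ANOVA."""
    topography = np.asarray(topography, dtype=float)
    return topography / np.linalg.norm(topography)

# Two synthetic topographies with the same shape but different gain:
# after scaling they coincide, so a Group x electrode ANOVA on the
# scaled data would find no distributional difference.
adult = np.array([1.0, -4.0, 2.0, -1.0])   # hypothetical 4-electrode map (uV)
child = 2.5 * adult                         # same shape, larger amplitude

scaled_adult = vector_scale(adult)
scaled_child = vector_scale(child)
print(np.allclose(scaled_adult, scaled_child))  # → True
```

The design choice matters because a raw-amplitude difference (such as the larger P1 in children reported above) would otherwise inflate the Group × electrode interaction even when the generators are identical.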

4. Discussion
The aim of the present study was to determine whether behavioural and neurophysiological differences are present in children and adults, with and without AS, during emotional expression processing. As the accuracy data did not correlate with our ERP findings, the behavioural and neurophysiological results will be discussed separately.

4.1. Behavioural findings
Our accuracy data suggest that NT and AS children do not differ in their ability to recognize basic expressions. In contrast, adults with AS are impaired in the recognition of sad, angry, and neutral expressions relative to controls. This unexpected finding of impaired emotion recognition in AS adults, but not AS children, may reflect early intervention in children with AS. For example, the majority of adults in this study were not diagnosed with AS until their late teens and had received limited support since diagnosis. In contrast, most children with AS had received some form of social-skills training, either through their school or privately, which may have improved their ability to recognize facial expressions. However, although expression recognition performance was similar between AS children and controls, children with AS exhibited difficulties with social interaction according to parental and/or personal report and as evidenced by their behaviour (abnormal eye contact, limited facial expression, literal speech with flat affect) in comparison to control subjects. This discrepancy may arise from the use of atypical cognitive strategies to correctly recognize facial expressions in AS. For example, individuals with AS may correctly identify facial expressions by analysing the spatial orientation of the eye and mouth regions without understanding how the person may be feeling emotionally.

Comparison of expression recognition across both age groups for NT and AS subjects found similar performance for happy, scared, and angry expressions. This suggests that the ability to recognize these expressions is fully developed in childhood. However, NT adults recognized neutral and sad faces more accurately than NT children, while there were no significant differences between AS children and AS adults in the recognition of either of these expressions. This finding suggests that, unlike in NT controls, the ability to recognize neutral and sad faces does not improve with age in those with AS. This may reflect the fact that these expressions are slightly more complex, owing to the ambiguous nature of neutral expressions and the close association of sad expressions with empathy, compared with angry, happy, and scared expressions. It is uncertain, however, whether greater accuracy would have occurred in AS adults for neutral and sad faces had they received social-skills training as children. Further research should follow the children in this study into adulthood to determine whether these differences persist.

4.2. Neurophysiological results
The present findings show that significant differences between control and AS adults occur predominantly within the N170 component during face processing. First, N170 amplitude in AS adults was significantly decreased in comparison to adult controls not only for neutral faces (as observed by Grice et al., 2001), but for all emotional expressions. In accordance with past research showing that ERP amplitudes become smaller as task difficulty increases, and with the potential role of the N170 as an index of configural face processing (Taylor et al., 2004; Taylor & Smith, 1995), this finding suggests that adults with AS are impaired at processing facial configurations rather than emotional expressions per se. Furthermore, both P1 and N170 latencies were delayed to faces in AS adults relative to controls. A delay in N170 latency to faces in AS is consistent with findings obtained by McPartland et al. (2004). However, the P1 component has not been investigated in individuals with AS before. Together, these findings suggest that adults with AS take longer to recruit the neuronal networks involved in holistic and configural processing of faces, reflected in delayed P1 and N170 latencies, respectively. A disruption in configural processing of faces in AS may result from greater reliance on individual facial features, leading to an impaired ability to integrate the spatial orientation of the eyes, nose, and mouth. Furthermore, these findings may reflect a more general processing difference in AS due to the tendency of people with ASD to favour local over configural processing (Frith & Happe, 1994; Vermeulen, 2001). This would result in a reduced propensity to develop associations between stimuli, creating difficulties in applying this information to other situations. Therefore, as social interactions require the simultaneous integration of multiple sensory modalities (gestures, facial expressions, posture, speech, tone, etc.), this processing style would lead to problems understanding social behaviour. The consequences of this deficit over time may explain the impaired ToM development in ASD, which is dependent on an ability to integrate social information.

However, in contrast to adults, N170 amplitude and latency differences were not observed between AS and NT children. This may be a consequence of incomplete development of the N170 component in children, which has been shown in past studies to continue to mature into adulthood (Taylor et al., 2004, 1999). Our findings are similar to these studies, in that a decrease in N170 latency occurred in adults relative to children. In addition, the presence of significant scalp distribution differences in the present and in past studies may reflect ongoing development of the neuronal generators of the N170.
Recent MRI research has shown grey matter density in the posterior temporal cortex to increase linearly with age, eventually reaching full maturity at around 30 years (Giedd et al., 1999; Sowell et al., 2003). Moreover, a recent fMRI study found that while adults and children both exhibited activation in the medial fusiform gyrus to faces, children showed a broader distribution pattern, activating regions both anterior and lateral to the fusiform region (Passarotti et al., 2003). These findings suggest that the neuronal regions involved in face processing become more localized with age as temporal grey matter density increases. In addition, incomplete cortical development in children may result in greater dependence on limbic circuits during face processing. Therefore, further research using a combination of MEG and fMRI should be undertaken to determine whether the neuronal correlates of face processing differ in deeper cortical layers and limbic regions, which are not reflected in the EEG.

Interestingly, qualitative examination of N170 potentials from each child revealed some variability in N170 amplitude within NT and AS subjects, respectively. In contrast, less variation was observed in data from NT and AS adults. This result is in accordance with findings from a recent fMRI study which observed some variation in posterior temporal activation between typically developing children (aged 10–12 years) during face processing (Passarotti et al., 2003). These researchers suggest this may reflect individual differences in development of the ventral system in children. This may also explain why significant N170 differences between NT and AS children were not observed in the present study. Furthermore, although both groups were age-matched, it is possible that N170 differences are present between NT and AS children at certain ages. Unfortunately, we were unable to investigate this possibility due to the limited sample size and an inability to recruit more subjects with AS. In addition, the majority of children in both groups were aged between 8 and 10 years and exhibited a medial-parietal N170 distribution, similar to that observed in past studies of children within a similar age range (Taylor et al., 2004). Further research should determine whether N170 differences are present between 14- and 15-year-old adolescents, who typically show an adult-like distribution of the N170.

It is also possible that the absence of significant P1 and N170 differences between NT and AS children may reflect the fact that the majority of children with AS had received social-skills training, which may have improved their ability to recognize faces. However, adults with AS were diagnosed much later in life and had not received the same amount of intervention as the children. Consequently, this may explain why significant P1 and N170 differences were observed between NT and AS adults. Additional research should examine the N170 response to faces in NT and AS children into adulthood to determine whether N170 differences between these groups become apparent with age.
Our comparisons of adult with child data reflect similar findings to past developmental face processing ERP studies (Taylor et al., 2004, 1999). For example, P1 amplitudes were larger and delayed to faces in children relative to adults. Furthermore, children also exhibited delayed N170 latencies to faces. Together, these findings provide evidence for immature holistic (P1) and configural (N170) processing in children relative to adults. However, in contrast to previous research, N170 amplitudes to faces were not larger in adults in comparison to children. This may reflect the use of an emotional expression task rather than a face processing task as used in past experiments. P2 latency was also delayed in children compared to adults. To our knowledge, this component has not been examined in the developmental face processing literature before. Past research suggests that the P2 response to faces in adults may reflect the encoding of facial configurations (Halit et al., 2000; Rebai et al., 2001). Therefore, delayed P2 latencies to faces in children may correspond to further immaturity in processing facial configurations relative to adults.

Our findings indicate that ERP differences were not present between neutral and emotional faces in control or AS subjects within the P1, N170, or P2 components. These findings are in agreement with the majority of past ERP studies, in which differences between emotional and neutral expressions were present from 270 ms onwards (see Section 1). However, our findings differ from those of Batty and Taylor (2003), who found evidence for emotional processing differences within the N170 component over occipito-temporal regions. It is possible that the absence of N170 differences between emotional and neutral expressions in the present (and in past) tasks may reflect the fact that neutral faces are often ambiguous in emotional meaning. In addition, as neutral faces are social stimuli, they may have recruited emotion processing circuitry such as the amygdala (Thomas et al., 2001). This may have resulted in the absence of significant N170 differences between neutral and emotional faces in the present task. Unfortunately, we were unable to analyse emotional expression effects at later latencies in the present study due to the high degree of individual variation within groups and the difficulty of defining appropriate time windows. Further research is therefore needed to determine whether differences between neutral and emotional faces exist between NT and AS subjects at later ERP latencies.

5. Conclusions
Taken together, the present experiment provides evidence for P1 and N170 differences between NT and AS adults, but not children. This may be due to impairments in holistic and configural processing of faces in adults with AS, possibly resulting from a lack of expertise for the human face and a tendency to process faces as objects. However, these differences were not observed between NT and AS children. This may be due to incomplete development of the neuronal generators of these ERP components in children. It may also be that, unlike AS adults, children with AS had taken part in social-skills programmes which may have improved their ability to process faces.

Our behavioural data show that adults with AS make more errors when identifying sad, angry, and neutral expressions relative to adult controls. Furthermore, unlike NT adults, who recognized neutral and sad expressions more accurately than NT children, adults and children with AS did not differ in their ability to recognize these expressions. These findings may be attributed to the fact that many of the children with AS had received social-skills training, which may have improved their ability to recognize facial expressions.


In summary, further research is needed to investigate the development of the neuronal generators underlying the P1 and N170 response to faces, and the impact of social-skills training, to better understand the pathology of AS.

Acknowledgments
This research was supported by funding from the Ministry of Science, Research and Technology, New Zealand. The authors thank Joss and Paddy O'Connor, Dr. Antje Hollander, Branka Milivojevic, Scott Fairhall, and Dr. Karen Waldie for technical support. Thank you also to all AS and control subjects who participated in this research for their time and energy.

References
APA. (1994). DSM-IV: Diagnostic and statistical manual of mental disorders. Washington, DC: American Psychiatric Association.
Attwood, T. (1998). Asperger's syndrome: A guide for parents and professionals. London: Jessica Kingsley Publishers.
Baron-Cohen, S. (1989). The autistic child's theory of mind: A case of specific developmental delay. Journal of Child Psychology and Psychiatry, 30, 285–297.
Baron-Cohen, S., Golan, O., Wheelwright, S., & Hill, J. (2003). Mind reading emotions library. London: Jessica Kingsley Publishers (www.jkp.com/mindreading).
Baron-Cohen, S., Leslie, A., & Frith, U. (1985). Does the autistic child have a "theory of mind"? Cognition, 21, 37–46.
Baron-Cohen, S., O'Riordan, M., Stone, V., Jones, R., & Plaisted, K. (1999). Recognition of faux pas by normally developing children and children with Asperger syndrome or high-functioning autism. Journal of Autism and Developmental Disorders, 29, 407–415.
Baron-Cohen, S., Ring, H., Wheelwright, S., Bullmore, E., Brammer, M., Simmons, A., et al. (1999). Social intelligence in the normal and autistic brain: An fMRI study. European Journal of Neuroscience, 11, 1891–1898.
Baron-Cohen, S., Wheelwright, S., & Jolliffe, T. (1997). Is there a 'language of the eyes'? Evidence from normal adults and adults with autism or Asperger's syndrome. Visual Cognition, 4, 311–331.
Batty, M., & Taylor, M. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17, 613–620.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.
Birch, J. (2003). Congratulations! It's Asperger's syndrome. London: Jessica Kingsley Publishers.
Carr, L., Iacoboni, M., Dubeau, M., Mazziotta, J., & Lenzi, G. (2003). Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proceedings of the National Academy of Sciences, 100, 5497–5502.
Charman, T. (2003). Why is joint attention a pivotal skill in autism? Philosophical Transactions of the Royal Society of London, Series B, 358, 315–324.
Critchley, H., Daly, E., Bullmore, E., Williams, S., van Amelsvoort, T., Robertson, D., et al. (2000). The functional neuroanatomy of social behaviour: Changes in cerebral blood flow when people with autistic disorder process facial expressions. Brain, 123, 2203–2212.
Dawson, G., Carver, L., Meltzoff, A., Panagiotides, H., McPartland, J., & Webb, S. (2002). Neural correlates of face and object recognition in young children with autism spectrum disorder, developmental delay and typical development. Child Development, 73, 700–717.
Dawson, G., Webb, S., Carver, L., Panagiotides, H., & McPartland, J. (2004). Young children with autism show atypical brain responses to fearful versus neutral facial expressions of emotion. Developmental Science, 7, 340–359.
de Haan, M., Johnson, M., & Halit, H. (2003). Development of face-sensitive event-related potentials during infancy: A review. International Journal of Psychophysiology, 51, 45–58.
Eimer, M., & Holmes, A. (2003). The role of spatial attention in the processing of facial expression: An ERP study of rapid brain responses to six basic emotions. Cognitive, Affective and Behavioural Neuroscience, 3, 97–110.
Frith, U., & Happe, F. (1994). Autism: Beyond "theory of mind". Cognition, 50, 115–132.
Giedd, J., Blumenthal, J., Jeffries, N., Castellanos, F., Liu, H., Zijdenbos, A., et al. (1999). Brain development during childhood and adolescence: A longitudinal MRI study. Nature Neuroscience, 2, 861–863.
Grice, S., Spratling, M., Karmiloff-Smith, A., Halit, H., Csibra, G., de Haan, M., et al. (2001). Disordered visual processing and oscillatory brain activity in autism and Williams syndrome. Neuroreport, 12, 2697–2700.
Halit, H., de Haan, M., & Johnson, M. (2000). Modulation of event-related potentials by prototypical and atypical faces. Neuroreport, 11, 1871–1875.
Happe, F. (1994). An advanced test of theory of mind: Understanding of story characters' thoughts and feelings by able autistic, mentally handicapped and normal children and adults. Journal of Autism and Developmental Disorders, 24.
Herrman, C., Mecklinger, A., & Pfeifer, E. (1999). Gamma responses and ERPs in a visual classification task. Clinical Neurophysiology, 110, 636–642.
Hobson, R., Ouston, J., & Lee, A. (1988). What's in a face? The case of autism. British Journal of Psychology, 79, 441–453.
Holmes, A., Vuilleumier, P., & Eimer, M. (2003). The processing of emotional facial expression is gated by spatial attention: Evidence from event-related brain potentials. Cognitive Brain Research, 16, 174–184.
Hubl, D., Bolte, S., Feineis-Matthews, S., Lanfermann, H., Federspiel, A., Strik, W., et al. (2003). Functional imbalance of visual pathways indicates alternative face processing strategies in autism. Neurology, 61, 1232–1237.
Itier, R., & Taylor, M. (2002). Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. NeuroImage, 15, 353–372.
Itier, R., & Taylor, M. (2004). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex, 14, 132–142.
Jervis, B., Nichols, M., Allen, E., Hudson, N., & Johnson, T. (1985). The assessment of two methods for removing eye movement artefact from the EEG. Electroencephalography and Clinical Neurophysiology, 61, 444–452.
Joseph, R., & Tanaka, J. (2003). Holistic and part-based face recognition in children with autism. Journal of Child Psychology and Psychiatry, 44, 529–542.
Klin, A., Jones, W., Schultz, R., & Volkmar, F. (2003). The enactive mind, or from actions to cognition: Lessons from autism. Philosophical Transactions of the Royal Society of London, Series B, 358, 345–360.
Klin, A., Jones, W., Schultz, R., Volkmar, F., & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59, 809–816.

Krolak-Salmon, P., Fischer, C., Vighetto, A., & Mauguiere, F. (2001). Processing of facial expression: Spatio-temporal data as assessed by scalp event-related potentials. European Journal of Neuroscience, 13, 987–994.
Langdell, T. (1978). Recognition of faces: An approach to the study of autism. Journal of Child Psychology and Psychiatry, 19, 255–268.
Macintosh, K., & Dissanayake, C. (2004). Annotation: The similarities and differences between autistic disorder and Asperger's disorder: A review of the empirical evidence. Journal of Child Psychology and Psychiatry, 45, 421–434.
McCarthy, G., & Wood, C. (1985). Scalp distributions of event-related potentials: An ambiguity associated with analysis of variance models. Electroencephalography and Clinical Neurophysiology, 62, 203–208.
McPartland, J., Dawson, G., Webb, S., Panagiotides, H., & Carver, L. (2004). Event-related brain potentials reveal anomalies in temporal processing of faces in autism spectrum disorder. Journal of Child Psychology and Psychiatry, 45, 1235–1245.
Miller, J. (2003). Women from another planet?: Our lives in the universe of autism. Milan: Dancing Mind Books.
Munte, T., Brack, M., Grootheer, O., Wieringa, B., Matzke, M., & Johannes, S. (1998). Brain potentials reveal the timing of face identity and expression judgments. Neuroscience Research, 30, 25–34.
Ogai, M., Matsumoto, H., Suzuki, K., Ozawa, F., Fukuda, R., Uchiyama, I., et al. (2003). fMRI study of recognition of facial expressions in high-functioning autistic patients. Neuroreport, 14, 559–562.
Orozco, S., & Ehlers, C. (1998). Gender differences in electrophysiological responses to facial stimuli. Biological Psychiatry, 44, 281–289.
Passarotti, A., Paul, B., Bussiere, J., Buxton, R., Wong, E., & Stiles, J. (2003). The development of face and location processing: An fMRI study. Developmental Science, 6, 100–117.
Picton, T., Bentin, S., Berg, P., Donchin, E., Hillyard, S., Johnson, R., et al. (2000). Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology, 37, 127–152.
Pierce, K., Muller, R.-A., Ambrose, J., Allen, G., & Courchesne, E. (2001). Face processing occurs outside the fusiform 'face area' in autism: Evidence from functional MRI. Brain, 124, 2059–2073.
Puce, A., Allison, T., Bentin, S., Gore, J., & McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18, 2188–2199.
Rebai, M., Poiroux, S., Bernard, C., & Lalonde, R. (2001). Event-related potentials for category-specific information during passive viewing of faces and objects. International Journal of Neuroscience, 106, 209–226.
Sato, W., Kochiyama, T., Yoshikawa, S., & Matsumura, M. (2001). Emotional expression boosts early visual processing of the face: ERP recording and its decomposition by independent components analysis. Neuroreport, 12, 709–714.


Schultz, R., Gauthier, I., Klin, A., Fulbright, R., Anderson, A., Volkmar, F., et al. (2000). Abnormal ventral temporal cortical activity during face discrimination among individuals with autism and Asperger's syndrome. Archives of General Psychiatry, 57, 331–340.
Schupp, H., Ohman, A., Junghofer, M., Weike, A., Stockburger, J., & Hamm, A. (2004). The facilitated processing of threatening faces: An ERP analysis. Emotion, 4, 189–200.
Sowell, E., Peterson, B., Thompson, P., Welcome, S., Henkenius, A., & Toga, A. (2003). Mapping cortical changes across the human life span. Nature Neuroscience, 6, 309–315.
Streit, M., Dammers, J., Simsek-Kraues, S., Brinkmeyer, J., Wolwer, W., & Ioannides, A. (2003). Time course of regional brain activations during facial emotion recognition in humans. Neuroscience Letters, 342, 101–104.
Tallon-Baudry, C., Bertrand, O., & Delpuech, C. (1996). Stimulus specificity of phase-locked and non-phase-locked 40 Hz visual responses in humans. Journal of Neuroscience, 16, 4240–4249.
Tanaka, J., & Farah, M. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, A, 46, 225–245.
Taylor, M., Batty, M., & Itier, R. (2004). The faces of development: A review of early face processing over childhood. Journal of Cognitive Neuroscience, 16, 1426–1442.
Taylor, M., Edmonds, G., McCarthy, G., & Allison, T. (2001). Eyes first! Eye processing develops before face processing in children. Neuroreport, 12, 1671–1676.
Taylor, M., McCarthy, G., Saliba, E., & Degiovanni, E. (1999). ERP evidence of developmental changes in processing of faces. Clinical Neurophysiology, 110, 910–915.
Taylor, M., & Smith, M. (1995). Maturational changes in the ERPs to verbal and nonverbal memory tasks. Journal of Psychophysiology, 9, 283–297.
Thomas, K., Drevets, W., Whalen, P., Eccard, C., Dahl, R., Ryan, N., et al. (2001). Amygdala response to facial expressions in children and adults. Biological Psychiatry, 49, 309–316.
Tucker, D. (1993). Spatial sampling of head electrical fields: The geodesic sensor net. Electroencephalography and Clinical Neurophysiology, 87, 154–163.
Vermeulen, P. (2001). Autistic thinking: This is the title. London: Jessica Kingsley Publishers.
Williams, J., Whiten, A., Suddendorf, T., & Perrett, D. (2001). Imitation, mirror neurons and autism. Neuroscience and Biobehavioural Reviews, 25, 287–295.
Yin, R. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.

Further reading
WHO. (1989). Tenth revision of the International Classification of Diseases. Geneva: World Health Organization.