Brain Research 1228 (2008) 177–188

available at www.sciencedirect.com
www.elsevier.com/locate/brainres

Research Report

Distortions in the brain? ERP effects of caricaturing familiar and unfamiliar faces

Jürgen M. Kaufmann⁎, Stefan R. Schweinberger

Department of Psychology, Friedrich-Schiller-University of Jena, Am Steiger 3, Haus 1, 07743 Jena, Germany

⁎ Corresponding author. Fax: +49 0 3641 945182. E-mail address: [email protected] (J.M. Kaufmann).
doi:10.1016/j.brainres.2008.06.092

ARTICLE INFO

Article history:
Accepted 18 June 2008
Available online 2 July 2008

Keywords:
Face perception
Event-related potential
Face learning
Caricature
EEG
Face-space

ABSTRACT

We report two experiments in which participants classified familiarity and rated best-likeness of photorealistic spatial caricatures and anti-caricatures (up to a distortion level of 30%) in comparison to veridical pictures of famous faces (Experiment 1) and personally familiar faces (Experiment 2). In both experiments there was no evidence for a caricature advantage in the behavioural data. In line with previous research, caricatures were perceived as worse likenesses than veridical pictures and moderate anti-caricatures. In Experiment 2, ERPs for familiar faces were largely unaffected by spatial caricaturing, whereas clear effects of caricaturing were observed for unfamiliar faces, for which caricaturing elicited increased occipito-temporal N170 and N250 responses. Whereas increases in N170 amplitude were limited to the first half of the experiment, increases in N250 were largest after a number of stimulus repetitions in the second half of the experiment. In the second half, ERPs to caricatured unfamiliar faces became more similar to ERPs to familiar faces, whereas ERP differences between familiar and unfamiliar faces remained prominent for veridicals and anti-caricatures. In the context of previous reports of caricature effects for line-drawings, these results imply that non-spatial (e.g., texture) information plays a prominent role for familiar face recognition, whereas spatial caricaturing may be particularly important for the recognition of unfamiliar faces, by increasing their distinctiveness.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Caricatures are distortions that exaggerate the distinctive features of a face (Perkins, 1975) and, in general, are recognized at least as well as veridical displays. It has been suggested that caricatures can act as "super-portraits", which are more recognizable than veridical faces. This assumption was mainly based on earlier work using computer-generated line-drawn caricatures, which suggested a caricature advantage for the recognition of famous faces (for a review see Rhodes, 1996). In principle, any distinctive feature of a face can be caricatured; however, the bulk of research deals with caricatures in space, in which the size of single features and the configuration of features is manipulated (but see also Lee and Perrett, 2000, for caricaturing colour information).

The idea that a distorted image resembles a particular face more than the face resembles itself is very appealing. However, whereas caricature advantages have been demonstrated for degraded stimuli such as line-drawings (Rhodes, 1996), evidence for a caricature advantage for high-quality photographic stimuli is sparse and rather inconsistent, with caricature advantages either absent or considerably smaller than those found for line-drawings (Benson and Perrett, 1991; Calder et al., 1996; Chang et al., 2002; Lee and Perrett, 2000; Lee et al., 2000; Rhodes et al., 1997). Where present, effects of spatial caricaturing for photographs seem to be limited to very short presentation times (Lee et al., 2000; Lee and Perrett, 2000). Possibly, texture information (e.g., skin texture, relative distribution of light and dark), which has been shown to play an important role in face recognition (e.g. Calder et al., 2001; Hancock et al., 1996), can override the effects of spatial caricaturing. Also the observation that veridical photographs are better recognized than line-drawn caricatures limits the power of caricatures (Hagen and Perkins, 1983; Tversky and Baratz, 1985).

Rhodes et al. (1987) had initially suggested two alternatives to explain the findings of a caricature advantage for line-drawings, both of which were formulated within a norm-based model of face representation. The first one was that faces might actually be stored as caricatures rather than veridical representations. Caricatures would then be easier to recognize because they represent a better match to stored representations. This explanation also implies that caricatures should be perceived as better likenesses than veridical pictures. Whereas there seemed to be some initial evidence for such an interpretation based on line-drawings (Rhodes et al., 1987; for a review see Rhodes, 1996) and Identi-Kit faces (Mauro and Kubovy, 1992), more recent research using stimuli of photographic quality reported higher likeness ratings for veridical versions or even for moderate anti-caricatures, suggesting that mental face representations are closer to veridical depictions (Chang et al., 2002; Benson and Perrett, 1991; Lee and Perrett, 2000; but see also Rhodes et al., 1997 and Lewis and Johnston, 1998).

According to the second alternative, faces are stored as veridical representations but caricaturing enhances recognition by exaggerating specific features, which in turn reduces the activation of similar-looking candidates. As in most explanations of caricature effects, the concept of distinctiveness plays a major role here. It is a well-known fact that distinct faces can be remembered and recognized better than faces that are rated as more typical (Bartlett et al., 1984; Cohen and Carr, 1975; Dewhurst et al., 2005; Going and Read, 1974; Light et al., 1979; Shepherd et al., 1991; Sommer et al., 1995; Valentine, 1991; Vokey and Read, 1992; Wickham et al., 2000; Winograd, 1981). Importantly, it has also been shown that the perceived distinctiveness of a face can be enhanced by caricaturing (Chang et al., 2002; Lee et al., 2000; Rhodes et al., 1997; Stevenage, 1995a).

Distinctiveness effects have been accounted for by the multi-dimensional face-space (MDFS) framework (Valentine, 1991). One basic assumption of this model is that each dimension of face-space is characterized by a normal distribution, resulting in a larger number of less distinct faces clustering at or near the origin of the multidimensional grid. Distinctive faces differ from typical faces by extreme values on at least one dimension and are therefore located further away from the densely crowded centre. According to the MDFS model (Valentine, 1991), the dimensions that are represented are in particular those on which individuals of a certain population differ from each other. Although the model does not explicitly specify these dimensions, it is generally thought that the spatial arrangement of facial features plays a major role (see also Rhodes, 1996). Caricaturing is thought to move a face away from the cluster of representations by exaggerating its distinctive features (see also Lee et al., 2000; Lewis, 2004; Lewis and Johnston, 1998; Lewis and Johnston, 1999).
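The geometric intuition behind this account can be made explicit. In the norm-deviation formulation commonly used in this literature (a generic formalization, not a formula taken verbatim from any of the cited studies), a caricature at level $k$ is obtained by scaling a face's deviation from the average face:

$$\mathbf{c}_k = \mathbf{n} + (1 + k)\,(\mathbf{f} - \mathbf{n}),$$

where $\mathbf{f}$ is the landmark or feature vector of the veridical face, $\mathbf{n}$ is the average (norm) face, and $k$ is the caricature level ($k = 0.30$ yields a 30% caricature, $k = -0.30$ a 30% anti-caricature, and $k = 0$ returns the veridical face).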
Although some of the basic assumptions of the MDFS framework, such as the presumption that the majority of faces should be perceived as more or less typical, have been put into question both in theory (Burton and Vokey, 1998) and by empirical data (Wickham et al., 2000), the model has been quite successful in explaining effects of distinctiveness, caricaturing and race (Lee et al., 2000; Lewis, 2004; Lewis and Johnston, 1998; Lewis and Johnston, 1999; Valentine, 1991). However, it is also noteworthy that unfamiliar faces that are perceived to resemble a familiar face are rated as more distinct, regardless of how close they are to an average face (Wickham et al., 2000).

Within the MDFS framework, caricature effects are compatible both with norm-based and exemplar-based models. According to norm-based models, each individual face is represented as a vector in face-space that defines the direction and the distance from an average or norm face. This norm face is a psychological prototype formed by the learning history of an individual. According to norm-based models, caricaturing exaggerates everything that distinguishes an individual face from the norm face, keeping the direction of the vector constant while increasing its length and therefore moving it away from the centre of face-space, whereas anti-caricaturing decreases its length and therefore moves the face towards the centre, making it more confusable with other faces clustering there. Evidence for the role of a norm face comes from a study using newly learned faces: after adapting to anti-faces, which had been produced by reversing the identity vector, participants perceived an average face as an individual face on the corresponding trajectory in face-space (Leopold et al., 2001).

In contrast, exemplar-based models do not require a facial prototype. According to these models, faces are solely defined by their absolute position in multi-dimensional face-space, not by their distance from a supposed norm face. The idea is that caricaturing increases a face's distinctiveness by "moving" it to less crowded locations in face-space, where it is less likely that close neighbours get erroneously activated by an incoming stimulus. Absolute coding predicts that the recognisability of a stimulus solely depends on its distinctiveness and its degree of distortion from the veridical, whereas norm-based coding additionally assumes that the direction of the distortion plays a major role. Using veridicals, anti-caricatures, caricatures and "lateral caricatures" (which move a face off its norm-deviation vector), Rhodes et al. (1998) found that recognisability was primarily predicted by distinctiveness and proximity to the veridical target, but not by the direction of the distortion in face space. The authors interpreted this finding as evidence for exemplar-based coding (for details see Rhodes et al., 1998; see also Byatt and Rhodes, 1998).

In the following, we present two experiments on face recognition using caricatures, veridical pictures and anti-caricatures of varying degree. Our main motivation for the experiments was to investigate behavioural effects of caricaturing in a face familiarity task using high-quality photorealistic caricatures of familiar and unfamiliar faces (Experiment 1) and to identify the associated brain processes (Experiment 2). Second, we intended to replicate previous findings of higher likeness ratings for veridical depictions and moderate anti-caricatures over spatial caricatures of familiar faces (Chang et al., 2002; Lee and Perrett, 2000), in contrast to results on line-drawings (Benson and Perrett, 1991; Rhodes et al., 1987; Rhodes, 1996).
In Experiment 2 we investigated the time course of caricature effects in personally familiar and
unfamiliar faces using the high temporal resolution of event-related potentials (ERPs). To our knowledge, there is only one study that investigated ERP correlates of face caricatures (Minnebusch et al., 2007). This study focussed on N170 effects in participants with developmental prosopagnosia and control participants. Controls recognized veridical photos of famous faces better than drawn caricatures and there was no difference in the N170. For prosopagnosics, there was a trend for better recognition of caricatures. The authors suggest this finding might be due to the prosopagnosics concentrating on salient cues in order to compensate for face processing deficits. Unfortunately, because the authors do not specify how exactly the caricatures were generated (and whether they included spatial or other exaggerations), the relevance of that study for the present experiments is unclear.

According to cognitive models based on the suggestions of Bruce and Young (1986), the result of an initial "structural encoding" process is matched with stored facial representations, which have been termed "face recognition units" (FRUs). It is still a matter of debate what exactly is coded in an FRU. Initially it was suggested that FRUs can be imagined as rather abstract, view-, expression- and picture-independent representations. Recently, in contrast to the idea of norm-based face coding and the resulting importance of caricatures, it has been suggested that FRUs comprise the average of all experienced views and pictures of a particular face (Burton et al., 2005). This means that by perceptual learning our brain extracts a mental average of a person, representing invariant characteristics which can be compared with incoming facial information. The quality of the representation of invariant information is thought to depend on the number of different exemplars that were available during learning. Alternatively, models have been put forward that suggest multiple instances of familiar faces (e.g. Newell et al., 1999). Both views are compatible with findings of faster identity recognition of familiar faces displaying typical expressions for a particular person (Kaufmann and Schweinberger, 2004).

Although previous research on caricatures of photographic quality has not revealed strong behavioural effects, we were interested in whether caricature effects for photographic stimuli would show up in the ERPs during a face familiarity task. Special interest was given to two components which have been shown to be sensitive to faces: N170 (Allison et al., 1994; Bentin et al., 1996; see also Liu et al., 2000, for M170, the magnetic counterpart of N170) and N250 (Schweinberger et al., 2002; Schweinberger et al., 2004; Tanaka et al., 2006). Because N170 seems to be largely unaffected by familiarity, it is generally thought that it is associated with early structural processing in the sense of the Bruce and Young model (Bruce and Young, 1986) rather than with the extraction of familiarity information (e.g., Bentin and Deouell, 2000; Eimer, 2000b; Eimer, 2000c; but see also Heisz et al., 2006, who suggest that N170 represents mechanisms that underlie face familiarity acquisition). Effects of familiarity typically emerge around the time range of N250, a component which has been associated with accessing stored face representations (Caharel et al., 2002; Kaufmann et al., in press; Schweinberger et al., 2002; Schweinberger et al., 2004; Tanaka et al., 2006). If caricaturing leads to a more efficient activation of stored face representations, we would expect to see larger amplitudes of inferior-temporal N250 for caricatures of familiar faces in comparison to anti-caricatures and veridical depictions.

2. Results

For behavioural and ERP data, where appropriate, epsilon corrections for heterogeneity of covariances were performed with the Huynh–Feldt method (Huynh and Feldt, 1976) throughout.

2.1. Experiment 1

All participants indicated that they were well familiar with the German celebrities, and unfamiliar with the British celebrities (mean familiarity ratings: M = 3.78, SD = .24 for German and M = 1.14, SD = 0.14 for British celebrities). For familiar faces, the ANOVA on response times, with repeated measurements on the factor caricature level (−30%, −15%, 0%, 15%, 30%), did not reveal a significant effect, F(4,100) < 1. For hit rates there was a numeric difference of 3% between the 30% caricatures and the −30% anti-caricatures (for all means see Table 1); however, these differences did not reach significance, F(4,100) = 1.93, p > .11. In contrast to RTs and hit rates, the bias-free sensitivity measure d' was affected by the factor caricature level, F(4,100) = 3.03, p < .05. However, post-hoc comparisons (Bonferroni-corrected α = .016) attributed the effect to larger d' values for 30% caricatures in comparison to 30% anti-caricatures, F(1,25) = 11.74, p < .01, whereas no significant differences were found between veridicals and 30% caricatures, F(1,25) = 2.81, p > .10, or between veridicals and −30% anti-caricatures, F(1,25) = 1.34, p > .25. The response criterion (C) was not modulated by the factor caricature level, F(4,100) < 1. The analysis of best-likeness ratings yielded an effect of caricature level, F(1,100) = 4.63, p < .05. Inspection of Fig. 1 suggests lower ratings for caricatures in comparison to veridicals. This impression was confirmed by post-hoc tests comparing veridicals with 15% and 30% caricatures (F(1,25) = 2.68, p < .11 and F(1,25) = 6.38, p < .017, respectively). Ratings of veridicals did not differ from those of −30% anti-caricatures, F(1,25) < 1.

For unfamiliar faces, neither response times nor the percentage of correct rejections were significantly modulated by the factor caricature level (F(4,100) < 1 and F(4,100) = 1.69, p > .16, for RTs and correct rejections, respectively).

Table 1 – Behavioural accuracy (% hits and correct rejections), reaction times (ms), d' values and criteria (C), depending on caricature level in Experiment 1 (standard deviations in parentheses)

Caricature level   Famous faces                Unfamiliar faces                 d'            C
                   Hits          RTs           Correct rejections   RTs
−30%               88.4 (14.2)   585 (143)     88.9 (11.9)          649 (147)   2.62 (.93)    .00 (.27)
−15%               88.9 (12.5)   584 (137)     90.5 (11.1)          647 (168)   2.70 (.78)    .03 (.33)
0%                 88.8 (12.4)   579 (120)     92.1 (7.9)           651 (145)   2.74 (.76)    .05 (.25)
15%                87.4 (13.6)   581 (143)     89.9 (10.7)          644 (157)   2.60 (.84)    .05 (.29)
30%                91.3 (14.0)   593 (148)     92.5 (8.4)           653 (162)   2.91 (.79)    .01 (.29)

2.1.1. Discussion

Spatial caricaturing of colour photographs of famous faces did not produce a caricature advantage in Experiment 1, although there was a minor trend for slightly higher hit rates for the 30% caricatures. There was also an overall effect of caricaturing on d' values, but the only significant differences were found between caricatures and anti-caricatures. This finding suggests that, in contrast to line-drawn stimuli (Rhodes, 1996), spatial caricatures of photographic quality are not recognized more easily than veridical pictures, at least when not shown at very short presentation times (Lee and Perrett, 2000; Rhodes et al., 1997). However, there seems to be a remarkably large window of tolerance for distortions both in the direction of caricatures and anti-caricatures. The results imply that, apart from the distorted spatial information, participants use other cues such as texture information to perform the familiarity task. Interestingly, and in line with previous research using photographic caricatures (Chang et al., 2002; Lee and Perrett, 2000), caricatures were perceived as worse likenesses of familiar faces than veridical pictures, whereas likeness ratings did not differ between veridicals and anti-caricatures (at least within the analysed caricature range up to manipulations of 30%). This pattern clearly speaks against the notion that familiar faces are represented in the form of caricatures, which exaggerate the deviations from a facial prototype.

In Experiment 2 we investigated whether spatial caricaturing affects neural correlates of face recognition. Experiment 1 had not revealed behavioural effects of caricaturing, but we wondered whether the processing of caricatured faces results in ERP modulations similar to those that have been reported for naturally distinct faces (Sommer et al., 1995). To our knowledge, this is the first ERP study using spatial caricatures of photographic quality. In Experiment 2 we decided to use pictures of personally familiar faces for a number of reasons.

Fig. 1 – Results of best-likeness rating task for famous faces in Experiment 1. Note that 30% caricatures were rated as worst likenesses.

First, this enabled us to precisely control stimulus conditions such as lighting, expression and image quality. Second, and more importantly, we wondered whether we would find stronger effects of caricaturing for personally familiar faces than for famous faces. Recently, it has been suggested that configural information might be represented differently for famous and personally familiar faces (Kloth et al., 2006). If personal contact leads to stronger spatial configural representations, this might enhance effects of spatial caricaturing. In addition, we decided to present the pictures in grey-scale mode rather than in colour. This was done in the hope of increasing the likelihood of finding a caricature advantage. Although grey-scale pictures provide much richer information than line-drawings, for which super-portrait effects have been found, the amount of non-configural information (such as colour hue) that can be used to identify the faces (Lee and Perrett, 1997) is clearly reduced.

In contrast to Experiment 1, only 3 levels of caricaturing (−30%, 0% and 30%) were used. This decision was based on findings of a tendency for higher accuracies for caricatures at the level of 30% in Experiment 1, whereas there was no evidence for effects at level 15%. There were also practical considerations: including 3 levels instead of 5 resulted in fewer repetitions of individual faces, which might have weakened the effects of caricaturing in Experiment 1.

We also slightly modified the best-likeness task: instead of rating the pictures, participants selected the one of two simultaneously presented versions of the same familiar person that, in their opinion, looked more like the respective person. All possible pairings (veridical vs. caricature; veridical vs. anti-caricature; and caricature vs. anti-caricature) were presented. We decided to use a forced-choice paradigm because in Experiment 1 there was a tendency for medium ratings, which led to relatively small differences between conditions. Our motivation was to replicate the findings of Experiment 1 using a more powerful design.

2.2. Experiment 2

2.2.1. Behavioural data

All participants indicated that they were well familiar with the familiar, and unfamiliar with the unfamiliar faces (mean familiarity ratings: M = 3.81, SD = .28 for familiar and M = 1.2, SD = 0.18 for unfamiliar faces).

Familiar faces: For response times, an ANOVA with repeated measurements on the factor caricature level (anti-caricature vs. veridical vs. caricature) did not reveal significant effects, F < 1. There were also no significant effects for hit rates, F(2,20) = 1.33, p > .28, d' values, F(2,20) = 2.63, p < .1, or criterion C, F(2,20) > 1 (for means see also Table 2). In the best-likeness task, caricatures were less likely to be selected as the better representation both when presented together with veridicals, F(1,10) = 26.43, p < .001 (M = 27.9% vs. 72.1%, see also Fig. 2), and with anti-caricatures, F(1,10) = 8.77, p < .05 (M = 34.8% vs. M = 65.2%). No difference emerged between veridicals and anti-caricatures, F(1,10) < 1 (M = 51.7% vs. M = 48.3%).

Unfamiliar faces: The factor caricature level did not affect response times for unfamiliar faces, F < 1. Also, the percentage of correct rejections for unfamiliar faces did not differ significantly between the three caricature levels, F < 1 (for means see also Table 2).


Table 2 – Behavioural accuracy (% hits and correct rejections), reaction times (ms), d' values and criteria (C), depending on caricature level in Experiment 2 (standard deviations in parentheses)

Caricature level   Familiar faces             Unfamiliar faces                 d'            C
                   Hits         RTs           Correct rejections   RTs
−30%               91.1 (9.4)   648 (89)      98.7 (2.9)           693 (89)    3.63 (.59)    .31 (.32)
0%                 92.0 (8.4)   647 (70)      99.4 (1.6)           691 (90)    3.75 (.52)    .31 (.31)
30%                89.9 (8.5)   640 (89)      98.5 (2.4)           686 (83)    3.46 (.58)    .35 (.24)

2.2.2. ERPs

Familiar faces: No significant effects of caricature level were found for either N170 or N250 (both Fs(2,20) < 1; see also Fig. 3).

Unfamiliar faces: N170: The ANOVA revealed a main effect of caricature level on amplitudes of N170, F(2,20) = 3.64, p < .05. This was attributed to more negative amplitudes for caricatures than for veridicals, F(1,10) = 10.54, p < .01,1 whereas no difference emerged between veridicals and anti-caricatures, F < 1. N250: The factor caricature level also significantly affected ERP amplitudes in the time range of N250, F(2,20) = 12.60, p < .001. Post-hoc testing attributed this effect to more negative amplitudes of N250 for caricatures in comparison to veridicals, F(1,10) = 19.33, p < .01, whereas there was no significant difference between veridicals and anti-caricatures, F(1,10) = 1.31, p > .27 (see also Fig. 3).

To summarize, caricaturing only affected behavioural best-likeness selections for familiar faces, whereas response times and accuracies in face recognition were not significantly modified. Interestingly, caricatures were less likely to be selected as the better representation of a familiar person. However, clear influences of caricaturing were visible in the ERPs. Interestingly and somewhat unexpectedly, these effects were largely limited to unfamiliar faces. They started in the time range of N170 and also affected inferior-temporal N250: caricatures of unfamiliar faces yielded larger negativity at inferior-temporal sites than veridical or anti-caricatured depictions. Fig. 4 shows topographies of ERP differences (caricatures minus veridicals) for familiar and unfamiliar faces, respectively.2

The finding of such clear ERP effects for unfamiliar faces, in particular for N250, was somewhat surprising, as this component has been associated with the activation of person-specific representations. By definition, unfamiliar faces do not initially have stable representations. However, over the course of the experiment, each unfamiliar face was presented a total of 6 times (twice as veridical, caricature and anti-caricature each). Thus, one may expect that a representation of an unfamiliar face should have been gradually formed over the course of the experiment. We therefore reasoned that, if ERP effects of caricaturing for unfamiliar faces reflect differences in face learning (as a result of superior processing of the more distinctive caricatured versions), they should be stronger in the second half of the experiment, in which stronger representations should have been built.

1 Note that the finding of more negative N170 for caricatures than for veridicals, which was based on mean amplitude measures, was also confirmed by peak amplitude measurements, F(1,10) = 6.37, p < .05.
2 Topographies of ERP differences (Fig. 4) suggest slightly enhanced N170 also for caricatures of familiar faces. However, in contrast to unfamiliar faces, this was only observed at electrode O2 and post-hoc testing (familiar caricatures vs. familiar veridicals) at this electrode did not reveal significant differences, F(1,10) = 3.5, p > .09.

2.2.3. Block-wise analysis of unfamiliar faces

For unfamiliar faces, differences between caricatures and veridicals were therefore further explored by separate analyses of the first and the second half of the experiment. For response times and accuracies we did not find significant effects of caricature level either in the first or the second half of the experiment (all Fs(2,20) < 1). However, ERP effects differed in the first and the second half. In the first half, the only significant difference between ERPs for caricatures and veridicals of unfamiliar faces was the finding of higher negative amplitudes of N170 for caricatured faces, F(1,10) = 15.57, p < .01 (see also Fig. 5). Interestingly, this effect disappeared in the second half, F(1,10) = 1.70, p > .22. Importantly, and as predicted, N250 was significantly more negative for caricatures in comparison to veridicals in the second half, F(1,10) = 7.03, p < .025, but not in the first, F(1,10) = 1.31, p > .27 (see also Fig. 5).3

3 Following the suggestions of an anonymous reviewer, we also analysed effects of caricature level on N250 for familiar faces in the second block of the experiment. The corresponding ANOVA revealed neither a significant main effect of caricature level, F(2,20) = .97, p > .39, nor any significant interactions.

3. Discussion

Using stimuli of photographic quality which were presented at supra-liminal presentation times, we found no behavioural evidence for a caricature advantage for famous faces presented in colour (Experiment 1) or for personally familiar faces presented in greyscale mode (Experiment 2). In the familiarity task, hit rates, response times and d' values for caricatures did not significantly differ from those of veridicals. Likeness ratings were even lower for 30% caricatures, whereas ratings for anti-caricatures at the same level of distortion did not differ from those for veridical pictures. This was true both in an unforced (Experiment 1) and in a forced-choice paradigm (Experiment 2). Overall, the behavioural data from both experiments are consistent with previous work using photographic stimuli and suggest that, in comparison to earlier findings of "super-portrait" effects for line-drawn caricatures (Rhodes, 1996), caricature advantages for photographic stimuli are either smaller or absent (Benson and Perrett, 1991; Chang et al., 2002; Rhodes et al., 1997), or are limited to extremely short presentation times (Lee and Perrett, 2000). In addition, the results are compatible with previous work demonstrating that caricatures are not perceived as better likenesses of famous (Lee et al., 2000), personally familiar, or newly learned faces (Chang et al., 2002; but see Rhodes et al., 1997).

Fig. 2 – Results of forced-choice best-likeness task for personally familiar faces in Experiment 2. Veridicals (white) and 30% anti-caricatures (grey) were more likely to be selected as better likeness of the respective familiar person than 30% caricatures (dotted).

Initially, Rhodes et al. (1987) had suggested that representations of familiar faces become more like caricatures. The present results clearly suggest that this is not the case for either famous or personally familiar faces. Our findings are more in line with the idea that the spatial coordinates of familiar face representations are close to veridical.

ERP effects for personally familiar faces in Experiment 2 basically seemed to reflect the behavioural data, and were not significantly modulated by caricaturing or anti-caricaturing. By contrast, clear effects of caricaturing were seen in the ERPs to unfamiliar faces. Caricatures elicited significantly increased N170 and N250 amplitudes in comparison to veridical pictures. Why should caricaturing have larger effects for unfamiliar faces than for familiar ones? As caricaturing enhances distinctiveness (Chang et al., 2002; Lee et al., 2000; Rhodes et al., 1997; Stevenage, 1995a), and because it has been repeatedly shown that distinctive faces can be better remembered or learned (Bartlett et al., 1984; Cohen and Carr, 1975; Dewhurst et al., 2005; Going and Read, 1974; Light et al., 1979; Shepherd et al., 1991; Valentine, 1991; Vokey and Read, 1992; Wickham et al., 2000; Winograd, 1981), we tentatively explain ERP effects for repeatedly presented caricatured unfamiliar faces in the context of the influence of distinctiveness on face learning. Intriguingly, one study by Sommer et al. (1995) – though not reporting results for earlier components including the N170 – also reports increased inferior-temporal negativity at a later latency for distinctive in comparison to typical faces. Apart from that, little work has been published on ERP correlates of face distinctiveness. Another study (Halit et al., 2000) reports increased amplitudes of P1 and N170 for the processing of "atypical" faces which were generated by artificially changing features. However, the relationship of this manipulation to the established concept of facial distinctiveness is unclear.

Fig. 4 – Topographical voltage maps of ERP differences between caricatures and veridicals for familiar (top row) and unfamiliar faces. Displayed are time segments reflecting N170 (140–180 ms) and N250 (260–320 ms). Note that differences between caricatures and veridicals were significant for unfamiliar faces only. All maps were obtained by using spherical spline interpolation (Perrin et al., 1989). Maps show a 90° projection from a back view perspective. Positions of electrodes are indicated. Negativity is red.

Fig. 3 – Grand mean ERPs at selected frontal, parietal, inferior-temporal and occipital electrodes for caricatures, veridicals and anti-caricatures of familiar faces (left) and unfamiliar faces (right) in Experiment 2.

If caricatured unfamiliar faces in the present study indeed enjoyed an advantage of distinctiveness, the question might arise why no behavioural advantage was observed for those faces when compared to veridical or anti-caricatured faces. In our opinion, this is likely due to the nature of the present task, which was to recognize personally familiar faces among unfamiliar faces. Thus, our participants were not required to memorize unfamiliar faces, and memory for unfamiliar faces was never explicitly tested. If caricaturing improves the encoding of unfamiliar faces by increasing their distinctiveness, an obvious prediction is that caricatured unfamiliar faces should be more easily memorized and recognized than veridical unfamiliar faces. This is an interesting question for future studies.

In the context of face learning it is also important that in the present study N170 and N250 effects depended on stimulus exposure and therefore learning experience: N170 caricature effects were limited to the first half of the experiment, whereas the finding of increased N250 for caricatures was particularly prominent in the second half of the experiment, when unfamiliar faces had been presented a couple of times. N170 is usually associated with the extraction of structural information from a face stimulus, independent of familiarity, and increased amplitudes of N170 have also been shown for inverted faces (e.g., Eimer, 2000a; Schweinberger et al., 2004). Because there is evidence that inverted faces are recognized via featural rather than holistic processing (Rhodes, 1988; Rhodes et al., 1993; Searcy and Bartlett, 1996; Tanaka and Sengco, 1997), the present finding of increased N170 for caricatures of unfamiliar faces in the first half of the experiment might reflect enhanced processing of feature information that was exaggerated by caricaturing and which might have facilitated face encoding, particularly during the first half of the experiment, when the faces were still unfamiliar. In this context it is also of interest that Heisz et al. (2006) have recently suggested that N170 reflects early face identity processes during familiarization.

Fig. 5 – Grand mean ERPs at inferior-temporal electrode P10 for unfamiliar faces (top row) and familiar faces (bottom row) in the first and second halves of Experiment 2. For unfamiliar faces caricatures evoked larger amplitudes of N170 in the first half and larger amplitudes of N250 in the second half of the experiment.

It is also noteworthy that during the course of the experiment and the associated face repetitions, ERPs for unfamiliar faces clearly changed and transformed in the direction of those for familiar faces. These transformations included in particular increasing amplitudes of N250, a finding which is in line with a range of previous studies (Kaufmann et al., in press; Schweinberger et al., 2002; Tanaka et al., 2006). Most importantly, the finding of larger increases of N250 for caricatures of unfamiliar faces (see also Fig. 6) is consistent with more efficient face learning due to higher distinctiveness of caricatures. In this context it is of particular interest that the amplitude of N250 has been associated with face learning and the strength of familiar face representations4 (Caharel et al., 2002; Kaufmann et al., in press; Tanaka et al., 2006).

4 Following the suggestions of an anonymous reviewer, we have also performed contrasts (familiar vs. unfamiliar faces) for the different caricature levels in the second half of Experiment 2. These analyses revealed that N250 differed significantly between familiar and unfamiliar faces for anti-caricatures, F(1,10) = 21.32, p < .01, and veridicals, F(1,10) = 36.0, p < .001, but not for caricatures, F(1,10) = 1.38, p > .26. In contrast, in the first half of the experiment N250 differences between familiar and unfamiliar caricatures were significant, F(1,10) = 46.51, p < .0001.

Fig. 6 – Grand mean ERPs for unfamiliar faces and familiar faces at inferior-temporal electrode P10 in the first and the second half of the experiment. Overall, familiar faces evoked larger N250 amplitudes than unfamiliar faces. However, N250 amplitudes for caricatures of unfamiliar faces approached N250 for familiar faces after a number of stimulus repetitions in the second half of the experiment (see arrow).

Distinctiveness effects have been attributed to the stage of encoding rather than retrieval (Deffenbacher et al., 2000; Mauro and Kubovy, 1992; Sommer et al., 1995; Stevenage, 1995a; Stevenage, 1995b). Of relevance for our study, Stevenage (1995b) showed a caricature advantage with line drawings when children learned unfamiliar faces. This is in line with the present results, which could suggest that initially, caricaturing facilitates feature-based processing by enhancing the distinctiveness of facial features (indicated by enhanced N170 in the first half), which then results in stronger newly built representations, as reflected in larger N250 amplitudes in the second half. This might also explain why no significant effects were found for personally familiar faces: as these should already be very well represented, they are less sensitive to image differences (Bindemann et al., 2008; Hancock et al., 2000). Apparently, spatial caricatures of familiar faces do not provide any additional useful information, at least in the case of photographic stimuli that also provide non-spatial information such as texture. Although inspection of Figure 5 suggests the residual possibility that the lack of significant caricaturing effects for familiar faces in the second half might be a result of limited statistical power, we note that the pattern of the numerical differences due to caricaturing also deviates from the one seen for unfamiliar faces. While a non-significant decrease in N250 for anti-caricatured familiar faces might tentatively be thought to reflect a very slightly reduced ability of anti-caricatures to activate the appropriate representation, there was clearly no hint of an increase in N250 amplitude for caricatured relative to veridical familiar faces. Therefore, the pattern of caricature ERP effects for familiar and unfamiliar faces possibly reflects a qualitative difference.

During learning, there might be a shift in the relative contribution of spatial and non-spatial information. At early stages of encoding, spatial information in the form of features and spatial configuration seems to be of particular importance, whereas with increasing familiarity, other kinds of information such as texture are represented. This would explain why, for familiar faces, effects of spatial caricaturing can be largely neutralized by the unaltered texture information available in photographs. In contrast, for unfamiliar faces, enhancement of identity-specific spatial features leads to better learning, which is reflected in a temporarily increased N170, followed by larger N250 after a number of repetitions. On the other hand, for faces which are already highly familiar there seems to be a large degree of tolerance for spatial distortions (see also Hole et al., 2002), reflected in the absence of significant differences in RTs and accuracies between veridicals, caricatures and anti-caricatures. Possibly this is because, for familiar faces, information other than spatial information is of particular importance. This interpretation is in line with reports of larger caricature advantages for caricatures in colour than for caricatures in space for famous faces (Lee and Perrett, 2000) and with the observation that averaged exemplars of faces which keep specific texture information but which have been normalized in shape can be easily recognized (Burton et al., 2005). Possibly, one aspect of the apparent qualitative difference in unfamiliar and familiar face processing, and of the puzzling transition from an unfamiliar to a familiar face (Hancock et al., 2000; Megreya and Burton, 2006), is the changing importance of spatial and non-spatial information. In this context it is also interesting that the only significant (spatial) caricature advantage we are aware of for photographic stimuli that were not presented at very short presentation times comes from a study which also included newly learned faces (Chang et al., 2002).

To summarize, in two experiments we found no advantage of spatial caricaturing on the recognition of famous (Experiment 1) or personally familiar faces (Experiment 2). In both experiments, caricatures at a distortion level of 30% were rated as worse likenesses than veridical pictures, whereas ratings of anti-caricatures at the same distortion level did not differ from those of unaltered versions. ERP effects of caricaturing were largely limited to unfamiliar faces and suggested a beneficial role of spatial exaggerations on initial encoding (N170), and on forming face representations over several repeated exposures (N250), possibly mediated by increased distinctiveness of caricatured facial features. By contrast, the recognition of familiar faces appears to be remarkably tolerant of a considerable degree of spatial distortion.

4. Experimental procedures

4.1. Experiment 1

4.1.1. Participants

A total of 26 students of Psychology (21 women, mean age M = 21.2 years, SD = 2.7) from Jena University, Germany, participated in the experiment. All of them received course credit. Data from one additional participant had to be excluded from the analysis due to an insufficient number of correctly recognized famous faces in the recognition task (M < 60%).

4.1.2. Stimuli and apparatus

Pictures of 15 famous and 15 unfamiliar faces were used to generate the stimuli. Faces of celebrities famous in Germany were selected on the basis of a rating study previously performed with 10 different participants. To avoid contamination by factors such as differences in attractiveness and general outfit (mascara etc.), pictures of unfamiliar faces consisted of British celebrities unknown in Germany. Raw pictures were edited using Adobe Photoshop™ (CS2, Version 9.0.2) so that any information from the neck downwards, such as clothing accessories, was removed. Pictures were adjusted to 351 × 461 pixels. Caricatures and anti-caricatures were then produced using the morph software Sierra Morph™ (Version 2.5). The caricaturing process exaggerated the metric differences between each individual face and a gender-appropriate average face by 15% and 30%. For anti-caricatures these differences were decreased to the same extent. The female average face was a blend of 127 female faces aged between 19 and 45 (mean age = 29.1 years) and the male face was an average of 124 male faces aged between 19 and 45 (mean age = 31.0 years). The same image set on which the blends were based has previously been used in other studies (Perrett et al., 2002). To calculate metric differences between individual and average faces, 207 reference points were placed manually in standardized positions on each face. Contours were defined by lines between appropriate reference points (for stimulus examples see also Fig. 7, top).

All stimuli were presented on black background in the centre of a 15" monitor. The presentation software was E-Prime™ (Version 1.1). The size of the stimuli was 5.7 cm × 7.9 cm with an image resolution of 29.2 pixels/cm. At a viewing distance of approximately 60 cm, this resulted in a visual angle of 5.4° × 7.5°.
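The landmark-based exaggeration described above can be illustrated with a minimal sketch (assumed NumPy code, not the Sierra Morph™ implementation; the subsequent image warp is omitted): each of the 207 reference points is shifted away from (caricature) or towards (anti-caricature) the corresponding point of the gender-appropriate average face.

```python
import numpy as np

def caricature_landmarks(face_pts, average_pts, level):
    """Shift landmark coordinates relative to the average face.

    face_pts, average_pts: (207, 2) arrays of x/y positions in pixels.
    level: 0.30 for a 30% caricature, -0.30 for a 30% anti-caricature,
           0.0 for the veridical configuration.
    """
    return average_pts + (1.0 + level) * (face_pts - average_pts)

# Example with dummy coordinates inside a 351 x 461 pixel image:
rng = np.random.default_rng(0)
face = rng.uniform(0, 1, size=(207, 2)) * np.array([351, 461])
avg = rng.uniform(0, 1, size=(207, 2)) * np.array([351, 461])

plus_30 = caricature_landmarks(face, avg, 0.30)    # exaggerated configuration
minus_30 = caricature_landmarks(face, avg, -0.30)  # attenuated configuration
```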

4.1.3. Procedure

Before the experiment, participants gave informed consent, handedness was determined using the Edinburgh Inventory (Oldfield, 1971), and participants were encouraged to ask questions in case anything remained unclear at any time during the experiment.

4.1.3.1. Face familiarity task. Participants first performed a speeded dual-choice face recognition task. Faces were classified as familiar or unfamiliar by pressing marked keys on a standard computer keyboard using the index fingers of both hands. The assignment of response hand and response alternative was counterbalanced across participants. Both speed and accuracy were stressed. After reading the instructions on the monitor, participants performed 15 practice trials, which were randomly selected from the total stimulus set. These trials did not enter data analysis. The experimental block consisted of 150 different pictures (2 levels of familiarity × 5 levels of caricaturing × 15 individual faces) in random order. Each picture was only presented once. After a maximum of 50 trials, there was a short break. In all trials a central white fixation cross on black background was shown for 500 ms, followed by a face (500 ms) and a blank screen (2000 ms). No feedback was given for single trials. Participants were given a maximum of 2000 ms to respond.

4.1.3.2. Best-likeness ratings. After completing the familiarity task, participants decided via key-press using a rating scale from 1 to 4 whether a particular stimulus represented a good likeness of the respective familiar face (1 = not representative at all; 4 = very representative). All five caricature levels of the 15 famous faces were presented in random order until a key was pressed.

4.1.3.3. Familiarity ratings and recognition. Finally, to ensure that all participants were familiar with the German and unfamiliar with the British celebrities, they performed a familiarity rating task, using a scale from 1 to 4 (1 = never seen before the experiment; 2 = seems vaguely familiar, but do not know where I might have seen the face; 3 = familiar, but do not know the name; 4 = very familiar). Only veridical pictures were used in the rating task (15 familiar and 15 unfamiliar faces, respectively). In addition, participants were asked to provide the correct name or semantic information and answers were noted by the experimenter for the evaluation of possible false positives. Faces were presented in random order until a key was pressed.

4.1.4. Behavioural analysis

Response times (RTs) were recorded from the onset of face stimuli. Performance was analysed by means of accuracies and sensitivities (d') (Brophy, 1986). For the calculation of d', hit rates of 1 and false alarm rates of 0 were adjusted according to the suggestions by Macmillan and Kaplan (1985).
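A minimal sketch of these behavioural measures (assumed Python/SciPy code, not the software used in the study): d' and the criterion C are computed from hit and false-alarm rates, with perfect rates adjusted in the way commonly associated with Macmillan and Kaplan (1985), i.e. 0 and 1 replaced by 1/(2N) and 1 − 1/(2N); whether this matches the exact adjustment applied here is an assumption.

```python
from scipy.stats import norm

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and criterion (C) for the familiarity task."""
    n_old = hits + misses                      # familiar-face trials
    n_new = false_alarms + correct_rejections  # unfamiliar-face trials
    h = min(max(hits / n_old, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    f = min(max(false_alarms / n_new, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    return z_h - z_f, -0.5 * (z_h + z_f)

# e.g. 14 of 15 familiar faces recognized, 1 of 15 unfamiliar faces falsely accepted:
d_prime, criterion = dprime_and_c(hits=14, misses=1, false_alarms=1, correct_rejections=14)
```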

Fig. 7 – Top: Examples of stimuli used in Experiment 1. German celebrity Johannes B. Kerner at caricature levels ranging from −30 (anti-caricature) to 30% (caricature). Bottom: Examples of stimuli used in Experiment 2.


4.2. Experiment 2

4.2.1. Participants

A total of 11 participants (all female) aged between 20 and 26 years (M = 21.3, SD = 1.9) contributed data for this study. All participants had normal or corrected to normal vision, were right-handed and received 12 Euros. All subjects were students in their 2nd year at the Department of Psychology in Jena and were well familiar with lecturers and students, whose faces were included in the stimulus set. Data from 1 additional participant were excluded due to an insufficient number of artefact-free EEG trials.

4.2.2. Stimuli and apparatus

Photographs of 21 familiar and 21 unfamiliar persons were taken under standardized conditions with a digital camera (Fujifilm™ FinePix S5600, 4 megapixels). All faces were photographed in full frontal view, displaying a neutral expression in front of a black background. The 21 personally familiar faces belonged to lecturers and psychology students from the Department of Psychology at Jena University. Unfamiliar faces were age- and gender-matched to the familiar faces, and included pictures showing friends and relatives of psychology students attending a class held by JMK, none of whom studied psychology and most of whom lived in another city. Raw photographs were initially saved in jpg format (24-bit colour, 2592 × 1944 pixels, at a resolution of 28.3 pixels/cm). Raw pictures were edited as for Experiment 1 and then downsized to 704 × 533 pixels. Caricatures were generated as in Experiment 1, but only the caricature levels −30%, 0% and 30% were used. Stimuli were downsized to 184 × 244 pixels and converted to 8-bit greyscale mode (for stimulus examples see Fig. 7, bottom).

All stimuli were presented on black background in the centre of a 19" monitor. The presentation software was ERTS™ (Beringer, 2000). Image resolution was 28.3 pixels/cm, at a screen resolution of 800 by 600 pixels. The size of the stimuli was 6.5 cm × 8.6 cm at a viewing distance of 80 cm, resulting in a visual angle of 4.6° × 6.2°. Viewing distance was kept constant by using a chin rest.
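The image preparation and viewing geometry described above can be sketched as follows (assumed Pillow code; the original editing was done in Adobe Photoshop™, and the file names are hypothetical): downsizing to 184 × 244 pixels, conversion to 8-bit greyscale, and a check of the visual angle subtended at the 80 cm viewing distance.

```python
import math
from PIL import Image

img = Image.open("face_raw.jpg")               # hypothetical raw photograph
stim = img.resize((184, 244)).convert("L")     # 184 x 244 pixels, 8-bit greyscale
stim.save("face_stimulus.png")

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by a stimulus of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(visual_angle_deg(6.5, 80), visual_angle_deg(8.6, 80))  # close to the reported 4.6 deg x 6.2 deg
```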

4.2.3. Procedure

After participants had given informed consent, handedness was determined using the Edinburgh Inventory (Oldfield, 1971). Then an electrode cap (Easycap™) was applied and participants were guided to the electrically shielded chamber. Participants were encouraged to ask questions in case anything remained unclear at any time during the experiment.

4.2.3.1. Face familiarity task. Participants first performed a speeded dual-choice face recognition task. Faces were classified as familiar or unfamiliar by pressing a key on a response time keypad (ERTSKEY™) using the index fingers of both hands. The assignment of response hand and response alternative was counterbalanced across participants. Both speed and accuracy were stressed. The familiarity task comprised 2 blocks. In each block, anti-caricatures, veridicals and caricatures of 21 familiar and 21 unfamiliar faces were presented once, resulting in 126 different trials per block. Within blocks, trials were presented in random order. The second block consisted of the same stimuli so that each individual stimulus was presented twice across the familiarity task. After a maximum of 63 trials, there was a short break in which feedback on average response times and errors was provided. After reading the instructions on the monitor and prior to the first block of the test phase, participants performed 16 practice trials, which were randomly selected from the total stimulus set. These trials did not enter data analysis. In all trials a central white fixation cross on black background was shown for 500 ms, followed by a face (1500 ms) and a blank screen (500 ms). No feedback was given for single trials. Data were averaged across conditions (anti-caricatures, veridicals and caricatures of familiar and unfamiliar faces, respectively), resulting in a maximum of 42 trials per condition.

4.2.3.2. Best-likeness ratings. After completion of the familiarity task, participants decided via key-press which version of two simultaneously presented pictures depicting the same familiar person looked more like the respective person. The two pictures were presented one above the other along the vertical medial axis of the computer screen. All possible pairings (veridical vs. caricature; veridical vs. anti-caricature and caricature vs. anti-caricature, resulting in 63 trials) were presented in random order and the position of each picture type was counterbalanced within blocks. Participants were given a maximum of 30 s for each decision.

4.2.3.3. Familiarity ratings. As in Experiment 1, familiarity was rated using veridical pictures.

4.2.4. Behavioural analysis

The analysis of behavioural data was as in Experiment 1.

4.2.5. Event-related potentials

4.2.5.1. Data Recording. The electroencephalogram (EEG) was recorded in an electrically and acoustically shielded room. Data were recorded with sintered Ag/AgCl electrodes mounted on an electrode cap (EasyCap™, Falk Minow Services, Herrsching-Breitbrunn, Germany) using SynAmps amplifiers (NeuroScan Labs, Sterling, VA), arranged according to the extended 10/20 system at the scalp positions Fz, Cz, Pz, Iz, Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T7, T8, P7, P8, FT9, FT10, P9, P10, PO9, PO10, F9, F10, F9', F10', TP9 and TP10. TP10 (right upper mastoid) served as initial common reference, and a forehead electrode (AFz) served as ground. Impedances were kept below 10 kΩ and were typically below 5 kΩ. The horizontal electro-oculogram (EOG) was recorded from F9' and F10' at the outer canthi of both eyes. The vertical EOG was monitored bipolarly from electrodes above and below the right eye. All signals were recorded with AC (0.05 Hz high pass, 40 Hz low pass, − 6dB attenuation, 12 dB/octave), and sampled at a rate of 250 Hz. 4.2.5.2. Pre-processing of ERP Data. Offline, epochs were generated, lasting 1800 ms and starting 200 ms before the onset of a face stimulus. Automatic artefact detection software (KN-Format) was run for an initial sorting of trials, and all trials were then visually inspected for artefacts of ocular (e.g. blinks, saccades) and non-ocular origin (e.g. drifts). Trials with non-ocular artefacts, trials with saccades, and trials with
incorrect behavioural responses were discarded. For all remaining trials, ocular blink contributions to the EEG were corrected using the KN-Format algorithm (Elbert et al., 1985). ERPs were averaged separately for each channel and experimental condition. Each averaged ERP was low-pass filtered at 20 Hz with a zero phase shift digital filter, and recalculated to average reference, excluding the vertical EOG channel.
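For readers wishing to reproduce a comparable pipeline with current open-source tools, the epoching, artefact rejection, averaging, filtering and re-referencing steps could look roughly as follows (a sketch using the MNE-Python library, which was not used in the original study; the file name, event labels and rejection threshold are assumptions).

```python
import mne

raw = mne.io.read_raw_brainvision("sub01.vhdr", preload=True)   # hypothetical recording
events, event_id = mne.events_from_annotations(raw)

# 1800 ms epochs starting 200 ms before face onset, baseline-corrected
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.6, baseline=(-0.2, 0.0), preload=True)
epochs.drop_bad(reject=dict(eeg=100e-6))      # crude amplitude-based artefact rejection (assumption)
epochs.set_eeg_reference("average")           # recalculate to average reference

# per-condition average, then 20 Hz zero-phase low-pass on the ERP
evoked = epochs["familiar/caricature"].average()   # hypothetical condition label
evoked.filter(l_freq=None, h_freq=20.0)
```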

4.2.5.3. Analysis of ERP Data. Only correct and artefact-free trials were included for averaging. The average numbers of correct and artefact-free trials per condition were as follows: familiar faces: 33.8, 34.9 and 33.8; unfamiliar faces: 36.1, 36.6 and 36.7 (for anti-caricatures, veridicals and caricatures, respectively). ERPs were quantified by mean amplitudes for N170 (140–180 ms) and N250 (260–320 ms). The time-windows were chosen to correspond with distinct peaks identified from the grand mean waveforms across all conditions. Measures were taken relative to a 200 ms baseline preceding the target stimulus. The latency of N170, as observed in the grand means, was ~160 ms and there was little variability across conditions (range 156–160 ms). For N250, there was a negative peak at ~290 ms over the right (TP10) and at ~300 ms over the left hemisphere (TP9). Effects were quantified at regions of interest which were based on maximum amplitudes of a particular component in grand mean waveforms and on previous research (Schweinberger et al., 2002). N170 was analyzed at P7/P8, P9/P10, PO9/PO10 and TP9/TP10. N250 was quantified at P9/P10, PO9/PO10 and TP9/TP10. First, as for the behavioural data, we performed ANOVAs with repeated measurements on the factor caricature level (−30% vs. 0% vs. +30%). The additional factors electrode site (depending on component) and hemisphere were introduced. Familiar and unfamiliar faces were analysed separately and effects of the factor caricature level were further investigated by post-hoc ANOVAs with Bonferroni-corrected alpha levels.
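A minimal sketch of the amplitude quantification described above (assumed NumPy code operating on an already averaged ERP; array shapes and values are synthetic): the mean voltage in the N170 (140–180 ms) and N250 (260–320 ms) windows is taken relative to the 200 ms pre-stimulus baseline.

```python
import numpy as np

def mean_amplitude(erp, times, window, baseline=(-0.2, 0.0)):
    """erp: (n_channels, n_times) averaged ERP in volts; times: (n_times,) in seconds."""
    base = erp[:, (times >= baseline[0]) & (times < baseline[1])].mean(axis=1, keepdims=True)
    in_window = (times >= window[0]) & (times <= window[1])
    return (erp - base)[:, in_window].mean(axis=1)        # one value per channel

# Synthetic example: 32 channels, 250 Hz sampling, -200 ms to 1600 ms
times = np.arange(-0.2, 1.6, 1 / 250)
erp = np.random.randn(32, times.size) * 1e-6

n170 = mean_amplitude(erp, times, window=(0.140, 0.180))
n250 = mean_amplitude(erp, times, window=(0.260, 0.320))
```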

Acknowledgments

This study resulted from a student research project supervised by JMK and conducted at the Department of Psychology, University of Jena. We gratefully acknowledge the help of Christoph Casper, Julia Görnandt, Isabel Hentrich, Alexander Kurt, Lisa Merkel, Carolin Müller, Magdalena Pfisterer, Susanne Schwager, Dominique Schwartze and Lydia Walther for their contributions in planning, setting-up and conducting the experiments. We also thank Bettina Kamchen, Axel Mayer, Romi Zäske and Janina Suhrke for their help in stimulus editing and data collection. Special thanks also to all volunteers who kindly agreed to have their faces caricatured for this study. Finally we thank Mike Burt for providing us with male and female average faces for generating the caricatures.

REFERENCES

Allison, T., Ginter, H., McCarthy, G., Nobre, A.C., Puce, A., Luby, M., et al., 1994. Face recognition in human extrastriate cortex. J. Neurophysiol. 71, 821–825.

Bartlett, J.C., Hurry, S., Thorley, W., 1984. Typicality and familiarity of faces. Mem. Cogn. 12, 219–228.
Benson, P.J., Perrett, D.I., 1991. Perception and recognition of photographic quality facial caricatures: implications for the recognition of natural images. Eur. J. Cogn. Psychol. 3, 105–135.
Bentin, S., Allison, T., Puce, A., Perez, E., McCarthy, G., 1996. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 8, 551–565.
Bentin, S., Deouell, L.Y., 2000. Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cogn. Neuropsychol. 17, 35–54.
Beringer, S., 2000. Experimental Run Time System (Version 3.32c). BeriSoft Cooperation, Frankfurt.
Bindemann, M., Burton, A.M., Leuthold, H., Schweinberger, S.R., 2008. Brain potential correlates of face recognition: geometric distortions and the N250r brain response to stimulus repetitions. Psychophysiology 45, 535–544.
Brophy, A.L., 1986. Alternatives to a table of criterion values in signal detection theory. Behav. Res. Meth. Instrum. Comput. 18, 285–286.
Bruce, V., Young, A., 1986. Understanding face recognition. Br. J. Psychol. 77, 305–327.
Burton, A.M., Vokey, J.R., 1998. The face-space typicality paradox: understanding the face-space metaphor. Q. J. Exp. Psychol. A 51, 475–483.
Burton, A.M., Jenkins, R., Hancock, P.J.B., White, D., 2005. Robust representations for face recognition: the power of averages. Cogn. Psychol. 51, 256–284.
Byatt, G., Rhodes, G., 1998. Recognition of own-race and other-race caricatures: implications for models of face recognition. Vis. Res. 38, 2455–2468.
Caharel, S., Poiroux, S., Bernard, C., Thibaut, F., Lalonde, R., Rebai, M., 2002. ERPs associated with familiarity and degree of familiarity during face recognition. Int. J. Neurosci. 112, 1499–1512.
Calder, A.J., Burton, A.M., Miller, P., Young, A.W., Akamatsu, S., 2001. A principal component analysis of facial expressions. Vis. Res. 41, 1179–1208.
Calder, A.J., Young, A.W., Benson, P.J., Perrett, D.I., 1996. Self priming from distinctive and caricatured faces. Br. J. Psychol. 87, 141–162.
Chang, P.P.W., Levine, S.C., Benson, P.J., 2002. Children's recognition of caricatures. Dev. Psychol. 38, 1038–1051.
Cohen, M.E., Carr, W.J., 1975. Facial recognition and the von Restorff effect. Bull. Psychon. Soc. 6, 383–384.
Deffenbacher, K.A., Johanson, J., Vetter, T., O'Toole, A.J., 2000. The face typicality-recognizability relationship: encoding or retrieval locus? Mem. Cogn. 28, 1173–1182.
Dewhurst, S.A., Hay, D.C., Wickham, L.H.V., 2005. Distinctiveness, typicality, and recollective experience in face recognition: a principal components analysis. Psychon. Bull. Rev. 12, 1032–1037.
Eimer, M., 2000a. Effects of face inversion on the structural encoding and recognition of faces — evidence from event-related brain potentials. Cogn. Brain Res. 10, 145–158.
Eimer, M., 2000b. Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clin. Neurophysiol. 111, 694–705.
Eimer, M., 2000c. The face-specific N170 component reflects late stages in the structural encoding of faces. NeuroReport 11, 2319–2324.
Elbert, T., Lutzenberger, W., Rockstroh, B., Birbaumer, N., 1985. Removal of ocular artifacts from the EEG — a biophysical approach to the EOG. Electroencephalogr. Clin. Neurophysiol. 60, 455–463.
Going, M., Read, J.D., 1974. Effects of uniqueness, sex of subject, and sex of photograph on facial recognition. Percept. Mot. Skills 39, 109–110.

Hagen, M.A., Perkins, D., 1983. A refutation of the hypothesis of the superfidelity of caricatures relative to photographs. Perception 12, 55–61.
Halit, H., de Haan, M., Johnson, M.H., 2000. Modulation of event-related potentials by prototypical and atypical faces. NeuroReport 11, 1871–1875.
Hancock, P.J.B., Bruce, V., Burton, A.M., 2000. Recognition of unfamiliar faces. Trends Cognit. Sci. 4, 330–337.
Hancock, P.J.B., Burton, A.M., Bruce, V., 1996. Face processing: human perception and principal components analysis. Mem. Cogn. 24, 26–40.
Heisz, J.J., Watter, S., Shedden, J.A., 2006. Automatic face identity encoding at the N170. Vis. Res. 46, 4604–4614.
Hole, G.J., George, P.A., Eaves, K., Rasek, A., 2002. Effects of geometric distortions on face-recognition performance. Perception 31, 1221–1240.
Huynh, H., Feldt, L.S., 1976. Estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs. J. Educ. Stat. 1, 69–82.
Kaufmann, J.M., Schweinberger, S.R., 2004. Expression influences the recognition of familiar faces. Perception 33, 399–408.
Kaufmann, J.M., Schweinberger, S.R., Burton, A.M., in press. N250 ERP correlates of newly acquired face representations across different images. J. Cogn. Neurosci.
Kloth, N., Dobel, C., Schweinberger, S.R., Zwitserlood, P., Bolte, J., Junghofer, M., 2006. Effects of personal familiarity on early neuromagnetic correlates of face perception. Eur. J. Neurosci. 24, 3317–3321.
Lee, K., Byatt, G., Rhodes, G., 2000. Caricature effects, distinctiveness, and identification: testing the face-space framework. Psychol. Sci. 11, 379–385.
Lee, K.J., Perrett, D., 1997. Presentation-time measures of the effects of manipulations in colour space on discrimination of famous faces. Perception 26, 733–752.
Lee, K.J., Perrett, D.I., 2000. Manipulation of colour and shape information and its consequence upon recognition and best-likeness judgments. Perception 29, 1291–1312.
Leopold, D.A., O'Toole, A.J., Vetter, T., Blanz, V., 2001. Prototype-referenced shape encoding revealed by high-level aftereffects. Nat. Neurosci. 4, 89–94.
Lewis, M.B., 2004. Face-space-R: towards a unified account of face recognition. Vis. Cogn. 11, 29–69.
Lewis, M.B., Johnston, R.A., 1998. Understanding caricatures of faces. Q. J. Exp. Psychol. A 51, 321–346.
Lewis, M.B., Johnston, R.A., 1999. A unified account of the effects of caricaturing faces. Vis. Cogn. 6, 1–42.
Light, L.L., Kayra-Stuart, F., Hollander, S., 1979. Recognition memory for typical and unusual faces. J. Exp. Psychol. Hum. Learn. Mem. 5, 212–228.
Liu, J., Higuchi, M., Marantz, A., Kanwisher, N., 2000. The selectivity of the occipitotemporal M170 for faces. NeuroReport 11, 337–341.
Mauro, R., Kubovy, M., 1992. Caricature and face recognition. Mem. Cogn. 20, 433–440.
Macmillan, N.A., Kaplan, H.L., 1985. Detection theory analysis of group data — estimating sensitivity from average hit and false-alarm rates. Psychol. Bull. 98, 185–199.
Megreya, A.M., Burton, A.M., 2006. Unfamiliar faces are not faces: evidence from a matching task. Mem. Cogn. 34, 865–876.
Minnebusch, D.A., Suchan, B., Ramon, M., Daum, I., 2007. Event-related potentials reflect heterogeneity of developmental prosopagnosia. Eur. J. Neurosci. 25, 2234–2247.
Newell, F.N., Chiroro, P., Valentine, T., 1999. Recognizing unfamiliar faces: the effects of distinctiveness and view. Q. J. Exp. Psychol. A 52, 509–534.
Oldfield, R.C., 1971. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9, 97–113.

Perrin, F., Pernier, J., Bertrand, O., Echallier, J.F., 1989. Spherical splines for scalp potential and current-density mapping. Electroencephalogr. Clin. Neurophysiol. 72, 184–187.
Perkins, D., 1975. A definition of caricature, and caricature and recognition. Stud. Anthropol. Vis. Commun. 2, 1–24.
Perrett, D.I., Penton-Voak, I.S., Little, A.C., Tiddeman, B.P., Burt, D.M., Schmidt, N., et al., 2002. Facial attractiveness judgements reflect learning of parental age characteristics. Proc. R. Soc. Lond. Ser. B, Biol. Sci. 269, 873–880.
Rhodes, G., 1988. Looking at faces: first-order and second-order features as determinants of facial appearance. Perception 17, 43–63.
Rhodes, G., 1996. Superportraits: Caricatures and Recognition. Psychology Press, Hove, UK.
Rhodes, G., Brake, S., Atkinson, A.P., 1993. What's lost in inverted faces? Cognition 47, 25–57.
Rhodes, G., Brennan, S., Carey, S., 1987. Identification and ratings of caricatures — implications for mental representations of faces. Cogn. Psychol. 19, 473–497.
Rhodes, G., Byatt, G., Tremewan, T., Kennedy, A., 1997. Facial distinctiveness and the power of caricatures. Perception 26, 207–223.
Rhodes, G., Carey, S., Byatt, G., Proffitt, F., 1998. Coding spatial variations in faces and simple shapes: a test of two models. Vis. Res. 38, 2307–2321.
Schweinberger, S.R., Huddy, V., Burton, A.M., 2004. N250r: a face-selective brain response to stimulus repetitions. NeuroReport 15, 1501–1505.
Schweinberger, S.R., Pickering, E.C., Jentzsch, I., Burton, A.M., Kaufmann, J.M., 2002. Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cogn. Brain Res. 14, 398–409.
Searcy, J.H., Bartlett, J.C., 1996. Inversion and processing of component and spatial–relational information in faces. J. Exp. Psychol. Hum. Percept. Perform. 22, 904–915.
Shepherd, J.W., Gibling, F., Ellis, H.D., 1991. The effects of distinctiveness, presentation time and delay on face recognition. Eur. J. Cogn. Psychol. 3, 137–145.
Sommer, W., Heinz, A., Leuthold, H., Matt, J., Schweinberger, S.R., 1995. Metamemory, distinctiveness, and event-related potentials in recognition memory for faces. Mem. Cogn. 23, 1–11.
Stevenage, S.V., 1995a. Can caricatures really produce distinctiveness effects? Br. J. Psychol. 86, 127–146.
Stevenage, S.V., 1995b. Demonstration of a caricature advantage in children. Cah. Psychol. Cogn.-Curr. Psychol. Cogn. 14, 325–341.
Tanaka, J.W., Curran, T., Porterfield, A.L., Collins, D., 2006. Activation of preexisting and acquired face representations: the N250 event-related potential as an index of face familiarity. J. Cogn. Neurosci. 18, 1488–1497.
Tanaka, J.W., Sengco, J.A., 1997. Features and their configuration in face recognition. Mem. Cogn. 25, 583–592.
Tversky, B., Baratz, D., 1985. Memory for faces — are caricatures better than photographs? Mem. Cogn. 13, 45–49.
Valentine, T., 1991. A unified account of the effects of distinctiveness, inversion, and race in face recognition. Q. J. Exp. Psychol. A 43, 161–204.
Vokey, J.R., Read, J.D., 1992. Familiarity, memorability, and the effect of typicality on the recognition of faces. Mem. Cogn. 20, 291–302.
Wickham, L.H.V., Morris, P.E., Fritz, C.O., 2000. Facial distinctiveness: its measurement, distribution and influence on immediate and delayed recognition. Br. J. Psychol. 91, 99–123.
Winograd, E., 1981. Elaboration and distinctiveness in memory for faces. J. Exp. Psychol. Hum. Learn. Mem. 7, 181–190.