Acta Psychologica 128 (2008) 119–126 www.elsevier.com/locate/actpsy
The horizontal and vertical relations in upright faces are transmitted by different spatial frequency ranges

Valerie Goffaux *

Maastricht Brain Imaging Center, Department of Neurocognition, Faculty of Psychology, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
Cognition and Development Unit, Faculty of Psychology, Catholic University of Louvain, Belgium

Received 13 July 2007; received in revised form 12 November 2007; accepted 15 November 2007; available online 31 December 2007
Abstract

Faces convey distinct types of information: features and their spatial relations, which are differentially vulnerable to inversion. While inversion largely disrupts the processing of vertical spatial relations (e.g. eye height), its effect is moderate for horizontal relations (e.g. interocular distance) and local feature properties. The SF ranges optimally transmitting horizontal and vertical face relations were investigated here to further address their functional role in face perception. Participants matched upright and inverted pairs of faces that differed at the level of local featural properties, horizontal relations, or vertical relations. Irrespective of SF, the inversion effect was larger for vertical than for horizontal and featural cues. Most interestingly, SF differentially influenced the processing of vertical, horizontal and featural cues in upright faces. Vertical relations were optimally processed in intermediate SF, which are known to carry useful information for face individuation. In contrast, horizontal relations were best conveyed by high SF, which are involved in the processing of local face properties. These findings not only confirm that horizontal and vertical relations play distinct functional roles in face perception, but also suggest a unique role of vertical relations in face individuation.

© 2007 Elsevier B.V. All rights reserved.

PsycINFO classification: 2323

Keywords: Face perception; Inversion effect; Features; Spatial relations; Spatial frequencies
1. Introduction

The question of how humans encode, store and recognize hundreds of individuals based solely on facial information has impelled decades of research in high-level vision and is still being investigated (Farah, Wilson, Drain, & Tanaka, 1998; Sinha, Balas, Ostrovsky, & Russell, 2006). Each face conveys a multitude of information both at the local level of its constituent features (eyes, nose, etc.) and at the level of the spatial relations organizing such features
* Address: Maastricht Brain Imaging Center, Department of Neurocognition, Faculty of Psychology, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands. Tel.: +31 43 388 19 55; fax: +31 43 388 41 25. E-mail address: Valerie.Goff[email protected]

0001-6918/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.actpsy.2007.11.005
(e.g. the relative distance between nose and mouth, or between eyes and nose, or between mouth and chin, etc.). In 1969, Yin (1969) made the intriguing observation that picture-plane inversion dramatically impairs face processing skills. This inversion effect (IE) is larger for faces than for most other visual stimuli (McKone, Kanwisher, & Duchaine, 2007; see Reed, Stone, Bozova, & Tanaka, 2003). Consequently, inversion has been largely used as a tool to disrupt face-specialized processes (e.g. Rhodes, Peters, Lee, Morrone, & Burr, 2005). However, there is evidence demonstrating that inversion does not affect all aspects of face information equally. In upright faces, humans are able to process both the features and their spatial relations. But, once faces are inverted, human sensitivity declines more for spatial relations than for local feature properties (Bartlett & Searcy, 1993; Freire, Lee, & Symons,
2000; Leder & Bruce, 2000). These findings indicate that features and their spatial relations are dissociable cues for face perception. Some authors have also suggested that individuation skills for upright faces rely on the processing of spatial relations (e.g. Maurer, Grand, & Mondloch, 2002). Multiple spatial relations define each individual face (Farkas, 1994; Shi, Samal, & Marx, 2006). In most face perception studies, it is implicitly assumed that all these spatial relations contribute equally to face perception and, hence, are equally affected by inversion. However, our recent investigations contradict this view (Goffaux & Rossion, 2007). We showed that when faces were inverted, sensitivity to vertical spatial relations decreased (for example, eye or mouth height; see also Barton, Keenan, & Bass, 2001), whereas the perception of horizontal relations and local feature properties remained comparably accurate. This finding could not be attributed to differential difficulty in processing vertical as opposed to horizontal spatial relations, as matching performance across these conditions was equivalent for upright faces. Furthermore, the dissociable effects of inversion could not be accounted for by the fact that inversion and the vertical relational manipulation both alter vertical face organization: the largest detriments were invariably observed for vertical relations at other rotation angles as well (Goffaux & Rossion, 2007, experiment 3). These findings helped resolve empirical discrepancies across past studies on face inversion, which confounded horizontal and vertical relational manipulations (e.g. Riesenhuber, Jarudi, Gilad, & Sinha, 2004; Yovel & Kanwisher, 2004), and highlighted crucial dimensions of face information. The differential vulnerability of vertical and horizontal relations to inversion reflects their distinct functional contributions to face perception. Nevertheless, the reasons for the selective vertical IE remain to be clarified.
As face identification is substantially poorer for inverted than for upright faces, the large IE observed for vertical relations suggests that they play a predominant role in upright face processing. In contrast, the weak IE observed for horizontal relations suggests that these are processed at a local scale and do not contribute to efficient upright individuation. However, it is not clear how vertical relations contribute to upright face perception. On the one hand, since faces are vertically elongated shapes, feature integration in upright faces may crucially rely on vertical relations. On the other hand, the vertical IE may indicate a more fundamental role of vertical relations in face processing. The face processing system may be tuned to process vertical relations because these could be the most useful/discriminative cues when individuating faces. The aim of the present experiment was to further explore the respective contributions of vertical and horizontal relations to face perception. We addressed whether the processing of these two types of cues relies on different input spatial frequencies (SF). The extraction of SF is a
primary step in early vision (see De Valois & De Valois, 1988; Sowden & Schyns, 2006 for a recent review). Interestingly, the high-level processing of faces is also largely influenced by input SF. This SF reliance is thought to be larger for faces than for other visual categories (Biederman & Kalocsai, 1997; Boutet, Collin, & Faubert, 2003; Collin, Liu, Troje, McMullen, & Chaudhuri, 2004; Liu, Collin, Rainville, & Chaudhuri, 2000). Various aspects of face information have indeed been found to map onto distinct SF ranges. For example, judging the gender of a face relies on lower SF than judging its emotional expressivity (e.g. Schyns, Bonnar, & Gosselin, 2002). In contrast, a 2-octave-wide range of SF situated between 8 and 16 cpf is known to convey inter-individual face differences most efficiently. Intermediate spatial frequency (ISF; ca. 12 cpf; e.g. Näsänen, 1999) input thus optimally promotes recognition of familiar faces. In contrast, low spatial frequencies (LSF, below 8 cycles per face, or cpf) contain much poorer information and govern the fast integration of face features into a unified representation (Goffaux, Hault, Michel, Vuong, & Rossion, 2005; Goffaux & Rossion, 2006; see Sergent, 1986). Lastly, high spatial frequency (HSF, above 32 cpf) information transmits the local details of face features. In the present paper, we took advantage of the fact that various SF ranges subserve different goals in face perception by identifying which SF ranges optimally convey vertical and horizontal relational information. As the differential IE observed for horizontal and vertical relations indicates their dissociable contribution to face perception, it is plausible that these cues are represented by different SF ranges. Defining these ranges will in turn clarify their role in face perception.
Participants discriminated upright and inverted pairs of faces that varied either at the level of a given local feature (eye shape and surface), or with respect to vertical or horizontal relations (eyes positioned higher/lower or closer together/further apart, respectively). Stimuli were filtered to exclusively preserve LSF, ISF or HSF. We used two measures to probe featural, horizontal and vertical processes across SF: upright performance and IE magnitude (upright minus inverted performance). For upright faces, we expected horizontal and vertical processing to rely on different SF ranges. Our hypothesis was that if vertical relations are more important than horizontal relations for individuating upright faces, they should be best conveyed by ISF. Alternatively, vertical relations may be more crucial for upright faces than horizontal ones simply because features are integrated more intensely along than across the vertical elongation axis. In this case, they should be best processed in LSF. In contrast, since horizontal and featural cues are spared by inversion, we expected them to be processed locally and to rely on higher SF than vertical relations. Moreover, because inversion selectively targets vertical relations, we expected to observe a larger IE in the SF ranges best conveying vertical relations for upright faces.
2. Materials and procedure

2.1. Subjects

Eighteen right-handed undergraduate students (Faculty of Psychology, UCL, Belgium) gained course credits for their participation (age range: 19–23 years; 3 males). All had normal or corrected-to-normal vision.

2.2. Stimuli

Face stimuli were identical to those of our recent study (Goffaux & Rossion, 2007). Twenty full-front greyscale pictures of faces (half males; neutral expression; 190 × 250 pixels) were modified in different ways. For the featural condition, the eyes of a given face were replaced by those of another. In the vertical condition, the eyes were elevated/lowered by 15 pixels. In the horizontal condition, each eye was moved closer to or further from the nose by 15 pixels. The 15-pixel differences in eye position subtended 0.45° of visual angle. Both the eyes and eyebrows were displaced in the vertical and horizontal conditions to avoid distorting local eye–eyebrow relationships, on which subjects may rely to accomplish the task. Nose–mouth stimuli, in which nose and mouth were exchanged with those of another face, were included as catch trials and were not further analyzed. The featural, vertical, and horizontal changes applied here are probably not equally representative of the natural variations among human faces (see Discussion). However, these stimulus manipulations were calibrated to obtain equal task difficulty across upright full-spectrum (i.e., unfiltered) conditions (Goffaux & Rossion, 2007).

To explore horizontal and vertical processing across SF, we filtered face stimuli to retain only the lowest, intermediate, or highest SF (using the same procedure as in Goffaux & Rossion, 2006). Faces were pasted onto a 256 × 256 pixel grey background. They were then Fourier transformed and multiplied with 2-octave-wide band-pass filters. Cutoff values were 2–8, 8–32 and 32–128 cpf to preserve LSF, ISF and HSF, respectively. After filtering and inverse Fourier transform, all stimuli received the same global luminance and contrast values, so that behavioral differences across spatial scales could not be attributed to luminance or contrast consequences of SF filtering (see Fig. 1). The same grey frame was then pasted on the LSF, ISF and HSF images so that each face stimulus covered the same surface across SF ranges. To obtain inverted stimuli, filtered faces were flipped vertically. Stimulus size was reduced to 170 × 224 pixels for display reasons. Subjects were seated in a quiet and dark room in front of a PC monitor (17 in.; 75 Hz refresh rate; 1024 × 768 pixel resolution). Stimuli subtended approximately 5.5° × 7.5° of visual angle.

2.3. Procedure

Faces were presented in pairs, one by one. A trial started with a fixation cross, which remained for 300 ms, followed by a 200 ms blank. The first face was presented for 900 ms, followed by a 600 ms blank. Presentation of the second face was terminated by the subject's response or after 3000 ms. Consecutive trials were separated by a fixed 900 ms blank. To avoid retinal persistence and purely local comparison strategies, the first face was randomly jittered (by 30 pixels) around the screen center; the second face was always centered. Subjects performed a matching task and had to press different keys depending on whether the faces constituting each pair were identical or different (half of the trials each). They were informed that the face differences were subtle but were naïve as to their exact nature. Featural, vertical and horizontal pairs were randomly presented in LSF, ISF and HSF throughout the experiment. In contrast, face orientation was blocked: subjects either started with upright (n = 8) or inverted pairs (n = 10). Our previous investigations showed that the vertical IE replicates irrespective of whether orientation is varied at random or in separate blocks. Since the task was relatively challenging, subjects were trained with a subset of face pairs (5 pairs per condition) prior to the experiment. They had 70 practice trials interspersed with frequent feedback on performance accuracy. During the experiment, subjects were presented with the remaining face pairs (15 pairs per condition in upright and inverted orientation). Feedback was also provided during the experiment, though less frequently (every 60 trials). We tested 24 experimental conditions in total: Orientation (Upright, Inverted) × SF (LSF, ISF, HSF) × Stimulus (Featural, Vertical, Horizontal, Catch). There were 30 trials per condition, resulting in a total of 720 experimental trials.
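The band-pass filtering described in the Stimuli section can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: an ideal annular filter in the Fourier domain is assumed (the exact filter profile is not specified here), and the luminance/contrast equation is approximated by a simple mean/standard-deviation normalization to arbitrary common values.

```python
import numpy as np

def bandpass_face(img, low_cpf, high_cpf):
    """Keep only spatial frequencies between low_cpf and high_cpf
    (in cycles per image, here standing in for cycles per face).
    Ideal annular filter: an assumption, see the note above."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h  # cycles per image, vertical axis
    fx = np.fft.fftfreq(w) * w  # cycles per image, horizontal axis
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    keep = (radius >= low_cpf) & (radius < high_cpf)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * keep))
    # Equate global luminance and contrast across SF conditions
    # (mean/std normalization to arbitrary common values).
    filtered = (filtered - filtered.mean()) / filtered.std()
    return filtered * 30.0 + 128.0

face = np.random.rand(256, 256)  # placeholder for a 256 x 256 face image
lsf = bandpass_face(face, 2, 8)    # low spatial frequencies
isf = bandpass_face(face, 8, 32)   # intermediate spatial frequencies
hsf = bandpass_face(face, 32, 128) # high spatial frequencies
```

Because the same normalization is applied to every SF condition, differences between the LSF, ISF and HSF images reflect only their frequency content, mirroring the control described above.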
2.4. Data and statistical analyses

Trials exceeding the mean RT by more than four standard deviations were excluded (at most 7 trials per subject). Based on hit and false-alarm rates, we computed d′ sensitivity indices for each subject and each experimental condition (following Stanislaw & Todorov, 1999). d′, accuracy and correct response times (RT) were analyzed using a three-way repeated-measures ANOVA with Orientation (Upright, Inverted), SF (LSF, ISF, HSF) and Stimulus (Featural, Vertical, Horizontal) as within-subject factors. Whenever Stimulus significantly interacted with both Orientation and SF (see below), we ran two-way repeated-measures ANOVAs separately for each stimulus condition. To compare the IE across stimulus and SF conditions, we computed the d′, accuracy and RT differences between upright and inverted conditions for each subject and each condition separately. These IE magnitude values were also submitted to a
Fig. 1. This figure illustrates the stimuli and findings of the study. In the left panel, the face of one individual has been manipulated at the level of local eye feature properties, horizontal relations, or vertical relations. These manipulations are displayed in LSF, ISF and HSF. The right panel depicts the average matching performance, in d′, in every experimental condition (n = 18; bars indicate standard errors of the means, SEM).
two-way ANOVA with Stimulus and SF as within-subject factors. Tukey HSD tests were used to compare conditions with each other. To compare the featural, horizontal and vertical conditions in the upright orientation, we considered Tukey HSD tests from the three-way ANOVA. In all other cases, we report Tukey HSD tests from the two-way ANOVAs. The analyses of accuracy and correct RT are reported in the Appendix. Accuracy analyses largely confirmed the d′ findings. Correct RT were less sensitive to our experimental manipulations than d′ and accuracy, as indicated by the lack of interactions observed for this measure. The fact that stimuli remained visible until the subject's response and that feedback stressed accuracy throughout the experiment likely accounts for this.
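The sensitivity and IE computations described above can be sketched as follows. This is a generic signal-detection sketch, not the study's code: the 1/(2N) correction for extreme hit/false-alarm rates is an assumption (Stanislaw and Todorov, 1999, discuss several corrections, and the one actually applied is not restated here), and the trial counts in the usage example are illustrative only.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).
    Extreme rates (0 or 1) are clipped with a 1/(2N) correction --
    an assumption, see the note above."""
    z = NormalDist().inv_cdf
    n_sig = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
    far = min(max(false_alarms / n_noise, 1 / (2 * n_noise)),
              1 - 1 / (2 * n_noise))
    return z(hr) - z(far)

# IE magnitude for one (hypothetical) subject and condition:
# upright minus inverted d'.
d_upright = d_prime(12, 3, 3, 12)   # 80% hits, 20% false alarms
d_inverted = d_prime(9, 6, 6, 9)    # 60% hits, 40% false alarms
ie_magnitude = d_upright - d_inverted
```

Computed per subject and condition, these difference scores are exactly the IE magnitude values entered into the two-way Stimulus × SF ANOVA described above.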
2.5. Results

The three-way ANOVA on d′ revealed significant main effects of Orientation (F(1, 17) = 54.25, p < .0001) and Stimulus (F(2, 34) = 35.18, p < .0001). These factors interacted significantly (Orientation × Stimulus: F(2, 17) = 14.31, p < .0001). SF also significantly affected sensitivity (F(2, 34) = 7.71, p < .002), but its influence depended on Stimulus type, as indicated by the significant SF × Stimulus interaction (F(4, 68) = 8.13, p < .0001). Using post-hoc comparisons, we first addressed whether the upright processing of vertical, horizontal and featural variations was influenced by SF. Upright featural processing was not influenced by SF (ps > .2). In contrast, sensitivity to vertical relations presented at upright was
significantly better in ISF than in HSF and marginally better in ISF than in LSF (ISF versus LSF: p = .08; ISF versus HSF: p < .004). Sensitivity to upright horizontal relations was higher in HSF than in LSF (p < .003). When comparing upright stimulus conditions across SF, we found that horizontal processing was more efficient than featural processing, irrespective of SF (ps < .04). But it surpassed vertical performance only in HSF (p < .0002; in other SF ranges: ps > .75). Sensitivity tended to decline with inversion independently of condition. Nonetheless, the ANOVAs computed separately for each stimulus condition revealed significant main effects of Orientation for vertical and horizontal relations only (vertical: F(1, 17) = 47.71, p < .0001; horizontal: F(1, 17) = 9.84, p < .006; featural: F(1, 17) = 2.83, p = .1). The magnitude of the horizontal and vertical IEs was stable across SF (no difference in IE magnitude across SF: ps > .1). The vertical IE was, however, larger than the horizontal IE, irrespective of SF (ps < .033).

3. Discussion

Goffaux and Rossion (2007) recently showed that the face IE largely stems from the impaired processing of vertical relations, whereas the processing of horizontal relations and local feature properties is only moderately affected by face orientation. These findings suggest that horizontal and vertical relations play different functional roles in face perception. The present experiment aimed at further exploring the functional contribution of horizontal and vertical relations to face perception by addressing the SF ranges that optimally convey horizontal and vertical relational cues. Our rationale was that if vertical and horizontal relations represent functionally different cues, they should be conveyed by different SF ranges, as the latter carry functionally dissociable cues for the face processing system. In line with our expectations, the results show that vertical and horizontal relations are processed most efficiently in different SF ranges.
In upright faces, vertical relations were best conveyed by ISF. This is of particular interest since previous evidence has demonstrated that ISF carry the most useful information for individuating and recognizing faces (e.g. Näsänen, 1999). The fact that subjects' performance in extracting vertical relations was highest in ISF suggests that these are among the most diagnostic cues for discriminating upright faces. The upright processing of horizontal relations relied on a clearly different SF range than vertical relational processing, i.e. HSF, which are considered optimal for the processing of local properties (Goffaux & Rossion, 2006; Goffaux et al., 2005). In contrast, sensitivity to local feature variations was stable across SF. This was surprising since varying a feature is generally thought to induce local processing strategies, which themselves rely on HSF (Goffaux & Rossion, 2006; Goffaux et al., 2005). However, the perception of a local feature is also robustly influenced by its spatial relations with other
features (e.g. the whole-part advantage or the composite illusion; Tanaka & Farah, 1993; Young, Hellawell, & Hay, 1987). It is thus likely that the present featural variations were not processed in a strictly local way and were represented in a larger SF range than HSF. More generally, this shows the ambiguity of featural variations in face perception studies: depending on the paradigm, they are used to index featural or relational processing. Previous investigations have addressed the processing of facial features and relations across SF. Boutet and colleagues (2003) reported an ISF advantage for the processing of both relational and featural cues. However, several methodological aspects distinguish their study from the present one, preventing direct comparison. In their experiment, all inner features were manipulated (eyes, nose and mouth). Moreover, the SF ranges studied largely differed from those explored in the present study, and performance was generally poor in the LSF and HSF conditions. In a previous paper, we also addressed how featural and relational processes are rooted in the LSF and HSF ranges of face information (Goffaux et al., 2005). We reported an LSF advantage for processing relations and an HSF advantage for processing local features. However, horizontal and vertical manipulations were then confounded. Most importantly, this previous work did not investigate inversion effects or the ISF range. Our interest was not only to investigate the SF ranges conveying horizontal and vertical relations in upright faces, but also to study how the (vertical) IE would vary across SF. The present results replicate the disproportionate IE for the discrimination of vertical relations. Our results confirm previous evidence that the face IE is SF-independent (Boutet et al., 2003; Collin et al., 2004). Hence, the effect of inversion on vertical relations was also independent of input SF.
This suggests that inversion alters observer-dependent variables, the involvement of which does not exclusively depend on input SF. More generally, these results illustrate the many facets of the relational processes engaged for faces (cf. Maurer et al., 2002). The robust IE for vertical relations indicated that these could be the most diagnostic cues for discriminating faces; the present study of face input properties provides firmer evidence for this view. It may thus appear paradoxical that we fail to observe higher sensitivity for vertical than for horizontal relations in upright faces. However, this was expected since the stimulus manipulations investigated here were calibrated to yield equally good performance levels in the upright orientation (based on Goffaux & Rossion, 2007). It has to be acknowledged that the resulting relational manipulations largely surpassed the natural variability in feature position across individual faces. Some authors (Gosselin, Fortin, Michel, Schyns, & Rossion, 2007; Shi et al., 2006) recently measured the natural variations in feature position and feature distances in large sets of face pictures. Shi et al. (2006) report some variability in feature position across individuals, whereas it is reported to be very weak by Gosselin et al. (2007). Horizontal manipulations were even more extreme
since interocular distance was varied twice as much as eye height. However, it is unlikely that these large horizontal changes account for the weak inversion effect observed. Leder, Candrian, Huber, and Bruce (2001) investigated the processing of various ranges of interocular distance. These were subtle (ranging from 0.11° to 0.342° of visual angle) compared to ours (each eye displaced by 0.45° of visual angle, yielding an interocular distance change of 0.90°). Since task performance was not influenced by the presence/absence of other features, the authors concluded that interocular distance is processed locally. Moreover, the IE was not found to vary systematically as a function of interocular distance differences. Although Leder et al. (2001) did not compare the processing of horizontal to that of vertical relations as we did here, their evidence shows that horizontal relations are processed locally even when they are much more subtle than in our investigation. Why would vertical relations play a more significant role than horizontal ones in face individuation? The ISF advantage for processing vertical relations in upright faces suggests that these are more diagnostic for discriminating individual faces than horizontal relations. As a matter of fact, Gosselin et al.'s (2007) measurements tend to indicate that most of the inter-individual variability in feature position occurs along the vertical axis. Another aspect in favor of the high diagnosticity of vertical information is that its extraction is possible under almost all viewpoints (front, 3/4, even profile). In contrast, the information stemming from horizontal relations and features decays much more rapidly as a face moves from front to profile view and is thus mostly available in frontal views. Plausibly, the visual system could be tuned to vertical relations because this information is most reliably available under the varying conditions of everyday life.
Horizontal relations are informative about crucial aspects other than face identity, such as face symmetry, which has evolutionary significance (for example in mate choice; Rhodes et al., 2001). Rhodes and colleagues (2005) recently studied the influence of SF filtering and inversion on the detection of face symmetry. In agreement with the weak IE we observed for horizontal relations, inversion only weakly affected performance in face symmetry detection. In contrast to the present results, though, symmetry detection was not better in HSF than in other SF ranges. However, the HSF ranges explored by Rhodes et al. (2005) were much higher than in the present study and consequently contained less energy. This may have attenuated a potential HSF advantage in Rhodes et al.'s (2005) study. In summary, we present evidence that horizontal and vertical face cues carry dissociable aspects of face information. Vertical relations are among the most significant cues used to individuate faces, as indicated by their greater reliance on the ISF range. This is clearly different from horizontal relations, which are best transmitted by HSF and are thus likely processed at a local level. We confirm that face inversion disrupts vertical relations to a much greater extent
Table 1. Mean accuracy (in percent correct) and mean correct response times (in ms) for all experimental conditions.
than features and horizontal relations. We also confirm previous evidence showing that the face IE is SF-independent, even when vertical relations are selectively targeted.

Acknowledgements

V.G. is supported by the Belgian Fund for Scientific Research (F.N.R.S.). The author is grateful to Meike Ramon and two anonymous reviewers for their helpful comments.

Appendix

Accuracy

Mean accuracy and correct RT are presented in Table 1 for each condition separately. The three-way ANOVA confirmed the observations made on d′, as it again revealed significant effects of Orientation (F(1, 17) = 84.44, p < .001), Stimulus (F(2, 34) = 30.97, p < .0001) and SF (F(2, 34) = 6.08, p < .006). As indicated by significant interactions between Stimulus and SF (F(4, 68) = 7.90, p < .0001) as well as between Stimulus and Orientation (F(2, 34) = 22.03, p < .001), differences across stimulus types were mainly due to their differential vulnerability to inversion and SF filtering. At upright orientation, accuracy in processing featural, vertical and horizontal variations was differentially influenced by SF input content. In the featural condition, upright accuracy was better in ISF than in LSF (p < .01) and HSF (p < .02) and was stable across the latter two SF ranges (p = .8). In the upright horizontal condition, accuracy was higher in HSF than in LSF (p < .004). In the vertical condition, it was marginally better in ISF than in HSF (p = .06). The direct comparison of upright stimulus conditions across SF revealed significantly higher accuracy in the horizontal than in the vertical condition only in HSF (p < .001; ISF: p = .69; LSF: p = .89). Accuracy in the horizontal condition also surpassed featural performance, in all SF ranges (ps < .032). Inversion significantly affected accuracy in each stimulus condition (main effect of Orientation in the featural condition: F(1, 17) = 5.16, p < .036; horizontal condition: F(1, 17) = 9.44, p < .007; vertical condition: F(1, 17) = 60.96, p < .0001).
However, as indicated by the significant Stimulus × Orientation interaction, IE magnitude was significantly larger in the vertical than in the other stimulus conditions, irrespective of SF (ps < .002).

Correct RT

The three-way ANOVA on correct RT revealed significant main effects only. Orientation significantly affected correct RT (F(1, 17) = 15.21, p < .001), as subjects were slower to match inverted than upright faces. The main effect of Stimulus was also significant (F(2, 34) = 9.29, p < .001). The processing of horizontal variations was faster than that of both vertical and featural variations (ps < .003), whereas these latter conditions led to similar RT (p = .95).
SF also significantly affected processing speed, irrespective of Stimulus and Orientation conditions (F(2, 34) = 9.85, p < .0001). LSF input slightly but consistently delayed matching performance with respect to ISF and HSF (ps < .003), whereas ISF and HSF conditions were resolved at similar speed (p = .91). In contrast to accuracy and d′, there were no significant interactions for RT (ps > .09). However, to rule out the possibility that the differential effects of inversion on vertical and horizontal performance in both accuracy and sensitivity were due to ceiling performance in the horizontal condition, we explored whether RT also showed a trend for larger inversion effects for vertical relations. The ANOVA computed on IE RT magnitude did not yield any significant result (ps > .16). Still, we found robust inversion effects in each SF range for the vertical condition (LSF: p < .02; ISF: p < .0002; HSF: p = .051), whereas inversion slowed the processing of horizontal variations only in ISF (p < .05). This indirectly confirms the d′ and accuracy evidence that inversion disrupts the processing of vertical relations to a larger extent than horizontal relations.

References

Bartlett, J. C., & Searcy, J. (1993). Inversion and configuration of faces. Cognitive Psychology, 25, 281–316.
Barton, J. J., Keenan, J. P., & Bass, T. (2001). Discrimination of spatial relations and features in faces: Effects of inversion and viewing duration. British Journal of Psychology, 92, 527–549.
Biederman, I., & Kalocsai, P. (1997). Neurocomputational bases of object and face recognition. Philosophical Transactions of the Royal Society of London. Biological Sciences, 352(1358), 1203–1219.
Boutet, I., Collin, C., & Faubert, J. (2003). Configural face encoding and spatial frequency information. Perception and Psychophysics, 65(7), 1078–1093.
Collin, C. A., Liu, C. H., Troje, N. F., McMullen, P. A., & Chaudhuri, A. (2004).
Face recognition is affected by similarity in spatial frequency range to a greater degree than within-category object recognition. Journal of Experimental Psychology: Human Perception and Performance, 30(5), 975–987. De Valois, R. L., & De Valois, K. K. (1988). Spatial vision. New York: Oxford University Press. Farah, M. J., Wilson, K. D., Drain, M., & Tanaka, J. N. (1998). What is ‘‘special” about face perception? Psychological Review, 105(3), 482–498. Farkas, L. G. (1994). Anthropometry of the head and face. New York: Raven Press. Freire, A., Lee, K., & Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29(2), 159–170. Goffaux, V., Hault, B., Michel, C., Vuong, Q. C., & Rossion, B. (2005). The respective role of low and high spatial frequencies in supporting configural and featural processing of faces. Perception, 34(1), 77–86. Goffaux, V., & Rossion, B. (2006). Faces are ‘‘spatial” – holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception and Performance, 32(4), 1023–1039. Goffaux, V., & Rossion, B. (2007). Face inversion disproportionately impairs the perception of vertical but not horizontal relations between features. Journal of Experimental Psychology: Human Perception Performance, 33, 995–1002. Gosselin, F., Fortin, I., Michel, C., Schyns, P. G., & Rossion, B. (2007). On the distances between internal human facial features. In Paper presented at the Vision Science Society Meeting.
Leder, H., & Bruce, V. (2000). When inverted faces are recognized: The role of configural information in face recognition. The Quarterly Journal of Experimental Psychology Section A, 53(2), 513–536.
Leder, H., Candrian, G., Huber, O., & Bruce, V. (2001). Configural features in the context of upright and inverted faces. Perception, 30(1), 73–83.
Liu, C. H., Collin, C. A., Rainville, S. J., & Chaudhuri, A. (2000). The effects of spatial frequency overlap on face recognition. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 956–979.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Science, 6(6), 255–260.
McKone, E., Kanwisher, N., & Duchaine, B. C. (2007). Can generic expertise explain special processing for faces? Trends in Cognitive Science, 11(1), 8–15.
Näsänen, R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision Research, 39(23), 3824–3833.
Reed, C. L., Stone, V. E., Bozova, S., & Tanaka, J. (2003). The body-inversion effect. Psychological Science, 14(4), 302–308.
Rhodes, G., Peters, M., Lee, K., Morrone, M. C., & Burr, D. (2005). Higher-level mechanisms detect facial symmetry. Proceedings of the Royal Society of London, Series B, 272(1570), 1379–1384.
Rhodes, G., Zebrowitz, L. A., Clark, A., Kalick, S. M., Hightower, A., & McKay, R. (2001). Do facial averageness and symmetry signal health? Evolution and Human Behavior, 22(1), 31–46.
Riesenhuber, M., Jarudi, I., Gilad, S., & Sinha, P. (2004). Face processing in humans is compatible with a simple shape-based model of vision. Proceedings of the Royal Society of London, Series B, 271(Suppl. 6), S448–S450.
Schyns, P. G., Bonnar, L., & Gosselin, F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychological Science, 13(5), 402–409.
Sergent, J. (1986). Microgenesis of face perception. In H. D. Ellis, M. A. Jeeves, F. Newcombe, & A. M. Young (Eds.), Aspects of face processing (pp. 17–33). Dordrecht, The Netherlands: Martinus Nijhoff.
Shi, J., Samal, A., & Marx, D. (2006). How effective are landmarks and their geometry for face recognition? Computer Vision and Image Understanding, 102, 117–133.
Sinha, P., Balas, B. J., Ostrovsky, Y., & Russell, R. (2006). Face recognition by humans: 19 results all computer vision researchers should know about. Proceedings of the IEEE, 94(11), 1948–1962.
Sowden, P. T., & Schyns, P. G. (2006). Channel surfing in the visual brain. Trends in Cognitive Science, 10(12), 538–545.
Stanislaw, H., & Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments & Computers, 31(1), 137–149.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology Section A, 46(2), 225–245.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16(6), 747–759.
Yovel, G., & Kanwisher, N. (2004). Face perception: Domain specific, not process specific. Neuron, 44(5), 889–898.