Facilitation of face recognition through the retino-tectal pathway

Neuropsychologia 51 (2013) 2043–2049
Tamami Nakano a,b,*, Noriko Higashida a, Shigeru Kitazawa a,b

a Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka 565-0871, Japan
b Graduate School of Medicine, Osaka University, Osaka 565-0871, Japan

* Corresponding author at: Osaka University, Graduate School of Frontier Biosciences, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan. Tel.: +81 6 6879 4435; fax: +81 6 6879 4437. E-mail address: [email protected] (T. Nakano).

Article history: Received 10 March 2013; received in revised form 14 June 2013; accepted 16 June 2013; available online 25 June 2013

Abstract

Humans can shift their gaze faster to human faces than to non-face targets during tasks in which they are required to choose between face and non-face targets. However, it remains unclear whether a direct projection from the retina to the superior colliculus is specifically involved in this facilitated recognition of faces. To address this question, we presented participants with a pair of face and non-face pictures modulated in greyscale (luminance-defined stimuli) in one condition and in a blue–yellow scale (S-cone-isolating stimuli) in another. The information in the S-cone-isolating stimuli is conveyed through the retino-geniculate pathway rather than the retino-tectal pathway. For the luminance stimuli, the reaction time was shorter towards a face than towards a non-face target. This facilitatory effect in choosing a face disappeared with the S-cone stimuli. Moreover, fearful faces elicited a significantly larger facilitatory effect than neutral faces when the face (with or without emotion) and non-face stimuli were presented in greyscale, and this effect of emotional expressions also disappeared with the S-cone stimuli. In contrast to the S-cone stimuli, the face facilitatory effect was still observed with negated stimuli, which were prepared by reversing the polarity of the original colour pictures and looked as unusual as the S-cone stimuli but still contained luminance information. These results demonstrate that the face facilitatory effect requires the facial and emotional information defined by luminance, suggesting that the luminance information conveyed through the retino-tectal pathway is responsible for the faster recognition of human faces. © 2013 Elsevier Ltd. All rights reserved.

Keywords: Superior colliculus; Saccade; S cone; Face; Subcortical; Emotion

1. Introduction

Humans can discriminate and orient to human faces in only 100 ms, significantly faster than they can orient to non-face targets (other animals, vehicles, and various other objects encountered in everyday life) in tasks in which they are required to choose between face and non-face targets (Crouzet, Kirchner, & Thorpe, 2010; Girard & Koenig-Robert, 2011). However, the neural mechanisms that underlie this facilitated recognition of human faces remain unknown.

It is well known that facial stimuli evoke an event-related potential with a peak latency of approximately 170 ms (N170 in electro-encephalography, and M170 in magneto-encephalography) (Bentin, Allison, Puce, Perez, & McCarthy, 1996; Watanabe, Kakigi, Koyama, & Kirino, 1999). The source of the N170/M170 signal has been localised to the fusiform gyrus in humans (Deffke et al., 2007), which shows a selective increase of blood flow in response to facial stimuli (Kanwisher, McDermott, & Chun, 1997). The fusiform gyrus clearly plays a central role in face recognition, but the N170/M170 component occurs too late to be directly involved in rapid face recognition.

In addition to the N170/M170 component, facial stimuli elicit an early positive component with a peak latency of 100 ms (P100) or less (Braeutigam, Bailey, & Swithenby, 2001; Liu, Harris, & Kanwisher, 2002; Rossion & Caharel, 2011). This fast brain response might be involved in the fast recognition of human faces, because it was enhanced during a task in which participants were required to judge whether a visual stimulus was a face or an object (Liu et al., 2002).

To determine the origin of the fast component, we first considered the signals conveyed through the retino-geniculo-cortical pathway. Because fast recognition can be achieved in the absence of fine details of facial information (Crouzet & Thorpe, 2011; Girard & Koenig-Robert, 2011; Honey, Kirchner, & VanRullen, 2008), the fast and coarse magnocellular pathway is likely to be involved. However, the shortest visual response latency in V1 is 56 ms, and those in V3 and V4 are 70–80 ms in humans (Yoshor, Bosking, Ghose, & Maunsell, 2007). As it takes 20–30 ms to generate a saccade with electrical stimulation of the frontal eye fields (FEF) in monkeys (Bruce, Goldberg, Bushnell, & Stanton, 1985; Robinson & Fuchs, 1969), the retino-geniculo-cortical pathway seems to be too slow to play a role in the fast face-selective responses that occur within 100 ms. This raises the possibility that fast face recognition depends on the retino-tectal pathway, which conveys visual information to the cerebral cortex through the superior colliculus and the pulvinar.

Several studies have suggested that the subcortical pathway is involved in processing facial information (Johnson, 2005). Patients with V1 lesions were still able to discriminate the gender and expressions of human faces, although they were unaware of this ability (Morris, DeGelder, Weiskrantz, & Dolan, 2001). Neuroimaging studies have indicated that facial stimuli with emotional expressions evoked responses in these subcortical regions (Morris, Ohman, & Dolan, 1999; Vuilleumier, Armony, Driver, & Dolan, 2003). In addition, neurons in the pulvinar of the monkey responded to face-like patterns with a latency as short as 50 ms (Nguyen et al., 2013).

In the present study, we aimed to elucidate whether the retino-tectal pathway is involved in the fast process of face recognition. To answer this question, we presented participants with a pair of face and non-face pictures modulated in greyscale (luminance-defined stimuli) in one condition and modulated in a blue–yellow scale in another. The latter condition was designed so that the stimuli would exclusively activate the short-wavelength-sensitive cone (S cone) among the three cone types in the retina (S-cone-isolating stimuli; Sumner, Adamjee, & Mollon, 2002; Wandell et al., 1999). The luminance-defined stimuli are conveyed through both the retino-tectal and retino-geniculate pathways, whereas the S-cone-isolating stimuli are exclusively conveyed through the retino-geniculate pathway. In fact, S-cone signals do not reach the superior colliculus (Marrocco & Li, 1977; Schiller & Malpeli, 1977; White, Boehnke, Marino, Itti, & Munoz, 2009) but do reach both the ventral and dorsal visual pathways in the visual cortices (Chatterjee & Callaway, 2002; Seidemann, Poirson, Wandell, & Newsome, 1999; Wandell et al., 1999).

In Experiment 1, we examined whether the facilitatory effect on choosing a face that was observed for the luminance stimuli disappeared for the S-cone stimuli. The facilitatory effect was measured by comparing the reaction time of a saccade performed to choose a face with one performed to choose a butterfly. If the retino-tectal pathway is essential for the fast processing of facial information, the facilitatory effect observed for the luminance stimuli should disappear for the S-cone stimuli. In Experiment 2, we compared the facilitatory effect observed for fearful faces to that observed for neutral faces. Because saccadic latency is shorter for fearful faces than for neutral faces (Bannerman, Hibbard, Chalmers, & Sahraie, 2012), we speculated that the facilitatory effect would be larger for emotional faces than for neutral faces when the stimuli were modulated in greyscale, whereas the effect of emotional expressions would disappear for the S-cone stimuli.

Even if the face facilitatory effect disappears with the S-cone stimuli, it may still be argued that the disappearance has nothing to do with the retino-tectal pathway but is simply due to the unusual appearance of the S-cone stimuli, which is conveyed through the retino-geniculate pathway. To address this issue, we prepared "negated stimuli" (Russell, Sinha, Biederman, & Nederhouser, 2006) in Experiment 3 by reversing the polarity of the original colour pictures. It is worth noting that the face colours of the negated stimuli looked as unusual as the S-cone stimuli but still contained luminance information.
If the face facilitatory effect is still observed with the negated stimuli, we may exclude the possibility that natural appearance conveyed through the retino-geniculate pathway is essential for the face facilitatory effect, and suggest instead that the retino-tectal pathway is involved in the fast processing of facial information.

2. Material and methods

2.1. Participants

Fifteen volunteers participated in Experiment 1 (8 females; mean age 20.7 years, age range 20–23 years), thirteen volunteers participated in Experiment 2 (7 females; mean age 20.1 years, age range 20–22 years), and eleven volunteers participated in Experiment 3 (1 female; mean age 21.7 years, age range 20–23 years). All of the participants had normal or corrected-to-normal vision and gave written informed consent to participate in the experiments. The study was approved by the ethics committee of Osaka University.

2.2. Apparatus and general task procedures

Participants viewed visual stimuli presented on a gamma-corrected 17-inch CRT monitor (Trinitron Multiscan G200, Sony) with their heads resting on a chin rest and a forehead rest to maintain a viewing distance of 60 cm. The screen refresh rate was 100 Hz (VSG 2/2, Cambridge Research Systems), and the spatial resolution was 1024 × 768 pixels (32° × 24°). Eye movements were recorded using a camera-based eye tracker (EyeLink 1000, SR Research) with a temporal resolution of 1000 Hz.

Participants performed a saccadic choice task that followed a protocol similar to that of a previous study (Crouzet et al., 2010). Participants had to fixate on a cross in the centre (0.5° × 0.5°) for a randomly determined duration between 1.2 and 1.8 s, until the cross disappeared (Fig. 1b). After a gap period of 0.2 s, one face picture and one butterfly picture, each subtending 10° × 12° (width × height), were presented simultaneously, one on the left and the other on the right side of the screen, for 2 s. Each picture appeared on its respective side (right or left) at a distance of 8° from the centre. Participants were instructed to move their eyes as quickly and accurately as possible toward the stimulus that belonged to the target category (face or butterfly), which had been determined before the session began. Each session consisted of 100 choice trials. The apparatus was controlled using in-house programs written in Matlab (MathWorks, Natick, MA) with the Psychophysics Toolbox and the Eyelink Toolbox (Brainard, 1997; Cornelissen, Peters, & Palmer, 2002).
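For illustration only, the trial timing described above can be sketched as follows. This is not the authors' Matlab/Psychtoolbox code; `show`, `wait`, and `record_gaze` are hypothetical placeholders for display and eye-tracker routines.

```python
import random

# Timing of one trial of the saccadic choice task (values from the text above).
FIXATION_RANGE_S = (1.2, 1.8)   # fixation cross shown for a random duration
GAP_S = 0.2                     # blank gap between fixation offset and targets
STIMULUS_S = 2.0                # face and butterfly displayed for 2 s

def run_trial(face_img, butterfly_img, face_side, show, wait, record_gaze):
    """One gap-paradigm trial: fixation -> 0.2 s gap -> bilateral pictures.

    `show`, `wait`, and `record_gaze` are hypothetical callables standing in
    for display and eye-tracker routines; they are not Psychtoolbox functions.
    """
    show({"centre": "fixation_cross"})
    wait(random.uniform(*FIXATION_RANGE_S))
    show({})                                   # gap: blank screen for 200 ms
    wait(GAP_S)
    left, right = ((face_img, butterfly_img) if face_side == "left"
                   else (butterfly_img, face_img))
    show({"left": left, "right": right})       # pictures at 8 deg eccentricity
    return record_gaze(duration_s=STIMULUS_S)  # gaze samples for SRT analysis
```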

2.3. Stimuli

The visual stimuli consisted of 100 neutral and 100 fearful face pictures taken from the Karolinska Directed Emotional Faces set (Lundqvist, Flykt, & Öhman, 1998) and the NimStim set of facial expressions (Tottenham et al., 2009), as well as 100 butterfly pictures taken from a commercial photo album. We used butterfly pictures because both butterflies and human faces have a symmetrical structure. All of the images were converted to greyscale (0–255, mean = 127, s.d. = 60) and resized to 330 × 400 pixels (10° × 12°). These greyscale images were used as the luminance-defined stimuli (Fig. 1a, luminance).

To prepare the S-cone-isolating stimuli, the spectral emissions of the red (R), green (G), and blue (B) phosphors of the CRT monitor were measured using a spectral colorimeter (CS-2000, Konica Minolta). Then, nine dot products were calculated between the spectral absorptions of the L, M, and S cones (Smith & Pokorny, 1975) and the spectral emissions of the R, G, and B phosphors to yield a 3-by-3 matrix that transformed the R, G, and B levels linearly into the excitation levels of the L, M, and S cones. The inverse of this matrix, which transforms (L, M, S) to (R, G, B), was then used to determine the line (the S-cone-isolating line) along which the (R, G, B) levels were modulated to keep the excitation levels of the L and M cones constant while modulating the S-cone excitation level in isolation (see the Visual System Engineering Toolbox provided by Brian Wandell, https://github.com/wandell/vset). The screen background was set to grey (x = 0.29, y = 0.3 in the CIE colour space) at a luminance of 33.1 cd m−2, and the S-cone-isolating line was prepared around this point. The S-cone-isolating stimuli were prepared by mapping the value of each pixel (0–255) linearly onto the S-cone-isolating line over the range of S-cone excitation levels that the CRT monitor was capable of producing.

The negated visual images were created by inverting the polarity of the original colour pictures of faces and butterflies that were used in Experiment 1 (Fig. 3a).
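As a concrete illustration of this calibration step, the sketch below builds the 3-by-3 RGB-to-LMS matrix from dot products, inverts it, and maps grey levels onto an S-cone-isolating line around a grey background. It is a minimal numerical example, not the authors' code: the spectra, the background value, and the `scale` parameter are placeholders rather than the values measured in the experiment.

```python
import numpy as np

# Placeholder spectra standing in for the measured R/G/B phosphor emissions
# and the Smith & Pokorny L/M/S cone fundamentals, sampled at the same
# wavelengths (illustrative values only).
wavelengths = np.arange(400, 701, 10)                     # nm
phosphor_spectra = np.random.rand(3, wavelengths.size)    # rows: R, G, B
cone_fundamentals = np.random.rand(3, wavelengths.size)   # rows: L, M, S

# 3-by-3 matrix of dot products: cone excitations per unit phosphor drive.
rgb_to_lms = cone_fundamentals @ phosphor_spectra.T       # rows L,M,S; cols R,G,B
lms_to_rgb = np.linalg.inv(rgb_to_lms)

# The S-cone-isolating direction around a grey background: change S only,
# keeping L and M constant (the background here is a nominal mid-grey in
# normalised RGB drive, not the calibrated CIE grey used in the experiment).
rgb_background = np.array([0.5, 0.5, 0.5])
rgb_direction = lms_to_rgb @ np.array([0.0, 0.0, 1.0])

def s_cone_image(grey_image, scale):
    """Map 8-bit grey levels (0-255) linearly onto the S-cone-isolating line.

    `scale` sets the largest S-cone modulation the monitor can reproduce;
    returns an H x W x 3 image of RGB drive values in [0, 1].
    """
    modulation = (grey_image.astype(float) - 127.5) / 127.5   # -1 .. +1
    rgb = rgb_background + scale * modulation[..., None] * rgb_direction
    return np.clip(rgb, 0.0, 1.0)
```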

2.4. Experiments

In Experiment 1, each participant performed 4 sessions, one for each of the two-by-two conditions: two object categories (face and butterfly) by two stimulus types (luminance and S-cone stimuli). Therefore, the total number of trials was 400 (4 sessions of 100 trials each). The order of the four sessions was counterbalanced across the participants. In Experiment 2, each participant performed 4 sessions, two for each of the two stimulus types (luminance and S-cone stimuli). The face target category was used across all of the sessions. In this experiment, we presented fearful faces in half of the 100 trials in each session, in addition to the neutral faces used in Experiment 1. The order of the two facial expressions was randomly shuffled. In Experiment 3, each participant performed 2 sessions with the negated stimuli, one for each of the two object categories (face and butterfly). Therefore, the total number of trials was 200. The order of the two sessions was counterbalanced across the participants. In all experiments, each session was preceded by a training session of 10 trials.

Fig. 1. The face effect on the SRT in Experiment 1. (a) Examples of the images used in this study. (b) Protocol of the saccadic choice task. (c, d) Distributions of the SRT for the two different target categories (face and butterfly) for the luminance stimuli (c) and for the S-cone stimuli (d). The correct responses are shown as solid lines, incorrect as dashed lines. (e) The face effect on the SRTs for each stimulus type. The mean SRT to the butterfly is subtracted from the mean SRT to the face. The error bar represents the standard error of these differences for each participant.

2.5. Analysis

The saccade reaction time (SRT) was determined offline from the eye-tracking data using the criteria of a velocity threshold of 22°/s and an acceleration greater than 8000°/s².
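A minimal sketch of this detection criterion, assuming 1000 Hz gaze samples in degrees aligned to target onset (the function and variable names below are ours, not the authors' implementation):

```python
import numpy as np

def saccade_reaction_time(gaze_x, gaze_y, fs=1000.0,
                          vel_thresh=22.0, acc_thresh=8000.0):
    """Return the latency (ms) of the first sample at which eye velocity
    exceeds 22 deg/s and acceleration exceeds 8000 deg/s^2, or None.

    `gaze_x` and `gaze_y` are gaze positions in degrees, sampled at `fs` Hz
    and aligned so that index 0 corresponds to target onset.
    """
    dt = 1.0 / fs
    vx = np.gradient(gaze_x, dt)
    vy = np.gradient(gaze_y, dt)
    speed = np.hypot(vx, vy)                    # deg/s
    accel = np.abs(np.gradient(speed, dt))      # deg/s^2
    candidates = np.where((speed > vel_thresh) & (accel > acc_thresh))[0]
    if candidates.size == 0:
        return None
    return candidates[0] * dt * 1000.0          # samples -> milliseconds
```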

3. Results

3.1. Face facilitatory effects for the luminance-defined and S-cone-isolating stimuli (Experiment 1)

The correct response rate was 92% overall, and it ranged from 91% to 94% across the four conditions (91% for the face and 92% for the butterfly with the luminance stimuli; 92% for the face and 94% for the butterfly with the S-cone stimuli). A two-way analysis of variance revealed that neither of the two main effects (stimulus type and target category) nor their interaction was significant (F(1,14) = 1.6, p = 0.2; F(1,14) = 1.6, p = 0.2; F(1,14) = 0.7, p = 0.4, respectively). The error trials were excluded from the subsequent analyses of the SRT.

For the luminance stimuli, the mean SRT was significantly shorter toward the face (181 ± 20 ms, mean ± s.d.; red trace in Fig. 1c) than toward the butterfly (192 ± 18 ms; p = 0.0001, t-test; blue trace). However, for the S-cone stimuli, the distribution of the SRT toward the face (232 ± 23 ms, red trace in Fig. 1d) overlapped with that toward the butterfly (237 ± 26 ms; blue trace), and the mean SRTs were not significantly different from each other (p = 0.5, t-test). The standard deviation of the SRT did not significantly differ between the luminance stimuli (41 ms) and the S-cone stimuli (48 ms) according to an F-test (F(15,15) = 1.16, p = 0.6).


Fig. 2. The fear effect on the SRT in Experiment 2. (a) Examples of the images used in this experiment. (b) The fear effect on the SRTs in each stimulus type. The mean SRT to the neutral face is subtracted from the mean SRT to the fearful face. The error bar represents the standard error of these differences for each participant.

The mean SRTs of the incorrect trials toward the face (red dashed trace in Fig. 1c and d) were not significantly different from those toward the butterfly (blue dashed trace) for both the luminance stimuli (face: 154 ± 50 ms, butterfly: 162 ± 27 ms; p = 0.17, t-test) and the S-cone stimuli (face: 200 ± 44 ms, butterfly: 192 ± 30 ms; p = 0.15, t-test).

To compare the face facilitatory effect on the SRT between the S-cone and the luminance stimuli, the mean SRT to the face was subtracted from that to the butterfly for each participant in each stimulus condition (Fig. 1e). A significant facilitatory effect of choosing a face was observed for the luminance stimuli (10.8 ± 3.1 ms, mean ± s.e.; p = 0.004, t-test) but not for the S-cone stimuli (−1.3 ± 1.9 ms; p = 0.5). A paired t-test further confirmed that the facilitatory effect was significantly larger for the luminance stimuli than for the S-cone stimuli (p = 0.01).
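For concreteness, the per-participant face effect and the t-tests reported here can be computed along the following lines. This is a sketch with hypothetical data structures using standard SciPy routines, not the authors' analysis scripts.

```python
import numpy as np
from scipy import stats

def face_effect(srt_face, srt_butterfly):
    """Per-participant effect: mean SRT to the butterfly minus mean SRT to the face.

    `srt_face` and `srt_butterfly` map each participant ID to an array of
    correct-trial SRTs (ms) in one stimulus condition (hypothetical format).
    """
    return np.array([np.mean(srt_butterfly[p]) - np.mean(srt_face[p])
                     for p in sorted(srt_face)])

# Example usage (one-sample tests against zero, then a paired comparison):
# effect_lum = face_effect(face_lum, butterfly_lum)
# effect_scone = face_effect(face_scone, butterfly_scone)
# stats.ttest_1samp(effect_lum, 0.0)         # face effect for luminance stimuli
# stats.ttest_1samp(effect_scone, 0.0)       # face effect for S-cone stimuli
# stats.ttest_rel(effect_lum, effect_scone)  # paired comparison between conditions
```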

Fig. 3. The face effect on the SRT in Experiment 3. (a) Examples of the images used in this experiment. (b) Distributions of the SRT for the two different target categories: face and butterfly. The correct responses are shown as solid lines, incorrect as dashed lines. (c) The face effect on the SRTs. The mean SRT to the butterfly is subtracted from the mean SRT to the face. The error bar represents the standard error of these differences for each participant.

3.2. Effects of the fearful expression (Experiment 2)

Next, we examined whether fearful faces induce a larger facilitatory effect than neutral faces when choosing a face over a butterfly. The overall correct response rate was 96%, and it was almost identical (95–96%) irrespective of the facial expression (neutral vs. fearful) or the stimulus type (luminance vs. S-cone). With the luminance stimuli, the mean SRT to the fearful faces (188 ± 29 ms, mean ± s.d.) was significantly shorter than that to the neutral faces (191 ± 29 ms, p = 0.04, t-test). With the S-cone stimuli, however, the mean SRTs were not significantly different: 234 ± 25 ms toward the fearful faces and 233 ± 28 ms toward the neutral faces (t-test, p = 0.6).


When the effect of fearful expressions on the SRT was analysed by subtracting the mean SRT to the fearful face from that to the neutral face in each participant, a significant effect of fearful expressions on the SRT was observed with the luminance stimuli (t-test, p = 0.008), but not with the S-cone stimuli (t-test, p = 0.5; Fig. 2b). A paired t-test revealed that the effect of fearful expressions on the SRT was significantly larger with the luminance stimuli than with the S-cone stimuli (p = 0.02).

3.3. Effects of the negated faces (Experiment 3)

Next, we examined whether the face facilitatory effect occurs with the negated stimuli, which had unusual facial colours like the S-cone stimuli but still contained luminance information (Fig. 3a). The overall correct response rate was 92% for the face and 90% for the butterfly. The mean SRT to the faces (185 ± 29 ms, mean ± s.d.; red trace in Fig. 3b) was significantly shorter than that to the butterflies (201 ± 35 ms, p < 0.0001, t-test; blue trace). The mean SRTs of the incorrect trials toward the face (red dashed trace in Fig. 3b) were not significantly different from those toward the butterfly (blue dashed trace) for the negated stimuli (face: 163 ± 29 ms, butterfly: 162 ± 25 ms; p = 0.93, t-test). When the face effect on the SRT was analysed by subtracting the mean SRT to the face from that to the butterfly in each participant, a significant facilitatory effect of choosing a face was observed for the negated stimuli (16.0 ± 2 ms, mean ± s.e.; p < 0.0001, t-test; Fig. 3c). The size of the face facilitatory effect was not significantly different from that for the normal luminance stimuli in Experiment 1 (p = 0.26, two-sample t-test).

4. Discussion

In choosing between a face and a butterfly presented in greyscale (luminance-defined stimuli), participants made saccades with shorter latencies toward the face than toward the butterfly. The facilitatory effect, defined as the difference between the saccadic reaction time toward the face and that toward the butterfly, disappeared with the S-cone-isolating stimuli (Experiment 1). Fearful faces elicited a significantly larger facilitatory effect than neutral faces when the stimuli were presented in greyscale, but the facilitatory effect of the fearful faces disappeared with the S-cone-isolating stimuli (Experiment 2). Moreover, the facilitatory effect survived with the negated faces, whose facial colours appeared as unnatural as the S-cone-isolating stimuli (Experiment 3). Taken together, our results clearly show that the facial and emotional information defined by luminance is essential for the increased speed with which human faces are recognized.

It is well known that higher-order spatial recognition is poor when information is defined by isoluminant colour contrast (Cavanagh, MacLeod, & Anstis, 1987; Gregory, 1977; Morales & Pashler, 1999). Regarding face perception in particular, recent work has reported that isoluminant colour information supports only poor face identification (Pearce & Arnold, 2012). The present study therefore provides the novel insight that luminance information is critical not only for slow, fine facial processing but also for fast, coarse facial processing.

4.1. Fast processing of facial information through the retino-tectal pathway

The first candidate neural pathway for the face facilitatory effect, which depended on luminance information, is the magnocellular geniculate pathway. This pathway is generally accepted to play a central role in fast, coarse visual processing, in contrast to the parvocellular geniculate pathway, which is thought to subserve
slow, fine visual processing (Goodale & Milner, 1992). Thus, one might argue that the lack of the face facilitatory effect with the S-cone stimuli resulted from a lack of activation of the magnocellular visual pathway. Indeed, it was once believed that the magnocellular layers of the lateral geniculate nucleus (LGN) received luminance-based information, but not S-cone-based information (Martin, 1998). However, neurons in area MT, a major target of the magnocellular visual pathway, have been shown to be responsive to S-cone-isolating stimuli (Seidemann et al., 1999; Wandell et al., 1999). In addition, it is now clear that neurons in the magnocellular layers of the LGN in macaque monkeys respond to S-cone-isolating stimuli with latencies as short as those elicited by L- and M-cone stimuli (Chatterjee & Callaway, 2002). These findings show that the magnocellular visual pathway could be activated by the S-cone stimuli. Thus, the lack of the face facilitatory effect with the S-cone stimuli cannot be attributed to a lack of activation in the magnocellular visual pathway.

However, one might still argue that the lack of the face facilitatory effect with the S-cone stimuli has nothing to do with the retino-tectal pathway but, rather, was simply due to information about the unnatural appearance of the faces being conveyed through the retino-geniculate pathway. Indeed, the human visual system is highly sensitive to the colours of faces. For example, face identification becomes significantly poorer when negative images of colour pictures are used (Kemp, Pike, White, & Musselman, 1996; Russell et al., 2006). Accordingly, activation of the fusiform gyrus is weaker in response to a negated face than to a positive face (George et al., 1999). However, we demonstrated in Experiment 3 that the negated faces elicited a clear facilitatory effect that was as large as that evoked by the positive face pictures. These results exclude the possibility that information about natural appearance conveyed through the retino-geniculo-cortical pathway is essential for the face facilitatory effect; the effect should therefore be independent of the higher cortical pathway for face identification.

Finally, we compared the visual latencies of the retino-geniculate and retino-tectal pathways. The shortest visual latencies are 56 ms in V1 and 70–80 ms in V3 and V4 in humans (Yoshor et al., 2007), which reflect the fastest responses through the retino-geniculate pathway. Assuming an additional latency of 20–30 ms to generate a saccade via activation of FEF (Bruce et al., 1985; Robinson & Fuchs, 1969) or the lateral intraparietal cortex (LIP; Thier & Andersen, 1998), less than 10 ms remains to produce a saccade toward a face in just 100 ms. This time is too short to discriminate between the face and the butterfly and to send the target information to FEF or LIP. Thus, the visual cortical responses arising from the retino-geniculate pathway are too slow to play a role in this face facilitation. In contrast, a recent neurophysiological study in monkeys reported that neurons in the pulvinar show face-specific responses with latencies as short as 50 ms (Nguyen et al., 2013). Because the pulvinar projects directly to LIP (Hardy & Lynch, 1992) and because saccades can be generated with mean latencies of 30 ms after activation of LIP (Thier & Andersen, 1998), facial information processed by the retino-tectal pathway is indeed able to generate saccades toward a face within 100 ms. Thus, the present findings generally agree with our hypothesis that fast processing of facial information takes place in the retino-tectal visual pathway. However, we admit that definitive evidence for this hypothesis has yet to be provided; demonstrating that the face facilitatory effect remains after a focal and complete lesion of the retino-geniculo-cortical pathway would provide conclusive evidence.

4.2. Implications for sociability and development

The results of Experiment 2 revealed that fearful faces are rapidly processed through the subcortical pathway.
This fear-related information would be promptly conveyed to the amygdala through the subcortical pathway, which should facilitate the detection of potential dangers (LeDoux, 1996). Although the present study did not examine the effect of other facial expressions on saccadic reaction time, it has been reported that not only fearful faces but also happy faces elicit shorter saccade latencies than neutral faces (Bannerman et al., 2012). Moreover, patients with lesions of the visual cortex can unconsciously discriminate facial expressions (Morris et al., 2001). Assuming that facial expressions other than fear are also conveyed through the subcortical pathway to the amygdala, this phylogenetically old visual system may play an important role in the unconscious processing of social information in humans.

Humans spend a good deal of time viewing human faces in natural scenes (Nakano et al., 2010; Yarbus, 1967), and our gaze within a novel scene is initially drawn to human faces in particular (Birmingham, Bischof, & Kingstone, 2008). This preference for human faces cannot be explained by bottom-up saliency models as long as such models rest solely on physical image parameters (Itti & Koch, 2000). The rapid and unconscious face processing in the subcortical pathway may underlie this automatic orientation to human faces. Furthermore, this subcortical system may enable newborn infants, whose cerebral cortex is still immature, to orient to human faces by instinct. This innate orientation to human faces would serve to provide the visual input required for developing specialised cortical areas for face processing (Johnson, 2005).

Many previous studies have reported deficits in face recognition and atypical orientation to faces in autism, which is primarily characterised by social impairments (Klin, Jones, Schultz, Volkmar, & Cohen, 2002; Nakano et al., 2010). It is possible that the deficits of face processing in autism derive from a dysfunction of the subcortical pathway during an early period of development (Johnson, 2005), which would then lead to insufficient development of the cortical areas for face processing later in life. Whether and how this phylogenetically old subcortical system interacts with the higher cortical systems for face processing during infancy and in adulthood warrants further investigation.

Acknowledgements

We thank B. Wandell and H. Horiguchi for advice on creating the visual stimuli. This work was supported by the Grant-in-Aid for Scientific Research on Innovative Areas 23119719 from the Ministry of Education, Culture, Sports, Science and Technology, Japan, to T.N.

References

Bannerman, R. L., Hibbard, P. B., Chalmers, K., & Sahraie, A. (2012). Saccadic latency is modulated by emotional content of spatially filtered face stimuli. Emotion, 12, 1384–1392. Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565. Birmingham, E., Bischof, W. F., & Kingstone, A. (2008). Gaze selection in complex social scenes. Visual Cognition, 16, 341–355. Braeutigam, S., Bailey, A. J., & Swithenby, S. J. (2001). Task-dependent early latency (30–60 ms) visual processing of human faces and other objects. Neuroreport, 12, 1531–1536. Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436. Bruce, C. J., Goldberg, M. E., Bushnell, M. C., & Stanton, G. B. (1985). Primate frontal eye fields. II. Physiological and anatomical correlates of electrically evoked eye movements. Journal of Neurophysiology, 54, 714–734. Cavanagh, P., MacLeod, D. I., & Anstis, S. M. (1987). Equiluminance: spatial and temporal factors and the contribution of blue-sensitive cones. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 4, 1428–1438. Chatterjee, S., & Callaway, E. M. (2002). S cone contributions to the magnocellular visual pathway in macaque monkey. Neuron, 35, 1135–1146.

Cornelissen, F. W., Peters, E. M., & Palmer, J. (2002). The Eyelink Toolbox: eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers, 34, 613–617. Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: face detection in just 100 ms. Journal of Vision, 10(4):16, 1–17. Crouzet, S. M., & Thorpe, S. J. (2011). Low-level cues and ultra-fast face detection. Frontiers in Psychology, 2, 342. Deffke, I., Sander, T., Heidenreich, J., Sommer, W., Curio, G., Trahms, L., et al. (2007). MEG/EEG sources of the 170-ms response to faces are co-localized in the fusiform gyrus. Neuroimage, 35, 1495–1501. George, N., Dolan, R. J., Fink, G. R., Baylis, G. C., Russell, C., & Driver, J. (1999). Contrast polarity and face recognition in the human fusiform gyrus. Nature Neuroscience, 2, 574–580. Girard, P., & Koenig-Robert, R. (2011). Ultra-rapid categorization of Fourier-spectrum equalized natural images: macaques and humans perform similarly. PLoS One, 6, e16453. Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25. Gregory, R. L. (1977). Vision with isoluminant colour contrast: 1. A projection technique and observations. Perception, 6, 113–119. Hardy, S. G., & Lynch, J. C. (1992). The spatial distribution of pulvinar neurons that project to two subregions of the inferior parietal lobule in the macaque. Cerebral Cortex, 2, 217–230. Honey, C., Kirchner, H., & VanRullen, R. (2008). Faces in the cloud: Fourier power spectrum biases ultrarapid face detection. Journal of Vision, 8(12):9, 1–13. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506. Johnson, M. H. (2005). Subcortical face processing. Nature Reviews Neuroscience, 6, 766–774. Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311. Kemp, R., Pike, G., White, P., & Musselman, A. (1996). Perception and recognition of normal and negative faces: the role of shape from shading and pigmentation cues. Perception, 25, 37–52. Klin, A., Jones, W., Schultz, R., Volkmar, F., & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59, 809–816. LeDoux, J. E. (1996). The emotional brain. New York: Simon and Schuster. Liu, J., Harris, A., & Kanwisher, N. (2002). Stages of processing in face perception: an MEG study. Nature Neuroscience, 5, 910–916. Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces – KDEF. Stockholm: Department of Clinical Neuroscience, Karolinska Institutet. Marrocco, R. T., & Li, R. H. (1977). Monkey superior colliculus: properties of single cells and their afferent inputs. Journal of Neurophysiology, 40, 844–860. Martin, P. R. (1998). Colour processing in the primate retina: recent progress. Journal of Physiology, 513(Pt 3), 631–638. Morales, D., & Pashler, H. (1999). No role for colour in symmetry perception. Nature, 399, 115–116. Morris, J. S., DeGelder, B., Weiskrantz, L., & Dolan, R. J. (2001). Differential extrageniculostriate and amygdala responses to presentation of emotional faces in a cortically blind field. Brain, 124, 1241–1252. Morris, J. S., Ohman, A., & Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating "unseen" fear. Proceedings of the National Academy of Sciences of the United States of America, 96, 1680–1685. Nakano, T., Tanaka, K., Endo, Y., Yamane, Y., Yamamoto, T., Nakano, Y., et al. (2010). Atypical gaze patterns in children and adults with autism spectrum disorders dissociated from developmental changes in gaze behaviour. Proceedings of the Royal Society of London Series B, 277, 2935–2943. Nguyen, M. N., Hori, E., Matsumoto, J., Tran, A. H., Ono, T., & Nishijo, H. (2013). Neuronal responses to face-like stimuli in the monkey pulvinar. The European Journal of Neuroscience, 37, 35–51. Pearce, S., & Arnold, D. (2012). Facial coding at isoluminance: face recognition relies disproportionately on shape from shading. In: 12th Annual Meeting of the Vision Sciences Society. Robinson, D. A., & Fuchs, A. F. (1969). Eye movements evoked by stimulation of frontal eye fields. Journal of Neurophysiology, 32, 637–648. Rossion, B., & Caharel, S. (2011). ERP evidence for the speed of face categorization in the human brain: disentangling the contribution of low-level visual cues from face perception. Vision Research, 51, 1297–1311. Russell, R., Sinha, P., Biederman, I., & Nederhouser, M. (2006). Is pigmentation important for face recognition? Evidence from contrast negation. Perception, 35, 749–759. Schiller, P. H., & Malpeli, J. G. (1977). Properties and tectal projections of monkey retinal ganglion cells. Journal of Neurophysiology, 40, 428–445. Seidemann, E., Poirson, A. B., Wandell, B. A., & Newsome, W. T. (1999). Color signals in area MT of the macaque monkey. Neuron, 24, 911–917. Smith, V. C., & Pokorny, J. (1975). Spectral sensitivity of the foveal cone photopigments between 400 and 500 nm. Vision Research, 15, 161–171. Sumner, P., Adamjee, T., & Mollon, J. D. (2002). Signals invisible to the collicular and magnocellular pathways can capture visual attention. Current Biology, 12, 1312–1316. Thier, P., & Andersen, R. A. (1998). Electrical microstimulation distinguishes distinct saccade-related areas in the posterior parietal cortex. Journal of Neurophysiology, 80, 1713–1735.


Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., et al. (2009). The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Research, 168, 242–249. Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624–631. Wandell, B. A., Poirson, A. B., Newsome, W. T., Baseler, H. A., Boynton, G. M., Huk, A., et al. (1999). Color signals in human motion-selective cortex. Neuron, 24, 901–909. Watanabe, S., Kakigi, R., Koyama, S., & Kirino, E. (1999). Human face perception traced by magneto- and electro-encephalography. Brain Research. Cognitive Brain Research, 8, 125–142.


White, B. J., Boehnke, S. E., Marino, R. A., Itti, L., & Munoz, D. P. (2009). Color-related signals in the primate superior colliculus. Journal of Neuroscience, 29, 12159–12166. Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum Press. Yoshor, D., Bosking, W. H., Ghose, G. M., & Maunsell, J. H. (2007). Receptive fields in human visual cortex mapped with surface electrodes. Cerebral Cortex, 17, 2293–2302.