Consciousness and Cognition 20 (2011) 1502–1517
Sad people are more accurate at face recognition than happy people

Peter J. Hills a,b,*, Magda A. Werno b, Michael B. Lewis a

a School of Psychology, Cardiff University, Park Place, Cardiff CF24 0JF, United Kingdom
b Department of Psychology, Anglia Ruskin University, East Road, Cambridge CB1 1PT, United Kingdom
Article history: Received 5 October 2010. Available online 2 August 2011.
Keywords: Face recognition; Mood induction; Sad mood; Attentional biases
Abstract

Mood has varied effects on cognitive performance, including the accuracy of face recognition (Lundh & Ost, 1996). Three experiments are presented here that explored face recognition abilities in mood-induced participants. Experiment 1 demonstrated that happy-induced participants are less accurate and have a more conservative response bias than sad-induced participants in a face recognition task. Using a remember/know/guess procedure, Experiment 2 showed that sad-induced participants had more conscious recollections of faces than happy-induced participants. Additionally, sad-induced participants could recognise all faces accurately, whereas happy- and neutral-induced participants recognised happy faces more accurately than sad faces. In Experiment 3, these effects were not observed when participants intentionally learnt the faces, rather than incidentally learnt the faces. It is suggested that happy-induced participants do not process faces as elaborately as sad-induced participants.

© 2011 Elsevier Inc. All rights reserved.
1. Introduction

Negative mood states are detrimental to performance on a wide range of cognitive tasks (e.g., Burt, Niederehe, & Zembar, 1995; den Hartog, Derix, van Bemmel, Kremer, & Jolles, 2003) by limiting cognitive resources (Sedikides, 1994), causing deficits in tasks including transitive reasoning (Sedek & von Hecker, 2004), abstract thinking (Hirt, Devers, & McCrea, 2008), memory (Hasher & Zacks, 1979), and face perception (Pavuluri, O'Conner, Harral, & Sweeney, 2007). In contrast, positive mood states are often beneficial to cognitive tasks (e.g., Derryberry & Tucker, 1994). However, the studies showing that negative mood limits cognitive resources employed cognitively demanding tasks. On simpler tasks, happy and sad people perform equally well (Austin, Mitchell, & Goodwin, 2001; Hartlage, Alloy, Vazquez, & Dykman, 1993; Yonelinas & Jacoby, 1996; Zakzanis, Leach, & Kaplan, 1998). Potentially this is because sad participants use more elaborate and deep processing (cf. Bodenhausen, Kramer, & Susser, 1994; Schwarz & Clore, 1996).1 Some researchers propose that mood affects an attention regulatory system (e.g., Ellis & Ashbrook, 1988; Gotlib, Roberts, & Gilboa-Schechtman, 1996; Hertel, 1994), causing these differences in cognitive abilities. Although there is extensive, if inconclusive, evidence that mood affects cognitive processing, there is limited evidence that mood affects more specialist cognitive processing, such as face perception. Here, we shall summarise the research that suggests mood-congruency or mood-relevant biases and processing differences between happy and sad people.
* Corresponding author at: Department of Psychology, Anglia Ruskin University, East Road, Cambridge CB1 1PT, United Kingdom. E-mail address: [email protected] (P.J. Hills).
1 Persuasion research has shown that sad people are less persuaded by weak arguments than happy people (Bless, Mackie, & Schwarz, 1992; Mackie & Worth, 1991) because they process the information in a systematic and detailed manner (Schwarz & Clore, 1996).
1053-8100/$ - see front matter © 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.concog.2011.07.002
1.1. Mood-congruency

Attentional resources are allocated differently to emotionally salient stimuli, including faces (Phillips, 2003), and recognition is facilitated when there is a convergence between mood and stimulus valence (e.g., Fox, Russo, Bowles, & Dutton, 2001). This is an example of a mood-congruent memory bias (Bower, 1981; Eich & Macauley, 2000; Forgas, 1995). Mood-congruent memory biases also exist in certain clinical populations.2 When tested using a probe-detection task, Mogg and Bradley (1998, 2002) found that socially anxious individuals show a vigilance effect for threatening facial expressions (e.g., an angry face). That is, they detect an angry face more quickly than a happy face in a series of neutral faces, due to attentional orientation toward threatening faces (Mogg, Philippot, & Bradley, 2004; see also Lundh et al., 1996). This recognition accuracy is often the result of (or linked to) a response bias, whereby socially phobic patients show a more liberal response bias (that is, they are more likely to produce false hits). Whereas control participants show enhanced memory for happy expressions relative to sad expressions (Gilboa-Schechtman, Erhard-Weiss, & Jeczemien, 2002; Kottoor, 1989; Leigland, Schulz, & Janowsky, 2004; Ridout, Astell, Reid, Glen, & O'Carroll, 2003; but see Johansson, Mecklinger, and Treese (2004) and Sergerie, Lepage, and Armony (2005) for different results), socially phobic participants do not show this benefit (D'Argembeau et al., 2003). Attentional biases are also noted in clinically depressed patients. Ridout, Astell, Reid, Glen, and O'Carroll (2003) reported that patients with major depression recognised more sad faces than happy faces. This may, in part, be because depressed people process happy and sad facial expressions in the same sustained way, as measured by event-related potentials, whereas non-depressed people do not elaborately encode negative stimuli (Deveney & Deldin, 2004).
In a similar recognition paradigm, Jermann, van der Linden, and D'Argembeau (2008) found that depressed mood did not affect recognition of facial identity, but did affect recollection of facial expressions. However, Jermann et al.'s study confounded learning type (incidental and intentional) with recognition type (identity and expression, respectively). Additionally, Joormann and Gotlib (2007) found that both currently and formerly depressed individuals show attentional biases towards sad facial expressions, whereas control non-depressed individuals had a tendency to orient towards happy faces. The effect facial expressions have on subsequent recollection is greater if conscious attention is not directed to the expression (D'Argembeau and Van der Linden, 2007). When the emotional valence of stimuli matches the observer's mood, recall for those stimuli is significantly higher than when there is no such convergence (Kanayama, Sato, & Ohira, 2008). Therefore, individuals in a negative mood should recognise more negative stimuli compared to positive or neutral stimuli (e.g., Fox et al., 2001), whereas those in a positive mood would tend to be more accurate at recognising positive targets (e.g., Koster, De Raedt, Goeleven, Frank, & Crombez, 2005). Given that mood may affect recognition accuracy, it is also worth asking whether mood may affect other aspects of the recognition response. A response bias has been found in which emotionally valenced stimuli are associated with an increased tendency to be classified as previously seen, even if they were not presented before (Windmann & Kutas, 2001). That is, in terms borrowed from Signal Detection Theory (SDT, e.g., Swets, 1961), participants have a more liberal response criterion for emotionally valenced stimuli. This is because emotional stimuli induce an attentional bias, which in turn leads to a higher rate of false alarms in recognition paradigms (Piguet, Connally, Krendl, Huot, & Corkin, 2008).
Emotional valence of a stimulus has been found to influence recognition states of awareness differentially (Ochsner, 2000).

1.2. Processing differences

Although the evidence for mood-congruent memory bias is extensive, many of the priming effects mood has on cognition (e.g., Bower, 1981) are less obvious in participants in a negative mood (Fiedler, Asbeck, & Nickel, 1991). Indeed, Williams, Watts, MacLeod, and Matthews (1997) have suggested that mood-congruent biases occur only in certain circumstances, and Huber, Beckmann, and Herrmann (2004) found that mood influences recall of information regardless of its emotional valence. In free recall experiments with word lists, sad or depressed participants tend to recall fewer words than happy or control participants (e.g., Hartlage et al., 1993). There are, however, no effects of sad mood on indirect tests of memory (Roediger & McDermott, 1992). Mood does not necessarily affect recognition tests of memory either (Bower, Sahgal, & Routh, 1983), since physiological states do not tend to affect memory tests in which a cue is present (cf. Eich, 1980). However, other researchers have demonstrated that depression or sad-induced mood does cause recognition memory deficits for word and pictorial stimuli (e.g., Channon, Baker, & Robertson, 1993; Hertel & Hardin, 1990; Hertel & Milan, 1994; Ramponi, Murphy, Calder, & Barnard, 2010; Watts, Morris, & MacLeod, 1987). Thus, these data indicate that sad participants should perform worse at recognition tests than happy or neutral participants. Individuals who are sad or depressed have been found to show enhanced memory for details of a perceptual experience, at the expense of perceiving the overall picture; happy people tend to focus on the 'gist', rather than on details of a scene (Gasper & Clore, 2002; Huber et al., 2004).
Erber and Erber (1994) believe that this may be an adaptive response, in that sad people are trying to correct their mood by attempting to be more accurate (see also Clark & Isen, 1982).
2 Obviously, the cause of mood-congruency may differ across the disorders described; however, they highlight the importance of the interaction between the observer's mood and the expression of the face.
It has been suggested that individuals in a negative mood show elaborate processing and higher overall recognition of all types of stimuli, compared to individuals in a positive mood (e.g., Deldin, Deveney, Kim, Casas, & Best, 2001; Derryberry & Tucker, 1994; Deveney & Deldin, 2004; Schwarz & Clore, 1996). Thus, sad people should show greater recognition accuracy than happy people. It has been suggested that previously encountered stimuli can bring to mind a wide range of memories, including the context in which the stimulus was first presented, or some other qualitative information (i.e., the feeling of "remembering", Yonelinas, 2001), or a stimulus can simply seem familiar (the feeling of "knowing"). In both cases the item is recognised as having been seen before, yet these states of awareness reflect different memory systems (Yonelinas, 2001). Both the individual's conscious experience and the processes that underlie recollection-based and familiarity-based responses are functionally distinct for these two forms of retrieval (e.g., Jacoby, 1991; Tulving, 1985; Yonelinas, 1994). Only information that is above a particular threshold can be accurately retrieved and remembered (Rotello, Macmillan, & Van Tassel, 2000; Yonelinas, 1997, 1999). Contextual cues (including mood, Ochsner, 2000) can lower the threshold for recollection. In contrast, familiarity is thought to reflect the strength of memory content excluding context, described as the feeling of "knowing." Items that seem most familiar are classified as having been seen before (Ratcliff, Van Zandt, & McKoon, 1995; see also Yonelinas, 1994, 1999; Yonelinas, Kroll, Dobbins, Lazzara, & Knight, 1998). Emotional arousal is associated with an increase in the subjective feeling of remembering due to increased contextual information (e.g., Bower, 1981). This leads to more remember responses for emotional stimuli, relative to know responses (Sharot, Delgado, & Phelps, 2004).
Sharot, Davidson, Carson, and Phelps (2008) found that remembering is associated with encoding the salient features of a stimulus, whereas familiarity leads to processing the stimulus as a whole. Focusing on the salient features of a stimulus parallels the systematic coding employed by sad people (Gasper & Clore, 2002), and processing the stimulus as a whole parallels the fact that happy participants tend to process the overall 'gist' (Gasper & Clore, 2002; Huber et al., 2004). This suggests qualitative differences in the response types for happy and sad participants, in which sad participants will give more remember responses and happy participants will give more know responses. Indeed, Jermann, Van der Linden, Laurencon, and Schmitt (2009) have reported that depressed patients do tend to give more remember responses to mood-congruent than to non-congruent words. In the composite-face task (in which two faces are put together to create a new face, Young, Hellawell, & Hay, 1987), sad-induced participants can identify the individual faces more accurately than happy-induced participants (Curby, Johnson, & Tyson, 2009, 2011). The composite-face task is a hallmark of holistic processing (Gauthier & Bukach, 2007; Richler, Tanaka, Brown, & Gauthier, 2008), and identification of the faces that make up the aligned composite is indicative of reduced use of holistic processing (Hole, 1994). Holistic processing is the expert processing type typically employed during face recognition (Farah, Wilson, Drain, & Tanaka, 1998). Thus, these studies indicate that sad mood leads to decreased use of holistic processing and therefore should lead to lower face recognition accuracy.

1.3. The present work

This detailed introduction presents several lines of evidence for how mood may affect face recognition, both overall and for faces displaying emotional expressions. In terms of accuracy, sad people may employ more elaborate encoding for all faces and show greater overall recognition accuracy.
Alternatively, sad mood may cause a reduction in the use of holistic processing and thus reduce face recognition accuracy. In addition, a mood-congruent memory bias may occur, leading to greater recognition accuracy of faces displaying sad expressions for sad participants. In terms of false hits and response bias, faces displaying an expression may produce more false hits and thus a liberal response bias. There may also be a mood-congruency effect in false hits and response bias, due to the increased valence of such stimuli being wrongly attributed to familiarity. Finally, there may be a mood-congruent effect in sad participants' conscious recollections, or sad participants may give more remember than familiar-only responses overall due to more elaborate processing of all stimuli. These predictions were tested in three experiments.
2. Experiment 1

There has been limited research exploring the basic and overall level of face recognition abilities in non-clinical samples of participants in different moods (but see Jermann et al. (2008)). Experiment 1 aimed to establish whether happy, sad, or neutral participants are better at face recognition, employing a standard old/new recognition paradigm. Participants had a happy or sad mood induced using music and an autobiographical memory task (Hesse & Spies, 1994). Two control groups were implemented: one had a neutral mood induction (Hesse & Spies, 1994) and one had no mood induction. Recognition performance was measured using the SDT (Swets, 1961) measure of stimulus discriminability, d′. Response bias was also measured, in terms of the SDT measure of response criterion, C. The work summarised above suggests that one of three possible outcomes can be expected: there should be no overall differences in abilities to recognise faces, since an old/new recognition task is a simple paradigm; happy-induced participants will be more accurate than sad-induced participants, since depression is associated with attention overload; or sad-induced participants will be more accurate than happy-induced participants, since happy participants will avoid cognitive overload by not spending too many resources on the task.
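As a concrete illustration of these two SDT measures (a sketch using the standard formulas from Macmillan and Creelman (2005), not the authors' analysis code; the function name is our own), d′ and C can be computed from a participant's hit and false-alarm rates:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d-prime, C) from hit and false-alarm rates.

    d' = z(H) - z(F) measures discriminability;
    C  = -(z(H) + z(F)) / 2 measures response bias
    (positive C = conservative, negative C = liberal).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# A hypothetical participant with 85% hits and 15% false alarms:
d_prime, c = sdt_measures(0.85, 0.15)  # d' ~ 2.07, C = 0 (no bias)
```

In practice, hit or false-alarm rates of exactly 0 or 1 must be adjusted before the z-transform, since the inverse normal CDF is undefined at those values.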
2.1. Method

2.1.1. Participants
Eighty-eight psychology undergraduates from Cardiff University participated in this experiment: 70 were female, 17 male, and 1 was transgender. Modal age of the participants was 19 (range 18–55). Two participants were Asian, two were Black, and all others were White.3 All participants self-reported that they had normal or corrected vision. Participants received course credit as payment. Participants were randomly allocated to one of four experimental conditions with the prerequisite that each experimental group had an equal number of participants in it.

2.1.2. Materials
Sixty-four faces from the NimStim stimulus set (Tottenham, Borscheid, Ellertsen, Markus, & Nelson, 2002) and from those collected by Paul Hutchings at Cardiff University were used in this experiment. Sergent (1986) recommends the use of multiple stimulus sets in face recognition research to increase experimental validity.4 The faces contained in these stimulus sets had no extraneous features (such as glasses, beards, or jewellery) and were of White males and females aged between 18 and 28 displaying a neutral expression. Clothing and background were masked, such that all faces had the same white background. Two images were used of each face (in which the faces were in a slightly different pose, i.e., with their mouth open or mouth closed), one presented during learning and a different one during test, to prevent picture recognition as recommended by Bruce (1982). This was counterbalanced across participants. All faces were presented with the dimensions 140 mm × 130 mm at a resolution of 72 dpi. All the face stimuli were presented on a high-resolution colour monitor using DirectRT™ Research Software (Empirisoft™) from an RM PC. The research was conducted in a well-lit research laboratory with a comfortable heating level. Participants made their responses to the face stimuli on a standard computer keyboard. The Visual Analogue Scale (VAS: Aiken, 1974) was used to measure mood.
It is a simple tool for measuring mood, whereby participants are presented with a 100 mm line with the anchor points "extremely positive mood (happy)" at one end and "extremely negative mood (unhappy)" at the other end. On one scale, participants mark on the line the point that best reflects their mood on average and, on a separate VAS, participants mark the point that best reflects their mood at that particular moment. Mood is measured in millimetres along the line. This is a reliable measure of mood, and in particular depression (Ahles, Ruckdeschel, & Blanchard, 1984). The difference between the two VAS scores (VAScurrent − VASaverage) was used as a manipulation check, whereby a greater difference indicated a more effective mood manipulation. These were embedded within an irrelevant distractor questionnaire. To induce mood, music was played through headphone speakers into the room. The music was playing from before the participants entered the laboratory, so that the participants were unaware that it was part of the experiment, thus preventing demand characteristics. The music was Mozart's Requiem for the sad condition, Mozart's Jubilate and Exultate for the happy condition (chosen from examples given in Hesse & Spies, 1994), and The Hunt for Red October soundtrack for the neutral condition. In addition, an autobiographical memory task was used to induce mood. This consisted of asking participants to write down the happiest or saddest event of their life, or their journey from home to university, depending on the experimental condition (happy induction, sad induction, and neutral induction, respectively). Again, this was taken from Hesse and Spies (1994).

2.1.3. Design
The independent variable (IV) was the mood of the participants. There were four mood induction conditions: sad, happy, and two control groups, one with an autobiographical task (neutral induction) and one with no such induction task. This IV was manipulated between subjects.
The dependent variables in this experiment were face recognition accuracy and response bias, measured using the SDT measures of d′ for accuracy and C for response bias. The presentation order of faces in the learning and test phases was randomised. The stimuli were counterbalanced such that each face appeared as a target and as a distractor an equal number of times for each experimental condition.

2.1.4. Procedure
This study had five phases: mood induction; learning; distractor; test; and debrief. In the first phase, participants were brought to the laboratory by the experimenter. The mood-inducing music was already playing in the laboratory when the participants entered. Participants were then given the autobiographical memory task as the secondary mood induction technique. The instructions given to the participants were: "Please write down [the happiest/saddest moment of your life/your journey to University today]. You have five minutes to complete this task. Please be as accurate and emotive as possible. Be assured that your information is completely anonymous." Participants had 5 min to write their piece, and this was timed using a standard stopwatch. Immediately following this task, the sheet the participants had been writing on was taken away and the participants were positioned in front of the
3 There was no difference in the subsequent analysis when the non-White participants were included or removed.
4 Recognition accuracy rates for the two sets of faces were equivalent and, thus, they are analysed as one set of faces.
Table 1
Mean VAScurrent − VASaverage (manipulation check), recognition accuracy (d′), response bias (C), reaction time at test (RT), reaction time during the learning task, and distinctiveness ratings in Experiment 1.

                           Sad induced   Happy induced   Neutral induced   No induction
Manipulation check         −11.75        8.33            −0.79             1.00
d′                         2.69          1.58            1.94              1.91
C                          −0.07         0.08            0.19              0.18
RT                         1249.07       1166.66         1292.50           1346.11
RT learning                2410.00       2460.22         2474.94           2578.38
Distinctiveness ratings    4.94          4.38            4.79              4.94
computer screen. Participants in the no induction condition did not have any music playing and did not have to undergo this task. Instead, they went straight to the next phase. Once the participants were positioned in front of the computer, they were given the instructions for the face learning phase. Participants were presented with 32 faces sequentially in a random order. Participants were instructed to rate each face according to the question "how easy would this face be to spot in a crowd?" (a measure of distinctiveness: Light, Kayra-Stuart, & Hollander, 1979) using a one to seven scale, where one is easy to spot in a crowd (distinctive face) and seven is difficult to spot in a crowd (typical face). The face was on screen until the participant made their response using the numerical keypad on the computer keyboard. Between each face, a random noise mask was presented lasting 150 ms. Immediately following this task, participants were given a questionnaire booklet containing the VAScurrent, two irrelevant questionnaires measuring personality, some demographic questions (age, gender, place of birth, and place of residence), and the VASaverage. This took 5 min to complete. The fourth phase of the experiment was a standard old/new recognition experiment, in which participants were presented with all 64 faces (32 targets and 32 distractors) sequentially and were instructed to state for each face whether they had seen it before by pressing the appropriate key on the keyboard. The keys were z for seen before and m for not seen before. Each trial was response terminated. Faces were presented in a random order. Between each presentation, a mask of random noise was presented lasting 150 ms. Finally, participants were debriefed and offered the positive mood induction in accordance with ethical guidelines.

2.2. Results

The old/new responses were converted into measures of accuracy and response bias. These shall be presented separately, following the manipulation check.

2.2.1. Manipulation check
To ensure that the mood induction was effective, a manipulation check was conducted. This involved taking each participant's score on the VASaverage and subtracting it from their VAScurrent. A positive number indicates that the mood induction made the participants happier, whereas a negative number indicates that the mood induction made the participants sadder. The results of this manipulation check are presented in Table 1. A univariate between-subjects ANOVA was run on these data, revealing that the effect of the mood induction was significant, F(3, 84) = 9.55, MSE = 171.97, p < .001, η²p = .25. Crucially, the VAS difference score for sad-induced participants was significantly lower than for happy-induced participants (mean difference = 20.08, p < .001),5 neutral-induced participants (mean difference = 10.96, p = .029), and participants with no induction (mean difference = 12.75, p = .021). Happy-induced participants had a higher VAS difference score than neutral-induced participants, though not significantly (mean difference = 9.13, p = .11). No other differences were significant (p > .52). This indicates that the sad mood induction was reliable and the happy mood induction produced a trend toward making people happier.

2.2.2. Accuracy
The SDT measure of stimulus discriminability, d′, was calculated using the Macmillan and Creelman (2005) method. This combines correct hits with false alarms, converted into a normal distribution. This measure typically ranges between 0 and 4, where 0 equals recognition at chance levels and 4 approaches maximum recognition. The means are presented in Table 1.6 These show that sad participants had higher recognition accuracy than all other participants, and that happy participants were poorer at this face recognition task than all other participants. The data were subjected to a univariate between-subjects ANOVA.
This revealed a significant main effect of mood induction on recognition accuracy, F(3, 84) = 28.54, MSE = 0.18, p < .001, η²p = .51. Post hoc comparisons revealed that happy-induced participants were less accurate than neutral-induced participants (mean difference = 0.36, p = .029) and sad-induced participants (mean difference = 1.11, p < .001). Sad-induced participants were also more accurate than neutral-induced participants (mean difference = 0.75, p < .001) and participants not induced (mean difference = 0.78, p < .001). No other comparisons were significant.
5 All significance levels for simple effects are reported following a Bonferroni correction throughout this paper.
6 Incidentally, an analysis of percentage accuracy and hit rate produced an identical pattern of results to this d′ analysis.
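The Bonferroni correction applied to these simple effects amounts to scaling each p-value by the number of comparisons made; a minimal sketch (our own illustration, not the authors' analysis code):

```python
def bonferroni(p_value: float, n_comparisons: int) -> float:
    """Bonferroni-corrected p-value: multiply the raw p-value by the
    number of comparisons, capping the result at 1.0."""
    return min(1.0, p_value * n_comparisons)

# Six pairwise comparisons are possible among four mood groups:
corrected = bonferroni(0.005, 6)  # 0.03, still below the .05 threshold
```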
2.2.3. Response bias
A parallel analysis of response bias was conducted, and the means are presented in Table 1.7 The SDT measure of response bias, C, was calculated using the Macmillan and Creelman (2005) method. A higher response bias implies a more conservative response from the participants, indicating a greater tendency to respond new rather than old; zero represents no bias. A univariate between-subjects ANOVA revealed that the effect of mood induction was significant, F(3, 84) = 3.88, MSE = 0.09, p = .012, η²p = .12. Post hoc comparisons revealed that sad-induced participants had a more liberal response bias than happy-induced participants (mean difference = 0.15, p = .458, ns), neutral-induced participants (mean difference = 0.26, p = .016), and participants not induced (mean difference = 0.25, p = .057, ns). No other comparisons were significant.

2.2.4. Reaction time
A parallel analysis of reaction time was conducted, and the means are presented in Table 1. There were no significant differences across conditions, F(3, 84) = 0.25, MSE = 365453.77, p = .859, η²p = .03.

2.2.5. Responses during the learning phase
There is one caveat with the accuracy data presented thus far: the faces in the learning phase were presented until the participants made their response. Thus, it is possible that sad participants spent more time viewing faces in the learning phase. To check for this possibility, we analysed the RTs to make the distinctiveness ratings in a univariate between-subjects ANOVA. There were no significant differences across conditions, F(3, 84) = 0.73, MSE = 162170.67, p = .538, η²p = .01. Similarly, the distinctiveness ratings made in the learning phase were subjected to a parallel univariate ANOVA. There were no significant differences across conditions, F(3, 84) = 1.56, MSE = 1.04, p = .205, η²p = .05.

2.3. Discussion
The results from Experiment 1 are clear: sad participants are more accurate than happy participants whilst showing a more liberal response bias. Sad participants are more willing to state that they recognise someone when actually they have not seen them before. Interestingly, the emotionally valenced conditions produced a lower response bias than the emotionally neutral conditions. These results are unlikely to be due to sad participants paying more attention to faces, since the response time data for both learning and test showed no significant differences. Nevertheless, more covert processing could have been allocated to the faces by sad participants. These results indicate that face recognition tasks do not overload cognitive resources, given the evidence suggesting that sad people are more accurate in less cognitively demanding tasks (Bodenhausen et al., 1994). This allows sad participants to engage in more elaborate encoding of faces, thus being more accurate. This could act as a measure to repair one's mood by not making interpersonal recognition errors that could potentially be embarrassing. Additionally, happy participants were less accurate than neutral participants, suggesting that they were less willing to exert cognitive effort. Possibly this behaviour acts as a protective measure, avoiding the risk of negative consequences of concentrating on a potentially difficult task and failing at it. The results from the response bias measure indicate that although mood induction caused a smaller bias than the neutral conditions irrespective of emotion, this effect was greater for negative moods. Potentially, the mood induction caused a general increase in valence and arousal. This may have been incorrectly attributed to familiarity with a face and thus affected response bias.
Sad participants may have shown this effect more strongly because their mood encouraged more elaborate processing, which may have increased arousal and thus their tendency to give an "old" response. Alternatively, this may act as a further mechanism to correct one's mood by encouraging social interaction. Either way, it is usually found that happy people are more liberal and positive in their responses (Isen, 1987), whereas sad people are more structured and cautious in their thought processes (Hertel, 1994). The inconsistency may result from the recognition procedure tapping more basic cognitive processes, or from reactions to faces being distinct in some way.

Although we have described how sad and happy participants may differ based on our results, there is a caveat in the present experiment: the happy mood induction did not seem to be as effective as the sad mood induction. Thus, any differences obtained, and conclusions drawn from them, should be treated with caution. Additionally, how mood affects the recognition response specifically cannot be addressed in a standard old/new recognition procedure. To address these issues, a second experiment was carried out.

7 Incidentally, an analysis of false alarms produces a similar pattern of results to those for C presented here.

3. Experiment 2

Experiment 1 demonstrated that negative mood leads to higher face recognition accuracy and a more liberal response bias than neutral mood. Happy mood leads to lower accuracy but a more liberal response bias than neutral mood (though a more conservative bias than sad mood). These results may imply that mood affects conscious and unconscious recognition processes differentially. Valence alters bias, which is often assumed to be associated with an unconscious process. However, specific mood affects the more conscious recognition accuracy process differently depending on the class
of mood. These suppositions were tested in a second experiment using a remember/know/guess procedure (e.g., Hirshman, 1998).

The remember/know procedure involves participants following each recognition judgement with a judgement about the basis of that recognition. A remember response refers to recollection-based retrieval, whereas a know response indicates familiarity-based retrieval (e.g., Gardiner, Ramponi, & Richardson-Klavehn, 1998; Wais, Mickes, & Wixted, 2008). In order to allow for responses that fit neither the remember nor the know category (Dunn, 2004), as well as to increase the accuracy of know responses (Eldridge, Sarfatti, & Knowlton, 2002), a guess response category is often included (e.g., Gardiner, Ramponi, & Richardson-Klavehn, 2002; Gardiner & Richardson-Klavehn, 2000). It is this remember/know/guess procedure that we adopt here.

Previously, we have noted that greater emotional valence of a stimulus increases conscious recollections (Ochsner, 2000). Thus, we predict that faces displaying an emotional expression will attract more remember responses than know or guess responses. We have also indicated that participants' emotional arousal increases remember responses for emotional stimuli (Sharot et al., 2004); whether this is mood specific or a general response is unclear. Thus, we predict that participants in the mood-induction conditions will give more remember responses to faces displaying an expression. Finally, know responses are associated with processing a stimulus as a whole (Sharot et al., 2008), and happy participants process the 'gist' of visual scenes (Gasper & Clore, 2002; Huber et al., 2004). Thus, we also predict that happy participants will give significantly more know responses than sad participants. Experiment 2 was therefore conducted employing an old/new recognition paradigm in addition to a remember/know/guess procedure.
Participants were induced into a happy, sad, or neutral mood. Recognition accuracy, response bias, and response type were measured for faces displaying happy, sad, and neutral facial expressions.

3.1. Method

3.1.1. Participants
Sixty Anglia Ruskin University psychology undergraduate students (45 female, 15 male) participated in this experiment. All participants were White and self-reported normal or corrected vision. Participants received course credit for taking part in this study. Participants were randomly allocated to one of three experimental conditions, with the constraint that there were equal numbers of participants in each condition.

3.1.2. Materials
The same faces used in Experiment 1 were employed in Experiment 2, except that the faces displayed either a happy, sad, or neutral expression, counterbalanced across conditions. They were displayed at the same size and resolution as in Experiment 1, using E-Prime research software on an HP PC with a high-resolution LCD colour monitor. The demographic questionnaire and the two VASs from Experiment 1 were used in this study. Given that the happy mood induction was less effective than the sad mood induction in Experiment 1, an alternative piece of music, the theme from The A-Team, was chosen for the positive mood induction (see Hills & Lewis, 2011, for a validation). All other materials were the same as those used in Experiment 1.

3.1.3. Design
A 3 × 3 mixed factorial design was used. Induced mood was manipulated between-subjects. Given that Experiment 1 found no significant differences between no mood induction and neutral mood induction, the no-mood-induction condition was not run in this experiment. Participants were randomly assigned to one of the three mood-induction conditions, with the prerequisite that each experimental group consisted of an equal number of participants. The second IV was the type of facial expression (happy, sad, and neutral).
This was manipulated within-subjects. Facial expression was counterbalanced across subjects such that each face appeared in each expression approximately an equal number of times, and expression was matched at learning and at test. Participants' face recognition performance was measured using the SDT measures d′ and C, together with remember/know/guess judgements.

3.1.4. Procedure
The procedure for Experiment 2 was similar to that of Experiment 1. Five phases were employed: mood induction, learning, distractor, test, and debriefing. The mood induction, learning, distractor, and debriefing phases were identical to those in Experiment 1. The test phase was similar to that of Experiment 1, except that after every "old" response, participants were asked for a remember/know/guess judgement, made by pressing the appropriate key on a standard computer keyboard. The instructions for the remember/know/guess procedure were based on those of Gardiner et al. (1998) and Yonelinas (2001): "I would like you to respond R only if you can remember any qualitative information about the study event. This could include such things as recollecting what you were thinking about when the face was presented, what the face looked like, etc. However, if you think that a picture was presented in the previous part of the experiment, but you cannot recollect any details about the study event, please choose the K response. Finally, the G response corresponds to situations when your answer is simply a guess."
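For readers unfamiliar with the SDT measures used here, d′ and C can be computed from raw response counts in a few lines. The sketch below is our illustration, not the authors' analysis code; in particular, the log-linear correction for hit or false-alarm rates of exactly 0 or 1 is one common convention and is an assumption, since the paper does not state how such rates were handled.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection d-prime and criterion C from response counts.

    A log-linear correction (0.5 added to each cell) guards against
    hit/false-alarm rates of exactly 0 or 1, where z would be undefined.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)               # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))    # bias; lower = more liberal
    return d_prime, criterion
```

For example, a hypothetical participant with 20 hits, 12 misses, 8 false alarms, and 24 correct rejections yields a positive d′ (above-chance discrimination), while equal hit and false-alarm counts yield d′ = 0.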
3.2. Results
As in Experiment 1, the manipulation check is presented first to ensure that the mood induction procedures were effective. This is followed by analyses of d′, C, and the remember/know/guess responses.

3.2.1. Manipulation check
The manipulation check for Experiment 2 involved the same procedure as in Experiment 1: a VAS difference score was computed for each condition. These are presented in Table 2. A univariate between-subjects ANOVA run on these data revealed a significant effect of mood induction, F(2, 57) = 33.77, MSE = 47.14, p < .001, ηp² = .54. The VAS difference score for sad-induced participants was significantly lower than for happy-induced participants (mean difference = 17.45, p < .001) and for neutral-induced participants (mean difference = 11.95, p < .001). Additionally, happy-induced participants had a higher VAS difference score than neutral-induced participants (mean difference = 5.50, p = .014).

3.2.2. Recognition accuracy
The SDT measure of stimulus discriminability, d′, was calculated as in Experiment 1. The means are plotted in Fig. 1. They show that sad-induced participants were more accurate than all other participants. Additionally, the data appear to show that happy faces were better recognised than sad or neutral faces by happy and neutral participants, but not by sad participants. The data summarised in Fig. 1 were subjected to a 3 × 3 mixed ANOVA. Replicating Experiment 1, there was a main effect of participant mood, F(2, 57) = 15.96, MSE = 0.59, p < .001, ηp² = .36, whereby sad participants were more accurate than happy participants (mean difference = 0.69, p < .001) and neutral participants (mean difference = 0.67, p < .001). The difference between happy and neutral participants was not significant.
Replicating previous work (e.g., Foa, Gilboa-Schectman, Amir, & Freshman, 2000; Mather & Carstensen, 2003), there was a significant effect of facial expression, F(2, 114) = 10.08, MSE = 0.28, p < .001, ηp² = .15, whereby sad faces were recognised less accurately than happy faces (mean difference = 0.43, p < .001) and neutral faces (mean difference = 0.27, p = .03). The difference between happy and neutral faces was not significant (mean difference = 0.16, p = .20). These main effects were qualified by a significant interaction, F(4, 114) = 6.19, MSE = 0.28, p < .001, ηp² = .18. The interaction was revealed by happy participants showing greater accuracy for happy faces than for sad faces (mean difference = 0.85, p < .001) and neutral faces (mean difference = 0.58, p = .006), whereas sad participants showed lower accuracy for happy faces than for sad faces (mean difference = 0.19) and neutral faces (mean difference = 0.32), though not significantly. Neutral participants showed a similar pattern of results to happy participants, although no significant differences were found.
Table 2
Mood induction manipulation check for Experiment 2: Mean VASaverage − VAScurrent scores.

Sad induced: −11.75
Happy induced: 5.70
Neutrally induced: 0.20
Fig. 1. Mean recognition accuracy (d′) in Experiment 2 for happy, sad, and neutral faces split by mood of participant. Error bars represent standard error.
Fig. 2. Mean proportion of remember and know responses split by participant mood for Experiment 2. Error bars represent standard error.
3.2.3. Response bias
A parallel analysis of response bias (C) was conducted. The main effect of facial expression and the interaction were not significant: F(2, 114) = 1.15, MSE = 0.07, p = .32, ηp² = .02 and F(4, 114) = 0.53, MSE = 0.07, p = .72, ηp² = .02, respectively. However, the main effect of participant mood approached significance, F(2, 57) = 2.47, MSE = 0.13, p = .09, ηp² = .08. Sad participants (mean = 0.02) had a more liberal response bias than happy participants (mean = 0.07) and neutral participants (mean = 0.12), although no pairwise differences were significant. The pattern of results is the same as that found in Experiment 1.

3.2.4. Reaction time
As in Experiment 1, a parallel analysis of reaction time was conducted on the data. This revealed no significant effects or interactions, largest F = 1.76, smallest p > .14.

3.2.5. Remember/know/guess
Proportions of remember, know, and guess responses were calculated for correct responses using an analytical procedure similar to that of Tunney and Fernie (2007). A 2 × 3 × 3 mixed-models ANOVA was run on these data with the factors response (remember and know; see footnote 8), expression (happy, sad, and neutral), and participant mood (happy, sad, and neutral). Fig. 2 presents the mean remember and know response rates for happy, sad, and neutral participants, collapsed across expression, since there were no significant effects involving expression (largest F = 2.12, p > .13). Fig. 2 demonstrates that there were significantly more remember responses than know responses (mean difference = 0.51), F(1, 57) = 186.73, MSE = 0.13, p < .001, ηp² = .77. Additionally, Fig. 2 shows that sad-induced participants gave more remember responses relative to know responses than happy or neutral participants, reflected in the response-by-participant-mood interaction, F(2, 57) = 6.26, MSE = 0.13, p = .003, ηp² = .18. Simple effects were used to explore this.
Sad participants gave significantly more remember responses than happy participants (mean difference = .17, p = .02) and neutral participants (mean difference = .16, p = .03). Additionally, sad participants gave fewer know responses than happy participants (mean difference = .10, p = .05) and neutral participants (mean difference = .13, p = .04). For both remember and know responses, there was no difference between happy and neutral participants. No other effects were significant.

8 Guess responses were not required for this analysis for two reasons: there were too few guess responses to use ANOVA appropriately, and the pattern of guess responses matched that of the know responses. Guess responses are indirectly assessed by using only remember and know responses, since if both were lower in one condition than another, guess responses would have to be higher in that condition. This pattern did not occur.

3.2.6. Responses during the learning phase
As in Experiment 1, the reaction times to make the distinctiveness judgements in the learning phase were subjected to an ANOVA with the factors participant mood and expression of the face, given data indicating mood-congruency effects in the amount of time spent learning information (Forgas, 1998; Forgas & Bower, 1987). This revealed no significant effects, largest F = 2.05, smallest p > .13. A parallel analysis of the distinctiveness ratings made during the learning phase did reveal a significant effect of facial expression, F(2, 114) = 15.22, MSE = 0.56, p < .001, ηp² = .21, in which neutral faces (mean = 4.74, SE = .14) were rated as less distinctive than sad faces (mean = 5.45, SE = .12, mean difference = 0.71, p < .001) and happy faces (mean = 5.31, SE = .11, mean difference = 0.57, p < .001). The difference in distinctiveness between
happy and sad faces was not significant (mean difference = 0.14, p = .85). No other effects were significant, largest F = 1.00, smallest p > .41. This pattern of distinctiveness does not account for the recognition accuracy data.

3.3. Discussion
Sad-induced participants were more accurate in face recognition, showed a more liberal response bias, and had more conscious recollections of faces than happy- or neutral-induced participants. These results are consistent with Experiment 1. Overall, happy faces were more accurately recognised than neutral or sad faces, consistent with previous research (Kottoor, 1989; Leigland et al., 2004; Ridout, Astell, Reid, Glen, & O'Carroll, 2003). This was not the case for sad participants, who recognised all facial expressions equally well. The only mood-congruent recognition bias found was therefore for happy participants, not sad participants. Thus, the hypothesis that mood-relevant processing would be found was not supported, whereas the hypothesis that negative mood would lead to greater recognition accuracy was supported.

The results are somewhat inconsistent with those reported by Jermann, Van der Linden, and D'Argembeau (2008), who found that participants' mood, measured by BDI scores, did not affect the accuracy with which participants recognised faces, but did affect the accuracy with which they recalled facial expressions. However, as mentioned in the introduction, Jermann et al. combined intentional learning with memory for identity and incidental learning with memory for expression, thus creating a confound. Specifically, Jermann et al. instructed their participants in the learning phase "to look carefully at the faces in order to be able to recognise them afterwards (intentional encoding, without mentioning the presence of facial expressions)" (p. 367).
Thus, their results could be explained in terms of mood affecting learning type: mood affects recognition when learning is incidental, but not when it is intentional. We employed an incidental learning paradigm in our study, and this may account for the differences between our results and Jermann et al.'s. It is also possible that mood as measured by the BDI causes qualitative differences in face processing compared with temporary mood induced with music.9 Experiment 3 was thus conducted to assess whether mood affects face recognition only under incidental memory conditions, or under both incidental and intentional memory conditions.

4. Experiment 3

Experiment 3 was conducted in a very similar manner to Experiment 2. Participants were induced into a happy, sad, or neutral mood, and recognition accuracy, response bias, and response type were measured for faces displaying happy, sad, and neutral facial expressions. However, participants were told about the subsequent recognition test and were instructed to memorise the faces; that is, an intentional memory task was employed. All other aspects of the procedure were identical to Experiment 2. If the results of Experiment 2 generalise to intentional memory tasks, then they should be replicated here. However, if the effects of mood occur only during incidental memory tasks, then there should be no effect of mood on recognition or on remember/know responses.

4.1. Method
Sixty Anglia Ruskin University psychology undergraduate students (43 female, 17 male) participated in this experiment. All participants were White and self-reported normal or corrected vision. Participants received course credit for taking part in this study. Participants were randomly allocated to one of three experimental conditions, with the constraint that there were equal numbers of participants in each condition.
These participants took part in the same experimental procedure, using the same materials and design, as Experiment 2. The only difference was that during the learning phase the participants were instructed to "pay careful attention to the faces in order to recognise them later" (cf. Jermann et al., 2008) to ensure intentional learning. Because of this, distinctiveness ratings were not taken, and the faces were on screen for 2500 ms during the learning phase. This time was chosen as it was roughly the mean time the faces were on screen during the response-terminated learning phases of Experiments 1 and 2.

4.2. Results
As in Experiment 1, the manipulation check is presented first to ensure that the mood induction procedures were effective. This is followed by analyses of d′, C, and the remember/know/guess responses.

4.2.1. Manipulation check
The manipulation check for Experiment 3 involved the same procedure as in Experiment 1: a VAS difference score was computed for each condition. These are presented in Table 3. A univariate between-subjects ANOVA run on these data revealed a significant effect of mood induction, F(2, 57) = 27.35, MSE = 120.97, p < .001, ηp² = .49.

9 There are further differences between Jermann, Van der Linden, and D'Argembeau's (2008) study and ours, but they are unlikely to account for the differences in results. These include: the number of stimuli (Jermann et al. used 20, we used 64); facial expressions at test (neutral versus matched expressions); study duration at learning (5 s versus response terminated); presence of a mask (none versus a random-noise mask between faces); duration of the interval between learning and recognition (immediate versus 5 min); and stimulus size (145 mm × 105 mm versus 140 mm × 130 mm).
Table 3
Mood induction manipulation check for Experiment 3: Mean VASaverage − VAScurrent scores.

Sad induced: −15.68
Happy induced: 9.37
Neutrally induced: 1.89
The VAS difference score for sad-induced participants was significantly lower than for happy-induced participants (mean difference = 25.06, p < .001) and for neutral-induced participants (mean difference = 17.57, p < .001). Additionally, happy-induced participants had a numerically higher VAS difference score than neutral-induced participants, though this difference was not significant (mean difference = 7.48, p = .107).

4.2.2. Recognition accuracy
The SDT measure of stimulus discriminability, d′, was calculated as in Experiment 1. The means are plotted in Fig. 3. The data summarised in Fig. 3 were subjected to a 3 × 3 mixed ANOVA. This revealed a main effect of expression, F(2, 114) = 9.02, MSE = 0.15, p < .001, ηp² = .14, whereby happy expressions were more accurately recognised than sad expressions (mean difference = 0.29, p < .001) and neutral expressions (mean difference = 0.23, p = .002). Recognition rates were similar for neutral and sad expressions. The effect of participant mood was not significant, F(2, 57) = 2.56, MSE = 0.28, p = .086, ηp² = .08, and neither was the interaction, F(4, 114) = 0.27, MSE = 0.15, p = .896, ηp² = .01.

4.2.3. Response bias
A parallel analysis of response bias was conducted. The main effect of facial expression was significant, F(2, 114) = 5.08, MSE = 0.04, p = .008, ηp² = .08: response bias was more liberal for happy expressions (mean = −0.18) than for sad expressions (mean = −0.10), and significantly more so than for neutral expressions (mean = −0.07, p = .001). The main effect of participant mood was also significant, F(2, 57) = 3.52, MSE = 0.05, p = .036, ηp² = .11. Sad participants (mean = −0.18) had a more liberal response bias than happy participants (mean = −0.09) and neutral participants (mean = −0.09), although no pairwise differences were significant. The interaction was not significant, F(4, 114) = 0.12, MSE = 0.04, p = .98, ηp² < .01.

4.2.4. Reaction time
As in Experiment 1, a parallel analysis of reaction time was conducted on the data.
This revealed no significant effects or interactions, largest F = 1.75, smallest p > .14.

4.2.5. Remember/know/guess
Analysis of the remember and know responses was carried out as in Experiment 2, using a 2 × 3 × 3 mixed-models ANOVA with the factors response (remember and know), expression (happy, sad, and neutral), and participant mood (happy, sad, and neutral). Fig. 4 presents the mean remember and know response rates for happy, sad, and neutral participants, collapsed across expression, since there were no significant effects involving expression (largest F = 1.86, p > .21).
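To make the dependent measure concrete, the sketch below illustrates one way the remember/know/guess proportions analysed in Experiments 2 and 3 could be computed from a participant's trial data. It is our illustration under assumptions (the function name and data layout are hypothetical), not the exact procedure of Tunney and Fernie (2007); as in the paper, proportions are taken over correct "old" responses only.

```python
from collections import Counter

def rkg_proportions(responses):
    """Proportion of remember/know/guess judgements among correct 'old' responses.

    `responses` is a list of (correct, judgement) pairs, where `correct` is True
    for a hit and `judgement` is 'R', 'K', or 'G'. Proportions are computed over
    hits only, so for any participant with at least one hit they sum to 1.
    """
    hits = [judgement for correct, judgement in responses if correct]
    counts = Counter(hits)
    n = len(hits) or 1  # avoid division by zero for a participant with no hits
    return {j: counts.get(j, 0) / n for j in ("R", "K", "G")}
```

For example, a participant with hits judged R, R, K and one miss would score R = 2/3, K = 1/3, G = 0, and these per-participant proportions would then enter the mixed ANOVA described above.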
Fig. 3. Mean recognition accuracy (d′) in Experiment 3 for happy, sad, and neutral faces split by mood of participant. Error bars represent standard error.
Fig. 4. Mean proportion of remember and know responses split by participant mood for Experiment 3. Error bars represent standard error.
There were significantly more remember responses than know responses (mean difference = 0.51), F(1, 57) = 357.51, MSE = 0.05, p < .001, ηp² = .86. The main effect of participant mood was not significant in this experiment, F(2, 57) = 0.35, MSE = 0.06, p = .71, ηp² < .01, and, critically, neither was the interaction, F(2, 57) = 0.45, MSE = 0.05, p = .64, ηp² < .02.

4.3. Discussion
The results from Experiment 3 stand in stark contrast to those of Experiment 2. In this study, we replicated the recognition advantage for happy faces over unhappy faces (Foa et al., 2000; Mather & Carstensen, 2003). We also replicated previous work demonstrating a more liberal response bias for emotionally valenced stimuli than for neutral stimuli (Windmann & Kutas, 2001). This study thus goes some way towards explaining the differences between Jermann, Van der Linden, and D'Argembeau's (2008) findings and ours: the critical difference is the learning instructions employed. When intentional learning strategies are employed, the effects of observer mood are minimal. That is, under intentional learning there are no mood-congruency effects; happy faces are still remembered more accurately than unhappy faces, and sad participants may even perform worse at face recognition than happy participants. These data call into question some of the conclusions drawn by Jermann et al., who confounded learning strategy with memory for expressions or identity. Rather than mood affecting memory for expressions, mood affects all kinds of memory, but only under incidental learning conditions.

5. General discussion
The results from these experiments show that negative mood does not always lead to performance detriments. Although sad people appear to show limited cognitive resources (Sedikides, 1994), negative mood leads to higher accuracy in face recognition when incidental learning strategies are employed.
This suggests that face perception is a task that does not tax cognitive resources (cf. Austin et al., 2001; Hartlage et al., 1993). Stating that face recognition does not load cognitive resources does not, by itself, lead to the conclusion that sad people would be better at it than happy people, as was found here, because numerous studies show that sad mood is usually associated with poorer recognition memory for words and objects (e.g., Channon et al., 1993; Hertel & Hardin, 1990; Hertel & Milan, 1994; Ramponi et al., 2010; Watts et al., 1987) under both incidental and intentional learning conditions. Thus, there must be something "special" about face recognition in this respect (cf. Farah, 1996).

In our introduction, we hypothesised that sad people may be more accurate at face recognition because they encode many classes of stimuli more elaborately than happy people (Deldin et al., 2001; Derryberry & Tucker, 1994; Deveney & Deldin, 2004; Schwarz & Clore, 1996). This elaborate processing may take several forms. Though a detailed explanation of such processing is beyond the scope of the present article, we shall briefly explain why sad people may be more accurate at face recognition in terms of type of processing, defocused attention, and feature selection. These three possible explanations are not mutually exclusive; for example, defocused attention could alter the features that are encoded, and this could alter the type of processing employed.

Face perception researchers often describe the way we process faces in terms of configural (or holistic) and featural processing (e.g., Tanaka & Farah, 1993), in which the expert processing we primarily rely on is holistic (Freire, Lee, &
Symons, 2000; Gauthier & Tarr, 2002; Leder & Carbon, 2004). Featural processing usually involves processing the features in isolation, whereas holistic processing involves processing the facial features as some gestalt whole (see Maurer, Le Grand, & Mondloch, 2002, for a review). It might therefore be supposed that sad people rely more on holistic processing than happy people do. However, Curby et al. (2009, 2011) have shown convincingly that sad people do not employ holistic processing as readily as happy people. We appear to have shown, then, that sad people are better at face recognition while using the non-optimal featural processing strategy; perhaps sad people are employing some additional strategies to recognise faces.

Our results are consistent with the notion that sad people encode the world more elaborately than happy people (e.g., Derryberry & Tucker, 1994; Deveney & Deldin, 2004). They show sustained attention to all faces (Deveney & Deldin, 2004), leading to better encoding and thus better memory, although in our study there were no response-time differences across participants. This more elaborate encoding is also associated with more conscious recollections of faces: sad people encode greater detail, allowing them to remember something associated with the original presentation of a face.

This finding could be related to the phenomenon of defocused attention (e.g., Pour & von Hecker, 2008), in which attention is broadly focused, unfocused, or unselective (e.g., Oatley & Johnson-Laird, 1987). For example, participants may be able to recall contextual information in addition to the target information (Meiser, 2005; Meiser & Bröder, 2002). von Hecker and Meiser (2005) tested depressed and non-depressed participants in a source-monitoring task in which words were presented surrounded by a coloured frame in either the left or right portion of the screen.
Both depressed and non-depressed participants accurately recognised the target words, but only the depressed participants accurately recalled the colour of the frame and the position of the word. This increased memory for surrounding, extraneous, contextual information may be linked to deeper, more elaborate encoding, which may explain why more associated information is recalled by sad people, leading to more conscious recollections.

Having described how defocused attention in sad participants may lead to more elaborate encoding (especially of contextual information) and thus more conscious recollections, it becomes possible to hypothesise what might happen if the faces were not matched for expression from learning to test (as in the Jermann et al., 2008, study). If sad people encode more extraneous information, they may encode the expression; if the expression differs between learning and test, there will be additional interference from a change in an extraneous feature or context. This may help to explain why sad participants did not show higher face recognition accuracy than happy participants in Jermann et al.'s study.10 Since Jermann et al. did not provide sufficient simple effects or analyses involving neutral faces, it is difficult to assess whether the interference effect hypothesised here was observed; a further study could explore this.

Defocused attention may have an additional or alternative effect beyond simply greater accuracy: it may cause sad people to encode areas of the face they would not normally encode.
It is well established that some features are more frequently viewed and used than others when processing faces. The eyes are the most critical feature for recognition (Gold, Sekuler, & Bennett, 2004; Vinette, Gosselin, & Schyns, 2004), as evidenced by event-related potentials that selectively respond to the eyes (Bentin, Allison, Puce, Perez, & McCarthy, 1996; Eimer, 1998) and by eye-tracking data showing that the eyes attract more and longer fixations and greater scanning than any other feature (e.g., Cook, 1978; Groner, Walder, & Groner, 1984; Henderson, Falk, Minut, Dyer, & Mahadevan, 2001; Mertens, Siegmund, & Grüsser, 1993; Walker-Smith, Gale, & Findlay, 1977). It is also known that depressed individuals avoid eye contact in social situations (Gotlib, 1982) and may thus encode other features. It may therefore be that sad participants do not attend to the same facial features as happy participants (see Hills & Lewis, 2011), and this may affect face recognition accuracy. Defocused attention may mean that more features are studied and encoded, improving accuracy; eye-tracking research could be used to establish this.

In the present study, we found that under incidental learning, mood induction affected response bias while facial expression did not, whereas under intentional learning both mood induction and facial expression caused a more liberal response bias. Piguet et al. (2008) found that emotionally valenced stimuli induce attentional biases, causing a more liberal response bias. In our incidental-learning experiments, it was thus the emotionally valenced environment, rather than the stimuli, that affected response bias. This difference may result from the mood induction causing participants to attribute their emotional arousal to the faces rather than to the induction itself, although this suggestion is speculative. Nevertheless, the reason why emotional valence changes response bias is not well understood.
Perhaps emotional valence causes participants to mistake emotional arousal caused by the mood induction for feelings of familiarity, given that there is an unconscious arousal response to familiar faces (Bauer & Verfaellie, 1988; Burton, Young, Bruce, Johnston, & Ellis, 1991; Ellis, 2007; Stone, Valentine, & Davis, 2001; Young, 2009).

An important question with regard to response bias is why happy- and sad-induced participants differ. Though Experiment 2 did not find significant differences in response bias, the pattern was the same as in Experiment 1 and the effect size was moderate, suggesting that an effect would be found with increased experimental power. Sad-induced participants had a more liberal response bias than happy participants, suggesting that they were more willing to give an "old" response to a face. This has the effect of producing more correct recognitions (hits) but also more false recognitions (false alarms). From an adaptive perspective, this may serve to ensure that a familiar individual is detected, which would probably lead to a supportive social network being found or to an increase in the social network. Thus, the lower response bias in sad participants may well be adaptive, in an attempt to improve mood. Again, this is a rather speculative suggestion that fits the data.

10 Based on the supposition made here, it could be assumed that happy participants would not be affected by expressions. However, we did observe a mood-congruent recognition bias in happy participants. This may be based on happy participants deliberately increasing the attention paid to happy faces rather than on a defocused-attention phenomenon; multiple factors could be playing a role in such effects.
One issue that is unresolved by the present set of studies is the precise cognitive locus of these effects. Mood-congruency effects on memory are known to operate primarily at learning (Forgas, 1998; Watkins, Mathews, Williamson, & Fuller, 1992) and less strongly at retrieval (Teasdale & Russell, 1983). Because the mood induction in the present work was administered at the beginning of the experiment, it cannot be determined whether the present effects are due to encoding, storage, or retrieval. Manipulating the timing of the mood induction (i.e., immediately before learning or immediately before recognition) would allow the locus to be identified.

In conclusion, we have reported three experiments in which participants' face recognition performance was measured following mood induction. Happy participants were less accurate than sad participants at face recognition following incidental learning, and sad participants had a more liberal response bias than happy participants. Additionally, sad participants made more remember responses than happy participants. These results indicate that mood causes distinct changes in the elaboration with which faces are encoded, possibly altering the features used to encode those faces. However, these effects are moderated by the learning strategy employed.

Acknowledgments

Experiment 1 was supported by Grant PTA-030-2003-00524 from the ESRC to Peter J. Hills. The authors would also like to thank two anonymous reviewers for useful comments made on a previous draft of this work.

References

Ahles, T. A., Ruckdeschel, J. C., & Blanchard, E. B. (1984). Cancer-related pain: II. Assessment with visual analogue scales. Journal of Psychosomatic Research, 28, 121–124.
Aiken, R. C. B. (1974). Assessment of mood by analogue. In A. T. Beck, H. L. P. Resnick, & J. D. Lettier (Eds.), The prediction of suicide. Bowie, MD: Charles Press.
Austin, M., Mitchell, P., & Goodwin, G. M. (2001). Cognitive deficits in depression. British Journal of Psychiatry, 178, 200–206.
Bauer, R. M., & Verfaellie, M. (1988). Electrodermal discrimination of familiar but not unfamiliar faces in prosopagnosia. Brain and Cognition, 8, 240–252.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.
Bless, H., Mackie, D. M., & Schwarz, N. (1992). Mood effects on attitude judgements: Independent effects of mood before and after message elaboration. Journal of Personality and Social Psychology, 63, 585–595.
Bodenhausen, G. V., Kramer, G. P., & Susser, K. (1994). Happiness and stereotypic thinking in social judgement. Journal of Personality and Social Psychology, 66, 621–632.
Bower, G. H. (1981). Mood and memory. American Psychologist, 36, 129–148.
Bower, G. H., Sahgal, A., & Routh, D. A. (1983). Affect and cognition [and discussion]. Philosophical Transactions of the Royal Society London B, 302, 387–402. doi:10.1098/rstb.1983.0062.
Bruce, V. (1982). Changing faces: Visual and non-visual coding processes in face recognition. British Journal of Psychology, 73, 105–116.
Burt, D. B., Niederehe, G., & Zembar, M. J. (1995). Depression and memory impairment: A meta-analysis of the association, its pattern, and specificity. Psychological Bulletin, 117, 285–305.
Burton, A. M., Young, A. W., Bruce, V., Johnston, R. A., & Ellis, A. W. (1991). Understanding covert recognition. Cognition, 39, 129–166.
Channon, S., Baker, J. E., & Robertson, M. M. (1993). Effects of structure and clustering on recall and recognition memory in clinical depression. Journal of Abnormal Psychology, 102, 323–326.
Clark, M. S., & Isen, A. M. (1982). Toward understanding the relationship between feeling states and social behaviour. In A. Hastorf & A. M. Isen (Eds.), Cognitive social psychology (pp. 73–108). New York: American Elsevier.
Cook, M. (1978). Eye movements during recognition of faces. In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory. New York: Academic Press.
Curby, K., Johnson, K., & Tyson, A. (2009). Perceptual expertise has an emotional side: Holistic face processing is modulated by observers' emotional state. Journal of Vision, 9, 510. doi:10.1167/9.8.510.
Curby, K., Johnson, K., & Tyson, A. (2011). Face to face with emotion: Holistic processing is modulated by emotional state. Cognition and Emotion, 25, 1–10.
D'Argembeau, A., & Van der Linden, M. (2007). Facial expressions of emotion influence memory for facial identity in an automatic way. Emotion, 7, 507–515.
D'Argembeau, A., Van der Linden, M., Etienne, A., & Comblain, C. (2003). Identity and expression memory for happy and angry faces in social anxiety. Acta Psychologica, 114, 1–15.
Deldin, P. J., Deveney, C. M., Kim, A., Casas, B. R., & Best, J. L. (2001). A slow wave investigation of working memory biases in mood disorders. Journal of Abnormal Psychology, 110, 267–281.
Derryberry, D., & Tucker, D. M. (1994). Motivating the focus of attention. In P. M. Niedenthal & S. Kitayama (Eds.), The heart's eye: Emotional influences in perception and attention (pp. 167–196). New York: Academic Press.
Deveney, C. M., & Deldin, P. J. (2004). Memory of faces: A slow wave ERP study of major depression. Emotion, 4, 295–304.
Dunn, J. C. (2004). Remember–know: A matter of confidence. Psychological Review, 111, 524–542.
Eich, J. E. (1980). The cue-dependent nature of state-dependent retrieval. Memory and Cognition, 8, 157–173.
Eich, E., & Macauley, D. (2000). Are real moods required to reveal mood-congruent and mood-dependent memory? Psychological Science, 11, 244–248.
Eimer, M. (1998). Does the face-specific N170 component reflect the activity of a specialized eye processor? Neuroreport, 9, 2945–2948.
Eldridge, L. L., Sarfatti, S., & Knowlton, B. T. (2002). The effect of testing procedure on remember–know judgements. Psychonomic Bulletin & Review, 9, 139–145.
Ellis, H. C., & Ashbrook, P. W. (1988). Resource allocation model of the effects of depressed mood states on memory. In K. Fielder & J. Forgas (Eds.), Affect, cognition and social behaviour. Toronto: Hogrefe.
Ellis, H. D. (2007). Delusions: A suitable case for imaging? International Journal of Psychophysiology, 63, 146–151.
Erber, R., & Erber, M. R. (1994). Beyond mood and social judgement: Mood incongruent recall and mood regulation. European Journal of Social Psychology, 24, 79–88.
Farah, M. J. (1996). Is face recognition "special"? Evidence from neuropsychology. Behavioural Brain Research, 76, 181–189.
Farah, M. J., Wilson, K. D., Drain, M., & Tanaka, J. N. (1998). What is "special" about face perception? Psychological Review, 105, 482–498.
Fiedler, K., Asbeck, J., & Nickel, S. (1991). Mood and constructive memory effects on social judgment. Cognition and Emotion, 5, 363–378.
Foa, E. B., Gilboa-Schechtman, E., Amir, N., & Freshman, M. (2000). Memory bias in generalized social phobia: Remembering negative emotional expressions. Journal of Anxiety Disorders, 14(5), 501–519.
Forgas, J. P. (1995). Mood and judgment: The affect infusion model (AIM). Psychological Bulletin, 116, 39–66.
Forgas, J. P., & Bower, G. H. (1987). Mood effects on person perception judgements. Journal of Personality and Social Psychology, 53, 53–60.
Forgas, J. P. (1998). Happy and mistaken? Mood effects on the fundamental attribution error. Journal of Personality and Social Psychology, 75, 318–331.
Fox, E., Russo, R., Bowles, R., & Dutton, K. (2001). Do threatening stimuli draw or hold visual attention in subclinical anxiety? Journal of Experimental Psychology: General, 130, 681–700.
Freire, A., Lee, K., & Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29, 159–170.
Gardiner, J. M., Ramponi, C., & Richardson-Klavehn, A. (1998). Experiences of remembering, knowing, and guessing. Consciousness and Cognition, 7, 1–26.
Gardiner, J. M., Ramponi, C., & Richardson-Klavehn, A. (2002). Recognition memory and decision processes: A meta-analysis of remember, know, and guess responses. Memory, 10, 83–98.
Gardiner, J. M., & Richardson-Klavehn, A. (2000). Remembering and knowing. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 229–244). Oxford, UK: Oxford University Press.
Gasper, K., & Clore, G. L. (2002). Attending to the big picture: Mood and global versus local processing of visual information. Psychological Science, 13, 34–40.
Gauthier, I., & Bukach, C. M. (2007). Should we reject the expertise hypothesis? Cognition, 103, 322–330.
Gauthier, I., & Tarr, M. J. (2002). Unraveling mechanisms for expert object recognition: Bridging brain activity and behaviour. Journal of Experimental Psychology: Human Perception and Performance, 28, 431–446.
Gilboa-Schechtman, E., Erhard-Weiss, D., & Jeczemien, P. (2002). Interpersonal deficits meet cognitive biases: Memory for facial expressions in depressed and anxious men and women. Psychiatry Research, 113, 279–293.
Gold, J. M., Sekuler, A. B., & Bennett, P. J. (2004). Characterizing perceptual learning with external noise. Cognitive Sciences, 28, 167–207.
Gotlib, I. H. (1982). Self-reinforcement and depression in interpersonal interaction: The role of performance level. Journal of Abnormal Psychology, 93, 19–30.
Gotlib, I. H., Roberts, J. E., & Gilboa-Schechtman, E. (1996). Cognitive interferences in depression. In I. G. Sarason, G. Pierce, & B. R. Sarason (Eds.), Cognitive interference: Theories, methods, and findings. Mahwah, NJ: Erlbaum.
Groner, R., Walder, F., & Groner, M. (1984). Looking at faces: Local and global aspects of scanpaths. In F. Johnson & A. G. Gale (Eds.), Theoretical and applied aspects of eye movement research (pp. 523–533). New York: Elsevier.
Hartlage, S., Alloy, L. B., Vazquez, C., & Dykman, B. (1993). Automatic and effortful processing in depression. Psychological Bulletin, 113, 247–278.
den Hartog, H. M., Derix, M. M. A., van Bemmel, A. L., Kremer, B., & Jolles, J. (2003). Cognitive functioning in young and middle-aged unmedicated outpatients with major depression: Testing the effort and cognitive speed hypotheses. Psychological Medicine, 33, 1443–1451.
Hasher, L., & Zacks, R. T. (1979). Automatic and effortful processing in memory. Journal of Experimental Psychology: General, 108, 356–388.
Henderson, J. M., Falk, R. J., Minut, S., Dyer, F. C., & Mahadevan, S. (2001). Gaze control for face learning and recognition by humans and machines. In T. Shipley & P. Kellman (Eds.), From fragments to objects: Segmentation processes in vision (pp. 463–482). Amsterdam: Elsevier.
Hertel, P. T. (1994). Depression and memory: Are impairments remediable through attention control? Current Directions in Psychological Science, 3, 190–193.
Hertel, P. T., & Hardin, T. S. (1990). Remembering with and without awareness in a depressed mood: Evidence of deficits in initiative. Journal of Experimental Psychology: General, 119, 45–59.
Hertel, P. T., & Milan, S. (1994). Depressive deficits in recognition: Dissociation of recollection and familiarity. Journal of Abnormal Psychology, 103, 736–742.
Hesse, A. G., & Spies, K. (1994). Experimental inductions of emotional states and their effectiveness. British Journal of Psychology, 85, 55–78.
Hills, P. J., & Lewis, M. B. (2011). Sad people avoid the eyes or happy people focus on the eyes? Mood induction affects facial feature discrimination. British Journal of Psychology, 102, 260–274.
Hirshman, E. (1998). On the utility of the signal detection model of the remember–know paradigm. Consciousness and Cognition, 7, 103–107.
Hirt, E. R., Devers, E. E., & McCrea, S. M. (2008). I want to be creative: Exploring the role of hedonic contingency theory in the positive mood-cognitive flexibility link. Journal of Personality and Social Psychology, 94, 214–230.
Hole, G. J. (1994). Configurational factors in the perception of unfamiliar faces. Perception, 23, 65–74.
Huber, F., Beckmann, S. C., & Herrmann, A. (2004). Means-end analysis: Does the affective state influence information processing style? Psychology and Marketing, 21, 715–737.
Isen, A. M. (1987). Positive affect, cognitive processes, and social behavior. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 20). New York: Academic Press.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30, 513–541.
Jermann, F., Van der Linden, M., & D'Argembeau, A. (2008). Identity recognition and happy and sad facial expression recall: Influence of depressive symptoms. Memory, 16, 364–373.
Jermann, F., Van der Linden, M., Laurencon, M., & Schmitt, B. (2009). Recollective experience during recognition of emotional words in clinical depression. Journal of Psychopathology and Behavioral Assessment, 31, 27–35.
Johansson, M., Mecklinger, A., & Treese, A. C. (2004). Recognition memory for emotional and neutral faces: An event-related potential study. Journal of Cognitive Neuroscience, 16, 1840–1853.
Joormann, J., & Gotlib, I. H. (2007). Selective attention to emotional faces following recovery from depression. Journal of Abnormal Psychology, 116, 80–85.
Kanayama, N., Sato, A., & Ohira, H. (2008). Dissociative experience and mood-dependent memory. Cognition and Emotion, 22, 881–896.
Koster, E. H. W., De Raedt, R., Goeleven, E., Franck, E., & Crombez, G. (2005). Mood-congruent attentional bias in dysphoria: Maintained attention to and impaired disengagement from negative information. Emotion, 5, 446–455.
Kottoor, T. M. (1989). Recognition of faces by adults. Psychological Studies, 34, 102–105.
Leder, H., & Carbon, C.-C. (2004). Part-to-whole effects and configural processing in faces. Psychological Science, 46, 531–543.
Leigland, L. A., Schulz, L. E., & Janowsky, J. S. (2004). Age related changes in emotional memory. Neurobiology of Aging, 25, 1117–1124.
Light, L. L., Kayra-Stuart, F., & Hollander, S. (1979). Recognition memory for typical and unusual faces. Journal of Experimental Psychology: Human Learning and Memory, 5, 212–228.
Lundh, L. G., & Ost, L. G. (1996). Recognition bias for critical faces in social phobias. Behavioral Research and Therapy, 34, 787–794.
Mackie, D. M., & Worth, L. T. (1991). Feeling good but not thinking straight: Positive mood and persuasion. In J. P. Forgas (Ed.), Emotion and social judgements. Oxford: Pergamon.
MacMillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide. New York: Cambridge University Press.
Mather, M., & Carstensen, L. L. (2003). Aging and attentional biases for emotional faces. Psychological Science, 14, 409–415.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
Meiser, T. (2005). A hierarchy of multinomial models for multidimensional source monitoring. Methodology, 1, 2–17.
Meiser, T., & Bröder, A. (2002). Memory for multidimensional source information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 116–137.
Mertens, I., Siegmund, H., & Grüsser, O.-J. (1993). Gaze motor asymmetries in the perception of faces during a memory task. Neuropsychologia, 31, 989–998.
Mogg, K., & Bradley, B. P. (1998). A cognitive-motivational analysis of anxiety. Behaviour Research and Therapy, 36, 809–848.
Mogg, K., & Bradley, B. P. (2002). Selective orientation to masked threat faces in social anxiety. Behaviour Research and Therapy, 40, 1403–1414.
Mogg, K., Philippot, P., & Bradley, B. P. (2004). Selective attention to angry faces in clinical social phobia. Journal of Abnormal Psychology, 113, 160–165.
Oatley, K., & Johnson-Laird, P. N. (1987). Towards a cognitive theory of emotions. Cognition and Emotion, 1, 29–50.
Ochsner, K. N. (2000). Are affective events richly recollected or simply familiar? The experience and process of recognizing feelings past. Journal of Experimental Psychology: General, 129, 242–261.
Pavuluri, M. N., O'Conner, M. M., Harral, E., & Sweeney, J. A. (2007). Affective neural circuitry during facial emotion processing in pediatric bipolar disorder. Biological Psychiatry, 62, 158–167.
Phillips, M. L. (2003). Understanding the neurobiology of emotion perception: Implications for psychiatry. British Journal of Psychiatry, 182, 190–192.
Piguet, O., Connally, E., Krendl, A. C., Huot, J. R., & Corkin, S. (2008). False memory in aging: Effects of emotional valence on word recognition accuracy. Psychology and Aging, 23, 307–314.
Pour, M. F., & von Hecker, U. (2008). Defocused mode of attention: Further evidences from perceptual eccentricity and memory. International Journal of Psychology, 43, 60–61.
Ramponi, C., Murphy, F. C., Calder, A. J., & Barnard, P. J. (2010). Recognition memory for pictorial material in subclinical depression. Acta Psychologica, 135, 293–301.
Ratcliff, R., Van Zandt, T., & McKoon, G. (1995). Process dissociation, single-process theories, and recognition memory. Journal of Experimental Psychology: General, 124, 352–374.
Richler, J. J., Tanaka, J. W., Brown, D. D., & Gauthier, I. (2008). Why does selective attention to parts fail in face processing? Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1356–1368.
Ridout, N., Astell, A. J., Reid, I., Glen, T., & O'Carroll, R. (2003). Memory bias for emotional facial expressions in major depression. Cognition and Emotion, 17, 101–122.
Roediger, H. L., III, & McDermott, K. B. (1992). Depression and implicit memory: A commentary. Journal of Abnormal Psychology, 101, 587–591.
Rotello, C. M., Macmillan, N. A., & Van Tassel, G. (2000). Recall-to-reject in recognition: Evidence from ROC curves. Journal of Memory and Language, 43, 67–88.
Schwarz, N., & Clore, G. L. (1996). Feelings and phenomenal experience. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic principles. New York: Guilford Press.
Sedek, G., & von Hecker, U. (2004). Effects of subclinical depression and aging on generative reasoning about linear orders: Same or different processing limitations? Journal of Experimental Psychology: General, 133, 237–260.
Sedikides, C. (1994). Incongruent effects of sad mood on self conception valence: It's a matter of time. European Journal of Social Psychology, 24, 161–172.
Sergent, J. (1986). Methodological constraints in neuropsychological studies of face perception in normals. In R. Bruyer (Ed.), The neuropsychology of face perception and facial expression. Hillsdale, NJ: Lawrence Erlbaum Associates.
Sergerie, K., Lepage, M., & Armony, J. L. (2005). A face to remember: Emotional expression modulates prefrontal activity during memory formation. NeuroImage, 24, 127–144.
Sharot, T., Davidson, M. L., Carson, M. M., & Phelps, E. A. (2008). Eye movements predict recollective experience. PLoS One, 3, e2884.
Sharot, T., Delgado, M. R., & Phelps, E. A. (2004). How emotion enhances the feeling of remembering. Nature Neuroscience, 7, 1376–1380.
Stone, A., Valentine, T., & Davis, R. (2001). Face recognition and emotional valence: Processing without awareness by neurologically intact participants does not simulate covert recognition in prosopagnosia. Cognitive, Affective, & Behavioral Neuroscience, 1, 183–191.
Swets, J. A. (1961). Detection theory and psychophysics – A review. Psychometrika, 26, 49–63.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46A, 225–245.
Teasdale, J. D., & Russell, M. L. (1983). Differential effect of induced mood on the recall of positive, negative, and neutral words. British Journal of Clinical Psychology, 22, 163–171.
Tottenham, N., Borscheid, A., Ellertsen, K., Markus, D. J., & Nelson, C. A. (2002). Categorization of facial expressions in children and adults: Establishing a larger stimulus set. Poster presented at the Cognitive Neuroscience Society annual meeting, San Francisco, April 2002.
Tulving, E. (1985). Memory and consciousness. Canadian Psychology, 26, 1–12.
Tunney, R. J., & Fernie, G. (2007). Repetition priming affects guessing not familiarity. Behavioral and Brain Functions, 3, 40.
Vinette, C., Gosselin, F., & Schyns, P. G. (2004). Spatiotemporal dynamics of face recognition in a flash: It's in the eyes. Cognitive Sciences, 28, 289–301.
von Hecker, U., & Meiser, T. (2005). Defocused attention in depressed mood: Evidence from source monitoring. Emotion, 5, 456–463.
Wais, P. E., Mickes, L., & Wixted, J. T. (2008). Remember/know judgements probe degrees of recollection. Journal of Cognitive Neuroscience, 20, 400–405.
Walker-Smith, G. J., Gale, A. G., & Findlay, J. M. (1977). Eye movement strategies involved in face perception. Perception, 6, 313–326.
Watkins, T., Mathews, A. M., Williamson, D. A., & Fuller, R. (1992). Mood congruent memory in depression: Emotional priming or elaboration. Journal of Abnormal Psychology, 101, 581–586.
Watts, F. N., Morris, L., & MacLeod, A. K. (1987). Recognition memory in depression. Journal of Abnormal Psychology, 96, 273–275.
Williams, J. M. G., Watts, F. N., MacLeod, C. M., & Matthews, A. (1997). Cognitive psychology and emotional disorders. Chichester, UK: John Wiley.
Windmann, S., & Kutas, M. (2001). Electrophysiological correlates of emotion-induced recognition bias. Journal of Cognitive Neuroscience, 13, 577–592.
Yonelinas, A. P. (1994). Receiver-operating characteristics in recognition memory – Evidence for a dual-process model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1341–1354.
Yonelinas, A. P. (1997). Recognition memory ROCs for item and associative information: The contribution of recollection and familiarity. Memory and Cognition, 25, 747–763.
Yonelinas, A. P. (1999). The contribution of recollection and familiarity to recognition and source-memory judgements: A formal dual-process model and an analysis of receiver operating characteristics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1415–1434.
Yonelinas, A. P. (2001). Components of episodic memory: The contribution of recollection and familiarity. Philosophical Transactions of the Royal Society B, 356, 1363–1374.
Yonelinas, A. P., & Jacoby, L. L. (1996). Noncriterial recollection: Familiarity as automatic, irrelevant recollection. Consciousness and Cognition, 5, 131–141.
Yonelinas, A. P., Kroll, N. E. A., Dobbins, I., Lazzara, M., & Knight, R. T. (1998). Recollection and familiarity deficits in amnesia: Convergence of remember–know, process dissociation, and receiver operating characteristic data. Neuropsychology, 12, 323–339.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747–759.
Young, G. (2009). In what sense 'familiar'? Examining experiential differences within pathologies of facial recognition. Consciousness and Cognition, 18, 628–638.
Zakzanis, K. K., Leach, L., & Kaplan, E. (1998). On the nature and pattern of neurocognitive function in major depressive disorder. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 11, 111–119.