Acta Psychologica 152 (2014) 149–157


Disgust-specific modulation of early attention processes

Johanna C. van Hooff ⁎, Melanie van Buuringen, Ihsane El M'rabet, Margot de Gier, Lilian van Zalingen

Cognitive Psychology Department, VU University Amsterdam, Amsterdam, The Netherlands

Article history: Received 18 April 2014; Received in revised form 14 August 2014; Accepted 26 August 2014; Available online 16 September 2014

PsycINFO classification: 2346 Attention; 2360 Motivation & Emotion

Keywords: Emotion; Disgust; Fear; Arousal; Attention engagement; Time-course of attention

Abstract

Although threatening images are known to attract and keep our attention, little is known about the existence of emotion-specific attention effects. In this study (N = 46), characteristics of an anticipated, disgust-specific effect were investigated by means of a covert orienting paradigm incorporating pictures that were either disgust-evoking, fear-evoking, happiness-evoking, or neutral. Attention adhesion to these pictures was measured by the time necessary to identify a peripheral target, presented 100, 200, 500, or 800 ms after picture onset. The main results showed that reaction times were delayed for targets presented 100 and 200 ms after the disgust-evoking pictures, suggesting that only these pictures temporarily grabbed hold of participants' attention. These delays were similar for ignore and attend instructions, and they were not affected by the participants' anxiety levels or disgust sensitivity. The disgust-specific influence on early attention processes thus appeared very robust, occurring in the majority of participants and without contribution of voluntary and strategic attention processes. In contrast, a smaller and less reliable effect of all emotional (arousing) pictures was present in the form of delayed responding in the 100 ms cue-target interval. This effect was more transitory and apparent only in participants with relatively high state-anxiety scores. Practical and theoretical consequences of these findings are discussed.

It is well known that emotionally salient stimuli are preferentially processed and attract more attention resources than neutral stimuli, particularly when these stimuli signal threat and immediate danger (Yiend, 2010). This bias towards negative or threatening information is believed to originate from evolutionary pressures and to occur in a highly reflexive manner (Öhman, Flykt, & Esteves, 2001; Vuilleumier, 2005). Indeed, from a survival viewpoint it is important to quickly spot an angry face in the crowd (Fox, Lester, Russo, Bowles, & Dutton, 2000; Hansen & Hansen, 1988) or to swiftly direct attention to the location of a dangerous animal (Öhman et al., 2001). Yet, although this sounds relatively straightforward, the effects of negative emotion and threat on attention are much more dynamic and complex than a first reading of these observations suggests. To understand some of this complexity, the current study examined the influence of four potentially critical factors and their interactions, relating to: (1) emotion-specific effects, (2) the time course of attention effects, (3) the contribution of voluntary or task-related attention, and (4) state-dependent effects. With reference to these factors, we were particularly interested in specifying the conditions that may restrict our previous conclusion that "disgust- but not fear-evoking images hold our attention" (Van Hooff, Devue, Vieweg, & Theeuwes, 2013).

⁎ Corresponding author at: Department of Cognitive Psychology, VU University Amsterdam, Van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands. Tel.: +31 20 5985577; fax: +31 20 5988971. E-mail address: [email protected] (J.C. van Hooff).

http://dx.doi.org/10.1016/j.actpsy.2014.08.009

First, most experimental studies in this field so far have used stimuli (words, pictures) that varied in valence or arousal level and have primarily focussed on the emotion fear. Recent reports, however, suggest that stimuli evoking disgust produce different attention effects than those eliciting fear, even when these stimuli are equally arousing and similarly negative (Carretié, Ruiz-Padial, López-Martín, & Albert, 2011; Chapman, Johannes, Poppenk, Moscovitch, & Anderson, 2013; Van Hooff et al., 2013). More specifically, results from these studies suggest that attention bias effects are exclusively present, or are much larger, for disgust- as compared to fear-evoking pictures. Likewise, more distraction and greater attentional blink effects have been found for disgust- as compared to fear-related words (Charash & McKay, 2002; Cisler, Olatunji, Lohr, & Williams, 2009). Together, these results suggest that the specific kind of threat implied by a negative stimulus determines the magnitude of the attention effect observed, presumably because the emotions fear and disgust are associated with different action tendencies (Susskind et al., 2008) and/or different cost/benefit analyses (Carretié et al., 2011). Indeed, a differential attention effect for fear- and disgust-evoking images would make sense given that the sight of, for example, an aggressive animal or a pointed gun requires urgent action, with failure to act potentially costing one's life, whereas noticing a bleeding injury or a rotten piece of meat calls for a more detailed evaluation with less immediate costs attached to it. In other words, only in the latter, more disgust-related cases can one permit oneself to narrow attention (Gable & Harmon-Jones, 2010) and/or to direct (temporarily) more attention resources towards the "threatening" stimulus (Carretié et al.,
2011; Van Hooff et al., 2013). Regardless of interpretation, the single fact that different attention effects are observed for disgust- and fear-related stimuli signifies that it is important to design studies that allow for the investigation of emotion-specific effects, as clearly not all results can be explained by arousal or emotional salience alone. This may be particularly relevant for studies that incorporate images from the International Affective Picture System (IAPS), which is organized along valence and arousal dimensions (Lang, Bradley, & Cuthbert, 2008) and not by the type of emotion elicited. Moreover, it is important to recognize that many unpleasant, high-arousing IAPS pictures, which are often labeled as highly threatening (e.g., mutilated bodies or burn victims), are typically considered to be more disgusting than fearful (Libkuman, Otani, Kern, Viger, & Novak, 2007; Mikels et al., 2005). Consequently, this restricted category of "high-threat" images may affect attention not only because they are highly arousing but perhaps also because they elicit (strong) feelings of disgust. Second, perception and attention develop over time, and thus attention capture and engagement by fearful and disgusting images may depend on the timing and duration of successive emotional and neutral (target) stimuli. For example, in our previous study we found that task-irrelevant disgust-evoking pictures delayed subsequent, peripheral target identification exclusively when these targets were presented 200 ms after picture onset and not after 500, 800, or 1100 ms (Van Hooff et al., 2013). Likewise, Ciesielski, Armstrong, Zald, and Olatunji (2010) reported that the enhanced attentional blink effect for emotional stimuli rapidly declined from short (200 ms) to longer time lags (400 ms and 600 ms) and even reversed for the longest time lag of 800 ms (i.e., enhanced instead of diminished target processing following negative images). Bocanegra and Zeelenberg (2009) also demonstrated that negative word cues impaired subsequent target identification at short (50 and 500 ms) interstimulus intervals (ISIs), but improved target identification at a longer ISI of 1000 ms. Together, these results suggest that primary task performance deteriorates only, or foremost, when emotional distracters and task-relevant targets are presented in close temporal proximity. One likely explanation for this would be that with short ISIs the competition for processing resources is higher. What "short" means, however, may again depend on the specific type of emotion elicited. For example, our previous findings suggest that disgust-evoking pictures compete maximally with targets for attention resources at around 200 ms and not thereafter (Van Hooff et al., 2013). For fear-evoking images, however, this may occur at an earlier point in time because, as argued before, only quick registration of their rough contents is necessary to trigger the appropriate (fight–flight) reaction. This suggestion is supported by results from Koster, Crombez, Verschuere, Vanvolsem, and De Houwer (2007), who found attention capture effects for highly threatening pictures in an exogenous cueing task when these pictures were presented for 100 ms, but not when they were presented for 28, 200, or 500 ms. Moreover, Cisler et al. (2009), using an RSVP task, reported that probe detection rate dropped when a probe was directly preceded (120 ms) by a fear-related word but not by a disgust-related word.
In contrast, when there were two or three intervening items between the probes and the emotion target words (>240 ms), probe detection rate was more affected by the disgust-related words than by the fear-related words (i.e., a reversed pattern), albeit only when targets were made relevant to the task (see below). Thus, taking the first two factors together, emotion-specific attention effects may exist both in terms of magnitude (i.e., disgust larger than fear) and in relation to temporal course (i.e., fear earlier but more transitory than disgust). To allow for the presence of a very quick and brief impact of fearful stimuli on attention, it is thus necessary to also include experimental conditions with very brief stimulus durations or very short ISIs (≤100 ms), something we did not do in our previous study (Van Hooff et al., 2013). Third, priority access and attention (dis)engagement are guided not only by the nature and emotional saliency of the eliciting stimuli but also by current task goals and situational (attention) demands. For example, using a spatial cueing task, Okon-Singer, Tzelgov and Henik
(2007) demonstrated that emotion effects were solely present when attention was oriented towards the location at which the emotional items were presented. Moreover, even in tasks in which neutral and emotion pictures were presented at fixation, thus already in the focus of attention, it was found that negative images interfered with task performance more when the primary task included just a few instead of many distracting items (Erthal et al., 2005; Okon-Singer et al., 2007). This was explained by the notion that (emotional) distraction may occur only when the primary task does not consume all attention resources (i.e., when perceptual load is low). The modulating role of task-related and voluntary attention is furthermore supported by results from event-related potential (ERP) research, showing much larger brain activation differences between negative and neutral images when participants' attention is directed towards the contents of these images than when they are just viewing them (Schupp et al., 2007) or when their attention is directed towards a concurrent perceptual decision task (Wiens, Sand, Norberg, & Andersson, 2011). It is as yet unclear, however, whether the effects of such an attention manipulation would depend on the type of negative emotion elicited, either directly or as a result of the time course of enhanced processing. More specifically, it is feasible that a potentially modulating effect of directed attention would be particularly present at the later, more strategic processing stages (Schupp et al., 2007), and according to our reasoning above, such an effect would thus overlap or interact more with the expected attention adhesion effects for disgust-evoking images than with those for fear-evoking images. Indeed, results from Cisler et al.'s (2009) RSVP study, as mentioned earlier, provided some evidence for this suggestion. More specifically, their results showed that the relatively late effects for disgust-related words (i.e., probe position after two or three intervening items) occurred only when these words were made task-relevant. In contrast, the early effects for the fear-related words (no intervening items) occurred regardless of top-down attention. The attention effects of fear thus seemed to occur more automatically than those of disgust. This claim, however, needs further investigation as we clearly found a detrimental effect of disgusting images, with a quick onset and while their contents were ignored (Van Hooff et al., 2013; see also Carretié et al., 2011; Krusemark & Li, 2011). Finally, attentional biases for threat are more pronounced in anxious individuals (Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg, & IJzendoorn, 2007) and the internal state of the participant seems to play a determining role in early visual selection (Rossi & Pourtois, 2012). Moreover, several studies cited in Bar-Haim et al. (2007) found evidence for increased orienting towards, and/or impaired disengagement from, threat-related stimuli exclusively in highly anxious participants. In that review, however, no distinction is made between the effects for fearful and disgusting stimuli. At first glance, it seems that individual differences in disgust sensitivity are less crucial for finding robust attention effects for disgust-evoking stimuli (Van Hooff et al., 2013; Vogt, Lozo, Koster, & De Houwer, 2011), although Cisler et al. (2009) have claimed the opposite.
In the latter study, an attentional bias for task-irrelevant, fear-related words was found among all participants, whereas a similar effect for disgust-related words was observed in disgust-prone individuals only. Perhaps this discrepancy is due to the fact that Cisler and colleagues used words instead of pictures. In general, it is easier to elicit emotions with pictures than with words, and arguably this is more easily done for disgust-related pictures than for fear-related ones. All in all, this shows that attention (dis)engagement effects for disgust- and fear-related images may differ with respect to magnitude, time course, voluntary attention contributions, and dependency on internal state factors.

1. The present study

Taking these four factors together, the current experiment was developed to achieve the following aims. First, to investigate the existence and characteristics of emotion-specific attention effects, different types of photographic images from the IAPS database were included in a
covert orienting paradigm that were either disgust-evoking, fear-evoking, or happiness-evoking, and responses to them were compared with those to pictures that did not elicit a particular emotion (neutral pictures). As in our original study (Van Hooff et al., 2013), these pictures were presented at fixation, thus already in the focus of attention, but participants' main task was to identify a subsequently presented, peripheral target. In this paradigm, a relatively longer target identification time would mean that participants find it more difficult to disengage their attention from the picture and to move it to the target (Fox, Russo, Bowles, & Dutton, 2001). In our original study (Van Hooff et al., 2013) happiness-evoking pictures were not included, but by including them here we ensured that the probabilities of all four picture categories were equal and that emotion-specific explanations could be contrasted more completely with valence and arousal accounts. Second, to examine the temporal course of attention (dis)engagement, the peripheral targets were presented 100, 200, 500, and 800 ms after picture onset. The 100 ms picture–target interval was included to examine more precisely than in our previous study (Van Hooff et al., 2013) when attention effects for the two different types of threatening pictures start to differ and in which way. Theoretically, it is feasible that both disgust- and fear-evoking pictures are prioritized at an early point in time, but that attention during subsequent processing stages differs between these emotions, with increased orienting towards the disgusting pictures only. Translated to the current study, this would lead to the following predictions: (1) delayed responding following both the disgust- and fear-evoking pictures in the 100 ms cue-target interval condition, as their preferred processing would take away resources from the primary task, and (2) delayed responding following the disgust-evoking pictures only in the 200 ms interval condition, because disgust- but not fear-evoking pictures temporarily keep hold of participants' attention (Carretié et al., 2011; Van Hooff et al., 2013). Third, to investigate whether directed attention would augment the emotion-specific effects, we included two different instruction conditions: (1) a standard ignore condition in which participants were instructed to identify the target while ignoring the presented picture (as in Van Hooff et al., 2013), and (2) an attend condition in which participants were asked to identify the target and to look at the presented pictures in order to remember them for later recall. It was presumed that participants in the second condition would pay more attention to the actual contents of the presented pictures, which, in turn, could potentially augment the attention (dis)engagement effects for the negative picture cues. On the other hand, because such attention modulation is more likely to occur for post-detection processing stages, we tentatively hypothesized that the augmenting effect of the attend instruction would be present primarily for the relatively longer cue-target interval conditions (>200 ms). Finally, to examine whether the expected attention bias effects are more evident in participants who are either anxious, disgust-sensitive, or both, we aimed to compare results between participants who scored high and low on several questionnaires measuring state-anxiety, trait-anxiety, disgust sensitivity, and attention control.
Based on previous findings (Bar-Haim et al., 2007; Rossi & Pourtois, 2012) we expected that particularly the early attention effect for fearful pictures (in the 100 ms cue-target interval condition) would be dependent on a highly anxious state of the participants. In contrast, since the attention effect of disgust-evoking pictures appeared to be present in unselected (Van Hooff et al., 2013) and unprimed samples (Vogt et al., 2011), a potentially modulating effect of disgust sensitivity was expected to be smaller or even absent.

2. Method

2.1. Participants

Forty-nine female university students volunteered to participate in this study in exchange for either course credits or 8 Euro. Only females
were recruited to prevent an expected confound: women tend to rate negative emotional pictures as more unpleasant (lower valence) and more arousing than men (Lang et al., 2008), which would have caused problems for our stimulus selection procedure. Participants were randomly assigned to one of the two instruction conditions (ignore or attend instruction). Data from 3 participants were not used in the analysis because their task performance was below or only slightly above chance level (their proportions of accurate responses were 0.47, 0.51, and 0.63, respectively). The remaining 46 participants had a mean age of 21 years (SD = 2.6, range 18–29 years).

2.2. Stimulus selection

The experimental stimulus set consisted of 40 IAPS pictures (Lang et al., 2008). Ten fear-evoking, 10 disgust-evoking, and 10 neutral pictures were similar to those used in our previous study (Van Hooff et al., 2013). As described in that earlier study, these pictures were carefully selected on the basis of arousal-, valence-, disgust-, and fear-ratings collected from a separate sample of female participants. In addition, several post-experimental control analyses established that none of the observed effects could be attributed to a particular subset of these pictures, for example, because some may have evoked a stronger emotion or were more visually challenging (Van Hooff et al., 2013). These three sets of pictures were thus rather homogeneous in character. Ten happiness-evoking pictures were additionally chosen on the basis of the normative scores obtained by Libkuman et al. (2007), characterized by high ratings of happiness and low ratings of fear and disgust. As in our previous study, these pictures were matched as closely as possible with the other sets (fear, disgust, neutral) on the basis of their visual complexity and figure–background composition. Most selected pictures depicted only one object, one animal, or one person performing a simple act. Examples of fear-evoking pictures are an aggressive dog, a violent attack, or a man with a pointed gun. Examples of disgust-evoking pictures are a mouth with rotten teeth, a dirty toilet, or a vomiting person. The neutral pictures showed, for example, an everyday object, such as a blow-dryer or a boat, or someone performing a routine task (e.g., reading, sitting behind a computer). Finally, the happiness-evoking pictures depicted, for example, a laughing person, a little puppy, or an appetizing chocolate sorbet. Female valence- and arousal-ratings for these four sets of pictures (obtained from the original database of Lang et al., 2008) are shown in Fig. 1. As can be seen in this figure, valence ratings were lowest for both the disgust- and fear-evoking pictures (Mdisgust = 2.16, Mfear = 2.64), intermediate for the neutral pictures (Mneutral = 5.32), and highest for the happiness-evoking pictures (Mhappy = 8.13). In contrast, arousal ratings were lowest for the neutral pictures (Mneutral = 3.07) and comparable for all three sets of emotional pictures, though, on average, somewhat lower for the happiness-evoking ones (Mdisgust = 6.03, Mfear = 6.51, Mhappy = 5.28). It should be noted that the arousal ratings for the three categories of emotional pictures were not extremely high, mainly because selecting more arousing, fear-evoking pictures would automatically make them also more disgust-evoking, thus resulting in a less clear differentiation between the two negative picture categories.

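
The category-level pattern just described can be checked directly against a table of normative ratings. The sketch below is not the authors' code; it assumes a hypothetical CSV of female IAPS ratings with picture, category, valence, and arousal columns, and it simply summarizes valence and arousal per picture category.

```python
# Minimal sketch (not the authors' code): summarizing hypothetical normative
# ratings to check that the four picture categories differ on valence but are
# roughly matched on arousal. File name and column names are assumptions.
import pandas as pd

ratings = pd.read_csv("iaps_female_ratings.csv")   # columns: picture, category, valence, arousal (hypothetical)

summary = (
    ratings
    .groupby("category")[["valence", "arousal"]]
    .agg(["mean", "std"])
    .round(2)
)
print(summary)
```
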
2.3. Procedure

Before the start of the experiment, participants were asked to fill in Dutch versions of the State- and Trait-Anxiety Inventory (STAI; Van der Ploeg, Defares, & Spielberger, 1980), the Revised Disgust Scale (DS-R; Olatunji et al., 2007), and the Attention Control Scale (AC; Derryberry & Reed, 2002). The covert orienting task was subsequently explained, followed by two training blocks of 24 trials each. Pictures from the training blocks were different from those in the actual experiment.

Fig. 1. Arousal and valence ratings of the IAPS images that were used in the covert orienting task (allocated to four different emotion-specific categories). Points represent the mean ratings for the female participants as provided by Lang et al. (2008).

The main experiment consisted of 16 test blocks (40 trials per block, 640 trials in total), each of which could be started by pressing the space bar on the keyboard in front of the participants. An example trial is depicted in Fig. 2. Each trial started with the presentation of a
black central fixation cross (1000 ms), which was followed by an image cue (231 × 231 pixels, 7° visual angle viewed at 75 cm) presented in the middle of the screen. Shortly after the onset of the image, a target (letter Z or N, font Arial, size 7.5) appeared for 50 ms either above, below, left, or right of the image cue (approximately 4.5° visual angle from the center of the screen). Participants' task was to indicate as quickly as possible which target letter (Z or N) had appeared by pressing the corresponding key on the keyboard with their left or right index finger. The target presentation time was 50 ms, so that it was impossible to make a directed eye movement to the target letter. The image remained on the screen until a response was made, with a maximum of 1200 ms. Image offset was followed by a 500 ms blank screen, or by feedback if no response was made (the text 'no response detected' in red). Each test block contained the same 40 images (10 disgusting, 10 threatening, 10 happy, and 10 neutral) but with randomized presentation orders. The cue-target interval could be 100, 200, 500, or 800 ms; these intervals were randomized over each set of four test blocks (and this was repeated four times). After each test block, a white screen appeared with feedback about the reaction time and the error rate. Participants were encouraged to keep accuracy levels above 80% and to respond more slowly if this could not be achieved. Throughout the experiment, participants were asked to remain fixated on the center of the screen. About half of the participants (attend condition) received the additional instruction to pay close attention to the pictures, as they would be asked to recall them after the experiment. The other participants (ignore condition) received no further instructions regarding the pictures. To examine whether this manipulation had worked, all participants were asked to recall as many pictures as they could after they had finished the orienting task. They were instructed to describe the pictures they could remember (in any order) by writing down three keywords.
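
As an illustration of the block and trial structure described above, the sketch below generates 16 blocks of 40 trials, assuming (as the wording suggests) that the cue-target interval was fixed within a block and that the four intervals occurred in random order within each set of four blocks. This is not the authors' experiment script; picture identifiers, the random seed, and the trial fields are placeholders.

```python
# Minimal sketch (not the authors' code) of the block/trial structure:
# 16 blocks of 40 trials, cue-target interval fixed within a block, and each
# interval occurring once per set of four blocks, in random order.
import random

IMAGES = {cat: [f"{cat}_{i:02d}" for i in range(1, 11)]      # 10 placeholder picture IDs per category
          for cat in ("disgust", "fear", "happy", "neutral")}
INTERVALS_MS = [100, 200, 500, 800]
TARGETS = ["Z", "N"]
LOCATIONS = ["above", "below", "left", "right"]

def build_blocks(n_sets=4, seed=0):
    rng = random.Random(seed)
    blocks = []
    for _ in range(n_sets):                                  # 4 sets x 4 blocks = 16 blocks
        set_intervals = INTERVALS_MS[:]
        rng.shuffle(set_intervals)                           # randomize interval order within each set
        for interval in set_intervals:
            trials = [
                {"image": img, "category": cat, "cue_target_interval_ms": interval,
                 "target": rng.choice(TARGETS), "target_location": rng.choice(LOCATIONS),
                 "fixation_ms": 1000, "target_duration_ms": 50, "max_response_ms": 1200}
                for cat, imgs in IMAGES.items() for img in imgs
            ]
            rng.shuffle(trials)                              # same 40 images, new random order each block
            blocks.append(trials)
    return blocks

blocks = build_blocks()
print(len(blocks), "blocks of", len(blocks[0]), "trials")    # 16 blocks of 40 trials
```
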

Fig. 2. Sequence of events in one example trial of the covert orienting paradigm. Note: participants were required to identify the target (Z or N), which was presented to the left, right, top, or bottom of the central image cue. The image cue could be neutral, disgust-evoking, fear-evoking, or happiness-evoking. The example images presented in this figure did not come from the IAPS database, though they are comparable to those used in the present study. Stimulus sizes are not proportional to those in the actual experiment.


3. Results

3.1. Questionnaire data

Mean questionnaire scores were STAI-state = 32.7 (SD 6.8), STAI-trait = 36.8 (SD 8.5), DS-R = 51.5 (SD 13.6), and AC = 47.9 (SD 8.0). State and trait (STAI) anxiety scores correlated significantly with each other (r = 0.50, p < 0.001) and both also correlated with scores from the AC scale (rSTAI-state-AC = 0.30, p < 0.05; rSTAI-trait-AC = 0.58, p < 0.001). Disgust sensitivity scores (DS-R) did not correlate with either the anxiety scores (STAI-trait and STAI-state) or with scores from the AC scale (all r's < 0.16 and all p's > 0.30). The four questionnaire scores were comparable between the two instruction conditions (ignore vs attend) (all p's > 0.22).

3.2. Manipulation check

Recall data were coded as matches or non-matches with the test pictures by two independent raters. Agreement between these raters, computed across all subjects and images (N = 794 descriptions), was 96%. An independent samples t-test confirmed that participants who received the attend instruction recalled more IAPS images than those who did not receive such instruction (Mattend = 19.50, SD 5.8; Mignore = 14.86, SD 5.8; t(44) = 2.71, p = 0.01). This result validates our instruction manipulation.

3.3. Accuracy and reaction time

Accuracy and reaction time (RT) were investigated with a mixed, repeated measures ANOVA with cue-target interval (100, 200, 500, and 800 ms) and image (disgust, fear, happiness, and neutral) as within-subjects factors and instruction (ignore, attend) as a between-subjects factor. If the sphericity assumption was violated, degrees of freedom were adjusted using the Greenhouse–Geisser epsilon (ε). Pairwise comparisons following a main effect of Interval or Image were all Bonferroni corrected.

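
For illustration, the sketch below shows how the within-subjects part of such an analysis could be run in Python; it is not the authors' analysis code. Trial-level column names are assumptions, responses are filtered to correct trials with RTs of at least 200 ms (the criteria reported below), and neither the between-subjects instruction factor nor the Greenhouse–Geisser correction is handled by statsmodels' AnovaRM, so those steps would require other software.

```python
# Minimal sketch (not the authors' analysis code): repeated-measures ANOVA on
# mean correct RTs with cue-target interval and image category as within-subject
# factors. Column names are assumptions; the between-subjects instruction factor
# and the Greenhouse-Geisser correction are not handled here.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("trial_data.csv")   # columns: participant, interval, image, rt, correct (hypothetical)

# Keep correct responses only and drop premature responses (< 200 ms)
valid = trials[(trials["correct"] == 1) & (trials["rt"] >= 200)]

# One mean RT per participant x interval x image cell (AnovaRM requires a balanced design)
cell_means = valid.groupby(["participant", "interval", "image"], as_index=False)["rt"].mean()

res = AnovaRM(data=cell_means, depvar="rt", subject="participant",
              within=["interval", "image"]).fit()
print(res.anova_table)
```
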
Overall accuracy ranged from 66 to 97% correct target identifications, with an average of 87% (SD = 8.0). A significant main effect of interval (F(3, 132) = 5.88, p < 0.01, ε = 0.86; ηp² = .12) revealed that accuracy was lower for the 100 ms cue-target interval (M100 = 86%) as compared to the 500 ms (M500 = 88%) and 800 ms (M800 = 88%) intervals (both p's < 0.05). In addition, accuracy was also lower for the 200 ms interval (M200 = 87%) as compared to the 500 ms interval (p < 0.05). More importantly, a main effect of image (F(3, 132) = 6.05, p = .001, ηp² = .12) revealed that, across intervals, target identifications were less accurate following the disgust-evoking images (Mdisgust = 86%) than following the neutral images (Mneutral = 89%) (p < 0.001). Accuracy was also somewhat reduced following fear-evoking (Mfear = 87%) and happiness-evoking pictures (Mhappy = 87%), but this slight decline in performance did not reach significance. There were no significant main or interaction effects involving the factor instruction.

RTs were examined for correct responses only. Responses quicker than 200 ms were considered premature and were not included. Significant main effects were found for both interval (F(3, 132) = 73.81, p < .001, ε = 0.72; ηp² = .63) and image (F(3, 132) = 15.75, p < .001, ηp² = .26), as well as a significant interaction between these factors (F(9, 396) = 5.21, p < .001, ηp² = .11). Fig. 3 shows mean RTs for the different images for each interval. In general, the longer the cue-target interval, the quicker the responses (M100 = 521 ms; M200 = 509 ms; M500 = 488 ms; M800 = 491 ms — all pairwise comparisons reached significance (p < 0.001) apart from the one comparing the 500 and 800 ms intervals). RTs for targets following the disgust-evoking images (Mdisgust = 509 ms) were generally longer than those following the fear-evoking images (Mfear = 502 ms, p < 0.01), the happiness-evoking images (Mhappy = 499 ms, p < 0.001), and the neutral images (Mneutral = 499 ms, p < 0.001). No other pairwise comparisons were significant. There was no main effect of instruction (Mattend = 496 ms, Mignore = 509 ms; F(1, 44) = 0.89, p = 0.35), nor any significant interactions with this factor. The response delay caused by the disgust-evoking pictures thus seemed to have occurred independently of the cue-processing instruction. Follow-up analyses for each cue-target interval revealed that the effect of image was significant only for the 100 ms (F(3, 132) = 12.28, p < .001, ηp² = .22) and 200 ms (F(3, 132) = 13.42, p < .001, ηp² = .23) cue-target interval conditions. For both intervals, RTs were longer for targets following the disgust-evoking pictures compared to all other types of pictures (see Fig. 3). This pattern of results was evident in both the ignore- and attend-instruction conditions, as was also confirmed by the absence of a three-way interaction (F(9, 396) = 1.38, p = 0.19).

Fig. 3. Mean response times and standard errors (SE) for the four different image types as a function of cue-target interval. Disgust-evoking images significantly delayed RT in the 100 and 200 ms cue-target intervals. Means and SEs are calculated across the two instruction conditions (ignore, attend) as there were no significant interactions with the factor Instruction.

3.4. State and trait factors

To examine whether and how different levels of state-anxiety, trait-anxiety, disgust-sensitivity, and attention-control would affect RT results, the group was split in two (four times, once per questionnaire) based on the questionnaire scores (median split), and this grouping was added as a between-subjects factor (high versus low scoring group). Participants whose questionnaire scores corresponded to the median were included in the 'low-score' group (three for STAI-state and three for the AC scale). In these analyses, the instruction factor was left out because otherwise the number of participants in each cell would become too small for a meaningful analysis. In addition, the type of instruction was found not to influence the results (described above), and both types of instruction (ignore, attend) were found to be evenly distributed over the low and high scoring groups (all χ² < 1.39, all p's > 0.24). RT results were thus investigated with a mixed, repeated measures ANOVA with cue-target interval (100, 200, 500, and 800 ms) and image (disgust, fear, happiness, and neutral) as within-subjects factors and group (high, low) as a between-subjects factor. The Image × Group (high, low) interaction approached significance for state-anxiety (F(3, 132) = 2.37, p = 0.073). For the other individual difference measures this interaction was not significant (trait anxiety: F(3, 132) = 1.85, p = 0.14; attention control: F(3, 132) = 1.14, p = 0.34; disgust sensitivity: F(3, 132) = 0.90, p = 0.44). ANOVAs following up the first interaction showed that for both state-anxiety groups a significant effect of image was present (high state-anxiety: F(3, 60) = 10.13, p < 0.001; low state-anxiety: F(3, 72) = 8.80, p < 0.001), as well as a significant Image × Interval interaction (high state-anxiety: F(9, 180) = 3.09, p < 0.01; low state-anxiety: F(9, 216) = 3.23, p < 0.01). Similar to the overall analyses, RTs were slowest for targets following the disgust-evoking pictures in both groups, particularly in the 100 and 200 ms cue-target interval conditions. Indeed, the difference between the groups seems to concern mainly their different responses towards the fear-evoking images. That is, in the high state-anxious group, responses following the fearful pictures (across intervals) were delayed relative to those following the neutral pictures (Mfear = 492 ms; Mneutral = 485 ms; p < 0.01), whereas this was not the case in the low state-anxious group (Mfear = 510 ms; Mneutral = 510 ms; p = 1.0). As shown in Fig. 4, this differential response was mainly apparent in the 100 ms cue-target interval condition. Analyzing RTs for the 100 ms interval separately revealed that, in the high state-anxious group, identifications of targets that followed the fear-evoking images (M100_fear = 513 ms) were somewhat slower than those for the neutral images (M100_neutral = 497 ms; p < 0.001), and a similar effect (albeit even smaller) was observed for the happiness-evoking pictures (M100_happy = 510 ms; p < 0.05). In the low state-anxious group, there were no comparable response delays for the fear- and happiness-evoking pictures in any of the intervals (see Fig. 4). Given the near-significance of the overall Image × Group interaction, these results should however be interpreted with care and should not distract from the observation that in both the low- and high-state-anxious groups the response delays were largest for the disgust-evoking images. Compared to the other participants, those in the high state-anxiety group (N = 21, STAI-state > 33) were characterized not only by higher STAI-state scores (Mhigh = 38.9; Mlow = 27.6; t(44) = 10.11, p < 0.001), but also by higher STAI-trait scores (Mhigh = 40.3; Mlow = 33.8; t(44) = 2.70, p < 0.01) and slightly higher AC scores (Mhigh = 50.1; Mlow = 46.0; t(44) = 1.74; p = 0.09). DS-R scores did not differ between the high and low state-anxiety groups (Mlow = 50.3; Mhigh = 52.8; t(44) = 0.60, p = 0.54).
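
The grouping logic used here is a simple median split per questionnaire, with scores at the median assigned to the low-score group. The sketch below is not the authors' code; the questionnaire column names are assumptions.

```python
# Minimal sketch (not the authors' code) of the median-split grouping: scores
# above the questionnaire median form the 'high' group; scores at or below the
# median (including ties) form the 'low' group, as described in the text.
import pandas as pd

scores = pd.read_csv("questionnaires.csv")   # columns: participant, STAI_state, STAI_trait, DSR, AC (hypothetical)

for scale in ["STAI_state", "STAI_trait", "DSR", "AC"]:
    median = scores[scale].median()
    scores[f"{scale}_group"] = (scores[scale] > median).map({True: "high", False: "low"})
    print(scale, scores[f"{scale}_group"].value_counts().to_dict())
```
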

Fig. 4. Mean response times and standard errors (SE) for the four different image types as a function of cue-target interval, for the high state-anxiety group (N = 21, top) and the low state-anxiety group (N = 25, bottom). The main difference between these groups is encircled and concerns the responses towards the fear- and happiness-evoking images in the 100 ms cue-target interval condition.

3.5. Habituation

To examine the possibility that the RT results were caused by different habituation rates for the different sets of emotional pictures, the data set was divided into four equal-sized parts, each containing the data from four consecutive test blocks (blocks 1–4, 5–8, 9–12, and 13–16, respectively). Test part (first, second, third, fourth) was then added as an ANOVA factor in addition to image (fear, disgust, happiness, and neutral) and cue-target interval (100, 200, 500, and 800 ms). A significant main effect of test part (F(3, 135) = 45.97, p < 0.001, ηp² = 0.51) revealed that responses gradually became quicker towards the end of the experiment (Mfirst = 530 ms, Msecond = 504 ms, Mthird = 491 ms, Mfourth = 483 ms). Moreover, a significant Test part × Image interaction (F(9, 405) = 3.21, p < 0.001) showed that the effect of image type decreased towards the end of the experiment. This was confirmed by the fact that significant image effects were present in the first (F(3, 135) = 14.40, p < 0.001, ηp² = 0.24) and second test parts (F(3, 135) = 3.27, p < 0.05, ηp² = 0.07) but not in the third and fourth. As in the main analyses, these image effects were largest for the 100 and 200 ms cue-target intervals. Most importantly, the significant effects of image in the first and second part of the experiment were exclusively due to slower RTs for targets paired with the disgust-evoking pictures. More specifically, in the first part of the experiment (test blocks 1–4), RT differences relative to the neutral pictures were on average 42 ms (100 ms interval) and 39 ms (200 ms interval) for the disgust-evoking pictures (both p's < 0.001), whereas they were respectively 8 ms and 11 ms for the fear-evoking pictures (p's > 0.05) and 8 ms and 4 ms for the happiness-evoking pictures (p's > 0.05). Thus, even at the beginning of the experiment, when there was yet no or little habituation towards the emotional pictures, RTs were comparable for the neutral, fear-evoking, and happiness-evoking pictures. This means that, similar to our previous study (Van Hooff et al., 2013), the lack of an overall effect for the fear- and happiness-evoking pictures cannot be explained by a faster habituation rate towards these stimuli.
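
To make the block-quartering concrete, the sketch below assigns each of the 16 test blocks to a quarter of the experiment and computes the mean RT cost of each emotional category relative to neutral, per quarter and cue-target interval. It is not the authors' code; column names and category labels are assumptions.

```python
# Minimal sketch (not the authors' code): blocks 1-4 -> part 1, 5-8 -> part 2,
# etc., followed by the mean RT difference of each emotional category relative
# to neutral, per test part and cue-target interval. Column names are assumed.
import pandas as pd

trials = pd.read_csv("trial_data.csv")   # columns: participant, block, interval, image, rt, correct (hypothetical)
valid = trials[(trials["correct"] == 1) & (trials["rt"] >= 200)].copy()
valid["test_part"] = (valid["block"] - 1) // 4 + 1

means = valid.groupby(["test_part", "interval", "image"])["rt"].mean().unstack("image")
rt_cost = means[["disgust", "fear", "happy"]].sub(means["neutral"], axis=0)
print(rt_cost.round(1))                  # RT difference (ms) relative to neutral
```
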

4. Discussion

Results from this study are in agreement with those from our previous study (Van Hooff et al., 2013) and additionally show that disgust-evoking pictures delayed and impaired subsequent target identification not only when targets appeared 200 ms after picture onset but also when they appeared 100 ms after picture onset, thus well before picture contents were consciously evaluated. Comparable delays or impairments were not observed for fear- and happiness-evoking pictures (in the whole sample), suggesting that this early, momentary effect is specific to disgust-evoking images and not simply attributable to low-valence or moderately high-arousal stimulus characteristics. These results therefore seem to support our hypothesis and previous conclusion that disgust- but not fear-evoking pictures temporarily grab hold of early attention resources at the cost of subsequent target identification processes (Van Hooff et al., 2013; see also Carretié et al., 2011; Chapman et al., 2013). In addition, the response delays following the disgust-evoking pictures occurred irrespective of instruction condition (i.e., whether participants were paying attention to the images or not), and it is therefore likely that they resulted from a largely instinctive mechanism that did not rely on the contribution of voluntary attention processes. Finally, the disgust-specific effects were observable in the whole sample, independent of the level of disgust-sensitivity. We therefore believe these effects to be more prevalent than those triggered by the other emotional pictures (fear- and happiness-evoking), which appeared more fleeting and were noticeable only in participants with relatively high state-anxiety. Before discussing the current findings in a wider and more theoretical context, we first would like to highlight some results that validate our previous findings (Van Hooff et al., 2013). In our earlier study, we
already excluded the possibility that the effects that were observed for the disgust-evoking pictures could be due to the fact that, compared to the fear-evoking pictures, a larger proportion of them could be considered biologically relevant. Indeed, those results suggested that none of the effects observed could be attributed to a particular image or subset of the stimulus materials. In addition, we also ruled out the alternative explanation that responses to fear-evoking pictures habituate more quickly than those to disgust-evoking pictures. This was confirmed by results from the current study, showing that even in the first quarter of the experiment, during which the main effect of image was largest, the fear-evoking pictures did not delay subsequent target processing, whereas the disgust-evoking pictures clearly did. In the current study, we additionally confirmed that the observed RT differences between disgust and neutral trials could not be attributed to a lower probability of the former. That is, in the current experiment, trials from all four emotion categories (fear, disgust, happy, neutral) occurred with the same probability, and thus none of the different picture sets stood out as different because of low occurrence frequency. At the same time, by including the extra set of happiness-evoking pictures, we double-checked that the RT differences were not simply due to high arousal levels, independent of valence. Furthermore, drawing participants' attention to the contents of the picture cues did not affect the RTs in any way, and it also did not introduce a more "guided" attention effect for the fear-evoking images. Together, these results suggest that not only was the presence of an effect for the disgust-evoking images very robust, but so was the absence of a comparable effect for the fear-evoking images, at least in a conventional all-female sample. With respect to the latter, a small response delay was observed for all emotional pictures in the 100 ms cue-target interval condition, but only in participants in a relatively highly anxious state; this is discussed further below. In addition, it should be recognized that in the current study the fear-evoking pictures (but also the disgust- and happiness-evoking ones) were characterized by moderate arousal levels (corresponding to mean ratings around 6 on a 9-point scale) and as such can be considered only mildly frightening. The point therefore is not to deny that fearful stimuli can attract or hold attention, but that, as far as IAPS pictures are concerned, the absence or presence of such an effect may depend on (a) the level of arousal that they evoke (e.g., Koster et al., 2007) or (b) whether they evoke feelings of disgust or not (instead of, or besides, "pure" fear). At present, these two explanations cannot be separated. An important practical implication of our results is therefore that in future studies one should take into account that even when pictures are similarly arousing and similarly negative, they may still influence perception and attention differently when they are not equally disgust-evoking. Theoretically, these results are also important, as they suggest that there must be "something" distinctive about the disgust-evoking pictures, besides the fact that they are unpleasant and arousing, that triggered a particular response in the participants, which temporarily led to diminished target processing.
This "something" is presumably linked to the specific function(s) of disgust or, alternatively, to their possibly higher impact (Murphy, Hill, Ramponi, Calder, & Barnard, 2010). The concept of impact is as yet poorly understood, but it allegedly refers to "the immediate effects of images on viewers in terms of their generic cognitive-affective qualities" (Murphy et al., 2010, p. 612). Murphy et al. found that high-impact pictures attracted more attention resources than low-impact pictures even when these pictures were similarly distinctive and complex, and, importantly, also equally (un)pleasant and arousing. With this in mind, it could be argued that our disgust-evoking pictures may have produced a higher impact than our fear-evoking pictures, related to the common impression that it is easier to elicit disgust from a picture than it is to elicit fear from a picture (see also Carretié et al., 2011). On the other hand, it should be noted that the high-impact pictures in the Murphy et al. study included both disgust-evoking (e.g., mutilation and disease) and fear-evoking images (e.g., predatory animals and violence). Thus, given the ambiguity surrounding this concept, it seems sensible to seek another explanation
for our results; one which relates more to the specific function(s) of disgust. In theory, however, this leads to two rather opposing predictions with regard to its potential effect on early selective attention processes. First, disgust-evoking images have a more ambiguous relation to threat than fear-evoking images (Rozin, Haidt, & McCauley, 2000) and thus careful processing is necessary to determine their precise risk and to select the appropriate reaction. Together with the notion that there are no big costs associated with the detailed processing of such information, this would lead to the hypothesis that disgusting images would receive more (early) attention resources than fearful images. This has been our viewpoint throughout this paper, motivated mainly by prevailing explanations in behavioral emotion–attention research, clustering around the suggestion that "emotional, and particularly negative information, elicits selective attentional priority over nonemotional [information] and, in so doing, commands additional attention resources" (Yiend, 2010, p. 34). It furthermore fits with the notion that disgust is associated with a general motivation to narrow attention (Gable & Harmon-Jones, 2010) and it runs parallel with the recent finding that disgusting images are also better remembered than frightening ones (Chapman et al., 2013; Croucher, Calder, Ramponi, Barnard, & Murphy, 2011). Moreover, Wiens, Peira, Golkar, and Öhman (2008) reported that while fear produced a response bias in a masked-threat study (i.e., more false alarms), disgust improved the actual recognition of these images. Hence the fitting subtitle of their study, "fear betrays, but disgust you can trust" (p. 810). Finally, direct support for the increased attention interpretation can be found in the ERP literature, showing that compared to fear-evoking images, disgust-evoking ones are characterized by a larger frontal P2 (Carretié et al., 2011) and a larger early posterior negativity (EPN) (Wheaton et al., 2013). Both of these ERP components occur around 200 ms post-stimulus and reflect early attention orienting. Nevertheless, it cannot be denied that an important function of disgust is to reduce sensory interactions with the environment (Rozin & Fallon, 1987; Susskind et al., 2008) and, accordingly, this would lead to the opposite prediction, namely that disgusting images would receive not more but less attention. There is also some ERP evidence in support of this suggestion, as Krusemark and Li (2011) found reduced P1 amplitudes in response to a set of disgusting images, while, very similar to our study, they also observed diminished processing of – and delayed responding towards – a search array that was superimposed on these images (150 ms after image onset). Further evidence for sensory withdrawal that can be linked to the processing of a disgust cue is currently lacking, but this would be an interesting avenue to explore as an alternative way of explaining our results. On the other hand, it should be recognized that our target stimuli were a lot smaller and less predictable (both in terms of location and onset) than those in the study by Krusemark and Li (2011). It might therefore be argued that our participants could not allow themselves to reduce their sensory intake by too much; otherwise the task would become impossible.
In addition, Sherman, Haidt, and Clore (2012) provided evidence that disgust actually improved a certain type of sensory intake, namely the ability to discriminate between different shades of light gray, which they linked to an enhanced sensitivity for "impurity". Another argument against an explanation in terms of sensory withdrawal or avoidance of the disgust-eliciting stimulus would be that in our paradigm, where targets were presented next to the emotion pictures (and not on top of them as in Krusemark & Li, 2011), this would have resulted in quicker target identification times and not, as we observed, in slower RTs. Indeed, we originally hypothesized that this might happen at the longer cue-target intervals (>500 ms) due to an avoidance reaction; however, this was found not to be the case (Van Hooff et al., 2013). Although there is as yet no conclusive evidence for the exact mechanism that may underlie the disgust-specific effect, some important characteristics of it emerged from the current results. First, the effect becomes active very rapidly (starting before 100 ms post-stimulus) and, second, its influence is short-lived (ending before 500 ms
post-stimulus). The rapid onset suggests that some basic form of emotion categorization takes place at very early processing stages, well before full object recognition and conscious inference. This early emotion categorization is remarkable as it goes beyond distinguishing between arousing and non-arousing (neutral) stimuli, for which the amygdala is known to play a pivotal role (Vuilleumier, 2005). Indeed, this very quick onset of a disgust-specific effect confirms the importance and influence of basic emotion categories, each corresponding to unique, innate response patterns and brain activations that ensure adaptive responding (Panksepp, 1998; see also Hot & Sequeira, 2013). In this respect, it is interesting to note that there is some evidence from fMRI research showing that although there is a substantial overlap between disgust and fear processing in cortical areas, there is a dissociation in subcortical regions, especially the amygdala. Specifically, disgust-evoking IAPS pictures (but not fear-evoking ones) have been found to increase activity in the amygdala compared to neutral pictures in a passive viewing task (Stark et al., 2003). Despite the quick onset, our participants apparently quickly regained their ability to inhibit their initial response towards the (moderately) disgusting images in favor of task demands. That is, as in our previous study (Van Hooff et al., 2013), no response delays were observed for the disgust-evoking pictures (nor for any of the other emotional ones) when they were presented more than 200 ms before the target. Koster et al. (2007) similarly reported that high-threat images (which interestingly also appeared to be the most disgusting ones) attracted more attention when they were presented for 100 ms, but not when they were presented for 200 or 500 ms. Thus, it seems that in the current study, but also in our previous one (Van Hooff et al., 2013), participants can suppress their tendency to shift attention resources towards disgust-evoking images somewhere between 200 and 500 ms. This is an important addition to the existing literature, as in the ERP studies mentioned before (Carretié et al., 2011; Krusemark & Li, 2011) fixed cue-target intervals were used (0 and 150 ms, respectively) and later ERP components were not analyzed, thereby providing no information about the duration of the attention effects. The third characteristic of the disgust-specific effect is that it was not dependent on – or modulated by – voluntary or task-related attention, as no performance differences (accuracy and RT) were found between the two instruction conditions (ignore versus attend), nor were there any differences between these conditions with regard to the magnitude of the interval and image effects and their interactions. The fact that no overall performance differences were found between the two instruction conditions suggests that, somewhat surprisingly, the extra task of attending to the pictures did not make the concurrent target identification task more demanding. Tentatively, this could have been due to the large number of picture repetitions, letting participants believe that they would remember the picture contents anyway. Nevertheless, because the attend instruction resulted in a relatively higher recall performance, it is evident that the participants in this group complied with this instruction.
More importantly, in both the ignore- and attend-instruction conditions a similar distracting influence of the disgust-evoking images was observed, suggesting that the disgust-specific effect was mainly determined by an involuntary, stimulus-driven mechanism. This is furthermore supported by the already discussed observation that the effect of the disgust-evoking images was significant only when targets were presented within 200 ms after their onset, thus arguably before explicit attention could augment any implicit emotion effects (cf. Schupp et al., 2007; Wiens et al., 2011). Furthermore, general accuracy was lowest and overall RT was slowest in the short (100 and 200 ms) cue-target interval conditions, suggesting that these conditions were the most demanding. As a consequence, in these short interval conditions the fewest resources would be available for picture processing; yet especially in these conditions the disgust-specific effects were observed. Together with the null findings for the factor instruction, this combination of qualities makes it likely that the effects of the disgust-evoking pictures, presented already in
the focus of attention, did not rely on voluntary or intentional attention processes. The final characteristic of the mechanism underlying the disgust-specific effect is that it did not depend on how disgust-sensitive the participants were, nor on how anxious they were at that moment or in general (i.e., state- and trait-anxiety). That is, the effect was observed in the overall sample and was comparable for high- and low-disgust-sensitive groups. This is in agreement with a dot-probe study by Vogt et al. (2011) showing that participants' attention is oriented towards disgusting pictures regardless of whether or not they were first asked to touch fake disgusting objects (to induce a disgust state). Augmented attention to disgust-evoking pictures therefore seems to be the norm (see also Carretié et al., 2011; Chapman et al., 2013; Van Hooff et al., 2013). This is interesting given that the study of emotion-attention interactions is so dominated by the emotion fear, while fear-related attention biases are often observed only in highly anxious individuals (Bar-Haim et al., 2007) or only when high-arousing pictures are used (e.g., Koster, Crombez, Verschuere, & De Houwer, 2006). Indeed, with respect to the latter, it is noteworthy that in the current study all emotion pictures were characterized by relatively moderate arousal ratings. Although this did not prevent a disgust-specific effect from occurring, it could well be the reason why no significant effects were found for the fear-evoking and perhaps also happiness-evoking pictures. Accordingly, the moderate arousal values may also explain why only participants in a relatively highly anxious state showed a brief and small attention engagement effect for the fear-evoking images in the 100 ms cue-target interval condition. That is, according to the cognitive-motivational model of anxiety (Mogg & Bradley, 1998), individuals with high anxiety levels will more readily appraise a (slightly) fearful stimulus as threatening and therefore increase their attention towards it (cf. Koster et al., 2006). Because, in the current study, all emotional pictures (fear, disgust, happy) delayed target identification in the shortest cue-target interval condition, this brief attention-grabbing effect was presumably driven by the stimulus' arousal level. Indeed, this idea would be supported by results from Brosch, Sander, Pourtois, and Scherer (2008) showing enhanced P1 responses for both negative (angry face) and positive (baby face) pictures in a dot-probe task. Based on these results, the authors argue against the existence of a separate fear module and instead propose that early attention allocation occurs as a result of one common mechanism that quickly and automatically appraises the "relevance" of the presented stimulus, regardless of its valence (let alone specific negative emotion). This conclusion, though, contrasts with other ERP findings demonstrating early P1 effects for negative stimuli only (see Olofsson, Nordin, Sequeira, & Polich, 2008). In addition, this reasoning also does not fit with our previous explanation regarding early emotion categorization. On the other hand, our results do not necessarily exclude the possibility that arousal and emotion-specific effects co-exist.
More specifically, because the RT differences between the fear- and disgust-evoking images remained fairly constant (around 20 ms) in the 100 and 200 ms interval conditions in both the high- and low-state-anxious groups, it seems that the arousal-driven and disgust-specific attention effects are additive. Consequently, this would suggest that there are at least two different ways in which disgust-evoking pictures may affect early attention processes, mediated by (1) a disgust-specific mechanism and (2) an arousal-driven mechanism. While the first mechanism is easily "activated" and present in most people, the second only becomes operational at higher arousal levels and/or in participants with relatively high levels of state-anxiety. Furthermore, while both mechanisms seem to have an early onset (<100 ms), the disgust-related one is active for a slightly longer period.
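To make the additivity reasoning concrete, the following sketch shows how condition means could be decomposed into the two hypothesized components. The reaction times below are illustrative placeholders rather than the data reported here, and the function name (decompose) is ours; the sketch only demonstrates the logic of separating an arousal-driven component (fear minus neutral) from a disgust-specific component (disgust minus fear).

# Illustrative sketch of the additivity argument (hypothetical numbers, not the study's data).
# For each state-anxiety group and cue-target interval, the emotional slowing of target RTs
# is split into an arousal-driven component (fear minus neutral) and a disgust-specific
# component (disgust minus fear). Additivity predicts that the disgust-specific component
# stays roughly constant (~20 ms) across groups, while the arousal component appears
# mainly in the high-state-anxiety group at the shortest interval.

# Hypothetical mean RTs (ms) per group, cue-target interval, and picture category.
mean_rt = {
    ("high_anxiety", 100): {"neutral": 560, "fear": 575, "disgust": 595},
    ("high_anxiety", 200): {"neutral": 545, "fear": 548, "disgust": 568},
    ("low_anxiety", 100):  {"neutral": 555, "fear": 557, "disgust": 577},
    ("low_anxiety", 200):  {"neutral": 540, "fear": 541, "disgust": 562},
}

def decompose(rts):
    """Split the emotional slowing into an arousal-driven and a disgust-specific part."""
    arousal = rts["fear"] - rts["neutral"]           # shared, arousal-driven slowing
    disgust_specific = rts["disgust"] - rts["fear"]  # slowing unique to disgust
    return arousal, disgust_specific

for (group, interval), rts in mean_rt.items():
    arousal, disgust_specific = decompose(rts)
    print(f"{group:13s} {interval:3d} ms: arousal = {arousal:3d} ms, "
          f"disgust-specific = {disgust_specific:3d} ms")

On this reading, a roughly constant disgust-specific term across anxiety groups and intervals is what licenses treating the two mechanisms as adding rather than interacting.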

In conclusion, this study has provided clear evidence for a disgust-specific modulation of early attention processes that could be dissociated from a more transitory and less reliable (moderate) arousal effect. This modulation has an early onset (<100 ms) and a duration of less than 400 ms. During this time period, disgust-evoking pictures were shown to impair and delay subsequent target processing, presumably because these pictures won the competition for early attention resources. Because this occurred so quickly following the presentation of the disgust-evoking pictures, and irrespective of whether participants were paying attention to the pictures or not, it seems that a largely automatic mechanism underlies this effect. In addition, this disgust-specific mechanism appears to be activated independently of the participants' particular state.

References

Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M. J., & van IJzendoorn, M. H. (2007). Threat-related attention bias in anxious and non-anxious individuals: A meta-analytic study. Psychological Bulletin, 133, 1–24.
Bocanegra, B. R., & Zeelenberg, R. (2009). Dissociating emotion-induced blindness and hypervision. Emotion, 9, 865–873.
Brosch, T., Sander, D., Pourtois, G., & Scherer, K. R. (2008). Beyond fear: Rapid spatial orienting towards positive emotional stimuli. Psychological Science, 19, 362–370.
Carretié, L., Ruiz-Padial, E., López-Martín, S., & Albert, J. (2011). Decomposing unpleasantness: Differential exogenous attention to disgusting and fearful stimuli. Biological Psychology, 86, 247–253.
Chapman, H. A., Johannes, K., Poppenk, J. L., Moscovitch, M., & Anderson, A. K. (2013). Evidence for differential salience of disgust and fear in episodic memory. Journal of Experimental Psychology: General, 142, 1100–1112.
Charash, M., & McKay, D. (2002). Attention bias for disgust. Journal of Anxiety Disorders, 16, 529–541.
Ciesielski, B. G., Armstrong, T., Zald, D. H., & Olatunji, B. O. (2010). Emotion modulation of visual attention: Categorical and temporal characteristics. PLoS ONE, 5(11), e13860.
Cisler, J. M., Olatunji, B. O., Lohr, J. M., & Williams, N. L. (2009). Attentional bias differences between fear and disgust: Implications for the role of disgust in disgust-related anxiety disorders. Cognition & Emotion, 23, 675–687.
Croucher, C. J., Calder, A. J., Ramponi, C., Barnard, P. J., & Murphy, F. C. (2011). Disgust enhances the recollection of negative emotional images. PLoS ONE, 6, e26571.
Derryberry, D., & Reed, M. A. (2002). Anxiety-related attentional biases and their regulation by attentional control. Journal of Abnormal Psychology, 111, 225–236.
Erthal, F. S., De Oliveira, L., Mocaiber, I., Machado-Pinheiro, W., Volchan, E., & Pessoa, L. (2005). Load-dependent modulation of affective picture processing. Cognitive, Affective, & Behavioral Neuroscience, 5, 388–395.
Fox, E., Lester, V., Russo, R., Bowles, R. J., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition and Emotion, 14, 61–92.
Fox, E., Russo, R., Bowles, R., & Dutton, K. (2001). Do threatening stimuli draw or hold visual attention in subclinical anxiety? Journal of Experimental Psychology: General, 130, 681–700.
Gable, P., & Harmon-Jones, E. (2010). The blues broaden, but the nasty narrows: Attentional consequences of negative affects low and high in motivational intensity. Psychological Science, 21, 211–215.
Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.
Hot, P., & Sequeira, H. (2013). Time course of brain activation elicited by basic emotions. NeuroReport, 24, 898–902.
Koster, E. H. W., Crombez, G., Verschuere, B., & De Houwer, J. (2006). Attention to threat in anxiety-prone individuals: Mechanisms underlying attentional bias. Cognitive Therapy and Research, 30, 635–643.
Koster, E. H. W., Crombez, G., Verschuere, B., Vanvolsem, P., & De Houwer, J. (2007). A time-course analysis of attentional cueing by threatening scenes. Experimental Psychology, 54, 161–171.
Krusemark, E. A., & Li, W. (2011). Do all threats work the same way? Divergent effects of fear and disgust on sensory perception and attention. The Journal of Neuroscience, 31, 3429–3434.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8. Gainesville, FL: University of Florida.
Libkuman, T. M., Otani, H., Kern, R., Viger, S. G., & Novak, N. (2007). Multidimensional normative ratings for the International Affective Picture System. Behavior Research Methods, 39, 326–334.
Mikels, J. A., Fredrickson, B. L., Larkin, G. R., Lindberg, C. M., Maglio, S. J., & Reuter-Lorenz, P. A. (2005). Emotional category data on images from the International Affective Picture System. Behavior Research Methods, 37, 626–630.
Mogg, K., & Bradley, B. P. (1998). A cognitive-motivational analysis of anxiety. Behaviour Research and Therapy, 36, 809–848.
Murphy, F., Hill, E., Ramponi, C., Calder, A., & Barnard, P. (2010). Paying attention to negative emotional images with impact. Emotion, 10, 605–614.
Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130, 466–478.
Okon-Singer, H., Tzelgov, J., & Henik, A. (2007). Distinguishing between automaticity and attention in the processing of emotionally significant stimuli. Emotion, 7, 147–157.
Olatunji, B. O., Williams, N. L., Tolin, D. F., Abramowitz, J. S., Sawchuk, C. N., Lohr, J. M., et al. (2007). The Disgust Scale: Item analysis, factor structure, and suggestions for refinement. Psychological Assessment, 19, 281–297.
Olofsson, J. K., Nordin, S., Sequeira, H., & Polich, J. (2008). Affective picture processing: An integrative review of ERP findings. Biological Psychology, 77, 247–265.
Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. New York: Oxford University Press.
Rossi, V., & Pourtois, G. (2012). State-dependent attention modulation of human primary visual cortex: A high density ERP study. NeuroImage, 60, 2365–2378.
Rozin, P., & Fallon, A. E. (1987). A perspective on disgust. Psychological Review, 94, 23–41.
Rozin, P., Haidt, J., & McCauley, C. (2000). Disgust. In M. Lewis, & J. Haviland-Jones (Eds.), Handbook of emotions (pp. 637–653). New York: Guilford Press.
Schupp, H. T., Stockburger, J., Codispoti, M., Junghöfer, M., Weike, A. I., & Hamm, A. O. (2007). Selective visual attention to emotion. The Journal of Neuroscience, 27, 1082–1089.
Sherman, G. D., Haidt, J., & Clore, G. L. (2012). The faintest speck of dirt: Disgust enhances the detection of impurity. Psychological Science, 23, 1506–1514.
Stark, R., Schienle, A., Walter, B., Kirsch, P., Sammer, G., Ott, U., et al. (2003). Hemodynamic responses to fear- and disgust-inducing pictures: An fMRI study. International Journal of Psychophysiology, 50, 225–234.
Susskind, J. M., Lee, D. H., Cusi, A., Feiman, R., Grabski, W., & Anderson, A. K. (2008). Expressing fear enhances sensory acquisition. Nature Neuroscience, 11, 843–850.
Van der Ploeg, H. M., Defares, P. B., & Spielberger, C. D. (1980). Handleiding bij de zelfbeoordelingsvragenlijst, ZBV: Een Nederlandstalige bewerking van de Spielberger State-Trait Anxiety Inventory, STAI-DY. Lisse: Swets en Zeitlinger.
Van Hooff, J. C., Devue, C., Vieweg, P. E., & Theeuwes, J. (2013). Disgust- and not fear-evoking images hold our attention. Acta Psychologica, 143, 1–6.
Vogt, J., Lozo, L., Koster, E. H. W., & De Houwer, J. (2011). On the role of goal relevance in emotional attention: Disgust evokes early attention to cleanliness. Cognition and Emotion, 25, 466–477.
Vuilleumier, P. (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9, 585–594.
Wheaton, M. G., Holman, A., Rabinak, C. A., MacNamara, A., Hajcak Proudfit, G., & Phan, K. L. (2013). Danger and disease: Electrocortical responses to threat- and disgust-eliciting images. International Journal of Psychophysiology, 90, 235–239.
Wiens, S., Peira, N., Golkar, A., & Öhman, A. (2008). Recognizing masked threat: Fear betrays, but disgust you can trust. Emotion, 8, 810–819.
Wiens, S., Sand, A., Norberg, J., & Andersson, P. (2011). Emotional event-related potentials are reduced if negative pictures presented at fixation are unattended. Neuroscience Letters, 495, 178–182.
Yiend, J. (2010). The effects of emotion on attention: A review of attentional processing of emotional information. Cognition & Emotion, 24, 3–47.