Neuroscience Letters 460 (2009) 1–5
The course of visual searching to a target in a fixed location: Electrophysiological evidence from an emotional flanker task

Guangheng Dong a,∗, Lizhu Yang b, Yue Shen b

a Department of Psychology, Zhejiang Normal University, No. 688 Yingbin Road, Jinhua City, Zhejiang Province 321000, China
b Department of Psychology, Liaoning Normal University, Dalian, China
Article history: Received 3 January 2009; received in revised form 8 May 2009; accepted 8 May 2009.
Keywords: Visual search; Emotional flanker task; Parallel and serial
Abstract: The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task, with emotional facial expressions used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150–250 ms following stimulus onset, whereas the target stimuli showed effects at about 300–400 ms. The visual search sequence in an emotional flanker task thus moved from a whole overview to the specific target, even though the target always appeared at a known location; the processing sequence in this task was "parallel". The results support the feature integration theory of visual search.
One of the challenges of everyday life is to select and maintain relevant information in the presence of a sea of irrelevant, distracting, and competing influences. In the case of the visual system, only a small fraction of the information received reaches a level of processing at which it can be voluntarily reported or directly used to influence behavior. Visual search is the process of selecting the particular stimulus to which one will react, based on the properties and appearance of that stimulus. The human visual system can effortlessly detect an interesting area or object within natural or cluttered scenes through the visual search mechanism, which allows input visual scenes of considerable complexity to be processed efficiently. Based on results obtained in a number of visual search experiments, two modes of visual search, serial and parallel, have been proposed [19]. The serial model postulates that several successive processing steps separate sensation from perception. It has two main characteristics: first, processing is accomplished through a cascade of effectors (neuron types or cortical areas); second, progression through the hierarchy corresponds to increasing 'complexity' of the effectors. Parallel models, in contrast, place the emphasis on independent and simultaneous processing by modules specialized for different aspects of the visual stimulus. In behavioral studies, if a search function increases only slightly with an increase in display size (number of items), it is assumed that all items in the display are searched simultaneously; that is, they are approached in "parallel".
In contrast, if the search function exhibits a linear increase, it is assumed that each individual item is searched successively; that is, the search operates "serially" [16]. Two models, the feature integration theory [16] and the guided search model [20], account for the psychophysical differences found when a subject is searching for one feature or for a feature conjunction. The feature integration theory formalizes the dichotomy between the two types of searches, positing the existence of two separate search mechanisms. A pre-attentive, parallel search is deployed for the detection of a single feature. In contrast, the detection of a feature conjunction requires focused attention, which can be directed serially and permits accurate localization and conjunction of features. The guided search model posits that the searches for a single feature and for a feature conjunction are not separate, different processes. Parallel processes provide both visual spatial information and information about the nature of the individual items. These parallel processes cannot determine whether an item is a target; rather, they guide attention to probable targets. A serial process is then used to search through the probable targets until the actual target is detected. The neural mechanisms of visual search have been studied by assessing the components of event-related potentials (ERPs). For example, Soria and Srebro [13] identified ERP scalp fields that distinguished parallel from serial searches between 150 and 250 ms, and proposed that these different scalp fields reflect timing and/or magnitude differences in the cortical regions activated during parallel and serial searches.
A study by Tong and Melara [15] suggested that target representations become less distinctive as distractors activate a wider range of the task-relevant continuum in working memory. Studies of schizophrenia have also found that the N2 component appears more vulnerable than the P300 in both auditory and visual attention [21]. ERPs have been widely used in visual search studies because they provide a precise metric of the time course of neural activity. A large body of work has explored the neural mechanisms by which visual search helps to select relevant from irrelevant information during perception. However, most studies in this area have focused on situations in which the target is presented at an unknown or unpredictable location and participants must find the target among the distractors by visual search. If, instead, participants respond to a target that is always presented at a known location while attempting to ignore irrelevant information presented at distractor locations, the nature of the visual search course is unknown; it could be parallel, serial, or some other mode. The aim of this study was to investigate this question. A flanker task was used. In a typical flanker interference task, subjects make a rapid response to a central target that is flanked by distractors. The distractors can be congruent, incongruent, or neutral with respect to the target or the response to it. This paradigm has been used not only to demonstrate the properties of visual attention [6], but also to investigate the electrophysiological features and neurological correlates of response channel activation [10,11]. The flanker effect refers to the finding that participants respond faster and make fewer errors when the target and distractors are congruent than when they are incongruent. The very existence of the flanker effect reveals limitations in the focused allocation of visual attention: although the target always appears at a known central location, participants are unable to ignore the peripheral distractors. In this study, we took advantage of the features of ERPs to study the course of visual search in flanker tasks. To trace the course of visual attention, we modified the flanker task by using emotional faces (Fig. 1) instead of more traditional materials, such as letters or colors. Our intent was to determine the course of visual search in an emotional flanker task by analyzing the emotional effect over different time windows during task processing.
Fig. 1. Example of the stimulus materials.
Since we can measure ERPs while participants perform a flanker task, we hoped that this would provide further information about the visual search function in these tasks. We used emotional facial pictures as stimulus materials because facial expression plays an important role in expressing our own emotions and in understanding the emotions expressed by others. During expression identification, we are easily affected by the facial expression and the emotions it evokes [5]. For this reason, emotional expressions are useful emotion-eliciting materials and have been widely used in many types of emotion studies. Considerable research indicates that the human brain is especially sensitive to emotionally negative events: negative events are usually processed preferentially and elicit larger amplitudes than positive ones [4]. Given these differences between positive and negative stimuli, we can infer the neural processing sequence in a flanker task by analyzing the effect of emotion on facial processing. Studies adopting emotionally implicit tasks have shown that the anterior P200 component, considered an index of attention-related processes, is larger for negative than for positive stimuli [3,4]. Another relevant potential is the N200, an ERP commonly assessed in stimulus-locked analyses of the flanker task [17], recorded at 200–350 ms following a challenging stimulus. Classically, this potential has been interpreted as indicating a decision to suppress a response, or the implementation of that decision, and it has been recorded from frontal scalp electrodes in human ERP studies [8,14]. Some studies have also reported an anterior directed-attention negativity over anterior scalp sites contralateral to the to-be-attended location, approximately 300–500 ms following an attention-directing cue [7]. A hypothesis therefore emerges: if the emotional effects of the flanker stimuli appear in early perception, this would indicate that participants attend to all of the stimuli as a group (because the distractors outnumber the target), and we could conclude that the visual search sequence is parallel. On the other hand, if the target stimulus itself exerts an emotional effect in early perception, this would imply that participants attend immediately to the target. In that case, no visual search would be conducted, because the target was always shown at the same known location and participants could respond to it directly, without first searching for it. As paid volunteers, 15 college students (9 women, 6 men) aged 22.3–26.8 years (mean age: 23.8 years) participated in the study. All subjects were healthy and right-handed, with normal or corrected-to-normal vision. Each subject signed an informed consent form for the experiment. The experimental procedure was in accordance with the ethical principles of the 1964 Declaration of Helsinki [22]. Each stimulus picture consisted of five emotional faces of the same gender (one target face and four identical faces as distractors). The stimulus pictures fell into four conditions: congruent, positive (CP); congruent, negative (CN); incongruent, positive face centered (IP); incongruent, negative face centered (IN) (Fig. 1). The number of trials was equal in each condition.
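The balanced 2 (congruency) × 2 (target valence) design with equal trial numbers per condition can be illustrated with a short sketch. The Python snippet below is a minimal illustration only, not the authors' E-Prime procedure; the function name make_block and the per-condition trial count per block are hypothetical.

    # Minimal sketch (not the authors' E-Prime script): build a balanced,
    # randomized trial list for the 2 (congruency) x 2 (target valence) design.
    import random

    CONDITIONS = ["CP", "CN", "IP", "IN"]  # congruent/incongruent x positive/negative target

    def make_block(trials_per_condition, seed=None):
        """Return a shuffled block with an equal number of trials per condition."""
        rng = random.Random(seed)
        block = [cond for cond in CONDITIONS for _ in range(trials_per_condition)]
        rng.shuffle(block)
        return block

    # Example: 60 trials per condition gives a 240-trial block, matching the block
    # length reported in the study (the equal per-block split is an assumption here).
    block = make_block(trials_per_condition=60, seed=1)
    print(len(block), block[:8])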
Each stimulus was presented as a picture of 150 (wide) × 37 (high) pixels (the E-Prime display resolution was 640 × 480 pixels). All stimulus pictures were monochrome on a black background. All emotional facial pictures were taken from the CSFE (College Students' Facial Expression of Emotion) system. (This system was developed at a key laboratory of mental health of the Chinese Academy of Sciences to avoid the cultural bias in emotional inducement observed in Chinese participants when the International Affective Picture System (IAPS) [9] is used. In the CSFE, all facial pictures are divided into seven groups: disgust, surprise, neutral, happy, sad, angry, and fear. More details about this scale are available in Wang and Luo [18].)
Two categories of facial pictures were used: happy (positive) and angry (negative). Each category included 20 pictures (10 male faces and 10 female faces). Before the formal study, we asked 20 college students to judge all of these pictures (happy vs. angry) and confirmed that the correct classification rate exceeded 99%. Subjects were seated in a quiet room approximately 100 cm from a computer screen (Dell 15-in. CRT monitor), with horizontal and vertical visual angles below 5°. Prior to the study, all subjects were instructed to attend to the central stimulus and to press the corresponding key with their right hand as quickly as possible (1 for positive, 2 for negative; the key assignment was counterbalanced across subjects). Each trial began with a 250 ms presentation of a small white cross (+) at the center of the black screen, followed by a stimulus picture that lasted at most 1000 ms; participants pressed the corresponding key during this period. The stimulus was terminated by the key press, or after 1000 ms had elapsed. Each response was followed by a 1000 ms blank screen. The study consisted of four blocks of 240 trials, with trial order randomized. High-density ERPs were recorded from each participant using a 128-channel geodesic sensor net (Electrical Geodesics Inc., Eugene, OR, USA) coupled to a high-input-impedance amplifier. EEG was continuously recorded and sampled at 250 Hz. Wherever possible, impedances were reduced to less than 50 kΩ prior to recording. The vertical electrooculogram (VEOG) was recorded at the left orbital rim; the horizontal EOG (HEOG) was recorded at the right orbital rim. The data were analyzed offline with NetStation software (Electrical Geodesics Inc., Eugene, OR, USA). Trials with incorrect responses and trials with EOG artifacts (EOG voltage exceeding 50 µV) were excluded from averaging. The data were band-pass filtered at 0.3–30 Hz. EEG activity for each correct response in each valence condition was averaged separately. ERP waveforms were time-locked to stimulus onset, giving an average epoch of 800 ms, including a 200 ms pre-stimulus baseline. As shown by the grand-averaged ERP waveforms, the ERPs elicited by the four conditions differed prominently from one another, and the differences were largest at frontal sites (Fig. 2). The largest effect was observed at the FCz electrode. On this basis, and following previous studies of selective attention [1] and emotion [2,4], we selected the following 11 sites for statistical analysis: Fp1, Fp2, Fz, F3, F4, FCz, FC3, FC4, Cz, C3, C4 (eight frontal and three central sites). For all conditions, the mean amplitude (mean value in the selected time window) and peak latency (from stimulus onset to the peak of each component) of the N100 (75–125 ms), P200 (150–250 ms), and N200 (300–400 ms) components were measured with NetStation. A two-way repeated-measures ANOVA was conducted on the amplitude and latency of each component, with valence condition and electrode site (11 sites) as factors. Post hoc contrasts were conducted when the main effect across the four conditions was significant. Results are presented selectively, according to our research goals. The degrees of freedom of the F ratio were corrected by the Greenhouse–Geisser method.
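For readers who wish to reproduce a comparable pipeline, the preprocessing and component-measurement steps described above (0.3–30 Hz band-pass, 800 ms epochs with a 200 ms pre-stimulus baseline, rejection of epochs exceeding ±50 µV, per-condition averaging, and mean amplitude in fixed windows) can be sketched in Python with MNE. This is an illustrative sketch only, not the NetStation procedure actually used; the file name, trigger codes, and the averaging over all channels (rather than only the 11 selected sites) are assumptions.

    # Illustrative MNE-Python sketch of the reported pipeline (the authors used
    # NetStation). File name and event codes are assumptions.
    import mne

    raw = mne.io.read_raw_egi("subject01.raw", preload=True)   # 128-channel EGI net, hypothetical file
    raw.filter(l_freq=0.3, h_freq=30.0)                        # 0.3-30 Hz band-pass

    events = mne.find_events(raw)
    event_id = {"CP": 1, "CN": 2, "IP": 3, "IN": 4}            # assumed trigger codes

    # 800 ms epochs: 200 ms pre-stimulus baseline + 600 ms post-stimulus,
    # rejecting epochs whose EEG exceeds +/-50 microvolts.
    epochs = mne.Epochs(raw, events, event_id,
                        tmin=-0.2, tmax=0.6,
                        baseline=(None, 0),
                        reject=dict(eeg=50e-6),
                        preload=True)

    # Per-condition averages, then mean amplitude in the component windows used
    # in the paper (N100: 75-125 ms, P200: 150-250 ms, N200: 300-400 ms).
    windows = {"N100": (0.075, 0.125), "P200": (0.150, 0.250), "N200": (0.300, 0.400)}
    for cond in event_id:
        evoked = epochs[cond].average()
        for comp, (t0, t1) in windows.items():
            # Mean over all channels and the window; in practice one would
            # restrict to the 11 selected electrodes via picks.
            mean_amp = evoked.copy().crop(tmin=t0, tmax=t1).data.mean()
            print(cond, comp, "{:.2f} uV".format(mean_amp * 1e6))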
Reactions that were too fast (less than 100 ms), too slow (more than 1000 ms), or incorrect were excluded from analysis. The mean reaction times (RTs) were 411.9 ms (SE = 37.0) for the congruent (CP, CN) conditions and 526.3 ms (SE = 53.7) for the incongruent (IP, IN) conditions. A paired t test showed a significant difference between congruent and incongruent conditions [t(1,28) = 6.794, p < 0.05]: participants responded faster in congruent than in incongruent conditions, consistent with the flanker effect. The false-response rate was 0.084 (SE = 0.014) for the congruent (CP, CN) conditions and 0.101 (SE = 0.101) for the incongruent (IP, IN) conditions. Although participants made fewer errors in congruent conditions than in incongruent conditions, this difference was not significant [t(1,28) = 2.022, p > 0.05].
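As a rough illustration of the behavioral analysis (exclusion of incorrect trials and of RTs outside 100–1000 ms, per-subject condition means, and a paired t test), a minimal Python sketch is given below; the data-frame layout and column names are assumptions, not the authors' actual analysis files.

    # Illustrative sketch of the behavioral RT analysis; the trial-level data
    # frame and its column names are assumptions.
    import pandas as pd
    from scipy import stats

    def rt_analysis(trials):
        """trials: one row per trial with columns subject, congruent (bool), rt_ms, correct (bool)."""
        # Exclude incorrect responses and RTs outside 100-1000 ms, as in the paper.
        valid = trials[trials["correct"] & trials["rt_ms"].between(100, 1000)]

        # Per-subject mean RT for congruent vs. incongruent trials.
        means = (valid.groupby(["subject", "congruent"])["rt_ms"]
                      .mean()
                      .unstack("congruent"))

        # Paired t test across subjects (congruent vs. incongruent).
        t, p = stats.ttest_rel(means[True], means[False])
        return means, t, p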
Fig. 2. Averaged ERPs at Fz, FCz, and Cz.
Overall, the results confirmed the existence of a flanker effect, indicating that the modified flanker task used in this study is a valid paradigm for such investigations. As shown in Fig. 2, an N1 component was elicited in all valence conditions, but no significant main effect was found for N1 in either mean amplitude [F(3,56) = 0.077, p > 0.05] or peak latency [F(3,56) = 0.912, p > 0.05]. Comparing the stimuli in the CN and IN conditions, the central stimuli (targets) are the same (negative), but the flanker stimuli (faces in the flanking positions) differ (negative vs. positive).
A significant main effect was found between the CN and IN conditions for P200 mean amplitude [F(1,28) = 18.245, p < 0.05] at the frontal and central sites: the CN condition showed a larger mean amplitude than the IN condition. No significant main effect was found for N200 in either mean amplitude [F(1,28) = 1.032, p > 0.05] or peak latency [F(1,28) = 0.887, p > 0.05] between the CN and IN conditions. Comparing the stimuli in the CP and IP conditions, the central stimuli (targets) are the same (positive), but the flanker stimuli differ (positive vs. negative). A significant main effect was found between the CP and IP conditions for P200 mean amplitude [F(1,28) = 19.186, p < 0.05] at the frontal and central sites: the CP condition showed a smaller mean amplitude than the IP condition. No significant main effect was found for N200 in either mean amplitude [F(1,28) = 1.867, p > 0.05] or peak latency [F(1,28) = 0.561, p > 0.05] between the CP and IP conditions. Comparing the stimuli in the CP and IN conditions, the central stimuli (targets) differ (positive vs. negative), but the flanker stimuli are the same (positive). No significant main effect was found for P200 in either mean amplitude [F(1,28) = 0.774, p > 0.05] or peak latency [F(1,28) = 0.983, p > 0.05]. However, a higher N200 mean amplitude was found for the IN condition than for the CP condition [F(1,28) = 15.359, p < 0.05]; no significant main effect was found for N200 peak latency [F(1,28) = 1.833, p > 0.05] between the CP and IN conditions. Comparing the stimuli in the CN and IP conditions, the target stimuli differ (negative vs. positive), but the flanker stimuli are the same (negative). A significant main effect was found for N200 mean amplitude (CN > IP) [F(1,28) = 6.852, p < 0.05], but not for peak latency [F(1,28) = 1.387, p > 0.05] (Table 1). In our experiment, a significant main effect was found in the analysis of the flanker stimuli for P200. The comparison between the CP and IP conditions showed that negative flanker faces (IP condition) elicited a larger P200 amplitude than did positive flanker faces (CP condition), and the comparison between the CN and IN conditions showed that positive flanker faces elicited a smaller P200 amplitude than did negative flanker faces. Overall, as flankers, negative facial pictures elicited larger amplitudes than positive ones. Research on the emotional negativity bias has shown that the human brain is especially sensitive to emotionally negative events [23] and that negative stimuli elicit larger P2 amplitudes than positive stimuli as early as 100 ms after stimulus onset [12]. The results from our study are consistent with these findings on positive and negative emotions. The anterior P200 component is considered an index of attention-related processes and has been shown to be larger for negative than for positive stimuli [3,4]. Given the characteristics of these stimuli, we can infer that the differences in P200 mean amplitude in our task were mostly caused by the flanker faces. Because there were more flanker faces than target faces (4:1) in each whole stimulus, the flanker faces dominated perception at that time, which may explain why they generated such a strong effect. The results also indicate that participants attended to the whole stimulus, not only the target face in the middle, at about 150–250 ms after stimulus onset.
Table 1
Mean amplitudes (M) and standard deviations (SD) of the N100, P200, and N200 components, collapsed across the 11 selected electrode sites, in the CP, CN, IN, and IP conditions.

            CP            CN            IN            IP
            M     SD      M     SD      M     SD      M     SD
N100        1.98  0.33    1.87  0.27    1.85  0.24    1.91  0.31
P200        2.04  0.45    3.22  0.56    2.13  0.41    2.89  0.58
N200        0.13  0.03    1.02  0.23    0.86  0.16    0.07  0.02

CP, congruent, all positive faces; CN, congruent, all negative faces; IP, incongruent, positive face centered; IN, incongruent, negative face centered.
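As an illustration of the kind of repeated-measures comparison reported above (valence condition × electrode site on component mean amplitude), the following Python sketch uses statsmodels. It is not the NetStation analysis the authors performed; the long-format data frame and its column names are assumptions, and AnovaRM does not itself apply the Greenhouse–Geisser correction used in the paper, so corrected p-values would have to be computed separately.

    # Illustrative sketch of the condition x electrode repeated-measures ANOVA
    # on P200 mean amplitude; the data frame layout is an assumption.
    from statsmodels.stats.anova import AnovaRM

    def p200_anova(amplitudes):
        """amplitudes: long-format frame with one row per subject x condition x
        electrode and columns subject, condition (CP/CN/IP/IN), electrode
        (the 11 selected sites), p200_amp."""
        result = AnovaRM(data=amplitudes,
                         depvar="p200_amp",
                         subject="subject",
                         within=["condition", "electrode"]).fit()
        print(result)
        return result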
From the comparative analysis of the target faces, no significant main effect was found for P200 in mean amplitude or peak latency between any of the valence conditions. However, a significant main effect was found for N200 mean amplitude between the CP and IN conditions, with the IN condition eliciting a higher mean amplitude than the CP condition. This indicates that negative (angry) target faces elicited higher N200 mean amplitudes than positive (happy) target faces. A similar pattern was found between the CN and IP conditions, in which negative target faces at the center elicited higher amplitudes than positive ones. All of these results can be interpreted in terms of the different characteristics of positive and negative emotions (see the discussion of P200 above). The frontal N200 is an ERP commonly assessed in stimulus-locked analyses of the flanker task [17]. The results of the current study indicate that the difference in N200 mean amplitude was caused by the target faces at the center of the whole stimulus, while the effects of the flanker faces were suppressed during this period. This indicates that participants were attending to the target faces and ignoring the nearby flanker faces at about 300–400 ms after stimulus onset. From the analysis above, the visual search sequence of selective attention in the emotional flanker task becomes clear. At about 150–250 ms after stimulus onset, participants attended to the whole stimulus picture: because the emotional effect of the flankers appeared during this period, participants were responding to the whole stimulus (five faces) at that time. During the N200 period, however, no significant emotional effect was found for the flankers; only the emotional effect of the target face appeared, implying that participants had already focused their attention on the target. These results indicate that the course of visual search moves from a whole overview down to the targeted particulars, even when the target always appears at a known location and even when participants have been told to respond to the target as quickly as possible. During the visual processing of the flanker task, participants first attended to the whole stimulus. Because of the limitations of selective attention, the whole stimulus picture, including target and distractors, entered perception in its entirety, and both target and distractors acted as activators; the global features of the stimulus therefore dominated perception at that time. The selection of the target immediately followed this perception stage, and attention was quickly focused on the specific target; at that point, the features of the target exerted their effect on perception. The course of visual search in an emotional flanker task therefore appears to run from a large overview down to the specific target. Participants processed all of the stimuli at the same time for a short initial period and then concentrated on the central target. In other words, processing in the early stage of visual search in this task was "parallel" rather than "serial". In our study, visual search was easy: we presented the target at the same known central location, and we also provided a cue, a cross (+), at the target position prior to its appearance.
We hypothesized that participants might attend to the target without any visual search process, because they knew the location of the target and could readily ignore irrelevant information presented at distractor locations. Interestingly, the participants still attended to the distractors during this process. The processing sequence was nonetheless "parallel": participants attended to the whole group of stimuli at the same time. During the visual search process in this task, participants first attended to the whole stimulus, and a "parallel" search was deployed for the detection of a single feature.
After that, participants focused their attention on the target, permitting accurate localization and feature conjunction, in a manner more analogous to a "serial" search. The whole visual search process, from P200 to N200, from the whole stimulus to the target, and from parallel to serial, is more consistent with the account of visual search given by feature integration theory [16].

Acknowledgements

This research was supported by the National Social Foundation of China (BBA080048) and the Young Scholar Foundation of Zhejiang Normal University (SKQN200809). The authors thank Yuejia Luo for developing and providing the CSFE used in this research, and Min Li and Min Liu for their assistance with EEG recording and analysis.

Appendix A. Identification numbers of CSFE pictures presented in this study

Happy (Female: f170, f175, f189, H36f, H48f, H62f, H64f, H65f, H66f, H79f; Male: H2m, H4m, H14m, H16m, H68m, H95m, m124, m138, m139, m141). Angry (Female: f1, f2, f3, f4, f5, f12, f34, f37, f38, f39; Male: A5m, A22m, m3, m4, m11, m12, m16, m28, m29, m38).

References

[1] M.M. Botvinick, T.S. Braver, D.M. Barch, C.S. Carter, J.D. Cohen, Conflict monitoring and cognitive control, Psychological Review 108 (2001) 624–652.
[2] J.T. Cacioppo, W.L. Gardner, Emotion, Annual Review of Psychology 50 (1999) 191–214.
[3] L. Carretié, F. Mercado, M. Tapia, J.A. Hinojosa, Emotion, attention, and the 'negativity bias', studied through event-related potentials, International Journal of Psychophysiology 41 (2001) 75–85.
[4] S. Delplanque, M.E. Lavoie, P. Hot, L. Silvert, H. Sequeira, Modulation of cognitive processing by emotional valence studied through event-related potentials in humans, Neuroscience Letters 356 (2004) 1–4.
[5] P. Ekman, W.V. Friesen, Pictures of Facial Affect, Consulting Psychologists Press, Palo Alto, CA, 1976.
[6] C.W. Eriksen, J.D. St. James, Visual attention within and around the field of focal attention: a zoom lens model, Perception & Psychophysics 40 (1986) 225–240.
[7] J. Green, W.A. Teder-Sälejärvi, J.J. McDonald, Control mechanisms mediating shifts of attention in auditory and visual space: a spatio-temporal ERP analysis, Experimental Brain Research 166 (2005) 358–369.
[8] S.R. Jackson, G.M. Jackson, M. Roberts, The selection and suppression of action: ERP correlates of executive control in humans, NeuroReport 10 (1999) 861–865.
[9] P.J. Lang, M.M. Bradley, B.N. Cuthbert, International Affective Picture System (IAPS): affective ratings of pictures and instruction manual, Technical Report A-6, University of Florida, Gainesville, FL, 2005.
[10] R.D. Rafal, F. Gershberg, R. Egly, R. Ivry, A. Kingstone, T. Ro, Response channel activation and the lateral prefrontal cortex, Neuropsychologia 34 (1996) 1197–1202.
[11] T. Ro, A. Cohen, R. Ivry, R. Rafal, Response channel activation and the temporoparietal junction, Brain and Cognition 37 (1998) 461–476.
[12] N.K. Smith, J.T. Cacioppo, J.T. Larsen, T.L. Chartrand, May I have your attention, please: electrocortical responses to positive and negative stimuli, Neuropsychologia 41 (2003) 171–183.
[13] R. Soria, R. Srebro, Event-related potential scalp fields during parallel and serial visual searches, Cognitive Brain Research 4 (1996) 201–210.
[14] S. Thorpe, D. Fize, C. Marlot, Speed of processing in the human visual system, Nature 381 (1996) 520–522.
[15] Y. Tong, R.D. Melara, Behavioral and electrophysiological effects of distractor variation on auditory selective attention, Brain Research 1166 (2007) 110–123.
[16] A.M. Treisman, Search, similarity, and integration of features between and within dimensions, Journal of Experimental Psychology: Human Perception and Performance 17 (1991) 652–676.
[17] V. van Veen, C.S. Carter, The timing of action-monitoring processes in the anterior cingulate cortex, Journal of Cognitive Neuroscience 14 (2002) 593–602.
[18] Y. Wang, Y.J. Luo, Standardization and assessment of college students' facial expression of emotion, Chinese Journal of Clinical Psychology 13 (2005) 396–398.
[19] J.M. Wolfe, Visual search, in: H. Pashler (Ed.), Attention, Psychology Press, Philadelphia, 1998, pp. 13–73.
[20] J.M. Wolfe, S.R. Friedman-Hill, A.B. Bilsky, Parallel processing of part-whole information in visual search tasks, Perception & Psychophysics 55 (1994) 537–550.
[21] S.M. Wood, G.F. Potts, J.F. Hall, J.B. Ulanday, C. Netsiri, Event-related potentials to auditory and visual selective attention in schizophrenia, International Journal of Psychophysiology 60 (2006) 67–75.
[22] World Medical Organization, Declaration of Helsinki, 1964, British Medical Journal 313 (1996) 1448–1449.
[23] J. Yuan, Q. Zhang, A. Chen, H. Li, Q. Wang, Z. Zhuang, S. Jia, Are we sensitive to valence differences in emotionally negative stimuli? Electrophysiological evidence from an ERP study, Neuropsychologia 45 (2007) 2764–2771.