Neuropsychologia 48 (2010) 1725–1734
Emotion modulates language production during covert picture naming

José A. Hinojosa a,∗, Constantino Méndez-Bértolo a, Luis Carretié b, Miguel A. Pozo a

a Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain
b Departamento de Psicología Biológica y de la Salud, Universidad Autónoma de Madrid, Madrid, Spain
Article history: Received 27 August 2009; received in revised form 19 January 2010; accepted 17 February 2010; available online 24 February 2010.

Keywords: Emotion; Language production; Phonological encoding; Grapheme monitoring; ERPs
Abstract

Previous studies have shown that emotional content modulates the activity of several components of the event-related potentials during word comprehension. However, little is known about the impact of affective information on the different processing stages involved in word production. In the present study we aimed to investigate the influence of positive and negative emotion on phonological encoding, a process that has been shown to take place between 300 and 450 ms in previous studies. Participants performed letter searching in a picture naming task. It was found that grapheme monitoring in positive and negative picture names was associated with slower reaction times and enhanced amplitudes of a positive component around 400 ms as compared to monitoring letters in neutral picture names. We propose that this modulation reflects a disruption of phonological encoding processes as a consequence of the capture of attention by affective content. Grapheme monitoring in positive picture names also elicited higher amplitudes than letter searching in neutral image names in a positive component around 100 ms. This amplitude enhancement might be interpreted as a manifestation of the ‘positive offset’ during conceptual preparation processes. The results of a control experiment with a passive viewing task showed that neither effect can be simply attributed to the processing of the emotional images per se. Overall, it seems that emotion modulates word production at several processing stages.
1. Introduction

Most of the research on emotion has focused on how the brain processes the affective content of pictorial stimuli (e.g., Carretié et al., 2009; Codispoti, Ferrari, & Bradley, 2007; Delplanque, Silvert, Hot, Rigoulot, & Sequeira, 2006; Schupp, Junghöfer, Weike, & Hamm, 2004; Smith, Cacioppo, Larsen, & Chartrand, 2003) or faces (e.g., Pourtois, Dan, Grandjean, Sander, & Vuilleumier, 2005; Schacht & Sommer, 2009a; Vuilleumier & Pourtois, 2007). More recently, researchers’ attention has been directed to the impact of emotional content on a different class of stimuli, one that is perceptually simple and highly symbolic: linguistic stimuli. A number of studies have tried to elucidate the time course and the brain areas implicated in the processing of affective information during single word comprehension (Hinojosa, Carretié, Valcárcel, Méndez-Bértolo, & Pozo, 2009; Kissler, Herbert, Winkler, & Junghofer, 2009; Naumann, Maier, Diedrich, Becker, & Bartussek, 1997; Scott, O’Donnell, Leuthold, & Sereno, 2009). The results of these studies have shown that emotional content modulates the amplitude of an early posterior negativity generated in extrastriate cortices. This is thought to reflect rudimentary semantic stimulus
∗ Corresponding author. E-mail address: [email protected] (J.A. Hinojosa).
doi:10.1016/j.neuropsychologia.2010.02.020
classification that is sensitive to attention modulations (Herbert, Junghofer, & Kissler, 2008; Kissler, Herbert, Peyk, & Junghofer, 2007; Schacht & Sommer, 2009b). Also, the amplitude of a late positivity is enhanced for emotionally arousing words, which has been related to the allocation of additional attentional resources for an efficient memory encoding of affective features (Dillon, Cooper, Grent-’t-Jong, Woldorff, & La Bar, 2006; Herbert, Kissler, Junghofer, Peyk, & Rockstroh, 2006; Hinojosa, Carretié, Méndez-Bértolo, Míguez, & Pozo, 2009). Even though the influence of emotional content on word comprehension has been well established, little is known about the effects of affective information on language production. The current investigation addresses this question by exploiting the high temporal resolution of event-related brain potentials (ERPs). Based on speech error evidence, the most prevalent theoretical view of language production assumes that, to produce a word, a speaker will first activate an appropriate lexical concept. Lexical concepts are conceived as nodes in a semantic network, so there is always some activation spreading from the target concept to semantically related concepts. As a consequence, the corresponding lexical item (lemma) in the mental lexicon, an abstract description of the syntactic properties of the item, is activated during ‘lexical selection’. Lexical selection is achieved through a process of competitive, spreading activation of both the target and the related nontarget nodes, with the node with the highest activation being selected. The next stage is called ‘phonological encoding’,
and involves the retrieval of word form properties. Two kinds of phonological information become available. The first is the word’s segmental composition, roughly the individual phonemes of the word and their ordering (retrieved during segmental spell-out). The second is the word’s metrical structure, that is, the number of syllables and the word’s stress pattern over these syllables (retrieved during metrical spell-out). In a process called segment-to-frame association, segmental and metrical information is combined into a phonological word. During phonological word formation the previously retrieved segments are syllabified according to universal and language-specific rules. The resulting phonological syllables activate phonetic syllables in a so-called mental syllabary, which are used by the speaker to prepare the articulatory gestures for words in a final processing stage (Levelt, 1993; Levelt, 2001; Levelt, Roelofs, & Meyer, 1999; but see for instance Dell, Schwartz, Martin, Saffran, & Gagnon, 1997 for alternative proposals). The model assumes that the system’s architecture is serial, so that lexical selection precedes phonological form encoding. This assumption has been tested in several electrophysiological studies using a variety of go/no-go tasks that have confirmed the serial nature of word production. Basically, these studies found effects in the Lateralized Readiness Potential (related to response preparation) and the N200 (related to response inhibition) suggesting that grammatical and semantic information processing during lexical selection precedes phonological processing by between 40 and 170 ms (Rodriguez-Fornells, Schmitt, Kutas, & Münte, 2002; Smith, Münte, & Kutas, 2000; Van Turennout, Hagoort, & Brown, 1997, 1998; Zhang & Damian, 2009; but see Abdel Rahman & Sommer, 2003; Abdel Rahman, van Turennout, & Levelt, 2003). Evidence about the mechanisms involved in language production mainly comes from the use of the picture naming task.
In this paradigm, participants are asked to retrieve the name of an object displayed in a picture and/or to monitor the presence of a linguistic unit (Howard, Nickels, Coltheart, & Cole-Virtue, 2006; Indefrey & Levelt, 2004; Roelofs, 2008; Schuhmann, Schiller, Goebel, & Sack, 2009). Picture naming has proved useful for characterizing the stages involved in word production, even when participants were instructed to name the picture silently instead of overtly (Eulitz, Hauk, & Cohen, 2000; Levelt, Praamstra, Meyer, Helenius, & Salmelin, 1998; Rodriguez-Fornells et al., 2002; Salmelin, Hari, Lounasmaa, & Sams, 1994; Smith, Schiltz, Zaake, Kutas, & Münte, 2001). On the basis of data obtained in picture naming studies, Indefrey and Levelt (2004) delineated the time course of word production in a meta-analysis that included 82 studies. These authors estimated that conceptual representations are accessed within the first 175 ms, followed by lexical access (175–250 ms) and phonological encoding (250–450 ms). Finally, articulatory preparation would occur between 450 and 600 ms.
To date, research on picture naming has shown that several variables influence language production at different processing stages, including the age of acquisition of the names (early acquired picture names are produced faster than later acquired picture names; Bonin, Chalard, Méot, & Barry, 2006; Catling & Johnston, 2006; Morrison & Ellis, 2000; Morrison, Ellis, & Quinlan, 1992), familiarity (better performance in naming familiar than unfamiliar picture names; Lambon Ralph, Graham, Ellis, & Hodges, 1998; Meltzer, Postman-Caucheteux, McArdle, & Braun, 2009), word frequency (shorter naming latencies for high frequency as compared to low frequency words; Dent, Johnston, & Humphreys, 2008; Graves, Grabowski, Mehta, & Gordon, 2007; Kavé, Samuel-Enoch, & Adiv, 2009), and word length (naming reaction times increase as picture names get longer; Okada, Smith, Humphries, & Hickok, 2003; Wilson, Isenberg, & Hickok, 2009). However, to our knowledge no previous study has attempted to determine whether affective content modulates word production.
With this purpose, the current study evaluates grapheme monitoring in a picture naming task that presented pictures of emotional objects. This paradigm is a variation of the phoneme-monitoring task (Wheeldon & Levelt, 1995). A target grapheme is presented before the picture and participants have to indicate whether this target is present or absent in the picture name. The grapheme monitoring task is suitable for studying phonological encoding processes (Hauk, Rockstroh, & Eulitz, 2001) because of the close relationship that exists between graphemic and phonemic codes for words (see Wheeldon & Levelt, 1995, and Indefrey & Levelt, 2004, for a discussion of this issue). Also, this task has been used in previous ERP research on language production. In this regard, Hauk et al. (2001) found that grapheme monitoring in picture naming is associated with a positive component in the time interval between 300 and 450 ms. Consistent with the proposal made by Indefrey and Levelt (2004), this component was thought to reflect the final stages of phonological encoding and the transition to silent articulation (Hauk et al., 2001). This assertion has received additional support from the results of a MEG study that used a similar task in which participants had to monitor phonological information by deciding whether the name of the object started with a vowel (Vihla, Laine, & Salmelin, 2006). It was found that phonological encoding takes place after 300 ms, as reflected in the sustained activation of the posterior temporal and inferior frontal cortices. Also, using electrodes implanted in language-related areas of epileptic patients, Sahin, Pinker, Cash, Schomer, and Halgren (2009) reported phonological processing to occur around 450 ms when subjects produced grammatically inflected words.
Finally, ERP abnormalities in a similar time window (300–450 ms) have been found for aphasic individuals with phonological impairments but not for those with semantic or lexical deficits (Laganaro, Morand, & Schnider, 2009; Laganaro, Morand, Schwitter, et al., 2009). Given the lack of studies on affective processing in word production, the hypotheses that can be outlined are tentative. In language comprehension research, the emotional content of words has been shown to capture attention and disrupt an ongoing task, especially during the processing of intense negative words. This modulation was reflected in delayed reaction times (RTs) and enhanced amplitudes of several brain waves (Carretié et al., 2008; Kuchinke et al., 2005; Larsen, Mercer, Balota, & Strube, 2008; McKay et al., 2004; Pratto & John, 1991). If this finding generalizes to language production, and in particular to phonological encoding, the grapheme monitoring of picture names with emotional content should enhance the amplitude of a positive component, as compared to neutral names, in the time range between 300 and 450 ms. Grapheme monitoring of emotional picture names should also elicit longer RTs than that of neutral names. Alternatively, the absence of amplitude and RT differences in this time window would indicate that affective information exerts no influence on phonological encoding during word production.

2. Methods

2.1. Participants

Thirty-four native Spanish speakers participated in the experiment (30 females; 20–33 years, mean 22 years; lateralization quotient 50–100%, mean 78%, measured with the Edinburgh Handedness Scale; Oldfield, 1971). All participants reported normal or corrected-to-normal vision. They gave their informed consent to participate in the study.

2.2. Stimuli

The stimuli were selected from a set of 107 pictures taken from the IAPS database (Lang, Bradley, & Cuthbert, 2001). These pictures included negative, positive and neutral stimuli.
In a pre-test, 19 subjects (different from those who participated in the ERP recording) named all the images and rated the names they gave on a 9-point Likert scale on the dimensions of arousal and valence (9 being very arousing and very positive, respectively). Only pictures for which at least 90% of the subjects used the same name were further considered. An equal number of pictures
Table 1. Mean valence (1 = negative to 9 = positive) and arousal (1 = calming to 9 = arousing) IAPS ratings for each picture type, together with the ratings given to the picture names by the independent sample of subjects (more details in the main text), and the word frequency and word length (number of syllables) of the names. The last rows of each block show the results of the statistical analyses for each variable.

                 Valence                           Arousal                           Frequency (per 2 million)   Length
Pictures
  Negative       3.3                               5.34                              –                           –
  Positive       7.26                              5.24                              –                           –
  Neutral        5.09                              2.83                              –                           –
  ANOVA          F = 256.01***                     F = 77.83***                      –                           –
  Post-hoc       Pos > Neu; Pos > Neg; Neu > Neg   Neg > Neu; Pos > Neu              –                           –

Picture names
  Negative       3.07                              6.48                              29                          3
  Positive       6.02                              5.73                              40                          2.78
  Neutral        4.64                              4.34                              38                          2.56
  ANOVA          F = 98.87***                      F = 54.12***                      F = .53, n.s.               F = 1.5, n.s.
  Post-hoc       Pos > Neu; Pos > Neg; Neu > Neg   Neg > Neu; Neg > Pos; Pos > Neu   –                           –

* p < .05; ** p < .01; *** p < .001; + statistical trend, p < .1; n.s.: non-significant; df: 2,34.
for each of the three emotional categories were selected according to the following criteria, which were contrasted via one-way analyses of variance (ANOVA; see Table 1) and post-hoc analyses with the Bonferroni correction (alpha < .05): (a) positive, negative and neutral pictures differed in valence; (b) positive and negative pictures had similar arousal and differed from neutral pictures in this dimension; (c) all names were familiar one-word names; (d) the names used by the participants had similar frequency of use in Spanish (Alameda & Cuetos, 1995); and (e) the names were equated in word length. Only 18 positive, 18 negative and 18 neutral pictures met these restricted criteria. They were also matched in physical attributes and complexity. Table 1 summarizes mean values in arousal and valence for pictures, as well as mean word frequency, word length, arousal and valence for picture names. All post-hoc analyses were in the expected direction, with one exception. Even though the comparison between negative and positive pictures in the arousal dimension was not significant (5.34 for negative pictures vs 5.24 for positive pictures), there were differences in the arousal ratings that subjects gave to the corresponding picture names (6.48 for negative picture names vs 5.73 for positive picture names). All pictures were presented twice during the experimental session, once with the target grapheme present and once with the target grapheme absent. Repetition effects have been found to influence brain waves by increasing the amplitude of several components after the first presentation of a stimulus (Doyle, Rugg, & Wells, 1996; Rugg et al., 1998). However, it seems that the amplitude is no longer affected after the second presentation of the stimuli.
Moreover, in the case of emotional stimuli this effect seems to be homogeneously distributed, and there is no evidence for differential repetition modulation among affective stimulus categories (Olofsson & Polich, 2007; Rozenkrants, Olofsson, & Polich, 2008). Even though these findings preclude attributing any ERP modulation that differentiates between stimulus types to repetition effects, participants additionally saw the pictures on the computer and named them before the practice sequence. Therefore, once the experiment started, none of the images was presented to subjects for the first time. This procedure also allowed us to ensure that all participants knew the names of the objects (they made .85 naming errors on average). Correct picture names were reported to participants in the few cases in which they made an error.
2.3. Procedure

Participants had to perform a grapheme monitoring task (Hauk et al., 2001; Özdemir, Roelofs, & Levelt, 2007). Phonological codes are considered to be involved even with visual presentation of stimuli (Jescheniak & Schriefers, 2001; Zubicaray, McMahon, Eastburn, & Wilson, 2002). Also, Spanish is a transparent language with a fairly shallow orthography and regular spelling-to-sound correspondences, which makes it difficult to distinguish between graphemic and phonemic effects (Ardila, 1991). Given this close correspondence, the grapheme monitoring task was preferred to the classical phoneme-monitoring task, since it allows the presentation of all the stimuli in the same sensory modality. Stimuli were presented to every participant in two sequences with the same proportion of negative, positive and neutral pictures, as well as of yes/no responses. Every stimulus was presented once in each sequence. The order of the sequences was counterbalanced across participants. For each category, the target letter was present in half of the pictures and absent in the other half. In those images in which the target grapheme was present, it could appear equiprobably at a random position within the first, second or last third of the syllables of the picture name. Target letters were always consonants, since vowels are known to be associated with longer production times, and consonants and vowels seem to be represented differently in the brain (Caramazza, Chialant, Capasso, & Miceli, 2000; Carreiras, Vergara, & Perea, 2009). Fig. 1 exemplifies the experimental procedure. Each trial began with a question about the target grapheme that was presented for 500 ms (for instance, “Is there a ‘B’?”). A blank screen replaced the question for 500 ms, immediately followed by a picture for 1000 ms. The intertrial interval was 3000 ms. Participants were instructed to press a button with the left index finger if the target letter was present in the picture name and with the right index finger if the picture name did not contain the target grapheme in the first sequence; the assignment was reversed for the other sequence. They were told to minimize blinking. A 5-min break was allowed between the two test sequences. A practice sequence was presented before the first experimental sequence. As already explained, prior to the practice sequence, and in order to minimize confounding repetition effects and to ensure that they used the intended names, participants saw the pictures on a computer monitor.

2.4. Data acquisition

Electroencephalographic data were recorded using an electrode cap (ElectroCap International) with tin electrodes. A total of 58 scalp locations homogeneously distributed over the scalp were used (see Fig. 2). All scalp electrodes, as well as one electrode at the left mastoid (M1), were referenced to one electrode placed at the right mastoid (M2). Bipolar horizontal and vertical electrooculogram was recorded
Fig. 1. Schematic illustration of the stimulation paradigm (¿tiene una “T”? = Is there a “T”?).
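The present/absent counterbalancing described in the Procedure can be sketched as follows. This is a simplified illustration with hypothetical picture names; details such as the equiprobable position of the target within the thirds of the name are not modeled.

```python
import random

def build_trials(picture_names, seed=0):
    """For each picture name, create one target-present and one
    target-absent trial, as in the grapheme monitoring task.
    Assumes every name contains at least one consonant."""
    rng = random.Random(seed)
    consonants = "bcdfghjklmnpqrstvxz"  # targets were always consonants
    trials = []
    for name in picture_names:
        present = [c for c in consonants if c in name]
        absent = [c for c in consonants if c not in name]
        # one trial where the target grapheme occurs in the name...
        trials.append({"name": name, "target": rng.choice(present), "present": True})
        # ...and one where it does not
        trials.append({"name": name, "target": rng.choice(absent), "present": False})
    rng.shuffle(trials)
    return trials
```

Each picture thus appears twice, once per response type, with the order randomized within the sequence.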
Fig. 2. Grand averaged ERPs elicited by (a) grapheme monitoring in negative, positive and neutral picture names, and (b) passive viewing of negative, positive and neutral images, at a selected sample of representative electrodes (F3, F4, P3, P4). Scales are represented at the F3 electrode.
for artifact rejection purposes. Electrode impedances were kept below 5 kΩ. The signal was recorded continuously with a bandpass from .1 to 50 Hz (3 dB points for −6 dB/octave roll-off) and the digitization sampling rate was set to 250 Hz.

2.5. Data analysis

Trials with RTs longer than 2000 ms or shorter than 200 ms were excluded from the analyses. In addition, trials with incorrect responses were eliminated. RTs and errors were analyzed by means of repeated-measures ANOVAs with the factor Affect type (three levels: positive, negative and neutral) and post-hoc analyses with the Bonferroni correction (alpha < .05) where appropriate. Average ERPs from −200 to 800 ms after stimulus onset were computed separately for all the experimental conditions. Data were baseline corrected using the entire 200 ms before picture onset. Muscle artifacts, drifts, and amplifier blockings were removed by visual inspection. Offline correction of eye movement artifacts was made using the method described by Semlitsch, Anderer, Schuster, and Presslich (1986). After the averaging of every stimulus category, the originally M2-referenced data were re-referenced to the average of the mastoids. Components explaining most of the ERP variance were detected and quantified through covariance-matrix-based temporal principal component analysis (tPCA). This method has been repeatedly recommended, since the exclusive use of traditional visual inspection of grand averages and voltage computation may lead to several types of misinterpretation (Chapman & McCrary, 1995; Coles, Gratton, Kramer, & Millar, 1986; Dien, 2010; Foti, Hajcak, & Dien, 2009). The main advantage of tPCA over traditional procedures based on visual inspection of recordings and on temporal windows of interest is that it presents each ERP component separately and with its clean shape, extracting and quantifying it free of the influences of adjacent or subjacent components.
Indeed, the waveform recorded at a site on the head over a period of several hundred milliseconds represents a complex superposition of different overlapping electrical potentials. Such recordings can stymie visual inspection. In brief, tPCA computes the covariance between all ERP time points, which tends to be high between those time points involved in the same component and low between those belonging to different components. The solution is therefore a set of independent factors made up of highly covarying time points, which ideally correspond to ERP components. The temporal factor score, the tPCA-derived parameter with which extracted temporal factors may be quantified, is linearly related to amplitude. In the present study, the number of components to select was based on the scree test (Cliff, 1987). Extracted components were submitted to promax rotation, since this rotation has been found to give the best overall results for temporal PCA (Dien, 2010; Dien, Beal, & Berg, 2005). Repeated-measures ANOVAs on temporal factor scores were carried out. Two within-subjects factors were included in the ANOVA: Affect type (three levels: positive, negative and neutral) and Electrode (58 levels). The Greenhouse–Geisser epsilon correction was applied to adjust the degrees of freedom of the F-ratios where necessary. Signal overlapping may also occur in the spatial domain. At any given time point several neural processes (and hence several electrical signals) may be active, so the recording at any scalp location at that moment is the electrical balance
of these different neural processes. While temporal PCA “separates” ERP components along time, spatial PCA (sPCA) separates ERP components along space, each spatial factor ideally reflecting one of the concurrent neural processes underlying each temporal factor. Additionally, sPCA provides a reliable division of the scalp into different recording regions, an advisable strategy prior to statistical contrasts, since ERP components frequently show a different behavior in some scalp areas than in others (e.g., they present different polarity or react differently to experimental manipulations). Basically, each region or spatial factor is composed of the scalp points where recordings tend to covary. As a result, the shape of the sPCA-configured regions is functionally based and scarcely resembles the shape of the geometrically configured regions defined by traditional procedures. Moreover, each spatial factor can be quantified through the spatial factor score, a single parameter that reflects the amplitude of the whole spatial factor. Therefore, sPCAs were carried out for those temporal factors that were sensitive to our experimental manipulations. Again, the number of factors to select was based on the scree test, and extracted factors were submitted to promax rotation. Repeated-measures ANOVAs on the spatial factor scores were carried out with respect to Affect type (three levels: positive, negative and neutral). Again, the Greenhouse–Geisser epsilon correction was applied to adjust the degrees of freedom of the F-ratios, and follow-up planned comparisons with the Bonferroni correction (alpha < .05) were made to determine the significance of pairwise contrasts where appropriate.
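The two-step decomposition described above can be sketched with a plain covariance-based eigendecomposition. This is a simplified illustration, not the authors' actual pipeline: the scree test and promax rotation are omitted, the number of factors is fixed by hand, and the data shapes are hypothetical.

```python
import numpy as np

def pca_factors(X, n_factors):
    """Covariance-matrix-based PCA. X: (observations, variables).
    For tPCA, the variables are time points and each observation is one
    waveform (one subject x condition x electrode); for sPCA, the
    variables are electrodes. Returns loadings (variables x factors) and
    factor scores (observations x factors), linearly related to amplitude."""
    Xc = X - X.mean(axis=0)                      # center each variable
    cov = np.cov(Xc, rowvar=False)               # covariance between variables
    evals, evecs = np.linalg.eigh(cov)           # eigendecomposition
    order = np.argsort(evals)[::-1][:n_factors]  # largest-variance factors first
    loadings = evecs[:, order] * np.sqrt(evals[order])
    scores = Xc @ evecs[:, order]
    return loadings, scores

# tPCA: decompose the ERP matrix (n_waveforms x n_timepoints) into
# temporal factors; sPCA: decompose each temporal factor's scores,
# arranged as (subjects * conditions) x electrodes, into scalp regions.
```

The covariance step is what groups time points belonging to the same component: highly covarying time points load on the same factor, which is why the factor scores can then be submitted to the ANOVAs described above.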
3. Results

3.1. Behavioral data

Participants were faster at identifying the graphemes in names corresponding to neutral pictures (mean RT = 1181 ms) than in those corresponding to negative (1241 ms) or positive (1208 ms) pictures. The overall ANOVA showed that this difference was significant (F2,66 = 8.6; p < .005). The post-hoc analyses revealed that, whereas there were no differences between RTs to positive and negative names, both differed from RTs to neutral names. The mean numbers of errors were 2.6 for the identification of graphemes in neutral names, 2.7 in positive names and 3.4 in negative names. These differences were significant in the overall ANOVA (F2,66 = 3.3; p < .05). However, only the difference between negative and neutral names was marginally significant (p = .09) according to post-hoc analyses. Table 2 shows the mean and standard deviation of RTs and errors for every stimulus type.
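The one-way repeated-measures ANOVA behind these behavioral contrasts can be written out directly. The sketch below uses synthetic RT-like data (the authors' raw data are not reproduced here), with the subject-by-condition interaction as the error term, which is the standard error term for a within-subjects factor.

```python
import numpy as np

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, n_conditions) array of condition means per subject.
    Returns (F, df1, df2), using the subject x condition interaction
    as the error term."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subject baselines
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj                    # interaction (error)
    df1, df2 = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df1) / (ss_err / df2)
    return F, df1, df2
```

With 34 subjects and three affect levels this yields df = (2, 66), matching the F2,66 statistics reported above.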
Table 2. Mean and standard deviation values (in parentheses) corresponding to the behavioral data.

            Negative      Positive      Neutral
RT (ms)     1241 (314)    1208 (320)    1181 (319)
Errors      3.4 (1.8)     2.7 (1.6)     2.6 (1.5)

RT: reaction times.
3.2. Electrophysiological data

A selection of the grand averages for all stimulus types is shown in Fig. 2a. These grand averages correspond to those scalp areas where the experimental effects (described later) were most evident. As a consequence of the application of the tPCA, six components were extracted from the ERPs. The factor loadings are represented in Fig. 3. Repeated-measures ANOVAs were carried out on temporal factor scores for Affect type and Electrode. Only TF2 and TF4 were found to be sensitive to emotion. The effect of Affect type alone was significant in both TF2 (F2,66 = 7.34; p < .005) and TF4 (F2,66 = 5.02; p < .05). The interaction between Affect type and Electrode was also significant in TF2 (F114,3762 = 4; p < .05). Hereafter, to make the results easier to follow, these components will be labeled P400 and P100, respectively, on the basis of their latency and polarity. The sPCA subsequently applied to temporal factor scores extracted four spatial factors for the P400 and two spatial factors for the P100. Repeated-measures ANOVAs on P400 and P100 spatial factor scores (directly related to amplitudes, as previously indicated) were carried out for the factor Affect type. For the P400, results reached significance in three out of the four spatial factors. In the parieto-occipital spatial factor (F2,66 = 5.12; p < .05), monitoring graphemes in the names of negative pictures elicited higher amplitudes than letter searching in the names of neutral images. Also, in the left central parietal (F2,66 = 13.43; p < .0001) and the right central (F2,66 = 5.46; p < .05) spatial factors, monitoring graphemes in the names of both positive and negative images elicited enhanced amplitudes as compared to grapheme monitoring in the names of neutral pictures. For the P100, analyses showed that in the frontocentral spatial factor (F2,66 = 4.15; p < .005) searching for letters in positive picture names elicited higher amplitudes than monitoring graphemes in neutral picture names.
Significant effects were also found in a posterior spatial factor (F2,66 = 3.97; p < .05). However, post-hoc analyses revealed that, even though grapheme monitoring in the names of positive pictures elicited larger amplitudes than searching for letters in the names of negative and neutral pictures, the comparisons only reached a statistical trend (p = .07 and p = .05, respectively). Table 3 summarizes the results of these analyses and
topographical maps corresponding to these effects are shown in Fig. 4a. The time course and the polarity of the P400 component resemble to some extent those of the late parietal positivities (LPC) reported in affective research with pictorial stimuli in a variety of tasks, including affective evaluation (Schupp, Junghöfer, Weike, & Hamm, 2003; Schupp et al., 2004), categorization (De Cesari & Codispoti, 2006), passive viewing (Hajcak & Nieuwenhuis, 2006; Pastor et al., 2008) and indirect tasks (Carretié, Hinojosa, Albert, & Mercado, 2006). Although the modulation of the behavioral measures by emotional content suggests that participants were indeed performing the grapheme monitoring task, so that the P100 and P400 effects could be related to word production processes, an additional control experiment was conducted in order to rule out the possibility that these effects were due to the processing of the emotional pictures per se. In this experiment participants passively viewed the same set of images presented during the main study, so that no word production operations were required. If a different pattern of ERP modulations were found, the results of the main experiment could be unequivocally related to the word production stages involved in grapheme monitoring.

4. Control experiment: passive viewing task

4.1. Participants

Eighteen subjects (13 females), ranging in age from 18 to 26 (M = 20), participated in this study as volunteers. All had normal or corrected-to-normal vision, and all were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971; lateralization quotient 86–100%, mean 98%). They gave their informed consent to participate in the study.

4.2. Stimuli

The stimuli were the same set of 18 positive, 18 negative and 18 neutral pictures used in the grapheme monitoring experiment. Again, every picture was presented twice during the recording session. Participants saw the images in the computer before the practice sequence, as in the letter searching experiment.

4.3.
Procedure

The stimuli were presented following the criteria described in the Procedure section of the grapheme monitoring experiment. The only difference was that the participants were asked to focus on the screen and simply watch all of the pictures as they were displayed.

4.4. Data acquisition

Electroencephalographic recording procedures were exactly the same as in the letter searching experiment, including the use of the same electrodes and parameters.

Table 3. Results of the statistical contrasts on the P400 and P100 spatial factors (Affect type).
Fig. 3. Grapheme monitoring tPCA: factor loadings after promax rotation. Temporal factors 4 (P100) and 2 (P400) are drawn in black.
Temporal factor   Spatial factor           ANOVA (df = 2,33)
TF2 (P400)        Parieto-occipital        F = 5.120, p < .05
                  Anterior                 F = .150, n.s.
                  Left central parietal    F = 13.430, p < .001
                  Right central            F = 5.462, p < .01
TF4 (P100)        Posterior                F = 4.153, p < .05
                  Right temporal           F = 3.966, p < .05

TF: temporal factor; df: degrees of freedom; n.s.: non-significant.
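The contrasts in the table above are one-way repeated-measures ANOVAs on PCA factor scores, with Affect type (negative, neutral, positive) as the within-subjects factor. A minimal NumPy sketch of such a test on hypothetical factor scores (the real data are not reproduced here; with 18 subjects and 3 conditions the test has df = 2, 34):

```python
import numpy as np

def rm_anova_1way(scores):
    """One-way repeated-measures ANOVA.

    scores: (n_subjects, k_conditions) array of factor scores.
    Returns F and its degrees of freedom (k-1, (k-1)*(n-1)).
    """
    n, k = scores.shape
    grand = scores.mean()
    # Sum of squares for conditions (the Affect type effect)
    ss_cond = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    # Sum of squares for subjects, removed from the error term
    ss_subj = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, df_cond, df_err

# Hypothetical factor scores: 18 subjects x 3 affect types (neg, neu, pos)
rng = np.random.default_rng(0)
scores = rng.normal(size=(18, 3)) + np.array([0.5, 0.0, 0.6])
F, df1, df2 = rm_anova_1way(scores)
print(F, df1, df2)   # df = 2, 34 for 18 subjects and 3 conditions
```

Removing the subject sum of squares from the error term is what distinguishes the repeated-measures test from a between-subjects one-way ANOVA.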
J.A. Hinojosa et al. / Neuropsychologia 48 (2010) 1725–1734
Fig. 4. Factor scores difference maps of (a) the P400 (TF2) and the P100 (TF4) components in the grapheme monitoring task, and (b) the P500 (TF4) component in the passive viewing task. Note that individual scales have been used. TF: temporal factor; SF: spatial factor; NEG: negative; NEU: neutral; POS: positive.
4.5. Data analysis Data were analyzed in the same way as described in Section 2.5 of the grapheme monitoring experiment.
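The two-step procedure referred to above, a temporal PCA whose factor scores are then submitted to a spatial PCA, can be sketched as follows. This is a bare-bones illustration on simulated data: the 58 electrodes and the 18 subjects by 3 conditions come from the experiments here, the 256 time samples are an assumption, and the promax rotation applied in the actual analyses is omitted for brevity.

```python
import numpy as np

def pca_loadings(X, n_factors):
    """PCA via eigendecomposition of the covariance matrix.

    X: (observations, variables). Returns loadings (variables, n_factors)
    and factor scores (observations, n_factors).
    """
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_factors]  # keep the largest ones
    loadings = vecs[:, order] * np.sqrt(vals[order])
    scores = Xc @ vecs[:, order]
    return loadings, scores

# Simulated ERPs: (subjects * conditions * electrodes) x time points
rng = np.random.default_rng(1)
erps = rng.normal(size=(18 * 3 * 58, 256))   # 256 time samples (assumed)
t_loadings, t_scores = pca_loadings(erps, n_factors=4)      # temporal PCA
# Spatial PCA: for one temporal factor, reshape its scores to
# (subjects * conditions) x electrodes and decompose again
sf_input = t_scores[:, 0].reshape(18 * 3, 58)
s_loadings, s_scores = pca_loadings(sf_input, n_factors=4)  # spatial PCA
print(t_loadings.shape, s_scores.shape)  # (256, 4) (54, 4)
```

In the temporal step the variables are time points, so each factor is a latency window (e.g., the P500); in the spatial step the variables are electrodes, so each factor is a scalp region whose scores then enter the ANOVAs.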
4.6. Results Fig. 2b shows a selection of grand averages that correspond to those scalp areas where experimental effects were most prominent. The tPCA extracted four components from the ERPs. Fig. 5 represents the factor loadings after the promax rotation. Repeated-measures ANOVAs on the temporal factor scores (Affect type and Electrode) showed that only TF4 was sensitive to emotion. Significant results were found for Affect type (F2,34 = 4.34; p < .05). This effect will hereafter be labeled P500 because of its latency and polarity. The sPCA subsequently applied to the temporal factor scores extracted four spatial factors for the P500. Repeated-measures ANOVAs on these spatial factor scores were carried out for the factor Affect type. The results indicated that differences reached significance for a posterior factor (F2,34 = 5.3; p < .05). Post-hoc analyses showed that viewing positive images elicited higher amplitudes than looking at neutral pictures. Differences were also evident at a right temporal factor (F2,34 = 6.5; p < .05). In this case, positive pictures were associated with enhanced amplitudes as compared to
Fig. 5. Passive viewing tPCA: factor loadings after promax rotation. Temporal factor 4 (P500) is drawn in black.
Table 4
Results of the ANOVAs (“Affect type”) on all P500 extracted spatial factors.

Temporal factor   Spatial factor    Affect type (df = 2,17)
TF4 (P500)        Posterior         F = 5.296, p < .05
                  Central           F = 2.691, n.s.
                  Frontal           F = 1.089, n.s.
                  Right temporal    F = 6.455, p < .01

TF: temporal factor; df: degrees of freedom; n.s.: non-significant.
both negative and neutral pictures.1 Table 4 summarizes the results of these analyses and topographical maps corresponding to these effects are shown in Fig. 4b. 4.7. Comparison between grapheme monitoring and passive viewing tasks The results of the two experiments clearly show a different pattern of ERP effects associated with monitoring graphemes in the names of emotional pictures as compared to passively viewing the same emotional pictures. Whereas the former task modulated the amplitude of P100 and P400 components, the latter task influenced the amplitude of a P500 component. In the particular case of late effects, a number of important differences suggested that the P400 effect found in the grapheme monitoring task and the P500 component reported in the passive viewing of emotional pictures were not the same effect. First, there was a 100 ms latency difference between both effects. Second, the amplitudes of the P400 and the P500 were differently modulated by the task. Monitoring graphemes in both negative and positive picture names elicited higher P400 amplitudes than searching letters in neutral image names. Contrary to this finding, only the passive viewing of positive pictures elicited higher P500 amplitudes than viewing neutral images. Moreover, looking at positive pictures elicited higher amplitudes than viewing negative images at right temporal scalp locations. Task differences were further explored by means of an ANOVA on the temporal factor scores of the P400 component reported in the grapheme monitoring task and the P500 effects found in the passive viewing task. The within-subjects factors Affect type (three levels) and Electrode (58 levels) and the between-subjects factor Task (two levels: grapheme monitoring and passive viewing) were included in the analyses.
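As a quick consistency check (not part of the original analyses), the degrees of freedom of this mixed design follow directly from the factor levels; with N participants pooled across the two tasks:

```latex
\begin{aligned}
\mathrm{df}_{\mathrm{Task}\times\mathrm{Affect}} &= (2-1)(3-1) = 2, &\quad \mathrm{df}_{\mathrm{error}} &= 2\,(N-2),\\
\mathrm{df}_{\mathrm{Task}\times\mathrm{Electrode}} &= (2-1)(58-1) = 57, &\quad \mathrm{df}_{\mathrm{error}} &= 57\,(N-2),\\
\mathrm{df}_{\mathrm{Task}\times\mathrm{Affect}\times\mathrm{Electrode}} &= 2\times 57 = 114, &\quad \mathrm{df}_{\mathrm{error}} &= 114\,(N-2).
\end{aligned}
```

The error degrees of freedom reported below (100, 2850 and 5700) are mutually consistent, each implying N - 2 = 50.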
The significant effects found in the Task by Electrode (F57,2850 = 5.2; p < .005), Task by Affect type (F2,100 = 3.3; p < .05) and Task by Affect type by Electrode (F114,5700 = 2.2; p < .05) interactions corroborated that P400 effects found in the grapheme monitoring task were not similar to P500 effects reported in the passive viewing experiment. Finally, topographical analyses indicated that P400 effects extended from parieto-occipital to bilateral-central scalp regions. By contrast, P500 effects were confined to posterior electrodes. To further assess whether these components were distinguishable with regard to their scalp distributions, overall amplitude differences were eliminated by normalization with the vector method (profile analyses; McCarthy & Wood, 1985). This method involves dividing the voltage at each electrode by vector length across all electrodes within each condition in the two tasks. An ANOVA was carried out on these scaled data with the within-subjects factors of
1 Although late ERP amplitude effects have been typically found to be higher for both positive and negative as compared to neutral stimuli in affective research, the finding of specific effects for positive stimuli is not rare in previous literature. In fact, several studies have reported similar results to those found here (Delplanque, Lavoie, Hot, Silvert, & Sequira, 2004; Delplanque et al., 2006; Herbert, Kissler, Junghofer, Peyk, & Rockstroh, 2006; Herbert et al., 2008; Kissler et al., 2009; Schapkin, Gusev, & Kuhl, 2000). This ‘positivity bias’ seems more likely to occur in tasks that do not promote deep encoding strategies, such as the one used in this experiment (see Herbert et al., 2006, 2008 for a detailed discussion on this issue).
Affect type (three levels) and Electrode (58 levels) and the between-subjects factor of Task (two levels). Significant effects in any of the interactions involving Task and Electrode in the ANOVA of these data indicate that there are topographical differences independent of overall ERP activity. The results of these analyses confirmed the differences in the scalp distributions of the P400 and P500 components since the interactions of Task by Electrode (F57,2850 = 4.5; p < .05) and Task by Affect type by Electrode (F114,5700 = 5.12; p < .05) reached significance. 5. Discussion Word production is a complex multistage process linking conceptual representations, lexical entries, phonological forms and articulation (Levelt, 2001; Levelt et al., 1999). The time course of the different stages has been well established in several previous studies (Indefrey & Levelt, 2004; Levelt et al., 1998). However, the impact of affective content on word production remained to be specified. The current study attempted to elucidate this question in part by investigating the influence that emotion exerts on a task that emphasized the retrieval of the segmental content that occurs during phonological encoding. The finding of higher amplitudes in both an early and a late positive component, as well as slowed reaction times for emotional as compared to neutral stimuli, suggests that affect modulates word production at several processing stages. It is generally assumed that reaction times are sensitive to participants’ decision-making processes and task-related strategies (Kounios & Holcomb, 1992; Zhang, Lawson, Guo, & Jiang, 2006). In the current study, identifying graphemes in the names of positive and negative pictures was associated with slower reaction times than in neutral picture names.
Although, to the best of our knowledge, there are no previous data with picture naming tasks, some studies have reported delayed reaction times for the processing of emotional words (especially for negative ones) as compared to neutral words with several indirect tasks including Stroop and lexical decisions (Carretié et al., 2008; Estes & Adelman, 2008; McKay et al., 2004; Pratto & John, 1991; Wentura, Rothermund, & Bak, 2000). Such slowed responses were thought to indicate that attention to emotional information diverts processing resources away from task performance (Estes & Adelman, 2008). Our results suggest that these effects might also generalize to the production of emotional words. Thus, it seems likely that emotional content disrupts the access to the phonological properties of words during picture naming due to the engagement of attention in the processing of affective information. The grapheme monitoring task has been thought to trigger phonological encoding processes, that is, the retrieval of word form properties, or even the transition between phonological encoding and articulation (Hauk et al., 2001; Wheeldon & Levelt, 1995). These processes were proposed to take place between 250 and 450 ms (Indefrey & Levelt, 2004), a suggestion that has been corroborated by the findings of several ERP and MEG studies (Hauk et al., 2001; Laganaro, Morand, & Schnider, 2009; Laganaro, Morand, Schwitter, et al., 2009; Schiller, Bles, & Jansma, 2003; Vihla et al., 2006). In agreement with the results of the analysis of reaction times, monitoring graphemes in names corresponding to positive and negative pictures also elicited an enhanced positivity as compared to grapheme searching in neutral picture names around 400 ms2
2 Although it falls within the time window that has been proposed for phonological encoding to occur, it should be noted that the latency of the P400 found in the present study is slightly delayed in comparison with the latency of the phonological encoding-related positivities reported in other studies. This discrepancy might be partly due to word frequency effects. Phonological encoding has been shown to be slowed by up to 60 ms in low frequency words (Jescheniak & Levelt, 1994; Levelt et
after stimuli onset at bilateral-central and parieto-occipital scalp locations. Similar amplitude enhancements in several late latency components by emotional content have been reported in word comprehension research. They have been taken to index an automatic withdrawal of resources from the ongoing cognitive task due to a privileged processing of affective information (Carretié et al., 2008; Keil, 2006; Kissler et al., 2009). Thus, our data might be interpreted as suggesting that the presence of emotional content attracts attention, prompting the allocation of further processing resources in a way that interferes with the retrieval of word properties during phonological encoding, which is reflected in the amplitude enhancement of the positive component. In terms of the lexical access model proposed by Levelt et al. (1999) and Levelt (2001), phonological encoding can be divided into two planning stages. During ‘segmental spell out’ the individual phonemes of a word and their ordering are retrieved. The number of syllables and the location of the lexical stress form part of the information being retrieved during ‘metrical spell out’. Segmental and metrical information is further combined during segment-to-frame association. These retrieved segments are computed incrementally and syllabified according to universal and language-specific rules during ‘syllabification’. The temporal course of some of these processes has been studied by Schiller, Bles, and Jansma (2003). In particular, these authors explored the time course of metrical encoding (by indicating whether the picture name had an initial or final stress) and syllabification (by pressing a key when the first postvocalic consonant belonged to the first syllable and withholding the response if the consonant belonged to the second syllable). It was found that both processes equally modulated ERP activity around 375 ms.
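To make the planning stages of the Levelt et al. model described above concrete, a toy illustration follows; the word, its phoneme coding and the syllabification are hypothetical simplifications introduced here, not the authors' materials or an implementation of the model.

```python
# Toy illustration of the planning stages in Levelt et al.'s model.
# The word, its phonemes and the syllabification are hypothetical.

# 'Segmental spell out': the individual phonemes and their ordering
segments = ["t", "ai", "g", "@", "r"]        # made-up coding of "tiger"
# 'Metrical spell out': number of syllables and lexical stress location
metrical = {"n_syllables": 2, "stress": 1}   # stress on the first syllable

# Segment-to-frame association / 'syllabification': the ordered segments
# are distributed over the metrical frame
syllables = [["t", "ai"], ["g", "@", "r"]]
assert sum(len(s) for s in syllables) == len(segments)
assert len(syllables) == metrical["n_syllables"]
print(syllables[metrical["stress"] - 1])     # stressed syllable: ['t', 'ai']
```

A grapheme monitoring task probes exactly the segmental tier of such a representation, which is why it is taken to tap phonological encoding.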
Although the retrieval of the segmental content is a central process in grapheme monitoring tasks, the results of the study by Schiller et al. suggest that at least some of the operations involved in phonological encoding occur approximately at the same time. Therefore, we can only conclude that emotional content impacts phonological encoding, without further specifying which of these operations might be involved. Clearly, this question should be the topic of future investigations. The existence of subtle differences in the processing of affective information between negative and positive picture names in the P400 component deserves some consideration. Letter monitoring in positive and negative picture names was associated with amplitude enhancement in central electrodes as compared to neutral picture names. However, this effect extended to parieto-occipital electrodes in the particular case of negative picture names. This wider topographical distribution of the activity elicited by the grapheme search in negative picture names might be tentatively related to arousal effects. In the current study, even though positive and negative pictures were matched in arousal values, subjects rated negative picture names as being more arousing than positive picture names. Research on pictorial information processing and on word comprehension has shown that several ERP components are especially sensitive to the arousal dimension of the emotional experience (Hinojosa, Carretié, Méndez-Bértolo, et al., 2009; Kissler, Assadollahi, & Herbert, 2006; Schupp et al., 2004). Moreover, long latency components seem to be particularly influenced by arousal, showing larger amplitudes as the level of arousal increases (Olofsson, Nordin, Sequeira, & Polich, 2008). Thus, we propose that ERP activity related to phonological encoding during word production might also be particularly sensitive to the arousal aspects of the affective content of the stimuli.
al., 1998; Indefrey & Levelt, 2004). It should be noted that relatively low frequency words were used in this study (36 per 2 million) as compared to those used in Hauk and collaborators’ (200 per million) or Levelt and collaborators’ (100 per million) studies.
The amplitude enhancement of a positive component around 100 ms that was associated with the letter search in names corresponding to positive pictures as compared to negative and neutral stimuli was an unexpected finding of the present study. Due to its latency, it seems unlikely that this effect might be related to phonological encoding processes, which have been shown to occur later in time (Hauk et al., 2001; Indefrey & Levelt, 2004). The P1 component has been related to the mobilization of automatic attentional resources (see Hopfinger & Mangun, 2001 for a review). Moreover, several studies have found larger amplitudes of this wave for emotional as compared to neutral stimuli (Bernat, Bunce, & Shevrin, 2001; Carretié, Hinojosa, Martín-Loeches, & Mercado, 2004; Carretié et al., 2009; Scott et al., 2009). However, in the present study the modulation of the P1 found in the grapheme monitoring task cannot be attributed to affective effects triggered by the emotional images per se, since the passive viewing of the same pictures in the control experiment did not modulate early ERP activity. To our knowledge, P1 effects have not been previously reported in ERP research on word production, no matter whether the tasks used imposed specific demands on phonological encoding or on other processes among those postulated to be involved in language production. However, activations in the right occipital cortex within the first 150 ms have been found in a MEG study with a covert picture naming task (Levelt et al., 1998). These effects were related to access to the lexical concept in that study. Also, the timing of our effects falls within the time window estimated by Indefrey and Levelt (2004) in their meta-analysis for conceptual preparation to take place. According to the prevalent theoretical model in language production, the speaker performs a chain of specific operations before a word is produced.
This stage model assumes that all the processes involved in language production occur, even if the experimental task places stronger demands on some of them. Therefore, even though the task used in the present study was originally designed to study some of the aspects involved in phonological encoding, the time course of our early effect suggests that it could reflect conceptual preparation processes. Another issue concerns the specificity of the P100 amplitude enhancement in relation to the monitoring of graphemes in positive picture names. In language comprehension research with ERPs, early amplitude enhancements for positive words have been interpreted as a manifestation of a ‘positive offset’ (Carretié et al., 2008; Herbert et al., 2006; Kanske & Kotz, 2007). It has been argued that the positive motivational approach system is activated more strongly than the negative motivational withdrawal system by low levels of arousal input (Cacioppo & Gardner, 1999). The results of the present study suggest that this claim might be extended to early effects in language production since the names of the positive images were rated by the participants as being less arousing than those corresponding to negative pictures. Therefore, we might tentatively interpret the P100 amplitude enhancement as reflecting the operation of the positive motivational approach system during the activation of the lexical concepts of the names corresponding to positive pictures. The exact interpretation of this early effect, however, remains an open question for future research. The present study constitutes a general first attempt to explore affective and language production interactions. Therefore, it is important to note several limitations. First, the possible influence of the position of the monitored grapheme across emotional categories could not be determined.
The small number of stimuli involved in such a comparison would not allow us to establish a clear pattern of results. Second, our methods did not allow us to examine whether the age of acquisition of the picture names has a different impact on the monitoring of graphemes in the names of positive, negative and neutral pictures. This also holds for the
interaction between emotion and some other variables that have been shown to modulate language production in several studies. Some of these parameters include the phonological complexity of the picture names (Goldrick & Larson, 2008), the frequency of the syllables (Laganaro & Alario, 2006), or the imageability of the concept (Binder, Medler, Desai, Conant, & Liebenthal, 2005; Graves, Desai, Humphries, Seidenberg, & Binder, in press). These important questions can be addressed in future work. An interesting avenue for future research would also be to explore whether emotional content influences other processing stages (e.g., lexical selection, articulation) by using different tasks such as go/no-go or picture–word interference paradigms. In conclusion, previous research has shown that emotion interacts with language comprehension at several levels and stages of processing. However, the question of the impact of affective information on language production remained unexplored. The results of the present study revealed that emotional content was able to influence the retrieval of the word form properties that occurs during the phonological encoding stage of word production and possibly during conceptual preparation. Acknowledgement The authors would like to thank Arturo Míguez for his help in stimulus preparation and data collection. This work was supported by grant PSI2009-08607 from the Ministerio de Ciencia e Innovación of Spain. References Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge? Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16, 372–382. Abdel Rahman, R., van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 5, 850–860. Alameda, J. R., & Cuetos, F.
(1995). Diccionario de frecuencias de las unidades lingüísticas del castellano. Oviedo: Universidad de Oviedo. Ardila, A. (1991). Errors resembling semantic paralexias in Spanish-speaking aphasics. Brain and Language, 41, 437–445. Bernat, E., Bunce, S., & Shevrin, H. (2001). Event-related brain potentials differentiate positive and negative mood adjectives during both supraliminal and subliminal visual processing. International Journal of Psychophysiology, 42, 11–34. Binder, J. R., Medler, D. A., Desai, R., Conant, L. L., & Liebenthal, E. (2005). Some neurophysiological constraints on models of word naming. Neuroimage, 27, 677–693. Bonin, P., Chalard, M., Méot, A., & Barry, C. (2006). Are age-of-acquisition effects on object naming due simply to differences in object recognition? Comments on Levelt (2002). Memory & Cognition, 34, 1172–1182. Cacioppo, J. T., & Gardner, W. L. (1999). Emotion. Annual Review of Psychology, 50, 191–214. Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. Nature, 403, 428–430. Carreiras, M., Vergara, M., & Perea, M. (2009). ERP correlates of transposed-letter priming effects: The role of vowels versus consonants. Psychophysiology, 46, 34–42. Carretié, L., Hinojosa, J. A., Albert, J., López-Martín, S., de la Gándara, B. S., Igoa, J. M., et al. (2008). Modulation of ongoing cognitive processes by emotionally intense words. Psychophysiology, 45, 188–196. Carretié, L., Hinojosa, J. A., Albert, J., & Mercado, F. (2006). Neural response to sustained affective visual stimulation using an indirect task. Experimental Brain Research, 174, 630–637. Carretié, L., Hinojosa, J. A., López-Martín, S., Albert, J., Tapia, M., & Pozo, M. A. (2009). Danger is worse when it moves: Neural and behavioral indices of enhanced attentional capture by dynamic threatening stimuli. Neuropsychologia, 47, 364–369. Carretié, L., Hinojosa, J. A., Martín-Loeches, M., Mercado, F., & Tapia, M. (2004).
Automatic attention to emotional stimuli: Neural correlates. Human Brain Mapping, 22, 290–299. Catling, J. C., & Johnston, R. A. (2006). Effects of age of acquisition on an object name verification task. British Journal of Psychology, 97, 1–18. Chapman, R. M., & McCrary, J. W. (1995). EP component identification and measurement by principal component analysis. Brain and Cognition, 27, 288–310. Cliff, N. (1987). Analyzing multivariate data. New York: Harcourt Brace Jovanovich. Codispoti, M., Ferrari, V., & Bradley, M. M. (2007). Repetition and event-related potentials: Distinguishing early and late processes in affective picture perception. Journal of Cognitive Neuroscience, 19, 577–586.
Coles, M. G. H., Gratton, G., Kramer, A. F., & Miller, G. (1986). In M. G. H. Coles, E. Donchin, & S. W. Porges (Eds.), Psychophysiology: Systems, processes and applications (pp. 183–221). Amsterdam: Elsevier. De Cesari, A., & Codispoti, M. (2006). Effects of stimulus size on affective modulation. Psychophysiology, 43, 207–215. Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. A. (1997). Lexical access in aphasic and nonaphasic speakers. Psychological Review, 104, 801–838. Delplanque, S., Lavoie, M., Hot, P., Silvert, L., & Sequira, H. (2004). Modulation of cognitive processing by emotional valence studied through event-related potentials in humans. Neuroscience Letters, 356, 1–4. Delplanque, S., Silvert, L., Hot, P., Rigoulot, S., & Sequira, H. (2006). Arousal and valence effects on event-related P3a and P3b during emotional categorization. International Journal of Psychophysiology, 60, 315–322. Dent, K., Johnston, R. A., & Humphreys, G. W. (2008). Age of acquisition and word frequency effects in picture naming: A dual-task investigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 282–301. Dien, J. (2010). Evaluating two-step PCA of ERP data with Geomin, Infomax, Oblimin, Promax, and Varimax rotations. Psychophysiology, 47, 170–183. Dien, J., Beal, D. J., & Berg, P. (2005). Optimizing principal components analysis of event-related potentials: Matrix type, factor loading weighting, extraction, and rotations. Clinical Neurophysiology, 116, 1808–1825. Dillon, D. G., Cooper, J. J., Grent-t‘-Jong, T., Woldorff, M. G., & La Bar, K. S. (2006). Dissociation of event-related potentials indexing arousal and semantic cohesion during emotional word encoding. Brain and Cognition, 62, 43–57. Doyle, M. C., Rugg, M. D., & Wells, T. (1996). A comparison of the electrophysiological effects of formal and repetition priming. Psychophysiology, 33, 132–147. Estes, Z., & Adelman, J. S. (2008).
Automatic vigilance for negative words in lexical decision and naming: Comment on Larsen, Mercer, and Balota (2006). Emotion, 8, 441–444. Eulitz, C., Hauk, O., & Cohen, R. (2000). Electroencephalographic activity over temporal brain areas during phonological encoding in picture naming. Clinical Neurophysiology, 111, 2088–2097. Foti, D., Hajcak, G., & Dien, J. (2009). Differentiating neural responses to emotional pictures: Evidence from temporal-spatial PCA. Psychophysiology, 46, 521–530. Goldrick, M., & Larson, M. (2008). Phonotactic probability influences speech production. Cognition, 107, 1155–1164. Graves, W. W., Grabowski, T. J., Mehta, S., & Gordon, J. K. (2007). A neural signature of phonological access: Distinguishing the effects of word frequency from familiarity and length in overt picture naming. Journal of Cognitive Neuroscience, 19, 617–631. Graves, W. W., Desai, R., Humphries, C., Seidenberg, M. S., & Binder, J. R. (in press). Neural systems for reading aloud: A multiparametric approach. Cerebral Cortex. Hajcak, G., & Nieuwenhuis, S. (2006). Reappraisal modulates the electrocortical response to unpleasant pictures. Cognitive, Affective, & Behavioral Neuroscience, 6, 291–297. Hauk, O., Rockstroh, B., & Eulitz, C. (2001). Grapheme monitoring in picture naming: An electrophysiological study of language production. Brain Topography, 14, 3–13. Herbert, C., Junghofer, M., & Kissler, J. (2008). Event-related potentials to emotional adjectives during reading. Psychophysiology, 45, 487–498. Herbert, C., Kissler, J., Junghofer, M., Peyk, P., & Rockstroh, B. (2006). Processing of emotional adjectives: Evidence from startle EMG and ERPs. Psychophysiology, 43, 197–206. Hinojosa, J. A., Carretié, L., Méndez-Bértolo, C., Míguez, A., & Pozo, M. A. (2009). Arousal contributions to affective priming. Emotion, 9, 164–171. Hinojosa, J. A., Carretié, L., Valcárcel, M. A., Méndez-Bértolo, C., & Pozo, M. A. (2009).
Electrophysiological differences in the processing of affective information in words and pictures. Cognitive, Affective & Behavioral Neuroscience, 9, 173–189. Hopfinger, J. B., & Mangun, G. R. (2001). Electrophysiological studies of reflexive attention. In C. L. Folk, & B. S. Gibson (Eds.), Attraction, distraction and action: Multiple perspectives on attentional capture (pp. 3–26). Amsterdam: Elsevier. Howard, D., Nickels, L., Coltheart, M., & Cole-Virtue, J. (2006). Cumulative semantic inhibition in picture naming: Experimental and computational studies. Cognition, 100, 464–482. Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92, 101–144. Jescheniak, J. D., & Levelt, W. J. M. (1994). Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 824–843. Jescheniak, J. D., & Schriefers, H. (2001). Priming effects from phonologically related distractors in picture-word interference. Quarterly Journal of Experimental Psychology, 54A, 371–382. Kanske, P., & Kotz, S. A. (2007). Concreteness in emotional words: ERP evidence from a hemifield study. Brain Research, 1148, 138–148. Kavé, G., Samuel-Enoch, K., & Adiv, S. (2009). The association between age and the frequency of nouns selected for production. Psychology and Aging, 24, 17–27. Keil, A. (2006). Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli. Progress in Brain Research, 156, 217–232. Kissler, J., Assadollahi, R., & Herbert, C. (2006). Emotional and semantic networks in visual word processing: Insights from ERP studies. Progress in Brain Research, 156, 147–183. Kissler, J., Herbert, C., Peyk, P., & Junghofer, M. (2007). Buzzwords: Early cortical responses to emotional words during reading. Psychological Science, 18, 475–480. Kissler, J., Herbert, C., Winkler, I., & Junghofer, M. (2009).
Emotion and attention in visual word processing. Biological Psychology, 80, 75–83.
Kounios, J., & Holcomb, P. J. (1992). Structure and process in semantic memory: Evidence from event-related brain potentials and reaction times. Journal of Experimental Psychology: General, 121, 459–479. Kuchinke, L., Jacobs, A., Grubich, C., Vo, M. L., Conrad, M., & Herrmann, M. (2005). Incidental effects of emotional valence in single word processing: An fMRI study. Neuroimage, 28, 1022–1032. Laganaro, M., & Alario, F. (2006). On the locus of the syllable frequency effect in speech production. Journal of Memory and Language, 55, 178–196. Laganaro, M., Morand, S., & Schnider, A. (2009). Time course of evoked potential changes in different forms of anomia in aphasia. Journal of Cognitive Neuroscience, 21, 1499–1510. Laganaro, M., Morand, S., Schwitter, V., Zimmermann, C., Carmen, C., & Schnider, A. (2009). Electrophysiological correlates of different anomic patterns in comparison with normal word production. Cortex, 45, 697–707. Lambon Ralph, M. A., Graham, K. S., Ellis, A. W., & Hodges, J. R. (1998). Naming in semantic dementia-what matters? Neuropsychologia, 36, 775–784. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2001). International affective picture system (IAPS): Instruction manual and affective ratings. Technical report A-5. Gainesville, FL: The Center for Research in Psychophysiology, University of Florida. Larsen, R. J., Mercer, K. A., Balota, D. A., & Strube, M. J. (2008). Not all negative words slow down lexical decision and naming speed: Importance of word arousal. Emotion, 8, 445–452. Levelt, W. J. M. (1993). Timing in speech production with special reference to word form encoding. Annals of the New York Academy of Sciences, 682, 283–295. Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13465–13471. Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10, 553–567.
Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1–38. McCarthy, G., & Wood, C. C. (1985). Scalp distributions of event-related potentials: An ambiguity associated with analysis of variance models. Electroencephalography & Clinical Neurophysiology, 62, 203–208. McKay, D., Shafto, M., Taylor, J., Marian, D., Abrams, L., & Dyer, J. (2004). Relations between emotion, memory, and attention: Evidence from taboo Stroop, lexical decision, and immediate memory tasks. Memory & Cognition, 32, 474–488. Meltzer, J. A., Postman-Caucheteux, W. A., McArdle, J. J., & Braun, A. R. (2009). Strategies for longitudinal neuroimaging studies of overt language production. Neuroimage, 15, 745–755. Morrison, C. M., & Ellis, A. W. (2000). Real age of acquisition effects in word naming and lexical decision. British Journal of Psychology, 91, 167–180. Morrison, C. M., Ellis, A. W., & Quinlan, P. T. (1992). Age of acquisition, not word frequency, affects object naming, not object recognition. Memory and Cognition, 20, 704–714. Naumann, E., Maier, S., Diedrich, O., Becker, G., & Laufer, M. E. (1997). Structural, semantic, and emotion-focused processing of neutral and negative nouns: Event-related potential correlates. Journal of Psychophysiology, 11, 234–256. Okada, K., Smith, K. R., Humphries, C., & Hickok, G. (2003). Word length modulates neural activity in auditory cortex during covert object naming. Neuroreport, 14, 2323–2326. Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113. Olofsson, J. K., Nordin, S., Sequeira, H., & Polich, J. (2008). Affective picture processing: An integrative review of ERP findings. Biological Psychology, 77, 247–265. Olofsson, J. K., & Polich, J. (2007). Affective visual event-related potentials: Arousal, repetition, and time-on-task. Biological Psychology, 75, 101–108. Özdemir, R., Roelofs, A., & Levelt, W. J.
M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105, 457–465. Pastor, C., Bradley, M. M., Löw, A., Versace, F., Moltó, J., & Lang, P. J. (2008). Affective picture perception: Emotion, context, and the late positive potential. Brain Research, 1189, 145–151. Pourtois, G., Dan, E. S., Grandjean, D., Sander, D., & Vuilleumier, P. (2005). Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: Time course and topographic evoke-potentials mapping. Human Brain Mapping, 26, 65–79. Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380–391. Rodriguez-Fornells, A., Schmitt, B. M., Kutas, M., & Münte, T. F. (2002). Electrophysiological estimates of the time course of semantic and phonological encoding during listening and naming. Neuropsychologia, 40, 778–787.
Roelofs, A. (2008). Attention, gaze, shifting, and dual-task interference from phonological encoding in spoken word planning. Journal of Experimental Psychology: Human Perception and Performance, 34, 1580–1598. Rozenkrants, B., Olofsson, J. K., & Polich, J. (2008). Affective visual event-related potentials: Arousal, valence, and repetition effects for normal and distorted pictures. International Journal of Psychophysiology, 67, 114–123. Rugg, M. D., Mark, R. E., Wall, P., Schloerscheidt, A. M., Birch, C. S., & Allan, K. (1998). Dissociation of the neural correlates of implicit and explicit memory. Nature, 392, 595–598. Sahin, N. T., Pinker, S., Cash, S. S., Schomer, D., & Halgren, E. (2009). Sequential processing of lexical, grammatical, and phonological information within Broca’s area. Science, 326, 445–449. Salmelin, R., Hari, R., Lounasmaa, O. V., & Sams, M. (1994). Dynamics of brain activation during picture naming. Nature, 368, 463–465. Schacht, A., & Sommer, W. (2009a). Emotions in word and face processing: Early and late cortical responses. Brain and Cognition, 69, 538–550. Schacht, A., & Sommer, W. (2009b). Time course and task dependence of emotion effects in word processing. Cognitive, Affective, & Behavioral Neuroscience, 9, 28–43. Schapkin, S. A., Gusev, A. N., & Kuhl, J. (2000). Categorization of unilaterally presented emotional words: An ERP analysis. Acta Neurobiologiae Experimentalis, 60, 17–28. Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related potential study. Cognitive Brain Research, 17, 819–831. Schuhmann, T., Schiller, N. O., Goebel, R., & Sack, A. T. (2009). The temporal characteristics of functional activation in Broca’s are during overt picture naming. Cortex, 45, 1111–1116. Schupp, H. T., Junghofer, M., Weike, A. I., & Hamm, A. O. (2003). Emotional facilitation of sensory processing in the visual cortex. Psychological Science, 14, 7–13. Schupp, H. 
T., Junghofer, M., Weike, A. I., & Hamm, A. O. (2004). The selective processing of briefly presented pictures: An ERP analysis. Psychophysiology, 41, 441–449. Scott, G. C., O’Donnell, P. J., Leuthold, H., & Sereno, S. C. (2009). Early emotion word processing: Evidence from event-related potentials. Biological Psychology, 80, 95–104. Semlitsch, H. V., Anderer, P., Schuster, P., & Preelich, O. (1986). A solution for reliable and valid reduction of ocular artifacts applied to the P300 ERP. Psychophysiology, 23, 695–703. Smith, N. K., Cacioppo, J. T., Larsen, J. T., & Chartrand, T. L. (2003). May I have your attention please: Electrophysiological responses to positive and negative stimuli. Neuropsychologia, 41, 171–183. Smith, B. M., Münte, T., & Kutas, M. (2000). Electrophysiological estimates of the time course of semantic and phonological encoding during implicit picture naming. Psychophysiology, 37, 473–484. Smith, B. M., Schiltz, K., Zaake, W., Kutas, M., & Münte, T. F. (2001). An electrophysiological analysis of the time course of conceptual and syntactic encoding during tacit picture naming. Journal of Cognitive Neuroscience, 13, 510–522. Van Turennout, M., Hagoort, P., & Brown, C. M. (1997). Electrophysiological evidence on the time course of semantic and phonological process in speech production. Journal of Experimental Psychology: Learning, Memory and Cognition, 23, 787–806. Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 ms. Science, 280, 572–574. Vihla, M., Laine, M., & Salmelin, R. (2006). Cortical dynamics of visual/semantic vs. phonological analysis in picture confrontation. Neuroimage, 33, 732–738. Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194. Wentura, D., Rothermund, K., & Bak, P. (2000). 
Automatic vigilance: The attentiongrabbing power of approach- and avoidance-related social information. Journal of Personality and Social Psychology, 78, 1024–1037. Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34, 311–334. Wilson, S. M., Lisette Isenberg, A., & Hickok, G. (2009). Neural correlates of word production stages delineated by parametric modulation of psycholinguistic variables. Human Brain Mapping, 30, 30596–33608. Zhang, Q., & Damian, M. F. (2009). The time course of semantic and orthographic encoding in Chinese word production: An event-related potential study. Brain Research, 1273, 92–105. Zhang, Q., Lawson, A., Guo, C., & Jiang, Y. (2006). Electrophysiological correlates of visual affective priming. Brain Research Bulletin, 71, 316–323. Zubicaray, G. I., McMahon, K. L., Eastburn, M. M., & Wilson, S. J. (2002). Ortographic/phonological facilitation of naming responses in the picture-word task: An event-related fMRI study using overt vocal responding. Neurimage, 16, 1084–1093.