Negative induced mood influences word production: An event-related potentials study with a covert picture naming task


PII: S0028-3932(16)30462-6
DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2016.12.025
Reference: NSY6216
To appear in: Neuropsychologia
Received date: 5 August 2016; Revised date: 21 December 2016; Accepted date: 22 December 2016


Hinojosa, J.A.1,2*, Fernández-Folgueiras, U.1, Albert, J.1,3, Santaniello, G.1, Pozo, M.A.1, Capilla, A.3

1 Instituto Pluridisciplinar, Universidad Complutense de Madrid, Spain
2 Facultad de Psicología, Universidad Complutense de Madrid, Spain
3 Facultad de Psicología, Universidad Autónoma de Madrid, Spain

*Correspondence:

José A. Hinojosa
Universidad Complutense de Madrid
Instituto Pluridisciplinar
Paseo Juan XXIII, 1
Madrid, 28040, Spain
[email protected]
Tel.: +34 91 394 32 61; Fax: +34 91 394 32 64


Abstract

The present event-related potentials (ERPs) study investigated the effects of mood on phonological encoding processes involved in word generation. For this purpose, negative, positive and neutral affective states were induced in participants during three different recording sessions using short film clips. After the mood induction procedure, participants performed a covert picture naming task in which they searched for target letters in the picture names. The negative compared to the neutral mood condition elicited more negative amplitudes in a component peaking around 290 ms. Furthermore, results from source localization analyses suggested that this activity was potentially generated in the left prefrontal cortex. In contrast, no differences were found in the comparison between positive and neutral moods. Overall, current data suggest that processes involved in the retrieval of phonological information during speech generation are impaired when participants are in a negative mood. The mechanisms underlying these effects are discussed in relation to linguistic and attentional processes, as well as in terms of the use of heuristics.

Key words: Mood; Emotion; Language production; Phonological encoding; ERPs


1. Introduction

It is a well-established finding that mood fluctuations modulate processing at several cognitive levels (see Mitchell & Phillips, 2007, for a review), including attention (Schmitz, De Rosa & Anderson, 2009), working memory (Martin & Kerns, 2011), decision-making (Hockey, Maule, Clough & Bdzola, 2000), problem-solving (Gasper, 2003), cognitive control (Dreisbach & Goschke, 2004) or the formation of social judgments (Forgas, 2007). Several theoretical approaches attempt to explain the general mechanisms that account for mood effects on cognition (see Martin & Clore, 2013, for a review). A widely accepted view, the ‘affect-as-information’ hypothesis, assumes that mood influences the processing style (Bodenhausen, Kramer & Süsser, 1994; Clore & Huntsinger, 2007). Positive mood is associated with a more global and flexible processing mode that relies on heuristics, whereas negative mood is thought to promote a relatively analytical, careful and effortful processing style. Evidence for this proposal comes from behavioral studies showing that participants in a positive mood increase the production of remotely associated words (Isen, Daubman & Nowicki, 1987), improve the production of novel uses of an object (Phillips, Bull, Adams & Fraser, 2002), use more abstract terms to describe events from autobiographical memory (Beukeboom & Semin, 2006) or show greater reliance on general knowledge about social groups when judging individual members (Park & Banaji, 2000). In contrast, negative mood has been found to be associated with the use of more concrete language in the description of autobiographical memories (Beukeboom & Semin, 2006) or with stricter criteria to identify members of stereotyped groups (Park & Banaji, 2000).

Within this framework, some researchers have stated that mood can be explained in terms of approach and withdrawal motivations, where positive moods are associated with a broadened attentional scope that increases the individual’s thought-action repertoires and negative moods are associated with an attentional focus on a narrow range of information (Derryberry & Tucker, 1994; Förster, Friedman, Özelsel & Denzler, 2006; Fredrickson, 2001). Alternatively, capacity limitation theories claim that affective-related thoughts lead to a reduction in the processing capacity that is available for task-related processes as a result of the engagement of cognitive resources in the processing of the individual’s own mood (Ellis & Ashbrook, 1988; Schmeichel, 2007; Seibert & Ellis, 1991). In the language domain, there is considerable evidence from behavioral and neuroimaging studies showing that several processes involved in language comprehension are modulated by a person’s affective mood. Both word recognition and discourse comprehension are facilitated when the emotional meaning of the stimuli is congruent with the participants’ mood (Chung, Tucker, West, Potts, Liotti, et al., 1996; Egidi & Caramazza, 2014; Egidi & Gerrig, 2009; Egidi & Nusbaum, 2012; Ferraro, King, Ronning, Pekarski & Risan, 2003; Niedenthal, Halberstadt & Setterlund, 1997; Niedenthal & Setterlund, 1994; but see Sereno, Scott, Yao, Thaden & O’Donnell, 2015). Mood influences on the processing of verbal stimuli with neutral semantic content have also been investigated. In this respect, increased processing of morphosyntactic information related to subject-verb agreement relations has been observed when participants are in a positive compared to a negative mood (Vissers, Virgillito, Fitzgerald, Speckens, Tendolkar et al., 2010; but see Van Berkum, De Goede, Van Alphen, Mulder & Kerstholt, 2013). Also, prior work has shown that positive compared to negative mood eases the integration of information in semantic memory during sentence and discourse comprehension (Chwilla, Virgillito & Vissers, 2011; Federmeier, Kirson, Moreno & Kutas, 2001; Pinheiro et al., 2013) and the detection of semantic reversal anomalies (Vissers, Chwilla, Egger & Chwilla, 2013).

Furthermore, positive mood facilitates the anticipation of referents based on verb information (Van Berkum et al., 2013), and improves the reliability of semantic coherence judgements (Bolte, Goschke & Kuhl, 2003). The results of these studies indicate that the influence of mood on language comprehension is a rather consistent finding, with positive mood being mainly related to improved task performance compared to negative mood. These differences might be partially attributed to the activity of an overlapping, yet distinct set of brain circuits underlying positive and negative mood processing. In this sense, negative mood has been linked to an increased activation in the amygdala and the left medial and lateral prefrontal cortices (Gray et al., 2002; Keightley et al., 2003; Pascual-Leone, Catala & Pascual-Leone, 1996). Also, there is evidence pointing to the involvement of the anterior cingulate and the ventromedial frontal cortex in negative mood processing, although some studies found attenuated activity (Baker, Frith & Dolan, 1997) whereas others reported increased activation (Kohn et al., 2014) in these areas. Regarding positive mood, increased activation has been reported in the anterior cingulate cortex, the basal ganglia, and the superior and inferior frontal gyri (Matsunaga et al., 2009; Phan, Wagner, Taylor & Liberzon, 2002; Pascual-Leone et al., 1996; Subramanian, Kounios, Parrish & Jung-Beeman, 2009). Most accounts of language processing treat language production and comprehension as quite distinct from each other (Gaskell, 2007). In contrast, based to some extent on evidence which suggests that production and comprehension are influenced by similar linguistic and non-linguistic variables, recent proposals claim that producing and understanding are interwoven (Pickering & Garrod, 2013). Thus, examining whether comprehension and production are subject to modulations by similar non-linguistic aspects may be useful to gain insights into this issue. Given the literature examined above, the existence of mood effects on language comprehension seems a well-established finding.

However, evidence for mood influences on language production is lacking. The main goal of the study presented here is to bridge this gap by recording event-related potentials (ERPs) –a methodology with a high temporal resolution that allows a precise characterization of the processing stages involved in affect and cognition– while participants in different induced moods perform a silent picture naming paradigm. By using this task, we were able to specifically explore operations that required the retrieval of phonological codes. As we shall comment below, although mood effects on the retrieval of phonological form information remain unexplored, prior reports indicate that this processing stage might be particularly sensitive to other affective properties (Hinojosa, Méndez-Bértolo, Carretié, & Pozo, 2010; White, Abrams, LaBat, & Rhynes, 2016). A second goal of our study is to test some of the claims made by modular models, which assume that processing stages involved in speech generation take place without inputs from other cognitive and/or affective domains (Levelt, 1989). Most theoretical frameworks assume that speech production involves several processing stages before a word can be articulated, although differences between models arise with regard to the interdependence or modularity of these processing levels and the serial or parallel flow of information between them (Dell, 1986; Dell, Chang & Griffin, 1999; Garrett, 1988; Levelt, 1989; Levelt, Roelofs & Meyer, 1999). In the stage of conceptual-semantic encoding, the targeted conceptual representation is activated. A second process concerns the selection of the appropriate lexical item [1] –the lemma– in the mental lexicon, which includes access to the syntactic properties of the entry. The third process is related to the retrieval of word form properties that serve as inputs to a phonological encoding process [2].

[1] A lexical item is a word or a combination of words that acts as a basic unit of meaning in a language’s vocabulary (the lexicon). [2] Phonological encoding is the retrieval of the word’s sound.


This processing stage involves several operations, mainly related to the segmental composition [3] and the metrical structure [4] of the utterances. Several factors influence processes involved in phonological encoding, including age of acquisition (Dent, Johnston & Humphreys, 2008; Lambon-Ralph, Graham, Ellis & Hodges, 1998; Morrison & Ellis, 2000), as well as word length (Wilson, Isenberg & Hickok, 2009), frequency of use (Dent et al., 2008; Lambon-Ralph et al., 1998) and familiarity (Lambon-Ralph et al., 1998). The modulatory effects of these variables have been incorporated into most models of language production. In contrast, even though the impact of mood on several cognitive processes has been firmly established, the potential effects of emotion-related variables, such as mood, on speech production have received little attention. Of particular interest to our current purposes are the results of some studies that indirectly point to a possible role of mood during phonological encoding in speech generation. In this sense, prior evidence has shown that emotion influences phonological encoding. For instance, slowed picture naming in the presence of taboo relative to neutral distractors –which persisted on subsequent neutral filler pictures– has been reported in a picture-word interference paradigm (White et al., 2016). In this study, the authors manipulated the phonological overlap between targets and distractors, so these emotional carryover effects were thought to occur at the phonological encoding processing stage. Also, emotional effects during phonological encoding operations have been reported in an ERP study in which participants performed a letter detection task with pictures denoting emotional concepts (Hinojosa et al., 2010).

[3] Segmental composition is the combination of discrete units that occur in sequential speech. [4] The metrical structure is related to the stress pattern of the syllables in a word. [5] Graphemes are individual letters –or groups of letters– that represent a single sound in a word.


The results of this study showed that grapheme [5] monitoring in positive and negative compared to neutral picture labels was associated with slower reaction times and larger positive posterior amplitudes around 400 ms. Notably, emotion and mood differ in several aspects. Moods are typically long-lasting and slow-changing affective states that have no strong associations with a specific event, whereas emotions are usually shorter-lasting, more intense and caused by a particular object or event (Davidson, Ekman, Frijda, Goldsmith, Kagan et al., 1994; Lazarus, 1994; Scherer, 2005). Sometimes, however, the distinction between these concepts may be rather blurry, depending on the circumstance (e.g., mood can change several times throughout a day, which is not a very long period of time). Another source of indirect evidence comes from behavioral studies exploring mood effects on executive functions by means of letter fluency tasks, which involve all the core processes implicated in word production. In this paradigm, participants need to retrieve words beginning with a target letter in a short period of time. A variety of results have been found with this task, with some studies reporting worse (Bartolic, Basso, Schefft, Glauser & Titanic-Schefft, 1999), better (Clark, Iversen & Goodwin, 2001) or even equal (Phillips et al., 2002) word production in negative compared to positive mood participants. Remarkably, in addition to those processes specifically involved in speech generation, verbal fluency paradigms trigger several other processes, such as strategic retrieval, inhibition of previous dominant responses or the capacity to switch search strategies (Mitchell & Phillips, 2007). The impact of mood on these processes is not easy to disentangle without using techniques with an appropriate temporal resolution. Therefore, based on prior behavioral studies, no clear conclusions can be outlined about the specific locus of the effects of positive and negative mood on fluency.

The overlap between cortical regions involved in phonological encoding and those sensitive to the processing of emotional information also argues in favor of a potential effect of participants’ mood on the retrieval of phonological cues during word production. In this sense, the results of several functional magnetic resonance imaging (fMRI) studies indicate that regions in dorsal, medial and infero-lateral prefrontal cortices are involved in representing phonological information during speech generation (Blank, Scott, Murphy, Warburton & Wise, 2002; Bourguignon, 2014; Indefrey & Levelt, 2004). Interestingly, these regions seem to be crucially implicated in emotional experience (including mood), as well as in the cognitive regulation of emotion (Habel, Klein, Kellerman, Shah & Schneider, 2005; Ochsner, Ray, Cooper, Robertson, Chopra, Gabrieli et al., 2004; Phan et al., 2002). The main aim of the present ERP study was to provide for the first time a direct test of the effects of mood on phonological encoding during speech generation. To achieve this purpose, we induced positive, negative and neutral moods using short-duration videos in three different recording sessions. Subsequently, participants were engaged in a silent monitoring task, which is a variation of covert picture naming tasks. Monitoring tasks have been widely used to investigate language production processes, especially at the level of phonological encoding (Camen, Morand & Laganaro, 2010; Schiller, Bles & Jansma, 2003; Wheeldon & Levelt 1995; Wheeldon & Morgan 2002). Specifically, in our experiment, participants viewed a picture and then had to decide whether or not the image’s name included a particular target letter that was presented before the image. Given the close relation that exists between graphemic and phonemic codes, this task is suitable to investigate phonological encoding processes (Indefrey & Levelt, 2004; Wheeldon & Levelt, 1995). Prior research has found ERP modulations between 250 and 450 ms specifically linked to phonological encoding (Levelt et al., 1998).

Crucially, the timing of these effects roughly corresponds to the time course that has been proposed for the retrieval of phonological information in theoretical models of language production (Indefrey, 2011; Indefrey & Levelt, 2004). Cumming, Seddoh and Jallo (2016) examined whether phonological encoding is influenced by consonant type. In this study, participants covertly named pictures that had an initial liquid (e.g., /l/) or a stop (e.g., /d/) consonant. Picture labels with initial liquids elicited a larger negativity between 293 and 371 ms post-picture onset, which suggests that phonological code retrieval has already started around 290 ms. A similar finding was reported by Perret and Laganaro (2012) using a spoken and a handwritten picture naming task. Speaking, which was the only task that required phonological encoding, elicited a more pronounced negativity in frontal electrodes than writing. These effects started at 260 ms after picture onset. In a different study, Eulitz, Hauk and Cohen (2000) compared ERP responses elicited by the passive viewing of the stimuli with those elicited by a covert picture naming task and a covert/overt production of a nominal phrase using the name and the color of the picture. The authors found that phonological encoding processes were reflected by a larger negativity at frontal electrodes and a larger positivity at posterior electrodes in the time interval between 275 and 400 ms after picture onset. In a series of experiments (Laganaro, Morand & Schnider, 2009; Laganaro, Morand, Michel, Spinelli & Schnider, 2011; Laganaro, Python & Toepel, 2013), Laganaro and coworkers compared the performance of aphasic patients with phonological impairments and healthy controls in a picture naming task. Differences in ERP amplitudes between patients and controls were found in the 250-450 ms time interval after picture onset, which was reflected by an increased frontal negativity and posterior positivity for aphasic patients with phonological impairments.

Finally, Hauk, Rockstroh & Eulitz (2004) found increased frontal negativities and posterior positivities with an onset at 320 ms after picture presentation when participants monitored for a target grapheme at different syllabic positions in a silent picture naming task. To summarize, the results of these studies with a variety of tasks indicate that the retrieval of phonological information during speech generation seems likely to be related to increased amplitudes in an anterior negative and/or a positive posterior component peaking at different latencies in the time range between 250 and 450 ms, depending on the study. In the present study, hypotheses regarding mood effects on phonological encoding may be only tentative, since prior research investigating the effects of affective variables on speech generation is very scarce. Based on the results of the studies that reported impaired performance on language comprehension and verbal fluency tasks when participants were in a negative mood (Vissers et al., 2010, 2013), we would expect that negative mood worsens phonological encoding compared to neutral and positive moods. This should be reflected in longer reaction times (RTs) and/or more errors. With regard to ERP results, we focused on a component peaking in the latency range between 250 and 450 ms, in which ERP activity indexing phonological encoding has been reported in prior studies (Eulitz et al., 2000; Laganaro et al., 2011). In particular, we expected a larger anterior negativity and/or posterior positivity peaking within this time window, as a reflection of the additional efforts needed to retrieve phonological information (Eulitz et al., 2000). In contrast, based on prior reports (Federmeier et al., 2001; Van Berkum et al., 2013), a processing advantage would be expected for those participants in a positive mood, which would result in shorter RTs and/or fewer errors and reduced amplitudes in a negative anterior and/or a positive posterior component peaking between 250 and 450 ms. In order to get a more precise characterization of the phenomenon, we were also interested in exploring and making inferences on the possible neural origin of mood effects on phonological encoding by conducting source location analyses with the eLORETA source model (Pascual-Marqui, 2007).

Activation in dorsolateral and/or inferior frontal areas was hypothesized, since these regions have been associated in prior hemodynamic studies with both phonological encoding in picture naming tasks and processing of affective information (Blank et al., 2002; Ochsner et al., 2004).

2. Methods

2.1. Participants

Twenty-five native Spanish speakers took part in the experiment. Participants were college students recruited from the Complutense University of Madrid via printed and electronic advertisements. Data from three participants were excluded due to equipment failure and muscular artifacts, leaving a total of twenty-two participants (16 women, age range from 20 to 30 years, M = 23, SD = 3.69). All participants were right-handed, according to the Edinburgh Handedness Inventory (Oldfield, 1971; M = 85%). They reported no history of neurological or psychiatric disorders and had normal or corrected-to-normal vision. Participants gave their informed consent to participate in the study and received monetary compensation for their participation.

2.2. Stimuli

Viewing emotional clips is one of the most commonly used and effective procedures to induce mood in participants (Gerrards-Hesse, Spies, & Hesse, 1994; Gross & Levenson, 1995; Schaefer, Nils, Sánchez & Philippot, 2010; Westermann, Stahl & Hesse, 1996; Zhang, Yu & Barrett, 2014). Thus, we selected six film excerpts of about 3 min each (mean duration = 2’24’’). Two videos were used for inducing mood in each of the negative, positive and neutral conditions. In the neutral condition, neutral video excerpts were presented to participants with the aim of inducing a neutral mood that attenuated any initial emotional bias they may have had when coming to the laboratory (Egidi & Nusbaum, 2012) and to match the experimental procedure across conditions.

Also, it has been noted that a neutral condition is needed in order to better delineate the relationship between positive and negative moods (Mitchell & Phillips, 2007). The six video clips were selected from a normative study (Megías, Mateos, Ribaudi & Fernández-Abascal, 2011) that collected emotional ratings of 57 excerpts with the Self-Assessment Manikin (SAM) in Spanish participants. We chose extracts from commercial movies: Ghost (valence = 7.64; arousal = 5.64) and Dead Poets Society (valence = 7.12; arousal = 5.94) for positive mood, Saving Private Ryan (valence = 2.15; arousal = 5.92) and Schindler's List (valence = 1.55; arousal = 5.73) for negative mood and Blue (valence = 4.92; arousal = 2.69) and L'amant (valence = 5.44; arousal = 2.81) for neutral mood. The order of presentation of the two films within a particular mood induction session was counterbalanced across participants. A total of 228 images were selected for the monitoring task. They came from a set of 254 images standardized for a Spanish sample (Sanfeliú & Fernández, 1996), taken from Snodgrass and Vanderwart’s (1980) database. We assigned the experimental pictures to three lists of 76 stimuli, which were matched according to the following criteria: (a) all names had similar values in naming agreement; (b) all names were one-word familiar names; (c) the names used by the participants had similar frequency of use in Spanish (Sebastián-Gallés, Martí, Carreiras & Cuetos, 2000); (d) the names were equated in word length (number of syllables); and (e) images were matched in familiarity and visual complexity (Sanfeliú & Fernández, 1996). These criteria were contrasted via one-way analysis of variance (ANOVA; see Table 1) and post-hoc analyses with the Bonferroni correction (alpha <.05). Values for these variables are reported in Table 1.
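As an illustration of the list-matching check just described, the following Python sketch runs a one-way ANOVA on each matching variable across the three stimulus lists. The file name and column names are placeholders, not the authors' materials.

```python
# Sketch of the list-matching check (hypothetical file and column names).
import pandas as pd
from scipy import stats

norms = pd.read_csv("picture_norms.csv")  # one row per picture: list (1-3) plus norming variables

for variable in ["name_agreement", "frequency", "n_syllables", "familiarity", "visual_complexity"]:
    groups = [norms.loc[norms["list"] == k, variable] for k in (1, 2, 3)]
    f_val, p_val = stats.f_oneway(*groups)   # one-way ANOVA across the three lists
    print(f"{variable}: F = {f_val:.2f}, p = {p_val:.3f}")
# Non-significant ANOVAs (p > .05) indicate matched lists; significant ones would be
# followed up with Bonferroni-corrected pairwise comparisons, as described above.
```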

2.3. Procedure

Participants were tested in three separate recording sessions (one week interval, +/- one day), in which a different mood was induced in them. Thus, a within-subject design with the factor Mood (positive, negative and neutral) was used to take advantage of the statistical efficiency of these designs (Greenwald, 1976). To minimize the potential influence of practice and learning in our mood manipulation, the order of mood induction, as well as the assignment of the three lists of images to every induced mood, was counterbalanced across participants according to a Latin square design. Additionally, the order of presentation of pictures within a given list was randomized for each participant. Participants performed a grapheme monitoring task in which they first had to covertly generate the name denoted by a picture and subsequently decide whether a given letter was present or missing (Hauk et al., 2001; Hinojosa et al., 2010; Eulitz et al., 2000). Thus, in order to fulfill task requirements, concept-driven phonological selection seems critical (Bourguignon, 2014). These processes are thought to be involved even in visual presentations of the stimuli (Jescheniak & Schriefers, 2001; De Zubicaray, McMahon, Eastburn & Wilson, 2002). Remarkably, Spanish is a transparent language with a quite shallow orthography with regular spelling-to-sound correspondences, which makes it difficult to differentiate between graphemic and phonemic processing (Ardila, 1991). Given this correspondence, a grapheme instead of a phoneme monitoring task was preferred since it allows for the presentation of the stimuli in the same sensory modality. The experimental procedure is shown in Figure 1. A question about the target letter was presented for 1000 ms at the beginning of each trial. Thereafter, a fixation cross appeared for 1000 ms. A picture then replaced the fixation cross and disappeared from the screen as soon as participants responded or after 3000 ms.

The intertrial interval was a 1000 ms fixation cross. In order to minimize ocular interference, participants were asked to avoid blinking during the presentation of the picture. They were instructed to press a button with the left index finger if the target letter was present in the image name or, alternatively, to press a button with the right index finger if the picture name did not include the grapheme. Key-response assignment was counterbalanced across participants. Stimuli were presented on a 19-in. LCD-LED Samsung 943 N color monitor with a viewing distance of approximately 60 cm. All stimuli were generated off-line using Matlab 7.5 (The MathWorks Inc., Natick, MA, USA) and were presented using Psychtoolbox (Brainard, 1997) on a white background.
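The trial structure just described can be summarized as a simple event table. The sketch below (plain Python, not the authors' Matlab/Psychtoolbox code) only encodes the timing parameters reported above; the event labels are illustrative.

```python
# Schematic timing of a single trial (values taken from the Procedure section).
TRIAL_EVENTS = [
    ("target_letter_question", 1000),  # "Does the picture name contain <letter>?"
    ("fixation_cross", 1000),
    ("picture", 3000),                 # terminated early by the participant's button press
    ("intertrial_fixation", 1000),
]

t = 0
for label, duration_ms in TRIAL_EVENTS:
    print(f"{t:5d}-{t + duration_ms:5d} ms  {label}")
    t += duration_ms
```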

**** Figure 1****

Prior to the recording session, subjects were familiarized with the pictures that were included in the task in order to make sure that the participants knew the name corresponding to each image. Correct picture names were provided in the few cases where errors were made (participants made 5.3 naming errors on average). Thereafter, task instructions were given to participants. They were asked to perform the task without delay, as in Eulitz and coworkers’ (2000) study. The experimental session started with a short practice block of 20 images. Stimuli were presented to participants in two blocks of 38 pictures, each of which was preceded by the presentation of a short video for mood induction purposes. In each of these blocks, the target letter was present in half of the images and absent in the remaining half. In those stimuli that included the target grapheme, it could appear equiprobably at a random position within the first third, the second third or the last third of the syllables of the stimulus name. Consonant letters were always used as targets, since production times for vowels are longer.

Also, the processing of vowels and consonants triggers different brain mechanisms (Caramazza, Chialant, Capasso & Miceli, 2000; Carreiras, Duñabeitia & Molinaro, 2009). Participants were asked to rate the current state of their mood by filling in a score sheet that included scales for the valence (from -5, extremely negative, to +5, extremely positive) and arousal (from -5, extremely calm, to +5, extremely activated) dimensions. As in several other studies (e.g. Egidi & Nusbaum, 2012; Kross, Berman, Mischel, Smith & Wager, 2011; Sereno et al., 2015; Smith & Wager, 2011; Verhees, Chwilla, Tromp & Vissers, 2015), this method was selected due to its brevity in comparison with other widely used mood-assessment scales, such as the PANAS. Participants rated their mood when arriving at the lab. Participants also scored their state after watching each film clip, in order to assess whether the videos successfully induced the intended mood. Finally, they completed the questionnaire once again immediately following the end of the experiment.

2.4. EEG data recording and preprocessing

Electroencephalogram (EEG) activity was recorded using an electrode cap with 64 Ag/AgCl disk electrodes (Quick-Cap, Neuroscan, Inc., USA), arranged according to the International 10-20 system (American Electroencephalographic Society, 1994; electrode positions: FP1/z/2, AF7/3/4/8, F7/5/3/1/z/2/4/6/8, FT7/8, FC5/3/1/z/2/4/6, T7/8, C5/3/1/z/2/4/6, TP7/8, CP5/3/1/z/2/4/6, P7/5/3/1/z/2/4/6/8, PO7/5/3/z/4/6/8, O1/z/2). All electrodes were on-line referenced to the linked mastoids. Horizontal (HEOG) and vertical (VEOG) eye activity was measured by using two electrodes placed lateral to the right and the left canthus for the HEOG and two electrodes above and below the left eye for the VEOG. Participants’ skin was abraded with skin preparation gel (Nuprep, Weaver and Company) until impedances were below 10 kΩ.

Recordings were amplified using Neuroscan SynAmps amplifiers, continuously digitized at a sample rate of 1000 Hz and filtered on-line with a frequency band-pass of 0.1-100 Hz (3 dB points for -6 dB octave roll-off). Data were processed off-line using Matlab version 7.7 (The MathWorks) and Fieldtrip (http://fieldtrip.fcdonders.nl; Oostenveld, Fries, Maris, & Schoffelen, 2011). The EEG data were digitally band-pass filtered between 0.5 and 30 Hz (12 dB/oct. roll-off). Thereafter, the continuous EEG was segmented from 200 ms before to 800 ms after the presentation of the pictures. Baseline correction was made using the 200 ms period before the onset of the images. Also, trials for which subjects responded erroneously or did not respond were eliminated from further analyses. EEG data were visually inspected for artifacts. Trials containing blinks or eye-movements during the time window of visual presentation (-0.2 to 0.4 s) were manually identified in the EOG recording and discarded. In addition, trials with other artifacts (e.g. cable movement, swallowing or muscular artifacts) were also rejected from subsequent analyses (± 100 μV voltage rejection threshold). This incorrect response and artifact rejection procedure led to the average admission of 66.10% of trials in the positive mood condition (mean=50.23, SD=15.4), 69.80% in the negative mood condition (mean=52.9; SD=12.13) and 74.50% in the neutral mood condition (mean=55.32; SD=12.32). There were no significant differences in the number of valid trials per condition (F(2,42)=0.86, p=0.42). Grand averages were computed in all subjects separately for each mood condition and electrode location.
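The preprocessing pipeline was implemented by the authors in Matlab/FieldTrip; the sketch below is a rough equivalent in Python with MNE, given here only to make the steps concrete. The file name and the way events are read are placeholders/assumptions.

```python
# Rough MNE-Python analogue of the preprocessing described above (placeholder names).
import mne

raw = mne.io.read_raw_cnt("sub01_negative_session.cnt", preload=True)  # Neuroscan recording
raw.filter(l_freq=0.5, h_freq=30.0)                                    # 0.5-30 Hz band-pass

events, event_id = mne.events_from_annotations(raw)                    # picture-onset triggers (assumed)
epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=-0.2, tmax=0.8,          # 200 ms before to 800 ms after picture onset
    baseline=(None, 0),           # baseline correction with the 200 ms pre-stimulus period
    reject=dict(eeg=100e-6),      # +/- 100 microvolt artifact rejection threshold
    preload=True,
)
# Trials with incorrect or missing responses and with ocular artifacts would also be
# dropped here (e.g., epochs.drop(...)); condition averages are then computed per subject.
evoked = epochs.average()
```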

2.5. Data analysis

2.5.1. Behavioral analysis

Performance in the grapheme monitoring task was analyzed. To this end, average reaction times (RTs) and number of errors for each participant in each mood condition were submitted to non-parametric contrasts, as these data were not normally distributed (as assessed by Shapiro-Wilk W tests, ps<0.05). Specifically, these two behavioral variables were analyzed with the Friedman test with Mood as a within-subjects factor (three levels: negative, positive and neutral).
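A minimal sketch of this non-parametric analysis in Python/scipy is given below, using simulated RTs in place of the real data. The effect size r = Z/√N introduced in the next subsection is computed from a normal approximation to the Wilcoxon statistic; taking N as the total number of observations across the two paired samples (2 × 22 = 44) reproduces the r values reported in the Results (e.g., Z = 3.33 gives r ≈ 0.50), but that reading of N is our assumption.

```python
# Sketch of the non-parametric behavioral analysis (simulated data, not the real RTs).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 22  # participants in the final sample
rt = {mood: rng.normal(1200, 200, n) for mood in ("negative", "positive", "neutral")}

# Shapiro-Wilk normality check per condition (p < .05 motivates non-parametric tests)
print({mood: round(stats.shapiro(x).pvalue, 3) for mood, x in rt.items()})

# Friedman test with Mood as a within-subjects factor
chi2, p = stats.friedmanchisquare(rt["negative"], rt["positive"], rt["neutral"])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.3f}")

# Planned comparison (negative vs. neutral) with a Wilcoxon signed-rank test and the
# effect size r = Z / sqrt(N); Z comes from the normal approximation (assuming no ties).
w, p_w = stats.wilcoxon(rt["negative"], rt["neutral"])
z = (w - n * (n + 1) / 4) / np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
r = abs(z) / np.sqrt(2 * n)  # N taken as the total number of observations (2 * n)
print(f"Wilcoxon: Z = {z:.2f}, p = {p_w:.3f}, r = {r:.2f}")
```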

2.5.2. Scalp ERP Analysis

Detection and quantification of the ERP component specifically associated with phonological encoding was performed through a temporo-spatial principal component analysis (PCA). This procedure has been shown to be an effective data-driven method for analyzing ERP data (Chapman and McCrary, 1995; Dien and Frishkoff, 2005). Firstly, a covariance-matrix-based temporal PCA (tPCA) was used to disentangle ERP components over time. The main advantage of tPCA over conventional procedures based on a visual inspection of the recordings and on ‘temporal windows of interest’ is that it presents each ERP component separately and with its ‘clean’ shape, extracting and quantifying it free of the influences of adjacent or subjacent components. Indeed, the waveform recorded at a site on the head over a period of several hundreds of milliseconds represents a complex superposition of different overlapping electrical potentials. Such recordings can stymie visual inspection. In brief, tPCA computes the covariance between all ERP time points, which tends to be high between the time points involved in the same component, and low between those belonging to different components. The solution is therefore a set of factors made up of highly covarying time points, which ideally correspond to ERP components. A temporal factor score, the tPCA-derived parameter in which extracted temporal factors may be quantified, is linearly related to amplitude (Carretié et al., 2004; Dien 2010). The decision on the number of factors to select was based on the scree test (Cattell, 1966). Extracted components were submitted to promax rotation, as recommended (Dien, 2010, 2012). As explained in detail later, the ERP component associated with phonological encoding (peaking between 250 and 450 ms) was satisfactorily identified and disentangled from other components. Once quantified in temporal terms, temporal factor scores were submitted to spatial PCA (sPCA) in order to decompose the scalp topography of the phonological encoding-related component into its main spatial regions. Whereas temporal PCA separates ERP components along time, sPCA distinguishes ERP components along space, each spatial factor ideally reflecting one of the concurrent neural processes underlying each temporal factor. This spatial decomposition is an advisable strategy prior to statistical contrast, because ERP components frequently behave differently in some scalp areas than they do in others (e.g., they present opposite polarity or react differently to experimental manipulations). Basically, each region or spatial factor is formed with the scalp points where recordings tend to covary. As a result, the shape of the sPCA-configured regions is functionally based, and scarcely resembles the shape of the geometrically configured regions defined by traditional procedures. Moreover, each spatial factor can also be quantified through the spatial factor score, a single parameter that reflects the amplitude of the whole spatial factor. Also in this case, retained factors were submitted to promax rotation. Finally, temporo-spatial factor scores (equivalent to traditional amplitudes, as explained above) were submitted to a Friedman test, since data were not normally distributed, as assessed by Shapiro-Wilk W tests (ps<0.05). Subsequently, planned comparisons using Wilcoxon tests were used to compare the positive versus neutral, and negative versus neutral conditions.
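To make the tPCA step concrete, the following numpy sketch extracts temporal factors from a hypothetical matrix of ERP waveforms (rows are observations such as subject × condition × electrode averages, columns are time points). Promax rotation, which the authors applied, is omitted here, and the number of factors would be chosen from the scree plot of eigenvalues.

```python
# Minimal covariance-based temporal PCA sketch (unrotated; array shapes are hypothetical).
import numpy as np

def temporal_pca(erps, n_factors):
    """erps: (n_observations, n_timepoints) array of ERP waveforms."""
    centered = erps - erps.mean(axis=0)           # center each time point
    cov = np.cov(centered, rowvar=False)          # covariance between time points
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order]                  # temporal factor loadings (time courses)
    scores = centered @ loadings                  # temporal factor scores per observation
    explained = eigvals[order] / eigvals.sum()    # proportion of variance per factor
    return loadings, scores, explained

# Random data standing in for 22 subjects x 3 conditions x 64 electrodes, 1000 samples each;
# the factor scores would then be decomposed spatially (sPCA) and entered into the
# Friedman/Wilcoxon tests sketched above.
loadings, scores, explained = temporal_pca(np.random.randn(22 * 3 * 64, 1000), n_factors=5)
```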


Effect sizes for significant Wilcoxon tests were calculated using the formula r = Z/√N (Rosenthal, 1994; small effect: r < 0.3, medium effect: 0.3 < r < 0.5, large effect: r > 0.5).

2.5.3. Source location analysis

Exact low-resolution brain electromagnetic tomography (eLORETA; Pascual-Marqui, 2007) was employed to tentatively explore those cortical regions that were sensitive to the experimental effects observed at the scalp level. In its current version, eLORETA computes the standardized current density at each of 6239 voxels mainly localized in the cortical gray matter of the digitized Montreal Neurological Institute (MNI) standard brain. Specifically, three-dimensional current-density estimates for relevant ERP components (as defined by tPCA) were computed for each subject and each condition. Source analysis was performed on temporal factor scores instead of direct voltages because previous studies have shown substantial improvement in source localization accuracy by temporal PCA over the conventional windowed differences approach (Carretié et al., 2004; Dien, Spencer & Donchin, 2003; Dien, 2010). Subsequently, the voxel-based whole-brain eLORETA images (6239 voxels at a spatial resolution of 5 mm) were compared between experimental conditions, using the eLORETA built-in voxel-wise randomization tests based on statistical non-parametric mapping (SnPM). As explained by Nichols and Holmes (2002), the non-parametric methodology inherently avoids multiple comparison-derived problems and does not require any assumption of Gaussianity. Voxels that showed significant differences between conditions (t-statistic, one-tailed corrected p<0.05) were located in anatomical regions and Brodmann areas (BAs).
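The voxel-wise randomization test itself is performed inside the eLORETA software; purely as an illustration of the SnPM logic (paired sign-flip randomization with a max-statistic correction), a generic numpy sketch follows. Array names and shapes are hypothetical.

```python
# Generic sketch of a voxel-wise paired randomization test with max-statistic correction,
# in the spirit of SnPM (the actual analysis used eLORETA's built-in tests).
import numpy as np

def paired_t(diff):
    # one-sample t over subjects for each voxel; diff has shape (n_subjects, n_voxels)
    return diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(diff.shape[0]))

def sign_flip_test(cond_a, cond_b, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b                       # e.g., negative minus neutral current densities
    t_obs = paired_t(diff)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))   # randomly flip each subject's sign
        max_null[i] = paired_t(diff * flips).max()                 # max over voxels controls the FWE rate
    p_corrected = (max_null[None, :] >= t_obs[:, None]).mean(axis=1)  # one-tailed corrected p per voxel
    return t_obs, p_corrected
```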

3. Results

3.1. Mood induction manipulation

Participants rated the valence and arousal values of their current mood four times throughout each session. Mean ratings across these sessions are summarized in Figure 2. These data were not normally distributed (Shapiro-Wilk W tests, ps<0.05) and thus, were analyzed by means of nonparametric Friedman tests with Mood as a within-subjects factor. No differences in ratings were noticeable, either for valence (χ2=4.09, p=0.13) or for arousal (χ2=3.22, p=0.2), when participants arrived at the laboratory. In contrast, differences emerged after watching the first video excerpts in both valence (χ2=34.96, p<.001) and arousal (χ2=12.16, p=0.002). Post-hoc Wilcoxon tests indicated that participants in the positive mood condition showed a significant increase in valence compared to those in the neutral and the negative mood conditions (Z=3.22, p=0.001, r=0.49, and Z=3.94, p<.001, r=0.59, respectively). In the negative condition we observed a significant decrease in valence ratings (negative vs. neutral: Z=-3.95, p<.001, r=0.59; see above the negative vs. positive contrast), as well as higher arousal scores (negative vs. neutral: Z=3.33, p=0.001, r=0.5; negative vs. positive: Z=2.46, p=0.01, r=0.37), compared to the neutral and positive mood conditions.

**** Figure 2****

Again, differences in valence (χ2=34.3, p<.001) and arousal (χ2=22.11, p<.001) between conditions were found after the second film clip presentation. Post-hoc Wilcoxon tests on valence and arousal scores replicated the pattern of results observed after the first clip presentations (valence: positive>neutral>negative, positive vs. neutral: Z=2.86, p=0.004, r=0.43; positive vs. negative: Z=3.93, p<.001, r=0.59; neutral vs. negative: Z=3.75, p<.001, r=0.57; arousal: negative>positive>neutral, negative vs. positive: Z=2.31, p=0.02, r=0.35, negative vs. neutral: Z=3.64, p<.001, r=0.55; positive vs. neutral: Z=2.72, p=0.006, r=0.41).

Subtle differences in valence ratings still persisted when participants rated their mood at the end of the experiment (χ2=11.22, p=0.04), with lower scores in the negative mood compared to the positive and the neutral mood conditions (negative vs. positive: Z=-2.34, p=0.02, r=0.35; negative vs. neutral: Z=-2.58, p=0.01, r=0.39). Similarly, differences in arousal were still found at the end of the experiment (χ2=7.43, p=0.02), with higher scores in the negative mood compared to the neutral mood condition (Z=2.12, p=0.03, r=0.32). Overall, the results of these analyses suggest that although the intended mood was successfully induced in participants in every experimental session, this was particularly evident in the case of the negative mood.

3.2. Behavioral data

As already mentioned, a nonparametric Friedman test with Mood as a within-subjects factor was applied to RTs and number of errors. With respect to errors, the Friedman test was not significant (χ2=0.95, p=0.62). Participants made a mean total number of 21.45 errors (SD=9.84; minimum=11 and maximum=58). Means and standard deviations of number of errors were 8.14±5.54 for the positive mood induction, 6.72±5.36 in the negative induction and 6.59±5.41 in the neutral mood. Also, the Friedman test was not significant for the analysis of RTs (χ2=2.82, p=0.2). Means and standard deviations of RTs for each mood condition were 1186.4±193.6 ms (neutral), 1250.8±246.5 ms (negative) and 1222±240.9 ms (positive).

3.3. Electrophysiological data

Figure 3 shows a selection of grand averages once activity from the baseline (pre-stimulus recording) had been subtracted from each ERP.

These grand averages correspond to left frontal electrodes, where the critical experimental effects were most evident. As stated in the introduction, phonological encoding processes have been associated with ERP activity peaking between 250 and 450 ms in prior work. In the present study, we observed a negative deflection peaking around 290 ms, with a maximum at frontal electrodes. Visual inspection of grand averages also suggested more negative amplitudes of this component (N290) in the negative, compared to the neutral mood condition (see also the topographical voltage map showing the difference between negative and neutral mood conditions for the N290 in Figure 3).

**** Figure 3****

As a consequence of the application of the tPCA, five temporal factors (TF) were extracted from the ERPs. Figure 4 displays the time course, peak latency and topographical distribution of these factors, as well as the variance explained by each of them. Factor peak-latency and topography characteristics clearly associate TF1 with the component labeled N290 in grand averages. Notably, TF1 was the only factor that peaked within the proposed time course for phonological encoding (i.e., between 250 and 450 ms). It was the factor accounting for most of the variance (60.45%). Hence, further sPCA was only performed for the TF1. Specifically, sPCA decomposed TF1 (N290) into three spatial factors showing the following topography: left frontal, posterior and right frontocentral (Figure 5).

**** Figure 4****
**** Figure 5****


Friedman tests on the three N290 spatial factors with the factor Mood were performed. The main effect of Mood was only significant in the left frontal scalp region (χ2=10.64, p=0.005). Planned comparisons using Wilcoxon tests confirmed that frontally-distributed N290 showed larger negative amplitudes in the negative, compared to the neutral mood induction (Z=3.33, p=0.001, r=0.5), whereas no significant differences were observed between positive and neutral mood conditions (Z=0.73, p=0.46). No significant effects of Mood were found for either the posterior or the right frontocentral N290 (χ2=3.27, p=0.19 and χ2=4.36, p=0.11, respectively).

3.4. Source-location data

The last analytical step consisted of making inferences about the cortical regions that might underlie scalp-recorded activity. To this end, three-dimensional current-density estimates for the N290 component (TF1 as defined by tPCA) were computed for each subject and each mood condition using the eLORETA (Pascual-Marqui, 2007) software package. Subsequently, the voxel-based whole-brain images were compared between conditions using SnPM. Resembling the results from the scalp ERP analysis, greater N290-related activation was observed in the negative, compared to the neutral mood condition (t=3.39, p=0.005; number of significant voxels=21). As shown in Figure 6, this activation was observed in the left lateral prefrontal cortex (peak MNI coordinates: X = -25, Y = 55, Z = 30; all significant voxels were observed in left BAs 9 and 10). There were no significant differences in activation between positive and neutral mood conditions (t=3.67, p=0.1).

**** Figure 6****

4. Discussion

In the current study, we examined the interplay between mood and word production. In particular, we were concerned with the influence of participants’ mood on phonological encoding processes. The analysis of mood scores indicated that the method used for mood induction was successful, which is in line with prior studies using video films to elicit mood (Van Berkum et al., 2013; Verhees et al., 2015; Zhang et al., 2014). In this sense, participants were in a positive mood after watching positive clips while they were in a negative mood after watching clips with negative content. Nonetheless, differences between positive and negative moods were found with regard to the level of activation induced by the films, which may partially account for some of the current findings as we will discuss later. Our data show that after the retrieval of the phonological representation of a word denoting an image, it took longer for participants to perform a grapheme monitoring task when they were in a negative (1251 ms) compared to a neutral mood (1186 ms), although this difference was not statistically significant. Nonetheless, differences were found in ERP waveforms. In this sense, a frontal component peaking around 290 ms showed more negative amplitudes in negative, compared to neutral induced moods. Since the latency of this effect falls within the period of time that has been previously associated with phonological encoding in word production studies (Indefrey & Levelt, 2004; Laganaro, Morand & Schnider, 2009; Laganaro, Morand, Schwitter, Zimmermann, Camen et al., 2009; Riés, Janssen, Burle & Alario, 2013), current results may be taken to suggest that being in a negative mood disrupts those processes involved in the selection of phonological representations during speech generation. The results of source location analyses (eLORETA) showed that the left lateral PFC (BA9/BA10) was the estimated solution for the scalp-recorded N290 effects.

This is consistent with reports from prior fMRI and PET studies using verbal fluency and naming tasks (Abrahams, Goldstein, Simmons, Brammer et al., 2003; Kerns, Cohen, Stenger & Carter, 2004; Poline, Vandenberghe, Holmes, Friston & Frackowiak, 1996), as well as with those using experimental paradigms specifically relying on phonological encoding processes (Bourguignon, 2014; Burton, Small & Blumstein, 2000; Liu, Hue, Chen, Chuang, Liang et al., 2006). Additionally, BA9 and BA10 activation has been reported in fMRI studies dealing with emotional regulation (Ochsner et al., 2004; Zwanzger, Steinberg, Rehbein, Bröckelmann, Dobel et al., 2014), and when participants internally attended to their moods (Habel et al., 2005; Phan et al., 2002). In line with the involvement of medial and lateral prefrontal cortices in cognitive and affective processing, it has been proposed that these areas mediate the integration of moods and cognitive functions required to complete a complex, novel and effortful task (Burgess & Wu, 2013; Gray, 2004; Mitchell & Phillips, 2007; Phan et al., 2002). It should be noted that some caution is needed when interpreting results based on source location analyses, due to the limitations these methods have in solving the inverse problem of determining the position and orientation of the dipoles corresponding to the voltage distribution measured at the scalp surface (Friston et al., 2008; Luck, 2014). In this sense, these techniques cannot provide direct, unambiguous evidence about underlying neural activity (Keil et al., 2014). These limitations do not allow us to establish strong conclusions based on these data or to make strong claims on the participation of prefrontal cortices in the processing of mood and phonology. Rather, we can tentatively infer that the finding of the left lateral PFC as being the source of the scalp-recorded activity points to a possible role of this area in the interplay between negative mood and the complex cascade of processes that are needed to perform the picture naming task.

In this sense, it seems reasonable to assume that, in our task, after perceiving pictures, participants should generate the word associated with the images. Subsequently, they would be able to perform the sequential scanning of the target letter once the phonological structure has been retrieved from long-term memory. Thus, it could be speculated that this lateral PFC activation is a product of the influence of the negative mood on the retrieval and selection of phonological form information, or alternatively, on working memory demands required for grapheme monitoring (Burton et al., 2000). Nonetheless, future studies with hemodynamic-based neuroimaging techniques or ERP recordings from the surface of the cortex in neurosurgery patients (Luck, 2014) should attempt to provide converging evidence about the brain regions implicated in these processes. With regard to the mechanisms that may account for mood effects on phonological encoding, several possibilities arise. There is evidence suggesting interactive modulations between phonological and other representational levels involved in speech production. In this sense, several studies have reported phonological facilitation for naming words overlapping in form with semantically activated lexical items that are irrelevant to the task (Costa, Caramazza & Sebastián-Gallés, 2000; Gollan & Acenas, 2004; Morsella & Miozzo, 2002; see Goldrick, 2006, for a review). Additional findings come from studies investigating the mixed error effect (Harley & MacAndrew, 2001; Martin, Gagnon, Schwartz, Dell & Saffran, 1996), which refers to the observation that mixed phonological and semantic errors (‘cat’ for ‘rat’) occur more often than predicted, based on the rates of purely semantic (‘cat’ for ‘dog’) or phonological errors (‘cab’ for ‘cat’). These interactive effects have been explained by the existence of facilitatory and inhibitory feedback mechanisms between phonological and word-level representations (Chen & Mirman, 2012; Dell, 1986; Goldrick & Rapp, 2002; Sadat, Martin, Costa & Alario, 2014).

Thus, it could be argued that the activation of similar feedback connections from mood to phonological representations may be responsible for the impaired execution in the grapheme monitoring task when participants are in a negative mood. A second possibility is that participants rely on the application of heuristic strategies to accomplish the selection of phonological representations associated with each image and the subsequent monitoring of the graphemes. In fact, it has been suggested that heuristics play an important role in language processing, since the use of these rules has been documented at different linguistic levels including semantic, syntactic or phonological processes (Caramazza & Zurif, 1976; Ferreira & Patson, 2007; Gallo, Meadow, Johnson & Foster, 2008; Osterhout & Swinney, 1989; Vissers et al., 2013). The use of heuristics, which are grounded in expectations based on world knowledge, would help people to guide decision making and improve task performance. In this sense, participants’ responses in our letter search task could be partially biased by an estimation of the frequency of appearance of a given letter in the Spanish language (Tversky & Kahneman, 1973). According to some theoretical perspectives, negative mood seems to be related to the use of a more analytic processing style that avoids the use of cognitive shortcuts (Bodenhausen et al., 1994; Zadra & Clore, 2011). Thus, under this perspective, our effects would indicate the involvement of additional processing resources due to difficulties in using heuristics to perform the grapheme monitoring task when participants are in a negative mood. Finally, our data could be explained in terms of general attentional and/or motivational mechanisms. Negative moods have been associated with a narrowing of attention (Derryberry & Tucker, 1994; Förster et al., 2006).

Also, the finding of impaired task performance in prior studies manipulating mood has been thought to reflect a decrease in the processing capacity because cognitive resources are committed to the processing of participants’ own moods (Ellis & Ashbrook, 1988; Schmeichel, 2007), to repetitive and uncontrollable thinking focused on negative events (rumination; Miyake, Friedman, Emerson, Witzki, Howerter & Wager, 2000; Whitmer & Banich, 2007), or to concentration on mood regulation processes to re-establish positive mood (Mitchell & Phillips, 2007). Along this line, some theories assume delays in speech generation –and in particular during phonological encoding– when participants’ attention is engaged in the processing of affective-related features (Roelofs & Piai, 2011; White et al., 2012). Thus, drawing attentional resources away from the demands placed by the letter searching task due to rumination or emotional self-regulation, in conjunction with a narrowing of attention, may have led to a disruption in the retrieval of the phonological components of a word. The present findings have some potential implications for the debate concerning the modularity of language processing. Modular views of language production claim that processing stages involved in speech generation are encapsulated and independent of one another. Moreover, no interaction between language and other non-linguistic systems would occur (Garrett, 1980; Levelt et al., 1999), so, according to this perspective, mood would have no influence on any of the stages involved in language production. In contrast, non-modular models question the information encapsulation and the lack of interaction among components involved in language production or in other cognitive domains (Dell, Chang & Griffin, 1999). Accordingly, this proposal could, in principle, assume the existence of mood effects on speech production.

In agreement with prior data showing interactions between phonological features and other sources of linguistic information during word retrieval processes (Alario, Schiller, Domoto-Reilly & Caramazza, 2003; Jaeger, Furth & Hilliard, 2012; Miceli, Capasso & Caramazza, 1999), we have found that phonological encoding processes in speech generation are not encapsulated and might be modulated by mood. Therefore, our data agree with non-modular proposals according to which language production interacts not only with other linguistic subsystems but also with more general, non-linguistic systems, like those involved in mood processing (Dell et al., 1999; Trueswell & Tanenhaus, 1994). Similar evidence for interactive theories of language comes from language comprehension studies that have observed mood effects on different levels of linguistic representations (Chwilla et al., 2011; Egidi & Gerrig, 2009; Federmeier et al., 2001; Van Berkum et al., 2013; Verhees et al., 2015). A final comment should be devoted to the absence of positive mood modulations on phonological encoding during word production. Although null effects should be interpreted with caution, it has been claimed that positive moods may have less impact on cognitive processing because they require less behavioral adjustment than negative moods (Chu, 2014; Pham, 2007). In this regard, positive moods are typically associated with safe situations with few action requirements, whereas negative moods indicate problematic situations that make action necessary (Schwarz, Bless & Bohner, 1991). Interestingly, the absence of effects in the positive mood condition could also be interpreted in the light of proposals that have incorporated mechanisms to account for affective modulations in speech generation. In particular, the Weaver++ model (Roelofs, 2003; Roelofs & Piai, 2011) postulates that emotion can impact picture naming via attentional mechanisms (see also White et al., 2015). Only if the arousal level of the words exceeds a competition threshold does the processing of affective features compete with picture naming, resulting in impaired performance. In agreement with this view, it has been shown that greater performance effects on cognitive tasks are elicited by higher emotion arousal (Chu, 2014; see Pham, 2007 for a review).

positive mood, the diminished level of arousal induced by our positive compared to our negative video excerpts may explain, in part, the lack of effects on the grapheme monitoring task. These data point to the necessity of collecting not only valence but also arousal scores in studies investigating the interplay between language and mood. Also, they emphasize the importance of including a neutral mood condition when dealing with mood manipulations (Mitchell & Phillips, 2007), a condition that has been absent in several prior studies. Otherwise, it cannot be ruled out that benefits in the positive mood condition are just the reverse of an impaired execution when participants are in a negative mood. In other words, only the existence of differences in task performance between participants’ in a positive and a neutral mood condition could be taken as strong evidence for facilitated processing. 4.1. Limitations and future directions It could be argued that the frontal negative component (N290) found in our study might be contaminated by visual ERPs elicited by the presentation of pictures in our task. However, the results of two prior studies suggest that ERPs associated with phonological encoding in covert naming tasks differ from ERP activity elicited by a passive viewing of the images. In this sense, Eulitz el al. (2000) found that the N1 and P2 components of the visual ERPs shared a similar topography and were originated within the same brain areas in both, the passive and the naming task. Interestingly, activity between 250 and 400 ms -indexing phonological encoding- showed the involvement of additional brain areas during naming. Also, Hinojosa and collaborators (2010), found that grapheme monitoring in positive and negative pictures elicited a more positive posterior component at 400 ms compared to neutral images. In contrast, only the passive viewing of the positive, compared to the neutral images was associated with an increased positive component at 500 ms, a latency that falls outside the time 31

course estimated for phonological encoding (Levelt et al., 1998). Additionally, the results of topographical analyses revealed a different scalp distribution for these effects. A second aspect concerns to the existence of differences in arousal between the positive and negative mood conditions makes it difficult to disentangle the contribution of valence and arousal to our results, since our mood conditions were associated with different ratings in the two dimensions. In consequence, valence or arousal (or both) may have accounted for the observed differences. Future research could shed some light on this question by, for instance, comparing performance in silent picture naming tasks after inducing negative low-arousing (e.g., sad moods), negative high-arousing and positive moods to participants. Finally, it should be noted that the present study is a first approximation to the study of mood influences on language production. As discussed elsewhere (Chwilla et al., 2011; Verhees et al., 2010, 2103, 2015), future studies should try to disentangle the relative contribution of purely linguistic processes involved in speech generation and more general factors, such as motivation or attention. For instance, this could be achieved by manipulating the level of processing of the stimuli with different task demands (see Verhees et al., 2015 for a similar approach within the framework of the interplay between mood and syntactic processing in language comprehension). 4.2. Conclusions In sum, the present study investigated the influence of participants’ mood in language production. We showed that, in agreement with the results reported in language comprehension tasks, negative mood interacts with those processes involved in speech generation. This finding suggests that some non-linguistic variables can similarly influence language production and comprehension. In a more general sense, the results of the present study contribute to the accumulating body of evidence that 32

highlights the importance of considering several aspects of emotion and mood as a powerful source of modulation of linguistic processes.


Acknowledgements

We would like to thank two anonymous reviewers for their helpful comments on this manuscript. We would also like to thank Sara López-Martín for her help with figures. This work was supported by grants PSI2012-37535 and PSI2015-68368-P (MINECO/FEDER) and Ref. H2015/HUM-3327 from the Comunidad de Madrid.


References

Abrahams, S., Goldstein, L. H., Simmons, A., Brammer, M. J., Williams, S. C., Giampietro, V. P., ... & Leigh, P. N. (2003). Functional magnetic resonance imaging of verbal fluency and confrontation naming using compressed image acquisition to permit overt responses. Human Brain Mapping, 20(1), 29-40. Alario, F. X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. American Electroencephalographic Society. (1994). Guideline thirteen: Guidelines for standard electrode position nomenclature. Journal of Clinical Neurophysiology, 11, 111-113. Ardila, A. (1991). Errors resembling semantic paralexias in Spanish-speaking aphasics. Brain and Language, 41(3), 437-445. Bartolic, E. I., Basso, M. R., Schefft, B. K., Glauser, T., & Titanic-Schefft, M. (1999). Effects of experimentally-induced emotional states on frontal lobe cognitive task performance. Neuropsychologia, 37(6), 677-683. Baker, S. C., Frith, C. D., & Dolan, R. J. (1997). The interaction between mood and cognitive function studied with PET. Psychological Medicine, 27(03), 565-578. Beukeboom, C. J., & Semin, G. R. (2006). How mood turns on language. Journal of Experimental Social Psychology, 42(5), 553-566. Blank, S. C., Scott, S. K., Murphy, K., Warburton, E., & Wise, R. J. (2002). Speech production: Wernicke, Broca and beyond. Brain, 125(8), 1829-1838. Bodenhausen, G. V., Kramer, G. P., & Süsser, K. (1994). Happiness and stereotypic thinking in social judgment. Journal of Personality and Social Psychology, 66(4), 621-632.

Bolte, A., Goschke, T., & Kuhl, J. (2003). Emotion and intuition effects of positive and negative mood on implicit judgments of semantic coherence. Psychological Science, 14(5), 416-421. Bourguignon, N. J. (2014). A rostro-caudal axis for language in the frontal lobe: the role of executive control in speech production. Neuroscience & Biobehavioral Reviews, 47, 431-444. Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433-436. Burgess; P. W., & Wu, H. C. (2013). Rostral prefrontal cortex (Brodmann area 10): metacognition in the brain. In D. T. Struss & R. T. Knight (Eds.), Principles of frontal lobe function, 2nd ed. (pp. 524-544). New York, NY: Oxford University Press. Camen, C., Morand, S., & Laganaro, M. (2010). Re-evaluating the time course of gender and phonological encoding during silent monitoring tasks estimated by ERP: serial or parallel processing?. Journal of Psycholinguistic Research, 39(1), 35-49. Burton, M. W., Small, S. L., & Blumstein, S. E. (2000). The role of segmentation in phonological processing: an fMRI investigation. Journal of Cognitive Neuroscience, 12(4), 679-690. Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. Nature, 403(6768), 428-430. Caramazza, A., & Zurif, E. B. (1976). Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and Language, 3(4), 572-582. Carreiras, M., Duñabeitia, J. A., & Molinaro, N. (2009). Consonants and vowels contribute differently to visual word recognition: ERPs of relative position priming. Cerebral Cortex, 19(11), 2659-2670.


Carretié, L., Tapia, M., Mercado, F., Albert, J., López-Martín, S., de la Serna, J. M. (2004). Voltage-based versus factor score-based source localization analyses of electrophysiological brain activity: A comparison. Brain Topography, 17, 109-115. Chapman, R. M., & McCrary, J. W. (1995). EP component identification and measurement by principal components-analysis. Brain and Cognition, 27(3), 288-310. Chen, Q., & Mirman, D. (2012). Competition and cooperation among similar representations: toward a unified account of facilitative and inhibitory effects of lexical neighbors. Psychological Review, 119(2), 417. Chu, O. (2014). The effect of mood on set-switching abilities in younger and older adults. Electronic Theses and Dissertations. Paper 5013. Chung, G., Tucker, D. M., West, P., Potts, G. F., Liotti, M., Luu, P., & Hartry, A. L. (1996). Emotional expectancy: Brain electrical activity associated with an emotional bias in interpreting life events. Psychophysiology, 33(3), 218-233. Chwilla, D. J., Virgillito, D., & Vissers, C. T. W. (2011). The relationship of language and emotion: N400 support for an embodied view of language comprehension. Journal of Cognitive Neuroscience, 23(9), 2400-2414. Clark, L., Iversen, S. D., & Goodwin, G. M. (2001). The influence of positive and negative mood states on risk taking, verbal fluency, and salivary cortisol. Journal of Affective Disorders, 63(1), 179-187. Clore, G. L., & Huntsinger, J. R. (2007). How emotions inform judgment and regulate thought. Trends in Cognitive Sciences, 11(9), 393-399. Costa, A., Caramazza, A., & Sebastian-Galles, N. (2000). The cognate facilitation effect: implications for models of lexical access. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(5), 1283.

Cummings, A., Seddoh, A., & Jallo, B. (2016). Phonological code retrieval during picture naming: Influence of consonant class. Brain Research, 1635, 71-85. Davidson, R. J., Ekman, P., Frijda, N. H., Goldsmith, H. H., Kagan, J., Lazarus, …, & Clark, L. A. (1994). How are emotions distinguished from moods, temperament, and other related affective constructs?. In P. Ekman & R. J. Davidson (Eds.), The nature of emotion: Fundamental questions. Series in affective science (pp. 49-96). New York, NY: Oxford University Press. De Zubicaray, G. I., McMahon, K. L., Eastburn, M. M., & Wilson, S. J. (2002). Orthographic/phonological facilitation of naming responses in the picture–word task: An event-related fMRI study using overt vocal responding. Neuroimage, 16(4), 1084-1093. Dell, G. S. (1986). A spreading-activation theory of retrieval in sentence production. Psychological Review, 93(3), 283. Dell, G. S. (1988). The retrieval of phonological forms in production: Tests of predictions from a connectionist model. Journal of Memory and Language, 27(2), 124-142. Dell, G. S., Chang, F., & Griffin, Z. M. (1999). Connectionist models of language production: Lexical access and grammatical encoding. Cognitive Science, 23(4), 517-542. Dent, K., Johnston, R. A., & Humphreys, G. W. (2008). Age of acquisition and word frequency effects in picture naming: A dual-task investigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(2), 282. Derryberry, D., & Tucker, D. M. (1994). Motivating the focus of attention. In P. M. Niedenthal & Kitayama (Eds.), The heart’s eye: Emotional influences in perception and attention (pp. 167-196). San Diego, CA: Academic Press.

Dien, J. (2010). Evaluating two-step PCA of ERP data with geomin, infomax, oblimin, promax, and varimax rotations. Psychophysiology, 47, 170-183. Dien, J. (2012). Applying principal components analysis to event-related potentials: a tutorial. Developmental Neuropsychology, 37, 497-517. Dien, J., Spencer, K. M., & Donchin, E. (2003). Localization of the event-related potential novelty response as defined by principal components analysis. Brain Research. Cognitive Brain Research, 17, 637-650. Dreisbach, G., & Goschke, T. (2004). How positive affect modulates cognitive control: reduced perseveration at the cost of increased distractibility. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 343-353. Egidi, G., & Caramazza, A. (2014). Mood-dependent integration in discourse comprehension: Happy and sad moods affect consistency processing via different brain networks. NeuroImage, 103, 20-32. Egidi, G., & Gerrig, R. J. (2009). How valence affects language processing: Negativity bias and mood congruence in narrative comprehension. Memory & Cognition, 37(5), 547-555. Egidi, G., & Nusbaum, H. C. (2012). Emotional language processing: how mood affects integration processes during discourse comprehension. Brain and Language, 122(3), 199-210. Ellis, H. C., & Ashbrook, P. W. (1988). Resource allocation model of the effects of depressed mood states on memory. In K. Fiedler & J. Forgas (Eds.), Affect, Cognition and Social Behavior. Toronto: Hogrefe. Eulitz, C., Hauk, O., & Cohen, R. (2000). Electroencephalographic activity over temporal brain areas during phonological encoding in picture naming. Clinical Neurophysiology, 111(11), 2088-2097.

Federmeier, K. D., Kirson, D. A., Moreno, E. M., & Kutas, M. (2001). Effects of transient, mild mood states on semantic memory organization and use: an event-related potential investigation in humans. Neuroscience Letters, 305(3), 149-152. Ferraro, F. R., King, B., Ronning, B., Pekarski, K., & Risan, J. (2003). Effects of induced emotional state on lexical processing in younger and older adults. The Journal of Psychology, 137(3), 262-272. Ferreira, F., & Patson, N. D. (2007). The ‘good enough’ approach to language comprehension. Language and Linguistics Compass, 1(1-2), 71-83. Förster, J., Friedman, R. S., Özelsel, A., & Denzler, M. (2006). Enactment of approach and avoidance behavior influences the scope of perceptual and conceptual attention. Journal of Experimental Social Psychology, 42(2), 133-146. Forgas, J. P. (2007). When sad is better than happy: Negative affect can improve the quality and effectiveness of persuasive messages and social influence strategies. Journal of Experimental Social Psychology, 43(4), 513-528. Fredrickson, B. L. (2001). The role of positive emotions in positive psychology: The broaden-and-build theory of positive emotions. American Psychologist, 56(3), 218-226. Friston, K., Harrison, L., Daunizeau, J., Kiebel, S., Phillips, C., Trujillo-Barreto, N., ... & Mattout, J. (2008). Multiple sparse priors for the M/EEG inverse problem. NeuroImage, 39(3), 1104-1120. Gallo, D. A., Meadow, N. G., Johnson, E. L., & Foster, K. T. (2008). Deep levels of processing elicit a distinctiveness heuristic: Evidence from the criterial recollection task. Journal of Memory and Language, 58(4), 1095-1111. Garrett, M. (1980). Levels of processing in sentence production. In B. Butterworth (Ed.), Language Production (pp. 177-220). London: Academic Press.

Garrett, M. F. (1988). Processes in language production. In F. J. Newmeyer (Ed.), Linguistics: The Cambridge Survey, Vol. 3, Language: Psychological and biological aspects (pp. 69-96). Cambridge: Cambridge University Press. Gaskell, G. (2007). Oxford handbook of psycholinguistics. Oxford University Press. Gasper, K. (2003). When necessity is the mother of invention: Mood and problem solving. Journal of Experimental Social Psychology, 39(3), 248-262. Gerrards‐Hesse, A., Spies, K., & Hesse, F. W. (1994). Experimental inductions of emotional states and their effectiveness: A review. British Journal of Psychology, 85(1), 55-78. Goldrick, M. (2006). Limited interaction in speech production: Chronometric, speech error, and neuropsychological evidence. Language and Cognitive Processes, 21(7-8), 817-855. Goldrick, M., & Rapp, B. (2002). A restricted interaction account (RIA) of spoken word production: The best of both worlds. Aphasiology, 16(1-2), 20-55. Gollan, T. H., & Acenas, L. A. R. (2004). What is a TOT? Cognate and translation effects on tip-of-the-tongue states in Spanish-English and tagalog-English bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 246. Gray, J. R. (2004). Integration of emotion and cognitive control. Current Directions in Psychological Science, 13(2), 46-48. Gray, J. R., Braver, T. S., & Raichle, M. E. (2002). Integration of emotion and cognition in the lateral prefrontal cortex. Proceedings of the National Academy of Sciences, 99(6), 4115-4120. Greenwald, A. G. (1976). Within-subjects designs: To use or not to use?. Psychological Bulletin, 83(2), 314.


Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition & Emotion, 9(1), 87-108. Habel, U., Klein, M., Kellermann, T., Shah, N. J., & Schneider, F. (2005). Same or different? Neural correlates of happy and sad mood in healthy males. Neuroimage, 26(1), 206-214. Hauk, O., Rockstroh, B., & Eulitz, C. (2001). Grapheme monitoring in picture naming: an electrophysiological study of language production. Brain Topography, 14(1), 313. Harley, T. A., & MacAndrew, S. B. (2001). Constraints upon word substitution speech errors. Journal of Psycholinguistic Research, 30(4), 395-418. Hinojosa, J. A., Méndez-Bértolo, C., Carretié, L., & Pozo, M. A. (2010). Emotion modulates language production during covert picture naming. Neuropsychologia, 48(6), 1725-1734. Hockey, G. R. J., John Maule, A., Clough, P. J., & Bdzola, L. (2000). Effects of negative mood states on risk in everyday decision making. Cognition & Emotion, 14(6), 823-855. Indefrey, P., & Levelt, W. J. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1), 101-144. Isen, A. M., Daubman, K. A., & Nowicki, G. P. (1987). Positive affect facilitates creative problem solving. Journal of Personality and Social Psychology, 52(6), 1122-1131. Jaeger, T. F., Furth, K., & Hilliard, C. (2012). Phonological overlap affects lexical selection during sentence production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5), 1439.


Jescheniak, J. D., & Schriefers, H. (2001). Priming effects from phonologically related distractors in picture–word interference. The Quarterly Journal of Experimental Psychology: Section A, 54(2), 371-382. Keightley, M. L., Seminowicz, D. A., Bagby, R. M., Costa, P. T., Fossati, P., & Mayberg, H. S. (2003). Personality influences limbic-cortical interactions during sad mood induction. Neuroimage, 20(4), 2031-2039. Kerns, J. G., Cohen, J. D., Stenger, V. A., & Carter, C. S. (2004). Prefrontal cortex guides context-appropriate responding during language production. Neuron, 43(2), 283-291. Kohn, N., Falkenberg, I., Kellermann, T., Eickhoff, S. B., Gur, R. C., & Habel, U. (2013). Neural correlates of effective and ineffective mood induction. Social Cognitive and Affective Neuroscience, 9(6), 864-872. Kross, E., Berman, M. G., Mischel, W., Smith, E. E., & Wager, T. D. (2011). Social rejection shares somatosensory representations with physical pain. Proceedings of the National Academy of Sciences, 108(15), 6270-6275. Laganaro, M., Morand, S., & Schnider, A. (2009a). Time course of evoked-potential changes in different forms of anomia in aphasia. Journal of Cognitive Neuroscience, 21(8), 1499-1510. Laganaro, M., Morand, S., Schwitter, V., Zimmermann, C., Camen, C., & Schnider, A. (2009b). Electrophysiological correlates of different anomic patterns in comparison with normal word production. Cortex, 45(6), 697-707. Laganaro, M., Morand, S., Michel, C. M., Spinelli, L., & Schnider, A. (2011). ERP correlates of word production before and after stroke in an aphasic patient. Journal of Cognitive Neuroscience, 23(2), 374-381.


Laganaro, M., Python, G., & Toepel, U. (2013). Dynamics of phonological–phonetic encoding in word production: Evidence from diverging ERPs between stroke patients and controls. Brain and Language, 126(2), 123-132. Lambon-Ralph, M. A., Graham, K. S., Ellis, A. W., & Hodges, J. R. (1998). Naming in semantic dementia—what matters?. Neuropsychologia, 36(8), 775-784. Lazarus, R. (1994). The stable and unstable in emotion. In P. Ekman, & R. J. Davidson (Eds.), The nature of emotion: Fundamental questions. Series in affective science (pp. 79-85). New York, NY: Oxford University Press. Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press. Levelt, W. J., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22(01), 1-38. Liu, C. L., Hue, C. W., Chen, C. C., Chuang, K. H., Liang, K. C., Wang, Y. H., ... & Chen, J. H. (2006). Dissociated roles of the middle frontal gyri in the processing of Chinese characters. Neuroreport, 17(13), 1397-1401. Luck, S. J. (2014). An introduction to the event-related potential technique (2nd ed.). Cambridge: MIT Press. Martin, L. L., & Clore, G. L. (2013). Theories of mood and cognition: A user's guidebook. Psychology Press. Martin, N., Gagnon, D. A., Schwartz, M. F., Dell, G. S. & Saffran, E. M. (1996). Phonological facilitation of semantic errors in normal and aphasic speakers. Language and Cognitive Processes, 11(3), 257-282. Martin, E. A., & Kerns, J. G. (2011). The influence of positive mood on different aspects of cognitive control. Cognition & Emotion, 25(2), 265-279.


Matsunaga, M., Isowa, T., Kimura, K., Miyakoshi, M., Kanayama, N., Murakami, H., ... & Kaneko, H. (2009). Associations among positive mood, brain, and cardiovascular activities in an affectively positive situation. Brain Research, 1263, 93-103. Megías, C. F., Mateos, J. C. P., Ribaudi, J. S., & Fernández-Abascal, E. G. (2011). Validación española de una batería de películas para inducir emociones. Psicothema, 23(4), 778-785. Miceli, G., Capasso, R., & Caramazza, A. (1999). Sublexical conversion procedures and the interaction of phonological and orthographic lexical forms. Cognitive Neuropsychology, 16(6), 557-572. Mitchell, R. L., & Phillips, L. H. (2007). The psychological, neurochemical and functional neuroanatomical mediators of the effects of positive and negative mood on executive functions. Neuropsychologia, 45(4), 617-629. Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49-100. Morrison, C. M., & Ellis, A. W. (2000). Real age of acquisition effects in word naming and lexical decision. British Journal of Psychology, 91(2), 167-180. Morsella, E., & Miozzo, M. (2002). Evidence for a cascade model of lexical access in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(3), 555. Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: A primer with examples. Human Brain Mapping, 15, 1-25.


Niedenthal, P. M., Halberstadt, J. B., & Setterlund, M. B. (1997). Being happy and seeing''happy'': Emotional state mediates visual word recognition. Cognition & Emotion, 11(4), 403-432. Niedenthal, P. M., & Setterlund, M. B. (1994). Emotion congruence in perception. Personality and Social Psychology Bulletin, 20(4), 401-411. Ochsner, K. N., Ray, R. D., Cooper, J. C., Robertson, E. R., Chopra, S., Gabrieli, J. D., & Gross, J. J. (2004). For better or for worse: neural systems supporting the cognitive down-and up-regulation of negative emotion. Neuroimage, 23(2), 483-499. Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J. M. (2010). FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational intelligence and neuroscience. Article ID 156869. Osterhout, L., & Swinney, D. A. (1989). On the role of the simplicity heuristic in language processing: Evidence from structural and inferential processing. Journal of Psycholinguistic Research, 18(6), 553-562. Park, J., & Banaji, M. R. (2000). Mood and heuristics: the influence of happy and sad states on sensitivity and bias in stereotyping. Journal of Personality and Social Psychology, 78(6), 1005-1023. Pascual-Leone, A., Catala, M. D., & Pascual, A. P. L. (1996). Lateralized effect of rapid-rate transcranial magnetic stimulation of the prefrontal cortex on mood. Neurology, 46(2), 499-502. Pascual-Marqui, R. D. (2007). Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: exact, zero error localization. arXiv preprint arXiv:0710.3341


Phan, K. L., Wager, T., Taylor, S. F., & Liberzon, I. (2002). Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage, 16(2), 331-348. Pham, M. T. (2007). Emotion and rationality: A critical review and interpretation of empirical evidence. Review of General Psychology, 11(2), 155. Phillips, L. H., Bull, R., Adams, E., & Fraser, L. (2002). Positive mood and executive function: evidence from stroop and fluency tasks. Emotion, 2(1), 12-22. Pickering, M. J., & Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36(04), 329-347. Pinheiro, A. P., del Re, E., Nestor, P. G., McCarley, R. W., Gonçalves, Ó. F., & Niznikiewicz, M. (2013). Interactions between mood and the structure of semantic memory: Event-related potentials evidence. Social Cognitive and Affective neuroscience, 8(5), 579-594. Poline, J. B., Vandenberghe, R., Holmes, A. P., Friston, K. J., & Frackowiak, R. S. J. (1996). Reproducibility of PET activation studies: lessons from a multi-center European experiment: EU Concerted Action on Functional Imaging. Neuroimage, 4(1), 34-54. Ries, S., Janssen, N., Burle, B., & Alario, F. X. (2013). Response-locked brain dynamics of word production. PloS one, 8(3), e58197. Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110(1), 88. Roelofs, A., & Piai, V. (2011). Attention demands of spoken word planning: a review. Frontiers in Psychology, 2: 307. doi: 10.3389/fpsyg.2011.00307


Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 231-244). New York: Russell Sage Foundation. Sadat, J., Martin, C. D., Costa, A., & Alario, F. X. (2014). Reconciling phonological neighborhood effects in speech production through single trial analysis. Cognitive Psychology, 68, 33-58. Sanfeliu, M. C., & Fernandez, A. (1996). A set of 254 Snodgrass-Vanderwart pictures standardized for Spanish: Norms for name agreement, image agreement, familiarity, and visual complexity. Behavior Research Methods, Instruments, & Computers, 28(4), 537-555. Schaefer, A., Nils, F., Sanchez, X., & Philippot, P. (2010). Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition & Emotion, 24(7), 1153-1172. Scherer, K. R. (2005). What are emotions? And how can they be measured?. Social Science Information, 44(4), 695-729. Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study. Cognitive Brain Research, 17(3), 819-831. Schmeichel, B. J. (2007). Attention control, memory updating, and emotion regulation temporarily reduce the capacity for executive control. Journal of Experimental Psychology: General, 136(2), 241-255. Schmitz, T. W., De Rosa, E., & Anderson, A. K. (2009). Opposing influences of affective state valence on visual cortical encoding. The Journal of Neuroscience, 29(22), 7199-7207.


Schwarz, N., Bless, H., & Bohner, G. (1991). Mood and persuasion: Affective states influence the processing of persuasive communications. Advances in Experimental Social Psychology, 24, 161-199. Sebastián-Gallés, N., Martí, M. A., Carreiras, M., & Cuetos, F. (2000). LEXESP: Una base de datos informatizada del español. Universitat de Barcelona, Barcelona. Seibert, P. S., & Ellis, H. C. (1991). Irrelevant thoughts, emotional mood states, and cognitive task performance. Memory & Cognition, 19(5), 507-513. Sereno, S. C., Scott, G. G., Yao, B., Thaden, E. J., & O'Donnell, P. J. (2015). Emotion word processing: does mood make a difference?. Frontiers in Psychology, 6: 1191. doi: 10.3389/fpsyg.2015.01191. eCollection 2015. Subramaniam, K., Kounios, J., Parrish, T. B., & Jung-Beeman, M. (2009). A brain mechanism for facilitation of insight by positive affect. Journal of Cognitive Neuroscience, 21(3), 415-432. Trueswell, J. C., & Tanenhaus, M. K. (1994). Toward a lexicalist framework for constraint-based syntactic ambiguity resolution. In C. Clifton, K. Rayner, & L. Frazier (Eds.), Perspectives on sentence processing (pp. 155-179). Hillsdale, NJ: Lawrence Erlbaum Associates. Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232. Van Berkum, J. J., De Goede, D., Van Alphen, P. M., Mulder, E. R., & Kerstholt, J. H. (2013). How robust is the language architecture? The case of mood. Frontiers in Psychology, 4: 505. doi: 10.3389/fpsyg.2013.00505. eCollection 2013. Verhees, M. W., Chwilla, D. J., Tromp, J., & Vissers, C. T. (2015). Contributions of emotional state and attention to the processing of syntactic agreement errors: evidence from P600. Frontiers in Psychology, 6: 388.

Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174. Vissers, C. T. W., Chwilla, U. G., Egger, J. I., & Chwilla, D. J. (2013). The interplay between mood and language comprehension: Evidence from P600 to semantic reversal anomalies. Neuropsychologia, 51(6), 1027-1039. Vissers, C. T. W., Virgillito, D., Fitzgerald, D. A., Speckens, A. E., Tendolkar, I., van Oostrom, I., & Chwilla, D. J. (2010). The influence of mood on the processing of syntactic anomalies: evidence from P600. Neuropsychologia, 48(12), 3521-3531. Westermann, R., Stahl, G., & Hesse, F. (1996). Relative effectiveness and validity of mood induction procedures: A meta-analysis. European Journal of Social Psychology, 26, 557-580. Wheeldon, L. R., & Levelt, W. J. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34(3), 311-334. Wheeldon, L. R., & Morgan, J. L. (2002). Phoneme monitoring in internal and external speech. Language and Cognitive Processes, 17(5), 503-535. White, K. K., Abrams, L., LaBat, L. R., & Rhynes, A. M. (2016). Competing influences of emotion and phonology during picture-word interference. Language, Cognition and Neuroscience, 31(2), 265-283. Whitmer, A. J., & Banich, M. T. (2007). Inhibition versus switching deficits in different forms of rumination. Psychological Science, 18(6), 546-553.


Wilson, S. M., Isenberg, A. L., & Hickok, G. (2009). Neural correlates of word production stages delineated by parametric modulation of psycholinguistic variables. Human Brain Mapping, 30(11), 3596-3608. Zadra, J. R., & Clore, G. L. (2011). Emotion and perception: The role of affective information. Wiley Interdisciplinary Reviews: Cognitive Science, 2(6), 676-685. Zhang, X., Yu, H. W., & Barrett, L. F. (2014). How does this make you feel? A comparison of four affect induction procedures. Frontiers in Psychology, 5: 689. doi: 10.3389/fpsyg.2014.00689. Zwanzger, P., Steinberg, C., Rehbein, M. A., Bröckelmann, A. K., Dobel, C., Zavorotnyy, M., ... & Junghöfer, M. (2014). Inhibitory repetitive transcranial magnetic stimulation (rTMS) of the dorsolateral prefrontal cortex modulates early affective processing. NeuroImage, 101, 193-203.


Figure 1. Schematic representation of the experimental task.

Figure 2. Valence and arousal ratings given by participants to describe their mood throughout the experiment. Error bars reflect ± 1 standard error (SE) of the mean.

Figure 3. Top: Grand ERP averages at left frontal electrodes where the experimental effects described in the text are clearly visible. Bottom: Topographic voltage map showing the difference between negative and neutral mood conditions for the N290. The color scale represents voltage scores.

Figure 4. Temporal principal component analysis (tPCA): factor loadings after promax rotation. The scalp distribution of each temporal factor is also shown: blue indicates negative temporal factor scores and red indicates positive temporal factor scores. Factor peak latencies and the percentage of variance explained by each temporal factor after rotation are shown in parentheses. Temporal factor 1 (N290) is in bold black. The color scale represents temporal factor scores.

Figure 5. Spatial factors extracted for the temporal factor 1 (N290) through spatial Principal Component Analysis (sPCA). The spatial factor sensitive to experimental manipulation (mood induction) is marked with an asterisk (left frontal distribution). The color scale represents spatial factor scores.
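As an illustration of the two-step decomposition summarized in Figures 4 and 5, the following minimal Python sketch shows how a temporal PCA over time points can be followed by a spatial PCA over electrodes. It is a simplified sketch only: the data array is synthetic, the dimensions and variable names are hypothetical, and the promax rotation applied in the actual analyses (Dien, 2010, 2012) is omitted.

```python
import numpy as np

# Hypothetical dimensions; 'erp' stands in for averaged ERP waveforms with
# rows = (subject x condition) combinations x electrodes, columns = time points.
rng = np.random.default_rng(0)
n_obs, n_channels, n_times = 180, 60, 300
erp = rng.standard_normal((n_obs * n_channels, n_times))  # placeholder data

def pca(data, n_factors):
    """Unrotated PCA via covariance eigendecomposition: loadings and scores."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order]          # (variables, factors)
    scores = centered @ loadings          # (observations, factors)
    return loadings, scores

# Step 1: temporal PCA -- variables are time points, so each temporal factor
# has a loading curve over time (cf. Figure 4) plus a score per observation.
t_loadings, t_scores = pca(erp, n_factors=8)

# Step 2: spatial PCA on the scores of one temporal factor (e.g., the N290),
# rearranged so that electrodes become the variables (cf. Figure 5).
tf1_scores = t_scores[:, 0].reshape(n_obs, n_channels)
s_loadings, s_scores = pca(tf1_scores, n_factors=4)
print(t_loadings.shape, s_loadings.shape)
```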

Figure 6. Source localization analysis (eLORETA): increased N290-related activation during the negative mood condition relative to the neutral mood condition. The color bar represents voxel t values.
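The contrast illustrated in Figure 6 can be viewed as a voxel-wise paired comparison between the negative and neutral conditions. The sketch below illustrates a generic max-statistic permutation approach in the spirit of Nichols and Holmes (2002); it does not reproduce the eLORETA computation itself or the exact statistics reported here, and the subject count, voxel count, and arrays are hypothetical placeholders.

```python
import numpy as np

# Hypothetical per-subject source-activity maps for the two mood conditions.
rng = np.random.default_rng(1)
n_subjects, n_voxels = 24, 5000
negative = rng.standard_normal((n_subjects, n_voxels))
neutral = rng.standard_normal((n_subjects, n_voxels))

def paired_t(diff):
    """Voxel-wise paired t statistic from per-subject difference maps."""
    return diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(diff.shape[0]))

diff = negative - neutral
t_obs = paired_t(diff)

# Build the max-statistic null distribution by randomly flipping the sign of
# each subject's difference map; the maximum over voxels controls the
# family-wise error rate across the whole solution space.
n_perm = 2000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))
    max_null[i] = paired_t(diff * signs).max()

threshold = np.quantile(max_null, 0.95)
print(f"FWE-corrected t threshold: {threshold:.2f}; "
      f"suprathreshold voxels: {(t_obs > threshold).sum()}")
```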

Table 1. Means and standard deviations of naming agreement, familiarity, frequency of use (per one million), word length (number of syllables) and visual complexity of neutral stimuli.

         Naming agreement   Familiarity      Frequency        Syllables        Visual complexity
List 1   86.83 (21.73)      2.95 (1.14)      20.38 (31.71)    2.68 (0.77)      2.66 (0.89)
List 2   87.28 (18.56)      3.11 (1.13)      32.33 (62.59)    2.49 (0.79)      2.64 (0.97)
List 3   82.61 (12.39)      3.25 (1.01)      23.59 (75.49)    2.72 (0.74)      2.69 (0.94)
ANOVA    F = 1.30; n.s.     F = 1.45; n.s.   F = .78; n.s.    F = 2.23; n.s.   F = .03; n.s.

d.f. = 2, 150; n.s. = non-significant
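For completeness, the between-lists F statistics in Table 1 can be approximated from the summary values alone. The following sketch is illustrative only: the per-item data are not reported here, a group size of 51 items per list is inferred from the reported degrees of freedom (2, 150), and rounding of the tabled means and standard deviations means the recomputed F will not exactly match the reported values.

```python
import numpy as np

def f_from_summary(means, sds, n_per_group):
    """One-way ANOVA F from group means, SDs, and an (assumed) equal group size."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    k = len(means)
    grand_mean = means.mean()                        # equal group sizes assumed
    ss_between = n_per_group * ((means - grand_mean) ** 2).sum()
    ms_between = ss_between / (k - 1)
    ms_within = ((n_per_group - 1) * sds ** 2).sum() / (k * (n_per_group - 1))
    return ms_between / ms_within

# Naming agreement means and SDs for Lists 1-3 (values taken from Table 1).
f_naming = f_from_summary([86.83, 87.28, 82.61], [21.73, 18.56, 12.39], 51)
print(f"Approximate F(2, 150) = {f_naming:.2f}")
# Differences from the reported F = 1.30 reflect rounding in the table and the
# assumed (not reported) per-list item counts.
```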


Highlights
- Negative mood during a grapheme monitoring task elicited a larger N290.
- LORETA analyses suggested a neural origin of the N290 in prefrontal cortices.
- Negative mood impaired phonological encoding.
- The impairment might be explained in terms of linguistic and attentional mechanisms.
