Judgments of emotion in words and faces: ERP correlates


International Journal of Psychophysiology, 5 (1987) 193-205. Elsevier. PSP 00171



Rodney D. Vanderploeg 1, Warren S. Brown 2,3 and James T. Marsh 3,4

1 Department of Neurology, University of South Florida College of Medicine, and the James A. Haley VA Hospital, Tampa, FL (U.S.A.), 2 Fuller Theological Seminary, Graduate School of Psychology, 3 Department of Psychiatry and Biobehavioral Sciences and 4 Brain Research Institute, University of California, Los Angeles, CA (U.S.A.)

(Accepted 26 May 1987)

Key words: Visual event-related potential; Perceived emotional connotation; Principal component analysis; Hemispheric asymmetry

Event-related potentials (ERPs) to two types of stimuli (faces and words) were analyzed in 10 right-handed, normally functioning adult males to determine the effects of the perceived emotional connotation of the stimuli (positive, neutral, or negative). Principal component analysis (PCA) of the ERPs revealed 5 factors accounting for over 90% of the ERP variance for both faces and words. Two ERP components varied significantly in amplitude according to the perceived emotional connotation of the stimulus. For the P3 component, in the facial data, neutrally rated stimuli produced larger amplitudes than those perceived as positive or negative; this effect was lateralized to the left hemisphere. A later component, the slow wave (448-616 ms), manifested larger amplitudes for perceived emotional connotation, i.e. faces perceived as positive or negative produced larger amplitudes than stimuli rated as neutral, over the right hemisphere. The verbal emotional connotations did not result in significant main effects on the ERP components, but produced subtle connotation-related differences in slow wave topography. Hemispheric asymmetries unrelated to affective connotation were also evident in the verbal data, manifesting different patterns of lateralization depending on the ERP component. The results suggest that the perceived emotional connotation of faces and words affects ERP waveforms in ways that can be understood in terms of ERP components known to be associated with more general aspects of cognitive processing, with complementary effects on the P3 and slow wave components.

INTRODUCTION

In addition to the many studies of cognitive correlates of event-related potential (ERP) components (for reviews see Donchin, 1979; Hillyard and Woods, 1979; Näätänen, 1982; Pritchard, 1981), there is a body of literature concerning the effects on ERPs of the emotional or affective meaning of stimuli. These studies indicate that ERPs vary with the emotional reactions of subjects to stimuli, created either by conditioned emotional reactions to otherwise neutral stimuli (Begleiter et al., 1967; Begleiter et al., 1969) or by the emotional connotation of the stimuli themselves (Begleiter et al., 1979; Chapman, 1979; Chapman et al., 1977; Chapman et al., 1978; Chapman et al., 1980; Lifshitz, 1966).

A number of these studies have utilized Osgood's system for the analysis of connotative meaning (Osgood, 1952; 1971). He has found that a large portion of the total variance in an individual's judgments of meaning can be accounted for by three orthogonal dimensions of meaning: evaluation (good-bad), potency (strong-weak), and activity (active-passive). These factors emerge across languages and cultures, and in judgments of facial expressions (Hastorf et al., 1966) as well as word meanings (Osgood, 1964; 1966).

Begleiter and his colleagues (1967 and 1969) conditioned meaningless figures to elicit positive, negative, or neutral affective responses, using words maximally differing along Osgood's evaluative dimension as unconditioned stimuli. Significant differences were found in the ERPs across the three conditioned affective connotations for components in the 60-250 ms latency range. In a later study, Begleiter (1979) reported amplitude differences to positive, negative, and neutral stimuli (visually presented words) in the N1-P2 ERP component (140-200 ms). There were significant differences between all three affective conditions, with positive evoking the largest amplitudes and neutral the smallest. This effect was present only when subjects were rating the stimuli according to their affective connotation, disappearing during a task of letter identification. Also using Osgood's scales of connotative meaning, Chapman and his colleagues (Chapman, 1979; Chapman et al., 1977; 1978; 1980) found ERP differences along all three of Osgood's dimensions (evaluation [E], potency [P] and activity [A]), with the largest effects on ERPs occurring with the E dimension. Positive words elicited larger ERP amplitudes than negative words for components in the 200-420 ms range. Chapman also reported that the task of rating stimuli (visually presented words) on the E, P, or A scales affected an earlier ERP component (189 ms), while the actual class of the word stimuli (e.g. positive evaluative) was differentially reflected in a later component (311-332 ms; Chapman, 1979 and Chapman et al., 1980).

Since the time of Darwin, facial expressions have been known to be powerful communicators of emotion. The present study compared facial expressions and verbal connotations (words) in a replication and extension of Chapman and Begleiter's demonstrations of the effects of emotional connotation on ERPs. Additionally, a classical conditioning procedure was utilized in an attempt to create specific emotional interpretations of otherwise neutral faces and to observe any subsequent ERP differences.

Correspondence: R.D. Vanderploeg, Psychology Service 116B, James A. Haley VA Hospital, 13000 North 30th Street, Tampa, FL 33612, U.S.A.

0167-8760/87/$03.50 © 1987 Elsevier Science Publishers B.V. (Biomedical Division)
The methodology of principal component analysis (PCA) was utilized in an attempt to determine whether or not emotional effects (if present) were associated with unique aspects of the waveform or were reflected in other components associated with more general cognitive information processing (e.g. P2, P3, or the slow wave). Finally, the present study utilized a greater sampling of scalp topography than previous studies, to examine the possibility of topographical variations in electrophysiological concomitants of information processing of emotionally charged stimuli.

METHODS

Subjects

Ten right-handed, normally functioning adult male volunteers between 22 and 35 years of age (mean = 28.1 years) served as subjects. All subjects had normal hearing and vision. Handedness, normal hearing, and normal corrected vision were determined by self-report. All subjects reported being easily able to make the necessary stimulus discriminations (i.e. rate the visually presented stimuli as to emotional connotation). This was behaviorally verified by a very high correspondence between subjective ratings and predesignated stimulus categorization.

Paradigm

Visual ERPs to two types of stimuli (faces and words) were analyzed to determine the effects of subjective perception of the emotional connotation of the stimuli (positive, neutral, or negative). In the initial phase, words and faces, with emotional connotation randomized, were presented in separate counter-balanced blocks while subjects viewed the stimuli and rated them for emotional content. During this phase, ERPs were recorded and averaged separately according to the subjects' ratings. This was followed by a conditioning phase. Visual presentations of the faces (CS) were selectively paired with auditory presentations of the words (UCS) in an attempt to differentially condition the two neutral faces, one to be experienced as positive, the other as negative. Finally, a second rating phase was run which was an exact replication of the first rating phase, including recording of post-conditioning ERPs. Of additional interest in this second rating phase was the effect of conditioning on the subjects' ratings of the two 'neutral' faces and on the ERPs.

Stimuli

Subjects viewed slides containing single words or drawings of faces. All stimuli were black on a white background. Sixty words were used, taken from stimuli used by either Chapman or Begleiter (Begleiter et al., 1967, 1969; Chapman, 1979; Chapman et al., 1977; 1978). The final list consisted of 20 positive, 20 neutral, and 20 negative words. The facial stimuli were 6 simple line drawings. There were two basic facial outlines, each with 3 emotional expressions (positive, neutral, and negative). Facial stimuli were included in this study on the basis of the judgments of emotional connotation made by 15 normal adults using Osgood's semantic differential scales (1952, 1966, 1971). Criteria for inclusion were mean ratings with large differences (positive, negative, or neutral) on the evaluative dimension and relatively neutral ratings on the potency and activity dimensions. Slides were projected through a small window into a sound-attenuated, electrically shielded chamber. A second projector was used to present a constant 'X' fixation point in the middle of the screen over which the stimulus slides were projected. Subjects sat 107 cm from the screen and the displays subtended visual angles of 8.8 (horizontal) by 1.7 (vertical) degrees of arc at the retina for the words and 10.1 by 12.3 degrees for the faces. The level of luminance was low and relatively constant. No subjects reported visual afterimages. Words were flashed for 80 ms and faces for 100 ms under the control of a Lafayette electronic shutter. The different exposure times were used to

equate the difficulty in recognition and evaluation of the two different types of stimuli. Since ERPs to words and faces were not to be directly compared, the differences in exposure were of no consequence. An 80-ms (75 dB SPL, 450 Hz) warning tone was presented through headphones one second before slide onset. A second tone (80 ms, 75 dB SPL, 220 Hz) was presented 1600 ms after the slide and served as a cue for the subjects' responses. Thus, any muscle movement or other artifact associated with rating would not disturb the ERPs. (See Fig. 1 for a diagram of this time sequence.) White noise (60 dB SPL) was constantly presented through the headphones to mask any environmental or equipment noise. Tones could be distinctly heard above the noise background by all subjects. During the conditioning phase the facial stimuli (CS) were presented for 1500 ms. Following slide onset by 500 ms, spoken words (UCS), with voice inflections matching the emotional connotation of the words, were presented through the headphones at approximately 84 dB SPL. Positive faces were always paired with positive words and negative faces with negative words. One of the neutral faces was always paired with positive words and the other with negative words. The positively and negatively conditioned neutral faces were counter-balanced across subjects. Again, white noise was presented as a masking background. During the first and third phases subjects were asked to verbally rate their affective experience of

Fig. 1. Time line for each trial during the pre- and postconditioning rating phases of the experiment, including representations of both the trial sequence and the ERP analysis epoch. (The figure depicts, over a 2680-ms trial: the 80-ms warning tone, the stimulus presentation, the 1016-ms epoch of EEG averaging beginning 320 ms before the stimulus and serving as the PCA time frame, and the 80-ms rating tone cueing the subject's rating.)


each visual stimulus as either positive, neutral, or negative. Their ratings were recorded and tallied by the computer, with ERPs averaged accordingly. During the conditioning phase, subjects were simply asked to imagine that the faces were saying the words.
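As a quick arithmetic check on the stimulus geometry reported above (words subtending 8.8 degrees horizontally at a 107 cm viewing distance), the standard visual-angle formula can be applied. A minimal sketch; the ~16.5 cm stimulus width below is back-calculated for illustration and is not stated in the text:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# A hypothetical ~16.5 cm wide word display viewed from 107 cm subtends about 8.8 degrees.
print(round(visual_angle_deg(16.5, 107), 1))  # 8.8
```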

EEG recording

EEG was recorded from 6 scalp sites (Fz, Pz, F7, F8, T5, and T6) using the Electro-Cap VII and electrode gel. Bilateral earclip electrodes were used as a reference and an occipital site served as a ground. Eye movements and blinks were monitored via Fp2 to right canthal bipolar recordings. Electrode impedance was maintained at less than 5 kΩ. EEG was amplified (×14,000) with a band-pass of 0.1-50 Hz (3 dB down). A 1016-ms epoch of EEG data beginning 320 ms before the onset of each stimulus was averaged on-line by a microcomputer at a sampling rate of 8 ms per point (127 sample points per channel), using a 7-channel A/D converter (8-bit resolution). Average ERPs were plotted out on an X-Y plotter and stored on floppy discs for later statistical analyses.

The averaging program was designed to reject any trials which contained data points lying outside of an amplitude threshold in either a positive or negative direction. Trials continued until the average ERP for each condition contained the necessary number of artifact-free responses: 32 each for words and faces rated positive, neutral, and negative, respectively. In the final postconditioning phase, while the positive and negative responses to facial stimuli were averaged in the same way, the neutral stimuli were averaged separately for each neutral face, according to whether conditioned positive or negative, until 16 artifact-free responses to each neutral face had been obtained. Since no conditioning of the neutral stimuli was apparent in subjects' ratings (see Results), the responses to the two neutral faces were combined into a single 32-trial average. No ERPs were recorded during the conditioning phase.

BEHAVIORAL RESULTS

The basic behavioral data of this experiment were the ratings (positive, neutral, or negative) made after each stimulus presentation during the pre- and postconditioning runs. Table 1 summarizes the behavioral data. Inspection of that table reveals that ratings were in the expected direction for both faces and words, i.e. positive stimuli were rated as positive, negative stimuli as negative, and neutral stimuli as neutral. Postconditioning ratings were more consistent in that more of the positive stimuli were rated positive (for faces F1,9 = 6.87, P < 0.05) and more of the negative stimuli were rated as negative (for faces F1,9 = 6.52, P < 0.05; and for words F1,9 = 5.19, P < 0.05). It was expected that the conditioning procedure would increase the number of positive and negative ratings of the neutral facial stimuli. As Table 1 clearly indicates, that change did not occur. Neutral faces were still rated as neutral following conditioning, indicating that no significant behavioral conditioning of the neutral faces occurred.

TABLE I

Mean ratings of emotional connotation

Mean ratings based on +1 for positive, 0 for neutral, and -1 for negative ratings. Probabilities are for t-tests between pre- and postconditioning mean ratings. For each emotional category, mean ratings for the two faces are shown separately, as well as two word sub-lists. For neutral stimuli, the C+ stimulus was associated with positive words during conditioning, and the C- stimulus with negative words.

               Faces                      Words
               Pre      Post    P         Pre      Post    P
Positive (1)   0.79     0.92    <0.05     0.92     0.90    n.s.
Positive (2)   0.39     0.50              0.89     0.89
Neutral (C+)   0.01     0.09    n.s.      0.00     0.01    n.s.
Neutral (C-)   0.05     -0.02   n.s.      0.07     0.04
Negative (1)   -0.92    -0.96   <0.05     -0.95    -0.99   <0.05
Negative (2)   -0.65    -0.86             -0.96    -0.98
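The Table 1 entries can be reproduced mechanically: each response is scored +1/0/-1 and averaged, and pre- vs. postconditioning means are compared with a paired t-test. A minimal sketch with invented rating lists (the helper names are illustrative, not from the original analysis):

```python
import math

SCORE = {"positive": 1, "neutral": 0, "negative": -1}

def mean_rating(ratings):
    """Mean of +1/0/-1 scores for a list of verbal ratings."""
    return sum(SCORE[r] for r in ratings) / len(ratings)

def paired_t(pre, post):
    """Paired t statistic for per-subject pre- and postconditioning mean ratings."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical example: one subject's ratings of a positive face across trials.
print(mean_rating(["positive", "positive", "neutral", "positive"]))  # 0.75
```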

ERP RESULTS

ERPs were averaged in accordance with each subject's ratings of the stimuli as positive, neutral, or negative, since the variable of interest was the subjective emotional perception. Following the conditioning phase, ERPs were also averaged separately for the two neutral faces and analyzed in a manner similar to that described below. No significant effects of the conditioning procedure on these ERPs were found. Since there were neither behavioral nor ERP effects of the conditioning procedure, the conditioning results will not be further described. A description of the relationship between the subjective perception of the stimuli (rated as positive, neutral, or negative) and ERP variables follows.

Grand mean ERPs for the 6 electrodes and 3 emotional conditions from the two data sets (faces and words) can be seen in Fig. 2. In the interval between the warning tone and the visual stimulus, a gradually recovering negativity is apparent, most prominent at frontal electrodes. This probably reflects a contingent negative variation (CNV) which continues through the P1-N1 components.

The conventional P1-N1-P2 complex is also evident in both data sets. In addition, the Pz tracings and those from all anterior electrodes manifest later positive components. While the ERPs are similar for faces and words, the P1-N1-P2 complex is somewhat larger for the facial stimuli, which is perhaps a function of the differences in amount of visual pattern between the two stimulus sets. The later positive components also differ in amplitude and morphology for the responses to words and faces. Most importantly, for both sets of responses the effects of emotional connotation are apparent at Pz, with the late components to neutrally rated stimuli smaller than those to positively and negatively rated stimuli. Further statistical analyses (see below) indicate that only the differences for faces are significant at this electrode locus.
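The averaging-with-artifact-rejection scheme described in the Methods (trials with any sample beyond an amplitude threshold in either direction are discarded; accepted trials are averaged point by point until a quota is reached) can be sketched as follows. The threshold, quota, and data are invented for illustration:

```python
def average_erp(trials, threshold, quota):
    """Average the first `quota` artifact-free trials, point by point.

    A trial is rejected if any sample exceeds `threshold` in either the
    positive or negative direction.
    """
    accepted = []
    for trial in trials:
        if all(abs(sample) <= threshold for sample in trial):
            accepted.append(trial)
        if len(accepted) == quota:
            break
    if len(accepted) < quota:
        raise ValueError("not enough artifact-free trials")
    n_points = len(accepted[0])
    return [sum(t[i] for t in accepted) / quota for i in range(n_points)]

# Two clean trials and one trial contaminated by a large artifact.
print(average_erp([[1, 2], [101, 0], [3, 4]], threshold=100, quota=2))  # [2.0, 3.0]
```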

Fig. 2. Grand mean ERPs averaged across the 10 subjects of this study for each electrode locus for the facial and verbal data. Waveforms from stimuli rated positive (solid line), neutral (dashed line), and negative (dotted line) are superimposed. The entire 1016-ms average is depicted, with the time line indicating the epoch used in principal components analysis and the major epoch of each component indicated by vertical lines and factor numbers.

Principal component analysis

For further analysis, ERPs at each electrode for each subject and condition were reduced from 127 to 40 data points by obtaining the mean of every two consecutive points, from 16 ms before stimulus onset to 624 ms after the stimulus. (This data condensation was necessary due to limitations in the statistical software.) This represents the time frame for the PCA and subsequent ERP analysis (see Fig. 1). The ERP data set for each subject was then normalized (mean = 0, standard deviation = 1) to reduce between-subject ERP amplitude differences. The facial and verbal data were analyzed separately throughout, each analysis involving 360 cases (10 subjects × 2 conditioning phases (pre/post) × 6 electrodes × 3 emotional categories). Varimax principal component analyses (BMD P4M; Dixon, 1975) were run separately on the covariance matrix of both data sets to obtain orthogonal factors which accounted for most of the variance within these data, as well as to determine the amplitude of each component (factor) in each ERP. In both analyses, the program extracted and rotated 5 principal components (factors) accounting for 93.6% and 90.0% of the variance in the facial and verbal data, respectively. The fifth factor in both cases accounted for approximately 3% of the overall data variance. Plots of the centroid and the factors which emerged from PCA of the two ERP data sets can be seen in Fig. 3.
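The PCA step — orthogonal factors extracted from the covariance matrix of a cases-by-timepoints matrix, with factor scores serving as per-case component amplitudes — can be sketched with an unrotated eigendecomposition. This is an illustrative sketch, not the BMD P4M routine; the varimax rotation applied in the study is omitted for brevity:

```python
import numpy as np

def pca_factors(X, n_factors=5):
    """Unrotated PCA of a cases-by-timepoints matrix via its covariance matrix.

    Returns time-point loadings, per-case factor scores, and the proportion
    of total variance accounted for by each retained factor.
    """
    Xc = X - X.mean(axis=0)                     # center each time point
    cov = np.cov(Xc, rowvar=False)              # timepoints x timepoints covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order]                # loading of each time point on each factor
    explained = eigvals[order] / eigvals.sum()
    scores = Xc @ loadings                      # factor score of each case
    return loadings, scores, explained

# Synthetic check: 360 cases dominated by one waveform should load on factor 1.
rng = np.random.default_rng(0)
waveform = np.sin(np.linspace(0, np.pi, 40))
X = np.outer(rng.normal(size=360), waveform) + 0.05 * rng.normal(size=(360, 40))
_, _, explained = pca_factors(X)
print(explained[0] > 0.9)  # True
```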


Topographical and experimental effects

To assess the effects of experimental treatments on the ERP factors, the PCA factor scores were used as the dependent measures in repeated-measures analyses of variance (BMD P2V; Dixon, 1975). Since PCA extracts statistically orthogonal factors, separate ANOVAs were run for each principal component in each data set. The point has been made (Jennings and Wood, 1976; Keselman and Rogan, 1980) that often in psychophysiological research, repeated-measures ANOVAs may have inflated degrees of freedom,

Fig. 3. Factor loadings plotted over time for each of the 5 principal components in the facial (left) and verbal (right) data, along with the centroid, or grand mean ERP, represented at the bottom. Variance accounted for by each factor is indicated to the right of each trace. (Values recoverable from the figure: facial factors 1, 2, and 5 account for 57.4%, 21.3%, and 2.9% of the variance, 93.6% in total; verbal factors 1, 2, and 5 account for 57.2%, 15.6%, and 3.2%.) Factors are differentiated as ERP components in relation to the temporal epoch having the highest loadings. The arrow on the time scale indicates stimulus onset.

and hence increased Type I error rates, due to violations of the assumption of homogeneity of variances across treatment levels. To correct for any heterogeneity, Huynh-Feldt corrections were calculated in each ANOVA. The reported significance levels reflect the corrected values. Main effects tested in each ANOVA were pre-/postconditioning [P], ratings of emotional connotation (positive, neutral, or negative) [E], anterior-posterior electrode group [A], and coronal electrode placement (left, middle, or right) [C].

In Fig. 4 it can be seen that the factor structures for the facial and verbal data were very similar, although the ordering of factors was somewhat different in the two analyses. It has been suggested (Donchin, 1979) that the same ERP component (e.g. P3) may vary widely in amplitude and latency, but as long as the scalp distribution and eliciting variables remain generally constant, components should be considered the same. Therefore, a topographical analysis was carried out to define and compare components which are common to the two data sets. Fig. 4 presents the topographic distribution (i.e. left, middle, and right for anterior and posterior sites) of the factor scores from each data set, arranged in order of temporal occurrence of factors. Most of the factors varied significantly across scalp locations for both faces and words.

Fig. 4. Graphs of the coronal topography of factor scores for each principal component in the facial (upper) and verbal (lower) data. Factors are arranged from left to right in sequence of their temporal occurrence and aligned to demonstrate correspondence between the factors in the two data sets. Spaces indicate epochs in which there is no factor corresponding to one in the other data set's factor analysis. The solid line represents the anterior (A) and the dashed line, the posterior (P) factor scores for the left (L), middle (M), and right (R) scalp placements.

CNV factor. In both data sets, factor 1 was most highly correlated with time points at the beginning of the epoch, in the warning interval and immediately after the onset of the stimulus. As can be seen in Figs. 2 and 4, this epoch was negative over the entire anterior scalp (although it was represented as positive factor loadings in Fig. 3). This would be consistent with a CNV. Factor 1 ended approximately 185 ms following the onset of the stimulus, overlapping with the N1 component (Fig. 3). The only significant effects in the ANOVAs for this factor in either data set were related to the scalp topography. No effects were found for perceived emotional connotation. Since stimuli with different emotional connotations were randomly intermixed, no effect of emotional perception would be expected for this anticipatory wave.

P2 factor. Factor 4 of the facial data and factor

4 of the facial data and factor

5 of the verbal data were also similar. They occurred in the 180-230 ms latency range, the latency range of the P2 component. The scalp topographies were similar for both word-rated and facerated ERPs, with the exception of the posterior midline recordings (see Fig. 4). Significant effects in the ANOVAs reflected aspects of scalp topography, i.e., the polarity reversal between the posterior

and anterior

at lateral

electrodes

electrodes.

particularly

No effects

-1.0 L

evident

of perceived

emo-

FACE: SW

tional connotation appeared in the analysis of the factor scores from this component, save two significant data

the

three-way

interactions.

P x E x C

+I.0

Left l

Middle*

Right

r

In the face-rated

interaction

was

significant

(F4.36 = 3.30, P < 0.05)while in the word-rated data the E x A X C interaction was significant = 5.04, P < 0.05). There were no significant (6.36 differences found in any specific tests of perceived emotional connotations at individual electrode sites. Therefore, the emotional connotation effects in these interactions apparently represented subtle differences in topographical patterns which varied between

pre-

and postconditioning

data. P3 factor. A third factor (factor 3 of the facial data and factor 2 of the verbal data) also appears to reflect

the same

component

in the two data

sets. Both occurred in the 230-420 ms latency range and both were maximally positive at Pz (Figs. 2 and 4). This factor corresponded topographically and in latency to the P3 component which is maximally elicited when stimuli are taskrelevant and rare (Duncan-Johnson and Donchin, 1977; facial

Pritchard, 1981). data, a significant

perception was observed as well as two significant

_,.o L Left

in the facial

For the P3 factor in the main effect of emotional ( F2,18 = 4.36, P < 0.05) interactions E x C ( F4,36

= 2.99, P-c 0.05) and P x E x A (F& = 3.72, P -c0.05). Further analysis of the E X C interaction revealed that there was a significant emotional connotative effect in left hemisphere recordings (F2.18 = 5.91, P < 0.01)and midline (F&, = 4.09, P -c0.05). The upper graph of Fig. 5 presents the coronal topography of the P3 in the facial data. It

Middle*

Right l

Fig. 5. Coronal topographic distribution (left, middle and right) of mean factor scores for the P3 (top) and SW (bottom) components in the facial data. The 3 lines in each graph represent factor amplitudes in the ERPs for stimuli rated as positive (solid), neutral (dashed), and negative (dotted). Asterisks indicate loci where emotional effects are significant.

can be seen that neutrally rated stimuli produced larger amplitudes than positively or negatively rated stimuli at left hemisphere sites (Tukey's honestly significant difference (HSD), P < 0.05 and P < 0.01 respectively). This effect was also apparent at the midline sites (Tukey's HSD, P < 0.05). There were no significant effects of judgment of emotional connotation on P3s recorded from the right hemisphere.

There were no significant effects for perception of emotional connotation in the P3 factor of the verbal data. The major effect on P3 was that of coronal electrode distribution, due primarily to the predominantly mid-posterior distribution


characteristic of the P3. However, analysis of the A x C interaction (F2,18 = 18.97, P < 0.001) revealed a hemispheric asymmetry for posterior scalp recordings (F2,18 = 35.65, P < 0.001). Right hemisphere responses were slightly larger (more positive) than those from the left (Tukey's HSD P < 0.05, see Fig. 4).

Slow wave factor. Factor 2 in the facial data and factor 4 in the verbal data had similar latencies (500-624 ms) and scalp distributions (Fig. 4). This component was a late positive slow wave, largest at the midline. A similar factor was first reported as the 'slow wave' (SW) by Squires et al. (1975) and was found to be task-relevant and probability-dependent (Kok and Looren de Jong, 1980; Squires et al., 1977). Kok and Looren de Jong (1980) suggested that this late SW reflects a later stage of processing, called by them 'continued processing', that becomes active with more complex perceptual processing demands. For the SW in the facial data (factor 2) there was a significant main effect of rated emotional connotation (F2,18 = 3.72, P < 0.05) and a significant interaction with coronal (left-right) topography (E x C; F4,36 = 4.77, P < 0.01). Further analysis of this interaction revealed that there were significant emotional perception effects at midline (F2,18 = 6.58, P < 0.01) and right hemisphere sites (F2,18 = 3.68, P < 0.05). In the midline recordings, positively and negatively rated stimuli elicited larger amplitudes than neutral (Tukey's HSD, P < 0.01 and P < 0.05 respectively). In the right hemisphere, only positive greater than neutral was significant (Tukey's HSD, P < 0.05); positively and negatively rated stimuli did not differ significantly. No significant differences were found for the left hemisphere. These effects can be seen in the waveforms of Fig. 2, as described above, and in the graphs of factor scores in the lower graph of Fig. 5. The SW factor in the verbal data (factor 4) showed no significant main effect of emotional perception of the stimuli.
Significant interactions of rated emotional connotation with topographic distribution did occur (E x A x C, F4,36 = 3.36, P < 0.05; P x E x A, F2,18 = 6.13, P < 0.01). However, since there were no significant differences found in specific tests of emotional perception at

individual electrode sites, the effect of emotional connotation of words represented subtle differences in topographic patterns (which presumably changed between pre- and postconditioning). In addition, there were hemispheric lateralizing effects for the SW factor in the verbal data (A x C, F2,18 = 9.57, P < 0.01). Although analysis of this interaction revealed significant coronal differences for both anterior (F2,18 = 11.33, P < 0.01) and posterior (F2,18 = 7.57, P < 0.01) scalp recordings, only the anterior recordings showed a significant lateralizing effect, with right hemisphere recordings larger (more positive) than left (Tukey's HSD, P < 0.05). There was a trend for the opposite pattern (i.e. left larger than right) in the posterior sites (see Fig. 4); however, this did not reach statistical significance. Most of the variance was understandably accounted for by differences between midline and lateral sites.

Other, non-significant PCA factors. Factor 5 of the facial data had no corresponding factor in the verbal data. Its bimodal peaks corresponded to the N1 and N2 components of the ERP waveform. No significant effects for ratings of emotional connotation were found for this factor. Factor 3 of the verbal data had no corresponding factor in the facial data. This factor appeared from Fig. 2 to be a transitional phase between the P3 and SW. Although it accounted for 9.0% of the variance in the verbal data, it did not vary significantly according to any of the experimental conditions or across scalp locations.
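The factor-score ANOVAs reported above can be illustrated, for a single within-subject factor, by the classic one-way repeated-measures F ratio (subjects × conditions). This simplified sketch ignores the multi-factor design and the Huynh-Feldt correction described in the Methods, and `rm_anova_F` is an illustrative helper, not the BMD P2V procedure:

```python
def rm_anova_F(data):
    """One-way repeated-measures F: data[s][c] is subject s's score in condition c."""
    n, k = len(data), len(data[0])
    grand = sum(map(sum, data)) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)   # condition effect
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # between-subject variance
    ss_tot = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_tot - ss_cond - ss_subj                       # residual (S x C) error
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

# Hypothetical factor scores: 2 subjects x 3 emotional conditions.
F, df_cond, df_err = rm_anova_F([[1, 2, 3], [2, 3, 5]])
print(round(F, 2), df_cond, df_err)  # 19.0 2 2
```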

DISCUSSION

In the present study, differential processing of the emotional connotations of facial stimuli was reflected in visual ERP waveforms. When ERPs to facial stimuli were averaged according to the subjective affective rating (positive, neutral, or negative), the later positive components differed by affective perception. These findings are consistent with the results of Begleiter et al. (1979), Chapman (1979), and Chapman et al. (1977, 1978, 1980), in which differences in ERPs were found to stimuli rated differently on connotative/emotional meaning. The present results differ some-


what from previous studies in finding ERP differences for emotional connotations of faces, but only subtle topographic effects of the connotative meaning of words. Differential perception of facial affective expressions found in the present study occurred in the later part of the ERP waveform (240-616 ms), as did the connotative effects to words reported by Chapman and his colleagues (1979, 1980). This is the area of the ERP waveform which includes the P3 and the SW, components known to be related to discrimination and recognition of the task-relevant aspects of stimuli. The P3 wave is related in its amplitude to the recognition of the occurrence of relatively novel or particularly task-relevant information, and the consequent updating of context models in short-term memory (Duncan-Johnson and Donchin, 1977; Pritchard, 1981). Similarly, Kok and Looren de Jong (1980) consider the SW to be related to the 'continued processing' of stimuli. In the present study, the affective perception effect in ERPs had opposite lateralized maximum amplitudes across the later positive components (P3 and SW). For Factor 3 (P3, 240-448 ms), neutrally rated stimuli elicited larger P3 amplitudes than emotionally charged stimuli (those rated as positive or negative) at left hemisphere and midline sites. On the other hand, for Factor 2 (SW, 448-616 ms), ERP effects of perception of emotional expression were found at right hemisphere and midline loci. In contrast to the P3, the SW was larger for the emotionally rated stimuli (positive and negative) than for stimuli rated as neutral. In an attempt to relate the present data to the current theoretical understanding of the nature of these later positive components, it could be interpreted that recognition of the distinction between neutral and affectively charged facial stimuli represents a categorical decision made predominantly in the left hemisphere, i.e.
the recognition of the stimuli allows the 'non-emotional' stimulus processing required in this paradigm to be completed and memory updating to occur, as indicated by the larger P3 amplitude. It is reasonable to speculate that, whereas the emotional/non-emotional distinction is made earlier and on the left, the positive/negative distinction requires 'continued processing' (Kok and Looren de Jong, 1980) and draws on the greater facial recognition ability of the right hemisphere. Thus, the SW is large and right-lateralized during the processing of both the positive and negative stimuli.

Wood and McCarthy (1984) have demonstrated in ERP simulation studies of PCA that differences in waveform shape between PCA-constructed component waveforms and the actual underlying component waveforms can result in misallocation of variance from one PCA component to another, particularly where components overlap. This is a potential confound in all ERP research, regardless of the component measurement methodology. For example, with peak-to-peak amplitude measurements, if the N1 component is superimposed on a large CNV, it may well be artificially enhanced; the same is true for overlapping P3 and SW components. However, this does not appear to be the case in the present data. Examination of Fig. 5 does not suggest that a single source produced the emotional perception effects on both the P3 and SW factors. The lateralization pattern switches chronologically from left to right between the P3 and SW components; similarly, the emotional connotative effects are reversed in the P3 and SW factors. A more consistent pattern would be expected if these effects were the result of a single source of variance. Furthermore, it does not appear likely that ERP components unique to emotional perception, superimposed on the P3 and SW, produced the current results. The consistency (but differing patterns) in topography and time across two identifiable PCA factors suggests that the processing associated with the P3 and SW components is responsible for the observed pattern of results.
The present results therefore suggest that the effects of emotional perception on the ERP observed in this research are manifest as modulation of ERP components known to be related to more general aspects of cognitive processing (i.e. P3 and SW), rather than as unique components related to the specific processing of emotional connotation. Relative to other sources of ERP amplitude variability (e.g. topographical variation), the emotional connotative effects (at least in the present data) appear to account for a relatively small proportion of the variance.

Our results for the effects of the perceived emotional connotation of words on ERPs do not, in the main, replicate the results of Chapman (1979; Chapman et al., 1977, 1978, 1980) or Begleiter (1979). In our data, effects of the affective connotation of words were found only in topographic distributions and were not significant for comparisons made at individual electrodes. The discrepancy can perhaps best be explained by differences in methods and electrode placements. Chapman, for example, reports data from only a single midline electrode (midway between Cz and Pz). He standardized the ERPs for each subject separately, subtracting each subject's overall mean ERP from that subject's data and transforming the residuals to z-scores at each time sample, leaving only variance due to word connotation and error variance for statistical analysis (no topographical variance being present). Chapman also based his PCA on a correlation matrix rather than the covariance matrix used in the present study, and allowed his PCA to isolate up to 11 factors, as opposed to the 5 factors reported here, although in both cases approximately 90% of the variance was accounted for. Chapman's methods are somewhat more sensitive to subtle waveform differences, which may be obscured by the large topographically related ERP variations in the present data. However, in Chapman's data the original ERP components are visually obscured and the results are more subject to Type I errors. It is also impossible to relate his PCA factors to traditionally understood ERP components by topographical analysis or visual inspection of the normalized waveforms. Thus, it is difficult to relate our PCA factors to his, or to make direct comparisons of our respective data.
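To make the contrast with our covariance-based PCA concrete, the standardization described for Chapman's analysis can be sketched as follows. The function name, array layout, and axis conventions are our own illustration, not his code: for each subject, the overall mean ERP (averaged over word conditions) is subtracted, and the residuals are then z-scored at each time sample.

```python
import numpy as np

def chapman_standardize(erps):
    """Per-subject standardization as described for Chapman's analysis.

    erps: array of shape (subjects, word_conditions, time_samples).
    Returns residuals z-scored at each time sample within subject,
    leaving only condition-related and error variance.
    """
    # Remove each subject's overall mean ERP (averaged over conditions),
    # discarding the common waveform and between-subject variance
    residuals = erps - erps.mean(axis=1, keepdims=True)
    # z-score at each time sample across conditions; the residual mean
    # is already zero, so dividing by the standard deviation suffices
    return residuals / residuals.std(axis=1, keepdims=True)
```

After this transformation no topographical or overall-waveform variance remains, which is why a PCA on such data is more sensitive to subtle connotative differences but can no longer be related to conventional ERP components by topography or visual inspection.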
Similar differences may account for the discrepancy between Begleiter's (1979) data and ours. Although he used 6 electrode sites, he reported data from only two, P3 and P4, which are somewhat posterior to the sites used in the present experiment. Begleiter also used peak-to-peak measurements rather than PCA factor scores. His bilateral recording and direct amplitude measurement result in less variance due to factors other than word connotation, and thus in greater sensitivity to connotative effects.

In terms of the dominant hemisphere for the processing of facial or verbal stimuli, the present data show some unusual findings. The right hemisphere is generally assumed to be more involved than the left in the recognition and processing of facial expression (De Renzi and Spinnler, 1966; Geffen et al., 1977; Gilbert and Bakan, 1973; Hilliard, 1973; Rizzolatti et al., 1971). A large literature also suggests that larger ERP component amplitudes are generally found at electrodes over the hemisphere considered dominant for a particular type of processing (for reviews see Hillyard and Kutas, 1983; Hillyard and Woods, 1979). The present results do not show greater right hemisphere amplitudes for responses to faces (i.e. no hemisphere main effect for any of the factors). However, it must be recognized that our paradigm required verbal judgment of emotional connotation, and that there was a large degree of stimulus redundancy (only 6 different facial stimuli). Right hemisphere effects may thus have been counterbalanced by the requirement of verbal ratings and by the well-routinized stimulus presentation, which placed minimal demand on the system for unique facial analysis. Sergent and Bindra (1981) report that although both hemispheres appear to contribute to the total processing of faces, their contributions vary as a function of task demands. These data do not support any simple concept of verbal stimuli being processed in the left hemisphere and faces in the right. In fact, at least in terms of ERP factor amplitudes, there is a complex interhemispheric relationship, with lateralization shifting across the later ERP components.
For the processing of words, P3 amplitudes were larger on the right, while SW laterality differed for anterior and posterior sites. Processing of faces produced no overall amplitude laterality. However, ERP differences related to the emotional connotation of faces were left-lateralized for the P3 and right-lateralized for the SW.

In summary, perception of the emotional connotations of different facial stimuli was reflected in the ERP waveforms. Processing of facial expression appeared to proceed in two steps: discrimination of the neutral from the emotional faces, indexed by a larger left hemisphere P3 for neutral faces; and continued processing of the emotional faces (positive or negative expressions), indexed by a larger right hemisphere SW to these stimuli. Processing of the emotional connotation of words produced only subtle differences in topography. The results also suggest that facial expression may be a more powerful conveyor of emotional meaning than verbal semantics, at least in terms of the electrophysiological concomitants of emotional information processing observed with the paradigms and methods of this research.

ACKNOWLEDGEMENT

This paper was submitted by R.D.V. in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Clinical Psychology at Fuller Theological Seminary, School of Psychology.

REFERENCES

Begleiter, H., Gross, M.M. and Kissin, B. (1967) Evoked cortical responses to affective visual stimuli. Psychophysiology, 3: 336-344.

Begleiter, H., Gross, M.M., Porjesz, B. and Kissin, B. (1969) The effects of awareness on cortical evoked potentials to conditioned affective stimuli. Psychophysiology, 5: 517-529.

Begleiter, H., Porjesz, B. and Garozzo, R. (1979) Visual evoked potentials and affective ratings of semantic stimuli. In H. Begleiter (Ed.), Evoked Brain Potentials and Behavior, Plenum, New York, pp. 127-141.

Chapman, R.M. (1979) Connotative meaning and averaged evoked potentials. In H. Begleiter (Ed.), Evoked Brain Potentials and Behavior, Plenum, New York, pp. 171-196.

Chapman, R.M., Bragdon, H.R., Chapman, J.A. and McCrary, J.W. (1977) Semantic meaning of words and averaged evoked potentials. In J.E. Desmedt (Ed.), Language and Hemispheric Specialization in Man: Cerebral ERPs. Progress in Clinical Neurophysiology, Vol. 3, Karger, Basel, pp. 36-47.

Chapman, R.M., McCrary, J.W., Chapman, J.A. and Bragdon, H.R. (1978) Brain responses related to semantic meaning. Brain Language, 5: 195-205.

Chapman, R.M., McCrary, J.W., Chapman, J.A. and Martin, J.K. (1980) Behavioral and neural analyses of connotative meaning: word classes and rating scales. Brain Language, 11: 319-339.

De Renzi, E. and Spinnler, H. (1966) Visual recognition in patients with unilateral cerebral disease. J. Nervous Mental Dis., 142: 515-525.

Dixon, W.J. (Ed.) (1975) BMD Biomedical Computer Programs, University of California Press, Berkeley.

Donchin, E. (1979) Event-related potentials: a tool in the study of human information processing. In H. Begleiter (Ed.), Evoked Brain Potentials and Behavior, Plenum, New York, pp. 13-88.

Duncan-Johnson, C.C. and Donchin, E. (1977) On quantifying surprise: the variation in event-related potentials with subjective probability. Psychophysiology, 14: 456-467.

Geffen, G., Bradshaw, J.L. and Wallace, G. (1977) Interhemispheric effects on reaction time to verbal and non-verbal visual stimuli. J. Exp. Psychol., 87: 415-422.

Gilbert, C. and Bakan, P. (1973) Visual asymmetry in perception of faces. Neuropsychologia, 11: 355-362.

Hastorf, A.H., Osgood, C.E. and Ono, H. (1966) The semantics of facial expressions and the prediction of the meaning of stereoscopically fused facial expressions. Scand. J. Psychol., 7: 179-189.

Hilliard, R.D. (1973) Hemispheric laterality effects on a facial recognition task in normal subjects. Cortex, 9: 246-258.

Hillyard, S.A. and Kutas, M. (1983) Electrophysiology of cognitive processing. Annu. Rev. Psychol., 34: 33-61.

Hillyard, S.A. and Woods, D.L. (1979) Electrophysiological analysis of human brain function. In M.S. Gazzaniga (Ed.), Handbook of Behavioral Neurobiology, Plenum, New York, pp. 345-378.

Jennings, J.R. and Wood, C.C. (1976) The ε-adjustment procedure for repeated-measures analyses of variance. Psychophysiology, 13: 277-278.

Keselman, H.J. and Rogan, J.C. (1980) Repeated measures F-tests and psychophysiological research: controlling the number of false positives. Psychophysiology, 17: 499-503.

Kok, A. and Looren de Jong, H. (1980) Components of the event-related potential following degraded and undegraded visual stimuli. Biol. Psychol., 11: 117-133.

Lifshitz, K. (1966) The averaged evoked cortical response to complex visual stimuli. Psychophysiology, 3: 55-68.

Näätänen, R. (1982) Processing negativity: an evoked-potential reflection of selective attention. Psychol. Bull., 92: 605-640.

Osgood, C.E. (1952) The nature and measurement of meaning. Psychol. Bull., 49: 197-237.

Osgood, C.E. (1964) Semantic differential technique in the comparative study of cultures. Am. Anthropol., 66: 171-200.

Osgood, C.E. (1966) Dimensionality of the semantic space for communication via facial expressions. Scand. J. Psychol., 7: 1-30.

Osgood, C.E. (1971) Exploration in semantic space: a personal diary. J. Soc. Issues, 27: 5-64.

Pritchard, W.S. (1981) Psychophysiology of P300. Psychol. Bull., 89: 506-540.

Rizzolatti, G., Umiltà, C. and Berlucchi, G. (1971) Opposite superiorities of the right and left cerebral hemispheres in discriminative reaction time to physiognomical and alphabetical material. Brain, 94: 431-442.

Sergent, J. and Bindra, D. (1981) Differential hemispheric processing of faces: methodological considerations and reinterpretation. Psychol. Bull., 89: 541-554.

Squires, N.K., Squires, K.C. and Hillyard, S.A. (1975) Two varieties of long-latency positive waves evoked by unpredictable auditory stimuli in man. Electroencephalogr. Clin. Neurophysiol., 38: 387-402.

Squires, K.C., Donchin, E., Herning, R.I. and McCarthy, G. (1977) On the influence of task relevance and stimulus probability on event-related potential components. Electroencephalogr. Clin. Neurophysiol., 42: 1-14.

Wood, C.C. and McCarthy, G. (1984) Principal component analysis of event-related potentials: simulation studies demonstrate misallocation of variance across components. Electroencephalogr. Clin. Neurophysiol., 59: 249-260.