Event-related potentials and the semantic matching of pictures


BRAIN AND COGNITION 14, 201-212 (1990)

SARAH E. BARRETT AND MICHAEL D. RUGG

Wellcome Brain Research Group, University of St. Andrews, St. Andrews, Fife KY16 9JU, United Kingdom

Event-related potentials (ERPs) were recorded from one midline and three pairs of lateral electrodes while subjects determined whether pairs of sequentially presented pictures were semantically associated. The ERPs evoked by the second picture of each pair differed as a consequence of whether it was associated with its predecessor, such that ERPs to nonassociated pictures were more negative-going than those to associated items. These differences resulted from the modulation of two ERP components, one frontally distributed and centered on an N300 deflection, the other distributed more widely over the scalp and encompassing an N450 deflection. The modulation of N450 is interpreted as further evidence that the "N400" ERP component is sensitive to semantic relationships between nonverbal stimuli. The earlier N300 effects, which do not appear to occur when ERPs are evoked by semantically primed and unprimed words, could suggest that the semantic processing of pictorial stimuli involves neural systems different from those associated with the semantic processing of words. © 1990 Academic Press, Inc.

INTRODUCTION

Since the seminal study by Kutas and Hillyard (1980), numerous studies have investigated event-related potentials (ERPs) evoked by words in semantic priming paradigms (see Kutas and Van Petten, 1988, for a review). The essence of these paradigms is the manipulation of the relationship between a "target" item and its immediately preceding context, such that the semantic information provided by the context is either "congruent" or "incongruent" with the semantic features of the target. In the work of Kutas and Hillyard (e.g., Kutas & Hillyard, 1980, 1984), semantic priming was manipulated by varying the "cloze probability" of the terminal word of a short sentence.

Address correspondence and reprint requests to M. D. Rugg, at Wellcome Brain Research Group, Department of Psychology, University of St. Andrews, St. Andrews, Fife KY16 9JU, UK. This research was supported by the Wellcome Trust.


In other work, priming has been manipulated by varying the degree of semantic association across successive items in tasks such as lexical decision (Bentin, McCarthy, & Wood, 1985; Rugg, 1985), semantic categorization (Barrett & Rugg, 1987; Stuss, Picton, & Cerri, 1988), or sentence verification (Fischler, Bloom, Childers, Roucos, & Perry, 1983). In all of these experiments, a major finding has been that ERPs evoked by unprimed words are more negative than those evoked by primed items. In general, this difference appears to result from the modulation of a centroparietally distributed negative-going deflection peaking approximately 400 msec poststimulus. This deflection was originally identified and labeled as the "N400" component by Kutas and Hillyard (1980); the label is now often applied to the prime-sensitive late negative-going ERP deflections observed in all the paradigms described above.¹

¹ The scalp topography of these deflections varies across paradigms, suggesting that multiple prime-sensitive generators are active in this latency range. The term "N400" is used here to refer generically to any late negative-going deflection peaking around 400 msec which can be modulated by priming manipulations, irrespective of topography.

In contrast to work with words, few studies have investigated ERPs evoked by primed and unprimed nonverbal stimuli. Barrett, Rugg, and Perrett (1988) studied ERPs evoked by familiar and unfamiliar faces in an identity-matching task. They found that nonmatching stimuli gave rise to substantially more negative-going ERPs than did matching ones, especially for pairs of familiar faces. The interaction between match/nonmatch ERP differences and familiarity was considered by Barrett et al. (1988) to be consistent with the idea that N400 can be evoked by "unprimed" nonverbal stimuli, and that this component is modulated most potently by stimuli that can activate pre-existing representations in memory. Barrett and Rugg (1989) investigated the ERPs evoked by faces in a "semantic" matching task, thus extending the study of Barrett et al. (1988) beyond the realm of repetition priming. Faces of individuals belonging to the same occupational category as an immediately preceding face gave rise to less negative-going ERPs than did nonmatching stimuli. On the basis of its latency and topography, this match/nonmatch effect was again considered to involve the modulation of an N400 component, leading to the conclusion that some aspects of the semantic priming of faces may share a common neural substrate with the mechanisms underlying word priming.

The present study extends the work of Barrett and colleagues by investigating the ERPs evoked by pictures of common objects in a semantic matching task. The processing of such stimuli has been intensively studied in both healthy and clinical populations (see for example the papers in Job & Sartori, 1988).


As with words, the processing of a picture can be enhanced if it is preceded by a semantic associate, suggesting that the semantic representations accessed by pictures are organized similarly to those accessed by words (Bajo, 1988). There is agreement that access to semantic information about a picture does not depend upon the retrieval of its name (in fact the reverse situation obtains), implying that pictorial semantic priming effects are unlikely to be verbally mediated. There is disagreement, however, over whether a picture of an object and its visually presented name each access the same representation in a single semantic memory system, or whether they access representations in separate pictorial and verbal systems (see for example Bajo, 1988; Riddoch, Humphreys, Coltheart, & Funnell, 1988; Seymour, 1979; Shallice, 1987). If pictorial stimuli access the same semantic representations as words, or access a semantic system organized similarly to the one representing words, ERPs evoked by primed and unprimed nonverbal stimuli might be expected to demonstrate differences in the amplitude of an N400-like component. On the other hand, if priming effects in ERPs evoked by pictorial stimuli are qualitatively different from those evoked by words, this would imply a difference in some aspect(s) of the way that these two types of stimuli are semantically processed.

METHOD

Subjects. Subjects were seven female and five male right-handed (as defined by reported writing hand) young adult volunteers (age range 18-30 years).

Stimuli. Stimuli consisted of 160 black and white slides, each portraying a line drawing of a common, easily nameable object. Many of the drawings were selected from the Snodgrass and Vanderwart (1980) set, and the remainder were drawn in a style consistent with this set. Eighty pairs of "matching" pictures were constructed. A pair was defined as matching when the pictures were associatively related (e.g., knife-fork, lock-key). The members of 40 of these picture pairs were rearranged so that each was paired with a nonmatching partner. The 40 matching and 40 nonmatching picture pairs were then randomly ordered, so as to form List 1. A second stimulus list was similarly formed. In this list, the stimuli used to provide the nonmatching pairs in List 1 were employed in the matching pairs, and the matching pairs of that list were rearranged to form the nonmatching pairs. As with List 1, the matching and nonmatching pairs were then randomly ordered to form an 80-pair series. Examples of matching and nonmatching pairs are shown in Fig. 1. The full stimulus set is available from the authors on request. In the experiment, half of the subjects were presented with each list. Over the course of the experiment, each critical stimulus thus appeared equally often as a matching and a nonmatching item, thereby removing any possible confound between items and the match/nonmatch manipulation. A further five matching and five nonmatching pairs, none of which was employed in the experimental lists described above, served as practice items.

Slides were back-projected onto a translucent screen and displayed on a TV monitor via a video link. The diameter of the display aperture of the monitor subtended a visual angle of approximately 2° at the viewing distance of 90 cm. Stimulus duration was 200 msec, and the interval between the onset of the first (S1) and second (S2) members of a pair was 1560 msec. The interval between the onsets of S1 on two consecutive trials was 8.5 sec.


FIG. 1. Examples of matching and nonmatching picture pairs.

A small fixation dot was continuously present in the center of the aperture in which the stimuli were presented. To indicate when eye movements were permissible, a row of crosses, easily visible within subjects' peripheral vision, was presented on a TV monitor immediately below that on which the pictures were presented. These crosses were displayed from 900 msec after the onset of S2 until 600 msec before the onset of S1 on the subsequent trial.

Procedure. Following electrode application, subjects were seated in front of the TV monitors, with the index fingers of each hand resting on laterally positioned microswitches. They were instructed to respond quickly and accurately with one hand when the two stimuli in a pair matched (see above), and with the other hand when the pictures did not match. The hands used by a subject for match and nonmatch responses remained the same throughout the experimental session, and were counterbalanced across subjects. Subjects were instructed to keep eye and body movements to a minimum throughout each trial, and to maintain fixation on the center of the aperture in which the stimuli were presented. They were informed that they could blink whenever the crosses were present on the lower TV monitor. After the 10 practice trials, the subjects were presented with one of the two experimental lists. A short rest break was given after 40 trials.

ERP recording. EEG was recorded, with reference to linked mastoids, from Pz (Jasper, 1958) and from lateral frontal, temporal, and parietal locations. Frontal electrodes were 75% of the distance from Fz to F7 on the left and F8 on the right, temporal electrodes were 75% of the distance from Cz to T3 on the left and T4 on the right, and parietal electrodes were 75% of the distance from Pz to T5 on the left and T6 on the right. Cz served as ground, and EOG was recorded between electrodes placed on the outer canthus of the left eye and just above the right eyebrow. Interelectrode impedances were less than 5 kohms, and all channels were amplified (Digitimer D150/160 system) with a bandwidth of 0.03-30 Hz (3 dB points), and with gains of 20K for EEG and 5K for EOG. The sampling rate for each channel was one point per 10 msec, beginning 100 msec before the onset of S1 and continuing for 2460 msec thereafter. Epochs of EEG and EOG were digitized on-line and stored on computer disc, permitting subsequent off-line averaging and analysis.

Separate ERPs were formed for each subject by averaging the trials of each experimental condition. Only trials on which a correct behavioral response was made were used to form the averages. Trials on which EOG activity exceeded a peak-to-peak amplitude of 125 µV, or on which saturation of one or more channels of the analog-digital converter occurred, were also excluded. The EOG was averaged along with the EEG so that the absence of eye movement artifact could be confirmed for each subject.
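For readers who want the trial-selection logic laid out concretely, the following is a minimal sketch of the kind of selective averaging described above, assuming the digitized epochs are available as NumPy arrays; the array names (epochs, eog, is_correct, adc_saturated) and the function itself are illustrative, not part of the original analysis software.

```python
import numpy as np

EOG_REJECT_UV = 125.0  # peak-to-peak EOG rejection threshold (microvolts)

def average_condition(epochs, eog, is_correct, adc_saturated, condition_mask):
    """Average single-trial epochs for one condition, keeping only trials with a
    correct response, EOG peak-to-peak amplitude below threshold, and no
    saturation of the analog-digital converter.

    epochs:         (n_trials, n_channels, n_samples) EEG in microvolts
    eog:            (n_trials, n_samples) EOG in microvolts
    is_correct:     (n_trials,) boolean, correct behavioral response
    adc_saturated:  (n_trials,) boolean, any channel saturated on this trial
    condition_mask: (n_trials,) boolean, trials belonging to this condition
    """
    eog_ok = (eog.max(axis=1) - eog.min(axis=1)) < EOG_REJECT_UV
    keep = condition_mask & is_correct & eog_ok & ~adc_saturated
    return epochs[keep].mean(axis=0)  # per-subject ERP for this condition

# e.g., erp_match = average_condition(epochs, eog, is_correct, adc_saturated, match_trials)
```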


FIG. 2. Grand-average waveforms evoked by semantically matching and nonmatching pictures. LF, RF, LT, RT, LP, and RP signify left and right frontal, temporal, and parietal electrodes, respectively. The N300 deflection is indicated by the open triangles, the N450 by the closed triangles, and P556 by the arrowhead.

Rating of picture pairs for visual similarity. List 1 and List 2 picture pairs were rated for degree of visual similarity by two groups of 20 subjects (one group for each list), none of whom had participated in the ERP experiment described above. Subjects were shown photocopies of the matching and nonmatching picture pairs in the same order that they had been presented in the ERP experiment, and were instructed to rate each pair on a scale from 0 (no similarity) to 5 (highly similar). The instructions emphasized the need to make these ratings purely on the basis of the visual characteristics of the members of a picture pair. By averaging subjects' judgements, a visual similarity rating was obtained for every pair of items in each list.

RESULTS

On average, 18% of trials (SD = 9) were lost because of response errors or artifacts. These rejected trials were evenly distributed across the two experimental conditions. Grand average waveforms evoked by matching and nonmatching pictures are shown in Fig. 2. Match/nonmatch ERP differences are evident from approximately 250 msec poststimulus, following which they persist for the remainder of the recording epoch. These differences encompass two negative-going deflections in the waveforms: a relatively early frontally distributed negativity (N300), and a later, more widespread deflection (N450).

Mean Amplitude Measures

The match/nonmatch effects shown in Fig. 2 were analyzed by measuring the mean amplitude of three regions of the waveform, selected to encompass the major deflections that appeared to be modulated by the match/nonmatch manipulation. These latency regions were: 250-350 msec, a region centered on the N300; 350-550 msec, taking in the broader N450 deflection; and 550-900 msec, encompassing the prominent late positive component (P556) and the subsequent slow wave.
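As a concrete illustration of this measure, the sketch below computes mean amplitudes over the three latency regions from an averaged waveform; the variable names and the indexing convention are assumptions for illustration, not taken from the original analysis.

```python
import numpy as np

# Latency regions (msec after S2 onset) used in the analyses reported below
WINDOWS = {"N300": (250, 350), "N450": (350, 550), "late": (550, 900)}

def mean_amplitudes(erp, times_ms):
    """Return the mean amplitude of each latency region at every electrode.

    erp:      (n_channels, n_samples) averaged waveform in microvolts
    times_ms: (n_samples,) sample times in msec relative to stimulus onset
    """
    measures = {}
    for label, (start, stop) in WINDOWS.items():
        in_window = (times_ms >= start) & (times_ms < stop)
        measures[label] = erp[:, in_window].mean(axis=1)  # one value per electrode
    return measures
```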


TABLE 1
MEAN AMPLITUDE (MICROVOLTS) AT EACH ELECTRODE SITE OF THE 250-350, 350-550, AND 550-900 msec LATENCY REGIONS OF ERPs EVOKED BY MATCHING AND NONMATCHING PICTURES

                     Pz      LF      LT      LP      RF      RT      RP
250-350 msec
  Match             4.2     0.2     0.4     4.6     0.3     1.2     6.5
  Nonmatch          2.5    -1.5    -1.0     4.3    -2.3    -0.4     5.9
  d                 1.7     1.7     1.4     0.3     2.6     1.6     0.6
350-550 msec
  Match            10.6     2.3     3.8     8.4     2.4     4.7     9.0
  Nonmatch          5.3    -0.3     0.2     5.3    -0.6     0.8     5.5
  d                 5.3     2.6     3.6     3.1     3.0     3.9     3.5
550-900 msec
  Match             9.7     0.8     2.8     6.8     1.3     4.0     5.7
  Nonmatch          7.5     0.2     2.0     4.9    -0.3     2.5     3.8
  d                 2.2     0.6     0.8     1.9     1.6     1.5     1.9

These data are shown in Table 1. They were assessed statistically by repeated measures ANOVA, with degrees of freedom adjusted when necessary by the Geisser-Greenhouse procedure. A posteriori comparisons between means were performed with the Tukey HSD test. Results of the statistical analysis of data from Pz are not presented unless they differ from or add to the analyses of the lateral electrode sites. The analyses of the lateral sites employed the factors of condition (match/nonmatch), hemisphere (left/right), and electrode site (frontal/temporal/parietal).

ANOVA of the 250-350 msec region gave rise to significant main effects of condition (F(1, 11) = 12.10, p = .005) and electrode site (F(1.2, 13.0) = 31.94, p < .001), and to significant interactions between condition and site (F(1.4, 15.4) = 8.45, p < .01), and hemisphere and site (F(1.3, 14.7) = 5.63, p < .05). The interactions between condition and hemisphere, and condition, hemisphere, and electrode site were both far from significant (F's of 1.61 and .65, respectively). As can be seen from Table 1, the interaction between condition and site reflects the anterior distribution of the match/nonmatch differences in this latency range. Tukey tests showed that these differences were significant at frontal and temporal but not at parietal electrodes. The hemisphere by site interaction arose because of the tendency for this region of the waveform to be more positive-going at the right than at the left parietal electrode.

ANOVAs on the 350-550 and 550-900 msec data revealed, in each case, only two significant effects.


TABLE 2
MEAN REACTION TIME (msec) AND PERCENTAGE CORRECT AS A FUNCTION OF MATCHING CONDITION (STANDARD DEVIATIONS IN BRACKETS)

                  Match           Nonmatch
RT                875 [135]       913 [139]
% correct         92.7 [1.5]      97.7 [1.0]

These were for condition [350-550: F(1, 11) = 69.75, p < .001; 550-900: F(1, 11) = 16.69, p < .005] and electrode site [350-550: F(1.1, 12.5) = 21.81, p < .001; 550-900: F(1.2, 13.0) = 9.02, p < .01]. As can be seen from Table 1, the first of these effects reflects the more negative-going values in the nonmatching condition, and the second reflects the parietal maximum of these regions of the waveforms. In neither latency region did any interaction involving the factors of condition, hemisphere, or electrode site approach significance (all F's < 3.00).

The foregoing analyses suggest that the match/nonmatch differences in these waveforms possess a frontal maximum in the 250-350 msec latency region (straddling the N300 deflection), but are distributed more evenly over the scalp in the 350-550 msec region (encompassing N450). To determine whether the difference between these adjacent latency regions in the topography of their match/nonmatch effects was reliable, the approach recommended by McCarthy and Wood (1985) was adopted. First, the differences between match and nonmatch conditions for each subject were normalized across electrode sites within each latency region. This eliminates any systematic differences between the regions in the size of the match/nonmatch effects, and allows differences in their distribution to be assessed unequivocally (McCarthy and Wood, 1985). The normalized data were then subjected to ANOVA, with factors of latency region, hemisphere, and site. This revealed a significant interaction between latency region and electrode site (F(1.7, 18.3) = 11.93, p = .001; all other F's < 1), indicating that the match/nonmatch effects in the two latency regions did indeed differ in topography, independently of any difference in their magnitude.
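The normalization step can be made concrete as follows; this sketch assumes the min-max rescaling variant often attributed to McCarthy and Wood (1985), with illustrative array names and shapes rather than the original analysis code.

```python
import numpy as np

def normalize_topography(diff):
    """Rescale match-minus-nonmatch amplitudes across electrode sites, separately
    for each subject and latency region, so that values within a region span 0-1.
    This removes overall size differences between regions while preserving the
    shape of their scalp distributions.

    diff: (n_subjects, n_regions, n_electrodes) difference amplitudes
    """
    lo = diff.min(axis=-1, keepdims=True)
    hi = diff.max(axis=-1, keepdims=True)
    return (diff - lo) / (hi - lo)  # then submitted to an ANOVA with region, hemisphere, and site factors
```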

Late Positive Component

The peak latency and amplitude of the late positive component were measured at Pz, where it was most prominent and therefore easiest to identify. ANOVAs indicated that both latency and amplitude varied between conditions (latency: match = 534 msec, nonmatch = 578 msec, F(1, 11) = 7.18, p < .05; amplitude: match = 14.0 µV, nonmatch = 11.4 µV, F(1, 11) = 6.55, p < .05).


FIG. 3. Grand-average waveforms evoked by pictures rated as visually similar or dissimilar to their predecessors. Electrode sites as for Fig. 2.

Behavioral Data

Mean RT and accuracy data are shown in Table 2. ANOVA revealed no effect of condition on RT. However, a significant difference in accuracy was found (F(1, 11) = 15.53, p < .005), reflecting higher scores on nonmatching trials.

Visual Similarity Ratings

The mean ratings of visual similarity for each stimulus pair were subjected to ANOVA, with factors of stimulus list (List 1 vs. List 2) and condition (match vs. nonmatch). This revealed a significant effect of condition (F(1, 156) = 118.10, p < .001), indicating that the mean visual similarity rating given to matching pairs (2.49, SD = .56) was reliably greater than that given to pairs of nonmatching items (0.68, SD = .46). No effects of stimulus list were found (F's < 1).

Role of Visual Similarity in ERP Modulation

To clarify the role of visual similarity on match/nonmatch ERP effects, an analysis was conducted post-hoc to examine the influence of this variable when it was uncorrelated with semantic association. ERPs to visually similar picture pairs were formed by averaging the single trials evoked by the 15 most visually similar pairs from the matching and nonmatching conditions, respectively (yielding ERPs based on a maximum of 30 trials). ERPs to visually dissimilar pairs were likewise formed by averaging the single trials associated with the 15 least similar pairs from each condition. The grand averages of these ERPs are shown in Fig. 3. In ANOVAs of the mean amplitude of the 250-350, 350-550, and 550-900 msec latency regions of these waveforms, no effect involving the factor of visual similarity approached significance (maximum F = 2.01).
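The construction of these post-hoc averages can be sketched as follows, assuming the same illustrative single-trial arrays as in the earlier sketches; the function name and the 'match'/'nonmatch' labels are assumptions for the example.

```python
import numpy as np

def similarity_split_erps(epochs, pair_similarity, condition, n_extreme=15):
    """Form 'visually similar' and 'visually dissimilar' ERPs by pooling the
    n_extreme most and least similar pairs from each matching condition.

    epochs:          (n_trials, n_channels, n_samples) artifact-free single trials
    pair_similarity: (n_trials,) mean visual-similarity rating of each pair
    condition:       (n_trials,) array of 'match' / 'nonmatch' labels
    """
    similar, dissimilar = [], []
    for label in ("match", "nonmatch"):
        trials = np.flatnonzero(condition == label)
        ranked = trials[np.argsort(pair_similarity[trials])]
        dissimilar.extend(ranked[:n_extreme])   # least visually similar pairs
        similar.extend(ranked[-n_extreme:])     # most visually similar pairs
    return epochs[similar].mean(axis=0), epochs[dissimilar].mean(axis=0)
```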


The mean similarity rating of the 30 high visual similarity pairs used to form the ERPs was 2.60 (SD = 1.43), while the rating for the 30 low similarity pairs was 0.62 (SD = .58). The difference in mean similarity between these pairs is comparable to the difference in visual similarity between the semantically matching and nonmatching pairs used to evoke the ERPs shown in Fig. 2. The differing visual similarity of matching and nonmatching picture pairs therefore seems unlikely to have been responsible for the match/nonmatch effects illustrated in that figure.

DISCUSSION

The ERPs evoked in this experiment by nonmatching pictures were more negative-going than those evoked by matching items. This match/nonmatch effect resulted from the modulation of at least two ERP components: the first of these was frontally distributed, centering around the N300 deflection, while the second component was more widespread in its distribution, and was centered around the N450 deflection.

As would be expected on the basis of previous studies of matching (e.g., Barrett & Rugg, 1990; Barrett et al., 1988; Rugg, 1984), a sizeable contingent negative variation (CNV) occurred in the present experiment during the ISI between the two members of the picture pairs. This raises the question of whether the observed match/nonmatch effects would also be found in circumstances less favorable for CNV development. N400-like deflections evoked by words have been modulated in a wide variety of tasks and paradigms (Kutas & Van Petten, 1988), suggesting that the N450 effects in the present experiment are likely to generalize to other settings in which pictorial material is employed. It remains to be seen, however, whether this is true of N300.

The pattern of ERP modulation in the present experiment is very similar to that found by Barrett et al. (1988), when subjects were required to match sequentially presented faces on the basis of whether they depicted the same individual. As in the present study, match/nonmatch ERP effects were initially frontally distributed, and centered around a prominent negative-going deflection. One factor common to the present study and Barrett et al. (1988) is that matching and nonmatching stimulus pairs differed in their level of visual similarity. Barrett et al.'s pairs of matching stimuli, which consisted of two different views of the same individual (e.g., full face followed by half profile), were generally more visually similar to one another than nonmatching pairs (which consisted of different views of different individuals). And in the present experiment, members of matching stimulus pairs were rated as more visually similar than pairs of nonmatching items. However, the post-hoc analysis of the ERPs evoked by visually similar and dissimilar pictures suggests strongly that visual similarity is not responsible for the modulation of N300.


It is possible therefore that the frontally distributed ERP modulation evoked by matching and nonmatching picture and face pairs reflects the activity of neural systems that are sensitive to "semantic" relationships between these stimuli. The finding that such match/nonmatch effects are not observed when faces are matched on the basis of their expressions (Potter, 1988; Potter & Parker, 1989) is consistent with this suggestion, since the processes underlying the derivation of expression and semantic information from a face appear to be largely independent (Young, McWeeny, Hay, & Ellis, 1986).

Figures 2 and 3 suggest that the N450 deflection overlaps in time with the subsequent late positive component, P556. This raises the question of whether the apparent sensitivity of N450 to the match/nonmatch manipulation results from changes in the amplitude and latency of P556. The contrasting scalp distributions of P556 and the match/nonmatch effects in the 350-550 and 550-900 msec regions make this extremely unlikely. While P556 shows a strongly parietal distribution, the match/nonmatch effects do not vary significantly in magnitude between frontal, temporal, and parietal electrodes. This suggests that the generators responsible for P556, and those giving rise to the match/nonmatch effects, are not equivalent. This conclusion receives further support from the fact that the match/nonmatch effects reach their maximum size well before the peak of P556, as can be seen in Fig. 2.

A related question is whether the present match/nonmatch effects result merely from some general "match-sensitive" process, rather than from differences in the level of semantic association between matching and nonmatching picture pairs. This seems unlikely, since a comparison of the ERP effects evoked in the present and in previous matching tasks indicates that these effects differ as a function of the type of judgement that is made. In particular, when either words (e.g., Rugg, 1984) or pictures (Barrett & Rugg, 1990) are matched on the basis of phonology, the resulting match/nonmatch effects exhibit a marked asymmetry over the scalp, predominating over the right hemisphere. This asymmetry is not evident when the same types of material are matched on the basis of their semantic properties (as evidenced by the present data and Barrett & Rugg, 1987). In addition, as discussed below, the early frontally distributed ERP effects observed in the present study and in Barrett et al. (1988) may be material as well as task specific. It is difficult to see how these various differences can be accounted for under the assumption that match/nonmatch effects reflect processes that are nonspecific with respect to the type of match being performed.

On the basis of functional, latency, and topographic criteria, the modulation of N450 by the match/nonmatch manipulation seems best interpreted as reflecting changes in the amplitude of a late component allied or equivalent to the N400 described in priming studies with words (see Introduction).


These data are therefore consistent with the conclusion of Barrett and Rugg (1989) that the N400 component is sensitive to semantic relationships between nonverbal stimuli. As noted by these authors, if the N400 evoked and modulated by nonverbal stimuli reflects activity in some of the generators that also give rise to the N400 evoked by words, then some aspects of the semantic priming of linguistic and nonlinguistic stimuli share a common neural substrate. It remains to be established whether this substrate, and the cognitive processes dependent upon it, plays a causal role in the behavioral effects of semantic priming.

A frontally distributed ERP modulation preceding the N400 has yet to be reported in studies involving either the semantic matching of word pairs (e.g., Barrett & Rugg, 1987; Sanquist, Rohrbaugh, Syndulko, & Lindsley, 1980), or the semantic priming of single words (e.g., Bentin et al., 1985; Rugg, 1985). If the absence of such a modulation in ERPs evoked by words is confirmed, this would raise the possibility that, prior to N400, different neural systems are involved in the semantic processing of pictorial and visually presented verbal stimuli. This does not necessarily imply that pictures require processing additional to that needed to process words semantically. It is possible that semantically matching and nonmatching words are also differentiated by pre-N400 ERP activity, but that the location and orientation of the generators of this activity are such that it cannot be detected at the scalp.

If it is confirmed that the modulation of N300 is associated with the semantic processing of pictorial but not verbal stimuli, this would be consistent with the view (e.g., Shallice, 1987) that such stimuli access different semantic memory systems. This conclusion would however have to remain provisional until the cognitive processes associated with the N300 effects have been identified. For example, it might transpire that these effects reflect "post-access" processes, that is, processes which are only active after a picture's semantic information has been retrieved from memory. In this case differential N300 effects would be uninformative about differences in the way that the semantics of words and pictures are represented and accessed. The effects would instead reflect divergences in the way that semantic information derived from pictorial and verbal stimuli is utilized after it has been accessed. The resolution of this and related issues will require considerable further research.

REFERENCES

Bajo, M. T. 1988. Semantic facilitation with pictures and words. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 579-589.
Barrett, S. E., & Rugg, M. D. 1990. Event-related potentials and the phonological matching of pictures. Brain and Language, 38, 424-437.
Barrett, S. E., & Rugg, M. D. 1989. Event-related potentials and the semantic matching of faces. Neuropsychologia, 27, 913-922.
Barrett, S. E., & Rugg, M. D. 1987. Event-related potentials in semantic and phonological matching tasks. Psychophysiology, 24, 577-578.


Barrett, S. E., Rugg, M. D., & Perrett, D. I. 1988. Event-related potentials and the matching of familiar and unfamiliar faces. Neuropsychologia, 26, 105-117.
Bentin, S., McCarthy, G., & Wood, C. C. 1985. Event-related potentials, lexical decision and semantic priming. Electroencephalography and Clinical Neurophysiology, 60, 343-355.
Fischler, I., Bloom, P. A., Childers, D. G., Roucos, S. E., & Perry, N. W. 1983. Brain potentials related to stages of sentence verification. Psychophysiology, 20, 400-409.
Jasper, H. H. 1958. The ten-twenty electrode system of the International Federation. Electroencephalography and Clinical Neurophysiology, 10, 371-375.
Job, R., & Sartori, G., Eds. 1988. The cognitive neuropsychology of visual and semantic processing of concepts. Cognitive Neuropsychology, 5, 1-150.
Kutas, M., & Hillyard, S. A. 1984. Brain potentials during reading reflect word expectancy and semantic association. Nature (London), 307, 161-163.
Kutas, M., & Hillyard, S. A. 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203-205.
Kutas, M., & Van Petten, C. 1988. ERP studies of language. In P. K. Ackles, J. R. Jennings, & M. G. H. Coles (Eds.), Advances in psychophysiology, Vol. 3. Greenwich: JAI Press. Pp. 139-187.
McCarthy, G., & Wood, C. C. 1985. Scalp distributions of event-related potentials: An ambiguity associated with analysis of variance models. Electroencephalography and Clinical Neurophysiology, 62, 203-208.
Potter, D. D. 1988. Behavioural and electrophysiological correlates of the processing of facial identity and expression. Unpublished Ph.D. thesis, University of Aberdeen.
Potter, D. D., & Parker, D. M. 1989. Electrophysiological correlates of facial identity and expression processing. In J. Crawford & D. M. Parker (Eds.), Developments in clinical and experimental neuropsychology. New York: Plenum Press. Pp. 137-150.
Riddoch, M. J., Humphreys, G. W., Coltheart, M., & Funnell, E. 1988. Semantic systems or system? Neuropsychological evidence re-examined. Cognitive Neuropsychology, 5, 3-26.
Rugg, M. D. 1985. The effects of semantic priming and word repetition on event-related potentials. Psychophysiology, 22, 642-647.
Rugg, M. D. 1984. Event-related potentials in phonological matching tasks. Brain and Language, 23, 225-240.
Sanquist, T. F., Rohrbaugh, J. W., Syndulko, K., & Lindsley, D. B. 1980. Electrocortical signs of levels of processing: Perceptual analysis and recognition memory. Psychophysiology, 17, 568-576.
Seymour, P. H. K. 1979. Human visual cognition. London: Collier Macmillan.
Shallice, T. 1987. Impairments of semantic processing: Multiple dissociations. In M. Coltheart, G. Sartori, & R. Job (Eds.), The cognitive neuropsychology of language. London: L. Erlbaum. Pp. 111-127.
Snodgrass, J. G., & Vanderwart, M. 1980. A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174-215.
Stuss, D. T., Picton, T. W., & Cerri, A. M. 1988. Electrophysiological manifestations of typicality judgment. Brain and Language, 33, 260-272.
Young, A. W., McWeeny, K. H., Hay, D. C., & Ellis, A. W. 1986. Matching familiar and unfamiliar faces on identity and expression. Psychological Research, 48, 63-68.