
NeuroImage 9, 88–96 (1999). Article ID nimg.1998.0386, available online at http://www.idealibrary.com

Functional Neuroanatomy of Semantic Memory: Recognition of Semantic Associations

Paul T. Ricci,* Benjamin J. Zelkowicz,* Robert D. Nebes,* Carolyn Cidis Meltzer,*,† Mark A. Mintun,† and James T. Becker*,‡

Neuropsychology Research Program and Functional Imaging Research Program, *Department of Psychiatry, ‡Department of Neurology, and †Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania 15213

Received December 29, 1997

In an effort to examine the functional neuroanatomy of semantic memory, we studied the relative cerebral blood flow of eight healthy young subjects using 15O-water positron emission tomography (PET). Relative to a visual baseline control condition, each of four visual matching-to-sample tasks activated components of the ventral visual processing stream, including the inferior occipital and temporal cortices. Contrasting the task with the highest semantic component, a variation on the Pyramids and Palm Trees paradigm, with a size discrimination task resulted in focal activation in the anterior inferior temporal lobe, focused in the parahippocampal gyrus. There was additional activation in BA47 of the inferior frontal cortex. These data replicate and extend previously reported results using similar paradigms, and are consistent with cognitive neuropsychological models that stress the executive role of BA47 in semantic processing tasks. © 1999 Academic Press

INTRODUCTION

Advances in neuroimaging techniques, such as positron emission tomography (PET), have greatly enhanced our understanding of brain function assessed in vivo. These techniques have been particularly useful in the study of cognitive processes, allowing us to ascertain which brain regions are involved in specific cognitive processes by measuring changes in cerebral blood flow as a function of the different demands of cognitive tasks. In the process of overt naming of pictures, the left inferior temporal gyrus and inferior frontal cortex have consistently demonstrated activation when compared to control tasks (e.g., 2, 12, 20). A variety of data suggest that the fusiform gyrus, especially BA37, is involved at some level in the semantic aspects of various tasks such as naming, reading, and certain word generation tasks. Although there appears to be some debate on the issue, the functions of the inferior frontal cortex seem related, at least in part, to the processes involved in overt or covert speech as well as to the executive components of semantic memory (see Fiez (3) for discussion).

Although we (6) and others (e.g., 2) have shown that word reading involves the left fusiform gyrus, the extent to which this activation is in response to a specific lexical referent is less clear. That is, with typical word reading or object naming tasks, subjects are required to orally read or name a specific visually presented stimulus. But is this brain region also involved when knowledge of a name is not required, but knowledge about attributes of the object or word is required to perform the task accurately? Relevant to this question is the study by Vandenberghe and colleagues (18), who used a matching-to-sample procedure based on the Pyramids and Palm Trees Test of semantic association (8) to demonstrate common and independent processing systems for words and drawings of objects. Subjects were shown a sample drawing (or word), with two choice drawings (or words) below. The task was simply to indicate which of the two choice stimuli was more closely related to the sample. Semantic knowledge in the form of attribute information was sufficient to perform the matching-to-sample procedure accurately, as the subjects did not need to be able to name the objects to perform the task. Among their findings was a consistent and reliable activation of the fusiform gyrus when the subjects had to access the meaning, or semantics, of objects or words. We were specifically interested in replicating these findings with regard to objects, and further in attempting to define the processing components of the task. Given the importance of between-center replication for validating in vivo functional data (19), this study aids in defining the functional components of the semantic memory system for processing pictures.


METHODS

Subjects

There were 8 subjects (5 males, 3 females); all were healthy (no history of neurological or psychiatric disorders), young (age = 26.4 years, SD = 7.6, range = 18–41), and right-handed, and English was their native language. Informed consent was obtained prior to the start of the study. Each female volunteer had a negative serum pregnancy test on the day of the PET scan. This research had been approved by the Institutional Review Board of the University of Pittsburgh Medical Center.

PET Procedures

Each subject was scanned 10 times, measuring relative cerebral blood flow (relCBF) using 15O-water with standard laboratory procedures (7, 20). Briefly, the subjects were placed in the supine position on the table of a Siemens HR+ PET scanner, which collects 63 parallel planes over a 15.2-cm axial field of view. An intravenous catheter was placed in the antecubital vein of the left arm for radiopharmaceutical injection. The head was positioned within the head holder, and a softened thermoplastic mask was fitted over the face, molded to the subject's facial contours, and fastened to the head holder. A 10-min transmission scan was done prior to radiopharmaceutical injection using three rotating rod sources of 68Ge/68Ga. Measurements of relCBF followed the intravenous bolus injection of 10–11 mCi of H2 15O in approximately 5 to 7 ml of saline. The start of each scan was triggered when the total counts measured exceeded twice the background count, approximately 30 s after each injection. The data were acquired with the scanner septa retracted, i.e., in full 3-D mode. Each subject had undergone a structural MRI scan prior to the PET study. Thus, we first aligned the PET scans to their respective MRIs and then aligned them to stereotaxic space (16). All PET images were reconstructed to an in-plane resolution of approximately 8 mm full-width at half-maximum prior to analysis using SPM95 (5).

Semantic Association Tasks

The purpose of this study was to evaluate relCBF during the performance of tasks that placed different demands on the analysis of semantic information. To record the subject's responses in each condition, a button box with two buttons was placed under the right index and middle fingers. There were five task conditions in this study: baseline (visual noise), figure matching, size matching, group matching, and semantic matching. There were two scans for each condition, presented in a pseudorandom order for each subject; within each condition, the order of the trials was random.
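The session structure just described can be illustrated with a minimal sketch in Python. The number of trials per scan and the random seed are placeholder assumptions (the paper does not report a trial count), and stimulus display and button-box recording are omitted.

```python
# Minimal sketch of the session structure described above: two scans for each of
# the five conditions in a pseudorandom order, with trial order randomized within
# each scan. Timing (2-s stimulus, 1-s intertrial interval) is carried as metadata.
# trials_per_scan is a placeholder; the paper does not report the exact trial count.
import random

CONDITIONS = ["baseline", "figure match", "size match", "group match", "semantic match"]
STIM_DUR_S, ITI_S = 2.0, 1.0

def build_session(trials_per_scan=20, seed=0):
    rng = random.Random(seed)
    scan_order = CONDITIONS * 2              # two scans per condition
    rng.shuffle(scan_order)                  # pseudorandom scan order
    session = []
    for scan_idx, condition in enumerate(scan_order):
        trial_ids = list(range(trials_per_scan))
        rng.shuffle(trial_ids)               # random trial order within the condition
        session.append({"scan": scan_idx, "condition": condition,
                        "trials": trial_ids, "stim_s": STIM_DUR_S, "iti_s": ITI_S})
    return session

if __name__ == "__main__":
    for scan in build_session():
        print(scan["scan"], scan["condition"], scan["trials"][:5], "...")
```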

The stimuli for each trial were presented for 2 s, with a 1-s intertrial interval.

The baseline condition consisted of viewing three figures that had been created by randomly pasting together portions of other drawings from the set of real objects (see Fig. 1). When the subject viewed the stimulus, he or she pressed either button as quickly as possible, and reaction time was recorded. This condition was viewed as a control for the general experimental environment, nonspecific visual stimulation, and rapid button pressing. In the figure match condition, subjects were presented with three nonsense objects (11): a sample and two choice stimuli. They were told to pick the object at the bottom of the screen that matched the sample at the top, pressing the button corresponding to the side on which the object appeared. This condition was viewed as an experiment-specific control: the subjects had to make rapid decisions about visually presented stimuli, but did so on the basis of physical rather than meaning-based attributes of the stimuli. For the size match condition, the subjects were told to choose the item at the bottom of the screen that was the same size as the sample; one of the choice stimuli differed in size from the sample by 4%. This condition, taken from Vandenberghe and colleagues (18), also required the subjects to make a judgement about visual stimuli; however, the task was made demanding by the small difference in size between the choice stimuli. The goal was to provide a control for the longer response latencies in the semantic conditions, which might otherwise reflect task difficulty alone. In the group match condition, the subjects were to match the stimuli based on the type of object presented (e.g., two different types of telephones). The semantic match condition used a variation of the Pyramids and Palm Trees task (8); subjects were to choose the object that was semantically related to the sample. In the example shown in Fig. 1, the correct response is the apple. These two conditions were both aimed at the semantics of the task. The latter was used by Vandenberghe and associates (18) and, at face value, was the more purely semantic of the two. The group match condition was included for comparison purposes, since it could be solved on the basis of either lexical or semantic knowledge.

RESULTS

The performance of the subjects on the activation tasks while in the scanner is shown in Table 1; behavioral data from two of the subjects could not be analyzed due to a failure by the computer to record the responses.


FIG. 1. The stimuli used in each of the five task conditions. In the case of the visual baseline condition, the subjects had only to respond with a button press as soon as they had seen the stimuli. In the other four conditions, the subjects had to indicate with a button press which of the two choice stimuli matched the sample. See text for details.

In the remaining six subjects there was a significant difference as a function of task condition in terms of both accuracy (F(4,54) = 61.707, P < 0.001) and reaction time (F(4,54) = 50.781, P < 0.001). Accuracy in the semantic match condition was significantly lower than that in the group match and figure match conditions (Tukey's HSD test, P < 0.05). Accuracy on the size match task was poorer than that on any of the other measures (P < 0.05) and was not significantly different from chance (P > 0.05).

TABLE 1
Behavioral Data

                         Baseline   Figure match   Size match   Group match   Semantic match
Percentage correct       99.4(a)        99.4          57.4          95.2            79.5
  SD                      0.022         0.022         0.119         0.108           0.104
Reaction time (ms)       618.94        600.94       1400.63        889.23         1317.13
  SD                     174.35        112.12        225.94        220.51          270.22

(a) Percent of trials with a response.
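As a minimal illustration of the type of behavioral comparison reported above (an omnibus test across the five conditions followed by Tukey HSD pairwise tests), the sketch below uses scipy and statsmodels. It does not reproduce the exact model or degrees of freedom used in the paper, and the demonstration values are synthetic placeholders rather than the study data.

```python
# Sketch of a one-way ANOVA on a behavioral measure across the five task conditions,
# followed by Tukey HSD pairwise tests. Illustrative only; not the authors' exact analysis.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_conditions(scores_by_condition):
    """scores_by_condition: dict mapping condition name -> 1-D array of scores."""
    names = list(scores_by_condition)
    groups = [np.asarray(scores_by_condition[n], dtype=float) for n in names]
    f_val, p_val = stats.f_oneway(*groups)                 # omnibus one-way ANOVA
    values = np.concatenate(groups)
    labels = np.concatenate([[n] * len(g) for n, g in zip(names, groups)])
    tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)  # pairwise post hoc tests
    return f_val, p_val, tukey

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic placeholder scores (NOT the study's data), one value per subject,
    # centered near the accuracies reported in Table 1.
    demo = {c: rng.normal(loc=m, scale=5, size=6)
            for c, m in [("baseline", 99), ("figure", 99), ("size", 57),
                         ("group", 95), ("semantic", 80)]}
    f_val, p_val, tukey = compare_conditions(demo)
    print(f"F = {f_val:.2f}, p = {p_val:.4g}")
    print(tukey.summary())
```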

In terms of reaction times, those for the semantic match condition were significantly longer than those for the baseline and group match conditions (P < 0.05), but there was no significant difference in mean reaction time between the semantic and size match tasks (P > 0.05).

The first step in the analysis of the imaging data was to compare the relCBF in each of the four activation conditions with that seen during the baseline control (P < 0.001) (see Table 2 and Fig. 2). As can be seen from the activation maps, there was a significant increase in activity in the occipital cortex bilaterally when the figure match and baseline conditions were compared. When the size match condition was contrasted with baseline, significant activation was seen bilaterally in the ventral processing stream; separate foci were observed in the lingual gyrus and the cuneus on the left. A region of significant activation was also seen in BA40 of the inferior parietal lobe on the left. Comparing the group match condition to baseline also revealed significant activation of the visual processing stream bilaterally; increased relCBF was seen in the globus pallidus as well as in the temporal cortex on the left.


TABLE 2
Regions of Significant Activation Relative to the Baseline Condition

                                                 Peak activation(b)
Task condition / Region              Size(a) (k)    Z       x      y      z

Figure match
  Inf. occipital gyrus (right)           160       4.35     42    -74    -20
                                                            42    -76      0
  Inf. occipital gyrus (left)             46       3.71    -36    -74    -16
Size match
  Inf. occipital/temporal (left)         661       5.37    -36    -70    -20
                                                   4.83    -38    -74     -8
                                                   4.72    -40    -60    -16
  Inf. occipital/temporal (right)        638       5.33     40    -72    -16
                                                   5.01     38    -80    -12
                                                   4.77     42    -56    -16
  Lingual gyrus (left)                   183       4.70      0    -84    -12
                                                   3.50     -2    -70    -20
  Inf. parietal lobule (left)             50       4.43    -36    -46     52
Group match
  Inf. occipital/temporal (left)         624       5.83    -34    -44    -24
                                                   5.53    -36    -74    -20
                                                   4.88    -38    -72     -8
  Inf. occipital/temporal (right)        675       5.37     40    -72    -16
                                                   5.10     34    -40    -24
                                                   4.71     36    -48    -20
  Globus pallidus (left)                  14       4.17    -14     -4     -4
Semantic match
  Inf. occipital/temporal (left)        1650       5.74    -34    -48    -20
                                                   5.70    -42    -56    -20
                                                   5.41    -36    -68    -20
  Inf. occipital/temporal (right)        815       5.63     32    -44    -20
                                                   5.59     40    -72    -16
                                                   5.48     38    -82      0

(a) Number of contiguous voxels above threshold (P < 0.001).
(b) Locus and intensity of activation at each peak within the area of significant difference.
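The subtraction logic summarized in Table 2 (voxelwise comparison of a task condition with baseline, thresholding at P < 0.001, and reporting contiguous suprathreshold clusters with their extent k and peak statistic) can be sketched schematically as below. This is a simplified stand-in under stated assumptions, not a re-implementation of the SPM95 pipeline actually used.

```python
# Schematic sketch of a voxelwise subtraction contrast: paired difference across
# subjects, one-sided t threshold at p < 0.001 (uncorrected), and contiguous
# suprathreshold clusters labeled to obtain extent (k) and peak statistic.
import numpy as np
from scipy import stats, ndimage

def subtraction_contrast(task_imgs, baseline_imgs, p_thresh=0.001):
    """task_imgs, baseline_imgs: arrays of shape (n_subjects, x, y, z) of relCBF."""
    diff = np.asarray(task_imgs, float) - np.asarray(baseline_imgs, float)
    n = diff.shape[0]
    t_map = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n) + 1e-12)
    t_crit = stats.t.ppf(1.0 - p_thresh, df=n - 1)     # one-sided voxelwise threshold
    mask = t_map > t_crit
    labels, n_clusters = ndimage.label(mask)            # contiguous suprathreshold voxels
    clusters = []
    for c in range(1, n_clusters + 1):
        voxels = np.argwhere(labels == c)
        k = len(voxels)                                  # cluster extent
        peak = voxels[np.argmax(t_map[labels == c])]     # voxel indices of the peak t
        clusters.append({"k": k, "peak_voxel": tuple(peak),
                         "peak_t": float(t_map[tuple(peak)])})
    return sorted(clusters, key=lambda c: c["k"], reverse=True)
```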

Finally, the semantic match task evoked extensive and strong activation in the visual processing stream, extending further forward in the inferior temporal lobe than that seen in the other contrasts.


The figure match condition was equivalent to the semantic match task in that both required visual processing and both required a decision; the decisions, however, were based on different attributes. In the case of the figure match, the decision was based on the absolute identity of nonobjects; in the case of the semantic match, it was based on the semantic attributes of real objects. Thus, we would predict that these two tasks would both activate "early" visual processing systems, but that the semantic match would also activate regions involved in object recognition and semantics. In addition to the occipital cortex, the semantic match task activated (relative to the figure match) a large region of the inferior temporal lobe, including the fusiform gyrus (BA37) and the parahippocampal gyrus (BA36). Further, there was a region of significant activity in the frontal cortex on the left (BA44).

The critical comparison in this study, as in Vandenberghe's, was that between the size match and semantic match conditions. Both tasks involved viewing visually presented two-dimensional line drawings of real objects. The tasks differ in that one, semantic match, required that the subjects infer an association between the choice stimuli and the sample based on the semantic characteristics of both; the other, size match, required only that the subjects choose the stimulus that matched the sample in physical size (as presented on the video screen). There was a large region of activation in the ventral temporal/occipital lobe, centered on the parahippocampal gyrus, when the size match was subtracted from the semantic match (see Table 3). Figure 3 shows this region of activation overlaid onto the summed MR image of the study subjects; this region overlaps with that identified in the semantic-figure match contrast (see above).

Additional regions of importance were seen in the frontal cortex (see Fig. 4). One, centered in BA47, had significantly higher relCBF in the semantic match condition relative to the size match condition (x = -40, y = 28, z = 4). The increases in relCBF in this area were seen in both the semantic and group match tasks relative to the control conditions (mean relCBF: baseline, 77.1; figure, 75.5; size, 75.7; group, 78.3; semantic, 78.0). As noted above, there was a second region, centered on BA44, that was identified by the semantic-figure contrast (x = -36, y = 16, z = 28; see Fig. 4B). This region had its highest relCBF in the semantic match condition and its lowest in the figure, size, and group match conditions (mean relCBF: baseline, 86.0; figure, 84.0; size, 85.4; group, 85.3; semantic, 87.7).
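As an aside, regional mean relCBF values like those quoted above could in principle be extracted by averaging within a small region of interest around a reported stereotaxic coordinate. The sketch below assumes a voxel-to-millimeter affine of the kind carried by standard neuroimaging formats and an arbitrary 6-mm sphere; it is an illustration, not the procedure used in the paper.

```python
# Hedged sketch: average a relCBF volume within a small sphere around a stereotaxic
# coordinate (e.g., x = -40, y = 28, z = 4). The radius and affine handling are
# assumptions for illustration only.
import numpy as np

def mean_relcbf_at(volume, affine, xyz_mm, radius_mm=6.0):
    """volume: 3-D relCBF array; affine: 4x4 voxel-to-mm matrix; xyz_mm: (x, y, z)."""
    idx = np.indices(volume.shape).reshape(3, -1)         # voxel indices of every voxel
    homog = np.vstack([idx, np.ones(idx.shape[1])])
    mm = (affine @ homog)[:3]                              # voxel centres in mm
    dist = np.linalg.norm(mm - np.asarray(xyz_mm, float)[:, None], axis=0)
    in_sphere = dist <= radius_mm
    return float(volume.reshape(-1)[in_sphere].mean())
```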


FIG. 2. Results of the SPM95 analysis of data from the four task conditions relative to the baseline condition. In each of the four sets of figures, all of the significant voxels (P < 0.001) are presented in axial, coronal, and sagittal planes.

Finally, we completed an eigenimage analysis of the relCBF in the figure, size, group, and semantic match conditions. This analysis emphasizes the consistency of blood flow changes across task conditions and is less affected by the absolute value of those changes; it is analogous to a principal component analysis of the relCBF data (4). The first component accounts for 46.9% of the variance in the data; the figure match condition had a positive loading on this component, and the size and semantic match conditions had negative loadings. As can be seen in Fig. 5 (top), the negative loadings were associated with the ventral visual processing system bilaterally, but with greater anterior extension in the left hemisphere. There is also bilateral activation of the frontal cortex over the convexity. The second component accounted for 22.6% of the variance (Fig. 5, bottom); the size match condition had a high negative loading, and the semantic match had a high positive loading.

Here, the patterns of covariation are consistent with what we might have predicted from the activation t maps. That is, the semantic match condition involves a connected network of brain regions including the left inferior temporal lobe, the left inferior frontal cortex, and portions of the left temporal/parietal border regions. Of note is the fact that, unlike the first component, the right frontal cortex is not part of this functional network. In terms of the size match, there is a network involving the superior parietal cortex, greater on the right than on the left. There is also a contribution from the right frontal cortex, but less of a contribution from the left frontal areas.
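A minimal sketch of an eigenimage-style decomposition, analogous to the principal component analysis described above, is given below: mean-center the condition-by-voxel relCBF matrix, take its singular value decomposition, and report the variance explained by each component together with its spatial mode (voxel loadings) and condition loadings. This illustrates the idea only and is not the implementation used in the study.

```python
# Sketch of an eigenimage-style (SVD/PCA) decomposition of condition-by-voxel relCBF data.
import numpy as np

def eigenimages(relcbf, n_components=2):
    """relcbf: 2-D array of shape (n_conditions, n_voxels), e.g. mean relCBF per condition."""
    X = relcbf - relcbf.mean(axis=0, keepdims=True)    # remove the mean image
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = s**2 / np.sum(s**2)
    results = []
    for i in range(min(n_components, len(s))):
        results.append({
            "variance_explained": float(var_explained[i]),
            "condition_loadings": U[:, i] * s[i],      # how each task loads on the mode
            "spatial_mode": Vt[i],                     # voxel weights (the eigenimage)
        })
    return results
```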


TABLE 3
Regions of Significant Activation

                                                 Focus of peak activation
Contrast / Region                    Size (k)       Z       x      y      z

Semantic match - figure match
  Occipital gyrus (BA18) (right)        294        5.11     32    -84     12
                                                   4.75     36    -80      4
                                                   3.64     16    -90     20
  Occipital gyrus (left)                606        4.84    -26    -76    -16
                                                   4.67     -4    -78    -12
                                                   4.34     -4    -78    -24
  Inferior temporal cortex (left)       422        4.55    -24    -46    -16
  Parahippocampal gyrus (left)                     4.27    -26    -24    -20
  Frontal cortex (BA44)                  59        3.71    -36     16     28
                                                   3.67    -30     10     24
Semantic match - size match
  Parahippocampal gyrus (left)          164        4.83    -28    -22    -20
                                                   3.53    -22    -38    -16
  Medial temporal lobe (left)            36        4.05    -52     -8    -12
  Inferior frontal cortex (BA47)         46        3.84    -34     32     -4
  Sup. occipital cortex (left)           64        4.37    -28    -76     32
                                                   3.62    -26    -74     20
                                                   3.58    -22    -66     12
Semantic match - group match
  Med. occipital gyrus (left)           144        4.58    -18    -84     12
                                                   4.25    -26    -76     16
                                                   3.64    -26    -78     28
  Inf. temporal (BA37)                   26        3.67    -44    -58     -8
                                                   3.21    -42    -60    -20

DISCUSSION

The data in this study replicate and extend those of Vandenberghe and colleagues (18); as they reported, there was a consistent activation of the left inferior temporal/occipital cortex when subjects had to access meaning in order to solve the task (e.g., semantic match). Although we also saw a separate region of increased relCBF in the inferior frontal cortex (BA47), it did not appear to overlap with that reported previously (18). Further, the activity in the frontal cortex was much less extensive in our study and consisted of discrete regions rather than a broad area of functionally associated cortex.


The regions of overlap between the studies, however, did correspond to portions of the common semantic system described by Vandenberghe and colleagues. These data extend those previously published in that they emphasize a more specific region of the inferior temporal lobe. Relative to our studies of visual naming and word reading, the activation seen here is more anterior within the temporal lobe. The peak of activity in the naming study (20) was 42–50 mm posterior to the anterior commissure; the focus in the reading study (6) was only 34 mm posterior. In this study, however, the focus was 26 mm posterior. While some of the differences between this study and the two previous reports may be due to differences in acquisition and data analysis, the general trend in the data is for more "advanced" analysis to proceed rostrally along the temporal lobe axis. The semantic match condition demands more of the knowledge about objects and how the objects relate to one another; it cannot be performed correctly with reference only to the stimuli as presented. The subjects have to understand what fruit trees look like and that apples, not onions, grow on such trees (see Fig. 1) in order to respond correctly. This requires a higher-order knowledge base than does simple naming, or even reading.

The locus of the activation observed in the inferior temporal lobe in the group and semantic match conditions bears some important consideration. Unlike previous reports (see above), the activity reported here falls in the parahippocampal region of the inferior temporal lobe, not the fusiform gyrus. Although the fusiform gyrus was activated during all tasks that involved real objects (see Table 2), it was as part of the larger inferior temporal activation. The parahippocampal activity occurred only in the tasks with a significant semantic processing demand. This region, particularly because of its relatively anterior location along the collateral sulcus, appears to include the entorhinal and perirhinal cortices (see, for example, Insausti et al. (9)). If, in fact, we have identified functional activation within these brain regions, and they are homologous to the entorhinal and perirhinal cortex of the monkey (10), then our data speak directly to recent models of temporal lobe structure-function relationships. Specifically, Murray and colleagues (13–15, 17) have argued that this brain region has a role in "mediating the storage of knowledge about objects, i.e., semantic memory" ((14), p. 20). In their view (17), the entorhinal and perirhinal cortices play a critical role in "mediating the storage of different parts of individual objects . . . serv[ing] as the kernel of a system specialized for storing knowledge about objects, thereby mediating object identification" (p. 8548). The critical aspect of this model for the current data is that object information per se is not stored or necessarily represented in this region; rather, it is an information processing system specialized for object identification.


FIG. 3. The activation of the inferior temporal cortex seen in the semantic-size match contrast, overlaid onto a summed MRI of the study subjects. The images were taken at the peak activation for this contrast (x = -28; y = -22; z = -20).

Thus, assuming structural homology between these brain regions, we may have demonstrated functional homology as well.

These data are also relevant to the question of the role of the inferior frontal cortex in phonological and semantic processing.

FIG. 4. The activation of the frontal cortex overlaid on the summed study MR images (axial plane). (A) BA47 (-40, 28, 4); (B) BA44 (-36, 16, 28). See text for details.

The inferior frontal cortex, specifically BA47, appears to have an executive role in semantic processing, while BA44 (Broca's area) is responsible for phonological processing (3). In this study, relCBF in BA47 was higher in the two conditions, group and semantic match, that require access to semantic memory. The activation does not appear to be simply a consequence of automatic object naming, since one of the control conditions (size match) also involved real, nameable objects and yet had significantly lower relCBF.

The results of the eigenimage analysis support this interpretation. The first component appears, most parsimoniously, to be related to task difficulty. That is, the easiest of the behavioral tasks (as measured by reaction time) was the figure match; the most difficult were the size and semantic match tasks. The second component, however, seems to have captured one of the critical aspects of the study, namely the qualitatively different information processing demands of the size and semantic match tasks. Thus, this second component, we believe, reflects the semantic aspects of the tasks: highly relevant for one, irrelevant for the other.


FIG. 5. Results of the eigenimage analysis. In the top portion of the figure are the spatial modes from the first component; the results for the second component are shown in the bottom portion of the figure. The "look-through" images show the voxels with significant positive or negative loadings on the component.

This component does not include the group match condition, which may rely more heavily on the lexical aspects of the task (e.g., "these are both dogs") than does the semantic match task (where lexical knowledge was irrelevant). Thus, we would further conclude that the similarity in the t maps noted above was due to the overlap between the lexical and semantic demands of the tasks; it was only in the eigenimage analysis that we could see the important between-task differences.

By and large, the data presented here replicate and extend those previously described. The studies differ, however, in several technical ways that may be important. First, the visual stimuli in this study were presented for a much shorter time than in Vandenberghe's study. We used a relatively brief presentation to minimize the effects of visual scanning and eye movements on activation and to provide sufficient time to perform the difficult size match task. Although we succeeded in making the size match task as difficult as the semantic match task, as indexed by reaction times, accuracy suffered (in spite of pilot testing of the procedures). The difference in presentation duration might therefore explain why the earlier work had better (i.e., >70% correct) accuracy on the size match task.

The baseline condition used here was useful in activating BA17/18 and in reducing some of the posterior regional activity seen in our earlier studies using visual fixation. However, unlike other visual noise patterns (12), these stimuli were created by pasting together parts of drawings of real objects. As such, it is possible that subjects might have attempted to "create" a real image out of the noise. This may have attenuated some of our effects, in that the baseline condition might thus include some regional activity associated with semantic processing while "reconstructing" objects from the patterns. The fact that the relCBF in some of the "semantic" brain regions was slightly elevated by this condition would be consistent with this hypothesis. Nevertheless, there were reliable increases in relCBF associated with performance of the semantic tasks above and beyond that seen in the baseline condition. Further, comparisons between the semantic and figure match conditions were fully consistent with these findings.

Finally, these data add to the growing body of evidence that functional neuroimaging is a reliable method of studying brain function in vivo.


In spite of several differences in study design, we have replicated and extended the main findings from the study of Vandenberghe and colleagues (18). Further, other studies from our group examining episodic (1) and semantic (6, 20) memory have also shown reliable between-center replication. Thus, this field of enquiry appears to be maturing to the point that differences observed between studies may now be more reliably attributed to significant differences in study design and analysis, rather than simply to minor differences in technology or data acquisition methods.

ACKNOWLEDGMENTS

This research was supported in part by funds from the National Institute on Aging to J.T.B. (AG13669) and by the Center for Functional Brain Imaging (MH49815) at the University of Pittsburgh Medical Center. J.T.B. is the recipient of a Research Scientist Development Award, Level II (K02-MH01077), and C.C.M. is a Physician Scholar (K07-MH01410). B. J. Zelkowicz was an undergraduate summer fellow while this study was completed. M. A. Mintun is now at the Mallinckrodt Institute of Radiology, Washington University School of Medicine (St. Louis, MO). The authors are grateful to Dr. E. A. Murray for thoughtful discussions of the data.

REFERENCES

1. Becker, J. T., Mintun, M. A., Diehl, D. J., Dobkin, J., Martidis, A., Madoff, D. C., and DeKosky, S. T. 1994. Functional neuroanatomy of verbal free recall: A replication study. Human Brain Map. 1:284–292.
2. Bookheimer, S. Y., Zeffiro, T. A., Blaxton, T., Gaillard, W., and Theodore, W. 1995. Regional cerebral blood flow during object naming and word reading. Human Brain Map. 3:93–106.
3. Fiez, J. A. 1997. Phonology, semantics, and the role of the left inferior prefrontal cortex. Human Brain Map. 5:79–83.
4. Friston, K. J. 1997. Characterising distributed functional systems. In Human Brain Function (R. S. J. Frackowiak, K. J. Friston, C. D. Frith, R. J. Dolan, and J. C. Mazziotta, Eds.), pp. 107–126. Academic Press, San Diego.
5. Friston, K. J., Frith, C. D., Liddle, P. F., and Frackowiak, R. S. J. 1991. Comparing functional (PET) images: The assessment of significant change. J. Cereb. Blood Flow Metab. 10:458–466.
6. Herbster, A. N., Mintun, M. A., Nebes, R. D., and Becker, J. T. 1997. Regional cerebral blood flow during word and nonword reading. Human Brain Map. 5:84–92.
7. Herbster, A. N., Nichols, T., Wiseman, M. B., Mintun, M. A., DeKosky, S. T., and Becker, J. T. 1996. Functional connectivity in auditory verbal short-term memory in Alzheimer's disease. NeuroImage 4:67–77.
8. Howard, D., and Patterson, K. 1992. The Pyramids and Palm Trees Test: A Test of Semantic Access from Words and Pictures. Thames Valley Test Company, Bury St. Edmunds.
9. Insausti, R. 1993. Comparative anatomy of the entorhinal cortex and hippocampus in mammals. Hippocampus 3:19–23.
10. Insausti, R., Tunon, T., Sobreviela, T., Insausti, A. M., and Gonzalo, L. M. 1995. The human entorhinal cortex: A cytoarchitectonic analysis. J. Comp. Neurol. 355:171–198.
11. Kroll, J. F., and Potter, M. C. 1984. Recognizing words, pictures, and concepts: A comparison of lexical, object, and reality decisions. J. Verbal Learn. Verbal Behav. 23:39–66.
12. Martin, A., Wiggs, C. L., Ungerleider, L. G., and Haxby, J. V. 1996. Neural correlates of category-specific knowledge. Nature 379:649–652.
13. Meunier, M., Bachevalier, J., Mishkin, M., and Murray, E. A. 1993. Effects on visual recognition of combined and separate ablations of the entorhinal and perirhinal cortex in rhesus monkeys. J. Neurosci. 13:5418–5432.
14. Murray, E. A. 1996. What have ablation studies told us about the neural substrates of stimulus memory? Semin. Neurosci. 8:13–22.
15. Murray, E. A., Gaffan, D., and Mishkin, M. 1993. Neural substrates of visual stimulus-stimulus association in rhesus monkeys. J. Neurosci. 13:4549–4561.
16. Talairach, J., and Tournoux, P. 1988. Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. Thieme Medical Publishers, New York.
17. Thornton, J. A., Rothblat, L. A., and Murray, E. A. 1997. Rhinal cortex removal produces amnesia for preoperatively learned discrimination problems but fails to disrupt postoperative acquisition and retention in rhesus monkeys. J. Neurosci. 17:8536–8549.
18. Vandenberghe, R., Price, C., Wise, R., Josephs, O., and Frackowiak, R. S. J. 1996. Functional anatomy of a common semantic system for words and pictures. Nature 383:254–256.
19. Vitouch, O., and Gluck, J. 1997. Small group PETting: Sample sizes in brain mapping research. Human Brain Map. 5:74–77.
20. Zelkowicz, B. J., Herbster, A. N., Nebes, R. D., Mintun, M. A., and Becker, J. T. 1997. An examination of regional cerebral blood flow during object naming tasks. J. Int. Neuropsychol. Soc. 4:160–166.