Neuropsychologia 40 (2002) 86–98
www.elsevier.com/locate/neuropsychologia

The effect of encoding strategy on the neural correlates of memory for faces

Lori J. Bernstein, Sania Beig, Amy L. Siegenthaler, Cheryl L. Grady *

Rotman Research Institute, Baycrest Centre for Geriatric Care, University of Toronto, 3560 Bathurst Street, Toronto, Ont., Canada M6A 2E1

Received 6 March 2000; received in revised form 4 December 2000; accepted 23 March 2001

* Corresponding author. Tel.: +1-416-7852500, ext. 3525; fax: +1-416-7852862. E-mail address: [email protected] (C.L. Grady).

Abstract

Encoding and recognition of unfamiliar faces in young adults were examined using positron emission tomography to determine whether different encoding strategies would lead to encoding/retrieval differences in brain activity. Three types of encoding were compared: a 'deep' task (judging pleasantness/unpleasantness), a 'shallow' task (judging right/left orientation), and an intentional learning task in which subjects were instructed to learn the faces for a subsequent memory test but were not provided with a specific strategy. Memory for all faces was tested with an old/new recognition test. A modest behavioral effect was obtained, with deeply-encoded faces being recognized more accurately than shallowly-encoded or intentionally-learned faces. Regardless of encoding strategy, encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. Within encoding, the type of strategy produced different brain activity patterns, with deep encoding being characterized by left amygdala and left anterior cingulate activation. There was no effect of encoding strategy on brain activity during the recognition conditions. Posterior fusiform gyrus activation was related to better recognition accuracy in those conditions encouraging perceptual strategies, whereas activity in left frontal and temporal areas correlated with better performance during the 'deep' condition. Results highlight three important aspects of face memory: (1) the effect of encoding strategy was seen only at encoding and not at recognition; (2) left inferior prefrontal cortex was engaged during encoding of faces regardless of strategy; and (3) differential activity in fusiform gyrus was found, suggesting that activity in this area is not only a result of automatic face processing but is modulated by controlled processes. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Memory; Neuroimaging; Human; Cognition; Prefrontal; Fusiform

1. Introduction

One aspect contributing to episodic memory performance (i.e. conscious recollection of particular episodes or events that occurred in a person's experience; see Ref. [60]) is the degree to which the new material has been incorporated into existing knowledge. Craik and colleagues [12,14] have shown that individuals are better at remembering an item when the item has been processed in a way that promotes abstract, semantic, or associative organization (deep encoding) than when it has been processed superficially (shallow encoding). It is believed that memory retention depends on the extent to which the learner has developed a system to analyze and enrich new information. Deeper levels of processing are concerned with pattern recognition and extraction of meaning, and the deeper the level, the greater the degree of semantic or abstract analysis, which yields more robust memory traces. The levels of processing effect is robust, being observed for words (e.g. Ref. [12]) and pictures of objects (e.g. Refs. [13,24], but see Ref. [30]) as well as for faces (e.g. Ref. [7]). Although the levels of processing effect is much stronger and more consistent for words than for less verbal material (pictures and faces), it has been found for all these types of stimuli.


Cognitive psychological theories, lesion studies, and imaging experiments support the notion that encoding and retrieval are dissociable forms of memory, each of which can be differentially affected by a variety of factors, such as brain damage or disease (e.g. Refs. [4,17,53], for review). In addition, imaging studies have shown that the neural correlates of encoding and recognition can be differentially affected by encoding instructions such as those promoting different levels of processing. For instance, evidence from positron emission tomography (PET) reveals different patterns of regional cerebral blood flow (rCBF) during incidental shallow and deep encoding of words and pictures in contrast with intentional learning (i.e. when participants are explicitly told to try to learn the items) [24,33,52]. In one of these studies [24], deep processing engaged left anterior medial prefrontal cortical areas, left medial temporal regions, and bilateral posterior extrastriate cortex. Intentional learning was associated with left ventrolateral prefrontal cortex, left premotor cortex, and bilateral ventral extrastriate cortex. Nyberg and colleagues [46] found more left medial temporal activity during retrieval when the items had been studied under incidental deep task instructions. Although none of these studies compared brain activity during both encoding and retrieval as a function of encoding manipulation, the evidence suggests that encoding and retrieval processes are subserved by different systems which can both be modulated by encoding strategy.

The present study was designed to examine whether the manner in which unfamiliar faces are studied influences the neural networks involved in encoding and recognition. Although the neural correlates of face perception and memory have been investigated quite extensively (e.g. Refs. [10,11,21–23,26,27,29,31,32,35,39,56]), no study has yet investigated the neural correlates of different types of encoding strategies using faces as stimuli. We contrasted several strategies which are known to influence memory: two incidental ones (deep and shallow) and an intentional one. It is possible that different encoding strategies may engage the same memory system, with participating regions being more active during one condition in comparison with another. Alternatively, different strategies of encoding information may engage different memory systems that reflect individual strategies in encoding the item, and these systems could be anatomically dissociable. Although the reviewed findings are consistent with the hypothesis that different encoding strategies may engage different neural circuits when readily nameable objects or words are used as stimuli, it is not known whether such effects on neural activity can be generalized to other types of stimuli, such as faces.

In examining the effects of encoding strategy on brain activity we have focused on the roles of two specific regions of cortex. One region is left prefrontal cortex, which is reported to be active during encoding of both objects and words [24,35] and sometimes during face encoding (e.g. Ref. [27]), but not always [35,41]. It was hoped that the present study would confirm a role for left prefrontal cortex in encoding of unfamiliar faces. If the left prefrontal cortex is indeed a general encoding area, then it should be active during all encoding conditions, regardless of strategy. The second area of interest is the fusiform gyrus. Evidence from neuropsychological (for review see Ref. [40]) and neuroimaging (e.g. Refs. [25,26,31,49,56]) studies suggests that the fusiform is specialized for face and/or object perception. However, it remains unclear whether activation of this area reflects automatic processing of faces, being active regardless of what type of face task is required of the participant, or whether it can be affected by the type or extent of processing in which the person is engaged (e.g. Refs. [36,65]). If the fusiform is sensitive to controlled processes then we should observe differences in rCBF during different levels of encoding or recognition, or perhaps in the relation of activation in this area to performance.

2. Materials and methods

2.1. Participants

Twelve right-handed individuals (mean age 23.4 years; range 21–28; six male, six female) participated for pay. They provided informed consent, and the experiment was conducted according to University of Toronto and Baycrest Centre for Geriatric Care guidelines. None of these participants had any sign of brain abnormality based on structural MRIs. In addition to these 12 individuals, ten age- and education-matched participants were given the face encoding and recognition tasks (described below), but were not scanned during any of the conditions. Through a screening questionnaire, it was determined that all twenty-two participants had normal or corrected-to-normal vision and none had a history of neurological or psychiatric disease or brain injury. The average education of participants was 17 years. Every participant performed within normal limits on several neuropsychological tests: the Mill Hill Vocabulary Test [50], Beck Depression Inventory [5], Mini-Mental State Examination [19], and F-A-S verbal fluency [6].

2.2. Procedure

Eighty-eight faces were obtained from a high school yearbook (also used in Ref. [26]). All were black-and-white (grayscale) photographs, were cropped to remove hair and clothing, and portrayed mostly Caucasians. The faces were presented against a black background. Each face subtended approximately 4° of visual angle with participants approximately 60 cm from the computer monitor.
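
For concreteness, the stimulus geometry above implies an on-screen face size of roughly 4 cm. The short calculation below is illustrative only; the exact display dimensions are not reported in the paper.

```python
# Worked check of the stimulus geometry described above: at a 60 cm viewing
# distance, a face subtending about 4 degrees of visual angle corresponds to
# roughly a 4 cm image. Illustrative arithmetic only.
import math

viewing_distance_cm = 60.0
visual_angle_deg = 4.0

image_size_cm = 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg / 2))
print(f"{image_size_cm:.1f} cm")   # ~4.2 cm on screen
```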


All stimuli were presented on a monitor attached to a PC running Superlab software (Cedrus, Phoenix, AZ). During the encoding conditions, 24 faces (half women, half men) were presented one at a time for two seconds with an inter-stimulus interval (ISI) of 2 s. Between trials, a white fixation cross against a black background was presented and participants were instructed to look at it at all times. Responses for all tasks were button presses made with the right index finger for the left mouse button or right middle finger for the right mouse button. There were three encoding conditions. The faces were divided into three lists that were assigned to each of the three encoding conditions in a counterbalanced way. In one condition, participants were asked to judge whether each face was pleasant or unpleasant (‘deep’ level of processing), pressing one of two mouse buttons to indicate their decision. This strategy was chosen because it has been shown to be effective in improving recognition performance (e.g. Refs. [7,58]) even though it may also be considered a decision which requires an emotional judgment. Although there was no right or wrong answer for the pleasantness of each face, a pilot experiment showed that approximately half of the faces in each condition were judged as pleasant and half as unpleasant. In a second condition, participants were asked to report whether the face was oriented primarily to the right or to the left (‘shallow’ level of processing), indicating their decision by pressing one of two mouse buttons. Half of the faces were oriented to the right and the other half to the left. This strategy was intended to require a judgment utilizing perceptual processing, analogous to the usual verbal shallow tasks such as deciding whether there is a letter ‘a’ in a word. In the deep and shallow conditions, participants were not told they would be tested later for their memory of these faces; thus, these two conditions are considered incidental encoding tasks. During the third encoding condition, participants were told to study the faces because memory for them would be tested later but no specific strategy was provided (‘intentional learning’). Participants were instructed to press the left button for each face during the encoding condition. Questioning after the experiment revealed that most participants could not generally articulate the strategies they used during memorization, or they claimed not to use any one strategy in particular. When a strategy was specified, it was usually looking for distinctive features of the face or thinking of a person of whom the unfamiliar face reminded them. Each participant performed all three encoding conditions; the order of these conditions was counterbalanced across participants. There were at least two practice trials given for each type of encoding condition to allow participants to become accustomed to the timing and to make sure they understood the instructions. If a participant needed additional practice
items, the same two-item practice trials were shown again. The faces shown during the practice trials were the same in all three encoding conditions and were not a part of the experimental set of faces.

After all three encoding conditions were completed, a recognition test was given, divided into three separate lists. In each recognition list, participants saw 32 faces one at a time, half of which had been presented at encoding ('old' faces) and half of which had not been seen previously ('new' faces). Participants judged whether the face was one they had seen before in any of the conditions. The recognition test was divided into three separate scans so that brain activity during recognition of faces encoded under different instructions could be compared. The recognition lists were presented in the same order in which the encoding tasks had been performed. That is, if a participant performed the shallow encoding task first, then the first recognition list would contain faces from the shallow encoding task. As in the encoding conditions, the stimuli were presented for 2 s with an ISI of 2 s. Participants had as long as 4 s to make a response, and they were told to make their best guess if they were not sure. No emphasis was placed on speed of response.

Recognition accuracy was calculated by computing the proportion of hits (number of 'old' responses to old faces divided by the total number of old faces, in this case 16) minus the proportion of false alarms (number of 'old' responses to new faces divided by the total number of new faces, or 16). For mean reaction times (RT), all correct trials (responding 'old' to old faces and 'new' to new faces) were included. The ten age-matched participants, who were not scanned, and the twelve participants who were scanned did not differ significantly on any of the behavioral measures (no group by condition interaction, F < 1 in all cases). To increase power, therefore, the data from all 22 participants were combined and analyzed using a repeated measures ANOVA with encoding strategy as the repeated measure. Both sets of data (PET participants alone and all participants together) are reported in the results section.
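
The accuracy score described above (proportion of hits minus proportion of false alarms) can be made concrete with a small sketch. The function and its list-of-booleans input format are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the recognition score described above: proportion of
# hits minus proportion of false alarms. Input format (lists of booleans
# marking 'old' responses) is assumed for illustration only.

def recognition_accuracy(old_said_old: list[bool], new_said_old: list[bool]) -> float:
    """Hit rate minus false-alarm rate; 0 = chance, 1 = perfect."""
    hit_rate = sum(old_said_old) / len(old_said_old)          # e.g. x / 16 old faces
    false_alarm_rate = sum(new_said_old) / len(new_said_old)  # e.g. x / 16 new faces
    return hit_rate - false_alarm_rate

# Example: 10 hits and 4 false alarms out of 16 old and 16 new faces
print(recognition_accuracy([True] * 10 + [False] * 6, [True] * 4 + [False] * 12))
# -> 0.375
```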

2.3. PET

Changes in rCBF were measured using PET during each of the six conditions (three encoding, three recognition). Before the first encoding scan and after the last recognition scan, baseline conditions were administered in which participants were instructed to press the left mouse button to 24 distorted and skewed faces presented with the same timing as the experimental tasks. Although these distorted faces were not recognized as faces by a separate group of pilot observers, some of the PET participants reported in post-scan questioning that they realized during the second baseline scan (i.e. the second time they saw them, after the encoding and recognition scans) that a few of the control stimuli were distorted faces.


PET scans, with injections of 40 mCi of [15O]water each and separated by 11 min, were performed on a GEMS PC2048-15B tomograph, which has a reconstructed resolution of 6.5 mm in both transverse and axial planes. This tomograph allows 15 planes, separated by 6.5 mm (center to center), to be acquired simultaneously. Emission data were corrected for attenuation by means of a transmission scan obtained at the same levels as the emission scans. A thermoplastic mask that was molded to each participant's head and attached to the scanner bed helped to minimize head movement during the scans. Each task started 15 s before isotope injection into a forearm vein through an indwelling catheter and continued throughout the 1-min scanning period. Eight scans were obtained for each participant during the experiment: one for each encoding and recognition condition, and the two baseline scans. Integrated regional counts were used as an index of rCBF.

For each participant, all images were first registered to the initial scan to correct for head motion during the experiment using AIR [66]. Each image was then spatially transformed to an rCBF template conforming to Talairach and Tournoux [59] stereotaxic space using SPM95 (Wellcome Department of Cognitive Neurology, London) implemented in Matlab (MathWorks, Sherborn, MA). After transformation, each image was smoothed with a 10-mm isotropic Gaussian filter.

Ratios of rCBF to global cerebral blood flow (CBF) within each scan for each participant were computed and analyzed using a multivariate analysis, partial least squares or PLS (for a more complete description see Ref. [43]), to identify spatially distributed patterns of brain activity related to the different task conditions. PLS operates on the covariance between brain voxels and the experimental design to identify a new set of variables (so-called latent variables or LVs) that optimally relate the two sets of measurements. Brain activity patterns are displayed (e.g. Fig. 1) as singular images which show the brain areas that covary with the contrast or contrasts that contribute to each LV. Each brain voxel has a weight, known as a salience, that is proportional to these covariances; multiplying the rCBF value in each brain voxel for each participant by the salience for that voxel, and summing across all voxels, gives a score for each participant for each condition on a given LV. Comparison of these scores across conditions indicates the effect of task on brain activity. The significance of each LV as a whole was assessed by using a permutation test [15,43]. Because saliences are derived in a single analytic step, no correction for multiple comparisons of the sort done for univariate image analyses is required.
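
As a rough illustration of the decomposition described above (not the authors' software), the sketch below forms condition-mean images, removes the grand condition mean, and applies an SVD so that each latent variable yields voxel saliences and per-condition brain scores. The array shapes and random data are assumptions for demonstration only.

```python
# Illustrative sketch of the core of a task PLS analysis: the covariance
# between experimental conditions and voxel activity is decomposed with an
# SVD into latent variables (LVs). Toy random data stand in for rCBF ratios.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_conditions, n_voxels = 12, 6, 5000
data = rng.normal(size=(n_subjects, n_conditions, n_voxels))  # rCBF/global CBF ratios

# Condition means across subjects, centered over conditions
cond_means = data.mean(axis=0)                   # (conditions, voxels)
centered = cond_means - cond_means.mean(axis=0)  # remove grand condition mean

# SVD: rows of vt hold voxel saliences, columns of u hold condition weights per LV
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# Brain score: project each subject/condition image onto LV1's voxel saliences
brain_scores = data @ vt[0]                      # (subjects, conditions)
print("LV1 accounts for %.1f%% of the crossblock covariance" % (100 * s[0]**2 / np.sum(s**2)))
print("Mean brain scores by condition:", brain_scores.mean(axis=0).round(2))
```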


In addition to the permutation test, a second and independent step in the PLS analysis is to determine the reliability of the saliences for the brain voxels characterizing each pattern identified by the LVs. To do this, all saliences were submitted to a bootstrap estimation of their standard errors [16,54]. Peak voxels with a salience to standard error ratio greater than or equal to 2.0 (corresponding to a 95% confidence interval [54]) were considered reliable. Local maxima for the brain areas with reliable saliences on each LV were defined as the voxel with a salience/SE ratio higher than any other voxel in a 2-cm cube centered on that voxel. These maxima are reported in terms of brain region, or gyrus, and Brodmann area (BA) as defined in the Talairach and Tournoux atlas [59].
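
The two reliability checks described above can be sketched as follows. This is a simplified stand-in (toy data, 500 permutations, 200 bootstrap samples) rather than the authors' implementation; the small helper mirrors the toy PLS sketch given earlier.

```python
# Sketch of (1) a permutation test on the first LV's singular value (condition
# labels shuffled within subject) and (2) bootstrap resampling of subjects to
# form salience / standard-error ratios, thresholded at 2.0. Toy data only.
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_cond, n_vox = 12, 6, 2000
data = rng.normal(size=(n_sub, n_cond, n_vox))

def first_lv(d):
    """First singular value and voxel saliences of centered condition means."""
    m = d.mean(axis=0)
    m = m - m.mean(axis=0)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return s[0], vt[0]

s_obs, sal_obs = first_lv(data)

# Permutation test: shuffle condition labels independently within each subject
n_perm, count = 500, 0
for _ in range(n_perm):
    perm = np.stack([sub[rng.permutation(n_cond)] for sub in data])
    if first_lv(perm)[0] >= s_obs:
        count += 1
print("permutation p =", (count + 1) / (n_perm + 1))

# Bootstrap: resample subjects with replacement, estimate salience SEs
n_boot = 200
boot = np.stack([first_lv(data[rng.integers(0, n_sub, n_sub)])[1] for _ in range(n_boot)])
boot *= np.sign(boot @ sal_obs)[:, None]   # align LV sign across resamples
ratio = sal_obs / boot.std(axis=0)
print("voxels with |salience/SE| >= 2.0:", int(np.sum(np.abs(ratio) >= 2.0)))
```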

Fig. 1. The brain areas that show differential activity related to face encoding and recognition are shown. The image at the top of the figure presents activations imposed on standard MRI slices ranging from z = − 28 (top left) to z = 40 (bottom right) [59], separated by 4-mm intervals. Positive saliences are indicated in white and represent areas where there was greater activity during encoding than during recognition. Black areas represent areas where there was greater activity during recognition than during encoding. Right is right on the image. See Table 2 for local maxima of these regions. The graph at the bottom of the figure shows the mean brain scores on this LV. Positive mean brain scores were found in the encoding conditions, where activity was increased in the brain regions shown in white (i.e. those with positive salience on the LV). Negative mean brain scores were found in the recognition conditions, where activity was increased in the brain areas shown in black (i.e. those with negative salience on the LV).


The PET data were submitted to two types of PLS analysis. The first, Task PLS, reveals the patterns of covariation that occur across the task conditions as described above. Three task analyses were performed: one compared the three encoding and the three recognition conditions, a second contrasted the encoding conditions and the average of the control conditions, and a third compared the three recognition tasks and the average of the control conditions. The second type of PLS analysis, Behavioral PLS, reveals patterns of covariation between brain activity and the behavioral data (i.e. accuracy) (e.g. Refs. [42,55]). Task PLS reveals which areas are active while performing a particular task, and can tell us that conditions 1 and 2 have areas in common which are in contrast to conditions 3 and 4. However, this analysis reveals nothing about how those areas are related to performance, only that those areas are active in those conditions. Behavioral PLS supplements the Task PLS because it can tell us not only that an area is used in a particular condition but that the area is correlated with good (or poor) recognition performance. It is quite possible for there to be some areas that are active during a particular task but which are not related to individual differences in performance on that task. We carried out this behavioral analysis using the accuracy measures (hits-false alarms) of the 12 participants and all six of the memory tasks (encoding and recognition) to examine similarities and/or differences in brain/behavior correlations across conditions.
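
A minimal sketch of the behavioral PLS idea described above: within each condition, every voxel's activity is correlated with recognition accuracy across participants, the condition-wise correlation maps are stacked, and an SVD extracts the latent variables. The data here are random placeholders, not the study's measurements.

```python
# Sketch of a behavioral PLS: brain-behavior correlations per condition,
# stacked over conditions and decomposed with an SVD. Toy data only.
import numpy as np

rng = np.random.default_rng(2)
n_sub, n_cond, n_vox = 12, 6, 2000
activity = rng.normal(size=(n_sub, n_cond, n_vox))
accuracy = rng.normal(size=(n_sub, n_cond))   # hits minus false alarms per condition

def zscore(x, axis=0):
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

# Correlation of behavior with every voxel, computed separately per condition
corr = np.einsum('sc,scv->cv', zscore(accuracy), zscore(activity)) / n_sub

u, s, vt = np.linalg.svd(corr, full_matrices=False)  # LVs of the brain-behavior correlations
lv1_voxel_saliences = vt[0]                          # pattern whose expression tracks accuracy
lv1_condition_weights = u[:, 0]                      # how strongly each condition expresses it
print(lv1_condition_weights.round(2))
```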

3. Results

3.1. Behavioral data

During the encoding tasks, participants were significantly faster in making judgments about the orientation of the face than about the pleasantness of the face (831 vs. 1025 ms, F(1,11) = 7.36; P < 0.02). Although a speeded response was not emphasized, the RT difference suggests that participants were not engaged to the same extent in the two conditions, and indicates that the pleasantness decision required more effort or other additional processing than judging face orientation.

The recognition task was relatively difficult (see Table 1). There was a significant effect of encoding strategy on recognition accuracy (F(2,40) = 4.65, P < 0.05). A Helmert contrast revealed a significant difference between recognition of deeply encoded faces and recognition of faces from the shallow encoding and intentional learning conditions (F(1,21) = 8.95, P < 0.01), but no significant difference between the learn and shallow encoding conditions (F < 1). Reaction times in the recognition conditions did not differ significantly from each other.

Table 1
Recognition task performance

Encoding strategy    Accuracy (a)     RT (ms)
Shallow              0.23 (0.24)      1183 (1196)
Learn                0.29 (0.28)      1190 (1209)
Deep                 0.40 (0.39)      1179 (1209)

Numbers in parentheses are means for the PET participants alone.
(a) The accuracy measure is proportion hits minus proportion false alarms, such that 0 is chance performance and 1 is perfect performance.

3.2. PET data

3.2.1. Task effects

The task analysis contrasting encoding and recognition revealed significant patterns of activity (Fig. 1), which showed a main effect of encoding versus recognition (P < 0.001), identifying brain regions that distinguished the three encoding conditions from the three recognition conditions regardless of the type of encoding instructions. Areas of increased rCBF during encoding included left ventral and medial frontal areas, bilateral inferior and middle temporal areas, left amygdala, bilateral fusiform, and right inferior parietal and angular gyri. Areas of increased activity during recognition included right middle and superior frontal areas, a left inferior frontal area (superior to the one active during encoding), left posterior cingulate gyrus, right superior and medial parietal (precuneus) cortex, and bilateral lingual gyri. Maximal coordinates of these regions are shown in Table 2. Because these areas are more (or less) active during encoding or recognition, regardless of encoding strategy and resulting overall level of memory performance, they represent areas that appear necessary for face memory in general.

Because there was a main effect of encoding versus recognition, two additional task PLS analyses were performed which compared encoding and recognition separately with the average of the two baseline scans (see Fig. 2A and B). When encoding and the average baseline were included in the analysis, one significant pattern of activation was identified that differentiated deep encoding from baseline (P < 0.01). Deep encoding was associated with increased activity in the left amygdala, anterior cingulate gyrus, and bilateral inferior frontal cortex, whereas the baseline conditions had relatively more activity in bilateral inferior temporal and lateral parietal areas (Fig. 2A). Coordinates are listed in Table 3. The shallow and learn conditions contributed little to this pattern of activity, as indicated by the mean brain scores for these conditions (which were near zero, Fig. 2A). Thus, the areas identified by this LV that showed increased activity during the pleasantness judgement (e.g. left amygdala, anterior cingulate gyrus and bilateral inferior frontal cortices) and those with increased activity during the control task (e.g. temporal and occipital regions) showed essentially equivalent activity during the shallow encoding and intentional learning conditions.


In addition, three of the regions with greater activity during deep encoding were similar in location to areas identified in the previous analysis as having greater activity during encoding in general. These three regions were left inferior prefrontal cortex, left amygdala, and right thalamus (compare Tables 2 and 3). In contrast, when the analysis included all recognition conditions and the average baseline, all recognition conditions showed the same pattern compared to baseline (P < 0.01). This pattern was characterized by bilateral frontal activation during recognition and greater occipital and occipito-temporal area activation during the baseline tasks (Fig. 2B). Coordinates are listed in Table 4.

Table 2
Local maxima of areas with activity differentiating encoding and recognition of faces

Region, gyrus                Hem   BA      X     Y     Z
Encoding > recognition
Prefrontal
  Medial/orbital             L     10      −2    52    −4
  Medial/inferior/orbital    R     11/47   18    28   −16
  Inferior frontal           L     46     −48    42    12
  Middle/superior            L     10/9    −4    54    20
  Superior                   L     8      −18    38    44
Temporal
  Inferior                   L     20     −48   −14   −24
  Inferior                   L     37     −58   −50    −8
  Middle                     R     21      58   −52     8
  Middle                     R     21      48    −8   −16
Occipital
  Fusiform                   L     37     −48   −36   −20
  Fusiform                   R     37      14   −42   −12
  Fusiform                   R     37      32   −58   −20
Parietal
  Inferior lateral           L     40     −62   −46    24
  Inferior lateral           R     40      54   −38    28
  Precuneus                  R     7       12   −56    32
Amygdala                     L            −24    −8    −8
Cerebellum                   L             −2   −60   −16
Thalamus                     R             14   −16     4

Recognition > encoding
Prefrontal
  Inferior                   L     45     −32    28    20
  Middle                     R     10      42    52    16
  Superior                   R     8       10    26    36
  Precentral/inferior        R     44/6    36     2    20
Parietal
  Superior lateral           R     7/19    30   −70    36
  Precuneus                  R     7        6   −72    36
  Postcentral                L     40     −56   −18    16
Occipital
  Lingual                    L     18     −10   −86     0
  Lingual                    R     18      12   −96   −12
  Parieto-occipital sulcus   L     19     −20   −76    24
Cingulate gyrus              L     31/23  −14   −48    24

In summary, the particular strategy of face encoding had an effect on brain activity during those tasks, particularly during deep encoding. For recognition, on the other hand, the manner in which faces were encoded did not appear to differentially distinguish patterns of brain activity; instead, a general recognition pattern was identified in comparison to viewing non-face stimuli.

3.2.2. Brain–behavior correlations

While the task PLS analyses reveal different brain patterns associated with some conditions compared with others, they cannot reveal how the brain activation is related to behavioral performance. The behavioral PLS, however, can examine such correlations. In doing so, one significant pattern was found, shown in Fig. 3, with maximum coordinates listed in Table 5. This pattern distinguished the deep memory conditions from both the shallow and learn conditions (P < 0.001). Better recognition performance for deeply-encoded faces was associated with increased activity in left inferior temporal, left inferior and medial frontal, and bilateral parietal regions during both encoding and recognition (Table 5). Conversely, if these same areas were active during the other two encoding and recognition conditions, recognition performance was poorer. In contrast, good recognition performance for shallowly-encoded and intentionally-learned faces depended upon activation in the left cerebellum, bilateral posterior fusiform, bilateral hippocampi and amygdalae, and right inferior temporal areas during both encoding and recognition (Table 5). If these latter areas were active during deep encoding and deep recognition, recognition for the deeply-encoded faces was poorer.

Finally, we asked whether there were any areas that showed differential activity during encoding versus recognition and that were also correlated with behavior. To do this, the overlap between the reliable regions from the task analysis and the behavioral analysis was determined by multiplying the saliences from the two LVs (Fig. 4) [47]. Several areas with increased activity during encoding were found to be related to performance, but during different conditions. The left amygdala and bilateral fusiform gyri were active across encoding tasks but were correlated with better recognition performance after shallow encoding and intentional learning. Several regions in the temporal lobe and a region in medial dorsal frontal cortex also were active in all encoding tasks but were correlated with better recognition for deeply encoded faces. There were also some areas with increased activity during all the recognition tasks that were related differentially to performance. Activity in middle temporal regions, right inferior frontal and left middle frontal areas was correlated with better performance in the deep recognition condition (compare Fig. 1, Fig. 3A and Fig. 4).
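
The overlap step described above can be sketched as a voxel-wise product of the two salience maps. Treating the maps as bootstrap ratios is an assumption on our part; under that reading, the threshold of 4 used in Fig. 4 corresponds to the 2.0 reliability cutoff applied to each map separately.

```python
# Illustrative sketch of the overlap computation: multiply the task LV map by
# the behavioral LV map voxel-wise and threshold the product. Toy arrays only,
# not the authors' maps; assumes the maps are bootstrap (salience/SE) ratios.
import numpy as np

rng = np.random.default_rng(4)
task_salience = rng.normal(size=20000)    # stand-in for task LV bootstrap ratios
behav_salience = rng.normal(size=20000)   # stand-in for behavioral LV bootstrap ratios

overlap = task_salience * behav_salience
same_sign = overlap >= 4.0                # both maps weight the voxel in the same direction
opposite_sign = overlap <= -4.0           # strong but oppositely signed weights
print(int(same_sign.sum()), "same-sign voxels;", int(opposite_sign.sum()), "opposite-sign voxels")
```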


Fig. 2. (A) The brain areas where deep encoding was distinguished from the average baseline conditions are shown. White areas are those where there was greater activity during the deep encoding condition and black regions are those with more activity during the baseline compared to deep encoding. See Table 3 for local maxima of these regions. The graph to the right of the figure shows the mean brain scores on this LV. The mean scores for the shallow encoding and intentional learning conditions are near zero, indicating that these conditions contribute little to this pattern of activity. (B) The brain areas where recognition was distinguished from the average baseline conditions are shown. White areas are those where there was greater activity during all three recognition conditions, and black regions are those with more activity during the baseline compared to recognition. See Table 4 for local maxima of these regions. The graph to the right of the figure shows the mean brain scores on this LV.

4. Discussion

The primary pattern in the task PLS analysis distinguished encoding from recognition regardless of the encoding strategy. During all three encoding strategies participants employed a mainly ventral network of regions in frontal and temporal cortex, including left inferior frontal and bilateral medial temporal areas. This is consistent with previous studies which have shown increased activity in ventral occipital, temporal, and left prefrontal regions during face encoding [2,28].

Although there was not a main effect of strategy across the memory conditions when encoding was compared to recognition, when the three encoding conditions were compared to the average of the baseline conditions, there was a significant difference between the deep encoding condition and baseline. This pattern showed that the left amygdala, anterior cingulate gyrus, and bilateral inferior frontal areas were more active during deep encoding. It is not surprising that the left amygdala was more active during the deep encoding condition, as assessment of pleasantness is a type of emotional judgment, and the amygdala is thought to be important in the processing of emotional stimuli and facial expressions (e.g. Refs. [1,8,37,44,64]).

Whether these areas are associated with deep processing as such, or with emotional processing per se, cannot be determined from the current study. An experiment that contrasts different types of 'deep' encoding strategies, some involving emotional judgments and some not, would be one possible way of examining this. Regardless of the specific mechanism involved, the relative lack of differential brain activity related to the three encoding strategies contrasts with the previous results of Grady et al. [23], who found distinct changes in rCBF that characterized deep and shallow encoding as well as intentional learning of words and pictures (see also Ref. [34]).

The discrepancy between our finding and that of Grady et al. [23] may have arisen because our manipulation produced only a modest levels-of-processing effect on behavioral performance, although the magnitude of this improvement was consistent with previous studies (e.g. Refs. [7,58]).


However, the brain activity results were consistent with the behavioral results in that the deep encoding condition produced both a distinct brain activity pattern and better recognition performance. It is also possible that a given strategy for encoding faces does not trigger the same neural systems as those that are engaged when using a similar strategy for encoding words and nameable pictures. In comparing previous studies, results indicate that memory for unfamiliar faces is not as affected by study instructions as memory for words and easy-to-name pictures (e.g. Refs. [7,12,24]). This may relate to the fact that some stimuli, in this case unfamiliar faces, cannot be encoded semantically as easily as others (see Ref. [3]).

Table 3
Local maxima of areas with activity differentiating deep encoding and average baselines

Region, gyrus          Hem   BA      X     Y     Z
Deep encoding > average of baselines
Frontal
  Inferior             L     47     −36    20   −12
  Inferior             L     44     −42    16    24
  Inferior             R     44      44    16    24
  Middle               L     10     −42    50     0
  Middle               R     10      40    46    −8
  Middle               R     9       22    44    16
  Medial               L     10     −16    54    −4
  Medial               R     8        4    36    40
Temporal, superior     L     42     −26   −30    16
Cingulate              R     24       2    28    24
Amygdala               L            −30    −4   −20
Insula                 L            −34    14     8
Cerebellum             L             −8   −82   −24
Caudate                L             −4     6    12
Thalamus               R             10   −18     0

Average of baselines > deep encoding
Frontal, superior      R     8       20    40    44
Cingulate              L     31      −2   −30    40
Temporal
  Inferior             L     37     −44   −68     4
  Inferior             L     20     −52   −28   −24
  Inferior             R     37      52   −58    −8
  Middle               R     21      60   −14    −8
  Superior             R     22      46   −34    12
Amygdala               R             20    −8   −20
Parietal
  Inferior             L     40     −62   −34    24
  Inferior             R     40      48   −48    40
Occipital
  Middle               L     18     −42   −86    16
  Middle               R     18      24   −90    16
  Fusiform             R     19      40   −64    −4
  Cuneus               L     18      −2   −92    12
Insula                 R             34     6     8


Table 4
Local maxima of areas with activity differentiating recognition and baselines

Region, gyrus          Hem   BA      X     Y     Z
Recognition > average of baselines
Frontal
  Inferior             L     46/10  −44    40     0
  Middle/inferior      L     9      −44    18    28
  Middle/inferior      R     8/9     42    12    28
  Middle               R     10      28    42     0
  Superior             R     9       20    54    20
  Precentral           L     4      −36    −6    32
  Precentral           R     4       48   −16    28
Temporal
  Middle               R     39      32   −64    24
Occipital
  Fusiform             R     19      36   −68   −16
Hippocampal            R     19      26   −50     4
Cingulate              L     24      −4    −6    32

Average of baselines > recognition
Frontal
  Medial               L     10      −2    54     0
  Superior             R     8       20    40    40
Temporal
  Inferior             L     20     −54   −22   −24
  Inferior             R     37      58   −46   −20
  Middle               L     37     −52   −62     4
  Middle               R     37      56   −58     0
  Middle               R     21      42   −12   −16
Parietal
  Inferior             R     40      50   −44    24
Occipital
  Middle               L     19     −40   −80    16
  Middle               R     18      34   −92    12
  Cuneus               R     18       8   −96    20
  Fusiform             L     19     −20   −48    −8
  Fusiform             R     37      30   −48   −12
  Lingual              R     18      14   −80   −20
Insula                 R             30     2    12
Cingulate              M     24       0    32     4
Cerebellum             L            −24   −74   −24

It should be noted that although levels-of-processing effects on brain activity have been found both at encoding (e.g. Refs. [24,35]) and during retrieval (e.g. Refs. [46,52]), overall it appears that encoding strategy has a stronger influence on brain activity during encoding than during recognition.

The task PLS analysis also revealed that, in contrast with encoding, recognition of faces appears to involve more dorsal regions, including bilateral prefrontal, right parietal and premotor areas. A number of studies have reported enhanced rCBF in right frontal cortex during recognition (e.g. Refs. [51,57,62]). In contrast, encoding often has been associated with increased rCBF in the left frontal region (hemispheric encoding/retrieval asymmetry, or HERA) (e.g. Ref. [61]).


In the current study, encoding was associated with enhanced rCBF predominantly in left inferior frontal areas, whereas recognition was associated with increases in rCBF in bilateral frontal regions, more extensively on the right side, particularly in anterior frontal cortex. This pattern of activity, although not conforming to a stringent HERA prediction, is consistent with the idea of some lateralization of different memory operations.

Furthermore, this result is supported by evidence that encoding of faces and other stimuli such as pictures can engage a more symmetrical set of regions (e.g. Refs. [24,35]).

Fig. 3. (A) Brain areas where activity was correlated with performance levels as a function of particular conditions are shown. White areas are those associated with good performance if active during encoding and recognition of faces studied under shallow encoding or intentional learning instructions, and with poor performance if active during deep encoding and recognition. Black areas show the opposite pattern; namely, increased activity in these areas is associated with good performance during deep encoding and recognition, and with poor performance during the shallow and intentional learning conditions. (B) Plots of brain scores from the PLS analysis and accuracy for each of the encoding (top three panels) and recognition conditions (bottom three panels). During the shallow and learning conditions (two left and two center panels) there are positive correlations between scores and accuracy, indicating that increased activity in the brain regions shown in white (with more positive salience) is correlated with better recognition performance. Conversely, during the deep conditions (two right panels), the correlations are negative, indicating that increased activity in the regions with negative salience, shown in black, is associated with better performance.

Table 5
Local maxima of areas where activity is differentially correlated with performance in memory tasks

Region, gyrus          Hem   BA      X     Y     Z
Positive correlation with performance in shallow and intentional learn conditions, negative correlation with performance in deep conditions
Occipital
  Fusiform             L     19     −20   −72   −20
  Fusiform             L     18     −38   −84   −12
  Fusiform             R     18      24   −82   −16
  Cuneus               L     18/19   −4   −82    28
  Lingual              R     18       6   −70     0
Temporal
  Middle               R     21      50   −40    −4
  Superior             L     38     −34     8   −20
Hippocampus            L            −20   −32   −16
Hippocampus            R             22   −14   −20
Amygdala               L            −22    −4   −16
Amygdala               R             28    −6   −16
Cerebellum             L             −8   −48   −16
Midbrain               L            −12   −14   −16

Positive correlation with performance in deep conditions, negative correlation with performance in shallow and intentional learn conditions
Occipital
  Cuneus               R     18       4   −68    20
Temporal
  Inferior             L     37     −50   −50    −4
  Middle               L     21     −54   −16   −16
  Middle               L     39     −34   −64    20
  Middle               R     21/22   58   −28     4
  Middle               R     39      34   −70    24
  Superior             R     22      50   −50    20
Frontal
  Inferior             L     45     −40    14     8
  Inferior             R     45      32    18    12
  Inferior             R     47      42    26     0
  Middle               R     6       40     2    32
  Medial/superior      L     10      −8    68     0
  Orbital              R     11       8    60   −12
Cingulate              L     32      −8    44    −4

In fact, our results shed additional light on prefrontal activity during encoding and suggest that there are two prefrontal regions that are more active during face encoding than during recognition but that differ in laterality. One is a ventral prefrontal area in the left hemisphere (coordinates: −48, 42, 12; see Table 2 and Fig. 1) that appears to be important during encoding regardless of the type of stimulus, whether verbal or visual (e.g. Refs. [28,48,57]). The other prefrontal region is more dorsal and posterior to the first, and its laterality appears to be a function of the type of stimulus that is encoded. This area shows more right hemisphere activation for visual material such as faces, and more left hemisphere activation for verbal material (e.g. Refs. [35,41]). We also found this prefrontal region to be bilaterally active, although more so on the right, during encoding, particularly during the deep encoding condition (coordinates: 44, 16, 24; see Table 3 and Fig. 2A).


In the current study, the task PLS analysis revealed enhanced rCBF in the fusiform gyrus, mainly in middle regions, during encoding compared to the recognition conditions. This is consistent with many studies which have shown increased rCBF in this region for tasks involving the processing of faces (e.g. Refs. [25,28,31]). The fact that the fusiform gyrus was differentially active during encoding and recognition suggests that this area can be modulated by controlled processes. That is, although the fusiform may indeed be specialized for face processing, it is not necessarily involved in the same manner for every kind of task. This indicates that it plays a role in perceiving the face (processing the features, for instance), and further that this processing capacity is engaged to a greater degree during initial presentation of the face than during repeated presentations. One possible explanation for why there is more middle fusiform activation during encoding than recognition is that the difference reflects an effect of attention. That is, it may be that the encoding condition is more perceptually demanding than recognition, and as such, more activation in the fusiform is observed (e.g. Ref. [65]). Another possibility is that more activity during encoding could actually indicate a reduction in activity during recognition, consistent with monkey and human work showing that some brain areas are less active the second time a stimulus is encountered as compared with the first time, perhaps reflecting object memory (e.g. Refs. [18,38,63]). Consistent with this interpretation was our finding of an additional fusiform area that had greater activity during the baseline conditions compared to recognition (coordinates: 30, − 48, −12), a contrast that also compares novel to more familiar items. This fusiform region with reduced activity during recognition compared to baseline was similar in location to the midfusiform region that was active during face encoding (coordinates: 14, − 42, − 12). However, the fusiform results cannot be explained solely in terms of such a general novelty effect because there also was a posterior fusiform area that was more active during recognition than during the baseline (coordinates: 36, − 68, −16), where the non-face stimulus patterns are presumably most novel. Thus, it may be that different areas of the fusiform gyrus perform different functions. For instance, it may be that the middle fusiform is more related to processing novel stimuli, whether those be newly seen faces or non-face stimuli, whereas posterior fusiform is more related to the process of recognizing previously seen faces. The observed effects of deep versus shallow encoding on recognition performance for faces are consistent with previous behavioral studies (e.g. Ref. [7]) showing improved performance when material has been processed at a deeper, semantic level. At the neural level, according to our behavioral PLS analysis, participants
were best at recognizing faces studied under deep encoding instructions when rCBF was increased in left inferior temporal and frontal areas and in bilateral middle temporal and parietal cortices. Some of these regions, for example left inferior temporal and left frontal areas, are involved in semantic processing of words and pictures (e.g. Refs. [9,20,24]). This suggests that some 'semantic' areas are also important for deep encoding of faces even if the actual deep processing task is different. On the other hand, increased activity in the left amygdala was related to better performance in the shallow and learn conditions, consistent with other reports of a role for this region in implicit processing of emotion in faces (e.g. Refs. [45,64]). In addition, good recognition performance for shallowly-encoded faces was observed when there was increased blood flow in bilateral fusiform, medial temporal, and right inferior temporal areas; the same pattern was true for the intentionally-encoded faces. Thus, not only is there differential fusiform activation during encoding and retrieval, but posterior fusiform activation is also differentially correlated with performance in different memory strategy conditions.

Interestingly, these correlations between brain activity and recognition accuracy were found in posterior regions of the fusiform gyrus, and not in the midfusiform gyrus, as has been reported by Kuskowski and Pardo [36]. Differences between our study and theirs that could account for the discrepant results include different analyses (multivariate PLS versus univariate correlations), and the fact that they scanned participants only during encoding and not during recognition.

Our data suggest that activation of the fusiform gyrus during a face-processing task does not necessarily ensure good task performance. Posterior fusiform activity appears to be more related to perceptual conditions than to deep (emotive or semantically encoded) conditions, whereas midfusiform activity may be unrelated to performance level. This result underscores the importance of examining directly the relation between brain activity and behavior. An analysis of task effects shows which areas are active during a condition regardless of how those regions are related to performance, whereas a behavioral analysis reveals which areas are specifically correlated with individual differences in performance. Our results clearly show that a region may be active during a number of task conditions (as the fusiform was during encoding in general), but nevertheless may contribute differently to task performance in these conditions (compare Fig. 1 and Fig. 4).

Fig. 4. Areas of overlap between those differentiating encoding and recognition (Fig. 1) and those correlated with behavioral performance (Fig. 3A). Regions with a threshold ≥ 4 are shown. In this overlap image, areas that are the same color in Figs. 1 and 3 (i.e. both white or both black) are shown in white, and areas with different colors in Figs. 1 and 3 are shown in black. White arrows point to areas with greater activation during encoding than during recognition and where increased activity was correlated with better performance in the shallow and intentional learning conditions. Black arrows point to areas with greater activation during encoding than during recognition and where increased activity was correlated with better performance in the deep conditions. Black circles indicate areas with greater activation during recognition than during encoding and where activity was correlated with better performance in the deep conditions.

In conclusion, the results have highlighted several important aspects of face memory. First, encoding and recognition of unfamiliar faces involve different networks of brain activity. Second, there is an effect of encoding strategy on blood flow during encoding of faces, mainly during deep encoding; there is no effect of encoding strategy on blood flow during recognition. Third, the effect of different encoding strategies on brain activity is consistent with the behavioral effects, namely, that recognition of deeply encoded faces is
greater than those encoded in the shallow and learn conditions and brain activity during deep encoding is distinct from that in the other two encoding conditions. Finally, brain areas that are active across all encoding tasks may nevertheless be related differently to individual differences in memory performance due to encoding strategy. Acknowledgements The authors would like to thank the staff of the PET Centre at the Centre for Mental Health and Addiction, University of Toronto, for their technical assistance in carrying out this experiment, and Dr Claude Alain for comments on an earlier version of the manuscript. This work was supported by the Ontario Mental Health Foundation, the Canadian Institutes of Health Research, and the Alzheimer Society of Canada. References [1] Aggleton JP, editor. The Amygdala: Neurobiological Aspects of Emotion, Memory, and Mental Dysfunction. New York: Wiley, 1992. [2] Andreasen NC, O’Leary DS, Arndt S, et al. Neural substrates of facial recognition. Journal of Neuropsychiatry and Clinical Neuroscience 1996;8:139 –46. [3] Baddeley AD. The trouble with levels: a reexamination of Craik and Lockhart’s framework for memory research. Psychological Review 1978;85:139 –52. [4] Bauer RM, Tobias B, Valenstein E. Amnesic disorders. In: Heilman KM, Valenstein E, editors. Clinical Neuropsychology. New York: Oxford University Press, 1993:523 –602. [5] Beck AT. Beck Depression Inventory. San Antonio, TX: The Psychological Corporation, 1987. [6] Benton AL, Hamsher KD. Multilingual Examination. Iowa City, IA: University of Iowa, 1976. [7] Bower GH, Karlin MB. Depth of processing pictures of faces and recognition memory. Journal of Experimental Psychology 1974;103:751 – 7. [8] Breiter HC, Etcoff NL, Whalen PJ, et al. Response and habituation of the human amygdala during visual processing of facial expression. Neuron 1996;17:875 – 87. [9] Cabeza R, Nyberg L. Imaging cognition: an empirical review of PET studies with normal subjects. Journal of Cognitive Neuroscience 1997;9:1 – 26. [10] Courtney SM, Ungerleider LG, Keil K, Haxby JV. Object and spatial visual working memory activate separate neural systems in human cortex. Cerebral Cortex 1996;6:39 – 49. [11] Courtney SM, Ungerleider LG, Keil K, Haxby JV. Transient and sustained activity in a distributed neural system for human working memory. Nature 1997;386:608 –11. [12] Craik FIM, Lockhart RS. Levels of processing: a framework for memory research. Journal of Verbal Learning and Verbal Behavior 1972;11:671 – 84. [13] Craik FIM, Simon E. Age differences in memory: the roles of attention and depth of processing. In: Poon LW, Fozard JL, Cermak L, Arenberg D, Thompson LW, editors. New Directions in Memory and Aging: Proceedings of the George Talland Memorial Conference. Hillsdale, NJ: Lawrence Erlbaum, 1980:95 – 112.


[14] Craik FIM, Tulving E. Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology General 1975;104:268 – 94. [15] Edgington ES. Randomization Tests. New York: Marcel Dekker, 1980. [16] Efron B, Tibshirani R. Bootstrap methods for standard errors, confidence intervals and other measures of statistical accuracy. Statistical Science 1986;1:54 – 77. [17] Eustache F, Desgranges B, Laville P, et al. Episodic memory in transient global amnesia: encoding, storage, or retrieval deficit? Journal of Neurology, Neurosurgery and Psychiatry 1999;66:148 – 54. [18] Fahy FL, Riches IP, Brown MW. Neuronal activity related to visual recognition memory: long-term memory and the encoding of recency and familiarity information in the primate anterior and medial inferior temporal and rhinal cortex. Experimental Brain Research 1993;96:457 – 72. [19] Folstein MF, Folstein SE, McHugh PR. ’Mini-mental state’. A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 1975;12:189 –98. [20] Grady CL. Neuroimaging and activation of the frontal lobes. In: Miller BL, Cummings JL, editors. The Human Frontal Lobes: Function and Disorders. New York: Guilford Press, 1999:196 – 230. [21] Grady CL, Horwitz B, Pietrini P, et al. The effect of task difficulty on cerebral blood flow during perceptual matching of faces. Human Brain Mapping 1996;4:227 – 39. [22] Grady CL, Maisog JM, Horwitz B, et al. Age-related changes in cortical blood flow activation during visual processing of faces and location. Journal of Neuroscience 1994;14:1450 – 62. [23] Grady CL, McIntosh AR, Bookstein F, Horwitz B, Rapoport SI, Haxby JV. Age-related changes in regional cerebral blood flow during working memory for faces. NeuroImage 1998;8:409 – 25. [24] Grady CL, McIntosh AR, Rajah MN, Craik FIM. Neural correlates of the episodic encoding of pictures and words. Proceedings of the National Academy of Science USA 1998;95:2703 – 8. [25] Halgren E, Dale AM, Sereno MI, Tootell RB, Marinkovic K, Rosen BR. Location of human face-selective cortex with respect to retinotopic areas. Human Brain Mapping 1999;7:29 –37. [26] Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL. The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations. Journal of Neuroscience 1994;14:6336 – 53. [27] Haxby JV, Ungerleider LG, Horwitz B, Maisog JM, Rapoport SI, Grady CL. Face encoding and recognition in the human brain. Proceedings of the National Academy of Science USA 1996;93:922 – 7. [28] Haxby JV, Ungerleider LG, Horwitz B, Maisog JM, Rapoport SI, Grady CL. Storage and retrieval of new memories for faces in the intact human brain. Proceedings of the National Academy of Science USA 1996;93:922 – 7. [29] Haxby JV, Ungerleider LG, Horwitz B, Rapoport SI, Grady CL. Hemispheric differences in neural systems for face working memory: a PET-rCBF study. Human Brain Mapping 1995;3:68 –82. [30] Intraub H, Nicklos S. Levels of processing and picture memory: the physical superiority effect. Journal of Experimental Psychology: Learning, Memory and Cognition 1985;11:284 – 298. [31] Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience 1997;17:4302 – 11. [32] Kapur N, Friston KJ, Young A, Frith CD, Frackowiak RSJ. Activation of human hippocampal formation during memory for faces: a PET study. Cortex 1995;31:99 – 108. 
[33] Kapur S, Craik FIM, Tulving E, Wilson AA, Houle S, Brown GM. Neuroanatomical correlates of encoding in episodic memory: levels of processing effect. Proceedings of the National Academy of Science USA 1994;91:2008–11. [34] Kapur S, Rose R, Liddle PF, et al. The role of the left prefrontal cortex in verbal processing: semantic processing or willed action? Neuroreport 1994;5:2193–6. [35] Kelley WM, Miezin FM, McDermott KB, et al. Hemispheric specialization in human dorsal frontal cortex and medial temporal lobe for verbal and nonverbal memory encoding. Neuron 1998;20:927–36. [36] Kuskowski MA, Pardo JV. The role of the fusiform gyrus in successful encoding of face stimuli. NeuroImage 1999;9:599–610. [37] LeDoux JE. Emotional memory systems in the brain. Behavioural Brain Research 1993;58:69–79. [38] Li L, Miller EK, Desimone R. The representation of stimulus familiarity in anterior inferior temporal cortex. Journal of Neurophysiology 1993;69:1918–29. [39] McCarthy G, Puce A, Gore JC, Allison T. Face-specific processing in the fusiform gyrus. Journal of Cognitive Neuroscience 1997;9:605–10. [40] McCarthy RA, Warrington EK. Cognitive Neuropsychology: a Clinical Introduction. San Diego, CA: Academic Press, 1990. [41] McDermott KB, Buckner RL, Petersen SE, Kelley WM, Sanders AL. Set- and code-specific activation in frontal cortex: an fMRI study of encoding and retrieval of faces and words. Journal of Cognitive Neuroscience 1999;11:631–40. [42] McIntosh AR. Mapping cognition to the brain through neural interactions. Memory 1999;7:523–548. [43] McIntosh AR, Bookstein FL, Haxby JV, Grady CL. Spatial pattern analysis of functional brain images using partial least squares. NeuroImage 1996;3:143–57. [44] Morris JS, Frith CD, Perrett DI, et al. A differential neural response in the human amygdala to fearful and happy facial expressions. Nature 1996;383:812–5. [45] Morris JS, Ohman A, Dolan RJ. A subcortical pathway to the right amygdala mediating 'unseen' fear. Proceedings of the National Academy of Sciences USA 1999;96:1680–5. [46] Nyberg L, McIntosh AR, Houle S, Nilsson L-G, Tulving E. Activation of medial temporal structures during episodic memory retrieval. Nature 1996;380:715–7. [47] Nyberg L, Persson J, Habib R, et al. Large scale neurocognitive networks underlying episodic memory. Journal of Cognitive Neuroscience 2000;12:163–173. [48] Owen AM, Milner B, Petrides M, Evans AC. Memory for object features versus memory for object location: a positron-emission tomography study of encoding and retrieval processes. Proceedings of the National Academy of Science USA 1996;93:9212–7. [49] Puce A, Allison T, McCarthy G. Electrophysiological studies of human face perception. III: Effects of top-down processing on face-specific potentials. Cerebral Cortex 1999;9:445–58. [50] Raven JC. Revised Manual for Raven's Progressive Matrices and Vocabulary Scale. Windsor, UK: NFER Nelson, 1982.

[51] Rugg MD, Fletcher PC, Frith CD, Frackowiak RSJ, Dolan RJ. Differential activation of the prefrontal cortex in successful and unsuccessful memory retrieval. Brain 1996;119:2073 – 84. [52] Rugg MD, Fletcher PC, Frith CD, Frackowiak RSJ, Dolan RJ. Brain regions supporting intentional and incidental memory: a PET study. Neuroreport 1997;8:1283 – 7. [53] Sagar HJ, Cohen NJ, Corkin S, Growdon JH. Dissociations among processes in remote memory. Annals of the New York Academy of Sciences 1985;444:533 – 5. [54] Sampson PD, Streissguth AP, Barr HM, Bookstein FL. Neurobehavioral effects of prenatal alchohol: Part II. Partial least squares analysis. Neurotoxicology and Teratology 1989;11:477 – 91. [55] Schreurs BG, McIntosh AR, Bahro M, Herscovitch P, Sunderland T, Molchan SE. Lateralization and behavioral correlation of changes in regional cerebral blood flow with classical conditioning of the human eyeblink response. Journal of Neurophysiology 1997;77:2153 – 63. [56] Sergent J, Ohta S, MacDonald B. Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain 1992;115:15 – 36. [57] Shallice T, Fletcher P, Frith CD, Grasby P, Frackowiak RS, Dolan RJ. Brain regions associated with acquisition and retrieval of verbal episodic memory. Nature 1994;368:633 – 5. [58] Smith AD, Winograd E. Adult age differences in remembering faces. Developmental Psychology 1978;14:443 – 4. [59] Talairach J, Tournoux P. Co-Planar Stereotaxic Atlas of the Human Brain. New York: Thieme, 1988. [60] Tulving E. Elements of Episodic Memory. New York: Oxford University Press, 1983. [61] Tulving E, Kapur S, Craik FIM, Moscovitch M, Houle S. Hemspheric encoding/retrieval asymmetry in episodic memory: positron emission tomography findings. Proceedings of the National Academy of Science USA 1994;91:2016 – 20. [62] Tulving E, Kapur S, Markowitsch HJ, Craik FIM, Habib R, Houle S. Neuroanatomical correlates of retrieval in episodic memory: auditory sentence recognition. Proceedings of the National Academy of Science USA 1994;91:2012 – 5. [63] Vandenberghe R, Dupont P, Bormans G, Mortelmans L, Orban G. Blood flow in human anterior temporal cortex decreases with stimulus familiarity. Neuroimage 1995;2:306 – 13. [64] Whalen PJ, Rauch SL, Etcoff NL, McInerney SC, Lee MB, Jenike MA. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience 1998;18:411 – 8. [65] Wojciulik E, Kanwisher N, Driver J. Covert visual attention modulates face-specific activity in the human fusiform gyrus: fMRI study. Journal of Neurophysiology 1998;79:1574 –8. [66] Woods RP, Cherry SR, Mazziotta JC. Rapid automated algorithm for aligning and reslicing PET images. Journal of Computer Assisted Tomography 1992;16:620 – 33.