Interhemispheric categorization of pictures and words


Brain and Cognition 52 (2003) 181–191 www.elsevier.com/locate/b&c

Mika Koivisto a,* and Antti Revonsuo b

a Centre for Cognitive Neuroscience, Department of Psychology, University of Turku, FIN-20014 Turku, Finland
b Centre for Cognitive Neuroscience, Department of Philosophy, University of Turku, FIN-20014 Turku, Finland

Accepted 1 April 2003

Abstract

Earlier studies suggest that interhemispheric processing increases the processing power of the brain in cognitively complex tasks as it allows the brain to divide the processing load between the hemispheres. We report two experiments suggesting that this finding does not generalize to word–picture pairs: they are processed at least as efficiently when processed by a single hemisphere as compared to processing occurring between the two hemispheres. We examined whether dividing the stimuli between the visual fields/hemispheres would be more advantageous than unilateral stimulus displays in the semantic categorization of simultaneously presented pictures, words, and word–picture pairs. The results revealed that within-domain stimuli (semantically related picture pairs or word pairs) were categorized faster in bilateral than in unilateral displays, whereas cross-domain stimuli (word–picture pairs) were not categorized faster in bilateral than in unilateral displays. It is suggested that interhemispheric sharing of word–picture stimuli is not advantageous as compared to unilateral processing conditions because words and pictures use different access routes, and therefore, it may be possible to process in parallel simultaneously displayed word–picture stimuli within a single hemisphere. © 2003 Elsevier Science (USA). All rights reserved.

Keywords: Categorization; Cerebral dominance; Interhemispheric interaction; Visual field; Semantics

* Corresponding author. Fax: +358-2-333-5060. E-mail address: mika.koivisto@utu.fi (M. Koivisto).

0278-2626/03/$ - see front matter © 2003 Elsevier Science (USA). All rights reserved. doi:10.1016/S0278-2626(03)00054-X

1. Introduction

The two cerebral hemispheres can be regarded as two parallel, relatively independent information-processing systems which are specialized for different cognitive functions. According to current views, however, most cognitive tasks are not performed exclusively in the left or right hemisphere, but in close collaboration between the hemispheres, with each hemisphere offering its own contribution to the performance as a whole (Banich, 1995a, 1995b; Hellige, 1993). One aspect of interhemispheric processing that has attracted attention during recent years is whether it is advantageous or disadvantageous to divide the processing between the two hemispheres (Banich, 1995a, 1995b).

Typical experiments studying the efficiency of intra- versus interhemispheric processing make use of the neuroanatomical organization of the human visual system: stimuli in the left visual field (LVF) are projected to the right hemisphere and stimuli in the right visual field (RVF) to the left hemisphere. This organization makes it possible to present the stimuli unilaterally to the same visual field/hemisphere or bilaterally to the opposite visual fields/hemispheres. Successful performance in the bilateral condition necessarily requires interhemispheric interaction and integration of the information from the two hemispheres, whereas performance in the unilateral conditions does not necessarily demand such communication. A unilateral advantage is observed when the stimuli in the unilateral conditions are processed faster or more accurately than those in the bilateral conditions. Conversely, a bilateral advantage is revealed when the stimuli are processed faster or more accurately in the bilateral conditions as compared to the unilateral conditions.

Experiments on interhemispheric processing have revealed both unilateral and bilateral advantages. The specific pattern that is observed, however, seems to depend on the complexity of the task, with the probability of finding a bilateral advantage increasing as a function of task demands. Banich and Belger (1990) observed a unilateral advantage in matching physically identical letters (e.g., A–A) and numbers, whereas a bilateral advantage was found for matching the names of upper- and lowercase letters (e.g., A–a) and performing mental arithmetic. In addition to the complexity of the task, a further study (Belger & Banich, 1992) showed that the complexity of the stimuli [number of stimuli] increased the bilateral advantage. Similarly, Merola and Liederman (1990) found that the bilateral advantage in a letter naming task became stronger when the number of letters in the display was increased. These findings suggest that it is advantageous to divide the processing across the hemispheres when the task is computationally complex. It has been proposed that interhemispheric processing allows the hemispheres to work in parallel or to perform two operations simultaneously (Banich, 1995a, 1995b; Liederman, 1998). In bilateral conditions, the processing load is distributed across a larger neural space, which is presumed to increase the processing power of the brain.

The bilateral advantage can also be observed in tasks calling for semantic processing. Zhang and Feng (1999) found a bilateral advantage for matching synonymous Chinese characters, whereas the matching of visually similar characters was equal in unilateral and bilateral conditions. Koivisto (2000) used pictorial stimuli and found that pictures belonging to the same category were semantically categorized faster in bilateral than in unilateral presentations. Categorization of visually identical stimuli did not result in any bilateral advantage. These results suggest that simultaneous processing in the two hemispheres improves performance in relatively complex semantic tasks but not in less complex tasks requiring visual matching.

The present experiments studied interhemispheric semantic processing in greater detail.
They were designed to test whether the bilateral advantage found in the semantic categorization of pictorial stimuli (Koivisto, 2000) would generalize to the categorization of word–word pairs and word–picture pairs. Although left-hemisphere superiority in language processing is one of the earliest and most reliably documented findings in human neuropsychology, recent evidence from brain-damaged and normal participants suggests that the right hemisphere also possesses some capacity to process words semantically (Beeman, 1993; Burgess & Chiarello, 1996; Chiarello, 1991; Hagoort, Brown, & Swaap, 1996). It has been suggested that regardless of the degree of lateralization, dividing processing over a wider expanse of neural regions is useful (Banich, 1995a, 1995b). Therefore, it could be hypothesized that bilateral stimulus presentation would lead the hemispheres to process the words in parallel, which would be more efficient than the serial processing of two words within a single hemisphere. Thus, we predicted that the categorization of words would be more efficient when the input is divided between the hemispheres as compared to conditions wherein the left or right hemisphere alone receives the words.

It is not clear, however, whether the same hypothesis can be applied to a cross-domain condition, for example, to word–picture pairs. Accessing the meaning of words and pictures relies at least partially on different routes (Potter, So, Eckardt, & Feldman, 1984; Snodgrass, 1984; Te Linde, 1982). Brain imaging studies suggest that the semantic processing of words activates different brain areas than the processing of pictures, although considerable overlap is also observed (Chee et al., 2000; Moore & Price, 1999). Thus, the processing of words and pictures engages different neural systems, making it possible that a word–picture stimulus pair could be processed both efficiently and in parallel within a single hemisphere. It follows that dividing a word–picture stimulus pair between the hemispheres would not necessarily be expected to result in an increase in performance compared to conditions that restrict the pair to one hemisphere.

We tested the efficiency of unilateral vs. bilateral semantic categorization by simultaneously presenting pictures, words, and word–picture pairs to the same visual field or to opposite visual fields in two experiments. We predicted that within-domain (picture–picture or word–word) stimuli would show a bilateral advantage whereas cross-domain (word–picture) stimuli would not.

2. Experiment 1

In this experiment, the subjects categorized picture–picture, word–word, and word–picture pairs according to whether they belonged to the same semantic category or not. Three types of stimuli were used: identical, semantically related, and semantically unrelated pairs. Identical pairs consisted of two copies of the same word (e.g., bird–bird) or picture; in the word–picture pairs the word was the written name of the picture. Two general categories were used in the semantically related pairs: animals and objects. Each semantically related pair contained either two animals (e.g., bird–donkey) or two objects (e.g., hat–book). The items in the pairs were simultaneously directed either to the same hemisphere (unilateral presentation) or to the opposite hemispheres (bilateral presentation), and the subject's task was to decide whether the stimuli belonged to the same category (i.e., both are animals or both are objects) or not (one of them is an animal and the other is an object).

2.1. Subjects

Eighteen right-handed subjects (8 males, 10 females) either volunteered or took part in order to fulfill a course requirement. Their mean age was 21.8 (SD = 2.0) years, and according to the Edinburgh Inventory (Oldfield, 1971), all were right-handed.

2.2. Stimuli

Three stimulus domains were used: picture–picture pairs, word–word pairs, and word–picture pairs. The stimuli consisted of 16 pictures of animals and 16 pictures of objects, selected from the Snodgrass and Vanderwart (1980) picture set, and their written names. Sixteen semantically related pairs (8 animal pairs and 8 object pairs) and 16 unrelated (animal–object) pairs were constructed from these stimuli for each stimulus domain. In addition, identical pairs were created from the same stimuli for each domain. In the picture–picture domain, the identical pairs included two copies of the same picture. In the word–word domain, the identical pairs consisted of two copies of the same visual word. In the word–picture domain, the identical pairs contained a picture and its written name.

When viewed from a distance of 100 cm, the mean size of the pictures was 3.3° horizontally and 2.5° vertically. The mean size of the words was 2.4° horizontally and 0.6° vertically. The inner edges of the stimuli were placed 1.7° from the central fixation cross. Unilateral left visual field (LVF) and right visual field (RVF) versions were constructed from each pair (see Fig. 1). In unilateral versions, the stimuli appeared within the same visual field, one above the other, with the lower edge of the upper stimulus and the upper edge of the lower stimulus separated by a vertical distance of 1.1°. In the word–picture domain, two forms of unilateral versions were used. In one of them, the upper stimulus was the word and the lower stimulus was the picture; in the other, the upper stimulus was the picture and the lower one was the word. Each subject saw half of the word–picture pairs in the former form and half in the latter form. Across the subjects, each of these forms was presented equally often.

Two forms of the bilateral versions were constructed from each pair in the picture–picture and word–word domains. In one form, the upper item was in the LVF and the lower one in the RVF; in the other form, the upper item was in the RVF and the lower one in the LVF. Each subject was presented with half of the bilateral versions, but across subjects each version was displayed equally often. Additionally, in the word–picture domain, two different versions of each bilateral pair were used: in one version the upper stimulus was the word and the lower stimulus the picture; this pattern was reversed in the other. Across the subjects, each of these versions was presented equally often, although each subject was presented with only half of them. There were thus two types of bilateral trials within the word–picture domain: those in which the word appeared in the LVF and the picture in the RVF, and those in which the picture appeared in the LVF and the word in the RVF.

Fig. 1. Examples of the displays for the semantically related stimuli in the picture–picture domain. The displays in the word–picture domain were otherwise identical to those in the picture–picture domain, but one of the items (either the upper or the lower one) was always the written name of the corresponding picture. In the word–word domain, both items were the written names of the pictures.

2.3. Apparatus and procedure

A chin rest was used to keep the subject's head stable at a 100-cm distance from the screen. The stimuli were presented in black on a white microcomputer screen. A computer program (Revonsuo & Portin, 1998) synchronized the raster beam with stimulus presentation. Each trial began with a fixation cross presented at the center of the screen for 500 ms, followed by the stimulus display (two items simultaneously) for 150 ms. The subjects were asked to fixate on the cross and not to move their eyes when the stimuli appeared.
They were told that their task was to decide whether the two items belonged to the same semantic category or not. If both of the items represented either animals or objects, they were asked to respond "yes," otherwise "no," by pressing one of two buttons. Half of the subjects in each condition responded with the right hand and half with the left hand. Positive responses were indicated with the index finger and negative responses with the middle finger. Both speed and accuracy were stressed in the instructions. The next trial began 1.5 s after the subject had responded. Prior to the experimental trials, a practice block of 22 trials (including none of the stimuli in the experimental block) was presented to each subject. After practising, each subject received 480 experimental trials in four blocks of 120 trials. The blocks were separated by brief resting periods. Stimulus order was randomized for each subject. Response latencies and accuracy were measured.
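The counterbalancing scheme described under Stimuli (each subject sees half of the display versions, and each version appears equally often across subjects) can be sketched as follows. This is an illustrative reconstruction; the helper and its names are our assumptions, not taken from the original experiment software.

```python
def assign_versions(n_pairs: int, subject_index: int) -> list:
    """For each stimulus pair, pick one of its two display versions
    ('A' or 'B') for a given subject. Alternating the assignment with
    the subject index means each subject sees half of the pairs in each
    version, and across consecutive subjects every version of every
    pair is shown equally often (a simple counterbalancing scheme)."""
    return ["A" if (pair + subject_index) % 2 == 0 else "B"
            for pair in range(n_pairs)]

s0 = assign_versions(16, 0)   # versions for one subject
s1 = assign_versions(16, 1)   # complementary versions for the next subject
```

Each subject is thus exposed to 8 pairs in each version, and pooling two consecutive subjects shows every pair once in each version.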


3. Results

3.1. Response times

Trials with response times longer than 2 SD from each subject's mean (for each condition) were removed from the reaction time analyses (about 4% of the trials). Only correct trials were included in the reaction time analyses. Because there were three visual field conditions (LVF, RVF, bilateral) in the word–word and picture–picture domains and four conditions in the word–picture domain (LVF, RVF, LVF/word–RVF/picture, LVF/picture–RVF/word), the data for each domain were analysed separately. Greenhouse–Geisser corrected significance levels were applied when more than two levels were used in ANOVAs. The results are presented in Table 1.

A 3 (Stimulus Type: identical, semantic, unrelated) × 3 (VF: LVF, RVF, bilateral) ANOVA was conducted on the response latencies in the picture–picture domain. The analysis revealed a significant main effect for Stimulus Type (F(2,34) = 276.44, p < .001), indicating that all types of stimuli significantly differed from each other (ps < .001). Also, a significant Stimulus Type × Visual Field interaction was observed (F(4,68) = 3.38, p < .05), showing that the stimulus type modified the responses differently in the visual field conditions. For identical stimuli, no statistically significant VF differences were observed (F(2,34) = 3.07). For semantically related stimuli, a significant difference between VF conditions was found (F(2,34) = 5.39, p = .01): bilateral stimuli (794 ms) generated faster response times than stimuli in the LVF (828 ms) (p < .05) or RVF (843 ms) (p < .01). There were no VF differences for unrelated stimuli (F < 1).

The Stimulus Type (3) × VF (3) ANOVA for response times in the word–word domain showed a significant main effect for Stimulus Type (F(2,34) = 132.35, p < .001), indicating that all stimulus types differed from each other (ps < .001). In addition, the main effect for VF was significant (F(2,34) = 7.12, p < .01). Subjects responded to bilateral stimuli (952 ms) faster than to stimuli in the LVF (1019 ms) (p < .01) or RVF (1006 ms) (p < .01). The Stimulus Type × VF interaction was not statistically significant (F < 1).

The data from the word–picture domain were analysed with a 3 (Stimulus Type: identical, semantic, unrelated) × 4 (VF: LVF, RVF, LVF/word–RVF/picture, LVF/picture–RVF/word) ANOVA. It revealed a significant main effect for Stimulus Type (F(2,34) = 113.38, p < .001), again indicating that all the stimulus types differed from each other (ps < .001). Most importantly, the main effect for VF was significant (F(3,51) = 6.30, p < .01). Subjects responded to the stimuli in the LVF (976 ms) more slowly than to the stimuli in the RVF (938 ms) (p < .05), the LVF/word–RVF/picture condition (951 ms) (p < .05), and the LVF/picture–RVF/word condition (915 ms) (p < .01). In addition, the LVF/word–RVF/picture stimuli received slower responses than the LVF/picture–RVF/word stimuli (p < .05). The Stimulus Type × VF interaction was not statistically significant (F(6,102) = 2.09).

Table 1
Response times (RT) in milliseconds and error percentages (E%) for identical, semantically related, and unrelated stimuli as a function of stimulus domain and visual field (VF) (standard deviations in parentheses)

Domain           VF                        Identical              Semantic                Unrelated
                                           RT         E%          RT          E%          RT          E%
Picture–picture  LVF                       641 (78)   1.7 (2.4)   828 (99)    3.5 (4.4)   933 (103)   4.5 (5.2)
                 RVF                       667 (64)   0.0 (1.5)   843 (124)   2.4 (3.8)   922 (75)    4.2 (7.4)
                 Bilateral                 659 (85)   0.0 (1.5)   794 (101)   4.2 (5.7)   937 (99)    3.1 (7.5)
Word–word        LVF                       805 (166)  3.8 (6.1)   1077 (187)  21.2 (13.8) 1175 (219)  31.6 (19.3)
                 RVF                       791 (191)  8.0 (11.9)  1072 (171)  15.3 (8.9)  1156 (174)  23.3 (13.5)
                 Bilateral                 720 (93)   1.7 (3.2)   1045 (149)  11.5 (12.4) 1091 (164)  20.1 (14.6)
Word–picture     LVF                       820 (96)   3.1 (4.4)   1040 (196)  10.1 (6.5)  1068 (154)  14.6 (12.1)
                 RVF                       802 (97)   2.4 (4.4)   977 (127)   8.0 (6.7)   1036 (147)  8.7 (8.3)
                 LVF/word–RVF/picture      823 (116)  1.4 (2.7)   979 (160)   9.4 (7.5)   1051 (149)  10.8 (6.7)
                 LVF/picture–RVF/word      803 (99)   0.1 (2.0)   961 (118)   7.3 (6.9)   980 (117)   4.9 (5.1)
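The outlier-trimming rule used in the response time analyses (removing correct-trial RTs more than 2 SD above each subject's mean for each condition) can be sketched as follows. The data frame and column names are illustrative assumptions, not taken from the original analysis scripts.

```python
import numpy as np
import pandas as pd

# Hypothetical trial-level data: response times (ms) per subject and condition.
rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "subject": np.repeat([1, 2], 50),
    "condition": np.tile(["LVF", "RVF"], 50),
    "rt": rng.normal(800, 100, 100),
    "correct": True,
})

def trim_outliers(df: pd.DataFrame, sd_cutoff: float = 2.0) -> pd.DataFrame:
    """Keep correct trials whose RT is no more than `sd_cutoff` SDs above
    the subject-by-condition mean (mirrors the per-subject, per-condition
    criterion described in the text)."""
    df = df[df["correct"]]
    grp = df.groupby(["subject", "condition"])["rt"]
    cutoff = grp.transform("mean") + sd_cutoff * grp.transform("std")
    return df[df["rt"] <= cutoff]

clean = trim_outliers(trials)
```

The `transform` calls return group statistics aligned to the original rows, so the cutoff is computed separately for every subject-by-condition cell in one pass.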


3.2. Errors

A 3 (Stimulus Type) × 3 (VF: LVF, RVF, bilateral) ANOVA on errors in the picture–picture domain showed a main effect for Stimulus Type (F(2,34) = 5.20, p < .05), indicating that the error rate for identical stimuli was lower than for semantically related (p < .01) or unrelated stimuli (p < .05). Other effects were not statistically significant (Fs < 1).

The Stimulus Type (3) × VF (3) ANOVA on errors in the word–word domain showed a significant main effect for Stimulus Type (F(2,34) = 31.07, p < .001), indicating that all stimulus types differed from each other (ps < .001). The main effect for VF was significant (F(2,34) = 5.55, p < .01), and this effect was modified by a significant Stimulus Type × VF interaction (F(4,68) = 3.16, p < .05), suggesting that the pattern of results in the VFs differed as a function of the stimulus type. For identical stimuli, a significant VF effect (F(2,34) = 3.93, p < .05) indicated that fewer errors were made in responses to bilateral stimuli (1.0%) than to stimuli in the RVF (8.0%) (p < .05). For semantically related stimuli, a significant VF difference (F(2,34) = 4.25, p < .05) was due to higher error rates in the LVF (21.2%) than in the bilateral presentations (11.5%) (p < .05). Also, unrelated stimuli differed significantly (F(2,34) = 4.98, p < .05): fewer errors were made in response to bilateral (20.1%) than to LVF (31.6%) stimuli (p < .01).

A 3 (Stimulus Type) × 4 (VF: LVF, RVF, LVF/word–RVF/picture, LVF/picture–RVF/word) ANOVA on errors in the word–picture domain revealed a main effect for Stimulus Type (F(2,34) = 27.17, p < .001), showing that more errors were made in response to semantically related (p < .001) and unrelated (p < .001) stimuli than to identical stimuli. Also, the main effect for VF was significant (F(3,51) = 5.26, p < .01). This effect was due to more accurate responses in the LVF/picture–RVF/word condition (4.3%) than in the LVF (9.3%) (p < .01) and LVF/word–RVF/picture conditions (7.2%) (p < .01). Stimulus Type did not interact with VF (F(6,102) = 1.52).

4. Discussion

The results for the picture–picture pairs showed that the categorization of semantically related stimulus pairs was more efficient when the input was divided between the visual fields/hemispheres than when the input was directed either to the LVF/right hemisphere or to the RVF/left hemisphere. The results for the identical pairs did not show this bilateral advantage. This pattern supports the view that it is advantageous to divide the processing between the hemispheres when the task is computationally complex (Banich & Belger, 1990; Belger & Banich, 1992; Koivisto, 2000). On the other hand, the results for word–word pairs showed a general bilateral advantage which was similar for identical, semantically related, and unrelated word pairs. Given that the linguistic stimuli were more difficult than the pictures, a difference which can be observed in the response latencies and error rates, it seems possible that interhemispheric processing was beneficial even in the categorization of the less complex identical stimuli. Perceptual load, however, was not equally divided between unilateral and bilateral conditions in the present experiment, because two items were projected to a single hemisphere in the unilateral conditions but one item to each hemisphere in the bilateral conditions. This, together with the difficulty of the tasks, may explain why the bilateral advantage appeared for the less complex identical stimuli. In other words, the bilateral advantage may have originated at the perceptual rather than the semantic level. On the other hand, the finding that the bilateral advantage did not increase as a function of stimulus complexity may also imply that semantic processing was used in categorizing both identical and semantic stimuli.

In contrast to the picture–picture and word–word pairs, the word–picture pairs did not show a bilateral advantage for any type of stimuli. Although the responses to bilateral pairs were faster and more accurate than the responses to LVF pairs, the bilateral pairs did not differ from RVF pairs. Thus, sharing the stimuli between the two hemispheres did not result in more efficient processing compared to restricting the input to the left hemisphere only. Before discussing further the dissociation between the bilateral advantage in the within-domain conditions (picture–picture and word–word pairs) and the lack of a bilateral advantage in the cross-domain condition (word–picture), we shall first test whether the results can be replicated with a three-item display (Banich & Belger, 1990). This will allow us to keep the number of items projected to each visual field/hemisphere constant across unilateral and bilateral trials.

5. Experiment 2

The previous experiment employed a two-item display in which the perceptual load was not divided equally between the hemispheres in the bilateral trials as compared to the unilateral trials. In the unilateral trials, one of the visual fields/hemispheres was presented with two stimuli and the other visual field/hemisphere did not receive any stimuli; in the bilateral trials, both hemispheres received one stimulus. In order to keep the number of stimuli constant between unilateral and bilateral trials, Experiment 2 used a three-item display introduced by Banich and Belger (1990) to study the effects of stimulus domain (picture–picture, word–word,


word–picture) on the interhemispheric categorization of identical and semantically related stimuli. In the three-item display, two stimuli are presented to one of the visual fields/hemispheres and one stimulus to the opposite visual field/hemisphere, both in the unilateral and in the bilateral trials. What distinguishes the unilateral and bilateral trials is that the probe stimulus matches the stimulus in the same visual field in the unilateral trials, whereas in the bilateral trials the probe matches the stimulus in the opposite visual field. In Experiment 2, the subjects decided whether the probe belonged to the same category as either of the other two stimuli in the three-item display.

5.1. Subjects

Forty-eight right-handed subjects (24 males, 24 females) either volunteered or took part in order to fulfill a course requirement. Their mean age was 22.7 (SD = 2.2) years, and according to the Edinburgh Inventory (Oldfield, 1971), all were right-handed. None had participated in Experiment 1. Stimulus domain (picture–picture vs. word–word vs. word–picture) was manipulated between subjects: 16 subjects (8 females) participated in the picture–picture condition, 16 (8 females) in the word–word condition, and 16 (8 females) in the word–picture condition.

5.2. Materials

Twelve pairs of pictures were selected from the Snodgrass and Vanderwart (1980) picture set so that the concepts represented by the pictures in each pair belonged to the same semantic category (e.g., swan–chicken, table–sofa). In addition, 12 unrelated pairs were formed by mixing the semantic pairs (e.g., swan–sofa, table–chicken), and 12 identical pairs were created from the same stimuli (e.g., swan–swan, table–table). For the picture–picture condition, arrays consisting of three pictures were constructed for each type of stimulus pair (see Fig. 2). Two pictures, one in each visual field, were located equally above and lateral to the central fixation.
The third picture was placed below the central fixation either in the LVF or in the RVF. The upper pictures were never semantically related to each other. In the semantically related and identity stimulus types, however, the lower picture (probe) matched one of the upper pictures. In semantically related arrays, the lower picture belonged to the same semantic category as one of the upper pictures; in identical arrays, one of the upper stimuli was an identical copy of the probe. In unilateral displays, the probe matched the picture that appeared in the same visual field, whereas in bilateral displays the probe matched the picture in the opposite visual field. In unrelated displays, the probe did not match either of the stimuli.
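The logic that distinguishes unilateral from bilateral trials in the three-item display can be sketched as follows. This is an illustrative reconstruction under our own naming; the relatedness pairs and function names are assumptions, not taken from the experiment software.

```python
# Two of the curated semantic pairs from the Materials section, used here
# to define when a probe "matches" an upper item (identical or same pair).
RELATED = {frozenset({"swan", "chicken"}), frozenset({"table", "sofa"})}

def related(a: str, b: str) -> bool:
    """A probe matches an item if it is identical to it or belongs
    to the same curated semantic pair."""
    return a == b or frozenset({a, b}) in RELATED

def trial_type(upper_lvf: str, upper_rvf: str, probe_vf: str, probe: str) -> str:
    """Classify a trial: 'unilateral' if the probe matches the upper item
    in its own visual field, 'bilateral' if it matches the item in the
    opposite field, 'unrelated' if it matches neither."""
    same = upper_lvf if probe_vf == "LVF" else upper_rvf
    opposite = upper_rvf if probe_vf == "LVF" else upper_lvf
    if related(probe, same):
        return "unilateral"
    if related(probe, opposite):
        return "bilateral"
    return "unrelated"
```

For example, with "swan" in the upper LVF and "table" in the upper RVF, a "chicken" probe in the LVF yields a unilateral trial, whereas a "sofa" probe in the LVF yields a bilateral trial.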

Fig. 2. Examples of the displays for the semantically related stimuli in the picture–picture domain. The displays in the word–picture domain were otherwise identical to those in the picture–picture domain, but the lower item (probe) was always the written name of the corresponding picture (e.g., "apple"). In the word–word domain, all the items were the written names of the pictures.

In the word–word condition, all stimuli were words (the written names of the corresponding pictures). The arrays were otherwise identical to those in the picture– picture condition. In the word–picture condition, the arrays were identical to those in the picture–picture condition, but the probe (the lower stimulus) was always a word. The word was the written name of the picture in the corresponding picture–picture array. Thus, in semantically related arrays, the word belonged to the same semantic category as one of the upper pictures; in identical arrays, the word was the written name of one of the upper pictures. The mean size of the pictures was 3.0° horizontally and 2.5° vertically. The mean size of the words was 2.7° horizontally and 0.6° vertically. The inner edges of the two upper stimuli were placed on average 2.5° horizontally from the fixation cross. The inner edge of the lower stimulus (probe) was always placed 1.6° from the cross. Vertically, the lower edge of the upper stimuli were 1.5° above the upper edge of the lower stimulus. 5.3. Procedure As Experiment 1 demonstrated, two words presented simultaneously were more difficult to categorize than were stimuli presented in other domains. Because displaying three words simultaneously would be even more difficult to process, the subjects in the word–word condition were allowed to familiarize themselves with the words used in the experiment for two minutes. The words were presented on paper, printed in random order. The familiarization was conducted before the instructions and the practice trials were given, so that the

M. Koivisto, A. Revonsuo / Brain and Cognition 52 (2003) 181–191

187

subjects did not know what they were expected to do with the stimuli in the actual experiment. The apparatus was the same as in Experiment 1. Each trial began with a fixation cross presented at the center of the screen for 500 ms. The cross was followed by the stimulus display (the three items simultaneously) for 150 ms. The stimulus types and presentation conditions were displayed in random order. The subjects were asked to fixate on the cross and not to move their eyes when the stimuli appeared. They were told that their task was to decide whether the lower stimulus (the probe) belonged to the same semantic category as either of the upper stimuli. Both speed and accuracy were stressed in the instructions. Half of the subjects responded with the right hand and half with the left hand. Positive responses were indicated with the index finger and negative responses with the middle finger. The next trial began 1.5 s after the subject had responded. Prior to the experimental task, a practice block of 18 trials (including none of the stimuli used in the experimental block) was presented to each subject. Stimulus order was randomized for each subject. Response latencies and accuracy were measured.

6. Results

6.1. Response times

Trials with response times longer than 2 SD from each subject's mean (for each condition) were removed from the response time analyses (about 3% of the trials). Only correct responses were included in the reaction time analyses. Table 2 displays the mean response times and error percentages for each condition.

A 2 (Stimulus Type: identical vs. semantic) × 2 (VF: LVF vs. RVF) × 2 (Condition: unilateral vs. bilateral) × 3 (Domain: picture–picture vs. word–word vs. word–picture) ANOVA was conducted for the response latencies of positive trials, with Stimulus Type, VF, and Condition as within-subject factors and Domain as a between-subjects factor. It revealed a significant main effect for Stimulus Type (F(1,45) = 213.53, p < .001), showing faster responses to identical than to semantically related stimuli. The main effect for VF was significant (F(1,45) = 5.42, p < .025), showing faster responses to the RVF than to the LVF displays. This result was modified by a VF × Stimulus Type interaction (F(1,45) = 4.85, p < .05), showing a stronger RVF advantage for semantically related than for identical stimuli. Domain interacted with Stimulus Type (F(2,45) = 6.69, p < .01), indicating that the difference between identical and semantic stimuli was smallest in the word–picture domain. Importantly, Condition interacted with Stimulus Type (F(1,45) = 10.76, p < .01). In addition, this interaction was modified by a significant Domain × Stimulus Type × Condition interaction (F(2,45) = 4.41, p < .02). In order to gain a clearer understanding of the sources of these interactions, identical and semantic stimuli were analyzed separately.

A 2 (VF: LVF vs. RVF) × 2 (Condition: unilateral vs. bilateral) × 3 (Domain: picture–picture vs. word–word vs. word–picture) ANOVA for identical stimuli revealed a significant main effect for Domain (F(2,45) = 67.20, p < .001), indicating that picture–picture stimuli received the fastest responses and word–word stimuli the slowest. The most important finding here is that Condition had a significant main effect (F(1,45) = 9.12, p < .01): subjects responded faster to unilaterally presented stimuli than to bilaterally presented stimuli (779 vs. 827 ms). In other words, a unilateral advantage was observed for identical stimuli. Other main effects and interactions were not significant (Fs < 2.13).
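The outlier criterion described above (dropping responses beyond 2 SD of each subject's condition mean) can be sketched as follows; this is a minimal illustration with NumPy, and the function name and threshold parameter are our own, not from the original analysis code:

```python
import numpy as np

def trim_outliers(rts, n_sd=2.0):
    """Drop response times farther than n_sd standard deviations from the
    mean; in the paper this is applied per subject and per condition."""
    rts = np.asarray(rts, dtype=float)
    m, sd = rts.mean(), rts.std(ddof=1)  # sample SD, as is conventional
    return rts[np.abs(rts - m) <= n_sd * sd]

# Illustrative use: one subject's hypothetical RTs for one condition cell
cell = [600] * 19 + [3000]          # one extreme anticipatory/lapse trial
trimmed = trim_outliers(cell)
print(len(trimmed), "trials kept")  # the 3000 ms trial is removed
```

Note that with very small cells a single extreme value can inflate the SD enough to survive a 2 SD cut, so the criterion behaves best with reasonably many trials per cell.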

Table 2
Response times (RT) in milliseconds and error percentages (E%) for unilateral and bilateral trials as a function of visual field, stimulus type, and stimulus domain (standard deviations in parentheses)

                                Left visual field display                          Right visual field display
                                Unilateral              Bilateral                  Unilateral              Bilateral
Stimulus type  Domain           RT          E%          RT          E%             RT          E%          RT          E%
Identical
  Picture–picture               613 (102)   1.0 (2.8)   629 (105)   1.0 (2.1)      614 (87)    2.1 (4.8)   631 (95)    1.0 (2.8)
  Word–word                     983 (221)   35.4 (25.2) 1122 (206)  38.5 (28.7)    1061 (211)  31.3 (24.4) 1108 (230)  32.8 (25.0)
  Word–picture                  717 (67)    5.2 (6.0)   732 (60)    4.2 (6.8)      688 (60)    3.7 (5.2)   740 (87)    3.1 (4.2)
Semantic
  Picture–picture               944 (162)   22.9 (12.7) 886 (165)   14.6 (13.4)    932 (140)   13.0 (7.4)  893 (158)   17.2 (14.7)
  Word–word                     1395 (211)  59.9 (16.0) 1308 (186)  41.7 (15.3)    1296 (195)  44.3 (18.7) 1253 (122)  47.4 (10.8)
  Word–picture                  874 (103)   15.6 (10.0) 908 (87)    18.3 (12.6)    835 (93)    15.6 (11.9) 875 (77)    15.6 (10.9)


M. Koivisto, A. Revonsuo / Brain and Cognition 52 (2003) 181–191

The corresponding 2 × 2 × 3 ANOVA for semantically related stimuli showed a main effect for VF (F(1,45) = 8.50, p < .01), indicating faster responses to RVF displays (1014 ms) than to LVF displays (1053 ms). Also, the main effect for Domain was significant (F(2,45) = 62.61, p < .001), showing that response times were faster in the picture–picture and word–picture domains than in the word–word domain. The critical finding here was the significant Condition × Domain interaction (F(2,45) = 6.64, p < .01). This interaction suggests that the three stimulus domains produced different patterns of unilateral vs. bilateral advantages. For picture–picture stimuli, bilateral stimuli (890 ms) were categorized faster than unilateral ones (938 ms) (F(1,15) = 5.88, p < .05). Similarly, for word–word stimuli, bilateral stimuli (1280 ms) were categorized faster than unilateral ones (1346 ms) (F(1,15) = 5.37, p < .05). For word–picture stimuli, the opposite pattern was observed: unilateral stimuli (854 ms) were categorized faster than bilateral ones (892 ms) (F(1,15) = 7.88, p < .01). Fig. 3 shows the response times averaged across LVF and RVF presentations for the different stimulus domains in the unilateral and bilateral conditions.

A 2 (Visual Field) × 3 (Domain) ANOVA for the response times in the unrelated trials did not reveal any main effect for VF (1154 ms in the LVF, 1147 ms in the RVF) (F < 1) or VF × Domain interaction (F(2,45) = 1.37). The main effect for Domain (F(2,45) = 64.30, p < .001) indicated that subjects responded to word–word stimuli (1522 ms) more slowly than to stimuli in the picture–picture (1006 ms) and word–picture (923 ms) domains.

6.2. Errors

Errors in positive trials were analyzed with a 2 (Stimulus Type) × 2 (Visual Field) × 2 (Condition: unilateral vs. bilateral) × 3 (Domain) ANOVA with Stimulus Type, VF, and Condition as within-subject factors
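The unilateral and bilateral means reported above for semantically related stimuli follow directly from the Table 2 cells by averaging the LVF and RVF response times; the short check below copies those cell values into a dictionary (the layout and names are our own):

```python
# Semantic-stimulus RTs (ms) from Table 2, in the order:
# (LVF unilateral, LVF bilateral, RVF unilateral, RVF bilateral)
semantic_rt = {
    "picture-picture": (944, 886, 932, 893),
    "word-word":       (1395, 1308, 1296, 1253),
    "word-picture":    (874, 908, 835, 875),
}

for domain, (lvf_uni, lvf_bi, rvf_uni, rvf_bi) in semantic_rt.items():
    uni = (lvf_uni + rvf_uni) / 2  # unilateral mean across visual fields
    bi = (lvf_bi + rvf_bi) / 2     # bilateral mean across visual fields
    print(f"{domain}: unilateral {uni} ms, bilateral {bi} ms")
```

This reproduces the pattern in the text: a bilateral advantage for picture–picture (938 vs. 890 ms) and word–word (1346 vs. 1280 ms) pairs, and a unilateral advantage for word–picture pairs (854 vs. 892 ms), after rounding.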

and Domain as a between-subjects factor. The main effects for Stimulus Type (F(1,45) = 57.32, p < .001), VF (F(1,45) = 5.88, p < .02), and Domain (F(2,45) = 77.28, p < .001) were significant. In addition, these effects were modified by significant Stimulus Type × Condition × Domain (F(2,45) = 7.94, p < .01) and Stimulus Type × VF × Condition × Domain (F(2,45) = 3.82, p < .05) interactions. In order to reveal the sources of these interactions, separate ANOVAs were conducted on the errors for identical and semantic stimuli.

The VF × Condition × Domain ANOVA on errors for identical stimuli showed a main effect for Domain (F(2,45) = 36.70, p < .001), indicating that the error level was higher for responses to word–word stimuli than to picture–picture or word–picture stimuli. Other effects were not statistically significant (Fs < 1.85).

The VF × Condition × Domain ANOVA on errors for semantically related stimuli revealed significant main effects for VF (F(1,45) = 4.13, p < .05), Condition (F(1,45) = 4.66, p < .05), and Domain (F(2,45) = 74.28, p < .001). These results show that fewer errors were made in responses to RVF (25.5%) than to LVF (28.8%) displays, to bilateral (25.8%) than to unilateral (28.6%) displays, and to picture–picture and word–picture stimuli (16.3 and 16.9%) than to word–word stimuli (48.3%). In addition, a significant VF × Condition interaction (F(1,45) = 6.73, p < .02) showed a bilateral advantage in the LVF displays (F(1,47) = 9.69, p < .01) but not in the RVF displays (F(1,47) = 1.00). The Condition × Domain interaction (F(2,45) = 4.02, p < .05) indicated that the bilateral advantage was stronger in the word–word domain than in the other domains. Thus, the errors for semantically related stimuli seem to show a bilateral advantage predominantly for word–word stimuli in the LVF displays, although the VF × Condition × Domain interaction just fails to reach statistical significance (F(2,45) = 3.04, p < .06).
A 2 (VF) × 3 (Domain) ANOVA for errors in unrelated trials showed a significant main effect for Domain (F(2,45) = 46.23, p < .001): more errors were made in responses to word–word stimuli (40.4%) than to picture–picture (5.0%) or word–picture (5.4%) stimuli. The main effect for VF (F < 1) and the VF × Domain interaction (F(2,45) = 1.49) were not statistically significant.
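The visual-field and condition error means for semantically related stimuli can likewise be recovered from the Table 2 cells by averaging over the six relevant cells in each case; again, the dictionary below simply copies the table values and its layout is our own:

```python
# Semantic-stimulus error percentages from Table 2, in the order:
# (LVF unilateral, LVF bilateral, RVF unilateral, RVF bilateral)
semantic_err = {
    "picture-picture": (22.9, 14.6, 13.0, 17.2),
    "word-word":       (59.9, 41.7, 44.3, 47.4),
    "word-picture":    (15.6, 18.3, 15.6, 15.6),
}

cells = list(semantic_err.values())
lvf = sum(c[0] + c[1] for c in cells) / 6  # mean over all LVF cells
rvf = sum(c[2] + c[3] for c in cells) / 6  # mean over all RVF cells
uni = sum(c[0] + c[2] for c in cells) / 6  # mean over unilateral cells
bi = sum(c[1] + c[3] for c in cells) / 6   # mean over bilateral cells
print(f"LVF {lvf:.1f}%, RVF {rvf:.1f}%, unilateral {uni:.1f}%, bilateral {bi:.1f}%")
```

The computed means (about 28.8% LVF vs. 25.5% RVF, and about 28.6% unilateral vs. 25.8% bilateral) match the values reported in the text.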

7. Discussion

Fig. 3. The response times for identical and semantic stimuli in unilateral and bilateral presentation conditions as a function of the stimulus domain (picture–picture, word–word, word–picture).

As expected, the error rates for word–word stimuli were high. In spite of this, reliable effects were observed in the response time data. The results for semantically related stimuli replicate the bilateral advantages found in Experiment 1 for the word–word as well as the picture–picture stimuli. Also in line with Experiment 1, no bilateral advantages were observed for the cross-domain (word–picture) stimuli. In contrast to Experiment 1, however, the cross-domain stimuli showed a unilateral advantage, that is, unilateral word–picture stimuli were processed faster than bilateral pairs. For identical stimuli, Experiment 2 showed a unilateral advantage, which did not interact with the presentation domain of the stimuli.

In general, it seems that a unilateral advantage is more probable with the three-item display (Experiment 2) than with the two-item display (Experiment 1). A possible cause of the difference might be that the integration of information between the hemispheres is more disrupted when processing the three-item display. In the two-item display, both hemispheres receive one stimulus, making interhemispheric integration simpler than in the three-item condition, wherein one of the hemispheres is presented with two stimuli. Alternatively, it is possible that in processing the three-item display, the subjects tend to match the probe (lower stimulus) to the stimulus in the same visual field/hemisphere first and then to the stimulus in the opposite visual field/hemisphere. Even if this kind of scanning bias were real, a bilateral advantage would still provide evidence for a genuine interhemispheric processing advantage. Such a bias would, however, underestimate the magnitude of the interhemispheric advantage and overestimate the unilateral advantage.

The semantic categories used in Experiment 2 were different from those used in Experiment 1. Experiment 1 used the general categories of objects and animals, whereas in Experiment 2, the semantically related stimuli were drawn from more clearly defined categories, such as bird and furniture.
Although the stimuli within the animal and object categories in Experiment 1 may have differed more in size or in level of categorization than those used in Experiment 2, the results from the two experiments together suggest that the bilateral advantage for within-domain stimuli generalizes across different types of categories and stimulus displays.

8. General discussion

Both experiments replicated the earlier finding that categorization of semantically related pictures is more efficient in bilateral than in unilateral presentations (Koivisto, 2000). In other words, categorization response times were shorter when each hemisphere processed one of the two pictures than when one hemisphere processed both pictures. This bilateral advantage was not observed when two copies of the same picture were presented. Such stimuli can be categorized on the basis of their physical shape, and no higher-level semantic processing is necessary. This pattern of results supports the view


that it is advantageous to divide the processing between the hemispheres when the task is computationally complex (Banich & Belger, 1990; Belger & Banich, 1992).

Both experiments also showed a bilateral advantage in the categorization of semantically related words. In Experiment 1, however, the advantage was similar for all types of stimuli, that is, for identical stimuli (two copies of the same written word) and for semantically related or unrelated stimuli. Thus, categorization was easier when each hemisphere received one word than when one hemisphere received both words. Importantly, bilateral word trials, as well as bilateral picture trials, were processed faster than unilateral LVF or RVF trials. In this sense the bilateral advantages observed for the picture and word domains were genuine: dividing the input between the hemispheres resulted in more efficient processing than restricting the input to the left or right hemisphere alone.

For word–picture pairs, we did not observe any genuine bilateral advantages. In Experiment 2, a within-field advantage was observed: response times were faster in unilateral displays than in bilateral displays. In Experiment 1, bilateral word–picture pairs were never processed faster or more accurately than the pairs in the RVF. In other words, dividing the stimuli between the hemispheres was not advantageous compared to presenting all the stimuli to the left hemisphere. Bilateral word–picture trials were, however, processed more efficiently than LVF trials. Thus, when one hemisphere was processing a word and the other one was processing a picture in Experiment 1, performance was more efficient than when the right hemisphere was processing both stimuli, but no more efficient than when the left hemisphere alone processed both stimuli.
In addition, subjects responded faster to the word–picture pairs in the RVF than to those in the LVF, suggesting that the left hemisphere is more efficient than the right hemisphere in categorizing word–picture pairs. In sum, the results for the semantically related stimuli from the two experiments are consistent in showing a bilateral advantage for the picture–picture and word–word domains and no such effect in the cross-domain (word–picture) condition.

Pictures and words use different routes to access a common, domain-independent semantic network (for reviews, see Klimesch, 1994; Te Linde, 1982). The present data suggest that the bilateral advantage in semantic categorization does not emerge at the stage of higher-level semantic processing that is independent of the input domain. If it emerged at such a level, one might have expected that the magnitude of the bilateral advantage would increase (or the unilateral advantage decrease) with the difficulty of the semantic categorization process. Comparing the identical and the semantically related stimuli


in the word–picture domain tests this hypothesis, as the identical and semantically related pairs differ in the complexity of the semantic category comparison but not in the complexity of accessing the meanings of the individual stimuli. Neither Experiment 1 nor Experiment 2, however, showed any interaction between stimulus type and the magnitude of the bilateral/unilateral advantage for the word–picture stimuli. The assumption that the bilateral advantage originates after meaning access has taken place at the domain-independent semantic level was thus not supported.

On the other hand, the fact that identical stimuli in the picture–picture and word–word domains in Experiment 2 did not show any bilateral advantages suggests that the advantage does not arise at the early levels of visual analysis. Thus, the bilateral advantage in semantic categorization tasks is likely to arise during meaning access, between early visual analysis and the integration of the accessed meanings. Recent brain imaging studies show activations due to amodal semantic processing predominantly in the left middle temporal gyrus (Cabeza & Nyberg, 2000).

Our results suggest that the benefits of bilateral processing are at least to some extent dependent on the degree of hemispheric specialization. Experiment 2 revealed a general RVF/left hemisphere advantage for semantically related stimuli in both response speed and accuracy, suggesting that semantic categorization is more strongly lateralized to the left than to the right hemisphere. The finding that error rates showed a bilateral advantage only in the LVF displays, particularly in the word–word domain, suggests that it was more beneficial for the less efficient right hemisphere to share the processing with the left hemisphere than vice versa.
In Experiment 1, it was also observed that hemispheric specialization had an effect on interhemispheric processing: bilateral word–picture trials in which the word was displayed directly to the language-dominant left hemisphere were processed faster than bilateral trials in which the word was directed to the right hemisphere.

How can one explain the absence of the bilateral advantage in the word–picture domain but not in the word–word or picture–picture domains? Typically, the bilateral advantage is explained by assuming that interhemispheric processing allows the hemispheres to work in parallel and to perform two operations simultaneously (Banich, 1995a, 1995b; Banich & Belger, 1990; Belger & Banich, 1992; Liederman, 1998). When one hemisphere receives two stimuli from the same stimulus domain (two pictures or two words) at the same time, the two stimuli place a processing load on the same brain areas or processing system within that hemisphere, so that their meanings have to be accessed for conscious decision making in a serial manner. In such a case, dividing the processing of the stimuli between the hemispheres helps by allowing the use of two parallel access procedures, one in each hemisphere.

It can be suggested that cross-domain stimulus pairs in unilateral displays are not processed in a serial manner. Many cognitive models assume that words and pictures use different routes to access a common semantic representation (Potter et al., 1984; Snodgrass, 1984; Te Linde, 1982). Similarly, recent functional neuroimaging studies suggest that the processing of words and pictures engages both domain-independent areas and areas that are specific to each domain (Chee et al., 2000; Moore & Price, 1999). Thus, it may be possible for the word and picture access routes to be simultaneously activated within a single hemisphere. In this way, the processing of cross-domain stimuli would be distributed across a larger neural space within one hemisphere than the processing of multiple stimuli from the same domain. This would explain why little or no benefit is gained from dividing the processing of cross-domain stimuli between the hemispheres.
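The serial-vs.-parallel account sketched above can be expressed as a toy timing model. This is our own illustrative sketch, not the authors' formal model; the 300 ms access constant and the function name are arbitrary assumptions:

```python
def toy_access_time(domain_a, domain_b, same_hemisphere, t_access=300):
    """Illustrative rule: two stimuli sharing one access route within a
    single hemisphere are accessed serially (times add); different routes
    (word vs. picture) or different hemispheres allow parallel access."""
    same_route = domain_a == domain_b
    if same_hemisphere and same_route:
        return 2 * t_access  # serial access through the shared route
    return t_access          # parallel access through separate resources

# Within-domain pairs: unilateral is slower, so a bilateral advantage is predicted
assert toy_access_time("word", "word", same_hemisphere=True) > \
       toy_access_time("word", "word", same_hemisphere=False)
# Cross-domain pairs: both routes already run in parallel within one
# hemisphere, so no bilateral advantage is predicted
assert toy_access_time("word", "picture", same_hemisphere=True) == \
       toy_access_time("word", "picture", same_hemisphere=False)
```

The model captures only the qualitative prediction (where a bilateral advantage should and should not appear), not the observed response-time magnitudes.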

Acknowledgments

This study was financially supported by the Academy of Finland (Project Numbers 36106, 45704, and 47238) and the Jenny and Antti Wihuri Foundation.

References

Banich, M. T. (1995a). Interhemispheric interaction: Mechanisms of unified processing. In F. L. Kitterle (Ed.), Hemispheric communication: Mechanisms and models (pp. 271–300). Hillsdale, NJ: Erlbaum.
Banich, M. T. (1995b). Interhemispheric processing: Theoretical considerations and empirical approaches. In R. J. Davidson & K. Hugdahl (Eds.), Brain asymmetry (pp. 427–450). Cambridge, MA: MIT Press.
Banich, M. T., & Belger, A. (1990). Interhemispheric interaction: How do the hemispheres divide and conquer a task? Cortex, 26, 77–94.
Beeman, M. (1993). Semantic processing in the right hemisphere may contribute to drawing inferences from discourse. Brain and Language, 44, 80–120.
Belger, A., & Banich, M. T. (1992). Interhemispheric interaction affected by computational complexity. Neuropsychologia, 30, 923–931.
Burgess, C., & Chiarello, C. (1996). Neurocognitive mechanisms underlying metaphor comprehension and other figurative language. Metaphor and Symbolic Activity, 11, 67–84.
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12, 1–47.
Chee, M. W. L., Weekes, B., Lee, K. M., Soon, C. S., Schreiber, A., Hoon, J. J., & Chee, M. (2000). Overlap and dissociation of semantic processing of Chinese characters, English words, and pictures: Evidence from fMRI. Neuroimage, 12, 392–403.
Chiarello, C. (1991). Interpretation of word meanings by the cerebral hemispheres: One is not enough. In P. Schwanenflugel (Ed.), The psychology of word meanings (pp. 251–278). Hillsdale, NJ: Erlbaum.
Hagoort, P., Brown, C. M., & Swaap, T. Y. (1996). Lexical semantic event-related potential effects in patients with left hemisphere lesions and aphasia, and patients with right hemisphere lesions without aphasia. Brain, 119, 627–649.
Hellige, J. B. (1993). Hemispheric asymmetry: What's right and what's left. Cambridge, MA: Harvard University Press.
Klimesch, W. (1994). The structure of long-term memory: A connectivity model of semantic processing. Hillsdale, NJ: Erlbaum.
Koivisto, M. (2000). Interhemispheric interaction in semantic categorization of pictures. Cognitive Brain Research, 9, 45–51.
Liederman, J. (1998). The dynamics of interhemispheric collaboration and hemispheric control. Brain and Cognition, 36, 193–208.
Merola, J. L., & Liederman, J. (1990). The effect of task difficulty upon the extent to which performance benefits from between-hemisphere division of inputs. International Journal of Neuroscience, 54, 35–44.
Moore, C. J., & Price, C. J. (1999). Three distinct ventral occipitotemporal regions for reading and object naming. Neuroimage, 10, 181–192.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.
Potter, M. C., So, K., Eckardt, B., & Feldman, L. B. (1984). Lexical and conceptual representation in beginning and proficient bilinguals. Journal of Verbal Learning and Verbal Behavior, 23, 23–38.
Revonsuo, A., & Portin, R. (1998). CogniSpeed2: Picture experiment generator. Turku: University of Turku.
Snodgrass, J. G. (1984). Concepts and their surface representations. Journal of Verbal Learning and Verbal Behavior, 23, 3–22.
Snodgrass, J. G., & Vanderwart, M. A. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215.
Te Linde, D. J. (1982). Picture–word differences in decision strategy: A test of common-coding assumptions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 584–598.
Zhang, W., & Feng, L. (1999). Interhemispheric interaction affected by identification of Chinese characters. Brain and Cognition, 39, 93–99.