BRAIN AND LANGUAGE 40, 89-105 (1991)
Rhyming and the Right Hemisphere

JAN RAYMAN AND ERAN ZAIDEL

University of California, Los Angeles
Subjects determined whether two successively presented and orthographically different words rhymed with each other. The first word was presented at fixation and the second was presented either to the left or to the right of fixation, either alone (unilateral presentation) or accompanied by a distractor word in the other visual hemifield (bilateral presentation). Subjects were more accurate when the words did not rhyme, when presentation was unilateral, and when the target was flashed to the right visual hemifield. It was predicted that bilateral presentation would produce interference when information from both visual fields was processed by one hemisphere (callosal relay), but not when each of the two hemispheres performed a task independently (direct access). That is, callosal relay tasks should show greater laterality effects with bilateral presentations, whereas direct access tasks should show similar laterality effects with both bilateral and unilateral presentations. Greater laterality effects were observed for bilaterally presented rhyming words, but nonrhyming words showed similar laterality effects for both bilateral and unilateral presentations. These results suggest that judgment of nonrhyming words can be performed by either hemisphere, but that judgment of rhyming words requires callosal relay to the left hemisphere. The absence of a visual field difference with nonrhyming word pairs suggests further that judgment of nonrhyming word pairs may be accomplished by the right hemisphere when presentation is to the left visual field. © 1991 Academic Press, Inc.
This research was supported by an ADAMHA NRSA MH15795-08, by NIH Grant NS-20187, and by NIMH RSDA MH-00179. We thank Steve Hunt and David Kaiser for technical assistance. Address correspondence and reprint requests to Eran Zaidel, Department of Psychology, University of California, Los Angeles, CA 90024-1563. Bitnet: RYZAIDE@UCLASSCF.

INTRODUCTION

Linguistic tasks involving tachistoscopically presented stimuli generally produce lower reaction times and error rates when presentation is to the right visual field (RVF). It is now generally accepted that at least some of these right visual field advantages (RVFAs) are due to differences between the left and right hemispheres, but their precise interpretation
is still somewhat controversial. A callosal relay interpretation explains poorer left visual field (LVF) performance as being due to the time and information loss resulting from interhemispheric transfer to the left hemisphere (LH) via the corpus callosum (Moscovitch, 1976). A direct access interpretation of an observed VF difference assumes that many tasks or processes can be performed by either hemisphere, but not necessarily equally well. According to a direct access interpretation, whichever hemisphere first receives the stimulus will perform the task. Thus, VF differences are attributed to the differential processing efficiencies of the two hemispheres (Zaidel, 1980, 1983; Zaidel, Clarke, & Suyenobu, 1990).

Visual field advantages in tachistoscopic tasks are often magnified when bilateral presentation is used. That is, when subjects are presented with a different stimulus in each visual field, but are cued to respond to only one of them, visual field differences in performance tend to increase (McKeever, 1971; Boles, 1979, 1983, 1987; Seitz & McKeever, 1984). This effect of bilateral presentation, or the "bilateral effect," has been observed for both verbal and nonverbal stimuli, and there is evidence indicating that it is not simply due to increased task complexity or difficulty, because it still occurs when one holds the amount of information in the display constant and simply manipulates its distribution across the two visual fields (Boles, 1979).
Boles (1979) has proposed that some form of hemispheric interaction may be responsible for the bilateral effect, which may involve “inhibitory influences between hemispheres, disruption of interhemispheric communication, and competition between the hemispheres for a common orienting mechanism.” One such hemispheric interference interpretation of the increased VFAs observed with bilateral presentation (Boles, 1990) assumes a callosal relay model, with bilateral presentation activating homologous areas in the two hemispheres and disrupting interhemispheric communication between them. This, and many other hypotheses involving interaction between the hemispheres, leads to our prediction of greater VF asymmetries under bilateral presentation for those tasks requiring information to be relayed across the corpus callosum, perhaps even forcing processing by direct access, but similar bilateral and unilateral field asymmetries for tasks that do not require callosal relay, that is, direct access tasks. The present experiment began as an attempt to compare the effects of unilateral and bilateral presentation upon rhyme judgment, a task we believed would require callosal relay to the LH. Results were to be compared to those obtained from a task believed to be direct access, such as lexical decision of short concrete words of high frequency. If the callosal relay task showed the bilateral effect and the direct access task did not, this would be evidence for the callosal relay interference explanation of the bilateral effect. Bilateral presentation could then be used in future
experiments as a decision procedure for helping to determine whether a given VFA was due to direct access or callosal relay.

There are a number of reasons to believe that rhyme judgment of orthographically different printed words is a callosal relay task which can be performed only by the left hemisphere. This task presumably requires the ability to compare the phonological features of words. These phonological features may be addressed when a word is retrieved from the lexicon (Seidenberg & Tanenhaus, 1979) or derived directly (assembled) from the orthography. Regardless of how this phonological information about printed words is obtained, evidence suggests that only the LH is capable of making the type of phonological comparison required for rhyme judgments of printed words (Coltheart, 1980, 1983; Zaidel, 1978; Zaidel & Peters, 1981). There is evidence, however, that the RH may be able to perform rhyme judgment when information is presented acoustically or pictorially (Zaidel & Peters, 1981; Zecker, Tanenhaus, Alderman, & Siqueland, 1986). If the RH can perform some rhyme judgments, it probably does so in a manner different from that of the LH, because phonetic processing in the RH is impoverished. For example, Cohen and Freeman (1978) found RVF-LH lexical decision performance to be more sensitive to phonological aspects of stimuli than LVF-RH performance. Still more evidence for the nonphonetic nature of lexical access in the RH comes from the differential effects of orthographically similar and phonologically similar primes in lexical decision (Chiarello, 1985) and spelling irregularity in word identification performance (Parkin & West, 1985) on LVF-RH and RVF-LH performance. Furthermore, although there is evidence that the RHs of normal subjects and of commissurotomy patients are capable of some reading, it appears that the RH reads in a manner which is different from that of the LH.
For example, LVF-RH reading may be particularly sensitive to word length (Young & Ellis, 1985; Ellis, Young, & Anderson, 1988) and is ideographic and nonphonetic (Zaidel, 1978; Zaidel & Peters, 1981; Schweiger, Zaidel, Field, & Dobkin, 1989), perhaps using graphemes to access its lexicon (Young & Ellis, 1985; Eviatar & Zaidel, 1990). This RH reading appears to be different from the way the LH is believed to read: The LH seems capable of lexical access through grapheme-to-phoneme translation as well as by retrieving phonological codes from the lexical entry. The effect of articulatory suppression on rhyme judgment (Besner, Davies, & Daniels, 1981; Wilding & White, 1985; Johnson & McDermott, 1986) attests to the important role phonological codes play in rhyme judgment performance. That orthographic codes are also involved in rhyme judgment is argued by another well-replicated finding: Rhyme judgment is made more difficult when orthographic and phonological cues conflict, such as when words rhyme but are spelled differently, or have
visually identical end spellings, but are pronounced differently and, therefore, do not rhyme (Seidenberg & Tanenhaus, 1979; Donnenwerth-Nolan, Tanenhaus, & Seidenberg, 1981; Polich, McCarthy, Wang, & Donchin, 1983; Rugg, 1984a; Rugg & Barrett, 1987; Crossman & Polich, 1988). As in reading, orthographic and phonological analyses are thought to proceed in parallel in rhyme judgment (Polich et al., 1983; Rugg, 1984a), but for orthographically dissimilar rhymes, phonological analysis is presumed to be necessary. When making rhyme judgments subjects may simply compare the ends of words for phonetic similarity, but they may also try to predict the rhyming word (Donnenwerth-Nolan et al., 1981; Rugg, 1984a). It has been suggested that performance is poor with orthographically dissimilar rhymes because subjects' predictions of rhyming words are less accurate with these types of stimuli, but Donnenwerth-Nolan et al. (1981) equated orthographically similar and dissimilar rhyming word pairs for rhyme production frequency and still found orthographically similar rhymes to be detected more rapidly than orthographically dissimilar rhymes. Supporting the prediction hypothesis are results from Rugg and Barrett (Rugg, 1984a, 1984b; Rugg & Barrett, 1987), who examined evoked potentials and found a negative N450 component which occurred only for nonrhyming trials and was especially strong when orthographic and phonological codes were not congruent. This N450 component is believed to be analogous to the N400 component originally reported by Kutas and Hillyard (1980), which occurs under conditions of semantic incongruity or unexpectedness. Rugg and Barrett (1987) explain their results in terms of disconfirmation of subjects' expectations. That is, the N450 may occur when subjects generate a word which rhymes with the first word and predict that it will appear, but then find that their hypothesis is not confirmed.
Interestingly, the N450 component occurs over sites on the right side of the head. However, Rugg and Barrett (1987, p. 357) suggest that "It could be that these processes are carried out entirely in the left hemisphere, and that information about level of congruity between word pairs is then transmitted to some right-hemisphere system, the operation of which is reflected by N450." Because of findings suggesting the important role of phonological analysis in rhyme judgment, it seemed safe to assume that, although the processes involved in rhyme judgment were not entirely understood or restricted to the LH, rhyme judgment of orthographically dissimilar printed words presented to the LVF would have to be relayed to the LH for the phonological analysis presumed to be necessary for rhyme determination, thereby making rhyme judgment a callosal relay task.

METHODS

Subjects. Forty undergraduate UCLA students participated in this experiment (19 females and 21 males) in order to fulfill an introductory psychology course requirement. All subjects
reported normal or corrected-to-normal vision in both eyes and neither evidence of neurological insult nor exposure to a language other than English before the age of six. All subjects were strongly right handed, with no left-handed first-degree relatives.

Apparatus. Subjects were seated approximately 30.5 cm from an Amdek VIDEO-310A CRT monitor of an IBM AT compatible computer (Computer Products United, Turbo Model) with their chins in a chinrest, right feet resting on a footpedal on the floor, and the index and middle fingers of both hands poised on the keys of a vertically aligned response panel placed at midline. A computer software package (CT) was used to present stimuli and to record responses. This package, written in the C language by Steve Hunt, used the 8253 clock on the PC's motherboard.

Procedure. Subjects started each trial by pressing a footpedal which caused a single word to appear on the screen. They were instructed to read this "standard word" carefully, concentrating on how it sounded, and to fix its sound in their mind before pressing the footpedal once more. The standard word was then removed from the screen, and a warning tone sounded, followed by a fixation cross, presented for 1 sec. Two hundred milliseconds after termination of the fixation cross, a "test word" appeared for 100 ms in either the left or the right VF. The subject's task was to decide whether this test word rhymed with the standard word. Half of the test words rhymed with the standard word and half did not, but all standard-word/test-word pairs were orthographically dissimilar (i.e., they had different end spellings). In the unilateral presentation condition, the test word appeared in either the left or the right VF and nothing appeared in the other VF. In the bilateral presentation condition, the test word appeared as before and a nonrhyming distractor word was presented in the other VF. An arrow at central fixation told the subject which word was the test word.
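The trial sequence just described can be summarized in a short sketch (durations are taken from the text; the event names and structure are ours, not the authors'):

```python
# Sketch of one trial's event sequence as described above.
# Durations are in milliseconds; None marks subject-paced events.
TRIAL_EVENTS = [
    ("standard word at fixation", None),   # read until second footpedal press
    ("warning tone", None),
    ("fixation cross", 1000),
    ("blank interval", 200),
    ("test word in LVF or RVF", 100),      # distractor in opposite VF on bilateral trials
    ("rhyme / no-rhyme key press", None),
]

# Fixed lead time between fixation-cross onset and test-word onset:
lead_time = sum(d for _, d in TRIAL_EVENTS[2:4])
print(lead_time)  # 1200
```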
Subjects indicated with key presses whether the test word rhymed with the previously presented standard word. They were instructed to respond with both hands simultaneously. In the "Yes-up" condition, "yes" responses were signaled by subjects' using their index fingers to press the two upper keys and "no" responses were indicated by subjects' pressing the two lower response keys with their middle fingers. In the "Yes-down" condition the meaning of the upper and lower response keys was reversed, with the two lower keys now indicating a yes response. For each trial, reaction times for both the left and the right hand were recorded. About half of the subjects served in the Yes-up condition and half in the Yes-down condition. Each subject participated in a practice session of 32 trials and then received two blocks of 40 test trials, with the two blocks matched for word frequency of test words and levels of within-subject independent variables.

Stimulus materials. Stimuli were one-syllable English words of relatively high frequency, as measured per million words by Francis and Kucera (1982). Standard words were three to six letters long, with an average word frequency of 303. Test words and distractor words were all three letters long, with average word frequencies of 276 and 299, respectively. Every rhyming standard-word/test-word pair had a yoked nonrhyming standard-word/test-word pair which was matched to it for test-word frequency. For example, the rhyming standard-word/test-word pair "STONE/OWN" was yoked to the nonrhyming word pair "DRAIN/HOW." All word pairs were pretested under untimed conditions to ensure consensual agreement among at least 15 of 16 native speakers of English as to whether they rhymed. Stimuli were presented horizontally in uppercase. Test words were presented in either the left or the right VF, their innermost edge 1.4° from fixation and also subtending 1.4° of horizontal and 0.8° of vertical visual angle.

Design.
The within-subject factors of left and right VF, unilateral and bilateral presentation mode, and rhyming and nonrhyming standard-word/test-word pairs were orthogonally combined, so that each subject received a total of 80 trials, 10 trials from each of the eight combinations of these factors. Word frequency and other effects associated with particular standard-word/test-word pairs
were balanced between subjects by the creation of four stimulus lists. Every standard-word/test-word pair and its yoked nonrhyming pair appeared once in each stimulus list in one of the four combinations of left and right VFs and unilateral and bilateral presentation conditions. That is, each stimulus list contained an equal number of rhyming and nonrhyming word pairs presented unilaterally and bilaterally in the left and right VFs, but each particular word pair appeared once in each list in one of the four VF and presentation mode conditions. Rhyming word pair 1, for example, appeared in List A as unilateral LVF, in List B as unilateral RVF, in List C as bilateral LVF, and in List D as bilateral RVF. Each subject received one of the four stimulus lists with items presented in a different random order. Each list was divided into two blocks of 40 trials equivalent in test-word frequency and experimental conditions. Between-subject variables were gender, response orientation (Yes-up or Yes-down), and stimulus list (A, B, C, D). Sixteen subjects are required to perfectly counterbalance these three factors.
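The four-list rotation just described amounts to a Latin square; a minimal sketch (the list and condition labels follow the text, but the indexing scheme is our own assumption, not the authors' actual assignment):

```python
from collections import Counter

# Hypothetical sketch of the Latin-square assignment: each word pair
# appears once per list, cycling through the four VF x presentation-mode
# conditions across lists A-D.
CONDITIONS = ("unilateral LVF", "unilateral RVF", "bilateral LVF", "bilateral RVF")

def build_lists(n_pairs=40):
    lists = {name: [] for name in "ABCD"}
    for pair in range(n_pairs):
        for offset, name in enumerate("ABCD"):
            lists[name].append((pair, CONDITIONS[(pair + offset) % 4]))
    return lists

lists = build_lists()
# Each list is balanced: 10 pairs per condition with 40 pairs per list.
counts = Counter(cond for _, cond in lists["A"])
```

Across the four lists, any given pair occupies each of the four conditions exactly once, which is what lets item effects be separated from VF and presentation-mode effects.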
RESULTS

Conventional Analyses
Dependent variables. Accuracy was measured by the percentage of trials
correct, corrected for guessing and response bias by the classical correction method given in Kling and Riggs (1972). In this procedure, the proportion of false alarms is subtracted from the proportion of hits and that difference is divided by one minus the proportion of false alarms. The classical method was used because it corrects for response bias but, unlike signal detection analysis, it allows one to obtain measures of accuracy separately for rhyming and nonrhyming trials. This correction for bias was first applied while regarding rhymes as targets and then also applied while regarding nonrhymes as targets, so that both yes and no responses were adjusted for bias. Nearly identical results were obtained by using uncorrected accuracy data, the average raw and adjusted accuracy values being 92.5 and 92.0% correct, respectively. Response latencies for correct trials were calculated by averaging response times for the two hands. The average response latency on correct trials was 1009 ms.

Statistical analyses. Preliminary analyses of variance (ANOVA) were performed using 32 subjects (two replications) for each dependent variable, with repeated measures and with subjects nested within gender, response orientation, and counterbalance list. In no analysis were the main effects or interactions involving gender and stimulus list ever significant. In order to obtain more power, these counterbalancing constraints, therefore, were relaxed, which allowed the addition of 8 more subjects. Analyses reported below employed a 2 × 2 × 2 × 2 (Response Orientation × VF × Presentation Mode × Rhyme Class) ANOVA, with response orientation varied between subjects. Analyses were performed using the full sample of 40 subjects and, because of problems with possible ceiling effects on accuracy, on a reduced sample of those 26 subjects who each scored below 95%. Results reported are for the full sample unless otherwise indicated. Results are reported by effect, rather than by dependent variable.
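The classical correction described here reduces to a one-line formula; a minimal sketch (the function name is ours):

```python
def corrected_accuracy(hits, false_alarms):
    """Classical correction for guessing (Kling & Riggs, 1972):
    (H - FA) / (1 - FA), with H and FA expressed as proportions."""
    return (hits - false_alarms) / (1.0 - false_alarms)

# e.g., 90% hits against a 20% false-alarm rate:
print(round(corrected_accuracy(0.90, 0.20), 3))  # 0.875
```

Applying it once with rhymes as targets and once with nonrhymes as targets, as the text describes, adjusts both yes and no responses for bias.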
FIG. 1. Adjusted percentage correct as a function of visual field and mode of presentation. (Upper panel: all subjects; lower panel: 26 subjects scoring below 95%.)
VF effects. Subjects' responses were more accurate when stimuli were presented in the RVF (94.1% correct) than when stimuli were presented in the LVF (89.8% correct), F(1, 38) = 16.32, p < .001.

Presentation mode. Mode of presentation had a significant effect on response accuracy, F(1, 38) = 13.83, p < .001, and latency, F(1, 38) = 240.17, p < .001, with unilateral presentation producing faster and more accurate responses (832 ms, 93.9% correct) than bilateral presentation (1090 ms, 90.0% correct). A significant presentation mode by VF interaction was obtained for accuracy, F(1, 38) = 5.14, p = .028, and is shown in the upper portion of Fig. 1. Post hoc Scheffé tests revealed that the RVFA was significant only for bilaterally presented trials, F'(3, 114) = 10.04, p < .05, and mode of presentation was significant only in the LVF, F'(3, 114) = 8.94, p < .05. The lower portion of Fig. 1 displays results obtained from the reduced sample of 26 subjects scoring below 95%. Analysis of the reduced sample produced a marginally significant interaction, F(1, 24) = 3.284, p = .079, of essentially the same form as that produced from the full sample, suggesting that this interaction is not due to ceiling effects.
FIG. 2. Adjusted percentage correct as a function of visual field and rhyme class. (Upper panel: all subjects; lower panel: 26 subjects scoring below 95%.)
Rhyming and nonrhyming trials. Subjects performed more accurately on nonrhyming trials (95.1%) than on rhyming trials (88.8%), F(1, 38) = 29.36, p < .001, and this result was not due to a tendency to give no responses, because the data were corrected for response bias. Response latencies for rhyming and nonrhyming trials did not differ. The RVFA in accuracy was stronger for rhyming than for nonrhyming word pairs, as indicated by a significant rhyme class by VF interaction, F(1, 38) = 5.55, p = .022, shown in the upper portion of Fig. 2. Scheffé post hoc comparisons among these four means showed that the RVFA was significant only for the rhyming word pairs, F'(3, 114) = 9.66, p < .05, and that rhyming and nonrhyming trials differed significantly only when presentation was to the LVF, F'(3, 114) = 16.07, p < .01. When data from the reduced sample were analyzed the significance of this interaction dropped, F(1, 24) = 3.009, p = .092, but the shape of the interaction remained the same, as shown in the lower portion of Fig. 2. The rhyme by VF interaction regained its significance in an additional analysis using only bilateral trials from all 40 subjects, F(1, 38) = 5.456, p = .023.
FIG. 3. Adjusted percentage correct as a function of rhyme class and mode of presentation. (Upper panel: all subjects; lower panel: 26 subjects scoring below 95%.)
A significant rhyme class by mode of presentation interaction, F(1, 38) = 4.05, p = .049, was also found for the accuracy measure. Scheffé post hoc comparisons among the four means revealed that the only significant effect was between rhyming and nonrhyming trials under bilateral presentation, F'(3, 114) = 15.03, p < .01. This effect is shown for the full and reduced samples, respectively, in the upper and lower portions of Fig. 3. Analysis of the reduced sample produced results of essentially the same form, but with lower significance levels, F(1, 24) = 2.841, p = .101.

Response orientation. A significant effect of response orientation was obtained for response latency, F(1, 38) = 4.87, p = .032, with subjects' responses in the Yes-up condition (896 ms) being faster than those in the Yes-down condition (1025 ms). Response orientation interacted with VF for the accuracy variable, with stronger VF effects occurring for the Yes-up response orientation condition, F(1, 38) = 3.98, p = .05. This interaction is illustrated in Fig. 4.
FIG. 4. Adjusted percentage correct as a function of visual field and response orientation.
Item Analysis
For the subset of 32 subjects perfectly balanced for all independent variables and counterbalancing conditions, each of the 40 rhyming and 40 nonrhyming standard-word/test-word pairs was presented an equal number of times unilaterally and bilaterally in the left and right VFs. This balancing permitted the calculation of the average difficulty of each item, defined as the percentage of subjects responding incorrectly to this item. The average reaction time for correct trials was also calculated for each item. Each of these two measures was calculated separately for rhyming and nonrhyming items presented unilaterally and bilaterally to the left and right VFs and subjected to a 2 × 2 × 2 (VF × Rhyme Class × Mode) ANOVA, using the 80 items as the random variable. In order to address issues raised by Clark (1973), subjects were treated as a fixed effect and items were the sole random effect in the analysis. Results were almost identical to those obtained in previous analyses using subjects as the random variable. The one exception was that the previously significant VF by mode interaction for the accuracy measure now fell short of significance, F(1, 78) = 3.23, p = .07. In order to determine whether rhyming trials were more difficult because they had more irregular vowel pronunciations, we examined each standard-word/test-word pair for irregular vowel pronunciation according to Venezky (1970) and found that more of our rhyming word pairs (19 of 40) had one or more unusual vowel pronunciations than did our nonrhyming word pairs (2 of 40), z = 4.076, p < .0001. However, further examination of subjects' accuracy to rhyming word pairs with regular (88.7% correct) and irregular (89.5% correct) vowel pronunciation revealed that this pronunciation factor did not affect accuracy, F(1, 30) < 1, p > .5; nor was there an interaction between vowel regularity and VF, F(1, 30) = 1.417, p = .239.
These results indicate that regularity of vowel pronunciation does not account for why subjects found rhyming trials more difficult. In order to examine the role of rhyme prediction in performance of
rhyme judgment in our study, we obtained production frequencies by questionnaire, from 38 undergraduate psychology students at UCLA who were all native English speakers. Subjects were asked to generate up to three rhyming words for each of our standard words. Rhyme production frequency, defined as the number of subjects who spontaneously generated the rhyming target word, was calculated for each of the 40 rhyming standard-word/test-word pairs and ranged from 1 (2.6%) to 29 (76.3%). Correlational analysis found that rhyme production frequency did not predict item difficulty, r = .01, but it was marginally related to response latency, r = -.25, p < .06.

Signal Detection Analyses
Hit and false alarm rates were calculated separately for each subject and combination of visual field and presentation mode. Each of these two measures was analyzed by itself and used to create estimates of the signal detection parameters d' and the natural log of β, using methods presented in Hochhaus (1972) and McNichol (1972). Hits, false alarms, and the d' measure were each subjected to a 2 × 2 × 2 (Response Orientation × VF × Presentation Mode) ANOVA. The d' measure estimates subjects' sensitivity in discriminating between rhyming word pairs and nonrhyming word pairs, independent of response bias. Subjects' sensitivity, as measured by d', was greater when presentation was to the RVF (x̄ = 4.47) than when presentation was to the LVF (x̄ = 3.96), F(1, 39) = 7.20, p = .01. There was also a significant effect of presentation mode on d', F(1, 39) = 8.64, p < .006, with bilateral presentation (x̄ = 3.95) decreasing sensitivity compared to unilateral presentation (x̄ = 4.48). The interaction of VF and presentation mode was in the direction of stronger VF effects with bilateral presentation, but failed to reach significance, F(1, 39) = 2.61, p = .111. Separate analyses of hits and false alarms found that for hits, results obtained were similar to those for d', that is, significant effects of VF, F(1, 38) = 12.63, p = .001, presentation mode, F(1, 38) = 10.21, p = .003, and the interaction between them, F(1, 39) = 5.983, p = .019, but none of these effects was significant for false alarms. This pattern of results suggests that most of the variation associated with the effects of VF and presentation mode on d' were due to the effects of these independent variables on hit rates rather than on false alarm rates. In other words, these variables affected sensitivity, but not bias. The only significant effect in the analysis of false alarms was a three-way interaction between response orientation, VF, and presentation mode.
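Estimates of this kind can be computed from hit and false-alarm rates under the standard equal-variance Gaussian model; a sketch (the exact Hochhaus/McNichol tabulation procedures used in the paper may differ in detail):

```python
from statistics import NormalDist

def sdt_estimates(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection estimates:
    d' = z(H) - z(FA);  ln(beta) = (z(FA)^2 - z(H)^2) / 2.
    Positive ln(beta) indicates a bias toward 'no' responses."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, (zf ** 2 - zh ** 2) / 2.0

# e.g., 85% hits with 2% false alarms: high sensitivity combined with
# a conservative (toward-'no') criterion.
d_prime, ln_beta = sdt_estimates(0.85, 0.02)
```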
When presentation was bilateral, there were more false alarms in the RVF in the Yes-down condition and more false alarms in the LVF in the Yes-up condition. Response bias was estimated for each VF by log β. When there is no bias, log β will have a value of zero. When there is a bias toward yes
(rhyme) responses, log β will be negative, and when the bias is toward no (nonrhyme) responses, log β will have a positive value. Because a large number of responses is necessary for the β measure, it was not calculated separately by mode of presentation. Both the LVF (x̄ = .679, t = 6.03, p < .005) and the RVF (x̄ = .413, t = 3.56, p < .005) showed a significant bias toward nonrhyme responses. Although the LVF showed a somewhat stronger bias toward nonrhyme responses, it failed to differ significantly, F(1, 39) = 2.793, p = .099, from that of the RVF measure.

DISCUSSION
Recently, Crossman and Polich (1988) failed to find a significant RVFA in a lateralized rhyme judgment task and attributed this finding to the tendency for males to make a different pattern of errors from females. Our results are quite different from theirs: We found a strong RVFA for rhyme judgment accuracy, and the only interaction involving gender which even approached significance was the tendency for males to show a slightly stronger RVFA in rhyme judgment accuracy, p = .172. The RVFA observed in our experiment appears to be due to greater RVF sensitivity rather than to bias, because both the accuracy and the d' measures were adjusted for bias and they still showed a significant RVFA. Our finding that subjects' accuracy was greater for nonrhyming word pairs than for rhyming ones is consistent with what others have found using orthographically dissimilar words (Polich et al., 1983; Rugg, 1984a, 1984b; Johnson & McDermott, 1986; Kramer & Donchin, 1987; Rugg & Barrett, 1987; Crossman & Polich, 1988), but why orthographically different rhyming word pairs are more difficult than orthographically different nonrhyming word pairs remains unknown. Our failure to find an effect of unusual vowel pronunciations on item difficulty suggests that the difference between rhyming and nonrhyming trials was not due to differences in vowel regularity. Orthographic dissimilarity may produce a bias in subjects to respond "no rhyme," and the β measure from our signal detection analysis clearly showed that there was a significant bias in this direction, but the accuracy measure used to compare rhymes and nonrhymes in this experiment was adjusted for differences in bias. Another hypothesis is that subjects' predictions of rhyming words are less accurate with orthographically dissimilar words and that subjects may have had a tendency to respond no when their prediction was wrong.
We did find a weak effect of rhyme predictability (as measured by rhyme production frequency) on reaction time within rhyming pairs. At first glance, two results in this experiment appear to point to opposite interpretations of the rhyme judgment task. The rhyme by visual field interaction suggests that rhyme judgment is a direct access task while the bilateral effect indicates that rhyme judgment is callosal relay. One interpretation consistent with both of these results is that judgment of most
rhyming trials requires callosal relay, but that judgment of most nonrhyming trials does not. That is, when presentation is to the LVF and the word pair does not rhyme, the RH is somehow able to determine that the words are too different to rhyme, without consulting the LH. When the words rhyme, however, all the RH is able to do is to determine that the words are not so different as to rule out a rhyme, and it must consult the LH for rhyme confirmation. As outlined by Zaidel (1983), a callosal relay model predicts that, because the same hemisphere will perform the task regardless of VF of presentation, there should be no interaction between VF and other independent variables, such as rhyme class. Much to our surprise, we obtained a significant interaction between VF and rhyme class. This interaction is difficult to explain with a callosal relay model, because one would have to assume that for some reason rhyming test words cross the corpus callosum less efficiently than nonrhyming test words. This possibility seems extremely unlikely given that rhyming and nonrhyming test words were carefully matched for word length and frequency and that regularity of vowel pronunciation did not affect accuracy. A more likely interpretation of the rhyme class by VF interaction is a direct access interpretation. According to a partial direct access explanation, each hemisphere is identifying the words presented to "its" VF, with nonrhyme judgment being accomplished by either hemisphere and rhyme judgment only by the LH. According to a complete direct access explanation, the greater RVFA for rhyming trials was produced by the different abilities of the left and right hemispheres to detect rhyming and nonrhyming word pairs. The left and right hemispheres appear to be equally skilled at detecting nonrhyming word pairs, but the LH-RVF seems more adept at detecting rhymes.
This interpretation is also consistent with our finding (albeit nonsignificant) of greater bias toward nonrhyme responses for the RH-LVF. The main source of variation of interest was the bilateral effect as measured by the presentation mode by VF interaction, which indicates whether the size of VF differences changes as a function of presentation mode. Our primary prediction was that bilateral presentation would lead to a greater increase (over unilateral presentation) in VF differences. This prediction was confirmed for the accuracy variable, but only for rhyming trials, suggesting that rhyme judgment of rhyming word pairs is primarily callosal relay and that judgment of nonrhyming word pairs is primarily direct access. A specific comparison among eight means revealed that the strength of the bilateral effect was significantly greater for rhyming trials, F(7, 266) = 2.11, p < .01, than for nonrhyming trials. This result was predicted specifically by our model. It is also consistent with our interpretation of the VF by rhyme class interaction, which is that more rhyming than nonrhyming trials required callosal relay to the LH. That is, if a task requires callosal relay, VFAs should increase with bilateral presentation, because resource distribution in the hemispheres is affected, but the bilateral effect should not occur with direct access tasks. Taken together, these results suggest that correct responses to rhyming and nonrhyming trials may be produced by two different processes, which may be present at differing levels of competence in the left and right hemispheres. Rhyme detection of orthographically different printed words may require phonetic comparisons for which the left hemisphere is specialized, while nonrhyme detection may be possible in either hemisphere with orthographic analysis alone. The LH may be able to perform both orthographic and phonological analysis, while the RH may be capable of only orthographic analysis. We can also speculate that the RH may have some limited information about the association between orthographic features and sounds. The end spellings of many nonrhyming standard-word/test-word pairs may be sufficiently dissimilar in these orthographic features to allow for nonrhyme detection with orthographic analysis alone. The RH may have enough knowledge of this association between orthographic features and sounds to perform as well as the LH when detecting some nonrhyming pairs, at least for the three-letter test words used in this experiment. If orthographic analysis does not indicate a clear discrepancy, then further phonetic analysis may be required. Therefore, many rhyming items presented to the LVF-RH may need to be relayed to the LH for phonetic processing. More evidence that detection of orthographically dissimilar nonrhyming word pairs may be accomplished by nonphonetic means, while confirmation of rhyming word pairs may require more extensive phonetic analysis, comes from Johnson and McDermott (1986). They failed to find an effect of articulatory suppression for orthographically dissimilar nonrhyming word pairs (in contrast to the three other types of word pairs, where articulatory suppression effects were observed).
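The two-stage process outlined above (an orthographic screen available to either hemisphere, followed by a phonetic comparison that only the LH can perform) can be sketched schematically. The suffix-overlap heuristic and the toy phonetic codas below are our own illustrative assumptions, not the authors' model or stimuli:

```python
def ending_overlap(a, b):
    """Crude orthographic similarity: length of the longest shared suffix."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

# Toy phonetic codas (illustrative only; a real model would consult a
# pronunciation lexicon).
CODA = {"pint": "aynt", "mint": "int", "hint": "int",
        "bone": "own", "moan": "own", "cat": "at"}

def judge(standard, test, hemisphere):
    """Schematic mixed direct-access/callosal-relay rhyme decision.

    Stage 1 (either hemisphere): if the spellings share no final letters,
    reject as a nonrhyme on orthography alone.
    Stage 2 (LH only): otherwise compare phonetic codas; LVF-RH trials
    must first be relayed across the callosum.
    Returns (verdict, relayed).
    """
    if ending_overlap(standard, test) == 0:
        return "nonrhyme", False            # direct access in either VF
    relayed = hemisphere == "RH"            # callosal relay to the LH
    verdict = "rhyme" if CODA[standard] == CODA[test] else "nonrhyme"
    return verdict, relayed

# pint/mint share the ending -int but do not rhyme, so the orthographic
# screen is inconclusive and phonetic analysis (with relay from the RH)
# is needed; bone/moan rhyme despite dissimilar endings, so the screen
# wrongly rejects them, reproducing a "miss".
```

Note that this sketch also reproduces the error asymmetry discussed below: orthographically dissimilar rhyming pairs (e.g., the hypothetical bone/moan) are wrongly rejected at the orthographic stage, yielding more misses than false alarms.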
The model as stated above would predict that nonrhyme detection requires less time than rhyme detection, because on many trials it can be performed using orthographic analysis alone, while rhyme detection requires additional phonetic analysis or access to phonetic information. This prediction was not confirmed in our data: We found no significant difference between the reaction times of correct responses to rhyming (979 ms) and nonrhyming (1006 ms) trials. One explanation of why rhyme detection did not require more time than nonrhyme detection is that subjects often try to predict rhyming words. Response latencies to rhyming trials where a correct prediction was made should be very short, and this process could lower subjects' average reaction times to rhyming words. The marginally significant (p < .06) effect of normative rhyme production frequency on response latency does suggest that on some occasions subjects attempt to predict the rhyming word and that this process should be included in our model of rhyme judgment. It also suggests that differences among the studies cited above could be due in part to differences in the proportion of rhyming trials correctly predicted by subjects. In specifying our model in more detail, we can suggest that on some trials subjects correctly predict what the test word will be and that this information is available to both hemispheres. Thus, correctly predicted rhyming test items are processed in a direct access fashion. One can speculate further and suggest that when test items are presented to the RVF-LH, orthographic and phonetic processes may proceed in parallel, in a manner similar to what many believe takes place in normal reading. When test items are presented to the LVF-RH, only orthographic analysis is available and, if this analysis is not sufficient to eliminate the possibility of rhyming, some representation of the test word must be relayed to the LH for phonetic processing. Thus, we are proposing a mixed direct access/callosal relay model for rhyme judgment of orthographically dissimilar printed word pairs. That is, nonrhyming trials and correctly predicted rhyming trials are dealt with in a direct access manner by whichever hemisphere first receives the test word. The remaining rhyming test items, not correctly predicted, and perhaps some nonrhyming trials as well, probably have to be relayed to the left hemisphere for phonetic analysis. This mixed model can also account for our finding that more errors were made with rhyming test words than with nonrhyming test words (i.e., there were more misses than false alarms). According to our mixed model interpretation, misses tend to occur when orthographic analysis incorrectly indicates that the orthography of rhyming test words is too different for them to rhyme. This explanation makes sense considering that all our word pairs were orthographically different. In conclusion, we had originally assumed a callosal relay model and exclusive LH specialization for rhyme judgment of printed words.
However, our results argue against a strictly callosal relay model for rhyme judgment. Zecker et al. (1986, p. 384) argue that, ". . . with auditory presentation, the right hemisphere demonstrates an ability to make rhyme decisions as well as the left hemisphere." Results of our experiment suggest that the right hemisphere may be able to perform some types of nonrhyme judgment of printed words as well, perhaps using orthographic rules at its disposal. Perhaps RH processing of nonrhyming words results from the bilateral presentation procedure, which may have tied up the LH and encouraged RH processing of LVF items.

REFERENCES

Besner, D., Davies, J., & Daniels, S. 1981. Reading for meaning: The effects of concurrent articulation. The Quarterly Journal of Experimental Psychology A, 33, 415-437.
Boles, D. B. 1979. The bilateral effect: Mechanisms for the advantage of bilateral over unilateral stimulus presentation in the production of visual field asymmetry. Ph.D. dissertation, Department of Psychology, University of Oregon; Dissertation Abstracts International, 40B(9), 4529.
Boles, D. B. 1983. Hemispheric interaction in visual field asymmetry. Cortex, 19, 99-113.
Boles, D. B. 1987. Reaction time asymmetry through bilateral versus unilateral stimulus presentation. Brain and Cognition, 6, 321-333.
Boles, D. B. 1990. What bilateral displays do. Brain and Cognition, 12, 205-228.
Chiarello, C. 1985. Hemisphere dynamics in lexical access: Automatic and controlled priming. Brain and Language, 26, 146-172.
Clark, H. H. 1973. The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of Verbal Learning and Verbal Behavior, 12, 335-359.
Cohen, G., & Freeman, R. 1978. Individual differences in reading strategies in relation to cerebral asymmetries. In J. Requin (Ed.), Attention and performance VII. Hillsdale, NJ: Lawrence Erlbaum Associates.
Coltheart, M. 1980. Deep dyslexia: A right hemisphere hypothesis. In M. Coltheart, K. Patterson, & J. C. Marshall (Eds.), Deep dyslexia. London: Routledge & Kegan Paul. Pp. 326-380.
Coltheart, M. 1983. The right hemisphere and disorders of reading. In A. Young (Ed.), Functions of the right cerebral hemisphere. London: Academic Press. Pp. 171-201.
Crossman, D. L., & Polich, J. 1988. Hemispheric differences for orthographic and phonological processing. Brain and Language, 35, 301-312.
Donnenwerth-Nolan, S., Tanenhaus, M. K., & Seidenberg, M. S. 1981. Multiple code activation in word recognition: Evidence from rhyme monitoring. Journal of Experimental Psychology: Human Learning and Memory, 7(3), 170-180.
Ellis, A. W., Young, A. W., & Anderson, C. 1988. Modes of word recognition in the left and right cerebral hemispheres. Brain and Language, 35, 254-273.
Eviatar, Z., & Zaidel, E. 1990. The effects of word length and emotionality on hemispheric contribution to lexical decision. Submitted for publication.
Francis, W. N., & Kucera, H. 1982. Frequency analysis of English usage: Lexicon and grammar. Boston: Houghton Mifflin.
Hochhaus, L. 1972. A table for the calculation of d' and beta. Psychological Bulletin, 77(5), 375-376.
Johnson, R. S., & McDermott, E. A. 1986. Suppression effects in rhyme judgment tasks. The Quarterly Journal of Experimental Psychology A, 38, 111-124.
Kling, J. W., & Riggs, L. A. 1972. Woodworth and Schlosberg's experimental psychology, third edition. New York: Holt, Rinehart & Winston.
Kramer, A. F., & Donchin, E. 1987. Brain potentials as indices of orthographic and phonological interaction during word matching. Journal of Experimental Psychology: Learning, Memory and Cognition, 13, 76-86.
Kutas, M., & Hillyard, S. A. 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203-205.
McKeever, W. F. 1971. Lateral word recognition: Effects of unilateral and bilateral presentation, asynchrony of bilateral presentation, and forced order of report. Quarterly Journal of Experimental Psychology, 23, 410-416.
McNichol, D. 1972. A primer on signal detection theory. London: Allen & Unwin.
Moscovitch, M. 1976. On the representation of language in the right hemisphere of right-handed people. Brain and Language, 3, 47-71.
Parkin, A. J., & West, S. 1985. Effects of spelling-to-sound regularity on word identification following brief presentation in right and left visual fields. Neuropsychologia, 23, 270-284.
Polich, J., McCarthy, G., Wang, W., & Donchin, E. 1983. When words collide: Orthographic and phonological interference during word processing. Biological Psychology, 16(3-4), 155-180.
Rugg, M. D. 1984a. Event-related potentials and the phonological processing of words and non-words. Neuropsychologia, 22(4), 435-443.
Rugg, M. D. 1984b. Event-related potentials in phonological matching tasks. Brain and Language, 23, 225-240.
Rugg, M. D., & Barrett, S. E. 1987. Event-related potentials and the interaction between orthographic and phonological information in a rhyme-judgment task. Brain and Language, 32, 336-361.
Schweiger, A., Zaidel, E., Field, T., & Dobkin, B. 1989. Right hemisphere contribution to lexical access in an aphasic with deep dyslexia. Brain and Language, 37, 73-89.
Seidenberg, M. S., & Tanenhaus, M. K. 1979. Orthographic effects of rhyming. Journal of Experimental Psychology: Human Learning and Memory, 5(6), 546-554.
Seitz, K. S., & McKeever, W. F. 1984. Unilateral versus bilateral presentation methods in the reaction time paradigm. Brain and Cognition, 3, 413-425.
Venezky, R. L. 1970. The structure of English orthography. The Hague: Mouton.
Wilding, J., & White, W. 1985. Impairment of rhyme judgments by silent and overt articulatory suppression. The Quarterly Journal of Experimental Psychology A, 37, 95-107.
Young, A. W., & Ellis, A. W. 1985. Different methods of lexical access for words presented in the left and right visual hemifields. Brain and Language, 24, 326-358.
Zaidel, E. 1978. Lexical organization in the right hemisphere. In P. A. Buser and A. Rougeul-Buser (Eds.), Cerebral correlates of conscious experience. Amsterdam: Elsevier/North-Holland Biomedical Press. Pp. 177-197.
Zaidel, E. 1980. The structure of language: Clues from hemispheric specialization. In U. Bellugi and M. Studdert-Kennedy (Eds.), Signed and spoken language: Biological constraints on linguistic form. Weinheim: Verlag Chemie GmbH. Pp. 291-340.
Zaidel, E. 1983. Disconnection syndrome as a model for laterality effects in the normal brain. In J. B. Hellige (Ed.), Cerebral hemisphere asymmetry: Method, theory, and applications. New York: Praeger. Pp. 95-151.
Zaidel, E., Clarke, J. M., & Suyenobu, B. 1990. Hemispheric independence: A paradigm case for cognitive neuroscience. In A. B. Scheibel and A. J. Wechsler (Eds.), Neurobiology of higher cognitive function. New York: Guilford Press. Pp. 297-352.
Zaidel, E., & Peters, A. M. 1981. Phonological encoding and ideographic reading by the disconnected right hemisphere: Two case studies. Brain and Language, 14, 205-234.
Zecker, S. G., Tanenhaus, M. K., Alderman, L., & Sequeland, L. 1986. Lateralization of lexical codes in auditory word recognition. Brain and Language, 29, 372-389.