J. FLUENCY DISORD. 12 (1987) 271-286
COMPARISON OF REACTION TIME AND ACCURACY MEASURES OF LATERALITY FOR STUTTERERS AND NORMAL SPEAKERS

DOUGLAS E. CROSS
Ithaca College, Ithaca, New York
This study compared reaction time and accuracy scores of normal speakers and stutterers on a dichotic listening syllable recognition task. A mean right ear advantage for both measures was observed for the two groups. While no group difference in ear difference score (EDS) was observed for the accuracy measure, the stutterers exhibited almost half the EDS of the normal speakers for reaction time. Analysis of group distributions revealed that the stutterers were more heterogeneous than the normal speakers for all measures. Factors that influence analysis and interpretation of "laterality" data obtained with dichotic listening accuracy and reaction time measures are discussed for both stutterers and nonstutterers.
INTRODUCTION

Presently, the neurophysiologic mechanisms associated with stuttering are not well understood. There is evidence, however, that central nervous system factors are involved. The nature of hemispheric processing for speech and language has received particular emphasis. There is evidence from investigations using EEG (Zimmerman and Knott, 1974; Ponsford et al., 1975; Moore and Haynes, 1980; Boberg et al., 1983), sodium amytal injections (Jones, 1966), cerebral blood flow (Wood et al., 1980), tachistoscopic procedures (Plakosh, 1978; Moore, 1976; Hand and Haynes, 1983; Wilkins et al., 1984), and auditory tracking (Sussman and MacNeilage, 1975) that some individuals who stutter may process certain speech/language information more efficiently in either the right hemisphere or in both hemispheres. Most normal right-handed subjects exhibit lateralization for these processes in the left hemisphere. Generally, the inference has been that bilateral or right hemispheric involvement for some speech/language processes interferes with the production of fluent speech.
Address correspondence to Douglas E. Cross, Ph.D., Department of Speech Pathology and Audiology, Ithaca College, Ithaca, NY 14850.

© 1987 by Elsevier Science Publishing Co., Inc., 52 Vanderbilt Ave., New York, NY 10017
The dichotic listening paradigm has been one of the most frequently used methods for investigating lateralization of speech and language processes in man. Consistently higher right-ear than left-ear accuracy scores have been reported for normal subjects when identifying dichotically presented verbal stimuli (e.g., Kimura, 1961, 1967; Cullen et al., 1974; Berlin and McNeil, 1976; Shankweiler and Studdert-Kennedy, 1967; Kinsbourne, 1978; Darwin, 1974). Whereas there has been much debate over specific models accounting for this effect, there is general agreement that the data reflect a left hemisphere predominance for processing linguistically coded information.

The dichotic listening paradigm has been employed to compare processing characteristics of stutterers with normal speakers. However, the results have been conflicting. Several studies have reported that on the average stutterers exhibited either no ear advantage or a left ear advantage for accuracy measures (Curry and Gregory, 1969; Perrin and Eisenson, 1970; Quinn, 1972; Sommers et al., 1975; Sussman, 1971; Sussman and MacNeilage, 1975; Blood, 1985). Conversely, other studies have reported that group ear advantages for stutterers were no different from those of control groups (Cerf and Prins, 1974; Dorman and Porter, 1975; Gruber and Powell, 1974; Brady and Berson, 1975; Pinsky and MacAdam, 1980; Liebetrau and Daly, 1981). Factors such as the stimuli and modalities used to elicit the presumed language processes, test methodology, and subject sampling have been cited to account for many of the discrepancies in the reported ear effects among dichotic listening studies.

The role of hemispheric organization in stuttering continues to be an important area of investigation. Several issues concerning the use and interpretation of dichotic listening data as indices of hemispheric processing of stutterers warrant additional attention.
For example, accuracy data alone may not be a sensitive and reliable measure of lateralization for certain speech/language processes. The magnitude and direction of ear effects in normal subjects have been shown to be unstable across repeated testing (Porter et al., 1976; Blumstein et al., 1975; Berlin, 1977) and influenced by the strategies adopted by subjects when performing the task (Springer, 1977; Hughes, 1978; Shadden, 1979). In addition, accuracy measurements reflect lateralization for perception of the verbal stimuli. These data provide little or no information about possible mechanisms underlying integration of sensory and motor information within and between the two hemispheres for speech and language. Sussman et al. (1975) reported there was no correlation (r = -.08) between subjects' ear advantages for dichotic listening accuracy scores and for auditory tracking using jaw movements. The inference of a singular "dominant" or "nondominant" hemisphere for language from accuracy data alone may not reflect the actual involvement of the left and right hemispheres during speech.
Springer (1977) and others have suggested that reaction time (RT) may be a more sensitive measure of hemispheric lateralization for verbal information than accuracy. Reaction times of normal speakers for identification of acoustically presented verbal stimuli are faster when stimuli are presented to the right ear than the left ear in dichotic and some monotic listening tasks (Springer, 1972, 1973; Bever et al., 1976; Catlin and Neville, 1976; Hughes, 1978; Shadden, 1979; Moscovitch, 1979). This ear effect has been shown to be relatively stable regardless of whether the subject responded with the right or left hand (Hughes, 1978; Morais and Darwin, 1974; Bever et al., 1976; Springer, 1972; Catlin et al., 1976). Shadden (1979) used a target recognition task to investigate the reaction times and accuracy scores of normal speakers for identification of a target CV syllable in monotic and dichotic listening conditions. Subjects responded as quickly as possible to stimuli by determining whether the target syllable /ba/ was present or absent. Reaction times were significantly faster when the target syllable was presented to the right ear than to the left ear for the dichotic condition, with no ear difference in the monotic condition. Whereas accuracy scores (correctly identifying the target syllable as present or absent) were also higher for the right ear in the dichotic mode, the ear effect was not significant. Differences in subject strategies, as well as the greater sensitivity of RT as a measure of laterality effects, were proposed as likely factors in the difference between the accuracy and RT ear difference scores. The fact that RT involves, at least to some degree, processing time for the task may contribute to its advantage over accuracy in investigating laterality.

A second issue concerns the distribution of individual subject performance relative to group data in studies comparing hemispheric processing of stutterers and normal speakers.
There is evidence that "stutterers" represent a heterogeneous group with respect to laterality characteristics. In a recent dichotic listening study, Blood (1985) reported that a group of 76 young stutterers and a group of 76 normal speakers exhibited a significant mean right ear advantage for accuracy measurements. Further analysis, however, revealed significant subgroups of stutterers with left ear or no ear advantages. Use of group data alone to infer some role of hemispheric laterality factors in stuttering is questionable. Comparing individual subject distributions with group data for reaction time and accuracy would provide a better understanding of how and to what degree laterality is involved in the stuttering problem.

The purpose of this study was to investigate factors that influence indices of laterality for auditory verbal stimuli of adult normal speakers and stutterers for a dichotic listening target recognition task. Specifically, three issues were considered: (1) differences between left- and right-ear scores for reaction time and percent correct measures for both subject groups, (2) differences between the normal speakers' and stutterers' ear
difference scores (EDS) for both reaction time and percent correct, and (3) comparison of individual subject distributions with group data for both the stutterers and normal speakers.

METHODS

Subjects

Subjects in this study were 12 adult male normal speakers and 12 adult male stutterers. The ages of the normal speakers ranged from 22 to 33 yr (mean = 26 yr) and the stutterers' ages ranged from 18 to 35 yr (mean = 25 yr). All subjects exhibited pure tone hearing thresholds at or better than 15 dB (ANSI) with no interaural threshold differences in excess of 10 dB SPL. No subject had a known history of a neurologic disorder. Handedness for each subject was determined using the Edinburgh Handedness Inventory (Oldfield, 1971). Eleven of the 12 stutterers and normal speakers exhibited right-hand preference, with one subject in each group exhibiting left-hand preference. None of the normal speakers had a known history of speech or language disorders. Based on Van Riper's Profile of Stuttering Behavior (1971), using frequency and duration of stuttering as criterion measures of severity, three of the stutterers were classified as mild, five as moderate, and four as severe. Eight of the 12 stutterers were enrolled in treatment at the time of the study.

Dichotic Stimuli

The stimulus tape used in this study was prepared at Kresge Hearing Research Laboratory and consisted of 160 CV syllable pairs. The syllables were /pa, ba, ta, da, ka, and ga/. Each syllable was originally recorded using an adult male voice. These signals were digitized for computer generation of the syllable pairs on the master tape. The syllable pairs were recorded on two channels of an audio tape with equal peak intensity levels and durations of the vowel portion of the CV. Onset alignment for each of the syllable pairs was within 2.5 msec. Interstimulus intervals varied randomly among 3, 4, and 5 sec. The stimulus tape was divided into four blocks consisting of 40 syllable pairs each.
Each block contained two tokens of each of the ten possible CV pairs containing the syllable /ba/ and one token of all other possible CV combinations. Stimulus pairs were randomly ordered within each block. Thus, the tape was weighted such that 50% of the syllable pairs contained the target syllable /ba/ and the remaining pairs did not. Each syllable was preceded on the master tape by a 15 kHz tone burst with a 5 msec rise and decay time. The bursts were 10 msec long and were recorded at an intensity level 20 dB down from the peak vowel portion of the CV tokens. This burst preceded each syllable by 5 msec
and triggered the onset of a digital logic system used to measure reaction time. A 1-min 1000 Hz calibration tone was recorded at the same peak intensity level as the CV pairs for calibration of the presentation level of the CV syllables.

Instrumentation

The stimulus tape was played to subjects through TDH-39 stereo earphones at 65 dB SPL. The output of the tape recorder was first routed through a 48 dB/octave low-pass (8 kHz) filter to eliminate presentation of the 15 kHz trigger signal to the subject. The output of the tape recorder was also rectified and low-pass filtered at 15 kHz before being routed to a logic circuit which triggered the onset of a 1000 Hz digital clock. The subject's response apparatus consisted of two 2-in.² copper pads mounted side by side on a metal plate and attached to the arm portion of the subject's chair at wrist level. In the "ready" position the subject's index and middle fingers rested comfortably on each pad. After presentation of a syllable pair, lifting either finger broke the circuit to the digital counter, allowing for measurement of reaction time to the nearest millisecond. A light mechanism on the logic apparatus indicated to the experimenter whether the response was made with the left or right pad.

Procedures

All testing was conducted with the subject seated in a sound-treated booth. Subjects were instructed that they would be presented with a series of syllable pairs consisting of /pa, ba, ta, da, ka, and ga/. One syllable was presented to the left ear and a different syllable presented simultaneously to the right ear. They were told that in each case the two syllables would be different and that in one-half of the trials the "target" syllable /ba/ would be presented and in the other half it would not. Subjects were instructed to respond to each syllable pair as quickly as possible by indicating whether they perceived the target syllable /ba/ to be present or absent.
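The block composition described in the stimulus section (two tokens of each ordered pair containing /ba/, one token of every other ordered pair, randomly ordered) can be sketched as follows. This is an illustrative reconstruction, not the original tape-generation software, and /ga/ as the sixth syllable is an assumption where the printed list is garbled:

```python
import itertools
import random

# Six CV syllables; /ga/ is assumed for the garbled sixth item.
SYLLABLES = ["pa", "ba", "ta", "da", "ka", "ga"]
TARGET = "ba"

def build_block(rng):
    """One 40-trial block: every ordered pair of distinct CVs once,
    with each pair containing the target /ba/ included twice, then
    randomly ordered within the block."""
    block = []
    for left, right in itertools.permutations(SYLLABLES, 2):
        block.append((left, right))
        if TARGET in (left, right):
            block.append((left, right))  # second token of each target pair
    rng.shuffle(block)  # stimulus pairs randomly ordered within the block
    return block

block = build_block(random.Random(0))
# 30 distinct ordered pairs + 10 extra /ba/ tokens = 40 trials,
# exactly half of which contain the target syllable
```

With six syllables there are 30 ordered pairs, 10 of which contain /ba/; doubling those yields the 40-pair block weighted 50% for the target, as described.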
During both the instruction and training sessions, equal emphasis was given to the need for accuracy as well as speed of the response. Subjects were familiarized with the response pad and told to lift the finger off the pad marked "Ba" if they perceived the target syllable /ba/ as one of the syllables presented, or to lift their finger off the pad marked "Not Ba" if the target syllable was absent. Subjects were told that the presentation order of the "ba present" and "ba absent" pairs, as well as which ear received the target syllable, were random. All subjects responded with their preferred hand. The finger orientation of the "Ba" and "Not Ba" pads was counterbalanced among subjects. Prior to testing, the output of each earphone was calibrated to 65 dB
SPL using the 1000 Hz calibration signal. The orientation of the earphones was reversed midway through the experiment. Before starting the experiment subjects were presented with each of the six CVs in order to familiarize them with the syllable pairs. They were then presented with a complete block of 40 practice trials. During testing the experimenter recorded the reaction time and whether the target syllable was correctly or incorrectly identified as present or absent after each response. There was approximately 2 min between each of the four blocks.
Data Analysis

The data presented in this paper are based on the correct "ba present" responses. Incorrect responses were excluded from analysis. Mean reaction times and accuracy measurements were calculated for each subject for both the left and right ears. Percent correct responses for each ear were used to measure accuracy. These data were subjected to an arc sine transformation, expressed in radians (Dixon and Massey, 1969), before analysis. Ear difference scores (EDS) were used as an index of laterality for each subject for both the accuracy and reaction-time measurements. These data were used to compare laterality effects between groups as well as for analysis of within-group distributions. For reaction time, ear difference scores were calculated by subtracting right-ear from left-ear response times. Thus, a positive EDS reflected faster right-ear responses. Accuracy ear difference scores were calculated by subtracting the left-ear from the right-ear responses, also resulting in positive scores for more accurate right-ear responses. Ear difference scores for both the reaction time and accuracy measures were analyzed for individual subjects to provide a profile of group distributions for the two laterality measures. Pearson product-moment correlations were computed between mean ear difference scores for the reaction time and accuracy measurements for the two groups. These correlations were based on the mean scores for each subject within the two groups.
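As a sketch, the laterality indices described above can be computed as follows. The arc sine transformation is assumed here to be the standard variance-stabilizing form 2·arcsin(√p), which the paper does not spell out explicitly:

```python
import math

def arcsine_transform(p_correct):
    """Arc sine transform of a proportion, expressed in radians.
    Assumed form: 2 * asin(sqrt(p)), the usual variance-stabilizing
    transform for percent correct data (cf. Dixon and Massey, 1969)."""
    return 2.0 * math.asin(math.sqrt(p_correct))

def rt_eds(left_rt_msec, right_rt_msec):
    """Reaction-time ear difference score: left minus right, so a
    positive EDS reflects faster right-ear responses."""
    return left_rt_msec - right_rt_msec

def accuracy_eds(left_p, right_p):
    """Accuracy EDS on transformed scores: right minus left, so a
    positive EDS reflects more accurate right-ear responses."""
    return arcsine_transform(right_p) - arcsine_transform(left_p)
```

For example, `rt_eds(628, 567)` yields 61 msec, the order of magnitude reported below for the normal speakers' group mean.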
RESULTS

Mean Reaction Time

Table 1 displays the mean reaction times and ear difference scores for correct "ba present" responses for both the stutterers and normal speakers. As a group, the normal speakers exhibited a shorter mean response time when the target stimulus was correctly identified in the right ear (mean = 566 msec) than in the left ear (mean = 628 msec). The difference was statistically significant (t = 4.008, df = 11, p < 0.01). As a group the stutterers also exhibited a faster mean right-ear reaction time (mean = 640 msec) than mean left-ear reaction time (672 msec). The difference between left- and right-ear response times for the stutterers was also significant (t = 2.83, df = 11, p < 0.02). Whereas both subject groups exhibited a right ear advantage for reaction time, the average ear difference score was substantially larger for the normal speakers (mean = 61 msec) than the stutterers (mean = 31 msec). This difference was significant (t = 1.72, df = 22, p < 0.05).

Table 1. Mean Reaction Times and Ear Difference Scores (EDS) for Normal Speakers and Stutterers

Stutterers                             Normals
Subject    LE      RE      EDS         Subject    LE      RE      EDS
S1         758     644     114         N1         407     397      10
S2         575     568       7         N2         534     461      73
S3         759     725      34         N3         722     703      19
S4         641     647      -6         N4         687     612      75
S5         537     482      55         N5         499     512     -13
S6         681     638      43         N6         693     620      73
S7         629     620       9         N7         651     610      41
S8         775     744      31         N8         696     669      27
S9         696     720     -24         N9         676     546     130
S10        687     632      55         N10        628     560      68
S11        657     589      68         N11        698     520     178
S12        670     681     -11         N12        643     592      51
Mean       672.1   640.8    31.3       Mean       627.8   566.8    61.0
SD          69.00   70.35   37.39      SD          92.84   82.70   50.40

Abbreviations: RE, right ear; LE, left ear.

Accuracy

Table 2 presents the mean arc sine transformed percent correct data for the left and right ears of both groups. It can be seen that the normal speakers were more accurate when identifying the target syllable in the right ear (mean = 2.1637 rads) than the left ear (mean = 1.5757 rads). This ear effect was statistically significant (t = 3.27, df = 11, p < 0.01). The stutterer group also exhibited a similar right ear advantage for percent correct. The difference between the right-ear accuracy scores (mean = 2.1311 rads) and the left-ear scores (mean = 1.5272 rads) was statistically significant (t = 3.79, df = 11, p < 0.01). Unlike reaction time, the average ear difference score for the accuracy data was similar for both the stutterer and normal speaker groups. The difference in mean EDS between the two groups was nonsignificant (t = .0663, df = 22, p > 0.05).
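The within-group ear comparisons reduce to a paired t test on the twelve individual ear difference scores. A minimal sketch using the normal speakers' reaction-time EDS values from Table 1 (standard library only; this reconstructs the reported statistic, it is not the original analysis code):

```python
import math
from statistics import mean, stdev  # stdev is the sample SD (n - 1 denominator)

def paired_t(diffs):
    """Paired t statistic on difference scores:
    t = mean(d) / (SD(d) / sqrt(n)), with df = n - 1."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Normal speakers' reaction-time EDS (LE - RE, msec) from Table 1
normal_eds = [10, 73, 19, 75, -13, 73, 41, 27, 130, 68, 178, 51]

t = paired_t(normal_eds)  # close to the reported t = 4.008 with df = 11
```

Note that the tabled SD of 50.40 corresponds to an n-denominator SD; the t statistic itself is based on the n − 1 sample SD, which is why `stdev` is used here.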
Table 2. Mean Arc Sine Transformed Percent Correct Data and Ear Difference Scores for Normal Speakers and Stutterers

Stutterers                               Normals
Subject    LE       RE       EDS         Subject    LE       RE       EDS
S1         1.5508   2.4341   0.8833      N1         1.287    1.3989   0.1119
S2         1.4505   2.5681   1.1176      N2         2.1895   2.4039   0.2144
S3         1.1154   2.0264   0.911       N3         2.1412   1.6911   -0.450
S4         1.181    1.3898   0.2088      N4         0.8763   2.4039   1.5276
S5         1.2239   2.6902   1.4663      N5         1.287    2.1895   0.9025
S6         2.0264   1.9606   -0.065      N6         1.5508   2.4981   0.9473
S7         1.8132   2.0264   0.2132      N7         1.4508   1.9606   0.5098
S8         1.4907   2.1412   0.6505      N8         1.8132   1.6509   -0.162
S9         2.0715   1.6911   -0.380      N9         1.2239   2.5681   1.3442
S10        1.181    2.4039   1.2229      N10        1.5908   2.1412   0.5504
S11        2.3462   2.6906   0.3444      N11        2.4981   2.7934   0.2953
S12        0.8763   1.5508   0.6745      N12        1.0004   2.2657   1.2653
Mean       1.5272   2.1311   0.6038      Mean       1.5757   2.1637   0.5880
SD         0.4311   0.4197   0.5294      SD         0.4770   0.3990   0.5945

Abbreviations: RE, right ear; LE, left ear.
Individual Subject Performance

For the purpose of this study, accuracy ear difference scores of +10% or greater and -10% or less were defined as right ear and left ear advantages, respectively. Ear difference scores between these values were defined as no ear advantage. Similarly, ear difference scores for reaction time of ±10 msec were also defined as respective ear advantages. Figure 1 shows the percent correct ear difference scores for each of the normal speakers and stutterers. Ten of the 12 normal speakers (83%) and 7 (58%) of the stutterers exhibited a right ear advantage, while one normal speaker (8%) and one stutterer (8%) showed a left ear advantage. One normal speaker (8%) and four stutterers (33%) exhibited no ear advantage for percent correct of the "ba present" responses. Figure 2 displays the individual ear difference scores for reaction time of correctly identified responses. Eleven of the 12 normal speakers (92%) and seven stutterers (58%) had right ear advantages. One normal speaker (8%) and two stutterers (17%) showed left ear advantages for reaction time. None of the normal speakers and three of the stutterers (25%) exhibited no ear advantage for reaction time.

Correlation Between Reaction Time and Accuracy Measurements

Correlations between the mean reaction time and accuracy data were r = .27 for the normal speakers and r = .43 for the stutterers. Both correlations were statistically nonsignificant at the .05 level.
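The ±10 msec criterion can be applied directly to the reaction-time EDS values in Table 1; this sketch reproduces the individual-subject counts reported above (scores exactly at the criterion are counted as an advantage, consistent with the tallies):

```python
from collections import Counter

def classify(eds, criterion=10):
    """Label an ear difference score as a right ear advantage (REA),
    left ear advantage (LEA), or no ear advantage (NEA)."""
    if eds >= criterion:
        return "REA"
    if eds <= -criterion:
        return "LEA"
    return "NEA"

# Reaction-time EDS values (LE - RE, msec) from Table 1
normal_eds = [10, 73, 19, 75, -13, 73, 41, 27, 130, 68, 178, 51]
stutterer_eds = [114, 7, 34, -6, 55, 43, 9, 31, -24, 55, 68, -11]

normal_counts = Counter(classify(e) for e in normal_eds)        # 11 REA, 1 LEA
stutterer_counts = Counter(classify(e) for e in stutterer_eds)  # 7 REA, 2 LEA, 3 NEA
```

Running the same classifier over both groups recovers the distributions in Figure 2: 11 of 12 normal speakers but only 7 of 12 stutterers show a right ear advantage for reaction time.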
Figure 1. Percent correct ear difference scores for (a) normal speakers and (b) stutterers.
Figure 2. MRT ear difference scores for (a) normal speakers and (b) stutterers.
DISCUSSION

The first issue addressed was what differences, if any, are observed between group reaction time and accuracy measures for the target syllable recognition task. As a group, the normal speakers exhibited the expected right ear advantage for both reaction time and accuracy. That is, they were able to identify the target syllable with a higher percentage of accuracy and responded more rapidly when it was presented to the right ear. These results are consistent with other studies showing that under conditions of dichotically presented stimuli the left hemisphere is more efficient for perceiving and processing presumed linguistically coded acoustic information. Both the reaction time and accuracy measures in this study reflect the typical right ear advantage for normal right-handed subjects under similar testing circumstances.

The 61 msec reaction-time EDS for the normal speakers in this study is quite similar to the 69 msec EDS reported by Shadden (1979) using virtually identical methods and procedures. The ear effect for the accuracy data between the two studies, however, is quite different. While Shadden reported the expected right ear advantage for percent correct (also expressed in radians), the ear effect was statistically nonsignificant. In fact, the EDS was almost half of that observed in the present study. This lends support to the inference that reaction time may be more sensitive than accuracy data to differences in processing characteristics of the two hemispheres (Geffen et al., 1978; Springer, 1977; Shadden, 1979). Factors such as selective attention to the "stronger" or "weaker" ear during presentation of dichotic stimuli, as well as the influence of speed/accuracy trade-offs in reaction time experiments, may have substantial influence on the magnitude of ear difference scores, especially for accuracy measurements (Springer, 1977).
While accuracy provides a basic index of laterality for perception of acoustic stimuli, the size of ear difference scores (and thus presumed laterality effects) is susceptible to extraneous variables typically outside the control of the experimenter (Hughes, 1978).

As a group the stutterers also had a right ear advantage for both accuracy and reaction time. This is consistent with results from dichotic listening studies which show that most stutterers process CV auditory stimuli more efficiently in the left hemisphere. An important difference is observed, however, when the average EDS for the two variables is compared between the stutterer and normal speaker groups. The hit rates for correctly identifying the target syllable in the left and right ears are virtually the same for the stutterers (EDS = 25.33%) and normal speakers (EDS = 26.58%). For reaction time, however, the average EDS for the normal speakers (61 msec) is almost twice that of the stutterers (31 msec). This difference is even more pronounced when compared with
other investigations using similar procedures and conditions with normal speakers. The target recognition RT task probably activates two independent yet related mechanisms. The first mechanism is perception of the acoustic target signal, a process lateralized to the left hemisphere. The second involves activation of a language-motor integration process which translates the perceptual information into the appropriate motor response (i.e., lifting the correct finger off the designated response pad). The relatively consistent right ear advantage for reaction time reported in dichotic listening studies indicates that this process is lateralized in the left hemisphere as well. The small or nonsignificant correlations between reaction time and accuracy measures reported in this and other target recognition tasks (Catlin et al., 1976; Shadden, 1979) provide additional evidence that two processes may be involved.

While most stutterers perceive acoustic verbal stimuli more efficiently in the left hemisphere, some stutterers may not have a well-defined left hemisphere mechanism for language-motor integration. That is, they may be less efficient in translating linguistic information into the appropriate motor response. This could be caused by a less efficient, albeit left hemisphere, processor or by bilateral or right hemisphere processing. The latter situation could necessitate transcallosal transfer of information before initiating the response.

Previous studies have provided evidence for a left hemisphere sensorimotor integration mechanism for speech. Auditory tracking studies, for example, have investigated the relationship between speech-related movements and hemispheric processing (Sussman, 1971; Sussman et al., 1975; Sussman and MacNeilage, 1975; Sussman and Westbury, 1978).
In these tasks subjects "track" variations in frequency or amplitude of a cursor tone presented to one ear with a similar tone generated by movements of articulators such as the tongue or jaw presented simultaneously to the opposite ear. Consistently greater tracking accuracy results for normal subjects when the movement-generated tone is monitored in the right ear. The authors have interpreted this right ear advantage as evidence for a left hemisphere sensorimotor integration mechanism associated with speech motor control. Stutterers have exhibited a mixed or left ear advantage in some tracking studies, indicating bilateral or right hemisphere processing for the presumed integration process (Sussman and MacNeilage, 1975). Evidence of a right ear advantage for manual tracking has not been as consistent in these studies; however, the results of the present study are in favorable agreement with the possibility that some stutterers may have difficulty in efficiently integrating sensorimotor information. Whereas the reaction time task in the present study used a manual response, the use of linguistically coded CV syllables may activate integration processes in a manner similar to tracking a tone with speech-related movements.
Another issue in this study was the relationship between the distribution of individual subject performance and group mean data. A considerable range in ear difference scores for both the accuracy and reaction time measures was observed, especially for the stutterer group. Eighty-three percent of the normal speakers exhibited the expected right ear advantage for percent correct, and 92% for reaction time. Only one of the 12 normal speakers exhibited a left ear advantage and one no ear advantage. These distributions are consistent with data from other dichotic listening studies which use accuracy and reaction-time measurements to assess laterality in normal speakers.

Of importance is the finding that 58% of the stutterers also exhibited a right ear advantage for both accuracy and reaction time. That is, seven of the 12 stutterers performed like the normal speakers on both measures. While a larger number of stutterers showed the more atypical lateralization patterns, the data do not support a notion that all or even a majority of adults who develop a stuttering problem are markedly different from normal speakers with respect to hemispheric processing. Differences that are observed are largely limited to subgroups of individuals and do not reflect a general characteristic of "stutterers." Inconsistencies among studies are often related to large within- and between-group variability, especially for studies limited to a relatively small sample of subjects. This problem has been pointedly noted in the recent laterality study by Blood (1985), who suggested that interpretation of group mean data without inclusion of individual or subgroup performance may be of limited clinical or interpretive value.

Finally, in accordance with the observations above, the data in this and other dichotic listening studies cannot and should not be interpreted as evidence of a causal relationship between hemispheric processing characteristics and stuttering.
These data only provide evidence that some individuals who develop a stuttering problem may exhibit differences in processing strategies when compared to most normal speakers. The growing body of literature on hemispheric organization in man indicates that the contributions of the left and right hemispheres to speech and language are complex and involve more than the issue of "left versus right" dominance. A specific understanding of how the two hemispheres function independently and interactively during speech is presently lacking. It is most reasonable to assume that individuals who exhibit less efficient hemispheric organization for certain speech/language processes may be more susceptible to fluency breakdown, especially during early periods of speech and language acquisition. Questions concerning the specific nature of how this breakdown occurs, its relationship to environmental influences, and how these factors influence development of the stuttering response need to be addressed. Data from dichotic accuracy measures alone may not provide specific information about the functional involvement of the two hemispheres for speech.
REFERENCES

American National Standards Institute. Specifications for Audiometers (ANSI S3.6-1969). New York: American National Standards Institute, Inc., 1972.

Berlin, C.I. Hemispheric asymmetry in auditory tasks. In: Lateralization in the Nervous System, Harnad, S., Doty, R.W., Goldstein, L., Jaynes, J., and Krauthamer, G. (eds.). New York: Academic Press, 1977.

Berlin, C.I., and McNeil, M.R. Dichotic listening. In: Contemporary Issues in Experimental Phonetics, Lass, N.J. (ed.). New York: Academic Press, 1976, pp. 327-387.

Bever, T.G., Hurtig, R.R., and Handel, A.B. Analytic processing elicits right ear superiority in monaurally presented speech. Neuropsychologia, 1976, 14, 175-181.

Blood, G.W. Laterality differences in child stutterers: Heterogeneity, severity levels, and statistical treatments. Journal of Speech and Hearing Disorders, 1985, 50, 53-60.

Blumstein, S., Goodglass, H., and Tartter, V. The reliability of ear advantage in dichotic listening. Brain and Language, 1975, 2, 226-236.

Boberg, E., Yeudall, L., Schopflocher, D., and Bo-Lassen, P. The effects of an intensive behavioral program on the distribution of EEG alpha power in stutterers during the processing of verbal and visuospatial information. Journal of Fluency Disorders, 1983, 8, 245-263.

Brady, J.P., and Berson, J. Stuttering, dichotic listening and cerebral dominance. Archives of General Psychiatry, 1975, 32, 1449-1459.

Catlin, J., and Neville, H. Note: The laterality effect in reaction time to speech stimuli. Neuropsychologia, 1976, 14, 141-143.

Cerf, A., and Prins, D. Stutterers' ear preference for dichotic syllables. Paper presented at the Annual Meeting of the American Speech, Language, and Hearing Association, Las Vegas, 1974.

Cullen, J.K., Jr., Thompson, C.L., Hughes, L.F., Berlin, C.I., and Samson, D. The effects of varied acoustic parameters on performance in dichotic speech perception tasks. Brain and Language, 1974, 1, 307-322.

Curry, F., and Gregory, H. The performance of stutterers on dichotic listening tasks thought to reflect cerebral dominance. Journal of Speech and Hearing Research, 1969, 12, 73-82.

Darwin, C.J. Ear differences and hemispheric specialization. In: The Neurosciences: Third Study Program, Schmitt, F.O., and Worden, F.G. (eds.). Cambridge, MA: The Massachusetts Institute of Technology, 1974.

Dixon, W.J., and Massey, F.J. Introduction to Statistical Analysis, 3rd ed. New York: McGraw-Hill, 1969.

Dorman, M.F., and Porter, R.J., Jr. Hemispheric lateralization for speech perception in stutterers. Cortex, 1975, 11, 181-185.

Geffen, G., Bradshaw, F.L., and Wallace, G. Interhemispheric effects on reaction time to verbal and nonverbal visual stimuli. Journal of Experimental Psychology, 1978, 87, 415-422.

Gruber, L., and Powell, R. Responses of stuttering and nonstuttering children to a dichotic listening task. Perceptual and Motor Skills, 1974, 35, 263-264.

Hand, C.R., and Haynes, W.O. Linguistic processing and reaction time differences in stutterers and nonstutterers. Journal of Speech and Hearing Research, 1983, 26, 181-185.

Hughes, L.F. Effects of varied response modes upon dichotic consonant-vowel identification latency. Brain and Language, 1978, 5, 301-309.

Jones, R.K. Observations on stammering after localized cerebral injury. Journal of Neurology, Neurosurgery and Psychiatry, 1966, 29, 192-195.

Kimura, D. Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology, 1961, 15, 166-175.

Kimura, D. Functional asymmetry of the brain in dichotic listening. Cortex, 1967, 3, 163-178.
of the brain in dichotic listening.
Cortex, 1967,
Liebetrau, R.M., and Daly, D.A. Auditory processing and perceptual abilities of “organic” and “functional” stutterers. Journal of Fluency Disorders, 1981, 6, 219-231.
Moore, W.H. Jr. Bilateral tachistoscopic word perception subjects. Brain and Language, 1976, 3, 434-442.
of stutterers and normal
Moore, W.H. Jr., and Haynes, W.O. Alpha hemispheric asymmetry and stuttering: Some support for segmentation dysfunction hypothesis. Journal of Speech
and Hearing Research,
1980, 23, 229-247.
Morais, J., and Darwin, C.J. Ear differences for same-different reaction times to monaurally presented speech. Brain and Language, 1974, 1, 383-390. processing and the cerebral hemispheres. In: HandVol. 2, Neuropsychology, Gazzaniga, M.S. (ed.). New York: Plenum Press, 1979.
Moscovitch,
M. Information
book of Behavioral
Neurobiology,
Perrin, K.L., and Eisenson, J. An examination of ear preference for speech and nonspeech stimuli in a stuttering population. Paper presented at the Annual Meeting of the American Speech, Language, and Hearing Association, New York, 1970. Pinsky, S.D., and McAdam, D.W. Electroencephalographic of cerebral laterality in stutterers. Brain and Language,
and dichotic indices 1980, 11, 374-397.
Plakosh, P. The functional asymmetry of the brain: Hemispheric specialization in stutterers for processing of visually presented linguistic and spatial stimuli. Unpublished doctoral dissertation, the Palo Alto School of Professional Psychology, 1978. Ponsford, R., Brown, W., Marsh, J., and Travis, L. Evoked potential correlates of cerebral dominance for speech perception in stutterers and nonstutterers. Electroencephalography and Clinical Neurophysiology, 1975,39,434 (Abstr.). Porter, R.J. Troendle, R., and Berlin, C.I. Effects of practice on the perception of dichotically presented stop- consonant-vowel syllables. Journal of the Acoustical
Society of America,
Quinn, P.T. Stuttering, Journal
of Australia,
1976, 59, 679-682.
cerebral dominance 1972, 2, 639-643.
and the dichotic word test. Medical
D. E. CROSS Shadden B.B., Differences in reaction time as a function of ear stimulated, cessing task, and mode of stimulation. Unpublished doctoral dissertation, versity of Tennessee 1979. Shankweiler, D., and Studdert-Kennedy, M. Identification of consonants vowels presented to left and right ears. Quarterly Journal of Experimental chology,
proUniand Psy-
1967, 19, 59-63.
Sommers, R.K., Brady, W., and Moore, W.H. Jr. Dichotic ear preferences of stuttering children and adults. Perceptual and Motor Skills, 1975, 41, 931-938. Springer, S.P. Lateralization of phonological processing in a dichotic detection task. Unpublished doctoral dissertation, Stanford University, 1972. Springer, S.P. Hemispheric specialization for speech opposed noise. Perceptual Psychophysiology, 1973, 13, 391-393.
by contralateral
Springer, S.P. Tachistoscopic and dichotic-listening investigations of laterality in human subjects. In: Lateralization in the Nervous System, Harnard, S., Doty, R.W., Goldstein, L., Jaynes, J., and Krauthamer, G. (eds.). New York: Academic Press, 1977. Sussman,
H. The laterality
Acoustical
effect in lingual-auditory
Society of America,
tracking.
Journal
of the
1971, 49, 1874-1880.
Sussman, H. Evidence for left hemisphere superiority in processing movementrelated tonal signals. Journal of Speech and Hearing Research, 1975, 22, 224235.
Sussman, H.M., and MacNeilage, P.F. Hemispheric specialization for speech production and perception in stutterers. Neuropsychologia, 1975, 9, 19-26. Sussman, H. MacNeilage, P.F., and Lumbley, J. Pursuit auditory tracking of dichotically presented tonal amplitudes. Journal of Speech and Hearing Research,
1975, 18, 74-81.
Sussman, H., and Westbury, J.R. A laterality effect in isometric and isotonic labial tracking. Journal of Speech and Hearing Research, 1978, 21, 563-579. Van Riper, C. The Nature of Stuttering. 1971. Wilkins, C., Webster, stimulus recognition orders,
Englewood
Cliffs, NJ: Prentice-Hall,
R.L., and Morgan, B.T. Cerebral lateralization of visual in stutterers and fluent speakers. Journal of Fluency Dis-
1984, 17, 131-141.
Wood, F., Stump, D., McKeehan, A., Sheldon, S., and Proctor, J. Patterns of regional cerebral blood flow during attempted reading aloud by stutterers both on and off haloperidol medication: Evidence for inadequate left frontal activation during stuttering. Brain and Language, 1980, 9, 141-144. Zimmerman, processing
G.N., and Knott, J.R. Slow potentials of the brain relation to speech in normal speakers and stutterers. Electroencephalogruphy and
Clinical Neurophysiology,
1974, 37, 599-607.