Recognition of vocal and facial cues to affect in language-impaired and normally-developing preschoolers


Journal of Communication Disorders 37 (2004) 5–20

Marlena Creusere (a,*), Mary Alt (a,b), Elena Plante (a,b)

(a) National Center for Neurogenic Communication Disorders, University of Arizona, Tucson, AZ, USA
(b) Department of Speech and Hearing Sciences, University of Arizona, Tucson, AZ, USA

Received 27 August 2002; received in revised form 30 March 2003; accepted 15 May 2003

Abstract

The current study was designed to investigate whether reported [J. Learn. Disabil. 31 (1998) 286; J. Psycholinguist. Res. 22 (1993) 445] difficulties in language-impaired children's ability to identify vocal and facial cues to emotion could be explained at least partially by nonparalinguistic factors. Children with specific language impairment (SLI) and control participants received an affect discrimination task, which consisted of the following cue situations: (1) facial expression and unfiltered speech; (2) low-pass-filtered speech only; (3) facial expression only; and (4) facial expression and filtered speech. The results of the study indicated that impaired and nonimpaired group performance differed only for the items including facial expression and nonfiltered speech. Developmental and investigative implications of this finding are addressed.

Learning outcomes: As a result of this activity, the participant will be able to summarize the findings from existing research on affect comprehension in children with language impairments (LI). As a result of this activity, the participant will be able to discuss ways in which language impairment and difficulties in understanding emotion cues are associated and propose how these associations might influence social interactions.

© 2003 Published by Elsevier Inc.

Keywords: Affect comprehension; Emotion cues; Specific language impairment

* Corresponding author. Present address: Texas Interagency Council on Early Childhood Intervention, 4900 N. Lamar Blvd., Austin, TX 78751-2399, USA. Tel.: +1-512-424-6780; fax: +1-512-424-6834. E-mail address: [email protected] (M. Creusere).

0021-9924/$ – see front matter © 2003 Published by Elsevier Inc. doi:10.1016/S0021-9924(03)00036-4


Information about a speaker's intonation, facial expression, and gestures can add to or change the meaning of spoken discourse. Such nonverbal actions are considered to have multiple functions, including the expression of emotion (Bavelas & Chovil, 2000; Feldman, Tomasian, & Coats, 1999; Patterson, 1995). Research on the encoding and decoding of affective cues has increased greatly in recent years, partially as a result of the assumption that comprehension of others' emotional states has an important role in understanding communicative intentions (e.g., Berk, Doehring, & Bryans, 1983; Jenkins & Ball, 2000; Kelly, Barr, Church, & Lynch, 1999; Motley & Camden, 1988). In addition, there is growing evidence that the ability to recognize emotions in others is a nontrivial aspect of social competence and social adjustment (Holder & Kirkpatrick, 1991; Nowicki & Duke, 1994; Nowicki & Mitchell, 1998; Semrud-Clikeman & Hynd, 1991; Sisterhen & Gerber, 1989). For example, Nowicki and colleagues have reported several studies in which school-aged children's performance on the receptive facial expression and paralanguage subtests of the Diagnostic Analysis of Nonverbal Accuracy (DANVA; Nowicki & Duke, 1994) significantly correlates with peer ratings of popularity and with academic achievement. Similarly, Nowicki and Mitchell found an association between preschoolers' skills at reading facial and prosodic affect and social competence ratings by both peers and teachers.

Individuals who may be particularly at risk for experiencing problems with social interactions are those who have not only difficulty in understanding affective cues, but also deficits in cognitive or linguistic skills. Holder and Kirkpatrick (1991) noted, for example, that children with learning disabilities often have negative social relationships with both strangers and individuals close to them, and suggested that these relations may result from an inability to interpret facial expressions.
However, the learning-disabled population is heterogeneous, and it appears that performance on facial affect tasks differs depending on the kind of learning deficit a child has (Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998). Dimitrovsky and colleagues compared normally-developing children's ability to identify facial expressions with that of children with three types of learning deficits: nonverbal only, verbal only, and both nonverbal and verbal. The researchers found that, overall, the control group of children was better at the task than any of the three disabled groups. However, children with verbal-only deficits more accurately identified facial affect than did children with either nonverbal-only deficits or with both nonverbal and verbal deficits. Thus, although children with learning disabilities are, as a whole, at risk for developing poor social relations, some children may be more at risk than others.

In communicative interactions, cues to emotion are not generally limited to the visual modality; therefore, it is necessary to consider whether children with verbal disabilities demonstrate difficulties in understanding other signals of affect, such as those given by intonation. Unfortunately, few studies have examined this issue. Berk et al. (1983) gave children a task in which they had to identify angry, happy, and sad utterances and found that participants with language delays were less accurate judges than those with normally-developing language. The researchers suggested that children with language difficulties may need to devote a greater percentage of their attention to processing verbal content and, therefore, may fail to encode or recall affective intonation cues. Just as Berk et al. (1983) did not argue their results to reflect a general deficiency in emotion understanding, Courtright and Courtright (1983) seemed hesitant to claim that children with language
disorders have a deficit in affect comprehension per se. The latter experimenters also found evidence that language-impaired children are less proficient at interpreting vocal cues to affect than are children without impairment. In discussing their results, Courtright and Courtright considered accounts related to both deficits in linguistic feature detection and general cognitive association skills. Ultimately, they deferred explanation of their results until future investigations "could compare profitably the relationships among language development, vocalic sensitivity, and the sensitivity to visual displays of emotional meaning, that is, facial expressions" (p. 417).

To date, only one study has examined the question of whether difficulty with interpreting affect is modality specific for children with language impairment (LI). Trauner, Ballantyne, Chase, and Tallal (1993) required participants to identify happiness, sadness, and anger in still photographs and tape-recorded utterances. Children in the LI and control groups performed identically in the photograph task, but differently in the auditory task. As in the previous studies, LI children correctly identified emotions for the utterances less often than did children without language impairment. Consequently, Trauner et al. argued that children with language impairment have an affective deficit that is limited to the auditory modality. The nature of such a deficit is unclear, however, particularly when consideration is given to a production component of Trauner et al.'s investigation. In addition to having their participants identify the emotions underlying utterances, the researchers asked children to imitate a series of utterances in the three tested emotions. According to naive judges, children in the two groups performed equally well in this part of the study.
Because children must have been able to encode enough of the cues characteristic of the emotions in order to reproduce them accurately, a simple "feature detector" explanation is not likely for the apparent difficulty LI children have with affect identification tasks.

Trauner et al.'s (1993) conclusion that language-impaired and nonimpaired children do not differ in their ability to read facial expressions also warrants reconsideration. First, it is inconsistent with recent evidence by Dimitrovsky et al. (1998) indicating that normally-developing children are better at recognizing facial affect than are learning-disabled children, including those with only verbal deficits. Second, the visual modality task used by Trauner and colleagues can be argued to be considerably less demanding than the auditory modality measure they used for comparison. In the former portion of the study, participants merely had to label a woman in three photographs as being happy, sad, or angry. In the latter task, children listened to 15 utterances and then identified the emotion underlying an utterance by pointing to a picture of a woman with one of the expressions. Notably, Trauner et al. did not provide a direct comparison of children's performance in the two modalities, presumably because they did not have an equal number of trials.

One interesting, and potentially limiting, aspect of the above studies is that the vocal stimuli used were unfiltered. Investigations of vocal affect comprehension by adults with brain damage (e.g., Heilman, Bowers, Speedie, & Coslett, 1984; Pell & Baum, 1997; Van Lancker & Sidtis, 1992) usually employ filtered samples of speech in order to reduce the amount of linguistic processing necessary to complete the task. It would be of value to investigate the affect comprehension of language-impaired children without the confound of linguistic content.
Another general issue of concern regarding our understanding of the association between learning disabilities, language impairment, and nonverbal communication skills is that there have been few, if any, investigations in which the stimuli
adequately reflect a real-life communicative situation. For example, the facial affect studies discussed above (Dimitrovsky et al., 1998; Holder & Kirkpatrick, 1991; Nowicki & Duke, 1994) all used still photographs. The ecological validity of static displays of emotion has been an issue of concern (Wagner, 1997), as it is possible that photographs fail to provide the same information as rapid changes of facial expression in real communicative settings (Bavelas & Chovil, 1997; Davitz, 1964). In addition, previous studies have tended to focus on the comprehension of affect in either the visual or the verbal domain. Even the Trauner et al. (1993) study of language-impaired children, which did concern both domains, had separate tasks for testing affect comprehension in faces versus utterances. Although there is clear utility in examining these areas separately, Bavelas and Chovil argue that it is also important to analyze facial and verbal displays of affect as they occur in actual social interaction, accompanying each other (and other aspects of context) to create meaning.

Thus, while there is evidence that children with learning disabilities and language impairments perform worse than normally-developing children on tasks requiring recognition of emotions via facial expression or intonation, there remain several unanswered questions, particularly for children with specific language impairment (SLI). First, it is not known whether individuals with language impairment perform poorly on tasks of verbal affect recognition because they have deficits in interpreting emotional prosody or because they have difficulty processing the linguistic structure overlaying the emotional prosody. Second, it is unclear whether children with SLI would demonstrate better or worse comprehension of emotion for dynamic displays of facial expression than is reported for static ones.
Third, it has yet to be determined how children with SLI would fare on a task in which prosodic and facial cues to emotion were presented simultaneously. Experiments have provided inconsistent conclusions regarding multisensory emotion perception by older children and adolescents with learning disabilities; in some cases participants perform worse when one cue is presented in isolation, while in other cases participants perform worse when visual and auditory cues are combined (Sisterhen & Gerber, 1989). One potential cause for the inconsistency in these results is the tendency to treat participants with learning disabilities as a homogeneous group: individuals with a variety of deficits have been collapsed into one general category. Consequently, information on how children with primarily linguistic deficits integrate vocal and visual cues to affect is nonexistent.

The study reported here was intended to provide some insight into the questions raised above. The stimuli used differ from those of prior investigations of affect comprehension in children in that they include: (1) low-pass filtered utterances, so that children's perception of emotional prosody could be examined independent of the influence of semantic content and linguistic processing demands; (2) videotaped displays of emotion 2–4 s in length, rather than still photographs; and (3) items in which both visual and auditory cues to affect are available. In addition, the stimuli consisted of approximately equal numbers of exemplars by male and female actors, as there is growing evidence for an effect of stimulus gender on recognition of affect in both faces (e.g., Dimitrovsky, Spector, & Levy-Shiff, 2000; Kleck & Mendolia, 1990; Wagner, MacDonald, & Manstead, 1986) and speech (e.g., Dimitrovsky, 1964). Although the influence of gender on emotion recognition was not of primary interest in the present study, we included items by both men and women in hopes of extending the degree to which our findings could be generalized.


Because several features of this experiment differ from the prior studies in this area, strong hypotheses regarding its results were difficult to make. However, the existing literature suggests several outcomes:

1. If children with SLI have a deficit in a general processor for emotion perception (cf. Borod et al., 2000), their performance would differ from the performance of children without language impairment across all of our experimental conditions. More specifically, an overall difficulty with interpreting affective cues should manifest itself in lower scores for interpreting emotion presented by filtered speech alone, videotaped faces alone, and both filtered and nonfiltered speech presented simultaneously with faces.

2. If children with SLI have difficulty with interpreting affect specific to the auditory modality (cf. Trauner et al., 1993), they should have lower scores than the control group on filtered speech-only items, but not the face-only items. However, if deficits previously documented in the auditory modality (Berk et al., 1983; Courtright & Courtright, 1983; Trauner et al., 1993) are attributable to the linguistic content of the stimuli, then children with SLI will perform comparably to their normal-language (NL) peers in the filtered sentence condition.

3. If children with SLI are selectively poor at decoding facial affect cues, then they should perform worse than their peers on the face-only items.

Note that the existing literature has not considered how affective information in the auditory and visual modalities may interact for children with SLI. Such interactions would be particularly relevant if affective deficits are modality specific. It is possible that these children may benefit from having information presented simultaneously in the two modalities if children can use information from one modality to support weaker processing in the other.
However, it is equally likely that processing difficulties in one modality could interfere with children’s ability to benefit from good processing in the other domain.

1. Method

1.1. Participants

Participants were 52 English-speaking children between the ages of 4:0 and 6:5. Twenty-six (6 girls, 20 boys) of the participants had specific language impairment, with a mean age of 5:1 and an age range of 4:2–6:5; 26 (6 girls, 20 boys) had normal language, with a mean age of 5:1 and an age range of 4:0–6:5. Children in the NL group were matched to the SLI group on the basis of gender and chronological age (within 2 months). Scores from formal language testing (see Table 1) document significant differences in language skills for the NL and SLI groups. Language or IQ matches were not used because of methodological confounds that these types of matches introduce (Plante, Swisher, Kiernan, & Restrepo, 1993). Participants came from a variety of racial backgrounds and had mothers with levels of education ranging from high school to postbaccalaureate degrees.

Children with SLI met the following criteria: (1) they had been identified by a certified speech–language pathologist as SLI, (2) they were currently receiving speech and language services, (3) they passed a hearing screening at 25 dB for 500 Hz and 20 dB for 1000, 2000,


Table 1
Language and nonverbal intelligence scores by classification

Test     SLI mean (S.D.)    NL mean (S.D.)
BBTOP    38.54 (20.10)      74.50 (5.86)
PPVT     91.23 (15.27)      112.62 (15.30)
SPELT    4.33 (2.60)        0.97 (1.18)
KABC     96.54 (13.44)      104.81 (10.43)

BBTOP: Bankson–Bernthal Test of Phonology (Bankson & Bernthal, 1990); PPVT: Peabody Picture Vocabulary Test — Third Edition (Dunn & Dunn, 1997); SPELT: Structured Photographic Expressive Language Test — II (Werner & Kresheck, 1983); KABC: Kaufman Assessment Battery for Children (Kaufman & Kaufman, 1983).

and 4000 Hz, and (4) they scored above 75 (70 + 1 S.E.M.) on the Kaufman Assessment Battery for Children (KABC; Kaufman & Kaufman, 1983).

Children with NL met the hearing and IQ criteria listed above. In addition, (1) they were considered by their parents and teacher to be developing speech and language skills normally, (2) their language was judged to be within normal limits during observation by a certified speech–language pathologist (M.A.), and (3) their scores on formal speech and language testing were considered within normal limits.

1.2. Materials

Four blocks of stimuli, corresponding to four experimental conditions, were developed for this study. Six adult speakers (3 male, 3 female) were videotaped while repeating 54 utterances in a manner that would indicate that they were happy, sad, mad, or surprised. None of the utterances included explicit mention of an emotion term. The videos were captured on computer using MiroVideo DV200 (1999) and edited with Adobe Premiere 5.0 (1998) so that utterance presentation was randomized across emotions, speakers, and utterances. The file was then divided into two files of 432 utterances each. Four adults between the ages of 22 and 30 were asked to watch one of the files and write down which of the four emotions was being conveyed by each utterance. In addition, four adults between the ages of 19 and 28 performed the same task, but were not told in advance what the possible target emotions were. In order for an item to be used in the final stimulus set, it had to be correctly identified by all of the judges as demonstrating the intended emotion. "Joy" and "pleasure" were accepted as interpretations for happiness, as were "anger" and "pissed" for madness and "melancholy" for sadness. Utterances that had been judged as expected were further edited to create four blocks of stimulus items corresponding to the four experimental conditions.
For the face-only block, the audio track was removed, leaving 2–4 s displays of the speakers' facial expressions when making the utterance. For the filtered speech-only block, the video track was deleted and the audio track was low-pass filtered at 450 Hz using a set of analog filters. The filtering process effectively removed the semantic content of the utterances while retaining their prosodic contours. For the face-plus-filtered-speech block, the audio track was cut, low-pass filtered, and then imported and realigned with the original video track. The fourth block served as a control block and consisted of unfiltered speech plus face stimuli. This block was identical to the third, except that the audio track was not filtered.
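The study implemented the 450 Hz low-pass step with analog filters. Purely as an illustrative sketch (not the authors' procedure), a comparable digital filter can be built in Python with SciPy; the Butterworth design, filter order, and sampling rate below are assumptions for demonstration:

```python
# Illustrative sketch only: the study used analog filters. This digital
# Butterworth low-pass filter approximates the described 450 Hz step,
# which removes word-level spectral detail but keeps prosodic contours.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_450(signal, sample_rate_hz, cutoff_hz=450.0, order=5):
    """Attenuate spectral content above ~450 Hz in a 1-D audio signal."""
    nyquist = sample_rate_hz / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    # filtfilt runs the filter forward and backward, giving zero phase shift.
    return filtfilt(b, a, signal)

# Demonstration on a synthetic signal: a 200 Hz tone (within the passband)
# mixed with a 2000 Hz tone (well above the cutoff, so it is removed).
rate = 16000
t = np.arange(rate) / rate
mixed = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 2000 * t)
filtered = lowpass_450(mixed, rate)
```

In speech, energy below roughly 450 Hz carries the fundamental frequency and intensity envelope, which is why such filtering leaves emotional prosody audible while rendering the words unintelligible.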


For the second stage of the piloting procedure, two additional adults, blind to the target emotions, were presented with the items and asked to determine the emotion conveyed. The purpose of this procedure was to ensure that the final items adequately represented the intended emotion when only one cue was available. Items not identified as representing the intended emotion were removed from the pool.

Three different stimulus sets were constructed for the experiment. None of the items used in one set appeared in either of the other two sets. Each set had an approximately equal number of items (on average, 18) in all four blocks, and an approximately equal number of exemplars by the six speakers and of the four emotions. The order of presentation of the blocks was different for each set. In total, 72 items were presented to each child.

1.3. Procedures

Children participated in the study on an individual basis with an experimenter present to monitor and encourage on-task behavior. Stimuli were presented via computer. Prior to each block, the experimenter gave the participant a brief explanation of the nature of the stimuli; for example, filtered speech items were described as sounding as if "speakers had their hands over their mouth or were in another room behind a door." After presentation of each item, the experimenter asked the child a forced-choice question measuring his or her interpretation of the emotion conveyed (e.g., "Was she mad or surprised?"). The order of the correct response was counterbalanced, as was the emotion selected for the incorrect response. In addition, there were two versions of the questionnaire for each stimulus set, which differed only by the valence of the emotion given for the incorrect response; for example, a test question that was worded "Was he sad or happy?" on the first query version would be "Was he sad or mad?" on the second version.
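The counterbalancing scheme described above can be sketched in Python. This is a hypothetical illustration, not the study's materials: the emotion labels match the study, but the question template and alternation rule are assumptions:

```python
# Illustrative sketch (question wording and alternation scheme are
# assumptions, not the study's scripts): build two-alternative questions
# in which the position of the correct emotion alternates across items.
import itertools
import random

EMOTIONS = ["happy", "sad", "mad", "surprised"]

def build_question(target, foil, correct_first):
    """Return a forced-choice question with the correct answer's position
    counterbalanced between first and second."""
    options = [target, foil] if correct_first else [foil, target]
    return f"Was she {options[0]} or {options[1]}?"

random.seed(0)
questions = []
# Cycle through the target emotions; draw each foil from the remaining
# emotions so incorrect alternatives vary as well.
for i, target in enumerate(itertools.islice(itertools.cycle(EMOTIONS), 8)):
    foil = random.choice([e for e in EMOTIONS if e != target])
    questions.append(build_question(target, foil, correct_first=(i % 2 == 0)))
```

A second questionnaire version, as in the study, would keep each target but swap the foil for one of a different valence.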

2. Results

Because there was an unequal number of exemplars in each block across the three stimulus sets, it was necessary to select a subset of the items in order to complete the analyses relevant to the predictions for this study. An item analysis was done for the three stimulus sets and, for each block, the 14 exemplars with the highest correlations to participants' total scores were used in subsequent statistical tests. This procedure assured that items with the highest reliability were selected, without biasing item selection in terms of group performance. Utterances analyzed in the face + unfiltered speech blocks are listed in Appendix A.

A one-way ANOVA indicated that there was no difference in children's performance across the three stimulus sets; thus, the fact that children received different items and different block orders did not affect overall performance. In addition, an effect of query version was not found. Finally, there was no sex difference for task performance. However, this should not be considered definitive in that the 10:3 male:female ratio may have made any true performance differences difficult to detect.

Although prior researchers of affect recognition in language-delayed children (e.g., Berk et al., 1983; Courtright & Courtright, 1983; Trauner et al., 1993) have used forced-choice


Table 2
Descriptive statistics by cue block and classification

Variable                   SLI mean (S.D.)   NL mean (S.D.)   Total mean (S.D.)
Filtered speech only       7.77 (1.56)       7.92 (2.08)      7.85 (1.82)
Face only                  8.46 (2.16)       9.19 (2.35)      8.83 (2.26)
Filtered speech + face     8.50 (2.45)       9.73 (3.55)      9.12 (3.08)
Unfiltered speech + face   9.00 (2.35)       11.04 (2.41)     10.02 (2.57)
Total affect score         33.73 (6.28)      37.88 (8.22)     35.81 (7.54)
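The scores in Table 2 come from two-alternative forced-choice questions, so chance performance on a 14-item block is 7 correct. As a hedged illustration only, with simulated scores rather than the study's data, a one-sample comparison against that chance level can be run in Python with SciPy:

```python
# Illustrative sketch with simulated scores (not the study's data):
# check that a group's mean accuracy on a 14-item, two-alternative block
# exceeds the chance level of 7 correct, via a one-sample t-test.
import numpy as np
from scipy import stats

N_ITEMS, CHANCE = 14, 7.0
rng = np.random.default_rng(0)
# Hypothetical group of 26 children answering correctly ~65% of the time,
# roughly the accuracy range shown in Table 2.
scores = rng.binomial(N_ITEMS, 0.65, size=26).astype(float)

t_stat, p_value = stats.ttest_1samp(scores, popmean=CHANCE)
above_chance = (t_stat > 0) and (p_value < 0.05)
```

With strongly above-chance simulated accuracy and 26 participants, the test comfortably rejects the chance-level null, mirroring the pattern the authors report for both groups.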

paradigms, we wished to assure that our participants were capable of making forced-choice responses at above chance levels. A paired t-test revealed that correct responses occurred at above chance rates for both the SLI (t = 4.66, P < 0.0001) and NL (t = 6.14, P < 0.0001) groups. Furthermore, both groups responded at above chance levels to each of the individual affect conditions tested (P = 0.03 or less for each condition).

As can be seen in Table 2, children's recognition of affect was influenced by the type of cues available to them. A mixed ANOVA (with group as the between-subjects factor and cue block as the within-subject factor) revealed that the main effect of group was statistically significant, F(1, 50) = 4.20, P < 0.05, as was the main effect of cue block, F(3, 50) = 12.43, P < 0.01. The interaction between the two variables was not significant. However, planned comparisons suggested that the difference between the two groups' performances is best represented by scores in one block in particular, that in which items were presented with both facial cues and unfiltered speech; the total scores for these items are the only ones for which SLI and NL children differed significantly (t(50) = 3.09, P < 0.01).

Within-group comparisons revealed more differences between conditions for the NL group than for the SLI group. For the NL group, paired samples t-tests showed that performance across the cue blocks was significantly different for all contrasts except face only versus face and filtered speech (filtered speech + face versus filtered speech only: t(25) = 2.63, P < 0.05; filtered speech only versus face only: t(25) = 2.55, P < 0.05; unfiltered speech + face versus filtered speech only: t(25) = 6.12, P < 0.01; face only versus unfiltered speech + face: t(25) = 4.50, P < 0.01; and unfiltered speech + face versus filtered speech + face: t(25) = 2.35, P < 0.05). For the SLI group, performance across blocks differed significantly only for one contrast, unfiltered speech + face versus filtered speech only, t(25) = 2.43, P < 0.05.

To determine if there was any relation between subject characteristics (i.e., age, speech, language, and cognitive performance) and affect recognition, we correlated these variables with scores for each block of the affect task. The results for the NL and SLI groups are presented in Tables 3 and 4, respectively. As Table 3 indicates, significant correlations were found for four pairs of variables for the NL group. More specifically, children's age in months was associated with their total scores on the affect task, as well as their scores on the face only and face + filtered speech blocks. In addition, scores on the face + unfiltered speech items were associated with performance on the KABC (Kaufman & Kaufman, 1983). In contrast, a significant correlation for the SLI group was found only for performance on the face + filtered speech block and children's age in months (see Table 4). Thus, the association between age and


Table 3
Pearson correlation coefficients between linguistic and cognitive tasks and components of the affect task for normal-language children

Variable                   BBTOP   PPVT   SPELT   KABC    Age in months
Filtered speech only       0.08    0.11   0.17    0.25    0.19
Face only                  0.12    0.16   0.07    0.28    0.41a
Filtered speech + face     0.21    0.12   0.10    0.15    0.45a
Unfiltered speech + face   0.07    0.04   0.06    0.44a   0.35
Total affect score         0.13    0.14   0.08    0.34    0.47a

a Correlation is significant at the 0.05 level (two-tailed).

Table 4
Pearson correlation coefficients between linguistic and cognitive tasks and components of the affect task for language-impaired children

Variable                   BBTOP   PPVT   SPELT   KABC   Age in months
Filtered speech only       0.18    0.31   0.33    0.02   0.01
Face only                  0.08    0.34   0.01    0.14   0.00
Filtered speech + face     0.04    0.03   0.02    0.09   0.41a
Unfiltered speech + face   0.23    0.24   0.13    0.03   0.26
Total affect score         0.14    0.14   0.14    0.00   0.26

a Correlation is significant at the 0.05 level (two-tailed).

performance on the various affect cue blocks was, with the exception of the face-only items, the same for SLI and NL children. Interestingly, the relations among cognitive skills and performance on the affect task were dissimilar for the groups of children only on the type of item for which their scores most differed, scenarios in which both facial and unfiltered vocal cues to affect were available.
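The correlational analyses behind Tables 3 and 4 can be sketched in Python. The data below are simulated for illustration, under the assumed ages of 4:0–6:5 (48–77 months) and a hypothetical age-related improvement; they are not the study's measurements:

```python
# Illustrative sketch with simulated data (not the study's): correlate age
# in months with a block score, reporting Pearson's r and a two-tailed p,
# as in Tables 3 and 4.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 26                                 # children per group, as in the study
age_months = rng.uniform(48, 77, n)    # assumed age range 4:0-6:5 in months
# Hypothetical block score that improves modestly with age, plus noise.
block_score = 0.2 * age_months + rng.normal(0.0, 2.0, n)

r, p = stats.pearsonr(age_months, block_score)
significant = p < 0.05                 # two-tailed criterion used in the tables
```

Repeating this for each predictor (BBTOP, PPVT, SPELT, KABC, age) against each block score reproduces the structure of the two correlation tables.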

3. Discussion

Similar to prior investigations in this area (Berk et al., 1983; Courtright & Courtright, 1983; Dimitrovsky et al., 1998; Trauner et al., 1993), we found an overall difference in performance between NL and SLI children on a task requiring recognition of emotional meaning. More specifically, total scores for the affect task were lower for SLI participants than for NL participants. However, closer inspection of the pattern of results across the stimulus blocks suggests that the present data are, in several ways, inconsistent with prior claims regarding the abilities of language-impaired children in interpreting affect. We will now consider our results in light of previous data and theoretical claims.

Research has suggested that SLI children have difficulty with vocal affect tasks when unfiltered speech stimuli are employed (Berk et al., 1983; Courtright & Courtright, 1983; Trauner et al., 1993). That language-impaired participants in this study demonstrated lower performance than normally-developing children for the items in which both the face and unfiltered speech were available is arguably in line with these previous findings. Yet, when
presented with filtered speech only, or facial expression and filtered speech, SLI children recognized affect as frequently as their NL peers. These conditions were included to evaluate affective processing without linguistic content. However, the explanations for differential performance in these conditions go beyond a simple linguistic–nonlinguistic distinction. Age effects must also be considered. Our participants were considerably younger than those in the studies by Berk et al. and Trauner et al.; the mean ages of the SLI and NL groups in the latter investigation were 10:8 and 10:6, respectively. Furthermore, in our study, the younger subjects in both groups had more difficulty with the filtered speech-only condition than the other conditions. Note also that an effect of age was found in our study for total scores on the affect task by NL, but not SLI, children. Both NL and SLI groups showed a significant association between age and the face + filtered speech condition. Therefore, it is possible that the filtered speech conditions proved sufficiently difficult for both groups, because of their young age, that no group differences emerged. If this explanation of performance in the filtered speech condition is true, the difference between language-impaired and nonimpaired children's skills in recognizing vocal affect may become increasingly apparent as children grow older. Our data may thus miss deficits that emerge at later points in development, such as those suggested by other researchers (e.g., Berk et al., 1983; Trauner et al., 1993). It should be noted, however, that the subjects in Courtright and Courtright's (1983) investigation were approximately the same age as our participants (e.g., the mean age of the SLI group was 5:1 in both studies).
Although those researchers found a difference in performance by SLI and NL children on unfiltered speech stimuli, we did not find a similar difference in interpretive ability when children were presented with filtered speech-only items. Therefore, a second possibility is that SLI children's demonstrated difficulty with emotional prosody (Berk et al., 1983; Courtright & Courtright, 1983; Trauner et al., 1993) is influenced by their linguistic impairments. Data from the blocks in which unfiltered and filtered versions of items can be compared support this position; NL and SLI participants in the present study did not differ significantly in their scores on face + filtered speech items, but did differ in their scores on face + unfiltered speech items. For both groups, performance was improved by the presence of unfiltered speech compared with filtered speech. However, the SLI group derived proportionately less benefit from the presence of unfiltered speech. As can be seen in Appendix A, the content of the sentences was affectively neutral, in that an attempt was made to use items that were not strongly biased towards a particular emotion interpretation. In addition, items with an arguable potential for bias sometimes were presented with the affect they were biased towards (e.g., "I'm going to get married" — happy) and sometimes were not (e.g., "The coke tastes flat" — happy). Therefore, it is not the case that improved performance was due to the linguistic content. It is more likely that the familiarity of unfiltered speech facilitated both groups, but provided less help to language-impaired children. If linguistic processing demands reduced the ability of SLI children to attend to the facial element of the task, then they would derive less benefit from the added familiarity of unfiltered speech.
Differential performance by both groups in the face + unfiltered speech condition relative to the other conditions is somewhat surprising, given that participants were reminded before each block that their task was only to identify the emotion presented in the displays. Presented with a potentially demanding task, the SLI children in particular could have used a relatively
simple strategy: ignore the verbal information and attend only to the facial expression. A face-focal strategy is a viable one, given that the face is often argued to be the most effective channel for conveying emotion (e.g., Etcoff & Magee, 1992; Hess, Kappas, & Scherer, 1988). Furthermore, SLI children interpreted face-only emotional items as well as their NL counterparts did. Although SLI children could theoretically have relied on the cues presented by the visual channel for the face + unfiltered speech blocks, they did not. In fact, there is recent evidence that, in adults, bimodal perception of emotion mandates integration of the information from each channel (de Gelder & Vroomen, 2000). If young children also automatically integrate visual and auditory cues when interpreting affect, then children with language impairment may be at a particular disadvantage.

Just as our results call into question previous claims that language-impaired children have a blanket deficit in judging vocal affect, the data presented here are also inconsistent with previous investigations of facial affect recognition by language-impaired individuals (e.g., Dimitrovsky et al., 1998, 2000). Although Dimitrovsky and colleagues have provided convincing evidence that SLI children and children with both verbal and nonverbal deficits are less successful than normally-developing children at reading facial affect, we found no group differences for face-only items. However, there is a potentially important distinction in the nature of the stimuli used in the current and past studies. Here, we employed moving displays of facial expression rather than still photographs. Notably, Ekman, Friesen, and Ellsworth (1982) have suggested that movies or videotapes may be better than still photographs for answering some types of research questions. Dynamic and static expressions are likely to provide different types of information important for affect interpretation (Davitz, 1964).
Not only is an immobile face unusual in human interaction, but a "precise but fleeting expression becomes something very different when frozen in time" (Bavelas & Chovil, 1997, p. 335). Because we did not find a difference between our groups for moving facial displays of emotion, whereas other researchers (Dimitrovsky et al., 1998, 2000) have found such a difference for still displays, it is possible that the transient cues in dynamic expressions have a special relevance to affect interpretation and may have a facilitative effect for SLI children. Clearly, future studies of emotion reading by children would be well served by examining this issue further. Investigations with adults have begun to explore how the nature of experimental stimuli influences emotion perception. For example, there is evidence that (1) in the absence of verbal cues, posed expressions are easier to interpret than spontaneous expressions (Motley & Camden, 1988; Wagner et al., 1986); (2) for spontaneous expressions, profile versus full-face views have a complex influence on emotion perception (e.g., Kleck & Mendolia, 1990); and (3) for posed emotional stimuli, compound facial expressions are more accurately read than singular ones (LaPlante & Ambady, 2000).

The evidence here suggests that SLI children may miss cues to speakers' emotional states and, therefore, are likely to face challenges when determining communicative intentions, which rely at least partially on emotional inferencing (e.g., Berk et al., 1983; Jenkins & Ball, 2000; Kelly et al., 1999; Motley & Camden, 1988). Comprehension of jokes, irony, and sarcasm may be particularly difficult, as such nonliteral speech acts involve incongruous and emotional pieces of information (Worling, Humphries, & Tannock, 1999). Thus, the data presented serve to highlight the claim that deficits in pragmatic skills may accompany other impairments of language (e.g., Gelfer, 1996).
The impeded ability of our SLI participants to interpret affective information may place them at risk for poor social competence and adjustment (Holder & Kirkpatrick, 1991; Nowicki & Duke, 1994; Nowicki & Mitchell, 1998; Semrud-Clikeman & Hynd, 1991; Sisterhen & Gerber, 1989).

Acknowledgements

This work was supported by National Multipurpose Research and Training Grant DC01409 from the National Institute on Deafness and Other Communication Disorders (NIDCD).

Appendix A

There's a dog in the house.
Anna likes to eat shrimp.
It's midnight at the oasis.
It's been quite the ride.
Here's my school book.
Make way for the king.
I don't live here anymore.
They have a three hour wait.
The coke tastes flat.
Her turtle is kicking the snail.
His plane is already late.
That cost five-hundred dollars.
I think I know you.
Romans had pipelines.
Mr. Quinn would like to meet you.
You make me think a lot.
It's snowing in June.
The big fish swims the best.
Somebody moved my motorbike.
You beat me home.
The car is made from ice cubes.
Your camel is back.
The pig has wings.
The doctor is ready now.
I can't see you.
Pat got that big grant.
Peace has come again.
You want a cookie.
The sky is falling.
Queen Mary had a baby.
We're finished already.
What a nice house.
I'm going to get married.
There's money on the ground.
Dad missed the bus.
The phantom is in my mind.
The cats have climbed the tree.
You might be the one.
Pink shadows follow his steps.
We landed on the moon.
There's no more room.
The water is rising.

Appendix B. Continuing education

1. Information regarding a speaker's communicative intent is garnered by
a. Intonation
b. Gestures
c. Facial expression
d. All of the above
e. None of the above
2. The findings of Berk et al. (1983), Courtright and Courtright (1983), and Trauner et al. (1993) converge to suggest the following
a. Children with language delays have difficulty recognizing vocal affect, but most likely not facial affect
b. Children with language delays demonstrate difficulties in comprehending both verbal and facial affect
c. Children with language delays display deficits in facial affect comprehension only
d. Children with language delays display deficits in the ability to interpret lexical labels referring to affect, but not affect itself
e. Children with language delays are not impaired in their ability to read emotion cues
3. Potential limitations in interpreting the results of the above studies are
a. Recent evidence (Dimitrovsky et al., 1998) suggesting that children with learning disabilities (including verbal-only learning deficits) are less accurate at recognizing facial affect than normally-developing children
b. The vocal stimuli used in prior studies were unfiltered, potentially confounding linguistic processing with affect comprehension
c. Paradigms often required children to name emotions, confounding language production with affect comprehension
d. a and b above
e. None of the above
4. Which of the following statements is supported by the results of the current study?
a. Children with SLI performed better than normally-developing children at recognizing cues of emotion overall
b. Children with SLI had more difficulty than children without language impairment when presented with stimuli containing unfiltered speech and facial expression
c. Children with SLI more accurately detected affect in filtered speech and face-only situations than did children with normally-developing language
d. Children with and without SLI recognized emotional cues to affect in all experimental situations equally well
e. Children with SLI were less accurate in detecting emotion when presented only with filtered speech
5. Based on the literature review and discussion, the results of the current study are significant in that they suggest that
a. Although children with SLI may not have a deficit in a general processor for emotion perception, the nature of their language impairments may influence their recognition of communicative intent in a variety of real-life situations
b. Difficulty in recognizing emotions is a likely cause of SLI; therefore, therapy with language-delayed children should primarily target affect comprehension
c. Affect comprehension does not appear to be associated with language impairment; thus, the proposed link between inability to read facial expressions and negative social relationships has been nullified
d. Sufficient information has been gathered in the area of SLI and affect comprehension; consequently, there is no longer a need to pursue investigations into this issue
e. All of the above

References

Adobe Premiere 5.0 [Computer software]. (1998). San Jose, CA: Adobe Systems, Inc.
Bankson, N., & Bernthal, J. (1990). Bankson–Bernthal test of phonology. Chicago: The Riverside Publishing Company.
Bavelas, J. B., & Chovil, N. (1997). Faces in dialogue. In J. A. Russell & J. M. Fernandez-Dols (Eds.), The psychology of facial expression. Cambridge: Cambridge University Press.
Bavelas, J. B., & Chovil, N. (2000). Visible acts of meaning: An integrated message model of language in face-to-face dialogue. Journal of Language and Social Psychology, 19, 163–194.
Berk, S., Doehring, D. G., & Bryans, B. (1983). Judgments of vocal affect by language-delayed children. Journal of Communication Disorders, 16, 49–56.
Borod, J. C., Pick, L. H., Hall, S., Sliwinski, M., Madigan, N., Obler, L. K., Welkowitz, J., Canino, E., Erhan, H. M., Goral, M., Morrison, C., & Tabert, M. (2000). Relationships among facial, prosodic, and lexical channels of emotional perceptual processing. Cognition and Emotion, 14, 193–211.
Courtright, J. A., & Courtright, I. C. (1983). The perception of nonverbal vocal cues of emotional meaning by language-disordered and normal children. Journal of Speech and Hearing Research, 26, 412–417.
Davitz, J. R. (1964). A review of research concerned with facial and vocal expression of emotion. In J. R. Davitz (Ed.), The communication of emotional meaning. New York: McGraw-Hill.
de Gelder, B., & Vroomen, J. (2000). The perceptions of emotions by ear and by eye. Cognition and Emotion, 14, 289–311.
Dimitrovsky, L. (1964). The ability to identify the emotional meaning of vocal expressions at successive age levels. In J. R. Davitz (Ed.), The communication of emotional meaning. New York: McGraw-Hill.
Dimitrovsky, L., Spector, H., & Levy-Shiff, R. (2000). Stimulus gender and emotional difficulty level: Their effect on recognition of facial expressions of affect in children with and without LD. Journal of Learning Disabilities, 33, 410–416.
Dimitrovsky, L., Spector, H., Levy-Shiff, R., & Vakil, E. (1998). Interpretation of facial expressions of affect in children with learning disabilities with verbal or nonverbal deficits. Journal of Learning Disabilities, 31, 286–292.
Dunn, L. M., & Dunn, L. M. (1997). Peabody picture vocabulary test — third edition. Circle Pines, MN: American Guidance Service, Inc.
Ekman, P., Friesen, V., & Ellsworth, P. (1982). Methodological decisions. In P. Ekman (Ed.), Emotion in the human face. Cambridge: Cambridge University Press.
Etcoff, N. L., & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240.
Feldman, R. S., Tomasian, J. C., & Coats, E. J. (1999). Nonverbal deception abilities and adolescents' social competence: Adolescents with higher social skills are better liars. Journal of Nonverbal Behavior, 23, 237–250.
Gelfer, M. P. (1996). Survey of communication disorders: A social and behavioral perspective. New York: McGraw-Hill.
Heilman, K. M., Bowers, D., Speedie, L., & Coslett, H. B. (1984). Comprehension of affective and nonaffective prosody. Neurology, 34, 917–921.
Hess, U., Kappas, A., & Scherer, K. (1988). Multichannel communication of emotion: Synthetic signal production. In K. Scherer (Ed.), Facets of emotion: Recent research. Hillsdale, NJ: Erlbaum.
Holder, H. B., & Kirkpatrick, S. W. (1991). Interpretation of emotion from facial expressions in children with and without learning disabilities. Journal of Learning Disabilities, 24, 170–177.
Jenkins, J. M., & Ball, S. (2000). Distinguishing between negative emotions: Children's understanding of the social-regulatory aspects of emotion. Cognition and Emotion, 14, 261–282.
Kaufman, A. S., & Kaufman, N. L. (1983). Kaufman assessment battery for children. Circle Pines, MN: American Guidance Service, Inc.
Kelly, S. D., Barr, D. J., Church, R. B., & Lynch, K. (1999). Offering a hand to pragmatic understanding: The role of speech and gesture in comprehension and memory. Journal of Memory and Language, 40, 577–592.
Kleck, R. E., & Mendolia, M. (1990). Decoding of profile versus full-face expressions of affect. Journal of Nonverbal Behavior, 14, 35–49.
LaPlante, D., & Ambady, N. (2000). Multiple messages: Facial recognition advantage for compound expressions. Journal of Nonverbal Behavior, 24, 211–224.
MiroVideo DV200 [Computer software]. (1999). Mountain View, CA: Pinnacle Systems, Inc.
Motley, M. T., & Camden, C. T. (1988). Facial expression of emotion: A comparison of posed expressions versus spontaneous expressions in an interpersonal communication setting. Western Journal of Speech Communications, 52, 1–22.
Nowicki, S., & Duke, M. P. (1994). Individual differences in the nonverbal communication of affect: The diagnostic analysis of nonverbal accuracy scale. Journal of Nonverbal Behavior, 18, 9–35.
Nowicki, S., & Mitchell, J. (1998). Accuracy in identifying affect in child and adult faces and voices and social competence in preschool children. Genetic, Social, and General Psychology Monographs, 124, 39–59.
Patterson, M. L. (1995). Invited article: A parallel process model of nonverbal communication. Journal of Nonverbal Behavior, 19, 3–29.
Pell, M. D., & Baum, S. R. (1997). The ability to perceive and comprehend intonation in linguistic and affective contexts by brain-damaged adults. Brain and Language, 57, 80–99.
Plante, E., Swisher, L., Kiernan, B., & Restrepo, M. A. (1993). Language matches: Illuminating or confounding? Journal of Speech and Hearing Research, 36, 772–776.
Semrud-Clikeman, M., & Hynd, G. W. (1991). Specific nonverbal and social-skills deficits in children with learning disabilities. In J. E. Obrzut & G. E. Hyde (Eds.), Neuropsychological foundations of learning disabilities: A handbook of issues, methods, and practice. San Diego: Academic Press.
Sisterhen, D. H., & Gerber, P. J. (1989). Auditory, visual, and multisensory nonverbal social perception in adolescents with and without learning disabilities. Journal of Learning Disabilities, 22, 245–249.
Trauner, D. A., Ballantyne, A., Chase, C., & Tallal, P. (1993). Comprehension and expression of affect in language-impaired children. Journal of Psycholinguistic Research, 22, 445–452.
Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963–970.
Wagner, H. L. (1997). Methods for the study of facial behavior. In J. A. Russell & J. M. Fernandez-Dols (Eds.), The psychology of facial expression. Cambridge: Cambridge University Press.
Wagner, H. L., MacDonald, C. J., & Manstead, A. S. R. (1986). Communication of individual emotions by spontaneous facial expressions. Journal of Personality and Social Psychology, 50, 737–743.
Werner, E. O., & Kresheck, J. D. (1983). Structured photographic expressive language test — II. DeKalb, IL: Janelle Publications, Inc.
Worling, D. E., Humphries, T., & Tannock, R. (1999). Spatial and emotional aspects of language inferencing in nonverbal learning disabilities. Brain and Language, 70, 220–239.