The social skills intake interview: Reliability and convergent validity assessment


J. Behav. Ther. & Exp. Psychiat., Vol. 14, No. 4, pp. 305-310, 1983. Printed in Great Britain. 0005-7916/83 $3.00 + .00 © 1983 Pergamon Press Ltd.
PETER M. MONTI
Providence VA Medical Center and Brown University

Summary: Interview data on 77 hospitalized male psychiatric patients were compared with professional staffs' in vivo evaluations, trained judges' evaluations of videotaped role-plays, and patients' self-report evaluations of their social skill. The results on overall skill ratings showed good reliability between the two senior research staff members who conducted the interviews (r = 0.66) and between the video judges (r = 0.92). While the convergent ratings of professionals corresponded significantly with the interview ratings, the self-report ratings did not. Agreement between raters and correspondence among rating sources are discussed in the context of the amount of structure in each observational setting.

The intake interview has gained recognition as an assessment instrument in behavior therapy and research (Haynes and Jensen, 1979). Recent work in behavioral assessment has emphasized the unique and important role of interview data (Bellack, 1979; Hay et al., 1979; Haynes and Jensen, 1979). Indeed, Haynes (1978) suggests that the interview is probably the most frequently employed assessment instrument in behavioral and psychological intervention programs. As Haynes et al. (1981) point out, given its pivotal role as a pre-intervention assessment instrument, determining the validity of the intake interview is crucial. In the assessment of social skills, interview data may play an especially important role in that two kinds of relevant information may be obtained: information of a general social/interpersonal nature, as well as in vivo observational data on the client's behavior during the interview itself (Bellack and Hersen, 1978). Although there is potentially more to be gained with a behavioral interview in the social skills area than in other behavioral areas (presumably because indicators of the construct of interest, social skill, are observed during the interview itself), there are perhaps also more limitations.

Not only does one need to be concerned about the usual issues of validity and reliability of interview data (Haynes and Jensen, 1979), but there are additional limitations relating to the situational specificity of the observed interview behavior (Bellack, 1979). To be sure, the context of the interview itself may not be representative of less structured interpersonal interactions. Nevertheless, the potential benefits of interview-obtained data (e.g. the fact that they are relatively inexpensive to obtain, consume relatively little staff time, and provide information that may not be obtainable through other avenues) seem to outweigh their limitations, and several recent studies have reported the use of behavioral interviews in the assessment of social skills (Bellack, Hersen and Turner, 1978; Monti, Corriveau and Curran, in press (a); Wessberg et al., 1981). Indeed, further evidence of the important role of interviews in social skills research is that several treatment outcome studies have selected patients primarily on the basis of social skills interviews (e.g. Falloon et al., 1977; Marzillier, Lambert and Kellett, 1976). While there has been wide utilization of the intake interview in social skills clinical work and research, there has not been adequate description of the instruments employed nor sufficient evaluation of their validity and reliability.

This study was funded in part by a research grant from the Veterans Administration. Reprints may be obtained from Peter M. Monti, Veterans Administration Medical Center, Davis Park, Providence, Rhode Island 02908.


Inferences regarding the validity of social skills interviews are usually based on "face validity" or on the apparent relevance of the information elicited to the researcher's concept of skillfulness. Furthermore, the reliability of the social skills interview has received relatively little empirical attention (Bellack, 1979). While several recent studies have examined behavioral intake interviews with a variety of populations (e.g. Haynes et al., 1981; Wincze, Hoon and Hoon, 1978; Hay et al., 1979), to date no study has focused specifically on social skills per se. Findings from published reports in other clinical areas generally suggest that with some populations behavioral interview data may be reliable and valid (Haynes and Jensen, 1979). Nevertheless, as Haynes et al. (1981) point out, "inferences about the validity of an assessment instrument, such as the interview, are confined to the specific population and conditions associated with a particular study" (p. 380). Given this line of argument, there is clearly no scientific basis for assuming the reliability and validity of data obtained from social skills intake interviews with psychiatric patients. Given the present status of the valid assessment of the construct of social skill (Bellack et al., 1979), the lack of systematically collected empirical data on the social skills intake interview leaves a serious gap in our knowledge. This gap is especially problematic given the clinical (and sometimes research) practice of employing interviews to obtain relevant historical and current data on the social functioning and social skill of socially deficient patient populations.

The purpose of the present study was to obtain reliability and validity data on a particular social skills interview recently employed with psychiatric patients (Monti et al., in press (a); Wessberg et al., 1981). Since we face a criterion problem whenever attempting to validate a social skills assessment procedure (Curran and Mariotto, 1980), this study employed a convergent rating approach (Wiggins, 1973) in attempting to obtain validity data on the interview. Interview data provided by highly experienced senior members of our research team were compared with ratings from three other sources: (a) the patients themselves, (b) clinical staff members of the treatment units to which the patients had been admitted, and (c) trained judges' ratings of patients' videotaped role-play social behavior.

METHOD

Subjects

Eighty-seven male psychiatric patients volunteered to serve as subjects for this study. Patients were informed that the investigation was designed to examine their social behavior. Thirty-eight were inpatients on a psychiatric unit and 49 were day hospital patients from the same medical center. Subjects ranged from 21 to 67 yr of age, with a mean of approximately 39 yr. Subjects included nearly all patients admitted to either unit during a 5-month period. Patients were included from both units to increase the sample size of the present study.* The two patient groups can be considered very similar. Indeed, the age range, diagnostic categories and socioeconomic status of the two groups overlap nearly completely. Also, the day hospital program is considered an alternative to inpatient hospitalization, and there is usually a steady flow of patients between the units. Patients were asked to participate in this study upon admission to the medical center. Exclusionary criteria included organic brain syndrome, thought disorder, or inability to follow directions as indicated by the patient's report that he could not understand verbal instructions. Five patients from each group either refused to participate, were excluded, or were lost to attrition, leaving a final N of 77. Patients' medications remained unchanged during their 3 days in the study. Also, no patient had received formal social skills training prior to participation in the study.

Procedure

The study consisted of comparing interviewers' ratings of patients' social skill with those of clinical staff, those of the patients themselves, and those of highly trained videotape judges. An abbreviated form of the Social Performance Survey Schedule (Lowe and Cautela, 1978) as well as an overall rating of patients' social skill on an 11-point Likert-type scale (anchored by the descriptions not at all skillful and extremely skillful) were completed by all rating sources, with the exception of the video judges, who completed only the overall rating of skill. The basic description of the social skills approach offered by Goldsmith and McFall (1975) was used as the working definition for this study.

*Portions of the data reported in this study were collected as part of two other studies recently conducted in our laboratory. Some of the data from the inpatient sample were reported in Wessberg et al. (1981) and some of the data from the day hospital sample were reported in Monti et al. (in press (a)).


These authors suggest that a skills training approach:

assumes that each individual always does the best he can, given his physical limitations and unique learning history, to respond as effectively as possible in every situation. Thus, when an individual's "best effort" behavior is judged to be maladaptive, this indicates the presence of a situation-specific skill deficit (p. 51).

The Social Performance Survey Schedule is a social inventory developed by Lowe and Cautela to measure social skills. It consists of 100 items describing a variety of social behaviors, half of which are positive and half negative. The respondent (either an informant or the patient) is instructed to indicate how frequently the patient engages in the behaviors. The abbreviated form of the Social Performance Survey Schedule employed in this study has been used and described in previous studies conducted in our laboratory (Monti et al., in press (a); Wessberg et al., 1981).

Interview ratings

Each patient was individually interviewed by a male senior investigator (a licensed clinical psychologist) and a female research associate, neither of whom had had any previous contact with the patients. While the senior investigator conducted the actual interview, the research associate observed the entire protocol. The interview consisted of a series of social/developmental questions relating to the patient's adolescent and adult social behavior.* It focused on such topics as the patient's educational and employment history, his interests and leisure-time activities, his family and other social support systems, and his current behavior in a variety of social situations. Although all questions in the interview format were asked of each patient by the senior investigator, he exercised some flexibility regarding the order of presentation of the stimulus questions. The interviews lasted approximately 30 min. Based on data obtained during the interview (including both the verbal content of the responses and the patient's interview behavior), the senior investigator completed the abbreviated Social Performance Survey Schedule as well as an overall rating of global skill for each patient. Independent reliability ratings were provided by the research associate on both measures for all patients.

Role-play ratings

After the interview, patients individually participated in the Simulated Social Interaction Test (Monti et al., in press). The Simulated Social Interaction Test consisted of eight videotaped role-play situations from which trained judges made global ratings of social skill. The test took place in a videotape studio where each patient was seated next to a confederate. A narrator described a hypothetical situation and, following this narration, a confederate delivered a predetermined prompt to the patient and awaited his response. Four scenes involved a male confederate and four a female confederate. Patients were informed that the confederates were not allowed to talk beyond the programmed responses nor between scenes. Patients were further instructed to respond as they normally would in each particular situation, and any gross misunderstandings (e.g. the patient not understanding that he should respond to the confederate) were clarified prior to presenting the actual test situations.

The videotapes of the Simulated Social Interaction Test were rated for social skill by four undergraduate research assistants who were trained to attend to specific indicators of social skill (e.g. giving a compliment) and to incorporate these into their overall impressions of a particular patient's response. The judges were cautioned not merely to sum the identified indicators that had been presented in training but rather to depend on their overall impression of the subject's social skill. For a more detailed description of our judges' training see Curran (1982).

Clinical staff ratings

Staff ratings were provided by the clinical staff of either the inpatient unit or the day hospital. Three days following admission, the staff of the respective unit assessed the patients' social skill based on their observations of the patients' behavior on that unit. All staff completed the abbreviated Social Performance Survey Schedule as well as the overall rating of patients' social skill. On the inpatient unit, five psychiatric nurses provided the ratings. On the day hospital unit, a clinical psychologist, a psychiatrist, a psychiatric nurse and a psychiatric social worker provided the ratings. Staff members from both units functioned in similar capacities insofar as all served as patients' treatment coordinators. Also, several staff members had previously worked on both units. Each staff person had a minimum of 3 yr of clinical experience. Although neither clinical staff had been specifically trained to evaluate patients' social skill, all staff were familiar with social skills training, as it was routinely offered to patients from both units. Indeed, staff members had previously served as the primary source of referrals for our ongoing social skills training program.

Self-report ratings

Following participation in the Simulated Social Interaction Test, all patients completed the abbreviated Social Performance Survey Schedule and rated themselves on the 11-point scale for overall social skill. A research assistant was available to clarify any questions the patients had regarding the self-ratings.

RESULTS

Pearson reliability coefficients were calculated for each set of raters. The mean interrater reliability coefficients and alpha coefficients for both the Social Performance Survey Schedule ratings and the overall skill ratings for each set of raters are presented in Table 1. It should be noted that staff ratings for the two patient groups studied are presented separately, since the staff raters were not the same across subjects. The table shows that the reliability and alpha coefficients for the video judges were very high. While the alpha coefficients for the other sets of raters were also within an acceptable range, the reliability coefficients for the interviewers were adequate and those for both clinical staffs were somewhat low.

*A copy of the interview format and stimulus questions are available upon request from the author.


To examine the relationship between interviewers' ratings and those of the various other rating sources, the correlations between these ratings are presented in Table 2. This table shows generally moderate relationships between interviewers' ratings and those of the other non-patient rating sources. While the clinical staffs' ratings and the video judges' ratings were all significantly, though moderately, related to the interviewers' ratings, patients' self-report assessments were not at all related to the interviewers' ratings. With the exception of the day hospital staff's Social Performance Survey Schedule ratings being more highly related to the interviewers' ratings than those from the other non-patient sources, there is some consistency in the strength of the relationship between interviewers' ratings and the other rating sources across the two patient samples studied.
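For readers who wish to reproduce summary statistics of this kind from raw rating data, the following short Python sketch (not part of the original article; the rating matrix and variable names are invented) illustrates how a mean pairwise inter-rater Pearson r and a Cronbach's alpha such as those reported in Table 1 are conventionally computed from a patients-by-raters matrix.

# Illustrative sketch only -- not taken from the article. It shows the
# conventional computation of (a) mean pairwise inter-rater Pearson r and
# (b) Cronbach's alpha across raters, for a patients-by-raters matrix.
# The ratings below are invented for demonstration.
from itertools import combinations

import numpy as np

# 6 hypothetical patients rated by 4 hypothetical raters on an 11-point overall-skill scale.
ratings = np.array([
    [6, 7, 5, 6],
    [3, 4, 4, 3],
    [9, 8, 8, 7],
    [5, 5, 6, 5],
    [2, 3, 2, 4],
    [7, 6, 7, 8],
], dtype=float)

# Mean inter-rater reliability: average the Pearson r over every pair of raters.
pairwise_r = [np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
              for i, j in combinations(range(ratings.shape[1]), 2)]
mean_r = float(np.mean(pairwise_r))

# Cronbach's alpha, treating each rater as an "item":
# alpha = k/(k-1) * (1 - sum of rater variances / variance of summed ratings).
k = ratings.shape[1]
rater_vars = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - rater_vars / total_var)

print(f"mean pairwise r = {mean_r:.2f}, Cronbach's alpha = {alpha:.2f}")

The same correlation computation, applied to a column of interviewer ratings and a column of ratings from another source, yields convergent correlations of the kind reported in Table 2.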

DISCUSSION

In the present study we have developed and reported on a structured social skills interview that can be systematically administered to psychiatric inpatients. The interrater reliabilities obtained in completing both an overall global rating and a Social Performance Survey Schedule rating on the basis of data obtained during this interview suggest that the interview can be reliably evaluated with a "difficult" population. Although the interrater reliabilities were significant, they were somewhat disappointing given the experience level of the interviewers. This can perhaps be explained by the fact that the interviewers' previous experience was primarily in rating simulated interactions, a considerably more structured task. It is likely that additional experience and more specific training with the present interview format and the population selected would enhance interrater reliability.

Table 1. Mean reliability and alpha coefficients for staff raters, interviewers and video judges

                                            Day hospital staff    Inpatient staff       Interviewers          Video judges
Rating scale                                 r's      alpha        r's      alpha        r's      alpha        r's      alpha
Overall skill                                0.47**   0.78         0.51**   0.80         0.66**   0.79         0.92**   0.96
Social Performance Survey Schedule skill     0.31*    0.70         0.47**   0.83         0.77**   0.86         --       --

*p < 0.05; **p < 0.01. The video judges rated overall skill only.



Table 2. Correlations between interviewers' ratings and clinical staff, self-report and video judges' ratings

                                              Clinical staff                        Self-report                          Video judges
Rating scale                              Inpatients   Day hospital patients    Inpatients   Day hospital patients    Inpatients   Day hospital patients
Global skill                                0.40**        0.32*                   0.00           0.11                   0.41**        0.39**
Social Performance Survey Schedule skill    0.42**        0.62**                  0.12          -0.13                   --            --

*p < 0.05; **p < 0.01. The video judges rated global skill only.


The reliabilities of our interviews fell somewhere between those of our video judges and those of our clinical staff. The interrater reliability of our video judges was very good. This may well be related both to the training of the judges (Curran, 1982) and to the structure of the role-play test. The ratings made in the least structured of the settings studied, the clinical units, yielded the poorest reliabilities. This gradation of reliability coefficients as a function of the level of structure of the setting is not surprising and is consistent with the recent literature (e.g. Foster and Cone, 1980) on variables influencing observational skills.

The relationships between interviewers' ratings and those made by the other sets of judges, while moderate, are interesting. In general, higher agreement was found among Social Performance Survey Schedule ratings than among global ratings. This may be due in part to the additional structure of the Social Performance Survey Schedule instrument as compared with the global rating. In interpreting the present results, it is important to consider the different observational settings on which the various sets of judges based their ratings. Not only is the context of a structured interview appreciably different from that of a role-play test or a psychiatric unit, but there were also setting differences within the psychiatric units. For example, while all patients from the day hospital unit were observed by all staff in some identical settings, they were also observed in some different settings by some of the staff. More specifically, all patients were observed by all staff members during community meetings, while only some staff observed all patients during group therapy. It is likely that observations across the different setting conditions affected both the agreement between staff observers and the correlations between interview and staff ratings (Cunningham and Tharp, 1981; Monti et al., in press (a)).

Finally, the poor correspondence between interviewers' ratings and patients' self-report assessments is not surprising. These results are consistent with those of other studies in which patients' self-reported skill has been compared with other sources of social skills assessment (e.g. Curran, 1982). A possible explanation is that the sample studied simply did not fully understand the nature of the self-rating task. An alternative explanation might be that the social perception (Morrison and Bellack, 1981) of the sample studied is somehow different from that of our other rating sources. Further research in this important area is presently underway in our laboratory.

While the reliability and validity data reported in this study were moderately encouraging, clearly more work needs to be done with this and similar clinically useful assessment instruments. Given the widespread use of intake interviews in social skills work, this area demands more empirical research. The major point of this paper is that we now have a documented social skills interview format on which data are available. We know of no other published data on the reliability and validity of a social skills intake interview. Thus, the present study is a first attempt at responding to the documented deficiencies in behavioral interviews (Haynes and Jensen, 1979) as they pertain to the valid assessment of social skills. As such, it has responded to all of Haynes and Jensen's (1979) relevant "recommendations for the use and reporting of behavioral interviews" (p. 103). Further systematic use of and research on the reported intake interview format should lead to its refinement and eventual usefulness for both social skills researchers and clinicians.

REFERENCES

Bellack A. S. (1979) Behavioral assessment of social skills. In Research and Practice in Social Skills Training (Ed. by Bellack A. S. and Hersen M.). Plenum Press, New York.
Bellack A. S. and Hersen M. (1978) Chronic psychiatric patients: Social skills training. In Behavior Therapy in the Psychiatric Setting (Ed. by Hersen M. and Bellack A. S.). Williams & Wilkins, Baltimore.
Bellack A. S., Hersen M. and Turner S. M. (1978) Role-play tests for assessing social skills: Are they valid? Behav. Ther. 9, 448-461.
Cunningham T. R. and Tharp R. G. (1981) The influence of settings on accuracy and reliability of behavioral observation, Behav. Assess. 3, 67-78.


Curran J. P. (1982) A procedure for the assessment of social skills: The Simulated Social Interaction Test. In Social Skills Training: A Practical Handbook for Assessment and Treatment (Ed. by Curran J. P. and Monti P. M.). Guilford Press, New York.
Curran J. P. and Mariotto M. J. (1980) A conceptual structure for the assessment of social skills. In Progress in Behavior Modification (Ed. by Hersen M., Eisler R. M. and Miller P.). Academic Press, New York.
Falloon I. R. H., Lindley P., McDonald R. and Marks I. M. (1977) Social skills training of outpatient groups: A controlled study of rehearsal and homework, Br. J. Psychiat. 131, 599-609.
Foster S. L. and Cone J. D. (1980) Current issues in direct observation, Behav. Assess. 2, 313-338.
Goldsmith J. B. and McFall R. M. (1975) Development and evaluation of an interpersonal skills training program for psychiatric inpatients, J. Abnorm. Psychol. 84, 51-58.
Hay W. M., Hay L. R., Angle H. V. and Nelson R. O. (1979) The reliability of problem identification in the behavioral interview, Behav. Assess. 1, 107-118.
Haynes S. N. (1978) Principles of Behavioral Assessment. Gardner Press, New York.
Haynes S. N. and Jensen B. J. (1979) The interview as a behavioral assessment instrument, Behav. Assess. 1, 97-106.
Haynes S. N., Jensen B. J., Wise E. and Sherman D. (1981) The marital intake interview: A multimethod criterion validity assessment, J. Consult. Clin. Psychol. 49, 379-387.
Lowe M. R. and Cautela J. R. (1978) A self-report measure of social skill, Behav. Ther. 9, 535-544.
Marzillier J. S., Lambert C. and Kellett J. (1976) A controlled evaluation of systematic desensitization and social skills training for socially inadequate psychiatric patients, Behav. Res. Ther. 14, 225-238.
Monti P. M., Corriveau D. P. and Curran J. P. Assessment of social skill in the day hospital: Does the clinician see something other than the researcher sees? Int. J. Part. Hospitaliz., in press (a).
Monti P. M., Wallander J. L., Ahern D. K., Abrams D. B. and Munroe S. M. Multi-modal measurement of anxiety and social skills in a behavioral role-play test: Generalizability and discriminant validity, Behav. Assess., in press (b).
Morrison R. L. and Bellack A. S. (1981) The role of social perception in social skill, Behav. Ther. 12, 69-79.
Wessberg H. W., Curran J. P., Monti P. M., Corriveau D. P., Coyne N. A. and Dziadosz T. H. (1981) Evidence for the external validity of a social simulation measure of social skills, J. Behav. Assess. 3, 209-220.
Wiggins J. S. (1973) Personality and Prediction: Principles of Personality Assessment. Addison-Wesley, Reading, Massachusetts.
Wincze J. P., Hoon E. F. and Hoon P. W. (1978) Multiple measure analysis of women experiencing low sexual arousal, Behav. Res. Ther. 16, 43-49.

Acknowledgements: The author wishes to acknowledge the help of the Providence VA Medical Center's Day Hospital and Inpatient staff members, who provided the in vivo observational data. Special thanks go to Donald Corriveau, who was primarily responsible for the development of the social skills interview. Also, thanks go to both him and Noreen Coyne, who conducted the interviews, and to other members of the Behavior Training Clinic who provided their assistance in completing this study.