Behavior Therapy 43 (2012) 848–861
Incremental Benefits of a Daily Report Card Intervention Over Time for Youth With Disruptive Behavior

Julie Sarno Owens, Alex S. Holdaway, Allison K. Zoromski, Steven W. Evans, Lina K. Himawan
Ohio University

Erin Girio-Herrera
Cincinnati Children's Hospital Medical Center

Caroline E. Murphy
Nationwide Children's Hospital and The Ohio State University
This study examined the percentage of children who respond positively to a daily report card (DRC) intervention and the extent to which students achieve incremental benefits with each month of intervention in a general education classroom. Participants were 66 children (87% male) with attention-deficit hyperactivity disorder or disruptive behavior problems who were enrolled in a school-based intervention program in rural, low-income school districts in a Midwest state. The DRC was implemented by each child's teacher, who received consultation from a graduate student clinician, school district counselor, or school district social worker. A latent class analysis using growth-mixture modeling identified two classes of response patterns (i.e., significant improvement and significant decline). Results indicated that 72% of the sample had all of their target behaviors classified as improved, 8% had all of their targets classified as declining, and 20% had one target behavior in each class. To examine the monthly incremental benefit of the DRC, individual effect sizes were calculated. Results for the overall sample indicated that most children experience a benefit of large magnitude (.78) within the first month, with continued incremental benefits through Month 4. The differential pattern of effect sizes for the group of improvers and the group of decliners offers data to determine when and if the DRC should be discontinued and an alternative strategy attempted. Evidence-based guidelines for practical implementation of the DRC are discussed.

Address correspondence to Julie Sarno Owens, Ohio University, Porter Hall 200, Athens, OH 45710; e-mail: [email protected].
© 2012 Association for Behavioral and Cognitive Therapies. Published by Elsevier Ltd. All rights reserved.
Keywords: daily report card; school-based intervention; ADHD; disruptive behavior; evidence-based intervention
BEHAVIORS THAT ARE DISRUPTIVE to the classroom process and difficult for teachers to manage include hyperactivity, impulsivity, inattention, oppositionality, and aggression. Such behaviors occur at a problematic level in 8% to 12% of elementary school students (Kamphaus, Huberty, DiStefano, & Petoskey, 1997). Recurrent and frequent disruptive behaviors are stressful for teachers (Greene, Beszterczey, Katzenstein, Park, & Goring, 2002; Sterling-Turner, Robinson, & Wilczynski, 2001), with some evidence that student disruptive
behavior is the most prominent contributor to teacher burnout (Bibou-nakou, Stogiannidou, & Kiosseoglou, 1999; Chang, 2009; Evers, Tomic, & Brouwers, 2004). Further, disruptive behavior is the most common reason that students are referred for mental health services, as early as preschool (Keenan & Wakschlag, 2000) and continuing into adolescence (Frick et al., 1993). Children who demonstrate these behaviors may be at risk for, or may be already diagnosed with, attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), or conduct disorder (CD), and many of these children are identified as eligible for special education services. In fact, students with ADHD constitute the majority of students in the special education categories of emotional disturbance and other health impaired (Schnoes, Reid, Wagner, & Marder, 2006).

Unfortunately, early disruptive behavior is associated with a host of negative outcomes, many of which continue throughout the life span. For example, youth with ADHD often demonstrate high levels of noncompliance and classroom disruptions, poor organization, and low rates of work completion and accuracy (e.g., Abikoff et al., 2002; Atkins, Pelham, & Licht, 1985). When these difficulties persist, they can result in negative academic outcomes, including higher rates of expulsion, suspension, and grade repetition relative to youth without ADHD (Barkley, Murphy, & Fischer, 2008). The profiles and developmental trajectories of youth with ODD and CD are similarly adverse (National Institute of Mental Health, 2001). Further, many students with ADHD require additional services related to retention, disciplinary problems, and special education that are estimated to cost almost 16 times more than the cost to educate a student without ADHD (Robb et al., 2011).

The above-described teacher stress and classroom disruptions could likely be decreased by enhancing general education teachers' use of evidence-based classroom interventions (Epstein, Atkins, Culinan, Kutash, & Weaver, 2008). Further, the use of evidence-based interventions in the general education classroom could lead to improved student outcomes and reduce the need for expensive special education services for many students. Although the majority of research on classroom interventions has been conducted with participants with ADHD, the results are also applicable to students with other behavior disorders (e.g., see Vannest, Davis, Davis, Mason, & Burke, 2010, for a review). One of the most thoroughly studied interventions for disruptive classroom behavior is the daily report card (DRC; Pelham & Fabiano, 2008; Pelham, Wheeler, & Chronis, 1998). Research indicates that
the DRC is viewed as acceptable by teachers (e.g., Girio & Owens, 2009), is feasible under typical school conditions (e.g., Fabiano et al., 2010; Owens, Murphy, Richerson, Girio, & Himawan, 2008), and is effective in modifying both academic and behavioral problems (Vannest et al., 2010). However, most of the studies related to the DRC have taken place in the context of short, single-subject designs, under analogue conditions, or under conditions heavily influenced by researcher support. Thus, there are few empirical data to guide teachers' practical decisions about implementing the DRC in a general education setting. Specifically, data are needed to answer the following questions: What percentage of children respond positively to this intervention? Are there incremental benefits with each month of intervention? When should a teacher discontinue a DRC? Teachers face many competing demands, and they will likely only implement an intervention if they believe that the expected benefits outweigh the costs. The goal of this study is to provide empirically based answers to the questions above so that we can establish evidence-based guidelines that inform the practical implementation of the DRC for youth with disruptive behaviors in the general education setting. The development of such guidelines may improve student outcomes and reduce unnecessary referrals to special education services.
The Daily Report Card The DRC is a tool to monitor and modify clearly defined target behaviors (e.g., interruptions, work completion). When using the DRC, the teacher provides feedback to the student about each target behavior (ideally at the point of performance), and the child's performance on each target is evaluated in relation to goals. Parents review the DRC with the child each day and provide privileges contingent on the level of success across all target behaviors (Kelley, 1990). DRCs may be appealing to teachers because they can be tailored to each student's needs and are flexible with regard to the behaviors that are targeted (e.g., academic productivity, disruptive behaviors), the frequency of data collection (e.g., daily, weekly), the rater of the behavior (e.g., teacher, student), the contingencies used (e.g., positive reinforcement, response cost), and the location of the contingencies (e.g., home, school; Chafouleas, Riley-Tillman, & McDougal, 2002). The DRC can enhance home–school collaboration (Power, Soffer, Clarke, & Mautone, 2006), be used as a prereferral intervention prior to a special education evaluation, and can be used as a progress-monitoring tool to determine a child's response to intervention (Pelham, 1993). Given the
positive characteristics of the DRC, continued research that informs practical use of this intervention is warranted. The effectiveness of the DRC has been examined in group designs as a single intervention (Fabiano et al., 2010; Murray, Rabiner, Schulte, & Newitt, 2008) and as a component in an intervention package (e.g., Owens et al., 2008; Pfiffner et al., 2007; Wells et al., 2000). All studies reported significant improvement in child behavior, with some reporting improvement in academic productivity and academic skills (Murray et al., 2008), as well as teacher-rated improvement on individualized education plan (IEP) goals (Fabiano et al., 2010). Further, a recent meta-analysis examined the impact of the DRC across 17 single-subject A-B design studies using an improvement rate difference (IRD) effect size. The average IRD effect size was .61, with a range of –.15 to .97 (Vannest et al., 2010). Although these studies offer substantial support for the effectiveness of the DRC, the utility of these data for teachers’ daily decision making is limited. Many of the studies were conducted with small sample sizes (e.g., Murray et al., 2008; Pfiffner et al., 2007) or have been criticized for their methodological weaknesses (see Chafouleas et al., 2002, for a review). These weaknesses limit the generalizability of findings. In addition, Fabiano and colleagues (2010) evaluated a sample that included only students in special education. Because there are often multiple teachers assisting students in special education, as well as low student to teacher ratios in special education classrooms, the results from that study may not generalize to the general education classroom. Both of these limitations are addressed in the current study. In addition to these methodological limitations, two other patterns are noteworthy. First, the wide range of effect sizes found in the meta-analysis (Vannest et al., 2010) suggests that specific patterns of student response (e.g., responders and nonresponders) may be identifiable. Second, with a few exceptions, most studies evaluated the DRC over the course of a few weeks, rather than months or the duration of a school year. Thus, additional data are needed in order to help teachers decide how long they should implement the intervention (i.e., if there are incremental benefits of additional months of intervention) and when they should discontinue.
Incremental Benefit of Intervention Studies that have examined incremental benefits of school-based interventions over time suggest that the interventions must be implemented for 2 to 3 months to achieve improved outcomes and that there are incremental benefits for persisting longer
than 3 months. For example, in a sample of boys identified by teachers as aggressive and disruptive, those who received 18 sessions of a cognitive-behavioral anger-coping program showed greater decreases in off-task and inattentive behavior compared to those receiving only 12 sessions (Lochman, 1985), and maintained this advantage at 3-year follow-up evaluations (Lochman, 1992). Similarly, among high school students with ADHD who received an organization intervention, it took 9 weeks for most (70%) students to meet mastery criteria for the intervention; yet with continued intervention (14 to 21 weeks) the remaining 30% also achieved mastery (Sadler, Evans, Schultz, & Zoromski, 2011). Thus, there is some evidence that interventions extended over a longer period of time improve outcomes for children and adolescents and reach a greater number of students in need.
Current Study Understanding students’ response to the DRC over time can indicate how long a teacher must persist with a DRC intervention to achieve meaningful gains and when to make modifications or abandon the technique. These are important data in the development of evidence-based implementation guidelines. Because a positive response to an intervention may not occur until after 2 to 3 months of the intervention, longitudinal data are needed to adequately capture patterns of intervention response. Similarly, because teachers have the option of using the DRC for several consecutive months, it would be informative to know the incremental value of persisting with the intervention over time. Yet, none of the studies that included implementation of the DRC over an extended period of time examined incremental improvements with each month (i.e., dosage effect). Finally, because school-based student evaluation teams typically convene on a monthly basis, data about incremental monthly improvement could inform teacher expectations and interpretation of an individual student's response. Specifically, findings could help with decision making such as whether a DRC should be initiated for a given child, how long the DRC should be continued, and whether a DRC should be discontinued based on the child's initial response. The goal of this study was to address this gap in the literature using data from a school-based mental health program with a large sample of typically referred, general education students who received services over the course of an academic year. We provide data showing response to the DRC intervention over 4 months and suggest guidelines for persisting with this intervention in elementary school classrooms.
Method

Study Overview
Participants described below were selected from among the total sample of consecutive new referrals who enrolled in the Youth Experiencing Success in School (Y.E.S.S.) Program between 2002 and 2009 (see Owens, Andrews, Collins, Griffeth, & Mahoney, 2011, for details). Children participating in the Y.E.S.S. Program were elementary school students in grades K–5 who demonstrated disruptive behavior problems in the classroom. School personnel were instructed to refer children whose primary problems were associated with inattention, hyperactivity/impulsivity, defiance, and/or aggression, and whose intellectual functioning was estimated to be represented by a standard score of 70 or above. School staff invited the child's parents to participate, and parental consent and child assent were obtained. Children in the program received a DRC intervention. Teachers of these children received year-long collaborative teacher consultation offered on a biweekly basis. Parents were offered behaviorally based parenting sessions in either a group or individual format. All three of these interventions are empirically supported for this population (DuPaul & Eckert, 1997; Pelham & Fabiano, 2008). All services were available from the time of enrollment (typically fall) through the end of the school year. The Y.E.S.S. Program was developed in the context of a university–community partnership that adopted a community-based participatory research approach. Because the goal of the program was to examine treatment outcome when services were provided to typically referred cases and in routine practice, strict inclusion and exclusion criteria like those used in an efficacy trial were not imposed (e.g., requiring a diagnosis of ADHD to be enrolled). Although there are losses to internal validity with effectiveness research relative to efficacy research, there are gains in ecological validity. Because our goal was to provide data to guide practical decisions about DRC implementation, we believe that these naturalistic conditions represent a strength.

Participants
Participants (N = 66) were a subset of children (ages 5–12) with disruptive behavior who were enrolled in the Y.E.S.S. Program between 2002 and 2009 (see Table 1 for demographic characteristics).

Table 1
Demographic Characteristics of Participants by Analysis

                                   Total Sample (n = 66)    Effect Size Sample (n = 49)
Gender (male), n (%)               58 (87.90)               44 (89.80)
Race (Caucasian), n (%)            62 (93.90)               47 (95.90)
Identified for SPED, n (%)         23 (34.84)               16 (33.33)
Grade, n (%)
  K through 3rd grade              54 (81.81)               40 (81.63)
  4th or 5th grade                 12 (18.18)                9 (18.37)
Repeated a grade, n (%)            14 (21.20)               13 (26.50)
Medicated at referral, n (%)       20 (30.30)               18 (36.70)
In counseling at referral, n (%)   17 (25.80)               13 (27.10)
Met criteria for ADHD, n (%)       50 (75.75)               41 (71.43)
Age, M (SD)                        7.84 (1.47)              7.94 (1.43)
IQ estimate, M (SD) a              95.64 (13.26)            96.10 (12.82)
Hollingshead SES, M (SD)           26.53 (12.14)            26.97 (12.70)

Note. There were no significant differences between the samples on any demographic characteristic. SPED = special education, ADHD = attention-deficit/hyperactivity disorder, SES = socioeconomic status.
a IQ estimates were based on the Wechsler Abbreviated Scales of Intelligence.

Participants came from one of seven school buildings across two school districts in southeastern Ohio. All schools had similar profiles with regard to student body size (200–400 students), percentage of minority students (<5%), students qualifying for free and reduced lunch (40–65%), and special education status (15–20%). Although some participants (35%) were receiving special education services, all spent the majority of their day in a general education classroom. Most participating teachers implemented the DRC for one student per year, and no teacher implemented the DRC for more than three students per year. To determine eligibility for inclusion in the current study, the daily DRC data from the children in the total, original sample (N = 147) were reviewed. DRC data were coded for length of implementation (number of school days), teacher compliance with recording procedures (i.e., target behaviors were tallied and teachers indicated whether the child met his or her daily goal), and presence or absence of data at the end of Month 1, Month 2, Month 3, and Month 4. Daily data from the last 2 weeks of each month were used to represent the child's response at the end of that month. We chose a 4-month time span because it provided enough data to model latent classes of response patterns and to examine incremental benefits over several months while still maintaining an adequate sample size. To answer the research questions, the child's daily data for at least one target had to meet the following criteria: (a) daily data were present for any three of the four monthly time points; and (b) the target was successfully implemented, defined
as having been implemented by the teacher with at least 50% compliance with recording procedures. The data for 56 children (38% of the total sample) were excluded due to low compliance with teacher-recording procedures (Criterion B); an additional 25 children (17% of the total sample) were excluded due to insufficient length of DRC implementation (Criterion A). The data for the remaining 66 children (45% of the total sample) with 128 target behaviors met the above inclusion criteria. Because only a few children had four or more target behaviors that met these criteria, the decision was made to include up to three targets per child (i.e., the first three that met the criteria), as a DRC with three or fewer target behaviors best represented the overall sample. T-test and chi-square analyses were used to compare the current sample (n = 66) to the total sample from which they were drawn. Results indicated that the children in the total sample did not differ from those in the current study with regard to gender or race distribution, age, special education classification, grade retention, prior medication use, prior receipt of mental health counseling, severity of ADHD symptoms at intake, intellectual functioning, or family socioeconomic status.
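As a concrete illustration of this screening step, the sketch below is not the authors' code; the column names (child_id, target, month, teacher_recorded) are hypothetical. It filters daily DRC records to the child-target pairs meeting Criterion A (data in at least three of the four monthly time points) and Criterion B (at least 50% teacher compliance with recording), and caps each child at three qualifying targets, as described above.

```python
import pandas as pd

def eligible_targets(drc: pd.DataFrame,
                     min_months: int = 3,
                     min_compliance: float = 0.50) -> pd.DataFrame:
    """Return child-target pairs meeting the study's inclusion criteria.

    drc is assumed to hold one row per child, target, and school day with
    columns: child_id, target, month (1-4), teacher_recorded (bool).
    """
    summary = (
        drc.groupby(["child_id", "target"])
           .agg(months_present=("month", "nunique"),
                compliance=("teacher_recorded", "mean"))
           .reset_index()
    )
    keep = summary[(summary.months_present >= min_months) &
                   (summary.compliance >= min_compliance)]
    # Mirror the cap described in the text: retain at most the first
    # three qualifying targets per child.
    return keep.groupby("child_id").head(3)
```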
procedures Between 2002 and 2006, program facilitators were graduate students supervised by university faculty (Owens et al., 2008). As part of a sustainability plan, in 2006, district-employed school counselors and social workers began implementing the model with ongoing consultation from the university (Owens et al., 2011). Although teachers were paid for completing the program-evaluation rating scales in the larger effectiveness trial ($15 at three time points in the year), they were not paid to implement the DRC intervention or to participate in consultation sessions. Thus, implementation procedures closely match typical school practice. Initial Evaluation All procedures were approved by the university research board and by administrators at participating schools and school districts. Once a child was referred to the program, parent consent and child assent were obtained and an initial assessment was completed. The assessment included a parent interview, a teacher interview, and parent and teacher rating scales. Diagnostic status was determined using parent and teacher versions of the Disruptive Behavior Disorders (DBD) Rating Scale (Pelham, Gnagy, Greenslade, & Milich, 1992) and the Impairment Rating Scale (IRS; Fabiano et al., 2006), in combination with a semistructured parent interview (either the DBD
Structured Parent Interview; Pelham, 2002, or the Children's Interview for Psychiatric Syndromes–Parent Version [P-ChIPS]; Weller, Weller, Rooney, & Fristad, 1999). To meet criteria for ADHD, six or more symptoms of either inattention or hyperactivity/impulsivity had to be endorsed (as “pretty much” present or “very much” present) on the DBD Rating Scales or as present on the parent interview. The symptoms may have been endorsed by the teacher, the parent, or a combination. The same symptom was not counted twice if endorsed by both raters. In addition, both the parent and the teacher had to endorse impairment (as indicated by a score of 3 or higher on the IRS) in at least one domain of functioning. To meet the criteria for ODD, four or more symptoms of ODD had to be endorsed on the parent or teacher DBD Rating Scale or on the parent interview. To meet the criteria for CD, three or more symptoms had to be endorsed. For the diagnoses of ODD and CD, impairment had to be endorsed by one rater. In the few cases in which the parent interview data conflicted with rating scale data, the program clinician and the licensed clinical supervisor resolved these discrepancies by incorporating other data (e.g., behavioral observation) and contextual information (e.g., possible rater biases, previous school records). Of the 66 children in the sample, 50 (77%) met criteria for ADHD. Of those who met criteria for ADHD, 33 (66%) met criteria for co-occurring ODD and/or CD. The remainder presented with ODD or CD without ADHD (n = 6), or with subclinical but at-risk levels of disruptive behavior symptoms (n = 10).

DRC Implementation
DRC implementation was completed using a collaborative process involving the student's teacher, parents, and a program facilitator. Setting up the DRC was a multistep process that was guided by publicly available resources (see How to Establish a School-Home Daily Report Card at http://casgroup.fiu.edu/ccf/pages.php?id=1401; Kelley, 1990). First, the facilitators met with the teachers and identified two to four behaviors to be targeted on the DRC and created operational definitions for these behaviors. After this interview, teachers were instructed to record frequencies of the target behaviors (without informing the child) for 5 school days to establish a baseline. Teachers and program facilitators reviewed the baseline data to determine the goal criterion for each behavior. Goal criteria were set at a level that would predict that the child would initially experience more successes than failures (i.e., achieve the goal at least 3 days per week). In cases where baseline data were not available for review (e.g., due to teacher noncompliance), the criterion for that goal was
determined based on teacher estimates of the daily frequency of the target behavior so as not to withhold services from the child. Concurrently with the above procedures, the facilitator oriented the parents to the DRC intervention, helped parents create a daily privilege system, and described the procedures for contingent use of this privilege system.1 Because parent adherence to the privilege system was not tracked across all years, those data are not presented in this paper. Once teachers started to implement the DRC, they received 30-minute behavioral consultation sessions on a biweekly basis to discuss modifying the DRC target behaviors, the goal criteria, or teacher intervention procedures (DRC troubleshooting guides were used as a resource; Pelham, 2002). Teachers were instructed to modify the DRC goal criteria so that criteria were more challenging to the child if the child achieved his or her stated goal (i.e., received a “yes”) on 8 of 10 consecutive days. For example, if a child had a goal of seven or fewer interruptions per day, and met this goal 8 of 10 days in a 2-week period, the goal criterion for receiving a reward was lowered to five or six interruptions per day (depending on the data from the previous weeks) in an effort to shape the child's behavior into the normative range. See Figure 1 for a sample DRC.
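The shaping rule just described can be expressed as a simple decision function. The sketch below is illustrative only and is not drawn from the program manual; the step size of two and the floor of zero are assumptions.

```python
def update_goal_criterion(daily_met: list, current_max: int, step: int = 2) -> int:
    """Tighten a frequency-based goal once the child meets it on 8 of the last 10 days.

    daily_met: whether the goal was met on each recent school day (most recent last).
    current_max: current allowed frequency (e.g., 7 or fewer interruptions per day).
    Returns the (possibly lowered) criterion for the next 2-week period.
    """
    last_ten = daily_met[-10:]
    if len(last_ten) == 10 and sum(last_ten) >= 8:
        # Make the goal more challenging, but never below zero occurrences.
        return max(0, current_max - step)
    # Otherwise leave the criterion unchanged for another period.
    return current_max

# Example matching the text: goal met on 8 of the last 10 days with a
# "seven or fewer" criterion -> new criterion of five per day.
new_goal = update_goal_criterion([True] * 8 + [False] * 2, current_max=7)
```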
measures Disruptive Behavior Disorders (DBD) Rating Scale The parent and teacher versions of the DBD (Pelham et al., 1992) are 45-item scales that assess the presence and frequency of the DSM-based symptoms of inattention, hyperactivity/impulsivity, and ODD and CD symptoms. Items are rated on a 4-point scale ranging from 0 (not at all present) to 3 (very much present). Both parent and teacher versions of the ADHD subscales have adequate 4-week test–retest reliability (alphas above .75), internal reliability (alphas ranging from .71 to .94 across subscales), convergent validity as evidenced by significant correlations with observational data and other ADHD rating scales, and discriminant validity as evidenced by their ability to discriminate between children with and without diagnoses of ADHD (DuPaul, Power, McGoey, Ikeda, & Anastopoulos, 1998; Pelham, Fabiano, & Massetti, 2005).
1 A home-based privilege system was attempted for all children; however, if the inconsistency of the home-based privileges was considered by the facilitator and teacher to be interfering with treatment progress, home-based privileges were replaced or supplemented by school-based privileges. During the first 4 years of the program, the use of the privilege system was tracked. Data indicate that school-based rewards were used as a supplement or replacement for the majority of families (Owens et al., 2008). Thus, parent adherence was a common problem.
Impairment Rating Scale (IRS) The IRS (Fabiano et al., 2006) is used to assess adult perceptions of child functioning in multiple domains (academic performance, classroom functioning, family functioning, and relationships with peers, siblings, parents, and teachers). Parents and teachers place a mark on a visual analogue line representing a continuum of impairment from 0 (no problem/definitely does not need treatment) to 6 (extreme problem/definitely needs treatment) to indicate the child's impairment in each domain. The measure has respectable cross-informant reliability (e.g., correlations above .47), convergent and divergent validity with other impairment scales, and predictive validity in identifying children with ADHD (Fabiano et al., 2006). Daily Report Card (DRC) Data Teachers were instructed to document the occurrence of each child's target behaviors through the use of tally marks. The daily frequencies of behavioral targets (e.g., interruptions, out of seat, touching others) are the primary data used in this study. Behaviors documented on the DRC have been found to be highly correlated with scores on ADHD rating scales (Pelham et al., 2005), and have demonstrated sensitivity to change in the expected direction in double-blind placebo-controlled medication studies (e.g., Pelham et al., 2001) and in other classroom intervention studies (e.g., Jurbergs, Palcic, & Kelley, 2007). Because academic targets are often scored as percent complete or correct instead of frequency based, they were not included in the current study. Because latent class analyses are sensitive to heterogeneous variance ranges, we standardized the daily frequency data for each target behavior for each child. Standardization also facilitates interpretability across target behaviors, by placing all behavior on the same metric. Each target behavior was coded (via consensus across the first three authors) into one of nine categories: interruptions (28%), out of seat (8%), touching others (14%), prompts needed for on-task behaviors (23%), rule violations (5%), noncompliance (6%), disrespect (13%), temper tantrums (2%), and a final category that included targets that did not fit in any other category (2%). This allowed us to examine (in secondary analyses) the extent to which target type was associated with any pattern of overall response or incremental response.
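A minimal sketch of this standardization step follows, assuming the daily data are organized with hypothetical columns child_id, target, date, and frequency; it is not the authors' code. Each target's daily frequencies are z-scored within child and target so that all behaviors share a common metric before the growth-mixture analysis.

```python
import pandas as pd

def standardize_frequencies(drc: pd.DataFrame) -> pd.DataFrame:
    """Add a z_frequency column: daily counts standardized within each child-target pair."""
    def zscore(x: pd.Series) -> pd.Series:
        # Center and scale each child-target series of daily frequencies.
        return (x - x.mean()) / x.std(ddof=1)

    drc = drc.copy()
    drc["z_frequency"] = (
        drc.groupby(["child_id", "target"])["frequency"].transform(zscore)
    )
    return drc
```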
FIGURE 1  Sample daily report card.

Name: Angela                                                        Date: 10/13
Daily Tracker                                                       Tally    Daily Goal Met
1. Remains seated with three or fewer instances of leaving seat     ////     Yes  No
2. Raises hand to speak with seven or fewer violations              /////    Yes  No
3. Completes 75% of daily math work                                 80%      Yes  No
Total Number of Yeses: 2        Percent of Yeses Earned: 66%
Teacher Comments: Angela worked very hard today.
Parent Signature: ____________        Parent Provided a Reward at Home: Yes  No

Results

Data Preparation and Analytic Plan
To examine patterns of student response to the DRC intervention, growth-mixture modeling was used to conduct a latent class analysis that identified classes
of students characterized by different growth patterns (Ram & Grimm, 2009; Wang & Bodner, 2007). Intercept and slope indices served as dependent measures. To assess the goodness of fit between competing models, the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Sample-Size Adjusted BIC (SSABIC) were used, with lower values indicating better fit (Nylund, Asparouhov, & Muthen, 2007; Ram & Grimm, 2009). The Lo-Mendell-Rubin Adjusted Likelihood Ratio Test (LMR LRT) was used to compare models with different numbers of classes; a p value of ≥ .05 indicated that the model with one fewer class should be accepted (Nylund et al., 2007; Ram & Grimm, 2009). Last, entropy was used as a measure of how accurately participants were classified. Entropy values range from 0 to 1, with higher values indicating better classification. An entropy value of >0.8 indicates that individuals are classified with confidence and that there is adequate separation between the latent classes. To examine the incremental benefit of continued implementation, we calculated individual effect sizes (ESs) that represented the cumulative benefit of the DRC with each additional month of intervention relative to the baseline period (i.e., cumulative ES), as well as the incremental benefit of the DRC with each additional month relative to the previous month (i.e., incremental ES). Thus, to be included in the ES analyses, the child's target behavior data had to be present at baseline, in addition to three of the four monthly time points and 50% teacher compliance (as described above). The data for 49 of the 66 children with 94 target behaviors met this criterion (see Table 1 for demographic characteristics). Individual ESs were
calculated for each student using standardized mean difference (SMD) and percentage of data exceeding the median (PEM). For the cumulative ES, the SMD represents the difference between the mean of a follow-up period (i.e., Month 1, Month 2, Month 3, or Month 4) and the mean of the baseline period divided by the standard deviation of the baseline period. By this definition, SMD can be interpreted as the average improvement of a participant at a follow-up period compared to the baseline period, as a function of the variability at baseline. For the cumulative ES, the PEM represents the number of days in a follow-up period that were below the median of the baseline period divided by the total number of days at both baseline and follow-up periods. For the incremental ES, the SMD represents the difference between the mean of a follow-up period (e.g., Month 2) and the mean of the previous month divided by the standard deviation of the baseline period. The SMD can be interpreted as the average improvement at a follow-up period compared to the previous month, as a function of the variability at baseline. We chose to use the variability of the data at baseline as the denominator for two reasons. First, in some cases, a student had a plateau in progress in later months (e.g., the child was interrupting only once per day). This resulted in no variability in the data for this time point and prevented the calculation of an incremental ES. Second, this procedure is consistent with other studies that have examined the incremental benefit of different medication doses for children with ADHD (e.g., Evans et al., 2001). For the incremental ES, the PEM represents the number of days in a follow-up period (e.g., Month 2) that were below
the median of the previous period (e.g., Month 1) divided by the total number of days at both the follow-up period and the previous period. Calculating ESs on single-subject data is controversial, and a gold standard has not been identified, as each method (i.e., standardized mean difference [SMD], percent of nonoverlapping data [PND], percent of all nonoverlapping data [PAND], percent of data exceeding the median [PEM], and improvement rate difference [IRD]) has its advantages and disadvantages. The SMD was selected because it is useful for making comparisons across a large number of cases and across studies, there are widely accepted guidelines for interpretation (e.g., Cohen, 1988), and it allows for the identification of deterioration trends. However, the SMD is very sensitive to extreme values (e.g., an extremely bad day in a follow-up period together with an extremely good day in a baseline period will produce a small ES, and vice versa). Although the literature documents that children with ADHD have a positive response to the DRC intervention overall, variability in behavior across days during a given time period (e.g., 2-week window) is a hallmark of the disorder (Barkley, 2005). Thus, ESs were also calculated using the PEM, which is less reactive to extreme data points. The PND was not used because it does not handle data with such variability as well as the PEM. Further, the IRD was not used because the requirements of this method would have been an excessively conservative approach to the current data set (Parker, Vannest, & Brown, 2009). In addition, unlike the PAND, the PEM can be calculated when the number of data points in a given period is small (e.g., less than 10).
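For readers who wish to compute these metrics on their own DRC data, the sketch below implements the SMD and PEM as defined above. It is an assumed implementation rather than the authors' code; in particular, the SMD is signed here so that positive values reflect a reduction in the target behavior's daily frequency, matching the direction in which the effect sizes are reported and interpreted.

```python
import numpy as np

def smd(follow_up: np.ndarray, reference: np.ndarray, baseline: np.ndarray) -> float:
    """Mean change from the reference period (baseline for cumulative ESs, the
    previous month for incremental ESs), scaled by the baseline SD.
    Positive values indicate fewer occurrences (improvement)."""
    return (reference.mean() - follow_up.mean()) / baseline.std(ddof=1)

def pem(follow_up: np.ndarray, reference: np.ndarray) -> float:
    """Days in the follow-up period below the reference-period median, divided by
    the total number of days in both periods (as defined in the text)."""
    below = (follow_up < np.median(reference)).sum()
    return below / (len(follow_up) + len(reference))

# Usage: with daily frequency arrays named baseline, month1, month2, ...
#   cumulative Month-2 ESs:  smd(month2, baseline, baseline), pem(month2, baseline)
#   incremental Month-2 ESs: smd(month2, month1, baseline),   pem(month2, month1)
```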
Latent Class Analysis
The first step was to examine data from the first 4 months of intervention. Standardized target scores during this time frame ranged from –4.40 to 7.72. The first model fitted was a one-class linear growth curve model (intercept = 0.33, SE = 0.05, p < .01; slope = –0.03, SE = 0.00, p < .01). The AIC, BIC, and SSABIC for the one-class linear growth curve model were 4298.71, 4372.86, and 4290.64, respectively. The second model fitted was a two-class linear growth curve model. The AIC, BIC, and SSABIC for the two-class linear growth curve model were 4284.85, 4367.56, and 4275.85, respectively, all of which were smaller values than those in the one-class model. The entropy for the two-class linear growth curve model was 0.87, indicating strong classification. The LMR LRT statistic for the model with one latent class versus the model with two latent classes was 18.58 with a p value of 0.24. Even though the LMR LRT
indicated that the two-class model did not fit the data better than the one-class model, the AIC, BIC, and SSABIC showed that the two-class model fit the data better than the one-class model, and the entropy was higher for the two-class model. Thus, data are interpreted for the two-class linear growth curve model. In this model, Class 1 included data from 108 target behaviors and Class 2 included data from 20 target behaviors. Estimates of the intercept and the slope means for Class 1 were 0.46 (SE = 0.05, p < .01) and –0.04 (SE = 0.00, p < .01), respectively. Estimates of the intercept and the slope means for Class 2 were –0.37 (SE = 0.12, p < .01) and 0.04 (SE = 0.01, p < .01), respectively. Thus, Class 1 has a higher intercept than Class 2 and a slope indicating a significant decline in disruptive behavior, as opposed to a significant increase. Stated differently, Class 1 represents targets that are significantly improving over time, whereas Class 2 represents targets that are significantly worsening over time. It is important to note that the 108 target behaviors in Class 1 (i.e., improvers) were associated with 48 children (72% of the sample) who had all of their target behaviors fall in Class 1, as well as 13 children (20%) who had one or two target behaviors in Class 1, but also one target behavior in Class 2. The 20 target behaviors in Class 2 were associated with five children (8%) who had all of their target behaviors fall in Class 2, as well as the 13 children (mentioned above) who had one target behavior in Class 2 and at least one other in Class 1. As a secondary analysis, we conducted chi-square tests, t tests, and qualitative visual inspection analyses in an attempt to identify variables that were significantly different for the 48 children who were in Class 1 only and the other two samples (n = 5 in Class 2 and n = 13 split across classes). No differences emerged in age; gender; special education status; medication use; type of facilitator (i.e., graduate student or school counselor); socioeconomic status; IQ; initial severity of ADHD, ODD, or CD symptoms; target behavior category (e.g., interruptions, out of seat); or teacher adherence to recording procedures. There are some trends that are worth exploring in future research (e.g., children in grades 4 and 5, and children who had previously repeated a grade tended to be more likely to fall in Class 2 rather than Class 1, whereas children in kindergarten through third grade and children who had not repeated a grade were more likely to fall in Class 1; children with rewards provided in one location, either home only or school only, were more likely to fall in Class 1, whereas children who received a combination of
home and school rewards were more likely to fall in Class 2). However, given the small sample size in the current study, we cannot draw any conclusions about the characteristics associated with a given response pattern.
Effect Size Analysis
The average incremental and cumulative ESs (and standard deviations) by month for the total sample, as well as for those children who fell uniquely in Class 1 or Class 2, are presented in Table 2 (SMD ESs) and Table 3 (PEM ESs). Independent sample t tests were conducted to determine if the average ES for Class 1 was significantly different from that for Class 2 at each month (see Table 2). Results indicated that for the SMD ES, on average, in Month 1, children in both classes show improvement, with no significant difference between the classes. In Month 2, there was a significant difference between the classes, with children in Class 1 achieving incremental improvement (.32), and children in Class 2 experiencing deterioration (–.43). In Month 3, both groups experience little change, with no significant group differences. In Month 4, there was a significant difference between the classes, with children in Class 1 achieving another incremental gain equivalent to the second month's improvement (.30), and children in Class 2 experiencing deterioration (–.29). Similarly, the cumulative SMD ESs indicate that by Month 2, children in Class 1 (1.19) have achieved a cumulative benefit of a significantly larger magnitude than children in Class 2 (.18). This significant difference is maintained at Month 3 (1.27 vs. –.15) and Month 4 (1.39 vs. –.36), during which time children in Class 2 experience a cumulative deterioration of small magnitude. Indeed, the cumulative benefit for children in Class 2 never exceeds the benefit that children in Class 1 achieve in Month 1.

Table 2
Standard Mean Difference (SMD) Effect Sizes by Sample

                           Total Sample   LCA Class 1   LCA Class 2     t       df
                           M (SD)         M (SD)        M (SD)
Incremental Effect Size
  Month 1                  .78 (1.31)     .84 (1.36)    .44 (.96)       1.00    87
  Month 2                  .22 (.93)      .32 (.92)     –.43 (.79)      2.46*   75
  Month 3                  –.02 (.85)     –.01 (.86)    –.08 (.87)      .23     71
  Month 4                  .21 (.80)      .30 (.76)     –.29 (.92)      2.19*   65
Cumulative Effect Size
  Month 1                  .78 (1.31)     .84 (1.36)    .44 (.96)       1.00    87
  Month 2                  1.07 (1.27)    1.19 (1.25)   .18 (1.07)      2.43*   78
  Month 3                  1.06 (1.32)    1.27 (1.21)   –.15 (1.24)     3.86*   83
  Month 4                  1.16 (1.24)    1.39 (1.08)   –.36 (1.17)     4.71*   72

Note. The total sample includes 49 children with 94 targets. Class 1 includes 48 children with 88 targets; Class 2 includes 5 children with 13 targets. LCA = latent class analysis.
* Significant difference between Class 1 and Class 2, p < .05.

Table 3
Percentage of Data Exceeding the Median (PEM) Effect Sizes by Sample

                           Total Sample   LCA Class 1   LCA Class 2     t       df
                           M% (SD)        M% (SD)       M% (SD)
Incremental Effect Size
  Month 1                  60 (38)        62 (38)       46 (37)         1.43    89
  Month 2                  37 (37)        39 (37)       24 (34)         1.29    77
  Month 3                  39 (36)        39 (38)       38 (28)         .08     73
  Month 4                  44 (37)        47 (38)       30 (30)         1.31    65
Cumulative Effect Size
  Month 1                  60 (38)        62 (38)       46 (37)         1.43    89
  Month 2                  67 (43)        70 (41)       45 (47)         1.86    80
  Month 3                  70 (38)        76 (34)       37 (43)         3.82*   85
  Month 4                  72 (35)        78 (32)       35 (33)         4.00*   72

Note. The total sample includes 49 children with 94 targets. Class 1 includes 48 children with 88 targets; Class 2 includes 5 children with 13 targets. LCA = latent class analysis.
* Significant difference between Class 1 and Class 2, p < .05.

To give context to the meaning of the ESs, we examined the descriptive statistics for the raw frequency data (i.e., unstandardized) for each category of target behaviors at baseline and at each time point. For example, in Class 1, average interruptions decreased from a frequency of 8.70 per day (SD = 6.05) at baseline to a frequency of 3.59 per day (SD = 4.34) by Month 4. In Class 1, touching others decreased from 4.31 instances per day (SD = 3.20) at baseline to 0.85 per day (SD = .75) at Month 4. In Class 1, out-of-seat behavior decreased from 11.06 instances per day (SD = 10.74) at baseline to 3.94 (SD = 5.00) at Month 3 and 5.63 (SD = 3.64) per day at Month 4. As expected, the PEM ES revealed a slightly more conservative picture. As such, the t tests were not significant for the incremental PEM ESs, but the overall pattern between the two groups was similar to that described above. In Month 1, children in both classes were improving, with children in Class 1 showing a slightly better response (62% vs. 46% improvement over baseline). This incremental benefit is not as strong in Month 2 or Month 3; yet in Month 4, children in Class 1 achieve another incremental improvement that is slightly greater than that experienced by children in Class 2 (47% vs. 30%). The cumulative PEM ESs indicate that children in Class 1 achieve a cumulative benefit of a large magnitude by Month 3 (76%) and maintain it through Month 4 (78%), whereas the cumulative benefit at Month 4 for children in Class 2 (35%)
never exceeds the benefit that children in Class 1 achieve in Month 1 (62%). Finally, because a differential pattern of ESs emerged for the two latent classes at Month 2, we examined individual response trajectories to determine the likelihood that a child will achieve a positive cumulative response (i.e., defined as an SMD ES of .5 or higher) by Month 3 or Month 4 depending on their cumulative response at Month 2. Among the children who had an SMD ES of .5 or higher at Month 2, over 80% had a cumulative positive response at Month 3 or Month 4. In contrast, among the children who had an SMD ES of 0 or lower at Month 2, only 33% had a cumulative positive response at Month 3 or Month 4.
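These conditional probabilities suggest a simple Month-2 heuristic, sketched below. The thresholds (.5 and 0) and the quoted percentages come from the analyses above; the decision messages are illustrative and not a validated clinical algorithm.

```python
def month2_recommendation(cumulative_smd_month2: float) -> str:
    """Rough decision aid based on a student's cumulative SMD effect size at Month 2."""
    if cumulative_smd_month2 >= 0.5:
        # ~80% of such students showed a positive cumulative response by Month 3 or 4.
        return "continue DRC; tighten goal criteria as behavior improves"
    if cumulative_smd_month2 <= 0.0:
        # Only ~33% of such students later showed a positive cumulative response.
        return "check teacher/parent adherence; consider modifying or replacing the DRC"
    return "continue DRC and reassess after another month of data"
```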
Discussion

We examined the percentage of children with disruptive behavior who respond positively to a DRC intervention and the extent to which students achieve incremental benefits with each month of intervention in a general education setting. The results offer preliminary information for the development of evidence-based guidelines for practical implementation of the DRC. The use of such guidelines could improve student outcomes while reducing the need for expensive special education services for many students. With regard to our first research question, the results of the latent class analysis (LCA) indicate that the majority of children respond positively to the DRC intervention. Specifically, 72% of the sample had all of their target behaviors classified as improved and an additional 20% had at least one target behavior classified as improved. This is consistent with the literature that has established the DRC as an evidence-based classroom intervention for youth with ADHD (Pelham & Fabiano, 2008; Pelham et al., 1998). In addition, the LCA results reveal that most children respond uniformly across target behaviors, as less than 25% of children had target behaviors classified in both categories (i.e., Class 1 and Class 2). This is an important new finding, and suggests that teachers can be optimistic that most children with or at risk for ADHD will respond positively to the DRC intervention and that this response will generalize across a variety of target behaviors. Finally, preliminary analyses suggest that the DRC will produce improved behavior regardless of child gender, age, special education status, medication use, IQ, severity of ADHD symptoms, type of facilitator (graduate student or school counselor), and target behavior category. However, trends in some of these characteristics (e.g., grade, grade-retention status, location of reward system) may be
worth exploring in future research, as our sample size was too small to report a pattern with confidence. With regard to the second research question, several interesting patterns emerged. First, on average, many children experienced a benefit of large magnitude (.78) within the first month (see the Total Sample column in Table 2), experienced an incremental benefit of small magnitude (.22) during Month 2, maintained this performance over Month 3 (–.02), and experienced another incremental benefit of small magnitude (.21) in Month 4. The combined intervention effect over 4 months results in a cumulative benefit of large magnitude (1.16). It is important to note that as children made improvements in behavior, the goal criterion for each behavior was made more difficult. Thus, the incremental benefit described above actually represents a "double" increment, as children are continuing to improve over time, while also meeting more difficult criteria. To put these benefits in context, on average, children are making a 70% improvement over their median performance during the baseline period (see the Total Sample column in Table 3) and are reducing their average daily instances of the target behaviors by nearly half (e.g., 11 instances of out of seat to 6; 9 instances of interruptions to 4; and 4 instances of touching others to less than 1). Although the rates of these disruptive behaviors at Month 4 may still exceed normative levels in some classrooms (Pelham, Greiner, & Gnagy, 2004), this reduction is clinically meaningful. More specifically, studies have demonstrated that when children with ADHD and disruptive behavior disorders do not receive needed treatment, symptoms and related impairments worsen significantly over the course of the academic year (Owens et al., 2008; Schultz, Evans, & Serpell, 2009). Thus, our data underscore the importance of persisting with a behavioral classroom intervention, as incremental benefits with each month are probable. Second, the differential pattern that emerged in ESs for Class 1 and Class 2 offers some preliminary evidence that may inform a teacher's decision about when and if the DRC should be discontinued and an alternative strategy attempted. In Month 1, the ESs for Class 1 and Class 2 are both positive and do not differ significantly from each other; however, by the end of Month 2, the ESs for Class 2 deteriorate, whereas the ESs for Class 1 continue to show incremental improvement. This suggests that teachers should implement and monitor the DRC for up to 2 months before considering a discontinuation decision. However, if by the end of Month 2, the child has shown
deterioration in the context of continued intervention, the likelihood of having a large incremental or cumulative improvement (i.e., SMD ES > .5) by Month 4 is 50% lower than if the child has shown a positive response in the first 2 months. In contrast, if the child has shown a sustained positive response or continued improvement during Month 2, there is a high probability (80%) of sustained or even incremental improvement. Thus, as long as there is not a period of significant deterioration, the DRC should be continued with updates and modifications to the goal criteria and rewards. Interestingly, the patterns for each class also demonstrate that it is common for positive responders to show a period in which performance plateaus, followed by another period of incremental improvement. Although we can only speculate, possible factors contributing to this plateau include holiday breaks, teacher fatigue, and/or reward satiation, each of which is worthy of further study. In addition, the patterns above are likely affected by the quality of the privilege system implemented by the parent, the quality of all the classroom components implemented by the teacher, or both; these variables were not assessed for all participants in this study. This study's results, along with consideration of other factors, offer guidelines for practical implementation decisions. More specifically, after a DRC is begun, a student's response to intervention should be assessed on a monthly basis. If progress appears to be slow during the first month, there are several important steps that should occur before deeming the student a nonresponder to the intervention. First, the teacher's compliance with the protocol should be assessed. Effort should be made to determine whether the teacher is consistently providing feedback to the student about his or her target behaviors at the point of performance and documenting these occurrences on the DRC. If the teacher's implementation is inconsistent, it is recommended that a school-based consultant engage in a problem-solving discussion with the teacher to determine whether the quality of the implementation can be improved. Biweekly consultation offers opportunities for monitoring this. Second, data from previous studies suggest that parent adherence to the home-based privilege system is a common problem (Owens et al., 2008) and related to worse outcomes (Fabiano et al., 2010). Thus, efforts should be made to assess whether rewards contingent upon daily success are being provided at home and whether these rewards are salient to the student. Preliminary data from this study suggest that if there are problems with the home-based reward system that cannot be resolved,
rewards should be replaced by, rather than supplemented with, school-based rewards, as there may be some diffusion of responsibility when parents and teachers are aware that the other party is providing privileges. This interesting trend warrants further investigation. Finally, after making the above modifications, the teacher should implement the DRC for a second month, and the student's response to intervention should be assessed again. If both the teacher and parents appear to be complying with DRC procedures, and improvement is not seen after 2 months, it is likely best to consider using a different intervention in addition to or instead of the DRC.
Limitations Several limitations warrant discussion. First, to answer this study's primary research questions, we had to implement stringent inclusion criteria related to length of intervention duration (4 months of implementation) and quality of teacher completion of the DRC (at least 50% compliance with recording procedures). As a result of these criteria, the data for over half of all children initially enrolled in the original sample were excluded. Thus, our findings may present an overly optimistic picture that may not generalize to all student– teacher dyads in the general education setting. Indeed, that 50% of teachers either stopped the DRC intervention before 4 months or failed to adequately record child behavior as intended is a striking and important finding worthy of future study, particularly given the ongoing needs of children with ADHD and the data documenting the link between intervention adherence and outcome (Pfiffner, O'Leary, Rosen, & Sanderson, 1985). Second, this study did not assess parent and teacher implementation or parent characteristics (e.g., parenting stress, parent psychopathology) that can affect parent implementation quality. Additionally, information regarding consistency, quality, and type of rewards was not available for all participants. Future studies that account for such variables may reveal a more nuanced pattern of incremental changes from month to month. Third, our sample is largely homogenous with regard to race and gender, limiting the ability to generalize these findings to more diverse populations. However, there is little evidence to suggest that child gender would influence the current pattern of results (e.g., MTA Cooperative Group, 1999). There is some evidence that race may moderate treatment outcome (Arnold et al., 2003), thus further study with minority populations is warranted. Fourth, target behaviors included in the current study were entirely behavioral in nature
(e.g., out of seat, interruptions, touching others) and did not include academic productivity targets. It is noteworthy, however, that the meta-analysis on single-subject designs did not find a significant effect of target type on intervention response (Vannest et al., 2010). This suggests that the current results may generalize to both academic and behavioral targets. Fifth, it is not possible to determine whether the improvement of multiple targets for a single child is due to true improvement across all targets or rater bias (i.e., the teacher may generalize improvement on one target to all other targets for that child). However, it is important to note that some children did not improve on all target behaviors, and facilitators encouraged the use of concrete operational definitions for each behavior (e.g., “out of seat” means the child “does not have feet on the floor and bottom in the chair”) so as to minimize the subjectiveness of ratings. Nonetheless, future research should include independent observation and rating of child behavior to examine the possibility of rater bias. Last, due to the naturalistic nature of this study, many variables (e.g., diagnostic status, use of medication) were not collected or tightly controlled, as they might be in efficacy trials or analogue settings. Although a study in the natural setting may result in a higher likelihood of outside variables influencing outcome data, it more closely resembles a real-world school setting, and thus produces the type and magnitude of results that might be expected in a typical classroom. As the focus of this study was to provide information that may aid in making practical decisions for teachers and school-based mental health professionals, we believe that this trade-off is warranted and appropriate.
Summary The current study yields several important findings that provide teachers and school-based mental health professionals with a better understanding of the likelihood of student response to a DRC and the pattern and rates of change that can be expected. The primary findings indicate that a large majority of children have a positive response to a DRC and show a meaningful decrease in the frequency of target behaviors (i.e., 50% improvement). Additionally, many students continue to incrementally improve over the course of 4 months with sustained implementation of the intervention. This could have a significant impact on teacher stress levels, classroom environment, and special education referrals. Thus, the benefits of continued implementation likely outweigh the cost of time required to continue the intervention. Importantly, the results of this study offer evidence-based
guidelines that can be used by teachers and mental health professionals to aid in making practical and efficient decisions regarding intervention implementation and continuation. Acknowledgement This project was supported in part by funding from the Ohio Department of Mental Health (ODMH) Office of Program Evaluation and Research (Grant #s 04.1203 and 05.1203); ODMH Residency and Training Program (Grant #s OU-05-26 and OUSP 06-12), the Health Resources and Services Administration’s Quentin Burdick Program for Rural Interdisciplinary Training (D36HP03160), the Logan-Hocking School District, the Ohio Department of Youth Services via the Hocking County Juvenile Court, the R. Alvin Stevenson Fund of the Columbus Foundation (Grant # TFB03-0260 STE), and the Holl Foundation. References Abikoff, H. B., Jensen, P. S., Arnold, L. L., Hoza, B., Hechtmen, L., Pollack, S., . . . Wigal, T. (2002). Observed classroom behavior of children with ADHD: Relationship to gender and comorbidity. Journal of Abnormal Child Psychology, 30, 349–359. Arnold, L. E., Elliott, M., Sachs, L., Bird, H., Kraemer, H. C., Wells, K. C., . . . Wigal, T. (2003). Effects of ethnicity on treatment attendance, stimulant response/dose, and 14-month outcome in ADHD. Journal of Consulting and Clinical Psychology, 71, 713–727. Atkins, M. S., Pelham, W. E., & Licht, M. H. (1985). A comparison of objective classroom measures and teacher ratings of attention deficit disorder. Journal of Abnormal Child Psychology, 13, 155–167. Barkley, R. A. (2005). Attention-deficit hyperactivity disorder: A handbook for diagnosis and treatment (3rd ed.). New York: Guilford Press. Barkley, R. A., Murphy, K. R., & Fischer, M. (2008). ADHD in adults: What the science says. New York: Guilford Press. Bibou-nakou, I., Stogiannidou, A., & Kiosseoglou, G. (1999). The relation between teacher burnout and teachers’ attributions and practices regarding school behaviour problems. School Psychology International, 20, 209–217. Chafouleas, S. M., Riley-Tillman, T. C., & McDougal, J. (2002). Good, bad, or in-between: How does the daily behavior report card rate? Psychology in the Schools, 39, 157–169. Chang, M. (2009). An appraisal perspective of teacher burnout: Examining the emotional work of teachers. Educational Psychology Review, 21, 193–218. Cohen, J. (1988). Statistical power for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. DuPaul, G. J., & Eckert, T. L. (1997). School-based interventions for children with attention- deficit hyperactivity disorder: A meta-analysis. School Psychology Review, 26, 5–27. DuPaul, G. J., Power, T. J., McGoey, K. E., Ikeda, M. J., & Anastopoulos, A. D. (1998). Reliability and validity of parent and teacher ratings of attention-deficit/hyperactivity disorder symptoms. Journal of Psychoeducational Assessment, 16, 55–68.
Received: August 29, 2011
Accepted: February 9, 2012
Available online February 21, 2012