Studies in Educational Evaluation 53 (2017) 41–54
Measuring teachers’ perceptions about differentiated instruction: The DI-Quest instrument and model

Catherine Coubergs (corresponding author), Katrien Struyven, Gert Vanthournout, Nadine Engels

Educational Sciences Department, Vrije Universiteit Brussel, Belgium
Article history: Received 29 August 2016; accepted 20 February 2017.

Keywords: Differentiated instruction; Differentiated Instruction Questionnaire; Validation; DI-Quest; Student diversity; Academic diversity; Primary education; Secondary education; Teacher perceptions; Reliability

Abstract
Within a democratic and multicultural society, diversity is a reality, and differences between students are a fact which teachers have to deal with on a daily basis. Differentiated instruction aims to meet these differences in learning in order to provide all students with the best possible learning opportunities. However, to date no validated instruments exist to measure teachers’ perceptions of differentiated instruction and their related classroom practices. This study, therefore, examined the factor structure and reliability of the Differentiated Instruction Questionnaire, called the DI-Quest instrument. A list of 87 items was constructed, building on prevalent theoretical models of Differentiated Instruction (e.g. Tomlinson, 2014; Hall, 2002). Exploratory and confirmatory factor analyses were undertaken to investigate the factor structure of the questionnaire. Five factors emerged: two factors related to the teachers’ philosophy of differentiated instruction (the teachers’ mindset and their ethical compass), two factors referred to the practical principles that teachers apply to differentiate (flexible grouping and output = input), and the last factor (differentiated instruction) covered the self-reported extent to which teachers differentiated their instruction in relation to three types of differences in learning (students’ interests, readiness and learning profile). As a result, the DI-Quest instrument entailed 31 items with a five-factor structure indicating a good fit (CFI = 0.919; TLI = 0.911; RMSEA = 0.041 [0.037–0.044, 90% confidence interval, p(0.05) = 1.000]; SRMR = 0.048; χ² = 5888.338, df = 465, p = 0.000). In addition, assuming theoretical relatedness between the factors, a DI-Quest model was empirically validated. We compared the fit of two models by investigating which model had lower BIC and AIC values and by comparing their chi-square values. In the best-fitting DI-Quest model, four factors (teachers’ mindset, ethical compass, flexible grouping and output = input, as independent variables) functioned as significant predictors of the fifth factor (the self-reported adoption of Differentiated Instruction, which served as the dependent variable). Moreover, this paper discusses the psychometric properties of the DI-Quest instrument and the implications of the model for schools, educators and researchers.
1. Introduction Within a multicultural society, one of the main challenges of education is to prepare all students to build their lives, to participate in and contribute to society, and to live together in harmony (Belfi, Goos, De Fraine, & Van Damme, 2012). Diversity in education is a fact and, therefore, differences between students are inherent in classroom contexts. Failing to take these differences into consideration could disadvantage or inhibit students’ learning (Belfi et al., 2012). Differentiated instruction aims to deal with these differences in learning, in order to provide all students with
the best opportunities for learning. This fundamental goal of providing all students with a maximum number of learning opportunities presents challenges for the school, the teachers and other stakeholders (Tomlinson, 2001). This study aimed to develop and validate a theory-driven instrument, called the DI-Quest, with the objective of describing the extent to which teachers think and act according to the philosophy and principles of Differentiated Instruction in their classrooms. The concept of Differentiated Instruction can be seen as a philosophy and praxis of teaching. Bade and Bult (1981) defined differentiated instruction as the collection of all measures that interact with differences between students. Tomlinson (2001) described differentiated instruction as a form of adaptive teaching, with the aim of providing all students with optimal learning
possibilities, whereas Woolfolk (2010) elaborated on this and referred to differentiated instruction as a variety of different components of education and teaching that take into account the specific characteristics of students. More recently, Tomlinson (2005) expanded the definition of differentiated instruction to a proactive method of teaching involving modifying curricula, teaching methods, resources, learning activities and student products to address the different needs of students, in order to maximize learning opportunities for every student in the classroom. This highlights the carefully planned, positive and proactive nature of differentiated instruction. In the academic literature, two models of Differentiated Instruction tend to recur. One was developed and continuously refined by Tomlinson (2014), namely the Differentiated Instruction Model (Fig. 1); the other was described by Hall (2002) (Fig. 2). Since both models form the basis on which the survey instrument central to this study was constructed, more details are provided below.
When taking a closer look at the Differentiated Instruction Model of Tomlinson (2014) in Fig. 1, the concept of mindset arises. Sousa and Tomlinson (2011) stated that a teacher's mindset can affect the successful implementation of differentiated instruction. Dweck (2006) distinguished two types of mindsets: the fixed and the growth mindset. In a fixed mindset, teachers tend to believe that students’ qualities, such as their talent or intelligence, are fixed traits determining their success, without taking effort into account. A typical presumption is: ‘Some students have what it takes to be successful, others do not’. In a growth mindset, however, teachers believe that most learning can be achieved through dedication and hard work. In this perspective, every student can be successful if they put in effort; intelligence and talent are just a starting point for learning to happen. Hattie (2005) assumed that teachers with a growth mindset are more likely to accept differences between students and tend to consider student diversity as part of a rich learning environment.
Fig. 1. Model on Differentiated Instruction (Tomlinson, 2014).
Fig. 2. Model on Differentiated Instruction (Hall, 2002).
This concept of mindset encapsulates the quality curriculum, as presented by Kelly (2009). The rationale behind ‘quality’ lies in identifying four fundamental questions for a quality educational curriculum. These entail the educational purposes that a school seeks to attain, the educational experiences that can be provided while attaining these purposes, the organization of these educational experiences and the manner of assessment of the purposes that are being attained. In this regard, Tomlinson (2014) promoted the concept of ‘teaching up’ for excellence, stating that too few students regularly have educational experiences that stimulate and ‘stretch them’. Teaching up is seen as an approach that can be used to make these experiences available to all students, regardless of their backgrounds and starting points. Tomlinson et al. (2003) distinguished three forms of differentiated instruction, responding to the different needs in learning, based on students’ readiness, students’ learning profiles and students’ interests. Firstly, differentiated instruction that copes with differences in readiness focuses on differences according to a student's learning position relative to the learning goals that are to be attained within a given subject at a certain time, which stands for a state of preparedness (Woolfolk, 2010). Teachers will provide the possibility (for every student) to attain the learning objectives based on every student's individual learning pace and learning position (Woolfolk, 2010). Secondly, differences can also occur in the learning profile of students. This term refers to a student's preferred mode of learning (Tomlinson et al., 2003), which can be affected by different factors, such as learning styles, intelligence, preference, gender, culture, and context. Thirdly, differences can appear regarding the (level of) interests of students. Differentiated instruction based on students’ interests often provides students with the opportunity to choose between assignments, subject matter or teaching methods (Tomlinson et al., 2003). Addressing learner interests can be linked to motivation and it appears to have a positive impact on learning (Tomlinson et al., 2003). Learning content which is interesting to students is more likely to lead to enhanced student engagement, increased student productivity, a higher level of intrinsic motivation and student autonomy (Eisenberger & Shanock, 2003). Moreover, Tomlinson (1999) claimed that the interventions in differentiated instruction might focus on the ‘content’, ‘product’
and/or ‘process’. ‘Content’ stands for the subject matter which the students have to learn. In this regard, Vygotsky (1978) defined the zone of proximal development, in which a student can successfully master the acquired level, but not without scaffolding or support from a peer or a teacher. Tomlinson et al. (2003) stated that differentiated instruction should push every student into his or her zone of proximal development. ‘Process’ refers to the learning path students embark on, associated with the learning activities they execute to achieve the learning goals. The main objective is that teachers design a variety of teaching methods tailored to the students’ learning needs, fields of interest or learning profiles. ‘Product’ refers to the learning outcomes and achievements, based on which students can prove their accomplished goals (Tomlinson & Imbeau, 2010). Obtaining insight into how, when and whether students have accomplished their learning goals is part of a rich learning environment. This embodies the idea of assessment. Knowing where your students are, what difficulties they encounter, and what works well is essential when aiming to differentiate between your students. Hattie and Timperley (2007) stated that feedback as an on-going assessment strategy has a powerful influence on learning. This makes on-going assessment a key factor of the model (Tomlinson, 1999). ‘Affect’ stands for the students’ emotions and feelings. Taking these into account can positively impact students’ motivation to learn. The ‘learning environment’ refers to the appearance and structure of a classroom, which can be inviting for students, while the emotional climate can support students during their learning (Tomlinson, 2014). Last, but not least, Tomlinson (2014) suggested a variety of instructional strategies to differentiate instruction in class. These can be seen as hands-on teaching methods, which are assumed to be effective for providing differentiated instruction. Some examples can be found in Fig. 1, such as learning contracts, tiered instruction, and learning centers. For example, Reis, McCoach, Little, Muller, and Kaniskan (2011) examined the effect of a differentiated reading program on students’ oral reading and comprehension. In this research, 63 teachers and 1192 primary school students across five elementary schools were analyzed by means of multi-level modeling. Significant differences in reading fluency were found in favor of the differentiated, enriched reading program. In terms of learning outcomes, the
Reis et al. (2011) study found that differentiated instruction was as effective as, or more effective than, a traditional whole-group approach. Another model of Differentiated Instruction that recurs in the literature is that of Hall (2002), represented in Fig. 2. The rationale of this model is to recognize, through a measure of pre-assessment, the students’ varying prior knowledge, readiness, learning preferences and/or interests and to react responsively to this variation in student learning. From here on, the teacher will plan the process of learning. Assessment has a key role in this model: meeting each student where he or she stands in the learning process and assisting them accordingly (Hall, 2002). Within Hall's model, assessment is seen as assessment for learning (Hall, 2002). This involves social interaction between the teacher and student (and among students), who have a shared vision of learning. Berry (2008) defined this as a planned collection of information from students that helps them to understand their knowledge, skills, strengths, and weaknesses. Therefore, Hattie and Timperley (2007) stated that feedback plays an important role in the process of assessment and teaching. When applied properly, it can have a positive effect on the learning outcomes of students. Finally, Hall (2002) stated that a teacher's focus on addressing differences between students and on providing all of them with a maximum number of learning possibilities is a crucial element for integrating differentiated instruction in the classroom. Proponents of differentiated instruction have stated that its principles and guidelines are rooted in years of educational research and theory (Hall, 2002), but it is very difficult to make statements about the active components of differentiated instruction. In general, interventions in experimental designs use a compilation of different components of differentiated instruction, as described by Hall (2002) or Tomlinson and Imbeau (2010). For instance, Baumgartner, Lipowski, and Rush (2003) described and studied the impact of differentiated teaching methods in a program designed to improve reading achievement. This program included the practices of flexible grouping and students’ choices on a variety of tasks. The targeted students of primary and middle schools showed improvement, such as a more positive attitude towards reading, a rise in instructional reading levels, a rise in the number of comprehension strategies used by students and a greater mastery of phonemic and decoding skills, compared to the students from the control group. Also, Tieso (2005) examined the combined effects of a differentiated curriculum and grouping structures. The study suggested that flexible ability grouping, combined with appropriate curricular revision or differentiation, may result in substantial achievement gains both for average and high-ability learners. The difficulty with these studies lies in the obscurity of the active components of differentiated instruction (Hattie, 2005; Smit & Humpert, 2012). Different studies have investigated diverse concepts, which implies that the different characteristics of differentiated instruction and their effects on learning outcomes still need to be investigated. Moreover, no empirical validation for differentiated instruction as a ‘package’, including teachers’ philosophy and practices, was found.
Consequently, it was difficult to compare the different studies and draw more generic conclusions across studies and particular contexts. This calls for a psychometrically valid and reliable instrument of differentiated instruction, which enables scientific research to be carried out and allows teachers and other stakeholders to evaluate and compare their differentiating teaching practices. We based the development of the questionnaire on the models described above, as defined by Tomlinson (2014) and Hall (2002). In summary, this study aimed to create and validate the Differentiated Instruction Questionnaire, called the DI-Quest. It is
an instrument developed to measure teachers’ philosophy and practices of differentiated instruction, on the one hand, and to derive a validated model of Differentiated Instruction that describes the ‘common grounds’ between teachers who adopt differentiated instruction in their classrooms, on the other.

2. Method

2.1. Constructing the Differentiated Instruction Questionnaire for teachers

Comparing teachers’ philosophy and practices across a wide range of classrooms and contexts requires a tool that can be administered easily and that integrates the key ideas and concepts used in the academic and professional literature on Differentiated Instruction (DI). We therefore aimed to develop a self-report instrument measuring the different perceptions teachers have about differentiated instruction, as well as their practices.

2.1.1. Item creation

As a first step, based on a literature review and the most prevalent models of differentiated instruction developed by Tomlinson (2014) and Hall (2002), a pool of items referring to distinct aspects of differentiated instruction was created by the researchers, who have expert knowledge of differentiated instruction. In constructing the initial pool of items to measure perceptions and practices concerning differentiated instruction, the following criteria were used: (1) items had to refer to specific perceptions (e.g. beliefs, concerns, excluding ‘feelings’) or behaviors related to the implementation (of different aspects) of differentiated instruction, (2) items needed to explicitly measure a single practice or perception, and (3) items needed to be relevant across multiple teaching contexts. This process generated a pool of 147 items.

2.1.2. Item selection

In order to check for content validity, the 147 items retained from the item creation phase were administered to two educational experts (one male post-doctoral researcher with teaching experience in higher education and knowledge about differentiated instruction, and one female teaching assistant with experience in teaching in multilingual and multicultural schools). They were asked to provide feedback on how well each question measured its theoretical ideas. The experts were also invited, where needed, to adjust the wording of items to improve their clarity and quality. Their feedback was analyzed and the first version of the questionnaire was adapted accordingly. Afterwards, a pilot study was conducted (N = 23 teachers) to check the level of difficulty and clarity of the questions. Through this process, 87 items were retained in the questionnaire, which was validated in the present study with a large-scale group of teachers in primary and secondary education. The questionnaire consists of two parts. Part one contains 47 items to be answered on a 7-point Likert scale ranging from ‘I totally disagree’ to ‘I totally agree’, measuring teachers’ perceptions of various topics related to differentiated instruction, such as teachers’ mindset (fixed versus growth), concerns and practices related to a quality curriculum (standards, study materials, demands), within-class grouping and cooperative learning (homogeneous vs heterogeneous groups) and classroom management (concerns, facilities . . . ). Part two contains 40 items, to be answered on a 7-point Likert scale ranging from ‘never’ to ‘always’, measuring the frequencies of different practices of differentiated instruction. Items relate to bringing students into their zone of proximal development (e.g. support and challenging tasks), assessment-for-learning practices, and adapting classroom practices to student differences
in learning, namely students’ interests, readiness and learning profiles. The questionnaire was administered in Dutch. Table 1 contains an English translation of exemplary original items; Table 5 presents the items of the DI-Quest instrument in its final version (see further).

2.2. Respondents

The population of the validation study included all of the school-based teachers in Flanders, Belgium (from kindergarten to the last years of secondary education, with pupils aged 3–18), except for special needs schools. A list of all schools for the year 2013–2014 was obtained from the Flemish Department of Education; this amounted to 3353 schools. By email, we approached these schools to participate, by means of a self-selecting convenience sample, aiming to obtain a considerable sample to validate the instrument. This was achieved by sending an introductory letter to the managing staff of the different schools, in which the goals of the research were explained, as well as the confidentiality clause. Afterward, the respondents were stratified according to province, education authority (subsidized official education, subsidized private education, community education) and level of education (primary schools, secondary schools). In the days following the call, 131 schools agreed to participate, and 3700 paper-and-pencil questionnaires were sent to this set of schools in January 2014. Some schools declined the invitation to participate and articulated why they did not want to take part in the research. The most common reason was a lack of time, as the schools had other priorities or had experienced saturation when it came to participating in educational research. For those schools that agreed to take part, the teachers were sent a letter informing them that they were to participate in a large-scale validation of a survey investigating perceptions of differentiated instruction. In addition, teachers and principals were promised a school-based feedback
report if at least ten teachers per school agreed to become involved. No information on the theoretical framework of this study was given to the teachers. In order to minimize non-response, hard copies were sent to every school and forms could be returned free of charge. The final sample included 1573 teachers from 94 different schools (an average of 16.7 teachers per school). As shown in Table 2, the distribution of these 94 participating schools over the stratification variables province, education authority and level of education is represented in the sample.

2.3. Data analyses

To validate and evaluate the factor structure of the DI-Quest, a stepwise approach was adopted, combining exploratory and confirmatory factor-analytic techniques. SEM can be defined as the combination of factor analysis and multiple regression analysis. In comparison with confirmatory factor analysis, SEM extends to investigating possible relationships among latent variables by encompassing two components: (a) a measurement model and (b) a structural model (Schreiber, Nora, Stage, Barlow, & King, 2006). Since the aim of the study was not merely to describe the data with a reduced number of variables, but to describe the underlying structure and latent constructs, we opted to use exploratory factor analysis (EFA) instead of principal component analysis (PCA). To minimize the chance of overanalyzing the data, the sample was randomly split in half, and exploratory and confirmatory techniques were carried out on separate sub-samples. This split reduced our sample size. In this regard, Kline (2011) states that an ideal sample-size-to-parameters ratio would be 20:1, but also that, although less ideal, a ratio of 10:1 could be considered safe. Prior to the exploratory techniques, the suitability of the data for the exploratory analysis was checked by closely examining the Kaiser–Meyer–Olkin measure and Bartlett's test of sphericity (Pallant, 2010).
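To make these steps concrete, the following minimal sketch in R (the software used for the confirmatory analyses in this study) illustrates the random split and the suitability checks. The data frame name diquest is a hypothetical placeholder for the 87 item responses; this is an illustration, not the authors' original code.

```r
## Illustrative sketch (not the authors' code): split the sample in half and
## check the factorability of the item responses, assuming the 87 DI-Quest
## items are stored in a data frame called `diquest` (hypothetical name).
library(psych)

set.seed(2014)  # arbitrary seed, for reproducibility of the split
half <- sample(nrow(diquest), nrow(diquest) / 2)
efa_half <- diquest[half, ]   # first half: exploratory factor analysis
cfa_half <- diquest[-half, ]  # second half: confirmatory factor analysis

## Suitability checks reported in the paper: KMO above 0.6 and a
## significant Bartlett's test of sphericity
R <- cor(efa_half, use = "pairwise.complete.obs")
KMO(R)                                   # Kaiser-Meyer-Olkin measure
cortest.bartlett(R, n = nrow(efa_half))  # Bartlett's test of sphericity
```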
Table 1
Rotated factor and structure matrices for exemplary items of parts three and four of the DI-QUEST. [The six-factor coefficient matrix could not be recovered from the page layout; each row reported an item's pattern (P) and structure (S) coefficients on Factors 1–6, its communality (h2) and its M (SD).] Exemplary items included: "Despite the learning capacity of students, we cannot change intellectual capabilities"; "The way a teacher teaches is of influence on the intellectual capabilities of students"; "Experiences of success can influence the intellectual capabilities of students"; "The motivational methods I use during my classes will not affect students' intellectual capabilities"; "The curriculum is not providing any flexibility to cope with an individual student"; "Following the criteria of governmental inspection makes me a less qualitative teacher"; "My lessons are motivational for all of my class"; "My lessons are focused on learning profit for every student"; "I expect all of my students to think with a focus on problem solving"; "My lessons are motivational for every student"; "I teach my students to help each other"; "I use assessment to assess which goals a student has attained"; "I use assessment to gain insight into the learning processes of my students"; "I teach my students how to cope with feedback"; "During my lessons different students work on different tasks with a different level of difficulty"; "If my students are not able to finish an assignment during my lessons, they get the opportunity to finish it at home"; "During my lessons, my students can decide with me on which assignment they need to work".
Note. P = pattern coefficients; S = structure coefficients; h2 = communalities of the measured variables. Only pattern coefficients and structure coefficients with values of |.40| or greater were displayed.
Table 2
Distribution of province, type of school and level of education.

                                  Total schools (N = 3350)   Respondents (N = 1574, from 94 schools)   Share of sampled schools (%)
Province
  Antwerp                         891 (27%)                  311 from 26 schools                       27.7%
  Limburg                         460 (14%)                  293 from 11 schools                       11.7%
  Flemish-Brabant                 501 (15%)                  310 from 16 schools                       17%
  East-Flanders                   716 (21%)                  265 from 14 schools                       14.9%
  West-Flanders                   616 (18%)                  312 from 20 schools                       21.2%
  Brussels Capital Region         166 (5%)                   83 from 7 schools                         7.4%
Type of school
  Subsidized official education   624 (19%)                  166 from 15 schools                       16%
  Community education             631 (19%)                  655 from 33 schools                       35.1%
  Subsidized private education    2095 (62%)                 753 from 46 schools                       48.9%
Level of education
  Primary school                  2389 (71%)                 719 from 63 schools                       67%
  Secondary school                961 (29%)                  855 from 31 schools                       33%
The analyses were conducted with SPSS 22 software. The first half of the randomly split data was used for the EFA. We based the selection of the number of factors on the Kaiser–Guttman rule (eigenvalues greater than one) (Bandalos & Boehm-Kaufman, 2009), parallel analysis and the scree plot, as well as on the interpretability of the scales and their fit with the theoretical framework. We opted for oblique rotation instead of varimax rotation, since we expected the different factors to be correlated and the structure to be less simple than postulated by Thurstone (1947). This is reflected in the factor loadings, as several items load 0.30 or higher on more than one factor. Factors with a minimum of three items and items with a sufficiently high loading of more than |0.40| were retained (Tabachnick & Fidell, 2007). The theoretical interpretability of the factor solution was used as a final criterion. However, after our analyses we also performed a varimax-based explorative analysis on the whole dataset; the same factors and items occurred, and no significant difference was found. In the second sub-sample, a confirmatory factor analysis (CFA) was conducted in order to cross-validate the factor structure identified through the EFA. This technique is theory-driven, using a hypothesized model to estimate a population covariance matrix; the aim is to minimize the difference between the estimated and observed matrices (Schreiber et al., 2006). The CFAs were conducted on the second half of the split data with the open-source software R version 3.0.2 and the following packages: psych, car, TeachingDemos, fields, gmodels, rela, lavaan, GPArotation. Default settings, such as the use of a variance–covariance matrix as input and ML estimation, were used. In each of these confirmatory analyses, a model consisting of the number of factors and the number of items retained from the EFA was tested.
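As an illustration of the exploratory step, the sketch below runs the parallel analysis, the oblimin-rotated EFA and the |0.40| loading check with the psych package. It reuses the hypothetical efa_half data frame from the sketch above and is not the authors' original script.

```r
## Illustrative sketch of the exploratory step (hypothetical object names).
library(psych)

## Parallel analysis plus scree plot to suggest the number of factors; the
## paper compares observed eigenvalues against simulated ones at the 95th
## percentile (Glorfeld, 1995)
fa.parallel(efa_half, fa = "fa", n.iter = 100)

## Six-factor EFA with an oblique (oblimin) rotation, since the factors are
## expected to correlate
efa <- fa(efa_half, nfactors = 6, rotate = "oblimin", fm = "ml")
print(efa$loadings, cutoff = 0.40)  # display only loadings of |.40| or more
efa$Phi                             # factor correlation matrix
```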
Table 3
Total variance explained, EFA on the random split file.

Factor   Initial eigenvalues                       Extraction sums of squared loadings       Rotation sums of squared loadings
         Total    % of variance   Cumulative %     Total    % of variance   Cumulative %     Total
1        11.829   13.597          13.597           11.183   12.854          12.854           7.012
2        5.038    5.791           19.386           4.383    5.038           17.892           3.835
3        3.885    4.465           23.852           3.225    3.707           21.599           4.885
4        3.229    3.792           27.644           2.728    3.136           24.735           4.523
5        3.189    3.666           31.310           2.557    2.940           27.674           7.019
6        2.633    3.026           34.336           2.002    2.301           29.976           5.178
7        2.337    2.686           37.023
8        2.090    2.402           39.425
9        1.825    2.098           41.522
10       1.716    1.972           43.495
11       1.650    1.896           45.391
12       1.632    1.876           47.267
13       1.496    1.720           48.987
14       1.461    1.680           50.667
15       1.322    1.520           52.187
16       1.271    1.460           53.647
17       1.224    1.407           55.055
18       1.197    1.376           56.431
19       1.189    1.366           57.797
20       1.151    1.323           59.119
21       1.112    1.278           60.397
22       1.056    1.214           61.611
23       1.031    1.185           62.796
24       0.989    1.137           63.933
25       0.980    1.126           65.059
26       0.944    1.085           66.144
27       0.912    1.048           67.192
28       0.894    1.028           68.220
29       0.879    1.011           69.231
On several occasions the model was optimized, based on the modification indices. To obtain models with a good fit, we used the following goodness-of-fit indices: for the CFI and TLI we strived for values around 0.9, for the RMSEA for values smaller than 0.06, and for the SRMR for values smaller than 0.08 (Hu & Bentler, 1999). Moreover, the reliability of the scores for the established scales was checked by means of their Cronbach's alpha values and accompanying item analyses; an alpha value of 0.7 is considered sufficiently reliable (Pallant, 2010). Finally, to validate a model of differentiated instruction, we compared the model fit of the two models by investigating which model had the lower BIC and AIC values and by comparing their chi-square values (Burnham & Anderson, 2004).

3. Results

3.1. Factor structure at the teacher level

In the first phase, the 87 items of the DI-Quest were subjected to an Exploratory Factor Analysis (EFA) with oblique rotation. Prior to this EFA, the suitability of the data was checked. These analyses were performed on one half of the randomly split data. The Kaiser–Meyer–Olkin (KMO) value was 0.843, exceeding the recommended 0.6, and Bartlett's test of sphericity reached statistical significance (χ² = 16989.939, df = 3741, p = 0.000), indicating that there was sufficient communality present in the manifest variables to perform a factor analysis (Pallant, 2010). The initial eigenvalues of the EFA indicated the presence of 29 factors, which together explained 69.231% of the variance (Table 3). However, from factor 8 onwards the extra percentage of variance was between 2.5% and 1% per factor. A parallel analysis showed that 14 factors had an eigenvalue that exceeded the corresponding criterion value, explaining 50.667% of the variance. For this criterion value, it is recommended to use the eigenvalue that corresponds to the 95th percentile (Glorfeld, 1995). However, the solutions with 7 up to 14 factors included factors that did not contain the minimum of three items. Therefore, we opted for a six-factor solution. This is in line with the scree plot, which showed a clear break after the sixth factor. The scree plot for the eigenvalues is shown in Fig. 3. These six factors explain 13.597%, 5.791%, 4.465%, 3.792%, 3.666% and 3.026% of the variance, respectively, for a total of 34.336%. To interpret these six factors, we made use of an oblique rotation. The factor correlation matrix indeed shows that there was a moderate correlation
Fig. 3. Scree plot with eigenvalues based on EFA with random split file.
between the various factors (absolute r-values between 0.001 and 0.268). The rotated solution, as presented in Table 1, shows that there were sufficient items with a loading higher than 0.40 for each factor and that most items load on one factor only. All of the items with a loading lower than 0.40 were deleted. In the second phase, the six factors and corresponding 49 items that resulted from the EFA were subjected to a Confirmatory Factor Analysis (CFA). These analyses were performed on the other half of the randomly split data file. Prior to this CFA, the suitability of the data was checked. The Kaiser–Meyer–Olkin (KMO) value was 0.854, exceeding the recommended 0.6, and Bartlett's test of sphericity reached statistical significance (χ² = 17133.482, df = 3741, p = 0.000), indicating that there was sufficient communality present in the manifest variables to perform a factor analysis (Pallant, 2010). The results of this CFA pointed to moderate but still insufficiently high fit indices (CFI = 0.756; TLI = 0.742; RMSEA = 0.058 [0.056–0.060, 90% confidence interval, p(0.05) = 0.000]; SRMR = 0.072; χ² = 10351.172, df = 1176, p = 0.000). Subsequently, changes were made to the six-factor model in order to achieve a better fit. Items 50, 52, 77, 81, 82, 92, 95 and 104 were removed, since the modification indices showed that error covariances would have to be included between these items and items of other scales. These moderate-to-strong correlations with items from other scales may indicate that these items measured different concepts or that respondents interpreted them ambiguously. The removal of these items led to a better, but still insufficient, fit (CFI = 0.833; TLI = 0.821; RMSEA = 0.050 [0.047–0.052, 90% confidence interval, p(0.05) = 0.612]; SRMR = 0.062; χ² = 7602.863, df = 820, p = 0.000). In the next step, error covariances (based on the modification indices and the content of items) between items of the same scale were added. Taking these changes into account, a six-factor structure with 40 items and three error covariances arose, showing a better, but still insufficient, fit (CFI = 0.857; TLI = 0.846; RMSEA = 0.046 [0.043–0.049, 90% confidence interval, p(0.05) = 0.992]; SRMR = 0.061; χ² = 7602.863, df = 820, p = 0.000). Since we also aimed for a clear and explanatory model behind this questionnaire, we studied the different error covariances available in this six-factor model. Before the items were deleted, most covariances were situated in just one factor, specifically the factor that we had interpreted as ‘teaching up’, owing to similarities with the Tomlinson (2014) model (Fig. 1). This may indicate an ambiguous interpretation of the items under this factor. When examining these items, we noticed that different concepts lay behind the individual items: some items questioned the motivational aspects of a lesson, others measured a focus on learning how to solve problems, and the zone of proximal development was also measured within this factor. Moreover, when examining the EFA, we concluded that the sixth factor added an explanatory value of only 3.026%. Based on these findings, we performed a new CFA based on a five-factor structure with 33 items and three error covariances. This led to a sufficient result (CFI = 0.911; TLI = 0.902; RMSEA = 0.042 [0.038–0.045, 90% confidence interval, p(0.05) = 1.000]; SRMR = 0.051; χ² = 5894.790, df = 486, p = 0.000).
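A minimal lavaan sketch of this confirmatory step is given below. The factor and item names follow the shorthand of Table 5, but the code is illustrative rather than the authors' script; in particular, freeing error covariances should only follow a substantive review of the modification indices, as described above.

```r
## Illustrative five-factor CFA in lavaan (item names follow Table 5).
library(lavaan)

model5 <- '
  mindset   =~ Visie.S2 + RVisie.S5 + RVisie.S10 + RVisie.S11 + Visie.S12
  compass   =~ Curr.S7 + Curr.S9 + Curr.S10 + Curr.S11 + Curr.S12 + Curr.S13
  flexgroup =~ FlexSamle.S1 + FlexSamle.S2 + FlexSamle.S3 + FlexSamle.S4 +
               FlexSamle.S6 + FlexSamle.S7 + FlexSamle.S8 + FlexSamle.S9
  outinput  =~ AssDiff.F3 + AssDiff.F4 + AssDiff.F8 + AssDiff.F17
  di        =~ AssDiff.F13 + AssDiff.F14 + AssDiff.F15 + Vs.F1 + Vs.F5 +
               Vs.F6 + Vs.F7 + Vs.F8
'

fit <- cfa(model5, data = cfa_half, estimator = "ML")
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr", "chisq", "df", "pvalue"))

## Inspect the largest modification indices before (cautiously) freeing
## error covariances between items of the same scale
head(modindices(fit, sort. = TRUE), 10)
```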
Next, the internal consistency of the scores for these five factors with corresponding items was tested, showing that all the scales were sufficiently reliable (Table 4), with alphas of 0.6 and more. Taking the suggestions of these analyses into account, two items were dropped (items 40 and 72). This led to a new five-factor structure with 31 items and three error covariances with a better and sufficient fit (CFI = 0.919; TLI = 0.911; RMSEA = 0.041 [0.037–0.044, 90% confidence interval, p(0.05) = 1.000]; SRMR = 0.048; χ² = 5888.338, df = 465, p = 0.000), in line with the criterion of CFI and TLI values above 0.9.
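The reliability checks and the 'alpha if item deleted' diagnostics reported in Table 4 below can be reproduced with psych::alpha, as in this sketch for a single scale (again with hypothetical object names):

```r
## Illustrative reliability analysis for one scale with psych::alpha.
library(psych)

compass_items <- cfa_half[, c("Curr.S7", "Curr.S9", "Curr.S10",
                              "Curr.S11", "Curr.S12", "Curr.S13")]
rel <- alpha(compass_items)
rel$total$raw_alpha  # Cronbach's alpha for the scale
rel$alpha.drop       # alpha if each item is deleted (cf. Table 4)
```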
Table 4
Number of items (N), mean (M), range and internal consistency (α) for the factors.

Factor (scale)                                                                             N items   M       Range   α       α if item deleted
Teachers' (fixed) mindset                                                                  5         3.792   1.037   0.858   No better results
Teachers' ethical compass (quality curriculum)                                             6         3.669   1.311   0.790   0.856 (if item 40 deleted)
Flexible grouping and peer learning                                                        9         5.446   1.940   0.773   0.791 (if item 72 deleted)
Output = input (assessment for learning)                                                   4         4.579   0.937   0.632   No better results
Use of Differentiated Instruction (based on interests, readiness and learning profile)     8         4.033   1.816   0.827   No better results
Fig. 4. Five factor structure of DI-Quest with 31 items.
When interpreting the items associated with the factors, the following constructs arose. The first factor¹ measured the ‘teachers’ mindset’ with regard to learning successes (number of items = 5) (3.792% of the variance explained). This factor contained items such as: “The way a teacher teaches is of influence on the intellectual capabilities of students”; “Experiences of success can influence the intellectual capabilities of students”; “The motivational methods I use during my classes will not affect students’ intellectual capabilities”. The second factor measured the ‘ethical compass’ of the teachers (number of items = 6) (4.465% of variance explained). This factor contained items such as: “Following the curriculum makes differentiated instruction impossible”, “Governmental inspection on the implementation of the curriculum is limiting my professional freedom as a teacher”, “The curriculum is not providing any flexibility to cope with an individual student”. This scale measured perceptions regarding strictly following a curriculum (including standards) without taking the students into consideration. The third factor of this model measured vision with regard to ‘flexible grouping’ (including cooperative peer learning) (number of items = 9) (3.666% of the variance explained). This factor contained items such as: “I make sure that every student gets the support he or she needs”, “I teach my students to help each other”, “Working with heterogeneous groups provides my students with the possibility to learn from each other”.
1 This factor showed negative loadings (for growth mindset). We decided to reverse these items.
The fourth factor measured teachers’ information gathering, assessment for learning and feedback practices, in short called ‘output = input’ (number of items = 4) (5.791% of variance explained). This factor contained items such as: “I teach my students how to cope with feedback”, “I use assessment to assess in what way I can adjust my lesson to the learning process of my students”. This scale measured actions that emphasize the use of feedback, learning to cope with feedback and using feedback as an engine for learning. The fifth factor, called ‘Differentiated Instruction’, measured teachers’ adaptation of their classroom practices to students’ differences in learning, related to readiness, interest and learning profile (number of items = 8) (13.597% of variance explained). This factor contained items such as: “Based on their learning profile, I let my students choose between suggested learning contents, materials or teaching methods”, “During my lessons I select the learning content and teaching methods for my students based on the learning profile of my students”. This final five-factor model with 31 corresponding items can be found in Fig. 4. The final questionnaire based on this five-factor structure of the DI-Quest can be found in Table 5. Since all the analyses were performed on a randomly split file, we repeated the EFA and CFA analyses on the complete dataset. Prior to these analyses, the suitability of the data for the EFA and CFA was checked. The Kaiser–Meyer–Olkin (KMO) value was 0.884, exceeding the recommended 0.6, and Bartlett's test of sphericity reached statistical significance (χ² = 31006.548, df = 3741, p = 0.000), indicating that there was sufficient communality present in the manifest variables to perform a factor analysis (Pallant, 2010). The CFA on the full dataset, based on a five-factor structure with 31 items and three error covariances, indicated a good fit
Table 5
The DI-QUEST 5-factor structure with 31 corresponding items and factor loadings.

Mindset                                                                                                                          Loading
(19) Visie.S2 Despite the learning potential of students, we cannot change their intellectual capacities                         0.635
(R22) RVisie.S5 The way a teacher teaches is of influence on the intellectual capacities of his students                         0.768
(R27) RVisie.S10 Classroom experiences of success can influence the intellectual capacities of students                          0.831
(R28) RVisie.S11 A teacher's belief in the competences of a student can influence their intellectual capacities                  0.881
(29) Visie.S12 The way a teacher motivates his students is not of influence for the students’ intellectual capacities            0.616

Ethical compass
(39) Curr.S7 The curriculum does not provide any flexibility to cope with an individual student                                  0.523
(41) Curr.S9 The curriculum is overloaded on content and goals                                                                   0.524
(42) Curr.S10 The curriculum does not provide any room to foresee in differentiated instruction                                  0.579
(43) Curr.S11 Governmental inspection on the implementation of the curriculum is limiting my professional freedom as a teacher   0.647
(44) Curr.S12 Following the curriculum makes differentiated instruction impossible                                               0.711
(45) Curr.S13 Following the criteria of governmental inspection makes me a less qualitative teacher                              0.562

Flexible grouping
(60) FlexSamle.S1 I regularly change between working with homogeneous and heterogeneous groups                                   0.438
(61) FlexSamle.S2 I teach my students to help each other                                                                         0.525
(62) FlexSamle.S3 I explicitly make sure I have a good relationship with all of my students                                      0.564
(63) FlexSamle.S4 During my lessons, students need to work together in order to progress in their learning process               0.379
(65) FlexSamle.S6 I make sure that every student has a specific function in my classroom                                         0.449
(66) FlexSamle.S7 Working in heterogeneous groups gives my students the opportunity to learn from each other                     0.448
(67) FlexSamle.S8 I make sure that every student who needs extra guidance will get this                                          0.565
(68) FlexSamle.S9 I differentiate by switching between working with heterogeneous and homogeneous groups                         0.430

Output = input
(78) AssDiff.F3 I use assessment to gain insight into the learning processes of my students                                      0.389
(79) AssDiff.F4 I use assessment to assess in what way I can adjust my lessons to the learning processes of my students          0.344
(83) AssDiff.F8 I teach my students how to cope with feedback                                                                    0.479
(92) AssDiff.F17 My students get the opportunity to rework a task based on given feedback                                        0.303

Differentiated Instruction
(88) AssDiff.F13 I choose the learning content and teaching methods based on my students                                         0.586
(89) AssDiff.F14 I adjust my assessment based on my students (or groups of students)                                             0.575
(90) AssDiff.F15 During my lessons, different students work on different tasks with a different level of difficulty              0.536
(93) Vs.F1 Every student will receive the same assessment                                                                        0.467
(97) Vs.F5 During my lessons, my students can decide with me on which assignment they need to work                               0.641
(98) Vs.F6 Knowing my students, I select the learning content, materials and teaching methods                                    0.427
(99) Vs.F7 Based on their learning profile, I let my students choose between learning content and teaching methods               0.714
(100) Vs.F8 During my lessons, I choose the learning content and teaching methods for my students based on the learning profile of my students   0.605
(CFI = 0.909; TLI = 0.899; RMSEA = 0.043 [0.041–0.045, 90% confidence interval, p(0.05) = 1.000]; SRMR = 0.044; χ² = 11023.574, df = 465, p = 0.000). When examining the models of Differentiated Instruction as presented by Tomlinson (2014) (Fig. 1) and Hall (2002) (Fig. 2), two possible models of differentiated instruction arose, which are presented in Figs. 5 and 6. In Model 1, four factors occurred (teachers’ (fixed) mindset, teachers’ ethical compass (quality curriculum), flexible grouping (including peer learning), and output = input (assessment for learning for students and teacher)), which functioned as significant predictors (independent variables) of a fifth factor (the dependent variable), namely the self-reported use of differentiated instruction in the classroom related to differences in interests, readiness and learning profile. Table 6 gives insight into the significance of the different predictors, with positive or negative estimates. In Model 2, the same occurs; however, two of the four factors (flexible grouping and output = input) also had a mediating effect on differentiated instruction. Table 7 provides insight into the significance of the different predictors and mediators, with positive or negative estimates.
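The two structural models can be expressed by adding regressions to the measurement model. The sketch below, which reuses the hypothetical model5 syntax from the CFA sketch above, indicates how Model 1 and Model 2 might be fitted and compared on their AIC, BIC and chi-square values in lavaan; it is an illustration, not the authors' original code.

```r
## Illustrative structural models (cf. Figs. 5 and 6) built on `model5`.
library(lavaan)

## Model 1: the four factors directly predict differentiated instruction
m1 <- paste(model5, '
  di ~ mindset + compass + flexgroup + outinput
')

## Model 2: flexible grouping and output = input additionally mediate the
## effects of mindset and ethical compass
m2 <- paste(model5, '
  di        ~ mindset + compass + flexgroup + outinput
  flexgroup ~ mindset + compass
  outinput  ~ mindset + compass
')

fit1 <- sem(m1, data = diquest, estimator = "ML")
fit2 <- sem(m2, data = diquest, estimator = "ML")

## Lower AIC/BIC indicates the better-fitting model (Burnham & Anderson, 2004)
fitMeasures(fit1, c("aic", "bic", "chisq", "df"))
fitMeasures(fit2, c("aic", "bic", "chisq", "df"))
anova(fit1, fit2)  # chi-square difference test for the nested comparison
```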
In our final step, we compared the fit of Model 1, with four sub-factors, and Model 2, with two of the four sub-factors functioning as mediators. The Akaike information criterion results all pointed towards Model 1 as providing the better fit (see Table 8).

4. Discussion and conclusions

Preparing all students to participate in everyday life within a diverse society is one of the main challenges of education. Slavin (1990) stated that if education responded to the differences between students, it could address more learning potential. Since research, on the one hand, and education, on the other, are contingent on a psychometrically sound instrument that measures perceptions on and practices of differentiated instruction, the aim of this study was to develop and investigate the validity of the Differentiated Instruction Questionnaire, called the DI-Quest. We conducted exploratory and confirmatory factor analyses to investigate the factor structure of the DI-Quest at the level of the teacher. The results indicated that at teacher level the DI-Quest measures five different constructs with a total of 31 items: fixed mindset (perception), ethical compass (perception), flexible
Fig. 5. Differentiated Instruction Model 1. *All values are significant at 1% level.
grouping (praxis), output = input (praxis) and differentiation (praxis). Moreover, they highlighted and embodied different core aspects of differentiated instruction, as developed by Tomlinson (2005) and Hall (2002). A detailed examination of the first construct – fixed mindset – shows that it included five items, measuring teachers’ perceptions about successful learning. Dweck (2006) highlighted the difference between a growth and a fixed mindset. She stated that a growth
mindset of the teacher could make a difference. Teachers with a growth mindset believe that success is related to the effort a student makes. These teachers will provide every student with challenging goals to tackle. Teachers with a fixed mindset, however, believe that success is related to a predefined concept of intelligence. Some students will have what it takes to make it, others will not. Having a growth mindset will be of great importance within a teacher's philosophy of teaching, as it was
Fig. 6. Differentiated Instruction Model 2. *All values are significant at 1% level.
Table 6
Regressions from SEM analyses, Model 1.

Regression (Differentiated Instruction ~ ...)   Estimate   Std.Err   Z-value   P(>|z|)   Std.lv   Std.all
Mindset                                         0.119      0.035     3.414     0.001     0.148    0.148
Ethical compass                                 0.116      0.035     3.13      0.001     0.147    0.147
Flexible grouping                               0.278      0.057     4.840     0.000     0.285    0.285
Output = input                                  0.256      0.046     5.540     0.000     0.319    0.319
Table 7
Regressions from SEM analyses, Model 2.

Regression                                       Estimate   Std.Err   Z-value   P(>|z|)   Std.lv   Std.all
Differentiated Instruction ~ Mindset             0.088      0.028     3.135     0.002     0.101    0.101
Differentiated Instruction ~ Ethical compass     0.089      0.026     3.468     0.001     0.115    0.115
Differentiated Instruction ~ Flexible grouping   0.287      0.034     8.475     0.000     0.320    0.320
Differentiated Instruction ~ Output = input      0.311      0.033     9.379     0.000     0.356    0.356
Flexible grouping ~ Mindset                      0.237      0.035     6.837     0.000     2.244    2.244
Flexible grouping ~ Ethical compass              0.206      0.031     6.627     0.000     2.240    2.240
Output = input ~ Mindset                         0.184      0.036     5.104     0.000     0.185    0.185
Output = input ~ Ethical compass                 0.152      0.033     4.663     0.000     0.172    0.172
Table 8
Comparison of Models 1 and 2.

              df    AIC      BIC
Fit model 1   421   121819   122204
Fit model 2   422   121936   122315

Note. df difference = 1.
Hattie (2009) who stated that the teacher can make a difference in the learning outcomes of students, just by believing in their capabilities. The DI-Quest model demonstrates that a fixed mindset negatively predicts the differentiated instruction of teachers related to students’ interests, readiness and learning profile. In contrast, the theoretically opposite ‘growth mindset’ is empirically a positive predictor of differentiated instruction, meaning the more a teacher's philosophy is oriented to Dweck's ‘growth mindset about learning’, the higher the self-reported adaptation of their teaching strategies associated with students’ differences in learning (interest, readiness and learning profile). The second construct – ethical compass – included six items measuring teachers’ perceptions of the use of different curricula as a compass for learning versus the observation of the student as a compass for learning. Tomlinson and Imbeau (2010) stated that an ethical compass focused on the student embodies developing or fine-tuning meaningful learning outcomes, devising assessments (that align with those outcomes) and creating daily lesson plans designed to move students to proficiency with those goals. This way, the teacher works to integrate differentiated instruction into his or her design for a high-quality curriculum. The DI-Quest shows that strictly following a curriculum, without taking students’ needs into account, negatively predicts the use of differentiated instruction. Thirdly, flexible grouping included nine items measuring the frequencies of different flexible grouping forms (Whitburn, 2001), while enabling positive group dynamics. Flexible grouping refers to the variation between working with homogeneous and heterogeneous groups (Tomlinson et al., 2003) while actively learning how to learn by yourself and in different types of groups. Continuous teaching in fixed homogeneous groups leads to an adapted curriculum, with less challenging lessons for low-achieving students (Terwel, 2005; Belfi et al., 2012). However, switching between homogeneous and heterogeneous groups can help students to make progress based on their abilities (when in homogeneous groups) and facilitate learning from interaction (when in heterogeneous groups) (Whitburn, 2001). Therefore, since the aim of differentiated instruction is to provide learning opportunities for all students, the variation between teaching and learning methods that work with homogeneous and heterogeneous groups is essential. Moreover, when combining different forms of flexible grouping, every student can be addressed in their zone of proximal development (Whitburn,
2001). The DI-Quest shows that combining different forms of flexible grouping based on students’ readiness, interests and learning profiles positively predicts the use of differentiated instruction. Fourthly, the construct ‘output = input’ included four items measuring practices of feedback as an essential part of the process of teaching. Feedback as the main engine for learning and differentiated instruction is the second core aspect of differentiated instruction. Hattie (2009) stated that it is one of the main effective instruments a teacher can use to influence the lifelong learning skills of students. Gijbels, Dochy, Van den Bossche, and Segers (2005) stated that, from this point of view, assessment and feedback are not the final steps of the process of teaching, but an essential part of the process of teaching and learning. The DI-Quest shows that including feedback as an essential part of learning positively predicts the use of differentiated instruction. The final factor, ‘complying with differences in learning’, included eight items measuring the self-reported frequencies of the use of different components of differentiation at the level of students’ readiness, learning profiles and interests, as defined by Tomlinson et al. (2003). When considering these five constructs in comparison with the models of Tomlinson (2014) and Hall (2002), our first construct, ‘mindset’, is easily recognizable in the model of Tomlinson (2014) and draws further on the work of Dweck (2006). Our second construct, ‘ethical compass’, focuses on strictly following a curriculum not based on students’ needs. This is not an explicit part of the model of Tomlinson (2014) or Hall (2002). Indirectly, however, it can be seen as the theoretical opposite of Tomlinson's idea of a quality curriculum based on students’ needs, which she stated is a key factor in the use of differentiated instruction. Our third construct, flexible grouping, can be recognized both in the model of Tomlinson (2014) and in that of Hall (2002). Both authors used the term ‘process’ to refer to the learning path students embark on, associated with the learning activities they execute to achieve the learning goals. Our fourth and fifth constructs, output = input and differentiated instruction (according to students’ readiness, learning profiles and interests), can easily be recognized in both the model of Tomlinson (2014) and that of Hall (2002). Both authors used the concepts of feedback and assessment for learning and made a clear distinction within differentiated instruction based on students’ readiness, interests and learning profiles. In contrast to the original models of Hall and Tomlinson, the DI-Quest model only focuses on the discriminating factors that predict the adoption of differentiated instruction as a teaching philosophy and strategy. As a consequence, factors (operationalized by items and based on theoretical concepts) that have less discriminating power, such as more generic quality indicators of good teaching or too-specific DI-related perceptions or examples of strategies that are not widely adopted in primary and secondary education, were excluded from the instrument and model, explaining the reduction
C. Coubergs et al. / Studies in Educational Evaluation 53 (2017) 41–54
of items and concepts as compared to the existing DI models. The DI-Quest model, as a consequence, shows the common denominators of differentiated instruction across a wide variety of classrooms and schools. Obviously, any stand-alone instrument suffers limitations when describing complex phenomena in education such as differentiated instruction. A first considerable limitation is that the instrument is a self-reported quantitative measure. More qualitative methods that prompt for examples (e.g. interviews) or that rely on actual practices (e.g. observations) would add to the reliability and richness of the data when describing and analyzing (the actual use of) differentiated instruction. Also, including data from the students’ perspective would give a more accurate picture of DI practices in classrooms. In addition, future research on the external validity of this questionnaire is desirable in order to check the equivalence of the instrument across other countries and languages. Building on these aspirations, DI-Quest may provide an important ‘building block’ in the empirical process of understanding and scientifically investigating differentiated instruction practices, interventions and DI-related professional development as a means to address student diversity in learning. The validity of the DI-Quest in this study was examined by means of exploratory and confirmatory factor analyses using teachers’ perceptions and self-reported practices. Agoliotis and Kalyva (2011) stated that the way teachers perceive their work, their role and the role of other professionals with whom they have to cooperate is a key determinant of the quality of the education given to students. However, the validity examined in this study was limited to construct validity. Further research on the external validity of this questionnaire is desirable; such research would examine the psychometric properties of the DI-Quest in contexts other than the Flemish one, in order to check the equivalence of the DI-Quest for other countries and languages. Also, to obtain a more reliable, realistic and accurate perspective in research on this topic, it would be useful to complement the data gathered by the DI-Quest with qualitative data, such as observations, video diaries, interviews and self-report measures. Moreover, triangulation, or the use of multiple research methods on the same phenomenon, could transcend individual shortcomings, such as socially desirable answers in survey research (Levering & Smeyers, 2003). From a more technical perspective, Sivo, Fan, Witta, and Willse (2006) calculated new and optimal cut-off criteria for rejecting models. Based on these criteria, our model requires further fine-tuning and research. However, our goal was to test whether the exploratory structure would hold up in a confirmatory model. This questionnaire was based on a literature review, and we used the exploratory results to reduce the number of items. We chose an oblique rotation method, triggered by the assumption that the factors were indeed correlated: rotation methods are either orthogonal or oblique, and orthogonal rotation methods assume that the factors in the analysis are uncorrelated. Nevertheless, searching for the right fit between the explorative and confirmative analyses would be a very useful aspect to investigate in further research.
However, this would require a larger dataset per level of education and type of school, since the relative importance of the factors may differ across educational levels and school types. In this regard, there might also be a nesting problem, as a substantial part of the variance might be explained by the teachers' school: our dataset came from many different schools (1574 teachers distributed over 94 schools), and it is very likely that school differences influence a teacher's perception and praxis of differentiated instruction. This would be a very interesting angle to investigate in future research, as sketched below. Again, a larger dataset per level and type of education would be needed, since the present dataset only allowed us to seek distinctive general and theoretical constructs, alongside their predictive value, within the global context of primary and secondary education.
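As a sketch of how this school-level nesting could be examined in follow-up work, the snippet below, under assumed column names, fits an intercept-only multilevel model with the statsmodels library and derives the intraclass correlation (ICC), i.e. the share of score variance attributable to schools.

```python
# Minimal multilevel sketch (assumed data layout; not the study's analysis).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per teacher, with a school
# identifier and a DI-Quest factor score (e.g. self-reported adoption of DI).
data = pd.read_csv("teachers.csv")  # assumed columns: school_id, di_score

# Random-intercept ("null") model: scores vary around school-specific means.
result = smf.mixedlm("di_score ~ 1", data, groups="school_id").fit()

between = result.cov_re.iloc[0, 0]  # between-school variance
within = result.scale               # residual (within-school) variance
icc = between / (between + within)
print(f"ICC = {icc:.3f}")  # a non-trivial ICC argues for multilevel modelling
```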
In conclusion, the Differentiated Instruction Questionnaire, called the DI-Quest instrument, with its five-factor structure and 31 items, proved to be a valid and reliable instrument for teachers. It captures five essential concepts regarding the self-reported perceptions and practices of differentiated instruction in the classroom. Considering the need to provide education for all, such a valid and reliable instrument is important for investigating the effectiveness of differentiated instruction approaches to teaching. Moreover, a model of differentiated instruction can be derived from this questionnaire and the available data. The DI-Quest with its five-factor structure can therefore be used to measure the current state of affairs with regard to differentiated instruction from the teachers' perspective, covering both the philosophy and the praxis of teaching. As a result of this research, a validated model was derived and a more comprehensive notion of differentiated instruction and its key components was attained. In future, more unified conceptual research on the effectiveness of differentiated instruction can be performed. The results of such studies, accompanied by professional development programs, can help teachers to develop the competences needed to implement differentiated instruction in their classrooms.

References

Agoliotis, I., & Kalyva, E. (2011). A survey of Greek general and special education teachers' perceptions regarding the role of the special needs coordinator: Implications for educational policy on inclusion and teacher education. Teaching and Teacher Education, 27(3), 543–551.
Bade, J., & Bult, H. (1981). Differentiated instruction: Practical implications for the teacher. Nijkerk: Uitgeverij Intro.
Bandalos, D. L., & Boehm-Kaufman, M. R. (2009). Four misconceptions in exploratory factor analysis. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity, and fable in the organizational and social sciences. New York: Routledge.
Baumgartner, T., Lipowski, M. B., & Rush, C. (2003). Increasing reading achievement of primary and middle school students through differentiated instruction (Master's research). Available from Education Resources Information Center.
Belfi, B., Goos, M., De Fraine, B., & Van Damme, J. (2012). The effect of class composition by gender and ability on secondary school students' school well-being and academic self-concept: A literature review. Educational Research Review, 7(1), 62–74.
Berry, R. (2008). Assessment for learning. Hong Kong: Hong Kong University Press.
Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304.
Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.
Eisenberger, R., & Shanock, L. (2003). Rewards, intrinsic motivation and creativity: A case study of conceptual and methodological isolation. Creativity Research Journal, 15(2–3), 121–130.
Gijbels, D., Dochy, F., Van den Bossche, P., & Segers, M. (2005). Effects of problem-based learning: A meta-analysis from the angle of assessment. Review of Educational Research, 75(1), 27–61.
Glorfeld, L. W. (1995). An improvement on Horn's parallel analysis methodology for selecting the correct number of factors to retain. Educational and Psychological Measurement, 55, 377–393.
Hall, T. (2002). Differentiated instruction: Effective classroom practices report. National Center on Accessing the General Curriculum, US Office of Special Education Programs.
Hattie, J. (2005). The paradox of reducing class size and improving learning outcomes. International Journal of Educational Research, 43, 387–425.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Taylor & Francis Ltd.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55.
Kelly, A. V. (2009). The curriculum: Theory and practice. London: Sage Publications.
Kline, R. B. (2011). Principles and practice of structural equation modeling. New York: The Guilford Press.
Levering, B., & Smeyers, P. (2003). Learning to see education: An introduction to interpretative research. Amsterdam: Boom. Pallant, J. (2010). SPSS survival manual: A step by step guide to data analysis using SPSS for Windows, 4th ed. New York: Open University Press. Reis, S. M., McCoach, D. B., Little, C. A., Muller, L. M., & Kaniskan, R. B. (2011). The effects of differentiated instruction and enrichment pedagogy on reading achievement in five elementary schools. American Educational Research Journal, 48(2), 462–501. Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modelling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–337. Sivo, S. A., Fan, X. T., Witta, E. L., & Willse, J. T. (2006). The search for ‘optimal’ cutoff properties: Fit index criteria in structural equation modeling. The Journal of Experimental Education, 74(3), 267–289. Slavin, S. (1990). Achievement effects of ability grouping in secondary schools. Review of Educational Research, 60(3), 471–499. Smit, R., & Humpert, W. (2012). Differentiated instruction in small schools. Teaching and Teacher Education, 28(8), 1152–1162. Sousa, D. A., & Tomlinson, C. A. (2011). Differentiation and the brain: How neuroscience supports the learner – friendly classroom. Bloomington: Solution Tree Press. Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics, 5th ed. Boston: Pearson Education. Terwel, J. (2005). Curriculum differentiation: Multiple perspectives and developments in education. Journal of Curriculum Studies, 37(6), 653–670. Thurstone, L. L. (1947). Multiple factor analysis. University of Chicago Press. Tieso, C. (2005). The effects of grouping practices and curricular adjustments on achievement. Journal for the Education of the Gifted, 29(1), 60. Tomlinson, C. A., Brighton, C., Hertberg, H., Callahan, C. M., Moon, T. R., Brimijoin, K., et al. (2003). Differentiating instruction in response to student readiness, interest, and learning profile in academically diverse classrooms: A review of literature. Journal for the Education of the Gifted, 27, 119–145. Tomlinson, C. A. (1999). The differentiated classroom: Responding to the needs of all learners. Association for Supervision and Curriculum Development.
Tomlinson, C. A. (2001). How to differentiate instruction in mixed ability classrooms, 2nd ed. Alexandria, VA: Association for Supervision and Curriculum Development. Tomlinson, C. A. (2005). Grading and differentiation: Paradox or good practice? Theory Into Practice, 44(3), 262–269. Tomlinson, C. A., & Imbeau, M. B. (2010). Leading and managing a Differentiated Classroom. ASCD. Tomlinson, C. A. (2014). The differentiated classroom: Responding to the needs of all learners. ASCD. Vygotsky, L. S. (1978). Mind and society: The development of higher psychological processes. Cambridge: Harvard Education Press. Whitburn, J. (2001). Effective classroom organisation in primary schools: Mathematics. Oxford Review of Education, 27(3), 411–428. Woolfolk, A. (2010). Educational psychology. The Ohio State University: Pearson Education International. Catherine Coubers has a master in educational sciences and is doing a PhD research on the effectiveness of Differentiated Instruction in primary schools.
Katrien Struyven is a professor at the VUB Department of Educational Sciences. Her fields of expertise are Differentiated Instruction and assessment.
Nadine Engels is a professor at the VUB Department of Educational Sciences. Her fields of expertise are gender studies, teacher training programs and Differentiated Instruction.
Gert Vanthournout is a post-doctoral researcher. His fields of interest are motivational theories and teacher training programs.