Computers & Education 142 (2019) 103650
An Arabic assessment tool to measure Technological Pedagogical and Content Knowledge
Hanaa Eid Alharbi
Department of Educational Technology, Faculty of Education, Taibah University, Medina, Saudi Arabia
E-mail address: [email protected]
https://doi.org/10.1016/j.compedu.2019.103650
Received 10 March 2019; Received in revised form 1 August 2019; Accepted 5 August 2019; Available online 8 August 2019
ARTICLE INFO

Keywords:
Secondary education
Teaching/learning strategies
Adult learning
Postsecondary education
Pedagogical issues

ABSTRACT
Issues associated with classroom technology integration have become an interest of various educational research communities worldwide. To guide research into teachers' integration of technology, several important theoretical frameworks have emerged in recent years, one of which is the Technological, Pedagogical and Content Knowledge (TPACK) conceptual model. However, current research on methodological assessments adopting this model has indicated the need for a valid and reliable Arabic instrument for assessing Arabic-speaking secondary preservice teachers' TPACK. Therefore, an Arabic survey instrument was designed and administered to 350 secondary preservice teachers studying in a postgraduate teacher preparation program. The final instrument includes 27 items grouped under seven factors representing the domains of the conceptual framework. Confirmatory factor analyses provided evidence that the designed Arabic instrument is a reliable and valid assessment for determining the TPACK of secondary preservice teachers in the context of the study location. Recommendations and implications are given for future research and practice.
1. Introduction

Over the past 15 years, the Technological Pedagogical and Content Knowledge (TPACK) model has been developed to articulate which domains of knowledge teachers should have to integrate technologies successfully and effectively into their educational practices (Graham, 2011; Herring, Koehler, & Mishra, 2016). A teacher's TPACK has been identified as the most significant predictor of the teacher's effective integration of educational technology (Al Harbi, 2014; Wu, 2013). Research in this field has emphasized adopting TPACK as the basis for guiding teacher education programs according to an understanding of how preservice teachers might develop this knowledge (Altun & Akyildiz, 2017). One valuable way this may be achieved is by providing preservice teacher educators with valid and reliable assessments of their students' TPACK, which offer insight into how well the preservice teachers are prepared to use technology effectively in their future practices.

To date, there is no instrument specifically designed for assessing Arabic-speaking secondary preservice teachers' TPACK across all subject domains. Most of the available instruments are designed for English-speaking teachers and are either restricted to a specific content area or targeted toward primary preservice teachers (Burgoyne, Graham, & Sudweeks, 2010; Schmidt et al., 2009; Zelkowski, Gleason, Cox, & Bismarck, 2013). Moreover, the development of most of the available instruments in previous studies has not been accompanied by evidence regarding reliability and validity (Kartal, Kartal, & Uluay, 2016; Koehler, Shin, & Mishra, 2012; Tsai, Koh, & Chai, 2016). The analytical review by Koehler et al. (2012) revealed that more than 60% of the reviewed studies provided no evidence of addressing methodological reliability and validity. Therefore, this paper aims to design and investigate the
validity and reliability of a self-reporting survey tool to assess Arabic-speaking secondary preservice teachers' TPACK that will be applicable to all subject domains. A valid and reliable instrument could provide insight into the strengths, weaknesses, and required improvements of a teacher preparation program. It is hoped that this research paper will contribute to filling the gap found in the related literature, as it is one of the first studies to focus specifically on Arabic secondary preservice teachers.

2. Literature review

Recent calls have emphasized that to create an advantageous teaching and learning experience that truly integrates technology, educators should move from thinking about technology as a separate instrument toward planning for technology uses that are truly integrated and focused on students' content- and process-related learning (Geisert & Futrell, 2000). More recently, relevant literature has emerged, and several theories on effective technology integration in education have been proposed. The TPACK framework was first introduced in 2005 and then developed through a five-year research program applying a design-based experimental research method (Koehler & Mishra, 2005, 2009). TPACK emphasizes considering technology-related knowledge together with the Pedagogical Content Knowledge framework (Shulman, 1986). In his framework, Shulman explained that knowing the content and knowing the pedagogy of teaching methods separately is not enough: there is an intersection, called pedagogical content knowledge, which addresses neither the knowledge of the content being taught alone nor the approach to teaching alone, but the knowledge of how a teacher teaches particular content (Shulman, 1986). This concept could explain why many teachers seem to know their content very well yet do not know how to teach it. Thus, teachers who know their content and know teaching strategies but do not know how to marry these domains together, through what is called pedagogical content knowledge, are not very effective. Therefore, in 2005, Koehler and Mishra argued that the requirement is not only pedagogical content knowledge but technological pedagogical content knowledge: there is a whole range of new knowledge related to technology that can merge with pedagogical and content knowledge in maximally effective ways to improve students' learning. In their framework, the authors propose that interrelationships exist among three main domains of knowledge. These are:
• Knowledge of using technologies - Technological Knowledge (TK);
• Knowledge of teaching strategies - Pedagogical Knowledge (PK); and
• Knowledge of subject matter - Content Knowledge (CK).

When these components intersect, four blended components emerge. These are:
• Knowledge of teaching subject matter - Pedagogical Content Knowledge (PCK);
• Knowledge of technologies for learning subject matter - Technological Content Knowledge (TCK);
• Knowledge of technologies for teaching strategies - Technological Pedagogical Knowledge (TPK); and
• The integrated knowledge of the intersection of all three main components - Technological Pedagogical and Content Knowledge (TPACK).
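For readers who later wish to handle instrument data programmatically, the seven knowledge domains can be represented as a simple mapping whose keys double as subscale codes. This is an illustrative sketch only; the structure and naming are assumptions of this example, not part of the framework or the instrument.

```python
# Illustrative only: the seven TPACK knowledge domains and short glosses,
# as defined in the framework above.
TPACK_DOMAINS = {
    "TK": "knowledge of using technologies",
    "PK": "knowledge of teaching strategies",
    "CK": "knowledge of subject matter",
    "PCK": "knowledge of teaching subject matter",
    "TCK": "knowledge of technologies for learning subject matter",
    "TPK": "knowledge of technologies for teaching strategies",
    "TPACK": "integrated knowledge of the intersection of all three main components",
}
```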
Since the publication of the TPACK framework, various measurements and approaches have been designed to examine teachers' TPACK (Tsai et al., 2016; Willermark, 2018). The most commonly used approach has involved self-reporting survey instruments. According to Willermark's (2018) review, nearly 72% of the reviewed articles were conducted via a self-reporting approach in which participants reported their confidence in, or perceptions of, TPACK. Typically, participants were required to rate, on a Likert-type scale, their level of agreement with statements regarding the use of educational technologies, and the instrument statements (items) were categorized into multiple subscales representing the TPACK model domains.

The most widely used self-reporting survey instrument has been "The Survey of Preservice Teachers' Knowledge of Teaching and Technology" (Schmidt et al., 2009). It was developed specifically to address the TPACK perceptions of preservice primary teachers, with a focus on the literacy, social studies, science, and mathematics content areas. It consists of 57 items across the seven subscales of TPACK, for instance, "I know how to assess student performance in a classroom" (PK) or "I have the technical skills I need to use technology" (TK). The survey was developed and revised through ongoing research during the 2008-2009 and 2009-2010 academic years, and the results demonstrated the reliability and validity of the tool for helping educators assess the TPACK development of preservice primary teachers. Similarly, Burgoyne et al. (2010) designed a 36-item survey to measure the TPACK self-efficacy of early childhood and elementary preservice teachers taking an introductory instructional technology course. The survey consisted of seven subscales representing the TPACK constructs and employed 6-point Likert-type response options ranging from "not confident at all" to "completely confident". Burgoyne et al. (2010) suggested that alterations be made to the survey items to increase its validity.

While these surveys were directly linked to the TPACK framework in that they included all seven subscales of TPACK, some surveys represented TPACK using different dimensions. For instance, Yurdakul et al. (2012) applied a transformative perspective to design the "TPACK-deep scale", which consisted of 33 statements structured into four components, named "design", "exertion", "ethics" and "proficiency", derived from TPACK model-centered dimensions. Moreover, some studies focused on a specific subject domain, resulting in slightly different scales. For example, Zelkowski et al. (2013) modified "The Survey of Preservice Teachers' Knowledge of Teaching and Technology" to assess 315 secondary mathematics preservice teachers' TPACK. They found that the TPACK, PK, TK, and CK domains were valid and reliable, yet the PCK, TPK and TCK domains remained difficult for participants to separate and self-report. The final version of their survey consisted of
22 statements restricted to the domains of TPACK, PK, TK, and CK. Additionally, some other TPACK surveys were designed specifically for science and social science. For example, an instrument named the "TPACK confidence scale", designed for measuring science teachers' confidence, considered only the four TPACK domains that involve technology in science education (TPACK, TCK, TPK and TK) (Graham et al., 2009). For social science, Akman and Güven (2015) designed a 55-item survey instrument to measure social science teachers' and teacher candidates' TPACK. The validity-reliability analysis revealed that the instrument was reliable and valid: seven constructs were obtained, and the reliability coefficient was found to be 0.98. At the same time, other instruments focused on a specific technology (Archambault & Crippen, 2009; Jang & Tsai, 2012; Lee & Tsai, 2010). For example, Archambault and Crippen (2009) designed a questionnaire for assessing online teachers' TPACK perceptions. In a similar effort, Lee and Tsai (2010) designed a TPCK-W questionnaire specific to web technology, while Jang and Tsai (2012) created an interactive whiteboard (IWB)-based TPACK scale to investigate the impact of using IWBs on teachers' TPACK.

Although there is a variety of measurements and approaches, most research on developed and adapted TPACK scales has been conducted in the USA. Therefore, many researchers have moved toward developing TPACK scales in different languages to measure teachers' knowledge in different settings and contexts. For instance, Sahin (2011) developed a TPACK survey to be applied in the Turkish setting; it consisted of the seven subscales of the TPACK framework for examining teachers' perceptions of their knowledge and was validated using a sample of 348 Turkish preservice teachers. Moreover, Yahui, Jang, and Chen (2015) designed a scale to assess two university physics instructors' TPACK development in two contexts (one in Taiwan and the other in China) from the perspective of their students. The scale contained 33 items with four subscales, namely, "Technology Integration and Application", "Instructional Representation and Strategies", "Knowledge of Students' Understandings", and "Subject Matter Knowledge". The results of their study indicated that the teaching characteristics found differed between the two contexts.

In the Middle East, very little research has been conducted on teachers' TPACK via self-reporting questionnaires. Alqurashi, Gokbel, and Carbonara (2017) conducted a comparative analysis of the TPACK levels of Saudi in-service K-12 teachers and higher education instructors and of their counterparts teaching in the USA. Adopting the instrument of Archambault and Crippen (2009), Alqurashi et al. (2017) reported that differences in TPACK levels were found between the two groups when controlling for educational level and years of teaching experience. With regard to preservice teachers, two studies were found that explored primary school preservice teachers' perceptions of their TPACK (Alblaihed, 2016; Khine, Ali, & Afari, 2017). Both adopted "The Survey of Preservice Teachers' Knowledge of Teaching and Technology" (Schmidt et al., 2009); Alblaihed's (2016) study took place in Saudi Arabia, while that of Khine et al. (2017) was conducted in the United Arab Emirates.
Overall, this review has shown that no study to date has designed an Arabic survey tool for addressing secondary preservice teachers' TPACK that can be applied to any subject domain. Many of the previous scale instruments have been designed for K-6 teachers and include different subject domains. For instance, the survey of Schmidt et al. (2009) was designed based on the original dimensions of TPACK, with CK covering four main subject areas: mathematics, science, social studies and literacy. This tool cannot be applied to any sample other than K-6 teachers, and when it is adapted to a specific content area, it is difficult to obtain the same constructs of the TPACK model, as the tool addresses each of the four content areas with only three items, and these may shift toward constructs other than those of the original tool. Given the gap in extant research, the current study aimed to design and validate an Arabic survey tool for addressing the TPACK of Arabic secondary preservice teachers that can be applied to any subject domain. The proposed research question was: Is the designed Arabic TPACK instrument reliable and valid for addressing the TPACK of Arabic secondary preservice teachers in the context of the study location?

3. Method

The study presented in this research paper applied a quantitative methodology to answer the above research question.

3.1. Instrument development

A self-reporting survey instrument was developed to address the TPACK of Arabic secondary preservice teachers. The instrument covers the seven subdomains of TPACK (Koehler & Mishra, 2009). An initial pool of knowledge items (n = 60) was generated based on an extensive literature review (Akman & Güven, 2015; Burgoyne et al., 2010; Kartal et al., 2016; Lux, Bangert, & Whittier, 2011; Sahin, 2011; Schmidt et al., 2009; Yahui et al., 2015; Zelkowski et al., 2013). The items were classified by the researcher under the appropriate subdomains based on the definitions of the TPACK model, as illustrated at the beginning of section 2. To establish content and face validity, the classified items were evaluated in terms of clarity, accuracy, and content relevance by the researcher and a panel of two experts who were faculty members from different subject areas with experience in teaching with educational technology. Based on the feedback obtained, some items were modified, or deleted if they duplicated the meaning of other items. This resulted in an instrument comprising 28 items (four items for each subdomain) that were judged to be relevant and important for preservice teachers to know. Two sample items for each subscale are presented in Table 1.

The response options were built on a 5-point Likert scale, chosen because it offers the opportunity to measure knowledge using a range of response options from one extreme to the other. Moreover, for the purpose of data analysis, Likert-type scale responses can be treated as interval data when there are at least five categories (Hair, 2010). The response options were "not applicable at all", "rarely applicable", "moderately applicable", "largely applicable" and "totally applicable", scored from zero ("not applicable at all") to four ("totally applicable"), giving a total score between 0 and 112, with higher scores reflecting greater TPACK knowledge.
Table 1
Sample items of the TPACK survey.

Subscale   Sample items
CK         I have a deep and wide understanding of my subject matter.
           I know how theories or principles of the subject have been developed.
TK         I have enough knowledge about using different technologies (e.g., computers, interactive whiteboard, tablets).
           I can learn technology easily.
PK         I am aware of possible student learning difficulties and misconceptions.
           I know how to adapt my teaching style to different learners with individual differences.
PCK        I know how to select appropriate and effective teaching strategies for my content area.
           I know how to prepare a comprehensive lesson plan that includes attractive activities and different materials.
TPK        I know how to use technologies to motivate students to want to learn in the classroom.
           I know how to use technologies to help in assessing student learning.
TCK        I know how to use area-specific software applications and websites to understand content in my subject area.
           I know how to use technologies to represent content related to my subject area.
TPACK      I know how to use technology in a way that supports content area exploration and learning among my students.
           I know how to select appropriate technologies to improve students' learning of a topic that is difficult for students to understand.
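As a concrete companion to the scoring scheme described before Table 1, the sketch below computes a respondent's total score. The function name and data layout are hypothetical; only the 0-4 scoring of the 28 items and the 0-112 total come from the instrument description above.

```python
# Hypothetical scoring helper for the 28-item instrument described above.
SCALE = {
    "not applicable at all": 0,
    "rarely applicable": 1,
    "moderately applicable": 2,
    "largely applicable": 3,
    "totally applicable": 4,
}

def total_score(responses):
    """responses: the 28 option labels chosen by one respondent."""
    assert len(responses) == 28, "the instrument has 28 items"
    return sum(SCALE[r] for r in responses)  # ranges from 0 to 112
```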
3.2. Translation and pilot study

The developed instrument statements were translated from English into Arabic by three bilingual educators. The statements were then pretested and checked for appropriateness with 20 preservice teachers in Saudi Arabia. Based on the feedback obtained, the wording of some statements was slightly changed to resolve cultural ambiguities.

3.3. Population and sample

The study population was defined as female secondary preservice teachers enrolled in a teacher preparation program at Taibah University in Saudi Arabia. These students were bachelor's degree graduates from different departments. The sample size was determined, based on an a priori power analysis, to be 350. Power analysis was chosen because it considers the significance level, the adequate level of power and the effect size. A sample of 350 was considered sufficient for conducting confirmatory factor analysis, as it exceeded the minimum sample size required for a power of 0.80 with a medium effect size and a significance level of 0.05. To achieve acceptable power in social sciences research, studies should be designed to detect at least medium effect sizes (Crano, Brewer, & Lac, 2015); a medium effect size was considered sufficient to balance the need for a statistically significant difference in mean values without demanding that the effect be large. Using Statistica software, the minimum sample size required for the statistical power threshold of 0.8 was calculated as 279, while for a sample size of 350 the statistical power is 0.9 (0.9 > 0.8). Therefore, a sample size of 350 participants was determined to be sufficient for the current study. Random sampling techniques were applied to draw a representative sample.

3.4. Procedure and ethical considerations

Permission for students' participation was obtained from their course instructors. Upon the instructors' approval, the final draft of the survey was distributed, along with an informed consent form, to the students during regular lecture time. The research aim was explained to the students, and their consent for voluntarily completing the survey was then obtained. They were assured of confidentiality and that their responses would be kept anonymous and used only for research purposes. The completed surveys and consent forms were returned directly to the researcher.

3.5. Data analysis

The obtained data were subjected to reliability and validity analysis using the Statistical Package for the Social Sciences (SPSS) v.25 and IBM SPSS Amos v.23. The selection of these programs was based on the guidelines suggested by Ormrod and Leedy (2018). After the data were entered, data screening was conducted to check for missing values and to assess the normality of the distribution. The database was then checked for outliers. To detect possible univariate outliers, standardized scores (z-scores) were calculated and examined, with cases having a z-score above 4 considered univariate outliers. All z-scores were within acceptable limits, indicating that no univariate outliers existed in the data. The Mahalanobis distance was then inspected to check for possible multivariate outliers.
Using the critical value of the Mahalanobis distance, χ²(28) > 56.9, p < .001, 11 multivariate outliers were identified.
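The two screening steps just described (z-scores against a cutoff of 4, and Mahalanobis distances against the χ²(28) critical value at p < .001) can be reproduced with standard tools. A minimal sketch, assuming the item responses are held in a NumPy array with one row per respondent and one column per item:

```python
import numpy as np
from scipy import stats

def screen_outliers(X, z_cut=4.0, alpha=0.001):
    """Return indices of univariate (|z| > z_cut) and multivariate outliers."""
    z = np.abs(stats.zscore(X, axis=0))
    univariate = np.where((z > z_cut).any(axis=1))[0]

    # Squared Mahalanobis distance of each case from the centroid.
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

    # For 28 items, chi2.ppf(0.999, 28) ~= 56.89, matching the 56.9 in the text.
    critical = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    multivariate = np.where(d2 > critical)[0]
    return univariate, multivariate
```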
Outliers may significantly distort statistical analyses such as correlations, as they can result in both Type I and Type II errors (Tabachnick & Fidell, 2013); in fact, the absence of outliers is a required assumption in common statistical procedures. For instance, in analytical procedures based on the correlation coefficient, such as factor analysis, the existence of outliers in a dataset may lead to improper solutions (Bollen, 1987; Brown, 2015). Therefore, the researcher decided to eliminate the outlier cases from the database, leaving 339 cases for analysis.

In addition to establishing content validity (as described in section 3.1), the researcher performed construct validity analysis, the main purpose of which was to examine the instrument structure and the extent to which it measured the hypothesized TPACK domains (Mills & Gay, 2018). Therefore, confirmatory factor analysis (CFA) was performed, with a CFA model built on the underlying theory of TPACK and prior research. The variance-covariance matrix was analyzed through maximum likelihood estimation. Multiple model fit indices were assessed to determine acceptable model fit. Following the guidelines and recommendations provided in Brown (2015), the criteria for acceptable fit were: the incremental fit index (IFI), the Tucker-Lewis index (TLI), the comparative fit index (CFI) and the goodness-of-fit index (GFI) ≥ 0.90; the ratio of the chi-square (χ²) to the degrees of freedom (df) ≤ 3.00; the root mean square error of approximation (RMSEA) ≤ 0.05; and the adjusted goodness-of-fit index (AGFI) ≥ 0.85. Moreover, two important model parameters were examined: regression weights and measurement errors. Regression weights (or factor loadings) show how strongly an item is associated with the corresponding factor, while the measurement error (or unique variance) is the amount of variance in the item that is not explained by the factors (Kline, 2016).

The Cronbach's alpha (α) coefficient and the composite reliability (CR) index were then estimated to test the reliability; both values should be 0.7 or greater to indicate satisfactory construct reliability (Hair, 2010). Finally, construct validity was investigated by examining the degree to which the groups of scale items shared a high amount of variance (Hair, 2010; Tabachnick & Fidell, 2013). This is known as convergent validity, in which all items load significantly on the expected factor (Fornell & Larcker, 1981). It can be demonstrated by factor loadings and average variance extracted (AVE) values of at least 0.5, along with established composite reliability. AVE values less than but near 0.5 can be accepted only if the composite reliability is satisfactory (Fornell & Larcker, 1981).
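The model-fit thresholds listed in this section can be codified in a small helper for transparency. This is a sketch of ours, not the study's procedure; for illustration it is applied to the initial-model values later reported in section 4.3, where only the GFI falls short of its criterion.

```python
# Acceptance criteria from Brown (2015) as listed above.
LOWER_BOUNDS = {"IFI": 0.90, "TLI": 0.90, "CFI": 0.90, "GFI": 0.90, "AGFI": 0.85}
UPPER_BOUNDS = {"chi2/df": 3.00, "RMSEA": 0.05}

def acceptable_fit(indices):
    """Return a pass/fail verdict for each reported fit index."""
    verdict = {k: indices[k] >= v for k, v in LOWER_BOUNDS.items() if k in indices}
    verdict.update({k: indices[k] <= v for k, v in UPPER_BOUNDS.items() if k in indices})
    return verdict

# Initial-model values from section 4.3: GFI = 0.885 < 0.90 fails,
# which is why model revision was preferred.
initial = {"chi2/df": 1.861, "CFI": 0.935, "TLI": 0.926, "IFI": 0.936,
           "GFI": 0.885, "AGFI": 0.859, "RMSEA": 0.050}
print(acceptable_fit(initial))
```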
4. Results

This section is divided into five subsections: (1) participants' demographics, (2) assumptions of CFA, (3) model fit assessment, (4) model modification and (5) construct validity and reliability.

4.1. Participants' demographics

As indicated in section 3.3, all 339 participants in this study were female. Most of the participants (323; 95.3%) were aged from 20 to 30 years, while only five participants (1.5%) were aged from 30 to 40 years and 11 participants (3.2%) did not indicate their age. All participants were bachelor's degree graduates enrolled in a teacher preparation program. Their teaching areas of specialization varied: Islamic studies (14.2%), Arabic (18.3%), mathematics (13.3%), science (13.6%), English (14.7%), social science (5.6%), computer science (12.4%) and family sciences (5.6%).

4.2. Assumptions of CFA

For this study, and as illustrated in the previous sections, the data were derived from a sample of 339 cases, a sufficient size for CFA (Pallant, 2016; Tabachnick & Fidell, 2013). Moreover, the designed instrument was treated as yielding interval variables on a 5-point scale. Using SPSS v.25, principal components analysis was conducted on all of the instrument items. The Kaiser-Meyer-Olkin value was 0.912, and Bartlett's Test of Sphericity was statistically significant (p < .05). Moreover, most of the item coefficients in the correlation matrix were 0.3 and above. Consequently, the data in this study were suitable for factor analysis.

4.3. Model fit assessment

A CFA model was specified and estimated using IBM SPSS Amos v.23. The initial CFA results demonstrated a relatively adequate model fit. The χ² was 612.138 with df = 329 and p < .001, significant as expected given the large sample size (339). The ratio χ²/df = 1.861 is below three, indicating a good fit (Kline, 2016). The model was further evaluated against other fit indices (CFI = 0.935, TLI = 0.926, IFI = 0.936, GFI = 0.885, AGFI = 0.859, and RMSEA = 0.050); because the GFI fell below the 0.90 criterion, model revision was preferred.

4.4. Model modification

To improve the model, the standardized residual covariance matrix was inspected for localized areas of strain. It revealed that item TCK4 yielded extremely high values, exceeding 1.4, indicating that this item was not explained well by the proposed model, so the item was discarded. Several examinations of the impact of individual modification indices were also conducted; each time, the largest estimate was considered and accepted only if the items were reasonably related and shared the same factor. These modifications led to a modified model (see Fig. 1) with χ² = 501.919, df = 299, χ²/df = 1.679, CFI = 0.951, IFI = 0.952, TLI = 0.943, GFI = 0.903, AGFI = 0.877, and RMSEA = 0.045. Consequently, the modified model indicated a relatively acceptable overall fit.
Fig. 1. The modified CFA model from Amos v.23.
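Readers wishing to replicate the confirmatory analysis outside Amos could specify the same seven-factor measurement model in open-source software. A minimal sketch using the semopy package follows; the package choice, file name, and output handling are assumptions of this example, since the study itself used IBM SPSS Amos v.23 with maximum likelihood estimation.

```python
# pip install semopy pandas
import pandas as pd
import semopy

# Seven-factor CFA specification (lavaan-style syntax); TCK4 is omitted,
# mirroring the modified model described in section 4.4.
MODEL_DESC = """
TK =~ TK1 + TK2 + TK3 + TK4
PK =~ PK1 + PK2 + PK3 + PK4
CK =~ CK1 + CK2 + CK3 + CK4
PCK =~ PCK1 + PCK2 + PCK3 + PCK4
TPK =~ TPK1 + TPK2 + TPK3 + TPK4
TCK =~ TCK1 + TCK2 + TCK3
TPACK =~ TPACK1 + TPACK2 + TPACK3 + TPACK4
"""

data = pd.read_csv("tpack_responses.csv")  # hypothetical file: one column per item
model = semopy.Model(MODEL_DESC)
model.fit(data)                    # maximum likelihood estimation by default
print(semopy.calc_stats(model).T)  # chi2, df, CFI, TLI, GFI, AGFI, RMSEA, ...
```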
4.5. Construct validity and reliability

The results of the CFA were used to obtain evidence of the construct reliability and validity of the designed survey. As seen in Table 2, the α values ranged between 0.78 (for CK) and 0.84 (for TPK, TCK, and TPACK), and the calculated CR indices ranged between 0.78 (for CK) and 0.84 (for TCK). All reliability measurements exceeded the minimum of 0.7, indicating satisfactory construct reliability. With regard to convergent validity, the factor loadings from each of the seven factors to the items ranged from 0.61 to 0.86, with the majority reaching or exceeding 0.7, and the AVE values were all in the acceptable range. These results indicate that all items loaded sufficiently on the corresponding factor, suggesting high convergent validity. Taken together, the findings demonstrate that the designed Arabic instrument is reliable and valid for addressing the TPACK of Arabic secondary preservice teachers in the context of the study location.
Table 2
Results of validity and reliability tests.

Construct  Item     Factor loading  AVE   Cronbach's alpha (α)  Composite reliability (CR)
TK         TK1      0.64            0.50  0.81                  0.80
           TK2      0.63
           TK3      0.75
           TK4      0.80
PK         PK1      0.67            0.55  0.82                  0.83
           PK2      0.73
           PK3      0.79
           PK4      0.78
CK         CK1      0.61            0.47  0.78                  0.78
           CK2      0.61
           CK3      0.77
           CK4      0.73
PCK        PCK1     0.77            0.53  0.81                  0.82
           PCK2     0.72
           PCK3     0.75
           PCK4     0.66
TPK        TPK1     0.76            0.55  0.84                  0.83
           TPK2     0.82
           TPK3     0.70
           TPK4     0.68
TCK        TCK1     0.75            0.63  0.84                  0.84
           TCK2     0.82
           TCK3     0.81
TPACK      TPACK1   0.76            0.56  0.84                  0.83
           TPACK2   0.65
           TPACK3   0.86
           TPACK4   0.71
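The CR and AVE figures in Table 2 follow directly from the standardized loadings, so they can be recomputed as a check. A minimal sketch (the function names are ours); running it on the TK loadings reproduces the tabled values of roughly 0.80 and 0.50:

```python
import numpy as np

def composite_reliability(loadings):
    """CR from standardized loadings; error variance = 1 - loading**2."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE is the mean squared standardized loading."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

tk = [0.64, 0.63, 0.75, 0.80]  # TK loadings from Table 2
print(round(composite_reliability(tk), 2),        # -> 0.8
      round(average_variance_extracted(tk), 2))   # -> 0.5
```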
5. Discussion and conclusion

The literature review highlighted the critical need for continued efforts to examine and refine practical applications of the TPACK framework. It is believed that working toward improving the TPACK framework would enable its application in supporting teacher preparation for technology implementation in education (Abbitt, 2011). When the body of literature was reviewed, different methodological approaches to studying teachers' TPACK were found. Among the reviewed approaches, no validated Arabic survey instrument was available for measuring secondary preservice teachers' TPACK that could be applied to any subject domain. Additionally, validated instruments for addressing teachers' TPACK in general are still lacking (Kartal et al., 2016; Koehler et al., 2012).

In this study, an Arabic self-reporting survey instrument was developed, and its validity and reliability were tested in an Arabic-speaking population. The instrument is organized into seven subscales representing the TPACK factors, as has been done in other studies (Chai, Koh, & Tsai, 2010; Schmidt et al., 2009). The seven factors of the TPACK model were successfully confirmed through confirmatory factor analysis among 339 secondary preservice teachers from Saudi Arabia. The construct validity and reliability were confirmed, with 27 items loading adequately on the corresponding factors and all reliability measurements exceeding the minimum of 0.7.

Validating an Arabic TPACK survey for secondary preservice teacher education affords researchers and teacher educators a diagnostic assessment that can be applied as a tool for assessing students' knowledge and identifying their learning needs. Moreover, the instrument can be helpful in evaluating and improving courses related to technology integration in education by providing an understanding of which knowledge domains need attention. Being able to improve students' experiences within teacher education courses based on knowledge assessment is one strategy for maximizing teachers' effectiveness in their use of educational technology. That is, if educators can assess how knowledge changes based on assessment data and students' feedback, further actions to support better and more meaningful learning experiences can be taken.

In terms of construct reliability and validity, the current findings support the work of similar studies (Akman & Güven, 2015; Albion, Jamieson-Proctor, & Finger, 2010; Chai, Ng, Li, Hong, & Koh, 2013). On the other hand, they were not aligned with other studies (e.g., Zelkowski et al., 2013) that reported unsatisfactory convergent validity. This could be explained in at least two ways. First, the preservice teachers participating in the current study were bachelor's degree graduates with different majors enrolled in a teacher preparation program that requires not only a degree but also a relatively high GPA to compete for admission. Erdogan and Sahin (2010) reported a significant relationship between teacher candidates' achievement levels and their TPACK. According to Sahin, Akturk, and Schmidt (2009), TPACK is linked to self-efficacy beliefs, as it requires preservice teachers to have confidence in combining different domains of knowledge effectively, and preservice teachers' self-efficacy is significantly related to their achievement (Erdogan & Sahin, 2010). Second, the current study was conducted near the end of the preparation program, when the students were about to finish their one-year in-school field training, during which they were encouraged to incorporate educational technologies to support classroom practices.
According to Cai and Deng (2015), involving students in teaching training may have a positive impact on their ability to identify TPACK factors, while some studies (Lee & Tsai, 2010; Su, Huang, Zhou, & Chang, 2017) have pointed out that the structural relationships among teachers' TPACK factors may be affected if teachers lack proper training. Both of the above explanations are hypothetical, and further studies are required to confirm and validate them by specifically considering the potential aspects affecting the TPACK factor structure.
Although the Arabic survey instrument validated in the current study is believed to be a useful tool for filling the gap in educational technology research in Arab countries, the study has certain limitations. The generalizability of the results may be limited to a specific population, as the sample was collected from one region in Saudi Arabia. Researchers may replicate the study to validate the instrument with more diverse samples from different regions of Saudi Arabia or other Arab countries, and further research with a larger sample may reveal further variations in practice, enabling greater validation and generalization of the findings. This study was also limited to Saudi female students; therefore, studying differences in the validation of the Arabic TPACK survey between male and female students is recommended to determine whether the results obtained in this study would also hold for male students. Finally, given that most of the available scale instruments have been designed for K-6 teachers' TPACK, the Arabic TPACK survey could be translated into other languages and tested in different education settings.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Acknowledgements

A very special thank-you goes to each of the participants in this study who took the time to complete the instrument.

References

Abbitt, J. T. (2011). Measuring technological pedagogical content knowledge in preservice teacher education: A review of current methods and instruments. Journal of Research on Technology in Education, 43(4), 281–300.
Akman, Ö., & Güven, C. (2015). TPACK survey development study for social sciences teachers and teacher candidates. International Journal of Research in Education and Science, 1(1), 1–10. https://doi.org/10.21890/ijres.97007
Al Harbi, H. E. M. (2014). An examination of Saudi high school teachers' ICT knowledge and implementation (Doctoral dissertation). Brisbane, Australia: Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/78462/
Albion, P., Jamieson-Proctor, R., & Finger, G. (2010). Auditing the TPACK confidence of Australian pre-service teachers: The TPACK confidence survey (TCS). Paper presented at the Society for Information Technology & Teacher Education International Conference, Chesapeake, VA. https://www.learntechlib.org/primary/p/33969/
Alblaihed, M. A. (2016). Saudi Arabian science and mathematics pre-service teachers' perceptions and practices of the integration of technology in the classroom. Exeter, UK: University of Exeter.
Alqurashi, E., Gokbel, E. N., & Carbonara, D. (2017). Teachers' knowledge in content, pedagogy and technology integration: A comparative analysis between teachers in Saudi Arabia and United States. British Journal of Educational Technology, 48(6), 1414–1426.
Altun, T., & Akyildiz, S. (2017). Investigating student teachers' technological pedagogical content knowledge (TPACK) levels based on some variables. European Journal of Education Studies, 3(5), 467–485. https://doi.org/10.5281/zenodo.555996
Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1), 71–88.
Bollen, K. A. (1987). Outliers and improper solutions: A confirmatory factor analysis example. Sociological Methods & Research, 15(4), 375–384. https://doi.org/10.1177/0049124187015004002
Brown, T. A. (2015). Confirmatory factor analysis for applied research. New York: Guilford Publications.
Burgoyne, N., Graham, C. R., & Sudweeks, R. (2010). The validation of an instrument measuring TPACK. Paper presented at the Society for Information Technology & Teacher Education International Conference, Chesapeake, VA. Retrieved from https://www.learntechlib.org/primary/p/33971/
Cai, J., & Deng, F. (2015). Study on technological pedagogical content knowledge (TPACK): The latest development and trend. Modern Distance Education Research, 3, 9–18.
Chai, C.-S., Koh, J. H.-L., & Tsai, C.-C. (2010). Facilitating preservice teachers' development of technological, pedagogical, and content knowledge (TPACK). Journal of Educational Technology & Society, 13(4), 63–73.
Chai, C.-S., Ng, E. M., Li, W., Hong, H.-Y., & Koh, J. H.-L. (2013). Validating and modelling technological pedagogical content knowledge framework among Asian preservice teachers. Australasian Journal of Educational Technology, 29(1), 41–53. https://doi.org/10.14742/ajet.174
Crano, W., Brewer, M., & Lac, A. (2015). Principles and methods of social research. New York: Routledge.
Erdogan, A., & Sahin, I. (2010). Relationship between math teacher candidates' technological pedagogical and content knowledge (TPACK) and achievement levels. Procedia - Social and Behavioral Sciences, 2(2), 2707–2711. https://doi.org/10.1016/j.sbspro.2010.03.400
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104
Geisert, P. G., & Futrell, M. K. (2000). Teachers, computers, and curriculum: Microcomputers in the classroom (3rd ed.). Boston: Allyn and Bacon.
Graham, C. R. (2011). Theoretical considerations for understanding technological pedagogical content knowledge (TPACK). Computers & Education, 57(3), 1953–1960. https://doi.org/10.1016/j.compedu.2011.04.010
Graham, C. R., Burgoyne, N., Cantrell, P., Smith, L., St Clair, L., & Harris, R. (2009). TPACK development in science teaching: Measuring the TPACK confidence of inservice science teachers. TechTrends, 53(5), 70–79. https://doi.org/10.1007/s11528-009-0328-0
Hair, J. F. (2010). Multivariate data analysis: A global perspective. Upper Saddle River, NJ: Pearson Education.
Herring, M. C., Koehler, M. J., & Mishra, P. (2016). Handbook of technological pedagogical content knowledge (TPACK) for educators. Florence, KY: Routledge.
Jang, S.-J., & Tsai, M.-F. (2012). Exploring the TPACK of Taiwanese elementary mathematics and science teachers with respect to use of interactive whiteboards. Computers & Education, 59(2), 327–338. https://doi.org/10.1016/j.compedu.2012.02.003
Kartal, T., Kartal, B., & Uluay, G. (2016). Technological pedagogical content knowledge self-assessment scale (TPACK-SAS) for pre-service teachers: Development, validity and reliability. International Journal of Eurasia Social Sciences, 7(23), 1–36.
Khine, M. S., Ali, N., & Afari, E. (2017). Exploring relationships among TPACK constructs and ICT achievement among trainee teachers. Education and Information Technologies, 22(4), 1605–1621. https://doi.org/10.1007/s10639-016-9507-8
Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). New York: The Guilford Press.
Koehler, M. J., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research, 32(2), 131–152. https://doi.org/10.2190/0EW7-01WB-BKHL-QDYV
Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70.
Koehler, M. J., Shin, T. S., & Mishra, P. (2012). How do we measure TPACK? Let me count the ways. In R. N. Ronau (Ed.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches (pp. 16–31). Hershey, PA: IGI Global.
Lee, M.-H., & Tsai, C.-C. (2010). Exploring teachers' perceived self efficacy and technological pedagogical content knowledge with respect to educational use of the World Wide Web. Instructional Science, 38(1), 1–21. https://doi.org/10.1007/s11251-008-9075-4
Lux, N. J., Bangert, A. W., & Whittier, D. B. (2011). The development of an instrument to assess preservice teachers' technological pedagogical content knowledge. Journal of Educational Computing Research, 45(4), 415–431. https://doi.org/10.2190/EC.45.4.c
Mills, G. E., & Gay, L. R. (2018). Educational research: Competencies for analysis and applications (12th ed.). Hudson, NY: Pearson.
Ormrod, J., & Leedy, P. (2018). Practical research: Planning and design (12th ed.). New York, NY: Pearson.
Pallant, J. (2016). SPSS survival manual: A step by step guide to data analysis using IBM SPSS (6th ed.). NSW: Allen & Unwin.
Sahin, I. (2011). Development of survey of technological pedagogical and content knowledge (TPACK). Turkish Online Journal of Educational Technology - TOJET, 10(1), 97–105.
Sahin, I., Akturk, A. O., & Schmidt, D. A. (2009). Relationship of preservice teachers' technological pedagogical content knowledge with their vocational self-efficacy beliefs. In C. D. Maddux (Ed.), Society for Information Technology & Teacher Education International Conference (pp. 293–301). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149.
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.
Su, X., Huang, X., Zhou, C., & Chang, M. (2017). A technological pedagogical content knowledge (TPACK) scale for geography teachers in senior high school. Education & Science, 42(190). https://doi.org/10.15390/eb.2017.6849
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Harlow, United Kingdom: Pearson Education Limited.
Tsai, C.-C., Koh, J. H. L., & Chai, C. S. (2016). A review of the quantitative measures of technological pedagogical content knowledge (TPACK). In M. C. Herring, M. J. Koehler, & P. Mishra (Eds.), Handbook of technological pedagogical content knowledge (TPACK) for educators (2nd ed., pp. 97–116). New York: Routledge.
Willermark, S. (2018). Technological pedagogical and content knowledge: A review of empirical studies published from 2011 to 2016. Journal of Educational Computing Research, 56(3), 315–343. https://doi.org/10.1177/0735633117713114
Wu, Y. T. (2013). Research trends in technological pedagogical content knowledge (TPACK) research: A review of empirical studies published in selected journals from 2002 to 2011. British Journal of Educational Technology, 44(3), E73–E76. https://doi.org/10.1111/j.1467-8535.2012.01349.x
Yahui, C., Jang, S.-J., & Chen, Y.-H. (2015). Assessing university students' perceptions of their Physics instructors' TPACK development in two contexts. British Journal of Educational Technology, 46(6), 1236–1249. https://doi.org/10.1111/bjet.12192
Yurdakul, I. K., Odabasi, H. F., Kilicer, K., Coklar, A. N., Birinci, G., & Kurt, A. A. (2012). The development, validity and reliability of TPACK-deep: A technological pedagogical content knowledge scale. Computers & Education, 58(3), 964–977. https://doi.org/10.1016/j.compedu.2011.10.012
Zelkowski, J., Gleason, J., Cox, D. C., & Bismarck, S. (2013). Developing and validating a reliable TPACK instrument for secondary mathematics preservice teachers. Journal of Research on Technology in Education, 46(2), 173–206. https://doi.org/10.1080/15391523.2013.10782618