The development, validation, and potential uses of the Student Interest-in-the-Arts Questionnaire

Studies in Educational Evaluation 39 (2013) 90–96
http://dx.doi.org/10.1016/j.stueduc.2013.01.001

Paul R. Brandon and Brian E. Lawton

Curriculum Research & Development Group, College of Education, University of Hawai‘i at Mānoa, Honolulu, HI, United States

Article history: Received 10 October 2012; received in revised form 10 January 2013; accepted 25 January 2013.

Abstract

The Student Interest-in-the-Arts Questionnaire was designed to measure elementary school students' interest in dance, drama, music, and the visual arts. We collected data providing evidence for reliability, content validity, construct validity, and convergent and discriminant validity. We describe the development of the method and the collection and analysis of the validity data. The brief instrument is easy to administer, fills a gap in the compendium of available instruments, and is useful in a variety of settings with a variety of research and evaluation designs. © 2013 Elsevier Ltd. All rights reserved.

Keywords: Program evaluation; Arts education; Student interest; Validation

Since the passing of the No Child Left Behind legislation at the beginning of the last decade, some education policy makers' attention to the arts has diminished, but other policy makers have maintained efforts to advance the role of the arts in schooling. For example, the U.S. Department of Education's Arts in Education Model Development and Dissemination (AEMDD) program has funded projects to integrate arts-based projects into elementary- and middle-school instruction. The program encourages the development and dissemination of projects and requires research and evaluation at all stages. Drawing on a considerable body of research showing a relationship between students' involvement in the arts and their academic achievement (e.g., Deasy, 2002; Hetland & Winner, 2004; Podlozny, 2000), a major AEMDD goal has been to improve student achievement. On the premise that students' interest in school can help improve student engagement, which has been shown to predict academic achievement (e.g., Caraway, Tucker, Reinke, & Hall, 2003; Finn & Rock, 1997; Wang & Holcombe, 2010), a secondary AEMDD goal is to increase students' interest and engagement in the arts.

Evaluators who examine student participation in the arts have few instruments for conducting their studies. Without such instruments, accompanied by evidence of reliability and validity, these and other researchers are left to their own methodological devices, potentially developing and using instruments that do not pass psychometric muster. This article describes the development and validation of a Likert-scale survey questionnaire, the Student Interest-in-the-Arts Questionnaire (SIAQ), and demonstrates its usefulness to evaluators examining elementary students' participation in the arts. We show the instrument in Appendix A.

We developed and administered the SIAQ during a three-year evaluation under a subcontract to an AEMDD grant awarded to the Hawaii Arts Alliance, a nonprofit organization. The goals of the project included improving student academic achievement, focusing primarily on language arts, and increasing student interest in the arts. Professional development providers from the Hawaii Arts Alliance trained elementary school teachers in workshops about how to use strategies for integrating drama, music, dance, and the visual arts (drawing and painting) into their core classroom instruction. They also provided follow-up mentoring sessions in the classroom.

The SIAQ is potentially useful in circumstances in which educators or researchers are examining students' interest in the arts in projects intended to improve student achievement, as well as in arts-education settings where increasing students' arts interest is an intended outcome. Our primary intended audience consists of evaluators, researchers, and art educators seeking an instrument like the SIAQ, with documentation of instrument reliability and validity. We also present the instrument as an instance of enhancing the effects of an evaluation beyond the setting in which it occurred. As Greenseid and Lawrenz (2011) stated, "emphasis on the development and dissemination of evaluation instruments is one way to increase the use and influence of evaluation efforts" (p. 404).

Theoretical foundation

An objective to improve students' interest in the arts is appropriate as a secondary goal of a program designed to improve student achievement because of the motivating nature of interest (Silvia, 2006) and its effect on engagement in learning and other tasks. Schraw and Lehman (2001) defined interest as "liking and willful engagement in a cognitive activity" (p. 23), manifested through how we allocate our attention. It has aspects of both cognition and emotion. Interest "plays an important part in the learning process," affecting what we choose to learn, how well we learn, and how much we learn; indeed, it results in "learning more than one would otherwise learn" (Schraw & Lehman, 2001, p. 23). Thus, interest is a mediator variable in a model of the effects of teaching on learning.

We based the development of the SIAQ on this theoretical foundation. When developing the instrument, we considered the various ways in which students could indicate interest in the four art forms with which teachers instructed their students in reading during the AEMDD project. We wrote items about the art forms in light of the young ages of the students and the degree to which they could be expected to participate in the arts. The affective aspect of the items is shown in their focus on students' liking, a key definitional component of the construct.

Development and pilot-testing

Item preparation

With these considerations in mind, we developed 26 affective items (seven for each of drama and dance and six for each of visual arts and music). The items addressed various aspects or manifestations of interest in the four art forms:

• Learning about the art form (two items for each art form). One item was about learning the art form in general, and one was about taking classes in the art form outside of school. We intended responses to these two items to provide an indication of the breadth of students' interest in learning about the art form.

• Participating in the art form (three items for each of drama and dance and two items for each of music and visual arts). The items addressed developing the art form (writing plays or making up dances), doing the art form (acting, dancing, playing music or singing, and drawing or painting), and observing the art form (watching plays and dancing, listening to music, and looking at drawings or paintings). We considered the degree to which students were likely to be involved in these steps of participation in an art form to be key indicators of interest. We wrote items about developing the art form only for drama and dance because we did not consider it likely that students in Grades 3–5 could write music or specify how to make drawings or paintings.

• Being happy participating in the art form (one item for each of the four art forms). Silvia (2006) reported that "in structural studies, interest and its synonyms load on a 'positive-affect' factor alongside happiness and its synonyms. . . . In everyday speech, people often use interest to refer to enjoyment or preference" (p. 25). We had these findings in mind when we included items about how happy the students were when participating in the art forms.

• Talking about the art form (one item for each of the four art forms). Aside from learning about an art form, participating in its various steps, and finding pleasure in participating in it, the one activity that remains is discourse about the art form. We speculated that student interest would be shown in part by their reports of whether they liked to talk about the art form.

The items went through multiple edits. Except for the items about learning the art form, we did not include multiple items about each aspect (which would have been desirable for enhancing reliability) because the instrument would have been excessively long for young students to complete.

Pilot-testing

All instrument development should include pilot tests for the purpose of gathering preliminary information and evidence about students' understanding of the wording of instructions and items, the feasibility of administration, the reliability of data collected with the instrument, and the extent to which the items show variation among students. We conducted three rounds of pilot-testing with students at the K–12 public charter laboratory school associated with our university and with Grade 3 students participating in the AEMDD project.

Round 1 of the pilot testing. The first draft of the SIAQ used a three-point scale in which the students circled face icons representing enjoy, neutral, and dislike, which we believed would be an appropriate design and provide a sufficient number of response options for the pupils. We administered the questionnaire to nine third-grade laboratory school students and examined the results for reliability and discrimination among students. Our focus was on Grade 3 because, at that point in the grant project, we were administering the instrument only to third-grade students; furthermore, we wanted to ensure that the instrument was appropriate for a group of young students throughout the project. We discussed the items with the students after they completed the questionnaire, and we calculated descriptive statistics. The variation in responses was small and showed a marked ceiling effect; therefore, we revised the items to use a 4-point Likert scale (1 = strongly disagree, 2 = somewhat disagree, 3 = somewhat agree, and 4 = strongly agree). The revised scale also provided a "don't know" option because some teachers in the AEMDD project were not using all four art forms in their instruction.

Round 2 of the pilot testing. We conducted the second round of pilot-testing with a group of eight third-grade laboratory school students. As in the first round, we discussed the items with the students and, based on their comments, reworded the items to improve consistency among items (e.g., I like talking about the art form, I like learning about the art form, and so forth) and to maximize students' attention to the art forms rather than to differences in item wording. We made no other changes to the instrument and prepared it for a pilot test with a larger group of students who were participating in the project.

Round 3 of the pilot testing. The first two rounds of pilot-testing were conducted with small groups of students who were conveniently available at times of our choosing and whose teachers donated class time for their students' discussion of the items with us after completing the instrument. We needed an additional round to examine how well the items performed with students participating in the AEMDD project before proceeding with the evaluation.
For these purposes, we targeted a larger group of students to help ensure the stability of the statistics. We did not have ready access to all grade levels of students in the project schools for pilot-testing purposes but obtained the agreement of project personnel in the largest school to pilot-test the SIAQ in one grade. We chose Grade 3 because we reasoned that we were most likely to find problems, if any, with the youngest students participating in the project. We administered the instrument to all 94 third-grade students in the selected school during the spring of the school year. We did not ask for student feedback and focused on examining internal consistency reliability. Coefficient alpha for the results was .91, showing high reliability. With these results, we were sufficiently confident to proceed with administering the instrument as part of the evaluation of the AEMDD project.
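For readers who want to reproduce this kind of check, a minimal sketch of the internal-consistency computation follows. It is not the authors' code, and the file and column layout are hypothetical, assuming one row per student and one column per item.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    items = items.dropna()                        # listwise deletion of incomplete records
    k = items.shape[1]                            # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)     # variance of students' total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical usage: 26 columns of 4-point responses, one row per student.
# responses = pd.read_csv("siaq_pilot_grade3.csv")
# print(round(cronbach_alpha(responses), 2))      # the article reports .91
```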

Table 1
SIAQ rotated factor pattern loadings for 26 items.

Item | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5
1. I like to watch plays | .630 | .045 | .069 | .013 | .263
2. I like to act in plays | .774 | .039 | .104 | .034 | .112
3. I like to help write plays | .664 | .077 | .163 | .052 | .348
4. I like talking about plays | .538 | .054 | .022 | .022 | .559
5. I like learning how to act in, or write, plays | .804 | .008 | .046 | .036 | .034
6. I take (or want to take) acting lessons outside of school | .709 | .194 | .105 | .062 | .074
7. Acting in or writing plays make me happy | .759 | .011 | .047 | .056 | .082
8. I like to watch dancing | .116 | .736 | .046 | .022 | .137
9. I like to dance | .016 | .887 | .028 | .051 | .005
10. I like to make up dances | .006 | .742 | .048 | .019 | .187
11. I like talking about dancing | .040 | .599 | .009 | .049 | .576
12. I like learning about dancing and how to dance | .046 | .840 | .004 | .025 | .026
13. I take (or want to take) dance lessons outside of school | .166 | .711 | .054 | .050 | .138
14. Dancing makes me happy | .016 | .731 | .126 | .087 | .028
15. I like to listen to music | .229 | .098 | .640 | .072 | .019
16. I like to play music or sing | .021 | .010 | .808 | .013 | .018
17. I like talking about music or singing | .024 | .015 | .630 | .009 | .464
18. I like learning about music or learning to play music or singing | .108 | .043 | .789 | .016 | .039
19. I take (or want to take) music lessons outside of school | .306 | .127 | .514 | .007 | .005
20. Listening to music, playing music, or singing makes me happy | .162 | .004 | .681 | .006 | .033
21. I like to look at drawings or paintings | .042 | .029 | .028 | .788 | .130
22. I like to draw or paint | .043 | .020 | .059 | .837 | .179
23. I like talking about drawing or painting | .037 | .017 | .015 | .638 | .431
24. I like learning about drawing or painting | .021 | .022 | .015 | .790 | .118
25. I take (or want to take) drawing or painting lessons outside of school | .183 | .101 | .105 | .685 | .050
26. Drawing or painting makes me happy | .028 | .041 | .034 | .852 | .025

Instrument validation

To make a case for the validity of data collected with an instrument is to present an argument that the proposed interpretation of the data is justifiable (Kane, 2006). Multiple sources of validity evidence can be presented when making the argument; rarely can all be addressed, of course, particularly when developing an instrument during the course of a small evaluation. Among others, the sources might include (a) the degree to which the instrument addresses the targeted content and to which its items are technically adequate (content validity), with reliability serving as one source of evidence of validity; (b) the extent to which the items address the overarching constructs underlying the instrument (construct validity); and (c) the degree to which scores on the instrument are correlated with scores on instruments measuring similar constructs and uncorrelated with scores on instruments measuring dissimilar constructs (convergent and discriminant validity, respectively). We address these sources of validity evidence in this article. Together, our findings from these sources provide confidence about using the SIAQ for the appropriate purposes.

Content validity evidence

Content validity is demonstrated primarily by showing evidence that items address the appropriate content. The four art forms that the SIAQ addresses (drama, dance, music, and the visual arts) were taught to teachers in the Hawaii Arts Alliance AEMDD project professional development. The items also address the various ways in which interest can be manifested: producing or developing an art form, observing or listening to it, talking about it, and expressing positive emotions about it.

Content validity is also demonstrated by providing evidence that the procedures for developing the instrument are technically adequate (Messick, 1989). In addition to item-writing procedures, this evidence includes pilot-testing. The pilot tests that we described in the previous section (a) focused on the appropriate age groups, (b) took place in multiple rounds to allow iterative improvement, (c) began with small groups to enable close attention to students' comments and feedback, (d) ended with a group large enough to conduct adequate reliability analyses, and (e) were conducted with time set aside for revisions between rounds.

Evidence for content validity can furthermore be supported with findings about the reliability of the data collected with an instrument. In addition to the internal consistency reliability analysis that we conducted in our third pilot test, we conducted a test–retest analysis by administering the 26-item instrument twice to 37 laboratory school students in Grades 2–5, with a weekend between administrations. (We included Grade 2 to increase the pool of available students.) The correlation between total scores for the two occasions was .93, suggesting high test–retest reliability. We also calculated coefficient alpha, by grade, for the 635 AEMDD project students who responded to all items of the instrument in the spring of the final year of the project (out of a total of 746 who completed the instrument in total or in part): for Grade 3 (N = 230), it was .90; for Grade 4 (N = 198), it was .90; and for Grade 5 (N = 207), it was .92. These reliability estimates indicate that the items were measuring concepts in a highly consistent manner.

Descriptive statistics for the items (4-point Likert scale) in the final year of the project, by grade, ranged from 2.51 to 3.85 in Grade 3, 2.01 to 3.86 in Grade 4, and 1.97 to 3.85 in Grade 5. Overall means (i.e., means of the item means, excluding students with missing data) for the three grades were 3.16 (SD = .35), 2.97 (SD = .44), and 2.84 (SD = .46), respectively.


Most of the results were negatively skewed; item skewness values ranged from .41 to −3.54, with a mean skewness of −.80 (SD = .82). The decline in mean ratings from Grade 3 to Grade 5 perhaps reflects a lessening effect of a social desirability bias on students' responses as they progress through the grades.

Construct validity evidence

The second aspect of validity that we addressed was construct validity. We accomplished this with an exploratory factor analysis, followed by a confirmatory factor analysis.

Exploratory factor analysis

With data on the 635 students with complete records in Grades 3–5 in the spring of the final year of the AEMDD project, we conducted an exploratory factor analysis (EFA). We used SAS PROC FACTOR to conduct a principal component analysis with the promax rotation method to examine the dimensionality of the data collected with the instrument. We chose the promax rotation based on the results of a series of studies showing that its solutions "more closely approximated the accepted solutions for classic factor analysis data sets than did other solutions" (Widaman, 2012, p. 371; italics in original). The factor pattern matrix results, which show the unique relationship between the item results and the factors and are interpreted like regression coefficients, suggest five factors, as shown in Table 1. The eigenvalues for the five factors were 9.13, 2.70, 2.01, 1.72, and 1.07. The first four of these factors mirrored the four art forms, with each set of items for an art form loading greater than .50 on a single factor. Only three of these loadings were less than .60. Four of the items (nos. 4, 11, 17, and 23), while loading high on the first four factors, also loaded between .43 and .57 on the fifth factor. These were the items that addressed talking about the art forms; of the four loadings, three were higher on one of the other four factors, and the fourth was only .02 higher than on another factor. The factor structure matrix results (not presented here), which show the correlation between the item and the factor and are affected by the correlation of the item with other factors, reflected the correlation among factors (as might be expected, given the high coefficient alpha for the entire instrument) but raised no red flags about our interpretation of the factor pattern matrix results.
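As a rough Python analogue of the analysis just described (the authors used SAS PROC FACTOR; the factor_analyzer package, the file name, and the column layout here are our assumptions, not part of the original study):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# One row per student, one column per SIAQ item (hypothetical file).
df = pd.read_csv("siaq_final_year.csv").dropna()

# Principal-component extraction with promax (oblique) rotation,
# mirroring the analysis described in the text.
fa = FactorAnalyzer(n_factors=5, rotation="promax", method="principal")
fa.fit(df)

eigenvalues, _ = fa.get_eigenvalues()
print(eigenvalues[:5].round(2))   # compare with the reported 9.13, 2.70, 2.01, 1.72, 1.07

# Factor pattern matrix (interpreted like regression coefficients).
pattern = pd.DataFrame(fa.loadings_, index=df.columns).round(2)
print(pattern)
```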

Table 2
SIAQ rotated factor pattern loadings for 22 items.

Item | Drama | Dance | Music | Visual arts
1. I like to watch plays | .627 | .062 | .119 | .072
2. I like to act in plays | .777 | .051 | .050 | .069
3. I like to help write plays | .729 | .099 | .110 | .089
4. I like learning how to act in, or write, plays | .826 | .001 | .033 | .039
5. I take (or want to take) acting lessons outside of school | .729 | .173 | .125 | .021
6. Acting in or writing plays make me happy | .761 | .003 | .058 | .051
7. I like to watch dancing | .127 | .752 | .066 | .055
8. I like to dance | .009 | .912 | .035 | .056
9. I like to make up dances | .007 | .750 | .025 | .012
10. I like learning about dancing and how to dance | .047 | .844 | .010 | .034
11. I take (or want to take) dance lessons outside of school | .175 | .723 | .013 | .014
12. Dancing makes me happy | .014 | .757 | .124 | .085
13. I like to listen to music | .254 | .118 | .641 | .072
14. I like to play music or sing | .032 | .017 | .814 | .034
15. I like learning about music or learning to play music or singing | .092 | .044 | .813 | .016
16. I take (or want to take) music lessons outside of school | .289 | .118 | .537 | .036
17. Listening to music, playing music, or singing makes me happy | .146 | .035 | .719 | .028
18. I like to look at drawings or paintings | .001 | .045 | .023 | .829
19. I like to draw or paint | .087 | .044 | .058 | .807
20. I like learning about drawing or painting | .041 | .022 | .015 | .817
21. I take (or want to take) drawing or painting lessons outside of school | .189 | .116 | .132 | .658
22. Drawing or painting makes me happy | .027 | .029 | .030 | .847

Because the eigenvalue for the fifth factor was low (1.04) and the factor was defined by four items that also loaded highly on the four factors defined by the art forms, we eliminated the four items from our analyses and conducted a second principal component analysis with promax rotation on the same 635 students who had completed every item. The results, given in Table 2, show a clear four-factor set of items. The eigenvalues for the four factors range from 7.74 to 1.54, with the next highest factor showing a value of .97. The four factors account for 62% of the total variance. The rotated factors clearly correspond to the four art forms and reflect a simple structure: The variables for each of the art forms show high factor pattern loadings on only their corresponding art form, with the loadings ranging from .50 to .91. Only five of the items show loadings less than .70 on the factor for their art form. All the other loadings are less than .20 (with the exception of one item at .289); only eight of these other loadings are greater than .10. Furthermore, using the empirically based criterion that "components with four or more loadings above .60 in absolute value are reliable, regardless of sample size" (Stevens, 2002, p. 395), the results show reliable factors.

Of the items' original foci (learning about the art form, participating in it, expressing positive emotions about it, and talking about it), talking might be interpreted as reflecting the least direct form of involvement in an art form. Interest in an art form is most strongly indicated by being enmeshed in activities involving the art form, not simply by talking about them. After all, a discussion about an art form might concern a lack of interest in it. With this interpretation, the elimination of the items about talking makes sense substantively.

Confirmatory factor analysis

With the clear factor structure that emerged from the EFA in mind, we conducted a confirmatory factor analysis (CFA), which adjusted the results for measurement error and allowed us to test a more parsimonious solution than in the EFA. We conducted the analysis with Mplus software on the full dataset of 746 students in Grades 3–5, including those with missing data. (Mplus imputes missing data.) Because the item data were skewed, we chose the robust weighted least squares method (as suggested by Brown, 2006). We specified the four factors that we had found in the EFA.
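A loose open-source sketch of such a CFA follows. The authors used Mplus with robust weighted least squares; the semopy package, its DWLS objective, the model syntax, and the item column names below are our assumptions and would at best approximate the published analysis.

```python
import pandas as pd
import semopy

# Four factors, each measured by the items retained after the EFA
# (talking items 4, 11, 17, and 23 excluded). Column names are hypothetical.
MODEL = """
drama  =~ item1 + item2 + item3 + item5 + item6 + item7
dance  =~ item8 + item9 + item10 + item12 + item13 + item14
music  =~ item15 + item16 + item18 + item19 + item20
visual =~ item21 + item22 + item24 + item25 + item26
"""

df = pd.read_csv("siaq_final_year.csv")
model = semopy.Model(MODEL)
model.fit(df, obj="DWLS")   # diagonally weighted least squares for skewed, ordinal-like items

stats = semopy.calc_stats(model)
print(stats[["CFI", "TLI", "RMSEA"]])   # the article reports .964, .959, and .062 (from Mplus)
```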


Table 3
Confirmatory factor analysis results.

Item | Estimate | SE | Est./SE | p | R²

Factor 1 (drama): composite reliability = .91; variance extracted estimate^a = .59
Item 1 | 0.699 | 0.025 | 27.912 | 0.000 | .488
Item 2 | 0.803 | 0.018 | 43.945 | 0.000 | .645
Item 3 | 0.640 | 0.028 | 23.015 | 0.000 | .410
Item 5 | 0.812 | 0.018 | 45.442 | 0.000 | .660
Item 6 | 0.812 | 0.021 | 38.081 | 0.000 | .659
Item 7 | 0.834 | 0.017 | 49.611 | 0.000 | .695

Factor 2 (dance): composite reliability = .93; variance extracted estimate^a = .70
Item 8 | 0.750 | 0.022 | 33.407 | 0.000 | .563
Item 9 | 0.893 | 0.014 | 63.934 | 0.000 | .797
Item 10 | 0.748 | 0.022 | 33.972 | 0.000 | .560
Item 12 | 0.863 | 0.015 | 56.373 | 0.000 | .744
Item 13 | 0.880 | 0.015 | 59.050 | 0.000 | .774
Item 14 | 0.887 | 0.013 | 67.628 | 0.000 | .787

Factor 3 (music): composite reliability = .80; variance extracted estimate^a = .61
Item 15 | 0.672 | 0.053 | 12.695 | 0.000 | .452
Item 16 | 0.768 | 0.025 | 30.623 | 0.000 | .590
Item 18 | 0.796 | 0.020 | 39.162 | 0.000 | .634
Item 19 | 0.874 | 0.021 | 42.393 | 0.000 | .763
Item 20 | 0.789 | 0.025 | 31.396 | 0.000 | .622

Factor 4 (visual arts): composite reliability = .92; variance extracted estimate^a = .70
Item 21 | 0.801 | 0.020 | 39.250 | 0.000 | .641
Item 22 | 0.829 | 0.021 | 39.288 | 0.000 | .688
Item 24 | 0.841 | 0.019 | 43.655 | 0.000 | .707
Item 25 | 0.832 | 0.021 | 39.618 | 0.000 | .692
Item 26 | 0.877 | 0.017 | 52.655 | 0.000 | .770

^a The ratio of the variance captured by the construct to the error variance. Missing items were removed based on the results of the EFA.

The values for three primary goodness-of-fit indicators produced by Mplus (the comparative fit index [CFI], the Tucker–Lewis index [TLI], and the root mean square error of approximation [RMSEA]) were .964, .959, and .062, respectively. These values suggest that the model fit the data well. None of the modification-index results for the factor structure indicated a more appropriate realignment of the items. We present the CFA results in Table 3. For each item within each of the four factors, the table shows the standardized factor loadings (labeled "estimates"), standard errors, and statistics for testing the null hypothesis that the coefficients equal zero. All the loadings were highly significant. The results provide good evidence for construct validity.

A recommended step in conducting CFAs is to calculate the reliability of factors. In Table 3, we show R² values for each item, indicating the item reliability (i.e., the proportion of variance in the item accounted for by the factor); the composite reliability, showing the reliability across all items for the factor; and the variance extracted estimates (Hatcher, 1994), showing the variance accounted for by the factors relative to the error variance. The composite reliabilities range from .80 to .93, all highly acceptable results, and the variance extracted values are all above .50, which Hatcher (1994) gave as a conservative minimum value. Together, these results suggest highly acceptable levels of reliability.
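The two factor-level indices have simple closed forms: composite reliability is (Σλ)² / [(Σλ)² + Σ(1 − λ²)], and, for standardized loadings, the variance extracted estimate reduces to the mean squared loading. A small sketch computes both from the Factor 1 loadings in Table 3 so the numbers can be checked against the article's:

```python
def composite_reliability(loadings):
    """(sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)   # 1 - lambda^2 = error variance per item
    return s ** 2 / (s ** 2 + error)

def variance_extracted(loadings):
    """Mean squared standardized loading across the factor's items."""
    return sum(l ** 2 for l in loadings) / len(loadings)

drama = [0.699, 0.803, 0.640, 0.812, 0.812, 0.834]   # Factor 1 estimates, Table 3
print(round(composite_reliability(drama), 2))        # ~.90 (the article reports .91)
print(round(variance_extracted(drama), 2))           # .59, matching Table 3
```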

Convergent and discriminant validity evidence

Convergent validity analyses show the extent to which scores on instruments measuring similar constructs are related; conversely, discriminant validity analyses show the extent to which data collected with instruments measuring disparate constructs are unrelated. The results on an affective instrument such as the SIAQ might be expected to correlate somewhat with other affective instruments about school but not with achievement instruments. To conduct convergent and discriminant validity analyses, we examined the correlation of the SIAQ total scores with total scores for a slightly revised 26-item version of the School Attitude Assessment Survey–Revised (McCoach & Siegle, 2003) (coefficient alpha = .91) and with scaled total scores for the reading and mathematics sections of the Hawaii State Assessment (the statewide test administered annually by the Hawaii Department of Education). The attitude scale focused specifically on reading and mathematics. Excluding all records with missing data on any item, our data set for the final year of the AEMDD project included 346 records.

Pearson correlation coefficients among the four sets of total scores are shown in Table 4. They show a statistically significant correlation of .37 between the SIAQ and attitude total scores and nonsignificant correlations of .08 between the SIAQ scores and each of the reading and mathematics scores. We interpret these results as suggesting that student affect about aspects of school is correlated with interest in the arts but that, as expected, interest in arts activities is not associated with student achievement. The correlations of the attitude scores with the reading and mathematics scores were significant (.24 and .30, respectively), results confirming the widely held notion that attitudes toward reading and mathematics are associated with achievement in those subjects. Thus, the SIAQ measures interest in participating in the arts apart from school academic success, while its scores converge, as expected, with a measure of affect toward aspects of school.
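A minimal sketch of these correlation analyses (the file and column names are hypothetical; each column holds one total score per student):

```python
import pandas as pd
from scipy.stats import pearsonr

scores = pd.read_csv("validity_totals.csv").dropna()   # listwise deletion, as in the article

for other in ["saas_total", "hsa_reading", "hsa_math"]:
    r, p = pearsonr(scores["siaq_total"], scores[other])
    print(f"SIAQ vs {other}: r = {r:.2f} (p = {p:.4f})")
```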

Table 4
Correlations among total scores (N = 347) on the Student Interest-in-the-Arts Questionnaire (SIAQ), School Attitude Assessment Survey (SAAS), and Hawaii State Assessment (HSA) reading and mathematics subtests.

Instrument | SIAQ | SAAS | HSA reading
SAAS | .37* | |
HSA reading | .08 | .24* |
HSA math | .08 | .30* | .81*

* Statistically significant at p < .0001.


Limitations and conclusions

Some issues and limitations concerning the instrument's structure and the results of the data collected with it should be noted. First, the extent to which the ordering of the items (items clustered by art form) affected students' responses is unknown. We ordered them by art form to diminish any chance of confusion among the young pupils; perhaps ordering them randomly would have diminished the relationship among items within the art forms. Second, the skewness of the distributions of scores is not desirable. We speculate that the instrument might be more appropriate for middle school students, when children are more discriminating about their likes and dislikes and have learned more about their capabilities. The younger the students, the more likely they are to respond in a socially desirable manner, providing answers designed to please their teachers rather than answers indicative of their actual opinions.

The results of this study provide evidence of the content validity (including reliability), construct validity, and convergent and discriminant validity of data collected with the SIAQ. The instrument development and validation followed careful procedures (constrained chronologically by their occurrence within the boundaries of an ongoing evaluation study, but complete in their totality by the end of the project); yielded reliable scores; showed a dimensionality conforming to the four art forms addressed; and produced scores that are associated with students' affect toward school but not with their reading or mathematics achievement. The findings presented in the article support the conclusion that the instrument (a) is appropriate for lower- and upper-elementary children, (b) can produce internally consistent results that are reliable across administrations, and (c) can produce data showing factors that measure clear features of student interest in the distinct art forms.

The instrument is brief, easy to administer, and presented in language appropriate for children in the elementary grades. It is useful for project evaluation or educational research on the use of the arts in the classroom, in a variety of designs and for projects addressing a variety of purposes. The instrument is freely offered in this article. All in all, it fills a gap in the small collection of instruments for examining the effects of the use of the arts in elementary school classrooms.

Appendix A. Student Interest-in-the-Arts Questionnaire

Instructions: For each statement below, please fill in one circle per row that best gives your opinion. If you strongly agree with the statement on the left, fill in the circle in this column; if you somewhat agree with the statement, fill in the circle in this column; if you somewhat disagree with the statement, fill in the circle in this column; and if you strongly disagree with the statement, fill in the circle in this column. Remember, only fill in one circle per row. Thank you!

[In the original article, the questionnaire items follow here in a grid with one response column per option, from strongly agree to strongly disagree.]


References

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford.
Caraway, K., Tucker, C. M., Reinke, W. M., & Hall, C. (2003). Self-efficacy, goal orientation, and fear of failure as predictors of school engagement in high school students. Psychology in the Schools, 40, 417–424.
Deasy, R. (Ed.). (2002). Critical links: Learning in the arts and student academic and social development. Washington, DC: Arts Education Partnership.
Finn, J. D., & Rock, D. A. (1997). Academic success among students at risk for school failure. Journal of Applied Psychology, 82, 221–234.
Greenseid, L. O., & Lawrenz, F. (2011). Using citation analysis methods to assess the influence of science, technology, engineering and mathematics education evaluations. American Journal of Evaluation, 32, 392–407.
Hatcher, L. (1994). A step-by-step approach to using SAS for factor analysis and structural equation modeling. Cary, NC: SAS Institute.
Hetland, L., & Winner, E. (2004). Cognitive transfer from arts education to non-arts outcomes: Research evidence and policy implications. In E. Eisner & M. Day (Eds.), Handbook on research and policy in art education (pp. 135–162). Reston, VA: National Art Education Association.
Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Westport, CT: American Council on Education/Praeger.
McCoach, D. B., & Siegle, D. (2003). The School Attitude Assessment Survey–Revised: A new instrument to identify academically able students who underachieve. Educational and Psychological Measurement, 63, 414–429.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: American Council on Education/Macmillan.
Podlozny, A. (2000). Strengthening verbal skills through the use of classroom drama: A clear link. The Journal of Aesthetic Education, 34, 239–275.
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13, 23–52.
Silvia, P. J. (2006). Exploring the psychology of interest. New York: Oxford University Press.
Stevens, J. P. (2002). Applied multivariate statistics for the social sciences. Mahwah, NJ: Erlbaum.
Wang, M., & Holcombe, R. (2010). Adolescents' perceptions of school environment, engagement and academic achievement in middle school. American Educational Research Journal, 47, 633–662.
Widaman, K. F. (2012). Exploratory factor analysis and confirmatory factor analysis. In H. Cooper (Ed.), APA handbook of research methods in psychology (Vol. 3, pp. 361–389). Washington, DC: American Psychological Association.