Brief Report
Social science as a tool in developing scientific thinking skills in underserved, low-achieving urban students

Elizabeth Jewett, Deanna Kuhn *

Teachers College, Columbia University, New York, NY 10027, USA

* Corresponding author. E-mail address: [email protected] (D. Kuhn).
Keywords: Underserved students; Scientific thinking; Problem-based learning; Observational learning; Social science problem content; Disadvantaged students
Abstract

Engagement in purposeful problem solving involving social science content was sufficient to develop a key set of inquiry skills in low-performing middle school students from an academically and economically disadvantaged urban public school population, and this skill transferred to a more traditional written scientific thinking assessment instrument 3 weeks later. Students who only observed their peers' activity, or who did not participate at all, failed to show these gains. Implications are addressed with regard to the mastery of scientific thinking skills among academically disadvantaged students. Also addressed are the efficacy of problem-based learning and the limits of observational learning.

© 2015 Elsevier Inc. All rights reserved.
Introduction

Originating with the classic work of Inhelder and Piaget (1958), research on the development of scientific thinking now has a long history in developmental psychology (for reviews, see Kuhn, 2011; Lehrer & Schauble, 2015; Moshman, 2011; Zimmerman, 2007). It has been devoted largely to the "control of variables" (COV) strategy, although COV is only one aspect of authentic scientific practice (Kuhn & Arvidsson, 2015; Lehrer & Schauble, 2015). The educational relevance of this line of work is considerable. The COV strategy appears in the Next Generation Science Standards (2013) from the middle grades through high school as a key understanding that all students should achieve.
[email protected] (D. Kuhn). http://dx.doi.org/10.1016/j.jecp.2015.10.019 0022-0965/Ó 2015 Elsevier Inc. All rights reserved.
Argumentation also now figures prominently in the standards, with valid (nonconfounded) comparisons a key building block of sound scientific arguments (Ford, 2012; Kuhn, 2010).

We focused here on a low-SES (socioeconomic status), low-achieving middle school population, one in which researchers have found it difficult to develop even rudimentary scientific thinking skills, in contrast to their success with more privileged students (Kuhn & Dean, 2008; Lorch et al., 2010; Siler, Klahr, Magaro, Willows, & Mowery, 2010). Among the multiple challenges these low-achieving students face, a formidable one may be a failure to recognize the point and purpose of scientific practices. The approach we introduce here is novel in its use of social science as a potential "hook" for engaging students in science as a practice and enabling them to appreciate its purpose and power (Lehrer & Schauble, 2015). Such topics are ones students know something about, but they likely do not know that these topics are the stuff of science. What better way, then, to get them to see science's power and relevance?

The method we employ has the further appeal of involving an authentic problem to be solved. The process, often called inquiry learning, is largely self-directed and exploratory (what will be learned is not known in advance) and encompasses investigation, inference, and argumentation (Duschl, 2008). Each of these entails further specific skills (Kuhn, 2011, in press; Zimmerman, 2007), and these skills must be coordinated in the common case in which multiple variables affect an outcome (Kuhn, Ramsey, & Arvidsson, 2015). We focused here on a skill foundational to effective inquiry: the generation of informative evidence. In the most common multivariable context, this requires selecting for comparison instances equated on all other known variables, allowing a valid inference regarding the effect of a focal variable. In the absence of this control of variables strategy, faulty inferences are likely, typically when outcome differences are attributed to an uncontrolled variable rather than to their true cause.

Our intervention method fulfills the criteria for problem-based learning (PBL) as well as inquiry learning. Studies of PBL have largely addressed declarative knowledge acquisition (Hmelo-Silver, 2004). We asked here whether PBL is also effective in developing the procedural skills that enable and support declarative knowledge acquisition. Thus, we asked students to acquire not specific content knowledge ("right answers" to the problems posed) but rather effective procedures for obtaining answers. The problem remains ill-structured; no direct instruction is given as to how to address it. The instructional method is guided inquiry; a coach remains present and makes comments and suggestions without explicitly directing students' activity. We followed the PBL model in designing a problem that students would find realistic and engaging, an important characteristic for low-performing students, who typically exhibit low levels of engagement in school settings, and we afforded them ample time to engage deeply with the problem.

Thus, we asked here whether social science content, in conjunction with a PBL instructional method, can help to develop key scientific inquiry skills in a low-achieving population unlikely to master such skills without extended intervention, in contrast to more advantaged students (Kuhn & Dean, 2008; Siler et al., 2010).
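To make the controlled-comparison logic just described concrete, the following minimal sketch (ours, not part of the original study materials; all names and levels are illustrative) checks whether two multivariable records permit a valid inference about a focal variable:

```python
# Minimal sketch of the control of variables (COV) inference logic:
# two records support a valid inference about a focal variable only if
# they differ on that variable and match on every other known variable.
# Factor names are hypothetical illustrations, not the study's materials.
FACTORS = ["teen_unemployment", "poverty_rate", "population_density", "dropout_rate"]

def is_controlled_comparison(record_a, record_b, focal):
    """Return True if the records differ on `focal` and are equated on
    all other known factors, permitting a valid causal inference."""
    others = [f for f in FACTORS if f != focal]
    return (record_a[focal] != record_b[focal]
            and all(record_a[f] == record_b[f] for f in others))

town_1 = {"teen_unemployment": "high", "poverty_rate": "low",
          "population_density": "high", "dropout_rate": "high"}
town_2 = {"teen_unemployment": "low", "poverty_rate": "low",
          "population_density": "high", "dropout_rate": "high"}

# Valid: the towns differ only on the focal variable.
print(is_controlled_comparison(town_1, town_2, "teen_unemployment"))  # True
# Invalid: the towns do not differ on dropout rate, and an uncontrolled
# factor (teen unemployment) differs between them.
print(is_controlled_comparison(town_1, town_2, "dropout_rate"))       # False
```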
Following Pease and Kuhn (2011) and Wirkala and Kuhn (2011), we compared individual and group conditions, allowing us to establish whether the characteristic PBL small-group format is essential to the method's effectiveness. Finally, we added a third instructional condition, one in which learning occurs only vicariously, to determine whether the active dimension is essential, an important question when procedural knowledge is at stake.

Method

Participants

Participants were 79 sixth- and seventh-grade students (38 girls) attending an urban public school. Most (73%) were African American, and nearly all of the remainder were Latino. More than 65% qualified for free or reduced-price lunch. The large majority functioned below grade level; on standardized assessments, 6.5% of students at the school were classified as proficient (Level 3) in English language arts and 4.9% in math. All 117 students in the sixth and seventh grades took a written pretest that assessed mastery of the COV skill. The 88 students selected to participate were those whose scores indicated no mastery of the skill. Of these, 9 did not complete the intervention and were excluded, leaving the final sample of 79.
Design

Participants were randomly assigned to one of four conditions:

Individual problem solvers: These participants worked on the PBL problem with the guidance of an adult coach during three sessions of approximately 30 min each. The interval between sessions averaged 3 days (range = 1 to 5 days).

Team problem solvers: These participants worked together in groups of three under the same conditions as the first group.

Observers: These students silently observed a participant in the individual condition.

Control participants: These students participated in the assessments but not the intervention.

Procedure

Participants were asked to help solve a real-world problem. Juvenile delinquency ("teen crime") was chosen as a topic familiar to students and one they had ideas about. We sought to engage students' interest on the grounds that this was not a problem to be solved as a school assignment but rather one that was important to address in the real world. The coach introduced herself as coming from the nearby university and belonging to a group that had been hired to help address a rise in teen crime that a town was experiencing. The town leaders had made unsuccessful attempts to lower the crime rate and had realized that, to address the problem, they first needed to determine what makes a difference to teen crime. The coach explained that she was soliciting students' help. A slide presentation dramatizing the problem accompanied this introduction.

Students worked on the problem for three sessions over a 1- to 2-week period. They were asked to determine which of four dichotomous variables make a difference to an index of teen crime, and they were provided records from different towns to examine. Although the towns were fictional, the data were authentic (adapted from available databases and simplified). Records consisted of 16 8½ × 11-inch sheets, one for each town. Each sheet indicated the town's status on four factors possibly related to teen crime: teen unemployment, poverty rate, population density, and school dropout rate. Students were introduced to each of the variables and the way in which it was represented, and vocabulary was explained as needed. Information regarding that town's rate of teen crime appeared in the middle of the sheet but was initially concealed by a paper cover that could be lifted to reveal it. Each factor was depicted both in words and by an icon. For example, the icon for teen unemployment was a young man holding a sign reading "Needs Job"; just one person was shown to depict the low level and several people to depict the high level. All 16 possible combinations of the levels of the four variables were included. Three of the variables bore a relation to the teen crime outcome: teen unemployment, population density, and dropout rate. Poverty rate did not covary with crime levels.

In the team condition, the three participants decided together which of the four factors to investigate first. The only suggestion made was that they work on one factor at a time. Once they had decided, the coach said, "I'm going to be doing some work over here. Why don't you all start to see if it does or doesn't make a difference. If you have questions, let me know. Also, if you think you have found out something, let me know." The instruction was similar in the individual condition, omitting reference to other students.
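As a rough illustration of how such materials could be assembled, the sketch below generates all 16 factor combinations. The outcome rule is our assumption, for illustration only; it is constrained solely by the article's statement that unemployment, density, and dropout rate covaried with teen crime while poverty rate did not:

```python
# Hypothetical reconstruction of the materials: 16 town records covering
# all 2^4 combinations of four dichotomous factors, each with a concealed
# teen crime outcome. The crime_index rule is an illustrative assumption.
from itertools import product

FACTORS = ["teen_unemployment", "poverty_rate", "population_density", "dropout_rate"]
CAUSAL = ["teen_unemployment", "population_density", "dropout_rate"]  # poverty rate is noncausal

def crime_index(record):
    # Illustrative outcome: the count of causal factors at their high level.
    return sum(record[f] == "high" for f in CAUSAL)

records = []
for levels in product(["low", "high"], repeat=len(FACTORS)):
    record = dict(zip(FACTORS, levels))
    record["teen_crime"] = crime_index(record)  # initially concealed on the sheet
    records.append(record)

print(len(records))  # 16 towns, one per sheet
```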
In the team condition, if an individual student approached the coach saying that he or she had found something, the coach asked the student to present it to the group first, to make sure that everyone agreed. When called over, the coach asked students to explain and justify any conclusions they had reached. If the evidence was inadequate to justify the conclusion (either a single record or an uncontrolled comparison of two records), the coach sought to heighten students' recognition of its inadequacy by suggesting alternative explanations, for example, "But couldn't it maybe be the _____ that's causing the different outcomes?" She would then ask the student(s) to search for better evidence, evidence that would prevent someone from making the argument she had just suggested (i.e., that an uncontrolled variable was in fact the cause). If students had presented a controlled comparison, the coach made a similar countersuggestion ("Couldn't it be _____ that's causing the difference?"), but in this case
students could dismiss it by noting that the suggested factor had remained constant and, hence, could not have caused the difference. Students were asked not to move on to another variable until they had completed work on the one they had chosen. They resumed work in the same manner when they returned for the second and third sessions. The number of variables investigated during a session varied, but the large majority of students had investigated all of the variables by the time the three sessions ended. If students changed their focus and found evidence regarding another variable, the coach asked them to remember it for later.

Students in the observer condition were told, "Your job is that of a reporter or a secretary. You are not going to be solving the problem, but you are going to be reporting on how the problem was solved. Therefore, while you watch _____ work on the problem, ask the following questions: What did he/she find? How did he/she find this out?"

Assessments

Post-intervention interview

A post-intervention interview occurred at the end of the third session or during a fourth session within a few days. Only students in the team and individual conditions participated, each interviewed individually. Only the set of records, in randomized order, was available to them. Students were asked to choose two variables, one that they had discovered made a difference to the teen crime rate and one that did not, and then to use the records to show how they knew they were correct. For each variable, the coach asked the following:

1. What thing (variable) are you looking at?
2. Does it make a difference?
3. How do you know?
4. What do you think would happen in this town if (the variable) went down? (pointing to a town with a high level of the variable)
5. What do you think would happen in this town if (the variable) went up? (pointing to a town with a low level of the variable)
6. If someone in the town were to come to you and say, "We want to start a program to deal with (the variable) because we think it will make teen crime in our town go down," what would you say?

The final question (Q6) was included to assess most explicitly students' ability to construct and convey an argument justifying their conclusions, a skill we expected the intervention to strengthen, at least in the team condition, where students needed to discuss their conclusions and justify them to one another. Students who did not present a controlled comparison as their justification were asked additional questions after answering Q3, before being asked the next question. If students had presented a confounded comparison in response to Q3, they were asked, "What if someone were to argue that they don't think that it was X [student's choice] that made a difference and that it was actually Y? What would you tell them?"

Pre- and posttest assessment

All students were administered a paper-and-pencil assessment of COV mastery twice: once during the month preceding the intervention and again 3 weeks (within a few days) after completion of the intervention. Testing was administered not by the coach or the authors but rather by an employee of the school. No connection was made between these assessments and the intervention activity; thus, the assessment served as a test of far transfer from the intervention activities and post-intervention interview. The pretest was administered to students in their classrooms; questions were read aloud while students followed along. Because students required quite different lengths of time to complete the pretest, the school staff requested that the posttest be administered in small groups outside the classroom. These groups averaged five students (range = two to eight). The 16-question test included several types of items. Content varied, but all questions were designed to assess understanding and mastery of COV. In some questions, students were asked to
choose which of four experimental designs would test whether a variable made a difference. In others, they read about a character's experiment and chose which of several conclusions could be drawn. One two-part question made the maximum possible score 17.

Results

Post-intervention interview

Two variables were discussed during the interview: one variable the student had concluded made a difference to the outcome (the causal variable) and one the student had concluded did not (the noncausal variable). For the latter, all students chose to discuss poverty rate. For each variable, a level of skill was assigned based on the ordinal scale in Table 1; in assigning a level, the response to Q3 was most critical. One quarter of the interviews were coded by a second coder, and interrater reliability was adequate, kappa = .694 (95% confidence interval [CI] = .394 to .994), p < .005.

Most students achieved at least Level 4 for both variable types, but fewer achieved Level 5 for a noncausal variable (Table 2). Of the 14 students whose levels differed for causal and noncausal variables, 13 scored higher for the causal variable (Wilcoxon signed rank test, z = 2.67, p = .008). Most commonly, a student achieved Level 5 for the causal variable but succumbed to the final countersuggestion regarding the noncausal variable (Q6), deciding that the variable could have causal efficacy after all.
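For readers who wish to see how statistics of the kind reported in this section are computed, the following hedged sketch runs an interrater kappa and a Wilcoxon signed-rank comparison using standard library routines; all ratings below are simulated for illustration and are not the study's data:

```python
# Sketch of the two interview-coding statistics: interrater agreement
# (Cohen's kappa) and a Wilcoxon signed-rank test comparing performance
# levels for causal vs. noncausal variables. Data are simulated.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import wilcoxon

coder_1 = [5, 5, 4, 3, 5, 2, 4, 5, 5, 3]   # ordinal levels assigned by coder 1
coder_2 = [5, 5, 4, 3, 4, 2, 4, 5, 5, 3]   # same interviews, coder 2
# weights='linear' would yield a weighted kappa for ordinal codes.
print(cohen_kappa_score(coder_1, coder_2))

causal_levels    = [5, 5, 5, 4, 5, 3, 5, 5]  # per-student level, causal variable
noncausal_levels = [4, 5, 4, 4, 5, 2, 4, 5]  # same students, noncausal variable
stat, p = wilcoxon(causal_levels, noncausal_levels)  # paired, nonparametric
print(stat, p)
```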
Table 1
Performance levels for causal and noncausal variables in the post-intervention interview.

Level 0. Justified by belief only, with no reference to evidence.
Example: "Dropout rate makes a difference because when you don't go to school you start to get bored and then you do bad things."

Level 1. Justified by reference to a single case or incorrect interpretation of multiple cases.
Example: "If you look here (one case), you see that dropout rate is low and teen crime is high, so we know that dropout rate makes a difference. [How do you know it is dropout rate and not one of the other things that also differs?] There's more dropout rate here than here, so it makes a difference."

Level 2. Justified by uncontrolled comparison of cases, without recognition of alternative conclusions.
Example: "Here, population density is low and teen crime is low. Over here, population density is high and teen crime is high. [How do you know it is population density and not one of these other things that also differs?] Because when you live close together, you get mad at people and you commit crimes."

Level 3. Justified by uncontrolled comparison of cases, but with recognition of alternative conclusions.
Example: "If you look here, population density is low and teen crime is low. Over here, population density is high and teen crime is high. [How do you know it is population density and not something else?] It could also be dropout rate because that is going up too."

Level 4. Justified by controlled comparison of cases, but with inconsistent interpretation.
Example: "Over here poverty rate is low, but here it is high, and nothing else is different. And teen crime doesn't change. So poverty rate does not make a difference. [What would happen if poverty rate went up?] Crime would stay the same because we know that it doesn't make a difference. [If one of the town leaders told you they were going to start a program to change the poverty rate to make teen crime go down, would you think that was a good idea?] Yes, I would say it would be a good idea."

Level 5. Justified by controlled comparison of cases, with consistent interpretation.
Example: "Here poverty rate is low, but here it's high, and everything else is the same. Teen crime doesn't change. So poverty rate doesn't make a difference. [What would happen if poverty rate went up?] Teen crime would stay the same because poverty rate doesn't make a difference. [If one of the town leaders told you they were going to start a program to change the poverty rate to make teen crime go down, would you think that was a good idea?] No, it would not be a good idea for teen crime. It would be good, but not for teen crime, because poverty rate doesn't make a difference."
Table 2
Interview performance by condition and variable type.

             Causal variable           Noncausal variable
Level        Individual    Team        Individual    Team
0            0             0           0             0
1            0             0           0             0
2            6             10          6             15
3            6             15          0             10
4            6             0           44            25
5            82            75          50            50

Note. Figures show the percentage of students at each level of performance. n = 18 in the individual condition; n = 20 in the team condition.
Table 3
Posttest performance by condition.

Group        n     Mean    Standard deviation
Control      15    3.20    2.57
Observer     14    3.43    2.56
Individual   19    8.32    2.85
Team         20    8.80    4.36
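As a simplified, hypothetical illustration of how group differences such as those in Table 3 might be tested, the sketch below runs a one-way ANOVA on pretest-to-posttest gain scores; this is a stand-in that probes the same time-by-condition interaction as the mixed-model ANOVA reported in the next subsection, not the analysis the authors ran. Group sizes match Table 3, but all scores are simulated:

```python
# Simplified stand-in for the reported mixed-model ANOVA: a one-way ANOVA
# on gain scores (posttest minus pretest). Gain values are simulated for
# illustration only; they are not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
gains = {
    "control":    rng.normal(0.2, 2.5, 15),   # per-student gain scores
    "observer":   rng.normal(0.4, 2.5, 14),
    "individual": rng.normal(5.0, 3.0, 19),
    "team":       rng.normal(5.5, 4.0, 20),
}
F, p = f_oneway(*gains.values())  # tests equality of mean gains across groups
print(F, p)
```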
Differences between the individual and team conditions were nonsignificant: for the causal variable, Mann–Whitney U = 163, z = .698, p = .485, and for the noncausal variable, U = 163.5, z = .528, p = .59.

Pre- and posttest written assessments

Eleven students were not available to complete both the pretest and the posttest and were excluded from this analysis. The median pretest score across groups was 3 (of a possible 17), with the large majority of scores in the 0 to 6 range and a highest score of 9. There were no significant pretest differences across groups, F(3, 64) = 1.541, p = .250. Posttest performance by group is summarized in Table 3. A mixed-model analysis of variance (ANOVA) showed an interaction between testing occasion and condition, F(3, 64) = 12.861, p = .001, partial η² = .376. Post hoc comparisons showed no significant difference between the team and individual groups (D = .51, SE = .716, p = .892) or between the observer and control groups (D = .55, SE = .830, p = .913). There were differences between the team and control groups (p = .001), the individual and control groups (p = .012), the team and observer groups (p = .001), and the individual and observer groups (p = .002). Only the individual group (p < .01) and the team group (p < .001) showed significant change across time.

An association was evident between the posttest interview and the written assessment. Dividing students into a full mastery group on the interview (n = 20), a partial mastery group (Level 5 for the causal variable, Level 4 for the noncausal variable; n = 10), and a negligible mastery group (n = 8), mean written posttest scores were 10.85, 7.50, and 4.88, respectively. Within the team condition, at least one student on every team developed full mastery, the most frequent pattern being one student showing full mastery and the others partial mastery. Although the numbers are too small for statistical comparison, the teams that collectively showed greater mastery also showed greater individual gains on the written posttest.

Discussion

The results reported here are noteworthy in the context of several distinct literatures, but we begin with the central one: their implications for the development of scientific thinking. The current findings indicate that deep engagement with meaningful problem content can foster key scientific thinking skills even among the most disadvantaged students.
Particularly important is the link between two skills: controlled comparison and argumentation. Students were motivated to make controlled comparisons in the service of arguments they wished to advance; that is, they engaged their intellects in the context of a purpose they could understand as worthwhile. This purposeful engagement fostered skill development sufficient to successfully address the problem at hand, as the interview data attest. Especially noteworthy regarding this success is its demonstration that this population is by no means incapable of mastering higher order intellectual skills. These children's typically poor academic achievement is not indicative of limitations in their ability to reason at a high level, nor does it indicate that direct instruction and rote learning should be the focus of their education. Deep engagement in inquiry that students found meaningful and purposeful arguably is key to the achievements we observed. Further research nonetheless remains necessary to identify the critical components of the intervention at a more microscopic level.

Sustained engagement and practice involving the COV skill made the skill robust enough to be maintained and later manifested, for many (but not all) participants, in the more traditional academic context of a paper-and-pencil assessment. This transfer, however, was far from complete. In contrast to the interview context, where the large majority of intervention participants displayed mastery, attainment on a parallel task in more conventional paper-and-pencil form was more variable. Many of the students in this sample were hampered by very low reading levels, and although the questions were read aloud, many were not accustomed to extracting meaning from written text and relied largely on other forms of communication. Many students, we observed, performed particularly poorly where the posttest required constructing a written response. Hence, further research is necessary to determine the extent to which failure to achieve full mastery on the written task should be attributed to cognitive challenges versus the challenges of the test format.

A second literature to which the current results are pertinent is that on problem-based learning. The current results show that PBL can be effective in the attainment of procedural as well as declarative knowledge, a finding that extends its significance. Although the mechanisms are not certain, the effectiveness of the method has been attributed to the fact that knowledge acquisition is purposeful and the knowledge is contextualized; as a result, students activate prior knowledge, creating associations that make new knowledge more meaningful and retrieval pathways that make it more accessible (Hmelo-Silver, Duncan, & Chinn, 2007; Wirkala & Kuhn, 2011). Our results are consistent with earlier findings regarding declarative knowledge acquisition in showing that it is the problem-based component, rather than the social component, that is key to the efficacy of PBL (Pease & Kuhn, 2011; Wirkala & Kuhn, 2011). It is the problem itself that makes the activity meaningful and gives learning a purpose. It is also significant that the procedural knowledge gained in the context of a social science problem transferred to the more traditional science content of the written posttest.
There is little reason to expect that science process skills such as COV would not transfer widely, given that they are not bound to particular content, and rigorous evidence of transfer of such skills across physical and social science topics has begun to appear (Iordanou, 2010; Iordanou & Constantinou, 2015). Such transfer is critical if the proposal we advance, that social science problem content is a promising tool in developing scientific thinking skills, is to be viable.

Broad transfer and consistent effects across content nonetheless leave many questions regarding the mechanisms involved in PBL. One of these is activity level, for which there exists another distinct educational literature (Chi, 2009). Our findings with respect to the observer group are consistent with the bulk of this literature in indicating that observation alone does not achieve the cognitive benefits of active participation (Muldner, Lam, & Chi, 2014). In a college population, Muldner and colleagues (2014) found that dialogue between observing students enhanced the benefit. Intellectual collaboration, however, is itself a learned skill (Kuhn, 2015), and without sufficient skill development children may fail to benefit from it. Until we learn more about the kind of skill needed for such benefit to accrue, maximizing direct engagement appears to be the better course if we wish to maximize achievement.

Acknowledgment

We thank Toi Sin Arvidsson and Aaron Hawn for their contributions to this work.
References

Chi, M. (2009). Active–constructive–interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1, 73–105.
Duschl, R. (2008). Science education in three-part harmony: Balancing conceptual, epistemic, and social learning goals. Review of Research in Education, 32, 268–291.
Ford, M. (2012). A dialogic account of sense-making in scientific argumentation and reasoning. Cognition and Instruction, 30, 207–245.
Hmelo-Silver, C. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16, 235–266.
Hmelo-Silver, C., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning. Educational Psychologist, 42, 99–107.
Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books.
Iordanou, K. (2010). Developing argument skills across scientific and social domains. Journal of Cognition and Development, 11, 293–327.
Iordanou, K., & Constantinou, C. (2015). Supporting use of evidence in argumentation through practice in argumentation and reflection in the context of the SOCRATES learning environment. Science Education, 99, 282–311.
Kuhn, D. (2010). Teaching and learning science as argument. Science Education, 94, 810–824.
Kuhn, D. (2011). What is scientific thinking and how does it develop? In U. Goswami (Ed.), Handbook of childhood cognitive development (2nd ed., pp. 371–393). Oxford, UK: Blackwell.
Kuhn, D. (2015). Thinking together and alone. Educational Researcher, 44, 46–53.
Kuhn, D. (in press). What do young science students need to know about variables? Science Education.
Kuhn, D., & Arvidsson, T. S. (2015). Beyond control of variables: Introducing academically low-performing young science students to practices of science. Manuscript under review.
Kuhn, D., & Dean, D. (2008). Scaffolded development of inquiry skills in academically disadvantaged middle-school students. Journal of the Psychology of Science and Technology, 1, 36–50.
Kuhn, D., Ramsey, S., & Arvidsson, T. S. (2015). Developing multivariable thinkers. Cognitive Development, 35, 92–110.
Lehrer, R., & Schauble, L. (2015). The development of scientific thinking. In R. M. Lerner (Series Ed.), L. S. Liben & U. Mueller (Vol. Eds.), Handbook of child psychology and developmental science: Vol. 2. Cognitive processes (7th ed., Chap. 16). Hoboken, NJ: John Wiley.
Lorch, R. F., Jr., Lorch, E. P., Calderhead, W. J., Dunlap, E. E., Hodell, E. C., & Freer, B. D. (2010). Learning the control of variables strategy in higher and lower achieving classrooms: Contributions of explicit instruction and experimentation. Journal of Educational Psychology, 102, 90–101.
Moshman, D. (2011). Adolescent rationality and development: Cognition, morality, identity (3rd ed.). New York: Psychology Press.
Muldner, K., Lam, R., & Chi, M. (2014). Comparing learning from observing and from human tutoring. Journal of Educational Psychology, 106, 69–85.
Next Generation Science Standards. (2013, April).
Pease, M., & Kuhn, D. (2011). Experimental analysis of the effective components of problem-based learning. Science Education, 95, 57–86.
Siler, S., Klahr, D., Magaro, C., Willows, K., & Mowery, D. (2010). Predictors of transfer of experimental design skills in elementary and middle school children. In V. Aleven, J. Kay, & J. Mostow (Eds.), Proceedings of the ITS 2010 Conference (Vol. 6095, pp. 198–208). Berlin: Springer. http://dx.doi.org/10.1007/978-3-642-13437-1_20
Wirkala, C., & Kuhn, D. (2011). Problem-based learning in K–12 education: Is it effective and how does it achieve its effects? American Educational Research Journal, 48, 1157–1186.
Zimmerman, C. (2007). The development of scientific thinking skills in elementary and middle school. Developmental Review, 27, 172–223.