Assessment in the History, Civics and Social Studies Domains P G Avery, University of Minnesota, Minneapolis, MN, USA © 2010 Elsevier Ltd. All rights reserved.
Assessment in the domains of history, civics, and social studies is associated with a myriad of complex issues. Similar to other content area domains, the content, format, and purposes of assessment are the subject of intense debate, as is the impact of assessment on teaching and learning. However, there are also challenges and issues associated with the assessment of history, civics, and social studies that are unique. This article describes large-scale assessments in the domains, classroom-level assessments, and the issues that make assessment in these domains particularly problematic.
Large-Scale Assessments

Large-scale assessments such as the National Assessment of Educational Progress (NAEP) in civics and US history, the National Assessment Program (NAP) in civics and citizenship in Australia (MCEETYA, 2006), and the International Association for the Evaluation of Educational Achievement (IEA) CivEd study (Torney-Purta et al., 2001) are not intended to provide information about an individual student’s achievement, but rather to give a profile of student achievement at a particular point in time. Because these assessments use complex sampling procedures, the results often serve as a baseline for trend studies: the NAEP assessments in civics and US history are conducted in grades 4, 8, and 12 in the United States approximately every 5–10 years; the NAP in Australia is scheduled to be conducted every 3 years in grades 6 and 10; and the IEA, which has conducted two international studies of adolescents’ civic knowledge and understanding, is scheduled to conduct another study of civic and citizenship education in 2008–2009.

National and international frameworks provide insight into how the domains are conceptualized for the purpose of assessment. In the area of civics and citizenship, the frameworks for the 2006 NAEP in the United States, the 2004 NAP in Australia, and the 1999 IEA CivEd study conducted in 28 countries provide interesting comparisons. The NAEP framework includes civic knowledge, skills, and dispositions; the NAP divides the domain into two components: civics knowledge and understandings, and citizenship skills and dispositions. The CivEd study assessed students’ knowledge, skills, understanding of concepts, attitudes, and actions. It is significant that even in large-scale assessments the subject is recognized as more than knowledge and skills. This broad conceptualization of civics and citizenship, which includes dispositions and
behaviors, is supported by civic leaders, professional organizations, and educational mission statements across many countries. The NAEP and NAP items assessed students’ dispositions; however, items were worded such that they assessed whether students could identify and describe the significance of particular civic dispositions in a democracy; importantly, an individual student’s civic dispositions were not assessed. For example, students were asked to identify the significance of a sense of civic duty in a democracy as opposed to reporting their own sense of civic duty. The IEA CivEd study included a test of knowledge and skills with right or wrong answers, and a survey of students’ conceptions of democracy, citizenship attitudes, and their current and expected political behaviors, with no right or wrong answers. Whereas national assessments often focus on the structures, functions, and characteristics of government of a given country, the IEA CivEd assessment was framed around broad democratic concepts and principles not specific to a particular nation-state. The primary concepts underlying items in the CivEd study were democracy, citizenship, national identity, international relations, social cohesion, and diversity. The NAEP and NAP assessments included multiple choice and constructed response items; the CivEd Study did not include constructed response items (presumably because of the many languages of the students), but did include Likert-type response items in assessing student attitudes as well as current and expected citizenship participation. The framework developed for the 2006 NAEP US history is organized around four major themes (e.g., change and continuity in American democracy), eight chronological periods in American history, and two cognitive domains. The two domains are described as ‘‘ways of knowing and thinking about U.S. history,’’ and are (1) historical knowledge and perspective, and (2) historical analysis and interpretation. 
Approximately two-thirds of the items are focused on what the test developers identify as historical analysis and interpretation. Although multiple-choice items still dominate large-scale assessments, constructed response items now often comprise a significant portion. On the 2006 US Civics NAEP, for example, 20% of the items for grade 12 were constructed response. One item asked students to ‘‘explain three ways in which the power of the president can be checked by the legislative or the judicial branch.’’ Responses were scored as ‘Complete’, ‘Acceptable’, ‘Partial’, or ‘Unacceptable’. Note, however, that constructed
response items do not necessarily require higher-order thinking skills. Although there is a range of possible correct responses to this item, it is still knowledge that is being assessed. In the United States, the No Child Left Behind Act (NCLB) has led to an increased use of statewide tests. Grant and Salinas (2008), who have conducted the most comprehensive review of state assessments in the social studies to date, report that approximately half of the 50 states administer tests in social studies subjects, and about 10 states attach high-stakes consequences to them. They note that the exact number of states engaged in testing is a moving target, with two to three states opting into testing the same year that two to three states opt out. There is wide variation in what is tested, how it is tested, the consequences attached to the results, and for whom the consequences apply (student, teacher, school, and/or school district). Even the name of the test is not necessarily an indicator of the content of the test. Social studies tests can be predominantly tests of history knowledge, while history tests can include items that could easily be interpreted to be within the broader field of social studies. Passing scores are determined by the states, and vary widely. A student who passes a social studies test in one state could quite conceivably move to a neighboring state and be rendered a failure in social studies. The results are not comparable across states because of wide variation in content and the determination of proficiency levels. As with national assessments, multiple-choice items dominate the state-level assessments because of the ease and efficiency with which they can be scored.
Further, when high stakes are attached to the results, such as student retention or failure to graduate, the scoring of multiple-choice items with clear right or wrong answers is more defensible from the standpoint of the test administrators than is the scoring of open-ended, constructed response items. The document-based question (DBQ), however, has been used in the New York Regents History exam since 2000. The format is to present students with parts of 5–8 historical documents, such as newspaper articles, political cartoons, letters, speeches, advertisements, and posters, each followed by short-answer questions, culminating in an essay question. In the 2008 New York Social Studies Regents Examination for US History and Government, for example, students were given eight documents related to reform movements in the 1800s and early 1900s (e.g., an anti-child labor poster). The directions for the essay were: ‘‘Discuss the social, political, and/or economic problems addressed by reformers in the 1800s and early 1900s. In your discussion, include the methods used by reformers to expose these problems.’’ Students were further instructed to:

- develop all aspects of the task;
- incorporate information from at least five documents;
- incorporate relevant outside information;
- support the theme with relevant facts, examples, and details; and
- use a logical and clear plan of organization, including an introduction and a conclusion that are beyond a restatement of the theme.

The task clearly requires more high-level thinking than the typical multiple-choice or short-answer question. However, is it more authentic with respect to the type of work historians do? Grant et al. (2004) analyzed another DBQ from the New York State Regents Examination in Global History and Geography, and concluded that the task fell far short in terms of representing or reflecting the type of work historians do. They noted that historians always examine an entire document (the New York exam often edits primary source documents), engage in sourcing (evaluating the source of documents) and corroborating (comparing the facts presented in different documents), and present historical arguments based on their analyses. The researchers concluded that ‘‘if statewide tests claim to assess accurately students’ historical understanding, then those tests should resemble more closely the work that historians do’’ (p. 337).

The results and secondary analyses of large-scale assessments provide some insights into student achievement in subject areas. Some of the conclusions are predictable; for example, all studies show that students of parents with higher incomes, education, and resources outperform their counterparts. Achievement gaps exist between dominant and nondominant groups, and between native-born students and immigrants. However, other findings are somewhat less expected, and potentially more valuable. For example, frequent (weekly) tests in social studies classrooms are associated with lower achievement levels on the US History NAEP. Minority and lower income students often receive less of the high-quality, active instruction that is associated with higher achievement scores (Smith and Niemi, 2001).
Students who report participating in frequent, substantive discussions in their social studies classrooms tend to demonstrate higher achievement scores in history and civics, and are more likely to state that they expect to vote as adults (Torney-Purta et al., 2001). But it is unclear if and how results such as these impact policymakers and classroom teachers. The national climate of accountability and testing in the United States appears to affect most social studies teachers, regardless of whether their state mandates social studies tests or uses the tests to determine student promotion or graduation; however, the degree to which it changes individual teachers’ classroom practices varies widely. Grant (2006) surmised that teachers are most likely to change their selection of social studies content as a result of testing, less likely to change their assessment
procedures, and least likely to change their instruction. The next section describes what we know (and do not know) about classroom-level assessments.
Classroom-Level Assessments
The content of classroom assessments is presumably shaped by state/national standards, textbooks, and local curriculum guidelines. However, there has been very little research on the content, format, or impact of classroom assessments. On the IEA CivEd survey, teachers of civics-related subjects in 26 countries reported that written essays and oral participation were the most common types of assessment used in classrooms. Teachers from Eastern European countries were more likely to report the use of multiple-choice tests than were those from Western European countries. In the United States (not included in the IEA teacher reports), the NAEP studies provide an indication of the types of assessments used most frequently in social studies classrooms. On the 2006 US History NAEP, teachers were asked to report how often they used five different types of assessments (see Table 1). Fill-in-the-blank assessments were used most often in grade 4, while written paragraphs were the most frequently reported type of assessment in grade 8. It is difficult to draw any conclusions about the use of projects without knowing the type and quality of projects, but unlike the other four assessments, projects are more likely to involve peers. The other four types of assessments are almost always conducted at the individual level. Not surprisingly, the emphasis on writing increases from grade 4 to grade 8; however, the preponderance of fill-in-the-blank assessments, which almost always require lower-level recall, at both grade levels is striking. As reported by the students responding to the 2006 US History NAEP, the emphasis on writing decreases at grade 12, perhaps because teachers at this level often have a much higher number of students than do teachers at grade 4 or 8 (see Table 2). One of the more interesting (or alarming) findings is that 66% of the grade 12 students report taking a test or quiz at least once a week. Students
who take a test or quiz 1–2 times a month, however, score significantly higher on the US History NAEP than do students who are tested more frequently. There are no systematic studies that examine the use of more nontraditional assessments in the classroom. Similar to educators in other subject areas, however, social studies teachers’ repertoire of assessment types was enhanced when notions of alternative assessments and authentic assessments received greater attention in the 1990s. Portfolios, performances, exhibitions, debates, etc. – all modes of assessment teachers had used previously – became more legitimized as valued means of assessing students’ understandings. Rubrics, appropriately developed, prompt teachers to be more explicit about their goals, and the criteria associated with those goals. For students, well-designed rubrics can clarify tasks and grading criteria. Fred Newmann’s work on authentic assessment has had a special impact on social studies education, in part because his original work was with the social studies. He conceptualized assessment as integrally connected to instruction and student performance. In the early 1990s, Newmann and his colleagues at the University of Wisconsin’s Center on Organization and Restructuring Schools (CORS) initiated a major research program focused on students’ authentic intellectual achievement. They reasoned that meaningful teaching and learning should focus on the quality of work that students produce. The CORS group suggested that authentic intellectual achievement should involve: (1) students constructing knowledge through (2) disciplined inquiry that has (3) value beyond the classroom (Newmann et al., 1995). 
Their conceptualization of authentic intellectual achievement reflects much of what we know about good teaching and learning: students are actively involved in producing and developing understandings, rather than reproducing isolated bits of information; they are using disciplinary methods, concepts, and generalizations; and they are recognizing a strong connection between the form and substance of the work they are doing in school and the intellectual work that takes place outside of school. Newmann and coworkers suggest that authenticity may be reflected in three areas: assessment tasks, instruction, and student performance. They developed parallel,
Table 1 US social studies teachers’ report of assessments used one or more times per week^a

Type of assessment      Grade 4 teachers    Grade 8 teachers
Extended essays         20%                 36%
Fill in the blank       72%                 74%
Projects                53%                 62%
Written paragraphs      62%                 81%
Multiple choice         13%                 22%

^a Data from the 2006 US History NAEP, public schools.

Table 2 US grade 12 students’ report of assessments used one or more times per week^a

Type of assessment                                           Grade 12 students
Take a test or quiz                                          66%
Write short answers to history/social studies questions      59%
Write long answers to history/social studies questions       18%
Group project                                                18%
Give a report                                                11%
Write a report                                               11%

^a Data from the 2006 US History NAEP, public schools.
Table 3 CORS standards for authentic assessment tasks^a
Construction of knowledge:
Standard 1. Organization of information. The task asks students to organize, synthesize, interpret, explain, or evaluate complex information in addressing a concept, problem, or issue.
Standard 2. Consideration of alternatives. The task asks students to consider alternative solutions, strategies, perspectives, or points of view as they address a concept, problem, or issue.

Disciplined inquiry:
Standard 3. Disciplinary content. The task asks students to show understanding and/or use of ideas, theories, or perspectives considered central to an academic or professional discipline (e.g., democracy, social class, market economy, and theories of revolution).
Standard 4. Disciplinary process. The task asks students to use methods of inquiry, research, or communication characteristic of an academic or professional discipline.
Standard 5. Elaborated written communication. The task asks students to elaborate on their understanding, explanations, or conclusions through extended writing.

Value beyond the classroom:
Standard 6. Problem connected to the world beyond the classroom. The task asks students to address a concept, problem, or issue that is similar to the one that they have encountered, or are likely to encounter, in life beyond the classroom.
Standard 7. Audience beyond the school. The task asks students to communicate their knowledge, present a product or performance, or take some action for an audience beyond the teacher, classroom, and school building.
^a From Newmann, F. M., Secada, W. G., and Wehlage, G. G. (1995). A Guide to Authentic Instruction and Assessment: Vision, Standards, and Scoring, pp 81–85. Madison, WI: Wisconsin Center for Education Research.
though different, standards and scoring criteria for each area. The authenticity of assessment tasks was evaluated according to seven standards (Table 3). The CORS researchers conducted an extensive study of authentic intellectual achievement in math and social studies classes in 24 restructured elementary, middle, and high schools. They found a high correlation between level of authentic pedagogy and student performance, that is, the higher the quality of instruction and assessment, the higher the quality of student performance. Their observations of classrooms and review of student tasks, however, suggested that in most classrooms, the authenticity of pedagogy (i.e., instruction and assessment) was fairly low. When teachers did implement more authentic pedagogy, all students benefited, regardless of ethnicity, socioeconomic status, gender, or achievement level. The study is one of the few that has looked systematically at the quality of classroom assessments used by social studies teachers. Although the criteria for authentic intellectual achievement can be applied across subject areas, social studies educators have been particularly receptive to the framework.
The National Council for the Social Studies (NCSS) published sets of professional development materials at the elementary, middle, and high-school levels based on the CORS work. Social studies curriculum materials have been developed around the framework for authentic assessment. At least two states, Wisconsin and Michigan, have embedded parts of the framework into their state standards and assessments. Frameworks or rubrics have been developed for assessing many of the rich experiences that take place in engaging social studies classrooms, such as mock trials, role plays, historical debates, and simulated legislative hearings. Discussions about controversial contemporary or historical issues in open and supportive classroom environments are considered by many social studies educators and researchers to be critical experiences for young people in preparing them for adult citizenship. The discussion formats that have received the most attention in social studies include Socratic seminars, structured academic controversies, and public issues discussions. An assessment for an issues discussion, such as the one developed by Harris (2002), includes substantive and procedural criteria. For example, students receive credit for stating and identifying issues, using disciplinary knowledge, and elaborating with reasons or evidence (substantive), or for inviting other students to contribute, recognizing another student’s contribution, and providing summarizing statements (procedural). Credit is deducted for negative behaviors such as monopolizing the discussion. Although professional organizations are very supportive of more authentic assessments, little research has been conducted to indicate how widely such assessments are used in social studies classrooms or how they impact student learning.
Issues and Challenges

There are at least five significant challenges and issues that are particularly salient to assessment in the domains of history, civics, and social studies. First, and perhaps most important, the purpose of each of these subjects in the schools is integrally tied to citizenship education. The social studies is defined by NCSS as ‘‘the integrated study of the social sciences and humanities to promote civic competence.’’ The study of a nation’s history is undertaken in part to give students a sense of their heritage, and a historical perspective that will enable them to make more informed decisions as citizens. Civics as a subject of study, of course, has as its focus the knowledge, skills, and values that enable participation in a democratic society. The citizenship purpose associated with these domains can be problematic in terms of assessment. Engaged and enlightened citizenship, the goal of most democratic societies, entails much more than knowledge
of the structures and functions of government or even an understanding of the principles of democracy. It also involves values and behaviors that are not only difficult to agree upon, but are also very difficult to assess. Many argue, for example, that service learning provides students with experiences that are likely to promote more concerned, community-oriented citizens. However, the understandings of people and communities that often develop through service learning do not lend themselves well to traditional assessments. Although it is possible to assess citizenship attitudes and behaviors (as the CivEd study did), if the results are linked to individual students, how should the results be used? Also, at what point does the assessment of an individual student’s citizenship values and behaviors become undemocratic? This is particularly troublesome in countries that are home to peoples with different ethnic, religious, and cultural backgrounds. However, if assessment is limited to knowledge and skills, it becomes what most social studies educators would consider a very shallow and minimalist vision of citizenship. Second, the domains of history, civics, and social studies are more likely to provoke public debate than many of the other subject areas in schools. The controversial nature of these domains has implications for assessment. The question ‘‘Whose history should we teach?’’ is intertwined with ‘‘Whose history should we assess?’’ Do we assess students on their knowledge and understanding of the dark side of our countries’ histories? Do we teach and assess conceptions of citizenship that are critical, participatory, community oriented, and/or law-abiding? 
Assessments often reflect the knowledge and skills about which there is the most agreement in a society and those that are least likely to provoke controversy; unfortunately, this sometimes reduces history to trivial pursuits and civics to a dull study of the structures and functions of government, all of which are unlikely to engage many students. Third, there is less agreement within the profession as to the nature of history, civics, and social studies, particularly in comparison to disciplines such as math and science. In a study of five subject-area departments at 16 high schools, Grossman and Stodolsky (1995) found that English and social studies teachers viewed their subject areas as less defined than faculty in science, math, and foreign languages, and were less likely to develop common exams with their colleagues. The Ministerial Council in Australia reported that ‘‘the definitions associated with certain key [citizenship and civics] concepts were not generally agreed across jurisdictions, nor was their appearance in formal curriculum documents universal’’ (MCEETYA, 2006: 3). Similarly, in the IEA CivEd study, teachers from across 26 countries reported a lack of consensus as to what constitutes civic education. Moreover, the study conducted by the European Commission (2005) on citizenship education in Europe found different
conceptualizations of citizenship and citizenship education across the 30 countries surveyed. The ambiguity associated with the social studies field, and citizenship education in particular, suggests greater variation in the content of classroom-based assessments, and a difficulty achieving consensus among social studies educators about the content and nature of assessments. Fourth, increased calls for accountability, combined with the lower status of civics and social studies in schools, present difficult dilemmas for social studies educators. The NCLB, passed in 2001 in the United States, identifies civics, government, history, economics, and geography as core subjects. However, unlike reading and math, states are not required to administer tests in these areas. When social studies tests are mandated at the state level, social studies tends to enjoy increased status and resources. The negative consequences of mandated state-level tests often include the narrowing of the curriculum, teaching to the test, and a focus on the recall of information. However, when social studies is not among the state-level mandated tests, it becomes marginalized. The intensive focus on reading and math has led to a decrease in social studies instruction, particularly at the elementary level in high-minority, low-income schools (Jennings and Rentner, 2006; von Zastrow and Janc, 2004). It may take 5–10 years before we can assess the effect of less social studies instruction on young people’s social studies knowledge and understandings. Concerns about the marginalization of social studies led the NCSS, in collaboration with national history, economics, geography, and civics education organizations, to issue a joint statement on NCLB in 2007 calling for the US government to include the social studies disciplines among those for which state-level assessments are mandated. At the same time, the NCSS (1994) supports the development of comprehensive assessment plans that:

1. are designed to inform pedagogical practices;
2. are aligned with local standards and curriculum;
3. include multiple types of assessments, such as portfolios, teacher observation, essays, student performance, and multiple-choice responses;
4. directly connect to the goals of citizenship education; and
5. assess students’ knowledge, thinking skills, values, and social participation.

Most social studies educators are supportive of comprehensive assessment plans, and are concerned about the use of single, high-stakes tests that determine whether a student is promoted to the next grade level or graduates from school. They tend to view assessment more as a classroom-based tool integrated into curriculum and instruction for the purpose of providing ongoing support and feedback to teachers and students. They also express concern about the validity and reliability of all assessments, but particularly those with high stakes attached. Yet, the intense pressure for accountability, and concerns
about the marginalization of social studies when it is not tested, suggest that state-level testing will continue and possibly increase. Fifth, history education, in particular, is often ill-served by the way in which assessments have been conceptualized and implemented. That which can be readily tested in civics and history is often that which is least significant and meaningful. The broad survey nature of courses makes it difficult to assess students’ in-depth knowledge and understandings. The types of historical skills that are most meaningful, those that help students to understand historical significance, take historical perspectives, and develop and analyze historical interpretations, for example, are not those that can be assessed well in one setting at one time with one test or assessment task. Levstik and Barton (2005) noted that perspective recognition, an aspect of historical interpretation, ‘‘cannot be taught, practiced, and mastered during the course of a single, seven-step lesson; it requires sustained attention in a variety of contexts over the course of many lessons, many units, and many years’’ (p. 160). They advocate the type of comprehensive assessment plan that integrates curriculum, instruction, and assessment; is an ongoing part of the students’ educational experiences; and involves students in reflecting on their own development. This vision stands in contrast with, say, the high percentage of students who report completing fill-in-the-blank exercises on a weekly basis in their US history classes. In summary, the knowledge and skills that can be easily assessed in the history, civics, and social studies domains are often those which are least significant to academicians, and least meaningful to students. The movement toward more authentic assessments holds promise, but is frequently at odds with a culture of testing and accountability.
Most striking, however, is the lack of systematic research on the ways in which assessments are implemented in history, civics, and social studies classrooms, and their impact on teaching and student learning. See also: Alternative Assessment; Portfolio Assessment.
Bibliography

European Commission (2005). Citizenship Education at School in Europe. Brussels: Eurydice European Unit.
Grant, S. G. (2006). Research on history tests. In Grant, S. G. (ed.) Measuring History: Cases of State-Level Testing across the United States, pp 29–56. Greenwich, CT: Information Age.
Grant, S. G., Gradwell, J. M., and Cimricz, S. K. (2004). A question of authenticity: The document-based question as an assessment of students’ knowledge of history. Journal of Curriculum and Supervision 19(4), 309–337.
Grant, S. G. and Salinas, C. (2008). Assessment and accountability in the social studies. In Levstik, L. S. and Tyson, C. A. (eds.) Handbook of Research on Social Studies Education, pp 219–236. London: Taylor and Francis.
Grossman, P. L. and Stodolsky, S. S. (1995). Content as context: The role of school subjects in secondary school teaching. Educational Researcher 24, 5–11, 23.
Harris, D. E. (2002). Classroom assessment of civic discourse. In Parker, W. C. (ed.) Education for Democracy: Contexts, Curricula, Assessments, pp 211–232. Greenwich, CT: Information Age.
Jennings, J. and Rentner, D. S. (2006). Ten big effects of the No Child Left Behind Act on public schools. Phi Delta Kappan 88(2), 110–113.
Levstik, L. S. and Barton, K. C. (2005). Doing History: Investigating with Children in Elementary and Middle Schools, 3rd edn. Mahwah, NJ: Erlbaum.
Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA) (2006). National Assessment Program: Civics and Citizenship Years 6 and 10 Report. Carlton South: MCEETYA.
National Council for the Social Studies (NCSS) (1994). Expectations for Excellence: Curriculum Standards for Social Studies. Washington, DC: Author.
Newmann, F. M. and associates (1996). Authentic Achievement: Restructuring Schools for Intellectual Quality. San Francisco, CA: Jossey-Bass.
Newmann, F. M., Secada, W. G., and Wehlage, G. G. (1995). A Guide to Authentic Instruction and Assessment: Vision, Standards, and Scoring. Madison, WI: Wisconsin Center for Education Research.
Smith, J. and Niemi, R. G. (2001). Learning history in school: The impact of course work and instructional practices on achievement. Theory and Research in Social Education 29(1), 18–42.
Torney-Purta, J., Lehmann, R., Oswald, H., and Schulz, W. (2001). Citizenship and Education in Twenty-Eight Countries: Civic Knowledge and Engagement at Age Fourteen. Amsterdam: International Association for the Evaluation of Educational Achievement.
von Zastrow, C. and Janc, H. (2004). Academic Atrophy: The Condition of the Liberal Arts in America’s Public Schools. Washington, DC: Council for Basic Education.
Further Reading

Grant, S. G. (2003). History Lessons: Teaching, Learning, and Testing in U.S. High School Classrooms. Mahwah, NJ: Erlbaum.
Grant, S. G. (ed.) (2006). Measuring History: Cases of State-Level Testing Across the United States. Greenwich, CT: Information Age.
Harris, D. E. and Yocum, M. (2000). Powerful and Authentic Social Studies (PASS): A Professional Development Program for Teachers. Washington, DC: National Council for the Social Studies.
Lee, J. and Weiss, A. R. (2007). The Nation’s Report Card: U.S. History 2006 (NCES 2007-474). U.S. Department of Education, National Center for Education Statistics. Washington, DC: US Government Printing Office.
Lutkus, A. D. and Weiss, A. R. (2007). The Nation’s Report Card: Civics 2006 (NCES 2007-476). U.S. Department of Education, National Center for Education Statistics. Washington, DC: US Government Printing Office.
Miller, B. and Singleton, L. (1997). Preparing Citizens: Linking Authentic Assessment and Instruction in Civic/Law-Related Education. Boulder, CO: Social Science Education Consortium.
National Council for the Social Studies (NCSS) (1999). Special issue: Authentic assessment in social studies. Social Education 63(6).
Niemi, R. G. and Sanders, M. S. (2004). Assessing student performance in civics: The NAEP 1998 civics assessment. Theory and Research in Social Education 32(3), 326–348.
Pike, M. A. (2007). The state and citizenship education in England: A curriculum for subjects or citizens? Journal of Curriculum Studies 39, 471–489.
US Department of Education (2006). U.S. History Framework for the 2006 National Assessment of Educational Progress. Washington, DC: Author.
Wineburg, S. (2004). Crazy for history. Journal of American History 90(4), 1401–1414.
Relevant Websites

http://www.socialstudies.org – National Council for the Social Studies.
http://www.nysedregents.org – State Assessment: Social Studies Regents Examinations.
http://nces.ed.gov – The Nation’s Report Card: The National Assessment of Educational Progress (NAEP).