Editorial: The Practice of Outcomes Assessment
For years, libraries have engaged in counting and reporting inputs (resource allocation), outputs (how busy they are in the service of their clientele), and performance measures (the definition of which changes over time). Now, there are efforts to show them how to collect performance measures in an electronic networked environment. At the same time, there have been advocates of impact measures, customer-related measures, and outcomes (results-oriented) measures. In all these instances, the presumed goal is a set of measures, relevant to individual libraries, that documents their performance to peer institutions and to the stakeholders to whom they are accountable. These measures, however, are usually reported in the form of ratios and percentages. As Ellen Altman and I explained in Assessing Service Quality, “eleven questions outline the different ‘hows’ of measurement and, in effect, encompass input, output, performance, and outcomes measures. The questions can be used singly or in groups. In fact, some of the ‘hows’ are calculated by using data derived from other ‘hows.’”1 Simply stated, these questions focus on “how much?,” “how many?,” “how economical?,” “how prompt?,” “how accurate?,” “how responsive?,” “how well?,” “how valuable?,” “how reliable?,” “how courteous?,” and “how satisfied?”2

Outcomes assessment takes one of these questions, “how well?,” and expects the answer to be cast in terms of a program’s (e.g., a course of study or a series of workshops) impact on its participants (e.g., students). Thus, outcomes assessment is a form of impact assessment, which, as Peter H. Rossi, Howard E. Freeman, and Mark W. Lipsey explain, is “undertaken to find out whether interventions actually produce the intended effects. Such assessments cannot be made with certainty but only with varying degrees of plausibility.”3 Consequently, the audit evidence gathered to support stated outcomes might be direct or indirect, qualitative or quantitative, and either reducible to a ratio or percentage or not capable of such expression. Clearly, outcomes assessment represents choices: the selection of those methods most likely to show the accomplishment of stated outcomes.
A NEW WAY OF LOOKING AT HIGHER EDUCATION
As part of accountability, stakeholders (including, e.g., government bodies, accrediting agencies, and parents) want to know how well the institutional rhetoric expressed in mission and vision statements, and in goals and objectives, translates into actual performance. Outcomes assessment, therefore, focuses on changes in behavior caused by a program of study or, in the case of academic libraries, on the extent to which exposure to the library and its programs and services over a period of time (e.g., while one earned a baccalaureate or graduate degree) produced change. What knowledge and skills do students have as entering freshmen? What knowledge and skills do they have upon graduation? How did access to library programs and services produce a change in behavior? How well did the library set the stage for lifelong learning? As these questions illustrate, outcomes assessment should be linked to a suitable context: an assessment plan that identifies those outcomes that the library will meet.4 That plan should discuss points such as:
● Are those outcomes lower-order or higher-order outcomes? Lower-order outcomes refer to skills, whereas higher-order outcomes encompass knowledge, such as that associated with critical thinking or problem solving.
● Are those outcomes student learning outcomes or research outcomes? The former refer to mastery of information literacy, and the latter pertain to research as an inquiry process.5 As Hernon and Dugan explain, a possible research outcome might be to “differentiate between a problem statement and a purpose statement.” Related to this outcome might be an expectation about the ability of students to express themselves effectively in both written and oral communication. Thus, over time, examination of the outcome would look at the mechanics of a problem statement and at students’ ability to write a statement more dramatically, that is, to command the attention and interest of readers.
● Will the audit evidence be direct or indirect? Which methods will then be used, and how well does the evidence explain any change in behavior? To what extent does that evidence eliminate rival explanations?
● Will outcomes be limited to students, or do some pertain to faculty and staff? Does the library need a separate faculty support assessment plan?6
AREA OF CONFUSION
It merits mention that an outcome such as “students showed improved information literacy skills” is vague and open to varied interpretation. Further, an outcome focusing on what people indicated they learned is an indirect measure. Did they really learn something, and if so, what? In examining a number of outcomes measures that libraries use, it is evident that many of them remain counts of something: for example, the number of students in a chat room session (really an output measure), the number of staff trained in certain uses of technology (really an input measure), the number of questions received and answered (really an output measure), and so on. The question then arises: what is an example of a real outcome measure? Because outcomes assessment deals with documented changes in learning behavior due to contact with a particular program or a series of programs, librarians might engage in pretesting and post-testing. What is the level of student knowledge and skill to begin with, and did that knowledge and skill change in the way that the library expected? For instance, using database A (or search engine B), a student might be asked to construct a search strategy that:
● Applies Boolean operators to narrow the scope of the search (Outcome One);
● Applies both basic and advanced search protocols to retrieve needed information (Outcome Two); and
● Distinguishes among types of resources contained in that database or search engine (e.g., distinguishes between journal articles and report literature, or between journal articles and books) (Outcome Three).
For each of these outcomes, it is possible to determine the extent of knowledge and skills that students have as they enter a program and the extent to which they meet the expectation over time.
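To make the pretest/posttest idea concrete, the following is a minimal sketch, in Python, of how such audit evidence might be tabulated for the three outcomes above. All of the scores, cohort sizes, and labels are hypothetical illustrations rather than anything drawn from an actual assessment plan.

    # A hypothetical sketch: tabulating pretest/posttest evidence for the
    # three search-strategy outcomes. All scores and labels are invented
    # for illustration.

    # Scores (0-100) for the same cohort of students, measured before and
    # after exposure to the library's instructional program.
    pretest = {
        "Outcome One (Boolean operators)": [40, 55, 35, 50],
        "Outcome Two (search protocols)": [30, 45, 50, 40],
        "Outcome Three (resource types)": [60, 50, 45, 55],
    }
    posttest = {
        "Outcome One (Boolean operators)": [75, 90, 70, 80],
        "Outcome Two (search protocols)": [65, 80, 85, 70],
        "Outcome Three (resource types)": [85, 70, 80, 90],
    }

    def mean(scores):
        """Average score for one outcome across the cohort."""
        return sum(scores) / len(scores)

    for outcome, before_scores in pretest.items():
        before = mean(before_scores)
        after = mean(posttest[outcome])
        # A positive gain is only suggestive evidence of impact; rival
        # explanations (maturation, other coursework) must still be
        # ruled out before attributing the change to the library.
        print(f"{outcome}: pretest {before:.1f}, posttest {after:.1f}, "
              f"gain {after - before:+.1f}")

Even a tabulation of this kind only begins to answer the “how well?” question; as noted above, the evidence must still be weighed against rival explanations for any change in behavior.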
CONCLUSION

In An Action Plan for Outcomes Assessment in Your Library, we elaborate on each of the points discussed in this essay. Some key issues to address in an action plan and in gathering audit evidence are:
● How well can you show that any changes in behavior were due to the library’s programs and services?
● How much inference can we legitimately draw from any audit evidence?
● What is feasible for a library to do? (The answer is addressed through the assessment plan.)
● How will the evidence gathered be used for program improvement?
● What is adequate for any library to do? (The answer is addressed in an assessment plan.)
NOTES AND REFERENCES
1. Peter Hernon & Ellen Altman, Assessing Service Quality: Satisfying the Expectations of Library Customers (Chicago: American Library Association, 1998), p. 51.
2. Ibid., pp. 51–54.
3. Peter H. Rossi, Howard E. Freeman, & Mark W. Lipsey, Evaluation: A Systematic Approach, 6th ed. (Thousand Oaks, CA: Sage, 1999), p. 225.
4. See Peter Hernon & Robert E. Dugan, An Action Plan for Outcomes Assessment in Your Library (Chicago: American Library Association, 2002), pp. 18–42.
5. Peter Hernon, “Editorial: Components of the Research Process: Where Do They Need to Focus Attention,” The Journal of Academic Librarianship 27 (2001): 81–89.
6. See Hernon & Dugan, An Action Plan for Outcomes Assessment in Your Library, pp. 38–40.

Peter Hernon is Professor and Editor-in-Chief, Simmons College, Graduate School of Library and Information Science, 300 The Fenway, Boston, Massachusetts 02115 <[email protected]>.