
Computers and Composition 23 (2006) 462–476

Reconceptualizing classroom-based research in computers and composition

Patricia Rose Webb
Department of English, Arizona State University, Tempe, AZ, United States

Abstract

In this article, Webb maps out three dominant forms of research that have been undertaken in the field of computers and composition—theoretical, case studies, and limited quantitative. She then identifies a fourth mode that has begun to appear—a multimodal approach. Arguing for ways in which this method could help us answer research questions that have been previously unanswerable by other research methods, she calls for the field to consider incorporating this multimodal approach more fully within our research studies.

Keywords: Research methods; Quantitative research; Qualitative research; Multimodal research; Case studies; Best practices; Politics of research

It is time for scholars in computers and composition who study classroom interactions to extend our research beyond the useful anecdotal studies that we have, up to this point, used to help us map the state of affairs in computer-mediated, online, and hybrid writing courses. Three major types of studies have dominated the field's research to date: theoretical arguments that address large-scale, meta-pedagogical, or cultural issues relating to the use of technology; case studies that qualitatively focus on a specific situation (one class, student, professor, workplace, or social interaction); and limited quantitative studies that typically measure computer-mediated participation.

Our studies that focus on theoretical arguments draw on research in computers and composition but rely primarily on technoscientific and cultural critics such as Donna Haraway, Michel Foucault, and Michel de Certeau. The results of these studies range from raising larger metaquestions about the impact uses of technologies are having on the culture to theory-based recommendations that aim to help teachers and scholars frame their interactions with and thoughts on technologies. The second type of research—qualitative case studies—also uses theoretical perspectives to frame the discussion, but its primary focus is on understanding how technology is used in particular situations.




The theory is typically presented in one of two ways: (1) in a section directly following the introduction and then used throughout the discussion and analysis of the specific case study being presented and/or (2) in the discussion/analysis section included after the data is presented. Depending on the site studied, the end result of this research is practical recommendations for either pedagogical changes or future research. The last type of research—limited quantitative studies—typically analyzes one key part of a writing course, such as online interaction. Looking at one or two specific classes, these studies typically present criteria that are then used to evaluate the interactions, and the observations are used to draw specific conclusions about pedagogical practice. The end result of this type of study is a call for particular pedagogical changes or understandings based on quantitative findings from a small number of classes.

An emerging form of research offers another alternative to these three types. Some scholars in the field of computers and composition are mixing quantitative and qualitative measurements in order to draw conclusions about the impact of computers in our culture and our classrooms. Multimodal types of research, which Gesa Kirsch and Patricia Sullivan (1992) called research that combines qualitative and quantitative methods, begin with a question that can only be answered by collecting multiple forms/types of data. Researchers using this approach adapt and combine strategies from the other three methods in order to fully address the research question. With a multimodal approach, the data can lead to conclusions that rest on durable, overarching issues and can, therefore, be applied to more situations than the one described in the study. While all four types of research—theoretical arguments, qualitative case studies, limited-scope quantitative studies, and multimodal approaches—provide durable issues that cut across schools, classes, and teachers, the multimodal approach lends itself to wider application because it provides a comprehensive analysis of multiple measures that have a broader impact.

In this article, I map the three forms of research that have predominated to date. This "state of the profession" discussion will demonstrate the strengths of our current research strategies along with the gaps that need to be filled with different kinds of studies. I use one key study to illustrate each of the three types of research while pointing to other studies that also fit the framework being discussed. The studies I analyze in detail are representative of the kinds of work being done in the individual areas and have been chosen for that reason. They are also influential studies written by major theorists and researchers in the field and have, therefore, impacted other research. Using these analyses as background, I then present studies that use the multimodal approach to research, highlighting the ways in which they help us extend our current thinking about research in the field.

The focus of my study is particularly on works published in the dominant venues of the field of computers and composition, primarily the journal Computers and Composition and the related book series. I understand that others in the fields of technical communication and education have done important work on the cultural impacts of technology use, but I am particularly interested in the methods that scholars who locate themselves in the field "proper" have used.
Certainly, we have drawn upon many disciplines and are located in many departments, but, I suggest, there is a cohesive field of study that has been labeled "computers and composition," and thus I locate my work in that category even as I recognize the shifting and malleable nature of such categories. Further, while many in the field research arenas other than the classroom, in this article I analyze the ways in which we have studied mediated classrooms because, as a teacher-researcher, I contend that we must invite more people into the published conversations about teaching, and one way to accomplish this may be to expand our definitions of research.

1. Theoretical arguments

Computers and composition researchers whose studies focus primarily on theoretical analyses raise questions about our meta-cognitive understandings of technology use and point to the informed perspectives that should guide that use. Anne Wysocki and Johndan Johnson-Eilola's (1999) "Blinded by the Letter" was published in Passions, Pedagogies, and 21st Century Technologies, which was edited by two of the field's founders, Gail Hawisher and Cynthia Selfe. Wysocki and Johnson-Eilola are two important "second generation" computers and compositioners whose cutting-edge theoretical works have had a direct impact on the direction of the field. In this piece, Wysocki and Johnson-Eilola troubled the rush to put literacy "next to other terms: visual literacy, computer literacy, video literacy, media literacy, multimedia literacy, television literacy, technological literacy." (p. 349) They unpacked cultural (including educational) uses of "literacy" in order to highlight the political underpinnings of the term and to encourage us to move away from the idea of the literate subject as a passive receiver of knowledge. Instead, they contended that literacy is "about how we all might understand ourselves as active participants in how information gets 'rearranged, juggled, experimented with' to make the reality of different cultures." (p. 366)

In their theoretically based argument, they drew upon theorists such as Walter Ong (orality and literacy scholar), Sven Birkerts (technoscience critic), and Stuart Hall (cultural studies theorist) to argue for a different conception of literacy. They used both Ong and Birkerts to map out our cultural conceptions of literacy and how those conceptions work to construct particular identities. Their use of Ong helped them establish the cultural preconception that literacy is a must for any person in our culture, quoting him as saying, "Literacy [. . .] is absolutely necessary for the development not only of science but also of history, philosophy, explicative understanding of literature and of any art, and indeed for the explanation of language (including oral speech) itself" (qtd. in Wysocki and Johnson-Eilola, p. 351). Clearly, they said, Ong proved that a large bundle comes with our use of the term "literacy." They used Sven Birkerts' nostalgia for books/print literacy to demonstrate the role that literacy plays in constructing identity: "I stare at the textual field on my friend's (computer) screen and I am unpersuaded. Indeed, this glimpse of the future—if it is the future—has me clinging all the more tightly to my books, the very idea of them" (qtd. in Wysocki and Johnson-Eilola, p. 357).

They offered Stuart Hall's concept of articulation as an alternative to these traditional notions of literacy. Hall defined articulation as "the form of the connection that can make a unity of two different elements, under certain conditions. It is a linkage, which is not necessary, determined, absolute and essential for all time. You have to ask, under what circumstances can a connection be forged or made?" (qtd. in Wysocki and Johnson-Eilola, p. 367). Articulation, they argued, allows us to shift from a sense of literacy as a monolithic entity/term to seeing it "as a cloud of sometimes contradictory nexus points among different positions." (p. 367)


Wysocki and Johnson-Eilola then analyzed our old assumptions about literacy in order to show how they limit our thinking about technological literacy. These uses of theorists set up Wysocki and Johnson-Eilola's call for a different definition of literacy, one that asks for a shared and discussed, ongoing reconception of the space and time we use for and in which we find (and can construct) information and ourselves. This reconception is thus not about handing down skills to others who are not where we are, but about figuring out how we all are where we are, and about how we all participate in making these spaces and the various selves we find here (p. 366).

The goal of the essay is to encourage us to avoid dragging old assumptions about literacy with us when we discuss technological literacy. Instead of analyzing a specific classroom situation or making pedagogical suggestions at the end of the essay, Wysocki and Johnson-Eilola focused on the theoretical framework that guides our thinking about what exactly "technological literacy" is. Their article does, in the end, offer suggestions that have an impact on how we teach "technological literacy," but its primary focus is changing our thinking at a theoretical level.[1]

Rethinking technological literacy as Wysocki and Johnson-Eilola did is necessary because it encourages large-scale reevaluations of our assumptions and definitions. This body of work therefore contributes greatly to the field's understanding of itself and the directions it should explore. Our theoretical frames shape the ways in which we use technology in our own research and in our own classrooms; thus, it is crucial to make those frames visible. Instead of assuming that everyone in the field understands what "technological literacy" is, theoretical work such as theirs insists upon the need for us to engage in a dialogue that examines the theoretical underpinnings of our work and points to the effects of those theories. No class, no research study, no scholarship is ever free from theory; making that theory visible is, as these theoretical studies show, a crucial move in the field.

[1] Their call for theoretical analysis is echoed by others in the field, including Cynthia Selfe's highly influential Technology and Literacy in the Twenty-First Century, in which she outlines two responsibilities that humanities scholars must take on: "It has become increasingly clear over the past five years that we also have two much larger and more complicated obligations: first, we must try to understand—to pay attention to—how technology is inextricably linked to literacy and literacy education in this country; and, second, we must help colleagues, students, administrators, politicians, and other Americans gain some increasingly critical and productive perspective on technological literacy." (p. 24) Some other scholars who have undertaken these theoretical reflections are Patricia Sullivan (2001), "Practicing safe visual rhetoric on the World Wide Web"; Susan Miller (2001), "How near and yet how far? Theorizing distance teaching"; Laura Brady (2001), "Fault lines in the terrain of distance education"; and Jeffrey Grabill (2003), "On divides and interfaces: Access, class, and computers," to name but a few.

2. Qualitative case studies

Case studies and the qualitative research they embody are a hallmark of computers and composition in particular and of the humanities in general. As a field, computers and composition has been built through scholarly analyses of computer-mediated writing classrooms.


Since the journal Computers and Composition started in 1985 as a newsletter (today it is an international scholarly journal with readership in more than 55 countries around the world), case studies of individual students (Sibylle Gruber, 1995, "Re: Ways we contribute"), individual teachers (Charles Moran, 1998, "From a high-tech to a low-tech writing classroom: You can't go home again"), and individual classroom situations (Robert Yagelski and Jeffrey Grabill, 1998, "Computer-mediated communication in the undergraduate writing classroom") have been a dominant form of accepted research in the field.[2] A great deal has been accomplished through these case studies: if literacy is always contextual and situational, then individual case studies allow us to analyze the particular contexts of the individual course. Case studies allow researchers to look closely and particularly at certain parts of a classroom situation. They provide full, in-depth exploration of an example that can then lead to generalizations about durable issues. The conclusions drawn from particular cases can be applied critically to other cases or situations.

[2] Other studies like these include (but are certainly not limited to): Greg Wickliff and Kathleen Blake Yancey (2001), "The perils of creating a class Web site"; Robert Yagelski and Sarah Powley (1996), "Virtual connections and real boundaries: Teaching writing and preparing writing teachers on the Internet"; Susan Kirtley (2005), "Students' views on technology and writing: The power of personal history"; and Jeff Rice (2003), "Writing about cool: Teaching hypertext as juxtaposition."

"The poetics of computers: Composing relationships with technology" (2003), collaboratively written by Bernadette Longo, Donna Reiss, Cynthia Selfe, and Art Young, is a representative example of this type of study. In the article, they analyzed a course titled "Exploring the Relationships between Humans and Computers," which they collaboratively taught at Clemson University in Spring 2001. In an attempt to situate the course "at the boundary between humanistic and technological studies," (p. 101) the teacher-researchers designed it to attract two very different populations of Master's students—students pursuing MA degrees in literature and MA degrees in Professional Communication. They hoped the students from both programs would see "that these two areas of study, professional communication and literature, have a great deal to offer each other, thereby promoting an opportunity for productive interdisciplinary study within English departments." (p. 98)

Karel Capek's (1961) and Langdon Winner's (1992) work on robots and autonomous technology provided the theoretical foundation for the course. Using these theories as a foundation, the authors used the following premises to structure the course: "we wanted to learn more about what makes us human users and creators of machines by studying machines that were designed to emulate humans. We wanted to know more about human desires and motivations as they are realized in machines." (p. 101) The focus of their article, then, is to determine to what degree the course achieved this goal. The data collected in the study is fairly typical of case study data: the course assignments and the class activities that were derived from them; examples of students' projects completed in the course; email interviews that occurred sixteen months after the course was taught; and the teacher-researchers' observations about and readings of student behavior in class and on the discussion board.
The article's authors relied heavily upon the email responses because they provided insights into the effectiveness of the course in achieving its goals:


What interested us most in this regard was what we discovered 16 months after the seminar ended when we sent an email questionnaire to the 10 students for whom we had addresses. They wrote us about how little thought they had given to the humanistic issues associated with computer technology before taking the seminar [. . .]. After the seminar, the students from both disciplines felt they developed an important critical perspective, based in humanistic study, on how computers shaped their lives and the lives of others—an important goal of the course. (p. 99)[3]

[3] Thirteen students enrolled in and took the course during Spring 2001.

Students' responses to the email questionnaire appear throughout the article in order to support the claims the teacher-researchers made about the learning in the course. They are used to contextualize the student projects on which the authors focus their analysis. Verbal/visual examples from the projects and post-course email answers from the students are the chief pieces of data used in the article. After presenting a range of data collected from the course assignments, their observations, and the email responses, the teacher-researchers then used Anthony Giddens (1979) and Andrew Feenberg (1999) to analyze the data they had collected. To ground their analysis, they focused on one assignment—an assignment that "asked students to reflect on the differences between various kinds of entities—both human and not exactly human—that they had encountered in their readings of or discussions about technology." (p. 111) They grouped students' responses into three main groups—those who thought sorting out humans from machines was either (1) "relatively easy," (2) "a complicated one," or (3) "an exercise in reflection [that engaged students] in the process of actively composing their own relationship to technology" (pp. 111–112). Under each of these groups of responses, the authors offered reflections from multiple students. In the end, the authors used their data and analyses to conclude that the course goals were met:

In addition to providing students (and faculty) opportunities to explore a range of personal takes on human-computer relationships, we also think this class offered students some perspective on how these relationships had developed within the cultural and historical context of the twentieth century. And we think the course encouraged students to reflect—broadly, deeply, and from humanistic perspectives—on their own responsibility for shaping the human-computer relationship in contemporary contexts. (pp. 112–113)

This summation highlights the key impetus for and results of case studies that appear in the field of computers and composition: contextualized personal understandings of cultural and historical issues surrounding human/technology relationships. Throughout the study, the teacher-researchers' focus was on "personal takes on" cultural and historical issues. In the article, the authors mapped individual students' responses to the assignments, to the course discussions, and to the human-machine topic being addressed in the course. While, at points, students were grouped into categories, typically the students were treated as separate individuals who had separate, individual responses to the course themes/issues. Yes, the "individual" is clearly positioned as constructed through culture and history, but the focus is primarily on how one digests, interprets, and understands the culture. The individual, then, is presented as a "case" that can be studied, and based on this analysis, individual responses can be translated into durable issues.

As a field, we have learned a great deal from case studies like this. By understanding the learning situation(s) in much clearer terms, we have been able to become more effective teachers. The studies that fall into this category typically move to show how the issues addressed in the particular study should be applied to other class situations. The move to generalize, then, is a common factor in these articles, just as in the theoretical argument method discussed earlier. To make such a move means that analysis of one particular class can have meaning for more people than just those who participated in that individual class. This is a hallmark of a new trend in computers and composition—we tend not to debate why or how to use technology in concrete, technically specific ways (i.e., set up discussion boards this way or have students log on this way), but rather to analyze, in specific contexts, the teaching situation and what is accomplished through the use of technology.

3. Limited-scope quantitative studies

Some scholars in the field of computers and composition have identified a need for quantitative studies to add to the results achieved through theoretical methods and/or case studies.[4] In "Investigating the practices of student researchers: Patterns of use and criteria for use of Internet and library sources" (2000), Vicki Tolar Burton and Scott Chadwick (drawing on Fred Kerlinger's 1986 argument) contended that quantitative methodologies (such as widely distributed surveys) have two distinct advantages: "First, surveys allow the researcher to gather a large amount of data from a sample that can be generalized to a large population. Second, the data gathered from a properly designed and administered survey is quite accurate." (p. 310) Through its analysis of quantitative data collected from surveys, Burton and Chadwick's article represents one trend in quantitative studies in the field—articles primarily using quantitative research. Here, I analyze Burton and Chadwick's argument and methodology as a representative example of these types of studies.

In their study, Burton and Chadwick (2000) wanted to determine the ways in which students evaluated the sources used in their research-based writing. Burton and Chadwick were responding to teachers' complaints that when students write research papers, they are engaged in

academic promiscuity, a kind of textual sleeping around among whatever attractive sources can easily be picked up in chatrooms, databases, or stacks. Whether student researchers are choosing inappropriate sources due to lack of training, lack of time, lack of discretion, or for some other reason, the practice merits attention because it both devalues and places at risk a central assumption of academic writing: that a writer will support claims with appropriate, valid, and authoritative evidence. (p. 310)

[4] Some of these scholars include Susanmarie Harrington, Mark D. Shermis, and Angela Rollins (2000), "The influence of word processing on English placement test results," and Thomas Reynolds and Curtis Jay Bonk (1996), "Facilitating college writers' revisions within a generative-evaluative computerized prompting framework." There are far fewer of these kinds of studies in Computers and Composition than case studies and theoretical arguments, though.


In order to address this concern, Burton and Chadwick surveyed students to determine which courses across the campus required students to write research-based projects, what training students received in using Internet and library sources, who taught them their skills, and what criteria students used to evaluate Internet and library sources. Burton and Chadwick used large-scale surveys "to gather information from several hundred students, analyze that information, and then generalize our findings to students at similar universities." (p. 310) They surveyed 543 students, freshman through graduate, at a Research I university. Because the students came from 97 different majors, Burton and Chadwick contended that they could generalize their findings and apply them to more students and more universities. This diversity, they state, is a unique feature of their study.

They reached three significant conclusions from the data they collected. First, 91.9% of the undergraduate students[5] were required to write at least one research-based paper for a college class during the last year, which, they said, suggests that there is a great need for strong Writing Across the Curriculum programs. The study "indicates the need for instruction in source evaluation across the curriculum," (p. 320) especially if how to evaluate sources is not taught in first-year composition. Second, 37.6% of students in the study had not received any training in how to use the Internet and the library to find sources. Burton and Chadwick (2000) asserted that this lack of training leads to the kind of textual promiscuity about which teachers complain: "If teachers want student writers and researchers to stop using sources irresponsibly or inappropriately, they must take responsibility for training students in strategies of critical evaluation." (p. 324) Third, in addition to the lack of training in finding sources, students were not given training in how to evaluate the sources they did find. When students were asked by the researchers to rank the criteria they used in selecting which sources to use, they overwhelmingly identified accessibility of the source as the primary criterion. Although they also prioritized up-to-date sources, primary research sources, and authoritative sources, typically the first sources students found were the ones they eventually included in their papers. Burton and Chadwick (2000) explain:

When students seem to use sources without discrimination, they are probably using what is most accessible. Although Rose (1996) suggested that we teach "scholarly citation as a courtship ritual designed to enhance a writer's standing in a scholarly discourse community," (p. 34) the truth is that, for many students, the assigned research report is more like a one-night stand (sometimes literally) than a courtship. They do not have the personal investment in their topic or even their discipline that a professional in the field would have. For most students, there will be no second date; thus, the high priority on ease of access. (p. 321)

[5] Burton and Chadwick did not tell us what portion of graduate students had written a research-based paper within the last year.

This finding suggests that students need to learn how to better select sources based on criteria that help to strengthen the paper. This process would ask students to be more committed to the work they are doing; hence, we must "believe that the discovery of knowledge matters more to a student's learning than the regurgitation of previously discovered knowledge." (p. 325) The way we teach research, then, needs to change fundamentally.

Burton and Chadwick used their quantitative findings both to suggest changes in teaching and to point to future research that needs to be undertaken in order to better understand this complex problem.[6] At the heart of their findings is the claim that we are not effectively teaching students to write research-based papers. They argued that it is especially important for non-Research I institutions to undertake a similar study in order "to determine if institutional context creates differences in categories of the survey," (p. 325) implying that their findings are generalizable to other Research I institutions. These are the specific benefits they attribute to the quantitative nature of the research and want to build upon. The quantitative, across-the-curriculum data allows them to claim, first, that this problematic evaluation is common across disciplines and not just in first-year writing courses and, second, that the sample population is large enough for them to generalize that this problem exists at other universities as well, even though they call for scholars from other universities to explicitly test their findings at their own institutions.

[6] At the end of the article, they listed five different types of research projects that need to be undertaken, from replication of their study to quantitative and qualitative studies of "how writing is actually taught in the classroom." (p. 325)

Burton and Chadwick's research is certainly a useful addition to our current understandings. By focusing on large-scale surveys of students' perceptions of research, they were able to highlight and explain some of the key problems found in students' research papers. What the study didn't answer, however, is what relationship exists between students' methods and assumptions and the final quality of the paper. Nor did they look at how the paper itself fits into the larger contexts of the courses: Is it worth a significant portion of the grade? Does the teacher spend time explaining and detailing the writing assignment throughout the semester, or is it an "on your own" kind of project? What is the pedagogical purpose of the paper? So, yes, we understand that students' relationship with research is "promiscuous," but we do not really know how these findings impact student learning. There are, then, limits to the usefulness of the findings. These limits perhaps explain the field's heavy reliance on case studies, but mixing the two methods (quantitative and qualitative) together could explain the situation more clearly by offering not only statistical information but also qualitative reflections that would help to address the questions that Burton and Chadwick's study cannot. Certainly, a study cannot be all things to all people, nor can it answer all questions; looking at the possibilities of a multimodal approach to research, however, could complement our already existing methods while also inviting us to reconsider the effectiveness of current methods.
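Burton and Chadwick's confidence in generalizing rests on ordinary sampling logic: with hundreds of respondents, a sample proportion carries a fairly small margin of error. The sketch below (mine, not the authors') runs that calculation on their reported figures; note the simplifying assumption that the full sample of 543 serves as the denominator for the 91.9% figure, a breakdown the article itself does not give.

```python
# A minimal sketch of the sampling logic behind survey generalization,
# using a normal-approximation 95% confidence interval for a proportion.
# Assumption: the full sample (n = 543) as the denominator for the 91.9%
# figure; Burton and Chadwick do not report the undergraduate-only n.
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """Point estimate +/- z * standard error for a sample proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

low, high = proportion_ci(0.919, 543)
print(f"95% CI for 91.9% of 543 respondents: ({low:.3f}, {high:.3f})")
# Prints roughly (0.896, 0.942): the estimate generalizes within a few points.
```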

4. Mixed quantitative and qualitative: a progressive model

A recent methodology used by scholars in computers and composition does just that. Beth Hewett's (2000) "Characteristics of interactive oral and computer-mediated peer group talk and its influence on revision" details a functional and qualitative study of interactive oral and computer-mediated communication (CMC).[7] Hewett explains her rationale: "I developed a case study grounded in naturalistic inquiry [. . .] and triangulated with quantitative data from functional analyses of talk and textual analyses of writing, as well as quantitative data from student response journals, retrospective interviews, and teacher observations." (p. 267) Hewett's study is representative of the types of articles that mix quantitative and qualitative methods, and her reason for incorporating the quantitative data—to triangulate the qualitative data—is the most common reason others have set up mixed-mode studies.

[7] Another example that fits this multimodal model is Michael Russell, Damian Bebell, Jennifer Cowan, and Mary Corbelli (2002), "An AlphaSmart for each student: Do teaching and learning change with full access to word processors?"

In her study, Hewett determined how peer response—both oral, face-to-face response and response in CMC environments—influenced students' revision processes. In order to answer this question, she employed two different methodological frameworks: (1) she compared face-to-face, oral peer interactions to written, CMC peer interactions, drawing on discourse analysis theories to analyze the frequency of comment types; and (2) she analyzed the relationship between the comment types and the revisions students made in their papers. In order to determine the impacts of peer-response talk on writing, Hewett adopted different methodologies: "to understand the qualitative and quantitative influence of the group talk on revision, I used student journals and interview, rhetorical analyses of the arguments, and revision changes compared against the coded peer talk about each proposal and draft." (p. 278) Even when analyzing the direct impacts on writing, Hewett combined qualitative and quantitative data. This methodology is unique because studies that analyze writing typically do so through textual and contextual analysis based in qualitative strategies.

In order to determine the kinds of conversations that occur during peer-response sessions, Hewett started with a functional analysis of three representative examples: Group 1's face-to-face discussion,[8] Group 2's CMC discussion (both synchronous and asynchronous),[9] and Group 2's face-to-face discussion. Using Lester Faigley and Stephen Witte's (1981) categories—"addition, deletion, substitution, rearrangement, distribution, and consolidation"—Hewett coded the rhetorical choices students made in the revision process (p. 278). She analyzed selected sections of conversations of each type, looking for the following kinds of comments: Inform (Content, Form, Process, Reference, Context); Direct (Content, Form, Process, Reference, Context); Elicit (Content, Procedure, Context); and Phatic. For example, when Frank (in an online post) asked the following of one peer in a CMC discussion, "Where are the solutions," Hewett labeled the comment "Elicit Writing Process" because it directly asked the student to consider her writing process. Another example comes from Rob (in a face-to-face discussion): "So, and they're talking about, like, you know, helping the—you know, the poverty-stricken. I mean, I'm sure, I'm sure there's something, you know, where Jesus helps the poor man." Hewett labeled this passage "Inform Writing Context" because in it Rob was trying to determine the audience for the writing, one key part of the writing context. Hewett combined this kind of textual analysis with statistical data about the frequency of comment types in each group environment.

[8] Group 1 only ever met face-to-face; its members never used CMC technologies to interact with each other.
[9] Students used Norton CONNECT for the synchronous and asynchronous discussions.

The first move—a textual analysis—allowed Hewett to provide a detailed map of the nature of the conversations that took place. In other words, we see actual student engagements and get a "feel" for these discussions. No statistical data could have captured the "mood" of the discussions, and mood is an important factor in Hewett's claims about the differences between the two types of environments.
Yet, textual analysis by itself cannot necessarily give us a sense of the bigger picture—how many times were certain kinds of comments used in the discussions over the course of the semester?
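The quantitative half of such a design is mechanical once the hand-coding is done: every coded comment becomes a row, and the rows are tallied by group, medium, and category. A hedged sketch of that tabulation step, with invented comments (only the category labels follow Hewett's coding scheme):

```python
# A sketch of tallying hand-coded peer comments; the data here are invented,
# and only the category labels follow Hewett's coding scheme.
from collections import Counter

coded_comments = [
    # (group, medium, code) as a coder might record each comment
    ("Group 1", "oral", "Inform Writing Context"),
    ("Group 1", "oral", "Phatic"),
    ("Group 2", "CMC",  "Elicit Writing Process"),
    ("Group 2", "CMC",  "Inform Writing Context"),
    ("Group 2", "oral", "Direct Writing Form"),
    ("Group 2", "oral", "Inform Writing Context"),
]

# Frequency of each comment type within each (group, medium) environment.
frequencies = Counter(coded_comments)
for (group, medium, code), count in sorted(frequencies.items()):
    print(f"{group} ({medium}): {code} -> {count}")
```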


Through a statistical evaluation of comments made throughout the entire semester, Hewett drew generalizable claims about peer discussions in oral and online environments. Hewett's analysis of students' revision, then, is textual, contextual, and quantitative. These strategies helped her to generalize her findings about students' revision choices. By comparing the quantitative findings about the types of peer interaction comments made, she determined that there was a direct correlation between the frequency of certain kinds of comments and the kinds of revisions students made in their papers. For example, "inform idea units," in which peers and writers tell each other their perceptions about the writing itself, were the largest group of comments made during peer review sessions. Likewise, "inform idea units were most frequently involved in revision choices." (p. 280) Group 2's CMC discussion included fewer overall revision suggestions, possibly because the written nature of the comments erased the need for the redundancy that can be important in oral discussions.

Interestingly enough, although Group 2 (CMC) made fewer revision-suggestion comments (446) than Group 1 (Oral) (558) and Group 2 (Oral) (669), Group 2 (CMC) made more revisions than the other two groups: Group 2 (CMC) made 75 changes, Group 1 (Oral) made 62, and Group 2 (Oral) made 39. Thus, Hewett pointed out that correlating the types of comments and the types of writing helps us to see that more comments do not necessarily lead to greater revision; in fact, her findings suggest that the medium in which the comments are delivered is a key factor in the ways in which the comments get taken up.

Another factor that is important to consider, Hewett suggested, is exactly how writers took up their comments. Direct exchanges (directly applying the peer's comment in the paper) occurred more frequently in the CMC class, while indirect exchanges (either imitation of another's way of argument or style of writing and/or intertextual references to and uses of large-group conversation about writing issues) happened much more frequently in the oral sections of the course. Hewett surmised that direct exchanges were more frequent in the CMC section because the asynchronous nature of that medium meant that students had to write a direct comment to the writer rather than participate in a group exchange in which students bounced ideas off each other. The comments and the asynchronous discussion itself thus ended up being 1:1 kinds of remarks that did not draw in the rest of the group; the intertextual piece was missing. According to Hewett, the benefits of indirect exchange are based on the physical nature of the class: "With oral talk, gestures and body language supply cues that signal the particular receiver of the exchange, while they keep the talk open to the group as a whole. Including the entire group as interlocutors in the talk encourages interaction which may lead to more intertextual idea exchanges." (p. 282) This kind of indirect effect of discussions on writing/revisions is crucial. Hewett generalized her findings to argue that "students self-generate ideas, not only in direct connection with their writing, but also as they talk about peers' writing. Again, it is the interaction among group members that inspires students to talk and think about their writing, encouraging them to make connections among their ideas and those of their peers." (p. 283)

Hewett's study serves as an example of what can be accomplished in our scholarship when we combine various methods in order to answer our research questions. Correlating both parts of her study and drawing upon both quantitative and qualitative methods allowed Hewett to generalize about the reasons for the differences between CMC and oral group discussions and revision patterns.
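A quick back-of-the-envelope check (mine, not Hewett's) makes the pattern in the counts above easy to see: dividing revisions by revision-suggestion comments, the environment with the fewest comments shows the highest rate of uptake.

```python
# Revisions per revision-suggestion comment, from the counts Hewett reports.
counts = {
    "Group 2 (CMC)":  {"comments": 446, "revisions": 75},
    "Group 1 (Oral)": {"comments": 558, "revisions": 62},
    "Group 2 (Oral)": {"comments": 669, "revisions": 39},
}
for group, c in counts.items():
    print(f"{group}: {c['revisions'] / c['comments']:.3f} revisions per comment")
# Group 2 (CMC):  0.168
# Group 1 (Oral): 0.111
# Group 2 (Oral): 0.058
```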

If she had only used one or the other (quantitative or qualitative), she could not have made these kinds of claims as effectively, because she would not have had the data with which both to present the pattern and to explain why it happened. As she said in her conclusion, "although naturalistic inquiry can yield only limited conclusions, a systematic study such as this one can begin to address how electronic conferences influence classroom settings and writing." (p. 284) Clearly, her mixed-mode methodology helped her to draw these conclusions.

Other scholars in our field should consider using this mixed-mode approach because of its benefits. It opens up new areas for us to research as well as expanding the kinds of answers and results we can achieve. While all three modes of research are important, the mixed-mode version needs to be expanded upon, given that the field is currently dominated by the first two methods.

5. Conclusions

All of these types of research methods have been useful in adding to our current understanding of how to use this technology most effectively, what its impacts are on learning and teaching in particular situations, and what kinds of changes or re-orientations we need to consider. However, as many of these studies argue directly, we need to add to these current, anecdotal understandings in three significant ways (a sketch of the third follows the list):

1. We need to study a larger population of students across semesters and with different instructors. In other words, we need studies that can make more generalizable claims based on measurement and assessment of multiple populations of students and teachers.
2. We need studies that collect more varied data so that the findings provide a richer, more comprehensive picture of the impacts of technology on teaching and learning. For this to occur, we need to measure interaction as many of the existing studies do, but we must complement that information/data with three major groupings of data: cognitive gains, attitudinal perceptions, and retention rates.
3. We need to create durable, generally applicable measurement instruments that can effectively and consistently collect usable data on the three groupings of data listed above.
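To make the third point concrete, the sketch below shows one hypothetical shape such an instrument's records could take. Nothing in this article specifies a format; the sketch simply illustrates collecting the three groupings consistently enough to compare across sections, semesters, and instructors.

```python
# A hypothetical record format (not from the article) for collecting the three
# data groupings named above in a consistent, comparable way.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    section_id: str           # ties the record to a course, term, and instructor
    pre_score: float          # cognitive gains: common measure given at term start
    post_score: float         # ...and again at term end
    attitude: int             # attitudinal perception, e.g., a 1-5 Likert rating
    retained_next_term: bool  # retention: enrolled the following term?

    def cognitive_gain(self) -> float:
        return self.post_score - self.pre_score

# One invented record; a real study would aggregate many of these.
record = StudentRecord("ENG101-F06-A", pre_score=62.0, post_score=74.5,
                       attitude=4, retained_next_term=True)
print(record.cognitive_gain())  # 12.5
```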


Many claims are made about technology's impact on teaching and learning based on case-study, qualitative assessment measures. These claims have moved the field forward and have contributed much to our thinking. But what they have not done is create effective ways to collect a well-rounded body of data and, even more importantly, effective ways to assess the data that is collected.

Although these new views on and practices in research would strengthen our field, undertaking these kinds of studies can be problematic within the institutional contexts in which we publish and get evaluated for our scholarship. These kinds of studies ask us to straddle the disciplinary boundaries of traditional humanities research and traditional social science research, creating a sort of bastardized version of research that is not fully located in one camp or another. As Barry Maid (2000) indicated, those of us in computers and composition already occupy troubled positions within the departments (typically English departments) that house us:

Unfortunately, faculty who specialize in rhetoric and composition or technical communication publish their professional work in different ways. The names of the journals are not familiar to their literary colleagues. Even worse, the nature of the field is that much of the scholarship may have to do with pedagogy—a topic not traditionally valued by humanities faculty. Further complicating matters is the tendency of techrhets to write and produce collaborative work with two, three, or even more authors. The nice, neat world of the English department has become messy, and the easiest way to deal with mess is to eliminate those causing it. (p. 12)

Theoretical arguments and case studies, then, better approximate the kinds of scholarship that humanities faculty who serve on promotion and tenure committees relate to and value. Studies that are quantitative, while not located within the humanities' comfort zone, appear to be located in the social sciences and are thus acknowledged as valuable because they make sense within a current disciplinary framework. To mix the types of studies together, however, can jeopardize computers and composition researchers whose work is evaluated by those who privilege more traditional forms of research. Further, in order to get a "hearing" for our cutting-edge work in which we critique dominant definitions of literacy, researchers have tended to utilize accepted conventions. As Sibylle Gruber (2000) pointed out:

Although technology-related work may focus on issues of marginalization and oppression, authors often conform to traditional western structural elements in their work to increase their chances of being read and understood. In a sense, scholars working in this tradition are trying to enact transformation and to renegotiate power by using power to raise awareness of the positionality of institutional outsiders in an "acceptable" format. (p. 50)

Even though Gruber did not specifically mention the kinds of research that are accepted, her argument can be extended to the choices scholars make in that arena in addition to the structure of their writing. Traditional forms of research kept in "neat" categories that don't make the English department's work "messy" are one way to create a space for our research to be read and valued. The problem with this approach, however, is that certain kinds of research questions cannot be answered by using only one method, as I argued above. This "careful" approach to research, then, limits us in ways with which we should be uncomfortable.

So, what can we do to reconcile the need for multimodal research to address the three areas I outlined at the beginning of this section with the very real institutional constraints we face? Maid and Gruber and many others in the "Tenure 2000" special issue of Computers and Composition argued that senior colleagues in the field must teach our colleagues in English departments about the value of our work despite its difference from the traditional literary scholarship with which they are familiar. Maid (2000) wrote: "I feel the single best way to legitimatize our professional work to our literary colleagues is for senior faculty in the field to be vocal, to publish in the same electronic venues that junior faculty are publishing in, and, most of all, to become advocates for junior faculty throughout the profession." (p. 17) Maid, therefore, placed the onus on us to teach the uninformed about our value. While this is certainly a useful suggestion, it assumes a static dichotomy between "literary" and "rhetoric and composition" scholars. From my experience in my own department, as well as from colleagues' experiences at other universities, the dichotomy between the various areas within English, while still present, is being troubled and questioned. Senior faculty in rhetoric and composition sit on department personnel and university promotion and tenure committees. The value of technological literacies is more widely accepted and encouraged by our colleagues in other areas as well as by administrators who not only want us to run labs but who see the ways in which our work can bring in grant monies and national recognition. Articles on technology are increasingly being published in non-technology journals like CCC and College English, suggesting that our research is receiving a larger space even within rhetoric and composition. While we certainly have to continue to teach our colleagues about the value of our work, I would encourage us to see that audience as more receptive and certainly more diverse in their own research methods than they are often painted. We also need to recognize that some within our own field of computers and composition are classically trained humanities faculty, and they need to be informed of the new waves of technology and writing as well.

Further, in an effort to revise traditional conceptions of disciplinary research, many universities are moving toward units and schools that actively build connections across disciplines. We are ahead of the game in this arena because we have typically been multi-disciplinary. We can – and should – have an active role in these transformations, both in our committee work ("service") and in our research agendas. There is institutional support – in terms of course releases, faculty leave time, and financial support – that we can draw upon to further our goals and research. Through interactions with colleagues in other departmental specialties, we can learn more about quantitative research and build more connections between it (in all its forms) and our theoretical and case study work in order to answer the larger questions facing our field now. In this current moment, we need more studies like Hewett's multimodal one because they will help us arrive at large-scale assessments of the impacts of various types of technologically mediated learning environments and will provide a long-lasting way to assess the ongoing changes that occur as our cultural understandings of technology increase and shift. I am not suggesting that this shift will be without its challenges; I am, however, suggesting that the road is more paved than we might think.

Patricia Webb ([email protected]) is an Associate Professor at Arizona State University. Her current research projects include a large-scale assessment of first-year composition students' perceptions of their learning in online and hybrid environments along with a research project that examines how students actually use the comments instructors give them on first drafts of their writing projects. Her most recent article, "Feminist Social Projects: Building Bridges between Communities and Universities"—on which she collaborated with two of her graduate students, Kirsti Cole and Thomas Skeen—will appear this winter in College English.

References

Brady, Laura. (2001). Fault lines in the terrain of distance education. Computers and Composition, 18, 347–358.

Burton, Vicki Tolar, & Chadwick, Scott. (2000). Investigating the practices of student researchers: Patterns of use and criteria for use of Internet and library sources. Computers and Composition, 17, 309–328.

Grabill, Jeffrey. (2003). On divides and interfaces: Access, class, and computers. Computers and Composition, 20, 455–472.

Gruber, Sibylle. (1995). Re: Ways we contribute: Students, instructors, and pedagogies in the computer-mediated writing classroom. Computers and Composition, 12, 61–78.

Gruber, Sibylle. (2000). Technology and tenure: Creating oppositional discourse in an offline and online world. Computers and Composition, 17, 41–56.


Harrington, Susanmarie, Shermis, Mark, & Rollins, Angela. (2000). The influence of word processing on English placement test results. Computers and Composition, 17, 197–210.

Hewett, Beth. (2000). Characteristics of interactive oral and computer-mediated peer group talk and its influence on revision. Computers and Composition, 17, 265–288.

Kirsch, Gesa, & Sullivan, Patricia (Eds.). (1992). Methods and methodology in composition research. Carbondale, IL: Southern Illinois UP.

Kirtley, Susan. (2005). Students' views on technology and writing: The power of personal history. Computers and Composition, 22, 231–238.

Longo, Bernadette, Reiss, Donna, Selfe, Cynthia L., & Young, Art. (2003). The poetics of computers: Composing relationships with technology. Computers and Composition, 20, 97–118.

Maid, Barry. (2000). Yes, a technorhetorician can get tenure. Computers and Composition, 17, 9–18.

Miller, Susan. (2001). How near and yet how far? Theorizing distance teaching. Computers and Composition, 18, 321–328.

Moran, Charles. (1998). From a high-tech to a low-tech writing classroom: "You can't go home again." Computers and Composition, 15, 1–10.

Reynolds, Thomas, & Bonk, Curtis Jay. (1996). Facilitating college writers' revisions within a generative-evaluative computerized prompting framework. Computers and Composition, 13, 93–108.

Rice, Jeff. (2003). Writing about cool: Teaching hypertext as juxtaposition. Computers and Composition, 20, 221–236.

Russell, Michael, Bebell, Damian, Cowan, Jennifer, & Corbelli, Mary. (2002). An AlphaSmart for each student: Do teaching and learning change with full access to word processors? Computers and Composition, 20, 51–76.

Selfe, Cynthia. (1999). Technology and literacy in the twenty-first century: The importance of paying attention. Carbondale, IL: Southern Illinois UP.

Sullivan, Patricia. (2001). Practicing safe visual rhetoric on the World Wide Web. Computers and Composition, 18, 103–121.

Wickliff, Greg, & Yancey, Kathleen Blake. (2001). The perils of creating a class web site: It was the best of times, it was the… Computers and Composition, 18, 177–186.

Wysocki, Anne, & Johnson-Eilola, Johndan. (1999). Blinded by the letter. In Gail Hawisher & Cynthia Selfe (Eds.), Passions, pedagogies, and 21st century technologies (pp. 349–367). Urbana, IL: National Council of Teachers of English.

Yagelski, Robert, & Grabill, Jeff. (1998). Computer-mediated communication in the undergraduate writing classroom: A study of the relationship of online discourse and classroom discourse in two writing classes. Computers and Composition, 15, 11–40.

Yagelski, Robert, & Powley, Sarah. (1996). Virtual connections and real boundaries: Teaching writing and preparing writing teachers on the Internet. Computers and Composition, 13, 25–36.