Computers and Composition 54 (2019) 102525. https://doi.org/10.1016/j.compcom.2019.102525
Online Public Spheres in the Era of Fake News: Implications for the Composition Classroom

Dan Ehrenfeld (Stockton University, School of General Studies, 101 Vera King Farris Drive, Galloway, NJ 08205-9441; [email protected]; corresponding author)
Matt Barton (St. Cloud State University, Department of English, 720 4th Avenue South, St. Cloud, MN 56301-4498; [email protected])
Abstract

This article revisits Matt Barton's 2005 article "The Future of Rational-Critical Debate in Online Public Spheres" in light of recent debates around misinformation and disinformation, data-driven influence campaigns, the blurring line between social media and news media, and the algorithmic incentivization of "fake news." While today's social media platforms exhibit many of the qualities that C.W. Mills and Jürgen Habermas associate with a healthy public sphere—communication between strangers is participatory, immediate, accessible, and decentralized—this article raises questions about the extent to which everyday digital writing and circulation practices align with broader democratic aspirations. The goal of this article is to explore not only what these social and technological developments mean for the health of public discourse, but also how we, as teachers of writing, can meaningfully engage with them in our classrooms. An appendix includes ideas for assignments that engage students in critical reflection about their own participation in today's online public spheres.

Published by Elsevier Inc.

Keywords: fake news; public sphere; social media; composition; misinformation; disinformation; critical thinking; media literacy
For those who study emerging writing practices, public sphere theory offers a useful framework for considering the historically-contingent ways that writers come together as publics within a changing media landscape. The technological practices of the print revolution, for example, enabled far-flung citizens to imagine one another as interlocutors through the production, circulation, and uptake of printed texts. Later, with the rise of mass media, the commercialization and centralization of news sources threatened to reduce citizens' ability (or even desire) to challenge official narratives, ushering in the forms of publicness that characterized the twentieth century. And while writing technologies have always played a constitutive role in the formation of publics, we believe that it is especially important today, as the line between social media and news media continues to blur, to take stock of the ways that changes in the communication landscape have affected how strangers become "public" with one another. In 2005, when the social web was still in its infancy, Matthew Barton's "The Future of Rational-Critical Debate in Online Public Spheres" considered this very question. In that piece, Barton explored the extent to which the internet might offer writers access to—and empowerment within—the collective social space of the public sphere.
He concluded that wikis, blogs, and online discussion boards did indeed have the potential to facilitate the creation of what Jürgen Habermas (1991) termed a "rational-critical public sphere" by simulating the functions of the salons, table societies, and coffee shops within which Europe's eighteenth-century bourgeois class came to understand itself as a public. Barton asked, as well, how educators (particularly those of us teaching writing) could encourage students to maintain a strong sense of self, evaluate information critically, and consider the broader civic implications of material they create and share online. The goal was to reinforce the values and practices that would lead to what Mills (1999) termed a "public" rather than a "mass."

More than a decade later, it is clear that other compositionists have shared Barton's interest in the emerging writing platforms of the social web. Compositionists have studied, for example, how social media platforms become spaces for identity performances (Buck, 2012; DeLuca, 2015; Sparby, 2017), how platforms facilitate mass mobilization (Hayes, 2017; McVey & Wood, 2016; Penney & Dadas, 2014), how platform design shapes user experiences, how platforms have influenced the ways that writers communicate (Swartz, 2013; Vie, 2008), how gender identification affects social media usage (Shepherd, 2015a), and how mobile platforms enable students to negotiate transnational spaces (Monty, 2015). In addition, we have considered the pedagogical potential of social media platforms. For example, we have investigated the ways that social media platforms might help students develop rhetorical capabilities (Balzhiser et al., 2011; Fife, 2010; Shepherd, 2015b), reflect about identity performances (Maranto & Barton, 2010), critique the ways that platform design shapes communication (Coad, 2013), and develop writing knowledge that might transfer to academic contexts (Head, 2016; Shepherd, 2018). Douglas M. Walls and Stephanie Vie's (2017) collection Social Writing/Social Media: Publics, Presentations, and Pedagogies and Dustin W. Edwards and Bridget Gelms's (2018) special issue of Present Tense ("Special Issue on the Rhetoric of Platforms") are recent publications that have extended these investigations.

Despite this increased attention to the practices that characterize the social web, and despite the significant changes that the internet has undergone, we find ourselves asking many of the same questions that Barton asked in 2005: What are the parallels between today's most popular writing environments and the deliberative spaces that characterized eighteenth-century Europe? If society's fate is determined in part by "the shape of our tools" (Feenberg, as cited in Barton, 2005), to what extent do today's digital tools prepare citizens to engage in deliberation and collective action? And how can we incorporate these tools and environments into our pedagogies in ways that will help students develop their voices, strengthen their voices in rhetorical arenas, and fuse their voices to others who are committed to social action? Ultimately, we believe that these are questions not only about new tools and environments but also about the public sphere. As Howard Rheingold writes in Net Smart (2014), "The public sphere is a theory about what is, at its base, a simple question: Am I going to act as if citizens acting in concert can wield any power to influence policy? Or am I going to leave my liberty to others?" (p. 242).
As conscientious teachers and scholars of writing, few of us would agree with Rheingold that this is a "simple" question; it is actually a very complex and often baffling question of agency. The question "Whose public sphere?" looms large. Like Rheingold, however, we argue that while we should avoid utopian as well as dystopian thinking on these questions, we can indeed improve the public sphere(s) "through our actions" (p. 242). Furthermore, our status as teachers and scholars of composition and rhetoric may uniquely empower us to do so.

Though some have rejected the term "fake news" as a vague and unhelpful conflation of multiple, distinct phenomena, we find it to be a useful umbrella term, a shorthand way to reference a broad array of practices and phenomena that characterize our public sphere today. In considering this array of practices and phenomena, we build upon work in the field that has examined related topics—government virtualpolitik (Losh, 2009), "post-truth" rhetoric (McComiskey, 2017; Cloud, 2018), Donald Trump's rhetorical style (Skinnell, 2018), and the relationship between literacy and fake news (Miller and Leon, 2017; Craig, 2017; Laquintano and Vee, 2017; Minnix, 2017; Riche, 2017). In this article, our goal is not to consider all of the phenomena that might fall under the umbrella of "fake news." Instead, we focus our attention on three tendencies in online public spheres that suggest the emergence of a "mass" society at odds with the principles of democratic deliberation and collective action: the blurring of the line between journalism and social media chatter, the incentivization of "fake news" over news, and the use of engagement metrics to enact new forms of mass persuasion.

Through a discussion of these tendencies, we aim to pave the way for further research about the ways that today's emerging writing practices support or hinder the creation of a healthy public sphere. In addition, we aim to demonstrate that this area of research has important pedagogical implications.
When we ask our students to consider misinformation, disinformation, and networked persuasive practices of all kinds, we not only teach them to be wary of "fake news" stories—we also encourage them to reflect about the ways that their everyday writing and circulation practices align with broader democratic aspirations. To consider ways that we might pursue this goal, we believe that it is worth returning to Mills's definitions of a "public" and a "mass" to assess the extent to which the web has fulfilled its promise as a public. According to Mills (1999), a public exhibits the following criteria:

(1) Virtually as many people express opinions as receive them. (2) Public communications are so organized that there is a chance immediately and effectively to answer back any opinion expressed in public. Opinion formed by such discussion (3) readily finds an outlet in effective action, even against—if necessary—the prevailing system of authority. And (4) authoritative institutions do not penetrate the public, which is thus more or less autonomous in its operation. (p. 249)

In contrast, Mills writes that in a mass

(1) far fewer people express opinions than receive them; for the community of publics becomes an abstract collection of individuals who receive impressions from the mass media. (2) The communications that prevail are so organized that it is difficult or impossible for the individual to answer back immediately or with any effect. (3) The realization of opinion in action is controlled by authorities who organize and control the channels of such action. (4) The mass has no autonomy from institutions; on the contrary, agents of authorized institutions penetrate this mass, reducing any autonomy it may have in the formation of opinion by discussion. (p. 249)

When we return to Mills's (1999) definitions of a mass and a public, we see that today's digital landscape exhibits tendencies of both. On the one hand, it is clear that social media is much more participatory than the mass media of the previous century. Social media platforms offer opportunities for multidirectional discourse, often built upon interfaces that enable thousands of participants to "answer back" opinions expressed in public—to post, like, upvote, favorite, retweet, flag, edit, or comment upon texts written by others. In addition, these platforms offer opportunities to gather with strangers, and to respond to distant strangers "immediately and effectively," expanding the scope of social movements and providing an "outlet in effective action" that has manifested in the mass mobilizations of the Arab Spring, the Occupy movement, the Black Lives Matter movement, the Women's March, and the March for Our Lives, to name just a few. Lastly, today's digital landscape offers writers spaces seemingly free from constraints on the ways that they might interact with one another, leading to the creation of communities that often feel autonomous because the roles that authoritative institutions play in shaping these spaces are not apparent to most users. In other words, social networking sites, microblogging platforms, and smartphones contribute to the creation of discursive spaces that do seem to resemble publics, as Mills defines them.

On the other hand, we should recognize that Mills's mass/public dichotomy reflects the democratic hopes and fears of the historical moment in which he wrote, when mass media broadcasting's unidirectional flow of opinion and information appeared to be the greatest obstacle to democratic deliberation.
It may be that today's communication landscape, as much as it seems to exhibit the qualities of a public, has birthed a new kind of mass whose characteristics we are only beginning to understand. Even since 2005, our communication landscape has changed dramatically, raising questions about the criteria according to which we judge the health of the public sphere. Today, we are concerned less about the passive role that citizens play in a mass-mediated communication landscape. We are concerned, instead, about how engagement metrics incentivize the proliferation of "fake news" over news, how our semi-public communications are mined for value and monetized in less transparent ways each year, how our voices are transformed into "content" that can be delivered to increasingly polarized "echo chambers" via opaque algorithms, how psychometric data gleaned from these communications is used to manipulate public opinion, how AI-powered video and audio hoaxes can be used to deceive, how social media platforms facilitate coordinated campaigns of harassment and abuse, and how the line between professional journalism and social media chatter continues to be blurred.

We are concerned, also, about the ways that social media platforms conceal the interference of authoritative institutions. Today, much of our political discourse takes place not in autonomous spaces, but via platforms designed to maximize intrusion. The most popular of these platforms are essentially advertising ecosystems that work to monopolize attention and engagement.
In addition, we are concerned about the ways that social media platforms give government agencies the tools to collect data about communication networks, to enact overly intrusive law enforcement practices, to weaken social movements through the analysis of location data, and even to detain travelers at U.S. borders.

If we want to consider the potential for writers to come together—as autonomous publics—to express and receive opinions, form opinions, and take action in the public sphere, then we need to continually investigate the ways that our communication landscape supports or hinders these democratic aspirations. In this article, we revisit the goal that motivated Barton in 2005; namely, to consider the ways that compositionists might incorporate online practices into the classroom as a means of "reinforc[ing] the principles inherent in a true democracy" (p. 178). Ultimately, we argue that while today's most popular digital platforms exemplify Mills's vision of a "public," these platforms have simultaneously facilitated the creation of a new kind of "mass" that foundational theories of the public sphere do not account for. In the following sections, we discuss three tendencies that characterize this "mass" society today: the blurring of the line between journalism and social media chatter, the incentivization of "fake news," and the emergence of new forms of mass persuasion.

The Blurring Line Between Journalism and Social Media Chatter

In the early 2000s, Barton—along with Lawrence Lessig (2004) and many others—was optimistic about the possibility that bloggers and wiki editors could resist the control that corporate media outlets held over the news. In some ways, these trends toward "citizen journalism" seemed like a substantial step toward the type of unfiltered access to information championed by Habermas, Mills, and other public sphere theorists. Even if individual news items proved to be false—or if their independent status came into question, as with the faking of grassroots origins, or "astroturfing"—it was argued that citizen journalism would provide raw material for professional journalists, who would be trained and equipped to separate the wheat from the chaff. Alarmingly, however, the line between amateur social media writing and journalism has blurred in ways that threaten journalistic standards.

Today, the use of social media platforms to simultaneously compose, disseminate, consume, and debate issues of common concern suggests nothing less than the emergence of a new kind of public sphere. Because of the speed of information distribution on social media, news stories can often appear as a flood of disparate tweets long before they make the rounds on television or radio news channels (or even news websites). This spreadability allows a text to move from Twitter to the many semi-private discourses of Facebook friend circles and groups, to mainstream news outlets, and to "echo chamber" ecosystems of blogs and online magazines. In short, rather than neatly divided private and public spheres, we end up with a blurry Venn diagram, or even a rhizomatic structure where traditional demarcations between public and private are, at best, temporary and arbitrary.

In this modern media ecosystem, tweets and Facebook posts are often incorporated into mainstream news broadcasts with minimal vetting. One heinous example was when CNN, Fox News, and other outlets broadcast photos and personal details about Ryan Lanza, whom they wrongly identified as the shooter at an elementary school shooting in Newtown, Connecticut (the actual shooter was his brother, Adam Lanza).
The error was made by a law enforcement official who accidentally transposed the brothers' names in a report, injuring the reputation of many prominent outlets and turning Ryan Lanza into "a hit-and-run victim of the media's desire for speed in disseminating information" (Hill, 2012). Social media "creates a world in which we are watching the investigation—and reporting—unfold in real time," wrote Ben Smith and Chris Geidner (2012) of BuzzFeed. "That's confusing and messy, and many thousands of people were led to believe that the wrong man was the suspect in a horrific crime." Stopping short of accepting blame and apologizing for their role in the "mess," the reporters did promise to be more "transparent" in the future about where they receive their information.

While we have traditionally associated alternative media with those who have been silenced or marginalized due to lack of resources, even the President of the United States—supposedly the most powerful person in the world—claims that he must take to Twitter to avoid filtering, distortion, or censorship by "fake news." Trump, who "doesn't do the email thing," freely uses Twitter to bypass allegedly biased news agencies (Barbaro & Eder, 2015). Indeed, President Trump has used Twitter on repeated occasions to bash The New York Times, CNN, the BBC, and other mainstream news outlets (Khazan, 2017). Christiane Amanpour (2016), CNN's chief international correspondent, has declared an "existential crisis" for journalism, and fears that American reporters critical of the administration may soon be as "delegitimized" and vilified as those in Egypt, Turkey, and Russia. The rise of social media platforms and the simultaneous gutting of the journalism industry have set the stage for this "existential crisis."
In 2005, when Barton wrote "The Future of Rational-Critical Debate in Online Public Spheres," Twitter did not exist. Facebook was only a year old and was limited to college students. In the past 14 years, these platforms have amassed enormous numbers of users (Facebook's user base is now larger than any country on Earth, and will soon surpass "Christians" as the largest non-biological category of humans) (Read, 2017). More importantly, however, social media platforms have emerged as new kinds of media environments entirely. The ambiguous roles that these platforms play—as media companies, as technology companies, as personal publishing platforms, as commercial marketplaces, as social networks—have been central to their success. But this ambiguity has also helped them skirt journalistic standards that have defined the modern media landscape. As Tim Wu writes, Facebook has avoided the strict regulatory environment within which the media companies of the mid-twentieth century operated. By claiming to be "a mere middleman for information passed between others," Facebook has been able to "expand into every corner of our lives without government interference" (quoted in Read, 2017).

During Facebook's early years, according to employee Kate Losse, Zuckerberg tended to speak about information in ways that were "so neutral it was almost weird." His mantra, Losse says, was "I just want to create information flow" (Read, 2017). And while Zuckerberg has consistently denied that Facebook is a media company (instead preferring to call it a technology company), he found it necessary to publicly clarify its mission in the aftermath of the 2016 U.S. presidential election, during which Facebook was blamed for the dissemination of targeted misinformation. In a 2017 manifesto titled "Building Global Community," Zuckerberg positioned social media as the next stage in a long history of humans forming new kinds of communities. Human history, as he described it, is "the story of how we've learned to come together in ever greater numbers—from tribes to cities to nations" (quoted in Read, 2017). At its best, Zuckerberg argues, the Facebook platform can facilitate the emergence of this "global community" by "giv[ing] people the power to build community and bring the world closer together" (quoted in Read, 2017).

Despite Zuckerberg's attempts to downplay Facebook's role as a media company, and to instead emphasize its ability to create information flow or foster strong communities, the fact remains that the platform is the top source of political news for millennials (Griffith, 2017). In an era when "Americans' trust and confidence in the media 'to report the news fully, accurately, and fairly' has dropped to its lowest level in Gallup polling history" (Swift, 2016), many have embraced social media as a necessary alternative to the media. It is not difficult to see how, especially in the current political climate, a strong public is needed to critically and rationally challenge hegemonic discourses and assess the accuracy of breaking news. But the rise of social media platforms and the delegitimization of professional journalism raise questions about the ways that we, as compositionists, teach our students to assess the accuracy of information sources. If separating truth from fiction is too difficult for experienced and well-trained journalists working at CNN and The New York Times, what should we expect from the students in our first-year writing courses?
While we rightly chastise journalists for failing to vet their sources and for filtering content that challenges corporate interests, we also need to ask our students to reflect about the information that comes to them, "liked" and "shared," from trusted friends and family. As the line between professional journalism and citizen writing blurs, we are concerned about the health of the public sphere. But we also believe that it is worth considering the opportunities that this "existential crisis" might offer our students. As David Riche (2017) writes, there may be pedagogical value in asking students to engage in reflection about the thorny questions of credibility and vulnerability that "fake news" stories raise. In addition, we believe that the era of "fake news" offers students opportunities to connect their everyday writing and circulation practices to the health of the broader news ecosystem. To spur this kind of reflection, we might ask our students to articulate their own ethical commitments to the transformation of the public sphere, and to enact these ethical commitments through their own writing.

Whatever classroom approaches we take, we need to encourage our students to ask serious questions about the neutrality of the news ecosystems that are replacing twentieth-century journalism. In particular, we believe that the dominance of personalized news feeds tied to engagement metrics has already begun to affect the ways that we write and circulate texts in the public sphere. In the following section, we consider this algorithmic incentivization of "fake news" in greater depth, arguing that it is a second "mass" society tendency that threatens our ability to engage in responsible information-vetting practices.

The Incentivization of "Fake News" Over News

As teachers of composition are well aware, one can easily find slickly-produced web sources to support conspiracy theories, fabricated news stories, viral misinformation, propaganda, pseudoscience, and flimflam of all kinds.
As Claire Wardle and Hossein Derakhshan (2017) write, an analysis of this emerging landscape of "information disorder" (a term that they use in place of the term "fake news") requires that we differentiate between three related phenomena. They define these phenomena as follows:

• Mis-information is when false information is shared, but no harm is meant.
• Dis-information is when false information is knowingly shared to cause harm.
• Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.

While much of our public discourse about "fake news" has focused on the difference between fact and fiction, Wardle and Derakhshan's focus on intent paves the way for more nuanced understandings of the ways that misinformation, disinformation, and malinformation proliferate in online public spheres. Their definitions can help us consider, for example, the 2016 phenomenon of "Pizzagate," a conspiracy theory alleging that Hillary Clinton was running a child sex ring out of a pizza shop in Washington D.C. This conspiracy theory was actually rooted in a malinformation campaign, relying largely upon authentic emails obtained through a phishing operation that targeted John Podesta, chairman of Clinton's presidential campaign. The true genesis of the Pizzagate phenomenon, however, was a disinformation campaign. According to BuzzFeed's Craig Silverman (2016), the conspiracy theory was the brainchild of an unknown white supremacist on Twitter. Masquerading as a Jewish lawyer named David Goldberg from New York, this disinformant cited Podesta's emails and a screenshot of a Facebook post from one "Carmen Katz," who claimed to have a source in the New York Police Department.

The Pizzagate story really began to pick up steam, however, when users propagated it as misinformation. Pizzagate stories ended up on conspiracy theory forums, and later on Facebook, where they immediately went viral, being shared by hundreds of thousands of believers just weeks before the U.S. presidential election. Citing the work of Yochai Benkler, Max Read (2017) discusses how sites such as TruthFeed, Infowars, and Breitbart created an "attention backbone . . . through which conspiracy-mongering and disinformation traveled up to legitimating sources and with which extreme actors could set the parameters of political conversation." Compounding the problem was the fact that writers continued to contribute their own fabrications and alleged evidence, fueling an out-of-control misinformation epidemic with more disinformation.

Disentangling the intertwined problems of misinformation, disinformation, and malinformation that emerge during the life cycle of a writing event such as Pizzagate, we argue, is a first step toward curbing the influence of "fake news." The next step, however, is a harder one. Since the 2016 U.S. election, Facebook has done much to publicize their attempts to wage this battle. In a document called "Information Operations and Facebook," Jen Weedon, William Nuland, and Alex Stamos (2017) write, "Given the increasing role that Facebook is playing in facilitating civic discourse, we wanted to publicly share what we are doing to help ensure Facebook remains a safe and secure forum for authentic dialogue" (p. 3). The document describes three forms of malicious "information operations" that Facebook has observed, including targeted data collection, content creation, and false amplification (p. 6).
The most relevant of these for the present discussion is "false amplification," or "coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion or amplifying sensationalistic voices over others)" (p. 6). To thwart these efforts, Facebook has developed algorithms that remove such accounts before they can have a major impact. Most recently, these algorithms were employed to remove 30,000 fake accounts in France that threatened the integrity of the presidential election (Shaik, 2017).

As admirable as Facebook's efforts may be, it seems unlikely that an algorithm can solve the greater problem: the "logic" (or lack thereof) that leads so many to eagerly pass along dubious information to friends, family, and strangers. In addition, there is reason to be critical of the idea that social media platforms can be trusted to solve the problem of "sensationalistic voices." As Zeynep Tufekci (2018) writes, YouTube's recommender algorithm may be designed with a "bias toward inflammatory content." As she explains, a "nexus of artificial intelligence and Google's business model" has resulted in a kind of "computational exploitation," leading users down "a rabbit hole of extremism" by feeding them increasingly sensational material. In a recent tweet, Chris Hayes (2018) called YouTube's algorithm "informationally toxic." As he demonstrated, a high school freshman using the platform to research the Federal Reserve would immediately be fed "conspiratorial quackery" about the Illuminati and Marxist infiltration of the media.

Max Fisher and Amanda Taub (2018) recently addressed similar concerns about Facebook, highlighting three ways that the platform promotes extremism.
While Mark Zuckerberg has publicized Facebook's efforts to combat extremism in its most blatant forms—for example, by removing messages intended to "incite real harm" against Myanmar's Muslim minority population—Fisher and Taub demonstrate that the platform has done little to address two other causes of extremism. First, Facebook has said little about the ways that social media platforms might incentivize extremist tendencies in digital writing. As Fisher and Taub argue, the dopamine boost provided by likes and comments may unconsciously "train" or condition users to create content rooted in primal emotions such as anger and fear, which tend to engage more readers. Second, Facebook downplays the extent to which their platform's own algorithms amplify sensational content. Because Facebook's news feed is designed to offer users content deemed likely to engage them, and because Facebook holds a treasure trove of user data with which to personalize this process, it is likely that users who are drawn to extremist narratives will be continually fed more of the same—"negative, primal" content rooted in anger and fear (Fisher & Taub, 2018). In other words, while Facebook has focused on rooting out "inauthentic" and "malicious" accounts, it has done little to acknowledge the ways that its product—by its very design—incentivizes the spread of misinformation, disinformation, and malinformation.

If we aim to engage our students in the digital literacies that characterize the modern era, then social media undoubtedly has a place in our writing pedagogies. What concerns us, however, is the extent to which social media has come to be seen as today's "public square." As this brief discussion has demonstrated, platforms such as Facebook, Twitter, and YouTube are not neutral spaces in which citizens gather to engage in conversation. On the contrary, these platforms offer users highly contrived, personalized media experiences designed to serve the needs of advertisers. When we ask our students to write on these platforms, we believe that we should also ask them to think critically about the difference between "engagement"—a Silicon Valley euphemism—and civic engagement. Failing to attend to both kinds of engagement, we argue, leaves our students unprepared to reflect about the productive ways that social media engagement can coexist with civic engagement. In addition, ignoring the dynamics of digital "engagement" leaves our students vulnerable to new forms of mass persuasion that can only be understood through reference to the data-driven economy of the modern social web. In the following section, we turn our attention to these new forms of persuasion—a third tendency that we believe is contributing to a new kind of "mass" society.
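The feedback loop that Fisher and Taub describe can be made concrete for students with a simulation run in a few lines of code. The sketch below is a toy model of our own devising, not any platform's actual ranking system. It assumes only that each post has an "arousal" level (how strongly it provokes anger or fear), that higher-arousal posts are somewhat more likely to be clicked, and that a post's visibility grows with the clicks it has already received:

    import random

    random.seed(42)

    # Toy model: each post has an "arousal" level (how strongly it provokes
    # anger or fear) and an engagement score that determines its visibility.
    posts = [{"id": i, "arousal": random.random(), "score": 1.0} for i in range(50)]

    def click_probability(post):
        # Modeling assumption: higher-arousal content is clicked more often.
        return 0.1 + 0.8 * post["arousal"]

    for _ in range(200):
        # The feed samples ten posts with probability proportional to score,
        # so early clicks buy future visibility (a rich-get-richer dynamic).
        shown = random.choices(posts, weights=[p["score"] for p in posts], k=10)
        for post in shown:
            if random.random() < click_probability(post):
                post["score"] += 1.0  # each click further boosts the post's rank

    top = sorted(posts, key=lambda p: p["score"], reverse=True)[:10]
    print("mean arousal, all posts:", sum(p["arousal"] for p in posts) / len(posts))
    print("mean arousal, top of feed:", sum(p["arousal"] for p in top) / len(top))

Run repeatedly, the top of this simulated feed drifts well above the average arousal of the posts as a whole, even though no participant ever chooses sensationalism; the drift is a property of the scoring loop itself. Students can vary the assumptions (the click function, the feed size, the number of rounds) to test how robust the effect is.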
New Forms of Mass Persuasion

In addition to teaching our students to evaluate the credibility of individual texts—something that has been an important part of writing instruction since long before the rise of networked communication—we believe that it is important to engage them in reflection about larger patterns of mass persuasion that characterize online public spheres. Today's practices of "astroturfing," "sockpuppeting," "cognitive infiltration," "cognitive hacking," "automated laser phishing," "trolling," "shilling," "opinion spamming," "psychological targeting," "memetic warfare," "flooding the zone," "computational propaganda," "shadow banning," and "brigading"—whether sponsored by states, political parties, organizations, or individuals—are digital equivalents of writing practices that pre-date the digital era. But in the data-driven, viral contexts of today's social web, we believe that these practices signal a new frontier of persuasion, raising questions about the ways that writing can be used to manipulate the public sphere on a large scale. In this section, we discuss two forms of mass persuasion that are reshaping online public spheres—digital information warfare and data-driven influence campaigns. In addition, we consider the pedagogical implications of these new forms of mass persuasion.

One thing linking many of these new forms of persuasion, we argue, is that they depend less upon person-to-person dialogue and more upon strategic messaging, amplification, and coordination. In other words, these new forms of persuasion are akin to what governments call "information warfare" or "information operations." Recently, reporting about Russian "troll farms" has considered the extent to which coordinated efforts to spread disinformation may have affected the outcome of the 2016 U.S. election (Kim, 2017). As Weedon et al. write, fake news disseminators no longer rely on simple (and easily detected) "bots," which automatically "spam" or repeat the same message across multiple Facebook accounts. Instead, the authors detected a much more sophisticated operation:

We have observed many actions by fake account operators that could only be performed by people with language skills and a basic knowledge of the political situation in the target countries, suggesting a higher level of coordination and forethought. (p. 9)
In addition to promoting (or denigrating) a single cause or issue, Weedon et al. describe well-coordinated efforts that "actively engaged across the political spectrum with the apparent intent of increasing tensions between supporters of these groups and fracturing their supportive base" (p. 8). While Russian trolls were using these state-sponsored information warfare tactics in the run-up to the 2016 election, everyday writers were engaging in similar practices in the U.S. As Ben Schreckinger (2017) writes, many of Donald Trump's online supporters saw themselves as "meme warriors" whose job it was to naturalize far-right positions in the minds of the broader public. Working within a viral ecosystem composed of message boards, social media platforms, and news outlets, these writers "took their intimate knowledge of this ecosystem and weaponized it." They designed "supermemes" that they knew would gain traction in specific communities. They endlessly repeated sensational claims about Hillary Clinton that mainstream news outlets would be hungry to amplify. They created fake Twitter accounts to maximize the reach of their content. They gamed the system on sites like Reddit, ensuring that their content would be more visible and lasting. And they organized "raids" against their enemies—coordinated harassment campaigns that forced Clinton supporters to gather in more private forums.

These organized attempts to "game" the public sphere have recently revived interest in an information warfare concept known as the Overton window. The term, coined by right-wing think-tank leader Joseph Overton in 1994, refers to the fact that "in a given public policy area . . . only a relatively narrow range of potential policies will be considered politically acceptable" ("The Overton Window"). In effect, the concept of the Overton window redefines the battleground of politics—it encourages writers to focus less on persuading audiences directly and more on shifting the parameters of mainstream public discourse. In the run-up to Donald Trump's presidency, for example, a number of white nationalists saw it as their mission to strategically shift this "window." After the election, Vox Day wrote, "Donald Trump has a lot to do . . . It is the Alt-Right's job to move the Overton Window and give him conceptual room to work" (quoted in Marantz, 2016). Much of this work has been done, of course, through the folk literacies of the internet. White nationalists have "flooded the Internet with offensive images and words—cartoon frogs emblazoned with swastikas, theories of racial hierarchy—and then ridiculed anyone who had the temerity to be offended" (Marantz, 2016). The more outrageous this writing—much of it delivered with a tongue-in-cheek sensibility—the more it has worked to normalize the fixations of white nationalists. And while attempts to influence politics through cultural messaging are as old as time (as Andrew Breitbart often said, "Politics is downstream from culture" [quoted in Poniewozik, 2012]), we believe that it is worth considering the extent to which digital platforms have ushered in a new era of mass persuasion. What has changed today, we argue, is the broad availability of the tools with which everyday writers can engage in acts of information warfare, working in coordinated ways to represent the public sphere, or some version of it, back to participants themselves.
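Weedon et al.'s point about simple "bots" is worth pausing over with students, because it shows why the crudest form of false amplification, repeating one message across many accounts, is also the easiest to catch. The sketch below is our own illustration of that naive detection logic, not Facebook's actual system:

    import re
    from collections import defaultdict

    def normalize(text):
        # Lowercase, strip punctuation, and collapse whitespace so trivial
        # edits ("puppies!!!" vs. "puppies.") do not hide textual reuse.
        text = re.sub(r"[^a-z0-9 ]", "", text.lower())
        return " ".join(text.split())

    def flag_amplification(posts, min_accounts=3):
        # posts: iterable of (account_id, text) pairs. Returns message
        # clusters repeated verbatim (after normalizing) by many accounts.
        clusters = defaultdict(set)
        for account, text in posts:
            clusters[normalize(text)].add(account)
        return {msg: accounts for msg, accounts in clusters.items()
                if len(accounts) >= min_accounts}

    sample = [
        ("acct1", "Candidate X hates puppies!!!"),
        ("acct2", "candidate x hates puppies"),
        ("acct3", "Candidate X hates puppies."),
        ("acct4", "I disagree with Candidate X on trade policy."),
    ]
    print(flag_amplification(sample))

Running this on the sample posts flags "candidate x hates puppies" as coordinated across three accounts. The lesson of the Weedon et al. report is that such checks no longer suffice: operators who paraphrase their talking points in fluent, context-aware language defeat string matching entirely, which is why human-run influence operations are so much harder to police than automated spam.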
As we have argued, college students armed with good critical thinking skills and a healthy dose of skepticism could do much to combat the spread of fake news on social media and elsewhere.

A second feature that distinguishes this new era of mass persuasion is the use of emerging data mining practices. For the majority of our undergraduate students, scandals about the use of social media data have been part of the news landscape since they were in elementary school. In 2007, for instance, there was significant public outcry when Facebook introduced Beacon, a marketing program that automatically shared information such as what movie tickets users had purchased from Fandango (Facebook shuttered its Beacon system two years later) (Ortutay, 2009). And while rumors of a "Facebook Drug Task Force" turned out to be rooted in a hoax, police have in fact used Facebook to bust underage drinkers (Aujla, 2009) and infiltrate gangs (Broussard, 2015). Unsurprisingly, incidents such as these have prompted many users to look for more clandestine apps to communicate with their friends. Snapchat, which offers "off-the-record" messaging, has spiked in popularity, becoming the third most popular social network among millennials (Perez, 2014). Though Snapchat boasts about how it routinely purges its servers of the data generated by users, a convenient workaround allows its advertisers to exploit the data collected by other networks. As Jim Edwards (2013) writes, "All Snapchat needs to do is sync its application programming interface with Facebook or Twitter and advertisers can use that data to target users inside Snapchat." For much the same reason that Habermas feared the commercial sector's intrusion into the public sphere, we (and our students) should be aware of the sophisticated ways that our data is being used by today's most popular social media platforms.

In recent years, compositionists have done much to explore the ways that these concerns about digital privacy intersect with writing pedagogy. We have considered the ways that students manage privacy settings (Swartz, 2013; Shepherd, 2015b), the ways that educators might engage students in reflection about the data that constitutes their "invisible digital identities" (Beck, 2015), the relationship between privacy and security in wireless classrooms (Poe & Garfinkel, 2009),
the ways that semi-public social networking sites reshape the nature of ethos (Maranto & Barton, 2010), the relationship between user data and authorship (Reyman, 2013), and the ways that engagement with wearable technologies might foster critical digital literacies (Hutchinson & Novotny, 2018). In addition, we have considered questions about privacy raised by blogs and e-portfolios in the classroom (Ross, 2014), assessment platforms (Crow, 2013), and writing program administration practices (Beck et al., 2016).

We believe, however, that the question of privacy is just one of the many questions that must be raised about the ways that user data is stored, mined, and used. As Natasha Singer (2018) argues, today's most pressing questions are less about privacy and more about the broader issue of "unfettered data exploitation." The social media landscape, she writes, is a largely unregulated playground for harmful "data-driven influence campaigns"—what some have termed "computational propaganda." In 2018, the impact of such campaigns was demonstrated in dramatic form when news broke that Cambridge Analytica, a strategic communication company, had successfully used psychometric Facebook data to deploy individualized political messaging in support of Ted Cruz, the pro-Brexit "Leave" campaign, and possibly Donald Trump (Illing, 2017). In this new era of "platform capitalism" (Srnicek, 2017), we believe that persuasion is becoming—like so many other aspects of our lives—a data-driven endeavor.

The intersections between user data and mass persuasion will undoubtedly be fruitful terrain for research in our field. But these intersections also hold great potential for classroom pedagogy. By bringing questions about data-driven mass persuasion into the classroom, we can certainly prepare students to defend themselves against malicious influence campaigns. But investigating data-driven mass persuasion can also help us engage students in questions that have always been central to the practice of rhetoric: How does demographic knowledge about the members of one's audience shape an act of writing or speech? To what extent can a message "target" multiple audiences through strategic delivery methods? How do audiences make judgments about the authenticity of a rhetor's assumed identity through an analysis of that rhetor's ethos? How can an understanding of the commonplaces that characterize a culture help a rhetor determine whether their message will fall within the acceptable "window" of public discourse? And how can analyzing the links between the members of an audience—what we now call "social network analysis"—help us predict the impact of our messages? By asking these questions and others, we believe that compositionists can explore the bleeding edge of technological change while also reaffirming our field's commitment to engaging students in reflection about "the available means of persuasion."

Conclusion

Emerging technologies enable us to communicate across greater distances and at greater speeds. But these technologies also enable the creation of new forms of association, of new ways to become "public" with others. As Alexis de Tocqueville (2014) argued, the technology of the print newspaper enabled strangers to come together and engage in "common activity" with one another, despite the fact that each was "detained in the place of his domicile." As he wrote,
It frequently happens . . . in democratic countries, that a great number of men who wish or who want to combine cannot accomplish it, because as they are very insignificant and lost amidst the crowd, they cannot see, and know not where to find, one another. A newspaper then takes up the notion or the feeling which had occurred simultaneously, but singly, to each of them. All are then immediately guided towards this beacon; and these wandering minds, which had long sought each other in darkness, at length meet and unite.

During the broadcast media era, radio and television served similar functions—each was a "beacon" that linked a distributed web of strangers together. But as with print media, broadcast media's flow of news, opinion, and analysis remained unidirectional. As such, broadcast media—like print media—remained incompatible with the ideals of democratic deliberation articulated by Habermas and Mills. Scholars have done much to explore the dangers of unidirectional media. In Manufacturing Consent: The Political Economy of the Mass Media, Edward S. Herman and Noam Chomsky (2002) emphasize the extent to which "money and power are able to filter out the news fit to print, marginalize dissent, and allow the government and dominant private interests to get their messages across to the public" (p. 2). For Herman and Chomsky, as for Mills and Habermas, the message is clear: for democracy to flourish, people need (1) access to reliable information, (2) autonomy, and (3) the means to participate in conversations that have a real possibility of becoming politically consequential.
As we have discussed, today's social web does not offer users all of these things. Even so, the web's most popular platforms enable interactive writing on a scale never seen before. The hundreds of thousands of ordinary people who liked and shared the Pizzagate conspiracy theory, for example, engaged with one another in a "conversation" that was participatory, immediate, decentralized, accessible, and relatively autonomous. In a strange way, Pizzagate is a perfect example of the kind of people-powered knowledge base that many public sphere theorists might have idealized during the age of mass media.

As we have argued, however, today's digital practices raise new questions about the health of the public sphere. What we have attempted to show is that the issues are myriad and complex, and the lines between public and private, commercial and non-profit, grassroots and corporate, and even facts and "alternative facts" are weakened if not already demolished. Though "fake news" is as old as writing itself, today we face new forms of misinformation, disinformation, and influence campaigns. The breakdown of the distinction between journalism and social media writing has brought gossip and "folk news" (Phillips, 2017) into greater circulation than ever before. The popularity of engagement-driven newsfeeds has incentivized content that provokes fear and anger, creating feedback loops that accelerate extremist tendencies and create powerful communities bound together by misinformation. Data-driven influence campaigns have initiated a new era of mass persuasion, demonstrating that a nexus of artificial intelligence, user data, and viral circulation channels can be exploited to dramatic effect.

As scholars of writing and rhetoric, we tend to be skeptical audiences when we encounter misinformation, disinformation, and influence campaigns of all kinds. As Daniel Wuebben (2016) argues, however, logics of virality may already be shaping the academy in subtle ways. Wuebben writes that the "logic of mass delivery is increasingly the logic of social media, which primarily operates under paradigms of popularity and virality" (p. 67). As this logic "filters into academia, evaluations of teaching, research, and publications," he argues, "publications gradually become more firmly rooted in the social media landscape, which tends to value a quantity of clicks more than academic rigor or long-term impact to a field or discipline" (p. 67). We, along with our students, will find it "increasingly difficult, and thus crucial, to distinguish the content that is designed to teach and educate from that which is crafted to attract clicks and proliferate" (p. 68).

As we have argued, this emerging landscape of "fake news" raises pedagogical questions. As teachers of composition with an interest in writing technology, we are naturally concerned about how we can better equip ourselves and our students with both the critical and technological know-how to not only survive but also to flourish in this challenging media environment. Today, almost 15 years after Barton wrote his 2005 piece, we still believe that incorporating emerging public writing environments into our classrooms can help us create "capable participants" in the public sphere (p. 179). In this article, we have focused on just one facet of this project—asking our students to investigate emerging misinformation practices, disinformation practices, and data-driven influence campaigns.
In the appendix of this article, we further develop this pedagogical project through brief summaries of two assignment sequences appropriate for use in first-year writing courses. By asking students to reflect about the messy landscape of "fake news," we believe that we can better engage them in what John M. Ackerman and David J. Coogan call "the true grit and tumble of public life" (p. 12). In doing so, we might more closely align our classrooms with a public sphere that is deeply flawed but is nonetheless our best hope for democratic action as writers.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declarations of interest

None.

Acknowledgements

We wish to thank Lanette Cadle, who brought us together to work on this piece. We also wish to thank the anonymous reviewers who offered feedback that contributed to its development.
Appendix A.

This appendix includes ideas for two assignment sequences that engage students in critical reflection about their own participation in today's online public spheres. The first assignment sequence is intended to help students better understand how social media is created and disseminated, and their role in perpetuating (or challenging) the spread of misinformation, disinformation, and malinformation. Arguably, our most pressing obligation is to prevent our students from becoming complicit in this process, which in effect damages not only a target politician or party but, ultimately, our faith in democracy. Appropriately, this assignment sequence asks students to compare the peer review process of an academic journal to the process they use when deciding whether to share a post on Facebook or Twitter.

The second assignment sequence asks students to investigate the anonymous discourse that takes place on forums such as 4chan, Reddit, and behind-the-scenes discussion pages at Wikipedia. Using the concept of ethos as a lens, students analyze practices of trolling, harassment (such as "doxxing"), and sockpuppeting in ways that invalidate simplistic linkages of anonymity with privacy and celebrity with the lack thereof. This assignment is intended to get at the tension between posting anonymously and taking personal responsibility for one's statements.

Assignment #1

This assignment sequence encourages students to develop reflective criteria for the re-circulation of media in digital spaces. While media literacy assignments often ask students to consider the credibility of particular texts, this assignment sequence asks them to connect textual analysis of this kind to their own theories about how circulating texts might contribute to the transformation of the public sphere. Students begin by engaging in reflective writing about their own commitments to social change—whether local, national, or international—and the extent to which different kinds of information flow help or hinder these goals. Next, each student creates a "network map" that represents an active space of online public writing, focusing in particular on the ways that the circulation and re-circulation of texts has shaped the flow of information in this space. Building upon the public writing pedagogies of Brian Gogan (2014), Laurie Gries (2015), Nathaniel Rivers and Ryan P. Weber (2011), and others, this mapping activity encourages students to consider the complex and indeterminate lifespans of public texts, preparing them to make ethical choices about the ways that they participate in prolonging or shortening the lives of these texts.

The final assignment in this sequence asks students to develop a set of criteria for the circulation of texts in the space of digital writing that they have chosen to investigate. To prepare students to develop these criteria, the class engages in an investigation of the review process that governs academic publication in a peer-reviewed academic journal. Students might, for example, hear from a guest speaker with editorial board experience. And they might study the stages that a text goes through prior to publication, reading sample comments and learning about reviewer practices.
By focusing on the ways that peer review practices align with scholarly values of rigor, honesty, and progress, students prepare to make claims about the ways that practices of posting, sharing, liking, upvoting, favoriting, retweeting, flagging, and commenting upon digital texts might align with their own ethical commitments. The final assignment in this sequence asks students to develop personal criteria for engaging with texts in digital space. Using sentence stems such as "Does this text . . .?" and "Will this text . . .?" students develop criteria designed to help them determine not only the accuracy of a text's information but also how its possible circulation in the public sphere might result in desirable or undesirable changes to a broader "network" of information flow. The assignment sequence ends with informal presentations in which students demonstrate the value of their criteria by discussing how one "accepted" text and one "rejected" text reflect their personal commitments to the transformation of the public sphere.

Assignment #2

This assignment sequence asks students to synthesize a number of theories about ethos, and to later draw upon these theories when reflecting about the nature of anonymous discourse in digital spaces. The sequence begins with a class discussion of the public comments of Mark Zuckerberg, who said in 2010:
You have one identity . . . The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly . . . Having two identities for yourself is an example of a lack of integrity. (quoted in Cutler, 2010)

Students are then asked to connect Zuckerberg's quotation to various quotations about identity, including one from James Baldwin (2013), who compared identity to a loose garment:

Identity would seem to be the garment with which one covers the nakedness of the self: in which case, it is best that the garment be loose, a little like the robes of the desert, through which one's nakedness can always be felt, and, sometimes, discerned. This trust in one's nakedness is all that gives one the power to change one's robes. (p. 76)

By putting these and other theories into conversation with one another, students begin to explore how different conceptions of ethos shape the ways that they navigate social contexts of all kinds, from school to work to home. Turning to digital spaces, this assignment sequence then asks students to assess the explanatory power of their own theories of ethos through brief case studies of digital public discourse. Students are encouraged to investigate questions about how the communication landscape of the 21st century encourages us to rethink our notions of what it means to become public with strangers. What notions of ethos, for example, can help us navigate a public sphere whose discourse is characterized by the viral recirculation of texts written by others? And how do new modes of anonymity and personal branding change the nature of credibility in digital spaces?

Through discussions of specific controversies related to anonymity and ethos, students reflect about these questions. For example, the class might reflect about the ways that 2014's "GamerGate" controversy—a coordinated campaign of harassment that targeted female video game developers—led to practices of "doxxing," or the publishing of private information such as home addresses and phone numbers. Finally, the assignment asks each student to investigate a controversy of personal or professional interest (in fields such as journalism, literature, medicine, law, the arts, pop culture, social justice, etc.) and to consider the extent to which this controversy was shaped by anonymous public discourse and non-anonymous public discourse. Students work to propose theories about the ways that varied forms of ethos shape the production, distribution, exchange, and consumption of texts in the public sphere.

Dan Ehrenfeld is an Assistant Professor of Writing and First-Year Studies at Stockton University, where he has taught since 2017. His areas of interest include digital rhetoric, writing in the public sphere, rhetorical circulation, and pedagogy. He recently completed his dissertation, Rhetorical Investments: Writing, Technology, and the Emerging Logics of the Public Sphere.

Matt Barton is an English professor at Saint Cloud State University, where he has served on the faculty since 2005 after receiving his PhD in Rhetoric and Composition from the University of South Florida. He teaches courses in rhetoric, digital and social media, popular culture, and professional writing. His published work includes books and articles on wikis, social media, videogames, and popular culture.
References

Amanpour, Christiane. (2016). In this dangerous new world, journalism must protect itself. The Guardian. November 23. Retrieved from https://www.theguardian.com/commentisfree/2016/nov/23/journalism-first-amendment-protections-trump-administration.
Aujla, Simmi. (2009). Police officers set up Facebook account to catch underage drinkers. The Chronicle of Higher Education. December 8. Retrieved from http://www.chronicle.com/blogs/wiredcampus/police-officers-set-up-facebook-account-to-catch-underage-drinkers/9103.
Baldwin, James. (2013). The devil finds work. New York, NY: Knopf Doubleday Publishing Group.
Balzhiser, Deborah, Polk, Jonathan D., Grover, Mandy, Lauer, Evelyn, McNeely, Sarah, & Zmikly, Jon. (2011). The Facebook papers. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 16(1). Retrieved from http://kairos.technorhetoric.net/16.1/praxis/balzhiser-et-al/
Barbaro, Michael, & Eder, Steve. (2015). Under oath, Donald Trump shows his raw side. New York Times. July 28. Retrieved from https://www.nytimes.com/2015/07/29/us/politics/depositions-show-donald-trump-as-quick-to-exaggerate-and-insult.html?smid=tw-share&_r=1.
Barton, Matthew D. (2005). The future of rational-critical debate in online public spheres. Computers and Composition, 22(2), 177–190.
Beck, Estee N. (2015). The invisible digital identity: Assemblages in digital networks. Computers and Composition, 35, 125–140.
Beck, Estee N., Crow, Angela, McKee, Heidi, Reilly, Colleen A., deWinter, Jennifer, Vie, Stephanie, Gonzales, Laura, & DeVoss, Dànielle Nicole. (2016). Writing in an age of surveillance, privacy, and net neutrality. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 20(2). Retrieved from http://kairos.technorhetoric.net/20.2/topoi/beck-et-al/index.html
Broussard, Meredith. (2015). When cops check Facebook. The Atlantic. April 19. Retrieved from https://www.theatlantic.com/politics/archive/2015/04/when-cops-check-facebook/390882/.
Buck, Amber. (2012). Examining digital literacy practices on social network sites. Research in the Teaching of English, 47(1), 9–38.
Cloud, Dana. (2018). Reality bites: Rhetoric and the circulation of truth claims in U.S. political culture. Columbus, OH: Ohio State University Press.
Coad, David. (2013). Developing critical literacy and critical thinking through Facebook. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 18(1). Retrieved from http://kairos.technorhetoric.net/praxis/tiki-index.php?page=Developing Critical Literacy and Critical Thinking through Facebook
Coogan, David J., & Ackerman, John M. (2013). Introduction: The space to work in public life. In J. M. Ackerman, & D. J. Coogan (Eds.), The public work of rhetoric: Citizen-scholars and civic engagement. Columbia, SC: University of South Carolina Press.
Craig, Jacob W. (2017). Navigating a varied landscape: Literacy and the credibility of networked information. In B. Glascott, J. Lewis, T. Lockhart, H. Middleton, J. Parrish, & C. Warnick (Eds.), Special issue on literacy, democracy, and fake news. Literacy in Composition Studies, 5(2).
Crow, Angela. (2013). Managing datacloud decisions and "big data": Understanding privacy choices in terms of surveillant assemblages. In H. A. McKee, & D. N. DeVoss (Eds.), Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from https://ccdigitalpress.org/book/dwae/02_crow.html
Cutler, Kim-Mai. (2010). Why Mark Zuckerberg needs to come clean about his views on privacy. VentureBeat. May 13. Retrieved from https://venturebeat.com/2010/05/13/zuckerberg-privacy/.
DeLuca, Katherine. (2015). 'Can we block these political thingys? I just want to get f*cking recipes': Women, rhetoric, and politics on Pinterest. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 19(3). Retrieved from http://kairos.technorhetoric.net/19.3/topoi/deluca/references.html
Edwards, Dustin W., & Gelms, Bridget. (2018). Special issue on the rhetoric of platforms. Present Tense, 3(6).
Edwards, Jim. (2013). How Snapchat will make money even though it deletes the most important asset it has–data. Business Insider. November 21. Retrieved from http://www.businessinsider.com/snapchat-data-monetization-through-advertising-2013-11.
Fife, Jane. (2010). Using Facebook to teach rhetorical analysis. Pedagogy, 10(3), 555–561.
Fisher, Max, & Taub, Amanda. (2018). In search of Facebook's heroes, finding only victims. The New York Times. April 22. Retrieved from https://www.nytimes.com/2018/04/22/insider/facebook-victims-sri-lanka.html.
Gogan, Brian. (2014). Expanding the aims of public rhetoric and writing pedagogy: Writing letters to the editors. College Composition and Communication, 65(4), 534–559.
Gries, Laurie. (2015). Still life with rhetoric: A new materialist approach for visual rhetoric. Logan, UT: Utah State University Press.
Griffith, Erin. (2017). Memo to Facebook: How to tell if you're a media company. Wired. October 12. Retrieved from https://www.wired.com/story/memo-to-facebook-how-to-tell-if-youre-a-media-company/.
Habermas, Jürgen. (1991). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (T. Burger, Trans.). Cambridge, MA: MIT Press. (Original work published 1962)
Hayes, Chris. (2018). My favorite example of how informationally toxic YouTube's algorithm is this: Imagine you're high school freshman and got a school assignment about the Federal Reserve. [Tweet]. September 6. Retrieved from https://twitter.com/chrislhayes/status/1037831503101579264?lang=en.
Hayes, Tracey J. (2017). #MyNYPD: Transforming Twitter into a public place for protest. Computers and Composition, 43, 118–134.
Head, Samuel L. (2016). Teaching grounded audiences: Burke's identification in Facebook and composition. Computers and Composition, 39, 27–40.
Herman, Edward S., & Chomsky, Noam. (2002). Manufacturing consent: The political economy of the mass media. New York, NY: Pantheon Books. (Original work published 1988)
Hill, Kashmir. (2012). Blaming the wrong Lanza: How media got it wrong in Newtown. Forbes. December 17. Retrieved from https://www.forbes.com/sites/kashmirhill/2012/12/17/blaming-the-wrong-lanza-how-media-got-it-wrong-in-newtown/#442fd9227601.
Hutchinson, Les, & Novotny, Maria. (2018). Teaching a critical digital literacy of wearables: A feminist surveillance as care pedagogy. Computers and Composition. http://dx.doi.org/10.1016/j.compcom.2018.07.006
Illing, Sean. (2017). Cambridge Analytica, the shady data firm that might be a key Trump-Russia link, explained. Vox. December 18. Retrieved from https://www.vox.com/policy-and-politics/2017/10/16/15657512/mueller-fbi-cambridge-analytica-trump-russia.
Khazan, Olga. (2017). What Trump could mean for journalism. The Atlantic. January 23. Retrieved from https://www.theatlantic.com/international/archive/2017/01/what-trump-means-for-journalism/514160/.
Kim, Lucian. (2017). Russian magazine says "trolls" used social media to disrupt U.S. election. NPR. October 20. Retrieved from https://www.npr.org/2017/10/20/559113223/russian-magazine-says-trolls-used-social-media-to-disrupt-u-s-election.
Laquintano, Timothy, & Vee, Annette. (2017). How automated writing systems affect the circulation of political information online. In B. Glascott, J. Lewis, T. Lockhart, H. Middleton, J. Parrish, & C. Warnick (Eds.), Special issue on literacy, democracy, and fake news. Literacy in Composition Studies, 5(2).
Lessig, Lawrence. (2004). Free culture: How big media uses technology and the law to lock down culture and control creativity. New York, NY: Penguin Press.
Losh, Elizabeth M. (2009). Virtualpolitik: An electronic history of government media-making in a time of war, scandal, disaster, miscommunication, and mistakes. Cambridge, MA: MIT Press.
Maranto, Gina, & Barton, Matt. (2010). Paradox and promise: MySpace, Facebook, and the sociopolitics of social networking in the writing classroom. Computers and Composition, 27(1), 36–47.
Marantz, Andrew. (2016). The alt-right hails its victorious god-emperor. The New Yorker. November 12. Retrieved from https://www.newyorker.com/news/news-desk/the-alt-right-hails-its-victorious-god-emperor.
McComiskey, Bruce. (2017). Post-truth rhetoric and composition. Logan, UT: Utah State University Press.
McVey, James Alexander, & Wood, Heather Suzanne. (2016). Anti-racist activism and the transformational principles of hashtag publics: From #HandsUpDontShoot to #PantsUpDontLoot. Present Tense, 3(5). Retrieved from http://www.presenttensejournal.org/volume-5/anti-racist-activism-and-the-transformational-principles-of-hashtag-publics-from-handsupdontshoot-to-pantsupdontloot/
Miller, Thomas P., & Leon, A. (2017). Introduction to special issue on literacy, democracy, and fake news: Making it right in the era of fast and slow literacies. In B. Glascott, J. Lewis, T. Lockhart, H. Middleton, J. Parrish, & C. Warnick (Eds.), Special issue on literacy, democracy, and fake news. Literacy in Composition Studies, 5(2).
Mills, C. Wright. (1999). The power elite. New York, NY: Oxford University Press. (Original work published 1956)
Minnix, Christopher. (2017). "Globalist scumbags": Composition's global turn in a time of fake news. In B. Glascott, J. Lewis, T. Lockhart, H. Middleton, J. Parrish, & C. Warnick (Eds.), Special issue on literacy, democracy, and fake news. Literacy in Composition Studies, 5(2).
Monty, Randall W. (2015). Everyday borders of transnational students: Composing place and space with mobile technology, social media, and multimodality. Computers and Composition, 38, 126–139.
Ortutay, Barbara. (2009). Facebook to end Beacon tracking tool in settlement. USA TODAY. September 21. Retrieved from https://usatoday30.usatoday.com/tech/hotsites/2009-09-21-facebook-beacon_N.htm.
Penney, Joel, & Dadas, Caroline. (2014). (Re)Tweeting in the service of protest: Digital composition and circulation in the OWS movement. New Media & Society, 16(1), 74–90.
Perez, Sarah. (2014). Snapchat is now the #3 social app among millennials. TechCrunch. August 11. Retrieved from https://techcrunch.com/2014/08/11/snapchat-is-now-the-3-social-app-among-millennials/.
Phillips, Whitney. (2017). Putting the folklore in fake news. Culture Digitally. January 24. Retrieved from http://culturedigitally.org/2017/01/putting-the-folklore-in-fake-news/.
Poe, Mya, & Garfinkel, Simson. (2009). Security and privacy in the wireless classroom. In A. C. Kimme Hea (Ed.), Going wireless: A critical exploration of wireless and mobile technologies for composition teachers. Cresskill, NJ: Hampton Press.
Poniewozik, James. (2012). Andrew Breitbart, 1969–2012. Time. March 1. Retrieved from http://entertainment.time.com/2012/03/01/andrew-breitbart-1969-2012/.
Read, Max. (2017). Does even Mark Zuckerberg know what Facebook is? New York Magazine. October 1. Retrieved from http://nymag.com/selectall/2017/10/does-even-mark-zuckerberg-know-what-facebook-is.html.
Reyman, Jessica. (2013). User data on the social web: Authorship, agency, and appropriation. College English, 75(5), 513–533.
Rheingold, Howard. (2014). Net smart: How to thrive online. Cambridge, MA: MIT Press.
Riche, David. (2017). Toward a theory and pedagogy of rhetorical vulnerability. In B. Glascott, J. Lewis, T. Lockhart, H. Middleton, J. Parrish, & C. Warnick (Eds.), Special issue on literacy, democracy, and fake news. Literacy in Composition Studies, 5(2).
Rivers, Nathaniel A., & Weber, Ryan P. (2011). Ecological, pedagogical, public rhetoric. College Composition and Communication, 63(2), 187–219.
Ross, Jen. (2014). Engaging with "webness" in online reflective writing practices. Computers and Composition, 34, 96–109.
Schreckinger, Ben. (2017). World war meme. Politico. March/April. Retrieved from https://politi.co/2qK8kHH.
Shaik, Shabnam. (2017). Improvements in protecting the integrity of activity on Facebook. April 13. Retrieved from https://www.facebook.com/notes/facebook-security/improvements-in-protecting-the-integrity-of-activity-on-facebook/10154323366590766/.
Shepherd, Ryan P. (2015a). Men, women, and Web 2.0 writing: Gender difference in Facebook composing. Computers and Composition, 39, 14–26.
Shepherd, Ryan P. (2015b). FB in FYC: Facebook use among first-year composition students. Computers and Composition, 35, 86–107.
Shepherd, Ryan P. (2018). Digital writing, multimodality, and learning transfer: Crafting connections between composition and online composing. Computers and Composition, 48, 103–114.
Silverman, Craig. (2016). How the bizarre conspiracy theory behind "Pizzagate" was spread. BuzzFeed. December 5. Retrieved from https://www.buzzfeed.com/craigsilverman/fever-swamp-election?utm_term=.qgBoAPZaV0#.guk85E2Led.
Singer, Natasha. (2018). Just don't call it privacy. New York Times. September 22. Retrieved from https://www.nytimes.com/2018/09/22/sunday-review/privacy-hearing-amazon-google.html.
Skinnell, Ryan. (2018). Faking the news: What rhetoric can teach us about Donald J. Trump. Exeter, UK: Imprint Academic.
Smith, Ben, & Geidner, Chris. (2012). A local crisis meets the global social web. BuzzFeed. December 15. Retrieved from https://www.buzzfeed.com/bensmith/a-local-crisis-meets-the-global-social-web?utm_term=.gaBVlJXqkx#.lvJGwo92Z0.
Sparby, Erika M. (2017). Digital social media and aggression: Memetic rhetoric in 4chan's collective identity. Computers and Composition, 45, 85–97.
Srnicek, Nick. (2017). Platform capitalism. Cambridge, UK: Polity.
Swartz, Jennifer. (2013). MySpace, Facebook, and multimodal literacy in the writing classroom. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 15(2). Retrieved from http://kairos.technorhetoric.net/praxis/tiki-index.php?page=Multimodal Literacy
Swift, Art. (2016). Americans' trust in mass media sinks to new low. Gallup News. September 14. Retrieved from http://www.gallup.com/poll/195542/americans-trust-mass-media-sinks-new-low.aspx.
The Overton window: A model of policy change. (n.d.). Retrieved from http://www.mackinac.org/overtonwindow.
de Tocqueville, Alexis. (2014). Democracy in America (H. Reeve, Trans.). Adelaide, Australia: University of Adelaide, eBooks@Adelaide. (Original work published 1840)
Tufekci, Zeynep. (2018). YouTube, the great radicalizer. The New York Times. March 10. Retrieved from https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
Vie, Stephanie. (2008). Digital divide 2.0: 'Generation M' and online social networking sites in the composition classroom. Computers and Composition, 25, 9–23.
Walls, Douglas M., & Vie, Stephanie. (2017). Social writing/social media: Publics, presentations, and pedagogies. Perspectives on Writing. Fort Collins, CO: The WAC Clearinghouse and University Press of Colorado. Retrieved from https://wac.colostate.edu/books/perspectives/social/
Wardle, Claire, & Derakhshan, Hossein. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. The Council of Europe. Retrieved from http://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html
Weedon, Jen, Nuland, William, & Stamos, Alex. (2017). Information operations and Facebook, Version 1.0. Retrieved from https://fbnewsroomus.files.wordpress.com/2017/04/facebook-and-information-operations-v1.pdf
Wuebben, Daniel. (2016). Getting likes, going viral, and the intersections between popularity metrics and digital composition. Computers and Composition, 42, 66–79.