Leveraging library trust to combat misinformation on social media

M. Connor Sullivan

Harvard University, Faculty of Arts and Sciences, Widener Library, Harvard Yard, Cambridge, MA 02138, USA
Simmons University, School of Library and Information Science, 300 The Fenway, Boston, MA 02115, USA
ABSTRACT
One reason librarians are confident they have a role to play in fighting misinformation is the level of trust in libraries as institutions. Exactly how they might leverage that trust remains unclear and untested. Building on recent work in correcting health misperceptions on social media, this study tests whether libraries can leverage trust to combat misinformation online. Using a misperception about the influenza vaccine as a test case, an experiment (n = 625) was conducted in fall 2018 using Amazon's Mechanical Turk. Results suggest that the misperception can be reduced, but not by library institutions. An unsuccessful follow-up (n = 600) suggests that the effectiveness of the correction is season dependent and opens the possibility that libraries may yet play a role, but not necessarily because they are trusted. Future library proposals for combating misinformation need to be developed and tested within a broader contemporary misinformation research program.
1. Introduction

As public concern for the problem of mis- and disinformation spread following the 2016 U.S. presidential election, the overwhelming response from librarians was that they have a key role to play in the fight against all its forms. This fight has been characterized as librarians' longstanding information war (Becker, 2016), and many have imagined themselves or their peers on the front lines (Jacobson, 2017; Neely-Sardon & Tignor, 2018; Wade & Hornick, 2018). Outside the library, others have not only adopted this characterization (e.g., Large, 2017) but also begun looking to libraries as “a critical resource for teaching the skills required for navigating the digital ecosystem” (Wardle & Derakshan, 2017, p. 84). Lewandowsky, Ecker, and Cook (2017), arguably among the most influential researchers in misinformation, look hopefully to information literacy as a necessary part of their technocognition approach to the problems of a post-truth era (see also the responses to their proposal in the same issue of the Journal of Applied Research in Memory and Cognition).

Essential to the role that libraries may play in combating misinformation is trust. There has been a marked decline in public trust in traditional journalism and other institutional arbiters of truth, leaving individuals uncertain about what or whom to trust when seeking reliable information. By contrast, as Wardle and Derakshan (2017) note, “Libraries are one of the few institutions where trust has not declined” (p. 84). According to the Pew Research Center, 78% of U.S. adults believe that libraries help them find information that is trustworthy and reliable (Geiger, 2017). For this reason, some library and information science (LIS) researchers and practitioners have called for libraries to build on that trust to bring quality information and education to the public (Saunders, Gans-Boriskin, & Hinchliffe, 2018) or to help restore
trust in traditional journalism and other quality sources (Batchelor, 2017).

1.1. Problem statement

There are reasons to be skeptical about the LIS approach to combating misinformation (incorrect information) and misperceptions (belief in misinformation). To begin, there is a robust literature on the difficulties in correcting both, and LIS authors have not yet fully reckoned with the implications of this work (see Sullivan, 2018). Information literacy approaches focus heavily on source evaluation, but “basic cognitive research makes clear that evaluating sources, while important, will be an incomplete solution” (Marsh & Yang, 2017, p. 401). Others have expressed skepticism about LIS approaches, as when one of the coauthors of the Science article on fake news (Lazer et al., 2018) was interviewed following the backlash to Harvard Library's (2017) fake news guide (Kuritzkes, 2017).

The most pressing problem with LIS solutions to the problem of misinformation is that they remain untested. Responding to the problems of fake news, LIS authors have made testable claims, but these are presented as statements rather than questions, as when Batchelor (2017) asserts that “all methods of promoting critical thinking skills and awareness of fake news have the potential to make an impact” (p. 145). Even when some have sought to measure impact, they have demonstrated a need that information literacy is believed to be able to meet, rather than the actual impact of literacy in meeting that need (e.g., El Rayess, Chebl, Mhanna, & Hage, 2018).

There has also been little discussion of how libraries might leverage
trust in the fight against false information and belief, or how to assess what works. For instance, if libraries are trusted not just for information in general but also within specific domains such as health, with 73% of those over 16 saying that “libraries contribute to people finding the health information they need” (Horrigan, 2015, p. 8), might they contribute to combating misinformation and misperceptions regarding the seasonal influenza (flu) vaccine? Vaccination rates in the United States remain far below recommended levels and have even decreased in recent years (CDC, 2018a), resulting in staggering costs in human life, health, and resources. Among the barriers to addressing this problem are persistent misperceptions about the flu and flu vaccine (e.g., CDC, 2018b), and there is an urgent global need to pilot strategies that work in reducing vaccine hesitancy, particularly on social media (Larsen, 2018).

Using the seasonal flu vaccine as a test case, this study is an attempt to initiate within LIS a new misinformation research program and to introduce a research tool. Building on recent work in the context of social media, where “corrections” research has been limited but where leveraging institutional trust has proved promising (Vraga & Bode, 2017a, 2017b), it is a first attempt to address empirically whether libraries or library organizations can leverage trust to combat misinformation and misperceptions online. Along with testing and extending recent research, the results of this study will help inform librarians and allied professionals about the challenges of correcting misinformation and provide a starting point for thinking critically about the role of libraries in fighting it.
2. Literature review

2.1. Misinformation and social media

A standard complaint among the post-election postmortems in the United States has been the lack of traditional information gatekeepers online. Particular blame has been placed on social media platforms such as Facebook and Twitter. Authors decry the use of social media as a source of information, specifically news, and Johnson (2017) finds the “social function of online news-sharing” antithetical to librarianship (p. 15). Underlying much of this criticism is the claim that “unlike traditional news sources, social media … allows users to create a bubble of news stories that only pander to their beliefs and opinions” (Rochlin, 2017, p. 386). Such claims about filter bubbles are widespread but tend to overstretch the available data (see Guess, Nyhan, Lyons, & Reifler, 2018). There is substantial evidence that most Internet users do not avoid exposure to counter-attitudinal content (Garrett, 2018), and Facebook does not appear to systematically screen out such content (Bakshy, Messing, & Adamic, 2015). News fragmentation remains low, and interactions appear less segregated online than offline.

Even if birds of a feather do not necessarily flock together online, they continue to flock there, where they encounter increasing amounts of information of varying quality. As social media use has mushroomed in the United States, seeing a nearly tenfold increase from 2005 to 2015 (Perrin, 2015), it has become a key source of news and information. Two-thirds of U.S. adults now report accessing news through social media (Anspach & Carlson, 2018), with just under half getting news on Facebook (Guess et al., 2018). Nearly as many Facebook users regularly encounter health-related news (Vraga & Bode, 2017b). This occurs even for those using social media primarily for social purposes, as platforms facilitate incidental exposure to information. Even if most users do not click on much news content relating to national events, politics, or world affairs (Bakshy et al., 2015), article previews in news feeds can inform users to some extent (Anspach & Carlson, 2018), and mere exposure can alter the perceived importance of issues, particularly among those with low interest (Feezell, 2018).

2.2. Incidental exposure to misinformation

If incidental exposure to information can inform users about and alter the perceived importance of issues, might misinformation have a similar effect? Evidence suggests so. First, the effects of exposure to misinformation are amplified by repetition, even if from the same source (Ecker, 2015), and experiments demonstrate that the source of information is often lost over time, especially as it becomes familiar (Marsh & Yang, 2017). Thus, misinformation that is encountered frequently, even incidentally, may be remembered more readily, independent of its source. Second, people often do not recognize demonstrably false stories when scanning information (Southwell, Thorson, & Sheble, 2017). This is an inevitable result of the mismatch between the pace of modern life and our mental resources, necessitating heuristics, or mental shortcuts, for processing information and making decisions. While relying on shortcuts can cause problems, it is important to note that “in many cases, it is the most efficient form of behaving” (Cialdini, 2009, p. 7), and we regularly turn to heuristics such as deferring to expertise.

Nevertheless, our default modes of information processing create problems when the information to which we are exposed is unreliable. This is particularly the case with social media, where information within our networks is endowed with increased issue relevance (Feezell, 2018). Our knowing may thus go awry when we pay more attention to the information communicated in the commentary surrounding news articles than in the articles themselves. In a Facebook-based experiment, Anspach and Carlson (2018) found that individuals in social commentary conditions were more misinformed about a topic, tending to believe the information relayed in comments rather than in the article preview. This occurred even when participants recognized the biased nature of comments and reported trusting legitimate news organizations over other users, and even when the information was biased against individuals' prior beliefs.
2.3. Social correction

If users rely on social networks for encountering and to some degree trusting information and pay more attention to the commentary surrounding that information, then social media could provide an avenue for correcting misinformation or misperceptions. This seems somewhat paradoxical, but recent research has suggested how unique features of social media might be able to overcome one of the thornier problems of correcting misinformation—namely, that it is ineffective, if not counterproductive, to directly challenge individuals' false beliefs (Lewandowsky et al., 2017). The key is the potential for social media to facilitate corrections without triggering the motivated reasoning thought to undermine direct approaches (Vraga & Bode, 2017b).

In two experiments, Bode and Vraga (2015) investigated the impact that Facebook's “related stories” feature could have when correcting health-related misperceptions. They found that this algorithmic correction was able to reduce misperceptions about genetically modified organisms, but not about the false link between vaccines and autism. Moving from Facebook to Twitter, Vraga and Bode (2017b) then explored what impact observing the correction of a misperception about the Zika virus might have on support for that misperception. They termed this observational or social correction and found that observing a correction from the Centers for Disease Control and Prevention (CDC) significantly reduced misperceptions compared to the user or control conditions. Overall, each type of correction was more effective among those higher in initial misperception. In one final Facebook study, Bode and Vraga (2018) compared algorithmic and social corrections for a misperception regarding the Zika virus. They found that both types of corrections were able to significantly reduce misperception, compared to a control condition.
2.4. Correcting flu vaccine misperceptions

Vraga and Bode's results using social and algorithmic corrections are encouraging, but the latter's success did not extend to misperceptions about vaccines and autism. This may be due to the fact that this particular misperception is held both widely and deeply and has been around long enough for “people to build up their …ability to resist incongruent information” (Bode & Vraga, 2015, p. 633). By contrast, the ability to reduce misperceptions about Zika may be due to the relative recency of the issue, before misinformation had taken root (Vraga & Bode, 2017b). It is thus unclear whether social correction will be effective in reducing common misperceptions about the seasonal flu vaccine that continue to prevent many Americans from vaccinating.

Since 2010–11, the CDC has recommended that everyone older than six months receive the flu vaccine, except when medically contraindicated (Nowak, Sheedy, Bursey, Smith, & Basket, 2015), but coverage remains far below the 70% goal of the Healthy People 2020 program. The 2017–18 flu season was one of the worst but saw some of the lowest rates—37.1%, down 6.2% from the previous year (CDC, 2018a)—leading to roughly 950,000 hospital admissions and 80,000 deaths (Larsen, 2018). The annual cost is in the billions (Nowak et al., 2015). While there are many important obstacles to getting vaccinated, misperceptions about the flu vaccine remain a significant barrier. One of the most common is the belief that you can get the flu from the flu shot (CDC, 2018b), with over 40% of Americans reporting it as somewhat or very accurate (Nyhan & Reifler, 2015), and 5.8% of respondents in one survey citing it as a reason not to vaccinate (Luz, Johnson, & Brown, 2017). There is thus an urgent global need to find strategies for reducing vaccine hesitancy and addressing misinformation, particularly on social media (Larsen, 2018).

There is some evidence that the misperception that the seasonal flu vaccine gives you the flu can be corrected to some extent (Jolley & Douglas, 2014; Nyhan & Reifler, 2015). However, these studies also found that reducing the misperception did not lead to an increase in intention to vaccinate, and in one instance decreased the likelihood of vaccinating among those with high concern about side effects (Nyhan & Reifler, 2015).

2.5. Research questions

In view of the findings of Vraga and Bode that social correction can be an effective means of reducing health-related misperceptions on social media when coming from the CDC (Vraga & Bode, 2017b) or a user (Bode & Vraga, 2018), but that this effect may not extend to such entrenched issues as the link between vaccines and autism (Bode & Vraga, 2015), this study first asks:

RQ1: Can social correction on Facebook effectively reduce the misperception that individuals can get the flu from the seasonal flu vaccine?

Given the high degree of trust placed in libraries for finding reliable health information—which compares favorably to national levels of trust in the CDC (Kowitt, Schmidt, Hannan, & Goldstein, 2017; Pew Research Center, 2013)—the study then asks:

RQ2: Can social or observational correction on Facebook from a library institution effectively reduce the misperception that individuals can get the flu from the seasonal flu vaccine?

RQ3: How does a correction from a library institution compare to a correction from another trusted institution, such as the CDC, or from another user?

Along with testing the effectiveness of a correction from a public library, this study also tests the effectiveness of the American Library Association (ALA), since trust in the latter, where it exists, could be more readily leveraged to combat misinformation. The CDC and user conditions are also included to replicate previous research and provide a basis of comparison for the library conditions. To understand better the relationship between trust and the effectiveness of corrections, which might go in either direction, the study asks:

RQ4a: Does a greater level of trust in the source of a correction lead to increased effectiveness of the correction?

RQ4b: Does providing a correction to a misperception lower the perceived trustworthiness of the source of the correction among those who hold the misperception more strongly?

Whereas the first part of this question asks whether trust can help fight misinformation, the second asks whether trust can be hurt in the process of fighting misinformation. Although Vraga and Bode (2017b) found that perceived trustworthiness and credibility were not harmed by providing corrections, Anspach and Carlson (2018) found that challenging the veracity of a post on social media can affect the perceived trustworthiness of all parties. Further, research on the hostile media effect (e.g., Nyhan & Reifler, 2012) or cultural cognition (e.g., Kahan, Jenkins-Smith, & Braman, 2011) demonstrates that the perceived expertise or bias of a source may vary depending on the extent to which that source aligns with individuals' prior beliefs.

Finally, in view of research indicating that this misperception can be reduced via direct correction, but that this correction does not increase the intention to vaccinate (Jolley & Douglas, 2014) and may even decrease intention among those more concerned about the safety of vaccines (Nyhan & Reifler, 2015), this study asks:

RQ5: Does correcting the misperception that individuals can get the flu from the seasonal flu vaccine lead to an increased intention to receive the flu vaccination?

3. Methodology

To address these questions, an experiment was conducted in fall 2018 using Amazon's Mechanical Turk (MTurk) marketplace (www.mturk.com). Originally created as an on-demand human workforce, MTurk has become a low-cost, popular way to conduct online research. Researchers recruit participants from a large pool of available workers by creating a human intelligence task (HIT), which workers complete to receive compensation. Since it was initially proposed as a possible source of inexpensive, high-quality data in 2009 (Buhrmester, Kwang, & Gosling, 2011), its use has grown rapidly, from 61 studies in 2011 to over 1200 in 2015 (Walters, Christakis, & Wright, 2018). Evaluations have consistently found that MTurkers meet or exceed the psychometric standards set by other population samples (Buhrmester, Talaifar, & Gosling, 2018), and replication studies show that they perform as well as those populations (Anspach & Carlson, 2018) while being more attentive (Hauser & Schwarz, 2016) and diverse (Walters et al., 2018). MTurk also offers a participant pool that is much larger than undergraduate convenience samples, but with a similar turnover rate (Stewart et al., 2015). It is important to note, however, that although most MTurkers are American, they are not representative of the U.S. population (Buhrmester et al., 2018), differing in important ways, including demographics and health behaviors (Walters et al., 2018), as well as certain psychological dimensions (McCredie & Morey, 2018).

Over the course of two days in late October, the recommended point by which vaccines should be administered (Grohskopf et al., 2018), participants (n = 625) were recruited via two HITs, ensuring that workers who completed the first were unable to view and accept the second. The compensation for the experiment, which on average took 7.5 min, was $0.40. Participation was limited to workers in the United States who had an approval rating of at least 95%. The HIT included information about the study and informed consent, followed by a link to an external survey on Qualtrics. Responses within Qualtrics were anonymized, and the survey did not ask for Worker IDs. To receive
compensation for the completed study, participants received a code at the end that enabled them to complete the HIT. Although workers had no incentive to complete the survey more than once, a cookie was placed on their browser when submitting responses to prevent “ballot stuffing.”

Data were gathered through pre- and post-experiment questions in Qualtrics. The pre-experiment questions concerned demographics, social media use, sources of health information, and general health, including vaccination behavior in the previous and current years. These were followed by a series of statements about common health perceptions, and participants were asked to indicate the degree to which they agreed or disagreed with them (seven-point scale). Each participant was asked to agree or disagree with the following three statements concerning flu misperceptions, which were then combined into an index score to indicate general flu misperception (MisIndex):
1. “You do not need to get a seasonal flu vaccination every year.”
2. “You can get the flu from the seasonal flu vaccine.”
3. “Healthy people do not need to receive the seasonal flu vaccine.”

These statements were distributed across three pages, each of which included two additional statements about general health, which were included to disguise the focus on vaccinations. Because not every statement was a misperception, participants were unlikely to conclude that they were only assessing common myths and adjust their responses accordingly.

Participants were randomly sorted either into a control condition or into one of four experimental conditions. In all conditions, participants viewed a series of three simulated Facebook pages for at least 10 s each. Facebook was selected for the experiment since it has been found to be the most widely used platform among MTurkers (Roy, Kase, & Bowman, 2017). In the experimental conditions, participants viewed an additional page containing a post from a user with a link supporting the misconception that the flu vaccine can give you the flu. A correction to this misconception was provided as a comment from the CDC, another user, a public library, or the ALA (see Fig. 1 for a sample post). Each comment included a credible source (Harvard Health, 2018), which Vraga and Bode (2018) find is necessary for social corrections to be effective. Following the first three pages, attention questions served to train participants in how to view each page—including questions about the content, source, and comments. Participants were told that user photos and names were anonymized to protect their identity.

Post-experiment, participants were again asked to assess three health statements, one of which was the key misperception, “You can get the flu from the seasonal flu vaccine.” Assessment of this statement pre- and post-experiment (MisFlu1 and MisFlu2) provided the key data point (MisChange) across the conditions. All participants were also asked to indicate their intention to receive the seasonal flu vaccine, if they had not received it already (FluIntend). Participants in the experimental conditions were also asked to rate the trustworthiness of the source of the correction (Trust), and those in the library conditions indicated how frequently they visited a local library or used any of its resources. At the end, participants were debriefed about the focus of the study and informed that they had been exposed to a common misconception that has been shown to be inaccurate. This was accompanied by links to authoritative resources on the issue, as well as general information about the safety of seasonal flu vaccines.
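To make the measures concrete, the following minimal Python sketch computes MisIndex and MisChange for a toy dataset. The column names and values are hypothetical illustrations, not the study's instrument or data.

```python
# Minimal sketch: constructing MisIndex and MisChange from per-participant
# responses on the 7-point agreement scale. All names/values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    # Pre-experiment agreement (1 = strongly disagree, 7 = strongly agree)
    "mis_need_yearly": [2, 6, 4],        # statement 1
    "mis_flu_from_shot_pre": [3, 7, 5],  # statement 2 (MisFlu1, the key misperception)
    "mis_healthy_no_need": [1, 5, 4],    # statement 3
    # Post-experiment agreement with the key misperception (MisFlu2)
    "mis_flu_from_shot_post": [3, 5, 4],
})

# MisIndex: mean agreement across the three flu misperception statements.
items = ["mis_need_yearly", "mis_flu_from_shot_pre", "mis_healthy_no_need"]
df["MisIndex"] = df[items].mean(axis=1)

# MisChange: post minus pre on the key statement; negative = misperception reduced.
df["MisChange"] = df["mis_flu_from_shot_post"] - df["mis_flu_from_shot_pre"]

print(df[["MisIndex", "MisChange"]])
```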
4. Findings

4.1. Pre-experiment analysis

4.1.1. Demographics and distribution

After removing incomplete responses (n = 12) and responses with standardized residuals for MisChange beyond 3.29 (n = 10), the number of valid responses was 603. Testing revealed no significant differences in demographics or pre-experiment measures across the five conditions. Participants were roughly evenly divided by gender (48.6% female), relatively young (59.9% between 26 and 40), and well educated: 21.6% had at least some graduate school, 64.1% at least a four-year college degree, and 89.2% at least a two-year college degree.

4.1.2. Vaccination levels and intent

Overall, the participants could be considered under-vaccinated, with only 46.4% having received the shot in the previous year, and 45.4% having received or planning to receive it this year. Another 13.1% were undecided. Past vaccination behavior was a strong predictor of present behavior: Of those who did not receive it last year, 70.6% did not intend to receive it this year, whereas 85% of those who had received it last year planned to receive it this year, if they had not received it already. Most participants were not very (31.5%) or only somewhat (36.5%) concerned about getting sick during the flu season (FluConcern), but the percentage of those who had received or intended to receive the shot increased with level of concern—from 18.4% among those not at all concerned to 67.1% of those very concerned. Concern also correlated with past vaccination.

4.1.3. Vaccine misperception

Pre-experiment support for the misperception “You can get the flu from the seasonal flu vaccine” (MisFlu1) was on average slightly on the disagree side of the scale (M = 3.70, SD = 1.87). The same is true for the misperception index (MisIndex), the average score across the three misperceptions about the flu vaccine (M = 3.71, SD = 1.58). A series of ANOVAs revealed no significant differences in either measure by condition or education level, but differences were found by gender (MisFlu1: p = .040; MisIndex: p = .021), with females on average less misinformed than males, and by age (MisFlu1: p = .004; MisIndex: p = .013), with those 56 or older on average less misinformed than those 26 to 40.

4.2. Experiment results

4.2.1. Misperception change

Fig. 2 shows the support for the misperception “You can get the flu from the seasonal flu vaccine” per condition, both before (MisFlu1) and after (MisFlu2) the experiment. The average change (MisChange) overall and per condition can be seen in Table 1. The average change was −0.06 (SD = 1.16), with the greatest decreases in the CDC (M = −0.28, SD = 1.16) and User (M = −0.22, SD = 1.15) conditions. There was a slight increase for both the public library (Library: M = 0.02, SD = 1.15) and the ALA (M = 0.02, SD = 1.21), but these were both below Control (M = 0.14, SD = 1.26).

Fig. 2. Pre- and post-experiment misperception by condition.

One-way ANOVA of the control vs. experimental conditions revealed that the experiment was effective in reducing the misperception (F(1,601) = 4.65, p = .031, ηp² = 0.008). Comparing all conditions revealed a significant difference in MisChange per condition (F(4,598) = 2.82, p = .025, ηp² = 0.018). Post hoc testing (Bonferroni, as are all post hoc and pairwise comparisons reported here) was just shy of significance for the CDC against Control (p = .051), but a series of independent-sample t-tests against Control were significant for both CDC (p = .004) and User (p = .010), though not for Library (p = .379) or ALA (p = .396). No other significant differences were found between conditions, although results were near-significant for CDC vs. Library (p = .056) and ALA (p = .063). The library conditions were similar (p = .999).

To control for the possibility that the impact of the correction was dependent on the initial level of misperception, separate one-way ANCOVAs were run using MisFlu1 or MisIndex as covariate. The former tested for the impact of the key misperception only, while the latter tested for the general level of misperception regarding flu vaccines.
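The following sketch illustrates how condition-level tests of this kind might be run in Python with statsmodels: a one-way ANOVA on MisChange across conditions, followed by an ANCOVA adding MisFlu1 as a covariate. It uses simulated data and hypothetical names; it is not the study's analysis code.

```python
# Sketch of the condition-level tests on simulated data: one-way ANOVA on
# MisChange across the five conditions, then an ANCOVA adding the initial
# misperception (MisFlu1) as a covariate.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "Condition": rng.choice(["Control", "Library", "ALA", "CDC", "User"], size=n),
    "MisFlu1": rng.integers(1, 8, size=n),          # 7-point pre-experiment score
    "MisChange": rng.normal(-0.06, 1.16, size=n),   # simulated change scores
})

# One-way ANOVA: does MisChange differ by condition?
anova_model = ols("MisChange ~ C(Condition)", data=df).fit()
print(sm.stats.anova_lm(anova_model, typ=2))

# ANCOVA: the same comparison controlling for initial misperception.
ancova_model = ols("MisChange ~ C(Condition) + MisFlu1", data=df).fit()
print(sm.stats.anova_lm(ancova_model, typ=2))
```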
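Similarly, the independent-sample t-tests against Control can be sketched as below. A Bonferroni adjustment is shown over the family of four comparisons, in the spirit of the Bonferroni correction the study reports for post hoc and pairwise tests; names and data are again hypothetical.

```python
# Sketch of independent-sample t-tests of each experimental condition against
# Control, with a Bonferroni adjustment over the four comparisons. Simulated data.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "Condition": rng.choice(["Control", "Library", "ALA", "CDC", "User"], size=n),
    "MisChange": rng.normal(-0.06, 1.16, size=n),
})

control = df.loc[df["Condition"] == "Control", "MisChange"]
experimental = ["Library", "ALA", "CDC", "User"]

p_values = []
for cond in experimental:
    _, p = stats.ttest_ind(df.loc[df["Condition"] == cond, "MisChange"], control)
    p_values.append(p)

# Bonferroni-adjust the family of four condition-vs-Control comparisons.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
for cond, p, pa, sig in zip(experimental, p_values, p_adj, reject):
    print(f"{cond} vs Control: p = {p:.3f}, adjusted p = {pa:.3f}, reject H0: {sig}")
```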
Fig. 1. Example Facebook page with flu misinformation and correction (CDC condition).

Table 1
Breakdown of participants and misperception measures by condition.

Condition   Participants        MisIndex   MisFlu1   MisFlu2   MisChange
            N       %           M          M         M         M        SD
Control     124     20.6        3.77       3.57      3.71      .14      1.00
Library     123     20.4        3.57       3.65      3.67      .02      1.15
ALA         121     20.1        3.92       3.82      3.83      .02      1.21
CDC         117     19.4        3.57       3.62      3.33      –.28     1.26
User        118     19.6        3.69       3.83      3.61      –.22     1.15
Total       603     100.0       3.71       3.70      3.63      –.06     1.16
Each test improved the results slightly (MisFlu1: F(4,598) = 2.88, p = .022, ηp² = 0.019; MisIndex: F(4,597) = 2.93, p = .021, ηp² = 0.019), and pairwise comparisons suggested that a correction from the CDC is effective in reducing support for the misperception on average, compared to Control (p < .05).

Alternatively, classifying respondents by initial misperception (MisFlu1) score into Low (1–2; n = 195), Mid (3–5; n = 291), and High (6–7; n = 117) groups (MisFluGroup) revealed significant differences in average MisChange overall (ANOVA: p < .001) and between groups (High vs. Low and Mid, p < .001; Low vs. Mid, p = .033) (see Table 2). In all conditions, the Low group saw an increase in misperception, and the High group a decrease. Separate ANOVAs for each group (Low, Mid, High) per condition were significant for High (p = .030) and Mid (p = .012), with post hoc testing for Mid suggesting that the CDC is effective compared to Control (p = .011).

Table 2
Average misperception change (M) by level of initial misperception and condition.

MisFluGroup   Control   Library   ALA    CDC    User
Low           .14       .18       .36    .17    .24
Mid           .29       .08       .02    –.49   –.23
High          –.37      –.39      –.42   –.80   –.83
Total         .14       .02       .02    –.28   –.22
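A sketch of the Low/Mid/High classification and the per-group ANOVAs, again on simulated data, with the cutoffs reported above (1–2 Low, 3–5 Mid, 6–7 High):

```python
# Sketch of the Low/Mid/High classification on initial misperception and the
# per-group one-way ANOVAs across conditions. Data are simulated.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "Condition": rng.choice(["Control", "Library", "ALA", "CDC", "User"], size=n),
    "MisFlu1": rng.integers(1, 8, size=n),
    "MisChange": rng.normal(-0.06, 1.16, size=n),
})

# Cutoffs per the text: (0, 2] = Low, (2, 5] = Mid, (5, 7] = High.
df["MisFluGroup"] = pd.cut(df["MisFlu1"], bins=[0, 2, 5, 7],
                           labels=["Low", "Mid", "High"])

# Within each group, a one-way ANOVA of MisChange across the five conditions.
for group, sub in df.groupby("MisFluGroup", observed=True):
    samples = [s["MisChange"].to_numpy() for _, s in sub.groupby("Condition")]
    f_stat, p_val = stats.f_oneway(*samples)
    print(f"{group}: F = {f_stat:.2f}, p = {p_val:.3f}")
```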
Since significant differences in initial misperception were found for both gender and age, two-way ANOVAs were run to test for the role that each might play in the effectiveness of the correction. Although there appeared to be a large difference in the average MisChange between genders in the User condition (females: –0.40; males: –0.06), results from testing were non-significant. However, when running a one-way ANOVA for all conditions for females only, results were significant (F(4,288) = 2.89, p = .023, ηp² = 0.039), and post hoc testing suggested that a correction from a user is effective compared to Control (p = .037). No such difference occurs for males. This disparity occurs despite no significant difference between genders in the average trust in users (females: 2.27; males: 2.78).
4.2.2. Trust in source of correction

Participants in the experimental conditions (n = 479) rated the trustworthiness of the source of the correction on a scale from 0 (completely untrustworthy) to 5 (completely trustworthy), with the option of not familiar, which was selected for both the CDC (n = 3) and the ALA (n = 20). The latter responses were removed from the analysis, resulting in 456 participants. As shown in Table 3, participants on average trusted the CDC most (M = 4.17, SD = 1.26), users the least (M = 2.54, SD = 1.52), and public libraries (M = 3.67, SD = 1.10) and the ALA (M = 3.61, SD = 1.31) at similar levels. Post hoc testing following ANOVA (F(3,452) = 32.07, p < .001, ηp² = 0.175) returned significant results for greater trust in the CDC (vs. User, p < .001; vs. ALA and Library, p < .05) and lower trust in users (vs. all conditions, p < .001). Similar results were found when controlling for the initial level of misperception.

To control for the different levels of trust in each source, Trust was added as a covariate to the model, along with MisFlu1. (Since using MisFlu1 and MisIndex as covariates returned similar results, the former is used for the remainder of the discussion.) The ANCOVA was more significant and accounted for more of the variance in MisChange (F(3,450) = 3.77, p = .011, ηp² = 0.025), although the model no longer accounted for Control, and pairwise comparisons did not reveal any significant differences between conditions. Comparing Trust for each condition based on initial misperception (Low, Mid, High; see Section 4.2.1) was significant for the CDC (p < .001). Post hoc testing revealed that those in the Low group trusted the CDC more than those in Mid and High (p < .01). Controlling also for the degree of MisChange increased the significance of the comparison (p < .001).

Table 3
Trust in source of correction and intention to vaccinate by condition.

            Trust                    FluIntend
Condition   N      M      SD        N      M      SD
Control     –      –      –         112    3.68   1.92
Library     123    3.67   1.10      113    3.50   1.97
ALA         101    3.61   1.31      112    3.01   1.81
CDC         114    4.17   1.26      106    3.81   1.84
User        118    2.54   1.52      109    3.61   1.89
Total       456    3.49   1.43      552    3.52   1.90

4.2.3. Vaccination intention

Post-experiment, 8.5% of participants reported already having vaccinated. Of the remaining (n = 552), the majority (55.2%) was at least somewhat likely to vaccinate. Overall, however, more participants were very unlikely (23.7%) than very likely (19.9%) to vaccinate. Of those reporting to be undecided pre-experiment (n = 79), 62% were at least somewhat likely, with 50.6% somewhat likely, compared to 24.1% somewhat unlikely.

After coding the intention to vaccinate (FluIntend) as a numeric variable (from 1 = very unlikely to 6 = very likely) and removing those already vaccinated (n = 51), ANOVA of the control vs. experimental conditions indicated that, in general, the experiment was not able to significantly increase intention (p = .326). However, ANOVA for all conditions revealed a significant difference in FluIntend (F(4,547) = 2.95, p = .020, ηp² = 0.021), and post hoc testing suggested that a correction from the CDC led to greater intention to vaccinate, compared to the ALA (p = .018). Similar results were found when controlling for the initial level of misperception. Comparing the ALA directly to other conditions was significant for Control (p = .008), CDC (p = .001), and User (p = .016), and near-significant for Library (p = .051).

As shown in Fig. 3, splitting the set of participants who had not yet vaccinated (n = 552) into two groups, those intending (Yes) or not intending (No) to vaccinate, indicates that the ALA is the only condition for which more participants did not intend to vaccinate (57.1%). This lower level of FluIntend for the ALA (see Table 3) occurred even though the ALA did not have a significantly larger initial level of MisFlu1, and in fact had a slightly lower level than the User condition. The overall MisIndex was highest in this condition, but not significantly so (see Table 1).

Fig. 3. Intention to vaccinate (binary) by condition.
4.3. Replication attempt

In mid-December, a follow-up study was run on MTurk (n = 600) to replicate the findings of the initial study and to correct for a shortcoming of that study regarding RQ4b. Asking participants to rate the trustworthiness of the source of correction only in the experimental conditions, and only for that source, did not afford points of comparison to address the question. In the follow-up, participants in all conditions were asked to rate the trustworthiness of each source (CDC, User, Library, ALA) on a scale from 0 to 5, with the option of not familiar for each. The order of presentation was randomized. This measure allowed a comparison between conditions, including Control, to determine whether exposure to a correction from a given source resulted in a lower trustworthiness rating of that source—overall and depending on the initial level of misperception. All other elements of the experiment were preserved.
4.3.1. Misperception change

The overall effect of the follow-up was not significant in reducing the misperception (p = .094), even though the average MisChange of the Control (M = 0.09, SD = 1.01) and experimental conditions (M = −0.12, SD = 1.32) were similar to the first study, and a comparison of overall MisChange between studies was not significant (p = .787). ANOVA between conditions was at the cutoff level of significance (p = .050) with no significant post hoc comparisons. Adding covariates that were significant in the first study reduced the significance here (e.g., MisFlu1: p = .073). Curiously, the condition with the greatest average decrease in MisChange (−0.32), and the only condition differing significantly from Control (p = .008), was the ALA.

Across most measures, including demographics and vaccine behavior, the two studies were similar. However, significant differences were found based on initial level of misperception (MisFlu1: p = .019; MisIndex: p = .005) and concern about getting sick, between different levels (p = .020) or when coded on a four-point scale (p = .002). The studies also differed in average intention to vaccinate (p = .001), with only 43.6% of those not vaccinated intending to do so in the second
study, compared to 55.2% in the first. Across all conditions, more participants did not intend to vaccinate, which occurred only for the ALA in the first study.

4.3.2. Impact on trust

Providing a correction to the misperception that the flu vaccine can give you the flu does not appear to significantly impact the perceived trustworthiness of the source of correction. Trust in each source was not lower when comparing the experimental condition using that source to Control (CDC: 0.760; User: 0.228; ALA: 0.879; Library: 0.167) or when comparing trust in each source across all conditions (CDC-trust: 0.355; User-trust: 0.996; ALA-trust: 0.366; Library-trust: 0.539), even while controlling for initial misperception (0.361, 0.990, 0.366, and 0.530, respectively). Overall, trust was lower in the second study (M = 3.11) than the first (M = 3.49), but the CDC remained most trusted (M = 3.90), and users least (M = 1.98).

5. Discussion

The significance and consistency of the findings of the main study allow answers to the research questions and afford other insights worth consideration. Implications of the follow-up study are discussed along with the limitations (see Section 5.6).

5.1. Social correction on social media: overall (RQ1)

As the main study demonstrated, social correction on Facebook can be effective in reducing the misperception that individuals can get the flu from the seasonal flu vaccine, even after brief exposure to a correction. This is true when comparing the control condition to all experimental conditions together, or when comparing the CDC and user conditions against the control. Compared directly, there was no significant difference between the CDC and user conditions (p = .695) (see Section 4.2.1). These findings confirm and add to those of Vraga and Bode, who found that correcting a misperception about Zika on Twitter was effective when coming from the CDC but not a user (2017b), but that a correction from a user was effective on Facebook (Bode & Vraga, 2018).

Further, just as Vraga and Bode (2017b) found that each type of correction was more effective among those higher in initial misperception, this study found greater decreases in misperception among those classified as having high initial misperception. However, there were also increases in the misperception score among those initially low. The same was found for the control condition, in the absence of any correction. The possibility that both changes are the result of testing effects should be considered in future work. What is important here is the significant decrease in misperception in the experimental conditions compared to the control, which accounts for any such effect.

5.2. Social correction on social media: library institutions (RQ2)

In the initial experiment, neither library institution was able to reduce the misperception that individuals can get the flu from the seasonal flu vaccine. This was the case when compared directly to the control or in the ANOVA with all five conditions (see Section 4.2.1). Across most tests, the library conditions were similar, as when comparing directly the misperception change (p = .999) or level of trust (p = .743). Frequency of library use had no effect on levels of misperception, the correction, or trust in libraries.

5.3. Social correction on social media: comparison (RQ3)

The results of most testing indicated that there were significant differences between the means of all five conditions. In many cases, post hoc or pairwise testing indicated that the effect was driven by the CDC compared to the control. Comparisons between the CDC and library conditions were near-significant (Library: p = .056; ALA: p = .063) (see Section 4.2.1). In a few cases, when controlling for the level of trust in the source of correction and in the source of health information, as well as when comparing conditions within each type of health source, a correction from a user was effective compared to one or both of the library conditions. However, direct comparisons were not significant.

That the CDC was more effective in reducing misperceptions than either library condition might not come as a surprise. National surveys indicate that these institutions are all highly trusted for health information (Horrigan, 2015; Kowitt et al., 2017), although these surveys do not indicate the strength of that trust. It is interesting to note, then, the significantly greater trust in the CDC compared to both library institutions. What is surprising, however, is the degree to which other users are significantly less trusted than library institutions and yet are more able to reduce levels of misperception, compared to a control (see Section 4.2.2). This seems to support Anspach and Carlson's (2018) finding that although individuals report trusting legitimate news organizations over other users, the information from the latter outweighs that of the former when encountered on Facebook.

The significance of these findings for libraries is unclear. Is it possible that trust in libraries is somehow benign, whereby they are highly regarded and trusted in the abstract, but less so for real-world health issues? Here it is worth noting that while 73% of those 16 and older say that libraries contribute to finding health information, only 10% have gone online at a library for health-related searches (Horrigan, 2015). Or does trust in libraries only operate under certain circumstances, such as when actively seeking information on a topic? To that end, it would be worth investigating the impact of libraries among those highly interested in or concerned about this (or another) issue, and how this compares to other sources. In any case, it leaves open the question of how libraries can more proactively combat misinformation where it is needed most. If libraries are not able to leverage institutional trust to combat health misinformation in the way the CDC can, in what ways can libraries leverage their trust?
5.4. Social correction and trust (RQ4)

When added as a covariate along with initial misperception, trust in the source of correction increased the significance of the ANCOVA and accounted for more of the variance in misperception change. Testing revealed significant differences in trust in the CDC only based on level of misperception (Low, Mid, High), and those differences were stronger when accounting for misperception change (see Section 4.2.2). It was not clear, however, whether those differences were preexisting or due in part to the experiment. Testing in the follow-up suggested that the latter was not the case (see Section 4.3.2), confirming what Vraga and
Bode (2017b) found for CDC and user conditions. Trust in libraries neither significantly affects nor is significantly affected by the correction.
5.5. Social correction and vaccine intention (RQ5)

Compared to a control condition, a correction of the misperception that individuals can get the flu from the seasonal flu vaccine, although effective, does not appear to lead to an increased intention to vaccinate (see Section 4.2.3). This supports previous research demonstrating that the misperception can be corrected without leading to an increase in intention (Jolley & Douglas, 2014; Nyhan & Reifler, 2015). There are numerous vaccination obstacles that may account for this hesitancy, some of which may be addressed through other means, such as convenience (Luz et al., 2017), social pressure, or planning prompts (Milkman, Beshears, Choi, Laibson, & Madrian, 2011). Addressing persistent misperceptions is an important part of these and other strategies. (For a timely overview, see Motta, Stecula, & Haglin, 2018.)

The only significant finding compared to the control or between the various conditions, across several tests, is that a correction from the ALA leads to a decrease in vaccination intention. In fact, the ALA is the only condition for which more participants did not intend to be vaccinated (see Fig. 3). The reason for this is unclear, and attempts to account for it with the available data were unsuccessful. For instance, comparing the intention to vaccinate by levels of concern about getting sick (FluConcern) was significant (F(3,548) = 44.92, p < .001, ηp² = 0.197), and post hoc tests confirmed the expectation that those very or somewhat concerned have a higher intention to vaccinate than those not very or not at all concerned (p < .001). However, when adding FluConcern as an additional factor to Condition, even while controlling for initial misperception, or comparing conditions for each level of FluConcern, post hoc and pairwise tests consistently showed participants in the ALA condition with lower intentions to vaccinate. The ALA even had the most participants who were very concerned about getting sick (31.6%), but across all levels of concern, the ALA had the lowest levels of intention.

5.6. Limitations

There are several limitations to the study. First, the initial findings were not replicated in the follow-up. Rather than casting doubt on those findings, this may indicate that the ability to correct misperceptions about the seasonal flu vaccine is season dependent. Given that the corrections were more effective among those initially higher in misperception, it is noteworthy that participants in the follow-up had significantly lower initial misperceptions. The follow-up may cast a shadow, however, on the inability of libraries to lower misperceptions, given that the ALA was the only condition to significantly do so compared to the control, although this occurred in an experiment that was unsuccessful overall. Further testing is required to assess whether the ALA can reduce misperceptions about the flu vaccine.

Concerning participants, MTurk workers have been shown to differ from the U.S. population in important ways (Walters et al., 2018), including certain psychological dimensions in ways that reflect frequent Internet users (McCredie & Morey, 2018). However, MTurk samples have been found to be more diverse than, and to perform as well as, traditional research samples, most often undergraduates. It remains to be seen, however, whether the present findings would replicate among different audiences, or with a different mechanism.

Another limitation is the simulated nature of the experiment, in which participants view a post and, in the user condition, a correction from anonymous users who are not actual friends. Even though Anspach and Carlson (2018) argue that the use of fictional individuals maintains external validity, it still leaves open the question of how a correction might function differently within an individual's own peer network. It is also doubtful that commenting on a user's post is a plausible model for engaging misinformation at scale on social media, and correcting specific misinformation may be less productive than providing accurate information (see Motta et al., 2018). Nevertheless, as Bode and Vraga (2018) argue, it may be necessary—and beneficial—to say something when you see something.

Finally, this study was limited to Facebook since it is the platform on which nearly half of Americans get their news (Guess et al., 2018) and has been found to be the most widely used platform among MTurkers (Roy et al., 2017). Although Vraga and Bode find social correction to be effective on both Facebook (2018) and Twitter (2017b), they also note important differences between the platforms and highlight the importance of studying them separately (2018). Intriguingly, they speculate that people may have different expectations of information encountered on each platform and may be more or less willing to be persuaded accordingly. This possibility merits further investigation.

6. Conclusion

One of the reasons LIS professionals are confident that they have a role to play in the fight against mis- and disinformation concerns the level of trust in libraries as public institutions, which has endured despite marked declines in trust in other traditional gatekeepers of information. Exactly how they might leverage that trust to combat misinformation remains unclear. Following recent research showing the effectiveness of observational or social corrections on social media, this study investigates whether library institutions could leverage trust in a similar way, using a common misperception about the seasonal influenza (flu) vaccine as a test case. Results of the initial, successful study suggest that libraries are not able to reduce misperceptions in this way. However, the results concern only one way, among myriad issues, that libraries and librarians might combat false information and false beliefs.

Along with supporting and extending recent research, the main value of this study is its attempt to initiate within LIS a misinformation research program that is more closely aligned with research in other fields, and to introduce a promising research tool (Amazon's Mechanical Turk) that has been overlooked. It is hoped that future discussions of how libraries might combat misinformation, in general or in relation to trust, will be more concrete, more aligned with contemporary misinformation research, and more willing to demonstrate the effectiveness of library interventions within that research program.
Acknowledgements

This study was conducted as part of the doctoral program at Simmons University, School of Library and Information Science, Boston, MA. Dr. Laura Saunders supervised the project, and Dr. Kyong Eun Oh provided statistical advice. Additional statistical support was provided by Dr. Marie Forgeard, William James College. This work was supported in part by Simmons University, through a Student Research Fund grant, December 2018.

References

Anspach, N. M., & Carlson, T. N. (2018). What to believe? Social media commentary and belief in misinformation. Political Behavior. https://doi.org/10.1007/s11109-018-9515-z
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Batchelor, O. (2017). Getting out the truth: The role of libraries in the fight against fake news. Reference Services Review, 45, 143–148.
Becker, B. W. (2016). The librarian's information war. Behavioral & Social Sciences Librarian, 35(4), 188–191.
Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65, 619–638.
Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33, 1131–1140.
Buhrmester, M. D., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5.
Buhrmester, M. D., Talaifar, S., & Gosling, S. D. (2018). An evaluation of Amazon's Mechanical Turk, its rapid rise, and its effective use. Perspectives on Psychological Science, 13(2), 149–154.
Centers for Disease Control and Prevention (2018a). Estimates of influenza vaccination coverage among adults—United States, 2017–18 flu season. Atlanta, GA: CDC. Retrieved from https://www.cdc.gov/flu/fluvaxview/coverage-1718estimates.htm
Centers for Disease Control and Prevention (2018b). Misconceptions about seasonal flu and flu vaccines. Atlanta, GA: CDC. Retrieved from https://www.cdc.gov/flu/about/qa/misconceptions.htm
Cialdini, R. B. (2009). Influence: Science and practice (5th ed.). Boston, MA: Pearson.
Ecker, U. K. H. (2015). The psychology of misinformation. Australasian Science, 36(2), 21–23.
El Rayess, M., Chebl, C., Mhanna, J., & Hage, R.-M. (2018). Fake news judgement: The case of undergraduate students at Notre Dame University–Louaize, Lebanon. Reference Services Review, 46, 129–146.
Feezell, J. T. (2018). Agenda setting through social media: The importance of incidental news exposure and social filtering in the digital era. Political Research Quarterly, 71(2), 482–494.
Garrett, R. K. (2018). The “echo chamber” distraction: Disinformation campaigns are the problem, not audience fragmentation. Journal of Applied Research in Memory and Cognition, 6(4), 370–376.
Geiger, A. (2017). Most Americans—especially millennials—say libraries can help them find reliable, trustworthy information. Washington, DC: Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2017/08/30/most-americans-especially-millennials-say-libraries-can-help-them-find-reliable-trustworthy-information/
Grohskopf, L. A., Sokolow, L. Z., Broder, K. R., Walter, E. B., Fry, A. M., & Jernigan, D. B. (2018). Prevention and control of seasonal influenza with vaccines: Recommendations of the Advisory Committee on Immunization Practices—United States, 2018–19 influenza season. Morbidity and Mortality Weekly Report, 67(3), 1–20.
Guess, A., Nyhan, B., Lyons, B., & Reifler, J. (2018). Avoiding the echo chamber about echo chambers: Why selective exposure to like-minded political news is less prevalent than you think. Miami, FL: Knight Foundation.
Harvard Health (2018). 10 flu myths. Retrieved from https://www.health.harvard.edu/diseases-and-conditions/10-flu-myths
Harvard Library (2017). Fake news, misinformation, and propaganda. Retrieved February 7, 2019, from https://guides.library.harvard.edu/fake
Hauser, D. J., & Schwarz, N. (2016). Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior Research Methods, 48, 400–407.
Horrigan, J. (2015). Libraries at the crossroads. Washington, DC: Pew Research Center. Retrieved from http://www.pewresearch.org/wp-content/uploads/sites/9/2015/09/2015-09-15_libraries_FINAL.pdf
Jacobson, L. (2017). The smell test: In the era of fake news, librarians are our best hope. School Library Journal, 63(1), 24–29.
Johnson, B. (2017). Information literacy is dead: The role of librarians in a post-truth world. Computers in Libraries, 37(2), 12–15.
Jolley, D., & Douglas, K. M. (2014). The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS One, 9, e89177. https://doi.org/10.1371/journal.pone.0089177
Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural cognition of scientific consensus. Journal of Risk Research, 14(2), 147–174.
Kowitt, S. D., Schmidt, A. M., Hannan, A., & Goldstein, A. O. (2017). Awareness and trust of the FDA and CDC: Results from a national sample of US adults and adolescents. PLoS One, 12(5), e0177546. https://doi.org/10.1371/journal.pone.0177546
Kuritzkes, A. (2017, March 22). Fake news research guide draws ire from conservatives. Harvard Crimson. Retrieved from https://www.thecrimson.com/article/2017/3/22/outlets-criticize-harvard-library-guide/
Large, J. (2017, February 6). Librarians take up arms against fake news. Seattle Times. Retrieved from http://www.seattletimes.com/seattle-news/librarians-take-up-arms-against-fake-news/
Larsen, H. (2018). The state of vaccine confidence. Lancet, 392(24), 2244–2246.
Lazer, D., Baum, M., Benkler, Y., Berinsky, A., Greenhill, K., Menczer, F., ... Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
Luz, P. M., Johnson, R. E., & Brown, H. E. (2017). Workplace availability, risk group and perceived barriers predictive of 2016–17 influenza vaccine uptake in the United States: A cross-sectional study. Vaccine, 35, 5890–5896.
Marsh, E. J., & Yang, B. W. (2017). A call to think broadly about information literacy. Journal of Applied Research in Memory and Cognition, 6(4), 401–404.
McCredie, M. N., & Morey, L. C. (2018). Who are the Turkers? A characterization of MTurk workers using the Personality Assessment Inventory. Assessment. https://doi.org/10.1177/1073191118760709
Milkman, K. L., Beshears, J., Choi, J. J., Laibson, D., & Madrian, B. C. (2011). Using implementation intentions prompts to enhance influenza vaccination rates. PNAS, 108(26), 10415–10420.
Motta, M., Stecula, D., & Haglin, K. (2018). “Fake flus!” When it comes to health, battling misinformation requires strategic thinking. Cambridge, MA: Nieman Foundation, Harvard University. Retrieved February 6, 2019, from http://www.niemanlab.org/2018/12/fake-flus-when-it-comes-to-health-battling-misinformation-requires-strategic-thinking/
Neely-Sardon, A., & Tignor, M. (2018). Focus on the facts: A news and information literacy instructional program. Reference Librarian, 59, 108–201. https://doi.org/10.1080/02763877.2018.1468849
Nowak, G. J., Sheedy, K., Bursey, K., Smith, T. M., & Basket, M. (2015). Promoting influenza vaccination: Insights from a qualitative meta-analysis of 14 years of influenza-related communications research by U.S. Centers for Disease Control and Prevention (CDC). Vaccine, 33, 2741–2756.
Nyhan, B., & Reifler, J. (2012). Misinformation and fact-checking: Research findings from social science (Media Policy Initiative Research Paper). Washington, DC: New America Foundation. Retrieved from http://www.dartmouth.edu/~nyhan/Misinformation_and_Fact-checking.pdf
Nyhan, B., & Reifler, J. (2015). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine, 33, 459–464.
Perrin, A. (2015). Social media usage: 2005–2015. Washington, DC: Pew Research Center. Retrieved from http://www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015/
Pew Research Center (2013). Trust in government nears record low, but most federal agencies are viewed favorably. Washington, DC: Pew Research Center. Retrieved from http://www.pewresearch.org/wp-content/uploads/sites/4/legacy-pdf/10-18-13-Trust-in-Govt-Update.pdf
Rochlin, N. (2017). Fake news: Belief in post-truth. Library Hi Tech, 35(3), 386–392.
Roy, H., Kase, S. E., & Bowman, E. K. (2017). Crowdsourcing social media for military operations. Proceedings of the International Workshop on Social Sensing, 2. https://doi.org/10.1145/3055601.3055606
Saunders, L., Gans-Boriskin, R., & Hinchliffe, L. (2018). Know news: Engaging across allied professions to combat misinformation. Boston, MA: Simmons University. Retrieved from http://slis.simmons.edu/blogs/disinformation/files/2018/05/White-Paper-Final-Draft-1.pdf
Southwell, B. G., Thorson, E. A., & Sheble, L. (2017). The persistence and peril of misinformation. American Scientist, 105(6). https://doi.org/10.1511/2017.105.6.372
Stewart, N., Ungemach, C., Harris, A. J. L., Bartels, D. M., Newell, B. R., Paolacci, G., & Chandler, J. (2015). The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers. Judgment and Decision Making, 10(5), 479–491.
Sullivan, M. C. (2018). Why librarians can't fight fake news. Journal of Librarianship and Information Science. https://doi.org/10.1177/0961000618764258
Vraga, E. K., & Bode, L. (2017a). Leveraging institutions, educators, and networks to correct misinformation: A commentary on Lewandowsky, Ecker, and Cook. Journal of Applied Research in Memory and Cognition, 6(4), 382–388.
Vraga, E. K., & Bode, L. (2017b). Using expert sources to correct health misinformation in social media. Science Communication, 39, 621–645.
Vraga, E. K., & Bode, L. (2018). I do not believe you: How providing a source corrects health misperceptions across social media platforms. Information, Communication & Society, 21, 1337–1353.
Wade, S., & Hornick, J. (2018). Stop! Don't share that story! Designing a pop-up undergraduate workshop on fake news. Reference Librarian, 59, 188–194. https://doi.org/10.1080/02763877.2018.1498430
Walters, K., Christakis, D. A., & Wright, D. R. (2018). Are Mechanical Turk worker samples representative of health status and health behaviors in the U.S.? PLoS One, 13(6), e0198835.
Wardle, C., & Derakshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making (Council of Europe Report DGI(2017)09). Strasbourg, France: Council of Europe.
M. Connor Sullivan is currently the librarian for collection development and planning for Widener Library, Harvard University, Cambridge, MA. He is also a PhD student in library and information science at Simmons University, Boston, MA, from which he received his MS in 2014. He also holds degrees from Harvard University, Cambridge, MA (MTS, 2007) and Oxford University, England (MSt, 2005). His current research is gravitating toward mis- and disinformation, particularly in the context of U.S. politics, and the role that information plays (or is thought to play) in political decision-making and democratic participation.