Highlights
- This paper examines how individuals make judgments and decisions when faced with fake news and how situational and social forces influence this.
- The paper uses an experimental setup to test the effect of anchoring when subjects have different levels of awareness and are confronted with sources marked by different levels of authenticity.
- Our results show that raising subjects' awareness and increasing source credibility are crucial instruments for combating fake news.
Fake News - Does Perception Matter More Than the Truth?
Peter J. Jost, Johanna Pünder & Isabell Schulze-Lohoff∗†
Abstract The present paper experimentally investigates the effect of anchoring on fake news. In particular, we test how different levels of authority and awareness influence this effect. Subjects were presented with false information as an unreasonably high anchor and had to fulfill a related estimation task afterwards. Results show that all subjects, including those who were told that the information was false, were influenced by the anchor. Furthermore, a higher level of awareness of fake news led subjects to adjust more strongly downwards from the anchor. The effect of anchoring was also reduced when subjects without prior awareness were presented with arguments that were inconsistent with the anchor information.
∗ Chair of Organization Theory, WHU – Otto Beisheim School of Management, Burgplatz 2, D-56179 Vallendar, e-mail: [email protected], phone: +49-261-6509-300.
† The authors thank the associate editor, two anonymous referees, Anna Ressi, Eberhard Feess and the seminar participants of the XVIII. GEABA Symposium for helpful comments and suggestions. All errors remain our own.
1 Introduction
Although the use of fake news is an old phenomenon, it reached the height of its popularity recently in 2016, with its influence on the U.S. presidential election, Brexit, and the European refugee crisis.1 The main reason for the recent success of fake news is the increasing popularity of the internet since the beginning of the century. It offers an uncharted medium that is uncontrollable, wide-reaching, and anonymous, thus making it perfect for the spreading of fake news; see e.g. Baym (2005), Flynn et al. (2017), or Lazer et al. (2018). Following recent studies that define the term “fake news” in similar ways, we refer to fake news as a specific type of information that is false, intended to deceive people, and designed to look like real news.2

If we consider fake news as a good in a market with a supply and a demand side, we can identify three drivers which form the basis for the current popularity of fake news.3 The first driver is the production of fake news, which has become more efficient over the last few years due to progress in digitalization. Programs such as Adobe Photoshop allow the creation of a strikingly real-looking yet fake reality; social bots automatically generate messages in social media networks and can, for example, perfectly imitate Facebook profiles; and new algorithms and big data analyses now create detailed user profiles, so that a user can be targeted directly with news in line with her general opinion on a topic. This all enables the widespread use of fake news, see Shane (2017). A second important driver comes from changes on the supply side of the market for fake news: the commercial use of the internet allows fake news authors to gain unprecedented advertisement revenues, giving them more incentives to produce fake news, see Metzger et al. (2003). The concept is simple: an article that proposes sensational, yet completely fictional news generates clicks and will be shared, thus yielding thousands of dollars in advertisement revenues per day, see Pickard (2017). It is therefore not surprising that over the past two years, the fake news industry has exploded, with professionally designed websites employing numerous people, see Sydell (2016). And thirdly, the demand side of the market has shifted over the last few years. In contrast to the older generations, the digital natives, the so-called “Generation Y”, now use online subscriptions as well as social media as their primary sources of information, see Ahlers (2006) or Gottfried and Shearer (2016).

As fake news has become a widely discussed topic in recent years, there is a growing body of literature in psychology, economics, communication and political science on this topic, see Section 2. The aim of this paper is to contribute to this research by focusing on the process of judgment formation in the context of fake news. Using an experimental approach, we want to examine how individuals make actual judgments and decisions when they are faced with fake news, and how situational and social forces influence this. The underlying assumption is that the circumstances under which people receive and process fake news are crucial for their assessments of whether it is true or not as well as for their subsequent decision making.
Building on previous literature in the social sciences and behavioral economics, we consider the following three-stage process of judgment formation in the context of fake news:

1 For early historical examples of fake news, see, for example, Eisenstein (1980), Dittmar (2011), or Gottheil et al. (2011). For recent trends in the diffusion of fake news on social media, see Allcott et al. (2019).
2 See, for example, Allcott and Gentzkow (2017, p. 213), who define fake news “to be news articles that are intentionally and verifiably false, and could mislead readers”, Pennycook et al. (2018, p. 1), who define fake news as “news stories that were fabricated (but presented as if from legitimate sources) and promoted on social media to deceive the public for ideological and/or financial gain”, or Lazer et al. (2018, p. 1094), who define fake news “to be fabricated information that mimics news media content in form but not in organizational process or intent”. See also Gelfert (2018) for a review of several other definitions of fake news and their shortcomings, or Tandoc et al. (2018) for a typology of fake news.
3 See also Acemoglu et al. (2010), Acemoglu et al. (2013), Gentzkow et al. (2016), Allcott and Gentzkow (2017), or Azzimonti and Fernandes (2018) for theoretical models of the market for fake news.
Figure 1: The process of judgment formation in the context of fake news

The process of judgment formation starts after a reader receives some fake news.4 In the first stage, the individual processing phase, the reader either makes a judgment concerning an issue that is related to the news she received before, or she makes a first credibility assessment concerning the trustworthiness of the received news. She does so according to all directly available factors. Such factors are, for example, the content of the news itself or the design of the website. Since these assessments are in general judgments under uncertainty, one of the most important heuristics when it comes to information processing in this stage is the anchoring and adjustment heuristic formulated by Tversky and Kahneman (1974).

Two factors influencing anchoring are important in this first stage, namely awareness and authority. The first factor is the level of awareness of the fake news and how it influences the receiver's anchoring mechanism. This aspect relates to the uncertainty with respect to the trustworthiness of fake news and is particularly interesting with regard to the future of fake news as people are slowly becoming more aware of the issue. Consequently, our research could provide an idea of how people will process fake news in the future. The second factor is the level of authority connected to the received information. This aspect relates to the authenticity with respect to the trustworthiness of the author of the information, and follows the observation by Lowry et al. (2014). They argue that the more authentic the context appears to the receiver, the more convinced she is that she is reading a trustworthy piece of news. This aspect also relates to the observation by Lin et al. (2016) that fake news is often believed by so many that even mainstream media picks up on it, giving it a significantly greater reach and credibility. Here, we analyze whether (deceitfully) quoting an authority figure or presenting the author himself as an authority figure could lead the reader to believe the news more readily.

As fake news is generally spread via social media networks or online platforms, receivers of fake news can usually gather additional information to confirm their initial evaluation from the processing phase. Here, we first examine how the levels of awareness and authority from the individual processing stage influence her decision on whether or not to gather new information. Provided the reader opts for additional information, there are two ways to do so: searching for additional information and/or relying on the opinion of others. We incorporate these aspects in the second and third stage of the judgment formation model.

In the second stage, the search phase, the receiver has the possibility to seek further information to confirm her evaluation from the first stage. This information search in general includes keyword searches for further articles and information about the author, but not communication with other people. This can be done in the third stage, the group phase. This phase introduces group dynamics and mimics the observation “that most users rely on others to make credibility assessments, often through the use of group-based tools”, see Metzger et al. (2010, p. 413). Likes and shares show her how many other people favor the news, and comments allow for reading different opinions on the information.
At this stage, the reader gains an overview of how the public, consisting of both friends and strangers, has reacted to the news. This third phase, like the second, is optional, as not all receivers choose to engage in a conversation about every article they read. There is also the option for the receiver to go back to the second phase after having completed the third phase, should she feel the need to conduct further research after having received new information through communication. With respect to both stages, we examine how the reader's initial assessment from the first phase will be strengthened, neutralized or weakened depending on whether she finds further confirming or contradicting information.

4 In the following, reader and receiver (henceforth, she) will be used as synonyms.
2 Related Literature
In the following section, previous research on the main topics of this paper is outlined. Specifically, anchoring is discussed in detail with regard to the two influencing factors of authority and awareness. We also review the growing recent experimental literature on fake news. Other related topics on this issue, before the term fake news was popularized, are covered by a vast academic literature in economics, psychology, political science, and communication.5

5 See Allcott and Gentzkow (2017) and the articles cited by them.

The classical study of externally provided anchors was conducted by Tversky and Kahneman (1974) using a wheel of fortune. In their experiment, participants first observed a wheel that was predetermined to stop on either 10 or 65 and were then asked to estimate the percentage of African countries that are members of the United Nations. Their finding was that subjects' judgments were influenced by the initial anchor: those whose wheel stopped on 10 guessed lower values than those whose wheel stopped at 65. This anchoring effect, due to an insufficient adjustment from the anchor, confirms that people rely too heavily on the first anchor that is set and adjust insufficiently from it. In a fake news context, this means that whatever the content and credibility of the news, it serves as an anchor. Accordingly, even if the reader reaches the conclusion that a given piece of news is not true, she might still be influenced by the news in her judgment of the topic.

While the previous study focuses on a completely irrelevant anchor, others investigate the anchoring effect in a scenario with a plausible anchor, see Strack and Mussweiler (1997) or Wegener et al. (2010). These studies yielded no significant experimental evidence that the plausibility of an anchor influences the anchoring effect. Furnham and Boo (2011) thus characterized the informational relevance of an anchor as irrelevant for anchoring in judgment formation. Glöckner and Englich (2015) criticized the previous work for insufficient manipulation and participants' subjective judgments of the relevance of an anchor. Using a direct empirical test of anchor relevance on judgments, they showed that relevant anchors yielded a stronger anchoring effect. This aspect is critical for our experiment as both the different awareness and the different authority conditions aim at manipulating the relevance of the given anchor.

The anchoring effect has proven to be strikingly robust in countless experiments, and there is little evidence of successful prevention or countermeasures, see Englich and Mussweiler (2016). However, Mussweiler et al. (2000) found in the first of their studies that by listing counter-arguments, the anchoring effect could be partially mitigated. A so-called consider-the-opposite approach was used to encourage subjects to think consciously about arguments contradicting the anchor value. While the anchoring effect still occurred in all treatments, when participants were in the consider-the-opposite treatment, it was only marginally significant. When participants were not instructed to think of counter-arguments, however, it was highly significant. This research is especially important for this paper, because the second stage of the experiment provides subjects with anchor-inconsistent arguments, aimed at testing whether the same limitations apply in a fake news context.

In a natural experiment following a release of false news, Carvalho et al. (2011) investigated its persisting effect.
When the 2002 news about United Airlines' parent company's bankruptcy resurfaced, leading people to believe the company was again bankrupt, stock prices fell dramatically. Even three trading sessions after the clarification, they had not returned to their old level. Carvalho et al. (2011) concluded from further tests that a false news shock can have a lasting effect on readers, even after the news has been corrected. Even an explicit warning about the persistency of misinformation did not lead to an elimination of the effect. It did, however, reduce the extent to which the information was believed, at least more than a mere awareness that “facts are not always properly checked before [the] information is disseminated”, see Ecker et al. (2010, p. 1087). The present experiment investigates this effect in the context of fake news where participants were previously anchored. Still, the experiment's awareness treatments aim to confirm the effect of information known to be false.

Another topic of importance with regard to fake news is source credibility. Source credibility has an enormous impact on the assessment of whether news is true or false, see Castillo et al. (2011). Moreover, people are more likely to believe the information to come from a credible source when the information is in line with their beliefs, see Fragale and Heath (2004) and Li and Sakamoto (2014). This underlines that the receiver's opinion plays an important role for the imagined credibility of a source. The effect can be explained by people's tendency to believe that their view is correct and thus, a source confirming this view has to be credible, see Fragale and Heath (2004) or Sternthal et al. (1978). Further empirical evidence of the persuasive effect of seemingly reliable sources has been provided by many other studies, see Chaiken and Maheswaran (1994), Robins and Holmes (2008) and Tormala et al. (2006). For experimental evidence on source credibility effects in the context of political misinformation, see Berinsky (2017). In addition to being a key success determinant of fake news in general, a connection can be drawn between source credibility and the degree of authority investigated in our experiment. In this case, the authority that induces the reader is not a specific person, but a website that is perceived as credible, thus giving it authority. Therefore, the higher people evaluate the authority or credibility of the source, the stronger the anchoring effect of the respective piece of news.

All in all, anchoring as well as source credibility effects have been analyzed separately in detail over the last decades. These studies provide evidence of the robustness of anchors and the positive correlation of persuasion and source credibility. Nevertheless, the combination of an anchor with different credibility treatments has not been analyzed in the context of fake news before. Moreover, several awareness levels concerning the existence of untrustworthy sources have been added in our experiments.

Related to our research are other studies which analyze the cognitive processes that contribute to a belief in false news. One explanation is confirmation bias, when people prefer information that supports their preexisting beliefs over conflicting information, see Lazer et al. (2018). Another explanation is selective exposure, when people prefer information that supports their preexisting attitudes, see Spohr (2017). The lack of analytical thinking could also be an explanation, see Pennycook and Rand (2019). In another experimental study, Pennycook et al. (2018) found that message repetition also made fake news more believable. The effect of warnings, attached to fake news to reduce the spread of misinformation in social media, was experimentally studied recently in Pennycook et al. (2019). And finally, two priming experiments also discuss interventions to combat fake news: Ladd (2010) finds that priming people to think about complaints about media coverage reduces media trust, and Van Duyn and Collier (2019) find that people are able to distinguish between real and fake news stories in the absence of elite rhetoric.
However, in the presence of elites saying that a piece of news is fake, people are significantly more likely to say that a real news story is actually a fake one.
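To make the anchoring-and-adjustment logic and the consider-the-opposite mitigation discussed above more concrete, the following minimal Python sketch simulates insufficient adjustment from a high anchor. It is purely illustrative: the adjustment rate, the noise level, and the example values are hypothetical and are not taken from any of the studies cited in this section.

```python
# Illustrative toy model of anchoring and (insufficient) adjustment.
# All parameter values below are hypothetical, not estimates from the literature.
import random

def anchored_estimate(own_belief, anchor, adjustment=0.6, noise_sd=0.1):
    """Start at the anchor and adjust only partially towards one's own belief."""
    estimate = anchor + adjustment * (own_belief - anchor)
    return estimate * (1 + random.gauss(0, noise_sd))

def consider_the_opposite(own_belief, anchor, extra_adjustment=0.2):
    """Model the consider-the-opposite intervention as additional adjustment
    away from the anchor (Mussweiler et al., 2000, found partial mitigation)."""
    return anchored_estimate(own_belief, anchor, adjustment=0.6 + extra_adjustment)

random.seed(1)
belief, anchor = 100, 800  # a low own belief confronted with a high anchor
plain = [anchored_estimate(belief, anchor) for _ in range(10_000)]
mitigated = [consider_the_opposite(belief, anchor) for _ in range(10_000)]
print(f"mean estimate without mitigation: {sum(plain) / len(plain):.1f}")
print(f"mean estimate with counter-arguments: {sum(mitigated) / len(mitigated):.1f}")
```

In this toy model, both means remain far above the unanchored belief, mirroring the robustness of the anchoring effect; listing counter-arguments only moves estimates part of the way back.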
3 The Model
In this section, we describe the different stages of the judgment formation in the context of fake news in more detail. In each of the three different stages the receiver evaluates the truthfulness of the information on a certainty scale. Our model of judgment formation has two dimensions, i.e. the receiver’s uncertainty about
the truthfulness of the information that she receives, and the different stages of the judgment formation process as discussed in the Introduction (see also Figure 2).

Figure 2: The model of judgment formation

The x-axis reflects the different stages of judgment formation: Stage 1, the individual processing phase; Stage 2, the search phase; and Stage 3, the group phase. We assume that the receiver always renders an assessment in Stage 1, but that her reassessments in the following two stages depend on whether she searches for additional information and/or whether she relies on the opinion of others. The y-axis reflects the level of the receiver's uncertainty in any given stage when assessing the truthfulness of a piece of news. During her evaluation, the receiver is either ignorant, meaning she is not able to tell whether the piece of news is true or not; certain, meaning she feels certain that the piece of news is either fake or true; or uncertain, meaning she is unsure about the level of credibility.

In Stage 1, the reader renders a first assessment concerning the truthfulness of a piece of news that she received. At this stage, she only relies on the information that the piece of news contains and does not search for further information on the topic or take the opinion of others into account. If, due to lack of information, it is impossible for the receiver to build a first opinion regarding the level of credibility, she finds herself in the ignorance zone. A receiver in the uncertainty zone can rely on at least some information for assessing whether the piece of news is true or not, but she cannot form a final judgment without further research. If the receiver is certain that the piece of news is false or true, for example because she is an expert in the field, she is in the certainty truth or certainty fake zone. However, note that being in the certainty truth or certainty fake zone does not necessarily imply that the receiver's assessment is actually correct. Importantly, receivers could also be in the certainty truth zone because they are unaware that the piece of news could be fake.

In Stage 2, the receiver chooses whether or not to engage in further research to collect more information. Readers from all three zones (ignorance, uncertainty, certainty) may deem it to be unworthy of their time to research the article's background. If a reader decides to conduct further research, she might find new information that confirms her opinion or contradicts it, which might send her in a completely new direction. In the context of fake news, the receiver might also be unsuccessful in finding further information. In any case, the reader still makes a new judgment based on the initial piece of news and the additional information found in Stage 2. Depending on whether the reader was able to find further confirming or contradicting information, her initial assessment from the first stage is either strengthened, neutralized or weakened, relocating her on the spectrum of the certainty levels as indicated by the five grey boxes in the search phase of Figure 2.

In Stage 3, the receiver may choose whether or not to also consider the opinion of others about the piece of news. This stage introduces group dynamics which potentially have the power to overthrow the receiver's previous opinion on the credibility of the piece of news.
As a result, her current judgment, which she either developed in the first stage (only based on the piece of news) or in the second stage (based on the piece of news and the additional information), will be influenced by the receiver’s choice in Stage 3. With regard to the context of fake news, this stage can be interpreted as the group phase. The opinion and feedback from other people may further influence the receiver’s assessment on whether the piece of news is trustworthy or not. Moreover, the receiver has the possibility to search again for additional information in Stage 2 before she renders her final assessment.
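As a compact summary of the model, the sketch below encodes the three stages and the certainty zones as simple data structures and records a receiver's sequence of (re)assessments. It is only an illustration of the terminology introduced in this section; the types and the example trajectory are assumptions of this sketch and carry no behavioral theory.

```python
# A minimal encoding of the judgment-formation model described above.
# Stage and zone names follow the text; everything else is an assumption
# of this sketch (it implements no behavioral theory).
from dataclasses import dataclass, field
from enum import Enum

class Zone(Enum):
    CERTAINTY_FAKE = "certainty fake"
    UNCERTAINTY = "uncertainty"
    IGNORANCE = "ignorance"
    CERTAINTY_TRUTH = "certainty truth"

class Stage(Enum):
    PROCESSING = 1  # individual processing phase (always entered)
    SEARCH = 2      # optional search phase
    GROUP = 3       # optional group phase

@dataclass
class Receiver:
    zone: Zone = Zone.IGNORANCE
    history: list = field(default_factory=list)

    def assess(self, stage: Stage, new_zone: Zone) -> None:
        """Record the (re)assessment made in a given stage."""
        self.zone = new_zone
        self.history.append((stage, new_zone))

# Example trajectory: uncertain after reading, still uncertain after searching,
# certain the news is fake after hearing friends' opinions.
reader = Receiver()
reader.assess(Stage.PROCESSING, Zone.UNCERTAINTY)
reader.assess(Stage.SEARCH, Zone.UNCERTAINTY)
reader.assess(Stage.GROUP, Zone.CERTAINTY_FAKE)
print(reader.history)
```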
4 Experimental Set-Up
The experiment was conducted with 240 undergraduate and graduate WHU students aged between 19 and 25 (on average 21 years old). Subjects had no prior knowledge about the topic of the experiment. They were instructed to provide an evaluation regarding the value of the main train station of Koblenz, which is located in the south-west of Germany. We chose the task of evaluating a train station because we wanted subjects to be in a state of ignorance at the beginning of the experiment, which means that it should be unlikely that they have any background knowledge about the value of this object.6 Subjects had to repeat their evaluations after every stage. This serves as the basis for comparison between the treatment groups. They were promised a reward in the form of chocolate, should they guess correctly within a margin of error of +/- 10% of a supposedly correct value. The reward was chosen to symbolize a mild but not too high interest in the topic, comparable to a situation in which a subject wants to find out whether a news article is true when there is no direct incentive to do so.

While the instructions and answer fields were provided in an online form, the experiment was administered personally for three reasons. First of all, optional additional information was partly distributed in print, as will be explained in the following. Secondly, the presence of the experimenters prevented subjects in Stage 2 from conducting further research online. And third, the personal administration of the group phase prevented subjects from communicating with others about the piece of news that they received.

Stage 1: To study the effect of fake news in the process of judgment formation, subjects received additional but false information before they were asked to provide their first assessment. This additional information, in the form of a fake news story, claimed that the Deutsche Bahn (DB) had bought a comparable main station in the Netherlands for €800m. The content of this article was entirely untrue, and the high number was intended to serve as an anchor to mislead subjects towards choosing an unreasonably high estimate.7 In order to determine the effect of the anchor, we used a control treatment, where subjects were also instructed to evaluate the train station as described above, but without the additional article that includes the anchor.

Since one of our aims was to examine how much the given anchor would affect participants with regard to the level of awareness of fake news and with regard to how much authority they attributed to the publisher of the piece of news, we randomized participants into six treatment groups and primed them accordingly before they made their first assessment. The different treatment groups correspond to the type and source of additional information given to subjects. There were three different treatments pertaining to the level of awareness that the topic of interest in the experiment could be fake news, see Table 1. The first awareness level was No Awareness, which means that subjects were not previously informed about the topic of the experiment and received no hints that the information given to them might be untrue. The second awareness level was Awareness, which included an article about fake news and how nowadays people often do not check facts. Thereby, we increase participants' awareness of fake news, which should be of relevance for their evaluations. The third awareness level was Certainty Fake.
Subjects in this awareness level were informed that the piece of news they received was completely false.

6 We consciously chose the evaluation of a German train station instead of an unknown object, because we wanted to study the dissemination of fake news concerning an object that is not too abstract but as close as possible to the spread of fake news in real life. We also decided against an actual real-life example, because news that is really published is likely to be known by some of the participants, and this knowledge would have biased our results.
7 In contrast to other experimental studies on fake news, see e.g. Pennycook et al. (2018, 2019), which select false news from Snopes.com, a third-party website that fact-checks news stories, the story about Deutsche Bahn was constructed entirely by ourselves, and not a false story we obtained in some other way.
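The following sketch illustrates the assignment into the six awareness-by-authority cells and the +/-10% reward rule described above. It is a simplified reconstruction rather than the software actually used: the random seed, the helper names, and the placeholder "supposedly correct value" are assumptions of this sketch, and the control group evaluated without the anchor is not modeled.

```python
# Illustrative reconstruction of the 3 (awareness) x 2 (authority) assignment
# and the +/-10% reward rule. Seed, helper names, and the placeholder value
# below are assumptions of this sketch, not reported parameters.
import itertools
import random

AWARENESS = ["No Awareness", "Awareness", "Certainty Fake"]
AUTHORITY = ["Authority", "No Authority"]

def assign_treatments(subject_ids, seed=42):
    """Randomly split subjects into the six treatment cells in equal shares."""
    cells = list(itertools.product(AWARENESS, AUTHORITY))
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    per_cell = len(ids) // len(cells)
    return {cell: ids[i * per_cell:(i + 1) * per_cell] for i, cell in enumerate(cells)}

def earns_reward(estimate, supposed_true_value, margin=0.10):
    """Reward (chocolate) if the estimate lies within +/-10% of the 'correct' value."""
    return abs(estimate - supposed_true_value) <= margin * supposed_true_value

# The control group (no anchor shown) is handled separately and not modeled here.
assignment = assign_treatments(range(240))
print({cell: len(ids) for cell, ids in assignment.items()})     # 40 subjects per cell
print(earns_reward(estimate=120.0, supposed_true_value=111.0))  # placeholder value
```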
With regard to authority, there were two treatments differing in the respective information about DB's strategy and acquisition plan, see Table 1. The first authority level was Authority, which means that subjects received a piece of news that was supposedly published by an authority figure. In this condition, it was claimed that the DB themselves had released the news in their annual report.8 The second level of authority was No Authority. In this treatment subjects received a piece of news that was supposedly based on rumors. In particular, subjects were merely instructed that the information came from someone on the street, which means that it was not the DB itself that had released the piece of news but someone without authority.

After having been primed with the respective level of awareness and authority, we asked participants to provide their first estimate of the main station's value in €m. Also, they had to self-assess how certain they were that their answer was correct by reporting how likely they thought it was that they would receive the reward. This report served as an indicator for whether participants believed the article or not. For subjects who estimated the value of the main station to be close to the anchor, a high level of certainty would indicate that they believed the piece of news.

Stage 2: In the second stage, subjects were presented with the option of spending approximately two minutes of their time scanning further articles about the information they received in Stage 1. Subjects could choose between 8 articles in total, which were numbered from 1 to 8. Whereas two of these articles were confirmatory of DB's plans, another two questioned them and four more had nothing to do with the plans. The two critical articles specifically mentioned that the price of €800m would be “ridiculously high” and one of them, a blog post, even called the first information “fake news”. These two articles were intended to drive subjects towards the certainty fake zone while the confirmatory ones did the opposite. This second stage was designed to reflect the research people can choose to engage in when they want to find out whether a piece of news is true or false. The effort and time requirement of reading further articles, some of which are not even relevant, represents the opportunity costs of doing further research. The partly confirmatory articles represent the fact that many authors of fake news publish the same story in different outlets in order to achieve greater credibility, see Metzger (2007). The contradicting articles represent the fruitful part of the search for further information and may help subjects in their following estimations. They were designed to follow a consider-the-opposite approach like the one Mussweiler et al. (2000) used in their experiment.

Of course, such additional information only influences subjects' first judgments if the corresponding confirming or contradicting articles are of relevance. For our real-life example of Deutsche Bahn and its acquisition of a train station, this implies that we had to give participants articles that match the cover story. We therefore photoshopped these articles in a way such that they looked as real as possible. However, we did not use deception, as each article conveyed all the relevant information a subject may want to have. In particular, each article that was designed to look like an original also conveyed the information that it was fake.
The “BILD” article, for example, was in English although the newspaper is published only in German. Also, the URL of the fake Fox News website read “foxnews.ru.co” instead of “foxnews.com”, see Appendix 1.9

8 Since we used false news as treatment variables, we had to make the piece of news in the authority treatment look as real as possible to simulate a situation in reality.
9 In the terminology of Krawczyk (2019), such a message can be classified as intentional and complete and constitutes no deception at all. In the notion of McDaniel and Starmer (1998), we used the photoshopped articles as treatment variables and our procedure was therefore one form of “economy with the truth”, that is, we avoided telling the participants something which is true, but we did not actively mislead subjects. Also, according to Hey (1998), “there is a world of difference between not telling subjects things and telling them the wrong things. The latter is deception, the former is not”.
Subjects could, however, also choose to continue without reading the information as, in real life, people also have the option of accepting what they read without questioning it. Either way, they were again asked to estimate the value of the main station and the certainty with which they expected to receive a reward in Stage 2. Additionally, subjects were asked to provide the number of the article that they found most helpful so that we could analyze which articles were most relevant.

Stage 3:
In Stage 3, each treatment group from the 2x3 matrix of different authority and awareness levels
of Stage 1 was further split in half (see Table 1).

Table 1: Treatment Groups

One half of the participants of each treatment was in the Confirmatory Friends condition. They received the information that, had they been discussing the task with their friends, they would have found out that their friends had made a similar estimate. This treatment was intended to reassure subjects regarding the certainty zone they were tending towards. The other half of the subjects was in the Contradictory Friends condition. They received the information that, had they been discussing the task with their friends, their friends would have made an estimate much lower than their own. This treatment was intended to push subjects towards uncertainty or ignorance. Afterwards, subjects were again asked to enter their new estimate and rate their certainty. Since the experiment was administered personally and the presence of the experimenters prevented subjects from communicating with others, participants were aware that the confirmatory and contradictory friends conditions were embedded in a hypothetical background story.

This stage was designed to put subjects in a setting with conformity pressure. If subjects that had previously rated their certainty regarding the correctness of their estimate as high changed their estimate due to friends having a different opinion, this could indicate their sensitivity to conformity pressure.

Figure 3: Experimental set-up in the model of judgment formation

Figure 3 graphically summarizes where subjects are situated within our model of judgment formation in the different phases of the experiment. In the first phase, subjects in the Certainty Fake condition will be situated in the certainty fake zone of the model. Subjects in the Awareness condition will be in the uncertainty or ignorance zone. Some might already suspect that the piece of news does not sound reasonable, some might not have a tendency yet, and some might lean towards believing the news. Subjects in the No Awareness condition are expected to be situated somewhere between ignorance and certainty truth. As they did not receive any priming, many of these subjects will not consider the option of the news being fake, and thus start with a high perceived certainty of the news being true. Others might be careful with their judgment regardless of priming and thus be in the ignorance or uncertainty zone.

Concerning subjects' search in Stage 2, the two partly contradicting articles (BILD, Blog) push subjects more towards the certainty fake zone, whereas the two partly confirming articles (Daily News, Fox News) lure subjects towards the certainty truth zone. Finally, the group phase then pictures the two treatments after Stage 3: the Confirmatory Friends condition should position subjects in the certainty fake or certainty truth zone, whereas the Contradictory Friends condition should drive subjects away from the certainty zones. Note that the ordering of Stages 2 and 3 was not fixed in our experimental set-up, since both stages were optional and participants could go back to the second phase after having completed Stage 3.
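As a compact overview of the treatment structure just described, the sketch below enumerates the six awareness-by-authority cells of Stage 1, each split in half for the Stage 3 friends manipulation. The cell sizes follow the n = 40 (Stage 1) and n = 20 (Stage 3) figures reported in the Limitations section; the layout is an assumption of this sketch and is not Table 1 itself.

```python
# Bookkeeping sketch of the full treatment structure (not Table 1 itself).
# Cell sizes follow the n = 40 / n = 20 figures reported in the Limitations section.
from itertools import product

AWARENESS = ["No Awareness", "Awareness", "Certainty Fake"]
AUTHORITY = ["Authority", "No Authority"]
FRIENDS = ["Confirmatory Friends", "Contradictory Friends"]

design = [
    {"awareness": aw, "authority": au, "friends": fr, "n": 20}
    for aw, au, fr in product(AWARENESS, AUTHORITY, FRIENDS)
]

for cell in design:
    print(f'{cell["awareness"]:<14} | {cell["authority"]:<12} | '
          f'{cell["friends"]:<21} | n = {cell["n"]}')

print("Stage 1: 6 cells with n = 40 each; Stage 3: 12 cells with n = 20 each.")
```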
5 Hypotheses
The first set of hypotheses concerns subjects' estimates across the awareness treatments. In treatment No Awareness, we hypothesize that subjects will not consider the option that the news could be false. The only reason why they would adjust their estimate away from the anchor is that they were evaluating a different train station. In treatment Awareness, we hypothesize that some subjects will think that the information could be fake with a positive probability. This will lead them to adjust their estimate further downwards from the given anchor than those in treatment No Awareness. We further hypothesize that subjects in treatment Certainty Fake will still be influenced by the anchor as they do not have any other information they can depend on.

H1a: Subjects in the No Awareness condition will adjust the least from the anchor.
H1b: Subjects in the Awareness condition will adjust less from the anchor than those in the Certainty Fake condition.

The second hypothesis concerns subjects' estimates across the different authority treatments. In treatment Authority, we hypothesize that subjects will trust the information more than when they hear it from someone on the street (No Authority). This would lead them to adjust less than those in the treatment without authority.

H2: Subjects in the Authority condition will adjust less from the anchor than those in the No Authority condition.

The third hypothesis is concerned with the second stage of the experiment. We hypothesize that the condition in which most subjects will choose to read further information will be the Awareness condition. Subjects in this condition are more aware of the possible fakeness and will therefore want to find out more. The second-largest number of subjects choosing to read further information will be in the No Awareness condition. The third will be in the Certainty Fake condition, merely because in this case there is no potential gain from reading further information.

H3: In the Awareness condition, more subjects will want to look for further information than in the No Awareness condition. Subjects in the Certainty Fake condition will conduct the least further research.

The fourth hypothesis is concerned with the third stage of the experiment. Those subjects who are in the Confirmatory Friends treatment will have no reason to adjust their estimate. Those who are in the Contradicting Friends treatment, however, will adjust their estimate downwards as they were told that their friends' estimate was lower than theirs.

H4a: Those subjects whose friends made a similar estimate will not adjust their estimate.
H4b: Those subjects whose friends made a lower estimate will adjust their estimate downwards.

The fifth hypothesis concerns the development of subjects' estimates throughout the process of judgment formation. We hypothesize that gaining additional information in each stage will help subjects to understand that the anchor was set too high and thus lead them to adjust their estimate downward.

H5: As subjects gain more information throughout the stages, they will adjust their estimate away from the anchor.
The following hypotheses pertain to the certainty level. Hypothesis 6 is concerned with subjects' certainty of whether they made a good estimate in the first stage. We hypothesize that subjects in the No Awareness condition will be most certain of their answers in the first stage because they are not aware of the fact that fake news might be involved in the experiment.

H6: Subjects in the No Awareness condition will be most certain of their answers in the first stage.

We further hypothesize that subjects will exhibit an authority bias, meaning those in the Authority condition will be more certain of their answers than those in the No Authority condition.

H7: Subjects in the Authority condition will be more certain of their answers than subjects in the No Authority condition.

As subjects gain more information, they will be better equipped to make an accurate estimate and thus, their certainty will increase from the first to the second stage. In the Confirmatory Friends treatment, subjects will feel confirmed in their estimate because their friends have made a similar one. This will increase their certainty of their estimate from Stage 2 to Stage 3. In the Contradicting Friends treatment, however, subjects will feel less certain about their estimate because their friends have made a much lower one.

H8a: Certainty will increase from Stage 2 to Stage 3 for subjects in the Confirmatory Friends condition.
H8b: Certainty will decrease from Stage 2 to Stage 3 for subjects in the Contradicting Friends condition.
6 Results and Discussion
First of all, the control treatment significantly differed from all other treatments, see Appendix 2. While the mean estimate of the control treatment was €110.75m, even the lowest of all remaining treatments, i.e. Certainty Fake/No Authority, exhibited an average estimate of €213m, see Table 2. This means that an anchoring effect occurred in all treatments. For detailed tables and statistical analyses of all results, please refer to Appendices 2-14.

Table 2: Experimental Results – Estimate Mean Stage 1 and 2

Our results with regard to the different awareness levels are partly as expected. H1a can be confirmed as subjects in the No Awareness condition were significantly (at the 1% level) more influenced by the anchor than those in the other awareness conditions across both authority levels, see Appendix 3. However, H1b has to be rejected as there is no significant difference between the Awareness and the Certainty Fake condition, see Appendix 4 and 5. This could mean that an increased awareness induces subjects to reflect upon the given information and realize the obvious fallacies in the article.

R1a: Subjects' adjustments from the anchor were the smallest in the No Awareness condition.
R1b: There was no significant difference between adjustments in the Awareness and Certainty Fake condition.

There is no significant evidence for the second hypothesis with regard to the difference in the two authority conditions. While there is a tendency for subjects in the Authority condition with the DB announcement to be slightly more influenced by the anchor in all awareness conditions, these differences are not statistically significant, see Appendix 4 and 5. The insignificant results could have been caused by the anchoring effect merely overshadowing the authority bias.
R2: In the Authority condition, subjects did not adjust significantly less than in the No Authority condition.

With respect to the third hypothesis, subjects in the Awareness condition indeed decided to read further information more often than those in the other conditions. Overall, 60 subjects from the Awareness condition, 50 from the No Awareness condition and only ten out of 80 from the Certainty Fake condition read the provided articles, see Appendix 6. We attribute the decision to read the articles by those from the Certainty Fake condition to either curiosity or a misunderstanding of the purpose of the additional information, as it provided no further benefit to them.

R3: Most subjects looked for further information in the Awareness condition, second most in the No Awareness condition, and almost none in the Certainty Fake condition.

Hypothesis 4a, namely the adjustment from Stage 2 to Stage 3 in the Confirmatory Friends condition, can be accepted. There were either no differences at all, or, in three cases, small and insignificant differences between the second and third stage in all awareness levels and both authority conditions, see Appendix 7. This is in line with the fact that subjects have no reason to change their estimate when their friends agree with them. Hypothesis 4b, representing the Contradicting Friends condition, can also be accepted. Subjects adjusted downwards from Stage 2 to Stage 3 across all conditions. In all but one treatment, the change was significant at the 1% or 5% level. In the Certainty Fake/Authority condition, this downward adjustment was marginally insignificant. This is surprising, especially since we see no reason why participants in this treatment should behave differently than the others with regard to this hypothesis, see Appendix 8. Since there were large outliers in this group, however, we do not attribute meaning to the insignificance of changes in this but not in other treatments. Overall, the results indicate that subjects are sensitive to conformity pressure.

R4: Except for the Certainty Fake/Authority condition, subjects' estimates are influenced by their friends' estimates.

In order to evaluate the fifth hypothesis, we compare the differences between the estimates in the three stages of our experiment, see Table 3. Because the search for information in Stage 2 as well as friends' opinions in Stage 3 could be a source for changes in subjects' estimates, we conducted a paired t-test for repeated-measure designs between the estimates in the first and second stage and the second and third stage. The comparison between the first and second stage shows that only the difference in the No Awareness condition was significant, see Appendix 9. This is reasonable as subjects in the No Awareness condition realize only in the second stage that fake news could be involved and were therefore expected to adjust their estimate further from the anchor than subjects in the other two awareness conditions. Moreover, it is interesting that the change in the second stage, when further split into those participants who decided to read further information and those who did not, is only significant for those participants who read further information, see Appendix 10. With regard to the articles the subjects found most relevant for their next estimate, there was no correlation between the number of the article indicated as most helpful and the change in the estimate.
R5: As subjects gained more information throughout the stages, they adjusted their estimate away from the anchor.

Table 3: Experimental Results – Difference Stages 1 to 2 and 2 to 3
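To make the analysis step transparent, the snippet below shows how a paired (repeated-measures) t-test between the Stage 1 and Stage 2 estimates can be run in Python. The eight estimates are invented for illustration only; they are not the experimental data, and the generic scipy call is a sketch rather than the authors' actual analysis script.

```python
# Illustrative paired t-test between Stage 1 and Stage 2 estimates.
# The values below are made up; they are not the experimental data.
import numpy as np
from scipy import stats

stage1 = np.array([450, 380, 620, 500, 290, 710, 400, 550])  # hypothetical estimates in EUR m
stage2 = np.array([410, 300, 580, 450, 260, 650, 390, 500])  # same subjects after the search phase

t_stat, p_value = stats.ttest_rel(stage1, stage2)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```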
With regard to the sixth and seventh hypothesis, only the certainty level in the No Awareness/Authority condition showed a tendency towards higher certainty, whereas the other treatments did not, see Appendix 11. This suggests that the increased certainty due to a lack of awareness of fake news or an authority bias by itself is not sufficient to cause an upward tendency. However, combining the two effects, a slight difference compared to the other treatments can be observed.

R6: Subjects in the No Awareness condition were not significantly more certain of their answers than those in the other awareness conditions.
R7: Subjects in the Authority condition were not significantly more certain of their answers than subjects in the No Authority condition.

Hypothesis 8a can be accepted as the level of certainty increases over the course of the three stages. While there is no or little change in certainty in Stage 2, certainty increases in Stage 3, confirming the hypothesis that subjects feel reassured by their friends having the same opinion. In Hypothesis 8b, we expected that subjects would feel less certain if their friends had a different opinion than them. However, the results showed that subjects' certainty remained steady with no significant changes, see Appendix 11. This could be explained by two effects balancing each other out. There could be a negative effect on certainty as subjects were told that they were wrong in their judgment. However, there could also be a positive effect because they nevertheless gained new information in the third stage. Had their friends merely told them that their estimate was wrong without giving a direction for adjustment, the certainty might have decreased.

R8: Subjects are more certain of their opinion when their friends agree with them. When their friends disagree with them but give them a direction for adjustment, subjects remain steady in their certainty of their answer.

Another interesting finding in addition to the hypotheses so far is that, in general, women in all treatments were more influenced by the anchor than men for the first estimate, see Appendix 12. These results are in line with previous experimental studies, for example Barber and Odean (2001) or Bengtsson et al. (2005). However, this finding changes over the three stages, during which women adjust more than men. On the one hand, this indicates that additional information and the authority bias, as well as friends' opinions, seem to have a stronger effect on women. On the other hand, it could mean that men are more certain of their estimates, which is true for almost all treatments. In five out of six treatments, there is a tendency for men to be more certain of their estimates, on average, than women. Only in the No Awareness condition do women appear to be slightly more confident in their estimates than men, see Appendix 13 and 14. Gender differences are particularly interesting with regard to fake news as they are a personal factor that allows for separation of people into target groups. This is relevant because microtargeting of voters was an important factor in the U.S. election, which enabled campaign managers to target people more specifically than ever before, see Kruikemeier et al. (2016). The same technology could be used to target specific groups of readers.

R9: Men appear to be more confident in their estimates than women, yet there is no significant evidence for this.
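As a back-of-the-envelope illustration of the size of the anchoring effect reported above, the short calculation below relates the lowest treatment mean to the control mean and the €800m anchor. Expressing the shift as a share of the control-to-anchor distance is a common way to summarize anchoring; it is not a statistic reported by the authors and is included here only for orientation.

```python
# Back-of-the-envelope calculation using the means reported in this section.
control_mean = 110.75    # EUR m, control treatment (no anchor shown)
treatment_mean = 213.0   # EUR m, lowest treatment mean (Certainty Fake / No Authority)
anchor = 800.0           # EUR m, price claimed in the fabricated article

shift = treatment_mean - control_mean
relative_shift = shift / (anchor - control_mean)  # share of control-to-anchor distance
print(f"shift towards the anchor: {shift:.2f} EUR m "
      f"({relative_shift:.1%} of the control-to-anchor distance)")
```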
7 Limitations
Some limitations have to be taken into account in order to understand the results correctly. First of all, the sample consisted of students with a similar background. Another drawback of the sample is connected to
the intentional setting of complete ignorance. None of our subjects had previous experience with evaluating a train station, leading to a large spread of estimates. Large outliers could have also been caused by the relatively small number of participants per treatment (n = 40, for Stage 3 n = 20 respectively).

With regard to the different awareness treatments, the Certainty Fake condition was very clear. In the No Awareness condition, subjects were not aware of fake news. However, they might have also expected the piece of news they were given to be true merely because they did not expect to be lied to in an experiment. With regard to the Awareness condition, however, it is difficult to prime subjects with the right level of awareness in an experiment. In real life, even if people are aware of fake news, they do not receive a reminder of that directly before reading it. This could be one of the reasons why we could not find significant differences between the Awareness and the Certainty Fake treatment with regard to the magnitude of anchoring effects. An alternative procedure could be to prime subjects more subtly by showing them different articles as a priming device, only one of which would have been about fake news.

A further limitation concerning anchoring is the uncontrollability of self-generated anchors. Subjects, even though they had as little prior knowledge as possible, might have thought of instances when they heard a value connected to the topic and used this value in order to make a better estimate. As our subjects were German, they could have, for example, thought of “Stuttgart 21”. The construction of this new train station, including underground railways, is taking longer and costing more than expected, which is a topic often debated in German mainstream news. Subjects might therefore have taken the exorbitantly high estimate for the total costs of the station of €10b as of 2016 as a self-generated anchor, biasing them upwards. This could explain large outliers.

In the second stage of our experiment, optional articles were supposed to represent the search for further information. Even though some of these had nothing to do with the first story, in a real information search it would not have been as easy to find the relevant articles. Furthermore, subjects read all the given articles, whereas in real life the search process is often terminated earlier. Our provision of incentives was chosen so that subjects would have a slight motivation to find out the truth. In real life, however, there is not always such an incentive, as subjects do not always read articles for a specific purpose. Especially in the context of fake news, it is usually the sensational headline rather than a previously formed motivation that leads the reader to explore the topic discussed in the respective article.

Furthermore, even though the experiment had to be conducted in person due to the optional additional information and to ensure that participants would not use other information sources than those provided by us, an obvious drawback of this is an increased active participation effect. Due to the presence of the experimenters, subjects might have felt obliged to read the additional information (even though, of course, they were not intentionally pushed towards reading it). This means that, especially with regard to the second stage, more participants might have chosen to read further information on the topic than people in real life.
In the third stage, a conversation with a friend is simulated in a highly simplified manner. The obvious limitation when comparing this to real life is that subjects were not able to reply and have a real conversation with a friend. In a conversation, arguments for and against believing the information from the beginning and reasons for why each friend reached their estimate would be exchanged. Another limitation of our analysis concerns the use of a non-abstract stimulus. As in other experimental studies on fake news, see e.g. Pennycook et al. (2018, 2019), subjects in our study were presented with stimuli that try to look like real news. The advantage of our concrete story of Deutsche Bahn as a non-abstract stimulus was to make the setting easier to understand and less contrived to the participants. However, given that context is likely to matter because of strong connotations and normative associations, the use of an
abstract context-free stimulus may allow more generality of the experimental findings.
8 Conclusion
Overall, the results of our experiment are in line with the findings discussed in the literature review. This implies that the effect of anchoring as well as the effects of awareness and authority on anchoring are applicable to fake news. There are manifold real-life implications of these results. The most important finding with regard to the dangers of fake news is that subjects in the Certainty Fake condition were still anchored by the news. The corresponding real-life implication is that fake news can still influence the reader even though she knows it is false. This is problematic as it is difficult to undo anchoring effects once they have occurred, see LeBoeuf and Shafir (2009) or Wright and Anderson (1989). However, as the literature and the second stage of the experiment showed, a consider-the-opposite approach can be useful for mitigating anchors, see Mussweiler et al. (2000).

Nevertheless, awareness is a factor that was proven to decrease the anchoring effects for fake news in our experiment. A higher level of awareness also led to a higher willingness to search for additional information. This shows the importance of increasing people's awareness as a crucial instrument for combating fake news. In this respect, our results further demonstrate that a mundane level of awareness from following the public debate around fake news is not enough to reduce the reader's bias. The experiment was conducted in an environment where knowledge about fake news is widespread. Thus, even subjects in the No Awareness condition were likely to have some level of awareness of fake news. Consequently, it is important that awareness is spread and targeted more directly and profoundly. One measure to achieve this could be to include the evaluation of source credibility more extensively in the curriculum at schools.

As this is one of the first papers examining the effect of fake news, future research directions are abundant. Since our subjects were drawn from a pool with a fairly homogeneous background, it would be important to investigate the effects of anchoring and fake news in a more diverse group of subjects. Age in particular could be a highly influential determinant of how easily subjects believe fake news. As older subjects are not digital natives, they are likely to access, or at least to have previously accessed, news in a more traditional manner, such as print. This could make them more sensitive to sensationalism and lack of source credibility. Building on this, it should be investigated how the preferred medium for accessing news is connected to the susceptibility to fake news. Furthermore, as anchoring has been proven to exert an effect even on experts, see Mussweiler et al. (2000), it would be interesting to see whether the same is true if the anchor is additionally made less relevant through a fake news setting. Moreover, further research directed at the question as to which personal factors cause a person to be more susceptible to fake news should also be conducted.

Furthermore, the anchoring effect tested in this experiment is based on a numerical anchor. It remains to be seen whether qualitative aspects of fake news trigger a similar effect in the reader. There is no prior research in this direction, yet it could be interesting to test whether exposure to disparaging information about someone can impair people's view of that person even if the information is known to be untrue.
Lastly, other heuristics and biases, such as the availability and representativeness heuristics, the two other well-known heuristics described by Tversky and Kahneman (1974), might influence information processing just as much as anchoring does. The present research does not aim to assign precedence to anchoring over other heuristics; it should merely be seen as a starting point. The various biases resulting from these and other heuristics should therefore be investigated in future research.
9 References
Acemoglu, D., Ozdaglar, A., & ParandehGheibi, A. (2010). Spread of (mis)information in social networks. Games and Economic Behavior, 70(2), 194-227.
Acemoglu, D., Como, G., Fagnani, F., & Ozdaglar, A. (2013). Opinion fluctuations and disagreement in social networks. Mathematics of Operations Research, 38(1), 1-27.
Ahlers, D. (2006). News consumption and the new electronic media. Harvard International Journal of Press/Politics, 11(1), 29-52.
Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the diffusion of misinformation on social media. Research & Politics, 6(2), 2053168019848554.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.
Azzimonti, M., & Fernandes, M. (2018). Social media networks, fake news, and polarization (No. w24462). National Bureau of Economic Research.
Barber, B. M., & Odean, T. (2001). Boys will be boys: Gender, overconfidence, and common stock investment. The Quarterly Journal of Economics, 116(1), 261-292.
Baym, G. (2005). The Daily Show: Discursive integration and the reinvention of political journalism. Political Communication, 22(3), 259-276.
Bengtsson, C., Persson, M., & Willenhag, P. (2005). Gender and overconfidence. Economics Letters, 86(2), 199-203.
Berinsky, A. J. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47(2), 241-262.
Carvalho, C., Klagge, N., & Moench, E. (2011). The persistent effects of a false news shock. Journal of Empirical Finance, 18(4), 597-615.
Castillo, C., Mendoza, M., & Poblete, B. (2011). Information credibility on Twitter. In Proceedings of the 20th International Conference on World Wide Web.
Chaiken, S., & Maheswaran, D. (1994). Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology, 66(3), 460-473.
Dittmar, J. E. (2011). Information technology and economic change: The impact of the printing press. The Quarterly Journal of Economics, 126(3), 1133-1172.
Ecker, U. K., Lewandowsky, S., & Tang, D. T. (2010). Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & Cognition, 38(8), 1087-1100.
Eisenstein, E. L. (1980). The printing press as an agent of change (Vol. 1). Cambridge University Press.
Englich, B., & Mussweiler, T. (2016). Anchoring effect. In Cognitive Illusions: Intriguing Phenomena in Judgement, Thinking and Memory (pp. 223-240).
Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38, 127-150.
Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1), 35-42.
Fragale, A. R., & Heath, C. (2004). Evolving informational credentials: The (mis)attribution of believable facts to credible sources. Personality and Social Psychology Bulletin, 30(2), 225-236.
Gelfert, A. (2018). Fake news: A definition. Informal Logic, 38(1), 84-117.
Gentzkow, M., Shapiro, J. M., & Stone, D. F. (2015). Media bias in the marketplace: Theory. In Handbook of Media Economics (Vol. 1, pp. 623-645). North-Holland.
Glöckner, A., & Englich, B. (2015). When relevance matters. Social Psychology, 46, 4-12.
Gottheil, R., Strack, H., & Jacobs, J. (2011). Blood Accusation. Jewish Encyclopedia. Retrieved from http://jewishencyclopedia.com/articles/3408-blood-accusation
Gottfried, J., & Shearer, E. (2016). News use across social media platforms 2016. Retrieved from http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/
Hey, J. D. (1998). Experimental economics and deception: A comment. Journal of Economic Psychology, 19(3), 397-401.
Krawczyk, M. (2019). What should be regarded as deception in experimental economics? Evidence from a survey of researchers and subjects. Journal of Behavioral and Experimental Economics, 79, 110-118.
Kruikemeier, S., Sezgin, M., & Boerman, S. C. (2016). Political microtargeting: Relationship between personalized advertising on Facebook and voters' responses. Cyberpsychology, Behavior, and Social Networking, 19(6), 367-372.
Ladd, J. M. (2010). The neglected power of elite opinion leadership to produce antipathy toward the news media: Evidence from a survey experiment. Political Behavior, 32(1), 29-50.
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ... & Schudson, M. (2018). The science of fake news. Science, 359(6380), 1094-1096.
LeBoeuf, R. A., & Shafir, E. (2009). Anchoring on the "here" and "now" in time and distance judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), 81.
Li, H., & Sakamoto, Y. (2014). Social impacts in social media: An examination of perceived truthfulness and sharing of information. Computers in Human Behavior, 41, 278-287.
Lin, X., Spence, P. R., & Lachlan, K. A. (2016). Social media and credibility indicators: The effect of influence cues. Computers in Human Behavior, 63, 264-271.
Lowry, P. B., Wilson, D. W., & Haig, W. L. (2014). A picture is worth a thousand words: Source credibility theory applied to logo and website design for heightened credibility and consumer trust. International Journal of Human-Computer Interaction, 30(1), 63-93.
McDaniel, T., & Starmer, C. (1998). Experimental economics and deception: A comment. Journal of Economic Psychology, 19(3), 403-409.
Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078-2091.
Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & McCann, R. M. (2003). Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Annals of the International Communication Association, 27(1), 293-335.
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439.
Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9), 1142-1150.
Pennycook, G., Bear, A., Collins, E., & Rand, D. G. (2019). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, forthcoming.
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General. Advance online publication. http://dx.doi.org/10.1037/xge0000465
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39-50.
Pickard, V. (2017). Media failures in the age of Trump. The Political Economy of Communication, 4(2), 118-122.
Robins, D., & Holmes, J. (2008). Aesthetics and credibility in web site design. Information Processing and Management, 44(1), 386-399.
Shane, S. (2017, January 18). From headline to photograph, a fake news masterpiece. The New York Times. Retrieved from https://www.nytimes.com/2017/01/18/us/fake-news-hillary-clinton-cameron-harris.html
Spohr, D. (2017). Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Business Information Review, 34(3), 150-160.
Sternthal, B., Dholakia, R., & Leavitt, C. (1978). The persuasive effect of source credibility: Tests of cognitive response. Journal of Consumer Research, 4(4), 252-260.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73(3), 437.
Sydell, L. (2016). We tracked down a fake-news creator in the suburbs. Here's what we learned. NPR. Retrieved from http://www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs
Tandoc Jr., E. C., Lim, Z. W., & Ling, R. (2018). Defining "fake news": A typology of scholarly definitions. Digital Journalism, 6(2), 137-153.
Tormala, Z. L., Briñol, P., & Petty, R. E. (2006). When credibility attacks: The reverse impact of source credibility on persuasion. Journal of Experimental Social Psychology, 42(5), 684-691.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
Van Duyn, E., & Collier, J. (2019). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 22(1), 29-48.
Wegener, D. T., Petty, R. E., Blankenship, K. L., & Detweiler-Bedell, B. (2010). Elaboration and numerical anchoring: Implications of attitude theories for consumer judgment and decision making. Journal of Consumer Psychology, 20(1), 5-16.
Wright, W. F., & Anderson, U. (1989). Effects of situation familiarity and financial incentives on use of the anchoring and adjustment heuristic for probability assessment. Organizational Behavior and Human Decision Processes, 44(1), 68-82.
10.2 Tables
Table 1: Treatment Groups (the design crosses the awareness conditions No Awareness, Awareness, and Certainty Fake with the Authority and No Authority conditions, each split into a Confirmatory and a Contradictory group)
Table 2: Experimental Results -- Mean Estimates in Stages 1 and 2

                     No Awareness      Awareness      Certainty Fake
Stage                1        2        1       2      1        2
Authority            645      508      309     294    296      293
No Authority         618      506      284     290    234      213
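For readers who want to trace the adjustment pattern directly, the following minimal Python sketch recomputes the stage-1-to-stage-2 changes implied by the condition means in Table 2. The means are transcribed from the table; the variable names are ours, and the computation is purely illustrative (it compares condition means and does not reproduce the within-subject differences reported in Table 3).

    # Stage-1-to-stage-2 adjustments implied by the condition means in Table 2.
    # Values are the mean estimates transcribed from Table 2.
    means = {
        ("Authority", "No Awareness"): (645, 508),
        ("Authority", "Awareness"): (309, 294),
        ("Authority", "Certainty Fake"): (296, 293),
        ("No Authority", "No Awareness"): (618, 506),
        ("No Authority", "Awareness"): (284, 290),
        ("No Authority", "Certainty Fake"): (234, 213),
    }

    for (authority, awareness), (stage1, stage2) in means.items():
        change = stage2 - stage1              # absolute change after the search phase
        relative = 100 * change / stage1      # change relative to the stage-1 estimate
        print(f"{authority:12s} {awareness:14s} stage 1: {stage1:3d}  stage 2: {stage2:3d}  "
              f"change: {change:+4d} ({relative:+.1f}%)")

The largest downward adjustments occur in the No Awareness conditions, which also show the highest stage-1 estimates, i.e., the strongest initial anchoring.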
Table 3: Experimental Results -- Differences in Estimates from Stage 1 to 2 and from Stage 2 to 3 (for the Authority and No Authority conditions with Confirmatory and Contradictory friends, under the No Awareness, Awareness, and Certainty Fake conditions; significance levels: *** 1%, ** 5%, * 10%)
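This excerpt does not state which test produced the significance levels reported in Table 3. As a purely illustrative sketch, a paired comparison of individual estimates across two stages could be run as follows; the estimates below are hypothetical placeholder values, not data from the experiment, and the choice of a Wilcoxon signed-rank test is our assumption, not the authors' stated method.

    # Illustrative only: a paired comparison of individual estimates across two stages.
    # The numbers are hypothetical placeholders, not data from the experiment.
    from scipy.stats import wilcoxon

    stage1 = [640, 655, 630, 660, 645, 650]   # hypothetical stage-1 estimates of one group
    stage2 = [500, 520, 495, 515, 505, 510]   # hypothetical stage-2 estimates of the same group

    statistic, p_value = wilcoxon(stage1, stage2)
    print(f"Wilcoxon signed-rank test: W = {statistic:.1f}, p = {p_value:.4f}")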
10.3 Figures
Figure 1: The process of judgment formation in the context of fake news (from fake news to judgment via Stage 1: Individual Processing Phase, Stage 2: Search Phase, and Stage 3: Group Phase)
Figure 2: The model of judgment formation (after fake news is received, the reader moves through 1. Processing Phase, 2. Search Phase, and 3. Group Phase; her state in each phase is Ignorance, Uncertainty, Certainty Truth, or Certainty Fake)
Figure 3: Experimental Set-Up in the model of judgment formation (the set-up maps the treatments onto the model: the No Awareness, Awareness, and Certainty Fake conditions in the Processing Phase; sources such as Bild, a blog, Daily News, and Foxnews in the Search Phase; and confirmatory or contradicting friends in the Group Phase)