Dynamic mechanism of social bots interfering with public opinion in network

Chun Cheng a,b, Yun Luo c, Changbin Yu b,∗

a School of Computer Science, Fudan University, Shanghai 200433, P. R. China
b School of Engineering, Westlake University, Hangzhou 310024, P. R. China
c School of Resource and Environmental Sciences, Wuhan University, Wuhan 430072, P. R. China
Abstract
Participants in discussions on online social networks tend to become polarized into clusters of users with diametrically opposed opinions. Recent evidence suggests that social bots are being used on social media networks to manipulate public opinion, but the underlying mechanism has not been adequately investigated. In this paper, drawing on the "spiral of silence" theory of social communication, we establish a multi-agent model based on user interactions on social media, define the behavioral characteristics of social bots and human users at the microscopic level, and reveal the mechanism by which bots manipulate public opinion. The results of simulations on small-world and scale-free networks show that social bots need only constitute 5–10% of the participants in a given discussion to alter public opinion such that the view they propagate eventually becomes the dominant opinion (held by more than 2/3 of the population). The influence of network density, clustering coefficient, and the spatial position of bots on their manipulative effect is analyzed. The results show that social bots can influence the formation of opinions in online social networks.
Keywords: Public opinion formation; Social bots; Spiral of silence; Multi-agent system; Network analysis
∗ Corresponding author. Email address: [email protected] (Changbin Yu). URL: www.westlake.edu.cn

Preprint submitted to Physica A, December 31, 2019
1. Introduction
Social bots are algorithmically driven programs that act like humans in conversations on social media networks [1, 2, 3]. They can be classified as benevolent or malevolent according to their purpose. Bots that provide weather forecasts, advisory assistance, and warnings of disaster events [3, 4] to online users are typical examples of the former. However, some bots are created to cause damage by manipulating and deceiving social media users [2]. They have been used to infiltrate political discourse [5], disrupt financial markets, steal personal information, spread misinformation [6], and manipulate public opinion [7]. According to Grimme et al. [8], social bots have five major characteristics: 1) full automation or partial human control; 2) autonomous action; 3) goal orientation; 4) multiple modes of communication; and 5) use on social networks. Research has shown that 9–15% of Twitter accounts are social bots [9]. In April 2017, Facebook admitted that approximately 100,000 bots were active on its Messenger platform each month [1]. Social media networks are nowadays a hybrid space in which both human users and social bots reside, and 19% of interactions on these networks are between bots and humans [10]. The influence of social bots thus cannot be ignored or simplified when we study information diffusion, opinion formation, and public tendencies on social networks.

Ruths and Pfeffer [11] have noted that the existence of social bots can bias researchers' inferences concerning human users in the network and compromise the representativeness of data. Furthermore, social bots are no longer mere media but interlocutors in a social network. Researchers at Stanford University [12] designed a set of experiments to compare how well people communicated with social bots and with other human users; the results showed that, regardless of whether their interlocutors were social bots or humans, emotional disclosure was more significant than factual disclosure. This implies that social bots can act as humans in virtual spaces and engage in conversations with other users to deliver effects similar to those of real humans without arousing suspicion. Murthy et al. [13] observed the role of social bots in disseminating political content by artificially adding bots to social networks. Vosoughi et al. [14] compared the role of social bots in the diffusion of true and false news by analyzing the data with bot traffic retained and with it removed. Moreover, bots on social networks can amplify the "information gerrymandering" effect [15] in democratic elections by interfering with information flow. Therefore, with their proliferation
and improvements in their capabilities, social bots have gradually come to affect public opinion on social networks. Legislators and regulators around the world are considering ways to respond to the risk of democratic elections being manipulated by social bots. At the same time, researchers are debating the extent to which this risk is real and exploring effective responses. Current research on social bots has focused on methods of detecting them (such as network- or graph-based techniques, crowd-sourcing strategies, and feature-based machine learning methods) [16, 17], analysis of their macro-level behavior, and the ethical problems they raise [1, 6, 10, 18]. These efforts have resulted in a "cat and mouse cycle" in which detection algorithms evolve to keep up with ever-evolving bots. Ross et al. [19] investigated how bots influence public opinion through the underlying mechanisms of social networks, assuming that bots independently join the network, and studied their influence on public opinion in terms of their number, mode of connection, and the degree of suspicion they arouse. Bots play a role in public opinion formation similar to that of stubborn individuals (or zealots) [20, 21, 22]. However, the connection strategies and anti-detection capabilities of bots are continually improving, driven by intelligent algorithms. In this paper, we focus on the differences between bots and humans in terms of behavioral characteristics. The purpose is to establish a model based on individual interactions, invoking the "spiral of silence" theory of social communication, to explain how bots can interfere with public opinion in small-world and scale-free networks. We propose a multi-agent modeling method to examine the role of social bots and explore their mechanism of manipulation of public opinion on social networks.

The remainder of this paper is organized as follows: we first explain the relevant theoretical background and modeling methods in Section 2. In Section 3, we investigate the proposed model on networks of different types using numerical simulations. Finally, a discussion and the conclusions of this study are given in Section 4.

2. Model Formation
2.1. The Spiral of Silence

As a classic theory of social communication, the "spiral of silence" has been widely tested in different contexts, on various moral issues, and with various methodologies since being proposed by Noelle-Neumann in 1974 [23, 24]; it has been acknowledged as "one of the most influential recent theories of public opinion formation" [25]. The theory posits that individuals, motivated by fear of isolation, constantly monitor
their opinions to assess whether they conform to or contradict the opinion of the majority. If a person's opinion is widely welcomed, he or she tends to speak up, but if there are few supporters, he or she stays silent. Noelle-Neumann generalized [23]: "The tendency of the one to speak up and the other to be silent starts off a spiraling process, which increasingly establishes one opinion as the prevailing one." As social media becomes increasingly popular as an online space for social interaction and political participation, the emergence of the spiral of silence on social media has become a popular subject of research [24].
2.2. Opinion Formation and Expression

Agent-based models (ABMs) provide the means to explore systems of interacting and adaptive actors [26, 27, 28, 29, 30], which can help identify evolutionary mechanisms by tracking macroscopic–microscopic connections as well as explore specific social phenomena [31, 32, 33, 34, 35], such as the domino effect, emergence, and criticality. As an approach in computational social science, ABMs are used here to construct and measure the spiral of silence model [19, 36, 37]. The framework of opinion formation and expression according to the spiral of silence theory is shown in Fig. 1.

Consider a society of N individuals, each of whom has an opinion (or attitude) toward a debatable topic, such as gay marriage. We assume that there are only two opinions on this issue (positive and negative). Let W_i(t) be individual i's willingness to express his/her opinion, driven by the opinion climate, which is the surrounding environment of the discussion and changes over time. As mentioned above, people who encounter hostility to their opinions tend to stay silent to avoid being isolated. Therefore, their willingness to express their opinions changes constantly with the environment. Notably, only those who express their opinions ultimately influence public opinion at the macroscopic level. The opinion climate for individual i at time t is described by δ_i(t) in terms of the observed percentages of its neighbors with supportive and opposing opinions:
$$
\delta_i(t) =
\begin{cases}
\dfrac{n_s(t) - n_o(t)}{n_s(t) + n_o(t)}, & \text{if } n_s(t) + n_o(t) > 0\\[4pt]
0, & \text{if } n_s(t) + n_o(t) = 0
\end{cases}
\tag{1}
$$
Figure 1: Integrated framework of opinion formation and expression.
where n_s(t) and n_o(t) denote, respectively, the number of neighbors with opinions supportive of and opposed to those of individual i in the network at time t. Following Sohn [37], the impact of the opinion climate, I_i(t), is defined as a sigmoid function of δ_i(t), as given in Eq. (2):

$$
I_i(t) = l \cdot \left(1 + e^{-k\,\delta_i(t)}\right)^{-1} - \frac{l}{2}
\tag{2}
$$
The normalizing constant l is set to 2 and k is set to 5, so that the impact I_i(t) lies within the predetermined range [−1, 1]. It is worth noting that I_i(t) and δ_i(t) are defined here only for human individuals, not for bots, whose particular behavior pattern is defined separately below. The willingness of individual i to express, W_i(t), can be modeled as the combination of two major components: the prior willingness W_i(t−1) and the current impact of the opinion climate I_i(t). Thus, to confine the willingness W_i(t) to the range [0, 1], the willingness to express at time t is described by an iterative function:
$$
W_i(t) =
\begin{cases}
W_i(t-1) + \left(1 - W_i(t-1)\right) \cdot I_i(t), & \text{if } I_i(t) \ge 0\\
W_i(t-1) + W_i(t-1) \cdot I_i(t), & \text{if } I_i(t) < 0
\end{cases}
\tag{3}
$$
If δ_i(t) > 0, the impact of the opinion climate increases the willingness to express W_i(t), whereas it reduces W_i(t) if δ_i(t) < 0. If δ_i(t) = 0, the impact is zero and the willingness W_i(t) remains unchanged.

Each individual has an implicit expression threshold φ_i with a continuous value in the range [0, 1]. The expression threshold φ_i refers to the minimum degree of intensity of the individual's opinion, that is, the lower bound for expressing his/her opinion. Given the same degree of willingness, individuals with lower expression thresholds readily speak out, whereas those with higher thresholds do not. In each step of the process, individuals choose to speak out or remain silent by comparing their willingness to express W_i(t) with the expression threshold φ_i. The probability that an individual expresses his/her opinion is as follows:

$$
P_i(\text{expression}) =
\begin{cases}
1, & \text{if } W_i(t) \ge \phi_i\\
0, & \text{if } W_i(t) < \phi_i
\end{cases}
\tag{4}
$$
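To make the update rules concrete, the following minimal Python sketch implements Eqs. (1)–(4) for a single human agent. The function names and the ±1 opinion encoding are our own illustrative choices, not part of the authors' NetLogo implementation, and only the opinions actually expressed by neighbors should be passed in, in line with the point above that unexpressed opinions do not shape the perceived climate.

```python
import math

L_CONST = 2.0   # normalizing constant l in Eq. (2)
K_CONST = 5.0   # steepness parameter k in Eq. (2)

def opinion_climate(own_opinion, expressed_neighbor_opinions):
    """Eq. (1): perceived climate from the opinions neighbors actually expressed."""
    ns = sum(1 for o in expressed_neighbor_opinions if o == own_opinion)
    no = sum(1 for o in expressed_neighbor_opinions if o == -own_opinion)
    return (ns - no) / (ns + no) if (ns + no) > 0 else 0.0

def climate_impact(delta):
    """Eq. (2): sigmoid impact of the climate, lying in [-1, 1] for l = 2, k = 5."""
    return L_CONST / (1.0 + math.exp(-K_CONST * delta)) - L_CONST / 2.0

def update_willingness(w_prev, impact):
    """Eq. (3): iterative update that keeps the willingness within [0, 1]."""
    if impact >= 0:
        return w_prev + (1.0 - w_prev) * impact
    return w_prev + w_prev * impact

def expresses(willingness, threshold):
    """Eq. (4): the agent speaks out iff its willingness reaches its threshold."""
    return willingness >= threshold
```

Note that a positive climate can only push W_i(t) toward 1 and a negative climate can only shrink it toward 0, so the willingness never leaves [0, 1].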
2.3. The Role of Social Bots

First, in contrast to humans, goal-oriented social bots always persist with their preset opinions in the network, regardless of the perceived opinion climate. Second, bots continue to express their opinions throughout the evolution of opinions, which interferes with the opinion climate perceived by their human neighbors, as shown in Fig. 1. Ultimately, bots influence public opinion by discouraging human users with opposing opinions from expressing them. A key notion is that not everyone expresses their opinions, and people's willingness to express themselves is influenced by expressed opinions rather than unexpressed ones, even though the latter exist and remain constant. Bots are designed to change people's willingness to express, not their opinions. Thus, we define different behaviors for "humans" and "social bots" in the ABM, where both are treated as agents. In contrast to human agents, in every step of the process bot agents ignore external social influence and express their own opinions.
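A minimal sketch of how the two agent types could be dispatched at each step, using the helper functions from the sketch in Section 2.2, is shown below; the `Agent` structure and its field names are illustrative assumptions rather than the authors' NetLogo implementation.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    kind: str          # "human" or "bot"
    opinion: int       # +1 (positive) or -1 (negative); bots are fixed at -1
    willingness: float # W_i(t)
    threshold: float   # phi_i
    speaking: bool     # P_i(expression) at the current step

def step_agent(agent, expressed_neighbor_opinions):
    """One update step: bots always speak; humans follow Eqs. (1)-(4)."""
    if agent.kind == "bot":
        agent.speaking = True   # bots ignore the opinion climate entirely
        return
    delta = opinion_climate(agent.opinion, expressed_neighbor_opinions)
    agent.willingness = update_willingness(agent.willingness, climate_impact(delta))
    agent.speaking = expresses(agent.willingness, agent.threshold)
```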
3. Simulation Results

3.1. Initial Setup

The multi-agent model is constructed and simulated in the NetLogo programmable environment, which is based on the Logo language and was developed by Wilensky in 1999 for simulating natural and social phenomena. Consider a mixed population of N = 1,000 agents, where the numbers of humans and bots are N_a and N_b, respectively, with N = N_a + N_b. λ = N_b/N denotes the ratio of the number of bots in the network to the total number
of agents. Humans and bots are set up separately because of the differences in their behaviors, as shown in Table 1. The opinions of humans are randomly selected from the binary values positive and negative, while bots are assigned negative opinions throughout. The initial willingness to express W_i(0) and the expression threshold φ_i of humans are randomly distributed in the range [0, 1] to match the diversity of the population and the randomness of the humans' state P_i at t = 0. The bots' states and behaviors are always deterministic and consistent, so we set W_i ≡ 1, φ_i ≡ 0, and P_i ≡ 1 for t = 0, 1, · · · , ∞.

Table 1: Initial setup of human and bot agents

  Type    Num    Opinion        W_i       φ_i      Status P_i
  Human   N_a    pos. & neg.    [0, 1]    [0, 1]   0 & 1
  Bots    N_b    neg.           1         0        1
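The initialization in Table 1 could be written as follows, reusing the illustrative `Agent` structure from Section 2.3. Humans' W_i(0) and φ_i are drawn uniformly from [0, 1] as stated above; the +1/−1 opinion encoding and the placement of bots at the start of the list are assumptions of this sketch.

```python
import random

def init_population(n_total=1000, bot_ratio=0.05):
    """Create N agents per Table 1: humans with random states, bots fixed."""
    n_bots = int(round(bot_ratio * n_total))
    agents = []
    for i in range(n_total):
        if i < n_bots:
            # Bots: negative opinion, W_i = 1, phi_i = 0, always speaking.
            agents.append(Agent("bot", -1, 1.0, 0.0, True))
        else:
            # Humans: random opinion, W_i(0) and phi_i uniform in [0, 1];
            # the initial speaking status simply follows Eq. (4).
            opinion = random.choice([+1, -1])
            w0, phi = random.random(), random.random()
            agents.append(Agent("human", opinion, w0, phi, w0 >= phi))
    return agents
    # Which network node each agent occupies is assigned separately (Section 3.3/3.4).
```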
3.2. Network Assumption As mentioned above, the behavioral characteristics of bots have been defined, and some basic assumptions are as follows:
• In general, changes in network structure lag behind information propagation in a social network. We thus assume that the network topology remains fixed during the formation of opinions.
• We assume that social bots are advanced pretenders: bot and human accounts have similar connection-related properties in the network, and the differences between them are purely behavioral.
We preset a strict baseline by referring to a realistic rule of political elections: when 2/3 of the individuals in a group hold the same opinion (the dominant opinion), public opinion is considered to have formed, and the resulting trend is considered difficult to reverse. The impact of bots on public opinion formation is then investigated in small-world and scale-free networks, respectively.
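As a working definition for the simulation sketches below, this baseline can be checked with a small helper. Note that the paper does not spell out whether the 2/3 share is counted over expressed opinions or over all agents, so counting the share among currently expressed opinions is our assumption here.

```python
def bots_dominate(agents, target_opinion=-1, baseline=2/3):
    """True if the bots' (negative) opinion has become dominant.

    The share is computed over currently expressed opinions (assumption);
    silent agents do not contribute to the visible opinion climate.
    """
    expressed = [a.opinion for a in agents if a.speaking]
    if not expressed:
        return False
    share = sum(1 for o in expressed if o == target_opinion) / len(expressed)
    return share >= baseline
```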
3.3. The WS Small-World Network

The WS small-world network can be constructed from a regular network. A nearest-neighbor coupled ring network containing N nodes is first built, in which each node is connected to K/2 adjacent nodes on either side. Each original edge of the network is then randomly reconnected with probability P_r. We create a small-world network of 1,000 nodes with a reconnection probability of P_r = 0.5 and vary the ratio of bots λ from 1% to 10%. The bots are randomly selected from the group, and 1,000 simulations are run for each parameter combination. We analyze the fraction of simulations in which the negative opinion held by the bots eventually becomes the dominant opinion, that is, exceeds the baseline of 2/3 of the population.

Considering the influence of network density, we simulate networks with different average degrees ⟨k⟩, as shown in Fig. 2. We find that, as the number of bots increases, the probability that public opinion is steered by them increases significantly. It is noteworthy that even a small percentage of bots acting in concert is sufficient to disturb the opinion climate of human users and tip public opinion in the group in favor of the bots. For example, when ⟨k⟩ = 10, even if only 5% of the nodes are bots, the probability that they can turn public opinion is p = 0.9; in contrast, when the network is sparser, with ⟨k⟩ = 6, bots need to account for 10% of all users to produce the same effect. Compared with low-density networks, the influence of bots is therefore stronger in denser networks.

Further, we vary the reconnection probability P_r in the small-world network to analyze the influence of a fixed ratio of bots (λ = 5%) on the formation of public opinion under different clustering coefficients. Fig. 3 shows, for P_r varying from zero to one, the network clustering coefficient C (dotted line) and the probability of interference P by the bots (solid line). The interference effect of the bots increases with the reconnection probability. At P_r = 1 the network becomes completely random, the clustering coefficient is very small, and the bots have the most significant effect. Correspondingly, at P_r = 0 the network degenerates into a regular network with the largest clustering coefficient, and the effect of the bots is not significant. As the clustering coefficient represents the strength of the connections among a node's neighbors, it reflects the density of a local sub-cluster of the network, indicating that a small number of bots cannot necessarily
Figure 2: Influence of bots in small-world networks, with Pr = 0.5. This is an average over 1,000 realizations.
interfere with the climate of public opinion in a dense local community. Finally, in practice, social networks often exhibit small-world features with a reconnection probability P_r ∈ (0, 1). Thus, the effect of bots is related not only to the network density ⟨k⟩ but is also restricted by the network's clustering coefficient C.

The snapshots in Fig. 4 illustrate the evolution of opinions over time, where the white nodes represent bots, and the orange and blue nodes represent the positive and negative opinions of humans, respectively. When a human user becomes silent, i.e., P_i(expression) = 0, the node turns black. The snapshots show that, without the influence of bots, the numbers of human agents with positive and negative opinions remain roughly the same over time and do not change easily (see a–d); on the contrary, bots with negative opinions in the network can interfere with the opinion climate through concerted action, inhibit the willingness of human users with positive opinions to express them, cause them to fall silent (black), and encourage humans with similar opinions to continue expressing themselves (see e–h).
Figure 3: Influence of reconnection probability, with 50 bots. This is an average over 1,000 realizations.
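A compact end-to-end sketch of this small-world experiment, using networkx to build the WS network and reusing `init_population`, `step_agent`, and `bots_dominate` from the earlier sketches, is given below. The number of steps per run, the synchronous update order, and the absence of an early-stopping rule are illustrative assumptions, since the paper reports only the ensemble statistics.

```python
import random
import networkx as nx

def run_once(n=1000, k_avg=10, p_rewire=0.5, bot_ratio=0.05, steps=100):
    """One realization on a WS small-world network; returns the final agent states."""
    graph = nx.watts_strogatz_graph(n, k_avg, p_rewire)
    agents = init_population(n, bot_ratio)
    random.shuffle(agents)   # node i hosts agent i, so bots occupy random nodes
    for _ in range(steps):
        # Synchronous update: everyone sees what speaking neighbors expressed
        # at the start of the step.
        expressed = {i: agents[i].opinion for i in range(n) if agents[i].speaking}
        for i in range(n):
            visible = [expressed[j] for j in graph.neighbors(i) if j in expressed]
            step_agent(agents[i], visible)
    return agents

def interference_probability(runs=1000, **kwargs):
    """Estimate the probability that the bots' opinion becomes dominant."""
    return sum(bots_dominate(run_once(**kwargs)) for _ in range(runs)) / runs
```

For example, `interference_probability(n=1000, k_avg=10, bot_ratio=0.05)` would correspond, up to the assumptions above, to the ⟨k⟩ = 10, λ = 5% point in Fig. 2.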
3.4. The BA Scale-free Network

The BA scale-free network starts from a connected network with m_0 nodes, and each new node is connected to m existing nodes (m ≤ m_0) according to a preferential attachment rule. Without loss of generality, we set m_0 = 6, with m taking integer values in the range [0, 6]. We analyze the effect of an increasing number of bots on public opinion under different network densities; Fig. 5 shows the average result of 1,000 simulation runs for each parameter combination. The correlation between the number of bots and their influence on the opinion climate is significant, and is more pronounced in high-density networks. We find that a very small number of bots can tip public opinion. For example, when m ≥ 4, bots comprising only 5% of the total number of agents can guarantee, with a probability of over P = 0.9, interference with the opinion climate. When m = 3, only 8% of all agents need to be bots to produce the same effect. Even if the network becomes very sparse, with m = 2, public opinion can be influenced with probability P > 0.7 when 10% of the agents are bots.

Because scale-free networks have a power-law degree distribution, we also consider the influence of the positions of the bots in the network. The dotted line in Fig. 5 represents the situation where bots are placed at the center
Figure 4: Snapshots illustrating the evolution of the opinions of N = 1,000 agents in the small-world model with P_r = 0.8 and ⟨k⟩ = 10; the panels correspond to t = 0, 1, 5, and 20. Human users with positive opinions (orange) and negative opinions (blue), bots with negative opinions (white), and humans who have fallen silent (black); (a)-(d) without bots, and (e)-(h) with 50 bots.
(nodes with high degree) and at the edge (nodes with low degree) of the network, respectively. When the bots are located at the center, they have the most significant subversive effect on public opinion. For example, when m = 2, only 3% of the population need to be high-degree bots to ensure that public opinion can be manipulated accordingly. On the contrary, when the bots are at the edge of the network, their effect is least obvious. We can conclude that the manipulation of public opinion depends not only on the ratio of bots to the total number of agents, but also on their positions in the network and on the density of the scale-free network.
Figure 5: Influence of bots on scale-free networks. This is an average over 1,000 realizations.
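The degree-based placement experiment could be sketched as follows, again reusing `init_population`, `step_agent`, and `bots_dominate`. Note that networkx's `barabasi_albert_graph` grows from m seed nodes rather than the separate m_0 = 6 seed described above, so the construction is only approximate, and interpreting "center" and "edge" as the highest- and lowest-degree nodes is our reading of the text.

```python
import random
import networkx as nx

def run_ba_once(n=1000, m=2, bot_ratio=0.05, placement="random", steps=100):
    """One realization on a BA network with bots placed by node degree."""
    graph = nx.barabasi_albert_graph(n, m)
    agents = init_population(n, bot_ratio)        # the first agents are bots
    nodes = list(graph.nodes())
    if placement == "hub":                        # bots on highest-degree nodes
        nodes.sort(key=lambda v: graph.degree(v), reverse=True)
    elif placement == "edge":                     # bots on lowest-degree nodes
        nodes.sort(key=lambda v: graph.degree(v))
    else:                                         # random placement
        random.shuffle(nodes)
    # Agent i sits on nodes[i], so the bots occupy the first entries of `nodes`.
    node_of = {i: node for i, node in enumerate(nodes)}
    for _ in range(steps):
        expressed = {node_of[i]: agents[i].opinion
                     for i in range(n) if agents[i].speaking}
        for i in range(n):
            visible = [expressed[v] for v in graph.neighbors(node_of[i])
                       if v in expressed]
            step_agent(agents[i], visible)
    return bots_dominate(agents)
```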
The effect of the population size on public opinion formation has also been investigated: we consider the population size N increasing from 1,000 to 10,000, with the proportion of social bots fixed at λ = 5% for all population sizes. The average percentage of majority agents for different population sizes is shown in Fig. 6, from which it can be seen that the average percentage of majority agents remains stable as N increases.
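In outline, this robustness check amounts to repeating the realizations above for several population sizes; computing the majority share over expressed opinions, and the specific sizes and run counts below, are our assumptions.

```python
def average_majority_share(sizes=(1000, 2000, 5000, 10000), runs=200):
    """Average final share of expressed opinions held by the majority, per N."""
    results = {}
    for n in sizes:
        shares = []
        for _ in range(runs):
            agents = run_once(n=n, k_avg=6, bot_ratio=0.05)
            expressed = [a.opinion for a in agents if a.speaking]
            neg = sum(1 for o in expressed if o == -1)
            shares.append(max(neg, len(expressed) - neg) / max(len(expressed), 1))
        results[n] = sum(shares) / runs
    return results
```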
Figure 6: The average percentage of majority agents for different population sizes. The results refer to λ = 5% in the small-world network with ⟨k⟩ = 6 and ⟨k⟩ = 8, and λ = 5% in the scale-free network with m = 2 and m = 3; each point is an average over 1,000 realizations.
4. Conclusions and discussion
In this paper, we examined the mechanism by which bots manipulate public opinion on social media networks. To understand how their concerted action affects the formation and expression of public opinion, we proposed a multi-agent model featuring human users and bots, invoking the "spiral of silence" theory of social communication, in which bots affect a human agent's willingness to express his/her opinion by interfering with the perceived opinion climate. In the proposed model, human agents evaluated the opinion climate, updated their willingness to express, and released their status (speaking or silent) in the group at each step. By increasing the ratio of bots to the total number of agents in the group, we investigated their effect on manipulating public opinion in small-world and scale-free networks. On the one hand, only a small percentage (5–10%) of bots was enough to tip public opinion so that the bots' preset view became the final, dominant opinion in the network (held by more than 2/3 of the group). On the other hand, in the small-world network, the influence of bots was positively correlated with the average degree ⟨k⟩ of the network and negatively correlated with the clustering coefficient C, indicating that the effect of a small number of bots on the opinion climate in dense local communities was inhibited. In scale-free networks, the manipulation of public opinion was not only positively correlated with the number of bots and the network density, but was also affected by the positions of the bots in the network (node degree).

Our work has revealed the mechanism of manipulation of public opinion by bots from the microscopic perspective of interactions between individuals in a network. We theoretically examined the impact of social bots and found that a small number was sufficient to influence public opinion, triggering silence among human detractors that eventually led to the acceptance of the bots' opinion as dominant. This study offers a new perspective and method for studying bots in terms of their online detection, behavioral characteristics, and macro-level effects. However, there are several limitations to this work. For example, to focus on the interactions between individuals, we simplified the network structure and ignored the influence of mass media, which shapes global public opinion in the "spiral of silence". In future work, we wish to study the effects of the interaction between individuals in the network and mass media.

5. Acknowledgements

This work was supported by the National Science Foundation of China (61761136005) and the National Academy of Innovation Strategy (Project No. CXYZKQN-2019-026).

6. References
[1] Daniel F, Cappiello C, Benatallah B. Bots acting like humans: understanding and preventing harm[J]. IEEE Internet Computing, 2019, 23(2): 40-49.

[2] Ferrara E, Varol O, Davis C, et al. The rise of social bots[J]. Communications of the ACM, 2016, 59(7): 96-104.
[3] Clark E M, Williams J R, Jones C A, et al. Sifting robotic from organic text: a natural language approach for detecting automation on Twitter[J]. Journal of Computational Science, 2016, 16: 1-7.
[4] Hofeditz L, Ehnis C, Bunker D, et al. Meaningful use of social bots? Possible applications in crisis communication during disasters[C]//Proceedings of the 27th European conference on information systems (ECIS). 2019.
[5] Keller T R, Klinger U. Social bots in election campaigns: theoretical, empirical, and methodological implications[J]. Political Communication, 2019, 36(1): 171-189.

[6] Shao C, Ciampaglia G L, Varol O, et al. The spread of low-credibility content by social bots[J]. Nature Communications, 2018, 9(1): 4787.
[7] Bradshaw S, Howard P N. Challenging truth and trust: a global inventory of organized social media manipulation[J]. The Computational Propaganda Project, 2018.

[8] Grimme C, Preuss M, Adam L, et al. Social bots: human-like by means of human control?[J]. Big Data, 2017, 5(4): 279-293.

[9] Varol O, Ferrara E, Davis C A, et al. Online human-bot interactions: detection, estimation, and characterization[C]//11th International AAAI Conference on Web and Social Media. 2017.
[10] Stella M, Ferrara E, De Domenico M. Bots increase exposure to negative and inflammatory content in online social systems[J]. Proceedings of the National Academy of Sciences, 2018, 115(49): 12435-12440.
[11] Ruths D, Pfeffer J. Social media for large studies of behavior[J]. Science, 2014, 346(6213): 1063-1064.

[12] Ho A, Hancock J, Miner A S. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot[J]. Journal of Communication, 2018, 68(4): 712-733.
[13] Murthy D, Powell A B, Tinati R, et al. Automation, algorithms, and politics | Bots and political influence: a sociotechnical investigation of social network capital[J]. International Journal of Communication, 2016, 10: 20.

[14] Vosoughi S, Roy D, Aral S. The spread of true and false news online[J]. Science, 2018, 359(6380): 1146-1151.
[15] Bergstrom T C, Bak-Coleman B J. Gerrymandering in social networks[J]. Nature, 2019, 573: 40-41.
[16] Alothali E, Zaki N, Mohamed E A, et al. Detecting social bots on Twitter: a literature review[C]//2018 International Conference on Innovations in Information Technology (IIT). IEEE, 2018: 175-180.
[17] Yang K C, Varol O, Davis C A, et al. Arming the public with artificial intelligence to counter social bots[J]. Human Behavior and Emerging Technologies, 2019, 1(1): 48-61.
[18] Gilani Z, Farahbakhsh R, Tyson G, et al. A large-scale behavioural analysis of bots and humans on Twitter[J]. ACM Transactions on the Web (TWEB), 2019, 13(1): 7.

[19] Ross B, Pilz L, Cabrera B, et al. Are social bots a real threat? An agent-based model of the spiral of silence to analyse the impact of manipulative actors in social networks[J]. European Journal of Information Systems, 2019: 1-19.

[20] Javarone M A. Network strategies in election campaigns[J]. Journal of Statistical Mechanics: Theory and Experiment, 2014, 2014(8): P08013.
[21] Mobilia M. Does a single zealot affect an infinite group of voters?[J]. Physical review letters, 2003, 91(2): 028701.
[22] Yildiz E, Ozdaglar A, Acemoglu D, et al. Binary opinion dynamics with stubborn agents[J]. ACM Transactions on Economics and Computation (TEAC), 2013, 1(4): 19.

[23] Noelle-Neumann E. The spiral of silence: a theory of public opinion[J]. Journal of Communication, 1974, 24(2): 43-51.
[24] Chen H T. Spiral of silence on social media and the moderating role of disagreement and publicness in the network: analyzing expressive and withdrawal behaviors[J]. New Media and Society, 2018, 20(10): 3917-3936.

[25] Kennamer J D. Self-serving biases in perceiving the opinions of others: implications for the spiral of silence[J]. Communication Research, 1990, 17(3): 393-404.
[26] Chen S, Glass D H, McCartney M. Two-dimensional opinion dynamics in social networks with conflicting beliefs[J]. AI and Society, 2016: 1-10.

[27] Ramos M, Shao J, Reis S D S, et al. How does public opinion become extreme?[J]. Scientific Reports, 2015, 5: 10032.
[28] Galam S. Public debates driven by incomplete scientific data: the cases of evolution theory, global warming and H1N1 pandemic influenza[J]. Physica A: Statistical Mechanics and its Applications, 2010, 389(17): 3619-3631.
[29] Pires M A, Crokidakis N. Dynamics of epidemic spreading with vaccination: impact of social pressure and engagement[J]. Physica A: Statistical Mechanics and its Applications, 2017, 467: 167-179.

[30] Galam S, Jacobs F. The role of inflexible minorities in the breaking of democratic opinion dynamics[J]. Physica A: Statistical Mechanics and its Applications, 2007, 381: 366-376.
[32] Crokidakis N. A three-state kinetic agent-based model to analyze tax evasion dynamics[J]. Physica A: Statistical Mechanics and its Applications, 2014, 414: 321-328.
[33] Huang G, Cao J, Wang G, et al. The strength of the minority[J]. Physica A: Statistical Mechanics and its Applications, 2008, 387(18): 4665-4672.

[34] Huang G, Cao J, Qu Y. The minority's success under majority rule[J]. Physica A: Statistical Mechanics and its Applications, 2009, 388(18): 3911-3916.
[35] Crokidakis N. Effects of mass media on opinion spreading in the Sznajd sociophysics model[J]. Physica A: Statistical Mechanics and its Applications, 2012, 391(4): 1729-1734.

[36] Sohn D, Geidner N. Collective dynamics of the spiral of silence: the role of ego-network size[J]. International Journal of Public Opinion Research, 2015, 28(1): 25-45.
[37] Sohn D. Spiral of Silence in the Social Media Era: A Simulation Approach to the Interplay Between Social Networks and Mass Media[J]. Communication Research, 2019: 0093650219856510.
Highlights:

An agent-based model containing human users and social bots on a social network is proposed.

We find that a small number of social bots is sufficient to influence public opinion.

The influence of bots is positively correlated with the average degree and negatively correlated with the clustering coefficient of the network.

Our research reveals the mechanism of manipulation of public opinion by bots.
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.