Journal of Economic Behavior & Organization 143 (2017) 1–8
The Streisand effect: Signaling and partial sophistication

Jeanne Hagenbach (a), Frédéric Koessler (b,∗)

(a) École Polytechnique – CNRS, France
(b) Paris School of Economics – CNRS, France
Article history: Received 23 March 2017; Received in revised form 28 August 2017; Accepted 1 September 2017; Available online 14 September 2017.
JEL classification: C72; D82.
Keywords: Analogy-based expectations; Signaling; Streisand effect.
Abstract

This paper models the Streisand effect in a signaling game. A picture featuring a Star has been exogenously released. The Star privately knows whether the picture is embarrassing or neutral and can decide to censor it, with the aim of keeping it unseen. Receivers observe the Star's action and exert search effort to see the picture, an effort that depends on how embarrassing they expect it to be. Censorship reduces the Receivers' chances of seeing the picture but also serves as a motivating signal to search for it. When players are fully rational, we show that censorship cannot occur if the picture has little chance of being found when believed neutral. Next, we consider that players may not fully understand the signaling effect of censorship and study how this affects the equilibrium outcome. We model such partial sophistication of players using analogical reasoning à la Jehiel (2005). We explain that partially sophisticated Receivers are less responsive to the Star's action, which makes censorship more likely. We also show that a partially sophisticated Star can censor in equilibrium even though censorship gives the picture a higher chance of being found than leaving it alone. The Streisand effect is at play, in the sense that censorship creates interest that is unexpected by the Star.

© 2017 Elsevier B.V. All rights reserved.
1. Introduction

The Streisand effect is the phenomenon whereby an attempt to remove or censor a piece of information has the paradoxical consequence of publicizing it more widely. The phenomenon is usually linked to information released on the Internet, where suppressing photos, files and websites is nearly impossible.1 Cases where a piece of information receives extensive attention instead of being suppressed are numerous. A first example involves Barbra Streisand, the American singer and actress who gave the effect its name. In 2003, she sued the California Coastal Records Project, accusing it of violating her privacy: one of the many publicly available aerial pictures of the coastline – aimed at showing evidence of coastal erosion – showed her mansion in Malibu. While fewer than ten people had downloaded the problematic picture before Streisand asked for its removal, more than 400,000 people visited the project website in the month following her reaction. The picture was also widely spread on the Internet before it was eventually removed as ordered by the court. Another representative example involves Wikipedia France and the French intelligence agency (Direction Centrale du Renseignement Intérieur). In 2013, the agency demanded the removal of an article about a military radio base run by the French Air Force, on the grounds that the article contained classified information.
∗ Corresponding author.
E-mail addresses: [email protected] (J. Hagenbach), [email protected] (F. Koessler).
1 This problem has given rise to a lively ongoing debate about the Right to Be Forgotten on the Internet, which is hard to implement in practice but which the European Union, for instance, has been trying to put into practice for several years.
Facing Wikipedia's refusal, the agency summoned a French Wikipedia editor to its offices and pressured him into removing the article under the threat of immediate arrest. The editor made these pressures public and the problematic article was quickly restored by another Wikipedia contributor. For two days in April 2013, it was the most viewed article on French Wikipedia.

Newspapers have long described the Streisand effect and regularly report instances of it.2 Practical advice on how to avoid falling victim to the effect can be found in Liarte (2013) and Jansen and Martin (2015), as well as on several websites aimed at lawyers and communication experts.3 To the best of our knowledge, however, there has been no attempt to model the Streisand effect as a game between a player who has a piece of information to hide and an audience whose interest may be triggered by censorship. We propose such a model with the objective of shedding light on the strategic forces behind this common effect. In particular, we wish to study the extent to which censorship can emerge when individuals are fully rational, or whether a lack of sophistication of some agents makes censorship more likely. We also aim to understand when the Streisand effect occurs, in the precise sense that the interest created by censorship is not properly anticipated by the party who uses it.

Formally, we study a simple signaling game involving a Star who can decide to censor, at a cost, a piece of information that has been exogenously released. The Star, or sender, is assumed to know how embarrassing this information is. The Receivers – representing media, reporters, people on social networks, and so on – observe whether the Star has tried to censor the information and form expectations about how embarrassing it is. The Receivers' expectation determines their motivation and search effort to access and spread the information, which in turn determines the probability that the information is actually discovered and published. On the one hand, censorship has the direct, mechanical effect of decreasing the chances that the information is discovered for every given level of search effort. On the other hand, censorship has an indirect signaling effect in that it creates interest in the information, inducing more search effort. The first effect increases the Star's incentive to censor when the information is embarrassing, while the second effect discourages the Star from censoring.

We first examine the trade-off between these two effects in the benchmark case of fully rational players. We show that the signaling game has a unique (perfect Bayesian) equilibrium. Quite intuitively, when the direct effect of censorship on the chances that the information is discovered is relatively strong, the Star with embarrassing information is more likely to censor (and therefore to separate). In this benchmark situation, censorship cannot be observed in equilibrium if the probability of discovering the information is low when the Receivers believe it is neutral. When censorship occurs with positive probability in equilibrium, the Star is actually better off censoring: on average, censorship does not have the unintended consequence of publicizing the information more widely than when the information is left to circulate. As suggested by the examples given above, we speak of a Streisand effect when censorship is excessive and unprofitable for the Star. A simple reason for this may be that players do not fully understand the signaling effect of censorship.
We examine this possibility by considering partially sophisticated players and using the concept of analogical reasoning developed in Jehiel (2005), Jehiel and Koessler (2008) and Ettinger and Jehiel (2010). This approach relies on limited cognitive rationality: it relaxes players' ability to form correct expectations while maintaining their ability to act rationally given their beliefs. It assumes that players' expectations are correct on average, but that they are derived from a coarse perception of others' strategies. A similar approach is taken by other bounded rationality models in the literature such as, for example, cursed equilibrium (Eyster and Rabin, 2005), behavioral equilibrium (Esponda, 2008) and Berk–Nash equilibrium (Esponda and Pouzo, 2016). The use of an analogy-based expectation equilibrium can be justified by a learning process in which players receive partial statistical feedback about past interactions (see, for example, Jehiel and Koessler, 2008, Section 3; Esponda and Pouzo, 2016, Sections 3.3 and 4).

In an analogy-based expectation equilibrium, a partially sophisticated Receiver bundles the types of the Star into an analogy class and forms an expectation about the Star's action in that class. Said differently, such a Receiver understands the aggregate behavior of the Star but not the link between the Star's type and her action. Similarly, a partially sophisticated Star perceives the average strategy of the Receivers across their decision nodes but does not understand that their reaction might be linked to her own decision to censor or not.

When a proportion of the Receivers are partially sophisticated, we show that the (analogy-based) equilibrium remains unique. As the proportion of partially sophisticated Receivers increases, the signaling effect of censorship is weaker and separation (and censorship) is therefore easier to obtain. As in the benchmark case, however, when there is censorship, the probability that the information is discovered is low enough to justify the use of censorship by the Star. In fact, the Star correctly perceives the extent to which the Receivers' partial sophistication makes them less responsive to the signaling aspect of her action. The results are similar if we interpret the proportion of partially sophisticated Receivers as a proportion of rational Receivers who do not observe the Star's action.

When it is the Star who is partially sophisticated, she underestimates the signaling effect of her own action, and we show that there may be multiple equilibria. The conditions for censorship (and separation) are also weaker than in the benchmark case. In particular, when the cost of censorship is low enough, the unique equilibrium is separation: the Star with embarrassing information always tries to censor, even though the probability of the information being discovered would be lower if she left it to circulate.
2 See, for instance, the New York Times article entitled "Living with the Streisand Effect" (Dec. 26, 2008), the Guardian article entitled "The Streisand effect: Secrecy in the digital age" (Mar. 20, 2009), or the Le Monde article entitled "The buzz we did not want" (Nov. 1, 2013).
3 See, for example, http://www.mcw.com.au/page/Publications/Technology/avoiding-the-streisand-effect/.
The Streisand effect is at play: the Star uses censorship at her own expense because she does not correctly understand the correlation between the Receivers' effort and her own action. She erroneously assesses the signaling aspect of her action.

In the last section of the paper, we abandon the analogy-based approach to shed light on other potential sources of the Streisand effect. Precisely, we propose a variant of our game in which the Star wrongly assesses how the Receivers react to her action because she misperceives her environment. In particular, we assume that the Star underestimates the visibility of her action to the Receivers, which directly enhances her incentive to censor in equilibrium. The same holds for a Star who overestimates the efficiency of censorship in decreasing the probability of finding a hidden piece of information for a given level of effort. We conclude the paper by studying a repeated version of our basic signaling game. We show that it is possible for the Star to censor in every period, whatever her information, even though censorship is never used in the static setting. When the Star builds a reputation for censoring every piece of information, she in fact manages to eliminate the signaling aspect of her action.

From a general point of view, this paper asks how bounded rationality affects the outcome of a signaling game. Few papers have investigated this question; two exceptions are Bilancini and Boncinelli (2016) and Eyster and Rabin (2005). Bilancini and Boncinelli (2016) consider Receivers who also reason analogically à la Jehiel (2005) but across a pair of signaling games, and they show that separation can occur without the single-crossing condition. Eyster and Rabin (2005) develop the bounded rationality concept of cursed equilibrium, in which players also do not fully take into account how other players' actions depend on their information. They first explain why a fully cursed Receiver destroys the potential for signaling in a signaling game à la Spence (1973): a costly signal that does not affect the Receiver's belief is never sent. In our case, by contrast, the fact that the Receiver does not read the signal properly is beneficial for the sender and therefore enhances separation. Next, Eyster and Rabin (2005) show how a partially cursed Receiver can create the potential for signaling where a fully rational one cannot. They do not, however, examine the possibility of a boundedly rational sender. Applying the bounded rationality concept of Jehiel (2005) to incomplete information games, Jehiel and Koessler (2008) study cheap-talk communication in the presence of a boundedly rational Receiver, also keeping full rationality on the sender side.

2. A simple signaling game

We consider a game involving a Star (S) and Receivers (R). Receivers represent media, reporters, people on social networks, etc. A piece of information about the Star has been exogenously released (say, a picture posted online) that can be of two types, θ ∈ {θ̲, θ̄}, with 0 ≤ θ̲ < θ̄. The Star privately knows whether the information is embarrassing (θ̄) or neutral (θ̲). The Receivers ignore the content of the information but know that it is embarrassing with prior probability p ∈ (0, 1). Knowing her type, the Star can decide to censor the picture or to leave it to circulate: the action of the Star is a ∈ {C, L} and costs c_a. Censorship costs c_C = c > 0, while leaving costs c_L = 0. The Receivers observe whether the Star has censored the information or not, and then exert a search effort, denoted by e ≥ 0, to access (and spread) the information.
We assume that this search effort only depends on the Receivers' expectation about how embarrassing the information is, and that it is strictly increasing in this expectation. Hence, without further loss of generality, we assume that e is equal to the Receivers' conditional expectation of θ. For a given e, the Receivers discover the piece of information with probability P_a(e) ∈ [0, 1], which depends on the action a of the Star. We assume that for every a ∈ {C, L}, P_a(e) is strictly increasing in e, which means that the probability of discovering the information is increasing in the search effort. We also assume that for every e, 0 ≤ P_C(e) < P_L(e) ≤ 1: given a level of search effort, the chance of discovering the piece of information is higher if the Star has not censored it. One key assumption is that censorship is not fully successful: provided some search effort is exerted, there is a positive chance of discovering the released piece of information whatever the action of the Star. The Star suffers from having the information discovered, and suffers more from having an embarrassing piece of information found than a neutral one. Her payoff is given by

U_S(a, e; θ) = −θ P_a(e) − c_a.

For simplicity we assume that c > θ̲, so that L is a dominant strategy for the Star of type θ̲.

3. Equilibria of the game

In this section we characterize all perfect Bayesian equilibria of the signaling game described above. A strategy for the Star is given by σ : {θ̲, θ̄} → Δ({C, L}), and we write σ(θ) = a when the Star of type θ plays a ∈ {C, L} with probability one. A perfect Bayesian equilibrium is given by a strategy σ and efforts e(a) ∈ [θ̲, θ̄], a = C, L, such that σ(θ̲) = L and such that

σ(C | θ̄) U_S(C, e(C); θ̄) + σ(L | θ̄) U_S(L, e(L); θ̄) ≥ U_S(a, e(a); θ̄) for every a ∈ {C, L},
e(a) = E_σ[θ | a] for every a ∈ supp[σ].
Censorship has two effects on the Star's payoff. First, it has a direct positive effect: given a fixed level of search effort, censorship decreases the probability of discovering the information compared to the case where the information is left to circulate, that is, P_L(e) − P_C(e) > 0. Second, it has an indirect negative effect, as censorship is always a signal that the type is embarrassing, which increases the interest of the Receivers: e(L) − e(C) ≤ 0. It is the trade-off between these two effects that determines the Star's incentive to censor.

In a separating equilibrium we have σ(θ̄) = C, σ(θ̲) = L, e(C) = θ̄ and e(L) = θ̲. Such an equilibrium exists iff the Star has no incentive to deviate to L when her type is θ̄, i.e.,

D^S := P_L(θ̲) − P_C(θ̄) ≥ c/θ̄.

Note that if P_L(θ̲) < P_C(θ̄), then D^S is negative and separation is impossible. In particular, even when censorship is free for the Star (c = 0), separation cannot occur if the probability of discovering the information is low when the Receivers believe that the information is neutral.

In a pooling equilibrium we have σ(θ̄) = σ(θ̲) = L and e(L) = E[θ]. Such an equilibrium exists iff the Star has no incentive to deviate to C when her type is θ̄, i.e.,

−θ̄ P_L(E[θ]) ≥ −θ̄ P_C(e(C)) − c.

The easiest way to support this equilibrium is to let e(C) = θ̄ (the Receivers believe that a Star who deviated is of the embarrassing type). Hence, a pooling equilibrium exists iff

D^P := P_L(E[θ]) − P_C(θ̄) ≤ c/θ̄.

Notice that D^S < D^P, so a pooling and a separating equilibrium cannot exist at the same time.

Finally, in a mixed equilibrium we have σ(C | θ̄) = x ∈ (0, 1) (the Star of type θ̄ chooses to censor with probability x and to leave with probability 1 − x), σ(θ̲) = L, e(C) = θ̄ and e(L) = E_σ[θ | L] = (E[θ] − xpθ̄)/(1 − xp) =: θ̂_L(x). Such an equilibrium exists iff the Star is indifferent between L and C when her type is θ̄, i.e.,

D(x) := P_L(θ̂_L(x)) − P_C(θ̄) = c/θ̄.

Notice that θ̂_L(x) is strictly decreasing in x, with θ̂_L(0) = E[θ] and θ̂_L(1) = θ̲. It follows that D(x) is strictly decreasing in x and that D(0) = D^P and D(1) = D^S. Hence, there is a unique mixed equilibrium, with x ∈ (0, 1), when D^S < c/θ̄ < D^P. Under these conditions the equilibrium probability of censorship x is decreasing in the cost c of censorship, decreasing in the prior belief p that the information is embarrassing (since θ̂_L(x) is decreasing in p), and increasing in the efficiency of censorship (i.e., if P_C(e) is lower for every e). The following proposition summarizes the equilibrium conditions given above:
Proposition 1. The signaling game has generically a unique perfect Bayesian equilibrium:

• Separation. If D^S := P_L(θ̲) − P_C(θ̄) > c/θ̄, the Star of neutral type leaves the information to circulate and the Star of embarrassing type censors the information;
• Pooling. If D^P := P_L(E[θ]) − P_C(θ̄) < c/θ̄, both types of the Star leave the information to circulate;
• Mixed. If D^S < c/θ̄ < D^P, the Star of neutral type leaves the information to circulate and the Star of embarrassing type randomizes between censoring and leaving the information to circulate with strictly positive probabilities.
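As an illustration of Proposition 1, the equilibrium regions can be explored numerically. The sketch below is ours and not part of the original analysis: the functional forms P_L(e) = 1 − exp(−e) and P_C(e) = κ·P_L(e) with κ ∈ (0, 1), and all parameter values, are assumptions chosen only so that the monotonicity and ranking conditions of Section 2 (including c > θ̲) hold.

```python
import math

# Illustrative primitives (our assumptions, not the paper's):
# P_L(e) = 1 - exp(-e) and P_C(e) = kappa * P_L(e) with kappa in (0, 1)
# satisfy 0 <= P_C(e) < P_L(e) <= 1 and strict monotonicity in e.
theta_low, theta_high = 0.2, 1.0   # neutral and embarrassing types, 0 <= theta_low < theta_high
p = 0.3                            # prior probability of the embarrassing type
c = 0.25                           # cost of censorship (c > theta_low, as assumed in Section 2)
kappa = 0.1                        # efficiency of censorship (lower = more efficient)

def P_L(e): return 1.0 - math.exp(-e)
def P_C(e): return kappa * (1.0 - math.exp(-e))

E_theta = p * theta_high + (1 - p) * theta_low

# Thresholds of Proposition 1.
D_S = P_L(theta_low) - P_C(theta_high)   # separation iff D_S > c / theta_high
D_P = P_L(E_theta) - P_C(theta_high)     # pooling    iff D_P < c / theta_high

def classify(c):
    """Return the generically unique perfect Bayesian equilibrium."""
    cutoff = c / theta_high
    if D_S > cutoff:
        return "separating: the embarrassing type censors"
    if D_P < cutoff:
        return "pooling: both types leave the information to circulate"
    # Mixed region: solve D(x) = P_L(theta_hat_L(x)) - P_C(theta_high) = c / theta_high.
    def theta_hat_L(x):
        return (E_theta - x * p * theta_high) / (1 - x * p)
    def D(x):
        return P_L(theta_hat_L(x)) - P_C(theta_high)
    lo, hi = 0.0, 1.0
    for _ in range(60):          # bisection; D is strictly decreasing in x
        mid = 0.5 * (lo + hi)
        if D(mid) > cutoff:
            lo = mid
        else:
            hi = mid
    return f"mixed: the embarrassing type censors with probability x = {0.5 * (lo + hi):.3f}"

print(classify(c))   # with these illustrative parameters: mixed, x close to 0.34
```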
4. Partially sophisticated Receivers

In this section we assume that there is a proportion α ∈ [0, 1] of Receivers who are partially sophisticated in the sense that they do not understand the signaling aspect of the Star's action. More precisely, we assume as in Jehiel (2005) that a partially sophisticated Receiver forms analogy-based expectations about the Star's strategy: the Receiver partitions the nodes at which the Star takes a decision into analogy classes and expects the Star's average strategy in each class. We assume that a partially sophisticated Receiver uses the coarsest analogy partition: the strategy of the Star perceived by such a Receiver is the average strategy of the Star over her two decision nodes. Hence, a partially sophisticated Receiver's expectation about the state is the prior expectation whatever the Star's action. The total search effort after action a is then given by

e(a) = (1 − α)E_σ[θ | a] + αE[θ],    (1)

where, as in the benchmark case, E_σ[θ | a] can be chosen arbitrarily in [θ̲, θ̄] when a has zero probability according to σ. Let θ̲_α := (1 − α)θ̲ + αE[θ] ≥ θ̲, θ̄_α := (1 − α)θ̄ + αE[θ] ≤ θ̄, and θ̂^α_L(x) := (1 − α)θ̂_L(x) + αE[θ]. The equilibrium conditions are the same as in the benchmark case, replacing θ̲, θ̄ and θ̂_L(x) by θ̲_α, θ̄_α and θ̂^α_L(x) respectively. The equilibrium is generically unique, and the mixed equilibrium x solves

D_α(x) := P_L(θ̂^α_L(x)) − P_C(θ̄_α) = c/θ̄.
Proposition 2. The signaling game with a proportion α of partially sophisticated Receivers has generically a unique analogy-based expectation equilibrium:

• Separation. If D^S_α := P_L(θ̲_α) − P_C(θ̄_α) > c/θ̄, the Star of neutral type leaves the information to circulate and the Star of embarrassing type censors the information;
• Pooling. If D^P_α := P_L(E[θ]) − P_C(θ̄_α) < c/θ̄, both types of the Star leave the information to circulate;
• Mixed. If D^S_α < c/θ̄ < D^P_α, the Star of neutral type leaves the information to circulate and the Star of embarrassing type randomizes between censoring and leaving the information to circulate with strictly positive probabilities.
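To see how the thresholds of Proposition 2 move with the proportion α of partially sophisticated Receivers, one can recompute D^S_α and D^P_α after dampening the type-dependence of the effort as in Eq. (1). The sketch below is again only illustrative; the functional forms and parameter values are our own assumptions, not the paper's.

```python
import math

# Illustrative primitives (our assumptions, as in the earlier sketch).
theta_low, theta_high = 0.2, 1.0
p, kappa = 0.3, 0.4
E_theta = p * theta_high + (1 - p) * theta_low

def P_L(e): return 1.0 - math.exp(-e)
def P_C(e): return kappa * (1.0 - math.exp(-e))

def thresholds(alpha):
    """Thresholds of Proposition 2 when a proportion alpha of Receivers
    is partially sophisticated (alpha = 0 is the benchmark of Proposition 1)."""
    th_low_a = (1 - alpha) * theta_low + alpha * E_theta     # theta_low_alpha
    th_high_a = (1 - alpha) * theta_high + alpha * E_theta   # theta_high_alpha
    return P_L(th_low_a) - P_C(th_high_a), P_L(E_theta) - P_C(th_high_a)

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    D_S_a, D_P_a = thresholds(alpha)
    print(f"alpha = {alpha:4.2f}:  D_S_alpha = {D_S_a:6.3f},  D_P_alpha = {D_P_a:6.3f}")
# Both thresholds rise with alpha and meet at P_L(E[theta]) - P_C(E[theta]) when
# alpha = 1: the separating region c/theta_high < D_S_alpha expands, while the
# pooling region c/theta_high > D_P_alpha shrinks.
```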
When α increases, D^S_α, D^P_α and D_α(x) all increase, so the separating condition becomes easier to satisfy while the pooling condition becomes harder. This means that separation and censorship are more likely to be observed with a higher proportion of partially sophisticated Receivers. In the limit, when all Receivers are partially sophisticated (α = 1), the probability of discovering the information after censoring is always smaller than after leaving: only the direct effect of censorship is at play, since there is no signaling effect of censorship, and we have D^S_1 = D^P_1 = D_1(x) = P_L(E[θ]) − P_C(E[θ]) > 0. Finally, note that, when there is censorship, the probability that the information is discovered is always lower than in the benchmark case, that is, P_C(θ̄_α) < P_C(θ̄).

It is interesting to see that the equilibria identified in the previous proposition actually correspond to χ-cursed equilibria (Eyster and Rabin, 2005), with χ = α. The interpretation underlying this equilibrium concept is that each Receiver's perception of the Star's strategy is a convex combination of the average strategy of the Star across types (with weight χ) and the actual (type-contingent) strategy of the Star (with weight 1 − χ). Said differently, every Receiver only partially takes into account the correlation between the Star's type and her action, which also translates into a lower responsiveness of the effort to this action.

Finally, note that there are several alternative interpretations of the proportion α of partially sophisticated Receivers. First, a straightforward equivalent interpretation is to consider that a proportion α of Receivers think that the Star's decision to censor is not correlated with the content of her information, but perhaps with some intrinsic preference of the Star (for example, her preference for intimacy). Second, leaving bounded rationality issues aside, the proportion α of partially sophisticated Receivers can be interpreted as a proportion of perfectly rational Receivers who do not observe or pay attention to the Star's action. Note that if instead we interpret α as the probability that the Star's decision does not appear in the media, while with probability 1 − α it does, then the model is slightly different. Indeed, in that case no Receiver observes the Star's decision with probability α, and all Receivers observe her decision with probability 1 − α. Then, a separating equilibrium exists if and only if

(1 − α)(P_L(θ̲) − P_C(θ̄)) + α(P_L(E[θ]) − P_C(E[θ])) > c/θ̄.

As in the former model, separation and censorship are more likely to be observed with a higher α.

5. Partially sophisticated Star

In this section we assume that the Star is partially sophisticated: the strategy of the Receivers perceived by the Star is the average strategy of the Receivers across their decision nodes.4 This can be interpreted as the Star not understanding the signaling aspect of her own action. Said differently, the Star correctly anticipates the equilibrium distribution of the Receivers' search efforts, but she does not understand that this distribution might be correlated with the state or with her decision to censor or not. In an analogy-based expectation equilibrium, the Star therefore best-responds to a mixed strategy of the Receivers which consists in exerting search effort e(C) with the probability that the Star plays C, and effort e(L) with the probability that the Star plays L. This mixed strategy is the true equilibrium distribution of the Receivers' efforts.
It corresponds to the statistical inference the Star could make in a learning process in which she only receives feedback about the distribution of efforts, without receiving statistical feedback about the efforts conditional on her decision to censor (Jehiel and Koessler, 2008; Esponda and Pouzo, 2016).

In a separating equilibrium the actual strategy of the Receivers is e(C) = θ̄ and e(L) = θ̲, so the strategy of the Receivers perceived by the Star is the lottery yielding effort e = θ̄ with probability p and effort e = θ̲ with probability 1 − p. Such an equilibrium exists iff the Star has no incentive to deviate to L when her type is θ̄, i.e.,

D̃^S := p[P_L(θ̄) − P_C(θ̄)] + (1 − p)[P_L(θ̲) − P_C(θ̲)] ≥ c/θ̄.

Notice that D̃^S > 0, so such a separating equilibrium always exists when the cost of censorship, c, is low enough. Since P_a(e) is increasing in e, we also have D̃^S > D^S for every p, which means that separation is always easier to obtain than in the benchmark case.
4 Precisely, as in the case of partially sophisticated Receivers, the Star partitions the set of histories at which the Receivers take a decision into analogy classes. We consider the coarsest analogy partition, which puts the four possible histories in the same element.
In particular, when P_L(θ̲) = 0, separation is never possible under full rationality but it is possible with a partially sophisticated Star.

Similarly, a pooling equilibrium exists iff

D̃^P := P_L(E[θ]) − P_C(E[θ]) ≤ c/θ̄.

We have D̃^P > D^P for every p, which means that pooling is always harder to obtain than in the benchmark case. However, we may have D̃^S > D̃^P or D̃^S < D̃^P depending on the functions P_C(e) and P_L(e), so there may exist multiple equilibria.5

In a mixed equilibrium in which the Star of type θ̄ chooses to censor with probability x, the strategy of the Receivers perceived by the Star is the lottery yielding effort e = θ̄ with probability px and effort e = θ̂_L(x) with probability p(1 − x) + (1 − p). Hence, the Star of type θ̄ is indifferent between the two actions iff

D̃(x) := px[P_L(θ̄) − P_C(θ̄)] + (p(1 − x) + (1 − p))[P_L(θ̂_L(x)) − P_C(θ̂_L(x))] = c/θ̄.
Notice that D̃(0) = D̃^P and D̃(1) = D̃^S, so such a mixed equilibrium exists whenever c/θ̄ is between D̃^P and D̃^S. The equilibrium is always unique if D̃(x) is decreasing in x.

Proposition 3. The signaling game with a partially sophisticated Star has the following analogy-based expectation equilibria:

• Separation. If D̃^S := p[P_L(θ̄) − P_C(θ̄)] + (1 − p)[P_L(θ̲) − P_C(θ̲)] ≥ c/θ̄, the Star of neutral type leaves the information to circulate and the Star of embarrassing type censors the information;
• Pooling. If D̃^P := P_L(E[θ]) − P_C(E[θ]) ≤ c/θ̄, both types of the Star leave the information to circulate;
• Mixed. If c/θ̄ is between D̃^P and D̃^S, the Star of neutral type leaves the information to circulate and the Star of embarrassing type randomizes between censoring and leaving the information to circulate with strictly positive probabilities.
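The Streisand effect discussed after Proposition 3 can be illustrated numerically. The following sketch keeps the illustrative functional forms used above (our assumptions, not the paper's); it checks that the separating threshold D̃^S of the partially sophisticated Star exceeds c/θ̄ even though the benchmark threshold D^S does not, and that the resulting discovery probability under censorship, P_C(θ̄), exceeds P_L(θ̲), the probability the embarrassing type would face by leaving the information to circulate.

```python
import math

# Illustrative primitives (our assumptions): censorship is barely effective here.
theta_low, theta_high = 0.02, 1.0
p, kappa, c = 0.3, 0.8, 0.03       # c > theta_low, as assumed in Section 2
E_theta = p * theta_high + (1 - p) * theta_low

def P_L(e): return 1.0 - math.exp(-e)
def P_C(e): return kappa * (1.0 - math.exp(-e))

cutoff = c / theta_high

# Benchmark separating threshold (Proposition 1) vs. the threshold with a
# partially sophisticated Star (Proposition 3).
D_S = P_L(theta_low) - P_C(theta_high)
D_S_tilde = (p * (P_L(theta_high) - P_C(theta_high))
             + (1 - p) * (P_L(theta_low) - P_C(theta_low)))

print(f"benchmark:                D_S = {D_S:7.3f}  (separation iff > {cutoff:.3f})")
print(f"sophisticated Star: D_S_tilde = {D_S_tilde:7.3f}  (> 0, so separation holds for small c)")

# Streisand effect: in the separating equilibrium the embarrassing type censors,
# yet the induced discovery probability P_C(theta_high) exceeds P_L(theta_low),
# the probability she would face by leaving the information to circulate.
print(f"P_C(theta_high) = {P_C(theta_high):.3f}  >  P_L(theta_low) = {P_L(theta_low):.3f}")
```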
When c is low enough, the unique equilibrium is separation. The Star of embarrassing type always censors despite the fact that the probability of discovering the information under censorship, P_C(θ̄), can be much higher than if she left the information to circulate, P_L(θ̲). The Streisand effect occurs in the sense that the Star censors at her own expense in equilibrium. This is in contrast with the fully rational benchmark where, if P_C(θ̄) is high and c is low, the Star of embarrassing type always leaves the information to circulate.

6. Extensions

In the previous section, we have shown that partial sophistication on the Star's side can lead to the Streisand effect in our signaling game. We now consider a variant of our benchmark model in which the Star misperceives her environment, and show that we can also end up in a situation where the Star censors embarrassing information at her expense. In both cases, the Star's erroneous perception of the signaling effect of censorship is key for the Streisand effect to occur. Finally, we discuss the possibility for the Star to censor every piece of information in a repeated version of our game. By doing so, the Star manages to use censorship in a way that no longer signals any information.

6.1. Star's misperception of the environment

We now consider a variant of our benchmark model in which the Streisand effect can occur because the Star wrongly perceives the visibility of her action to the Receivers. Precisely, assume that a proportion 1 − α of Receivers observe the Star's action, but that the Star believes that a proportion 1 − β < 1 − α of Receivers observe her action, with α, β ∈ (0, 1).6

First note that, in the case where β = α (the Star correctly perceives the visibility of her action), we are back to the model studied in Section 4. In that section, we assumed that a proportion α of Receivers were partially sophisticated, and we mentioned that this is equivalent to having a proportion α of Receivers who do not observe the Star's action. In this case, the Star properly anticipates the partial lack of reaction to her action a, and expects the effort e(a) given by Eq. (1). In a separating equilibrium (σ(θ̄) = C, σ(θ̲) = L), the effort expected after action C is then θ̄_α = (1 − α)θ̄ + αE[θ] and the effort expected after action L is θ̲_α = (1 − α)θ̲ + αE[θ]. Proposition 2 gives the condition for a separating equilibrium to exist:

P_L(θ̲_α) − P_C(θ̄_α) > c/θ̄.    (2)
If instead the Star believes that the visibility of her action to the Receivers is 1 − β, she expects them to exert effort (1 − β)E_σ[θ | a] + βE[θ] after action a.
5 A sufficient condition for pooling and separating equilibria not to coexist is that P_L(e) − P_C(e) is concave, since in that case D̃^P ≥ D̃^S.
6 In the terminology of Esponda and Pouzo (2016), the objective model is the signaling game with parameter α, while the signaling game with parameter β represents the subjective (and misspecified) model of the Star.
In a separating equilibrium, the effort expected by the Star is then θ̄_β := (1 − β)θ̄ + βE[θ] after action C, and θ̲_β := (1 − β)θ̲ + βE[θ] after action L. It follows that a separating equilibrium exists iff

P_L(θ̲_β) − P_C(θ̄_β) > c/θ̄.    (3)
For β > α, we have θ̲_β > θ̲_α and θ̄_β < θ̄_α. When the Star underestimates the visibility of her action, she actually expects a lower (resp. greater) effort from the Receivers after C (resp. after L) than the true effort they exert. In fact, she underestimates the signaling aspect of her action. The condition given by Eq. (3) is always weaker than the one given by Eq. (2). So it can be that the Star of embarrassing type would not censor the information if she correctly perceived the visibility of her action, while she does censor when she underestimates this visibility. Consider the extreme case where α tends to zero while β tends to one. The Receivers perfectly observe the Star's action, as in the benchmark model, but she believes they do not observe anything. In this case, as in Section 5, the Star censors and loses on average by doing so.

In both cases, the Streisand effect is due to the Star failing to understand fully how her action affects the Receivers' reaction. But the two variations of the benchmark model are subtly different. When the Star underestimates the visibility of her action, she anticipates effort E[θ] from the Receivers whatever she does. When the Star is partially sophisticated in the sense considered in Section 5, she correctly anticipates the true average distribution of the Receivers' effort: she expects the Receivers to exert effort e = θ̄ with probability p and effort e = θ̲ with probability 1 − p. However, she fails to understand how this effort is correlated with her action.

Finally, note that we can consider an alternative misperception of her environment by the Star and obtain similar results. Instead of underestimating the visibility of her action, let the Star overestimate the efficiency of censorship. Formally, assume that the Receivers discover the piece of information with probability P_C(e) when exerting effort e after censorship, but that the Star believes that censorship is more efficient, so that the Receivers discover the piece of information with probability P̃_C(e) < P_C(e) when exerting effort e. In that case, the condition for separation in the benchmark model is P_L(θ̲) − P_C(θ̄) > c/θ̄, while the condition for separation in the model where the Star is misled is weaker and given by

P_L(θ̲) − P̃_C(θ̄) > c/θ̄.
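The comparison between conditions (2) and (3) can be made concrete with the same kind of illustrative functional forms (again our assumptions rather than the paper's): for β > α, the subjective separating condition (3) holds for a strictly larger range of censorship costs than the objective condition (2).

```python
import math

# Illustrative primitives (our assumptions).
theta_low, theta_high = 0.2, 1.0
p, kappa = 0.3, 0.4
E_theta = p * theta_high + (1 - p) * theta_low

def P_L(e): return 1.0 - math.exp(-e)
def P_C(e): return kappa * (1.0 - math.exp(-e))

def separating_lhs(x):
    """Left-hand side of the separating condition when a share x of Receivers
    is (perceived to be) non-observing: P_L(theta_low_x) - P_C(theta_high_x)."""
    th_low_x = (1 - x) * theta_low + x * E_theta
    th_high_x = (1 - x) * theta_high + x * E_theta
    return P_L(th_low_x) - P_C(th_high_x)

alpha, beta = 0.2, 0.8   # true share 1 - alpha observes; the Star believes only 1 - beta does
print(f"objective condition (2):  lhs = {separating_lhs(alpha):.3f} > c/theta_high ?")
print(f"subjective condition (3): lhs = {separating_lhs(beta):.3f} > c/theta_high ?")
# Since beta > alpha, the subjective left-hand side is larger: a Star who
# underestimates the visibility of her action censors for a wider range of costs c.
```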
6.2. Repeated censorship

Our basic model is essentially static. In this section, we study a simple repeated version of the game and illustrate how censorship can be rationalized when it cannot be in the static version. We construct an equilibrium in which the Star is able to build a reputation for censoring information however embarrassing it is, as long as the cost of censorship is low enough or the Receivers' prior belief that the information is embarrassing is high enough. In the long run, the Star is better off censoring any type of information (embarrassing and neutral) because she can benefit from censorship without sending the signal that is at the origin of the Streisand effect.

The repeated extension of the game proceeds as follows: in each period, the signaling game of Section 2 is played, and at the end of each period past actions are observed and the state is redrawn according to the same probability distribution as in the static model. With probability δ ∈ (0, 1) the game proceeds to the next period, and with probability 1 − δ the game ends (so δ can also be interpreted as the Star's discount factor). The interpretation is that, in each period, a piece of information about the Star can be exogenously released, and she can decide to censor it or not.

Consider the case identified in Proposition 1 in which the unique equilibrium consists of both types of the Star leaving the information to circulate: D^P = P_L(E[θ]) − P_C(θ̄) < c/θ̄. Playing this static equilibrium in each period independently of past actions is still an equilibrium of the repeated game, and yields an expected utility for the Star equal to −P_L(E[θ])E[θ].

Consider now the following strategies in the repeated version: in each period along the equilibrium path both types of the Star censor, and therefore the Receivers keep their prior beliefs and exert effort E[θ]. Off the equilibrium path, that is, if the Star leaves the information to circulate in some period, the Receivers believe that the Star who deviated in that period is of the embarrassing type (so the Receivers exert effort θ̄ in that period),7 and the equilibrium of the static game is played in all remaining periods.

7 The argument does not depend on this off-equilibrium-path belief as long as δ is close to 1.

To check that this constitutes a perfect Bayesian equilibrium we have to verify that, along the equilibrium path, no type of the Star has an incentive to deviate in any given period by not censoring. By not deviating (i.e., by censoring), the expected equilibrium payoff of the Star of type θ is

(1 − δ)(−θP_C(E[θ]) − c) + δ(−P_C(E[θ])E[θ] − c).

By deviating (i.e., by not censoring), the expected equilibrium payoff of the Star of type θ is

(1 − δ)(−θP_L(θ̄)) + δ(−P_L(E[θ])E[θ]).

Hence, the deviation is not profitable if

(1 − δ)(θP_C(E[θ]) + c) + δ(P_C(E[θ])E[θ] + c) ≤ (1 − δ)θP_L(θ̄) + δP_L(E[θ])E[θ],

i.e.,

P_L(E[θ]) − P_C(E[θ]) ≥ (c + (1 − δ)θ(P_C(E[θ]) − P_L(θ̄))) / (δE[θ]).

Since P_C(E[θ]) − P_L(θ̄) < 0, this condition is satisfied for every θ if and only if it is satisfied for θ = θ̲. When δ → 0 it cannot be satisfied, because in the static game censoring is a dominated strategy for the Star of type θ̲. However, when δ is high enough, the condition is satisfied whenever the cost c of censorship is low enough. This shows that there can be an equilibrium of the repeated game in which both types of the Star censor, while there cannot be censorship in any equilibrium of the static game.8
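The reputation condition derived above can be checked numerically for a high continuation probability δ. The sketch below uses illustrative functional forms and parameters of our own choosing; it verifies that the static game has no censorship (pooling on L) while the repeated-game condition holds for the binding neutral type θ = θ̲, in line with footnote 8.

```python
import math

# Illustrative primitives (our assumptions).
theta_low, theta_high = 0.01, 1.0
p, kappa, c, delta = 0.3, 0.5, 0.03, 0.95   # c > theta_low; delta = continuation probability
E_theta = p * theta_high + (1 - p) * theta_low

def P_L(e): return 1.0 - math.exp(-e)
def P_C(e): return kappa * (1.0 - math.exp(-e))

# Static game: pooling on L is the unique equilibrium when D_P < c / theta_high,
# so no censorship occurs in the one-shot benchmark.
D_P = P_L(E_theta) - P_C(theta_high)
print(f"static game:   D_P = {D_P:.3f} < c/theta_high = {c / theta_high:.3f} ? {D_P < c / theta_high}")

# Repeated game: a Star of type theta keeps censoring iff
# P_L(E) - P_C(E) >= (c + (1 - delta) * theta * (P_C(E) - P_L(theta_high))) / (delta * E),
# and the binding type is the neutral one, theta = theta_low.
lhs = P_L(E_theta) - P_C(E_theta)
rhs = (c + (1 - delta) * theta_low * (P_C(E_theta) - P_L(theta_high))) / (delta * E_theta)
print(f"repeated game: {lhs:.3f} >= {rhs:.3f} ? {lhs >= rhs}")
```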
7. Conclusion

In this paper we have proposed and analyzed a simple signaling game with the objective of understanding the strategic forces behind the so-called "Streisand effect". We have characterized all possible (separating, pooling and mixed) equilibria as a function of the prior beliefs of the Star's audience, the cost of censorship, and the direct effect of censorship on the probability of discovering the information for a given search effort. In particular, we have shown that a rational Star should never censor information if censorship is not efficient enough when the search effort is high, even when the cost of censorship is negligible. This is due to the signaling effect of censorship. Next, we have introduced players' partial sophistication to capture the fact that the Star or her audience underestimates the signaling effect of censorship. Censorship can then be rationalized when it was not possible in the benchmark model. In particular, we have shown that a partially sophisticated Star always decides to censor information when the cost of censorship is low, even if censoring increases the probability of discovering the information. In addition, we have shown that misperception of the environment on the Star's side can likewise lead to censorship being used to the Star's detriment, in the sense suggested by the Streisand effect examples. In more general signaling games, a limited comprehension of some parameters of the game or a coarse perception of the receivers' strategy would also help explain a sender's wrong assessment of the signaling aspect of her action.

Acknowledgments

We thank David Ettinger, Philippe Jehiel, an associate editor and two anonymous referees for useful suggestions.

References

Bilancini, E., Boncinelli, L., 2016. Signaling to Analogical Reasoners Who Can Costly Acquire Information. Working Paper.
Esponda, I., 2008. Behavioral equilibrium in economies with adverse selection. Am. Econ. Rev. 98, 1269–1291.
Esponda, I., Pouzo, D., 2016. Berk–Nash equilibrium: a framework for modeling agents with misspecified models. Econometrica 84 (3), 1093–1130.
Ettinger, D., Jehiel, P., 2010. A theory of deception. Am. Econ. J.: Microecon. 2 (1), 1–20.
Eyster, E., Rabin, M., 2005. Cursed equilibrium. Econometrica 73 (5), 1623–1672.
Jansen, S., Martin, B., 2015. The Streisand effect and censorship backfire. Int. J. Commun. 9, 656–671.
Jehiel, P., 2005. Analogy-based expectation equilibrium. J. Econ. Theory 123 (2), 81–104.
Jehiel, P., Koessler, F., 2008. Revisiting games of incomplete information with analogy-based expectations. Games Econ. Behav. 62 (2), 533–557.
Liarte, S., 2013. Image de marque et internet: comprendre, éviter et gérer l'effet "Streisand". Décis. Mark. 69, 103–110.
Spence, A.M., 1973. Job market signaling. Q. J. Econ. 87, 355–374.
8 More precisely, when δ → 1, this happens when P_L(E[θ]) − P_C(θ̄) < c/θ̄ and P_L(E[θ]) − P_C(E[θ]) ≥ c/E[θ].