Games and Economic Behavior 71 (2011) 409–419
Credibility and determinism in a game of persuasion✩

Itai Sher
Department of Economics, University of Minnesota, Minneapolis, MN 55455, United States

Article history: Received 31 January 2010; available online 31 May 2010.

Abstract

This paper studies a game of persuasion. A speaker attempts to persuade a listener to take an action by presenting evidence. Glazer and Rubinstein (2006) showed that when the listener's decision is binary, neither randomization nor commitment has any value for the listener, and commented that the binary nature of the decision was important for the commitment result. In this paper, I show that concavity is the critical assumption for both results: no value to commitment and no value to randomization. Specifically, the key assumption is that the listener's utility function is a concave transformation of the speaker's utility function. This assumption holds vacuously in the binary model. The result that concavity implies credibility allows us to dispense with the assumption that the listener's decision is binary and significantly broadens the scope of the model.

JEL classification: C72; D82; D83

Keywords: Optimal persuasion rules; Credibility; Determinism; Concavity; Commitment; Evidence

1. Introduction

This paper studies a game of persuasion. A speaker attempts to persuade a listener to take an action. In general, the listener's decision may be binary, such as the decision of whether to honor a request, or multi-valued, as in the case where a consumer attempts to persuade a seller to lower her price. The speaker has private information which is relevant to the listener's decision, and while the listener knows the speaker's preferences, the speaker has evidence about her private information which she can use to attempt to persuade the listener. This paper studies the listener's optimal choice of persuasion rule; the listener commits to a rule for responding to the speaker's evidence, knowing that the choice of rule will influence the speaker's reporting behavior.

Glazer and Rubinstein (2006) studied the listener's problem of selecting an optimal persuasion rule under the assumption that the listener's decision is binary, and proved two fundamental results:1

Determinism. There is no need for randomization; there always exists an optimal persuasion rule which is deterministic.

Credibility. Commitment has no value in the persuasion problem. Every optimal persuasion rule has a credible implementation, which is an equilibrium of the persuasion game without commitment which implements the same outcome as the optimal rule.

✩ I am very grateful to Eddie Dekel for many valuable comments and advice. I would like to thank David Austen-Smith, Marciano Siniscalchi, Asher Wolinsky, Ariel Rubinstein, Jesse Bull, Elchanan Ben-Porath, David Rahman, Tomasz Strzalecki, Evsen Turkay, and Jan Werner. Nyalleng Moorosi provided valuable research assistance. A Grant-in-Aid from the University of Minnesota is gratefully acknowledged. I am also grateful to the Center for Economic Theory of Northwestern University for financial support. All errors are mine. E-mail address: [email protected].

1 Glazer and Rubinstein (2004) studied a related problem. In that model, the credibility result held but the determinism result did not. Glazer and Rubinstein (2006) discuss the relationship between the two papers. Another related paper is Glazer and Rubinstein (2001).


The second result is especially striking. The assumption that the listener's decision problem is binary is limiting. Many papers in the literature on persuasion games study multi-valued decisions. For instance, Milgrom and Roberts (1986) study a model in which a seller attempts to persuade a buyer to purchase a large quantity of his product. Shin (1994a) studies a model in which parties in a legal dispute attempt to persuade a judge to award either a high or low level of damages. Shin (2003) studies a model in which the manager of a firm attempts to persuade the market that the firm has a high value. In all of these cases, the decision variable (be it the quantity purchased, the level of damages, or the market price) can take on many values. The classic model of strategic communication of Crawford and Sobel (1982), though more distantly related, also involves a decision among many actions.

Unfortunately, without further assumptions, the credibility and determinism results do not extend beyond the case of a binary decision. Glazer and Rubinstein (2006) present an example with three actions in which the credibility result fails to hold. Below I show that the determinism result may also fail with three actions. A sharper analysis shows that concavity is the essential property for credibility. More precisely, when the listener's utility function is a concave transformation of the speaker's utility function, both credibility and determinism are restored, both when actions are continuous and when actions are discrete. This property holds vacuously for binary actions. The concavity property establishes a relationship between credibility and determinism which is invisible in the binary case.

Relaxing the concavity assumption, I characterize the form which randomization may take in optimal persuasion rules when actions are discrete. In particular, there always exist optimal persuasion rules which are quasi-deterministic, meaning that (i) the speaker's expected utility from any message is equal to the utility of some pure action, and (ii) each message leads to a probability distribution over at most two actions, and these actions are not adjacent. With two actions, (i) is equivalent to determinism, as is (ii) (because of nonadjacency). Quasi-determinism generalizes determinism, and this provides a second generalization of the determinism result with two actions.

The paper is organized as follows: Section 2 presents the model. Section 3 presents results on determinism. Section 4 deals with credibility. Section 5 presents two applications to multi-valued settings that have been studied in the literature: one to marketing, and another to disclosure in financial markets. Section 6 concludes. Some proofs are contained in Appendix A.

2. The model

There is a speaker and a listener. There is a finite set of states X. The speaker knows the state but the listener does not. p_x > 0 is the probability of state x. There is a finite set M of messages. These are the messages that the speaker could potentially send. However, different messages are available to the speaker at different states. This may be because of hard evidence whose availability depends on the state of the world. At every state x, σ(x) is the nonempty set of messages that the speaker can send at state x. We have σ(x) ⊆ M. Moreover, we assume that M = ⋃_{x∈X} σ(x). The tuple M := (X, M, σ) is a message structure and σ is the message correspondence.

I consider two possibilities for the listener's choice set.
Under discrete actions, the listener selects an action from J = {1, . . . , n} with n ≥ 2. Action j is adjacent to j′ if j = j′ + 1 or j = j′ − 1. Under continuous actions, the listener's choice set is J = [0, 1]. The speaker has a utility function u: J → ℝ which does not depend on the state. This utility function is increasing: j < j′ ⇒ u(j) < u(j′). The listener has a utility function v: J × X → ℝ, which does depend on the state. When actions are continuous, I assume that u and v are continuous. The fact that the speaker's utility function does not depend on the state captures the assumption that the listener knows what the speaker would like him to do. A higher action then corresponds to an action more preferred by the speaker. The relevant uncertainty from the listener's perspective concerns what evidence the speaker can present to persuade him. Such evidence is relevant because what the listener would prefer to do depends on the speaker's information. The remaining definitions in this section and afterwards will be for the discrete actions case; analogous definitions apply for continuous actions.

2.1. Optimal persuasion rules

A persuasion rule is a rule which assigns to each message a probability distribution over actions. Formally, a persuasion rule is a function f: M × J → [0, 1], where f(m, j) is interpreted as the probability assigned to action j given that message m is received. Of course, we assume that for all m and j, f(m, j) ≥ 0, and Σ_{i∈J} f(m, i) = 1. F is the set of all persuasion rules. A persuasion rule f is deterministic if for all m and j, f(m, j) ∈ {0, 1}. D is the set of all deterministic persuasion rules. If f ∈ D, I write f̃(m) for the unique action j such that f(m, j) = 1.

A speaker strategy is a function which assigns to each state a probability distribution over messages. For any state x, positive probability may only be assigned to messages available at that state, or in other words, to messages in σ(x). Formally, a speaker strategy is a function ζ: X × M → [0, 1], where ζ(x, m) is interpreted as the probability that the speaker sends message m in state x. A speaker strategy ζ satisfies Σ_{m∈σ(x)} ζ(x, m) = 1 and m ∉ σ(x) ⇒ ζ(x, m) = 0 for all x ∈ X.
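To make these definitions concrete, here is a minimal sketch in Python (my own illustration, not part of the paper); the particular states, messages, and message correspondence are hypothetical placeholders.

```python
# Illustrative primitives: states, messages, and the message correspondence sigma.
X = ["x1", "x2"]                                  # states
M = ["m1", "m2"]                                  # messages
sigma = {"x1": {"m1"}, "x2": {"m1", "m2"}}        # sigma(x): messages available at x
u = {1: 1.0, 2: 2.0, 3: 3.0}                      # speaker's state-independent utility on J

def best_reply_messages(f):
    """Messages the speaker can send with positive probability in a best reply
    to persuasion rule f, where f[m] is a dict {action: probability}."""
    value = {m: sum(q * u[j] for j, q in f[m].items()) for m in M}
    return {x: {m for m in sigma[x] if value[m] == max(value[n] for n in sigma[x])}
            for x in X}

# A random rule: m1 mixes actions 1 and 3; m2 plays action 2 for sure.
f = {"m1": {1: 0.5, 3: 0.5}, "m2": {2: 1.0}}
print(best_reply_messages(f))   # at x2 both messages give expected utility 2
```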


The problem we are interested in is that of selecting an optimal persuasion rule. First, the listener is assumed to commit to a persuasion rule. Afterwards, the speaker sends his message, knowing the rule to which the listener has committed. For any persuasion rule f, B(f) is the set of speaker best replies to f. Formally, B(f) is the set of ζ's which maximize Σ_{m∈M} Σ_{j∈J} ζ(x, m) f(m, j) u(j) for all x ∈ X. Equivalently, B(f) is the set of ζ's such that ζ(x, m) > 0 only if m maximizes Σ_{j∈J} f(m, j) u(j) subject to m ∈ σ(x). A persuasion rule f* is optimal if

$$f^* \in \arg\max_{f \in F}\; \max_{\zeta \in B(f)}\; \sum_{x \in X} \sum_{m \in M} \sum_{j \in J} \zeta(x, m)\, f(m, j)\, v(j, x)\, p_x$$

Notice that f enters into the objective in two ways, one of which is through B(f). Implicitly, in selecting an optimal persuasion rule, it is as though the listener could select the speaker's strategy as well, provided that it is a best reply to the selected rule.

2.2. Game without commitment

In addition to the listener's problem when the listener can commit, we will also be interested in a game without commitment. In this version, the speaker moves first, sending a message, and the listener then responds with an action. The solution concept is sequential equilibrium.2 A persuasion rule f* is credible if there exists a speaker strategy ζ* and a listener strategy f** such that

• (ζ*, f**) is a sequential equilibrium, and
• ∀m ∈ M, ∀j ∈ J, Σ_{x∈X} ζ*(x, m) p_x > 0 ⇒ f**(m, j) = f*(m, j).

In words, f** coincides with f* on messages sent with positive probability according to ζ*. A persuasion rule is strongly credible if there exists a speaker strategy ζ* such that:

• (ζ*, f*) is a sequential equilibrium.

Thus a persuasion rule is strongly credible if there is some sequential equilibrium in the game without commitment which supports it. Credibility is weaker than strong credibility insofar as the requirement that f* is supported as a sequential equilibrium is weakened to the requirement that the outcome induced by f* in the game with commitment (i.e., the mapping from states to probability distributions over actions) can be supported as a sequential equilibrium.

This paper will be interested in credibility (and strong credibility) of optimal persuasion rules. In general, for optimal persuasion rules, one may want to add to the definition of credibility the requirement that in the equilibrium (ζ*, f*) supporting the rule f*, the speaker strategy ζ* also satisfies:

$$\zeta^* \in \arg\max_{\zeta \in B(f^*)}\; \sum_{x \in X} \sum_{m \in M} \sum_{j \in J} \zeta(x, m)\, f^*(m, j)\, v(j, x)\, p_x \tag{1}$$

In words, in the equilibrium, the speaker chooses the best reply which is most favorable to the listener. The rationale for (1) is that we are interested in the question of whether optimal persuasion rules are credible, and optimal persuasion rules are evaluated on the assumption that the listener may choose among speaker best responses; we want to be sure that both the persuasion rule and the speaker best response that the listener chooses are consistent with equilibrium in the game without commitment. The reason that we do not impose the requirement (1) is that we will be concerned with credibility of persuasion rules which are both optimal and deterministic, and when f* is deterministic, every speaker best reply ζ* to f* leads to the same utility for the listener against f*, so in this case condition (1) will always be trivially satisfied whenever the other conditions for credibility are satisfied.

3. Determinism

Glazer and Rubinstein (2006) established that in the case of two actions, there always exists an optimal deterministic persuasion rule. In the following example, I show that this is no longer true in the case of many actions.

2 When specialized to the persuasion game, a sequential equilibrium of the game without commitment is a pair (ζ*, f*) satisfying the following two conditions:

1. ζ* ∈ B(f*).
2. There exists a sequence of totally mixed speaker strategies {ζ_n: n ∈ ℕ}, with ζ_n(x, m) = 0 exactly if m ∉ σ(x) and ζ_n(x, m) = 1 exactly if σ(x) = {m}, for all n, and a system of beliefs μ ∈ [0, 1]^{X×M} such that
   (a) for all x ∈ X and m ∈ M, lim_{n→∞} ζ_n(x, m) = ζ*(x, m);
   (b) $\mu(x, m) = \lim_{n\to\infty} \frac{\zeta_n(x, m)\, p_x}{\sum_{x' \in X} \zeta_n(x', m)\, p_{x'}}$;
   (c) f*(m, j) > 0 only if $j \in \arg\max_{j' \in J} \sum_{x \in X} v(j', x)\, \mu(x, m)$.


Example 1. Suppose that there are two states x1 and x2 and three actions J = {1, 2, 3}, and that p_x > 0 for both states x. Suppose that the listener's utility function is given by:

v     1   2   3
x1    1   0   1
x2    0   1   0

Moreover, σ(x_i) = {m1, m2} for i = 1, 2.

I assume moreover that the speaker's utility function is such that u(j) = j. In any optimal rule, the response to one message is action 2 with probability 1, and the response to the other puts half the probability on action 1 and the other half on action 3. In particular, no deterministic rule is optimal, as the sketch below verifies.
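The following brute-force check is my own sketch, not code from the paper; it assumes equiprobable states, p = (1/2, 1/2), although the example only requires p_x > 0. It confirms that every deterministic rule yields the listener at most 1/2, while the random rule described above yields 1.

```python
from itertools import product

X, M, J = ["x1", "x2"], ["m1", "m2"], [1, 2, 3]
p = {"x1": 0.5, "x2": 0.5}                        # assumption: equiprobable states
u = {j: float(j) for j in J}
v = {("x1", 1): 1, ("x1", 2): 0, ("x1", 3): 1,
     ("x2", 1): 0, ("x2", 2): 1, ("x2", 3): 0}
sigma = {"x1": M, "x2": M}                        # both messages available everywhere

def listener_value(f):
    """Listener's value of rule f when the speaker best-replies and, among
    best replies, the listener selects the one most favorable to himself."""
    su = {m: sum(q * u[j] for j, q in f[m].items()) for m in M}
    total = 0.0
    for x in X:
        best = max(su[m] for m in sigma[x])
        ties = [m for m in sigma[x] if su[m] == best]
        total += p[x] * max(sum(q * v[(x, j)] for j, q in f[m].items()) for m in ties)
    return total

# All deterministic rules: each message mapped to a single action.
det = max(listener_value({"m1": {a: 1.0}, "m2": {b: 1.0}})
          for a, b in product(J, repeat=2))
rnd = listener_value({"m1": {2: 1.0}, "m2": {1: 0.5, 3: 0.5}})
print(det, rnd)   # 0.5 versus 1.0: the random rule strictly outperforms
```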





In this section, I present two generalizations of the determinism result to many actions. Throughout this section, I will restrict attention to the discrete actions case, but at the end, I will comment on how one of the results extends to the continuous actions case. The following concept is weaker than determinism:

Definition 1. A persuasion rule f is quasi-deterministic if for all m ∈ M:

$$\sum_{j \in J} u(j)\, f(m, j) \in \{u(j): j \in J\} \tag{2}$$

$$\bigl|\{j \in J: f(m, j) > 0\}\bigr| \leq 2 \tag{3}$$

(2) says that every message gives the speaker an expected utility equal to that of some pure action. (3) says that given any message, at most two actions receive positive probability. Every deterministic persuasion rule satisfies both of these properties, so quasi-determinism generalizes determinism. In the model with binary actions, a persuasion rule satisfies (2) if and only if it is deterministic. This implies that in the binary case, determinism coincides with quasi-determinism. (2) and (3) imply that if, given some message, two actions receive positive probability, then these two actions are nonadjacent. In particular, if exactly two actions j and j′ (with j < j′) receive positive probability conditional on message m, there must exist another action j″ with j < j″ < j′ such that:

$$u(j'') = f(m, j)\, u(j) + f(m, j')\, u(j')$$

(Note that the probabilities must be $f(m, j) = \frac{u(j') - u(j'')}{u(j') - u(j)}$ and $f(m, j') = \frac{u(j'') - u(j)}{u(j') - u(j)}$.) Since with two actions it is trivially true that every pair of distinct actions is adjacent, this nonadjacency property is satisfied exactly (and vacuously) by the deterministic persuasion rules in the binary case. This is another way of seeing that determinism and quasi-determinism coincide in the binary case.

Define R(ζ) := {f ∈ F: ζ ∈ B(f)}. R(ζ) is the set of persuasion rules which rationalize ζ. R(ζ) is always nonempty because it always contains all persuasion rules which respond to all messages in the same way. Being defined by a set of linear inequalities, bounded, and nonempty, R(ζ) is always a polytope (i.e., the convex hull of a finite number of points), which may be thought of as a subset of [0, 1]^{M×J}.

Lemma 1. The extreme points of R(ζ) are quasi-deterministic.

Proof. In Appendix A. □


Theorem 1. There exists an optimal quasi-deterministic persuasion rule.

Proof. Let f* be an optimal persuasion rule, and let ζ* be the speaker strategy which maximizes the listener's expected utility within B(f*). Then any persuasion rule which maximizes the listener's expected utility within R(ζ*), on the assumption that the speaker will use ζ*, is optimal. We can find a solution to the latter problem among the extreme points of R(ζ*).3 So the conclusion follows from Lemma 1. □

Corollary 1. In the case of two actions, there is an optimal deterministic persuasion rule.

Proof. This follows from Theorem 1 and the fact that with two actions quasi-determinism and determinism are equivalent. □

3 This follows from the fact that, holding ζ* fixed, the listener's objective is linear in persuasion rules and R(ζ*) is a polytope.
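The argument behind Theorem 1 can be made computational. The sketch below (my own, assuming SciPy is available) encodes R(ζ) by the best-reply inequalities from Section 2.1 and maximizes the listener's payoff over it; since this is a linear program over a polytope, an optimum is attained at a vertex, which Lemma 1 shows is quasi-deterministic. Standard LP solvers typically return such a basic (vertex) solution.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_rule_given_zeta(X, M, J, sigma, p, u, v, zeta):
    """Maximize the listener's payoff over R(zeta), the rules rationalizing zeta.
    zeta[(x, m)] must be defined for every state-message pair (0 if unavailable)."""
    idx = {(m, j): k for k, (m, j) in enumerate((m, j) for m in M for j in J)}
    c = np.zeros(len(idx))                        # linprog minimizes, so negate
    for (m, j), k in idx.items():
        c[k] = -sum(zeta[(x, m)] * p[x] * v[(x, j)] for x in X)
    A_ub, b_ub = [], []
    for x in X:
        for m in M:
            if zeta[(x, m)] > 0:                  # m must be speaker-optimal at x,
                for m2 in sigma[x]:               # against every available m2
                    row = np.zeros(len(idx))
                    for j in J:
                        row[idx[(m2, j)]] += u[j]
                        row[idx[(m, j)]] -= u[j]
                    A_ub.append(row)
                    b_ub.append(0.0)
    A_eq = np.zeros((len(M), len(idx)))           # each f(m, .) is a distribution
    for i, m in enumerate(M):
        for j in J:
            A_eq[i, idx[(m, j)]] = 1.0
    res = linprog(c, A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=A_eq, b_eq=np.ones(len(M)), bounds=(0.0, 1.0))
    return {mj: res.x[k] for mj, k in idx.items()}
```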


As mentioned above, the last corollary was proven in a different way in Glazer and Rubinstein (2006). In fact, because quasi-determinism is a generalization of determinism which coincides with determinism in the binary case, Theorem 1 is a generalization of Glazer and Rubinstein's theorem to many actions. Using an argument similar to that in Theorem 1, it follows that:

Corollary 2. For every sequential equilibrium (ζ*, f*) of the game without commitment, there exists a quasi-deterministic f** such that (ζ*, f**) is a sequential equilibrium. Given Assumption 1 below, f** is deterministic. In the case of two actions, f** is deterministic.4

It is obvious that the listener, but possibly not the speaker, attains the same utility in the two equilibria (ζ*, f*) and (ζ*, f**). This result shows that in the case of two actions, not only is it always possible to find an optimal deterministic persuasion rule, but in response to any equilibrium speaker strategy there is a deterministic equilibrium listener best response. An analogous fact involving quasi-determinism holds with many actions.

I now introduce an assumption which is important for both determinism and credibility.

Assumption 1. For all x ∈ X, there exists a concave function c_x: ℝ → ℝ such that for all j ∈ J, v(j, x) = c_x(u(j)).

Theorem 2. Given Assumption 1, there exists an optimal deterministic persuasion rule.

Proof. By Theorem 1, we can choose an optimal quasi-deterministic persuasion rule f*, and let ζ* be the speaker strategy which maximizes the listener's utility in B(f*) given that the listener uses f*. By quasi-determinism, for any message m such that f* selects a nondegenerate distribution δ in response to m, there is an action j such that j with probability 1 gives the speaker the same utility as δ. Construct a new persuasion rule f** which replaces all such δ's with the corresponding j's. Then ζ* ∈ B(f**). Moreover, by Jensen's inequality, the listener is weakly better off. □

Since every function on a two-element set is the restriction of some concave function to that set, Theorem 2 is a second generalization of Glazer and Rubinstein's result concerning deterministic optimal rules in the binary case.

As stated above, the results in this section apply to the discrete actions case. However, in the case of continuous actions, Theorem 2, the main result of this section establishing determinism, still holds. In the proof, it is unnecessary to appeal to quasi-determinism or an analogous concept. In particular, in the proof, let f* be any optimal persuasion rule. Then continuity of the speaker's utility function guarantees that for any probability measure δ on J, there exists an action j ∈ J such that the speaker is indifferent between j and δ. The proof then proceeds in the same way, establishing that for any random rule, there is a deterministic rule that weakly outperforms it. Restricting attention to deterministic persuasion rules, the problem of choosing an optimal rule can be represented as the problem of maximizing a continuous function on a compact subset of a Euclidean space, guaranteeing the existence of an optimum.
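For finite J, Assumption 1 is easy to test: since u is strictly increasing, v(·, x) is a concave transformation of u if and only if the incremental slopes of v against u are nonincreasing. A small checker (my own sketch, not from the paper):

```python
def satisfies_assumption_1(J, u, v, X):
    """Test Assumption 1 on a finite, sorted action list J.
    u maps actions to utilities; v maps (state, action) pairs to utilities."""
    for x in X:
        slopes = [(v[(x, J[i + 1])] - v[(x, J[i])]) / (u[J[i + 1]] - u[J[i]])
                  for i in range(len(J) - 1)]
        # Concavity along u-values: consecutive slopes must not increase.
        if any(s2 > s1 + 1e-12 for s1, s2 in zip(slopes, slopes[1:])):
            return False
    return True

# Example 2 in the next section violates the assumption at x1
# (slopes -1 then +2), consistent with the failure of credibility there.
```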
4. Credibility

Glazer and Rubinstein (2006) establish that in the case of two actions, there is an optimal credible rule. This result is robust to some variations of the model. For example, Glazer and Rubinstein (2004) presented a credibility result for a related model in which the listener has two actions, and in which the speaker sends a cheap talk message to the listener, after which the listener can verify some but not all of the information provided by the speaker. Sher (2010) extends this latter version of the credibility result to a broader class of message structures. Glazer and Rubinstein (2001) presented an example of a two-speaker debate such that the optimal debate from the listener's perspective was credible.

Glazer and Rubinstein (2006) point out that in the case of many actions, the credibility result no longer holds. The following example is adapted from their paper:

Example 2. Suppose that there are two states X = {x1, x2}, and the probability distribution p is such that p_{x1} = 0.4 and p_{x2} = 0.6. J = {1, 2, 3} and u(j) = j for all j ∈ J. Moreover, σ(x1) = {m1} and σ(x2) = {m1, m2}. The listener's utility function is:

v     1    2    3
x1    0   −1    1
x2    0    1   −1

4 With respect to the requirement that in a sequential equilibrium, off the equilibrium path, the listener's strategy is a best reply to beliefs which are consistent with the structure of the game, one can show that f** can always be chosen so that for every message m and action j, f**(m, j) > 0 ⇒ f*(m, j) > 0. This observation allows us to use the system of beliefs that rationalizes f* in the original equilibrium (ζ*, f*) to construct beliefs that rationalize the response of f** to off-equilibrium messages. On a separate note, observe that the corollary continues to be true if the solution concept of sequential equilibrium is replaced by the weaker solution concept of Bayesian Nash equilibrium.


If we restrict attention to deterministic rules, the rule f which assigns action 1 to m1 and action 2 to m2 is the unique optimal rule. However, in this case, upon seeing m1, the listener would know that the state is x1, and would therefore prefer to take action 3. So f is not credible.5 Allowing for random rules, the optimal rule f′ responds to m1 by randomizing over actions 1 and 3 with equal probability, and responds to m2 by taking action 2 with probability 1. Again, this rule is not credible: if the speaker chooses, from among his best replies, the one which is optimal from the listener's perspective, then at x1 the speaker will send m1 and at x2 the speaker will send m2. But in this case the listener will prefer to select action 3 with probability 1 upon seeing m1. So f′ is not credible; the sketch below spells out the arithmetic.
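The failure of credibility in Example 2 is a short computation (my own sketch): given the listener-preferred best reply, m1 reveals x1, where the rule's mixture is strictly worse for the listener than the pure action 3.

```python
v = {("x1", 1): 0, ("x1", 2): -1, ("x1", 3): 1,
     ("x2", 1): 0, ("x2", 2): 1, ("x2", 3): -1}

# Upon m1 the posterior is concentrated on x1 (only x1 sends m1).
payoff_m1 = {j: v[("x1", j)] for j in (1, 2, 3)}
print(max(payoff_m1, key=payoff_m1.get))        # 3: the profitable deviation
print(0.5 * v[("x1", 1)] + 0.5 * v[("x1", 3)])  # 0.5: value of the mix f'(m1)
```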

Observe that in the above example, the listener's utility function is "convex" in actions at x1. One can restore the credibility result under the concavity assumption above. An optimal deterministic persuasion rule is a persuasion rule which is both (i) optimal among all rules (including random rules) and (ii) deterministic. Recall that by Theorem 2, under Assumption 1, optimal deterministic rules exist.

Theorem 3. Given Assumption 1:
(a) Every optimal deterministic persuasion rule is credible.
(b) There exists an optimal deterministic persuasion rule which is strongly credible.

Proof. In Appendix A. □

Theorem 3 applies both when actions are continuous and when actions are discrete. It is notable that the same assumption that guarantees that there exist optimal rules which are deterministic also guarantees that those optimal rules are credible.

Assumption 1 plays two roles in Theorem 3. Consider the discrete actions case. First, this assumption implies, via Theorem 2, that there exists an optimal persuasion rule which is deterministic. Second, the credibility result for the binary actions case implies that at an optimal deterministic rule, the listener would never like to deviate to an adjacent action upon seeing any message m. The concavity property in Assumption 1 then implies that the listener would not like to deviate to any other action. A limiting argument is used to extend these considerations to the continuous case.

5. Applications

In this section, I apply the results of the current paper to two models from the literature.

5.1. Marketing

Milgrom and Roberts (1986) present an example of a seller (speaker) and buyer (listener).6 The buyer will buy more of the seller's product the higher is the quality. The seller would like to maximize sales, and therefore would like to persuade the buyer that the product is of as high a quality as possible. The state x ∈ X represents the quality of the seller's product. For example, X could be the set {1, 2, . . . , k}, where a higher number represents a higher quality, and the buyer's utility function could be v(j, x) = w(j, x) − pj, where j is the quantity purchased, p is the price, and w(j, x) is the value to the buyer of quantity j of a good of quality x. The set of quantities may be either an interval J = [0, q̄] for some q̄ > 0, or discrete, J = {q1, q2, . . . , qn} with q1 < q2 < · · · < qn. The seller's utility function could be given by u(j) = (p − γ)j, where γ is the unit cost and we assume that γ < p. Suppose J = [0, q̄], w is differentiable, and w₁ is the derivative of w with respect to its first argument. If x < y ⇒ w₁(j, x) < w₁(j, y), then the buyer will demand a higher quantity when the product is of a higher quality. A discrete analogue will give the same result if quantity choices are discrete. If for each quality x, w(j, x) is concave in j, then Assumption 1 is satisfied.7 Above, we have presented one way of filling in the details, but in what follows we treat u, v, X and J more abstractly.

Milgrom and Roberts (1986) assume the seller is able to prove any true statement about his product but cannot lie. In other words, if Y ⊆ X and x ∈ Y, then the seller can prove that x ∈ Y. So the message correspondence is given by σ(x) = {Y ⊆ X: x ∈ Y}. The main result of Milgrom and Roberts (1986) is that every sequential equilibrium will lead to unraveling, meaning that the listener will be able to exactly infer the state. Similar models have been used to argue that laws mandating disclosure about product characteristics are unnecessary.

Milgrom and Roberts (1986) assume that every true statement is provable. We use our results to address what happens if fewer statements are provable. Let S ⊆ 2^X be a family of subsets of X with the property that X ∈ S.

5 Although they do not say so explicitly, Glazer and Rubinstein's analysis of this counter-example shows that with many actions, a persuasion rule which is optimal among deterministic rules may not be credible.
This is natural, because they are making a comparison with the two-action case, where optimality among deterministic rules implies optimality. Here I extend the example to consider rules which are optimal among all rules, random and deterministic.

6 An earlier version of the model appeared in Milgrom (1981).

7 In particular, the function c_x in Assumption 1 can be chosen to be $c_x(r) := w\bigl(\frac{r}{p - \gamma}, x\bigr) - \frac{rp}{p - \gamma}$.


The set of statements in S is assumed to be the set of provable statements. The corresponding message correspondence is then given by σ_S(x) = {Y ∈ S: x ∈ Y}. In other words, the seller can now prove some but not all statements Y about his product. The seller can prove those statements that are both true (x ∈ Y) and provable (Y ∈ S). The assumption that X ∈ S means that it is always possible to prove the vacuous statement that the quality belongs to X, and guarantees that at each state, at least one message is available. The model in Milgrom and Roberts (1986) coincides with the model presented here when S = 2^X, the set of all subsets of X.

Consider two families S and T of subsets of X (both of which contain the vacuous statement X). Then we say that more statements are provable in T than in S if S ⊆ T. Note that S ⊆ T implies that for all x ∈ X, σ_S(x) ⊆ σ_T(x). Say that a sequential equilibrium (ζ*, f*) of the buyer–seller game (without commitment) is buyer-optimal if the buyer receives a weakly higher expected utility in (ζ*, f*) than in any other sequential equilibrium. Theorem 3 implies the following result:

Proposition 1. Under Assumption 1, as more statements become provable, the buyer's payoff increases at the buyer-optimal sequential equilibrium.

Proof. By making more statements provable, the set of possible persuasion rules is effectively increased, and so in the game with commitment, the listener (i.e., the buyer) must do weakly better at the optimal persuasion rule.8 The result now follows from Theorem 3. □

So if we start from the situation where no statements are provable, and incrementally add one statement to the provable set until all statements are provable, the buyer's payoff monotonically increases until the point at which all statements are provable, where the unraveling result applies and the buyer attains his full information payoff.9 The order in which provable statements are added does not affect this conclusion. It is important to note that without Assumption 1, the concavity assumption, Proposition 1 would not hold. Section A.4 of the appendix presents an example of a persuasion game in which Assumption 1 is violated and adding a statement to the provable set decreases the buyer's expected utility at the buyer-optimal sequential equilibrium. A brute-force check of this comparative statics exercise is sketched below.
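The sketch below is my own, under two stated assumptions: X is small enough for enumeration, and Assumption 1 holds, so deterministic rules suffice (Theorem 2) and the commitment value equals the buyer-optimal equilibrium payoff (Theorem 3). Calling commitment_value on nested families S ⊆ T, one can check numerically that the value is monotone.

```python
from itertools import product

def commitment_value(X, J, p, u, v, S):
    """Optimal listener value over deterministic rules when the provable
    statements are S (a list of frozensets, each available iff true)."""
    sigma = {x: [Y for Y in S if x in Y] for x in X}
    best = float("-inf")
    for actions in product(J, repeat=len(S)):     # one action per statement
        f = dict(zip(S, actions))
        val = 0.0
        for x in X:
            top = max(u[f[Y]] for Y in sigma[x])  # speaker's best achievable payoff
            # among speaker-optimal messages, the listener picks his favorite
            val += p[x] * max(v[(x, f[Y])] for Y in sigma[x] if u[f[Y]] == top)
        best = max(best, val)
    return best
```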
5.2. Disclosure in financial markets

Shin (2003) presents an interesting application of the persuasion model to disclosure in financial markets. A firm undertakes several projects which may be either successful or unsuccessful. The speaker is the manager of the firm. Some projects may be complete by the interim stage and the rest will be complete at the final stage; ex ante, it is uncertain how many projects will be complete by the interim stage. At the interim stage, the manager discovers which projects are complete and which are not, and may truthfully report the outcome of any subset of complete projects. In other words, if (s, t), which we refer to as the state, is a pair consisting of the number s of complete successful projects at the interim stage and the number t of complete unsuccessful projects at the interim stage, then the manager may claim that the state is (s′, t′) so long as s′ ≤ s and t′ ≤ t. At the final stage, the outcome of all the projects is revealed.

The "market" is modeled as a player (the listener), who may choose a price p in the interim period with the objective of minimizing the expected value of (v − p)², where v is the value of the firm conditional on the realization of all projects at the final stage. The manager's objective is to maximize the interim price p. Shin shows that no equilibrium of this game leads to unraveling, but that there is a sequential equilibrium in which the manager reveals only the successful projects which have been completed at the interim phase. While I omit some of the details here, this model fits all of the assumptions of the model I study, including Assumption 1.10

In this setting, we can define a mechanism to be a map from the manager's reports to a price p. Say that a mechanism is incentive compatible if, whenever s′ ≤ s and t′ ≤ t, the price the manager receives at (s, t) is at least the price he receives at (s′, t′). An optimal mechanism is one that minimizes the expected value of (v − p)² subject to incentive compatibility. Therefore, using the results of this paper, and in particular Theorem 3, we may conclude the following:

8 Observe that eliminating a provable statement is equivalent to restricting attention to persuasion rules that assign that statement the lowest action.

9 Shin (1994b) presents a related result on the comparative statics of skepticism as the likelihood of the speaker receiving more evidence increases. However, the results are quite different, because Shin's result concerns a one-dimensional parameter that controls the likelihood that the speaker will have various pieces of evidence, whereas the result in the current paper performs comparative statics directly on the provable set of statements. Moreover, Shin focuses on a particular class of sequential equilibria, whereas the current paper studies buyer-optimal sequential equilibria.

10 The translation from Shin's model to the model studied in this paper is as follows. X = {(s, t) ∈ {0, 1, . . . , k} × {0, 1, . . . , k}: s + t ≤ k}, where k is the total number of projects. A typical state (s, t) ∈ X is the pair of outcomes (number of successes and failures) realized by the interim phase. It is straightforward to derive the probability of any state (s, t) ∈ X from Shin's model. Let V be a random variable specifying the value of the firm at the final stage after all projects have been completed. It is straightforward to derive explicit expressions for E[V | (s, t)], the expected value of V conditional on any state (s, t), from Shin's model. At any state (s, t), the message correspondence is given by σ(s, t) = {(s′, t′) ∈ X: s′ ≤ s, t′ ≤ t}. The listener may choose any price p in the interval [0, r] for some real number r. We can set some upper bound on the listener's choice set because it is straightforward to show that at the optimal persuasion rule the listener would never want to use any price which is greater than the value of the firm conditional on all projects being successful. The speaker's utility function is given by u(p) = p. The listener's utility function is given by v(p, (s, t)) = −E[(V − p)² | (s, t)]. Since u(p) = p, in order for Assumption 1 to be satisfied, it is sufficient to show that v(p, (s, t)) is a concave function of p for each (s, t) ∈ X, which is clearly true.


Proposition 2. The pricing rule induced by the optimal mechanism is also the market pricing rule derived from some sequential equilibrium of the disclosure game.

Notice that the binary model of Glazer and Rubinstein (2006) implies neither Proposition 1 nor Proposition 2; one needs the results established in the current paper.

6. Conclusion

Glazer and Rubinstein (2006) wrote:

The problem studied here can be viewed as a special case of a leader–follower problem in which the leader can commit to his future moves. As is well known, it is generally not true that the solution to such an optimization problem is credible. We are not aware, however, of any general theorem or principle that addresses this issue and that can explain why it is the case that in our model the listener's optimal strategy is credible. This question remains for future research.

This paper has taken a step toward answering this question. We have shown that concavity implies credibility. Future research will explore the degree to which a credibility result can be established in leader–follower games beyond the class of persuasion games. The current paper should provide a foundation for this project.

Appendix A

A.1. Proof of Lemma 1

Proof of Lemma 1. Suppose that f ∈ R(ζ) is such that for some m*, $\sum_{j \in J} u(j)\, f(m^*, j) \notin \{u(j): j \in J\}$ (i.e., f violates (2)). Let





$$P := \Bigl\{m \in M: \sum_{j \in J} u(j)\, f(m, j) = \sum_{j \in J} u(j)\, f(m^*, j)\Bigr\}$$

For each m ∈ P, let $\underline{j}_m$ (resp., $\overline{j}_m$) be the speaker's least (resp., most) preferred action j such that f(m, j) > 0. The fact that every m ∈ P gives the speaker an expected utility outside of {u(j): j ∈ J} implies that $\underline{j}_m \neq \overline{j}_m$. Now define two persuasion rules $\overline{f}$ and $\underline{f}$ such that for all m ∈ P:

































$$\overline{f}(m, \overline{j}_m) = f(m, \overline{j}_m) + \epsilon_m, \qquad \overline{f}(m, \underline{j}_m) = f(m, \underline{j}_m) - \epsilon_m,$$
$$\underline{f}(m, \underline{j}_m) = f(m, \underline{j}_m) + \epsilon_m, \qquad \underline{f}(m, \overline{j}_m) = f(m, \overline{j}_m) - \epsilon_m,$$

where for all m ∈ P, ε_m solves $\epsilon_m \bigl[u(\overline{j}_m) - u(\underline{j}_m)\bigr] = \epsilon$ for some ε > 0 (the same ε for all m). In all other cases, $\overline{f}$ and $\underline{f}$ coincide with f. If ε is chosen small enough, then $\overline{f}$ and $\underline{f}$ are in fact persuasion rules (i.e., assign a probability distribution over actions to each message). Moreover, the ranking of messages (including ties) according to the speaker's expected utility of sending them is the same under f, $\overline{f}$ and $\underline{f}$. It follows that $\overline{f}$ and $\underline{f}$ are in R(ζ). On the other hand, $f = \frac{1}{2}\overline{f} + \frac{1}{2}\underline{f}$. So f is not an extreme point of R(ζ). So any extreme point of R(ζ) satisfies (2).

Next, assume that f ∈ R(ζ) violates (3). Then there exist m* and distinct j′, j″, j‴ ∈ J such that for all j ∈ I := {j′, j″, j‴}, f(m*, j) > 0. Define $e := \sum_{j \in I} u(j)\, h_j$, where $h_j := f(m^*, j) \big/ \sum_{i \in I} f(m^*, i)$. Notice that since the numbers





u(j′), u(j″), and u(j‴) are all distinct, the set $K := \{r \in \mathbb{R}^I: e = \sum_{j \in I} u(j)\, r_j,\ \sum_{j \in I} r_j = 1\}$ is a one-dimensional affine space. Moreover, there must be two distinct pairs of elements of the set {u(j′), u(j″), u(j‴)} such that e is a (possibly degenerate for one pair) weighted average of the pair. So assume WLOG that there exist α, β ∈ [0, 1) such that e = αu(j′) + (1 − α)u(j″) and e = βu(j′) + (1 − β)u(j‴). In other words, (α, 1 − α, 0), (β, 0, 1 − β) ∈ K. Moreover, (α, 1 − α, 0) and (β, 0, 1 − β) are affinely independent. So, since (h_{j′}, h_{j″}, h_{j‴}) ∈ K and K is one-dimensional, there exists γ ∈ ℝ such that γ(α, 1 − α, 0) + (1 − γ)(β, 0, 1 − β) = (h_{j′}, h_{j″}, h_{j‴}). Since h_j > 0 for all j ∈ I, it follows that γ ∈ (0, 1). So define persuasion rules f′ and f″ so that:

$$f'(m^*, j') = \alpha \sum_{j \in I} f(m^*, j), \qquad f'(m^*, j'') = (1 - \alpha) \sum_{j \in I} f(m^*, j), \qquad f'(m^*, j''') = 0,$$
$$f''(m^*, j') = \beta \sum_{j \in I} f(m^*, j), \qquad f''(m^*, j'') = 0, \qquad f''(m^*, j''') = (1 - \beta) \sum_{j \in I} f(m^*, j).$$


In all other cases, f′ and f″ coincide with f. Notice that the speaker's expected utility of any message is the same under f′ and f″ as under f. It then follows from the fact that f ∈ R(ζ) that both f′ and f″ are in R(ζ). Moreover, f = γf′ + (1 − γ)f″. So f is not an extreme point of R(ζ). So any extreme point of R(ζ) satisfies (3). □

A.2. Proof of Theorem 3

A Bayesian Nash equilibrium in the game without commitment is a strategy profile (ζ*, f*) such that ζ* and f* are mutual best replies. I now prove the following result:

Proposition 3. Given Assumption 1, for every optimal deterministic persuasion rule f*, there exists a speaker strategy ζ* such that (ζ*, f*) is a Bayesian Nash equilibrium of the game without commitment.

Lemma 2 (proved in Section A.3) and Proposition 3 together imply Theorem 3.11 I first prove Proposition 3 for discrete actions, and then discuss the extension to continuous actions.

So let Assumption 1 hold. Let f* be an optimal deterministic persuasion rule for persuasion problem P = (X, M, σ, J, v, u, p). For each j ∈ J = {1, 2, . . . , n}, construct a persuasion problem P^j = (X^j, M^j, σ^j, J^j, v, u, p^j), where X^j = {x ∈ X: max_{m∈σ(x)} f̃*(m) = j}, M^j := {m ∈ M: f̃*(m) = j},

σ^j(x) := σ(x) ∩ M^j, p^j_x := p_x / Σ_{y∈X^j} p_y for all x ∈ X^j, and J^j = {j − 1, j, j + 1}, unless j = 1 (in which case J^1 = {1, 2}) or j = n (in which case J^n = {n − 1, n}). It is easy to see that σ^j(x) ≠ ∅ for j ∈ {1, 2, . . . , n} and all x ∈ X^j. Now choose j = 2, . . . , n − 1 (the argument for j = 1 or j = n is similar). Corollary 2 implies that there exists a Bayesian Nash equilibrium (ζ^j, f^j) in the game without commitment corresponding to P^j such that f^j is deterministic.12 For k ∈ {j − 1, j, j + 1}, define Y^j_k := {x ∈ X^j: max_{m∈σ^j(x)} f̃^j(m) = k}. Assume for contradiction that there exists m* ∈ M^j with Σ_{x∈X^j} ζ^j(x, m*) p^j_x > 0 such that for some k ∈ {j − 1, j + 1}:







$$\sum_{x \in X^j} \bigl[v(k, x) - v(j, x)\bigr]\, \zeta^j(x, m^*)\, p^j_x > 0 \tag{4}$$

Then, since f^j is a best reply to ζ^j, f̃^j(m*) ∈ {j − 1, j + 1}; so Y^j_{j−1} ∪ Y^j_{j+1} ≠ ∅. Note that for k ∈ {j − 1, j, j + 1}:

$$\forall y \in Y^j_k,\ \forall m \in \sigma^j(y): \qquad \sum_{x \in X^j} \bigl[v(k, x) - v(j, x)\bigr]\, \zeta^j(x, m)\, p^j_x \geq 0 \tag{5}$$

because f^j is a best reply to ζ^j. For k ∈ {j − 1, j + 1}, let H^j_k := {m ∈ M^j: ∃x ∈ Y^j_k, ζ^j(x, m) > 0}. It follows from (4) and (5) that:

$$\sum_{x \in Y^j_{j-1}} \bigl[v(j-1, x) - v(j, x)\bigr]\, p^j_x + \sum_{x \in Y^j_{j+1}} \bigl[v(j+1, x) - v(j, x)\bigr]\, p^j_x$$
$$= \sum_{m \in H^j_{j-1}} \sum_{x \in Y^j_{j-1}} \bigl[v(j-1, x) - v(j, x)\bigr]\, \zeta^j(x, m)\, p^j_x + \sum_{m \in H^j_{j+1}} \sum_{x \in Y^j_{j+1}} \bigl[v(j+1, x) - v(j, x)\bigr]\, \zeta^j(x, m)\, p^j_x > 0 \tag{6}$$

where we have also used the fact that the set of messages which are used with positive probability in some state in Y^j_k is precisely H^j_k. Now consider the persuasion rule f** in the original problem P defined by f**(m) := f^j(m) if m ∈ M^j, and f**(m) := f*(m) otherwise. It is easy to see, using the definition of P^j and the assumption that the speaker selects a best reply, that the set of states at which persuasion rules f* and f** induce different actions is Y^j_{j−1} ∪ Y^j_{j+1}. Therefore the listener utility induced by f** minus the listener utility induced by f* equals $\sum_{x \in Y^j_{j-1}} (v(j-1, x) - v(j, x))\, p_x + \sum_{x \in Y^j_{j+1}} (v(j+1, x) - v(j, x))\, p_x > 0$, where the inequality follows from (6) and the fact that p^j is proportional to p on X^j.

So f** attains a higher utility for the listener than f*, contradicting the optimality of f*. It follows that (4) is false, and hence for all m ∈ M^j and all k ∈ {j − 1, j + 1},

$$\sum_{x \in X^j} \bigl[v(k, x) - v(j, x)\bigr]\, \zeta^j(x, m)\, p_x \leq 0 \tag{7}$$

11 I am grateful to Evsen Turkay for a conversation which helped to simplify this proof.

12 Here, I have also used the fact that every finite extensive form game has a sequential equilibrium.


where we have again used the proportionality of p and p^j. So now consider the speaker strategy ζ* such that for all j ∈ J, ζ*(x, m) := ζ^j(x, m) if x ∈ X^j and m ∈ M^j, and ζ*(x, m) := 0 otherwise. This is a speaker best reply to f* because at each state the speaker only puts positive probability on messages that produce the highest action that he can achieve given f*. Moreover, choose any k > j, and m such that f̃*(m) = j (in other words, m ∈ M^j) and Σ_{x∈X} ζ*(x, m) p_x > 0. Then:





 v (k, x) − v ( j , x) j ζ (x, m) p x u (k) − u ( j ) j



v (k, x) − v ( j , x) ζ ∗ (x, m) p x = u (k) − u ( j )

x∈ X

x∈ X

 v ( j + 1, x) − v ( j , x) j ζ (x, m) p x  u (k) − u ( j ) u ( j + 1) − u ( j ) j

x∈ X

=

u (k) − u ( j ) u ( j + 1) − u ( j )





v ( j + 1, x) − v ( j , x) ζ j (x, m) p x

x∈ X j

0

(8)

The first inequality follows from Assumption 1 and the fact that u is strictly increasing. The second inequality follows from (7) and the fact that u is strictly increasing. A similar argument shows that the listener also weakly prefers j to any action k < j conditional on m. It follows that f* is a best reply to ζ*, and hence (ζ*, f*) is an equilibrium. This completes the proof of Proposition 3 in the case of discrete actions.

It is straightforward to extend the above argument to the continuous case, using a limiting argument, which I sketch below. Again, under Assumption 1, we know that there exists an optimal deterministic persuasion rule f*. Since M is finite, the set

$$J' := \bigl\{j \in J: \exists m \in M,\ \tilde{f}^*(m) = j\bigr\}$$

is finite. For each j ∈ J′, we consider a sequence P^j_n of persuasion problems, each of which is identical to P^j except that the action set is J^j_n = {j − ε_n, j, j + ε_n}, where ε_n → 0 as n → ∞, and derive equilibria (ζ^j_n, f^j_n) in each such problem. Then we establish the analog of (7) within persuasion problem P^j_n. Then we define ζ*_n analogously to ζ* in the above argument, and, extracting a convergent subsequence if necessary, redefine ζ* as the limit of ζ*_n. For the same reasons as above, we see that ζ* is a best reply to f*. Then, choosing k ≠ j and n sufficiently large, we derive inequalities analogous to (8) for ζ*_n, which implies, since ζ* is the limit of ζ*_n, that f* is a best reply to ζ*.

A.3. Lemma 2

The following lemma extends a result in Sher (2010).

Lemma 2. Let Assumption 1 hold. Let f* be an optimal deterministic persuasion rule, and suppose that there exists a speaker strategy ζ* such that (ζ*, f*) is a Bayesian Nash equilibrium of the game without commitment. Then there exists an optimal deterministic persuasion rule f** such that (a) (ζ*, f**) is a sequential equilibrium, and (b) for all messages m which are used with positive probability according to ζ*, f̃*(m) = f̃**(m).



Proof. Here I assume that J is discrete; the proof with continuous actions is similar. Let M_0 := {m ∈ M: Σ_{x∈X: m∈σ(x)} ζ*(x, m) p_x = 0} be the set of messages sent with zero probability. For each x ∈ X, let j(x) be the action which the listener would find optimal if he knew the state was x; if there are multiple optimal actions, then let j(x) be the smallest one. For each message m ∈ M, let g(m) := min{j(x): m ∈ σ(x)}. For each x ∈ X, define η(x) := max{f̃*(m): m ∈ σ(x)}. Let m_0 ∈ M_0, and Y := {x ∈ X: m_0 ∈ σ(x), η(x) < g(m_0)}. Assume for contradiction that Y ≠ ∅. Then consider the deterministic persuasion rule f which agrees with f* except on m_0, where f̃(m_0) = g(m_0). Then observe that at all states x ∈ X \ Y, the speaker receives the same action under f* and f. On the other hand, for x ∈ Y, the speaker receives η(x) under f* and g(m_0) under f, and by Assumption 1 and the definition of g, v(η(x), x) < v(g(m_0), x), so f gives the listener a higher expected utility than f*, contradicting the optimality of f*. It follows that for all m_0 ∈ M_0 and x ∈ X with m_0 ∈ σ(x), g(m_0) ≤ η(x). This implies that ζ* is a best reply to the deterministic persuasion rule f** defined by:

$$\tilde{f}^{**}(m) := \begin{cases} g(m), & \text{if } m \in M_0 \\ \tilde{f}^*(m), & \text{otherwise} \end{cases}$$

Moreover, since f* is a best reply to ζ* and f** agrees with f* on all messages which are used with positive probability according to ζ*, f** is also a best reply to ζ*. It remains only to show that there are off-equilibrium beliefs consistent with the structure of the game which rationalize the response of f** to messages in M_0. For all x ∈ X, m ∈ σ(x) and ε > 0 define:

$$\zeta^\epsilon(x, m) := \begin{cases} \zeta^*(x, m) - \gamma_x, & \text{if } \zeta^*(x, m) > 0 \\ \epsilon, & \text{if } \zeta^*(x, m) = 0 \text{ and } j(x) = g(m) \\ \epsilon^2, & \text{otherwise} \end{cases}$$




 where for each x ∈ X , γx is chosen so that m∈σ (x) ζ (x, m) = 1. Finiteness of M implies that if  is sufficiently small, then a unique such γx ∈ [0, 1] exists, and finiteness of X then implies that for sufficiently small  , ζ  is a speaker strategy. Moreover, ζ  → ζ ∗ as  → 0. By the definition of g for all m0 ∈ M 0 , there exists x ∈ X with m0 ∈ σ (x) such that j (x) = g (m0 ). It follows that for all m0 ∈ M, the listener’s limiting beliefs conditional on receiving m0 put probability 1 on states in which g (m0 ) is an optimal action, implying that (ζ ∗ , f ∗∗ ) is a sequential equilibrium.

A.4. Example where more provability harms the listener

This section substantiates the comment made at the end of Section 5.1, namely, that without Assumption 1, it is possible that at the buyer-optimal sequential equilibrium of the game without commitment, increasing evidence may decrease the buyer's utility. Suppose that X = {x1, x2, x3} and each state is equiprobable, having a probability of 1/3. The set of actions is J = {1, 2, 3, 4}. The speaker's utility function is given by u(j) = j and the listener's utility function is given by:

v     1    2    3    4
x1   −2    2   −4   −4
x2    1   −2    0   −4
x3   −4   −4    2    3

This example violates Assumption 1. As in Section 5.1, assume that there is a set of provable statements. First suppose that the set of provable statements is S = {{x2, x3}, {x1, x2, x3}}. Then at the buyer-optimal sequential equilibrium, at state x1 the speaker sends message {x1, x2, x3} and the listener selects action 2, whereas at states x2 and x3 the speaker sends message {x2, x3} and the listener selects action 3. The listener's expected utility is 4/3.

Now suppose that the statement {x3} is added to the provable set, so that the provable set becomes T = {{x3}, {x2, x3}, {x1, x2, x3}}. Then at the buyer-optimal sequential equilibrium (indeed, the unique sequential equilibrium), at states x1 and x2 the speaker sends message {x1, x2, x3} and the listener selects action 2, whereas at state x3 the speaker sends message {x3} and the listener selects action 4. The listener's expected utility is 1. This shows that adding the message {x3} lowers the listener's expected utility at the buyer-optimal sequential equilibrium.
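Given the payoff table above (as reconstructed from the extracted values; both equilibrium payoffs check out exactly), the comparison is a one-line computation. This is my own sketch, not code from the paper.

```python
p = 1.0 / 3.0     # equiprobable states
v = {("x1", 1): -2, ("x1", 2): 2, ("x1", 3): -4, ("x1", 4): -4,
     ("x2", 1): 1, ("x2", 2): -2, ("x2", 3): 0, ("x2", 4): -4,
     ("x3", 1): -4, ("x3", 2): -4, ("x3", 3): 2, ("x3", 4): 3}

# Provable set S: x1 sends X -> action 2; x2, x3 send {x2,x3} -> action 3.
print(p * (v[("x1", 2)] + v[("x2", 3)] + v[("x3", 3)]))   # 4/3
# Provable set T: x1, x2 send X -> action 2; x3 sends {x3} -> action 4.
print(p * (v[("x1", 2)] + v[("x2", 2)] + v[("x3", 4)]))   # 1.0
```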

References

Crawford, V., Sobel, J., 1982. Strategic information transmission. Econometrica 50, 1431–1451.
Glazer, J., Rubinstein, A., 2001. Debates and decisions: On a rationale of argumentation rules. Games Econ. Behav. 36, 158–173.
Glazer, J., Rubinstein, A., 2004. On the optimal rules of persuasion. Econometrica 72, 1715–1736.
Glazer, J., Rubinstein, A., 2006. A study in the pragmatics of persuasion: A game theoretical approach. Theoretical Econ. 1, 395–410.
Milgrom, P., 1981. Good news and bad news: Representation theorems and applications. Bell J. Econ. 12, 380–391.
Milgrom, P., Roberts, J., 1986. Relying on the information of interested parties. RAND J. Econ. 17, 18–32.
Sher, I., 2010. Persuasion and dynamic communication. Mimeo.
Shin, H., 1994a. The burden of proof in a game of persuasion. J. Econ. Theory 64, 253–264.
Shin, H., 1994b. News management and the value of firms. RAND J. Econ. 25, 58–71.
Shin, H., 2003. Disclosure and asset returns. Econometrica 71, 105–133.