The power of words: A model of honesty and fairness

Journal of Economic Psychology 33 (2012) 642–658
Raúl López-Pérez, Department of Economic Analysis, Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid, Spain

Article info

Article history: Received 9 January 2011; Received in revised form 15 November 2011; Accepted 6 December 2011; Available online 13 December 2011

JEL classification: C72; D01; D62; D64; Z13

Abstract

We develop a game-theoretical model of honesty and fairness to study cooperation in social dilemma games with communication. It is based on two key intuitions. First, players suffer a utility cost if they break norms of honesty and fairness, and this cost is highest if most others comply with the norm. Second, people are heterogeneous with regard to their concern for norms. We show that a model based on honesty norms alone cannot explain why pre-play communication fosters cooperation in simultaneous social dilemmas. In contrast, the model based on norms of honesty and fairness can. We also illustrate other predictions of the model, offering experimental evidence in line with them – e.g., the effect of communication on cooperation depends on how many players communicate, and on whether the social dilemma is played simultaneously or sequentially. In addition, ideas for new experiments are suggested.

© 2011 Elsevier B.V. All rights reserved.

PsycINFO classification: 3020 Group & Interpersonal Processes

Keywords: Communication; Cooperation; Fairness; Heterogeneity; Honesty; Reciprocity; Social norms

1. Introduction

Communication often has a very substantial effect on cooperation, something true even if the communicators are anonymous subjects playing one-shot games (for early studies pointing this out, see Dawes, McTavish, & Shaklee, 1977; Dawes, 1980). In a meta-analysis of social dilemma experiments conducted from 1958 to 1992, for instance, Sally (1995) concluded that non-binding communication raises cooperation by approximately 30%; a more recent meta-analysis by Balliet (2010) has likewise found a large positive effect.1 In this paper we offer one possible explanation for this effect. Our model is based on the idea that people care about social norms (Becker, 1996; Bicchieri, 2002; Elster, 1989) in a conditional/reciprocal manner – intuitively, they feel painful emotions like shame when they transgress internalized norms that others respect. Crucially, it also assumes that players are

⁎ Tel.: +34 91 497 6801. E-mail address: [email protected]
1 See also Bicchieri (2002) for a survey of some evidence on communication in public good games and social dilemmas, and Ellingsen and Johannesson (2004) for a review of some related psychological literature.

0167-4870/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.joep.2011.12.004

heterogeneous concerning the norms that they care about.2 More precisely, we distinguish between three types of players: (i) selfish types who do not care at all about norms, (ii) honest (H) types who are motivated by a norm of honesty, and (iii) EH types who are motivated by a norm of both distributive justice and honesty – this norm requires players to seek an efficient and egalitarian (E) outcome and also to be honest, which explains the acronym EH. A key message from our model is that the interaction between the three types is crucial to understand why communication fosters cooperation in social dilemmas. In effect, our first result shows that a model based on honesty norms alone cannot account for this phenomenon, at least in simultaneous social dilemmas. In short, honesty/lie-aversion is not enough to deliver a communication effect. In contrast, our ‘3-types’ hypothesis can account for it. To get some of the intuition in this respect, consider first the following three behaviors, well-observed in numerous studies (Sally, 1995): (a) some people cooperate in social dilemmas even if they cannot communicate, (b) communication increases cooperation (which suggests that some people cooperate only if they can communicate), and (c) some people never cooperate in social dilemmas, even if pre-play communication is available. These behaviors correspond in our model to the EH, honest, and selfish types, respectively. In effect, EH types cooperate if they expect their co-players to cooperate as well – that is, they are conditional cooperators – and their presence explains why some people cooperate even if they cannot communicate. In turn, communication increases cooperation because it allows honest types to make promises and hence commit themselves to cooperate. Is that ever an optimal strategy? Yes, if they believe that the promisee is an EH player, that is, the type of person who cooperates conditionally and hence would defect if the sender announced defection. 
Finally, selfish agents are necessary to explain why cooperation sometimes fails to happen, even if communication is available. We also hint that the effect of communication subtly depends on a number of variables including: (i) the message content, (ii) the number of message senders, (iii) the order in which players communicate or make choices later (e.g., sequentially or simultaneously), (iv) the possibility for the players to keep silent, (v) the expected price of telling the truth, or (vi) prior history. These predictions are conveyed by means of simple examples, which might hopefully inspire further theoretical research. Further, we report some existing experimental evidence consistent with those predictions, although it seems to us that more data is required to better understand motivations like honesty and the effect of communication. We hope in this respect that a formal, detailed model will help suggest ideas for future experiments. This paper contributes to two expanding research fields. On one hand, several models of fairness and reciprocity have emerged in the last decades (we review them in Section 5), providing insights on why people cooperate. For instance, these models predict costly conditional cooperation in public good games, trust games, and other social dilemmas, a phenomenon observed in many experiments – see Chaudhuri (2011) for a survey. On the other hand, a number of papers have recently formalized the idea that people are lie-averse or honest, motivated by evidence that people often tell the truth even when it is contrary to their material interest (Belot, Bhaskar, & van de Ven, 2010; Erat & Gneezy, 2009; Fischbacher & Heusi, 2008; Gneezy, 2005). 
Our model incorporates ideas from both fields, not only with the aim of better understanding how communication affects behavior, but also to improve our knowledge on the above mentioned motivations, which could help to account for behavior related to accounting, auditing, insurance, job interviews, labor negotiations, regulatory hearings, and tax compliance, to provide a few examples. The paper is organized as follows. To better appreciate the interaction between the three types of players, the next section introduces a simple model in which there are only two types: honest and selfish. We prove that this model is not consistent with the robust phenomenon cited before, by which pre-play communication fosters cooperation in social dilemmas (Sally, 1995). For this reason, we extend the model in Section 3, introducing the EH types. Using this extended model, Section 4 presents our main results. Section 5 compares our account with alternative explanations, and Section 6 concludes by mentioning some possible extensions of the model together with some potential experiments where it could be put to test.

2. A model of honesty

Consider any n-player, extensive form game of perfect recall. Let N = {1, . . . , n} denote the set of players, z a terminal node, h an information set, and A(h) the set of available actions or moves at h. Further, let ui(z) denote player i’s utility payoff at z, and xi(z) player i’s monetary payoff at z. At some h, a player may be given the opportunity to communicate – i.e., send a message to other players. Given our focus on pre-play communication, we assume that players only exchange messages about future moves. More precisely, the messages available at h are announcements of actions [a(h1), . . . , a(hk)], where h1, . . . , hk are information sets following h and a(hj) ∈ A(hj) for any j ∈ {1, . . . , k}. A message announcing action a ∈ A(hj) means ‘the mover at hj would choose a if she had to move at hj’. Observe that players are allowed to announce their own future moves and/or other players’ potential moves;

2 The idea that agents are heterogeneous in their pro-sociality is consistent with a large body of experimental data. To start, the evidence from social dilemmas without pre-play communication (Brandts & Schram, 2001; Croson, 2000, 2007; Fischbacher & Gächter, 2010; Fischbacher, Gächter, & Fehr, 2001; Neugebauer, Perote, Schmidt, & Loos, 2009) points out that some subjects are conditional cooperators who cooperate if they expect others to cooperate as well, while remaining subjects tend not to cooperate. In addition, recent lab evidence shows as well that subjects differ in their propensity to tell the truth (Gneezy, 2005; Hurkens & Kartik, 2009).


yet we will see that this latter type of messages cannot affect players’ best-responses. Unless otherwise noted, we posit that (i) sending a message has no monetary cost, (ii) a communicator can always keep silent if she wishes, and (iii) players share a common language and communication occurs without noise – i.e., if one player receives a message from another, it is common knowledge that both interpret the message in the same way. For the moment, we consider two types of players: Honest (H) and Selfish (we introduce an additional type in Section 3). Selfish players are risk neutral money-maximizers with utility function ui(z) = xi(z). In contrast, the honest types care about the money earned xi(z) but also about one norm. To formalize this, we adopt the following definition.

Definition 1 (Norm). A norm is a rule w that assigns a nonempty subset of A(h) to any information set h of the game except Nature’s.

We see a norm as a statement indicating how one ought to behave in a game – this is not unusual in the literature on norms; see López-Pérez (2010) for a review. More precisely, we view the actions selected by w at h as the behavior recommended or required by that norm at h. Therefore, the mover at h is said to respect norm w at that information set if she chooses a move selected by w, and to deviate from w otherwise. The norm that the honest types care about is a norm of honesty, that is, it recommends telling the truth when communicating. More precisely, this norm permits players to send any message, but then subsequently to choose only those actions that correspond to previous declarations.

Definition 2 (The H-norm). At any h where a player communicates, this rule selects silence and any of the available messages. At any other h, it selects action a ∈ A(h) if the mover announced a previously, and the whole set A(h) otherwise.

We assume that the honest types have internalized the H-norm, and suffer a psychological cost (guilt, shame) if they deviate from it.
This means that their utility function is ui(z) = xi(z) − I(z)·c(z), where I(z) is an indicator taking value 1 if the player deviated from the H-norm in the history of z, and value 0 otherwise; and c(z) is a positive function representing the psychological cost of breaking the norm – we will later introduce some structure into function c(z), but for the moment we do not need to be more specific. Finally, we posit that players’ types are private information and use Perfect Bayesian Equilibrium (PBE) as our solution concept. A PBE consists of a probability assessment (beliefs) over the nodes of each player’s information sets and a strategy profile. At any information set, assessments reflect what the corresponding mover believes has happened before. They must be, to the extent possible, consistent with Bayesian updating on the hypothesis that the equilibrium strategies have been used to date – to simplify exposition, however, we do not explicitly report beliefs unless they can affect behavior. In addition, any player’s strategy in a PBE must be sequentially rational. That is, she must choose optimally at any of her information sets given her beliefs at that set and the fact that future play will be governed by the equilibrium strategies.

2.1. Application: simultaneous social dilemmas with pre-play communication

In the simplest version of a social dilemma, players choose simultaneously between two strategies (cooperation and defection), with the following features: (1) cooperation generates a positive externality on the other players so that aggregate welfare is maximized if all players cooperate, and (2) cooperation is strictly dominated by defection in terms of monetary payoffs. The Prisoner’s Dilemma (PD) and many public good games fit this description. Consider now a two-stage social dilemma consisting of a communication and an action stage (the combination of both stages is henceforth called the entire game).
In the first stage, players exchange messages following a given communication protocol, while the social dilemma game proper is played in the second stage. There are many types of protocols; three examples (of increasing complexity) in the context of the PD are the following: (1) Row sends a single message to Column, who cannot reply; (2) both players send a single message to each other simultaneously; and (3) Row sends first a message, Column observes it and then sends her own message to Row, who can in turn reply with a final message. For simplicity, we assume that the structure of the protocol cannot be chosen by the players (as is the rule in many experiments). That is, players cannot decide on things like who sends messages, who receives them, the order in which messages are sent, etc. We also assume that a player cannot announce moves conditional on another player’s announcement; promises such as ‘I promise to cooperate if you promise to cooperate as well’ are not possible. This is a rather stringent restriction on the set of messages available, but we will motivate it later. As mentioned in the introduction, multiple experiments show that pre-play communication fosters cooperation in social dilemmas. Can our model replicate this phenomenon? The answer is negative: Whatever the (finite) protocol used in the communication stage and the positive function c(z) considered, players should always defect in equilibrium. Proposition 1. If the communication protocol is finite and there are only honest and selfish types, players never cooperate along any equilibrium path of the entire game.


Proof. To check for sequential rationality, we follow a line of reasoning akin to backward induction. First consider behavior in the action stage. Trivially, selfish players never cooperate here, whatever they announced before. Similarly, an honest type should defect if she announced defection or kept silent in the communication stage. In contrast, she may cooperate if she previously announced that she would,3 and lying induces a sufficiently large psychological cost. More precisely, consider any continuation game such that player i previously announced to cooperate, and let xi(D) and xi(C) respectively denote the monetary payoff from defection and cooperation given the other players’ moves. For player i to honor her word, the cost of lying c(D) must be such that xi(D) − c(D) ≤ xi(C). However, the honest types will not make such announcements in equilibrium. To prove this, we move backwards and consider an information set h(L) of the communication stage such that the mover (call her L) and the corresponding message receivers cannot send a message at any subsequent information set; thus the message sent at h(L) cannot affect communication afterwards. There must be at least one L because the communication protocol is finite (there may be several such players if the protocol allows for simultaneous communication). At any such h(L), the optimal message for L must be to announce defection (silence is also optimal), for three reasons. First, no message sent at h(L) can shape other players’ best responses in the action stage, as conditional promises are not feasible and the social dilemma is played simultaneously (see below for some intuition on this). Second, a player does not deviate from the H-norm if she announces defection, thus suffering no psychological cost. Finally, defection maximizes the monetary payoff in the social dilemma. Consider now the information set immediately prior to any h(L).
Messages sent here cannot affect the behavior of L, as she always announces defection (or keeps silent). Hence, our previous reasoning for h(L) applies here as well, so that the optimal announcement is defection (or silence). This logic can be extended to any prior information set, including those at which the mover sends a message for the first time. In summary, announcing cooperation is never optimal for an honest player, which implies that cooperation is never optimal along the equilibrium path. □

There are three key intuitions behind Proposition 1. First, the H-norm recommends telling the truth, that is, consistency between actions and promises. As far as it is consistent, therefore, any behavior is permitted by this norm. This means that cooperation has no value per se for a purely honest type, so that she suffers no cost by announcing defection in the communication stage and defecting afterwards. Second, the social dilemma is played simultaneously in the action stage, so that no message sender can announce behavior conditional on the other players’ choices (e.g., ‘I cooperate if you cooperate’). This is related to the third intuition, that is, players cannot commit to cooperate conditional on other players’ committing as well, because conditional promises are not possible.

The main message from Proposition 1 is that lie-aversion is not enough to explain why pre-play communication fosters cooperation in social dilemmas. Certainly this result focuses on a restricted setting characterized by simultaneous dilemmas and communication protocols with no conditional promises, but we know from many experiments that communication improves cooperation even in these cases. For instance, cooperation increases in a PD even if only one player can communicate and hence cannot make conditional promises; see Duffy and Feltovich (2002). We require a more complex model in order to account for such evidence.
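Proposition 1’s backward-induction logic can be illustrated numerically. The sketch below is our own illustration, not part of the paper: the payoff values and the function `h_sender_utility` are hypothetical. It enumerates an honest sender’s message–action plans in a one-way-communication PD, against a receiver who defects whatever she hears (defection is dominant and the H-norm does not bind a player who sends no message):

```python
# Sketch: Proposition 1 in a one-way-communication PD with only honest
# and selfish types. The receiver always defects, so we enumerate the
# honest sender's plans. gamma is the norm-cost scalar; r = 1/2 because
# the mute receiver trivially complies with the H-norm.
t, c, d, gamma = 7, 5, 2, 6   # hypothetical values with t > c > d > 0
assert t > c > d > 0

payoff = {("C", "D"): 0, ("D", "D"): d}   # sender's money vs. a defecting receiver

def h_sender_utility(message, action):
    """Honest sender's utility: money minus gamma/2 if she broke her word."""
    lied = message in ("C", "D") and message != action
    return payoff[(action, "D")] - (gamma / 2 if lied else 0)

plans = {(m, a): h_sender_utility(m, a)
         for m in ("C", "D", "silence") for a in ("C", "D")}
best = max(plans, key=plans.get)
print(best, plans[best])   # → ('D', 'D'): announce defection and defect, earning d
```

Announcing ‘C’ and cooperating earns 0 < d, and announcing ‘C’ then defecting earns d − γ/2 < d, so the honest sender never announces cooperation, exactly as the proof argues.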
In the rest of the paper, we extend the model by adding one key hypothesis, that is, a fraction of the honest types care as well about fairness.

3. A model of honesty and fairness

Hence, we introduce in our prior model a third type of player, called EH, and respectively denote by μ and ρ the probability of being an EH and an honest type; the existence of selfish agents requires μ + ρ < 1. Like the honest types, the EH types care about a norm; for this reason, we refer to both types as principled. However, they have internalized a different norm: The EH-norm. Broadly speaking, this is a norm of both fairness (which implies cooperation) and honesty. To define it more precisely, we must introduce an additional concept. Let t0 denote any initial decision node of the game – i.e., any node immediately following Nature’s moves (if any) – and X(t0) denote the set of all allocations of monetary payoffs that succeed t0, which we assume to be a compact set.

Definition 3 (E-allocation). Allocation x = (x1, . . . , xn) ∈ X(t0) is an (Efficient and Egalitarian) E-allocation of t0 if it maximizes the function

F_E(x) = Σ_{i∈N} xi − δ·(max_{i∈N} xi − min_{i∈N} xi)    (1)

over X(t0), where 0 < δ < 1. A path connecting node t0 and one of its E-allocations is an E-path of the game. An E-action is an action that belongs to at least one E-path.

3 If the protocol is such that a player can communicate at different moments of time, it is logically possible that a player announces cooperation at some moment, and defection at another moment. In this case, defection would be the optimal choice at the action stage, as the H-norm only asks for consistency with a prior message. To be precise, therefore, cooperation requires that the player only announced cooperation before. This is implicit in the rest of the proof. Note also that the only relevant messages are those that announce the sender’s future intentions. In contrast, messages announcing a co-player’s future move (which could be interpreted as guesses about what that player will choose) have no effect on the players’ utility: Given the structure of the H-norm, they are pure cheap talk.
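To make Definition 3 concrete, here is a minimal sketch (our own illustration; the function names and payoff values are hypothetical, not from the paper) that computes the E-allocations of a finite allocation set by maximizing function (1):

```python
# Sketch: E-allocations (Definition 3) over a finite set of monetary
# allocation profiles. delta in (0, 1) trades off the social surplus
# against the max-min payoff gap, as in Eq. (1).

def F_E(x, delta=0.5):
    """Efficiency-minus-inequity score of allocation x, Eq. (1)."""
    return sum(x) - delta * (max(x) - min(x))

def e_allocations(X, delta=0.5):
    """Allocations in X that maximize F_E, i.e., the E-allocations."""
    best = max(F_E(x, delta) for x in X)
    return [x for x in X if F_E(x, delta) == best]

# PD allocations with t = 7, c = 5, d = 2: mutual cooperation (5, 5)
# maximizes F_E since 2c > t gives it the largest surplus and zero gap.
X = [(5, 5), (7, 0), (0, 7), (2, 2)]
print(e_allocations(X))  # [(5, 5)]
```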


Definition 4 (The EH-norm). At any h where the mover communicates, this rule selects silence and any message, with one condition: Messages announcing the mover’s future choice at any information set in an E-path must announce an E-action. At any other h, the norm selects action a ∈ A(h) if the mover announced a previously, and any E-action of h otherwise – if there is no E-action, the norm selects the whole set A(h).

The EH-norm is more restrictive than the H-norm, as it recommends not only honesty, but also that players try to implement an E-allocation (hence the EH name); we motivate this norm at the end of this section. Further, the EH-norm asks for moral coherence, as messages announcing the sender’s future move (i.e., her intentions) must announce E-actions. This point seems natural: If E-actions are the ‘right’ actions and moreover announcements are morally binding, one should not announce something different.

Apart from adding a third type of player, there is a second change from the model of Section 2, as we will specify a precise form for the cost c(z) of breaking an internalized norm. For this, let γ denote a strictly positive scalar and r(w, z) ∈ [0, 1] denote the overall proportion of players who respect norm w if they follow the path to z. We posit that the utility function of any principled type is ui(z) = xi(z) − γ·r(w, z)·I(w, z), where w denotes the EH-norm/H-norm if the player is an EH/honest type, and I(·) is an indicator taking value 1 if the player deviated from norm w in the history of z, and value 0 otherwise. In other words: The strength of the cost c(z) depends positively on the proportion of players who respect the norm. We make four remarks. First, López-Pérez (2010) offers a deeper account of this assumption in terms of the psychology of shame, an emotion that is strongly correlated with inferiority feelings.
Second, this hypothesis implies that principled types are more likely to respect their own norm if they expect sufficiently many co-players to comply as well. Much lab evidence supports the idea that humans often behave in a reciprocal manner; consult Fehr and Gächter (2000). Third, our results do not depend on the linear formulation of the cost; what is essential is that the cost strictly increases with r(w, z). Finally, we note that the cost is null if nobody complies with the norm, that is, there is no preference for norm compliance per se. Although adding an unconditional cost in our model would be straightforward, we also note that some evidence appears to be at odds with the idea of a (significant) fixed cost; see for instance Miettinen and Suetens (2008).4

Two aspects of our model merit some final discussion. First, why do we posit the EH and H-norms and not other norms? The reason is twofold. On one hand, both norms are relatively simple. On the other hand, the EH-norm is in line with much laboratory evidence from games without communication, particularly from dictator games. A potential reason is that this norm takes into account both (social) efficiency and equity. In effect, note first that any E-allocation is Pareto efficient, as one can prove by contradiction from condition 0 < δ < 1. Furthermore, function (1) depends positively on the social surplus and negatively on the maximal payoff distance, which is a measure of inequity, particularly in two-player games. López-Pérez (2008) discusses at length the empirical relevance of function (1) and some alternatives. We stress that some of them can also explain the evidence considered – e.g., a function incorporating efficiency and maximin concerns, or one including a more refined measure of inequity in multiple-player games.
Unfortunately, we are unaware of any controlled evidence to discriminate between an account based on function (1) and those alternatives; whenever such evidence is available, it will be possible to fine-tune the function, if necessary. Second, why do we assume three types of players? Since the complexity of the model increases with the number of types, it seems sensible to limit that number to the minimum possible. As we will argue, however, a model with two types is not enough to explain key phenomena. Further, a within-subjects study by Hurkens and Kartik (2009) offers evidence in line with our hypotheses that some players are selfish, others care about fairness and honesty, and others only about honesty. Subjects in this study participated in a binary dictator game and in a payoff-equivalent sender-receiver game. In their treatment 4, for instance, the (dictator, recipient) allocations were (4, 12) and (5, 4), and the authors report that 32.7% of the subjects chose (5, 4) in the dictator game and lied in the sender-receiver game, 43.1% chose (5, 4) but told the truth, and 17.2% chose (4, 12) and told the truth. These decision patterns are consistent with the predictions for selfish, H, and EH types respectively – although our model cannot explain why the remaining 7% of subjects chose (4, 12) but lied. We note anyway that additional within-subjects studies would be welcome, as our ‘3-types’ hypothesis is crucial for our results.

4. Applications

This section illustrates three important predictions of our model: (i) under certain parameter conditions, pre-play communication increases cooperation in social dilemmas, (ii) the net effect of communication on cooperation depends on such details of the communication protocol as the number of message senders, or the order in which they communicate, and (iii) the effectiveness of communication also depends on details of the action stage, such as the order of play (e.g., simultaneous vs. sequential).
We also show the model to be in line with experimental evidence, when available. For simplicity, we focus on two-player social dilemmas, postponing the study of games with multiple players for further research.

4 Observe also that any passive player making no choice complies with any norm by definition. Thus, the cost of breaking a norm increases with the number of passive players. This could be easily avoided by assuming that the cost only depends on the share of active players who comply, although this is immaterial for our analysis here.
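Before turning to the applications, the principled-type utility ui(z) = xi(z) − γ·r(w, z)·I(w, z) of Section 3 can be written as a one-line function. The sketch below is our own illustration with hypothetical names and values; it also shows the reciprocal feature that the cost vanishes when nobody else complies:

```python
# Sketch: utility of a principled player (Section 3). gamma > 0 scales
# the psychological cost, r is the share of players complying with the
# player's internalized norm along the path, and deviated is I(w, z).

def principled_utility(x, gamma, r, deviated):
    """u_i(z) = x_i(z) - gamma * r(w, z) * I(w, z)."""
    assert gamma > 0 and 0.0 <= r <= 1.0
    return x - gamma * r * (1 if deviated else 0)

# A deviating player loses more when most others comply (high r)...
print(principled_utility(7, gamma=4, r=0.5, deviated=True))   # 5.0
# ...and loses nothing when nobody complies: no fixed cost per se.
print(principled_utility(7, gamma=4, r=0.0, deviated=True))   # 7.0
```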

Table 1. Monetary payoffs in the PD.

        C       D
C     c, c    0, t
D     t, 0    d, d

Table 2. Row’s utility payoffs if she is EH.

        C         D
C       c         0
D     t − γ/2     d
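Table 2’s entries follow from Table 1 plus the norm cost: an EH Row who defects deviates from the EH-norm, and pays γ·(1/2) exactly when Column cooperates (and hence complies). A small sketch (our own check, with hypothetical payoff values) reconstructs the table:

```python
# Sketch: reconstructing Table 2 (an EH Row's utilities in the PD) from
# Table 1's monetary payoffs and the norm cost gamma * r. Cooperation is
# the only E-action (since 2c > t), so Row deviates iff she defects, and
# the compliance share r is 1/2 iff Column cooperates, 0 otherwise.
t, c, d, gamma = 7, 5, 2, 4   # hypothetical values with t > c > d > 0, 2c > t
assert t > c > d > 0 and 2 * c > t

def eh_row_utility(row, col):
    money = {("C", "C"): c, ("C", "D"): 0, ("D", "C"): t, ("D", "D"): d}[(row, col)]
    r = 0.5 if col == "C" else 0.0   # co-player's EH-norm compliance share
    deviated = (row == "D")          # defecting deviates from the EH-norm
    return money - gamma * r * deviated

table2 = {(r_, c_): eh_row_utility(r_, c_) for r_ in "CD" for c_ in "CD"}
print(table2)  # entries c, 0, t - gamma/2 and d, matching Table 2
```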

4.1. Pre-play communication increases cooperation in the Prisoner’s Dilemma

Table 1 depicts monetary payoffs in the Prisoner’s Dilemma (PD) lab game. Both players earn c monetary units if they cooperate (C), and d if they defect (D). Further, a unilateral defector gets a ‘temptation’ payment of t while a unilateral cooperator gets a normalized payoff of zero. Payoffs satisfy t > c > d > 0 so that defection strictly dominates cooperation in monetary terms, and 2c > t so that mutual cooperation is socially efficient. To show that pre-play communication can increase cooperation in the PD, we first study the benchmark level of cooperation when communication is not available. Let μ_sim and μ_seq denote two scalars in the interval [0, 1], defined below.

Observation 1. If μ ≥ μ_sim and γ/2 ≥ (t − c), the PD has a PBE in which the EH types cooperate while other types defect.

Proof. Clearly, the selfish types find defection optimal. Further, a player cannot deviate from the H-norm if she cannot communicate (this norm does not restrict behavior then; recall Definition 2). As a result, the utility of an honest type collapses to that of a selfish type and she finds defection optimal as well. With respect to the EH types, condition 2c > t implies that (c, c) is the only E-allocation of the game and cooperation the only E-action. Consequently, an EH type finds cooperation optimal if (see Table 2 for clarification):

μ·c + (1 − μ)·0 ≥ μ·(t − γ/2) + (1 − μ)·d,    (2)

that is, if μ is larger than the threshold

μ_sim = d / (c − t + d + γ/2)    (3)

which is smaller than or equal to one, and hence a proper probability, only if γ/2 ≥ (t − c). □

This ‘cooperative’ PBE is not the unique PBE: For any value of μ, there exists another PBE where all types defect. However, this leads to a lower expected payoff than the cooperative equilibrium for all types if μ ≥ d/c = μ_seq. That is, in this case the cooperative PBE payoff-dominates the equilibrium with unanimous defection. This point is relevant if one assumes, in line with Harsanyi and Selten (1988), that payoff-dominance can help players to coordinate on a specific equilibrium. This selection criterion is simple, makes our model more precise and, as we will see, is not inconsistent with the evidence from social dilemmas with pre-play talk mentioned later. The reader should be aware, however, that some evidence from coordination games without pre-play communication goes against it. In the Stag Hunt Game, for instance, Cooper, DeJong, Forsythe, and Ross (1992) report almost nil play of the payoff-dominant equilibrium, while Charness (2000) and Duffy and Feltovich (2002) report frequencies of play of around 35% and 60%, respectively. We also note that all these studies show an increase in payoff-dominant play when pre-play communication is allowed.

The equilibrium described in Observation 1 is in line with some robust experimental phenomena. First, some agents cooperate in one-shot social dilemmas; consult the meta-analysis by Sally (1995). Second, cooperation in our model requires at least a mass μ_sim of EH types. In other words, people only cooperate if the co-player is likely to cooperate as well, a prediction well supported by numerous experiments – see again Sally (1995) or Croson (2000). Finally, μ_sim decreases with c and increases with t and d. Since cooperation is hindered as μ_sim increases, the model consequently forecasts a direct (inverse) relation between the cooperation rate and c (t and d); again in line with the evidence – Rapoport and Chammah (1965, pp. 36–39), Clark and Sefton (2001). This has an interpretation in terms of a law of demand: Cooperation decreases when its price increases – observe that the expected price of cooperation μ·(t − c) + (1 − μ)·d depends negatively on c and positively on t and d.

According to Observation 1, μ is the highest possible cooperation rate in the PD without communication. What if players can communicate prior to playing the PD? To explore this question, we first consider the simplest communication protocol:
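The threshold in Eq. (3) and condition (2) can be verified numerically. The following sketch is our own check, with hypothetical parameter values satisfying t > c > d > 0, 2c > t and γ/2 ≥ t − c; it computes μ_sim and evaluates the EH type’s cooperation condition on each side of the threshold:

```python
# Sketch: the threshold mu_sim of Eq. (3) and the EH type's cooperation
# condition (2). Parameter values are illustrative only.
t, c, d, gamma = 7, 5, 2, 6
assert t > c > d > 0 and 2 * c > t and gamma / 2 >= t - c

mu_sim = d / (c - t + d + gamma / 2)   # Eq. (3); here 2/3

def eh_cooperates(mu):
    """Condition (2): expected utility of C weakly exceeds that of D."""
    return mu * c + (1 - mu) * 0 >= mu * (t - gamma / 2) + (1 - mu) * d

print(round(mu_sim, 3))                        # 0.667
print(eh_cooperates(0.7), eh_cooperates(0.4))  # True False
```

Raising c (or lowering t or d) shrinks μ_sim, reproducing the comparative statics discussed above.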


Message ‘C’

Silence

‘D’

C

D

C

D

C

D

C

c

0

C

c - γ /2

0

C

c

0

D

t - γ /2

d

D

t - γ /2

d

D

t - γ /2

d

C

D

C

D

C

D

C

c

0

C

c - γ /2

- γ /2

C

c

0

D

t - γ /2

d- γ /2

D

t

d

D

t

d

Fig. 1. Utility payoffs of the message sender for any possible strategy profile (the three upper matrices depict an EH sender’s payoffs, while the lower three ones depict an honest sender’s ones).

Unilateral or one-way communication. That is, before playing the PD, one of the players (the sender) can send a message announcing her future move to the other player (the receiver). Since the two messages available are 'I will play C' and 'I will play D', we refer to them simply as 'C' and 'D'. To apply the model, note that the EH-norm selects silence or message 'C' in the communication stage – message 'D' is a deviation because D is not an E-action. In the action stage, in turn, the EH-norm selects action C for the sender if she announced 'C' or kept silent, and action D if she announced 'D',5 whereas the receiver should always move C. Finally, the H-norm selects any message but requires players to act accordingly later. If the sender keeps silent, however, the H-norm selects any action (this applies to the receiver as well). Taking all this into account, Fig. 1 depicts utility payoffs for an EH and an honest sender and for any possible strategy profile. The tree indicates available messages, while the matrices below specify the sender's payoffs for each possible message and combination of players' choices in the action stage (in the payoff matrices, the sender is assumed without loss of generality to be the Row player; further, the three upper matrices correspond to an EH sender, the three lower ones to an honest sender). For example, an honest sender always suffers a cost γ/2 if she does not act as previously announced – note that the receiver always complies with the H-norm because he cannot communicate, hence r(w, z) ≥ 0.5 always for this norm.

Observation 2. If μ ≥ max{μsim, μseq} and γ/2 ≥ (t − c), the PD with one-way communication has a PBE where (i) all types of senders announce 'C', (ii) principled senders and EH receivers cooperate afterwards, (iii) honest receivers and selfish senders and receivers always defect, and (iv) any principled type defects if message 'C' was not sent/received.

Proof.
As defection is strictly dominant in monetary terms, point (iii) is trivial – note that an honest receiver gets the same utility payoffs as a selfish type because she is silent. Point (iv), in turn, follows from Fig. 1 and the equilibrium strategies. Note that these predictions are independent of the beliefs off the equilibrium path. Consider now point (ii). An EH sender who announced 'C' gets an expected payoff of μ·c if she cooperates afterwards and of μ·(t − γ/2) + (1 − μ)·d if she defects. This is because only an EH co-player cooperates then – see the upper, left-hand matrix in Fig. 1 for further clarification. We hence conclude that cooperation is optimal if μ ≥ μsim – as in Eq. (3) above. The argument is similar for an EH receiver who received 'C'. Since both EH and honest senders are expected to cooperate, however, we now require q + μ ≥ μsim for cooperation to be optimal. In turn, inspection of Fig. 1 also indicates that an honest sender who announced 'C' would rather honor her word, or is just indifferent, if

μ·c + (1 − μ)·0 ≥ μ·(t − γ/2) + (1 − μ)·(d − γ/2),

which necessarily holds if μ ≥ μsim and γ/2 ≥ (t − c). Finally, point (i) follows for two reasons. Given the equilibrium strategies, first, any principled sender finds 'C' better than 'D' or silence if μ ≥ d/c = μseq. Second, selfish senders mimic the other types' announcement to prevent signaling their type. □
There are several intuitions here. Selfish senders seek to maximize their own money payoff, and hence defect independently of the message sent. In contrast, the principled senders care about honesty and cooperate if they announced they would (provided that cooperation is not too costly). When do they announce that? Clearly, this is only optimal if the receiver is expected to cooperate as well. Yet the only receivers who might cooperate are the conditional cooperators
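The honest sender's truth-telling condition above can be verified numerically over the whole relevant range of priors; the sweep below uses hypothetical parameter values satisfying γ/2 ≥ t − c.

```python
# A numeric sanity check (hypothetical values) of the honest sender's
# truth-telling condition in the proof of Observation 2:
# mu*c + (1 - mu)*0 >= mu*(t - gamma/2) + (1 - mu)*(d - gamma/2),
# which should hold whenever mu >= mu_sim and gamma/2 >= t - c.

c, t, d, gamma = 3.0, 4.0, 1.0, 2.5        # gamma/2 = 1.25 >= t - c = 1.0
mu_sim = d / (c - (t - gamma / 2) + d)      # EH indifference threshold

for k in range(101):                        # sweep mu over [mu_sim, 1]
    mu = mu_sim + (1 - mu_sim) * k / 100
    keep_word = mu * c
    lie = mu * (t - gamma / 2) + (1 - mu) * (d - gamma / 2)
    assert keep_word >= lie - 1e-12         # honoring 'C' is (weakly) optimal
```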

5 Note that the EH-norm is defined so that honesty ‘overtakes’ fairness if a player happens to announce a non-E-action. Since such an announcement is already a deviation from the EH-norm, however, any posterior recommendation of play by the EH-norm has no effect on utility, and is simply mentioned for completeness.


– i.e., the EH types. Consequently, the principled senders commit themselves to cooperate only if the share μ of EH types is large enough. Since cooperative messages are likely to be truthful in this case, EH receivers reciprocate and cooperate as well.
One might be tempted to conclude that one-way communication fosters cooperation with respect to the case with no communication if μ ≥ max{μsim, μseq} and γ/2 ≥ (t − c). However, the question is not so simple because the 'cooperative' equilibrium of Observation 2 is not unique. To start, consider any strategy profile characterized as follows: (i) in the communication stage, the EH senders choose silence, the honest senders choose 'D' or silence, and the selfish senders choose any message or silence; (ii) whatever the message sent/received, all types of players defect in the action stage, except the honest senders, who cooperate if they announced 'C' and d < γ/2. Provided that beliefs off the equilibrium path are conveniently chosen (e.g., if the sender announces 'C', she is not an honest type), any such profile is a PBE whatever the parameter constellation – the reader can prove this with the help of Fig. 1. We refer to these equilibria henceforth as equilibria with 'unanimous defection'. There also exists one more equilibrium, a subtle variant of the previous ones, in which all types keep silent and then the EH types cooperate; this 'silent' equilibrium exists only if μ ≥ μsim (as in Observation 1).6
Given this multiplicity of equilibria, we cannot claim that one-way communication increases the cooperation rate for sure, even if the conditions μ ≥ max{μsim, μseq} and γ/2 ≥ (t − c) hold. It is logically possible that players coordinate on the equilibrium of Observation 1 when playing the PD without communication, thus achieving a cooperation rate of μ, but unanimously defect when one-way communication is available. To prevent this kind of paradox, one can use the payoff-dominance criterion.
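The payoff-dominance comparison can be made concrete. In the sketch below the parameter values are hypothetical and the receiver-side entries are our own reading of the equilibrium strategies of Observation 2 (the sender-side expected payoffs μ·c and μ·t + (1 − μ)·d are quoted in its proof); each type's expected payoff in the cooperative PBE is compared with the payoff d obtained under unanimous defection.

```python
# A sketch (hypothetical values with mu >= d/c = mu_seq) comparing every
# type's expected payoff in the cooperative PBE of Observation 2 against
# the common payoff d in an equilibrium with unanimous defection.

c, t, d, mu, q = 3.0, 4.0, 1.0, 0.5, 0.3

coop = {
    "EH sender": mu * c,                        # keeps her word; only EH receivers reciprocate
    "honest sender": mu * c,
    "selfish sender": mu * t + (1 - mu) * d,    # defects after announcing 'C'
    "EH receiver": (mu + q) * c,                # principled senders keep their word
    "honest receiver": (mu + q) * t + (1 - mu - q) * d,   # defects against senders
    "selfish receiver": (mu + q) * t + (1 - mu - q) * d,
}
assert mu >= d / c                              # the condition mu >= mu_seq
assert all(v >= d for v in coop.values())       # cooperative PBE payoff-dominates
```

The binding comparison is the principled senders' μ·c ≥ d, i.e., exactly μ ≥ d/c = μseq as stated in the text.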
In effect, the cooperative equilibrium of the PD with one-way communication payoff-dominates the equilibria with unanimous defection cited before if μ ≥ d/c = μseq. Hence, this selection principle leaves us with the cooperative and the silent equilibria, thus ensuring that the cooperation level with one-way communication is never lower than μ. Observe that payoff-dominance cannot rule out the silent equilibrium, as honest senders are better off there than in the cooperative equilibrium (the other types are better off in the cooperative equilibrium). However, we can rule it out directly by requiring the sender to send a message (i.e., removing the option of silence). In this case, the equilibrium in Observation 2 is the only refined one. Since honest senders will in this case find cooperation optimal, average cooperation is unequivocally strictly higher than in the PD with no communication. The predicted rise in cooperation when communication is introduced is in line with the available experimental evidence. For instance, Duffy and Feltovich (2002) report an increase in the cooperation rate from 22% to 40% (in their experiment, subjects played the PD ten times against different opponents; silence was not an option). Further, the equilibrium in Observation 2 shows that cooperation crucially depends on the content of the message: Nobody cooperates after sending or receiving a 'D' message, while some types cooperate after sending or receiving a 'C' message. In line with this, Duffy and Feltovich (2002) report that senders often announce truthful messages ('C' messages are truthful half of the time, and 'D' messages 85% of the time). Further, receivers condition their actions on the message they receive – i.e., they cooperate significantly more after receiving message 'C' than after receiving 'D' (50.4% vs. 16.1%).
Yet we note that both senders and receivers cooperated in the same proportion in this experiment, a point at odds with our model, which predicts lower cooperation from receivers. As another remark on the experimental evidence, observe that both thresholds μsim and μseq depend negatively on the monetary payoff c and positively on d. As a result, an increase in cooperation becomes more unlikely if, say, the difference c − d decreases (this is also the case if PD players cannot communicate; see Observation 1). This might explain the results in Charness (2000) from a PD experiment with one-way communication. The payoff calibration was such that c − d was arguably small, and Charness reports that, although most senders announced cooperation, most senders and receivers defected afterwards. Finally, we insist on the role played by the '3-types' assumption in accounting for the lab evidence. If we dropped one of the types in the model and kept the other two, we could not replicate the above mentioned facts. In effect, the proofs make clear that (1) the EH types are necessary to explain why some people cooperate even without communication, (2) the honest types are necessary to explain why some people cooperate only when pre-play communication is available, and (3) the selfish types account for the fact that some people never cooperate.7
Our previous discussion shows that, under certain conditions, one-way communication fosters cooperation, but one can think of alternative communication protocols. Do they foster cooperation as well? To study this point, we next apply our model to the PD with bilateral (two-way), simultaneous pre-play communication – i.e., both players send a message simultaneously. In the communication stage of this game, the EH-norm selects message 'C' and silence for each player, whereas the H-norm selects any message and silence for each player. Both norms require players to honor prior announcements.
Further, if one player keeps silent, the EH-norm requires her to cooperate afterwards, whereas the H-norm selects any move.

6 Observe that all types should pool messages (or silence) in equilibrium if at least one type cooperates. In effect, if the EH types cooperate, the other types should mimic the EH types' message in order to prevent signaling their type (an EH player will not cooperate if she believes that the co-player is not going to cooperate). If only the honest types cooperate (this is possible if they announce 'C' and the EH types announce 'D'), this is not a stable situation, as they can then improve their payoff by announcing defection or nothing (see Proposition 1).
7 The reader may wonder about the importance of the hypothesis that the EH types care about fairness and honesty; could we get similar results if they just cared about fairness? The answer is negative: Without that hypothesis, there would exist more equilibria than those cited before, and some would be rather implausible in view of the experimental evidence (e.g., all players announce 'D' and then the EH types cooperate). In short, this hypothesis reduces the number of equilibria, ruling out some implausible ones. Note also that the evidence from Hurkens and Kartik (2009) cited in Section 3 is in line with our hypothesis but not with the alternative mentioned.

Table 3
Row's payoffs if she is honest and both players previously announced 'C'.

        C          D
C       c          0
D       t − γ/2    d

As we did for the game with one-way communication, one can draw the game tree of this new game. We leave this to the interested reader, and just note the following: If both players can send messages, the payoffs of any principled player depend on her own message and the co-player's. As an illustration, Table 3 depicts Row's utility payoffs if she is honest and both players previously announced their intention to cooperate. We can see that Row suffers a utility cost if she is the only defector, but not if both players defect. The reason is that, although defection constitutes a deviation from the H-norm if one previously announced cooperation (it is a lie), Row suffers no cost if the co-player deviates (i.e., lies) as well, due to the reciprocal character of her preferences. The possibility of mutual deception is a key difference between two- and one-way communication, where the co-player is silent and hence cannot deviate – indeed, Table 3 differs from the matrix in Fig. 1 after an honest sender announces 'C'.

Observation 3. If μ + q ≥ max{μsim, μseq + (t − d)·q/c} and γ/2 ≥ (t − c), the PD with two-way communication has an equilibrium in which any principled player cooperates. In this equilibrium (i) all types announce 'C', (ii) principled types cooperate afterwards, (iii) selfish types always defect, (iv) principled types defect if they choose 'D' or if no player chooses 'C', and (v) under appropriate conditions, only the honest types cooperate if they announce 'C' alone.

Proof. Point (iii) is trivial. With respect to point (ii), suppose that both players announced 'C'. Taking into account the equilibrium strategies, an EH type gets an expected payoff of (μ + q)·c if she cooperates in the action stage and of (μ + q)·(t − γ/2) + (1 − μ − q)·d if she defects – note that no psychological cost is suffered if both players mutually defect, that is, if there is mutual deception. We hence conclude that cooperation is optimal if μ + q is larger than μsim. For an honest type, in turn, cooperation is optimal if (μ + q)·c ≥ (μ + q)·(t − γ/2) + (1 − μ − q)·d, so that again μ + q ≥ μsim is required. Point (iv) is trivial given the equilibrium strategies. For (v), consider first a silent EH player who receives 'C'. Given (v), she will cooperate if q ≥ μsim. As a result, an honest, single 'C' sender will cooperate if γ/2 ≥ d and q < μsim, and for even lower values of γ if q ≥ μsim. With respect to an EH player who announces 'C' alone, cooperation would be optimal for her only if the co-player kept silent and out-of-equilibrium beliefs were such that μ > μsim. We rule out this possibility as point (i) would not hold then. To prove point (i), finally, assume that everybody announces 'C'. The type who can profit most from a unilateral deviation is an honest type, in case she chooses silence instead of 'C'. Taking into account the equilibrium strategies, the expected payoff from that deviation would be q·t + (1 − q)·d, which is no greater than the equilibrium payoff (μ + q)·c under our assumptions. □
We make three remarks. First, since any principled player cooperates in this equilibrium, the cooperation rate is larger than in the cooperative equilibrium with one-way communication, where the honest receivers do not cooperate. The intuition is clear: More types can commit to cooperate if the communication structure is rich (they do that, among other reasons, to induce the co-player to cooperate as well).
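The no-deviation condition for the honest type in point (i) is algebraically equivalent to the threshold form stated in Observation 3; a grid check over hypothetical payoff values confirms the equivalence.

```python
# A grid check (hypothetical payoffs) that the honest type's no-deviation
# condition q*t + (1 - q)*d <= (mu + q)*c in the proof of Observation 3 is
# equivalent to mu + q >= mu_seq + (t - d)*q/c, with mu_seq = d/c.

c, t, d = 3.0, 4.0, 1.0
mu_seq = d / c

def deviation_unprofitable(mu, q):
    return q * t + (1 - q) * d <= (mu + q) * c + 1e-9

def threshold_holds(mu, q):
    return mu + q >= mu_seq + (t - d) * q / c - 1e-9

for i in range(21):
    for j in range(21 - i):              # keep mu + q <= 1
        mu, q = i / 20, j / 20
        assert deviation_unprofitable(mu, q) == threshold_holds(mu, q)
```

Rearranging q·t + (1 − q)·d ≤ (μ + q)·c gives μ + q ≥ d/c + (t − d)·q/c directly, which is what the loop verifies point by point.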
Second, the game has other equilibria apart from the cooperative one, and a discussion similar to that for Observation 2 applies here as well. In particular, the equilibria with unanimous defection are ruled out by the payoff-dominance criterion if μ + q is large. Third, condition μ + q ≥ μseq + (t − d)·q/c implies that the share of honest types q cannot be 'too' large for the cooperative equilibrium to exist – to appreciate this, think of the case q = 1. This point reminds us of Proposition 1, and the relevance of the '3-types' hypothesis. The next result extends Observations 2 and 3 to any protocol.

Observation 4. Whatever the communication protocol, the PD with pre-play communication has a PBE in which all senders always announce cooperation and the principled ones cooperate afterwards (provided that μ and q are large enough).

Proof. Protocols different from one- and two-way communication may differ in two aspects: (i) sequentiality, and (ii) at least one player sends multiple messages. With respect to the first feature, it suffices to consider the following protocol: One player sends a message, the co-player observes it, and then sends a response. If any type chooses message 'C' with this protocol and the conditions stated in Observation 3 hold, no type of player will benefit by choosing a different message or silence. This is particularly true for the first message sender, as no second mover would then commit to cooperate; however, it is also true for the second message sender. In effect, choosing 'D' or silence is not a best response if condition q·t + (1 − q)·d ≤ (μ + q)·c holds: Since the other player will defect in the action stage if she is EH, choosing 'C' is a better option – as a result, the principled types commit to cooperate. With respect to the latter feature (multiple messages), this clearly does not affect the existence of the cooperative equilibrium, as long as the players can always announce 'C'. □

Table 4
A social dilemma with two E-allocations.

        C1       C2       D
C1      c, c     0, 0     0, t
C2      0, 0     c, c     0, t
D       t, 0     t, 0     d, d
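The game of Table 4 is easy to encode and check mechanically; the sketch below uses hypothetical monetary values with t > c > d > 0 and verifies that D is strictly dominant in money while the two matching cooperative profiles yield the E-allocation (c, c).

```python
# The game of Table 4 encoded as the Row player's monetary payoffs
# (hypothetical values): D strictly dominates C1 and C2 in money, and the
# E-allocation (c, c) arises only when both players pick the same C-action.

c, t, d = 3.0, 4.0, 1.0
actions = ["C1", "C2", "D"]
row_payoff = {
    ("C1", "C1"): c, ("C1", "C2"): 0, ("C1", "D"): 0,
    ("C2", "C1"): 0, ("C2", "C2"): c, ("C2", "D"): 0,
    ("D", "C1"): t, ("D", "C2"): t, ("D", "D"): d,
}
for col in actions:                                  # D strictly dominates
    for row in ("C1", "C2"):
        assert row_payoff[("D", col)] > row_payoff[(row, col)]
assert row_payoff[("C1", "C1")] == row_payoff[("C2", "C2")] == c
```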

Observations 1–4 show that pre-play communication affects cooperation and social efficiency. We finish this section with a summary of these effects. For this, consider any communication protocol with multiple equilibria and assume first that payoff-dominance does not apply. If we focus on the equilibria with the maximum level of cooperation, we can show that the rate of cooperation achievable is always at least as large as without communication, whatever the communication protocol chosen. This is trivial if μ < μsim or γ/2 < (t − c), as the cooperation level in the PD without communication is then nil. If we assume instead μ ≥ μsim and γ/2 ≥ (t − c), so that the highest possible cooperation rate in the PD without communication is μ, the argument is again simple. It suffices to note that, whatever the protocol, there always exists a 'silent' equilibrium in which all types keep silent and then only the EH types cooperate, thus achieving a cooperation rate of μ – note that there might be other equilibria with even higher cooperation rates. If payoff-dominance applies and the parameters are appropriate, moreover, the minimum possible level of cooperation with pre-play communication is always at least as large as the maximum level without communication. This is because any equilibrium in which all players defect along the equilibrium path is payoff-dominated by the silent equilibrium if μ ≥ μseq. Perhaps the most interesting point, finally, refers to those protocols in which at least one player cannot choose to remain silent. If the parameters are those of Observation 2, the level of cooperation in equilibrium is then strictly higher than the maximum level without communication, μ. The reason is that at least the non-silent player will announce 'C' in equilibrium (whatever his type), and this implies a cooperation rate at least equal to that of Observation 2, which is strictly higher than μ.

4.2. The effect of pre-play communication on cooperation: some determinants

While pre-play communication fosters cooperation in the PD when conditions are appropriate, the net effect depends on several characteristics of (a) the protocol employed and (b) the action stage. We first illustrate point (a) with the help of Observations 2 and 3.

Corollary 1. Cooperation in the PD with pre-play communication depends on the number of message senders.

Proof. This simply follows from comparing the equilibria in Observations 2 and 3. The average cooperation rates are μ + q/2 and μ + q with one-way and two-way communication, respectively, thus proving our statement. □

We are not aware of any economic experiment explicitly comparing cooperation levels in the simultaneous PD game with one-way and two-way communication. However, Sally's (1995) meta-analysis suggests that two-way communication has a strong positive effect on cooperation. In effect, the author finds that the presence of discussion – a form of bilateral communication – in one-shot social dilemma games is highly significant, and on average raises the cooperation rate by more than 45 percentage points. As another illustration of the effect of the protocol on cooperation, consider the PD with two-way communication. Observations 3 and 4 indicate that the maximum level of cooperation possible with a simultaneous or a sequential protocol is the same. That is, bilateral communication achieves the same level of social efficiency in the PD even if no player has the first word. Yet this result should not be generalized to any social dilemma.

Observation 5. Social efficiency in social dilemmas is not always invariant to the order in which players communicate (provided that there are multiple message senders).

Proof. The two-player game in Table 4 coincides with the PD game of Table 1 except that each player has two possible cooperative moves (C1 and C2).
The game has two E-paths, (C1, C1) and (C2, C2), as both lead to the E-allocation (c, c). If both players communicate sequentially before playing the dilemma, the entire game has a continuum of mixed strategy equilibria, provided that μ and γ are large enough. In these mixed equilibria, the first message sender randomizes with some probability between announcements 'C1' and 'C2', the second sender responds with the same announcement as the first one, and both players honor their words afterwards if they are principled. The proof follows a similar line to that of Observation 3: If μ is large enough, announcing 'D' or choosing silence is not optimal because the EH types would then defect in the action stage. If γ is large, finally, message 'C' acts as a commitment device for the principled types. With simultaneous communication, the game also has multiple mixed strategy equilibria. In them, players randomize between announcements 'C1' and 'C2'; the proof is again similar to that of Observation 3. In contrast with the sequential protocol, however, two principled players need not achieve outcome (c, c) for sure – they may reach outcome (0, 0) if they make different announcements – thus reducing social efficiency.8 □

Corollary 1 and Observation 5 illustrate how the protocol can affect the cooperation rate. We show now that the effectiveness of communication also depends on details of the action stage, such as the order of play (e.g., simultaneous vs. sequential). For this, we concentrate on the sequential PD: One of the players (the second mover, or M2) chooses after observing the choice of the other player (the first mover, or M1). To ascertain the net effect of communication in this game, we first consider the sequential PD without communication. To apply our model to this game, observe that allocation (c, c) is the only E-allocation, so that both players cooperate in the unique E-path (definition 3). Consequently, the EH-norm requires M1 to cooperate, and M2 to cooperate as well if M1 cooperated. In addition, the EH-norm allows M2 to choose any action if M1 defected and hence deviated from the E-path (see definition 4). Fig. 2 depicts players' utility payoffs if both players are EH (upper payoffs correspond to M1; arrows identify E-actions).

Fig. 2. EH-players' payoffs in the sequential PD (M1's payoff listed first): (C, C) → (c, c); (C, D) → (0, t − γ/2); (D, C) → (t − γ/2, 0); (D, D) → (d − γ/2, d).

Observation 6. The sequential PD has an essentially unique PBE for any parameter calibration. If γ/2 ≥ (t − c) and μ ≥ μseq, EH second movers cooperate conditionally, and any type of first mover cooperates.

Proof. More precisely, there exist multiple equilibria whatever the parameters, but their only difference is in the beliefs off the equilibrium path (except marginal cases). To prove this, we proceed by backward induction, starting with M2's decision. Clearly, this player always defects if she is selfish or honest. If M2 is an EH type, in turn, the reader can confirm from Fig. 2 that M2 defects at any information set if γ/2 < (t − c), and reciprocates M1's choice otherwise – that is, she cooperates if M1 cooperated and defects if M1 defected.
To analyze M1’s decision, consider first the case c/2 < (t  c). Since M2 never cooperates in this case, it follows that M1 does not cooperate as well if she is honest or selfish. In contrast, inspection of Fig. 2 indicates that M1 cooperates if she is EH and 0 > d  c/2. In other words: If the joint condition d < c/2 < (t  c) holds, an EH first mover cooperates even if M2 is not expected to reciprocate. If c/2 P (t  c), we know that the EH second movers cooperate conditionally. Hence, cooperation is profitable for M1 if l is large enough. More precisely, cooperation gives a larger expected monetary payoff than defection for an honest or selfish first mover if l  c > d, whereas it is optimal for an EH first mover if l  c > d  c/2 (see Fig. 2 for clarification), that is, if l is larger or equal than the following threshold (which is in turn lower than lseq):

μ′ = (2d − γ)/(2c).    (4)

Since backward induction selects a unique optimal choice for any type of player and any parameter constellation (except marginal cases), we conclude our proof. □
The experimental data from Hayashi, Ostrom, Walker, and Yamagishi (1999) and Clark and Sefton (2001) corroborate our equilibrium predictions: Second movers often cooperate, but conditional on the first mover's choice, while unconditional cooperation is negligible. In addition, Clark and Sefton (2001) report that conditional cooperation falls as its material cost rises, something also consistent with our model, as reciprocation is predicted only if γ/2 ≥ (t − c). Finally, Hayashi et al. (1999) and Clark and Sefton (2001) also report that the sequential PD elicits a higher rate of cooperation than the simultaneous one. This is in line with our model as well, as one can infer by comparing Observations 1 and 6. To start, nobody cooperates in the simultaneous PD if γ/2 < (t − c), whereas EH first movers cooperate if d < γ/2 < (t − c). Nevertheless, the differences with the simultaneous PD are more acute if γ/2 ≥ (t − c). On one hand, any type of first mover cooperates in the sequential PD if μ is large, whereas honest and selfish types never cooperate in the simultaneous PD. By comparing expressions (3) and (4), moreover, the reader can easily prove that μsim > μ′ if γ/2 ≥ (t − c). This means that an EH mover in the simultaneous PD requires a larger prior to cooperate than an EH first mover in the sequential PD, a further reason for the sequential mechanism to be relatively more effective in raising cooperation.

8 In fact, simultaneous communication in this game might be even less efficient than one-way communication, provided that the proportion of EH-types is large (to see this, consider the extreme case μ = 1). This shows that a richer communication protocol – i.e., more message senders – need not increase cooperation in any social dilemma, thus qualifying Example 1 (which refers only to the PD; see also Observation 7 later).

To study the relation between the action stage structure and communication, consider the effect of one-way communication in the sequential PD. Does it foster cooperation, as in the simultaneous PD? Not always, as the answer depends on who sends the message. To facilitate comparison with Observation 6, we focus on the case γ/2 ≥ (t − c) and μ ≥ μseq.

Observation 7. In the sequential PD, messages do not affect behavior if the sender is the first mover in the action stage. Thus, the effect of one-way communication on cooperation depends on whether the PD is played simultaneously or sequentially. Yet messages are effective if the sender is the second mover and parameters are appropriate.

Proof. Consider the first statement. Since γ/2 ≥ (t − c), Observation 6 indicates that M1 should cooperate (or commit to cooperation by announcing it beforehand) only if she expects M2 to cooperate conditionally with sufficient probability – i.e., if M2 is expected to be an EH type. However, giving the voice to M1 cannot change anything in this respect: M2 never cooperates if she is selfish, and the same occurs if she is honest and cannot communicate. In comparison with Observation 6, therefore, one-way communication by the first mover never increases cooperation. In contrast, one-way communication can increase cooperation in the simultaneous PD (Observation 2). This proves our second statement. To prove the third statement we proceed by backward induction, starting with M2's decision in the action stage. Clearly, she always defects if she is selfish.
An honest M2, in turn, cooperates at information set h only if she previously announced she would cooperate at h and γ/2 is large enough – in this respect, γ/2 ≥ (t − c) implies that M2 has an incentive to cooperate conditionally. Similarly, an EH second mover cooperates at h only if γ/2 is large and moreover everybody respected the EH-norm before – i.e., if M1 previously cooperated and M2 did not announce defection for that contingency. Consider now M1's decision. Since M1 cannot communicate, an honest M1 has the same utility function as a selfish one. As a result, an honest or selfish M1 cooperates only if M2 is expected to cooperate conditionally with sufficient probability. This can occur only if M2 is EH and kept silent, or if M2 is principled and previously announced 'I cooperate if M1 cooperates', at least.9 Hence, a selfish or honest M1 cooperates either if M2 kept silent and her beliefs about M2's type at that information set satisfy μ·c > d (see Observation 6), or if M2 announced conditional cooperation and (μ + q)·c > d. If M1 is EH and M2 chose silence or announced conditional cooperation, in turn, she should respectively cooperate if μ·c > d − γ/2 or (μ + q)·c > d − γ/2. Note that any of these conditions holds if μ ≥ μseq, as assumed. We finally study the communication stage, starting with three remarks. First, M2 should either announce conditional cooperation or keep silent, as any honest or selfish M1 will never cooperate otherwise. Second, a selfish M2 should mimic in equilibrium another type's choice in order to prevent signaling her type (M1 will not cooperate if she believes that M2 is selfish). Third, it is not optimal for an honest/selfish second mover to choose silence if the EH second mover announces conditional cooperation: In this case, M1 should not cooperate after silence, and the cooperative announcement would be a profitable deviation for those second movers. It follows that all types of M2 pool messages, so we are left with two equilibria.
In one of them, M2 announces 'I cooperate if M1 cooperates' and any type of M1 cooperates as long as (μ + q)·c > d (note well that beliefs are coherent with Bayesian updating along the equilibrium path). The other equilibrium coincides with the equilibrium with no communication (Observation 6); M2 keeps silent here and any type of M1 cooperates if μ·c > d.10 □
In summary: Compared with the case without communication, one-way communication by M2 never reduces cooperation, and indeed raises it if q + μ and γ are large enough and silence is not an option. In this case, the game has a unique equilibrium: Any type of M2 announces conditional cooperation, and honors her word afterwards if she is principled. In comparison, M2 only cooperates in the game without communication if she is EH. We stress that giving the voice to M2 is more effective than giving it to M1 because honest second movers cooperate in the first case. More generally, promises from agent A to agent B are useless if A makes all choices before B starts moving, as these promises do not add any relevant information (one can say in this case that actions 'crowd out' words) and do not affect B's incentives to cooperate. The experimental results from Charness and Dufwenberg (2006) are consistent with this. They study a trust game to which our previous analysis can be applied,11 and three of their treatments are of particular interest to us: a first one in which no subject could announce their intentions, a second one in which the first mover could communicate, and a third one where the second mover could communicate (silence was allowed but rarely used). The percentages of mutual cooperation elicited by these three treatments were respectively 20%, 26%, and 50%. In line with our model, the first two

9 Observe that M2 is sure to cooperate conditionally if she simply announces 'I cooperate if M1 cooperates' (without specifying what she does if M1 defects), because in equilibrium any type of M2 defects if M1 defected previously, provided that M2 announced nothing for that contingency.
10 This is only sustained given certain beliefs off the equilibrium path. More precisely, M1 must believe that M2 is selfish if she announces conditional cooperation. Since the equilibrium with silence is based on relatively more stringent conditions on q, l, and beliefs off the equilibrium path, it seems to us that the other equilibrium is "less risky" (and arguably more focal).
11 A simple trust game like that of Charness and Dufwenberg (2006) has basically the structure of the sequential PD in Fig. 2, except that allocation (d, d) is reached for sure if M1 chooses D. However, this does not affect our prior equilibrium analysis. Finally, we note that the trust game in Charness and Dufwenberg (2006) includes a random shock, but this is again immaterial for our results.

Table 5
Maximal average cooperation in each communication treatment.

Action stage       Communication stage
                   No comm.       One-way          Two-way
Simultaneous PD    l              l + q/2          l + q
Sequential PD      (1 + l)/2      (1 + l + q)/2    (1 + l + q)/2

percentages are not significantly different, while the third one is higher than the others. Additionally, we note that mutual cooperation in the third treatment was much higher following a statement of intent or promise than otherwise (even if subjects were free to write whatever they wanted in their messages).

We add two final remarks on Observation 7. First, communication by M2 can generate cooperation in the sequential PD even if l = 0, i.e., even if there are no EH types. In effect, it directly follows from the proof that some players may cooperate in equilibrium even if l = 0, provided that both c and q are large enough. In this equilibrium, M2 announces conditional cooperation and then any type of M1 cooperates in the action stage if q > d/c. Afterwards, M2 honors her word if she is honest and c/2 ≥ t − c, and defects otherwise. This contrasts with Proposition 1, which indicated that communication could not foster cooperation in any simultaneous social dilemma if l = 0. Intuitively, the key difference with the simultaneous PD is that M2 can condition her behavior on M1's behavior, and use the communication stage to credibly threaten a harmful choice (defection) if the other player misbehaves. In other words, M2 can use promises to force M1 to cooperate. Still, since the hypothesis l = 0 cannot explain why some people cooperate in the sequential dilemma when pre-play communication is not feasible, our '3-types' hypothesis seems well vindicated even in the context of the sequential PD.

Second, note one contrast between Corollary 1 and Observation 7. The first result asserts that cooperation increases in the simultaneous PD as the number of message senders increases (under certain conditions). Yet Observation 7 hints that more is not always better, as communication by M1 is ineffectual in raising the cooperation rate in the sequential PD.
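As a numerical illustration of the first remark, the l = 0 equilibrium conditions can be checked directly. This is a minimal sketch in Python; the parameters l, q, c, d, t follow the notation of the text, the function names are ours, and the particular numbers are only illustrative:

```python
def m2_honors_promise(c, t):
    # An honest M2 who announced conditional cooperation keeps her word
    # when the norm cost outweighs the temptation to defect: c/2 >= t - c.
    return c / 2 >= t - c

def m1_cooperates_after_promise(l, q, c, d):
    # M1 expects conditional cooperation from EH types (share l) and,
    # after a promise, from honest types (share q): cooperate iff (l + q)*c > d.
    return (l + q) * c > d

# With no EH types (l = 0), cooperation survives if q and c are large enough:
l, q, c, d, t = 0.0, 0.5, 4.0, 1.0, 5.0
print(m2_honors_promise(c, t))                  # c/2 = 2 >= t - c = 1
print(m1_cooperates_after_promise(l, q, c, d))  # q > d/c since 0.5 > 0.25
```

With a smaller share of honest types (say q = 0.2), the second condition fails and no type of M1 cooperates, in line with the requirement that q be large enough.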
As a summary of the previous observations, Table 5 depicts the maximum possible average cooperation in equilibrium depending on the structure of the communication and the action stages (in the sequential PD with one-way communication, M2 is assumed to be the message sender). For instance, this maximum equals l in the simultaneous PD with no pre-play communication (first column) because only the EH types cooperate then.

To finish this section, we would like to point out one important feature of our model: given their norm-based utility functions, the principled types care about previous history, and more precisely about whether other players respected a norm before – see Sen (1997) on history-dependent preferences. This feature distinguishes our model from some other models of other-regarding preferences and moreover leads to predictions which, in view of some available evidence, seem reasonable. For instance, it clearly implies that the principled types are less likely to tell the truth if they were deceived before (in other words, they are less likely to respect the honesty component of their norms if others did not respect it before). This is consistent with the evidence from Ellingsen, Johannesson, Lilja, and Zetterqvist (2009, p. 253), supporting their claim that "honesty is indeed conditional". Participants in their experiment were matched in pairs and then played two games: (1) a Prisoner's Dilemma with 4 rounds of sequential pre-play communication, and (2) a double auction bargaining game with or without communication. Although a full theoretical analysis of the entire game would be too lengthy, we make a few remarks. Assume for simplicity that all players viewed the PD and the bargaining game as isolated, one-shot games – this is not totally unrealistic, as the bargaining game was introduced to the subjects only after they played the PD.
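The entries of Table 5 can also be written out mechanically. The sketch below uses the paper's l and q notation, assumes the conditions on c under which each maximum is attained, and the function name is ours:

```python
def max_avg_cooperation(game, protocol, l, q):
    """Maximal equilibrium average cooperation, as in Table 5.

    l: population share of EH types; q: share of honest types.
    In the sequential PD with one-way communication, M2 is the sender.
    """
    table = {
        ("simultaneous", "none"): l,
        ("simultaneous", "one-way"): l + q / 2,
        ("simultaneous", "two-way"): l + q,
        ("sequential", "none"): (1 + l) / 2,
        ("sequential", "one-way"): (1 + l + q) / 2,
        ("sequential", "two-way"): (1 + l + q) / 2,
    }
    return table[(game, protocol)]

# In the simultaneous PD, more senders means (weakly) more cooperation:
l, q = 0.2, 0.3
for proto in ("none", "one-way", "two-way"):
    print(proto, max_avg_cooperation("simultaneous", proto, l, q))
```

Note the two comparative statics discussed above: cooperation rises with the number of senders in the simultaneous PD (Corollary 1), while in the sequential PD the one-way and two-way maxima coincide.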
Given this, Observation 4 implies that all players should announce 'C' (silence is not an option given the design), and that principled players should cooperate afterwards if l + q and c are large enough, while selfish ones should defect. In line with this, the authors report that most subjects (97%) attempted to make their co-player believe that they would cooperate. However, less than 70 percent of the subjects cooperated afterwards. In other words, some subjects lied.

After being informed of their co-player's choice in the PD, the subjects went on to play the bargaining game. In this game, players bargained over a single unit of a fictitious good, and each player's valuation of the good was private information. If communication is available and players cannot keep silent, a natural extension of the H-norm and the EH-norm would require players to sincerely announce Nature's move – i.e., the player's valuation of the good. If their co-player lied in the PD, however, principled types suffer no cost from breaking the norm, so the model predicts a lower rate of truth-telling in this case than if the co-player told the truth and cooperated before (provided that c is large enough, of course). Indeed, Ellingsen et al. (2009) report that only 6% of the cooperators truthfully revealed their values if the opponent had defected before, while 53% sent a false message in that case. In contrast, 64% were honest if the opponent cooperated in the PD, and only 7% deceived. This kind of conditional revelation seems difficult to explain unless people care about prior history. The introduction of norms in our model is one manner to formalize such history-dependence.

5. Related literature

We review here some related literature, stressing the differences with our account. To start, our paper is related to a recent literature on fairness and reciprocity that relaxes the standard assumption of selfishness (see López-Pérez, 2008 for a more detailed survey).
Rabin (1993) models reciprocity in normal-form, two-player games as the idea that people are kind to those who are kind to them, and harm those who harm them. Dufwenberg and Kirchsteiger (2004) extend Rabin's ideas to extensive-form games. Levine (1998) assumes type-based altruism and spitefulness, and both Fehr and Schmidt (1999) and Bolton and Ockenfels (2000) propose models of inequity-averse players. In turn, Charness and Rabin (2002), Falk and Fischbacher (2006), and Cox, Friedman, and Gjerstad (2007) introduce both reciprocity and distributive concerns. Finally, López-Pérez (2008) assumes that people are averse to deviating from norms of fairness, and become aggressive towards norm violators.12 These models provide numerous insights on cooperation, but do not seek to explain why communication affects cooperation. Models of inequity aversion (Bolton & Ockenfels, 2000; Fehr & Schmidt, 1999), to provide just one example, assume that players' utility depends only on the distribution of material payoffs, and this is something that costless communication cannot shape by definition. As a result, these models predict no effect of communication on the best responses and equilibria of the action stage subgame.

To account for the role of communication, social researchers have advanced at least five theories: (1) communication acts as a coordinating device (Farrell, 1987; Farrell & Rabin, 1996), (2) communication reduces social distance (Orbell, van de Kragt, & Dawes, 1988), (3) communication raises the payoff expectations of receivers, and senders feel badly if they let down those expectations (guilt aversion, as in Charness & Dufwenberg, 2006), (4) a contractarian approach according to which promises are binding only if they are mutual (see Gauthier, 1986, or Orbell, Dawes, & van de Kragt, 1990), and (5) lie-aversion, which is akin to our honesty norms account. Since communication is a multifaceted phenomenon, all of these forces are possibly relevant. In the rest of this section, we compare theories (1)–(4) with our model.

Consider (1).
The intuition here is that pre-play communication acts as an equilibrium selection device. To illustrate this, consider the simultaneous PD of Table 1, assuming for the sake of the argument that both players are conditional cooperators. Since mutual cooperation and mutual defection are both pure strategy equilibria of the game without communication, coordination is a key issue. If one-way communication is available, however, one could apply the hypothesis in Farrell and Rabin (1996) that self-committing, self-signaling messages are always trusted.13 This reduces the number of equilibria of the entire game, ensuring coordination on the most efficient equilibrium. More precisely, a combined model (conditional cooperation + communication refinement) predicts two refined equilibrium paths in the entire game. In one path, the sender announces 'C' and both players cooperate afterwards; in the other path, the sender keeps silent and mutual cooperation follows – announcing 'D' is never optimal because it leads to the 'bad' equilibrium (D, D). If one additionally assumes that (at least some) conditional cooperators fail to coordinate on the cooperative equilibrium when communication is not available, this model could consequently explain why one-way communication increases cooperation in the simultaneous PD.

There are similarities and differences between theory (1) and our account. One similarity is that conditional cooperation is assumed (in our model, recall, the EH-types are conditional cooperators). According to (1), however, communication works when conditional cooperators are initially pessimistic (i.e., expect defection), as it gives them the confidence that the co-player will coordinate on the efficient, cooperative equilibrium. In our model, in contrast, communication works because the H-types can use it as a commitment device. This difference has other implications.
For instance, the number of players who communicate may be immaterial if communication affects only expectations, but not if it acts as a commitment device (see Corollary 1). Also, theory (1) predicts no communication effect if the action stage subgame has a unique equilibrium – i.e., when coordination is not an issue. When people care about honesty, in contrast, communication can induce equilibria with higher cooperation (recall the analysis of the sequential PD in Observations 6 and 7). Note also that theory (1) implies no conditional truth-telling, that is, no dependence between honesty and prior history.

This last point also distinguishes our model from the idea that communication works because it enhances group identity and possibly a desire to cooperate, that is, theory (2). According to theory (2), furthermore, it should not matter who sends messages for pre-play communication to be effective. Observation 7 clearly indicates the opposite: communication is not effective on some occasions. See also Bicchieri (2002), who compares theory (2) with a norm-based explanation and argues in favor of the latter.

With respect to theory (3), one important difference from our model is that guilt-aversion is based on the Psychological Game Theory developed by Geanakoplos, Pearce, and Stacchetti (1989) and hence posits that beliefs enter directly into the players' utility functions. Notwithstanding, the predictions of guilt-aversion coincide with ours in some games. For instance, Charness and Dufwenberg (2006) show that the evidence reported in Observation 7 can be explained by guilt-aversion, provided that promises affect beliefs. They also report one phenomenon that can be explained by guilt-aversion but not by our model.
More precisely, first movers in the trust game were asked to guess the proportion of second movers who would reciprocate if the first mover cooperated, while second movers were asked to guess the average guess made by the first movers who cooperated (both were paid for accuracy). The authors then used a probit regression and the data from their second and third treatments to show that a second mover's decision to cooperate is significantly correlated with her guess, but not with a dummy for the treatment. Lie-aversion correctly forecasts a shift in the second mover's guess if she communicates (as in the third treatment), but cannot account for these results.14 Still, other factors apart from guilt-aversion could explain this correlation, as argued by Vanberg (2008), who also reports lab evidence in line with lie-aversion, but inconsistent with guilt-aversion.

12 Our model extends López-Pérez (2008), introducing honesty norms and analyzing games with communication. One difference is that López-Pérez (2008) assumes that the cost of breaking a norm depends on the number of players who respect the norm, not on the overall proportion. Although both specifications render qualitatively similar results in the games analyzed there, we now believe that the latter idea is empirically more valid (especially in multiple-player games). Importantly, the model here can explain all the experimental phenomena studied in López-Pérez (2008), except costly punishment. In future research, we plan to extend our model so as to explain punishment, including punishment of deceitful behavior.
13 A message is self-committing if the sender wants to honor it in case she believes that the receiver believes it – the message must be part of an equilibrium strategy profile of the action stage subgame. For instance, both 'C' and 'D' are self-committing in the simultaneous PD if both players are sufficiently inequity averse. A message is self-signaling when the sender prefers the receiver to play a best response to it if and only if the message is true – e.g., both 'C' and 'D' are self-signaling messages in the PD if both players are sufficiently inequity averse.

Theory (4) is related to our approach, as it also predicts reciprocal behavior with respect to promise-keeping. In terms of our theory, the contractarian approach amounts to saying that a deviation from the honesty norm does not entail a psychological cost unless everyone has made a promise. In other words, promises are not ethically binding unless they are unanimous. The first thing to note is that this hypothesis alone is insufficient to deliver a communication effect. As Proposition 1 indicates, a concern for honesty alone predicts zero effect in simultaneous dilemmas; a concern for fairness must be present as well. Further, the hypothesis seems at odds with the evidence, as it predicts no effect of one-way communication or, in general, of any protocol that does not allow for mutual promises. Maybe, as Orbell et al. (1990) mention and we posit, people relax the unanimity requirement, so that some proportion of people promising is enough to trigger ethical obligation.

To finish, there are now several models of lie-aversion (theory 5), which are most related to our paper.
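Models in this family typically assume a lying cost that is independent of others' behavior. Our norm-based account instead lets the cost scale with prior compliance, which is what generates conditional truth-telling. A minimal sketch follows (our own illustrative formalization, not the paper's exact utility specification; the linear scaling and the numbers are assumptions):

```python
def lying_cost(c, compliance_share):
    # The psychological cost of breaking the honesty norm is largest
    # when everyone complied with the norm before (compliance_share = 1)
    # and vanishes when nobody did (compliance_share = 0).
    return c * compliance_share

def tells_truth(gain_from_lie, c, compliance_share):
    # A principled player lies only if the material gain exceeds the cost.
    return gain_from_lie <= lying_cost(c, compliance_share)

# A fixed-cost model (compliance_share pinned at 1) predicts the same
# honesty whatever the co-player did; the compliance-dependent cost does not:
print(tells_truth(1.0, 4.0, 1.0))  # co-player kept her word before
print(tells_truth(1.0, 4.0, 0.0))  # co-player lied/defected before
```

This reproduces, qualitatively, the Ellingsen et al. (2009) pattern discussed above: high truth-telling after the co-player cooperated, widespread deception after the co-player defected.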
Ellingsen and Johannesson (2004) combine inequity aversion and a fixed cost of lying in a hold-up game; Miettinen (2005) studies a two-player game which is preceded by negotiations and assumes that players feel badly if they deviate from the agreement; Demichelis and Weibull (2008) posit that players have a lexicographic preference for honesty (second to the material payoffs in the stage game) and analyze evolutionary stability in coordination games; and Kartik (2009) studies sender-receiver games in which the sender bears a cost of lying. The main differences between these models and ours are that (i) the cost of lying in these models is independent of the other players' behavior, so they predict no change in truth-telling depending on previous history, and (ii) we introduce both fairness and honesty concerns. Although Ellingsen and Johannesson (2004) also introduce point (ii), they do not explore the correlation between fairness and honesty motivations, which seems crucial to account for the effect of communication on cooperation, as we have argued in this paper.

6. Conclusion

Much experimental evidence confirms that communication affects behavior. A full account of this phenomenon seems difficult, if not impossible, under the standard game-theoretical assumptions. Indeed, Crawford and Sobel (1982, p. 1450) note that these assumptions largely imply that "concepts like lying, credibility, and credulity – all essential features of strategic communication – do not have fully satisfactory operational meanings".15 This paper suggests that one key reason why communication works is that people care about social norms of fairness and honesty and, importantly, are heterogeneous in this regard.

Our approach throws light on the issues of truth-telling, deception, and credibility. When is someone expected to lie? There are two crucial requirements here. First, the psychological cost of lying should be relatively small.
Thus, people are more likely to lie if the expected monetary gain is large – e.g., when the stakes are high and the relationship non-repeated; think of house negotiations – or if sufficiently many others are expected to lie or behave unfairly.16 Second, the liar should expect others to trust her, as her lie would possibly be ineffective otherwise. Heterogeneity is crucial in this regard: dishonest players find it easier to cheat others because players' types are private information and a significant part of the population is expected to be honest (if truth-telling is not too costly).

The model can be extended in several ways. To start, other factors apart from lie-aversion could be introduced. For example, personal, non-strategy-relevant communication can increase the amounts sent and returned in a trust game (Buchan, Johnson, & Croson, 2006), which suggests a role for social distance (although probably a complex one, as noted by these authors): people from group X might be more likely to cooperate with someone who declares to be a member of group X than with another person. This phenomenon could be formalized by making our parameter c depend on the identity of the co-player(s). Note, however, that norms of honesty somehow predate group identity: to understand why saying 'I belong to group X' has an effect on others, one must first be able to explain why others believe that message to be sincere. A second extension of the model is suggested by Kartik (2009), who posits a magnitude-dependent cost of lying. While we have assumed for simplicity that the cost of lying does not depend on the type of lie uttered, the parameter c could again vary with this. Finally, the model can be readily extended to deal not only with communication about intentions (future moves), but also about past actions (including random shocks).

In addition, the model can be tested or improved with the help of new experiments.
For instance, we predict that the cooperation rate should vary depending on the protocol and the structure of the action stage. Interestingly, other theories make different predictions in this respect, as the discussion in Section 5 indicates. To test our model against the alternative theories, therefore, one could check whether the ranking of games and protocols according to the cooperation rate coincides with that in Table 5. Is it true, for example, that two-way communication is more effective than a one-way protocol in the simultaneous PD? Also, the discussion in Observations 4 and 5 hints that a sequential protocol is as effective as a simultaneous one in the PD, but not necessarily in games with multiple E-paths, such as the game in Table 4. When should we give the first word to one player (i.e., use the sequential protocol)?

Experiments could also be used to improve our knowledge about norms. For an illustration of this point, consider a lie that may possibly benefit the receiver if she believes it; think of a doctor who tells a reassuring lie to an ill patient. While a very stringent honesty norm would forbid telling these white lies, many people could find them perfectly ethical (a few could still disapprove of them; see Erat & Gneezy, 2009 for some evidence in this respect). As a result, their occurrence could be more frequent than that of other lies. For a second example, consider lies that induce moral behavior. Think of the French priest who, when asked by a Nazi official whether he had fugitive Jews hidden in his church, lied that he did not. Again, a very strict honesty norm would forbid that lie, but most of us would agree that it was commendable. Experiments could be used to investigate sophisticated norms along these lines, like a norm that tolerates a lie when one expects that another player will respect the EH-norm if she is deceived.

To finish, we would like to note that communication not only gives an opportunity for making promises, but also one for teaching, thus raising productivity, and for discussing moral issues with others – some social researchers have speculated that dialogue might have a positive effect in avoiding conflict. To understand all this, however, one must first understand why people believe (or not) what others say. This article offers some insights in this respect.

14 Charness and Dufwenberg (2006, p. 1593) give additional arguments against a model assuming a fixed cost of lying. For instance, they claim that people do not suffer from lying in certain contexts, as when playing poker. Yet this can be explained by our approach: implicitly, the rules (norms) of poker allow some deceptive use of language – it is indeed part of the fun of poker!
15 Crawford and Sobel propose extending their model by "allowing lying to have costs for [sender] S, uncertain to [receiver] R, in addition to those inherent in its effect on R's choice of action" (ibid.). Our paper follows this line.
16 In relation with this, anecdotal evidence indicates that some professional groups like politicians and lawyers are expected to lie more frequently than others like doctors or professors; even if this image were false, it might become a self-fulfilling prophecy if many people come to believe it.
Acknowledgements

I am indebted to Urs Fischbacher, Michael Kosfeld, Andreas Leibbrandt, Topi Miettinen, Michael Näf, Christian Zehnder, and participants at several conferences and seminars for helpful comments. Part of this research was conducted while visiting the Institute for Empirical Research in Economics at Zurich, and I would like to thank their members for their great hospitality. I also gratefully acknowledge financial support from the European Union through the ENABLE Marie Curie Research Training Network.

References

Balliet, D. (2010). Communication and cooperation in social dilemmas: A meta-analytic review. Journal of Conflict Resolution, 54, 39–57.
Becker, G. (1996). Accounting for tastes. Harvard University Press.
Belot, M., Bhaskar, V., & van de Ven, J. (2010). Promises and cooperation: Evidence from a TV game show. Journal of Economic Behavior and Organization, 73, 396–405.
Bicchieri, C. (2002). Covenants without swords: Group identity, norms, and communication in social dilemmas. Rationality and Society, 14, 192–228.
Bolton, G. E., & Ockenfels, A. (2000). ERC: A theory of equity, reciprocity, and competition. American Economic Review, 90, 166–193.
Brandts, J., & Schram, A. (2001). Cooperation and noise in public good experiments: Applying the contribution function approach. Journal of Public Economics, 79, 399–427.
Buchan, N. R., Johnson, E. J., & Croson, R. (2006). Let's get personal: An international examination of the influence of communication, culture and social distance on other regarding preferences. Journal of Economic Behavior and Organization, 60, 373–398.
Charness, G. (2000). Self-serving cheap talk: A test of Aumann's conjecture. Games and Economic Behavior, 33, 177–194.
Charness, G., & Dufwenberg, M. (2006). Promises and partnerships. Econometrica, 74, 1579–1601.
Charness, G., & Rabin, M. (2002). Understanding social preferences with simple tests. Quarterly Journal of Economics, 117, 817–869.
Chaudhuri, A. (2011). Sustaining cooperation in laboratory public goods experiments: A selective survey of the literature. Experimental Economics, 14, 47–83.
Clark, K., & Sefton, M. (2001). The sequential prisoner's dilemma: Evidence on reciprocation. The Economic Journal, 111, 51–68.
Cooper, R., DeJong, D., Forsythe, R., & Ross, T. (1992). Communication in coordination games. Quarterly Journal of Economics, 107, 739–771.
Cox, J., Friedman, D., & Gjerstad, S. (2007). A tractable model of reciprocity and fairness. Games and Economic Behavior, 59, 17–45.
Crawford, V., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50, 1431–1452.
Croson, R. (2000). Thinking like a game theorist: Factors affecting the frequency of equilibrium play. Journal of Economic Behavior and Organization, 41, 299–314.
Croson, R. (2007). Theories of commitment, altruism and reciprocity: Evidence from linear public goods games. Economic Inquiry, 45, 199–216.
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193.
Dawes, R. M., McTavish, J., & Shaklee, H. (1977). Behavior, communication, and assumptions about other people's behavior in a commons dilemma situation. Journal of Personality and Social Psychology, 35, 1–11.
Demichelis, S., & Weibull, J. W. (2008). Language, meaning, and games: A model of communication, coordination, and evolution. American Economic Review, 98, 1292–1311.
Duffy, J., & Feltovich, N. (2002). Do actions speak louder than words? Observation vs. cheap talk as coordination devices. Games and Economic Behavior, 39, 1–27.
Dufwenberg, M., & Kirchsteiger, G. (2004). A theory of sequential reciprocity. Games and Economic Behavior, 47, 268–298.
Ellingsen, T., & Johannesson, M. (2004). Promises, threats, and fairness. The Economic Journal, 114, 397–420.
Ellingsen, T., Johannesson, M., Lilja, J., & Zetterqvist, H. (2009). Trust and truth. The Economic Journal, 119, 252–276.
Elster, J. (1989). Social norms and economic theory. Journal of Economic Perspectives, 3, 99–117.
Erat, S., & Gneezy, U. (2009). White lies. Mimeo.
Falk, A., & Fischbacher, U. (2006). A theory of reciprocity. Games and Economic Behavior, 54, 293–315.
Farrell, J. (1987). Cheap talk, coordination, and entry. RAND Journal of Economics, 18, 34–39.
Farrell, J., & Rabin, M. (1996). Cheap talk. Journal of Economic Perspectives, 10, 103–118.
Fehr, E., & Gächter, S. (2000). Fairness and retaliation: The economics of reciprocity. Journal of Economic Perspectives, 14, 159–181.
Fehr, E., & Schmidt, K. (1999). A theory of fairness, competition and cooperation. Quarterly Journal of Economics, 114, 817–868.
Fischbacher, U., & Heusi, F. (2008). Lies in disguise: An experimental study on cheating. Mimeo.
Fischbacher, U., & Gächter, S. (2010). Social preferences, beliefs, and the dynamics of free riding in public good experiments. American Economic Review, 100, 541–556.
Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Economics Letters, 71, 397–404.
Gauthier, D. (1986). Morals by agreement. Oxford University Press.
Geanakoplos, J., Pearce, D., & Stacchetti, E. (1989). Psychological games and sequential rationality. Games and Economic Behavior, 1, 60–79.
Gneezy, U. (2005). Deception: The role of consequences. American Economic Review, 95, 384–394.
Harsanyi, J. C., & Selten, R. (1988). A general theory of equilibrium selection in games. MIT Press.
Hayashi, N., Ostrom, E., Walker, J., & Yamagishi, T. (1999). Reciprocity, trust and the sense of control: A cross-societal study. Rationality and Society, 11, 27–46.
Hurkens, S., & Kartik, N. (2009). Would I lie to you? On social preferences and lying aversion. Experimental Economics, 12, 180–192.
Kartik, N. (2009). Strategic communication with lying costs. Review of Economic Studies, 76, 1359–1395.
Levine, D. K. (1998). Modeling altruism and spitefulness in experiments. Review of Economic Dynamics, 1, 593–622.
López-Pérez, R. (2008). Aversion to norm-breaking: A model. Games and Economic Behavior, 64, 237–267.
López-Pérez, R. (2010). Guilt and shame: An axiomatic analysis. Theory and Decision, 69, 569–586.
Miettinen, T. (2005). Promises and lies: A theory of pre-play negotiations. Mimeo.
Miettinen, T., & Suetens, S. (2008). Communication and guilt in a prisoner's dilemma. Journal of Conflict Resolution, 52, 945–960.
Neugebauer, T., Perote, J., Schmidt, U., & Loos, M. (2009). Self-biased conditional cooperation: On the decline of cooperation in repeated public goods experiments. Journal of Economic Psychology, 30, 52–60.
Orbell, J., Dawes, R. M., & van de Kragt, A. (1990). The limits of multilateral promising. Ethics, 100, 616–627.
Orbell, J., van de Kragt, A., & Dawes, R. M. (1988). Explaining discussion-induced cooperation. Journal of Personality and Social Psychology, 54, 811–819.
Rabin, M. (1993). Incorporating fairness into game theory and economics. American Economic Review, 83, 1281–1302.
Rapoport, A., & Chammah, A. M. (1965). Prisoner's dilemma: A study in conflict and cooperation. Ann Arbor, MI: University of Michigan Press.
Sally, D. (1995). Conversation and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society, 7, 58–92.
Sen, A. (1997). Maximization and the act of choice. Econometrica, 65, 745–779.
Vanberg, C. (2008). Why do people keep their promises? An experimental test of two explanations. Econometrica, 76, 1467–1480.