Government warnings and the information provided by safety regulation

International Review of Law and Economics 24 (2004) 71–88

Paul Calcott∗
School of Economics and Finance, Faculty of Commerce and Administration, Victoria University of Wellington, P.O. Box 600, Wellington, New Zealand

Abstract

Imperfect information suggests a potential rationale for safety regulation. If government officials have information that citizens do not, then regulation could be in the citizens' interests. But if officials are excessively concerned with safety, then safety warnings might be preferable to regulation. A formal model is developed to examine these arguments. Regulation is represented as restrictions on citizens' action sets. Government warnings are modeled as cheap talk. Regulation may be beneficial—even when it results in a higher accident rate. If officials can effectively communicate warnings, then warnings can indeed be preferable to regulation. But they are not always preferable. Giving officials the power to regulate can improve their incentives to communicate truthfully.
© 2004 Elsevier Inc. All rights reserved.

JEL classification: D78; D82; I18

Keywords: Government warnings; Information provision; Safety regulation

1. Introduction

Many economists are sceptical of regulation, and this scepticism extends to regulation of consumer or workplace safety. Some sceptics concede, however, that there may be a case for safety regulation if government officials have more information than consumers, producers and workers (e.g. Antle, 1995; Viscusi, 1998). Regulations might prevent citizens from engaging in dangerous activities that they would not wish to engage in if they were sufficiently informed. But a sceptic might insist that this justification for regulation is only valid in limited cases—when it is impractical for officials to warn citizens instead. Safety warnings seem preferable to regulation, as they allow citizens to make their own informed decisions.

∗ Tel.: +64-4-495-5233x8945; fax: +64-4-463-5014. E-mail address: [email protected] (P. Calcott).

0144-8188/$ – see front matter © 2004 Elsevier Inc. All rights reserved. doi:10.1016/j.irle.2004.03.005


The purpose of this paper is to examine the logic of the informational argument for safety regulation in a formal model. The results provide some support for the argument, but also suggest some important qualifications. Safety regulation can indeed be beneficial for citizens, although this is not always so. Surprisingly, the informational benefits of regulators are often strongest when citizens respond to regulation by acting more dangerously. If regulation or warnings are moderate rather than extreme, this may indicate that the danger is low, and so reasonably risky behavior is justified.

Furthermore, citizens would often prefer a regime in which officials could only provide information, rather than being able to regulate as well. As Antle and Viscusi suggest, government warnings have the advantage that citizens can ignore them when they judge that the officials' agenda does not accord with their own interests. But there are exceptions. Sometimes it is better to empower officials to regulate, even when warnings can be cheaply and effectively communicated. It is shown below that officials may provide better information if they are permitted to regulate. The reason is that if citizens ignore warnings, officials might respond by changing the message. There would not be such a strong temptation to provide misinformation if officials could regulate.

The model focuses on the information provided by regulations and government warnings. It is shown that both types of intervention can sometimes be beneficial because of the information that they provide. But a number of other factors that are not explicitly captured in the model are also important in evaluating regulation and warnings. Such factors include ostensible departures from rationality (by both citizens and government officials), the direct costs of providing information and regulating, behavioral responses and citizen heterogeneity. These factors are, however, briefly considered in the discussion section below.

But there is one factor that needs to be explicit in the model—the motivation of the government officials who would regulate or provide warnings. A sufficiently pessimistic assumption about the goals of officials will lead us to the conclusion that neither regulation nor warnings can have much value. And a sufficiently optimistic assumption will lead us to conclude that intervention is beneficial. An intermediate approach is followed below, in which there are moderate differences between the objectives of officials and citizens' views of their own welfare. Officials will be assumed to be more concerned with safety than citizens are, whether because of their underlying values or because of the incentives that they face.1 This assumption reflects observations by both economists (e.g. Nichols & Zeckhauser, 1986; Viscusi, 1998) and noneconomists (e.g. Kelman, 1980) that officials dealing with safety often take a very conservative approach to risk.

Four versions of the model are developed. The first version—in which officials can neither regulate nor provide information—is presented in Section 2. In Section 3, we assume that information provision is impractical, but that officials are empowered to regulate. Regulations are represented as restrictions on the actions that the citizen can take, and are modeled in a simple signaling game. The converse situation, in which officials can provide information but cannot regulate, is examined in a game of cheap talk in Section 4. Finally, officials can both regulate and provide information in Section 5, and so signaling and cheap talk are both incorporated. A brief discussion is provided in Section 6.

1 Another reason could be that officials are concerned about externalities. If externalities generated the divergence between citizens' and officials' preferences, then social welfare could be identified with the officials' preferences rather than the citizens'. Otherwise the following analysis would be unchanged.

2. The model without intervention by an official

A citizen chooses an action from set A, which has three members: a1, a2 and a3. Action a1 is the safe option—such as avoiding a certain class of pesticide. Actions a2 and a3 do involve risk. For example, they could be the use of different varieties of pesticide. Action a2 would be the safer of the two and a3 the more dangerous.

The citizen's (expected) utility from action ai, given a probability of an accident, π, is uC(ai, π) = vi − πδi, where vi ≥ 0 is the direct payoff from action i and δi ≥ 0 is the disutility of an accident when engaged in this action.2 The expected utility of the government official is uG(ai, π) = λvi − πδi. We assume that 0 < λ < 1, which means that the official places relatively more importance on safety than the citizen does (i.e. gives less weight to the benefits of risky action).

Assume that v1 = δ1 = 0. This means that the citizen's expected utility is zero if he chooses a1, no matter what π is. The citizen might benefit from a more dangerous action, but only if the risk is not too high. The parameters of the utility functions (the vs, δs and λ) are common knowledge, but the citizen does not know π. If the citizen's belief about the probability of an accident, π̂, is below vi/δi (i ≠ 1), then he will prefer action ai to the safe option a1.

It is convenient to define three cost/benefit thresholds. Let γH = v2/δ2, γM = v3/δ3 and γL = (v3 − v2)/(δ3 − δ2). Assume that 0 < γL < γM < γH < 1.3 A higher value of π is a reason to take a safer action. The citizen's decision can be characterized by comparing his belief about the probability of an accident, π̂, with the thresholds.

a∗ = a3  if π̂ < γL
a∗ = a2  if γL < π̂ < γH          (1)
a∗ = a1  if γH < π̂

A more dangerous action is only justified if the probability of an adverse event, π, is below a ratio of incremental benefits to extra potential costs. If there is no official, or if the official cannot regulate or provide information, then the citizen will make his decision using his prior belief about π. Then the citizen's decision is described by (1), where π̂ is equal to µ, the expected value of π. The possible outcomes are described in Table 1.4

2 This approach is chosen for convenience. Nothing of substance would change if the citizen's action could affect π. The important feature is that the optimal action is monotonic in the signal that the official receives.
3 This implies that δ3 > δ2. It will also be assumed that 0 < F(λγL) < F(λγM) < F(λγH) < 1.
4 The conditions considered are phrased in terms of strict inequalities. Equality will hold with zero measure, for the model with generic F or γs.


Table 1
Outcomes without government intervention

    Outcome     Condition
    a = a1      γH < µ
    a = a2      γL < µ < γH
    a = a3      µ < γL
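To make the decision rule concrete, the following minimal Python sketch encodes Eq. (1) and the no-intervention outcome of Table 1. The payoff parameters are borrowed from the example in Section 5 (v2 = 6, δ2 = 8, v3 = 8, δ3 = 16); the prior mean µ used at the end is hypothetical and chosen only for illustration.

# Citizen's decision rule (Eq. 1) and the no-intervention outcome (Table 1).
# Payoff parameters follow the Section 5 example; they satisfy 0 < gamma_L < gamma_M < gamma_H < 1.
v = {1: 0.0, 2: 6.0, 3: 8.0}        # direct payoffs v_i (v_1 = 0)
delta = {1: 0.0, 2: 8.0, 3: 16.0}   # accident disutilities delta_i (delta_1 = 0)

gamma_H = v[2] / delta[2]                         # 0.75: threshold between a1 and a2
gamma_M = v[3] / delta[3]                         # 0.50: threshold between a1 and a3
gamma_L = (v[3] - v[2]) / (delta[3] - delta[2])   # 0.25: threshold between a2 and a3

def citizen_action(belief):
    """Optimal action a* given the citizen's belief about the accident probability."""
    if belief < gamma_L:
        return 3
    elif belief < gamma_H:
        return 2
    return 1

def utility(i, pi):
    """Citizen's expected utility u_C(a_i, pi) = v_i - pi * delta_i."""
    return v[i] - pi * delta[i]

# Without intervention the citizen acts on the prior mean mu (Table 1); mu here is hypothetical.
mu = 0.4
a_star = citizen_action(mu)
print(a_star, utility(a_star, mu))   # a2, with expected utility 6 - 0.4*8 = 2.8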

3. The regulation-only game

Sometimes, for reasons exogenous to the model, information provision may be impractical or ineffective. For example, it may be straightforward to ban dangerous toys, but difficult to provide information that reaches all parents and accurately describes which toys are dangerous. Regulation might be considered in such cases.

A regulation restricts the actions that the citizen may take. It defines an allowable set of actions s. The official might ban all risky activity outright (i.e. s = {a1}), or ban only the most dangerous form of the activity, a3 (i.e. s = {a1, a2}). She might decide not to regulate at all (i.e. s = A = {a1, a2, a3}). But she cannot make risky activity compulsory and she cannot ban only the careful form of the activity.5 Let S0 be the set of possible regulations: S0 = {{a1}, {a1, a2}, A}.

The steps of the regulation-only game are as follows.

1. Nature chooses the probability π from the discrete distribution F.
2. The official observes π. She chooses which set of actions s ∈ S0 to allow.
3. The citizen does not observe π. He chooses an action a ∈ s.
4. Nature chooses whether there is an 'accident'. The probability of an accident is π.

This is a simple signaling game. Like many such games, a variety of perfect Bayesian equilibria (PBE) are possible. In one such equilibrium, the official bans an action if she considers it too dangerous,

s(π) = A           if π < λγL
s(π) = {a1, a2}    if λγL < π < λγH          (2)
s(π) = {a1}        if λγH < π

and the citizen takes the riskiest action that he is permitted to take.

a(s) = a3  if s = A
a(s) = a2  if s = {a1, a2}
a(s) = a1  if s = {a1}

5 The motivation for this assumption is that it does not seem realistic that safety regulators might enforce high risk behavior. If this assumption was relaxed, the principal consequence would be that the analysis of Section 3 would be simplified. Regulators would have more direct control, and a signaling role for regulation would not be necessary.
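The equilibrium strategies just described can be written out directly. The sketch below continues the earlier Python illustration; λ = 1/2 and the thresholds again follow the Section 5 example and are otherwise only illustrative.

# Official's regulation strategy (Eq. 2) and the citizen's best response in the PBE above.
# lambda = 1/2 and the thresholds follow the Section 5 example (gamma_L = 0.25, gamma_H = 0.75).
lam = 0.5
gamma_L, gamma_H = 0.25, 0.75

def regulation(pi):
    """s(pi): the set of actions the official permits (Eq. 2)."""
    if pi < lam * gamma_L:
        return {1, 2, 3}            # s = A: no restriction
    elif pi < lam * gamma_H:
        return {1, 2}               # ban only the most dangerous action a3
    return {1}                      # ban all risky activity

def citizen_response(s):
    """a(s): the citizen takes the riskiest action he is permitted to take."""
    return max(s)

# Composing the two strategies gives the induced outcome a(s(pi)).
for pi in (0.05, 0.2, 0.5):
    print(pi, citizen_response(regulation(pi)))   # a3, a2, a1 respectively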


The outcome is that the citizen's action conforms to the official's wishes.

a = a3  if π < λγL
a = a2  if λγL < π < λγH          (3)
a = a1  if λγH < π

Table 2
PBE outcomes in the regulation-only game (with a pure strategy by the citizen)

    Implementable actions   Outcome                         Condition
1   a3, a2, a1              a = a3 if π < λγL;              Always
                            a = a2 if λγL < π < λγH;
                            a = a1 if λγH < π
2   a3, a1                  a = a3 if π < λγM;              Always
                            a = a1 if λγM < π
3   a2, a1                  a = a2 if π < λγH;              Always
                            a = a1 if λγH < π
4   a1                      a = a1                          Always

This outcome has been entered on the first row of Table 2. Other PBE outcomes in pure strategies are entered on following rows. In each case the official induces the citizen to take the action that she prefers, out of those that she can induce him to take (i.e. of those that are 'implementable').6 The rows differ by which actions she can induce him to take (as identified in the first column).

Table 2 shares only one row with Table 1, which presents possible outcomes when intervention by the official is not possible. The reason is that now the official can ensure that the citizen takes the safe action, a1 (e.g. by banning a2 and a3). And she will want to ensure that he takes that action whenever π is high enough (when it is above λγH). Thus there cannot be an equilibrium in which the citizen would never take a1.

The outcomes in Table 2 are not equally plausible. In particular, the outcome where a1 is always chosen (the fourth row) is somewhat counterintuitive. To get this outcome, the official would impose the most stringent regulation possible when risk was low, and the citizen would choose the safest possible action whenever risky behavior was permitted. Although the counterintuitive7 outcome is consistent with a PBE, it requires the official to play a weakly dominated strategy, and hence is not consistent with a trembling hand perfect (THP) equilibrium. Essentially, a THP equilibrium is one that is robust to small probabilities of 'trembles' or mistakes (Fudenberg & Tirole, 1991). It is not rational to play a weakly dominated strategy if there is even a tiny chance that other players might play each of their strategies. But the imposition of the most stringent regulation possible when risk is low (when π < λγL) is weakly dominated. If the risk is low enough, the official prefers that the citizen take any other action rather than a1. Consequently, the official can do no worse and may do better by weakening the regulation.

Allowing the official to regulate is sometimes beneficial to the citizen, but not always. The official has more information than the citizen does, but she is also more cautious than he would like to be. However, it is possible to identify conditions under which the benefits from the use of the official's information outweigh the costs due to excessive caution. One such condition is proposed in Proposition 1.

Proposition 1. Assume that the default outcome is a = a1 (i.e. that γH < µ). Then, (i) a PBE outcome when the official is empowered to regulate is no worse for the citizen than when she is not so empowered; and (ii) a THP equilibrium outcome when the official is empowered to regulate is strictly better for the citizen than when she is not so empowered.

Proposition 1 states that if the citizen believes that the risk is so high that (without regulation) he would take the safe option, a1, then regulation would be an unambiguous improvement. The reason is that the citizen is at least as tolerant of high risk as the official is. And the official sometimes considers the risk to be low enough to justify an action more dangerous than a1. If she permits the citizen to take dangerous actions when the risk is low, then this provides valuable information to the citizen. He reasons that if an action or product was really dangerous, then it would be banned.8 And if she does restrict dangerous actions, the regulation does not force him to be any more cautious than he would otherwise be. So overall, the citizen is better off.

This will now be demonstrated under the assumption that the citizen adopts a pure strategy. Mixed strategies are dealt with in the appendix. Assume that µ > γH. Absent regulation, the citizen would choose a1 and have expected utility of zero. Under regulation, his expected utility would depend on which equilibrium outcome obtained. To allow for the range of outcomes in Table 2, let I(ai|π) be an indicator function equal to one if ai is chosen and zero otherwise—for a given value of π. The occasions when it equals one will depend on the specific equilibrium outcome. But it will always be nonnegative. Now the expected utility of the citizen can be described with the following expression, no matter which outcome in Table 2 obtains (recall that v1 − πδ1 = 0):

∫_0^{λγM} (v3 − πδ3) I(a3|π) dF + ∫_0^{λγH} (v2 − πδ2) I(a2|π) dF.

This is equal to

∫_0^{λγM} δ3(γM − π) I(a3|π) dF + ∫_0^{λγH} δ2(γH − π) I(a2|π) dF.

Given the assumptions that 0 < λ < 1 and 0 < γL < γM < γH < 1, this expression is (weakly) positive. Therefore regulation raises expected utility.

6 Note that there are more rows in Table 2 (more possible sets of implementable actions) than there are elements in S0. It may be possible to induce the citizen to choose a1 or a3 but not a2, even though {a1, a3} is not a possible regulation.
7 'Intuitive criteria', such as that proposed by Cho and Kreps (1987), would not rule out the counterintuitive outcome here. As the official gets no direct disutility from regulating more stringently, it can be a best response for the citizen to choose a1 irrespective of what he is permitted to do, and a (weak) best response for the official to only permit the citizen to make risky choices when the risk was so high that the citizen would not wish to do so.
8 Survey evidence indicates that a majority of Americans believe that the government is making sure that they are protected against harmful chemicals and dangerous food (e.g. Loader & Hobbs, 1999).
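As a numerical illustration of this demonstration, the following sketch compares the citizen's ex ante expected utility without intervention and under the row-1 outcome of Table 2. The two-point distribution F is hypothetical (chosen so that γH < µ); the payoff parameters again follow the Section 5 example.

# Numerical illustration of Proposition 1: with a prior mean above gamma_H the citizen
# would always choose a1 (expected utility 0), while the row-1 outcome of Table 2
# gives him a strictly positive expected utility.
# F below is a hypothetical two-point distribution; payoffs follow the Section 5 example.
lam = 0.5
v = {1: 0.0, 2: 6.0, 3: 8.0}
delta = {1: 0.0, 2: 8.0, 3: 16.0}
gamma_L, gamma_H = 0.25, 0.75

support = [0.10, 0.95]      # possible accident probabilities
prob = [0.15, 0.85]         # their probabilities (so mu = 0.8225 > gamma_H)

def outcome_row1(pi):
    """Equilibrium action in the row-1 (THP) outcome of Table 2, i.e. Eq. (3)."""
    if pi < lam * gamma_L:
        return 3
    elif pi < lam * gamma_H:
        return 2
    return 1

def u_citizen(i, pi):
    return v[i] - pi * delta[i]

mu = sum(p * x for x, p in zip(support, prob))
eu_no_intervention = max(u_citizen(i, mu) for i in (1, 2, 3))   # = 0: the citizen picks a1
eu_regulation = sum(p * u_citizen(outcome_row1(x), x) for x, p in zip(support, prob))
print(mu, eu_no_intervention, eu_regulation)   # approximately 0.8225, 0.0, 0.96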


Empowering the official to regulate has two effects—increased safety when risk is high and decreased caution when risk is low. The former effect is not always a benefit—at least from the citizen’s perspective. Regulation may force him to be more cautious than he would like to be. But the latter effect is always beneficial. Proposition 1 deals with the case in which only the latter effect operates. Note that in this case, empowering the official to regulate increases the severity of accidents. The presence of a regulator can be beneficial even when she leads to more severe injuries. Regulation and legalization provide information. But they are very indirect ways to ‘send a message’. In some cases, it is practicable for officials to send messages directly—with information campaigns. This alternative will be examined in the following section.

4. The information-only game

Assume that the official does not have the power to regulate—but that she can provide information to the citizen. The amended model is as follows.

1. Nature chooses the probability π from the discrete distribution F.
2. The official observes π. She chooses a message m ∈ M.
3. The citizen observes m but not π. He chooses an action a ∈ A.
4. Nature chooses whether there is an 'accident'. The probability of an accident is π.

There are seven potential equilibria to consider—as there are seven possible combinations of nonempty subsets of A. To keep the discussion brief, only one of these cases will be covered here. The remaining six cases will be dealt with in the appendix (as will outcomes that require the citizen to adopt a mixed strategy). A summary is presented in Table 3.

Table 3
PBE outcomes in the communication-only game (with pure strategies by the citizen)

    Non-empty Ωi's   Outcome                        Conditions for a PBE
1   Ω1, Ω2, Ω3       a = a3 if π < λγL;             γL < E[π|λγL < π < λγH];
                     a = a2 if λγL < π < λγH;       γH < E[π|λγH < π]
                     a = a1 if λγH < π
2   Ω1, Ω2           a = a2 if π < λγH;             γL < E[π|π < λγH];
                     a = a1 if λγH < π              γH < E[π|λγH < π]
3   Ω1, Ω3           a = a3 if π < λγM;             E[π|π < λγM] < γL;
                     a = a1 if λγM < π              γH < E[π|λγM < π]
4   Ω2, Ω3           a = a3 if π < λγL;             γL < E[π|λγL < π];
                     a = a2 if λγL < π              E[π|λγL < π] < γH
5   Ω1               a = a1                         γH < µ
6   Ω2               a = a2                         γL < µ < γH
7   Ω3               a = a3                         µ < γL


One possible case (out of the seven) is that the official can persuade the citizen to take any of the three actions. Let Ωi ⊂ M be the set of messages that would convince the citizen to take action ai:

Ωi = {m ∈ M | a(m) = ai},  ∀i.

Then in the case under consideration, none of Ω1, Ω2 and Ω3 is empty. The official can persuade the citizen to choose whichever action she prefers.

m ∈ Ω3  if π < λγL
m ∈ Ω2  if λγL < π < λγH
m ∈ Ω1  if λγH < π

It is shown in the appendix that if there is a PBE in which the official adopts this strategy, then there is a PBE in which the citizen interprets messages in the following way.

E[π|m] = E[π|π < λγL]            if m ∈ Ω3
E[π|m] = E[π|λγL < π < λγH]      if m ∈ Ω2          (4)
E[π|m] = E[π|λγH < π]            if m ∈ Ω1

On hearing a message m ∈ Ωi, the citizen learns (only) that the official has an incentive to send a message in Ωi. If the citizen adopts this interpretation of messages, then he will maximize his utility by following the official's advice only if certain conditions hold. These conditions follow from Eqs. (1) and (4). For example, he will choose a3 after hearing a message in Ω3 only if E[π|π < λγL] is smaller than γL. However, this is always true (as λ < 1 implies E[π|π < λγL] < λγL < γL), and so no substantive condition is required. He will choose a2 after hearing a message in Ω2 only if E[π|λγL < π < λγH] is between γL and γH. This expectation is always lower than γH, but it is not always higher than γL. So there is a substantive requirement that γL < E[π|λγL < π < λγH]. Finally, he will choose a1 after hearing an m ∈ Ω1 only if E[π|λγH < π] is higher than γH. This condition is also substantive.

The two substantive conditions are entered in column three of the first row of Table 3. The outcome—described in Eq. (3)—is entered in the second column. Other PBE outcomes are described on following rows. In each case, the conditions for Bayesian perfection can be identified with reasoning similar to that used for the first row. In general, let µG(ai) be the expected value of π, conditional on the information that the official would like the citizen to choose ai, rather than any other implementable action.

µG(ai) = E[π | uG(ai, π) ≥ uG(aj, π), ∀aj ∈ A s.t. Ωj ≠ ∅],  ∀ai ∈ A s.t. Ωi ≠ ∅.

The citizen takes action ai when he makes the assessment that E[π|m] = µG(ai).

uC(ai, µG(ai)) ≥ uC(aj, µG(ai)),  ∀aj ∈ A.

Specific versions of this expression are presented in the right hand column of Table 3. Information is successfully communicated in most but not all of these equilibria. The possibility of communication when interests are somewhat but not fully aligned is a standard result (e.g. Farrell, 1995). Kim and Koh (1997) have suggested the application to safety.
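The sketch below shows how these conditions can be checked for a given distribution. The three-point distribution is hypothetical; the thresholds follow the Section 5 parameters. It computes the conditional expectations in Eq. (4) and tests the two substantive conditions from the first row of Table 3.

# Check the substantive PBE conditions for the fully communicative case (row 1 of Table 3):
#   gamma_L < E[pi | lam*gamma_L < pi < lam*gamma_H]  and  gamma_H < E[pi | lam*gamma_H < pi].
# The discrete distribution below is hypothetical; thresholds follow the Section 5 parameters.
lam, gamma_L, gamma_H = 0.5, 0.25, 0.75

support = [0.05, 0.30, 0.90]
prob = [0.30, 0.30, 0.40]

def cond_mean(lo, hi=float("inf")):
    """E[pi | lo < pi < hi] under the discrete distribution."""
    num = sum(p * x for x, p in zip(support, prob) if lo < x < hi)
    den = sum(p for x, p in zip(support, prob) if lo < x < hi)
    return num / den

mid = cond_mean(lam * gamma_L, lam * gamma_H)   # belief after a message in Omega_2
high = cond_mean(lam * gamma_H)                 # belief after a message in Omega_1
print(mid, high, gamma_L < mid, gamma_H < high)   # approximately 0.30, 0.90, True, True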


As in Section 3, the equilibrium (PBE) is not generally unique. And as in Section 3, we can nevertheless reach a conclusion about when it is beneficial to the citizen to permit the official to intervene. Any PBE outcome with communication will be at least as good for the citizen as the outcome without either communication or regulation. The reason is that it is always possible for the citizen to ignore all advice from the official, and to rely on his own prior beliefs instead. If the official is so cautious that the citizen would be worse off by following her advice, then he is free to ignore that advice. Therefore the availability of advice cannot make the citizen worse off, ex ante.

Proposition 2. If the official is empowered to provide information (assuming a PBE outcome) then the citizen gets at least as much expected utility (ex ante) as in the outcome when neither regulation nor information provision is possible.

Demonstration of Proposition 2 is straightforward. In any PBE, the citizen chooses a strategy that gives him his maximum expected payoff given his beliefs, and these beliefs are consistent with the strategy chosen by the official. So his payoff must be at least as high as if he played the strategy that he would have chosen if he did not receive any information. For example, if the citizen plays the (pure) strategy a(m), then his expected utility, uC(a(m); E[π|m]), must be at least as high as uC(ai; E[π|m]). And what is true for every message is true on average. Therefore, his expected utility is at least as high as in the default outcome—when he always chooses ai.

There is no need for further refinements of equilibrium if we only wish to show that a program of government warnings does no harm. However, we do need more precise predictions if we are to compare warnings with regulation. We will make such a comparison in the following section, and so we now turn to refinements of equilibria in games of cheap talk.

It is well known that some PBEs are implausible in games of cheap talk. For example, there is generally an equilibrium in which communication is ineffective (a "babbling equilibrium")—even when there is an incentive to tell the truth. However, when the parties have the opportunity to communicate in a rich common language, such a failure to communicate seems implausible (Farrell, 1993). Communication can be frustrated by lack of credibility, but not by lack of comprehension, when a PBE is strongly announcement-proof (SAP) (Mathews, Okuno-Fujiwara, & Postlewaite, 1991). To be SAP,9 an equilibrium must be robust to deviating messages. Essentially, it should be impossible to come up with a plan for which messages to send on which occasions, whereby: (i) the official has an incentive to stick to the plan, irrespective of what π is; and (ii) the information revealed will sometimes be sufficient to induce a rational citizen to change his (alleged equilibrium) strategy. A more formal account is contained in the appendix. But the essence can be explained informally. Consider a PBE in which only Ω2 is empty (as in the third row of Table 3).

9 If an equilibrium satisfies this requirement then it is also announcement-proof (Mathews et al., 1991) and neologism-proof (Farrell, 1993).


Table 4
SAP equilibrium outcomes in the communication-only game

    Non-empty Ωi's   Outcome                        Conditions for a SAP PBE
1   Ω1, Ω2, Ω3       a = a3 if π < λγL;             γL < E[π|λγL < π < λγH];
                     a = a2 if λγL < π < λγH;       γH < E[π|λγH < π]
                     a = a1 if λγH < π
2   Ω1, Ω2           a = a2 if π < λγH;             Never
                     a = a1 if λγH < π
3   Ω1, Ω3           a = a3 if π < λγM;             E[π|λγL < π < λγH] < γL;
                     a = a1 if λγM < π              γH < E[π|λγM < π]
4   Ω2, Ω3           a = a3 if π < λγL;             γL < E[π|λγL < π];
                     a = a2 if λγL < π              E[π|λγH < π] < γH
5   Ω1               a = a1                         Never
6   Ω2               a = a2                         Never
7   Ω3               a = a3                         E[π|λγL < π] < γL;
                                                    E[π|λγM < π] < γM

This type of PBE is not always SAP. The official would like the citizen to change his action (i.e. to take an apparently non-equilibrium action) whenever π is between λγL and λγH. She would always want the citizen to choose a2 when π is in this range, but would prefer the status quo otherwise. Now imagine that the official deviates from the proposed equilibrium by trying to persuade the citizen to choose a2. The citizen knows that the official would only make such an attempt when π is between λγL and λγH. We know from Eq. (1) that the citizen would choose a2 upon learning (only) that λγL < π < λγH, if E[π|λγL < π < λγH] is between γL and γH. So the proposed PBE is only robust to the proposed deviation when E[π|λγL < π < λγH] is less than γL. This condition has been added to the third row of Table 4. Other rows give the requirements for other types of PBE to be SAP. Furthermore, it is shown in the appendix that (generically) the citizen does not play a mixed strategy in a SAP PBE.

Note that Ω3 is never empty in a SAP PBE. The official can always persuade the citizen that the risk is low enough to choose a3. This is because the citizen knows that when the 'excessively' cautious official thinks a3 is safe enough, then it will be safe enough according to his own preferences.

5. The regulation-plus-information game

Now assume that the official has access to both policy instruments. She can provide information and she can also regulate. The new model is as follows.

1. Nature chooses the probability π from the discrete distribution F.
2. The official observes π. She chooses a message m ∈ M. She also chooses which set of actions s ∈ S0 to allow.
3. The citizen observes m, but not π. He chooses an action a ∈ s.
4. Nature chooses whether there is an 'accident'. The probability of an accident is π.

Any PBE outcome of the regulation-only game is also a PBE outcome of this game. The addition of cheap talk does not rule out any equilibria, because a babbling equilibrium is always possible. But now there is only one equilibrium outcome that is also SAP. This is the outcome described in Eq. (3), in which the official is always able to get the citizen to comply with her wishes. By allowing for cheap talk, and requiring equilibria to be SAP, we rule out the other possibilities.

In the regulation-only game of Section 3, it was possible to have a PBE in which the citizen chose cautious actions even when the risk was low. The official might not be able to successfully communicate the idea that risk is low by permitting a wider range of activities. Although regulation can be used to provide information, it is not a 'rich language', in the sense of having indefinitely many messages with widely agreed interpretations. As a result, successful communication through this medium is not guaranteed. In the present section, however, regulation is combined with communication in a rich language. Citizens know what warnings mean. So it is reasonable to think that the official can convince the citizen that the risk is low (π < λγL) when she permits the dangerous activity, a3.

By way of illustration, imagine that an alternative PBE outcome obtained—say that given in row 2 of Table 2. Then the official would like the citizen to change his behavior whenever π is between λγL and λγH. When π is in this range, the official can introduce regulation banning a3 (i.e. s = {a1, a2}) and send a message recommending a2. As E[π|λγL < π < λγH] is less than γH, the citizen would choose a2 in preference to a1. Hence the proposed alternative outcome would not be SAP. Similar arguments rule out other outcomes in Table 2—except the outcome in row 1.

In Section 4, we saw that the citizen could do no worse (ex ante) with information than without it. It is tempting to conclude that he would also be no worse off with information alone than with both regulation and information. He can ignore information that he considers unreliable, but he cannot ignore regulation. But this conclusion would be incorrect. The flaw in the argument is that the official will not passively accept her advice being ignored. For example, if she cannot regulate and she predicts that moderate warnings will be ignored, then she may issue a stronger warning when risks are moderate. From the point of view of the citizen, this may degrade the information service. The consequence is that sometimes the citizen is better off with regulation than with an information service alone.

Imagine that information campaigns are possible but regulation is not. Consider the type of equilibrium in which the citizen sometimes chooses a1 and sometimes a3 but never a2. According to row 3 of Table 4, an equilibrium of this type will be the unique SAP PBE10 when

E[π|λγL < π < λγH] < γL   and   γH < E[π|λγM < π].          (5)

10 It will also be the unique neologism-proof PBE. There may, however, also be a noncommunicative announcement-proof PBE. But there is no such noncommunicative equilibrium in the example presented below.


In such an equilibrium, the citizen's ex ante expected utility is

∫_0^{λγM} (v3 − πδ3) dF.          (6)

However, if the official is also able to regulate, the outcome is given by Eq. (3) (i.e. by row 1 of Table 4). The citizen's expected utility is

∫_0^{λγL} (v3 − πδ3) dF + ∫_{λγL}^{λγH} (v2 − πδ2) dF.

This is higher than Eq. (6) if

∫_{λγL}^{λγM} (v3 − πδ3) dF < ∫_{λγL}^{λγH} (v2 − πδ2) dF.          (7)

If (5) and (7) are both satisfied, then it is better for the citizen if the official can regulate than if she can only provide warnings. Such an example follows.

Example. v2 = 6, δ2 = 8, v3 = 8, δ3 = 16, λ = 1/2, and

F(π) = 0.1    if 0 ≤ π < 0.15
F(π) = 0.25   if 0.15 ≤ π < 0.35
F(π) = 0.35   if 0.35 ≤ π < 0.95
F(π) = 1      if 0.95 ≤ π

This example establishes that the ability to regulate can sometimes be beneficial, even when the official can issue safety warnings. However, this will not always be the case. The following proposition provides some conditions under which we can be confident that giving the official the power to regulate cannot be an improvement. It is proved in the appendix.

Proposition 3. Assume that the official can send a message to the citizen. The outcome if she can also regulate will be no better (ex ante) for the citizen if either of the following conditions is true: (i) γL < E[π|λγL < π < λγH], or (ii) E[π|λγH < π] < γM.
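The example can be checked directly. The following sketch reads the step-function F as a discrete distribution with point masses at π = 0, 0.15, 0.35 and 0.95 (probabilities 0.10, 0.15, 0.10 and 0.65) and verifies that conditions (5) and (7) both hold. With these parameters γL = 0.25, γM = 0.5 and γH = 0.75, so the relevant cutoffs λγL, λγM and λγH are 0.125, 0.25 and 0.375.

# Verify that the example satisfies conditions (5) and (7).
# Reading the step-function CDF as a discrete distribution gives point masses
# at pi = 0, 0.15, 0.35 and 0.95 with probabilities 0.10, 0.15, 0.10 and 0.65.
v2, d2, v3, d3, lam = 6.0, 8.0, 8.0, 16.0, 0.5
support = [0.0, 0.15, 0.35, 0.95]
prob = [0.10, 0.15, 0.10, 0.65]

gamma_L = (v3 - v2) / (d3 - d2)   # 0.25
gamma_M = v3 / d3                 # 0.50
gamma_H = v2 / d2                 # 0.75

def cond_mean(lo, hi=float("inf")):
    """E[pi | lo < pi < hi] under the discrete distribution."""
    num = sum(p * x for x, p in zip(support, prob) if lo < x < hi)
    den = sum(p for x, p in zip(support, prob) if lo < x < hi)
    return num / den

# Condition (5): E[pi | lam*gL < pi < lam*gH] < gL  and  gH < E[pi | lam*gM < pi].
cond5 = (cond_mean(lam * gamma_L, lam * gamma_H) < gamma_L
         and gamma_H < cond_mean(lam * gamma_M))

# Condition (7): the integral of (v3 - pi*d3) over (lam*gL, lam*gM) is smaller than
# the integral of (v2 - pi*d2) over (lam*gL, lam*gH).
lhs = sum(p * (v3 - x * d3) for x, p in zip(support, prob) if lam * gamma_L < x < lam * gamma_M)
rhs = sum(p * (v2 - x * d2) for x, p in zip(support, prob) if lam * gamma_L < x < lam * gamma_H)
print(cond5, lhs, rhs, lhs < rhs)   # expected: True, 0.84, 1.04, True (up to floating point)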

6. Discussion and conclusions

The main conclusions are that if government officials are better informed but more cautious than citizens, then

(i) empowering the official to regulate may be beneficial, especially if the citizen assesses the risk as high;
(ii) empowering the official to provide information will be beneficial; and
(iii) empowering the official to regulate as well as provide information is often worse than only empowering her to provide information—but there are cases in which it is beneficial.

These conclusions are derived under specific assumptions. And these assumptions neglect some factors that are important to the value of regulation and information. In the remainder of this section, some of these factors will be briefly discussed.

The first factor is that citizens may be subject to failures of judgement, rationality or volition. Citizens may underestimate risks associated with non-salient events, or be overoptimistic about significant risks (Jolls, Sunstein, & Thaler, 1998). As a result, regulation may be justified even when consumers are well-informed. However, these failures may also have political expression (Jolls et al., 1998; Viscusi, 1998). Regulatory decisions often appear to fall short of full rationality (e.g. Antle, 1995; Nichols & Zeckhauser, 1986; Viscusi, 1998). Alternatively, government officials may be less susceptible to these failures than citizens are (New, 1999). But unless officials are subject to failures additional to those made by citizens, there will still be an informational role for regulations or warnings.

A second factor is that citizens are not all identical. For example, they may differ according to their exposure to risk. Regulations that benefit one type of citizen may be detrimental to others. However, heterogeneity can be accommodated with a relatively minor adjustment to the models presented in the preceding sections. Imagine an alternative story, in which there are two kinds of citizens, but only two actions. This new story can be represented in the models described above by reinterpreting the actions. Action a1 becomes both types of agent choosing the safe option, action a2 becomes only one type taking this option, and a3 becomes neither taking it. A redefinition of the vs and δs is also required. The equilibria of the information-only game are exactly the same under the heterogeneity interpretation as with the three-action interpretation. The equilibria with regulation, however, are a little different. The reason is that the regulator has to impose the same rules on everyone—rather than tailoring them for the different types of citizen. This means that the official is not so effective in imposing her preferences. Nevertheless, the propositions continue to hold, although a modification is necessary to the example.

A third factor is that regulation and provision of information both require expenditure of resources and effort by officials—and possibly by citizens.11 Consequently, the value of these interventions is likely to have been overstated in the results from the models. However, costs to the official can also help with credibility. The official can credibly signal high risks if she must incur costs to regulate or provide information.

A fourth factor is that citizens may respond to intervention in undesirable ways. For example, motorists may drive faster when they are compelled to use safety belts—leading to a higher injury rate. Some behavioral responses can be accommodated without changing the models. Recall from Section 3 that a moderate regulation can increase the severity of accidents if the default action was extreme care. But this is not evidence that the regulation is harmful.

11 In addition, regulation will not always be complied with, and warnings will not always be heard.
However, there are other behavioral responses that are not well described in the models. For example, producers may discontinue product lines in response to regulation (Viscusi, 1984) or information provision (Loader & Hobbs, 1999)—reducing product diversity. Officials will not generally have the power to directly require that a wide range of products is available. Although intervention will not have unintended consequences in a perfect Bayesian equilibrium, some consequences will reduce the value of intervention.

A fifth factor is that the regulatory framework may affect incentives to collect information about product safety. The official will have more reason to collect this kind of information when she has more ability to influence the citizen's behavior. The citizen will have more reason to collect information when his behavior is not heavily regulated. In any case, he is unlikely to collect an efficiently high level of information (Viscusi, 1984).

Finally, the objectives of officials are treated as exogenous in this paper, although these objectives can be influenced by the design of incentives for public-service employees, and by the selection of employees into the public service. Consequently, it may be possible to increase the congruence between officials' objectives and citizens' preferences. In the models above, increasing λ toward one will always increase citizen welfare if the official can regulate. And it will usually do so if the official can provide information. However, this does not account for the importance of the official's motivation for the effort she expends in her duties. It seems likely that an effective and motivated agency would place a high emphasis on increasing safety (Tirole, 1994). For example, rewarding officials for reducing the injury rate may induce a greater concern with safety (a lower value of λ) but also motivate the official to exert more effort in collecting and disseminating information.

Although the preceding sections neglected these six factors, it appears that the effects suggested in those sections are plausible even when the factors are present. In evaluating proposals for safety regulation for a particular activity or commodity, many factors might be considered. This paper provides a formal account of some of these factors—some of which have been suggested before, and some of which have not previously been recognized.

Appendix A. Preliminaries

There are two players, a government official G and a citizen C. G chooses a strategy σ: Π → M, and C chooses α: M → A. Together, these strategies determine an outcome o: Π → A, where o(a|π) = ∫ α(a|m)σ(m|π) dm. A pure strategy for C is a mapping a: M → A. For any s ∈ S0, define BG(ai, s) = {π ∈ [0, 1] | ai ∈ argmax_{a∈s} uG(a, π)}, and BC(ai, s) analogously. Assume that 0 < F(λγL) < F(λγM) < F(λγH) < 1. Finally, let γij be (vj − vi)/(δj − δi).

Appendix B. Lemma

Assume that C plays a pure strategy. Consider the following two conditions.

(i) ∃σ(·|·), a(·) s.t. ∀m, ai: {ai = a(m) → E[π|m] ∈ BC(ai, A)} and {σ(m|π) > 0 → π ∈ BG(a(m), s)};
(ii) ∀ai ∈ s: E[π|π ∈ BG(ai, s)] ∈ BC(ai, A);

where s ⊆ A. If (i) is true then (ii) is also true; and if (ii) is true then (i) is generically true.

Proof. Assume that (i) is true. Then by Bayes' rule, ∃σ(·|·), a(·) s.t.

[∫_{BG(ai,s)} π σ(m|π) dF] / [∫_{BG(ai,s)} σ(m|π) dF] ∈ BC(ai, A),  ∀m ∈ M, ∀ai ∈ s.

But as BC(ai, A) is an interval in [0, 1],

∫_{BG(ai,s)} π σ(m|π) dF > (<) min (max) BC(ai, A) × ∫_{BG(ai,s)} σ(m|π) dF,  ∀m ∈ M, ∀ai ∈ s,

or, integrating through with respect to m,

∫_{BG(ai,s)} π dF > (<) min (max) BC(ai, A) × ∫_{BG(ai,s)} dF,  ∀ai ∈ s,

and so (ii) follows.

Assume (ii). Consider a strategy of the following form:

σ(m|π) = θ(m)  if π ∈ BG(a(m), s)
σ(m|π) = 0     otherwise.

As a consequence, σ(m|π) > 0 → π ∈ BG(a(m), s) is immediately satisfied. Furthermore, it will be generically true that ∀s ⊆ A, ∀{ai, aj} ∈ s² s.t. aj ≠ ai, BG(ai, s) ∩ BG(aj, s) = ∅, and so if π ∈ BG(a(m), s) then

E[π|m] = [θ(m) ∫_{BG(ai,s)} π dF] / [θ(m) ∫_{BG(ai,s)} dF] = E[π|BG(ai, s)],  ∀ai ∈ s.

So (i) follows. □

Appendix C. PBE in the information-only game

Assume that C plays the pure strategy a(·). Let s∗ = {ai ∈ A | ∃m ∈ M s.t. ai = a(m)}. There is a PBE in which C plays a pure strategy if there are strategies σ(·|·) and a(·) s.t.

(a) ∀ai ∈ s∗, ∀m ∈ M: [ai = a(m) → E[π|m] ∈ BC(ai, A)]   (C plays a best response on the equilibrium path);
(b) ∀ai ∈ s∗, ∃π ∈ conv(Π) s.t. π ∈ BC(ai, A)   (C plays a best response off the equilibrium path);
(c) ∀m, π: [σ(m|π) > 0 → π ∈ BG(a(m), s∗)]   (G plays a best response).

Condition (a) requires C to play a best response on the equilibrium path—in accordance with Bayesian rationality. Condition (b) implies that he plays a best response according to some coherent belief, off the equilibrium path. Condition (c) requires G to play a best response. Assume that (b) is satisfied for all ai ∈ A. Then by the lemma, a PBE can generically be characterized with the conditions that E[π|π ∈ BG(ai, s∗)] ∈ BC(ai, A), ∀ai ∈ s∗.

Appendix D. SAP in the information-only game

Let sD be the support of the citizen's strategy following a deviation by the official. Consider a PBE in which C plays a pure strategy. Generically, there is no weakly credible announcement that would induce the citizen to deviate by adopting a mixed strategy. To see this, note that a strategy that mixed over an element in s∗ (say ai) and an element in sD (say aD) would be proposed only if π ∈ BG(aD, {ai, aD}). But this would only induce C to mix if E[π|π ∈ BG(aD, {ai, aD})] was exactly equal to the relevant threshold. But such an equality will not hold generically. Furthermore, a strategy that mixed over two elements in sD would (for a given value of π) by assumption be preferred to the status quo by G for some mixing probabilities, and the status quo would be preferred for other mixing probabilities. This disqualifies the proposal from being weakly credible (Mathews et al., 1991).

If there is a weakly credible announcement that induces the citizen to adopt a pure strategy, then there is a set ΠD ⊆ Π and a deviating strategy σD(·|·) s.t.

(i) ∀π ∈ ΠD: π ∈ ∪_{ai∈sD} BG(ai, sD ∪ s∗);
(ii) ∀π ∈ Π\ΠD: π ∉ ∪_{ai∈sD} BG(ai, sD ∪ s∗);
(iii) ∀m, ∀ai ∈ sD, ∀π ∈ BG(ai, sD ∪ s∗): [σD(m|π) > 0 → π ∈ BC(ai, A)].

By the lemma, E[π|π ∈ BG(ai, s∗ ∪ sD)] ∈ BC(ai, A), ∀ai ∈ sD. The conditions in Table 4 follow.

Appendix E. SAP in the regulation-plus-information game

Eq. (3) gives the only outcome that is consistent with a SAP PBE. To see this, assume the contrary. Then there is some a ∈ A s.t. there is no choice for G in M × S0 that leads to C choosing a for sure. Let s(a) be the smallest member of S0 that contains a, and consider a message m with the literal meaning that "π ∈ BG(a, A)". Such a deviation will amount to a credible deviation if E[π|π ∈ BG(a, A)] ∈ BC(a, s(a)). But as 0 < λ < 1, this will be true.

Appendix F. Mixed strategies for the citizen

Let G's action set be N—i.e. S, M or M × S depending on assumptions. Let Nij ⊆ N be the actions that lead C to mix over ai and aj. Then E[π|n] = γij for all n ∈ Nij, i ≠ j. Assume G chooses such an n whenever π ∈ Πij ⊆ Π. Generically, G will always strictly prefer one of ai or aj over the other—ai if π < λγij and aj if λγij < π. Therefore, if C has different mixing probabilities in response to different n in Nij, then G will choose different n in the two cases, and so E[π|n] will not be equal for all of them. But this violates the requirement that E[π|n] = γij for all n ∈ Nij. Therefore, the mixing probabilities are common across n ∈ Nij, and so

E[π|n] = E[π|π ∈ Πij] = γij,  ∀n ∈ Nij.

Generically, such a condition will not be true if the bounds of Πij are both thresholds. The only remaining possibilities are that Π12 shares a boundary with Π33 or that Π23 shares a boundary with Π11. But it is easy to verify that neither possibility is consistent with a SAP PBE. For example, in the latter case Π23 = {π | λγL < π < λt} for some t between γM and γH. It follows that E[π|λγL < π < λγH] > γH, and so G can make a weakly credible announcement when λγL < π < λγH.

Moreover, if N = S and so there is no communication, then in a PBE outcome C will take a1 for sure whenever π > λγ12. So if C ever mixes, it will only be when π < λγ12, with E[π|s] < γ12. But this means that C will not mix between a1 and a2. Therefore, if he mixes at all, it will be between a2 and a3, and so it can only be when s = A (and so there is no s ∈ S0 for which C will play a3 for sure). G prefers such a mixing to a1 when π < λt, for some t between γ13 and γ12. So C's ex ante expected utility is

∫_0^{λt} (δ3 ρ(a3|A)(γ13 − π) + δ2 ρ(a2|A)(γ23 − π)) dF.

But as E[π|A] = E[π|π < λt] = γ23 when C mixes over a2 and a3, this expression is positive, and so Proposition 1 holds for mixed strategies.

Appendix G. Proposition 3

(i) Assume that E[π|λγ12 < π] < γ13. First imagine that E[π|λγ23 < π] < γ23 also holds. Then by Table 4 and Appendix F, row 7 describes the only SAP PBE outcome in the information-only game. But row 7 entails higher expected utility for C than row 1 whenever

∫_{λγ23}^{1} (v3 − πδ3) dF > ∫_{λγ23}^{λγ12} (v2 − πδ2) dF.

And this is satisfied under our two assumptions. Second, imagine that E[π|λγ23 < π] < γ23 does not hold. Then row 4 describes the outcome in the information-only game, and this entails higher expected utility for C than row 1 whenever

∫_{λγ12}^{1} (v2 − πδ2) dF > 0,          (∗)

but this is implied by our first assumption.

(ii) Assume that E[π|λγ23 < π < λγ12] > γ23. Then row 1 gives the outcome if E[π|λγ12 < π] > γ12, and row 4 gives the outcome otherwise. With the row 1 outcome, regulation makes no difference. With row 4, C's expected utility is higher than with row 1 if (∗) holds. But this condition follows from E[π|λγ12 < π] < γ12.


References

Antle, J. (1995). Choice and efficiency in food safety policy. AEI Press.
Cho, I.-K., & Kreps, D. M. (1987). Signaling games and stable equilibria. Quarterly Journal of Economics, 102, 179–221.
Farrell, J. (1993). Meaning and credibility in cheap talk games. Games and Economic Behavior, 5, 514–531.
Farrell, J. (1995). Talk is cheap. American Economic Review, 85, 186–190.
Fudenberg, D., & Tirole, J. (1991). Game theory. Cambridge, MA: MIT Press.
Jolls, C., Sunstein, C., & Thaler, R. (1998). A behavioral approach to law and economics. Stanford Law Review, 50, 1471–1550.
Kelman, S. (1980). Regulating America, regulating Sweden. Cambridge, MA: MIT Press.
Kim, J.-Y., & Koh, D.-H. (1997). Preannouncement as a deterrence in a model of safety regulation. Seoul Journal of Economics, 10, 41–55.
Loader, R., & Hobbs, J. (1999). Strategic responses to food safety legislation. Food Policy, 24, 685–706.
Mathews, S., Okuno-Fujiwara, M., & Postlewaite, A. (1991). Refining cheap talk equilibria. Journal of Economic Theory, 55, 247–273.
New, B. (1999). Paternalism and public policy. Economics and Philosophy, 15, 63–83.
Nichols, A., & Zeckhauser, R. (1986). The perils of prudence: How conservative risk assessments distort regulation. Regulation, November/December, 13–24.
Tirole, J. (1994). The internal organization of government. Oxford Economic Papers, 46, 1–29.
Viscusi, W. K. (1984). Regulating consumer product safety. AEI Press.
Viscusi, W. K. (1998). Rational risk policy. Oxford: Oxford University Press.