European Journal of Operational Research 245 (2015) 226–235
Decision Support
Quantifying the social welfare loss in moral hazard models
Mostafa Nasri, Fabian Bastin, Patrice Marcotte
Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Montréal, QC, Canada
Article info
Article history: Received 22 April 2014; Accepted 10 February 2015; Available online 19 February 2015.
Keywords: Monotone likelihood ratio; Moral hazard model; Social welfare
Abstract
The main aim of this paper is to measure the social welfare loss for a continuous moral hazard model when a minimal set of assumptions is fulfilled. Using a new approach, we are able to reproduce the results of Balmaceda, Balseiro, Correa, and Stier-Moses (2010) pertaining to the social welfare loss for both discrete and continuous models. Previous studies rely on the validity of the first-order approach at the expense of strong assumptions, in particular the convexity of the distribution function condition, whereas we do not make such a restrictive assumption in our developments. In addition, we obtain new bounds for the social welfare loss that are both tight and easy to compute.
1. Introduction

The principal-agent model is an asymmetric-information game of the Stackelberg type in which the principal (the leader) hires an agent (the follower) to perform a task on its behalf. In the present paper, we consider the 'moral hazard' version. The aim of the principal is to devise an incentive scheme with respect to the possible outcomes, striking the right balance between wages (linked to outcomes) and revenues, knowing that the agent maximizes its own utility, expressed as the difference between the wage and the cost associated with a given effort level. Throughout, it is assumed that the probability of observing a high outcome is positively correlated with the effort level. For more details concerning this classical framework, the reader is referred to Laffont and Martimort (2001), Liu (2008) and van Ackere (1993). Moreover, specific applications of the principal-agent model have been widely discussed in Laffont and Martimort (2001), Su (2005) and van Ackere (1993). Since the principal-agent game is non-cooperative, a question that arises naturally is to assess the loss of efficiency due to this behavior. Such an analysis has been performed in diverse areas such as game theory, computer science, transportation and economics; see for instance Acemoglu, Bimpikis, and Ozdaglar (2009), Acemoglu and Ozdaglar (2007), Koutsoupias and Papadimitriou (2009), Moulin (2008, 2009), Nisan, Roughgarden, Tardos, and Vazirani (2007), as well as the references therein. In the present framework, the authors in Balmaceda, Balseiro, Correa, and Stier-Moses (2010) based their analysis on the worst-case social welfare loss, defined as the largest possible ratio between the social welfare achieved when
both agents cooperate and the non-cooperative social welfare (see Definition 2.1(vi)). They obtained tight upper bounds and showed that the worst-case social welfare loss (the price of anarchy) could be as large as the number of efforts. These bounds were obtained under a number of restrictive assumptions. The first one, the monotone likelihood ratio (MLR from now on; see Definition 2.1(vii)), can be motivated by the natural intuition that more effort produces a higher outcome, and implies that payments must increase with the outcome. The authors also require that the social welfare (see Definition 2.1(iii)) be non-decreasing under increasing efforts, or that the distribution functions of the outcome random variable be convex (CDFC from now on; see Definition 2.1(viii)). Note that this condition has been invoked by Liu (2008) and Rogerson (1985) to justify the first-order approach (FOA from now on; see Definition 2.1(ix)). Although FOA significantly simplifies the study of the principal-agent model, CDFC is often viewed as a bad expedient to ensure the validity of FOA, as very few distributions satisfy both MLR and CDFC (Jewitt, 1988). Moreover, CDFC does not comply with the standard diminishing marginal returns intuition (Conlon, 2009; Rogerson, 1985). This led researchers to investigate weaker assumptions that would still imply FOA. In particular, Jewitt (1988) retained MLR but relaxed CDFC, while ensuring FOA and complying with the diminishing marginal returns conditions, at the price of overly technical assumptions (Conlon, 2009). The main aim of this paper is to weaken the conditions under which the social welfare loss in the moral hazard model is obtained. In particular, our analysis does not require the restrictive CDFC. Moreover, we show that our new bounds for the social welfare loss are both tight and easy to compute, and we prove that a 'benevolent' principal may force a socially-optimal outcome. The paper is organized as follows. In Section 2, we formulate the moral hazard model, together with some of its properties, and introduce key definitions.
In Section 3, the main theoretical results are developed and proved. Section 4 summarizes our findings and opens avenues for further research. Finally, the notation used throughout the paper is displayed in Appendix A.

2. Formulation

The moral hazard model, a particular instance of a Stackelberg game, involves two players. At the upper level, a risk-neutral principal sets wages associated with the outcomes of a task performed by an agent, with the aim of maximizing its expected profit, expressed as the difference between the 'value' of the random outcome and the associated wage. At the lower level, the risk-neutral agent selects the effort level that maximizes its own expected profit, expressed as the difference between the wage and the cost of the associated effort level. The model rests on random variables that relate the probability of observing an outcome to a given effort level. It is mathematically expressed as
max_{e∈E, w}  (y − w) · π(e)
s.t.  w · π(e) − c(e) ≥ 0,                                  (IR)
      e ∈ arg max_{f∈E} { w · π(f) − c(f) },                 (IC)
      w ≥ 0,                                                 (LL)
(1)

where E denotes the set of efforts, c : E → [0, +∞) is the cost function, y is the revenue vector, and w is the wage schedule. In the above, the acronym (IR) stands for individual rationality (the agent will not take part in a contract with a negative expected revenue), (IC) for incentive compatibility, and (LL) for limited liability. Vector quantities such as π(e), y and w are distinguished from scalars by the context, and the dot product of two vectors u = (u_1, . . . , u_n) ∈ R^n and v = (v_1, . . . , v_n) ∈ R^n is denoted and defined as u · v = Σ_{i=1}^n u_i v_i. To each outcome index s ∈ S = {1, . . . , S} we associate a revenue y_s, whose (positive) probability of occurrence is π_s(e). Without loss of generality, we assume that the outcomes are sorted in increasing order, i.e., y_s ≤ y_t when s ≤ t. The minimum payment linear program with respect to an effort e, whose optimal value is denoted by z(e), is defined by the semi-infinite programming problem

MPLP(e):   z(e) = min_{w∈R^S}  w · π(e)
           s.t.  w · π(e) − c(e) ≥ 0,                                  (IR)
                 w · π(e) − c(e) ≥ w · π(f) − c(f)   ∀f ∈ E,            (IC)
                 w ≥ 0.                                                 (LL)
(2)

We introduce the following concepts that will be used in this paper.

Definition 2.1. Consider the model (1).

(i) We say that the principal implements an effort e ∈ E if there exists a wage schedule w consistent with the agent choosing e.
(ii) The set of optimal efforts for the principal is E^P = arg max_{e∈E} {u^P(e)}, where u^P(e) is called the principal's maximum expected utility when effort e is implemented and is defined as
u^P(e) = y · π(e) − z(e).   (3)
(iii) The social welfare u^{SW}(e) associated with an effort e and a revenue y is the joint profit (revenue minus costs) of the principal and the agent, expressed as u^{SW}(e) = y · π(e) − c(e).
(iv) The optimal social welfare is u^{SO} = max_{e∈E} {u^{SW}(e)}.
(v) The social welfare loss is
ρ(π, y, c, E) = u^{SO} / inf_{e∈E^P} {u^{SW}(e)} = sup_{e∈E^P} u^{SO} / u^{SW}(e).   (4)
(vi) The worst-case social welfare loss is the quantity
sup_{π, y, c} ρ(π, y, c, E),   (5)
where the supremum is taken over a subset of all instances satisfying certain properties, to be described later.
(vii) The monotone likelihood ratio property (MLR) is satisfied on E if
π_t(f)/π_t(e) ≥ π_s(f)/π_s(e)   ∀s < t and e < f.
(viii) The convexity of the distribution function condition (CDFC) is satisfied on E if
Σ_{i=1}^{j} π_i''(e) ≥ 0   ∀e ∈ E, ∀j ∈ S.
(ix) The first-order approach (FOA) is valid if any solution of
min_{w∈R^S}  Σ_{i=1}^{S} π_i(e) w_i
s.t.  Σ_{i=1}^{S} π_i(e) w_i ≥ c(e),
      Σ_{i=1}^{S} π_i'(e) w_i = c'(e),
      w ≥ 0
solves (2).

Note that, using (3), one can reformulate the principal's problem (1) as max_{e∈E} {u^P(e)}.
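As an illustrative aside (not part of the original paper), the minimum payment program MPLP(e) in (2) can be approximated numerically by discretizing the effort set E and solving the resulting finite linear program. In the minimal sketch below, the two-outcome distribution, cost function and grid are hypothetical placeholders, not data from the paper.

# Minimal sketch (hypothetical instance): approximate z(e) of MPLP(e) in (2)
# by sampling the semi-infinite constraint (IC) on a grid of efforts.
import numpy as np
from scipy.optimize import linprog

def pi(e):                       # outcome probabilities (pi_1(e), pi_2(e)); placeholder family
    return np.array([2.0 ** (-e), 1.0 - 2.0 ** (-e)])

def c(e):                        # agent's cost of effort; placeholder
    return e

def mplp(e, E_grid):
    """Approximate z(e): minimal expected payment implementing effort e."""
    p_e = pi(e)
    A_ub = [-p_e]                # (IR): -pi(e).w <= -c(e)
    b_ub = [-c(e)]
    for f in E_grid:             # sampled (IC): (pi(f) - pi(e)).w <= c(f) - c(e)
        A_ub.append(pi(f) - p_e)
        b_ub.append(c(f) - c(e))
    res = linprog(c=p_e, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * p_e.size, method="highs")   # (LL): w >= 0
    return res.fun if res.success else np.inf

E_grid = np.linspace(0.2, 1.0, 201)
print(mplp(0.5, E_grid))         # approximate minimum payment z(0.5)

Refining the grid tightens the sampled version of the constraint set (IC) toward the semi-infinite program.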
The social welfare loss ρ, which is typically larger than 1, captures the inefficiency of a system when its players act selfishly instead of choosing a socially-optimal effort. When the principal-agent model is discrete, meaning that the set of efforts E is finite, the authors in Balmaceda et al. (2010) obtained tight bounds that can be made arbitrarily large in some specific situations, for instance by increasing the number of possible efforts, or by choosing the probability of the best state at the lowest effort level arbitrarily small. The same authors show in Balmaceda et al. (2010) that, under MLR and CDFC, the analysis can be extended to a continuous case, meaning that the set of efforts E is an interval. In the present paper, we focus on the continuous principal-agent model, where E = [a, b] with 0 < a < b. We assume that the social welfare takes positive values for every e ∈ [a, b]. We also make the following assumptions.

A1: Given π, y and c, u^{SW} is strictly positive for each e ∈ E, and u^{SW} is single-peaked. More precisely, there exists some effort m ∈ (a, b] such that u^{SW} is increasing on [a, m] and decreasing on [m, b].
A2: The cost function c is convex, increasing and twice differentiable, with non-negative values on E.
A3: The probability distributions π(e) satisfy the MLR property on E.
A4: The distribution π_S is concave and twice differentiable on E.

A2 and A3 are standard assumptions in this framework. In particular, A3 implies that the outcome is increasing with respect to the effort level. A4 is consistent with the diminishing marginal returns intuition. Actually, we will show that π_S : E → [0, 1] is an increasing function, so that concavity implies that marginal probability gains decrease with the effort size. It is also interesting to note that A4 results from
the popular CDFC. Indeed, since
Σ_{i=1}^{S} π_i(e) = 1   ∀e ∈ E,
the concavity of π_S readily follows from
0 ≤ Σ_{i=1}^{S−1} π_i''(e) = −π_S''(e)   ∀e ∈ E.
CDFC also implies that u^{SW} : E → R is a concave function (see, e.g., Rogerson, 1985), which is stronger than the single-peaked property described in A1. CDFC is often invoked since, alongside MLR, it ensures the validity of FOA, which in turn implies that the limited liability constraint of the agent is not tight. One can then set the derivative of the agent's objective to zero to obtain a linear program which is simpler and more tractable. Under these conditions, plus twice differentiability of π, the authors in Balmaceda et al. (2010) obtain the tight bound
ρ(π, y, c, [0, 1]) ≤ 1 + ln(π_S(1)/π_S(0)).
The main goal of the present paper is to weaken the assumptions by showing that, under A1, A2, A3, A4 and differentiability of π, there holds
ρ(π, y, c, E) ≤ 1 + ln(π_S(b)/π_S(a)) ≤ 1 + ln(c(b)/c(a)).
We will prove that the bound 1 + ln(c(b)/c(a)) is tight. Moreover, we will present natural conditions under which ρ(π, y, c, E) = 1. Our study is motivated in part by the fact that CDFC is a very restrictive condition, which is not satisfied by several probability distributions, and does not comply with the natural property of diminishing marginal returns (see Rogerson, 1985). Since our assumptions are implied by the conditions imposed in Balmaceda et al. (2010), all our results hold in their more restrictive setting, without relying on FOA. After developing our theory in Section 3, we will provide examples of distributions that comply with A3 and A4, but not with CDFC, describing a real-world problem (see Example 5).

It may happen that the distribution function π is not available in practice. In this case, this paper suggests that 1 + ln(c(b)/c(a)) can alternatively be used to approximate the quantity ρ(π, y, c, E). In other words, this upper bound does not change as long as the cost function and the set of efforts remain the same.

Next we introduce the class of relevant efforts which, under some conditions, includes the optimal ones. Whether this is the case or not will considerably influence the nature of the social welfare loss. The definition relies on the concept of balancing effort, described in the following.

Definition 2.2. Consider the model (1). Take x, e ∈ E and define
w(x, e) = (0, . . . , 0, (c(x) − c(e))/(π_S(x) − π_S(e))),
w(e) = (0, . . . , 0, v(e)),
where
v(e) = c(e)/π_S(e) if e = a, and v(e) = max{ c(e)/π_S(e), sup_{a≤x<e} (c(e) − c(x))/(π_S(e) − π_S(x)) } otherwise.
d is said to be a balancing effort if it is the largest effort in E satisfying
w(d) · π(d) = c(d).

Definition 2.3. Consider the model (1). e ∈ E is said to be a relevant effort whenever
(i) w(e) solves MPLP(e),
(ii) for all x, f ∈ E with x < e < f, it holds that w(x, e) ≤ w(e) ≤ w(e, f),
(iii) e ≥ d, where d is the balancing effort.
The set of relevant efforts is denoted by RE.

From this definition, we have that a relevant effort can be associated with an optimal wage schedule where the agent is only paid if the best state is achieved. While the wage increases with the effort, the effort d is the largest one allowing the agent to balance his/her revenue against his/her cost. In other words, d is the largest effort in E at which the agent's utility is zero or, equivalently, at which (IR) in (1) is binding.

3. Bounding the worst-case social welfare loss

Before addressing the heart of the matter, let us emphasize that, in the absence of the limited liability constraint, there is no welfare loss. Indeed (see Laffont & Martimort, 2001), the objective of the principal and the social welfare can be made equal for any effort level. We now begin with results that will be useful in the sequel.

Proposition 3.1. If A3 holds and π_S is concave, then there exists β ∈ [a, b) such that π is constant over [β, b], or π_S : E → [0, 1] is increasing.

Proof. Following Rothschild and Stiglitz (1970), it is easy to show that π_S is non-decreasing. Indeed, take e, f ∈ E with e < f. Since π(e) and π(f) are probability mass functions, we have that Σ_{s=1}^S π_s(e) = Σ_{s=1}^S π_s(f) = 1. Therefore, there exists some i ∈ S such that π_i(f)/π_i(e) ≥ 1. From MLR, we must have π_S(f)/π_S(e) ≥ π_i(f)/π_i(e) ≥ 1, which in turn implies π_S(f) ≥ π_S(e). Now assume that π_S is not increasing. Then there exists some β ∈ [a, b) such that π_S(β) = π_S(b), which in turn implies that π_S is a constant function on [β, b], because the concavity of π_S implies that π_S is continuous on the interior of its domain (see Rockafellar, 1997). Next we prove that π_i is constant on [β, b] for each i = 1, . . . , S − 1 if π_S is so. To this end, let e, f ∈ [β, b] with e < f. From A3 we obtain
π_S(f)/π_S(e) ≥ π_{S−1}(f)/π_{S−1}(e) ≥ · · · ≥ π_1(f)/π_1(e).
Therefore, π_i is non-increasing on [β, b] for each i = 1, . . . , S − 1. On the other hand, assume that there exists i ∈ S such that π_i(f̂) < π_i(f̄) for some f̄, f̂ ∈ [β, b] with f̄ < f̂. Then we must have π_j(f̂) < π_j(f̄) for j = 1, . . . , i, implying the contradiction
1 = Σ_{s=1}^S π_s(f̂) < Σ_{s=1}^S π_s(f̄) = 1.
This completes the proof.

The following proposition shows that, without loss of generality, we can assume that π_S is increasing.

Proposition 3.2. Consider the model (1). If c is increasing, A3 holds, π_S is concave, and there exists β ∈ [a, b) such that π_S is constant on [β, b], then the feasible set of MPLP(e) is empty for each e ∈ (β, b], and therefore, the solution set of (1) is a subset of [a, β].

Proof. Using Proposition 3.1, we know that π(e) = α for each e ∈ [β, b], where α is a constant vector in R^S. Take e ∈ (β, b]. In this situation, (IC) of MPLP(e) reduces to c(β) ≥ c(e), which is impossible due to the fact that c is increasing. This proves that MPLP(e) has an empty feasible set.

We now state a technical result that will be useful in later developments.
Proposition 3.3. If A2, A3 and A4 hold, then for all x, e, f ∈ E with a ≤ x < e < f ≤ b, one has
(c(x) − c(e))/(π_S(x) − π_S(e)) ≤ c'(e)/π_S'(e) ≤ (c(e) − c(f))/(π_S(e) − π_S(f)).

Proof. Take v, e ∈ E. Following Rockafellar (1997) and using the fact that c and π_S are convex and concave functions respectively, we have that
c'(e)(v − e) ≤ c(v) − c(e)   (6)
and
π_S(v) − π_S(e) ≤ π_S'(e)(v − e).   (7)
Combining (6) and (7) gives us
c'(e)/π_S'(e) ≤ (c(v) − c(e))/(π_S(v) − π_S(e))   if e < v,
and
(c(v) − c(e))/(π_S(v) − π_S(e)) ≤ c'(e)/π_S'(e)   if v < e,
proving the desired inequalities.

Now, let us consider the dual of the semi-infinite program (2), in which the dual variable associated with the constraints (IC) is denoted by λ(f). Following Shapiro (2005), we know that λ(f) is equal to zero except at a finite number of points. This allows us to formulate the dual as
sup_{λ(f), μ(e)}   μ(e) c(e) + Σ_{f∈E} [c(e) − c(f)] λ(f)
s.t.   Σ_{f∈E} [π_s(e) − π_s(f)] λ(f) + π_s(e) μ(e) ≤ π_s(e)   ∀s ∈ S,
       λ(f) ≥ 0,  μ(e) ≥ 0,
where the sums are taken over the non-zero values of λ(f). This program is equivalent to
sup_{λ(f), μ(e)}   μ(e) c(e) + Σ_{f∈E} [c(e) − c(f)] λ(f)
s.t.   Σ_{f∈E} (1 − π_s(f)/π_s(e)) λ(f) + μ(e) ≤ 1   ∀s ∈ S,
       λ(f) ≥ 0,  μ(e) ≥ 0.
(8)

In the next theorem, we capitalize on this dual to identify relevant efforts and optimal wages. We first establish that the balancing effort d, introduced in Definition 2.2, is the smallest relevant effort. More precisely, one of the following three situations must occur:
• the balancing effort is the largest effort b, in which case b is the unique relevant effort;
• d = a and every effort is relevant;
• d is the unique root of g, defined at e as
g(e) = π_S(e) c'(e)/π_S'(e) − c(e).   (9)
The set of relevant efforts is then [d, b]. Moreover, if d ∈ (a, b), then v, introduced in Definition 2.2, is given at e ∈ (d, b] by
v(e) = sup_{a≤x<e} (c(e) − c(x))/(π_S(e) − π_S(x)) = c'(e)/π_S'(e) = inf_{e<f≤b} (c(e) − c(f))/(π_S(e) − π_S(f)).   (10)
Indeed, for such an effort e,
c(e)/π_S(e) < v(e) = sup_{a≤x<e} (c(e) − c(x))/(π_S(e) − π_S(x)) ≤ c'(e)/π_S'(e),
which yields g(e) > 0.

Theorem 3.4. Assume that A1, A2, A3 and A4 hold. Let g be given by (9) and d be the balancing effort introduced in Definition 2.2. Then the following statements hold.

(i) g is non-decreasing on [a, b].
(ii) If g(b) ≤ 0, then w(b) = (0, . . . , 0, c(b)/π_S(b)), RE = {b} and d = b. Moreover, z(e) = c(e) for every e ∈ E, where z(e) is given by (2), but if g is increasing and e ∈ [a, b), neither w̄(e) = (0, . . . , 0, c(e)/π_S(e)) nor ŵ(e) = (0, . . . , 0, c'(e)/π_S'(e)) is feasible for MPLP(e).
(iii) If g(b) > 0 and g has a zero in E, then RE = [d, b], with associated optimal wage schedules w(e) = (0, . . . , 0, c'(e)/π_S'(e)) for all e ∈ (d, b] and w(d) = (0, . . . , 0, c'(d)/π_S'(d)) = (0, . . . , 0, c(d)/π_S(d)). Moreover, z(e) = c(e) for each e ∈ [a, d].
(iv) If g is positive on E, then d = a, RE = [a, b] and
w(e) = (0, . . . , 0, c'(e)/π_S'(e)) if e ∈ (a, b],   and   w(e) = (0, . . . , 0, c(e)/π_S(e)) if e = a.

Proof.

(i) Taking into account A2 and A4, we obtain
g'(e) = π_S(e) [c''(e) π_S'(e) − c'(e) π_S''(e)] / [π_S'(e)]² ≥ 0,
which implies that g is non-decreasing.
(ii) Note that Proposition 3.3 implies that, for all x, e and f with a ≤ x < e < f ≤ b, we have that
(c(x) − c(e))/(π_S(x) − π_S(e)) ≤ sup_{a≤x<b} (c(x) − c(b))/(π_S(x) − π_S(b)) = c'(b)/π_S'(b).
Since, by assumption, g(b) ≤ 0, we also have
c'(b)/π_S'(b) ≤ c(b)/π_S(b),
and therefore, from Definition 2.2,
v(b) = c(b)/π_S(b).
It is obvious that (0, . . . , 0, c(b)/π_S(b)) satisfies (IR) and (LL). The previous developments also show that (IC) holds, since
c(b)/π_S(b) ≥ (c(b) − c(x))/(π_S(b) − π_S(x))   ∀x ∈ [a, b),
and consequently, part (ii) of Definition 2.3 holds for e = b. Moreover, (IR) is binding for e = b, so that b solves MPLP(b), part (iii) of Definition 2.3 is satisfied, and b is the unique relevant effort. To show that z(e) = c(e) for each e ∈ [a, b), we select e ∈ [a, b). (IC) of MPLP(e), defined in (2), can be written as
Σ_{i=1}^{S} [π_i(e) − π_i(f)] w_i ≥ c(e) − c(f)   ∀f ∈ E.
Divide both sides of the above inequality by the positive number f − e and then take the limit as f → e+. We then obtain
Σ_{i=1}^{S} −π_i'(e) w_i ≥ −c'(e).
Therefore, MPLP(e) reduces to
z_r(e) = min_{w∈R^S}  Σ_{i=1}^{S} π_i(e) w_i
s.t.  Σ_{i=1}^{S} π_i(e) w_i ≥ c(e),
      Σ_{i=1}^{S} −π_i'(e) w_i ≥ −c'(e),
      w ≥ 0.
(11)
Since Σ_{i=1}^{S} π_i(e) = 1 and π_S'(e) > 0 for each e ∈ E, we have that
Σ_{i=1}^{S−1} π_i'(e) = −π_S'(e) < 0,
which in turn implies π_j'(e) < 0 for some j ∈ S. Now define a wage schedule w^{(j)}(e) whose only non-zero component, the jth one, is set to c(e)/π_j(e). As a result, w^{(j)}(e) is a feasible point of (11), with objective value c(e). The dual program associated with (11) is expressed as
z_r^D(e) = max_{δ_1, δ_2 ∈ R}  δ_1 c(e) − δ_2 c'(e)
s.t.  δ_1 π_i(e) − δ_2 π_i'(e) ≤ π_i(e)   ∀i ∈ S,
      δ_1, δ_2 ≥ 0.
(12)
It is easy to see that w^{(j)}(e) and (1, 0) solve (11) and (12) respectively, since at these points we have z_r(e) = z_r^D(e) = c(e). On the other hand, the feasible set of MPLP(e) is a subset of that of (11). Therefore,
z_r^D(e) = c(e) ≥ z(e) ≥ z^D(e),   (13)
where z^D(e) is the optimal value of (8). We have that λ(f) = 0 for each f ∈ E and μ(e) = 1 are feasible for (8), yielding an objective value c(e). From (13), we then have that z(e) = c(e). Moreover, since g is increasing and achieves a negative value for each e ∈ [a, b), and since the feasible set of (11) includes that of MPLP(e), it is easily seen that neither w̄(e) nor ŵ(e) is a feasible point of (11), whence neither w̄(e) nor ŵ(e) is a feasible point of MPLP(e).
(iii) Select e ∈ [d, b]. First we prove that (0, . . . , 0, c'(e)/π_S'(e)) solves MPLP(e). It is obvious that (LL) in MPLP(e) holds. (IR) is valid due to the fact that g(e) > 0 for each e ∈ (d, b] and g(d) = 0. Moreover, (10) shows that (IC) is also satisfied. Therefore, (0, . . . , 0, c'(e)/π_S'(e)) is feasible for MPLP(e), and the corresponding objective value of MPLP(e) is equal to π_S(e) c'(e)/π_S'(e). Now set μ(e) = 0, λ(f) = 0 for each f ∈ E \ {x}, and λ(x) = π_S(e)/(π_S(e) − π_S(x)) with x ∈ [a, e) in the dual problem associated with MPLP(e), defined in (8), whose constraints then read
(1 − π_s(x)/π_s(e)) λ(x) + μ(e) ≤ 1   ∀s ∈ S.   (14)
From A3, we see that the constraints in (14) associated with s < S are redundant, so that this choice of dual variables is feasible. The corresponding dual objective value is π_S(e)(c(e) − c(x))/(π_S(e) − π_S(x)) and, using (10),
sup_{a≤x<e} π_S(e) (c(e) − c(x))/(π_S(e) − π_S(x)) = π_S(e) c'(e)/π_S'(e).
From the strong duality theorem (see, e.g., Shapiro, 2005), we have that (0, . . . , 0, c'(e)/π_S'(e)) solves MPLP(e). The fact that w(x, e) ≤ w(e) ≤ w(e, f) for all x, f ∈ E with x < e < f is a direct result of (10). The property stated in Definition 2.3(iii) follows from the fact that d is the largest zero of g on E. Note also that c'(d)/π_S'(d) = c(d)/π_S(d) follows from g(d) = 0. To complete (iii), it remains to show that z(e) = c(e) for each e ∈ [a, d]. We have shown in (ii) that z(e) = c(e) for each e ∈ [a, b). One can use a similar argument to prove that z(e) = c(e) for each e ∈ [a, d].
(iv) The argument invoked in (iii) also shows that each e ∈ (a, b] is relevant. Therefore, it remains to show that a is relevant as well. Using Proposition 3.3, we have that
c'(a)/π_S'(a) ≤ (c(a) − c(f))/(π_S(a) − π_S(f))   ∀f ∈ (a, b].   (15)
Therefore, (0, . . . , 0, c'(a)/π_S'(a)) is feasible for MPLP(a). From (0, . . . , 0, c'(a)/π_S'(a)) we construct an optimal solution to MPLP(a) by decreasing its Sth component until we reach a face. It is clear that the binding constraint in this process is (IR) of MPLP(a). Indeed, π(a) · w(a) − c(a) = 0, which implies that w(a) = (0, . . . , 0, c(a)/π_S(a)) solves MPLP(a). Finally, the inequality w(a) ≤ w(a, f) for all a < f ≤ b follows from the positivity of g(a) and (15).

It is worthwhile to note that the program (11) is similar to the one that can be deduced from FOA, the only difference being that the second inequality becomes an equality. Therefore, FOA could be used to prove similar results, and has proved to be popular. It is for instance at the heart of Theorem 5.1 of Balmaceda et al. (2010), dealing with the worst-case social welfare loss in the case of a continuous set of efforts. However, the analysis remains limited, since CDFC is invoked as well. Our arguments allow us to bypass the contentious CDFC and keep the developments at a more general level.

It is also notable that the principal's utility u^P, defined in (3), may not be differentiable at e = d, while it is differentiable on (d, b], where d is the balancing effort introduced in Definition 2.2. This non-differentiability results from the optimal wage schedules obtained in Theorem 3.4(iii)–(iv). Indeed, one gets
u^P(e) = y · π(e) − π_S(e) c'(e)/π_S'(e)   if e ∈ (d, b],
u^P(e) = y · π(e) − c(e)                    if e ∈ [a, d].
(16)

Example 1. Take S = {1, 2}, π_1(e) = 2^{−e}, π_2(e) = 1 − 2^{−e}, c(e) = e, y = (0, 2[ln(2)]^{−1}) and E = [0.2, 1]. It is easy to see that assumptions A1, A2, A3 and A4 hold. Furthermore, g(e) = [ln(2)]^{−1}(2^e − 1) − e, which has a unique root at 0, which in turn implies d = 0.2. We also have u^{SW}(e) = 2[ln(2)]^{−1} − [ln(2)]^{−1} 2^{1−e} − e, m = 1, E^P = {0.5} and ρ(π, y, c, E) ≈ 1.283. Moreover, if w(e) denotes the optimal wage associated with MPLP(e), then
w(e) = (0, [ln(2)]^{−1} 2^e) if e ≠ 0.2,   and   w(e) = (0, 1.545) if e = 0.2.
As a result, w is not continuous at e = 0.2. In addition, u^P, defined in (16), is not a continuous function either, because u^P(0.2) ≈ 0.174 while lim_{e→0.2+} u^P(e) ≈ 0.159.
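As a quick numerical sanity check of Example 1 (an illustrative sketch, not part of the original paper), the quantities d, E^P and ρ can be recovered on a grid:

# Sanity check of Example 1 on a fine grid (illustrative sketch).
import numpy as np

ln2 = np.log(2.0)
E = np.linspace(0.2, 1.0, 100001)

u_sw = 2.0 / ln2 - 2.0 ** (1.0 - E) / ln2 - E            # social welfare u^SW(e)
g = (2.0 ** E - 1.0) / ln2 - E                            # g(e) of (9); positive on E, so d = 0.2
u_p = np.where(E > 0.2,
               2.0 / ln2 * (1.0 - 2.0 ** (-E)) - (2.0 ** E - 1.0) / ln2,  # (16) on (d, b]
               2.0 / ln2 * (1.0 - 2.0 ** (-E)) - E)                       # (16) at e = d

e_star = E[np.argmax(u_p)]                                # principal's optimal effort, ~0.5
rho = u_sw.max() / u_sw[np.argmax(u_p)]                   # social welfare loss, ~1.283
print(e_star, rho, g.min() > 0)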
It can also occur that FOA does not hold at the left boundary of the effort set. For instance, let E = [0.6, 1] in Example 1. Since g is positive on E, we obtain d = 0.6 and
w(e) = (0, [ln(2)]^{−1} 2^e) if e ≠ 0.6,   and   w(e) = (0, 1.763) if e = 0.6.
As before, w is not continuous at e = 0.6. Moreover, the second inequality in (11) is not active at e = 0.6.

We now show that there is no incentive for the principal to implement an effort larger than m, the mode of u^{SW}. This is in line with Balmaceda et al. (2010), since CDFC implies the concavity, and therefore the single-peaked property, of u^{SW}. More precisely, we will prove that when g(d) = 0, with g defined in (9) and d < m, the principal implements a relevant effort in [d, m], where m is the socially-optimal effort introduced in A1. On the other hand, if d ≥ m, then an interesting case occurs. In this situation, (IR) in (1) is binding at e = m. Therefore, it is optimal for the principal to implement m, because he reaps the full social surplus u^{SO}, driving the agent's profit to zero.

Theorem 3.5. Consider the model (1). Assume that A1, A2, A3 and A4 are satisfied. Moreover, assume that d is the balancing effort introduced in Definition 2.2. Then, the following statements hold.

(i) If d < m and π is differentiable, any optimal effort of the principal-agent model (1) is relevant.
(ii) If d ≥ m, m is the only optimal solution of (1).

Proof.

(i) First we prove that E^P ⊂ [a, m]. Let e* ∈ E^P with e* > m. Using (16), we have that
u^P(e) = y · π(e) − π_S(e) c'(e)/π_S'(e)
for each e ∈ (d, b]. On the other hand, we must have that (u^P)'(e*) ≥ 0, because u^P must be increasing on [e* − ε, e*] where ε > 0 is small enough. Therefore,
(u^P)'(e*) = (u^{SW})'(e*) − G(e*) ≥ 0,   (17)
where
G(e*) = π_S(e*) [c''(e*) π_S'(e*) − c'(e*) π_S''(e*)] / [π_S'(e*)]².
By our assumptions, we must have that (u^{SW})'(e*) < 0 and G(e*) > 0. This implies that (u^P)'(e*) < 0, in contradiction with (17). This proves that E^P ⊂ [a, m]. Next we prove that E^P ⊂ [d, m]. To this end, let f be an optimal effort for the principal, with f < d. We have that 0 ≤ u^P(f) − u^P(d) and, by Theorem 3.4(iii), c(f) = z(f) and z(d) = c(d). From (16) and Definition 2.1(iii), there follows
0 ≤ u^P(f) − u^P(d) = y · π(f) − z(f) − y · π(d) + z(d) = y · π(f) − c(f) − y · π(d) + c(d) = u^{SW}(f) − u^{SW}(d).
Therefore, u^{SW}(d) ≤ u^{SW}(f), contradicting the fact that u^{SW} is increasing on [a, m]. This completes the proof of (i).
(ii) Using Definition 2.1(iv) and the fact that u^{SW} is single-peaked, (16) shows that u^{SO} = u^{SW}(m) = u^P(m). On the other hand, u^{SW}(e) ≥ u^P(e) for each e ∈ E. This implies that max_{e∈E} {u^P(e)} = u^{SW}(m). As a result, E^P = {m}.

In Balmaceda et al. (2010), the authors suggest the possibility of replacing CDFC by the requirement that the social welfare be non-decreasing. This non-decreasing social welfare (NDSW) condition naturally implies that the social welfare is single-peaked, with a peak equal to the upper bound of the effort set. Our previous theorem also covers this alternative restrictive assumption, which rules out the possibility that an irrelevant effort could be optimal (see below).

Example 2. Set S = {1, 2}, π_1(e) = 1.5 − 0.5e, π_2(e) = 0.5e − 0.5, c(e) = 0.5e², y = (10, 13) and E = [1, 2]. This instance verifies A1, A2, A3 and A4. We have that g(e) = 0.5e² − e, which has a unique root at e = 2 in E, u^{SW}(e) = 8.5 + 1.5e − 0.5e², m = 1.5, d = 2, E^P = {1.5} and ρ(π, y, c, E) = 1. Moreover, let w*(e) denote the optimal wage associated with MPLP(e) for each e ∈ E. We obtain
w*(e) = (e − 0.5e², 3e − 0.5e²) if e ≠ 1,   and   w*(e) = (0.5, w*_2) if e = 1,
where w*_2 is any number in [0, 2.5]. In this case, there is no welfare loss and the optimal effort is irrelevant.

We now turn our attention to the derivation of upper bounds for the worst-case social welfare loss defined in (5), for which we must first compute the quantity defined in (4). Note that, as a consequence of Theorem 3.5, there is no welfare loss if d ≥ m.

Theorem 3.6. Assume that A1, A2, A3 and A4 are fulfilled for the model (1). Then the following statements are true.

(i) ρ(π, y, c, E), defined in (4), is given by
ρ(π, y, c, E) = (y · π(m) − c(m)) / min_{e∈E^P} {y · π(e) − c(e)}.   (18)
(ii) If d ≥ m, then ρ(π, y, c, E) = 1, where d is the balancing effort introduced in Definition 2.2.
(iii) If d < m and π is differentiable, then
ρ(π, y, c, E) ≤ (y · π(m) − c(m)) / (y · π(d) − c(d)),   (19)
ρ(π, y, c, E) ≤ 1 + (π_S(m) c'(m)/π_S'(m) − c(m)) / (y · π(d) − c(d)),   (20)
ρ(π, y, c, E) ≤ π_S(m)/π_S(d) ≤ π_S(b)/π_S(a)   (21)
and
ρ(π, y, c, E) ≤ π_S(m)/π_S(d) ≤ c(m)/c(d) ≤ c(b)/c(a).   (22)

Proof.

(i) By Definition 2.1(iv) and A1, we have that u^{SO} = u^{SW}(m), which proves (18).
(ii) Taking into account (18) and Theorem 3.5(ii), we conclude that ρ(π, y, c, E) = 1.
(iii) Equation (19) follows from Theorem 3.5(i) and the fact that u^{SW} is increasing on [d, m]. On the other hand,
u^P(m) = y · π(m) − z(m) = y · π(m) − c(m) + c(m) − z(m) = u^{SW}(m) + c(m) − z(m) = u^{SW}(m) + c(m) − π_S(m) c'(m)/π_S'(m),   (23)
where the last equality follows from w(m) = (0, . . . , 0, c'(m)/π_S'(m)) (see Theorem 3.4). Note that
u^P(m) ≤ inf_{e∈E^P} {u^P(e)}.
Taking into account the fact that u^P(e) ≤ u^{SW}(e) for all e ∈ E, we obtain
u^P(m) ≤ inf_{e∈E^P} {u^{SW}(e)}.   (24)
Therefore, using (23) together with (24) gives us
u^{SW}(m) ≤ inf_{e∈E^P} {u^{SW}(e)} + π_S(m) c'(m)/π_S'(m) − c(m).
Dividing both sides of the above inequality by inf_{e∈E^P} u^{SW}(e) and using Theorem 3.5(i) yields
u^{SW}(m) / inf_{e∈E^P} {u^{SW}(e)} ≤ 1 + (π_S(m) c'(m)/π_S'(m) − c(m)) / (y · π(d) − c(d)).
This gives us (20). Next we show that (21) holds. Taking into account A3 and the definition of u^{SW}, we have that
u^{SW}(d) = Σ_{i=1}^{S} π_i(d) y_i − c(d)
          = π_S(d) Σ_{i=1}^{S} (π_i(d)/π_S(d)) y_i − c(d)
          ≥ (π_S(d)/π_S(m)) Σ_{i=1}^{S} π_i(m) y_i − c(d)
          = (π_S(d)/π_S(m)) u^{SW}(m) + π_S(d) j(d),   (25)
where j(d) = c(m)/π_S(m) − c(d)/π_S(d). We set η(e) = c(e)/π_S(e) and show that η is non-decreasing. We have that η'(e) ≥ 0 if and only if g(e) ≥ 0, where g is given by (9). Since g is non-negative on [d, b], η must be non-decreasing there, which in turn implies j(d) ≥ 0. Therefore, based on (25), we obtain
u^{SW}(d) ≥ (π_S(d)/π_S(m)) u^{SW}(m).
On the other hand, since π_S is increasing on E, we get (21). Next we proceed to prove (22). From the first inequality in (21) and the fact that c is increasing on E, it is sufficient to show that
π_S(m)/π_S(d) ≤ c(m)/c(d).   (26)
Note that, by our assumption, g, defined in (9), is non-negative on [d, b]. Therefore,
c'(e)/c(e) ≥ π_S'(e)/π_S(e)   ∀e ∈ (d, b],
and
∫_d^m (c'(e)/c(e)) de ≥ ∫_d^m (π_S'(e)/π_S(e)) de.
As a result,
ln(c(m)/c(d)) ≥ ln(π_S(m)/π_S(d)).
Since the logarithmic function ln(·) is increasing, (26) is satisfied.

Remark 3.7. Theorem 3.6 remains valid if d is replaced by any e* ∈ E^P.

We finally derive bounds similar to those proposed in Balmaceda et al. (2010). The arguments are similar but do not rest on FOA. Moreover, we propose a new tight bound in terms of the cost function only.

Theorem 3.8. Assume that A1, A2, A3 and A4 are fulfilled for the model (1) and that π is differentiable on E. Set e* = sup E^P.

(i) If e* ≥ m, then ρ(π, y, c, E) = 1.
(ii) If e* < m, then
ρ(π, y, c, E) ≤ 1 + ln(π_S(m)/π_S(e*)) ≤ 1 + ln(π_S(b)/π_S(a))   (27)
and
ρ(π, y, c, E) ≤ 1 + ln(π_S(m)/π_S(e*)) ≤ 1 + ln(c(m)/c(e*)) ≤ 1 + ln(c(b)/c(a)).   (28)

Proof.

(i) Taking into account Remark 3.7, this part easily follows from Theorems 3.5 and 3.6(ii).
(ii) Note that, using the definitions of u^P and u^{SW}, we get
u^{SW}(e*) ≥ u^P(e*) ≥ u^P(e)   ∀e ∈ E.   (29)
Using (16), we obtain
u^P(e) = y · π(e) − π_S(e) c'(e)/π_S'(e)   ∀e ∈ [d, b].   (30)
On the other hand, the definition of u^{SW} implies that c'(e) = Σ_{i=1}^{S} y_i π_i'(e) − [u^{SW}]'(e), from which we get, using (29) and (30),
u^{SW}(e*) ≥ Φ(e) + π_S(e) [u^{SW}]'(e)/π_S'(e)   ∀e ∈ [d, b],   (31)
where
Φ(e) = Σ_{i=1}^{S} y_i (π_S'(e) π_i(e) − π_S(e) π_i'(e)) / π_S'(e).
Next we prove that Φ(e) ≥ 0, for which it is sufficient to show that π_S'(e) π_i(e) − π_S(e) π_i'(e) ≥ 0 for each i ∈ S. But the latter inequality holds due to A3, which is equivalent to π_i'(e)/π_i(e) being non-decreasing in i for each e, when π is differentiable (see Milgrom, 1981). This shows that Φ(e) ≥ 0, from which we get, using (31),
[u^{SW}]'(e) ≤ u^{SW}(e*) π_S'(e)/π_S(e)   ∀e ∈ [d, b].   (32)
Note that e* ∈ [d, m], by Theorem 3.5(i). Integrating both sides of the above inequality on [e*, m] gives us
u^{SW}(m)/u^{SW}(e*) ≤ 1 + ln(π_S(m)/π_S(e*)).
Using (4) and the fact that π_S is increasing, we obtain (27). Moreover, (28) follows from (26), A2, Remark 3.7 and the fact that ln(·) is an increasing function.

According to (28), if the costs corresponding to the lowest and highest effort levels are known (the value of π_S being possibly unknown), then 1 + ln(c(b)/c(a)) is an upper bound for the social welfare loss. On the other hand, if the optimal effort is larger than the mode of the social welfare utility, there is no welfare loss, a situation that often occurs and that will be illustrated later in the paper. The following proposition discusses the boundedness of ρ(π, y, c, E) with respect to π_S.

Proposition 3.9. Consider ρ(π, y, c, E), defined in (4). Assume that A1, A2, A3 and A4 hold. Then, the following statements hold.

(i) ρ(π, y, c, E) is a finite number.
(ii) Let Π be a family of probability distributions such that each element π ∈ Π satisfies A1, A3 and A4 on E = [a, b]. If there exist positive constants γ and ν satisfying
γ ≤ π_S(a)   ∀π ∈ Π   (33)
and
1 ≥ ν ≥ π_S(b)   ∀π ∈ Π,   (34)
then
sup_{π∈Π, c∈C, y} ρ(π, y, c, E) ≤ ν/γ ≤ 1/γ,   (35)
where C is the set of cost functions satisfying A1 and A2.

Proof.

(i) Since π_S : [a, b] → (0, 1], ρ(π, y, c, E) must be finite, based on (21) and Theorem 3.6(ii).
(ii) This is a direct consequence of (21), Theorem 3.6(ii) and the fact that π_S(b) ≤ 1 for each π_S.

We conclude this section with several examples illustrating the upper bounds.

Example 3. The main aim of this example is to present a family of problems for which the probability of the highest outcome under the smallest effort is bounded away from zero. In this case, the worst-case social welfare loss is finite. Set S = {1, 2}, π_1(e) = ε^e, π_2(e) = 1 − ε^e, c(e) = e, y = (y_1, y_2) and E = [a, b], where 0 < ε < 1 and 0 < a < b < ∞. A2, A3 and A4 are satisfied. In addition, u^{SW}(e, ε) = y_2 + (y_1 − y_2) ε^e − e is strictly concave and achieves its unique maximum at
m = −ln((y_1 − y_2) ln ε) / ln ε.
Assume that y_1 > 0. Therefore, u^{SW}(0, ε) = y_1 > 0, implying that A1 holds too for each ε ∈ (0, 1). Now let Π = {(ε^e, 1 − ε^e)}_{ε∈[ε̂, ε̃]} be as described in (33) and (34), where 0 < ε̂ < ε̃ < 1, and take Φ = {φ_ε}_{ε∈[ε̂, ε̃]} with φ_ε(e) = 1 − ε^e for each ε ∈ [ε̂, ε̃]. Then, for each e ∈ (0, +∞), we must have
inf_{ε∈[ε̂, ε̃]} {1 − ε^e} = 1 − ε̃^e   and   sup_{ε∈[ε̂, ε̃]} {1 − ε^e} = 1 − ε̂^e.
Note that (33) and (34) hold with γ = 1 − ε̃^a and ν = 1 − ε̂^b, respectively. Therefore, (35) holds with ν/γ = (1 − ε̂^b)/(1 − ε̃^a). If a → 0+ or ε̃ → 1−, then ν/γ → +∞. Nonetheless, taking into account (28), we have that
sup_{ε∈[ε̂, ε̃]} ρ(π, y, c, E) ≤ 1 + ln(c(b)/c(a)) = 1 + ln(b/a) < +∞.

In the following example, we show that 1 + ln(c(b)/c(a)) is a tight bound that can be arbitrarily large. A similar example has been presented in Balmaceda et al. (2010) to show that the bound 1 + ln(π_2(b)/π_2(a)) is tight.

Example 4. We wish to build a family of problems for which the worst-case social welfare loss can be arbitrarily large. Let ε > 0 and β > 0 be small enough. Take S = {1, 2}, π_1(e) = (1 − ε)(1 − e), π_2(e) = (1 − ε)e + ε, y = (0, 1) and E = [0, 1], and set the cost function c to
c(e) = εβ/(1 + β) + (1 − ε + ε²) e − (ε(1 − ε + ε²)/(1 − ε)) ln(((1 − ε)/ε) e + 1).
Assumptions A1, A2, A3 and A4 are satisfied. Moreover, u^{SW} is increasing in e, which implies that m = 1. We also have
g(e) = −εβ/(1 + β) + (ε(1 − ε + ε²)/(1 − ε)) ln(((1 − ε)/ε) e + 1).
Since g(0) = −εβ/(1 + β) < 0, g(1) > 0 and g is increasing, we must have that d ∈ (0, 1) with g(d) = 0, illustrating Theorems 3.4(iii) and 3.5(i). In particular, g(d) = 0 yields
((1 − ε)/ε) d + 1 = exp((1 − ε) β / ((1 + β)(1 − ε + ε²))),   (36)
or equivalently
d = (ε/(1 − ε)) (exp((1 − ε) β / ((1 + β)(1 − ε + ε²))) − 1).   (37)
On the other hand, using (16), we get u^P(e) = ε − ε² e for each e ∈ [d, 1], which implies that u^P is decreasing on [d, 1]. Therefore, E^P = {d} by Theorem 3.5(i). One can easily verify that
c(d) = εβ/(1 + β) + (1 − ε + ε²) d − (ε(1 − ε + ε²)/(1 − ε)) ln(((1 − ε)/ε) d + 1) = (1 − ε + ε²) d,
where we have used (36), and therefore
ρ(π, y, c, E) = u^{SW}(1)/u^{SW}(d) = (1 − c(1)) / ((1 − ε) d + ε − c(d)) = (1 − ε − β/(1 + β) − (1 − ε + ε²) ln(ε)/(1 − ε)) / (1 − ε d).   (38)
It is easy to see that ρ(π, y, c, E) can be made arbitrarily large by taking ε small enough. For any δ > 0, one can choose ε such that
|ρ(π, y, c, E) − (1 − ln ε)| < δ.
Similarly, 1 + ln(c(1)/c(0)) → +∞ and, for ε and β small enough,
|1 + ln(c(1)/c(0)) − (1 − ln ε)| < δ.
Combining these two inequalities, we have
|ρ(π, y, c, E) − (1 + ln(c(1)/c(0)))| < 2δ,
which implies that the bound (28) is tight.

Inspired by Example 3.1 of Liu (2008), the next example describes a real-world problem, illustrating that our theory can be applied in practice. Moreover, it provides a distribution that complies with A3 and A4, but not with CDFC.
Example 5. A profit-maximizing firm is currently without a manager, and its shareholders are looking for a new one. Once hired, the manager can decide to run the firm with effort e ∈ E = [0.5, 0.7]. The effort
cannot be directly observed by the shareholders, but they observe that the firm’s profits are one of the outcomes
y = (y_1, y_2, y_3, y_4) = (1, 2, 3, 4).
The cost function of the manager when effort e is implemented is given by c(e) = 0.5e². The relationship between efforts and profits is random. Given an effort e, the outcome vector y is associated with the probability vector
π(e) = (π_1(e), π_2(e), π_3(e), π_4(e)) = (e(1 − e)², 0.5 − 0.5e, 0.5 − e(1 − e)², 0.5e).
The above scenario can be formulated in terms of a principal-agent model whose principal and agent are respectively the shareholders and the new manager. Once the shareholders set non-negative wages w = (w_1, w_2, w_3, w_4) ≥ 0 with respect to the outcomes y of the task performed by the manager in order to maximize the expected profit, the model reads as
max_{e∈E, w}  2.5 − 0.5w_2 − 0.5w_3 − (1 + w_1 − 0.5w_2 − w_3 + 0.5w_4) e + 2(2 + w_1 − w_3) e² − (2 + w_1 − w_3) e³
s.t.  0.5(w_2 + w_3) + (w_1 − 0.5w_2 − w_3 + 0.5w_4) e − 2(0.25 + w_1 − w_3) e² + (w_1 − w_3) e³ ≥ 0,   (IR)
      e ∈ arg max_{f∈E} { 0.5(w_2 + w_3) + (w_1 − 0.5w_2 − w_3 + 0.5w_4) f − 2(0.25 + w_1 − w_3) f² + (w_1 − w_3) f³ },   (IC)
      w_i ≥ 0   ∀i ∈ {1, 2, 3, 4}.   (LL)
Next we show that CDFC is not satisfied while A3 and A4 hold. In fact, CDFC does not hold since π_1''(0.5) = −1. It is clear that π_4(e) is concave and twice differentiable, implying A4. It remains to show that MLR holds. Let 0.5 ≤ e < f ≤ 0.7. It is easily seen that π_1(f)/π_1(e) ≤ π_2(f)/π_2(e), because this inequality is equivalent to 1 ≤ e + f. Now, π_2(f)/π_2(e) ≤ π_3(f)/π_3(e) can be written as (1 − e)(1 − f)(1 − e − f) ≤ 0.5, the latter inequality being valid since 1 − e − f < 0. It remains to show that π_3(f)/π_3(e) ≤ π_4(f)/π_4(e), or equivalently e f (2 − e − f) ≤ 0.5. But this inequality holds since e and f belong to [0.5, 0.7].
Using (4) and Theorem 3.4, it is easily seen that
ρ(π, y, c, E) = u^{SW}(0.7)/u^{SW}(0.5) = 1.078.
Moreover, our developments in Theorem 3.8 produce 1.336 as an upper bound for the social welfare loss with respect to the above problem. Indeed, we have
ρ(π, y, c, E) ≤ 1 + ln(π_4(0.7)/π_4(0.5)) = 1.336.
Our theory also says that, for any probability distribution satisfying our assumptions, we have the following upper bound in terms of the cost function c(e) = 0.5e² alone:
ρ(π, y, c, E) ≤ 1 + ln(c(0.7)/c(0.5)) = 1.673.
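The figures reported in Example 5 can be checked numerically; the following sketch (not part of the original paper) verifies on a grid that CDFC fails while MLR holds for a sample pair of efforts, and recovers ρ and the bound of Theorem 3.8:

# Illustrative check of Example 5 (sketch, not from the paper).
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])
E = np.linspace(0.5, 0.7, 20001)

def pi(e):
    return np.array([e * (1 - e) ** 2, 0.5 - 0.5 * e, 0.5 - e * (1 - e) ** 2, 0.5 * e])

def c(e):
    return 0.5 * e ** 2

# CDFC fails: pi_1''(e) = 6e - 4 is negative at e = 0.5.
print("pi_1''(0.5) =", 6 * 0.5 - 4)

# MLR: likelihood ratios pi_i(f)/pi_i(e) should be non-decreasing in i for e < f.
ratios = pi(0.65) / pi(0.55)
print("MLR holds at (0.55, 0.65):", np.all(np.diff(ratios) >= -1e-12))

u_sw = np.array([y @ pi(e) - c(e) for e in E])            # social welfare
u_p = np.array([(y @ pi(e) - e ** 2) if e > 0.5           # (16): pi_4(e) c'(e)/pi_4'(e) = e^2
                else (y @ pi(e) - c(e)) for e in E])
rho = u_sw.max() / u_sw[np.argmax(u_p)]
print("rho ~=", rho)                                      # ~1.078
print("bound:", 1 + np.log(pi(0.7)[3] / pi(0.5)[3]))      # ~1.336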
Example 6. In this example we vary the data of Example 5 and study a similar scenario. For that, we take E = [0, 1], c(e) = e^{1.25},
y = (y_1, y_2, y_3, y_4) = (0, 1, 2, 3)
and
π(e) = (π_1(e), π_2(e), π_3(e), π_4(e)) = (0.25(1 − e), 0.25, 0.25, 0.25(1 + e)).
It can easily be shown that the required assumptions for our developments hold. Equation (4) and Theorem 3.4 give us ρ(π, y, c, E) = 1.013, while Theorem 3.8 produces 1.693 as an upper bound for the social welfare loss.
Finally, note that a real-world scenario like that of Example 4 can be described to show that the upper bounds recommended in Theorem 3.8 for the social welfare loss can be as close as desired to the actual social welfare loss defined in (4).

4. Conclusions

The paradigm of the principal-agent model is a classical concept in economics, although several of its theoretical properties are still not well understood, including the welfare loss resulting from the non-cooperative behavior of the two players involved. Following the footsteps of Balmaceda et al. (2010), we address the issue of computing the social welfare loss (the price of anarchy) in the risk-neutral case, under weaker conditions than those considered by these authors. In particular, we do not assume the convexity of the distribution functions, or that social welfare is a strictly increasing function of effort, two conditions that are not met by natural families of distributions. Our results also indicate that the bounds are highly dependent on the shape of the distributions and on the effort sets. The social welfare loss is indeed bounded whenever the cost function is bounded. If large efforts are disallowed, it may be that the socially-optimal effort cannot be reached, and that the principal's optimal strategy is to impose a wage schedule that only rewards the agent if the best state is achieved. If the latter is associated with a very small probability, the principal is simply betting on the outcome. The nature of the situation changes totally if the socially-optimal effort can be considered, resulting in the absence of welfare loss. This has important practical implications, suggesting that large efforts should be allowed, whenever relevant to the situation, in order to strike the right balance between the objectives of the principal and the agent. Some issues remain to be settled with respect to the social welfare loss. One of them concerns the class of distributions that are the most relevant to actual applications, and another the generalization of the analysis to continuous probability measures.

Appendix A

We here summarize the symbols and parameters used throughout the paper.

CDFC: convexity of the distribution function condition.
FOA: first-order approach.
MLR: monotone likelihood ratio.
IC: incentive compatibility.
IR: individual rationality.
LL: limited liability constraints.
MPLP(e): minimum payment linear program with respect to an effort e.
S = {1, . . . , S}: set of outcome indices.
c(e): cost with respect to an effort e.
π(e) = (π_1(e), . . . , π_S(e)): outcome probabilities with respect to an effort e.
y = (y_1, . . . , y_S): outcome (revenue) vector.
w = (w_1, . . . , w_S): wage schedule.
E: set of effort levels.
E^P: set of optimal efforts for the principal.
RE: set of relevant efforts.
d: balancing effort.
m: mode of u^{SW} with respect to the set of efforts.
ρ(π, y, c, E): social welfare loss associated with π, y, c and E.
u^{SW}(e): social welfare with respect to an effort e.
u^P(e): principal's expected utility with respect to an effort e.
u^{SO}: optimal social welfare.
sup_{π, y, c} ρ(π, y, c, E): worst-case social welfare loss.
References

Acemoglu, D., Bimpikis, K., & Ozdaglar, A. (2009). Price and capacity competition. Games and Economic Behavior, 66(1), 1–26.
Acemoglu, D., & Ozdaglar, A. (2007). Competition and efficiency in congested markets. Mathematics of Operations Research, 32(1), 1–31.
Balmaceda, F., Balseiro, S. R., Correa, J. R., & Stier-Moses, N. E. (2010). The cost of moral hazard and limited liability in the principal-agent problem. In A. Saberi (Ed.), Internet and network economics. Lecture Notes in Computer Science, Vol. 6484 (pp. 63–74). Berlin Heidelberg, Germany: Springer.
Conlon, J. R. (2009). Two new conditions supporting the first-order approach to multisignal principal–agent problems. Econometrica, 77(1), 249–278.
Jewitt, I. (1988). Justifying the first-order approach to principal-agent problems. Econometrica, 56(5), 1177–1190.
Koutsoupias, E., & Papadimitriou, C. H. (2009). Worst-case equilibria. Computer Science Review, 3(2), 65–69.
Laffont, J.-J., & Martimort, D. (2001). The theory of incentives: The principal-agent model. Princeton, NJ, USA: Princeton University Press.
Liu, B. (2008). The mathematics of principal-agent problems (Master's thesis). University of Victoria.
Milgrom, P. R. (1981). Good news and bad news: Representation theorems and applications. The Bell Journal of Economics, 12(2), 380–391.
Moulin, H. (2008). The price of anarchy of serial, average and incremental cost sharing. Economic Theory, 36(3), 379–405.
Moulin, H. (2009). Almost budget-balanced VCG mechanisms to assign multiple objects. Journal of Economic Theory, 144(1), 96–119.
Nisan, N., Roughgarden, T., Tardos, E., & Vazirani, V. V. (2007). Algorithmic game theory. New York, NY, USA: Cambridge University Press.
Rockafellar, R. T. (1997). Convex analysis. Princeton, NJ, USA: Princeton University Press.
Rogerson, W. P. (1985). The first-order approach to principal-agent problems. Econometrica, 53(6), 1357–1367.
Rothschild, M., & Stiglitz, J. E. (1970). Increasing risk: I. A definition. Journal of Economic Theory, 2(3), 225–243.
Shapiro, A. (2005). On duality theory of convex semi-infinite programming. Optimization, 54(6), 535–543.
Su, C.-L. (2005). Equilibrium problems with equilibrium constraints: Stationarities, algorithms, and applications (Ph.D. thesis). Stanford, CA, USA: Stanford University.
van Ackere, A. (1993). The principal/agent paradigm: Its relevance to various functional fields. European Journal of Operational Research, 70(1), 83–103.