Manipulation of voting schemes with restricted beliefs


Journal of Mathematical Economics 44 (2008) 1232–1242


Nicolas Gabriel Andjiga a, Boniface Mbih b,∗, Issofa Moyouwou a

a École Normale Supérieure, Université de Yaoundé 1, BP 47, Yaoundé, Cameroon
b CREM UMR CNRS 6211, Université de Caen, Esplanade de la Paix, 14032 Caen, France

Article history: Received 28 March 2006; Received in revised form 20 February 2008; Accepted 22 February 2008; Available online 29 February 2008.

Abstract

In this paper we define manipulation with restricted beliefs as the possibility for some voter to have an insincere preference ordering that dominates the sincere one within the given individual beliefs over other agents' preferences. We then show that all non-dictatorial voting schemes are manipulable in this sense, up to a given threshold.

Keywords: Strategy-proofness; Voting schemes; Restricted beliefs

1. Introduction

The basic hypothesis in social choice theory is that social outcomes depend on individual agents' preferences. But it is also well known that all non-dictatorial voting schemes – or voting procedures – leave opportunities for individual misrepresentation of preferences, that is, the preferences reported by the agents may profitably differ from their actual preferences. This result, known as the main theorem on the manipulation of voting schemes, was first conjectured by Dummett and Farquharson (1961) and then proved about 30 years ago by Gibbard (1973) and Satterthwaite (1975). Roughly, the basic theorem states that if a voting scheme selects a unique outcome and gives no incentive to any individual to misrepresent her preferences, then it must be dictatorial, in the sense that it selects the alternative ranked first by some particular agent. The scope of this result has been shown to be very large, and in particular, the non-manipulability property appears to be equivalent to Arrow's conditions on social welfare functions. Then, as for Arrow's theorem, many directions have been explored in order to examine the consequences of this result and its robustness. To cite only two aspects, first the notion of manipulation has been defined in slightly different ways (see for example Gärdenfors, 1976 or Pattanaik, 1976), and second frequency calculations have been undertaken to evaluate the probability of the phenomenon (see for example Lepelley and Mbih, 1994, or Favardin and Lepelley, 2006).

In this paper we are concerned with a different aspect of the problem. The non-manipulability (or strategy-proofness) property requires every individual to truthfully report her preferences. In other words, a voting scheme is strategy-proof if every rational agent's choice of what preferences to express is based only on her own values, and is independent from what preferences she believes the other agents will report. In game-theoretic terms, strategy-proofness requires the sincere preference of every agent to be a dominant strategy over the whole set of other agents' strategic possibilities. In other terms, when agents have no reason to believe that the strategic possibilities open to other agents are restricted in some way, one can always find some agent such that her own sincere preference ordering will not be the best choice of strategy in

∗ Corresponding author. Tel.: +33 231565425. E-mail address: [email protected] (B. Mbih). 0304-4068/$ – see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.jmateco.2008.02.003


some circumstance. The goal of this paper is to examine what happens when individuals believe that other agents' strategic possibilities are restricted. One interpretation of this possibility amounts to assuming that individuals have some information or some beliefs that allow them to eliminate some possibilities, but not enough to know exactly what the actual actions of other agents will be. In this context, our concept of manipulation is somewhat different from the usual one. We say that a voting scheme is manipulable under restricted beliefs if, under the given beliefs and for some individual, the sincere preference ordering is dominated by an insincere one. We start by defining different orders of beliefs on voters' actions, ranging from total knowledge of other agents' actions to unrestricted beliefs; we then show that there is a threshold of beliefs up to which all voting schemes are manipulable, and finally we provide a value of that threshold. In other terms, we show that there exists an order of beliefs necessary for all voting schemes to be manipulable, and beyond that threshold, some voting schemes are still manipulable while others are not.

The problem of preference misrepresentation under restricted beliefs on voters' actions has been addressed before this work by some authors. An example is Sengupta (1980), in the context of a three-element set of alternatives and a class of voting schemes based on pairwise comparisons. He shows that, in order for some individual to successfully misrepresent her preferences, it may be sufficient that she knows only the best or the worst alternative in every individual sincere preference ranking of the other voters, and this permits her to manipulate under the corresponding restricted beliefs. Another example is Moulin (1981). He addresses two extreme cases: in one situation, no individual has any information – beliefs are unrestricted – about other agents' actions, and this leads to prudent behavior of the maximin type; at the other extreme, everybody has a complete knowledge of every other agent's sincere preference (but not necessarily of the preference she will express), and then the underlying behavior is sophistication – which induces some type of beliefs restriction – in Farquharson's sense (see Farquharson, 1969), that is, every individual eliminates all preference relations which in all circumstances lead to an outcome at most as preferred as the outcome of some other preference relation. Bebchuk (1980) proves that if no individual has any information about other agents' actions and if the voting scheme is neutral and positively responsive, then it is non-manipulable. More recently, Majumdar and Sen (2004) use the concept of ordinally Bayesian incentive compatibility – a weakening of strategy-proofness based on the hypothesis that every voter seeks to maximize her expected utility with respect to her prior beliefs over the other agents' actions – to study the manipulation of voting schemes.

In this paper we assume that agents do not necessarily have a complete knowledge of other participants' actions, that is, of the orderings they submit. The Gibbard–Satterthwaite theorem states that all non-dictatorial voting schemes defined on the universal domain and with more than two alternatives in their range are manipulable.
The conventional decision-theoretic interpretation of this notion of manipulation is that voters may want to misrepresent their preferences when they are allowed to have unrestricted beliefs about the actions of other voters. We show that the Gibbard–Satterthwaite impossibility result can also be derived from our restricted beliefs approach. This shows, once again, the robustness of the Gibbard–Satterthwaite theorem.

The paper is organized as follows: in Section 2 we explain what we mean by restricted beliefs on voters' actions, and we define levels of beliefs. Section 3 is an exposition of our results, and Section 4 concludes the paper with some general remarks and open problems.

2. Strategic behavior with restricted beliefs

Assume a finite set N of individuals or voters, with cardinality n. As a group, these individuals have to choose one alternative among m, within a finite set A. Let Ri denote a preference relation for individual i, which is a linear order (complete, antisymmetric and transitive binary relation) over A, and let L denote the set of all individual linear orders over A. A preference profile on A is an n-tuple RN = (R1, . . . , Rn) of individual preferences, one for each individual, and LN will denote the set of all profiles. A preference profile, or a profile for short, will sometimes be written RN = (Ri, R−i), where R−i = (R1, . . . , Ri−1, Ri+1, . . . , Rn) will be called a contingency for individual i – or simply a contingency, where there is no ambiguity – to indicate the (n − 1)-tuple obtained from an n-tuple by removing individual i's preference order; the set of all contingencies will be denoted L−i. Given a non-empty subset S of N, a candidate x and a profile RN, RS refers to the preferences of voters in S, and R−S to the preferences of voters who are not members of S. A voting scheme is a social choice function (hereafter denoted SCF), that is, a mapping f whose domain is the set LN of all profiles and whose range is the set A of alternatives.

Given an SCF, there is an opportunity for strategic behavior in the usual sense when, at some profile RN, some individual can, by expressing an insincere preference order, secure an alternative she prefers to the alternative selected by the voting scheme if she votes according to her sincere preference order, given the rankings reported by the other voters. In other terms, given contingency R−i of others' actions or strategies, player i with sincere preference Ri prefers the outcome chosen by the voting scheme when she expresses some preference Qi to the outcome when she expresses Ri. By so doing, she expects to secure an outcome better from her viewpoint. It is easy to see that the behavior described above will actually depend on the beliefs of voter i about the other voters' actions. To illustrate this point, consider the example below:





 









Example 1. Let N = {1, 2, 3, 4, 5, 6} and assume N1 = {1}, N2 = {2, 3} and N3 = {4, 5, 6} are homogeneous groups whose members are supposed to vote in the same way. They have to choose an alternative in A = {x, y, z} using plurality rule. The plurality rule selects at each profile the alternative ranked first by the highest number of voters, with ties broken


in favor of the alternative coming first in the alphabetical order xyz. Further, assume individual 1's sincere preference order is R1 = zxy. Under plurality rule, voting sincerely is equivalent to casting one's vote for one's best ranked candidate. Let us then consider the two following situations:

Case 1. Individual 1 has no other information about the preferences of members of N2 and N3. In other words, her beliefs about N2 and N3's actions are restricted to all profiles at which the members of each group all vote in the same way. The table below exhibits all possible profiles and the corresponding selected outcomes; every contingency is written (a, b), which means that members of N2 all vote for a, and members of N3 all vote for b. The table shows that in no circumstance does voting for y provide a better result from 1's viewpoint than voting for z. This implies that, if 1 is rational, she will never vote for y. The table also shows that, on the one hand, in some situation, namely (x, y), voting for x yields a better result than voting sincerely; but on the other hand, it also appears that there are situations where voting for z yields a better alternative, from her viewpoint. Contingency (x, y) creates an opportunity for individual 1 to misrepresent her preferences. But since she does not know anything about what preferences the other voters are going to report, voting sincerely can be considered as quite natural.

Votes of          Votes of members of N2 and N3
individual 1      (x,x)  (x,y)  (x,z)  (y,x)  (y,y)  (y,z)  (z,x)  (z,y)  (z,z)
x                   x      x      x      x      y      z      x      y      z
y                   x      y      z      x      y      y      x      y      z
z                   x      y      z      x      y      z      x      y      z

Case 2. Individual 1 knows (or believes) that members of N3 will not vote for z. Then all profiles and associated outcomes can now be summarized as follows:

Votes of          Votes of members of N2 and N3
individual 1      (x,x)  (x,y)  (y,x)  (y,y)  (z,x)  (z,y)
x                   x      x      x      y      x      y
y                   x      y      x      y      x      y
z                   x      y      x      y      x      y

First note that beliefs about the way members of N3 are (not) going to vote have now reduced the set of all possible profiles. But more importantly, voting sincerely does not give any advantage to individual 1, as compared with voting for x or for y. On the contrary, for all contingencies, voting for x provides an outcome at least as good as the outcome of sincere voting, and if the contingency is (x, y) it provides an outcome individual 1 (strictly) prefers to the outcome of sincere voting. Hence, if she acts rationally, she will misrepresent her preferences by voting for x instead of voting for z.

The moral of this example is the following: depending on the beliefs about others' actions, it will or will not be possible for some individual to successfully misrepresent her preferences. But note that this is true also for the usual notion of strategic behavior. What is different is the fact that in the usual notion of strategic behavior, the individual in position to misrepresent her preferences has unrestricted beliefs over the actions of other individuals, so that her preference order is not a dominant strategy, whereas in the notion we consider in this paper, her beliefs on other agents are restricted, but given these beliefs her sincere preference is dominated by an insincere one.
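For readers who want to check such dominance claims mechanically, the following Python sketch recomputes the two tables of Example 1 and tests whether voting for x dominates sincere voting for individual 1. It is only an illustration of this example; the function names and conventions are ours, not part of the formal development:

from itertools import product

ALTERNATIVES = ["x", "y", "z"]        # alphabetical order, also used for tie-breaking

def plurality(ballots):
    """Plurality winner; ties are broken in favor of x, then y, then z."""
    scores = {a: ballots.count(a) for a in ALTERNATIVES}
    best = max(scores.values())
    return next(a for a in ALTERNATIVES if scores[a] == best)

def outcome(vote1, contingency):
    """Individual 1 casts vote1; N2 = {2,3} all vote a and N3 = {4,5,6} all vote b."""
    a, b = contingency
    return plurality([vote1, a, a, b, b, b])

rank = {"z": 0, "x": 1, "y": 2}       # individual 1's sincere order is z > x > y

def weakly_prefers(u, v):
    return rank[u] <= rank[v]

case1 = list(product(ALTERNATIVES, repeat=2))       # Case 1: all nine contingencies
case2 = [(a, b) for (a, b) in case1 if b != "z"]    # Case 2: N3 will not vote for z

def dominates(insincere, sincere, contingencies):
    """True if the insincere vote is never worse and strictly better at least once."""
    never_worse = all(weakly_prefers(outcome(insincere, c), outcome(sincere, c))
                      for c in contingencies)
    sometimes_better = any(outcome(insincere, c) != outcome(sincere, c)
                           and weakly_prefers(outcome(insincere, c), outcome(sincere, c))
                           for c in contingencies)
    return never_worse and sometimes_better

print(dominates("x", "z", case1))   # expected: False (sincere voting is not dominated in Case 1)
print(dominates("x", "z", case2))   # expected: True  (voting for x dominates sincere voting in Case 2)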

We now define our notion of restricted beliefs more formally.

Definition 1. Let i ∈ N. A beliefs support set for i is any non-empty subset of L−i.

Notice that individual i's beliefs on other agents' actions lead her to eliminate all rankings of alternatives – and thus all contingencies – that are incompatible with those beliefs. The set of all remaining contingencies is her beliefs support set. Hereafter, singletons of L−i are fully restricted support sets for individual i; L−i itself is the unrestricted support set for i; restricted support sets are all subsets D with more than one contingency. We now distinguish restricted beliefs on voters' actions by the size of the corresponding beliefs support sets. There are many other ways of doing this: for example, one can take into account the particular type of messages actually used by a specific SCF (like the most preferred alternatives under plurality rule) or some descriptive properties of contingencies related to social organization (like members of the same party voting in the same way). These possibilities undoubtedly raise very interesting questions, but we omit them in this paper. For the sake of simplicity, fully restricted support sets {H−i} will in the sequel be written H−i.

Definition 2. A beliefs support set of order k for a voter i is any subset D ⊆ L−i such that |D| = k.
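For concreteness, the basic objects of Definitions 1 and 2 are easy to enumerate for small m and n. A minimal Python sketch (the names are ours) generating the set L of linear orders, the set L−i of contingencies and the beliefs support sets of a given order:

from itertools import combinations, permutations, product

A = ("x", "y", "z")                               # m = 3 alternatives
n = 3                                             # three voters, for illustration

L = list(permutations(A))                         # all linear orders over A, |L| = m!
L_minus_i = list(product(L, repeat=n - 1))        # all contingencies for one voter, |L−i| = (m!)^(n−1)

def support_sets_of_order(k):
    """All beliefs support sets of order k for a fixed voter i (Definition 2)."""
    return combinations(L_minus_i, k)

print(len(L), len(L_minus_i))                     # 6 and 36 when m = 3 and n = 3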

Note that the greater the order of the beliefs support set, the less precise are the beliefs on others' actions. To put it another way, the more an individual is informed about others' actions, the smaller the beliefs support set she is facing. Fully restricted beliefs mean that the order of the beliefs support set is 1, which intuitively describes situations in which the given voter expects one and only one contingency to occur. At the other extreme, the beliefs support set of order k = (m!)n−1


describes the case where the beliefs support set covers the whole domain of contingencies. Restricted beliefs appear as soon as the order of the support set is smaller than (m!)n−1.

We now define our notion of manipulation with restricted beliefs; for that purpose we shall always assume that every player is rational, which means that she will never choose to report a preference relation (a strategy) which may lead to a less preferred outcome as compared with the outcome when she expresses the sincere preference. We first introduce further notation. Given a, b ∈ A and R ∈ L, a R̄ b holds if a = b or a is ranked above b according to R, and a R b holds if a is ranked above b.

Definition 3. Let f be an SCF, i ∈ N, Ri, Qi ∈ L and let D be a beliefs support set for i. We shall say that f is D-manipulable via (i, Ri, Qi) if

(i) for all R−i ∈ D, f(Qi, R−i) R̄i f(Ri, R−i), and
(ii) for some R−i ∈ D, f(Qi, R−i) Ri f(Ri, R−i).

Intuitively, a player is in position to manipulate an SCF given a beliefs support set if the two following conditions are satisfied: (1) in every circumstance the insincere strategy yields an outcome at least as good as the outcome of the sincere one, and (2) in at least one circumstance the insincere strategy yields a strictly better outcome. The next notion is more demanding: it requires that in every circumstance the insincere strategy yields an outcome strictly preferred to the outcome of the sincere strategy.

Definition 4. Let f be an SCF, i ∈ N, Ri, Qi ∈ L and let D be a beliefs support set for i. We shall say that f is strictly D-manipulable via (i, Ri, Qi) if for all R−i ∈ D, f(Qi, R−i) Ri f(Ri, R−i).

An SCF can be interpreted as the outcome function of a game in normal form given the sincere preferences RN = (R1, . . . , Rn). In such a game, voters are the players, alternatives are outcomes, and L is the set of strategies open to each player. In game theoretic words, Definition 3 says that strategy Qi dominates strategy Ri in the subgame where the set of strategy profiles is restricted to the Cartesian product L × D. The introduction of this approach in social choice theory originates in Farquharson (1969) and has been subsequently used by many authors in the context of manipulation (see for example Sengupta, 1978, or Mbih, 1995). f is D-manipulable if, in the subgame where the strategy profile set is L × D, for some individual i the sincere strategy is dominated by an insincere one. In the strict version (Definition 4), Qi strictly dominates Ri in the subgame where the set of profiles is L × D.
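Definitions 3 and 4 translate directly into a small dominance test. The Python sketch below assumes an SCF f is given as a function from a reported profile (a tuple of linear orders, themselves tuples of alternatives) to an alternative; the helper names are ours:

def weakly_prefers(R, a, b):
    """a R̄ b: a equals b or a is ranked above b in the linear order R."""
    return a == b or R.index(a) < R.index(b)

def strictly_prefers(R, a, b):
    """a R b: a is ranked strictly above b in R."""
    return a != b and R.index(a) < R.index(b)

def full_profile(i, Ri, contingency):
    """Rebuild the profile (Ri, R−i) from voter i's report and a contingency."""
    others = list(contingency)
    return tuple(others[:i] + [Ri] + others[i:])

def D_manipulable(f, i, Ri, Qi, D):
    """Definition 3: Qi is never worse than Ri on D and strictly better at least once,
    outcomes being compared with the sincere order Ri."""
    pairs = [(f(full_profile(i, Qi, c)), f(full_profile(i, Ri, c))) for c in D]
    return (all(weakly_prefers(Ri, a, b) for a, b in pairs)
            and any(strictly_prefers(Ri, a, b) for a, b in pairs))

def strictly_D_manipulable(f, i, Ri, Qi, D):
    """Definition 4: Qi is strictly better than Ri on every contingency in D."""
    return all(strictly_prefers(Ri, f(full_profile(i, Qi, c)), f(full_profile(i, Ri, c)))
               for c in D)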

Definition 5. Let f be an SCF and k ∈ {1, 2, . . . , (m!)n−1}. f is manipulable (resp. strictly manipulable) under beliefs of order k, denoted Mk (resp. SMk), if for some i ∈ N, Ri, Qi ∈ L and D ⊆ L−i, f is D-manipulable (resp. strictly D-manipulable) via (i, Ri, Qi) with |D| = k.

To stress the difference between the usual notion of manipulation and our notion of manipulation with restricted beliefs, note that we know from the Gibbard–Satterthwaite theorem that all non-dictatorial SCFs are M1, since one can always find some contingency at which some non-sincere preference does better than the sincere one for some agent; equivalently, the sincere preference is not dominant on the full domain of profiles. For manipulation to be possible within some beliefs support set, we require the sincere preference not only to be non-dominant, but further to be dominated by some other preference relation. We thus obtain "degrees" of manipulability for SCFs, from M1 to Mk with k = (m!)n−1. However, note that an Mk SCF is not necessarily manipulable by some individual with any beliefs support set of order k. Besides, an SCF manipulable under unrestricted beliefs is an SCF f for which there exists some player for whom the sincere strategy is dominated by some other strategy in the game in normal form associated with f (see Moyouwou, 2004, or Andjiga et al., 2005). We can now state our results. We do this in the next section.

3. Manipulability results

We start with a proposition giving some precision about the relationships between different orders of beliefs.

3.1. Preliminary results

Proposition 1. Let f be an SCF and k ∈ {2, . . . , (m!)n−1}.

(i) f is Mk ⇔ for all k′ ∈ {1, 2, . . . , k}, f is Mk′.
(ii) f is SMk ⇔ for all k′ ∈ {1, 2, . . . , k}, f is SMk′.
(iii) f is SMk ⇒ f is Mk.
(iv) f is Mk ⇒ there exists k′ such that 1 ≤ k′ ≤ k and f is SMk′.


Proof. We first prove statement (i). Suppose that for all k′ ∈ {1, 2, . . . , k}, f is Mk′; then, in particular, f is Mk. Reciprocally, suppose f is Mk and consider some k′ ∈ {1, 2, . . . , k − 1}. By the definition of an Mk SCF, there exist i ∈ N, Ri, Qi ∈ L and some beliefs support set of order k, D ⊆ L−i, such that f is D-manipulable via (i, Ri, Qi). By Definition 3 there exists some contingency H−i ∈ D such that f(Qi, H−i) Ri f(Ri, H−i). Let D′ be a subset of D − {H−i} with cardinality k′ − 1. Clearly, f is also D′ ∪ {H−i}-manipulable via (i, Ri, Qi). Hence f is Mk′ since |D′ ∪ {H−i}| = k′. Statement (ii) can be proved in the same manner, and statements (iii) and (iv) are straightforward. □

In words, the proposition above says that if an SCF is manipulable under beliefs of some order k, then it is still manipulable under beliefs of any order less than k, and in particular under fully restricted beliefs. As a consequence, if an SCF is manipulable under unrestricted beliefs, then it is manipulable under restricted beliefs of every order k, and manipulable under fully restricted beliefs.

Definition 6. The threshold of manipulability of a manipulable SCF f is the strictly positive integer, denoted tf, such that for all k ∈ {1, 2, . . . , (m!)n−1}:

(i) k ≤ tf ⇒ f is Mk,
(ii) k > tf ⇒ f is not Mk.

We shall assume that the threshold of manipulability of a non-manipulable SCF f is tf = 0.

Definition 7. The threshold of strict manipulability of a manipulable SCF f is the strictly positive integer, denoted sf, such that for all k ∈ {1, 2, . . . , (m!)n−1}:

(i) k ≤ sf ⇒ f is SMk,
(ii) k > sf ⇒ f is not SMk.





By Proposition 1, and since any beliefs support set is of order k ∈ {1, 2, . . . , (m!)n−1}, tf and sf are always well-defined for every manipulable SCF f. Intuitively, the definitions above describe tf and sf as the sizes of the largest sets of contingencies at which some individual has an incentive to misrepresent her preferences, given f.

3.2. Results on manipulation

We shall first present two lemmas showing that for all SCFs there exists a level of restricted beliefs at which some individual has an incentive to misrepresent her preferences.

Definition 8. Given a linear order R on A and two alternatives x and y such that y is ranked just below x in R, an elementary deterioration of x in R is the linear order R(x) obtained from R by permuting x and y.

Lemma 1. Let f be an SCF with m ≥ 3, i ∈ N, Ri, Qi ∈ L and H−i ∈ L−i. Suppose f(Ri, H−i) = x and f(Qi, H−i) = x′. If x Ri x′ and Qi = Ri(b) for some b ∈ A − {x}, then f is M(m−1)!.

Lemma 2. Let f be an SCF with m ≥ 3, i ∈ N, Ri, Qi ∈ L and R−i ∈ L−i. Suppose f(Ri, R−i) = x and f(Qi, R−i) = x′. If (x′ R̄i x and x′ ≠ x) and Qi = Ri(b) for some b ∈ A − {x}, then f is M(m−1)!.

Proposition 2. Let f be an SCF with m ≥ 3, i ∈ N, Ri, Qi ∈ L and R−i ∈ L−i. Suppose f(Ri, R−i) = x and f(Qi, R−i) = x′. If x ≠ x′ and Qi = Ri(b) for some b ∈ A − {x}, then f is M(m−1)!.

Proof. The proposition straightforwardly follows from Lemmas 1 and 2, and from the fact that Ri is a linear order over A. □

Proofs of Lemmas 1 and 2 are provided in Appendix A. But the general intuition of the proofs is as follows: under the hypotheses stated in each lemma, we consider two individuals i and j and we suppose that the preferences of players in N − {i, j} are fixed and equal to their preferences in R−i. By varying the preferences of i and j, we observe that there exists some restricted beliefs support set of order (m − 1)! for individual i or j under which f is manipulable.

We now use Proposition 2 to derive our main result on the manipulation of SCFs under restricted beliefs. We also use an important equivalence on manipulable SCFs due to Muller and Satterthwaite (1977), whose presentation requires further notation. Given x ∈ A, S ⊆ N and RN, QN ∈ LN:

• Ri ≽x Qi means that for all y ∈ A : x Ri y implies x Qi y,
• RN ≽x QN holds if and only if for all i ∈ N : Ri ≽x Qi.

Definition 9. An SCF f satisfies the strong positive association property (SPA) if, for all RN, QN ∈ LN and all x ∈ A, [f(RN) = x and RN ≽x QN] ⇒ f(QN) = x.
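For small N and A, the SPA property of Definition 9 can be verified by brute force. The sketch below is our own code and only an illustration; it enumerates all pairs of profiles, and a dictatorship is used as a test case because it obviously satisfies SPA:

from itertools import permutations, product

def weakly_above(R, a, b):
    return a == b or R.index(a) < R.index(b)

def improves_x(x, Ri, Qi):
    """Ri ≽x Qi: every alternative weakly below x in Ri stays weakly below x in Qi."""
    return all(weakly_above(Qi, x, y) for y in Ri if weakly_above(Ri, x, y))

def satisfies_SPA(f, A, n):
    """Definition 9, checked by enumeration (feasible only for small n and |A|)."""
    L = list(permutations(A))
    profiles = list(product(L, repeat=n))
    for RN in profiles:
        x = f(RN)
        for QN in profiles:
            if all(improves_x(x, Ri, Qi) for Ri, Qi in zip(RN, QN)) and f(QN) != x:
                return False
    return True

def dictatorship_of_voter_0(RN):
    return RN[0][0]

print(satisfies_SPA(dictatorship_of_voter_0, ("x", "y", "z"), 2))   # expected: True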


Theorem 1 (Muller and Satterthwaite, 1977). An SCF is not M1 if and only if it satisfies SPA.

Using Theorem 1 and some observations on individual preferences, we prove the theorem below, which brings precision about what order of restricted beliefs is necessary for all SCFs to be manipulable.

Theorem 2. Let f be an SCF and suppose m ≥ 3. Then, f is M1 if and only if f is M(m−1)!.

Proof. Sufficiency: Since (m − 1)! ≥ 2, every M(m−1)! SCF is also M1, by Proposition 1.

Necessity: Suppose f is M1. By Theorem 1, f does not satisfy the SPA property. Hence there exist RN, QN ∈ LN and x ∈ A such that f(RN) = x, f(QN) ≠ x and RN ≽x QN. For all S ⊆ N, let (RN/QS) stand for the profile obtained from RN by replacing Ri with Qi for all i ∈ S. Let xi = f(RiN), where R0N = RN and, for all i ∈ {1, 2, . . . , n}, RiN = (RN/Q{1,2,...,i}). Then x0 = x and xn = f(QN) ≠ x. Consider p the least integer i ∈ {1, 2, . . . , n} such that xi ≠ x. Thus f(Rp−1N) = f(Rp, Rp−1−p) = x and f(RpN) = f(Qp, Rp−1−p) ≠ x. Consequently Rp ≠ Qp and, since RN ≽x QN, Rp ≽x Qp.

Suppose then that there exists a finite sequence of orderings (Hj)j=1,2,...,l which satisfies H1 = Rp, Hl = Qp and, for all j ∈ {1, 2, . . . , l − 1}, there exists aj ∈ A − {x} such that Hj+1 = Hj(aj). Let yj = f(Hj, Rp−1−p) for all j ∈ {1, 2, . . . , l}. Then y1 = x and yl ≠ x. Consider q the least integer j ∈ {2, . . . , l} such that yj ≠ x. Thus f(Hq−1, Rp−1−p) = x, f(Hq, Rp−1−p) ≠ x and Hq = Hq−1(aq−1) with aq−1 ∈ A − {x}. By Proposition 2, f is M(m−1)!.

To conclude the proof, we have to prove the existence of a sequence (Hj)j=1,2,...,l as described above. Before this, (i) we introduce a further notation: R[k] is the alternative ranked kth in R; and (ii) let us point out two important observations on individual preferences which we use and which present no major difficulty:

Observation 1: For two distinct orderings R′ and R″, there exists some k ∈ {1, 2, . . . , m − 1} such that R′[k + 1] R″ R′[k], that is, R″ ranks R′[k + 1] above R′[k] (to see this, just imagine the contrary and observe that the two orderings coincide).

Observation 2: Given k ∈ {1, 2, . . . , m − 1}, y ∈ A and two orderings R′ and R″ such that R′[k + 1] R″ R′[k] and R′ ≽y R″, then R′[k] ≠ y, R′(R′[k]) ≽y R″ and |R″ \ R′(R′[k])| = |R″ \ R′| − 1 (just remember the notation R′ ≽y R″ and that R′(R′[k]) differs from R′ only by the ordering of R′[k + 1] and R′[k]).

Step 1. Let H1 = Rp. Since Rp ≠ Qp, by Observation 1 consider k1 ∈ {1, 2, . . . , m − 1} such that H1[k1 + 1] Qp H1[k1]. Let a1 = H1[k1] and H2 = H1(a1). Since H1 = Rp ≽x Qp, by Observation 2, a1 ≠ x, H2 ≽x Qp and |Qp \ H2| = |Qp \ H1| − 1.

Step 2. If H2 = Qp then the process ends. Otherwise the same procedure is applied to H2 to construct H3 and a2 such that a2 ≠ x, H3 ≽x Qp and |Qp \ H3| = |Qp \ H2| − 1.

After |Qp \ H1| = l − 1 steps, the process generates (Hj)1≤j≤l and (aj)1≤j≤l−1 such that, for all j ∈ {1, 2, . . . , l − 1}: Hj+1 = Hj(aj), aj ≠ x and |Qp \ Hl| = 0. Therefore Hl = Qp. □
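The sequence constructed in Steps 1 and 2 is nothing more than a chain of adjacent swaps from Rp to Qp that never moves x down. The Python sketch below is our own rendering of that construction, offered only as an illustration of the argument:

def elementary_deterioration(R, a):
    """Definition 8: swap a with the alternative ranked just below it in R."""
    R = list(R)
    k = R.index(a)
    R[k], R[k + 1] = R[k + 1], R[k]
    return tuple(R)

def deterioration_path(R, Q, x):
    """Build H1 = R, ..., Hl = Q by elementary deteriorations of alternatives other than x.
    The invariant H ≽x Q (assumed to hold for R itself) guarantees that x is never swapped."""
    H, path = tuple(R), [tuple(R)]
    while H != tuple(Q):
        # Observation 1: some adjacent pair of H is ranked the other way round in Q.
        k = next(k for k in range(len(H) - 1) if Q.index(H[k + 1]) < Q.index(H[k]))
        a = H[k]
        assert a != x, "guaranteed by Observation 2 when H weakly improves x relative to Q"
        H = elementary_deterioration(H, a)
        path.append(H)
    return path

# Example with m = 4: R = axbc, Q = xcab, and R ≽x Q (x only moves up from R to Q).
print(deterioration_path(("a", "x", "b", "c"), ("x", "c", "a", "b"), "x"))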



According to Theorem 2, every SCF manipulable in the Gibbard–Satterthwaite sense remains manipulable with restricted beliefs, even with the strengthening of the notion of manipulation which requires the sincere preference order to be dominated in the beliefs support set. We now recall the Gibbard–Satterthwaite theorem.

Definition 10. An SCF f is dictatorial if there exists i ∈ N such that for all possible profiles RN, f(RN) is i's most preferred candidate according to her sincere preference Ri.

Theorem 3 (Gibbard, 1973 and Satterthwaite, 1975). Every non-dictatorial SCF selecting one alternative among at least three possible outcomes is M1.

Corollary 1. Suppose m ≥ 3. Then an SCF f is manipulable under unrestricted beliefs if and only if f is manipulable under restricted beliefs.

Proof.

Straightforward from Proposition 1 and Theorem 2.



The next result is an extension of the Gibbard–Satterthwaite theorem from unrestricted beliefs to restricted beliefs.

Corollary 2. Every non-dictatorial SCF f selecting one alternative among at least three possible outcomes is manipulable under restricted beliefs. Furthermore, the manipulability threshold of f satisfies tf ≥ (m − 1)!.

Proof.

Straightforward from Theorems 2 and 3. 

3.3. Remarks on strict manipulation

We shall now use two examples to show how different the notions of manipulation and strict manipulation with restricted beliefs can be, depending on the SCF under consideration.


Example 2. Let i be a given voter and f the SCF defined by f(RN) = min Ri. Voter i always guarantees the election of her most preferred candidate, namely max Ri, instead of her least preferred candidate, by reporting a preference order where max Ri is ranked last. Therefore sf = tf = (m!)n−1. Clearly, f is both manipulable and strictly manipulable with unrestricted beliefs. Furthermore, this example shows that there exist SCFs that are both manipulable and strictly manipulable up to the same order of the beliefs support set.

Example 3. Suppose that A contains more than two candidates. Let a be a given alternative, R0 a fixed linear order such that max R0 ≠ a, and g the SCF defined by

g(RN) = max R    if all voters agree on the same order R ≠ R0 (Ri = R for all i ∈ N)
g(RN) = a        otherwise
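For concreteness, g is easy to write down in code. The following sketch is our own rendering, with R0 and a as parameters, and follows the definition above:

def make_g(R0, a):
    """Example 3: g selects the common top alternative when all voters report the same
    order R different from R0, and the fixed alternative a otherwise."""
    def g(RN):
        first = RN[0]
        if all(Ri == first for Ri in RN) and first != R0:
            return first[0]          # max R: the top-ranked alternative of the common order
        return a
    return g

# Usage sketch with A = {x, y, z}: R0 = zyx (so max R0 = z differs from a = x).
g = make_g(("z", "y", "x"), "x")
print(g((("y", "x", "z"), ("y", "x", "z"))))   # all agree on yxz, which differs from R0: y is selected
print(g((("z", "y", "x"), ("z", "y", "x"))))   # all agree on R0 itself: the default a = x is selected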

For a voter whose sincere preference order is R0, reporting a distinct preference order Qi with max R0 ranked first is a dominant strategy. Hence tg = (m!)n−1.

But suppose g is strictly D-manipulable via (i, Ri, Qi). Let R−i ∈ D. Suppose g(Ri, R−i) = a; then g(Qi, R−i) ≠ a by strict manipulability. Therefore Rj = Qi for all voters j ≠ i, and consequently D = {R−i ∈ L−i : R−i = (Qi, Qi, . . . , Qi)}. Similarly, if g(Ri, R−i) ≠ a, then D = {R−i ∈ L−i : R−i = (Ri, Ri, . . . , Ri)}. Hence sg = 1.

Clearly, g is manipulable under unrestricted beliefs but is strictly manipulable only under fully restricted beliefs. Moreover, this example shows that for the same SCF, the difference between the orders of beliefs support sets up to which the SCF is manipulable and strictly manipulable may be maximal, in the sense that it may be manipulable up to order (m!)n−1 but strictly manipulable only up to order 1. More generally, if we require strict manipulability under restricted beliefs, then it turns out that the most universal – in the sense of being valid for all SCFs – result one can obtain is the Gibbard–Satterthwaite theorem.

3.4. Remarks on the thresholds

Given an SCF f, a voter i ∈ N and two linear orders Ri, Qi ∈ L, consider M(i, Ri, Qi, f) and SM(i, Ri, Qi, f), the subsets of L−i defined by

M(i, Ri, Qi, f) = {H−i ∈ L−i : f(Qi, H−i) R̄i f(Ri, H−i)}
SM(i, Ri, Qi, f) = {H−i ∈ L−i : f(Qi, H−i) Ri f(Ri, H−i)}

It is clear that sf is the maximal cardinality over all beliefs support sets having the form SM(i, Ri, Qi, f). In the same way, tf is the maximal cardinality over all beliefs support sets having the form M(i, Ri, Qi, f) with SM(i, Ri, Qi, f) ≠ ∅. Therefore sf and tf can be evaluated by a complete enumeration of all beliefs support sets of the form SM(i, Ri, Qi, f) and M(i, Ri, Qi, f). Let us evaluate the threshold of manipulability and the threshold of strict manipulability for three well-known SCFs.
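This enumeration is easy to mechanize for small N and A. The Python sketch below (our own code, reusing the conventions of the earlier sketches) computes tf and sf for any SCF given as a function on reported profiles; for the two-voter plurality rule of Example 6 it should reproduce sPR = 2 and tPR = 4:

from itertools import permutations, product

def thresholds(f, A, n):
    """Compute (t_f, s_f) by complete enumeration of the sets M(i, Ri, Qi, f) and
    SM(i, Ri, Qi, f), as described above. Feasible only for small n and |A|."""
    L = list(permutations(A))
    contingencies = list(product(L, repeat=n - 1))

    def strictly_better(R, a, b):                 # a ranked strictly above b in R
        return R.index(a) < R.index(b)

    def full_profile(i, Ri, c):
        return tuple(list(c[:i]) + [Ri] + list(c[i:]))

    t_f = s_f = 0
    for i, Ri, Qi in product(range(n), L, L):
        M, SM = [], []
        for c in contingencies:
            a, b = f(full_profile(i, Qi, c)), f(full_profile(i, Ri, c))
            if a == b or strictly_better(Ri, a, b):
                M.append(c)                       # weak improvement: contributes to M(i, Ri, Qi, f)
                if a != b:
                    SM.append(c)                  # strict improvement: contributes to SM(i, Ri, Qi, f)
        if SM:                                    # Definition 3 requires at least one strict gain
            t_f = max(t_f, len(M))
            s_f = max(s_f, len(SM))
    return t_f, s_f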

Example 4. Consider the Borda rule, hereafter denoted BR, and evaluate sBR and tBR for N = {1, 2} and A = {x, y, z}. Given a profile RN and an alternative x, let

BC(x, RN) = Σi∈N BC(x, Ri), with BC(x, Ri) = |{y ∈ A : x Ri y}|.

BC(x, Ri) is the Borda score of x from Ri, individual i's ranking, and BC(x, RN) is then the total Borda score of x given the profile RN. At each profile the Borda rule selects the alternative with the highest Borda score, with ties broken in favor of the alternative coming first in the alphabetical order xyz. In fact BR is completely defined by the following Feldman (1980) table:

Player 1's strategies    Player 2's actions
                         R1   R2   R3   R4   R5   R6
R1 = xyz                 x    x    x    y    x    x
R2 = xzy                 x    x    x    x    x    z
R3 = yxz                 x    x    y    y    x    y
R4 = yzx                 y    x    y    y    z    y
R5 = zxy                 x    x    x    z    z    z
R6 = zyx                 x    z    y    y    z    z

It follows that

SM(1, R1, Q, BR) = {R4} if Q = R2 and SM(1, R1, Q, BR) = ∅ if Q ≠ R2;  M(1, R1, R2, BR) = {R1, R2, R3, R4, R5}
SM(1, R2, Q, BR) = {R6} if Q = R1 and SM(1, R2, Q, BR) = ∅ if Q ≠ R1;  M(1, R2, R1, BR) = {R1, R2, R3, R5, R6}
SM(1, R3, Q, BR) = {R1} if Q = R4 and SM(1, R3, Q, BR) = ∅ if Q ≠ R4;  M(1, R3, R4, BR) = {R1, R2, R3, R4, R6}
SM(1, R4, Q, BR) = ∅ for all Q ∈ L
SM(1, R5, Q, BR) = {R2} if Q = R6 and SM(1, R5, Q, BR) = ∅ if Q ≠ R6;  M(1, R5, R6, BR) = {R1, R2, R5, R6}
SM(1, R6, R5, BR) = {R4};  M(1, R6, R5, BR) = {R1, R4, R5, R6}

Since SM(1, R, Q, BR) = SM(2, R, Q, BR) and M(1, R, Q, BR) = M(2, R, Q, BR) for all R, Q ∈ L, then sBR = 1 and tBR = 5.



Example 5. Consider the Copeland rule, hereafter denoted CR, and evaluate sCR and tCR for N = {1, 2} and A = {x, y, z}. Given some profile RN and two alternatives x and y, let

C(x, RN) = |C+(x, RN)| − |C−(x, RN)|, where
C+(x, RN) = {y ∈ A : |{i ∈ N : x Ri y}| > |{i ∈ N : y Ri x}|}
C−(x, RN) = {y ∈ A : |{i ∈ N : x Ri y}| < |{i ∈ N : y Ri x}|}

C+(x, RN) is the set of all alternatives which lose against x in a pairwise majority vote and C−(x, RN) is the set of all alternatives which defeat x in a pairwise majority vote. C(x, RN) is then the win–loss record of x given profile RN. At each profile the Copeland rule selects the alternative with the highest win–loss record; ties are broken in favor of individual 1's most preferred alternative. Note that the type of tie-breaking rule has been changed: in fact the Copeland rule with the alphabetical tie-break, N = {1, 2} and A = {x, y, z}, coincides with BR as defined in the preceding example. The Copeland rule CR with individual 1 as a chairman is completely defined by the following Feldman table:

Player 1's strategies    Player 2's actions
                         R1   R2   R3   R4   R5   R6
R1 = xyz                 x    x    x    y    x    x
R2 = xzy                 x    x    x    x    x    z
R3 = yxz                 y    x    y    y    y    y
R4 = yzx                 y    y    y    y    z    y
R5 = zxy                 x    z    z    z    z    z
R6 = zyx                 z    z    y    z    z    z

Given a linear order R = abc on A, the table above shows that

SM(1, abc, Q, CR) = {bca} if Q = acb and SM(1, abc, Q, CR) = ∅ if Q ≠ acb
M(1, abc, acb, CR) = L − {cba}

SM(2, abc, Q, CR) = {bac} if Q = acb, {cba} if Q = bac, and ∅ otherwise
M(2, abc, acb, CR) = M(2, abc, bac, CR) = L − {cab}

Therefore sCR = 1 and tCR = 5.

Example 6. Consider the plurality rule, denoted by PR, as defined in Example 1. With N = {1, 2} and A = {x, y, z}, PR is completely defined by the following Feldman table:

Player 1's strategies    Player 2's actions
                         xyz  xzy  yxz  yzx  zxy  zyx
xyz                      x    x    x    x    x    x
xzy                      x    x    x    x    x    x
yxz                      x    x    y    y    y    y
yzx                      x    x    y    y    y    y
zxy                      x    x    y    y    z    z
zyx                      x    x    y    y    z    z


Since PR is anonymous, in the sense that PR(R, Q) = PR(Q, R) for all R, Q ∈ L, sPR and tPR can be evaluated as the maximal cardinality of beliefs support sets of the form SM(1, R, Q, PR) and M(1, R, Q, PR). From the above table:

SM(1, R, Q, PR) = {yxz, yzx} if R = zxy and Q[1] = x, and SM(1, R, Q, PR) = ∅ otherwise
M(1, zxy, Q, PR) = {xyz, xzy, yxz, yzx} if Q[1] = x

Therefore, sPR = 2 and tPR = 4.

4. Conclusion

The topic of this paper was the study of individual strategic behavior in voting, under the hypothesis that agents can have restricted beliefs on other agents' actions. The critical criterion used to identify such behavior is the domination of the sincere preference relation by an insincere one in the subgame defined by the set of beliefs. We have shown that, provided the range of an SCF contains at least three alternatives, every non-dictatorial SCF is manipulable with restricted beliefs. Moreover, we have provided an order up to which SCFs are all manipulable, and shown that this threshold depends on the number of alternatives. Yet, this does not mean that no SCF is still manipulable if we go on expanding the beliefs of voters on other voters' actions. It only means that for some voting schemes, a larger set of beliefs will by no means allow any individual to misrepresent her preferences and secure an outcome better from her viewpoint (in the sense of the sincere preference relation being dominated). But one can also find other voting schemes which will still be manipulable, even with a larger beliefs support set than that corresponding to the threshold. Also note that, given an SCF and an order at which it is manipulable, not all beliefs support sets will leave room for manipulation. Hence, many issues remain open, among which the three following ones: (1) for a given class of SCFs, the characterization of all beliefs support sets giving an opportunity to manipulate; (2) given an SCF, the determination of the threshold of manipulability; and (3) the characterization of all SCFs manipulable only up to the threshold, which we may call beliefs-minimally manipulable.

Acknowledgements

We are grateful to two anonymous referees for very helpful suggestions.

Appendix A

Proof of Lemma 1. Let f be an SCF with m ≥ 3, i ∈ N, Ri, Qi ∈ L and R−i ∈ L−i such that f(Ri, R−i) = x and f(Qi, R−i) = x′. Suppose that x R̄i x′, x ≠ x′ and Qi = Ri(b) for some b ∈ A − {x}. Let us prove that f is M(m−1)!.

Let j ∈ N − {i}. Recall that R−{i,j} is the (n − 2)-tuple obtained from R−i by removing j's preference order. Thus R−i = (Rj, R−{i,j}). Note that Qi = Ri(b) and Ri differ only on {a, b}, where a is the alternative ranked just below b in Ri (so that, for some k, b = Ri[k] and a = Ri[k + 1]). Thus x R̄i x′ and x ≠ b imply x Q̄i x′.

Step 1. Let





L1 = {Hj ∈ L : f(Ri, Hj, R−{i,j}) Q̄i f(Qi, Hj, R−{i,j})}     (1)

Suppose that |L1| ≥ (m − 1)!. Then D1 = {(Hj, R−{i,j}) : Hj ∈ L1} is a beliefs support set of order at least (m − 1)! for i. Since x Q̄i x′ and x′ ≠ x, then f(Ri, Rj, R−{i,j}) Qi f(Qi, Rj, R−{i,j}). Therefore Rj ∈ L1 and, by (1), f is D1-manipulable via (i, Qi, Ri). Hence, by Proposition 1, f is M(m−1)!.

In the remainder of the proof, |L1| ≤ (m − 1)! − 1.

Step 2. Let

L2 = {Hj ∈ L − L1 : f(Ri, Hj, R−{i,j}) ≠ b or f(Qi, Hj, R−{i,j}) ≠ a}     (2)

Suppose that |L2| ≥ (m − 1)!. Then D2 = {(Hj, R−{i,j}) : Hj ∈ L2}


is a beliefs support set of order at least (m − 1)! for i. For all Hj ∈ L2, Hj ∉ L1 and, by (1), f(Qi, Hj, R−{i,j}) Qi f(Ri, Hj, R−{i,j}). Since Qi = Ri(b) and [for all Hj ∈ L2, f(Ri, Hj, R−{i,j}) ≠ b or f(Qi, Hj, R−{i,j}) ≠ a], then for all Hj ∈ L2, f(Qi, Hj, R−{i,j}) Ri f(Ri, Hj, R−{i,j}). Therefore f is D2-manipulable via (i, Ri, Qi). Hence by Proposition 1 f is M(m−1)!.

In the remainder of the proof, |L2| ≤ (m − 1)! − 1.

Step 3. Let L3 = L − (L1 ∪ L2). Note that

|L3| ≥ m! − 2[(m − 1)! − 1] = 2 + (m − 2)(m − 1)! > (m − 1)!     (3)

and D3 = {(Hj, R−{i,j}) : Hj ∈ L3} is a beliefs support set of order at least (m − 1)! for i. Moreover, for all Hj ∈ L3, Hj ∉ L1 ∪ L2 and consequently, by (2), for all Hj ∈ L3,

f(Ri, Hj, R−{i,j}) = b and f(Qi, Hj, R−{i,j}) = a.     (4)

Now suppose that

there exist some Si ∈ L − {Ri} and some Sj ∈ L3 such that Si[1] = b and f(Si, Sj, R−{i,j}) ≠ b.     (5)

Since Si[1] = b, then by (4), for all Hj ∈ L3, f(Ri, Hj, R−{i,j}) S̄i f(Si, Hj, R−{i,j}) and f(Ri, Sj, R−{i,j}) Si f(Si, Sj, R−{i,j}). Therefore f is D3-manipulable via (i, Si, Ri). Hence by Proposition 1 f is M(m−1)!.

In the remainder of the proof, (5) does not hold and consequently

for all Si ∈ L − {Ri} and all Hj ∈ L3, Si[1] = b ⇒ f(Si, Hj, R−{i,j}) = b.     (6)

Step 4. Suppose that

there exist some Si ∈ L − {Qi} and some Sj ∈ L3 such that Si[1] = a and f(Si, Sj, R−{i,j}) ≠ a.     (7)

Since Si[1] = a, then by (4), for all Hj ∈ L3, f(Qi, Hj, R−{i,j}) S̄i f(Si, Hj, R−{i,j}) and f(Qi, Sj, R−{i,j}) Si f(Si, Sj, R−{i,j}). Therefore f is D4-manipulable via (i, Si, Qi) with D4 = D3. Hence by Proposition 1 f is M(m−1)!.

In the remainder of the proof, (7) does not hold and consequently

for all Si ∈ L − {Qi} and all Hj ∈ L3, Si[1] = a ⇒ f(Si, Hj, R−{i,j}) = a.     (8)

Step 5. Suppose that

there exists some Sj ∈ L3 such that Sj[m] = b.     (9)

Note that D5 = {(Hi, R−{i,j}) : Hi = Ri or Hi[1] = b} is a beliefs support set of order at least (m − 1)! for j. Since Sj ∈ L3, Sj[m] = b and b ≠ x, then by (4), f(Ri, Rj, R−{i,j}) = x Sj f(Ri, Sj, R−{i,j}) = b. Also note that, by (6), for all (Hi, R−{i,j}) ∈ D5 − {(Ri, R−{i,j})}, f(Hi, Rj, R−{i,j}) S̄j f(Hi, Sj, R−{i,j}) = b. Therefore f is D5-manipulable via (j, Sj, Rj). Hence by Proposition 1 f is M(m−1)!.

In the remainder of the proof, (9) does not hold and consequently, for all Hj ∈ L3, Hj[m] ≠ b. Furthermore, note that {Hj ∈ L : Hj[m] = b} ⊆ L1 ∪ L2. Since a ≠ b, |{Hj ∈ L : Hj[m] = b or Hj[m] = a}| = 2(m − 1)! and |L1 ∪ L2| ≤ 2(m − 1)! − 2, then

there exists Sj ∈ L3 such that Sj[m] = a.     (10)

Also observe that |{Hj ∈ L : x Hj b}| = m!/2 and |L1 ∪ L2| < m!/2 with m ≥ 3. Thus

there exists some Sj ∈ L3 such that x Sj b.     (11)


Step 6. Suppose that

for all Hi ∈ L, Hi[1] = a ⇒ f(Hi, Rj, R−{i,j}) = a.     (12)

Note that D6 = {(Hi, R−{i,j}) : Hi = Ri or Hi[1] = a} is a beliefs support set of order at least (m − 1)! for j. By (11), consider Sj ∈ L3 such that x Sj b. Since Sj ∈ L3 and by (4), f(Ri, Rj, R−{i,j}) = x Sj f(Ri, Sj, R−{i,j}) = b. Moreover, by (12), for all (Hi, R−{i,j}) ∈ D6 − {(Ri, R−{i,j})}, f(Hi, Rj, R−{i,j}) = a. Also note that Sj ∈ L3 and, by (8), f(Hi, Sj, R−{i,j}) = a. Therefore f(Hi, Rj, R−{i,j}) = f(Hi, Sj, R−{i,j}) and f is D6-manipulable via (j, Sj, Rj). Hence by Proposition 1 f is M(m−1)!.

In the remainder of the proof, (12) does not hold.

Step 7. Since (12) does not hold, there exists some Si ∈ L such that

Si[1] = a and f(Si, Rj, R−{i,j}) ≠ a.     (13)

Consider D7 = {(Hi, R−{i,j}) : Hi[1] = a}. D7 is a beliefs support set of order (m − 1)! for j. By (10), consider Sj ∈ L3 such that Sj[m] = a. Then, by (8), for all (Hi, R−{i,j}) ∈ D7, f(Hi, Sj, R−{i,j}) = a and consequently f(Hi, Rj, R−{i,j}) S̄j f(Hi, Sj, R−{i,j}). By (13), for some (Si, R−{i,j}) ∈ D7, f(Si, Rj, R−{i,j}) Sj f(Si, Sj, R−{i,j}). Therefore f is D7-manipulable via (j, Sj, Rj). Hence by Proposition 1 f is M(m−1)!. □

Proof of Lemma 2. It is based on very similar arguments, and is thus omitted. □

References

Andjiga, N.G., Mbih, B., Moyouwou, I., 2005. Strategic behavior under complete ignorance: approval and Condorcet-type voting rules. Imhotep 6, 1–8.
Bebchuk, L.A., 1980. Ignorance and manipulation. Economics Letters 5, 119–123.
Dummett, M., Farquharson, R., 1961. Stability in voting. Econometrica 29, 33–43.
Farquharson, R., 1969. Theory of Voting. Yale University Press, New Haven.
Favardin, P., Lepelley, D., 2006. Further results on the manipulability of social choice rules. Social Choice and Welfare 26, 485–509.
Feldman, A., 1980. Welfare Economics and Social Choice Theory. Martinus Nijhoff Pub., Boston.
Gärdenfors, P., 1976. Manipulation of social choice functions. Journal of Economic Theory 13, 217–228.
Gibbard, A., 1973. Manipulation of voting schemes: a general result. Econometrica 41, 587–601.
Lepelley, D., Mbih, B., 1994. The vulnerability of four social choice functions to coalitional manipulation of preferences. Social Choice and Welfare 11, 253–265.
Majumdar, D., Sen, A., 2004. Ordinally Bayesian incentive compatible voting schemes. Econometrica 72, 523–540.
Mbih, B., 1995. On admissible strategies and manipulation of social choice procedures. Theory and Decision 39, 169–188.
Moulin, H., 1981. Prudence versus sophistication in voting strategy. Journal of Economic Theory 24, 398–412.
Moyouwou, I., 2004. Manipulation des fonctions de choix social en information incomplète. Ph.D. Thesis, University of Yaounde 1.
Muller, E., Satterthwaite, M.A., 1977. The equivalence of strong positive association and strategy-proofness. Journal of Economic Theory 14, 412–418.
Pattanaik, P.K., 1976. Counter-threats and strategic manipulation under voting schemes. Review of Economic Studies 43, 11–18.
Satterthwaite, M.A., 1975. Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory 10, 187–217.
Sengupta, M., 1978. On a difficulty in the analysis of strategic voting. Econometrica 46, 331–343.
Sengupta, M., 1980. The knowledge assumption in the theory of strategic voting. Econometrica 48, 1301–1304 (Notes and comments).