Rent-seeking and competitive preferences



Accepted Manuscript

Rent-Seeking and Competitive Preferences

Caleb A. Cox

PII: S0167-4870(16)30262-8
DOI: http://dx.doi.org/10.1016/j.joep.2017.02.002
Reference: JOEP 1975

To appear in: Journal of Economic Psychology

Please cite this article as: Cox, C.A., Rent-Seeking and Competitive Preferences, Journal of Economic Psychology (2017), doi: http://dx.doi.org/10.1016/j.joep.2017.02.002

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

RENT-SEEKING AND COMPETITIVE PREFERENCES

CALEB A. COX
Department of Economics, Virginia Commonwealth University
Snead Hall, 301 W. Main Street, Box 844000
Richmond, VA 23284-4000, USA
[email protected]

ABSTRACT. In this experiment, I examine the extent to which competitive social preferences can explain over-bidding in rent-seeking contests. The Human treatment is a standard two-player contest. In the Robot treatment, a single player bids against a computerized player, eliminating potential social preference motives. The results show no difference in bids between treatments at the aggregate level. Further analysis shows evidence of heterogeneous treatment effects between impulsive and reflective subjects. Moreover, impulsive subjects are more likely than reflective subjects to deviate qualitatively from the shape of the theoretical best response function in the sequential contest.

Keywords: Rent-seeking, contests, laboratory experiments, game theory, social preferences
JEL Classification: C72, D03, D72
PsycINFO Classification: 2300, 2360

Date: September 29, 2016.


1. Introduction

Competition is a central topic in economics. Contests such as lobbying and elections have attracted great interest, much of it using the game-theoretic framework of Tullock (1980). In Tullock's model, competitors make unrecoverable bids to win a prize, with higher bids more likely (but not certain) to win. Many laboratory experiments on rent-seeking contests have consistently found over-bidding, with the total bid often exceeding the prize value (beginning with Millner and Pratt 1989 and summarized in Sheremeta 2013 and Dechenaux et al. 2015). Such behavior could have substantial welfare implications, leading to waste of resources. To determine how such wasteful over-bidding might be reduced, it is necessary to understand the motivations for this behavior. While traditional economic theory typically assumes narrow self-interest, much evidence from behavioral and experimental economics has shown that people often have preferences to help or harm others in a variety of related contexts (e.g., Fonseca 2009, Balafoutas et al. 2012, and Cox 2013). In rent-seeking contests, Herrmann and Orzen (2008) and Sano (2014) suggest that over-bidding might be explained by spiteful or negatively reciprocal preferences to prevent the other player(s) from winning. Sheremeta (2015) finds that over-bidding is driven by impulsive competitiveness, measuring impulsiveness using the cognitive reflection test (Frederick, 2005).

I present a laboratory experiment to test the hypothesis that over-bidding in rent-seeking contests is due to competitive other-regarding preferences such as relative payoff maximization. I directly test whether competitive social preferences explain over-bidding using two treatments, varied between subjects. The first treatment (Human) is a standard two-player Tullock contest. The second treatment (Robot) involves only one player who bids against a computerized "robot" player (and this is known). In the Robot treatment, there is no room for any kind of other-regarding preference, since there is no other player. Thus, the degree to which such preferences explain over-bidding in rent-seeking contests can be examined by comparing bids between treatments. I also use the cognitive reflection test (CRT) to learn whether removing human competitors reduces the impulsive bidding found by Sheremeta (2015).

In both treatments, subjects bid in both the simultaneous and sequential Tullock contests, using the strategy method in the sequential contest. While the theoretical prediction for the sequential contest is the same as for the simultaneous contest, including the strategy-method sequential contest allows observation of the entire bid function and control for beliefs. Furthermore, this approach allows me to explore how the results of Sheremeta (2015) on impulsive bidding might extend to the sequential contest, where the entire bid function can be observed.
An alternative explanation for over-bidding is that people simply enjoy winning for its own sake, and thus the reward value alone does not fully capture the value of winning the competition. Sheremeta (2010) shows that many subjects indeed bid positive amounts for a prize of zero monetary value. However, it is not clear whether this enjoyment of winning depends on defeating a human rival. If it does, then this explanation is very similar to competitive preferences. To explore the role of utility of winning, I use Sheremeta's method in both treatments: after completing the standard rent-seeking contest, subjects bid for a reward of no monetary value. This method makes it possible to control for utility of winning and to determine whether such utility is present even in the Robot treatment, when there is no human rival.

The results of this study show no overall difference in average bids between the Human treatment and the Robot treatment. Moreover, the results from the Human treatment confirm the finding from Sheremeta (2015) that bids are negatively correlated with cognitive reflection test (CRT) scores. However, this correlation is somewhat weaker in the Robot treatment. Further analysis shows that High-CRT subjects and Low-CRT subjects respond to the Robot treatment in opposite directions, leading to no overall treatment effect and weakening the correlation between bids and CRT scores in the Robot treatment. Importantly, I find that Low-CRT subjects are far less likely than High-CRT subjects in both treatments to choose the bid function shape most consistent with theoretical best response. In particular, in the Human treatment, a plurality of Low-CRT subjects choose an increasing bid function. This result suggests that one reason impulsive subjects tend to overbid is that their bid functions deviate substantially and qualitatively from theory.

The paper is organized as follows. Section 2 gives the theoretical background. Section 3 details the experimental design and procedures. Section 4 specifies the key hypotheses to be tested. Section 5 presents the results, with a concluding discussion in Section 6. Additional analysis and experimental instructions are contained in Appendices A and B, respectively.

2. Theoretical Background

Standard Model

I employ the rent-seeking contest model of Tullock (1980) with two players bidding for a reward of value v. The probability that player i receives the reward, as a function of the bids, is given by:

\[
p_i(b_i, b_j) =
\begin{cases}
\dfrac{b_i}{b_i + b_j} & \text{if } b_i + b_j > 0 \\[4pt]
0.5 & \text{if } b_i + b_j = 0
\end{cases}
\tag{1}
\]

Both bids must be paid regardless of who receives the reward. Thus, player i's objective function is given by the reward value times the probability of receiving the reward, minus i's bid:

\[
\max_{b_i} \; v\,p_i(b_i, b_j) - b_i
\tag{2}
\]

Solving this optimization problem yields i's best response function for b_j > 0:¹

\[
b_i(b_j) = \sqrt{v\,b_j} - b_j
\tag{3}
\]

In the unique Nash equilibrium, b^* = b_i = b_j = v/4. Bids above b^* are strictly dominated. I also consider the sequential version of the Tullock contest, in which one player bids, and then the other player observes this bid before bidding in response. The best response function is the same as in the simultaneous contest, and the unique subgame perfect equilibrium yields the same bids as the Nash equilibrium of the simultaneous contest.²
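As a quick numerical check (my own sketch, not part of the paper), the best response in equation (3) and the equilibrium b^* = v/4 can be verified by brute force on a fine bid grid, using the experiment's reward value v = 8 tokens:

```python
import numpy as np

def payoff(v, b_i, b_j):
    """Expected payoff in the Tullock contest: v * p_i(b_i, b_j) - b_i."""
    p = 0.5 if b_i + b_j == 0 else b_i / (b_i + b_j)
    return v * p - b_i

def best_response(v, b_j, grid):
    """Brute-force best response over a grid of candidate bids."""
    return max(grid, key=lambda b: payoff(v, b, b_j))

v = 8.0                            # reward value, as in the experiment
grid = np.linspace(0.0, v, 8001)   # near-continuous bid grid

# closed form: b_i(b_j) = sqrt(v * b_j) - b_j
for b_j in (1.0, 2.0, 3.0):
    assert abs(best_response(v, b_j, grid) - (np.sqrt(v * b_j) - b_j)) < 1e-2

# the Nash equilibrium bid b* = v/4 = 2 is a fixed point of the best response
assert abs(best_response(v, v / 4, grid) - v / 4) < 1e-2
```

Replacing the grid with whole-token bids reproduces the discrete benchmark used in the experiment.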

Behavioral Model

A variety of other-regarding preference models predict higher bids than the standard Nash equilibrium prediction. One such model is relative payoff maximization (see Mago et al. 2016 and Sheremeta 2015). In this model, the bidder's objective function takes the following form:

\[
\max_{b_i} \; v\,p_i(b_i, b_j) - b_i - r\left(v\,p_j(b_j, b_i) - b_j\right)
\tag{4}
\]

¹The best response to b_j = 0 is not well-defined in the continuous case. However, in the discrete case with increment ε, b_i(0) = ε.

²While no difference in bids is expected between the simultaneous and sequential contests, the sequential contest is useful to observe the entire bid function and to control for beliefs.


where the parameter r represents the bidder's attitude toward relative payoff. Assuming r > 0 yields a competitive preference to achieve a higher payoff than the rival player. Solving the optimization problem to find player i's best response function (for b_j > 0) yields:

\[
b_i(b_j) = \sqrt{(1 + r)\,v\,b_j} - b_j
\tag{5}
\]

In the new unique equilibrium, b^{**} = b_i = b_j = (1 + r)v/4. Figure 1 shows the best response function for v normalized to 1, for various values of r. Note that the best response function reaches a maximum at the equilibrium bid.

[INSERT FIGURE 1 HERE.]

An increase in the relative payoff parameter r is analogous to an increase in the reward value. The greater is r, the greater is the equilibrium bid. In particular, as long as r > 0, the equilibrium bid in the behavioral model will exceed the prediction of the standard model. That is, the behavioral model predicts overbidding. Several previous contest experiments find evidence that r > 0 (Fonseca 2009, Sheremeta 2015, Mago et al. 2016), consistent with the common finding of overbidding.

I use the relative payoff maximization model for simplicity of analysis. However, various alternative models of other-regarding preferences similarly predict over-bidding. For example, Sano (2014) uses a model of reciprocal preferences based on Rabin (1993) to explain experimental findings of over-bidding. In this model, anticipation that a rival will take an unkind action by overbidding motivates over-bidding in response through negative reciprocity. Similarly, over-bidding can be explained through outcome-based social preference models such as spite or inequity aversion, as in Herrmann and Orzen (2008) and Fonseca (2009). This experiment is not designed to distinguish between these various models of other-regarding preferences, but to determine the degree to which overbidding is driven by such preferences of any type.
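A quick numerical check of the behavioral model (my own sketch, using an illustrative value r = 0.5, not an estimate from the data): the equilibrium bid (1 + r)v/4 is a fixed point of the best response in equation (5), and the best response is maximized exactly at the equilibrium bid, as Figure 1 illustrates:

```python
import numpy as np

def br(v, r, b_j):
    """Behavioral best response: b_i(b_j) = sqrt((1 + r) * v * b_j) - b_j."""
    return np.sqrt((1 + r) * v * b_j) - b_j

v, r = 8.0, 0.5            # illustrative parameters only
b_star = (1 + r) * v / 4   # predicted equilibrium bid: 3 tokens

assert abs(br(v, r, b_star) - b_star) < 1e-9   # fixed point of the best response

# the best response peaks exactly at the equilibrium bid
grid = np.linspace(0.01, v, 4000)
assert abs(grid[np.argmax(br(v, r, grid))] - b_star) < 0.01
```

Setting r = 0 recovers the standard model and its equilibrium bid of v/4.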


Over-bidding may also be explained by non-monetary utility of winning, which would effectively increase the perceived prize value by an added amount w and thus increase bids (Sheremeta, 2010). The prediction of this model is thus similar to the relative-payoff maximization model, with identical bid functions if w = rv. However, it is less clear whether such a non-monetary utility of winning depends on facing a human rival. If winning is only valuable against a human rival, then utility of winning becomes very similar to the other models previously discussed.
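The equivalence noted above can be checked directly: inflating the prize by w = rv yields exactly the same best response function as the relative payoff model. A minimal sketch with illustrative parameter values (not estimates from the data):

```python
import numpy as np

v, r = 8.0, 0.5
w = r * v   # utility of winning calibrated so the two models coincide
b_j = np.linspace(0.01, 8.0, 200)

br_competitive = np.sqrt((1 + r) * v * b_j) - b_j   # relative payoff model, eq. (5)
br_joy_of_winning = np.sqrt((v + w) * b_j) - b_j    # prize inflated by w

assert np.allclose(br_competitive, br_joy_of_winning)
```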

3. Experimental Design and Procedures

There are two treatments: Human and Robot. In the Human treatment, human subjects play against one another. In the Robot treatment, human subjects play against computerized players programmed to bid according to the distribution of actual bids in the Human treatment. Subjects in this treatment were shown a random robot index number which could be matched with a printed table after the experiment to verify that the robot player's bidding behavior was predetermined. Each robot index number corresponded to an actual human subject's bid from a previous session of the Human treatment. The printed table was shown at the front of the computer laboratory at the beginning of the session so that subjects were aware of its existence, but were unable to read it until the end of the session.³

The instructions were read aloud, and subjects answered several control questions to check their understanding of the experiment. Each subject began the experiment with an endowment of 10 tokens, worth $0.50 each. In Part 1 of each session, subjects bid in a standard two-player Tullock contest for a reward worth 8 tokens. In each part, bids could be any whole number of tokens between 0 and 10.⁴

In Part 2, subjects were randomly re-matched and bid in a two-player sequential Tullock contest using the strategy method. Subjects entered initial bids in the role of the first mover, and then entered bids in response to all possible initial bids in the role of the second mover. The strategy-method sequential contest was used to elicit each subject's entire bid function, thus gaining additional information about each subject's preferences. This approach also allows controlling for beliefs, since the second mover's bid is conditioned on the first mover's bid.

In Part 3, subjects were again randomly re-matched and bid in a two-player simultaneous Tullock contest for a reward worth 0 tokens. This zero-reward contest was used to elicit a measure of each subject's utility of winning. Either Part 1 or Part 2 was randomly selected to be used for payment, and Part 3 was always used for payment. No feedback was given until the end of the experiment.

Before receiving the instructions for Parts 1, 2, and 3, subjects completed a multiple price list risk preference elicitation task (Holt and Laury, 2002), the cognitive reflection test (CRT) from Frederick (2005), and a demographic survey.⁵ The cognitive reflection test was incentivized by paying a $1 reward if all three questions were answered correctly. While the theoretical effect of risk aversion on bids is ambiguous, several studies have found that risk aversion reduces bids in Tullock contests (Millner and Pratt 1991; Anderson and Freeborn 2010; Sheremeta and Zhang 2010; Sheremeta 2011; Shupp et al. 2013; Mago et al. 2013; see also Dechenaux et al. 2015).

In 10 sessions, a total of 105 subjects participated, with 54 in the Human treatment and 51 in the Robot treatment. Subjects earned an average of approximately $12 each, with sessions lasting less than one hour. The experiment was run at Virginia Commonwealth University using z-Tree software (Fischbacher, 2007). Subjects were recruited using ORSEE (Greiner, 2015).

³Similar methods of eliminating social preference motives have been used in other contexts by Blount (1995), Houser and Kurzban (2002), and Shapiro (2009). Herrmann and Orzen (2008) consider a treatment which transforms a sequential contest into an individual choice problem by replacing the first mover with a random variable and removing any reference to another player. For the current study, particularly in the simultaneous contest, it is desirable that subjects' homegrown beliefs about the distribution of others' bids do not vary with the treatment. Thus, I use an approach similar to Shapiro (2009), where subjects know that the computerized players are programmed to behave like previous human subjects.

⁴These parameters were selected for similarity with Sheremeta (2015). However, the conversion rate differs from Sheremeta for budgetary reasons. Moreover, the whole-token bidding restriction was imposed to reduce the number of conditional bids subjects must enter in the strategy-method sequential contest.

⁵Subjects also completed additional decision tasks for an unrelated experiment.


4. Hypotheses

This experiment is designed to test competitive preferences as an explanation for overbidding by removing the human rival player. The behavioral model predicts overbidding in the Human treatment but, in the absence of a human rival, does not predict any deviation from the Nash equilibrium baseline. The first hypothesis derives from this key aspect of the behavioral model of relative payoff maximization, together with empirical evidence from previous contest experiments suggesting that the relative payoff parameter r is positive (Fonseca 2009, Sheremeta 2015, Mago et al. 2016). However, as discussed in the theoretical background, the same hypothesis can be derived from various alternative models of other-regarding preferences.

Hypothesis 1. Bids will be higher in the Human treatment than in the Robot treatment.

Individual subjects are likely to be heterogeneous in the degree to which they overbid. Sheremeta (2015) finds that more impulsive subjects make higher bids than less impulsive subjects. Based on the dual-system theory of cognitive decision making (Kahneman, 2011), highly impulsive subjects rely on the quick and effortless System 1 for decision making, while less impulsive subjects rely on the reflective and slower System 2. As Sheremeta points out, the Nash equilibrium analysis presented in the theoretical background implicitly assumes that subjects use the more rational and calculating System 2. If some subjects actually use System 1, then they may deviate from the Nash equilibrium prediction, and based on Sheremeta’s experimental results, may be more likely to behave in an irrationally competitive manner and overbid. This idea leads to the second hypothesis.

Hypothesis 2. Bids will be greater for subjects with low CRT scores than for subjects with high CRT scores.


5. Results

Treatment Effects

[INSERT FIGURE 2 HERE.]

Figure 2 shows the mean bid in both treatments for Part 1, the simultaneous contest; Part 2(A), the first-mover bid in the sequential contest; and Part 3, the zero-reward contest.⁶ Like many previous experiments, the results show over-bidding, with average bids exceeding the Nash equilibrium benchmark. Furthermore, bids in the simultaneous contest exceed the Nash equilibrium prediction of 2 tokens 51.8% of the time in the Human treatment, and 60.8% of the time in the Robot treatment. Wilcoxon-Mann-Whitney tests show no significant treatment difference in bids for Part 1, Part 2(A), or Part 3, with p-values of 0.708, 0.222, and 0.882 respectively.
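For readers replicating this kind of comparison, the Wilcoxon-Mann-Whitney test is available in `scipy`. The sketch below runs the test on simulated whole-token bids (placeholders matching the treatment group sizes, not the actual experimental data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# simulated integer bids on 0-10; illustrative only, not the paper's data
human_bids = rng.integers(0, 11, size=54)   # 54 subjects in the Human treatment
robot_bids = rng.integers(0, 11, size=51)   # 51 subjects in the Robot treatment

stat, p = mannwhitneyu(human_bids, robot_bids, alternative="two-sided")
print(f"U = {stat}, two-sided p = {p:.3f}")
```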

[INSERT TABLE 1 HERE.]

Table 1 shows linear regression results for bids in Part 1, the simultaneous contest with the 8-token reward. The explanatory variables include a treatment indicator (Robot), cognitive reflection test score (CRT), the number of lower-risk choices made in the risk elicitation task (RiskAverse), and the bid for a reward of zero tokens in Part 3 (ZeroBid).⁷ Interactions between the Robot treatment indicator and the other explanatory variables are included in Model 1C. The results are consistent with the previous non-parametric test showing no significant difference in bids between treatments, contrary to Hypothesis 1.

⁶The standard theoretical model would predict identical bids in Part 1 and Part 2(A). In the Human treatment, the correlation between bids in these two parts is 0.670. However, in the Robot treatment, the correlation is only 0.185. It is unclear why this difference might occur, and difficult to draw broad conclusions from this unexpected result.

⁷It is somewhat unusual that the estimated coefficient of ZeroBid is negative and insignificant, as a positive coefficient would be expected as in previous studies such as Sheremeta (2010) and Sheremeta (2015). If regressions are run with ZeroBid as the only explanatory variable, the estimated coefficients are positive for the Human and Robot treatments as well as the combined data. However, in all cases the estimated coefficients fall far short of standard significance levels.


The results further show that CRT score is negatively correlated with bid, as found in Sheremeta (2015), and consistent with Hypothesis 2. While the interaction between the Robot treatment indicator and CRT score is not significant, its estimated coefficient is fairly large and opposite in sign compared to the estimated coefficient of CRT score for subjects in the Human treatment. To test whether there is a significant relationship between CRT score and bid in the Robot treatment, I take the sum of the estimated coefficients of CRT score and its interaction with the Robot treatment indicator. This procedure yields an estimate of -0.162, which is negative but not significantly different from zero (p-value=0.475). As the interaction term is not significant, this evidence is not conclusive. However, it suggests that the relationship between CRT score and bid may be somewhat weaker in the Robot treatment than in the Human treatment. I will explore this feature of the data in more depth later in the subsection on heterogeneity.

[INSERT FIGURE 3 HERE.]

Figure 3 shows the mean second-mover bids in Part 2(B) for each possible initial bid, as well as the benchmark best response function, restricted to bidding whole-token amounts. As in the previous figure, over-bidding is clearly evident. It is also clear from the graph that the bid functions are similar on average between the two treatments, and separate Wilcoxon-Mann-Whitney tests show no significant difference in second-mover bids for any level of the first-mover bid.
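With bids restricted to whole tokens, the benchmark best response shown in Figure 3 can be recomputed by brute force. A sketch of my own for v = 8 (ties broken toward the lower bid, an assumption on my part):

```python
def expected_payoff(b_i, b_j, v=8):
    """Expected payoff with integer bids; a tie at (0, 0) is a coin flip."""
    p = 0.5 if b_i + b_j == 0 else b_i / (b_i + b_j)
    return v * p - b_i

# integer best response to each possible first-mover bid 0-10;
# max() breaks payoff ties toward the lower bid
br = [max(range(11), key=lambda b: expected_payoff(b, b_j)) for b_j in range(11)]
print(br)  # → [1, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0], an inverted-U shape
```

Note that the discrete best response rises and then falls, matching the inverted-U shape of the continuous benchmark.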

[INSERT TABLE 2 HERE.]

Table 2 shows linear regression results for second-mover bids in Part 2(B), the sequential contest. In addition to the explanatory variables used in the previous regressions for Part 1, the regressions in Table 2 also include the first-mover bid and the square root of the first-mover bid, as specified in the theoretical best response function.⁸ These variables are interacted with the Robot treatment indicator to test whether the bid function differs across treatments. Model 2D also includes interactions of the Robot treatment indicator with the other explanatory variables. The Robot treatment indicator is marginally significant in Models 2B and 2C, consistent with the somewhat higher vertical intercept of the average bid function in the Robot treatment shown in Figure 3. However, this positive effect is contrary to Hypothesis 1. To test for overall treatment differences in the bid function, I use Wald tests of the null hypothesis that the coefficients of the Robot treatment indicator and its interactions with OtherBid and SqRtOtherBid are jointly equal to zero. These tests do not show significant treatment differences in any of the models 2B, 2C, or 2D (with respective p-values of 0.276, 0.341, and 0.323).

As in the simultaneous contest, the results show a negative correlation between CRT score and bid in the Human treatment. However, the interaction of CRT score with the Robot treatment indicator in Model 2D is positive and significant. Taking the sum of the CRT score coefficient and the interaction of CRT score with the Robot treatment indicator yields an estimate of -0.110, with p-value=0.576. This result suggests a weaker relationship between CRT score and bid in the Robot treatment compared to the Human treatment.

The results also show that ZeroBid is positively correlated with bid in the Human treatment. However, the interaction of ZeroBid with the Robot treatment indicator is negative and significant. Taking the sum of the ZeroBid coefficient and the interaction of ZeroBid with the Robot treatment indicator yields an estimate of 0.084, with p-value=0.473, showing no significant relationship between ZeroBid and bid in the Robot treatment. This result might suggest that utility of winning drives bids only in the Human treatment, and thus utility of winning is conditional on defeating a human rival. However, as shown in Table 1, ZeroBid is not significant in the simultaneous contest, so the results on utility of winning are mixed.

Overall, the results do not show evidence of the expected treatment effect, contrary to Hypothesis 1. Thus, the overall lack of treatment difference in bids is inconsistent with the theory that over-bidding is driven by competitive preferences. It is also noteworthy that CRT score correlates with bid in the Human treatment, but this relationship appears weaker in the Robot treatment.

⁸The theoretical values of the coefficients from the best response function are −1 for OtherBid and √8 (the square root of the reward value) for SqRtOtherBid. The 95% confidence intervals for the estimated coefficients do not include these values in any of the regressions presented.


The following main results summarize these findings. However, despite the lack of treatment effect at the aggregate level, heterogeneous treatment effects for different types of subjects are possible. Heterogeneous bidding behavior and heterogeneous treatment effects are explored in the next subsection.

Result 1. There is no significant overall treatment difference in bids in the simultaneous contest or in second-mover bid functions in the sequential contest, contrary to Hypothesis 1.

Result 2. In the Human treatment, bids in the simultaneous contest and second-mover bids in the sequential contest show a significant negative correlation with CRT score, consistent with Hypothesis 2. However, in the sequential contest, this correlation is significantly weaker in the Robot treatment than in the Human treatment.

Heterogeneity

[INSERT FIGURE 4 HERE.]

Figure 4 shows histograms of Part 1 bids across treatments, for all subjects and split into subgroups for High CRT (2 or 3 questions correct) and Low CRT (0 or 1 questions correct). The bid distributions appear similar across treatments for all subjects and for Low-CRT subjects, and indeed Kolmogorov-Smirnov tests do not show significant differences, with respective p-values of 0.985 and 0.981. For High-CRT subjects, the distributions do appear to differ, and a Kolmogorov-Smirnov test shows a marginally significant difference with a p-value of 0.085. However, a Wilcoxon-Mann-Whitney test narrowly fails to show a significant treatment difference for High-CRT subjects (p-value = 0.117). Nonetheless, the change in the distribution of High-CRT subjects' bids appears related to the somewhat weaker correlation between CRT score and bid in the Robot treatment found in the regression results reported in Table 1.


These results contrast with Hypothesis 1. Rather than finding lower bids in the Robot treatment than in the Human treatment, the data show little difference for the simultaneous contest. Low-CRT subjects bid no differently across treatments. There is some evidence of a treatment difference in the distribution of bids among High-CRT subjects, but no significant change in central tendency. The next main result summarizes this finding.

Result 3. In the simultaneous contest, Low-CRT subjects' bids do not differ by treatment. High-CRT subjects appear to bid slightly more in the Robot treatment than in the Human treatment, but the difference is not statistically significant. These results are contrary to Hypothesis 1.

[INSERT FIGURE 5 HERE.]

Figure 5 shows the mean second-mover bids in Part 2(B) for each possible first-mover bid, split into High-CRT and Low-CRT subgroups. For High-CRT subjects, average bids appear to be higher in response to high first-mover bids in the Robot treatment than in the Human treatment. For Low-CRT subjects, the opposite is true, with lower bids in response to high first-mover bids in the Robot treatment. Wilcoxon-Mann-Whitney tests, run separately for each possible first-mover bid, show significant treatment differences in response to first-mover bids 6-9 for High-CRT subjects and in response to first-mover bids 6-10 for Low-CRT subjects. As an alternative approach, shown in Appendix A, Table A2, I use separate regression analyses for the High-CRT and Low-CRT subgroups. This approach also shows significant treatment differences in the bid function for each subgroup. The opposing treatment effects for High-CRT and Low-CRT subjects are consistent with the weaker correlation between CRT score and bid shown in the regression results reported in Table 2.

Low-CRT subjects also appear to bid somewhat more on average in response to low first-mover bids in the Robot treatment than in the Human treatment. This difference is not significant using Wilcoxon-Mann-Whitney tests, but the regression results reported in Appendix A, Table A2 show a significantly higher vertical intercept in the average bid function for Low-CRT types in the Robot treatment compared to the Human treatment. It is interesting that the average bid function for Low-CRT subjects appears flatter in the Robot treatment than in the Human treatment. This result may be related to heterogeneity in the bid function shapes of Low-CRT subjects, as explored in the next subsection.

Result 4. There are heterogeneous treatment effects in the sequential contest. Second-mover bids in response to high first-mover bids are lower in the Robot treatment than in the Human treatment for Low-CRT subjects, and higher in the Robot treatment than in the Human treatment for High-CRT subjects.

[INSERT TABLE 3 HERE.]

Bid Function Types

In addition to examining the average bid function in the sequential contest, it is useful to examine different types of bid functions chosen by different subjects. I define six different types of bid functions an individual subject might choose, similar to those used in Herrmann and Orzen (2008). The Inverted-U type initially increases and later decreases, consistent with the theoretical best response functions in the standard and behavioral theory.⁹ The Increasing type is weakly monotonically increasing in the first-mover bid, but not constant. Constant, Decreasing, U-Shape, and Other bid function types do not have clear intuitive interpretations, but are included to describe the choices actually made by some subjects.

Table 3 shows the proportion of subjects classified into each bid function type. Consistent with the average bid function seen in the left panel of Figure 5, the vast majority of High-CRT subjects in the Human treatment are classified as Inverted-U types, the type most consistent with theory.

⁹No individual subject's bid function exactly matches the best response function from the standard theory. Based on the sum of absolute deviations from the best response, only 3 subjects are within 5 tokens, and 16 subjects within 10 tokens.
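To make the type definitions concrete, here is one way the classification might be operationalized. This is my own rough reconstruction for illustration; the paper does not publish its exact classification rule:

```python
def classify(bids):
    """Classify a second-mover bid function (one bid per possible first-mover
    bid) into the six types described in the text."""
    diffs = [b2 - b1 for b1, b2 in zip(bids, bids[1:])]
    up = any(d > 0 for d in diffs)
    down = any(d < 0 for d in diffs)
    if not up and not down:
        return "Constant"
    if up and not down:
        return "Increasing"    # weakly increasing, not constant
    if down and not up:
        return "Decreasing"
    peak, trough = bids.index(max(bids)), bids.index(min(bids))
    if all(d >= 0 for d in diffs[:peak]) and all(d <= 0 for d in diffs[peak:]):
        return "Inverted-U"    # increases, then decreases
    if all(d <= 0 for d in diffs[:trough]) and all(d >= 0 for d in diffs[trough:]):
        return "U-Shape"
    return "Other"

# the whole-token best response for v = 8 is an Inverted-U
assert classify([1, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0]) == "Inverted-U"
assert classify([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) == "Increasing"
```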


In the Robot treatment, more High-CRT subjects are classified as Increasing types compared to the Human treatment, and this difference is significant based on a chi-squared test with a p-value of 0.030. However, in both treatments, a large majority of High-CRT subjects are classified as Inverted-U types. A plurality of Low-CRT subjects in the Human treatment are classified as Increasing types, consistent with the increasing average bid function shown in Figure 5. In the Robot treatment, the proportion of Low-CRT subjects classified as Increasing types falls significantly, based on a chi-squared test with a p-value of 0.022. Low-CRT subjects appear more heterogeneous in their bid function types in the Robot treatment, with non-negligible proportions falling into each type, including Constant, Decreasing, U-Shape, and Other types. This heterogeneity in bid function shape may explain the relatively flat average bid function of Low-CRT subjects shown in Figure 5. Importantly, large majorities of Low-CRT subjects in both treatments are classified into a bid function type other than Inverted-U. In contrast, most High-CRT subjects are classified as Inverted-U types. High-CRT subjects are significantly more likely than Low-CRT subjects to be classified as Inverted-U types in the Human treatment (chi-squared test, p-value < 0.001) and in the Robot treatment (chi-squared test, p-value = 0.013). This result shows that an important reason why Low-CRT subjects tend to overbid more than High-CRT subjects is that the shapes of their bid functions deviate substantially and qualitatively from the theoretical best response function. The following main results summarize the findings in this subsection.
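These type classifications can be operationalized directly on a subject's eleven elicited bids. The paper does not spell out the exact decision rules, so the following Python sketch is one plausible implementation in the spirit of Herrmann and Orzen (2008); the function and its criteria are mine:

```python
def classify_bid_function(bids):
    """Classify an 11-point bid function (responses to first-mover bids
    0..10) into one of six types: Constant, Increasing, Decreasing,
    Inverted-U, U-Shape, or Other."""
    diffs = [b2 - b1 for b1, b2 in zip(bids, bids[1:])]
    up = any(d > 0 for d in diffs)
    down = any(d < 0 for d in diffs)
    if not up and not down:
        return "Constant"
    if not down:
        return "Increasing"    # weakly monotone up, not constant
    if not up:
        return "Decreasing"    # weakly monotone down, not constant
    # Both directions occur: check for a single hump or a single dip.
    first_up = next(i for i, d in enumerate(diffs) if d > 0)
    first_down = next(i for i, d in enumerate(diffs) if d < 0)
    if first_up < first_down and all(d <= 0 for d in diffs[first_down:]):
        return "Inverted-U"    # rises, then only falls
    if first_down < first_up and all(d >= 0 for d in diffs[first_up:]):
        return "U-Shape"       # falls, then only rises
    return "Other"
```

For example, `classify_bid_function([0, 2, 3, 3, 2, 1, 0, 0, 0, 0, 0])` returns `"Inverted-U"`, the shape most consistent with the theoretical best response.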

Result 5. In the sequential contest, a plurality of Low-CRT subjects choose Increasing bid functions in the Human treatment, but this proportion is significantly lower in the Robot treatment. The proportion of High-CRT subjects choosing Increasing bid functions is significantly higher in the Robot treatment compared to the Human treatment.
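The proportion differences in Result 5 are assessed with chi-squared tests on 2x2 classification tables (type vs. not, by treatment). A self-contained sketch of the Pearson test with one degree of freedom; the counts in the example are hypothetical, not taken from Table 3:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, p-value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, the chi-squared survival function
    # reduces to erfc(sqrt(stat / 2)).
    return stat, erfc(sqrt(stat / 2))

# Hypothetical counts: 25 of 50 subjects classified as Increasing in one
# treatment versus 12 of 50 in the other.
stat, p = chi2_2x2(25, 25, 12, 38)
```

With equal proportions the statistic is zero and the p-value is one; the critical value for significance at the 5% level with 1 df is 3.84.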


Result 6. In the sequential contest, the vast majority of High-CRT subjects choose Inverted-U bid functions in both the Human treatment and the Robot treatment. In contrast, the vast majority of Low-CRT subjects in both treatments choose a bid function type other than Inverted-U.

6 DISCUSSION

In the simultaneous and sequential Tullock contest games, average bids do not differ significantly between the Human and Robot treatments. Thus, overbidding at the aggregate level does not appear to be primarily driven by competitive preferences such as relative payoff maximization, or by other-regarding preferences more generally.

However, I find evidence of heterogeneous treatment effects in the sequential contest. High-CRT subjects (2 or 3 correct answers) in the sequential contest bid more in response to high first-mover bids in the Robot treatment than in the Human treatment. In contrast, Low-CRT subjects (0 or 1 correct answers) bid less in response to high first-mover bids in the Robot treatment than in the Human treatment. Thus, in the sequential contest, High-CRT and Low-CRT subjects respond to the Robot treatment in opposite and partially offsetting ways. While it is somewhat difficult to interpret these heterogeneous treatment effects, they suggest that other-regarding preferences may play some role in bidding in sequential contests, despite not being the primary driver of aggregate overbidding. Future research might further examine how playing a human rival affects bidding behavior in the sequential contest by comparing Human and Robot treatments within subject. By observing the transition from one bid function to another at the individual level, it may be possible to better understand the underlying individual preferences or characteristics driving these effects.

I confirm the result of Sheremeta (2015) that bids are negatively correlated with CRT scores in the simultaneous contest with a human rival. I find that this result extends to the sequential contest as well, though the correlation is significantly weaker in the Robot treatment than in the Human treatment. Moreover, my results reveal an important reason why impulsive subjects tend
to overbid: the vast majority of Low-CRT subjects' bid functions in the sequential contest deviate substantially and qualitatively from the theoretical best response function. In particular, in the Human treatment, a plurality of Low-CRT subjects choose an increasing bid function. High-CRT subjects, on the other hand, are far more likely to choose a bid function that at least qualitatively matches the shape of the theoretical best response.

While it is clear that impulsiveness plays an important role in driving bids in rent-seeking contests, further research is needed to understand impulsive competitiveness more generally. As conjectured by Sheremeta (2015), impulsiveness might explain overbidding in winner-pay auctions such as first-price private-value auctions and common-value auctions. The results of the current study suggest the further conjecture that impulsiveness may be related to the shape of the bid function. Future studies might use a strategy-method approach to elicit each subject's bid as a function of her value or signal in a winner-pay auction and compare these bid functions between impulsive and reflective subjects. Exploring the role of impulsiveness in other competitive environments such as real-effort contests or oligopoly games is also an interesting direction for future research.

ACKNOWLEDGEMENTS

I would like to thank Brock Stoddard, Glenn Dutcher, Michael Caldara, Joshua Foster, Yaron Azrieli, and Paul J. Healy for helpful comments and conversations. I am also grateful to the editorial team and two anonymous referees for very useful suggestions. Any remaining errors are my own responsibility. Funding was provided by VCU School of Business.

REFERENCES

Anderson, L. A., Freeborn, B. A., 2010. Varying the intensity of competition in a multiple prize rent seeking experiment. Public Choice 143, 237–254.


Balafoutas, L., Kerschbamer, R., Sutter, M., 2012. Distributional preferences and competitive behavior. Journal of Economic Behavior and Organization 83, 125–135.
Blount, S., 1995. When social outcomes aren't fair: The effect of causal attributions on preferences. Organizational Behavior and Human Decision Processes 63, 131–144.
Cox, C., 2013. Inequity aversion and advantage seeking with asymmetric competition. Journal of Economic Behavior and Organization 86, 121–136.
Dechenaux, E., Kovenock, D., Sheremeta, R., 2015. A survey of experimental research on contests, all-pay auctions and tournaments. Experimental Economics 18, 609–669.
Fischbacher, U., 2007. z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics 10, 171–178.
Fonseca, M. A., 2009. An experimental investigation of asymmetric contests. International Journal of Industrial Organization 27, 582–591.
Frederick, S., 2005. Cognitive reflection and decision making. Journal of Economic Perspectives 19, 25–42.
Greiner, B., 2015. Subject pool recruitment procedures: organizing experiments with ORSEE. Journal of the Economic Science Association 1 (1), 114–125.
Herrmann, B., Orzen, H., 2008. The appearance of homo rivalis: Social preferences and the nature of rent seeking. Working paper.
Holt, C. A., Laury, S. K., 2002. Risk aversion and incentive effects. American Economic Review 92 (5), 1644–1655.
Houser, D., Kurzban, R., 2002. Revisiting kindness and confusion in public goods experiments. American Economic Review 92, 1062–1069.
Kahneman, D., 2011. Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.
Mago, S. D., Samak, A. C., Sheremeta, R. M., 2016. Facing your opponents: Social identification and information feedback in contests. Journal of Conflict Resolution 60 (3), 459–481.
Mago, S. D., Sheremeta, R. M., Yates, A., 2013. Best-of-three contest experiments: Strategic versus psychological momentum. International Journal of Industrial Organization 31, 287–296.


Millner, E. L., Pratt, M. D., 1989. An experimental investigation of efficient rent-seeking. Public Choice 62, 139–151.
Millner, E. L., Pratt, M. D., 1991. Risk aversion and rent-seeking: An extension and some experimental evidence. Public Choice 69, 81–92.
Rabin, M., 1993. Incorporating fairness into game theory and economics. American Economic Review 83, 1281–1302.
Sano, H., 2014. Reciprocal rent-seeking contests. Social Choice and Welfare 42 (3), 575–596.
Shapiro, D. A., 2009. The role of utility interdependence in public good experiments. International Journal of Game Theory 38, 81–106.
Sheremeta, R., 2010. Experimental comparison of multi-stage and one-stage contests. Games and Economic Behavior 68, 731–747.
Sheremeta, R., 2013. Overbidding and heterogeneous behavior in contest experiments. Journal of Economic Surveys 27 (3), 491–514.
Sheremeta, R., 2015. Impulsive behavior in competition: Testing theories of overbidding in rent-seeking contests. Working paper.
Sheremeta, R. M., Zhang, J., 2010. Can groups solve the problem of over-bidding in contests? Social Choice and Welfare 35, 175–197.
Shupp, R., Sheremeta, R. M., Schmidt, D., Walker, J., 2013. Resource allocation contests: Experimental evidence. Journal of Economic Psychology 39, 257–267.
Tullock, G., 1980. Efficient rent seeking. In: Buchanan, J. M., Tollison, R. D., Tullock, G. (Eds.), Toward a theory of the rent-seeking society. Texas A&M University Press, College Station, pp. 97–112.


TABLES AND FIGURES

[Figure: best-response bid (own bid, 0–1) plotted against the other player's bid (0–1), with one curve each for r = 0, r = 0.5, and r = 1.]

Figure 1. Best response function for various values of r and reward value normalized to 1.
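For the lottery contest used in the experiment (r = 1), this best response has a closed form. A small Python sketch with the experiment's reward of 8 tokens rather than the normalized reward shown in the figure (the function and variable names are mine):

```python
from math import sqrt

V = 8  # reward in tokens, as in the experiment

def best_response(other_bid, v=V):
    """Payoff-maximizing bid against other_bid in a Tullock contest with
    r = 1: the argmax of v * x / (x + y) - x is sqrt(v * y) - y,
    truncated below at zero."""
    return max(0.0, sqrt(v * other_bid) - other_bid)

# Inverted-U shape: the best response rises to a peak of 2 at y = v/4 = 2
# and falls back to zero at y = v = 8.
curve = [best_response(y) for y in range(11)]

# The symmetric Nash equilibrium bid is the fixed point x = best_response(x),
# which solves to v/4 = 2 tokens.
```

This is the shape against which the "Inverted-U" bid function type in the later analysis is judged.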


[Figure: mean bid (0–4 tokens) in Part 1, Part 2 (A), and Part 3 for the Human and Robot treatments.]

Figure 2. Mean bid with 95% confidence intervals.


                     Model 1A    Model 1B    Model 1C
CRT                  -           -0.322**    -0.485**
                                 (0.162)     (0.235)
RiskAverse           -           -0.026      -0.087
                                 (0.094)     (0.133)
ZeroBid              -           -0.026      -0.086
                                 (0.108)     (0.112)
Robot                0.135       0.100       -1.325
                     (0.392)     (0.389)     (1.133)
Robot × CRT          -           -           0.323
                                             (0.326)
Robot × RiskAverse   -           -           0.181
                                             (0.188)
Robot × ZeroBid      -           -           0.111
                                             (0.192)
constant             2.963***    3.462***    4.018***
                     (0.276)     (0.556)     (0.759)

Table 1. Linear regression results for bids in Part 1. Robust standard errors in parentheses. Each regression includes 105 subject-level observations. ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels, respectively.


[Figure: mean second-mover bid (0–7 tokens) plotted against each possible first-mover bid (0–10) for the Human and Robot treatments, with 95% confidence intervals and the restricted best response.]

Figure 3. Second-mover mean bid in Part 2 (B).


                       Model 2A    Model 2B    Model 2C    Model 2D
OtherBid               -0.353***   -0.330***   -0.330***   -0.330***
                       (0.107)     (0.123)     (0.123)     (0.123)
SqRtOtherBid           1.751***    1.906***    1.906***    1.906***
                       (0.287)     (0.293)     (0.293)     (0.294)
CRT                    -           -           -0.342**    -0.659***
                                               (0.146)     (0.187)
RiskAverse             -           -           -0.052      0.053
                                               (0.086)     (0.099)
ZeroBid                -           -           0.181*      0.439***
                                               (0.102)     (0.110)
Robot                  0.095       0.991*      0.921*      1.708
                       (0.365)     (0.531)     (0.528)     (1.220)
Robot × OtherBid       -           -0.049      -0.049      -0.049
                                   (0.215)     (0.215)     (0.216)
Robot × SqRtOtherBid   -           -0.320      -0.320      -0.320
                                   (0.582)     (0.583)     (0.584)
Robot × CRT            -           -           -           0.549**
                                                           (0.271)
Robot × RiskAverse     -           -           -           -0.190
                                                           (0.180)
Robot × ZeroBid        -           -           -           -0.355**
                                                           (0.160)
constant               1.508***    1.073***    1.584***    1.150*
                       (0.314)     (0.261)     (0.578)     (0.674)

Table 2. Linear regression results for bids in Part 2(B). Robust standard errors in parentheses are clustered by individual subject. Each regression includes 1155 observations and 105 subject-level clusters. ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels, respectively.


[Figure: histograms of Part 1 bids (0–10 tokens) in six panels: Human and Robot for all subjects, Human and Robot for High-CRT subjects, and Human and Robot for Low-CRT subjects.]

Figure 4. Part 1 bid histogram for All Subjects, High CRT (2 or 3 correct), and Low CRT (0 or 1 correct).


[Figure: mean second-mover bid (0–7 tokens) plotted against each possible first-mover bid (0–10), in separate High CRT and Low CRT panels, for the Human and Robot treatments with 95% confidence intervals and the restricted best response.]

Figure 5. Second-mover mean bid in Part 2(B) for High CRT (2 or 3 correct) and Low CRT (0 or 1 correct).

             Human     Robot     Human       Robot       Human      Robot
             (All)     (All)     High CRT    High CRT    Low CRT    Low CRT
Inverted-U   53.7%     43.1%     85.7%       68.8%       33.3%      31.4%
Increasing   27.8%     21.6%     4.8%        31.3%       42.4%      17.1%
Constant     5.6%      7.8%      4.8%        0.0%        6.1%       11.4%
Decreasing   1.9%      9.8%      4.8%        0.0%        0.0%       14.3%
U-Shape      1.9%      7.8%      0.0%        0.0%        3.0%       11.4%
Other        9.3%      9.8%      0.0%        0.0%        15.2%      14.3%
N            54        51        21          16          33         35

Table 3. Bid function types in Part 2(B) for all subjects, High CRT (2 or 3 correct), and Low CRT (0 or 1 correct).

APPENDIX A: ADDITIONAL ANALYSIS

                           Human Treatment       Robot Treatment
Decision/Task Measures     Mean      Std Dev     Mean      Std Dev
Bid - Part 1               2.963     0.276       3.098     0.279
Bid - Part 2(A)            3.074     0.289       2.608     0.288
Bid - Part 3               0.741     0.202       0.922     0.305
CRT                        1.056     0.138       0.922     0.163
RiskAverse                 5.500     0.284       5.647     0.225

Demographic Measures
Male                       0.426     0.068       0.490     0.071
Year of Study              2.926     0.171       2.882     0.179
Econ/Finance/Acct Major    0.148     0.049       0.137     0.049
Other Quant Major          0.315     0.064       0.471     0.071
Other Business Major       0.241     0.059       0.176     0.054
Other Soc Sci Major        0.130     0.046       0.098     0.042
Other Major                0.167     0.051       0.118     0.046
Econ Classes Taken         1.259     0.210       1.333     0.220
Past Econ Experiments      1.852     0.300       2.000     0.307
Past Other Experiments     1.815     0.404       1.490     0.543

Table A1. Summary Statistics

Table A1 shows summary statistics for the decision/task variables in the experiment, as well as demographic characteristics of the subjects. The demographic characteristic summary statistics are shown for informational purposes, but were not used in the analyses reported in the paper.

Table A2 shows linear regression analysis for second-mover bids in the sequential contest, split into High CRT (2 or 3 correct) and Low CRT (0 or 1 correct). The explanatory variables are the same as those included in the regressions in Table 2, except for CRT and its interaction with the Robot treatment indicator, which are omitted. In the regression models with High-CRT subjects, the interaction between ZeroBid and the Robot treatment indicator is omitted because every High-CRT subject in the Robot treatment bid exactly zero for a reward of zero tokens.

For High-CRT subjects, the Robot treatment indicator and its interactions are not significant in Models A2B, A2C, and A2D. However, the Wald test for the null hypothesis that the coefficients of the Robot treatment indicator and its interactions with OtherBid and SqRtOtherBid are jointly equal to zero is at least marginally significant in all three of these models, with respective p-values of 0.068, 0.040, and 0.077. These results are thus consistent with the change in the average bid function of High-CRT subjects shown in Figure 5. Furthermore, the signs of these coefficients are consistent with the pattern for High-CRT subjects in Figure 5, with a slightly lower vertical intercept and positive interactions in the Robot treatment. For Low-CRT subjects, similar Wald tests for Models A2F, A2G, and A2H are significant with respective p-values of 0.009, 0.009, and 0.006, consistent with the change in the average bid function of Low-CRT subjects shown in Figure 5. The signs of the coefficients of Robot and its interactions with OtherBid and SqRtOtherBid are also consistent with the pattern for Low-CRT subjects in Figure 5, with a higher vertical intercept and negative interactions in the Robot treatment.

High CRT subgroup:

                       Model A2A   Model A2B   Model A2C   Model A2D
OtherBid               -0.802***   -0.915***   -0.915***   -0.915***
                       (0.136)     (0.167)     (0.167)     (0.167)
SqRtOtherBid           2.761***    2.696***    2.696***    2.696***
                       (0.357)     (0.516)     (0.517)     (0.517)
RiskAverse             -           -           -0.135      -0.225
                                               (0.135)     (0.177)
ZeroBid                -           -           0.541***    0.520***
                                               (0.091)     (0.098)
Robot                  1.044**     -0.569      -0.373      -1.257
                       (0.427)     (0.482)     (0.370)     (1.577)
Robot × OtherBid       -           0.261       0.261       0.261
                                   (0.277)     (0.278)     (0.278)
Robot × SqRtOtherBid   -           0.150       0.150       0.150
                                   (0.700)     (0.702)     (0.703)
Robot × RiskAverse     -           -           -           0.153
                                                           (0.262)
Robot × ZeroBid        -           -           -           -
constant               0.582       1.280***    1.849**     2.377**
                       (0.389)     (0.459)     (0.850)     (1.074)
N                      407         407         407         407
Clusters               37          37          37          37

Low CRT subgroup:

                       Model A2E   Model A2F   Model A2G   Model A2H
OtherBid               -0.109      0.043       0.043       0.043
                       (0.139)     (0.136)     (0.136)     (0.136)
SqRtOtherBid           1.201***    1.403***    1.403***    1.403***
                       (0.385)     (0.325)     (0.326)     (0.326)
RiskAverse             -           -           -0.009      0.084
                                               (0.108)     (0.109)
ZeroBid                -           -           0.181*      0.391***
                                               (0.100)     (0.145)
Robot                  -0.536      1.742**     1.677**     3.499**
                       (0.484)     (0.718)     (0.732)     (1.613)
Robot × OtherBid       -           -0.295      -0.295      -0.295
                                   (0.272)     (0.272)     (0.273)
Robot × SqRtOtherBid   -           -0.393      -0.393      -0.393
                                   (0.756)     (0.757)     (0.758)
Robot × RiskAverse     -           -           -           -0.271
                                                           (0.249)
Robot × ZeroBid        -           -           -           -0.299
                                                           (0.186)
constant               2.114***    0.942***    0.816       0.115
                       (0.456)     (0.313)     (0.668)     (0.724)
N                      748         748         748         748
Clusters               68          68          68          68

Table A2. Linear regression results for bids in Part 2(B), split into High CRT and Low CRT subgroups. Robust standard errors in parentheses are clustered by individual subject. ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels, respectively.


APPENDIX B: EXPERIMENTAL INSTRUCTIONS

Human Treatment

INSTRUCTIONS

Payments in this experiment are denominated in tokens. Each token is worth $0.50. You begin the experiment with an initial balance of 10 tokens.

PART 1

In PART 1 of the experiment you will be randomly and anonymously matched with another participant. You and the other participant will choose how much to bid for a reward of 8 tokens. You may bid any number between 0 and 10 tokens. After both participants bid, the computer will calculate your earnings. Both participants will have to pay their bids, no matter who receives the reward. Thus, your earnings will be calculated as follows:

If you receive the reward: Earnings = 8 tokens - your bid
If you do not receive the reward: Earnings = 0 tokens - your bid

You begin this experiment with an initial balance of 10 tokens. You may receive either positive or negative earnings. If your earnings are negative, we will subtract them from your initial 10 tokens. If the earnings are positive, we will add them to your initial 10 tokens. The more you bid, the more likely you are to receive the reward. The more the other participant bids, the less likely you are to receive the reward. Specifically, for each token you bid, you will receive one lottery ticket. After both participants make their bids, the computer will randomly draw one ticket from the tickets purchased by you and the other participant. The owner of the drawn ticket will receive the reward of 8 tokens. Therefore, your chance of receiving the reward is equal to the number of tokens you bid divided by the total number of tokens bid by you and the other participant.

Probability you receive the reward = Your bid / (Your bid + Other participant's bid)

If both participants bid zero, the reward is randomly assigned to one of the two participants with equal probability. After both participants bid, the computer will make a random draw based on these bids and determine who receives the reward. The computer will then calculate your earnings based on your bid and whether or not you received the reward.

Example: Assume that you bid 2 tokens and the other participant bids 3 tokens. Therefore, the computer assigns 2 lottery tickets to you and 3 lottery tickets to the other participant. Then the computer randomly draws 1 lottery ticket out of 5 (2 + 3). Thus, your chance of receiving the reward is 40% = 2/5 and the other participant's chance is 60% = 3/5. Also, assume that the computer made a random draw and the other participant has received the reward. Therefore, your earnings = 0 - 2 = -2 tokens, since you did not receive the reward and your bid was 2 tokens. The other participant's earnings = 8 - 3 = 5 tokens, since the reward was 8 tokens and the other participant's bid was 3 tokens.

PART 2

In PART 2 of the experiment you will be randomly and anonymously re-matched with another participant. As in the previous part of the experiment, you and the other participant will choose how much to bid for a reward of 8 tokens. You may bid any number between 0 and 10 tokens. PART 2 of the experiment works in the same way as the previous part, except that in PART 2 either you or the other participant will be randomly selected with equal probability to update your bid based on the other's bid. In PART 2 (A) you and the other participant will choose initial bids. If you are not randomly selected to update your bid, then your initial bid in PART 2 (A) will be your bid for PART 2. In PART 2 (B), you will suppose that you are randomly selected to update your bid. For each possible initial bid by the other player from 0 to 10 tokens, you will enter the amount you would like to bid in response. If you are randomly selected to update your bid, then the bid you select in response to the other player's actual initial bid will be your bid for PART 2.

Example: assume that in PART 2 (A), you choose an initial bid of 5 tokens and the other participant chooses an initial bid of 7 tokens. Also, assume that in PART 2 (B), when you choose bids for every possible initial bid by the other participant, you choose a bid of 6 tokens in response to an initial bid of 7 tokens by the other participant.
If you are randomly selected to update your bid, then the other participant's bid would be 7 tokens, and your bid would be 6 tokens.

IMPORTANT NOTES

Either PART 1 or PART 2 (but not both) will count for payment, with equal probability. You will not know until the end of the experiment whether PART 1 or PART 2 has been randomly selected for payment. In both PART 1 and PART 2, you will not be told which of the participants in this room is matched with you. You can never guarantee yourself the reward. However, by increasing your bid, you can increase your chance of receiving the reward. Regardless of who receives the reward, both participants will have to pay their bids if that part of the experiment is randomly selected for payment.


The actual earnings for the experiment will be calculated at the end of the experiment.

PART 3

There is one additional part of the experiment, PART 3. PART 3 will always count for payment. In PART 3 of the experiment you will be randomly and anonymously matched with another participant. The rules for PART 3 are similar to the rules for PART 1. You and the other participant will choose how much to bid in order to be a winner. The only difference is that in PART 3, the winner does not receive the reward. That is, the reward is worth 0 tokens to you and the other participant. After both participants have bid, your earnings will be calculated as follows:

Earnings = 0 tokens - your bid

After both participants bid, the computer will make a random draw based on these bids and determine the winner. At the end of the experiment, the computer will display the results of this part of the experiment (that is, whether you won or not), and will calculate your earnings.

Robot Treatment

INSTRUCTIONS

Payments in this experiment are denominated in tokens. Each token is worth $0.50. You begin the experiment with an initial balance of 10 tokens.

PART 1

In PART 1 of the experiment you will be matched with a robot player. The robot player is computerized and receives no earnings. The robot player has been programmed to play in the same way as a previous human player in a past experiment, in which human players were matched with one another rather than with robot players. Thus, you should expect the robot player to play in the same way you would expect a human to play. You and the robot will choose how much to bid for a reward of 8 tokens. You may bid any number between 0 and 10 tokens. After you and the robot bid, the computer will calculate your earnings. You will have to pay your bid, whether you receive the reward or not.
Thus, your earnings will be calculated as follows:

If you receive the reward: Earnings = 8 tokens - your bid
If you do not receive the reward: Earnings = 0 tokens - your bid

You begin this experiment with an initial balance of 10 tokens. You may receive either positive or negative earnings. If your earnings are negative, we will subtract them from your initial 10 tokens. If the earnings are positive, we will add them to your initial 10 tokens.


The more you bid, the more likely you are to receive the reward. The more the robot bids, the less likely you are to receive the reward. Specifically, for each token you bid, you will receive one lottery ticket. After you and the robot bid, the computer will randomly draw one ticket from the tickets purchased by you and the robot. If you are the owner of the drawn ticket, you will receive the reward of 8 tokens. Therefore, your chance of receiving the reward is equal to the number of tokens you bid divided by the total number of tokens bid by you and the robot.

Probability you receive the reward = Your bid / (Your bid + Robot's bid)

If both you and the robot bid zero, you will have a 50% chance to receive the reward. After you and the robot bid, the computer will make a random draw based on these bids and determine whether you receive the reward. The computer will then calculate your earnings based on your bid and whether or not you received the reward.

Example: Assume that you bid 2 tokens and the robot bids 3 tokens. Therefore, the computer assigns 2 lottery tickets to you and 3 lottery tickets to the robot. Then the computer randomly draws 1 lottery ticket out of 5 (2 + 3). Thus, your chance of receiving the reward is 40% = 2/5 and the chance you will not receive the reward is 60% = 3/5. Also, assume that the computer made a random draw and you have not received the reward. Therefore, your earnings = 0 - 2 = -2 tokens, since you did not receive the reward and your bid was 2 tokens.

PART 2

In PART 2 of the experiment you will again be matched with a robot player. As in the previous part of the experiment, you and the robot will choose how much to bid for a reward of 8 tokens. You may bid any number between 0 and 10 tokens. PART 2 of the experiment works in the same way as the previous part, except that in PART 2 either you or the robot will be randomly selected with equal probability to update your bid based on the other's bid. In PART 2 (A) you and the robot will choose initial bids. If you are not randomly selected to update your bid, then your initial bid in PART 2 (A) will be your bid for PART 2. In PART 2 (B), you will suppose that you are randomly selected to update your bid. For each possible initial bid by the robot from 0 to 10 tokens, you will enter the amount you would like to bid in response. If you are randomly selected to update your bid, then the bid you select in response to the robot's actual initial bid will be your bid for PART 2.

Example: assume that in PART 2 (A), you choose an initial bid of 5 tokens and the robot chooses an initial bid of 7 tokens.
Also, assume that in PART 2 (B), when you choose bids for every possible initial bid by the robot, you choose a bid of 6 tokens in response to an initial bid of 7 tokens by the robot. If you are randomly selected to update your bid, then the robot's bid would be 7 tokens, and your bid would be 6 tokens.

IMPORTANT NOTES

Either PART 1 or PART 2 (but not both) will count for payment, with equal probability. You will not know until the end of the experiment whether PART 1 or PART 2 has been randomly selected for payment. In both PART 1 and PART 2, you can never guarantee yourself the reward. However, by increasing your bid, you can increase your chance of receiving the reward. Regardless of whether you receive the reward, you will have to pay your bid if that part of the experiment is randomly selected for payment. On the top of your screen in each part of the experiment, you will see a robot index number. If you wish, you can match this number with a printed table after the experiment to verify that the robot's bid is predetermined. The actual earnings for the experiment will be calculated at the end of the experiment.

PART 3

There is one additional part of the experiment, PART 3. PART 3 will always count for payment. In PART 3 of the experiment you will again face a robot player, programmed to play like a human player in a past experiment. The rules for PART 3 are similar to the rules for PART 1. You and the robot will choose how much to bid in order to be a winner. The only difference is that in PART 3, the winner does not receive the reward. That is, the reward is worth 0 tokens. After you have bid, your earnings will be calculated as follows:

Earnings = 0 tokens - your bid

After you and the robot bid, the computer will make a random draw based on these bids and determine the winner. At the end of the experiment, the computer will display the results of this part of the experiment (that is, whether you won or not), and will calculate your earnings.
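The lottery mechanism described in these instructions is simple to simulate; the following Python sketch is my own illustration of the draw, not part of the experimental software (which used z-Tree):

```python
import random

def contest_draw(my_bid, other_bid, reward=8, rng=random):
    """Simulate one contest round: each token bid is a lottery ticket, one
    ticket is drawn, and its owner receives the reward. Both players pay
    their bids regardless of who wins. Returns (my_earnings, other_earnings)."""
    total = my_bid + other_bid
    p_win = 0.5 if total == 0 else my_bid / total
    i_win = rng.random() < p_win
    return ((reward if i_win else 0) - my_bid,
            (0 if i_win else reward) - other_bid)

# The instructions' example: bids of 2 and 3 give a 40% chance of winning,
# so realized earnings are either (6, -3) or (-2, 5).
```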

Highlights

- I experimentally compare bids in contests against human and robot rivals.
- Bids do not differ on average between treatments.
- Cognitive reflection test (CRT) scores correlate with bids against humans.
- I find heterogeneous treatment effects between High-CRT and Low-CRT subjects.
- Bid functions of most Low-CRT subjects deviate qualitatively from best response.