Viewpoint: The costs and benefits of deception in economic experiments


Jayson L. Lusk, Department of Agricultural Economics, Purdue University, 403 W. State St., W. Lafayette, IN 47907, United States

ABSTRACT

The historical justifications typically given for the prohibition against deception in economic experiments are less relevant for today’s experiments, which are often conducted in non-lab settings with non-student subjects. I describe a variety of research questions that might be most adequately answered with some form of deception, and call for a more nuanced view of the issue that weighs the importance of the research question against the potential costs of deception. For example, in the case of new food products that have not yet been developed, does the sin of hypothetical bias outweigh the sin of deceiving subjects in a non-hypothetical experiment? It is also important for journals or professions that ban the use of deception to define precisely what practices fall under the ban.

Keywords: Experimental economics; Deception; Research ethics

It is doubtful many people would avidly embrace a “pro-deception” position for economic research involving human subjects. I will not do so here. However, this short note will make the case for a more reasoned and nuanced consideration of the issue than is often espoused in the economics literature.

Deception in economic experiments became taboo because of the context in which early experiments were conducted. Several decades ago, when experimental economics began gaining traction as a distinct method and sub-discipline, the typical experiment involved student subjects who interacted in an induced-value or other abstractly framed decision environment in a computer lab on campus. In this environment, the ban on deception had a rationale explained in early texts on the subject (e.g., Davis and Holt, 1993).1 In particular, subjects who participated in one experiment were highly likely to be recruited to participate in future experiments, and being deceived in a prior experiment may cause a participant to distrust the rules and instructions in a future experiment. More broadly, student subjects may talk to one another outside the experiment, and a lab may develop a reputation for deception, perhaps even tarnishing the credibility of all economic experiments. In short, deception can contaminate the subject pool and degrade trust, which is a public good shared by all experimental economists.2

The rub is that the taboo surrounding deception has come to envelop experiments for which these sorts of pool-contamination and public-good critiques are less applicable. Many experimental economists, in their roles as reviewers or editors, seem to have uncritically applied a blanket methodological prohibition to all experiments, even where there is little conceptual or empirical reason for it.3 Even a casual perusal of the recent literature shows that experiments today are routinely conducted outside the lab, in field settings, with non-standard subject pools (Harrison and List, 2004; Levitt and List, 2009), and randomized controlled field trials have become standard tools for development economists (Duflo et al., 2007). In these contexts, there is a vanishingly small chance a given individual will be asked to participate in a subsequent experiment. Because the experiments are not conducted at a particular lab, the risks to reputation are far lower.

E-mail address: [email protected].
1 Davis and Holt (1993, p. 23) argue experimental economists should be “concerned about developing and maintaining a reputation among the student population for honesty in order to ensure that subject actions are motivated by the induced monetary rewards rather than by psychological reactions to suspected manipulation.” While the argument is appealing, it should be noted that justifications relying on impacts on the public good and the reputation of economists, if carried to their extreme, could also be used to argue for bans on publication of all kinds of economic research.
2 The exhortations against deception largely seem to have come about because of conceptual reasoning about public-good issues, not empirical evidence on the consequences of deception. More recent research has provided some, albeit equivocal, evidence in support of the conceptual concerns. Jamison et al. (2008) and Ortmann and Hertwig (2002) both find that participants who have previously been deceived have different proclivities for showing up for future experiments and often behave differently when they do show up, as compared to subjects who have not previously been deceived. However, in their review, Ortmann and Hertwig (2002) do not find evidence to support the broader public-good concern, whereby effects from one subject spill over to another, affecting the reputation of a lab or the profession more generally.
3 Of course, there are exceptions to this statement, and nuanced views on the issue of deception, along the lines of the views expressed here, exist in the literature; for examples, see Cason and Wu (2017), List (2008), McFadden and Huffman (2017), and Rousu et al. (2015).



In the case of “true” field experiments, participants are not even aware they took part in an experiment. Aside from deontological reasoning, what is the argument against deception in these sorts of field experiments? Whatever the reasons might be, they must be different from the typical ones given for student-based lab experiments. There are ethical considerations to be sure, but most university human-subjects institutional review boards (IRBs), which presumably exist to protect the interests of research participants, allow deception in certain circumstances. IRBs may require researchers to justify the use of deception (showing that the research question cannot be adequately answered any other way), require quasi-consent from the participant (where they agree to participate while acknowledging the full purpose of the research has not been disclosed), or require debriefings of subjects following deception, but there is no blanket prohibition against the practice.4

When, in food and agricultural economics research, might deception produce valuable insights that would be difficult or impossible to obtain without it? A common example is the interest in determining consumers’ preferences for a new food or technology that has not yet been developed (or is in very short supply). In such instances, researchers may want consumers to bid on a product that claims to have a particular attribute or trait when in fact it does not. One could, of course, ask hypothetical willingness-to-pay questions, but which is greater – the sin of hypothetical bias or the sin of deception?5 Research on priming, the subconscious, and “gut” reactions often must involve some form of deception, because informing subjects a priori of the true purpose of the experiment would bring the topic of study to the subject’s conscious attention, undermining the entire point of the inquiry. Studies on the causal impact of social networks or social pressure can be conducted when confederate participants, behaving in a pre-defined manner, are employed. In other cases, researchers may want to study deception in its own right. How do consumers or farmers respond when they learn a food has been mislabeled, a product or marketing claim is deceptive, an agribusiness executive lied, or a contract was not upheld?6

Given that the empirical research suggests spill-over or public-good-type effects from deception in experiments are relatively small (Ortmann and Hertwig, 2002), profession-wide policies to protect a common-pool resource of trust in the profession are hard to justify. My guess is that psychologists are generally no less trusted than economists, despite the fact that their discipline permits deception in experiments. If anything has harmed the reputation of psychological research, it appears to have little to do with deception but rather with the inability of researchers to replicate previous experimental studies (Open Science Collaboration, 2015).7

If a journal or association is to enact a blanket ban on deception in experiments, my recommendation is that an explicit policy and definition be formulated. Yet this is much trickier than it first appears. What, exactly, constitutes deception? The question has been tackled by, among others, Krawczyk (2013), Rousu et al. (2015), and Wilson (2016). Is a bald-faced lie to be treated the same as withholding information? All deceptions are not created equal, and there is ample disagreement among researchers about the seriousness of a given practice. For example, Colson et al. (2016) surveyed food and agricultural economists who had previously conducted experiments and asked whether a list of practices should be banned. Their results suggest about 84% thought having subjects buy a mislabeled product should be banned, but only 57% thought having subjects bid on a mislabeled product (in a design that ensured it was not purchased) should be banned.

As a general rule, professions, editors, and reviewers do not place a blanket ban on other research methods (say, computable general equilibrium models, neural networks, time-series forecasting models, etc.) that produce results that sometimes (hopefully very infrequently) lead to incorrect inference and harm. Rather, we rely on reviewers and editors to thoughtfully consider the strengths and weaknesses of the methods used in a paper relative to the ability of those methods to provide accurate insight on the question at hand. To be sure, it is much tougher to evaluate the benefits of a particular piece of research relative to the various downsides posed by deception in the context of a particular study environment than it is to simply adopt a blanket ban, but if economists can’t equate marginal benefit with marginal cost, I’m not sure who can.

Acknowledgements

The author would like to thank Jay Corrigan, Wally Huffman, Matt Rousu, and Steve Wu for providing comments on an earlier version of this paper.

References

Camerer, C.F., Dreber, A., Forsell, E., Ho, T.H., Huber, J., Johannesson, M., Kirchler, M., Almenberg, J., Altmejd, A., Chan, T., Heikensten, E., 2016. Evaluating replicability of laboratory experiments in economics. Science 351 (6280), 1433–1436.
Cason, T.N., Wu, S.Y., 2017. Subject Pools and Deception in Agricultural and Resource Economics Experiments. Working Paper, Department of Economics, Purdue University.
Colson, G., Corrigan, J.R., Grebitus, C., Loureiro, M.L., Rousu, M.C., 2016. Which deceptive practices, if any, should be allowed in experimental economics research? Results from surveys of applied experimental economists and students. Am. J. Agric. Econ. 98 (2), 610–621.
Davis, D.D., Holt, C.A., 1993. Experimental Economics. Princeton University Press, Princeton.
Duflo, E., Glennerster, R., Kremer, M., 2007. Using randomization in development economics research: a toolkit. In: Schultz, T.P., Strauss, J.A. (Eds.), Handbook of Development Economics, vol. 4. North-Holland, Amsterdam, pp. 3895–3962.
Harrison, G.W., List, J.A., 2004. Field experiments. J. Econ. Lit. 42 (4), 1009–1055.
Jamison, J., Karlan, D., Schechter, L., 2008. To deceive or not to deceive: the effect of deception on behavior in future laboratory experiments. J. Econ. Behav. Organ. 68 (3), 477–488.
Krawczyk, M., 2013. Delineating Deception in Experimental Economics: Researchers’ and Subjects’ Views. Faculty of Economic Sciences Working Paper, University of Warsaw.
Levitt, S.D., List, J.A., 2009. Field experiments in economics: the past, the present, and the future. Eur. Econ. Rev. 53 (1), 1–18.
List, J.A., 2008. Informed consent in social science: response. Science 322 (5902), 672.
McFadden, J.R., Huffman, W.E., 2017. Consumer valuation of information about food safety achieved using biotechnology: evidence from new potato products. Food Policy 69, 82–96.

4 At present, there appear to be efforts to deregulate federal IRB requirements in the U.S., and it may be that certain kinds of economic experiments will no longer require IRB approval in the future.
5 There are some ways around this conundrum. For example, in research designs where people make multiple choices or submit multiple bids on food products, only one choice/bid need be selected as binding. While the binding choice/bid is typically selected at random, this need not always be the case; subjects can simply be told that one of their choices/bids has been preselected as binding but that the selection will not be revealed until all choices/bids are made, and the researcher can ensure that the binding choice/bid corresponds to a truthfully labeled product.
6 My 16-year-old son, Jackson Lusk, suggested a clever experiment that would require deception. Because of placebo effects, research subjects are often not informed as to whether they received the treatment or the control. What if a subject were given the treatment but told they received the control? This would directly pit the psychological placebo effect against the “true” effect of the treatment.
7 Published experimental economics studies appear to have much higher rates of successful replication than do psychology experiments (Camerer et al., 2016).
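To make the workaround in footnote 5 concrete, the following is a minimal illustrative sketch (in Python) of the preselected-binding-round logic; the product labels, bid values, and function names are hypothetical and not drawn from any study cited here.

import random

# Hypothetical products: (label shown to the subject, whether that label is truthful).
# The gene-edited trait below stands in for a product that does not yet exist.
products = [
    ("conventional apple", True),
    ("gene-edited non-browning apple", False),
    ("organic apple", True),
]

# The researcher preselects a truthfully labeled round as binding BEFORE any bids
# are collected; subjects are told a binding round exists but not which one it is.
truthful_rounds = [i for i, (_, truthful) in enumerate(products) if truthful]
binding_round = random.choice(truthful_rounds)

def collect_bid(label):
    # Stand-in for a real elicitation mechanism; here it simply returns a mock bid.
    return round(random.uniform(0.50, 3.00), 2)

bids = [collect_bid(label) for label, _ in products]

# Only the preselected, truthfully labeled round is realized as a transaction, so
# bids on the mislabeled product are observed without it ever being purchased.
label, _ = products[binding_round]
print(f"Binding round: '{label}', bid = ${bids[binding_round]:.2f}")

Whether eliciting bids on a mislabeled product under such a rule still constitutes deception is, of course, exactly the definitional question raised above.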



Open Science Collaboration, 2015. Estimating the reproducibility of psychological science. Science 349 (6251), aac4716.
Ortmann, A., Hertwig, R., 2002. The costs of deception: evidence from psychology. Exp. Econ. 5 (2), 111–131.
Rousu, M.C., Colson, G., Corrigan, J.R., Grebitus, C., Loureiro, M.L., 2015. Deception in experiments: towards guidelines on use in applied economics research. Appl. Econ. Perspect. Policy 37 (3), 524–536.
Wilson, B.J., 2016. The meaning of deceive in experimental economic science. In: DeMartino, G., McCloskey, D. (Eds.), The Oxford Handbook of Professional Economic Ethics. Oxford University Press, Oxford.
