Journal of Business Research 66 (2013) 1406–1408
Research with In-built replications: Comment and further suggestions for replication research

Heiner Evanschitzky a, J. Scott Armstrong b

a Aston Business School, Birmingham, United Kingdom
b University of Pennsylvania, United States
Article history: Received 1 May 2011; received in revised form 1 December 2011; accepted 1 January 2012; available online 19 May 2012.

Keywords: Replication research; Advancing science; In-built replications; Commentary

Abstract

This brief commentary on the paper Designing Research with In-built Differentiated Replication expands on concerns about the lack of replication research by focusing on three key questions of continuing importance: Why should researchers conduct more replication research? Why do so few researchers conduct replication studies? What can researchers do to add more replication studies? The paper identifies barriers to replication related to the scientific system, the replication researcher, and the initial research. Suggestions include publishing all papers electronically along with their reviews, authors taking steps to encourage replications of their work, and editors inviting replications of important papers. The scientific community should establish a replication index as a measure of output quality.
1. Introduction

Many authors call for more replication research in management science, with apparently little impact on research practice (e.g., Evanschitzky, Baumgarth, Hubbard, & Armstrong, 2007). Calls to increase the amount of replication research are praiseworthy. The authors of the paper Designing Research with In-built Differentiated Replication (Uncles & Kwok, 2013, this issue) restate the importance of replication as an integral component of the initial research design, highlighting the value of differentiated replication and of using multiple sets of data to establish empirical generalizations. Before discussing how to replicate, however, it is important to understand why replications are necessary and what discourages researchers from conducting them more often.
2. Why replicate?

Researchers generally agree on the importance of replication research (e.g., Hubbard & Vetter, 1996; Hunter, 2001; Madden, Easley, & Dunn, 1995; Singh, Ang, & Leong, 2003; Tsang & Kwan, 1999; Wells, 2001). Their agreement is partly due to the large percentage of failed replications
(Evanschitzky & Armstrong, 2010; Evanschitzky et al., 2007; Hubbard & Armstrong, 1994). Many proposals encourage replications, including editors inviting replications of important papers, accepting replications on the basis of proposals that outline the replication attempt, appointing replication editors, and finding ways to publish all replications.

Scientific findings rest upon replication, but at present many findings in the management sciences fail to replicate. Given the lack of replications, researchers have suggested that practitioners should be skeptical about basing decisions on findings reported in journals. If medicine used the same practice, researchers might test many treatments and occasionally, by chance alone, find some of them apparently useful (the simulation sketch at the end of this section illustrates the point). Teachers should be wary of including the findings of one-off studies in their curricula, and researchers need to recognize that such findings rest on a weak foundation.

Several journals have responded to the challenge of publishing replications. These responses include Winer's (1998) revival of the "Research Notes and Communications" section of the Journal of Marketing Research and Mick's (2001) introduction of a "Re-inquiries" section in the Journal of Consumer Research. The Journal of Money, Credit and Banking offers a practical way to make replications possible: authors must deposit the data and code used in the papers they publish. To ease replication of papers published in their journal, the editorial team of the International Journal of Forecasting adopted a policy of requesting data and details of the methods, and instituted a systematic procedure for obtaining both from authors prior to publication. The Journal of Conflict Resolution adopted a similar editorial policy, and the Journal of Applied Econometrics placed additional emphasis on replication by appointing a replication editor.
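To make the medicine analogy concrete, the following minimal sketch simulates testing many treatments that in truth do nothing. The sample size, significance level, and number of treatments are illustrative assumptions, not figures from any study cited here; roughly 5% of the useless treatments clear the conventional threshold, and without replication nothing separates them from genuine discoveries.

```python
# A minimal sketch of the "useful by chance" point. All numbers are
# illustrative assumptions: 1,000 treatments with no true effect, each
# tested once against a control at the 5% significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha, treatments = 50, 0.05, 1_000

false_discoveries = 0
for _ in range(treatments):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.0, 1.0, n)  # the treatment truly has no effect
    false_discoveries += stats.ttest_ind(control, treated).pvalue < alpha

# Expect roughly alpha * treatments = 50 spurious "discoveries".
print(f"{false_discoveries} of {treatments} useless treatments appear useful")
```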
3. Why so few replications?

Despite these positive steps, the rate of published replications is still declining (Evanschitzky et al., 2007). The short term invites pessimism, and this paper proposes explanations for the lack of published replications. The barriers lie in the scientific review system, the replication researcher, and the initial research (Baumgarth & Evanschitzky, 2005).

3.1. Scientific review system

Even though scientists believe replication is important, and editorials call for replication (e.g., Monroe, 1992a, 1992b), a bias exists against publishing such research (Bornstein, 1990; Easley, Madden, & Dunn, 2000; Neuliep & Crandall, 1990). Kerr, Tolliver, and Petree (1977) found that 52% of reviewers indicated they would reject direct replications. Similarly, Rowney and Zensiek (1980) found that 34% of reviewers had concerns about replications serious enough to cause them to reject any direct replication attempt. Bornstein (1990) identifies a replication paradox: if research successfully replicates previous findings, reviewers and editors do not see the result as an important contribution; they falsely dismiss it as nothing new, something that merely confirms previous findings. Nor does a failure to replicate initial findings increase the chance of publication: either the findings are insignificant or the researchers cannot explain why the replication failed (Rowney & Zensiek, 1980).

3.2. Replication researcher

Reid, Soley, and Wimmer (1981) suggest that conducting a replication can be as time-consuming and laborious as doing original research. Why bother replicating in light of a low likelihood of publication? Business schools base their reward systems on the number of publications in the right journals; fairness in promotion decisions matters more than whether the researcher discovered anything important, and this philosophy carries over into the journal review process: journals publish papers when the reviewers vote in favor. The increasing emphasis on quantity leads to "the iron law of important papers" (Holub, Tappeiner, & Eberharter, 1991): the number of important papers rises linearly while the total number of papers rises exponentially, so important papers make up an ever smaller percentage of the papers published (the first sketch at the end of this subsection states the law formally). Pay for papers, and one gets papers, not scientific progress; the advancement of scientists becomes more important than the advancement of science.

Another barrier preventing replications is a lack of knowledge about how to conduct them and, more generally, the misinterpretation of empirical findings. Researchers themselves falsely believe that tests of statistical significance provide good information about the likelihood of successfully replicating a finding. Oakes (1986) showed that 42 of 70 (60%) experienced academic psychologists falsely believed that an experimental outcome significant at the 0.01 level has a 0.99 probability of being statistically significant if the study were replicated. The second sketch below shows how far from reality that belief can be.
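One way to state the iron law formally (a compact restatement of Holub et al.'s claim, with alpha, beta, and gamma as unspecified positive constants rather than estimated values): let I(t) be the number of important papers and N(t) the total number of papers published by time t. Then

\[
I(t) = \alpha t, \qquad N(t) = \beta e^{\gamma t}, \qquad
\frac{I(t)}{N(t)} = \frac{\alpha t}{\beta e^{\gamma t}} \to 0 \quad \text{as } t \to \infty .
\]

The share of important papers thus shrinks toward zero even though their absolute number keeps rising.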
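The gap between the belief Oakes documents and reality is easy to demonstrate by simulation. In the sketch below, the effect size, sample size, and number of trials are illustrative assumptions rather than values from any study cited here; because an independent replication succeeds with probability equal to the design's statistical power, the replication rate among initially significant studies lands far below 0.99.

```python
# A minimal sketch: does p < .01 imply a 0.99 chance that a replication
# is also significant? No. Effect size d and per-group sample size n
# are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, alpha, trials = 0.5, 30, 0.01, 20_000

def significant() -> bool:
    """Run one two-group experiment and test it at the alpha level."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(d, 1.0, n)
    return stats.ttest_ind(control, treated).pvalue < alpha

originals = replications = 0
for _ in range(trials):
    if significant():                   # the "original" study reaches p < .01
        originals += 1
        replications += significant()   # an independent replication attempt

print(f"Replication rate given a significant original: {replications / originals:.2f}")
# Under these assumptions the rate is roughly 0.2 to 0.3, nowhere near 0.99.
```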
3.3. Initial research

A final barrier to replication research touches upon the more general issues of academic practice and the importance of research findings. Madden, Franz, and Mittelstaedt (1979) conclude that, based on the information in the papers alone, investigators could replicate only two of 60 papers appearing in the proceedings of leading marketing conferences. Furthermore, Reid, Rotfeld, and Wimmer (1982) find that only about 50% of authors publishing in leading marketing journals are willing to share the materials necessary for others to replicate their work. Madden et al. (1979) and Dewald, Thursby, and Anderson (1986) arrive at similar conclusions in their studies.
Apparently, the way in which researchers present their findings prevents replication.

Replication of an academic study is likely to create additional insights only if the original study presents important findings. Exceptions occur, such as when a substantial number of researchers pursue an area that shows little promise (game theory and Box–Jenkins spring to mind). Academics must therefore question the importance of published findings: estimates suggest that only 3% to 20% of published papers are important (e.g., Armstrong, 2004; Armstrong, Brodie, & Parsons, 2001; Churchill, 1988; Simon, 1986), and papers with controversial empirical findings appear especially difficult to publish (Armstrong & Hubbard, 1991). This complements the previously mentioned "iron law of important papers" (Holub et al., 1991). A key step toward increasing replication research, then, is making the initial research important enough to be worth replicating.

4. What to do?

Authors with important and well-supported papers should encourage replications. Doing so might require extra effort, but the effort will pay off eventually. Ioannidis (2005) found replication studies for about three-fourths of the highly cited papers in medicine in a sample from 1990 through 2003, so replication is a sign of the importance of the initial study. One might even consider successful replications of initial research as one way of judging a research paper's value; perhaps a replication index could measure output quality (a purely hypothetical sketch of such an index follows this section).

Journal editors could identify important papers in the field for replication or extension and invite designated researchers to publish such replications. If editors restricted invitations to important problems, they would be likely to gain the attention and cooperation of the authors of the original study. Furthermore, researchers are much more likely to undertake a replication of an important study, especially when invited.

The reward system holds little promise in the short term, but the long term is bright, thanks to technology. Technology allows for the electronic publication of all papers. Publication is less expensive for editors than trying to justify rejections, and also for authors, who otherwise go through endless rounds of revisions, often for trivial changes. Because anyone could publish, rewarding people based on the number of publications would be senseless; the effect would be to reduce the number of papers submitted, so that only those who have something important to say would publish. Such a system would ignore unimportant or useless papers and would judge researchers on the importance of their findings, as measured by how well the findings hold up in replication attempts, instead of on a count of publications. This would move the management sciences closer to the physical sciences, where researchers often pay to publish their papers, acceptance rates are very high, and replications are common for important papers. Perhaps these suggestions, as well as the suggestions put forward in the paper Designing Research with In-built Differentiated Replication, can help advance management science.
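The commentary proposes a replication index without specifying one, so the sketch below is purely hypothetical. It scores a paper by the share of independent replication attempts that support its central finding, shrunk toward a neutral 0.5 so that a single lucky replication does not outscore a finding replicated many times.

```python
# Hypothetical replication index: an illustration only, since the
# commentary suggests the idea of such an index but defines no formula.
def replication_index(successes: int, attempts: int, prior_weight: float = 2.0) -> float:
    """Share of successful replication attempts, shrunk toward 0.5."""
    if attempts < 0 or successes > attempts:
        raise ValueError("need 0 <= successes <= attempts")
    return (successes + 0.5 * prior_weight) / (attempts + prior_weight)

print(round(replication_index(1, 1), 2))   # 0.67: one attempt, one success
print(round(replication_index(9, 10), 2))  # 0.83: a well-replicated finding
```

Any real index would also need to weight the quality and differentiation of the replication attempts, in the spirit of Uncles and Kwok's differentiated replications.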
References

Armstrong, J. S. (2004). Does an academic research paper contain useful knowledge? No (p < .05). Australasian Marketing Journal, 12(2), 51–61.
Armstrong, J. S., Brodie, R., & Parsons, A. (2001). Hypotheses in marketing science: Literature review and publication audit. Marketing Letters, 12(2), 171–187.
Armstrong, J. S., & Hubbard, R. (1991). Does the need for agreement among reviewers inhibit the publication of controversial findings? The Behavioral and Brain Sciences, 14, 136–137.
Baumgarth, C., & Evanschitzky, H. (2005). Die Rolle von Replikationen in der Marketingwissenschaft [The role of replications in marketing science]. Marketing ZFP, 27(4), 253–262.
Bornstein, R. (1990). Publication politics, experimenter bias and the replication process in social science research. Journal of Social Behavior and Personality, 5(4), 71–81.
Churchill, G. F. (1988). Comments on the AMA Task Force. Journal of Marketing, 52(1), 26–31.
Dewald, W. G., Thursby, J. G., & Anderson, R. G. (1986). Replication in empirical economics: The Journal of Money, Credit, and Banking project. The American Economic Review, 76(4), 587–603.
Easley, R. W., Madden, C. S., & Dunn, M. G. (2000). Conducting marketing science. Journal of Business Research, 48(1), 83–92.
Evanschitzky, H., & Armstrong, J. S. (2010). Replication of forecasting research. International Journal of Forecasting, 26(1), 4–8.
Evanschitzky, H., Baumgarth, C., Hubbard, R., & Armstrong, J. S. (2007). Replication research's disturbing trend. Journal of Business Research, 60(4), 411–415.
Holub, H. W., Tappeiner, G., & Eberharter, V. (1991). The iron law of important papers. Southern Economic Journal, 58(2), 317–328.
Hubbard, R., & Armstrong, J. S. (1994). Replications and extensions in marketing: Rarely published but quite contrary. International Journal of Research in Marketing, 11(3), 233–248.
Hubbard, R., & Vetter, D. E. (1996). An empirical comparison of published replication research in accounting, economics, finance, management, and marketing. Journal of Business Research, 35(2), 153–164.
Hunter, J. E. (2001). The desperate need for replications. Journal of Consumer Research, 28(1), 149–158.
Ioannidis, J. P. A. (2005). Contradicted and initially stronger effects in highly cited clinical research. Journal of the American Medical Association, 294(2), 218–228.
Kerr, S., Tolliver, J., & Petree, D. (1977). Manuscript characteristics which influence acceptance for management and social science journals. Academy of Management Journal, 20(1), 132–141.
Madden, C. S., Easley, R. W., & Dunn, M. G. (1995). How journal editors view replication research. Journal of Advertising, 24(4), 77–87.
Madden, C. S., Franz, L. S., & Mittelstaedt, R. (1979). The replicability of research in marketing. In O. C. Ferrell, S. W. Brown, & C. W. Lamb (Eds.), Conceptual and theoretical developments in marketing (pp. 76–85). Chicago.
Mick, D. G. (2001). From the editor. Journal of Consumer Research, 28(1), unpaged.
Monroe, K. B. (1992a). Editorial: On replications in consumer research: Part I. Journal of Consumer Research, 19(1), preface.
Monroe, K. B. (1992b). Editorial: On replications in consumer research: Part II. Journal of Consumer Research, 19(1), preface.
Neuliep, J. W., & Crandall, R. (1990). Editorial bias against replication research. Journal of Social Behavior and Personality, 5(4), 85–90.
Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York, NY: Wiley.
Reid, L. N., Rotfeld, H. J., & Wimmer, R. D. (1982). How researchers respond to replication requests. Journal of Consumer Research, 9(2), 216–218.
Reid, L. N., Soley, L. C., & Wimmer, R. D. (1981). Replication in advertising research: 1977, 1978, 1979. Journal of Advertising, 10(1), 3–13.
Rowney, J. A., & Zensiek, T. J. (1980). Manuscript characteristics influencing reviewers' decisions. Canadian Psychology, 21(1), 17–21.
Simon, H. (1986). Herausforderungen an die Marketingwissenschaft [Challenges for marketing science]. Marketing ZFP, 8(3), 205–213.
Singh, K., Ang, S. H., & Leong, S. M. (2003). Increasing replication for knowledge accumulation in strategy research. Journal of Management, 29(4), 533–549.
Tsang, E. W. K., & Kwan, K. M. (1999). Replication and theory development in organizational science: A critical realist perspective. Academy of Management Review, 24(4), 759–780.
Uncles, M. D., & Kwok, S. (2013). Designing research with in-built differentiated replication. Journal of Business Research, 66(9), 1398–1405 (this issue).
Wells, W. D. (2001). The perils of N = 1. Journal of Consumer Research, 28(4), 494–498.
Winer, R. S. (1998). From the editor. Journal of Marketing Research, 35(1), iii–v.