ARTICLE IN PRESS
Therapie (2016) xxx, xxx—xxx
Available online at ScienceDirect (www.sciencedirect.com)

EDITORIAL

False positive results in preclinical research

In 2005, John PA Ioannidis published a paper entitled "Why most published research findings are false" [1]. He elegantly pointed out several factors associated with unreliable findings, such as small sample sizes, small effect sizes, a high number of tested relationships, high protocol flexibility, financial interests and hot scientific fields. He also advocated that research findings should be reproduced by several independent research groups. He was not specifically targeting preclinical research, but one may acknowledge that these biases are likely to be present in many fundamental research papers.

Statistical bias

A PhD student asking their supervisor about the number of samples to test in each experimental condition is likely to get an answer such as "the number needed to have a mean value, a standard deviation and, if possible, a statistically significant difference with a P-value < 0.05". It is very unlikely that the anticipated effect size of the intervention will be used for sample size calculation, as is required in clinical research. Small sample sizes (small numbers of animals or cell culture batches) may lead to underpowered experiments and, ultimately, meaningless preclinical results. Regarding the widespread use of statistical tests to produce a P-value, García-Berthou et al. showed, in a 2004 publication, that more than 10% of the statistical results published in Nature and the British Medical Journal in 2001 were incongruent, probably mostly due to rounding, transcription, or type-setting errors [2].
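The effect-size-based sample size calculation alluded to above can be sketched with the standard normal approximation for comparing two group means; the effect size, significance level and power below are illustrative assumptions, not values from the text.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of
    means, via the normal approximation: n = 2 * ((z_alpha/2 + z_power) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "medium" standardized effect (Cohen's d = 0.5) already requires
# ~63 animals per group at 80% power -- far more than typical
# preclinical group sizes.
print(n_per_group(0.5))  # → 63
```

A t-distribution correction would add a subject or two per group; the point is that the anticipated effect size and power, not convenience, should drive the group size.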

Methodological bias and lack of results reproducibility

Blinded intervention allocation and blinded outcome assessment are rarely performed in preclinical research, leading to comparability issues [3]. In addition, day-to-day experimental reproducibility is sometimes hard to achieve, and researchers may blame external factors such as room temperature variations, changes in reagents, etc. Non-matching results may therefore be discarded from the final analysis, introducing additional bias. As a matter of fact, changing reagent sources (antibodies, buffers, culture medium, etc.) during a protocol may by itself modify experimental results. In 2015, Freedman et al. reported and weighted the main causes of preclinical irreproducibility: biological reagents and reference materials (36.1%), study design (27.6%), data analysis and reporting (25.5%) and laboratory protocols (10.8%) [4].

http://dx.doi.org/10.1016/j.therap.2016.06.001
0040-5957/© 2016 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.

Please cite this article in press as: Angoulvant D, Bejan-Angoulvant T. False positive results in preclinical research. Therapie (2016), http://dx.doi.org/10.1016/j.therap.2016.06.001


Lost in translation

Focusing on translational research, Perrin claimed in 2014 that 80% of potential therapeutic strategies that were effective in mouse models failed when tested in people [5]. In addition to methodological pitfalls, other biases may explain this discrepancy between preclinical and translational findings, such as the relevance of animal models, genetic background instability, sex influence and the choice of outcomes. He suggested several measures to optimize mouse studies and avoid false positive findings, such as documented exclusion of irrelevant animals, balancing for sex, splitting littermates among experimental groups and gene tracking in modified animals, as gene modifications may not be reliably inherited.
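Littermate splitting and sex balancing amount to stratified randomization. A minimal sketch, with hypothetical animal records and group names (not from Perrin's protocol):

```python
import random
from collections import defaultdict

def allocate(animals, groups, seed=0):
    """Stratified randomization: shuffle animals within each (litter, sex)
    stratum, then deal them out in rotation so that littermates and the
    sexes end up split evenly across experimental groups."""
    rng = random.Random(seed)  # fixed seed only for a reproducible demo
    strata = defaultdict(list)
    for animal in animals:
        strata[(animal["litter"], animal["sex"])].append(animal)
    assignment, offset = {}, 0
    for key in sorted(strata):
        members = strata[key]
        rng.shuffle(members)
        for i, animal in enumerate(members):
            # the rotating offset keeps overall group sizes balanced
            assignment[animal["id"]] = groups[(offset + i) % len(groups)]
        offset += len(members)
    return assignment

# Hypothetical cohort: two litters, both sexes represented.
mice = [{"id": f"m{i}", "litter": "L1" if i < 6 else "L2",
         "sex": "F" if i % 2 else "M"} for i in range(12)]
assignment = allocate(mice, ["control", "treated"])
```

Each litter then contributes animals to every experimental group, so a litter effect cannot masquerade as a treatment effect.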

Publication bias

Publishing positive results is a major key to success from both a personal (academic promotion) and a collective (research grants, academic funding) point of view. Negative results are unlikely to be published, leading other researchers to lose time and money duplicating worthless experiments. In addition, published negative results have a much lower citation rate than supportive data [6]. Scientific journals also play their part in this biased process: they often give manuscripts that seem "novel" a higher chance of publication than those reproducing already known results. If one is concerned with scientific reproducibility, then the current emphasis on novelty and its role in the scientific process need to be re-examined [7].

How to make more research findings true

In 2014, John PA Ioannidis made a proposal on "How to make more published research true" [8]. He proposed the adoption of large-scale collaborative research, a replication culture, research protocol registration and data sharing, reproducibility practices, better statistical methods, standardization of definitions and analyses, more appropriate statistical thresholds, and improvements in study design standards, peer review, reporting, dissemination of research, and training of the scientific workforce. As an example, the Consortium for preclinicAl assESsment of cARdioprotective therapies (CAESAR) is a large-scale US collaborative group investigating cardioprotective therapies in animal models. The aim of this collaboration is to achieve rigorous, accurate and reproducible evaluation of putative infarct-sparing interventions in mice, rabbits and pigs. All animal sample generation and data generation are based on the principles of randomization, investigator blinding, a priori sample size determination and exclusion criteria, appropriate statistical analyses, and assessment of reproducibility. To do so, they established two surgical centers for each animal species, a Pathology Core (to assess infarct size), a Biomarker Core (to measure plasma cardiac troponin levels), and a Data Coordinating Center, all with the oversight of an external Protocol Review and Monitoring Committee [9].

Meta-analysis of preclinical studies

Systematic review and meta-analysis of preclinical studies may also help describe the quality of preclinical studies, summarize the evidence, identify heterogeneity and publication bias, and therefore improve research quality and the validity of preclinical decision-making on therapeutic targets [3]. Funnel plot asymmetry may suggest the presence of publication bias and an overestimation of the experimental intervention effect size, likely to be misleading for future translational research. Comparing small and large animal studies of cardiac stem cell treatment in myocardial infarction, the meta-analysis by Zwetsloot et al. showed a significantly larger improvement in left ventricular ejection fraction in mice (12%) than in pigs (5%), together with more publication bias in mouse studies [10]. In a meta-analysis of preclinical studies investigating the neuroprotective effect of IL1-RA, Banwell et al. showed that the quality of the included studies was modest (only 1/11 studies reported randomization to intervention and only 3/11 reported blinded outcome assessment), no study reported a sample size calculation, and there was evidence consistent with substantial publication bias [11]. Methodological optimization, reproducibility practices and the use of systematic review and meta-analysis may reduce false positives in preclinical research and improve the efficacy of future translational and clinical research. Wide mobilization of both researchers and the editorial boards of scientific journals is required to move forward on this path.
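Funnel-plot asymmetry is often quantified with Egger's regression: the standardized effect (effect/SE) is regressed on precision (1/SE), and an intercept far from zero flags small-study effects. A minimal sketch with made-up study data (a real analysis would also compute a standard error and significance test for the intercept):

```python
def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect (effect/SE) on precision (1/SE) by ordinary
    least squares; an intercept far from zero suggests that small
    (imprecise) studies report systematically larger effects."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return mean_y - slope * mean_x

# Made-up example: four studies with identical effects give a symmetric
# funnel (intercept ~ 0); inflated effects in the noisier studies give a
# positive intercept, the classic small-study signature.
symmetric = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
skewed = egger_intercept([0.4, 0.5, 0.8, 1.2], [0.1, 0.2, 0.3, 0.4])
```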

Disclosure of interest

The authors declare that they have no competing interests.

References

[1] Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2:e124.
[2] García-Berthou E, Alcaraz C. Incongruence between test statistics and P-values in medical papers. BMC Med Res Methodol 2004;4:13.
[3] Kleikers PWM, Hooijmans C, Göb E, Langhauser F, Rewell SS, Radermacher K, et al. A combined preclinical meta-analysis and randomized confirmatory trial approach to improve data validity for therapeutic target validation. Sci Rep 2015;5:13428.
[4] Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol 2015;13:e1002165.
[5] Perrin S. Preclinical research: make mouse studies work. Nature 2014;507:423—5.
[6] Greenberg SA. How citation distortions create unfounded authority: analysis of a citation network. BMJ 2009;339:b2680.
[7] Ten Hagen KG. Novel or reproducible: that is the question. Glycobiology 2016;26:429.



[8] Ioannidis JPA. How to make more published research true. PLoS Med 2014;11:e1001747.
[9] Jones SP, Tang X-L, Guo Y, Steenbergen C, Lefer DJ, Kukreja RC, et al. The NHLBI-sponsored Consortium for preclinicAl assESsment of cARdioprotective therapies (CAESAR): a new paradigm for rigorous, accurate, and reproducible evaluation of putative infarct-sparing interventions in mice, rabbits, and pigs. Circ Res 2015;116:572—86.
[10] Zwetsloot PP, Végh AMD, Jansen Of Lorkeers SJ, van Hout GP, Currie GL, Sena ES, et al. Cardiac stem cell treatment in myocardial infarction: a systematic review and meta-analysis of preclinical studies. Circ Res 2016;118:1223—32.
[11] Banwell V, Sena ES, Macleod MR. Systematic review and stratified meta-analysis of the efficacy of interleukin-1 receptor antagonist in animal models of stroke. J Stroke Cerebrovasc Dis 2009;18:269—76.

Denis Angoulvant a,∗, Theodora Bejan-Angoulvant b

a Cardiology department and EA4245, Faculté de médecine, université François-Rabelais, Tours University Hospital, 10, boulevard Tonnellé, BP 3223, 37044 Tours, France
b Clinical pharmacology and UMR CNRS 7292, université François-Rabelais, Tours University Hospital, 37044 Tours, France

∗ Corresponding author.
E-mail address: [email protected] (D. Angoulvant)
