Comment

Adverse-event rates: journals versus databases

Clinical trials prospectively assess therapeutic approaches in specific populations, examining efficacy and adverse side-effects. Variability of adverse-event profiles across trials and populations is expected. In our experience with multicentre clinical trials in cancer research funded by the US National Institutes of Health, coordinating centres are required to monitor trials through ongoing reviews of submitted data, expedited reporting of serious adverse events,1,2 data and safety monitoring boards and institutional review boards,3,4 and regular data transfers to external entities.5 Data from several trials can be combined for licensure (eg, by the US Food and Drug Administration [FDA]) after extensive review, appearing thereafter in summary format in package inserts (figure). Patients and physicians may select one treatment over another on the basis of adverse events summarised in package inserts or published work. Finally, pharmacosurveillance after marketing could detect new but rare signals of adverse events.6

Orit Scharf and A Dimitrios Colevas7 recently compared data for adverse events published in peer-reviewed manuscripts with data submitted to and monitored by the US National Cancer Institute (NCI) via the Clinical Data Update System (CDUS).5 Unless noted otherwise, all published adverse events were assumed to be attributable to treatment, which may or may not be true.8–10 Their review7 of 22 multicentre cancer trials funded by the NCI (a non-random sample from more than 200 identified trials) highlighted three problem areas: under-reporting of low-grade adverse events; under-reporting of recurrent adverse events; and inconsistent or incomplete characterisation and reporting of high-grade adverse events. Scharf and Colevas did not summarise the publishing journals, or whether the shortcomings came from a few bad apples in the barrel.
We agree that increased access to raw data from clinical trials would be helpful and that the reporting of trials can be improved;11–14 however, no single publication standard has been established. Scharf and Colevas’ report encourages readers to ask several valid questions. Can we trust the adverse-event data published from a clinical trial? Why are there differences between databases and publications? As an investigator providing data for clinical trials, why am I doing extra work to submit adverse-event data that do not appear in the publication?

Stewart Anderson’s accompanying editorial15 suggested possible explanations for the discrepancies, including databases that are continually updated over time, publication timelines, representation of the sample, journal constraints, and reporting of only relevant adverse events. Furthermore, in our experience, CDUS submissions are problematic because of specific business rules: submissions with missing or queried data are not accepted; reported adverse events occurring during non-treatment phases are not accepted; and data mappings are incomplete.16,17 We suspect that, for the sake of having a CDUS report accepted, groups undertaking trials either conservatively suppress problematic findings or temporarily label missing attributions as “possible” until resolution. Indeed, many researchers in NCI-funded trials see the CDUS as an administrative (as opposed to scientific) duty, focusing their resources on internal quality-control reviews and audits to ensure the integrity of data in subsequent publications.

A substantial volume of adverse-event data is obtained during the course of cancer clinical trials. Investigators make a conscious and informed decision to report data at a time when they are considered mature, with the knowledge that the database will continue to be updated. From our experience,18 of the adverse events that are probed for, 85% never occur; only 3% are severe, life-threatening, or fatal; and 11% require inquiries to the local site. Much of this workload is due to the assignment of adverse-event attribution, which has repeatedly been shown to be unreliable.8–10 Notably, the NCI common-data elements for randomised phase-III trials do not record attribution. Our findings18 suggest that adverse-event data can be condensed by at least 72% for summary purposes, which focuses resources on the events of greatest clinical relevance.
The purpose of publishing results is to disseminate novel findings to the scientific community in summary fashion.11–13,19 Scharf and Colevas reviewed small phase-II trials (30–100 patients) that assessed investigational drugs with premature and developing adverse-event profiles. Unfortunately, in the face of life-threatening or uncommon disease with few treatment options and scarce data, physicians might need to use such data to guide treatment. Ideally, such guidance should come from standards set by medical oversight entities on the basis of randomised phase-III trials, which are better suited to address the relative burden of adverse events of different treatments.20

Should a publication include data similar to those reported via other mechanisms during the trial? Of course. In consideration of continually updated adverse-event databases, we should work to improve the mechanisms creating the problems,7,15 define acceptable rates of discrepancy, and investigate the public sharing of raw data.12,14 Should standards be used in publications? Absolutely. Researchers and journals should be more explicit.13,19 Are publications of phase-II results meant to replace package inserts and guide patients’ treatment decisions? Only with much caution. Scientific publications should be viewed for what they represent: a restricted experience in an often highly selected population. As researchers and journal editors, we are responsible for ensuring accuracy in publications, which represent the primary venue for the dissemination of scientific findings. Resources must be focused on the crucial elements: primary efficacy endpoints, and clinically relevant serious adverse events.

Figure: Flow of clinical trials data for trials coordinated by the US NCI. Patients are evaluated and treated by local physicians; serious adverse events are reported through expedited systems and sent to the sponsor, IRBs, FDA, and NCI; routine data are submitted by sites to the coordinating centre, which reports to DSMBs/IRBs, participating sites, and the NCI (via CDUS) until all patients are off treatment; adverse-event data are continually reviewed, queried, and updated with sites as part of the ongoing quality-control process; the coordinating centre analyses the trial and updates publication of study results; data from several trials are submitted to the US FDA for licensing; and adverse events are observed during postmarketing pharmacosurveillance. IRBs=institutional review boards. DSMBs=data and safety monitoring boards.

*Michelle R Mahoney, Daniel J Sargent
Division of Biostatistics, Mayo Clinic, Rochester, MN 55905, USA
[email protected]

We declare that we have no conflict of interest.

1 National Cancer Institute. CTEP/NCI guidelines: adverse event reporting guidelines. Jan 1, 2005: http://ctep.cancer.gov/reporting/newadverse_2006.pdf (accessed Nov 28, 2006).
2 Goldberg RM, Sargent DJ, Morton RF, Mahoney MR, Krook JE, O’Connell MJ. Early detection of toxicity and adjustment of ongoing clinical trials: the history and performance of the North Central Cancer Treatment Group’s real-time toxicity monitoring program. J Clin Oncol 2002; 20: 4591–96.
3 National Cancer Institute. Monitoring of clinical trials. http://ctep.cancer.gov/monitoring/index.html (accessed Nov 28, 2006).
4 Morse M, Califf R, Sugarman J. Monitoring and ensuring safety during clinical research. JAMA 2001; 285: 1201–05.
5 National Cancer Institute. Clinical data update system (CDUS). 2003: http://ctep.cancer.gov/reporting/cdus.html (accessed Nov 28, 2006).
6 Calfee JE. The roles of the FDA and pharmaceutical companies in ensuring the safety of approved drugs, like Vioxx. Testimony, House Committee on Government Reform (Washington, DC, USA). May 5, 2005: http://www.aei.org/publication22465 (accessed Nov 28, 2006).
7 Scharf O, Colevas D. Adverse event reporting in publications compared with sponsor database for cancer clinical trials. J Clin Oncol 2006; 24: 3933–38.
8 Hillman SL, Mandrekar SJ, Bot BM, et al. Should attribution be considered when interpreting adverse event data: a North Central Cancer Treatment Group (NCCTG) evaluation of a phase III placebo controlled trial. J Clin Oncol 2006; 18S (suppl): 6006 (abstr).
9 Ioannidis J, Mulrow C, Goodman S. Adverse events: the more you search, the more you find. Ann Intern Med 2006; 144: 298–300.
10 Raisch DW, Troutman WG, Sather MR, Fudala PJ. Variability in the assessment of adverse events in a multicenter clinical trial. Clin Ther 2001; 23: 2011–20.
11 Ioannidis J, Evans S, Gotzsche P, for the CONSORT Group. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med 2004; 141: 781–88.
12 Foote M, Noguchi P. Posting of clinical trials and clinical trial results: information for medical writers. Am Med Writers Assoc J 2005; 20: 47–49.
13 Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust 2006; 185: 263–67.
14 Mann H. Research ethics committees and public dissemination of clinical trial results. Lancet 2002; 360: 406–08.
15 Anderson SJ. Some thoughts on reporting of adverse events in phase II cancer clinical trials. J Clin Oncol 2006; 24: 3821–22.
16 National Cancer Institute. CTEP forms, templates, and documents: CTCAE v3.0 and lay term mapping document. http://ctep.cancer.gov/forms (accessed Nov 28, 2006).
17 National Cancer Institute. List of codes and values: CTCAE/CTC codes. May 10, 2006: http://ctep.cancer.gov/guidelines/codes.html#ctc (accessed Nov 28, 2006).
18 Mahoney MR, Sargent DJ, O’Connell MJ, Goldberg RM, Schaefer P, Buckner JC. Dealing with a deluge of data: an assessment of adverse event data on North Central Cancer Treatment Group trials. J Clin Oncol 2005; 23: 9275–81.
19 Korn D, Ehringhaus S. Principles for strengthening the integrity of clinical research. PLoS Clin Trials 2006; 1: e1.
20 Keech AC, Wonders SM, Cook DI, Gebski VJ. Balancing the outcomes: reporting adverse events. Med J Aust 2004; 181: 215–18.

www.thelancet.com Vol 369 January 20, 2007