Correspondence
if 120 pings (say) rather than 30 were allowed per quarter. Mid Staffordshire could not possibly have done for itself the types of standardised analysis done by the Healthcare Commission, because it did not have access to the relevant national data. Lacking that comparator, its own mortality data were far from alerting; other aspects of its performance, from patient complaints through near-misses to inquests, were. The practice of providing NHS data, collected at public expense, to an organisation that releases the analysed information back, even to the originators of the data, only exceptionally or at cost is unacceptable. If the Healthcare Commission’s commendable quarterly analyses were routinely shared with NHS Trusts, irrespective of whether a statistical alert had been triggered, local good judgment and expertise could lead to appropriate action on early-bird warnings before a major statistical alert was ever raised. Mid Staffordshire was exceptional not only in its mortality but, apparently, in its judgment about the evidence. Denying early-bird warnings compromises the good judgment of the better and best NHS Trusts: another legacy for Ministers and the Healthcare Commission’s successor to address.
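The trade-off can be made concrete in a few lines of code. The sketch below is an editorial illustration only, not the Healthcare Commission's actual methodology: it assumes a Poisson model for quarterly deaths, and the trust names, death counts, and alpha thresholds are invented. It shows how loosening the significance level is what would allow roughly 120 pings rather than 30 per quarter.

```python
# A minimal sketch, assuming a Poisson model for quarterly deaths; this is
# an illustration, not the Healthcare Commission's actual alerting method.
from math import exp

def poisson_upper_tail(observed: int, expected: float) -> float:
    """Exact P(X >= observed) for X ~ Poisson(expected)."""
    term = exp(-expected)  # P(X = 0)
    cdf = 0.0
    for k in range(observed):
        cdf += term
        term *= expected / (k + 1)  # P(X = k + 1)
    return max(0.0, 1.0 - cdf)

def quarterly_pings(trusts: dict[str, tuple[int, float]], alpha: float) -> list[str]:
    """Flag trusts whose observed deaths exceed expectation at level alpha.

    A looser alpha raises more pings per quarter, trading extra false
    alarms for earlier warnings.
    """
    return [name for name, (observed, expected) in trusts.items()
            if poisson_upper_tail(observed, expected) < alpha]

# Hypothetical quarter: trust -> (observed deaths, casemix-expected deaths).
quarter = {"Trust A": (140, 100.0), "Trust B": (125, 100.0), "Trust C": (98, 100.0)}
print(quarterly_pings(quarter, alpha=0.001))  # strict: ['Trust A']
print(quarterly_pings(quarter, alpha=0.05))   # looser: ['Trust A', 'Trust B']
```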
I am Vice-President of the Royal Statistical Society and chair of its Working Party on Performance Monitoring in the Public Services.

Sheila M Bird
[email protected]

MRC Biostatistics Unit, Cambridge CB2 0SR, UK

1 BBC News Online. Failing hospital ‘caused deaths’. http://news.bbc.co.uk/2/hi/uk_news/england/staffordshire/7948293.stm (accessed April 22, 2009).
2 Healthcare Commission. Investigation into Mid Staffordshire NHS Foundation Trust. http://www.parliament.uk/deposits/depositedpapers/2009/DEP2009-0861.pdf (accessed April 28, 2009).
3 Royal Statistical Society Working Party on Performance Monitoring in the Public Services. Performance indicators: good, bad, and ugly. J R Stat Soc Series A 2005; 168: 1–27. http://www.rss.org.uk/PDF/PerformanceMonitoring.pdf (accessed April 8, 2009).
Let’s make the studies within systematic reviews count

From 1969 to 2008, the annual number of new articles indexed in PubMed increased from 261 130 to 804 215. Consequently, systematic reviews, which comprehensively summarise and assess the available evidence, have become indispensable. At the same time, knowledge development is driven by new empirical studies, and clinical researchers must stay motivated if they are to make the huge efforts such studies demand. That motivation depends not only on resources but also on recognition, and scientific recognition, acquisition of funding, and knowledge development are circularly interrelated. Despite all the limitations of citation scores, scientific recognition still implies being cited.

Systematic reviews nowadays attract many more citations than the original studies on which they are based.1,2 This is logical, in that finding, assessing, and citing good reviews is more efficient than citing all the individual studies, and practical for editors who like a minimum of references. Nevertheless, the scientific reward for time-consuming original studies decreases when citations to them are absorbed by citations to systematic reviews. Their high citation rate also makes systematic reviews more attractive than original research for journals striving for high impact factors.

To safeguard appropriate scientific acknowledgment for original studies, we propose that a Science Citation Index “count” be assigned to every included study whenever a systematic review is cited. This would prevent empirical studies from being overshadowed by reviews and encourage researchers to keep doing clinical studies. Reviewed studies could receive a citation independently of quality score, since all will have been used to assess current evidence and because quality judgment is complex (eg, older studies might be of lower methodological quality but more significant as landmark papers).
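As a minimal sketch of the bookkeeping this proposal implies (an illustration only: the review and study identifiers are hypothetical, and no citation database exposes such an interface today), each citation of a systematic review would also credit one count to every study the review included:

```python
# A minimal sketch of the proposed citation crediting; the review and
# study identifiers below are hypothetical.
from collections import Counter

# Review ID -> IDs of the studies included in that systematic review.
included_studies = {
    "review_1": ["study_a", "study_b", "study_c"],
    "review_2": ["study_b", "study_d"],
}

citation_counts: Counter[str] = Counter()

def cite(publication: str) -> None:
    """Credit the cited publication; if it is a systematic review, also
    credit one count to each included study, regardless of quality score."""
    citation_counts[publication] += 1
    for study in included_studies.get(publication, []):
        citation_counts[study] += 1

cite("review_1")
cite("review_1")
cite("review_2")
cite("study_a")  # direct citations of original studies count as before
print(citation_counts)
# Counter({'study_a': 3, 'study_b': 3, 'review_1': 2, 'study_c': 2,
#          'review_2': 1, 'study_d': 1})
```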
We would restrict our proposal to systematic reviews, the key publications in which clinical evidence is summarised. Narrative reviews have a different role and are not cited much more often than original studies. Under our proposal, systematic reviews would not be cited any less often than they are now, which is important in view of their role as media for efficient communication and to motivate review authors. In this way, the “evidence chase” as the motor of scientific progress and the “evidence base” as the body of continuously accumulating knowledge can develop in harmony.

We declare that we have no conflicts of interest.
*J André Knottnerus, Bart J Knottnerus
[email protected]

Maastricht University, Maastricht, 6213 HD Limburg, Netherlands (JAK); and Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands (BJK)

1 Patsopoulos NA, Analatos AA, Ioannidis JPA. Relative citation impact of various study designs in the health sciences. JAMA 2005; 293: 2362–66.
2 Montori VM, Wilczynski NL, Morgan D, Haynes RB, and the Hedges Team. Systematic reviews: a cross-sectional study of location and citation counts. BMC Med 2003; 1: 2.
NICE head injury guidelines pre-empted two millennia ago
Mortality from head injury is high,1 and the promptness of appropriate treatment is known to affect outcome.2 Knowing which patients with head injury have an evolving, life-threatening pathology when they first present to a clinician remains a challenge for modern medicine. The UK National Institute for Health and Clinical Excellence (NICE) endeavoured, in its rigorous guidelines of 2003 and 2007, to define which symptoms and signs represent a high