Journal of Clinical Epidemiology 65 (2012) 703–704

EDITORIAL

Informed consent forms fail to reflect best practice

Informed consent is a critical component of all clinical research; however, the process is often not appropriately administered with research participants. Informed consent has typically emphasized the provision of information over support for people making a difficult decision. In a study by Brehaut et al, informed consent documents were assessed for the degree to which they conform to the International Patient Decision Aid Standards for supporting decision making. They found that informed consent documents do not meet most validated standards for encouraging good decision making.

Vested interests' influence on guidelines does not always come from industry involvement! Norris et al examined the relationship between guideline panel members' conflicts of interest and guideline recommendations for mammography screening in asymptomatic women. They found that a guideline with at least one author who was a radiologist was more likely to recommend routine screening. In addition, the odds of a recommendation in favor of routine screening were related to the number of recent publications on breast disease diagnosis and treatment by the lead guideline author. The authors conclude that recommendations regarding mammography screening may reflect the specialty and intellectual interests of the guideline authors.

Akl et al propose a new strategy for managing conflicts of interest within the context of guideline development: giving primary responsibility to methodologists and making the content experts members of the guideline committee, a reversal of the dominant power structure. Following a series of semistructured interviews with both methodologists and content experts, the authors found that methodologists believe this change will lead to more rigorous guidelines, whereas the content experts worried that the methodologists' lack of content knowledge could hurt the guidelines. In an interesting response to this article, Sniderman and Furberg contest this change in structure and offer their own thoughts on addressing the issue of conflict of interest.

Bias at the level of the systematic review is also alive and well! Panagiotou and Ioannidis investigated whether interpretation bias in meta-analyses might be an issue. They found that, when interpreting meta-analyses that included their own study, authors who had published significant results were more likely to believe that a strong association existed than were methodologists with no vested interests.

In another paper on guidelines, the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach was endorsed by Vandvik et al as useful in real-world application because of its systematic, explicit, and transparent process for evaluating and reporting the quality of research evidence and for moving from evidence to recommendations in clinical practice guidelines. They found that guideline panelists considered the information in GRADE evidence profiles easily accessible and helpful in making recommendations. Panelists preferred format alternatives that provided additional information in table cells and that presented risk differences; these formats also led to improved comprehension of some key information.

Should systematic reviews be expected to cite not only individual studies but also previous systematic reviews? In a nice review of reviews, Weir et al found that even when reviews met PRISMA criteria for systematically searching for original articles, there was a striking failure to cite previous systematic reviews. Weir et al also address the ongoing debate about the relative merits of authors selecting narrow questions (colloquially known as "splitters") versus broad questions (colloquially known as "lumpers"). In a review of health professional reminders, they found a proliferation of systematic reviews of reminders and an overall disorganization of the literature, in large part because different authors chose to narrow their objective in one or more of the five ways that systematic review questions are split: population, study design, outcomes, setting, and condition/targeted behavior.

It is always pleasing to be able to demonstrate construct validity, such as showing that a measure of health-related quality of life, the Health Utilities Index Mark 3 (HUI3), is predictive of mortality. Feeny et al assessed whether specific attributes of the HUI3, including vision, hearing, speech, ambulation, dexterity, cognition, emotion, and pain and discomfort, were predictive of mortality in an adult population. They found that ambulation, hearing, and pain were statistically significantly associated with an increased risk of mortality.

Turning to sample size estimation: few easily available sample size tables exist for studies measuring interobserver agreement (reliability). Rotondi and Donner provide a sample size estimation technique to achieve a prespecified lower and upper limit for a confidence interval for the kappa coefficient in studies of interobserver agreement (an illustrative sketch of this kind of calculation appears below).

Linmans et al argue that primary care lifestyle programs to combat the complications of type 2 diabetes that have been shown to be efficacious under ideal circumstances should not be adopted by policymakers or practitioners until they are shown to be effective in the real world. Despite the proliferation of lifestyle interventions shown to work in controlled trial settings, they found that these programs were not nearly as effective in real-world primary care settings. The translation of evidence on lifestyle interventions from research settings into real-world settings therefore needs to be urgently addressed.

In another article on primary care research, Treweek et al sought to determine whether it was more effective to recruit general practitioners into an online trial by e-mail or by post. They found no significant difference between e-mail and postal invitations and reminders; however, e-mail was substantially cheaper to administer.
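As an aside on the sample size question raised above, the short sketch below illustrates, in Python, the general kind of calculation involved. It is not Rotondi and Donner's technique; it assumes two raters, the simple large-sample approximation to the standard error of Cohen's kappa, and user-supplied anticipated values of kappa and chance agreement. The function name and its arguments are illustrative only.

```python
import math
from statistics import NormalDist


def kappa_sample_size(kappa: float, p_e: float, half_width: float,
                      conf: float = 0.95) -> int:
    """Approximate number of paired ratings needed so that the confidence
    interval for Cohen's kappa has the requested half-width.

    Rough planning sketch only (not Rotondi and Donner's method), using the
    simple large-sample approximation
        SE(kappa) ~ sqrt(p_o * (1 - p_o)) / (sqrt(n) * (1 - p_e)),
    where p_o is the anticipated observed agreement implied by the
    anticipated kappa and the anticipated chance agreement p_e.
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # e.g. 1.96 for a 95% CI
    p_o = kappa * (1 - p_e) + p_e                 # from kappa = (p_o - p_e) / (1 - p_e)
    n = (z ** 2) * p_o * (1 - p_o) / ((half_width ** 2) * (1 - p_e) ** 2)
    return math.ceil(n)


if __name__ == "__main__":
    # Example: anticipated kappa of 0.6, chance agreement of 0.5, and a
    # desired 95% confidence interval of roughly +/- 0.10 around the estimate.
    print(kappa_sample_size(kappa=0.6, p_e=0.5, half_width=0.10))  # -> 246
```

Under these illustrative assumptions, roughly 250 paired ratings would be needed for a 95% confidence interval of about ±0.10 around an anticipated kappa of 0.6; a more refined calculation, such as the one Rotondi and Donner describe, would be used for actual study planning.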

Finally, another area of bias needing attention is that produced by indirect and mixed treatment comparisons. Jansen et al found that directed acyclic graphs were useful tools with which to conceptually evaluate the assumptions underlying indirect and mixed treatment comparisons. They were also useful in identifying sources of bias and in guiding the implementation of the analytical methods used for network meta-analysis of randomized controlled trials.

Peter Tugwell
Andre Knottnerus
Leanne Idzerda
Editors
E-mail address: [email protected] (P. Tugwell)