Clinical Effectiveness in Nursing (2004) 8, 101–110
Reviewing the case for critical appraisal skills training

Anne Mulhall a,*, Andree le May b

a The Coach House, Rectory Road, Ashmanhaugh, Norfolk NR12 8YP, UK
b School of Nursing and Midwifery, University of Southampton, Highfield Campus, Southampton SO17 1BJ, UK

* Corresponding author. Tel.: +44 1603 784874. E-mail addresses: [email protected] (A. Mulhall), [email protected] (A. le May).
KEYWORDS: Critical appraisal techniques; Evidence based practice; Research implementation; Nurse education

Summary

Introduction: To review the evolution of critical appraisal skills training alongside the results from a long-term evaluation of a series of research utilisation workshops.

Critical appraisal skills training and evidence based health care: The evolution of critical appraisal skills (CAS) training across the health care professions is described, showing its centrality to evidence based health care and thus to clinical effectiveness. The evidence for the effectiveness of CAS training is reviewed.

The long-term evaluation of the Foundation of Nursing Studies' workshops: A postal questionnaire and a qualitative interview study of participants and managers were used to evaluate the workshops. These findings were then contextualised through a consultation exercise with senior nurses, practice developers and nurse educators in health care trusts and academic units across the UK.

Discussion: The effectiveness of CAS training, and its focus and structure, are revisited with reference to the literature and the long-term evaluation. Questions concerning the way forward are proposed.

© 2005 Published by Elsevier Ltd.
Introduction

It is surely no coincidence that, in the same year that the Briggs Report (Briggs, 1972) stressed the importance of nursing becoming a research-based profession, the waste of resources on medical
practices of doubtful efficacy was highlighted by Cochrane (1972). Thus, for 30 years, the two largest health care professions have exhorted their practitioners to base more of their practice on sound research. In nursing, an overt educational policy was adopted that emphasised the importance of research-based teaching (Project 2000), and introduced specific courses focused on teaching research methodology. In medicine, this educational policy was less overt, but implicitly it was accepted that practice should be based on best
evidence, and proof of research activity was required for career enhancement. However, little formal advice was given as to how evidence was to be retrieved, evaluated and synthesised by practising clinicians of either profession. Historically, undergraduate courses in both nursing and medicine included assignments where students were required to critically appraise research articles, but both pre- and post-registration education in this field was limited.

This review charts the rise in educational interventions aimed at improving practitioners' skills in the critical appraisal of research and assesses the evidence for their effectiveness. It also presents data from a long-term evaluation of a series of research utilisation workshops funded and organised by the Foundation of Nursing Studies and first published in 'Taking Action: Moving towards evidence based practice' (Foundation of Nursing Studies, 2000). (The Foundation of Nursing Studies promotes the use of research in practice and supports the study of implementation strategies.)
Critical appraisal skills training and evidence based health care

In 1994 the Department of Health reported a range of obstacles that were preventing health service personnel from becoming skilled in research techniques (Department of Health, 1994). Adding to this, surveys in the early 1990s reported that nurses considered they lacked the skills of critical appraisal and the opportunities to improve them (Pearcey, 1995; Lacey, 1996). Thus the 1990s witnessed an increasing demand amongst health care practitioners for appropriate education in this field. The sanctioning and promulgation by both the government and the professions of a certain approach to evidence-based medicine (EBM) (Sackett et al., 1998) crystallised the need for all health care workers to become proficient in the skills of critical appraisal. The search for, and critical evaluation of, evidence is the lynchpin of the EBM cycle, in which patients' 'problems' are converted into 'searchable questions' that are used to interrogate the literature in the quest for appropriate studies. These are then scrutinised critically for their scientific validity and usefulness in answering the initial question. The need for skills in critical appraisal thus moved from a theoretical arena within education to focus on the practice arena and continuing professional education. It
also became apparent that if managers and consumers were to play a part in evidence based health care then they too would need these skills. Thus both the audience for these skills and their place of application changed alongside an ever-increasing cultural pressure to adopt evidence-based working. To address this need, two regional NHS Executives (North Thames, and Oxford and Anglia) organised substantial programmes to assist a range of health care professionals and lay people to enhance their skills in critical appraisal. The North Thames Research Appraisal Group (NTRAG) and the Critical Appraisal Skills Programme (CASP), run by Oxford and Anglia, provided half- or one-day workshops that focused on particular areas of clinical care, particular topics such as guidelines, or particular research designs such as randomised controlled trials. Initially much of the training focused on medical topics and quantitative designs, but latterly clinical topics of more relevance to nursing and the professions allied to medicine, and qualitative research designs, have been included. Some of these workshops function with a multi-professional audience and others are specifically designed for one professional group. Alongside these training programmes designed specifically to improve critical appraisal skills (CAS) were others, with a broader remit, such as the courses in teaching and practising EBM run by the Centre for Evidence Based Medicine in Oxford, which included CAS training. In the early 1990s a number of courses on research utilisation were either already available to nurses (for example, the ENB 870 course 'Introduction to the Understanding and Application of Research'), or were newly designed (for example, the series of workshops organised and sponsored by the Foundation of Nursing Studies). CAS training formed part, but not all, of these courses/workshops. Yet further initiatives, such as that run in the West Midlands Region by the Evidence Supported Medical Union, aimed both to train health care professionals in evidence based practice and to equip them with the ability to train others. The CASP workshops also anticipated that those who were trained would go on to train others. In this way it was hoped that training in CAS could be cascaded through health care trusts. Currently most of these programmes are still running and they have been joined by numerous 'in-house' courses run by NHS trusts. However, the effectiveness of CAS training remains to be demonstrated.
Is training in CAS effective?

The premise behind CAS training was that it would enhance health care professionals' attitudes to research and increase their abilities to search for and evaluate appropriate evidence for practice. This would then feed into the cycle of EBHC and thereby improve clinical effectiveness. Based on this assumption, several courses and workshops were commissioned. These programmes were organised both as uni- and multi-professional events and were undertaken within and outwith NHS trusts. However, the effectiveness of teaching skills in critical appraisal was not established prior to the widespread introduction of these measures, and it is only latterly that such evidence has been sought.

One of the problems in assessing the effectiveness of CAS training is that, of necessity, courses have different contents and are aimed at different target groups. Despite this, the evidence for the effectiveness of CAS training has been collated into a systematic review (Hyde et al., 2000). This indicated that CAS teaching has a positive effect on clinicians' knowledge and skills. In addition, the effect on clinicians' attitudes to evidence based practice was generally positive, but the authors caution that this may be biased by a tendency of respondents to answer in the 'desired way'. Moreover, there were gaps in the evidence as to whether CAS impacted on decision making or patient health. Taylor et al. (2000) also reviewed studies of medical students/doctors in training, concluding that CAS training improved knowledge in epidemiology/biostatistics and attitude to medical literature. A single randomised controlled trial of the effectiveness of CASP workshops provided to general practitioners, nurses, physiotherapists and researchers reported an increase in participants' knowledge of the principles necessary for appraising evidence (Taylor, 2000). However, CAS training did not lead to improvements in attitude towards the use of evidence in health care, confidence, evidence-seeking behaviour, or the ability to appraise published literature. A review of controlled (not randomised) studies indicated that CAS training improved the knowledge of undergraduates, but not that of residents (Norman and Shannon, 1998). This potential lack of effect in the practice arena is corroborated by a further controlled study of medical students that showed that although training increased skills of critical appraisal, an improving trend through the year was not demonstrated (Frasca et al., 1992).
The authors conclude that CAS were not being acquired in 'day-to-day clerkship activities'. In nursing, non-experimental studies have shown that educational interventions improve attitudes to research, albeit in the short term (Harrison et al., 1991; Perkins, 1992; Burls, 1997). Similarly, Lacey (1996), in an evaluation of nurses following completion of the ENB 870 course, reported that 65% of students had been able to implement research guided by a change proposal which each of them had developed. Hicks (1994) recorded that, two months following a study day, midwives had increased their reading of research, their confidence in evaluating it, and the degree to which research influenced their practice.
The long-term evaluation of the Foundation of Nursing Studies CAS/research utilisation workshops

The workshops

Between 1994 and 1995 the Foundation of Nursing Studies (FoNS) ran a series of nine 4-day CAS/utilisation of research workshops involving 206 participants (Registered General Nurses, Registered Mental Nurses, Health Visitors and Midwives) spanning all clinical grades from nine NHS trusts. The objectives were to enable practitioners to:
- retrieve and select research studies appropriate to their needs,
- develop criteria to evaluate quantitative and qualitative research,
- practice critical appraisal,
- recognise the individual and organisational barriers to change,
- devise and evaluate strategies to utilise research in their own areas of practice.

Shortly after their delivery these workshops were evaluated through a written questionnaire (response rate 84%) and a qualitative study (13 participants) to determine the immediate effect of the workshops on practitioners' attitudes to research and their use of research in practice. The results of this short-term evaluation have been reported elsewhere (Foundation of Nursing Studies, 1996; Le May et al., 1998; Mulhall et al., 2000). These workshops increased practitioners' CAS and consolidated their prior knowledge and experience regarding evidence-based health care (EBHC). No changes in participants' attitudes to
research occurred – these generally being positive both before and after undertaking the workshops. However, qualitative data from the evaluation raised important concerns about structural, organisational and social barriers in the workplace that might deflect practitioners from capitalising on their new skills and increasing the incidence of EBHC. Moreover, there was a suggestion that over time skills might be lost as participants faced the exigencies of everyday practice. Thus it seemed essential to explore whether the effects of the workshops could be sustained in the longer term.
The long-term evaluation

The long-term evaluation was conducted over 11 months in 1997/8 and used the original sample who had attended the utilisation of research workshops. It used:

(1) A postal questionnaire covering participants' research skills and knowledge before and after the event; whether skills had been lost/enhanced and why; whether they had used or undertaken research and if the workshop had helped them in this; and factors which hindered or helped their use of research. There were 52 respondents drawn from all clinical grades (50% being grade G or H). The earliest year of qualification was 1960; the largest proportion (54%) had qualified in the 1980s. None of the respondents had been trained through Project 2000, but 30% were graduates and 10% had done the ENB 870 course (Introduction to the Understanding and Application of Research). The relatively low response rate (52/206, 25%) probably related to the length of time since the first evaluation (between 2 and 3 years, depending on when respondents had attended the workshops, which ran between September 1994 and December 1995).

(2) A qualitative study using semi-structured telephone interviews (13 practitioners) or face-to-face interviews (11 managers) from three of the trusts where workshops had been run.

By the time the final analysis of the data from the long-term evaluation was completed in 1999, several significant developments in research implementation had occurred, notably the introduction of the National Institute for Clinical Excellence (NICE), the Commission for Health Improvement (CHI) and clinical governance. It was therefore considered prudent to contextualise the original findings. Thus the results of the long-term evaluation were distributed as a consultation paper to senior nurses (for example, Directors of Nursing; Directors of Nursing Development); practice developers (for example, Directors of Research and Development,
Directors of Quality Development and Lecturer Practitioners); and nurse educators (for example, Professors and Senior Lecturers) in Trusts and academic institutions across the UK. A full list of attendees may be found in Foundation of Nursing Studies (2000).

The consultation exercise had three aims:
- To disseminate the results of the long-term evaluation,
- To gauge how closely these results reflected the current situation in the NHS,
- To initiate a debate concerning the way forward and specifically to identify how the use of research in practice could best be sustained and supported.

Recipients of the paper were requested to comment on how the results from the evaluations matched their own experiences. They were also asked to identify the impact of clinical governance on research utilisation. Respondents to the document were as follows: England 41; Scotland 12; Wales 16; Northern Ireland 8. Recipients of the paper were then invited to attend one of four consultation events held in Northern Ireland (n = 31), Scotland (n = 46), England (n = 36) and Wales (n = 30). Participants were asked to focus on four areas:
- developing knowledge and skills to support research utilisation,
- establishing organisational structures to support and sustain research utilisation,
- creating and maintaining a culture for research utilisation,
- the role of the Foundation of Nursing Studies in supporting and sustaining research utilisation,
and to record their individual and group views and experiences about current practice. The individual and group notes, questionnaires and flip charts were transcribed to form the data set. Data were reviewed for consensus and commonality within and between groups. The analysis represents the common themes and important individual/group comments.
Results

Those results that have most bearing on the issue of CAS training were obtained from the postal survey and the consultation exercise.

Using CAS in clinical practice – skills gained, skills lost
Figure 1  Skills obtained at the workshops.

Figure 2  Factors hindering the use of research in practice: time 44%; resources 19%; lack of support 16%; other 21%.
Participants in the postal survey recognised a range of skills that had been acquired through attendance at the workshops (Fig. 1). All 52 responded to this section, but one consistently replied in the 'Do not know' category; proportions in Fig. 1 are therefore calculated using a denominator of 51. Most of the attendees considered that they had gained CAS and an appreciation of the methods of data collection and analysis that underpin these skills. However, only a third felt they had acquired skills in retrieving relevant literature. The majority of participants (41/48, 85%) had used a combination of these skills, together with the confidence these gave them, to:
- develop guidelines, protocols and policies,
- review current practices,
- enhance educational opportunities,
- underwrite clinical decisions.
In contrast, participants in the consultation exercise indicated that, in their organisations, although nurses were beginning to question practice, this was often in a limited way. Moreover, over half the postal survey sample stated that over time they had lost critical appraisal/implementation skills, mainly through a lack of opportunities to practice them. Skill loss was equivalent across all the sites and across all clinical grades. However, the loss of skills was twice as high amongst those who had never attended a research course or who were non-graduates. Supporting this, the consultation exercise indicated that nurses who had undertaken degree/diploma courses were more likely to question practice. Although a lack of CAS may inhibit EBHC, all respondents cited one or more other factors that hindered their attempts to use research (Fig. 2).
It is also unclear who should receive CAS training and what format it should take. Participants in the consultation exercise struggled to identify which groups of staff needed which particular skills. There was a consensus that all nurses need CAS and the knowledge to access research efficiently, but that not all nurses need to undertake research. However, respondents did not answer the question of how givers of direct care should be facilitated to undertake search/appraisal activities.
The context of CAS use

The use of CAS is embedded in the wider matrix of a practitioner's working life. Both participants in the workshops and respondents to the consultation exercise recognised this, citing the following factors as facilitating research implementation:
- support from colleagues, managers and the organisation,
- specific structures, for example R&D groups and good links with universities,
- the relevance/importance of the topic to their clinical speciality.

A questioning approach to practice was most likely to be fostered through group activities such as journal clubs, being part of multidisciplinary teams, or through reviewing policies, procedures and guidelines to make them evidence based. Thus, although the accepted model of EBHC (Sackett et al., 1998) suggests that research use will be triggered by a question or problem arising during the clinical encounter, our respondents highlight the wider process in which EBHC is being instituted.
What triggers the use of research in clinical practice?

The consultation exercise identified four different categories of triggers that may precipitate the use of research in practice – personal, organisational, external, and educational. Table 1 summarises the elements in each of these categories.

Table 1  Triggers that precipitate the use of research

Individual: safety and accountability; professional work environment; professional and cultural pressure; person centred.
Organisational: safety and accountability; professional work environment; organisational policy and structures; quality assurance.
External: governmental initiatives; new therapies; the public; external bodies; comparisons with other organisations.
Educational: higher education; links with higher education; conferences; clinically focused training.

At both a personal and organisational level, safety/accountability issues and the work environment may trigger research use. Thus individual respondents were concerned about gaps in their knowledge and involvement in critical incidents, whilst litigation, complaints and risk management worked as triggers within the organisation. Particular professional environments may stimulate individuals to read, network and garner research ideas from colleagues, whilst the organisation itself may stimulate research use through establishing professional positions/groupings such as nurse specialists and practice development teams, or structures such as journal clubs. Individuals sensed a professional pressure to meet job descriptions or professional expectations for performance. Motivation, professional maturity and the recognition of clinical opinion leaders were other 'person-centred' factors. In this context, new staff were often mentioned as triggers to research use. Trusts' policy and organisation also exerted a significant effect: both systems of quality assurance (such as audit, standards, guidelines and clinical governance) and resources to support change acted to initiate research use.

Factors external to trusts, particularly those related to governmental initiatives regarding, for example, clinical governance and risk management, are having a significant effect in eliciting a greater use of research. Additionally, comparisons between NHS trusts (can we do it better?) and the introduction of new therapies (how shall we do this most effectively?) also act as significant prompts. External bodies such as the Scottish Intercollegiate Guidelines Network (SIGN) and public pressure/preferences were also important triggers. Research use was also related to educational factors such as improving links with higher education, perhaps through the creation of joint appointments, enhancing individuals through higher education, and providing clinically focussed training that prompted a questioning of current practice.
Discussion

The effectiveness of CAS training

Our long-term evaluation of nurses adds to the body of evidence already accrued from both experimental and non-experimental studies conducted with doctors and medical students. The workshops were effective in transmitting the skills of critical appraisal and most practitioners had used these skills to enhance evidence-based care. However, at least half of the sample had lost skills through a lack of opportunity to exercise or practice them. This echoes Frasca et al.'s (1992) study of medical students that indicated that CAS were being poorly developed outside the educational environment. It seems probable that the loss of skills acquired through continuing professional development is a significant problem that may detract from the usefulness of EBHC/CAS programmes. Hyde et al. (2000) comment that although the development of critical appraisal skills training has been important, further expansion of such training is not warranted on the evidence to date. Whether individual practitioners are able to cascade training through their organisations also requires careful consideration. An evaluation of the CASP programme in Scotland reported that these workshops increased knowledge of clinical
effectiveness, but doubts were expressed concerning whether participants would be able to roll out the programme on their own (Ibbotson et al., 1998). Those attending our workshops would probably have struggled to cascade education, given that they often were unable to practise these skills in their own practice, let alone provide instruction and support for others.
The way forward

The focus and structure of CAS training

The importance of continuing professional development in developing EBHC is illustrated by the results of the long-term evaluation and the consultation exercise. However, recent studies of research use by nurses (Taylor et al., 2000; Le May et al., 1998) have highlighted the complex sociocultural and organisational factors that may influence practitioners' efforts to use CAS and improve EBHC. The experience of providing these and other workshops has emphasised the importance of trusts having a defined strategy that 'situates' and recognises the process and outcome of any educational intervention. There has historically been a lack of local and central guidance regarding professional development, and this was reflected in the consultation exercise, where respondents struggled to identify which groups of staff needed which skills. It is only recently that recognition has been given to the importance of systematic processes for assuring the professional development and appraisal of those working within the NHS. Working Together, Learning Together (Department of Health, 2001) sets out for the first time a strategy for a more co-ordinated approach to learning, supported by the establishment of the NHS University in 2003. Likewise, there is a paucity of discussion in the literature as to the level or nature of CAS training that might be most appropriate for the various professions working within the NHS. Many questions concerning the current model of EBHC and its associated emphasis on CAS training need to be explored. For example:
- Is it necessary for direct care givers to acquire CAS and gain the ability to conduct sophisticated literature searches?
- If direct care givers do not perform these activities, who should?
- How can training best be cascaded?
- Is ward/unit level access to IT software and hardware either effective or efficient in terms of enhancing EBHC?
- Is CAS training most effective if it is provided to uni- or multi-professional groups?
- Should training focus on disparate individuals who are elected to go to workshops or on clinical directorates/teams who work together?
Currently, many of the questions above remain unanswered, and yet the proliferation of workshops, courses and books designed to assist individual practitioners to acquire CAS continues unabated. Our experience, and that of others in both medicine and nursing, indicates that practitioners can effectively acquire CAS through attending workshops. Others (Lyne et al., 2002) have advocated more 'realistic' methods of critical appraisal suited to clinicians' needs and resources. However, the acquisition of such skills does not guarantee that they are retained or used. Usage of skills occurs in a complex professional and social environment where particular constraints or opportunities operate. Whilst the model of EBHC developed and advocated first by Sackett has undoubted face value, we need to know just how far this model can be applied effectively in practice. Specifically, in the context of this discussion, we need to identify the focus and format of CAS training that will produce the most effective and efficient outcome in terms of increasing EBHC and improving clinical effectiveness. More controversially, the question must be posed as to whether the right people are being trained in CAS, or whether more collective strategies within and outside health care organisations could serve practitioners' needs for evidence more effectively. Would resources be better spent on effective implementation of nationally developed evidence-based guidelines? Or on local critical appraisal teams such as those described by Zeigler et al. (2002) for nurses? Our own evaluation and that of others suggests that practitioners delivering direct care either do not view on-line databases as having much practice utility, or have difficulties accessing/using such resources (Thompson et al., 2001; Griffiths and Riddington, 2001). Another strategy is the development of computer-based enquiry services such as ATTRACT/TRIP. This is a rapid query-answering service for GPs that searches for, appraises and summarises evidence (Brassey et al., 2001). It was produced in response to a large needs assessment exercise of GPs who stated that they had neither the time
nor the expertise to keep up to date. Alternative approaches might focus on greater integration of EBHC with clinical topics. Korenstein et al. (2002) discuss how pre-registration curricula can utilise clinical vignettes and role play to link EBHC concepts such as 'numbers needed to treat' to real patient decisions.

Educational strategies for improving EBHC also need to incorporate evidence from studies exploring practitioners' decision-making habits. Within both nursing and medicine there is evidence that practitioners value experiential knowledge when making clinical decisions (Covell et al., 1985; Greenhalgh and Hurwitz, 1998; Luker et al., 1998). A recent cross-case analysis using qualitative interviews, observations, documentary audit and Q methodology modelling of 122 nurses in three large acute sector sites confirmed that humans, in particular clinical nurse specialists, were preferred over text-based and electronic sources of information. The authors suggest that the challenge is 'either to give nurses the skills and resources to make information technologies more useful or to explore alternative ways of presenting quality research information possibly by harnessing the power of those who embody the clinical nurse specialist or nurse consultant role' (Thompson et al., 2001, p. 387). However, the identification of opinion leaders may be problematic. Grimshaw (2000), testing two identification instruments, reported that opinion leaders are condition specific and that separate exercises would therefore be needed for each clinical area. Identification of opinion leaders was highly variable and idiosyncratic, suggesting that this strategy is unlikely to be effective across all settings and professional groups. Furthermore, individuals are unlikely to exert a significant effect if they remain unsupported by organisational structures and policy or distanced from others within the multidisciplinary team. In such an event, skills and motivation may quickly be lost and the benefits of training wasted.

Many factors may trigger the use of research – individual, organisational, external, and educational. The classic model of evidence based practice may thus represent just the end point of a wider picture whereby individuals react within or against organisational, professional and personal boundaries to use evidence more widely. Making use of CAS to implement evidence based practice is clearly not a simple matter of providing adequate training. Implementing evidence-based change requires a range of other organisational, management and change skills.
Furthermore, evidence use is occurring against a complex professional and social culture. Different groups of professionals, or different clinical specialties, may have different needs. Some practitioners, such as GPs, may need to make rapid and autonomous decisions about care, whilst others, such as policy makers, may make more collaborative decisions over a longer period of time. The availability of research evidence also varies across professions and across clinical specialties. For example, evidence from systematic reviews or meta-analyses is more readily available for many medical decisions, simply because these are often 'treatment' decisions that involve choice of drugs. Such evidence is either less readily available to the nursing and allied professions, or is not always the sort of evidence required for decision making in these disciplines. It is against this backdrop that decisions concerning the provision, content and format of CAS training for health care professionals must be made in the future.
Conclusions

Although practitioners are being encouraged to gain skills to retrieve and critically appraise research evidence, the effectiveness of this training and its impact on patient care remains unclear. The use and sustainability of CAS by clinicians is affected by the complex professional and organisational culture in which they work. The use of research may be triggered by many factors other than patient 'problems'. Deeper consideration is needed as to who should be trained in CAS and whether other strategies would better serve the needs of clinicians for evidence.
Limitations of this paper

This paper does not purport to provide a comprehensive and systematic review of all the literature concerning either CAS training or EBHC. Participants attending the original workshops were either self-selecting or were directed to attend by others; they may thus represent a bias towards those who were interested/motivated in improving their skills relating to research utilisation. The low response
rate in the long-term evaluation was related to the length of time between attending the workshop and participation in the postal questionnaire. This was, of course, an integral part of the design of a longitudinal evaluation, but again it may have biased the results. The data sets are also open to the problem of self-report, although the questionnaires were returned anonymously, which may offset this problem.
Contributors

AM and ALM designed and undertook the short- and long-term evaluations and analysed the data from the consultation exercise. The Foundation of Nursing Studies organised the consultation exercise. AM and ALM are guarantors for the paper.
Acknowledgements

We are grateful to the workshop participants who were involved in the short- and long-term evaluations and to those who replied to the consultation exercise. Our thanks also go to Cheryl Thornton and Caroline Alexander for assistance with data collection for the evaluation and to Theresa Shaw for collating the data from the consultation exercise.
References

Brassey, J., Elwyn, G., Price, C., Kinnersley, P., 2001. Just in time information for clinicians: a questionnaire evaluation of the ATTRACT project. British Medical Journal 322, 529–530.
Briggs, A., 1972. Report of the Committee on Nursing, Cmnd 5115. HMSO, London.
Burls, A., 1997. An evaluation of the impact of half-day workshops teaching critical appraisal skills. Critical Appraisal Skills Project, Anglia and Oxford.
Cochrane, A.L., 1972. Effectiveness and Efficiency: Random Reflections on Health Services. The Rock Carling Fellowship. Cambridge University Press, Cambridge.
Covell, D., Unman, G., Manning, P., 1985. Information needs in office practice: are they being met? Annals of Internal Medicine 103, 596–599.
Department of Health, 1994. Supporting Research and Development in the NHS. A Report for the Minister for Health by a Research and Development Taskforce Chaired by Professor Anthony Culyer. HMSO, London.
Department of Health, 2001. Working Together, Learning Together: A Framework for Life-long Learning for the NHS. Department of Health, London.
Foundation of Nursing Studies, 1996. Reflection for Action. Foundation of Nursing Studies, London.
Foundation of Nursing Studies, 2000. Taking Action: Moving Towards Evidence Based Practice. Foundation of Nursing Studies, London.
Frasca, M.A., Dorsch, J.L., Aldag, J.C., Christiansen, R.G., 1992. A multidisciplinary approach to information management and critical appraisal instruction: a controlled study. Bulletin of the Medical Library Association 80, 23–28.
Greenhalgh, T., Hurwitz, B., 1998. Narrative Based Medicine: Dialogue and Discourse in Clinical Practice. Login Brothers Book Company, London.
Griffiths, P., Riddington, L., 2001. Nurses' use of computer databases to identify evidence for practice: a cross-sectional questionnaire survey in a UK hospital. Health Information and Libraries Journal 18, 2–9.
Grimshaw, J., 2000. Is the involvement of opinion leaders in the implementation of research findings a feasible strategy? Health Services Research Unit, University of Aberdeen.
Harrison, L.L., Lowery, B., Bailey, P., 1991. Changes in nursing students' knowledge about and attitudes towards research following an undergraduate research course. Journal of Advanced Nursing 16, 807–812.
Hicks, C., 1994. Bridging the gap between research and practice: an assessment of the value of a study day in developing critical reading skills in midwives. Midwifery 10, 18–25.
Hyde, C., Parkes, J., Deeks, J., Milne, R., 2000. Systematic Review of Effectiveness of Teaching Critical Appraisal. ICRF/NHS Centre for Statistics in Medicine, Oxford.
Ibbotson, T., Grimshaw, J., Grant, A., 1998. Evaluation of a programme of workshops for promoting the teaching of critical appraisal skills. Medical Education 32, 486–491.
Korenstein, D., Dunn, A., McGinn, T., 2002. Mixing it up: integrating evidence-based medicine and patient care. Academic Medicine 77, 741–742.
Lacey, A., 1996. Facilitating research based practice by educational interventions. Nurse Education Today 16, 296–301.
Le May, A., Mulhall, A., Alexander, C., 1998. Bridging the research–practice gap: exploring the research cultures of practitioners and managers. Journal of Advanced Nursing 28, 428–437.
Luker, K.A., Hogg, C., Austin, L., Ferguson, B., Smith, K., 1998. Decision-making: the context of nurse prescribing. Journal of Advanced Nursing 27, 657–665.
Lyne, P., Allen, D., Martinsen, C., Satherley, P., 2002. Improving the evidence base for practice: a realistic method for appraising evaluations. Clinical Effectiveness in Nursing 6, 81–88.
Mulhall, A., le May, A., Alexander, C., 2000. Research based nursing practice – an evaluation of an educational programme. Nurse Education Today 20, 435–442.
Norman, G.R., Shannon, S.I., 1998. Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ 158, 203–204.
Pearcey, P.A., 1995. Achieving research based practice. Journal of Advanced Nursing 22, 33–39.
Perkins, E.R., 1992. Teaching research to nurses: issues for tutor training. Nurse Education Today 12, 252–257.
Sackett, D.L., Richardson, W.S., Rosenberg, W., Haynes, R.B., 1998. Evidence-based Medicine. Churchill Livingstone, Edinburgh.
Taylor, R., 2000. A Randomised Controlled Trial of the Effects of Critical Appraisal Skills Workshops on Health Service Decision-makers. Available from: www.doh.gov.uk/research/rd3/nhsrandd/timetdprogs/imp/commiss/execut/imp_1209.htm.
Taylor, R., Reeves, B., Ewing, P., Binns, S., Keast, J., Mears, R., 2000. A systematic review of the effectiveness of critical appraisal training for clinicians. Medical Education 34, 120–125.
Thompson, C., McCaughan, D., Cullum, N., Sheldon, T., Mulhall, A., Thompson, D.R., 2001. Research information in nurses' clinical decision-making: what is useful? Journal of Advanced Nursing 36, 376–388.
Zeigler, L., Joffe, C., Hay, N., 2002. A critical appraisal team for nurses. Nursing Times 98, 39–40.