Programme evaluation in CME


Susan Shannon (e-mail: [email protected])


There are two different approaches to the assessment of continuing medical education (CME) programmes: educational research and programme evaluation. Educational research measures the outcomes of educational interventions and produces generalisable knowledge; such research uses the same hypothesis-testing approach and rigorous research design as other types of medical research. Programme evaluation measures the worth of a programme to the learner and aims to provide information to planners and other decision makers; this assessment uses evaluative inquiry and is designed to address the issues of a specific programme.

Both types of evaluation are important for CME. Educational research describes effective interventions to be included in programme design, whereas programme evaluation tells us how valuable present programmes are and gives direction for future planning. Programme evaluation is an integral part of systematic programme development and is planned at the same time as the CME programme's objectives and content are set. Educational activities can be assessed at various levels, depending on how the results will be used. The following questions are useful for developing an effective and systematic programme evaluation.1

What is the intended use of the evaluation? This question is the most important in evaluation planning, and the answer will help guide the rest of the process. In CME programme planning, evaluation is focused on one course or programme: to monitor satisfaction with the programme's format, speakers, or venue; to improve future programmes; to justify a course; or to compare outcomes with course objectives.

Who will use the results? Planners, participants, and all stakeholders, including sponsors, administrators, and faculty, will be interested in the evaluation. The process is expensive, however, and with limited resources only one main audience can usually be served.

What specific questions will the evaluation answer? The main audience will determine the issues to be addressed by the evaluation. Planners are likely to focus on quality and acceptance, whereas for administrators the issues may be cost related. Policymakers, by contrast, might be more interested in attendance or dissemination.

What kinds of evidence? Three main types of evidence are used for programme evaluation: judgments of quality and satisfaction; measures of competency in knowledge, attitude, and skills; and measures of change in practice behaviours. The kind of evidence needed depends on the question and the audience of the evaluation.

What resources are needed? A realistic assessment of the available time, materials, money, manpower, expertise, and participants' willingness is useful in setting limits for the scope of the programme evaluation.

How are data to be collected and analysed? There are four basic designs for gathering data for programme evaluation: post-test; pretest and post-test; repeated testing over a specific period; and group comparisons.
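To make the pretest and post-test design concrete, the short sketch below shows how paired scores from such a design might be analysed. The scores and the choice of a paired t-test are illustrative assumptions only, not part of this article; the analysis of a real evaluation should still be planned with a statistician.

# A minimal sketch of analysing a pretest/post-test design (Python).
# The scores are invented for illustration; real data would come from
# the programme's own assessments.
from scipy import stats

# Hypothetical knowledge-test scores (% correct) for ten participants
# before and after a CME session.
pretest  = [55, 62, 48, 70, 66, 53, 59, 61, 45, 58]
posttest = [68, 71, 60, 78, 72, 65, 70, 69, 57, 66]

# A paired t-test asks whether the mean within-participant change
# differs from zero; pairing matters because the same people are
# measured twice.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
mean_change = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

print(f"Mean change: {mean_change:.1f} points")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")

In a group-comparison design, by contrast, scores from participants and a comparison group would be contrasted, with an unpaired test or regression adjustment in place of the paired test.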

Data can be obtained by performance tests, audits, interviews, and questionnaires, to name just a few approaches. Questionnaires are the most typical way to gather information from programme participants, but care is needed in formulating appropriate questions, because data are only as good as the questions asked. The types of data gathered could include judgments about the degree of satisfaction with different aspects of the educational activity (see the sketch below); measures of change in attitude, knowledge, or skill; and practice changes. Data analysis depends on the question asked and the type of data obtained. A statistician should be consulted early in the planning of the evaluation to make sure the chosen methods will answer the questions being asked.

How will results be reported? The evaluation report should be a clear, comprehensive, and timely document or presentation of why the evaluation was done (questions), what was done (evidence needed), and how it was done (data collection and analysis), together with results and conclusions. It should be tailored to the audience who will use the results to make decisions.

A systematic approach to programme evaluation will make the process much easier and more rewarding. The bottom line is that even the smallest CME programme can have a useful evaluation. The key to successful programme evaluation is understanding from the outset how the results will be used.
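As a companion to the design sketch above, the example below summarises hypothetical satisfaction questionnaires, the most common evidence source mentioned in the text. The course aspects, the 1-5 ratings, and the "satisfied means 4 or 5" cut-off are all illustrative assumptions, not data from any real programme.

# A small sketch of summarising satisfaction questionnaires (Python).
# The aspects and 1-5 Likert ratings are hypothetical examples.
from statistics import mean

responses = {
    "content":  [5, 4, 4, 5, 3, 4, 5, 4],
    "speakers": [4, 4, 3, 5, 4, 3, 4, 4],
    "venue":    [3, 2, 4, 3, 3, 2, 3, 4],
}

for aspect, ratings in responses.items():
    # Report the mean rating plus the share of "satisfied" responses
    # (4 or 5), a common way to present Likert-scale results.
    satisfied = sum(1 for r in ratings if r >= 4) / len(ratings)
    print(f"{aspect:8s} mean = {mean(ratings):.2f}  satisfied = {satisfied:.0%}")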

1 Morrison J. ABC of learning and teaching in medicine: evaluation. BMJ 2003; 326: 385–87.
