Health Policy 77 (2006) 64–75

Review

Review of performance assessment and improvement in ambulatory medical care

Philippe Contencin∗, Hector Falcoff, Michel Doumenc

ANAES, avenue du Stade de France, F-93218 Saint-Denis La Plaine Cedex, France

Abstract

Health care plans often consider quality of care as a means of containing rising health costs. The assessment of physician and group practice performance has become increasingly widespread in ambulatory care. This article reviews the three main methods used to improve and assess performance: practice audits, peer-review groups and practice visits. The focus is on Europe (which countries use which methods) and on the following aspects: which authorities or bodies are responsible for setting up and running the systems; whether the systems are mandatory or voluntary; who takes part in assessments and what their motivation is; and whether patients' views are taken into account. Many countries run parallel systems managed by authorities working at different hierarchical levels (national, regional or local). The reasons underlying the choice of a particular system are discussed; they are mostly related to the national health care system and to cultural factors.

© 2005 Elsevier Ireland Ltd. All rights reserved.

Keywords: Quality of care; Performance assessment; Primary care; Audits; Peer review

Contents

1. Introduction
2. Review of the literature and consultation with experts
3. The three most common methods of performance assessment in primary care
3.1. Practice audits
3.2. Peer-review groups
3.3. Practice visits
4. Sponsors of performance assessment in primary care and costs
5. Participation in performance assessment
5.1. Target populations
5.2. Mandatory participation

Corresponding author. Tel.: +33 144494684; fax: +33 144494690. E-mail address: [email protected] (P. Contencin).

0168-8510/$ – see front matter © 2005 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.healthpol.2005.07.017



5.3. Incentives and motivation
5.4. Cost-effectiveness
6. Discussion
6.1. Healthcare management
6.2. Cultural factors
6.3. Public opinion and patients' needs
7. Conclusion
Acknowledgements
References

1. Introduction

In 1997, the Committee of Ministers of the Council of Europe issued recommendations stressing the need to define policies and set up structures for quality improvement at all levels of healthcare, including individual providers and practices in ambulatory care1 [1]. In 2001, it issued recommendations on the development and implementation of clinical practice guidelines and commented upon their use in performance assessment and continuing medical education (CME) [2]. These recommendations raise several questions: which performance assessment systems are in place in Europe; what are their primary objectives (e.g. changing individual physician behaviour, improving the quality of care delivered, reducing costs); are they coercive (mandatory or voluntary participation); what sanctions and incentives, if any, are offered to participants; and how do educational and support measures fit in with performance assessment? For instance, is assessment a first step toward individually tailored CME [3]?

In Europe, The Netherlands and the UK have been the pioneers of performance assessment in primary care. Between them, in the late seventies, they introduced the major assessment methods: practice visits, audits and peer-review groups. Recently, the UK took a bold step and negotiated a general medical services (GMS) contract for the provision of care under the National Health Service (NHS), whereby up to a third of a practice's income can depend on performance as judged by score points for quality indicators [4]. In France, the National Agency for Accreditation and Evaluation in Healthcare (ANAES2) was commissioned in December 1999 to design a national system for voluntary performance assessment of general practitioners (GPs) and specialists in the ambulatory sector [5]. This system is based on the application of standards derived by the agency from its own database of clinical practice guidelines [6]. The purpose of this paper is to highlight issues in the choice of a performance assessment system in primary care by considering the systems used in European countries.

1 In this paper, the term "ambulatory care" covers any care given by a physician outside the hospital sector (i.e. care by GPs and by independent specialists working in either primary or secondary care).

2. Review of the literature and consultation with experts

We defined "performance assessment" as comparing observed clinical practice with recommended practice and measuring any deviations. This assessment may range from a formal in-depth evaluation process to a much less elaborate review of practice. To identify the most common performance assessment methods, we explored the published and grey literature for assessment methods and tools [7]. An earlier systematic electronic search of three databases (Medline, HealthStar, Pascal; January 1985–June 2002) was supplemented by targeted searches (up to August 2004). The search focused on methods of continuous quality improvement (CQI) rather than on assessment of competency: we were more interested in how practitioners could improve their skills in their working environment than in whether they met competency standards and regulations. For this reason, we deliberately excluded certification (as used in Norway, for example), revalidation (e.g. UK), and accreditation (e.g. Belgium) as search terms. The articles we retrieved and selected on practice or performance assessment in ambulatory medical care fell into three broad categories corresponding to three types of method: (i) audits/audit groups, (ii) peer-review groups, and (iii) practice visits. We noted the countries that used these methods, the underlying principles, and the resources required, and consulted the websites of the bodies that had developed or were implementing the methods. Twelve countries (nine European countries plus Australia, Canada, and New Zealand) were the source of the most comprehensive information (Table 1). We identified experts in performance assessment in these countries from published articles and at international meetings on quality in healthcare, and contacted them by mail or interviewed them face-to-face or by phone.

2 Part of the Haute Autorité de santé since January 2005.

3. The three most common methods of performance assessment in primary care

Measuring deviations from recommended clinical practice is a feature common to the following methods. It can range from a strict check of conformity with established standards, as in many audits and on-site visits (i.e. external control of conformity), to a much more informal process in which peers meet to judge practice and discuss improvements among themselves. In the latter case, rigid standards or rules may not be drawn from clinical practice guidelines; instead, pragmatic attitudes are discussed and opinions exchanged. As our paper will show, there is a whole gamut of possible approaches that often meld into one another or that may use common tools (e.g. review of medical records).

3.1. Practice audits

Audits with remote analysis of self-assessments were introduced by GPs in the UK and developed in the 1980s. A medical or clinical audit has been defined as "a study of some part of the structure, process and outcome of medical care carried out by those personally engaged in the activity concerned, to measure whether set objectives have been attained and [it] thus assesses the quality of care delivered" [8]. Audits may address clinical and non-clinical topics (e.g. use of clinical practice guidelines, availability of equipment in the practice, and patient satisfaction surveys). In both The Netherlands and the UK, audits (and also practice visits, see below) address practice organisation in depth [9,10]. The implementation of high-quality audit protocols providing evidence-based review criteria with information on their provenance is strongly recommended [11].

Most audits are based on record keeping, although it is not clear how closely record keeping and performance are related [12]. Good record keeping is invaluable for population studies and for ensuring continuity of care by professionals, but it may not always be the ideal indicator of the personal doctor–patient relationship. An audit is also much easier to implement in a computerized practice. In the UK, 96% of GP practices are computerized and 16% are even reported to be paperless [13]. In France, about 85% of practices are equipped with computers; however, these computers may sometimes be used only to notify the patient's local national insurance office of the patient's visit and of the code of the service provided, as required by the regulations in force. Several methods are used to review records and other data: remote analysis by a specially trained investigator belonging to a professional body, on-site review by a colleague or an appraiser, or review within a group meeting of practitioners.

A recent systematic review of multipractice audits in the UK found audits "moderately effective" but stressed that their effectiveness can be difficult to gauge because they are often part of a complex set of interactions [14]. A literature review pinpointed five main barriers to audits, including lack of resources and expertise for design, analysis and planning, as well as organisational impediments and problems between group members [15]. Moreover, developers of auditing systems have told us of concerns about discrepancies between how audits are actually conducted and what they achieve, on the one hand, and their theoretical aim of improving the quality of medical practice, on the other.

3.2. Peer-review groups

The Netherlands was also the first European country to set up peer-review groups, in 1979 [16]. Most European countries have based their peer-review groups on the Dutch model. Practitioners meet to present

Table 1
Examples of performance assessment initiatives in the ambulatory sector. For each initiative: country/province; name of PA system, mandatory (M) or voluntary (V); year introduced; sponsor(s); method; target; whether patients are consulted; frequency; fee per procedure.

- Australia: Accreditation (M); 1999; RACGP/AGPAL/GPA; practice visit (inspection by 2 persons incl. 1 peer); GPs; patients consulted: yes; every 5 years; AUD 1295.
- Belgium: Mutual visit (V); 1994; Flemish society of general medicine (WVVH); practice visit; GPs; patients consulted: yes; at GP's request; fee: –.
- Belgium: GLEM/LOKs (V); 1996; Ministry of Health/participants; peer-review group; GPs and specialists; patients consulted: no; 2–4 times a year; fee: –.
- Canada, Alberta: Physician Achievement Review Program (M); 1980; CPSA + university; remote data analysis; GPs and specialists; patients consulted: yes; every 5 years; CAD 200.
- Canada, Ontario: Peer Assessment Program (M); 1980; CPSO + university; practice visit (1 peer); GPs and specialists; patients consulted: yes; at random and doctors >70 years old; CAD 1100.
- Canada, Quebec: Inspection Professionnelle (M); 1997; Collège des Médecins du Québec; inspection (1 inspector–doctor); GPs and specialists; patients consulted: yes; per doctor profiling; fee: –.
- Denmark: Audit Project Odense (V); 1989; Odense university; group audit; GPs; patients consulted: yes; at GP's request; DKK 500.
- Denmark: Supervision groups (V); 1996; local groups; peer-review group; GPs; patients consulted: no; at GP's request; fee: –.
- England and Wales (see also Table 2): Fellowship by assessment (V); 1989; Royal College of General Practitioners; audit, remote analysis, practice visit (three persons including two peers); GPs; patients consulted: yes; at GP's request; GBP 800.
- France: Groupes d'audit, d'évaluation/Cercles de qualité (V); 1993; local GPs; peer-review group; GPs; patients consulted: no; at GP's request; fee: –.
- Germany: Qualitätszirkel (V); 1993; ASHIP/Hannover university; peer-review group; GPs; patients consulted: no; at GP's request; fee: –.
- The Netherlands: Peer-review group (V); 1976; NHG/WOK/LHV/KNMG; peer-review group; GPs; patients consulted: no; at GP's request; fee: –.
- The Netherlands: Visitatie (V); 1980; KNMG/WOK; practice visit (1 assistant); GPs; patients consulted: yes; at GP's request; EUR 380.
- New Zealand: Practice review (M); 1999; Royal New Zealand College of General Practitioners; practice visit (inspection by 1 peer); GPs; patients consulted: optional; every 3 years.
- Norway: Praexisbesok (V); 1996; Ministry of Health/university; practice visit (mutual); GPs; patients consulted: no; every 5 years.
- Sweden: Medical Quality Assessment (V); 1994; national councils of doctors; practice visit; GPs and specialists; patients consulted: yes; at doctor's request; fee: –.
- Switzerland: QualiDoc (V); 1999; Swisspep/Swiss society of GPs; audit, remote analysis + optional practice visit (1 peer); GPs; patients consulted: yes; at GP's request; CHF 864–2214.

AGPAL: Australian General Practice Accreditation Limited; ASHIP: Associations of Statutory Health Insurance Physicians; CHI: Commission for Health Improvement; CPSA: College of Physicians and Surgeons of Alberta; CPSO: College of Physicians and Surgeons of Ontario; GP: general practitioner; GPA: private company for the accreditation of Australian doctors; KNMG: Royal Dutch Medical Association; LHV: Dutch National Association of General Practitioners; NHG: Dutch College of General Practitioners; RACGP: Royal Australian College of General Practitioners; SWISSPEP: Institute for Quality and Research in Health Care; WOK: institute for research in primary care affiliated with Nijmegen and Maastricht universities.


their work, review their records and performance, and engage in a form of dynamic CQI [17–19]. Peer-review groups may be as formal as audit groups, with specific scheduled aims, or as informal as interactive CME sessions [20]. Each group meeting is often devoted to a different topic [21,22], such as a drug-prescribing habit or the implementation of a clinical practice guideline [23]. Many countries have implemented peer-review groups or quality circles. In Norway, small groups of GPs take part in a 1-year project over 4–6 meetings. In Belgium, meetings resemble CME sessions. In New Zealand, activities cover quality assurance (audit), mentoring, peer support and education. An overview of the development of peer-review groups/quality circles as a method of quality improvement in Europe has been published [24]. It concluded that this performance assessment method has developed substantially in 10 countries over the last 10 years, but that its impact on the quality of care still needs to be evaluated.

Specific studies, some of them randomised controlled studies, have revealed improvements in physician behaviour, particularly in prescribing and test ordering. In Norway, prescribing behaviour for patients with asthma or urinary tract infections was improved by deriving prescribing quality criteria during discussions of guideline recommendations within peer-review groups [25]. In The Netherlands, antibiotic prescribing for respiratory tract infections in primary care was reduced by instituting group education meetings to derive a consensus on indications and drug use, coupled with feedback on behaviour [26]. Organising peer-group meetings also improved test-ordering behaviour, and proved more effective than simply providing feedback [27]. In the UK, a multi-group initiative involving peer-review meetings, practice comparison feedback and input from an advisor showed that working together effectively within a Primary Care Trust could improve prescribing patterns and help control prescribing costs [28].

3.3. Practice visits

Visits between GPs first took place in the sixties in The Netherlands but were not developed until 15 years later [16,29]. An on-site peer review is the most advanced and personal mode of performance assessment. Peers or trained assistants analyse data collated on the spot or during an earlier self-assessment

phase [9,30,31], often using structured protocols with standards and/or checklists [32]. In the Fellowship by Assessment system in England and Wales, self-assessment criteria include patient satisfaction questionnaires, evidence of participation in CME and of good professional practice, and video-recorded consultations [10]. In Australia, AGPAL, an independent body that runs the system for the government and for professional bodies, sends out a self-assessment questionnaire before sending two surveyors to complete a checklist on site [33].

Feedback from an on-site visit is the key element in encouraging improvements in quality of care. It may take the form of advice, tailored education and/or sanctions that include restrictions on care. An emphasis on education and quality improvement softens resistance. In Switzerland, feedback is a practice profile and a comparison with national and local benchmarks [34]. In Australia, the GP is given a report with a tailored educational programme [33]. In Ontario, a "Quality Assurance Committee" grades the GP's performance on the basis of the assessor's report. It may call for further education, re-assessment and/or an interview with the "Peer Assessment Committee", during which targeted educational programmes or care restrictions are discussed with the doctor [31,35].

Who performs an on-site visit is a key factor in the acceptance of an assessment system. Practitioners prefer the visit and comments of peer(s) trained as assessors, as in Ontario [35]. There, the candidate completes a questionnaire relating to the practice; the assessor examines the premises, equipment and 20–30 medical records, fills in a checklist, and drafts a report, which is discussed during a face-to-face interview. However, ancillary staff may also take part in visits. In Sweden, a district nurse accompanies the assessor (a GP) and helps review organisational, clinical and communication skills as well as patient satisfaction [30].
In The Netherlands, practice visits (“visitatie”) are undertaken by “non-physician observers” (experienced assistants) who, on a given day, complete a checklist, interview staff, and ask patients to complete questionnaires. However, feedback is provided by a trained peer [9]. Mutual practice visits have been developed for GPs in Norway and, recently, by Dutch dentists [36]. The most intrusive types of on-site visits are inspections. They aim to protect the public from the poorest


medical practices by looking out for patterns of consistently unacceptable behaviour [37]. To our knowledge, only two licensing bodies, both in Canada, consider inspections to be a performance assessment system. (i) The Collège des Médecins du Québec [37] has a body of six full-time inspector–doctors who check on the performance of practices chosen at random or earmarked by complaints from patients or doctors, or by prescription profiling. The inspected doctor's viewpoint is heard and a report is made, the conclusions of which are final. (ii) In Alberta, the College of Physicians and Surgeons uses compulsory questionnaires completed by the doctor, patients, co-workers and colleagues to assess the perceived level of performance in office practice [38,39]. Physicians are then visited every 5 years by trained surveyor–doctors who check standards of practice on site and help draw up a tailored improvement plan.

4. Sponsors of performance assessment in primary care and costs

In some countries, medical practitioners themselves have been the driving force behind assessment initiatives. In others, health authorities have taken the lead. Sponsors fall into three broad categories: (i) groups of practitioners, often with university links (e.g. the Danish, Dutch, French, German and Swiss peer-review groups or the Belgian and Norwegian practice visits); the Danish "Audit Project Odense" [21,22] is a university-sponsored audit model being adopted in Norway, Iceland and Sweden [40]; (ii) professional bodies in charge of protecting patients, such as professional colleges (Collège des Médecins du Québec, College of Physicians and Surgeons in Ontario) or the General Medical Council (GMC) in the UK [41], which have the authority to inspect doctors' activities at random or after complaints from patients; (iii) national authorities which have included performance assessment in an accreditation or certification procedure, as in Australia [42] and Belgium [43]. In New Zealand, peer-review groups are supported by the "maintenance of professional standards" (MOPS) programme, which is


part of the compulsory accreditation process [44]. GPs who have been in practice for 5 years must undergo "practice review" within the "MOPS" programme.

The costs of setting up and running performance assessment systems are not known. Costs borne by the individual practitioner range from EUR 67 (group audit) to EUR 1435 (QualiDoc with on-site visit) [34], or US$ 81 to 1735 (Table 1).

5. Participation in performance assessment

5.1. Target populations

Performance assessment most often targets individual GPs, sometimes a group practice [45] or a health centre [46]. Assessments may be made at a practitioner's request, at random or, as in Ontario, when a practitioner reaches 70 years of age or when patients complain [31]. Specialists are potential candidates for assessment in three of the eight countries in which they are part of the ambulatory sector (Table 1). They seem less ready than GPs to undergo performance assessment. GPs may feel a greater need because their knowledge has to be broad-based and/or because they are often gatekeepers in the countries that have pioneered performance assessment in primary care (The Netherlands, the UK, Australia, etc.). Dutch medical specialists have shown a positive attitude towards the intervention of management consultants in the implementation of practice-specific recommendations made during external peer review [47]. At times, health professionals other than doctors, e.g. nurses and practice managers, may be included in an assessment, as in the UK and New Zealand [44,45].

5.2. Mandatory participation

Performance assessment is mandatory if it is part of an accreditation and/or certification procedure, as in New Zealand. It can be indirectly mandatory through links to another quality improvement initiative such as CME. In Switzerland, although participation in quality circles is voluntary, the Swiss Association of General Practitioners (SGAM/SSMG) and other specialist organisations (e.g. internists) view activities


undertaken in quality circles as part of CME, which is mandatory [48]. Although most countries have opted for voluntary systems of assessment, British Commonwealth countries seem to favour mandatory systems (Tables 1 and 2). In 1991, the UK Department of Health created the Medical Audit Advisory Groups (MAAGs), which planned and reviewed audit activities at the local/regional level [49,50]. The MAAGs were incorporated into the multiple dimensions of the NHS programme for "revalidation" within the Clinical Governance framework. Moreover, a Commission for Health Improvement (now the Commission for Healthcare Audit & Inspection) was mandated to perform visits every 4 years to all health care organisations in England and Wales, including the Primary Care Trusts to which GPs belong. With the use of carefully selected criteria [11], audits remain a cost-effective part of this scheme, which couples group assessment with remote analysis of data, followed by an on-site visit in case of an anomaly [51–53]. Very recently, the French Ministry of Health decided that performance assessment should be mandatory (every 5 years) for all practitioners, whether from the public or the private sector, who work in ambulatory care [54]. The assessment is based on standards derived from current clinical practice guidelines.

5.3. Incentives and motivation

Incentives to take part in performance assessment programmes (when these are not mandatory) tend to be of several kinds: financial, linked to career and/or honorific status, or purely professional [55]. Interviews of primary care staff enrolled in a quality improvement scheme designed to produce widespread changes in chronic disease management in the UK showed that GPs were motivated by a desire to improve patient care, financial incentives, maintenance of professional autonomy in how to reach targets, maintenance of professional pride, and peer pressure [56]. Financial incentives are often linked to accreditation or recertification procedures. In Australia, accreditation provides access to the "Commonwealth's Incentive Programme", which guarantees a higher income for accredited GPs [57]. In Belgium, a financial bonus is awarded to accredited doctors. In Norway, practising GPs can aspire to become "specialists in general medicine", with a higher income and more varied duties (research, teaching, etc.) [58]. For this, they have to undergo "recertification" and attend peer-review group meetings. In England and Wales, a desire for recognition and excellence prompts about 100–140 British GPs each year to apply for College membership or fellowship by individual assessment [32].

The mainspring of peer-review groups tends to be personal motivation: a desire to learn and share. A group provides interaction between practices, a sense of cohesion, and the support afforded by group policies [59]. In Germany, group meetings are attended mostly by single-handed GPs seeking an exchange of views and constructive criticism [60]. In the UK, the peer-review element was a particularly valued facet of a team-based, multiprofessional and formative quality initiative [61]. To motivate participants, most peer-group meetings are run by trained moderators [62]. In Denmark, psychologists teach "moderator-GPs" problem-solving techniques, communication skills and how to prevent burn-out [63]. In Switzerland, hundreds of volunteer tutors (doctors or allied health professionals) have been trained [48]; they chair groups of doctors and encourage them to improve their performance. An important factor is maintaining early enthusiasm and the quality of assessment; cyclic quality improvement procedures are useful in this regard.

5.4. Cost-effectiveness

Which performance assessment methods are the most cost-effective is not known, because there are as yet no validated indicators for estimating their efficiency with regard to patients' health. Published studies are few and only address impact on doctors' behaviour. For instance, German quality circles have led to substantial improvements in drug-prescribing habits and in the quality of care of diabetic patients [20,64–66]. Self-audits have led to improvements in preventive measures such as vaccination rates [67] and provide a satisfactory overview of performance unless, of course, a doctor cheats [68]. They are less expensive than group audits [49]. However, the cost-effectiveness of video observation to assess communication skills and medical performance has not been evaluated [69]. Most countries have chosen the least costly assessment methods, i.e. mailed questionnaires with feedback from a peer, or structured peer-review groups chaired by

Table 2
UK's performance assessment initiatives in the medical ambulatory sector. For each initiative: country; name of PA system, mandatory (M) or voluntary (V); year introduced; sponsor(s); financier; method; target; whether patients are consulted; frequency.

- UK: RCGP(a) Membership by Assessment Programme (V); before 1975; RCGP; participants; written and oral exam plus videos from own practice; GPs; patients consulted: no; once.
- England and Wales: Fellowship by Assessment (V); 1989; RCGP; participants; audits, videos, remote analysis, practice visit (three persons including two peers); GPs; patients consulted: yes; at GP's request.
- England and Wales: Medical Audit (MAAGs) (V); 1991; Department of Health/advisory groups; NHS; group audit; GPs; patients consulted: optional; ad libitum.
- UK: Summative assessment (M); by 1995; JCPTGP(b); participants; four-part assessment including consultations video and trainer's report; trainee GPs ("registrars"); patients consulted: no; once.
- UK(c): GMC(d) performance procedure (M); by 1997; GMC; GMC; exam and practice visit by trained assessors; doctors referred because of suspected poor professional standards; patients consulted: no; once.
- UK(c): Quality Practice Assessment (V); 2000; RCGP; participants; self-reporting plus site visit; general practices; patients consulted: yes; every 5 years.
- England(c): NCAA(e) (M); 2000; NHS; NHS; practice-based assessment and interviews by trained assessors; doctors referred by employer because of poor clinical performance; patients consulted: no; once.
- UK: Revalidation process (M); 2002; GMC; Dept of Health/CHI(f)/GMC; remote data analysis; GPs; patients consulted: yes; every 5 years.
- England and Wales(c): CHAI(f) report (M); 2004; Department of Health (DoH); DoH; visits and reports using a framework; Primary Care Trusts; patients consulted: yes; every 3 years.

(a) Royal College of General Practitioners. (b) Joint Committee for Postgraduate Training in General Practice. (c) These assessments are not exclusive to ambulatory care. (d) General Medical Council. (e) National Clinical Assessment Agency. (f) Commission for Healthcare Audit and Inspection (formerly CHI, Commission for Health Improvement).

72

P. Contencin et al. / Health Policy 77 (2006) 64–75

experienced moderators. In Canada, at least one professional college seems to consider inspections the least costly means of eliminating bad or poor performance [37].

6. Discussion

The above overview has highlighted differences in the modes of performance assessment of GPs and/or specialists in the ambulatory sector in 12 countries. The most common assessment methods were peer-review groups and on-site visits by peers, but with variations depending upon country (who assesses, on what basis, whether assessment is mandatory or not, etc.). Audit, with its two assessment modes (individual and collective), is a method per se but also a tool used in peer-review groups and practice visits. We hypothesise that this variety arises from differences in social, political and economic context, as a potentially successful assessment method in any country has to be in line with (i) prevailing healthcare management and regulation, (ii) cultural factors, and (iii) patients' expectations.

6.1. Healthcare management

How the healthcare system is managed and regulated varies considerably among countries, even within the European Union, and influences performance and outcomes. The relative clout exerted by the government and by opinion leaders in the medical profession is a key factor in the spread and acceptance of performance assessment. Although the state can exert various forms and levels of coercion, no policing system by a higher authority that weeds out "rotten apples" will be accepted at grass-roots level if it is rejected by the opinion leaders. Moreover, any scheme that offers incentives such as a higher status (e.g. specialist in general medicine in Norway, college fellowship in the UK) and/or a higher income (e.g. in Australia, Belgium, Norway) is more likely to be accepted and to succeed than one that does not [15].

Key questions are whether national CQI efforts aimed at health care systems are more effective than strategies aimed at the individual doctor [70], and whether locally owned team-based initiatives can deliver on prevailing national policies [61]. A comparison of a national quality policy in The Netherlands with a decentralised approach in Finland for hospitals, care for the disabled and care for the elderly concluded that promoting a greater number of quality initiatives nationally was less important than promoting the most effective initiatives [71].

According to our overview, a key factor seems to be "marketing", i.e. creating the right ethos that will encourage participation. A practitioner will feel good if he/she is not imposed upon, is personally motivated, well informed, better educated and becomes more skilled [72]. In practice, this often means that:
(a) participation is organised by professional institutions (medical councils or colleges) or at least with their support; self-regulation guarantees freedom of action, such freedom being a prerequisite for change in some countries;
(b) participation is mostly voluntary;
(c) participation is handled by moderators with a gently persuasive rather than an overpowering or inquisitorial touch. For instance, in Germany, inadequate training of moderators [63] and poor group participation [20] have compromised the success of some quality circles.

The quality that may not yet be part of the ethos is "lack of pretentiousness". Few GPs accept that a non-doctor observer collects information on their practice on site. They prefer feedback from a peer, even on non-medical issues [73].

6.2. Cultural factors

Culture – whether national or local – is a key factor determining a health professional's acceptance of outside intervention with his/her practice [74]. Deep-seated ancestral habits come into play. On the one hand, there is transparency, openness, mentoring, etc.; on the other, there is secretiveness with regard to procedures, records, outcomes and income. Calls for transparency are resisted because doctors fear that the results of a performance assessment may be used against them [75].

Cultural differences are wide even within a supposedly homogeneous national health system. A recent study of the culture of medical group practices has distinguished three different types of practice: practices with a strong information culture that favour evidence-based data, practices with a quality-centred culture that seem to prefer patient satisfaction surveys, and business-oriented practices that rely on benchmarking and practitioner profiling [76].


6.3. Public opinion and patients' needs

Patients want to be involved in performance assessment [77–79] and their opinion is often solicited through questionnaires. Although, according to the results of the EUROPEP questionnaire, patients across Europe often hold similar views, their priorities and expectations depend on cultural background and environment (e.g. urban versus rural) [80,81]. Assessment is thus best carried out at a local level. Patient empowerment is a difficult issue, as the patient may see only the tip of the iceberg. Patient satisfaction with an efficient service is a laudable objective, but it should not override the issues of medical effectiveness [67,72], managerial efficiency [82,83], and payers' policies.

7. Conclusion

Our overview of the main performance assessment systems used in 12 countries has highlighted differences in approach, which depend mainly on how health care systems are organised and on cultural factors. It is not clear, however, which environmental factors and incentives are needed for a particular performance assessment method to be successful. More sharing of experience in assessment procedures is needed, especially if there is a will to harmonise health care systems across Europe [84].

Acknowledgements

We are especially grateful to the experts who gave generously of their time and provided valuable information on performance assessment in their country: Richard Baker, Charles Bruneau, André Dahinden, Birgitta Danielsson, Nicklaus Egli, Dan Faulkner, James Goldberg, Richard Grol, Knut Holtedahl, Pieter van den Hombergh, André Jacques, Beat Künzi, Kjell Lindström, Geoff Martin, Lorna Martin, Yves Matillon, Anders Munck, Tiiu Ojasoo, Mike Pringle, Luc Seuntjens, Joachim Szeczeny, Bryan Ward and Noella Whitby.

References

[1] Committee of Ministers of the Council of Europe. The development and implementation of quality improvement systems (QIS) in health care. Recommendation No. R (97) 17. Strasbourg: Council of Europe, 1997.
[2] Council of Europe. Recommendation Rec(2001)13 on developing a methodology for drawing up guidelines on best medical practices and explanatory memorandum. See: http://www.coe.int/T/E/Social Cohesion/Health/Recommendations/Rec(2001)13.asp (accessed December 2004).
[3] Davis D, O'Brien MA, Freemantle N, et al. Impact of formal continuing medical education: do conferences, workshops, rounds and other traditional continuing education activities change physician behavior or health care outcomes? Journal of the American Medical Association 1999;282:867–74.
[4] The NHS Confederation. New GMS contract. See: http://www.nhsconfed.org/gms/default.asp (accessed December 2004).
[5] Décret no 99-1130 du 28 décembre 1999 relatif à l'évaluation des pratiques professionnelles et à l'analyse de l'évolution des dépenses médicales [Decree no. 99-1130 of 28 December 1999 on the evaluation of professional practice and the analysis of trends in medical expenditure]. Journal Officiel de la République Française, 29 décembre 1999.
[6] Base française d'évaluation en santé [French health evaluation database]. See: http://bfes.anaes.fr/HTML/index.html (accessed December 2004).
[7] Grol R, Baker R, Wensing M, et al. Quality assurance in general practice: the state of the art in Europe. Family Practice 1994;11:460–7.
[8] Bentzen N. WONCA international glossary for general/family practice. Family Practice 1995;12:267.
[9] van den Hombergh P, Grol R, van den Hoogen HJ, van den Bosch WJ. Practice visits as a tool in quality improvement: acceptance and feasibility. Quality in Health Care 1999;8:167–71.
[10] Royal College of General Practitioners. Fellowship by Assessment. 2nd ed. London: Royal College of General Practitioners, 1995.
[11] Hearnshaw HM, Harker RM, Cheater FM, Baker RH, Grimshaw GM. Are audits wasting resources by measuring the wrong things? A survey of methods used to select audit review criteria. Quality and Safety in Health Care 2003;12:24–8.
[12] Kelly L. Record keeping: synonymous with quality of care. Canadian Family Physician 1998;44:29.
[13] Department of Health. Delivering 21st century IT support for the NHS: national strategic programme. London: Department of Health, 2002. See: www.dh.gov.uk/assetRoot/04/06/71/12/04067112.pdf (accessed December 2004).
[14] Holden JD. Systematic review of published multi-practice audits from British general practice. Journal of Evaluation in Clinical Practice 2004;10:247–72.
[15] Johnston G, Crombie IK, Davies HT, et al. Reviewing audit: barriers and facilitating factors for effective clinical audit. Quality in Health Care 2000;9:23–6.
[16] van den Hombergh P. Practice visits: assessing and improving management in general practice [Thesis]. Nijmegen: University of Nijmegen, WOK, 1998.
[17] Gerlach FM, Beyer M, Romer A. Quality circles in ambulatory care: state of development and future perspective in Germany. International Journal for Quality in Health Care 1998;10:35–42.
[18] Egli N. Continuous quality improvement by peer review groups. In: Alles V, Mäkelä M, Persson L, et al., editors. Tools and methods for quality improvement in general practice (EQuiP). Jyväskylä: Gummerus Printing; 1998. p. 86.
[19] Schweizerische Gesellschaft für Allgemeine Medizin. Qualitätszirkel [Quality circles]. See: http://www.sgam.ch/index.htm (accessed December 2004).
[20] Weisser P, Härter M, Tausch B. Family practice quality circles between goals and reality: an interaction analysis. Zeitschrift für ärztliche Fortbildung und Qualitätssicherung 2000;94:4–10.
[21] Munck A. Audit Project Odense – a Scandinavian audit centre for general practice. Audit Trends 1995;3:18–21.
[22] Forskningsenheden for Almen Praksis i Odense. Audit Projekt Odense (APO). See: http://www.almen.dk/odense/ (accessed December 2004).
[23] Lauven G, Becker H, Euteneuer H, et al. Basic tenets for quality circles in North Rhine medical service of public health insurance. Zeitschrift für ärztliche Fortbildung und Qualitätssicherung 2000;94:71–7.
[24] Beyer M, Gerlach FM, Flies U, Grol R, Krol Z, Munck A, et al. The development of quality circles/peer review groups as a method of quality improvement in Europe: results of a survey in 26 European countries. Family Practice 2003;20:443–51.
[25] Lagerløv P, Loeb M, Andrew M, Hjortdahl P. Improving doctors' behaviour through reflection on guidelines and prescription feedback: a randomised controlled study. Quality in Health Care 2000;9:159–65.
[26] Welschen I, Kuyvenhoven MM, Hoes AW, Verheij TJ. Effectiveness of a multiple intervention to reduce antibiotic prescribing for respiratory tract symptoms in primary care: randomised controlled trial. British Medical Journal 2004;329:431.
[27] Verstappen WH, van der Weijden T, Dubois WI, Smeele I, Hermsen J, Tan FE, et al. Improving test ordering in primary care: the added value of a small-group quality improvement strategy compared with classic feedback only. Annals of Family Medicine 2004;2:569–75.
[28] Walker J, Mathers N. The impact of a general practice group intervention on prescribing costs and patterns. British Journal of General Practice 2002;52:412–3.
[29] Bergsma J. Onderlinge toetsing der praktijkvoering door huisartsen [Mutual assessment of practice management by general practitioners]. Huisarts en Wetenschap 1966;9:82–7.
[30] Eliasson G, Berg L, Carlsson P, et al. Facilitating quality improvement in primary health care by practice visiting. Quality in Health Care 1998;7:48–54.
[31] College of Physicians and Surgeons of Ontario. Peer Assessment Program. See: http://www.cpso.on.ca/Info physicians/peer.htm (accessed December 2004).
[32] Royal College of General Practitioners. Membership by Assessment of Performance (MAP). See: http://www.rcgp.org.uk/map/usersguide.asp (accessed December 2004).
[33] Australian General Practice Accreditation Ltd. Accreditation. See: http://www.agpal.com.au/subpage.asp?page=content&Id=2 (accessed December 2004).
[34] SWISSPEP. Quali Doc programme. See: http://www.swisspep.ch/pages/sitemap e.html (accessed December 2004).
[35] Norton PG, Dunn EV. What factors affect quality of care? Using the Peer Assessment Program in Ontario family practices. Canadian Family Physician 1997;43:1739–44.
[36] Sluijs EM, Bennema-Broos M, Wagner C. Dentists and mutual practice visitation. Nederlands Tijdschrift voor Tandheelkunde 2002;109:20–4.
[37] Collège des Médecins du Québec. A system of monitoring and improving the performance of physicians. See: http://www.cmq.org/asp/english.asp?id=919 (accessed December 2004).
[38] Hall W, Violato C, Lewkonia R, et al. Assessment of physician performance in Alberta: the Physician Achievement Review. Canadian Medical Association Journal 1999;161:52–7.
[39] College of Physicians and Surgeons of Alberta. Physician Achievement Review. See: http://www.par-program.org/PARHistory2.htm (accessed December 2004).
[40] Munck A. A Nordic collaboration on medical audit: the APO method for quality development and continuous medical education (CME) in primary health care. Scandinavian Journal of Primary Health Care 1998;16:2–6.
[41] General Medical Council. Protecting the public. See: http://www.gmc-uk.org/about/default.htm (accessed December 2004).
[42] The DrsReference site. Accreditation. See: http://www.drsref.com.au/accreditation.html (accessed December 2004).
[43] Ministère des Affaires sociales, de la Santé publique et de l'Environnement. Accréditation des médecins [Accreditation of doctors]. See: http://www.health.fgov.be/AGP/fr/professions/medecins/accreditation/professions medicales-generaliste-accreditation medicale.htm (accessed December 2004).
[44] Royal New Zealand College of General Practitioners. Maintenance of Professional Standards (MOPS). See: http://www.rnzcgp.org.nz/mops.php (accessed December 2004).
[45] Royal College of General Practitioners. Quality Team Development. See: http://www.rcgp.org.uk/qtd/index.asp (accessed December 2004).
[46] Lindman A. Guide medicine. In: Alles V, Mäkelä M, Persson L, et al., editors. Tools and methods for quality improvement in general practice (EQuiP). Jyväskylä: Gummerus Printing; 1998. p. 32–3.
[47] Lombarts MJ, Klazinga NS. Supporting Dutch medical specialists with the implementation of visitatie recommendations: a descriptive evaluation of a 2-year project. International Journal for Quality in Health Care 2003;15:119–29.
[48] Schweizerische Gesellschaft für Allgemeinmedizin. Qualitätszirkel im Umbruch? [Quality circles in transition?] See: http://www.ssmg.ch/pdf/0301-05.Mod.Info.Umfrage.pdf and http://www.qualitaetszirkel.ch/cgi-bin/WebObjects/THECLUB.woa/wa/default?mandant=HTS000 (accessed December 2004).
[49] Baker R, Hearnshaw H, Cooper A, et al. Assessing the work of medical audit advisory groups in promoting audit in general practice. Quality in Health Care 1995;4:234–9.
[50] Department of Health. Health service developments: medical audit in the family practitioner services. London: Department of Health, 1990.
[51] Commission for Health Improvement. Clinical audits. See: http://www.chi.nhs.uk/eng/audit/index.shtml (accessed December 2004).
[52] Healthcare Commission. Performance ratings. See: http://ratings2004.healthcarecommission.org.uk/ (accessed December 2004).
[53] Department of Health. Appraisals, revalidation. See: http://www.dh.gov.uk/PolicyAndGuidance/HumanResourcesAndTraining/LearningAndPersonalDevelopment/Appraisals/fs/en (accessed December 2004).
[54] Loi no 2004-810 du 13 août 2004 relative à l'assurance maladie, article 14 [Law no. 2004-810 of 13 August 2004 on health insurance, article 14]. Journal Officiel de la République Française, 17 août 2004.
[55] Durand-Zaleski I, Chaix C. Financial incentives. In: Durieux P, editor. Comment améliorer les pratiques médicales? Approche comparée internationale [How can medical practice be improved? An international comparative approach]. Paris: Médecine-Sciences Flammarion; 1998. p. 43–56.
[56] Spooner A, Chapple A, Roland M. What makes British general practitioners take part in a quality improvement scheme? Journal of Health Services Research and Policy 2001;6:145–50.
[57] Royal Australian College of General Practitioners. Australian Medical Council (AMC) accreditation. See: http://www.racgp.org.au/document.asp?id=8498 (accessed December 2004).
[58] Den norske laegeforening (Norwegian Medical Association). Spesialistutdanning [Specialist training]. See: http://www.legeforeningen.no/index.db2?id=1058&PHPSESSID=70fdeff0473a730e3f06d7024a63e6af (accessed December 2004).
[59] Walker J, Mathers N. Working together: a qualitative study of effective group formation amongst GPs during a cost-driven prescribing initiative. Family Practice 2004;21:552–8.
[60] Steinkhol M, Niemann D. General practice quality circles in the large city: participation by Hamburg general physicians. Zeitschrift für ärztliche Fortbildung 1997;90:747–51.
[61] MacFarlane F, Greenhalgh T, Schofield T, Desombre T. RCGP Quality Team Development programme: an illuminative evaluation. Quality and Safety in Health Care 2004;13:356–62.
[62] Michel C. Cercle de la qualité: rompre l'isolement, enrichir son expérience [Quality circles: breaking isolation, enriching one's experience]. Revue du Praticien Médecine Générale 2000;14:1462–4.
[63] Errebo-Knudsen L. Supervision groups. In: Alles V, Mäkelä M, Persson L, et al., editors. Tools and methods for quality improvement in general practice (EQuiP). Jyväskylä: Gummerus Printing; 1998. p. 44.
[64] Härter M, Vauth R, Tausch B, et al. Goals, content and evaluation of training seminars for quality circle moderators. Zeitschrift für ärztliche Fortbildung 1996;90:394–9.
[65] Hartmann P, Grüsser M, Jörgens V. Structured public health quality circle on the topic of diabetes management in general practice. Zeitschrift für ärztliche Fortbildung 1995;89:415–8.
[66] von Ferber L, Bausch J, Köster I, et al. Pharmacotherapeutic circles: results of an 18-month peer-review prescribing-improvement programme for general practitioners. Pharmacoeconomics 1999;16:273–83.
[67] Senez B. Application de l'audit clinique à un médecin généraliste isolé [Applying clinical audit to an isolated general practitioner]. Revue du Praticien Médecine Générale 1995;9:50–2.
[68] Borgiel AE, Williams JI, Davis DA. Evaluating the effectiveness of 2 educational interventions in family practice. Canadian Medical Association Journal 1999;161:965–70.
[69] Ram P, Grol R, Rethans JJ, et al. Assessment of general practitioners by video observation of communicative and medical performance in daily practice: issues of validity, reliability and feasibility. Medical Education 1999;33:447–54.
[70] Lanier DC, Roland M, Burstin H, Knottnerus JA. Doctor performance and public accountability. Lancet 2003;362:1404–8.
[71] Sluijs EM, Outinen M, Wagner C, Liukko M, de Bakker DH. The impact of legislative versus non-legislative quality policy in health care: a comparison between two countries. Health Policy 2001;58:99–119.
[72] Klazinga N. Re-engineering trust: the adoption and adaptation of four models for external quality assurance of health care services in western European health care systems. International Journal for Quality in Health Care 2000;12:183–9.
[73] van den Hombergh P, Grol R, van den Hoogen HJ. Practice visits as a tool in quality improvement: mutual visits and feedback by peers compared with visits and feedback by non-physician observers. Quality in Health Care 1999;8:161–6.
[74] Marshall M, Sheaff R, Rogers A, Campbell S, Halliwell S, Pickard S, et al. A qualitative study of the cultural changes in primary care organisations needed to implement clinical governance. British Journal of General Practice 2002;52:641–5.
[75] Irvine D. The performance of doctors: the new professionalism. Lancet 1999;353:1174–7.
[76] Kaissi A, Kralewski J, Curoe A, Dowd B, Silversmith J. How does the culture of medical group practices influence the types of programs used to assure quality of care? Health Care Management Review 2004;29:129–38.
[77] Rothman DJ. Medical professionalism: focusing on the real issues. New England Journal of Medicine 2000;342:1284–6.
[78] Grol R, Wensing M, Mainz J. Patients' priorities with respect to general practice care: an international comparison. European Task Force on Patient Evaluations of General Practice (EUROPEP). Family Practice 1999;16:4–11.
[79] Kazandjian VA. Power to the people: taking the assessment of physician performance outside the profession. Canadian Medical Association Journal 1999;161:44–5.
[80] Wensing M, Mainz J, Ferreira P, et al. General practice care and patients' priorities in Europe: an international comparison. Health Policy 1998;45:175–86.
[81] Grol R, Wensing M, Mainz J. Patients in Europe evaluate general practice care: an international comparison. British Journal of General Practice 2000;50:882–7.
[82] Shaw C. External quality mechanisms for health care: summary of the ExPeRT project on visitatie, accreditation, EFQM and ISO assessment in European Union countries. International Journal for Quality in Health Care 2000;12:169–75.
[83] Ram P, Grol R, van den Hombergh P, Rethans JJ, van der Vleuten C, Aretz K. Structure and process: the relationship between practice management and actual clinical performance in general practice. Family Practice 1998;15:354–62.
[84] Cucic S. European Union health policy and its implications for national convergence. International Journal for Quality in Health Care 2000;12:217–25.