SPECIAL CONTRIBUTION
Evaluation of an EMS Algorithm System

American College of Emergency Physicians Algorithm Review Subcommittee
George Podgorny, MD, Principal Investigator

The American College of Emergency Physicians algorithm project was a pilot study designed to identify a method for evaluating algorithms by peer review and field test. The intent of the pilot project was to make recommendations which would permit a more extensive evaluation of the logic, usefulness, and safety of various algorithms. The project yielded a report which summarizes the research design and suggests possible revisions to the phases of peer review, algorithm assignment, and field test. It is anticipated that the report will be used by the National Center for Health Services Research in developing a method to carefully assess the various algorithms developed to date prior to their widespread use. Presented here is the executive summary of the report. American College of Emergency Physicians: Evaluation of an EMS algorithm system. Ann Emerg Med 9:534-536, October 1980.
ACEP, algorithm project, summary; algorithms, evaluation by peer review and field test, ACEP

BACKGROUND

In July 1977, representatives of the American College of Emergency Physicians (ACEP) attended a meeting sponsored by the National Center for Health Services Research (NCHSR) for the purpose of considering ways to disseminate and determine the usefulness of a number of emergency care algorithms which had been developed as part of the research activities of NCHSR. Although most of these algorithms were not intended specifically for use in emergency medical services (EMS) systems, it seemed that many of them might prove valuable to the rapidly expanding field of emergency medicine. The set of algorithms available varied greatly in terms of content, level of training of users, service settings, data needs, and evidence of effectiveness in terms of clinical results. NCHSR sought to define the roles and responsibilities of authors, testers, and professional societies in promulgating algorithms, with particular emphasis on the policy implications of algorithmic care in emergency medicine.

From the beginning of this study, it was the concern of the principal investigator that the various algorithms as developed required careful assessment by professional groups before their widespread use. A uniform set of procedures for conducting this assessment was highly desirable, and various strategies were considered, including:
1. A consensus of opinion of a specific group of experts;
2. Assessment by various practitioners in various settings, ie, a field test; and
3. Controlled clinical trials, with random assignment of patients to treatment modalities.

Address for reprints: American College of Emergency Physicians, Emergency Medical Services Information Center, PO Box 61911, Dallas, Texas 75261.
A protocol was needed which could be used by any group to assess the medical content and functional usefulness of any group of algorithms. The principal investigator recognized that many of the more significant problems in the use of algorithms can be discovered through peer review and the analysis of field test data; therefore, a pilot study was designed to permit a subsequent intensive and extensive evaluation of the safety and usefulness of various algorithms. Prior to this project, algorithms developed for various purposes had not been systematically assessed for their applicability to emergency care settings. ACEP received a grant from NCHSR in July 1978 to develop and demonstrate a method of reviewing and evaluating emergency care algorithms.

ACEP began the algorithm project with enthusiasm, only to be met by serious resistance from emergency care providers regarding the use of algorithms. Physicians may be reluctant to use algorithms because they believe that their education and extensive clinical experience qualify them to make the subtle discriminations that algorithms omit. Many physicians claim that medical decision making is an intuitive process, not amenable to display as a flowchart. Algorithms have been said to encourage "cookbook" medicine, threatening serious compromises in the quality of health care. Physicians also think that another professional's algorithm is impractical for their own emergency department environment, noting that algorithms often appear to be directed toward a particular type of emergency department staff and patient population.

Proponents of algorithms counter these arguments by emphasizing the contributions of algorithms to improved teaching, record keeping, and time and resource utilization, and by claiming that algorithms promote compliance with standards of care and better patient outcomes. They argue that algorithms appear to have a direct and beneficial influence on patient care, as health care personnel improve their recognition of signs and symptoms. It is asserted by Looney et al¹ that of all the clinical specialties, emergency medicine stands to gain most from the use of algorithms. No other specialty is confronted with so many diverse clinical problems, and no other specialist has so little time to prepare for and treat
the urgent patient. For the proponents of clinical algorithms, an algorithm displays clearly the logic and sequence of events in the process of patient care. The author of an algorithm must strike a balance between excessive detail, which cannot be simply and logically sequenced, and elements of care which are too generalized to be meaningful.

The acceptability and use of algorithms appear to depend on the extent to which they include:
1. Accurate medical logic and acceptable medical procedures;
2. Methods which are generally employed in other emergency departments;
3. Evidence of improvements in the quality of care provided;
4. Ways to ensure acceptability and use by qualified medical staff; and
5. Evidence of increased efficiency and cost-effectiveness in terms of clinical results.
OBJECTIVES

The proposed objectives of this study, Evaluation of an EMS Algorithm System, were:
1. To develop a process by which existing algorithms may be examined in terms of their medical content and functional usefulness;
2. To field test this review method using a select sample of algorithms; and
3. To make recommendations to NCHSR regarding the feasibility, acceptability, and effectiveness of the methodology.

RESEARCH DESIGN

The system for developing and testing a review process was designed in a logical and orderly manner. Federally funded projects at various locations throughout the country have produced large numbers of algorithms, some of which are relevant to emergency medicine. It was this set of algorithms, available from each project director, from which ACEP drew most of its sample. The study involved the collection of these algorithms and the field testing of some.

A specially created Algorithm Review Subcommittee categorized the algorithms according to medical topic and selected those which were applicable to emergency medicine. The Subcommittee consisted of the principal investigator and selected members of ACEP's
Educational Materials Committee, Research Committee, and Certification Task Force, as well as a representative from the Emergency Department Nurses Association (EDNA).

The screening of those algorithms determined to be relevant to emergency medicine was accomplished by selected members of the ACEP Certification Task Force, a committee of 27 physicians who developed a certification examination in emergency medicine. Each reviewer was asked to examine those algorithms which pertained to the clinical field with which he was most familiar. Each was asked to judge the medical content of the algorithm, with specific attention to types and dosages of medications, and the appropriateness of individuals at the designated level of training performing prescribed procedures. In order to select algorithms for field testing, the Subcommittee decided that only those algorithms judged acceptable by 80% or more of the assigned reviewers would be eligible for field testing.

Comments on the algorithms also were sought from professional associations involved in emergency care. These organizations were asked to review the algorithms with specific attention to the question of which EMS personnel should perform the specific functions listed in the algorithm.

Twenty-six algorithms were identified by this screening process as both relevant to emergency medicine and acceptable for the field test of the evaluation methodology. Testing took place only at selected sites of emergency medicine residencies endorsed by the Liaison Residency Endorsement Committee for Emergency Medicine.

The Algorithm Review Subcommittee developed data collection instruments and instructions to be used in the testing process. These instruments included both a patient encounter form on which the provider reported each use of the algorithm, and a provider reaction form on which the provider summarized his assessment of each algorithm. Thus at each site there was one provider reaction form per algorithm-using provider and one patient encounter form per algorithm-using patient.

Each test site was permitted to select two algorithms from a set of five sent for review. If a site refused to test a particular algorithm, the reason for rejection was requested. Upon the selection of at least two
algorithms per site, a quantity of encounter forms and copies of the algorithms to be field tested were sent to each site with instructions. Each test site was responsible for training its staff regarding the study procedures. Providers at each site were expected to complete a patient encounter form for each patient to whom the algorithm applied, for a minimum of 12 patient encounters per algorithm. It appeared that it would require varying amounts of time in different sites to collect the 12 patient exposures to each algorithm, and therefore the collection of data on all algorithms was continued for approximately four months. Providers were permitted to deviate from the algorithm as they deemed appropriate, but they were expected to note and explain these deviations on each patient encounter form. When 12 patient encounter forms were completed for each provider, the provider
submitted a provider reaction form as well, which included the following information:
1. A summary of the acceptability of the algorithms field tested;
2. The usefulness of the algorithms;
3. Types and significance of deviations from the algorithms in practice; and
4. Recommendations for the future use of the algorithms field tested.
CONCLUSION

Of the 26 emergency medicine residency programs which offered to participate in the field test, only eight sites completed the test of the algorithms. Lack of time, an overriding dissatisfaction with the algorithms sent for review, and a perceived need for approval of the algorithm by each institution's Human Experimentation Committee were the
primary reasons residencies did not participate.

From the limited data gathered in this study, it appears that:
1. Algorithms might indeed be useful in training, education, and retrospective audit; and
2. The widespread use of algorithms is restricted by the perception that most algorithms are highly personalized.

The ACEP algorithm project was a pilot study designed to identify a method for evaluating algorithms by peer review and field test. The intent of the pilot project was to make recommendations which would permit a more extensive evaluation of the logic and usefulness of various algorithms.
REFERENCES

1. Looney GL, Roy A, Anderson GV: Research algorithms for emergency medicine. Ann Emerg Med 9:12-17, 1980.