Examinations and Ophthalmic Education

Editorial

Denis M. O'Day, MD - Nashville, Tennessee

In the 1960s, Dr. Melvin Rubin published a series of papers in which he proposed a radical approach to ophthalmic education.1,2 Until that time, the education of residents had largely been the responsibility of individual programs. His plan, known as the Ophthalmic Knowledge Assessment Program (OKAP), was designed to be national in scope, comprehensive in content, and constructed in such a way that both residents and their training programs could assess the effectiveness of learning. Very quickly, ophthalmology training programs across the country began to participate, as did some programs from Canada. The program continues today under the administrative control of the American Academy of Ophthalmology, with the collaboration of the American Board of Ophthalmology, and it is regarded as a normal part of resident training in the United States. As part of the OKAP structure, an examination is taken in each year of training. Although results for individual residents are kept anonymous and are not published, residents and their program directors can compare their performance with that of residents at the same level of training across the country, and programs can likewise compare the performance of their residents with that of other programs.3

Examinations are deeply woven into the fabric of medical education. Beginning with the United States Medical Licensing Examination4 and continuing through medical school, students are confronted with examinations that must be successfully negotiated. Upon graduation and entry into specialty education, the final step is success in the board certification examinations. During this period, examinations range in perceived difficulty from fairly straightforward testing of knowledge and skills to what candidates have termed “high-stakes” because of the impact failure has on their careers.

In ophthalmology, the collaborative relationship between the Board, through its development of the Written Qualifying Examination (WQE), and those preparing the annual examination component of the OKAP has encouraged the view among some that these examinations are equivalent. In this issue, Lee et al5 examine this relationship further, looking at the predictive value of OKAP performance for subsequent success in passing the WQE. While it is true that both organizations work together on the development and accumulation of libraries of test items and employ the same psychometric principles in examination construction, each examination is assembled independently of the other and to somewhat different psychometric standards, with those governing the WQE being, not unexpectedly, the more rigorous. Their analysis raises questions about how performance on an OKAP examination might be interpreted by programs and residents as predictive of performance on a subsequent WQE. At first glance, this metric is encouraging evidence of the effectiveness of the underlying educational programs and, at the same time, supports one of the original purposes of the OKAP: to foster successful preparation for the WQE.

However, there are concerns that suggest caution in adopting their conclusions too readily. Attempting to link the two violates a fundamental principle of examination design: using an examination in ways other than those intended by its original designers. In this instance, the introduction of a new retrospective measure to analyze data from the OKAP sets up a conflict between the primary goal of fostering the cognitive formation of residents who are preparing to participate in a national examination and the performance outcome, which lies in the province of the WQE and is of principal interest to the public.2 Seeking to excel in the OKAP is surely a worthy goal for residents in training, but the serious question is whether the metric of a positive predictive value of OKAP performance is the way to demonstrate it. Setting a passing standard different from that of the WQE raises fundamental questions about the two examinations and their outcomes.

Although on the surface the two examinations may appear to be the same, the only unchallengeable similarity is that test items in both examinations address aspects of ophthalmic knowledge as broadly defined. The OKAP is a formative assessment of cognitive competency, designed to enhance the subsequent performance of learners during a continuing educational activity, whereas the WQE is summative, meaning that it comes at the conclusion of a period of education and is designed to measure what has been effectively taught and learned. This difference is worthy of note because of ramifications that may affect the entire educational and certification enterprise.

Lee et al5 have exposed a tension between how the Board, on the one hand, and residents and their training programs, on the other, view the WQE and the OKAP. This tension is manifest in the approach of residents and their programs to the annual OKAP examinations and in their response to the “results.” The urgent desire to “pass” the OKAP and the introduction of strategies to make this possible subtly undermine the concepts on which the OKAP is built.6 The OKAP rests balanced between its educational and assessment roles during training. The WQE is a statistically valid sample of the knowledge deemed by the Board to be important to the quality of care offered by board-certified ophthalmologists. What seems clear is that in some instances residents and their programs are treating the OKAP as a kind of pretest for the WQE, with programs going so far as to provide review and “cram” sessions in material presumed to be on the WQE. That these sessions tend to occur close to the administration of the OKAP, and that residents appear to know their true purpose, reinforces the conclusion that both activities are directed primarily at enhancing a candidate's performance on the OKAP examination as an end in itself, to the possible detriment of the broader educational objective of serving the public through the cognitive development of trainees.

We do not know whether these practices were followed by any of the programs in the study, but the end result is likely to be partial or complete nullification of the valuable formative data sought through the examination, while at the same time potentially providing an inaccurate picture of the performance of the training program and its residents.

The Board has been a practical supporter of the OKAP since its inception because of its value as an educational tool in meeting the residency training requirement for board certification. Board certification was originally conceived as a means of improving the quality of eye care, and this remains its principal responsibility today.7 The WQE is thus best viewed as part of an overall quality assurance strategy in which the quality and relevance of the examination are of prime concern, and the ability of candidates to pass it speaks to the effectiveness, or otherwise, of the educational activities undertaken by candidates seeking recognition through board certification. It is a high-stakes activity, but not only for candidates. The public also relies on measures such as success in the WQE as evidence of meeting the standards set by the Board.7

That being the case, some important differences between the two examinations with regard to the definition of content boundaries and the construction of test items will affect the scores achieved by candidates. In the OKAP, to give one example, items are included that the Board would more likely address in the oral examination or, in the case of the basic sciences, not at all. Setting of the passing score is also a crucial decision. The Board has put in place an objective strategy that employs several well-tried measures to reach this determination, including assessment of content by expert judges, pretesting of all scored items, test item difficulty, and the ability of items to discriminate accurately between well-performing and poorly performing candidates.
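For readers less familiar with the psychometric terms just mentioned, the short sketch below is purely illustrative; it is not the Board's procedure, and the data are invented. It shows, in Python, two classical item statistics of the kind referred to above: item difficulty as the proportion of candidates answering an item correctly, and item discrimination as the point-biserial correlation between success on the item and the total test score.

    # Illustrative only: classical test theory item statistics
    # (item difficulty and item discrimination), with invented data.
    from typing import List

    def item_difficulty(item_scores: List[int]) -> float:
        """Proportion of candidates who answered the item correctly (0/1 scores)."""
        return sum(item_scores) / len(item_scores)

    def point_biserial(item_scores: List[int], total_scores: List[float]) -> float:
        """Point-biserial correlation between an item (0/1) and total test score.
        Higher values indicate the item separates stronger from weaker candidates."""
        n = len(item_scores)
        mean_total = sum(total_scores) / n
        sd_total = (sum((t - mean_total) ** 2 for t in total_scores) / n) ** 0.5
        correct = [t for i, t in zip(item_scores, total_scores) if i == 1]
        incorrect = [t for i, t in zip(item_scores, total_scores) if i == 0]
        if not correct or not incorrect or sd_total == 0:
            return 0.0
        p = len(correct) / n
        q = 1 - p
        mean_correct = sum(correct) / len(correct)
        mean_incorrect = sum(incorrect) / len(incorrect)
        return (mean_correct - mean_incorrect) / sd_total * (p * q) ** 0.5

    # Example: five candidates, one pretested item
    item = [1, 0, 1, 1, 0]                   # 1 = answered the item correctly
    totals = [82.0, 55.0, 78.0, 90.0, 60.0]  # total test scores
    print(item_difficulty(item))                   # 0.6 -> a moderately difficult item
    print(round(point_biserial(item, totals), 2))  # about 0.95 -> item discriminates well

In this simplified framework, an item answered correctly by roughly half of candidates and showing a clearly positive discrimination index is generally the most informative for separating stronger from weaker candidates.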

As is generally assumed, the WQE is created anew each year, for reasons that are fairly obvious. From an examination security perspective, the chance of cheating is reduced. However, the Board has an obligation to both the public and candidates to produce an examination each year that is equivalent across years in difficulty, discrimination, and content. Because of this standardization, candidate performance in different years can be judged by the same criteria and the public can be assured of consistent standards. In alignment with its formative intent, the OKAP does not calculate a pass score. Properly interpreted, the OKAP already provides residents and their programs with considerable data on which to base progressive educational improvements. It would be prudent to move slowly on the implications of this study.

References

1. Rubin ML. The Ophthalmic Knowledge Assessment Program (OKAP): a personal view. Surv Ophthalmol 1988;32:282–7.
2. Rubin ML. The second annual residency in training examination. Am J Ophthalmol 1970;69:1–8.
3. Liesegang TJ. New directions for the Ophthalmic Knowledge Assessment Program (OKAP) examination. Ophthalmology 1994;101:194–8.
4. McCollister RJ. The use of Part I National Board scores in the selection of residents in ophthalmology and otolaryngology. JAMA 1988;259:240–2.
5. Lee AG, Oetting TA, Bloomquist P, et al. A multicenter analysis of Ophthalmic Knowledge Assessment Program (OKAP) and board Written Exam (WQE) performance. Ophthalmology 2012;119:1949–53.
6. Pearls for the OKAPs: Tips from Your YO Info Editorial Board. http://www.aao.org/yo/newsletter/201202/article04.cfm. Accessed June 6, 2012.
7. O’Day DM, Ladden MR. The influence of Derrick T. Vail Sr, MD, and Edward M. Jackson, MD, on the creation of the American Board of Ophthalmology and the Specialist Board System in the United States. Arch Ophthalmol 2012;130:224–32.