Journal of Manipulative and Physiological Therapeutics Volume 22 • Number 2 • February 1999 © 1999 JMPT
Developing a Clinical Competency Examination in Radiology: Part II—Test Results Dennis M. Marchiori, DC,a Charles N.R. Henderson, DC, PhD,b and Tawnia L. Adams, DC c
ABSTRACT
Background: This is the second of two articles introducing a clinical competency examination in radiology. The first article described the structure, administration, and postexamination student comments for two versions of the radiology competency examination. This article reports the results obtained from these two administrations of the examinations.
Objective: To measure and identify potential outcome predictors of student aptitude in clinical film interpretation.
Design: Experimental.
Methods: An examination was developed to simulate the radiologic interpretive skills needed in clinical chiropractic practice. Two versions of the examination were given to a class of 210 ninth-trimester students in a 10-trimester chiropractic program. Linear regression and bivariate correlations were performed on possible predictors of student success and test scores on the version 2 examination.
Results: On version 1 of the examination, students were able to identify an average of 59.6% of the normal cases as normal and 51.6% of abnormal cases as abnormal. On version 2, 55.6% of the normal cases were recognized as normal and 58.2% of abnormal cases as abnormal. On both versions, students were less successful at correctly categorizing, managing, or naming pathologic conditions they found. Of the predictors evaluated, only the students' grades in the third radiology course (tumors, arthritides, and extremity trauma) and the scores on the diagnostic imaging section of National Boards part II were significant predictors.
Discussion: Our results should cause some concern for educators who use content-based radiology curricula. Students demonstrated poor abilities to recognize, categorize, manage, and identify common radiographic pathologic conditions. Educators cannot rely on National Board scores and course grades to determine student clinical competency. More radiology clinical competency exercises that emphasize film interpretation need to be incorporated into content-based curricula. (J Manipulative Physiol Ther 1999;22:63-74)
Key Indexing Terms: Radiology; Medical Education; Chiropractic; Educational Testing

a Palmer Center for Chiropractic Research and Palmer Chiropractic Clinics, Palmer College of Chiropractic, Davenport, Iowa. b Palmer Center for Chiropractic Research, Davenport, Iowa. c Private practice of radiology, Phoenix, Arizona. Submit reprint requests to: Dennis M. Marchiori, DC, Palmer Center for Chiropractic Research, 741 Brady St, Davenport, IA 52803. Paper submitted March 11, 1998; in revised form April 3, 1998.

INTRODUCTION
This is the second of two articles introducing a clinical competency examination in radiology. The first article described the evolution and structure of the examination. This second article presents the results of two administrations of the examination and evaluates possible outcome predictors.
If radiology educators were asked to identify their most knowledgeable students, they would probably consult their grade books and choose students who have performed well on typical course examinations. If an educator was particularly enthusiastic, scores on widely administered and standardized examinations such as those given by the National Board of Chiropractic Examiners might be consulted. However, if these same educators were asked to identify which students are more likely to recognize pathologic conditions present on radiographs, the task would be difficult. The ability for students to recite factual knowledge describes knowledge competency and is purportedly measured by course and national board examinations. The ability for students to discern normal from abnormal radiographs and correctly manage the patient's condition on the basis of the radiographic findings describes clinical competency. Clinical (performance) competency is a psychologic construct that evaluates a student's ability to integrate cognitive, affective, and to some extent psychomotor skills.1 At present, no widely applied measure of radiology clinical competency exists. Part IV of the National Chiropractic Board Examination attempts to address clinical competency. However, because it does not incorporate normal studies or chiropractic patient management, its structure does not parallel the sequence of image interpretation that occurs in clinical practice. One cannot determine from the part IV scores whether students can discern normal radiographs from abnormal radiographs or whether students are adjusting patients they should not or not adjusting patients they should on the basis of radiographic findings. One could argue that students who perform well on course and National Board examinations will also be proficient film interpreters. But is this a reasonable assumption? Are available measures of knowledge competency adequate proxies for measures of clinical competency? The central goals of this article address two questions. First, how proficient are students in recognizing normal and
Table 1. Results of version 1 examination, listing test cases and percentage of students who correctly located, categorized, managed, or identified case (n = 116; 55.2% of 210 students in the class)

Test case                                   Normal vs abnormal*   Categorize   Manage   Identify
Normals
  Normal cervical                           77
  Normal cervical                           66
  Normal thoracic                           60
  Normal lumbar                             59
  Normal chest                              36
  Average for normals                       59.6
Pathologic conditions
  Calcified gallstones                      94                    35           57       22
  Calcified uterine leiomyoma               91                    10           10       20
  C2/C3 congenital blocked segmentation     88                    61           41       39
  Os odontoideum                            72                    50           62       33
  Spinous process fractures, C5-6           72                    71           67       65
  Compression fracture of C6                62                    53           37       44
  Lytic metastasis of L3 and pelvis         57                    51           50       36
  C2 hangman's fracture                     52                    43           41       38
  Enlarged hilum on chest film              50                    17           24       21
  Legg-Calvé-Perthes                        48                    19           20       28
  Paget's disease                           46                    78           15       12
  Diffuse idiopathic skeletal hyperostosis  44                    23           35       11
  Aneurysmal bone cyst                      41                    16           33       18
  Abdominal aorta aneurysm                  41                    28           28       22
  Spina bifida, S1                          38                    26           15       22
  Lumbar hemangioma                         37                    14            8       25
  Porcelain gallbladder                     35                    11           18       11
  Fibrous dysplasia                         28                     4           18        3
  L2 transverse process fracture            27                    19           16       16
  Ankylosing spondylitis                    10                     3            5        4
  Average for pathologic conditions         51.6                  31.6         30.0     24.5
*Students were asked to conclude whether the film was normal or abnormal. To receive credit for an abnormal designation, students had to correctly identify the quadrant of the film in which the abnormality was present. This provision gave the authors reasonable assurance the students were looking at the proper abnormality and not simply guessing a designation of abnormal. It was not possible to receive credit for categorize, manage, or identify unless the abnormality was correctly located.
abnormal radiographs? In assessment terms, how well do students function when tested in a context that mimics a “real” clinical situation?2 This question may also be asked in relation to specific components of film interpretation; do students find it more difficult to categorize, manage, or identify (name) abnormalities found on radiographs? The second central goal of this article attempts to answer the question: what factors predict student success in recognizing normal and abnormal radiographic studies? The reader may feel compelled to ask: are grades from radiology courses or scores from National Board examinations adequate predictors of success on clinical competency examinations? Is the student who makes extensive use of library resources or external radiology seminars more competent in clinical radiology? Similarly, is attendance at clinical film review sessions and radiology lectures associated with radiographic clinical competency? The first article in this 2-part series described the structure, administration, and postexamination student comments
for 2 versions of the radiology competency examination. This article reports the results obtained from these 2 administrations of the examinations. We examine student performance with regard to localizing and categorizing film pathologic conditions and the appropriateness of their clinical decisions when considering film pathologic conditions. In addition, we evaluate the predictive strength of classroom grades, self-reported use of library resources, National Board examinations, and external radiology seminars with regard to performance on the administered radiology competency examination.
METHODS
Two hundred ten chiropractic students in the middle of their ninth trimester of a 10-trimester program were encouraged to take the clinical radiology competency examination. Participation was not mandatory; however, students were told the exercise would likely provide them with valuable experience for evaluating their interpretative skills. Members of this class took 2 different versions of the examination with a 1-month separation between examinations. The 2 versions followed identical formats with 4 exceptions: (1) In version 1, students were allowed 60 seconds at each station to review a film series and answer 4 standard questions, whereas they were allowed 90 seconds for the same task in version 2. (2) The first version consisted of 5 normal stations and 20 abnormal cases, and the second version consisted of 7 normal stations and 18 abnormal cases. (3) A new "user friendly" answer sheet was used in version 2. (4) Last, different film series were used in the 2 examination versions.
Both versions of the clinical radiology competency examination used 25 individual viewbox stations. Twin examination venues were constructed to accommodate 50 students at one time. At each station a complete series of chest, cervical, thoracic, or lumbar spine radiographs was displayed. For each radiographic series, a transparent plastic sheet was placed over one film, producing a localization reference film. Students were told that some of the cases were normal, but the percentage of normal cases was not revealed. Students were not permitted to return to a station for a second viewing. The selected pathologic conditions varied in severity and degree of difficulty. Abnormal cases were consistent with published data on pathologic conditions commonly seen in chiropractic practice.3-5 Students reviewed the film series and answered 4 standard questions at each station as follows:
1. Review the films for abnormality. Please indicate all regions in which the reference radiograph's most severe finding is located.
2. Which of the following choices best categorizes the radiographic findings (no abnormal findings, trauma, malignant tumor, benign tumor, arthritide, congenital/normal variant, other)?
3. You are about to adjust this patient's spine with a high-force technique (Gonstead, diversified, etc) in the region of the spine that is depicted on the radiographs. Which
Table 2. Results of version 2 examination, listing test cases and percentage of students who correctly located, categorized, managed, or identified case (n = 181; 86.2% of 210 students in the class)

Test case                                   Normal vs abnormal*   Categorize   Manage   Identify   Categorize and manage†
Normals
  Normal cervical                           69
  Normal thoracic                           67
  Normal cervical                           58
  Normal cervical                           56
  Normal lumbar                             55
  Normal chest                              48
  Normal chest                              36
  Average for normals                       55.6
Pathologic conditions
  Ivory vertebra (L3)                       92                    59           64       47         51
  Butterfly vertebra (T8)                   92                    76           65       69         62
  Osteochondroma                            88                    60           50        6         38
  Diffuse idiopathic skeletal hyperostosis  85                    77           60       55         56
  Bone island                               75                    48           46       14         39
  Ankylosing spondylitis                    75                    64           56       56         46
  Type II odontoid fracture                 73                    62           65       53         57
  Missing pedicle (T12)                     65                    27           45       23         24
  Mediastinal lymphoma                      62                    31           40       24         23
  Paget's disease (pelvis)                  62                     8           49       24          6
  Legg-Calvé-Perthes                        52                    22           48       23         21
  Lytic L5 spondylolisthesis                50                    40           15       15         12
  Degenerative C5 spondylolisthesis         46                    22           13        7          8
  Pancoast tumor                            45                    41           36       34         34
  Chondrosarcoma (pelvis)                   35                    22           24       11         20
  Femoral neck stress fracture              20                    18           17       17         15
  Hiatal hernia                             19                    12            5        3          5
  Slipped capital epiphysis                 13                    12           12        8         10
  Average for pathologic conditions         58.2                  38.9         39.4     27.2       29.3
*Students were asked to conclude whether the film was normal or abnormal. To receive credit for an abnormal designation, students had to correctly identify the quadrant in which the film abnormality was present. This provision gave the authors reasonable assurance that the students were looking at the proper abnormality and not simply guessing a designation of abnormal. It was not possible to receive credit for categorize, manage, or identify unless the abnormality was correctly located. †Percentages of students who both categorized and managed the case correctly.
one of the following management plans is most appropriate given the radiographic findings?
   a. No abnormal findings, proceed with high-force spinal adjustment.
   b. Abnormal findings of no/limited clinical significance, proceed with high-force adjustment.
   c. Abnormal findings of clinical significance, refer patient for consultation or further studies (laboratory and/or imaging). However, findings do not preclude high-force adjustment to any vertebral segment in the region of the spine depicted on the radiographs.
   d. Abnormal findings of great clinical significance, refer patient for consultation or further studies (laboratory and/or imaging). Do not perform high-force spinal adjustment to the region of the spine depicted on the radiographs.
4. If abnormal finding(s) are noted, what is the name of the condition or disease that they represent?
An informal debriefing session followed examination version 1. A survey was distributed to all participants after examination version 2. Both the informal debriefing session and the survey were administered to obtain an evaluation of the examination format and process. In addition, the survey gathered information to evaluate possible predictors of clinical radiology competency by asking students to indicate which radiology resources were most helpful to their learning.
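As the table footnotes describe, credit for the categorize, manage, and identify questions was contingent on first localizing the abnormality to the correct film quadrant. That dependency can be summarized as a simple gating rule; the sketch below is illustrative only, and its data structures and names are assumptions rather than part of the original grading materials.

```python
# Minimal sketch of the scoring dependency described in the table footnotes:
# categorize, manage, and identify earn credit only when the abnormality was
# localized to the correct quadrant. All names here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StationKey:
    is_abnormal: bool          # whether the case is actually abnormal
    quadrant: Optional[str]    # correct quadrant, e.g. "upper-left"; None for normal cases
    category: str              # e.g. "trauma", "malignant tumor", "no abnormal findings"
    management: str            # correct management option, "a" through "d"
    diagnosis: str             # accepted condition name

@dataclass
class Response:
    called_abnormal: bool
    quadrant: Optional[str]
    category: str
    management: str
    diagnosis: str

def score_station(key: StationKey, resp: Response) -> dict:
    """Return 0/1 credit for the four questions at one viewbox station."""
    if not key.is_abnormal:
        # For a normal case, credit hinges on calling the film normal;
        # the remaining questions are not scored.
        return {"normal_vs_abnormal": int(not resp.called_abnormal),
                "categorize": 0, "manage": 0, "identify": 0}

    located = resp.called_abnormal and resp.quadrant == key.quadrant
    return {
        "normal_vs_abnormal": int(located),
        # The remaining questions are gated on correct localization.
        "categorize": int(located and resp.category == key.category),
        "manage": int(located and resp.management == key.management),
        "identify": int(located and resp.diagnosis == key.diagnosis),
    }
```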
Student comments after examination version 1 led the authors to increase the station review time from 60 seconds to 90 seconds and to adopt a "user friendly" answer sheet. Descriptive statistics were calculated for both the first and second versions of the clinical radiology competency examination (Tables 1 and 2). The properties of the version 2 examination items were studied by computing point-biserial correlations between the responses to the questions at each film station and the total score for all 25 film stations. Likewise, internal consistency of the version 2 examination was measured by computing Kuder-Richardson formula 20 (KR-20) coefficients for each of the 4 questions and for the total score on the version 2 examination. Correlations and linear regression analysis were performed with the data from the version 2 examination. Scores from version 2 were chosen for analysis because they provided a larger sample size, and the authors believed the longer time per viewbox station afforded in version 2 may have provided a more accurate measure of students' abilities. In particular, students' responses to the first question at each station (ability to judge films as normal or abnormal) were compared with students' scores on part III and selected tests on part II of the National Board examinations.
Fig 1. Hemangioma. Anteroposterior (A) and lateral (B) lumbar projections demonstrating a coarsened appearance of L4 vertebral body. Student scoring (Table 1): 37% found lesion, 14% categorized as tumor, 8% did not refer patient, and 25% identified lesion as a hemangioma.
Table 3. Mean and SD of grades in radiology classes (n = 175)

Course                                           Mean*   SD
Course in tumors, arthritides, skeletal trauma   3.01    0.74
Course in normal anatomy, spinal trauma          3.42    0.62
Course in radiographic physics                   3.10    0.76
Course in radiographic positioning               3.12    0.67
Course in chest and abdomen                      3.24    0.67
*4.0 is equal to a letter grade of A; 3.0, B; and 2.0, C.
Students' responses to the first question were also compared with their scores in courses in the radiology curriculum and with the perceived helpfulness of available radiology learning resources as reported on the surveys. The result of the first question of the competency examination was chosen as the dependent variable because the authors believed it represented the purest form of clinical competency in radiology, the ability to discern normal from abnormal. Both Spearman's and Pearson's correlation coefficients were used. The linear regression analysis was done with a forward stepwise method (entry criteria were set at P <.05 and removal criteria at P <.10). Probability plots and regression diagnostics were used to assess normality and ensure the model properly fit the data. All descriptive and inferential statistics were computed with SPSS for Windows, version 7.0 software.
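To make the item analysis described above concrete, the following sketch computes point-biserial item-total correlations and a KR-20 internal-consistency coefficient from a matrix of dichotomously scored responses. This is not the authors' SPSS procedure; the simulated data and all variable names are assumptions used only for illustration.

```python
# Sketch of the item analysis described in METHODS: point-biserial item-total
# correlations and KR-20 internal consistency for dichotomously scored items.
# Toy data and names are illustrative; the original analysis used SPSS 7.0.

import numpy as np

def point_biserial(item: np.ndarray, total: np.ndarray) -> float:
    """Point-biserial correlation: Pearson r between a 0/1 item and the total score."""
    return float(np.corrcoef(item, total)[0, 1])

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson formula 20 for a students x items matrix of 0/1 scores."""
    k = items.shape[1]                         # number of items (e.g., 25 stations)
    p = items.mean(axis=0)                     # proportion answering each item correctly
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 0/1 responses: 181 students x 25 stations, difficulty varying by station.
    difficulty = rng.uniform(0.2, 0.8, size=25)
    responses = (rng.random((181, 25)) < difficulty).astype(int)
    totals = responses.sum(axis=1)

    r_pb = [point_biserial(responses[:, j], totals) for j in range(25)]
    print("point-biserial range:", round(min(r_pb), 2), "to", round(max(r_pb), 2))
    print("KR-20:", round(kr20(responses), 2))
```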
RESULTS
One hundred sixteen students (55.2%; 82 men and 34 women) took version 1 of the examination. Table 1 lists data from examination version 1. Version 1 consisted of 5 normal and 20 abnormal cases. No survey followed examination version 1; however, solicited comments suggested the time given at each station was inadequate. Other comments suggested the answer sheet was too complex and did not facilitate quick correlation between viewing the films and marking responses on the answer sheet. In this version of the clinical radiology competency examination, students were able to identify an average of 59.6% of the normal cases and 51.6% of abnormal cases. Students were less successful at correctly categorizing (31.6%), managing (30.0%), or naming (24.5%) pathologic conditions they found (Table 1). Table 2 lists results from examination version 2. One hundred eighty-one students (86.2%; 148 men and 43 women) participated in version 2. Students were able to identify an average of 55.6% of the normal cases as normal and 58.2% of abnormal cases as abnormal. As in examination version 1, students were less successful at correctly categorizing (38.9%), managing (39.4%), or naming (27.2%) pathologic conditions they found (Table 2). Selected test cases are presented in Figs 1 through 13. The point-biserial correlation coefficients were at least 0.20 for all 4 questions at every station. No case was excluded from subsequent analysis.
Table 4. Correlation coefficients of predictor variables to students' ability to judge radiographs as normal or abnormal

Predictor variables                                        r*
Rad 3; course in tumors, arthritides, extremity trauma     0.448†
Rad 2; course in normal anatomy, spinal trauma             0.379†
Rad 1; course in radiographic physics                      0.363†
National Board, part II—diagnostic imaging                 0.353†
National Board, part III                                   0.345†
National Board, part II—neurodiagnosis                     0.328†
Rad 4; course in radiographic positioning                  0.326†
Rad 5; course in chest and abdomen                         0.278†
National Board, part II—general diagnosis                  0.274†
Helpfulness of library resources                           0.051
Helpfulness of background/work experience                  0.051
Helpfulness of off-campus radiology seminars               0.033
Helpfulness of film review sessions                        0.003
Helpfulness of technique courses                          –0.025
Percent of radiology classes missed                       –0.038
Helpfulness of radiology courses                          –0.074
Frequency of attendance to film review sessions           –0.132
*Spearman correlation coefficients for all listings except National Board scores. Pearson correlation coefficients were used for National Board scores. †P <.01.
Table 5. Summary of models for forward stepwise regression analysis with radiology course scores and National Board scores as predictors of students' ability to discern normal from abnormal (n = 158)

Model   Predictors*                       R2      df    F      P value
1       Rad 3                             0.164   157   30.7   <.001
2       Rad 3, NB (diagnostic imaging)    0.188   156   18.0   <.001

*Rad 3 is the third of 5 courses in radiology and covers tumors, arthritides, and extremity trauma. NB (diagnostic imaging) is the National Board part II score of the diagnostic imaging section.
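For readers unfamiliar with the procedure summarized in Table 5, the sketch below outlines a forward stepwise selection of predictors by P value (entry at P < .05); the removal step at P < .10 is omitted for brevity. It is not the original SPSS analysis, and the predictor names and simulated data are placeholders only.

```python
# Forward stepwise OLS selection by predictor P value (entry P < .05),
# sketching the model-building strategy summarized in Table 5.
# Column names ("rad3", "nb_imaging", ...) are placeholders, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(y: pd.Series, X: pd.DataFrame, p_enter: float = 0.05):
    """Add, one at a time, the remaining predictor with the smallest P value below p_enter."""
    selected: list[str] = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        if not remaining:
            break
        pvals = {}
        for c in remaining:
            design = sm.add_constant(X[selected + [c]])
            pvals[c] = sm.OLS(y, design).fit().pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
    final = sm.OLS(y, sm.add_constant(X[selected])).fit()
    return selected, final

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 158
    X = pd.DataFrame({
        "rad3": rng.normal(3.0, 0.7, n),        # grade in third radiology course
        "nb_imaging": rng.normal(540, 87, n),   # National Board part II, diagnostic imaging
        "nb_part3": rng.normal(470, 82, n),     # National Board part III
    })
    # Simulated outcome loosely driven by rad3 and nb_imaging.
    y = 0.5 * X["rad3"] + 0.002 * X["nb_imaging"] + rng.normal(0, 0.8, n)
    chosen, model = forward_stepwise(y, X)
    print("selected predictors:", chosen, "R^2 =", round(model.rsquared, 3))
```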
Fig 2. Congenital blocked segmentation. Lateral cervical projection demonstrating lack of intervertebral disk space, prominence of intervertebral foramina, and fusion of posterior elements at C2/C3 level. Student scoring (Table 1): 88% found the lesion, 61% categorized it correctly, 41% did not refer patient, and 39% identified appearance as a congenital blocked segment.
Internal consistency, measured by the KR-20, was adequate. The KR-20 coefficients were 0.51, 0.50, 0.50, and 0.61 for the 25 items of the first through fourth questions, respectively, and 0.87 for all 100 items of the examination. All but 5 (2.7%) of the 181 students who took the version 2 examination gave written consent to release their radiology course grades to the authors. The students who did not give consent averaged 6.5% lower scores on the version 2 examination than the 176 students who gave consent, indicating only a slight difference between the groups. The average GPA across all radiology courses was 3.2 (SD = 0.48), where 3.0 is a "B" grade and 4.0 is an "A" grade. Means of course grades are listed in Table 3. In addition, 90.1% of the students gave written consent to release their scores from the National Board examinations parts II and III. These students had an average score of 538.6 (SD = 86.6) on the diagnostic imaging section of part II. Their average general diagnosis score was 511.9 (SD = 99.4), and their average score on the neurodiagnosis section was 508.9 (SD = 89.9). Their average score on part III was 470.50 (SD = 81.6). National Board scores range from 200 to 800, with an average score near 500.
Because only 81 (44.7%) of the version 2 examination participants completed the survey, it was decided to use only the radiology course grades and National Board scores, and not the survey predictors, in the initial regression model (Table 4). This provided a sample size of 154 individuals who took the examination, gave written consent to obtain their records, and had available radiology course grades and National Boards part II and III scores. With the forward stepwise method, only two of the predictor variables contributed significantly to explaining the variation in the dependent variable: the students' grades in the third radiology course (tumors, arthritides, and extremity trauma) and the scores on the diagnostic imaging section of National Boards part II (Table 5). Of these two predictors, grades in the third radiology course explained more of the variation in the dependent variable (R2 = .164; 16.4%). Performance on National Board part II (diagnostic imaging) explained only an additional 2.4% of the variation (Table 5). Assessment of the probability plots indicated the regression model did not violate assumptions of normality. In addition, no patterns were observed in the residual plots, suggesting the model adequately fit the data. Unfortunately, the addition of other predictors obtained from the survey limited the available sample to the 81 students with complete data. However, the addition of these survey predictors did not meaningfully alter the regression results.

Fig 3. Enlarged hilum. Posteroanterior (A) and lateral (B) projections of chest demonstrating an enlarged right hilum (reading left side of photograph) highly suggestive of malignancy. Student scoring (Table 1): 50% found lesion, 17% categorized as tumor, 24% referred patient, and 21% identified appearance as an enlarged hilum (unilateral or bilateral).

Fig 4. Gallstones. Anteroposterior (A) and lateral (B) lumbar projections demonstrating multiple radiodense gallstones in right upper abdominal quadrant on anteroposterior projection (reading left) and anterior to spine on lateral projection. Student scoring (Table 1): 94% found lesion, 35% categorized them correctly, 57% did not refer patient, and 22% identified appearance as gallstones.
DISCUSSION
Film Interpretation
The 4 questions at each station of the examination quantify recognition, localization, categorization, and identification
Fig 5. Os odontoideum. Anteroposterior (A) and lateral (B) cervical projections demonstrating a defect at base of odontoid process. Smooth cortical outline, rounded configuration, and lack of soft tissue enlargement militate against acute odontoid fracture and lead to diagnosis of os odontoideum. Student scoring (Table 1): 72% found lesion, 50% categorized it as congenital, 62% referred patient, and 33% identified appearance as an os odontoideum.
of abnormality on plain film radiographs. These 4 questions attempt to measure clinical competency by assessing interpretative skills in question 1, the integration of cognitive and diagnostic decision-making abilities in questions 2 and 3, and knowledge in question 4. The structure of the examination permits comparison of students' ability to correctly categorize, manage, and identify pathologic conditions they correctly locate on radiographs. It appears from the examination scores that students demonstrated similar aptitude to categorize and manage pathologic conditions. To determine whether these are the same students who are correctly categorizing and managing pathologic conditions, an additional column was reported in Table 2. The additional column lists the percentage of students able to both categorize the pathologic condition and manage the patient. These data indicate that a large overlap exists between students' ability to categorize and manage pathologic conditions they may find, suggesting whatever abilities enable students to categorize pathologic conditions also assist them in selecting an appropriate management plan. However, the data do not indicate the direction of this relation, and the reverse may also be true. Careful inspection of the pathologic films listed in Tables 1 and 2 does not reveal a particular category of pathologic condition that students are more successful in locating, categorizing, managing, or identifying. As one might expect, students' success appears to reflect lesion contrast and the obvious incongruity with normal appearance rather than pathologic category (ie, tumor, fracture, arthritis). Unfortunately, students correctly identified only 24.5% of the pathologic conditions on version 1 and 29.3% on version 2. This aptitude is consistent with reported data on a similar group of chiropractic students who correctly identified 20.5% of pathologic conditions on lumbosacral films.6
Fig 6. C2 hangman’s fracture. Lateral projection of cervical spine demonstrating a radiolucent shadow consistent with fracture through articular pillar of C2. Body of C2 is separated from C2 lamina and spinous process. Student scoring (Table 1): 52% found the radiolucent line, 43% categorized it as trauma, 41% referred patient, and 38% identified appearance as a C2 fracture.
Fig 7. Chondrosarcoma. Anteroposterior projection of pelvis (A) and close-up (B) view of an aggressive destructive lesion of right lower margin (reading left side of photograph) of ilium. Student scoring (Table 2): 35% found lesion, 22% categorized as tumor, 24% referred patient, and 11% identified lesion as a chondrosarcoma.
Fig 8. Paget’s disease. Anteroposterior projection of pelvis (A) and close-up (B) view demonstrating bone enlargement, trabecular prominence, and cortical thickening of pelvis. Multiple radiopaque vascular clips are also noted, anterior to lumbar spine. Student scoring (Table 2): 62% found lesion, 8% categorized it correctly, 49% correctly managed case, and 24% identified lesion as Paget’s disease.
Patient Mismanagement
Review of the test cases listed in Tables 1 and 2 suggests potential instances of patient mismanagement if these decisions were made in clinical practice. For instance, in Table 1,
57% of the students found the lytic metastasis of L3 and the pelvis and 50% of the students correctly referred the patient for evaluation. This means 43% of students never saw the problem, and 7% found the problem but did not recognize it
Fig 9. Legg-Calvé-Perthes disease. Anteroposterior projection of pelvis reveals a flattened appearance of patient's right (reading left side) femoral capital epiphysis with open physis consistent with avascular necrosis during childhood. Student scoring (Table 2): 52% found lesion, 22% categorized it correctly, 48% referred patient for concurrent evaluation, and 23% identified lesion as avascular necrosis or Legg-Calvé-Perthes disease.
as something they should refer for further evaluation. Adding these figures together, this case was inappropriately managed by 50% of the students. Conversely, mistakes of overmanagement were also made. Again in Table 1, 88% of students successfully found the C2/C3 congenital blocked segment, but only 41% believed this was something they could solely manage; 47% of the students referred the patient for further evaluation. Unless some clinical suggestion of further malformation of the spine or genitourinary tract (which was not suggested at the viewbox station) exists, a congenital block segment is not a contraindication to spinal adjustments and needs no further workup. There is likely no consequence from the 12% of students who did not see the congenital block, but the 47% who sent the patient out added unneeded expense to the case and may have denied the patient the benefit of chiropractic care.
Radiology Curriculum
The test institution's (Palmer College of Chiropractic, Davenport, Iowa) radiology curriculum could be characterized as a traditional format that emphasizes didactic classroom presentation. It is content based, not problem based. Therefore it is not surprising that students overwhelmingly identified classroom lectures as the most helpful resource in the postexamination survey. This observation is consistent with data obtained in other content-based settings, which suggest that direct exposure to faculty members is perceived as the most important learning resource.7 The students who participated in the test represent a convenience sample, and generalizing results to students of another institution is problematic. In addition, because the test institution uses a content-based curriculum, the results of this report are not generalizable to substantially dissimilar curricula. Students who have gone through problem-based
Fig 10. Hiatal hernia. Posteroanterior (A) and lateral (B) chest projections demonstrating an air-filled radiodensity of middle mediastinum bordered inferiorly by an air-fluid level consistent with hiatal hernia. Student scoring (Table 2): 19% found abnormality, 12% categorized it correctly, 5% believed they could adjust patient before referring for concurrent evaluation, and 3% identified lesion as a hiatal hernia.
curricula may demonstrate different ability on the administered examinations. Although differences exist, it is our opinion that skills of clinical competency are not emphasized in chiropractic and other health care institutions. That is, we suspect the poor scores are not unique to the test institution or chiropractic. Systematic review of Palmer’s curriculum reveals that only a few structured clinical competency exercises are incorporated into the radiology curriculum. Small group film laboratory sessions with learning libraries are not in place. Course tests are typically administered with a multiple-choice format. When radiography slides are presented in a course test, little effort is made to mimic the circumstances of clinical practice. Normal films or pathologic conditions other than the topic currently studied are not incorporated in the examination. For example, a midterm test on bone tumors immediately follows the classroom presentation on bone tumors. Therefore students know that some type of
Fig 11. Diffuse idiopathic skeletal hyperostosis. Lateral projection of lumbar spine demonstrating prolific bone formation along anterior margin of lumbar spine consistent with diffuse idiopathic skeletal hyperostosis. Student scoring (Table 2): 85% labeled findings as abnormal, 77% categorized it as an arthritide or as congenital/other, 60% believed they could adjust patient, and 55% identified appearance as diffuse idiopathic skeletal hyperostosis.
Fig 12. Pancoast tumor. Anteroposterior projection of lower cervical spine demonstrates an increased radiodense appearance of patient’s left lung apex (reading right side). Finding is consistent with bronchogenic carcinoma. Rib destruction is not apparent. Student scoring (Table 2): 45% found radiodensity, 41% categorized it as a tumor, 36% referred patient for evaluation, and 34% identified as a malignancy in lung apex (Pancoast tumor).
tumor is the only possible correct choice for any given question, especially questions from slides. As a consequence, these course examinations are highly artificial and do not reflect circumstances encountered in clinical practice. In clinical practice most radiographic studies are essentially normal. Major pathologic conditions are not common. Moreover, when an abnormality is present, clinicians do not have the luxury of concentrating on only one category of pathologic condition. Certainly, the patient history may be highly suggestive, but it seldom enables a clinician to concentrate on a single category of pathologic condition, which is a luxury provided to students taking a course examination. It is not possible within the boundaries of this study to assign individual importance to these perceived drawbacks of a content-based curriculum. We speculate that students’ performance may reflect their unfamiliarity with this study’s clinically based examination format. They functioned poorly with the “real world” examination format that incorporated normal films and an equal likelihood of many categories of disease being represented. Also, the fixed time and lack of developed history contributed to the overall poor scores.
We also believed there were drawbacks to providing students with a detailed patient history at each viewbox station. First, the time students would need to read the history would add to an already lengthy total examination time. Second, providing a detailed history would shift the focus of the examination from students' radiographic interpretative abilities to skills of clinical pattern recognition. We wanted to concentrate on radiographic interpretative skills unconfounded by information gathered from an elaborate patient history. For these reasons, the accompanying patient history at each viewbox was kept to a short statement describing pain in the regions presented (ie, "back pain" for radiographs of the lumbar spine, "neck pain" for radiographs of the cervical spine). After exposure to a content-based curriculum, the students who took these examinations appeared knowledgeable in clinical radiology. The mean grade point average across all radiology courses was 3.2 (SD = 0.48), and the mean score on the diagnostic imaging section of the National Board part II examination was also above average (538.6, SD = 86.6). Therefore students who participated in both versions of this
Fig 13. Odontoid process fracture. Lateral (A) and anteroposterior open mouth (B) projections of cervical spine that show fracture at base of odontoid process (type II). There is slight left (reading right) lateral displacement of fractured odontoid process. Student scoring (Table 2): 73% found radiolucency at base of odontoid process, 62% categorized it as a fracture, 65% referred patient for evaluation, and 53% identified as a fractured odontoid process.
examination were, by traditional educational outcome measures, better than average chiropractic students. However, their average scores on these radiology competency examinations were disappointing. This becomes a serious issue when it is considered that the structure of these examinations more closely reflects clinical practice than currently available outcome measures.
The regression analysis indicates that the students’ third radiology course (tumors, arthritides, and skeletal trauma) is significantly related to the students’ ability to discern normal and abnormal radiographic studies. However, this course grade can only explain 16.4% (R2 = .164) of the variation in this ability (Table 5). The pathologic case categories on both versions of the radiology competency examinations did not
appear to favor 1 course over another. Perhaps the pedagogy used in the third course conveyed skills of clinical competency that the other radiology courses did not. Another explanation is that the third radiology course meets more hours per week than the other courses in the curriculum. Or perhaps the content of the third radiology course more closely reflects the types of pathologic conditions presented on the examinations. If this radiology competency examination is a valid measure of clinical competence, the results of this study do not give educators much to be excited about. Student clinical competency appears low, and no adequate predictors or proxies that could be monitored as curricular changes are undertaken were identified within the Palmer curriculum.
Additional Goals of a Radiology Curriculum
A successful education program should extend beyond taking and interpreting plain films. The teaching paradigm of radiology in the chiropractic profession should include more information concerning the appropriate clinical circumstances for specialized imaging (ie, computed tomography and magnetic resonance imaging).7,8 Information on the use of hospital services and imaging centers is important. Evidence in the educational literature suggests that the way a topic is taught will influence how it is used in a clinical setting.9,10

Examination Development
The radiology competency examination presented in this article is labor intensive. The complexity of setup and the need to hand grade the answer sheets required considerable time and effort (approximately 48 hours per examination). We are investigating the possibility of using computer workstations. However, it has not been determined whether images presented on computer monitors are comparable to those on radiographic film, making preliminary studies on this image medium requisite.

CONCLUSION
The results of this study lead to two conclusions. First, these pilot data suggest clinical competency is poor. Educators, especially those using content-based curricula, should identify and adopt programs that foster clinical competency in their radiology curricula. Second, one or more outcome measures of radiology clinical competency must be developed and regularly administered to provide an ongoing measure of clinical competency. The regression analysis indicates that no acceptable current predictor or proxy outcome exists at the test institution. As programs to enhance radiology clinical competency are adopted, an outcome measure must be in place to gauge any effect. The data collected in this report predate the availability of National Board Examination part IV scores. The effectiveness of that examination in predicting clinical competency is not known, or at least has not been reported in the literature.

ACKNOWLEDGMENTS
We thank Monica Smith, DC, PhD, for her helpful comments during the preparation of the manuscript.
REFERENCES
1. Girot EA. Assessment of competence in clinical practice: a review of the literature. Nurse Educ Today 1993;13:88-90.
2. Brenner P. Issues in competency-based testing. Nurs Outlook 1982;30:303-9.
3. Marchiori DM. A survey of radiographic impressions on a selected chiropractic patient population. J Manipulative Physiol Ther 1996;19:109-12.
4. Cooley J, Schultz G, Phillips R. What do chiropractors see on x-rays? In: Proceedings of the International Conference on Spinal Manipulation. Arlington (VA): Foundation for Chiropractic Education and Research; 1990. p. 24-6.
5. Hall T, Schultz G, Phillips R. Why do chiropractors order x-rays? In: Proceedings of the International Conference on Spinal Manipulation. Arlington (VA): Foundation for Chiropractic Education and Research; 1990. p. 21-3.
6. Taylor JAM, Clopton P, Bosch E, Miller KA, Marcelis S. Interpretation of abnormal lumbosacral spine radiographs. Spine 1995;20:1147-54.
7. Ott DJ, Meschan I, Skinner NS. Evaluation of medical student education in radiology. AJR Am J Roentgenol 1983;140:155-7.
8. Pope TL. Medical student education: a practical approach to radiology. Invest Radiol 1986;21:592-3.
9. Gonella JS, Goran MJ, Williamson JW, Cotsonas NJ. Evaluation of patient care: an approach. JAMA 1970;214:2040-3.
10. Schmidt HG. Problem-based learning: rationale and description. Med Educ 1983;17:11-6.