What is an hour-lecture worth?

The American Journal of Surgery 195 (2008) 379–381

The Midwest Surgical Association

What is an hour-lecture worth?

Donald N. Reed Jr, M.D.*, Travis A. Littman, M.D., Cheryl I. Anderson, R.N., M.S.N., George R. Dirani, Jeffrey M. Gauvin, M.D., Keith N. Apelgren, M.D., Carol A. Slomski, M.D.

Department of Surgery, Michigan State University College of Human Medicine, SPB #655, 1200 E. Michigan Ave, Lansing, MI 48912, USA

Manuscript received December 17, 2007; revised manuscript December 17, 2007

Abstract

Background: Although there are many ways to convey knowledge, attitudes, and techniques when teaching residents and students, the optimal method (lecture, online lecture, online tutorial, simulator practice, and so on) has yet to be determined.

Methods: This study was designed as a prospective analysis of change in resident behavior; the model chosen was resident compliance with alcohol screening during admissions to the trauma service. Baseline values were determined the month before the educational "intervention," a 1-hour lecture during Grand Rounds on the importance of screening for alcohol disuse syndromes. After the intervention, results were analyzed at 3 points in time: during the first month after the lecture and then at 3 and 12 months.

Results: Resident compliance with alcohol usage screening rose from 53% at baseline to 80% at 1 year.

Conclusions: This straightforward model of the utility of a lecture showed a significant change in resident behavior. © 2008 Elsevier Inc. All rights reserved.

Keywords: Resident education; Lectures; Didactic; Alcohol screening

Although there are many ways to convey knowledge, attitudes, and techniques when teaching residents and students, the optimal method (lecture, online lecture, online tutorial, simulator practice, and so on) has yet to be determined. It is well known that screening for alcohol use in trauma patients is suboptimal, despite a recent mandate to that effect from the American College of Surgeons (ACS) [1]. In an effort to improve screening at our ACS level I trauma center, we tested the effectiveness of a 1-hour didactic lecture given by a faculty member on that topic and used the percent of trauma patients queried for alcohol use by the surgical residents on admission as an outcome measure.

Methods

With the approval of the University's Biological Institutional Review Board, we reviewed admission History and Physical Examination forms used on trauma patients for the month of July 2005 to determine the residents' frequency of inquiring into the patients' history of alcohol use or disuse.

* Corresponding author. Tel.: +1-260-435-7380. E-mail address: [email protected]
0002-9610/08/$ – see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.amjsurg.2007.12.028

This was done by chart review, without the knowledge of the surgical residents (except for TAL, who is a coauthor on this study). In early August 2005, in the setting of Surgical Grand Rounds, the lead author provided a 1-hour standard lecture on the need for and benefits of alcohol screening in the trauma population. Special emphasis was placed on both the desire of the trauma service attendings to have the residents obtain this information at the time of admission and the new ACS Committee on Trauma's requirement for verified trauma centers to screen for alcohol disuse syndromes. The residents were not told that their compliance would be the subject of a study.

At 1 month after the lecture, or educational intervention, compliance was checked, and then again at 3 months. After 3 months, a reminder or "reinforcement" statement was made, again by the lead author and again during the protected educational time of Surgical Grand Rounds, stressing the importance of acquiring this information but omitting any reference to a study of resident behavior. Finally, this was repeated at 12 months, and the residents were then informed of the results. Per institutional review board instructions, residents received aggregate feedback, without reference to individual performance.

The percent compliance rate at each time point was calculated. Differences between the baseline value and each


Table 1
Percent of trauma patients who were asked a question about alcohol intake during their history and physical examination (number asked divided by total admissions)

Time postintervention                                 Asked about alcohol intake, n/N (%)
Baseline                                              36/68 (53)
1 month                                               29/46 (63)
3 months                                              39/59 (66)
1 year (all residents)                                32/43 (74)*
1 year (only residents who received intervention)     16/20 (80)*

* Significantly different from baseline (P < .05).
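As a rough check on Table 1, the comparisons can be sketched in code. The paper reports chi-square tests of each time point against baseline at P < .05 but gives neither the statistics themselves nor whether a continuity correction was applied; the following minimal Python sketch (not the authors' analysis code) uses the uncorrected Pearson chi-square on each 2x2 table, which reproduces the reported pattern of asterisks.

```python
def chi_square_2x2(asked1, total1, asked2, total2, yates=False):
    """Pearson chi-square statistic for a 2x2 table comparing two
    proportions (e.g., baseline vs. a later time point from Table 1).
    Set yates=True for the Yates continuity correction."""
    a, b = asked1, asked2                      # asked about alcohol
    c, d = total1 - asked1, total2 - asked2    # not asked
    n = a + b + c + d
    num = abs(a * d - b * c)
    if yates:
        num = max(num - n / 2, 0)              # continuity correction
    # Standard shortcut formula for a 2x2 table
    return num ** 2 * n / ((a + b) * (c + d) * (a + c) * (b + d))

CRITICAL_05 = 3.841  # chi-square critical value, df = 1, P = .05

baseline = (36, 68)
for label, asked, total in [("1 month", 29, 46),
                            ("3 months", 39, 59),
                            ("1 year (all residents)", 32, 43),
                            ("1 year (instructed only)", 16, 20)]:
    stat = chi_square_2x2(*baseline, asked, total)
    print(f"{label}: chi2 = {stat:.2f}, significant = {stat > CRITICAL_05}")
```

With these counts, only the two 1-year comparisons exceed the critical value, matching the asterisks in Table 1; with the Yates correction the instructed-only comparison falls just below it, which is why the uncorrected statistic seems the more likely choice here.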

future time point were analyzed using the chi-square test. Significance was assessed at P < .05.

Results

In this prospective study, 68 patients had charts available for review after admission to the trauma service at Sparrow Hospital in July 2005. The admission H&Ps were primarily completed by junior house staff, but we were prohibited by the University's institutional review board from tracking individual resident performance for fear of reprisal. Similar analysis was performed on trauma patients' H&Ps at the time points specified, although the number of charts available for review varied from month to month.

Performance clearly improved over time (Table 1). Analysis revealed that residents' inquiring about and recording of alcohol consumptive behaviors increased in an almost linear fashion. It is worth noting that although time "zero" was before the intervention, the only residents at 12 months who had not received instruction were the new interns. This may account for part of the increased compliance.

Comments

Although knowledge acquisition by medical trainees and its effect on performance have been extensively studied, the results are inconsistent. One of the first articles on this topic studied the effect of mandatory textbook reviews on surgical resident performance on the American Board of Surgery In-Service Training Examination (ABSITE). Not surprisingly, scores improved after the introduction of mandatory textbook reviews, as shown by our department over 2 decades ago [2]. A more recent article also found that lectures by faculty aimed at subjects covered on the ABSITE improved those scores [3]. However, other studies using ABSITE scores as an outcome measure show conflicting results.
Bull et al [4] studied a group of thoracic surgery residents and found that self-study resulted in superior performance on the ABSITE compared with standard didactic lectures by faculty, a finding not dissimilar to one from UCLA [5]. Although some authors have found similar results from programmed readings, at least a couple have suggested that resident attendance at faculty-conducted teaching conferences failed to show a correlation with ABSITE scores [6,7].

Data from other specialties are also contradictory. A study of family practice residents found that a 25-minute videotape lecture on the importance of prescribing fluoride to pediatric patients had no impact on prescribing frequency; however, when faculty monitored or reinforced that behavior, compliance increased to 91% [8]. Picciano et al [9] found that noon lectures to family practice residents, a common feature of those residencies, did not improve test scores. Another article in that field found that didactic lectures and noontime conferences on pneumococcal vaccination had no effect on the rate at which eligible patients were offered vaccinations; the authors concluded that lecture-based presentations had no significant effect on resident behavior [10].

Perhaps one reason for the conflicting results is the variable effectiveness of any particular lecturer. For example, Stern et al [11] found that medical students of attendings with higher-rated teaching skills had higher examination scores than students of lower-rated attendings. If one assumes that only superior speakers are used in producing computer-based lectures, this may explain why some studies have shown the superiority of computer-based training [12-16].

In summary, there are conflicting data regarding the effectiveness of faculty lectures on knowledge acquisition and on influencing behavior. Two years ago, our group published an article on how to assess various competencies required by the ACGME in training general surgery residents [17]. The current study measured a change in resident behavior/competence in performing screening for alcohol use in trauma patients. The data support the position that a focused, 1-hour lecture to residents can effect a positive change in behavior.

Other factors may also have influenced resident behavior. These could include reminders by the trauma attendings and other resident members of the trauma team to perform the screening, and individual feedback could have been valuable. Additionally, the frequency of the desired behavior may have increased as a result of a brief reinforcement statement at 3 months from the time of the lecture. However, because there was no control group for which the reinforcement statement was omitted, we are uncertain of its effect. The most beneficial timing and frequency of reinforcement are unknown and are areas for further study.

There is no one teaching method that is effective for all groups of learners. Certainly, other forms of instruction such as conferences, online tutorials, video lectures, OSCEs, and simulation laboratories may be effective, but they can involve more time and expense. We were able to show a significant improvement in screening behavior after a 1-hour lecture, thus showing that a low-cost, "low-tech" intervention can be very successful. It is unclear why the increase in resident performance was gradual; we attribute some of it to the 2 reinforcements given. In an era of increasing physician workload, teaching physicians may take comfort knowing that the time to prepare and present a lecture may not be in vain.

References

[1] Resources for Optimal Care of the Injured Patient. American College of Surgeons, 2006:116.
[2] Dean RE, Hanni CL, Pyle MJ, Nicholas WR. Influence of programmed textbook review on American Board of Surgery In-Service Examination scores. Am Surg 1984;50:349-50.

[3] Mahmood A, Matolo N, Sloan D, et al. Didactic surgical education by faculty: the effect on American Board of Surgery In-Training Examination percentile scores. Am Surg 2006;72:1176-80.
[4] Bull DA, Stringham JC, Karwande SV, Neumayer LA. Effect of a resident self-study and presentation program on performance of the thoracic surgery in-training examination. Am J Surg 2001;181:142-4.
[5] deVirgilio C, Stabile BE, Lewis RJ, Brayack C. Significantly improved American Board of Surgery In-Training Examination scores associated with weekly assigned reading and preparatory examinations. Arch Surg 2003;138:1195-7.
[6] Pollak R, Baker RJ. The acquisition of factual knowledge and the role of the didactic conference in a surgical residency program. Am Surg 1988;54:531-4.
[7] Hirvela ER, Becker DR. Impact of programmed reading on ABSITE performance. Am J Surg 1991;162:487-90.
[8] Pinkerton RE, Tinanoff N, Willms JL, Tapp JT. Resident physician performance in a continuing education format. JAMA 1980;244:2183-5.
[9] Picciano A, Winter R, Ballan D, et al. Resident acquisition of knowledge during a noontime conference series. Fam Med 2003;35:418-22.
[10] Warner S, Williams DE, Lukman R, et al. Classroom lectures do not influence family practice residents' learning. Acad Med 1998;73:347-8.
[11] Stern DT, Williams BC, Gill A, et al. Is there a relationship between attending physicians' and residents' teaching skills and students' examination scores. Acad Med 2000;75:1144-6.
[12] Mehrabi A, Gluckstein CH, Benner A, et al. A new way for surgical education: development and evaluation of a computer-based training module. Comp Biol Med 2000;30:97-109.
[13] Gilbart MK, Hutchinson CR, Cusimano MD, Regehr G. A computer-based trauma simulator for teaching trauma management skills. Am J Surg 2000;179:223-8.
[14] Dayal R, Faries PL, Lin SC, et al. Computer simulation as a component of catheter-based training. J Vasc Surg 2004;40:1112-7.
[15] Tsai TC, Harasym PH, Nijssen-Jordan C, Jennett P. Learning gains derived from a high-fidelity mannequin-based simulation in the pediatric emergency department. J Formos Med Assoc 2006;105:94-8.
[16] Kalet AL, Coady SH, Hopkins MA, et al. Preliminary evaluation of the Web Initiative for Surgical Education (WISE-MD). Am J Surg 2007;194:89-93.
[17] Anderson CI, Jentz AM, Kareti LR, et al. Assessing the competencies in general surgery residency training. Curr Surg 2005;62:111-6.

Discussion

Roxie M. Albrecht, M.D. (Oklahoma City, OK): Did you have a standardized history and physical form that prompted the residents to do the screen during their initial evaluation? Secondly, why the late blip in the significance? Was that resident-to-resident education, or ongoing attending-to-resident education by the trauma medical director? Was it by the trauma program manager? Or sometimes the registrars who record these things for the ACS get involved. Thirdly, if the patient screens positive, what type of intervention was done? And what technique did you use to get the residents to listen and retain the information? Did you use any performance-enhancing medications (laughter), stimulants, or was it interactive in nature?

Donald N. Reed Jr, M.D. (Lansing, MI): As far as standardized questions, we had not used prompters on the admission H and P forms. We subsequently did that. The month after the conclusion of this study, in August 2006, we introduced a standardized form with a preprinted H and P and a tickler on there, and that's a subsequent study that maybe we will present here next year.

The late blip in significance, that's a good question. The compliance improved over the year, and that was with 2


reminders. We suspect that we developed a culture of compliance. In other words, people were reinforced over time, and, as more residents remembered to do it, they reminded their colleagues, and so on. But we do not have any evidence to back that up. It was not that other attendings or anybody to do with the registry or the trauma program manager did it. I think one horse drove this cart.

As far as the positive screeners, in the first year of the screening we did not have a brief intervention program going. We did have an alcohol counselor who was involved with the study. So if they screened positive enough, which would be between a score of 8 and 19, then they would get an appointment with the alcohol counselor. As for the residents' performance, if they did not do it, they had to speak with either the program director, Dr. Apelgren, or the chair, Dr. Slomski. That is what was held over them.

Thomas A. Stellato, M.D. (Moreland Hills, OH): How much time did you spend, if you can tell us, with the bureaucracy of obtaining institutional review board approval? Did every single resident have to sign an informed consent to be part of this?

Donald N. Reed Jr, M.D. (Lansing, MI): Our institutional review board, which I happened to have been on at the time, made this a fairly painful process. There were many nonphysicians on Michigan State University's Institutional Review Board, and I was constantly getting questions back from these reviewers, who of course are anonymous, stating, "Well, do we know that you aren't going to use this information to damage the residents' self-esteem?" I assured them that these are surgical residents; there is no way you could hurt their self-esteem. I did not have to have individual residents sign consent. I did have to agree that I would not provide the residents with their individual performance, only group performance.

James R. DeBord, M.D.
(Peoria, IL): Well, I was a little disappointed when I saw the title of your talk, because I really thought you were going to tell us how many RVUs we could charge our institutions for giving a lecture. But my question has to do with patients who did not get asked the alcohol disuse questions. Working in a level I trauma unit, it is pretty obvious that many patients who come in are acutely intoxicated. How many of the patients who did not get asked the question were so obviously intoxicated that the resident did not feel he or she had to ask, or so intoxicated that he or she could not get a history?

Donald N. Reed Jr, M.D. (Lansing, MI): We had a couple of exclusion criteria, one of which was that patients had to have a GCS of 15 within 24 hours of admission, so that excludes the head injuries. But there also were a few real heavy drinkers who, when we came around the next day, still had not sobered up to the point where they could be asked. It was a fairly small number. But I have some students now who are in the fourth month of a 6-month project doing the brief intervention by medical students, and they were shocked that some heavy drinkers, while still inebriated, will absolutely deny that they drink.