Forty-One Million RADPEER Reviews Later: What We Have Learned and Are Still Learning


ORIGINAL ARTICLE


Humaira Chaudhry, MD,a Andrew Del Gaizo, MD, MBA,b L. Alexandre Frigini, MD,c Shlomit Goldberg-Stein, MD,d Scott D. Long, MD,e Zeyad A. Metwalli, MD,f Jonathan A. Morgan, MD,g Xuan V. Nguyen, MD, PhD,h Mark S. Parker, MD,i Hani Abujudeh, MD, MBAj

Abstract
ACR RADPEER is the leading method of radiologic peer review in the United States. The program has evolved since its inception in 2002 and was most recently updated in 2016. In 2018, a survey was sent to RADPEER participants to gauge the current state of the program and explore opportunities for continued improvement. A total of 26 questions were included, and more than 300 practices responded. In this report, the ACR RADPEER Committee authors summarize the survey results and discuss opportunities for future iterations of the RADPEER program.

Key Words: RADPEER, peer review, quality improvement, diagnostic radiology

J Am Coll Radiol 2020;-:---. © 2019 Published by Elsevier on behalf of American College of Radiology

a Department of Radiology, Rutgers – New Jersey Medical School, Newark, New Jersey. b Department of Radiology, Wake Forest University Baptist Medical Center, Winston-Salem, North Carolina. c Department of Radiology, Baylor College of Medicine, Houston, Texas. d Montefiore Medical Center, The University Hospital at Albert Einstein College of Medicine, Bronx, New York. e Southern Illinois University School of Medicine, Springfield, Illinois. f Department of Interventional Radiology, MD Anderson Cancer Center, Houston, Texas. g Crozer Chester Medical Center, Upland, Pennsylvania. h Department of Radiology, The Ohio State University College of Medicine, Columbus, Ohio. i Thoracic Imaging Division, VCU Health Systems, Richmond, Virginia. j Detroit Medical Center, Envision Physician Services, Detroit, Michigan.

Corresponding author and reprints: Humaira Chaudhry, MD, Department of Radiology, Rutgers – New Jersey Medical School, 185 S Orange Avenue, MSB F-506, Newark, NJ 07103; e-mail: [email protected].

The authors state that they have no conflict of interest related to the material discussed in this article.

INTRODUCTION
RADPEER was conceived in 2002 as a simple, cost-effective process to perform retrospective peer review with minimal burden on clinical workflow. This platform was built on the premise that radiologists review historical imaging studies when interpreting new imaging studies, allowing opportunities to review the accuracy of prior reports [1]. The program was first offered by the ACR to radiologists in 2003 and has evolved over the past 16 years, with revisions in 2009 and 2016 [1-4]. As of January 2019, 1,036 radiology practices use RADPEER for the collection of peer review data on more than 18,000 individual radiologists, with more than 41.2 million completed RADPEER reviews.

The original RADPEER introduced a 4-point rating system, with rising degrees of discrepancy [1] (Table 1). In 2009, the second RADPEER Committee white paper modified the definitions within each rating and further clarified each rating to a more widely applicable, outcomes-based classification. In addition, the update included an optional annotation of whether discrepancies were clinically significant [2] (Table 1). The 2016 RADPEER update reduced the number of scoring categories from four to three to place greater emphasis on peer learning (Table 1), emphasizing types of error rather than severity of discrepancy. Finally, the 2016 update provided an expanded classification option to include body systems and age group and also introduced a self-reporting feature to the review process to allow self-assessment [3].

In 2018, a new web-based electronic survey was sent to RADPEER participants to gauge the current state of the RADPEER program and identify opportunities for continued improvement. In this report we describe the results of the survey.


Table 1. RADPEER scoring (2002-2016)

Score 1
  2002: Concur with interpretation
  2009: Concur with interpretation
  2016: Concur with interpretation

Score 2
  2002: Difficult diagnosis, not ordinarily expected to be made
  2009: Discrepancy in interpretation/not ordinarily expected to be made (understandable miss); optional: a. unlikely to be clinically significant, b. likely to be clinically significant
  2016: Discrepancy in interpretation/not ordinarily expected to be made (understandable miss); optional: a. unlikely to be clinically significant, b. likely to be clinically significant

Score 3
  2002: Diagnosis should be made most of the time
  2009: Discrepancy in interpretation/should be made most of the time; optional: a. unlikely to be clinically significant, b. likely to be clinically significant
  2016: Discrepancy in interpretation/should be made most of the time; optional: a. unlikely to be clinically significant, b. likely to be clinically significant

Score 4
  2002: Diagnosis should be made almost every time—misinterpretation of findings
  2009: Discrepancy in interpretation/should be made almost every time—misinterpretation of finding; optional: a. unlikely to be clinically significant, b. likely to be clinically significant
  2016: Removed
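For readers who maintain their own peer review tooling, the 2016 scheme summarized in Table 1, together with the optional clinical-significance modifier and the discrepancy-type classification discussed later in this report, can be captured in a small data model. The Python sketch below is purely illustrative and is not part of the ACR RADPEER specification or software; the class and field names are our own.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Score(Enum):
    """2016 RADPEER scores (the former score 4 was removed)."""
    CONCUR = 1                # concur with interpretation
    UNDERSTANDABLE_MISS = 2   # discrepancy not ordinarily expected to be made
    SHOULD_BE_MADE = 3        # discrepancy that should be made most of the time


class DiscrepancyType(Enum):
    """Optional 2016 classification of the type of error."""
    PERCEPTION = "perception"
    INTERPRETATION = "interpretation"
    COMMUNICATION = "communication"


@dataclass
class PeerReviewEntry:
    """One peer review event; optional fields mirror the optional annotations."""
    score: Score
    clinically_significant: Optional[bool] = None     # the optional "a"/"b" modifier
    discrepancy_type: Optional[DiscrepancyType] = None
    self_review: bool = False                         # 2016 self-reporting feature


# Example: a score 2b discrepancy attributed to a perception error
entry = PeerReviewEntry(
    score=Score.UNDERSTANDABLE_MISS,
    clinically_significant=True,
    discrepancy_type=DiscrepancyType.PERCEPTION,
)
```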


METHODS
The 2018 survey was created by the ACR RADPEER Committee on the basis of a modification of a previous ACR survey administered in 2012 to RADPEER users and ACR members. Most items in the survey were multiple-choice questions, but a few solicited free-text responses. Surveyed topics included general practice characteristics, reasons for performing peer review, details regarding peer review implementation and data reporting, and perceptions regarding peer review and the 2016 RADPEER updates. A total of 26 questions were provided, including 14 repeated, 5 reworded, and 7 new questions compared with the 2012 survey. Some questions were added specifically to obtain user feedback related to recently implemented changes to RADPEER, while others were included to assess views on topics deemed important to the future of RADPEER or other peer review mechanisms. Reasons for rewording some of the 2012 survey items included the following: improve clarity, offer updated answer choices, and ensure continued relevance to the intended survey population of RADPEER users. A few 2012 survey items were not included in the 2018 survey because they were deemed by the ACR RADPEER Committee to be no longer of high relevance or usefulness. (The Appendix contains the full survey with responses and is published as an online-only supplement.)

The survey was implemented electronically using a web-based survey platform (SurveyMonkey, San Mateo, California). Invitations to participate were sent to 1,716 practices that use RADPEER via e-mail to account administrators, who serve as points of contact with the ACR RADPEER program. This represents a change in the survey population relative to the 2012 survey, which had included ACR members who do not use RADPEER in addition to RADPEER users. The rationale for including only RADPEER users in the 2018 survey was to obtain more focused feedback from users of RADPEER, whereas the previous survey also sought information on perceptions of RADPEER compared with other peer review options.

RESULTS
Characteristics of Respondents
Three hundred five of the 1,716 practices to which the survey was sent responded, representing an 18% response rate. Respondents came from varied practice settings, including 40% (122 of 305) from hospital-based private practices, 26% (79 of 305) from hospital-employed practices, 24% (73 of 305) from outpatient private practices, 12% (35 of 305) from academic practices, and 10% (30 of 305) from multispecialty practices.


Table 2. Use of peer review in radiology practices
Values are response percentage (numerator/denominator*).

Are targets used?
  Yes: 74% (181/243)
  No: 17% (41/243)
  No opinion: 9% (21/243)

Type of targets (actual vs desired)
  Number: actual 65% (120/185); desired 40% (64/160)
  Percentage: actual 29% (53/185); desired 40% (64/160)
  No opinion: actual 6% (12/185); desired 20% (32/160)

Period (actual vs desired)
  Per day: actual 23% (37/163); desired 20% (25/124)
  Per month: actual 39% (64/163); desired 38% (47/124)
  Per quarter: actual 19% (31/163); desired 19% (24/124)
  Per year: actual 16% (26/163); desired 19% (23/124)
  No opinion: actual 3% (5/163); desired 4% (5/124)

*Differences in denominator among survey questions reflect variations in numbers of respondents to each question.
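For readers reproducing these tabulations, each percentage in Tables 2 through 4 is computed against the number of respondents who answered that particular question, not against the full 305 respondents. The short Python sketch below, using the "Are targets used?" counts from Table 2, illustrates the calculation; the helper function is hypothetical and is not part of any RADPEER tooling.

```python
def tabulate(counts: dict) -> dict:
    """Convert raw answer counts for one survey question into
    'percentage (numerator/denominator)' strings. The denominator is the
    number of respondents who answered this particular question."""
    denominator = sum(counts.values())
    return {
        answer: f"{round(100 * n / denominator)}% ({n}/{denominator})"
        for answer, n in counts.items()
    }


# "Are targets used?" counts from Table 2
print(tabulate({"Yes": 181, "No": 41, "No opinion": 21}))
# {'Yes': '74% (181/243)', 'No': '17% (41/243)', 'No opinion': '9% (21/243)'}
```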

The majority of respondents (64% [195 of 305]) had participated in some form of structured peer review for more than 7 years. Many survey respondents did not answer all questions.

Why Facilities Do Peer Review
The most common reason for performing peer review was to meet accreditation requirements such as those set forth by the ACR (92% [262 of 286]). The second most reported reason was to increase patient safety and improve quality of clinical care (76% [216 of 286]). Approximately one-third of survey respondents (97 of 286) reported using more than one type of peer review (ie, RADPEER, PACS-embedded peer review, in-house). Meeting ACR accreditation requirements was the reason RADPEER was selected as the peer review platform for 81% of survey participants (232 of 286), and 47% of respondents (133 of 286) felt that RADPEER was simpler than other available options.

Using Peer Review
Most survey respondents (87% [249 of 286]) reported that all the physicians in their practice participate in peer review. For the remainder (11% [30 of 286]), free-text written responses (n = 17) revealed that lack of physician participation was related to specific practice exceptions (eg, part-time physicians, night physicians, non–relative value unit–based physicians), noncompliance, or use of an alternative peer review model. Most practices (75% [181 of 243]) had established peer review targets.


Practice targets for most respondents involve absolute numbers of cases (65% [120 of 186]) rather than percentage of cases (29% [53 of 186]), but responding practices were evenly split on preference for the target to be a specific number (40% [64 of 160]) or a percentage of cases (40% [64 of 160]). Respondents demonstrated a moderate preference for monthly targets (38% [47 of 124]), with daily, quarterly, and annual targets evenly represented at approximately 20% (23-25 of 124) (Table 2).

Reporting of Peer Review Data
External reporting of peer review data was noted by a majority of respondents (71% [198 of 279]), with the hospital (56% [156 of 279]), medical staff committees (37% [102 of 279]), and The Joint Commission (18% [49 of 279]) reported as recipients. Only 22% of practices surveyed (61 of 279) were currently using peer review data for internal quality improvement activities without reporting to outside entities. Most survey participants indicated that peer review data were reported in aggregate for the whole practice (75% [205 of 273]) and/or by individual physician (65% [177 of 273]). Reporting data by practice site location, either as aggregate data for the whole practice (24% [66 of 273]) or by individual physician (17% [47 of 273]), was less frequent.

Satisfaction With Peer Review
Only 23% of survey respondents (57 of 245) identified practice pattern changes secondary to the use of peer review. Fifty percent of survey respondents did not (123 of 245), while 27% (65 of 245) were unsure if practice patterns had changed.


Table 3. Concerns and challenges
Values are response percentage (numerator/denominator*).

Are peer review data part of physician evaluations?
  Yes: 39% (93/238)
  No: 42% (100/238)
  Don't know: 19% (45/238)

Are disagreements underreported?
  Yes: 31% (75/245)
  No: 44% (109/245)
  Don't know: 25% (61/245)

Would anonymity help improve reporting of disagreements?
  Yes: 38% (92/245)
  No: 30% (74/245)
  Don't know: 32% (79/245)

*Differences in denominator among survey questions reflect variations in numbers of respondents to each question.

Several respondents who noted a change in practice cited improvements in the oversight process and in quality, with increased accuracy and reduction in errors. Positive change in practice was also identified as a result of RADPEER use as an educational tool through review of cases with discrepancy scores. Survey respondents were given the opportunity to provide positive or negative feedback regarding their peer review systems. The most frequently cited positive comments described the ease of use of the RADPEER and eRADPEER systems. The most frequently cited complaint was lack of integration with PACS or dictation systems, which reduces the usability of peer review. Although the RADPEER program was explicitly designed to improve quality and facilitate peer learning, respondents frequently cited concerns that errors were underreported out of fear of offending others or incurring punitive measures. Finally, fulfillment of hospital, medical staff committee, or Joint Commission requirements for a peer review system was frequently mentioned as a benefit of using RADPEER.

Concerns and Challenges
Survey respondents reported that the most common reason peer review was not routinely used in their practices was that it was too time-consuming (53% [102 of 193]). Eighteen percent of respondents (35 of 193) felt that it did not produce any significant quality improvement.

Table 4. Response to 2016 updates
Values are response percentage (numerator/denominator*).

Was adding body systems beneficial to your practice?
  Yes: 34% (68/200)
  No: 38% (76/200)
  Don't know: 28% (56/200)

Was indicating classification of discrepancy type beneficial to your practice?
  Yes: 47% (94/200)
  No: 24% (48/200)
  Don't know: 29% (58/200)

Was adding the pediatric classification beneficial to your practice?
  Yes: 18% (35/200)
  No: 55% (109/200)
  Don't know: 28% (56/200)

Was the removal of RADPEER score of 4 beneficial to your practice?
  Yes: 39% (77/200)
  No: 22% (44/200)
  Don't know: 40% (79/200)

Was the addition of the self-review option useful in your QA sessions or learning processes?
  Yes: 30% (60/200)
  No: 21% (41/200)
  Don't know: 50% (99/200)

Note: QA = quality assurance.
*Differences in denominator among survey questions reflect variations in numbers of respondents to each question.

Approximately 30% (56 of 193) gave other responses, including the possibility of reviewer bias and a lack of random case selection. Some also mentioned that peer review systems were in use in their practices but that individual physicians would forget to log in and enter cases. Concerns about using peer review fell into two broad categories: medicolegal concerns and emotional concerns, the latter including being shamed for deficient performance.


Fifty-six percent of respondents (115 of 207) cited concerns about discoverability of peer review data in malpractice lawsuits. Thirty percent (62 of 207) had concerns about adverse effects on physician credentialing. Fifty-two percent (107 of 207) were concerned about being "graded" by peers, and 20% (42 of 207) were concerned about public humiliation. Thirty-five percent (72 of 207) admitted awkwardness when evaluating more senior colleagues.

Survey respondents were asked about the use of systematic group review of significant discrepancies, such as by a quality improvement group. Nearly 60% (147 of 245) responded "always" or "almost always," and 24% (59 of 245) responded "sometimes." While 8% (20 of 245) stated that they never reviewed discrepancies as a group, 8% (19 of 245) of respondents were unsure. Regarding sharing of peer review data, either privately with individual physicians or with the entire group, 44% (110 of 245) responded that all findings were reported in some way, 28% (68 of 245) responded that only disagreements were reported, 13% (33 of 245) said that data were not shared, and 14% (34 of 245) were unsure.

The 2018 survey included a new question asking whether peer review data were used in physician performance evaluations. Approximately 39% (93 of 238) responded "yes," and 42% (100 of 238) responded "no," with the remainder (19% [45 of 238]) responding "don't know."

The respondents who noted underreporting of significant discrepancies at their practices totaled 31% (75 of 245), compared with 44% (109 of 245) who did not. Reasons for underreporting were similar to the concerns expressed about peer review in general. On the basis of qualitative review of free-text comments submitted by respondents to this question, potential reasons for underreporting include awkwardness associated with disagreeing with a supervisor, biases related to nonrandom selection, concern regarding adverse consequences for the reviewed physician or practice reputation, insufficient anonymity and potential retribution, and unwillingness to take time to document discrepancies.

In response to whether anonymity of peer review would improve reporting of disagreements, more respondents answered affirmatively (38% [92 of 245]) than negatively (30% [74 of 245]), and about one-third (32% [79 of 245]) responded "don't know" (Table 3).

Response to 2016 Updates
For unclear reasons, 105 of 305 respondents (34%) skipped the questions inquiring about the 2016 changes. Among those answering these questions, there was no consensus regarding the perceived benefit of the addition of the body system or age classification. Forty-seven percent (94 of 200) viewed the incorporation of discrepancy classification into the RADPEER reporting system as positive, useful, or helpful. The impact of eliminating the score 4 category on the practice of radiology remains uncertain: 40% (79 of 200) were unsure of the benefit of eliminating score 4, 39% (77 of 200) found it beneficial, and 22% (44 of 200) reported no benefit. Thirty percent (60 of 200) found the self-review feature useful in quality improvement sessions, while 21% (41 of 200) found it not useful, and 50% (99 of 200) were unsure (Table 4).

DISCUSSION
The 2018 RADPEER survey confirms that peer review has become a routine component of radiology practice since the introduction of RADPEER in 2003. Most practices have used some form of peer review for more than 7 years, with many using more than one form of structured peer review. In addition to the requirement of physician peer review participation for accreditation, which can affect reimbursement, hospitals and The Joint Commission require peer review for credentialing. The ABR has required participation in a practice quality improvement project to maintain board certification and has promoted the ACR's RADPEER program as an accepted project. Satisfying accreditation requirements remains the most common reason for the use of peer review, while the external reporting of peer review data has markedly increased in comparison with the 2012 survey (71% in 2018 vs. 32% in 2012), with the recipient entities remaining similar. RADPEER was praised for its ease of use for both accomplishing peer review and reporting data. Although no clear preference for peer review targets based on absolute number versus percentage of cases was demonstrated, monthly targets were favored over daily, quarterly, or annual targets.

Persistent criticisms of peer review systems include concerns regarding time consumption due to poor integration with workflow and insufficient random case selection. Medicolegal concerns and emotional fears, including the perception of being "graded," remain the leading concerns. However, the percentage of respondents with these apprehensions has dropped since 2012 (18% in 2018 vs. 32% in 2012). Perception of underreporting of significant disagreements remains unchanged, with a slight indication that anonymity could improve the reporting of disagreements. Highlighting the protections provided by the Patient Safety and Quality Improvement Act of 2005, which limits discoverability of peer review in malpractice events, may alleviate medicolegal concerns among RADPEER users [5].

Although it is encouraging that the second most reported reason to use peer review was to increase patient safety and improve quality of clinical care, only a quarter of respondents reported practice changes as a result of using peer review, similar to findings from the 2012 survey.


However, over this same interval, the percentage of survey respondents reporting noncompliance with peer review due to perceived lack of benefit to quality improvement has declined.

In 2016, the ACR RADPEER Committee aimed to provide more useful information and to guide potential practice quality improvement initiatives by adding three classification categories to the RADPEER reporting system to annotate discrepancies as errors in perception, interpretation, or communication. These three categories of errors have been primary contributors to malpractice lawsuits against radiologists [3]. Errors in perception represent cognitive errors accounting for the majority (as high as 80%) of diagnostic image interpretation errors and are often readily appreciated retrospectively by the original reporting radiologist or by other radiologists [6-12]. Errors in interpretation also represent cognitive errors and occur when the interpreting radiologist recognizes an abnormal imaging finding but dismisses the finding or fails to recognize its clinical significance or relevance. Interpretive errors are rarely the result of insufficient knowledge but rather of failure in clinical judgment [7-12]. Errors in communication occur when the interpreting radiologist fails to relay clinically urgent or relevant imaging study findings to the appropriate provider in a timely manner. Such errors also include unclear or misleading radiology reports that fail to effectively communicate recommendations. Communication errors occurred in 20% of cases reviewed in a claims survey study by the Physician Insurers Association of America and the ACR [13].

Positive change in practice was identified as a result of RADPEER use as an educational tool through review of cases with a discrepancy score, consistent with the goals of nonpunitive peer learning and quality improvement envisioned by the RADPEER program [2,3,14]. This survey did not assess how the perceived benefit varies across the types of error (perception, interpretation, or communication). Such stratification may prove helpful in the future to further assist radiology practices in guiding practice quality improvement.

In 2016, the ACR RADPEER Committee also recommended further refining data summary reports by adding body system and pediatric case classification sections. The rationale behind these additions was to provide participants opportunities to make targeted practice improvements on the basis of identified weaknesses by rendering more granular data. At that time, the ACR RADPEER Committee also replaced the previous 4-point scoring system with a 3-point scoring system to facilitate peer learning instead of punitive application of peer review. Previous surveys had indicated confusion and disagreement in selecting between scores of 3 and 4. The self-review option implemented in 2016 was intended to promote sharing of self-identified oversights in an educational conference format [3].

As not all practices review discrepancies in an anonymous conference format, this feature may not be applicable to all groups. The 2018 survey results indicate no clear perception of benefit from these 2016 changes. Interpretation is limited by the paucity of responses to survey questions addressing the 2016 changes, with many of those who did respond reporting lack of knowledge on the topic. Educating users on the postulated benefits of these changes could potentially improve their impact.

Recently, there has been increased interest in peer learning, which deemphasizes scoring and performance measurement, as an alternative to traditional score-based peer review. In peer learning, reviewing radiologists are asked to identify opportunities to learn from an anonymized case, instead of providing a score-based review of a colleague's report. Attention is drawn away from measuring individual performance and toward performance improvement for groups of radiologists. As such, peer learning aims to foster a culture of group learning in a nonpunitive environment to drive performance improvement [15]. The survey results suggest that peer review provides opportunities for peer learning through consensus review, case conferences, and direct feedback. Although there is early evidence that an implemented peer learning tool may increase clinically significant feedback and learning opportunities compared with traditional score-based peer review [16], there are practical and regulatory hurdles to widespread implementation. Incorporation of peer learning should be considered in future iterations of the eRADPEER system as an adjunct tool for practices seeking to augment the educational opportunities provided by peer review.

There were several limitations to this study. The 2018 survey was sent only to current RADPEER users, while the 2012 survey was sent to both RADPEER users and the entire ACR membership. Additionally, some questions were modified, while other questions were new. Therefore, quantitative differences from the 2012 survey results should be interpreted with caution. Finally, only a small fraction of RADPEER practices responded to the survey, and not all questions in the survey were answered by all survey respondents, which may result in an unbalanced representation of responses.

The radiology community has clearly integrated peer review into daily practice; however, concerns regarding workflow interruption, medicolegal implications, and emotional stress persist. There is acknowledgment of the quality improvement benefits of peer review yet a lack of confidence in its ability to improve practice patterns. Creating a nonpunitive peer learning environment may help ease fears while effectuating true quality improvement. Peer review systems will need to continue evolving in order to truly transform into peer learning environments.


TAKE-HOME POINTS

- Peer review is now a routine component of radiology practice, with the most common reason for the use of peer review being to meet accreditation requirements.
- Medicolegal concerns and emotional fears, including the feeling of being "graded," remain the leading concerns expressed by survey participants.
- Positive change in practice was identified as a result of RADPEER use as an educational tool through review of cases with discrepancy scores, which is consistent with the goals of nonpunitive peer learning and quality improvement envisioned by the RADPEER program.
- Incorporation of peer learning should be considered in future iterations of the eRADPEER system, which may help foster a nonpunitive peer learning environment.

ACKNOWLEDGMENTS

The authors thank Fern Jackson, the RADPEER administrator, for her ongoing work to maintain and improve the RADPEER program.

ADDITIONAL RESOURCES
Additional resources can be found online at: https://doi.org/10.1016/j.jacr.2019.12.023.

REFERENCES
1. Borgstede JP, Lewis RS, Bhargavan M, Sunshine JH. RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. J Am Coll Radiol 2004;1:59-65.
2. Jackson VP, Cushing T, Abujudeh HH, et al. RADPEER scoring white paper. J Am Coll Radiol 2009;6:21-5.
3. Goldberg-Stein S, Frigini LA, Long S, et al. ACR RADPEER Committee white paper with 2016 updates: revised scoring system, new classifications, self-review, and subspecialized reports. J Am Coll Radiol 2017;14:1080-6.
4. Abujudeh H, Pyatt RS Jr, Bruno MA, et al. RADPEER peer review: relevance, use, concerns, challenges, and direction forward. J Am Coll Radiol 2014;11:899-904.
5. Liang BA, Riley W, Rutherford W, Hamman W. The Patient Safety and Quality Improvement Act of 2005: provisions and potential opportunities. Am J Med Qual 2007;22:8-12.
6. Berlin L, Hendrix RW. Perceptual errors and negligence. AJR Am J Roentgenol 1998;170:863-7.
7. Renfrew DL, Franken EA Jr, Berbaum KS, Weigelt FH, Abu-Yousef MM. Error in radiology: classification and lessons in 182 cases presented at a problem case conference. Radiology 1992;183:145-50.
8. Fitzgerald R. Error in radiology. Clin Radiol 2001;56:938-46.
9. Whang JS, Baker SR, Patel R, Luk L, Castro A III. The causes of medical malpractice suits against radiologists in the United States. Radiology 2013;266:548-54.
10. Kim YW, Mansfield LT. Fool me twice: delayed diagnoses in radiology with emphasis on perpetuated errors. AJR Am J Roentgenol 2014;202:465-70.
11. Bruno MA, Walker EA, Abujudeh HH. Understanding and confronting our mistakes: the epidemiology of error in radiology and strategies for error reduction. Radiographics 2015;35:1668-76.
12. Itri JN, Tappouni RR, McEachern RO, Pesch AJ, Patel SH. Fundamentals of diagnostic error in imaging. Radiographics 2018;38:1845-65.
13. Brenner RJ, Lucey LL, Smith JJ, Saunders R. Radiology and medical malpractice claims: a report on the practice standards claims survey of the Physician Insurers Association of America and the American College of Radiology. AJR Am J Roentgenol 1998;171:19-22.
14. Halsted MJ. Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol 2004;1:984-7.
15. Larson DB, Donnelly LF, Podberesky DJ, Merrow AC, Sharpe RE Jr, Kruskal JB. Peer feedback, learning, and improvement: answering the call of the Institute of Medicine report on diagnostic error. Radiology 2017;283:231-41.
16. Trinh TW, Boland GW, Khorasani R. Improving radiology peer learning: comparing a novel electronic peer learning tool and a traditional score-based peer review system. AJR Am J Roentgenol 2019;212:135-41.
