Clinical Gastroenterology and Hepatology, accepted 5 August 2017. DOI: 10.1016/j.cgh.2017.08.007. PII: S1542-3565(17)30936-9.
Running title: Electronic Algorithms to Improve GI Cancer Follow-up
Development and Validation of Trigger Algorithms to Identify Delays in Diagnostic Evaluation of Gastroenterological Cancer

Daniel R. Murphy MD MBA1,2, Ashley N.D. Meyer PhD1,2, Viralkumar Vaghani MBBS1,2, Elise Russo MPH1,2, Dean F. Sittig PhD3, Li Wei MS1,2, Louis Wu PA1, Hardeep Singh MD MPH1,2

1 Houston VA Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas; 2 Baylor College of Medicine, Department of Medicine, Houston, Texas; 3 University of Texas Health Science Center at Houston's School of Biomedical Informatics and the UT-Memorial Hermann Center for Healthcare Quality & Safety, Houston, Texas

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Word Counts: Abstract: 411; Text: 4054; References: 41; Tables: 4; Figures: 1

Funding: This project is funded by a Veterans Affairs Health Services Research and Development CREATE grant (CRE 12-033) and partially funded by the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413). Dr. Murphy is additionally funded by an Agency for Healthcare Research & Quality Mentored Career Development Award (K08-HS022901), and Dr. Singh is additionally supported by the VA Health Services Research and Development Service (CRE 12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, and the Agency for Healthcare Research and Quality (R01HS022087). These funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript. There are no conflicts of interest for any authors.

Address for Correspondence and Reprints:
Daniel R. Murphy, MD MBA
Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC)
Houston Center for Innovation in Quality, Effectiveness & Safety (IQuESt) (152)
2002 Holcombe Boulevard, Houston, TX 77030 USA
713-440-4600 (o), 713-748-7359 (f)
[email protected]
ABBREVIATIONS

AFP – Alpha-Fetoprotein
CI – Confidence Interval
CPT – Current Procedural Terminology
CRC – Colorectal Cancer
EHR – Electronic Health Record
FIT – Fecal Immunochemical Test
FOBT – Fecal Occult Blood Test
GI – Gastrointestinal
HCC – Hepatocellular Cancer
ICD – International Classification of Diseases
NPV – Negative Predictive Value
PPV – Positive Predictive Value
VA – Department of Veterans Affairs

AUTHOR CONTRIBUTIONS

DRM, ANDM, DFS, and HS contributed to the study concept and design; DRM, ER, LWei, VV, and LWu contributed to acquisition of data; DRM, ANDM, ER, LWei, DFS, and HS contributed to analysis and interpretation of data; all authors contributed to drafting of the manuscript and critical revision of the manuscript for important intellectual content; DRM and ANDM contributed to statistical analysis; HS and DFS obtained funding; all authors contributed to administrative, technical, or material support; DRM, DFS, and HS supervised the study.
ABSTRACT

Background & Aims: Colorectal cancer (CRC) and hepatocellular cancer (HCC) are common causes of death and morbidity, and patients benefit from early detection. However, delays in follow-up of suspicious findings are common, and methods to efficiently detect such delays are needed. We developed, refined, and tested trigger algorithms that identify patients with delayed follow-up evaluation of findings suspicious of CRC or HCC.

Methods: We developed and validated two trigger algorithms that detect delays in diagnostic evaluation of CRC and HCC using laboratory, diagnosis, procedure, and referral codes from the Department of Veterans Affairs (VA) National Corporate Data Warehouse. The algorithms initially identified patients with results showing iron deficiency anemia or a positive fecal immunochemical test (for CRC) and elevated alpha-fetoprotein results (for HCC). Each algorithm then excluded patients for whom follow-up evaluation was unnecessary, such as patients with a terminal illness or those who had already completed a follow-up evaluation within 60 days. Clinicians reviewed samples of both delayed and non-delayed records, and review data were used to calculate trigger performance.

Results: We applied the algorithm for CRC to 245,158 patients seen from January 1, 2013 through December 31, 2013; we identified 1073 patients with delayed follow-up. In a review of 400 randomly selected records, we found our algorithm to identify patients with delayed follow-up with a positive predictive value (PPV) of 56.0% (95% CI, 51.0%–61.0%). We applied the algorithm for HCC to 333,828 patients seen from January 1, 2011 through December 31, 2014, and identified 130 patients with delayed follow-up. During manual review of all 130 records, we found that our algorithm identified patients with delayed follow-up with a PPV of 82.3% (95% CI, 74.4%–88.2%). When we extrapolated the findings to all patients with abnormal results, the algorithm identified patients with delayed follow-up evaluation for CRC with 68.6% sensitivity (95% CI, 65.4%–71.6%) and 81.1% specificity (95% CI, 79.5%–82.6%); it identified patients with delayed follow-up evaluation for HCC with 89.1% sensitivity (95% CI, 81.8%–93.8%) and 96.5% specificity (95% CI, 94.8%–97.7%). Compared to nonselective methods, use of the algorithms reduced the number of records required for review to identify a delay by more than 99%.

Conclusion: Using data from the VA electronic database, we developed algorithms that greatly reduce the number of record reviews necessary to identify delays in follow-up evaluations for patients with suspected CRC or HCC. This approach offers a more efficient method to identify delayed diagnostic evaluation of gastrointestinal cancers.

KEY WORDS: electronic health records; health information technology; medical informatics; primary care
BACKGROUND

Colorectal cancer (CRC) and hepatocellular cancer (HCC) are the second and ninth most common causes of cancer death in the United States, respectively, and contribute to preventable morbidity and mortality. Collectively, these gastrointestinal (GI) cancers contribute to approximately 75,000 deaths per year in the United States, with survival highly dependent on stage at diagnosis.1–3 For early detection or surveillance efforts to be effective,4,5 reliable and timely follow-up of abnormal clinical findings suggestive of possible cancer is essential. Many of these clinical findings are found incidentally outside of screening programs, underscoring the need for robust early diagnosis programs regardless of clinical setting.

Despite improved communication channels touted by electronic health records (EHRs), missed and delayed action on important clinical information continues to be a problem in health care.6–14 These misses occur despite the efforts of well-intentioned clinicians and can impact patients' health outcomes, potentially resulting in harm15,16 and, in some cases, leading to malpractice suits.17–20 A variety of socio-technical factors are believed to contribute to these delays, most notably information overload, time pressures in the clinical setting, suboptimal team and workflow designs, and lack of tools and EHR designs to optimally manage test results and other clinical data.21–28 Together, these factors culminate in delays in care, such as primary care providers failing to place a gastroenterology referral until multiple office visits have occurred, or gastroenterologists' inability to see patients or perform diagnostic endoscopies in a timely manner. Unfortunately, delayed and missed opportunities to make a diagnosis are not uncommon, affecting up to a third of patients with findings suspicious for cancer.29

Vast amounts of information are now collected during the day-to-day use of EHRs. This could enable the use of data mining techniques to detect missed opportunities for action by clinicians. However, few institutions actively leverage 'big data,' or any data for that matter, to identify opportunities to improve diagnosis. A recent Institute of Medicine report concluded that diagnostic errors and delays are often more difficult to measure than other healthcare quality and safety domains, and that new methods for measurement are needed in practice.30 Applying algorithms to EHR data could facilitate identification of patient cohorts at particularly high risk of a safety event. Compared to nonsystematic chart reviews, which would be too cumbersome and impractical as a method to find delays, use of electronic algorithms for EHR data mining could help exclude patients with normal results and help identify those at highest risk for a diagnostic error. Such computerized algorithms, usually referred to as "triggers," can detect and alert clinicians to high-risk situations and have shown initial promise for use in detecting diagnostic delays. Although use of simple triggers has been in practice for over a decade,31,32 development of such sophisticated triggers for diagnostic errors and delays is a more recent occurrence.33–37 These triggers have the potential to mitigate the impact of missed or delayed follow-up of test results or other clinical findings.

In our prior work, we developed a preliminary trigger to detect delayed follow-up action after clinical findings suggesting a possible colon cancer were identified.33 We showed that the use of such a trigger could shorten delays in diagnostic evaluation at individual health care facilities.37 To prepare such a trigger for larger-scale application to gastrointestinal cancers, we evaluated the performance characteristics of algorithms for CRC and HCC when applied directly to a large national repository of clinical data. Our long-term goal is to use these algorithms to improve early detection of CRC and HCC in routine clinical care.
METHODS

Setting

We developed and tested triggers using the Department of Veterans Affairs (VA) national Corporate Data Warehouse. This database contains patient data captured during daily clinical practice for all patients seen in the inpatient and outpatient settings at all VA facilities nationwide. We accessed these data via the VA Informatics and Computing Infrastructure (VINCI). For feasibility in conducting chart reviews, we narrowed our study to a single regional network consisting of 7 hospitals and associated clinics in the mid-western United States. Our local institutional review board approved this study (Protocol H-30995).

Trigger Development and Refinement

We designed triggers to initially identify all patients with red flag findings suspicious for specific cancers ("red flag criteria"). To improve the practicality of the triggers, we designed each trigger to then automatically exclude patients for whom follow-up is not indicated ("exclusion criteria") or for whom follow-up action has already been completed within a pre-defined timeframe ("appropriate follow-up criteria"). During piloting, input from clinicians indicated that more than one false positive out of every two flagged cases would be too burdensome to review. Thus, to be practical for clinical use, we required that triggers achieve a minimum positive predictive value (PPV) of 50%.

For the basis of our CRC trigger, we used previously piloted criteria that had been tested at a single VA facility but had not yet been evaluated on the large national data set.37 Red flags for CRC included a positive fecal immunochemical or occult blood test (FIT/FOBT) or laboratory findings suggestive of iron deficiency anemia. For the HCC trigger, we used elevated alpha-fetoprotein (AFP) as the red flag criterion and developed a set of exclusion and appropriate follow-up criteria based on literature reviews, existing clinical follow-up procedures, and expert input gathered during interviews with 2 primary care providers, 1 gastroenterologist, 1 medical oncologist, and 2 surgical oncologists independent of the study team. We then operationally defined the criteria and converted each into a computerized search algorithm.

No standard definition of a delay exists for HCC or CRC. Thus, based on feedback from experts, guidance from a 2013 Office of Inspector General report outlining timely follow-up after a positive fecal occult blood test,38 and our prior work on delays,29 we chose 60 days as the maximum timeframe for follow-up. This timeframe would allow sufficient time for typical diagnostic evaluation and follow-up processes to occur without our intervention, while limiting the clinical impact of a delay. The criteria were subsequently programmed into a combined computerized algorithm. The algorithms were designed to extract structured data fields, such as International Classification of Diseases (ICD) and Current Procedural Terminology (CPT) codes. Final criteria for each trigger are listed in Tables 1 and 2, and the Appendix includes a complete list of codes. We tested each criterion individually, with two clinicians performing independent chart reviews to ensure that each criterion appropriately extracted the desired information.

Based on lessons learned after single-site testing of the CRC trigger,37 we modified the algorithm in 3 ways: (1) we standardized the definition of "terminal illnesses" in the exclusion criteria to comprise cancer diagnoses where 5-year survival was less than 50% based on National Cancer Institute Surveillance, Epidemiology, and End Results (SEER) data;39 (2) we included a documented multidisciplinary tumor board meeting as an appropriate follow-up; and (3) we refined surgery ICD codes to include only those where bleeding risk was high or bleeding was noted. During iterative testing and refinement of the algorithm, we extracted 280 records (260 to test individual criteria and 20 to test the combined criteria) from the clinical data repository between 1/1/2008 and 12/31/2008. Two reviewers performed manual chart reviews to determine whether the algorithm appropriately identified the intended information.

Independently, once the final HCC trigger criteria were developed and reviewed by clinical experts, two reviewers independently performed 280 preliminary 'test' record reviews: 260 to test the ability to correctly extract each criterion from the data repository and 20 to evaluate output of the completed algorithm. Records were chosen via the same method used for the CRC trigger, and reviewers confirmed whether the data extracted by the trigger algorithm appropriately met the criteria.
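The three-stage logic described above (red flag criteria, then exclusion criteria, then appropriate follow-up criteria) can be sketched in simplified form as follows. This is an illustrative Python sketch, not the deployed algorithm: the `PatientRecord` fields and the single `terminal_illness` flag are hypothetical stand-ins for the full sets of structured ICD/CPT/laboratory codes listed in Tables 1 and 2.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional

FOLLOW_UP_WINDOW = timedelta(days=60)  # maximum follow-up timeframe used in the study


@dataclass
class PatientRecord:
    """Hypothetical, simplified view of one patient's structured data."""
    red_flag_date: Optional[date]       # earliest positive FIT/FOBT, IDA result, or elevated AFP
    terminal_illness: bool = False      # stand-in for the full set of clinical exclusion criteria
    follow_up_dates: List[date] = field(default_factory=list)  # colonoscopy, GI referral, tumor board, etc.


def trigger_positive(rec: PatientRecord) -> bool:
    """Flag the record as a potential delay in diagnostic evaluation."""
    if rec.red_flag_date is None:
        return False  # red flag criteria not met
    if rec.terminal_illness:
        return False  # exclusion criteria: follow-up not indicated
    deadline = rec.red_flag_date + FOLLOW_UP_WINDOW
    # appropriate follow-up criteria: any qualifying action within 60 days clears the flag
    if any(rec.red_flag_date <= d <= deadline for d in rec.follow_up_dates):
        return False
    return True  # potential delay: route to manual record review
```

Applied over a repository extract, `[r for r in records if trigger_positive(r)]` would yield the trigger-positive cohort that goes on to manual review.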
Trigger Performance

We evaluated trigger performance by applying each trigger to a one-year data set (different from that used to develop the trigger) within the clinical database. Two clinicians, one physician and one physician assistant (VV and LW), with extensive clinical and research experience performing record reviews, reviewed a randomly selected sample of 400 trigger-positive records and 100 trigger-negative records (i.e., records where patients had a red flag finding but also had electronically identified clinical exclusion or expected follow-up criteria). Reviewers were blinded to trigger-positive or trigger-negative status and classified each record into one of the following: (1) failed to receive necessary follow-up action (delay); (2) already received follow-up or did not require it; or (3) required follow-up but contained a documented plan for action at a subsequent date beyond 60 days, and thus "required tracking" to completion. Reviewers were instructed to review all documentation in the progress notes section during the 60-day period after the red flag, as well as documentation and data in the relevant sections related to specialty and multidisciplinary consultations, procedures, laboratory results, and imaging results. To determine the long-term impact of missed delays, reviewers also collected data on cancer diagnosis outcomes over the subsequent 2 years. Each reviewer evaluated 240 trigger-positive and 60 trigger-negative records, with 20% overlap in each so that interrater reliability could be calculated. When reviewers encountered ambiguity during reviews, the full research team discussed cases to develop consensus and resolve areas of disagreement. We calculated initial interrater agreement, but all disagreements were subsequently resolved via consensus.

We used review findings to calculate each trigger's positive predictive value (PPV; percentage of the 400 trigger-flagged records that truly contained a delay, or that required tracking and did not receive follow-up within one month of the documented plan) and negative predictive value (NPV; percentage of the 100 trigger-negative records that truly did not contain a delay, i.e., those that received or did not require follow-up, or that required tracking and received follow-up within one month of the documented plan), and extrapolated review findings to all records with red flag findings to calculate sensitivity and specificity. We also estimated the number of record reviews necessary to identify a single delay both with and without electronic triggers.
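Concretely, the extrapolation scales the sampled PPV and NPV up to the full trigger-positive and trigger-negative populations to estimate the confusion matrix. A short sketch using the CRC figures reported in the Results (224/400 reviewed positives confirmed as delays, 88/100 reviewed negatives confirmed as true negatives, 1073 flagged of 3369 red-flag patients) reproduces the reported point estimates; the confidence intervals in the paper were computed separately.

```python
def extrapolate(ppv: float, npv: float, n_flagged: int, n_unflagged: int):
    """Scale sampled PPV/NPV to the full red-flag population (point estimates only)."""
    tp = ppv * n_flagged          # estimated true delays among trigger-positives
    fp = (1 - ppv) * n_flagged    # flagged records without a true delay
    tn = npv * n_unflagged        # estimated true non-delays among trigger-negatives
    fn = (1 - npv) * n_unflagged  # delays the trigger missed
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity


# CRC trigger: 224/400 positive reviews were delays; 88/100 negative reviews were true negatives
sens, spec = extrapolate(224 / 400, 88 / 100, n_flagged=1073, n_unflagged=3369 - 1073)
print(round(sens * 100, 1), round(spec * 100, 1))  # 68.6 81.1, matching the reported values
```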
Sample Size

The study was powered to identify the number of records needed to develop a 2-sided 95% confidence interval (CI) with a width of 10% at any possible PPV. Because the largest sample size occurs at a proportion of 50% in binomial distributions, we used a PPV of 50% for sample size calculations. This calculation yielded the need to review a minimum of 384 trigger-positive records, which we rounded up to 400. Similarly, assuming a conservative 95% NPV, our calculation for trigger-negative records yielded the need to review 73 records, and we reviewed 100.
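These figures follow from the standard normal-approximation sample-size formula for a proportion, n = z²p(1−p)/e², with half-width e = 0.05 (full CI width 10%) and z = 1.96. A quick check, rounding to the nearest record, reproduces both quoted minimums:

```python
def ci_sample_size(p: float, half_width: float = 0.05, z: float = 1.96) -> int:
    """Records needed for a two-sided 95% CI of the given half-width around proportion p."""
    return round(z * z * p * (1 - p) / half_width ** 2)


n_ppv = ci_sample_size(0.50)  # binomial variance peaks at p = 0.5, the conservative case -> 384
n_npv = ci_sample_size(0.95)  # assumed 95% NPV -> 73
```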
Data Analysis

We used Stata (v14.1; StataCorp LP, College Station, Texas) to analyze data, including reviewer agreement using ordered kappa, trigger performance, and time to follow-up. We reported reasons for lack of follow-up using descriptive statistics. We used a two-tailed Fisher's exact test to compare records with and without delays.
RESULTS

Colorectal Cancer Trigger

We applied the final CRC trigger (Table 1) to 245,158 records of patients seen at the study sites between 1/1/2013 and 12/31/2013 and identified 1852 positive FIT/FOBTs and 7087 results suggestive of IDA, corresponding to 3369 unique patients with red flags (IDA or positive FIT/FOBT). The trigger automatically excluded 1682 (50.0%) patients with clinical exclusion criteria and 614 (18.2%) patients where evidence of completed follow-up action was detected (Figure 1). From the remaining 1073 (31.8%) patients with potential delays (517 FIT/FOBT and 1728 IDA results), we reviewed a randomly selected sample of 400 records; if a patient had more than one delay, we reviewed the earliest test that was delayed.

We found 209 confirmed delays and 16 records that required tracking to ensure completion of planned action. Of the 16 records needing tracking, only one received follow-up within a month of the anticipated follow-up date documented in the provider's plan, which increased the number of delays to 224 (PPV of 56.0%; 95% CI: 51.0%–61.0%). In most cases (56.7%), the reason for lack of follow-up was not documented. No differences in PPVs or NPVs between the FOBT/FIT and IDA portions of the trigger were identified (PPVs of 57.0% and 55.1% for FOBT/FIT and IDA, respectively [p=0.71], and NPVs of 85.4% and 90.4% [p=0.44]). Additionally, no differences in PPVs across sites were identified (p=0.29). Of all 224 delays, 100 (44.6%) received follow-up action within 2 years, and 4 patients were subsequently found to have a CRC diagnosis made within 2 years of the red flag date. Extrapolating our findings to all 1073 patients identified by the trigger suggests we would find an estimated 601 patients with delays, 11 of whom would eventually be diagnosed with CRC. Additional contributory factors are listed in Table 3.

Reasons for the 176 false positives are listed in Table 4. Of the patients excluded by the trigger, we reviewed 100 records and found 88 true negative records (truly no delay in care; NPV of 88%; 95% CI: 79.6%–93.4%). Of the 12 false negative records, 9 were from patients over age 75 where the treating physician documented that diagnostic evaluation should be pursued regardless of advanced age, suggesting that adjustments to the age exclusion criteria could improve NPVs. The remaining 3 resulted from EHR data miscoding (a colonoscopy date field not matching the date the procedure actually occurred, and a specialist's telephone call being coded as a visit) or insufficient initial testing detected by the trigger (poor colonoscopy prep warranting a repeat procedure). After extrapolating review findings to all 3369 patients with red flag findings, sensitivity and specificity were 68.6% (95% CI: 65.4%–71.6%) and 81.1% (95% CI: 79.5%–82.6%), respectively.

Hepatocellular Carcinoma Trigger
Once the criteria (Table 2) were finalized, we tested the trigger's performance by applying it to the records of all 333,828 patients seen at the study sites between 1/1/2011 and 12/31/2014. A longer study period was necessary due to the significantly smaller number of AFP tests performed. The trigger identified 2345 elevated AFP results in 784 patients. The trigger then excluded 141 (18.0%) patients due to clinical exclusion criteria and 513 (65.4%) based on expected follow-up criteria (Figure 1). We reviewed all of the remaining 130 patient records at high risk for delayed follow-up (155 elevated AFPs) and found 102 (78.5%) records with delayed diagnostic evaluation and 15 (11.5%) that required additional tracking to completion of a provider's documented plan. Of the 15 records requiring tracking, 10 received follow-up within one month of the provider's documented follow-up timeframe; thus a total of 107 delays were identified (PPV of 82.3%; 95% CI: 74.4%–88.2%). Factors contributing to delays and reasons for false positives are listed in Tables 3 and 4, respectively. Additionally, we reviewed 100 of the excluded records and found 2 false negative records (NPV of 98.0%; 95% CI: 92.3%–99.7%), one due to incorrect data coding in the EHR (a patient's visit coded as complete when the patient did not show) and another from a visit with oncology that was actually for a benign hematologic issue unrelated to the liver findings. No differences in PPV across sites were identified (p=0.84). Extrapolating findings to all 784 patients with red flag findings revealed a sensitivity and specificity of 89.1% (95% CI: 81.8%–93.8%) and 96.5% (95% CI: 94.8%–97.7%), respectively. Of the 107 delays, 85 (79.4%) received follow-up within 2 years. Of all records with delays or requiring tracking, 9 were subsequently diagnosed with HCC within two years after the red flag date.

Impact on workflows to detect delays
Without the benefit of electronic data mining algorithms, an institution would likely need to perform nonselective record reviews of all 245,158 patients seen to identify the estimated 601 true delays in diagnostic evaluation of CRC, and of all 333,828 patients to find the 107 delays in evaluation of HCC. This translates to approximately 408 chart reviews to find one CRC delay and 3120 to identify one HCC delay. If facilities limited reviews to only patients where relevant testing was performed, this would still require review of 107,360 patients with FOBT/FIT or hemoglobin/mean corpuscular volume results (10,661 and 103,560 patients, respectively, with 6861 where both tests were performed) and 8855 patients with AFP results. This translates to 179 and 83 record reviews to identify a single delay for CRC and HCC, respectively. Our triggers reduced the number of patient records requiring review by over 98%, to 1.8 records to find a CRC-related delay and 1.2 records to find an HCC-related delay.
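The review-burden arithmetic above can be reproduced directly: reviews needed per true delay is simply records reviewed divided by true delays found. A quick check of the quoted figures:

```python
def reviews_per_delay(n_records: int, n_delays: int) -> float:
    """Average number of record reviews needed to surface one true delay."""
    return n_records / n_delays


crc_all  = reviews_per_delay(245_158, 601)  # nonselective review: ~408 per CRC delay
hcc_all  = reviews_per_delay(333_828, 107)  # nonselective review: ~3120 per HCC delay
crc_test = reviews_per_delay(107_360, 601)  # test-limited review: ~179 per CRC delay
hcc_test = reviews_per_delay(8_855, 107)    # test-limited review: ~83 per HCC delay
crc_trig = reviews_per_delay(1_073, 601)    # trigger-flagged only: ~1.8
hcc_trig = reviews_per_delay(130, 107)      # trigger-flagged only: ~1.2

reduction = 1 - crc_trig / crc_test         # >98% fewer reviews than test-limited screening
```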
DISCUSSION

We developed and refined trigger algorithms to identify instances of delayed diagnostic evaluation of colorectal and hepatocellular cancer. Both triggers achieved positive predictive values >50%, suggesting their potential to efficiently impact clinical care in this as well as other clinical settings. For instance, if a care coordinator were designated to review and act upon this information, they would find at least one patient with a true delay in every two records flagged. This relatively efficient method is a substantive improvement over the current state, in which most institutions use no systematic methods to detect such delays.

While the majority of patients with red flag findings receive appropriate diagnostic evaluation, detecting patients experiencing delays is akin to finding needles in a haystack. Without leveraging electronic health record data, it is extremely difficult for institutions to monitor red flags and ensure that patients needing follow-up action actually receive it in a timely manner. In this study, we found that 83% (3445 of 4153) of patients with one or more red flags suggestive of either CRC or HCC received appropriate follow-up or did not require action, as determined by the trigger and record reviews. However, identifying which of the remaining 17% were experiencing delays, including the 20 patients expected to be subsequently diagnosed with GI cancer, is currently difficult. For example, reviewing the records of all patients seen requires hundreds of thousands of reviews and carries a low yield (one true delay per 408 and 3120 patients reviewed for CRC and HCC, respectively). This makes the method far too impractical for clinical use, even when considering potential efficiencies gained by reviewing records for both cancer types simultaneously.
Similarly, even when focusing on only those records with abnormal tests of interest, our triggers greatly (>98%) reduced the number of record reviews needed to identify a true delay compared to non-selective methods. This makes triggers substantially more efficient and practical for use in a busy practice. As with the data repository used in this study, application to large multi-site databases or health information exchanges could bring about further economies of scale by having a single team review and monitor trigger results across departments or facilities, making such programs even more practical. Additionally, further efficiencies could be gained as subsequent research efforts continue to improve the performance of trigger algorithms. For example, the use of more advanced data extraction and analysis methods, such as natural language processing, could allow triggers to make use of free-text data (e.g., symptoms documented in progress notes), reducing the number of false positives and negatives identified. Despite the efficiencies gained by the use of triggers, caution must be taken in the delivery of this information to prevent the provider alert fatigue22–24,40 that may have contributed to the initial delay. Information about trigger-identified delays is likely best delivered to designated clinical support staff, such as care managers, who can take action to bring follow-up to completion. Further research on the delivery of information about delays is needed.

In addition to improving the timeliness of care for individual providers, triggers have several implications for practice managers, researchers, and health care improvement personnel. Given the Institute of Medicine's recent attention to diagnostic errors and emerging research in this area, our triggers offer a valid method of identifying and measuring diagnostic delays. For example, such triggers can assist clinical practices and researchers with identifying delays in care for further investigation. This would help them understand underlying system causes, allow implementation of more robust workflows to ensure that follow-up actions progress more reliably, and help build safeguards for when diagnostic evaluation is not proceeding as expected. Additionally, monitoring changes in the numbers of records flagged by triggers can serve as a metric for determining whether research or quality improvement interventions to reduce delays were effective. While these triggers were developed and tested in a single, highly integrated health care delivery organization, similar triggers have also been shown to be effective at other institutions that use a comprehensive EHR to send referral requests electronically,33,37,41 suggesting that such triggers could be applied at any organization employing a modern EHR along with a comprehensive clinical data repository.
ACCEPTED MANUSCRIPT Electronic Algorithms to Improve GI Cancer Follow-up
Several limitations warrant discussion. First, the study sites included are part of the same health care network. While this may limit generalizability, the use of widely available structured data, including ICD and CPT codes, suggests good potential for portability of such triggers. Second, we were unable to reach our projected sample size for the HCC trigger, even after extending the study period, because of the infrequent use of alpha-fetoprotein testing. However, the trigger nevertheless achieved >50% PPV and identified 107 patients experiencing a care delay, of whom 9 were diagnosed with HCC and could potentially have received diagnostic evaluation sooner. Third, follow-up determination was based on electronic chart review data, which do not always represent the actual care delivered or the rationale for inaction, and some information may be missed on review. However, prior work suggests that provider documentation reasonably represents actual care, and records remain an important reflection of care in malpractice suits.6 Finally, the trigger performance metrics obtained may be affected by the 60-day definition of delay used in this study. Shorter time-period definitions would likely increase false positives, and vice versa. Based on our findings, 60 days appears to be a reasonable default setting for implementing triggers, but this could be adjusted based on local context or additional evaluation to optimize timing. For example, rather than using the 60-day default, other sites could extend this to 90 or 120 days, or even longer.
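The adjustable delay window discussed above can be expressed as a small parameterized check. This is a hypothetical sketch; the function and variable names are ours, not from the study, and the real triggers evaluate many follow-up event types rather than a single date list.

```python
from datetime import date, timedelta

def is_delayed(red_flag_date: date, follow_up_dates: list, window_days: int = 60) -> bool:
    """Return True if no follow-up action fell within the window after the red flag.

    window_days defaults to 60, mirroring the study's definition of delay;
    a site could raise it to 90 or 120 days as discussed in the text.
    """
    deadline = red_flag_date + timedelta(days=window_days)
    return not any(red_flag_date <= d <= deadline for d in follow_up_dates)

# A colonoscopy 75 days after a positive FIT counts as a delay at the
# 60-day default, but not at a 90-day setting.
flag = date(2013, 3, 1)
follow_ups = [date(2013, 5, 15)]  # 75 days after the red flag
print(is_delayed(flag, follow_ups, 60))  # True
print(is_delayed(flag, follow_ups, 90))  # False
```

Widening the window trades sensitivity for specificity in exactly the direction the text describes: a longer window suppresses false positives but lets longer real delays go unflagged.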
CONCLUSION
We developed and tested trigger algorithms to identify patients at risk for delayed diagnostic evaluation for GI cancers. Such EHR-based triggers may enable health care facilities to prevent prolonged delays in care and thus improve time to diagnosis.
REFERENCES

1. Altekruse SF, McGlynn KA, Reichman ME. Hepatocellular Carcinoma Incidence, Mortality, and Survival Trends in the United States From 1975 to 2005. J Clin Oncol. 2009;27(9):1485-1491. doi:10.1200/JCO.2008.20.7753.
2. Mittal S, El-Serag HB. Epidemiology of HCC: Consider the Population. J Clin Gastroenterol. 2013;47(0):S2-S6. doi:10.1097/MCG.0b013e3182872f29.
3. SEER Cancer Stat Fact Sheets. http://seer.cancer.gov/statfacts/. Accessed July 27, 2016.
4. Lin JS, Piper MA, Perdue LA, et al. Screening for Colorectal Cancer: Updated Evidence Report and Systematic Review for the US Preventive Services Task Force. JAMA. 2016;315(23):2576-2594. doi:10.1001/jama.2016.3332.
5. Marquardt JU, Nguyen-Tat M, Galle PR, Wörns MA. Surveillance of Hepatocellular Carcinoma and Diagnostic Algorithms in Patients with Liver Cirrhosis. Visc Med. 2016;32(2):110-115. doi:10.1159/000445407.
6. Singh H, Arora HS, Vij MS, Rao R, Khan MM, Petersen LA. Communication Outcomes of Critical Imaging Results in a Computerized Notification System. J Am Med Inform Assoc. 2007;14(4):459-466. doi:10.1197/jamia.M2280.
7. Singh H, Petersen LA, Daci K, Collins C, Khan M, El-Serag HB. Reducing referral delays in colorectal cancer diagnosis: is it about how you ask? Qual Saf Health Care. 2010;19(5):e27. doi:10.1136/qshc.2009.033712.
8. Singh H, Thomas EJ, Sittig DF, et al. Notification of Abnormal Lab Test Results in an Electronic Medical Record: Do Any Safety Concerns Remain? Am J Med. 2010;123(3):238-244. doi:10.1016/j.amjmed.2009.07.027.
9. Poon EG, Gandhi TK, Sequist TD, Murff HJ, Karson AS, Bates DW. "I wish I had seen this test result earlier!": Dissatisfaction with test result management systems in primary care. Arch Intern Med. 2004;164(20):2223-2228. doi:10.1001/archinte.164.20.2223.
10. Graber ML, Franklin N, Gordon R. Diagnostic Error in Internal Medicine. Arch Intern Med. 2005;165(13):1493-1499. doi:10.1001/archinte.165.13.1493.
11. Singh H, Giardina TD, Petersen LA, et al. Exploring situational awareness in diagnostic errors in primary care. BMJ Qual Saf. 2012;21(1):30-38. doi:10.1136/bmjqs-2011-000310.
12. Murphy DR, Singh H, Berlin L. Communication Breakdowns and Diagnostic Errors: A Radiology Perspective. Diagnosis. 2014;1(4):253-261. doi:10.1515/dx-2014-0035.
13. Menon S, Smith MW, Sittig DF, et al. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open. 2014;4(11):e005985. doi:10.1136/bmjopen-2014-005985.
14. Singh H, Sethi S, Raber M, Petersen LA. Errors in Cancer Diagnosis: Current Understanding and Future Directions. J Clin Oncol. 2007;25(31):5009-5018. doi:10.1200/JCO.2007.13.2142.
15. Tørring ML, Frydenberg M, Hansen RP, Olesen F, Hamilton W, Vedsted P. Time to diagnosis and mortality in colorectal cancer: a cohort study in primary care. Br J Cancer. 2011;104(6):934-940. doi:10.1038/bjc.2011.60.
16. Neal RD, Tharmanathan P, France B, et al. Is increased time to diagnosis and treatment in symptomatic cancer associated with poorer outcomes? Systematic review. Br J Cancer. 2015;112(s1):S92-S107. doi:10.1038/bjc.2015.48.
17. Beckman HB, Markakis KM, Suchman AL, Frankel RM. The doctor-patient relationship and malpractice. Lessons from plaintiff depositions. Arch Intern Med. 1994;154(12):1365-1370.
18. Singh H, Thomas EJ, Petersen LA, Studdert DM. Medical errors involving trainees: a study of closed malpractice claims from 5 insurers. Arch Intern Med. 2007;167(19):2030-2036. doi:10.1001/archinte.167.19.2030.
19. Gale BD, Bissett-Siegel DP, Davidson SJ, Juran DC. Failure to notify reportable test results: significance in medical malpractice. J Am Coll Radiol. 2011;8(11):776-779. doi:10.1016/j.jacr.2011.06.023.
20. Singh H, Allen JI. Patient Safety Counterpoint: Systems Approaches and Multidisciplinary Strategies at the Centerpiece of Error Prevention. Clin Gastroenterol Hepatol. 2015;13(5):824-826. doi:10.1016/j.cgh.2015.01.005.
21. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19(s3):i68-74. doi:10.1136/qshc.2010.042085.
22. Murphy DR, Meyer AD, Russo E, Sittig DF, Wei L, Singh H. The Burden of Inbox Notifications in Commercial Electronic Health Records. JAMA Intern Med. 2016;176(4):559-560. doi:10.1001/jamainternmed.2016.0209.
23. Murphy DR, Reis B, Sittig DF, Singh H. Notifications received by primary care practitioners in electronic health records: a taxonomy and time analysis. Am J Med. 2012;125(2):209.e1-7. doi:10.1016/j.amjmed.2011.07.029.
24. Murphy DR, Reis B, Kadiyala H, et al. Electronic Health Record-Based Messages to Primary Care Providers: Valuable Information or Just Noise? Arch Intern Med. 2012;172(3):283. doi:10.1001/archinternmed.2011.740.
25. Smith M, Murphy D, Laxmisan A, et al. Developing Software to "Track and Catch" Missed Follow-up of Abnormal Test Results in a Complex Sociotechnical Environment. Appl Clin Inform. 2013;4(3):359-375. doi:10.4338/ACI-2013-04-RA-0019.
26. Tarkan S, Plaisant C, Shneiderman B, Hettinger AZ. Reducing Missed Laboratory Results: Defining Temporal Responsibility, Generating User Interfaces for Test Process Tracking, and Retrospective Analyses to Identify Problems. AMIA Annu Symp Proc. 2011:1382-1391.
27. Chen EH, Bodenheimer T. Improving Population Health Through Team-Based Panel Management: Comment on "Electronic Medical Record Reminders and Panel Management to Improve Primary Care of Elderly Patients." Arch Intern Med. 2011;171(17):1558-1559. doi:10.1001/archinternmed.2011.395.
28. Bhise V, Modi V, Kalavar A, et al. Patient-Reported Attributions for Missed Colonoscopy Appointments in Two Large Healthcare Systems. Dig Dis Sci. 2016;61(7):1853-1861. doi:10.1007/s10620-016-4096-3.
29. Singh H, Daci K, Petersen LA, et al. Missed Opportunities to Initiate Endoscopic Evaluation for Colorectal Cancer Diagnosis. Am J Gastroenterol. 2009;104(10):2543-2554. doi:10.1038/ajg.2009.324.
30. Committee on Diagnostic Error in Health Care, Board on Health Care Services, Institute of Medicine, The National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. (Balogh EP, Miller BT, Ball JR, eds.). Washington DC: National Academies Press; 2015. http://www.ncbi.nlm.nih.gov/books/NBK338596/. Accessed August 16, 2016.
31. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12 Suppl 2:ii39-45.
32. Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients. 1991. Qual Saf Health Care. 2005;14(3):221-225. doi:10.1136/qshc.2002.002972/10.1136/qshc.2005.014522.
33. Murphy DR, Laxmisan A, Reis BA, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf. 2014;23(1):8-16. doi:10.1136/bmjqs-2013-001874.
34. Murphy DR, Thomas EJ, Meyer AND, Singh H. Development and Validation of Electronic Health Record-based Triggers to Detect Delays in Follow-up of Abnormal Lung Imaging Findings. Radiology. 2015;277(1):81-87. doi:10.1148/radiol.2015142530.
35. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93-100. doi:10.1136/bmjqs-2011-000304.
36. Murphy DR, Meyer AND, Bhise V, et al. Computerized Triggers of Big Data to Detect Delays in Follow-up of Chest Imaging Results. Chest. 2016;150(3):613-620. doi:10.1016/j.chest.2016.05.001.
37. Murphy DR, Wu L, Thomas EJ, Forjuoh SN, Meyer AND, Singh H. Electronic Trigger-Based Intervention to Reduce Delays in Diagnostic Evaluation for Cancer: A Cluster Randomized Controlled Trial. J Clin Oncol. 2015;33(31):3560-3567. doi:10.1200/JCO.2015.61.1301.
38. Evaluation of Colorectal Cancer Screening and Follow-Up in Veterans Health Administration Facilities. Washington, D.C.: VA Office of the Inspector General; 2013. https://www.va.gov/oig/pubs/VAOIG-13-01741-215.pdf. Accessed June 20, 2017.
39. Howlader N, Noone A, Krapcho M, et al. SEER Cancer Statistics Review, 1975-2010. Bethesda, MD: National Cancer Institute; 2013. http://seer.cancer.gov/csr/1975_2010/.
40. Hysong SJ, Sawhney MK, Wilson LA, et al. Understanding the Management of Electronic Test Result Notifications in the Outpatient Setting. BMC Med Inform Decis Mak. 2011;11(1):22. doi:10.1186/1472-6947-11-22.
41. Danforth KN, Smith AE, Loo RK, et al. Electronic Clinical Surveillance to Improve Outpatient Care: Diverse Applications within an Integrated Delivery System. EGEMS. 2014;2(1). doi:10.13063/2327-9214.1056.
TABLES

Table 1. Colorectal Cancer Trigger Criteria

Red Flag Criteria
- Labs suggestive of iron deficiency anemia:
  Hemoglobin ≤ 11 g/dl
  AND Mean Corpuscular Volume ≤ 81 fL
  AND no ferritin ≥ 100 ng/ml within 12 months before or 60 days after hemoglobin (i.e., ferritin result < 100 or not checked)
- OR positive fecal occult blood or fecal immunochemical test result

Clinical Exclusion Criteria (by timeframe)
- Within 3 years prior to red flag: colonoscopy performed
- Within 1 year prior to red flag: colorectal cancer diagnosis; terminal illness diagnosis; hospice/palliative care enrollment; pregnancy (iron deficiency anemia red flag only)
- Within 6 months prior to red flag: GI bleeding diagnosis (e.g., esophageal ulcer); other sources of bleeding (iron deficiency anemia red flag only)
- On date of red flag: age < 40 or > 75; history of total colectomy; history of thalassemia (iron deficiency anemia red flag only)
- Within 60 days after red flag: deceased; terminal illness diagnosis; hospice/palliative care enrollment; pregnancy (iron deficiency anemia red flag only)

Expected Follow-up Criteria
- Within 60 days after red flag: colonoscopy performed; gastroenterology referral performed; multidisciplinary tumor board summary documented

hCG = human chorionic gonadotropin
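The iron deficiency anemia lab logic in Table 1 can be sketched as a simple predicate. This is an illustrative sketch, not the authors' implementation; parameter names are assumptions, and the 12-month ferritin lookback is collapsed here into a single pre-resolved value.

```python
def ida_red_flag(hemoglobin_g_dl, mcv_fl, recent_ferritin_ng_ml=None):
    """Table 1 red flag: Hb <= 11 g/dl AND MCV <= 81 fL AND no ferritin
    >= 100 ng/ml in the lookback window (ferritin low or never checked)."""
    ferritin_rules_out_ida = (recent_ferritin_ng_ml is not None
                              and recent_ferritin_ng_ml >= 100)
    return hemoglobin_g_dl <= 11 and mcv_fl <= 81 and not ferritin_rules_out_ida

print(ida_red_flag(10.2, 78))         # True: low Hb and MCV, ferritin never checked
print(ida_red_flag(10.2, 78, 150.0))  # False: ferritin >= 100 rules out IDA
print(ida_red_flag(13.5, 90))         # False: Hb and MCV both normal
```

Note how the ferritin condition is asymmetric: a missing ferritin keeps the record flagged, which matches the table's "ferritin result < 100 or not checked" wording.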
Table 2. Hepatocellular Cancer Trigger Criteria

Red Flag Criteria
- Alpha-fetoprotein level > 20 ng/mL

Clinical Exclusion Criteria (by timeframe)
- Within 3 years prior to red flag: colonoscopy performed
- Within 1 year prior to red flag: hepatocellular cancer diagnosis; gonadal (ovarian or testicular) tumor diagnosis; terminal illness diagnosis; hospice/palliative care enrollment
- Within 9 months prior to red flag: pregnancy (diagnosis or positive hCG)
- Within 6 months prior to red flag: diagnosis of GI bleeding (e.g., esophageal ulcer); other sources of bleeding (iron deficiency anemia red flag only)
- On date of red flag: age < 18
- Within 60 days after red flag: deceased; gonadal (ovarian or testicular) tumor diagnosis; terminal illness diagnosis; hospice/palliative care enrollment; pregnancy (diagnosis or positive hCG)

Expected Follow-up Criteria
- Within 60 days after red flag: hepatology visit completed; gastroenterology visit completed; surgery visit completed; oncology visit completed; transplant surgery visit completed; multidisciplinary tumor board summary documented; liver imaging (ultrasound, CT, or MRI) performed; liver biopsy performed; liver embolization performed; liver surgery performed

hCG = human chorionic gonadotropin
Table 3. Reasons and contributory factors for delays

Reason: CRC Trigger No. (%); HCC Trigger No. (%)
- No documentation of any rationale for delay in care: 127 (56.7); 49 (45.8)
- Follow-up ordered, but patient was not scheduled to be seen within 60 days: 43 (19.2); 35 (32.7)
- Provider failed to follow documented follow-up plan: 15 (6.7); 5 (4.7)
- Follow-up ordered, but patient canceled the follow-up appointment or later declined to follow up after 60 days: 13 (5.8); 2 (1.9)
- Follow-up ordered, but patient sought care at an outside facility after 60 days (or documented after 60 days): 8 (3.6); 1 (0.9)
- Follow-up ordered, but patient was a no-show and was not seen for follow-up within 60 days: 7 (3.1); 11 (10.3)
- Provider was unable to reach patient even after multiple (≥3) documented attempts: 6 (2.7); 0 (0.0)
- Other: 5 (2.2); 4 (3.7)
- Total: 224 (100); 107 (100)
Table 4. Reasons for false positive results

Reason: CRC Trigger No. (%); HCC Trigger No. (%)
- The patient did not have a finding suggestive of cancer: 59 (33.5); 1 (4.3)
- Patient declined follow-up, and this was documented within 60 days: 57 (32.4); 2 (8.7)
- Patient received appropriate follow-up within 60 days at an outside institution (including another VA): 16 (9.1); 3 (13.0)
- The patient had a terminal illness or was in hospice, making follow-up unnecessary/inappropriate: 12 (6.8); 0 (0.0)
- Patient received appropriate follow-up at the VA within 60 days: 10 (5.7); 6 (26.1)
- Patient has a known history of cancer: 4 (2.3); 0 (0.0)
- Provider followed up in appropriate time after tracking: 1 (0.6); 10 (43.5)
- Other: 17 (9.7); 1 (4.3)
- Total: 176 (100); 23 (100)
FIGURE TEXT
Trigger and Review Process Flow Diagram

COMPETING INTERESTS
The authors have no conflicts of interest to disclose.
ACKNOWLEDGMENTS
The authors thank Dr. Yvonne Sada, Dr. Hashem El-Serag, Dr. Himabindu Kadiyala, Dr. Kamal Hirani, Dr. Nadar Massarweh, and Dr. Daniel A. Anaya for their input and expertise during development of the trigger algorithms.
Figure. Trigger and Review Process Flow Diagram

Colorectal Cancer-focused Trigger
- Trigger applied to 245,158 unique patients seen between 1/1/2013 and 12/31/2013
- 10,661 patients with 11,750 FITs performed, and 103,560 patients with 301,132 Hb and MCV tests performed
- 3,369 patients met Red Flag Criteria (1,852 positive FIT/FOBTs and 7,087 results consistent with IDA)
- 1,682 records excluded by Clinical Exclusion Criteria
- 614 records excluded by Expected Follow-up Criteria
- 1,073 "Trigger Positive" records flagged as having a possible delay (517 positive FITs and 1,728 results consistent with IDA)
- 400 Trigger Positive records randomly selected for manual review; 16 records required tracking
- 224 (56%) records with delays (4 diagnosed with cancer within 2 years); 176 (44%) records without delays

Hepatocellular Cancer-focused Trigger
- Trigger applied to 333,828 unique patients seen between 1/1/2011 and 12/31/2014
- 8,855 patients with 18,488 AFP tests performed
- 784 patients met Red Flag Criteria (2,345 elevated AFP results)
- 141 records excluded by Clinical Exclusion Criteria
- 513 records excluded by Expected Follow-up Criteria
- 130 "Trigger Positive" records flagged as having a possible delay (155 elevated AFP results)
- 130 Trigger Positive records manually reviewed; 15 records required tracking
- 107 (82%) records with delays (9 diagnosed with cancer within 2 years); 23 (18%) records without delays (1 diagnosed with cancer within 2 years)
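As a sanity check, the positive predictive values shown in the flow diagram follow directly from the chart-review counts (confirmed delays divided by reviewed records). A minimal sketch; the `ppv` helper is ours, not from the study:

```python
def ppv(true_positives: int, reviewed: int) -> float:
    """Positive predictive value: confirmed delays / trigger-positive records reviewed."""
    return true_positives / reviewed

# Counts from the flow diagram: 224 of 400 CRC records and 107 of 130 HCC
# records reviewed were confirmed delays.
print(f"CRC trigger PPV: {ppv(224, 400):.0%}")  # CRC trigger PPV: 56%
print(f"HCC trigger PPV: {ppv(107, 130):.0%}")  # HCC trigger PPV: 82%
```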
Colorectal Cancer Trigger Algorithm

Red Flag Criteria
1. Identify all patient records with:
   Iron Deficiency Anemia, defined as: (Hemoglobin (Hb)[1] ≤ 11 g/dl) AND (Mean Corpuscular Volume (MCV)[2] ≤ 81 fL) AND (no Ferritin[3] ≥ 100 ng/ml within 12 months before or 60 days after CBC (i.e., ferritin not checked or result < 100))
   OR a (positive Fecal Occult Blood Test (FOBT) or Fecal Immunochemical Test (FIT))[4] result

Clinical Exclusion Criteria
2. Then exclude patients < 40 years old OR > 75 years old on test result date
3. Then exclude patients listed as deceased[5] within 60 DAYS AFTER test result date
4. Then exclude patients with active colon cancer diagnosis[6] within 1 YEAR PRIOR TO test result date
5. Then exclude patients with colectomy[7] ANY TIME PRIOR TO and 60 DAYS AFTER test result date
6. Then exclude patients enrolled in hospice or palliative care[8] within 1 YEAR PRIOR TO and 60 DAYS AFTER test result date
7. Then exclude patients with a diagnosis of pancreatic cancer[9] OR leukemia (except acute lymphocytic)[10] OR liver cancer[11] OR biliary cancer[12] OR esophageal cancer[13] OR gastric cancer[14] OR brain cancer[15] OR uterine cancer[16] OR ovarian cancer[17] OR peritoneal, omental, or mesenteric cancer[18] OR myeloma[19] OR lung, bronchus, tracheal, or mesothelial cancer[20] within 1 YEAR PRIOR TO and 60 DAYS AFTER test result date
8. Then exclude patients with diagnosis of upper GI bleeding (Hematemesis)[21] OR (ulcer of esophagus, stomach or duodenum with bleeding)[22] within 6 MONTHS PRIOR TO test result date
9. Then exclude patients with colonoscopy[23] WITHIN 3 YEARS PRIOR TO test result date
10. Then, for Iron Deficiency Anemia only, exclude patients with (Menorrhagia)[24] OR (Hematuria)[25] OR (Epistaxis)[26] OR (uterine, cervical or vaginal bleeding)[27] OR (Hemoptysis)[28] OR (secondary hemorrhage)[29] WITHIN 6 MONTHS PRIOR TO test result date
11. Then, for Iron Deficiency Anemia only, exclude patients with diagnosis of Pregnancy[30] WITHIN 1 YEAR PRIOR TO or 60 DAYS AFTER test result date
12. Then, for Iron Deficiency Anemia only, exclude patients with Thalassemia[31] ANY TIME PRIOR TO OR WITHIN 60 DAYS AFTER test result date

Expected Follow-up Criteria
13. Then exclude patients with a completed Gastroenterology visit[32] WITHIN 60 DAYS AFTER test result date
14. Then exclude patients with a Colonoscopy[23] performed WITHIN 60 DAYS AFTER test result date

Footnotes:
[1] Based on LOINC 718-7, 30313-1, 30350-3, 30352-9
[2] Based on LOINC 30428-7, 787-2, 2276-4
[3] Based on LOINC 2276-4
[4] Based on LOINC 50196, 14563, 14564, 14565, 38527, 38526, 57803, 7905, 56490, 56491, 59841, 57804, 2335, 29771, 57804, 59841
[5] Based on status in mortality table
[6] Based on ICD-9: 153.xx, 154.0, 154.1, 154.8 (where 'x' is any value between 0 and 9)
[7] Based on ICD-9: 45.81, 45.82, 45.83; CPT: 44150, 44151, 44155, 44156, 44157, 44158, 44202, 44210, 44211, 44212
[8] Based on ICD-9 V66.7 or consult code entry for completed Hospice/Palliative Care consult
[9] Based on ICD-9 157.xx
[10] Based on ICD-9 205.0, 206.0, 207.0, 207.2x, or 208.0
[11] Based on ICD-9 155.0, 155.1, 155.2, or 197.7
[12] Based on ICD-9 156.xx
[13] Based on ICD-9 150.xx
[14] Based on ICD-9 151.xx
[15] Based on ICD-9 191.x, 198.3, or 198.4
[16] Based on ICD-9 179.xx
[17] Based on ICD-9 183.0
[18] Based on ICD-9 158.8, 158.9, or 197.6
[19] Based on ICD-9 203.0x or 238.6
[20] Based on ICD-9 162.0, 162.2x, 162.3x, 162.4x, 162.5x, 162.8x, 162.9x, 163.xx, 197.0, 197.2, or 197.3 (where 'x' is any value)
[21] Based on ICD-9 578.0
[22] Based on ICD-9 530.21, 531.0x, 531.2x, 531.4x, 531.6x, 532.0x, 532.2x, 532.4x, 532.6x, 533.0x, 533.2x, 533.4x, 533.6x, 534.0x, 534.2x, 534.4x, 534.6x
[23] Based on CPT: 44387, 44388, 44389, 44391, 44392, 44394, 45378, 45379, 45380, 45381, 45382, 45383, 45384, 45385, 45386, 45387, 45355, 45391, 45392
[24] Based on ICD-9: 626.2, 626.6, 627.0, 627.1
[25] Based on ICD-9: 599.7x
[26] Based on ICD-9: 784.7
[27] Based on ICD-9: 623.8, 626.8
[28] Based on ICD-9: 786.3x
[29] Based on ICD-9: 958.2
[30] Based on ICD-9: 629.81, 631.0, 633.0, 633.01, 633.10, 633.2x, 633.8x, 633.9x, V22.0, V22.1, V22.2, V23.0, V23.1, V23.2, V23.3, V23.41, V23.49, V23.5, V23.7, V23.81, V23.82, V23.83, V23.84, V23.89, V23.9
[31] Based on ICD-9: 282.4x
[32] Based on Visit "Stop Code" 33, 307, 321 or Note title entry for completed Gastroenterology consult
Note: 'x' indicates any possible digit
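The stepwise "then exclude" structure of the algorithm above amounts to a cascade of filters applied to red-flag records. A minimal sketch of that pattern, with toy records and only two of the fourteen steps shown; all field names and helper predicates are illustrative assumptions, not the authors' implementation:

```python
def apply_trigger(records, red_flag, exclusions):
    """Keep records that meet the red flag and fail every exclusion step."""
    flagged = [r for r in records if red_flag(r)]
    for exclude in exclusions:  # applied in order, mirroring steps 2-14
        flagged = [r for r in flagged if not exclude(r)]
    return flagged  # the "trigger positive" records

# Toy records with hypothetical fields.
records = [
    {"age": 55, "fobt_positive": True, "colonoscopy_within_3y": False},
    {"age": 82, "fobt_positive": True, "colonoscopy_within_3y": False},
    {"age": 60, "fobt_positive": True, "colonoscopy_within_3y": True},
]
positive = apply_trigger(
    records,
    red_flag=lambda r: r["fobt_positive"],                    # step 1 (FOBT/FIT arm)
    exclusions=[lambda r: r["age"] < 40 or r["age"] > 75,     # step 2: age bounds
                lambda r: r["colonoscopy_within_3y"]],        # step 9: recent colonoscopy
)
print(len(positive))  # 1: only the 55-year-old without a recent colonoscopy
```

Expressing each step as an independent predicate keeps the cascade easy to audit against the numbered criteria, and makes exclusion counts (as reported in the flow diagram) a natural byproduct.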
Hepatocellular Cancer Trigger Algorithm

Red Flag Criteria
1. Identify all patient records with: Alpha-Fetoprotein[1] > 20 ng/mL

Clinical Exclusion Criteria
2. Then exclude patients < 18 years old
3. Then exclude patients listed as deceased[2] within 60 DAYS AFTER test result date
4. Then exclude patients with active HCC diagnosis[3] within 1 YEAR PRIOR TO test result date
5. Then exclude patients with diagnosis of Pregnancy[4] WITHIN 9 MONTHS PRIOR TO or 60 DAYS AFTER test result date
6. Then exclude patients enrolled in hospice or palliative care[5] within 1 YEAR PRIOR TO and 60 DAYS AFTER test result date
7. Then exclude patients with a diagnosis of pancreatic cancer[6] OR leukemia (except acute lymphocytic)[7] OR liver cancer[8] OR biliary cancer[9] OR esophageal cancer[10] OR gastric cancer[11] OR brain cancer[12] OR uterine cancer[13] OR ovarian cancer[14] OR peritoneal, omental, or mesenteric cancer[15] OR myeloma[16] OR lung, bronchus, tracheal, or mesothelial cancer[17] within 1 YEAR PRIOR TO and 60 DAYS AFTER test result date
8. Then exclude patients with diagnosis of Ovarian tumor[18] OR Testicular tumor[19] within 1 YEAR PRIOR TO and 60 DAYS AFTER test result date

Expected Follow-up Criteria
9. Then exclude patients with a completed Hepatology visit[20] WITHIN 60 DAYS AFTER test result date
10. Then exclude patients with a completed Gastroenterology visit[21] WITHIN 60 DAYS AFTER test result date
11. Then exclude patients with a completed Surgery consult visit[22] WITHIN 60 DAYS AFTER test result date
12. Then exclude patients with a completed Oncology visit[23] WITHIN 60 DAYS AFTER test result date
13. Then exclude patients with a completed Transplant consult visit[24] WITHIN 60 DAYS AFTER test result date
14. Then exclude patients with a Liver Biopsy[25] performed WITHIN 60 DAYS PRIOR TO or 60 DAYS AFTER test result date
15. Then exclude patients with Liver Imaging[26] performed WITHIN 60 DAYS PRIOR TO or 60 DAYS AFTER test result date
16. Then exclude patients with a completed Liver Surgery[27] WITHIN 60 DAYS PRIOR TO or 60 DAYS AFTER test result date
17. Then exclude patients with a completed Liver Tumor Embolization[28] WITHIN 60 DAYS PRIOR TO or 60 DAYS AFTER test result date
18. Then exclude patients with a completed Tumor Board Conference[29] WITHIN 60 DAYS PRIOR TO or 60 DAYS AFTER test result date

Footnotes:
[1] Based on LOINC 1834
[2] Based on status in mortality table
[3] Based on ICD-9: 155.0, 155.1, 155.2, 197.7
[4] Based on ICD-9: 629.81, 631.0, 633.0, 633.01, 633.10, 633.2x, 633.8x, 633.9x, V22.0, V22.1, V22.2, V23.0, V23.1, V23.2, V23.3, V23.41, V23.49, V23.5, V23.7, V23.81, V23.82, V23.83, V23.84, V23.89, V23.9
[5] Based on ICD-9 V66.7 or visit code 351, 353 for completed Hospice/Palliative Care consult
[6] Based on ICD-9 157.xx
[7] Based on ICD-9 205.0, 206.0, 207.0, 207.2x, or 208.0
[8] Based on ICD-9 155.0, 155.1, 155.2, or 197.7
[9] Based on ICD-9 156.xx
[10] Based on ICD-9 150.xx
[11] Based on ICD-9 151.xx
[12] Based on ICD-9 191.x, 198.3, or 198.4
[13] Based on ICD-9 179.xx
[14] Based on ICD-9 183.0
[15] Based on ICD-9 158.8, 158.9, or 197.6
[16] Based on ICD-9 203.0x or 238.6
[17] Based on ICD-9 162.0, 162.2x, 162.3x, 162.4x, 162.5x, 162.8x, 162.9x, 163.xx, 197.0, 197.2, or 197.3
[18] Based on ICD-9 183.0, 183.2, 183.8, 220.x
[19] Based on ICD-9 186.0, 222.0
[20] Based on Visit "Stop Code" 337 or 454, or Note title text match for completed Hepatology consult
[21] Based on Visit "Stop Code" 33, 307, or 321, or Note title text match for completed Gastroenterology consult
[22] Based on liver surgery note title text match, consult service name match, and visit code name match for completed liver surgery consult
[23] Based on Visit "Stop Code" 316, or note title text match for completed Oncology consult
[24] Based on liver transplant note title match, consult service name match, and visit code name match for completed liver surgery consult
[25] Based on CPT: 47000, 47001, 47100, or ICD-9: 50.1x
[26] Based on CPT: 76705, 76700, 93975, 93976, 74150, 74160, 74170, 74714, 74175, 74176, 74177, 74178, 74181, 74182, 74183, 74185, 74190, or ICD-9: 88.01, 88.02, 88.03, 88.04, 88.76
[27] Based on CPT: 47010, 47015, 47120, 47122, 47125, 47130, 47135, 47136, 47140, 47141, 47143, 47144, 47300, 47370, 47371, or ICD-9: 50.0, 50.20, 50.21, 50.22, 50.23, 50.24, 50.25, 50.26, 50.29, 50.30, 50.40, 50.50, 50.51, 50.59, 50.60, 50.90, 50.91, 50.93
[28] Based on CPT: 37204, 37243
[29] Based on Visit "Stop Code" 316 or note title text match for completed Hepatology consult
Note: 'x' indicates any possible digit
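Several of the HCC follow-up exclusions above (steps 14-18) use a two-sided window: the action counts if it occurred within 60 days before or after the elevated AFP result. A minimal sketch of that check; function and variable names are illustrative, not from the study:

```python
from datetime import date, timedelta

def within_window(event: date, anchor: date, days_before: int, days_after: int) -> bool:
    """True if event falls in [anchor - days_before, anchor + days_after]."""
    return (anchor - timedelta(days=days_before)
            <= event
            <= anchor + timedelta(days=days_after))

# Liver imaging 42 days before the elevated AFP satisfies the two-sided
# 60-day window, so the record would be excluded from the trigger.
afp_date = date(2012, 6, 1)
imaging_date = date(2012, 4, 20)
print(within_window(imaging_date, afp_date, 60, 60))  # True
```

The asymmetry with the CRC trigger is deliberate in the source: CRC follow-up looks only forward (60 days after), while several HCC exclusions also look backward, since imaging or biopsy shortly before the AFP result already represents an active evaluation.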