Author's Accepted Manuscript

Towards computer-assisted coding: A case study of 'charge by documentation' software at an endoscopy clinic
Kevin A. Jones, Nicholas J. Beecroft, Emily S. Patterson

To appear in: Health Policy and Technology
www.elsevier.com/hlpt
PII: S2211-8837(14)00034-3
DOI: http://dx.doi.org/10.1016/j.hlpt.2014.05.002
Reference: HLPT90
Received date: 3 February 2014
Revised date: 26 May 2014
Accepted date: 28 May 2014

Cite this article as: Kevin A. Jones, Nicholas J. Beecroft, Emily S. Patterson, Towards computer-assisted coding: A case study of 'charge by documentation' software at an endoscopy clinic, Health Policy and Technology, http://dx.doi.org/10.1016/j.hlpt.2014.05.002
Title Page

Title: Towards computer-assisted coding: A case study of 'charge by documentation' software at an endoscopy clinic

Authors:
Kevin A. Jones, Information Technology, The Ohio State University Medical Center
Nicholas J. Beecroft, School of Health and Rehabilitation Sciences, Ohio State University
Emily S. Patterson, PhD, School of Health and Rehabilitation Sciences, Ohio State University

Corresponding author:
Dr. Emily Patterson
Ohio State University, 453 W. 10th Ave, Room 543 (HIMS) Atwell Hall, Columbus OH 43210
Phone: 614-292-4623; Fax: 614-292-0210
E-mail: [email protected]

There was no funding received for this research.

Key words: ICD-9, ICD-10, CAC, computer assisted coding, computer assisted documentation, electronic health record

Running Title: Towards computer-assisted coding of ICD-9 codes
ABSTRACT
Objectives: An imminent transition to the ICD-10 diagnostic code set has increased interest in automating portions of the reimbursement process for clinical procedures. In this paper, we compare two distinct sets of billing codes generated at an endoscopy clinic: one using a traditional manual method and one using computer-assisted coding with a 'charge by documentation' approach.

Methods: This is a retrospective, cross-sectional research design analyzing data collected from all patients treated at one outpatient endoscopy clinic from July 2010 through June 2011. The collected data were the medical record number, date of service, diagnosis, procedure, CPT codes, ICD-9-CM codes, and CPT modifiers. The paired data were categorized as either an exact match or discrepant.

Results: 98% of the 2,923 procedures were either colonoscopies or upper GI endoscopies, which have predictable workflow deviations that reliably map to changes in procedural and diagnostic codes. The codes from the two methods were an exact match for 31% of the cases. The automated approach generated 1-8 additional codes for 62% of the cases, and the manual approach generated codes without accompanying supporting documentation in the progress note for 24% of the cases.

Conclusions: We conclude that the automated approach was superior to the manual approach. We recommend the 'charge by documentation' approach for settings where the workflow is relatively predictable, including pre-identified, frequently occurring branches in the workflow that affect the selection of procedural and diagnostic codes.
INTRODUCTION

There is great interest in computerized support for efficient documentation by a physician that will meet requirements for timely, justified reimbursement for clinically relevant procedures. This interest is due in large part to the imminent expansion in October 2015 of the diagnostic code set (ICD-10) used to assign numeric diagnostic codes to a clinical procedure. In this case study, we describe a retrospective comparison of the specificity, quantity, and accuracy of ICD-9 diagnostic codes assigned to outpatient endoscopy procedures using computerized "charge by documentation" support as compared to a more traditional manual approach. We hypothesized that the automated approach would be superior to the manual approach for an outpatient endoscopy clinic, where the workflow is highly predictable for most patients.

In order to receive reimbursement for services provided to patients, providers need to document that clinical procedures were performed and that the procedures were justified based upon relevant medical diagnoses. In the United States, the Medicare payment system is founded upon providers justifying their assignments of a set of procedural (Current Procedural Terminology - CPT) codes by having appropriate related diagnostic codes and accompanying text documentation in the physician's progress note. There are several standardized systems for diagnosis coding, with the International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) historically being the most common. In October 2015 in the United States, the ICD-9-CM coding system will transition to the Tenth Revision,
which is segmented for the first time into codes for Clinical Modification (ICD-10-CM) and a Procedure Coding System for use solely in inpatient hospital settings (ICD-10-PCS).i The use of ICD-10 is required for all patients covered by the Health Insurance Portability and Accountability Act (HIPAA), as well as for all patients who qualify for Medicare or Medicaid reimbursement. The ICD-10 update adds increased specificity in that it represents a three- to seven-fold increase in possible codesii, and expands to include alphanumeric characters and an additional seventh character. A bidirectional general equivalence mapping (GEM) system for comparing ICD-9-CM and ICD-10-CM/PCS codes has been made publicly available by the National Center for Health Statistics (NCHS).iii The authors of the GEM system found that, with a backward mapping approach comparing ICD-10-CM to ICD-9-CM, 93% of the codes had only approximate matches; the ICD-10 transition is therefore more of an overhaul than a minor update. For some diseases, ICD-10 code sets are only modestly expanded from ICD-9, but other areas are greatly expanded, including ten-fold increases in diabetic codes and in the specificity of the anatomic location for injuries to the musculoskeletal system. The change to ICD-10 does not affect procedural (CPT) coding for outpatient procedures. The World Health Organisation (WHO) continuously revises ICD-10, and production of ICD-11 is planned for 2017. ICD-11iv implementation is likely to occur first in other countries and regions, including Europe, and then in the United States, similar to the implementation of ICD-10.

In many healthcare organizations, diagnostic and procedural codes are not generated by physicians or others who interact directly with the patient. Notably, 'scribes', who are typically medical studentsv or permanent employees, are increasingly used to document progress notes in real time under the supervision of an attending physician.
These scribes do not usually generate any codesvi. Typically, coding specialists have an associate's or bachelor's degree in health information technology and are certified as a coding specialist by the American Health Information Management Association. Certification is based upon completing relevant coursework in an accredited program and successfully passing either the two-year Registered Health Information Technician (RHIT) or four-year Registered Health Information Administrator (RHIA) examination.vii Professional coders can also be certified by the American Association of Professional Coders. Medical coders assign a numeric descriptor to medical diagnoses, clinical procedures, and other elements such as medical complications. Medical coders in a hospital setting are typically employed in the Health Information Management (HIM) department to support billing requirements and to gather data for statistical use, including for the purpose of reporting quality measures.

Since the introduction of the ICD-10 code set, several accuracy studies have been conducted with professional coders. Overall, these studies indicate that there is no substantial change in coding accuracy with the use of the respective countries' versions of ICD-10, which are all significantly smaller than the US version, as compared to ICD-9. One study found that the accuracy of coding marginally improved for 30/36 co-morbidities studied following the transitionviii, and its results for sensitivity and Kappa values were similar to those obtained in a Canadian study using a similar methodology.ix One study found a modest improvement in the accuracy of diagnostic codes two years following ICD-10 implementation in Australia as compared to the year it was implemented: in 2000-2001, agreement of the principal diagnosis code was 87% as compared to 85% in 1998-1999, while agreement of the principal procedure code was 83% in 2000-2001 and 85% in 1998-1999.x In one study of professional coders working with a sample of complex patient records to generate ICD-9 codes, computer-assisted coding (CAC) support was associated with a similar level of accuracy to unassisted coding and a 22% reduction in time spent per record.xi
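The GEM crosswalk described earlier is distributed as plain-text files, and a minimal lookup can be sketched in a few lines. The row format assumed below (source code without a decimal point, target code, then flag digits whose first digit marks an approximate match) and the sample mappings are illustrative assumptions, not an authoritative rendering of the published files:

```python
# Sketch of looking up ICD-9 -> ICD-10-CM candidates in a GEM-style file.
# Assumes a simplified whitespace-delimited format: source code, target
# code, flag digits (first flag = approximate match). Rows are illustrative.

SAMPLE_GEM = """\
78650 R079 10000
78659 R0789 10000
V7651 Z1211 10000
"""

def load_gem(text: str) -> dict[str, list[tuple[str, bool]]]:
    """Map each source code to (target code, approximate?) pairs."""
    mapping: dict[str, list[tuple[str, bool]]] = {}
    for line in text.splitlines():
        source, target, flags = line.split()
        mapping.setdefault(source, []).append((target, flags[0] == "1"))
    return mapping

gem = load_gem(SAMPLE_GEM)
# In this illustrative sample, ICD-9 V76.51 (colon cancer screening)
# maps approximately to ICD-10-CM Z12.11.
assert gem["V7651"] == [("Z1211", True)]
```

Such a lookup only surfaces candidate matches; the GEM flags for combination codes and choice lists still require human review, which is consistent with the finding that most backward mappings are only approximate.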
In order to achieve full reimbursement for services, not only do procedure codes need to be justified by relevant diagnostic codes, but specific phrases are also required in the progress note created by a provider, who can be a physician, nurse practitioner, physician's assistant, intern, or resident physician. Without the appropriate justification included in the progress note, reimbursement will be denied by the third-party payer. Medical coders are not allowed to alter documentation created by a provider, but they can request that the provider who generated the note revise the progress note documentation. For example, the provider could clarify, using a particular phrase, that a particular procedure did occur or that it was justified based upon diagnostic data. Typically, this request is conducted via a secure electronic mail system between a coder and a provider before the medical coder assigns the CPT and ICD-9 or ICD-10 codes and forwards the information to generate a claim for reimbursement. Delays in correctly responding to a request result in delayed reimbursement to the organization, and a failure to respond leads to a loss of revenue. Therefore, attempts to completely automate or semi-automate the coding process are more attractive when they reduce the likelihood of a provider needing to revise or augment progress note documentation after the fact. Computerized support that employs automated detection of potential gaps in progress note documentation is easier to program when the relevant text in the progress note is in a structured format rather than unstructured free text.
We group the existing literature on computerized assistance for the coding process into three categories: 1) computer-assisted code recommendations for clinical providers, 2) computer-assisted coding (CAC) support for medical coders, and 3) computer-assisted code recommendations for clinical providers based upon analyzing structured text in a progress note generated with electronic support for a procedure-specific workflow, which we term "charge by documentation" support. The first and simplest approach is to have clinical providers select from a menu of recommended diagnostic and procedural codes. Typically these selections and the accompanying progress note
documentation are then reviewed by professional coders within a few days, either for every patient or for a sample of patients, to optimize the likelihood of reimbursement. Some software packages add pop-up alerts to reduce predictable mistakes. For example, one system in anesthesia automatically alerts clinicians by pager and by email to possible documentation errors, such as failing to include both start and stop times for an infusion.xii

The second approach is to have computer-assisted coding (CAC) support for medical coders. There are a number of variations on this approach currently in use, but the typical approach is to display a provider's progress note along with automatically suggested codes to assign to highlighted blocks of text.xiii In practice, medical coders nearly always have to deviate from at least some of the automatically generated suggestions, and thus they employ a collaborative model with the automation. In addition to existing CAC software in use, this is an extremely active research area. Much of the research explores how to create better natural language processing (NLP) algorithms to generate suggested diagnostic and procedural codes based upon analyzing a provider's structured or unstructured written notexiv. Some of the NLP techniques include the widely used MetaMap/MMTx by the National Library of Medicinexv, the MedLEE system developed by Friedman and colleaguesxvi, the MedEx system developed by Xu and colleagues,xvii the HITEx tool by Zeng and colleaguesxviii, and the statistical tool developed by Taira and colleaguesxix. In 2007, a Medical NLP challenge was conducted with forty-four different research teams in an effort to improve current NLP techniques. The challenge provided a training corpus of radiological reports with associated human-generated ICD-9-CM codes.xx Two teams published papers from this event, both of which combined multiple, parallel components followed by an adjudication module as a final step
that prioritized one of the components. Aronson and colleagues integrated existing technologies of a medical text indexer, a support vector machine classifier, and a non-parametric lazy learning (K-Nearest Neighbor - KNN) algorithm.xxi Crammer and colleagues used machine learning, a rule-based system, and an automatic coding system based on human coding policies; results were combined by giving priority to the human coding policy approach.xxii Currently, there are a number of commercial systems on the market in regular use that provide automatically suggested codes based on the written text in a progress note.

In addition to extracting ICD-9 codes from text, there has been research on extracting other code systems. Research on ICD-10 extraction primarily comes from Europe, where it has been implemented for some time, and an overview of ICD-10-specific issues is provided by Baud and colleagues.xxiii Disease-specific code sets have been developed, including for pneumoniaxxiv, neuroradiologyxxv, rheumatoid arthritisxxvi, and pathologyxxvii. Friedman and colleagues applied MedLEE to the SNOMED diagnostic code set.xxviii Finally, there has been impressive progress in extracting codes to meet local institutional standards such as problem lists. Pakhomov and colleagues used a multi-pass, certainty-based approach to automatically generate problem lists at the Mayo Clinicxxix,xxx. Haug and colleagues used a Bayesian belief network-based approach to automatically generate problem lists at Intermountain Healthcare.xxxi

The third and final category is the "charge by documentation" approach used in our case study. With this approach, real-time support is provided to guide a physician through pre-identified steps in a routine workflow for a particular clinical procedure. As a byproduct of this support, structured text in the progress note is generated in real time, based upon the clinical procedure actions selected by the provider using drop-down menus.
We term this category of support “charge by documentation”
because the progress notes that are generated are structured in a way that makes it easier to automatically recommend diagnostic and procedural codes that will meet reimbursement requirements, including having the required accompanying elements in the progress note. The system uses a point-and-click method for the provider to create the clinical documentation. The application ties certain clinical phrases to diagnosis and CPT codes. The approach eliminates the step of having a data entry clerk or medical coder process the information within a few days; instead, the billing is automatically initiated upon signature of the progress note. Medical record coders are still required for quality assurance purposes, but they now have a more encompassing role of validating the entire documentation and billing function in one global process. This role includes identifying necessary modifications to the software to meet reimbursement requirements, such as changes needed for fine-tuning the process after the ICD-10 transition.xxxii

For multiple reasons, in particular to support cohort identification,xxxiii research, quality improvement, reimbursement, and efficiently generating progress note documentation from templates, blocks of text in clinical documentation are increasingly structured.xxxiv On the other hand, this movement towards structured text generates tension with other objectives for progress note documentation. In particular, there has been an increased appreciation of the importance of preserving a narrative format for a patient history in order to be informative, efficient, and easy to remember, as well as to support effective communication between providers and empathetic communication with patients.xxxv A potential compromise to manage these competing demands is to combine structured text elements and unstructured text narratives in a single progress note.
This allows for queries of structured data fields and ease of generating documentation for routine elements, while still allowing flexibility for documenting and communicating complex, uncertain, or subtle information.
A case study comparison of the third category of automated coding support is the focus of this study. The application was used at three sites in the same geographic region, all affiliated with the same medical center; however, only two sites used the support features for clinical providers to select appropriate diagnostic codes. In these two sites, professional coders performed quality assurance on samples of generated codes, but did not typically review the codes or progress notes prior to submission for billing reimbursement. In the third site, which was the focus of this study, the traditional manual process with paper billing sheets was employed: physicians checked boxes following each clinical procedure, and professional coders later translated the billing sheets into diagnostic and procedural codes and requested modifications to progress notes by providers. Therefore, a naturally occurring experiment enabled a valid side-by-side comparison of the ICD-9 diagnostic codes generated with automated support for providers during real-time documentation and those generated with the traditional manual coding process.

METHODS

This study examined one specific application that was implemented in an outpatient endoscopy clinic nearly a decade earlier and is still in use at the site. A point-and-click paradigm is employed. For example, if a provider clicks the "screening" indication, the system prompts the provider to select items such as "average risk" and to enter information about risks due to family history. On the basis of this information, the system can accurately distinguish between different ICD-9 codes. For instance, the ICD-9 code V18.51 is
appropriate for a colonoscopy when there is a family history of polyps, whereas the ICD-9 code V76.51 is more appropriate, and has a higher reimbursement amount, when special screening for malignant neoplasms of the colon is conducted during the procedure. If multiple ICD-9 codes are potentially relevant, the software requests additional information in real time from the provider that enables a unique code selection.

This is a retrospective, cross-sectional research design analyzing data collected from all patients treated at one outpatient endoscopy clinic covering the period July 1, 2010 through June 30, 2011. The study was approved by the Institutional Review Board of the Ohio State University. In 2003, the clinic had implemented a software application that includes charge by documentation functionality to automate the assignment of diagnostic ICD-9 codes, which physicians are prompted to approve. In addition, during the study period, all billing codes (CPT, ICD-9-CM, and billing modifiers) were generated with a more traditional manual method, in which physicians documented information on paper charge sheets that were entered into the professional practice billing system by a data entry clerk the next day. Therefore, two distinct, independent sets of billing codes were generated for all patients during the time period at the study site using different methods: 1) a traditional manual method (with a data entry clerk but no professional coders), and 2) a charge by documentation approach. The software provided real-time support to the physician at the time of ordering procedures for appropriate documentation, as visualized generically in Figure 1. In Figure 1, the user is prompted to select a specific clinical procedure based upon pre-identified steps in the typical workflow sequence. In this case, the user has already selected colonoscopy, and therefore a list of indications for performing a colonoscopy is displayed in a menu format.
The user selects the most accurate indication, which in this
case is screening for colorectal cancer with an average risk. The phrase "Screening for colorectal malignant neoplasm" is added to the note documentation as a result of this selection. In this case, there are four distinct data elements captured in one step: the physician sees the phrase "Screen for Colorectal CA, Average Risk", the report reads the more clinical phrase "Screening for colorectal malignant neoplasm", the patient discharge instructions read the more user-friendly phrase "cancer screening colonoscopy", and the database stores a discrete numeric code with references to both English phrases.

Figure 1 - Menu Prompting Real-Time Documentation

In addition to real-time support for generating appropriate clinical documentation, the software also verifies immediately following a procedure that documentation in the note adequately supports the selected diagnostic (ICD-9-CM) and procedural (CPT) codes. A generic example of a pop-up alert flagging a potential mismatch between the documentation in the note and the procedural (CPT) codes is displayed in Figure 2. In this example, the software has detected that a procedural (CPT) code was selected for colonoscopy screening of a high-risk patient with no therapeutic maneuvers done such as a biopsy or polyp removal (G0121), which is not compatible with the selection made during the procedure to remove a polyp. Therefore, code 45378 is recommended (rather than G0121 and the associated ICD-9-CM code V76.51) to match the supporting documentation.

Figure 2 - Pop-up Alert to Match Codes and Note Documentation
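The consistency check illustrated in Figure 2 can be thought of as a small rule table: certain charge codes are incompatible with certain documented maneuvers, and a fallback code is recommended when they conflict. The sketch below is an illustrative assumption about how such a rule might be expressed, not the endoscopy vendor's actual logic:

```python
# Illustrative sketch of a documentation/charge consistency rule, loosely
# modeled on the Figure 2 example. The rule table and fallback mapping are
# assumptions for illustration, not the software's actual implementation.

# CPT codes that are incompatible with certain documented maneuvers.
INCOMPATIBLE = {
    "G0121": {"polypectomy", "biopsy"},  # screening code: no therapeutic maneuvers
}
# Hypothetical fallback code to recommend when a conflict is found.
FALLBACK = {
    "G0121": "45378",
}

def check_charge(selected_cpt: str, documented_actions: set[str]) -> str:
    """Return a CPT code consistent with what the note documents."""
    conflicts = INCOMPATIBLE.get(selected_cpt, set()) & documented_actions
    if conflicts:
        return FALLBACK[selected_cpt]
    return selected_cpt

# A polypectomy was documented, so the screening-only code is replaced.
assert check_charge("G0121", {"polypectomy"}) == "45378"
assert check_charge("G0121", set()) == "G0121"
```

Because the note is built from structured selections rather than free text, a rule like this can fire immediately at signature time instead of days later during coder review.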
Identical data were extracted from two applications: the endoscopy software application and the professional practice billing application. These data were the medical record number (MRN), date of service, diagnosis, procedure, CPT codes, ICD-9-CM codes, and CPT modifiers (automatically generated in the endoscopy software, and manually entered by the clerk based upon the physician's documentation on the paper charge sheet). The data were analyzed in an Excel spreadsheet, and the MRN and date of service were used to link the cases in order to perform a matched comparison for identical procedures. The paired data were exclusively categorized as: 1) exact match (same number of codes, same codes); 2) auto-generated code set resulting in higher reimbursement than the manual code set, because the additional codes impact the diagnosis-related group (DRG) code; 3) auto-generated code set resulting in lower reimbursement than the manual code set, because the reduction in codes impacts the DRG code; and 4) other discrepancies between codes, including omission of charges or codes that cannot be reimbursed.
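The matching step described above (linking on MRN plus date of service, then labeling each pair) can be sketched in a few lines. The field names, record layout, and category labels here are illustrative assumptions rather than the study's actual spreadsheet formulas:

```python
# Minimal sketch of the matched comparison: link records from the two
# systems by (MRN, date of service), then label each pair by how the
# auto-generated code set differs from the manual one. Field and
# category names are illustrative assumptions.

def categorize(auto_codes: set[str], manual_codes: set[str]) -> str:
    if auto_codes == manual_codes:
        return "exact match"
    if auto_codes > manual_codes:      # automated set adds codes
        return "additional auto codes"
    if auto_codes < manual_codes:      # automated set drops codes
        return "fewer auto codes"
    return "other discrepancy"

def match_cases(auto_rows, manual_rows):
    """Pair rows on (MRN, date of service) and categorize each pair."""
    manual_by_key = {(r["mrn"], r["dos"]): r for r in manual_rows}
    results = {}
    for r in auto_rows:
        key = (r["mrn"], r["dos"])
        if key in manual_by_key:
            results[key] = categorize(set(r["icd9"]),
                                      set(manual_by_key[key]["icd9"]))
    return results

auto = [{"mrn": "001", "dos": "2010-07-06", "icd9": ["V76.51", "211.3"]}]
manual = [{"mrn": "001", "dos": "2010-07-06", "icd9": ["V76.51"]}]
assert match_cases(auto, manual)[("001", "2010-07-06")] == "additional auto codes"
```

In the study itself the higher/lower-reimbursement split additionally depended on whether the code difference changed the DRG, which would require a DRG grouper rather than simple set comparison.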
RESULTS

There were a total of 2,923 procedures conducted during the time period. The two most frequent procedures, colonoscopies and esophagogastroduodenoscopies (EGD), accounted for 98% of the procedures, as shown in Table 1.

Table 1. Procedures conducted during the study period

The ICD-9 diagnostic codes were an exact match for 31% (905/2923) of the procedures. The charge by documentation system generated more diagnostic codes than the manual approach for 62% (1818/2923) of the procedures. As shown in Table 2, there were 1-8 additional diagnosis codes, with one additional code for 24% of the cases and two additional codes for 20%. Eight additional diagnostic codes were found for three separate procedures with different patients. In addition, for 24% (693/2923) of the cases there was inadequate supporting clinical documentation in the physician's progress note; for these cases, the charge by documentation system would have had a reduced risk of the claim being denied, and would have avoided compliance issues, as compared to the manual approach. Typically, medical coders detect such a discrepancy, but in this case study only a clerk reviewed the information and so likely would not have. In that situation, it is likely that the insurance providers denied the initial reimbursement request.
The level of specificity of the ICD-9 codes was also compared. The level of specificity of coding (not including additional codes) was identical for 97% of the cases (2827/2923 procedures). The discrepant codes were each individually reviewed by one investigator to compare the level of specificity. For instance, the computer may have coded '786.5 - Chest pain' where the physician checked '786.59 - Discomfort, pressure or tightness in chest'. For the 96 discrepant codes, the results were mixed: 48 were more specific when automatically generated and 48 were more specific when generated by the physician. Therefore, charge by documentation had no detectable impact on the level of specificity of coding.

CONCLUSIONS

Our primary finding was that the automated approach captured more diagnostic (ICD-9) codes than were captured with the traditional approach for roughly two-thirds of the procedures. Similar findings were reported in a study in anesthesia where information from pre-anesthesia evaluation forms was used to automatically identify diagnostic codes.xxxvi In that study, at least one additional ICD-9 code was found for 12% of the charts, with a corresponding 1.5% increase in hospital reimbursement. Capturing accurate ICD-9 diagnostic codes is important for more than reimbursement purposes. Despite published concerns about the validity of the codes, they are routinely used in publicly available quality indicators and for research purposes. It is possible that transitioning to ICD-10 may increase the capture of diagnostic codes, although studies suggest no difference has been found to date.xxxvii This will only be possible if providers understand what the new codes are, how to select them, and what accompanying text is needed in the progress note. A system such as the one used in this case study could therefore also serve a training function as providers become familiar with the new code set.
The study did not find that the software makes diagnosis codes more or less specific. Several authors have expressed concern that the increased use of electronic health records might be correlated with an increase in 'upcoding', where automated functions contribute to unjustified increases in reimbursement.xxxviii Although this study investigated only one software product, we did not find this to be a concern, and we suggest that the design of any software package using this approach could likely be modified if upcoding does increase. There is a downside to the clinical documentation driving the billing directly: if a diagnosis or condition is not documented, it will not generate a diagnosis code. The computer application is only as good as the rules it uses and what is being documented. In this case, a medical record coder's role shifts from generating the codes to a quality and process improvement role reviewing the entire process of clinical documentation. Because the documentation and diagnoses are highly codified, reviews of specific populations of patients can be done effectively. The cost implications of the level of discrepancy found in this study are important, particularly given an anticipated need to decrease costs in the coming years in order to continue providing services. Compliance issues and reductions in denials or reduced payments based on a lack of documentation make charging by documentation a worthwhile endeavor. The additional diagnosis capture also improves research and data quality efforts. As hospitals migrate from ICD-9 to ICD-10, the increased complexity of coding also likely increases the utility of systems that can facilitate the transition for physicians and other providers.
There are a number of limitations to this study. The research was conducted at one site, and thus is limited in scope and size. Nevertheless, an exhaustive sample was analyzed using data derived from actual care provision for real patients. We believe that the results are sufficiently encouraging that the site which was studied should switch to the charge by documentation approach for reimbursement. In addition, this study was done exclusively in one endoscopy center, which by nature has a limited number of types of procedures, all of which are relatively easy to document. Since there is a limit to the number of items typically documented with something like a colonoscopy, the content can be highly tailored to that specific procedure. A system like this may not be easily expanded to a surgical or family medicine site that has greater variation in workflow and progress note documentation.

Based on this study, we believe that 'charge by documentation' systems are beneficial in that they provide a comparable level of specificity and a higher overall volume of captured diagnosis codes, with accompanying supporting documentation, as compared to a more traditional approach. In addition, there was a better match between diagnostic codes and supporting documentation, which reduces the time to receive reimbursement and likely also increases reimbursement. This streamlining also eliminates manual data entry by a clerk or medical coder and reduces the time from documentation to the creation of the bill. Although a formal economic analysis was not performed, it is estimated that the revenue enhancements and savings with this approach exceed the costs of implementing and maintaining the software application.
Procedure                  Number (Percent of total)
Colonoscopy                1926 (65.9%)
Upper GI Endoscopy         956 (32.7%)
Flexible Sigmoidoscopy     30 (1.0%)
Other                      11 (0.4%)
Total                      2923 (100.0%)

Table 1. Number of procedures performed at the endoscopy clinic
Number of additional ICD-9 codes    Number (Percent of total)
1                                   702 (38.6%)
2                                   595 (32.7%)
3                                   310 (17.1%)
4                                   143 (7.9%)
5                                   47 (2.6%)
6                                   14 (0.8%)
7                                   4 (0.2%)
8                                   3 (0.2%)

Table 2. Number of additional ICD-9 codes generated by the automated method compared to the manual method
Figure 1 - Automated Coding Support Menu
Figure 2 – Pop-up Alert to Match Codes and Note Documentation
Highlights

• We compare billing codes from a manual approach and from computerized support.
• More ICD-9 codes were generated with computerized support.
• Fewer codes had missing supporting documentation with computerized support.
• The 'charge by documentation' approach is recommended for predictable workflows.
REFERENCES
i. Office of the Secretary, HHS. HIPAA administrative simplification: modification to medical data code set standards to adopt ICD-10-CM and ICD-10-PCS. Proposed rule. Fed Regist 2008;73:49795-832.
ii. Steindel SJ. International classification of diseases, 10th edition, clinical modification and procedure coding system: descriptive overview of the next generation HIPAA code sets. J Am Med Inform Assoc 2010;17(3):274-282.
iii. Ross-Davis SV. Preparing for ICD-10-CM/PCS: one payer's experience with General Equivalence Mappings (GEMs). Perspect Health Inf Manag 2012;9(Winter):1e.
iv. Production of ICD-11: the overall revision process. [http://www.who.int/classifications/icd/ICDRevision.pdf] Accessed January 20, 2014.
v. Heiman HL, Rasminsky S, Bierman JA, et al. Medical students' observations, practices, and attitudes regarding electronic health record documentation. Teach Learn Med 2014;26(1):49-55.
vi. Unertl KM, Johnson KB, Lorenzi NM. Health information exchange technology on the front lines of healthcare: workflow factors and patterns of use. J Am Med Inform Assoc 2012;19(3):392-400.
vii. Condon J, Barefield A. Assessment of success on the RHIA certification examination: a comparison of baccalaureate program graduates and postbaccalaureate certificate program graduates. Perspect Health Inf Manag 2012;9:1-12.
viii. Januel J-M, Luthi J-C, Quan H, Borst F, Taffe P, Ghali WA, Burnand B. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data. BMC Health Serv Res 2011;11:194-207. doi: 10.1186/1472-6963-11-194.
ix. Quan H, Li B, Saunders LD, Parsons GA, Nilsson CI, Alibhai A, Ghali WA. Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database. Health Serv Res 2008;43:1424-1441.
x. Henderson T, Shepheard J, Sundararajan V. Quality of diagnosis and procedure coding in ICD-10 administrative data. Med Care 2006;44(11):1011-9.
xi. Dougherty M, Seabold S, White SE. Study reveals hard facts on CAC. Journal of AHIMA 2013;84(7):54-56.
xii. Spring SF, Sandberg WS, Anupama SB, Walsh JL, Driscoll WD, Raines DE. Automated documentation error detection and notification improves anesthesia billing performance. 2007;106(1).
xiii. Stanfill MH, Williams M, Fenton SH, Jenders RA, Hersh WR. A systematic literature review of automated clinical coding and classification systems. J Am Med Inform Assoc 2010;17:646-651.
xiv. Meystre SM, Savova GK, Kipper-Schuler KC, Hurdle JF. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb Med Inform 2008:128-44.
xv. Aronson AR, Lang F. An overview of MetaMap: historical perspective and recent advances. J Am Med Inform Assoc 2010;17:229-236.
xvi. Friedman C, Shagina L, Lussier Y, Hripcsak G. Automated encoding of clinical documents based on natural language processing. J Am Med Inform Assoc 2004;11(5):392-402.
xvii. Xu H, Stenner SP, Doan S, Johnson KB, Waitman LR, Denny JC. MedEx: a medication information extraction system for clinical narratives. J Am Med Inform Assoc 2010;17(1):19-24.
xviii. Zeng QT, Goryachev S, Weiss S, Sordo M, Murphy SN, Lazarus R. Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system. BMC Med Inform Decis Mak 2006;6:30.
xix. Taira RK, Soderland SG, Jakobovits RM. Automatic structuring of radiology free-text reports. Radiographics 2001;21(1):237-245.
xx. Pestian JP, Brew C, Matykiewicz P, Hovermale DJ, Johnson N, Cohen KB, et al. A shared task involving multi-label classification of clinical free text. BioNLP 2007: Biological, translational, and clinical language processing; 2007:97-104.
xxi. Aronson AR, Bodenreider O, Demner-Fushman D, Fung KW, Lee VK, Mork JG, et al. From indexing the biomedical literature to coding clinical text: experience with MTI and machine learning approaches. BioNLP 2007: Biological, translational, and clinical language processing; 2007:105-12.
xxii. Crammer K, Dredze M, Ganchev K, Talukdar PP, Carroll S. Automatic code assignment to medical text. BioNLP 2007: Biological, translational, and clinical language processing; 2007:129-36.
xxiii. Baud R. A natural language based search engine for ICD10 diagnosis encoding. Med Arh 2004:79-80.
xxiv. Liu V, Clark MP, Mendoza M, Saket R, Gardner MH, Turk BJ, Escobar GJ. Automated identification of pneumonia in chest radiograph reports in critically ill patients. BMC Med Inform Decis Mak 2013;13(1):1-8.
xxv. Elkins JS, Friedman C, Boden-Albala B, Sacco RL, Hripcsak G. Coding neuroradiology reports for the Northern Manhattan Stroke Study: a comparison of natural language processing and manual review. Comput Biomed Res 2000:1-10.
xxvi. Ford E, Nicholson A, Koeling R, Tate AR, Carroll J, Axelrod L, et al. Optimising the use of electronic health records to estimate the incidence of rheumatoid arthritis in primary care: what information is hidden in free text? BMC Med Res Methodol 2013;13(1):105.
xxvii. Hasman A, de Bruijn LM, Arends JW. Evaluation of a method that supports pathology report coding. Methods Inf Med 2001;40(4):293-7.
xxviii. Lussier YA, Shagina L, Friedman C. Automating SNOMED coding using medical language understanding: a feasibility study. Proc AMIA Symp 2001:418-22.
xxix. Pakhomov SV, Buntrock JD, Chute CG. Automating the assignment of diagnosis codes to patient encounters using example-based and machine learning techniques. J Am Med Inform Assoc 2006:516-25.
xxx. Savova GK, et al. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. J Am Med Inform Assoc 2010;17(5):507-513.
xxxi. Haug PJ, Christensen L, Gundersen M, Clemons B, Koehler S, Bauer K. A natural language parsing system for encoding admitting diagnoses. Proc AMIA Annu Fall Symp 1997:814-8.
xxxii. Morsch M. Computer-assisted coding: the secret weapon. CAC does not eliminate the need for medical-coding professionals to be involved in the coding process, but it can make them more productive and accurate. Health Management Technology 2010;31(2):24.
xxxiii. Al-Haddad MA, Friedlin J, Kesterson J, Waters JA, Aguilar-Saavedra JA, Schmidt CM. Natural language processing for the development of a clinical registry: a validation study in intraductal papillary mucinous neoplasms. HPB (Oxford) 2010;12(10):688-695.
xxxiv. Rosenbloom ST, Denny JC, Xu H, Lorenzi N, Stead WW, Johnson KB. Data from clinical notes: a perspective on the tension between structure and flexible documentation. J Am Med Inform Assoc 2011;18(2):181-186.
xxxv. Resnik P, Niv M, Nossal M, et al. Communication of clinically relevant information in electronic health records: a comparison between structured data and unrestricted physician language. Proceedings of Computer Assisted Coding 2008; 2008:1-10.
xxxvi. Reich DL, Kahn RA, Wax D, Palvia T, Galati M, Krol M. Development of a module for point-of-care charge capture and submission using an anesthesia information management system. Anesthesiology 2006;105(1):179-86.
xxxvii. Quan H, Li B, Saunders LD, Parsons GA, Nilsson CI, Alibhai A, Ghali WA. Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database. Health Serv Res 2008;43:1424-1441.
xxxviii. Pitts SR. Higher complexity ED billing codes: sicker patients, more intensive practice, or improper payments? N Engl J Med 2012;367:2465-2467.