CHEST
Commentary
Will Performance Measurement Lead to Better Patient Outcomes? What Are the Roles of the National Quality Forum and Medical Specialty Societies?

James M. O'Brien, MD; Janet Corrigan, PhD; Joyce Bruno Reitzner, MBA, MIPH; Lisa K. Moores, MD; Mark Metersky, MD; Robert C. Hyzy, MD; Michael H. Baumann, MD; John Tooker, MD; Bernard M. Rosof, MD; Helen Burstin, MD; and Karen Pace, PhD, MSN

Performance measures (PMs) are specified metrics by which a health-care provider's care can be compared with national benchmarks. The use of PMs is a key component of efforts to improve the quality and value of health care. The National Quality Forum (NQF) is the federally recognized endorser of PMs. From 2006 to 2009, the Quality Improvement Committee (QIC) of the American College of Chest Physicians engaged in the review of proposed PMs as a member of the NQF. This article provides a review of the QIC's experience with PMs and NQF membership and the lessons learned, an overview of the enhancements made to the NQF endorsement process in 2010 and 2011, and a discussion of the next steps that would further strengthen the measure development and endorsement processes and increase the likelihood of measurement leading to better patient outcomes. CHEST 2012; 141(2):300–307

Abbreviations: ACCP = American College of Chest Physicians; CAP = community-acquired pneumonia; CMS = Centers for Medicare and Medicaid Services; CSAC = Consensus Standards Approval Committee; DHHS = Department of Health and Human Services; NQF = National Quality Forum; PM = performance measure; QIC = Quality Improvement Committee
Affiliations: From the Division of Pulmonary, Allergy, Critical Care, and Sleep Medicine, Center for Critical Care (Dr O'Brien), Ohio State University Medical Center, Columbus, OH; the National Quality Forum (Drs Corrigan, Burstin, and Pace), Washington, DC; the Department of Medicine (Dr Moores), Uniformed Services University of the Health Sciences, Bethesda, MD; the Center for Bronchiectasis Care, Division of Pulmonary and Critical Care Medicine (Dr Metersky), University of Connecticut School of Medicine, Farmington, CT; the Division of Pulmonary and Critical Care Medicine (Dr Hyzy), University of Michigan, Ann Arbor, MI; the Department of Medicine (Dr Baumann), University of Mississippi Health Care, Jackson, MS; the American College of Physicians (Dr Tooker), Philadelphia, PA; the Board of Directors (Dr Rosof), Huntington Hospital, and the Physicians Consortium for Performance Improvement (Dr Rosof), Huntington, NY; and Healthcare Practice, Informatics, and Research (Ms Reitzner), American College of Chest Physicians, Northbrook, IL.

Funding/Support: The authors have reported to CHEST that no funding was received for this study.

Correspondence to: James M. O'Brien, MD, Ohio State University Medical Center, Division of Pulmonary, Allergy, Critical Care and Sleep Medicine, Center for Critical Care, 201 Davis HLRI, Columbus, OH 43210; e-mail: James.OBrien@osumc.edu

© 2012 American College of Chest Physicians. Reproduction of this article is prohibited without written permission from the American College of Chest Physicians (http://www.chestpubs.org/site/misc/reprints.xhtml).

DOI: 10.1378/chest.11-1942

Editor's Note: On March 25, 2011, a meeting of the leaderships of the National Quality Forum (NQF), the Quality Improvement Committee (QIC) of the American College of Chest Physicians (ACCP), and the journal took place in Hollywood, Florida. The NQF is a private-sector entity created in 1999 to standardize health-care performance measures (PMs) in the United States, following a 1998 recommendation of the Presidential Advisory Commission on Consumer Protection and Quality in the Healthcare Industry. Pursuant to the 2008 Medicare Improvements for Patients and Providers Act, the federal government now provides support for much of the NQF's standard-setting work. The QIC was established in 2005 as a full standing committee of the ACCP, the largest pulmonary, critical care, and sleep medicine medical society in the world. Its charge was to advocate for improved health-care outcomes for patients in a variety of ways, including working with organizations such as the NQF.

The purpose of the meeting was threefold: (1) to better understand the processes by which the NQF makes decisions and the concerns of the QIC related to these decisions, (2) to determine how the QIC and medical specialty societies in general can be most effective in working with the NQF to improve the quality of American health care, and (3) to decide on the format and content of an article to be published in CHEST that summarizes the interactions between the NQF and the QIC and the proceedings of the meeting. The article that follows summarizes the lessons learned by the QIC and the NQF and discusses necessary steps for performance measurement to lead to better patient outcomes. As the chair of the meeting, I found it to be enlightening and hopeful. As editor of the journal, I believe that the article advances the cause.
Lessons Learned by the ACCP

The Centers for Medicare and Medicaid Services (CMS) report that health-care expenditures for 2009 reached an estimated $2.5 trillion and comprised 17.3% of the gross domestic product.1 One effort to improve the value of health care (reduced costs and/or increased quality) is the implementation of pay-for-performance programs. Pay-for-performance, also known as value-based purchasing, uses PMs to determine reimbursement. PMs are the metrics by which a health-care provider's or a facility's performance is compared with national benchmarks.2 Pursuant to provisions in the Medicare Improvements and Extension Act of 2006, the NQF was identified as the organization responsible for the review and endorsement of national PMs.3 The Department of Health and Human Services (DHHS) relies extensively on NQF-endorsed measures when selecting measures for use in programs such as the CMS Physician Quality Reporting System3 or the CMS Electronic Health Records Incentive Program.4

Periodically, the NQF calls for developers to submit proposed PMs for consideration of NQF endorsement. Once submitted, an NQF expert panel uses the NQF Measure Evaluation Criteria5 to assess each proposed PM for: (1) importance to measure and report (addresses an area where significant gains in health-care quality can be made), (2) scientific acceptability of the measure properties (yields results that are valid and reliable), (3) usability (yields results that are understandable and useful for decision makers), and (4) feasibility (ability to ascertain the data needed to inform the PM).
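To make the anatomy of such a measure concrete before turning to how proposed measures are evaluated, the sketch below computes a compliance rate for a purely hypothetical process PM. The measure name, exclusion logic, benchmark value, and patient counts are all invented for illustration and are not taken from any NQF specification; the point is only the numerator/denominator/exclusion structure that real PM specifications define in far greater detail (data sources, code sets, reporting periods, and, for outcome measures, risk adjustment).

```python
# Illustrative sketch only: a hypothetical process measure, not an NQF specification.
from dataclasses import dataclass


@dataclass
class Case:
    """One eligible hospitalization abstracted for the hypothetical measure."""
    received_prophylaxis: bool          # numerator event
    documented_contraindication: bool   # denominator exclusion


def measure_rate(cases: list[Case]) -> float:
    """Compliance rate = numerator events / (eligible cases - documented exclusions)."""
    denominator = [c for c in cases if not c.documented_contraindication]
    if not denominator:
        return float("nan")  # the rate is undefined when the denominator is empty
    numerator = sum(c.received_prophylaxis for c in denominator)
    return numerator / len(denominator)


# Invented facility data and an assumed (hypothetical) national benchmark of 90%.
cases = [Case(True, False)] * 42 + [Case(False, False)] * 6 + [Case(False, True)] * 2
rate = measure_rate(cases)
print(f"Facility rate: {rate:.1%}; meets 90% benchmark: {rate >= 0.90}")
```

Running this toy example yields a rate of 87.5% (42 of 48 non-excluded cases), below the assumed benchmark; a real reporting program would also attach confidence intervals, attribution rules, and a defined reporting period to such a comparison.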
PMs deemed to meet the four Measure Evaluation Criteria are released for public comment.6 The expert panel reviews all public comments and may revise its recommendations in the draft PM report or request that the measure developer address concerns about the PM (the developer is under no obligation to do so). Following additional comment and voting by NQF members, the expert panel makes a recommendation to the NQF Consensus Standards Approval Committee (CSAC),7 which then makes a final recommendation to the NQF Board of Directors to fully endorse the PM, to endorse the PM for a limited time pending testing, or to not endorse the PM. Every 3 years, all endorsed PMs are reassessed to determine whether they continue to meet the NQF Measure Evaluation Criteria for endorsement.

The ACCP QIC, 2006-2009

The ACCP QIC was formed in the spring of 2005. Given the NQF's influential role, the first major expenditure requested by the QIC ($15,000 in 2006) was the membership fee to join the NQF. The QIC also quickly developed a process for commenting and voting on PMs. A screening process ensures that the QIC reviews only PMs that are relevant to pulmonary, critical care, and/or sleep medicine and that the ACCP possesses the expertise to adequately assess. Each QIC member independently provides feedback on each PM via an online survey, and the survey results are discussed on a conference call with the full committee to achieve consensus. To evaluate PMs, the QIC uses the ACCP Measure Evaluation Criteria,8 which were adapted from the NQF Measure Evaluation Criteria. The final QIC vote and comments are submitted to the NQF. The ACCP Board of Regents allowed the QIC to vote on PMs on behalf of the ACCP without prior Board of Regents approval.

Between joining the NQF in 2006 and the publication of this article, the ACCP reviewed 130 NQF PMs and 107 NQF best-practice recommendations. The QIC struggled with the most effective way of providing feedback to the NQF when measures were not at a standard the QIC felt was appropriate: if a PM did not meet the ACCP evaluation criteria, the QIC was uncertain whether it was more effective to disapprove the PM or to approve the measure with comments. Despite commenting on 90% of the PMs that were reviewed, QIC input from 2005 to 2009 resulted in perceptible changes on two occasions: (1) 032 Ventilator Bundle, for which the ACCP suggested the inclusion of definitions for "sedation vacations" and "readiness to wean," and changes were made in the PM, and (2) the VTE best practices, which provided a framework for the later-endorsed VTE measure set.
Early in 2006, the QIC reviewed, commented on, and voted on the NQF VTE Preferred Practices set. Despite significant concerns expressed by the QIC and others about several of the recommended practices, especially the endorsement of individualized risk assessment for VTE prophylaxis, the set was approved at the NQF Board of Directors meeting. The following month, the QIC filed a formal appeal to the CSAC. In that appeal, the QIC noted that the seventh edition of the ACCP Guidelines (arguably the most definitive guidelines on the topic in the world) specifically recommended against the use of individualized risk assessment9 and expressed concern that the required risk assessment might have the unintended consequences of either deterring prophylaxis or contributing to variability in prophylaxis administration patterns. Noting that most patients who are hospitalized meet the criteria for prophylaxis, the QIC argued instead for a measure requiring universal prophylaxis (or documentation of why prophylaxis was not given). While the NQF initially denied the appeal, the ACCP was invited in spring 2007 to debate the issue of risk assessment at the NQF meeting. Although the QIC was not included in conversations subsequent to that meeting, when the measures were put forward for comment and vote, the standards for VTE prophylaxis were consistent with the modification suggested by the QIC.

More commonly, the QIC was ineffective in altering PMs. For example, NQF 473: PN-006-073, "Appropriate DVT Prophylaxis in Women Undergoing Cesarean Delivery," assesses the percentage of women undergoing cesarean section who receive thromboprophylaxis. As the QIC noted in its feedback to the NQF, the ACCP Evidence-Based Antithrombotics Guidelines specifically recommended against routine pharmacologic thromboprophylaxis in this group because of a lack of supporting evidence.9 Other commenters noted that minor bleeding in this population is increased by the use of pharmacologic prophylaxis and raised concerns that routine use of unfractionated heparin could cause more thromboembolic events through heparin-induced thrombocytopenia than it would prevent in this low-risk group.10 Despite these concerns, the NQF endorsed the PM,11 acknowledging that "studies addressing thromboprophylaxis in pregnancy are limited and inconclusive. However, trials in all other groups demonstrate the importance and effectiveness of prophylaxis. Given the potentially devastating consequences of a perinatal DVT or PE, the SC believes that without evidence to the contrary, it is reasonable to assume that these findings are applicable to this population as well." From the perspective of the QIC, the NQF endorsed the PM without evidence that there was a significant burden of disease due to thromboembolic complications after cesarean delivery (the importance criterion in the NQF Measure Evaluation Criteria6) and without evidence that implementation of this measure would improve patient outcomes (the scientific acceptability criterion in the NQF Measure Evaluation Criteria6).

Throughout the process of reviewing PMs, QIC members considered the implications of each measure for their own health-care facilities, practices, and patients. This provided an opportunity to reflect on the general limitations of PMs. Although PMs are intended to improve the quality of care and, ultimately, patient outcomes, they may have unintended consequences that do not appreciably improve care or that even worsen care or increase costs. For example, a PM intended to improve the timeliness of antibiotic administration in patients with community-acquired pneumonia (CAP) is thought by many to have increased the rate of unnecessary antibiotic administration, because it may not be possible to complete a full diagnostic evaluation and confirm the diagnosis rapidly; some practitioners therefore simply administered antibiotics to all patients with possible CAP, some of whom had noninfectious causes for their complaints.12,13 A PM encouraging blood cultures for patients with CAP in EDs may have increased lengths of stay in some patients because of false-positive culture results.14 If a PM does not meet NQF standards, the potential deleterious effects of its implementation may not be balanced by a reasonable expectation of improved health-care quality.

Resources are required to collect, analyze, and report compliance with these measures. For organizations already operating with fewer resources, such a strain might impair the delivery of other care and exacerbate disparities.15 For patients with uncommon conditions or diseases for which PMs have not yet been developed, care may suffer as attention is drawn to "reportable" conditions. Because PMs have been linked to reimbursement, they may also create economic pressures to refuse patients at high risk of nonadherence, or to shift them from one provider or health-care facility to another, rather than to prevent the adverse events themselves.

The ACCP and the NQF: From the Outside Looking In

While the QIC recognizes that there can be differences of opinion among well-intentioned, knowledgeable individuals, it was our impression during our term of membership that the NQF did not rigorously enforce its own Measure Evaluation Criteria. In July 2009, after considerable deliberation, the ACCP chose not to renew its membership with the NQF (see e-Appendix 1, "Letter of Resignation").
Chief among the QIC's concerns was that some might infer that because the ACCP was a member organization, it had endorsed all measures that were ultimately approved. At the time of the resignation, there was no public reporting of the voting on each measure, which made it difficult for ACCP members to see the QIC's voting record. Additionally, the ACCP supports the NQF Measure Evaluation Criteria for measure endorsement and believes that, when they are adhered to, they result in useful PMs; however, the ACCP observed that some PMs were endorsed despite not meeting those criteria. Although no longer a voting member, the QIC continues to comment during the public comment period on all PMs within the ACCP's scope of expertise and has remained engaged with the evolution of the NQF, including the meeting with the NQF in March 2011.
The NQF's Improvement Journey

Just as we expect the health-care system to measure, report, and improve, it is the NQF's responsibility to continuously enhance the measure endorsement process. Input provided by the medical community and other stakeholders has helped to identify opportunities for improvement and resulted in significant enhancements. The NQF is particularly appreciative of the very thoughtful analysis and comments provided by the ACCP. In this section, we review the changes to the NQF's endorsement program that have been implemented over the last 2 years, which fall into two major categories: refinement of the Measure Evaluation Criteria and strengthening of the evaluation process.

Refinement of Measure Evaluation Criteria

The four NQF Measure Evaluation Criteria can be viewed as a hierarchy that guides the sequential process for evaluating measures. Building on previous work by McGlynn,16 the updated NQF approach could be summarized as follows. If a measure is not important, its other characteristics are less meaningful. If a measure is not scientifically acceptable, its results may be at risk for improper interpretation. If measure results are not interpretable or usable, it would not be wise to invest scarce resources in producing the performance results. However, if a measure addresses an important topic, has been specified in a way that is scientifically acceptable, and produces useful results, all efforts should be made to acquire the necessary information to make implementation feasible.
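Read as a decision procedure, this hierarchy amounts to evaluating the four criteria in order and treating an early failure differently from a late one. The sketch below is an informal paraphrase of that ordering for illustration only; the criterion names come from the text above, but the function, its messages, and the example ratings are ours and do not reproduce NQF logic or guidance.

```python
# Informal paraphrase of the hierarchical reading of the four Measure Evaluation Criteria.
def evaluate(ratings: dict[str, bool]) -> str:
    """Walk the criteria in order; earlier criteria gate the later ones."""
    if not ratings.get("importance to measure and report", False):
        return "stop: not important, so the other characteristics are less meaningful"
    if not ratings.get("scientific acceptability of measure properties", False):
        return "stop: results would be at risk of improper interpretation"
    if not ratings.get("usability", False):
        return "stop: not worth investing scarce resources in producing results"
    if not ratings.get("feasibility", False):
        return "important, sound, and usable: work to make implementation feasible"
    return "all four criteria satisfied; candidate for endorsement"


# A hypothetical measure that is important and valid but whose data are hard to obtain.
print(evaluate({
    "importance to measure and report": True,
    "scientific acceptability of measure properties": True,
    "usability": True,
    "feasibility": False,
}))
```

The asymmetry in the last branch mirrors the paragraph above: a measure that fails an early criterion is set aside, whereas one that falls short only on feasibility is a candidate for investment in acquiring the necessary data.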
In 2010, the NQF established two special task forces to focus on refinement of the first two Measure Evaluation Criteria: importance to measure and report and scientific acceptability of the measure properties. The NQF also established an expert panel to develop guidance and recommendations pertaining to measure harmonization.

Importance to Measure and Report

When a PM is submitted to the NQF, the first criterion it must satisfy is the importance to measure and report, defined as: the extent to which the specific measure's focus is evidence-based, is important to making significant gains in health-care quality, and offers an opportunity for improvement in health outcomes for a specific high-impact aspect of health care where there is variation in or overall less-than-optimal performance.
To satisfy this criterion, the measure must meet three subcriteria: (1) the measure's focus must address a national health priority, such as those identified in the DHHS National Quality Strategy,17 or a demonstrated high-impact aspect of health care (eg, a leading cause of morbidity or mortality); (2) there must be evidence that quality problems exist and that there is opportunity for improvement; and (3) for measures of structure, care processes, or intermediate outcomes, there must be evidence of a clear linkage to improved health or avoidance of harm. For example, while the ACCP and the NQF may not agree about the VTE prophylaxis measure for women undergoing cesarean section, pulmonary embolism remains the most common cause of maternal death in the United States.18

Evidence supporting clinical decisions comes in many different forms (eg, peer-reviewed publications, practice guidelines from authoritative sources, expert assessments); it is often inconsistent and incomplete, and it can be difficult to interpret. Both evidence and expert judgment play a role in evaluating measures against criteria; however, judgment is best applied when experts have a thorough understanding of the evidence that does or does not exist. In fall 2009, the NQF established the Task Force on Evidence, chaired by David Shahian, MD, of the Society of Thoracic Surgeons and Harvard Medical School, which was charged with identifying the type and strength of evidence needed to satisfy the importance to measure and report criterion and with developing guidance to assist expert panels in evaluating measures. The Task Force on Evidence completed its work in late 2010, and the NQF implemented new requirements and processes effective with projects initiated after January 2011,19 including the following:
• Requirements for measure stewards pertaining to the synthesis and grading of evidence that must accompany a measure when it is submitted to the NQF for consideration.
• Modifications to the NQF Measure Evaluation Criteria and subcriteria to further clarify evidence requirements for various types of measures.
• Provision of guidance to NQF expert panels pertaining to the quantity, quality, and consistency of evidence, including a rating scale and decision table.

In conducting its work, the task force built on the work of groups involved in measure development and practice guidelines, such as the Physician Consortium for Performance Improvement and the Institute of Medicine, in an effort to align NQF requirements with other related activities.

Scientific Acceptability of Measure Properties

The second criterion a measure must satisfy to be endorsed by the NQF is scientific acceptability of the measure properties, which refers to the reliability and validity of the measure as specified. In 2010, the NQF established the Task Force on Measure Testing, chaired by Timothy Ferris, MD, of Massachusetts General Hospital, to further define the scope and types of reliability (ie, repeatability or precision of measurement) and validity (ie, correctness of measurement) testing that should be required of measure developers and to develop guidance for NQF expert panels on how best to evaluate the results of testing.

Reliability and validity are not all-or-none properties; measures of reliability and validity produce graduated results that always require interpretation. Threats to reliability include ambiguous measure specifications (including definitions, codes, data collection, and scoring) and small case volume, while threats to validity can include inappropriate exclusions, lack of appropriate risk adjustment or stratification for outcome or resource use measures, and systematically missing or incorrect data. In conducting its work, the Task Force on Measure Testing made a deliberate attempt to balance the goal of endorsing measures with adequate reliability and validity to be meaningful against the risk of setting the bar so high as to stifle measure development and use. The consequences of using unreliable and invalid measures can at times be significant, both for those being measured and for those using measures to select health-care providers, while the consequences of not measuring may mean that critical gaps in quality and safety remain unchanged. The task force completed its report in late 2010, including a system for rating measures on reliability and validity that is being used by all NQF expert panels engaged in measure evaluation for new projects beginning in 2011.20
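One way to see why reliability is graduated rather than all-or-none is the commonly used signal-to-noise formulation, in which a measure's reliability for a given provider is the share of total variance attributable to true between-provider differences rather than sampling noise. The sketch below is a simplified illustration under a binomial model with a crude method-of-moments variance estimate; the rates, case volumes, and estimator choice are invented for illustration and are not drawn from the NQF task force report.

```python
# Hypothetical signal-to-noise reliability sketch for a pass/fail process measure.
# reliability_i = between-provider ("signal") variance /
#                 (between-provider variance + sampling ("noise") variance for provider i)
from statistics import mean, pvariance

# (observed compliance rate, number of cases) per provider; all values invented.
providers = [(0.92, 400), (0.88, 350), (0.95, 60), (0.85, 25), (0.70, 12)]

rates = [rate for rate, _ in providers]
# Sampling variance of each observed rate under a binomial model: p(1 - p) / n.
noise = [rate * (1 - rate) / n for rate, n in providers]
# Crude method-of-moments estimate of the true between-provider variance.
signal = max(pvariance(rates) - mean(noise), 0.0)

for (rate, n), sampling_var in zip(providers, noise):
    reliability = signal / (signal + sampling_var) if (signal + sampling_var) > 0 else 0.0
    print(f"rate={rate:.2f}  cases={n:4d}  estimated reliability={reliability:.2f}")
```

With these invented numbers, the identically specified measure is estimated to be highly reliable for the 400-case provider (about 0.94) but nearly uninformative for the 12-case provider (about 0.14), which is why the guidance asks expert panels to interpret graded testing results and why small case volume appears above as a threat to reliability.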
Finally, although it is recognized that evidence of reliability and, in particular, validity is accumulated over time, in most instances measure developers are now required to submit complete testing results prior to initial endorsement of the measure by the NQF. The NQF does grant "time-limited" endorsement when a measure is needed to satisfy a time-sensitive legislative mandate, there is no endorsed measure available, and the measure is not complex. For time-limited measures, stewards have 1 year to submit complete testing results to the NQF to maintain endorsement status.

The evaluation of feasibility also considers potential and actual unintended consequences of a measure. At the time of initial endorsement, most measures are not in widespread use, so the focus tends to be on potential unintended consequences. Once a measure has been endorsed, the NQF can initiate an ad hoc review at any time if there is evidence of unintended consequences (eg, the revision of the ED antibiotic administration measure for presumed pneumonia cited in this article). The engagement of the clinical community is critical to identifying both potential and actual unintended consequences, as evidenced by the recent input from the ACCP that provided the impetus for the NQF to request that a measure steward remove interhospital transfers from a new measure related to ICU mortality and length of stay.

Measure Harmonization

The current quality landscape includes many quality reporting initiatives and measure developers. Duplicative and overlapping measures can result when measure development initiatives focus on different settings, patient populations, or data platforms (eg, electronic health records, claims data) but address the same measure concept (eg, pain). Duplicative and overlapping measures with disparate specifications can create confusion on the part of users and increase the data collection burden. In 2010, the NQF convened an expert panel to develop operational guidance for achieving measure harmonization. The panel's recommendations, implemented as of January 2011, include guidance for both measure developers and the NQF project steering committees charged with evaluating measure harmonization, and they resulted in refinements to both the NQF Measure Submission requirements and the Measure Evaluation Criteria.21

Strengthening the Evaluation Process

The 2010 efforts aimed at refining the evaluation criteria and developing operational guidance for measure developers and the NQF expert panels were implemented for projects initiated after January 2011.
The NQF has also taken steps to strengthen other aspects of the endorsement process, such as conflict of interest policies and transparency.

Conflict of Interest: The NQF endorsement process relies heavily on expert panels to evaluate measures. For each project, a multi-stakeholder steering committee is established to assume primary responsibility for the evaluation of measures. Technical advisors may also be appointed to provide in-depth knowledge and expertise in very specific clinical areas or for methodologic issues, such as risk adjustment. Starting in 2010, the NQF has posted proposed rosters for steering committees and technical advisors, along with biographical sketches, as well as the submitted measures, for a 2-week period to provide the public an opportunity to identify potential gaps in expertise and to raise any concerns regarding real or perceived conflicts of interest. The CSAC leadership is responsible for reviewing public input and determining whether modifications in the committee structure are needed to provide the necessary breadth and depth of expertise. The NQF's conflict of interest policies were strengthened in 2010, and all potential steering committee members and technical advisors are required to complete comprehensive disclosure of interest forms, which are reviewed by the NQF's general counsel.22

Measure Evaluation Guidance: The process of evaluating measures has become more structured in an attempt to promote greater consistency in the application of the Measure Evaluation Criteria and subcriteria. As a result of the task force recommendations, technical advisors and steering committee members now rate measures against the criteria and subcriteria on a three-point scale (ie, high, moderate, low), and experts are provided with detailed guidance on what constitutes adequate evidence to substantiate ratings at each level.

Transparency and Public Input: All project-related conference calls, webinars, and face-to-face meetings are now open to the public. All meeting transcripts and recordings, along with public comments and voting results, are available on the NQF Web site. For each PM, the NQF also posts the measure submission and the ratings of the measure by the technical advisors and steering committee members. The NQF welcomes and strongly encourages input from all stakeholders at each step in the endorsement process. The NQF does not define consensus as unanimity, but rather as an opportunity for all "voices to be heard."
Because the NQF makes decisions by consensus rather than by unanimous vote, NQF membership does not imply that all members supported all endorsed measures. The voice of the medical community is a critical input to all aspects of the "quality enterprise," and the NQF is appreciative of the valuable input provided by the ACCP QIC and cognizant of the need to foster effective ongoing communication with professional societies and other stakeholders.
Communication and Collaboration: Keys to Continued Improvement

The joint meeting between the ACCP and the NQF provided both parties with a greater appreciation of each other's perspectives and of the efforts underway to address challenges. While the QIC approach to PM review focuses on how a proposed measure will affect ACCP members, their health-care organizations, and their patients, the NQF is charged with convening and meeting the needs of a broader range of stakeholders, including industry, payers, and patient advocacy groups. More recently, the NQF has established policies and procedures that should address many of the ACCP's concerns. These policies govern areas such as harmonization (avoiding the endorsement of competing measures that address the same condition, event, outcome, process, or population) and the incorporation of evidence grading in the development of PMs. Implementation of these policies is underway, and both parties recognize that ongoing assessment will be critical to effective implementation.

The dialogue between the ACCP and the NQF also identified issues of mutual concern that will require the broader engagement of the many groups involved in the quality enterprise to resolve. For example, when the QIC recommends that the NQF modify a proposed measure and the NQF expert panels concur, the ultimate decision to revise the measure rests with the measure developer; the NQF can only decide whether or not to endorse the PM as proposed.

In March 2011, the secretary of the DHHS released the first National Quality Strategy (NQS), developed with input from the National Priorities Partnership, a multi-stakeholder group convened by the NQF and cochaired by Bernard Rosof, from the Physician Consortium for Performance Improvement, and Helen Darling, from the National Business Group on Health. The NQS identified three overarching priorities (better health care, affordable care, and healthier people and communities), and efforts are now underway to specify measurable goals, metrics, and strategic opportunities. These national priorities and goals will play a major role in focusing future measure development and endorsement activities.
Many societies create practice guidelines that serve as a basis for PMs, and some develop PMs themselves. Understanding the complexity and challenges of the NQF process also made apparent the importance of strengthening and filling gaps in the clinical evidence base and of translating evidence into practice guidelines that provide a strong foundation for measure development. Considering PMs during guideline development creates an opportunity to identify gaps in evidence and take steps to fill them, to address differences in the interpretation of evidence that may lead to different guidelines across specialty areas, to ensure that guidelines have the necessary specificity to support measure development, and to identify areas ready for measure development as well as topics requiring more evidence before specific practices can be recommended and measured.

The NQF endorsement process is one link in a "supply chain" of activities that includes:

• Identification of national priorities for measurement and improvement.
• Support for clinical research and guideline development.
• Development and testing of PMs.
• NQF endorsement of PMs.
• Development of the necessary data platform to support measurement.
• Implementation of measures in various payment and public reporting programs and for quality improvement purposes, and ongoing evaluation of their use and usefulness in achieving improved patient outcomes.

Medical specialty societies have important roles to play in each step of the supply chain, as do other stakeholders. Current PMs draw on many data sources, including abstracted medical records, electronic health records, clinical registries, administrative data (eg, claims, laboratory results, prescriptions), and patient-reported data. Some specialty societies have established registries, and others have been leaders in the design and development of electronic health records and of specialty-specific performance measurement and improvement tools that will allow for better assessment of performance gaps and development of methods for eliminating those gaps. Societies should consider ways in which they can collaborate with others to marshal resources for these critical efforts.

NQF-endorsed measures are used in many different applications, including quality improvement, professional certification and accreditation programs, incentive payments tied to "meaningful use" of health information technology, and payment and public reporting programs.
Medical specialty societies offer a direct mechanism for physicians to provide feedback on how a PM affects care delivery, whether the intended process or outcome can be reliably measured, and what limitations, if any, exist or what unintended consequences may arise. In all of these areas, collaboration of medical societies and other stakeholders is integral. Joint efforts improve efficiencies and allow for shared resources and expertise. Multi-stakeholder processes are particularly challenging and rarely result in unanimity, but they offer the best hope for finding some degree of "common ground" on ways to address our country's safety, quality, and cost challenges.

Conclusion

PMs are integral to improving the quality and safety of patient care and achieving the best patient outcomes. More recently, PMs have become a key component of accountability programs, in particular value-based payment programs. Over the last decade, the NQF has evolved to play a central role in convening multiple stakeholders to endorse standardized PMs and identify the best available measures for use in payment and public reporting programs. Medical specialty societies are valuable and trusted resources with important roles to play in establishing practice guidelines, identifying measure gaps, developing PMs, and providing a feedback loop to ensure that measures are working to support improvement at the bedside. Their memberships bring vast clinical knowledge and patient care experience. As advocates for physicians and patients, societies also create opportunities for physician engagement and participation in the broader quality enterprise, an important role at a time when the pace of change in health care is extraordinary. If specialty societies fail to engage, their members will be relegated to trying to alter the course of a train speeding down the tracks. Although the ACCP and the NQF have different cultures and membership compositions, both are driven by a mission to improve health and health care. The two organizations have made progress in developing a relationship that allows each to leverage the other's strengths, and both recognize that ongoing collaboration and communication are keys to success.

Acknowledgments

Financial/nonfinancial disclosures: The authors have reported to CHEST the following conflicts of interest: All authors are current members of the American College of Chest Physicians Quality Improvement Committee, except Ms Bruno Reitzner, who is a staff member. Dr O'Brien serves on the board of directors for the Sepsis Alliance, a not-for-profit organization dedicated to improving awareness and care of patients with sepsis.
He is not paid for this directorship. In 2009, Dr O'Brien gave a lecture related to sepsis as a result of an unrestricted grant from BRAHMS, Inc. He donated the honorarium to the Sepsis Alliance and received airfare and two nights' hotel accommodations totaling approximately $1,500. Dr Corrigan serves on the boards of several not-for-profit organizations, including the eHealth Initiative, the National eHealth Collaborative, and the National Center for Healthcare Leadership. She is not paid for these directorships. Dr Metersky serves as a consultant for the Centers for Medicare and Medicaid Services on various quality improvement and patient safety initiatives; remuneration is given to his employer. Dr Metersky previously served on the DVT Prophylaxis Board for the Jefferson School of Public Health, which is supported by Aventis. Dr Hyzy serves as a consultant to the Michigan Health and Hospitals Association Keystone Center for Patient Safety. Drs Moores, Baumann, Tooker, Rosof, Burstin, and Pace and Ms Bruno Reitzner have reported that no potential conflicts of interest exist with any companies/organizations whose products or services may be discussed in this article.

Additional information: The e-Appendix can be found in the Online Supplement at http://chestjournal.chestpubs.org/content/141/2/300/suppl/DC1.
References

1. Truffer CJ, Keehan S, Smith S, et al. Health spending projections through 2019: The recession's impact continues. Health Aff (Millwood). 2010;29(3):522-529.
2. Baumann MH, Dellert E. Performance measures and pay for performance. Chest. 2006;129(1):188-191.
3. Proposed revisions to payment policies under the Physician Fee Schedule, and other Part B payment policies for CY 2008. Fed Regist. 2007;72(133):38196-38199.
4. Electronic Health Record Incentive Program. Fed Regist. 2010;75(8):1872-1901.
5. Measure Evaluation Criteria. National Quality Forum Web site. http://www.qualityforum.org/Measuring_Performance/Submitting_Standards/Measure_Evaluation_Criteria.aspx. Accessed March 29, 2010.
6. Public and member comment. National Quality Forum Web site. http://www.qualityforum.org/Measuring_Performance/Consensus_Development_Process%e2%80%99s_Principle/Public_and_Member_Comment.aspx. Accessed March 29, 2010.
7. Consensus Standards Approval Committee decision. National Quality Forum Web site. http://www.qualityforum.org/Measuring_Performance/Consensus_Development_Process/CSAC_Decision.aspx. Accessed March 29, 2010.
8. Functions and processes. American College of Chest Physicians Web site. http://www.chestnet.org/accp/qualityimprovement/function-process. Accessed January 7, 2010.
9. Geerts WH, Pineo GF, Heit JA, et al. Prevention of venous thromboembolism: The Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest. 2004;126(suppl 3):338S-400S.
10. Quiñones JN, James DN, Stamilio DM, Cleary KL, Macones GA. Thromboprophylaxis after cesarean delivery: A decision analysis. Obstet Gynecol. 2005;106(4):733-740.
11. National Voluntary Consensus Standards for Perinatal Care: A Consensus Report. Washington, DC: National Quality Forum; 2009.
12. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: Side effects of the 4-h antibiotic administration rule. Chest. 2007;131(6):1865-1869.
13. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med. 2008;168(4):351-356.
14. Metersky ML, Ma A, Bratzler DW, Houck PM. Predicting bacteremia in patients with community-acquired pneumonia. Am J Respir Crit Care Med. 2004;169(3):342-347.
15. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299(18):2180-2187.
16. McGlynn EA. Selecting common measures of quality and system performance. Med Care. 2003;41(suppl 1):I39-I47.
17. Report to Congress: national strategy for quality improvement in health care. HealthCare.gov Web site. http://www.healthcare.gov/law/resources/reports/quality03212011a.html. Accessed December 14, 2011.
18. Chang J, Elam-Evans LD, Berg CJ, et al. Pregnancy-related mortality surveillance—United States, 1991-1999. MMWR Surveill Summ. 2003;52(2):1-8.
19. Guidance for evaluating the evidence related to the focus of quality measurement and importance to measure and report. National Quality Forum Web site. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=58170. Accessed December 14, 2011.
20. Guidance for measure testing and evaluating scientific acceptability of measure properties. National Quality Forum Web site. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=59116. Accessed December 14, 2011.
21. Guidance for measure harmonization: a consensus report. National Quality Forum Web site. http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=62381. Accessed December 14, 2011.
22. Disclosure of interest policy for steering committees and technical advisory panels. National Quality Forum Web site. http://www.qualityforum.org/docs/Disclosure_of_Interest_Policy_and_Form_2010-01-14.aspx. Accessed December 14, 2011.