Do consumers respond to publicly reported quality information? Evidence from nursing homes


Journal of Health Economics 31 (2012) 50–61


Rachel M. Werner a,∗, Edward C. Norton b, R. Tamara Konetzka c, Daniel Polsky d

a Philadelphia VA Medical Center and University of Pennsylvania, 1230 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104, United States
b University of Michigan, 1415 Washington Heights, Ann Arbor, MI 48109-2029, United States
c University of Chicago, 5841 S. Maryland, MC2007, Chicago, IL 60637, United States
d University of Pennsylvania, 1204 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104, United States

Article history: Received 9 November 2010; Received in revised form 1 January 2012; Accepted 3 January 2012; Available online 10 January 2012

JEL classification: L15; I11; I18

Keywords: Report cards; Quality information; Nursing home care; Nursing home demand

Abstract

Public reporting of quality information is designed to address information asymmetry in health care markets. Without public reporting, consumers may have little information to help them differentiate quality among providers, giving providers little incentive to compete on quality. Public reporting enables consumers to choose highly ranked providers. Using a four-year (2000–2003) panel dataset, we examine the relationship between report card scores and patient choice of nursing home after the Centers for Medicare and Medicaid Services began publicly reporting nursing home quality information on post-acute care in 2002. We find that the relationship between reported quality and nursing home choice is positive and statistically significant, suggesting that patients were more likely to choose facilities with higher reported post-acute care quality after public reporting was initiated. However, the magnitude of the effect was small. We conclude that there has been minimal consumer response to information in the post-acute care market. Published by Elsevier B.V.

1. Introduction

A key strategy used by policymakers to improve health care quality is the release of “report cards” that publicly report information about the quality of health care providers. Report cards are designed to address information asymmetry in health care markets. Without report card information, patients may have little information to help them differentiate quality among providers. This, in turn, gives providers little incentive to compete on quality. By providing quality information, report cards may improve the performance of health care markets in at least two ways. First, they enable consumers to identify and choose high-quality providers. Second, by making demand more elastic to quality, they give providers incentives to improve their quality so they can increase demand for their services. Thus, a demand-side response to report cards is instrumental in gaining the potential benefits from report cards.

∗ Corresponding author. Tel.: +1 215 898 9278; fax: +1 215 573 8778. E-mail addresses: [email protected] (R.M. Werner), [email protected] (E.C. Norton), [email protected] (R.T. Konetzka), [email protected] (D. Polsky).

0167-6296/$ – see front matter. Published by Elsevier B.V. doi:10.1016/j.jhealeco.2012.01.001

Despite the face validity of using report cards to increase demand for highly rated providers, the evidence that providers that receive a high report card rating are rewarded with an increased market share (or, conversely, that providers that receive a low report card rating lose market share) is mixed. Early evidence of the effectiveness of information disclosure on consumer health care choices examined changes in market share that were limited to the post-reporting period. For example, Baker et al. (2003) examined the effectiveness of a public reporting effort in hospitals in Ohio, finding little relationship between a hospital’s report card ranking and changes in its market share. Cutler et al. (2004) examined the effects of reporting quality information about cardiac surgery on hospital volume, finding that being identified as a high-mortality hospital was associated with a decline in the number of cardiac surgery patients at that hospital in the period following the designation. However, as these studies were limited to the post period they are unable to account for prior trends in market share, making it difficult to distinguish changes in market share due to information from changes due to other ongoing market factors. Subsequent work in this area used pre-post designs, testing whether consumer choice changes after the introduction of report cards. In much of this literature, report card scores are observed


only after public reporting is initiated, but not before. This makes it necessary to make assumptions about “market learning”—or how the relationship between market share and report card scores would have changed in the absence of public reporting. Dafny and Dranove (2008) examine the influence of Medicare HMO report cards. Although they observe report card scores only once in the post-reporting period, they estimate a functional form for changes over time in the relationship between health plan enrollment and report card ranking, showing that highly ranked plans were gaining market share prior to the report cards’ release but that the report cards led to further gains in market share for high-scoring plans. Also examining health plan report cards, Chernew et al. (2008) use a Bayesian learning model to estimate enrollees’ general assessment of plan quality prior to the release of report cards and the changes in these assessments over time. They find that the addition of publicly reported plan information has a small incremental effect. Dranove and Sfekas (2008) similarly estimate consumers’ prior beliefs about quality, but for cardiac surgeons prior to public reporting in New York. They then allow updating of these priors by projecting their trajectories over time, assuming a functional form for the updating. These studies offer sophisticated approaches to differentiating market learning from the effects of report cards, but without data on the relationship between reported measures and consumer choice in the absence of report cards, they rely on assumptions about what would have happened in the absence of report cards. Several studies have directly observed report card scores both before and after public reporting was initiated, enabling them to account for changes in market learning that are correlated with report card quality. Bundorf et al.
(2009) examine changes in selection of fertility clinics under public reporting, taking advantage of data on both reported and unreported information that was available prior to the public release of information. They find that highly ranked clinics gained market share after public reporting was initiated. Wedig and Tai-Seale (2002) use a similar empirical set-up to test the effects of health plan report cards on the managed care market with similar results—reported information affects plan choices, particularly among new enrollees. It remains possible, however, that the correlation between market learning and report-card quality changed contemporaneously with public disclosure of report card information. Jin and Sorensen (2006) address this concern by examining the effect of voluntary health plan report cards and using nondisclosing health plans as a control group to identify changes in consumer choice of health plans related to the disclosure of quality information. However, their identification strategy is threatened by the voluntary nature of the non-disclosing group. Opting out of disclosure may itself signal low quality, making their control group endogenous to the policy being evaluated. Despite the methodological rigor of many prior studies, they have been limited by their inability to completely control for supply-side responses to information disclosure that will, in turn, affect the demand response. The usual supply-side response to information disclosure is for providers to increase their quality in order to attract more consumers. In most settings it is also possible for providers to change the price for their goods, for example, by increasing (or decreasing) prices if quality is reported as being high (or low). In some prior studies both of these supply-side responses are observable and thus can be controlled for. However, another possible supply-side response is for providers to change the benefits or bundle of services included in the price in response to their public report card ranking. For example, if they are rated as high quality, health plans might choose to offer less comprehensive benefits or fertility clinics might choose to include fewer services or amenities in their service bundle. These supply-side changes will also affect the demand response. Such changes are not easily


observed and have not been controlled for in prior studies. Thus, prior estimates of consumer response to information disclosure mix supply and demand effects. Omitting these supply responses will bias the estimate of consumer response downward. Empirically, our approach is similar to that of Jin and Sorensen, but we have the opportunity to use control groups that are not endogenous to the policy. We observe report card scores both before and after public reporting was initiated in nursing homes and test whether the correlation between consumer choice of nursing homes and report card scores changes once the scores are publicly disclosed. We also use an exogenously defined control group of nursing homes that were excluded from the mandated public reporting due to their size to explicitly test the counterfactual—what would have happened in the absence of report cards. Unlike prior studies, we are able to eliminate or control for several supply-side responses to report cards, thus reducing possible sources of bias. We are able to do this because of the unique setting in which these report cards were instituted: the regulated setting of Medicare-financed post-acute care in nursing homes. In this setting, prices are set by Medicare, as are the bundles of services included in these prices. Additionally, we empirically control for observed supply-side changes in report card quality. We thus control for three main supply-side responses to report cards and reduce omitted variable bias from unobservable price and services. This provides an estimate of the demand-side effect of public reporting (i.e., without the compensatory supply-side response one might expect to see in many settings), and thus provides policy-relevant information about the effect of information disclosure on consumer choice. The setting of our study is also unique. We know of no prior work on the demand response to public reporting in the setting of post-acute care in nursing homes.
One prior study examined the role of information disclosure for long-stay quality at nursing homes, finding no meaningful change in patient demand (Grabowski and Town, 2011). However, the post-acute care setting is distinctive. Compared to health plan or fertility clinic choice, choices for post-acute care are made under time pressure (during a hospital stay), which may limit the effect of information disclosure. On the other hand, these decisions are often made with significant input from health care workers such as social workers and discharge planners. These agents may be more likely to use new information about health care quality. Our findings of small effects in the expected direction help provide insight into the role of public reporting in this common setting and the factors that may increase use of quality information in consumer choice.

2. Setting

Nursing homes have been cited as having poor quality of care for decades (Institute of Medicine, 1986; Wunderlich and Kohler, 2000) and numerous attempts to improve the quality of nursing home care, including federal and state regulation, have been undertaken.1 Recently, efforts to improve care have focused on the public dissemination of quality information. In 1998, the Nursing

1 A 1986 Institute of Medicine report led to the 1987 Nursing Home Reform Act, or OBRA. This congressional act mandated extensive regulatory controls and the development of a resident-level assessment, data collection, and care planning system. As a result of OBRA, each nursing home certified for Medicare and/or Medicaid is inspected at least once every 15 months and is required to submit a comprehensive assessment of each resident at least once per quarter. Despite some improvements under OBRA, a follow-up report by the Institute of Medicine in 2000 concluded that significant problems remained.


Fig. 1. Example of the Nursing Home Compare report card for one short-stay clinical quality measure. Data for Nursing Home A is not available for this measure because the nursing home has too few residents to be included in Nursing Home Compare.

Home Compare (NHC) website was first launched with limited information on nursing home regulatory deficiencies. In 2000, the available information was expanded to include nurse staffing data. While these quality measures were publicly available, they were not widely disseminated or publicized. Then, in 2001, the Department of Health and Human Services announced the formation of the Nursing Home Quality Initiative. One of the major goals of this initiative was to improve the information available to consumers on the quality of care at nursing homes. Thus, as part of this effort, the Centers for Medicare and Medicaid Services (CMS) reconfigured and re-launched NHC as a web-based guide detailing quality of care at over 17,000 Medicare- and/or Medicaid-certified nursing homes (Centers for Medicare and Medicaid, 2002). The NHC website compiled quality information from numerous sources in an online report card that facilitated making comparisons among nursing homes and included information on clinical quality measures (see Fig. 1 for an example) as well as deficiencies and staffing. NHC was first launched as a pilot program in six states in April 2002 (Colorado, Florida, Maryland, Ohio, Rhode Island, and Washington). Seven months later, in November 2002, NHC was launched nationally. The quality information on NHC is updated quarterly. Thus, although some quality information was available prior to 2002, it was not widely disseminated or used and, with the reconfiguration of NHC in 2002, new information about the clinical quality at nursing homes became available. The release of this web-based information about nursing home quality was actively promoted to consumers in 2002, with the hope that consumers would use this information to help choose a nursing home. When NHC was launched, CMS launched a multimedia campaign. Full-page newspaper advertisements were run in 71 newspapers across all 50 states.
Under the banner “How do your local nursing homes compare?” each advertisement showed an example of the quality information available on the NHC website using local nursing homes as examples. CMS also ran national television advertisements promoting the website and worked with state long-term care Ombudsman to promote awareness of the website among consumers. During the pilot rollout, phone calls to 1-800-MEDICARE concerning nursing home information more than doubled. Additionally, visits to the NHC website increased tenfold in the six pilot states. With the national

launch of NHC, website visits quadrupled nationally, jumping from fewer than 100,000 visits per month to over 400,000 visits in November 2002. Survey research suggests that people use the information in NHC. In the post-acute care setting, one survey found that 38% of hospital discharge planners reported using the NHC website as part of their discharge planning (BearingPoint, 2004). Among long-stay nursing home residents, a survey of the family members of nursing home residents found that 12% used the NHC website (Castle, 2009). Several recent studies have examined whether quality improved under NHC. These studies have found that quality improved on some clinical measures but not others (Mukamel et al., 2008; Werner et al., 2009b). For measures where quality improved, such as the proportion of nursing home residents with pain, the size of the improvement was modest (Werner et al., 2009b). One prior study examined whether consumer demand for nursing home care changed under NHC (Grabowski and Town, 2011). This study was limited to long-stay quality in nursing homes and found no meaningful effect of information disclosure on demand for nursing homes. We focus our analyses on post-acute care (or short-stay) residents of nursing homes. Post-acute care provides a transition between hospitalization and home (or another long-term care setting) for over 5.1 million Medicare beneficiaries annually (MedPAC, 2008), providing health care services including rehabilitation, skilled nursing, and other ancillary services in a variety of health care settings. The largest proportion of post-acute care occurs in nursing homes (or skilled nursing facilities (SNFs)), with 2.4 million SNF stays in 2008 for which Medicare paid over $24 billion, accounting for 43% of post-acute care stays (MedPAC, 2010).2 Approximately 14% of nursing home beds are filled by Medicare post-acute care patients at any given time; these services provide an important revenue stream for nursing homes.

2 After SNFs, home health care is the second most common setting for post-acute care, accounting for 40% of post-acute care stays. Regulatory changes to implement prospective payment for skilled nursing facilities and home health care were made in 1998 and 2001, respectively. In addition, home health began publicly reporting patient outcomes in late 2003.


We focus on the post-acute care nursing home population for two important reasons. First, the post-acute population has a high turnover rate and less cognitive impairment compared to the chronic-care nursing home population. This makes it empirically feasible to find changes in the demand for nursing home care in response to public reporting over a short timeframe, if they exist. Second, by limiting our analyses to fee-for-service Medicare beneficiaries in post-acute care, we constrain consumer out-of-pocket expenditures and services to be effectively the same across all observations. This eliminates the possibility that nursing home choice will be driven in part by differences in the price (or changes in price after public reporting) and service bundles that consumers face, as might be the case in the long-stay market for nursing homes.

3. Conceptual framework

Our conceptual framework starts with demand for nursing home care. Demand for nursing home care depends primarily on health status, out-of-pocket price, and the family situation (Norton, 2000). As health status worsens, demand for nursing home care increases. Demand falls with out-of-pocket price, which depends greatly on whether Medicare, Medicaid, or private insurance applies. The family situation—encompassing marital status and involvement of children and extended family—matters because family members are sources of informal care, which can be a substitute for formal nursing home care (Van Houtven and Norton, 2004, 2008). Contingent on a decision to receive nursing home care, the choice of a particular nursing home is a function of that nursing home’s perceived quality, price, and bed availability relative to other homes in the local market. In this study we focus on Medicare beneficiaries requiring post-acute care, a subset of nursing home care. Because post-acute care patients have considerable need for skilled care (rehabilitation after stroke or hip fracture, for example), informal care is a less viable alternative. Patients needing post-acute care may choose to receive care in a SNF (the largest provider of post-acute care) or from an alternative post-acute provider such as home health care. Contingent upon choosing SNF care, the utility function leading to choice of a particular SNF is simplified by Medicare payment. Because all post-acute care patients in our study are covered by Medicare, Medicare coverage is uniform across SNFs, and out-of-pocket costs do not vary with SNF quality, there are essentially no differences in out-of-pocket costs when choosing a nursing home for post-acute care. The choice of a particular SNF is therefore primarily a function of that nursing home’s bed availability and perceived quality relative to other homes in the market.
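The role of perceived quality in this choice framework can be illustrated with a simple logit choice sketch. This is our illustration, not the authors’ model; the quality values and information-sensitivity parameters are hypothetical. The idea is that cheaper quality information effectively makes choices more responsive to true quality:

```python
import math

def logit_shares(perceived_quality, scale=1.0):
    """Market shares from a logit choice model: share_j is proportional to
    exp(scale * q_j). `scale` captures how strongly choices track quality."""
    expu = [math.exp(scale * q) for q in perceived_quality]
    total = sum(expu)
    return [e / total for e in expu]

# Two hypothetical SNFs with true quality 0.9 and 0.7.
quality = [0.9, 0.7]

# Before report cards: consumers observe quality only noisily, so choices
# are weakly tied to quality (low sensitivity to the quality signal).
before = logit_shares(quality, scale=1.0)

# After report cards: information costs fall, so choices track reported
# quality more closely (higher sensitivity); demand is more quality-elastic.
after = logit_shares(quality, scale=5.0)

print(before, after)  # the higher-quality SNF's share rises after reporting
```

Under these assumptions the high-quality facility gains market share once information costs fall, which is precisely the demand-side response the paper tests for.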
Within this conceptual framework, we expect NHC to affect consumers’ choices through the perceived quality component of the utility function. Report cards lower the cost of collecting objective quality information, thus increasing the elasticity of demand with respect to quality, at least for the types of quality reported on the web. The central response to public reporting is assumed to be a demand response. This demand response may prompt a secondary provider response, which may include improving quality (to increase demand) as well as charging higher prices or reducing covered benefits or bundles of services (for example, reducing health plan benefits). In the setting of post-acute care in nursing homes, prices and associated service bundles are fixed and thus any supply-side changes prompted by public reporting will be observed through changes in measured (and reported) quality. The regulation of SNFs enables us to estimate an unbiased demand response to public reporting. Thus, our central hypothesis is that patients are


more likely to choose highly rated nursing homes after report card scores are publicly released.

4. Methods

Our empirical strategy is to compare the relationship between consumer choice and report card scores before and after these scores were publicly released using a market fixed-effect approach. We define the choice set as all SNFs included in public reporting within a geographic market. Patients who have an acute-care hospital stay in an area (and are therefore eligible for post-acute care under Medicare) may also choose home health care or other alternatives to these SNFs, and these other alternatives are controlled for in the empirical model. The relationship between consumer choice and report card scores in the pre-reporting period controls for the correlation between knowledge of the market through other pathways (market learning) and report-card quality. We can then estimate the report-card effect by testing for changes in the correlation between consumer choice and report card scores once this information is publicly disseminated. Because this pre–post design fails to account for the possibility that the correlation between market learning and report card scores changed contemporaneously with the release of report cards, we support identification in several ways. First, we capitalize on the fact that nursing homes in six states (Colorado, Florida, Maryland, Ohio, Rhode Island, and Washington) began reporting under Nursing Home Compare seven months before nursing homes in all other states. Including these staggered start dates helps to differentiate report card effects from a simple time trend. Second, we test for the hypothesized effect among the SNFs small enough to be consistently excluded from public reporting. Although demand for these small SNFs may change when larger SNFs in the same market have their quality publicly reported, these changes should be uncorrelated with small SNFs’ quality.
Therefore, we should not see an effect in these nursing homes because post-acute quality is not reported publicly for them. Finally, we test for potential trends in market learning by examining whether the relationship between consumer choice and report card scores changed in the years prior to the public dissemination of this information.

4.1. Data sources

Our primary data source is the nursing home Minimum Data Set (MDS). The MDS contains detailed clinical data collected at regular intervals for every resident in a Medicare- or a Medicaid-certified nursing home. These data are collected and used by nursing homes to assess needs and develop a plan of care unique to each resident. The detailed clinical information contained in these data is used by CMS to calculate the clinical quality measures included in NHC. We also use the Online Survey, Certification and Reporting (OSCAR) dataset to obtain facility characteristics and the MedPAR file (containing discharges from acute-care hospitals for Medicare beneficiaries) to estimate the number of patients eligible for post-acute care.

4.2. Study sample

We include all nursing homes that were large enough to be included in public reporting for post-acute care measures over the entire period in our analyses of 2000–2003 (SNFs with fewer than 20 eligible patients over a six-month period are excluded from NHC (Morris et al., 2003)). Our study sample excludes all small facilities, including those that were included in NHC in some periods but not others, as changes in their size may be endogenous to their performance if they decreased their size in an effort


Table 1. Description of 7675 Medicare-certified nursing homes included in the study sample.

Variable                                                        Mean      Standard deviation
ln(SNF patients per eligible population)                       −4.29      1.46
ln(patients treated with outside good per eligible population) −0.153     0.101
Report card scores
  No pain                                                       0.748     0.165
  No delirium                                                   0.959     0.066
  Improved walking                                              0.282     0.141
Nursing home characteristics (a)
  Total number of beds                                        135.4      79.4
  Occupancy rate                                                0.843     0.173
  % Medicare                                                    0.192     0.222
  For profit                                                    0.700     0.458
  Not for profit                                                0.263     0.441
  Chain                                                         0.616     0.486
  Hospital based                                                0.102     0.302

(a) Nursing home characteristics (from OSCAR) are based on both short- and long-stay nursing home residents.

to opt out of public reporting. While approximately one-half of SNFs are excluded from our main analyses due to their small size, only 6% of SNF admissions are excluded. Within the larger facilities, SNF admissions for Medicare fee-for-service beneficiaries age 65 or older are included. We also exclude markets (and the SNFs within markets) that have only one SNF included in public reporting, as public reporting offers no choice within these markets. This excludes 6% of SNFs. Our final sample includes 7675 large SNFs, each observed quarterly over 16 quarters (we observe each SNF on average 15.3 times), giving a final sample of 117,196 SNF-quarter observations. Long-stay nursing home residents who are hospitalized are most often transferred back to the same nursing home and admitted to a post-acute care bed in that nursing home’s SNF prior to transitioning back to a long stay. In our data, we find that 74% of SNF admissions who had a nursing home stay prior to their hospitalization returned to the same nursing home. This may occur for several reasons. First, because nursing homes are largely experience goods, prior experience will dominate the perception of quality and the decision-making process. In fact, we find that patients with a prior nursing home stay at a nursing home with low quality were more likely to change nursing homes than were patients with a prior nursing home stay at a nursing home with high quality.3 This pattern was true both before and after information about nursing home quality was publicly disclosed. Second, many states have Medicaid bed-hold policies that pay nursing homes to reserve beds for acutely hospitalized nursing home residents who plan to return to the facility as long-stay residents after the acute and post-acute episode ends. This provides a strong incentive to discharge patients back to the same nursing home.
Thus, we eliminate from our sample anyone who had a nursing home stay in the year prior to the SNF admission and consider anyone who has not had a nursing home stay within the last year a potential user of Nursing Home Compare. Our final sample includes 3,008,731 SNF admissions in 2000–2003. See Table 1 for summary statistics.

4.3. Models

We estimate a nested discrete choice demand model in which each hospitalized patient selects the post-acute care option in the patient’s market that maximizes his or her utility. Patients’ choices

3 For example, among SNF admissions with a prior nursing home stay, 77% of those coming from a SNF with high pain quality returned to the same SNF whereas 71% of those coming from a SNF with low pain quality returned to the same SNF.

for post-acute care are nested within one of J SNFs in the market or treatment with an “outside good,” such as going directly home or going to another post-acute setting. We observe patient choices of SNFs in the MDS data and infer choice of an outside good from MedPAR, where patients who are discharged from the hospital but not admitted to a SNF are assumed to have chosen an outside good. We estimate a market-level choice model following Berry (1994), as we do not observe the characteristics of the outside goods (or the people choosing them), and the Berry model allows demand estimates at a market rather than individual level. Consumers use multiple sources of information to infer provider quality and choose the provider that gives the highest level of utility, where consumer i chooses provider j if the expected utility from provider j is higher than that of any other option. Summing over consumers we estimate the following with ordinary least squares:

ln(s_jmt) − ln(s_0mt) = β1·Score_j,t−1 + β2·postNHC_jt + β12·Score_j,t−1 × postNHC_jt + β3·ln(s_jt|m) + X_jt·γ + μ_m + ε_jt    (1)

where s_jmt denotes the number of patients admitted to facility j as a proportion of all patients eligible for post-acute care in market m and quarter t (or the share of market m that is treated at facility j), s_0mt denotes the proportion of the eligible population treated with an outside good in market m and quarter t (or the share of market m that is treated with an outside good), and s_jt|m denotes within-group market share, a nesting parameter which accounts for correlation of choices within nests. X_jt is a vector of observable nursing home characteristics, μ_m is a market fixed effect, and ε_jt is an error term. We define the post-acute-care eligible population as all Medicare beneficiaries within the market and quarter discharged alive and not to hospice from an acute care hospitalization after a stay of at least 72 h, as Medicare eligibility for a SNF stay requires a preceding hospital stay of at least 72 h.
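As a concrete sketch of how the share terms entering the estimating equation are built, consider one hypothetical market-quarter (the counts below are invented for illustration, not the study’s data):

```python
import math

# Hypothetical counts for one market-quarter (m, t).
eligible = 1000  # Medicare discharges alive, not to hospice, stay >= 72 h
snf_admits = {"A": 120, "B": 60, "C": 20}      # admissions by SNF j (from MDS)
outside = eligible - sum(snf_admits.values())  # outside good (inferred from MedPAR)

s0 = outside / eligible  # outside-good share s_0mt

# Dependent variable ln(s_jmt) - ln(s_0mt) for each SNF in the market.
dep_var = {}
for j, n in snf_admits.items():
    s_j = n / eligible   # facility share s_jmt
    dep_var[j] = math.log(s_j) - math.log(s0)

# Within-group (nest) share s_jt|m: share among those choosing any SNF.
total_snf = sum(snf_admits.values())
within = {j: n / total_snf for j, n in snf_admits.items()}

print(dep_var, within)
```

Stacking these facility-quarter observations across markets and quarters yields the panel on which the regression is run.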
Markets are defined as the Hospital Service Area (HSA) where the SNF is located, based on the Dartmouth Atlas definition. We chose a hospital-based market because all Medicare-covered SNF stays originate from a hospitalization and we found that the majority of SNF stays occurred in the same HSA as the preceding hospital stay. To estimate how consumer choice changes with public reporting, we estimate our dependent variable as a function of each facility’s report card score in the prior quarter (Score_j,t−1), an indicator for whether report card information was available (postNHC_jt, equal to one after April 22, 2002 in pilot states and November 12, 2002 in non-pilot states), and the interaction between the two. The coefficient β12 represents the mean marginal utility associated with the report card score once it is disseminated publicly, and we test whether β12 = 0. Thus, a positive coefficient implies that consumers value nursing homes with high report card scores more once information on the score is publicly available. We also include observable time-varying nursing home characteristics (summarized in Table 1) and market fixed effects. We include the three post-acute care report card scores included in NHC at the time of its launch in 2002: percent of short-stay patients who did not have moderate or severe pain; percent of short-stay patients without delirium; and percent of short-stay patients whose walking remained independent or improved.4 (Report card scores were rescaled so that higher levels indicate higher quality for all three measures.) We lag the report card score by one quarter to allow consumers to respond to the report card score from the prior quarter. We first estimate this equation

4 When NHC was launched, it also included a second measure of delirium that adjusted for facility admissions profile, but this measure was soon dropped, leaving the three post-acute care measures we include in this study. In addition, the measure of improvement in walking was dropped from NHC in December 2003.


separately for each of the three report card scores and then include all three report card scores simultaneously in the regression. We calculate these report card scores directly from the MDS, enabling us to measure facility report card scores consistently both before and after public reporting of these scores was initiated. We calculated report card scores for each facility following the method used to calculate the publicly reported post-acute-care report card scores on NHC (Morris et al., 2003): each measure is based on patient assessments 14 days after admission; is calculated quarterly based on assessments over a 6-month period 3–9 months before its publication; includes only those residents who stay in the facility long enough to have a 14-day assessment; and is calculated only at facilities with at least 20 cases during the target time period. To ensure accurate replication of the report card scores, we benchmarked our calculated report card scores against the report cards published by CMS. Our calculated scores differed from those reported by CMS by only 0–0.1 percentage points, indicating an excellent match between the results of our calculations and the publicly reported scores. Because the within-group market share is correlated with unobserved facility characteristics, we instrument for within-group share using instruments that are correlated with a patient's choice of a facility over its competitors but uncorrelated with unobserved quality. As suggested by Berry (1994), we estimate Eq. (1) by instrumental variables using the average characteristics of competing facilities within a market-quarter. We choose as instruments facility characteristics that are stable over our study period or that facilities would not choose in response to the quality of their competitors.
We include seven such characteristics (the average total number of beds at competing facilities; the percentages of competing facilities that are hospital based or under for-profit ownership; and the proportions of care devoted to the specialized services of tracheostomy care, radiation, chemotherapy, and dialysis). These variables are available annually from OSCAR data and are good predictors of within-group market share in first-stage regressions. Table A1 presents results from the first-stage model. We conduct two falsification tests to see whether changes in patient choice of nursing homes are observed under counterfactuals in which no publicly reported information was available. First, we repeat the above analyses in a group of small SNFs that were never included in public reporting. Although NHC was launched nationally in 2002, small SNFs (with fewer than 20 eligible patients in a 6-month period) were excluded from public reporting. While one-third of SNFs are excluded from NHC at any given time, many of these small SNFs are only intermittently excluded from NHC when their census drops below the 20-patient threshold. However, approximately 15% of SNFs are never included in NHC. By testing whether nursing home choice among this group changed after public reporting was initiated, we test whether market learning changed contemporaneously with public reporting. Second, we test for market learning. We test whether there were changes in nursing home choice related to report card scores using a false NHC-implementation date, comparing 2000 vs. 2001–2002, prior to the implementation of NHC. We also test whether there were linear time trends in nursing home choice related to report card scores. If observed changes in nursing home choice are due to public reporting rather than to an ongoing trend of an increasing correlation between patient demand and report card scores, we expect no change in nursing home choice in these two specifications.
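The competitor-average instruments can be formed as leave-one-out means within each market-quarter. A minimal sketch with toy data (the columns are illustrative stand-ins for the OSCAR characteristics):

```python
import pandas as pd

# Toy facility-level data for one quarter; column names are illustrative.
df = pd.DataFrame({
    "market":         ["m1", "m1", "m1", "m2", "m2"],
    "beds":           [100,  150,  200,  80,   120],
    "hospital_based": [1,    0,    0,    0,    1],
})

# Leave-one-out mean: (market sum - own value) / (market count - 1),
# i.e., the average over a facility's competitors in its own market.
for col in ["beds", "hospital_based"]:
    grp = df.groupby("market")[col]
    df["iv_" + col] = (grp.transform("sum") - df[col]) / (grp.transform("count") - 1)
```

Because a facility's own value is excluded, the instrument varies across facilities within a market and is plausibly unrelated to the facility's own unobserved quality.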

5. Results

The 7675 SNFs included in public reporting in the study sample cover 1892 markets (see Table 2). On average, a SNF admitted


Table 2
Description of SNF markets.

                                                            # or mean (SD)
# skilled nursing facilities included in public reporting   7675
# markets with public reporting                             1892
Admissions to SNF j/eligible population in market m (sjt)   .033 (.048)
Admissions to SNF j/admissions to all SNFs in market m      .190 (.214)
Mean # SNFs included in public reporting/market             3.9 (4.8)
Range of reported scores within markets
  No pain                                                   .187 (.130)
  No delirium                                               .064 (.077)
  Improved walking                                          .188 (.172)

3% of all patients eligible for SNF care in its market and 19% of all patients who chose a SNF as their post-acute care provider. For most report card scores, there was a wide range of scores within a market: the pain and improved walking scores each differed by 19 percentage points between the best and worst rated SNF in the average market. The relationship between choice of nursing home and report card scores among large SNFs is displayed in Table 3. The coefficients on the non-interacted quality scores, representing the marginal utility associated with each report card score in the absence of publicly reported information, indicate that SNFs with lower pain scores were associated with higher utility, whereas higher delirium and walking scores were associated with higher utility in the pre-reporting period. Mean utilities were generally higher in the post-reporting period than in the pre-reporting period, as reflected by the positive coefficient on the post-NHC dummy, although this was not statistically significant in some specifications. The coefficient on the interaction between each report card score and the pre-post NHC indicator represents the effect of these report card scores on choosing that nursing home once the scores are publicly released, over and above the correlation between scores and nursing home choice before they were publicly released. We find that a better pain score is associated with an increase in consumer demand after public reporting was initiated; for delirium the coefficient is close to zero, and for improved walking the coefficient is unexpectedly negative. The results are similar whether the three report card scores are included separately or simultaneously in the same regression. The nesting parameter is positive and significant in all regressions, suggesting that a separate nest for SNFs is appropriate.
The effects of nursing home characteristics on utility are in the expected direction, with higher mean utilities at facilities that are larger, hospital based, part of a chain, not for profit, and provide more Medicare-financed care (or post-acute care). The two falsification tests support the hypothesis that the demonstrated relationship between nursing home choice and the report card pain score is caused by public reporting. First, when we test whether changes in the choice of small nursing homes not included in the public report cards after NHC was launched were related to the quality metrics at these small SNFs, we see an even smaller and statistically non-significant effect based on the facility's score on the pain measure (Table 4, column 1). We further test for differences in nursing home choice in response to report card scores between large and small nursing homes using a difference-in-differences model. We do this by interacting the Scorejt−1 × postNHCjt interaction in Eq. (1) with a large SNF indicator variable (and including all lower-order interactions). Consistent with our main results, we find that consumer choice for large SNFs increased among SNFs with better reported pain quality compared to small SNFs, though the statistical significance of the effect at large SNFs declines compared to the main specification (coefficient 0.121, standard error 0.079). Higher delirium scores



Table 3
Estimates of the effect of report card scores on facility choice after Nursing Home Compare is implemented.

                               No pain             No delirium         Improved walking    All reported scores
No pain                        −0.609*** (0.040)                                           −0.592*** (0.039)
No delirium                                        0.397*** (0.077)                        0.419*** (0.075)
Improved walking                                                       0.549*** (0.041)    0.513*** (0.040)
Post-NHC                       0.045* (0.026)      0.101 (0.103)       0.119*** (0.012)    0.074 (0.105)
No pain × post-NHC             0.082** (0.034)                                             0.066* (0.036)
No delirium × post-NHC                             −0.004 (0.107)                          −0.010 (0.109)
Improved walking × post-NHC                                            −0.066* (0.038)     −0.036 (0.040)
Nesting parameter              0.196*** (0.067)    0.190*** (0.068)    0.203*** (0.069)    0.209*** (0.069)
Nursing home characteristics
Total number of beds/100       0.315*** (0.030)    0.316*** (0.031)    0.320*** (0.031)    0.317*** (0.030)
Hospital based                 0.500*** (0.042)    0.540*** (0.042)    0.526*** (0.041)    0.496*** (0.042)
Part of a chain                0.046*** (0.015)    0.040*** (0.015)    0.048*** (0.015)    0.052*** (0.015)
Not for profit ownership       0.295*** (0.048)    0.296*** (0.049)    0.279*** (0.048)    0.279*** (0.047)
For profit ownership           0.366*** (0.048)    0.355*** (0.048)    0.351*** (0.047)    0.357*** (0.047)
% Medicaid                     −1.131*** (0.056)   −1.185*** (0.058)   −1.118*** (0.057)   −1.069*** (0.056)
% Medicare                     1.481*** (0.069)    1.507*** (0.071)    1.512*** (0.070)    1.485*** (0.069)
Total occupancy                0.783*** (0.057)    0.790*** (0.057)    0.804*** (0.057)    0.794*** (0.056)
Constant                       −4.461*** (0.175)   −5.281*** (0.189)   −5.084*** (0.174)   −5.035*** (0.191)
Market fixed effects           ×                   ×                   ×                   ×
Observations                   117,196             117,196             117,196             117,196
Number of markets              1892                1892                1892                1892
R-squared                      0.444               0.438               0.443               0.451

Robust standard errors in parentheses. * p < 0.1. ** p < 0.05. *** p < 0.01.

remain associated with decreased consumer demand in large SNFs but, as before, the effect is not statistically significant (coefficient −0.206, standard error 0.211). Finally, the difference-in-differences model finds that higher walking scores are associated with increased consumer demand in large SNFs compared to small SNFs (coefficient 0.165, standard error 0.085). While these results support our main finding of a consumer response to report card information, we do not use this difference-in-differences model as our main specification for two reasons. First, the difference-in-differences model constrains all control variables (including trends over time) to have the same effect on consumer choice of nursing homes in small and large SNFs. In formal testing, we find that trends in our dependent variable differed between small and large nursing homes prior to the public reporting intervention, violating this key assumption of the difference-in-differences model. Second, because small SNFs are most often located in the same markets as large SNFs, they may be affected by the public reporting intervention, particularly if consumers interpret non-disclosure of report card information as a signal of low quality at small SNFs. Thus, we rely on and present the pre-post specification stratified by SNF size as our main specification. As a second falsification test, we investigate whether the report card effects noted in our main specification could be due to market learning. Although we found no effect of report cards among small

SNFs, it remains possible that market learning in large SNFs differs from that in small SNFs. When we test whether there was a change in nursing home choice between 2000 and 2001, when there was no change in the availability of information but there was the potential for market learning through other pathways, we again find that the coefficient on the pain score interaction is smaller than the main effect and is not statistically different from zero (Table 4, column 2). Similarly, when we test whether there was a change in the overall time trend of SNF choice related to report card scores in the pre-reporting period, we find that there was no significant time trend for any of the three report card scores in large SNFs subject to public reporting (Table 4, column 3).

6. Simulations

The magnitudes of the coefficients, representing the marginal utilities associated with report card scores, are not easily interpretable. However, these coefficients can be translated into changes in market share using simulation. To do this we assume that the proportion of the eligible population treated with an outside good does not change with public reporting, which is supported by our data showing that this proportion is stable over our study period. We then predict how SNF market share changes once report card information becomes available based on the results



Table 4
Falsification tests of the effect of report card scores on facility choice after Nursing Home Compare is implemented: (1) among small nursing homes not included in public reporting and (2) testing for market learning in large SNFs using a false implementation date and a time trend.

                               Small nursing homes      Market learning in large SNFs
                               without public           False implementation    Time trend
                               reporting                date
No pain                        −0.116*** (0.043)        −0.606*** (0.046)       −0.591*** (0.051)
No delirium                    0.118 (0.102)            0.327*** (0.085)        0.294*** (0.109)
Improved walking               0.223*** (0.047)         0.519*** (0.043)        0.552*** (0.048)
Post-NHC                       −0.028 (0.156)           0.015 (0.111)
Time trend                                                                      0.007 (0.011)
No pain × post-NHC             0.014 (0.063)            0.009 (0.040)
No delirium × post-NHC         0.085 (0.158)            0.090 (0.111)
Improved walking × post-NHC    −0.128* (0.069)          −0.077* (0.045)
No pain × time trend                                                            0.002 (0.004)
No delirium × time trend                                                        0.008 (0.011)
Improved walking × time trend                                                   −0.006 (0.004)
Nesting parameter              0.415** (0.166)          0.372*** (0.124)        0.295*** (0.073)
Constant                       −4.670*** (0.367)        −4.686*** (0.285)       −4.833*** (0.206)
Nursing home characteristics   ×                        ×                       ×
Market fixed effects           ×                        ×                       ×
Observations                   21,194                   64,983                  84,682
Number of markets              1011                     1865                    1879
R-squared                      0.291                    0.450                   0.454

Robust standard errors in parentheses. * p < 0.1. ** p < 0.05. *** p < 0.01.

from our model. We find that for the average facility, an increase in a SNF's reported pain quality from the 25th to the 75th percentile increases the facility's share of the SNF market by 1.3 percent. In the smallest markets (i.e., with 2 SNFs) this translates into a change in market share of 0.7 percentage points. In the median market (i.e., with 6 SNFs) market share changes by only two tenths of a percentage point, and in larger markets (i.e., the 75th percentile market with 12 SNFs) market share changes by just one tenth of a percentage point.

7. Extensions

7.1. Heterogeneity in consumer use of information: the role of patient education

Consumer characteristics, such as education, may predict responsiveness to report card scores. There are two main ways that education is likely to matter for report card use. One is that the information is available online, so access to computers is essential. The second is that the wealth of information in the report cards may be less daunting to those comfortable assessing complex information. Prior work has found that individual socioeconomic status is predictive of understanding report card information (Jewett and Hibbard, 1996). If this is indeed the case, we might expect that consumers with higher education levels will be more responsive to report card information. However, the effect may be limited in that choosing a nursing home is often a joint decision with other family members and social workers, and use of report card scores may only be weakly correlated with the consumer's own education. In

addition, it is possible that the effect of education may work in the opposite direction. Highly educated consumers may have better access to informal information about health care quality prior to public reporting. If public reporting helps equalize access to information about quality, less-educated consumers may have a stronger response to public reporting. Given the current lack of evidence supporting the latter argument, we predict that on net the wealth of information on the web will be more valuable to those with more education. We test whether education predicts responsiveness to report card information using the same setup described in Eq. (1), but we redefine sjt as the proportion of people with high (or low) education choosing a facility in a quarter out of all high- (or low-) educated consumers eligible for post-acute care in that market and quarter. We similarly redefine s0t as the proportion of people with high (or low) education choosing the outside good. We stratify education at high school graduation or above. Data on education for nursing home residents come from the MDS, which gathers self-reported information on admission for every patient through patient or family interview. Data on education for patients choosing an outside good are derived from the 2000 U.S. Census using age–sex–race specific estimates of education within each patient's ZIP code. We hypothesize that the effect of report card scores in the post period (β12) should be larger for those with more education. We find that patients with higher education levels have a slightly larger response to publicly reported information for most report card scores (Table 5). Differences between high- and low-education groups are small but statistically significantly different for all three clinical conditions based on a test of difference in means.
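The education-stratified shares described above can be sketched with toy counts (group names and numbers are illustrative, not from the study data):

```python
import math

# Toy counts for one market-quarter, split by education stratum.
eligible = {"high_ed": 400, "low_ed": 600}   # eligible for post-acute care
admits_j = {"high_ed": 30,  "low_ed": 25}    # admitted to a given SNF j
outside  = {"high_ed": 300, "low_ed": 500}   # chose the outside good

# Group-specific Berry dependent variable: Eq. (1) is then estimated
# separately for the high- and low-education strata.
dep_var = {g: math.log(admits_j[g] / eligible[g])
              - math.log(outside[g] / eligible[g])
           for g in eligible}
```

Because the eligible-population denominator cancels inside the log difference, the dependent variable reduces to the log ratio of the group's SNF-j admissions to its outside-good choices.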



Table 5
Effect of report card scores on facility choice after Nursing Home Compare is implemented among those with education less than high school vs. high school graduation or higher education level.

                               Less than HS          HS or more
No pain                        −0.332*** (0.039)     −0.684*** (0.044)
No delirium                    0.146* (0.077)        0.370*** (0.083)
Improved walking               0.308*** (0.041)      0.591*** (0.042)
Post-NHC                       −0.082 (0.127)        −0.021 (0.112)
No pain × post-NHC             0.107** (0.044)       0.120*** (0.041)
No delirium × post-NHC         −0.002 (0.131)        0.060 (0.115)
Improved walking × post-NHC    −0.115** (0.049)      −0.029 (0.044)
Nesting parameter              0.296*** (0.072)      0.170** (0.073)
Constant                       −4.548*** (0.194)     −5.059*** (0.199)
Nursing home characteristics   ×                     ×
Market fixed effects           ×                     ×
Observations                   108,446               114,801
Number of markets              1889                  1892
R-squared                      0.230                 0.425

Robust standard errors in parentheses. * p < 0.1. ** p < 0.05. *** p < 0.01.

7.2. Demand response in a setting without capacity constraints

A demand response to the public release of quality information is only possible in a setting with excess capacity. If, at the introduction of report cards, all providers that are rated as high quality

are also at their capacity limit, it will be empirically challenging to observe an increased demand for these providers, as increased demand will not translate into increased market share (Mukamel et al., 2007). In the nursing home setting, we find a small demand response to only one type of information. In this section we test whether there is a larger demand response in nursing homes without capacity constraints. While nursing homes on average are 84% occupied, total facility capacity is constrained by the number of beds and size of facility. Conceptually, nursing homes that are fully occupied will be unable to take on more customers, even if demand for the nursing home increases. To test whether there is a larger demand response among facilities without capacity constraints, we create an indicator for facilities with top-quartile occupancy rates in the prior period (i.e., occupancy of 95.2% or higher). We also create indicators for facilities with quality scores above the median for each quality score. Our results were not sensitive to the specific cut point chosen for occupancy or report-card quality. Combining these two indicators lets us separate facilities that are operating at capacity and would otherwise be expected to experience increased consumer demand from their report card scores (i.e., facilities in which we expect to see no demand response) from all other facilities, where we expect a larger demand response. We use these indicators to run stratified regressions as described in Eq. (1), stratifying by facilities that are high occupancy and rated as high quality compared to facilities that are either low occupancy or rated as low quality. As expected, we see that good report card scores do not have a positive effect on consumer choice in facilities with high occupancy rates (Table 6).
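A sketch of this capacity-constraint classification (toy data; the column names are ours, and the 95.2% cutoff is the top-quartile occupancy threshold reported above):

```python
import pandas as pd

# Toy facility data; column names are illustrative.
df = pd.DataFrame({
    "occupancy":  [0.98, 0.90, 0.96, 0.70],
    "pain_score": [0.92, 0.95, 0.60, 0.88],
})

OCC_CUTOFF = 0.952                       # top-quartile occupancy in the study
high_occ = df["occupancy"] >= OCC_CUTOFF
high_qual = df["pain_score"] >= df["pain_score"].median()

# Capacity-constrained: highly occupied AND highly rated, so increased
# demand cannot show up as increased market share.
df["capacity_constrained"] = (high_occ & high_qual).astype(int)
```

The stratified regressions then re-estimate Eq. (1) separately within the constrained and unconstrained groups.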
Additionally, we find that the demand response to information on pain quality is larger among facilities without capacity constraints (the second column in Table 6) than it was in the overall group of facilities (in Table 3). The response to information on delirium quality goes from being negative to positive, but remains imprecisely estimated. The response to information

Table 6
The effect of report card scores on facility choice after Nursing Home Compare (NHC) is released among facilities with and without capacity constraints for each quality measure. Facilities are identified as being capacity constrained if they are highly occupied and have high report card scores and are thus unable to accommodate increased demand.

                               High occupancy and high pain quality     High occupancy and high delirium quality     High occupancy and high walking quality
                               Yes                 No                   Yes                 No                       Yes                 No
No pain                        −9.450*** (0.939)   −0.509*** (0.040)
No delirium                                                             −0.866*** (0.154)   0.363*** (0.101)
Improved walking                                                                                                     0.562*** (0.080)    0.610*** (0.041)
Post-NHC                       0.241 (0.210)       0.044 (0.027)        1.614 (1.708)       0.015 (0.110)            0.094 (0.064)       0.121*** (0.012)
No pain × post-NHC             −0.155 (0.244)      0.083** (0.036)
No delirium × post-NHC                                                  −1.526 (1.716)      0.082 (0.106)
Improved walking × post-NHC                                                                                          −0.032 (0.168)      −0.074* (0.041)
Nesting parameter              0.004 (0.126)       0.201*** (0.071)     0.084 (0.115)       0.189*** (0.073)         0.154 (0.173)       0.187*** (0.069)
Constant                       −4.514*** (0.741)   −4.500*** (0.189)    4.650*** (1.140)    −5.401*** (0.209)        −4.347*** (0.594)   −5.140*** (0.176)
Nursing home characteristics   ×                   ×                    ×                   ×                        ×                   ×
Market fixed effects           ×                   ×                    ×                   ×                        ×                   ×
Observations                   15,125              102,071              14,889              102,307                  13,704              103,492
Number of markets              1164                1880                 1216                1889                     1200                1889
R-squared                      0.242               0.458                0.308               0.450                    0.353               0.449

Robust standard errors in parentheses. * p < 0.1. ** p < 0.05. *** p < 0.01.

Table 7
Effect of information disclosure on facility choice after Nursing Home Compare is implemented among facilities with and without information disclosure (e.g. large vs. small SNFs). SNFs without information disclosure are further grouped into those located in markets with other SNFs that did have their quality publicly disseminated and those located in markets only with other SNFs that did not have their quality publicly disseminated.

Large SNFs                                       (Omitted)
Small SNFs collocated in markets with large SNFs −1.332*** (0.029)
Small SNFs not collocated in markets with
  large SNFs                                     −1.020*** (0.125)
Post-NHC                                         0.101*** (0.005)
Small collocated SNFs × post-NHC                 −0.108*** (0.015)
Small non-collocated SNFs × post-NHC             −0.033 (0.030)
Nesting parameter                                0.129** (0.063)
Constant                                         −4.874*** (0.154)
Nursing home characteristics                     ×
Market fixed effects                             ×
Observations                                     138,390
Number of markets                                2152
R-squared                                        0.587

Robust standard errors in parentheses. ** p < 0.05. *** p < 0.01.

on walking quality remains negative, is larger in magnitude than in the main analysis, and becomes statistically different from zero. For all three types of quality measures, the effect of report cards on consumer choice was statistically significantly different between non-capacity-constrained facilities and all other facilities based on a test of difference in means.

7.3. The effect of no information on consumer demand

While new information on nursing home quality may affect demand for those nursing homes, how this policy change affects nursing homes that do not disclose quality information is unknown. In the case of Nursing Home Compare, where nursing home size is used to exclude small facilities from public reporting, approximately half of all nursing homes are excluded from reporting information about post-acute care. Consumer response to undisclosed information depends on how consumers treat this lack of information. Rationally, consumers will treat undisclosed information as a signal of low quality, at least in the case of voluntary disclosure: consumers who observe that some facilities voluntarily disseminate quality information will become skeptical of the motives of non-disclosing facilities for not revealing their quality publicly. However, in the case of Nursing Home Compare, where non-disclosure is mandated by the policy, it is unclear how consumers will respond to the absence of a signal about quality. To understand how consumers respond to no signal about quality, we examine changes in demand among non-disclosing SNFs more closely. We do this by first dividing the SNFs small enough to consistently be excluded from public reporting into those that are collocated in a market with SNFs large enough to have to disclose (n = 1837) and those that operate in markets without any large SNFs (n = 431).
We hypothesize that consumers in markets with both disclosing and non-disclosing SNFs will interpret non-disclosure as a signal of poor quality and will thus choose facilities with public disclosure over facilities without public disclosure. We also hypothesize that SNFs located in markets with no disclosing SNFs will experience no significant change in market share related to public disclosure of quality information in large SNFs.


We test these hypotheses using the same setup as above, but include indicators for each non-disclosing SNF type and interact these indicators with the post-NHC variable. As expected, we find that the coefficient on the interaction between post-NHC and collocated small SNF is negative, suggesting that there was decreased consumer demand for these small non-disclosing SNFs once public reporting was initiated (Table 7). We also find a slight decrease in the demand for small non-disclosing, non-collocated SNFs, but these changes were small and not statistically significant. These findings suggest that consumers use the lack of information about small SNFs as a signal of poor quality.
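The indicator-and-interaction setup described above can be sketched as follows (toy rows; the type labels are illustrative):

```python
import pandas as pd

# Toy facility-quarter rows; large, disclosing SNFs are the omitted
# reference group.
df = pd.DataFrame({
    "snf_type": ["large", "small_colloc", "small_noncolloc", "small_colloc"],
    "post_nhc": [1, 1, 0, 1],
})

# Indicators for the two non-disclosing SNF types and their interactions
# with the post-NHC dummy, as entered into the regression.
for t in ["small_colloc", "small_noncolloc"]:
    df[t] = (df["snf_type"] == t).astype(int)
    df[t + "_x_post"] = df[t] * df["post_nhc"]
```

The coefficients on the two interaction terms then capture how demand for each non-disclosing SNF type changed, relative to large SNFs, once public reporting began.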

8. Discussion

We examine the effect of public reporting of quality scores on patient choice of nursing homes. Our main findings indicate that public reporting results in a small increase in consumer choice of high-scoring facilities. As expected, we do not find changes in nursing home choice related to report card scores among facilities not exposed to public reporting. The changes in facility choice that we document were consistent for the pain report card score, whereas reported scores for improved walking often resulted in unexpected changes. There are several plausible reasons for this apparently contradictory finding. The walking score was dropped from NHC after one year (at the end of our study period) due to concerns that it was not a valid measure. Consumers may have discounted nursing home scores on this measure as well when choosing a facility if they, too, did not believe it was a true reflection of quality. In fact, facility scores on walking are negatively correlated with scores on pain (ρ = −.22) and delirium (ρ = −.03), whereas pain and delirium are positively correlated. Thus, it is also possible that adequate pain control is a SNF attribute that is more meaningful or important to consumers. Our empirical approach takes advantage of the longitudinal data we have on report card scores and consumer choice of nursing homes, enabling us to control for the correlation between market learning and report card scores. We are also able to test whether market learning contemporaneously changes with public reporting by testing for changes in nursing home choice related to quality measures at the time of public reporting among facilities excluded from the report card, finding no changes in facility choice in this group. However, small SNFs are an imperfect control group. First, demand for small SNFs appears to have changed at the time public reporting was initiated.
Additionally, tests of trends in market share for small and large SNFs indicate that they differed prior to the public reporting intervention. Both of these factors suggest that caution is warranted in interpreting the market share response in small SNFs as the counterfactual to the response in large SNFs. It is also possible that facilities included in public reporting had contemporaneous changes in market learning that did not occur in excluded facilities. For example, the policy change could cause facilities to focus on reported quality at the expense of unreported quality, and if unreported quality is nonetheless observable to consumers, this could bias estimates of the effect of information disclosure. Prior work testing for multitasking has been contradictory. Some studies find that unreported quality measures are generally positively correlated with reported quality measures after information disclosure (Werner et al., 2009a), while others conclude that unreported quality worsened with public reporting (Lu, in press). Thus, it remains possible that our results are biased by the effect of multitasking, which could result in a downward bias of our estimates. Unlike prior studies, we separate out the demand response to public reporting from the supply response. Public reporting may



generate a simultaneous supply-side response, particularly in the presence of changes in demand. Supply-side responses may be observed through changes in reported quality or changes in price or service bundles for a given service. For post-acute care stays in nursing homes, prices for a fixed service are administered by Medicare. As we control for measured quality, this unique setting gives us the opportunity to estimate a demand response that is unbiased by omitted supply-side response variables. We thus expect to obtain upper-bound estimates of the effect of information disclosure: had high-ranked nursing homes increased price in response to information disclosure, demand for those nursing homes would have been smaller than what we estimate. In this setting, our upper-bound estimate is of a very small (though statistically significant) demand response to public reporting, with market share increasing by one-tenth of one percent for a nursing home in the average market if a SNF's reported pain quality increased from the 25th to the 75th percentile. We cannot say with certainty whether this effect is economically meaningful, as this depends on the cost of improving report card scores. However, for public reporting to generate a business case for quality improvement for SNFs, the increased revenue from this small market share increase would have to outweigh the increased cost of improving quality to translate into higher marginal profits. Given the very small change in market share we see in response to reported quality in just one area, this seems unlikely. Consistent with economic theory, a consumer-based response to public reporting is important if public reporting is to have a sustained effect on health care quality. Our finding of a small consumer response suggests that the provider-based improvements in quality that have been observed under this public reporting system (Werner et al., 2009b) are not sustainable.
Nursing homes will have little incentive to continue to invest in quality improvement if there is no business case for this strategy. The demand response in SNFs may not generalize to other health care settings. In particular, the market for post-acute care may not be responsive to quality information. As post-acute care is preceded by a hospital stay, decisions about post-acute care settings must often be made quickly, preventing a full search for information about choices. In addition, patients needing post-acute care are likely to be older with worse health status, making the search for information more difficult than among the younger populations studied in non-nursing home settings. Thus, the effect of information on demand might be larger in other settings. It is also possible that the demand response for nursing homes is small because the measures fail to capture information about nursing home quality that is meaningful to consumers. While quality measures are often imperfect, nursing home measures may be particularly so. In addition, only three clinical quality measures were used to measure post-acute care quality in nursing homes. The lack of comprehensive measures of nursing home quality would bias our estimates of a demand response to report cards downward. Indeed, the size of the effect we find is smaller than the consumer response to information disclosure found in most other settings, despite our estimating the upper-bound effect. For example, in the fertility clinic market, Bundorf et al. (2009) found that an increase in a clinic's quality (in this case, birth rate) from the 25th to the 75th percentile resulted in an increase in market share of 2.9 percentage points. In the setting of Medicare HMOs, Dafny and Dranove (2008) found that a one standard deviation increase in health-plan quality increases market share by between 0.8 and 1.98 percentage points, depending on assumptions.
In the setting of hospitals, a recent study found that a one standard deviation increase in hospital quality score increases market share from 20% to 25% (Jung et al., in press). An equivalent one standard deviation increase in the pain score in our simulation increases market share by between 0.1 and 0.3 percentage points.

One notable exception to these findings of larger market share responses is prior work finding no appreciable effect of Nursing Home Compare on demand for long-term nursing home stays (Grabowski and Town, 2011). That study was unable to control for supply-side changes in response to public reporting, such as changes in price, which might bias its results downward. Additionally, long-stay nursing home residents differ from short-stay residents, most notably in age and cognitive function, both of which might dampen the effect of information disclosure on consumer demand. It is also possible that hospital discharge planners, who influence the choice of post-acute care setting, are more likely to incorporate quality information into their referrals because they are savvier about health care decisions.

How can report card information be made more effective in improving health care quality? The differential response by education level that we observe raises the possibility that the format and distribution of this information matter. The information may be more influential if it is delivered to consumers in a more user-friendly format, or if it is delivered to patient advocates or surrogate decision makers. However, the relatively small response to education also suggests that at least some part of the decision making is influenced by agents such as hospital discharge planners. Encouraging health care agents to incorporate publicly reported information into their counseling of patients and their decision making on patients' behalf could increase the effect of reported information. The small response may also reflect skepticism on the part of consumers about the importance or accuracy of these quality measures.
Studies in numerous settings suggest that most quality measures do not reflect overall quality and often do not capture consumers' perceptions of health care quality. Ongoing work to develop and validate quality measures with face validity to consumers will be important for improving the relevance of quality information to consumer decision making, and thus for increasing the likelihood that report cards achieve their desired outcomes.

Acknowledgements

This research was funded by a grant from the Agency for Healthcare Research and Quality (R01 HS016478-01). Rachel Werner is funded in part by a VA HSR&D Career Development Award. The authors thank Ying Fan, David Grabowski, and Alan Zaslavsky for helpful comments on an earlier draft of this manuscript, and seminar participants at Imperial College London, Harvard Health Care Policy, University of South Florida, University of Miami at Ohio, and the University of Michigan.

Appendix A.

See Table A1.

Table A1
Coefficients from first-stage model. Dependent variable: ln(SNF share within HSA). Standard errors in parentheses.

Average total beds of competitors               −.001*** (.000)
Share of competitors that are hospital based     .024**  (.010)
Share of competitors that are for profit        −.066*** (.006)
Average % tracheostomy care of competitors      −.256*** (.065)
Average % radiation of competitors             −1.059*** (.250)
Average % chemotherapy of competitors           1.242*** (.189)
Average % dialysis of competitors              −5.82***  (.129)
Constant                                       −1.67***  (.007)
Number of observations                          117,196
F(7, 117,188)                                   552.9

** p < 0.05. *** p < 0.01.

References

Baker, D.W., Einstadter, D., Thomas, C., Husak, S., Gordon, N.H., Cebul, R.D., 2003. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Medical Care 41, 729–740.

BearingPoint, 2004. Following up on the NHQI and HHQI: A National Survey of Hospital Discharge Planners. BearingPoint, Inc., Health Services Research & Management Group, McLean, VA.

Berry, S.T., 1994. Estimating discrete-choice models of product differentiation. The Rand Journal of Economics 25, 242–262.

Bundorf, M.K., Chun, N., Goda, G.S., Kessler, D.P., 2009. Do markets respond to quality information? The case of fertility clinics. Journal of Health Economics 28, 718–727.

Castle, N.G., 2009. The Nursing Home Compare report card: consumers' use and understanding. Journal of Aging and Social Policy 21, 187–208.

Centers for Medicare and Medicaid Services, 2002. Nursing Home Quality Initiatives Overview. http://www.cms.hhs.gov/NursingHomeQualityInits/downloads/NHQIOverview.pdf.

Chernew, M., Gowrisankaran, G., Scanlon, D.P., 2008. Learning and the value of information: evidence from health plan report cards. Journal of Econometrics 144, 156–174.

Cutler, D.M., Huckman, R.S., Landrum, M.B., 2004. The role of information in medical markets: an analysis of publicly reported outcomes in cardiac surgery. American Economic Review 94, 342–346.

Dafny, L., Dranove, D., 2008. Do report cards tell consumers anything they don't already know? The case of Medicare HMOs. The Rand Journal of Economics 39, 790–821.


Dranove, D., Sfekas, A., 2008. Start spreading the news: a structural estimate of the effects of New York hospital report cards. Journal of Health Economics 27, 1201–1207.

Grabowski, D.C., Town, R.J., 2011. Does information matter? Competition, quality, and the impact of nursing home report cards. Health Services Research 46, 1698–1719.

Institute of Medicine, 1986. Improving the Quality of Care in Nursing Homes. National Academies Press, Washington, DC.

Jewett, J.J., Hibbard, J.H., 1996. Comprehension of quality care indicators: differences among privately insured, publicly insured, and uninsured. Health Care Financing Review 18, 75–94.

Jin, G.Z., Sorensen, A.T., 2006. Information and consumer choice: the value of publicized health plan ratings. Journal of Health Economics 25, 248–275.

Jung, K., Feldman, R., Scanlon, D. Where would you go for your next hospitalization? Journal of Health Economics, in press.

Lu, S. Multitasking, information disclosure and product quality: evidence from nursing homes. Journal of Economics & Management Strategy, in press.

MedPAC, 2008. A Data Book: Healthcare Spending and the Medicare Program (June 2008). Medicare Payment Advisory Commission, Washington, DC.

MedPAC, 2010. Report to the Congress: Medicare Payment Policy (March 2010). Medicare Payment Advisory Commission, Washington, DC.

Morris, J.N., Moore, T., Jones, R., Mor, V., Angelelli, J., Berg, K., Hale, C., Morriss, S., Murphy, K.M., Rennison, M., 2003. Validation of Long-Term and Post-Acute Care Quality Indicators. Centers for Medicare and Medicaid Services, Baltimore, MD.

Mukamel, D.B., Weimer, D.L., Mushlin, A.I., 2007. Interpreting market share changes as evidence for effectiveness of quality report cards. Medical Care 45, 1227–1232.

Mukamel, D.B., Weimer, D.L., Spector, W.D., Ladd, H., Zinn, J.S., 2008. Publication of quality report cards and trends in reported quality measures in nursing homes. Health Services Research 43, 1244–1262.

Norton, E.C., 2000. Long-term care. In: Culyer, A.J., Newhouse, J.P. (Eds.), Handbook of Health Economics, vol. 1A. Elsevier Science, Amsterdam.

Van Houtven, C.H., Norton, E.C., 2004. Informal care and health care use of older adults. Journal of Health Economics 23, 1159–1180.

Van Houtven, C.H., Norton, E.C., 2008. Informal care and Medicare expenditures: testing for heterogeneous treatment effects. Journal of Health Economics 27, 134–156.

Wedig, G.J., Tai-Seale, M., 2002. The effect of report cards on consumer choice in the health insurance market. Journal of Health Economics 21, 1031–1048.

Werner, R.M., Konetzka, R.T., Kruse, G.B., 2009a. Impact of public reporting on unreported quality of care. Health Services Research 44, 379–398.

Werner, R.M., Konetzka, R.T., Stuart, E.A., Norton, E.C., Polsky, D., Park, J., 2009b. The impact of public reporting on quality of post-acute care. Health Services Research 44, 1169–1187.

Wunderlich, G.S., Kohler, P., 2000. Improving the Quality of Long-Term Care. Division of Health Care Services, Institute of Medicine, Washington, DC.