INTRODUCTION

Surveys of patient care experiences are administered regularly and broadly in many countries, including England, Scotland, Australia, Canada, and Mexico. Measures derived from such surveys are increasingly being included in public reporting and pay-for-performance programs. In the United States, the most widely used patient experience measures are the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) surveys. For example, in fiscal year 2014, CAHPS Hospital Survey (HCAHPS) data account for 30 % of hospitals’ Total Performance Scores in the Centers for Medicare and Medicaid Services’ (CMS) Hospital Inpatient Value-Based Purchasing Program, while CAHPS survey data comprise 25 % of the score for Accountable Care Organizations participating in CMS’s Shared Savings Program. Critics have expressed concerns about the relevance and fairness of including patient experience survey data as indicators of health care quality.

Here, we address seven common criticisms of patient experience measures. We draw from our experience developing and implementing CAHPS surveys; however, the evidence we present is relevant to patient experience measures more generally.

  1. Patient surveys do not provide valid information about the quality of medical care, because consumers do not have the expertise needed to evaluate care quality.

    The Institute of Medicine has identified patient-centeredness as a critical element of health care quality.1 CMS, the UK’s National Health Service, and numerous other countries’ health systems have operationalized this by including patient experience measures among their quality performance metrics. Surveys of patients’ health care experiences directly assess the patient-centeredness of care. The CAHPS surveys do not elicit information on aspects of care, such as technical quality, that are better obtained from other sources. Rather, they inquire about aspects of care quality for which patients are the best or only source of information, such as the degree to which care is respectful and responsive to their needs (i.e., “patient-centered”). Patient experience measures derived from these surveys are meant to complement measures of technical care quality, providing an overall assessment of the quality performance of providers and plans. There is a substantial literature documenting the reliability and validity of CAHPS surveys for assessing patient-centered care (see, for example, ref. 2). Furthermore, some facts about care processes, such as whether patients received understandable information, could easily obtain after-hours medical advice, or were seen at the appointed time for an office visit, can be collected only by surveying patients.

  2. Patient surveys measure patient “satisfaction,” which may not reflect care quality and is not actionable.

    To ensure that results are relevant and actionable for both consumers and health care providers, patient surveys should assess aspects of care that have been identified as important to patients, and that patients want and need to know when they choose providers or health plans. In addition, surveys should inquire about specific care experiences, such as whether the health care provider listened carefully, rather than overall satisfaction, which is highly subjective.3 Most CAHPS survey questions ask about specific experiences of care. The surveys are tailored to different care settings, so that results can be used to help identify aspects of care that can be targeted to improve patient experiences. Some CAHPS surveys ask about services provided through health plans, while others ask about experiences with care delivered in physicians’ offices or in facilities such as hospitals, nursing homes, hemodialysis centers, and hospices.

    Improvement in hospitals’ HCAHPS scores following national implementation suggests that HCAHPS results are actionable.4 Nonetheless, despite broad availability of resources to support quality improvement initiatives designed to enhance patient experiences,5 many providers have not acted on their patient experience survey results.6 Further research is needed to understand barriers to pursuing quality improvement activities to improve patient experiences.

  3. To improve patient experience scores, health care providers and plans may be motivated to fulfill patient desires, regardless of the appropriateness or effectiveness of the care provided.

    There is mixed evidence regarding the relationship between reports about care and the extent to which health care providers meet patient expectations for tests, medications, or referrals.7,8 Some studies suggest an inconsistent relationship between the amount of care delivered and patients’ assessments of care,9 while others find that higher-intensity care is associated with more negative patient experiences.10 Regardless, providers who are aware of patient expectations are better positioned to fulfill patients’ well-founded requests and to negotiate with patients regarding requests that are likely to yield limited clinical benefit.11 Several strategies have been shown to promote positive experiences with care despite providers’ nonfulfillment of requests, such as involving patients in decision making,12 discussing the context for the patient’s request, proposing an alternative,13 and offering the possibility that a request will be fulfilled later if the patient’s condition warrants it.14 Patient assessments of care have been shown to be more strongly associated with the nature and content of provider communication than with receipt of desired treatment (see, for example, ref. 15). To our knowledge, there is no published evidence indicating that providers obtain higher CAHPS scores by providing inappropriate care. Nonetheless, as for all quality measurement and improvement activities, it is important to monitor potential unintended consequences of holding providers accountable for patient experiences.

  4. There is a trade-off between achieving good patient experience scores and providing high-quality clinical care.

    Quality is multidimensional, and individual quality indicators may or may not reflect quality of care in other areas. Therefore, it is not surprising that some health care providers with high clinical quality scores receive poor CAHPS scores and vice versa,16 or that providers exhibit varied performance within each quality domain. Measurement of distinct dimensions of performance can identify areas for improvement for providers and plans, and enable patients to seek care where performance is superior on the dimensions that are most important to them. It is possible for health care providers and plans to simultaneously offer better patient experiences and better clinical quality, and there is little to no evidence to support concerns about a trade-off between the two. One recent, widely publicized study found that patients reporting the best patient–provider communication and overall ratings of care had greater total healthcare and prescription drug expenditures, more inpatient admissions, and higher mortality;17 however, methodological challenges of that study may undermine the strength of its findings.18 Among dozens of studies examined in a recent systematic review, the vast majority found either positive or null associations between patient experiences and best practice clinical processes, lower hospital readmissions, and desirable clinical outcomes.19

  5. Patient experience scores may be confounded by factors that are not directly associated with the quality of care delivered, such as geographic region, or patients’ sociodemographic characteristics or health status.

    As is the case for any quality indicator (e.g., mortality, readmissions), there are factors unrelated to the quality of care provided that might influence scores. These factors include patient characteristics, such as age, illness severity, or education, which may systematically influence how patients respond to survey questions or how care is delivered.20 For example, older patients are more likely to report positive experiences of care, and it might be more difficult to satisfy the communication needs of patients who are sicker.20,21 Varying distributions of such characteristics across providers or plans might affect relative rankings on patient experience measures. Consequently, comparisons of patient experience scores across providers or health plans need to be adjusted, a process known as case-mix adjustment or patient-mix adjustment. Statistical models predict what each provider’s or plan’s score would have been for a standard patient or population, thereby removing from comparisons the predictable effects of differences in patient characteristics that vary across providers or plans. Case-mix adjustment helps ensure that reports and ratings of care are comparable and reduces the incentive for providers and plans to avoid those patients most likely to report problems.
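
    To make the mechanics concrete, the sketch below shows one simple way such an adjustment can be carried out: regress respondent-level scores on provider indicators and patient-mix variables, then predict each provider’s score for the same standard patient. The data, variables, and model here are illustrative assumptions only; they are not the actual CAHPS or HCAHPS adjustment specification.

```python
# Illustrative sketch of case-mix adjustment (not the official CAHPS model).
# Idea: regress individual survey scores on patient characteristics plus
# provider indicators, then report each provider's score for a "standard"
# patient mix so that differences in patient populations are removed.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: one row per completed survey.
df = pd.DataFrame({
    "provider": ["A"] * 4 + ["B"] * 4,
    "age": [82, 75, 68, 79, 41, 35, 52, 47],          # case-mix variable
    "self_rated_health": [2, 3, 3, 2, 4, 5, 4, 4],    # 1 = poor ... 5 = excellent
    "score": [9, 8, 9, 10, 7, 8, 6, 7],               # 0-10 rating of care
})

# Linear model: provider fixed effects plus patient-mix covariates.
model = smf.ols("score ~ C(provider) + age + self_rated_health", data=df).fit()

# Predict each provider's score for the same "standard" patient
# (here, the overall sample mean of the case-mix variables).
standard = pd.DataFrame({
    "provider": sorted(df["provider"].unique()),
    "age": df["age"].mean(),
    "self_rated_health": df["self_rated_health"].mean(),
})
standard["adjusted_score"] = model.predict(standard)
print(standard)
```

    In this toy example, the raw score gap between providers A and B shrinks after adjustment, because part of the observed difference is attributable to the providers serving patients with different ages and self-rated health rather than to the care itself.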

  6. Response rates to patient experience surveys are low. Only patients who had terrible or fantastic experiences of care complete surveys.

    Concern about potential bias associated with low response rates has grown in recent years, as survey response rates in industrialized countries have declined. Mean response rates to recent CAHPS surveys ranged from 34 % to 61 %.4,22 A meta-analysis of studies of nonresponse bias found no consistent relationship between a survey’s nonresponse rate and nonresponse bias.23 Nevertheless, it is important to be aware of the possibility of bias and to use available information to adjust results so that nonresponse does not produce biased comparisons.

    There is evidence that those with fewer positive evaluations of their care are less likely to respond.24 Thus, nonresponse would tend to bias overall patient experience scores towards more positive evaluations of providers. CAHPS survey results, however, are typically compared among providers from similar settings using standard methods and achieving similar response rates; nonresponse bias is likely to have less effect on such comparisons than on overall levels. Case-mix adjustment models include factors, such as age and health status, that are related to nonresponse and thus compensate for bias associated with these factors when comparing hospitals, plans, or other types of health care entities. In addition, HCAHPS analysis models adjust for possible nonresponse bias resulting from differential response rates across hospitals.25
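
    As a generic illustration of how survey results can be adjusted for differential nonresponse, the sketch below weights respondents by the inverse of an estimated response propensity based on variables known for the full sample. This is a standard textbook approach offered only as an example; it is not the CAHPS or HCAHPS procedure, and the data and variables are hypothetical.

```python
# Generic illustration of nonresponse weighting (not the HCAHPS procedure).
# Respondents are up-weighted by the inverse of their estimated probability
# of responding, so groups that respond less often are not underrepresented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical sampling frame: every sampled patient, with frame variables
# (here, age) known for respondents and nonrespondents alike.
frame = pd.DataFrame({
    "age": [82, 75, 68, 79, 41, 35, 52, 47, 60, 58],
    "responded": [1, 1, 0, 1, 0, 0, 1, 0, 1, 1],
    "score": [9, 8, np.nan, 10, np.nan, np.nan, 6, np.nan, 7, 8],
})

# Estimate each patient's probability of responding from frame variables.
propensity_model = LogisticRegression().fit(frame[["age"]], frame["responded"])
frame["p_respond"] = propensity_model.predict_proba(frame[["age"]])[:, 1]

# Weight respondents by inverse response propensity and compute a
# nonresponse-adjusted mean score.
resp = frame[frame["responded"] == 1].copy()
resp["weight"] = 1.0 / resp["p_respond"]
adjusted_mean = np.average(resp["score"], weights=resp["weight"])
print(f"Unweighted mean: {resp['score'].mean():.2f}")
print(f"Nonresponse-weighted mean: {adjusted_mean:.2f}")
```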

  7. There are faster, cheaper, and more customized ways to survey patients than the standardized approaches mandated by federal accountability initiatives.

    Online reviews, open-ended questions, single question surveys, and customized provider surveys have been proposed to make collection of patient experience data faster and less burdensome than the surveys required by many federal public reporting and value-based purchasing initiatives. While these approaches may be useful for expediently informing providers’ internal quality improvement efforts, systematic and standardized measurements, such as those provided by CAHPS surveys or England’s General Practice Patient Survey, are required to ensure fair comparisons between providers for the purposes of public reporting and pay for performance.

CONCLUSION

To evaluate patient-centeredness, an essential element of health care quality, patients’ voices must be heard. Patient experience quality measures can facilitate providers’ efforts to improve patients’ experiences of care, and they complement other quality measures designed to inform patients’ choices of health care providers and plans and to support payors’ oversight of care quality. Patient experience measures based on rigorously developed and implemented patient surveys can and do overcome concerns regarding the relevance, fairness, and unintended consequences of these surveys.