Corresponding Author: Karina W. Davidson, PhD, Center for Behavioral Cardiovascular Health, Department of Medicine, Columbia University Medical Center, 622 W 168th St, PH9 Center, Room 948, New York, NY 10032 (kd2124@columbia.edu); phone: 212-342-4486; fax: 212-342-3431
Many hospital systems seek to improve patient satisfaction as assessed by the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. A systematic review of the current experimental evidence could inform these efforts, but none yet exists.
We conducted a systematic review of the literature by searching electronic databases, including MEDLINE and EMBASE, the six databases of the Cochrane Library, and grey literature databases. We included studies involving hospital patients with interventions targeting at least 1 of the 11 HCAHPS domains, and that met our quality filter score on the 27-item Downs and Black coding scale. We calculated post-hoc power when appropriate.
A total of 59 studies met inclusion criteria; 44 of these did not meet the quality filter of 50% (average quality rating 27.8% ± 10.9%). Of the 15 studies that met the quality filter (average quality rating 67.3% ± 10.7%), 8 targeted the Communication with Doctors HCAHPS domain, 6 targeted Overall Hospital Rating, 5 targeted Communication with Nurses, 5 targeted Pain Management, 5 targeted Communication about Medicines, 5 targeted Recommend the Hospital, 3 targeted Quietness of the Hospital Environment, 3 targeted Cleanliness of the Hospital Environment, and 3 targeted Discharge Information. Significant HCAHPS improvements were reported by 8 interventions, but their generalizability may be limited by narrowly focused patient populations, heterogeneity of approach, and other methodological concerns.
Although a few studies show some improvement in HCAHPS scores through various interventions, we conclude that more rigorous research is needed to identify effective and generalizable interventions to improve patient satisfaction.
Keywords: Patient satisfaction, Healthcare quality improvement, Health services research, Patient-centered care, Quality improvement
The importance of patient satisfaction has long been recognized, 1 and is being increasingly emphasized by health systems, including those of the United Kingdom 2 and the United States. 3 In the United States, beginning in 2007, the Centers for Medicare & Medicaid Services (CMS) launched an ambitious program requiring hospitals to report patient satisfaction through the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey to be eligible for annual Inpatient Prospective Payment System updates. 4 HCAHPS results across 11 domains are also publicly reported through the Hospital Compare website (http://www.medicare.gov/hospitalcompare). Starting in 2012, the CMS program for Hospital Value-Based Purchasing also incorporated HCAHPS survey scores to determine global bonuses or penalties for Medicare Severity Diagnosis-Related Groups payments. 4,5
HCAHPS public reporting and inclusion in Value-Based Purchasing have impelled hospitals and clinicians to closely monitor and improve their patient satisfaction and HCAHPS survey scores. Scientifically, much remains unknown regarding the impact of various interventions for improving patient satisfaction, the magnitude of improvement, and the contexts in which improvement efforts are successful. Given the scope of the CMS HCAHPS program, a better assessment of which interventions are effective would be vital for improving patient satisfaction in diverse healthcare settings.
We conducted a systematic review of all studies that employed experimental designs to improve hospital patient satisfaction as measured by the HCAHPS survey. As this is a large domain of possible interventions and practices, we focused specifically on hospital inpatients receiving interventions to improve patient satisfaction, compared with pre-intervention cohorts or control group(s), with the goal of improving HCAHPS scores.
We conducted a systematic review of the literature using formal methods of literature identification, selection of relevant articles, data abstraction, and quality assessment. We then assessed the scope and nature of the available research literature.
The search strategy was developed by one of the authors (LF), an information scientist. We searched electronic databases, including MEDLINE, EMBASE, and the six databases of the Cochrane Library (inception to date of manuscript submission). The MEDLINE search strategy, which formed the basis for the search strategies for the other electronic databases, is shown in Supplementary Appendix A. We also searched the following grey literature: Open Grey and NY Academy of Medicine Grey Literature Report.
We included studies of inpatients with interventions targeting at least one of the 21 HCAHPS survey items. Only studies that reported one or more HCAHPS measure as an outcome were included. We excluded articles written in languages other than English. We restricted eligible studies to those of sufficient quality to allow data extraction and interpretation, as described below.
At least two reviewers (JAS, SY, IOE) independently screened the titles and abstracts of all of the citations retrieved by the search strategy to identify articles potentially meeting the inclusion criteria. When reviewers agreed that an article was eligible or a decision regarding eligibility could not be made because of insufficient information, the article was retrieved for full-text review. When reviewers disagreed on eligibility, the remaining team members were consulted and disagreements were resolved by consensus.
We developed a data extraction form to: (1) confirm eligibility for full article review, (2) record study characteristics, and (3) abstract relevant data regarding the intervention. Specifically, we abstracted the HCAHPS domain or domains that were targeted by each intervention, the intervention type and description, and the study results. HCAHPS scores are typically presented as percentages of patients who respond using the most positive category 1 (i.e., “top-box scores”: “Always” for 5 HCAHPS domains, “Yes” for Discharge Information, “9” or “10” for Hospital Rating, and “Definitely” for Recommend the Hospital). For example, if a study reports that a cohort of patients received a score of 75% on the item “During this hospital stay, how often did nurses treat you with courtesy and respect?”, this finding indicates that 75% of patients responded “Always” to this item. Percentage “top-box” scores for each of the three nursing communication items are then averaged to yield the “top-box” percentage for the HCAHPS Nurse Communication domain. Where possible, we present the improvement in “top-box” scores.
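To make this scoring concrete, the following minimal sketch computes item-level “top-box” percentages and averages them into a domain score; the item names and response values are invented for illustration and are not drawn from any study in this review.

```python
# Illustrative only: the responses below are invented.
nurse_items = {
    "courtesy_respect": ["Always", "Always", "Usually", "Always"],
    "listened":         ["Always", "Sometimes", "Always", "Always"],
    "explained":        ["Usually", "Always", "Always", "Never"],
}

def top_box_pct(responses, top="Always"):
    """Percent of respondents choosing the most positive category."""
    return 100 * sum(r == top for r in responses) / len(responses)

item_scores = [top_box_pct(r) for r in nurse_items.values()]
domain_score = sum(item_scores) / len(item_scores)  # average of the three items
print([round(s, 1) for s in item_scores], round(domain_score, 1))
```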
We used the Downs and Black rating scale to assess the quality of the studies. 6 This 27-item checklist assesses studies’ reporting of objectives, outcomes, interventions, and findings; external validity; internal validity; and confounding. Given the pre-post nature of most of the studies and the fact that different cohorts of participants were assessed during the pre- and post- phases, items pertaining to follow-up of the same patients were deemed not eligible for inclusion in the quality rating. In addition, as most of the retrieved citations were in abstract form, we could not assess quality for certain items across all studies. As such, we offer a prorated score percentage. For example, if we could only assess 20 of the 27 items on the checklist for a given study and that study received 10 points, it was assigned a quality rating of 50%. We defined our quality filter as having a prorated quality rating of 50% or higher, and restricted our final sample to those studies that met this criterion. As few studies presented data that could be submitted to a meta-analytic approach, we performed only a qualitative review of the evidence.
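A minimal sketch of this proration follows, assuming for simplicity that each assessable checklist item is worth one point (the full Downs and Black checklist weights a few items more heavily); the item values are made up.

```python
def prorated_quality(item_points):
    """item_points: checklist item -> points earned, or None if the item could not be assessed."""
    assessable = {k: v for k, v in item_points.items() if v is not None}
    # Simplification: each assessable item is treated as worth one point.
    return 100 * sum(assessable.values()) / len(assessable)

# 27-item checklist: 7 items not assessable (e.g., abstract-only report),
# 10 points earned on the remaining 20 items -> prorated rating of 50%.
example = {f"item_{i}": None for i in range(1, 8)}
example.update({f"item_{i}": (1 if i <= 17 else 0) for i in range(8, 28)})
print(round(prorated_quality(example)))  # 50
```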
We identified 548 unique studies in our initial search results. Of these 548, 98 were selected for full-text review after title and abstract screening, and 59 were determined to be eligible for formal quality rating, as described above. A total of 15 studies were selected as eligible for final inclusion because they met our criteria for being of sufficient quality for data extraction and interpretation (Figure 1).
PRISMA 2009 Flow Diagram
Eligible studies were published between 2013 and 2016. The sample sizes of the 15 eligible studies ranged from 72 to 3021 patients; however, especially for studies published in 2016, the sample sizes for the HCAHPS scores were often not reported, as these were secondary outcomes. For evaluation of the impact of interventions on HCAHPS scores, ten studies featured pre-post designs, four were randomized controlled trials, and one was a prospective observational study.
For the 15 eligible studies, the average prorated score was 67.3% (±10.7%). An additional 18 studies had quality ratings between 0% and 24%, and 26 had quality ratings between 25% and 50%; the average quality rating of these 44 studies was 27.8% (±10.9%). Few of the eligible studies provided enough information to rate whether adverse clinical events occurred, whether study participants were representative of the entire population from which they were drawn, and the degree of compliance with the interventions. In addition, most studies provided limited information regarding whether attempts were made to mask participants or observers to intervention status. Few studies reported characteristics of the study participants, and even fewer reported whether confounding variables were considered in statistical analyses.
As seen in Table 1 , 8 studies targeted the Communication with Doctors HCAHPS domain, 6 targeted Overall Hospital Rating, 5 targeted Communication with Nurses, 5 targeted Pain Management, 5 targeted Communication about Medicines, 5 targeted Recommend the Hospital, 3 targeted Quietness of the Hospital Environment, 3 targeted Cleanliness of the Hospital Environment, and 3 targeted Discharge Information.
Description of High Quality Interventions to Improve Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) domains.
Author/Year | Setting | Design and Size | Domains Assessed and Descriptions of Intervention |
---|---|---|---|
O'Leary 2013 17 | Patients admitted to non-teaching hospital service at an academic medical center in Chicago, IL | Pre-post design (N=278 pre vs 186 post) | Communication With Doctors, Overall Hospital Rating: A communication skills training program for hospitalists. Patients who were discharged from the hospitalist service during the 26 weeks prior to the intervention were compared to those discharged from the hospitalist service during the 22 weeks after the intervention |
Wang 2013 12 | Spine surgery patients at an academic medical center in Pittsburgh, PA | Pre-post design (N=273 pre vs 254 after 1st intervention vs 214 after both interventions) | Communication with Nurses, Communication about Medicines, Discharge Information: First intervention was a “surgical flight plan” to standardize communication to patients; second intervention used “SmartRoom” technology to provide patients with tailored education videos and informed providers of viewing progress. Patients discharged during 3 months prior to interventions were compared to those discharged during 3 months of the first intervention, and then to those discharged during 3 months of both interventions |
Amin 2014 19 | All patients at an academic medical center in Irvine, CA | Pre-post design (N= 555 pre vs 534 post) | Recommend the Hospital: Care management services were changed from a unit-based to a service-based model, to allow better integration with the care team. HCAHPS comparison was between the diffusion period and the post-intervention period. |
Fornwalt 2014 13 | All patients at a general medical and surgical hospital in Birmingham, AL | Pre-post design (N not reported) | Communication with Nurses, Communication with Doctors, Responsiveness of Hospital Staff, Pain Management, Communication about Medicines, Discharge Information, Cleanliness of Hospital Environment, Quietness of Hospital Environment, Overall Hospital Rating, Recommend the Hospital: Patients discharged during 9 months of a program of using flyers describing to patients the state-of-the-art disinfection being used (a portable UV disinfection system), compared to patients discharged during the prior 30 months |
Simons 2014 15 | Patients on general internal medicine hospitalist and housestaff services at an academic medical center in Chicago, IL | Clustered randomized controlled trial (N=72 control vs 66 intervention) | Communication with Doctors, Overall Hospital Rating: Randomization was at the unit level. Physicians working on the intervention units received facecards that listed the name and role of attendings, residents, and interns. The facecards were directly delivered to patients by physicians who participated in their care. |
Banka 2015 9 | All patients at an academic medical center in Los Angeles, CA | Pre-post design (N=465 pre vs 528 post) | Communication with Doctors, Recommend the Hospital: Patient satisfaction education was provided to internal medicine residents via a conference, real-time feedback, monthly recognition, and a small reward. Patients discharged post-intervention were compared to those discharged pre-intervention, controlling for changes in satisfaction score for non-internal medicine patients |
Chan 2015 18 | Patients at a safety net hospital in San Francisco, CA | Randomized, controlled trial (N=685 total; per arm not reported) | Communication with Nurses, Communication with Doctors, Communication about Medicines, Discharge Information: Patients randomized to an intervention consisting of 1) inpatient visits by a language-concordant nurse who provided post-hospitalization education and 2) a post-discharge phone call by a nurse practitioner were compared to patients who received usual care |
Harper 2015 7 | Patients undergoing unilateral hip or knee replacement at a single center in Boston, MA | Randomized, controlled trial (N=36 in each arm) | Communication with Nurses, Cleanliness of Hospital Environment, Quietness of Hospital Environment, Pain Management, Overall Hospital Rating, Recommend the Hospital: Patients randomized to receive animal-assisted therapy (therapy dogs) compared to patients who did not |
Indovina 2015 16 | Patients on general internal medicine service at a university-affiliated public safety net hospital in Denver, CO | Randomized, controlled trial (N=35 control vs 30 intervention) | Communication with Doctors, Overall Hospital Rating: Patients were surveyed daily regarding physician communication. Attending hospitalists caring for patients randomized to the intervention arm received daily feedback of survey results, as well as brief 1-on-1 education and coaching sessions. They were also asked to revisit patients who did not give a top-box score. |
Siddiqui 2015 8 | All patients at an academic medical center in Baltimore, MD | Pre-post design with concurrent controls (N=1648 pre vs 1373 post) | Cleanliness of Hospital Environment, Quietness of Hospital Environment, Communication with Nurses, Communication with Doctors, Pain Management, Communication about Medicines, Overall Hospital Rating, Recommend the Hospital: Patients discharged from a new clinical building during the first 7.5 months, compared to patients on the same units discharged from the old clinical building during the preceding 12 months |
Boissy 2016 11 | All patients at an academic medical center in Cleveland, OH | Observational study with control group (N=230 control vs 204 intervention) | Communication with Doctors: All attending physicians were offered 8 hours of experiential communication skills training. Those who participated were compared with those who did not with regard to how they were evaluated by their patients. |
Schroeder 2016 20 | Patients on an orthopedic unit at a community hospital in Johnstown, PA | Pre-post design (N not reported) | Pain Management: Developed online learning module for improving pain assessment for postoperative total joint patients. Module was used to educate nursing staff on orthopedics unit. |
Soric 2016 10 | Patients on general internal medicine service at a community hospital in Chardon, OH | Pre-post design (N not reported) | Communication about Medicines: Intervention consisted of a pharmacy team (clinical pharmacists, pharmacy resident, and pharmacy student) participating in team rounds and providing patient education. Comparison was between patients hospitalized prior to the intervention period and those hospitalized afterwards, though not all patients received the intervention. |
Titsworth 2016 14 | Patients on neurosurgery service at an academic medical center in Gainesville, FL | Pre-post design (N not reported) | Pain Management: Interdisciplinary team developed and implemented standard analgesia protocol for neurosurgery patients. |
Phatak 2016 21 | Patients on general internal medicine services at an academic medical center in Chicago, IL | Pre-post design (N not reported) | Communication about Medicines: Pharmacist intervention for transition of care, including face-to-face medication reconciliation, patient-specific pharmaceutical care plan, discharge counseling, and follow-up phone calls. |
Eligible interventions are presented with their quality rating and main results in Table 2. Eight studies reported statistically significant results. One of these was a small randomized controlled trial, which found that the use of therapy dogs prior to physical therapy sessions for orthopedic patients improved Pain Management, Communication with Nurses, and Overall Hospital Rating. 7 Two studies with pre-post assessments found that constructing a new hospital building improved Cleanliness of Hospital Environment but did not affect other domains, 8 and that physician education and real-time feedback of patient satisfaction via an information technology intervention improved the Communication with Doctors and Recommend the Hospital domains. 9 Another pre-post assessment of a pharmacy team intervention found significant improvement in the Communication about Medicines domain, 10 while an observational study assessing communication training for attending physicians found improvement in a single item of Communication with Doctors. 11 A more complicated study assessed two sequential interventions, first using a “surgical flight plan” and then providing a large menu of patient education videos via “SmartRoom” technology. 12 Although this latter study reported some statistically significant improvements in individual communication questions from different domains, these came from multiple comparisons without correction, and domain scores were not reported. An additional study reported the results of advertising the use and cleanliness of a portable ultraviolet (UV) disinfection device. 13 Although the authors reported improvement in the Cleanliness of Hospital Environment domain, the sample size was not reported, and there was already a strong trend toward improvement in many HCAHPS domains even before the intervention. Similarly, a final study on the development and implementation of a standardized analgesia protocol for neurosurgery patients demonstrated improvement in Pain Management, but the authors state that persistent trends in improvement after the intervention argue for the presence of other system causes for the observed improvement. 14
Results of high-quality interventions on Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) domain scores. Number needed (number in study) is calculated for domain composite scores and not for individual survey items.
Author / Year | Downs & Black Quality Rating | Results |
---|---|---|
O'Leary 2013 17 | 81% | Communication with Doctors: Composite score: Pre: 75.8% vs Post: 79.2%, p = 0.42 |
Doctors treated with courtesy/respect: OR for top-box rating with intervention = 1.23 (0.81–2.44), p = 0.22
Doctors listened: OR = 1.22 (0.74–2.04), p = 0.42
Doctors explained: OR = 0.98 (0.59 –1.64), p = 0.94
Communication about Medicines:
Staff tell you what new medicine was for:
81% (both interventions) versus 64% (1 st intervention only), p = 0.029
Doctors always treated with courtesy and respect: post minus pre = 4.1%, p = 0.09
Doctors always listened carefully: post minus pre = 4.6%, p = 0.1
Doctors explained things in a way the patient could understand: post minus pre = 6.8%, p = 0.03
Communication with Doctors:
91.9 (Intervention) versus 91.8 (Control), p = 0.60
Communication about Medicines:
72.3 (Intervention) versus 75.7 (Control), p = 0.34
Pain Management:
Treatment 94% (95% CI, 81%–99%) vs Control 72% (95% CI, 55% – 86%), p = 0.024
Overall Hospital Rating:
Treatment: 9.6 (SD = 0.7) vs Control: 8.6 (SD = 0.9), p < 0.001
Communication with Doctors:
Treatment: 81% (95% CI, 64% – 92%) vs. Control: 78% (95% CI, 61% – 90%), p = 1.0
Cleanliness of Hospital Environment and Quietness of Hospital Environment:
Treatment: 50% (95% CI, 34% – 66%) vs. Control: 53% (95% CI, 37% – 69%), p = 1.0
Doctors listened: Intervention: 90% versus Control: 83%, p > 0.05
Doctors explained: Intervention: 80% versus Control: 77%, p > 0.05
Quietness of Hospital Environment:
OR = 1.89 (1.63–2.19), p < 0.0001
Communication with Nurses:
Nurse treated with courtesy/respect: OR = 1.28 (1.05–1.57), p = 0.92
Nurse listened: OR = 1.21 (1.03–1.43), p = 0.26
Nurse explained: OR = 1.10 (0.94–1.30), p = 0.43
Communication with Doctors:
Doctors treated with courtesy/respect: OR = 1.13 (0.89–1.42), p = 0.77
Doctors listened: OR = 0.93 (0.83–1.19), p = 0.68
Doctors explained: OR = 1.00 (0.84–1.19), p = 0.49
Pain Management:
Pain well controlled: OR = 1.06 (0.90–1.25), p = 0.60
Staff do everything to help with pain: OR = 1.19 (0.99–1.44), p = 0.07
Communication about Medicines:
Staff describe medicine side effects: OR = 1.05 (0.89–1.24), p = 0.32
Tell you what medicine was for: OR = 1.02 (0.84–1.25), p = 0.65
Overall Hospital Rating:
OR = 1.71 (1.44–2.05), p = 0.006
Doctors treated with courtesy/respect: Intervention: 91.08 versus Control: 88.09, p = 0.02
Doctors listened: Intervention: 83.13 versus Control: 82.79, p = 0.78
Staff describe medicine side effects: Pre: 67.8% versus Post: 77.3%, p < 0.001
Bolded statements are significant at the p < 0.05 level
Odds ratios refer to likelihood of an improvement in HCAHPS score.
Downs & Black Quality Rating score ranges from 0% – 100%
1 Unless otherwise noted, percent improvement denotes absolute change in percent of respondents who reported “top box” scores, with the assumption that unspecified percentages in studies refer to “top box” scores.
Seven additional studies did not report significant findings, either because statistical significance was not assessed, the study had inadequate power, or the interventions were implemented inappropriately or were truly ineffective. Two randomized controlled trials assessed interventions targeting physician communication, one by providing patients with physician facecards 15 and the other by providing physicians with training and real-time patient satisfaction feedback. 16 Although both demonstrated positive trends, the sample size for which HCAHPS scores were assessed was small, which may have limited their ability to detect statistical significance. Another pre-post assessment of a communication skills training program for hospitalists also did not improve Communication with Doctors or Overall Hospital Rating. 17 A randomized controlled trial of a nurse-led, language-concordant, hospital-based care transition program did not improve any of the Communication domains or the Discharge Information domain; 18 similarly, a pre-post assessment of changing care management from a unit-based model to a service-based one did not affect the HCAHPS score for Recommend the Hospital. 19 Finally, two studies did not report p-values. One involved the development and deployment of a pain management education module for nurses on an orthopedic unit, showing potential improvement in Pain Management, 20 while the other was a personalized pharmacist intervention for transition of care, with potential improvement in Communication about Medicines. 21 Both studies used HCAHPS scores for pre-post assessment but did not report sample sizes or statistical testing for HCAHPS comparisons.
In this systematic review of interventions to improve HCAHPS scores, we found that most of the published studies were of low quality. For those of satisfactory quality, the most frequently targeted HCAHPS domains included Communication with Doctors, Communication with Nurses, Communication about Medicines, Pain Management, Recommend the Hospital, and Overall Hospital Rating. These studies differed widely in approach, methodology, and targeted patient population, and even the studies that reported statistically significant results often had caveats that would limit recommendations for adopting them at other healthcare institutions.
Our results also highlight the dilemma faced by healthcare institutions that seek to improve HCAHPS scores, as it is unclear whether comprehensive approaches such as global physician education or new facilities would be more effective, or whether it might be better to target specific units or HCAHPS domains. Our review identified remarkably few high-quality designs and/or evaluations, with most demonstrating impact that was narrow in scope and small in magnitude. Across the heterogeneous domains assessed through the HCAHPS survey, we found little evidence of either specific or globally efficacious interventions. Nearly all of the studies located were of poor methodological quality, only a few employed a rigorous intervention design, and it is often unclear whether the effect on HCAHPS scores is the direct result of the intervention or is due to spill-over effects. Thus, any type of quantitative synthesis to estimate effect sizes was not possible. We did find that, of the studies eligible by our quality filter, a slight majority had significant findings. However, caution is warranted in interpreting even these results, as the reported HCAHPS scores were often secondary outcomes collected through the mandated surveys and, as several authors acknowledge, could be influenced by other ongoing quality initiatives.
The lack of appropriate design, reporting, and statistics among our additional 44 located but quality-ineligible studies is problematic for the improvement of patient satisfaction with hospital and provider care for many reasons. First, there may be important and useful hospital/provider improvements that were tested amongst these possible interventions that will go unrecognized, because studies did not have sufficient sample sizes or robust study designs to assess their usefulness. Second, hospital and clinician initiatives, such as interdisciplinary rounding and commercial customer service training, are currently being implemented and disseminated by hospitals at great expense, but there is little published evidence suggesting these will result in improvements in patient satisfaction, particularly across diverse geographic and practice contexts. The absence of high-quality evidence about ways to improve the hospital experience for patients leaves healthcare leaders with little more than anecdotes to guide their strategic decision-making. For example, one healthcare leader conducted daily CEO rounds, 22 but it is not clear how beneficial this type of practice might be because anecdotal/single case studies are the only available evidence. In the absence of rigorous, actionable evidence on which to judge the appropriateness of interventions aimed at improving patient satisfaction, we cannot expect hospitals or clinicians to adopt best evidence-based practices. 23
To help address these issues, it would be useful for future studies to adopt more rigorous approaches. These would include formal power calculations that take into account reasonable assumptions for effect size and local survey response rate. The latter is particularly important, as in our experience it is often no longer feasible to directly conduct surveys using HCAHPS items as part of study protocols, due to concern for contamination with CMS-required surveys. This likely explains our observation that more recent studies have tended to use HCAHPS scores obtained through surveys as secondary outcomes. An example of such a power calculation might be as follows: if a hospital had a response rate of 35% and wanted to improve one of the HCAHPS domains from its current 75% to 80%, it would take approximately 2,262 survey responses to effectively test the proposed intervention; 6,463 patients would need to be exposed to the intervention to receive that many surveys. More thoughtful sample size planning in this fashion might alleviate the issue of being unable to assess whether a targeted intervention that met the primary research outcomes might also meaningfully impact patient satisfaction as measured by the HCAHPS score.
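Figures of this magnitude can be reproduced, at least approximately, with a standard two-proportion sample-size formula. The minimal sketch below assumes a two-sided alpha of 0.05, 80% power, and a Fleiss continuity correction; the article does not state the exact assumptions behind its numbers, so treat the output as illustrative rather than a reconstruction of the original calculation.

```python
from math import sqrt

def responses_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Survey responses needed per group to detect a change from proportion p1 to p2
    (two-sided alpha = 0.05, 80% power), with Fleiss continuity correction."""
    p_bar = (p1 + p2) / 2
    d = abs(p2 - p1)
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
         z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    return n / 4 * (1 + sqrt(1 + 4 / (n * d))) ** 2  # continuity correction

n_group = responses_per_group(0.75, 0.80)   # current 75% vs target 80% top-box
total_responses = 2 * n_group               # roughly 2,270 survey responses
patients_exposed = total_responses / 0.35   # roughly 6,480 patients at a 35% response rate
print(round(total_responses), round(patients_exposed))
```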
Part of the excitement and interest in improving patient satisfaction with hospital care derives from studies noting that these scores are observationally associated with improved clinical outcomes. 24–28 A recent systematic review concluded that higher patient satisfaction was observationally associated with better patient safety, clinical effectiveness, health outcomes, and adherence, and with lower resource utilization. 29 However, many other studies examining quality process measures, such as those reported by the Hospital Compare website, have found a low concordance between excellence in care and HCAHPS scores (kappa < 0.20). 30
Yet other studies have found no association between patient satisfaction and the technical quality of care. 31 A national study of 51,946 adult respondents reported that higher patient satisfaction was associated with higher risk of inpatient admission, greater expenditures, greater prescription drug expenditures and higher mortality; 32 and a study of 31 hospitals in 10 states reported that patient satisfaction was independent of hospital compliance with surgical processes of quality care. 33 Nonetheless, despite some inconsistencies, patient satisfaction is likely to remain a key quality metric, especially given its essential importance to the relationship between patients and the healthcare system. 34 It is therefore imperative to identify effective patient satisfaction interventions, and to directly investigate if improving patient satisfaction can also directly improve other important clinical outcomes.
What then can be done to move this field forward? There seem to be few interventions that are designed to improve a single patient satisfaction domain across all hospitalized patients and that are rigorously tested for usefulness. These might be the next generation of interventions, which, if married with more rigorous designs and power analyses, appropriate correction for multiple comparisons, and use of the correct unit of analysis (e.g., site, physician, patient, service line), would help build an evidence base. Published interventions most commonly used a pre-post design, which does not guard against secular trends, contamination by other co-occurring interventions, and the other validity threats present when randomization is absent. One example of a useful future intervention might be randomizing all physicians to either receive or not receive real-time feedback on their own Communication with Doctors domain scores, to determine whether this improves that one domain across the hospital and across all patient groups. Or, one could test one of the many behavioral economics approaches that have been used to change physician behavior, for example randomizing physicians to a peer-commitment letter about their Communication with Doctors score goal versus no such commitment. 35 Another example might be implementing sleep hygiene environment practices for all patients on a floor, 36 in which noise meters, red-spectrum lighting, and white noise machines are introduced, and alerts, overhead paging systems, and elective phlebotomy are minimized or eliminated. Units could be randomized in a stepped wedge design to test the rollout of such environmental changes and determine whether the Cleanliness of Hospital Environment and Quietness of Hospital Environment domains improve (a sketch of such a rollout follows below). Guarding against multiple comparisons and conducting the analyses mindful of the correct unit of analysis (surveys nested within physician, or within unit) would be important. Successful studies along these lines would also need to recognize resource constraints and the operational priorities of healthcare systems. Thus, these types of innovative interventions will require close collaboration among hospital leadership, front-line staff, and patients, to address the need for improvement in satisfaction with healthcare service while rigorously testing the implications of the intervention for the quality of that care.
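As a hypothetical sketch of the stepped-wedge rollout described above: every unit eventually crosses over to the intervention, but the crossover period is randomized. The unit names, number of periods, and random seed are all invented for illustration.

```python
import random

def stepped_wedge_schedule(units, n_periods, seed=42):
    """Randomly assign each unit a crossover period; return the arm of each unit in each period."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    # Spread crossover times evenly across periods 1..n_periods-1 (period 0 is all-control).
    crossover = {u: 1 + i * (n_periods - 1) // len(units) for i, u in enumerate(shuffled)}
    return {
        period: {u: ("intervention" if period >= crossover[u] else "control") for u in units}
        for period in range(n_periods)
    }

schedule = stepped_wedge_schedule(["5East", "5West", "6East", "6West"], n_periods=5)
for period, arms in schedule.items():
    print(period, arms)
```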
The systematic review reported here is limited by a number of factors. First, because the HCAHPS score contains many domains, this required the use of a broad range of search terms which contributed to the heterogeneity of the studies captured. Relatedly, this “scoping review” differed from an in-depth systematic review in that: (1) hand searching was not conducted, (2) there was no contact with the study authors, and (3) there was no attempt to combine results in a meta-analysis. 37
In conclusion, we identified few high-quality studies that tested the efficacy of interventions to improve patient satisfaction scores as assessed by the HCAHPS survey. Despite the visibility of public reporting and the accountability of value-based purchasing for HCAHPS survey scores, there is minimal evidence to inform hospitals, clinicians, payers, and healthcare policy/management experts about which interventions can improve patient satisfaction and in what context. Given the importance of patient satisfaction, as well as patient outcomes, safety, and cost, in high-value healthcare, there is an urgent need for properly designed studies to evaluate novel and sustainable interventions to improve patient satisfaction that have a demonstrable impact on important clinical outcomes and that can be spread across different regions and hospital contexts.
Dr. Davidson is a member of the United States Preventive Services Task Force (USPSTF). This article does not necessarily represent the views and policies of the USPSTF. Dr. Davidson is also the co-owner of MJBK, Inc., a small business that provides mhealth technology solutions to consumers. She is also the co-owner of IOHealthWorks, LLC., a small consulting services company.
Dr. Ting is a member of the National Quality Forum Consensus Standards Approval Committee and the American Board of Internal Medicine Council.
This work was supported by the Value Institute of New York Presbyterian Hospital and the New York State Department of Health's Empire Clinical Research Investigator Program (ECRIP). Additional support was provided by contract #ME-1403-12304 of the Patient-Centered Outcomes Research Institute. Drs. Shaffer and Ye are supported by National Institutes of Health K23 career development awards (K23 HL112850 and K23 HL121144, respectively).
HCAHPS | Hospital Consumer Assessment of Healthcare Providers and Systems |
CMS | Centers for Medicare & Medicaid Services |
PICO | problem/patient, intervention, comparison, outcomes |
QI | quality improvement |
ACA | Affordable Care Act |
1 HCAHPS items are scaled in a number of different ways. Fourteen items feature a four-point response scale ranging from “Never” to “Always.” Three items use a four-point response scale ranging from “Strongly Disagree” to “Strongly Agree.” Two discharge-related items offer a yes/no response option. Overall rating of care uses an 11-point Likert scale, and the item “Likelihood to recommend” features a four-point response scale ranging from “Definitely No” to “Definitely Yes.”
Competing interests
Dr. Davidson has disclosed those interests fully to Columbia University Medical Center, and has in place an approved plan for managing any potential conflicts arising from this arrangement.
Authors’ contributions
JAS, SY, KS, IAI, and IOE conducted the title, abstract, and full text review for this study, performed data extraction, evaluated study quality, and drafted major parts of the manuscript. LF developed the search strategy. DKV, SLM, WMM, HHT, KWD, JAS, and SY conceived the idea for this study, and drafted major parts of the manuscript. All authors read and approved the final manuscript.
1. Cleary PD, McNeil BJ. Patient Satisfaction as an Indicator of Quality Care. Inquiry. 1988;25(1):25–36.
2. Roland M. Linking physicians' pay to the quality of care--a major experiment in the United Kingdom. N Engl J Med. 2004;351(14):1448–1454.
3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the Twenty-First Century. Washington, DC: The National Academies Press; 2001.
6. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–384.
7. Harper CM, Dong Y, Thornhill TS, et al. Can therapy dogs improve pain and satisfaction after total joint arthroplasty? A randomized controlled trial. Clin Orthop. 2015;473(1):372–379.
8. Siddiqui ZK, Zuccarelli R, Durkin N, et al. Changes in patient satisfaction related to hospital renovation: experience with a new clinical building. J Hosp Med. 2015;10(3):165–171.
9. Banka G, Edgington S, Kyulo N, et al. Improving patient satisfaction through physician education, feedback, and incentives. J Hosp Med. 2015;10(8):497–502.
10. Soric MM, Glowczewski JE, Lerman RM. Economic and patient satisfaction outcomes of a layered learning model in a small community hospital. Am J Health-Syst Pharm. 2016;73(7):456–462.
11. Boissy A, Windover AK, Bokar D, et al. Communication Skills Training for Physicians Improves Patient Satisfaction. J Gen Intern Med. 2016.
12. Wang W, Dudjak LA, Larue EM, et al. The influence of goal setting and SmartRoom patient education videos on readmission rate, length of stay, and patient satisfaction in the orthopedic spine population. Comput Inform Nurs. 2013;31(9):450–456.
13. Fornwalt L, Riddell B. Implementation of innovative pulsed xenon ultraviolet (PX-UV) environmental cleaning in an acute care hospital. Risk Manag. 2014;7:25–28.
14. Titsworth WL, Abram J, Guin P, et al. A prospective time-series quality improvement trial of a standardized analgesia protocol to reduce postoperative pain among neurosurgery patients. J Neurosurg. 2016:1–10.
15. Simons Y, Caprio T, Furiasse N, et al. The impact of facecards on patients' knowledge, satisfaction, trust, and agreement with hospital physicians: a pilot study. J Hosp Med. 2014;9(3):137–141.
16. Indovina K, Keniston A, Reid M, et al. Real-time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016.
17. O'Leary KJ, Darling TA, Rauworth J, et al. Impact of hospitalist communication-skills training on patient-satisfaction scores. J Hosp Med. 2013;8(6):315–320.
18. Chan B, Goldman LE, Sarkar U, et al. The Effect of a Care Transition Intervention on the Patient Experience of Older Multi-Lingual Adults in the Safety Net: Results of a Randomized Controlled Trial. J Gen Intern Med. 2015.
19. Amin AN, Hofmann H, Owen MM, et al. Reduce readmissions with service-based care management. Prof Case Manag. 2014;19(6):255–262.
20. Schroeder DL, Hoffman LA, Fioravanti M, et al. Enhancing Nurses' Pain Assessment to Improve Patient Satisfaction. Orthop Nurs. 2016;35(2):108–117.
21. Phatak A, Prusi R, Ward B, et al. Impact of pharmacist involvement in the transitional care of high-risk patients through medication reconciliation, medication education, and postdischarge call-backs (IPITCH Study). J Hosp Med. 2016;11(1):39–44.
22. Wolf JA. Healing Humankind One Patient at a Time: UCLA Health System. Los Angeles, CA: The Beryl Institute; 2011. http://www.theberylinstitute.org/
23. Lewis S. Toward a general theory of indifference to research-based evidence. J Health Serv Res Policy. 2007;12(3):166–172.
24. Jha AK, Orav EJ, Zheng J, et al. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931.
25. Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010;3(2):188–195.
26. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the Role of Patient Experience Surveys in Measuring Health Care Quality. Med Care Res Rev. 2014;71(5):522–554.
27. Glickman SW, Boulding W, Manary M, et al. Patient Satisfaction and Its Relationship With Clinical Quality and Inpatient Mortality in Acute Myocardial Infarction. Circ Cardiovasc Qual Outcomes. 2010;3(2):188–195.
28. Tsai TC, Orav EJ, Jha AK. Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
29. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570.
30. Girotra S, Cram P, Popescu I. Patient Satisfaction at America's Lowest Performing Hospitals. Circ Cardiovasc Qual Outcomes. 2012;5(3):365–372.
31. Chang JT, Hays RD, Shekelle PG, et al. Patients' Global Ratings of Their Health Care Are Not Associated with the Technical Quality of Their Care. Ann Intern Med. 2006;144(9):665–672.
32. Fenton JJ, Jerant AF, Bertakis KD, et al. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
33. Lyu H, Wick EC, Housman M, et al. Patient satisfaction as a possible indicator of quality surgical care. JAMA Surg. 2013;148(4):362–367.
34. Chatterjee P, Tsai TC, Jha AK. Delivering value by focusing on patient experience. Am J Manag Care. 2015;21(10):735–737.
35. Meeker D, Knight TK, Friedberg MW, et al. Nudging guideline-concordant antibiotic prescribing: a randomized clinical trial. JAMA Intern Med. 2014;174(3):425–431.
36. Koch S, Haesler E, Tiziani A, et al. Effectiveness of sleep management strategies for residents of aged care facilities: findings of a systematic review. J Clin Nurs. 2006;15(10):1267–1275.
37. Borkhoff CM, Wieland ML, Myasoedova E, et al. Reaching those most in need: a scoping review of interventions to improve health care quality for disadvantaged populations with osteoarthritis. Arthritis Care Res (Hoboken). 2011;63(1):39–52.