Study: 20 of 21 billing metrics unreliable for measuring care quality

A new meta-study by a Johns Hopkins research team questions the increasing use of the Agency for Healthcare Research and Quality's (AHRQ) patient safety indicators (PSIs) and CMS's hospital-acquired conditions (HACs) for pay-for-performance and public reporting. PSIs and HACs rely on billing data rather than clinical data to judge hospital performance, a practice that has raised concerns about their validity as quality benchmarks. Many of these measures feed several public rating systems, including U.S. News & World Report's Best Hospitals, Leapfrog's Hospital Safety Score, and CMS's Star Ratings.

The Johns Hopkins study, published in Medical Care, considered a PSI accurate if the medical record and the administrative (billing) database matched at least 80% of the time. Sixteen of the measures lacked sufficient data to be evaluated, and of the five that could be assessed, only PSI 15, which measures accidental punctures or lacerations sustained during surgery, met that threshold.

"These measures have the ability to misinform patients, misclassify hospitals, misapply financial data, and cause unwarranted reputational harm to hospitals," said the study's lead author, Bradford Winters, MD, in a press release. "If the measures don't hold up to the latest science, then we need to re-evaluate whether we should be using them to compare hospitals."

One of the study authors, Peter Pronovost, MD, PhD, recently published a commentary in The Journal of the American Medical Association outlining possible solutions and steps the rating community could take.

“The variation in coding severely limits our ability to count safety events and draw conclusions about the quality of care between hospitals,” he writes. “Patients should have measures that reflect how well we care for patients, not how well we code that care.”

Read the full HealthLeaders Media article on the patient safety indicator study for more on the findings.
