“Improving the quality of care is the best way to decrease the cost of care.” Dr. Raja Sabapathy of Ganga Hospital in India said this to me a few weeks ago. He is one of those rare individuals who truly practice what they preach. His hospital has grown by leaps and bounds and is establishing itself as a worldwide mecca for trauma and secondary reconstruction. The cost of care is kept low so that care remains accessible to all patients, yet the quality remains superb. We hear the phrase “improving the quality of care is the best way to decrease the cost of care,” but do we really believe it? Do our actions reflect it?
In its current state, the U.S. healthcare quality initiative is dominated by metrics: rates of surgical site infections, readmissions, DVT/PEs, HCAHPS top-box scores, catheter-associated UTIs, and so on. While it’s common to hear physicians complain about this, none of these metrics is inherently negative. However, judging by the fervency with which administrators work to optimize these metrics, it seems as if they believe that optimal rankings based on metric scores and “quality care” are synonymous. There is a very common, yet very fallacious, saying in management: “you can only manage what you can measure.” What’s happening in healthcare is a perfect example of this: the things we measure are the things we work directly to improve. Everything else doesn’t matter (until there is a new metric that payers start caring about).

I was reminded of this when a friend shared a recent experience. He had met with a high-ranking physician administrator at his institution whose focus is quality improvement, and presented his OR efficiency data and analysis. The findings were, unfortunately, not very surprising: the value-added usage of the ORs was outweighed by the non-value-added usage. In other words, over a full workday the OR was generating income only about half of the time and losing money the other half. That is like paying an employee for the full day when, after lunch, they do nothing but Sudoku and posting on Instagram. Naively, he assumed that the analysis would alarm administration and spur change. The response he received, though, was “that’s a ‘fun’ project to work on…but we’re too busy doing the things that have to be done.” If you’re wondering what those things are, they are the metrics that the U.S. News and World Report rankings and the U.S. government care about, plus whatever acute or urgent problems arise.
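The value-added analysis described above amounts to simple bookkeeping: classify each block of OR time as revenue-generating (roughly, incision to close) or not, and sum. A minimal sketch of that calculation, with entirely made-up numbers standing in for the real schedule data:

```python
# Hypothetical single-day OR schedule in minutes. The categories and numbers
# are illustrative only, not the actual data from the analysis described above.
# Each entry: (description, minutes, is_value_added)
blocks = [
    ("case (incision to close)",   90, True),
    ("room turnover",              50, False),
    ("case (incision to close)",   80, True),
    ("delayed start / idle",       70, False),
    ("case (incision to close)",   60, True),
    ("gaps between cases / idle", 130, False),
]

value_added = sum(minutes for _, minutes, va in blocks if va)
total = sum(minutes for _, minutes, _ in blocks)

print(f"value-added OR time: {value_added}/{total} min "
      f"({value_added / total:.0%} of the day)")
```

With numbers like these, less than half of a fully staffed day actually generates income, which is the pattern the analysis above reported.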
What is “quality” health care? How much time have you spent contemplating that question? The quality metrics, as we discussed, are not inherently bad (or good). If we have zero readmissions, does that mean we provide good, quality care? If we have high readmissions, does that mean we provide poor, low-quality care? Do low surgical site infection rates identify good surgical care? Careful consideration reveals that the answer is “well, maybe…but it depends.” What does quality care really look like in health care? After some thought, my list included:
1) Correct diagnosis
2) Timely diagnosis
3) Timely determination of correct treatment plan
4) Efficient and optimal execution of treatment plan
5) Complete coordination of care and communication between all healthcare providers
6) Timely identification of complications
7) Kind, considerate, nonjudgmental, empathetic care
You could probably add several more items to this list. One key difference between these points and the metrics: how do you measure them? How can you measure even something as seemingly simple as the rate of “correct diagnosis”? How do we identify the patients who received an incorrect diagnosis? These things are largely immeasurable. The quality and leadership guru W. Edwards Deming taught us long ago that the most important things in business are often immeasurable and unknowable. This is profoundly true in health care. Some administrators become very uncomfortable with this because they rely on their Excel spreadsheets, with all the numbers and graphs they generate, to manage. Hospital administrators are swimming in data. EMRs generate mind-numbing amounts of data. However, data and information are not synonymous with knowledge. We would be held hostage by data if we had to make a spreadsheet and an accompanying pie chart before every important decision.
Good and reliable data are of course very important and useful. However, we must understand that for some things we simply won’t have good data. How do we measure the percentage of patients who received kind, considerate, nonjudgmental, empathetic care? Patient satisfaction surveys are very imperfect and can give us only a rough indication. Like patient satisfaction surveys, the other metrics are imperfect indicators and must be treated as such, not as goals unto themselves. Unfortunately, metrics are treated as the ultimate goal: we want zero DVTs because we want zero DVTs; we want zero readmissions because we want zero readmissions. When we work to optimize the metrics, we lose sight of the fact that they should be treated as imperfect indicators of quality care; they themselves are not clear identifiers of quality care. Since so much focus is placed on the metrics, have you ever heard of or witnessed someone trying to game the system to optimize their numbers? Is it difficult to imagine that this likely happens daily in major hospitals around the country? We have become fixated on improving the metrics rather than on improving the care. The negative consequences can be significant. For example, if a surgeon has an above-average readmission rate and is told by administration that he or she must decrease it (because it is above average…along with the rates of half of the other surgeons), how does this exhortation help the surgeon? Is the surgeon’s readmission rate a marker of poor surgical care? Maybe…but we need more information before we can draw that conclusion. An easy way to decrease the readmission rate would be to refer out the patients at high risk for readmission. The metric improves, but the system remains unchanged and access to care suffers.
Where do we go from here? You can tell that I have strong opinions about metrics. However, I am not an advocate for doing away with them. Rather, it is essential that we interpret and use them appropriately. Here are some key takeaway points:
1) Continual process improvement doesn’t require metrics. Do you need a metric to continually improve the way you treat patients?
2) There will always be variability from month to month and between hospitals, clinics, and surgeons. Blindly rewarding the top 50% and punishing the bottom 50% is therefore not only stupid, but harmful.
3) The metric is an indicator of how your system is functioning and should always drive you back to your system. Don’t focus on the metric itself.
4) A “high” metric score or a “low” metric score has no inherent meaning, because the metric is an imperfect indicator. Can you reliably and fairly compare surgical site infection rates between two surgeons without knowing significant details about their respective systems, e.g., patient-specific details, hospital-specific details, nursing-specific details, region of the country, and so on? Furthermore, metrics can easily be gamed, and you had better believe that this happens all the time.
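The point about variability is worth making concrete. Even if every surgeon had exactly the same underlying readmission risk, ordinary statistical noise would still spread their observed rates apart, and roughly half would land below the group average purely by chance. A small simulation sketch (the rate, case counts, and surgeon count are arbitrary assumptions for illustration):

```python
import random

random.seed(42)

TRUE_RATE = 0.10          # every simulated surgeon has the same underlying risk
N_SURGEONS = 20
CASES_PER_SURGEON = 100   # one year of cases each, say

# Simulate each surgeon's observed readmission rate.
observed = []
for _ in range(N_SURGEONS):
    readmits = sum(
        1 for _ in range(CASES_PER_SURGEON) if random.random() < TRUE_RATE
    )
    observed.append(readmits / CASES_PER_SURGEON)

mean_rate = sum(observed) / len(observed)
below = sum(1 for r in observed if r < mean_rate)

print(f"observed rates range from {min(observed):.2f} to {max(observed):.2f}")
print(f"{below} of {N_SURGEONS} identical surgeons fall below the group average")
```

Every surgeon here is identical by construction, yet a league table built from these numbers would still “identify” above- and below-average performers. That is why punishing the bottom half of any such ranking, without understanding the system behind the numbers, is harmful.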
It often feels like an uphill battle when dealing with hospital administration and its flavor-of-the-month consultants brought in to improve “quality.” However, I firmly believe that physicians are in the best position to understand what quality care really is and how best to improve it. Focusing on metrics does not improve quality; it creates the illusion of quality. It is our responsibility to take up the quality initiative. Only through quality care can we begin to meaningfully decrease the cost of care.