Have You Seen This Pessimist’s Guide To Benchmarking In Healthcare?

By:  DMKashmer MD MBA FACS (@DavidKashmer)


 

It sure sounds like a good idea to measure our healthcare processes against standards from other centers, right? It seems like obvious logic that if we benchmark ourselves against how other organizations and professional societies want us to perform (or how they perform themselves), we'll be better off in the end. Doesn't it sound straightforward that we should have an external benchmark to compare our processes against?

 

Guess what? It’s not, and here’s why. You probably have a long way to go before you benchmark.

 

Thirty-five healthcare quality projects in the last three years have reinforced this simple truism for me:  don't benchmark at first. Why? There is usually a lot more you have to do before you look to some external agency for a benchmark.  Here are some of the items that probably need doing before you scoop up an external measure and apply it to your system.

 

You Don’t Have A Clear, Usable Definition of What You’re Measuring

 

For example, your healthcare system probably lacks a clear operational definition of the metrics it wants to measure.  Will you use a definition for VAP (Ventilator-Associated Pneumonia) from the CDC or some other definition?  Does everyone who is performing data collection use the same definition?  Truth is, unfortunately, when you scratch the surface…they probably don't.
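One quick way to find out whether your data collectors really share a definition is to have two of them classify the same cases and measure their agreement. Here is a minimal, stdlib-only sketch computing Cohen's kappa (agreement beyond chance: 1.0 is perfect, 0 is chance-level); the reviewer data are invented for illustration, not from any real ICU:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving binary labels (1 = VAP, 0 = not)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases both raters labeled the same way.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal rate.
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Illustrative chart reviews: 1 = counted as a VAP, 0 = not a VAP
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))
```

A kappa well below 1 on a sample of charts is a strong hint that the operational definition is not yet shared, and that any rate you report (let alone benchmark) is not yet trustworthy.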

 

You Don’t Know The Voice of the Customer…Or Even Who The Customer Is (!)

 

You may not even know the voice of the customer (VOC) and key process indicators for your various systems.  Who exactly is receiving output from this system of yours?  And what do they (not you) want?  Get over yourself already and go find who is on the receiving end of your system and what they expect from the system.  You may even need to get out of the building to find out.  (Shudder!)

In other words, until you have a clear definition of what you're measuring, a way to measure it, and confidence that it will significantly impact what you're doing, you have a long way to go before you benchmark. Let me tell you more. One of the common errors we make with healthcare statistical process control and other quality projects is that we fumble at the one-yard line. I mean that we don't have a clear definition of what we are measuring or how we are going to measure it. How can we benchmark against an external measure before we even know what we are talking about? All too often, this is exactly what happens.

 

Consider this story of woe that owes itself to the problems we discussed above.

 

A Cautionary Tale:  VAPs in the ICU

 

Once upon a time there was an intensive care unit that wanted to benchmark its ventilator-associated pneumonia (VAP) performance against external organizations. (By the way, this is NOT the organization I work for!) It looked around and found typical rates of ventilator-associated pneumonia reported by other organizations. It seemed to make a lot of sense to do this. After all, they could bring their expected performance in line with other organizations. Of course, they wanted zero ventilator-associated pneumonias as their real goal. What were the problems?

 

First, they had a non-standard definition of ventilator-associated pneumonia. In fact, the operational definition of VAP they chose did not square with the definitions used by other centers. What did this cause? All sorts of misguided quality interventions.  Alas, they didn't discover this until a lot of work had been done.

 

For example, the team adopted a VAP bundle, which also makes a lot of sense. It then went on to perform no fewer than 12 other interventions in order to achieve quality improvement. Some of these decreased the VAP rate and some (many) did not.  The team spun its wheels, and fatigue and staff churn quickly set in.

 

Another problem with external benchmarking? The team did not have the infrastructure to determine whether it was doing significantly better or not. This is a common danger of benchmarking. Because the operational definitions did not align, the team added layer after layer of complexity and friction for dubious gains in quality. Worse yet, this wild goose chase actually worsened outcomes, owing to the variation that all of the ineffective changes introduced into the system.  Because quality teams often lack the sophistication to do statistical testing and to protect against tampering / Type I errors, the wild goose chase in healthcare (sometimes from inappropriate benchmarking) really hurts!
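The statistical infrastructure the team lacked need not be heavy machinery. As a minimal, stdlib-only sketch (the before/after counts are invented for illustration), a two-proportion z-test can check whether an apparent drop in the VAP rate is distinguishable from noise before anyone takes credit for an intervention:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative: 12 VAPs in 400 ventilator-days before, 7 in 380 after.
z, p = two_proportion_z(12, 400, 7, 380)
if p < 0.05:
    print("Significant change: keep the intervention")
else:
    print("No significant evidence of change: beware tampering")
```

With these illustrative numbers the apparent drop is not statistically significant, which is exactly the situation in which piling on further changes amounts to tampering with a stable process.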

 

I see this all the time, and it's very challenging to avoid in our current healthcare climate. For example, it is always hard to argue against doing more. Intuitively, who wouldn't want to do more to make sure their patients were safe?  It's an easy position to support, akin to "putting more cops on the street" promises from politicians.  Who could disagree?

 

However, it turns out that when we make too many changes, or changes that do not result in significant improvement, we can unfortunately increase variation in our processes and obtain paradoxically worse outcomes. Processes can become cumbersome or resource-intensive, whether in terms of manpower or other sources of friction. This is very difficult to guard against.

 

Learn from this instructional fairy tale: align the operational definition you are working with to the one behind your benchmark. Or better yet, don't benchmark at first.

 

Important Thoughts on Benchmarking

 

So, if I'm telling you not to benchmark first, what is there to do? My recommendation is to follow the DMAIC process (Define, Measure, Analyze, Improve, Control), where there is a clear definition and that definition is measured in rigorous statistical ways. This means having a team together that adopts a standard definition of the item being studied. I can't say enough about that.

 

The operational definition for your particular item must align with the eventual item you want to benchmark. Typically in non-rigorous healthcare quality projects, this does not happen. Before you go on to accept the benchmark that you so badly want to look toward, make sure that this definition can be measured in adequate ways.

 

Measurement error and other measurement vagaries can really throw off your quality project; a measurement systems analysis (MSA) helps uncover them. You can end up forever chasing your tail, or the benchmark, if your measurement system is not statistically rigorous or useful. Does the outside institution obtain its benchmark rate from retrospective, cleaned data warehouses? Or did it measure prospectively, right from the process? These are things you'll have to wrestle with, and they may make a difference in the benchmark you accept and in what you think represents quality.

 

If the benchmark you are looking toward is a zero-defect rate or some similar endpoint, that's one thing. However, typically we use benchmarks to get a sense of what a typical rate of performance is. As taught to me by experience and by Lean and Six Sigma coursework: don't benchmark until you have rigorously improved your process as much as possible. And when you do benchmark, I recommend that you carefully align your operational definition, measurement system, and even the control phase of your project with this eventual benchmark.
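One way to check that a process has been improved as much as possible, and is stable enough to compare against anything external, is a control chart. Below is a minimal sketch of p-chart limits, treating each ventilator-day as an opportunity for a defect; the monthly VAP counts and ventilator-days are invented for illustration:

```python
def p_chart_limits(defects, sample_sizes):
    """Return (p_bar, per-sample (LCL, UCL)) for a p-chart with 3-sigma limits."""
    p_bar = sum(defects) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        sigma = (p_bar * (1 - p_bar) / n) ** 0.5
        lcl = max(0.0, p_bar - 3 * sigma)   # proportions can't go below 0
        ucl = min(1.0, p_bar + 3 * sigma)   # or above 1
        limits.append((lcl, ucl))
    return p_bar, limits

vaps = [3, 5, 2, 4, 6, 3]                    # VAPs per month (illustrative)
vent_days = [210, 190, 220, 205, 198, 215]   # ventilator-days per month
p_bar, limits = p_chart_limits(vaps, vent_days)
# Months whose observed rate falls outside its control limits:
out_of_control = [
    i for i, (d, n) in enumerate(zip(vaps, vent_days))
    if not (limits[i][0] <= d / n <= limits[i][1])
]
print(out_of_control)
```

Only once the chart shows a stable process (no out-of-control points, no non-random patterns) does comparing its center line against an external benchmark say anything meaningful.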

 

Do you have thoughts on benchmarks? You probably feel, like I do, that benchmarks can be very useful for quality projects…but only when used carefully!  Have you ever seen a benchmark used inappropriately, or one that caused all of the issues raised above? If you have, let me know, because I would love to discuss!
