Applying the Healthcare Value Process Index

David Kashmer (@DavidKashmer)

In the last entry, you saw a novel, straightforward metric called the Healthcare Value Process Index (HVPI), which captures the value provided by a healthcare service.  In this entry, let’s walk through another example of exactly how to apply the metric to a healthcare service.

At America’s Best Hospital, a recent quality improvement project focused on the time patients spent in the waiting room of a certain physician group’s practice.  The project group had already gone through the steps of creating a sampling plan and collecting data that represent how well the system is working.

From a patient survey, sent out as part of the project, the team learned that patients were willing to wait, at most, 20 minutes before seeing the physician.  So, the Voice of the Customer (VOC) was used to set the Upper Specification Limit (USL) of 20 minutes.

A normality test (the Anderson-Darling test) was performed, and the data collected follow the normal distribution, as shown in Figure 1 below.  (Wonder why p > 0.05 is a good thing when you use the Anderson-Darling test?  Read about it here.)

Figure 1: Anderson-Darling test result for time in waiting room.
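
If you’d like to run this check on your own data, here’s a minimal sketch in Python using SciPy. The waiting-room times below are made-up placeholder values, and SciPy reports the Anderson-Darling statistic with critical values rather than the single p-value that Minitab prints, but the interpretation is the same.

```python
import numpy as np
from scipy import stats

# Hypothetical waiting-room times in minutes -- replace with your own sample
times = np.array([14, 22, 18, 25, 16, 19, 21, 17, 23, 15, 20, 24, 18, 19, 22])

# Anderson-Darling test for normality
result = stats.anderson(times, dist='norm')
print("A-D statistic:", result.statistic)
print("Critical values:", result.critical_values)
print("Significance levels (%):", result.significance_level)

# If the statistic falls below the critical value at the 5% significance
# level, we fail to reject normality -- the analogue of seeing p > 0.05
# in Minitab's Anderson-Darling output.
```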

The results of the data collection, along with the USL, were reviewed for the continuous data endpoint “Time Spent In Waiting Room” and plotted as Figure 2 below.

Figure 2: Histogram with USL for time spent in waiting room. Cpk = 0.20

The Cpk value for the waiting room system was noted to be 0.20, indicating that (long term) the system in place would produce more than 500,000 Defects Per Million Opportunities (DPMO), with an accompanying Sigma level of less than 1.5.  Is that a good level of performance for a system?  Heck no.  Look at how many patients wait more than 20 minutes in the system.  There’s a quality issue there for sure.
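
For the curious, here’s a rough sketch of how a Cpk like that translates into a defect estimate, assuming normally distributed data. The times below are placeholder values, and the 1.5-sigma shift is the usual Six Sigma convention for turning short-term capability into a long-term DPMO estimate.

```python
import numpy as np
from scipy import stats

# Hypothetical waiting-room times in minutes -- replace with the project sample
times = np.array([14, 22, 18, 25, 16, 19, 21, 17, 23, 15, 20, 24, 18, 19, 22])
usl = 20.0  # Upper Specification Limit from the Voice of the Customer

mean, sd = times.mean(), times.std(ddof=1)

# With only an upper spec limit, Cpk reduces to (USL - mean) / (3 * sd)
cpk = (usl - mean) / (3 * sd)

# Fraction of visits expected beyond the USL (short term), assuming normality
z_short_term = 3 * cpk
dpmo_short_term = (1 - stats.norm.cdf(z_short_term)) * 1_000_000

# Conventional long-term estimate applies the 1.5-sigma shift
dpmo_long_term = (1 - stats.norm.cdf(z_short_term - 1.5)) * 1_000_000

print(f"Cpk = {cpk:.2f}")
print(f"Estimated DPMO (short term) = {dpmo_short_term:,.0f}")
print(f"Estimated DPMO (long term)  = {dpmo_long_term:,.0f}")
```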

What about the Costs of Poor Quality (COPQ) associated with waiting in the waiting room?  Based on the four buckets of the COPQ, your team determines that the COPQ for the waiting room system (per year) is about $200,000.  Surprisingly high, yes, but everyone realizes (when they think about it) that the incident in which Ms. Smith fell in the waiting room, after being there 22 minutes and trying to raise the volume on the TV, had gotten quite expensive.  You and the team take special note of which items from the Profit and Loss statement you included in the COPQ, because you want to be able to go back after changes have been made and see whether waste has been reduced.

In this case, for the physician waiting room you’re looking at, you calculate the HVPI as

HVPI = (100)(Cpk) / (COPQ) = (100)(0.20) / (200) = 0.1

That’s not very good!  Remember, the COPQ is expressed in thousands of dollars to calculate the HVPI.
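
If you want to keep this calculation handy, a tiny helper like the one below does the arithmetic; the function name is mine, not part of any standard library.

```python
def hvpi(cpk: float, copq_thousands: float) -> float:
    """Healthcare Value Process Index: (100 x Cpk) divided by the COPQ in thousands of dollars."""
    return (100 * cpk) / copq_thousands

# First waiting room: Cpk = 0.20, COPQ = $200,000
print(hvpi(0.20, 200))  # 0.1
```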

Just then, at the project meeting to review the data, your ears perk up when a practice manager named Jill says:  “Well, our patients never complain about the wait in our waiting room, which I think is better than that data we are looking at.  It feels like our patients wait less than 20 minutes routinely, AND I think we don’t have much waste in the system.  Maybe you could do some things the way we do them in our practice.”

As a quality improvement facilitator, you’re always looking for ideas, tools, and best practices to apply in projects like this one.  So you and the team plan to look in on the waiting room run by the practice manager.

Just like before, the group samples the performance of the system.  It runs the Anderson-Darling test on the data, which are found to be normally distributed.  (By the way, we don’t see that routinely in waiting room times!)

Then, the team graphs the data as shown below:

Figure 3: Histogram of times spent in Jill’s waiting room. Cpk = 0.06

 

Interestingly, it turns out that this system has a central tendency very similar to the first waiting room you looked at–about 18 minutes.  Jill mentioned how most patients don’t wait much more than 18 minutes, and the data show that her instinct about the typical wait was spot on.

…but you and the team notice that the performance of Jill’s waiting room is much worse than the first one you examined.  The Cpk for that system is 0.06–ouch!  Jill is disappointed, but you reassure her that it’s very common for how we feel about a system’s performance not to match the data once we actually collect them.  (More on that here.)  It’s OK, because we are all working together to improve.

When you calculate the COPQ for Jill’s waiting room, you notice that (although the performance is poor) there’s less waste as measured by the costs of delivering that performance.  The COPQ for Jill’s waiting room system is $125,000.  (It’s mostly owing to the wasted time the office staff spend trying to figure out who’s next, plus some other specifics of how they run the system.)  What is the HVPI for Jill’s waiting room?

(100)(0.06) / (125) = 0.048

Again, not good!

So, despite having lower costs associated with poor quality, Jill’s waiting room provides less value for patients than does the first waiting room that you all looked at.  It doesn’t mean that the team can’t learn anything from Jill and her team (after all, they are wasting less as measured by the COPQ) but it does mean that both Jill’s waiting room and the earlier one have a LONG way to go to improve their quality and value!

Fortunately, after completing the waiting room quality improvement project, the Cpk for the first system studied increased to 1.3 and Jill’s waiting room Cpk increased to 1.2–MUCH better.  The COPQ for each system decreased to $10,000 after the team made changes and went back to calculate the new COPQ based on the same items it had measured previously.

The new HVPI (with VOC from the patients) for the first waiting room?  That increased to 13 and the HVPI for Jill’s room rose to 12.  Each represents an awesome increase in value to the patients involved.  Now, of course, the challenge is to maintain those levels of value over time.

This example highlights how the value provided by a healthcare system for any continuous data endpoint can be calculated and compared across systems.  It can also be tracked over time to demonstrate improvement.  The HVPI represents a unique value measure, combining a measure of system capability with the costs of poor quality.

Questions or thoughts about the HVPI?  Let me know & let’s discuss!

 

 

How Well Do We Supervise Resident Surgeons?

By:  David Kashmer (@DavidKashmer)

 

I was recently part of a team that was trying to decide how well residents in our hospital were supervised. The issue is important, because residency programs are required to have excellent oversight to maintain their certification. Senior physicians are supposed to supervise the residents as the residents care for patients. There are also supposed to be regular meetings with the residents and meaningful oversight during patient care. We had to be able to show accrediting agencies that supervision was happening effectively. Everyone on the team, myself included, felt we really did well with residents in terms of supervision. We would answer their questions, we’d help them out with patients in the middle of the night, we’d do everything we could to guide them in providing safe, excellent patient care. At least we thought we did . . . .

 

We’d have meetings and say, “The resident was supervised because we did this with them and we had that conversation about a patient.” None of this was captured anywhere; it was all subjective feelings on the part of the senior medical staff. The residents, however, were telling us that they felt supervision could have been better in the overnight shifts and also in some other specific situations. Still, we (especially the senior staff doing the supervising) would tell ourselves in the meetings, “We’re doing a good job. We know we’re supervising them well.”

 

We weren’t exactly lying to ourselves. We were supervising the residents pretty well. We just couldn’t demonstrate it in the ways that mattered, and we were concerned about any perceived lack in the overnight supervision. We were having plenty of medical decision-making conversations with the residents and helping them in all the ways we were supposed to, but we didn’t have a rigorous way to evaluate our efforts: nothing that demonstrated how we were doing or gave us something tangible to improve.

 

When I say stop lying to ourselves, I mean that we tend to self-delude into thinking that things are OK, even when they’re not. How would we ever know? What changes our ability to think about our performance? Data. When good data tell us, objectively and without question, that something has to change–well, at least we are more likely to agree. Having good data prevents all of us from thinking we’re above average . . . a common misconception.

 

To improve our resident supervision, we first had to agree it needed improvement. To reach that point, we had to collect data prospectively and review it. But before we even thought about data collection, we had to deal with the unspoken issue of protection. We had to make sure all the attending physicians knew they were protected against being blamed, scapegoated, or even fired if the data turned out to show problems. We had to reassure everyone that we weren’t looking for someone to blame. We were looking for ways to make a good system better. There are ways to collect data that are anonymous. The way we chose did not include which attending or resident was involved at each data point. That protection was key (and is very important in quality improvement projects in healthcare) to allowing the project to move ahead.

 

I’ve found that it helps to bring the group to the understanding that, because we are so good, data collection on the process will show us that we’re just fine—maybe even that we are exceptionally good. Usually, once the data are in, that’s not the case. On the rare occasion when the system really is awesome, I help the group to go out of its way to celebrate and to focus on what can be replicated in other areas to get that same level of success.

 

When we collected the data on resident supervision, we asked ourselves the Five Whys. Why do we think we may not be supervising residents well? Why? What tells us that? The documentation’s not very good. Why is the documentation not very good? We can’t tell if it doesn’t reflect what we’re doing or if we don’t have some way to get what we’re doing on the chart. Why don’t we have some way to get it on the chart? Well, because . . . .

 

If you ask yourself the question “why” five times, chances are you’ll get to the root cause of why things are the way they are. It’s a tough series of questions. It requires self-examination. You have to be very honest and direct with yourself and your colleagues. You also have to know some of the different ways that things can be—you have to apply your experience and get ideas from others to see what is not going on in your system. Some sacred cows may lose their lives in the process. Other times you run up against something missing from a system (absence) rather than presence of something like a sacred cow. What protections are not there? As the saying goes, if your eyes haven’t seen it, your mind can’t know it.

 

As we asked ourselves the Five Whys, we asked why we felt we were doing a good job but an outsider wouldn’t be able to tell. We decided that the only way an outsider could ever know that we were supervising well was to make sure supervision was thoroughly documented in the patient charts.

 

The next step was to collect data on our documentation to see how good it was. We decided to rate it on a scale of one to five. One was terrible: no sign of any documentation of decision-making or senior physician support in the chart. Five was great: we can really see that what we said was happening, happened.

 

We focused on why the decision-making process wasn’t getting documented in the charts. There were lots of reasons: Because it’s midnight. Because we’re not near a computer. Because we were called away to another patient. Because the computers were down. Because the decision was complicated and it was difficult to record it accurately.

 

We developed a system for scoring the charts that I felt was pretty objective. The data were gathered prospectively; names were scrubbed, because we didn’t care which surgeon it was and we didn’t want to bias the scoring. To validate the scoring, we used a Gage Repeatability and Reproducibility (Gage R&R) test, which (among other things) helps determine how much variability in the measurement system is caused by differences between operators. We chose thirty charts at random and had three doctors check them and give them a grade with the new system. Each doctor was blinded (as much as you can be) to whose chart they were rating and rated each chart three times. We found that most charts were graded at 2 or 2.5.
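
To give a flavor of the idea, here’s a rough, back-of-the-envelope variance decomposition in Python. It’s only a sketch: the file and column names are made up, and a real Gage R&R study (as in Minitab) uses an ANOVA-based method with interaction terms rather than this simple grouping.

```python
import pandas as pd

# Hypothetical long-format data: 30 charts x 3 raters x 3 repeat ratings,
# with columns 'chart', 'rater', and 'score' (the file name is illustrative)
df = pd.read_csv("chart_scores.csv")

# Repeatability: how much the same rater varies when re-scoring the same chart
repeatability = df.groupby(["chart", "rater"])["score"].var().mean()

# Reproducibility: how much the raters differ from one another on average
reproducibility = df.groupby("rater")["score"].mean().var()

# Chart-to-chart ("part-to-part") variation: real differences between charts
chart_to_chart = df.groupby("chart")["score"].mean().var()

gage_rr = repeatability + reproducibility
total = gage_rr + chart_to_chart
print(f"Measurement system share of variation: {100 * gage_rr / total:.1f}%")
```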

 

Once we were satisfied that the scoring system was valid, we applied it prospectively and scored a sample of charts according to the sample size calculation we had performed. Reading a chart to see if it documented supervision correctly took only about a second. We found, again, that our score was about 2.5. That was a little dismaying, because it showed we weren’t doing as well as we thought, although we weren’t doing terribly, either.
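
As a rough illustration of that sampling step (the post doesn’t give the actual inputs the team used), a standard sample-size formula for estimating a mean looks like this; the margin of error and standard deviation below are assumptions chosen just for the example.

```python
import math
from scipy import stats

confidence = 0.95        # desired confidence level
margin_of_error = 0.25   # how tightly we want to pin down the mean chart score
sigma_estimate = 0.8     # assumed standard deviation of chart scores

z = stats.norm.ppf(1 - (1 - confidence) / 2)
n = math.ceil((z * sigma_estimate / margin_of_error) ** 2)
print(f"Charts to sample: {n}")  # about 40 with these assumptions
```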

 

Then we came up with interventions that we thought would improve the score. We made poka-yoke changes—changes that made it easier to do the right thing without having to think about it. In this case, the poka-yoke answer was to make it easier to document resident oversight and demonstrate compliance with Physicians At Teaching Hospitals (PATH) rules; the changes made it harder to avoid documenting actions. By making success easier, we saw the scores rise to 5 and stay there. We added standard language and made it easy to access in the electronic medical record. We educated the staff. We demonstrated how, and why, it was easier to do the right thing and use the tool than to skip the documentation and deal with all the extra work that resulted when the documentation was not present.

 

The project succeeded extremely well because we stopped lying to ourselves. We used data and the Five Whys to see that what we told ourselves didn’t align with what was happening. We didn’t start with the assumption that we were lying to ourselves. We thought we were doing a good job. We talked about what a good job looked like, how we’d know if we were doing a good job, and so on, but what really helped us put data on the questions was using a fishbone diagram. We used the diagram to find the six different factors of special cause variation…

 

Want to read more about how the team used the tools of statistical process control to vastly improve resident oversight?  Read more about it in the Amazon best-seller:  Volume To Value here.

Cover of new book.

 

Did You Know Lean & Six Sigma Studies In Healthcare Are On The Rise?

By:  @DavidKashmer

Whew!  Finally!  I’ve been waiting for some years now for our healthcare system to start widely adopting standard, well-known quality improvement tools.  It seemed like many of the new quality articles I read about healthcare invented some new way to look at quality.  You’ll find multiple blog entries on here where I implore our healthcare colleagues to use well-known quality tools instead of re-inventing the wheel.  Here’s one now.

 

Well, thanks to our colleagues at Minitab, we have some evidence that, in fact, the use of Lean & Six Sigma techniques is catching on in healthcare.  Look here:

Number of Lean & Six Sigma studies in healthcare per year, trend plot from the Minitab blog (http://blog.minitab.com/blog/statistics-and-quality-data-analysis/qi-trends-in-healthcare:-what-are-the-statistical-soft-spots)

 

What a great visual!  Now we can see how the number of studies per year is increasing, and that 2015 showed quite a jump in the number of Lean & Six Sigma studies.  Time will tell whether the rate of increase in studies per year has significantly changed.

 

At the end of the day, we see evidence that Lean & Six Sigma techniques are catching on in healthcare.  It’s no surprise, as the transition from volume to value pushes healthcare toward proven techniques for making measurable, sustainable improvements.  Healthcare colleagues:  here is the call to action to learn and use the standard techniques of Lean & Six Sigma.