David Kashmer (@DavidKashmer)
In the last entry, you saw a novel, straightforward metric for capturing the value provided by a healthcare service: the Healthcare Value Process Index (HVPI). In this entry, let's walk through a worked example of applying the index to a healthcare service.
At America’s Best Hospital, a recent quality improvement project focused on the time patients spent in the waiting room of a certain physician group’s practice. The project group had already gone through the steps of creating a sampling plan and collecting data that represent how well the system is working.
From a patient survey, sent out as part of the project, the team learned that patients were willing to wait, at most, 20 minutes before seeing the physician. So, the Voice of the Customer (VOC) was used to set the Upper Specification Limit (USL) of 20 minutes.
A normality test (the Anderson-Darling test) was performed, and the data collected follow the normal distribution, as shown in Figure 1 below. (Wonder why p > 0.05 is a good thing when you use the Anderson-Darling test? Read about it here.)
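As a quick sketch of how that normality check can be run in Python (the waiting-room times below are made-up illustration data, not the project's actual measurements), `scipy` provides the Anderson-Darling test directly. Note that `scipy` reports the test statistic against critical values rather than a p-value:

```python
# Hypothetical sketch: checking waiting-room times for normality
# with the Anderson-Darling test. The data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wait_times = rng.normal(loc=18, scale=3, size=50)  # minutes, simulated

result = stats.anderson(wait_times, dist="norm")
print(result.statistic)         # A-D test statistic
print(result.critical_values)   # critical values at several significance levels
# If the statistic is below the critical value at the 5% level,
# we fail to reject normality -- the analogue of seeing p > 0.05.
```

Tools like Minitab report this same test as a p-value, which is where the "p > 0.05" rule of thumb comes from.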
The results of the data collection and the USL were reviewed for the continuous data endpoint “Time Spent In Waiting Room” and plotted in Figure 2 below.
The Cpk value for the waiting room system was noted to be 0.20, indicating that (long term) the system in place would produce more than 500,000 Defects Per Million Opportunities (DPMO), with an accompanying Sigma level of < 1.5. Is that a good level of performance for a system? Heck no. Look at how many patients wait more than 20 minutes in the system. There’s a quality issue there for sure.
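As a rough sketch (these helper functions are my own, not the project's actual code), here's how a one-sided Cpk and a long-term DPMO estimate can be computed for a spec like "wait no more than 20 minutes," assuming the conventional 1.5-sigma long-term shift:

```python
# Sketch: one-sided Cpk and approximate long-term DPMO.
# Assumes normally distributed data and the usual 1.5-sigma shift.
import math

def cpk_one_sided(mean, std, usl):
    """Cpk when only an Upper Specification Limit applies."""
    return (usl - mean) / (3 * std)

def dpmo_from_cpk(cpk):
    """Approximate long-term DPMO from Cpk via the normal CDF."""
    z_short = 3 * cpk        # short-term Z score
    z_long = z_short - 1.5   # conventional long-term 1.5-sigma shift
    # Fraction of outcomes beyond the spec, scaled to per-million
    tail = 1 - 0.5 * (1 + math.erf(z_long / math.sqrt(2)))
    return tail * 1_000_000

print(dpmo_from_cpk(0.20))  # well over 500,000 DPMO
```

A Cpk of 0.20 corresponds to a short-term Z of only 0.6, so after the long-term shift most outcomes fall beyond the spec limit, which is why the DPMO lands above 500,000.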
What about the Costs of Poor Quality (COPQ) associated with waiting in the waiting room? Based on the four buckets of the COPQ, your team determines that the COPQ for the waiting room system is about $200,000 per year. Surprisingly high, yes, but everyone realizes (when they think about it) that the time Ms. Smith fell in the waiting room, after being there 22 minutes, because she tried to raise the volume on the TV had gotten quite expensive. You and the team take special note of which items from the Profit and Loss statement you included in the COPQ, because you want to be able to go back after changes have been made and see whether waste has been reduced.
In this case, for the physician waiting room you’re looking at, you calculate the HVPI as
(100)(0.20) / (200) = 0.1
That’s not very good! Remember, the COPQ is expressed in thousands of dollars to calculate the HVPI.
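The arithmetic above can be captured in a couple of lines of Python. This is a minimal sketch (the function name is my own, not an official implementation), which takes the COPQ in dollars and converts it to thousands internally:

```python
# Sketch of the HVPI arithmetic: 100 x Cpk, divided by the
# COPQ expressed in thousands of dollars.
def hvpi(cpk, copq_dollars):
    """Healthcare Value Process Index."""
    copq_thousands = copq_dollars / 1_000
    return 100 * cpk / copq_thousands

print(round(hvpi(0.20, 200_000), 3))  # → 0.1
```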
Just then, at the project meeting to review the data, your ears perk up when a practice manager named Jill says: “Well, our patients never complain about the wait in our waiting room, which I think is better than the data we’re looking at. It feels like our patients routinely wait less than 20 minutes, AND I think we don’t have much waste in the system. Maybe we could do some things the way we do them in our practice.”
As a quality improvement facilitator, you’re always looking for ideas, tools, and best practices to apply in projects like this one. So you and the team plan to look in on the waiting room run by the practice manager.
Just like before, the group samples the performance of the system and runs the Anderson-Darling test on the data, which are found to be normally distributed. (By the way, we don’t see that routinely with waiting room times!)
Then the team graphs the data as shown below:
Interestingly, it turns out that this system has a central tendency very similar to the first waiting room you looked at: about 18 minutes. Jill mentioned that most patients don’t wait more than 18 minutes, and the data show that her instinct was spot on.
…but you and the team notice that the performance of Jill’s waiting room is much worse than the first one you examined. The Cpk for that system is 0.06. Ouch! Jill is disappointed, but you reassure her that it’s very common for how we feel about a system’s performance not to match what the data show once we actually collect them. (More on that here.) It’s OK, because we’re working together to improve.
When you calculate the COPQ for Jill’s waiting room, you notice that (although the performance is poor) there’s less waste as measured by the cost of delivering that performance. The COPQ for Jill’s waiting room system is $125,000. (It’s mostly owing to the wasted time the office staff spend trying to figure out who’s next, plus some other specifics of how they run the system.) What is the HVPI for Jill’s waiting room?
(100)(0.06) / (125) = 0.048
Again, not good!
So, despite having lower costs associated with poor quality, Jill’s waiting room provides less value for patients than does the first waiting room that you all looked at. It doesn’t mean that the team can’t learn anything from Jill and her team (after all, they are wasting less as measured by the COPQ) but it does mean that both Jill’s waiting room and the earlier one have a LONG way to go to improve their quality and value!
Fortunately, after completing the waiting room quality improvement project, the Cpk for the first system studied increased to 1.3 and Jill’s waiting room Cpk increased to 1.2. MUCH better. The COPQ for each system decreased to $10,000 after the team made changes and recalculated the COPQ based on the same items it had measured previously.
The new HVPI (with VOC from the patients) for the first waiting room? That increased to 13 and the HVPI for Jill’s room rose to 12. Each represents an awesome increase in value to the patients involved. Now, of course, the challenge is to maintain those levels of value over time.
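As a quick arithmetic check (my own calculation, using the same HVPI formula as above: 100 times Cpk, divided by COPQ in thousands of dollars), the improved numbers work out as quoted:

```python
# Post-improvement HVPI values: 100 x Cpk / (COPQ in thousands).
print(round(100 * 1.3 / 10, 2))  # first waiting room → 13.0
print(round(100 * 1.2 / 10, 2))  # Jill's waiting room → 12.0
```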
This example highlights how the value provided by a healthcare system for any continuous data endpoint can be calculated and compared across systems, and how it can be tracked over time to demonstrate improvement. The HVPI represents a unique value measure, composed of a system capability measure and the costs of poor quality.
Questions or thoughts about the HVPI? Let me know & let’s discuss!