Healthcare Errors Are More Like Frogger Than Swiss Cheese


By:  David Kashmer (@DavidKashmer, LinkedIn here.)

All models are wrong, but some are useful.

–George E. P. Box

Remember when you first heard of the Swiss-Cheese model of medical error?  It sounds so good, right?  When the holes in a stack of slices line up, we get a defect.  It’s satisfying.  We all share some knowledge of Swiss Cheese; it’s a common image.

That’s what makes it so attractive, and of course the Swiss Cheese model is a much better mental model than what came before, which was either a looser notion of a bunch of contributing factors that produced an error or, worse yet, a heavy emphasis on how someone screwed up and how that produced a bad outcome.

Models supplant each other over time.  The Sun-goes-around-the-Earth (geocentric) model was supplanted by the Sun-at-the-center (heliocentric) model.  Thank you, Copernicus and Kepler!

Now, we can do better with our model of medical error and defect, because medical errors really don’t follow a Swiss Cheese model.  So let’s get a better one and develop our shared model together.

In fact, medical errors are more like Frogger.  Now that we have more Millennials in the workplace than ever (who seem to be much more comfortable as digital natives than I am as a Gen Xer), we can use a more refined idea of medical error that will resonate with the group who staff our hospitals.  Here’s how medical errors are more like Frogger than Swiss Cheese:

 

(1) In Swiss Cheese, the holes stay still.  That’s not how it is with medical errors.  In fact, each layer of a system that a patient passes through has a probability of having an issue.  Some layers are lower, and some are higher.  Concepts like Rolled Throughput Yield reflect this and are much more akin to how things actually work than the image of fixed holes.  Thinking of fixed holes gives the illusion that, if only we could identify and plug those holes, life would be perfect!

In Frogger, there are gaps in each line of cars that pass by.  We need to get the frog through each line safely and, oh by the way, the gaps are moving and aren’t always in the same place.  That kind of probabilistic thinking is much more akin to medical errors: each line of traffic has an associated chance of squishing us before we get to the other side.  The trick is, of course, that we can influence and modify the frequency and size of the gaps… well, sometimes anyway.  You can’t do that with Swiss Cheese for sure, and with Frogger we can at least select Fast or Slow modes.  (In real life, we have a lot more options.  Sometimes I even label the lines of traffic as the 6M’s.)
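To make that probabilistic framing concrete, here’s a minimal sketch of Rolled Throughput Yield in Python.  The layer names and first-pass yields below are made up purely for illustration; the point is just that per-layer probabilities multiply, so even good-looking layers can add up to a lot of squished frogs.

```python
# Minimal sketch: Rolled Throughput Yield (RTY) for a patient passing
# through several "lines of traffic."  Layer names and yields are
# hypothetical, chosen only to illustrate the idea.
layers = {
    "triage": 0.98,          # probability of passing this layer defect-free
    "imaging": 0.95,
    "operating_room": 0.97,
    "floor_care": 0.93,
    "discharge": 0.96,
}

rty = 1.0
for name, first_pass_yield in layers.items():
    rty *= first_pass_yield

print(f"Rolled Throughput Yield: {rty:.3f}")
# Even with every layer at 93% or better, only about 80% of patients
# get through all five layers defect-free.
```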

 

(2) In the Swiss Cheese Model, we imagine a block of cheese sort of sitting there.  There’s no inherent urgency in cheese (unless you’re severely lactose intolerant or have a milk allergy, I guess).  It’s a static model that doesn’t do much to indicate safety.

But ahhh, Frogger, well there’s a model that makes it obvious.  If you don’t maneuver the frog carefully, that’s it: you’re a goner.  Of course, we have the advantage of engineering our systems to control the flow of traffic and both the size and presence of gaps.  We basically have a cheat code.  And, whether your cheat code is Lean, Lean Six Sigma, Six Sigma, the Baldrige Excellence Framework, ISO, Lean Startup, or some combination, you have an ability almost no Frogger player has: you can change the game to your patient’s advantage.  Of course, unlike Frogger, your patient only gets one chance to make it through unscathed.  That’s very different from the video game world and, although another patient will be coming through the system soon, we’ll never get another chance to help the current patient have a perfect experience.

All of that is highlighted by Frogger and is not made obvious by a piece of cheese.

 

(3) In Frogger, the frog starts anywhere.  Meaning, not only does the traffic move, but the frog can start anywhere along the bottom of the screen.  In Frogger we can control that position, but in real life patients enter the system in positions we cannot easily control and that, for the purposes of their hospital course anyway, cannot be changed.  It may be their 80 pack-year history of smoking, their morbid obesity, or their advanced age.  However, the Frogger model recognizes the importance of initial position (which, unlike real life, we can control more easily), while the Swiss Cheese model doesn’t seem to make clear where we start.  Luckily, in real life, I’ve had the great experience of helping systems “cheat” by modifying the initial position: you may not be able to change a patient’s comorbid conditions, but you can sometimes set them up with a better initial position for the system you’re trying to improve.

 

Like you, I hear about the Swiss Cheese model a lot.  And, don’t get me wrong, it’s much better than what came before.  Now, however, in order to recognize the importance of probability, motion, initial position, devising a safe path through traffic, and a host of other considerations, let’s opt for a model that recognizes uncertainty, probability, and safety.  With more Millennials than ever in the workplace (even though Frogger predates them!), we have digital natives with whom game imagery is much more likely to resonate than a static piece of cheese.

Use Frogger next time you explain medical error because it embodies how to avoid medical errors MUCH better than cheese.

 

 

Dr. David Kashmer, a trauma and acute care surgeon, is a Fellow of the American College of Surgeons and is a nationally known healthcare expert. He serves as a member of the Board of Reviewers for the Malcolm Baldrige National Quality Award. In addition to his Medical Doctor degree from MCP Hahnemann University, now Drexel University College of Medicine, he holds an MBA degree from George Washington University. He also earned a Lean Six Sigma Master Black Belt Certification from Villanova University. Kashmer contributes to TheHill.com, Insights.TheSurgicalLab.com, and TheHealthcareQualityBlog.com, where the focus is on quality improvement and value in surgery and healthcare.

To learn more about the application of quality improvement tools like Lean Six Sigma in healthcare and Dr. David Kashmer, visit http://TheHealthcareQualityBlog.com

 

 

Applying the Healthcare Value Process Index

David Kashmer (@DavidKashmer)

In the last entry, you saw a novel, straightforward metric to capture the value provided by a healthcare service: the Healthcare Value Process Index (HVPI).  In this entry, let’s walk through another example of exactly how to apply the metric to a healthcare service.

At America’s Best Hospital, a recent quality improvement project focused on time patients spent in the waiting room of a certain physician group’s practice.  The project group had already gone through the steps of creating a sample plan and collecting data that represents how well the system is working.

From a patient survey, sent out as part of the project, the team learned that patients were willing to wait, at most, 20 minutes before seeing the physician.  So, the Voice of the Customer (VOC) was used to set the Upper Specification Limit (USL) of 20 minutes.

A normality test (the Anderson-Darling test) was performed, and the data collected follow the normal distribution, as per Figure 1 beneath.  (Wonder why p > 0.05 is a good thing when you use the Anderson-Darling test?  Read about it here.)

Figure 1: Anderson-Darling test result for time in waiting room.

The results of the data collection and USL were reviewed for that continuous data endpoint “Time Spent In Waiting Room” and were plotted as Figure 2 beneath.

Figure 2: Histogram with USL for time spent in waiting room. Cpk = 0.20

The Cpk value for the waiting room system was noted to be 0.20, indicating that (long term) the system in place would produce more than 500,000 Defects Per Million Opportunities (DPMO), with an accompanying Sigma level of < 1.5.  Is that a good level of performance for a system?  Heck no.  Look at how many patients wait more than 20 minutes in the system.  There’s a quality issue there for sure.
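For readers who want to see the arithmetic, here’s a rough sketch of the calculations behind Figures 1 and 2.  The waiting-room times are simulated stand-ins rather than the project’s actual data, and the DPMO line uses the conventional 1.5-sigma shift to estimate long-term performance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated waiting-room times in minutes (stand-in for the project's sample).
times = rng.normal(loc=18, scale=3.3, size=60)

usl = 20.0  # Voice of the Customer: wait at most 20 minutes

# Anderson-Darling normality check: a statistic below the 5% critical value
# plays the same role as p > 0.05 in Minitab-style output.
ad = stats.anderson(times, dist="norm")
crit_5pct = ad.critical_values[list(ad.significance_level).index(5.0)]
print(f"A-D statistic {ad.statistic:.3f} vs. 5% critical value {crit_5pct:.3f}")

# One-sided capability against the USL (waiting time has no lower spec limit).
mean, sd = times.mean(), times.std(ddof=1)
cpk = (usl - mean) / (3 * sd)

# Long-term DPMO estimate using the conventional 1.5-sigma shift.
dpmo_long_term = stats.norm.sf(3 * cpk - 1.5) * 1_000_000
print(f"Cpk = {cpk:.2f}, estimated long-term DPMO = {dpmo_long_term:,.0f}")
```

With a central tendency near 18 minutes and a Cpk in the neighborhood of 0.2, the long-term estimate lands well above 500,000 DPMO, the same conclusion drawn above.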

What about the Costs of Poor Quality (COPQ) associated with waiting in the waiting room?  Based on the four buckets of the COPQ, your team determines that the COPQ for the waiting room system (per year) is about $200,000.  Surprisingly high, yes, but everyone realizes (when they think about it) that the time Ms. Smith, after being in the waiting room 22 minutes, fell because she tried to raise the volume on the TV had gotten quite expensive.  You and the team take special note of which items from the Profit and Loss statement you included as part of the COPQ, because you want to be able to go back and look after changes have been made to see if waste has been reduced.

In this case, for the physician waiting room you’re looking at, you calculate the HVPI as

(100)(0.20) / (200) or 0.1

That’s not very good!  Remember, the COPQ is expressed in thousands of dollars to calculate the HVPI.
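Spelled out as a quick sketch (using the same convention: 100 times Cpk, divided by the COPQ expressed in thousands of dollars), the HVPI values in this entry work out as follows.  The hvpi helper is just illustrative shorthand, not part of the original project.

```python
def hvpi(cpk: float, copq_dollars: float) -> float:
    """Healthcare Value Process Index: 100 * Cpk / (COPQ in thousands of dollars)."""
    return 100 * cpk / (copq_dollars / 1000)

print(hvpi(0.20, 200_000))  # first waiting room: 0.1
print(hvpi(0.06, 125_000))  # Jill's waiting room (below): 0.048
print(hvpi(1.30, 10_000))   # first room after improvement: 13.0
print(hvpi(1.20, 10_000))   # Jill's room after improvement: 12.0
```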

Just then, at the project meeting to review the data, your ears perk up when a practice manager named Jill says:  “Well, our patients never complain about the wait in our waiting room, which I think is better than that data we are looking at.  It feels like our patients wait less than 20 minutes routinely, AND I think we don’t have much waste in the system.  Maybe you could do some things the way we do them in our practice.”

As a quality improvement facilitator, you’re always looking for ideas, tools, and best practices to apply in projects like this one.  So you and the team plan to look in on the waiting room run by the practice manager.

Just like before, the group samples the performance of the system.  The team runs the Anderson-Darling test on the data, and they are found to be normally distributed.  (By the way, we don’t see that routinely in waiting room times!)

Then, the team graphs the data as beneath:

Figure 3: Histogram of times spent in Jill’s waiting room. Cpk = 0.06

 

Interestingly, it turns out that this system has a central tendency very similar to the first waiting room you looked at–about 18 minutes.  Jill mentioned how most patients don’t wait more than 18 minutes and the data show that her instinct was spot on.

…but you and the team notice that the performance of Jill’s waiting room is much worse than the first one you examined.  The Cpk for that system is 0.06.  Ouch!  Jill is disappointed, but you reassure her that it’s very common for how we feel about a system’s performance not to match the data once we actually get them.  (More on that here.)  It’s OK, because we are working together to improve.

When you calculate the COPQ for Jill’s waiting room, you notice that (although the performance is poor) there is less waste as measured by the costs to deliver that performance.  The COPQ for Jill’s waiting room system is $125,000.  (It’s mostly owing to the wasted time the office staff spend trying to figure out who’s next, plus some other specifics of how they run the system.)  What is the HVPI for Jill’s waiting room?

(100)(0.06) / (125) = 0.048

Again, not good!

So, despite having lower costs associated with poor quality, Jill’s waiting room provides less value for patients than does the first waiting room that you all looked at.  It doesn’t mean that the team can’t learn anything from Jill and her team (after all, they are wasting less as measured by the COPQ) but it does mean that both Jill’s waiting room and the earlier one have a LONG way to go to improve their quality and value!

Fortunately, after completing the waiting room quality improvement project, the Cpk for the first system studied increased to 1.3 and Jill’s waiting room Cpk increased to 1.2–MUCH better.  The COPQ for each system decreased to $10,000 after the team made changes and went back to calculate the new COPQ based on the same items it had measured previously.

The new HVPI (with VOC from the patients) for the first waiting room?  That increased to 13 and the HVPI for Jill’s room rose to 12.  Each represents an awesome increase in value to the patients involved.  Now, of course, the challenge is to maintain those levels of value over time.

This example highlights how the value provided by a healthcare system for any continuous data endpoint can be calculated and compared across systems.  It can be tracked over time to demonstrate increases.  The HVPI represents a unique value measure composed of a system capability measure and the costs of poor quality.

Questions or thoughts about the HVPI?  Let me know & let’s discuss!

 

 

Great Healthcare Quality Projects Repeat Themselves

 

David Kashmer, MD MBA MBB (@DavidKashmer)

As healthcare adopts more and more of the Lean Six Sigma techniques, certain projects begin to repeat across organizations.  It makes sense.  After all, we live in the healthcare system and, once we have the tools, some projects are just so, well, obvious!

About two years ago, I wrote about a project I’d done that included decreasing the amount of time required to prepare OR instruments.  See that here.  And, not surprisingly, by the time I had written about the project, I had seen this done at several centers with amazing results.

Recently, I was glad to see the project repeat itself.  This time, Virginia Mason had performed the project and had obtained the kind of impressive result it routinely achieves.

This entry is to compliment the Virginia Mason team on their completion of the OR quality improvement project they describe here.  I’m sure the project wasn’t easy, and I compliment the well-known organization on drastically decreasing waste while improving both quality & patient safety.

Like many others, I believe healthcare quality improvement is in its infancy.  We, as a field, are years behind other industries in terms of sophistication regarding quality improvement–and that’s for many different reasons, not all of which we directly control.

In that sort of climate, it’s good to see certain projects repeating across institutions.  This particular surgical instrument project is a great one that, as the Virginia Mason & Vanderbilt experiences indicate, highlights the dissemination of quality tools throughout the industry.

Nice work, Virginia Mason team!