Healthcare Errors Are More Like Frogger Than Swiss Cheese

By:  David Kashmer (@DavidKashmer, LinkedIn here.)

All models are wrong, but some are useful.

–George E. P. Box

Remember when you first heard of the Swiss Cheese model of medical error?  It sounds so good, right?  When the holes in several layers of defense line up, we get a defect.  It's satisfying.  We all share some mental image of Swiss cheese–it's a common reference point.

That's what makes it so attractive–and, of course, the Swiss Cheese model is a much better mental model than what came before: either a loose notion of a bunch of factors combining to produce an error or, worse yet, a heavy emphasis on how someone screwed up and how that produced a bad outcome.

Models supplant each other over time.  The sun-goes-around-the-Earth (geocentric) model was supplanted by the sun-at-the-center (heliocentric) model–thank you, Kepler and Copernicus!

Now, we can do better with our model of medical error and defect, because medical errors really don’t follow a Swiss Cheese model.  So let’s get a better one and develop our shared model together.

In fact, medical errors are more like Frogger.  Now that we have more Millennials in the workplace than ever (who seem to be much more comfortable as digital natives than I am as a Gen Xer), we can use a more refined idea of medical error that will resonate with the people who staff our hospitals.  Here's how medical errors are more like Frogger than Swiss Cheese:

 

(1) In Swiss Cheese, the holes stay still.  That's not how it is with medical errors.  In fact, each layer of a system that a patient passes through has a probability of having an issue.  Some layers are lower, and some are higher.  Concepts like Rolled Throughput Yield reflect this and are much more akin to how things actually work than the illusion that we have fixed holes…thinking of fixed holes gives the illusion that, if only we could identify and plug them, life would be perfect!  (A quick sketch of the Rolled Throughput Yield idea follows this item.)

In Frogger, there are gaps in each line of cars that passes by.  We need to get the frog through each line safely and, oh by the way, the gaps are moving and are not always in the same place.  That kind of probabilistic thinking is much more akin to medical errors:  each line of traffic has an associated chance of squishing us before we get to the other side.  The trick is, of course, that we can influence and modify the frequency and size of the gaps…well, sometimes anyway.  You certainly can't do that with Swiss Cheese, and in Frogger the most you can do is select the Fast or Slow mode.  (In real life, we have a lot more options.  Sometimes I even label the lines of traffic as the 6Ms.)
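To make the Rolled Throughput Yield point concrete, here's a minimal Python sketch.  The step names and per-step yields are made up purely for illustration; the point is that each layer a patient passes through has its own chance of a defect, and the chance of a defect-free trip through the whole system is the product of those per-layer yields.

```python
# Minimal sketch of Rolled Throughput Yield (RTY).
# The step names and per-step yields below are hypothetical and exist only
# to illustrate the idea that every "lane of traffic" has its own
# probability of passing the patient through cleanly.

from math import prod

step_yields = {
    "triage": 0.98,               # fraction handled defect-free at this step
    "bed_assignment": 0.95,
    "medication_orders": 0.97,
    "discharge_paperwork": 0.90,
}

rty = prod(step_yields.values())
print(f"Rolled Throughput Yield: {rty:.3f}")  # about 0.813
```

Notice that even though every layer looks pretty good on its own, only about 81% of patients would make it through all four layers without a single defect–which is exactly why fixating on one static hole misses the bigger picture.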

 

(2) In the Swiss Cheese model, we imagine a block of cheese sort of sitting there.  There's no inherent urgency in cheese (unless you're severely lactose intolerant or have a milk allergy, I guess).  It's a static model that doesn't do much to convey the stakes for safety.

But ahhh, Frogger–well, there's a model that makes it obvious.  If you don't maneuver the frog carefully, that's it–you're a goner.  Of course, we have the advantage of engineering our systems to control the flow of traffic and both the size and presence of gaps.  We basically have a cheat code.  And whether your cheat code is Lean, Lean Six Sigma, Six Sigma, the Baldrige Excellence Framework, ISO, Lean Startup, or some combination…you have the ultimate ability, unlike almost any Frogger player, to change the game to your patient's advantage.  Of course, unlike Frogger, your patient only gets one chance to make it through unscathed.  That's very different from the video game world and, although another patient will be coming through the system soon, we'll never get another chance to give the current patient a perfect experience.

All of that is highlighted by Frogger and is not made obvious by a piece of cheese.

 

(3) In Frogger, the frog can start anywhere.  Meaning, not only does the traffic move, but the frog can start anywhere along the bottom of the screen.  In Frogger we control that position, but in real life patients enter the system in starting positions we cannot easily control and that, for the purposes of their hospital course anyway, cannot be changed.  It may be their 80-pack-year history of smoking, their morbid obesity, or their advanced age.  However, the Frogger model recognizes the importance of initial position (which, unlike real life, the game lets us control), while the Swiss Cheese model doesn't seem to make clear where we start.  Luckily, in real life, I've had the great experience of helping systems "cheat" by modifying the initial position…you may not be able to change a patient's comorbid conditions, but you can sometimes set them up with a better starting position for the system you're trying to improve.

 

Like you, I hear about the Swiss Cheese model a lot.  And don't get me wrong–it's much better than what came before.  Now, however, in order to recognize the importance of probability, motion, initial position, devising a safe path through traffic, and a host of other considerations, let's opt for a model that embraces uncertainty, probability, and safety.  With more Millennials than ever in the workplace (even though Frogger predates them!), we have digital natives with whom game imagery is much more likely to resonate than a static piece of cheese.

Use Frogger next time you explain medical error because it embodies how to avoid medical errors MUCH better than cheese.

 

 

Dr. David Kashmer, a trauma and acute care surgeon, is a Fellow of the American College of Surgeons and is a nationally known healthcare expert. He serves as a member of the Board of Reviewers for the Malcolm Baldrige National Quality Award. In addition to his Medical Doctor degree from MCP Hahnemann University, now Drexel University College of Medicine, he holds an MBA degree from George Washington University. He also earned a Lean Six Sigma Master Black Belt Certification from Villanova University. Kashmer contributes to TheHill.com, Insights.TheSurgicalLab.com, and The Healthcare Quality Blog.com where the focus is on quality improvement and value in surgery and healthcare.

To learn more about the application of quality improvement tools like Lean Six Sigma in healthcare and Dr. David Kashmer, visit http://TheHealthcareQualityBlog.com

 

 

Applying the Healthcare Value Process Index

David Kashmer (@DavidKashmer)

In the last entry, you saw a novel, straightforward metric to capture the value provided by a healthcare service, called the Healthcare Value Process Index (HVPI).  In this entry, let's walk through an example of exactly how to apply the metric to a healthcare service.

At America’s Best Hospital, a recent quality improvement project focused on time patients spent in the waiting room of a certain physician group’s practice.  The project group had already gone through the steps of creating a sample plan and collecting data that represents how well the system is working.

From a patient survey, sent out as part of the project, the team learned that patients were willing to wait, at most, 20 minutes before seeing the physician.  So, the Voice of the Customer (VOC) was used to set the Upper Specification Limit (USL) of 20 minutes.

A normality test (the Anderson-Darling test) was performed, and the data collected follow the normal distribution as per Figure 1 beneath.  (Wonder why p > 0.05 is a good thing when you use the Anderson-Darling test?  Read about it here.)

Figure 1: Anderson-Darling test result for time in waiting room.
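If you'd like to reproduce this kind of normality check yourself, here's a minimal sketch in Python using SciPy.  The waiting-room times are simulated (not the project's actual data), and note that SciPy's Anderson-Darling routine reports a test statistic and critical values rather than the Minitab-style p-value; the logic is the same, though–if the statistic stays below the 5% critical value, you don't reject normality (analogous to p > 0.05).

```python
# Minimal sketch of an Anderson-Darling normality check.
# The waiting-room times here are simulated for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
wait_times = rng.normal(loc=18, scale=4, size=100)  # simulated minutes in the waiting room

result = stats.anderson(wait_times, dist="norm")
idx_5pct = list(result.significance_level).index(5.0)
crit_5pct = result.critical_values[idx_5pct]

print(f"A-D statistic: {result.statistic:.3f}, 5% critical value: {crit_5pct:.3f}")
if result.statistic < crit_5pct:
    print("Fail to reject normality at the 5% level.")
else:
    print("Reject normality at the 5% level.")
```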

The results of the data collection and USL were reviewed for that continuous data endpoint “Time Spent In Waiting Room” and were plotted as Figure 2 beneath.

Figure 2: Histogram with USL for time spent in waiting room. Cpk = 0.20

The Cpk value for the waiting room system was noted to be 0.20, indicating that (long term) the system in place would produce more than 500,000 Defects Per Million Opportunities (DPMO), with an accompanying Sigma level of < 1.5.  Is that a good level of performance for a system?  Heck no.  Look at how many patients wait more than 20 minutes in the system.  There's a quality issue there for sure.
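For anyone who wants to see the arithmetic behind a one-sided capability index like this, here's a minimal sketch.  The data are again simulated rather than the project's real sample, and this sketch simply computes the fitted normal tail beyond the USL; Minitab-style reports often add the conventional 1.5-sigma long-term shift before quoting a Sigma level, which is one reason the headline numbers above look worse than a raw tail-area calculation would.

```python
# Minimal sketch of a one-sided (USL-only) Cpk and the implied DPMO.
# Simulated data for illustration; the project's real figures came from
# its own sample, and no 1.5-sigma long-term shift is applied here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
wait_times = rng.normal(loc=18, scale=4, size=100)  # simulated minutes
USL = 20.0                                          # Voice of the Customer: 20 minutes at most

mean, sd = wait_times.mean(), wait_times.std(ddof=1)
cpk = (USL - mean) / (3 * sd)                       # one-sided capability index

# Fraction of the fitted normal distribution beyond the USL,
# scaled to Defects Per Million Opportunities.
dpmo = (1 - stats.norm.cdf(USL, loc=mean, scale=sd)) * 1_000_000

print(f"Cpk  = {cpk:.2f}")
print(f"DPMO = {dpmo:,.0f}")
```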

What about the Costs of Poor Quality (COPQ) associated with waiting in the waiting room?  Based on the four buckets of the COPQ, your team determines that the COPQ for the waiting room system (per year) is about $200,000.  Surprisingly high, yes, but everyone realizes (when they think about it) that the incident where Ms. Smith fell in the waiting room, after waiting 22 minutes and getting up to raise the volume on the TV, had gotten quite expensive.  You and the team take special note of which items from the Profit and Loss statement you included as part of the COPQ, because you want to be able to go back after changes have been made and see whether waste has been reduced.

In this case, for the physician waiting room you’re looking at, you calculate the HVPI as

(100)(0.20) / (200) or 0.1

That’s not very good!  Remember, the COPQ is expressed in thousands of dollars to calculate the HVPI.
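For clarity, here's the same arithmetic as a minimal sketch.  The function name is just for illustration (it isn't an established API); it simply implements the HVPI as defined in the last entry: 100 times the Cpk, divided by the COPQ expressed in thousands of dollars.

```python
# Minimal sketch of the HVPI arithmetic as used in this entry:
#   HVPI = (100 * Cpk) / (COPQ in thousands of dollars)
# The function name is illustrative, not an established API.

def hvpi(cpk: float, copq_dollars: float) -> float:
    """Healthcare Value Process Index from a Cpk and an annual COPQ in dollars."""
    return (100 * cpk) / (copq_dollars / 1000)

print(hvpi(cpk=0.20, copq_dollars=200_000))  # first waiting room -> 0.1
print(hvpi(cpk=0.06, copq_dollars=125_000))  # Jill's waiting room (see below) -> 0.048
```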

Just then, at the project meeting to review the data, your ears perk up when a practice manager named Jill says:  "Well, our patients never complain about the wait in our waiting room, which I think is better than the data we are looking at.  It feels like our patients routinely wait less than 20 minutes, AND I think we don't have much waste in the system.  Maybe you could do some things the way we do them in our practice."

As a quality improvement facilitator, you’re always looking for ideas, tools, and best practices to apply in projects like this one.  So you and the team plan to look in on the waiting room run by the practice manager.

Just like before, the group samples the performance of the system.  It runs the Anderson-Darling test on the data, which are found to be normally distributed.  (By the way, we don't see that routinely with waiting room times!)

Then, the team graphs the data as beneath:

Figure 3: Histogram of times spent in Jill’s waiting room. Cpk = 0.06

 

Interestingly, it turns out that this system has a central tendency very similar to the first waiting room you looked at–about 18 minutes.  Jill mentioned that most patients don't wait more than 18 minutes, and the data show that her instinct was spot on.

…but you and the team notice that the performance of Jill's waiting room is much worse than the first one you examined.  The Cpk for that system is 0.06–ouch!  Jill is disappointed, but you reassure her that it's very common to find that how we feel about a system's performance doesn't match the data once we actually collect them.  (More on that here.)  It's OK, because we are working together to improve.

When you calculate the COPQ for Jill's waiting room, you notice that (although the performance is poor) there's less waste as measured by the costs to deliver that performance.  The COPQ for Jill's waiting room system is $125,000.  (It's mostly owing to the wasted time the office staff spend trying to figure out who's next, plus some other specifics of how they run the system.)  What is the HVPI for Jill's waiting room?

(100)(0.06) / (125) = 0.048

Again, not good!

So, despite having lower costs associated with poor quality, Jill’s waiting room provides less value for patients than does the first waiting room that you all looked at.  It doesn’t mean that the team can’t learn anything from Jill and her team (after all, they are wasting less as measured by the COPQ) but it does mean that both Jill’s waiting room and the earlier one have a LONG way to go to improve their quality and value!

Fortunately, after completing the waiting room quality improvement project, the Cpk for the first system studied increased to 1.3 and Jill’s waiting room Cpk increased to 1.2–MUCH better.  The COPQ for each system decreased to $10,000 after the team made changes and went back to calculate the new COPQ based on the same items it had measured previously.

The new HVPI (with VOC from the patients) for the first waiting room?  That increased to 13 and the HVPI for Jill’s room rose to 12.  Each represents an awesome increase in value to the patients involved.  Now, of course, the challenge is to maintain those levels of value over time.
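Plugging the post-improvement numbers into the same illustrative formula shows where the 13 and the 12 come from:

```python
# Post-improvement HVPI figures, using the same arithmetic:
#   HVPI = (100 * Cpk) / (COPQ in thousands of dollars)

print((100 * 1.3) / (10_000 / 1000))  # first waiting room: 130 / 10 = 13.0
print((100 * 1.2) / (10_000 / 1000))  # Jill's waiting room: 120 / 10 = 12.0
```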

This example highlights how the value provided by a healthcare system for any continuous data endpoint can be calculated and compared across systems.  It can be tracked over time to demonstrate improvement.  The HVPI represents a unique value measure composed of a system capability measure and the costs of poor quality.

Questions or thoughts about the HVPI?  Let me know & let’s discuss!

 

 

Healthcare is at least a decade behind other high-risk industries

 

 

By:  David Kashmer (@DavidKashmer)

Did you know?  Our field lags behind many others in terms of attention to basic safety.  For those of you who focus on healthcare quality & safety, that’s probably old news.  After all, the Institute of Medicine said exactly that in its To Err Is Human report…from 1999 (!)

Here’s a portion of a recent post I wrote up for TheHill.com which describes exactly that & includes a link to that report:

Healthcare is at least a decade behind other high-risk industries in its attention to basic safety.

In 1999, the IOM published “To Err Is Human,” which codified what many quality experts in healthcare already knew:  in terms of quality improvement, healthcare is at least a decade behind.

More recently, a widely criticized paper from Johns Hopkins cited medical errors as the third leading cause of death in the United States. Even if you don’t agree that medical errors are the third leading cause, the fact that medical errors even make the list at all is obviously very concerning.

First published in TheHill.com


Click here for entire article:  http://thehill.com/blogs/pundits-blog/healthcare/311570-3-facts-about-us-healthcare-that-wont-change-with-the


What you may NOT know is that our field lags when it comes to the adoption of other emerging trends.  For example, here’s a graphic from earlier this year:

Healthcare lags other fields

Now, all of that said, I spend a lot of time wondering exactly why we lag in certain key areas.  Here’s what I’ve come up with, and I’m interested in any thoughts or feedback you might have.

(1) Using the word "lag" presupposes that the direction everyone else is going is some sort of goal to be achieved, or that this is a kind of race

It seems to me that the way the graphic above sets things up implies a progression or goal of digitization.  In that graphic, it seems as if we are ranked in terms of progress toward some endpoint of digitization.  Let’s take some time and consider whether framing the situation as progress toward some digital endpoint really makes sense.

Perhaps no one likes technology more than I do.  I tend to be an early adopter (and sometimes an innovator) with new devices and software that help me get done what I want to do, both personally and for patients.  Yes, I use a Fitbit.  (Not so special nowadays, really.)  And I use services like Exist.io to look for meaningful correlations across things I do, such as how much sleep I get versus how I perform.  This takes me no time (it all happens under the hood) and sometimes even surfaces non-intuitive correlations, which are perhaps the most useful.  Here's an example of what I mean, though this one is weak and I wouldn't do anything differently based on it:

(Image: an example of a weak correlation involving LinkedIn, surfaced by Exist.io.)

The bottom line:  every time I see a Big Data article or learn how websites figure out things about my health that I don't even know, I think we are pretty much all-in on this progression toward digitization…at least I am!

So, on this one, I believe that, yes, there is a meaningful progression toward digitization across industries and, yes, it's more useful for healthcare to get on board than to lament where things are going or to question whether digitization is meaningful for healthcare.  I especially feel good about it when I remember the days of my training, when I had to hunt for X-rays on film; now the X-ray or CT scan is on my computer instantly!

(2) In part, we are slower to adopt because we deal with people’s health.

We don't build cars or fly planes, really.  Although certain lessons learned from other industries are very important, many in healthcare believe our service is different.  Some are even skeptical of whether we should adopt tools that worked well across other industries.  We work with people's health, after all.  In the United States especially, that's a very big deal, and many regard it as a true calling.  So, being the careful people we are (I often wonder just how risk-averse we are), it seems to make sense that our field may be slower than others to adopt new things.  It's very conservative, and maybe even highly adaptive, to be that way.

When it comes to certain aspects of our work, like patient safety and quality, I should add that there are well-worn tools that apply to all services–even a service like ours.  We should adopt these, and unfortunately we are still behind.  I'll add that adopting these tools helps us as providers even as it helps our patients.  (If you're interested in specifics, take a look at Volume to Value.)

So, bottom line here:  part of why healthcare may be slower to adopt emerging trends is because we feel very strongly that only the best, well-worn, known tools should be applied to people’s health.

(3) Sometimes we are slower to adopt because much of the push to adopt has come from outside

About three months ago, I’d just finished speaking at a quality improvement conference in Philadelphia.  This one had over a thousand participants from diverse companies.  It really ran the gamut from Ford to Crayola to large hospitals to DuPont, and each participant was focused on quantitative quality improvement.  After my talk, there were lots of questions.  One really struck me in particular:

“How can you improve healthcare quality when you still get paid even when things are bad?  I mean, when I make a car if there’s a quality problem and it comes back, I eat that cost…”

This audience member really hit it on the head.  Isn’t it difficult to advance topics like quality (where healthcare is a decade behind) if you’re still reimbursed even when there’s a quality issue?  What he’d hit on is the tension between a pure fee-for-service model versus value-based reimbursement.

I was able to tell him that healthcare is transitioning, right now, away from being paid even when there's a quality issue to a model where reimbursement is much more closely tied to the value provided to patients.  I also shared with him that things aren't easy, because we all have to agree on what exactly value and quality mean in healthcare, but that we are getting there.  We talked about how buy-in from everyone in healthcare for quality initiatives (especially more rigorous, quantitative ones) will, I think, increase over the next 10-15 years as a result.  Sure enough, I think we can see this is already happening:

(Image: Lean Six Sigma in healthcare.  Click the image for the entire article.)

Our conversation reinforced for me that much of the quality push, and the digitization push, has come from outside of healthcare.  When the adoption of electronic health records and other forms of digitization is incentivized via meaningful use initiatives, and the Department of Health and Human Services explains that more and more reimbursement will be tied to value-based metrics, it's clear that a significant portion of the push to adopt emerging trends has come from outside the traditional healthcare sphere.

Items that were typically hailed as improvements in healthcare, over the last hundred years, included game-changers like general anesthesia, penicillin, or the ability to safely traverse the one to two inches between the heart and the outside world with cardiac surgery.  (Prior to the development of cardiac surgery, some famous surgeons had previously predicted that route would forever be closed!)

Now, especially to physicians, it can be harder to see the value in moving in these directions.  Many in healthcare feel they are being pushed toward them.  Yes, every physician wants the best outcome for the patient, yet seeing quality as the systematic reduction of variation, along with improvement in the central tendency of a population, is not always, well, intuitive.  Against the backdrop of the very specific, individualized physician-patient relationship, it can be challenging to see the value of a quality initiative aimed at eliminating a defect for which the patient in front of the doctor seems to be at low (or even no) risk.

I'm not saying whether any of this is good or bad; I'm only sharing what is:  we may be slower to adopt these trends in healthcare because they have often come from outside.  That, it seems to me, explains some of why the field is slower to adopt these changes.

Having worked in healthcare for more than a decade in many roles, from cleaning rooms in the Emergency Department to working in the OR as a surgeon, I can share that, yes, we in healthcare are behind other industries in adopting key trends.  However, I believe this is much more understandable given the nature of our work, which directly (and individually) affects the quality and quantity of human life, as well as the fact that (for better or worse) much of the impetus to adopt these trends has come from the outside.  I consider it my responsibility, and all of ours as providers, to be on the lookout for ways we can adopt the well-worn tools that already exist to improve quality and digitization in our field.  Let's make our call to action one where we get on board with these trends, at least for those aspects that we reasonably expect may improve our care.

Great Healthcare Quality Projects Repeat Themselves

 

David Kashmer, MD MBA MBB (@DavidKashmer)

As healthcare adopts more and more of the Lean Six Sigma techniques, certain projects begin to repeat across organizations.  It makes sense.  After all, we live in the healthcare system and, once we have the tools, some projects are just so, well, obvious!

About two years ago, I wrote about a project I'd done that included decreasing the amount of time required to prepare OR instruments.  See that here.  And, not surprisingly, by the time I had written about the project, I had seen it done at several centers with amazing results.

Recently, I was glad to see the project repeat itself.  This time, Virginia Mason had performed the project and had obtained its routine, impressive result.

This entry is to compliment the Virginia Mason team on their completion of the OR quality improvement project they describe here.  I'm sure the project wasn't easy, and I compliment this well-known organization on drastically decreasing waste while improving both quality & patient safety.

Like many others, I believe healthcare quality improvement is in its infancy.  We, as a field, are years behind other industries in terms of sophistication regarding quality improvement–and that’s for many different reasons, not all of which we directly control.

In that sort of climate, it's good to see certain projects repeating across institutions.  This particular surgical instrument project is a great one, as the Virginia Mason & Vanderbilt experiences indicate, and it highlights the dissemination of quality tools throughout the industry.

Nice work, Virginia Mason team!