http://bit.ly/2k8TDNR This episode describes some of the keys for easy collection of meaningful data. Do you find yourself so busy that data collection is difficult? (Like trying to eat wings and watch the game?) Then this is the episode for you!
David Kashmer (@DavidKashmer)
Of the many barriers we face while trying to improve quality in healthcare, none is perhaps more problematic than the lack of good data. Although everyone seems to love data (I see so much written about healthcare data) it is often very tough to get. And when we do get it, much of the data we get are junk. It’s not easy to make meaningful improvements based on junk data. So, what can we do to get meaningful data for healthcare quality improvement?
In this entry, I’ll share some tools, tips, & techniques for getting meaningful quality improvement data from your healthcare system. I’ll share how to do that by telling a story about Super Bowl LI…
The Super Bowl Data Collection
About ten minutes before kickoff, I had a few questions about the Super Bowl. I was wondering if there was a simple way to gauge the performance of each team and make some meaningful statements about that performance.
When we do quality improvement projects, it’s very important to make sure it’s as easy as possible to collect data. I recommend collecting data directly from the process rather than retrospectively or from a data warehouse. Why? For one, I was taught that the more filters the data pass through the more they are cleaned up or otherwise altered. They tend to lose fidelity and a direct representation of the system. Whether you agree or not, my experience has definitely substantiated that teaching.
The issue with that is how do I collect data directly from the system? Isn’t that cumbersome? We don’t have staff to collect data (!) Like you, I’ve heard each of those barriers before–and that’s what makes the tricks and tools I’m about to share so useful.
So back to me then, sitting on my couch with a plate of wings and a Coke ready to watch the Super Bowl. I wanted data on something that I thought would be meaningful. Remember, this wasn’t a DMAIC project…it was just something to see if I could quickly describe the game in a meaningful way. It would require me to collect data easily and quickly…especially if those wings were going to get eaten.
Decide Whether You’ll Collect Discrete or Continuous Data
So as the first few wings disappeared, I decided what type of data I'd want to collect. I would definitely collect continuous data if at all possible. (Not discrete.) That part of the decision was easy. (Wonder why? Don't know the difference between continuous and discrete data? Look here.)
Ok, the next issue was these data had to be very easy for me to get. They needed to be something that I had a reasonable belief would correlate with something important. Hmmm…ok, scoring touchdowns. That’s the whole point of the game after all.
Get A Clear Operational Definition Of What You’ll Collect
As wings number three and four disappeared, and the players were introduced, I decided on my data collection plan:
- collect how far away each offense was from scoring a touchdown when possession changed
- each data point would come from where the ball was at the start of 4th down
- interceptions, fumbles, and other changes of possession before 4th down would NOT be recorded (I'll get to why in a minute.)
- touchdowns scored were recorded as “0 yards away”
- a play where a field goal was attempted would be recorded as where the ball was at the start of the down
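A minimal sketch of that operational definition as a filter function; the event labels ("punt", "touchdown", and so on) are my own shorthand, not from any play-by-play feed:

```python
# A sketch of the operational definition above as a filter function.
# The event labels ("touchdown", "punt", etc.) are my own, not from any feed.

def record_yards_to_endzone(event_type, yards_to_endzone):
    """Return the value to record for a possession, or None to skip it."""
    if event_type == "touchdown":
        return 0  # touchdowns recorded as "0 yards away"
    if event_type in ("punt", "field_goal_attempt", "turnover_on_downs"):
        return yards_to_endzone  # spot of the ball at the start of 4th down
    return None  # interceptions, fumbles, etc. are NOT recorded
```

So record_yards_to_endzone("punt", 52) returns 52, while record_yards_to_endzone("interception", 30) returns None and the possession is skipped.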
Of course, for formal quality projects, we would collect more than just one data point. Additionally, we'd specify an exact operational definition for each endpoint.
We’d also make a sample size calculation. Here, however, I intended to collect every touchdown and change of possession where a team kicked away on fourth down or went for it but didn’t make it. So this wasn’t a sample of those moments. It was going to be all of them. Of course, they don’t happen that often. That was a big help here, because they can also be anticipated. That was all very important so I could eat those wings.
Items like interceptions, fumbles, and other turnovers cannot be anticipated as easily. They would also have forced me to pay attention to where the ball was spotted at the beginning of every down. It was tough enough to track the spot of the ball for the downs I was going to record.
With those rules in mind, I set out to record the field position whenever possession changed. I thought that the position where the offense ended each possession might, over time, correlate with who won the game. Less overall variance in final position might mean a team had fewer moments where it under-performed and lost possession nearer to its own endzone.
Of course, it could also mean that the team never reached the endzone for a touchdown. In fact, if an offense played the whole game between its opponent's 45- and 50-yard lines, it would have little variation in field position…but it also probably wouldn't score much. Maybe a combination of better field position (higher median field position) and low variation in field position would indicate who won the game. I thought it might. Let's see if I was right.
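That hypothesis takes only a few lines to check once the data are in hand. The numbers below are made up for illustration and are NOT the actual Super Bowl LI data:

```python
import statistics

# Made-up end-of-possession distances (yards from the opposing endzone);
# NOT the actual Super Bowl LI data. Lower distance = better field position.
patriots = [25, 0, 10, 35, 0, 20, 15]
falcons = [0, 55, 0, 60, 45, 5, 70]

for team, distances in (("Patriots", patriots), ("Falcons", falcons)):
    print(team,
          "median:", statistics.median(distances),
          "stdev:", round(statistics.stdev(distances), 1))
```

The comparison of interest is exactly the pair of numbers printed for each team: the central tendency (median) and the spread (standard deviation) of final field position.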
Data Collection: Nuts and Bolts
Next, I quickly drew a continuous data collection sheet. It looked like this:
Sounds fancy, but obviously it isn’t. That’s an important tool for you when you go to collect continuous data right from your process: the continuous data collection sheet can be very simple and very easy to use.
Really, that was about it. I went through the game watching, like you, the Patriots fall way behind for the first two quarters. After some Lady Gaga halftime show (with drones!) I looked at the data and noticed something interesting.
The Patriots' data on distance from the endzone seemed to show less variance than the Falcons'. (I'll show you the actual data collection sheets in a moment.) It was odd. Yes, they were VERY far behind. Yes, there had been two costly turnovers that led to the Falcons opening up a huge lead. But, strangely, in terms of moving the ball and getting closer to the endzone on their own offensive possessions, the Patriots were actually doing better than the Falcons. Three people around me pronounced the Patriots dead and one even said we should change the channel.
If you’ve read this blog before, you know that one of the key beliefs it describes is that data is most useful when it can change our minds. These data, at least, made me unsure if the game was over.
As you know (no spoiler alert really by now) the game was far from over and the Patriots executed one of the most impressive comebacks (if not the most impressive) in Super Bowl history. Data collected and wings eaten without difficulty! Check and check.
Here are the final data collection sheets:
Notice the number in parentheses next to the distance from the endzone when possession changed? That number is the team's possession number. So, 52(7) means the Falcons were 52 yards away from the Patriots' endzone when they punted the ball on their seventh possession of the game. An entry like 0(10) would mean that the team scored a touchdown (0 yards from the opposing team's endzone) on their tenth possession.
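For what it's worth, entries in that format are easy to handle programmatically later. A sketch (the function name and regular expression are mine):

```python
import re

def parse_entry(entry):
    """Parse a check-sheet entry like '52(7)' into (yards_away, possession_number)."""
    match = re.fullmatch(r"(\d+)\((\d+)\)", entry)
    if match is None:
        raise ValueError(f"unrecognized entry: {entry!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_entry("52(7)"))   # (52, 7)
print(parse_entry("0(10)"))   # (0, 10)
```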
Notice that collecting data this way and stacking similar numbers on top of each other builds a histogram over time. That's what let me see that the variation in the Patriots' final field position was smaller than the Falcons' by about halfway through the game.
Anything To Learn From The Data Collection?
Recently, I put the data into Minitab to see what I could learn. Here are those same histograms for each offense’s performance:
Notice a few items. First, each set of data do NOT deviate from the normal distribution per the Anderson-Darling test. (More info on what that means here.) However, a word of caution: there are so few data points in each set that it can be difficult to tell which distribution they follow. I even performed distribution fitting to demonstrate that testing will likely show that these data do not deviate substantially from other distributions either. Again, it’s difficult to tell a difference because there just aren’t that many possessions for each team in a football game. In a Lean Six Sigma project, we would normally protect against this with a good sampling plan as part of our data collection plan but, hey, I had wings to eat! Here’s an example of checking the offense performance against other data distributions:
Just as with the initial Anderson-Darling test, we see here that the data do not deviate from many of these other distributions either. Bottom line: we can’t be sure which distribution it follows. Maybe the normal distribution, maybe not.
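If you want to run that kind of check yourself, SciPy ships an Anderson-Darling implementation. This sketch uses randomly generated stand-in data (not the game data), and note that SciPy reports critical values at fixed significance levels rather than a p-value:

```python
import numpy as np
from scipy import stats

# Randomly generated stand-in data (NOT the actual game data).
rng = np.random.default_rng(0)
data = rng.normal(loc=40, scale=15, size=12)

result = stats.anderson(data, dist="norm")
# If the A-D statistic is below the 5% critical value, we fail to reject
# normality -- i.e., the data "do not deviate" from the normal distribution.
crit_5pct = result.critical_values[list(result.significance_level).index(5.0)]
print("A-D statistic:", round(result.statistic, 3))
print("5% critical value:", crit_5pct)
print("consistent with normal:", result.statistic < crit_5pct)
```

With only a dozen points, as discussed above, "consistent with normal" should be read as "we can't rule normality out", not as proof of the distribution.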
In any event, we are left with some important questions. Notice the variance exhibited by the Patriots offense versus the Falcons offense: this highlights that the Patriots in general were able to move the ball closer to the Falcons endzone by the time the possession changed (remember that turnovers aren’t included). Does that decreased variation correlate with the outcome of every football game? Can it be used to predict outcomes of games? I don’t know…at least not yet. After all, if stopping a team inside their own 10 yard line once or twice was a major factor in predicting who won a game, well, that would be very useful! If data is collected by the league on field position, we could apply this idea to previous games (maybe at half time) and see if it predicts the winner routinely. If it did, we could apply it to future games.
In the case of Super Bowl LI, the Patriots offense demonstrated a better median field position and less variation in overall field position compared to the Falcons.
Of course, remember this favorite quote:
All models are wrong, but some are useful. — George E.P. Box (of Box-Cox transform fame)
Final Recommendations (How To Eat Wings AND Collect Data)
More importantly, this entry highlights a few interesting tools for data collection for your healthcare quality project. At the end of the day, in order to continue all the things you have to do and collect good data for your project, here are my recommendations:
(1) get data right from the process, not a warehouse or after it has been cleaned.
(2) use continuous data!
(3) remember the continuous data check sheet can be very simple to set up and use
(4) when you create a data collection plan, remember the sample size calculation & operational definition!
(5) reward those who collect data…maybe with wings!
David Kashmer is a trauma surgeon and Lean Six Sigma Master Black Belt. He writes about data-driven healthcare quality improvement for TheHill.com, Insights.TheSurgicalLab.com, and TheHealthcareQualityBlog.com. He is the author of the Amazon bestseller Volume To Value, & is especially focused on how best to measure value in Healthcare.
http://bit.ly/2jJ2qlq This episode describes a useful healthcare value metric based on a process capability measure (Cpk) and waste measurement (Cost of Poor Quality).
David Kashmer (@DavidKashmer)
In the last entry, you saw a novel, straightforward metric to capture the value provided by a healthcare service called the Healthcare Value Process Index (HVPI). In this entry, let’s explore another example of exactly how to apply the metric to a healthcare service to demonstrate how to use the index.
At America’s Best Hospital, a recent quality improvement project focused on time patients spent in the waiting room of a certain physician group’s practice. The project group had already gone through the steps of creating a sample plan and collecting data that represents how well the system is working.
From a patient survey, sent out as part of the project, the team learned that patients were willing to wait, at most, 20 minutes before seeing the physician. So, the Voice of the Customer (VOC) was used to set the Upper Specification Limit (USL) of 20 minutes.
A normality test (the Anderson-Darling test) was performed, and the data collected follow the normal distribution as per Figure 1 beneath. (Wonder why the p >0.05 is a good thing when you use the Anderson-Darling test? Read about it here.)
The results of the data collection and USL were reviewed for that continuous data endpoint “Time Spent In Waiting Room” and were plotted as Figure 2 beneath.
The Cpk value for the waiting room system was noted to be 0.20, indicating that (long term) the system in place would produce more than 500,000 Defects Per Million Opportunities (DPMO), with an accompanying Sigma level of < 1.5. Is that a good level of performance for a system? Heck no. Look at how many patients wait more than 20 minutes in the system. There's a quality issue there for sure.
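For the curious, a Cpk against a one-sided upper specification limit is just (USL − mean) / (3 × standard deviation). A sketch with made-up waiting-room times (not the project's actual data) chosen so the mean sits near the 20-minute USL with a wide spread, producing a similarly low Cpk:

```python
import statistics

def cpk_upper(data, usl):
    """One-sided Cpk against an upper spec limit: (USL - mean) / (3 * stdev)."""
    return (usl - statistics.fmean(data)) / (3 * statistics.stdev(data))

# Made-up waiting-room times in minutes (not the project's actual data).
times = [12, 22, 17, 10, 24, 17, 14, 20]
print(round(cpk_upper(times, usl=20), 2))  # 0.21
```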
What about the Costs of Poor Quality (COPQ) associated with waiting in the waiting room? Based on the four buckets of the COPQ, your team determines that the COPQ for the waiting room system (per year) is about $200,000. Surprisingly high, yes, but everyone realizes (when they think about it) that the time Ms. Smith fell in the waiting room after being there 22 minutes, because she tried to raise the volume on the TV, had gotten quite expensive. You and the team take special note of which items from the Profit and Loss statement you included as part of the COPQ, because you want to be able to go back after changes have been made and see whether waste has been reduced.
In this case, for the physician waiting room you’re looking at, you calculate the HVPI as
(100)(0.20) / (200) or 0.1
That’s not very good! Remember, the COPQ is expressed in thousands of dollars to calculate the HVPI.
Just then, at the project meeting to review the data, your ears perk up when a practice manager named Jill says: "Well, our patients never complain about the wait in our waiting room, which I think is better than the data we're looking at. It feels like our patients routinely wait less than 20 minutes, AND I don't think we have much waste in the system. Maybe we could do some things the way we do them in our practice."
As a quality improvement facilitator, you’re always looking for ideas, tools, and best practices to apply in projects like this one. So you and the team plan to look in on the waiting room run by the practice manager.
Just like before, the group samples the performance of the system. It runs the Anderson-Darling test on the data and they are found to be normally distributed. (By the way, we don’t see that routinely in waiting room times!)
Then, the team graphs the data as beneath:
Interestingly, it turns out that this system has a central tendency very similar to the first waiting room you looked at–about 18 minutes. Jill mentioned how most patients don’t wait more than 18 minutes and the data show that her instinct was spot on.
…but, you and the team notice that the performance of Jill’s waiting room is much worse than the first one you examined. The Cpk for that system is 0.06–ouch! Jill is disappointed, but you reassure her that it’s very common to see that how we feel about a system’s performance doesn’t match the data when we actually get them. (More on that here.) It’s ok because we are working together to improve.
When you calculate the COPQ for Jill's waiting room, you notice that (although the performance is poor) there's less waste as measured by the costs to deliver that performance. The COPQ for Jill's waiting room system is $125,000. (It's mostly owing to the wasted time the office staff spend trying to figure out who's next, and some other specifics of how they run the system.) What is the HVPI for Jill's waiting room?
(100)(0.06) / (125) = 0.048
Again, not good!
So, despite having lower costs associated with poor quality, Jill’s waiting room provides less value for patients than does the first waiting room that you all looked at. It doesn’t mean that the team can’t learn anything from Jill and her team (after all, they are wasting less as measured by the COPQ) but it does mean that both Jill’s waiting room and the earlier one have a LONG way to go to improve their quality and value!
Fortunately, after completing the waiting room quality improvement project, the Cpk for the first system studied increased to 1.3 and Jill’s waiting room Cpk increased to 1.2–MUCH better. The COPQ for each system decreased to $10,000 after the team made changes and went back to calculate the new COPQ based on the same items it had measured previously.
The new HVPI (with VOC from the patients) for the first waiting room? That increased to 13 and the HVPI for Jill’s room rose to 12. Each represents an awesome increase in value to the patients involved. Now, of course, the challenge is to maintain those levels of value over time.
This example highlights how the value provided by a healthcare system, for any continuous data endpoint, can be calculated and compared across systems. It can be tracked over time to demonstrate increases. The HVPI represents a unique value measure composed of a system capability measure and the costs of poor quality.
Questions or thoughts about the HVPI? Let me know & let’s discuss!
You’ve probably heard the catchphrase “volume to value” to describe the current transition in healthcare. It’s based on the idea that healthcare volume of services should no longer be the focus when it comes to reimbursement and performance. Instead of being reimbursed a fee per service episode (volume of care), healthcare is transitioning toward reimbursement with a focus on value provided by the care given. The Department of Health and Human Services (HHS) has recently called for 50% or more of payments to health systems to be value-based by 2018.
Here’s a recent book I completed on just that topic: Volume to Value. Do you know what’s not in that book, by the way? One clear metric on how exactly to measure value across services! That matters because, after all
If you can’t measure it, you can’t manage it. –Peter Drucker
An entire book on value in healthcare and not one metric which points right to it! Why not? (By the way, some aren’t sure that Peter Drucker actually said that.)
Here's why not: in healthcare, we don't yet agree on what "value" means. For example, look here. Yeesh, that's a lot of different definitions of value. We can talk about ways to improve value by decreasing the cost of care and improving outcomes, but we don't have one clear metric on value (in part) because we don't yet agree on a definition of what value is.
In this entry, I’ll share a straightforward definition of value in healthcare and a straightforward metric to measure that value across services. Like all entries, this one is open for your discussion and consideration. I’m looking for feedback on it. An OVID, Google, and Pubmed search revealed nothing similar to the metric I propose beneath.
First, let’s start with a definition of value. Here’s a classic, arguably the classic, from Michael Porter (citation here).
Value is “defined as the health outcomes per dollar spent.”
Ok so there are several issues that prevent us from easily applying this definition in healthcare. Let’s talk about some of the barriers to making something measurable out of the definition. Here are some now:
(1) Remarkably, we often don’t know how much (exactly) everything costs in healthcare. Amazing, yes, but nonetheless true. With rare exception, most hospitals do not know exactly how much it costs to perform a hip replacement and perform the after-care in the hospital for the patient. The time spent by FTE employees, the equipment used, all of it…nope, they don’t know. There are, of course, exceptions to this. I know of at least one health system that knows how much it costs to perform a hip replacement down to the number and amount of gauze used in the OR. Amazing, but true.
(2) We don't have a standardized way of assessing health outcomes. There are some attempts at this, such as QALYs, but one of the fundamental problems is: how do you express quality in situations where the outcome you're looking for is something other than quantity and quality of life? The QALY measures outcome, in part, in years of life, but how does that make sense for very acute diseases like necrotizing soft tissue infections (often in patients who won't be alive many more years whether the disease is addressed or not), or for other items to improve like days on the ventilator? It is VERY difficult to come up with a standard to demonstrate outcomes–especially across service lines.
(3) The entity that pays is not usually the person receiving the care. This is a huge problem when it comes to measuring value. To illustrate the point: imagine America’s Best Hospital (ABH) where every patient has the best outcome possible.
No matter what patient with what condition comes to ABH, they will have the BEST outcome possible. By every outcome metric, it's the best! It even spends little to nothing (compared to most centers) to achieve these incredible outcomes. One catch: the staff at ABH is so busy that they just never write anything down. ABH, of course, would likely not be in business for long. Why? Despite these incredible outcomes for patients, ABH would NEVER be reimbursed. This thought experiment shows that valuable care must somehow include not just attention to patients (the Voice of the Patient or Voice of the Customer in Lean & Six Sigma parlance), but also the necessary mechanics required to be reimbursed by third-party payors. I'm not saying whether that's a good or bad thing…only that it simply is.
So, with those barriers to creating a good value metric for healthcare in mind, let's discuss how one might look. What would be necessary to measure value across different services in healthcare? A useful value metric would
(1) Capture how well the system it is applied to is working. It would demonstrate the variation in that system. In order to determine “how well” the system is working, it would probably need to incorporate the Voice of the Customer or Voice of the Patient. The VOP/VOC often is the upper or lower specification limit for the system as my Lean Six Sigma and other quality improvement colleagues know. The ability to capture this performance would be key to represent the “health outcomes” portion of the definition.
(2) Be applicable across different service lines and perhaps even different hospitals. This requirement is very important for a useful metric. Can we create something that captures outcomes as disparate as time spent waiting in the ER and something like patients who have NOT had a colonoscopy (but should have)?
(3) Incorporate cost as an element. This item, also, is required for a useful metric. How can we incorporate cost if, as said earlier, most health systems can’t tell you exactly how much something costs?
With that, let’s discuss the proposed metric called the “Healthcare Value Process Index”:
Healthcare Value Process Index = (100) Cpk / COPQ
where Cpk = the Cpk value for the system being considered, COPQ is the Cost of Poor Quality for that same system in thousands of dollars, and 100 is an arbitrary constant. (You’ll see why that 100 is in there under the example later on.)
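As code, the metric is a one-liner; the function name is mine, and the worked figures (Cpk of 1.3, COPQ of $325,000) match the ER waiting-room example later in this entry:

```python
def hvpi(cpk, copq_dollars):
    """Healthcare Value Process Index: 100 * Cpk / (COPQ in thousands of dollars)."""
    return 100 * cpk / (copq_dollars / 1000)

# Cpk 1.3 and a COPQ of $325,000 give an HVPI of 0.4.
print(round(hvpi(1.3, 325_000), 2))  # 0.4
```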
Yup, that’s it. Take a minute with me to discover the use of this new value metric.
First, Cpk is well-known in quality circles as a representation of how capable a system is at delivering a specified output long term. It gives a LOT of useful information in a tight package. The Cpk, in one number, describes the number of defects a process is creating. It incorporates the element of the Voice of the Patient (sometimes called the Voice of the Customer [VOC] as described earlier) and uses that important element to define what values in the system are acceptable and which are not. In essence, the Cpk tells us, clearly, how the system is performing versus specification limits set by the VOC. Of course, we could use sigma levels to represent the same concepts.
Weaknesses? Yes. For example, some systems follow non-normal data distributions; Box-Cox transformations or other tools could be used in those circumstances. Also, for each Healthcare Value Process Index, it would make sense to specify where the VOC came from: is it a patient-defined endpoint or a third-party-payor one?
That’s it. Not a lot of mess or fuss. That’s because when you say the Cpk is some number, we have a sense of the variation in the process compared to the specification limits of the process. We know how whatever process you are talking about is performing, from systems as different as time spent peeling bananas to others like time spent flying on a plane. Again, healthcare colleagues, here’s the bottom line: there’s a named measure for how well a system represented by continuous data (eg time, length, etc.) is performing. This system works for continuous data endpoints of all sorts. Let’s use what’s out there & not re-invent the wheel!
(By the way, wondering why I didn’t suggest the Cp or Ppk? Look here & here and have confidence you are way beyond the level most of us in healthcare are with process centering. Have a look at those links and pass along some comments on why you think one of those other measures would be better!)
Ok, and now for the denominator of the Healthcare Value Process Index: the Cost of Poor Quality. Remember how I said earlier that health systems often don’t know exactly how much services cost? They are often much more able to tell when costs decrease or something changes. In fact, the COPQ captures the Cost of Poor Quality very well according to four buckets. It’s often used in Lean Six Sigma and other quality improvement systems. With a P&L statement, and some time with the Finance team, the amount the healthcare system is spending on a certain system can usually be sorted out. For more info on the COPQ and 4 buckets, take a look at this article for the Healthcare Financial Management Association. The COPQ is much easier to get at than trying to calculate the cost of an entire system. When the COPQ is high, there’s lots of waste as represented by cost. When low, it means there is little waste as quantified by cost to achieve whichever outcome you’re looking at.
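To make the bucket idea concrete, here's a minimal sketch of tallying a COPQ by the four buckets referenced above; the dollar figures are invented for illustration, not from any real P&L:

```python
# A minimal sketch of tallying a COPQ by the four buckets; the dollar
# figures are invented for illustration, not from any real P&L statement.
copq_buckets = {
    "surveillance": 15_000,       # costs of checking in on the system
    "internal_failure": 40_000,   # defects caught before reaching the patient
    "external_failure": 60_000,   # defects that reached the patient
    "prevention": 10_000,         # spending that heads off future defects
}
total_copq = sum(copq_buckets.values())
print(f"Total COPQ: ${total_copq:,}")  # Total COPQ: $125,000
```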
So, this metric checks all the boxes described earlier for exactly what a good metric for healthcare value would look like. It is applicable across service lines, captures how well the system is working, and represents the cost of the care that’s being rendered in that system. Let’s do an example.
Pretend you’re looking at a sample of the times that patients wait in the ER waiting room. The Voice of the Customer says that patients, no matter how not-sick they may seem, shouldn’t have to wait any more than two hours in the waiting room.
Of course, it’s just an example. That upper specification limit for wait time could have been anything that the Voice of the Customer said it was. And, by the way, who is the Voice of the Customer that determined that upper spec limit? It could be a regulatory agency, hospital policy, or even the director of the ER. Maybe you sent out a patient survey and the patients said no one should ever have to wait more than two hours!)
When you look at the data you collected, you find that 200 patients came through the ER waiting room in the time period studied, and 2 of them waited longer than the two-hour limit. That means 2 defects per 200 opportunities, which is a DPMO (Defects Per Million Opportunities) of 10,000. Let's look at the Cpk level associated with that level of defect:
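The DPMO arithmetic is simple enough to sketch (the function name is mine):

```python
def dpmo(defects, opportunities):
    """Defects Per Million Opportunities."""
    return defects / opportunities * 1_000_000

# 2 defects out of 200 opportunities:
print(dpmo(2, 200))  # 10000.0
```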
Ok, that’s a Cpk of approximately 1.3 as per the table above. Now what about the costs?
We look at each of the four buckets associated with the Cost of Poor Quality. (Remember those four buckets?) First, the surveillance bucket: an FTE takes 10 minutes of their time every shift to check how long people have been waiting in the waiting room. (In real life, there are probably more surveillance costs than this.) Ok, so those are the costs required to check in on the system because of its level of function.
What about the second bucket, the cost of internal failures? That bucket includes all of the costs associated with issues that arise in the system but do not make it to the patient. In this example, it would be the costs attributed to problems with the amount of time a person is in the waiting room that don’t cause the patient any problems. For example, were there any events when one staff member from the waiting room had to walk back to the main ED because the phone didn’t work and so they didn’t know if it was time to send another patient back? Did the software crash and require IT to help repair it? These are problems with the system which may not have made it to the patient and yet did have legitimate costs.
The third bucket, often the most visible and high-profile, includes the costs associated with defects that make it to the patient. Did someone with chest pain somehow wind up waiting in the waiting room for too long, and require more care than they would have otherwise? Did someone wait more than the upper spec limit and then the system incurred some cost as a result? Those costs are waste and, of course, are due to external failure of waiting too long.
The last bucket, my favorite, is the costs of prevention. As you’ve probably learned before, this is the only portion of the COPQ that generates a positive Return On Investment (ROI) because money spent on prevention usually goes very far toward preventing many more costs downstream. In this example, if the health system spent money on preventing defects (eg some new computer system or process that freed up the ED to get patients out of the waiting room faster) that investment would still count in the COPQ and would be a cost of prevention. Yes, if there were no defects there would be no need to spend money on preventative measures; however, again, that does not mean funds spent on prevention are a bad idea!
After all of that time with the four buckets and the P&L, the total COPQ is discovered to be $325,000. Yes, that’s a very typical size for many quality improvement projects in healthcare.
Now, to calculate the Healthcare Value Process Index, we take the system's performance (Cpk of 1.3), multiply it by 100, and divide by 325. We see a Healthcare Value Process Index of 0.4. We carefully remember that the upper spec limit was 120 minutes and came from the VOC, which we list when we report it out. The 100 is there simply to scale the typical answer to a number that's easier to remember.
We would report this Healthcare Value Process Index as “Healthcare Value Process Index of 0.4 with VOC of 120 min from state regulation” or whomever (whichever VOC) gave us the specification limits to calculate the Cpk. Doing that allows us to compare a Healthcare Value Process Index from institution to institution, or to know when they should NOT be compared. It keeps it apples to apples!
Now imagine the same system performing worse: a Cpk of 0.7. It even costs more, with a COPQ of $425,000. The Healthcare Value Process Index (HVPI)? That's about 0.16. Easy to see it's worse!
How about a great system for getting patients screening colonoscopies within a certain amount of time or by a certain age? It performs really well, with a Cpk of 1.9 (wow!), and has a COPQ of $200,000. Its HVPI? That's 0.95. Much better than those other systems!
Perhaps even more useful than comparing systems with the HVPI is tracking the HVPI for a service process. After all, no matter what costs were initially assigned to a service process, watching them change over time with improvements (or worsening of the costs) would likely prove more valuable. If the Cpk improves and costs go down, expect a higher HVPI next time you check the system.
At the end of the day, the HVPI is a simple, intuitive, straightforward measure to track value across a spectrum of healthcare services. The HVPI helps clarify when value can (and cannot) be compared across services. Calculating the HVPI requires knowledge of system capability measures and clarity in assigning COPQ. Regardless of a system's initial values and of how its costs are assigned, trending the HVPI over time may be the most useful way to track the value a given system provides.
Questions? Thoughts? Hate the arbitrary 100 constant? Leave your thoughts in the comments and let’s discuss.
http://bit.ly/2izQO63 In this podcast, we discuss 2 key ideas to evaluate your quality improvement system.
David Kashmer (@DavidKashmer)
How would you evaluate a healthcare quality improvement program? Let’s say you’re looking at your healthcare system’s process improvement system and wondering “How good are we at process improvement?” How would you know just how well the quality system was performing?
I’ve sometimes heard this called “PI-ing the PI”, and it makes sense–after all, the idea of building a quality system even extends to learning how well the process improvement (PI) system works.
In the many systems I’ve worked in, helped design, or consulted for, I’ve found that the question of “How good are we at PI?” can often be boiled down to a matter of efficiency and effectiveness.
Efficiency

This dimension of the PI process can be thought of as how little waste there is in the PI process itself. What is the cycle time from issue identification until closure? How much paper & cost does the PI process incur? Do projects take more than 120 days?
The efficiency question is very difficult to answer in healthcare process improvement, and I think that’s because our systems are not yet well developed enough to have many benchmarks for how long things should take from identification until closure (for example). I often use three months (90 days) as the median time from issue identification to closure, because a few papers cite that number for formal DMAIC projects.
Now, there are a few important statements here: (1) when I say 90 days to issue closure I mean meaningful closure & (2) if 90 days is a median target…what’s the variance of the population?
Let me explain a bit: Lean Six Sigma practitioners are often comfortable thinking of continuous variables as a distribution with a measure of variance (like range or standard deviation) to indicate just how wide the population at hand is. Quality projects often focus on decreasing the standard deviation to make sure things go better in general. This same approach can be used to “PI the PI” efficiency. What is the standard deviation of how long it takes to identify and close out an issue in the PI system, for example? How can it be reduced?
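To make the median-and-spread idea concrete, here’s a small Python sketch using only the standard library. The cycle times are invented sample data for illustration, not real PI figures.

```python
import statistics

# Hypothetical days from issue identification to meaningful closure,
# one entry per closed PI issue:
cycle_times = [45, 62, 88, 90, 95, 110, 123, 150, 170, 210]

median_days = statistics.median(cycle_times)  # compare to the ~90-day target
stdev_days = statistics.stdev(cycle_times)    # how wide is the spread?

print(f"Median closure time: {median_days} days")
print(f"Standard deviation: {stdev_days:.1f} days")
```

A quality project aimed at the PI system itself would work to pull both numbers down: the median toward the benchmark, and the standard deviation so that closure times become predictable.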
These are some of the key questions when it comes to measuring the efficiency of the PI system.
Effectiveness

This dimension is, arguably, more important than efficiency. For example, imagine working really hard to decrease the amount of time it takes someone to throw something away. Yup, imagine working hard on improving how well someone throws away a piece of trash. Making a process efficient, but ultimately ineffective, probably isn’t worth your time. (I’m sure there’s some counterexample where waste-disposal efficiency is very important! I just use that example to show how far efficiency can be removed from effectiveness.)
When it comes to measuring the effectiveness of your PI system, where would you start? Being busy is one thing, but being busy about the right things is likely more important.
One important consideration is issue identification. How does your PI system learn about its issues? Does it just tackle each item that comes up from difficult cases? How do staff highlight issues to PI staff? Is that easy to do? Does your system gather data and decide which issues are a “big enough deal” to move ahead? Does it use a FMEA and COPQ to look at factors that help prioritize issues?
These are some of the most important issue identification factors for your PI system, but by no means are the only ones related to effectiveness.
Once the right issues are acquired in the right way at the right time, where do they go from there? Are all the stakeholders involved in a process to make improvement? Does the system use data and follow those data to decide what really needs to happen, or does it only use its “gut”? Is the PI system politicized, so that data aren’t used, aren’t important, aren’t regarded, or just aren’t made?
In an effective system, the staff at the “tip of the sword” (the end of the process that touches patients) and even those who never see a patient but whose efforts impact them (that’s every staff member, right?) are armed with data they can understand that describe performance. Even better, the staff receive data they’ve asked for, because the PI/QI process tailor-made what data the staff receive. (More on that a little later.)
Once issues are identified, and the PI system performs, what happens with the output? This is another key question regarding effectiveness that can let you know a lot about the health system. There’s an element of user design (#UX) in good PI systems. Do the data get to the staff who need to know? Do the staff understand what the data mean? Are the data in a format that allow the data to impact performance? Are the data endpoints (at least some of them) something unique and particular that the staff asked about way-back-when?
Lean Six Sigma is 80% people and 20% math.
You may have heard that old saying. In fact, it’s been said about several quality programs. (I’ve discussed previously that, yes, the system is 80% people but getting the 20% math correct is essential–otherwise the boat won’t float!) It is on this point about effectiveness that I’d like to take a second with you before we go:
One of the major items with quality improvement is the ability to use trusted data to impact what we do for patients for the better.
That’s the whole point right? If the data don’t represent what we do, are the wrong data at the wrong time, or are beautiful but no one can understand them, well, the PI process is not effective.
This, to my mind, is the key question to gauge PI / QI success:
Do we see data impact our behavior on important topics in a timely fashion?
If we do, we have checked many of the boxes regarding efficiency and effectiveness, because, for that to happen, we must have identified key issues, experienced a process that somehow turns those issues into meaningful data, received those data in a format the organization understands, and done it all in a timely fashion that actually changes what we do. That is efficient and effective.
http://bit.ly/2iigxwl This episode explores To Err Is Human, & the idea that healthcare is a decade behind other industries in some important areas.
By: David Kashmer (@DavidKashmer)
Did you know? Our field lags behind many others in terms of attention to basic safety. For those of you who focus on healthcare quality & safety, that’s probably old news. After all, the Institute of Medicine said exactly that in its To Err Is Human report…from 1999 (!)
Here’s a portion of a recent post I wrote up for TheHill.com which describes exactly that & includes a link to that report:
Healthcare is at least a decade behind other high-risk industries in its attention to basic safety.
In 1999, the IOM published “To Err Is Human,” which codified what many quality experts in healthcare already knew: in terms of quality improvement, healthcare is at least a decade behind.
More recently, a widely criticized paper from Johns Hopkins cited medical errors as the third leading cause of death in the United States. Even if you don’t agree that medical errors are the third leading cause, the fact that medical errors even make the list at all is obviously very concerning.
First published in TheHill.com
Click here for entire article: http://thehill.com/blogs/pundits-blog/healthcare/311570-3-facts-about-us-healthcare-that-wont-change-with-the
What you may NOT know is that our field lags when it comes to the adoption of other emerging trends. For example, here’s a graphic from earlier this year:
Now, all of that said, I spend a lot of time wondering exactly why we lag in certain key areas. Here’s what I’ve come up with, and I’m interested in any thoughts or feedback you might have.
(1) Using the word “lag” supposes that the direction everyone else is going is some sort of goal to be achieved or a type of race
It seems to me that the graphic above frames things as a progression toward a goal of digitization, ranking industries by their progress toward that endpoint. Let’s take some time to consider whether framing the situation that way really makes sense.
Perhaps no one likes technology more than I do. I tend to be an early adopter (and sometimes an innovator) with new devices and software that help me get done what I want to do, both personally and for patients. Yes, I use a Fitbit. (Not so special nowadays, really.) And I use services like Exist.io to look for meaningful correlations across things I do, such as how much sleep I get versus how I perform. This system takes me no time (it all happens under the hood) and sometimes even gives me non-intuitive correlations, which are perhaps the most useful. Here’s an example of what I mean, but this one is weak and I wouldn’t do anything differently based on it:
The bottom line is, I think, that every time I see a Big Data article or learn how websites figure out things about my health that I don’t even know, I conclude we are pretty much all-in on this progression toward digitization…at least I am!
So, on this one, I believe that (yes) there is a meaningful progression toward digitization across industries, and I feel it’s more useful for healthcare to get on board than to lament where things are going or to question whether digitization is meaningful for healthcare. I feel especially good about it when I remember the days of my training, when I had to hunt for X-rays on film; now I have the X-ray or CT scan on my computer instantly!
(2) In part, we are slower to adopt because we deal with people’s health.
We don’t build cars or fly planes, really. Although certain lessons learned from other industries are very important, many in healthcare believe our service is different. Some are even skeptical of whether we should adopt tools that worked well across other industries. We work with people’s health, after all. In the United States especially, that’s a very big deal and many regard it as a true calling. So, being the careful people we are (I often wonder just how risk-averse we are) it seems to make sense to me that our field may be slower than others to adopt new things. It’s very conservative and maybe even highly adaptive to be that way.
When it comes to certain aspects of our work, like patient safety and quality, I should add that there are well-worn tools that apply to all services–even services like ours called healthcare. We should adopt these, and unfortunately we are still behind. I’ll add that adopting these tools helps us as providers even as it helps our patients. (If you’re interested in specifics, take a look at Volume to Value.)
So, bottom line here: part of why healthcare may be slower to adopt emerging trends is because we feel very strongly that only the best, well-worn, known tools should be applied to people’s health.
(3) Sometimes we are slower to adopt because much of the push to adopt has come from outside
About three months ago, I’d just finished speaking at a quality improvement conference in Philadelphia. This one had over a thousand participants from diverse companies. It really ran the gamut from Ford to Crayola to large hospitals to DuPont, and each participant was focused on quantitative quality improvement. After my talk, there were lots of questions. One really struck me in particular:
“How can you improve healthcare quality when you still get paid even when things are bad? I mean, when I make a car if there’s a quality problem and it comes back, I eat that cost…”
This audience member really hit it on the head. Isn’t it difficult to advance topics like quality (where healthcare is a decade behind) if you’re still reimbursed even when there’s a quality issue? What he’d hit on is the tension between a pure fee-for-service model versus value-based reimbursement.
I was able to tell him that healthcare is transitioning, right now, away from being paid even when there’s a quality issue toward a model where reimbursement is much more focused on value provided to patients. I also shared with him that things aren’t easy, because we all have to agree on what exactly value and quality mean in healthcare, but that we are getting there. We talked about how, as a result, I think buy-in from everyone in healthcare for quality initiatives (especially more rigorous, quantitative ones) will increase over the next 10-15 years. Sure enough, I think we can see this is already happening:
Our conversation reinforced for me that much of the quality push, and digitization push, has come from outside of healthcare. When the adoption of electronic health records and other forms of digitization are incentivized via meaningful use initiatives, and the HHS department explains that more and more of reimbursement will be tied to value-based metrics, it’s clear that a significant portion of the push to adopt emerging trends has come from outside what may be considered the typical traditional healthcare sphere.
Items that were typically hailed as improvements in healthcare, over the last hundred years, included game-changers like general anesthesia, penicillin, or the ability to safely traverse the one to two inches between the heart and the outside world with cardiac surgery. (Prior to the development of cardiac surgery, some famous surgeons had previously predicted that route would forever be closed!)
Now, especially to physicians, it can be harder to see the value in moving in these directions. Many in healthcare feel they are pushed toward them. Yes, every physician wants the best outcome for the patient, yet seeing quality as the systematic reduction of variation along with improvement in the central tendency of a population is not always, well, intuitive. Given the backdrop of the very specific, individualized physician-patient relationship, it can be challenging to understand the value of a quality initiative that sometimes seems to play to eliminating a defect which the patient in front of the doctor seems to be at low (or even no) risk for.
I’m not saying whether any of this is good or bad; I’m only sharing what is. It seems to me that the fact that the push has so often come from outside healthcare explains some of why the field is slower to adopt these changes.
Having worked in healthcare for more than a decade in many venues, from cleaning rooms in the Emergency Department to working in the OR as a surgeon, I can share that, yes, we in healthcare are behind other industries in adopting key trends. However, I believe this is much more understandable given the nature of our work, which directly (and individually) affects the quality and quantity of human life, as well as the fact that (for better or worse) much of the impetus to adopt these trends has come from the outside. I consider it my responsibility, and all of ours as providers, to be on the lookout for ways to adopt the well-worn tools that already exist to improve quality and digitization in our field. Let’s make our call to action one where we get on board with these trends, at least for those aspects that we reasonably expect may improve our care.
http://ift.tt/2hNwo7L Have you ever noticed that great quality improvement projects repeat themselves across organizations? Is it because organizations share techniques? Is it that we have so much room to improve in terms of healthcare quality…