And now, this: a recent JAMA online first that talks all about how we don’t have standards when it comes to healthcare quality reporting. Oh boy don’t we!
Once upon a time, I worked for an organization that claimed it had no catheter associated urinary tract infections or central line infections in the ICU for more than two years!
Was this organization incredibly adept at quality improvement initiatives? No, not really. Had it closed its ICU to patients? (That may be the only way to truly prevent those nosocomial infections.) Nope, sure hadn’t.
Several issues were at play, including a stubborn refusal to diagnose those infections even when they were obviously present and contributing to patient morbidity and mortality. Can we blame them? I’m not sure…there are plenty of pressures to avoid “never” diagnoses from CMS.
I’m not saying that makes it ok to ignore these diagnoses, but it does make it more understandable. Hospitals didn’t create these incentive games, after all.
Although hospitals and physicians are perceived as trusted entities, these organizations have an incentive to present themselves in a positive light. This conflict of interest should be less pronounced when outside entities, such as the Centers for Medicare & Medicaid Services (CMS) or the Leapfrog Group, report to the public about health care quality. Evidence suggests that some organizations may be providing potentially misleading information to the public. For instance, one hospital stated on its website, "Come to us, we have no infections," without stating which types of infections were included, how this performance outcome was measured, or how long the hospital had gone without an infection. Even though there has not been a systematic study of the accuracy of the quality data reported by hospitals and physicians on their own websites, concerns are likely to increase with the number and types of measures now being reported (eg, patient experience, costs), some of which may be more meaningful to patients. The potential for misinformation is understandable given the absence of standards to guide the reporting efforts of hospitals and physicians.
Of the many barriers we face while trying to improve quality in healthcare, none is perhaps more problematic than the lack of good data. Although everyone seems to love data (I see so much written about healthcare data) it is often very tough to get. And when we do get it, much of the data we get are junk. It’s not easy to make meaningful improvements based on junk data. So, what can we do to get meaningful data for healthcare quality improvement?
In this entry, I’ll share some tools, tips, & techniques for getting meaningful quality improvement data from your healthcare system. I’ll share how to do that by telling a story about Super Bowl LI…
The Super Bowl Data Collection
About ten minutes before kickoff, I had a few questions about the Super Bowl. I was wondering if there was a simple way to gauge the performance of each team and make some meaningful statements about that performance.
When we do quality improvement projects, it’s very important to make sure it’s as easy as possible to collect data. I recommend collecting data directly from the process rather than retrospectively or from a data warehouse. Why? For one, I was taught that the more filters the data pass through the more they are cleaned up or otherwise altered. They tend to lose fidelity and a direct representation of the system. Whether you agree or not, my experience has definitely substantiated that teaching.
The issue with that is: how do I collect data directly from the system? Isn't that cumbersome? We don't have staff to collect data(!) Like you, I've heard each of those barriers before, and that's what makes the tricks and tools I'm about to share so useful.
So back to me then, sitting on my couch with a plate of wings and a Coke ready to watch the Super Bowl. I wanted data on something that I thought would be meaningful. Remember, this wasn’t a DMAIC project…it was just something to see if I could quickly describe the game in a meaningful way. It would require me to collect data easily and quickly…especially if those wings were going to get eaten.
Decide Whether You’ll Collect Discrete or Continuous Data
So as the first few wings disappeared, I decided what type of data I'd want to collect. I would definitely collect continuous data if at all possible. (Not discrete.) That part of the decision was easy. (Wonder why? Don't know the difference between continuous and discrete data? Look here.)
Ok, the next issue was these data had to be very easy for me to get. They needed to be something that I had a reasonable belief would correlate with something important. Hmmm…ok, scoring touchdowns. That’s the whole point of the game after all.
Get A Clear Operational Definition Of What You’ll Collect
As wings number three and four disappeared, and the players were introduced, I decided on my data collection plan:
collect how far away each offense was from scoring a touchdown when possession changed
each data point would come from where the ball was at the start of 4th down
interceptions, fumbles, and other changes of possession before 4th down would NOT be recorded (I'll get to why in a minute.)
touchdowns scored were recorded as "0 yards away"
a play where a field goal was attempted would be recorded as where the ball was at the start of the down
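These rules are simple enough to sketch in a few lines of code. Here's a minimal Python sketch; the function name and event labels are my own, purely for illustration:

```python
# A minimal sketch (hypothetical helper names) of the data collection rules above:
# each recorded data point is the offense's distance (in yards) from the
# opposing endzone when possession changed on a 4th-down play, or a touchdown.

def record_possession(event_type, yards_from_endzone):
    """Return the value to record, or None if the event is excluded.

    event_type: 'touchdown', 'punt', 'field_goal_attempt',
                'failed_4th_down', 'interception', or 'fumble'
    yards_from_endzone: ball position at the start of the down
    """
    if event_type == 'touchdown':
        return 0                   # touchdowns recorded as "0 yards away"
    if event_type in ('punt', 'field_goal_attempt', 'failed_4th_down'):
        return yards_from_endzone  # spot of the ball at the start of 4th down
    return None                    # turnovers before 4th down are NOT recorded

print(record_possession('touchdown', 3))      # 0
print(record_possession('punt', 52))          # 52
print(record_possession('interception', 30))  # None
```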
Of course, for formal quality projects, we would collect more than just one data point. Additionally, we'd specify the exact operational definition of each endpoint.
We'd also make a sample size calculation. Here, however, I intended to collect every touchdown and every change of possession where a team kicked away on fourth down or went for it but didn't make it. So this wasn't a sample of those moments; it was going to be all of them. Of course, those moments don't happen very often, and they can be anticipated. That was all very important if I was going to eat those wings.
Items like interceptions, fumbles, and other turnovers cannot be anticipated as easily. They would also have forced me to pay attention to where the ball was spotted at the beginning of every down. It was tough enough to track the spot of the ball for the downs I was going to record.
With those rules in mind, I set out to record the field position whenever possession changed. I thought that the field position where each offense ended its possessions might, over time, correlate with who won the game. Less overall variance in final position might mean that team had fewer moments where it under-performed and lost possession nearer to its own endzone.
Of course, it could also mean that the team never reached the endzone for a touchdown. In fact, if the offense played the whole game between their opponent's 45 and 50 yard lines, it would have little variation in field position…but also probably wouldn't score much. Maybe a combination of better median field position and low variation in field position would indicate who won the game. I thought it might. Let's see if I was right.
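To make the idea concrete, here's a quick Python sketch of the "median plus variation" comparison. The distances below are made up for illustration, not the actual Super Bowl LI data:

```python
import statistics

# Hypothetical end-of-possession distances (yards from the opposing endzone).
# Lower median = offense ends possessions closer to scoring; lower spread =
# fewer possessions that died deep in its own territory.
patriots = [52, 48, 40, 35, 0, 30, 25, 0]
falcons = [75, 10, 0, 60, 55, 0, 80, 45]

for name, data in [('Patriots', patriots), ('Falcons', falcons)]:
    print(name,
          'median:', statistics.median(data),
          'stdev:', round(statistics.stdev(data), 1))
```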
Data Collection: Nuts and Bolts
Next, I quickly drew a continuous data collection sheet. It looked like this:
Sounds fancy, but obviously it isn’t. That’s an important tool for you when you go to collect continuous data right from your process: the continuous data collection sheet can be very simple and very easy to use.
Really, that was about it. I went through the game watching, like you, the Patriots fall way behind for the first two quarters. After some Lady Gaga halftime show (with drones!) I looked at the data and noticed something interesting.
The Patriots' data on distance from the endzone seemed to show less variance than the Falcons'. (I'll show you the actual data collection sheets in a moment.) It was odd. Yes, they were VERY far behind. Yes, there had been two costly turnovers that led to the Falcons opening up a huge lead. But, strangely, in terms of moving the ball and getting closer to the endzone on their own offensive possessions, the Patriots were actually doing better than the Falcons. Three people around me pronounced the Patriots dead, and one even said we should change the channel.
If you’ve read this blog before, you know that one of the key beliefs it describes is that data is most useful when it can change our minds. These data, at least, made me unsure if the game was over.
As you know (no spoiler alert really by now) the game was far from over and the Patriots executed one of the most impressive comebacks (if not the most impressive) in Super Bowl history. Data collected and wings eaten without difficulty! Check and check.
Here are the final data collection sheets:
Notice the number in parenthesis next to the distance from the endzone when possession changed? That number is the possession number the team had. So, 52(7) means the Falcons were 52 yards away from the Patriots endzone when they punted the ball on their seventh possession of the game. An entry like 0(10) would mean that the team scored a touchdown (0 yards from opposing team’s endzone) on their tenth possession.
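If you ever wanted to get entries like these off a paper check sheet and into a computer, a tiny parser would do it. This is a hypothetical helper, purely for illustration:

```python
import re

def parse_entry(entry):
    """Parse a check-sheet entry like '52(7)' into (yards_away, possession_number)."""
    m = re.fullmatch(r'(\d+)\((\d+)\)', entry)
    if not m:
        raise ValueError(f'bad entry: {entry!r}')
    return int(m.group(1)), int(m.group(2))

print(parse_entry('52(7)'))  # (52, 7)  -- punted 52 yards out, 7th possession
print(parse_entry('0(10)'))  # (0, 10)  -- touchdown on the 10th possession
```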
Notice that collecting data this way and stacking similar numbers on top of each other makes a histogram over time. That's what let me see that the variation in the Patriots' final field position was smaller than the Falcons' by about halfway through the game.
Anything To Learn From The Data Collection?
Recently, I put the data into Minitab to see what I could learn. Here are those same histograms for each offense’s performance:
Notice a few items. First, each set of data do NOT deviate from the normal distribution per the Anderson-Darling test. (More info on what that means here.) However, a word of caution: there are so few data points in each set that it can be difficult to tell which distribution they follow. I even performed distribution fitting to demonstrate that testing will likely show that these data do not deviate substantially from other distributions either. Again, it’s difficult to tell a difference because there just aren’t that many possessions for each team in a football game. In a Lean Six Sigma project, we would normally protect against this with a good sampling plan as part of our data collection plan but, hey, I had wings to eat! Here’s an example of checking the offense performance against other data distributions:
Just as with the initial Anderson-Darling test, we see here that the data do not deviate from many of these other distributions either. Bottom line: we can’t be sure which distribution it follows. Maybe the normal distribution, maybe not.
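For readers who want to try this themselves, SciPy exposes the Anderson-Darling test directly. The data below are illustrative, not the actual game numbers:

```python
from scipy.stats import anderson

# Illustrative end-of-possession distances (yards from the opposing endzone).
patriots = [52, 48, 40, 35, 0, 30, 25, 0]

result = anderson(patriots, dist='norm')
print('A-D statistic:', round(result.statistic, 3))
print('5% critical value:', result.critical_values[2])
# If the statistic is below the critical value, we can't reject normality.
# But with this few data points, many other distributions will pass too,
# which is exactly the caution raised above.
```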
In any event, we are left with some important questions. Notice the variance exhibited by the Patriots offense versus the Falcons offense: this highlights that the Patriots in general were able to move the ball closer to the Falcons endzone by the time the possession changed (remember that turnovers aren’t included). Does that decreased variation correlate with the outcome of every football game? Can it be used to predict outcomes of games? I don’t know…at least not yet. After all, if stopping a team inside their own 10 yard line once or twice was a major factor in predicting who won a game, well, that would be very useful! If data is collected by the league on field position, we could apply this idea to previous games (maybe at half time) and see if it predicts the winner routinely. If it did, we could apply it to future games.
In the case of Super Bowl LI, the Patriots offense demonstrated a better median field position and less variation in overall field position compared to the Falcons.
Of course, remember this favorite quote:
All models are wrong, but some are useful. — George E.P. Box (of Box-Cox transform fame)
Final Recommendations (How To Eat Wings AND Collect Data)
More importantly, this entry highlights a few interesting tools for data collection for your healthcare quality project. At the end of the day, in order to continue all the things you have to do and collect good data for your project, here are my recommendations:
(1) get data right from the process, not a warehouse or after it has been cleaned.
(2) use continuous data!
(3) remember the continuous data check sheet can be very simple to set up and use
(4) when you create a data collection plan, remember the sample size calculation & operational definition!
(5) reward those who collect data…maybe with wings!
As healthcare adopts more and more of the Lean Six Sigma techniques, certain projects begin to repeat across organizations. It makes sense. After all, we live in the healthcare system and, once we have the tools, some projects are just so, well, obvious!
About two years ago, I wrote about a project I’d done that included decreasing the amount of time required to prepare OR instruments. See that here. And, not-surprisingly, by the time I had written about the project, I had seen this done at several centers with amazing results.
Recently, I was glad to see the project repeat itself. This time, Virginia Mason had performed the project and had obtained its routine, impressive result.
This entry is to compliment the Virginia Mason team on their completion of the OR quality improvement project they describe here. I’m sure the project wasn’t easy, and compliment the well-known organization on drastically decreasing waste while improving both quality & patient safety.
Like many others, I believe healthcare quality improvement is in its infancy. We, as a field, are years behind other industries in terms of sophistication regarding quality improvement–and that’s for many different reasons, not all of which we directly control.
In that sort of climate, it's good to see certain projects repeating across institutions. This particular surgical instrument project is a great one, as the Virginia Mason & Vanderbilt experiences indicate, and it highlights the dissemination of quality tools throughout the industry.
Have you ever worked at a hospital that wanted to improve its ED throughput? I bet you have, because almost all do! Here's a story of how advanced quality tools led a team to find at least one element that added 20 minutes to almost every ED stay…
Once upon a time…
At one hospital where I worked, a problem with admission delays in the emergency department led us far astray when we tried to solve it intuitively. In fact, we made the situation worse. Patients were spending too much time in the emergency room after the decision to admit them was made. There was a lot of consternation about why it took so long and why we were routinely running over the hospital guidelines for admission. We had a lot of case-by-case discussion, trying to pinpoint where the bottleneck was. Finally, we decided to stop discussing and start gathering data.
Follow a patient through the value stream…
We did a prospective study and had one of the residents walk through the system. The observer watched each step in the system after the team mapped out exactly what the system was. What we discovered was that a twenty-minute computer delay was built into the process for almost every patient that came through the ED.
The doctor would get into the computer system and admit the patient, but the software took twenty minutes to tell the patient-transport staff that it was time to wheel the patient upstairs. That was a completely unexpected answer. We had been sitting around in meetings trying to figure out why the admission process took too long. We were saying things like, “This particular doctor didn’t make a decision in a timely fashion.” Sometimes that was actually true, but not always. It took using statistical tools and a walk through the process to understand at least one hidden fact that cost almost every patient 20 minutes of waiting time. It’s amazing how much improvement you can see when you let the data (not just your gut) guide process improvement.
The issue is not personal
We went to the information-technology (IT) people and showed them the data. We asked what we could do to help them fix the problem. By taking this approach, instead of blaming them for creating the problem, we turned them into stakeholders. They were able to fix the software issue, and we were able to shave twenty minutes off most patients’ times in the ER. Looking back, we should probably have involved the IT department from the start!
Significant decrease in median wait time and variance of wait times
Fascinatingly, not only did the median time until admission decrease, but the variation in times decreased too. (We made several changes to the system, all based on the stakeholders’ suggestions.) In the end, we had a much higher quality system on our hands…all thanks to DMAIC and the data…
David Kashmer (@DavidKashmer, LinkedIn profile here.)
Once upon a time, a healthcare quality improvement team celebrated: it had solved a huge problem for its organization. After months of difficult work, the team had improved the hospital’s Length of Stay incredibly. But, three months later, the Length of Stay slid back to exactly where it was before the team spent an entire year of work on the project. What happened!?
One Of The Most Important Steps In A Project
The final part of a quality improvement project is setting it up so you get feedback from the system on a regular basis. If you don’t do that last part correctly, you don’t know that things have gone haywire until a problem jumps out at you. All quality improvement projects need a control phase that lets the system signal you somehow to tell you when things aren’t going right anymore. All the work you did on your quality improvement project isn’t really over until you answer the final question, “How do we sustain improvement?”
The answer is using the right tools in the control phase. In healthcare, patients come through the system one at a time, but to get the big picture, Lean Six Sigma often uses control charts after the quality improvements have been implemented. All a control chart can tell you is that a system is functioning at its routine level of performance over time. It can't tell you whether that routine level of performance is acceptable or not. If you look only at the control chart (especially if you do that too early), everything may look like it's going fine when in fact the performance is totally unacceptable. This is why control charts shouldn't be applied until the end of a quality project: the chart can tell us the system is performing routinely and lull us to sleep, even when that routine is no good.
How To Choose The Right Chart
Control charts vary, depending on what you're measuring and how your data are distributed. Your Lean Six Sigma black belt is the right person to help you decide which type of chart to use and to understand what it's telling you. You would use a different control chart for averages over time than you would for proportions over time, for example.
Specifically, in healthcare, we often use a control chart that tracks individuals as they come through the system. It's called an individual moving range (ImR) chart. (There's some advice on how to choose a control chart here.) It plots patients, people, or events as they come through the system one at a time.
The range is an important part of the ImR chart. Range is a measure of variance between data points. In other words, range shows you how wide the swings are in your data. If you see an unusual amount of variance between data points, the question becomes “Why such a wide swing? What is it telling us?”
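Here's a minimal sketch of how ImR limits are computed, in plain Python. The factors 2.66 and 3.267 are the standard ImR chart constants (derived from the d2 and D4 bias factors); the admission times are made up for illustration:

```python
def imr_limits(values):
    """Compute individuals and moving-range control limits for an ImR chart."""
    # The moving range is the absolute difference between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        # (lower control limit, centerline, upper control limit)
        'individuals': (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
        'moving_range': (0.0, mr_bar, 3.267 * mr_bar),
    }

# e.g. minutes-to-admission for a series of ED patients (illustrative numbers)
times = [42, 55, 38, 61, 47, 52, 44, 58]
limits = imr_limits(times)
print(limits['individuals'])
print(limits['moving_range'])
```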
Applying the Rules
If you don’t build a control chart into the ongoing phase of your quality improvement project, and look at it on a regular basis, you won’t pick up the signals that say, “This case is beyond the upper control limit. Something must have gotten out of whack with this case. We have to look into it.” The power of the control chart is it will tell you when things are going off the rails.
To understand what's going on with your control charts, Lean Six Sigma applies what are known as the Shewhart Rules, which are rooted in the Western Electric Rules originally devised at the Western Electric Company. The rules tell you what to look for in the control charts to see if a problem is on the way or is already there. Often, obvious signs tell you about a problem. A data point might be above or below the limits set in the chart. In healthcare, we mostly look for variants above the limit, because that often indicates something took too long or didn't go smoothly. If something is more than three standard deviations beyond what's expected, there's less than a 1 percent chance it happened at random. You need to look into it.
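The simplest of these rules, a point beyond three standard deviations of the centerline, is easy to sketch in code. The function name and numbers below are illustrative:

```python
# Sketch of the first (and simplest) Western Electric / Shewhart rule:
# flag any point beyond three standard deviations of the centerline.

def rule_one_violations(values, center, sigma):
    """Return the indices of points outside the three-sigma control limits."""
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    return [i for i, v in enumerate(values) if v > ucl or v < lcl]

# Minutes-to-admission again (illustrative); one case took far too long.
times = [42, 55, 38, 61, 47, 180, 44, 58]
print(rule_one_violations(times, center=50, sigma=10))  # [5]
```

The full rule set also flags subtler signals, like runs of points on one side of the centerline, before any single point breaches a limit.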
Check The Control Chart On A Regular Schedule
Control charts need to be checked on a regular schedule, but they also need to be reviewed if anything external changes the system. The chief of the department might leave as part of a personnel shuffle. That means new people who may not understand the system well come in. The control chart should then be checked more often to see whether the personnel changes are affecting quality. Remember to make it clear, before the project's end, exactly who will look in on the chart, when they will do it, and whom they should call when there's an issue. It's important that this be someone who lives with the new process as it will be after the changes.
A lot can change quickly in just a month or two. The control phase provides feedback from the system when something has gone wrong, or something needs maintenance, or the weeds need trimming.
The bottom line: plan to maintain the gains you’ve made with your important quality improvement project by designing in a control phase from the beginning!
Excerpt above was originally published as part of Volume To Value: Proven Methods For Achieving High Quality In Healthcare.
Want to read more about advanced quality tools and their uses in healthcare? Click here.
I was recently part of a team that was trying to decide how well residents in our hospital were supervised. The issue is important, because residency programs are required to have excellent oversight to maintain their certification. Senior physicians are supposed to supervise the residents as the residents care for patients. There are also supposed to be regular meetings with the residents and meaningful oversight during patient care. We had to be able to show accrediting agencies that supervision was happening effectively. Everyone on the team, myself included, felt we really did well with residents in terms of supervision. We would answer their questions, we’d help them out with patients in the middle of the night, we’d do everything we could to guide them in providing safe, excellent patient care. At least we thought we did . . . .
We’d have meetings and say, “The resident was supervised because we did this with them and we had that conversation about a patient.” None of this was captured anywhere; it was all subjective feelings on the part of the senior medical staff. The residents, however, were telling us that they felt supervision could have been better in the overnight shifts and also in some other specific situations. Still, we (especially the senior staff doing the supervising) would tell ourselves in the meetings, “We’re doing a good job. We know we’re supervising them well.”
We weren’t exactly lying to ourselves. We were supervising the residents pretty well. We just couldn’t demonstrate it in the ways that mattered, and we were concerned about any perceived lack in the overnight supervision. We were having plenty of medical decision-making conversations with the residents and helping them in all the ways we were supposed to, but we didn’t have a critical way to evaluate our efforts in terms of demonstrating how we were doing or having something tangible to improve.
When I say stop lying to ourselves, I mean that we tend to self-delude into thinking that things are OK, even when they’re not. How would we ever know? What changes our ability to think about our performance? Data. When good data tell us, objectively and without question, that something has to change–well, at least we are more likely to agree. Having good data prevents all of us from thinking we’re above average . . . a common misconception.
To improve our resident supervision, we first had to agree it needed improvement. To reach that point, we had to collect data prospectively and review it. But before we even thought about data collection, we had to deal with the unspoken issue of protection. We had to make sure all the attending physicians knew they were protected against being blamed, scapegoated, or even fired if the data turned out to show problems. We had to reassure everyone that we weren’t looking for someone to blame. We were looking for ways to make a good system better. There are ways to collect data that are anonymous. The way we chose did not include which attending or resident was involved at each data point. That protection was key (and is very important in quality improvement projects in healthcare) to allowing the project to move ahead.
I’ve found that it helps to bring the group to the understanding that, because we are so good, data collection on the process will show us that we’re just fine—maybe even that we are exceptionally good. Usually, once the data are in, that’s not the case. On the rare occasion when the system really is awesome, I help the group to go out of its way to celebrate and to focus on what can be replicated in other areas to get that same level of success.
When we collected the data on resident supervision, we asked ourselves the Five Whys. Why do we think we may not be supervising residents well? Why? What tells us that? The documentation’s not very good. Why is the documentation not very good? We can’t tell if it doesn’t reflect what we’re doing or if we don’t have some way to get what we’re doing on the chart. Why don’t we have some way to get it on the chart? Well, because . . . .
If you ask yourself the question “why” five times, chances are you’ll get to the root cause of why things are the way they are. It’s a tough series of questions. It requires self-examination. You have to be very honest and direct with yourself and your colleagues. You also have to know some of the different ways that things can be—you have to apply your experience and get ideas from others to see what is not going on in your system. Some sacred cows may lose their lives in the process. Other times you run up against something missing from a system (absence) rather than presence of something like a sacred cow. What protections are not there? As the saying goes, if your eyes haven’t seen it, your mind can’t know it.
As we asked ourselves the Five Whys, we asked why we felt we were doing a good job but an outsider wouldn’t be able to tell. We decided that the only way an outsider could ever know that we were supervising well was to make sure supervision was thoroughly documented in the patient charts.
The next step was to collect data on our documentation to see how good it was. We decided to rate it on a scale of one to five. One was terrible: no sign of any documentation of decision-making or senior physician support in the chart. Five was great: we can really see that what we said was happening, happened.
We focused on why the decision-making process wasn’t getting documented in the charts. There were lots of reasons: Because it’s midnight. Because we’re not near a computer. Because we were called away to another patient. Because the computers were down. Because the decision was complicated and it was difficult to record it accurately.
We developed a system for scoring the charts that I felt was pretty objective. The data were gathered prospectively; names were scrubbed, because we didn’t care which surgeon it was and we didn’t want to bias the scoring. To validate the scoring, we used a Gage Reproducibility and Reliability test, which (among other things) helps determine how much variability in the measurement system is caused by differences between operators. We chose thirty charts at random and had three doctors check them and give them a grade with the new system. Each doctor was blinded to the chart they rated (as much as you could be) and rated each chart three times. We found that most charts were graded at 2 or 2.5.
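As a rough illustration of the idea behind a Gage R&R study (not the full ANOVA-based method), here's a sketch that separates within-rater spread (repeatability) from between-rater spread (reproducibility), using hypothetical ratings:

```python
import statistics

# Hypothetical ratings: three doctors each score the same chart three times
# on the 1-5 scale. Repeatability: how much a single rater varies across
# repeats. Reproducibility: how much the raters disagree with each other.
ratings = {
    'doctor_a': [2.0, 2.5, 2.0],
    'doctor_b': [2.5, 2.5, 2.0],
    'doctor_c': [2.0, 2.0, 2.5],
}

repeatability = statistics.mean(
    statistics.pstdev(scores) for scores in ratings.values()
)
rater_means = [statistics.mean(scores) for scores in ratings.values()]
reproducibility = statistics.pstdev(rater_means)

print('repeatability:', round(repeatability, 3))
print('reproducibility:', round(reproducibility, 3))
```

If both numbers are small relative to the 1-to-5 scale, the scoring system itself isn't drowning out the signal you're trying to measure.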
Once we were satisfied that the scoring system was valid, we applied it prospectively and scored a sample of charts according to the sample size calculation we had performed. Reading a chart to see if it documented supervision correctly took only a moment. We found, again, our score was about 2.5. That was a little dismaying, because it showed we weren't doing as well as we thought, although we weren't doing terribly, either.
Then we came up with interventions that we thought would improve the score. We made poka-yoke changes—changes that made it easier to do the right thing without having to think about it. In this case, the poka-yoke answer was to make it easier to document resident oversight and demonstrate compliance with Physicians At Teaching Hospitals (PATH) rules; the changes made it harder to avoid documenting actions. By making success easier, we saw the scores rise to 5 and stay there. We added standard language and made it easy to access in the electronic medical record. We educated the staff. We demonstrated how, and why, it was easier to do the right thing and use the tool instead of skipping the documentation and getting all the work that resulted when the documentation was not present.
The project succeeded extremely well because we stopped lying to ourselves. We used data and the Five Whys to see that what we told ourselves didn’t align with what was happening. We didn’t start with the assumption that we were lying to ourselves. We thought we were doing a good job. We talked about what a good job looked like, how we’d know if we were doing a good job, and so on, but what really helped us put data on the questions was using a fishbone diagram. We used the diagram to find the six different factors of special cause variation…
Want to read more about how the team used the tools of statistical process control to vastly improve resident oversight? Read more about it in the Amazon best-seller: Volume To Value here.
Catheter-associated urinary-tract infections in hospitalized patients are considered "never events"—they should never happen. When they do, the hospital is penalized by Medicare and third-party payers. The issue can really burn a hospital. Naturally, hospitals are very interested in ways to avoid UTIs.

One hospital I worked at had tried several solutions, and some turned out to be bad choices. They tried taking catheters out of patients before those patients had a chance to develop an infection. That sounds like a good idea because, in general, removing a catheter as early as possible is a good thing, but it's not good if it's removed too early. That's an important distinction that didn't get made, and catheters were being removed too early for many patients. In critically ill patients, for instance, the catheter may be needed to follow the patient's urine output carefully. Many ICU patients could not be monitored appropriately once their catheters were removed too early.

The hospital also tried out perhaps the worst possible solution, which was just not sending samples for urinalysis so they wouldn't have to make the diagnosis. Obviously, that's something we don't want for patients. If a patient gets an infection, we want to know about it and treat it. At this hospital, when patients did get a urinary tract infection, it was recognized much later.

So what can be done? What does a good solution to a healthcare system problem look like?
HERE’S HOW BAD (AND GOOD) SOLUTIONS LOOK
In its attempt to solve a problem, the hospital chose bad solutions that, in some cases, actually made patients sicker. Bad solutions often have a certain look about them: they’re solutions that are difficult to implement, are expensive, are otherwise prohibitive, take multiple steps to get done, don’t work or just generally make things worse.
What do good solutions look like? Above all, a good solution is implementable. A good system makes it easy to do the right thing and hard to make a mistake. A good system is error-proof because the playing field is tilted toward making it easier to do the right thing. In designing the system, the questions are always “What’s easy for the physician or healthcare provider?” and “What’s the right thing for the patient?” and “What’s doable?”
ONE POTENTIAL “RIGHT SOLUTION”
If a patient comes to the hospital with an existing UTI, the hospital generally isn't held responsible for it as a hospital-acquired infection, and therefore isn't penalized. (Of course, the hospital is still responsible for diagnosing and treating the patient properly.) The key, then, is to test patients at admission, especially high-risk ones, to find out whether they already have a catheter-associated UTI, or have come in with a UTI even if no catheter is present on arrival. The test is quick, inexpensive, and easy. Making it a routine part of admissions across the hospital, however, isn't always easy.

At one hospital where I worked, the center had to decide what changes to make to its system to ensure that every patient, not just the obvious high-risk ones, was automatically tested for a UTI at admission. The solution was fairly obvious: allow nurses to obtain the test via a standing order from physicians that included criteria for who should receive the test and how the results would be handled. The urinalysis became part of a comprehensive workup (jokingly nicknamed the "OSH" workup, for "outside hospital") for patients who come from other hospitals, nursing homes, rehab centers, or even retirement communities. These facilities are treated like outside hospitals because their patients are similar to transfers arriving from other institutions.

This urinalysis doesn't hurt the patient at all, it's very inexpensive, and there's little to no downside risk. This small, simple change turns out to be a big help for the patient and the organization. The comprehensive approach catches not only UTIs but also other problems, such as deep venous thrombosis.
Deep venous thrombosis is another condition for which the hospital can be penalized if the patient develops it during a stay, so it's better to know whether they're coming in with it, both to prevent a penalty and to start treatment right away. Deep venous thrombosis can kill a patient. Part of the OSH workup at the hospital where I worked included a test for deep venous thrombosis.
A good solution is one that is easy to implement, straightforward, and turns out to bolster other quality and safety efforts. The best solution makes it easier to do the right thing. In the case of catheter-associated UTIs and deep venous thromboses, the hospital set up standing physician orders that empowered ER nurses to order the tests.
With the DMAIC process (define, measure, analyze, improve, control), you'll often end up with several candidate solutions. How do you filter the candidate changes through your guiding principles to come up with the best solution? In the case of catheter-associated UTIs, you'd want to find a way early in the process to identify patients who arrive with one. You'd want to define what you're measuring. It's very important to align the measurement with the intervention, and vice versa. Are you looking at the percentage of patients who have a urinary-tract infection? Are you looking at reducing the number of hospitalized patients who have one, measured monthly? The endpoint measurement really matters here, because when you implement the program, you may well see an increased rate of urinary-tract infection in hospitalized patients. That's because now you're looking for them, so you're finding them. But on the other hand, with your new program in place, the rate of hospital-acquired catheter-associated urinary-tract infections should be lower.
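The distinction between the two endpoints above can be made concrete in a few lines of code. This is only a sketch under stated assumptions: the record fields (`had_uti_on_admission`, `developed_cauti_in_house`) are hypothetical, standing in for whatever your measurement system actually captures.

```python
# Hypothetical sketch: the overall UTI rate may rise after admission
# screening starts (you find infections you used to miss), while the
# hospital-acquired CAUTI rate should fall. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Admission:
    had_uti_on_admission: bool      # caught by the admission urinalysis
    developed_cauti_in_house: bool  # hospital-acquired, catheter-associated

def monthly_rates(admissions):
    """Return (overall UTI rate, hospital-acquired CAUTI rate) per 100 admissions."""
    n = len(admissions)
    if n == 0:
        return (0.0, 0.0)
    any_uti = sum(a.had_uti_on_admission or a.developed_cauti_in_house
                  for a in admissions)
    acquired = sum(a.developed_cauti_in_house for a in admissions)
    return (100.0 * any_uti / n, 100.0 * acquired / n)

admissions = [
    Admission(True, False), Admission(False, False),
    Admission(False, True), Admission(False, False),
]
print(monthly_rates(admissions))  # (50.0, 25.0)
```

The point of tracking both numbers side by side is exactly the one in the text: a program can look "worse" on the first endpoint while genuinely improving the second.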
That leads to a further measurable endpoint: savings from avoiding penalties, part of the cost of poor quality. Part of your UTI-rate project may include a SIPOC diagram. Many patients come to the emergency room with catheter-associated infections they acquired in their nursing home, or with a pre-existing urinary-tract infection or colonization even when no catheter is present. So you can look at nursing homes as suppliers who send you patients. One way to reduce the number of patients coming in with UTIs would be to do outreach to the nursing homes to help them manage catheters better and be more aware of the symptoms of an infection. Or you could do outreach only to the nursing homes that send you the most patients with infections. You could make sure that attending physicians who round at nursing homes are sensitized to the problem. But you also have to be aware of the scope of your project and realize that you can't always influence the people who send you patients. Solutions that work are realistic and within the criteria the team selects.
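The "outreach only to the nursing homes that send you the most patients with infections" idea is a classic Pareto move, and it can be sketched with a simple tally. The facility names below are made up for illustration.

```python
# Hypothetical sketch: rank referring facilities (the SIPOC "suppliers")
# by how many patients arrived with a pre-existing UTI, so outreach can
# target the biggest contributors first. Facility names are invented.
from collections import Counter

arrivals_with_uti = [
    "Oak Grove Nursing Home", "Riverside Rehab", "Oak Grove Nursing Home",
    "Sunset Manor", "Oak Grove Nursing Home", "Riverside Rehab",
]

def outreach_targets(facilities, top_n=2):
    """Return the top_n facilities sending the most patients with UTIs."""
    return Counter(facilities).most_common(top_n)

print(outreach_targets(arrivals_with_uti))
# [('Oak Grove Nursing Home', 3), ('Riverside Rehab', 2)]
```

In practice the input would come from the admission check sheet rather than a hard-coded list, but the ranking logic is the same.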
Most solutions to quality problems in medicine end up creating more paperwork. I rarely see solutions that involve less paperwork. In my experience, at least 80 percent of the solutions that come out of healthcare improvement projects involve more paperwork: another form to fill out, another item on the chart, another checklist.
Now, let me be clear: I do like checklists. They're useful and have a place in quality improvement. But they're only one part of a vast arsenal of what you can do to improve a system. Although checklists are a buzzword and hot topic now, a checklist isn't always the best, most implementable, or most effective solution; it often just creates more paperwork. Checklists can be a good starting point, but they're rarely the most effective option in the set of all possible solutions. (They are, however, infinitely better than nothing!) Physicians today often spend about half of their working day on paperwork, and a checklist that only adds to the load often isn't really helping. For residents, the paperwork is even worse. A lot of it just gets dumped on them, and they end up doing mindless clerical work that doesn't necessarily improve quality. How much of an impact does this have? We don't know, because we don't rigorously measure that sort of work. We often don't know whether we're doing better or worse for having added twenty minutes of paperwork. I advise us all to look to a wider array of interventions than just checklists.
ERROR-PROOFING: THE POKA-YOKE APPROACH
When a system is error-proofed, it's a lot easier for everyone to do the right thing every time and a lot harder to make a mistake. This is the Japanese design philosophy of poka-yoke (pronounced "poke-a-yoke"), also known as error-proofing, mistake-proofing, or sometimes (rarely) idiot-proofing. The idea is to set up a system that's as immune to human error as possible. Many mistakes are inadvertent; poka-yoke helps avoid them.

In manufacturing, where the idea was first developed, poka-yoke is used to prevent mistakes before or while they're being made; the goal is to eliminate defects at the source. For example, on an assembly line, a poka-yoke solution to putting a part in backward might be to redesign it so that it only fits in the proper position, or to color it on one side so that you can see immediately whether it's placed correctly. If a part requires the worker to install five screws, provide the screws in packages of five so that forgetting one or using the wrong screw becomes almost impossible.

In healthcare, where we're dealing with humans in fluid situations that require experience and judgment, poka-yoke changes aren't generally as straightforward as retooling a part. For example, although we commonly use kits that contain everything needed for a procedure such as inserting a central line, often the procedure doesn't require everything in the kit, leaving plenty of room for human error. In medicine, we have to make it easier to do the right thing even when the right thing is complex and the people who need to do it are very busy and have a lot of distractions. Under these circumstances, poka-yoke solutions almost always mean making the wrong action harder, whether mechanically, physically, mentally, structurally, or by creating more paperwork. This sounds counterproductive, more like punishment than help, but in fact, by making it harder to deviate from a process or protocol, the system makes it harder to mess up.
Great healthcare poka-yoke solutions are ones that eliminate or reduce the ability to make a mistake and eliminate some piece of paperwork! Some poka-yoke solutions are very simple, such as pop-up messages on a computer screen or making a form easier to fill in correctly (and quickly) by highlighting where the information needs to go. A good example of a simple poka-yoke solution for hand cleanliness is putting hand-sanitizer dispensers outside every doorway. If you have to look around for a dispenser, you might skip sanitizing; if a dispenser is right in front of you everywhere you turn, you’ll probably use it.
Curious to read more about examples of solutions that work in healthcare quality improvement initiatives? Read more in Volume To Value here.
Dr. Kashmer receives no reimbursement from Microsoft for reviewing their product or for anything else for that matter (!)
It’s rare that a new piece of technology falls in my lap that makes me say wow. Maybe it’s the professional detachment from years of physician training…who knows! But, write it down: the Microsoft Hololens is amazing…and it’s useful right now.
Recently, as a Microsoft Developer, I received the Hololens I bought several months ago. I had fairly low expectations. I mean, yes, I’d read great things from CES and other events. But I mean, come on, we’ve all seen way over-hyped tech products that promise great things and do very little.
I’d been a Google Glass Explorer, and I loved the idea. The heads up display, the fact that the device took up very little real estate, and the ability to connect to useful data in a rapid way seemed to hold great potential for healthcare applications. Once upon a time, I was even part of a company that was developing a system for the device for healthcare applications. However, once I reviewed the device (see that review here) I began to realize that Glass held great potential, and could be more useful with time, but that it really wasn’t ready for primetime.
Now, fast forward a year or so, and my expectations were (maybe understandably) low. After all, I'd experienced Glass and the "Glass-hole" phenomenon (the term coined for how people came off while wearing Glass), and I was still a little jaded from the whole thing.
So, when I received the developer version of the Hololens, I figured much of the experience would be the same. I was wrong. So very, very wrong.
First, the developer version of Hololens that I received has smooth, incredible functionality. It does MUCH more than the comparatively bare bones developer version of Glass that I’d received previously. But that’s not all.
This thing is stunning. Its voice, hand-gesture, gaze, and click recognition are all excellent. Cortana (the Microsoft voice-activated assistant) is also very useful. Battery life is good. And, of course, there's the holographic interface.
I mean, jeez, I would’ve bought it just for that. A three dimensional anatomic model, a virtual trip to Rome, and a Holo Studio for creating your own 3D (and 3D printable) models were easy to install from the Microsoft store via Wifi.
The form factor? Well, this device isn't super cool or incredibly sleek. Luckily, with its amazing creation of a three-dimensional interface environment, I didn't (and still don't) care. After all, a lot of the accessories we wear in healthcare don't look cool.
What did I do with the device first? Well, after setting it up, I did what any good user would do and immediately tested this new, incredible piece of technology by opening a panel with Netflix and streaming a Game of Thrones episode followed by an episode of Stranger Things. I laughed at myself for how silly it was to use such awesome technology as a fancy Netflix streaming device…but, hey, it could easily handle it and the whole situation was (although funny) truly awesome. (Not long after, it was on to Family Guy.)
So what now? Now, it's easy to take this incredible device into the different fluid, fast-paced venues of the hospital. It's a simple matter to use the device as eye protection in the trauma bay or the OR. It's straightforward to set up some holographic projections over the patient's bed and display their real-time info from the electronic medical record. It's no big deal to set up a panel with their CT scan displayed while I teach or perform a procedure. The photo above highlights just a bit of how easy it is to show website information in the Hololens environment.
In conclusion, it’s rare that I’m amazed by a tech product–especially in these days of fast-paced innovation. However, when it comes to this one, I have one thing to say:
Thank you, Microsoft, for building Hololens. This thing is amazing and will allow us in healthcare to do a lot of good. Thank you so much.
A colleague asked: “How will you know if this gamification project is successful?” and it got me thinking…how do you track success in a re-designed environment that uses some techniques typically seen in gaming? How do you track, measure, and improve employee engagement or, better yet, that elusive endpoint called culture?
Asking For Endpoints
In case you haven't seen the case made for "why bother with gamification," look here. If you believe the sobering statistic that 70% of US workers are actively trying to hurt their company, then you know that something must be done. Even if the issue is simply emotional detachment from work (rather than active sabotage), the missed opportunities alone make it worth trying some tools to improve the situation. A recent Deloitte report (2015) highlights that issues of employee engagement and culture have become "the number one challenge in the world [of business]".
In the quest to do something about it on the healthcare side I’ve helped deploy gamified systems before. (More on the experience here.) Recently, while setting off on a deployment, a colleague who was in favor of the new system asked: “But how will we know if it works?”
What’s Been Done Before
The endpoints you use to tell whether the game system has accomplished your organization's goals should be specific to your company's needs. Do you need staff to complete the yearly compliance work? There's an endpoint. Do staff need to treat each other differently? That may be tough to measure…
After all, these things center around that notorious intangible called “culture”. And how, in short, do you measure that?
In a previous post, I discussed attempts to measure job satisfaction before and after game deployment using a standardized questionnaire that has been used and validated in healthcare: the Job Satisfaction Survey (JSS). (For more on that, look here or here.) Lots of famous business journals and authors write about how to measure culture. What my colleague had asked seemed to have no simple solution: do we measure something as simple as percent compliance with yearly training, something as complex as a global measure of culture, or both?
Nuts & Bolts Of Endpoints
Clearly we are out of the realm of easy or straightforward science and testing here! Consider, for example, a seemingly straightforward move like administering the JSS before gamification and again after the system wraps up. Let's pretend that, at the end of the deployment, there has been a statistically significant improvement in the scores on many of the JSS questions. Great…except, well, a lot of things in your organization likely changed over the time the game system was deployed. How do you know whether the system was really the driver of the improvement?
Now let's take the simpler endpoint: compliance with yearly corporate training, such as fire-safety training. How do we know that any post-deployment increase in fire-safety training compliance was due to the system and not just the passage of time, with more participants completing the training anyway?
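To make the trap concrete, here is a sketch of the naive pre/post comparison the paragraphs above warn about: a plain two-proportion z-test on training compliance before versus after the game launch. The numbers are invented, and note what the test does NOT do: it doesn't control for the passage of time, so a "significant" result can still just mean more people finished the training anyway.

```python
# Illustrative sketch with made-up numbers: a two-proportion z-test on
# compliance before vs. after deployment. Statistical significance here
# does NOT establish that the game system caused the change.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic comparing two proportions (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 60% compliant before the launch, 75% after: z comes out well above 1.96,
# nominally "significant," yet nothing here rules out a secular trend.
z = two_proportion_z(120, 200, 150, 200)
print(round(z, 2))
```

A stronger design would compare the trend before and after deployment (an interrupted time series) rather than two isolated snapshots.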
The bottom line here is that there may be no perfect endpoint for the game system. Even endpoints that seem straightforward, such as participants returning to a certain place at a certain time or reviewing certain materials, are just as prone to criticism as the endpoints of typical work across the sciences.
Another important consideration is timing. Consider an endpoint that your organization truly values, such as employee churn. Perhaps churn reached a steady state over the year before the new system was deployed. Then, after the system (which directly impacts the participants counted in the churn metric) went live, the rate of employees leaving the organization dropped sharply. That drop may have meaning in your individual system. So another important consideration in these metrics is timing: choose something on which you can reasonably expect an impact from the new game system, which is already measured and important to the organization, and which has achieved a steady state.
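The "steady state" requirement above can be checked with a rough statistical process control test before you credit the game system for any change. This is a hedged sketch: the churn figures are made up, and the stability check is a bare-bones individuals-chart rule (every point within mean ± 3 sigma, with sigma estimated from moving ranges), not a full SPC analysis.

```python
# Hypothetical sketch: verify the pre-deployment baseline was stable
# before attributing a post-deployment drop to the new system. Uses the
# standard individuals-chart sigma estimate (mean moving range / 1.128).
def is_stable(baseline):
    """True if every baseline point sits within rough 3-sigma control limits."""
    n = len(baseline)
    mean = sum(baseline) / n
    moving_ranges = [abs(baseline[i] - baseline[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # d2 constant for subgroups of size 2
    lower, upper = mean - 3 * sigma, mean + 3 * sigma
    return all(lower <= x <= upper for x in baseline)

monthly_churn = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]  # percent, invented
print(is_stable(monthly_churn))  # True
```

If the baseline fails this kind of check, a post-deployment change could simply be the continuation of existing instability rather than an effect of the system.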
Just as challenging as measuring culture in your organization is the measurement of endpoints to determine how successful your gamified system is. I recommend a combination of endpoints that compare post-game performance to important measures that have achieved stability over time prior to game system deployment and which you can reasonably expect a change related to the new system.
This is no easy task! Gamified systems, often designed to impact culture and organizational behavior, can be challenging to quantify owing to all the vagaries of measuring culture in general. Consider the Job Satisfaction Survey, in addition to more specific endpoints rooted in quantifiable behaviors, to get a sense of the performance of your gamified system prior to the update, revision, improvement, and release of any version 2.0 you have planned.
Questions, comments, or thoughts of endpoints for gamified systems in healthcare or in general? Wonder how to deploy a gamified system to promote engagement and certain actions in your organization? Email me or comment beneath.
Sometimes you can see the train coming but can't get out of the way fast enough. Whack! The train gets you despite your best efforts. Wouldn't it have been great to start getting out of the way earlier? In this entry, let's focus on how to identify, as early as possible, four types of bad metrics in healthcare so we can run away from that particular train as early as possible. After all, the sooner we flee from these bad actors, the more likely we are to avoid being run over by them.
Truth is, you’ve probably seen the train of bad metrics before. After all, you know that all sorts of things are getting measured in our field nowadays and, sometimes, certain endpoints don’t feel particularly helpful and (in fact) seem to make things a lot worse.
First, a disclaimer: this entry does not argue against metrics that the government mandates. There are some things we measure because we must, for reimbursement or other reasons, and this work doesn't push back against those. However, if you believe (like me and other quality professionals) that a focus on reducing defects eventually impacts all sorts of quality measures (even mandated ones), then this is the entry for you! Now, on with the show…
Let’s explore four broad categories of bad metrics and how to avoid them.
#1 Metrics for which you cannot collect accurate or complete data.
It can be very challenging to collect data in hospitals. Often, data collection is frowned upon, or treated as an afterthought or an imposition. So, as we launch in here, remember: saying that you can't collect complete or accurate data is not the same as actually being unable to.
Colleagues, listen: if you think you can’t afford the time to collect good data, let me tell you that you can’t afford not to collect and use data.
When I'm working with a team that's new to Lean or Six Sigma and we discuss data collection, the team often balks, focusing on the fact that no one is available to collect data, that we don't have data-collection resources, or that, even if we had resources, we couldn't get the data.
I usually start with a quote: “If you think it’s tough to get data, remember how tough it is to not get data.” (Split infinitive included for drama’s sake.)
Then we go on to explore together several techniques we can use to make gathering data much easier, so that we can avoid the "easy out" of "we can't collect data about this and so it's not a useful metric". In fact, most projects we do require data collection of 1-2 seconds per patient at most. And that's for prospective data collection. (Want more info about how to make data collection easy? Email me at email@example.com and I'll pass it along.)
However, in healthcare, we have all seen projects where data collection is arduous and so we react against data collection when we hear about it.
Sometimes, teams focus on using retrospective data. Of course, using retrospective data is much better than using no data. However, retrospective data has often been cleaned via editing or in some other way that makes it less valuable. Raw data that focuses on the specific operational definition of what you’re looking at tends to have the most value.
Sometimes a certain metric or concept has never been measured, yet the team believes it to be very valuable. Take, for instance, a team focused on scheduling patients for the operating room. The team felt that many patients were not adequately prepared before coming to the holding room, for all sorts of reasons, such as not having consent on the chart. The team decided to measure this prospectively, with a discrete-data check sheet, and found that only about one third of patients were completely prepared by the time they came to the pre-operative holding area.
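A check sheet like that team's boils down to very little arithmetic: a count of "fully prepared" patients, the observed proportion, and, if you want it, a rough confidence interval around that proportion. Here is a minimal sketch; the counts are illustrative, and the interval is a simple Wald approximation, not the only (or best) choice.

```python
# Illustrative sketch: summarize a discrete-data check sheet as a
# proportion with a rough 95% Wald confidence interval. Counts invented.
import math

def readiness_summary(ready_count, total):
    """Return (proportion ready, (ci_low, ci_high)) from check-sheet tallies."""
    p = ready_count / total
    half_width = 1.96 * math.sqrt(p * (1 - p) / total)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# e.g. 30 of 90 patients arrived in the holding area fully prepared
p, (lo, hi) = readiness_summary(30, 90)
print(f"{p:.2f} ready (95% CI {lo:.2f}-{hi:.2f})")
```

The check sheet itself is the hard part; once the tallies exist, turning them into a trackable metric is this simple.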
Let me explain: sometimes the fact that something hasn't been measured before means the organization simply hasn't had that concept on its radar. This goes back to the old statement that what gets measured gets managed, and its corollary: an endpoint that isn't measured is very hard to manage.
To wrap this one up: one category of bad metric is a metric for which you cannot collect data. However, it is important to realize that just because you haven't measured something before doesn't mean you absolutely cannot measure it. Sometimes, if the idea or concept is important enough, you should develop a measure for it. We discuss how to develop a new endpoint in the entry here. That said, if it truly is impossible or arduous to collect accurate or complete data, the metric is much less likely to have value…but don't just let yourself off the hook! If you think something is important to measure, know that there are ways to collect data that require only a few seconds per patient.
#2 Metrics that are complex and difficult to explain to others.
If a metric gives a result that people can't feel or conceptualize, it's just plain less valuable. Take, for example, a metric for OR readiness. In the month of April, the operating room received a very clear score on this metric. That score was "pumpkin".
"Pumpkin?!" Well, pumpkin doesn't mean much to us in terms of operating-room readiness. For that reason, you may want to measure your OR preparedness with a different metric than the pumpkin. Complex and difficult metrics that lack tangible meaning should be avoided. Choose something that tells a story or evokes an emotion. Once upon a time, a center created (and validated) a "Hair On Fire Index" to indicate the level of emergent problems and crazy situations the operating-room staff encountered in a day, as a measure of how stressed the OR staff was. Wonder how they did it? Look here.
#3 Metrics that complicate operations and create excessive overhead.
This type of metric is especially problematic. If a metric is difficult to measure and requires an incredible level of structure / workload to create it, it may not be useful.
Imagine, for example, a metric to predict sepsis that requires a twelve-part scoring system, multiple regression, and the computing power of IBM's Watson. That may not be a useful day-to-day metric for quality or outcomes. Metrics that complicate operations and create excessive difficulty should be avoided. When you see that type of metric coming, jump out of the way of the train.
#4 Metrics that cause employees to ‘make their numbers’.
This is similar to bad metric #2. When staff can't feel the metrics we describe, or see how they affect patient care, it can be very hard to mentally link what we do every day to our quality levels. That can lead to situations where employees act just to 'make their numbers'. That kind of focus makes metrics less useful.
It's important to have metrics that we perceive as having a tangible relationship to patients and their outcomes. We are so busy in healthcare that if staff can fudge a metric, complete a form just to say it's done, or in some other way 'make numbers', well, that's often what happens. (That effect may not be confined to healthcare, of course!) It can be very challenging to create a metric that clearly indicates what we have to do (and should be doing) rather than an abstract number we 'have to hit'.
Take Aways, Or How To Avoid Being Hit By The Train Of Bad Metrics
In conclusion, there are at least four types of bad metrics and very clear ways to avoid them. Take a moment to try to see these trains coming from as far away in the distance as possible so that you can quickly get off the tracks unscathed.
We need metrics that we can feel and that tell the story of our patient care. We need ones that, whether government mandated or not, relate to what we do every day. We need ones that are easily gathered and tell the story of our performance clearly, both to us as practitioners and to those who review us. Sometimes we are mandated to collect certain endpoints, yet over time I have found that when we do a good job with metrics that have meaning, we often have fewer defects and see better outcomes across all the metrics, whether a particular one is mandated or not.
As part of your next quality project and how you participate in the healthcare system, take a minute to focus on whether the metrics you’re using are useful and, if not, how you can make them better. Be the first to sound the alarm if you see the train of bad metrics on the track to derail meaningful improvement for our patients.