Great Healthcare Quality Projects Repeat Themselves

 

David Kashmer, MD MBA MBB (@DavidKashmer)

As healthcare adopts more and more Lean Six Sigma techniques, certain projects begin to repeat across organizations.  It makes sense.  After all, we live in the healthcare system, and, once we have the tools, some projects are just so, well, obvious!

About two years ago, I wrote about a project I'd done to decrease the amount of time required to prepare OR instruments.  See that here.  And, not surprisingly, by the time I wrote about the project, I had already seen it done at several centers with amazing results.

Recently, I was glad to see the project repeat itself.  This time, Virginia Mason had performed the project and obtained its routinely impressive results.

This entry is to compliment the Virginia Mason team on completing the OR quality improvement project they describe here.  I'm sure the project wasn't easy, and I compliment the well-known organization on drastically decreasing waste while improving both quality and patient safety.

Like many others, I believe healthcare quality improvement is in its infancy.  We, as a field, are years behind other industries in terms of quality improvement sophistication, and that's for many different reasons, not all of which we directly control.

In that sort of climate, it's good to see certain projects repeating across institutions.  This particular surgical instrument project is a great one, and, as the Virginia Mason and Vanderbilt experiences indicate, it highlights the dissemination of quality tools throughout the industry.

Nice work, Virginia Mason team!

How Well Do We Supervise Resident Surgeons?

By:  David Kashmer (@DavidKashmer)

 

I was recently part of a team that was trying to determine how well residents in our hospital were supervised. The issue is important because residency programs are required to have excellent oversight to maintain their accreditation. Senior physicians are supposed to supervise the residents as the residents care for patients. There are also supposed to be regular meetings with the residents and meaningful oversight during patient care. We had to be able to show accrediting agencies that supervision was happening effectively.

Everyone on the team, myself included, felt we really did well with residents in terms of supervision. We would answer their questions, we'd help them out with patients in the middle of the night, we'd do everything we could to guide them in providing safe, excellent patient care. At least we thought we did . . . .

 

We’d have meetings and say, “The resident was supervised because we did this with them and we had that conversation about a patient.” None of this was captured anywhere; it was all subjective feelings on the part of the senior medical staff. The residents, however, were telling us that they felt supervision could have been better in the overnight shifts and also in some other specific situations. Still, we (especially the senior staff doing the supervising) would tell ourselves in the meetings, “We’re doing a good job. We know we’re supervising them well.”

 

We weren’t exactly lying to ourselves. We were supervising the residents pretty well. We just couldn’t demonstrate it in the ways that mattered, and we were concerned about any perceived lack in the overnight supervision. We were having plenty of medical decision-making conversations with the residents and helping them in all the ways we were supposed to, but we didn’t have a critical way to evaluate our efforts in terms of demonstrating how we were doing or having something tangible to improve.

 

When I say stop lying to ourselves, I mean that we tend to delude ourselves into thinking that things are OK, even when they're not. How would we ever know? What changes our ability to think about our performance? Data. When good data tell us, objectively and without question, that something has to change, at least we are more likely to agree. Having good data prevents all of us from thinking we're above average . . . a common misconception.

 

To improve our resident supervision, we first had to agree it needed improvement. To reach that point, we had to collect data prospectively and review it. But before we even thought about data collection, we had to deal with the unspoken issue of protection. We had to make sure all the attending physicians knew they were protected against being blamed, scapegoated, or even fired if the data turned out to show problems. We had to reassure everyone that we weren't looking for someone to blame. We were looking for ways to make a good system better. There are ways to collect data anonymously; the method we chose did not record which attending or resident was involved at each data point. That protection was key (and is very important in quality improvement projects in healthcare) to allowing the project to move ahead.

 

I’ve found that it helps to bring the group to the understanding that, because we are so good, data collection on the process will show us that we’re just fine—maybe even that we are exceptionally good. Usually, once the data are in, that’s not the case. On the rare occasion when the system really is awesome, I help the group to go out of its way to celebrate and to focus on what can be replicated in other areas to get that same level of success.

 

When we collected the data on resident supervision, we asked ourselves the Five Whys. Why do we think we may not be supervising residents well? Why? What tells us that? The documentation’s not very good. Why is the documentation not very good? We can’t tell if it doesn’t reflect what we’re doing or if we don’t have some way to get what we’re doing on the chart. Why don’t we have some way to get it on the chart? Well, because . . . .

 

If you ask yourself the question "why" five times, chances are you'll get to the root cause of why things are the way they are. It's a tough series of questions. It requires self-examination. You have to be very honest and direct with yourself and your colleagues. You also have to know some of the different ways that things can be: you have to apply your experience and get ideas from others to see what is not going on in your system. Some sacred cows may lose their lives in the process. Other times you run up against something missing from the system (an absence) rather than the presence of something like a sacred cow. What protections are not there? As the saying goes, if your eyes haven't seen it, your mind can't know it.
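To make the chain concrete, here's a minimal sketch of the Five Whys as we walked them for this project. The first two answers come straight from the discussion above; the last three are hypothetical stand-ins, since the real chain trailed off into institution-specific detail.

```python
# A minimal sketch of the Five Whys chain above. The first two answers
# come from the discussion in this post; the last three are HYPOTHETICAL
# stand-ins, since the real chain trailed off ("Well, because . . . .").
five_whys = [
    ("Why do we think we may not be supervising residents well?",
     "The documentation's not very good."),
    ("Why is the documentation not very good?",
     "We can't tell if it doesn't reflect what we're doing, or if we "
     "have no easy way to get what we're doing onto the chart."),
    ("Why don't we have an easy way to get it onto the chart?",
     "There's no standard supervision language in the EMR."),   # hypothetical
    ("Why is there no standard language?",
     "No one owns the documentation process."),                 # hypothetical
    ("Why does no one own the process?",
     "Supervision was never treated as a measurable output."),  # hypothetical
]

for depth, (question, answer) in enumerate(five_whys, start=1):
    print(f"Why #{depth}: {question}")
    print(f"  -> {answer}")
```

The point isn't the code, of course; it's the discipline. Each answer becomes the subject of the next "why" until you hit something you can actually fix.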

 

As we asked ourselves the Five Whys, we asked why we felt we were doing a good job but an outsider wouldn’t be able to tell. We decided that the only way an outsider could ever know that we were supervising well was to make sure supervision was thoroughly documented in the patient charts.

 

The next step was to collect data on our documentation to see how good it was. We decided to rate it on a scale of one to five. One was terrible: no sign of any documentation of decision-making or senior physician support in the chart. Five was great: we could really see that what we said was happening, happened.

 

We focused on why the decision-making process wasn’t getting documented in the charts. There were lots of reasons: Because it’s midnight. Because we’re not near a computer. Because we were called away to another patient. Because the computers were down. Because the decision was complicated and it was difficult to record it accurately.

 

We developed a system for scoring the charts that I felt was pretty objective. The data were gathered prospectively; names were scrubbed, because we didn't care which surgeon it was and we didn't want to bias the scoring. To validate the scoring, we used a Gage Reproducibility and Reliability (Gage R&R) test, which (among other things) helps determine how much of the variability in the measurement system is caused by differences between operators. We chose thirty charts at random and had three doctors check them and grade them with the new system. Each doctor was blinded to the identities in the charts they rated (as much as possible) and rated each chart three times. We found that most charts were graded at 2 or 2.5.
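For readers who want to see the arithmetic behind that validation step, here's a minimal sketch of a crossed Gage R&R study in Python. The layout (30 charts, 3 raters, 3 repeat readings) mirrors our design, but the scores are simulated, and this is the textbook ANOVA approach rather than the exact software our team used.

```python
import numpy as np

# Minimal crossed Gage R&R sketch: 30 charts (the "parts") x 3 physician
# raters (the "operators") x 3 repeat readings, matching the study design
# described above. The scores are SIMULATED stand-ins for the real 1-5
# chart grades (and left continuous for simplicity).
rng = np.random.default_rng(42)
n_parts, n_ops, n_reps = 30, 3, 3

chart_quality = rng.normal(2.5, 0.6, n_parts)   # true chart-to-chart spread
rater_bias = rng.normal(0.0, 0.15, n_ops)       # reproducibility component
scores = (chart_quality[:, None, None]
          + rater_bias[None, :, None]
          + rng.normal(0.0, 0.2, (n_parts, n_ops, n_reps)))  # repeatability

grand = scores.mean()
part_means = scores.mean(axis=(1, 2))
op_means = scores.mean(axis=(0, 2))
cell_means = scores.mean(axis=2)

# Two-way crossed ANOVA sums of squares
ss_part = n_ops * n_reps * ((part_means - grand) ** 2).sum()
ss_op = n_parts * n_reps * ((op_means - grand) ** 2).sum()
ss_int = n_reps * ((cell_means - part_means[:, None]
                    - op_means[None, :] + grand) ** 2).sum()
ss_rep = ((scores - cell_means[:, :, None]) ** 2).sum()

ms_part = ss_part / (n_parts - 1)
ms_op = ss_op / (n_ops - 1)
ms_int = ss_int / ((n_parts - 1) * (n_ops - 1))
ms_rep = ss_rep / (n_parts * n_ops * (n_reps - 1))

# Variance components (negative estimates clipped to zero)
var_repeat = ms_rep
var_int = max((ms_int - ms_rep) / n_reps, 0.0)
var_op = max((ms_op - ms_int) / (n_parts * n_reps), 0.0)
var_part = max((ms_part - ms_int) / (n_ops * n_reps), 0.0)

var_grr = var_repeat + var_op + var_int
var_total = var_grr + var_part
pct_grr = 100 * (var_grr / var_total) ** 0.5
print(f"Gage R&R as % of total study variation: {pct_grr:.1f}%")
```

A common rule of thumb holds that a measurement system consuming under roughly 10% of study variation is acceptable, over 30% needs rework, and anything in between is marginal.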

 

Once we were satisfied that the scoring system was valid, we applied it prospectively and scored a sample of charts according to the sample size calculation we had performed. Reading a chart to see if it documented supervision correctly took only about a second. We found, again, that our score was about 2.5. That was a little dismaying, because it showed we weren't doing as well as we thought, although we weren't doing terribly, either.
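The post doesn't spell out which sample size calculation we used, but a common one for estimating a mean to within a chosen margin of error looks like the sketch below; the sigma and margin values are assumed for illustration.

```python
import math

# One common sample-size formula for estimating a mean to within a chosen
# margin of error E at a given confidence: n = (z * sigma / E)^2. The
# sigma and margin below are ASSUMED for illustration; the post doesn't
# say which calculation the team actually performed.
z = 1.96        # two-sided 95% confidence
sigma = 1.0     # assumed standard deviation of chart scores (1-5 scale)
margin = 0.25   # desired margin of error, in score units

n = math.ceil((z * sigma / margin) ** 2)
print(f"Charts to sample: {n}")   # 62 under these assumptions
```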

 

Then we came up with interventions that we thought would improve the score. We made poka-yoke changes: changes that made it easier to do the right thing without having to think about it. In this case, the poka-yoke answer was to make it easier to document resident oversight and demonstrate compliance with Physicians at Teaching Hospitals (PATH) rules; the changes also made it harder to skip documenting what we did. By making success easier, we saw the scores rise to 5 and stay there. We added standard language and made it easy to access in the electronic medical record. We educated the staff. We demonstrated how, and why, it was easier to do the right thing and use the tool than to skip the documentation and inherit all the rework that resulted when the documentation was missing.
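As an illustration of what "standard language" can look like, here's a hypothetical supervision attestation macro. The actual wording our team used isn't reproduced in this post, and any real template would need your own institution's PATH compliance review.

```python
# A HYPOTHETICAL example of the kind of standard supervision language that
# can live one click away in the EMR. The actual wording our team used
# isn't reproduced here, and any real template would need your own
# institution's PATH compliance review.
ATTESTATION_TEMPLATE = (
    "I, Dr. {attending}, personally discussed the care of this patient "
    "with Dr. {resident} on {date} at {time}. We reviewed the findings, "
    "assessment, and plan of care, and I agree with the plan as "
    "documented. Key decision points discussed: {decisions}."
)

note = ATTESTATION_TEMPLATE.format(
    attending="[attending name]", resident="[resident name]",
    date="[date]", time="[time]", decisions="[summary]",
)
print(note)
```

The poka-yoke part is access: when the template lives one click away in the EMR, completing it is easier than skipping it.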

 

The project succeeded extremely well because we stopped lying to ourselves. We used data and the Five Whys to see that what we told ourselves didn’t align with what was happening. We didn’t start with the assumption that we were lying to ourselves. We thought we were doing a good job. We talked about what a good job looked like, how we’d know if we were doing a good job, and so on, but what really helped us put data on the questions was using a fishbone diagram. We used the diagram to find the six different factors of special cause variation…
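To give a flavor of what that diagram can look like, here's a sketch assuming the six classic fishbone cause categories (the "6 Ms"); the binning of the barriers listed earlier in this post is illustrative, not our team's actual diagram.

```python
# A sketch of a fishbone, ASSUMING the six classic cause categories
# (the "6 Ms"); the post doesn't name the team's exact categories, and
# this binning of the barriers listed earlier is illustrative only.
fishbone = {
    "Manpower (people)": ["called away to another patient"],
    "Method": ["no standard way to record complicated decisions"],
    "Machine": ["computers were down", "not near a computer"],
    "Material": [],
    "Measurement": ["no agreed-upon score for documentation quality"],
    "Mother Nature (environment)": ["it's midnight (overnight shift)"],
}

for category, causes in fishbone.items():
    listed = ", ".join(causes) if causes else "(none identified)"
    print(f"{category}: {listed}")
```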

 

Want to read more about how the team used the tools of statistical process control to vastly improve resident oversight?  Read more about it in the Amazon best-seller:  Volume To Value here.

Cover of new book.

 

Coming Soon: We’re Going From Volume To Value

By:  DMKashmer MD MBA MBB FACS (@DavidKashmer)

 

Yup, healthcare is going through a major transition, and we all know it.  Whether or not you've followed along with the blog, you probably know that Health & Human Services is transitioning us to a focus on the value delivered to patients rather than the volume of services we deliver.  If you haven't heard exactly what's coming, look here.

So, in order to help prepare, I’m sharing tools and experiences with quality improvement that lead to improvements in value delivered to patients.  Take a look at Volume to Value, coming soon on Amazon.

Now, more than ever, a clear focus on well-known quality improvement tools is paramount for success.

 

Have You Seen The Microsoft Hololens?

By:  David Kashmer (@DavidKashmer)

Dr. Kashmer receives no reimbursement from Microsoft for reviewing their product or for anything else for that matter (!)

 

It's rare that a new piece of technology falls into my lap and makes me say wow.  Maybe it's the professional detachment from years of physician training…who knows!  But, write it down:  the Microsoft Hololens is amazing…and it's useful right now.

 

Recently, as a Microsoft Developer, I received the Hololens I bought several months ago.  I had fairly low expectations.  I mean, yes, I’d read great things from CES and other events.  But I mean, come on, we’ve all seen way over-hyped tech products that promise great things and do very little.

 

I'd been a Google Glass Explorer, and I loved the idea.  The heads-up display, the small amount of real estate the device took up, and the ability to connect to useful data rapidly all seemed to hold great potential for healthcare applications.  Once upon a time, I was even part of a company that was developing a healthcare system for the device.  However, once I reviewed the device (see that review here), I began to realize that Glass held great potential, and could become more useful with time, but that it really wasn't ready for prime time.

 

Now, fast forward a year or so, and my expectations were (maybe understandably) low.  After all, I'd experienced Glass and the "Glass-hole" phenomenon (a term coined for how people came off while wearing Glass), and I was still a little jaded from the whole thing.

 

So, when I received the developer version of the Hololens, I figured much of the experience would be the same.  I was wrong.  So very, very wrong.

 

First, the developer version of Hololens that I received has smooth, incredible functionality.  It does MUCH more than the comparatively bare-bones developer version of Glass that I'd received previously.  But that's not all.

 

This thing is stunning:  its voice, hand-gesture, gaze, and click recognition are all excellent.  Cortana (the Microsoft voice-activated assistant) is also very useful.  Battery life is good.  And, of course, there's the holographic interface.

 

I mean, jeez, I would've bought it just for that.  A three-dimensional anatomic model, a virtual trip to Rome, and a Holo Studio for creating your own 3D (and 3D-printable) models were easy to install from the Microsoft store via Wi-Fi.

 

The form factor?  Well, this device isn't super cool or incredibly sleek.  Luckily, with its amazing creation of a three-dimensional interface environment, I didn't (and still don't) care.  After all, a lot of the accessories we wear in healthcare don't look cool.

 

What did I do with the device first?  Well, after setting it up, I did what any good user would do and immediately tested this new, incredible piece of technology by opening a panel with Netflix and streaming a Game of Thrones episode followed by an episode of Stranger Things.  I laughed at myself for how silly it was to use such awesome technology as a fancy Netflix streaming device…but, hey, it could easily handle it and the whole situation was (although funny) truly awesome.  (Not long after, it was on to Family Guy.)

 

So what now?  Now, it's easy to take this incredible device into the different fluid, fast-paced venues of the hospital.  It's a simple matter to use the device as eye protection in the trauma bay or the OR.  It's straightforward to set up holographic projections over the patient's bed and to display real-time info from the electronic medical record.  It's no big deal to set up a panel with the patient's CT scan displayed while I teach or perform a procedure.  The photo above highlights just a bit of how easy it is to show website information in the Hololens environment.

 

In conclusion, it's rare that I'm amazed by a tech product, especially in these days of fast-paced innovation.  However, when it comes to this one, I have one thing to say:

 

Thank you, Microsoft, for building Hololens.  This thing is amazing and will allow us in healthcare to do a lot of good.  Thank you so much.

 

…and that’s coming from an Apple guy!

 

Did You Know Lean & Six Sigma Studies In Healthcare Are On The Rise?

By:  @DavidKashmer

Whew!  Finally!  I've been waiting for some years now for our healthcare system to start to widely adopt standard, well-known quality improvement tools.  Many of the new quality articles I read about the healthcare system seemed to invent some new way to look at quality.  You'll find multiple blog entries here in which I implore our healthcare colleagues to use well-known quality tools instead of re-inventing the wheel.  Here's one now.

 

Well, thanks to our colleagues at Minitab, we have some evidence that, in fact, the use of Lean & Six Sigma techniques is catching on in healthcare.  Look here:

Number of Lean & Six Sigma studies in healthcare per year, trend plot from the Minitab blog (http://blog.minitab.com/blog/statistics-and-quality-data-analysis/qi-trends-in-healthcare:-what-are-the-statistical-soft-spots)

 

What a great visual!  Now we can see how the number of studies per year is increasing, and we can see that 2015 showed quite a jump in the number of Lean & Six Sigma studies.  Time will tell whether the rate of increase in studies per year has significantly changed.

 

At the end of the day, we see evidence that Lean & Six Sigma techniques are catching on in healthcare.  That's no surprise, as the transition from volume to value pushes healthcare to focus on proven techniques for making measurable, sustainable improvements.  Healthcare colleagues:  here is the call to action to learn and use the standard techniques of Lean & Six Sigma.

 

Here's How To Show Patient Risk