How To Know When You’re Tampering With A Healthcare System

By:  David Kashmer MD MBA MBB (@DavidKashmer)

Lean Six Sigma Master Black Belt

 

Quick, what’s worse:  to work in a healthcare system that pretends everything is ok when things obviously are not ok, or to work in a healthcare system that looks to correct problems that may not even exist?  Both are difficult scenarios, and each makes meaningful quality improvement difficult for different reasons.  In this entry, let’s explore some tools to help recognize when a problem actually exists and how to guard against each of the extremes mentioned above.  Where is the balance between tampering with a system and under-controlling for problems?  Would you know whether you were in either situation?

 

If you’ve seen a process improvement system that seems to be based on someone’s gut feeling, one that is based on who complains the loudest, or one that is based more in politics than in actually improving things, read on…

 

It Starts With Data

 

One of the toughest elements of deciding whether an issue is real and needs correction is getting good data.  Unfortunately, many times we stumble at this initial step in guarding against tampering with good systems or under-recognizing bad ones.  We’ll talk more about this in the last section (A Tangible Example) but, for now, realize that there are barriers to being able to make an intelligent decision about whether the issue you’re looking to improve is real or imagined.

 

Using data from your system, as we’ll describe later, is much better than using just your gut feeling about whether you’re tampering with a system.

 

 

You Need Some Knowledge

 

It often takes some extra training and knowledge to understand the issue of tampering versus under-controlling.  You may get that training in a statistics course or Lean Six Sigma coursework–that’s where I got it.  Wherever you get the training, it may go something like this:

 

Statistical testing helps us guard against important errors, like thinking there is a difference in system performance after an intervention when there is none.  It also helps when we think there is NO difference in system performance after we’ve made changes, yet something has changed.  These errors have names.

 

A type 1 error is the probability of thinking a difference exists when there isn’t one (tampering), and a type 2 error is the probability of thinking there is no difference when in fact one exists (under-controlling).  Type 1 errors, also known as alpha errors, are prevented by statistical testing and by YOU making an important choice.  More on that below.
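To make the type 1 error concrete, here is a minimal simulation sketch (my own illustration, not from any coursework mentioned above).  It repeatedly compares a “before” and “after” sample drawn from the *same*, unchanged process; every time the comparison looks “significant,” we have committed a type 1 error — i.e., tampered:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def standardized_difference(a, b):
    """Crude standardized difference between two sample means."""
    pooled_sd = statistics.pstdev(a + b)
    se = pooled_sd * (1 / len(a) + 1 / len(b)) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se

# Compare "before" vs. "after" samples from a process that has NOT
# changed.  Every "significant" result is a type 1 error (tampering).
trials, false_alarms = 2000, 0
for _ in range(trials):
    before = [random.gauss(300, 60) for _ in range(30)]  # ED minutes
    after = [random.gauss(300, 60) for _ in range(30)]   # same process!
    if standardized_difference(before, after) > 1.96:    # ~5% two-sided cutoff
        false_alarms += 1

print(f"false-alarm (type 1) rate: {false_alarms / trials:.3f}")
```

With a roughly 5% two-sided cutoff, about 1 in 20 comparisons of an unchanged process will still look like a real difference — and that is exactly the risk your alpha choice controls.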

 

Use Some Tools

 

The tools to help you avoid tampering with a system that is ok include statistical testing…but which test?  Previously on the blog, I shared the most useful file that I use routinely to decide which statistical test to use for which data.  You can see it here.
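The linked chart is the authoritative version; purely as an illustration, the decision logic behind such test-selection charts can be sketched as a tiny lookup.  The categories and test names here are my rough paraphrase of common practice, not the contents of the author’s file:

```python
def suggest_test(data_type, groups, normal=None):
    """Very rough test-selection sketch (a paraphrase, not the author's chart).

    data_type: 'continuous' or 'discrete'
    groups:    number of groups being compared
    normal:    for continuous data, whether it passed a normality check
    """
    if data_type == "discrete":
        # Counts / proportions (e.g., how many patients over 6 hours)
        return "chi-squared test of proportions"
    if normal:
        # Normally distributed continuous data: compare means
        return "t test" if groups <= 2 else "one-way ANOVA"
    # Non-normal continuous data (like many ED times): compare medians
    return "Mood's median test" if groups >= 2 else "1-sample sign test"

print(suggest_test("continuous", 2, normal=False))  # Mood's median test
```

The key habit the chart enforces is the same one this sketch hard-codes: identify your data type and distribution *before* reaching for a test.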

 

A great tool for understanding the problem of tampering with systems is here.  This video (thanks Villanova) drove home for me the issues that occur (and how we make things much worse) when we tamper.  If you’ve never heard of the quincunx (cool word), check out the video!

 

Tampering often happens when a well-meaning healthcare system that wants to do better, to grow or achieve greatness, embarks on making changes ad infinitum without meaningful improvement.  Its heart is in the right place, for sure, and this is a very difficult problem to avoid in healthcare systems that really care.  Still, I’d say it’s a much better problem to have than the under-controlling (head-in-sand ostrich syndrome) that is sometimes seen in systems.

 

A Tangible Example

 

Pretend your process improvement system is very focused on an issue with time.  Time for what?  Let’s say it’s the time patients stay in the emergency department.  You and the team decided to get some real data, and it wasn’t easy.  There were all sorts of problems with deciding what to measure, and how, but (finally) you and the team decided on a clear operational definition:  time in the ED would be measured from the time the patient is first logged in triage until the time the patient physically arrived at their disposition, whether that was the OR, a floor bed, the ICU, or something similar.  No using the time the transfer orders went in for you!  You go by actual arrival time.  The team likes the definition and you proceed.

 

You measured the data for patients for a month because, after all, the meetings with the higher-ups happened once a month or so, they wanted data, and the monthly meeting was the routine.  So, ok, sure…you measured the data for a month and had some cases where patients were in the ED for more than six hours, and those felt pretty bad.  You discussed the cases at the meeting-of-the-month, and some eminent physicians and well-regarded staff weighed in.  You made changes based on what people thought and felt about those bad cases.

 

A few more months went by and finally the changes were achieved.  You still collected the data and reported it.  There were, you think, fewer patients in the ED over six hours but you aren’t too sure.  I mean, after all, some patients were still in the ED too long.  Eventually, other more important issues seemed to come up, and the monthly meetings didn’t focus too much on the patient time in ED.  Did the changes work?  Should you make more changes because, after all, you think things may be a little better?  Hmmm…

 

Welcome, my friend, to the problem:  how do you know whether things are better or worse?  Should you do more?  Do less?  Wait and see?  Is the time patients spend in the ED really better?  Yeeesh, it’s a lot of questions.  Let’s go back to the beginning and see how to tell…

 

Back In Time…

 

You and a group are looking at how long patients are in the ED.  After getting together, you decide to measure time in ED in a certain way:  the clock starts when the patient arrives at triage and ends when the patient arrives at his/her destination.

 

Because you’ve learned about DMAIC and data collection plans, you decide to calculate how big a sample you will eventually need to detect whether you have improved time in the ED for patients by at least 10 minutes overall.  This sample size guides you:  if you see only a 5 minute decrease in your time in ED after you make changes to the system, well, it may just be fantasy.  The sample size math calculates the size of sample that will allow you to detect ten minutes as the smallest meaningful change and no smaller.  How do you do that calculation?  Look here.
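For readers curious about the arithmetic behind that linked calculation, here is one common normal-approximation sketch.  The 30-minute standard deviation is an invented illustration value, and the 10% alpha / 80% power choices are assumptions made to match the narrative, not numbers from the article:

```python
import math

# Standard normal quantiles (z-values) for the assumed choices:
Z_ALPHA_10_TWO_SIDED = 1.6449   # 10% total alpha, 5% in each tail
Z_POWER_80 = 0.8416             # 80% power

def sample_size(sigma, delta, z_alpha=Z_ALPHA_10_TWO_SIDED, z_power=Z_POWER_80):
    """Per-group n needed to detect a mean shift of `delta` minutes
    when the process standard deviation is `sigma`, using the usual
    normal-approximation formula n = ((z_alpha + z_power) * sigma / delta)^2."""
    return math.ceil(((z_alpha + z_power) * sigma / delta) ** 2)

# Assumed SD of ED time ~30 min; smallest meaningful change 10 min.
print(sample_size(sigma=30, delta=10))  # 56 patients per group
```

Under those assumptions you would plan on roughly 56 patients measured after the change (and a comparable baseline sample) before judging the 10-minute question — which is why the data collection may well outlast a monthly meeting cycle.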

 

Ok, so now you know how many patients you will need to look at after you make changes.  By the way, how long will that take?  It may not coincide with your monthly meetings.  It may take longer than a month to get enough data.  That would tell you not to make any more changes to the system until you’ve given any changes you already made enough time to show how they work.  This is part of how sample size protects us from too many changes too fast.  We need to collect enough data to tell whether any changes we’ve already made work!

 

So let’s say you and the project team make some changes and now know the sample size you need to collect to see how you’re doing.  Well, after those data are collected, now what?  Now it’s time to pick a test!

 

You can use the tools listed here, but be careful!  The type of tool you pick (and what it means) may take some extra training.  Not sure which tool to use?  Email me or use the comment section below and I’ll help.

 

So you pick the proper tool and compare your data pre and post changes, and you notice that your new median patient time in ED has decreased by 25 minutes!  The test statistic you receive, when you run a statistical test regarding the central tendency of your data (maybe a nice Mood’s median test), shows a p value < 0.05.  Does that mean you’re doing better regarding the median time in the ED?  Yup…especially because you chose ahead of time that you would accept (at most) a 10% chance of a type 1 error (tampering).
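As an illustration of what that comparison might look like, here is a stdlib-only sketch of Mood’s median test on invented pre/post ED times.  (SciPy users would normally reach for scipy.stats.median_test, which also applies a continuity correction this simple sketch omits.)

```python
import math
import statistics

def moods_median_test(a, b):
    """Mood's median test for two groups (illustrative sketch).

    Counts how many values in each group fall above the grand median,
    then runs a 2x2 chi-squared test with 1 degree of freedom."""
    grand = statistics.median(a + b)
    above = [sum(x > grand for x in g) for g in (a, b)]
    below = [len(g) - c for g, c in zip((a, b), above)]
    n = len(a) + len(b)
    chi2 = 0.0
    for row in (above, below):
        row_total = sum(row)
        for j, obs in enumerate(row):
            col_total = above[j] + below[j]
            expected = row_total * col_total / n
            chi2 += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-squared survival fn, 1 df
    return chi2, p

# Invented ED times in minutes, purely for illustration:
pre = [310, 355, 402, 298, 367, 410, 389, 342, 377, 360,
       395, 305, 372, 401, 340, 388, 366, 359, 412, 370]
post = [280, 310, 295, 330, 301, 275, 322, 290, 315, 305,
        298, 288, 335, 292, 308, 284, 319, 300, 296, 312]
chi2, p = moods_median_test(pre, post)
print(f"chi2 = {chi2:.1f}, p = {p:.6f}")
```

In this made-up data nearly all the pre-change times sit above the grand median and nearly all the post-change times sit below it, so the p value lands far under 0.05 and the improvement looks real rather than like tampering.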

 

That’s what the p value and alpha risk (tampering risk) are all about:  before checking the data, you acknowledge there is some risk of thinking there’s a difference when there is no difference, and you declare that you will accept, let’s say, a 10% risk of making that wrong conclusion.  You could be wrong in one of two ways:  thinking the new median is significantly higher than the old one, or significantly lower.  So, to be fair, you split the 10% risk between the two tails (too high and too low) of that distribution.  And voilà!  A per-tail cutoff of 5% is created:  if the p from your test is < 0.05 (5%), you say the difference you see is likely legitimate, because the probability it is due to chance alone in this tail is less than 5%, and even counting the whole other tail you stay within the 10% alpha risk you set at the beginning.
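The tail-splitting arithmetic in that paragraph can be sketched directly with Python’s statistics.NormalDist, assuming a normal sampling distribution for the test statistic:

```python
from statistics import NormalDist

alpha = 0.10               # total tampering risk you accept up front
per_tail = alpha / 2       # split between "too high" and "too low"

z = NormalDist()           # standard normal sampling distribution
lower_cut = z.inv_cdf(per_tail)       # about -1.645
upper_cut = z.inv_cdf(1 - per_tail)   # about +1.645

# A result counts as "significant" only if it lands beyond one of
# these cutoffs, i.e., in one of the two 5% tails.
print(f"cutoffs: {lower_cut:.3f}, {upper_cut:.3f}")
```

Anything beyond about ±1.645 standard errors lands in one of the two 5% tails, which keeps the total tampering risk at the 10% you chose at the start of the project.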

 

So the bottom line is you set the alpha risk (tampering risk / thinking there’s a difference when there isn’t one) waaayyyy back when as you set up what you’re doing with patient time in the ED.

 

Take Home

 

If you slogged through that, you may start to see the headlines on using statistics from the Six Sigma process to prevent tampering:

 

(1) calculate the sample size required to sort out the minimum amount of data you need to be able to detect a certain size change in the system.  The calculation will tell you, given your choice about the smallest change you want to be able to detect, how many data points you need.  It will make you wait and get that data after you make changes to a system rather than constantly adjusting it or tampering with the system owing to the pressure of having the latest meeting come up, etc. etc.

 

(2) perform statistical testing to determine whether you’re really seeing something different or whether it’s just a fantasy.  This will help disabuse you of your gut telling you you’re doing better when you aren’t (and thus missing opportunities), and keep you from charging at windmills with lance drawn, continuing to make changes when things have actually improved.

 

Call To Action

 

Now that you’ve heard about some strategies to guard against layering change after change on a poor, unsuspecting process, imagine how you can use tools to avoid making a lot of extra work that (as the quincunx teaches) can actually hurt your system performance.  Want more info on how to get it done?  Get in touch!

 

Have you seen any examples of tampering with a system or failing to appreciate issues?  Let me know about your experiences with or thoughts on tampering and under-controlling below!

 

 

 

 

 

Healthcare Quality Podcast: 4 Types of Bad Metrics In Healthcare

By:  David Kashmer MD MBA FACS (@DavidKashmer) & Vivienne Neale (@SupposeIAm)

LinkedIn Profile here.

 

Listen to the podcast here.

 

DDD, bringing you the metrics behind the data. Here is your host, Vivienne Neale.

Hi, and welcome to DDD, which is Data Driven Decision Radio, episode six. My name is Vivienne Neale and I am delighted to be back with you. For those who have asked, my background is in education, training, broadcasting and social media. Of course, I am also a sometime patient curious to know what decisions are being taken in my name that might just affect me and of course thousands of people just like me. So, once again I am joined by David Kashmer, Chair of Surgery at Signature Healthcare. David is an expert in statistical process control, including Lean and Six Sigma. He has a special interest in new tools to improve healthcare, like gamification. David also edits and writes for a blog called SurgicalBusinessModelInnovation.com. So, hi David and welcome back.

 

Vivienne, thank you so much for the warm welcome and it’s great to be here again with you and the listeners.

 

Right, well I don’t know about you, but I’ve actually found something in the news this week that I thought you and the listeners might find quite interesting.

 

Vivienne, it would be great to hear. You always have interesting news stories for us and I’m interested to hear another.

 

DDD News

 

I have gone slightly left field here and I’ve decided to look at a story that actually has nothing to do with health care, but has a considerable amount to do with big data. So, in the UK we have an underground, or would I be better to call it a metro system?

 

[Laughter]. I think we would understand you with either word, Vivienne, but both are welcome. The underground or metro system in London is a great mode of transport, really amazing.

 

It is and it gets better and better. However, we are about to introduce all-night trains and there is a little bit of an issue with the staff who are going to run them. There have been a series of strikes, including one in February 2014. You might say to me, “Well, excuse me, what has a strike on the London underground got to do with health or anything else?” Well, it was interesting because the Universities of Oxford and Cambridge did a joint study where they explored the data that is collated from travellers when they use a swipe card, which you load money onto; you can then literally run through the underground all day, and at the end of the day it will calculate how much you’ve spent.

 

It’s the Oyster Card.

 

It is the Oyster Card, yes. 10 out of 10 [laughter]. So, anyway, they found out that a sizable fraction of commuters were able to find better routes to work, and the strike ironically produced a net economic benefit. The way they found this out was by examining 20 days’ worth of anonymised Oyster Card data that contained more than 200 million data points. They were able to see how individual tube journeys changed during the strike. This particular strike resulted in only a partial closure of the tube network, so not every commuter was affected by the strike, and you could directly compare what was going on. The data enabled researchers to see whether people went back to their normal commute once the strike was over, or found a more efficient route and decided to switch. It turned out that of regular commuters affected by the strike, either because certain stations were closed or because travel times were considerably different, about 1 in 20 decided to stick with their new route once the strike was over. While the proportion of individuals who ended up changing their routes may sound small, the researchers found that the strike actually ended up producing a net economic benefit. By performing a cost-benefit analysis of the amount of time saved by those who changed their daily commute, the researchers found that the amount of time saved in the longer term outweighed the time lost by commuters during the strike. You wouldn’t expect that kind of knowledge to be thrown up, really, and it’s only through big data, and we are talking massive data, 200 million data points, that this emerged. They also found out, which I also found interesting, that the London tube map, which is iconic, may have been a reason why many commuters didn’t find their optimal journey before the strike.
In many parts of London the actual distances between stations are distorted on the iconic map. By digitising the tube map and comparing it to the actual distances between stations, the researchers found that those commuters living in or travelling to parts of London where the distortion is greatest were more likely to have learned from the strike and found a more efficient route. So, what I was thinking is: how is big data throwing up very different, surprising insights into health practice in the US?

 

Vivienne, big data, just as you say today and as we’ve said previously on the podcast, is powerful especially in that it generates these surprising conclusions, and the ability to process these large data sets is part of what has empowered us to see these counterintuitive or surprising conclusions. In the United States, we see that in many different places. For example, on a somewhat smaller scale in what is sometimes called “small data” or “little data”, as an allusion to big data, we have the ability to use data sets from the hospital to do predictive modeling for things like when our hospital will be so full we’ll have to divert patients. When I say small data or little data, these are still large data sets. The bottom line is that the computing power we have now, coupled with statistical sophistication, lets us see things that we’ve never seen before. Just as with this news story, where the classic “necessity is the mother of invention” applied, we can see how changes in our department of surgery or in our hospital… sometimes small tweaks, like, for example, opening up more intensive care unit beds, can have a large and even predictable impact on the rest of our system. That’s one use of fairly large data sets that I have used before in part of our surgical system. We made a predictive model for when our hospital would have to divert patients so that we would know ahead of time and be able to offset it. Things like ICU bed capacity made a world of difference for all sorts of end points through our hospital, and it would not have been as easy to tease out, or we wouldn’t have been as focused with our interventions, without the use of larger data sets and sophisticated statistical modelling techniques.
So, I think the lesson learned from analogous situations like the London tube strike, or the most recent London tube strike, is that this power that we have now can bring us to sometimes counterintuitive conclusions, or sometimes just surprising conclusions that we wouldn’t have guessed otherwise. I think it’s a good lesson, and as you said, it’s right there in the news today.

 

What I found really interesting was the fact that most people hadn’t bothered to find their optimal route until they were forced to experiment. That says something about human nature, but it’s also something we should learn from. Perhaps we shouldn’t be too frustrated that we can’t always get what we want or that others sometimes take decisions for us. That was the conclusion of one of the co-authors, Dr Tim Willems from Oxford University’s Department of Economics. The final note to the article was that if we behave anything like London commuters and experiment too little, hitting such constraints may very well be to our long term advantage. That turns everything on its head, doesn’t it? Through adversity, another take on your necessity is the mother of invention, we discover other things. I think with the use of big data, those gains could be huge.

 

Vivienne, I think it’s a very interesting message, and in the quality control and improvement world we have two related techniques. One is called Design of Experiments, where on a small scale we tweak certain portions of a system, and through the use of statistics we can tell whether the changes in the system’s output are related to our initial tweaks. Again, it’s called DOE, or Design of Experiments, and we do it prospectively rather than analysing data retrospectively using a more big-data technique. So, there are lots of ways to get at just what the professor said: the fact that sometimes tweaking, and innovating around that, is key. The second technique is more of a theory, actually, called the Theory of Constraints. The Theory of Constraints goes along with this idea of boundaries. You need to understand what they are, and again, if you just stayed within the current frame you would never find, for example, your faster way to work. So, what an interesting article.

 

You mentioned retrospective data and I think that whole business of using data retrospectively has lots of problems to it, don’t you think? I think it is something that perhaps we ought to talk about today in a bit more detail than we’ve mentioned in the past.

 

Well Vivienne, thanks for highlighting that issue, and I think today it’s worth spending some time on some of the different types of bad metrics we have seen in healthcare. I think there are four broad categories that we see recurrently in healthcare that really make it challenging to have a data driven department or hospital, when, after all, it’s very valuable to have a data driven mentality because it allows us to make all of these advancements, like the ones you talked about. So, I will share with you today some of the different types of difficult metrics I have seen and how they impact us as patients.

 

Right. So, would one of those be the difficulties when you can’t actually collect accurate or complete data? Does that cause a problem? Well, I suppose it does [laughter].

 

You have highlighted one right at the onset and that is these metrics for which we cannot collect accurate or complete data. Vivienne, it is surprisingly challenging in hospitals to collect data, accurate complete data. Often, in fact, in some western management systems data collection is frowned upon or it’s an afterthought. People are so busy that it’s an imposition. So, as we launch in here, let me just tell you that often in hospitals we will hear that we can’t collect any data or any more data or data on a particular problem. Of course, if it’s not measured, it’s really challenging to manage. So, if you can’t afford the time to collect good data, well sometimes it’s useful to stop and think, we really can’t afford not to collect good data.

 

So, in fact, you are talking about some staff. I don’t think it’s specific to health. It’s in education certainly, I know that, where people see it as an imposition. I suppose, before you do anything, a change of perception, perspectives and attitudes is important. Like you say, you can’t afford not to collect big data or data of any kind.

 

You are exactly right. Not only is it changing people’s thoughts about data collection, it is certain techniques that can make data collection much quicker and more accurate. For Six Sigma in particular, we have certain ways to collect prospective data that really take a second, two seconds at most, as patients come through the system. So, part of it is leading the culture and our colleagues around us to understand the importance of data collection, and then there are specific techniques that can be used and specific ways to set up data collection for both discrete and continuous data that get us the data we need a lot faster, and I’m happy to share those with the listeners. If they want to email us through our website or get in touch, we can talk about some specifics for how to set up data collection to get at good data for difficult managerial problems.

 

We are DDD. Data Driven Radio.

 

Well, I’m going to throw you a curve ball. We have been talking in this country, just today in fact, about robotics and how robots are becoming more useful to us in the workplace, and I assume in healthcare and everything else, where there will be certain manual tasks or simple tasks, or not quite so simple tasks that will be done by robots. Now, that would be quite interesting if those robots automatically collected data as well, wouldn’t it?

 

It would be, and you probably know that robot comes from a Czech word for servant. In fact, if I remember right, it was coined by Isaac Asimov, but I may be wrong about that part. [I was wrong here.  It was Josef Capek.] The fact is, yes, it would be very useful, especially with some of these rapid data collection techniques, to have a robot or a similar automaton that does some prescribed straightforward task and also collects us some data. Yeah, it’s an interesting opportunity, and it segues nicely into another type of bad data, or bad data metric. Those are metrics that complicate operations and create excessive overhead. Those are some of the worst ones, Vivienne. The ones where you think you have something important to measure and just the doing of it is so arduous that it makes things very difficult and takes staff time. Those are metrics that should be designed out and probably aren’t as useful.

 

Have you got any examples of those?

 

Sure. Often in healthcare, the reason that staff recoil at data collection is when you hear data collection it can mean one page filled with 10-12 checkboxes that float amidst the sea of checkboxes, vital signs, prescribed forms that we have to fill out. Something even as arduous as one more page can make a big difference for our colleagues when we are at the front line taking care of a patient, or our nursing colleagues who are checking a patient into the post anaesthesia recovery unit. It can be challenging if it’s even one more form. So, yes, I can think of probably five or six stories where management wanted some data collected and the doing of it was just really difficult.

 

So, in fact, the development of specialised software, which is ongoing, will probably take a lot of the pain away, enabling healthcare workers, surgeons, teachers, whoever, to actually get on with the business they feel they are trained for. I saw in the news today, actually, that there is software being pioneered by some law firms that takes out the really basic jobs, like checking whether a contract is appropriate and legal. These are really tedious jobs that you would give a junior to do that can be done really quickly by software alone.

 

Well, electronic health records and similar tools hold great potential to allow us to retrospectively pull data. For Six Sigma and quality projects, we really like to collect prospective data, and often the end points that have the most meaning in our systems are not ones that are typically thought of to be included in an EHR. Again, electronic health records can be very useful, but the Six Sigma teaching is to collect data prospectively, in statistically valid sample sizes, directly from the line or the process. So, I think you are exactly right. Electronic health records hold great potential, and I would add that just as valuable, if not more, is prospective data as we do it.

 

Have you got another example of bad data for us?

 

Vivienne, one of the other big problems we see routinely is metrics that are exceedingly complex and ones that don’t tell a story. There is a value to having a metric that gives you a feel for how things are going. You may have seen ones created, like the happiness index for countries, or some similar metric that is almost a statistic with a humanised element built in. Metrics that are exceedingly complex, that are difficult to explain, that don’t have an intuitive feel, those are tough. On the blog, we have an entry where we briefly describe a metric for operating room readiness. Are you ready to hear, Vivienne, what the operating room readiness was for our department of surgery last month? Do you want to hear what it was?

 

Yeah, hit me with it.

 

Okay, it was…pumpkin. Now, if you are recoiling or confused, well “pumpkin” is a confusing answer or metric or statistic for how well the operating room is doing in terms of readiness. Yet, all the time throughout healthcare and surgery, we have metrics like pumpkin for operating room readiness that really don’t tell us much, or that answer a question in an odd way. That is really challenging. So, like using the score pumpkin for operating room readiness, sometimes in healthcare we have metrics that are complex or difficult to explain or lack a feel to them.

 

I am speechless here [laughter].

 

It’s a strange example, but what is stranger are some of the unusual metrics we see on quality dashboards all the time. That idea of just putting some metric on a dashboard… a bad metric, Vivienne, can also cause employees to just ‘make their numbers’. When staff can’t feel the metrics, when a metric really doesn’t have a lot of meaning, or when you can’t see how it affects patient care, it can be very hard to engage yourself in wanting to collect it or improve it, and that type of challenge makes a metric less useful. So, a metric that causes an employee to just make their numbers, or focus on the number rather than what it means, that’s a whole other category of bad metric in healthcare, and they are ones that we see all the time.

 

Right, so would you like to give us a couple of takeaways, bearing in mind that you’ve been researching and looking into these problems with data?

 

Yes. I would say, Vivienne, wherever possible, especially as we continue in the information age with so much data and big data, we should take the opportunity to make sure the end points we collect are tailored to have meaning for us, ones that we can feel. We can see the train of bad metrics coming, and to avoid being hit by it, it’s important to step out of the way. Some of the metrics that would be most useful for healthcare are the ones that can help us tell a story of our patient care, ones that have a meaning to them, rather than, as we often see, just dry percentages and the like. So, especially as we collect more and more data in healthcare, I have this feel like [22:51], a visual display of quantitative information. We have a lot of info and it’s important to represent it in a way that holds meaning for us. So, wherever possible it’s useful to have humanised endpoints that don’t complicate operations or create excessive overhead, that are straightforward to explain to others, and that we can collect accurately and completely. I think those are the takeaways.

 

Well, thank you very much, David, and I’m sure this is something health teams all over the place will consider very carefully. In fact, we are very interested to hear what innovative practices are being undertaken in your health provision. So, if you’d like to appear on the show, contact us through our website. David, would you like to give us the address?

 

Sure. They can contact us via SurgicalBusinessModelInnovation.com and there is an address linked to that page, or they can contact us at our info address, which is info@TheSurgicalLab.com.

 

So, we look forward to hearing from you. Meanwhile, if you’ve liked the show, do leave us a rating on iTunes, and you can catch us on SoundCloud too. It’s one way we can ensure the word is spread. So, we look forward to being with you next time. So, from David and from me, it’s bye for now.

We are DDD, Data Driven Radio. Catch us on SoundCloud and iTunes.

The Healthcare Quality Podcast: Big Data in Healthcare

By:  Vivienne Neale (@SupposeIAm) & David Kashmer (@DavidKashmer)

 

Listen to the podcast here.

 

Hi, and welcome to DDD, which is Data Driven Decision Radio, episode four, if you didn’t know. My name is Vivienne Neale and I’m delighted to be back with you. For those who have asked, my background is in education, training, broadcasting and social media. Of course, I am also a sometime patient, curious to know what decisions are being taken in my name that might just affect me and thousands of people just like me. So, once again I am joined by David Kashmer, Chair of Surgery at Signature Healthcare. David is an expert in statistical process control, including Lean and Six Sigma. He has a special interest in new tools to improve healthcare, like gamification. David also edits and writes for a blog called SurgicalBusinessModelInnovation.com. So, hi David and welcome back.

 

Vivienne, again it’s great to be here with you and all the listeners today.

 

Yes. We are all looking forward to seeing what you have for us, but before you start, I have been scrabbling about in the news and I’ve come across something I think you’ll find quite interesting. It was an article about a designer who has managed to create a 3D printed elbow and upper arm prosthetic. This actually has hand actuation and it was made for a boy without an elbow. I know that you have a real interest in 3D printing. So, what do you think about that?

 

Vivienne, this fascinating phenomenon has been seen throughout social media. I am connected via Twitter to many of the 3D printing hubs and we have seen interesting cases like this over the last 4-5 years along with the rise of 3D printing. The particular one you reference comes to us from 3DPrint.com and they run through a nice example of manufacture and design of a useful novel prosthetic for a child, just as you mentioned, and one that is able to grip and flex. It was about three to four years ago that I first saw an example of a 3D printed prosthetic for a child and this one really shows the nice advances that have come between now and then, but Vivienne, again what we are seeing is this rise of personalised medicine where a sharable stereolithographic, or STL file, can be posted on the web, transmitted easily across the internet and can be used to size, design, redesign and tweak a prosthetic limb for a child or an adult. It’s really a fascinating time that we’ve come to.

 

I agree. It’s like science fiction, but it’s science fact and it’s science now. Do you know very much about what Ninja Flex is?

 

Not the Ninja Flex product in particular, but my understanding from the article is that there are several types of implement that will relieve pressure from the plastic prosthetic against the skin. My understanding from a brief review of the article is that one of these products is a Ninja Flex inner liner. It allows a comfortable fit for the prosthetic while relieving pressure. If you’ve ever seen the show Dolphin Tale, where a prosthetic limb is manufactured for a dolphin, that tail requires pressure relief against the skin, and similarly, but distinctly, for humans, we require some type of pressure relief both to fit the prosthetic properly and to make sure pressure sores don’t develop at the site of the limb interface. In this case, my understanding is that Ninja Flex is one of those products, but pardon me if I misspeak, I am not familiar with that particular one.

 

Well, it’s certainly exciting and I’m looking forward to a full body transplant. I want to go somewhere and have a look at all the bodies on offer on hangers and say, “yeah, it’s okay. I’ll have that one”.

 

I think there may have been several science fiction books written in a similar vein.

 

There have been, and there is a new film out as well. So, yeah, I am all for it. So, anyway, talking about things that are changing the way we might possibly live our lives, I have been exploring a little bit more about data this week. We both know that data points, absolutely billions of them, are being generated every day. I think, and you may be in this position, some healthcare providers may well feel totally overwhelmed by what’s being generated. You can actually end up wondering just what to do with big data. For some people, even knowing what big data is, is a hurdle. So, let’s start right at the very beginning. I mean, big data, as far as I can see, is at the heart of the smart revolution and the basic idea behind it is that everything we do is increasingly leaving a digital trace. I quite like the phrase ‘digital exhaust’ myself, and this trace can be used by us or others to analyse what is happening and make decision-making much smarter. The driving forces in this brave new world are access to ever increasing volumes of data and also our ever increasing technological capability to actually mine that data for maybe commercial or organisational insights. So, you can actually end up thinking, well, I’ve collected all of this stuff, what am I going to do with it? We actually know that data in itself can be difficult to manage, but that’s not an excuse not to do something about it. It’s impossible to ignore. So, what happens if you feel your department’s information isn’t being represented or you’re wondering where to go next? Do you have any experience of that kind of situation, David?

 

Vivienne, I do. Not just in my professional life, but in every day of our lives. Let’s review. Most of the data that exists has been generated in the last ten or even five years. What I mean by that is, information scientists teach us that the data we’ve generated lately, relatively lately in human history, vastly outstrips all of the information that we had produced before. What is interesting about that is that we live in a very different world, some say, from the one our minds were originally designed for. We evolved to be hunters and gatherers, and now we live in a sea of information, which is very different and much more fast paced, some say, than what we are designed for. Now, to step back from that philosophically into more of, I guess, a concrete example, I’ll tell you that in our department of surgery one of the things we wrestle with is how to extract meaning from the data we have. As you said, big data implies the use of these massive data sets that exist to come to meaningful conclusions with the power to change our minds and make us act differently, even sometimes in counter-intuitive ways. Two examples, one from a surgical department and one not. Let’s talk about the one that’s not first. About two or three years ago, a very famous pop news article came out about how Target, the store, was able to identify women who were pregnant via their web browsing and associated web browser history before the women themselves knew they were pregnant. They would receive targeted advertisements and other material related to their pregnancy before they knew they were pregnant. It was based purely upon their web search history and the big data that Target had used, extracted, culled and built a predictive model from that could tell them which customers were pregnant. It was so good that it would identify women before they knew themselves that they were pregnant, and such is the value of big data: it comes to these seemingly impossible, but very valid, conclusions from enormous data sets.
Now, let me tell you, just in the hospital setting, we are now evolving away from the historically utilised process of quality improvement that we always had, one that focused really on specific people: a ‘who went wrong, how did they go astray’ strategy aimed at us as individual surgeons. With more complex data, statistically robust tools and the valid use of data, we are learning a different lesson: the lesson of systems, and how to evolve even predictive models for what we are about in our surgical department. For example, we put together a model based on our data over a long stretch of time that would predict when our hospital would have to divert patients, when we would be unable to render the care we really focus on for sick and injured patients; that process is called diversion. We built a model with a large data set that would tell us ahead of time when we would be going on diversion, and then we leveraged that to patch up our system so that we would know how to avoid going on diversion and how to take care of patients without having to turn anyone away. So, those are two examples, Vivienne, of how we leverage big data both in healthcare and how I’ve seen it used in industry.
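The diversion model described here isn’t specified in detail, so as a hedged illustration only, here is a minimal sketch of one way such a predictor could work: bucket historical ED census readings into bands and use the empirical diversion rate in each band as the predicted risk. The records, band size, and field names below are all hypothetical, not real data.

```python
from collections import defaultdict

# Hypothetical historical records: (ed_census, boarders, went_on_diversion).
# In practice this would be thousands of rows pulled from the data warehouse.
history = [
    (18, 2, 0), (22, 3, 0), (30, 6, 1), (25, 4, 0),
    (33, 7, 1), (20, 2, 0), (28, 5, 1), (24, 3, 0),
    (31, 6, 1), (19, 1, 0), (27, 5, 0), (35, 8, 1),
]

def diversion_rate_by_band(records, band_size=5):
    """Empirical P(diversion) per ED-census band - a crude predictive model."""
    tallies = defaultdict(lambda: [0, 0])   # band -> [diversions, total]
    for census, _boarders, diverted in records:
        band = census // band_size * band_size
        tallies[band][0] += diverted
        tallies[band][1] += 1
    return {band: d / n for band, (d, n) in tallies.items()}

model = diversion_rate_by_band(history)
forecast_census = 32
band = forecast_census // 5 * 5
print(f"Predicted diversion risk at census {forecast_census}: {model[band]:.0%}")
```

A real model would of course use far more predictors and proper validation; the point is only that historical data can flag risky conditions before diversion happens.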

 

I can see that as being exactly the point in the UK with the National Health Service. We have some extraordinary periods of overload and I think that data will obviously help tremendously in predicting that actually almost down to the wire to make sure that the whole process works smoothly. Certainly, I think the health profession in general has been able to reveal new insights and opportunities and even excavate unknown or recurring problems. I think that will help both in efficiencies and also economic efficiencies as well, which is just as important. When you overspend, you don’t have the money to do what you have to do.

 

Vivienne, that’s just one of the ways, as you said, one of the important ways that big data can really serve us in healthcare. I think that there are several insights we can get from big data, and then there are some insights that we can’t even imagine. What is so fascinating about the big data process is that we can consider it more as a way in which we have become sophisticated enough with data to process big data along certain lines, large data sets, to draw from them meaningful conclusions. It’s more of a way to focus on how to get insights from data instead of the specific nature of what those insights are going to be, but just as you said, they fall in line with resource allocation, whether it’s diversion or some other aspect of quality, fairly routinely.

 

Yeah. In fact, Bernard Marr, I don’t know if you are aware of his book, it’s called Big Data, discusses starting with a SMART model, and what he suggests is you start with strategy, you measure metrics and data, then you apply analytics, report the results and then you transform the business of caring. Is that how it works in your organization or are you doing something different?

 

No, and it’s fascinating that although I have not used the SMART model, many models point to the same pattern. For example, the Six Sigma model is called DMAIC, which is where you define what you are looking at, you measure it rigorously according to statistically valid sampling patterns, you analyse that data and then you improve, make changes. Then there is an additional aspect that the SMART model does not include: you control the project over time, meaning once you’ve made changes you have a plan to look in on it again to see how you’re doing. So, the DMAIC model incorporates some things the SMART model does not, and yet the SMART model starts us off with, I think, a key element, which is strategy. The DMAIC model does not really cover whether you should be doing a particular quality project; it just starts you off with definitions, making sure you have a correct or reliable, clear definition that can be used consistently. What I like about the SMART model is the focus on strategy upfront.

 

Yeah. Well, it’s quite interesting. I like the controlling the project afterwards. So, we need a new one that combines both, I think, because it’s so easy, isn’t it, to actually have a huge exploration of something and then brush off the dust and say, “excellent. That’s done”, and it has to be ongoing.

 

I agree completely and I would just say that what’s done in Six Sigma, as far as the strategy portion goes, is not, again, the project pattern or the DMAIC pattern that you’ll use to improve a process: define, measure, analyse, improve, control. That’s once you get into a project, but for Six Sigma practitioners, especially what are called master black belts, that’s where the strategy aspect comes in. There are other tools that we use outside the DMAIC project to say, “Hey, look. In our set of possible projects, which ones should we be doing? Which ones will have the most bang for the buck?” We use tools such as FMEA, or failure mode effects analysis, to decide which projects have the most bang for the buck, where we should go first, and that’s the strategy component that complements the DMAIC pathway. So, it does get there, just in a different way.

 

I don’t want to put you on the spot, but I will [laughter]. Can you tell me what observations and insights have really given you the most bang for your buck in your experience? Is there something that really stands out for you?

 

There is. There are several projects which, across centres, give bang for the buck. What is useful about the FMEA tool is that it steers you towards projects that maybe you wouldn’t have thought of. For example, it focuses on the ability to detect a problem, meaning if you have a problem that is very challenging to figure out because you don’t have a good mode of surveillance or a good way to catch it, that’s a more important problem. Also, the incidence of the problem, how often it happens, and the severity of the problem: when it does happen, how bad is it? These and other factors are used by the FMEA tool to come up with a composite number to prioritise projects. Across centres, you see similar projects come up. Some are disaster planning at certain centres. What would happen if the building collapsed? Is that going to happen often? No, but it’s so severe and it’s so immediate that often an FMEA will find that. So, for surgery departments there are similar projects. That’s how it usually goes.
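As a rough illustration of the FMEA prioritisation described above, the composite number is commonly computed as a Risk Priority Number (RPN): the product of severity, occurrence, and detectability scores. The project names and 1-10 scores below are invented for the example, not real data.

```python
# Hypothetical FMEA worksheet: each candidate project gets 1-10 scores for
# severity, occurrence, and detection (higher = harder to detect).
candidate_projects = [
    # (name, severity, occurrence, detection)
    ("Building collapse / disaster response", 10, 1, 9),
    ("Ventilator-associated pneumonia",        7, 5, 4),
    ("Wrong-site surgery",                    10, 2, 3),
    ("ED boarding delays",                     5, 8, 2),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number: the classic FMEA composite score."""
    return severity * occurrence * detection

# Highest RPN first = "most bang for the buck" for the quality program.
ranked = sorted(candidate_projects, key=lambda p: rpn(*p[1:]), reverse=True)

for name, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):4d}  {name}")
```

Note how a rare-but-severe, hard-to-detect item (the disaster scenario) outranks a more common but easily caught one, which matches the point about FMEA surfacing projects you wouldn’t otherwise pick.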

 

Well, certainly thinking about the enormous explosion that happened in China, that would have been an example where a disaster model would have had to be put into practice immediately.

 

Yeah, absolutely.

 

So, do you think, therefore, that what we are saying is data makes healthcare or practice more predictable, if that’s ever possible?

 

Well, I do think it is possible, I would say. I think one of the typical replies to quality interventions is, “well, every patient is so different”. I think that’s true. I do think patient to patient there are opportunities to personalise medicine, and we should, and that’s coming, from the personalised dosages and 3D printed pills that we talked about last week to genomic medicine. There are really so many ways to personalise healthcare. That said, we can consider each system, and not just the individual patients passing through it: each population of patients who come through the ED, for example, can be considered as a population. There will be a bell curve, not always a normal distribution, but some curve associated with, for example, how long patients stay in the emergency department, and yes, from both personal experience and quality education, we can do things to improve that population’s time in the emergency department, for example. We can decrease the variation, the width of the curve, with certain interventions to improve our care. I think it can be done, I’ve done it, it’s what I focus on every day. It is very different from how we are typically educated in healthcare and that’s what makes big data valuable, but often more challenging to implement in different systems.
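To make the “width of the curve” idea concrete, here is a small sketch with invented length-of-stay numbers: the hypothetical intervention below shifts the mean somewhat, but more importantly it markedly narrows the spread, which is exactly the population-level improvement being described.

```python
import statistics

# Illustrative (made-up) ED length-of-stay samples, in minutes, for a
# population of patients before and after a process change.
before = [190, 240, 310, 150, 420, 205, 260, 385, 175, 295]
after  = [200, 215, 230, 185, 250, 210, 225, 240, 195, 220]

for label, sample in (("before", before), ("after", after)):
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)       # sample standard deviation
    print(f"{label:6s} mean={mean:6.1f} min  sd={sd:5.1f} min")
```

The standard deviation is the simplest summary of the curve’s width; control charts (discussed later in this collection) track the same idea over time.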

 

Of course, that does bring up some moral and philosophical questions, in terms of if you keep so much data on your patients or a group of patients, you could possibly ultimately deny treatment, or you could say, “well look, look at the stats. Look at how you’ve looked after yourself. Do you really think that I want to use my time, my skill, my money, my department’s money in giving you another stent when you haven’t given up smoking or you’ve eaten too much cholesterol or what have you?” I mean, I know this is trivializing it in one sense, but there has got to be… I’m sure in the wrong hands we might end up giving data freely and it’s then used, not against us, but not to help us.

 

Vivienne, there are two large ideas that you reference in your comment. I think it’s important to talk about them. One is data security. You indirectly say we give up so much data, and we do. In the United States, in the Veterans Administration, the theft of even one unsecured laptop exposed the data of many, many patients. That is a challenge in the age of big data. How do we keep our data secure against cyber-attacks? How does my personal health information stay secure? There are lots of answers to that. I think it’s going to be a little beyond the scope of what we talk about today, but it’s no less important. I think it’s key. Then the second related thought you had: we may have all of this aggregated data, but the challenge is then what if it is used for a purpose that’s, let’s just say, nefarious, or maybe not what we would want it to be, or maybe to deny treatment to an individual? I would say, yeah, that is a real possibility. I have seen large data sets, and ones that aren’t ‘big data’, just different studies, used to say we should or should not treat populations. So, I think that risk exists whether we talk about big data or just staff who are driven to say there are certain things we should not treat. That’s a tough balance. I don’t look to big data to make individual treatment decisions and, at least when I teach data and data use, I focus on the fact that there is a real difficulty in applying population-level data, bigger data, to an individual patient, to any one individual. It’s very tough to do for a lot of different reasons. You have to watch it. So, where there is a downside, like you said, where maybe somebody will use this not to treat, there are incredible upsides. Maybe we will use this to treat you better, which is my focus. Maybe we will use this to get you through the emergency department faster because we know that actually makes you do better in the hospital stay, for one example.
I am not saying that’s true across hospitals. I am just using this as an example. ED time and length of stay at some places may correlate to outcome, maybe at some it doesn’t, but the issue I reference is I would say, and I’ve heard exactly what you bring up before, maybe they will use the data for the wrong thing. Yes, but I would say maybe they’ll use the data to treat me and my family much better than we could otherwise get. That opportunity, that side of things is what I tend to focus on, and I think your concern is well said.

 

Yeah. Well, I think it’s really interesting. We could take a whole podcast to talk about the pros and cons in a wider field and I’ve got so many ideas buzzing in my head. I will keep you right on the straight and narrow [laughter]. So, data does show important trends and what can improve quality of care, but that actually is often an individual experience and I suppose data becomes more valuable to you when it’s used to further customise and personalise a client’s experience. Certainly in the marketing context, customer experience is all. So, the question for me now, my last question is, how far does personalisation actually go without actually being unattractively intrusive in the medical profession, or should we ask, does that actually matter when more hospital procedures are intrusive anyway? What do you think?

 

I would say, for me personally, I believe in personalised medicine as a window on better healthcare. I think as we become more advanced there is a real opportunity, not just for satisfaction in my experience of care when I’m a patient, but also to get a better outcome because we are using medications that will work for me, a prosthetic that will fit me, or whatever my needs are. So, I think it’s very useful, but I would also say, while we manage things down that road, it’s important to manage and have the ability to interpret and gather larger data sets that say, “this is our performance for the group of patients in the emergency department as a whole, or referrals to our outpatient surgery practice as a whole”. When we manage things that way also, and use our data set and slice it up in different ways, those two complementary paths give a much better experience. We can ask ourselves, okay, how well are we doing when it comes to getting a patient into our practice and using the tools, for example, of personalised medicine for that patient? You can use big data to ask questions like that. So, I really think there are two different ways of getting at excellent care, neither one of which is alone enough. Those are my thoughts on it.

 

Thank you very much. Well, certainly you’ve given us much to think about this week and thank you very much David. So, I hope you’ve enjoyed today’s episode, and if you want to keep up to date with David Kashmer’s approach to quality and statistical process control, business model innovation and critical practice, do join us for the next programme. In fact, we are very interested to hear what innovative practices are being undertaken in your health provision. If you would like to appear on the show, contact us through our website. We are looking forward to hearing from you. Meanwhile, if you have enjoyed the show, do leave us a rating on iTunes. It is one way we can ensure the word is spread. We look forward to being with you next time. So, bye for now.

Have You Seen This Pessimist’s Guide To Benchmarking In Healthcare?

By:  DMKashmer MD MBA FACS (@DavidKashmer)

LinkedIn Profile here.

 

It sure sounds like a good idea to measure our healthcare processes against standards from other centers, right? It seems like pretty obvious logic that if we benchmark ourselves against how other organizations and professional societies want us to perform (or how they actually perform), we’ll be better off in the end. Doesn’t it sound straightforward that we should have an external benchmark that we compare to our processes?

 

Guess what? It’s not, and here’s why. You probably have a long way to go before you benchmark.

 

Thirty-five healthcare quality projects in the last three years have reinforced this simple truism for me:  don’t benchmark at first. Why? There is usually a lot more you have to do before you look to some external agency for a benchmark.  Here are some of the items that probably need doing before you scoop up and apply an external measure to your system.

 

You Don’t Have A Clear, Usable Definition of What You’re Measuring

 

For example, your healthcare system probably lacks a clear operational definition of the metrics it wants to measure.  Will you use a definition for VAP (Ventilator-Associated Pneumonia) from the CDC or some other definition?  Does everyone who is performing data collection have the same definition?  Truth is, unfortunately, when you scratch the surface…they probably don’t.

 

You Don’t Know The Voice of the Customer…Or Even Who The Customer Is (!)

 

You may not even know the voice of the customer (VOC) and key process indicators for your various systems.  Who exactly is receiving output from this system of yours?  And what do they (not you) want?  Get over yourself already and go find who is on the receiving end of your system and what they expect from the system.  You may even need to get out of the building to find out.  (Shudder!)

In other words, until you have a clear definition of what you’re measuring, a way to measure it, and knowledge that it will significantly impact what you’re doing, you have a long way to go before you benchmark. Let me tell you more. One of the common errors we make with healthcare statistical process control and other quality projects is that we fumble at the one-yard line. I mean that we don’t have a clear definition for what we are measuring or how we are going to measure it. How can we benchmark against an external measure before we even know what we are talking about? All too often, this is exactly what happens.

 

Consider this story of woe that owes itself to the problems we discussed above.

 

A Cautionary Tale:  VAPs in the ICU

 

Once upon a time there was an intensive care unit that wanted to benchmark its performance with ventilator associated pneumonia versus external organizations. (By the way, this is NOT the organization I work for!) It looked around and found typical rates of ventilator associated pneumonias as determined from other organizations. It seemed to make a lot of sense to do this. After all, they could bring their expected performance in line with other organizations. Of course, they wanted to have zero ventilator associated pneumonias as their real goal. What were the problems?

 

First, they had a non-standard definition of ventilator associated pneumonia. In fact, the operational definition they chose of VAP did not square with the definition of ventilator associated pneumonia from other centers. What did this cause? This caused all sorts of misguided quality interventions.  Alas, they didn’t discover this until a lot of work had been done.

 

For example, the team adopted a VAP bundle, which also makes a lot of sense. It then went on to perform no less than 12 other interventions in order to achieve quality improvement. Some of these decreased the VAP rate and some (many) did not.  The team spun its wheels and fatigue and staff churn quickly set in.

 

Another problem with external benchmarking? The team did not have the infrastructure to determine whether they were doing significantly better or not. This is a common danger of benchmarking. The fact that the operational definitions did not align made the team add layer after layer of complexity and friction for dubious outcomes in quality. Worse yet, this wild goose chase actually produced worse outcomes, owing to the variation that all of the ineffective changes introduced into the system.  Because quality teams often lack the sophistication to do statistical testing and to protect against tampering / type 1 errors, the wild goose chase in healthcare (sometimes from inappropriate benchmarking) really hurts!
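The statistical testing referred to above can be as simple as a two-proportion z-test before declaring victory (or defeat) on a rate. The VAP counts below are invented for illustration; the point is the guard rail against tampering, not the numbers.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-statistic: did the defect rate really change,
    or are we about to tamper based on noise?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2)) # pooled standard error
    return (p1 - p2) / se

# Hypothetical counts: 9 VAPs in 120 ventilated patients before an
# intervention, 6 in 115 after. Is that a real improvement?
z = two_proportion_z(9, 120, 6, 115)

# |z| < 1.96 -> not significant at the 5% level: celebrating (or reversing
# course over) this apparent change would be reacting to noise.
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

Only when a change clears a test like this (with an adequate, pre-planned sample) is there evidence that the intervention, rather than chance, moved the rate.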

 

I see this all the time and it’s very challenging to avoid in our current healthcare climate. For example, it is always hard to argue against doing more. Intuitively, who wouldn’t want to do more to make sure their patients were safe?  It’s an easy position to support, akin to “putting more cops on the street” promises from politicians.  Who could disagree?

 

However, it turns out that when we make too many changes, or changes that do not result in significant improvement, we can unfortunately increase variation in our processes and obtain paradoxically worse outcomes. Processes can become cumbersome or resource intensive, whether that be in terms of manpower or other sources of friction. This is very difficult to guard against.

 

Learn from this instructional fairy tale: align the operational definition you are working with to your benchmark. Or better yet, don’t benchmark at first.

 

Important Thoughts on Benchmarking

 

So, if I’m telling you not to benchmark first, what is there to do? My recommendation is to follow the DMAIC process where there is a clear definition and those definitions are measured in rigorous statistical ways. This means having a team together that adopts a standard definition of the item that is being studied. I can’t say enough about that.

 

The operational definition for your particular item must align with the eventual item you want to benchmark. Typically in non-rigorous healthcare quality projects, this does not happen. Before you go on to accept the benchmark that you so badly want to look toward, make sure that this definition can be measured in adequate ways.

 

Problems uncovered (or missed) by a measurement systems analysis, and other measurement vagaries, can really throw off your quality project. You can end up forever chasing your tail, or the benchmark, if your measurement system is not statistically rigorous or useful. Does the outside institution obtain the benchmark rate from retrospective, cleaned data warehouses? Or did they obtain it prospectively, right from the process? These are things you’ll have to wrestle with, and it may make a difference in the benchmark you accept and what you think represents quality.
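One concrete, hedged example of checking your measurement system before trusting a benchmark: have two reviewers classify the same charts under your operational definition and compute a chance-corrected agreement statistic such as Cohen’s kappa. The chart classifications below are made up for the sketch.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Agreement expected by chance from each rater's marginal label rates.
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical chart review: two reviewers classify ten charts as VAP (1)
# or no VAP (0) using the agreed operational definition.
reviewer_1 = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
reviewer_2 = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```

A low kappa here means your definition isn’t being applied consistently, and any comparison against an external benchmark is built on sand.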

 

If the benchmark you are looking toward is a zero defect rate or some similar end point that’s one thing. However, typically we use benchmarks to get a sense of what a typical rate of performance is. As taught to me by experience and Lean and Six Sigma coursework: don’t benchmark until you have rigorously improved your process as much as possible. And when you do benchmark, I recommend that you have carefully aligned your operational definition, measurement system, and even the control phase of your project with this eventual benchmark.

 

Do you have thoughts on benchmarks? You probably feel, like I do, that benchmarks can be very useful for quality projects…but only when used carefully!  Have you ever seen a benchmark used inappropriately, or one that caused all of the issues raised above? If you have, let me know, because I would love to discuss!

The Healthcare Quality Podcast: Mistakes With Control Charts

 

By:  DMKashmer, MD MBA (@DavidKashmer) with Vivienne Neale (@SupposeIAm)

 

Have you seen these common issues with control charts in healthcare?  Thanks to Vivienne and the podcast team at The Healthcare Quality Podcast for helping explore the use of control charts in healthcare!

 

 

Hi, and welcome to DDD, which is Data Driven Decision Radio, episode two, if you didn’t know. My name is Vivienne Neale and I’m delighted to be back with you. For those who have asked, my background is in education, training, broadcasting and social media. Of course, I am also a sometime patient, curious to know what decisions are being taken in my name that might just affect me and thousands of people just like me. So, this week, once again I am joined by David Kashmer, Chair of Surgery at Signature Healthcare. David is an expert in statistical process control, including Lean and Six Sigma. He has a special interest in new tools to improve healthcare, like gamification. David also edits and writes for a blog called SurgicalBusinessModelInnovation.com. Hi David and welcome back.

 

Vivienne, it’s great to be here again with you, and let me share that it’s been quite a week in the news for patient quality and satisfaction. We tend to talk, when we get together, about some of the various news items that we’ve seen and I have one to share with you and the listeners for this week. This one comes to us from ModernHealthcare.com and that well known magazine has a highlighted article this week about ‘bad metrics that put patients at risk and prevent providers from improving’, and as you know, this is something that is near and dear to our heart in healthcare and the focus on healthcare quality improvement. Their article goes on to say that hospitals most often penalised by the Centers for Medicare and Medicaid Services are typically ones that do well on other publicly reported quality measures and ones that are typically accredited by the Joint Commission, the US accrediting body for hospitals. Vivienne, to my mind, what this really highlighted is the tension between different metrics that we see in healthcare now. As you and I have discussed before, one of the challenges that we have in healthcare is this glut of data that we typically see as we try to do the best we can for patients and as we focus on quality. So, in today’s episode, I know we are going to go on to have a conversation about some of the specific tools, tips and techniques we can use to find meaning with certain quality tools. This article from ModernHealthcare.com really resonates with that tension that sometimes exists between accreditation and a true focus on quality. Really just fascinating stuff this week.

 

Brilliant. I think you are right to point out about the different metrics that are available to all of us and this glut of data that, in the end, we can’t see the wood for the trees and it’s quite difficult to make real progress.

 

Vivienne, I think that’s well said and we are seeing this all around healthcare now, for those of us who focus on being quality professionals and clinical practitioners. So, I’m excited to keep you and the listeners up to date on some of the different things we see and to focus on the tools that we are here to discuss today.

 

Right, indeed. Well, now, let me share something with you that I discovered in the UK press a couple of days ago. It was an item from the BBC and it’s about the importance of speaking out. Although we all feel it’s our democratic right to be heard, sometimes the hierarchy of an organisation can make it particularly difficult for those lower down the pecking order to speak out. Do you have any experience of this?

 

Vivienne, as recently as a week ago, while in the operating room, I took a moment to actually solicit the thoughts of those around me and it’s something we do routinely in certain cases and situations, and it is a fascinating thing that when we review issues that we’ve had, cases that have a quality issue or something else, that there is often someone in the room who had a different thought or had a different perspective and they didn’t share it, for whatever reason. So, at certain points in certain cases, we actually take a moment to actively solicit everyone’s thoughts, even when we are on comfortable footing and we know what we are, where we are, what we’re doing, and we know it well, we still take a moment at those decision points in a case to sometimes solicit actively the thoughts of those around us. So, yes, I have seen it. That staff are sometimes uncomfortable speaking out. It’s a fascinating experience that we see commonly in healthcare. So, yes, it happens all the time.

 

Yes. Well, hmm. Interesting. I don't know if many people do as you do. It has been pointed out that hospitals and airlines are highlighted as being areas of specific concern. So, one of the ways airlines are trying to reduce potentially fatal errors is to use psychological techniques to break down that hierarchical structure and encourage people at all levels to highlight if something is about to go wrong, and guess what? Medicine, in general, is starting to follow suit. The aviation industry has embraced what is known as 'just culture', where reporting errors is encouraged to prevent mistakes turning into tragedies. They discovered this, of course, through painful and tragic events, and found that many people found it hard to speak up in front of senior colleagues, even when it was a matter of life or death. It's something that can get in the way of openly pointing out errors. What's worth noting is that even when teams are working very closely together, like the crew on an aeroplane, junior staff have been known to keep quiet in an emergency rather than question the actions of a pilot. I guess you can see just how that happens. So, surgical teams now hope to learn from years of research in aviation psychology, which has made crashes a rarity and may offer some pertinent lessons to medical staff. A guy called Matt Linley, who flies jumbo jets but also now trains doctors in safety, recalls a case where a surgeon was preparing to operate on a child's hand. A junior member of staff noticed they were about to operate on the wrong hand, but her fears were dismissed. She tried again and was told quite bluntly just to be quiet. Matt was adamant about what goes on at this point: it's quite usual, he said, for people simply to back down after the first time they're not acknowledged.
Interestingly, the team finally realised they’d operated on the wrong hand about ten minutes into the procedure. Afterwards, the junior doctor said she actually felt guilty, but also that she didn’t have the skills to make herself heard.

 

Vivienne, what solution did the authors offer or describe that may help us in situations like these?

 

Well, actually, it's making use of assertive phrases and certain trigger words. Phrases like, 'I'm concerned…' or, 'I'm uncomfortable…' even, 'this is unsafe', or, 'we need to stop'. I think no matter where you are in the pecking order, to ignore those four trigger phrases would be very, very difficult, don't you think?

 

I agree.

 

Most doctors say they've had a lightbulb moment when they've finished the course that Matt Linley runs, and he says many ask, "why am I doing this course when I've been a doctor for 25 years? I should have done it on day one". So, if you want to know more about this, we'll add the link to the show notes. So, let's get down to the business of the week's show. David, what do you have for us this week?

 

Well, Vivienne, first I'll share with you a personal note. I think that your comments on the importance of staff speaking up and trigger words can't be overstated. I will share with you that, as a surgeon, there comes a moment in every case, whether that's a laparoscopic cholecystectomy or some other procedure, when we sometimes encounter quality issues even while completely confident that we are in exactly the right spot doing exactly the right thing at exactly the right moment. I encourage my surgical colleagues, even when we think that asking staff will be low yield, or even when we worry that asking would make us appear less apt, as if we don't know where we are, which is a concern for surgeons, that even in those moments it is worth our time to actively solicit input from our staff, even if it adds five minutes to our case: "Are there any concerns at this point? This is the critical part of the procedure…" That can really help with patient safety. So, the article you discuss really resonates with me.

 

I’m glad to hear that. Also, it makes us think about something that is termed ‘conscious incompetence’. Do you know what I mean?

 

I do. I think exploring that phrase is key. I think it reminds us that we can be conscious of those moments where there are high risk events occurring and we can leverage certain techniques to increase the probability that things go the right way. We can be conscious of those moments that are high risk and those quality issues can occur at different points in the procedure. So, I think that phrase is well taken and important.

 

Yes, I agree. So, come on, tell me about what you’ve got on your mind this week.

 

Well, this week Vivienne, I think we should explore control charts as a specific quality technique.

 

Right. So, are you talking about the kind of thing that’s used to determine how well a system is performing? The sort of things you might see outside operating rooms, administrator’s offices, and even hospital cafeterias. Am I right?

 

That’s exactly what I mean, Vivienne. Each location uses a control chart with the idea that the tool will show them when the data and the associated system are out of control.

 

Strange that, because I would have thought they were an easy visual tool that keeps everyone up to date with what’s going on.

 

Well, sadly that’s the oversimplified message of control charts and it actually leads us to apply control charts improperly. So, I want to flag this issue so that anyone who uses control charts to follow performance can be sure that they are using them the right way, so that they can actually get the accurate message from the data. In fact, I’m sharing that these can cause real problems if they’re not used properly. There are some major mistakes that are commonly made with control charts.

 

So, can you give us some examples please?

 

Well, first off Vivienne, a classic issue is choosing the wrong type of control chart for the data that we have and that can be problematic.

 

So, are you saying… that the type of control chart you should use varies with the type of data that you have to hand?

 

Well, exactly. For example, in healthcare, one of the most useful charts we can use is called the IMR chart, and IMR stands for Individuals Moving Range chart. These charts are particularly useful when an individual patient or item moves through a system one at a time, and one of the first problems I see with the use of control charts is that staff often pick the wrong type of control chart and apply it to their data. If you go to our blog, and that's surgicalbusinessmodelinnovation.com, you will see some charts that will help you get to grips with this and select the proper type of control chart for your data.  (Click here for that chart.)
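As an aside for readers following along at home, the arithmetic behind the individuals portion of an IMR chart is simple enough to sketch in a few lines. This is only an illustration with invented thaw-and-deliver times, not production code; the constants 2.66 and 3.267 are the standard IMR chart factors for a moving range of span 2.

```python
from statistics import mean

def imr_limits(values):
    """Control limits for an Individuals Moving Range (IMR) chart.

    The moving range is the absolute difference between consecutive
    observations; 2.66 and 3.267 are the standard IMR factors for a
    moving range of span 2 (d2 = 1.128).
    """
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    xbar, mrbar = mean(values), mean(mrs)
    return {
        "center": xbar,
        "ucl": xbar + 2.66 * mrbar,   # upper control limit, individuals
        "lcl": xbar - 2.66 * mrbar,   # lower control limit, individuals
        "mr_ucl": 3.267 * mrbar,      # upper limit for the moving range
    }

def out_of_control(values):
    """Indices of individual points beyond the I-chart control limits."""
    lim = imr_limits(values)
    return [i for i, v in enumerate(values)
            if not lim["lcl"] <= v <= lim["ucl"]]

# Hypothetical FFP thaw-and-deliver times, in minutes
times = [32, 29, 35, 31, 28, 30, 33, 55, 31, 29]
print(out_of_control(times))  # the 55-minute case is flagged: [7]
```

One patient or item moving through the system at a time is exactly the situation the individuals chart handles; subgrouped data would call for a different chart type.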

 

I guess that's easily done, really, when you're in a hurry. It's not an excuse, but I guess it does happen. I suppose there's also a tendency, to save time, to create a control chart before you have the context.

 

Well, Vivienne, you really picked up on it. That's a fact, especially when the voice of the customer or the patient is missing. What I mean by that is, the point of a control chart is to get a sense of when the data are varying within expectation, according to the tolerances set by the voice of the customer of the process, or of the patient, or of the regulatory agency. So, if we apply a control chart too quickly, we see all kinds of issues, and that kind of failure can be avoided.

 

So, David, what do you see as the main point of a control chart then?

 

Well, the control charts are utilized, Vivienne, so that you can recognize when a process is out of control, when it’s beyond expectations for what a system should do or when it’s tending to become out of control. The chart may highlight cases or values or outcomes that are causing problems much more quickly than other techniques. Sometimes control charts can even tell us things like whether the central tendency of the process is shifting.

 

Do you have any examples or can you explain this further?

 

Sure. For example, one of the treatments we typically use in trauma and acute care surgery involves fresh frozen plasma, a blood product or blood-related product. The issue is that we may feel like, for a particular patient, it took 'way too long' to thaw and deliver fresh frozen plasma, or FFP. One of the first ways we can fail in this quality circumstance is not establishing the context for how the system functions overall: what the distribution of times to deliver FFP typically is. The next way we can fail in understanding and improving the system is to not have that voice of the customer, or that tolerance, for how long it should take. What is the upper limit of how long we are willing to wait for this medication?

Vivienne, all of those roll into the issue with the control chart, because without that context, should we go to apply a control chart, it won't have meaning. Until we have a reference for how long it should take, we won't be able to adequately understand: is this process performing as expected? Is it tending to become out of control over time? So applying not just the wrong type of chart, which we've already said can happen, but applying the chart too soon, before having context, also really impacts its ability to tell us what it needs to.

With that said, you may think that a control chart tells you whether a process is effective. That's a typical mistake, and guess what: control charts in no way tell you whether a process is adequate, useful or performing well. They only tell you whether a process is in statistical control, and that's only if you select the correct type of control chart. A control chart will answer the question: are these data within expectations for the process I already have? Really, that's all they do.

 

So, there is quite a lot to think about, and I guess you’re saying that relying on statistical controls only will not help either patients or staff.

 

Well, Vivienne, they can be very valuable, but they need to be applied just so. Meaning, just selecting a control chart, or saying we're going to put our data on a control chart, will not typically have meaning for the way we want things to go for our patients and staff. So, rather than saying that they won't help patients or staff, I would say the more subtle message is that they have to be applied just so: in just the right way, in just the right context, and then they can really have value for patients.

 

So, the unsubtle headline would be, do not use control charts until you have improved your process as much as possible.

 

Absolutely, and I would also suggest that departments ensure their process, the one that they’re looking at with the control chart, really satisfies the voice of the customer or the voice of the patient or the voice of the regulatory authority that they’re targeting. On our blog, we show a sample of a control chart and it shows that no individual case was out of expectation for the process, and to an administrator that often sounds great, right?

 

Yep.

 

Unfortunately, a control chart can look great and yet demonstrate a process that is in no way adequate. Why? Because the figures show the process does not satisfy the VOC, or voice of the customer, when that standard is applied. This brings up the interesting situation where a process is completely in control and yet wholly inadequate. We see this all the time in healthcare, where run charts and control charts may be misapplied. If you do it wrong, the data will look in control and just great, yet the process will remain completely inadequate, and both you and your audience will be fooled.
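To make that point concrete, here is a small sketch, again with invented thaw times and a hypothetical 30-minute voice-of-the-customer limit, of a process that sits entirely inside its own control limits yet fails the customer's requirement on every observation:

```python
from statistics import mean

def stable_vs_adequate(values, voc_upper):
    """Contrast statistical control (IMR individuals limits) with
    meeting a customer specification. A process can be stable, with
    every point inside its own control limits, while no observation
    satisfies the voice of the customer (VOC)."""
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    xbar, mrbar = mean(values), mean(mrs)
    ucl, lcl = xbar + 2.66 * mrbar, xbar - 2.66 * mrbar
    in_control = all(lcl <= v <= ucl for v in values)
    meets_voc = all(v <= voc_upper for v in values)
    return in_control, meets_voc

# Hypothetical thaw times (minutes) against a 30-minute VOC limit
times = [41, 43, 40, 44, 42, 45, 41, 43]
print(stable_vs_adequate(times, voc_upper=30))  # (True, False)
```

The `(True, False)` result is the trap described above: "nothing special here" by the chart, yet wholly inadequate by the VOC.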

 

Right. So, David, you can have an IMR with data in control, but actually be running an inadequate system as a consequence? Am I right?

 

Well said, Vivienne. A control chart can show you that all data are in control, as if to say, “Nothing special here, so go about your business”, and yet the process hums along on its merry way to making outcomes that are completely unacceptable. That’s the danger of misapplying a control chart. Doing so makes us miss the whole point.

 

Ah, so I’m guessing the take away here is before you apply a control chart, make sure you’ve improved your data and try the best you can to become compliant with the voice of the customer.

 

Well said, and yet I would also add, there are other restrictions or important points that also go into what we have to do before we use this tool of control charts. You have to ensure that the department does not apply a control chart that’s based on the wrong underlying data distribution.

 

In your experience, is that common?

 

To be honest, most control charts that I see commonly used assume that data are normally distributed, and that's a classic fail. In fact, much data for health systems, including things like patient time in the emergency department, length of stay for patients and many other examples, are often non-normal. So, applying a control chart of a specific type, with its assumption that the data are normally distributed, is a nonstarter.

 

So, what do you think, David, is a potential solution to this?

 

It’s straightforward. We need to use control charts based on the distribution that we know the data follow. That’s why it’s important to get a sense, Vivienne, before we apply a control chart, of what data distribution we have, which one we’re looking at. If they are not normally distributed and in fact if they are some non-normal distribution, it makes no sense to apply a control chart that requires the assumption that the data follow the normal distribution, as it’s called. As a matter of fact, again, the misapplication of that control chart will mislead us.
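One quick, standard-library-only screen, offered here as an illustration rather than a substitute for a formal normality test or distribution fit, is to look at the sample skewness before trusting normal-theory control limits. The length-of-stay figures below are invented:

```python
from statistics import mean, stdev

def sample_skewness(values):
    """Adjusted sample skewness. Near 0 for symmetric data; values
    well above roughly 1 signal the long right tail common in time
    data such as ED length of stay, where control limits built on a
    normality assumption will mislead."""
    m, s, n = mean(values), stdev(values), len(values)
    return (n / ((n - 1) * (n - 2))) * sum(((v - m) / s) ** 3 for v in values)

# Hypothetical ED length-of-stay values in hours: a long right tail
los_hours = [2, 3, 2, 4, 3, 2, 5, 3, 2, 14, 3, 2, 4, 18, 3]
print(sample_skewness(los_hours) > 1)  # True: strongly right-skewed
```

In practice a formal test (Anderson-Darling, Shapiro-Wilk) or fitting the actual distribution is the better path; the point is simply to check what distribution the data follow before picking the chart.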

 

A classic case of garbage in/garbage out model?

 

Yes, indeed. In my practice, I have found that the items outlined today have helped keep me out of trouble as I've gone to apply control charts for statistical process control projects in healthcare. Remember, if you haven't improved the process and you haven't placed it in a context already, it makes no sense, and is often of little value, to apply the control chart. The control chart will only tell you whether the process is performing within some typical zone, not whether it is good enough. And a misapplied control chart can't be trusted even when it seems to say that things are just fine.

 

Well, David, I think that’s given us sufficient food for thought and good enough is never appropriate in matters of patient or clinical safety. So, thank you very much, David, and if you want to keep up to date with David Kashmer’s approach to quality and statistical process control, business model innovation and critical practice, do join us for the next programme. In fact, we are very interested to hear what innovative practices are being undertaken in your health provision. If you’d like to appear on the show, contact us through our website and we’re looking forward to hearing from you. Meanwhile, if you’ve liked the show, do leave us a rating on iTunes. It’s one way we can ensure the word is spread, and we look forward to being with you next time. So, bye for now!

 

 

Fine Time At The Podcast

Thanks to Vivienne and the team at The Healthcare Quality Podcast.  I had a great time learning about the specifics of podcasting and appreciated the help to muddle through the talk!

Look forward to working with you all in the future…I wonder how many times I used the word “share” at the beginning of the cast!

I count 4 times in 30 seconds…how many times do you think I over-used that word?  “Share” your thoughts anytime!

4 Types of Bad Metrics Seen In Healthcare

 

By:  DM Kashmer MD MBA MBB FACS (@DavidKashmer)

 

Sometimes, you can see the train coming but can't get out of the way fast enough.  Whack!  The train gets you despite your best efforts.  Wouldn't it have been great to start to get out of the way earlier?  In this entry, let's focus on how to identify, as early as possible, four types of bad metrics in healthcare so that we can run away from that particular train as early as possible.  After all, the sooner we flee from these bad actors the more likely we are to avoid being run over by them.

 

Truth is, you’ve probably seen the train of bad metrics before.  After all, you know that all sorts of things are getting measured in our field nowadays and, sometimes, certain endpoints don’t feel particularly helpful and (in fact) seem to make things a lot worse.

 

First, a disclaimer:  this entry does not argue with metrics that the government mandates. There are some things that we measure because we have to for reimbursement or other reasons. However, if you believe (like me and other quality professionals) that a focus on reducing defects eventually impacts all sorts of quality measures (even mandated ones), then this is the entry for you!  This work does not focus on arguing or pushing back against those things that we must measure owing to regulation.  Now, on with the show…

 

Let’s explore four broad categories of bad metrics and how to avoid them.

 

#1 Metrics for which you cannot collect accurate or complete data.

 

It can be very challenging, in hospitals, to collect data. Often, data collection is frowned upon, or is even thought of as an afterthought or imposition.  So, as we launch in here, remember:  saying that you can’t collect complete or accurate data is not the same as actually being unable to.

 

Colleagues, listen:  if you think you can’t afford the time to collect good data, let me tell you that you can’t afford not to collect and use data.

 

When I’m working with a team that’s new to Lean or Six Sigma and we discuss data collection, the team often balks and focuses on the fact that no one is available to measure data, that we don’t have data collection resources or that, even if we had resources, we can’t get data.

 

I usually start with a quote:  “If you think it’s tough to get data, remember how tough it is to not get data.” (Split infinitive included for drama’s sake.)

 

Then we go on to explore together several techniques we can use to make gathering data much easier, so that we can avoid the "easy out" of "we can't collect data about this and so it's not a useful metric".  In fact, most projects we do require data collection of only a few seconds per patient at most.  And that's for prospective data collection.  (Want more info about how to make data collection easy?  Email me at dmkashmer@zoho.com and I'll pass it along.)

 

However, in healthcare, we have all seen projects where data collection is arduous and so we react against data collection when we hear about it.

 

Sometimes, teams focus on using retrospective data. Of course, using retrospective data is much better than using no data. However, retrospective data has often been cleaned via editing or in some other way that makes it less valuable. Raw data that focuses on the specific operational definition of what you’re looking at tends to have the most value.

 

Sometimes, you have no way to measure a certain metric or concept and yet the team believes that concept to be very valuable. Take, for instance, a team that focused on scheduling patients for the operating room. The team felt that many patients were not prepared adequately before coming to the holding room. This included all sorts of ideas such as not having consent on the chart or some other issue. The team decided to measure this prospectively and found that only about one third of patients were completely prepared by the time they came to the pre-operative holding area. This was measured prospectively with a discrete data check sheet.
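As a hypothetical illustration of how light-touch that kind of prospective collection can be, a discrete check sheet is just one yes/no tick per patient, which takes seconds to record and one line to tally:

```python
from collections import Counter

# Hypothetical discrete check sheet: one Y/N tick per patient on
# arrival in the pre-operative holding area ("fully prepared?")
check_sheet = ["Y", "N", "N", "Y", "N", "Y", "N", "N", "N", "Y",
               "N", "N", "Y", "N", "N", "N", "Y", "N", "N", "N",
               "N", "Y", "N", "N", "Y", "N", "N", "Y", "N", "N"]

tally = Counter(check_sheet)
prepared_rate = tally["Y"] / len(check_sheet)
print(f"{prepared_rate:.0%} of patients fully prepared")  # 30%
```

A tally like this is all it takes to surface a number such as the "about one third prepared" finding above, with essentially no added workload per patient.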

 

Let me explain that, sometimes, the fact that something hasn’t been measured previously means that the organization has not had that concept on its radar previously. This goes back to the old statement that if it is measured it will be managed and its corollary that if an endpoint is not measured, it is very hard to manage that endpoint.

 

To wrap this one up:  it is important to mention that one category of bad data, or a bad metric, is a metric that you cannot measure. However, it is important to realize that just because you haven't measured it before doesn't mean that you absolutely cannot measure it. Sometimes, if the idea or concept is important enough, you should develop a measure for it. We discuss how to develop a new endpoint in the entry here. That said, if it is absolutely impossible or arduous to collect accurate or complete data, the metric is much less likely to have value…but don't just let yourself off the hook!  If you think something is important to measure, know that there are ways to collect data that require only a few seconds per patient!

 

#2 Metrics that are complex and difficult to explain to others.

 

If a metric gives a result that people can't feel or conceptualize, it's just plain less valuable. Take, for example, a metric for OR readiness. In the month of April, the operating room received a very clear score on this metric. That score was "pumpkin".

 

"Pumpkin?!"…Well, pumpkin doesn't mean much to us in terms of operating room readiness. For that reason, you may want to measure your OR preparedness with a different metric than the pumpkin. Complex and difficult metrics that lack tangible meaning should be avoided.  Choose something that tells a story or evokes an emotion.  Once upon a time, a center created (and validated) a "Hair On Fire Index" to indicate the level of emergent problems and crazy situations the operating room staff encountered in a day, and so how stressed the OR staff was that day.  Wonder how they did it?  Look here.

 

#3 Metrics that complicate operations and create excessive overhead.

 

This type of metric is especially problematic. If a metric is difficult to measure and requires an incredible level of structure / workload to create it, it may not be useful.

 

Imagine, for example, a metric to predict sepsis that requires a twelve part scoring system, multiple regression, and the computing power of IBM’s Watson. This may not be a useful day to day metric for quality or outcome. Metrics that complicate operations and create excessive difficulty should be avoided.  When you see that type of metric coming, jump out of the way of the train.

 

#4 Metrics that cause employees to ‘make their numbers’.

 

This is similar to problem metric number two. When staff can’t feel the metrics that we describe, or see how they affect patient care, it can be very hard to mentally link what we do every day to our quality levels. That can lead to situations where employees are acting just to ‘make their numbers’. That type of focus is difficult and makes metrics less useful.

 

It’s important to have metrics that we perceive as having a tangible relationship to patients and their outcomes. We are so busy in healthcare that often if staff can fudge a metric, complete a form just to say it’s done, or in some other way ‘make numbers’, well, we often see that’s what happens. (That effect may not just be confined to healthcare of course!) It can be very challenging to create a metric that very clearly indicates what we have to do (and should be doing) rather than one that is sort of an abstract number we ‘have to hit’.

 

Take Aways, Or How To Avoid Being Hit By The Train Of Bad Metrics

In conclusion, there are at least four types of bad metrics and very clear ways to avoid them. Take a moment to try to see these trains coming from as far away in the distance as possible so that you can quickly get off the tracks unscathed.

 

We need metrics that we can feel and that tell a story of our patient care. We need ones that, whether government mandated or not, relate to what we do every day. We need ones that are easily gathered and tell the story of our performance clearly, both to us as practitioners and to the staff who review us. Sometimes, we are mandated to collect certain endpoints, yet, over time, I have come to find that when we do a good job with metrics that have meaning, we often have fewer defects and see better outcomes in all the metrics…whether we are mandated to collect a particular metric or not.

 

As part of your next quality project and how you participate in the healthcare system, take a minute to focus on whether the metrics you’re using are useful and, if not, how you can make them better.  Be the first to sound the alarm if you see the train of bad metrics on the track to derail meaningful improvement for our patients.