Use Continuous Data (!)

 

 

 

For the purposes of quality improvement projects, I prefer continuous to discrete data.  Here, let’s discuss the importance of classifying data as discrete or continuous and the influence this can have over your quality improvement project.  For those of you who want to skip to the headlines: continuous data is preferable to discrete data for your quality improvement project because you can do a lot more with a lot less of it.

 

First, let's define continuous versus discrete data.  Continuous data is data that is infinitely divisible: data you can keep dividing, ad infinitum, counts as continuous.  Time is a good example.  One hour can be divided into two groups of thirty minutes, minutes can be divided into seconds, and seconds can be divided on down the line.  Contrast this with discrete data: discrete data is data which is, in short, not continuous.  (Revolutionary definition, I know.) Things like percentages, levels, and colors come in divided packets and so can be called discrete.

 

Now that we have our definitions sorted, let's talk about why discrete data can be so challenging.  First, when we go to sample directly from a system, discrete data often demands a larger sample size.  Consider our simple sample size equation for how big a sample of discrete data we need to detect a change:

 

n = (p)(1 - p)(2 / delta)^2.

 

This sample size equation for discrete data has several important consequences.  First, consider the terms.  Here, p is the probability of a certain event occurring; this is for percentage-type data where we have a yes or no, go or stop, etc.  Delta is the smallest change we want to be able to detect with our sample.  Together, these terms determine our sample size.

 

The 2 in the equation comes from the (approximate) z-score at the 95% level of confidence.  The true value of z is about 1.96; we round up to 2 because that gives us a slightly larger, more conservative sample.  We also round the final answer up to a whole number rather than keep a fraction.  (How do you have 29.2 of a patient, for example?) Rounding up matters because rounding down would yield a sample that is slightly too small.

 

In truth, there are many other factors in sampling besides sample size alone.  Still, notice what happens when we work through this sample size equation for discrete data.  Let's say we have an event that has a 5% probability of occurring; this would be fairly typical for many things in medicine, such as wound infections in contaminated wounds.  To detect a 2% change in that percentage, we have 0.05 x 0.95 x (2 / 0.02)^2.  This gives us approximately 475 samples required to detect a smallest possible change of 2%.  In other words, we need a fairly large sample to see a reasonable change.  We can't detect a change of 1% with that sample size, so if we think we see 4.8% as the new percentage after interventions to reduce wound infections...well, given our sample size, we can't really say whether anything has changed.
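To make that arithmetic concrete, here is a minimal Python sketch of the discrete-data sample size calculation.  The function name and the wound infection numbers are purely illustrative, not part of any standard package:

```python
from scipy.stats import norm

def discrete_sample_size(p, delta, z=None):
    """Approximate sample size needed to detect a change of size
    `delta` in a proportion with baseline probability `p`."""
    if z is None:
        # the true z at the 95% level of confidence is about 1.96;
        # the text rounds it up to 2 to keep the sample conservative
        z = norm.ppf(0.975)
    return p * (1 - p) * (z / delta) ** 2

# Wound infection example from the text: p = 5%, smallest detectable change = 2%
print(round(discrete_sample_size(0.05, 0.02, z=2)))   # 475
print(round(discrete_sample_size(0.05, 0.02)))        # about 456 with z = 1.96
```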

 

One more thing:  don't know the probability of an event because you've never measured it before?  Make your best guess.  Many of us use 0.5 as the p if we really have no idea, because a p of 0.5 maximizes p(1 - p) and so gives the largest, most conservative sample.  Some sample size estimate is better than none, and you can always revise the p as you start to collect data from the system and get a sense of what the p actually is.

 

Now let’s consider continuous data.  For continuous data, sample size required to detect some delta at 95% level of confidence can be represented as

 

 

n = (2 x [historic standard deviation of the data] / delta)^2.

 

When we plug numbers into this simplified sample size equation, we see very quickly that much smaller samples are required to show significant change.  This is one of the main reasons why I prefer continuous to discrete data: smaller sample sizes can show meaningful change.  That said, for many of the different endpoints you will be collecting in your quality project, you will need both.  Remember, as with the discrete data equation, you set the delta as the smallest change you want to be able to detect with your data collection project.

 

Interesting trick:  if you don't know the historic standard deviation of your data (or you don't have one), take the highest value of your continuous data, subtract the lowest, and divide the result by 3.  Voilà...an estimate of the historic standard deviation.
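Here is a matching sketch for the continuous-data formula, including the range-divided-by-3 estimate of the historic standard deviation described above.  The "door-to-CT time" numbers are invented purely for illustration:

```python
def continuous_sample_size(sigma, delta, z=2):
    """Approximate sample size to detect a change of `delta` in a
    continuous measure with standard deviation `sigma`, using the
    rounded z of 2 from the text."""
    return (z * sigma / delta) ** 2

def estimate_sigma(values):
    """Rough historic standard deviation estimate from the text:
    (highest value - lowest value) / 3."""
    return (max(values) - min(values)) / 3

# Hypothetical door-to-CT times in minutes
times = [22, 35, 41, 28, 55, 31, 47]
sigma = estimate_sigma(times)                          # (55 - 22) / 3 = 11 minutes
print(round(continuous_sample_size(sigma, delta=5)))   # about 19 samples
```

Compare that 19 with the 475 required in the discrete example above, and the case for continuous endpoints makes itself.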

 

Another reason why continuous data is preferable to discrete data is the number of powerful tools it unlocks.  Continuous data allows us to use many other quality tools such as the Cpk, data power transforms, and useful hypothesis testing. This can be more challenging with discrete data.  One of the best ways we have seen to represent discrete data is a Pareto diagram.  For more information regarding the Pareto diagram, visit here.

 

Other than the Pareto diagram and a few other useful displays, discrete data presents us with more challenges for analysis.  Yes, there are statistical tests, such as the chi-squared proportions test, that can determine statistical significance.  However, continuous data plainly opens up a wider array of options for us.

 

Having continuous data often allows us to make better visual representations and gives our team a clearer picture of the robustness of the process along with its current level of variation.  This can be more challenging with discrete data endpoints.

 

In conclusion, I like continuous data more than discrete data and I use it wherever I can in a project.  Continuous data endpoints often allow better visualization of variation in a process.  They also require smaller sample sizes and unlock a fuller cabinet of tools with which we can demonstrate our current level of performance.  In your next healthcare quality improvement project, be sure to use continuous data endpoints where possible and life will be easier!

 

Disagree?  Do you like discrete data better or argue “proper data for proper questions”?  Let us know!

 

 

 

A Very Personal Take On Medical Errors

 


 

 

It’s A Little Personal…

Let me share a personal story about the differences between how I was trained in Surgery to think about medical errors and my later training in statistical process control. Here, let's discuss some personal thoughts on the differences between a systemic approach to error and the more traditional approach, with its focus on personal assignability. I am sharing these thoughts owing to my experience in different organizations, which have ranged from those that seek to lay blame on one specific person to those that focus on error reduction at the level of the system. What are some characteristics of each of these approaches?

 

M&M Is Useful, But Not For Quality Improvement (At Least Not Much)

I remember well my general surgical training and subsequent fellowships and I’m grateful for them.  I didn’t realize, at the time, how much of my training was very focused on personal issues with respect to quality improvement. What I mean is that, at the morbidity and mortality conference, I was trained both directly and indirectly to look at the patient care I provided and to focus on it for what I could improve personally.  This experience was shared by my colleagues.  The illusion by which we all abide in morbidity and mortality conference is that we can (and should) overcome all the friction inherent in the system and that by force of personal will and dedication we should be able to achieve excellent results or great outcomes based on our performance alone. What I mean is our morbidity and mortality presentations, or M&M’s, don’t focus on how the lab tests weren’t available, how the patient didn’t have their imaging in a timely fashion, or any of the other friction that can add to uncertainty in fluid situations. M & M, as many of my colleagues have said, is a contrivance. Read on, however, because there’s more: while M & M may be a contrivance, it is a very useful contrivance for training us as staff.

 

Consider that in the personally assignable world of the M&M conference we often take responsibility for decisions we didn't make. Part of the idea of the M&M conference to this day (despite the 80-hour restrictions for residents) is that the resident understand the choices made in the OR and be able to defend, or at least represent, them effectively...even if that resident wasn't in the OR. So from the standpoint of preventing defects, a case presentation by someone who wasn't in the OR may help educate the staff...yet it probably doesn't make for effective process improvement, at least not by itself.

 

Clearly, this "personal responsibility tack" is an excellent training tool for residents.  Morbidity and mortality conference focuses on what we could do better personally. It forces us to read the relevant data and literature on the choices that were made in the operating room.  It is extraordinarily adaptive to place trainees in the position where they must defend certain choices, understand certain choices, and be able to discuss the risk versus benefit of the care in the pre-operative, intra-operative, and post-operative phases.  However, classic M&M is not a vehicle for quality improvement.

 

In The Real World, There Are Many Reasons For A Positive (Or Negative) Outlier

 

What I mean by this is that we in statistical process control know (and as we in healthcare are learning) there are many reasons that both positive and negative outliers exist.  Only one of the causes for a “bad” outcome is personal failure on the part of the provider and staff, and, in fact, most issues have roots in many other categories of what creates variation. This does not mean that, as a provider, I advocate a lack of personal responsibility for poor medical outcomes and outliers in the system.  (I’ve noticed that staff who, like me, grew up with the focus on personally assignable error and a “who screwed up” mentality typically accuse the process of ignoring personal responsibility owing to their lack of training or understanding of the process.) However, I recognize that outcomes have “man” or “people” as only one cause of variation.  In fact, as we have described previously on the blog, there are six causes of special cause variation.

 

There are six categories of reasons why things occur outside the routine variation for a system.  This doesn’t mean that a system’s normal variation (common cause variation) is even acceptable. In fact, sometimes systems can be performing at their routine level of variation and that routine level of variation is unacceptable as it generates too many defects. Here, let’s focus on the fact that there are 6 causes of special cause variation which can yield outliers above and beyond other values we might see. As mentioned before in the blog here, these 6 causes include the 6 M’s, which are sometimes referred to as the 5 M’s and 1 P.

 

Which approach to error and process improvement do I favor?  I favor the more comprehensive approach to error reduction inherent in the statistical process control methodology.  This process is not just for manufacturing and I know this because I’ve seen it succeed in healthcare, where much of the task involved was helping the other physicians understand what was going on and the philosophy behind it.

 

Let me explain why there can be so much friction in bringing this rigorous methodology to healthcare.  In healthcare, I believe, we are often slower to adopt changes.  This is especially true for changes in our thought processes and philosophy.  I think that is perfectly acceptable: the conservative approach protects patients.  We don't accept something until we know it works, and works very well.  It does, however, make us later to adopt certain changes (late to the party) compared to the rest of society.  One of those changes is the rigorous approach to process control.

 

Physicians and surgeons may even feel that patients are so different that there can be no way to have data that embody their complexities.  (That's another classic challenge to the Lean and Six Sigma process, by the way.) Of course, in time, physicians realize that we use population level data all the time and we see it in the New England Journal of Medicine, The Lancet, and other journals.  A rigorous study with the scientific method, which is what statistical process control brings, allows us to narrow variation in a population without ignoring individual patient complexities.  After all, we do not commit the fallacy of applying population level data directly to individual patients; INSTEAD we make system-wide changes that support certain outcomes.  Surprisingly, after only a month of experiencing the improvements, even physicians come to believe in the methods.

 

Also, physicians are not trained in this and we see only its fringes in medical education.  This is also adaptive, as there is a great deal to learn in medical education and a complete introduction to quality control may be out of place.  However, the culture of medicine (of which I am a part) often still favors, at least in Surgery, a very personal and self-focused approach to error. Still, I can say with confidence, and after experimentation, that the systemic approach to error reduction is more effective.

 

Lean & Six Sigma Have Been Deployed In The Real World Of Healthcare…And Yes They Work

 

As a Medical Director and Section Chief for a trauma and acute care surgery center, I had the opportunity to deploy statistical process control in the real world as my colleagues and I rebuilt a program with administrative support.  This was highly effective and allowed our surgical team to focus on our rate of defect production as a system.  It moved the focus away from individual differences, which in turn helped team building.  It also gave us a rigorous way to measure interval improvement. These are just a few advantages of statistical process control.

 

Other advantages included the fact that it allowed us to know when to make changes and when to let the system chug along. Using statistical process control allowed us to know our type one and type two error rates, which is key to knowing when to change a system.  For more information regarding type one and type two error rates, look here.

 

There are advantages to both approaches to errors.  The straightforward and often more simplistic view of personal responsibility is highly adaptive and very advantageous for training surgeons.  I think that, while training surgeons, we should realize (and make it transparent) that this personal approach to error is merely a convention which is useful for teaching, keeping us humble, and focusing on how we can improve personally.  After all, surgical trainees are often in the position of taking responsibility, in a conference format, for decisions over which they had no influence. They also must, again, maintain the illusion that there were no barriers to excellent patient care beyond their control, such as multiple trauma activations at once, lab tests not being performed, or short-staffing on holidays.  Again, personal responsibility and the illusion of complete control over the production of errors are important when the focus is on education, and for this reason the personal approach to error is highly adaptive.

 

However, when we want to actually make fewer defects, a systemic approach to error that recognizes personal issues as just one of the 6 causes of potential defects is key, as is a rigorous methodology to bring about change.  Being able to quantify the common cause level of variation and the special causes of variation in a system is a very useful way to actually produce fewer defects.  As statistical process control teaches us, prevention is the only portion of the cost of poor quality that has a return on investment.  For more information on the cost of poor quality, visit here.

 

Personal Responsibility Is One Part Of A More Comprehensive View

At the end of the day, I view personal responsibility for medical error as just one portion of a more comprehensive view on error reduction, risk reduction, and true quality control.  As a surgeon, I strongly advocate personal responsibility for patient care and excellent direct patient care.  This is how I was trained.  However, I feel that, although this is key, my focus on how I can do better personally is part of a larger, more comprehensive focus on the reduction and elimination of defects.  Statistical process control gives us a prefabricated format that uses rigorous mathematical methods to embody and allow visualization of our error rate.  Another difference from the more classic model of process improvement in healthcare is that statistical process control tends to degenerate into pejorative discussion less often than the personally focused approach does.

 

Unfortunately, I have been in systems where staff were overly focused on who made an error (and how) while they ignored the clear systemic issues that contributed to the outcome.  Sometimes it is not an individual's wanton maliciousness, indolence, or poor care that yielded a defect.  Often, there was friction inherent in the system and the provider didn't "go the extra mile" that M&M makes us believe is always possible.  Sometimes it is a combination of all these issues plus non-controllable factors such as the weather.

 

The bottom line is, at the end of the day, statistical process control as demonstrated in Lean and Six Sigma methodology allows us to see where we fit in a bigger picture and to rigorously eliminate errors.  I have found that providers in the systems tend to “look much better” when there has been this focus on systems issues in a rigorous fashion.  Outcomes that were previously thought unachievable become routine.  In other words, when the system is repaired and supportive, the number of things we tend to attribute to provider defects or patient disease factors substantially decreases.  I have had the pleasure to deploy this at least once in my life as part of a team, and I will remember it as an example of the power of statistical process control and Lean thinking in Medicine.

 

Questions, comments, or thoughts on error reduction in medicine and surgery?  Disagree with the author’s take on personal error attribution versus a systemic approach?  Please leave your comments below and they are always welcome.

These Two Tools Are More Powerful Together

 


 


 

Using two quality improvement tools together can be more powerful than using one alone. One great example is the use of the fishbone diagram and multiple regression as a highly complementary combination.  In this entry, let’s explore how these two tools, together, can give powerful insight and decision-making direction to your system.

 

You may have heard of a fishbone, or Ishikawa diagram, previously. This diagram highlights the multiple causes for special cause variation.  From previous blog entries, recall that special cause variation may be loosely defined as variation above and beyond the normal variation seen in a system.  These categories are often used in root cause analysis in hospitals.  See Figure 1.

 

Figure 1: Fishbone (Ishikawa) diagram example

 

As you also may recall from previous discussions, there are six categories of special cause variation. These are sometimes called the "6 M's" or "5 M's and one P". They are Man, Materials, Machine, Method, Mother Nature, and Management (the 6 M's).  We can replace the word "man" with the word "people" to obtain the 5 M's and one P version of the mnemonic.  In any event, the point is that an Ishikawa diagram is a powerful tool for laying out the potential root causes of different defects.

 

Although fishbone diagrams are intuitively satisfying, they can also be very frustrating.  For example, once a team has met and has created a fishbone diagram, well...now what?  Other than opinion, there really is no data to demonstrate that what the team THINKS is associated with the defect / outcome variable is actually associated with that outcome.  In other words, the Ishikawa represents the team's opinions and intuitions.  But is it actionable?  Can we take action based on the diagram and expect tangible improvements?  Who knows.  This is what's challenging about fishbones:  we feel good about them, yet can we ever regard them as more than just a team's opinion about a system?

 

Using another tool alongside the fishbone makes for greater insight and more actionable data.  We can more rigorously demonstrate that the outcome / variable / defect is directly and significantly related to those elements of the fishbone about which we have hypothesized with the group.  For this reason, we typically advocate taking that fishbone diagram and utilizing it to frame a multiple regression.  Here’s how.

 

We do this in several steps.  First, we label each portion of the fishbone as "controllable" or "noise".  Said differently, we try to get a sense of which factors we have control over and which we don't.  For example, we cannot control the weather.  If sunny weather is significantly related to the number of patients on the trauma service, well, so it is, and we can't change it.  Weather is not controllable by us.  When we perform our multiple regression, we do so with every identified factor labeled as controllable or not, and each is embodied in the multiple regression model.  Then, depending on how well the model fits the data, we may decide to see what happens when the elements that are beyond our control are removed from the model, so that only the controllable elements are used.  Let me explain this interesting technique in greater detail.

 

Pretend we create the fishbone diagram in a meeting with stakeholders. This lets us know, intuitively, what factors are related to different measures. We sometimes talk about the fishbone as a hunt for Y=f(x) where Y is the outcome we’re considering and it represents a function of underlying x’s. The candidate underlying x’s (which may or may not be significantly associated with Y) are identified with the fishbone diagram.  Next, we try to identify which fishbone elements are ones for which we have useful data already.  We may have rigorous data from some source that we believe. Also, we may need to collect data on our system. Therefore, it bears saying that we take specific time to try to identify those x’s about which we have data.  We then establish a data collection plan. Remember, all the data for the model should be over a similar time period.  That is, we can’t have data from one time period and mix it with another time period to predict a Y value or outcome at some other time.  In performing all this, we label the candidate x’s as controllable or noise (non-controllable).

 

Next, we create a multiple regression model with Minitab or some other program. There are lots of ways to do this, and some of the specifics are ideas we routinely teach to Lean Six Sigma practitioners or clients. These include the use of dummy variables for data questions that are yes/no, such as "was it sunny or not?" (You can use 0 as no and 1 as yes in your model.) We then perform the regression, taking care to account for confounding if we think two or more x's are clearly related.  (We will describe this more in a later blog entry on confounding.) Finally, when we review the multiple regression output, we look for an r^2 value greater than 0.80, which indicates that more than 80% of the variability in our outcome data, our Y, is explained by the x's in the model. We prefer higher r^2 and r^2 adjusted values; r^2 adjusted is a more stringent measure that penalizes the model for the number of terms it contains, and we like both to be high.

 

Next we look at the p values associated with each of the x’s to determine whether any of the x’s affect the Y in a statistically significant manner.  As a final and interesting step we remove those factors that we cannot control and run the model again so as to determine what portion of the outcome is in our control.  We ask the question “What portion of the variability in the outcome data is in our control per choices we can make?”
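As a rough illustration of these steps, here is a minimal sketch in Python using statsmodels on made-up data.  The variable names (divert_hours, icu_beds, sunny, weekend) and all values are hypothetical, not the actual variables from any real project:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data: Y is hours on divert; the candidate x's come
# from the fishbone (sunny and weekend are 0/1 dummy variables).
df = pd.DataFrame({
    "divert_hours": [10, 4, 12, 3, 15, 6, 9, 2, 14, 5, 11, 7],
    "icu_beds":     [2, 6, 1, 7, 0, 5, 3, 8, 1, 6, 2, 4],   # controllable
    "sunny":        [1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0],   # noise (weather)
    "weekend":      [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1],   # noise
})

y = df["divert_hours"]

# Full model: controllable and noise factors together
X_full = sm.add_constant(df[["icu_beds", "sunny", "weekend"]])
full = sm.OLS(y, X_full).fit()
print(full.rsquared, full.rsquared_adj)   # look for r^2 > 0.80
print(full.pvalues)                       # which x's are significant?

# Re-run with only the controllable factor to see how much of the
# variation is explained by choices we can actually make
X_ctrl = sm.add_constant(df[["icu_beds"]])
ctrl = sm.OLS(y, X_ctrl).fit()
print(ctrl.rsquared, ctrl.rsquared_adj)
```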

 

So, at the end of the day, the Ishikawa / fishbone diagram and the multiple regression are powerful tools that complement each other well.

 

Next, let me highlight an example of a multiple regression analysis, in combination with a fishbone, and its application to the real world of healthcare:

 

A trauma center had issues with a perceived excess of time on "diversion", or time during which patients are not being accepted and so are diverted to other centers. The center had more than 200 hours of diversion over a brief time period.  For that reason, the administration was floating multiple reasons why this was occurring.  Clearly, diversion could impact quality of care for injured patients (because they would need to travel further to reach another center) and could represent lost revenue.

 

Candidate reasons included an idea that the emergency room physicians (names changed in the figure beneath) were simply not talented enough to avoid the situation.  Other proposed reasons included the weather, and still others included lack of availability of regular hospital floor beds.  The system was at a loss for where to start, and it was challenging for everyone to be on the same page about what to do next with this complex issue.

 

For this reason, the trauma and acute care surgery team performed an Ishikawa diagram with relevant stakeholders and combined this with the technique of multiple regression to allow for sophisticated analysis and decision making.  See Figure 2.

 

Figure 2: Multiple regression output

 

Variables utilized included the emergency room provider who was working when the diversion occurred (as they had been impugned previously), the day of the week, the weather, and the availability of intensive care unit beds, to name just a sample of the variables used.  The final regression gave an r^2 value less than 0.80 and, interestingly, the only variable which reached significance was the presence or absence of ICU beds.  How do we interpret this?  The variables included in the model explain less than 80% of the variation in the amount of time the hospital was in a state of diversion ("on divert") for the month.  However, we can say that the availability of ICU beds is significantly associated with whether the hospital was on divert: fewer ICU beds were associated with increased time on divert.  This gave the system a starting point to correct the issue.

 

Just as important was what the model did NOT show.  The diversion issue was NOT associated significantly with the emergency room doctor.  Again, as we’ve found before, data can help foster positive relationships.  Here, it disabused the staff and the rest of administration of the idea that the emergency room providers were somehow responsible for (or associated with) the diversion issue.

 

The ICU was expanded in terms of available nursing staff, which allowed more staffed beds and made the ICU more available to accept patients. Recruitment and retention of nurses were linked directly to the diversion time for the hospital:  the issue was staffed beds, and so the hospital realized that hiring more nursing staff was one needed intervention.  This led to a recruitment and retention push and, shortly thereafter, an increase in the number of staffed beds.  The diversion challenge resolved immediately once the additional staff was available.

 

In conclusion, you can see how the fishbone diagram, when combined with multiple regression, is a very powerful technique for determining which issues underlie the seemingly complex choices we make on a daily basis.  In the example above, a trauma center utilized these techniques together to resolve a difficult problem. At the end of the day, consider utilizing a fishbone diagram in conjunction with a multiple regression to help make complex decisions in our data intensive world.

 

Thoughts, questions, or feedback regarding your use of multiple regression or fishbone diagram techniques? We would love to hear from you.

 

8 Useful Tools For Your Blog


 

 

We get frequent questions at the blog from colleagues and contacts. We often trade information about the different tools we use to create the blog. Here we will list some of the useful tools we have found online and will distill our experience in reading about how to create a blog with a certain look and feel. Yes, we are not marketing experts by any means; however, this list of tools (which we routinely use) is taken from many excellent blog writers across the internet. Please learn from our years of reading!  (Yes, David used his profiles as examples–why not!)

1) inspirably.com

This tool is useful for creating graphics focused around quotations. We use this one routinely so as to generate some of the titles for our blogs. Color and font can easily be varied. This looks great on a Mac or Windows PC.

2) placeit.net

Placeit is a relatively new tool to us. It lets you take any picture file and place it in a natural setting on a PC, iPhone, iPad, or similar device. The results look very nice and are available for download. The service is free to try, but the picture is watermarked unless you pay for a subscription, and there are different license levels based on the expected traffic for your blog or website.

3) Natural readers

Natural Readers is the program we utilize to generate audio content, and it is an excellent way to vary the type of content on your blog. After we have an entry written, we use the natural-sounding voice to generate the read (audio) version of the entry. We then embed this in our blog by taking the MP3 file we generate and placing it into the entry. Varied media content is felt by most bloggers to be more attractive to viewers.

4) Twitter

Twitter is an excellent tool to help drive traffic to your blog. You can create a teaser tweet that lets people know about your new entry and drives traffic. If you are a fan of Twitter, you probably already know about URL shorteners. URL shorteners, such as the one available from Google or tinyurl.com, take long website addresses and shorten them so they can be pasted into Twitter and similar social media outlets. These shorter URLs take up fewer characters, and the links typically redirect permanently to your content. This leaves you more characters to say what you want in the tweet itself.

5) slideshare.net

Slideshare is an excellent content outlet. You can create a powerpoint or MP4 movie and upload this. This allows you to direct traffic, again, to your blog. Slideshare has many viewers.

6) storify.com

Storify is a very useful service that allows you to incorporate tweets and other links into the body of your entry. This allows you to type an entry and craft content with social media. It is very useful for quick blog entries on hot topics. Storify allows you to embed content onto your website.

7) WordPress

WordPress is one of the most popular platforms for blogging etc. It is easy to embed different media into your blog entry. We advise you to purchase a standalone domain name from godaddy.com. Our blog name is unusually long; a shorter one would probably be more useful.  However, your blog title should make it clear what your blog is about. We recommend uploading WordPress directly to your new domain with godaddy or similar platform. Ours is done with godaddy and it was very straightforward for even a non-programmer to upload WordPress to get started.

8) Linkedin

Linkedin is another tool we use to drive traffic to the blog. As a new feature on Linkedin, you can upload the title and content of your blog as a link on your profile. New entries will appear as part of your showcase and will direct people to your blog.  When you log into LinkedIn, check David’s profile and you’ll see the blog entries.

We hope you find these 8 tools useful for your blogging experience. Remember, blogging is sort of a labor of love–it takes time to increase traffic and get the certain look and feel you want for your blog. In a future entry we will describe some of our experience for how to drive traffic to your blog. Please notice that varied content and multiple outlets are useful to bring people to your site. Additionally, allow us to recommend utilizing photos and eye catching content. Remember, on Twitter and other outlets, content with photos and other media are viewed and clicked much more frequently than the content that is text only. Again, we hope you find these tools and tips useful to achieve the look and exposure you would like for your content. If you have any questions, suggestions, or other tips that you think would be helpful to writers who maintain a blog, let us know. We are always glad to hear!

A Lean Surgery Project To Get You Started


 

There are some quality improvement projects that are so straightforward we see them repeated across the country.  One of these is decreasing the number of surgical instruments in our operative pans.  The impetus is that we only infrequently use many of the clamps and devices that we routinely have sterilized for different procedures.  This project, commonly referred to as "Leaning the pan", is useful and intuitive.  Here we take a second to describe how the project looks and some ways in which you might decide to apply it in your practice.

 

First, much of this Lean project focuses on the concept of value added time, or VAT.  It turns out, in most systems, only approximately 1% of the time is spent adding value to whatever implement or service we are providing. It's a striking statistic that we see repeated across systems.  Again, only approximately 1% of our time is generally spent on things for which the customer will pay.  As we have described before on the blog, here, one of the challenges we have in healthcare is establishing who the customer is.  In part, the customer is the patient who receives the service.  In another very real sense the customer is the third party payer who reimburses us for our procedures.  The third party payer does not reimburse us any more or less if we use 20 Kelly clamps or 10 Kelly clamps to finish a procedure.  Do we need 40 Kelly clamps in a pan?  If we use the most expensive Gore-Tex stitch, or the least expensive silk suture, our reimbursement does not vary.  So, this concept of value added time is key in Leaning the pan.

 

We can demonstrate as we go through this quality improvement project that we are decreasing the amount of time spent on things that do not add value to the case.  In short, we can show that our proportion of value added time increases just as our proportion of non-value added time decreases.  As we begin to set up the preconditions for this project, one of the things to focus on is how much time a procedure takes, and here the concept of an operational definition becomes important.  When does a procedure start and end?  The procedure could start from the time the nurse opens the pan and spends time counting (along with the scrub nurse) the implements in the pan.  Alternatively, we could focus on room turnover time and include the counting as part of that defined time.  Either way, fewer instruments to count translates into less time spent counting.  We could also define the procedural time to include the time spent sterilizing and repackaging the instruments.  Again, as with all quality improvement projects, the operational definition of what time we are measuring and what we call procedural time is key.

 

Another useful idea in Leaning the pan is the Pareto diagram.  As you probably remember, the Pareto Principle (or 80/20 rule) was originally described by the Italian economist Vilfredo Pareto.  It holds that approximately 80% of the effect seen comes from 20% of the possible causes; in other words, a vital few causes create the bulk of the effect in a system.  Pareto was originally focused on the distribution of wealth in Italy; however, the 80/20 principle has since been applied to many other systems and practices throughout the business and quality improvement world.  In short, there is now a named Lean Six Sigma tool called the Pareto diagram.

 

The Pareto diagram is a histogram that demonstrates frequency of use or occurrence of different items or implements in a system. See Figure 1.  In general, we know that if we select 10 instruments and plot out how frequently they are used, we will find that only approximately 2 of the 10 instruments are responsible for over 80% of the usage of instruments in a procedure.

 

 

Fig. 1: Sample Pareto diagram from excel-easy.com

 

The example above highlights how two complaints out of ten possibilities about food (overpriced and small portions) are responsible for around 80% of issues.  Similarly, this tool may be used as a graphic way to demonstrate that the bulk of the instruments in the operative pan are not used or are used rarely.  There are several options here.  First, we create a data collection plan to record how many times each instrument in the pan is used.  Next, we plot instrument usage as a Pareto diagram.  Then we can say, "Ok, let's remove the rarely used instruments from the pan, or perhaps keep the few rarely used instruments that are particularly hard to find."
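As a sketch of what that plot might look like, here is a minimal Pareto chart in Python with matplotlib; the instrument names and usage counts are invented for illustration only:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical usage counts per instrument over a month of cases
instruments = ["Kelly clamp", "Needle driver", "Forceps", "Metzenbaum",
               "Richardson", "Allis", "Babcock", "Kocher", "Tonsil", "Right angle"]
counts = np.array([280, 240, 40, 25, 15, 10, 8, 6, 4, 2])

# Sort descending and compute the cumulative percentage
order = np.argsort(counts)[::-1]
counts_sorted = counts[order]
names_sorted = [instruments[i] for i in order]
cum_pct = 100 * np.cumsum(counts_sorted) / counts_sorted.sum()
# here the first two instruments account for roughly 80% of all usage

fig, ax = plt.subplots()
ax.bar(names_sorted, counts_sorted)
ax.set_ylabel("Times used")
ax.tick_params(axis="x", rotation=45)

ax2 = ax.twinx()                                 # cumulative-percentage line
ax2.plot(names_sorted, cum_pct, marker="o", color="gray")
ax2.axhline(80, linestyle="--", color="gray")    # the 80% reference line
ax2.set_ylabel("Cumulative %")

plt.tight_layout()
plt.show()
```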

 

In any case, we will usually discover that the bulk of the instruments we sterilize every time are not vital for performance of the procedure and are, in fact, expendable.  So, the Pareto diagram is a useful tool to demonstrate which instruments can (and should) be removed from the pan.  Again, this may take some data collection.

 

We have now demonstrated a straightforward way to show a change in value added time with our surgical instrument sterilization project, and we have also shown one of the key ways to highlight which instruments are used and which can go.  Next, let's discuss some of the interesting solutions and consequences from Leaning the pan projects across the country.  First, we can usually establish consensus among a team of surgeons based on data about which tools and instruments are actually used.  We can establish one pan, backed by data, that contains the instruments we all use as surgeons.  This eliminates the need for each doctor to have their own special pan.  We can then take those hard-to-find instruments, or things that individual surgeons feel are must-haves or must-have-available, and put those in accessory packs for each surgeon.  So, the basic laparotomy tray can be the same for everyone with its Lean, time-saving methodology.  This saves time not just for one procedure but, over the total number of procedures, a surprising amount:  if we performed 1000 exploratory laparotomies in a year and saved 5 minutes per laparotomy, we have clearly saved 5000 minutes of non-value added time over the course of the year, which works out to more than 80 hours eliminated from the procedures per year.  Results like this are key to establishing the utility of these projects.  Let's look at some other keys.

 

One of the other keys to a successful project is a project charter.  Before we even begin a Leaning the pan project, it is useful to have a stakeholder meeting with all the people involved in sterilizing trays and pans. This way, there can be a discussion about some of the things required by our system and the reasons why things are the way they are now; it is important to get a sense of this at the beginning of the quality improvement project.  A project charter will include the scope of the project, the people involved, and an outline of the days required for the project to be completed.  When we surveyed Lean Sensei and Lean Six Sigma Black Belts, we found that the project charter is one of the most frequently used tools in the body of knowledge at the onset of a project.  It is key in that it clearly focuses us on what is important for the project, the timeline, the stakeholders, and what the outcome measures will be.  Again, for the Leaning the pan project we would recommend value added time as one of the key outcome measures.

 

Another key outcome measure should include something about cost.  This helps with the business case for managing up the organization.  Typically, in Lean and Six Sigma projects we use the cost of poor quality (COPQ), which we have described previously here.  In this case, the cost of poor quality is somewhat more challenging to establish.  Remember, the cost of poor quality is composed of four "buckets":  the cost of internal failures, the cost of external failures, the cost of surveillance, and the cost of prevention.  For more information on the COPQ and how it is calculated, look here.  For this project the COPQ is harder to demonstrate:  what internal failures and external failures exist with this Leaning the pan model?  Instead of using a strict COPQ in this case, we recommend demonstrating any cost savings based on the cost of instrument sterilization, the number of times instruments can be sterilized before being replaced (life extension for instruments), and the savings that flow from the decreased time spent counting a tray (i.e., more cases).

 

In short, it will be very challenging to demonstrate direct cost savings with this type of Leaning the pan project.  We have seen around the country that it is difficult to show firm cost savings on the income statement or balance sheet with this project.  However, it is a good starter Lean project and can really help the surgeons and operative team see the value in the Lean methodology.  It can also help build consensus as an early, straightforward project in your Lean or Six Sigma journey.

 

In conclusion we have described the process of Leaning the operating room pan.  As most Lean projects go, this one is relatively straightforward and includes the concept of value added time in addition to the Pareto diagram.  It is more challenging to use other Lean tools such as value stream mapping and load leveling with a project like this.  However, some standard Lean tools can greatly assist the practitioner in this nice warm-up project.  The cost of poor quality is more challenging to establish and much of the case for the savings and decrease in waste from projects like this may come from the representation of how value added time increases as a proportion of time spent.

 

Discussion, thoughts, or personal reports of how you demonstrated cost savings or “leaned the pan” in your operating room?  We would love to hear your comments and thoughts beneath.

Let’s Choose Residents Differently

 


 

 

Many Are Wondering How To Choose Residents

One of the ways we identify new blog topics is by searching the web and reading social media with content from our colleagues.  Sometimes, this overlaps with our area of expertise and things in which we at the blog have an interest.  For about the last 5 years I have had a strong interest in how resident surgeons are selected by training programs.  Recently, one of my co-bloggers ran across an entry from one of our colleagues that also referenced this difficult topic.  What are some of the ways to select for resident surgeons among applicants so we can identify who will eventually make excellent attending surgeons?  How should we be selecting our residents?

 

Here’s What Doesn’t Work

First, let’s consider what doesn’t work in resident selection.  Here we find most of the criteria that are routinely used to select residents.  Things such as USMLE scores and other classically used selection criteria just plain don’t work.  There is significant evidence for this and let’s avoid reciting what my colleagues at other blogs have described.  We will simply point you to them with a link here.

 

Why Don’t We Use Things That Work?

Next, let's consider some potential reasons why the criteria we frequently use don't work and why we keep using them.  For one, we as surgeons typically are not educated in how to select people for different jobs.  We may think one good surgeon can tell who else will make a good surgeon through some combination of personal interviewing, test scores, or other routine criteria.  However, as my colleague the Skeptical Scalpel points out very adeptly, that doesn't seem to be the case.

 

In short, we are not educated on what selection criteria can be utilized to find people who will be effective in their profession. Often, attending surgeons who help to select residents have little to no education in the significant amount of human resource literature that has been created on how to select people for certain positions.  Just like many things in life, there may be some science and literature that makes things a little easier.  Let’s take a look.

 

First, consider that, as a surgeon, I would not operate unless I had been educated in how to perform a procedure with respect to the technical aspects of the procedure as well as the cognitive ones.  However, I find that (as physicians) we are typically asked to perform tasks for which we have no education.  This is often because physicians are bright, articulate, and dedicated.  However, being bright, dedicated, and articulate alone is not enough to fly a plane, play a violin, or perform any other complex task for which there are skills and cognitive dimensions required in which we need education.  Just as we do not ask people who are non-surgeons to understand all of the surgical literature, so it is that we as physicians are often left to select new and up-coming surgeons without being aware of the significant body of knowledge that exists on how to select people for different positions–in essence we are asked to do something that we aren’t trained to do.  In short, we have no education about the significant evidence on how to select people and yet we are asked to fly the plane based on intuition alone.

 

Allow me (unsupported by anything but experience) to say that, as confident surgeons we typically don’t know or don’t seek out all of this relevant data in part because we make so many other high stakes, challenging decisions.  This selection can be seen as just one more and it may feel nowhere near the most important decision we will make in a day in that the repercussions are not immediately felt.  (Of course, it is perhaps one of the most important decisions we will make in a day in that it will affect many people in years to come.) For that reason, I believe, we don’t seek out literature on how to select people.

 

There’s Literature About How To Select

Here is the moment we push the soapbox aside.  Today, let me share just a few of the relevant data points on selecting people (and getting a sense of eventual job performance) from the organizational behavior and psychology literature.  I first saw these in an MBA course and was impressed by the breadth and depth of the literature on how to select people effectively for different roles.  It impressed me that there was such a body of knowledge and that I had previously been ignorant of it.  (Guess that's how I knew the education was working.) After that, we will describe some of the typically used selection criteria beyond the more trivial ones listed above, such as medical school grades, USMLE scores, and letters of recommendation.  As mentioned, my colleagues on other blogs have clearly highlighted the shortcomings of these.  We will then describe some of the other commonly used selection criteria as well as their potential utility for selecting people who will make effective residents and, more importantly, effective attending surgeons.

 

You’ve Heard Of The MBTI, And It’s Not The Answer

First, let’s consider the Myers-Briggs Type Indicator or MBTI.  This is perhaps the most widely used personality assessment instrument in the world, and it’s likely you’ve heard of it.  The MBTI is a hundred question personality test that focuses on how people feel in certain venues.  People become classified as either Extroverted or Introverted (E or I), Sensing or Intuitive (S or N), Thinking or Feeling (T or F) and Judging or Perceiving (J or P).  Each of these has a specific definition.  Based on this, a four letter type is then generated for each person with 16 potential personality types.  The MBTI is widely used by multiple organizations including the US Armed Forces, Apple Computer, AT&T, and GE to name some of the larger companies.

 

However, most evidence suggests the MBTI is not a valid measure of personality, and MBTI results tend to be unrelated to job performance.  Thus, selecting people based on Myers-Briggs type probably does not effectively predict their job performance.  See, for instance, R.M. Capraro and M.M. Capraro, "Myers-Briggs Type Indicator Score Reliability Across Studies: A Meta-Analytic Reliability Generalization Study," Educational and Psychological Measurement, August 2002, pp. 590-602.  Also see R.C. Arnau et al., "Are Jungian Preferences Really Categorical? An Empirical Investigation Using Taxometric Analysis," Personality and Individual Differences, January 2003, pp. 233-251, for additional information.  So, dividing people into these classic personality types does not seem to be useful in predicting job performance.  Are there any reliable indicators that can help us select people for different positions?  Let me tell you what I have learned, some of the evidence, and the factor I use to help select young physicians for residency roles and beyond.

 

The Big 5 Can Help

By way of contrast to the MBTI, the Big Five model (also called the Big 5) has a significant amount of supporting evidence indicating that five basic dimensions account for much of the variation in human personality.  This is paraphrased from Robbins and Judge, Essentials of Organizational Behavior, 9th Edition, from which much of this section is drawn.  The Big Five model delineates five dimensions of personality: Extroversion, Agreeableness, Conscientiousness, Emotional Stability, and Openness to Experience.  A significant amount of research indicates that these five basic personality factors underlie all other types.  It turns out that research on the Big Five has found relationships between these personality dimensions and job performance.

 

Conscientiousness Predicts Performance Across A Wide Range of Occupations

Researchers examined a broad spectrum of occupations including engineers, architects, accountants, attorneys, police, managers, sales people, and both semi-skilled and skilled employees.  Results indicated that across all occupational groups, conscientiousness predicted job performance.  For more information, including the primary work, see J. Hogan and B. Holland, "Using Theory to Evaluate Personality and Job Performance Relations: A Socioanalytic Perspective," Journal of Applied Psychology, February 2003, pp. 100-112, and M.R. Barrick and M.K. Mount, "Select on Conscientiousness and Emotional Stability," in E.A. Locke's The Handbook of Principles of Organizational Behavior, 2004, pp. 15-28.

 

Since I first learned about the Big 5 model I have attempted to assess conscientiousness as best I can as the most important predictor of success for prospective residents.  Yes, I do not administer a personality test; however, I do my best to get at the dimension of conscientiousness during job interviews and other interactions.  Experientially, this seems to work.  Yes, there are many types of interviewing styles which have also been delineated.  However, I find that the Big 5 personality inventory’s dimension of conscientiousness, assessed as best as possible, does seem to indicate the level of function for the eventual resident and beyond.

 

Do Something (Anything) Besides What We’ve Done So Far

Clearly, there are multiple ways we could select residents.  I think it is worthwhile to go hunting for new selection criteria because we are acutely aware of the limitations of the ones we typically use.  As mentioned, there is an extensive body of research about how to select people for different roles, and we've highlighted some of it here.  The unfortunate fact is that many of us are simply not educated in this body of knowledge: we have no exposure to it in our training, we are often busy working, and we make other high-stakes decisions that give us a certain comfort level with hiring decisions.  My recommendation is that we take a moment to reconsider how we select residents and turn toward more evidence-based models such as the Big Five personality inventory, or at least the dimension of conscientiousness, assessed as best we can.

 

Disagree?  Comments, thoughts or discussion?  Please share your thoughts beneath.

 

How The COPQ Helps Your Healthcare Quality Project


 

Challenging To Demonstrate The Business Case For Your Healthcare Quality Project

One of the biggest challenges with quality improvement projects is clearly demonstrating the business case that drives them.  It can be very useful to generate an estimated amount of costs recovered by improving quality.  One of the useful tools in Lean and Six Sigma to achieve this is entitled ‘The cost of poor quality’ or COPQ.  Here we will discuss the cost of poor quality and some ways you can use it in your next quality improvement project.

 

Use The COPQ To Make The Case, And Here’s Why

The COPQ helps form a portion of the business case for the quality improvement project you are performing.  Usually, the COPQ is positioned prominently in the project charter; it may sit after the problem statement or in another location depending on the template you are using.  Of the many tools of Six Sigma, most black belts do employ a project charter as part of their DMAIC project.  For those of you who are new to Six Sigma, DMAIC is the acronym for the steps in a Six Sigma project:  Define, Measure, Analyze, Improve, and Control.  We work through these steps in order, and each step has objectives, often called tollgates, that must be met before progressing to the next step.  One of the tools we can use, and again most project leaders do use this tool routinely, is the project charter.

 

The project charter defines the scope for the problem.  Importantly, it defines the different stakeholders who will participate, and the time line for completion of the project.  It fulfills other important roles too as it clearly lays out the specific problem to be addressed.  Here is where the COPQ comes in:  we utilize the COPQ to give managers, stakeholders, and financial professionals in the organization an estimate of the costs associated with current levels of performance.

 

The Four Buckets That Compose The COPQ

The COPQ is composed of four "buckets":  the cost of internal failures, the cost of external failures, the cost of surveillance, and the cost associated with prevention of defects.  Let's consider each of these as we describe how to determine the cost of poor quality.  The cost of internal failures is the cost associated with problems in the system that do not make it to the customer or end user.  In healthcare, the question of who the customer is can be particularly tricky.  For example, do we consider the customer to be the patient or the third party payer?  This is challenging because, although we deliver care to the patient, the third party payer is the one who actually pays for the value added.  That ambiguity can make it very difficult to establish the cost of poor quality for internal and other failures, and I believe, personally, it is one of the sources of friction when Lean, Six Sigma, and other business initiatives move into the healthcare arena.  Whoever we regard as the customer, internal failures, again, are those issues that do not make it to the patient, third party payer, or eventual recipient of the output of the process.

 

External failures, by contrast, are those issues and defects that do make it to the customer of the system. These are often more egregious.  They may be less numerous than internal failures, but they are often visible, important challenges.

 

Next is the cost of surveillance.  These are the costs associated with things like intermittent inspections from state accrediting bodies or similar costs that we incur perhaps more frequently because of poor quality.  Perhaps our state regulatory body has to come back yearly instead of every three years because of our quality issues.  This incurs increased costs.

 

The final bucket is the cost of prevention.  The costs associated with prevention are the only expenditures in the COPQ on which we see a return on investment (ROI), which makes prevention perhaps the most important element of the COPQ.
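As a small illustration of how the four buckets roll up into one number once finance has earmarked the line items, here is a minimal Python sketch; every cost item and dollar figure below is hypothetical:

```python
# Hypothetical earmarked cost items, each tagged with its COPQ bucket
cost_items = [
    ("redone imaging studies",         "internal_failure", 40_000),
    ("readmissions within 30 days",    "external_failure", 95_000),
    ("extra accreditation survey",     "surveillance",     25_000),
    ("staff training on new protocol", "prevention",       30_000),
]

# Sum the dollars within each bucket, then total them for the COPQ
buckets = {}
for name, bucket, dollars in cost_items:
    buckets[bucket] = buckets.get(bucket, 0) + dollars

copq = sum(buckets.values())
print(buckets)
print(f"Estimated annual COPQ: ${copq:,}")   # $190,000 in this made-up example
```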

 

A Transparent Finance Department Gives Us The Numbers

In order to construct the COPQ we need ties to the financial part of our organization.  This is where transparency in the organization is key: in some organizations it can be very challenging to get the numbers we require, and in others it is very straightforward.  Arming the team with good financial data helps make a stronger case for quality improvement.  It is key, therefore, that each project have a financial stakeholder so that the quality improvement effort never strays too far from a clear idea of the costs associated with the project and the expectation of costs recovered.  Interestingly, in the Villanova University Lean Six Sigma healthcare courses, a commonly cited statistic is that each Lean and Six Sigma project recovers a median value of approximately $250,000.  This is a routine amount of recovered COPQ, even for healthcare projects.  It can be very striking just how much good quality translates into cost savings.  In fact, I have found that decreasing variance, outliers, and bad outcomes in systems has a substantial impact on costs in just this manner.

 

Conclusion:  COPQ Is Key For Your Healthcare Quality Project

In conclusion, the cost of poor quality is a useful construct for your next quality improvement project because it clearly describes what the financial stakeholders can expect to recover from the expenditure of time and effort.  The COPQ is featured prominently in the project charter used by many project leaders in the DMAIC process.  To establish the COPQ, we obtain financial data from our colleagues in finance who are part of our project.  We then review the cost statements with them and earmark certain costs as costs of internal failure, external failure, surveillance, or prevention.  We then use these to determine the estimated cost of poor quality.  Additionally, we recognize that the COPQ is often a significant figure, on the order of $200,000 to $300,000 for many healthcare-related projects.

 

We hope that use of the COPQ for your next quality improvement project helps you garner support and have a successful project outcome.  Remember, prevention is the only category of expenditures in the COPQ that has a positive return on investment.

 

Thought, questions, or discussion points about the COPQ?  Let us know your thoughts beneath.