20 Useful Tools For Your Startup Business Experiment


Our team has built virtual offices for more than 10 years, so we’re always on the lookout for useful tools from around the web.  Here we share some of the most useful tools we’ve found for building low-overhead, highly effective ways to get done what your business needs:


(1) Voicerecorder hi def:  this app is available for iPhone (we haven’t tested Android or other platforms).  It is ideal for recording thoughts on the go and having them transcribed later, and it allows files to be uploaded to Dropbox from within the app.  Of course, you could share the Dropbox folder with whomever you like, such as a transcriptionist who could type your final document and email it to you.  Where would you find someone like that?  Read on.


(2) Dropbox:  the best application we’ve seen for sharing files between all your computers.  Works on Mac and PC.  Use it with ECM, sendtodropbox, and eFax for a great way to get documents uploaded to all your computers at once.


(3) Sendtodropbox.com:  this does exactly what it says.  It gives you an email address you can use with your smartphone to get documents into Dropbox (and therefore onto all your computers) and truly go paperless.


(4) Peopleperhour.com:  People Per Hour is one of several platforms where you can find independent contractors for many different tasks:  transcribing your audio files from Dropbox, maintaining a calendar, creating a logo, and much more.  It is by far our favorite platform of its kind.


(5) Getfriday.com:  another of the most useful tools on the list.  GetFriday is a team in India that lets you retain a virtual personal assistant.  You can get a great assistant if you specify excellent English, etc., up front.  Your assistant will do anything that does not require physical presence:  place calls, maintain a calendar, shop online (they protect your credit card with certain techniques), fill out online applications, and many other tasks.  Many of us on the team now have a virtual assistant, and the service is really excellent.


(6) 1dollarscan.com:  this service lets you send any book to its address (e.g., directly from Amazon or anywhere else), and it will then turn the book into a PDF for you to download.  Excellent quality.


(7) Doodle:  this tool is a great way to get a group to commit to a meeting time.


(8) Boomerang:  this tool is an add-on for Gmail that will show you when an email you’ve sent has been read, send a reminder email, and more.


(9) GoToMyPC:  the best way to have your office computer accessible from home and vice versa.  It will usually install on your office computer despite firewalls and other network blocks.


(10) GoToMeeting:  if you work with a team that’s spread across North America, or if you want to talk with your family on a night you can’t be home, GoToMeeting works great.  It’s even better than free apps because you can record meetings you host, upload them to Dropbox, and get a transcript for later.


(11) eFax:  you can set up an eFax number to receive documents and have them emailed to you as attachments.  You can also have it send to Dropbox.  In other words, you can fax things from any fax machine and quickly have them on all your computers for e-signature or whatever you need.  Go to efax.com.


(12) Textlater:  this is a clever iPhone app that lets you program when and where you’d like a text sent sometime in the future.


(13) Postal Methods:  this allows you, from your phone or computer, to write a letter and mail it anywhere (at least in the US).  You do the typing and they do the mailing.  Obviously you can use this even if you’re out of the country.  Google postalmethods.


(14) Click2mail:  this is a great option for sending certified letters.  You can do it all from your computer.  Google click2mail.


(15) Uber:  this app allows you to get a private car (instead of calling a taxi) in most medium to large cities.


(16) Pocket Scanner:  this app creates PDFs from anything you can snap with the camera on your smartphone.  Available for iPhone.


(17) Evernote:  this service allows you to keep notes, photos, and anything else you need.  You can have it available across platforms and on your PC.  Available at the app store and online.


(18) Google Voice:  this service gives you a free phone number.  You can create a message and direct the number to any phone (or phones) you’d like.  You can change the phone number whenever you need to be less available.  The service will also transcribe the message and send it to you as an email.


(19) Badnews robot:  this is available on the iPhone app store and also on the web.  This clever app will call any number you specify and deliver a message you specify that contains bad news.  Hysterical and anonymous.


(20) Google Drive:  this allows you to upload documents and files that you can share across computers.


Hope you find these 20 tools as useful as we have.  Tools like these allow us to set up low-overhead business models and experiment to find ones that work.  Interested in more useful business tools from around the web?  Let us know.

Best Talk On Startup Methodology I’ve Heard



It’s unusual for our team to re-post content from another source. We believe, and enjoy, original content or (at the very least) an original take on well-known content.

This entry, however, is an exception to our rule because we found a talk by Eric Ries (author of The Lean Startup) that was part of a post by thecoderfactory.com.

Although Eric’s talk does not explicitly discuss ALL of the methods we use for startups (including the power of premium, unique positioning), he delivers the single most useful talk we’ve heard on how to start up, focused on decision-making, pivoting, and methods of “innovation accounting”.

Please enjoy, and use, the video beneath as much as we did.  Take it as an important piece of the story about how to start up.  Complementary information, such as how to fundraise and the business model canvas, can be coupled with Eric’s excellent talk to round out many of the mechanics of starting up your unique business.



Questions, thoughts, or comments on Eric’s talk?  Please leave your thoughts beneath.




My Data Are Non-normal…Now What?



So you are progressing through your quality improvement project, passing through the steps of DMAIC or a similar framework.  You finally have some good, continuous data and you would like to analyze them.


You look at your data to find out whether they are normally distributed.  You likely performed the Anderson-Darling test, as described here, or some similar test.  Oh no!  You have found that your data are non-normal.  Now what?  Beneath we discuss some of the treatments and options for non-normal data sets.


One of the frequent issues with quality improvement projects and data analysis is that people often assume their data are normally distributed when they are not.  They then go on to use statistical tests which require normally distributed data.  (Uh oh.)  Conclusions ensue which may or may not be justified.  After all, non-normal data sets do not allow us to utilize the familiar, comfortable statistical tests that we employ routinely.  For this reason, let’s talk about how to tell whether your data are normally distributed.


First, we review our continuous data as a histogram.  Sometimes, the histogram may look like the normal distribution to our eyes and intuition.  We call this the “eyeball test”.  Unfortunately, the eyeball test is not always accurate.  There is an explicit test, called the Anderson-Darling test, which asks whether our data deviate significantly from the normal distribution.


Incidentally, a normal distribution does not mean that all is right with the world.  Plenty of systems are known to display distributions other than the normal distribution, and they are meant to do so.  It’s just that we routinely see the normal distribution in nature, and so we call it, well, normal.  We will get to more on this later.


For now, you have reviewed your data with the eyeball test and you think they are normally distributed.  Now what?  We utilize the Anderson-Darling test to compare our data set to the normal distribution.  If the p value associated with the Anderson-Darling test statistic is GREATER than 0.05, we cannot conclude that our data deviate from a normal distribution.  In other words, for practical purposes we can treat the data as normally distributed.  For more information on the Anderson-Darling test and its application to your data, look here.
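To make this concrete, here is a minimal Python sketch of the Anderson-Darling test using SciPy rather than Minitab or SigmaXL, with simulated process-time data.  Note that SciPy reports the test statistic against critical values instead of a p value; the statistic falling below the 5% critical value is the analogue of p > 0.05:

```python
import numpy as np
from scipy import stats

# Simulated process times -- stand-in data for illustration only
rng = np.random.default_rng(42)
data = rng.normal(loc=50, scale=5, size=200)

result = stats.anderson(data, dist='norm')

# Pick out the critical value at the 5% significance level
idx = list(result.significance_level).index(5.0)
crit = result.critical_values[idx]

# Statistic below the 5% critical value: we cannot reject normality
if result.statistic < crit:
    print(f"Statistic {result.statistic:.3f} < {crit:.3f}: treat as normal")
else:
    print(f"Statistic {result.statistic:.3f} >= {crit:.3f}: non-normal")
```

Running the same call on your own data column is all that changes in a real project.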


So now we know whether our data are or are not normally distributed.  Next, let’s pretend that our Anderson-Darling test gave us a p value of less than 0.05 and we were forced to say that our data are not normally distributed.  There are plenty of systems in which data are not normally distributed.  Time until metal fatigue / failure, for example, classically displays a Weibull distribution.  This is just one of the many other named distributions we find in addition to the normal (aka Gaussian) distribution.  Simply because a system does not follow the normal distribution does not mean the system is wrong or somehow irrevocably broken.


Many systems, however, should follow the normal distribution.  When they do not, and are instead highly skewed in some manner, the system may be very broken.  If the normal distribution is not followed and no other clear distribution fits, we may say there is a big problem with one of the six causes of special variation as described here.  When data are normally distributed, we routinely say the system is displaying common cause variation:  all the causes of variation are in balance and contributing expected amounts.  Next, let’s talk about where to go from here.


When we have a non-normal data set, one option is to perform distribution fitting.  This asks the question “If we don’t have the normal distribution, which distribution do we have?”  This is where we ask Minitab, SigmaXL, or a similar program to fit our data against known distributions and to tell us whether our distribution deviates from each of them.  Eventually, we may find that one particular distribution fits our data.  This is good:  we now know the expected type of system for our data.  If we have non-normal data and we fit a distribution to them, the question then becomes what we can do as far as statistical testing goes.  How can we say whether we made improvement after intervening in the system?  One option is to use statistical tests which are not contingent on having normally distributed data.  These are less frequently used and include the Mood’s median test, the Levene test, and the Kruskal-Wallis test (or KW test, because that one’s not easy to say).  I have a list of tools and statistical tests used for both normal and non-normal data sets at the bottom of the blog entry here.
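All three of these non-normal tests are available in SciPy.  A small sketch on simulated skewed data; the group sizes and scale parameters here are illustrative assumptions, not real project data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated right-skewed process times before and after an intervention
before = rng.exponential(scale=30, size=80)
after = rng.exponential(scale=22, size=80)

# Mood's median test: do the two groups share a common median?
stat_m, p_m, _, _ = stats.median_test(before, after)

# Levene's test: do the two groups have similar variances?
stat_l, p_l = stats.levene(before, after)

# Kruskal-Wallis (KW) test: do the groups come from the same distribution?
stat_k, p_k = stats.kruskal(before, after)

print(f"Mood's median p={p_m:.3f}, Levene p={p_l:.3f}, KW p={p_k:.3f}")
```

Each test returns a p value you can read the usual way, without any assumption of normality.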


So, to conclude this portion, one option for working with non-normal data sets is to perform distribution fitting and then to utilize statistical tests which do not rely on the assumption of having a normal data set.


The next option when you are faced with a non-normal data set is to transform the data so that they become normally distributed.  For example, pretend that you are measuring time for some process in your hospital.  Let’s say you have used the Anderson-Darling test and discovered that time is not normally distributed in your system.  As mentioned, you could perform distribution fitting and use non-normal data tools.  Another option is to transform the data so that they become normal.  Transform does not mean that you have faked, or doctored, the data.  It means that you raise the variable, here time, to some power.  This can be any power value, including 1/2, 2, 3, and every number in-between and beyond.  It can also be a negative power such as -2.  So you raise your time variable to different powers until the data set becomes normally distributed.  A software package like Minitab or SigmaXL will test each candidate power; these candidate powers are called lambda values.  The software will find the lambda value at which your data become normally distributed according to the Anderson-Darling test.  Let’s pretend in this situation that time^2 is normally distributed according to the Anderson-Darling test.
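This lambda search is exactly what the Box-Cox power transform automates.  A minimal SciPy sketch on simulated skewed time data; the lognormal shape here is an assumption chosen so that the fitted lambda lands near 0 (which corresponds to the log transform):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated right-skewed process times (Box-Cox requires positive values)
times = rng.lognormal(mean=3.0, sigma=0.5, size=200)

# boxcox searches over lambda values and returns the one that makes
# the transformed data most nearly normal
transformed, best_lambda = stats.boxcox(times)

print(f"best lambda: {best_lambda:.2f}")
```

You would then run your normality check and routine tests on `transformed`, remembering to apply the same lambda to any post-intervention data.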


This brings up a philosophic question.  We can easily feel what it means to manage the variable time.  What, however, does it mean to manage time raised to the second power?  These are questions that Six Sigma practitioners and clinical staff may ask and, again, they are more philosophic in nature.  Next, we use our transformed data set.  Remember that if we transformed the data before we intervened in the system, we must transform the data the same way (to the same lambda) after we intervene.  This allows us to compare apples to apples.  We can then utilize the routine, familiar statistical tests on the transformed data set:  t tests, ANOVA, and the other tests we typically use to analyze the central tendency and dispersion / variance of the data.


This, then, is the second option for dealing with data that are not normally distributed:  transform the data set and utilize our routine tests.  For examples of tests that do require normal data, see the tool time Excel spreadsheet from Villanova University at the bottom of our blog entry as mentioned above.  In conclusion, working with non-normal data sets can be challenging.  We have presented the two classic options:  distribution fitting followed by statistical tests that are not contingent on normality, and transforming the data with a power transform (such as the Box-Cox transform) and then using the transformed data with our routine tools that require normally distributed data.


Questions, comments or thoughts about utilizing non-normal data in your quality improvement project? Leave us your thoughts beneath.

Use Continuous Data (!)




For the purposes of quality improvement projects, I prefer continuous to discrete data.  Here, let’s discuss the importance of classifying data as discrete or continuous and the influence this can have over your quality improvement project.  For those of you who want to skip to the headlines: continuous data is preferable to discrete data for your quality improvement project because you can do a lot more with a lot less of it.


First, let’s define continuous versus discrete data.  Continuous data are infinitely divisible.  Time is a classic example:  one hour can be divided into two groups of thirty minutes, minutes can be divided into seconds, and seconds can continue to be divided on down the line.  Contrast this with discrete data, which are, in short, not continuous.  (Revolutionary definition, I know.)  Things like percentages, levels, and colors come in divided packets and so can be called discrete.


Now that we have our definitions sorted, let’s talk about why discrete data can be so challenging.  First, when we sample directly from a system, discrete data often demand a larger sample size.  Consider our simple sample size equation for how big a sample of discrete data we need to detect a change:


n = p(1 - p)(2 / delta)^2


This sample size equation for discrete data has several important consequences.  First, consider the terms.  Here p is the probability of a certain event occurring.  This is for percentage-type data where we have a yes or no, go or stop, etc.  The delta is the smallest change we want to be able to detect with our sample.


The 2 in the equation is the (approximate) z-score at the 95% level of confidence.  We round the true value of z (about 1.96) up to 2, which yields a sample slightly larger than strictly required rather than a sample with a fraction in it.  (How do you have 29.2 of a patient, for example?)  Rounding up matters because rounding down would yield a sample that is slightly too small.


In truth, there are many other factors in sampling besides sample size alone.  But notice what happens when we work through this sample size equation for discrete data.  Let’s say we have an event with a 5% probability of occurring.  This would be fairly typical for many things in medicine, such as wound infections in contaminated wounds.  To detect a 2% change in that percentage, we need 0.05 x 0.95 x (2 / 0.02)^2, which gives us 475 samples.  In other words, we need a fairly large sample to see a reasonable change.  We can’t detect a change of 1% with that sample size, so if we think we see 4.8% as the new percentage after intervening to reduce wound infections…well, perforce of our sample size, we can’t really say whether anything has changed.
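The arithmetic above is easy to wrap in a few lines of Python (a sketch, using the approximate z of 2 from the text):

```python
import math

def discrete_sample_size(p, delta, z=2):
    """n = p(1 - p)(z / delta)^2, rounded up to a whole subject."""
    return math.ceil(p * (1 - p) * (z / delta) ** 2)

# 5% baseline event rate (e.g., wound infections), detect a 2% change
print(discrete_sample_size(0.05, 0.02))  # 475
```

Plugging in different deltas quickly shows how the required sample balloons as the change you want to detect shrinks.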


One more thing:  don’t know the probability of an event because you’ve never measured it before?  Make your best guess.  Many of us use 0.5 for p if we really have no idea (it also gives the most conservative, largest sample size, since p(1 - p) peaks at p = 0.5).  Some sample size calculation is better than none, and you can always revise p as you collect data from the system and get a sense of its actual value.


Now let’s consider continuous data.  For continuous data, sample size required to detect some delta at 95% level of confidence can be represented as



n = ( (2)(historic standard deviation of the data) / delta )^2


When we plug numbers into this simplified sample size equation, we quickly see that much smaller samples are required to show significant change.  This is one of the main reasons I prefer continuous to discrete data:  smaller sample sizes can show meaningful change.  However, for many of the different endpoints you will collect in your quality project, you will need both.  Remember, as with the discrete data equation, you set the delta as the smallest change you want to be able to detect.


Interesting trick:  if you don’t know the historic standard deviation of your data (or you don’t have one), take the highest value of your continuous data, subtract the lowest, and divide the result by 3.  Voilà…an estimate of the historic standard deviation.
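Both the continuous sample size equation and the range-over-3 trick fit in a few lines of Python (the turnover-time numbers below are made up for illustration):

```python
import math

def continuous_sample_size(sigma, delta, z=2):
    """n = ((z * sigma) / delta)^2, rounded up to a whole observation."""
    return math.ceil(((z * sigma) / delta) ** 2)

def estimate_sigma(values):
    """Rough historic standard deviation: (highest - lowest) / 3."""
    return (max(values) - min(values)) / 3

# Hypothetical room turnover times in minutes; detect a 5-minute change
times = [22, 35, 41, 28, 55, 31, 47, 38]
sigma = estimate_sigma(times)            # (55 - 22) / 3 = 11.0
print(continuous_sample_size(sigma, 5))  # 20
```

Twenty observations versus the 475 in the discrete example above is the whole argument for continuous data in miniature.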


Another reason continuous data are preferable to discrete data is the number of powerful tools they unlock.  Continuous data allow us to use many other quality tools such as Cpk, data power transforms, and useful hypothesis testing.  This can be more challenging with discrete data.  One of the best ways we have seen to represent discrete data is the Pareto diagram.  For more information regarding the Pareto diagram, visit here.


Other than the Pareto diagram and a few other useful approaches, discrete data present us with more challenges for analysis.  Yes, there are statistical tests, such as the chi-squared proportions test, that can determine statistical significance.  However, continuous data plainly open up a wider array of options for us.


Having continuous data often allows us to make better visual representations and gives our team a view of the robustness of the process along with its current level of variation.  This can be more challenging with discrete data endpoints.


In conclusion, I like continuous data more than discrete data and I use them wherever I can in a project.  Continuous data endpoints often allow better visualization of variation in a process.  They also require smaller sample sizes and unlock a fuller cabinet of tools with which we can demonstrate our current level of performance.  In your next healthcare quality improvement project, be sure to use continuous data points where possible and life will be easier!


Disagree?  Do you like discrete data better or argue “proper data for proper questions”?  Let us know!




These Two Tools Are More Powerful Together




Click beneath for the video version of the blog entry:


Click beneath for the audio version of the blog entry:


Using two quality improvement tools together can be more powerful than using one alone. One great example is the use of the fishbone diagram and multiple regression as a highly complementary combination.  In this entry, let’s explore how these two tools, together, can give powerful insight and decision-making direction to your system.


You may have heard of a fishbone, or Ishikawa, diagram previously.  This diagram highlights the multiple possible causes of special cause variation.  From previous blog entries, recall that special cause variation may be loosely defined as variation above and beyond the normal variation seen in a system.  These categories are often used in root cause analysis in hospitals.  See Figure 1.


Figure 1:  Fishbone (Ishikawa) diagram example


As you may recall from previous discussions, there are six categories of special cause variation.  These are sometimes called the “6 M’s” or “5 M’s and one P”:  Man, Materials, Machine, Method, Mother Nature, and Management (the 6 M’s).  We can replace the word “man” with the word “people” to obtain the 5 M’s and one P version of the mnemonic device.  In any event, the point is that an Ishikawa diagram is a powerful tool for demonstrating the root cause of different defects.


Although fishbone diagrams are intuitively satisfying, they can also be very frustrating.  Once a team has met and created a fishbone diagram, well…now what?  Other than opinion, there really is no data to demonstrate that what the team THINKS is associated with the defect / outcome variable actually is.  In other words, the Ishikawa represents the team’s opinions and intuitions.  But is it actionable?  Can we take action based on the diagram and expect tangible improvements?  Who knows.  This is what’s challenging about fishbones:  we feel good about them, yet can we ever regard them as more than just a team’s opinion about a system?


Using another tool alongside the fishbone gives greater insight and more actionable data.  We can more rigorously demonstrate that the outcome / variable / defect is directly and significantly related to the elements of the fishbone the group has hypothesized about.  For this reason, we typically advocate taking that fishbone diagram and using it to frame a multiple regression.  Here’s how.


We do this in several steps.  First, we label each portion of the fishbone as “controllable” or “noise”.  Said differently, we try to get a sense of which factors we have control over and which we don’t.  For example, we cannot control the weather.  If sunny weather is significantly related to the number of patients on the trauma service, well, so it is and we can’t change it.  Weather is not controllable by us.  When we perform our multiple regression, every identified factor, labeled controllable or not, is embodied in the model.  Then, depending on how well the model fits the data, we may decide to see what happens when the elements beyond our control are removed, such that only the controllable elements are used.  Let me explain this interesting technique in greater detail.


Pretend we create the fishbone diagram in a meeting with stakeholders.  This lets us know, intuitively, what factors are related to different measures.  We sometimes talk about the fishbone as a hunt for Y = f(x), where Y is the outcome we’re considering, represented as a function of underlying x’s.  The candidate underlying x’s (which may or may not be significantly associated with Y) are identified with the fishbone diagram.  Next, we try to identify which fishbone elements are ones for which we already have useful data.  We may have rigorous data from some source that we believe, or we may need to collect data on our system.  We take specific time to identify those x’s about which we have data, and we then establish a data collection plan.  Remember, all the data for the model should come from a similar time period:  we can’t mix data from one time period with data from another to predict a Y value or outcome at some other time.  In performing all this, we label each candidate x as controllable or noise (non-controllable).


Next, we create a multiple regression model with Minitab or some other program.  There are lots of ways to do this, and some of the specifics are ideas we routinely teach to Lean Six Sigma practitioners and clients.  These include the use of dummy variables for data questions that are yes/no, such as “was it sunny or not?”  (You can use 0 as no and 1 as yes in your model.)  Next, we perform the regression and try to account for confounders if we think two or more x’s are clearly related.  (We will describe confounding more in a later blog entry.)  Finally, when we review the multiple regression output, we look for an r^2 value greater than 0.80.  This indicates that at least 80% of the variability in our outcome data, our Y, is explained by the x’s in the model.  We prefer higher r^2 and r^2 adjusted values.  R^2 adjusted is a more stringent measure that penalizes the model for the number of terms it contains, and we like both r^2 and r^2 adjusted to be high.
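As a sketch of what this looks like outside Minitab, here is a small multiple regression in plain Python / NumPy on simulated data.  The variable names and effect sizes are invented for illustration; note the 0/1 dummy encoding the sunny-weather question:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120

# Candidate x's from a fishbone: staffed ICU beds (controllable),
# sunny weather as a 0/1 dummy (noise), day of week (noise)
icu_beds = rng.integers(2, 12, size=n)
sunny = rng.integers(0, 2, size=n)
weekday = rng.integers(0, 7, size=n)

# Simulated outcome: diversion hours, driven mostly by ICU beds
divert_hours = 60 - 4.0 * icu_beds + 2.0 * sunny + rng.normal(0, 3, size=n)

# Ordinary least squares; the column of 1s is the intercept
X = np.column_stack([np.ones(n), icu_beds, sunny, weekday])
coef, *_ = np.linalg.lstsq(X, divert_hours, rcond=None)

# r^2: share of the variability in Y explained by the x's
pred = X @ coef
ss_res = np.sum((divert_hours - pred) ** 2)
ss_tot = np.sum((divert_hours - divert_hours.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"ICU-bed coefficient: {coef[1]:.2f}, r^2: {r2:.2f}")
```

Because the simulated outcome is built mainly from ICU beds, the fitted coefficient recovers a value near -4 and r^2 comes out high, mirroring the kind of output Minitab would report (a statistical package would also give you p values per term).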


Next we look at the p values associated with each of the x’s to determine whether any of the x’s affect the Y in a statistically significant manner.  As a final and interesting step we remove those factors that we cannot control and run the model again so as to determine what portion of the outcome is in our control.  We ask the question “What portion of the variability in the outcome data is in our control per choices we can make?”


So, at the end of the day, the Ishikawa / fishbone diagram and the multiple regression are powerful tools that complement each other well.


Next, let me highlight an example of a multiple regression analysis, in combination with a fishbone, and its application to the real world of healthcare:


A trauma center had issues with perceived excess time on “diversion”, or the time in which patients are not being accepted and so are diverted to other centers.  The center had more than 200 hours of diversion over a brief time period.  For that reason, the administration was floating multiple reasons why this was occurring.  Clearly, diversion could impact quality of care for injured patients (because they would need to travel further to reach another center) and could represent lost revenue.


Candidate reasons included an idea that the emergency room physicians (names changed in the figure beneath) were just not talented enough to avoid the situation.  Other proposed reasons included the weather and lack of availability of regular hospital floor beds.  The system was at a loss for where to start, and it was challenging for everyone to be on the same page with respect to what to do next with this complex issue.


For this reason, the trauma and acute care surgery team performed an Ishikawa diagram with relevant stakeholders and combined this with the technique of multiple regression to allow for sophisticated analysis and decision making.  See Figure 2.


Figure 2:  Multiple regression output


Variables utilized included the emergency room provider who was working when the diversion occurred (as the providers had been impugned previously), the day of the week, the weather, and the availability of intensive care unit beds, to name just a sample.  The final regression gave an r^2 value less than 0.80, and, interestingly, the only variable which reached significance was the presence or absence of ICU beds.  How do we interpret this?  The variables included in the model explain less than 80% of the variation in the amount of time the hospital was in a state of diversion (“on divert”) for the month.  However, we can say that the availability of ICU beds is significantly associated with whether the hospital was on divert:  fewer ICU beds were associated with increased time on divert.  This gave the system a starting point to correct the issue.


Just as important was what the model did NOT show.  The diversion issue was NOT associated significantly with the emergency room doctor.  Again, as we’ve found before, data can help foster positive relationships.  Here, it disabused the staff and the rest of administration of the idea that the emergency room providers were somehow responsible for (or associated with) the diversion issue.


The ICU was expanded in terms of available nursing staff, which allowed more staffed beds and made the ICU more available to accept patients.  Recruitment and retention of new nurses were linked directly to the diversion time for the hospital:  the issue was staffed beds, and so the hospital realized that hiring more nursing staff was one needed intervention.  This led to a recruitment and retention push and, shortly thereafter, an increase in the number of staffed beds.  The diversion challenge resolved immediately once the additional staff was available.


In conclusion, you can see how the fishbone diagram, when combined with multiple regression, is a very powerful technique for determining which issues underlie the seemingly complex choices we make on a daily basis.  In the example above, a trauma center utilized these techniques together to resolve a difficult problem.  At the end of the day, consider utilizing a fishbone diagram in conjunction with a multiple regression to help make complex decisions in our data-intensive world.


Thoughts, questions, or feedback regarding your use of multiple regression or fishbone diagram techniques? We would love to hear from you.


A Lean Surgery Project To Get You Started

Click play to watch the video version of this blog entry:

Click the link beneath for an audio presentation of the entry:


There are some quality improvement projects so straightforward that we see them repeated across the country.  One is decreasing the number of surgical instruments in our operative pans.  The impetus is that we only infrequently use the many clamps and devices we routinely have sterilized for different procedures.  Here we take a second to describe how this project, which we commonly refer to as “Leaning the pan”, looks and some ways you might apply it in your practice.


First, much of this Lean project focuses on the concept of value-added time, or VAT.  It turns out that, in most systems, only approximately 1% of the time is spent adding value to whatever implement or service we are providing.  It’s a striking statistic that we see repeated across systems:  only approximately 1% of our time is generally spent on things for which the customer will pay.  As we have described before on the blog, here, one of the challenges in healthcare is establishing who the customer is.  In part, the customer is the patient who receives the service.  In another very real sense, the customer is the third party payer who reimburses us for our procedures.  The third party payer does not reimburse us any more or less if we use 20 Kelly clamps or 10 Kelly clamps to finish a procedure.  Do we need 40 Kelly clamps in a pan?  If we use the most expensive Gore-Tex stitch or the least expensive silk suture, our reimbursement does not vary.  So, this concept of value-added time is key in Leaning the pan.


As we go through this quality improvement project, we can demonstrate that we are decreasing the amount of time spent doing things that do not add value to the case.  In short, our proportion of value added time increases just as our proportion of non-value added time decreases.  As we begin to set up the preconditions for this project, one of the ideas to focus on is how much time a procedure takes.  Here, the concept of an operational definition becomes important: when does a procedure start and end?  The procedure can start from the time the nurse opens the pan and counts the implements (along with the scrub nurse).  Alternatively, we can focus on room turnover time and include the counting as part of that defined interval.  Fewer instruments to count translates into less time spent counting.  We could also define the procedural time from when the instruments are sterilized and repackaged.  As with all quality improvement projects, the operational definition of what time we are measuring, and what we call procedural time, is key.
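To make this concrete, here is a tiny sketch (in Python, with entirely invented numbers) of how the proportion of value added time shifts when counting time shrinks:

```python
# Hypothetical illustration of value added time (VAT) as a proportion of
# total procedure time, before and after "Leaning the pan".
# All numbers below are invented for the example.

def vat_proportion(value_added_min, total_min):
    """Fraction of total procedure time that actually adds value."""
    return value_added_min / total_min

# Suppose 60 of 100 room minutes are value added, and the leaned pan
# trims 5 minutes of counting off the total.
before = vat_proportion(60, 100)
after = vat_proportion(60, 95)

print(round(before, 3))  # 0.6
print(round(after, 3))   # 0.632
```

The value added work is unchanged; only the non-value added counting time drops, so the VAT proportion rises.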


Another useful idea in Leaning the pan is the Pareto diagram.  As you probably remember, the Pareto Principle (or 80/20 rule) is named for the Italian economist Vilfredo Pareto.  It holds that approximately 80% of the effect seen is caused by 20% of the possible causes for that effect; in other words, a vital few causes create the bulk of the effect in a system.  Pareto was focused on wealth in Italy, but the 80/20 principle has since been applied to many other systems and practices throughout the business and quality improvement world.  In short, there is now a named Lean Six Sigma tool called the Pareto diagram.


The Pareto diagram is a histogram that demonstrates the frequency of use or occurrence of different items or implements in a system; see Figure 1.  In general, if we select 10 instruments and plot how frequently they are used, we will find that only about 2 of the 10 are responsible for over 80% of the instrument usage in a procedure.



Fig. 1:  sample Pareto Diagram from excel-easy.com


The example above highlights how two complaints out of ten possibilities about food (overpriced and small portions) are responsible for around 80% of issues.  Similarly, this tool can graphically demonstrate that the bulk of instruments in the operative pan are used rarely or not at all.  There are several steps here.  First, create a data collection plan to record how many times each instrument in the pan is used.  Clearly this takes some data collection.  Next, chart instrument usage as a Pareto diagram.  Then we can say, "OK, let's remove the rarely used instruments from the pan, or perhaps keep the few rarely used instruments that are particularly hard to find."
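For those who like to see the steps in code, here is a short Python sketch of the Pareto analysis described above.  The instruments and usage counts are entirely hypothetical:

```python
# Pareto analysis of instrument usage: find the "vital few" instruments
# that account for 80% of usage. All counts are invented for illustration.
from collections import Counter

usage = Counter({
    "Kelly clamp": 420, "needle driver": 380, "forceps": 60,
    "Metzenbaum scissors": 45, "right-angle clamp": 20, "Allis clamp": 15,
    "Kocher clamp": 10, "tonsil clamp": 5, "Deaver retractor": 3,
    "rib spreader": 2,
})

total = sum(usage.values())
cumulative = 0.0
vital_few = []
# Walk instruments from most to least used, accumulating their share.
for instrument, count in usage.most_common():
    cumulative += count / total
    vital_few.append(instrument)
    if cumulative >= 0.80:   # stop once we cross the 80% line
        break

print(vital_few)  # ['Kelly clamp', 'needle driver']
```

With these made-up counts, 2 of the 10 instruments cover over 80% of usage, just as the Pareto Principle predicts; the remaining eight are candidates for removal or for an accessory pack.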


In any case, we will discover that the bulk of the instruments we sterilize every time are not vital to performing the procedure.  So the Pareto diagram is a useful tool to demonstrate which instruments can (and should) be removed from the pan.  Again, this may take some data collection.


We have now demonstrated a straightforward way to show a change in value added time with our surgical instrument sterilization project, and one of the key ways to highlight which instruments are used and which can go.  Next, let's discuss some of the interesting solutions and consequences from Leaning-the-pan projects across the country.  First, we can usually establish consensus among a team of surgeons based on data about which tools and instruments are used.  We can build one pan, backed by data, that contains the instruments we all use as surgeons.  This eliminates the need for each doctor to have their own special pan.  We can then take the hard-to-find instruments, or the items individual surgeons feel are must-haves, and put those in accessory packs for each surgeon.  So the basic laparotomy tray can be the same for everyone with its Lean, time-saving methodology.  This saves time not just for one procedure but, over the total number of procedures, a surprising amount:  if we performed 1000 exploratory laparotomies in a year and saved 5 minutes per laparotomy, we have saved 5000 minutes of non-value added time over the course of the year.  Some simple math demonstrates that this is more than 80 hours of non-value added time eliminated per year.  Numbers like these are key to establishing the utility of these projects.  Let's look at some other keys.
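The arithmetic above is simple enough to check on the back of an envelope, but here it is spelled out:

```python
# Annual non-value added time eliminated by the leaned pan,
# using the figures quoted in the text.
laparotomies_per_year = 1000
minutes_saved_per_case = 5

minutes_saved = laparotomies_per_year * minutes_saved_per_case
hours_saved = minutes_saved / 60

print(minutes_saved)          # 5000
print(round(hours_saved, 1))  # 83.3
```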


One of the other keys to a successful project is a project charter.  Before we even begin a Leaning-the-pan project, it is useful to hold a stakeholder meeting with all the people involved in sterilizing trays and pans.  This allows a discussion of what our system requires and why things are the way they are now; getting that context at the beginning of the quality improvement project is important.  A project charter includes the scope of the project, the people involved, and an outline of the days required for the project to be completed.  Among Lean Sensei and Lean Six Sigma Black Belts, the project charter is one of the most frequently used tools in the body of knowledge at the onset of a project.  It is key in that it clearly focuses us on what is important: the timeline, the stakeholders, and what the outcome measures will be.  Again, for the Leaning-the-pan project we would recommend value added time as one of the key outcome measures.


Another key outcome measure should capture cost.  This helps with the business case when managing up the organization.  Typically, in Lean and Six Sigma projects we use the cost of poor quality (COPQ), which we have described previously here.  In this case, the cost of poor quality is somewhat more challenging to establish.  Remember, the COPQ is composed of four "buckets": the costs of internal failures, external failures, surveillance, and prevention.  For more information on the COPQ and how it is calculated, look here.  What internal and external failures exist in this Leaning-the-pan model?  Rather than a strict COPQ, we recommend demonstrating cost savings based on the cost of instrument sterilization, the number of times instruments can be sterilized before being replaced (life extension for instruments), and the savings that flow from less time spent counting a tray (i.e., more cases).
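Here is a hedged sketch of that alternative savings calculation.  Every dollar figure below is hypothetical; substitute your own facility's numbers:

```python
# Rough annual savings from a leaned pan, combining sterilization savings
# and recovered counting time. All inputs are invented placeholders.

def annual_savings(cases_per_year, sterilization_saved_per_case,
                   counting_minutes_saved_per_case, or_cost_per_minute):
    """Estimate yearly savings: sterilization costs avoided plus the
    dollar value of counting minutes no longer spent."""
    sterilization = cases_per_year * sterilization_saved_per_case
    counting = (cases_per_year * counting_minutes_saved_per_case
                * or_cost_per_minute)
    return sterilization + counting

# e.g. 1000 cases/year, $8 less sterilization per case,
# 5 fewer counting minutes at a notional $40 per OR minute
print(annual_savings(1000, 8.00, 5, 40.00))  # 208000.0
```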


In short, it will be very challenging to demonstrate direct cost savings with this type of Leaning-the-pan project.  We have seen across the country that it is hard to show firm cost savings on the income statement or balance sheet.  However, this is a good starter Lean project and can really help the surgeons and operative team see the value in the Lean methodology.  It can also help build consensus as an early, straightforward project in your Lean or Six Sigma journey.


In conclusion, we have described the process of Leaning the operating room pan.  As Lean projects go, this one is relatively straightforward and draws on the concept of value added time along with the Pareto diagram.  It is more challenging to use other Lean tools, such as value stream mapping and load leveling, with a project like this.  However, some standard Lean tools can greatly assist the practitioner in this nice warm-up project.  The cost of poor quality is more challenging to establish, and much of the case for the savings and decrease in waste from projects like this may come from showing how value added time increases as a proportion of time spent.


Discussion, thoughts, or personal reports of how you demonstrated cost savings or “leaned the pan” in your operating room?  We would love to hear your comments and thoughts beneath.

Let’s Choose Residents Differently





Many Are Wondering How To Choose Residents

One of the ways we identify new blog topics is by searching the web and reading social media content from our colleagues.  Sometimes this overlaps with our own areas of expertise and interest.  For about the last 5 years I have had a strong interest in how resident surgeons are selected by training programs.  Recently, one of my co-bloggers ran across an entry from a colleague that also addressed this difficult topic.  What are some of the ways to select, from among applicants, the resident surgeons who will eventually make excellent attending surgeons?  How should we be selecting our residents?


Here’s What Doesn’t Work

First, let's consider what doesn't work in resident selection.  Here we find most of the criteria that are routinely used to select residents: things such as USMLE scores and other classic selection criteria just plain don't work.  There is significant evidence for this; rather than recite what my colleagues at other blogs have described, we will simply point you to them with a link here.


Why Don’t We Use Things That Work?

Next, let's consider why the criteria we frequently use don't work, and why we keep using them.  For one, we as surgeons are typically not educated in how to select people for different jobs.  We may think one good surgeon can tell who else will be a good surgeon through some combination of a personal interview, test scores, or other routine criteria.  However, as my colleague the Skeptical Scalpel points out very adeptly, that doesn't seem to be the case.


In short, we are not educated on what selection criteria can be utilized to find people who will be effective in their profession. Often, attending surgeons who help to select residents have little to no education in the significant amount of human resource literature that has been created on how to select people for certain positions.  Just like many things in life, there may be some science and literature that makes things a little easier.  Let’s take a look.


First, consider that, as a surgeon, I would not operate without education in both the technical and cognitive aspects of a procedure.  Yet I find that as physicians we are frequently asked to perform tasks for which we have no education, often because physicians are bright, articulate, and dedicated.  Being bright, dedicated, and articulate alone, however, is not enough to fly a plane, play the violin, or perform any other complex task whose skills and cognitive dimensions require education.  Just as we do not ask non-surgeons to understand all of the surgical literature, we as physicians are often left to select new and up-and-coming surgeons without being aware of the significant body of knowledge on how to select people for different positions.  In essence, we are asked to do something we aren't trained to do: we have no education in the significant evidence on selecting people, and yet we are asked to fly the plane on intuition alone.


Allow me (unsupported by anything but experience) to say that, as confident surgeons we typically don’t know or don’t seek out all of this relevant data in part because we make so many other high stakes, challenging decisions.  This selection can be seen as just one more and it may feel nowhere near the most important decision we will make in a day in that the repercussions are not immediately felt.  (Of course, it is perhaps one of the most important decisions we will make in a day in that it will affect many people in years to come.) For that reason, I believe, we don’t seek out literature on how to select people.


There’s Literature About How To Select

Here is the moment we push the soapbox aside.  Today, let me share just a few of the relevant data points on selecting people (and getting a sense of eventual job performance) from the organizational behavior and psychology literature.  I first saw these in an MBA course and was impressed by the breadth and depth of the literature on how to select people effectively for different roles.  It impressed me that such a body of knowledge existed and that I had previously been ignorant of it.  (Guess that's how I knew the education was working.)  We will describe some of the other commonly used selection criteria, beyond the more familiar ones mentioned above (medical school grades, USMLE scores, and letters of recommendation, to name a few, whose shortcomings my colleagues on other blogs have clearly highlighted), as well as their potential utility for selecting people who will make effective residents and, more importantly, effective attending surgeons.


You’ve Heard Of The MBTI, And It’s Not The Answer

First, let's consider the Myers-Briggs Type Indicator, or MBTI.  This is perhaps the most widely used personality assessment instrument in the world, and it's likely you've heard of it.  The MBTI is a hundred-question personality test that asks how people feel in certain situations.  People are classified as Extroverted or Introverted (E or I), Sensing or Intuitive (S or N), Thinking or Feeling (T or F), and Judging or Perceiving (J or P).  Each of these has a specific definition.  From this, a four-letter type is generated for each person, giving 16 potential personality types.  The MBTI is widely used by multiple organizations including the US Armed Forces, Apple Computer, AT&T, and GE, to name some of the larger companies.


However, most evidence suggests the MBTI is not a valid measure of personality, and MBTI results appear to be unrelated to job performance.  Thus, selecting people based on Myers-Briggs type probably does not effectively predict their job performance.  See, for instance, RM Capraro and MM Capraro, "Myers-Briggs Type Indicator score reliability across studies: A meta-analytic reliability generalization study," Educational and Psychological Measurement, August 2002, pages 590-602.  Also see RC Arnau et al., "Are Jungian preferences really categorical? An empirical investigation using taxometric analysis," Personality and Individual Differences, January 2003, pages 233-251, for additional information.  So dividing people into these classic personality types does not seem useful in predicting job performance.  Are there any reliable indicators that can help us select people for different positions?  Let me tell you what I have learned, some of the evidence, and the factor I use to help select young physicians for residency roles and beyond.


The Big 5 Can Help

By way of contrast to the MBTI, the Big Five model has a significant amount of supporting evidence indicating that five basic dimensions account for significant variation in human personality.  Much of this section is paraphrased from Robbins and Judge, Essentials of Organizational Behavior, 9th edition.  The Big Five model delineates five dimensions of personality: extroversion, agreeableness, conscientiousness, emotional stability, and openness to experience.  A significant amount of research indicates that these five basic personality factors underlie all other types, and research on the Big Five has found relationships between these personality dimensions and job performance.


Conscientiousness Predicts Performance Across A Wide Range of Occupations

Researchers examined a broad spectrum of occupations including engineers, architects, accountants, attorneys, police, managers, salespeople, and both semi-skilled and skilled employees.  Results indicated that across all occupational groups, conscientiousness predicted job performance.  For more information, including the primary work, see J Hogan and B Holland, "Using theory to evaluate personality and job performance relations: A socioanalytic perspective," Journal of Applied Psychology, February 2003, pages 100-112; also MR Barrick and MK Mount, "Select on conscientiousness and emotional stability," in EA Locke (ed.), The Handbook of Principles of Organizational Behavior, 2004, pages 15-28.


Since I first learned about the Big Five model, I have done my best to assess conscientiousness, the most important predictor of success, in prospective residents.  No, I do not administer a personality test; however, I do my best to get at the dimension of conscientiousness during job interviews and other interactions.  Experientially, this seems to work.  Yes, there are many delineated interviewing styles; however, I find that the Big Five dimension of conscientiousness, assessed as well as possible, does seem to indicate the eventual level of function of the resident and beyond.


Do Something (Anything) Besides What We’ve Done So Far

Clearly, there are multiple ways we could select residents.  I think it is worthwhile to go hunting for new selection criteria, because we are acutely aware of the limitations of the ones we typically use.  As mentioned, an extensive body of research exists on how to select people for different roles, and we've highlighted some of it here.  The unfortunate fact is that many of us are simply not educated in that body of knowledge: we have no exposure to it in our training, we are often busy working, and we make other high-stakes decisions, which gives us a certain comfort with making hiring decisions.  My recommendation is that we take a moment to reconsider how we select residents and turn toward more evidence-based models such as the Big Five personality inventory, or at least the dimension of conscientiousness, assessed as best we can.


Disagree?  Comments, thoughts or discussion?  Please share your thoughts beneath.


How The COPQ Helps Your Healthcare Quality Project



Challenging To Demonstrate The Business Case For Your Healthcare Quality Project

One of the biggest challenges with quality improvement projects is clearly demonstrating the business case that drives them.  It can be very useful to generate an estimate of the costs recovered by improving quality.  One of the useful tools in Lean and Six Sigma for this is called the cost of poor quality, or COPQ.  Here we will discuss the cost of poor quality and some ways you can use it in your next quality improvement project.


Use The COPQ To Make The Case, And Here’s Why

The COPQ helps form a portion of the business case for the quality improvement project you are performing.  Usually, the COPQ is positioned prominently in the project charter; it may sit after the problem statement or in another location, depending on the template you are using.  Of the many tools of Six Sigma, most black belts do employ a project charter as part of their DMAIC project.  For those of you who are new to Six Sigma, DMAIC is the acronym for the steps in a Six Sigma project: Define, Measure, Analyze, Improve, and Control.  We work through these steps in order, and each step has objectives, often called tollgates, that must be achieved before progressing to the next step.  One of the tools we can use, and again most project leaders do use it routinely, is the project charter.


The project charter defines the scope of the problem.  Importantly, it defines the different stakeholders who will participate and the timeline for completion of the project.  It fulfills other important roles too, as it clearly lays out the specific problem to be addressed.  Here is where the COPQ comes in:  we use the COPQ to give managers, stakeholders, and financial professionals in the organization an estimate of the costs associated with current levels of performance.
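One minimal way to capture those charter elements is as a small data structure.  The field names below are our own shorthand for illustration, not a formal Six Sigma standard:

```python
# A hypothetical project charter as a dataclass: scope, stakeholders,
# timeline, COPQ estimate, and outcome measures in one place.
from dataclasses import dataclass, field

@dataclass
class ProjectCharter:
    problem_statement: str
    scope: str
    stakeholders: list
    timeline_days: int
    copq_estimate: float            # estimated cost of poor quality, $/year
    outcome_measures: list = field(default_factory=list)

charter = ProjectCharter(
    problem_statement="Operative pans contain many rarely used instruments.",
    scope="Basic laparotomy tray, main OR",
    stakeholders=["surgeons", "scrub nurses", "sterile processing", "finance"],
    timeline_days=90,
    copq_estimate=250_000.0,
    outcome_measures=["value added time", "counting time per case"],
)
print(charter.timeline_days)  # 90
```

Note that finance appears in the stakeholder list, which foreshadows the point below: the COPQ only gets filled in when a financial stakeholder is on the team.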


The Four Buckets That Compose The COPQ

The COPQ is composed of four "buckets":  the cost of internal failures, the cost of external failures, the cost of surveillance, and the cost of preventing defects.  Let's consider each of these as we describe how to determine the cost of poor quality.  The costs of internal failures are those associated with problems in the system that do not make it to the customer or end user.  In healthcare, the question of who the customer is can be particularly tricky: do we consider the customer to be the patient or the third party payer?  The reason this is challenging is that, although we deliver care to the patient, the third party payer is the one who actually pays for the value added.  This can make it very difficult to establish the costs of poor quality for internal and other failures, and I believe, personally, it is one of the sources of trouble for Lean, Six Sigma, and other business initiatives in the healthcare arena.  Who, exactly, is the customer?  Whoever we regard as the customer, internal failures, again, are those issues that do not make it to the patient, third party payer, or eventual recipient of the output of the process.


External failures, by contrast, are those issues and defects that do make it to the customer of the system.  These may be less numerous than internal failures, but they are often the more egregious, visible, and important challenges.


Next is the cost of surveillance.  These are the costs associated with things like intermittent inspections from state accrediting bodies, or similar costs that we incur more frequently because of poor quality.  Perhaps our state regulatory body has to come back yearly instead of every three years because of our quality issues; that incurs increased costs.


The final bucket is the cost of prevention.  Prevention is perhaps the most important element of the COPQ because these are the only expenditures on which we see a return on investment (ROI): money spent on prevention often translates into recovered cost.
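Putting the four buckets together is just a sum.  Here is a sketch with invented dollar values:

```python
# The four COPQ "buckets" summed into a single annual figure.
# Every dollar value here is hypothetical.
copq_buckets = {
    "internal failures": 120_000,  # defects caught before the customer
    "external failures": 60_000,   # defects that reached the customer
    "surveillance": 40_000,        # extra inspections and audits
    "prevention": 30_000,          # the only bucket with an ROI
}

copq = sum(copq_buckets.values())
print(copq)  # 250000
```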


A Transparent Finance Department Gives Us The Numbers

In order to construct the COPQ we need ties to the financial part of our organization.  This is where transparency in the organization is key: in some organizations it can be very challenging to get the numbers we require, and in others very straightforward.  Arming the team with good financial data helps make a stronger case for quality improvement.  It is key, therefore, that each project have a financial stakeholder, so that the quality improvement effort never strays far from a clear idea of the costs associated with the project and the expectation of costs recovered.  Interestingly, a statistic commonly cited in the Villanova University Lean Six Sigma healthcare courses is that each Lean and Six Sigma project recovers a median value of approximately $250,000.  This is a routine amount of COPQ recovery, even for healthcare projects.  It can be very striking just how much good quality translates into cost savings; in fact, I have found that decreasing the variance in systems, outliers, and bad outcomes has a substantial impact on costs in just the manner we described.


Conclusion:  COPQ Is Key For Your Healthcare Quality Project

In conclusion, the cost of poor quality is a useful construct for your next quality improvement project because it clearly describes what the financial stakeholders can expect to recover from the expenditure of time and effort.  The COPQ features prominently in the project charter used by many project leaders in the DMAIC process.  To establish the COPQ, we obtain financial data from our colleagues in finance who are part of our project, review the cost statements with them, and earmark certain costs as costs of internal failure, external failure, surveillance, or prevention.  We then use these to determine the estimated cost of poor quality.  Additionally, we recognize that the COPQ is often a significant figure, on the order of $200,000-$300,000 for many healthcare-related projects.


We hope that use of the COPQ for your next quality improvement project helps you garner support and have a successful project outcome.  Remember, prevention is the only category of expenditures in the COPQ that has a positive return on investment.


Thought, questions, or discussion points about the COPQ?  Let us know your thoughts beneath.

CPK Does Not Just Stand For Creatine Phosphokinase





One of the most entertaining things we have found with respect to statistical process control and quality improvement is how some of its many acronyms overlap with those we typically use in healthcare.  One acronym we frequently use in healthcare, but which takes on a very different definition in quality control, is CPK.  In healthcare, CPK typically stands for creatine phosphokinase.  (Yes, you're right:  who knows how we in the healthcare world turn creatine phosphokinase into "CPK," since both the p and the k are in the second word.  We just have to suspend disbelief on that one.)  CPK may be elevated in rhabdomyolysis and other conditions.  In statistical process control, Cpk is a process performance indicator that can help tell us how well our system is performing.  Cpk could not be more different from CPK, and it is a useful tool in developing systems.


As we said, healthcare abounds with multiple acronyms.  So does statistical process control.  Consider the normal distribution that we have discussed in previous blogs.  The normal distribution or Gaussian distribution is frequently noted in processes.  We can do tests such as the Anderson-Darling test, where a p value greater than 0.05 is ‘good’ in that this result indicates our data do not deviate from the normal distribution.  See the previous entry on “When is it good to have a p > 0.05?”


As mentioned, having a data set that follows the normal distribution allows us to use well-known and comfortable statistical tests for hypothesis testing.  Let's explore normal data sets more carefully as we build up to the utility of Cpk.  Normal data sets display common cause variation: variation that is not due to an imbalance in certain underlying factors.  These underlying factors are known as the 6 Ms: man (nowadays often just "person"), materials, machines, methods, mother nature, and management/measurement.  These are described differently in different texts, but the key is that they are well-established sources of variability in data.  Again, the normal distribution demonstrates what's called common cause variation, in which none of the 6 Ms is highly imbalanced.


By way of contrast, sometimes we see special cause variation.  Special cause variation occurs when one of the 6 Ms is so imbalanced that it contributes a great deal of variation and the data set deviates substantially from the normal distribution.  While such insights can tell us a great deal about our data, there are other commonly used process indicators that may yield even more insight.


Did you know, for example, that the six sigma process is called six sigma because the goal is to fit six standard deviations' worth of data between the upper and lower specification limits?  This ensures a robust process in which even a relative outlier of the data set is not near an unacceptable level.  In other words, the chances of making a defect are VERY slim.


We have introduced some vocabulary here, so let's take a second to review it.  The lower specification limit (or "lower spec limit") is the lowest value acceptable for a certain system, and the upper spec limit is the highest.  In Six Sigma we say the spec limits should be set by the Voice of the Customer (VOC), whether that customer is Medicare, an internal customer, or another group such as patients.  Beyond the spec limits, there are process capability measures that tell us how well systems are performing.  As mentioned, the term six sigma comes from the fact that one of the important goals at Motorola (which formalized this process) and other companies is to have systems where over 6 standard deviations of data can fit between the upper and lower spec limits.


Interestingly, some argue that only 4.5 standard deviations of data should fit between the upper and lower spec limits in idealized systems, because systems tend to drift slowly over time by plus or minus 1.5 sigma, and forcing 6 standard deviations between the limits over-controls the system.  This so-called 1.5 "sigma shift" is debated among six sigma practitioners.


In any event, let's take a few more moments to talk about why all of this is worthwhile.  First, service industries such as healthcare and law generally operate at a certain level of error: approximately 1 defect per 1000 opportunities to make a defect.  This level of error is what is called the 1-1.5 sigma level: when the data are drawn as a distribution, a portion of the bell curve falls outside the upper spec limit, the lower spec limit, or both, with only 1-1.5 standard deviations' worth of data fitting between the limits.  In other words, you don't have to go far from the central tendency of the data, the most common values you "feel" when practicing in a system, before you see errors.


…and that, colleagues, is some of the power of this process:  it demonstrates clearly how we feel when practicing in a system ("hey, things are pretty good…I only rarely see problems with patient prescriptions…") and highlights the often counter-intuitive fact that the rate of defects in our complex service system just isn't OK.  Best of all, the Six Sigma process makes it clear that it is more than provider error that yields an issue with a system; in fact, there are usually several of the 6 Ms conspiring to make a problem.  This does NOT mean that we are excused from any responsibility as providers, yet it recognizes that the data tell us (over and over again) that many things go into making a defect, and these are modifiable by us.  The idea that it is the nurse's fault, the doctor's fault, or any one person's issue is rarely (yet not never) the case and is almost a non-starter in Six Sigma.  To get to the low error rates we target, and the patient safety we want, an effective system must address more than one of the 6 Ms.  I have many stories where defects that led to patient care issues were built into the system and were discovered only when the team collected data on a process, including one where computerized order entry automatically produced the wrong order for a patient on the nurse's view of the chart.  Eliminating the computer issue, combined with nursing education and physician teamwork as part of a comprehensive program, greatly improved compliance with certain hospital quality measures.


Let’s be more specific about the nuts and bolts of this process as we start to describe CpK.  The bell curve approach demonstrates that a certain portion of the distribution fits above the acceptable area, below it, or both.  We can think of the boundaries of this acceptable area as goalposts:  there is a lower spec limit goalpost as well as an upper spec limit goalpost, and our goal is to have at least 4.5 (and more likely 6) standard deviations of data fit easily between these goalposts.  This ensures a very low error rate.  Again, this is where the term Six Sigma comes from.  If we have approximately 6 standard deviations of data between the upper and lower spec limits, we have a process that makes approximately 3.4 defects per every one million opportunities.
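The link between sigma level and defect rate can be sketched in a few lines of Python.  This is an illustrative calculation only (not part of any particular Six Sigma software package), and it assumes the conventional 1.5-sigma long-term shift that makes a 6 sigma process correspond to the famous 3.4 defects per million opportunities:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level.

    Applies the conventional 1.5-sigma long-term drift allowance,
    so a 6 sigma process yields the familiar 3.4 DPMO figure.
    """
    # Area of the normal curve beyond the (shifted) spec limit,
    # scaled to one million opportunities.
    return NormalDist().cdf(-(sigma_level - shift)) * 1_000_000

print(round(dpmo(6), 1))   # the classic 6 sigma defect rate
print(round(dpmo(4.5)))    # 4.5 sigma: no margin left for drift
```

Running the function at a sigma level of 6 returns roughly 3.4 defects per million opportunities, exactly as described above; at 4.5 sigma the rate climbs to roughly 1,350 per million, which is why the extra 1.5 sigma of headroom matters.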


Six Sigma is more of a goal for systems to achieve than a rigid, prescribed, absolute requirement.  We attempt to progress toward this level with some of the various techniques we will discuss.  Interestingly, we can map defect rates to sigma levels as described:  again, one defect per every thousand opportunities is approximately the 1-1.5 sigma level.


There are also other ways to quantify defect rate and system performance.  One of these is the CpK mentioned above.  CpK is a single number that captures both how the process is centered between the lower and upper spec limits and how well the distribution fits between them.  Each CpK value corresponds to a sigma value, which in turn corresponds to an error rate.  So, a CpK tells us a great deal about system performance in one compact number.


Before we progress to our next blog entry, take a moment and consider some important facts about defect rates.  For example, you may feel that one error in one thousand opportunities is not bad.  That’s how complex systems fool us…they lull us to sleep because the most common experience is perfectly acceptable, and, as we’ve already stated, typical error rates are 1 defect per every 1000 opportunities…that’s low!  However, if that 1-1.5 sigma rate were acceptable, there would be several important real-world consequences in high-stakes situations.  We would be ok with one plane crash each day at O’Hare airport.  We would also be comfortable with thousands of wrong-site surgeries every day across the United States.  In short, the 1-1.5 sigma defect rate is not appropriate for high-stakes fields such as healthcare.  Tools such as CpK, sigma level, and defect rate are key to having a common understanding of how different systems perform, and a sense of the level at which each system should perform.  This useful framework is easily shared by staff across companies who are trained in Six Sigma, and practitioners looking at similar data sets come to similar conclusions.  We can benchmark these measures and follow them over time; we can show ourselves our true performance (as a team) and make improvements.  This is very valuable from a quality standpoint and gives us a common approach to often complex data.


In conclusion, it is interesting to see that a term we typically use in healthcare has a different meaning in statistical process control terminology.  CPK is a very valuable lab test in patients who are at risk for rhabdomyolysis (and in those who have the condition), yet CpK is also key for describing process centering and defect rates.  Consider using CpK to describe the level of performance for your next complex system and to help represent overall process functionality.


Questions, thoughts, or stories of how you have used CpK in Lean and Six Sigma?  Please let us know your thoughts.