CPK Does Not Just Stand For Creatine Phosphokinase

 


One of the more entertaining things we have found with respect to statistical process control and quality improvement is how some of its many acronyms overlap with the ones we typically use in healthcare.  One acronym we frequently use in healthcare, but which takes on a very different definition in quality control, is CPK.  In healthcare, CPK typically stands for creatine phosphokinase.  (Yes, you're right:  who knows how we in the healthcare world turn creatine phosphokinase into "CPK" when both the p and the k come from the same word, phosphokinase.  We just have to suspend disbelief on that one.)  CPK may be elevated in rhabdomyolysis and other conditions.  In statistical process control, Cpk is a process capability index that helps tell us how well our system is performing.  Cpk could not be more different from CPK, and it is a useful tool in developing systems.

 

As we said, healthcare abounds with acronyms.  So does statistical process control.  Consider the normal distribution that we have discussed in previous blogs.  The normal, or Gaussian, distribution is frequently seen in process data.  We can apply tests such as the Anderson-Darling test, where a p value greater than 0.05 is "good" in that it indicates our data do not deviate significantly from the normal distribution.  See the previous entry on "When is it good to have a p > 0.05?"
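
For readers who want to try this on their own data, here is a minimal sketch in Python.  The turnaround-time numbers are invented for illustration, and note that scipy's Anderson-Darling routine reports critical values rather than a p value, so comparing the statistic to the 5% critical value plays the role of "p > 0.05."

import numpy as np
from scipy import stats

# Hypothetical data: 50 lab turnaround times (minutes) from a stable process
rng = np.random.default_rng(seed=1)
turnaround_minutes = rng.normal(loc=45, scale=5, size=50)

result = stats.anderson(turnaround_minutes, dist="norm")
critical_5pct = result.critical_values[list(result.significance_level).index(5.0)]

# A statistic below the 5% critical value means we have no evidence
# that the data deviate from the normal distribution.
print("A-D statistic:", round(result.statistic, 3), "| 5% critical value:", critical_5pct)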

 

As mentioned, having a data set that follows the normal distribution allows us to use well-known and comfortable statistical tests for hypothesis testing.  Let's explore normal data sets more carefully as we build up to the utility of Cpk.  Normal data sets display common cause variation.  Common cause variation is the routine variation in a system that is not driven by an imbalance in any one underlying factor.  These underlying factors are known as the 6 Ms:  man (nowadays often simply "person"), materials, machines, methods, mother nature, and management/measurement.  They are described somewhat differently in different texts, but the key here is that they are well-established sources of variability in data.  Again, the normal distribution demonstrates what is called common cause variation, in that none of the 6 Ms is highly imbalanced.

 

By way of contrast, we sometimes see special cause variation.  Special cause variation occurs when certain findings make the data set deviate from the normal distribution to a substantial degree.  It arises when one of the 6 Ms is badly imbalanced and contributes so much variation that a normal distribution is no longer present.  While such insights can tell us a great deal about our data, there are other process indicators in common use that may yield even more insight.
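
As a rough illustration (our own sketch, not an example from the blog), here is how special cause variation might show up against three-sigma control limits.  The medication turnaround times and the computer outage are hypothetical.

import numpy as np

rng = np.random.default_rng(seed=2)

# Phase I: a known-stable baseline of daily medication turnaround times (minutes),
# showing only common cause variation
baseline = rng.normal(loc=30, scale=3, size=60)
center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

# Phase II: new observations; a computer outage (a special cause) shifts a few days upward
new_days = np.concatenate([rng.normal(30, 3, size=10), rng.normal(45, 3, size=3)])
special = np.where((new_days > ucl) | (new_days < lcl))[0]

print("Control limits:", round(lcl, 1), "to", round(ucl, 1))
print("Days flagged as possible special cause:", special)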

 

Did you know, for example, that the Six Sigma process is called six sigma because the goal is to have six standard deviations' worth of room between the process mean and the nearest specification limit?  This ensures a robust process in which even a relative outlier in the data set is nowhere near an unacceptable level.  In other words, the chances of making a defect are VERY slim.

 

We have introduced some vocabulary here, so let's take a second to review it.  The lower specification limit (or "lower spec limit") is the lowest acceptable value for a given system, and the upper spec limit is the highest acceptable value.  In Six Sigma we normally say the spec limits should be set by the Voice of the Customer (VOC), whether that customer is Medicare, an internal customer, or another group such as patients; a regulatory body or other entity may also set them.  Beyond the spec limits themselves, there are process capability measures that tell us how well systems are performing.  As mentioned, the term six sigma comes from the fact that one of the important goals at Motorola (which formalized the approach) and other companies is to have systems in which the process mean sits at least six standard deviations away from both the upper and the lower spec limit.
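
To make the vocabulary concrete, here is a small hypothetical sketch:  lab turnaround times with a voice-of-the-customer upper spec limit of 60 minutes and a lower spec limit of 0, and the observed fraction of results that land outside the goalposts.  The numbers and spec limits are invented for illustration.

import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical lab turnaround times (minutes); the VOC says anything
# from 0 minutes (LSL) up to 60 minutes (USL) is acceptable
turnaround = rng.normal(loc=45, scale=8, size=500)
lsl, usl = 0.0, 60.0

defects = np.sum((turnaround < lsl) | (turnaround > usl))
defect_rate = defects / turnaround.size

print(f"{defects} of {turnaround.size} results outside spec ({defect_rate:.1%} defect rate)")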

 

Interestingly, there is an argument that, over the long run, only about 4.5 standard deviations of room between the process mean and the nearest spec limit is realistic, because systems tend to drift slowly over time by plus or minus 1.5 sigma; designing for a full six standard deviations of short-term margin leaves room for that drift, and insisting on six sigma of long-term margin would be over-controlling the system.  This so-called 1.5 "sigma shift" is debated by practitioners of Six Sigma.
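
For the numerically inclined, here is a short sketch of where the famous 3.4 defects per million figure comes from:  a normal process whose mean sits six sigma from the nearest spec limit, and then the same process after drifting 1.5 sigma toward that limit.  This is our own illustration of the arithmetic, not a derivation from the blog.

from scipy.stats import norm

def dpmo(sigma_distance):
    """Defects per million opportunities when the process mean sits
    sigma_distance standard deviations from the nearest spec limit
    (one-sided, which dominates once the process has drifted)."""
    return norm.sf(sigma_distance) * 1_000_000

print("6 sigma, no drift:      ", round(dpmo(6.0), 4), "DPMO")       # about 0.001
print("6 sigma with 1.5 drift: ", round(dpmo(6.0 - 1.5), 1), "DPMO") # about 3.4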

 

In any event, let's take a few more moments to talk about why all of this is worthwhile.  First, generally speaking, service industries such as healthcare and law operate at a certain baseline level of error.  That error rate, again in a general sense, is approximately 1 defect per 1,000 opportunities to make a defect.  This level of error is what is called the 1-1.5 sigma level.  Drawn as a distribution, a defect rate of 1 per 1,000 means that a portion of the bell curve falls outside the upper spec limit, the lower spec limit, or both, with only 1-1.5 standard deviations' worth of data fitting between the upper and lower spec limits.  In other words, you don't have to go far from the central tendency of the data, or the most common values you "feel" when practicing in a system, before you see errors.

 

…and that, colleagues, is some of the power of this process:  it takes how we feel when practicing in a system ("hey, things are pretty good…I only rarely see problems with patient prescriptions…") and highlights the often counter-intuitive fact that the rate of defects in our complex service systems just isn't ok.  Best of all, the Six Sigma process makes it clear that it is more than just provider error that creates an issue with a system; in fact, there are usually several of the 6 Ms conspiring to make a problem.  This does NOT mean we are excused from any responsibility as providers, yet it recognizes what the data tell us over and over again:  many things go into making a defect, and those things are modifiable by us.  The idea that it is the nurse's fault, the doctor's fault, or any one person's issue is rarely (yet not never) the case and is almost a non-starter in Six Sigma.  To get to the low error rates we target, and the patient safety we want, an effective system must address more than just one of the 6 Ms.  I have many stories in which defects that led to patient care issues were built into the system and were discovered only when the team collected data on a process, including one time when computerized order entry automatically displayed the wrong order for a patient on the nurse's view of the chart.  Eliminating the computer issue, along with nursing education and physician teamwork, greatly improved compliance with certain hospital quality measures as part of a comprehensive program.

 

Let's be more specific about the nuts and bolts of this process as we start to describe Cpk.  The bell curve picture shows that some portion of the distribution can fall above the acceptable area, below it, or both.  We can think of the two spec limits as goalposts:  there is a lower spec limit goalpost and an upper spec limit goalpost, and our goal is to have at least 4.5, and preferably 6, standard deviations of room between the process center and each goalpost.  This ensures a very low error rate.  Again, this is where the term six sigma comes from.  If the process center sits approximately six standard deviations from the nearest spec limit, allowing for the 1.5 sigma drift described above, the process makes approximately 3.4 defects for every one million opportunities.

 

Six sigma is more of a goal for systems to achieve than a rigid, prescribed, absolute requirement.  We attempt to progress toward this level with some of the various techniques we will discuss.  Interestingly, we can tie defect rates to sigma levels as described.  Again, one defect per every thousand opportunities is approximately the 1-1.5 sigma level.

 

There are also other ways to quantify the defect rate and system performance.  One of these is the Cpk we mention above.  Cpk is a single number that captures both how well the process is centered between the lower and upper spec limits and how much of the distribution fits inside them.  Thus we can understand a system's performance from its Cpk, and from that we can understand the associated defect rate.  Each Cpk value corresponds to a sigma value, which in turn corresponds to an error rate.  So a Cpk tells us a great deal about system performance in one compact number.
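
The blog does not spell out the arithmetic, so here is a sketch of the standard capability formulas with invented turnaround-time data:  Cp compares the spec width to six standard deviations of process spread, Cpk additionally penalizes a process that is off-center, and (short term) the sigma level is roughly three times Cpk.

import numpy as np

def capability(data, lsl, usl):
    """Return (Cp, Cpk) for the given data and spec limits, using the
    usual formulas Cp = (USL - LSL) / 6s and
    Cpk = min(USL - mean, mean - LSL) / 3s."""
    mean = np.mean(data)
    s = np.std(data, ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    return cp, cpk

rng = np.random.default_rng(seed=4)
turnaround = rng.normal(loc=45, scale=5, size=200)   # hypothetical minutes

cp, cpk = capability(turnaround, lsl=0, usl=60)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, approx short-term sigma level = {3 * cpk:.1f}")

In this made-up example the spec width alone looks generous (Cp of about 2), but because the process runs close to the upper limit the Cpk is only about 1, roughly a three sigma process.  That gap is exactly the centering information Cpk adds.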

 

Before we move on to our next blog entry, take a moment to consider some important facts about defect rates.  For example, you may feel that one error in one thousand opportunities is not bad.  That's how complex systems fool us…they lull us to sleep because the most common experience is perfectly acceptable, and we've already said that typical error rates are about 1 defect per every 1,000 opportunities…that's low!  However, if that 1-1.5 sigma rate were acceptable, there would be several important consequences.  Let's use that error rate to highlight some real-world manifestations in high-stakes situations.  If the 1-1.5 sigma rate were acceptable, we would be ok with a plane crash each day at O'Hare airport.  We would also be comfortable with thousands of wrong-site surgeries every day across the United States.  In short, the 1-1.5 sigma defect rate is simply not appropriate for high-stakes situations such as healthcare.  Tools such as Cpk, the sigma level, and defect rates are key to having a common understanding of how different systems are performing and of the level at which they should perform.  This framework is shared by staff across companies who are trained in Six Sigma, and practitioners looking at similar data sets come to similar conclusions.  We can benchmark these measures and follow them over time.  We can show ourselves our true performance (as a team) and make improvements.  This is very valuable from a quality standpoint and gives us a common approach to often complex data.

 

In conclusion, it is interesting to see that a term we typically use in healthcare has a different meaning in statistical process control.  CPK is a very valuable lab test in patients who are at risk for rhabdomyolysis, and in those who have the condition, yet Cpk is also key for describing process centering and defect rates.  Consider using Cpk to describe the level of performance of your next complex system and to help represent overall process functionality.

 

Questions, thoughts, or stories of how you have used Cpk in Lean and Six Sigma?  Please let us know.
