CPK Does Not Just Stand For Creatine Phosphokinase

 


 

One of the most entertaining things we have found with respect to statistical process control and quality improvement is how many of its acronyms overlap with the ones we typically use in healthcare.  One acronym we use frequently in healthcare, but which takes on a very different definition in quality control, is CPK.  In healthcare, CPK typically stands for creatine phosphokinase.  (Yes, you’re right:  who knows how we in the healthcare world turn creatine phosphokinase into “CPK” when both the p and the k come from the second word.  We just have to suspend disbelief on that one.) CPK may be elevated in rhabdomyolysis and other conditions. In statistical process control, CpK is a process capability index that tells us how well our system is performing.  CpK could not be more different from CPK, and it is a useful tool in developing systems.

 

As we said, healthcare abounds with acronyms.  So does statistical process control.  Consider the normal distribution we have discussed in previous blogs.  The normal, or Gaussian, distribution is frequently seen in processes.  We can run tests such as the Anderson-Darling test, where a p value greater than 0.05 is ‘good’ in that it indicates our data do not deviate significantly from the normal distribution.  See the previous entry, “When Is It Good To Have p > 0.05?”

 

As mentioned, having a data set that follows the normal distribution allows us to use well-known, comfortable statistical tests for hypothesis testing.  Let’s explore normal data sets more carefully as we build up to the utility of CpK.  Normal data sets display common cause variation.  Common cause variation means the variation in a system is not due to an imbalance in one of its underlying factors.  These underlying factors are known as the 6 Ms: man (nowadays often just “person”), materials, machines, methods, mother nature, and management/measurement.  They are described differently in different texts, but the key is that they are well-established sources of variability in data.  Again, the normal distribution demonstrates common cause variation in that none of the 6 Ms is highly imbalanced.

 

By way of contrast, sometimes we see special cause variation.  Special cause variation occurs when certain factors make the data set deviate from the normal distribution to a substantial degree.  It arises when one of the 6 Ms is badly imbalanced and contributes so much variation that a normal distribution is no longer present.  While such insights can tell us a great deal about our data, there are other process indicators in common use that may yield even more insight.

 

Did you know, for example, that the six sigma process is called six sigma because the goal is to fit six standard deviations’ worth of data between the upper and lower specification limits?  This ensures a robust process in which even a relative outlier of the data set is nowhere near an unacceptable level.  In other words, the chances of making a defect are VERY slim.

 

We have introduced some vocabulary here, so let’s take a second to review it.  The lower specification limit (or “lower spec limit”) is the lowest acceptable value for a given system, and the upper spec limit is the highest acceptable value.  In Six Sigma we say the spec limits should be set by the Voice of the Customer (VOC), whether that customer is Medicare, an internal customer, or another group such as patients; a regulatory body or other entity may also set them.  Importantly, there are process capability measures that tell us how well systems are performing against those limits.  As mentioned, the term six sigma comes from the fact that one of the important goals at Motorola (which formalized this process) and other companies is to have systems where more than 6 standard deviations of data can fit between the upper and lower spec limits.

 

Interestingly, some argue that only 4.5 standard deviations of data should need to fit between the upper and lower spec limits in an idealized system, because processes tend to drift over time by plus or minus 1.5 sigma, and forcing 6 standard deviations between the limits amounts to over-controlling the system.  This so-called 1.5 “sigma shift” is debated among six sigma practitioners.

 

In any event, let’s take a few more moments to talk about why all of this is worthwhile.  First, service industries such as healthcare and law operate, generically speaking, at a certain level of error.  That error rate is approximately 1 defect per 1000 opportunities to make a defect.  This level of error is what is called the 1-1.5 sigma level: when the data are drawn as a distribution, a portion of the bell curve falls outside the upper spec limit, the lower spec limit, or both, with only 1-1.5 standard deviations’ worth of data fitting between the limits.  In other words, you don’t have to go far from the central tendency of the data, the most common values you “feel” when practicing in a system, before you see errors.

 

…and that, colleagues, is some of the power of this process:  it shows clearly the gap between how we feel when practicing in a system (“hey, things are pretty good…I only rarely see problems with patient prescriptions…”) and the often counter-intuitive fact that the defect rate in our complex service system just isn’t ok.  Best of all, the Six Sigma process makes it clear that it is more than just provider error that creates an issue with a system; in fact there are usually several of the 6 Ms conspiring to make a problem.  This does NOT mean that we are excused from responsibility as providers, yet it recognizes what the data tell us (over and over again): many things go into making a defect, and these are modifiable by us.  The idea that it is the nurse’s fault, the doctor’s fault, or any one person’s issue is rarely (yet not never) the case and is almost a non-starter in Six Sigma.  To get to the low error rates we target, and the patient safety we want, the effective system must rely on more than just one of the 6 Ms.  I have many stories where defects that lead to patient care issues are built into the system and are discovered when the team collects data on a process, including one time when computerized order entry automatically produced the wrong order on the nurse’s view of the chart.  Eliminating the computer issue, combined with nursing education and physician teamwork as part of a comprehensive program, greatly improved compliance with certain hospital quality measures.

 

Let’s be more specific about the nuts and bolts of this process as we start to describe CpK.  The bell curve approach shows that a certain portion of the distribution may fall above the acceptable area, below it, or both.  We can think of the two spec limits as goalposts: there is a lower spec limit goalpost and an upper spec limit goalpost, and our goal is to have at least 4.5, and more likely 6, standard deviations of data fit easily between them.  This ensures a very, very low error rate.  Again, this is where the term six sigma comes from.  If we have approximately 6 standard deviations of data between the upper and lower spec limits, we have a process that makes approximately 3.4 defects per one million opportunities.
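
If you want to see where that 3.4-per-million figure comes from, here is a minimal sketch in Python (our addition, not part of the original post) that converts a sigma level to defects per million opportunities, assuming the conventional 1.5 sigma long-term shift discussed above:

```python
# Sketch (our addition): defects per million opportunities (DPMO) for a given
# sigma level, assuming the conventional 1.5-sigma long-term shift noted above.
from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    """One-sided tail area beyond (sigma_level - shift) standard deviations, per million."""
    return norm.sf(sigma_level - shift) * 1_000_000

print(round(dpmo(6.0), 1))  # ~3.4 defects per million at six sigma
print(round(dpmo(4.5)))     # ~1350 defects per million at 4.5 sigma
print(round(dpmo(3.0)))     # ~66807 defects per million at 3 sigma
```

Running it also shows why a roughly 3 sigma process can feel “pretty good” day to day while remaining far from six sigma performance.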

 

Six sigma is more of a goal for systems to achieve than a rigid, prescribed, absolute requirement.  We attempt to progress toward this level with some of the techniques we will discuss.  Interestingly, we can relate defect rates to sigma levels as described: again, one defect per every thousand opportunities is approximately the 1-1.5 sigma level.

 

There are also other ways to quantify the defect rate and system performance.  One of these is the CpK we mention above.  The CpK is a number that represents how well the process is centered between the lower and upper spec limits and how comfortably it fits between them.  Each CpK value corresponds to a sigma level, which in turn corresponds to a defect rate, so once we know a system’s CpK we also understand its expected defect rate.  In short, a CpK tells us a great deal about system performance in one compact number.
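
To make that concrete, here is a minimal sketch of the usual CpK calculation (the spec limits and data below are illustrative assumptions, not values from this post):

```python
# Sketch (illustrative values, not data from the post): computing CpK from
# process data and customer-set spec limits.
import numpy as np

def cpk(data, lsl, usl):
    """Distance from the process mean to the nearer spec limit, in units of 3 sigma."""
    mu = np.mean(data)
    sigma = np.std(data, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(0)
times = rng.normal(loc=45, scale=5, size=200)  # e.g. simulated door-to-needle times, minutes
print(round(cpk(times, lsl=20, usl=60), 2))    # roughly 1.0 for this simulated process
```

A CpK of about 1.33 is a commonly cited minimum target, and a centered process with a CpK of 2 corresponds to the six sigma goal described above.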

 

Before we progress to our next blog entry, take a moment and consider some important facts about defect rates.  You may feel that one error in one thousand opportunities is not bad.  That’s how complex systems fool us…they lull us to sleep because the most common experience is perfectly acceptable, and we’ve already said that typical error rates are about 1 defect per 1000 opportunities…that’s low!  However, if that 1-1.5 sigma rate were acceptable, there would be several important consequences.  To highlight some real-world manifestations in high-stakes situations: if the 1-1.5 sigma rate were acceptable, we would be ok with a plane crash each day at O’Hare airport, and we would be comfortable with thousands of wrong-site surgeries every day across the United States.  In short, the 1-1.5 sigma defect rate is not appropriate for high-stakes situations such as healthcare.  Tools such as the CpK, the sigma level, and defect rates give us a common understanding of how different systems are performing and a sense of where their performance should be.  This framework is easily shared by staff across companies who are trained in six sigma, and practitioners looking at similar data sets come to similar conclusions.  We can benchmark these measures and follow them over time.  We can show ourselves our true performance (as a team) and make improvements.  This is very valuable from a quality standpoint and gives us a common approach to often complex data.

 

In conclusion, it is interesting to see that a term we typically use in healthcare has a different meaning in statistical process control.  CPK is a very valuable lab test for patients who are at risk for rhabdomyolysis, and for those who have the condition, yet CpK is also key for describing process centering and defect rates.  Consider using CpK to describe the level of performance of your next complex system and to help represent overall process functionality.

 

Questions, thoughts, or stories of how you have used the CpK in lean and six sigma?  Please let us know your thoughts.

Should Locum Tenens Surgery Be Its Own Specialty?

 

One of the realities facing trauma and acute care surgery programs across the country is the need for intermittent staffing assistance from qualified professionals. Locum tenens, Latin for “placeholder,” is one of the typical staffing models used to cover this shortfall. In fact, the shortfall is predicted to grow: more patients are entering the market and more are expected to seek acute care services throughout the United States.  Increasing patient demand is not the only force pushing centers toward short-term surgical staffing. General surgeons are retiring out of rural markets, and finishing trainees are not keen to replace them.  This leaves large rural areas with opportunities for trauma and emergency surgeons to participate in a flexible staffing model.  Locum tenens surgery has often been the vehicle used to satisfy the demand for trauma and acute care surgeons, in addition to other surgical subspecialties.  It is now more important than ever to focus on how best to train, certify, and deploy surgeons for short-term staffing.

 

So we are faced with a situation where the future seems to indicate a continued need for short-term staffing models, including locum tenens surgery.  The question, then, is not whether we will continue to use locum surgeons but how we can best use them.  My feeling, owing to a previous role as a rural Trauma and Acute Care Surgery Section Chief, is that we can use locum tenens and per diem models much more effectively than we do today.

 

In order to provide the best care for our patients, let’s consider some of the classic shortfalls of short-term staffing.  First, it is challenging for locum tenens-type surgeons to sign their charts and complete medical records in a timely fashion for the institution.  That delay often creates medicolegal liability both for the practitioners on site and for the locum tenens practitioner who has since left the area.  Compliance with medical staff bylaws is usually lacking, and hospitals often have to look the other way for the locum tenens staff involved.  So one shortfall of the locum tenens model is that many excellent locum tenens providers have “day jobs” and are not available to sign paperwork despite their best intentions.  The paperwork and compliance burden, however, is not the only challenge.

 

Other issues include quality of care.  I have found that many locum tenens and per diem staff are excellent, skilled practitioners.  However, some are not.  My feeling is that there is a wide distribution of skill among locum tenens and per diem providers.  It is not that locum tenens is ‘bad’ or ‘good’; rather, locum tenens providers vary widely in what they can achieve in your system.  Add the talent factor to other issues, such as organizational fit, lack of familiarity with local politics, and unfamiliarity with the system in which they are practicing, and we have an obvious setup for poor quality.

 

How can we best ameliorate some of these challenges for locum tenens surgeons and the systems in which they practice? One suggestion I have is that we encourage a locum tenens certification. Think about it: locum tenens / per diem staffing has a specialized body of knowledge and special constraints.  That body of knowledge concerns how to enter an unfamiliar system quickly, ascertain what is necessary to make the system run effectively, and provide excellent care despite the friction inherent in entering a new system.  It involves staying calm when the hospital does not have credentials, a parking card, or an ID ready for the practitioner’s arrival.  It involves interfacing with locum tenens and short-term staffing companies, which often have surgeons at a disadvantage with respect to compensation and other terms.

 

A classic locums scenario:  you (the locums surgeon) may believe it is safe, and standard, to start chemoprophylaxis such as heparin or Lovenox for patients with traumatic brain injuries whose head CT shows no change after 48 hours–but the local neurosurgeons do not.  Do you (A) throw a fit in the ICU and degrade the level of care at the institution, (B) point to the latest guideline from EAST or a similar, respected trauma body and chuckle to yourself about how uneducated the “locals” are, (C) find the trauma program’s practice management guideline and do whatever it says, or (D) order a surveillance ultrasound?  Issues like these, fraught with trade-offs among local politics, personal beliefs, and keeping the peace while delivering effective patient care, abound in the lives of locums practitioners.  In such acute, high-risk situations, doesn’t it just make sense to educate and certify surgeons in the role of locums practitioner?  Wouldn’t certification promote a more reliable approach, with less variance, to these complex issues?

 

There are other considerations too, such as reputational issues for locum tenens. I am aware of many instances where the local surgeons feel it is acceptable to sacrifice the reputation of the locum tenens surgeon in order to preserve relationships with physicians who are on staff full time.  This unique set of circumstances creates real pressures and requirements for short-term staffing and locum providers.  Given the multiple constraints, high-stakes quality issues, and increasing need for short-term staffing, doesn’t it make sense to have a credentialing body, or at least a codified body of knowledge, for locum tenens surgeons? Doesn’t it make sense to have a professional group dedicated to the surgeons who practice this often very challenging craft?

 

Other important, less patient-oriented issues include factors such as independent contractor taxation, along with other perils and pitfalls that practitioners do not fully appreciate until they have worked within the locum tenens model.  Most new graduates do not understand the business of medicine, and even fewer understand the special business issues of locum tenens practice.

 

To this end, innovative staffing companies such as Emergency Surgical Staffing, LLC (often called ESS) have evolved as potential disruptors with a focus on issues inherent in typical locums models, such as variability, inability to participate in process improvement, and challenges in completing paperwork on time.  Teams like ESS have arisen specifically to address each of those issues and improve on the current challenges in short-term staffing.  Information is available at EmergencySurgicalStaffing.com.

 

At the end of the day, locum tenens surgery, along with per diem and other staffing models, will become more and more useful for covering the surgeon shortfall–especially in rural areas.  Owing to the specialized nature of what locums surgeons must accomplish, it is increasingly important that the body of knowledge they require be codified and that surgeons who practice in locums venues be certified, or at least receive special education, both to protect themselves and their patients.  Locum tenens and per diem practice are very different from routine surgical practice and should be treated as such.  Rather than having young surgeons find out the hard way about reputational issues, patient care challenges, and taxation issues, a workable solution is to evolve a locum tenens system that allows us to use this staffing model more effectively as it becomes more and more in demand.

 

Disclosure:  David Kashmer, author of this post, is Associate Director of Emergency Surgical Staffing LLC and highlights it here as an example of a company that seeks competitive advantage by leveraging disruptive innovations pointed at challenges in the current locums model.

 

Questions, comments, or thoughts?  We invite your feedback and are happy to review your comments beneath.

Risk Is The Standard Deviation of Returns

 

In this entry we will review some of the different products in which you can invest.  We will explore the concept of risk, look at some historical data on risk versus return, and examine what each of these means for us as personal investors in the market.

 

The concept of risk is a challenging one, and yet it is the focus of much of what we decide with respect to personal investing.  One interesting way to gauge how much risk you are willing to endure, aka your risk tolerance, is to ask yourself how much you would bet on the flip of a coin.  Although that simple thought experiment is a classic way to measure risk tolerance, it is by no means the most sophisticated or comprehensive.

 

Regardless of how you measure your personal risk threshold, remember:  risk and reward should be closely associated in your mind when it comes to personal investing.  In fact, risk and reward are flip sides of the same coin.  Risk is what we endure for certain rewards.  Therefore, the reward from an investment must be commensurate with the risk we take in putting our money into a given stock, mutual fund, or other investment vehicle.  Here, we will describe some of the ways to understand how much risk is inherent in an investment versus its return, and we will examine some classic investment vehicles such as stocks, mutual funds, bonds, and real estate.

 

One way to conceptualize risk is as the standard deviation of returns.  That is, the risk inherent in, for example, a stock can be thought of as the standard deviation of its returns when those returns are plotted as a histogram.  The wider the spread of returns on your money, the riskier the investment. Here, we will cite several distributions of returns from a standard text.
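
As a quick illustration, here is a minimal sketch (with made-up returns, not real market data) of risk computed as the standard deviation of a return series:

```python
# Sketch with made-up annual returns (not real market data): risk measured as
# the standard deviation of a return series.
import numpy as np

annual_returns = np.array([0.11, -0.05, 0.23, 0.08, -0.18, 0.31, 0.04])  # hypothetical

print(round(np.mean(annual_returns), 3))         # average return
print(round(np.std(annual_returns, ddof=1), 3))  # "risk": the spread of those returns
```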

 

One of the most popular texts on the concept of risk versus return is Burton Malkiel’s A Random Walk Down Wall Street. Page 185 gives one of the most useful representations of the risk inherent in different types of investment opportunities.  Consider, for one, the distribution of returns from the stock market.  The historic standard deviation of stock market returns is approximately 20%, as described in Malkiel’s text.  (See Figure 1.)  Therefore, of the various opportunities we will discuss, investing in the stock market is one of the riskiest.

 

Figure 1: Histograms with standard deviations of returns for various investment products, from Malkiel’s book A Random Walk Down Wall Street

 

This is because, although there is great potential for upside, there is also routine potential for downside. As you probably know, the interval within plus or minus one standard deviation of the central tendency of a bell curve contains roughly 68% of values.  Therefore, based on the historical data, roughly two-thirds of the time an annual stock market return will fall within about 20 percentage points of the “average” return of around 11%.  Again, the narrower the standard deviation in a system, the less risk inherent in that system.

 

Consider, next, the bond market.  Corporate and municipal bonds are similar in their returns, approximately 3-4% per year (see Figure 1), and there is a much lower standard deviation of returns, historically, in the bond market. Notice, however, that on an after-tax basis corporate and municipal bonds give essentially the same return for the same amount of money.  Corporate bonds are thought to be somewhat riskier than municipal bonds in that an individual corporation may default (bankruptcy) and give you no money back.  Therefore, US municipal bonds are felt to be the more conservative option (the issuing government is unlikely to go bankrupt) for those interested in investing in bonds.

 

Real estate is another investment option.  The historical return of the real estate market is approximately 2%.  Clearly, just as with stocks and bonds, the real estate market we are talking about is the market as a whole from its inception until now.  There are several things that are not well represented by taking such a global view.  For one, different geographic areas of the country, such as vacation destinations like Hawaii or Florida, may perform better than the market as a whole.  Also, isolated periods of time may buck the trend and outperform history; the housing bubble, for example, generated significant returns for some participants in the market.  In general, though, real estate is felt to be a secure investment and returns approximately 2% to personal investors.  Keep in mind that inflation is often higher than this 2% return; however, real estate can be enjoyed, will generally increase in value with time, and may, depending on the market, generate a nice rental return.  Renting out a property may also generate some nice passive income.

 

Did you notice how we considered the real estate market as a whole and then focused on how individual segments within it (as well as isolated periods in its history) may differ? Similarly, the stock market data we cite above are for the market as a whole since inception.  There are important limitations to this view.  For one, it does not highlight crash years such as the great stock market crash or the several significant recessions we have had.  For that reason, some argue that the market has a standard deviation of returns greater than 20%.

 

Now we have a concept of risk as the standard deviation of returns on an investment.  Another question is: how can we measure risk for individual stocks in the market?  There are several useful pieces of data I would like to share that are publicly available on Yahoo Finance and other public finance sites.

 

One piece of data you can use to decide whether to invest in an individual stock is the beta value.  Beta is obtained by plotting the stock’s increase or decrease (on the y axis) against the market’s increase or decrease (on the x axis).  That is, a standard stock market index (such as the Dow Jones Industrial Average or the S&P 500) is plotted, and the stock’s performance is plotted against it, so the XY plot shows how the stock moves when the market as a whole moves.  Next, a regression model is used to generate the equation of a regression line.  The slope of that regression line is the beta value.
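
Here is a minimal sketch of that calculation (the return series are hypothetical, not real quotes); beta falls out as the slope of a simple linear fit:

```python
# Sketch (hypothetical per-period returns, not real quotes): beta as the slope
# of the regression of a stock's returns on the market's returns.
import numpy as np

market_returns = np.array([0.010, -0.020, 0.015, 0.030, -0.010, 0.020])  # e.g. S&P 500
stock_returns  = np.array([0.018, -0.035, 0.025, 0.055, -0.020, 0.032])  # the stock in question

beta, intercept = np.polyfit(market_returns, stock_returns, 1)  # slope = beta
print(round(beta, 2))  # a value above 1 suggests the stock tends to swing more than the market
```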

 

Figure 2: Beta is the slope of the regression line that associates the S&P 500 return with an individual stock’s return. From Neil Cohen, DBA CFA, in Financial Management MBAD 233-EM1, George Washington University, Spring 2010

 

In other words, the beta value is the amount the stock tends to move relative to the market as a whole.  (See Figure 2.) Thus a stock with a beta of positive 1 goes up one point when the market increases one point (and goes down one point when the market goes down one point).  This gives you a sense of how risky the stock you like is compared with the market as a whole.  A stock with a beta of 2 may increase by 2 when the market goes up and is likewise more likely to decrease by 2 when the market goes down.  Recall that this comes from a linear regression model, so it should not be said that every day the market goes up one point your beta-2 stock will go up 2 points; it is a general tendency over time.  The beta gives you a sense of how strongly the stock varies when the market varies, and thus a concept of how risky your stock investment is.

 

Another useful number is the alpha.  Alpha is a measurement usually associated with mutual funds.  It indicates the amount of return you can obtain from a mutual fund in excess of the return expected for the risk inherent in the portfolio that makes up the fund.  (Awesome, right?  Who wouldn’t want a return higher than the risk involved?) Now you know why the well-known investing site SeekingAlpha.com is called that: who wouldn’t want to seek a return in excess of the involved risk?

 

As you probably know, a mutual fund is a group of stocks maintained in a portfolio by a given company; you purchase shares of the fund.  It is important to know the management fees associated with your mutual fund.  Remember, managers may be paid to adjust the stocks that make up the fund, adding positions that are performing well and trimming those performing less effectively.  There are also mutual funds indexed to the market as a whole.  These index funds attempt to reproduce an investment in the entire market, reflecting its increases and decreases by holding a broad selection of stocks felt to represent the market.  They allow you to invest in the whole market.

 

Another, newer class of investment vehicles is derivatives, so called because they derive from the more familiar products above.  One of the derivative markets is the options market.  Options are contracts that allow someone to buy or sell a stock in the future at a given price.  I will not say much about the derivatives market here except that there generally seems to be a wider standard deviation of returns for options; however, there are multiple risk mitigation strategies that allow for some very interesting opportunities in the options market.

 

In conclusion, we have discussed a definition of risk as it relates to personal investment:  risk may be thought of as the standard deviation of returns on an investment product.  We described one classic test to give you a sense of your risk threshold, and we highlighted the standard deviation of returns across multiple products including stocks, bonds, and real estate.  Good luck in your personal investment decisions and your attempts to beat inflation.  Remember, a key focus for a personal investor is the return at the end of the day AFTER taxes, any fees (such as those a mutual fund charges for management), and inflation.

 

Please note:  None of the above constitutes investment advice–professional or otherwise.  I’m sharing just a bit of how risk may be conceptualized in complex investment decisions so that you can make your own decision based on your unique situation.

 

Questions, comments, or feedback?  Feel free to add your thoughts and comments beneath.

 

When Is It Good To Have p > 0.05?

Some of the things I learned as a junior surgical resident were oversimplified. One of these is that a p value less than 0.01 is “good.”  In this entry we discuss when it is appropriate for a p value to be greater than 0.01, and those times when it is even favorable to have one greater than 0.05.  We invite you to take some time with this entry, as it covers some of the most interesting facts we have found about hypothesis testing and statistics.

 

In Lean and Six Sigma, much of what we do is take existing statistical tools and apply them to business scenarios so that we have a more rigorous approach to process improvement.  Although we call the process Six Sigma or Lean depending on the toolset, both are pathways to set up a sampling plan, capture data, and rigorously test those data to determine whether we are doing better or worse after certain system changes–and we get this done with people, as a people sport, in a complex organization.  Personally, I have found that the value of letting the data tell us how we are doing is that it disabuses us of instances where we think we are doing well and are not.  It also focuses our team on team factors and system factors, which are in fact responsible for most defects.  Using data keeps us from becoming defensive or angry at ourselves and our colleagues.  That said, there are some interesting facts about hypothesis testing that many of us knew nothing about as surgical residents.  In particular, consider the p value.

 

Did you know, for example, that you actually set certain characteristics of your hypothesis testing when you design your experiment or data collection?  For example, when you design a project or experiment, you need to decide at what level to set your alpha.  (This relates to p values in just a moment.) The alpha is the risk of making a type 1 error.  For more information about type 1 errors, please visit our earlier blog entry about type 1 and type 2 errors.  For now, let’s say the alpha risk is the risk of tampering with a system that is actually ok; that is, alpha is the risk of concluding there is an effect or change when in fact there is none.  So when we set up an experiment or data collection, we set the alpha risk inherent in our hypothesis testing.  Of course, there are certain conventions in the medical literature that determine what alpha level we accept.

 

Be careful, by the way, because alpha is used in other fields too.  For example, in investing, alpha is the amount of return on your mutual fund investment that you will get IN EXCESS of the risk inherent in investing in the mutual fund.  In that context, alpha is a great thing.  There are even investor blogs out there that focus on how to find and get this extra return above and beyond the level of risk you take by investing.  If you’re ever interested, visit seekingalpha.com.

 

Anyhow, let’s pretend that we are willing to accept a 10% risk of concluding there is some change or difference in our post-change state when in fact there is no actual difference (a 10% alpha).  In most cases the difference we may see could vary in either direction: our values after the changes could be either higher or lower than they were before.  For this reason it is customary to use what is called a two-tailed p value.  The alpha risk is split between the two tails of the distribution (5% for “higher than chance alone” and 5% for “lower”), and the two-tailed p value is compared against the full 10% alpha: if the p value is greater than 0.10, we conclude there is no significant difference in our data between the pre- and post-change system.
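
As a small illustration, here is a sketch (with simulated before-and-after data, not data from any project in this post) of how a two-sided and a one-sided test treat the same comparison:

```python
# Sketch (simulated before/after measurements, not data from any project here):
# a two-sided test spreads the alpha risk over both tails; a one-sided test
# puts all of it in one tail.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
pre  = rng.normal(loc=100, scale=10, size=40)  # hypothetical "before" values
post = rng.normal(loc=104, scale=10, size=40)  # hypothetical "after" values

_, p_two_sided = ttest_ind(pre, post, alternative='two-sided')  # change in either direction
_, p_one_sided = ttest_ind(pre, post, alternative='less')       # only "after is higher" counts
print(round(p_two_sided, 3), round(p_one_sided, 3))             # compare each to the alpha chosen up front
```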

 

The take-home is that we decide, before we collect data (to keep the ethics of it clean), how we will test those data to conclude whether there is a change or difference between the two states.  We determine in advance what will count as a statistically significant change based on the conditions we set:  what alpha risk is too high to be acceptable in our estimation?

 

Sometimes, if we have reason to suspect the data can vary in only one direction (such as prior evidence indicating an effect in one direction only, or some other factor), we may use a one-tailed p value.  A one-tailed p value simply means that all of our alpha risk is lumped into one tail of the distribution.  In either case we should decide how we will test our data before we collect them.  Of course, in real life there are sometimes data that already exist and are of high quality (clear operational definition, etc.), and we need to analyze them for some project.

 

Next, let’s build up to when it’s good to have a p > 0.05; after all, that was the teaser for this entry.  This brings us to some other interesting facts about data collection and sampling methods.  In Lean and Six Sigma we tend to classify data as either discrete or continuous.  Discrete data are, for example, yes/no data; discrete data can take only certain defined categories, such as red/yellow/blue, yes/no, or black/white/grey.  Continuous data, by contrast, are infinitely divisible.  One way I have heard continuous data described, and the way I use when I teach, is that continuous data can be divided in half forever and still make sense: an hour can be divided into two groups of 30 minutes, minutes can be divided into seconds, and seconds can be divided further still.  This infinitely divisible type of data is continuous and makes a continuous curve when plotted.  In Lean and Six Sigma we try to use continuous data whenever possible.  Why?  The answer makes for some interesting facts about sampling.

 

First, did you know that we need much smaller samples of continuous data to demonstrate statistically significant changes?  Consider a boiled-down sampling equation for continuous data: n = (2s/delta)^2, where s is the historic standard deviation of the data and delta is the smallest change you want to be able to detect.  The 2 comes from the z score at the 95% confidence level.  For now, just remember that this is a generic, conservative sampling equation for continuous data.

 

Now let’s look at a sampling equation for discrete data: n = p(1-p)(2/delta)^2.  Let’s plug in what it would take to detect a 10% difference in discrete data.  Using p = 50% for the probability of a yes or no, we find we need a sample of about 100, and the required sample grows quickly as the detectable difference shrinks.  For continuous data, using similar methodology, we need much smaller samples; for reasonably small deltas this may be only 35 data points or so.  Again, this is why Lean and Six Sigma use continuous data whenever possible.  Now let’s focus on some sampling methodology issues and the nature of what a p value is.
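
Here is a minimal sketch of those two boiled-down formulas in Python (the example inputs are our own choices, picked to reproduce the ballpark numbers quoted above):

```python
# Sketch of the two boiled-down sampling formulas quoted above
# (the example inputs are our own choices).

def n_continuous(s, delta):
    """Continuous data: n = (2s / delta)^2."""
    return (2 * s / delta) ** 2

def n_discrete(p, delta):
    """Discrete (proportion) data: n = p(1 - p) * (2 / delta)^2."""
    return p * (1 - p) * (2 / delta) ** 2

print(round(n_continuous(s=1.0, delta=1/3)))  # ~36: the "35 data points or so" ballpark
print(round(n_discrete(p=0.5, delta=0.10)))   # 100 observations to detect a 10% shift
print(round(n_discrete(p=0.5, delta=0.02)))   # 2500 observations to detect a 2% shift
```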

 

Next, consider the nature of statistical testing and some things you may not have learned in school.  Did you know, for example, that underlying most of the common statistical tests is the assumption that the data involved are normally distributed?  Data in the real world may indeed be normally distributed; again, that means data whose histogram follows a Gaussian curve.  However, in the real world of business, manufacturing, and healthcare, data are often not normally distributed.  Sometimes data may be plotted and look normally distributed when in fact they are not, which would invalidate some of the assumptions behind common statistical tests.  In other words, we can’t use a t test on data that are not normally distributed; Student’s t test, for example, assumes the data are normally distributed.  What can we do in this situation?

 

First, we can rigorously test our data to determine whether they are normally distributed.  There is a named test, the Anderson-Darling test, that compares our data’s distribution against the normal distribution.  If the p value accompanying the Anderson-Darling test statistic is greater than 0.05, our data do not deviate significantly from the normal distribution, and we conclude we can use the common statistical tests known and loved by general surgery residents (and beyond) everywhere.  However, if the Anderson-Darling test indicates that our data are not normally distributed, that is, the p value is less than 0.05, we must look for alternative ways to test our data.  This was very interesting to me when I first learned it: a p value greater than 0.05 can be good, especially if we are looking to demonstrate that our data are normal so we can go on to use hypothesis tests that require normally distributed data.  Here are some screen captures that highlight Anderson-Darling.  Note that in Fig. 1 the data DON’T appear to be normally distributed by the “eyeball test” (just looking at the data and going with our gut), yet in fact the data ARE normally distributed and p > 0.05.  Figure 2 highlights how a data distribution follows the expected frequencies of the normal distribution.

 


Figure 1:  A histogram with its associated Anderson-Darling test statistic and p value > 0.05.  Here, p > 0.05 means these data do NOT deviate from the normal distribution…and that’s a good thing if you want to use hypothesis tests that assume your data are normally distributed.

 


Figure 2:  These data follow the expected frequencies associated with the normal distribution.  The small plot in Figure 2 demonstrates the frequencies of data in the distribution versus those of the normal distribution.
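
For readers who want to try this on their own data, here is a minimal sketch of one way to run an Anderson-Darling normality check in Python (the statsmodels implementation is our library choice, and the data are simulated, not the data behind the figures above):

```python
# Sketch: one way to run an Anderson-Darling normality check in Python
# (statsmodels is our library choice; the data here are simulated).
import numpy as np
from statsmodels.stats.diagnostic import normal_ad

rng = np.random.default_rng(42)
data = rng.normal(loc=30, scale=4, size=100)  # hypothetical turnaround times, minutes

ad_stat, p_value = normal_ad(data)
if p_value > 0.05:
    print(f"p = {p_value:.3f}: no significant departure from normality; the usual tests are fair game")
else:
    print(f"p = {p_value:.3f}: the data deviate from normal; transform them or go nonparametric")
```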

As with most things, the message that a p value less than 0.01 is good and one greater than 0.01 is bad is a vast oversimplification.  However, it is probably useful as we teach statistics to general surgery residents and beyond.

So, now that you have a methodology for determining whether your data are normally distributed, let’s talk about what to do next, especially when you find that your data are NOT normally distributed and you wonder where to go.  In general, there are two options for continuous data sets that are not normally distributed.  The first is to transform the data with what is called a power transformation.  There are many different power transformations, including the Box-Cox and Johnson transformations, to name two.

 

Power transforms take the raw, non-normally distributed data and raise them to different powers, such as the 1/2 power (taking the square root), the second power, the third power, and so on. The optimal power, the one that brings the data closest to the normal distribution, is identified, the data are re-plotted after transformation to that power, and the Anderson-Darling test (or a similar test) is then performed on the transformed data to determine whether they are now normally distributed.
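
Here is a minimal sketch of that workflow using the Box-Cox transformation (the skewed example data are simulated, and the library choices are ours, not the original author’s tooling):

```python
# Sketch: a Box-Cox power transformation on skewed (simulated) data, followed
# by a re-check of normality on the transformed values.
import numpy as np
from scipy.stats import boxcox
from statsmodels.stats.diagnostic import normal_ad

rng = np.random.default_rng(7)
raw_times = rng.lognormal(mean=3.0, sigma=0.5, size=200)  # right-skewed, clearly not normal

transformed, best_lambda = boxcox(raw_times)  # scipy searches for the lambda that best normalizes
print(round(best_lambda, 2))                  # the power the data were effectively raised to
print(round(normal_ad(transformed)[1], 3))    # p > 0.05 here supports treating the result as normal
```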

 

Often a power transformation will allow the data to become normally distributed.  This brings up an interesting point:  pretend we are looking at a system where time is the focus, the data are not normally distributed, and a power transform demonstrates that time squared is normally distributed.  We now have a philosophical management question: what does it mean to manage time squared instead of time?  These and other interesting questions arise when we use power transforms, and their use is somewhat controversial for that reason.  Sometimes it is hard to know whether the transformed variables still have meaning for management.

 

On the bright side, however, if we have successfully “Box-Cox-ed” or otherwise power-transformed the data to normality, we can now use the common statistical tests. Remember, if the initial data set is transformed, subsequent data must be transformed to the same power; we have to compare apples to apples.

 

The next option for dealing with a non-normal data set is to use statistical tests that do not require normally distributed inputs.  These include less familiar tests such as the Levene test and the Kruskal-Wallis (KW) test.  The Levene test examines variability (equality of variances) across groups, while the KW test is a nonparametric way to compare groups.  Another test, Mood’s median test, compares median values for non-normal data.  So, again, we have several options for addressing non-normal data sets.  As we teach the Lean and Six Sigma process, we usually reserve the treatment of non-normal data for at least a black belt level of understanding.
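
For completeness, here is a minimal sketch showing how these non-normal-friendly tests might be run in Python (the two groups of “waiting times” are simulated, and the library choice is an assumption, not part of the original post):

```python
# Sketch: tests that do not require normally distributed inputs, run on two
# simulated groups of skewed waiting times.
import numpy as np
from scipy.stats import levene, kruskal, median_test

rng = np.random.default_rng(3)
unit_a = rng.lognormal(mean=3.0, sigma=0.6, size=80)
unit_b = rng.lognormal(mean=3.2, sigma=0.6, size=80)

print(levene(unit_a, unit_b).pvalue)   # compares variability (spread) between the groups
print(kruskal(unit_a, unit_b).pvalue)  # nonparametric comparison of the two groups
stat, p, grand_median, table = median_test(unit_a, unit_b)  # Mood's median test
print(p)
```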

 

At the end of the day, this blog post explores some interesting consequences of the choices we make with respect to data and some interesting facts about hypothesis testing.  There is much more choice involved than I ever understood as a general surgery resident.  Eventually, working through the Lean and Six Sigma courses (and finally the master black belt course) taught me about the importance of how we manage data and, in fact, ourselves.  The more than 10 projects in which I have participated have really highlighted these facts about data and reinforced the textbook learning.

 

An interesting take-home message is that a p value less than 0.01 does not mean all is right with the world, just as a p value greater than 0.05 is not necessarily bad.  After all, tests like the Anderson-Darling test are useful for telling us when our data are normally distributed and when we can continue using the more comfortable hypothesis tests that require normally distributed data.  In this post we described some interesting ways to deal with non-normally distributed data so as to improve our understanding and conclusions from continuous data sets.  Whenever possible, we favor continuous data because they require a smaller sample size to reach meaningful conclusions.  However, as with all sampling, we have to be sure that our continuous data sample adequately represents the system we are attempting to characterize.

 

Our team hopes you enjoyed this review of some interesting statistics related to the nature and complexity of p values.  As always, we invite your input as statisticians or mathematicians, especially if you have special expertise or interest in these topics.  None of us, as Lean or Six Sigma practitioners, claims to be a statistician or mathematician.  However, the Lean and Six Sigma process is extremely valuable in applying classic statistical tools to business decision-making.  In our experience, this approach to data-driven decision making has yielded vast improvements in how we practice in business systems compared with models based on opinion or personal experience.

 

As a parting gift, please enjoy (and use!) the file beneath to help you select which tool to use to analyze your data.  This tool, taken from Villanova’s Master Black Belt Course, helps me a great deal on a weekly basis.  No viruses or spam from me, I promise!

ToolTime

Logrolling & The BATNA: Valuable Tools For Negotiating

 

The MBA, medical school, and other coursework have each been very useful.  However, three of the most valuable courses I have ever taken came via the University of Notre Dame and were all about negotiating.  Interestingly, each of these courses had us calling each other across the country to negotiate often unusual scenarios:  one week I was negotiating a manufacturing plant opening in Mexico with some local officials, and the next I was negotiating the purchase of a blue used car (the “Blue Buggy” scenario).  In that manner, I completed an interactive, online Master’s Certificate with the University of Notre Dame with what I consider to be some of the most valuable coursework I have taken.  Let me share some of the basics of negotiation with you beneath, because these skills are so useful and will add value for you across a broad spectrum of endeavors in your life.  My hope is that, if you and I achieve nothing else here, we at least pique your interest to learn more about negotiating skills.  It’s also important to me to highlight how negotiating over things like jobs or resources is NOT as simple as win/lose.

 

In fact, a win/lose view of negotiating leads to missed opportunities and suboptimal deals.  Did you know, for example, that negotiating based on rigid positions (ie “They HAVE to give me this brand new OR team because that’s the ONLY way.”) leads to suboptimal outcomes?  Yes, it has been studied:  positional negotiating with the mindset described above leads to outcomes that are not nearly as good as those obtained when each group in a negotiation focuses on how to satisfy its interests rather than taking rigid positions.  It’s tough to believe that it works when you’re fatigued and skeptical; yet, that said, it does.

 

As we start to dive into these and other findings, let’s first cover some vocabulary.  The Harvard Negotiation Project is one of the sources for certain findings about negotiation, and we’ll draw on it heavily here.  One key term is BATNA, which stands for ‘Best Alternative To a Negotiated Agreement.’  The BATNA is felt to be the source of negotiating power.  How?  Your willingness and ability to negotiate on certain points is contingent upon your alternatives:  the better (and more readily executable) your alternatives, the better positioned and more willing you are to negotiate in different situations.

 

Now, if you have a great alternative, it is frowned upon to remind your partner in the negotiation (the so-called “other side”) of your BATNA up front.  Meaning, in general you shouldn’t walk into the negotiation and say, “Well, this is no big deal because my other option is to take a trip around the world on my 3 million dollar yacht next week.” Why?  Because, as I’ll describe later, the quality and type of relationship you develop up front affects the overall quality of the deal you make.  A good general rule of thumb is that, if it becomes necessary, you should use your BATNA and its power to educate the other side during the negotiation rather than at the outset.  There are rare instances where displaying the BATNA up front may be necessary.

 

Another important vocabulary word is “anchor.”  When a negotiation starts, the first value given by one side to the other for a particular item is called the anchor.  The Harvard Negotiation Project demonstrated many things, and one of them is that the anchor, to a large degree, determines the eventual outcome of a scenario.  So if salary is important to you for a job, and the other side passes along an initially very low salary offer, you are apt to end up with a lower salary than you would have if the anchor had been higher.  If the anchor is set higher at the beginning, the overall outcome tends to be higher.  This goes to the question of who should offer first in a given scenario.  Regardless of who offers first, if you are the recipient of an offer you should seek to replace it with your own value, backed by a reason, as soon as possible.

 

You may think, as I did initially, “Of course the outcome is higher when the anchor is higher, because setting a higher anchor shows that the offering side values something more.  So the anchor doesn’t cause a higher outcome; the outcome would have been higher anyway.” Interestingly, that does not seem to be the case.  The anchor’s initial location, other factors held constant, seems to correlate with the eventual outcome.  In other words:  same scenario, same players, same interests, but a different initial anchor position, and the eventual outcome follows that anchor.  Interesting, huh?

 

Next is the ZOPA, or ‘zone of potential agreement.’ This is the range of values for something like salary over which you and your partner in the negotiation may agree.  Related terms are the ‘floor’ and ‘ceiling.’  The floor is the lowest you will go on a certain point, and the ceiling is the highest you would accept on that item; between your floor and ceiling is the set of values you would accept.  That interval (hopefully) overlaps with at least some of the values between the other side’s floor and ceiling, and that overlap is the ZOPA:  the range of values over which you and your colleague in negotiation may agree.

 

Now that we have described some of the important vocabulary, let me share some of the key lessons I took from the course; these will clarify the vocabulary and how to use it.  First, one of the key observations in the course is that, as Americans, we tend to focus immediately on the task at hand rather than on developing a relationship.  Developing a relationship has been shown in multiple studies to affect the overall course of the negotiation.  Time spent discussing the weather or finding common ground with the ‘other side’ actually improves the negotiation outcome.

 

Further, the reciprocity effect is important.  Did you know, for example, that when a salesperson gives someone a bottle of water at the car dealership, he or she triggers a reciprocity effect?  It is now known that, in general, giving someone a relatively small gift triggers a disproportionate chance that they will buy something large from you, like a car.  This reciprocity effect is strong and relates to social norms across cultures.  In the end, it is useful for many reasons to develop a relationship; this, again, influences both the negotiation outcome and the overall quality of the deal at the end of the day.

 

Next is the useful concept of logrolling.  Advice from the Harvard Negotiation Project is, in part, represented by Fisher and Ury’s book Getting To Yes.  The book advises having 5 or 6 topics or headings that are important to you in a negotiation.  Salary should usually come last, since salary is shaped by all the important factors negotiated before it; for example, if certain points in the negotiation are particularly wonderful or ominous, you may be willing to do the job for less or require more salary accordingly.

 

Having 5 or 6 points also allows for logrolling.  Logrolling is a term for how one interest influences your other interests.  For example, if vacation is important to you, you may say you need 5 weeks of vacation for one reason or another.  If your colleague in the negotiation says only 2 weeks are possible, you might relate how you needed the other 3 weeks to help your mother with her home (if, of course, that’s the reason you needed the vacation).  Because you are now unable to help her personally, you will need to pay for help to come to her home, which means you may require a larger salary.  The point is that you are negotiating over interests rather than rigid positions.  There is no perfect deal, only a workable deal for both sides of the negotiation.

 

Physicians often come up through the ranks feeling that things like negotiation are win/lose.  Nothing could be further from the truth.  Each side in an effective negotiation brings interests to the table, and satisfying those interests does not always mean one side wins and the other loses.  The negotiating course took great pains to illustrate this with stories, such as the story of two young sisters and an orange.  When a father saw his two young daughters fighting over an orange, he cut the orange in half, gave them each half, and declared it settled.  Both girls were upset and cried, however, because one wanted the orange peel for an art project and the other wanted the pulp to eat.  This short story highlights the difference between abstract fairness and substantive interests.  At the end of the day, your ceiling or floor on a given issue may be influenced by the issues around it and your other interests.

 

There are many different styles of negotiation that are useful to learn. In fact, there are many negotiating tricks or tactics we must learn to identify so we can move beyond them and truly focus on each side’s interests and how to represent those interests in an effective deal.  Learning the tricks is useful for getting past them on the path to an effective deal.  Negotiating effectively is in the interest of all sides, because afterward everyone must live with each other and with the deal.  If we take an attrition-style or I-win/you-lose approach and eventually form an employment contract or a deal with a healthcare association, we must then work with them afterward…and a side that realizes it was tricked or abused is challenging to work with.  Also, if we establish a difficult reputation or relationship during the negotiation, that is much less adaptive for the aftermath once the deal is made.

 

Clearly there is a great deal to learn about negotiating, including some of the classic negotiating tricks; I will highlight a few here.  One trick to watch out for is the second bite effect.  The second bite effect occurs when you have negotiated a deal with one person in an organization and they say, “Ok, this looks great; now I need to take the deal to my superior so that he or she can review it and ok the deal.”  The superior, whom you may never see, then says the deal won’t be possible for several reasons unless you are willing to take less salary, fewer benefits, or something along those lines.  It is called the “second bite effect” because you end up negotiating twice, once with someone you may never see.  All of your time has been spent, and now the other side has taken the opportunity to simply disregard what was agreed upon and renegotiate at its leisure.

 

This also happens in car dealerships, where the salesperson says he or she needs to go to the manager's office to ok the deal, and the two of them sit and idly chat about something.  Then the salesperson returns and informs you that the manager simply cannot accept the price you had negotiated for the car and that something has changed.  The second bite effect is a classic, and a great way to guard against it is to establish upfront, as you negotiate, that the person with whom you are negotiating actually has the authority to make the deal.

 

Other classic techniques include the pawn technique.  This one is useful for you and others interested in principled negotiation.  Among your 5 to 6 points for log rolling, include 1 point about which you feel less strongly.  You can then give away this point to the other side, like a pawn in chess, and use log rolling and the reciprocity effect on the issues you care about most.  The pawn is something you care about, just less than your other interests.  You intentionally place the pawn early in the list of items you want to discuss, and if it ends up being given away it helps you on other points.

 

There are other, less scrupulous negotiating tactics, such as Russian-style negotiating.  At the end of the day, though, negotiating is an important transactional skill that has served me well.  I didn't realize how much there was to it until the Notre Dame coursework.  I recommend negotiating courses to anyone in business of almost any type, and even if you consider yourself to be outside the business world, I still recommend them.  This is for the simple reason that we negotiate every day of our lives: with our children, with the rest of our family, and with the people we interact with each day.

 

One last point:  this work has focused on the vocabulary and transactions of negotiating so far.  However, as things wind up, consider this.  Perhaps the most important portion of the negotiation is the preparation you put in ahead of time.  For example, if you are a trauma surgeon, have you reviewed the data on salaries across the country?  Have you found the MGMA website that posts salary data?  Do you understand how other centers structure reimbursement, benefits, and vacation?  Preparation is key because it allows you to know your own interests clearly, to understand those of the other side(s), and to have objective data ready if and when you need it to preserve the relationship, the negotiation, or your interests.  Being prepared with respect to your needs and interests allows you to move away from positional negotiation (e.g., "I want three months off and that's just it.") toward principled, interest-focused negotiation (e.g., "I want three months off so I can visit my grandparents in Florida to help them do their estate planning, yet if three months can't work then a salary increase could let me hire an estate planner and supervise their work...").  Incidentally, positional negotiation has been shown to give inferior outcomes and should be avoided whenever possible.

 

These are some of the most useful skills I think we can have and, again, I will share that, of all the courses I have taken in medical school, business school, and beyond, the three courses in negotiating I took, along with the course-mandated reading of Getting to Yes, were some of the most valuable academic experiences I have had.  These courses have shown me that principled negotiation is both effective and possible.  Consider learning these skills and working them into your toolbox.

 

Questions or thoughts on negotiation as a business skill?  Have you seen negotiations go wrong, or situations where the ideas above showed up?  Please leave any comments or thoughts beneath.

8 Steps To Culture Change

 

Want a roadmap to create change in an organization?  Here’s Kotter’s classic roadmap on organizational change.

Once consensus has been established about the business situation (easier said than done sometimes), there are models and steps for how to go about change management.  One of the most well known is John Kotter's 8 Steps to Culture Change.  John Kotter, a former Harvard Business School professor, developed these 8 steps in part to articulate why change efforts fail and to improve the success rate of the efforts we undertake.

 

By way of review, let’s discuss them beneath:

 

Step 1 – Establishing a sense of urgency.

 

This is sometimes called "the burning platform".  It can be a short timeline until a quality review or some other event that matters to the organization.  Step 1 creates a timeline that justifies action.  Being sure that the people around you understand the importance of the event and feel the urgency, without becoming overly anxious, is key.

 

Step 2 – Creating the guiding coalition.

 

The guiding coalition is a team with enough power to lead the change effort, and it must be encouraged to work as a group.  This is also challenging, especially in an organization where there may be no support.  If you find yourself in a situation where there are clear issues, yet you do not have administrative support (and are not able to enlist it), you likely have a nonstarter for change management.

 

Step 3 – Developing a change vision.

 

Creating a vision to help direct the change effort, and developing clear strategies for achieving that vision, are central to successful change.  The vision gives the team something to work towards and something to achieve.  Articulating that vision is key, and it must be reflected in both how you act and what you say.

 

Step 4 – Communicating the vision for buy-in.

 

People have to understand and accept both the vision and the strategy.  Again, if there is no administrative support for you from your colleagues in administration, or if you don't communicate the vision, people are unlikely to understand and accept the roadmap for the future.

 

Step 5 – Empowering broad based action.

 

This means you are obligated to remove obstacles to change for the people working with you on the team and at different levels in the organization.  In short, this goes back to the classic idea that you must make it easier for people to act in the way the change effort requires; that is, you remove the barriers that stand between people and the new way of working.  Some leaders also add friction in the opposite direction: they erect barriers to acting in the current mode so that people favor the newer, easier pathway from which barriers have been removed.

 

Step 6 – Generating short term wins.

 

Achievements that can be made early on are key.  Sometimes this is just harvesting the low-hanging fruit.  Whatever the short-term wins are, they must be visible and they must be followed through, with people receiving recognition and reward in a way that gets noticed.

 

Step 7 – Never letting up.

 

The credibility gained from those short-term wins must be used to change systems, structures, and policies that don't fit the vision.  Your hiring, promotion, and development of employees must be such that people who can implement the vision are brought along.  This makes the change programmatic and lasting.

 

Step 8 – Incorporating the changes into the culture.

 

The connections between the new behaviors and organizational success must be articulated, and the changes must be institutionalized.  There must be a means to ensure succession in leadership roles so that these changes become commonplace and are reinforced.  It is useful at this point to demonstrate with data that the new way is superior to the old.

 

My personal recommendation is that data underlie this entire process.  In fact, the lean six sigma statistical process control pathway satisfies each of these steps in a way that allows us to avoid taking issue with each other or resorting to personal attacks.  Incidentally, one of the things I have noticed in change efforts is that ad hominem attacks may abound.  An ad hominem attack occurs when someone attacks the person or messenger involved rather than the argument or the data.  Ad hominem attacks are difficult, insidious, and common in Medicine, and it can be a real challenge to let them pass.  It is even harder when a change agent has his or her clinical decision making, technical prowess, or other professional, patient-care skills questioned as part of an ad hominem attack.  Stay calm and think of how good it will look when the effort succeeds, or, failing that, leave if the situation becomes threatening either personally or professionally.

 

In fact, one of the most challenging things I have found is to notice ad hominem attacks and progress beyond them.  Fortunately I have not been in this situation often, but let me say it can be a real challenge, especially in a failed change effort or in a difficult organization...and, of course, despite our best efforts, most change efforts fail.  So we should always enter these situations with a "batting average" mentality:  I may only bat .333, but I take the at-bat because the hits are worth it.

 

I recommend a data-driven approach, in general, where people are educated about their data and the data are not personally assignable.  This prevents finger-pointing and allows us to make data-driven decisions that are reproducible, transparent, and can be followed over time to gauge improvement.  If you can get the culture to respond to data rather than to personal attacks, the team can improve over time in a meaningful way.
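To make the idea of trendable, non-personally-assignable data a bit more concrete, here is a minimal sketch in Python.  Everything in it is hypothetical: the weekly turnaround times and variable names are illustrative only, and the simple mean plus-or-minus three standard deviations limits are a shorthand for the more formal control charts in the lean six sigma toolset.  The point is simply that the metric describes the system over time, not any individual.

```python
# Minimal sketch: follow a de-identified process metric over time with
# simple control limits (mean +/- 3 standard deviations).
# The data are hypothetical weekly turnaround times in minutes;
# nothing here is assignable to an individual clinician.

from statistics import mean, stdev

weekly_turnaround_minutes = [42, 38, 45, 41, 39, 44, 40, 43, 37, 46, 41, 40]

center_line = mean(weekly_turnaround_minutes)
sigma = stdev(weekly_turnaround_minutes)          # sample standard deviation
upper_control_limit = center_line + 3 * sigma
lower_control_limit = center_line - 3 * sigma

print(f"Center line: {center_line:.1f} min")
print(f"Limits: {lower_control_limit:.1f} to {upper_control_limit:.1f} min")

# Flag any week outside the limits as a possible signal worth investigating
# as a system issue, not a personal one.
for week, value in enumerate(weekly_turnaround_minutes, start=1):
    if not (lower_control_limit <= value <= upper_control_limit):
        print(f"Week {week}: {value} min falls outside the control limits")
```

In practice a formal individuals/moving-range chart, or whatever charting software your organization uses, would do this more rigorously; the sketch only shows how the conversation shifts from people to the process.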

 

This focus on data creates a situation that is not often encountered in Medicine; yet, when we do attain it, it is truly magical.  Sometimes I see my colleagues in Medicine reinventing tools that already have names and are well utilized in other fields.  Many of these are part of the lean six sigma toolset, which is essentially a pre-established pathway for using statistical process control for quality improvement and culture change.

 

I really enjoy helping groups in healthcare see that not all changes or improvements need come by confrontation or finger-pointing.  In many service lines in Medicine, it is too often the case that staff attribute issues to personal defects rather than system defects.  In fact, many of what are felt to be personal issues are system issues.  This is supported by the quality control literature, and I have often noticed that poor systems may set up physicians and other healthcare providers for confrontation amongst themselves.

 

Functional, data-based systems that run smoothly often alleviate frustration, conflict, and other issues.  Such feelings may represent symptoms of opportunities for improvement.

 

Last, please remember:  even if you know the steps, practice them, and work to create positive business situations, these change efforts are challenging and high risk.  Our batting average may even be below .500 (after all, .333 is a good average in the major leagues), yet we take the at-bats because we learn from them, they improve our skills, and the hits are worth it.

 

Comments, questions, or thoughts on change management in your healthcare organization?  Have you seen a failed change management situation? If you have, let us know in the comments section beneath.  We always enjoy hearing and learning about change management across different organizations.