When Is It Good To Have p > 0.05?

Some of the things I learned as a junior surgical resident were oversimplified. One of these was the idea that a p value less than 0.01 is “good”.  In this entry we discuss when it is appropriate for a p value to be greater than 0.01, and those times when it’s even favorable to have one greater than 0.05.  We invite you to take some time with this entry, as it covers some of the most interesting facts we have found about hypothesis testing and statistics.

 

In Lean and Six Sigma, much of what we do is to take existing statistical tools and apply them to business scenarios so that we have a more rigorous approach to process improvement.  Although we call the processes Six Sigma or Lean depending on the toolset we are using, in fact the processes are pathways to set up a sampling plan, capture data, and rigorously test those data so as to determine whether we are doing better or worse with certain system changes–and we get this done with people, as a people sport, in a complex organization.  I have found, personally, that the value in using data to tell us how we are doing is that it disabuses us of instances where we think we are doing well and we are not.  It also focuses our team on team factors and system factors, which are, in fact, responsible for most defects.  Using data prevents us from becoming defensive or angry at ourselves and our colleagues.  That said, there are some interesting facts about hypothesis testing about which many of us knew nothing as surgical residents.  In particular, consider the idea of the p value.

 

Did you know, for example, that you actually set certain characteristics of your hypothesis testing when you design your experiment or data collection?  For example, when you are designing a project or experiment, you need to decide at what level you will set your alpha.  (This relates to p values in just a moment.) The alpha is the risk of making a type 1 error.  For more information about type 1 errors, please visit our earlier blog entry about type 1 and type 2 errors here.  For now, let’s leave it at saying the alpha risk is the risk of tampering with a system that is ok; that is, alpha is the risk of thinking there is an effect or change when in fact there is no legitimate effect or change.  So, when we set up an experiment or data collection, we set the alpha risk inherent in our hypothesis testing.  Of course, there are certain conventions in the medical literature that determine what alpha level we accept.

 

Be careful, by the way, because alpha is used in other fields too.  In investing, for example, alpha is the return on your mutual fund investment IN EXCESS of what would be expected given the risk inherent in the investment.  In that context, alpha is a great thing.  There are even investor blogs out there that focus on how to find and capture this extra return above and beyond the level of risk you take by investing.  If you’re ever interested, visit seekingalpha.com.

 

Anyhow, let’s pretend here that we are willing to accept a 10% risk of concluding that there is some change or difference in our post-changes-we-made state when in fact there is no actual difference (a 10% alpha).  In most cases the difference we may see could vary in either direction: our values post changes could be either higher or lower than they were pre changes.  For this reason, it is customary to use what is called a two-tailed p value.  The alpha risk is split between the two tails of the distribution (i.e., the post-change values could be higher or lower than chance alone would explain), putting 5% in each tail.  So, if the two-tailed p value is greater than 0.10 (our total 10% alpha), we conclude there is no significant difference in our data between the pre and post changes we made to the system.
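
To make this concrete, here is a minimal sketch of a two-tailed comparison at our 10% alpha.  Python with NumPy and SciPy is an assumption on our part (this blog doesn’t prescribe software), and the pre/post values are made up purely for illustration.

```python
# A minimal sketch of a two-tailed comparison at a 10% total alpha.
# The pre/post values are hypothetical, for illustration only.
import numpy as np
from scipy import stats

alpha = 0.10  # the total alpha risk we chose before collecting data

pre  = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7])
post = np.array([11.5, 11.9, 11.4, 11.8, 11.6, 11.3, 11.7, 11.5])

# ttest_ind returns a two-tailed p value by default
t_stat, p_two_tailed = stats.ttest_ind(pre, post)

if p_two_tailed > alpha:
    print(f"p = {p_two_tailed:.3f} > {alpha}: no significant difference detected")
else:
    print(f"p = {p_two_tailed:.3f} <= {alpha}: significant difference detected")
```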

 

The take-home is that we decide, before we collect data (to keep the ethics of it clean), how we will test those data to conclude whether there is a change or difference between the two states.  We determine what we will count as a statistically significant change based on the conditions we set:  what alpha risk is too high to be acceptable in our estimation?

 

Sometimes, if we have reason to suspect the data may or can vary in only one direction (such as prior evidence indicating an effect in one direction only, or some other factor), we may use a one-tailed p value.  A one-tailed p value simply says that all of our alpha risk is lumped in one tail of the distribution.  In either case we should set up how we will test our data before we collect them.  Of course, in real life, sometimes data already exist that are high quality (clear operational definitions, etc.), and we need to analyze them for some project.

 

Next, let’s build up to when it’s good to have a p > 0.05.  After all, that was the teaser for this entry.  This brings us to some other interesting facts about data collection and the sampling methods by which we do it.  In Lean and Six Sigma, we tend to classify data as either discrete or continuous.  Discrete data can take only certain defined categories: yes/no, red/yellow/blue, black/white/grey, and so on.  Continuous data, by contrast, are infinitely divisible.  One way I have heard continuous data described, and that I use when I teach, is that continuous data can be divided in half forever and still make sense.  That is, an hour can be divided into two 30-minute halves, minutes can be divided into seconds, and seconds can continue to be divided.  This infinitely divisible type of data makes a continuous curve when plotted.  In Lean and Six Sigma we attempt to utilize continuous data whenever possible.  Why?  The answer makes for some interesting facts about sampling.

 

First, did you know that we need much smaller samples of continuous data in order to demonstrate statistically significant changes? Consider a boiled-down sampling equation for continuous data versus discrete data.  A sampling equation for continuous data is (2s/delta)^2, where s is the historic standard deviation of the data and delta is the smallest change you want to be able to detect.  The 2 is the z score at the 95% confidence level (1.96), rounded up.  For now, just remember that this is a generic, conservative sampling equation for continuous data.

 

Now let’s look at a sampling equation for discrete data: p(1-p)(2/delta)^2.  Let’s plug in what it would take to detect a 10% difference in discrete data.  Using p = 50% for the probability of yes or no (the conservative choice) and delta = 0.10, we get 0.5(0.5)(2/0.1)^2 = 100 data points–a large sample to detect a small change.  For continuous data, using similar methodology, we need much smaller samples; for reasonably small deltas this may be only 35 data points or so.  Again, this is why Lean and Six Sigma utilize continuous data whenever possible.  So, now, we focus on some sampling methodology issues and the nature of what a p value is.
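
Here is a quick sketch of both boiled-down formulas in plain Python, with hypothetical inputs; notice how much larger the discrete sample needs to be.

```python
# A sketch, in plain Python, of the two boiled-down sampling formulas above.
# The example inputs are hypothetical.

def n_continuous(s, delta):
    """(2s/delta)^2: s is the historic standard deviation and
    delta is the smallest change you want to be able to detect."""
    return (2 * s / delta) ** 2

def n_discrete(p, delta):
    """p(1-p)(2/delta)^2: p is the expected proportion and
    delta is the smallest change you want to be able to detect."""
    return p * (1 - p) * (2 / delta) ** 2

# Discrete: detect a 10% shift, using the conservative p = 50%
print(round(n_discrete(0.5, 0.10)))  # 100 data points

# Continuous: e.g., historic s of 3 minutes, detect a 1-minute change
print(round(n_continuous(3, 1)))     # 36 data points -- the "35 or so" above
```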

 

Next, consider the nature of statistical testing and some things you may not have learned in school.  For example, did you know that underlying most of the common statistical tests is the assumption that the data involved are normally distributed?  Real-world data may or may not be.  Again, a normal distribution means data whose histogram follows a Gaussian curve.  In the real world of business, manufacturing, and healthcare, it is often not the case that data are actually distributed normally.  Sometimes data may be plotted and look normally distributed when in fact they are not.  That would invalidate some of the assumptions behind common statistical tests; in other words, we can’t use a t test on data that are not normally distributed, because Student’s t test assumes normally distributed data.  What can we do in this situation?

 

First, we can rigorously test our data to determine whether they are normally distributed.  There is a named test, the Anderson-Darling test, that compares our data’s distribution against the normal distribution.  If the p value for the Anderson-Darling test is greater than 0.05, our data do not deviate significantly from the normal distribution; we treat them as normal and can use the common statistical tests that are known and loved by general surgery residents (and beyond) everywhere.  However, if the Anderson-Darling test indicates that our data are not normally distributed (that is, the p value is less than 0.05), we must look for alternative ways to test our data.  This was very interesting to me when I first learned it.  In other words, a p value greater than 0.05 can be good, especially when we are looking to demonstrate that our data are normal so that we can go on to use hypothesis tests which require normally distributed data.  Here are some screen captures that highlight Anderson-Darling.  Note that, in Fig. 1, the data DON’T appear to be normally distributed by the “eyeball test” (when we just look at the data and go with our gut).  Yet, in fact, the data ARE normally distributed and p > 0.05.  Figure 2 highlights how a data distribution follows the routine, expected frequencies of the normal distribution.

 


Figure 1:  A histogram with its associated Anderson-Darling test statistic and p value > 0.05.  Here, p > 0.05 means these data do NOT deviate significantly from the normal distribution…and that’s a good thing if you want to use hypothesis tests that assume your data are normally distributed.

 


Figure 2:  These data follow the expected frequencies associated with the normal distribution.  The small plot in Figure 2 demonstrates the frequencies of data in the distribution versus those of the normal distribution.
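
If you want to run this kind of normality check yourself, here is a minimal sketch in Python.  The normal_ad function from statsmodels is our assumption, not something the figures above used; it returns the Anderson-Darling statistic with an approximate p value, and the data below are simulated stand-ins.

```python
# A minimal sketch of an Anderson-Darling normality check in Python.
# normal_ad returns the A-D statistic and an approximate p value;
# the data here are simulated stand-ins for real process data.
import numpy as np
from statsmodels.stats.diagnostic import normal_ad

rng = np.random.default_rng(seed=1)
data = rng.normal(loc=50, scale=5, size=100)  # simulated process data

ad_stat, p_value = normal_ad(data)

if p_value > 0.05:
    print(f"p = {p_value:.3f} > 0.05: no significant deviation from normality;")
    print("tests that assume normal data (t tests, etc.) are fair game")
else:
    print(f"p = {p_value:.3f} <= 0.05: data deviate from the normal distribution;")
    print("consider a power transform or a test that does not require normality")
```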

As with most things, the message that a p value less than 0.01 is good and one greater than 0.01 is bad is a vast oversimplification.  However, it is probably useful as we teach statistics to general surgery residents and beyond.

So, now that you have a methodology for determining whether your data are normally distributed, let’s talk about what to do next–especially when you find that your data are NOT normally distributed and you wonder where to go.  In general, there are two options for continuous data sets that are NOT normally distributed.  The first is to transform the data with what is called a power transformation.  There are many different transformations of this kind, including the Box-Cox and Johnson transformations, to name two.

 

The power transforms take the raw, non-normally distributed data and raise them to different powers: the 1/2 power (taking the square root), the second power, the third power, and so on.  The optimal power–the one that brings the data closest to the normal distribution–is identified.  The data are then replotted after transformation to that power, and the Anderson-Darling test (or a similar test) is performed on the transformed data to determine whether they are now normally distributed.
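
As a hedged sketch of that workflow, assuming SciPy for the Box-Cox step and statsmodels for the normality re-check (the skewed “time” data are simulated):

```python
# A sketch of the workflow: find the optimal power with Box-Cox, transform,
# then re-run the normality check.  The right-skewed "time" data are simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import normal_ad

rng = np.random.default_rng(seed=2)
times = rng.lognormal(mean=1.0, sigma=0.5, size=200)  # skewed, like many time data

# boxcox requires strictly positive data; it returns the transformed values
# together with the optimal lambda (the power it settled on)
transformed, best_lambda = stats.boxcox(times)

print(f"optimal lambda: {best_lambda:.2f}")
print(f"raw data:         Anderson-Darling p = {normal_ad(times)[1]:.4f}")
print(f"transformed data: Anderson-Darling p = {normal_ad(transformed)[1]:.4f}")
```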

 

Often the power transformations will allow the data to become normally distributed.  This brings up an interesting point: pretend we are looking at a system where time is the focus.  The data are not normally distributed, and a power transform demonstrates that time squared is a normally distributed variable.  We now have a philosophical management question: what does it mean to manage time squared instead of time?  These and other interesting questions arise when we use power transforms, and their use is somewhat controversial for that reason.  Sometimes it is challenging to know whether the transformed variables still have meaning for management.

 

However, on the bright side, if we have successfully “Box-Cox-ed” or otherwise power-transformed the data to normality, we can now use the common statistical tests.  Remember: if the initial data set is transformed, subsequent data must be transformed to the same power.  We have to compare apples to apples.

 

The next option for dealing with non-normal data sets is to utilize statistical tests that do not require normal data as input.  These include such rarely taught tests as the Levene test and the so-called KW, or Kruskal-Wallis, test.  The Levene test assesses variability (whether groups differ in variance), while the Kruskal-Wallis test compares groups using ranks rather than raw values.  Another test, Mood’s median test, compares medians for non-normal data.  So, again, we have several options for addressing non-normal data sets.  Usually, as we teach the Lean and Six Sigma process, we reserve teaching about how to handle non-normal data for at least a black belt level of understanding.
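
For the curious, here is a minimal sketch of these alternatives, assuming SciPy; any major statistics package offers the same tests, and the two non-normal groups below are simulated:

```python
# A minimal sketch of the nonparametric options named above.
# The two non-normal groups are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
before = rng.exponential(scale=10, size=60)  # e.g., pre-change cycle times
after  = rng.exponential(scale=8, size=60)   # e.g., post-change cycle times

# Levene: do the groups differ in variability? (no normality required)
_, levene_p = stats.levene(before, after)

# Kruskal-Wallis: rank-based comparison of the groups
_, kw_p = stats.kruskal(before, after)

# Mood's median test: do the groups share a common median?
_, mood_p, grand_median, table = stats.median_test(before, after)

print(f"Levene p = {levene_p:.3f}")
print(f"Kruskal-Wallis p = {kw_p:.3f}")
print(f"Mood's median p = {mood_p:.3f}")
```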

 

At the end of the day, this blog post explores some interesting consequences of the choices we make with respect to data, and of some interesting facts about hypothesis testing.  Again, there is much more choice involved than I ever understood as a general surgery resident.  Working through the Lean and Six Sigma courses (and finally the master black belt course) taught me about the importance of how we manage data and, in fact, ourselves.  The more than ten projects in which I have participated have really highlighted these facts about data and reinforced textbook learning.

 

An interesting take-home message is that a p value less than 0.01 does not mean all is right with the world, just as a p value greater than 0.05 is not necessarily bad.  After all, tests like the Anderson-Darling test tell us when our data are normally distributed and when we can continue using the more comfortable hypothesis tests that require normally distributed data.  In this blog post, we describe some of the interesting ways to deal with data that are not normally distributed, so as to improve our understanding and the conclusions we draw from continuous data sets.  Whenever possible, we favor continuous data because they require a smaller sample size to support meaningful conclusions.  However, as with all sampling, we have to be sure that our continuous data sample adequately represents the system we are attempting to characterize.

 

Our team hopes you enjoyed this review of some interesting statistics related to the nature and complexity of p values.  As always, we invite your input as statisticians or mathematicians, especially if you have special expertise or interest in these topics.  None of us, as Lean or Six Sigma practitioners, claims to be a statistician or mathematician.  However, the Lean and Six Sigma process is extremely valuable in applying classic statistical tools to business decision-making.  In our experience, this approach to data-driven decision making has yielded vast improvements in how we practice in business systems compared with models based on opinion or personal experience.

 

As a parting gift, please enjoy (and use!) the file beneath to help you select which tool to use to analyze your data.  This tool, taken from Villanova’s Master Black Belt course, helps me a great deal on a weekly basis.  No viruses or spam from me involved, I promise!

ToolTime

Logrolling & The BATNA: Valuable Tools For Negotiating

 

The MBA, medical school, and other coursework have each been very useful.  However, three of the most valuable courses I have ever taken came via the University of Notre Dame, and they were all about negotiating.  Interestingly, each of these courses had us calling each other across the country to negotiate often-unusual scenarios:  one week I was negotiating a manufacturing plant opening in Mexico with some local officials, and the next I was negotiating the purchase of a blue used car (the “Blue Buggy” scenario).  In that manner, I completed an interactive, online Master’s Certificate with the University of Notre Dame, with what I consider to be some of the most valuable coursework I have taken.  Let me share some of the basics of negotiation with you beneath, because these skills are so useful: they will add value for you across a broad spectrum of endeavors in your life.  My hope is that, if you and I achieve nothing else here, we at least pique your interest to learn more about negotiating skills.  It’s also important to me to highlight how negotiating over things like jobs or resources is NOT as simple as win/lose.

 

In fact, a win/lose view of negotiating leads to missed opportunities and suboptimal deals.  Did you know, for example, that negotiating based on rigid positions (i.e., “They HAVE to give me this brand new OR team because that’s the ONLY way.”) leads to suboptimal outcomes?  Yes, it has been studied:  positional negotiating with the mindset described above leads to outcomes that are not nearly as good as those obtained when each group in a negotiation focuses on how to satisfy its interests rather than taking on rigid positions.  It’s tough to believe that this works when you’re fatigued and skeptical; yet, that said, it does.

 

As we start to dive into these and other findings, let’s first focus on vocabulary.  The Harvard Negotiation Project is one of the sources for certain findings about negotiation, and we’ll draw on it heavily here.  Some of the vocabulary we will use in this blog entry includes the term BATNA, which stands for ‘Best Alternative To a Negotiated Agreement.’  The BATNA is felt to be the source of negotiating power.  How?  Your willingness and ability to negotiate on certain points or ideas is contingent upon your alternatives:  the better (and more readily executable) your alternatives, the stronger your position and the more willing you are to negotiate in different situations.

 

Now, if you have a great alternative, it is frowned upon to remind your partner in the negotiation (the so-called “other side”) of your BATNA up front.  Meaning, in general you shouldn’t walk into the negotiation and say, “Well, this is no big deal because my other option is to take a trip around the world on my 3 million dollar yacht next week.” Why?  Because, as I’ll describe later, the quality and type of relationship you develop up front impacts the overall quality of the deal you make.  A good general rule of thumb is that, if it becomes necessary, you should use your BATNA and its power to educate the other side during a negotiation rather than up front.  There are rare instances where displaying the BATNA up front may be necessary.

 

Another important vocabulary word is “anchor”.  When a negotiation starts, the first value given from one side to the other for a particular item is called the anchor.  The Harvard Negotiation Project demonstrated many things, and one of these is that the anchor, to a large degree, determines the eventual outcome of a scenario.  So, if salary is important to you in a job negotiation, and the other side passes along an initially very low salary offer, you are more apt to end up with a lower salary than you would have been if the anchor had been higher.  If the anchor is set higher at the beginning, the overall outcome tends to be higher.  This goes to the question of who should offer first in a given scenario.  Regardless of who offers first, if you are the recipient of an offer you should seek to replace it with your own value, backed by a reason, as soon as possible.

 

You may think, like I did initially, that “Of course if the anchor is higher then the outcome is higher because setting a higher anchor shows that the offering side values something more.  So the anchor doesn’t cause a higher outcome because the outcome would’ve been higher anyway.” Interestingly, that does not seem to be the case.  The anchor’s initial location, other factors constant, seems to correlate with eventual outcome.  In other words:  same scenario, same players, same interests but a different initial anchor position and the eventual outcome follows that anchor position.  Interesting huh?

 

Next is the ZOPA, or ‘zone of potential agreement.’ This is the range of values (for something like salary) over which you and your partner in the negotiation may agree.  Related vocabulary: the ‘floor’ and ‘ceiling.’  The floor is the lowest you will go on a certain point, and the ceiling is the highest you would accept on a certain item.  Between your floor and ceiling is the set of values you would accept on a given item.  That interval (hopefully) overlaps with at least some of the set of values between the other side’s floor and ceiling.  That overlap is called the ZOPA:  the area of values over which you and your colleague in negotiation may agree.

 

Now that we have described some of the important vocabulary, let me share some of the important lessons I took from the course; these will clarify the vocabulary and how to implement it.  First, one of the key observations in the course is that, as Americans, we tend to focus immediately on the task at hand rather than on developing a relationship.  Developing a relationship has been shown in multiple series to impact the overall course of the negotiation.  Time spent discussing the weather or finding common ground with the ‘other side’ actually improves negotiation outcomes.

 

Further, the reciprocity effect is important.  Did you know, for example, that when a salesperson gives someone a bottle of water at the car dealership, he or she triggers a reciprocity effect?  It is now known that, in general, giving someone a relatively small gift triggers a disproportionate chance that they will buy something large from you, like a car.  This reciprocity effect is strong and relates to social norms across cultures.  In the end, it is useful for many reasons to develop a relationship; this, again, influences both the negotiation outcome and the overall quality of the deal at the end of the day.

 

Next is the useful concept of logrolling.  Advice from the Harvard Negotiation Project is, in part, represented by Fisher and Ury’s book Getting To Yes.  The book advises that you have 5 or 6 topics or headings that are important to you in a negotiation.  Salary should usually come last, because salary is determined by all the important factors before it.  For example, if certain points in the negotiation turn out particularly wonderful or ominous, you may be willing to do the job for correspondingly less or more salary.

 

Having 5 or 6 points also allows for logrolling.  Logrolling is a term that describes trading across issues: how one interest influences the other interests you have.  For example, if vacation is important to you, you may say you need 5 weeks of vacation for one reason or another.  If your colleague in the negotiation says that only 2 weeks would be possible, you may relate how, perhaps, you needed the 3 other weeks in order to help your mother with her home–if, of course, that’s the reason you needed the vacation.  Because you are now unable to help her personally, you will need to pay for help to come to her home, which means you may require a larger salary.  The point here is that you are negotiating over interests rather than rigid positions.  There is no perfect deal, only a workable deal for both sides in the negotiation.

 

Physicians often come up the ranks feeling that things like negotiation are win/lose.  Nothing could be further from the truth.  Each side, in an effective negotiation, brings interests to the table, and satisfying these interests does not always imply that one side wins and the other loses.  The negotiating course took great pains to illustrate this with stories, such as the story of two young sisters and an orange.  When a father saw that his two young daughters were fighting over an orange, he cut the orange in half, gave them each half, and declared it settled.  Both girls were upset and cried, however, because one wanted the orange skin to make an art project and the other wanted the pulp to eat.  This short story highlights the concept of abstract fairness versus significant interests.  At the end of the day, your ceiling or floor on a given issue may be influenced by the issues around it and your other interests.

 

There are many different styles of negotiation which are useful to learn.  In fact, there are many negotiating tricks and tactics which we must learn to identify so we can move beyond them to truly focus on each side’s interests and how to represent those interests in an effective deal.  Learning the tricks is useful for getting past them on the path to an effective deal.  Negotiating effectively is in the interests of all sides, because afterward all sides must live with each other and with the deal.  If we take an attrition, I-win/you-lose style of negotiation and eventually form an employment contract or a deal with a healthcare association, we must then work with the other side afterward…and a side that realizes it was tricked or abused is challenging to work with.  Also, if we establish a difficult reputation or relationship during the negotiation, this is much less adaptive for the aftermath once the deal is made.

 

Clearly there is a great deal to learn about negotiating, including some of the classic negotiating tricks.  I will highlight some of these here.  One trick to watch out for is the second bite effect.  This occurs when you have negotiated a deal with one person in an organization and they say, “Ok, this looks great; now I need to take the deal to my superior so that he or she can review it and ok the deal.”  The superior, whom you may never see, then says the deal won’t be possible for several reasons unless you are willing to take less salary, less benefit, or something along those lines.  This is called the “second bite effect” because the other side has effectively negotiated twice, once through someone you may never meet.  All of your time was spent, and now the other side has taken the opportunity to simply disregard what was agreed upon and re-negotiate at its leisure.

 

This also happens in car dealerships, where the salesperson says he or she needs to go to the manager’s office to ok the deal, and they sit and idly chat about something.  Then the salesperson returns and informs you that the manager is just unable to make the price you had negotiated for the car, and that something has changed.  The second bite effect is a classic, and a great way to guard against it is to establish up front that the person with whom you are negotiating has the authority to actually make the deal.

 

Other classic techniques include the pawn technique, one that is useful for those interested in principle-based negotiation.  Among your 5 to 6 points for logrolling, include 1 point about which you feel less strongly.  You can then give away this point to the other side, like a pawn in chess, and utilize logrolling and the reciprocity effect on the issues on which you are more focused.  The pawn is something you care about, yet less so than your other interests.  You intentionally place the pawn early in the list of items you want to discuss, and if it ends up being given away it helps you on other points.

 

There are other, less scrupulous negotiating tactics, such as Russian-style negotiating.  At the end of the day, though, negotiating is an important transactional skill that has served me well.  I didn’t realize how much there was to it until the Notre Dame coursework.  I recommend negotiating courses to anyone in business of any type and, even if you consider yourself to be in something other than the business world, I still recommend them, for the simple reason that we negotiate every day of our lives: with our children, with the rest of our family, and alongside the people with whom we interact each day.

 

One last point:  this post has focused on the vocabulary and transactions of negotiating so far.  However, as things wind up, consider this.  Perhaps the most important portion of the negotiation is the preparation you put in ahead of time.  For example, if you are a trauma surgeon, have you reviewed the data on salaries across the country?  Have you found the MGMA website that posts salary data?  Do you understand how other centers structure reimbursement, benefits, and vacation?  Preparation is key because it allows you to know your interests clearly, and those of the other side(s), and to have data ready if and when you need recourse to objective data to preserve the relationship, the negotiation, or your interests.  Being prepared with respect to your needs and interests allows you to move away from positional negotiation (e.g., “I want three months off and that’s just it.”) to principled, interest-focused negotiation (e.g., “I want three months off so I can visit my grandparents in Florida to help them do their estate planning; yet if three months can’t work, then a salary increase could let me hire an estate planner and supervise their work…”).  Incidentally, positional negotiation has been shown to give inferior outcomes and should be avoided whenever possible.

 

These are some of the most useful skills I think we can have.  Again, of all the courses I have taken in medical school, business school, and beyond, the three negotiating courses, along with the mandated course reading of Getting to Yes, were some of the most valuable academic experiences I have had.  These courses showed me that principled negotiation is effective and possible.  Consider finding these skills and working them into your toolbox.

 

Questions or thoughts on negotiation as a business skill?  Have you seen any situations with negotiations gone wrong or ones where the information above showed up?  Please leave any comments or thoughts beneath.

8 Steps To Culture Change

 

Want a roadmap to create change in an organization?  Here’s Kotter’s classic roadmap on organizational change.

Once consensus has been established about the business situation (easier said than done sometimes), there are models and steps for how to go about change management.  One of the best known is John Kotter’s 8 Steps to Culture Change.  Kotter, previously a Harvard Business School professor, developed these 8 steps in part to articulate why change efforts fail and to improve the success rate of change efforts.

 

By way of review, let’s discuss them beneath:

 

Step 1 – Establishing a sense of urgency.

 

This is sometimes called ‘the burning platform’.  It can be a short timeline until a quality review or some event that is important to the organization.  Step 1 creates a timeline which justifies action.  Being sure that people around you understand the importance of the event, and feel the urgency without becoming overly anxious, is key.

 

Step 2 – Creating the guiding coalition.

 

The guiding coalition is a team with enough power to lead the change effort, and this team must be encouraged to work as a group.  Building it is challenging, especially in an organization where there may be no support.  If there are clear issues and yet you do not have administrative support (and are not able to enlist it), you likely have a nonstarter for change management.

 

Step 3 – Developing a change vision.

 

Creating a vision to help direct the change effort, and developing clear strategies for achieving that vision, are central to successful change: they give the team something to work towards and to achieve.  Articulating this change as a vision is key, and the vision must be represented both by how you act and by what you say.

 

Step 4 – The vision must be communicated for buy in.

 

People have to understand and accept both the vision and the strategy.  Again, if there is no administrative support for you from your colleagues in administration, or if you don’t communicate the vision, then people are unlikely to understand and accept the roadmap for the future.

 

Step 5 – Empowering broad based action.

 

This means you are obligated to remove obstacles to change for those people who are working with you on the team and at different levels in the organization.  In short, this goes back to the classic idea that you must make it easier to act in the way that change effort requires people to act.  That is, you remove barriers to people acting in the way they need to act for the change to occur. Some leaders will add friction in the opposite direction.  That is, they erect barriers to acting in the current mode to create enough friction that people must favor the newer, easier pathway to which barriers have been removed.

 

Step 6 – Generating short term wins.

 

Achievements that can be made early on are key; sometimes this is just harvesting the low-hanging fruit.  Whatever the short-term wins are, they must be visible and must be followed through, with people receiving recognition and reward in a way that gets noticed.

 

Step 7 – Never letting up.

 

The credibility earned from short-term wins must be utilized to change systems, structures, and policies that don’t fit the vision.  Your hiring, promotion, and development of employees must be such that those who can implement the vision are brought along.  This makes the change programmatic and lasting.

 

Step 8 – Incorporating the changes into the culture.

 

The connections between the new behaviors and organizational success must be articulated, and these changes must be institutionalized.  There must be a means to ensure succession in leadership roles so that these changes become commonplace and are reinforced.  It is useful at this point to demonstrate that the new way is superior to the old with data.

 

My personal recommendation is that data underlie this entire process.  In fact, the Lean Six Sigma statistical process control pathway satisfies each of these steps in a positive way that allows us to avoid taking issue with each other or making personal attacks.  Incidentally, one of the things I have noticed in change efforts is that ad hominem attacks may abound.  An ad hominem attack occurs when someone attacks the person or messenger involved rather than the argument or data.  Ad hominem attacks are difficult, insidious, and common in Medicine.  It can be a real challenge to let these pass, and even harder when a change agent has his or her clinical decision making, technical prowess, or other professional patient care skills questioned as part of such an attack.  Stay calm and think of how good it will look when the situation is successful; or, failing that, leave if the situation becomes threatening either personally or professionally.

 

In fact, one of the most challenging things I have found is to note ad hominem attacks and try to progress beyond them.  Fortunately I have not been in this situation often, but let me say it can be a real challenge, especially in a failed change effort or in difficult organizations…and, of course, despite our best efforts, most change efforts fail.  So we should always enter these situations with a “batting average” mentality:  I may get a hit only a third of the time (a .333 average), but I take the at-bat because the hits are worth it.

 

I recommend a data-driven approach, in general, where people are educated in their data and the data are not personally assignable.  This prevents finger pointing and allows us to make data driven decisions which are reproducible, transparent, and may be followed over time to gauge improvement.  If you can get the culture to respond to data rather than personal attacks, the team can improve over time in a meaningful way.

 

This focus on data makes for a situation which is not often encountered in Medicine; yet, when we do attain it, it is truly magical.  Sometimes I see my colleagues in Medicine reinventing tools that have names and are well utilized in other fields.  Some of these are utilized in the lean six sigma toolset, which is mostly a pre-established pathway to use these advanced statistical process controls for quality improvement and culture change.

 

I really enjoy helping groups in healthcare see that not all changes or improvements need come by confrontation or finger-pointing.  In different service lines in Medicine, it is too often the case that staff attribute issues to personal defects rather than system defects.  Commonly, many of what are felt to be personal issues are in fact system issues.  This is supported by the quality control literature, and I have often noticed that poor systems may set up physicians and healthcare providers for confrontation amongst themselves.

 

Functional systems based on data which run smoothly often alleviate the need for frustration, conflict, and other issues.  Such feelings may represent symptoms of opportunities for improvement.

 

Last, please remember:  even if you know the steps, practice them, and work to create positive business situations these change situations are challenging and high risk.  Our batting average may even be below .500 (after all .333 is a good average in the major leagues) yet we take the at-bats because we learn from them, they improve our skills, and the hits are worth it.

 

Comments, questions, or thoughts on change management in your healthcare organization?  Have you seen a failed change management situation? If you have, let us know in the comments section beneath.  We always enjoy hearing and learning about change management across different organizations.

 

How-To Guide For Surgical Innovation: A Book Review

 

Book reviews, and reviews of the literature regarding innovation in Surgery, can be useful tools to help decide where to spend our scarce time.  Here, we review a very useful textbook called Biodesign: The Process of Innovating Medical Technologies.  This book serves as a great template for how to innovate in the field of Surgery.

 

This text was published in 2009, with contributing authors including Stefanos Zenios, Josh Makower, Paul Yock, Todd Brinton, Uday Kumar, Lyn Denend, and Thomas Krummel.  About 5 years ago, Dr. Krummel spoke near Center Valley, Pennsylvania, at the Lehigh Valley Health System.  It was there that I was exposed to just how process-oriented the Biodesign system had become.  In short, Dr. Krummel and his colleagues at Stanford demonstrated a nice pathway for how to evolve, rate, and create companies out of the talented surgical residents and interdisciplinary teams they helped form.  The Biodesign text captures a sketch of that system and makes it a readily available template for other centers across the country.

 

One of the most useful aspects of the text is its organization: it proceeds clearly from the brainstorming and conceptual phase down to licensing and beyond.  Among the more interesting facts you can glean from the text is that the Stanford system has this innovation pathway mapped to such a level that the staff maintains a computer database with which they attach ratings to different ideas and potential startup teams.

 

Why is this pathway so attractive?  There are at least two reasons:  (1) effective innovation can help more patients than we could ever help in our day-to-day work, and (2) innovation provides a revenue stream that is not based on patient volume.  Each of these medical devices has the potential to return multiples of investment.  Therefore, in years where patient care numbers are decreasing or have issues, these non-patient-volume-sensitive revenue streams are even more useful.

 

What I especially like about the process described in Biodesign is that it leverages human capital to a very full degree, and the text clearly conveys how the program achieves this end.  It highlights how this system takes motivated future and current physicians and plugs them into a process that gives them the ability to create their own medical devices and companies.  Again, they can leverage these companies to help more patients with amazing designs and devices than they could otherwise help in day-to-day practice seeing patients in the office.

 

Other highlights of the book include thoughts and quotes from known innovators.  Dr. Thomas J. Fogarty, of Fogarty catheter fame, is one of the Stanford physicians, and his comments throughout certain portions of the book are insightful and relevant.

 

The book also contains several directly useful tools, including sample non-disclosure agreements and other legal documents.  The FDA regulatory process, both for compassionate use and other pathways, is clearly explained, such that even a person inexperienced with the process can come to understand the various regulatory issues and challenges in bringing a device to FDA clearance.  Of the various textbooks on medical device innovation and biodesign, I cannot recommend this text strongly enough.  Its ease of use, clear organization, and practical tools make it an excellent primer for healthcare executives and clinicians who have an interest in the biodesign process and who have an excellent idea that they might want to bring to market.  If you have a moment, visit Amazon.com; you will find the textbook available for download as a Kindle book or for purchase as a hardback.

 

Note: the author has no financial association with the publisher of the textbook, the various surgeons and clinicians involved, or the university mentioned.

New 3D printing technology revealed at CES 2014 conference

Photo:  Letters made in about five minutes with the 3Doodler.

 

One of the regular features on our blog is 3D printing as a vehicle for surgical innovation.  This year’s Consumer Electronics Show (CES), in Las Vegas, highlighted multiple new 3D printing technologies.  Some of the more interesting ones include the 3Doodler, the new series of MakerBot 3D printers, and the new ChefJet 3D printer series.

 

This blog post focuses on potential applications of technology similar to the ChefJet in the field of surgical innovation.

 

Some of the at-home applications of 3D printing include PLA- or ABS-plastic-based creations.  Both PLA and ABS have interesting structural characteristics, and PLA has become the plastic of choice owing to its ease of home use.  However, new print media are already coming.  Just as 3D printing represents disruptive innovation for multiple existing technologies, the new 3D printers and possible new media are evolving this cutting-edge industry at an exciting pace.

 

For example, new hardware is already displacing certain elements of the market.  The 3Doodler, which recently arrived at my home (see the photo above), is a handheld pen which extrudes PLA filament so as to allow the user to create plastic objects by hand via additive manufacturing.  These by-hand prints do not usually approach the resolution of most non-freehand, “traditional” (amazing we can apply the word “traditional” to 3D printing at all) 3D printers.  However, they can serve as rough sketches for later, more definitive creations, are rapid, and are amazing in terms of what can be produced quickly.

 

In addition to the 3Doodler, other hardware is already evolving.  We described in an earlier blog post the importance of creating an entire ecosystem around your innovation; as mentioned there, MakerBot and other 3D printing companies have done this with 3D scanners as part of their systems.  MakerBot recently released a 3D scanner and presented an amazing new lineup of printers at CES.  These now feature heated build areas, which I can attest would make all the difference when I print objects in my cold office.  The current Replicator 2, an amazing device itself, would be that much better with one of these heated areas.

 

One of the interesting new technologies recently debuted at CES:  the ChefJet 3D printer, which prints in edible media including sugar, vanilla, chocolate, sour apple, sour cherry, and watermelon.  It is expected to begin shipping in late 2014, in both a professional grade and an at-home grade.  Wedding cakes and at-home recipes may never be the same.  At this point it is important to mention that the author of this blog post is a stockholder in 3D Systems (stock ticker DDD).  I have been a stockholder for about 2 years and did not know this technology was coming.  However, now that it is here, I couldn’t be more interested.

 

Now consider the logical extensions of these printers and new media:  surgical innovation with resorbable matrices and other resorbable scaffolds.  That is, we will be able to create, cheaply and effectively, collagen scaffolds for things like drug delivery, eventual tissue ingrowth, and other similar devices.  Clearly this will make intellectual property and device manufacturing both paramount and in flux over the next several years.  In fact, companies in Cambridge, UK, are already making bio-resorbable, implantable printing filament.  Importantly, these tissue scaffolds and similar ideas have already been explored:  Organovo, for example, has explained that a 3D printed liver will be available by the end of this year for pharmaceutical companies to test new drugs on…truly amazing.

 

This blog post highlights some of the interesting twists and turns of what can be described as the 3D printing revolution.  The companies involved are poised to truly disrupt, in a positive way, the manner in which we have done things.  In surgery, this is coming and, in fact, is already here.  The new generation of 3D printers, some of which debuted at CES 2014, will soon allow hospitals across the country to purchase the ability to make biodegradable implements.  Indeed, there are already English manufacturers making implantable filaments for 3D printers.  Check back on our blog often for updates on the 3D printing revolution and its impact on surgery in the coming months.

Gamification Applied To A Surgical Residency: Caught In The Act Of Doing Something Right

 


 

You may have heard the term “gamification” (pronounced game-ification) previously.  Gamification is the process of taking certain elements from the world of computer and board games and applying them toward motivational and customer retention strategies for different groups.  Game dynamics may also be applied to other important functions for different companies.  Importantly, gamification is a hot topic and is even being taught in some business schools.  It is currently thought that gamification will resonate with the millennial generation (“millennials”) and subsequent generations to a greater degree than with, for example, Generation X and the Baby Boom generation.

 

There are multiple important strategies in gamification that we could discuss in this blog post.  Here, we focus on several important game dynamics as they were applied to a general surgery residency in 2012.  Our group used game dynamics in our section of trauma, emergency surgery, and surgical critical care to assess their impact on resident motivation and perception of quality of learning.  We will discuss the dynamics we used and the outcomes.  Interestingly, we also utilized game dynamics for our team of surgical attendings, who agreed to participate in a similar strategy so as to demonstrate support for this new approach.

 

The setup included the creation, by the trauma and emergency surgeons, of a consensus set of behaviors the team wished to reinforce in residents.  Similarly, the resident staff created consensus behaviors they wished to see demonstrated by surgical attendings.  Many behaviors were already present to varying degrees; the consensus behaviors were not a list of all-new behaviors but rather ones each group wished to reinforce or make more common.  Each group assigned point values to the behaviors.  The point assignment was arbitrary and was contingent on several factors, including the relative scarcity of the event as well as its importance to our trauma and emergency surgery section as a whole.

 

We then set up an email address, and surgeons were able to email from their smartphones each time they caught a resident surgeon doing something correctly.  This is a very new concept in residency education:  “catching someone in the act” of doing something right.

 

Next, residents were each given a letter so as to anonymize them; each resident knew his or her letter only.  The letters were drawn as part of a leaderboard displayed in the trauma and emergency surgery conference room.  Therefore, at each morning report, residents could see their progress and point accumulation relative to that of their anonymized colleagues.  Certain threshold levels of points were set and displayed on the leaderboard; that is, there were certain point thresholds at which events took place.  Some of these events included obtaining a new skill, such as the ability to clear a cervical spine:  residents would be educated in cervical spine clearance and the appropriate template clearance note, and were then empowered to clear a cervical spine with the supervision of the trauma surgical attending.  Other events occurred at other levels of achievement, including a letter of support to the residency program director for the resident’s file.  Unbeknownst to the residents, the overall points leader at year’s end was given a special congratulations and a year-end gift at the residents’ awards dinner.  This was the only time a resident point total was revealed, and every resident except the overall points leader remained anonymous.

 

A survey was given to the residents prior to the institution of this motivational pathway.  This was a validated survey, the Job Satisfaction Survey (JSS), which uses a visual analogue scale to determine job satisfaction and has been validated among emergency department physicians.  All resident years participated in the system, and all residents received the survey before and after the year-long process.

 

A statistically significant improvement was noted in the proportion of residents who perceived the quality of their education to be excellent (two-tailed p < 0.01 by chi-squared test).  These data were reported at a trauma and emergency surgery conference in Atlantic City, NJ.
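
As a purely illustrative sketch of that kind of comparison (the counts below are hypothetical placeholders, not the study’s actual data, which are not reported here), a chi-squared test on before/after survey proportions might look like this in Python with SciPy:

```python
# A hedged sketch of a before/after comparison of proportions.
# The counts below are hypothetical placeholders, NOT the study's data.
from scipy.stats import chi2_contingency

#                 rated "excellent"   did not
pre_survey  = [          6,             24]   # hypothetical pre-intervention
post_survey = [         18,             12]   # hypothetical post-intervention

chi2, p_value, dof, expected = chi2_contingency([pre_survey, post_survey])
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```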

 

This is one nice case study of how gamification is possible in surgical residency.  The program leverages multiple dynamics, including comparison of each individual to a peer group and a focus on positive reinforcement of appropriate behavior.  Experientially, the trauma surgeons involved found this highly effective in improving and reinforcing positive resident behavior on the service.  Interestingly, some residents originally felt that gamification might belittle or somehow cheapen resident education; the terminology of ‘gamification’ was felt to decrease buy-in, as initial feedback from a minority of residents included the idea that turning their residency into a game was not appropriate.  However, once residents saw that this was merely an assessment tool that focused on positive behavior and reinforced it while maintaining anonymity, they became much more receptive, and by the end of the process these residents were reporting positive results.  Notably, there was no ability to remove points from any participant at any time; again, focus was placed on what the residents did appropriately according to the defined behaviors.

 

Experientially, our team learned from this that gamification is achievable in the inpatient medical education world.  We also learned that this innovative process was a true boon for the human resources portion of our section:  it changed the dynamic and interaction between the surgeons and the residents for the better.  Performance also seemed to greatly improve, and there was a constancy of expectation between the residents and the trauma and emergency surgeons.

 

As mentioned, the trauma surgeons also participated in the system.  They too were anonymized and had a leaderboard:  each trauma surgeon was given a letter, and each knew his or her letter only.  Point totals accumulated as emails were sent from the residents to a third party who was not one of the attending surgeons, and these were then reflected in the point totals on the leaderboard.  Many of these dynamics have names in the gamification world.  For example, the system described above incorporates some of the most basic game dynamics, called PBLs:  points, badges, and leaderboards.  Points are self-explanatory, as is the leaderboard; these leverage positive peer pressure and reinforcement to increase performance.  Badges are those things achieved to signify an improvement in level.  Although we did not give true badges to be put on the physician’s coat, stickers and other cues may be options for other programs.

 

We did use another dynamic, “leveling up”, to allow this gamification process to recognize good performance and increase point accumulation by allowing residents to obtain new skills independent of their year level.  Although senior residents had certain skills grandfathered in based on year level, such as the PGY-5’s ability to clear cervical spines with the supervision of an attending surgeon, younger residents were able to attain these and other skills by achieving a point total that demonstrated competency in the performance of similar tasks.  This makes for a competency-based method of advancement and attainment of new skills.

 

Surgeons and residents were greatly satisfied with this innovative system, and it is one you may apply in your own educational or motivational process.  Consider applying it to your team of surgeons, residents, or advanced practitioners.

 

Questions or comments?  As always we invite your thoughts.

Handy Tools For Startups & Investors

 

In this entry we focus on some of our favorite resources for potential startup founders and investors.  These are resources focused on better understanding the nature of startups in terms of mechanics, fundraising, valuation and other important keys. We will highlight several of our favorite resources on these important topics.

 

 

1 – Kauffman.org

 

One of the most useful resources for startups and potential founders is the Kauffman Foundation; visit kauffman.org.  Most education from the Kauffman Foundation is free and readily available online for download.  The Kauffman Foundation is an organization focused on key facts and further resources for startup entrepreneurs, and we found it particularly useful for a broad range of topics.  Interestingly, Kauffman Labs focuses on hands-on, laboratory-type interactions for potential startup founders and investors to better understand the mechanics of starting a new venture.

 

2 – Social Media

 

Social media is one of the more interesting resources for potential angel investors and startup founders.  In particular, Twitter has multiple users who tweet daily on important topics in investing, startups, and other useful areas.  There are also multiple CEOs who maintain Twitter accounts.  On Twitter, for example, you can find Y Combinator, Kickstarter, 500 Startups, Start-ups.co, and multiple other interesting resources, including entire VC firms like Sequoia.  Social media like Twitter offers a plethora of daily updates with free information, often with links to evidence or blog entries that discuss useful topics for startups.  Angel investment teams like Grizzly and Gibbon LLC can also be found on Twitter and, although following them is unlikely to result in a deal, you can keep up with these investment teams on that platform.

 

3 – The Founder’s Dilemmas

 

Another useful resource for startups is The Founder’s Dilemmas by Noam Wasserman.  This text highlights many of the issues involved with startups.  Old favorite subjects that we also cover on this blog include dynamic ownership equity, alignment, scaling, team composition, and whether to start up with family.  We can’t stress enough how useful this text is for potential founders:  it really functions as a roadmap and can spare you from having to learn lessons by brute force or by living through them yourself.  Take a look at The Founder’s Dilemmas if you have a moment.

 

4 – The Lean Startup

 

Another useful text for startups is The Lean Startup by Eric Ries.  This is one of the fundamental texts that develops and introduces the concept of applying lean methodology to business startups.  Other old favorites on this blog, such as the minimum viable product, the business model canvas, and a host of others, are introduced by The Lean Startup.

 

5 – Novoed.com

 

One of the other useful sources of knowledge for investors and startups is NovoEd.com, which runs classes such as Clint Korver’s Venture Capital 101.  VC101, which recently completed an online session conducted with a team from the Kauffman Foundation, gives insight into the multiple functions of VC, the mechanics, and the investment decisions involved with venture capital.  Clint gave excellent case studies from Ulu Ventures, the VC fund he co-manages.  NovoEd has an excellent team-based approach to learning the intricacies of VC.

 

6 – Coursera.org

 

Coursera.org is another online platform for acquiring the tools necessary to start up effectively.  Entrepreneurship 101 and other similar courses offer excellent opportunities to join a team that runs through startup mechanics online.  These courses, often put on by Stanford University professors, are free to join and make you part of a team that is often spread throughout the country or even the world.  The creation of a business model canvas is a focus, as are other topics such as optimal team size, creation of a low-fidelity prototype, and other useful decision-making strategies.

 

So, today we have reviewed several useful tools for better understanding startups, investment opportunities, and the nature of entrepreneurship.  We highly recommend the online learning platforms NovoEd and Coursera, as well as the selected texts The Founder’s Dilemmas and The Lean Startup.  We think these will equip you well so you can avoid having to relearn some of the lessons we learned the hard way.  Here’s wishing you higher quality investment deals and smoother, more effective startups; we feel the tools above will make it easier for you to achieve your goals in entrepreneurship.