It’s unusual for our team to re-post content from another source. We believe in, and enjoy, original content or (at the very least) an original take on well-known content.

This entry, however, is an exception to our rule because we found a talk by Eric Ries (author of The Lean Startup) that was part of a post by thecoderfactory.com.

Although Eric’s talk does not explicitly discuss ALL of the methods we use for startups (including the power of premium, unique positioning), he delivers the single most useful talk we’ve heard on how to start up, focused on decision-making, pivoting, and methods of “innovation accounting”.

Please enjoy, and use, the video below. Take it as an important piece of the story of how to start up. Complementary information, such as how to fundraise and the business model canvas, may be coupled with Eric’s excellent talk to round out many of the mechanics of starting up your unique business.

Questions, thoughts, or comments on Eric’s talk? Please leave your thoughts below.

So you are progressing through your quality improvement project, passing through the steps of DMAIC or something similar. You finally have some good, continuous data and you would like to analyze it.

You look at your data to determine whether they are normally distributed. You likely performed the Anderson-Darling test, as described here, or some similar test. Oh no! You have found that your data are non-normal. Now what? Below we discuss some of the treatments and options for non-normal data sets.

One of the frequent issues with quality improvement projects and data analysis is that people often assume their data are normally distributed when they are not. They then go on to use statistical tests which require normally distributed data. (Uh oh.) Conclusions ensue which may or may not be justified. After all, non-normal data sets do not allow us to utilize the familiar, comfortable statistical tests we employ routinely. For this reason, let’s talk about how to tell whether your data are normally distributed.

First, we review our continuous data as a histogram. Sometimes, the histogram may look like the normal distribution to our eyes and intuition. We call this the “eyeball test”. Unfortunately, the eyeball test is not always accurate. There is an explicit test, called the Anderson-Darling test, which asks whether our data deviate significantly from the normal distribution.
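As a quick sketch of the eyeball test (using simulated data and NumPy; the variable names are our own illustration), a text histogram is often enough to see whether the shape looks roughly bell-shaped:

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(loc=50, scale=5, size=300)  # simulated continuous data

# Quick text histogram for the "eyeball test" -- the bar lengths should
# look roughly bell-shaped if the data resemble the normal distribution.
counts, edges = np.histogram(sample, bins=10)
for count, left in zip(counts, edges):
    print(f"{left:6.1f} | {'#' * count}")
```

Remember that this is only a visual screen; the formal check comes from the Anderson-Darling test.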

Incidentally, the normal distribution does not mean that all is right with the world. Plenty of systems are known to display distributions other than the normal distribution–and they are meant to do so. Having the normal distribution does not mean everything is OK–it’s just that we routinely see the normal distribution in nature and so call it, well, normal. We will get to more on this later.

For now, you have reviewed your data with the eyeball test and you think they are normally distributed. Now what? We utilize the Anderson-Darling test to compare our data set to the normal distribution. If the p value associated with the Anderson-Darling test statistic is GREATER than 0.05, our data do NOT deviate significantly from a normal distribution. In other words, we can say that we have normally distributed data. For more information about the Anderson-Darling test and its application to your data, look here.
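Here is a minimal sketch of running the test in Python with SciPy on simulated data. One wrinkle worth flagging: SciPy’s `anderson` reports the test statistic alongside critical values at fixed significance levels (15%, 10%, 5%, 2.5%, 1%) rather than a single p value, so we compare the statistic to the 5% critical value instead of checking p > 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=50, scale=5, size=200)  # simulated cycle times

result = stats.anderson(sample, dist="norm")
crit_5pct = result.critical_values[2]  # significance_level[2] == 5.0

# Statistic below the critical value ~ p > 0.05: no significant
# deviation from the normal distribution.
if result.statistic < crit_5pct:
    print("No evidence of non-normality at the 5% level")
else:
    print("Data deviate significantly from the normal distribution")
```

Packages such as Minitab or SigmaXL report the equivalent result directly as a p value.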

So now we know whether our data are or are not normally distributed. Next, let’s pretend that our Anderson-Darling test gave us a p value of less than 0.05 and we were forced to say that our data are not normally distributed. There are plenty of systems in which data are not normally distributed. Some of these include time until metal fatigue / failure and other similar systems. Time until failure, for example, classically displays a Weibull distribution. This is just one of the many other named distributions we find in addition to the normal (aka Gaussian) distribution. Simply because a system does not follow the normal distribution does not mean the system is wrong or somehow irrevocably broken.

Many systems, however, should follow the normal distribution. When they do not follow it and are highly skewed in some manner, the system may be very broken. If the normal distribution is not followed and there is not some other clear distribution, we may say that there is a big problem with one of the six causes of special variation as described here. When data are normally distributed we routinely say the system is displaying common cause variation, and all of the causes for variation are in balance and contributing expected amounts. Next, let’s talk about where to go from here.

When we have a non-normal data set, one option is to perform distribution fitting. This asks the question: “If we don’t have the normal distribution, which distribution do we have?” Here we ask Minitab, SigmaXL, or a similar program to fit our data against known distributions and to tell us whether our distribution deviates from each of them. Eventually, we may find that one particular distribution fits our data well. This is good: we now know the expected type of system for our data.

If we have non-normal data and have fit a distribution to them, the question then becomes what we can do as far as statistical testing goes. How can we say whether we made an improvement after intervening in the system? One option is to use statistical tests which are not contingent on having normally distributed data. These nonparametric tests are less frequently used and include Mood’s median test, Levene’s test, and the Kruskal-Wallis test (or KW, because that one’s not easy to say). I have a list of tools and statistical tests used for both normal and non-normal data sets at the bottom of the blog entry here.
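A rough sketch of both steps in Python, using hypothetical skewed process times (modeled here as lognormal; the scenario and variable names are our own illustration). We use a Kolmogorov-Smirnov check as a stand-in for the distribution fitting menus in Minitab or SigmaXL, then compare groups with the Kruskal-Wallis test, which does not require normality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical door-to-doctor times (minutes), skewed rather than normal
before = rng.lognormal(mean=3.4, sigma=0.5, size=80)
after = rng.lognormal(mean=3.1, sigma=0.5, size=80)

# Distribution fitting: estimate lognormal parameters, then check the
# fit with a Kolmogorov-Smirnov goodness-of-fit test.
shape, loc, scale = stats.lognorm.fit(before)
ks_stat, ks_p = stats.kstest(before, "lognorm", args=(shape, loc, scale))
print(f"Lognormal fit: KS p = {ks_p:.3f}")

# Nonparametric before-vs-after comparison, no normality assumption
h_stat, kw_p = stats.kruskal(before, after)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {kw_p:.4f}")
```

A small Kruskal-Wallis p value would indicate that the groups’ distributions (in practice, their medians) differ.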

So, to conclude this portion: one option for working with non-normal data sets is to perform distribution fitting and then utilize statistical tests which do not rely on the assumption of normality.

The next option when you are faced with a non-normal data set is to transform the data so that they become normally distributed. For example, pretend that you are measuring time for some process in your hospital. Let’s say you have used the Anderson-Darling test and discovered that time is not normally distributed in your system. As mentioned, you could perform distribution fitting and use non-normal data tools. Another option is to transform the data so that they become normal.

Transforming does not mean that you have faked, or doctored, the data. It means that you raise the variable, here time, to some power. This can be any power, including 1/2, 2, 3, and every number in between and beyond; it can also be a negative power such as -2. A software package like Minitab or SigmaXL will test a range of powers to which your data could be raised. These candidate powers are called lambda values. The software finds the lambda value at which your data become normally distributed according to the Anderson-Darling test. Let’s pretend in this situation that time^2 is normally distributed according to the Anderson-Darling test.
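The lambda search described above is what the Box-Cox transform automates. A minimal sketch with SciPy, on simulated skewed times (our own illustration; note that lambda = 0 corresponds to a log transform rather than a literal power of zero):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = rng.lognormal(mean=3.0, sigma=0.6, size=150)  # skewed process times

# boxcox searches across lambda values for the power that best
# normalizes the data, and returns the transformed data plus that lambda.
transformed, lam = stats.boxcox(times)
print(f"Estimated lambda: {lam:.2f}")

# Re-check normality of the transformed data with Anderson-Darling
result = stats.anderson(transformed, dist="norm")
print(f"A-D statistic after transform: {result.statistic:.3f}")
```

For lognormal-ish data like these, the estimated lambda lands near 0 (the log transform), which is exactly the kind of answer Minitab or SigmaXL would report.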

This brings up a philosophical question. We can easily feel what it means to manage the variable time. What, however, does it mean to manage time raised to the second power? These are questions that Six Sigma practitioners and clinical staff may ask, and, again, they are more philosophical in nature. Next, we use our transformed data set. Remember that if we transformed the data before we intervened in the system, we must transform the data collected after we intervene to the same power. This allows us to compare apples to apples. We can then utilize the routine, familiar statistical tests on the transformed data set: t tests, ANOVA, and the other tests we typically use to analyze the central tendency and dispersion / variance of the data.
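A sketch of that apples-to-apples comparison, again on hypothetical before/after process times (our own simulated example). SciPy’s `boxcox` accepts a fixed `lmbda` argument, so the lambda estimated from the baseline data can be applied unchanged to the post-intervention data before running a routine t test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
before = rng.lognormal(mean=3.2, sigma=0.5, size=100)
after = rng.lognormal(mean=3.0, sigma=0.5, size=100)  # hypothetical improvement

# Estimate lambda on the baseline data, then apply that SAME lambda
# to the post-intervention data so the comparison is apples to apples.
before_t, lam = stats.boxcox(before)
after_t = stats.boxcox(after, lmbda=lam)

# Routine test on the transformed (now roughly normal) data
t_stat, p = stats.ttest_ind(before_t, after_t)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```

A small p value here would support the claim that the intervention shifted the (transformed) mean time.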

This, then, is the second option for dealing with data that are not normally distributed: transform the data set and utilize our routine tests. For examples of tests that require normally distributed data, see the tool time Excel spreadsheet from Villanova University at the bottom of our blog entry as mentioned above.

In conclusion, working with non-normal data sets can be challenging. We have presented the two classic options for dealing with data that are not normally distributed: distribution fitting followed by statistical tests that are not contingent on normality, and transforming the data with a power transform (such as the Box-Cox transform) and then using our routine tools that require normally distributed data.

Questions, comments, or thoughts about utilizing non-normal data in your quality improvement project? Leave us your thoughts below.