By: DMKashmer MD MBA MBB FACS (@DavidKashmer)
Have you ever wondered how a measurement system affects your conclusions? We've mentioned several ways that the type of data you choose shapes your quality improvement project. In this entry, let's talk more about how your setup for measuring a certain quality endpoint determines, in part, what you find…and, perhaps more importantly, how you respond.
The Type Of Data You Collect Affects What You Can Learn
Remember, previously, we discussed discrete versus continuous data. Discrete data, we mentioned, is data that is categorical, such as yes/no, go/stop, black/white, or red/yellow/green. This type of data has some advantages including that it can be rapid to collect. However, we also described that discrete data comes with several drawbacks.
First, discrete data often requires a much larger sample size to demonstrate significant change. Recall the simplified sample size equation for discrete data:

n = (p)(1 - p)(2 / delta)^2

where p = the probability of some event, and delta is the smallest change you would like to be able to detect.
So, let's pretend we wanted to detect a 10% (or greater) improvement in some feature of our program, which is currently performing at a rate of 40% for such-and-such. With p = 0.40 and (1 - p) = 0.60, we would need a sample size of (0.40)(0.60)(2/0.10)^2, or 96 samples.
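As a quick check, the arithmetic above can be reproduced in a few lines of Python. The formula and the numbers are from the text; the function name is just for illustration:

```python
import math

def discrete_sample_size(p, delta):
    """Simplified sample size for discrete (proportion) data:
    n = p * (1 - p) * (2 / delta)^2
    """
    return math.ceil(p * (1 - p) * (2 / delta) ** 2)

# Baseline performance p = 40%, smallest detectable change delta = 10%
print(discrete_sample_size(0.40, 0.10))  # 96, matching the worked example
```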
Continuous Data Require A Smaller Sample Size
Continuous data, by contrast, requires a much smaller sample size to show meaningful change. Look at the simplified continuous data sample size equation here:
n = (2 [standard deviation] / delta)^2
This is an important distinction between discrete and continuous data and, in part, can play a large role in what conclusions we draw from our quality improvement project. Let’s investigate with an example.
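For comparison, here is the same kind of calculation for continuous data. Note that the standard deviation of 15 "percent complete" points and the 10-point change are illustrative assumptions, not figures from the text:

```python
import math

def continuous_sample_size(sigma, delta):
    """Simplified sample size for continuous data: n = (2 * sigma / delta)^2."""
    return math.ceil((2 * sigma / delta) ** 2)

# Assumed: standard deviation of 15 percentage points,
# and we want to detect a 10-point shift
print(continuous_sample_size(15, 10))  # 9, far fewer than the 96 for discrete data
```

With those assumed values, nine samples per phase would suffice, versus 96 with the discrete endpoint.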
A Cautionary Fairy Tale
Once upon a time there was a Department of Surgery that wanted to improve its usage of a surgical checklist. The team believed this would help keep patients safe in their surgical system. The team decided to use discrete data.
If a checklist was missing any element at all (and there were many elements), it was called "not adequate". If it was complete from head to toe, 100%, it counted as "adequate". The team collected data on its current performance and found that only 40% of checklists were adequate. The team's goal was 100%.
Using the discrete data formula, the team set up a sample that (at best) would allow them to detect only changes of 10% or larger. That was going to require a sample size of 96 per the simplified discrete data formula above.
The team made interesting changes to their system. For example, they made changes so that the surgeon would need to be present on check-in for the patient, and they made other changes to patient flow that they felt would result in improved checklist compliance.
Weeks later, the team re-collected its data to discover how much things had improved. From direct observation, they had seen many more checklists being used, noticeably more participation, and much more of each checklist being completed every time. The team expected the numbers to show significant improvement and was excited to run them. Unfortunately, when the team put their numbers through statistical testing, there was no significant improvement in checklist utilization. Why was that?
This happened because the team had used discrete data. Anything other than complete checklist utilization fell into the "not adequate" bin and so counted against them. Even if checklists were much more complete than they had ever been (and that seemed to be so), anything less than perfection still counted against the percentage of complete ("adequate") checklists. Because they used discrete data in that way, they were unable to demonstrate significant improvement with their numbers. They were disappointed, even though they had actually made great strides.
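To see the problem numerically, here is a sketch of a two-proportion z-test. The 96-per-group sample size is from the text, but the before/after "adequate" counts (38 and 46) are invented for illustration:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test using the pooled estimate of p."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical: 38/96 "adequate" before, 46/96 after. A real shift,
# but every partially complete checklist still counts as a failure.
z = two_proportion_z(38, 96, 46, 96)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p_value, 2))  # z about 1.16, p about 0.24: not significant
```

Even a real jump from roughly 40% to 48% "adequate" fails to reach significance at these sample sizes, which is exactly the trap the team fell into.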
What options did the team have? They could have developed a continuous data endpoint for checklist completion, such as the percentage of checklist items completed per case. That endpoint would have required a smaller sample size and might have demonstrated meaningful improvement more easily.
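As a sketch of what that continuous endpoint might look like, here is a Welch t-test on invented "percent of checklist items completed" scores, nine cases per phase. Every number below is hypothetical:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(b) - mean(a)) / math.sqrt(va + vb)

# Hypothetical percent-complete scores, before and after the changes
before = [55, 60, 65, 70, 70, 75, 75, 80, 85]   # mean about 70.6%
after  = [80, 85, 85, 90, 90, 90, 95, 95, 100]  # mean 90.0%

t = welch_t(before, after)
print(round(t, 1))  # about 5.2, well past a rough critical value of about 2.1
```

On the continuous scale, the same kind of improvement (checklists going from mostly complete to nearly complete) registers clearly, even though no checklist is yet perfect.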
A Take-Home Message
So remember: discrete data can limit your ability to demonstrate meaningful change in several important ways. Continuous data, by contrast, can allow teams like the checklist team above to demonstrate significant improvement even if checklists are still not quite 100% complete. For your next quality improvement project, choose carefully between discrete and continuous data endpoints, and recognize how that choice can greatly affect your ability to draw meaningful conclusions as well as your chance of celebrating meaningful change.